CN110868602A - Video encoder, video decoder and corresponding methods


Info

Publication number
CN110868602A
Authority
CN
China
Prior art keywords
block
motion vector
control point
current
candidate motion
Prior art date
Legal status
Granted
Application number
CN201810992362.1A
Other languages
Chinese (zh)
Other versions
CN110868602B (en)
Inventor
陈焕浜
杨海涛
陈建乐
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810992362.1A
Priority to PCT/CN2019/079955 (WO2020042604A1)
Publication of CN110868602A
Application granted
Publication of CN110868602B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/169: adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/50: using predictive coding
    • H04N19/503: predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by encoding by predictive encoding
    • H04N19/567: Motion estimation based on rate distortion criteria


Abstract

The embodiment of the invention provides a video encoder, a video decoder and corresponding methods, wherein one of the methods comprises the following steps: parsing the code stream to obtain an index, wherein the index is used for indicating a target candidate motion vector group of the current decoding block; determining the target candidate motion vector group from a candidate motion vector list according to the index, wherein if a first adjacent affine decoding block is a four-parameter affine decoding block and the first adjacent affine decoding block is located in the coding tree unit (CTU) above the current decoding block, the candidate motion vector list comprises a first group of candidate motion vector predicted values, and the first group of candidate motion vector predicted values is obtained based on a lower left control point and a lower right control point of the first adjacent affine decoding block; and obtaining a pixel predicted value of the current decoding block based on the target candidate motion vector group. By adopting the embodiment of the invention, memory reading can be reduced, thereby improving coding and decoding performance.

Description

Video encoder, video decoder and corresponding methods
Technical Field
The present application relates to the field of video encoding and decoding technologies, and in particular, to a method and an apparatus for inter-frame prediction of a video image, and a corresponding encoder and decoder.
Background
Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 part 10 Advanced Video Coding (AVC), the video coding standard H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
Video compression techniques perform spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (i.e., a video frame or a portion of a video frame) may be partitioned into image blocks, which may also be referred to as treeblocks, Coding Units (CUs), and/or coding nodes. An image block in an intra-coded (I) slice of a picture is encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. An image block in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame, and a reference picture may be referred to as a reference frame.
Various video coding standards, including the High Efficiency Video Coding (HEVC) standard, propose predictive coding modes for image blocks, i.e., predicting a block currently to be coded based on already coded blocks of video data. In intra prediction mode, the current block is predicted based on one or more previously decoded neighboring blocks in the same picture as the current block; in inter prediction mode, the current block is predicted based on already decoded blocks in a different picture.
Motion vector prediction is a key technique that affects encoding/decoding performance. In the existing motion vector prediction process, a motion vector prediction method based on a translational motion model is adopted for translational objects in a picture, while for non-translational objects there are motion vector prediction methods based on a motion model and motion vector prediction methods based on combinations of control points. Among them, the motion-model-based motion vector prediction method reads more memory, resulting in slower encoding/decoding. How to reduce the amount of memory reading in the motion vector prediction process is a technical problem being studied by those skilled in the art.
Disclosure of Invention
The embodiments of the present application provide an inter-frame prediction method and apparatus for a video image, and a corresponding encoder and decoder, which can reduce the amount of memory reading to a certain extent, thereby improving coding performance.
In a first aspect, an embodiment of the present application discloses an encoding method, including: determining a target candidate motion vector group from a candidate motion vector list (for example, an affine transformation candidate motion vector list) according to a rate-distortion cost criterion, the target candidate motion vector group representing motion vector predictors of a group of control points of a current encoding block (which may specifically be a current affine encoding block), wherein, if a first adjacent affine encoding block is a four-parameter affine encoding block and the first adjacent affine encoding block is located in a coding tree unit (CTU) above the current encoding block, the candidate motion vector list comprises a first group of candidate motion vector predictors obtained based on a lower left control point and a lower right control point of the first adjacent affine encoding block. Optionally, the candidate motion vector list may be constructed as follows: determining one or more adjacent affine coding blocks of the current coding block in the order of adjacent block A, adjacent block B, adjacent block C, adjacent block D and adjacent block E (as in fig. 7A), wherein the one or more adjacent affine coding blocks comprise the first adjacent affine coding block; then, if the first adjacent affine coding block is a four-parameter affine coding block, obtaining motion vector predictors of a first group of control points of the current coding block by using a first affine model based on the lower left control point and the lower right control point of the first adjacent affine coding block, wherein the motion vector predictors of the first group of control points of the current coding block are used as a first group of candidate motion vectors of the candidate motion vector list. After the target candidate motion vector group is determined in the above manner, an index corresponding to the target candidate motion vector group is coded into a code stream to be transmitted (optionally, when the length of the candidate motion vector list is 1, no index is needed to indicate the target candidate motion vector group).
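As a non-authoritative illustration of the list construction described above, the following Python sketch traverses the neighboring blocks in the A, B, C, D, E order and, for a four-parameter affine neighbor lying in the CTU above the current block, derives the candidate group from that neighbor's bottom control points. The dictionary keys and the two derivation callbacks are assumptions for illustration only, not part of the patent text.

```python
# Minimal sketch (not the normative algorithm) of the candidate motion vector
# list construction; the derivation callbacks are placeholders.

def build_affine_candidate_list(neighbors, derive_from_bottom_cps, derive_conventional):
    """neighbors: iterable in A, B, C, D, E order (Fig. 7A); each element is a
    dict with boolean keys 'affine', 'four_param', 'in_ctu_above' plus whatever
    the derivation callbacks need."""
    candidates = []
    for nb in neighbors:
        if nb is None or not nb['affine']:
            continue
        if nb['four_param'] and nb['in_ctu_above']:
            # Four-parameter affine neighbor in the CTU above the current block:
            # derive the predictor group from its bottom-left / bottom-right
            # control points (the scheme of this application).
            group = derive_from_bottom_cps(nb)
        else:
            # Otherwise use a conventional derivation (e.g. from the top-left,
            # top-right and, if needed, bottom-left control points).
            group = derive_conventional(nb)
        if group is not None and group not in candidates:
            candidates.append(group)
    return candidates
```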
In the above method, there may be only one candidate motion vector group in the candidate motion vector list, or there may be multiple candidate motion vector groups, where each candidate motion vector group may be a motion vector doublet or a motion vector triplet. When there are multiple candidate motion vector groups, the first group of candidate motion vector predictors is one of the multiple candidate motion vector groups, and the other candidate motion vector groups may be generated according to the same principle as the first group of candidate motion vector predictors or according to a different principle. Further, the target candidate motion vector group is an optimal candidate motion vector group selected from the candidate motion vector list according to the rate-distortion cost criterion; if the first group of candidate motion vector predictors is optimal, the selected target candidate motion vector group is the first group of candidate motion vector predictors, and if not, the selected target candidate motion vector group is not the first group of candidate motion vector predictors. The first adjacent affine coding block is a four-parameter affine coding block among the neighboring blocks of the current coding block; which one is not limited herein, and it may be neighboring block A, neighboring block B, or another neighboring block, as shown in fig. 7A. In addition, the terms "first", "second", "third", and the like appearing elsewhere in the embodiments of the present application each denote one of several objects, where the "first" object, the "second" object and the "third" object refer to different objects; for example, if a first group of control points and a second group of control points appear, the first group of control points and the second group of control points refer to different control points. Moreover, "first", "second", and the like in the embodiments of the present application have no ordinal meaning.
It can be understood that, when the coding tree unit CTU where the first adjacent affine coding block is located is above the current coding block, the information of the lowest control points of the first adjacent affine coding block has already been read from the memory. Therefore, in the above solution, the candidate motion vectors are constructed from a first group of control points of the first adjacent affine coding block that comprises the lower left control point and the lower right control point of the first adjacent affine coding block, instead of fixing the upper left control point, the upper right control point and the lower left control point of the first adjacent coding block as the first group of control points (or fixing the upper left control point and the upper right control point of the first adjacent coding block as the first group of control points) as in the prior art. Therefore, with this way of determining the first group of control points, the information (such as position coordinates, motion vectors and the like) of the first group of control points can, with high probability, directly reuse the information already read from the memory, so that memory reading is reduced and coding performance is improved. In addition, since the first adjacent affine coding block is specifically restricted to be a four-parameter affine coding block, when the candidate motion vectors are constructed according to the group of control points of the first adjacent affine coding block, only the lower left control point and the lower right control point of the first adjacent affine coding block are needed and no additional control point is needed, so that it is further ensured that memory reading does not become too high.
In one possible implementation, if the current encoding block is a four-parameter affine encoding block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point and an upper right control point of the current encoding block, e.g., by substituting position coordinates of the upper left control point and the upper right control point of the current encoding block into a first affine model determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first adjacent affine encoding block, thereby obtaining motion vector predictors for the upper left control point and the upper right control point of the current encoding block.
And if the current coding block is a six-parameter affine coding block, the first group of candidate motion vector predictors is used to represent motion vector predictors of the upper left control point, the upper right control point and the lower left control point of the current coding block. For example, the position coordinates of the upper left control point, the upper right control point and the lower left control point of the current coding block are substituted into the first affine model, so as to obtain the motion vector predictors of the upper left control point, the upper right control point and the lower left control point of the current coding block.
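To make the derivation in the two preceding paragraphs concrete, the following is a minimal floating-point sketch (ignoring the fixed-point sub-pel precision and rounding a real codec would use) of a "first affine model" built from the bottom-left and bottom-right control points (x6, y6)/(x7, y7) of the neighboring block and then evaluated at the control-point positions of the current block; the variable names are illustrative.

```python
# Sketch of the 4-parameter ("first") affine model determined by the bottom-left
# control point (x6, y6) with MV mv6 and the bottom-right control point (x7, y7)
# with MV mv7 of the neighbouring affine block (y6 == y7). Illustrative only.

def four_param_affine(mv6, p6, mv7, p7):
    (x6, y6), (x7, _) = p6, p7
    w = x7 - x6                      # equals cuW of the neighbouring block
    a = (mv7[0] - mv6[0]) / w        # horizontal gradient of vx
    b = (mv7[1] - mv6[1]) / w        # horizontal gradient of vy

    def mv(x, y):
        return (mv6[0] + a * (x - x6) - b * (y - y6),
                mv6[1] + b * (x - x6) + a * (y - y6))
    return mv

# Usage (control-point coordinates of the current block are assumed known):
# model = four_param_affine(mv_bl, (x6, y6), mv_br, (x7, y7))
# mvp_top_left    = model(x0, y0)
# mvp_top_right   = model(x1, y1)
# mvp_bottom_left = model(x2, y2)   # only needed for the 6-parameter case
```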
In another possible implementation manner, in the advanced motion vector prediction (AMVP) mode, the method further includes: using the target candidate motion vector group as a search starting point, searching within a preset search range, according to the rate-distortion cost criterion, for the motion vectors of a group of control points with the lowest cost; then determining motion vector differences (MVDs) between the motion vectors of the group of control points and the target candidate motion vector group. For example, if the group of control points comprises a first control point and a second control point, it is necessary to determine the MVD between the motion vector of the first control point and the motion vector predictor of the first control point represented by the target candidate motion vector group, and the MVD between the motion vector of the second control point and the motion vector predictor of the second control point represented by the target candidate motion vector group. In this case, encoding the index corresponding to the target candidate motion vector group into the code stream to be transmitted may specifically include: coding the MVDs and the index corresponding to the target candidate motion vector group into the code stream to be transmitted.
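Purely as an illustration of the MVD computation just described (not the normative syntax), a sketch assuming each motion vector is an (x, y) pair:

```python
# Per-control-point MVDs in AMVP mode: difference between the searched
# control-point MVs and the selected predictor group (illustrative only).

def control_point_mvds(searched_mvs, target_candidate_group):
    return [(mv[0] - mvp[0], mv[1] - mvp[1])
            for mv, mvp in zip(searched_mvs, target_candidate_group)]
```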
In another optional scheme, in the merge mode, the encoding the index corresponding to the target candidate motion vector group into the code stream to be transmitted may specifically include: and coding the indexes corresponding to the target candidate motion vector group, the reference frame index and the prediction direction into a code stream to be transmitted.
In one possible implementation, the position coordinates (x6, y6) of the lower left control point of the first adjacent affine coding block and the position coordinates (x7, y7) of the lower right control point are both calculated from the position coordinates (x4, y4) of the upper left control point of the first adjacent affine coding block: the position coordinates (x6, y6) of the lower left control point of the first adjacent affine coding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine coding block are (x4 + cuW, y4 + cuH), where cuW is the width of the first adjacent affine coding block and cuH is the height of the first adjacent affine coding block. In addition, the motion vector of the lower left control point of the first adjacent affine coding block is the motion vector of the lower left sub-block of the first adjacent affine coding block, and the motion vector of the lower right control point of the first adjacent affine coding block is the motion vector of the lower right sub-block of the first adjacent affine coding block. It can be seen that the position coordinates of the lower left control point and of the lower right control point of the first adjacent affine coding block are obtained by derivation rather than by reading from the memory, so the method can further reduce memory reading and improve coding performance. Alternatively, the position coordinates of the lower left control point and the lower right control point may be stored in the memory in advance and read from the memory when they are to be used later.
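A small sketch of that coordinate derivation, under the assumption that the neighboring block's stored per-sub-block motion field is available as a 2D list; the data layout is an assumption for illustration.

```python
# Derive the bottom control points of the neighbouring affine block from its
# top-left coordinate (x4, y4) and its size, instead of reading extra control
# point positions from memory. Illustrative sketch only.

def bottom_ctrl_points(x4, y4, cuW, cuH, subblock_mv_field):
    """subblock_mv_field: 2D list [row][col] of (vx, vy) stored for the
    neighbouring block's sub-blocks."""
    p6 = (x4, y4 + cuH)              # bottom-left control point position
    p7 = (x4 + cuW, y4 + cuH)        # bottom-right control point position
    mv6 = subblock_mv_field[-1][0]   # MV of the bottom-left sub-block
    mv7 = subblock_mv_field[-1][-1]  # MV of the bottom-right sub-block
    return (p6, mv6), (p7, mv7)
```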
In yet another alternative, after determining the target candidate motion vector group from the candidate motion vector list according to the rate-distortion cost criterion, the method further includes: obtaining motion vectors of one or more sub-blocks of the current coding block based on the target candidate motion vector group; and predicting a pixel predicted value of the current coding block based on the motion vectors of the one or more sub-blocks of the current coding block. Optionally, when the motion vectors of the one or more sub-blocks of the current coding block are obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU where the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is calculated according to the target candidate motion vector group and the position coordinates (0, H) of the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is calculated according to the target candidate motion vector group and the position coordinates (W, H) of the lower right corner of the current coding block. For example, an affine model is constructed according to the target candidate motion vector group; then the position coordinates (0, H) of the lower left corner of the current coding block are substituted into the affine model to obtain the motion vector of the sub-block at the lower left corner of the current coding block (instead of substituting the center point coordinates of the sub-block at the lower left corner into the affine model), and the position coordinates (W, H) of the lower right corner of the current coding block are substituted into the affine model to obtain the motion vector of the sub-block at the lower right corner of the current coding block (instead of substituting the center point coordinates of the sub-block at the lower right corner into the affine model). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current coding block are used later (e.g., when a subsequent block constructs its candidate motion vector list based on the motion vectors of the lower left control point and the lower right control point of the current block), exact values rather than estimated values are used; here, W is the width of the current coding block and H is the height of the current coding block.
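The following sketch, in block-relative coordinates and again purely illustrative, shows sub-block MV derivation with the special handling described above when the lower boundary of the block coincides with the CTU lower boundary (a 4x4 sub-block size is assumed):

```python
# Sub-block MV derivation from an affine model mv(x, y) in block-relative
# coordinates. Interior sub-blocks use their centre; if the block's bottom
# boundary lies on the CTU bottom boundary, the corner sub-blocks store the
# exact corner MVs at (0, H) and (W, H). Illustrative sketch only.

def subblock_mvs(model, W, H, sub=4, bottom_on_ctu_boundary=False):
    mvs = {}
    for y in range(0, H, sub):
        for x in range(0, W, sub):
            mvs[(x, y)] = model(x + sub / 2, y + sub / 2)     # sub-block centre
    if bottom_on_ctu_boundary:
        mvs[(0, H - sub)] = model(0, H)                       # exact bottom-left corner MV
        mvs[(W - sub, H - sub)] = model(W, H)                 # exact bottom-right corner MV
    return mvs
```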
In a second aspect, embodiments of the present application provide a video encoder comprising several functional units for implementing any one of the methods of the first aspect. For example, a video encoder may include:
an inter-frame prediction unit for determining a target candidate motion vector group from the candidate motion vector list according to a rate-distortion cost criterion; the target candidate motion vector group represents a motion vector predicted value of a group of control points of the current coding block;
an entropy coding unit for encoding an index corresponding to the target candidate motion vector group into a code stream and transmitting the code stream;
wherein, if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in the coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first group of candidate motion vector predictors, the first group of candidate motion vector predictors being derived based on a lower left control point and a lower right control point of the first neighboring affine coding block.
In a third aspect, an embodiment of the present application provides an apparatus for encoding video data, the apparatus including:
the memory is used for storing video data in a code stream form;
a video encoder for determining a target candidate motion vector group from a candidate motion vector list according to a rate-distortion cost criterion, the target candidate motion vector group representing motion vector predictors of a group of control points of the current coding block; and for encoding an index corresponding to the target candidate motion vector group into a code stream and transmitting the code stream; wherein, if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in the coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first group of candidate motion vector predictors, the first group of candidate motion vector predictors being derived based on a lower left control point and a lower right control point of the first neighboring affine coding block.
In a fourth aspect, an embodiment of the present application provides an encoding apparatus, including: a non-volatile memory and a processor coupled to each other, the processor calling program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium storing program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which when run on a computer causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
It should be understood that the second to sixth aspects of the present application are consistent with the technical solutions of the first aspect of the present application, and the beneficial effects obtained by the aspects and the corresponding possible embodiments are similar, and are not described again.
In a seventh aspect, an embodiment of the present application discloses a decoding method, including: parsing the code stream to obtain an index, wherein the index is used for indicating a target candidate motion vector group of a current decoding block (which may specifically be a current affine decoding block); then determining, according to the index, the target candidate motion vector group from a candidate motion vector list (for example, an affine transformation candidate motion vector list) (optionally, when the length of the candidate motion vector list is 1, the target candidate motion vector group can be determined directly without parsing the code stream to obtain the index), where the target candidate motion vector group represents motion vector predictors of a group of control points of the current decoding block; wherein, if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in the coding tree unit (CTU) above the current decoding block, the candidate motion vector list comprises a first group of candidate motion vector predictors derived based on a lower left control point and a lower right control point of the first neighboring affine decoding block. Optionally, the candidate motion vector list may be constructed as follows: determining one or more neighboring affine decoding blocks of the current decoding block in the order of neighboring block A, neighboring block B, neighboring block C, neighboring block D and neighboring block E (as in fig. 7A), the one or more neighboring affine decoding blocks including the first neighboring affine decoding block; then, if the first neighboring affine decoding block is a four-parameter affine decoding block, obtaining motion vector predictors of a first group of control points of the current decoding block by using a first affine model based on the lower left control point and the lower right control point of the first neighboring affine decoding block, wherein the motion vector predictors of the first group of control points of the current decoding block are used as a first group of candidate motion vectors of the candidate motion vector list. After the target candidate motion vector group is determined in the above manner, a pixel predicted value of the current decoding block is obtained based on the target candidate motion vector group.
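A coarse, non-normative sketch of this decoding flow follows; every callable passed in (the index parser, the candidate-list builder from the earlier sketch, the sub-block MV derivation and the motion compensation stage) is an assumed placeholder, not an API defined by the patent.

```python
# High-level decoder-side flow for an affine-predicted block (illustrative).

def decode_affine_block(parse_index, build_candidate_list, derive_subblock_mvs,
                        motion_compensate, bitstream, cur_block, neighbors):
    """All callables are placeholders for the corresponding decoder stages."""
    cand_list = build_candidate_list(cur_block, neighbors)
    # When the list length is 1, no index needs to be parsed from the stream.
    idx = 0 if len(cand_list) == 1 else parse_index(bitstream)
    target_group = cand_list[idx]        # MVPs of the current block's control points
    sub_mvs = derive_subblock_mvs(target_group, cur_block)
    return motion_compensate(cur_block, sub_mvs)   # pixel prediction of the block
```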
In the above method, there may be only one candidate motion vector group in the candidate motion vector list, or there may be multiple candidate motion vector groups, where each candidate motion vector group may be a motion vector doublet or a motion vector triplet. When there are multiple candidate motion vector groups, the first group of candidate motion vector predictors is one of the multiple candidate motion vector groups, and the other candidate motion vector groups may be generated according to the same principle as the first group of candidate motion vector predictors or according to a different principle. Further, the target candidate motion vector group is an optimal candidate motion vector group selected from the candidate motion vector list according to a rate-distortion cost criterion; if the first group of candidate motion vector predictors is optimal, the selected target candidate motion vector group is the first group of candidate motion vector predictors, and if not, the selected target candidate motion vector group is not the first group of candidate motion vector predictors. The first neighboring affine decoding block is a four-parameter affine decoding block among the neighboring blocks of the current decoding block; which one is not limited herein, and it may be neighboring block A, neighboring block B, or another neighboring block, as shown in fig. 7A. In addition, the terms "first", "second", "third", and the like appearing elsewhere in the embodiments of the present application each denote one of several objects, where the "first" object, the "second" object and the "third" object refer to different objects; for example, if a first group of control points and a second group of control points appear, the first group of control points and the second group of control points refer to different control points. Moreover, "first", "second", and the like in the embodiments of the present application have no ordinal meaning.
It can be understood that, when the coding tree unit CTU where the first neighboring affine decoding block is located is above the current decoding block, the information of the lowest control points of the first neighboring affine decoding block has already been read from the memory. Therefore, in the above solution, the candidate motion vectors are constructed from a first group of control points of the first neighboring affine decoding block that comprises the lower left control point and the lower right control point of the first neighboring affine decoding block, instead of fixing the upper left control point, the upper right control point and the lower left control point of the first neighboring decoding block as the first group of control points (or fixing the upper left control point and the upper right control point of the first neighboring decoding block as the first group of control points) as in the prior art. Therefore, with this way of determining the first group of control points, the information (such as position coordinates, motion vectors and the like) of the first group of control points can, with high probability, directly reuse the information already read from the memory, so that memory reading is reduced and decoding performance is improved. In addition, since the first neighboring affine decoding block is specifically restricted to be a four-parameter affine decoding block, when the candidate motion vectors are constructed according to the group of control points of the first neighboring affine decoding block, only the lower left control point and the lower right control point of the first neighboring affine decoding block are needed and no additional control point is needed, so that it is further ensured that memory reading does not become too high.
In one possible implementation, if the current decoded block is a four-parameter affine decoded block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point and an upper right control point of the current decoded block, e.g., by substituting position coordinates of the upper left control point and the upper right control point of the current decoded block into a first affine model determined based on the motion vectors and the position coordinates of the lower left control point and the lower right control point of the first neighboring affine decoded block, thereby obtaining motion vector predictors for the upper left control point and the upper right control point of the current decoded block.
And if the current decoding block is a six-parameter affine decoding block, the first group of candidate motion vector predictors is used to represent motion vector predictors of the upper left control point, the upper right control point and the lower left control point of the current decoding block. For example, the position coordinates of the upper left control point, the upper right control point and the lower left control point of the current decoding block are substituted into the first affine model, so as to obtain the motion vector predictors of the upper left control point, the upper right control point and the lower left control point of the current decoding block.
In another optional scheme, the obtaining the motion vectors of one or more sub-blocks of the current decoded block based on the target candidate motion vector group specifically includes: the motion vectors of one or more sub-blocks of the current decoded block are derived based on a second affine model (e.g., by substituting coordinates of a center point of the one or more sub-blocks into the second affine model, thereby deriving motion vectors of one or more sub-blocks), wherein the second affine model is determined based on the target set of candidate motion vectors and position coordinates of a set of control points of the current decoded block.
In an alternative scheme, in the advanced motion vector prediction (AMVP) mode, the obtaining motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group may specifically include: obtaining a new candidate motion vector group based on a motion vector difference (MVD) obtained by parsing the code stream and the target candidate motion vector group indicated by the index; then obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the new candidate motion vector group, for example, determining a second affine model based on the new candidate motion vector group and the position coordinates of a group of control points of the current decoding block, and then obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the second affine model.
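As a small illustration of how the parsed MVDs and the indicated predictor group could be combined into the "new candidate motion vector group" (names and data layout are assumptions):

```python
# Decoder-side AMVP reconstruction: add the parsed per-control-point MVDs to
# the control-point predictors of the indicated candidate group (illustrative).

def reconstruct_control_point_mvs(target_candidate_group, mvds):
    return [(mvp[0] + d[0], mvp[1] + d[1])
            for mvp, d in zip(target_candidate_group, mvds)]
```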
In another possible implementation manner, in the merge mode, the predicting a pixel prediction value of the current decoded block based on motion vectors of one or more sub-blocks of the current decoded block may specifically include: and predicting to obtain a pixel predicted value of the current decoding block according to the motion vector of one or more sub-blocks of the current decoding block, and the reference frame index and the prediction direction indicated by the index.
In a further alternative, the position coordinates (x6, y6) of the lower left control point of the first neighboring affine decoding block and the position coordinates (x7, y7) of the lower right control point are both calculated from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine decoding block: the position coordinates (x6, y6) of the lower left control point of the first neighboring affine decoding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine decoding block are (x4 + cuW, y4 + cuH), where cuW is the width of the first neighboring affine decoding block and cuH is the height of the first neighboring affine decoding block. In addition, the motion vector of the lower left control point of the first neighboring affine decoding block is the motion vector of the lower left sub-block of the first neighboring affine decoding block, and the motion vector of the lower right control point of the first neighboring affine decoding block is the motion vector of the lower right sub-block of the first neighboring affine decoding block. It can be seen that the position coordinates of the lower left control point and of the lower right control point of the first neighboring affine decoding block are obtained by derivation rather than by reading from the memory, so the method can further reduce memory reading and improve decoding performance. Alternatively, the position coordinates of the lower left control point and the lower right control point may be stored in the memory in advance and read from the memory when they are to be used later.
In yet another alternative, when the motion vectors of the one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU in which the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to the target candidate motion vector group and the position coordinates (0, H) of the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to the target candidate motion vector group and the position coordinates (W, H) of the lower right corner of the current decoding block. For example, an affine model is constructed according to the target candidate motion vector group; then the position coordinates (0, H) of the lower left corner of the current decoding block are substituted into the affine model to obtain the motion vector of the sub-block at the lower left corner of the current decoding block (instead of substituting the center point coordinates of the sub-block at the lower left corner into the affine model), and the position coordinates (W, H) of the lower right corner of the current decoding block are substituted into the affine model to obtain the motion vector of the sub-block at the lower right corner of the current decoding block (instead of substituting the center point coordinates of the sub-block at the lower right corner into the affine model). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoding block are used later (e.g., when a subsequent block constructs its candidate motion vector list based on the motion vectors of the lower left control point and the lower right control point of the current block), exact values rather than estimated values are used. Here, W is the width of the current decoding block and H is the height of the current decoding block.
In an eighth aspect, an embodiment of the present application provides a video decoder, including:
an entropy decoding unit, configured to parse the code stream to obtain an index, the index being used for indicating a target candidate motion vector group of a current decoding block;
an inter prediction unit, configured to determine, according to the index, the target candidate motion vector group from a candidate motion vector list, the target candidate motion vector group representing motion vector predictors of a group of control points of the current decoding block, wherein, if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in the coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first group of candidate motion vector predictors, the first group of candidate motion vector predictors being derived based on a lower left control point and a lower right control point of the first neighboring affine decoding block; to derive motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group; and to predict a pixel predicted value of the current decoding block based on the motion vectors of the one or more sub-blocks of the current decoding block.
In a ninth aspect, an embodiment of the present application provides an apparatus for decoding video data, the apparatus including:
the memory is used for storing video data in a code stream form;
a video decoder, configured to parse the code stream to obtain an index, the index being used for indicating a target candidate motion vector group of a current decoding block; to determine, according to the index, the target candidate motion vector group from a candidate motion vector list, the target candidate motion vector group representing motion vector predictors of a group of control points of the current decoding block, wherein, if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in the coding tree unit (CTU) above the current decoding block, the candidate motion vector list comprises a first group of candidate motion vector predictors derived based on a lower left control point and a lower right control point of the first neighboring affine decoding block; to derive motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group; and to predict a pixel predicted value of the current decoding block based on the motion vectors of the one or more sub-blocks of the current decoding block.
In a tenth aspect, an embodiment of the present application provides a decoding apparatus, including: a non-volatile memory and a processor coupled to each other, the processor calling program code stored in the memory to perform part or all of the steps of any one of the methods of the seventh aspect.
In an eleventh aspect, embodiments of the present application provide a computer-readable storage medium storing program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the seventh aspect.
In a twelfth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the seventh aspect.
It should be understood that the eighth to twelfth aspects of the present application are consistent with the technical solutions of the seventh aspect of the present application, and the beneficial effects obtained by the aspects and the corresponding possible embodiments are similar, and are not repeated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIG. 1 is a schematic block diagram of a video encoding and decoding system in an embodiment of the present application;
FIG. 2A is a schematic block diagram of a video encoder in an embodiment of the present application;
FIG. 2B is a schematic block diagram of a video decoder in an embodiment of the present application;
FIG. 3 is a flow chart of a method for inter-prediction of encoded video images in an embodiment of the present application;
FIG. 4 is a flowchart of a method for inter-prediction for decoding video images according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating motion information of a current image block and a reference block in an embodiment of the present application;
fig. 6 is a schematic flowchart of an encoding method according to an embodiment of the present application;
fig. 7A is a schematic view of a scene of a neighboring block according to an embodiment of the present application;
fig. 7B is a schematic view of a scene of a neighboring block according to an embodiment of the present application;
fig. 8A is a schematic structural diagram of a motion compensation unit according to an embodiment of the present application;
fig. 8B is a schematic structural diagram of another motion compensation unit provided in the embodiment of the present application;
fig. 9 is a flowchart illustrating a decoding method according to an embodiment of the present application;
fig. 9A is a schematic flowchart of constructing a candidate motion vector list according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an encoding device or a decoding device provided by an embodiment of the present invention;
fig. 11 is a schematic diagram of a video coding system 1100 including the encoder 100 of fig. 2A and/or the decoder 200 of fig. 2B according to an example embodiment.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The non-translational motion model prediction refers to deriving, at both the encoding end and the decoding end (for example, at the video encoder and the video decoder), the motion information (such as motion vectors) of each sub motion compensation unit (also called sub-block) in a current coding/decoding block using the same motion model, and performing motion compensation according to the motion information of the sub motion compensation units to obtain a prediction block, thereby improving prediction efficiency. Motion vector prediction based on a motion model is involved in the process of deriving the motion information of the motion compensation units (also called sub-blocks) in the current coding/decoding block. At present, affine models are generally derived using the position coordinates and motion vectors of the upper left control point, the upper right control point and the lower left control point of a neighboring affine decoding block of the current coding/decoding block; the motion vector predictors for a group of control points of the current coding/decoding block are then derived from the affine model as a group of candidate motion vector predictors in a candidate motion vector list. However, the position coordinates and the motion vectors of the upper left control point, the upper right control point and the lower left control point of the neighboring affine decoding block used in this motion vector prediction process all need to be read from the memory in real time, which increases the pressure of memory reading. The embodiments of the present application mainly teach how to reduce this memory read pressure, and relate to optimization at both the encoding end and the decoding end.
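For contrast with the scheme proposed in this application, the following is a minimal floating-point sketch of the conventional derivation just described: a six-parameter affine model built from the top-left, top-right and bottom-left control points of the neighboring block, which therefore requires three control-point positions and three motion vectors to be fetched from memory. Variable names are illustrative, not from the patent.

```python
# Conventional 6-parameter affine model from three control points of the
# neighbouring block: top-left (x4, y4), top-right (x4 + w, y4) and
# bottom-left (x4, y4 + h). Illustrative floating-point sketch only.

def six_param_affine(mv_tl, mv_tr, mv_bl, x4, y4, w, h):
    def mv(x, y):
        vx = (mv_tl[0]
              + (mv_tr[0] - mv_tl[0]) / w * (x - x4)
              + (mv_bl[0] - mv_tl[0]) / h * (y - y4))
        vy = (mv_tl[1]
              + (mv_tr[1] - mv_tl[1]) / w * (x - x4)
              + (mv_bl[1] - mv_tl[1]) / h * (y - y4))
        return (vx, vy)
    return mv
```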
Encoding a video stream, or a portion thereof, such as a video frame or an image block, may use temporal and spatial similarities in the video stream to improve encoding performance. For example, a current image block of a video stream may be encoded based on previously encoded blocks by predicting motion information for the current image block based on previously encoded blocks in the video stream and identifying a difference (also referred to as a residual) between the predicted block and the current image block (i.e., the original block). In this way, only the residual and some parameters used to generate the current image block are included in the digital video output bitstream, rather than the entirety of the current image block. This technique may be referred to as inter prediction.
A motion vector is an important parameter in the inter prediction process and represents the spatial displacement of a previously coded block relative to the current coded block. The motion vector may be obtained using a method of motion estimation, such as motion search. Early inter prediction techniques included bits representing motion vectors in the encoded bitstream to allow the decoder to reproduce the predicted blocks, and hence the reconstructed blocks. In order to further improve the coding efficiency, it was subsequently proposed to use the reference motion vector to encode the motion vector differentially, i.e. instead of encoding the motion vector as a whole, only the difference between the motion vector and the reference motion vector is encoded. In some cases, the reference motion vector may be selected from among previously used motion vectors in the video stream, and selecting the previously used motion vector to encode the current motion vector may further reduce the number of bits included in the encoded video bitstream.
FIG. 1 is a block diagram of a video coding system 1 of one example described in an embodiment of the present application. As used herein, the term "video coder" generally refers to both video encoders and video decoders. In this application, the term "video coding" or "coding" may generally refer to video encoding or video decoding. The video encoder 100 and the video decoder 200 of the video coding system 1 are configured to predict motion information, such as motion vectors, of a currently coded image block or a sub-block thereof according to various method examples described in any one of a plurality of new inter prediction modes proposed in the present application, such that the predicted motion vectors are maximally close to the motion vectors obtained using a motion estimation method, thereby eliminating the need to transmit motion vector differences when encoding, and further improving the coding and decoding performance.
As shown in fig. 1, video coding system 1 includes a source device 10 and a destination device 20. Source device 10 generates encoded video data. Accordingly, source device 10 may be referred to as a video encoding device. Destination device 20 may decode the encoded video data generated by source device 10. Destination device 20 may therefore be referred to as a video decoding device. Various implementations of source device 10, destination device 20, or both may include one or more processors and memory coupled to the one or more processors. The memory can include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein.
Source device 10 and destination device 20 may comprise a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, or the like.
Destination device 20 may receive encoded video data from source device 10 over link 30. Link 30 may comprise one or more media or devices capable of moving encoded video data from source device 10 to destination device 20. In one example, link 30 may comprise one or more communication media that enable source device 10 to transmit encoded video data directly to destination device 20 in real-time. In this example, source device 10 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination device 20. The one or more communication media may include wireless and/or wired communication media such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include a router, switch, base station, or other apparatus that facilitates communication from source device 10 to destination device 20.
In another example, encoded data may be output from output interface 140 to storage device 40. Similarly, encoded data may be accessed from storage device 40 through input interface 240. Storage device 40 may include any of a variety of distributed or locally accessed data storage media such as a hard drive, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.
In another example, storage device 40 may correspond to a file server or another intermediate storage device that may hold the encoded video generated by source device 10. Destination device 20 may access the stored video data from storage device 40 via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting the encoded video data to destination device 20. Example file servers include web servers (e.g., for a website), FTP servers, Network Attached Storage (NAS) devices, or local disk drives. Destination device 20 may access the encoded video data over any standard data connection, including an internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from storage device 40 may be a streaming transmission, a download transmission, or a combination of both.
The motion vector prediction techniques of the present application may be applied to video codecs to support a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the internet), encoding for video data stored on a data storage medium, decoding of video data stored on a data storage medium, or other applications. In some examples, video coding system 1 may be used to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
The video coding system 1 illustrated in fig. 1 is merely an example, and the techniques of this application may be applied to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between an encoding device and a decoding device. In other examples, the data is retrieved from local storage, streamed over a network, and so forth. A video encoding device may encode and store data to a memory, and/or a video decoding device may retrieve and decode data from a memory. In many examples, the encoding and decoding are performed by devices that do not communicate with each other, but merely encode data to and/or retrieve data from memory and decode data.
In the example of fig. 1, source device 10 includes video source 120, video encoder 100, and output interface 140. In some examples, output interface 140 may include a modulator/demodulator (modem) and/or a transmitter. Video source 120 may comprise a video capture device (e.g., a video camera), a video archive containing previously captured video data, a video feed interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources of video data.
Video encoder 100 may encode video data from video source 120. In some examples, source device 10 transmits the encoded video data directly to destination device 20 via output interface 140. In other examples, encoded video data may also be stored onto storage device 40 for later access by destination device 20 for decoding and/or playback.
In the example of fig. 1, destination device 20 includes input interface 240, video decoder 200, and display device 220. In some examples, input interface 240 includes a receiver and/or a modem. Input interface 240 may receive encoded video data via link 30 and/or from storage device 40. Display device 220 may be integrated with destination device 20 or may be external to destination device 20. In general, display device 220 displays decoded video data. The display device 220 may include a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or other types of display devices.
Although not shown in fig. 1, in some aspects, video encoder 100 and video decoder 200 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer units or other hardware and software to handle encoding of both audio and video in a common data stream or separate data streams. In some examples, the MUX-DEMUX unit may conform to the ITU H.223 multiplexer protocol, or other protocols such as the User Datagram Protocol (UDP), if applicable.
Video encoder 100 and video decoder 200 may each be implemented as any of a variety of circuits such as: one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, hardware, or any combinations thereof. If the present application is implemented in part in software, a device may store instructions for the software in a suitable non-volatile computer-readable storage medium and may execute the instructions in hardware using one or more processors to implement the techniques of the present application. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered one or more processors. Each of video encoder 100 and video decoder 200 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in a respective device.
This application may generally refer to video encoder 100 as "signaling" or "transmitting" certain information to another device, such as video decoder 200. The terms "signaling" or "transmitting" may generally refer to the communication of syntax elements and/or other data used to decode compressed video data. This transfer may occur in real time or near real time. Alternatively, such communication may occur over a period of time, such as may occur when, at the time of encoding, syntax elements are stored in the encoded bitstream to a computer-readable storage medium, which the decoding device may then retrieve at any time after the syntax elements are stored to such medium.
Video encoder 100 and video decoder 200 may operate according to a video compression standard such as High Efficiency Video Coding (HEVC), or an extension thereof, and may conform to the HEVC test model (HM). Alternatively, video encoder 100 and video decoder 200 may also operate in accordance with other industry standards, such as the ITU-T H.264 or H.265 standards, or extensions of such standards. However, the techniques of this application are not limited to any particular codec standard.
In one example, referring collectively to fig. 3, the video encoder 100 is configured to: coding syntax elements related to the current image block to be coded into a digital video output bit stream (bit stream or code stream for short), wherein the syntax elements used for inter-frame prediction of the current image block are simply called as inter-frame prediction data; in order to determine an inter prediction mode for encoding a current image block, the video encoder 100 is further configured to determine or select (S301) an inter prediction mode for inter prediction of the current image block from the candidate inter prediction mode set (e.g., select an inter prediction mode with a trade-off or minimum rate-distortion cost for encoding the current image block from among a plurality of new inter prediction modes); and encoding the current image block based on the determined inter prediction mode (S303), where the encoding process may include predicting motion information of one or more sub-blocks in the current image block (specifically, motion information of each sub-block or all sub-blocks) based on the determined inter prediction mode, and performing inter prediction on the current image block using the motion information of one or more sub-blocks in the current image block;
it should be understood that if a difference (i.e., a residual) between a prediction block generated from motion information predicted based on a new inter prediction mode proposed in the present application and an image block to be currently encoded (i.e., an original block) is 0, only syntax elements related to the image block to be currently encoded need to be coded into a bitstream (also referred to as a code stream) in the video encoder 100; conversely, in addition to the syntax elements, the corresponding residual needs to be coded into the bitstream.
In another example, referring collectively to fig. 4, the video decoder 200 is configured to: decoding syntax elements related to a current image block to be decoded from a bitstream (S401), determining an inter prediction mode for inter prediction of the current image block in a set of candidate inter prediction modes (S403) when the inter prediction data indicates that the current image block is predicted using the set of candidate inter prediction modes (i.e., a new inter prediction mode), and decoding the current image block based on the determined inter prediction mode (S405), where the decoding process may include predicting motion information of one or more sub-blocks in the current image block based on the determined inter prediction mode and performing inter prediction on the current image block using the motion information of the one or more sub-blocks in the current image block.
Optionally, if the inter-frame prediction data further includes a second identifier indicating which inter-frame prediction mode the current image block adopts, the video decoder 200 is configured to determine that the inter-frame prediction mode indicated by the second identifier is the inter-frame prediction mode used for inter-frame prediction of the current image block; alternatively, if the inter prediction data does not include the second identifier indicating which inter prediction mode the current image block adopts, the video decoder 200 is configured to determine the first inter prediction mode for the non-directional motion field as the inter prediction mode for inter prediction of the current image block.
Fig. 2A is a block diagram of a video encoder 100 of one example described in an embodiment of the present application. The video encoder 100 is used to output the video to the post-processing entity 41. Post-processing entity 41 represents an example of a video entity, such as a media-aware network element (MANE) or a splicing/editing device, that may process the encoded video data from video encoder 100. In some cases, post-processing entity 41 may be an instance of a network entity. In some video encoding systems, post-processing entity 41 and video encoder 100 may be parts of separate devices, while in other cases, the functionality described with respect to post-processing entity 41 may be performed by the same device that includes video encoder 100. In some examples, post-processing entity 41 is an example of storage device 40 of FIG. 1.
The video encoder 100 may perform encoding of a video image block, e.g., perform inter-prediction of a video image block, according to any one of the new inter-prediction modes in the set of candidate inter-prediction modes proposed herein, including modes 0,1,2 … or 10.
In the example of fig. 2A, video encoder 100 includes prediction processing unit 108, filter unit 106, decoded picture buffer unit (DPB) 107, summing unit 112, transform unit 101, quantization unit 102, and entropy encoding unit 103. The prediction processing unit 108 includes an inter prediction unit 110 and an intra prediction unit 109. For image block reconstruction, the video encoder 100 further includes an inverse quantization unit 104, an inverse transform unit 105, and a summation unit 111. Filter unit 106 is intended to represent one or more loop filtering units, such as deblocking filtering units, adaptive loop filtering (ALF) units, and Sample Adaptive Offset (SAO) filtering units. Although filter unit 106 is shown in fig. 2A as an in-loop filter, in other implementations, filter unit 106 may be implemented as a post-loop filter. In one example, the video encoder 100 may further include a video data storage unit and a partitioning unit (not shown).
The video data storage unit may store video data to be encoded by components of the video encoder 100. The video data stored in the video data storage unit may be obtained from the video source 120. DPB 107 may be a reference picture storage unit that stores reference video data used by video encoder 100 to encode video data in intra or inter coding modes. The video data storage unit and DPB 107 may be formed from any of a variety of memory devices, such as dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. The video data storage unit and the DPB 107 may be provided by the same memory device or by separate memory devices. In various examples, the video data storage unit may be on-chip with other components of video encoder 100, or off-chip with respect to those components.
As shown in fig. 2A, the video encoder 100 receives video data and stores the video data in a video data storage unit. The partitioning unit partitions the video data into image blocks, and these image blocks may be further partitioned into smaller blocks, e.g., image block partitions based on a quadtree structure or a binary tree structure. This partitioning may also include partitioning into slices, tiles, or other larger units. Video encoder 100 generally illustrates components that encode image blocks within a video slice to be encoded. The slice may be divided into multiple image blocks (and possibly into sets of image blocks referred to as tiles). Prediction processing unit 108 may select one of a plurality of possible coding modes for the current image block, such as one of a plurality of intra coding modes or one of a plurality of inter coding modes, including but not limited to one or more of modes 0,1,2,3 … 10 as set forth herein. Prediction processing unit 108 may provide the resulting intra or inter coded blocks to summing unit 112 to generate a residual block, and to summing unit 111 to reconstruct the encoded blocks used as reference pictures.
Intra-prediction unit 109 within prediction processing unit 108 may perform intra-predictive encoding of the current block relative to one or more neighboring blocks in the same frame or slice as the current block to be encoded to remove spatial redundancy. Inter prediction unit 110 within prediction processing unit 108 may perform inter-predictive encoding of the current block relative to one or more prediction blocks in one or more reference pictures to remove temporal redundancy.
Specifically, the inter prediction unit 110 may be used to determine an inter prediction mode for encoding the current image block. For example, the inter-prediction unit 110 may use a rate-distortion analysis to calculate rate-distortion values for various inter-prediction modes in the set of candidate inter-prediction modes and select the inter-prediction mode having the best rate-distortion characteristics therefrom. Rate-distortion analysis typically determines the amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as the bit rate (that is, the number of bits) used to produce the encoded block. For example, the inter prediction unit 110 may determine the inter prediction mode with the smallest rate-distortion cost for encoding the current image block in the candidate inter prediction mode set as the inter prediction mode for inter predicting the current image block. The following describes in detail the inter-predictive coding process, and particularly the process of predicting motion information of one or more sub-blocks (specifically, each sub-block or all sub-blocks) in a current image block in various inter-prediction modes of the present application for non-directional or directional motion fields.
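As a rough, non-normative illustration of the rate-distortion selection described above, the following Python sketch picks the candidate mode whose cost J = D + lambda * R is lowest. The sum-of-squared-errors distortion measure, the toy bit estimates, and the function name rd_select are illustrative assumptions rather than part of any codec.

```python
import numpy as np

def rd_select(original, candidate_predictions, candidate_bits, lam):
    """Return the index of the candidate mode with the lowest cost J = D + lambda * R.

    original              : HxW array of original pixel values
    candidate_predictions : list of HxW arrays, one prediction block per candidate mode
    candidate_bits        : list of estimated bit counts for signalling each candidate
    lam                   : Lagrange multiplier trading distortion against bit rate
    """
    best_idx, best_cost = -1, float("inf")
    for idx, (pred, bits) in enumerate(zip(candidate_predictions, candidate_bits)):
        diff = original.astype(np.int64) - pred.astype(np.int64)
        distortion = float(np.sum(diff * diff))      # sum of squared errors
        cost = distortion + lam * bits               # J = D + lambda * R
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx

# Usage with toy 4x4 blocks: the cheaper-to-signal mode loses to the more accurate one.
orig = np.full((4, 4), 100, dtype=np.int64)
preds = [np.full((4, 4), 98, dtype=np.int64), np.full((4, 4), 101, dtype=np.int64)]
best = rd_select(orig, preds, candidate_bits=[10, 14], lam=4.0)   # -> 1
```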
Inter prediction unit 110 is configured to predict motion information (e.g., motion vectors) of one or more sub-blocks in a current image block based on the determined inter prediction mode, and obtain or generate a prediction block for the current image block using the motion information (e.g., motion vectors) of the one or more sub-blocks in the current image block. Inter prediction unit 110 may locate the prediction block to which the motion vector points in one of the reference picture lists. Inter prediction unit 110 may also generate syntax elements associated with the image blocks and the video slice for use by video decoder 200 in decoding the image blocks of the video slice. Alternatively, in an example, the inter prediction unit 110 performs a motion compensation process using the motion information of each sub-block to generate a prediction block for each sub-block, so as to obtain a prediction block for the current image block; it should be understood that the inter prediction unit 110 herein performs both motion estimation and motion compensation processes.
Specifically, after selecting the inter prediction mode for the current image block, the inter prediction unit 110 may provide information indicating the selected inter prediction mode for the current image block to the entropy encoding unit 103, so that the entropy encoding unit 103 encodes the information indicating the selected inter prediction mode. In this application, the video encoder 100 may include inter prediction data related to a current image block in a transmitted bitstream, which may include a first flag block_based_enable_flag to indicate whether to inter predict the current image block using a new inter prediction mode proposed in this application; optionally, a second flag block_based_index may be further included to indicate which new inter prediction mode is used for the current image block. In this application, a process of predicting a motion vector of a current image block or a sub-block thereof using motion vectors of a plurality of reference blocks in different modes 0,1,2 … 10 will be described in detail below.
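The following Python sketch illustrates, in a simplified way, how the two flags mentioned above could be placed in a bitstream. The fixed 4-bit length of block_based_index and the use of raw bits instead of entropy (e.g., CABAC) coding are assumptions made purely for illustration.

```python
def write_inter_prediction_flags(bits, use_new_mode, mode_index=None):
    """Append block_based_enable_flag and, when enabled, block_based_index to a
    list of bits. Fixed-length 4-bit index, MSB first; no entropy coding.
    """
    bits.append(1 if use_new_mode else 0)          # block_based_enable_flag
    if use_new_mode and mode_index is not None:
        for k in range(3, -1, -1):                 # block_based_index, 4 bits, MSB first
            bits.append((mode_index >> k) & 1)
    return bits

print(write_inter_prediction_flags([], True, 5))   # -> [1, 0, 1, 0, 1]
```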
The intra prediction unit 109 may perform intra prediction on the current image block. In particular, the intra-prediction unit 109 may determine the intra-prediction mode used to encode the current block. For example, the intra-prediction unit 109 may use a rate-distortion analysis to calculate rate-distortion values for various intra-prediction modes to be tested and select the intra-prediction mode having the best rate-distortion characteristics from among the modes to be tested. In any case, after selecting the intra-prediction mode for the image block, intra-prediction unit 109 may provide information indicating the selected intra-prediction mode for the current image block to entropy encoding unit 103 so that entropy encoding unit 103 encodes the information indicating the selected intra-prediction mode.
After prediction processing unit 108 generates a prediction block for the current image block via inter prediction or intra prediction, video encoder 100 forms a residual image block by subtracting the prediction block from the current image block to be encoded. Summing unit 112 represents one or more components that perform this subtraction operation. The residual video data in the residual block may be included in one or more TUs and applied to transform unit 101. The transform unit 101 transforms the residual video data into residual transform coefficients using a transform such as a Discrete Cosine Transform (DCT) or a conceptually similar transform. Transform unit 101 may convert the residual video data from a pixel value domain to a transform domain, e.g., the frequency domain.
Transform unit 101 may send the resulting transform coefficients to quantization unit 102. Quantization unit 102 quantizes the transform coefficients to further reduce the bit rate. In some examples, quantization unit 102 may then perform a scan of a matrix that includes quantized transform coefficients. Alternatively, entropy encoding unit 103 may perform scanning.
After quantization, entropy encoding unit 103 entropy encodes the quantized transform coefficients. For example, entropy encoding unit 103 may perform Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context adaptive binary arithmetic coding (SBAC), Probability Interval Partition Entropy (PIPE) coding, or another entropy encoding method or technique. After entropy encoding by entropy encoding unit 103, the encoded bitstream may be transmitted to video decoder 200, or archived for later transmission or retrieval by video decoder 200. Entropy encoding unit 103 may also entropy encode syntax elements of the current image block to be encoded.
Inverse quantization unit 104 and inverse transform unit 105 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block of a reference image. Summing unit 111 adds the reconstructed residual block to the prediction block produced by inter prediction unit 110 or intra prediction unit 109 to produce a reconstructed image block. The filter unit 106 may be applied to the reconstructed image block to reduce distortion, such as blocking artifacts. This reconstructed image block is then stored as a reference block in decoded image buffer unit 107, where it may be used by inter prediction unit 110 as a reference block to inter predict a block in a subsequent video frame or image.
It should be understood that other structural variations of the video encoder 100 may be used to encode the video stream. For example, for some image blocks or image frames, the video encoder 100 may quantize the residual signal directly without processing by the transform unit 101, and correspondingly without processing by the inverse transform unit 105; alternatively, for some image blocks or image frames, the video encoder 100 does not generate residual data and accordingly does not need to be processed by the transform unit 101, the quantization unit 102, the inverse quantization unit 104, and the inverse transform unit 105; alternatively, the video encoder 100 may store the reconstructed picture block directly as a reference block without processing by the filter unit 106; alternatively, the quantization unit 102 and the inverse quantization unit 104 in the video encoder 100 may be combined together. The loop filtering unit is optional, and in the case of lossless compression coding, the transform unit 101, the quantization unit 102, the inverse quantization unit 104, and the inverse transform unit 105 are optional. It should be understood that the inter prediction unit and the intra prediction unit may be selectively enabled according to different application scenarios, and in this case, the inter prediction unit is enabled.
Fig. 2B is a block diagram of a video decoder 200 of one example described in an embodiment of the present application. In the example of fig. 2B, video decoder 200 includes entropy decoding unit 203, prediction processing unit 208, inverse quantization unit 204, inverse transform unit 205, summing unit 211, filter unit 206, and decoded image buffer unit 207. The prediction processing unit 208 may include an inter prediction unit 210 and an intra prediction unit 209. In some examples, video decoder 200 may perform a decoding process that is substantially reciprocal to the encoding process described with respect to video encoder 100 from fig. 2A.
In the decoding process, the video decoder 200 receives an encoded video bitstream representing image blocks and associated syntax elements of an encoded video slice from the video encoder 100. The video decoder 200 may receive video data from the network entity 42 and, optionally, may store the video data in a video data storage unit (not shown). The video data storage unit may store video data, such as an encoded video bitstream, to be decoded by components of the video decoder 200. The video data stored in the video data storage unit may be obtained, for example, from the storage device 40, from a local video source such as a camera, via wired or wireless network communication of the video data, or by accessing a physical data storage medium. The video data storage unit may serve as a coded picture buffer (CPB) for storing encoded video data from an encoded video bitstream. Although the video data storage unit is not illustrated in fig. 2B, the video data storage unit and the DPB 207 may be the same storage unit or may be separately provided storage units. The video data storage unit and DPB 207 may be formed from any of a variety of memory devices, such as dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. In various examples, the video data storage unit may be integrated on-chip with other components of video decoder 200, or disposed off-chip with respect to those components.
Network entity 42 may be, for example, a server, a MANE, a video editor/splicer, or other such device for implementing one or more of the techniques described above. Network entity 42 may or may not include a video encoder, such as video encoder 100. Prior to network entity 42 sending the encoded video bitstream to video decoder 200, network entity 42 may implement portions of the techniques described in this application. In some video decoding systems, network entity 42 and video decoder 200 may be part of separate devices, while in other cases, the functionality described with respect to network entity 42 may be performed by the same device that includes video decoder 200. In some cases, network entity 42 may be an example of storage 40 of fig. 1.
Entropy decoding unit 203 of video decoder 200 entropy decodes the bitstream to generate quantized coefficients and some syntax elements. The entropy decoding unit 203 forwards the syntax element to the prediction processing unit 208. Video decoder 200 may receive syntax elements at the video slice level and/or the picture block level.
When a video slice is decoded as an intra-decoded (I) slice, intra prediction unit 209 of prediction processing unit 208 may generate a prediction block for an image block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture. When a video slice is decoded as an inter-decoded (i.e., B or P) slice, inter prediction unit 210 of prediction processing unit 208 may determine an inter prediction mode for decoding a current image block of the current video slice based on syntax elements received from entropy decoding unit 203, and decode the current image block (e.g., perform inter prediction) based on the determined inter prediction mode. Specifically, the inter prediction unit 210 may determine whether a current image block of the current video slice is predicted using a new inter prediction mode, and if the syntax elements indicate that the current image block is predicted using a new inter prediction mode, predict motion information of the current image block or a sub-block of the current image block based on the new inter prediction mode (e.g., a new inter prediction mode specified by a syntax element or a default new inter prediction mode), so as to obtain or generate a prediction block for the current image block or the sub-block of the current image block through a motion compensation process using the predicted motion information. The motion information herein may include reference picture information and motion vectors, where the reference picture information may include, but is not limited to, uni-/bi-directional prediction information, a reference picture list number, and a reference picture index corresponding to the reference picture list. For inter prediction, a prediction block may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 200 may construct reference picture lists, i.e., list 0 and list 1, based on the reference pictures stored in DPB 207. The reference frame index for the current picture may be included in one or more of reference frame list 0 and list 1. In some examples, video encoder 100 may signal a particular syntax element indicating whether a new inter prediction mode is employed to decode a particular block, or may signal a particular syntax element indicating both whether a new inter prediction mode is employed and which new inter prediction mode is specifically employed to decode the particular block. It should be understood that the inter prediction unit 210 herein performs a motion compensation process. The inter prediction process of predicting motion information of a current image block or a sub-block of the current image block using motion information of a reference block in various new inter prediction modes will be described in detail below.
Inverse quantization unit 204 inverse quantizes, i.e., dequantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 203. The inverse quantization process may include: the quantization parameter calculated by the video encoder 100 for each image block in the video slice is used to determine the degree of quantization that should be applied and likewise the degree of inverse quantization that should be applied. The inverse transform unit 205 applies an inverse transform, such as an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to generate a residual block in the pixel domain.
After inter prediction unit 210 generates a prediction block for the current image block or a sub-block of the current image block, video decoder 200 obtains a reconstructed block, i.e., a decoded image block, by summing the residual block from inverse transform unit 205 with the corresponding prediction block generated by inter prediction unit 210. Summing unit 211 represents the component that performs this summing operation. A loop filtering unit (in or after the decoding loop) may also be used to smooth pixel transitions or otherwise improve video quality, if desired. Filter unit 206 may represent one or more loop filtering units, such as a deblocking filtering unit, an adaptive loop filtering (ALF) unit, and a sample adaptive offset (SAO) filtering unit. Although the filtering unit 206 is shown in fig. 2B as an in-loop filtering unit, in other implementations, the filtering unit 206 may be implemented as a post-loop filtering unit. In one example, the filter unit 206 is applied to the reconstructed blocks to reduce blocking distortion, and the result is output as the decoded video stream. Also, decoded image blocks in a given frame or picture may be stored in decoded image buffer unit 207, with decoded image buffer unit 207 storing reference pictures used for subsequent motion compensation. Decoded image buffer unit 207 may be part of a storage unit that may also store decoded video for later presentation on a display device (e.g., display device 220 of fig. 1), or may be separate from such a storage unit.
It should be understood that other structural variations of the video decoder 200 may be used to decode the encoded video bitstream. For example, the video decoder 200 may generate an output video stream without processing by the filtering unit 206; alternatively, for some image blocks or image frames, the entropy decoding unit 203 of the video decoder 200 does not decode quantized coefficients and accordingly does not need to be processed by the inverse quantization unit 204 and the inverse transform unit 205. The loop filtering unit is optional; and the inverse quantization unit 204 and the inverse transformation unit 205 are optional for the case of lossless compression. It should be understood that the inter prediction unit and the intra prediction unit may be selectively enabled according to different application scenarios, and in this case, the inter prediction unit is enabled.
Fig. 5 is a diagram illustrating motion information of an exemplary current image block 600 and reference blocks in an embodiment of the present application. As shown in fig. 5, W and H are the width and height of the current image block 600 and of the co-located block (simply referred to as a collocated block) 600' of the current image block 600. The reference blocks of the current image block 600 include: the upper spatial neighboring blocks and the left spatial neighboring blocks of the current image block 600, and the lower spatial neighboring blocks and the right spatial neighboring blocks of the collocated block 600', where the collocated block 600' is an image block in a reference image having the same size, shape, and coordinates as the current image block 600. It should be noted that the motion information of the lower spatial neighboring blocks and the right spatial neighboring blocks of the current image block does not exist and has not been encoded. It should be understood that the current image block 600 and the collocated block 600' may be of any block size. For example, the current image block 600 and the collocated block 600' may include, but are not limited to, 16x16 pixels, 32x32 pixels, 32x16 pixels, 16x32 pixels, and the like. As described above, each image frame may be divided into image blocks for encoding. These image blocks may be further partitioned into smaller blocks; for example, the current image block 600 and the collocated block 600' may be partitioned into multiple MxN sub-blocks, i.e., each sub-block is MxN pixels in size, and each reference block is also MxN pixels in size, i.e., the same size as the sub-blocks of the current image block. The coordinates in fig. 5 are measured in MxN blocks. "M x N" and "M by N" are used interchangeably to refer to the pixel size of an image block in the horizontal and vertical dimensions, i.e., having M pixels in the horizontal direction and N pixels in the vertical direction, where M and N are non-negative integer values. Furthermore, a block does not necessarily need to have the same number of pixels in the horizontal direction as in the vertical direction. For example, M = N = 4; alternatively, the sub-block size of the current image block and the size of the reference block may be 8x8 pixels, 8x4 pixels, or 4x8 pixels, or the minimum prediction block size. Furthermore, the image blocks described in this application may be understood as, but not limited to: a Prediction Unit (PU), a Coding Unit (CU), a Transform Unit (TU), or the like. According to the specifications of different video compression codec standards, a CU may include one or more prediction units PU, or the PU and the CU may have the same size. Image blocks may have fixed or variable sizes and differ in size according to different video compression codec standards. Furthermore, the current image block refers to an image block to be currently encoded or decoded, such as a prediction unit to be encoded or decoded.
In one example, whether each left spatial neighboring block of the current image block 600 is available may be determined sequentially along direction 1, and whether each upper spatial neighboring block of the current image block 600 is available may be determined sequentially along direction 2, e.g., by checking whether the neighboring blocks (also referred to as reference blocks, the two terms being used interchangeably) are inter-coded: a neighboring block is available if it exists and is inter-coded; a neighboring block is not available if it does not exist or is intra-coded. If a neighboring block is intra-coded, the motion information of another neighboring reference block is copied as the motion information of that neighboring block. Whether the lower spatial neighboring blocks and the right spatial neighboring blocks of the collocated block 600' are available is detected in a similar manner, and details are not described herein again.
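A minimal Python sketch of this availability scan is given below; the dictionary representation of a neighboring block and the choice to copy motion information from the most recently scanned available neighbor are assumptions made for illustration only.

```python
def scan_neighbours(neighbour_blocks):
    """Scan neighbouring blocks along one direction (direction 1 or 2) and mark
    each as available if it exists and is inter-coded. For an intra-coded
    neighbour, this sketch copies the MV of the most recently scanned available
    neighbour (an assumption). Each block is a dict:
    {'exists': bool, 'inter': bool, 'mv': (vx, vy) or None}.
    """
    results, last_mv = [], None
    for blk in neighbour_blocks:
        if blk['exists'] and blk['inter']:
            last_mv = blk['mv']
            results.append(('available', blk['mv']))
        elif blk['exists'] and last_mv is not None:
            results.append(('copied', last_mv))     # intra-coded: reuse a neighbour's MV
        else:
            results.append(('unavailable', None))
    return results

print(scan_neighbours([
    {'exists': True, 'inter': True, 'mv': (2, -1)},
    {'exists': True, 'inter': False, 'mv': None},
    {'exists': False, 'inter': False, 'mv': None},
]))
```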
Further, if the size of the available reference block and the size of the sub-block of the current image block are both 4x4, the motion information of the available reference block can be fetched directly. If the size of the available reference block is, for example, 8x4 or 8x8, the motion information of its center 4x4 block can be obtained as the motion information of the available reference block; the coordinates of the top left vertex of the center 4x4 block relative to the top left vertex of the reference block are ((M/4)/2 x 4, (N/4)/2 x 4), where the division is integer division; for example, if M is 8 and N is 4, the coordinates of the top left vertex of the center 4x4 block relative to the top left vertex of the reference block are (4, 0). Optionally, the motion information of the top left 4x4 block of the reference block may instead be obtained as the motion information of the available reference block, but the application is not limited thereto.
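The integer-division offset of the center 4x4 block can be illustrated with the following short Python sketch (the function name is illustrative only):

```python
def centre_4x4_offset(ref_w, ref_h):
    """Top-left offset of the centre 4x4 block inside a ref_w x ref_h reference
    block, using the integer divisions described above: ((ref_w/4)/2)*4 and
    ((ref_h/4)/2)*4.
    """
    return ((ref_w // 4) // 2) * 4, ((ref_h // 4) // 2) * 4

assert centre_4x4_offset(8, 4) == (4, 0)   # matches the (4, 0) example above
assert centre_4x4_offset(8, 8) == (4, 4)
```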
For simplicity of description, the following description will be made with the sub-block representing the MxN sub-block and the adjacent block representing the adjacent MxN block.
FIG. 6 is a flow diagram illustrating a process 700 of an encoding method according to one embodiment of the present application. The process 700 may be performed by the video encoder 100, and in particular, may be performed by the inter prediction unit 110 and the entropy coding unit (also referred to as entropy encoder) 103 of the video encoder 100. Process 700 is described as a series of steps or operations, and it should be understood that process 700 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 6. Assume that a video data stream having a plurality of video frames is being encoded by a video encoder. If a first neighboring affine coding block is located in a Coding Tree Unit (CTU) above the current coding block, a set of candidate motion vector predictors is determined based on the lower left control point and the lower right control point of the first neighboring affine coding block. This corresponds to the flow shown in fig. 6 and is described as follows:
step S700: the video encoder determines an inter prediction mode for the current coding block.
Specifically, the inter prediction mode may be an Advanced Motion Vector Prediction (AMVP) mode or a merge (merge) mode.
If the inter prediction mode of the current coding block is determined to be the AMVP mode, steps S711-S713 are performed.
If the inter prediction mode of the current coding block is determined to be the merge mode, steps S721-S723 are performed.
AMVP mode:
step S711: the video encoder constructs a list of candidate motion vector predictors, MVPs.
Specifically, the video encoder constructs a candidate motion vector predictor (MVP) list (also referred to as a candidate motion vector list) by using an inter prediction unit (also referred to as an inter prediction module). The list may be constructed in one of the two manners provided below, or in a combination of the two manners, and the constructed candidate motion vector predictor MVP list may be a triplet candidate motion vector predictor MVP list or a 2-tuple candidate motion vector predictor MVP list. The two manners are as follows:
in the first mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion vector prediction value MVP list.
Firstly, all or some of the adjacent blocks of the current coding block are traversed in a predetermined order to determine the adjacent affine coding blocks among them, where the number of determined adjacent affine coding blocks may be one or more. For example, the neighboring blocks A, B, C, D, E shown in FIG. 7A may be traversed in sequence to determine the neighboring affine coded blocks among the neighboring blocks A, B, C, D, E. The inter prediction unit determines at least one set of candidate motion vector predictors from one adjacent affine coding block (each set of candidate motion vector predictors is a 2-tuple or a 3-tuple). The following description takes one adjacent affine coding block as an example; for convenience of description, this adjacent affine coding block is referred to as the first adjacent affine coding block, and the details are as follows:
and determining a first affine model according to the motion vector of the control point of the first adjacent affine coding block, and predicting the motion vector of the control point of the current coding block according to the first affine model. When the parameter models of the current encoding block are different, the manner of predicting the motion vector of the control point of the current encoding block based on the motion vector of the control point of the first adjacent affine encoding block is also different, and therefore the following description is divided into cases.
A. The parameter model of the current coding block is a 4-parameter affine transformation model, and the derivation mode can be as follows:
if a first adjacent affine Coding block is located in a Coding Tree Unit (CTU) above a current Coding block and the first adjacent affine Coding block is a four-parameter affine Coding block, motion vectors of two control points at the lowest side of the first adjacent affine Coding block are obtained, for example, position coordinates (x) of control points at the left and the bottom of the first adjacent affine Coding block can be obtained6,y6) And motion vector (vx)6,vy6) And the position coordinates (x) of the lower right control point7,y7) And motion vector value (vx)7,vy7)。
And forming a first affine model according to the motion vectors and the coordinate positions of the two lowest control points of the first adjacent affine coding block (the obtained first affine model is a 4-parameter affine model).
The motion vector of the control point of the current coding block is predicted according to the first affine model, for example, the position coordinates of the upper left control point and the position coordinates of the upper right control point of the current coding block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current coding block, as shown in formulas (1) and (2).
(1): vx0 = vx6 + (vx7 - vx6) / (x7 - x6) * (x0 - x6) - (vy7 - vy6) / (x7 - x6) * (y0 - y6); vy0 = vy6 + (vy7 - vy6) / (x7 - x6) * (x0 - x6) + (vx7 - vx6) / (x7 - x6) * (y0 - y6)

(2): vx1 = vx6 + (vx7 - vx6) / (x7 - x6) * (x1 - x6) - (vy7 - vy6) / (x7 - x6) * (y1 - y6); vy1 = vy6 + (vy7 - vy6) / (x7 - x6) * (x1 - x6) + (vx7 - vx6) / (x7 - x6) * (y1 - y6)
In formulas (1) and (2), (x0, y0) are the coordinates of the upper left control point of the current coding block, and (x1, y1) are the coordinates of the upper right control point of the current coding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block.
Optionally, the position coordinates (x6, y6) of the lower left control point and (x7, y7) of the lower right control point of the first adjacent affine coding block are both calculated from the position coordinates (x4, y4) of the upper left control point of the first adjacent affine coding block: the calculated position coordinates (x6, y6) of the lower left control point of the first adjacent affine coding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine coding block are (x4 + cuW, y4 + cuH), where cuW is the width of the first adjacent affine coding block and cuH is the height of the first adjacent affine coding block. In addition, the motion vector of the lower left control point of the first adjacent affine coding block is the motion vector of the lower left sub-block of the first adjacent affine coding block, and the motion vector of the lower right control point of the first adjacent affine coding block is the motion vector of the lower right sub-block of the first adjacent affine coding block. It can be seen that the position coordinates of the lower left control point and the lower right control point of the first adjacent affine coding block are obtained by derivation rather than by reading from memory, so this method can further reduce memory reads and improve coding performance. Alternatively, the position coordinates of the lower left control point and the lower right control point may be computed in advance and stored in memory, and read from memory when they need to be used subsequently.
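The derivation in formulas (1) and (2) above (and formula (3) below, when a third control point is needed) can be illustrated with the following Python sketch, which evaluates the 4-parameter affine model built from the neighbor's two bottom control points at arbitrary control point positions of the current block. It is a floating-point sketch under the assumption that the two bottom control points share the same vertical coordinate; a real codec would use fixed-point arithmetic and bit shifts.

```python
def derive_cp_mvs_from_bottom_cps(x6, y6, vx6, vy6, x7, vx7, vy7, cp_positions):
    """Evaluate the 4-parameter affine model defined by the lower left (x6, y6)
    and lower right (x7, y6) control points of the first adjacent affine coding
    block at the given control point positions of the current coding block.
    """
    cu_w = x7 - x6                    # the two control points share y6, so x7 - x6 = cuW
    a = (vx7 - vx6) / cu_w            # horizontal gradient of vx
    b = (vy7 - vy6) / cu_w            # horizontal gradient of vy
    mvs = []
    for x, y in cp_positions:
        vx = vx6 + a * (x - x6) - b * (y - y6)
        vy = vy6 + b * (x - x6) + a * (y - y6)
        mvs.append((vx, vy))
    return mvs

# Usage: neighbour bottom control points at (32, 16) and (48, 16); predict the MVs of
# the current block's upper left (32, 16) and upper right (48, 16) control points.
cp_mvs = derive_cp_mvs_from_bottom_cps(32, 16, 1.0, 0.5, 48, 2.0, 0.25,
                                       [(32, 16), (48, 16)])
```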
If the first adjacent affine Coding block is located in a Coding Tree Unit (CTU) above the current Coding block and the first adjacent affine Coding block is a six-parameter affine Coding block, the candidate motion vector prediction value of the control point of the current block is not generated based on the first adjacent affine Coding block.
If the first neighboring affine coding block is not located above the current coding block CTU, the manner of predicting the motion vector of the control point of the current coding block is not limited herein. However, for ease of understanding, the following is also illustrative of an alternative manner of determination:
the position coordinates and motion vectors of the three control points of the first neighboring affine coding block, e.g. the position coordinate (x) of the upper left control point, can be obtained4,y4) And motion vector value (vx)4,vy4) Position coordinates (x) of the upper right control point5,y5) And motion vector value (vx)5,vy5) Position coordinates (x) of lower left control point6,y6) And motion vector (vx)6,vy6)。
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine coding block.
The position coordinates (x0, y0) of the upper left control point and the position coordinates (x1, y1) of the upper right control point of the current coding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current coding block, as shown in formulas (4) and (5).
(4): vx0 = vx4 + (vx5 - vx4) / (x5 - x4) * (x0 - x4) + (vx6 - vx4) / (y6 - y4) * (y0 - y4); vy0 = vy4 + (vy5 - vy4) / (x5 - x4) * (x0 - x4) + (vy6 - vy4) / (y6 - y4) * (y0 - y4)

(5): vx1 = vx4 + (vx5 - vx4) / (x5 - x4) * (x1 - x4) + (vx6 - vx4) / (y6 - y4) * (y1 - y4); vy1 = vy4 + (vy5 - vy4) / (x5 - x4) * (x1 - x4) + (vy6 - vy4) / (y6 - y4) * (y1 - y4)
In formulas (4) and (5), (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block.
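Similarly, the 6-parameter model of formulas (4) and (5) (and formula (6) below) can be sketched in Python as follows; the assumption of a rectangular neighboring block (upper right control point at (x5, y4), lower left at (x4, y6)) and the floating-point arithmetic are simplifications for illustration.

```python
def derive_cp_mvs_from_three_cps(x4, y4, vx4, vy4, x5, vx5, vy5, y6, vx6, vy6,
                                 cp_positions):
    """Evaluate the 6-parameter affine model defined by the upper left (x4, y4),
    upper right (x5, y4) and lower left (x4, y6) control points of the first
    adjacent affine coding block at the given control point positions of the
    current coding block.
    """
    w = x5 - x4                       # neighbour width
    h = y6 - y4                       # neighbour height
    mvs = []
    for x, y in cp_positions:
        vx = vx4 + (vx5 - vx4) / w * (x - x4) + (vx6 - vx4) / h * (y - y4)
        vy = vy4 + (vy5 - vy4) / w * (x - x4) + (vy6 - vy4) / h * (y - y4)
        mvs.append((vx, vy))
    return mvs
```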
B. The parameter model of the current coding block is a 6-parameter affine transformation model, and the derivation mode can be as follows:
if the first adjacent affine coding block is located above the current coding block CTU and the first adjacent affine coding block is a four-parameter affine coding block, the position coordinates and motion vectors of the two control points at the lowest side of the first adjacent affine coding block are obtained, for example, the position coordinates (x) of the control point at the lower left of the first adjacent affine coding block can be obtained6,y6) And motion vector (vx)6,vy6) And the position coordinates (x) of the lower right control point7,y7) And motion vector value (vx)7,vy7)。
And forming a first affine model according to the motion vectors of the two lowest control points of the first adjacent affine coding block (the obtained first affine model is a 4-parameter affine model).
The motion vector of the control point of the current coding block is predicted according to the first affine model, for example, the position coordinate of the upper left control point, the position coordinate of the upper right control point, and the position coordinate of the lower left control point of the current coding block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current coding block, as shown in equations (1), (2), (3).
(3): vx2 = vx6 + (vx7 - vx6) / (x7 - x6) * (x2 - x6) - (vy7 - vy6) / (x7 - x6) * (y2 - y6); vy2 = vy6 + (vy7 - vy6) / (x7 - x6) * (x2 - x6) + (vx7 - vx6) / (x7 - x6) * (y2 - y6)
Formulas (1) and (2) are described above. In formulas (1), (2), and (3), (x0, y0) are the coordinates of the upper left control point of the current coding block, (x1, y1) are the coordinates of the upper right control point of the current coding block, and (x2, y2) are the coordinates of the lower left control point of the current coding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current coding block.
If the first adjacent affine Coding block is located in a Coding Tree Unit (CTU) above the current Coding block and the first adjacent affine Coding block is a six-parameter affine Coding block, the candidate motion vector prediction value of the control point of the current block is not generated based on the first adjacent affine Coding block.
If the first neighboring affine coding block is not located above the current coding block CTU, the manner of predicting the motion vector of the control point of the current coding block is not limited herein. However, for ease of understanding, the following is also illustrative of an alternative manner of determination:
the position coordinates and motion vectors of the three control points of the first neighboring affine coding block, e.g. the position coordinate (x) of the upper left control point, can be obtained4,y4) And motion vector value (vx)4,vy4) Position coordinates (x) of the upper right control point5,y5) And motion vector value (vx)5,vy5) Position coordinates (x) of lower left control point6,y6) And motion vector (vx)6,vy6)。
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine coding block.
The position coordinates (x0, y0) of the upper left control point, the position coordinates (x1, y1) of the upper right control point, and the position coordinates (x2, y2) of the lower left control point of the current coding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current coding block, as shown in formulas (4), (5), and (6).
(6): vx2 = vx4 + (vx5 - vx4) / (x5 - x4) * (x2 - x4) + (vx6 - vx4) / (y6 - y4) * (y2 - y4); vy2 = vy4 + (vy5 - vy4) / (x5 - x4) * (x2 - x4) + (vy6 - vy4) / (y6 - y4) * (y2 - y4)
Formulas (4) and (5) are described above. In formulas (4), (5), and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current coding block.
And in the second mode, a candidate motion vector predicted value MVP list is constructed by adopting a motion vector prediction method based on control point combination.
The manner of constructing the candidate motion vector predictor MVP list differs depending on the parameter model of the current coding block, as described separately below.
A. The parameter model of the current coding block is a 4-parameter affine transformation model, and the derivation mode can be as follows:
and predicting motion vectors of the top left vertex and the top right vertex of the current coding block by utilizing the motion information of the coded blocks adjacent to the periphery of the current coding block. As shown in fig. 7B: firstly, using the motion vector of the block A, B and/or C adjacent to the top left vertex as the candidate motion vector of the top left vertex of the current coding block; and utilizing the motion vector of the D and/or E block of the adjacent coded block with the top right vertex as a candidate motion vector of the top right vertex of the current coded block. Combining the candidate motion vector of the upper left vertex and the candidate motion vector of the upper right vertex to obtain a group of candidate motion vector predicted values, and combining a plurality of records obtained according to the combination mode to form a candidate motion vector predicted value MVP list.
B. The current coding block parameter model is a 6-parameter affine transformation model, and the derivation mode can be as follows:
and predicting motion vectors of the top left vertex and the top right vertex of the current coding block by utilizing the motion information of the coded blocks adjacent to the periphery of the current coding block. As shown in fig. 7B: firstly, using the motion vector of the block A, B and/or C adjacent to the top left vertex as the candidate motion vector of the top left vertex of the current coding block; using the motion vector of the coded block D and/or E adjacent to the top right vertex as the candidate motion vector of the top right vertex of the current coded block; and utilizing the motion vector of the F and/or G block adjacent to the top right vertex of the current coding block as a candidate motion vector of the top right vertex of the current coding block. Combining the candidate motion vector of the upper left vertex, the candidate motion vector of the upper right vertex and the candidate motion vector of the lower left vertex to obtain a group of candidate motion vector predicted values, wherein a plurality of groups of candidate motion vector predicted values obtained by combining according to the combination mode can form a candidate motion vector predicted value MVP list.
It should be noted that the candidate motion vector predictor MVP list may be constructed using only the candidate motion vector predictors obtained by the first manner of prediction, using only the candidate motion vector predictors obtained by the second manner of prediction, or using both the candidate motion vector predictors obtained by the first manner and those obtained by the second manner. In addition, the candidate motion vector predictor MVP list may be pruned and sorted according to a pre-configured rule, and then truncated or padded to a specific number of entries. When each group of candidate motion vector predictors in the candidate motion vector predictor MVP list includes motion vector predictors of three control points, the candidate motion vector predictor MVP list may be called a triplet list; when each group of candidate motion vector predictors in the candidate motion vector predictor MVP list includes motion vector predictors of two control points, the candidate motion vector predictor MVP list may be called a 2-tuple list.
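A simplified Python sketch of the combination-based list construction described above is given below; the pruning by exact duplicate comparison and the maximum list length of 5 are illustrative assumptions.

```python
from itertools import product

def build_constructed_candidates(mvs_top_left, mvs_top_right, mvs_bottom_left=None,
                                 max_len=5):
    """Combine the per-corner candidate MVs into candidate MV predictor groups:
    2-tuples (top left, top right) for the 4-parameter model, 3-tuples when a
    bottom left candidate list is also supplied, then prune and truncate.
    """
    if mvs_bottom_left is None:
        combos = list(product(mvs_top_left, mvs_top_right))
    else:
        combos = list(product(mvs_top_left, mvs_top_right, mvs_bottom_left))
    pruned = []
    for cand in combos:
        if cand not in pruned:                    # simple pruning of duplicates
            pruned.append(cand)
    return pruned[:max_len]                       # truncate to a fixed number of entries

# Usage with hypothetical corner MVs taken from blocks A/B/C, D/E and F/G:
mvp_list = build_constructed_candidates([(1, 0), (1, 1)], [(2, 0)], [(0, 1)])
```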
Step S712: the video encoder determines a target set of candidate motion vectors from a list of candidate motion vector predictors, MVPs, according to a rate-distortion cost criterion. Specifically, for each candidate motion vector group in the candidate motion vector predictor MVP list, a motion vector of each sub-block of the current block is calculated, and motion compensation is performed to obtain a predictor of each sub-block, thereby obtaining the predictor of the current block. And selecting the candidate motion vector group with the smallest error between the predicted value and the original value as a group of optimal motion vector predicted values, namely a target candidate motion vector group. In addition, the determined target candidate motion vector group is used as the optimal candidate motion vector predictor of a group of control points, and the target candidate motion vector group corresponds to a unique index number in the candidate motion vector predictor MVP list.
Step S713: and the video encoder encodes the index corresponding to the target candidate motion vector and the motion vector difference value MVD into a code stream to be transmitted.
Specifically, the video encoder may further search motion vectors of a group of control points with the lowest cost in a preset search range according to a rate-distortion cost criterion with the target candidate motion vector group as a search starting point; then, a motion vector difference MVD between the motion vectors of the set of control points and the set of target candidate motion vectors is determined, e.g. if the first set of control points comprises a first control point and a second control point, it is necessary to determine the motion vector difference MVD of the motion vector of the first control point and the motion vector predictor of the first control point of the set of control points represented by the set of target candidate motion vectors, and to determine the motion vector difference MVD of the motion vector of the second control point and the motion vector predictor of the second control point of the set of control points represented by the set of target candidate motion vectors.
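The per-control-point MVD computation described above can be sketched as follows (the function and variable names are illustrative only):

```python
def control_point_mvds(searched_cp_mvs, target_candidate_group):
    """Per-control-point MVD: the difference between each control point MV found
    by the motion search and the corresponding MV predictor in the target
    candidate motion vector group.
    """
    return [(mvx - pvx, mvy - pvy)
            for (mvx, mvy), (pvx, pvy) in zip(searched_cp_mvs, target_candidate_group)]

# Example with two control points (4-parameter model):
mvds = control_point_mvds([(3, 1), (5, 2)], [(2, 1), (4, 4)])   # -> [(1, 0), (1, -2)]
```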
Alternatively, in the AMVP mode, steps S714 to S715 may be performed in addition to the above-described steps S711 to S713.
Step S714: and the video encoder obtains the motion vector value of each sub-block in the current coding block by adopting an affine transformation model according to the determined motion vector value of the control point of the current coding block.
Specifically, the new candidate motion vector group obtained based on the target candidate motion vector group and the MVD includes motion vectors of two (upper left control point and upper right control point) or three control points (e.g., upper left control point, upper right control point, and lower left control point). For each sub-block of the current coding block (a sub-block can also be equivalent to a motion compensation unit), the motion information of the pixel points at the preset position in the motion compensation unit can be adopted to represent the motion information of all the pixel points in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is smaller than or equal to the width W of the current coding block, and N is smaller than or equal to the height H of the current coding block, where M, N, W, H is a positive integer, and is usually a power of 2, such as 4,8, 16, 32, 64, 128, etc.), the predetermined location pixels may be the motion compensation unit center point (M/2, N/2), the top left pixel (0,0), the top right pixel (M-1,0), or other locations. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the pixel at the top left vertex of the current coding block are calculated using formula (5), where i is the index of the i-th motion compensation unit in the horizontal direction (from left to right), j is the index of the j-th motion compensation unit in the vertical direction (from top to bottom), and (x(i,j), y(i,j)) are the coordinates of the center point of the (i, j)-th motion compensation unit relative to the pixel at the upper left control point of the current coding block. Then, according to the affine model type (6-parameter or 4-parameter) of the current coding block, (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (6-1) or into the 4-parameter affine model formula (6-2) to obtain the motion information of the center point of each motion compensation unit, which is used as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
x(i,j) = M x i + M/2, y(i,j) = N x j + N/2 (5)
vx(i,j) = (vx1 - vx0)/W x x(i,j) + (vx2 - vx0)/H x y(i,j) + vx0; vy(i,j) = (vy1 - vy0)/W x x(i,j) + (vy2 - vy0)/H x y(i,j) + vy0 (6-1)
vx(i,j) = (vx1 - vx0)/W x x(i,j) - (vy1 - vy0)/W x y(i,j) + vx0; vy(i,j) = (vy1 - vy0)/W x x(i,j) + (vx1 - vx0)/W x y(i,j) + vy0 (6-2)
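As a concrete illustration of the sub-block derivation above, the following non-normative Python sketch evaluates formulas (5), (6-1) and (6-2); the function and variable names (affine_subblock_mvs, cp_mvs, and so on) are assumptions made for this example and are not part of any codec specification.

def affine_subblock_mvs(cp_mvs, W, H, M=4, N=4):
    """cp_mvs: [(vx0, vy0), (vx1, vy1)] for a 4-parameter model, or
    [(vx0, vy0), (vx1, vy1), (vx2, vy2)] for a 6-parameter model.
    W, H: width and height of the current block; M, N: motion compensation unit size.
    Returns a dict mapping (i, j) to the motion vector of sub-block (i, j)."""
    (vx0, vy0), (vx1, vy1) = cp_mvs[0], cp_mvs[1]
    mvs = {}
    for j in range(H // N):              # j-th unit from top to bottom
        for i in range(W // M):          # i-th unit from left to right
            x = M * i + M / 2.0          # formula (5): center of unit (i, j)
            y = N * j + N / 2.0
            if len(cp_mvs) == 3:         # 6-parameter model, formula (6-1)
                vx2, vy2 = cp_mvs[2]
                vx = (vx1 - vx0) / W * x + (vx2 - vx0) / H * y + vx0
                vy = (vy1 - vy0) / W * x + (vy2 - vy0) / H * y + vy0
            else:                        # 4-parameter model, formula (6-2)
                vx = (vx1 - vx0) / W * x - (vy1 - vy0) / W * y + vx0
                vy = (vy1 - vy0) / W * x + (vx1 - vx0) / W * y + vy0
            mvs[(i, j)] = (vx, vy)
    return mvs

For a 16x16 block with 4x4 motion compensation units, this returns one motion vector per unit center; an encoder would round these values to the motion vector precision it actually uses.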
Optionally, when the current coding block is a 6-parameter coding block, and when motion vectors of one or more sub-blocks of the current coding block are obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU in which the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is obtained by calculation according to the 6-parameter affine model constructed by the three control points and the position coordinate (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is obtained by calculation according to the 6-parameter affine model constructed by the three control points and the position coordinate (W, H) at the lower right corner of the current coding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current encoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current encoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current coding block are used (e.g., the subsequent other blocks construct the candidate motion vector predictor MVP list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact values are used instead of the estimated values. Wherein, W is the width of the current coding block, and H is the height of the current coding block.
Optionally, when the current coding block is a 4-parameter coding block, and when motion vectors of one or more sub-blocks of the current coding block are obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU in which the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is obtained by calculation according to the 4-parameter affine model constructed by the two control points and the position coordinate (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is obtained by calculation according to the 4-parameter affine model constructed by the two control points and the position coordinate (W, H) at the lower right corner of the current coding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current encoding block into the 4-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current encoding block into the four-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current coding block are used (e.g., the subsequent other blocks construct the candidate motion vector predictor MVP list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact values are used instead of the estimated values. Wherein, W is the width of the current coding block, and H is the height of the current coding block.
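The special handling of the two bottom corner sub-blocks described above can be sketched as follows; the helper eval_affine and the flag bottom_on_ctu_boundary are invented for this illustration and are not part of the described method.

def eval_affine(cp_mvs, W, H, x, y):
    """Evaluate the 4- or 6-parameter affine model given by the control-point MVs
    of the current block at position (x, y) relative to its top-left corner."""
    (vx0, vy0), (vx1, vy1) = cp_mvs[0], cp_mvs[1]
    if len(cp_mvs) == 3:                 # 6-parameter model
        vx2, vy2 = cp_mvs[2]
        return ((vx1 - vx0) / W * x + (vx2 - vx0) / H * y + vx0,
                (vy1 - vy0) / W * x + (vy2 - vy0) / H * y + vy0)
    return ((vx1 - vx0) / W * x - (vy1 - vy0) / W * y + vx0,   # 4-parameter model
            (vy1 - vy0) / W * x + (vx1 - vx0) / W * y + vy0)

def override_bottom_corner_mvs(mvs, cp_mvs, W, H, M, N, bottom_on_ctu_boundary):
    """When the block's lower boundary coincides with the lower boundary of its CTU,
    store the model evaluated at (0, H) and (W, H) for the bottom-left and
    bottom-right sub-blocks instead of the values at their center points."""
    if bottom_on_ctu_boundary:
        mvs[(0, H // N - 1)] = eval_affine(cp_mvs, W, H, 0, H)
        mvs[(W // M - 1, H // N - 1)] = eval_affine(cp_mvs, W, H, W, H)
    return mvs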
Step S715: the video encoder performs motion compensation according to the motion vector value of each sub-block in the current coding block to obtain the pixel prediction value of each sub-block, for example, finds the corresponding sub-block in the reference frame according to the motion vector value of each sub-block and the reference frame index value, and performs interpolation filtering to obtain the pixel prediction value of each sub-block.
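Step S715 can be illustrated with the following simplified sketch, which copies whole-pixel samples instead of applying the sub-pixel interpolation filtering an actual encoder uses; all names (motion_compensate_subblock, ref_frame, and so on) are assumptions of this example.

import numpy as np

def motion_compensate_subblock(ref_frame, x0, y0, M, N, mv):
    """Fetch an MxN prediction for the sub-block whose top-left sample is at (x0, y0)
    in the current frame, displaced by mv = (vx, vy). The MV is rounded to whole
    pixels here; a real codec applies interpolation filtering for fractional MVs."""
    h, w = ref_frame.shape
    vx, vy = int(round(mv[0])), int(round(mv[1]))
    xs = np.clip(np.arange(x0 + vx, x0 + vx + M), 0, w - 1)
    ys = np.clip(np.arange(y0 + vy, y0 + vy + N), 0, h - 1)
    return ref_frame[np.ix_(ys, xs)]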
Merge mode:
step S721: the video encoder constructs a candidate motion information list.
Specifically, the video encoder constructs a candidate motion information list (also referred to as a candidate motion vector list) by an inter prediction unit (also referred to as an inter prediction module), and the candidate motion information list can be constructed in one of two manners provided below or a combination of the two manners, and is a candidate motion information list of a triplet; the two modes are as follows:
in the first mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion information list.
Firstly, all or part of adjacent blocks of the current encoding block are traversed according to a predetermined sequence, so as to determine adjacent affine encoding blocks therein, wherein the determined number of the adjacent affine encoding blocks may be one or more. For example, the neighboring blocks A, B, C, D, E shown in FIG. 7A may be traversed in sequence to determine neighboring affine encoded blocks in the neighboring block A, B, C, D, E. The inter-frame prediction unit determines a set of candidate motion vector predictors (each set of candidate motion vector predictors is a tuple or a triplet) according to each neighboring affine coding block, and an example of one neighboring affine coding block is described below, where for convenience of description, the one neighboring affine coding block is referred to as a first neighboring affine coding block, which is specifically as follows:
determining a first affine model according to the motion vector of the control point of the first adjacent affine coding block, and then predicting the motion vector of the control point of the current coding block according to the first affine model, which is specifically described as follows:
If the first adjacent affine coding block is located in the coding tree unit (CTU) above the current coding block and the first adjacent affine coding block is a four-parameter affine coding block, the position coordinates and motion vectors of the two lowermost control points of the first adjacent affine coding block are obtained. For example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point of the first adjacent affine coding block, and the position coordinates (x7, y7) and motion vector (vx7, vy7) of the lower right control point, can be obtained.
And forming a first affine model according to the motion vectors of the two lowest control points of the first adjacent affine coding block (the obtained first affine model is a 4-parameter affine model).
Optionally, the motion vector of the control point of the current coding block is predicted according to the first affine model, for example, the position coordinate of the upper left control point, the position coordinate of the upper right control point, and the position coordinate of the lower left control point of the current coding block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current coding block, to form a candidate motion vector triple, and add a candidate motion information list, specifically as shown in formulas (1), (2), and (3).
Optionally, the motion vector of the control point of the current coding block is predicted according to the first affine model, for example, the position coordinates of the upper left control point and the position coordinates of the upper right control point of the current coding block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current coding block, form a candidate motion vector binary group, and add the candidate motion information list, specifically as shown in formulas (1) and (2).
In formulas (1), (2) and (3), (x0, y0) are the coordinates of the upper left control point of the current coding block, (x1, y1) are the coordinates of the upper right control point of the current coding block, and (x2, y2) are the coordinates of the lower left control point of the current coding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point, and (vx2, vy2) is the predicted motion vector of the lower left control point.
If the first adjacent affine Coding block is located in a Coding Tree Unit (CTU) above the current Coding block and the first adjacent affine Coding block is a six-parameter affine Coding block, the candidate motion vector prediction value of the control point of the current block is not generated based on the first adjacent affine Coding block.
If the first neighboring affine coding block is not located above the current coding block CTU, the manner of predicting the motion vector of the control point of the current coding block is not limited herein. However, for ease of understanding, the following is also illustrative of an alternative manner of determination:
The position coordinates and motion vectors of the three control points of the first adjacent affine coding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine coding block.
The position coordinates (x0, y0) of the upper left control point, (x1, y1) of the upper right control point, and (x2, y2) of the lower left control point of the current coding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current coding block, as shown in formulas (4), (5) and (6).
vx2 = vx4 + (vx5 - vx4)/(x5 - x4) x (x2 - x4) + (vx6 - vx4)/(y6 - y4) x (y2 - y4); vy2 = vy4 + (vy5 - vy4)/(x5 - x4) x (x2 - x4) + (vy6 - vy4)/(y6 - y4) x (y2 - y4) (6)
Formulas (4) and (5) have been described above. In formulas (4), (5) and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point, and (vx2, vy2) is the predicted motion vector of the lower left control point.
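The first construction mode (inheriting an affine model from a neighbouring affine coding block) can be pictured with the sketch below. It assumes the neighbour lies in the CTU row above the current block, so its two bottom control points are used, in the spirit of formulas (1) to (3); the function and field names are assumptions of this example.

def inherited_candidate(neigh, cur_cp_positions):
    """neigh: the first adjacent affine coding block, given as a dict with its
    lower-left / lower-right control points, e.g.
    {'p6': (x6, y6), 'mv6': (vx6, vy6), 'p7': (x7, y7), 'mv7': (vx7, vy7)}.
    cur_cp_positions: positions of the current block's control points,
    e.g. [(x0, y0), (x1, y1)] or [(x0, y0), (x1, y1), (x2, y2)].
    Returns the predicted control-point MVs of the current block."""
    (x6, y6), (vx6, vy6) = neigh['p6'], neigh['mv6']
    (x7, y7), (vx7, vy7) = neigh['p7'], neigh['mv7']
    cu_w = x7 - x6                       # width of the neighbouring affine block
    c = (vx7 - vx6) / cu_w               # 4-parameter model coefficients
    d = (vy7 - vy6) / cu_w
    preds = []
    for (x, y) in cur_cp_positions:
        vx = vx6 + c * (x - x6) - d * (y - y6)
        vy = vy6 + d * (x - x6) + c * (y - y6)
        preds.append((vx, vy))
    return tuple(preds)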
And in the second mode, a candidate motion information list is constructed by adopting a motion vector prediction method based on control point combination.
The following two schemes, denoted scheme a and scheme B respectively, are exemplified:
Scheme A: the motion information of 2 control points of the current coding block is combined to construct a 4-parameter affine transformation model. The combinations of the 2 control points are {CP1, CP4}, {CP2, CP3}, {CP1, CP2}, {CP2, CP4}, {CP1, CP3}, and {CP3, CP4}. For example, a 4-parameter Affine transformation model constructed using the control points CP1 and CP2 is denoted as Affine (CP1, CP2).
It should be noted that combinations of different control points may also be converted into control points at the same positions. For example, a 4-parameter affine transformation model obtained from the combination {CP1, CP4}, {CP2, CP3}, {CP2, CP4}, {CP1, CP3} or {CP3, CP4} is converted to be represented by the control points {CP1, CP2} or {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors and coordinate information of the control points into formula (9-1) to obtain the model parameters, and then substitute the coordinate information of {CP1, CP2} into the model to obtain their motion vectors, which are used as a group of candidate motion vector predictors.
vx = a2 x x - a3 x y + a0; vy = a3 x x + a2 x y + a1 (9-1)
In formula (9-1), a0, a1, a2 and a3 are parameters of the model, and (x, y) represents the position coordinates.
More directly, a group of motion vector predictors represented by the upper left control point, the upper right control point and the lower left control point can also be obtained by conversion according to the following formulas and added to the candidate motion information list (one such conversion is sketched after the following list):
{CP1, CP2} is converted into {CP1, CP2, CP3} according to formula (9-2);
{CP1, CP3} is converted into {CP1, CP2, CP3} according to formula (9-3);
{CP2, CP3} is converted into {CP1, CP2, CP3} according to formula (10);
{CP1, CP4} is converted into {CP1, CP2, CP3} according to formula (11);
{CP2, CP4} is converted into {CP1, CP2, CP3} according to formula (12);
{CP3, CP4} is converted into {CP1, CP2, CP3} according to formula (13).
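As an illustration of the conversions listed above, the following sketch converts the combination {CP1, CP2} into the {CP1, CP2, CP3} representation by fitting the 4-parameter model to CP1 and CP2 and evaluating it at the lower-left corner. It assumes CP1, CP2 and CP3 are located at (0, 0), (W, 0) and (0, H) of the current block; the layout and the function name are assumptions of this example rather than a restatement of formula (9-2).

def cp12_to_cp123(cp1_mv, cp2_mv, W, H):
    """cp1_mv, cp2_mv: motion vectors at the top-left (0, 0) and top-right (W, 0)
    corners of the current block. Returns (CP1, CP2, CP3), with CP3 obtained by
    evaluating the fitted 4-parameter model at the bottom-left corner (0, H)."""
    (vx0, vy0), (vx1, vy1) = cp1_mv, cp2_mv
    vx2 = vx0 - (vy1 - vy0) * H / W
    vy2 = vy0 + (vx1 - vx0) * H / W
    return cp1_mv, cp2_mv, (vx2, vy2)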
Scheme B: the motion information of 3 control points of the current coding block is combined to construct a 6-parameter affine transformation model. The combinations of the 3 control points are {CP1, CP2, CP4}, {CP1, CP2, CP3}, {CP2, CP3, CP4}, and {CP1, CP3, CP4}. For example, a 6-parameter Affine transformation model constructed using the control points CP1, CP2 and CP3 is denoted as Affine (CP1, CP2, CP3).
It should be noted that combinations of different control points may also be converted into control points at the same positions. For example, a 6-parameter affine transformation model obtained from the combination {CP1, CP2, CP4}, {CP2, CP3, CP4} or {CP1, CP3, CP4} is converted to be represented by the control points {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors and coordinate information of the control points into formula (14) to obtain the model parameters, and then substitute the coordinate information of {CP1, CP2, CP3} into the model to obtain their motion vectors, which are used as a group of candidate motion vector predictors. One such conversion is sketched after the following list of formulas.
vx = a3 x x + a4 x y + a1; vy = a5 x x + a6 x y + a2 (14)
In formula (14), a1, a2, a3, a4, a5 and a6 are parameters of the model, and (x, y) represents the position coordinates.
More directly, a group of motion vector predictors represented by an upper left control point, an upper right control point, and a lower left control point may be obtained by performing conversion according to the following formula, and added to the candidate motion information list:
{CP1, CP2, CP4} is converted into {CP1, CP2, CP3} according to formula (15);
{CP2, CP3, CP4} is converted into {CP1, CP2, CP3} according to formula (16);
{CP1, CP3, CP4} is converted into {CP1, CP2, CP3} according to formula (17).
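For the three-control-point combinations, the conversion can be pictured with the sketch below, which uses the fact that under an affine model the motion vector at the fourth corner of a rectangle follows from the other three by the parallelogram rule. The corner layout (CP1, CP2, CP3, CP4 at the top-left, top-right, bottom-left and bottom-right corners) and the function name are assumptions of this example, not a restatement of formulas (15) to (17).

def cp124_to_cp123(cp1_mv, cp2_mv, cp4_mv):
    """Derive CP3 (bottom-left) from CP1 (top-left), CP2 (top-right) and CP4
    (bottom-right): under an affine model, mv(CP3) = mv(CP1) + mv(CP4) - mv(CP2)."""
    vx2 = cp1_mv[0] + cp4_mv[0] - cp2_mv[0]
    vy2 = cp1_mv[1] + cp4_mv[1] - cp2_mv[1]
    return cp1_mv, cp2_mv, (vx2, vy2)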
it should be noted that the candidate motion information list may be constructed by using only the candidate motion vector predictor obtained by the first mode prediction, may be constructed by using only the candidate motion vector predictor obtained by the second mode prediction, and may be constructed by using both the candidate motion vector predictor obtained by the first mode prediction and the candidate motion vector predictor obtained by the second mode prediction. In addition, the candidate motion information list can be pruned and sorted according to a preset rule, and then is truncated or filled to a specific number. When each group of candidate motion vector predicted values in the candidate motion information list comprises motion vector predicted values of three control points, the candidate motion information list can be called as a triple group list; when each set of candidate motion vector predictors in the candidate motion information list includes motion vector predictors for two control points, the candidate motion information list can be referred to as a binary set list.
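One possible way to carry out the pruning, truncation and padding described above is sketched below; the padding rule (repeating a supplied pad candidate) and the function name are assumptions of this example, not part of the described method.

def finalize_candidate_list(candidates, max_len, pad_candidate):
    """Remove duplicate candidate groups while keeping their order, then truncate
    or pad the list to max_len."""
    seen, pruned = set(), []
    for cand in candidates:
        key = tuple(cand)
        if key not in seen:
            seen.add(key)
            pruned.append(cand)
    pruned = pruned[:max_len]
    while len(pruned) < max_len:
        pruned.append(pad_candidate)
    return pruned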
Step S722: the video encoder determines a set of target candidate motion vectors from the list of candidate motion information according to a rate-distortion cost criterion. Specifically, for each candidate motion vector group in the candidate motion information list, a motion vector of each sub-block of the current block is obtained through calculation, and motion compensation is performed to obtain a prediction value of each sub-block, so that the prediction value of the current block is obtained. And selecting the candidate motion vector group with the smallest error between the predicted value and the original value as a group of optimal motion vector predicted values, namely a target candidate motion vector group. In addition, the determined target candidate motion vector group is used as an optimal candidate motion vector prediction value of a group of control points, and the target candidate motion vector group corresponds to a unique index number in the candidate motion information list.
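The selection in step S722 can be pictured with the following sketch, which uses a plain sum of absolute differences between the prediction and the original block in place of a full rate-distortion cost; the helper predict_block and all other names are assumptions of this example.

def select_target_candidate(candidates, original_block, predict_block):
    """candidates: list of candidate control-point MV groups.
    predict_block(candidate) -> 2-D list of predicted samples for the current block
    (e.g. produced by deriving per-sub-block MVs and motion compensating each sub-block).
    Returns (best_index, best_candidate) using SAD as a stand-in for the RD cost."""
    best_idx, best_cost = 0, float('inf')
    for idx, cand in enumerate(candidates):
        pred = predict_block(cand)
        cost = sum(abs(int(p) - int(o))
                   for pred_row, orig_row in zip(pred, original_block)
                   for p, o in zip(pred_row, orig_row))
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx, candidates[best_idx]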
Step S723: the video encoder encodes the index corresponding to the target candidate motion vector group, the reference frame index and the prediction direction into the code stream to be transmitted.
Optionally, in the merge mode, steps S724 to S725 may be performed in addition to the above-described steps S721 to S723.
Step S724: the video encoder obtains the motion vector value of each sub-block in the current coding block by using an affine transformation model according to the determined motion vector values of the control points of the current coding block.
Specifically, the target candidate motion vector group includes the motion vectors of two control points (the upper left control point and the upper right control point) or three control points (e.g., the upper left control point, the upper right control point, and the lower left control point). For each sub-block of the current coding block (a sub-block can also be equivalent to a motion compensation unit), the motion information of the pixel at a preset position in the motion compensation unit can be used to represent the motion information of all pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is smaller than or equal to the width W of the current coding block, and N is smaller than or equal to the height H of the current coding block, where M, N, W, H are positive integers, usually powers of 2, such as 4, 8, 16, 32, 64, 128, etc.), the preset-position pixel may be the motion compensation unit center point (M/2, N/2), the top left pixel (0,0), the top right pixel (M-1,0), or a pixel at another location. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the top left vertex pixel of the current coding block are calculated by using formula (5), where i is the i-th motion compensation unit in the horizontal direction (from left to right), j is the j-th motion compensation unit in the vertical direction (from top to bottom), and (x(i,j), y(i,j)) denotes the coordinates of the center point of the (i, j)-th motion compensation unit relative to the upper left control point pixel of the current coding block. Then, according to the affine model type of the current coding block (6-parameter or 4-parameter), (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (6-1), or (x(i,j), y(i,j)) is substituted into the 4-parameter affine model formula (6-2), so as to obtain the motion information of the center point of each motion compensation unit as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
Optionally, when the current coding block is a 6-parameter coding block, and when motion vectors of one or more sub-blocks of the current coding block are obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU in which the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is obtained by calculation according to the 6-parameter affine model constructed by the three control points and the position coordinate (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is obtained by calculation according to the 6-parameter affine model constructed by the three control points and the position coordinate (W, H) at the lower right corner of the current coding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current encoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current encoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current encoding block are used (e.g., the subsequent other blocks construct the candidate motion information list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), an accurate value is used instead of the estimated value. Wherein, W is the width of the current coding block, and H is the height of the current coding block.
Optionally, when the current coding block is a 4-parameter coding block, and when motion vectors of one or more sub-blocks of the current coding block are obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU in which the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is obtained by calculation according to the 4-parameter affine model constructed by the two control points and the position coordinate (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is obtained by calculation according to the 4-parameter affine model constructed by the two control points and the position coordinate (W, H) at the lower right corner of the current coding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current encoding block into the 4-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current encoding block into the four-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current encoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current encoding block are used (e.g., the subsequent other blocks construct the candidate motion information list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), an accurate value is used instead of the estimated value. Wherein, W is the width of the current coding block, and H is the height of the current coding block.
Step S725: the video encoder performs motion compensation according to a motion vector value of each sub-block in the current coding block to obtain a pixel prediction value of each sub-block, and specifically, obtains the pixel prediction value of the current coding block by prediction according to a motion vector of one or more sub-blocks of the current coding block, and a reference frame index and a prediction direction indicated by the index.
It can be understood that, when the coding tree unit CTU in which the first adjacent affine coding block is located is above the position of the current coding block, the information of the lowermost control points of the first adjacent affine coding block has already been read from the memory. Therefore, in the above solution, the candidate motion vectors are constructed from a first group of control points of the first adjacent affine coding block that includes the lower left control point and the lower right control point of the first adjacent affine coding block, instead of always fixing the upper left control point, the upper right control point and the lower left control point of the first adjacent coding block as the first group of control points, as in the prior art. In this way, by using the method for determining the first group of control points in the present application, the information (e.g., position coordinates, motion vectors, etc.) of the first group of control points can directly reuse the information already read from the memory, thereby reducing memory reads and improving the encoding performance.
Fig. 9 is a flow diagram illustrating a process 900 of a decoding method according to an embodiment of the present application. The process 900 may be performed by the video decoder 200, and in particular, by the inter prediction unit 210 and the entropy decoding unit (also referred to as entropy decoder) 203 of the video decoder 200. The process 900 is described as a series of steps or operations; it should be understood that the process 900 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in fig. 9. Assume that a video data stream having a plurality of video frames is being decoded using a video decoder. If a first adjacent affine decoding block is located in the coding tree unit (CTU) above the current decoding block, a set of candidate motion vector predictors is determined based on the lower left control point and the lower right control point of the first adjacent affine decoding block, which corresponds to the flow shown in fig. 9 and is described as follows:
if the first neighboring affine decoding block is located in a Coding Tree Unit (CTU) above the current decoding block, determining a set of candidate motion vector predictors based on a lower left control point and a lower right control point of the first neighboring affine decoding block, as described in detail below:
step S1200: the video decoder determines an inter prediction mode for a currently decoded block.
Specifically, the inter prediction mode may be an Advanced Motion Vector Prediction (AMVP) mode or a merge (merge) mode.
If it is determined that the inter prediction mode of the currently decoded block is the AMVP mode, steps S1211-S1216 are performed.
If the inter prediction mode of the current decoding block is determined to be the merge mode, steps S1221 to S1225 are performed.
AMVP mode:
step S1211: the video decoder constructs a list of candidate motion vector predictors, MVPs.
Specifically, the video decoder constructs a candidate Motion Vector Predictor (MVP) list (also referred to as a candidate motion vector list) by using an inter prediction unit (also referred to as an inter prediction module), and the candidate Motion Vector Predictor (MVP) list can be constructed in one of two manners provided as follows or in a combination of the two manners, and the constructed candidate Motion Vector Predictor (MVP) list can be a triplet candidate Motion Vector Predictor (MVP) list or a binary candidate Motion Vector Predictor (MVP) list; the two modes are as follows:
in the first mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion vector prediction value MVP list.
First, all or part of adjacent blocks of the current decoding block are traversed according to a predetermined sequence, so as to determine adjacent affine decoding blocks, wherein the number of the determined adjacent affine decoding blocks may be one or more. For example, the neighboring blocks A, B, C, D, E shown in FIG. 7A may be traversed in sequence to determine neighboring affine decoded blocks in the neighboring blocks A, B, C, D, E. The inter-frame prediction unit at least determines a set of candidate motion vector predictors from an adjacent affine decoding block (each set of candidate motion vector predictors is a binary set or a ternary set), which is described below by taking an adjacent affine decoding block as an example, and for convenience of description, the adjacent affine decoding block is referred to as a first adjacent affine decoding block, as follows:
and determining a first affine model according to the motion vector of the control point of the first adjacent affine decoding block, and predicting the motion vector of the control point of the current decoding block according to the first affine model. When the parameter models of the current decoded block are different, the manner of predicting the motion vector of the control point of the current decoded block based on the motion vector of the control point of the first adjacent affine decoded block is also different, and therefore the following description is given in cases.
A. The parameter model of the current decoding block is a 4-parameter affine transformation model, and can be derived as follows (as shown in fig. 9A):
If a first adjacent affine decoding block is located in the coding tree unit (CTU) above the current decoding block and the first adjacent affine decoding block is a four-parameter affine decoding block, the motion vectors of the two lowermost control points of the first adjacent affine decoding block are obtained; for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point of the first adjacent affine decoding block, and the position coordinates (x7, y7) and motion vector (vx7, vy7) of the lower right control point, can be obtained (step S1201).
A first affine model is formed from the motion vectors and the coordinate positions of the two lowermost control points of the first adjacent affine decoding block (the first affine model obtained at this time is a 4-parameter affine model) (step S1202).
The motion vector of the control point of the current decoded block is predicted based on the first affine model, and for example, the position coordinates of the upper left control point and the position coordinates of the upper right control point of the current decoded block are respectively substituted into the first affine model, so that the motion vector of the upper left control point and the motion vector of the upper right control point of the current decoded block are predicted, specifically as shown in formulas (1) and (2) (step S1203).
vx0 = vx6 + (vx7 - vx6)/(x7 - x6) x (x0 - x6) - (vy7 - vy6)/(x7 - x6) x (y0 - y6); vy0 = vy6 + (vy7 - vy6)/(x7 - x6) x (x0 - x6) + (vx7 - vx6)/(x7 - x6) x (y0 - y6) (1)
vx1 = vx6 + (vx7 - vx6)/(x7 - x6) x (x1 - x6) - (vy7 - vy6)/(x7 - x6) x (y1 - y6); vy1 = vy6 + (vy7 - vy6)/(x7 - x6) x (x1 - x6) + (vx7 - vx6)/(x7 - x6) x (y1 - y6) (2)
In formulas (1) and (2), (x0, y0) are the coordinates of the upper left control point of the current decoding block, and (x1, y1) are the coordinates of the upper right control point of the current decoding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block.
Optionally, the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine decoding block are both calculated from the position coordinates (x4, y4) of the upper left control point of the first adjacent affine decoding block, where the position coordinates (x6, y6) of the lower left control point of the first adjacent affine decoding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine decoding block are (x4 + cuW, y4 + cuH), cuW being the width of the first adjacent affine decoding block and cuH being the height of the first adjacent affine decoding block. In addition, the motion vector of the lower left control point of the first adjacent affine decoding block is the motion vector of the lower left sub-block of the first adjacent affine decoding block, and the motion vector of the lower right control point of the first adjacent affine decoding block is the motion vector of the lower right sub-block of the first adjacent affine decoding block. It can be seen that the position coordinates of the lower left control point and of the lower right control point of the first adjacent affine decoding block are derived rather than read from the memory, so that by adopting this method, memory reads can be further reduced and the decoding performance improved. Alternatively, the position coordinates of the lower left control point and the lower right control point may be stored in the memory in advance and read from the memory when they are to be used later.
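The derivation just described can be pictured with the following sketch; the field names of the neighbour record (top_left, bottom_left_subblock_mv, and so on) are assumptions of this example.

def bottom_control_points(neigh):
    """neigh: the first adjacent affine decoding block, as a dict holding its stored
    top-left position (x4, y4), its size (cuW, cuH), and the MVs of its bottom-left
    and bottom-right sub-blocks. Returns the two bottom control points as
    ((position, mv), (position, mv)) without any additional memory read of
    control-point coordinates."""
    x4, y4 = neigh['top_left']
    cu_w, cu_h = neigh['width'], neigh['height']
    p6 = (x4, y4 + cu_h)                 # lower-left control point, derived
    p7 = (x4 + cu_w, y4 + cu_h)          # lower-right control point, derived
    mv6 = neigh['bottom_left_subblock_mv']
    mv7 = neigh['bottom_right_subblock_mv']
    return (p6, mv6), (p7, mv7)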
If the first adjacent affine decoding block is located in a Coding Tree Unit (CTU) above the current decoding block and the first adjacent affine decoding block is a six-parameter affine decoding block, the candidate motion vector prediction value of the control point of the current block is not generated based on the first adjacent affine decoding block.
If the first neighboring affine decoding block is not located above the current decoding block CTU, the manner of predicting the motion vector of the control point of the current decoding block is not limited herein. However, for ease of understanding, the following is also illustrative of an alternative manner of determination:
The position coordinates and motion vectors of the three control points of the first adjacent affine decoding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine decoding block.
The position coordinates (x0, y0) of the upper left control point and the position coordinates (x1, y1) of the upper right control point of the current decoding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current decoding block, as shown in formulas (4) and (5).
vx0 = vx4 + (vx5 - vx4)/(x5 - x4) x (x0 - x4) + (vx6 - vx4)/(y6 - y4) x (y0 - y4); vy0 = vy4 + (vy5 - vy4)/(x5 - x4) x (x0 - x4) + (vy6 - vy4)/(y6 - y4) x (y0 - y4) (4)
vx1 = vx4 + (vx5 - vx4)/(x5 - x4) x (x1 - x4) + (vx6 - vx4)/(y6 - y4) x (y1 - y4); vy1 = vy4 + (vy5 - vy4)/(x5 - x4) x (x1 - x4) + (vy6 - vy4)/(y6 - y4) x (y1 - y4) (5)
In formulas (4) and (5), (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block.
B. The parameter model of the current decoding block is a 6-parameter affine transformation model, and the derivation mode can be as follows:
If the first adjacent affine decoding block is located in the coding tree unit (CTU) above the current decoding block and the first adjacent affine decoding block is a four-parameter affine decoding block, the position coordinates and motion vectors of the two lowermost control points of the first adjacent affine decoding block are obtained. For example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point of the first adjacent affine decoding block, and the position coordinates (x7, y7) and motion vector (vx7, vy7) of the lower right control point, can be obtained.
And forming a first affine model according to the motion vectors of the two lowest control points of the first adjacent affine decoding block (the first affine model obtained at the moment is a 4-parameter affine model).
The motion vector of the control point of the current decoded block is predicted according to the first affine model, for example, the position coordinate of the upper left control point, the position coordinate of the upper right control point, and the position coordinate of the lower left control point of the current decoded block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current decoded block, as shown in formulas (1), (2), (3).
vx2 = vx6 + (vx7 - vx6)/(x7 - x6) x (x2 - x6) - (vy7 - vy6)/(x7 - x6) x (y2 - y6); vy2 = vy6 + (vy7 - vy6)/(x7 - x6) x (x2 - x6) + (vx7 - vx6)/(x7 - x6) x (y2 - y6) (3)
Formulas (1) and (2) are described above. In formulas (1), (2) and (3), (x0, y0) are the coordinates of the upper left control point of the current decoding block, (x1, y1) are the coordinates of the upper right control point of the current decoding block, and (x2, y2) are the coordinates of the lower left control point of the current decoding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, (vx1, vy1) is the predicted motion vector of the upper right control point, and (vx2, vy2) is the predicted motion vector of the lower left control point.
If the first adjacent affine decoding block is located in a Coding Tree Unit (CTU) above the current decoding block and the first adjacent affine decoding block is a six-parameter affine decoding block, the candidate motion vector prediction value of the control point of the current block is not generated based on the first adjacent affine decoding block.
If the first neighboring affine decoding block is not located above the current decoding block CTU, the manner of predicting the motion vector of the control point of the current decoding block is not limited herein. However, for ease of understanding, the following is also illustrative of an alternative manner of determination:
The position coordinates and motion vectors of the three control points of the first adjacent affine decoding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine decoding block.
The position coordinates (x0, y0) of the upper left control point, (x1, y1) of the upper right control point, and (x2, y2) of the lower left control point of the current decoding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current decoding block, as shown in formulas (4), (5) and (6).
vx2 = vx4 + (vx5 - vx4)/(x5 - x4) x (x2 - x4) + (vx6 - vx4)/(y6 - y4) x (y2 - y4); vy2 = vy4 + (vy5 - vy4)/(x5 - x4) x (x2 - x4) + (vy6 - vy4)/(y6 - y4) x (y2 - y4) (6)
Formulas (4) and (5) have been described above. In formulas (4), (5) and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, (vx1, vy1) is the predicted motion vector of the upper right control point, and (vx2, vy2) is the predicted motion vector of the lower left control point.
And in the second mode, a candidate motion vector predicted value MVP list is constructed by adopting a motion vector prediction method based on control point combination.
The way of constructing the candidate motion vector predictor MVP list is also different when the parameter model of the current decoded block is different, and the following description is provided.
A. The parameter model of the current decoding block is a 4-parameter affine transformation model, and the derivation mode can be as follows:
The motion vectors of the top left vertex and the top right vertex of the current decoding block are estimated by using the motion information of the decoded blocks adjacent to the current decoding block. As shown in fig. 7B: first, the motion vector of the decoded block A and/or B and/or C adjacent to the top left vertex is used as a candidate motion vector of the top left vertex of the current decoding block, and the motion vector of the decoded block D and/or E adjacent to the top right vertex is used as a candidate motion vector of the top right vertex of the current decoding block. A candidate motion vector of the top left vertex and a candidate motion vector of the top right vertex are combined to obtain a group of candidate motion vector predictors, and the multiple groups obtained by such combinations form the candidate motion vector predictor MVP list.
B. The current decoding block parameter model is a 6-parameter affine transformation model, and the derivation mode can be as follows:
The motion vectors of the top left vertex, the top right vertex and the bottom left vertex of the current decoding block are estimated by using the motion information of the decoded blocks adjacent to the current decoding block. As shown in fig. 7B: first, the motion vector of the decoded block A and/or B and/or C adjacent to the top left vertex is used as a candidate motion vector of the top left vertex of the current decoding block; the motion vector of the decoded block D and/or E adjacent to the top right vertex is used as a candidate motion vector of the top right vertex of the current decoding block; and the motion vector of the decoded block F and/or G adjacent to the bottom left vertex is used as a candidate motion vector of the bottom left vertex of the current decoding block. A candidate motion vector of the top left vertex, a candidate motion vector of the top right vertex and a candidate motion vector of the bottom left vertex are combined to obtain a group of candidate motion vector predictors, and the multiple groups of candidate motion vector predictors obtained by such combinations form the candidate motion vector predictor MVP list.
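The combination-based construction of the second mode can be sketched as follows; the grouping of neighbours (A/B/C for the top-left vertex, D/E for the top-right vertex, F/G for the bottom-left vertex) follows the description above, while the function name and input layout are assumptions of this example.

from itertools import product

def build_constructed_mvp_list(tl_mvs, tr_mvs, bl_mvs=None):
    """tl_mvs: motion vectors of the decoded blocks A/B/C next to the top-left vertex;
    tr_mvs: motion vectors of the blocks D/E next to the top-right vertex;
    bl_mvs: motion vectors of the blocks F/G next to the bottom-left vertex
    (only used for the 6-parameter model). Returns the combined candidate groups."""
    if bl_mvs is None:                   # 4-parameter model: pairs of control points
        return [(tl, tr) for tl, tr in product(tl_mvs, tr_mvs)]
    return [(tl, tr, bl) for tl, tr, bl in product(tl_mvs, tr_mvs, bl_mvs)]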
It should be noted that the candidate motion vector predictor list may be constructed by using only the candidate motion vector predictor obtained by the first mode prediction, may be constructed by using only the candidate motion vector predictor obtained by the second mode prediction, and may be constructed by using both the candidate motion vector predictor obtained by the first mode prediction and the candidate motion vector predictor obtained by the second mode prediction. In addition, the candidate motion vector predictor MVP list can be pruned and sorted according to a pre-configured rule, and then truncated or filled to a specific number. When each group of candidate motion vector predictors in the candidate motion vector predictor MVP list includes motion vector predictors of three control points, the candidate motion vector predictor MVP list can be called a triplet list; when each set of candidate motion vector predictors in the candidate motion vector predictor MVP list includes motion vector predictors for two control points, the candidate motion vector predictor MVP list can be referred to as a binary set list.
Step S1212: the video decoder parses the code stream to obtain an index and a motion vector difference MVD.
In particular, the video decoder may parse the code stream by the entropy decoding unit, the index indicating a target set of candidate motion vectors of the currently decoded block, the target candidate motion vectors representing motion vector predictors of a set of control points of the currently decoded block.
Step S1213: and the video decoder determines a target motion vector group from the candidate Motion Vector Predictor (MVP) list according to the index.
Specifically, the video decoder uses the target candidate motion vector group determined from the candidate motion vector predictor MVP list according to the index as the optimal candidate motion vector predictors (optionally, when the length of the candidate motion vector predictor MVP list is 1, the target candidate motion vector group can be determined directly, without parsing an index from the code stream). The determination of the optimal motion vector predictors is briefly described below.
If the parameter model of the current decoding block is a 4-parameter affine transformation model, selecting the optimal motion vector predicted value of 2 control points from the candidate motion vector predicted value MVP list established above; for example, the video decoder parses the index number from the code stream, and determines the optimal motion vector predictor of 2 control points from the candidate motion vector predictor MVP list of the binary group according to the index number, wherein each group of candidate motion vector predictors in the candidate motion vector predictor MVP list corresponds to a respective index number.
If the parameter model of the current decoding block is a 6-parameter affine transformation model, selecting the optimal motion vector predicted value of 3 control points from the candidate motion vector predicted value MVP list established above; for example, the video decoder parses the index number from the code stream, and determines the optimal motion vector predictor of 3 control points from the candidate motion vector predictor MVP list of the triplet according to the index number, where each group of candidate motion vector predictors in the candidate motion vector predictor MVP list corresponds to its respective index number.
Step S1214: the video decoder determines the motion vectors of the control points of the current decoding block according to the target candidate motion vector group and the motion vector differences MVD parsed from the code stream.
If the parameter model of the current decoding block is a 4-parameter affine transformation model, motion vector differences of 2 control points of the current decoding block are obtained by decoding from a code stream, and new candidate motion vector groups are obtained according to the motion vector differences of the control points and the target candidate motion vector groups indicated by the indexes. For example, the motion vector difference value MVD of the upper left control point and the motion vector difference value MVD of the upper right control point are decoded from the code stream and added to the motion vectors of the upper left control point and the upper right control point in the target candidate motion vector group to obtain a new candidate motion vector group, and therefore, the new candidate motion vector group includes new motion vector values of the upper left control point and the upper right control point of the current decoding block.
Optionally, the motion vector value of the 3rd control point may also be obtained by using a 4-parameter affine transformation model according to the motion vector values of the 2 control points of the current decoding block in the new candidate motion vector group. For example, after the motion vector (vx0, vy0) of the upper left control point and the motion vector (vx1, vy1) of the upper right control point of the current decoding block are obtained, the motion vector (vx2, vy2) of the lower left control point (x2, y2) of the current decoding block is obtained by calculation according to formula (7).
vx2 = -(vy1 - vy0) x H/W + vx0; vy2 = (vx1 - vx0) x H/W + vy0 (7)
Where (x0, y0) are the position coordinates of the upper left control point, (x1, y1) are the position coordinates of the upper right control point, W is the width of the current decoding block, and H is the height of the current decoding block.
If the parameter model of the current decoding block is a 6-parameter affine transformation model, motion vector difference values of 3 control points of the current decoding block are obtained by decoding from a code stream, and new candidate motion vector groups are obtained according to the motion vector difference value MVD of each control point and the target candidate motion vector group indicated by the index. For example, a motion vector difference value MVD of an upper left control point, a motion vector difference value MVD of an upper right control point, and a motion vector difference value of a lower left control point are decoded from the code stream and added to motion vectors of an upper left control point, an upper right control point, and a lower left control point in the target candidate motion vector group to obtain a new candidate motion vector group, and thus the new candidate motion vector group includes motion vector values of an upper left control point, an upper right control point, and a lower left control point of the current decoded block.
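Step S1214 can be illustrated by the following sketch: the parsed MVDs are added to the predictors selected by the index, and for a 4-parameter block the lower-left control point may additionally be derived using the relation of formula (7). All names are assumptions of this example.

def control_point_mvs_from_amvp(target_mvps, mvds, W=None, H=None, derive_third=False):
    """target_mvps: control-point MV predictors selected by the parsed index.
    mvds: motion vector differences parsed from the code stream, one per control point.
    Returns the reconstructed control-point MVs; when derive_third is set for a
    4-parameter block, the lower-left control point is derived as in formula (7)."""
    cps = [(px + dx, py + dy) for (px, py), (dx, dy) in zip(target_mvps, mvds)]
    if derive_third and len(cps) == 2:
        (vx0, vy0), (vx1, vy1) = cps
        cps.append((-(vy1 - vy0) * H / W + vx0, (vx1 - vx0) * H / W + vy0))
    return cps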
Step S1215: the video decoder obtains the motion vector value of each sub-block in the current decoding block by using an affine transformation model according to the determined motion vector values of the control points of the current decoding block.
Specifically, the new candidate motion vector group obtained based on the target candidate motion vector group and the MVD includes motion vectors of two (upper left control point and upper right control point) or three control points (e.g., upper left control point, upper right control point, and lower left control point). For each sub-block of the current decoding block (a sub-block may also be equivalent to a motion compensation unit), the motion information of the pixels at the preset positions in the motion compensation unit may be used to represent the motion information of all the pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is smaller than or equal to the width W of the current decoding block, and N is smaller than or equal to the height H of the current decoding block, where M, N, W, H is a positive integer, and is usually a power of 2, such as 4,8, 16, 32, 64, 128, etc.), the predetermined location pixel point may be a motion compensation unit center point (M/2, N/2), an upper left pixel point (0,0), an upper right pixel point (M-1,0), or a pixel point at another location. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the top left vertex pixel of the current decoding block are calculated by using formula (8-1), where i is the i-th motion compensation unit in the horizontal direction (from left to right), j is the j-th motion compensation unit in the vertical direction (from top to bottom), and (x(i,j), y(i,j)) denotes the coordinates of the center point of the (i, j)-th motion compensation unit relative to the upper left control point pixel of the current decoding block. Then, according to the affine model type of the current decoding block (6-parameter or 4-parameter), (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (8-2), or (x(i,j), y(i,j)) is substituted into the 4-parameter affine model formula (8-3), so as to obtain the motion information of the center point of each motion compensation unit as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
x(i,j) = M x i + M/2, y(i,j) = N x j + N/2 (8-1)
vx(i,j) = (vx1 - vx0)/W x x(i,j) + (vx2 - vx0)/H x y(i,j) + vx0; vy(i,j) = (vy1 - vy0)/W x x(i,j) + (vy2 - vy0)/H x y(i,j) + vy0 (8-2)
vx(i,j) = (vx1 - vx0)/W x x(i,j) - (vy1 - vy0)/W x y(i,j) + vx0; vy(i,j) = (vy1 - vy0)/W x x(i,j) + (vx1 - vx0)/W x y(i,j) + vy0 (8-3)
Optionally, when the current decoding block is a 6-parameter decoding block, and when motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if a lower boundary of the current decoding block coincides with a lower boundary of a CTU in which the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to a 6-parameter affine model constructed by the three control points and a position coordinate (0, H) at the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to a 6-parameter affine model constructed by the three control points and a position coordinate (W, H) at the lower right corner of the current decoding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current decoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current decoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoded block are used (e.g., the subsequent other blocks construct the candidate motion vector predictor MVP list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact values are used instead of the estimated values. Wherein W is the width of the current decoded block and H is the height of the current decoded block.
Optionally, when the current decoding block is a 4-parameter decoding block, and when the motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU in which the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to a 4-parameter affine model constructed by the two control points and the position coordinates (0, H) at the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to a 4-parameter affine model constructed by the two control points and the position coordinates (W, H) at the lower right corner of the current decoding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current decoding block into the 4-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current decoding block into the four-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoded block are used (e.g., the subsequent other blocks construct the candidate motion vector predictor MVP list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact values are used instead of the estimated values. Wherein W is the width of the current decoded block and H is the height of the current decoded block.
Step S1216: the video decoder performs motion compensation according to the motion vector value of each sub-block in the current decoded block to obtain the pixel prediction value of each sub-block, for example, finds a corresponding sub-block in a reference frame according to the motion vector value of each sub-block and a reference frame index value, and performs interpolation filtering to obtain the pixel prediction value of each sub-block.
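A simplified view of this motion compensation step is sketched below. Real decoders apply fractional-sample interpolation filters, whereas the sketch assumes integer motion vectors and simply copies the referenced area; all names are illustrative.

```python
def motion_compensate_subblock(ref_frame, x0, y0, mv, sub_w=4, sub_h=4):
    """Fetch the prediction samples of one sub-block from a reference frame.

    ref_frame: 2-D array of reference samples (e.g., a numpy array).
    (x0, y0):  top-left sample position of the sub-block in the current frame.
    mv:        (vx, vy) motion vector, here assumed to be in full-sample units.
    """
    vx, vy = mv
    xr = int(round(x0 + vx))
    yr = int(round(y0 + vy))
    h, w = ref_frame.shape
    xr = max(0, min(xr, w - sub_w))  # clip the reference position to the picture
    yr = max(0, min(yr, h - sub_h))
    return ref_frame[yr:yr + sub_h, xr:xr + sub_w].copy()
```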
Merge mode:
step S1221: the video decoder constructs a candidate motion information list.
Specifically, the video decoder constructs a candidate motion information list (also referred to as a candidate motion vector list) through an inter prediction unit (also referred to as an inter prediction module), and the candidate motion information list can be constructed in one of two manners provided below or a combination of the two manners, and is a candidate motion information list of a triplet; the two modes are as follows:
in the first mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion information list.
First, all or part of adjacent blocks of the current decoding block are traversed according to a predetermined sequence, so as to determine adjacent affine decoding blocks, wherein the number of the determined adjacent affine decoding blocks may be one or more. For example, the neighboring blocks A, B, C, D, E shown in FIG. 7A may be traversed in sequence to determine neighboring affine decoded blocks in the neighboring blocks A, B, C, D, E. The inter-frame prediction unit determines a set of candidate motion vector predictors from each neighboring affine decoding block (each set of candidate motion vector predictors is a binary set or a ternary set), and the following description will be given by taking a neighboring affine decoding block as an example, and for convenience of description, the neighboring affine decoding block is referred to as a first neighboring affine decoding block, as follows:
determining a first affine model according to the motion vector of the control point of the first adjacent affine decoding block, and predicting the motion vector of the control point of the current decoding block according to the first affine model, which is specifically described as follows:
if the first adjacent affine decoding block is located above the current decoding block CTU and the first adjacent affine decoding block is a four-parameter affine decoding block, the position coordinates and motion vectors of the two control points at the lowest side of the first adjacent affine decoding block are obtained, for example, the position coordinates (x) of the control point at the lower left of the first adjacent affine decoding block can be obtained6,y6) And motion vector (vx)6,vy6) And the position coordinates (x) of the lower right control point7,y7) And motion vector value (vx)7,vy7)。
And forming a first affine model according to the motion vectors of the two lowest control points of the first adjacent affine decoding block (the first affine model obtained at the moment is a 4-parameter affine model).
Optionally, the motion vector of the control point of the current decoded block is predicted according to the first affine model. For example, the position coordinates of the upper left control point, the upper right control point and the lower left control point of the current decoded block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point, the motion vector of the upper right control point and the motion vector of the lower left control point of the current decoded block; these form a candidate motion vector triplet, which is added to the candidate motion information list, specifically as shown in formulas (1), (2) and (3).
Optionally, the motion vector of the control point of the current decoded block is predicted according to the first affine model. For example, the position coordinates of the upper left control point and the upper right control point of the current decoded block may be respectively substituted into the first affine model, so as to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current decoded block; these form a candidate motion vector 2-tuple, which is added to the candidate motion information list, specifically as shown in formulas (1) and (2).
In formulas (1), (2) and (3), (x0, y0) are the coordinates of the upper left control point of the current decoded block, (x1, y1) are the coordinates of the upper right control point of the current decoded block, and (x2, y2) are the coordinates of the lower left control point of the current decoded block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoded block, (vx1, vy1) is the predicted motion vector of the upper right control point, and (vx2, vy2) is the predicted motion vector of the lower left control point.
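For illustration, the sketch below builds the first affine model from the two lowest control points of the first adjacent affine decoding block and evaluates it at the control-point positions of the current block. It assumes the standard 4-parameter rotation-and-scaling model; since the concrete formulas (1)-(3) are not reproduced in this text, the expressions are an assumption and all names are illustrative.

```python
def inherit_from_4param_neighbour(p6, mv6, p7, mv7, current_cps):
    """Derive candidate control-point MV predictors for the current block
    from the bottom-left and bottom-right control points of a neighbouring
    4-parameter affine block.

    p6, p7:      position coordinates (x6, y6), (x7, y7) of the neighbour's
                 bottom-left and bottom-right control points (y6 == y7).
    mv6, mv7:    their motion vectors (vx6, vy6), (vx7, vy7).
    current_cps: control-point coordinates of the current block, e.g.
                 [(x0, y0), (x1, y1)] or [(x0, y0), (x1, y1), (x2, y2)].
    """
    (x6, y6), (x7, _) = p6, p7
    (vx6, vy6), (vx7, vy7) = mv6, mv7
    w = x7 - x6                 # equals the neighbour's width cuW
    a = (vx7 - vx6) / w         # rotation/scaling parameters of the model
    b = (vy7 - vy6) / w
    preds = []
    for (x, y) in current_cps:
        vx = vx6 + a * (x - x6) - b * (y - y6)
        vy = vy6 + b * (x - x6) + a * (y - y6)
        preds.append((vx, vy))
    return preds
```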
If the first adjacent affine decoding block is located in a Coding Tree Unit (CTU) above the current decoding block and the first adjacent affine decoding block is a six-parameter affine decoding block, the candidate motion vector prediction value of the control point of the current block is not generated based on the first adjacent affine decoding block.
If the first neighboring affine decoding block is not located above the CTU of the current decoding block, the manner of predicting the motion vector of the control point of the current decoding block is not limited herein. However, for ease of understanding, an optional manner of determination is illustrated below:
The position coordinates and motion vectors of the three control points of the first adjacent affine decoding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine decoding block.
The position coordinates (x0, y0) of the upper left control point, the position coordinates (x1, y1) of the upper right control point, and the position coordinates (x2, y2) of the lower left control point of the current decoded block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current decoded block, as shown in equations (4), (5) and (6).
Equations (4) and (5) have been described above. In equations (4), (5) and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoded block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoded block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current decoded block.
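Similarly, the 6-parameter case can be sketched as follows, assuming the standard 6-parameter model interpolating the three control points of the neighbouring block; since equations (4)-(6) are not reproduced in this text, the expressions are an assumption and the names are illustrative.

```python
def inherit_from_6param_neighbour(p4, mv4, p5, mv5, p6, mv6, current_cps):
    """Derive control-point MV predictors for the current block from the
    top-left (p4), top-right (p5) and bottom-left (p6) control points of a
    neighbouring affine block and their motion vectors mv4, mv5, mv6.
    """
    (x4, y4), (x5, _), (_, y6) = p4, p5, p6
    (vx4, vy4), (vx5, vy5), (vx6, vy6) = mv4, mv5, mv6
    w = x5 - x4                 # neighbour width
    h = y6 - y4                 # neighbour height
    preds = []
    for (x, y) in current_cps:
        vx = vx4 + (vx5 - vx4) * (x - x4) / w + (vx6 - vx4) * (y - y4) / h
        vy = vy4 + (vy5 - vy4) * (x - x4) / w + (vy6 - vy4) * (y - y4) / h
        preds.append((vx, vy))
    return preds
```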
And in the second mode, a candidate motion information list is constructed by adopting a motion vector prediction method based on control point combination.
The following two schemes, denoted scheme a and scheme B respectively, are exemplified:
scheme A: and combining the motion information of the 2 control points of the current decoding block to construct a 4-parameter affine transformation model. The combination of the 2 control points is { CP1, CP4}, { CP2, CP3}, { CP1, CP2}, { CP2, CP4}, { CP1, CP3}, { CP3, CP4 }. For example, a 4-parameter Affine transformation model constructed using CP1 and CP2 control points is denoted as Affine (CP1, CP 2).
It should be noted that combinations of different control points may also be converted into control points at the same positions. For example, a 4-parameter affine transformation model obtained from the combination {CP1, CP4}, {CP2, CP3}, {CP2, CP4}, {CP1, CP3} or {CP3, CP4} is converted to be represented by the control points {CP1, CP2} or {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors and coordinate information of the control points into formula (9-1) to obtain the model parameters, and then to substitute the coordinate information of {CP1, CP2} into the model to obtain their motion vectors as a group of candidate motion vector predictors.
In formula (9-1), a0, a1, a2 and a3 are parameters of the parametric model, and (x, y) represents the position coordinates.
More directly, a group of motion vector predictors represented by the upper left, upper right and lower left control points can also be obtained by conversion according to the following formulas and added to the candidate motion information list (two of these conversions are illustrated in the code sketch after formula (13)):
{CP1, CP2} is converted into {CP1, CP2, CP3} according to formula (9-2).
{CP1, CP3} is converted into {CP1, CP2, CP3} according to formula (9-3).
{CP2, CP3} is converted into {CP1, CP2, CP3} according to formula (10).
{CP1, CP4} is converted into {CP1, CP2, CP3} according to formula (11).
{CP2, CP4} is converted into {CP1, CP2, CP3} according to formula (12).
{CP3, CP4} is converted into {CP1, CP2, CP3} according to formula (13).
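As referenced above, two of these conversions are sketched below, assuming CP1 at (0, 0), CP2 at (W, 0), CP3 at (0, H) and the standard 4-parameter rotation-and-scaling relationships. Since formulas (9-2) and (9-3) are not reproduced in this text, the expressions are an assumption and all names are illustrative.

```python
def cp3_from_cp1_cp2(mv1, mv2, W, H):
    """Derive the bottom-left control point CP3 from {CP1, CP2} under a
    4-parameter rotation-and-scaling model (conversion of {CP1, CP2} to
    {CP1, CP2, CP3})."""
    (vx0, vy0), (vx1, vy1) = mv1, mv2
    vx2 = vx0 - (vy1 - vy0) * H / W
    vy2 = vy0 + (vx1 - vx0) * H / W
    return vx2, vy2


def cp2_from_cp1_cp3(mv1, mv3, W, H):
    """Derive the top-right control point CP2 from {CP1, CP3} under the same
    4-parameter model (conversion of {CP1, CP3} to {CP1, CP2, CP3})."""
    (vx0, vy0), (vx2, vy2) = mv1, mv3
    vx1 = vx0 + (vy2 - vy0) * W / H
    vy1 = vy0 - (vx2 - vx0) * W / H
    return vx1, vy1
```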
scheme B: and combining the motion information of the 3 control points of the current decoding block to construct a 6-parameter affine transformation model. The combination of the 3 control points is { CP1, CP2, CP4}, { CP1, CP2, CP3}, { CP2, CP3, CP4}, and { CP1, CP3, CP4 }. For example, a 6-parameter Affine transformation model constructed using CP1, CP2, and CP3 control points is denoted as Affine (CP1, CP2, CP 3).
It should be noted that combinations of different control points may also be converted into control points at the same positions. For example, a 6-parameter affine transformation model obtained from the combination {CP1, CP2, CP4}, {CP2, CP3, CP4} or {CP1, CP3, CP4} is converted to be represented by the control points {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors and coordinate information of the control points into formula (14) to obtain the model parameters, and then to substitute the coordinate information of {CP1, CP2, CP3} into the model to obtain their motion vectors as a group of candidate motion vector predictors.
In formula (14), a1, a2, a3, a4, a5 and a6 are parameters of the parametric model, and (x, y) represents the position coordinates.
More directly, a group of motion vector predictors represented by the upper left control point, the upper right control point and the lower left control point may also be obtained by conversion according to the following formulas and added to the candidate motion information list (two of these conversions are illustrated in the code sketch after formula (17)):
{CP1, CP2, CP4} is converted into {CP1, CP2, CP3} according to formula (15).
{CP2, CP3, CP4} is converted into {CP1, CP2, CP3} according to formula (16).
{CP1, CP3, CP4} is converted into {CP1, CP2, CP3} according to formula (17).
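Two of these conversions are sketched below. For any affine motion field the corner motion vectors satisfy v(0, H) = v(0, 0) + v(W, H) - v(W, 0), so CP3 follows from {CP1, CP2, CP4}, and CP1 from {CP2, CP3, CP4}, by simple addition. Since formulas (15) and (16) are not reproduced in this text, this correspondence is an assumption, and all names are illustrative.

```python
def cp123_from_cp124(mv1, mv2, mv4):
    """Re-express {CP1, CP2, CP4} as {CP1, CP2, CP3}: CP3 = CP1 + CP4 - CP2,
    because (0, H) = (0, 0) + (W, H) - (W, 0) for an affine field."""
    mv3 = (mv1[0] + mv4[0] - mv2[0], mv1[1] + mv4[1] - mv2[1])
    return mv1, mv2, mv3


def cp123_from_cp234(mv2, mv3, mv4):
    """Re-express {CP2, CP3, CP4} as {CP1, CP2, CP3}: CP1 = CP2 + CP3 - CP4."""
    mv1 = (mv2[0] + mv3[0] - mv4[0], mv2[1] + mv3[1] - mv4[1])
    return mv1, mv2, mv3
```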
It should be noted that the candidate motion information list may be constructed using only the candidate motion vector predictors obtained by the first mode, using only the candidate motion vector predictors obtained by the second mode, or using both. In addition, the candidate motion information list may be pruned and sorted according to a preset rule, and then truncated or padded to a specific number of entries. When each group of candidate motion vector predictors in the candidate motion information list includes the motion vector predictors of three control points, the candidate motion information list may be called a triplet list; when each group of candidate motion vector predictors includes the motion vector predictors of two control points, the candidate motion information list may be called a 2-tuple list.
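A possible way to assemble such a list is sketched below; the candidate order, duplicate check, padding value and maximum list length are illustrative assumptions rather than choices taken from the patent text.

```python
def build_candidate_list(model_based, constructed, max_len=5):
    """Assemble the candidate motion information list from the two sources.

    model_based: candidate groups from the motion-model-based method (mode 1).
    constructed: candidate groups from the control-point-combination method (mode 2).
    Each candidate group is a tuple of control-point motion vectors.
    """
    cand_list = []
    for cand in list(model_based) + list(constructed):
        if cand not in cand_list:          # prune duplicate candidates
            cand_list.append(cand)
        if len(cand_list) == max_len:      # truncate to the target length
            break
    while len(cand_list) < max_len:        # pad with zero-MV candidates
        cand_list.append(((0, 0), (0, 0), (0, 0)))
    return cand_list
```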
Step S1222: the video decoder parses the code stream to obtain an index.
In particular, the video decoder may parse the code stream by the entropy decoding unit, the index indicating a target set of candidate motion vectors of the currently decoded block, the target candidate motion vectors representing motion vector predictors of a set of control points of the currently decoded block.
Step S1223: the video decoder determines a target candidate motion vector group from the candidate motion information list according to the index. Specifically, the target candidate motion vector group determined by the video decoder from the candidate motion information list according to the index is used as the optimal candidate motion vector predictors (optionally, when the length of the candidate motion information list is 1, the target candidate motion vector group can be determined directly without parsing the code stream to obtain the index), specifically the optimal motion vector predictors of 2 or 3 control points. For example, the video decoder parses the index number from the code stream and then determines the optimal motion vector predictors of 2 or 3 control points from the candidate motion information list according to the index number, where each group of candidate motion vector predictors in the candidate motion information list corresponds to a respective index number.
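A minimal sketch of this selection step is given below; the parsing callback and the length-1 shortcut follow the description above, and all names are illustrative.

```python
def select_target_candidates(cand_list, parse_index):
    """Pick the target candidate motion vector group from the list.

    cand_list:   candidate motion information list (each entry is a group of
                 control-point motion vectors).
    parse_index: callable that parses the index number from the code stream.
    When the list holds a single entry, no index needs to be parsed.
    """
    if len(cand_list) == 1:
        return cand_list[0]
    return cand_list[parse_index()]
```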
Step S1224: the video decoder obtains the motion vector value of each sub-block in the current decoding block by using the corresponding affine transformation model (4-parameter or 6-parameter) according to the determined motion vector values of the control points of the current decoding block.
Specifically, the target candidate motion vector group includes the motion vectors of two control points (the upper left control point and the upper right control point) or three control points (e.g., the upper left, upper right and lower left control points). For each sub-block of the current decoding block (a sub-block may also be regarded as a motion compensation unit), the motion information of a pixel at a preset position in the motion compensation unit may be used to represent the motion information of all pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is smaller than or equal to the width W of the current decoding block, N is smaller than or equal to the height H of the current decoding block, and M, N, W and H are positive integers, usually powers of 2, such as 4, 8, 16, 32, 64, 128, etc.), the preset-position pixel may be the center point (M/2, N/2) of the motion compensation unit, the upper left pixel (0, 0), the upper right pixel (M-1, 0), or a pixel at another position. Fig. 8A illustrates a 4x4 motion compensation unit and Fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the top-left vertex pixel of the current decoding block are calculated using formula (5), where i is the i-th motion compensation unit in the horizontal direction (from left to right), j is the j-th motion compensation unit in the vertical direction (from top to bottom), and (x(i,j), y(i,j)) represents the coordinates of the center point of the (i, j)-th motion compensation unit relative to the pixel of the upper left control point of the current decoding block. Then, according to the affine model type of the current decoding block (6-parameter or 4-parameter), (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (6-1), or (x(i,j), y(i,j)) is substituted into the 4-parameter affine model formula (6-2), to obtain the motion information of the center point of each motion compensation unit as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
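The per-sub-block derivation just described can be sketched as follows, assuming MxN motion compensation units whose centre points are substituted into a 6-parameter or 4-parameter model anchored at the top-left corner of the current block. Formulas (6-1) and (6-2) are not reproduced in this text, so the expressions are an assumption; all names are illustrative.

```python
def subblock_mvs(cp_mvs, W, H, M=4, N=4, six_param=True):
    """Compute one motion vector per MxN sub-block of the current block by
    evaluating the affine model at each sub-block centre.

    cp_mvs: MVs of the top-left, top-right and, for the 6-parameter case,
            bottom-left control points of the current block.
    """
    (vx0, vy0), (vx1, vy1) = cp_mvs[0], cp_mvs[1]
    mvs = {}
    for j in range(H // N):               # vertical index, top to bottom
        for i in range(W // M):           # horizontal index, left to right
            x = M * i + M / 2.0           # centre of the (i, j)-th sub-block
            y = N * j + N / 2.0
            if six_param:
                vx2, vy2 = cp_mvs[2]
                vx = vx0 + (vx1 - vx0) * x / W + (vx2 - vx0) * y / H
                vy = vy0 + (vy1 - vy0) * x / W + (vy2 - vy0) * y / H
            else:                         # 4-parameter model
                vx = vx0 + (vx1 - vx0) * x / W - (vy1 - vy0) * y / W
                vy = vy0 + (vy1 - vy0) * x / W + (vx1 - vx0) * y / W
            mvs[(i, j)] = (vx, vy)
    return mvs
```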
Optionally, when the current decoding block is a 6-parameter decoding block, and when motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if a lower boundary of the current decoding block coincides with a lower boundary of a CTU in which the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to a 6-parameter affine model constructed by the three control points and a position coordinate (0, H) at the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to a 6-parameter affine model constructed by the three control points and a position coordinate (W, H) at the lower right corner of the current decoding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current decoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current decoding block into the 6-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoded block are used (e.g., the subsequent other blocks construct the candidate motion information list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), an accurate value is used instead of the estimated value. Wherein W is the width of the current decoded block and H is the height of the current decoded block.
Optionally, when the current decoding block is a 4-parameter decoding block, and when the motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU in which the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to a 4-parameter affine model constructed by the two control points and the position coordinates (0, H) at the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to a 4-parameter affine model constructed by the two control points and the position coordinates (W, H) at the lower right corner of the current decoding block. For example, substituting the position coordinate (0, H) of the lower left corner of the current decoding block into the 4-parameter affine model may obtain the motion vector of the sub-block at the lower left corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower left corner into the affine model for calculation), and substituting the position coordinate (W, H) of the lower right corner of the current decoding block into the four-parameter affine model may obtain the motion vector of the sub-block at the lower right corner of the current decoding block (instead of substituting the center point coordinate of the sub-block at the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoded block are used (e.g., the subsequent other blocks construct the candidate motion information list of the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), an accurate value is used instead of the estimated value. Wherein W is the width of the current decoded block and H is the height of the current decoded block.
Step S1225: the video decoder performs motion compensation according to the motion vector value of each sub-block in the current decoded block to obtain the pixel prediction value of each sub-block, and specifically, the pixel prediction value of the current decoded block is obtained by prediction according to the motion vector of one or more sub-blocks of the current decoded block, and the reference frame index and the prediction direction indicated by the index.
It can be understood that, when the decoding tree unit CTU in which the first adjacent affine decoding block is located is above the position of the current decoding block, the information of the lowest control points of the first adjacent affine decoding block has already been read from memory. The above solution therefore constructs the candidate motion vectors from a first group of control points of the first adjacent affine decoding block that comprises the lower left control point and the lower right control point of the first adjacent affine decoding block, instead of always fixing the upper left, upper right and lower left control points of the first adjacent decoding block (or the upper left and upper right control points of the first adjacent decoding block) as the first group of control points, as in the prior art. Therefore, with the method for determining the first group of control points in the present application, the information (e.g., position coordinates, motion vectors, etc.) of the first group of control points can directly reuse the information already read from memory, thereby reducing memory reads and improving decoding performance.
Fig. 10 is a schematic block diagram of an implementation of an encoding apparatus or a decoding apparatus (simply referred to as a decoding apparatus 1000) according to an embodiment of the present application. Among other things, the decoding apparatus 1000 may include a processor 1010, a memory 1030, and a bus system 1050. Wherein the processor is connected with the memory through the bus system, the memory is used for storing instructions, and the processor is used for executing the instructions stored by the memory. The memory of the encoding device stores program code, and the processor may call the program code stored in the memory to perform various video encoding or decoding methods described herein, particularly video encoding or decoding methods in various new inter prediction modes, and methods of predicting motion information in various new inter prediction modes. To avoid repetition, it is not described in detail here.
In the embodiment of the present application, the processor 1010 may be a Central Processing Unit (CPU), and the processor 1010 may also be other general-purpose processors, Digital Signal Processors (DSP), Application Specific Integrated Circuits (ASIC), Field Programmable Gate Arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 1030 may include a Read Only Memory (ROM) device or a Random Access Memory (RAM) device. Any other suitable type of memory device may also be used for memory 1030. The memory 1030 may include code and data 1031 that are accessed by the processor 1010 using the bus 1050. The memory 1030 may further include an operating system 1033 and application programs 1035, the application programs 1035 including at least one program that allows the processor 1010 to perform the video encoding or decoding methods described herein, and in particular the encoding methods or decoding methods described herein. For example, the application programs 1035 may include applications 1 to N, which further include video encoding or decoding applications (simply video coding applications) that perform the video encoding or decoding methods described herein.
The bus system 1050 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are labeled in the figure as bus system 1050.
Optionally, the decoding apparatus 1000 may also include one or more output devices, such as a display 1070. In one example, the display 1070 may be a touch-sensitive display that incorporates a display with touch-sensitive elements operable to sense touch input. The display 1070 may be connected to the processor 1010 via the bus 1050.
Fig. 11 is an illustration of an example of a video encoding system 1100 including encoder 20 of fig. 2A and/or decoder 200 of fig. 2B, according to an example embodiment. The system 1100 may implement a combination of the various techniques of this application. In the illustrated embodiment, the video encoding system 1100 may include an imaging device 1101, a video encoder 100, a video decoder 200 (and/or a video encoder implemented by logic circuitry 1107 of a processing unit 1106), an antenna 1102, one or more processors 1103, one or more memories 1104, and/or a display device 1105.
As shown, the imaging device 1101, the antenna 1102, the processing unit 1106, the logic circuit 1107, the video encoder 100, the video decoder 200, the processor 1103, the memory 1104, and/or the display device 1105 can communicate with each other. As discussed, although video encoding system 1100 is depicted with video encoder 100 and video decoder 200, in different examples, video encoding system 1100 may include only video encoder 100 or only video decoder 200.
In some examples, as shown, video encoding system 1100 may include antenna 1102. For example, antenna 1102 may be used to transmit or receive an encoded bitstream of video data. Additionally, in some examples, video encoding system 1100 may include display device 1105. Display device 1105 may be used to present video data. In some instances, the logic circuitry 1107 may be implemented by the processing unit 1106, as shown. The processing unit 1106 may comprise application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, etc. The video coding system 1100 may also include an optional processor 1103, which may similarly comprise application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. In some examples, the logic circuitry 1107 can be implemented in hardware, such as video coding dedicated hardware, and the processor 1103 can be implemented in general-purpose software, an operating system, and the like. In addition, the memory 1104 may be any type of memory, such as a volatile memory (e.g., Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or a non-volatile memory (e.g., flash memory, etc.). In a non-limiting example, the memory 1104 may be implemented by cache memory. In some instances, the logic circuitry 1107 may access the memory 1104 (e.g., to implement an image buffer). In other examples, the logic circuitry 1107 and/or the processing unit 1106 may contain memory (e.g., a cache, etc.) for implementing an image buffer, etc.
In some examples, video encoder 100 implemented with logic circuitry may include an image buffer (e.g., implemented with processing unit 1106 or memory 1104) and a graphics processing unit (e.g., implemented with processing unit 1106). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video encoder 100 implemented by logic circuitry 1107 to implement the various modules discussed with reference to fig. 2A and/or any other encoder system or subsystem described herein. Logic circuitry may be used to perform various operations discussed herein.
Video decoder 200 may be implemented in a similar manner by logic circuitry 1107 to implement the various modules discussed with reference to decoder 200 of fig. 2B and/or any other decoder system or subsystem described herein. In some examples, logic circuit implemented video decoder 200 may include an image buffer (implemented by processing unit 2820 or memory 1104) and a graphics processing unit (e.g., implemented by processing unit 1106). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video decoder 200 implemented by logic circuitry 1107 to implement the various modules discussed with reference to fig. 2B and/or any other decoder system or subsystem described herein.
In some examples, the antenna 1102 of the video encoding system 1100 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data related to the encoded video frame, indicators, index values, mode selection data, etc., discussed herein, such as data related to the encoding partition (e.g., transform coefficients or quantized transform coefficients, (as discussed) optional indicators, and/or data defining the encoding partition). The video encoding system 1100 may also include a video decoder 200 coupled to the antenna 1102 and used to decode the encoded bitstream. The display device 1105 is used to present video frames.
In the steps of the above method flow, the description order of the steps does not represent the execution order of the steps, and the steps may be executed according to the description order or may not be executed according to the description order. For example, the step S1211 may be executed after the step S1212 or before the step S1212; step S1221 may be executed after step S1222 or before step S1222; the remaining steps are not exemplified here.
Those of skill in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps described in the disclosure herein may be implemented as hardware, software, firmware, or any combination thereof. If implemented in software, the functions described in the various illustrative logical blocks, modules, and steps may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium, such as a data storage medium, or any communication medium including a medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described herein. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor," as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functions described by the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this application to emphasize functional aspects of means for performing the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit, in conjunction with suitable software and/or firmware, or provided by an interoperating hardware unit (including one or more processors as described above).
The above description is only an exemplary embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (32)

1. A method of encoding, comprising:
determining a target candidate motion vector group from the candidate motion vector list according to a rate-distortion cost criterion; the target candidate motion vector group represents a motion vector predicted value of a group of control points of the current coding block;
encoding an index corresponding to the target candidate motion vector into a code stream, and transmitting the code stream;
wherein, if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located above the coding tree unit CTU of the current coding block, the candidate motion vector list includes a first set of candidate motion vector predictors, the first set of candidate motion vector predictors being derived based on a lower-left control point and a lower-right control point of the first neighboring affine coding block.
2. The method of claim 1, wherein:
if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predicted values are used for representing motion vector predicted values of an upper left control point and an upper right control point of the current coding block;
and if the current coding block is a six-parameter affine coding block, the first group of candidate motion vector predicted values are used for representing the motion vector predicted values of the upper left control point, the upper right control point and the lower left vertex control point of the current coding block.
3. The method according to claim 1 or 2, characterized in that: the first group of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first neighboring affine coding block, and specifically:
if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point and an upper right control point of the current coding block into a first affine model; alternatively, the first and second electrodes may be,
if the current coding block is a six-parameter affine coding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point, an upper right control point and a lower left vertex control point of the current coding block into a first affine model;
wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine coding block.
4. The method according to any one of claims 1-3, further comprising:
searching the motion vectors of a group of control points with the lowest cost in a preset search range according to a rate distortion cost criterion by taking the target candidate motion vector group as a search starting point;
determining a motion vector difference value MVD between the motion vector of the set of control points and the set of target candidate motion vectors;
the encoding the index corresponding to the target candidate motion vector into a code stream and transmitting the code stream includes:
and coding the MVD and the index corresponding to the target candidate motion vector group into a code stream to be transmitted, and transmitting the code stream.
5. The method according to any one of claims 1 to 3, wherein the encoding the index corresponding to the target candidate motion vector into a code stream and transmitting the code stream comprises:
and encoding indexes corresponding to the target candidate motion vector group, the reference frame index and the prediction direction into a code stream, and transmitting the code stream.
6. The method according to any one of claims 1 to 5, characterized in that the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine coding block are both derived by calculation from the position coordinates (x4, y4) of the upper left control point of the first adjacent affine coding block, wherein the position coordinates (x6, y6) of the lower left control point of the first adjacent affine coding block are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point of the first adjacent affine coding block are (x4 + cuW, y4 + cuH), cuW is the width of the first adjacent affine coding block, and cuH is the height of the first adjacent affine coding block.
7. The method of claim 6 wherein the motion vector of the lower left control point of the first neighboring affine coding block is the motion vector of the lower left sub-block of the first neighboring affine coding block and the motion vector of the lower right control point of the first neighboring affine coding block is the motion vector of the lower right sub-block of the first neighboring affine coding block.
8. A method of decoding, comprising:
analyzing the code stream to obtain an index, wherein the index is used for indicating a target candidate motion vector group of a current decoding block;
determining, from a candidate motion vector list, the target candidate motion vector set representing motion vector predictors for a set of control points of a current decoded block, according to the index, wherein, if a first neighboring affine decoding block is a four-parameter affine decoding block and said first neighboring affine decoding block is located at an upper decoding tree unit CTU of said current decoded block, said candidate motion vector list comprises a first set of candidate motion vector predictors derived based on a lower left control point and a lower right control point of said first neighboring affine decoding block;
deriving motion vectors for one or more sub-blocks of the current decoded block based on the set of target candidate motion vectors;
and predicting to obtain a pixel predicted value of the current decoding block based on the motion vectors of one or more sub-blocks of the current decoding block.
9. The method of claim 8, wherein:
if the current decoded block is a four-parameter affine decoded block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point and an upper right control point of the current decoded block;
and if the current decoding block is a six-parameter affine decoding block, the first group of candidate motion vector predicted values is used for representing motion vector predicted values of an upper left control point, an upper right control point and a lower left vertex control point of the current decoding block.
10. The method according to claim 8 or 9, characterized in that: the first group of candidate motion vector predictors are obtained based on a lower left control point and a lower right control point of the first neighboring affine decoding block, and specifically are:
if the current decoding block is a four-parameter affine decoding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point and an upper right control point of the current decoding block into a first affine model; alternatively, the first and second electrodes may be,
if the current decoding block is a six-parameter affine decoding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point, an upper right control point and a lower left vertex control point of the current decoding block into a first affine model;
wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine decoding block.
11. The method according to any of claims 8-10, wherein said deriving motion vectors for one or more sub-blocks of said current decoded block based on said set of target candidate motion vectors comprises:
deriving motion vectors for one or more sub-blocks of the current decoded block based on a second affine model determined based on the position coordinates of the set of target candidate motion vectors and a set of control points of the current decoded block.
12. The method of any of claims 8-11, wherein said deriving motion vectors for one or more sub-blocks of said current decoded block based on said set of target candidate motion vectors comprises:
obtaining a new candidate motion vector group based on a motion vector difference value MVD obtained by analyzing the code stream and the target candidate motion vector group indicated by the index;
and obtaining the motion vectors of one or more sub-blocks of the current decoding block based on the new candidate motion vector group.
13. The method of any of claims 8-11, wherein predicting a pixel prediction value for the current decoded block based on motion vectors of one or more sub-blocks of the current decoded block comprises:
and predicting to obtain a pixel predicted value of the current decoding block according to the motion vector of one or more sub-blocks of the current decoding block, and the reference frame index and the prediction direction indicated by the index.
14. The method according to any one of claims 8 to 13, characterized in that the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine decoding block are both derived by calculation from the position coordinates (x4, y4) of the upper left control point of the first adjacent affine decoding block, wherein the position coordinates (x6, y6) of the lower left control point of the first adjacent affine decoding block are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point of the first adjacent affine decoding block are (x4 + cuW, y4 + cuH), cuW is the width of the first adjacent affine decoding block, and cuH is the height of the first adjacent affine decoding block.
15. The method of claim 14 wherein said motion vector of said lower left control point of said first adjacent affine decoding block is said motion vector of said lower left sub-block of said first adjacent affine decoding block and said motion vector of said lower right control point of said first adjacent affine decoding block is said motion vector of said lower right sub-block of said first adjacent affine decoding block.
16. The method according to any one of claims 8 to 15, wherein in said deriving the motion vector of one or more sub-blocks of the current decoded block based on the target candidate motion vector set, if the lower boundary of the current decoded block coincides with the lower boundary of the CTU in which the current decoded block is located, the motion vector of the sub-block of the lower left vertex of the current decoded block is calculated from the target candidate motion vector set and the position coordinates (0, H) of the lower left vertex of the current decoded block, the motion vector of the sub-block of the lower right vertex of the current decoded block is calculated from the target candidate motion vector set and the position coordinates (W, H) of the lower right vertex of the current decoded block, where W is equal to the width of the current decoded block, H is equal to the height of the current decoded block, and the coordinates of the upper left vertex of the current decoded block is (0, 0).
17. A video encoder, comprising:
an inter-frame prediction unit for determining a target candidate motion vector group from the candidate motion vector list according to a rate-distortion cost criterion; the target candidate motion vector group represents a motion vector predicted value of a group of control points of the current coding block;
an entropy coding unit for encoding an index corresponding to the target candidate motion vector into a code stream and transmitting the code stream;
wherein, if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located above the coding tree unit CTU of the current coding block, the candidate motion vector list includes a first set of candidate motion vector predictors, the first set of candidate motion vector predictors being derived based on a lower-left control point and a lower-right control point of the first neighboring affine coding block.
18. The video encoder of claim 17, wherein:
if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predicted values are used for representing motion vector predicted values of an upper left control point and an upper right control point of the current coding block;
and if the current coding block is a six-parameter affine coding block, the first group of candidate motion vector predicted values are used for representing the motion vector predicted values of the upper left control point, the upper right control point and the lower left vertex control point of the current coding block.
19. A video encoder as defined in claim 17 or 18 wherein: the first group of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first neighboring affine coding block, and specifically:
if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point and an upper right control point of the current coding block into a first affine model; alternatively, the first and second electrodes may be,
if the current coding block is a six-parameter affine coding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point, an upper right control point and a lower left vertex control point of the current coding block into a first affine model;
wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine coding block.
20. The video encoder according to any of claims 17-19, wherein:
the inter-frame prediction unit is further configured to search motion vectors of a group of control points with the lowest cost in a preset search range according to a rate distortion cost criterion by using the target candidate motion vector group as a search starting point; and determining a motion vector difference value MVD between the motion vectors of the set of control points and the set of target candidate motion vectors;
the entropy coding unit is specifically configured to encode the MVD and an index corresponding to the target candidate motion vector group into a code stream to be transmitted, and transmit the code stream.
21. The video encoder according to any of claims 17 to 19, wherein the entropy coding unit is specifically configured to encode an index corresponding to the target set of candidate motion vectors, the reference frame index and the prediction direction into a bitstream, and to transmit the bitstream.
22. The video encoder according to any one of claims 17 to 21, wherein the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine coding block are both derived by calculation from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine coding block, wherein the position coordinates (x6, y6) of the lower left control point of the first neighboring affine coding block are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point of the first neighboring affine coding block are (x4 + cuW, y4 + cuH), cuW is the width of the first neighboring affine coding block, and cuH is the height of the first neighboring affine coding block.
23. The video encoder of claim 22, wherein the motion vector of the lower left control point of the first neighboring affine coding block is the motion vector of the lower left sub-block of the first neighboring affine coding block and the motion vector of the lower right control point of the first neighboring affine coding block is the motion vector of the lower right sub-block of the first neighboring affine coding block.
24. A video decoder, comprising:
the entropy decoding unit is used for analyzing the code stream to obtain an index, and the index is used for indicating a target candidate motion vector group of a current decoding block;
an inter prediction unit configured to determine, according to the index, the target candidate motion vector group from a candidate motion vector list, the target candidate motion vector group representing a motion vector predictor of a set of control points of a current decoded block, wherein, if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located at an upper decoding tree unit CTU of the current decoded block, the candidate motion vector list includes a first set of candidate motion vector predictors, the first set of candidate motion vector predictors being derived based on a lower left control point and a lower right control point of the first neighboring affine decoding block; and deriving motion vectors for one or more sub-blocks of the current decoded block based on the set of target candidate motion vectors; and predicting to obtain a pixel predicted value of the current decoding block based on the motion vectors of one or more sub-blocks of the current decoding block.
25. The video decoder of claim 24, wherein:
if the current decoded block is a four-parameter affine decoded block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point and an upper right control point of the current decoded block;
and if the current decoding block is a six-parameter affine decoding block, the first group of candidate motion vector predicted values is used for representing motion vector predicted values of an upper left control point, an upper right control point and a lower left vertex control point of the current decoding block.
26. A video decoder as defined in claim 24 or 25, wherein: the first group of candidate motion vector predictors are obtained based on a lower left control point and a lower right control point of the first neighboring affine decoding block, and specifically are:
if the current decoding block is a four-parameter affine decoding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point and an upper right control point of the current decoding block into a first affine model; alternatively, the first and second electrodes may be,
if the current decoding block is a six-parameter affine decoding block, the first group of candidate motion vector predicted values are obtained by substituting position coordinates of an upper left control point, an upper right control point and a lower left vertex control point of the current decoding block into a first affine model;
wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine decoding block.
27. The video decoder according to any of claims 24-26, wherein said inter prediction unit is configured to derive motion vectors of one or more sub-blocks of said currently decoded block based on said set of target candidate motion vectors, in particular: deriving motion vectors for one or more sub-blocks of the current decoded block based on a second affine model determined based on the position coordinates of the set of target candidate motion vectors and a set of control points of the current decoded block.
28. The video decoder according to any of claims 24-27, wherein said inter prediction unit is configured to derive motion vectors for one or more sub-blocks of said currently decoded block based on said set of target candidate motion vectors, in particular: obtaining a new candidate motion vector group based on a motion vector difference value MVD obtained by analyzing the code stream and the target candidate motion vector group indicated by the index; and obtaining motion vectors of one or more sub-blocks of the current decoded block based on the new set of candidate motion vectors.
29. The video decoder according to any of claims 24-27, wherein the inter prediction unit is configured to predict a pixel prediction value of the current decoded block based on the motion vectors of one or more sub-blocks of the current decoded block, and specifically is configured to: and predicting to obtain a pixel predicted value of the current decoding block according to the motion vector of one or more sub-blocks of the current decoding block, and the reference frame index and the prediction direction indicated by the index.
30. The video decoder according to any one of claims 24 to 29, wherein the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine decoding block are both derived by calculation from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine decoding block, wherein the position coordinates (x6, y6) of the lower left control point of the first neighboring affine decoding block are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point of the first neighboring affine decoding block are (x4 + cuW, y4 + cuH), cuW is the width of the first neighboring affine decoding block, and cuH is the height of the first neighboring affine decoding block.
31. The video decoder of claim 30, wherein the motion vector of the lower left control point of said first neighboring affine decoding block is the motion vector of the lower left sub-block of said first neighboring affine decoding block, and wherein the motion vector of the lower right control point of said first neighboring affine decoding block is the motion vector of the lower right sub-block of said first neighboring affine decoding block.
32. The video decoder of any of claims 24-31, wherein, in said deriving motion vectors of one or more sub-blocks of said current decoded block based on said set of target candidate motion vectors, if a lower boundary of said current decoded block coincides with a lower boundary of the CTU in which said current decoded block is located, a motion vector of the sub-block at the lower left vertex of said current decoded block is calculated from said set of target candidate motion vectors and the position coordinates (0, H) of the lower left vertex of said current decoded block, and a motion vector of the sub-block at the lower right vertex of said current decoded block is calculated from said set of target candidate motion vectors and the position coordinates (W, H) of the lower right vertex of said current decoded block, where W is equal to the width of said current decoded block, H is equal to the height of said current decoded block, and the coordinates of the top left vertex of said current decoded block are (0, 0).
CN201810992362.1A 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods Active CN110868602B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810992362.1A CN110868602B (en) 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods
PCT/CN2019/079955 WO2020042604A1 (en) 2018-08-27 2019-03-27 Video encoder, video decoder and corresponding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810992362.1A CN110868602B (en) 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods

Publications (2)

Publication Number Publication Date
CN110868602A true CN110868602A (en) 2020-03-06
CN110868602B CN110868602B (en) 2024-04-12

Family

ID=69643826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810992362.1A Active CN110868602B (en) 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods

Country Status (2)

Country Link
CN (1) CN110868602B (en)
WO (1) WO2020042604A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080279478A1 (en) * 2007-05-09 2008-11-13 Mikhail Tsoupko-Sitnikov Image processing method and image processing apparatus
US9438910B1 (en) * 2014-03-11 2016-09-06 Google Inc. Affine motion prediction in video coding
CN104935938B (en) * 2015-07-15 2018-03-30 哈尔滨工业大学 Inter-frame prediction method in a kind of hybrid video coding standard

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595110A (en) * 2011-01-10 2012-07-18 华为技术有限公司 Video coding method, decoding method and terminal
CN103329537A (en) * 2011-01-21 2013-09-25 Sk电信有限公司 Apparatus and method for generating/recovering motion information based on predictive motion vector index encoding, and apparatus and method for image encoding/decoding using same
US20130083853A1 (en) * 2011-10-04 2013-04-04 Qualcomm Incorporated Motion vector predictor candidate clipping removal for video coding
CN106331722A (en) * 2015-07-03 2017-01-11 华为技术有限公司 Image prediction method and associated device
CN108432250A (en) * 2016-01-07 2018-08-21 联发科技股份有限公司 The method and device of affine inter-prediction for coding and decoding video
WO2017147765A1 (en) * 2016-03-01 2017-09-08 Mediatek Inc. Methods for affine motion compensation
CN108271023A (en) * 2017-01-04 2018-07-10 华为技术有限公司 Image prediction method and relevant device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111327901A (en) * 2020-03-10 2020-06-23 北京达佳互联信息技术有限公司 Video encoding method, video encoding device, storage medium and encoding device
CN113709484A (en) * 2020-03-26 2021-11-26 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device, equipment and machine readable storage medium
CN113709484B (en) * 2020-03-26 2022-12-23 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device, equipment and machine readable storage medium
WO2021238396A1 (en) * 2020-05-29 2021-12-02 Oppo广东移动通信有限公司 Inter-frame prediction methods, encoder, decoder, and computer storage medium
CN113630602A (en) * 2021-06-29 2021-11-09 杭州未名信科科技有限公司 Affine motion estimation method and device for coding unit, storage medium and terminal

Also Published As

Publication number Publication date
WO2020042604A1 (en) 2020-03-05
CN110868602B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
US11252436B2 (en) Video picture inter prediction method and apparatus, and codec
CN111480338B (en) Inter-frame prediction method and device of video data
CN110876282B (en) Motion vector prediction method and related device
CN110868602B (en) Video encoder, video decoder and corresponding methods
CN110868587B (en) Video image prediction method and device
US20230239494A1 (en) Video encoder, video decoder, and corresponding method
CN112740663B (en) Image prediction method, device and corresponding encoder and decoder
CN110876057B (en) Inter-frame prediction method and device
CN110677645B (en) Image prediction method and device
KR102566569B1 (en) Inter prediction method and apparatus, video encoder and video decoder
CN111355958B (en) Video decoding method and device
WO2019237287A1 (en) Inter-frame prediction method for video image, device, and codec
WO2020007187A1 (en) Image block decoding method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant