CN110868602B - Video encoder, video decoder and corresponding methods - Google Patents

Video encoder, video decoder and corresponding methods

Info

Publication number
CN110868602B
Authority
CN
China
Prior art keywords
motion vector
block
control point
current
candidate motion
Prior art date
Legal status
Active
Application number
CN201810992362.1A
Other languages
Chinese (zh)
Other versions
CN110868602A
Inventor
Huanbang Chen (陈焕浜)
Haitao Yang (杨海涛)
Jianle Chen (陈建乐)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810992362.1A
Priority to PCT/CN2019/079955 (WO2020042604A1)
Publication of CN110868602A
Application granted
Publication of CN110868602B
Legal status: Active


Classifications

    • H04N 19/567 — Motion estimation based on rate distortion criteria (predictive coding involving temporal prediction; motion estimation or motion compensation)
    • H04N 19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N 19/52 — Processing of motion vectors by encoding by predictive encoding

Abstract

Embodiments of the invention provide a video encoder, a video decoder, and corresponding methods. One method includes: parsing a bitstream to obtain an index, where the index indicates a target candidate motion vector group of a current decoding block; determining the target candidate motion vector group from a candidate motion vector list according to the index, where, if a first neighboring affine decoding block is a four-parameter affine decoding block and is located in a coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first group of candidate motion vector predictors obtained based on a lower-left control point and a lower-right control point of the first neighboring affine decoding block; and obtaining pixel predictors of the current decoding block based on the target candidate motion vector group. By adopting the embodiments of the invention, memory reads can be reduced, thereby improving coding and decoding performance.

Description

Video encoder, video decoder and corresponding methods
Technical Field
The present disclosure relates to the field of video encoding and decoding technologies, and in particular, to an inter-frame prediction method and apparatus for video images, and a corresponding encoder and decoder.
Background
Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones (so-called "smartphones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and the H.265/High Efficiency Video Coding (HEVC) standard, as well as extensions of such standards. Video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
Video compression techniques perform spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or eliminate the redundancy inherent in video sequences. For block-based video coding, a video slice (i.e., a video frame or a portion of a video frame) may be partitioned into image blocks, which may also be referred to as treeblocks, coding units (CUs), and/or coding nodes. Image blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Image blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture, or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame, and a reference picture may be referred to as a reference frame.
Various video coding standards, including the High Efficiency Video Coding (HEVC) standard, propose predictive coding modes for image blocks, i.e., predicting a block currently to be encoded based on already encoded blocks of video data. In intra prediction mode, the current block is predicted based on one or more previously decoded neighboring blocks in the same picture as the current block; in inter prediction mode, the current block is predicted based on already decoded blocks in other pictures.
Motion vector prediction is a key technique affecting encoding/decoding performance. In existing motion vector prediction processes, there are motion vector prediction methods based on a translational motion model for translational objects in a picture, and there are motion-model-based motion vector prediction methods and control-point-combination-based motion vector prediction methods for non-translational objects. Motion-model-based motion vector prediction requires a large number of memory reads, which slows down encoding/decoding. How to reduce memory reads in the motion vector prediction process is a technical problem being studied by those skilled in the art.
Disclosure of Invention
Embodiments of the present application provide an inter-frame prediction method and apparatus for video images, and a corresponding encoder and decoder, which can reduce memory reads to a certain extent and thereby improve encoding performance.
In a first aspect, embodiments of the present application disclose an encoding method. The method includes: determining a target candidate motion vector group from a candidate motion vector list (for example, an affine transformation candidate motion vector list) according to a rate-distortion cost criterion, where the target candidate motion vector group represents motion vector predictors of a group of control points of a current coding block (specifically, a current affine coding block); if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in a coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first group of candidate motion vector predictors, and the first group of candidate motion vector predictors is obtained based on a lower-left control point and a lower-right control point of the first neighboring affine coding block. Optionally, the candidate motion vector list may be constructed as follows: one or more neighboring affine coding blocks of the current coding block are determined in the order of neighboring block A, neighboring block B, neighboring block C, neighboring block D, and neighboring block E (as in FIG. 7A), the one or more neighboring affine coding blocks including the first neighboring affine coding block; then, if the first neighboring affine coding block is a four-parameter affine coding block, motion vector predictors of a first group of control points of the current coding block are obtained using a first affine model based on the lower-left control point and the lower-right control point of the first neighboring affine coding block, and the motion vector predictors of the first group of control points of the current coding block serve as a first group of candidate motion vectors in the candidate motion vector list. After the target candidate motion vector group is determined in this manner, an index corresponding to the target candidate motion vector group is encoded into the bitstream to be transmitted (alternatively, when the length of the candidate motion vector list is 1, no index is required to indicate the target candidate motion vector group).
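For illustration only, the construction order described above can be sketched as follows. This is a minimal sketch in Python-style pseudocode; the helper names (is_affine_block, is_in_ctu_above, derive_from_bottom_cps, derive_from_other_cps) are assumptions introduced here for readability and are not defined in this application or in any standard.

    # Minimal sketch of the candidate list construction order described above.
    # Helper names are illustrative assumptions only.
    def build_candidate_mv_list(cur_block, neighbours_a_to_e):
        candidates = []
        for nb in neighbours_a_to_e:                  # order: A, B, C, D, E (FIG. 7A)
            if not is_affine_block(nb):
                continue
            if nb.num_affine_params == 4 and is_in_ctu_above(nb, cur_block):
                # First group of candidate predictors: derived from the neighbour's
                # lower-left and lower-right control points.
                candidates.append(derive_from_bottom_cps(nb, cur_block))
            else:
                candidates.append(derive_from_other_cps(nb, cur_block))
        return candidates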
In the above method, the candidate motion vector list may contain only one candidate motion vector group or a plurality of candidate motion vector groups, where each candidate motion vector group may be a motion vector 2-tuple or a motion vector 3-tuple. When there are a plurality of candidate motion vector groups, the first group of candidate motion vector predictors is one of the plurality of candidate motion vector groups, and the other candidate motion vector groups may be generated according to the same principle as the first group of candidate motion vector predictors or according to a different principle. Further, the target candidate motion vector group is the optimal candidate motion vector group selected from the candidate motion vector list according to the rate-distortion cost criterion; if the first group of candidate motion vector predictors is optimal, the selected target candidate motion vector group is the first group of candidate motion vector predictors, and if it is not optimal, the selected target candidate motion vector group is not the first group of candidate motion vector predictors. The first neighboring affine coding block is one of the neighboring blocks of the current coding block; which one is not limited here, and it may be, for example, neighboring block A or neighboring block B in FIG. 7A, or another neighboring block. In addition, the terms "first", "second", "third", etc. appearing elsewhere in the embodiments herein merely indicate that the referenced objects are different; for example, if a first group of control points and a second group of control points appear, the first group of control points and the second group of control points refer to different control points. The terms "first", "second", etc. in the embodiments of the present application also carry no meaning of order.
It can be understood that, when the coding tree unit (CTU) in which the first neighboring affine coding block is located is above the current coding block, the information of the lowermost control points of the first neighboring affine coding block has already been read from memory. The above scheme therefore constructs the candidate motion vectors from a first group of control points of the first neighboring affine coding block, the first group of control points comprising the lower-left control point and the lower-right control point of the first neighboring affine coding block, instead of fixing the upper-left control point, the upper-right control point, and the lower-left control point of the first neighboring coding block as the first group of control points (or fixing the upper-left control point and the upper-right control point of the first neighboring coding block as the first group of control points) as in the prior art. Therefore, with the method for determining the first group of control points in this application, the information of the first group of control points (such as position coordinates and motion vectors) can, with high probability, directly reuse information already read from memory, thereby reducing memory reads and improving coding performance. In addition, since the first neighboring affine coding block is specifically restricted to be a four-parameter affine coding block, only the lower-left control point and the lower-right control point of the first neighboring affine coding block are needed when constructing the candidate motion vectors from its group of control points, and no additional control point is needed, which further ensures that memory reads do not become too high.
In one possible implementation, if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predictors is used to represent the motion vector predictors of the upper-left control point and the upper-right control point of the current coding block. For example, the position coordinates of the upper-left control point and the upper-right control point of the current coding block are substituted into a first affine model to obtain the motion vector predictors of the upper-left control point and the upper-right control point of the current coding block, where the first affine model is determined based on the motion vectors and position coordinates of the lower-left control point and the lower-right control point of the first neighboring affine coding block.
If the current coding block is a six-parameter affine coding block, the first group of candidate motion vector predictors is used to represent the motion vector predictors of the upper-left control point, the upper-right control point, and the lower-left control point of the current coding block. For example, the position coordinates of the upper-left control point, the upper-right control point, and the lower-left control point of the current coding block are substituted into the first affine model to obtain the motion vector predictors of the upper-left control point, the upper-right control point, and the lower-left control point of the current coding block.
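For concreteness, the following is a floating-point sketch of the first affine model: a four-parameter model defined by the neighbouring block's lower-left and lower-right control points and evaluated at the current block's control-point coordinates. The function name, the tuple layout, and the example numbers are assumptions made here for illustration; real codecs use fixed-point arithmetic with shifts rather than floating-point division.

    def four_param_affine_mv(cp_left, cp_right, x, y):
        """Evaluate a four-parameter affine model defined by two horizontally
        aligned control points cp_left = (x6, y6, vx6, vy6) and
        cp_right = (x7, y7, vx7, vy7) at position (x, y)."""
        x6, y6, vx6, vy6 = cp_left
        x7, _, vx7, vy7 = cp_right
        w = x7 - x6                      # equals cuW for the bottom control points
        a = (vx7 - vx6) / w              # zoom/rotation parameters of the model
        b = (vy7 - vy6) / w
        vx = vx6 + a * (x - x6) - b * (y - y6)
        vy = vy6 + b * (x - x6) + a * (y - y6)
        return vx, vy

    # Illustrative numbers only: a neighbour's bottom control points and a current
    # block whose upper-left corner is at (32, 32) with width 16.
    cp_ll_neigh = (16, 32, 1.0, 0.5)     # (x6, y6, vx6, vy6)
    cp_lr_neigh = (48, 32, 2.0, 0.5)     # (x7, y7, vx7, vy7)
    mvp_top_left = four_param_affine_mv(cp_ll_neigh, cp_lr_neigh, 32, 32)
    mvp_top_right = four_param_affine_mv(cp_ll_neigh, cp_lr_neigh, 48, 32)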
In yet another possible implementation, in the advanced motion vector prediction (AMVP) mode, the method further includes: using the target candidate motion vector group as a search start point, searching within a preset search range for the motion vectors of a group of control points with the lowest cost according to the rate-distortion cost criterion; and then determining the motion vector differences (MVDs) between the motion vectors of the group of control points and the target candidate motion vector group. For example, if the group of control points includes a first control point and a second control point, it is necessary to determine the MVD between the motion vector of the first control point and the motion vector predictor of the first control point represented by the target candidate motion vector group, and the MVD between the motion vector of the second control point and the motion vector predictor of the second control point represented by the target candidate motion vector group. In this case, encoding the index corresponding to the target candidate motion vector group into the bitstream to be transmitted may specifically include: encoding the MVDs and the index corresponding to the target candidate motion vector group into the bitstream to be transmitted.
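A small sketch of the MVD computation for the searched control-point motion vectors follows; the tuple layout and the example values are assumptions for illustration only.

    def control_point_mvds(searched_mvs, predicted_mvs):
        """Component-wise MVDs between the searched control-point motion vectors
        and the predictors represented by the target candidate group."""
        return [(mv[0] - mvp[0], mv[1] - mvp[1])
                for mv, mvp in zip(searched_mvs, predicted_mvs)]

    # Two control points -> two MVDs, written to the bitstream together with the index.
    mvds = control_point_mvds([(3.0, 1.0), (4.5, 1.0)], [(2.5, 0.5), (4.0, 0.5)])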
In still another alternative, in the merge mode, encoding the index corresponding to the target candidate motion vector group into the bitstream to be transmitted may specifically include: encoding the index corresponding to the target candidate motion vector group, a reference frame index, and a prediction direction into the bitstream to be transmitted.
In one possible implementation, the position coordinates (x6, y6) of the lower-left control point and the position coordinates (x7, y7) of the lower-right control point of the first neighboring affine coding block are both calculated from the position coordinates (x4, y4) of the upper-left control point of the first neighboring affine coding block, where the position coordinates (x6, y6) of the lower-left control point of the first neighboring affine coding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower-right control point of the first neighboring affine coding block are (x4 + cuW, y4 + cuH), cuW being the width of the first neighboring affine coding block and cuH being its height. In addition, the motion vector of the lower-left control point of the first neighboring affine coding block is the motion vector of its lower-left sub-block, and the motion vector of the lower-right control point of the first neighboring affine coding block is the motion vector of its lower-right sub-block. It can be seen that the position coordinates of the lower-left and lower-right control points of the first neighboring affine coding block are derived rather than read from memory, so this method further reduces memory reads and improves coding performance. Alternatively, the position coordinates of the lower-left and lower-right control points may be stored in memory in advance and read from memory when the control points are to be used.
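The derivation above amounts to a few additions; the following sketch (with made-up example numbers) shows that only the neighbour's upper-left corner and dimensions are needed to obtain the bottom control-point coordinates.

    # Illustrative values only.
    x4, y4, cuW, cuH = 16, 0, 32, 32     # neighbour's upper-left corner and size
    x6, y6 = x4, y4 + cuH                # lower-left control point (derived, not read)
    x7, y7 = x4 + cuW, y4 + cuH          # lower-right control point (derived, not read)
    # The corresponding motion vectors are taken from the neighbour's stored
    # lower-left and lower-right sub-block motion vectors.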
In yet another alternative, after determining the target candidate motion vector group from the candidate motion vector list according to the rate-distortion cost criterion, the method further includes: obtaining motion vectors of one or more sub-blocks of the current coding block based on the target candidate motion vector group; and predicting pixel values of the current coding block based on the motion vectors of the one or more sub-blocks of the current coding block. Optionally, when the motion vectors of the one or more sub-blocks of the current coding block are obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU in which the current coding block is located, the motion vector of the sub-block in the lower-left corner of the current coding block is calculated from the target candidate motion vector group and the position coordinates (0, H) of the lower-left corner of the current coding block, and the motion vector of the sub-block in the lower-right corner of the current coding block is calculated from the target candidate motion vector group and the position coordinates (W, H) of the lower-right corner of the current coding block. For example, an affine model is constructed from the target candidate motion vector group; the motion vector of the sub-block in the lower-left corner of the current coding block is then obtained by substituting the position coordinates (0, H) of the lower-left corner of the current coding block into the affine model (instead of substituting the center-point coordinates of the lower-left sub-block into the affine model), and the motion vector of the sub-block in the lower-right corner of the current coding block is obtained by substituting the position coordinates (W, H) of the lower-right corner of the current coding block into the affine model (instead of substituting the center-point coordinates of the lower-right sub-block into the affine model). In this way, when the motion vector of the lower-left control point and the motion vector of the lower-right control point of the current coding block are later used (for example, when a subsequent block builds its candidate motion vector list based on the motion vectors of the lower-left and lower-right control points of the current block), exact values are used instead of estimated values. Here, W is the width of the current coding block and H is its height.
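The sub-block derivation, including the special handling of the lower-left and lower-right sub-blocks when the block sits on the CTU lower boundary, can be sketched as follows. This is a minimal illustration; `model` stands for any affine evaluator (for example the four-parameter sketch above bound to the target candidate group), and the 4-sample sub-block size is an assumption.

    def affine_subblock_mvs(model, W, H, sub=4, lower_edge_on_ctu=False):
        """One motion vector per sub x sub sub-block of a W x H block.
        Normally the sub-block centre is used; when the block's lower edge lies
        on the CTU lower edge, the exact corners (0, H) and (W, H) are used for
        the bottom-left and bottom-right sub-blocks so that later blocks can
        reuse exact control-point motion vectors."""
        mvs = {}
        for y in range(0, H, sub):
            for x in range(0, W, sub):
                px, py = x + sub / 2, y + sub / 2        # sub-block centre
                if lower_edge_on_ctu and y + sub == H:
                    if x == 0:
                        px, py = 0, H                    # exact lower-left corner
                    elif x + sub == W:
                        px, py = W, H                    # exact lower-right corner
                mvs[(x, y)] = model(px, py)
        return mvs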
In a second aspect, embodiments of the present application provide a video encoder comprising a number of functional units for implementing any of the methods of the first aspect. For example, a video encoder may include:
an inter prediction unit, configured to determine a target candidate motion vector group from a candidate motion vector list according to a rate-distortion cost criterion, where the target candidate motion vector group represents motion vector predictors of a group of control points of the current coding block;
an entropy encoding unit, configured to encode an index corresponding to the target candidate motion vector group into a bitstream and transmit the bitstream;
wherein, if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in a coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first group of candidate motion vector predictors obtained based on a lower-left control point and a lower-right control point of the first neighboring affine coding block.
In a third aspect, embodiments of the present application provide an apparatus for encoding video data, the apparatus comprising:
a memory for storing video data in the form of a code stream;
a video encoder, configured to determine a target candidate motion vector group from a candidate motion vector list according to a rate-distortion cost criterion, where the target candidate motion vector group represents motion vector predictors of a group of control points of a current coding block; and to encode an index corresponding to the target candidate motion vector group into a bitstream and transmit the bitstream; wherein, if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in a coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first group of candidate motion vector predictors obtained based on a lower-left control point and a lower-right control point of the first neighboring affine coding block.
In a fourth aspect, embodiments of the present application provide an encoding apparatus, including: a non-volatile memory and a processor coupled to each other, the processor invoking program code stored in the memory to perform some or all of the steps of any of the methods of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing program code, wherein the program code includes instructions for performing part or all of the steps of any one of the methods of the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
It should be understood that the technical solutions of the second to sixth aspects of the present application are consistent with that of the first aspect, and the beneficial effects obtained by each aspect and its corresponding possible embodiments are similar, so the details are not repeated.
In a seventh aspect, embodiments of the present application disclose a decoding method. The method includes: parsing a bitstream to obtain an index, where the index indicates a target candidate motion vector group of a current decoding block (specifically, a current affine decoding block); then determining the target candidate motion vector group from a candidate motion vector list (for example, an affine transformation candidate motion vector list) according to the index (alternatively, when the length of the candidate motion vector list is 1, the target candidate motion vector group can be determined directly without parsing the bitstream for an index), where the target candidate motion vector group represents motion vector predictors of a group of control points of the current decoding block; if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in a coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first group of candidate motion vector predictors, where the first group of candidate motion vector predictors is obtained based on a lower-left control point and a lower-right control point of the first neighboring affine decoding block. Optionally, the candidate motion vector list may be constructed as follows: one or more neighboring affine decoding blocks of the current decoding block are determined in the order of neighboring block A, neighboring block B, neighboring block C, neighboring block D, and neighboring block E (as in FIG. 7A), the one or more neighboring affine decoding blocks including the first neighboring affine decoding block; then, if the first neighboring affine decoding block is a four-parameter affine decoding block, motion vector predictors of a first group of control points of the current decoding block are obtained using a first affine model based on the lower-left control point and the lower-right control point of the first neighboring affine decoding block, and the motion vector predictors of the first group of control points of the current decoding block serve as a first group of candidate motion vectors in the candidate motion vector list. After the target candidate motion vector group is determined in this manner, motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, and pixel predictors of the current decoding block are predicted based on the motion vectors of the one or more sub-blocks of the current decoding block.
In the above method, the candidate motion vector list may contain only one candidate motion vector group or a plurality of candidate motion vector groups, where each candidate motion vector group may be a motion vector 2-tuple or a motion vector 3-tuple. When there are a plurality of candidate motion vector groups, the first group of candidate motion vector predictors is one of the plurality of candidate motion vector groups, and the other candidate motion vector groups may be generated according to the same principle as the first group of candidate motion vector predictors or according to a different principle. Further, the target candidate motion vector group is the optimal candidate motion vector group selected from the candidate motion vector list according to a rate-distortion cost criterion; if the first group of candidate motion vector predictors is optimal, the selected target candidate motion vector group is the first group of candidate motion vector predictors, and if it is not optimal, the selected target candidate motion vector group is not the first group of candidate motion vector predictors. The first neighboring affine decoding block is a four-parameter affine decoding block among the neighboring blocks of the current decoding block; which one is not limited here, and it may be, for example, neighboring block A or neighboring block B in FIG. 7A, or another neighboring block. In addition, the terms "first", "second", "third", etc. appearing elsewhere in the embodiments herein merely indicate that the referenced objects are different; for example, if a first group of control points and a second group of control points appear, the first group of control points and the second group of control points refer to different control points. The terms "first", "second", etc. in the embodiments of the present application also carry no meaning of order.
It can be understood that, when the coding tree unit (CTU) in which the first neighboring affine decoding block is located is above the current decoding block, the information of the lowermost control points of the first neighboring affine decoding block has already been read from memory. The above scheme therefore constructs the candidate motion vectors from a first group of control points of the first neighboring affine decoding block, the first group of control points comprising the lower-left control point and the lower-right control point of the first neighboring affine decoding block, instead of fixing the upper-left control point, the upper-right control point, and the lower-left control point of the first neighboring decoding block as the first group of control points (or fixing the upper-left control point and the upper-right control point of the first neighboring decoding block as the first group of control points) as in the prior art. Therefore, with the method for determining the first group of control points in this application, the information of the first group of control points (such as position coordinates and motion vectors) can, with high probability, directly reuse information already read from memory, thereby reducing memory reads and improving decoding performance. In addition, since the first neighboring affine decoding block is specifically restricted to be a four-parameter affine decoding block, only the lower-left control point and the lower-right control point of the first neighboring affine decoding block are needed when constructing the candidate motion vectors from its group of control points, and no additional control point is needed, which further ensures that memory reads do not become too high.
In one possible implementation, if the current decoding block is a four-parameter affine decoding block, the first group of candidate motion vector predictors is used to represent the motion vector predictors of the upper-left control point and the upper-right control point of the current decoding block. For example, the position coordinates of the upper-left control point and the upper-right control point of the current decoding block are substituted into a first affine model to obtain the motion vector predictors of the upper-left control point and the upper-right control point of the current decoding block, where the first affine model is determined based on the motion vectors and position coordinates of the lower-left control point and the lower-right control point of the first neighboring affine decoding block.
If the current decoding block is a six-parameter affine decoding block, the first group of candidate motion vector predictors is used to represent the motion vector predictors of the upper-left control point, the upper-right control point, and the lower-left control point of the current decoding block. For example, the position coordinates of the upper-left control point, the upper-right control point, and the lower-left control point of the current decoding block are substituted into the first affine model to obtain the motion vector predictors of the upper-left control point, the upper-right control point, and the lower-left control point of the current decoding block.
In yet another alternative, obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the target candidate motion vector group specifically includes: obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on a second affine model (for example, substituting the center-point coordinates of the one or more sub-blocks into the second affine model to obtain their motion vectors), where the second affine model is determined based on the target candidate motion vector group and the position coordinates of a group of control points of the current decoding block.
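Tying this to the sketches above: the second affine model is simply an affine model rebuilt from the target candidate group interpreted as the current block's own control-point motion vectors at (0, 0) and (W, 0), and the sub-block motion vectors follow by evaluating it at each sub-block centre. All names and numbers below are illustrative assumptions and reuse the helper sketches given earlier.

    W, H = 16, 16
    mvp_tl, mvp_tr = (1.0, 0.5), (1.5, 0.5)      # illustrative target candidate group
    second_model = lambda x, y: four_param_affine_mv(
        (0, 0) + mvp_tl, (W, 0) + mvp_tr, x, y)
    sub_mvs = affine_subblock_mvs(second_model, W, H)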
In an alternative solution, in the advanced motion vector prediction (AMVP) mode, obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the target candidate motion vector group may specifically include: obtaining a new candidate motion vector group based on the motion vector differences (MVDs) obtained by parsing the bitstream and the target candidate motion vector group indicated by the index; and then obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the new candidate motion vector group, for example, determining a second affine model based on the new candidate motion vector group and the position coordinates of a group of control points of the current decoding block, and obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the second affine model.
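In AMVP mode the decoder therefore first adds the parsed MVDs to the target candidate group before building the second affine model; a one-function sketch (tuple layout assumed, as before):

    def apply_mvds(target_group, parsed_mvds):
        """Add the parsed MVDs to the target candidate group to obtain the
        control-point motion vectors used to build the second affine model."""
        return [(mvp[0] + mvd[0], mvp[1] + mvd[1])
                for mvp, mvd in zip(target_group, parsed_mvds)]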
In still another possible implementation, in the merge mode, predicting the pixel predictors of the current decoding block based on the motion vectors of the one or more sub-blocks of the current decoding block may specifically include: predicting the pixel predictors of the current decoding block according to the motion vectors of the one or more sub-blocks of the current decoding block, the reference frame index indicated by the index, and the prediction direction.
In still another alternative, the position coordinates (x6, y6) of the lower-left control point and the position coordinates (x7, y7) of the lower-right control point of the first neighboring affine decoding block are both calculated from the position coordinates (x4, y4) of the upper-left control point of the first neighboring affine decoding block, where the position coordinates (x6, y6) of the lower-left control point of the first neighboring affine decoding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower-right control point of the first neighboring affine decoding block are (x4 + cuW, y4 + cuH), cuW being the width of the first neighboring affine decoding block and cuH being its height. In addition, the motion vector of the lower-left control point of the first neighboring affine decoding block is the motion vector of its lower-left sub-block, and the motion vector of the lower-right control point of the first neighboring affine decoding block is the motion vector of its lower-right sub-block. It can be seen that the position coordinates of the lower-left and lower-right control points of the first neighboring affine decoding block are derived rather than read from memory, so this method further reduces memory reads and improves decoding performance. Alternatively, the position coordinates of the lower-left and lower-right control points may be stored in memory in advance and read from memory when the control points are to be used.
In still another alternative, when the motion vectors of the one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU in which the current decoding block is located, the motion vector of the sub-block in the lower-left corner of the current decoding block is calculated from the target candidate motion vector group and the position coordinates (0, H) of the lower-left corner of the current decoding block, and the motion vector of the sub-block in the lower-right corner of the current decoding block is calculated from the target candidate motion vector group and the position coordinates (W, H) of the lower-right corner of the current decoding block. For example, an affine model is constructed from the target candidate motion vector group; the motion vector of the sub-block in the lower-left corner of the current decoding block is then obtained by substituting the position coordinates (0, H) of the lower-left corner of the current decoding block into the affine model (instead of substituting the center-point coordinates of the lower-left sub-block into the affine model), and the motion vector of the sub-block in the lower-right corner of the current decoding block is obtained by substituting the position coordinates (W, H) of the lower-right corner of the current decoding block into the affine model (instead of substituting the center-point coordinates of the lower-right sub-block into the affine model). In this way, when the motion vector of the lower-left control point and the motion vector of the lower-right control point of the current decoding block are later used (for example, when a subsequent block builds its candidate motion vector list based on the motion vectors of the lower-left and lower-right control points of the current block), exact values are used instead of estimated values. Here, W is the width of the current decoding block and H is its height.
In an eighth aspect, embodiments of the present application provide a video decoder, including:
an entropy decoding unit, configured to parse a bitstream to obtain an index, where the index indicates a target candidate motion vector group of a current decoding block;
an inter prediction unit, configured to determine, according to the index, the target candidate motion vector group from a candidate motion vector list, where the target candidate motion vector group represents motion vector predictors of a group of control points of the current decoding block, and if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in a coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first group of candidate motion vector predictors, where the first group of candidate motion vector predictors is obtained based on a lower-left control point and a lower-right control point of the first neighboring affine decoding block; to derive motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group; and to predict pixel predictors of the current decoding block based on the motion vectors of the one or more sub-blocks of the current decoding block.
In a ninth aspect, embodiments of the present application provide an apparatus for decoding video data, the apparatus comprising:
a memory for storing video data in the form of a code stream;
a video decoder, configured to parse the bitstream to obtain an index, where the index indicates a target candidate motion vector group of a current decoding block; determine the target candidate motion vector group from a candidate motion vector list according to the index, where the target candidate motion vector group represents motion vector predictors of a group of control points of the current decoding block, and if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in a coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first group of candidate motion vector predictors obtained based on a lower-left control point and a lower-right control point of the first neighboring affine decoding block; derive motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group; and predict pixel predictors of the current decoding block based on the motion vectors of the one or more sub-blocks of the current decoding block.
In a tenth aspect, embodiments of the present application provide a decoding apparatus, including: a non-volatile memory and a processor coupled to each other, the processor invoking program code stored in the memory to perform some or all of the steps of any of the methods of the seventh aspect.
In an eleventh aspect, embodiments of the present application provide a computer-readable storage medium storing program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the seventh aspect.
In a twelfth aspect, embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the seventh aspect.
It should be understood that the technical solutions of the eighth to twelfth aspects of the present application are consistent with that of the seventh aspect, and the beneficial effects obtained by each aspect and its corresponding possible embodiments are similar, so the details are not repeated.
Drawings
In order to describe the technical solutions in the embodiments of the present application or in the background more clearly, the following briefly describes the accompanying drawings used in the embodiments or the background of the present application.
FIG. 1 is a schematic block diagram of a video encoding and decoding system in an embodiment of the present application;
FIG. 2A is a schematic block diagram of a video encoder in an embodiment of the present application;
FIG. 2B is a schematic block diagram of a video decoder in an embodiment of the present application;
FIG. 3 is a flow chart of a method for inter prediction of an encoded video image in an embodiment of the present application;
FIG. 4 is a flow chart of a method for inter prediction of a decoded video image in an embodiment of the present application;
FIG. 5 is a schematic diagram of motion information of a current image block and a reference block in an embodiment of the present application;
FIG. 6 is a schematic flow chart of an encoding method according to an embodiment of the present application;
FIG. 7A is a schematic view of a scenario of adjacent blocks provided in an embodiment of the present application;
FIG. 7B is a schematic view of a scenario of a neighboring block according to an embodiment of the present application;
FIG. 8A is a schematic structural diagram of a motion compensation unit according to an embodiment of the present application;
FIG. 8B is a schematic structural diagram of still another motion compensation unit according to an embodiment of the present application;
FIG. 9 is a schematic flow chart of a decoding method according to an embodiment of the present application;
FIG. 9A is a schematic flow chart of constructing a candidate motion vector list according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an encoding apparatus or decoding apparatus according to an embodiment of the present application;
FIG. 11 is a diagram of a video coding system 1100 including the encoder 100 of FIG. 2A and/or the decoder 200 of FIG. 2B, according to an example embodiment.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings in the embodiments of the present application.
Non-translational motion model prediction means that the encoder side and the decoder side (for example, the video encoder and the video decoder) use the same motion model to derive the motion information (such as a motion vector) of each sub-motion-compensation unit (also called a sub-block) in the current coding/decoding block, and motion compensation is performed according to the motion information of the sub-motion-compensation units to obtain a prediction block, thereby improving prediction efficiency. Motion-model-based motion vector prediction is involved in deriving the motion information of the motion compensation units (sub-blocks) in the current coding/decoding block: at present, an affine model is usually derived from the position coordinates and motion vectors of the upper-left control point, the upper-right control point, and the lower-left control point of a neighboring affine decoding block of the current coding/decoding block; motion vector predictors of a group of control points of the current coding/decoding block are then derived from that affine model and used as a group of candidate motion vector predictors in a candidate motion vector list. However, the position coordinates and motion vectors of the upper-left, upper-right, and lower-left control points of the neighboring affine decoding block used in this motion vector prediction process need to be read from memory in real time, which increases the pressure of memory reads. To better understand the idea of the embodiments of the present application, the application scenario of the embodiments of the present application is first described below.
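For comparison, the conventional derivation mentioned above can be sketched as a six-parameter affine model built from three control points of the neighbouring affine block; note that it needs the coordinates and motion vectors of all three control points, which is exactly the memory-read pressure the embodiments aim to reduce. The function name and the floating-point arithmetic are illustrative assumptions.

    def six_param_affine_mv(cp_tl, cp_tr, cp_bl, x, y):
        """Six-parameter affine model built from a neighbouring affine block's
        upper-left, upper-right and lower-left control points, each given as
        (x_i, y_i, vx_i, vy_i), evaluated at position (x, y)."""
        x0, y0, vx0, vy0 = cp_tl
        x1, _, vx1, vy1 = cp_tr
        _, y2, vx2, vy2 = cp_bl
        w, h = x1 - x0, y2 - y0          # neighbour width and height
        vx = vx0 + (vx1 - vx0) / w * (x - x0) + (vx2 - vx0) / h * (y - y0)
        vy = vy0 + (vy1 - vy0) / w * (x - x0) + (vy2 - vy0) / h * (y - y0)
        return vx, vy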
Encoding a video stream, or a portion thereof, such as a video frame or image block, may use temporal and spatial similarities in the video stream to improve encoding performance. For example, a current image block of a video stream may be encoded based on a previously encoded block by predicting motion information for the current image block based on the previously encoded block in the video stream and identifying a difference (also referred to as a residual) between the predicted block and the current image block (i.e., the original block). In this way, only the residual and some parameters used to generate the current image block are included in the digital video output bitstream, rather than the entirety of the current image block. This technique may be referred to as inter prediction.
Motion vectors are an important parameter in the inter prediction process that represents the spatial displacement of a previously encoded block relative to the current encoded block. Motion vectors may be obtained using methods of motion estimation, such as motion search. Early inter prediction techniques included bits representing motion vectors in the encoded bitstream to allow a decoder to reproduce the predicted block to obtain a reconstructed block. In order to further improve the coding efficiency, it is proposed to differentially encode the motion vector using a reference motion vector, i.e. instead of encoding the motion vector as a whole, only the difference between the motion vector and the reference motion vector is encoded. In some cases, the reference motion vector may be selected from previously used motion vectors in the video stream, and selecting the previously used motion vector to encode the current motion vector may further reduce the number of bits included in the encoded video bitstream.
Fig. 1 is a block diagram of a video coding system 1 of one example described in an embodiment of the present application. As used herein, the term "video coder" generally refers to both a video encoder and a video decoder. In this application, the term "video coding" or "coding" may refer generally to video encoding or video decoding. The video encoder 100 and the video decoder 200 of the video coding system 1 are configured to predict motion information, such as motion vectors, of a currently coded image block or its sub-blocks according to various method examples described in any of a plurality of new inter prediction modes proposed in the present application, such that the predicted motion vectors are maximally close to motion vectors obtained using a motion estimation method, and thus no motion vector difference value needs to be transmitted during coding, thereby further improving coding and decoding performance.
As shown in fig. 1, video coding system 1 includes a source device 10 and a destination device 20. Source device 10 generates encoded video data. Thus, source device 10 may be referred to as a video encoding device. Destination device 20 may decode the encoded video data generated by source device 10. Thus, destination device 20 may be referred to as a video decoding device. Various implementations of source device 10, destination device 20, or both may include one or more processors and memory coupled to the one or more processors. The memory may include, but is not limited to RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein.
Source device 10 and destination device 20 may comprise a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, vehicle-mounted computers, or the like.
Destination device 20 may receive encoded video data from source device 10 via link 30. Link 30 may comprise one or more media or devices capable of moving encoded video data from source device 10 to destination device 20. In one example, link 30 may include one or more communication media that enable source device 10 to transmit encoded video data directly to destination device 20 in real-time. In this example, source device 10 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination device 20. The one or more communication media may include wireless and/or wired communication media such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include routers, switches, base stations, or other apparatus that facilitate communication from source device 10 to destination device 20.
In another example, encoded data may be output from output interface 140 to storage device 40. Similarly, encoded data may be accessed from storage device 40 through input interface 240. Storage device 40 may include any of a variety of distributed or locally accessed data storage media such as hard drives, blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.
In another example, storage device 40 may correspond to a file server or another intermediate storage device that may hold the encoded video generated by source device 10. Destination device 20 may access stored video data from storage device 40 via streaming or download. The file server may be any type of server capable of storing encoded video data and transmitting the encoded video data to destination device 20. Example file servers include web servers (e.g., for websites), FTP servers, network-attached storage (NAS) devices, and local disk drives. Destination device 20 may access the encoded video data over any standard data connection, including an internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from storage device 40 may be a streaming transmission, a download transmission, or a combination of both.
The motion vector prediction techniques of the present application may be applied to video encoding and decoding to support a variety of multimedia applications, such as over-the-air television broadcasting, cable television transmission, satellite television transmission, streaming video transmission (e.g., via the internet), encoding of video data for storage on a data storage medium, decoding of video data stored on a data storage medium, or other applications. In some examples, video coding system 1 may be used to support unidirectional or bidirectional video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
The video coding system 1 illustrated in fig. 1 is merely an example, and the techniques of this disclosure may be applicable to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between an encoding device and a decoding device. In other examples, the data is retrieved from local memory, streamed over a network, and so forth. The video encoding device may encode and store data to the memory and/or the video decoding device may retrieve and decode data from the memory. In many examples, encoding and decoding are performed by devices that do not communicate with each other, but instead only encode data to memory and/or retrieve data from memory and decode data.
In the example of fig. 1, source device 10 includes a video source 120, a video encoder 100, and an output interface 140. In some examples, output interface 140 may include a modulator/demodulator (modem) and/or a transmitter. Video source 120 may include a video capture device (e.g., a video camera), a video archive containing previously captured video data, a video feed interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources of video data.
Video encoder 100 may encode video data from video source 120. In some examples, source device 10 transmits encoded video data directly to destination device 20 via output interface 140. In other examples, the encoded video data may also be stored onto storage device 40 for later access by destination device 20 for decoding and/or playback.
In the example of fig. 1, destination device 20 includes an input interface 240, a video decoder 200, and a display device 220. In some examples, input interface 240 includes a receiver and/or a modem. Input interface 240 may receive encoded video data via link 30 and/or from storage device 40. The display device 220 may be integrated with the destination device 20 or may be external to the destination device 20. In general, the display device 220 displays decoded video data. The display device 220 may include a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or other types of display devices.
Although not shown in fig. 1, in some aspects, video encoder 100 and video decoder 200 may each be integrated with an audio encoder and decoder, and may include an appropriate multiplexer-demultiplexer unit or other hardware and software to handle encoding of both audio and video in a common data stream or separate data streams. In some examples, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the User Datagram Protocol (UDP), if applicable.
Video encoder 100 and video decoder 200 may each be implemented as any of a variety of circuits, such as: one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof. If the present application is implemented in part in software, the device may store instructions for the software in a suitable non-volatile computer-readable storage medium and the instructions may be executed in hardware using one or more processors to implement the techniques of this application. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered one or more processors. Each of video encoder 100 and video decoder 200 may be included in one or more encoders or decoders, any of which may be integrated as part of a combined encoder/decoder (codec) in the respective device.
The present application may generally refer to video encoder 100 as "signaling" or "transmitting" certain information to another device, such as video decoder 200. The term "signaling" or "transmitting" may generally refer to the transfer of syntax elements and/or other data used to decode compressed video data. This transfer may occur in real time or near real time. Alternatively, this transfer may occur over a span of time, such as when syntax elements are stored to a computer-readable storage medium in an encoded bitstream at encoding time; the decoding device may then retrieve the syntax elements at any time after they have been stored to such a medium.
Video encoder 100 and video decoder 200 may operate in accordance with a video compression standard, such as High Efficiency Video Coding (HEVC), or an extension thereof, and may conform to an HEVC test model (HM). Alternatively, video encoder 100 and video decoder 200 may operate in accordance with other industry standards, such as the ITU-T H.264, H.265 standards, or extensions to such standards. However, the techniques of this application are not limited to any particular codec standard.
In one example, referring also to fig. 3, video encoder 100 is configured to: encoding syntax elements related to a current image block to be encoded into a digital video output bitstream (simply referred to as a bitstream or code stream), where syntax elements for inter prediction of the current image block are simply referred to as inter prediction data; in order to determine an inter prediction mode for encoding the current image block, the video encoder 100 is further configured to determine or select (S301) an inter prediction mode for inter predicting the current image block from the set of candidate inter prediction modes (e.g., selecting an inter prediction mode with a reduced or minimum rate distortion cost for encoding the current image block from a plurality of new inter prediction modes); and encoding the current image block based on the determined inter prediction mode (S303), wherein the encoding process may include predicting motion information of one or more sub-blocks (in particular, motion information of each sub-block or all sub-blocks) in the current image block based on the determined inter prediction mode, and performing inter prediction on the current image block using the motion information of the one or more sub-blocks in the current image block;
It should be understood that if the difference (i.e., residual) between the prediction block generated from the motion information predicted based on the new inter prediction mode proposed in the present application and the current image block to be encoded (i.e., the original block) is 0, the video encoder 100 only needs to encode the syntax elements related to the current image block to be encoded into the bitstream (also referred to as the code stream); otherwise, in addition to the syntax elements, the corresponding residual also needs to be encoded into the bitstream.
In another example, referring also to fig. 4, video decoder 200 is configured to: syntax elements associated with a current image block to be decoded are decoded from a bitstream (S401), when the inter prediction data indicates that a set of candidate inter prediction modes (i.e., new inter prediction modes) is employed to predict the current image block, an inter prediction mode of the set of candidate inter prediction modes for inter prediction of the current image block is determined (S403), and the current image block is decoded based on the determined inter prediction mode (S405), where the decoding process may include predicting motion information of one or more sub-blocks in the current image block based on the determined inter prediction mode, and performing inter prediction on the current image block using the motion information of the one or more sub-blocks in the current image block.
Optionally, if the inter prediction data further includes a second identifier indicating which inter prediction mode is used for the current image block, the video decoder 200 is configured to determine that the inter prediction mode indicated by the second identifier is the inter prediction mode for inter predicting the current image block; alternatively, if the inter prediction data does not include a second identifier indicating which inter prediction mode the current image block adopts, the video decoder 200 is configured to determine that the first inter prediction mode for the non-directional motion field is the inter prediction mode for inter predicting the current image block.
Fig. 2A is a block diagram of a video encoder 100 of one example described in an embodiment of the present application. The video encoder 100 is arranged to output video to a post-processing entity 41. Post-processing entity 41 represents an example of a video entity, such as a Media Aware Network Element (MANE) or a stitching/editing device, that may process encoded video data from video encoder 100. In some cases, post-processing entity 41 may be an instance of a network entity. In some video coding systems, post-processing entity 41 and video encoder 100 may be parts of separate devices, while in other cases, the functionality described with respect to post-processing entity 41 may be performed by the same device that includes video encoder 100. In one example, post-processing entity 41 is an example of storage device 40 of FIG. 1.
The video encoder 100 may perform encoding of the video image block, e.g., perform inter prediction of the video image block, according to any new inter prediction mode of the set of candidate inter prediction modes including modes 0,1,2, …, or 10 as set forth herein.
In the example of fig. 2A, video encoder 100 includes a prediction processing unit 108, a filter unit 106, a decoded image buffer unit (DPB) 107, a summing unit 112, a transform unit 101, a quantization unit 102, and an entropy encoding unit 103. The prediction processing unit 108 includes an inter prediction unit 110 and an intra prediction unit 109. For image block reconstruction, the video encoder 100 also includes an inverse quantization unit 104, an inverse transform unit 105, and a summing unit 111. The filter unit 106 is intended to represent one or more loop filter units, such as a deblocking filter unit, an adaptive loop filter unit (ALF), and a Sample Adaptive Offset (SAO) filter unit. Although filtering unit 106 is shown in fig. 2A as an in-loop filter, in other implementations filtering unit 106 may be implemented as a post-loop filter. In one example, the video encoder 100 may further include a video data storage unit, a segmentation unit (not illustrated in the figure).
The video data storage unit may store video data to be encoded by components of the video encoder 100. Video data stored in the video data storage unit may be obtained from video source 120. DPB 107 may be a reference picture storage unit that stores reference video data used by video encoder 100 to encode video data in intra or inter coding modes. The video data storage unit and DPB 107 may be formed from any of a variety of storage devices, such as dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of storage devices. The video data storage unit and the DPB 107 may be provided by the same storage device or by separate storage devices. In various examples, the video data storage unit may be on-chip with other components of video encoder 100, or off-chip with respect to those components.
As shown in fig. 2A, the video encoder 100 receives video data and stores the video data in the video data storage unit. The segmentation unit segments the video data into image blocks, and these image blocks may be further segmented into smaller blocks, e.g., based on a quadtree structure or a binary tree structure. Such partitioning may also include partitioning into slices, tiles, or other larger units. Video encoder 100 generally illustrates the components that encode image blocks within a video slice to be encoded. A slice may be divided into multiple image blocks (and possibly into sets of image blocks referred to as tiles). The prediction processing unit 108 may select one of a plurality of possible coding modes for the current image block, such as one of a plurality of intra coding modes or one of a plurality of inter coding modes, including but not limited to one or more of modes 0, 1, 2, 3, … proposed herein. Prediction processing unit 108 may provide the resulting intra- or inter-coded block to summing unit 112 to generate a residual block, and to summing unit 111 to reconstruct the encoded block used as a reference picture.
The intra-prediction unit 109 within the prediction processing unit 108 may perform intra-predictive encoding of the current image block with respect to one or more neighboring blocks in the same frame or slice as the current block to be encoded to remove spatial redundancy. Inter prediction unit 110 within prediction processing unit 108 may perform inter predictive encoding of the current image block with respect to one or more prediction blocks in one or more reference images to remove temporal redundancy.
In particular, the inter prediction unit 110 may be used to determine an inter prediction mode for encoding a current image block. For example, the inter prediction unit 110 may calculate rate-distortion values of various inter prediction modes in the candidate inter prediction mode set using rate-distortion analysis, and select the inter prediction mode having the best rate-distortion characteristics from among them. Rate-distortion analysis typically determines the amount of distortion (or error) between an encoded block and an original, unencoded block encoded to produce the encoded block, as well as the bit rate (that is, the number of bits) used to produce the encoded block. For example, the inter prediction unit 110 may determine an inter prediction mode having the smallest rate distortion cost for encoding the current image block from among the candidate inter prediction mode sets as an inter prediction mode for inter predicting the current image block. The inter-predictive coding process, and more particularly, the process of predicting motion information of one or more sub-blocks (in particular, each sub-block or all sub-blocks) in a current image block in various inter-prediction modes for non-directional or directional motion fields of the present application, will be described in detail below.
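The rate-distortion selection described above can be illustrated with a short sketch. The cost function J = D + lambda*R, the SSE distortion measure, and the callables predict and rate_bits are assumptions made for illustration only; they are not a normative part of this application.

```python
# Illustrative sketch of selecting an inter prediction mode by rate-distortion cost,
# J = D + lambda * R. The cost model and the callables `predict` and `rate_bits`
# are assumptions made for illustration.

def select_inter_mode(candidate_modes, original, predict, rate_bits, lam):
    """Return the mode with the smallest rate-distortion cost.

    original : list of pixel values of the current block
    predict  : callable(mode) -> list of predicted pixel values
    rate_bits: callable(mode) -> estimated number of bits to signal the mode/residual
    """
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        pred = predict(mode)
        distortion = sum((o - p) ** 2 for o, p in zip(original, pred))  # SSE
        cost = distortion + lam * rate_bits(mode)                        # J = D + lambda * R
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode

# Toy usage with made-up numbers:
modes = ["mode0", "mode1"]
orig = [10, 12, 11, 9]
preds = {"mode0": [10, 12, 10, 9], "mode1": [11, 12, 11, 9]}
print(select_inter_mode(modes, orig, lambda m: preds[m], lambda m: 8, lam=0.5))
```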
The inter prediction unit 110 is configured to predict motion information (e.g., motion vectors) of one or more sub-blocks in the current image block based on the determined inter prediction mode, and to acquire or generate a prediction block of the current image block using the motion information (e.g., motion vectors) of the one or more sub-blocks in the current image block. Inter prediction unit 110 may locate the prediction block to which the motion vector points in one of the reference picture lists. Inter prediction unit 110 may also generate syntax elements associated with the image blocks and the video slice for use by video decoder 200 in decoding the image blocks of the video slice. Alternatively, in one example, the inter prediction unit 110 performs a motion compensation process using the motion information of each sub-block to generate a prediction block of each sub-block, thereby obtaining a prediction block of the current image block; it should be appreciated that the inter prediction unit 110 here performs both motion estimation and motion compensation processes.
Specifically, after selecting the inter prediction mode for the current image block, the inter prediction unit 110 may provide information indicating the selected inter prediction mode of the current image block to the entropy encoding unit 103 so that the entropy encoding unit 103 encodes the information indicating the selected inter prediction mode. In the present application, the video encoder 100 may include inter prediction data related to the current image block in the transmitted bitstream, which may include a first flag block_based_enable_flag to indicate whether to inter predict the current image block using the new inter prediction mode proposed in the present application; optionally, a second flag block_based_index may also be included to indicate which new inter prediction mode is used for the current image block. In the present application, a process of predicting a motion vector of a current image block or a sub-block thereof using motion vectors of a plurality of reference blocks in different modes 0,1,2, … will be described in detail below.
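For illustration only, the following sketch shows one way the two identifiers mentioned above could be placed in a bitstream. The BitWriter class and the 4-bit fixed-length code for block_based_index are assumptions; the actual syntax layout and entropy coding (e.g., CABAC) are defined elsewhere in this application.

```python
# Hedged sketch: writing the inter prediction flags into a bitstream.
# The BitWriter class and the 4-bit fixed-length code for block_based_index
# are illustrative assumptions, not the application's actual syntax or entropy coding.

class BitWriter:
    def __init__(self):
        self.bits = []

    def put_flag(self, value):
        self.bits.append(1 if value else 0)

    def put_uint(self, value, num_bits):
        for shift in range(num_bits - 1, -1, -1):
            self.bits.append((value >> shift) & 1)

def write_inter_prediction_data(writer, use_new_mode, mode_index=None):
    # First flag: whether a new inter prediction mode is used for the current block.
    writer.put_flag(use_new_mode)          # block_based_enable_flag
    if use_new_mode and mode_index is not None:
        # Optional second flag: which new inter prediction mode is used.
        writer.put_uint(mode_index, 4)     # block_based_index (4 bits assumed)

w = BitWriter()
write_inter_prediction_data(w, use_new_mode=True, mode_index=3)
print(w.bits)  # [1, 0, 0, 1, 1]
```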
The intra prediction unit 109 may perform intra prediction on the current image block. In particular, the intra prediction unit 109 may determine an intra prediction mode used to encode the current block. For example, the intra prediction unit 109 may calculate rate-distortion values of various intra prediction modes to be tested using rate-distortion analysis, and select an intra prediction mode having the best rate-distortion characteristic from among the modes to be tested. In any case, after selecting the intra prediction mode for the image block, the intra prediction unit 109 may provide information indicating the selected intra prediction mode of the current image block to the entropy encoding unit 103 so that the entropy encoding unit 103 encodes the information indicating the selected intra prediction mode.
After the prediction processing unit 108 generates a prediction block of the current image block via inter prediction or intra prediction, the video encoder 100 forms a residual image block by subtracting the prediction block from the current image block to be encoded. Summing unit 112 represents one or more components that perform this subtraction operation. Residual video data in the residual block may be included in one or more TUs and applied to transform unit 101. The transform unit 101 transforms the residual video data into residual transform coefficients using a transform such as a discrete cosine transform (DCT) or a conceptually similar transform. Transform unit 101 may convert the residual video data from a pixel value domain to a transform domain, such as the frequency domain.
The transform unit 101 may send the resulting transform coefficients to the quantization unit 102. Quantization unit 102 quantizes the transform coefficients to further reduce bit rate. In some examples, quantization unit 102 may then perform a scan of a matrix including quantized transform coefficients. Alternatively, the entropy encoding unit 103 may perform scanning.
After quantization, the entropy encoding unit 103 entropy encodes the quantized transform coefficients. For example, entropy encoding unit 103 may perform context-adaptive variable-length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy encoding method or technique. After entropy encoding by entropy encoding unit 103, the encoded bitstream may be transmitted to video decoder 200, or archived for later transmission or retrieval by video decoder 200. The entropy encoding unit 103 may also entropy encode syntax elements of the current image block to be encoded.
The inverse quantization unit 104 and the inverse transform unit 105 apply inverse quantization and inverse transform, respectively, to reconstruct the residual block in the pixel domain, e.g., for later use as a reference block of a reference image. The summing unit 111 adds the reconstructed residual block to the prediction block generated by the inter prediction unit 110 or the intra prediction unit 109 to generate a reconstructed image block. The filter unit 106 may be applied to the reconstructed image block to reduce distortion, such as blocking artifacts. The reconstructed image block is then stored as a reference block in the decoded image buffer unit 107, and may be used by the inter prediction unit 110 as a reference block for inter prediction of blocks in subsequent video frames or images.
It should be appreciated that other structural variations of the video encoder 100 may be used to encode a video stream. For example, for some image blocks or image frames, the video encoder 100 may directly quantize the residual signal without processing by the transform unit 101, and correspondingly without processing by the inverse transform unit 105; alternatively, for some image blocks or image frames, the video encoder 100 does not generate residual data, and accordingly does not need to be processed by the transform unit 101, the quantization unit 102, the inverse quantization unit 104 and the inverse transform unit 105; alternatively, the video encoder 100 may store the reconstructed image block directly as a reference block without processing by the filter unit 106; alternatively, the quantization unit 102 and the inverse quantization unit 104 in the video encoder 100 may be combined together. The loop filtering unit is optional and in the case of lossless compression encoding, the transform unit 101, quantization unit 102, inverse quantization unit 104 and inverse transform unit 105 are optional. It should be appreciated that the inter prediction unit and the intra prediction unit may be selectively enabled according to different application scenarios, and in this case, the inter prediction unit is enabled.
Fig. 2B is a block diagram of a video decoder 200 of one example described in an embodiment of the present application. In the example of fig. 2B, the video decoder 200 includes an entropy decoding unit 203, a prediction processing unit 208, an inverse quantization unit 204, an inverse transform unit 205, a summing unit 211, a filter unit 206, and a decoded image buffer unit 207. The prediction processing unit 208 may include an inter prediction unit 210 and an intra prediction unit 209. In some examples, video decoder 200 may perform a decoding process that is substantially reciprocal to the encoding process described with respect to video encoder 100 from fig. 2A.
In the decoding process, video decoder 200 receives an encoded video bitstream from video encoder 100 representing image blocks and associated syntax elements of an encoded video slice. The video decoder 200 may receive video data from the network entity 42, which may optionally also be stored in a video data storage unit (not shown). The video data storage unit may store video data, such as an encoded video bitstream, to be decoded by components of the video decoder 200. The video data stored in the video data storage unit may be obtained, for example, from the storage device 40, from a local video source such as a camera, via wired or wireless network communication of the video data, or by accessing a physical data storage medium. The video data storage unit may serve as a coded picture buffer (CPB) for storing encoded video data from an encoded video bitstream. Although the video data storage unit is not illustrated in fig. 2B, the video data storage unit and the DPB 207 may be the same storage unit or may be separately provided storage units. The video data storage unit and DPB 207 may be formed from any of a variety of storage devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of storage devices. In various examples, the video data storage unit may be integrated on-chip with other components of video decoder 200 or off-chip with respect to those components.
The network entity 42 may be, for example, a server, a MANE, a video editor/splicer, or other such device for implementing one or more of the techniques described above. The network entity 42 may or may not include a video encoder, such as video encoder 100. The network entity 42 may implement portions of the techniques described in this application before the network entity 42 sends the encoded video bitstream to the video decoder 200. In some video decoding systems, network entity 42 and video decoder 200 may be part of separate devices, while in other cases, the functionality described with respect to network entity 42 may be performed by the same device that includes video decoder 200. In some cases, network entity 42 may be an example of storage 40 of fig. 1.
The entropy decoding unit 203 of the video decoder 200 entropy decodes the bit stream to produce quantized coefficients and some syntax elements. Entropy decoding unit 203 forwards the syntax elements to prediction processing unit 208. The video decoder 200 may receive syntax elements at the video slice level and/or the picture block level.
When a video slice is decoded as an intra-decoded (I) slice, the intra-prediction unit 209 of the prediction processing unit 208 may generate a prediction block for an image block of the current video slice based on the signaled intra-prediction mode and data from a previously decoded block of the current frame or image. When a video slice is decoded as an inter-decoded (i.e., B or P) slice, the inter-prediction unit 210 of the prediction processing unit 208 may determine an inter-prediction mode for decoding a current image block of the current video slice based on the syntax element received from the entropy decoding unit 203, and decode (e.g., perform inter-prediction) the current image block based on the determined inter-prediction mode. Specifically, the inter prediction unit 210 may determine whether to predict the current image block of the current video slice using a new inter prediction mode, and if the syntax element indicates that the current image block is predicted using the new inter prediction mode, predict motion information of the current image block or a sub-block of the current image block of the current video slice based on the new inter prediction mode (e.g., a new inter prediction mode specified by the syntax element or a default new inter prediction mode), thereby acquiring or generating a predicted block of the current image block or a sub-block of the current image block using the motion information of the predicted current image block or the sub-block of the current image block through the motion compensation process. The motion information herein may include reference picture information and motion vectors, wherein the reference picture information may include, but is not limited to, uni-directional/bi-directional prediction information, a reference picture list number, and a reference picture index corresponding to the reference picture list. For inter prediction, a prediction block may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 200 may construct reference picture lists, i.e., list 0 and list 1, based on the reference pictures stored in DPB 207. The reference frame index of the current image may be included in one or more of reference frame list 0 and list 1. In some examples, it may be that the video encoder 100 signals a specific syntax element indicating whether a specific block is decoded using a new inter prediction mode, or it may also signal whether a new inter prediction mode is used, and which new inter prediction mode is specifically used to decode a specific block. It should be appreciated that the inter prediction unit 210 herein performs a motion compensation process. The inter prediction process of predicting motion information of a current image block or a sub-block of the current image block using motion information of a reference block in various new inter prediction modes will be described in detail below.
The inverse quantization unit 204 inverse quantizes, i.e., dequantizes, the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 203. The inverse quantization process may include: using the quantization parameter calculated by the video encoder 100 for each image block in the video slice to determine the degree of quantization that should be applied, and likewise the degree of inverse quantization that should be applied. The inverse transform unit 205 applies an inverse transform to the transform coefficients, such as an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, in order to generate a residual block in the pixel domain.
After the inter prediction unit 210 generates a prediction block for the current image block or a sub-block of the current image block, the video decoder 200 obtains a reconstructed block, i.e., a decoded image block, by summing the residual block from the inverse transform unit 205 with the corresponding prediction block generated by the inter prediction unit 210. The summing unit 211 represents a component that performs this summing operation. A loop filter unit (in or after the decoding loop) may also be used to smooth pixel transitions or otherwise improve video quality, if desired. The filter unit 206 may represent one or more loop filter units, such as a deblocking filter unit, an adaptive loop filter (ALF) unit, and a sample adaptive offset (SAO) filter unit. Although the filter unit 206 is shown in fig. 2B as an in-loop filter unit, in other implementations, the filter unit 206 may be implemented as a post-loop filter unit. In one example, the filter unit 206 is applied to the reconstructed block to reduce blocking distortion, and the result is output as the decoded video stream. Also, the decoded image blocks in a given frame or image may be stored in the decoded image buffer unit 207, and the decoded image buffer unit 207 stores reference images used for subsequent motion compensation. The decoded image buffer unit 207 may be part of a storage unit that may also store decoded video for later presentation on a display device (e.g., the display device 220 of fig. 1), or may be separate from such a storage unit.
It should be appreciated that other structural variations of video decoder 200 may be used to decode the encoded video bitstream. For example, the video decoder 200 may generate an output video stream without processing by the filtering unit 206; alternatively, for some image blocks or image frames, the entropy decoding unit 203 of the video decoder 200 does not decode quantized coefficients, and accordingly does not need to be processed by the inverse quantization unit 204 and the inverse transform unit 205. The loop filter unit is optional; and for the case of lossless compression, the inverse quantization unit 204 and the inverse transformation unit 205 are optional. It should be appreciated that the inter prediction unit and the intra prediction unit may be selectively enabled according to different application scenarios, and in this case, the inter prediction unit is enabled.
Fig. 5 is a diagram illustrating motion information of an exemplary current image block 600 and its reference blocks in an embodiment of the present application. As shown in fig. 5, W and H are the width and height of the current image block 600 and of the co-located block (simply referred to as the collocated block) 600' of the current image block 600. The reference blocks of the current image block 600 include: the upper spatial neighboring blocks and left spatial neighboring blocks of the current image block 600, and the lower spatial neighboring blocks and right spatial neighboring blocks of the collocated block 600', where the collocated block 600' is the image block in the reference image having the same size, shape, and coordinates as the current image block 600. It should be noted that the motion information of the lower spatial neighboring blocks and the right spatial neighboring blocks of the current image block does not yet exist, because those blocks have not been encoded. It should be appreciated that the current image block 600 and the collocated block 600' may be of any block size. For example, the current image block 600 and the collocated block 600' may include, but are not limited to, 16x16 pixels, 32x32 pixels, 32x16 pixels, 16x32 pixels, and the like. As described above, each image frame may be divided into image blocks for encoding. These image blocks may be further partitioned into smaller blocks, e.g., the current image block 600 and the collocated block 600' may be partitioned into multiple MxN sub-blocks, i.e., each sub-block has a size of MxN pixels, and each reference block also has a size of MxN pixels, i.e., the same size as a sub-block of the current image block. The coordinates in fig. 5 are measured in MxN blocks. "MxN" and "M by N" are used interchangeably to refer to the pixel size of an image block in terms of the horizontal dimension and the vertical dimension, i.e., having M pixels in the horizontal direction and N pixels in the vertical direction, where M and N represent non-negative integer values. Furthermore, a block does not necessarily need to have the same number of pixels in the horizontal direction as in the vertical direction. For example, where M=N=4, the sub-block size of the current image block and the reference block size may be 8×8 pixels, 8×4 pixels, or 4×8 pixels, or the minimum prediction block size. Furthermore, the image blocks described herein may be understood as, but are not limited to: a prediction unit (PU), a coding unit (CU), a transform unit (TU), or the like. A CU may contain one or more prediction units PU, or the PU and the CU may have the same size, as specified by different video compression codec standards. Image blocks may have fixed or variable sizes and differ in size according to different video compression codec standards. In addition, the current image block refers to the image block currently to be encoded or decoded, such as a prediction unit to be encoded or decoded.
In one example, whether each left spatial neighboring block of the current image block 600 is available may be determined sequentially along direction 1, and whether each upper spatial neighboring block of the current image block 600 is available may be determined sequentially along direction 2, e.g., by checking whether the neighboring block (also referred to as a reference block; the terms are used interchangeably) is inter-coded. If a neighboring block exists and is inter-coded, the neighboring block is available; if a neighboring block does not exist or is intra-coded, the neighboring block is not available. If a neighboring block is intra-coded, the motion information of another neighboring reference block is copied as the motion information of that neighboring block. Whether the lower spatial neighboring blocks and the right spatial neighboring blocks of the collocated block 600' are available is detected in a similar manner, and details are not repeated here.
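The availability check described in this example can be sketched as follows; the NeighborBlock fields and the rule of copying from the most recent available neighbor are illustrative assumptions.

```python
# Hedged sketch of the reference-block availability check described above.
# The NeighborBlock fields and the copy-from-previous-available rule are assumptions.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class NeighborBlock:
    exists: bool
    is_inter: bool
    motion_info: Optional[Tuple[int, int]] = None  # (mvx, mvy) if inter-coded

def is_available(block: NeighborBlock) -> bool:
    # A neighboring block is available if it exists and is inter-coded.
    return block.exists and block.is_inter

def fill_unavailable(blocks):
    """Scan blocks in traversal order; for an unavailable block, copy the motion
    information of the most recently seen available neighbor (if any)."""
    last_mi = None
    out = []
    for b in blocks:
        if is_available(b):
            last_mi = b.motion_info
            out.append(b.motion_info)
        else:
            out.append(last_mi)  # copied from another neighboring reference block
    return out

left_neighbors = [NeighborBlock(True, True, (1, 2)),
                  NeighborBlock(True, False),        # intra-coded: not available
                  NeighborBlock(True, True, (3, 4))]
print(fill_unavailable(left_neighbors))  # [(1, 2), (1, 2), (3, 4)]
```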
Further, if both the size of the available reference block and the size of the sub-block of the current image block are 4x4, the motion information of the available reference block can be fetched directly. If the size of the available reference block is, for example, 8x4 or 8x8, the motion information of its central 4x4 block may be obtained as the motion information of the available reference block, where the coordinates of the top left vertex of the central 4x4 block relative to the top left vertex of the reference block are ((W/4)/2×4, (H/4)/2×4), the division being integer division; for example, if M=8 and N=4, the coordinates of the top left vertex of the central 4x4 block relative to the top left vertex of the reference block are (4, 0). Alternatively, the motion information of the top left 4x4 block of the reference block may be acquired as the motion information of the available reference block, but the present application is not limited thereto.
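A minimal sketch of the coordinate computation for the central 4x4 block, using integer division as in the text above; the function name is illustrative.

```python
# Sketch: coordinates of the central 4x4 block inside a WxH available reference block,
# relative to the reference block's top left vertex, using integer division as in the text.

def central_4x4_offset(w, h):
    return ((w // 4) // 2 * 4, (h // 4) // 2 * 4)

print(central_4x4_offset(8, 4))   # (4, 0)  -- matches the M=8, N=4 example
print(central_4x4_offset(8, 8))   # (4, 4)
print(central_4x4_offset(4, 4))   # (0, 0)  -- the block itself
```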
For simplicity of description, an MxN sub-block is referred to below simply as a sub-block, and a neighboring MxN block is referred to simply as a neighboring block.
Fig. 6 is a flow chart illustrating a process 700 of an encoding method according to one embodiment of the present application. The process 700 may be performed by the video encoder 100, and in particular, may be performed by the inter prediction unit 110 and the entropy encoding unit (also referred to as the entropy encoder) 103 of the video encoder 100. Process 700 is described as a series of steps or operations; it should be understood that process 700 may be performed in various orders and/or concurrently, and is not limited to the order of execution depicted in fig. 6. Assume that a video encoder is used for a video data stream having a plurality of video frames; if the coding tree unit (CTU) in which a first neighboring affine coding block is located is above the current coding block, a set of candidate motion vector predictors is determined based on the lower left control point and the lower right control point of the first neighboring affine coding block. The corresponding flow is shown in fig. 6 and described as follows:
step S700: the video encoder determines an inter prediction mode for the current encoded block.
Specifically, the inter prediction mode may be an advanced motion vector prediction (AMVP) mode or a merge mode.
If it is determined that the inter prediction mode of the current coding block is AMVP mode, steps S711 to S713 are performed.
If it is determined that the inter prediction mode of the current encoded block is the merge mode, steps S721-S723 are performed.
AMVP mode:
step S711: the video encoder builds a list of candidate motion vector predictors MVP.
Specifically, the video encoder constructs a candidate motion vector predictor MVP list (also referred to as a candidate motion vector list) through an inter-frame prediction unit (also referred to as an inter-frame prediction module), and the candidate motion vector predictor MVP list may be constructed in one of two ways provided below, or in a combination of two ways, and the constructed candidate motion vector predictor MVP list may be a triplet candidate motion vector predictor MVP list or a binary candidate motion vector predictor MVP list; the two modes are specifically as follows:
in one mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion vector prediction value MVP list.
First, all or part of adjacent blocks of the current encoding block are traversed according to a predetermined sequence, so that adjacent affine encoding blocks are determined, and the number of the determined adjacent affine encoding blocks may be one or a plurality of adjacent affine encoding blocks. For example, adjacent blocks A, B, C, D, E shown in fig. 7A may be traversed sequentially to determine adjacent affine encoded blocks in adjacent blocks A, B, C, D, E. The inter prediction unit determines a set of candidate motion vector predictors according to at least one neighboring affine encoding block (each set of candidate motion vector predictors is a binary set or a ternary set), and the following description will take one neighboring affine encoding block as an example, and the one neighboring affine encoding block is referred to as a first neighboring affine encoding block for convenience of description, specifically as follows:
A first affine model is determined according to the motion vectors of the control points of the first neighboring affine coding block, and the motion vectors of the control points of the current coding block are predicted according to the first affine model. The manner of predicting the motion vectors of the control points of the current coding block based on the motion vectors of the control points of the first neighboring affine coding block differs depending on the parametric model of the current coding block, so the cases are described separately below.
A. The parametric model of the current coding block is a 4-parameter affine transformation model, and the derivation may be as follows:
If the first neighboring affine coding block is located in the coding tree unit (CTU) above the current coding block and the first neighboring affine coding block is a four-parameter affine coding block, the motion vectors of the two bottommost control points of the first neighboring affine coding block are obtained, for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point, and the position coordinates (x7, y7) and motion vector (vx7, vy7) of the lower right control point.
A first affine model is formed according to the motion vectors and the position coordinates of the two bottommost control points of the first neighboring affine coding block (the first affine model obtained in this case is a 4-parameter affine model).
The motion vectors of the control points of the current coding block are predicted according to the first affine model; for example, the position coordinates of the upper left control point and the position coordinates of the upper right control point of the current coding block can be substituted into the first affine model, thereby predicting the motion vector of the upper left control point and the motion vector of the upper right control point of the current coding block, as shown in formulas (1) and (2).
In formulas (1) and (2), (x0, y0) are the coordinates of the upper left control point of the current coding block, and (x1, y1) are the coordinates of the upper right control point of the current coding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block.
Optionally, the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point are both calculated from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine coding block, where the position coordinates (x6, y6) of the lower left control point of the first neighboring affine coding block are (x4, y4 + cuH) and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine coding block are (x4 + cuW, y4 + cuH), cuW being the width of the first neighboring affine coding block and cuH being its height. In addition, the motion vector of the lower left control point of the first neighboring affine coding block is the motion vector of its lower left sub-block, and the motion vector of the lower right control point of the first neighboring affine coding block is the motion vector of its lower right sub-block. It can be seen that the position coordinates of the lower left control point and of the lower right control point of the first neighboring affine coding block are derived rather than read from memory; by adopting this method, memory reads can be further reduced and coding performance improved. Alternatively, the position coordinates of the lower left control point and the lower right control point may be stored in memory in advance and read from memory when these control points are to be used.
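A minimal sketch of the derivation just described, computing the lower left and lower right control point coordinates of the first neighboring affine coding block from its upper left coordinates and its dimensions instead of reading them from memory; the function name and example values are illustrative.

```python
# Sketch: deriving the lower left (x6, y6) and lower right (x7, y7) control point
# coordinates of a neighboring affine coding block from its upper left coordinates
# (x4, y4), width cuW, and height cuH, as described in the text.

def derived_bottom_control_points(x4, y4, cuW, cuH):
    x6, y6 = x4, y4 + cuH          # lower left control point
    x7, y7 = x4 + cuW, y4 + cuH    # lower right control point
    return (x6, y6), (x7, y7)

print(derived_bottom_control_points(x4=64, y4=32, cuW=16, cuH=8))
# ((64, 40), (80, 40))
```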
If the first neighboring affine coding block is located in the coding tree unit (CTU) above the current coding block and the first neighboring affine coding block is a six-parameter affine coding block, no candidate motion vector predictor for the control points of the current block is generated based on the first neighboring affine coding block.
If the first neighboring affine coded block is not located above the CTU of the current coded block, the manner of predicting the motion vector of the control point of the current coded block is not limited herein. However, for ease of understanding, the following also exemplifies an alternative determination:
The position coordinates and motion vectors of the three control points of the first neighboring affine coding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
A 6-parameter affine model is formed according to the position coordinates and the motion vectors of the three control points of the first neighboring affine coding block.
The position coordinates (x0, y0) of the upper left control point and the position coordinates (x1, y1) of the upper right control point of the current coding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current coding block, specifically as shown in formulas (4) and (5).
In formulas (4) and (5), (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block.
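Formulas (1) and (2) are not reproduced in this excerpt. The sketch below therefore assumes the commonly used 4-parameter affine model built from two control points that lie on the same horizontal line (such as the two bottommost control points of the first neighboring affine coding block, whose horizontal distance equals the neighbor's width); whether this matches formulas (1) and (2) exactly is an assumption.

```python
# Hedged sketch of a 4-parameter affine model built from two control points that
# share the same vertical coordinate (e.g. the lower left and lower right control
# points of the first neighboring affine coding block). This is the commonly used
# 4-parameter affine motion model and is only assumed to correspond to formulas (1)/(2).

def affine_4param(cp0, cp1):
    """cp0, cp1: ((x, y), (vx, vy)) with the same y and cp1 to the right of cp0.
    Returns a function mapping a point (x, y) to its motion vector (vx, vy)."""
    (x0, y0), (vx0, vy0) = cp0
    (x1, _y1), (vx1, vy1) = cp1
    w = x1 - x0                       # horizontal distance between the control points
    a = (vx1 - vx0) / w
    b = (vy1 - vy0) / w

    def mv(x, y):
        return (vx0 + a * (x - x0) - b * (y - y0),
                vy0 + b * (x - x0) + a * (y - y0))
    return mv

# Toy usage: predict the current block's upper left and upper right control point MVs.
model = affine_4param(((64, 40), (1.0, 2.0)), ((80, 40), (3.0, 2.5)))
print(model(70, 48))   # MV at the current block's upper left control point (example coords)
print(model(86, 48))   # MV at the current block's upper right control point
```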
B. The parametric model of the current coding block is a 6-parameter affine transformation model, and the derivation may be as follows:
If the first neighboring affine coding block is located in the CTU above the current coding block and the first neighboring affine coding block is a four-parameter affine coding block, the position coordinates and motion vectors of the two bottommost control points of the first neighboring affine coding block are obtained, for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point, and the position coordinates (x7, y7) and motion vector (vx7, vy7) of the lower right control point.
A first affine model is formed according to the motion vectors of the two bottommost control points of the first neighboring affine coding block (the first affine model obtained in this case is a 4-parameter affine model).
The motion vectors of the control points of the current coding block are predicted according to the first affine model; for example, the position coordinates of the upper left control point, the position coordinates of the upper right control point, and the position coordinates of the lower left control point of the current coding block can be substituted into the first affine model, thereby predicting the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current coding block, as shown in formulas (1), (2), and (3).
Formulas (1) and (2) have been described above. In formulas (1), (2), and (3), (x0, y0) are the coordinates of the upper left control point of the current coding block, (x1, y1) are the coordinates of the upper right control point of the current coding block, and (x2, y2) are the coordinates of the lower left control point of the current coding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current coding block.
If the first neighboring affine coding block is located in the coding tree unit (CTU) above the current coding block and the first neighboring affine coding block is a six-parameter affine coding block, no candidate motion vector predictor for the control points of the current block is generated based on the first neighboring affine coding block.
If the first neighboring affine coded block is not located above the CTU of the current coded block, the manner of predicting the motion vector of the control point of the current coded block is not limited herein. However, for ease of understanding, the following also exemplifies an alternative determination:
The position coordinates and motion vectors of the three control points of the first neighboring affine coding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
A 6-parameter affine model is formed according to the position coordinates and the motion vectors of the three control points of the first neighboring affine coding block.
The position coordinates (x0, y0) of the upper left control point, the position coordinates (x1, y1) of the upper right control point, and the position coordinates (x2, y2) of the lower left control point of the current coding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current coding block, as shown in formulas (4), (5), and (6).
Formulas (4) and (5) have been described above. In formulas (4), (5), and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current coding block.
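Similarly, formulas (4), (5), and (6) are not reproduced in this excerpt. The sketch below assumes the commonly used 6-parameter affine model built from the upper left, upper right, and lower left control points of the neighboring affine coding block; whether this matches formulas (4)-(6) exactly is an assumption.

```python
# Hedged sketch of a 6-parameter affine model built from three control points
# (upper left, upper right, lower left of the neighboring affine coding block).
# This is the commonly used 6-parameter affine motion model and is only assumed
# to correspond to formulas (4)-(6).

def affine_6param(cp_tl, cp_tr, cp_bl):
    """Each argument is ((x, y), (vx, vy)); cp_tr lies to the right of cp_tl and
    cp_bl lies below cp_tl. Returns a function mapping (x, y) to (vx, vy)."""
    (x4, y4), (vx4, vy4) = cp_tl
    (x5, _), (vx5, vy5) = cp_tr
    (_, y6), (vx6, vy6) = cp_bl
    w, h = x5 - x4, y6 - y4

    def mv(x, y):
        return (vx4 + (vx5 - vx4) / w * (x - x4) + (vx6 - vx4) / h * (y - y4),
                vy4 + (vy5 - vy4) / w * (x - x4) + (vy6 - vy4) / h * (y - y4))
    return mv

# Predict the current block's upper left, upper right, and lower left control point MVs.
model = affine_6param(((64, 32), (1.0, 1.0)), ((80, 32), (2.0, 1.0)), ((64, 40), (1.0, 3.0)))
for pt in [(70, 48), (86, 48), (70, 64)]:
    print(pt, model(*pt))
```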
In the second manner, a candidate motion vector predictor MVP list is constructed using a motion vector prediction method based on control point combination.
The manner of constructing the candidate motion vector predictor MVP list differs depending on the parametric model of the current coding block, as described below.
A. The parametric model of the current coding block is a 4-parameter affine transformation model, and the derivation may be as follows:
and predicting the motion vectors of the left top vertex and the right top vertex of the current coding block by utilizing the motion information of the coded blocks adjacent to the periphery of the current coding block. As shown in fig. 7B: firstly, using the motion vector of the adjacent coded block A and/or B and/or C of the top left vertex as the candidate motion vector of the top left vertex of the current coding block; and using the motion vector of the adjacent coded block D and/or E of the top right vertex as a candidate motion vector of the top right vertex of the current coding block. And combining the candidate motion vector of the top left vertex and the candidate motion vector of the top right vertex to obtain a group of candidate motion vector predicted values, and combining a plurality of records obtained in a combined mode to form a candidate motion vector predicted value MVP list.
B. The parametric model of the current coding block is a 6-parameter affine transformation model, and the derivation may be as follows:
and predicting the motion vectors of the left top vertex and the right top vertex of the current coding block by utilizing the motion information of the coded blocks adjacent to the periphery of the current coding block. As shown in fig. 7B: firstly, using the motion vector of the adjacent coded block A and/or B and/or C of the top left vertex as the candidate motion vector of the top left vertex of the current coding block; using the motion vector of the adjacent coded block D and/or E of the top right vertex as the candidate motion vector of the top right vertex of the current coding block; and using the motion vector of the adjacent coded block F and/or G of the top right vertex as a candidate motion vector of the top right vertex of the current coded block. And combining the candidate motion vector of the top left vertex, the candidate motion vector of the top right vertex and the candidate motion vector of the bottom left vertex to obtain a group of candidate motion vector predictors, and combining the plurality of groups of candidate motion vector predictors in the combination mode to form a candidate motion vector predictor MVP list.
It should be noted that the candidate motion vector predictor MVP list may be constructed using only the candidate motion vector predictors obtained in the first manner, using only the candidate motion vector predictors obtained in the second manner, or using both the candidate motion vector predictors obtained in the first manner and those obtained in the second manner. In addition, the candidate motion vector predictor MVP list may be pruned and ordered according to a preconfigured rule, and then truncated or padded to a specific number of entries. When each group of candidate motion vector predictors in the candidate motion vector predictor MVP list includes the motion vector predictors of three control points, the candidate motion vector predictor MVP list may be referred to as a triplet list; when each group of candidate motion vector predictors in the candidate motion vector predictor MVP list includes the motion vector predictors of two control points, the candidate motion vector predictor MVP list may be referred to as a binary group list.
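A sketch of the combination-based construction and the pruning/truncation step described above. The maximum list length and the zero-motion-vector padding are illustrative assumptions.

```python
# Hedged sketch of building a control-point-combination candidate MVP list.
# The maximum list length and zero-MV padding are illustrative assumptions.

from itertools import product

def build_combination_mvp_list(tl_cands, tr_cands, bl_cands=None, max_len=5):
    """tl_cands / tr_cands / bl_cands: candidate MVs (tuples) for the top left,
    top right and (optionally, for a 6-parameter model) bottom left vertices,
    taken from neighbors A/B/C, D/E and F/G respectively."""
    if bl_cands is None:                       # 4-parameter model: 2-tuples
        combos = list(product(tl_cands, tr_cands))
    else:                                      # 6-parameter model: triplets
        combos = list(product(tl_cands, tr_cands, bl_cands))

    # Prune duplicates while keeping the original order.
    seen, pruned = set(), []
    for c in combos:
        if c not in seen:
            seen.add(c)
            pruned.append(c)

    # Truncate or pad to a specific number of entries.
    pruned = pruned[:max_len]
    n_cp = 2 if bl_cands is None else 3
    while len(pruned) < max_len:
        pruned.append(tuple((0, 0) for _ in range(n_cp)))   # zero-MV padding (assumed)
    return pruned

mvp_list = build_combination_mvp_list([(1, 0), (1, 1)], [(2, 0)], [(0, 2)])
for idx, cand in enumerate(mvp_list):
    print(idx, cand)   # each entry corresponds to a unique index in the list
```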
Step S712: the video encoder determines a set of target candidate motion vectors from the list of candidate motion vector predictors MVP according to a rate distortion cost criterion. Specifically, for each candidate motion vector group in the candidate motion vector prediction value MVP list, a motion vector of each sub-block of the current block is calculated, motion compensation is performed to obtain a prediction value of each sub-block, and thus the prediction value of the current block is obtained. And selecting the candidate motion vector group with the smallest error between the predicted value and the original value as a group of optimal motion vector predicted values, namely a target candidate motion vector group. In addition, the determined target candidate motion vector group is used as an optimal candidate motion vector predicted value of a group of control points, and the target candidate motion vector group corresponds to a unique index number in the candidate motion vector predicted value MVP list.
Step S713: the video encoder compiles an index corresponding to the target candidate motion vector and a motion vector difference MVD into a code stream to be transmitted.
Specifically, the video encoder may further search, with the target candidate motion vector set as a search starting point, a motion vector of a set of control points with the lowest cost within a preset search range according to a rate distortion cost criterion; then a motion vector difference MVD between the motion vector of the set of control points and the set of target candidate motion vectors is determined, e.g. if the first set of control points comprises a first control point and a second control point, then a motion vector difference MVD between the motion vector of the first control point and the motion vector predictor of the first control point of the set of control points represented by the set of target candidate motion vectors is determined, and a motion vector difference MVD between the motion vector of the second control point and the motion vector predictor of the second control point of the set of control points represented by the set of target candidate motion vectors is determined.
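A small sketch of the MVD computation just described: for each control point in the group, the MVD is the difference between the motion vector found by the search and the corresponding predictor in the target candidate motion vector group.

```python
# Sketch: per-control-point motion vector differences (MVDs) between the motion
# vectors found by the search and the target candidate motion vector group.

def control_point_mvds(searched_mvs, target_candidate_group):
    """Both arguments are sequences of (vx, vy), one entry per control point,
    in the same control-point order."""
    return [(mv[0] - mvp[0], mv[1] - mvp[1])
            for mv, mvp in zip(searched_mvs, target_candidate_group)]

print(control_point_mvds([(5, 3), (7, -1)], [(4, 3), (6, 0)]))
# [(1, 0), (1, -1)]  -- MVDs of the first and second control points
```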
Alternatively, steps S714 to S715 may be performed in addition to the above steps S711 to S713 in AMVP mode.
Step S714: the video encoder obtains the motion vector value of each sub-block in the current coding block by adopting an affine transformation model according to the determined motion vector value of the control point of the current coding block.
Specifically, the new candidate motion vector group obtained based on the target candidate motion vector group and MVD includes motion vectors of two (upper left control point and upper right control point) or three control points (for example, upper left control point, upper right control point and lower left control point). For each sub-block (a sub-block may be equivalently referred to as a motion compensation unit) of the current coding block, motion information of pixels at preset positions in the motion compensation unit may be used to represent motion information of all pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is less than or equal to the width W of the current coding block, N is less than or equal to the height H of the current coding block, where M, N, W, H is a positive integer, typically a power of 2, such as 4, 8, 16, 32, 64, 128, etc.), the preset position pixel point may be a motion compensation unit center point (M/2, N/2), an upper left pixel point (0, 0), an upper right pixel point (M-1, 0), or a pixel point at another position. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the pixel at the top left vertex of the current coding block are calculated using formula (5), where i is the index of the i-th motion compensation unit in the horizontal direction (left to right), j is the index of the j-th motion compensation unit in the vertical direction (top to bottom), and (x(i,j), y(i,j)) represents the coordinates of the center point of the (i, j)-th motion compensation unit relative to the pixel at the upper left control point of the current coding block. Then, according to the affine model type (6-parameter or 4-parameter) of the current coding block, (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (6-1) or into the 4-parameter affine model formula (6-2) to obtain the motion information of the center point of each motion compensation unit, which is used as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
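A hedged sketch of this part of step S714: one motion vector is derived per MxN motion compensation unit by evaluating the current block's affine model at the unit's center point. The center-point formula (i*M + M/2, j*N + N/2) relative to the block's top left pixel and the toy 4-parameter model are assumptions, since formulas (5), (6-1), and (6-2) are not reproduced in this excerpt.

```python
# Hedged sketch: deriving one motion vector per MxN motion compensation unit by
# evaluating the current block's affine model at each unit's center point.
# The center-point formula (i*M + M/2, j*N + N/2) and the toy 4-parameter model
# are assumptions; formulas (5), (6-1), (6-2) are not reproduced in this excerpt.

def subblock_motion_vectors(block_w, block_h, m, n, affine_mv):
    """affine_mv: callable(x, y) -> (vx, vy), with (x, y) relative to the current
    block's top left pixel. Returns {(i, j): (vx, vy)} for each MxN unit."""
    mvs = {}
    for j in range(block_h // n):          # vertical index, top to bottom
        for i in range(block_w // m):      # horizontal index, left to right
            cx = i * m + m / 2.0           # center of the (i, j)-th unit
            cy = j * n + n / 2.0
            mvs[(i, j)] = affine_mv(cx, cy)
    return mvs

# Toy 4-parameter model from the current block's top left/top right control points.
def toy_affine(x, y, w=16, cp0=(0.0, 0.0), cp1=(2.0, 0.5)):
    a, b = (cp1[0] - cp0[0]) / w, (cp1[1] - cp0[1]) / w
    return (cp0[0] + a * x - b * y, cp0[1] + b * x + a * y)

mvs = subblock_motion_vectors(16, 8, 4, 4, toy_affine)
print(mvs[(0, 0)], mvs[(3, 1)])
```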
Optionally, when the current coding block is a 6-parameter coding block and the motion vector of one or more sub-blocks of the current coding block is obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU where the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is calculated according to the 6-parameter affine model constructed by the three control points and the position coordinates (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is calculated according to the 6-parameter affine model constructed by the three control points and the position coordinates (W, H) at the lower right corner of the current coding block. For example, the motion vector of the sub-block in the lower left corner of the current coding block can be obtained by substituting the position coordinates (0, H) of the lower left corner of the current coding block into the 6-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower left corner into the affine model for calculation), and the motion vector of the sub-block in the lower right corner of the current coding block can be obtained by substituting the position coordinates (W, H) of the lower right corner of the current coding block into the 6-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current encoded block are used (e.g., the subsequent other block builds the candidate motion vector predictor MVP list for the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact value is used instead of the estimated value. Where W is the width of the current coding block and H is the height of the current coding block.
Optionally, when the current coding block is a 4-parameter coding block and the motion vector of one or more sub-blocks of the current coding block is obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU where the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is calculated according to the 4-parameter affine model constructed by the two control points and the position coordinates (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is calculated according to the 4-parameter affine model constructed by the two control points and the position coordinates (W, H) at the lower right corner of the current coding block. For example, the motion vector of the sub-block in the lower left corner of the current coding block can be obtained by substituting the position coordinates (0, H) of the lower left corner of the current coding block into the 4-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower left corner into the affine model for calculation), and the motion vector of the sub-block in the lower right corner of the current coding block can be obtained by substituting the position coordinates (W, H) of the lower right corner of the current coding block into the four-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current encoded block are used (e.g., the subsequent other block builds the candidate motion vector predictor MVP list for the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact value is used instead of the estimated value. Where W is the width of the current coding block and H is the height of the current coding block.
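A minimal sketch of this lower-boundary handling is given below; it assumes the affine model of the current block is available as a callable that maps a position (x, y) inside the block to a motion vector, and all names are hypothetical.

```python
def bottom_row_subblock_mvs(affine_mv, W, H, block_bottom_y, ctu_bottom_y, M=4, N=4):
    """Sketch: special-case the bottom-left and bottom-right sub-blocks when the
    lower boundary of the current block coincides with the CTU lower boundary.

    affine_mv(x, y) -> (vx, vy) evaluates the 4- or 6-parameter affine model of
    the current block at position (x, y) relative to its top-left vertex.
    """
    mvs = {}
    if block_bottom_y == ctu_bottom_y:
        # Use the exact corner positions (0, H) and (W, H) instead of the
        # sub-block center points, so that later blocks that reuse the
        # lower-left / lower-right control point motion vectors read an
        # exact value rather than an estimate.
        mvs[(0, H // N - 1)] = affine_mv(0, H)           # bottom-left sub-block
        mvs[(W // M - 1, H // N - 1)] = affine_mv(W, H)  # bottom-right sub-block
    return mvs
```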
Step S715: the video encoder performs motion compensation according to the motion vector value of each sub-block in the current coding block to obtain a pixel prediction value of each sub-block; for example, the motion vector of each sub-block and the reference frame index value are used to locate the corresponding sub-block in the reference frame, and interpolation filtering is performed to obtain the pixel prediction value of each sub-block.
Merge mode:
step S721: the video encoder builds a list of candidate motion information.
Specifically, the video encoder constructs a candidate motion information list (also referred to as a candidate motion vector list) through an inter-prediction unit (also referred to as an inter-prediction module). The list may be constructed in either of the two ways provided below, or by a combination of both, and the constructed candidate motion information list is a triplet candidate motion information list. The two ways are as follows:
in one mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion information list.
First, all or part of adjacent blocks of the current encoding block are traversed according to a predetermined sequence, so that adjacent affine encoding blocks are determined, and the number of the determined adjacent affine encoding blocks may be one or a plurality of adjacent affine encoding blocks. For example, adjacent blocks A, B, C, D, E shown in fig. 7A may be traversed sequentially to determine adjacent affine encoded blocks in adjacent blocks A, B, C, D, E. The inter prediction unit determines a set of candidate motion vector predictors according to each neighboring affine encoding block (each set of candidate motion vector predictors is a binary set or a ternary set), and the following description will take one neighboring affine encoding block as an example, and the neighboring affine encoding block is called a first neighboring affine encoding block for convenience of description, specifically as follows:
Determining a first affine model according to the motion vector of the control point of the first adjacent affine coding block, and further predicting the motion vector of the control point of the current coding block according to the first affine model, wherein the method is specifically described as follows:
If the first adjacent affine coding block is located in the CTU above the current coding block and the first adjacent affine coding block is a four-parameter affine coding block, the position coordinates and motion vectors of the two lowest control points of the first adjacent affine coding block are obtained, for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point, and the position coordinates (x7, y7) and motion vector value (vx7, vy7) of the lower right control point.
And forming a first affine model according to the motion vectors of the two control points at the bottommost side of the first adjacent affine coding block (the first affine model obtained at the moment is a 4-parameter affine model).
Alternatively, the motion vector of the control point of the current coding block is predicted according to the first affine model, for example, the position coordinate of the upper left control point, the position coordinate of the upper right control point and the position coordinate of the lower left control point of the current coding block may be respectively brought into the first affine model, so that the motion vector of the upper left control point, the motion vector of the upper right control point and the motion vector of the lower left control point of the current coding block are predicted to form a candidate motion vector triplet, and a candidate motion information list is added, which is specifically shown in formulas (1), (2) and (3).
Alternatively, the motion vector of the control point of the current coding block is predicted according to the first affine model, for example, the position coordinates of the upper left control point and the position coordinates of the upper right control point of the current coding block may be respectively brought into the first affine model, so that the motion vector of the upper left control point and the motion vector of the upper right control point of the current coding block are predicted to form a candidate motion vector binary group, and the candidate motion information list is added, as shown in the specific formulas (1) and (2).
In formulas (1), (2) and (3), (x0, y0) are the coordinates of the upper left control point of the current coding block, (x1, y1) are the coordinates of the upper right control point of the current coding block, and (x2, y2) are the coordinates of the lower left control point of the current coding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current coding block.
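Since formulas (1)-(3) are not reproduced in this excerpt, the following Python sketch illustrates the derivation described above using the standard 4-parameter affine extrapolation from the two lowest control points of the first adjacent affine coding block; the function name and argument layout are hypothetical.

```python
def predict_cp_mvs_from_bottom_cps(x6, y6, vx6, vy6, x7, y7, vx7, vy7,
                                   cur_cp_positions):
    """Sketch: build a 4-parameter affine model from the lower-left (x6, y6)
    and lower-right (x7, y7) control points of the first adjacent affine
    coding block, then evaluate it at the control-point positions of the
    current coding block (e.g. its upper-left, upper-right and lower-left
    corners) to obtain a candidate motion vector 2-tuple or 3-tuple.
    """
    w = x7 - x6                      # horizontal distance between the two CPs
    predicted = []
    for (x, y) in cur_cp_positions:
        dx, dy = x - x6, y - y6
        vx = vx6 + (vx7 - vx6) / w * dx - (vy7 - vy6) / w * dy
        vy = vy6 + (vy7 - vy6) / w * dx + (vx7 - vx6) / w * dy
        predicted.append((vx, vy))
    return predicted
```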
If the first adjacent affine Coding block is located above the current Coding block in a Coding Tree Unit (CTU) and the first adjacent affine Coding block is a six-parameter affine Coding block, generating no candidate motion vector predicted value of a control point of the current block based on the first adjacent affine Coding block.
If the first neighboring affine coded block is not located above the CTU of the current coded block, the manner of predicting the motion vector of the control point of the current coded block is not limited herein. However, for ease of understanding, the following also exemplifies an alternative determination:
The position coordinates and motion vectors of the three control points of the first adjacent affine coding block can be obtained, for example, the position coordinates (x4, y4) and motion vector value (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector value (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine coding block.
The position coordinates (x0, y0) of the upper left control point, the position coordinates (x1, y1) of the upper right control point, and the position coordinates (x2, y2) of the lower left control point of the current coding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point and the motion vector of the lower left control point of the current coding block, as shown in formulas (4), (5) and (6).
Formulas (4) and (5) have been described above. In formulas (4), (5) and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current coding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current coding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current coding block.
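Formulas (4)-(6) are likewise not reproduced here; as a hedged illustration, the sketch below evaluates a generic 6-parameter affine model built from three control points of the adjacent block at the control-point positions of the current block. Function and variable names are hypothetical.

```python
def predict_cp_mvs_6param(neighbor_cps, cur_cp_positions):
    """Sketch: neighbor_cps is a list of three (x, y, vx, vy) tuples for the
    upper-left, upper-right and lower-left control points of the first
    adjacent affine coding block; the 6-parameter model they define is
    evaluated at each (x, y) in cur_cp_positions.
    """
    (x4, y4, vx4, vy4), (x5, y5, vx5, vy5), (x6, y6, vx6, vy6) = neighbor_cps
    w = x5 - x4                      # width of the adjacent block
    h = y6 - y4                      # height of the adjacent block
    predicted = []
    for (x, y) in cur_cp_positions:
        dx, dy = x - x4, y - y4
        vx = vx4 + (vx5 - vx4) / w * dx + (vx6 - vx4) / h * dy
        vy = vy4 + (vy5 - vy4) / w * dx + (vy6 - vy4) / h * dy
        predicted.append((vx, vy))
    return predicted
```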
And secondly, constructing a candidate motion information list by adopting a motion vector prediction method based on control point combination.
Two schemes, denoted scheme a and scheme B, respectively, are exemplified below:
Scheme A: the motion information of 2 control points of the current coding block is combined to construct a 4-parameter affine transformation model. The 2 control points are combined as {CP1, CP4}, {CP2, CP3}, {CP1, CP2}, {CP2, CP4}, {CP1, CP3}, {CP3, CP4}. For example, a 4-parameter affine transformation model constructed using the CP1 and CP2 control points is denoted Affine(CP1, CP2).
It should be noted that combinations of different control points may also be converted into a representation at fixed control-point locations. For example, the 4-parameter affine transformation models obtained from the combinations {CP1, CP4}, {CP2, CP3}, {CP2, CP4}, {CP1, CP3}, {CP3, CP4} are converted into a representation at the control points {CP1, CP2} or {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors of the control points and their coordinate information into formula (9-1) to obtain the model parameters, and then substitute the coordinate information of {CP1, CP2} into the model to obtain its motion vectors, which are used as a group of candidate motion vector predictors.
In formula (9-1), a0, a1, a2 and a3 are parameters of the parametric model, and (x, y) represents the position coordinates.
More directly, a set of motion vector predictors represented by the upper left, upper right and lower left control points can also be obtained by conversion according to the following formulas and added to the candidate motion information list:
Conversion of {CP1, CP2} to {CP1, CP2, CP3} uses equation (9-2):
Conversion of {CP1, CP3} to {CP1, CP2, CP3} uses equation (9-3):
Conversion of {CP2, CP3} to {CP1, CP2, CP3} uses equation (10):
Conversion of {CP1, CP4} to {CP1, CP2, CP3} uses equation (11):
Conversion of {CP2, CP4} to {CP1, CP2, CP3} uses equation (12):
Conversion of {CP3, CP4} to {CP1, CP2, CP3} uses equation (13):
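The conversion equations (9-2) through (13) are not reproduced in this excerpt. As a hedged sketch of the general approach, the code below fits a 4-parameter model to one control-point pair and re-evaluates it at the CP1, CP2 and CP3 positions; the names are hypothetical and the exact patent formulas may differ.

```python
def convert_pair_to_cp123(cp_a, cp_b, W, H):
    """Sketch: cp_a and cp_b are (x, y, vx, vy) tuples for any two of the
    corner control points of the current block (e.g. CP3 and CP4); the
    4-parameter model they define is re-expressed at CP1 (0, 0),
    CP2 (W, 0) and CP3 (0, H).
    """
    (xa, ya, vxa, vya), (xb, yb, vxb, vyb) = cp_a, cp_b
    dx, dy = xb - xa, yb - ya
    d2 = dx * dx + dy * dy
    # Solve the 4-parameter model  vx = a0 + a2*x - a3*y,  vy = a1 + a3*x + a2*y
    a2 = ((vxb - vxa) * dx + (vyb - vya) * dy) / d2
    a3 = ((vyb - vya) * dx - (vxb - vxa) * dy) / d2
    a0 = vxa - a2 * xa + a3 * ya
    a1 = vya - a3 * xa - a2 * ya

    def mv(x, y):
        return (a0 + a2 * x - a3 * y, a1 + a3 * x + a2 * y)

    return [mv(0, 0), mv(W, 0), mv(0, H)]    # CP1, CP2, CP3
```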
Scheme B: the motion information of 3 control points of the current coding block is combined to construct a 6-parameter affine transformation model. The 3 control points are combined as {CP1, CP2, CP4}, {CP1, CP2, CP3}, {CP2, CP3, CP4}, {CP1, CP3, CP4}. For example, a 6-parameter affine transformation model constructed using the CP1, CP2 and CP3 control points is denoted Affine(CP1, CP2, CP3).
It should be noted that combinations of different control points may also be converted into a representation at fixed control-point locations. For example, the 6-parameter affine transformation models of the {CP1, CP2, CP4}, {CP2, CP3, CP4}, {CP1, CP3, CP4} combinations are converted into a representation at the control points {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors of the control points and their coordinate information into formula (14) to obtain the model parameters, and then substitute the coordinate information of {CP1, CP2, CP3} into the model to obtain its motion vectors, which are used as a group of candidate motion vector predictors.
In formula (14), a1, a2, a3, a4, a5 and a6 are parameters of the parametric model, and (x, y) represents the position coordinates.
More directly, a set of motion vector predictors represented by an upper left control point, an upper right control point, and a lower left control point may also be obtained by converting according to the following formula, and added to the candidate motion information list:
Conversion of {CP1, CP2, CP4} to {CP1, CP2, CP3} uses equation (15):
Conversion of {CP2, CP3, CP4} to {CP1, CP2, CP3} uses equation (16):
Conversion of {CP1, CP3, CP4} to {CP1, CP2, CP3} uses equation (17):
It should be noted that the candidate motion information list may be constructed using only the candidate motion vector predictors obtained in the first way, using only the candidate motion vector predictors obtained in the second way, or using both. In addition, the candidate motion information list may be pruned and sorted according to a preconfigured rule, and then truncated or padded to a specific number. When each group of candidate motion vector predictors in the candidate motion information list includes motion vector predictors of three control points, the candidate motion information list may be called a triplet list; when each group includes motion vector predictors of two control points, it may be called a binary group list.
Step S722: the video encoder determines a set of target candidate motion vectors from the list of candidate motion information based on a rate-distortion cost criterion. Specifically, for each candidate motion vector group in the candidate motion information list, a motion vector of each sub-block of the current block is calculated, motion compensation is performed to obtain a predicted value of each sub-block, and thus the predicted value of the current block is obtained. And selecting the candidate motion vector group with the smallest error between the predicted value and the original value as a group of optimal motion vector predicted values, namely a target candidate motion vector group. In addition, the determined target candidate motion vector group is used as an optimal candidate motion vector predicted value of a group of control points, and the target candidate motion vector group corresponds to a unique index number in the candidate motion information list.
Step S723: the video encoder indexes corresponding to the target candidate motion vector group, the reference frame index and the prediction direction into a code stream to be transmitted.
Alternatively, steps S724 to S725 may be performed in addition to the above steps S721 to S723 in the merge mode.
Step S724: and the video encoder obtains the motion vector value of each sub-block in the current coding block by adopting a parametric affine transformation model according to the determined motion vector value of the control point of the current coding block.
Specifically, the target candidate motion vector group includes motion vectors of two control points (upper left control point and upper right control point) or three control points (for example, upper left control point, upper right control point and lower left control point). For each sub-block (a sub-block may be equivalently referred to as a motion compensation unit) of the current coding block, motion information of pixels at preset positions in the motion compensation unit may be used to represent motion information of all pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is less than or equal to the width W of the current coding block, N is less than or equal to the height H of the current coding block, where M, N, W, H are positive integers, typically powers of 2, such as 4, 8, 16, 32, 64, 128, etc.), the preset position pixel point may be the motion compensation unit center point (M/2, N/2), the upper left pixel point (0, 0), the upper right pixel point (M-1, 0), or a pixel point at another position. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the top left vertex pixel of the current coding block are calculated using formula (5), where i is the i-th motion compensation unit in the horizontal direction (left to right), j is the j-th motion compensation unit in the vertical direction (top to bottom), and (x(i,j), y(i,j)) represents the coordinates of the center point of the (i, j)-th motion compensation unit relative to the upper left control point pixel of the current coding block. Then, according to the affine model type of the current coding block (6-parameter or 4-parameter), (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (6-1) or into the 4-parameter affine model formula (6-2) to obtain the motion information of the center point of each motion compensation unit, which is used as the motion vector (vx(i,j), vy(i,j)) of all pixel points in that motion compensation unit.
Optionally, when the current coding block is a 6-parameter coding block and the motion vector of one or more sub-blocks of the current coding block is obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU where the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is calculated according to the 6-parameter affine model constructed by the three control points and the position coordinates (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is calculated according to the 6-parameter affine model constructed by the three control points and the position coordinates (W, H) at the lower right corner of the current coding block. For example, the motion vector of the sub-block in the lower left corner of the current coding block can be obtained by substituting the position coordinates (0, H) of the lower left corner of the current coding block into the 6-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower left corner into the affine model for calculation), and the motion vector of the sub-block in the lower right corner of the current coding block can be obtained by substituting the position coordinates (W, H) of the lower right corner of the current coding block into the 6-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current encoded block are used (e.g., a subsequent other block builds a candidate motion information list for the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), an accurate value is used instead of an estimated value. Where W is the width of the current coding block and H is the height of the current coding block.
Optionally, when the current coding block is a 4-parameter coding block and the motion vector of one or more sub-blocks of the current coding block is obtained based on the target candidate motion vector group, if the lower boundary of the current coding block coincides with the lower boundary of the CTU where the current coding block is located, the motion vector of the sub-block at the lower left corner of the current coding block is calculated according to the 4-parameter affine model constructed by the two control points and the position coordinates (0, H) at the lower left corner of the current coding block, and the motion vector of the sub-block at the lower right corner of the current coding block is calculated according to the 4-parameter affine model constructed by the two control points and the position coordinates (W, H) at the lower right corner of the current coding block. For example, the motion vector of the sub-block in the lower left corner of the current coding block can be obtained by substituting the position coordinates (0, H) of the lower left corner of the current coding block into the 4-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower left corner into the affine model for calculation), and the motion vector of the sub-block in the lower right corner of the current coding block can be obtained by substituting the position coordinates (W, H) of the lower right corner of the current coding block into the four-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current encoded block are used (e.g., a subsequent other block builds a candidate motion information list for the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), an accurate value is used instead of an estimated value. Where W is the width of the current coding block and H is the height of the current coding block.
Step S725: the video encoder performs motion compensation according to the motion vector value of each sub-block in the current coding block to obtain a pixel prediction value of each sub-block, and specifically predicts to obtain the pixel prediction value of the current coding block according to the motion vector of one or more sub-blocks of the current coding block, the reference frame index indicated by the index and the prediction direction.
It can be understood that, when the coding tree unit (CTU) where the first adjacent affine coding block is located is above the position of the current coding block, the information of the lowest control points of the first adjacent affine coding block has already been read from memory; the above scheme therefore builds candidate motion vectors from a first set of control points of the first adjacent affine coding block, the first set of control points comprising the lower left control point and the lower right control point of the first adjacent affine coding block, instead of fixing the upper left control point, the upper right control point and the lower left control point of the first adjacent coding block as the first set of control points as in the prior art. Therefore, by adopting the method for determining the first group of control points in this application, the information (such as position coordinates, motion vectors, etc.) of the first group of control points can directly reuse the information already read from memory, so that memory reads are reduced and coding performance is improved.
Fig. 9 is a flow chart illustrating a process 900 of a decoding method according to one embodiment of the present application. The process 900 may be performed by the video decoder 200, and in particular by the inter prediction unit 210 and the entropy decoding unit (also referred to as entropy decoder) 203 of the video decoder 200. Process 900 is described as a series of steps or operations; it should be understood that process 900 may be performed in various orders and/or concurrently, and is not limited to the order of execution depicted in fig. 9. Assume that the video decoder is decoding a video data stream having a plurality of video frames. If the coding tree unit (CTU) in which a first adjacent affine decoding block is located is above the current decoding block, a set of candidate motion vector predictors is determined based on the lower left control point and the lower right control point of the first adjacent affine decoding block, corresponding to the flow shown in fig. 9, as described below:
if the first neighboring affine decoding block is located above the current decoding block in a Coding Tree Unit (CTU), a set of candidate motion vector predictors is determined based on the lower left control point and the lower right control point of the first neighboring affine decoding block, which is described in detail below:
step S1200: the video decoder determines an inter prediction mode of the current decoded block.
Specifically, the inter prediction mode may be an advanced motion vector prediction (Advanced Motion Vector Prediction, AMVP) mode or a merge (merge) mode.
If it is determined that the inter prediction mode of the current decoded block is AMVP mode, steps S1211 to S1216 are performed.
If it is determined that the inter prediction mode of the current decoded block is the merge mode, steps S1221 to S1225 are performed.
AMVP mode:
step S1211: the video decoder builds a list of candidate motion vector predictors MVP.
Specifically, the video decoder constructs a candidate motion vector predictor MVP list (also referred to as a candidate motion vector list) through an inter-frame prediction unit (also referred to as an inter-frame prediction module). The constructed candidate motion vector predictor MVP list may be a triplet candidate motion vector predictor MVP list or a binary group candidate motion vector predictor MVP list, and it may be constructed in either of the two ways provided below, or by a combination of both. The two ways are as follows:
in one mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion vector prediction value MVP list.
First, all or part of adjacent blocks of the current decoding block are traversed according to a predetermined sequence, so that adjacent affine decoding blocks are determined, and the number of the determined adjacent affine decoding blocks may be one or a plurality of adjacent affine decoding blocks. For example, adjacent blocks A, B, C, D, E shown in fig. 7A may be traversed sequentially to determine adjacent affine decoded blocks in adjacent blocks A, B, C, D, E. The inter prediction unit determines a set of candidate motion vector predictors according to at least one adjacent affine decoding block (each set of candidate motion vector predictors is a binary set or a ternary set), and the following description will take an adjacent affine decoding block as an example, where the adjacent affine decoding block is referred to as a first adjacent affine decoding block for convenience of description, specifically as follows:
A first affine model is determined according to the motion vectors of the control points of the first adjacent affine decoding block, and the motion vectors of the control points of the current decoding block are then predicted according to the first affine model. The way in which the motion vectors of the control points of the current decoding block are predicted from the motion vectors of the control points of the first adjacent affine decoding block differs depending on the parametric model of the current decoding block, so the cases are described separately below.
A. The parametric model of the current decoding block is a 4-parameter affine transformation model, and the derivation can be (as in fig. 9A):
If the first adjacent affine decoding block is located in the CTU above the current decoding block and the first adjacent affine decoding block is a four-parameter affine decoding block, the motion vectors of the two lowest control points of the first adjacent affine decoding block are obtained, for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point, and the position coordinates (x7, y7) and motion vector value (vx7, vy7) of the lower right control point (step S1201).
And forming a first affine model according to the motion vectors and the coordinate positions of the two control points at the bottommost side of the first adjacent affine decoding block (the obtained first affine model is a 4-parameter affine model) (step S1202).
The motion vector of the control point of the current decoding block is predicted according to the first affine model, for example, the position coordinates of the upper left control point and the position coordinates of the upper right control point of the current decoding block may be respectively brought into the first affine model, so that the motion vector of the upper left control point and the motion vector of the upper right control point of the current decoding block are predicted, as shown in formulas (1) and (2) (step S1203).
In formulas (1) and (2), (x0, y0) are the coordinates of the upper left control point of the current decoding block, and (x1, y1) are the coordinates of the upper right control point of the current decoding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block.
Optionally, the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point are both calculated from the position coordinates (x4, y4) of the upper left control point of the first adjacent affine decoding block, where the position coordinates (x6, y6) of the lower left control point of the first adjacent affine decoding block are (x4, y4 + cuH), and the position coordinates (x7, y7) of the lower right control point of the first adjacent affine decoding block are (x4 + cuW, y4 + cuH); cuW is the width of the first adjacent affine decoding block and cuH is the height of the first adjacent affine decoding block. In addition, the motion vector of the lower left control point of the first adjacent affine decoding block is the motion vector of its lower left sub-block, and the motion vector of the lower right control point of the first adjacent affine decoding block is the motion vector of its lower right sub-block. It can be seen that the position coordinates of the lower left control point and the lower right control point of the first adjacent affine decoding block are derived rather than read from memory, so memory reads can be further reduced and decoding performance improved. Alternatively, the position coordinates of the lower left control point and the lower right control point may be stored in memory in advance and then read from memory when the control points are to be used.
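The following short sketch illustrates the coordinate derivation just described; the variable names follow the text above and the helper function is hypothetical.

```python
def derive_bottom_control_points(x4, y4, cuW, cuH):
    """Sketch: derive the positions of the lower-left (x6, y6) and lower-right
    (x7, y7) control points of the first adjacent affine decoding block from
    the position (x4, y4) of its upper-left control point and its size
    (cuW x cuH), instead of reading these coordinates from memory.
    """
    x6, y6 = x4, y4 + cuH            # lower-left control point
    x7, y7 = x4 + cuW, y4 + cuH      # lower-right control point
    return (x6, y6), (x7, y7)
```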
If the first adjacent affine decoding block is located in a Coding Tree Unit (CTU) above the current decoding block and the first adjacent affine decoding block is a six-parameter affine decoding block, generating no candidate motion vector predicted value of a control point of the current block based on the first adjacent affine decoding block.
If the first neighboring affine decoding block is not located above the CTU of the current decoding block, the manner of predicting the motion vector of the control point of the current decoding block is not limited herein. However, for ease of understanding, the following also exemplifies an alternative determination:
The position coordinates and motion vectors of the three control points of the first adjacent affine decoding block can be obtained, for example, the position coordinates (x4, y4) and motion vector value (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector value (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine decoding block.
The position coordinates (x0, y0) of the upper left control point and the position coordinates (x1, y1) of the upper right control point of the current decoding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point and the motion vector of the upper right control point of the current decoding block, as shown in formulas (4) and (5).
In formulas (4) and (5), (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, and (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block.
B. The parametric model of the current decoding block is a 6-parameter affine transformation model, and the deduction mode can be as follows:
If the first adjacent affine decoding block is located in the CTU above the current decoding block and the first adjacent affine decoding block is a four-parameter affine decoding block, the position coordinates and motion vectors of the two lowest control points of the first adjacent affine decoding block are obtained, for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point, and the position coordinates (x7, y7) and motion vector value (vx7, vy7) of the lower right control point.
And forming a first affine model according to the motion vectors of the two control points at the bottommost side of the first adjacent affine decoding block (the first affine model obtained at the moment is a 4-parameter affine model).
The motion vector of the control point of the current decoding block is predicted according to the first affine model, for example, the position coordinates of the upper left control point, the position coordinates of the upper right control point, and the position coordinates of the lower left control point of the current decoding block may be respectively brought into the first affine model, thereby predicting the motion vector of the upper left control point, the motion vector of the upper right control point, and the motion vector of the lower left control point of the current decoding block, as specifically shown in equations (1), (2), and (3).
Formulas (1) and (2) have been described above. In formulas (1), (2) and (3), (x0, y0) are the coordinates of the upper left control point of the current decoding block, (x1, y1) are the coordinates of the upper right control point of the current decoding block, and (x2, y2) are the coordinates of the lower left control point of the current decoding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current decoding block.
If the first adjacent affine decoding block is located in a Coding Tree Unit (CTU) above the current decoding block and the first adjacent affine decoding block is a six-parameter affine decoding block, generating no candidate motion vector predicted value of a control point of the current block based on the first adjacent affine decoding block.
If the first neighboring affine decoding block is not located above the CTU of the current decoding block, the manner of predicting the motion vector of the control point of the current decoding block is not limited herein. However, for ease of understanding, the following also exemplifies an alternative determination:
The position coordinates and motion vectors of the three control points of the first adjacent affine decoding block can be obtained, for example, the position coordinates (x4, y4) and motion vector value (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector value (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
And forming a 6-parameter affine model according to the position coordinates and the motion vectors of the three control points of the first adjacent affine decoding block.
The position coordinates (x0, y0) of the upper left control point, the position coordinates (x1, y1) of the upper right control point, and the position coordinates (x2, y2) of the lower left control point of the current decoding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point and the motion vector of the lower left control point of the current decoding block, as shown in formulas (4), (5) and (6).
Formulas (4) and (5) have been described above. In formulas (4), (5) and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current decoding block.
And secondly, constructing a candidate motion vector predicted value MVP list by adopting a motion vector prediction method based on control point combination.
The manner in which the candidate motion vector predictor MVP list is constructed when the parametric model of the current decoded block is different is also different, as described below.
A. The parametric model of the current decoding block is a 4-parameter affine transformation model, and the deduction mode can be as follows:
The motion vectors of the top left vertex and the top right vertex of the current decoding block are predicted using the motion information of the decoded blocks adjacent to the current decoding block. As shown in fig. 7B: first, the motion vector of the decoded block A and/or B and/or C adjacent to the top left vertex is used as a candidate motion vector of the top left vertex of the current decoding block, and the motion vector of the decoded block D and/or E adjacent to the top right vertex is used as a candidate motion vector of the top right vertex of the current decoding block. The candidate motion vectors of the top left vertex and the candidate motion vectors of the top right vertex are then combined, each combination giving a group of candidate motion vector predictors, and the multiple groups obtained in this way form the candidate motion vector predictor MVP list.
B. The current decoding block parameter model is a 6-parameter affine transformation model, and the deduction mode can be as follows:
The motion vectors of the top left vertex, the top right vertex and the bottom left vertex of the current decoding block are predicted using the motion information of the decoded blocks adjacent to the current decoding block. As shown in fig. 7B: first, the motion vector of the decoded block A and/or B and/or C adjacent to the top left vertex is used as a candidate motion vector of the top left vertex of the current decoding block; the motion vector of the decoded block D and/or E adjacent to the top right vertex is used as a candidate motion vector of the top right vertex of the current decoding block; and the motion vector of the decoded block F and/or G adjacent to the bottom left vertex is used as a candidate motion vector of the bottom left vertex of the current decoding block. The candidate motion vectors of the top left vertex, the top right vertex and the bottom left vertex are then combined, each combination giving a group of candidate motion vector predictors, and the multiple groups obtained in this way form the candidate motion vector predictor MVP list.
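As a hedged illustration of this combination-based construction, the sketch below enumerates all combinations of the per-vertex candidate motion vectors; the container names are hypothetical and pruning/ordering rules are omitted.

```python
from itertools import product

def build_combination_mvp_list(tl_cands, tr_cands, bl_cands=None):
    """Sketch: combine per-vertex candidate motion vectors into an MVP list.

    tl_cands: candidate MVs for the top-left vertex (from blocks A/B/C).
    tr_cands: candidate MVs for the top-right vertex (from blocks D/E).
    bl_cands: candidate MVs for the bottom-left vertex (from blocks F/G),
              only used for the 6-parameter affine transformation model.
    """
    if bl_cands is None:                      # 4-parameter model: binary groups
        return [list(pair) for pair in product(tl_cands, tr_cands)]
    # 6-parameter model: triplets of control-point motion vectors
    return [list(triple) for triple in product(tl_cands, tr_cands, bl_cands)]
```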
It should be noted that the candidate motion vector predictor MVP list may be constructed using only the candidate motion vector predictors obtained in the first way, using only the candidate motion vector predictors obtained in the second way, or using both. In addition, the candidate motion vector predictor MVP list may be pruned and sorted according to a preconfigured rule, and then truncated or padded to a specific number. When each group of candidate motion vector predictors in the candidate motion vector predictor MVP list includes motion vector predictors of three control points, the list may be called a triplet list; when each group includes motion vector predictors of two control points, it may be called a binary group list.
Step S1212: the video decoder parses the code stream to obtain the index and motion vector difference MVD.
In particular, the video decoder may parse the bitstream through the entropy decoding unit, the index indicating a set of target candidate motion vectors for the current decoded block, the target candidate motion vectors representing motion vector predictors for a set of control points of the current decoded block.
Step S1213: the video decoder determines a target set of motion vectors from the list of candidate motion vector predictors MVP according to the index.
Specifically, the video decoder uses the target candidate motion vector group determined from the candidate motion vector predictor MVP list according to the index as the optimal candidate motion vector predictor (alternatively, when the length of the candidate motion vector predictor MVP list is 1, the target candidate motion vector group may be determined directly without parsing the code stream for the index). The optimal motion vector predictor is briefly described below.
If the parameter model of the current decoding block is a 4-parameter affine transformation model, selecting an optimal motion vector predicted value of 2 control points from the candidate motion vector predicted value MVP list established above; for example, the video decoder parses an index number from a code stream, and determines optimal motion vector predictors of 2 control points from a binary set of candidate motion vector predictors MVP list according to the index number, where each set of candidate motion vector predictors in the candidate motion vector predictor MVP list corresponds to a respective index number.
If the parameter model of the current decoding block is a 6-parameter affine transformation model, selecting optimal motion vector predicted values of 3 control points from the candidate motion vector predicted value MVP list established above; for example, the video decoder parses the index number from the code stream, and determines the optimal motion vector predictors of 3 control points from the candidate motion vector predictor MVP list of the triplet according to the index number, where each set of candidate motion vector predictors in the candidate motion vector predictor MVP list corresponds to a respective index number.
Step S1214: the video decoder determines a motion vector of a control point of the current decoding block based on the target candidate motion vector group and the motion vector difference MVD parsed from the bitstream.
If the parameter model of the current decoding block is a 4-parameter affine transformation model, decoding to obtain motion vector difference values of 2 control points of the current decoding block from the code stream, and obtaining a new candidate motion vector group according to the motion vector difference values of the control points and the target candidate motion vector group indicated by the index. For example, the motion vector difference MVD of the upper left control point and the motion vector difference MVD of the upper right control point are decoded from the code stream, and added to the motion vectors of the upper left control point and the upper right control point in the target candidate motion vector group, respectively, to thereby obtain a new candidate motion vector group, and thus the new candidate motion vector group includes new motion vector values of the upper left control point and the upper right control point of the current decoding block.
Alternatively, the motion vector value of the 3rd control point can be obtained using a 4-parameter affine transformation model from the motion vector values of the 2 control points of the current decoding block in the new candidate motion vector group. For example, the motion vector (vx0, vy0) of the upper left control point and the motion vector (vx1, vy1) of the upper right control point of the current decoding block are obtained, and then the motion vector (vx2, vy2) of the lower left control point (x2, y2) of the current decoding block is calculated according to formula (7).
Where (x0, y0) are the position coordinates of the upper left control point, (x1, y1) are the position coordinates of the upper right control point, W is the width of the current decoding block, and H is the height of the current decoding block.
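Formula (7) is not reproduced in this excerpt; the sketch below uses the standard 4-parameter affine relation to derive the lower-left control point motion vector from the upper-left and upper-right control points, which is assumed to match the intent of formula (7). The function name is hypothetical.

```python
def derive_lower_left_cp_mv(vx0, vy0, vx1, vy1, W, H):
    """Sketch: given the upper-left (vx0, vy0) and upper-right (vx1, vy1)
    control point motion vectors of a WxH block, derive the lower-left
    control point motion vector (vx2, vy2) under a 4-parameter affine model.
    """
    vx2 = vx0 - (vy1 - vy0) * H / W   # rotation/scaling term applied over the height
    vy2 = vy0 + (vx1 - vx0) * H / W
    return vx2, vy2
```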
If the current decoding block parameter model is a 6-parameter affine transformation model, decoding to obtain motion vector difference values of 3 control points of the current decoding block from the code stream, and obtaining a new candidate motion vector group according to the motion vector difference MVD of each control point and the target candidate motion vector group indicated by the index. For example, the motion vector difference MVD of the upper left control point, the motion vector difference MVD of the upper right control point, and the motion vector difference of the lower left control point are decoded from the code stream, and are added to the motion vectors of the upper left control point, the upper right control point, and the lower left control point in the target candidate motion vector group, respectively, to obtain a new candidate motion vector group, so that the new candidate motion vector group includes the motion vector values of the upper left control point, the upper right control point, and the lower left control point of the current decoding block.
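A minimal sketch of combining the parsed motion vector differences with the target candidate motion vector group, covering both the 4-parameter and 6-parameter cases described above; the names are hypothetical.

```python
def apply_mvds(target_candidates, mvds):
    """Sketch: add each control point's motion vector difference (MVD) to the
    corresponding predictor in the target candidate motion vector group.

    target_candidates: [(vpx, vpy), ...] for 2 or 3 control points.
    mvds:              [(dx, dy), ...] parsed from the code stream, same length.
    """
    return [(vpx + dx, vpy + dy)
            for (vpx, vpy), (dx, dy) in zip(target_candidates, mvds)]
```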
Step S1215: the video decoder obtains the motion vector value of each sub-block in the current decoding block by adopting an affine transformation model according to the determined motion vector value of the control point of the current decoding block.
Specifically, the new candidate motion vector group obtained based on the target candidate motion vector group and MVD includes motion vectors of two (upper left control point and upper right control point) or three control points (for example, upper left control point, upper right control point and lower left control point). For each sub-block (a sub-block may be equivalently referred to as a motion compensation unit) of the current decoding block, motion information of pixels at preset positions in the motion compensation unit may be used to represent motion information of all pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is less than or equal to the width W of the current decoding block, N is less than or equal to the height H of the current decoding block, where M, N, W, H is a positive integer, typically a power of 2, such as 4, 8, 16, 32, 64, 128, etc.), the preset position pixel point may be a motion compensation unit center point (M/2, N/2), an upper left pixel point (0, 0), an upper right pixel point (M-1, 0), or a pixel point at another position. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the top left vertex pixel of the current decoding block are calculated using formula (8-1), where i is the i-th motion compensation unit in the horizontal direction (left to right), j is the j-th motion compensation unit in the vertical direction (top to bottom), and (x(i,j), y(i,j)) represents the coordinates of the center point of the (i, j)-th motion compensation unit relative to the upper left control point pixel of the current decoding block. Then, according to the affine model type of the current decoding block (6-parameter or 4-parameter), (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (8-2) or into the 4-parameter affine model formula (8-3) to obtain the motion information of the center point of each motion compensation unit, which is used as the motion vector (vx(i,j), vy(i,j)) of all pixel points in that motion compensation unit.
Optionally, when the current decoding block is a 6-parameter decoding block and the motion vector of one or more sub-blocks of the current decoding block is obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU where the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to the 6-parameter affine model constructed by the three control points and the position coordinates (0, H) of the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to the 6-parameter affine model constructed by the three control points and the position coordinates (W, H) of the lower right corner of the current decoding block. For example, the motion vector of the sub-block in the lower left corner of the current decoding block can be obtained by substituting the position coordinates (0, H) in the lower left corner of the current decoding block into the 6-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower left corner into the affine model for calculation), and the motion vector of the sub-block in the lower right corner of the current decoding block can be obtained by substituting the position coordinates (W, H) in the lower right corner of the current decoding block into the 6-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoding block are used (e.g., the subsequent other block builds the candidate motion vector predictor MVP list for the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact value is used instead of the estimated value. Where W is the width of the current decoded block and H is the height of the current decoded block.
Optionally, when the current decoding block is a 4-parameter decoding block and the motion vector of one or more sub-blocks of the current decoding block is obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU where the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to the 4-parameter affine model constructed by the two control points and the position coordinates (0, H) of the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to the 4-parameter affine model constructed by the two control points and the position coordinates (W, H) of the lower right corner of the current decoding block. For example, the motion vector of the sub-block in the lower left corner of the current decoding block can be obtained by substituting the position coordinates (0, H) in the lower left corner of the current decoding block into the 4-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower left corner into the affine model for calculation), and the motion vector of the sub-block in the lower right corner of the current decoding block can be obtained by substituting the position coordinates (W, H) in the lower right corner of the current decoding block into the four-parameter affine model (instead of substituting the center point coordinates of the sub-block in the lower right corner into the affine model for calculation). In this way, when the motion vector of the lower left control point and the motion vector of the lower right control point of the current decoding block are used (e.g., the subsequent other block builds the candidate motion vector predictor MVP list for the other block based on the motion vectors of the lower left control point and the lower right control point of the current block), the exact value is used instead of the estimated value. Where W is the width of the current decoded block and H is the height of the current decoded block.
Step S1216: the video decoder performs motion compensation according to the motion vector value of each sub-block in the current decoding block to obtain a pixel prediction value of each sub-block, for example, by using the motion vector of each sub-block and the reference frame index value, finding a corresponding sub-block in the reference frame, and performing interpolation filtering to obtain the pixel prediction value of each sub-block.
Merge mode:
step S1221: the video decoder builds a candidate motion information list.
Specifically, the video decoder constructs a candidate motion information list (also referred to as a candidate motion vector list) through an inter-prediction unit (also referred to as an inter-prediction module). The list may be constructed in either of the two ways provided below, or by a combination of both, and the constructed candidate motion information list is a triplet candidate motion information list. The two ways are as follows:
in one mode, a motion vector prediction method based on a motion model is adopted to construct a candidate motion information list.
First, all or part of adjacent blocks of the current decoding block are traversed according to a predetermined sequence, so that adjacent affine decoding blocks are determined, and the number of the determined adjacent affine decoding blocks may be one or a plurality of adjacent affine decoding blocks. For example, adjacent blocks A, B, C, D, E shown in fig. 7A may be traversed sequentially to determine adjacent affine decoded blocks in adjacent blocks A, B, C, D, E. The inter prediction unit determines a set of candidate motion vector predictors according to each adjacent affine decoding block (each set of candidate motion vector predictors is a binary set or a ternary set), and the following description will take an adjacent affine decoding block as an example, and the adjacent affine decoding block is called as a first adjacent affine decoding block for convenience of description, specifically as follows:
Determining a first affine model according to the motion vector of the control point of the first adjacent affine decoding block, and further predicting the motion vector of the control point of the current decoding block according to the first affine model, wherein the method is specifically described as follows:
if the first adjacent affine decoding block is located in a CTU above the current decoding block and the first adjacent affine decoding block is a four-parameter affine decoding block, the position coordinates and motion vectors of the two bottom-most control points of the first adjacent affine decoding block are obtained, for example, the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point and the position coordinates (x7, y7) and motion vector (vx7, vy7) of the lower right control point.
A first affine model is then formed according to the motion vectors of the two bottom-most control points of the first adjacent affine decoding block (the first affine model obtained in this case is a 4-parameter affine model).
Optionally, the motion vector of the control point of the current decoding block is predicted according to the first affine model. For example, the position coordinates of the upper left control point, the upper right control point and the lower left control point of the current decoding block may each be substituted into the first affine model, so that the motion vectors of the upper left, upper right and lower left control points of the current decoding block are predicted to form a candidate motion vector triplet, which is added to the candidate motion information list, as shown in formulas (1), (2) and (3).
Alternatively, the motion vector of the control point of the current decoding block is predicted according to the first affine model by substituting only the position coordinates of the upper left control point and the upper right control point of the current decoding block into the first affine model, so that the motion vectors of the upper left and upper right control points of the current decoding block are predicted to form a candidate motion vector doublet, which is added to the candidate motion information list, as shown in formulas (1) and (2).
In formulas (1), (2) and (3), (x0, y0) are the coordinates of the upper left control point of the current decoding block, (x1, y1) are the coordinates of the upper right control point of the current decoding block, and (x2, y2) are the coordinates of the lower left control point of the current decoding block; in addition, (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, (vx1, vy1) is the predicted motion vector of the upper right control point, and (vx2, vy2) is the predicted motion vector of the lower left control point.
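For illustration, the derivation just described can be sketched as follows; the 4-parameter model form used here (one common form, anchored at the neighbour's lower left control point) and all identifiers are assumptions of this example, not definitions taken from this application.

```cpp
struct CtrlPoint { double x, y, vx, vy; };          // position and motion vector

// Assumed 4-parameter affine model anchored at the neighbour's lower left control
// point (x6, y6); p7 is the lower right control point, so p7.x - p6.x equals the
// neighbour block width cuW and the two points are horizontally aligned.
struct Affine4 { double a, b; CtrlPoint anchor; };

Affine4 model4(const CtrlPoint& p6, const CtrlPoint& p7) {
    double w = p7.x - p6.x;
    return { (p7.vx - p6.vx) / w, (p7.vy - p6.vy) / w, p6 };
}

// Evaluate the model at an arbitrary position (x, y), e.g. a control point of the
// current decoding block, to obtain its predicted motion vector.
CtrlPoint evalAt(const Affine4& m, double x, double y) {
    double dx = x - m.anchor.x, dy = y - m.anchor.y;
    return { x, y, m.anchor.vx + m.a * dx - m.b * dy,
                   m.anchor.vy + m.b * dx + m.a * dy };
}

// Predict the current block's control-point MVs (triplet case) from the neighbour's
// two bottom control points, as in formulas (1), (2) and (3).
void inheritFromBottomCtrlPoints(const CtrlPoint& p6, const CtrlPoint& p7,
                                 double x0, double y0, double x1, double y1,
                                 double x2, double y2,
                                 CtrlPoint& cp0, CtrlPoint& cp1, CtrlPoint& cp2) {
    Affine4 m = model4(p6, p7);
    cp0 = evalAt(m, x0, y0);   // upper left control point of the current block
    cp1 = evalAt(m, x1, y1);   // upper right control point
    cp2 = evalAt(m, x2, y2);   // lower left control point
}
```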
If the first adjacent affine decoding block is located in a coding tree unit (CTU) above the current decoding block and the first adjacent affine decoding block is a six-parameter affine decoding block, no candidate motion vector predictor for the control points of the current block is generated based on the first adjacent affine decoding block.
If the first adjacent affine decoding block is not located in a CTU above the current decoding block, the manner of predicting the motion vector of the control point of the current decoding block is not limited here. However, for ease of understanding, an optional determination is exemplified below:
the position coordinates and motion vectors of the three control points of the first adjacent affine decoding block can be obtained, for example, the position coordinates (x4, y4) and motion vector (vx4, vy4) of the upper left control point, the position coordinates (x5, y5) and motion vector (vx5, vy5) of the upper right control point, and the position coordinates (x6, y6) and motion vector (vx6, vy6) of the lower left control point.
A 6-parameter affine model is then formed according to the position coordinates and motion vectors of the three control points of the first adjacent affine decoding block.
The position coordinates (x0, y0) of the upper left control point, the position coordinates (x1, y1) of the upper right control point and the position coordinates (x2, y2) of the lower left control point of the current decoding block are substituted into the 6-parameter affine model to predict the motion vector of the upper left control point, the motion vector of the upper right control point and the motion vector of the lower left control point of the current decoding block, as shown in formulas (4), (5) and (6).
Formulas (4) and (5) have been described above. In formulas (4), (5) and (6), (vx0, vy0) is the predicted motion vector of the upper left control point of the current decoding block, (vx1, vy1) is the predicted motion vector of the upper right control point of the current decoding block, and (vx2, vy2) is the predicted motion vector of the lower left control point of the current decoding block.
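A corresponding sketch for the 6-parameter case follows; the model form (anchored at the neighbour's upper left control point, with independent horizontal and vertical gradients) is one common formulation assumed for illustration, and the identifiers are not taken from this application.

```cpp
struct Cp { double x, y, vx, vy; };                 // control point: position and MV

// Assumed 6-parameter affine model built from the neighbour's upper left (p4),
// upper right (p5) and lower left (p6) control points.
struct Affine6 {
    double dvxdx, dvydx;   // gradients of (vx, vy) in the horizontal direction
    double dvxdy, dvydy;   // gradients of (vx, vy) in the vertical direction
    Cp anchor;             // the neighbour's upper left control point
};

Affine6 model6(const Cp& p4, const Cp& p5, const Cp& p6) {
    double cuW = p5.x - p4.x;                       // neighbour block width
    double cuH = p6.y - p4.y;                       // neighbour block height
    return { (p5.vx - p4.vx) / cuW, (p5.vy - p4.vy) / cuW,
             (p6.vx - p4.vx) / cuH, (p6.vy - p4.vy) / cuH, p4 };
}

// Evaluate the model at a control point position of the current decoding block,
// which is what formulas (4), (5) and (6) do for (x0, y0), (x1, y1) and (x2, y2).
Cp eval6(const Affine6& m, double x, double y) {
    double dx = x - m.anchor.x, dy = y - m.anchor.y;
    return { x, y, m.anchor.vx + m.dvxdx * dx + m.dvxdy * dy,
                   m.anchor.vy + m.dvydx * dx + m.dvydy * dy };
}
```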
And secondly, constructing a candidate motion information list by adopting a motion vector prediction method based on control point combination.
Two schemes, denoted Scheme A and Scheme B respectively, are exemplified below:
Scheme A: the motion information of 2 control points of the current decoding block is combined to construct a 4-parameter affine transformation model. The 2 control points are combined as {CP1, CP4}, {CP2, CP3}, {CP1, CP2}, {CP2, CP4}, {CP1, CP3}, {CP3, CP4}. For example, a 4-parameter Affine transformation model constructed using the control points CP1 and CP2 is denoted as Affine (CP1, CP2).
It should be noted that combinations of different control points may also be converted into control points at the same locations. For example, the 4-parameter affine transformation models obtained from the combinations {CP1, CP4}, {CP2, CP3}, {CP2, CP4}, {CP1, CP3}, {CP3, CP4} are converted into a representation by the control points {CP1, CP2} or {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors of the control points and their coordinate information into formula (9-1) to obtain the model parameters, and then substitute the coordinate information of {CP1, CP2} into the model parameters to obtain their motion vectors, which are used as a set of candidate motion vector predictors.
In formula (9-1), a0, a1, a2 and a3 are parameters of the parametric model, and (x, y) represents the position coordinates.
More directly, a set of motion vector predictors represented by the upper left control point and the upper right control point can also be obtained by conversion according to the following formulas and added to the candidate motion information list (a generic code sketch of these conversions is given after the list):
conversion of { CP1, CP2} yields { CP1, CP2, CP3} equation (9-2):
conversion of { CP1, CP3} yields { CP1, CP2, CP3} of equation (9-3):
conversion of { CP2, CP3} yields { CP1, CP2, CP3} as equation (10):
conversion of { CP1, CP4} yields { CP1, CP2, CP3} as equation (11):
conversion of { CP2, CP4} yields { CP1, CP2, CP3} as equation (12):
conversion of { CP3, CP4} yields equation (13) for { CP1, CP2, CP3 }:
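The bodies of formulas (9-2) to (13) are not reproduced here, but the effect of these conversions can be illustrated generically: fit the assumed 4-parameter model vx = a*x - b*y + c, vy = b*x + a*y + d from any two available control points, then re-evaluate it at the corner positions of {CP1, CP2, CP3}, i.e. (0, 0), (W, 0) and (0, H) of the current block. The sketch below makes that assumption explicit; it is not a reproduction of the application's formulas.

```cpp
#include <array>

struct CpMv { double x, y, vx, vy; };               // control point position and MV

// Fit the assumed 4-parameter model from two distinct control points p and q, then
// evaluate it at (0, 0), (W, 0) and (0, H) to obtain the {CP1, CP2, CP3} predictors.
std::array<CpMv, 3> convertToCp123(const CpMv& p, const CpMv& q, double W, double H) {
    double dx = q.x - p.x, dy = q.y - p.y;
    double du = q.vx - p.vx, dv = q.vy - p.vy;
    double n  = dx * dx + dy * dy;                  // nonzero for distinct points
    double a  = (du * dx + dv * dy) / n;
    double b  = (dv * dx - du * dy) / n;
    double c  = p.vx - a * p.x + b * p.y;
    double d  = p.vy - b * p.x - a * p.y;
    auto mv = [&](double x, double y) {
        return CpMv{ x, y, a * x - b * y + c, b * x + a * y + d };
    };
    return { mv(0, 0), mv(W, 0), mv(0, H) };        // CP1, CP2, CP3
}
```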
Scheme B: the motion information of 3 control points of the current decoding block is combined to construct a 6-parameter affine transformation model. The 3 control points are combined as {CP1, CP2, CP4}, {CP1, CP2, CP3}, {CP2, CP3, CP4}, {CP1, CP3, CP4}. For example, a 6-parameter Affine transformation model constructed using the control points CP1, CP2 and CP3 is denoted as Affine (CP1, CP2, CP3).
It should be noted that combinations of different control points may also be converted into control points at the same locations. For example, the 6-parameter affine transformation models obtained from the combinations {CP1, CP2, CP4}, {CP2, CP3, CP4}, {CP1, CP3, CP4} are converted into a representation by the control points {CP1, CP2, CP3}. The conversion method is to substitute the motion vectors of the control points and their coordinate information into formula (14) to obtain the model parameters, and then substitute the coordinate information of {CP1, CP2, CP3} into the model parameters to obtain their motion vectors, which are used as a set of candidate motion vector predictors.
In formula (14), a1, a2, a3, a4, a5 and a6 are parameters of the parametric model, and (x, y) represents the position coordinates.
More directly, a set of motion vector predictors represented by an upper left control point, an upper right control point, and a lower left control point may also be obtained by converting according to the following formula, and added to the candidate motion information list:
conversion of { CP1, CP2, CP4} yields equation (15) for { CP1, CP2, CP3 }:
conversion of { CP2, CP3, CP4} yields equation (16) for { CP1, CP2, CP3 }:
conversion of { CP1, CP3, CP4} yields equation (17) for { CP1, CP2, CP3 }:
It should be noted that the candidate motion information list may be constructed using only the candidate motion vector predictors obtained in the first manner, using only the candidate motion vector predictors obtained in the second manner, or using the candidate motion vector predictors obtained in both manners. In addition, the candidate motion information list may be pruned and sorted according to a pre-configured rule, and then truncated or padded to a specific number. When each group of candidate motion vector predictors in the candidate motion information list includes the motion vector predictors of three control points, the candidate motion information list may be called a triplet list; when each group of candidate motion vector predictors in the candidate motion information list includes the motion vector predictors of two control points, the candidate motion information list may be called a doublet list.
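The pruning, truncation and padding step can be illustrated with the following sketch; the duplicate-removal criterion (exact equality) and the padding rule (repeat the last entry) are assumptions chosen for the example, and the pre-configured rule of an actual codec may differ.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One candidate: the motion vector predictors of 2 or 3 control points.
struct MvCand {
    std::vector<std::pair<double, double>> cpMv;
    bool operator==(const MvCand& o) const { return cpMv == o.cpMv; }
};

// Remove duplicates, then truncate or pad the list to a fixed length.
void pruneAndTruncate(std::vector<MvCand>& list, std::size_t maxLen) {
    std::vector<MvCand> pruned;
    for (const MvCand& c : list) {                  // keep first occurrence, drop duplicates
        bool dup = false;
        for (const MvCand& k : pruned)
            if (k == c) { dup = true; break; }
        if (!dup) pruned.push_back(c);
    }
    if (pruned.size() > maxLen)
        pruned.resize(maxLen);                      // truncate to the specific number
    while (!pruned.empty() && pruned.size() < maxLen)
        pruned.push_back(pruned.back());            // pad, e.g. by repeating the last entry
    list = std::move(pruned);
}
```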
Step S1222: the video decoder parses the code stream to obtain an index.
In particular, the video decoder may parse the bitstream through the entropy decoding unit, the index indicating a set of target candidate motion vectors for the current decoded block, the target candidate motion vectors representing motion vector predictors for a set of control points of the current decoded block.
Step S1223: the video decoder determines a set of target motion vectors from the list of candidate motion information based on the index. Specifically, the frequency decoder determines a target candidate motion vector group from the candidate motion vectors according to the index, and uses the target candidate motion vector group as an optimal candidate motion vector predicted value (alternatively, when the length of the candidate motion information list is 1, the target motion vector group can be directly determined without analyzing the code stream to obtain the index), specifically, the optimal motion vector predicted value of 2 or 3 control points; for example, the video decoder parses the index number from the code stream, and then determines the optimal motion vector predictors for 2 or 3 control points from the candidate motion information list according to the index number, where each set of candidate motion vector predictors in the candidate motion information list corresponds to a respective index number.
Step S1224: the video decoder obtains the motion vector value of each sub-block in the current decoding block by adopting a parametric affine transformation model according to the determined motion vector value of the control point of the current decoding block.
Specifically, the motion vectors of the control points are the motion vectors of two control points (the upper left control point and the upper right control point) or three control points (e.g., the upper left control point, the upper right control point and the lower left control point) included in the target candidate motion vector group. For each sub-block of the current decoding block (a sub-block may equivalently be referred to as a motion compensation unit), the motion information of a pixel at a preset position in the motion compensation unit may be used to represent the motion information of all pixels in the motion compensation unit. Assuming that the size of the motion compensation unit is MxN (M is less than or equal to the width W of the current decoding block, N is less than or equal to the height H of the current decoding block, where M, N, W and H are positive integers, typically powers of 2, such as 4, 8, 16, 32, 64, 128, etc.), the pixel at the preset position may be the center point (M/2, N/2) of the motion compensation unit, its upper left pixel (0, 0), its upper right pixel (M-1, 0), or a pixel at another position. Fig. 8A illustrates a 4x4 motion compensation unit and fig. 8B illustrates an 8x8 motion compensation unit.
The coordinates of the center point of each motion compensation unit relative to the pixel at the top left vertex of the current decoding block are calculated using formula (5), where i is the index of the i-th motion compensation unit in the horizontal direction (from left to right), j is the index of the j-th motion compensation unit in the vertical direction (from top to bottom), and (x(i,j), y(i,j)) represents the coordinates of the center point of the (i, j)-th motion compensation unit relative to the pixel at the upper left control point of the current decoding block. Then, according to the affine model type of the current decoding block (6-parameter or 4-parameter), (x(i,j), y(i,j)) is substituted into the 6-parameter affine model formula (6-1) or into the 4-parameter affine model formula (6-2) to obtain the motion information of the center point of each motion compensation unit, which is used as the motion vector (vx(i,j), vy(i,j)) of all pixels in that motion compensation unit.
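As an illustration of this per-sub-block derivation, the sketch below loops over MxN motion compensation units, computes each centre coordinate and substitutes it into an assumed 6-parameter affine model anchored at the top left corner of the block (the 4-parameter case follows by deriving the vertical gradients from the horizontal ones); the exact formulas (5), (6-1) and (6-2) of the application are not reproduced here.

```cpp
#include <vector>

struct Mv2 { double x, y; };

// v0, v1, v2: control-point MVs at (0, 0), (W, 0) and (0, H) of the current block.
// M x N is the motion compensation unit size; one MV is produced per unit.
std::vector<Mv2> deriveSubBlockMvs(Mv2 v0, Mv2 v1, Mv2 v2,
                                   int W, int H, int M, int N) {
    std::vector<Mv2> out;
    for (int j = 0; j * N < H; ++j) {               // j-th unit, top to bottom
        for (int i = 0; i * M < W; ++i) {           // i-th unit, left to right
            double x = M * i + M / 2.0;             // centre relative to top left corner
            double y = N * j + N / 2.0;
            // assumed 6-parameter affine model anchored at the top left control point
            double vx = v0.x + (v1.x - v0.x) / W * x + (v2.x - v0.x) / H * y;
            double vy = v0.y + (v1.y - v0.y) / W * x + (v2.y - v0.y) / H * y;
            out.push_back({ vx, vy });
        }
    }
    return out;
}
```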
Optionally, when the current decoding block is a 6-parameter decoding block and the motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU where the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to the 6-parameter affine model constructed from the three control points and the position coordinates (0, H) of the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to the 6-parameter affine model constructed from the three control points and the position coordinates (W, H) of the lower right corner of the current decoding block. In other words, the motion vector of the lower left sub-block is obtained by substituting the position coordinates (0, H) of the lower left corner of the current decoding block into the 6-parameter affine model (instead of substituting the center point coordinates of that sub-block into the affine model), and the motion vector of the lower right sub-block is obtained by substituting the position coordinates (W, H) of the lower right corner of the current decoding block into the 6-parameter affine model (instead of substituting the center point coordinates of that sub-block into the affine model). In this way, when the motion vectors of the lower left and lower right control points of the current decoding block are later used (for example, when a subsequent block constructs its candidate motion information list based on the motion vectors of the lower left and lower right control points of the current block), exact values are used instead of estimated values. Here, W is the width of the current decoding block and H is the height of the current decoding block.
Optionally, when the current decoding block is a 4-parameter decoding block and the motion vectors of one or more sub-blocks of the current decoding block are obtained based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU where the current decoding block is located, the motion vector of the sub-block at the lower left corner of the current decoding block is calculated according to the 4-parameter affine model constructed from the two control points and the position coordinates (0, H) of the lower left corner of the current decoding block, and the motion vector of the sub-block at the lower right corner of the current decoding block is calculated according to the 4-parameter affine model constructed from the two control points and the position coordinates (W, H) of the lower right corner of the current decoding block. In other words, the motion vector of the lower left sub-block is obtained by substituting the position coordinates (0, H) of the lower left corner of the current decoding block into the 4-parameter affine model (instead of substituting the center point coordinates of that sub-block into the affine model), and the motion vector of the lower right sub-block is obtained by substituting the position coordinates (W, H) of the lower right corner of the current decoding block into the 4-parameter affine model (instead of substituting the center point coordinates of that sub-block into the affine model). In this way, when the motion vectors of the lower left and lower right control points of the current decoding block are later used (for example, when a subsequent block constructs its candidate motion information list based on the motion vectors of the lower left and lower right control points of the current block), exact values are used instead of estimated values. Here, W is the width of the current decoding block and H is the height of the current decoding block.
Step S1225: The video decoder performs motion compensation according to the motion vector value of each sub-block in the current decoding block to obtain the pixel prediction value of each sub-block; specifically, the pixel prediction value of the current decoding block is obtained by prediction according to the motion vectors of one or more sub-blocks of the current decoding block, the reference frame index indicated by the index, and the prediction direction.
It can be understood that, when the CTU where the first adjacent affine decoding block is located is above the position of the current decoding block, the information of the bottom-most control points of the first adjacent affine decoding block has already been read from memory. The above scheme therefore builds candidate motion vectors from a first set of control points of the first adjacent affine decoding block, the first set of control points comprising the lower left control point and the lower right control point of the first adjacent affine decoding block, instead of always fixing the upper left, upper right and lower left control points of the first adjacent decoding block as the first set of control points (or fixing the upper left and upper right control points as the first set of control points) as in the prior art. Therefore, with the method of determining the first set of control points in this application, the information of the first set of control points (such as position coordinates and motion vectors) can directly reuse the information already read from memory, which reduces memory reads and improves decoding performance.
Fig. 10 is a schematic block diagram of one implementation of an encoding device or decoding device (simply referred to as a decoding device 1000) of an embodiment of the present application. The decoding device 1000 may include, among other things, a processor 1010, a memory 1030, and a bus system 1050. The processor is connected with the memory through the bus system, the memory is used for storing instructions, and the processor is used for executing the instructions stored by the memory. The memory of the encoding device stores program codes, and the processor may invoke the program codes stored in the memory to perform the various video encoding or decoding methods described herein, particularly the video encoding or decoding methods in various new inter prediction modes, and the methods of predicting motion information in various new inter prediction modes. To avoid repetition, a detailed description is not provided herein.
In this embodiment, the processor 1010 may be a central processing unit (Central Processing Unit, abbreviated as "CPU"), or the processor 1010 may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 1030 may include a Read Only Memory (ROM) device or a Random Access Memory (RAM) device. Any other suitable type of storage device may also be used as memory 1030. Memory 1030 may include code and data 1031 that is accessed by processor 1010 using bus 1050. Memory 1030 may further include an operating system 1033 and application programs 1035, the application programs 1035 including at least one program that allows processor 1010 to perform the video encoding or decoding methods described herein, and in particular the encoding or decoding methods described herein. For example, the application programs 1035 may include applications 1 to N, which further include a video encoding or decoding application (simply referred to as a video coding application) that performs the video encoding or decoding method described in the present application.
The bus system 1050 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, the various buses are labeled in the figure as bus system 1050.
Optionally, the decoding device 1000 may also include one or more output devices, such as a display 1070. In one example, the display 1070 may be a touch sensitive display incorporating a display with a touch sensitive unit operable to sense touch input. A display 1070 may be connected to the processor 1010 via the bus 1050.
Fig. 11 is an illustration of an example of a video encoding system 1100 including the encoder 100 of fig. 2A and/or the decoder 200 of fig. 2B, according to an example embodiment. The system 1100 may implement a combination of the various techniques of the present application. In the illustrated embodiment, the video encoding system 1100 may include an imaging device 1101, a video encoder 100, a video decoder 200 (and/or a video encoder implemented by logic 1107 of a processing unit 1106), an antenna 1102, one or more processors 1103, one or more memories 1104, and/or a display device 1105.
As shown, the imaging device 1101, antenna 1102, processing unit 1106, logic 1107, video encoder 100, video decoder 200, processor 1103, memory 1104, and/or display device 1105 can communicate with each other. As discussed, although video encoding system 1100 is depicted with video encoder 100 and video decoder 200, in different examples, video encoding system 1100 may include only video encoder 100 or only video decoder 200.
In some examples, as shown, video coding system 1100 may include antenna 1102. For example, antenna 1102 may be used to transmit or receive an encoded bitstream of video data. Additionally, in some examples, the video encoding system 1100 may include a display device 1105. The display device 1105 may be used to present video data. In some examples, as shown, logic 1107 may be implemented by processing unit 1106. The processing unit 1106 may comprise application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. The video coding system 1100 may also include an optional processor 1103, which may similarly comprise application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. In some examples, the logic 1107 may be implemented by hardware, such as video encoding dedicated hardware, and the processor 1103 may be implemented by general-purpose software, an operating system, or the like. In addition, the memory 1104 may be any type of memory, such as volatile memory (e.g., static random access memory (Static Random Access Memory, SRAM), dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.). In a non-limiting example, the memory 1104 may be implemented by cache memory. In some examples, logic 1107 may access memory 1104 (e.g., to implement an image buffer). In other examples, logic 1107 and/or processing unit 1106 may include memory (e.g., a cache, etc.) for implementing an image buffer or the like.
In some examples, video encoder 100 implemented by logic circuitry may include an image buffer (e.g., implemented by processing unit 1106 or memory 1104) and a graphics processing unit (e.g., implemented by processing unit 1106). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include the video encoder 100 implemented by logic circuitry 1107 to implement the various modules discussed with reference to fig. 2A and/or any other encoder system or subsystem described herein. Logic circuitry may be used to perform various operations discussed herein.
The video decoder 200 may be implemented in a similar manner by logic circuitry 1107 to implement the various modules discussed with reference to the decoder 200 of fig. 2B and/or any other decoder system or subsystem described herein. In some examples, the video decoder 200 implemented by logic circuitry may include an image buffer (e.g., implemented by the processing unit 1106 or the memory 1104) and a graphics processing unit (e.g., implemented by the processing unit 1106). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include the video decoder 200 implemented by logic circuitry 1107 to implement the various modules discussed with reference to fig. 2B and/or any other decoder system or subsystem described herein.
In some examples, antenna 1102 of video coding system 1100 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data related to the encoded video frame, indicators, index values, mode selection data, etc., discussed herein, such as data related to the encoded partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the encoded partitions). The video encoding system 1100 may also include a video decoder 200 coupled to the antenna 1102 and used to decode the encoded bitstream. The display device 1105 is for presenting video frames.
In the above method flows, the order in which the steps are described does not represent the order in which they are executed; the steps may be executed in the order described above or in a different order. For example, step S1211 may be performed after step S1212 or before step S1212; step S1221 may be performed after step S1222 or before step S1222; the remaining steps are not enumerated here.
Those of skill in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps described in connection with the disclosure herein may be implemented as hardware, software, firmware, or any combination thereof. If implemented in software, the functions described by the various illustrative logical blocks, modules, and steps may be stored on a computer readable medium or transmitted as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media corresponding to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described herein. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that the computer-readable storage medium and data storage medium do not include connections, carrier waves, signals, or other transitory media, but are actually directed to non-transitory tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, digital Versatile Disc (DVD), and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functions described by the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Moreover, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). The various components, modules, or units are described in this application to emphasize functional aspects of the devices for performing the disclosed techniques but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit in combination with suitable software and/or firmware, or provided by an interoperable hardware unit (including one or more processors as described above).
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (32)

1. A method of encoding, comprising:
determining a target candidate motion vector group from the candidate motion vector list according to the rate distortion cost criterion; the target candidate motion vector group represents a motion vector predicted value of a group of control points of the current coding block;
encoding an index corresponding to the target candidate motion vector group into a code stream, and transmitting the code stream;
wherein if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in a coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first set of candidate motion vector predictors obtained based on a lower left control point and a lower right control point of the first neighboring affine coding block, the first set of candidate motion vector predictors corresponding to an affine model of the current coding block.
2. The method according to claim 1, characterized in that:
if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predictors are used for representing motion vector predictors of an upper left control point and an upper right control point of the current coding block;
if the current coding block is a six-parameter affine coding block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point, an upper right control point and a lower left control point of the current coding block.
3. The method according to claim 2, characterized in that: the first set of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first adjacent affine coding block, specifically:
if the current coding block is a four-parameter affine coding block, substituting the position coordinates of the upper left control point and the upper right control point of the current coding block into a first affine model to obtain a first group of candidate motion vector predicted values; or,
if the current coding block is a six-parameter affine coding block, substituting the position coordinates of an upper left control point, an upper right control point and a lower left control point of the current coding block into a first affine model to obtain a first group of candidate motion vector predicted values;
Wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine encoding block.
4. A method according to any one of claims 1-3, further comprising:
searching the motion vector of a group of control points with the lowest cost according to the rate distortion cost criterion in a preset searching range by taking the target candidate motion vector group as a searching starting point;
determining a motion vector difference MVD between the motion vector of the set of control points and the set of target candidate motion vectors;
the encoding the index corresponding to the target candidate motion vector group into a code stream, and transmitting the code stream, including:
encoding the MVD and the index corresponding to the target candidate motion vector group into a code stream, and transmitting the code stream.
5. A method according to any of claims 1-3, wherein said encoding an index corresponding to said target candidate motion vector group into a code stream and transmitting said code stream comprises:
and coding indexes corresponding to the target candidate motion vector group, the reference frame index and the prediction direction into a code stream, and transmitting the code stream.
6. A method according to any of claims 1-3, characterized in that the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine coding block are both derived from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine coding block, wherein the position coordinates (x6, y6) of the lower left control point are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point are (x4 + cuW, y4 + cuH), cuW is the width of the first neighboring affine coding block, and cuH is the height of the first neighboring affine coding block.
7. The method according to claim 6, wherein the motion vector of the lower left control point of the first neighboring affine encoded block is the motion vector of the lower left sub-block of the first neighboring affine encoded block, and the motion vector of the lower right control point of the first neighboring affine encoded block is the motion vector of the lower right sub-block of the first neighboring affine encoded block.
8. A decoding method, comprising:
parsing the code stream to obtain an index, wherein the index is used for indicating a target candidate motion vector group of a current decoding block;
determining, from the candidate motion vector list, the target candidate motion vector group, the target candidate motion vector group representing motion vector predictors of a set of control points of the current decoding block, wherein if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in a coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first set of candidate motion vector predictors obtained based on a lower left control point and a lower right control point of the first neighboring affine decoding block, the first set of candidate motion vector predictors corresponding to an affine model of the current decoding block;
obtaining motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group;
and predicting to obtain a pixel predicted value of the current decoding block based on the motion vector of one or more sub-blocks of the current decoding block.
9. The method according to claim 8, wherein:
if the current decoding block is a four-parameter affine decoding block, the first set of candidate motion vector predictors are used for representing motion vector predictors of an upper left control point and an upper right control point of the current decoding block;
If the current decoding block is a six-parameter affine decoding block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point, an upper right control point and a lower left control point of the current decoding block.
10. The method according to claim 9, wherein: the first set of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first adjacent affine decoding block, specifically:
if the current decoding block is a four-parameter affine decoding block, substituting the position coordinates of an upper left control point and an upper right control point of the current decoding block into a first affine model to obtain a first group of candidate motion vector predicted values; or,
if the current decoding block is a six-parameter affine decoding block, substituting the position coordinates of an upper left control point, an upper right control point and a lower left control point of the current decoding block into a first affine model to obtain a first group of candidate motion vector predicted values;
wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine decoding block.
11. The method according to any of the claims 8-10, wherein said deriving motion vectors for one or more sub-blocks of the current decoded block based on the target set of candidate motion vectors is in particular:
motion vectors of one or more sub-blocks of the current decoded block are derived based on a second affine model determined based on the set of target candidate motion vectors and position coordinates of a set of control points of the current decoded block.
12. The method according to any of claims 8-10, wherein said deriving motion vectors for one or more sub-blocks of the current decoded block based on the target set of candidate motion vectors comprises:
obtaining a new candidate motion vector group based on the motion vector difference MVD obtained by analyzing the code stream and the target candidate motion vector group indicated by the index;
and obtaining the motion vector of one or more sub-blocks of the current decoding block based on the new candidate motion vector group.
13. The method according to any one of claims 8-10, wherein predicting a pixel prediction value of the current decoded block based on motion vectors of one or more sub-blocks of the current decoded block comprises:
And predicting to obtain a pixel predicted value of the current decoding block according to the motion vector of one or more sub-blocks of the current decoding block, the reference frame index indicated by the index and the prediction direction.
14. The method according to any of claims 8-10, characterized in that the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine decoding block are both derived from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine decoding block, wherein the position coordinates (x6, y6) of the lower left control point are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point are (x4 + cuW, y4 + cuH), cuW is the width of the first neighboring affine decoding block, and cuH is the height of the first neighboring affine decoding block.
15. The method of claim 14, wherein the motion vector of the lower left control point of the first neighboring affine decoding block is the motion vector of the lower left sub-block of the first neighboring affine decoding block, and wherein the motion vector of the lower right control point of the first neighboring affine decoding block is the motion vector of the lower right sub-block of the first neighboring affine decoding block.
16. The method according to claim 15, wherein, in the process of obtaining the motion vectors of the one or more sub-blocks of the current decoding block based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU where the current decoding block is located, the motion vector of the sub-block at the lower left vertex of the current decoding block is calculated according to the target candidate motion vector group and the position coordinates (0, H) of the lower left vertex of the current decoding block, and the motion vector of the sub-block at the lower right vertex of the current decoding block is calculated according to the target candidate motion vector group and the position coordinates (W, H) of the lower right vertex of the current decoding block, wherein W is equal to the width of the current decoding block, H is equal to the height of the current decoding block, and the position coordinates of the upper left vertex of the current decoding block are (0, 0).
17. A video encoder, comprising:
an inter-frame prediction unit, configured to determine a target candidate motion vector group from a candidate motion vector list according to a rate-distortion cost criterion; the target candidate motion vector group represents a motion vector predicted value of a group of control points of the current coding block;
an entropy encoding unit for encoding an index corresponding to the target candidate motion vector group into a code stream and transmitting the code stream;
wherein if a first neighboring affine coding block is a four-parameter affine coding block and the first neighboring affine coding block is located in a coding tree unit (CTU) above the current coding block, the candidate motion vector list includes a first set of candidate motion vector predictors obtained based on a lower left control point and a lower right control point of the first neighboring affine coding block, the first set of candidate motion vector predictors corresponding to an affine model of the current coding block.
18. The video encoder of claim 17, wherein:
if the current coding block is a four-parameter affine coding block, the first group of candidate motion vector predictors are used for representing motion vector predictors of an upper left control point and an upper right control point of the current coding block;
if the current coding block is a six-parameter affine coding block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point, an upper right control point and a lower left control point of the current coding block.
19. The video encoder of claim 18, wherein: the first set of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first adjacent affine coding block, specifically:
if the current coding block is a four-parameter affine coding block, substituting the position coordinates of the upper left control point and the upper right control point of the current coding block into a first affine model to obtain a first group of candidate motion vector predicted values; or,
if the current coding block is a six-parameter affine coding block, substituting the position coordinates of an upper left control point, an upper right control point and a lower left control point of the current coding block into a first affine model to obtain a first group of candidate motion vector predicted values;
wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine encoding block.
20. The video encoder of any of claims 17-19, wherein:
the inter-frame prediction unit is further configured to search, in a preset search range, a motion vector of a group of control points with the lowest cost according to a rate-distortion cost criterion by using the target candidate motion vector group as a search start point; and determining a motion vector difference MVD between the motion vector of the set of control points and the set of target candidate motion vectors;
The entropy encoding unit is specifically configured to encode the MVD and an index corresponding to the target candidate motion vector group into a code stream, and transmit the code stream.
21. The video encoder according to any of claims 17-19, wherein the entropy encoding unit is specifically configured to encode an index corresponding to the target candidate motion vector group, the reference frame index and the prediction direction into a code stream, and to transmit the code stream.
22. The video encoder according to any of claims 17-19, characterized in that the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine coding block are both derived from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine coding block, wherein the position coordinates (x6, y6) of the lower left control point are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point are (x4 + cuW, y4 + cuH), cuW is the width of the first neighboring affine coding block, and cuH is the height of the first neighboring affine coding block.
23. The video encoder of claim 22, wherein the motion vector of the lower left control point of the first neighboring affine encoded block is the motion vector of the lower left sub-block of the first neighboring affine encoded block and the motion vector of the lower right control point of the first neighboring affine encoded block is the motion vector of the lower right sub-block of the first neighboring affine encoded block.
24. A video decoder, comprising:
the entropy decoding unit is used for analyzing the code stream to obtain an index, and the index is used for indicating a target candidate motion vector group of the current decoding block;
an inter prediction unit, configured to determine, according to the index, the target candidate motion vector group from a candidate motion vector list, where the target candidate motion vector group represents motion vector predictors of a set of control points of the current decoding block, and if a first neighboring affine decoding block is a four-parameter affine decoding block and the first neighboring affine decoding block is located in a coding tree unit (CTU) above the current decoding block, the candidate motion vector list includes a first set of candidate motion vector predictors, where the first set of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first neighboring affine decoding block, and the first set of candidate motion vector predictors corresponds to an affine model of the current decoding block; derive motion vectors of one or more sub-blocks of the current decoding block based on the target candidate motion vector group; and predict a pixel prediction value of the current decoding block based on the motion vectors of the one or more sub-blocks of the current decoding block.
25. The video decoder of claim 24, characterized in that:
if the current decoding block is a four-parameter affine decoding block, the first set of candidate motion vector predictors are used for representing motion vector predictors of an upper left control point and an upper right control point of the current decoding block;
if the current decoding block is a six-parameter affine decoding block, the first set of candidate motion vector predictors is used to represent motion vector predictors for an upper left control point, an upper right control point and a lower left control point of the current decoding block.
26. The video decoder of claim 25, characterized in that: the first set of candidate motion vector predictors is obtained based on a lower left control point and a lower right control point of the first adjacent affine decoding block, specifically:
if the current decoding block is a four-parameter affine decoding block, substituting the position coordinates of an upper left control point and an upper right control point of the current decoding block into a first affine model to obtain a first group of candidate motion vector predicted values; or,
if the current decoding block is a six-parameter affine decoding block, substituting the position coordinates of an upper left control point, an upper right control point and a lower left control point of the current decoding block into a first affine model to obtain a first group of candidate motion vector predicted values;
Wherein the first affine model is determined based on motion vectors and position coordinates of a lower left control point and a lower right control point of the first neighboring affine decoding block.
27. The video decoder according to any of claims 24-26, wherein the inter prediction unit is configured to obtain, based on the target set of candidate motion vectors, motion vectors of one or more sub-blocks of the current decoded block, in particular: motion vectors of one or more sub-blocks of the current decoded block are derived based on a second affine model determined based on the set of target candidate motion vectors and position coordinates of a set of control points of the current decoded block.
28. The video decoder according to any of claims 24-26, wherein the inter prediction unit is configured to obtain, based on the target set of candidate motion vectors, motion vectors of one or more sub-blocks of the current decoded block, in particular: obtaining a new candidate motion vector group based on the motion vector difference MVD obtained by analyzing the code stream and the target candidate motion vector group indicated by the index; and deriving motion vectors for one or more sub-blocks of the current decoded block based on the new set of candidate motion vectors.
29. The video decoder according to any of claims 24-26, wherein the inter prediction unit is configured to predict, based on motion vectors of one or more sub-blocks of the current decoded block, a pixel prediction value of the current decoded block, specifically: and predicting to obtain a pixel predicted value of the current decoding block according to the motion vector of one or more sub-blocks of the current decoding block, the reference frame index indicated by the index and the prediction direction.
30. The video decoder according to any of claims 24-26, characterized in that the position coordinates (x6, y6) of the lower left control point and the position coordinates (x7, y7) of the lower right control point of the first neighboring affine decoding block are both derived from the position coordinates (x4, y4) of the upper left control point of the first neighboring affine decoding block, wherein the position coordinates (x6, y6) of the lower left control point are (x4, y4 + cuH), the position coordinates (x7, y7) of the lower right control point are (x4 + cuW, y4 + cuH), cuW is the width of the first neighboring affine decoding block, and cuH is the height of the first neighboring affine decoding block.
31. The video decoder of claim 30, wherein the motion vector of the lower left control point of the first neighboring affine decoding block is the motion vector of the lower left sub-block of the first neighboring affine decoding block and the motion vector of the lower right control point of the first neighboring affine decoding block is the motion vector of the lower right sub-block of the first neighboring affine decoding block.
32. The video decoder of claim 31, characterized in that, in the process of obtaining the motion vector of the one or more sub-blocks of the current decoding block based on the target candidate motion vector group, if the lower boundary of the current decoding block coincides with the lower boundary of the CTU where the current decoding block is located, the motion vector of the sub-block of the lower left vertex of the current decoding block is calculated based on the target candidate motion vector group and the position coordinates (0, H) of the lower left vertex of the current decoding block, and the motion vector of the sub-block of the lower right vertex of the current decoding block is calculated based on the target candidate motion vector group and the position coordinates (W, H) of the lower right vertex of the current decoding block, wherein W is equal to the width of the current decoding block, H is equal to the height of the current decoding block, and the coordinates of the upper left vertex of the current decoding block are (0, 0).
CN201810992362.1A 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods Active CN110868602B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810992362.1A CN110868602B (en) 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods
PCT/CN2019/079955 WO2020042604A1 (en) 2018-08-27 2019-03-27 Video encoder, video decoder and corresponding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810992362.1A CN110868602B (en) 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods

Publications (2)

Publication Number Publication Date
CN110868602A CN110868602A (en) 2020-03-06
CN110868602B true CN110868602B (en) 2024-04-12

Family

ID=69643826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810992362.1A Active CN110868602B (en) 2018-08-27 2018-08-27 Video encoder, video decoder and corresponding methods

Country Status (2)

Country Link
CN (1) CN110868602B (en)
WO (1) WO2020042604A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111327901B * 2020-03-10 2023-05-30 Beijing Dajia Internet Information Technology Co., Ltd. Video encoding method, device, storage medium and encoding equipment
CN113709484B * 2020-03-26 2022-12-23 Hangzhou Hikvision Digital Technology Co., Ltd. Decoding method, encoding method, device, equipment and machine readable storage medium
CN113747172A * 2020-05-29 2021-12-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Inter-frame prediction method, encoder, decoder, and computer storage medium
CN113630602A * 2021-06-29 2021-11-09 Hangzhou Weiming Information Technology Co., Ltd. Affine motion estimation method and device for coding unit, storage medium and terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595110A * 2011-01-10 2012-07-18 Huawei Technologies Co., Ltd. Video coding method, decoding method and terminal
CN103329537A * 2011-01-21 2013-09-25 SK Telecom Co., Ltd. Apparatus and method for generating/recovering motion information based on predictive motion vector index encoding, and apparatus and method for image encoding/decoding using same
CN106331722A * 2015-07-03 2017-01-11 Huawei Technologies Co., Ltd. Image prediction method and associated device
WO2017147765A1 * 2016-03-01 2017-09-08 Mediatek Inc. Methods for affine motion compensation
CN108271023A * 2017-01-04 2018-07-10 Huawei Technologies Co., Ltd. Image prediction method and relevant device
CN108432250A * 2016-01-07 2018-08-21 MediaTek Inc. Method and device of affine inter-prediction for video coding and decoding

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080279478A1 (en) * 2007-05-09 2008-11-13 Mikhail Tsoupko-Sitnikov Image processing method and image processing apparatus
US9083983B2 (en) * 2011-10-04 2015-07-14 Qualcomm Incorporated Motion vector predictor candidate clipping removal for video coding
US9438910B1 (en) * 2014-03-11 2016-09-06 Google Inc. Affine motion prediction in video coding
CN104935938B * 2015-07-15 2018-03-30 Harbin Institute of Technology Inter-frame prediction method in a hybrid video coding standard

Also Published As

Publication number Publication date
CN110868602A (en) 2020-03-06
WO2020042604A1 (en) 2020-03-05

Similar Documents

Publication Publication Date Title
US11252436B2 (en) Video picture inter prediction method and apparatus, and codec
CN110868602B (en) Video encoder, video decoder and corresponding methods
KR102607443B1 (en) Interframe prediction method and device for video data
KR102606146B1 (en) Motion vector prediction method and related devices
CN110868587B (en) Video image prediction method and device
US20230239494A1 (en) Video encoder, video decoder, and corresponding method
WO2019154424A1 (en) Video decoding method, video decoder, and electronic device
CN110832859B (en) Decoding method and device based on template matching
CN110677645B (en) Image prediction method and device
US20210185323A1 (en) Inter prediction method and apparatus, video encoder, and video decoder
CN111355958B (en) Video decoding method and device
WO2019237287A1 (en) Inter-frame prediction method for video image, device, and codec

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant