WO2021088695A1 - Encoding and decoding method, apparatus, and device thereof - Google Patents

Encoding and decoding method, apparatus, and device thereof

Info

Publication number
WO2021088695A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
block
sub
value
pixel
Prior art date
Application number
PCT/CN2020/124304
Other languages
English (en)
French (fr)
Inventor
陈方栋 (CHEN, Fangdong)
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Priority to KR1020227010788A priority Critical patent/KR20220050227A/ko
Priority to US17/766,210 priority patent/US12114005B2/en
Priority to JP2022520621A priority patent/JP7527359B2/ja
Publication of WO2021088695A1 publication Critical patent/WO2021088695A1/zh
Priority to JP2024060914A priority patent/JP2024081785A/ja

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N 19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N 19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/513 Processing of motion vectors
    • H04N 19/52 Processing of motion vectors by encoding, by predictive encoding

Definitions

  • This application relates to the field of coding and decoding technologies, and in particular to a coding and decoding method, device and equipment.
  • A complete video encoding process can include stages such as prediction, transformation, quantization, entropy encoding, and filtering.
  • Predictive coding includes intra-frame coding and inter-frame coding.
  • Inter-frame coding exploits the temporal correlation of video: it predicts the pixels of the current picture from pixels of adjacent coded pictures, thereby effectively removing video temporal redundancy.
  • For example, for a current block A1, a motion search can be performed in the reference frame B to find the block B1 that best matches the current block A1 (i.e., the reference block), and the relative displacement between the current block A1 and the reference block B1 is determined; this relative displacement is the motion vector (Motion Vector, MV) of the current block A1.
  • The encoding end may send the motion vector to the decoding end instead of sending the current block A1 itself, and the decoding end may then obtain the current block A1 from the motion vector and the reference block B1. Since the motion vector occupies far fewer bits than the current block A1, a large number of bits can be saved.
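The motion search described above can be sketched as an exhaustive SAD (sum of absolute differences) search. This is an illustrative sketch, not the patent's own method; the frame contents, block size, and search range below are made-up parameters for demonstration.

```python
import numpy as np

def motion_search(cur_block, ref_frame, top, left, search_range=4):
    """Find the motion vector (dy, dx) of cur_block within ref_frame
    by minimizing the sum of absolute differences (SAD)."""
    h, w = cur_block.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate reference block falls outside the frame
            cand = ref_frame[y:y + h, x:x + w]
            sad = int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

The returned (dy, dx) is the relative displacement between the current block position and the best matching block, i.e. the motion vector; real encoders use faster search patterns than this brute-force scan.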
  • When the current block is a unidirectional block, after the motion vector of the current block (hereinafter referred to as the original motion vector) is obtained, the original motion vector can be adjusted, and encoding/decoding is performed based on the adjusted motion vector, thereby improving coding performance.
  • However, when the current block is a bidirectional block, after the first original motion vector and the second original motion vector of the current block are obtained, there is currently no reasonable solution for how to adjust the first original motion vector and the second original motion vector. In other words, for scenes with bidirectional blocks, there are problems such as poor prediction quality and prediction errors, resulting in poor coding performance.
  • This application provides an encoding and decoding method, device and equipment, which can improve encoding performance.
  • This application provides an encoding and decoding method, which includes: if the following conditions are all met, determining to start the motion vector adjustment mode for the current block:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the two reference frames are located one before and one after the current frame in display order, and the two reference frames are at the same distance from the current frame;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height, and area of the current block are all within a limited range;
  • the size of the two reference frames of the current block is the same as the size of the current frame;
  • if it is determined to start the motion vector adjustment mode for the current block, performing motion compensation on the current block.
  • the present application provides a coding and decoding device, the device includes:
  • the determining module is used to determine to start the motion vector adjustment mode for the current block if the following conditions are all met:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the two reference frames are located one before and one after the current frame in display order, and the two reference frames are at the same distance from the current frame;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height, and area of the current block are all within a limited range;
  • the size of the two reference frames of the current block is the same as the size of the current frame;
  • the motion compensation module is used to perform motion compensation on the current block if it is determined to start the motion vector adjustment mode for the current block.
  • the present application provides an encoding terminal device, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • The processor is used to execute the machine-executable instructions to implement the following steps: if the following conditions are all met, determining to start the motion vector adjustment mode for the current block:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the two reference frames are located one before and one after the current frame in display order, and the two reference frames are at the same distance from the current frame;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height, and area of the current block are all within a limited range;
  • the size of the two reference frames of the current block is the same as the size of the current frame;
  • if it is determined to start the motion vector adjustment mode for the current block, performing motion compensation on the current block.
  • the present application provides a decoding end device, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • The processor is used to execute the machine-executable instructions to implement the following steps: if the following conditions are all met, determining to start the motion vector adjustment mode for the current block:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the two reference frames are located one before and one after the current frame in display order, and the two reference frames are at the same distance from the current frame;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height, and area of the current block are all within a limited range;
  • the size of the two reference frames of the current block is the same as the size of the current frame;
  • if it is determined to start the motion vector adjustment mode for the current block, performing motion compensation on the current block.
  • During motion compensation, the first target motion vector and the second target motion vector are obtained according to the first original motion vector and the second original motion vector.
  • The predicted value is then determined according to the first target motion vector and the second target motion vector, instead of according to the first original motion vector and the second original motion vector. This solves the problems of poor prediction quality and prediction errors, and improves coding performance and coding efficiency.
  • FIG. 1A is a schematic diagram of interpolation in an embodiment of the present application.
  • FIG. 1B is a schematic diagram of a video coding framework in an implementation manner of the present application.
  • FIG. 2 is a flowchart of an encoding and decoding method in an embodiment of the present application
  • FIG. 3 is a flowchart of an encoding and decoding method in an embodiment of the present application.
  • FIG. 4 is a flowchart of an encoding and decoding method in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a reference block obtained in an implementation manner of the present application.
  • FIG. 6 is a schematic diagram of motion vector iteration in an embodiment of the present application.
  • FIGS. 7A-7G are schematic diagrams of the sequence of candidate points in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of extending a reference block in an implementation manner of the present application.
  • FIG. 9A is a structural diagram of a coding and decoding device in an embodiment of the present application.
  • FIG. 9B is a hardware structure diagram of a decoding end device in an embodiment of the present application.
  • FIG. 9C is a hardware structure diagram of an encoding terminal device in an embodiment of the present application.
  • For example, without departing from the scope of the present application, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • An encoding and decoding method, device, and equipment proposed in the embodiments of the present application may involve the following concepts:
  • Intra-frame and inter-frame prediction: intra-frame prediction refers to using the spatial correlation of video to predict the current pixels from the pixels of already coded blocks of the current picture, so as to remove video spatial redundancy.
  • Inter-frame prediction refers to using the temporal correlation of video: since a video sequence usually contains strong temporal correlation, predicting the pixels of the current picture from the pixels of adjacent encoded pictures can effectively remove video temporal redundancy.
  • The inter-frame prediction part of mainstream video coding standards adopts block-based motion compensation. Its main principle is to find a best matching block in a previously encoded picture for each pixel block of the current picture; this process is called motion estimation.
  • Motion vector: in inter-frame coding, a motion vector is used to represent the relative displacement between the current block and the best matching block in the reference picture. Each divided block has a corresponding motion vector that must be transmitted to the decoding end. If the motion vector of each block were independently coded and transmitted, especially for small block sizes, a considerable number of bits would be consumed. To reduce the number of bits used to code motion vectors, the spatial correlation between adjacent image blocks is exploited: the motion vector of the current block is predicted from the motion vectors of adjacent coded blocks, and only the prediction difference is coded, effectively reducing the number of bits representing the motion vector. That is, when coding the motion vector of the current block, the motion vectors of adjacent coded blocks are used to form a motion vector prediction (Motion Vector Prediction, MVP), and the difference between the MVP and the true motion vector, the motion vector difference (Motion Vector Difference, MVD), is coded, effectively reducing the number of coded bits.
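The MVP/MVD mechanism above can be sketched as follows. This is an illustrative sketch, not the patent's method: the component-wise median predictor is one common choice, and the neighbor motion vectors are made-up values for demonstration.

```python
def mv_predictor(neighbor_mvs):
    """Component-wise median of neighboring blocks' motion vectors (the MVP)."""
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])

def encode_mv(mv, neighbor_mvs):
    """Encoder side: transmit only the difference MVD = MV - MVP."""
    mvp = mv_predictor(neighbor_mvs)
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvd, neighbor_mvs):
    """Decoder side: reconstruct MV = MVP + MVD."""
    mvp = mv_predictor(neighbor_mvs)
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])
```

Because neighboring blocks tend to move similarly, the MVD components are usually small and cost fewer bits to entropy-code than the full motion vector.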
  • Motion information: since the motion vector represents the position offset between the current block and a reference block, index information of the reference frame picture is also required, in addition to the motion vector, to accurately indicate which reference picture the image block comes from.
  • a reference frame image list can be established, and the reference frame image index information indicates which reference frame image in the reference frame image list is used in the current block.
  • Many coding technologies also support multiple reference image lists. Therefore, an index value can be used to indicate which reference image list is used, and this index value is called the reference direction.
  • Motion-related information such as motion vector, reference frame index, reference direction, etc. may be collectively referred to as motion information.
  • Interpolation: if the current motion vector has non-integer pixel accuracy, the required pixel values cannot be copied directly from the reference frame corresponding to the current block; they can only be obtained by interpolation. Referring to FIG. 1A, to obtain the pixel value Y1/2 at an offset of 1/2 pixel, it is interpolated from the surrounding existing pixel values X. Exemplarily, if an interpolation filter with N taps is used, the value is obtained by interpolation of the N surrounding integer pixels.
  • Motion compensation is the process of obtaining all pixel values of the current block through interpolation or copying.
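The half-pel case above can be sketched with the simplest possible filter. This is a minimal illustration with an assumed 2-tap bilinear filter and rounding; real codecs use longer filters (e.g., 8 taps) for better quality, as the N-tap remark in the text indicates.

```python
def half_pel_bilinear(x0, x1):
    """Half-pixel value between two integer-pixel samples:
    a 2-tap bilinear filter with rounding, Y_1/2 = (X0 + X1 + 1) >> 1."""
    return (x0 + x1 + 1) >> 1

def interpolate_row_half_pel(row):
    """Horizontal half-pel samples for one row of integer pixels."""
    return [half_pel_bilinear(a, b) for a, b in zip(row, row[1:])]
```

With an N-tap filter, each half-pel sample would instead be a weighted sum of N surrounding integer pixels, followed by the same kind of rounding and shifting.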
  • Merge mode: includes the normal fusion mode (Normal Merge mode, also called regular Merge mode); the sub-block fusion mode (a fusion mode that uses sub-block motion information, called subblock merge mode); the MMVD mode (a fusion mode that encodes a motion vector difference, called merge with MVD mode); the CIIP mode (a fusion mode in which inter-frame and intra-frame prediction jointly generate a new prediction value, called combined inter-intra prediction mode); the TPM mode (a fusion mode for triangular prediction, called triangular prediction mode); and the GEO mode (a fusion mode based on arbitrary geometric partition shapes, called geometrical partitioning mode).
  • The skip mode is a special fusion mode; the difference between the skip mode and the fusion mode is that the skip mode does not require coding of residuals. If the current block is in skip mode, the CIIP mode is disabled by default, while the normal fusion mode, sub-block fusion mode, MMVD mode, TPM mode, and GEO mode still apply.
  • How to generate the predicted value can be determined based on the normal fusion mode, sub-block fusion mode, MMVD mode, CIIP mode, TPM mode, GEO mode, etc.
  • For the fusion mode, the predicted value and the residual value are used to obtain the reconstructed value; for the skip mode, there is no residual value, and the predicted value is directly used to obtain the reconstructed value.
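The reconstruction step just described can be sketched as follows. This is an illustrative sketch: the clipping to the sample range and the 8-bit default are assumptions, not taken from the patent text.

```python
def reconstruct(pred, resid=None, bit_depth=8):
    """Fusion mode: recon = clip(pred + resid).
    Skip mode: resid is None, and the predicted value is used directly."""
    max_val = (1 << bit_depth) - 1
    if resid is None:  # skip mode: no residual is coded
        return list(pred)
    return [min(max(p + r, 0), max_val) for p, r in zip(pred, resid)]
```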
  • Sequence parameter set: in the sequence parameter set, there are flag bits that determine whether certain tools are allowed to be enabled for the entire sequence. If a flag bit is 1, the tool corresponding to that flag bit is allowed to be enabled in the video sequence; if the flag bit is 0, the corresponding tool is not allowed to be enabled during encoding of the video sequence.
  • Normal fusion mode: one piece of motion information is selected from the candidate motion information list, and the prediction value of the current block is generated based on that motion information.
  • The candidate motion information list includes: candidate motion information of spatial neighboring blocks, candidate motion information of temporal neighboring blocks, candidate motion information of spatial non-adjacent blocks, motion information obtained by combining existing motion information, default motion information, etc.
  • MMVD mode: based on the candidate motion information list of the normal fusion mode, one piece of motion information is selected from that list as the reference motion information, and a motion information difference is obtained through a table lookup. The final motion information is obtained from the reference motion information and the motion information difference, and the prediction value of the current block is generated based on the final motion information.
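The table-lookup step of MMVD can be sketched as follows. This is an illustrative sketch: the distance and direction tables mirror the general MMVD design in VVC-style codecs, but their contents here are assumptions for demonstration, not taken from the patent text.

```python
# Signalled indices select an offset from these tables; the offset is
# added to the reference (base) motion vector to form the final MV.
DISTANCE_TABLE = [1, 2, 4, 8, 16, 32, 64, 128]        # step sizes, 1/4-pel units (assumed)
DIRECTION_TABLE = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # +x, -x, +y, -y

def mmvd_final_mv(base_mv, distance_idx, direction_idx):
    """Final MV = reference MV + table-lookup motion information difference."""
    step = DISTANCE_TABLE[distance_idx]
    sx, sy = DIRECTION_TABLE[direction_idx]
    return (base_mv[0] + sx * step, base_mv[1] + sy * step)
```

Only the two small indices need to be coded in the bitstream, rather than a full motion vector difference.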
  • CIIP mode: the new prediction value of the current block is obtained by combining the intra-frame prediction value and the inter-frame prediction value.
  • The sub-block fusion mode includes the Affine fusion mode and the sub-block TMVP mode.
  • The Affine (affine) fusion mode, similar to the normal fusion mode, also selects one piece of motion information from the candidate motion information list and generates the prediction value of the current block based on it.
  • The difference is that the motion information in the candidate motion information list of the normal fusion mode is a 2-parameter translational motion vector, while the motion information in the candidate motion information list of the Affine fusion mode is 4-parameter or 6-parameter Affine motion information.
  • Sub-block TMVP: subblock-based temporal motion vector prediction.
  • TPM mode: divides a block into two triangular sub-blocks (two cases exist: a 45-degree division and a 135-degree division), and the two triangular sub-blocks have different unidirectional motion information.
  • The TPM mode is used only in the prediction process and does not affect the subsequent transformation and quantization; the unidirectional motion information here is also obtained directly from the candidate motion information list.
  • the GEO mode is similar to the TPM mode, but the shape of the division is different.
  • The GEO mode divides a square block into two sub-blocks of other shapes (any shape other than the two triangular sub-blocks of the TPM mode): for example, one triangular sub-block and one pentagonal sub-block; or one triangular sub-block and one quadrilateral sub-block; or two trapezoidal sub-blocks, etc. There is no restriction on the division shape.
  • the two sub-blocks divided by GEO mode have different unidirectional motion information.
  • The fusion mode and skip mode involved in this embodiment refer to a class of prediction modes that directly select one piece of motion information from the candidate motion information list to generate the prediction value of the current block.
  • These prediction modes do not require a motion search process at the encoding end, and, except for the MMVD mode, do not need to encode a motion information difference.
  • Video encoding framework As shown in Figure 1B, the video encoding framework can be used to implement the encoding end processing flow of the embodiment of the present application.
  • The schematic diagram of the video decoding framework is similar to that of FIG. 1B and will not be repeated here; the video decoding framework can be used to implement the decoding-end processing flow of the embodiment of the present application.
  • The video coding framework and the video decoding framework include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, and entropy coding.
  • At the encoding end, the cooperation of these modules realizes the encoding-end processing flow; at the decoding end, the cooperation of these modules realizes the decoding-end processing flow.
  • In the embodiment of the present application, a motion vector adjustment mode can be provided.
  • In the motion vector adjustment mode, based on the predicted value obtained from the original motion vector, the motion vector is fine-tuned through a local search at the decoder to obtain a better motion vector, so as to generate a predicted value with less distortion.
  • For each sub-block of the current block, the first reference block corresponding to the sub-block may be determined according to the first original motion vector of the sub-block, and the second reference block corresponding to the sub-block may be determined according to the second original motion vector of the sub-block.
  • The first original motion vector and the second original motion vector are adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector and the second target motion vector, and the prediction value of the sub-block can then be determined according to the first target motion vector and the second target motion vector.
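The adjustment of the two original motion vectors can be sketched as a bilateral matching search: the candidate offset that minimizes the SAD between the two reference blocks is applied to the first original motion vector and, with mirrored sign, to the second. This is an illustrative sketch; the search pattern, range, and frame contents are assumptions for demonstration, not the patent's exact procedure.

```python
import numpy as np

def refine_mv_pair(ref0, ref1, mv0, mv1, top, left, size, search_range=2):
    """Return (mv0 + best_offset, mv1 - best_offset), where best_offset
    minimizes the SAD between the two sub-block prediction blocks."""
    def block(ref, mv, dy, dx):
        y, x = top + mv[0] + dy, left + mv[1] + dx
        return ref[y:y + size, x:x + size].astype(int)
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            # Mirrored offsets: +(dy, dx) in list 0, -(dy, dx) in list 1.
            sad = int(np.abs(block(ref0, mv0, dy, dx)
                             - block(ref1, mv1, -dy, -dx)).sum())
            if best is None or sad < best[0]:
                best = (sad, dy, dx)
    _, dy, dx = best
    return (mv0[0] + dy, mv0[1] + dx), (mv1[0] - dy, mv1[1] - dx)
```

The mirrored signs reflect the bidirectional setting above: the two reference frames lie one before and one after the current frame at equal distances, so a motion offset toward one is matched by the opposite offset toward the other.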
  • Embodiment 1 As shown in FIG. 2, it is a schematic flowchart of the encoding and decoding method proposed in the embodiment of this application.
  • the encoding and decoding method can be applied to the decoding end or the encoding end.
  • the encoding and decoding method may include the following steps:
  • Step 201 if the following conditions are all met, it is determined to start the motion vector adjustment mode for the current block:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the two reference frames are located one before and one after the current frame in display order, and the two reference frames are at the same distance from the current frame;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height, and area of the current block are all within a limited range;
  • the size of the two reference frames of the current block is the same as the size of the current frame.
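The enabling check in Step 201 can be sketched as a conjunction over the seven conditions. This is an illustrative sketch: the field names are hypothetical, and the concrete width/height/area limits are assumptions for demonstration, since the text only says these are "within a limited range".

```python
def mv_adjustment_enabled(blk):
    """blk: dict describing the current block and its two reference frames.
    Returns True only when all seven conditions hold."""
    conds = [
        blk["control_info_allows_adjustment"],           # condition 1
        blk["prediction_mode"] == "normal_merge",        # condition 2
        blk["bi_prediction"]                             # condition 3
            and blk["ref_frames_straddle_current"]
            and blk["ref_distances_equal"],
        blk["ref_weights_equal"],                        # condition 4
        blk["both_refs_short_term"],                     # condition 5
        8 <= blk["width"] and 8 <= blk["height"]         # condition 6 (assumed limits)
            and blk["width"] * blk["height"] >= 128,
        blk["ref_sizes_match_current_frame"],            # condition 7
    ]
    return all(conds)
```

Because the check is a pure conjunction, failing any single condition disables the motion vector adjustment mode for the block.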
  • 7 conditions are given, and it is determined whether to start the motion vector adjustment mode for the current block based on the 7 conditions.
  • the fusion mode or skip mode includes the normal fusion mode, the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, and the GEO mode.
  • the prediction mode of the current block being no mode other than the normal fusion mode means: the prediction mode is not the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, the GEO mode, etc.
  • the prediction mode of the current block is the fusion mode or the skip mode
  • the prediction mode of the current block is not the MMVD mode
  • the prediction mode of the current block is not the CIIP mode.
  • the prediction mode of the current block is fusion mode or skip mode
  • the prediction mode of the current block is not MMVD mode
  • the prediction mode of the current block is not CIIP mode
  • the prediction mode of the current block is not sub-block fusion mode
  • from the above checks on the prediction mode of the current block, it can be determined that the prediction mode of the current block is no mode other than the normal fusion mode. That is, it is determined by the elimination method that the prediction mode of the current block is the normal fusion mode.
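The elimination method above can be sketched as a small predicate (the flag names are illustrative assumptions, not syntax elements of any codec specification):

```python
def is_normal_fusion_mode(is_merge_or_skip: bool,
                          is_mmvd: bool,
                          is_ciip: bool,
                          is_subblock_merge: bool) -> bool:
    """By elimination: a fusion/skip block that is none of the special
    fusion modes is treated as the normal fusion mode."""
    if not is_merge_or_skip:
        return False
    # Exclude every special fusion mode named in the text.
    return not (is_mmvd or is_ciip or is_subblock_merge)
```

For example, a merge block that is neither MMVD, CIIP, nor sub-block fusion is classified as normal fusion mode; any block outside fusion/skip is rejected immediately.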
  • the predicted value of the current block being obtained by weighting the reference blocks from two reference frames means that the current block adopts the bidirectional prediction mode, that is, the predicted value of the current block is obtained by weighting the reference blocks from the two reference frames.
  • the current block can correspond to the motion information of the two lists, which are recorded as the first motion information and the second motion information.
  • the first motion information includes the first reference frame and the first original motion vector
  • the second motion information includes the second reference frame and the second original motion vector.
  • the above two reference frames may be the first reference frame and the second reference frame.
  • the display sequence of the two reference frames respectively located one before and one after the current frame means that the first reference frame is located in front of the current frame where the current block is located, and the second reference frame is located behind the current frame.
  • the first reference frame may also be called a forward reference frame, the forward reference frame is located in the first list (for example, list0); the second reference frame may also be called a backward reference frame, and the backward reference frame is located in the second list (such as list1).
  • the width, height, and area of the current block are all within a limited range including: the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than or equal to the third threshold.
  • or: the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than the fourth threshold.
  • the third threshold may be greater than the fourth threshold.
  • the first threshold may be 8, the second threshold may be 8, the third threshold may be 128, and the fourth threshold may be 64.
  • the above values are just a few examples, and there is no restriction on this.
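The size restriction above can be sketched as follows, using the example thresholds from the text (width ≥ 8, height ≥ 8, area ≥ 128); the thresholds are parameters, not fixed values:

```python
def size_within_limited_range(width: int, height: int,
                              w_thr: int = 8, h_thr: int = 8,
                              area_thr: int = 128) -> bool:
    """Width, height, and area of the current block must all be
    within the limited range (thresholds are examples from the text)."""
    return (width >= w_thr
            and height >= h_thr
            and width * height >= area_thr)
```

For example, a 16x16 block passes, while an 8x8 block fails because its area (64) is below the third threshold (128).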
  • control information for allowing the current block to use the motion vector adjustment mode may include, but is not limited to: sequence-level control information (such as control information for multi-frame images) allowing the current block to use the motion vector adjustment mode; and/or, frame-level control information (such as the control information of one frame of image) allowing the current block to use the motion vector adjustment mode.
  • Step 202 If it is determined to start the motion vector adjustment mode for the current block, perform motion compensation on the current block.
  • if it is determined to start the motion vector adjustment mode for the current block, then for each sub-block in at least one sub-block included in the current block: determine the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block, and determine the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block; adjust the first original motion vector and the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector; and determine the predicted value of the sub-block according to the first target motion vector and the second target motion vector. After the predicted value of each sub-block is obtained, the predicted value of the current block can be determined according to the predicted value of each sub-block.
  • the first reference block corresponding to the sub-block is determined according to the first original motion vector of the sub-block
  • the second reference block corresponding to the sub-block is determined according to the second original motion vector of the sub-block, which may include, but is not limited to:
  • the first reference block corresponding to the sub-block is determined from the first reference frame; the pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or obtained by copying the pixel values of adjacent pixels in the first reference block.
  • the second reference block corresponding to the sub-block is determined from the second reference frame; the pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or obtained by copying the pixel values of adjacent pixels in the second reference block.
  • the size of the first reference block is the same as the size of the second reference block
  • the width of the first reference block is determined based on the width of the sub-block and the search range
  • the height of the first reference block is determined based on the height of the sub-block and the search range
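One plausible construction of the reference block size, consistent with the statement above, is to pad the sub-block by the search range on every side so that any candidate motion vector within the range stays inside the block (the exact formula is an assumption; the text only says the size is determined from the sub-block size and the search range):

```python
def reference_block_size(sub_w: int, sub_h: int, search_range: int):
    """Pad the sub-block by the search range on all four sides
    (a common construction, assumed here for illustration)."""
    return sub_w + 2 * search_range, sub_h + 2 * search_range
```

With this assumption, a 16x16 sub-block and a search range of 2 gives a 20x20 reference block.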
  • according to the first pixel value of the first reference block and the second pixel value of the second reference block, the first original motion vector and the second original motion vector of the sub-block are adjusted to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector, that is, the first target motion vector and the second target motion vector of the sub-block.
  • taking the initial motion vector as the center, some or all of the motion vectors surrounding the initial motion vector (including the initial motion vector itself) may be selected, and the selected motion vectors may be determined as the candidate motion vectors.
  • the initial motion vector is the first original motion vector or the second original motion vector.
  • a motion vector can be selected as the optimal motion vector from the initial motion vector and each candidate motion vector.
  • the first original motion vector can be adjusted according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector
  • the second original motion vector can be adjusted according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
  • the first original motion vector is adjusted according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector
  • the second original motion vector is adjusted according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, which may include: determining the first integer-pixel motion vector adjustment value, the second integer-pixel motion vector adjustment value, the first sub-pixel motion vector adjustment value, and the second sub-pixel motion vector adjustment value of the sub-block according to the optimal motion vector; adjusting the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block; and adjusting the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
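The adjustment step above combines an integer-pixel and a sub-pixel adjustment value with the original motion vector; a minimal sketch, assuming (x, y) tuples and leaving the precision/unit of the components unspecified:

```python
def adjust_motion_vector(original_mv, int_adjust, frac_adjust):
    """Add the integer-pixel and sub-pixel adjustment values to the
    original motion vector to obtain the target motion vector."""
    return (original_mv[0] + int_adjust[0] + frac_adjust[0],
            original_mv[1] + int_adjust[1] + frac_adjust[1])
```

The same function serves both directions: the first target motion vector uses the first pair of adjustment values, and the second target motion vector uses the second pair.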
  • the prediction value of the sub-block may be determined according to the first target motion vector of the sub-block and the second target motion vector of the sub-block. The process will not be repeated.
  • the third reference block corresponding to the sub-block may be determined from the first reference frame based on the first target motion vector of the sub-block; Based on the second target motion vector of the sub-block, a fourth reference block corresponding to the sub-block is determined from the second reference frame. Then, the pixel value of the third reference block and the pixel value of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
  • the fifth reference block can be determined from the first reference frame, and the fifth reference block can be extended to obtain the sixth reference block; then, based on the first target motion vector of the sub-block, the third reference block corresponding to the sub-block is selected from the sixth reference block. Similarly, the seventh reference block may be determined from the second reference frame, and the seventh reference block may be extended to obtain the eighth reference block; based on the second target motion vector of the sub-block, the fourth reference block corresponding to the sub-block may be selected from the eighth reference block. Then, the pixel value of the third reference block and the pixel value of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
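The extend-then-select procedure above can be sketched as follows. Edge replication for the extension and the list-of-lists pixel layout are assumptions; the text only states that the block "can be extended" and a sub-region selected from it:

```python
def extend_block(block, pad):
    """Extend a block by replicating its edge pixels on all four
    sides (replication is an assumed extension method)."""
    rows = [[row[0]] * pad + row + [row[-1]] * pad for row in block]
    return [rows[0]] * pad + rows + [rows[-1]] * pad

def select_block(extended, top, left, h, w):
    """Select the h x w window whose top-left corner is (top, left),
    e.g. an offset derived from the target motion vector."""
    return [row[left:left + w] for row in extended[top:top + h]]
```

For a 2x2 block padded by 1, the extended block is 4x4, and selecting the centered 2x2 window recovers the original pixels.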
  • weighting the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block may include, but is not limited to: weighting the pixel value of the third reference block with the first weight corresponding to the pixel value of the third reference block, and the pixel value of the fourth reference block with the second weight corresponding to the pixel value of the fourth reference block, to obtain the predicted value of the sub-block; for example, the first weight may be the same as the second weight.
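The weighting above can be sketched as a per-pixel weighted average; equal weights (0.5/0.5) follow the example in the text, and the integer rounding and bit-shifting used by real codecs are deliberately omitted:

```python
def weighted_prediction(block3, block4, w1=0.5, w2=0.5):
    """Weight the pixel values of the third and fourth reference
    blocks element-wise to obtain the predicted value."""
    return [[w1 * a + w2 * b for a, b in zip(r3, r4)]
            for r3, r4 in zip(block3, block4)]
```

With equal weights this is simply the average of the two reference blocks, pixel by pixel.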
  • the predicted value of each sub-block can be combined to obtain the predicted value of the current block, and the process of determining the predicted value of the current block is not limited.
  • since the first target motion vector and the second target motion vector are obtained according to the first original motion vector and the second original motion vector, the predicted value is determined according to the first target motion vector and the second target motion vector instead of the first original motion vector and the second original motion vector, which solves the problems of poor prediction quality and prediction errors, and improves coding performance and coding efficiency.
  • Embodiment 2: Based on the same concept as the above method, referring to FIG. 3, which is a schematic flowchart of another encoding and decoding method proposed in an embodiment of this application, the method can be applied to the encoding end and can include the following steps:
  • Step 301 The encoder determines whether to activate the motion vector adjustment mode for the current block. If yes, step 302 is executed; if no, the motion vector adjustment method proposed in this application does not need to be adopted, and there is no restriction on the processing of this situation.
  • if the encoder determines to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is not accurate enough. Therefore, the motion vector adjustment mode is activated for the current block (i.e., the technical solution of the present application), and step 302 is executed.
  • if the encoder determines not to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is sufficiently accurate. Therefore, the motion vector adjustment mode may not be activated for the current block, and the motion vector adjustment method proposed in this application is not used.
  • Step 302 For each sub-block of at least one sub-block included in the current block: the encoding end determines the first reference block corresponding to the sub-block from the first reference frame according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block from the second reference frame according to the second original motion vector of the sub-block.
  • the pixel value of each pixel in the first reference block is called the first pixel value
  • the pixel value of each pixel in the second reference block is called the second pixel value.
  • this bidirectional motion information may include two reference frames and two original motion vectors.
  • the bidirectional motion information may include a first reference frame and a first original motion vector, and a second reference frame and a second original motion vector.
  • the encoding end determines the first reference block corresponding to the sub-block from the first reference frame, and calls the pixel value of each pixel in the first reference block the first pixel value. Based on the second original motion vector, the encoding end determines the second reference block corresponding to the sub-block from the second reference frame, and calls the pixel value of each pixel in the second reference block the second pixel value.
  • the distance between the current frame where the current block is located and the first reference frame and the distance between the second reference frame and the current frame where the current block is located may be the same.
  • the first reference frame is the first frame
  • the current frame is the fifth frame
  • the second reference frame is the ninth frame.
  • the first original motion vector and the second original motion vector may have a mirror symmetry relationship, for example, the first original motion vector is (4, 4), and the second original motion vector is (-4, -4); The first original motion vector is (2.5, 3.5), and the second original motion vector is (-2.5, -3.5).
  • the above is only an example, and there is no restriction on this.
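The mirror symmetry relationship between the two original motion vectors described above (one vector is the negation of the other) can be checked as follows:

```python
def is_mirror_symmetric(mv0, mv1):
    """True when the second motion vector is the component-wise
    negation of the first, i.e. the mirror symmetry from the text."""
    return mv0[0] == -mv1[0] and mv0[1] == -mv1[1]
```

The examples in the text, (4, 4) with (-4, -4) and (2.5, 3.5) with (-2.5, -3.5), both satisfy this relation.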
  • Step 303 The encoding end adjusts the first original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; according to the first reference block The first pixel value of and the second pixel value of the second reference block are adjusted to the second original motion vector to obtain the second target motion vector of the sub-block.
  • the encoding end can perform a local search based on the first pixel value of the first reference block and the second pixel value of the second reference block, in which the first original motion vector and the second original motion vector are fine-tuned to obtain a better first target motion vector and second target motion vector, and then the first target motion vector and the second target motion vector are used to generate a predicted value with less distortion.
  • the current block may include at least one sub-block, and if the current block only includes one sub-block, the sub-block is the current block itself.
  • the sub-block may correspond to the first original motion vector and the second original motion vector. After adjustment, the sub-block may correspond to the first target motion vector and the second target motion vector.
  • sub-block A corresponds to the first original motion vector A1 and the second original motion vector A2; after adjustment, sub-block A corresponds to the first target motion vector A3 and the second target motion vector A4.
  • sub-block B corresponds to the first original motion vector B1 and the second original motion vector B2.
  • the sub-block B corresponds to the first target motion vector B3 and the second target motion vector B4.
  • the first original motion vector A1 corresponding to sub-block A and the first original motion vector B1 corresponding to sub-block B may be the same, and both are the first original motion vector of the current block; the second original motion vector A2 corresponding to sub-block A and the second original motion vector B2 corresponding to sub-block B may be the same, and both are the second original motion vector of the current block.
  • the first target motion vector A3 corresponding to the sub-block A and the first target motion vector B3 corresponding to the sub-block B may be the same or different.
  • the second target motion vector A4 corresponding to the sub-block A and the second target motion vector B4 corresponding to the sub-block B may be the same or different.
  • Step 304 The encoding end determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector.
  • Step 305 The encoding end determines the prediction value of the current block according to the prediction value of each sub-block.
  • the first target motion vector and the second target motion vector of sub-block A can be used to determine the predicted value of sub-block A, and the first target motion vector and the second target motion vector of sub-block B can be used to determine the predicted value of sub-block B; the predicted value of sub-block A and the predicted value of sub-block B together constitute the predicted value of the current block.
  • the encoding end saves the first target motion vector and the second target motion vector of each sub-block of the current block, or saves the first original motion vector and the second original motion vector of each sub-block of the current block, or saves the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block of the current block.
  • Embodiment 3: Based on the same concept as the above method, referring to FIG. 4, which is a schematic flowchart of another encoding and decoding method proposed in an embodiment of this application, the method can be applied to the decoding end and can include the following steps:
  • Step 401 The decoder determines whether to activate the motion vector adjustment mode for the current block. If it is, then step 402 is executed. If not, the motion vector adjustment method proposed in this application does not need to be adopted, and there is no restriction on the processing of this situation.
  • the decoder determines to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is not accurate enough. Therefore, the motion vector adjustment mode is activated for the current block (ie, the technical solution of the present application), and step 402 is executed.
  • the decoder determines not to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is sufficiently accurate. Therefore, the motion vector adjustment mode may not be activated for the current block, and the motion vector adjustment method proposed in this application is not used.
  • Step 402 For each sub-block of at least one sub-block included in the current block: the decoding end determines the first reference block corresponding to the sub-block from the first reference frame according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block from the second reference frame according to the second original motion vector of the sub-block.
  • the pixel value of each pixel in the first reference block is called the first pixel value
  • the pixel value of each pixel in the second reference block is called the second pixel value.
  • Step 403 The decoding end adjusts the first original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block, and adjusts the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
  • Step 404 The decoding end determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector.
  • Step 405 The decoding end determines the predicted value of the current block according to the predicted value of each sub-block.
  • the decoding end saves the first target motion vector and the second target motion vector of each sub-block of the current block, or saves the first original motion vector and the second original motion vector of each sub-block of the current block, or saves the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block of the current block.
  • the implementation of step 401 to step 405 may refer to step 301 to step 305, which will not be repeated here.
  • Embodiment 4: The above embodiments involve whether to activate the motion vector adjustment mode for the current block, which is described below.
  • the following starting conditions can be given.
  • the following starting conditions are just an example. In practical applications, the following starting conditions can be combined arbitrarily, which is not limited. Exemplarily, when all the starting conditions in the following starting conditions are satisfied, it is determined to start the motion vector adjustment mode for the current block.
  • the control information is to allow the current block to use the motion vector adjustment mode.
  • control information may include, but is not limited to: sequence-level control information and/or frame-level control information.
  • the sequence-level (such as multi-frame image) control information may include a control flag (such as sps_cur_tool_enabled_flag), and the frame-level (such as a frame of image) control information may include a control flag (such as pic_cur_tool_disabled_flag).
  • when sps_cur_tool_enabled_flag is the first value and pic_cur_tool_disabled_flag is the second value, it means that the current block is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_enabled_flag indicates whether all images in the sequence are allowed to use the motion vector adjustment mode.
  • pic_cur_tool_disabled_flag indicates whether each block in the current image is not allowed to use the motion vector adjustment mode.
  • sps_cur_tool_enabled_flag is the first value, it means that all images in the sequence are allowed to use the motion vector adjustment mode.
  • pic_cur_tool_disabled_flag is the second value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_enabled_flag is the second value and/or pic_cur_tool_disabled_flag is the first value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information is that the current block is not allowed to use the motion vector adjustment mode.
  • the sequence-level (such as multi-frame image) control information may include a control flag (such as sps_cur_tool_disabled_flag), and the frame-level (such as a frame of image) control information may include a control flag (such as pic_cur_tool_disabled_flag).
  • when sps_cur_tool_disabled_flag is the second value and pic_cur_tool_disabled_flag is the second value, it means that the current block is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_disabled_flag indicates whether all images in the sequence are not allowed to use the motion vector adjustment mode.
  • pic_cur_tool_disabled_flag indicates whether each block in the current image is not allowed to use the motion vector adjustment mode.
  • sps_cur_tool_disabled_flag is the second value, it means that all images in the sequence are allowed to use the motion vector adjustment mode.
  • pic_cur_tool_disabled_flag is the second value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_disabled_flag is the first value and/or pic_cur_tool_disabled_flag is the first value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information is that the current block is not allowed to use the motion vector adjustment mode.
  • the sequence-level (such as multi-frame image) control information may include a control flag bit (such as sps_cur_tool_enabled_flag), and the frame-level (such as a frame of image) control information may include a control flag bit (such as pic_cur_tool_enabled_flag).
  • when sps_cur_tool_enabled_flag is the first value and pic_cur_tool_enabled_flag is the first value, it means that the current block is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_enabled_flag indicates whether all images in the sequence are allowed to use the motion vector adjustment mode.
  • pic_cur_tool_enabled_flag indicates whether to allow each block in the current image to use the motion vector adjustment mode.
  • sps_cur_tool_enabled_flag is the first value, it means that all images in the sequence are allowed to use the motion vector adjustment mode.
  • pic_cur_tool_enabled_flag is the first value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_enabled_flag is the second value and/or pic_cur_tool_enabled_flag is the second value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information is that the current block is not allowed to use the motion vector adjustment mode.
  • the sequence-level (such as multi-frame image) control information may include a control flag (such as sps_cur_tool_disabled_flag), and the frame-level (such as a frame of image) control information may include a control flag (such as pic_cur_tool_enabled_flag).
  • when sps_cur_tool_disabled_flag is the second value and pic_cur_tool_enabled_flag is the first value, it means that the current block is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_disabled_flag indicates whether all images in the sequence are not allowed to use the motion vector adjustment mode.
  • pic_cur_tool_enabled_flag indicates whether to allow each block in the current image to use the motion vector adjustment mode.
  • sps_cur_tool_disabled_flag is the second value, it means that all images in the sequence are allowed to use the motion vector adjustment mode.
  • pic_cur_tool_enabled_flag is the first value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
  • sps_cur_tool_disabled_flag is the first value and/or pic_cur_tool_enabled_flag is the second value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information is that the current block is not allowed to use the motion vector adjustment mode.
  • for example, the first value can be 1 and the second value can be 0, or the first value can be 0 and the second value can be 1; the above is just an example, and there is no restriction on this.
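Under Manner 1 above, taking the first value as 1 and the second value as 0 (the text also allows the opposite assignment), the combined sequence-level and frame-level check can be sketched as:

```python
def mv_adjust_allowed(sps_cur_tool_enabled_flag: int,
                      pic_cur_tool_disabled_flag: int) -> bool:
    """The mode is allowed only when the sequence-level flag enables
    it AND the frame-level flag does not disable it (Manner 1)."""
    return sps_cur_tool_enabled_flag == 1 and pic_cur_tool_disabled_flag == 0
```

The other three manners differ only in which flag is an "enabled" flag and which is a "disabled" flag, so the corresponding predicates flip the compared values accordingly.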
  • the frame herein is equivalent to an image, for example, the current frame represents the current image, and the reference frame represents the reference image.
  • the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode.
  • if the prediction mode of the current block (such as an inter prediction mode) is the fusion mode or skip mode, and the prediction mode of the current block is no mode other than the normal fusion mode (such as the sub-block fusion mode, MMVD mode, CIIP mode, TPM mode, GEO mode, etc.), it means that the current block is allowed to use the motion vector adjustment mode.
  • the prediction mode of the current block is the fusion mode or the skip mode
  • the prediction mode of the current block is not the MMVD mode
  • the prediction mode of the current block is not the CIIP mode
  • the current block is allowed to use the motion vector adjustment mode.
  • the prediction mode of the current block is not the fusion mode, and the prediction mode of the current block is not the skip mode, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 2 is not satisfied.
  • if the prediction mode of the current block is the fusion mode or the skip mode, and the prediction mode of the current block is a mode other than the normal fusion mode (such as the sub-block fusion mode, MMVD mode, CIIP mode, TPM mode, GEO mode, etc.), it means that the current block is not allowed to use the motion vector adjustment mode, that is, start condition 2 is not met.
  • the prediction mode of the current block is a normal fusion mode (such as a regular merge mode)
  • the normal fusion mode is: multiplexing certain motion information in the motion information list of the current block as the motion information of the current block to generate the predicted value of the current block.
  • the prediction mode of the current block is not the normal fusion mode, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 2 is not satisfied.
  • the predicted value of the current block is obtained by weighting the reference blocks from the two reference frames, and the display order of the two reference frames is located one before and one after the current frame, and the distance between the two reference frames and the current frame is the same.
  • the prediction value of the current block obtained by weighting the reference blocks from two reference frames means that the current block adopts the bidirectional prediction mode, that is, the prediction value of the current block is obtained by weighting the reference blocks from the two reference frames.
  • the current block may correspond to the motion information of the two lists, which are recorded as the first motion information and the second motion information.
  • the first motion information includes the first reference frame and the first original motion vector
  • the second motion information includes the second reference frame and the second original motion vector.
  • the display sequence of the two reference frames respectively located one before and one after the current frame means that the first reference frame is located in front of the current frame where the current block is located, and the second reference frame is located behind the current frame.
  • the current block has two lists (such as list0 and list1) of motion information (such as two reference frames and two motion vectors), and the display order of the two reference frames is located in the current frame respectively One after the other, and the distance between the two reference frames and the current frame is the same, it means that the current block is allowed to use the motion vector adjustment mode.
  • the display sequence of the two reference frames are located one before and one after the current frame, and the distance between the two reference frames and the current frame is the same. You can use the display sequence number of the current frame POC_Cur, the display sequence number of the reference frame of list0, POC_0, list1
  • the relative relationship of the display sequence number POC_1 of the reference frame is expressed: that is, (POC_0-POC_Cur) is completely equal to (POC_Cur-POC_0).
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located in the current frame after that.
  • the above condition "there are two reference frames in the current block, and the display sequence of the two reference frames are located one before and one after the current frame, and the distance between the two reference frames and the current frame is the same" can be expressed by the following content :
  • Otherwise, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 3 is not satisfied.
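As an illustration, this mirrored-POC check can be sketched in a few lines of Python; `meets_start_condition_3` and its argument names are illustrative, not identifiers from the patent:

```python
def meets_start_condition_3(poc_cur, poc_0, poc_1):
    # list0 reference must precede the current frame and list1 reference must
    # follow it, at the same display-order distance:
    # (POC_Cur - POC_0) == (POC_1 - POC_Cur).
    dist_before = poc_cur - poc_0
    dist_after = poc_1 - poc_cur
    return dist_before > 0 and dist_before == dist_after
```

For POC_Cur = 8, the pair (POC_0, POC_1) = (4, 12) satisfies the condition, while (4, 16) does not.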
  • If the weights of the two reference frames of the current block are the same, the current block is allowed to use the motion vector adjustment mode.
  • For example, if the frame-level luminance weighting flag of the reference frame refIdxL0 (luma_weight_l0_flag[refIdxL0]) is equal to that of the reference frame refIdxL1 (luma_weight_l1_flag[refIdxL1]), the frame-level weights of the two reference frames of the current block are the same.
  • If the block-level weights of the two reference frames are the same, for example, the index BcwIdx[xCb][yCb] of the block-level weight of the current block is 0, the weights of the two reference frames of the current block are the same.
  • If both the frame-level weights and the block-level weights of the two reference frames are the same, the weights of the two reference frames of the current block are the same.
  • If the weights of the two reference frames of the current block are different, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 4 is not satisfied.
  • For example, if the frame-level weights of the two reference frames are different, the weights of the two reference frames of the current block are different.
  • If the block-level weights of the two reference frames are different, the weights of the two reference frames of the current block are different.
  • If both the frame-level weights and the block-level weights of the two reference frames are different, the weights of the two reference frames of the current block are different.
  • The weighting weights of the two reference frames of the current block refer to the weights used in bidirectional weight compensation.
  • For each sub-block, the two predicted values need to be weighted to obtain the final predicted value of the sub-block.
  • The weights corresponding to the two predicted values are the weighting weights of the two reference frames of the current block; under this start condition, the weights corresponding to the two predicted values are the same.
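A sketch of this weight-equality check, assuming (as the text above suggests) that the frame-level flags must match and that block-level index 0 denotes equal weights; the function name is illustrative:

```python
def reference_weights_equal(luma_weight_l0_flag, luma_weight_l1_flag, bcw_idx):
    # Frame-level weights are the same when the two flags agree; block-level
    # weights are the same when the BcwIdx-style index is 0.
    frame_level_same = (luma_weight_l0_flag == luma_weight_l1_flag)
    block_level_same = (bcw_idx == 0)
    return frame_level_same and block_level_same
```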
  • Start condition 5 requires that the two reference frames of the current block are both short-term reference frames; in other words, neither of the two reference frames of the current block is a long-term reference frame.
  • A short-term reference frame refers to a reference frame that is relatively close to the current frame, and is generally an actual image frame.
  • If the two reference frames of the current block are not both short-term reference frames, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 5 is not met. Or, if one reference frame of the current block is not a short-term reference frame, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 5 is not satisfied. Or, if neither of the two reference frames of the current block is a short-term reference frame, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 5 is not satisfied.
  • Alternatively, if the two reference frames of the current block are not long-term reference frames, the current block is allowed to use the motion vector adjustment mode.
  • The display sequence number POC of a long-term reference frame has no actual meaning.
  • A long-term reference frame refers to a reference frame far away from the current frame, or an image frame synthesized from several actual image frames.
  • If one reference frame of the current block is a long-term reference frame, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 5 is not satisfied.
  • If the two reference frames of the current block are both long-term reference frames, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 5 is not satisfied.
  • Start condition 6 requires that the width, height, and area of the current block are all within a limited range.
  • For example, if the width cbWidth of the current block is greater than or equal to a first threshold (such as 8), the height cbHeight of the current block is greater than or equal to a second threshold (such as 8), and the area (cbHeight*cbWidth) of the current block is greater than or equal to a third threshold (such as 128), the current block is allowed to use the motion vector adjustment mode.
  • If the width cbWidth of the current block is less than the first threshold, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 6 is not satisfied.
  • If the height cbHeight of the current block is less than the second threshold, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 6 is not met.
  • If the area of the current block is less than the third threshold, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 6 is not satisfied.
  • Alternatively, if the width cbWidth of the current block is greater than or equal to the first threshold (such as 8), the height cbHeight of the current block is greater than or equal to the second threshold (such as 8), and the area (cbHeight*cbWidth) of the current block is greater than a fourth threshold (such as 64), the current block is allowed to use the motion vector adjustment mode.
  • In this case, if the width cbWidth of the current block is less than the first threshold, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 6 is not satisfied.
  • If the height cbHeight of the current block is less than the second threshold, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 6 is not met.
  • If the area of the current block is less than or equal to the fourth threshold, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 6 is not satisfied.
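The size check of start condition 6 with the example thresholds (8, 8, 128) can be sketched as:

```python
def meets_start_condition_6(cb_width, cb_height,
                            min_width=8, min_height=8, min_area=128):
    # Width, height and area must all reach their thresholds; the default
    # threshold values are the examples given above, not fixed by the patent.
    return (cb_width >= min_width and cb_height >= min_height
            and cb_width * cb_height >= min_area)
```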
  • Start condition 7 requires that the sizes of the two reference frames of the current block are the same as the size of the current frame.
  • For example, if the size of the list0 reference frame is the same as the size of the current frame (the width of the list0 reference frame equals the width of the current frame, and the height of the list0 reference frame equals the height of the current frame), and the size of the list1 reference frame is the same as the size of the current frame (the width of the list1 reference frame equals the width of the current frame, and the height of the list1 reference frame equals the height of the current frame), the current block is allowed to use the motion vector adjustment mode.
  • If the size of at least one of the two reference frames of the current block differs from the size of the current frame, the current block is not allowed to use the motion vector adjustment mode, that is, start condition 7 is not satisfied.
  • For example, if the width of the list0 reference frame differs from the width of the current frame, the current block is not allowed to use the motion vector adjustment mode.
  • If the height of the list0 reference frame differs from the height of the current frame, the current block is not allowed to use the motion vector adjustment mode.
  • If the width of the list1 reference frame differs from the width of the current frame, the current block is not allowed to use the motion vector adjustment mode.
  • If the height of the list1 reference frame differs from the height of the current frame, the current block is not allowed to use the motion vector adjustment mode.
  • Embodiment 5: In the above embodiment, for each sub-block of the current block, the first reference block corresponding to the sub-block is determined from the first reference frame according to the first original motion vector of the sub-block, and the pixel value of each pixel in the first reference block is called the first pixel value; the second reference block corresponding to the sub-block is determined from the second reference frame according to the second original motion vector of the sub-block, and the pixel value of each pixel in the second reference block is called the second pixel value. This process is described below.
  • The first pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or by copying the pixel values of adjacent pixels in the first reference block.
  • The second pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or by copying the pixel values of adjacent pixels in the second reference block.
  • The size of the first reference block is the same as the size of the second reference block; the width of the first reference block/second reference block is determined based on the width of the sub-block and the search range, and the height of the first reference block/second reference block is determined based on the height of the sub-block and the search range.
  • For example, a smaller sub-block can be 8*8, and a larger sub-block can be 32*32.
  • The size of a sub-block can be the same as the size of the current block, that is, the sub-block is the current block itself.
  • For example, when the current block is 8*16, the current block includes only one sub-block, and the size of that sub-block is 8*16.
  • The size of a sub-block can also differ from the size of the current block.
  • For example, when the current block is 8*32, the current block can include two 8*16 sub-blocks.
  • The above is just an example.
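The sub-block split can be sketched as follows; the per-dimension cap of 16 is an illustrative assumption chosen to match the 8*32 example above (the patent leaves the sub-block size open):

```python
def split_into_subblocks(cb_width, cb_height, max_sub=16):
    # Each dimension is split into pieces no larger than max_sub; returns the
    # sub-block width, sub-block height, and the number of sub-blocks.
    sub_w = min(cb_width, max_sub)
    sub_h = min(cb_height, max_sub)
    count = (cb_width // sub_w) * (cb_height // sub_h)
    return sub_w, sub_h, count
```

An 8*16 block yields a single 8*16 sub-block, and an 8*32 block yields two 8*16 sub-blocks, as in the examples above.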
  • The width of the sub-block is dx, and the height of the sub-block is dy; the first original motion vector is denoted as MV0, and the second original motion vector is denoted as MV1.
  • From the first reference frame, an integer-pixel block with an area of (dx+filtersize-1)*(dy+filtersize-1) can be obtained, which is recorded as integer-pixel block A.
  • From the second reference frame, an integer-pixel block with an area of (dx+filtersize-1)*(dy+filtersize-1) can be obtained, which is recorded as integer-pixel block B.
  • Based on integer-pixel block A, an initial reference pixel block with a size of (dx+2*IterNum)*(dy+2*IterNum) can be obtained by bilinear interpolation, and this initial reference pixel block is recorded as the first reference block.
  • Based on integer-pixel block B, an initial reference pixel block with a size of (dx+2*IterNum)*(dy+2*IterNum) can be obtained by bilinear interpolation, and this initial reference pixel block is recorded as the second reference block.
  • Alternatively, from the integer-pixel block A with an area of (dx+filtersize-1)*(dy+filtersize-1), an initial reference pixel block with a size of (dx+2*IterNum)*(dy+2*IterNum) can be obtained by direct copying, and this initial reference pixel block is recorded as the first reference block; likewise, from the integer-pixel block B with an area of (dx+filtersize-1)*(dy+filtersize-1), an initial reference pixel block with a size of (dx+2*IterNum)*(dy+2*IterNum) can be obtained by direct copying, and this initial reference pixel block is recorded as the second reference block.
  • In an example, the subsequent search process uses only the luminance component to calculate the cost value, in order to reduce complexity.
  • Therefore, an integer-pixel block with an area of (dx+filtersize-1)*(dy+filtersize-1) (such as integer-pixel block A and integer-pixel block B) is obtained first, and then the initial reference pixel blocks, namely the first reference block (such as Pred_Inter0) and the second reference block (such as Pred_Inter1), are obtained.
  • filtersize may be the number of taps of the interpolation filter, for example 8, which is not restricted here.
  • Obtaining the first reference block/second reference block through bilinear interpolation means that the pixel value of each pixel in the first reference block/second reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block/second reference block.
  • Obtaining the first reference block/second reference block by copying means that the pixel value of each pixel in the first reference block/second reference block is obtained by copying the pixel values of adjacent pixels in the first reference block/second reference block.
  • The area of the first reference block is (dx+2*IterNum)*(dy+2*IterNum), and the area of the second reference block is (dx+2*IterNum)*(dy+2*IterNum); that is, the width of the first reference block/second reference block is dx+2*IterNum, and the height of the first reference block/second reference block is dy+2*IterNum.
  • IterNum can be the search range SR, that is, the number of iterations in the subsequent embodiments; IterNum can also be the maximum horizontal/vertical component difference between the target motion vector and the original motion vector. For example, IterNum can be 2.
  • For example, assuming the sub-block is 16*16, filtersize is 8, and IterNum is 2: an integer-pixel block A with an area of 23 (that is, 16+8-1)*23 is obtained, and a first reference block with a size of 20 (that is, 16+2*2)*20 can be obtained from it by bilinear interpolation; similarly, an integer-pixel block B with an area of 23*23 is obtained, and a second reference block with a size of 20*20 is obtained from it.
  • After the first reference block and the second reference block are obtained, they are used for the motion vector adjustment in the subsequent process.
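The two block sizes involved here follow directly from dx, dy, filtersize and IterNum; a small helper makes the arithmetic explicit (names are illustrative):

```python
def reference_block_sizes(dx, dy, iter_num=2, filtersize=8):
    # Integer-pixel fetch area needed by a filtersize-tap interpolation filter,
    # and the padded (dx+2*IterNum)*(dy+2*IterNum) reference block used for
    # the refinement search.
    integer_block = (dx + filtersize - 1, dy + filtersize - 1)
    reference_block = (dx + 2 * iter_num, dy + 2 * iter_num)
    return integer_block, reference_block
```

For a 16*16 sub-block this reproduces the 23*23 integer-pixel block and the 20*20 reference block of the example.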
  • Embodiment 6: In the above embodiment, for each sub-block of the current block, the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
  • The following describes the adjustment process for one sub-block, such as each sub-block of the current block with a size of dx*dy.
  • Step a1: Determine the first original motion vector or the second original motion vector as the center motion vector.
  • For example, if the first original motion vector is (4, 4) and the second original motion vector is (-4, -4), the first original motion vector (4, 4) or the second original motion vector (-4, -4) is determined as the center motion vector.
  • The following takes determining the first original motion vector (4, 4) as the center motion vector as an example; the process of determining the second original motion vector (-4, -4) as the center motion vector is similar and is not repeated here.
  • Step a2: Determine the edge motion vectors corresponding to the center motion vector.
  • For example, the center motion vector (x, y) can be shifted by S in different directions to obtain the edge motion vector (x, y+S), the edge motion vector (x, y-S), the edge motion vector (x+S, y), the edge motion vector (x-S, y), and the edge motion vector (x+right, y+down) in different directions, where right may be S or -S, and down may be S or -S.
  • In the following, offsets are expressed relative to the center motion vector (x, y), that is, the center motion vector is written as (0, 0).
  • Assuming S is 1 and both right and down are 1, the edge motion vectors include: the edge motion vector (0, 1), the edge motion vector (0, -1), the edge motion vector (1, 0), the edge motion vector (-1, 0), and the edge motion vector (1, 1).
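Generating the five edge candidates of step a2 can be sketched as (names illustrative):

```python
def edge_motion_vectors(center, s=1, right=1, down=1):
    # Four cross neighbours at distance s, plus one corner candidate whose
    # components `right` and `down` may each be s or -s.
    x, y = center
    return [(x, y + s), (x, y - s), (x + s, y), (x - s, y),
            (x + right, y + down)]
```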
  • Step a3: Obtain the first cost value corresponding to the center motion vector and the second cost value corresponding to each edge motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block.
  • For example, the sub-reference block A1 corresponding to the center motion vector (0, 0) is copied from the first reference block; sub-reference block A1 is the sub-reference block of the center motion vector (0, 0) in the first reference block.
  • The sub-reference block B1 corresponding to the center motion vector (0, 0) is copied from the second reference block; sub-reference block B1 is the sub-reference block of the center motion vector (0, 0) in the second reference block.
  • Cost value 1 corresponding to the center motion vector (0, 0) is then obtained using the first pixel value of sub-reference block A1 and the second pixel value of sub-reference block B1.
  • The sub-reference block A2 corresponding to the edge motion vector (0, 1) is copied from the first reference block; sub-reference block A2 is the sub-reference block of the edge motion vector (0, 1) in the first reference block.
  • The sub-reference block B2 corresponding to the symmetric motion vector (0, -1) of the edge motion vector (0, 1) is copied from the second reference block; sub-reference block B2 is the sub-reference block of the symmetric motion vector (0, -1) in the second reference block.
  • Cost value 2 corresponding to the edge motion vector (0, 1) is obtained using the first pixel value of sub-reference block A2 and the second pixel value of sub-reference block B2.
  • For the determination method of the cost value, please refer to the subsequent embodiments.
  • In the same way, cost value 3 corresponding to the edge motion vector (0, -1), cost value 4 corresponding to the edge motion vector (1, 0), cost value 5 corresponding to the edge motion vector (-1, 0), and cost value 6 corresponding to the edge motion vector (1, 1) can be determined, which is not repeated here.
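The cost value of a candidate can be computed, for example, as the SAD between the two sub-reference blocks, which matches the SAD-based cost used in the later embodiments; the helper name is illustrative:

```python
def sad_cost(sub_block0, sub_block1):
    # Sum of absolute differences between two equally sized pixel blocks,
    # each given as a list of rows.
    return sum(abs(p0 - p1)
               for row0, row1 in zip(sub_block0, sub_block1)
               for p0, p1 in zip(row0, row1))
```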
  • Step a4: According to the first cost value and the second cost values, a motion vector is selected from the center motion vector and the edge motion vectors as the optimal motion vector. For example, the motion vector with the smallest cost value can be used as the optimal motion vector.
  • For example, if cost value 2 is the smallest, the edge motion vector (0, 1) corresponding to cost value 2 can be used as the optimal motion vector.
  • Step a5: Determine whether the end condition is met. If not, the optimal motion vector can be determined as the center motion vector, and the process returns to step a2. If so, step a6 can be performed.
  • For example, if the number of iterations/search range reaches a threshold, the end condition is met; if the number of iterations/search range does not reach the threshold, the end condition is not met. For example, assuming SR is 2, that is, the threshold is 2: if the number of iterations/search range has reached 2, that is, steps a2 to a4 have been executed twice, the end condition is satisfied; otherwise, the end condition is not satisfied.
  • In another example, if after selection the optimal motion vector is still the center motion vector, the end condition can also be considered satisfied.
  • Step a6: Determine the first integer-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second integer-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector.
  • For example, the first integer-pixel motion vector adjustment value may be determined according to the optimal motion vector and the first original motion vector, and the second integer-pixel motion vector adjustment value may be determined according to the first integer-pixel motion vector adjustment value.
  • The second integer-pixel motion vector adjustment value is symmetrical to the first integer-pixel motion vector adjustment value.
  • For example, if in the first iteration the optimal motion vector is the edge motion vector (0, 1), the second iteration is performed with the edge motion vector (0, 1) as the center.
  • Assume that in the second iteration the optimal motion vector is again the edge motion vector (0, 1).
  • Then, the first integer-pixel motion vector adjustment value is (0, 2), that is, the sum of the edge motion vector (0, 1) from the first iteration and the edge motion vector (0, 1) from the second iteration.
  • For example, assume that the first original motion vector is (4, 4).
  • If in the first iteration the optimal motion vector is the edge motion vector (0, 1), this optimal motion vector corresponds to the absolute motion vector (4, 5).
  • The second iteration takes the edge motion vector (0, 1), that is, (4, 5), as the center.
  • If in the second iteration the optimal motion vector is again the edge motion vector (0, 1), it corresponds to the absolute motion vector (4, 6).
  • The first integer-pixel motion vector adjustment value is determined according to the optimal motion vector (4, 6) and the first original motion vector (4, 4): it is the difference between the optimal motion vector (4, 6) and the first original motion vector (4, 4), that is, the first integer-pixel motion vector adjustment value is (0, 2).
  • Then, the second integer-pixel motion vector adjustment value can be (0, -2), that is, the symmetric value of (0, 2).
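Step a6 then reduces to simple vector arithmetic; a sketch:

```python
def integer_adjustments(original_mv, optimal_mv):
    # First integer-pixel adjustment value: optimal MV minus original MV.
    # Second integer-pixel adjustment value: its symmetric (negated) value.
    dx = optimal_mv[0] - original_mv[0]
    dy = optimal_mv[1] - original_mv[1]
    return (dx, dy), (-dx, -dy)
```

With the example above, integer_adjustments((4, 4), (4, 6)) returns (0, 2) and (0, -2).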
  • Step a7: Determine the first sub-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second sub-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector.
  • For example, the first sub-pixel motion vector adjustment value can be determined according to the cost value corresponding to the optimal motion vector and the cost values corresponding to the edge motion vectors of the optimal motion vector.
  • Here, SPMV denotes the first sub-pixel motion vector adjustment value, and N is related to the pixel precision of the motion vector.
  • For example, if the pixel precision of the motion vector is 1/2, N is 1; if the pixel precision is 1/4, N is 2; if the pixel precision is 1/8, N is 4; if the pixel precision is 1/16, N is 8.
  • E(0,0) represents the cost value of the optimal motion vector;
  • E(-1,0) represents the cost value of the edge motion vector (-1,0) of the optimal motion vector (0,0), with the optimal motion vector as the center;
  • E(1,0) represents the cost value of the edge motion vector (1,0) of the optimal motion vector (0,0), with the optimal motion vector as the center;
  • E(0,-1) represents the cost value of the edge motion vector (0,-1) of the optimal motion vector (0,0), with the optimal motion vector as the center;
  • E(0,1) represents the cost value of the edge motion vector (0,1) of the optimal motion vector (0,0), with the optimal motion vector as the center.
  • For the cost value of each motion vector, refer to the determination method in the above example, which is not repeated here.
  • After the first sub-pixel motion vector adjustment value is obtained, the second sub-pixel motion vector adjustment value can be determined according to the first sub-pixel motion vector adjustment value: the second sub-pixel motion vector adjustment value is the symmetric value of the first sub-pixel motion vector adjustment value. For example, if the first sub-pixel motion vector adjustment value is (1, 0), the second sub-pixel motion vector adjustment value can be (-1, 0), that is, the symmetric value of the first sub-pixel motion vector adjustment value (1, 0).
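This excerpt defines N and the five cost values E(·) but does not reproduce the SPMV formula itself. The sketch below uses the standard parabolic error-surface fit that is commonly paired with exactly these definitions, so it should be read as an assumption rather than the patent's exact expression:

```python
def subpel_adjustment(e, n=8):
    # e maps offsets to cost values, keys: (0,0), (-1,0), (1,0), (0,-1), (0,1).
    # Per axis: n * (E(minus) - E(plus)) / (2 * (E(minus) + E(plus) - 2*E(0,0))).
    # n scales to the MV storage precision (e.g. n = 8 for 1/16-pel precision).
    def axis(minus, plus):
        denom = 2 * (minus + plus - 2 * e[(0, 0)])
        return 0 if denom == 0 else n * (minus - plus) // denom
    return (axis(e[(-1, 0)], e[(1, 0)]), axis(e[(0, -1)], e[(0, 1)]))
```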
  • Step a8: Adjust the first original motion vector according to the first integer-pixel motion vector adjustment value and/or the first sub-pixel motion vector adjustment value to obtain the first target motion vector.
  • For example, the first target motion vector = the first original motion vector + the first integer-pixel motion vector adjustment value + the first sub-pixel motion vector adjustment value.
  • Step a9: Adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and/or the second sub-pixel motion vector adjustment value to obtain the second target motion vector.
  • For example, the second target motion vector = the second original motion vector + the second integer-pixel motion vector adjustment value + the second sub-pixel motion vector adjustment value.
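Putting steps a8 and a9 together with the symmetric adjustment values, the final combination can be sketched as (names illustrative):

```python
def refine_motion_vectors(org_mv0, org_mv1, integer_delta, spmv):
    # Target MV = original MV + integer-pixel adjustment + sub-pixel adjustment;
    # the second (list1) adjustments are the symmetric values of the first.
    refined_mv0 = (org_mv0[0] + integer_delta[0] + spmv[0],
                   org_mv0[1] + integer_delta[1] + spmv[1])
    refined_mv1 = (org_mv1[0] - integer_delta[0] - spmv[0],
                   org_mv1[1] - integer_delta[1] - spmv[1])
    return refined_mv0, refined_mv1
```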
  • Embodiment 7: In the above embodiment, for each sub-block of the current block, the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
  • The following describes the adjustment process for one sub-block, such as each sub-block of the current block with a size of dx*dy.
  • The first original motion vector is recorded as Org_MV0, and the second original motion vector is recorded as Org_MV1; the first target motion vector obtained is recorded as Refined_MV0, and the second target motion vector obtained is recorded as Refined_MV1.
  • Step b1: Perform SR iterations to obtain the optimal integer-pixel offset of the integer-pixel MV point, recorded as IntegerDeltaMV; IntegerDeltaMV is the first integer-pixel motion vector adjustment value in the above embodiment. For example, IntegerDeltaMV is first initialized to (0, 0), and the following process is performed in each iteration:
  • Step b11: Set deltaMV to (0, 0). If this is the first iteration, the predicted value block A1 (that is, the most central dx*dy block of the first reference block) is obtained by copying based on the reference pixels of the first original motion vector in the first reference block, and the predicted value block B1 (that is, the most central dx*dy block of the second reference block) is obtained by copying based on the reference pixels of the second original motion vector in the second reference block. The initial cost value cost is obtained based on the predicted value block A1 and the predicted value block B1; the initial cost value is the SAD (sum of absolute differences) of the predicted value block A1 and the predicted value block B1 (see the subsequent embodiments for the determination method). If the initial cost value cost is less than dx*dy, where dx and dy are the width and height of the current sub-block, the subsequent search process is skipped directly, step b2 is executed, and notZeroCost is set to false.
  • Step b12: As shown in Figure 6, with the above initial point as the center, five offset MVs (called MVOffset) are obtained in the order {Mv(0,1), Mv(0,-1), Mv(1,0), Mv(-1,0), Mv(right, down)}, and the calculation and comparison process of the cost values of these five offset MVs is performed.
  • For example, based on a certain MVOffset (such as Mv(0,1)), two predicted value blocks are obtained through this MVOffset: the dx*dy block in the first reference block whose center position is offset by MVOffset, and the dx*dy block in the second reference block whose center position is offset by -MVOffset (the opposite of MVOffset). The down-sampled SAD of the two predicted value blocks is calculated as the cost value of the MVOffset.
  • Step b13: After one iteration, if the optimal MV is still the initial MV (that is, not an MVOffset) or the minimum cost value is 0, the next iterative search process is not performed, step b2 is executed, and notZeroCost is set to false.
  • Otherwise, if the number of iterations reaches SR, step b2 is executed; if the number of iterations does not reach SR, the next iterative search process is performed with the optimal MV as the center, that is, the process returns to step b11.
  • After the iterations end, IntegerDeltaMV is obtained, that is, the final value of IntegerDeltaMV, which is the first integer-pixel motion vector adjustment value and is subsequently recorded as IntegerDeltaMV.
  • In step b2, the optimal sub-pixel offset MV can be obtained with the optimal integer-pixel MV point of step b1 as the center; it is recorded as SPMV, and SPMV is the first sub-pixel motion vector adjustment value in the above embodiment.
  • Step b21: Subsequent processing (that is, obtaining SPMV) is performed only when notZeroCost is not false and deltaMV is (0, 0); otherwise, the original motion vectors are adjusted directly using IntegerDeltaMV rather than using both IntegerDeltaMV and SPMV.
  • In step b22, the value of SPMV, that is, the first sub-pixel motion vector adjustment value, can be obtained.
  • BestMVoffset = IntegerDeltaMV + SPMV, that is, the sum of the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value.
  • -IntegerDeltaMV is the symmetric value of IntegerDeltaMV, that is, the second integer-pixel motion vector adjustment value; -SPMV is the symmetric value of SPMV, that is, the second sub-pixel motion vector adjustment value.
  • -BestMVoffset = (-IntegerDeltaMV) + (-SPMV), that is, the sum of the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value.
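The integer-pixel search of step b1, including the early-exit rules of steps b11 and b13, can be sketched as below. This is a simplified illustration under stated assumptions: only the four cross offsets are tested (the fifth Mv(right, down) candidate is omitted), the cost is a plain full-resolution SAD rather than the down-sampled SAD, and `fetch_block` is a hypothetical helper over padded reference blocks:

```python
def fetch_block(ref, left, top, dx, dy):
    # Hypothetical helper: dx*dy window of `ref` with top-left at (left, top).
    return [row[left:left + dx] for row in ref[top:top + dy]]

def sad(a, b):
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def integer_search(ref0, ref1, dx, dy, sr=2):
    # Step b1 sketch: starting from the centre of the padded reference blocks,
    # iteratively test offsets; the block in ref1 is fetched with the opposite
    # offset, mirroring the -MVOffset rule of step b12.
    delta = (0, 0)
    c = sr  # top-left of the central dx*dy block inside the padded references
    best_cost = sad(fetch_block(ref0, c, c, dx, dy),
                    fetch_block(ref1, c, c, dx, dy))
    if best_cost < dx * dy:           # step b11 early exit
        return (0, 0), False
    not_zero_cost = True
    for _ in range(sr):
        best_off = (0, 0)
        for off in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
            ox, oy = delta[0] + off[0], delta[1] + off[1]
            if abs(ox) > sr or abs(oy) > sr:
                continue              # stay inside the padded search area
            cost = sad(fetch_block(ref0, c + ox, c + oy, dx, dy),
                       fetch_block(ref1, c - ox, c - oy, dx, dy))
            if cost < best_cost:
                best_cost, best_off = cost, off
        if best_off == (0, 0):        # step b13: best is still the centre
            not_zero_cost = False
            break
        delta = (delta[0] + best_off[0], delta[1] + best_off[1])
        if best_cost == 0:            # step b13: minimum cost is 0
            not_zero_cost = False
            break
    return delta, not_zero_cost
```

The returned `delta` plays the role of IntegerDeltaMV, and the returned flag mirrors notZeroCost.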
  • Embodiment 8: In an example, in order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the rule "if the initial cost value is less than dx*dy, the subsequent search process is skipped" in step b11 is removed; that is, even if the initial cost value is less than dx*dy, the subsequent search process is not skipped directly but continues, that is, step b12 needs to be performed.
  • Embodiment 9: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the rule "if the initial cost value is less than dx*dy, the subsequent search process is skipped" in step b11 is removed; that is, even if the initial cost value is less than dx*dy, the subsequent search process is not skipped directly but continues, that is, step b12 needs to be performed.
  • In addition, the rule "if the optimal MV is still the initial MV (that is, not an MVOffset) or the minimum cost value is 0, the next iterative search process is not performed" in step b13 is removed; that is, even if the optimal MV is still the initial MV or the minimum cost value is 0, the next iterative search process can still be performed.
  • Embodiment 10: In an example, in order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the processing related to "notZeroCost" is removed; that is, in step b11 and step b13, the value of notZeroCost is not set or saved. In step b21, the sub-pixel offset calculation process (that is, step b22) can be performed as long as deltaMV is (0, 0), not only when notZeroCost is not false and deltaMV is (0, 0).
  • Embodiment 11: In an example, in order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the condition in step b21, "subsequent processing can be performed only when notZeroCost is not false and deltaMV is (0, 0); otherwise, the original motion vector is adjusted directly with IntegerDeltaMV", is modified to "subsequent processing can be performed only when notZeroCost is not false and the cost values of the four points one integer pixel away from the current optimal integer pixel in the up, down, left, and right directions have already been calculated in step b1; otherwise, the original motion vector is adjusted directly with IntegerDeltaMV". In an example, "subsequent processing" refers to the sub-pixel offset calculation process in step b22.
  • The sub-pixel offset calculation process in step b22 needs the cost values of the four points one integer pixel away from the optimal integer pixel in the up, down, left, and right directions. Therefore, "the cost values of the four points one integer pixel away from the optimal integer pixel in the up, down, left, and right directions have already been calculated in step b1" may be a necessary condition.
  • Embodiment 12: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the condition in step b21, "subsequent processing can be performed only when notZeroCost is not false and deltaMV is (0, 0); otherwise, IntegerDeltaMV is used to adjust the original motion vector", is modified to "as long as the cost values of the four points one integer pixel away from the current optimal integer pixel in the up, down, left, and right directions have already been calculated in step b1, the subsequent processing (i.e., the sub-pixel offset calculation process) is performed; otherwise, the original motion vector is adjusted with IntegerDeltaMV".
  • Embodiment 13: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the condition in step b21, "subsequent processing can be performed only when notZeroCost is not false and deltaMV is (0, 0); otherwise, IntegerDeltaMV is used directly to adjust the original motion vector", is modified to "if the cost values of the four points one integer pixel away from the current optimal integer pixel in the up, down, left, and right directions have already been calculated in step b1, the subsequent processing (the sub-pixel offset calculation process in step b22) is performed; otherwise, step b23 is used for processing".
  • Step b23: Select, around the current optimal integer pixel MV_inter_org, the nearest integer pixel MV_inter_nearest for which the cost values of the four surrounding points one integer pixel away in the up, down, left, and right directions have all been calculated in step b1. Then perform the sub-pixel offset calculation process of step b22 centered on MV_inter_nearest, that is, obtain the SPMV centered on MV_inter_nearest.
  • For example, if the cost values of the four points one integer pixel away from the current optimal integer pixel MV_inter_org in the up, down, left, and right directions were not all calculated in step b1, an integer pixel MV_inter_nearest is selected from the neighborhood of MV_inter_org such that the cost values of the four points one integer pixel away from MV_inter_nearest in the up, down, left, and right directions have all been calculated in step b1. Then MV_inter_nearest is taken as the current optimal integer pixel, and the SPMV is obtained centered on MV_inter_nearest.
  • If x0/y0 is greater than 2N, x0/y0 can be assigned the value 2N; if x0/y0 is less than -2N, x0/y0 can be assigned the value -2N.
  • Embodiment 14: In the above embodiments, it is necessary to determine the edge motion vectors corresponding to the center motion vector. For example, the center motion vector (x, y) may be shifted by S toward different directions, sequentially obtaining the edge motion vector (x, y+S), the edge motion vector (x, y-S), the edge motion vector (x+S, y), the edge motion vector (x-S, y), and the edge motion vector (x+right, y+down) in different directions. Alternatively, the center motion vector (x, y) may be shifted by S toward different directions, sequentially obtaining the edge motion vector (x, y-S), the edge motion vector (x, y+S), the edge motion vector (x-S, y), the edge motion vector (x+S, y), and the edge motion vector (x+right, y+down) in different directions.
  • For example, assuming (x, y) is (0, 0) and S is 1, the 5 edge motion vectors are obtained in the order (0, 1), (0, -1), (1, 0), (-1, 0), (right, down); alternatively, the 5 edge motion vectors are obtained in the order (0, -1), (0, 1), (-1, 0), (1, 0), (right, down).
  • Embodiment 15: In the above embodiments, the default value of the edge motion vector (x+right, y+down) is (x-S, y-S). If the cost value of the edge motion vector (x+S, y) is less than the cost value of the edge motion vector (x-S, y), then right is S (modified from -S to S); if the cost value of the edge motion vector (x, y+S) is less than the cost value of the edge motion vector (x, y-S), then down is S (modified from -S to S).
  • Alternatively, if the cost value of the edge motion vector (x+S, y) is less than or equal to the cost value of the edge motion vector (x-S, y), then right is S (modified from -S to S); if the cost value of the edge motion vector (x, y+S) is less than or equal to the cost value of the edge motion vector (x, y-S), then down is S (modified from -S to S).
  • For example, assuming (x, y) is (0, 0) and S is 1, the default value of (right, down) is (-1, -1). If the cost value of the edge motion vector (1, 0) is less than the cost value of the edge motion vector (-1, 0), then right is 1; if the cost value of the edge motion vector (0, 1) is less than the cost value of the edge motion vector (0, -1), then down is 1. Alternatively, if the cost value of the edge motion vector (1, 0) is less than or equal to the cost value of the edge motion vector (-1, 0), then right is 1; if the cost value of the edge motion vector (0, 1) is less than or equal to the cost value of the edge motion vector (0, -1), then down is 1.
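As an illustrative sketch (not part of the claimed embodiments), the selection of right and down described above can be expressed in Python; the function name and the representation of cost values as a dictionary keyed by offsets are assumptions:

```python
def default_diagonal_offset(cost, S=1, le=False):
    """Choose (right, down) for the fifth edge motion vector.

    cost maps an integer offset (dx, dy) from the center motion vector
    to its cost value. The defaults are right = -S and down = -S; an
    offset flips to +S when the comparison described above holds.
    le=True selects the "less than or equal" variant.
    """
    better = (lambda a, b: a <= b) if le else (lambda a, b: a < b)
    right = S if better(cost[(S, 0)], cost[(-S, 0)]) else -S
    down = S if better(cost[(0, S)], cost[(0, -S)]) else -S
    return right, down

# With S = 1: cost(1, 0) < cost(-1, 0) flips right to 1, while
# cost(0, 1) >= cost(0, -1) leaves down at its default -1.
print(default_diagonal_offset({(1, 0): 10, (-1, 0): 20,
                               (0, 1): 30, (0, -1): 5}))  # (1, -1)
```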
  • Embodiment 16: In the above embodiments, for each sub-block of the current block (e.g., each sub-block of size dx*dy of the current block), the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
  • Step c1: With the initial motion vector as the center, select some or all of the motion vectors from the (2*SR+1)*(2*SR+1) motion vectors surrounding and including the initial motion vector, and determine the selected motion vectors as candidate motion vectors, where SR is the search range. The initial motion vector may be the first original motion vector or the second original motion vector; in the following, the first original motion vector is taken as an example, i.e., the initial motion vector is the first original motion vector.
  • For example, the search order of the motion vectors may be from left to right, then from top to bottom. For example, with search range SR = 2, all 25 motion vectors including and surrounding the initial motion vector are selected, and the selected motion vectors are determined as candidate motion vectors; the search order of the motion vectors is: {Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2)}.
  • Alternatively, 21 of these motion vectors may be selected as candidate motion vectors, with the search order: {Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-1,2), Mv(0,2), Mv(1,2)}.
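As a rough illustration, the two candidate orders above (25 points, or 21 points with the four corner offsets omitted) follow a plain left-to-right, top-to-bottom raster scan, which can be sketched as follows; the function name is hypothetical:

```python
def raster_candidates(SR=2, drop_corners=False):
    """Enumerate integer offsets within search range SR in
    left-to-right, top-to-bottom order, as in the lists above.
    With drop_corners=True the four corner offsets are omitted,
    giving 21 instead of 25 candidates for SR = 2."""
    mvs = []
    for dy in range(-SR, SR + 1):
        for dx in range(-SR, SR + 1):
            if drop_corners and abs(dx) == SR and abs(dy) == SR:
                continue
            mvs.append((dx, dy))
    return mvs

print(len(raster_candidates()))                   # 25
print(len(raster_candidates(drop_corners=True)))  # 21
print(raster_candidates()[:5])  # [(-2, -2), (-1, -2), (0, -2), (1, -2), (2, -2)]
```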
  • Step c2: According to the first pixel value of the first reference block and the second pixel value of the second reference block, obtain the third cost value corresponding to the first original motion vector (i.e., the initial motion vector) and the fourth cost value corresponding to each candidate motion vector.
  • For example, the sub-reference block A1 corresponding to the first original motion vector may be obtained by copying from the first reference block; the sub-reference block A1 is the sub-reference block of the first original motion vector in the first reference block. The sub-reference block B1 corresponding to the second original motion vector may then be copied from the second reference block; the sub-reference block B1 is the sub-reference block of the second original motion vector in the second reference block. The first pixel value of the sub-reference block A1 and the second pixel value of the sub-reference block B1 are then used to obtain the third cost value corresponding to the first original motion vector.
  • For each candidate motion vector, the sub-reference block A2 corresponding to the candidate motion vector can be obtained by copying from the first reference block; the sub-reference block A2 is the sub-reference block of the candidate motion vector in the first reference block. The sub-reference block B2 corresponding to the symmetric motion vector of the candidate motion vector is then copied from the second reference block; the sub-reference block B2 is the sub-reference block of the symmetric motion vector in the second reference block. The first pixel value of the sub-reference block A2 and the second pixel value of the sub-reference block B2 are used to obtain the fourth cost value corresponding to the candidate motion vector.
  • Step c3: According to the third cost value and the fourth cost values, a motion vector is selected from the first original motion vector and the candidate motion vectors, and the selected motion vector is determined as the optimal motion vector. For example, the motion vector with the smallest cost value (the first original motion vector or any candidate motion vector) may be used as the optimal motion vector.
  • Step c4: Determine the first integer-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second integer-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector. In an example, the first integer-pixel motion vector adjustment value is determined according to the optimal motion vector and the first original motion vector, and the second integer-pixel motion vector adjustment value is determined according to the first integer-pixel motion vector adjustment value; the second integer-pixel motion vector adjustment value is symmetric to the first integer-pixel motion vector adjustment value.
  • For example, if the optimal motion vector is (4, 6) and the first original motion vector is (4, 4), the first integer-pixel motion vector adjustment value is the difference between the optimal motion vector (4, 6) and the first original motion vector (4, 4), i.e., (0, 2). The second integer-pixel motion vector adjustment value is then determined according to the first integer-pixel motion vector adjustment value (0, 2): it can be (0, -2), i.e., the symmetric value of (0, 2).
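A minimal sketch of step c4, using the example values from the text (optimal MV (4, 6), first original MV (4, 4)); the function name is hypothetical:

```python
def integer_mv_adjustments(best_mv, org_mv0):
    """First integer-pixel adjustment = optimal MV - first original MV;
    the second adjustment is its symmetric (negated) value."""
    adj0 = (best_mv[0] - org_mv0[0], best_mv[1] - org_mv0[1])
    adj1 = (-adj0[0], -adj0[1])
    return adj0, adj1

# Example from the text: optimal MV (4, 6), first original MV (4, 4).
print(integer_mv_adjustments((4, 6), (4, 4)))  # ((0, 2), (0, -2))
```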
  • Step c5: Determine the first sub-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second sub-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector. For example, the first sub-pixel motion vector adjustment value may be determined according to the cost value corresponding to the optimal motion vector and the cost values corresponding to the edge motion vectors of the optimal motion vector, and the second sub-pixel motion vector adjustment value may then be determined according to the first sub-pixel motion vector adjustment value.
  • x0 = N*(E(-1,0) - E(1,0)) / (E(-1,0) + E(1,0) - 2*E(0,0))
  • y0 = N*(E(0,-1) - E(0,1)) / (E(0,-1) + E(0,1) - 2*E(0,0))
  • where N may be 1, 2, 4, or 8.
  • SPMV = deltaMv/(2*N), where deltaMv = (x0, y0); for example, if the current pixel precision of the motion vector is 1/16, then SPMV is (x0/16, y0/16).
  • SPMV is the first sub-pixel motion vector adjustment value. In the above formulas:
  • E(0,0) represents the cost value of the optimal motion vector;
  • E(-1,0) represents the cost value of the edge motion vector (-1,0) of the optimal motion vector (0,0), centered on the optimal motion vector;
  • E(1,0) represents the cost value of the edge motion vector (1,0) of the optimal motion vector (0,0), centered on the optimal motion vector;
  • E(0,-1) represents the cost value of the edge motion vector (0,-1) of the optimal motion vector (0,0), centered on the optimal motion vector;
  • E(0,1) represents the cost value of the edge motion vector (0,1) of the optimal motion vector (0,0), centered on the optimal motion vector.
  • Then, the second sub-pixel motion vector adjustment value can be determined according to the first sub-pixel motion vector adjustment value; the second sub-pixel motion vector adjustment value is the symmetric value of the first sub-pixel motion vector adjustment value. For example, if the first sub-pixel motion vector adjustment value is (1, 0), the second sub-pixel motion vector adjustment value is (-1, 0), i.e., the symmetric value of (1, 0).
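A minimal sketch of the sub-pixel offset formulas above, including the clamping of x0/y0 to [-2N, 2N] mentioned earlier. The function name and the dictionary representation of the E values are assumptions, and the division-by-zero guard is an added safety check not stated in the text:

```python
def subpel_offset(E, N=1, clamp=True):
    """Sub-pixel offset deltaMv = (x0, y0) from the five cost values
    around the optimal integer position, following:
      x0 = N*(E(-1,0)-E(1,0)) / (E(-1,0)+E(1,0)-2*E(0,0))
      y0 = N*(E(0,-1)-E(0,1)) / (E(0,-1)+E(0,1)-2*E(0,0))
    E maps an offset (dx, dy) to its cost value; the SPMV is then
    deltaMv divided by 2*N per the text."""
    den_x = E[(-1, 0)] + E[(1, 0)] - 2 * E[(0, 0)]
    den_y = E[(0, -1)] + E[(0, 1)] - 2 * E[(0, 0)]
    x0 = N * (E[(-1, 0)] - E[(1, 0)]) / den_x if den_x else 0.0
    y0 = N * (E[(0, -1)] - E[(0, 1)]) / den_y if den_y else 0.0
    if clamp:  # keep x0/y0 within [-2N, 2N] as described earlier
        x0 = max(-2 * N, min(2 * N, x0))
        y0 = max(-2 * N, min(2 * N, y0))
    return x0, y0

# Costs whose minimum lies slightly to the right of the center: x0 > 0.
E = {(0, 0): 4, (-1, 0): 12, (1, 0): 6, (0, -1): 8, (0, 1): 8}
print(subpel_offset(E, N=1))  # (0.6, 0.0)
```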
  • Step c6: Adjust the first original motion vector according to the first integer-pixel motion vector adjustment value and/or the first sub-pixel motion vector adjustment value to obtain the first target motion vector corresponding to the first original motion vector. For example: the first target motion vector = the first original motion vector + the first integer-pixel motion vector adjustment value + the first sub-pixel motion vector adjustment value.
  • Step c7: Adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and/or the second sub-pixel motion vector adjustment value to obtain the second target motion vector corresponding to the second original motion vector. For example: the second target motion vector = the second original motion vector + the second integer-pixel motion vector adjustment value + the second sub-pixel motion vector adjustment value.
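Steps c6 and c7 reduce to per-component addition, sketched below with the hypothetical helper refine_mv:

```python
def refine_mv(org_mv, int_adj, subpel_adj):
    """Target motion vector = original MV + integer-pixel adjustment
    + sub-pixel adjustment (per-component addition)."""
    return tuple(o + a + s for o, a, s in zip(org_mv, int_adj, subpel_adj))

# Refined_MV0 and Refined_MV1 use symmetric adjustment values.
print(refine_mv((4, 4), (0, 2), (0, 0)))     # (4, 6)
print(refine_mv((-4, -4), (0, -2), (0, 0)))  # (-4, -6)
```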
  • Embodiment 17: In the above embodiments, for each sub-block of the current block (e.g., each sub-block of size dx*dy of the current block), the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block. For convenience of description, the first original motion vector is denoted as Org_MV0, the second original motion vector as Org_MV1, the first target motion vector as Refined_MV0, and the second target motion vector as Refined_MV1.
  • Step d1: There is no need to perform an iterative process; all candidate motion vectors to be processed can be selected at one time, instead of an iterative process in which part of the motion vectors are selected in the first iteration and part of the motion vectors are selected in the second iteration. Since all candidate motion vectors to be processed are selected at one time, these candidate motion vectors can be processed in parallel to obtain the cost value of each candidate motion vector, thereby reducing computational complexity and improving coding performance.
  • Step d2: Determine the value of IntegerDeltaMV according to the optimal motion vector; the final value of IntegerDeltaMV is the first integer-pixel motion vector adjustment value.
  • Step d3: Obtain the optimal sub-pixel offset MV centered on the optimal motion vector, and record the optimal sub-pixel offset as SPMV; the value of SPMV is the first sub-pixel motion vector adjustment value. For the implementation process of step d3, refer to step b2 above, which will not be repeated here.
  • Embodiment 18 In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation manner is similar to that of Embodiment 16 and Embodiment 17.
  • Embodiment 19: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation manner is similar to that of Embodiment 16 and Embodiment 17.
  • In this embodiment, since all candidate motion vectors to be processed are selected at one time, these candidate motion vectors can be processed in parallel to obtain the cost value of each candidate motion vector, thereby reducing computational complexity and improving coding performance. From the (2*SR+1)*(2*SR+1) points centered on the original motion vector, some motion vectors whose offset does not exceed the SR range are selected; the number N of candidate points is greater than or equal to 1 and less than or equal to (2*SR+1)*(2*SR+1). The cost value of the motion vector corresponding to each of these N points is determined. The cost values of these N points may be scanned in a certain order, and the motion vector with the smallest cost value is selected as the optimal motion vector; if cost values are equal, the candidate point ranked earlier in the order is selected first. In an example, the cost value may be determined based on the down-sampled SAD of the two prediction values obtained from the candidate motion vector.
  • In an example, the number of candidate points may be 25, and the order of these candidate points may be from left to right and from top to bottom. For example, the order of these candidate points may be: {Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2)}. Alternatively, the order of these candidate points may be: {Mv(0,0), Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2)}. The optimal offset MV can then be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value.
  • Alternatively, the number of candidate points may be 21, and the order of these candidate points is: {Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-1,2), Mv(0,2), Mv(1,2)}. Or the order of these candidate points is: {Mv(0,0), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-1,2), Mv(0,2), Mv(1,2)}. The optimal offset MV can then be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value.
  • In an example, the number of candidate points may be 25. With the motion vector (0, 0) as the center, the candidate points are ordered from near to far from the center. The order of these candidate points may be: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(0,2), Mv(-2,0), Mv(0,-2), Mv(2,0), Mv(1,2), Mv(-1,2), Mv(-2,1), Mv(-2,-1), Mv(-1,-2), Mv(1,-2), Mv(2,-1), Mv(2,1), Mv(-2,2), Mv(-2,-2), Mv(2,-2), Mv(2,2)}. The optimal offset MV can then be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value.
  • In an example, the number of candidate points may be 21. With the motion vector (0, 0) as the center, the candidate points are ordered from near to far from the center. The order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(0,2), Mv(-2,0), Mv(0,-2), Mv(2,0), Mv(1,2), Mv(-1,2), Mv(-2,1), Mv(-2,-1), Mv(-1,-2), Mv(1,-2), Mv(2,-1), Mv(2,1)}.
  • In an example, the number of candidate points may be 13. With the motion vector (0, 0) as the center, the candidate points are ordered from near to far from the center. The order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(0,2), Mv(-2,0), Mv(0,-2), Mv(2,0)}. The optimal offset MV can then be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value; for the determination method, refer to the above embodiments.
  • In an example, if the cost value SAD(0, 0) of the first candidate motion vector Mv(0, 0) is less than the threshold dx*dy, the subsequent candidate motion vectors are not tested; that is, the optimal integer-pixel offset of the sub-block is Mv(0, 0). If the cost value of a certain candidate motion vector is 0, the subsequent candidate motion vectors are not tested, and the current candidate motion vector is used as the optimal integer-pixel offset. In both cases, the subsequent sub-pixel offset calculation process is not performed; that is, the target motion vector of the sub-block is obtained directly through the integer-pixel offset.
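The early-exit scan described above can be sketched as follows; the function name and the cost_of callback are assumptions:

```python
def best_integer_offset(candidates, cost_of, dx, dy):
    """Scan candidate offsets in order and keep the smallest cost,
    with the two early exits described above: stop immediately if the
    first candidate Mv(0, 0) costs less than dx*dy, or if any candidate
    reaches cost 0. Ties keep the earlier candidate."""
    best_mv, best_cost = None, None
    for mv in candidates:
        c = cost_of(mv)
        if best_cost is None or c < best_cost:
            best_mv, best_cost = mv, c
        if mv == (0, 0) and c < dx * dy:
            break  # initial position already below the threshold
        if c == 0:
            break  # exact match; skip the remaining candidates
    return best_mv, best_cost

cands = [(0, 0), (-1, 0), (0, -1), (1, 0), (0, 1)]
costs = {(0, 0): 100, (-1, 0): 0, (0, -1): 7, (1, 0): 9, (0, 1): 9}
print(best_integer_offset(cands, costs.get, 8, 8))  # ((-1, 0), 0)
```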
  • Embodiment 20 In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation manner is similar to that of Embodiment 16 and Embodiment 17.
  • In this embodiment, since all candidate motion vectors to be processed are selected at one time, these candidate motion vectors can be processed in parallel to obtain the cost value of each candidate motion vector, thereby reducing computational complexity and improving coding performance.
  • From the (2*SR+1)*(2*SR+1) points centered on the original motion vector, some motion vectors whose offset does not exceed the SR range are selected; the number N of candidate points is greater than or equal to 1 and less than or equal to (2*SR+1)*(2*SR+1). The cost value of the motion vector corresponding to each of these N points is determined; the cost values of these N points are scanned in a certain order, and the motion vector with the smallest cost value is selected as the optimal motion vector. If cost values are equal, the candidate point ranked earlier in the order is selected first. The difference from Embodiment 19 is that the positions of the candidate points in Embodiment 19 are fixed, i.e., independent of the original motion vector, whereas the positions of the candidate points in Embodiment 20 are related to the original motion vector, as described below with several specific examples.
  • In an example, the number of candidate points may be 13. With the motion vector (0, 0) as the center, the candidate points are ordered from near to far from the center. The order of the candidate points in the first layer from the center is independent of the size of the original motion vector, while the order of the candidate points in the second layer from the center is related to the size of the original motion vector. The order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(sign_H*2,0), Mv(sign_H*2,sign_V*1), Mv(0,sign_V*2), Mv(0,sign_V*2)}. The first original motion vector is denoted as MV0, its horizontal component as MV0_Hor, and its vertical component as MV0_Ver. The optimal offset MV can then be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value; for the determination method, refer to the above embodiments.
  • In another example, with the motion vector (0, 0) as the center, the candidate points are ordered from near to far from the center. The order of the candidate points in the first layer from the center is independent of the size of the original motion vector, while the order of the candidate points in the second layer from the center is related to the size of the original motion vector. The order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(sign_H*2,0), Mv(sign_H*2,sign_V*1), Mv(0,sign_V*2), Mv(0,sign_V*2)}. The first original motion vector is denoted as MV0, its horizontal component as MV0_Hor, and its vertical component as MV0_Ver. The optimal offset MV can then be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value; for the determination method, refer to the above embodiments.
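A sketch of the motion-vector-dependent candidate list. The sign rule (non-negative component gives +1, negative gives -1) is an assumption, since the text only names MV0_Hor and MV0_Ver without defining sign_H/sign_V; the repeated Mv(0, sign_V*2) entry from the list is kept once here:

```python
def adaptive_candidates(mv0_hor, mv0_ver):
    """Candidate offsets whose second layer depends on the original
    motion vector. The first layer (center plus distance-1 ring) is
    fixed; sign_H/sign_V steer the outer points. The sign rule used
    here is an assumption, not stated in the text."""
    sign_h = 1 if mv0_hor >= 0 else -1
    sign_v = 1 if mv0_ver >= 0 else -1
    first_layer = [(0, 0), (-1, 0), (0, -1), (1, 0), (0, 1),
                   (-1, 1), (-1, -1), (1, -1), (1, 1)]
    second_layer = [(sign_h * 2, 0), (sign_h * 2, sign_v * 1),
                    (0, sign_v * 2)]
    return first_layer + second_layer

# A leftward/downward MV0 steers the outer candidates accordingly.
print(adaptive_candidates(-3, 5)[-3:])  # [(-2, 0), (-2, 1), (0, 2)]
```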
  • Embodiment 21: The above embodiments involve obtaining, according to the first pixel value of the first reference block and the second pixel value of the second reference block, the first cost value corresponding to the center motion vector and the second cost value corresponding to the edge motion vector, as well as the third cost value corresponding to the first original motion vector and the fourth cost value corresponding to the candidate motion vector. In an example, the first cost value corresponding to the center motion vector, the second cost value corresponding to the edge motion vector, the third cost value corresponding to the first original motion vector, and the fourth cost value corresponding to the candidate motion vector are obtained according to the non-down-sampled first pixel value and the non-down-sampled second pixel value. In another example, a down-sampling operation is performed on the first pixel value and on the second pixel value, and these cost values are obtained according to the down-sampled first pixel value and the down-sampled second pixel value. In either case, the method of determining each cost value is similar.
  • For example, the sub-reference block A1 corresponding to the center motion vector can be copied from the first reference block, and the sub-reference block B1 corresponding to the symmetric motion vector of the center motion vector can be copied from the second reference block; the first pixel value of the sub-reference block A1 and the second pixel value of the sub-reference block B1 are then used to obtain the cost value corresponding to the center motion vector. The sub-reference block A2 corresponding to the edge motion vector can be copied from the first reference block, and the sub-reference block B2 corresponding to the symmetric motion vector of the edge motion vector can be copied from the second reference block; the first pixel value of the sub-reference block A2 and the second pixel value of the sub-reference block B2 are then used to obtain the cost value corresponding to the edge motion vector, and so on. In summary, for each motion vector whose cost value needs to be determined, the sub-reference block corresponding to the motion vector can be obtained from the first reference block, the sub-reference block corresponding to the symmetric motion vector of the motion vector can be obtained from the second reference block, and the pixel values of the two sub-reference blocks are used to obtain the cost value corresponding to the motion vector; this process will not be repeated.
  • Embodiment 22: On the basis of Embodiment 21, the cost value corresponding to the motion vector is obtained according to the non-down-sampled first pixel value (i.e., the non-down-sampled pixel value of the sub-reference block in the first reference block) and the non-down-sampled second pixel value (i.e., the non-down-sampled pixel value of the sub-reference block in the second reference block). Assuming that the sub-reference block in the first reference block is pred0 and the sub-reference block in the second reference block is pred1, the cost value is determined based on the SAD of all pixel values of the sub-reference blocks pred0 and pred1, without vertically down-sampling the pixels of pred0 and pred1. Based on all pixel values of the sub-reference blocks pred0 and pred1, the cost value calculation formula is:
  • cost = Σ_{i=1..H} Σ_{j=1..W} abs(pred0(i, j) - pred1(i, j))
  • In the above formula, cost represents the cost value, W is the width value of the sub-reference block, H is the height value of the sub-reference block, pred0(i, j) represents the pixel value in the i-th row and j-th column of the sub-reference block pred0, pred1(i, j) represents the pixel value in the i-th row and j-th column of the sub-reference block pred1, and abs(x) represents the absolute value of x.
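A minimal sketch of the full (non-down-sampled) SAD cost of Embodiment 22; the function name is hypothetical and blocks are represented as row-major lists of lists:

```python
def sad_full(pred0, pred1):
    """Cost value as the SAD over all pixel values of the two
    sub-reference blocks (no down-sampling):
    cost = sum_i sum_j abs(pred0(i, j) - pred1(i, j))."""
    return sum(abs(a - b)
               for row0, row1 in zip(pred0, pred1)
               for a, b in zip(row0, row1))

p0 = [[10, 12], [14, 16]]
p1 = [[11, 12], [13, 20]]
print(sad_full(p0, p1))  # 1 + 0 + 1 + 4 = 6
```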
  • Embodiment 23 On the basis of Embodiment 21, the first pixel value can be down-sampled, and the second pixel value can be down-sampled; it can be based on the down-sampled first pixel value (that is, in the first reference block). The down-sampled pixel value of the sub-reference block) and the down-sampled second pixel value (that is, the down-sampled pixel value of the sub-reference block in the second reference block) to obtain the cost value corresponding to the motion vector.
  • the cost value is determined based on the SAD of all pixel values of the sub-reference block pred 0 and sub-reference block pred 1 .
  • the pixel values of the sub-reference block pred 0 and the sub-reference block pred 1 are vertically down-sampled by N times (N is an integer greater than 0, and may be 2).
  • the cost value calculation formula is:
  • cost can represent the cost value
  • W can be the width value of the sub-reference block
  • H can be the height value of the sub-reference block
  • N can represent the down-sampling parameter, which is an integer greater than 0, which can be 2
  • pred 0 (1+N(i-1), j) can represent the pixel value of the 1+N(i-1)-th row and the j-th column of the sub-reference block pred 0
  • pred 1 (1+N(i-1), j) can represent the pixel value of the 1+N(i-1)-th row and the j-th column of the sub-reference block pred 1
  • abs(x) can represent the absolute value of x.
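A sketch of the down-sampled cost of Embodiment 23, in which only every N-th row contributes (rows 1, N+1, 2N+1, ... in one-based terms); the row-major list representation is an assumption of this example:

```python
def sad_cost_downsampled(pred0, pred1, n=2):
    # Embodiment 23 cost: SAD with N-times vertical down-sampling.
    # Only rows 0, N, 2N, ... (zero-based) are accumulated.
    return sum(abs(a - b)
               for i in range(0, len(pred0), n)
               for a, b in zip(pred0[i], pred1[i]))
```

With N = 2 and a 4x2 block, only rows 0 and 2 contribute to the sum.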
  • Embodiment 24: On the basis of Embodiment 21, the first pixel value is shifted and down-sampled, and the second pixel value is shifted and down-sampled; the cost value corresponding to the motion vector is obtained according to the operated first pixel value (the shifted and down-sampled pixel value of the sub-reference block in the first reference block) and the operated second pixel value (the shifted and down-sampled pixel value of the sub-reference block in the second reference block).
  • the sub-reference block in the first reference block is pred 0
  • the sub-reference block in the second reference block is pred 1
  • both pred 0 and pred 1 are stored in D bits; that is, each pixel value in pred 0 is stored in D bits, and each pixel value in pred 1 is stored in D bits.
  • the cost value is determined according to the SAD of all pixel values of the sub-reference block pred 0 and the sub-reference block pred 1.
  • the pixel values of the sub-reference block pred 0 and the sub-reference block pred 1 are vertically down-sampled by N times (N is an integer greater than 0, and may be 2). Based on all pixel values of the sub-reference block pred 0 and the sub-reference block pred 1 , the cost value calculation formula is: cost = Σ_{i=1..H/N} Σ_{j=1..W} abs(pred 0 (1+N(i-1), j) - pred 1 (1+N(i-1), j)), where:
  • cost represents the cost value
  • W is the width value of the sub-reference block
  • H is the height value of the sub-reference block
  • N is the down-sampling parameter, which is an integer greater than 0, which can be 2
  • pred 0 (1+N(i-1), j) represents the pixel value of the 1+N(i-1)-th row and the j-th column of the sub-reference block pred 0
  • pred 1 (1+N(i-1), j) represents the pixel value of the 1+N(i-1)-th row and the j-th column of the sub-reference block pred 1
  • abs(x) represents the absolute value of x; it can be seen from the above that only the absolute differences of row 1, row N+1, row 2N+1, and so on are accumulated
  • when D is greater than 8, the cost value calculation formula can be: cost = Σ_{i=1..H/N} Σ_{j=1..W} abs((pred 0 (1+N(i-1), j) >> (D-8)) - (pred 1 (1+N(i-1), j) >> (D-8)))
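The shifted variant of Embodiment 24 can be sketched as follows; using a right shift of (D - 8) for D-bit samples is an assumption of this example, consistent with normalizing the stored values toward 8 bits before the comparison:

```python
def sad_cost_shifted(pred0, pred1, d, n=2):
    # Embodiment 24 cost: each D-bit sample is right-shifted by (D - 8)
    # (no shift when D <= 8), then the N-times vertically down-sampled
    # SAD is accumulated as in Embodiment 23.
    shift = max(d - 8, 0)
    return sum(abs((a >> shift) - (b >> shift))
               for i in range(0, len(pred0), n)
               for a, b in zip(pred0[i], pred1[i]))
```

For 10-bit samples the shift is 2, so stored values 40 and 80 contribute as 10 and 20.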
  • Embodiment 25: In the above embodiments, for each sub-block of the current block, the predicted value of the sub-block is determined according to the first target motion vector and the second target motion vector of the sub-block, and the predicted value of the current block is determined according to the predicted value of each sub-block. For example, based on the first target motion vector and the second target motion vector of the sub-block, reference blocks in the two directions (that is, the third reference block and the fourth reference block, each of which may include three components) can be obtained through interpolation (such as 8-tap interpolation; because the target motion vector may point to a sub-pixel position, interpolation is required). Then, weighting is performed according to the third pixel value of the third reference block and the fourth pixel value of the fourth reference block to obtain the final predicted value (such as the predicted values of the three components).
  • the optimal motion vector is the same as the initial motion vector (that is, the first original motion vector or the second original motion vector)
  • the third reference block corresponding to the sub-block is determined from the first reference frame based on the first target motion vector of the sub-block
  • the fourth reference block corresponding to the sub-block is determined from the second reference frame based on the second target motion vector of the sub-block.
  • the pixel value of the third reference block and the pixel value of the fourth reference block are weighted to obtain the predicted value of the sub-block.
  • a third reference block of size dx*dy is determined from the first reference frame based on the first target motion vector.
  • a reference block with a size of A*B is determined from the first reference frame.
  • the size of A*B is related to the interpolation method, such as A is greater than dx, and B is greater than dy, and there is no restriction on this.
  • a fourth reference block with a size of dx*dy is determined from the second reference frame based on the second target motion vector.
  • a reference block with a size of A*B is determined from the second reference frame.
  • the size of A*B is related to the interpolation mode, such as A is greater than dx and B is greater than dy, and there is no restriction on this.
  • the fifth reference block can be determined from the first reference frame, and the fifth reference block can be extended to obtain the sixth reference block ; Then, based on the first target motion vector of the sub-block, a third reference block corresponding to the sub-block is selected from the sixth reference block. And, the seventh reference block may be determined from the second reference frame, and the seventh reference block may be expanded to obtain the eighth reference block; based on the second target motion vector of the sub-block, the eighth reference block may be selected The fourth reference block corresponding to this sub-block. Then, the pixel value of the third reference block and the pixel value of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
  • a fifth reference block of size dx*dy is determined from the first reference frame based on the first original motion vector. For example, a reference block with a size of A*B is determined from the first reference frame.
  • the size of A*B is related to the interpolation method, such as A is greater than dx, and B is greater than dy, and there is no restriction on this.
  • the fifth reference block is filled up, down, left, and right by copying adjacent values, and the filled reference block is regarded as the sixth reference block.
  • the size of the reference block can be larger than dx*dy. Then, based on the first target motion vector of the sub-block, a third reference block with a size of dx*dy corresponding to the sub-block is selected from the sixth reference block.
  • a seventh reference block of size dx*dy is determined from the second reference frame based on the second original motion vector. For example, a reference block with a size of A*B is determined from the second reference frame.
  • the size of A*B is related to the interpolation mode, such as A is greater than dx and B is greater than dy, and there is no restriction on this.
  • the seventh reference block can be filled up, down, left and right by copying adjacent values, and the filled reference block is used as the eighth reference block.
  • the size of the eighth reference block can be larger than dx*dy. Then, based on the second target motion vector of the sub-block, a fourth reference block with a size of dx*dy corresponding to the sub-block is selected from the eighth reference block.
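The expansion by copying adjacent values (the fifth reference block expanded into the sixth, and the seventh into the eighth) can be sketched as edge replication; the list-of-rows representation is an assumption of this example:

```python
def expand_block(block, sr):
    # Expand a reference block by sr rows/columns on each side by
    # replicating the nearest edge value (corners repeat the corner pixel).
    rows = [[row[0]] * sr + row + [row[-1]] * sr for row in block]
    top = [rows[0][:] for _ in range(sr)]
    bottom = [rows[-1][:] for _ in range(sr)]
    return top + rows + bottom
```

`expand_block([[1, 2], [3, 4]], 1)` yields a 4x4 block whose border pixels repeat the nearest edge values, so the expanded block is larger than the original without any extra reads from the reference frame.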
  • Embodiment 26: After the target motion vectors are obtained, based on the target motion vector of each sub-block, the predicted values in the two directions (that is, the three components of YUV: the predicted value of the third reference block and the predicted value of the fourth reference block) are obtained through an 8-tap interpolation filter and weighted to obtain the final predicted value. Or, based on the target motion vector of each sub-block, a bilinear interpolation filter (no longer an 8-tap interpolation filter) is used to obtain the predicted values in the two directions (that is, the three components of YUV: the predicted value of the third reference block and the predicted value of the fourth reference block), which are weighted to obtain the final predicted value.
  • a bilinear interpolation filter (here no longer an 8-tap interpolation filter)
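A generic sketch of bilinear interpolation at a fractional position, the 2-tap filtering used here in contrast to the 8-tap filter; floating-point arithmetic is used for clarity, whereas a codec implementation would use fixed-point filter coefficients:

```python
def bilinear_sample(ref, x, y):
    # Sample ref (a list of rows) at fractional position (x, y) by
    # weighting the four surrounding integer-position pixels.
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = ref[y0][x0] * (1 - fx) + ref[y0][x0 + 1] * fx
    bottom = ref[y0 + 1][x0] * (1 - fx) + ref[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy
```

For example, sampling halfway between four pixels returns their average, which is why bilinear filtering is cheap enough for the search stage while the final prediction uses the longer 8-tap filter.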
  • Embodiment 27 After obtaining the predicted values in the two directions, the final predicted value is obtained by means of weighted average (that is, the weights of the predicted values in the two directions are the same). Or, after obtaining the predicted values in the two directions, the final predicted value is obtained by weighted average, and the weights of the two predicted values may be different.
  • the weight ratio of the two predicted values can be 1:2, 1:3, 2:1, etc.
  • the weight table can include 1:2, 1:3, 2:1 and other weight ratios.
  • the encoding end can determine the cost value of each weight ratio and select the weight ratio with the smallest cost value. In this way, the encoding end can obtain the final predicted value through a weighted average based on the weight ratio with the smallest cost value.
  • the encoded bit stream carries the index value of the weight ratio in the weight table.
  • the decoding end obtains the weight ratio corresponding to the index value from the weight table by analyzing the index value of the encoded bit stream, and obtains the final predicted value through weighted average based on the weight ratio.
  • the weight table may include but is not limited to ⁇ -2, 3, 4, 5, 10 ⁇ .
  • the sum of the two weights may be 8.
  • the weight may be a negative value, as long as the sum of the two weights is 8.
  • the weight "-2" is a negative value: the weight of one predicted value is -2 and the weight of the other predicted value is 10, so that the sum of the two weights is 8; the final predicted value = (predicted value 1 * (-2) + predicted value 2 * (8-(-2))) / 8.
  • the weight "10" means that the weight of one predicted value is 10 and the weight of the other predicted value is -2 (the sum of the two weights is 8); the final predicted value = (predicted value 1 * 10 + predicted value 2 * (-2)) / 8.
  • the weight "3" means that the weight of one predicted value is 3 and the weight of the other predicted value is 5 (the sum of the two weights is 8); the final predicted value = (predicted value 1 * 3 + predicted value 2 * 5) / 8.
  • the weight "5" means that the weight of one predicted value is 5 and the weight of the other predicted value is 3 (the sum of the two weights is 8); the final predicted value = (predicted value 1 * 5 + predicted value 2 * 3) / 8.
  • the weight "4" means that the weight of one predicted value is 4 and the weight of the other predicted value is 4 (the sum of the two weights is 8); the final predicted value = (predicted value 1 * 4 + predicted value 2 * 4) / 8.
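The weighted combination of Embodiment 27, with a weight pair summing to 8 (one weight possibly negative), can be sketched as follows; rounding to nearest before the division by 8 is an assumption of this example:

```python
def weighted_prediction(pred1, pred2, w1):
    # Combine the two directional predictions pixel by pixel; the second
    # weight is 8 - w1, so the pair always sums to 8 (w1 may be negative,
    # e.g. -2 paired with 10).
    w2 = 8 - w1
    return [[(a * w1 + b * w2 + 4) // 8 for a, b in zip(r1, r2)]
            for r1, r2 in zip(pred1, pred2)]
```

With w1 = 4 this reduces to the plain weighted average of the two predictions; with w1 = -2 the second prediction is over-weighted and the first subtracted.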
  • the third pixel value of the third reference block and the fourth pixel value of the fourth reference block can be obtained; then, the third pixel value of the third reference block and the fourth pixel value of the fourth reference block are weighted to obtain the final predicted value.
  • the third pixel value, the first weight corresponding to the third pixel value, the fourth pixel value, and the second weight corresponding to the fourth pixel value are weighted to obtain the predicted value of the sub-block. If the final predicted value is obtained by means of weighted average (that is, the two weights are the same), the first weight and the second weight are the same.
  • Embodiment 28: In the above embodiments, the first target motion vector and the second target motion vector of each sub-block of the current block can be saved; or the first original motion vector and the second original motion vector of each sub-block of the current block can be saved; or the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block of the current block can all be saved.
  • these motion vectors can be used as a reference for encoding/decoding of subsequent blocks.
  • the first target motion vector and the second target motion vector of each sub-block of the current block are saved as an example.
  • the first target motion vector and the second target motion vector are used for loop filtering of the current frame; the first target motion vector and the second target motion vector are used for the temporal reference of subsequent frames; and/or, the first target motion vector and the second target motion vector are used for the spatial reference of the current frame.
  • the first target motion vector and the second target motion vector of each sub-block of the current block can be used for motion compensation of the current block, and can also be used for time-domain reference of subsequent frames.
  • first target motion vector and the second target motion vector of each sub-block of the current block can be used for motion compensation of the current block, can also be used for the loop filtering process of the current block, and can also be used for the subsequent frame Time domain reference.
  • the first target motion vector and the second target motion vector of each sub-block of the current block can be used for motion compensation of the current block, for the loop filtering process of the current block, for the temporal reference of subsequent frames, and also for the spatial reference of the current frame, as described below.
  • the first target motion vector and the second target motion vector of each sub-block of the current block may be used for the spatial reference of blocks in certain LCUs (Largest Coding Units) in the spatial domain. Since the codec order is from top to bottom and from left to right, the motion vector of the current block can be referenced by other blocks in the current LCU, and can also be referenced by blocks in subsequent adjacent LCUs. Since obtaining the target motion vector requires a large amount of calculation, a subsequent block that refers to the target motion vector of the current block will have to wait for a long time. In order to avoid the delay caused by excessive waiting, only a few spatial neighboring blocks are allowed to refer to the target motion vector of the current block, while other blocks refer to the original motion vector of the current block.
  • LCU: Largest Coding Unit
  • these few blocks include the sub-blocks in the lower LCU and the lower-right LCU located below the current LCU; the sub-blocks located in the right LCU and the left LCU cannot refer to the target motion vector of the current block.
  • Embodiment 29 The following describes the adjustment process of the motion vector in conjunction with a specific example.
  • the specific steps of the motion vector adjustment can be as follows.
  • the "copy” below shows that it can be obtained without interpolation. If the MV (ie motion vector) is an integer pixel offset, it can be directly copied from the reference frame, otherwise it needs to be obtained by interpolation.
  • Step e1 If the motion vector adjustment mode is activated for the current block, perform the following process.
  • Step e2 Prepare reference pixel values (assuming that the width of the current block is W and the height is H).
  • First, based on the original motion vectors (the original motion vector of list0 is denoted Org_MV0, and the original motion vector of list1 is denoted Org_MV1), copy two whole-pixel blocks of the three components, each of size (W+FS-1)*(H+FS-1), from the corresponding positions of the corresponding reference frames.
  • Then, on the basis of the (W+FS-1)*(H+FS-1) whole-pixel blocks, each three-component whole-pixel block is expanded by SR rows/columns up, down, left, and right; the expanded areas, of size (W+FS-1+2*SR)*(H+FS-1+2*SR), are whole-pixel blocks of the three components denoted Pred_Inter0 and Pred_Inter1, as shown in FIG. 8.
  • the size of the inner black area is the size of the current block
  • the outer white area is the additional reference pixels required for the 8-tap filter interpolation of the original motion vector
  • the outer black area contains the additional reference pixels required for the 8-tap filter interpolation of the target motion vector.
  • for the inner black area (of size W*H) and the white area, the pixel values are obtained from the reference frame.
  • the pixel values of the outer black area do not need to be obtained from the reference frame; they can be obtained by copying adjacent pixel values.
  • For example: the W+FS-1 pixel values of the first row of the white area are copied to the first SR rows of the outer black area, and the W+FS-1 pixel values of the last row of the white area are copied to the last SR rows of the outer black area. Then the H+FS-1 pixel values of the first column of the white area, together with the pixel values of the outer black area already obtained above and below, are copied to the first SR columns of the outer black area; and the H+FS-1 pixel values of the last column of the white area, together with the pixel values of the outer black area already obtained above and below, are copied to the last SR columns of the outer black area.
  • Alternatively: the H+FS-1 pixel values of the first column of the white area are copied to the first SR columns of the outer black area, and the H+FS-1 pixel values of the last column of the white area are copied to the last SR columns of the outer black area. Then the W+FS-1 pixel values of the first row of the white area, together with the pixel values of the outer black area already obtained on the left and right, are copied to the first SR rows of the outer black area; and the W+FS-1 pixel values of the last row of the white area, together with the pixel values of the outer black area already obtained on the left and right, are copied to the last SR rows of the outer black area.
  • For the luminance component (because the luminance component is used to calculate the cost value in the subsequent search process), based on the two whole-pixel reference blocks with an area of (W+FS-1)*(H+FS-1), two initial reference prediction blocks of size (W+2*SR)*(H+2*SR) (denoted Pred_Bilinear0 and Pred_Bilinear1) are obtained through bilinear interpolation.
  • FS is the number of filter taps; the default is 8.
  • SR is the search range, that is, the maximum horizontal/vertical component difference between the target motion vector and the original motion vector; the default is 2.
  • Pred_Bilinear0 and Pred_Bilinear1 are used in step e3.
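For concreteness, the buffer sizes prepared in step e2 for a W*H block follow directly from the defaults FS = 8 and SR = 2 (the W = H = 16 values below are only an illustration, not from the text):

```python
FS, SR = 8, 2   # filter taps and search range (defaults from the text)
W, H = 16, 16   # example block size (assumption of this sketch)

whole_pixel = (W + FS - 1, H + FS - 1)                   # copied from the reference frame
pred_inter = (W + FS - 1 + 2 * SR, H + FS - 1 + 2 * SR)  # after SR-wide expansion
pred_bilinear = (W + 2 * SR, H + 2 * SR)                 # bilinear initial prediction (luma)

print(whole_pixel, pred_inter, pred_bilinear)
```

For a 16x16 block this gives a 23x23 copied block, a 27x27 expanded Pred_Inter block, and a 20x20 Pred_Bilinear search buffer.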
  • Step e3: Obtain a target motion vector for each dx*dy sub-block of the current block (the target motion vectors in the two directions are denoted Refined_MV0 and Refined_MV1, respectively).
  • Step e31: Perform SR iterations to obtain the optimal integer-pixel offset of the whole-pixel MV point, denoted IntegerDeltaMV. Initialize IntegerDeltaMV to (0, 0), and perform the following process in each iteration:
  • Step e311: Set deltaMV to (0, 0). In the first iteration, based on the original motion vectors, copy two prediction value blocks from the reference pixels Pred_Bilinear0/1 (in fact, the most central W*H blocks of Pred_Bilinear0/1); based on these two prediction value blocks, obtain the initial cost value, that is, the SAD after 2-times vertical down-sampling of the prediction value blocks in the two directions. If the initial cost value is less than dx*dy (where dx and dy are the width and height of the current sub-block), skip the subsequent search process directly, execute step e32, and set notZeroCost to false.
  • Step e312: Centering on the above initial point, obtain 24 offset MVs (all 24 are called MVOffset) in the following order, and calculate and compare their cost values: Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2).
  • For example, based on a certain MVOffset, two prediction value blocks are obtained in the reference pixels Pred_Bilinear0/1 through that MVOffset (namely, a W*H block in Pred_Bilinear0 whose center position is offset by MVOffset, and a W*H block in Pred_Bilinear1 whose center position is offset by -MVOffset, opposite to list0); the down-sampled SAD of these two blocks is calculated as the cost value of the MVOffset. The MVOffset with the smallest cost value is kept (stored in deltaMV).
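The scan of steps e311 and e312 can be sketched as follows; `cost_fn` is a hypothetical callable standing in for the down-sampled SAD of the two prediction blocks at a given offset, not a function from the text:

```python
def integer_search(cost_fn, sr=2):
    # Evaluate the 24 offsets around the initial point (row by row, as in
    # step e312, skipping (0, 0)) and keep the offset with the smallest cost.
    offsets = [(dx, dy)
               for dy in range(-sr, sr + 1)
               for dx in range(-sr, sr + 1)
               if (dx, dy) != (0, 0)]
    best, best_cost = (0, 0), cost_fn(0, 0)  # initial cost at the center
    for dx, dy in offsets:
        c = cost_fn(dx, dy)
        if c < best_cost:
            best, best_cost = (dx, dy), c
    return best, best_cost
```

Because the center (0, 0) is evaluated first, the search can terminate early (as in step e313) when no offset improves on the initial cost.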
  • Step e313: After one iteration, if the optimal MV is still the initial MV or the minimum cost value is 0, the next iteration of the search process is not performed; step e32 is executed, and notZeroCost is set to false.
  • Step e32: The optimal sub-pixel offset MV can be obtained with the optimal whole-pixel MV point from step e31 as the center; it is denoted SPMV (that is, subMV). Initialize SPMV to (0, 0), and then perform the following process:
  • Step e321: The subsequent processing is performed only when notZeroCost is not false and deltaMV is (0, 0); otherwise, the original motion vector is directly adjusted by IntegerDeltaMV.
  • Step e4: Based on the target motion vector of each sub-block, perform 8-tap interpolation to obtain the predicted values of the three components in the two directions, and weight them to obtain the final predicted value (such as the predicted values of the three components). For example, based on the target motion vectors Refined_MV0 and Refined_MV1 of each sub-block, the corresponding prediction blocks are obtained through interpolation in Pred_Inter0/1 prepared in step e2 (the motion vector may point to sub-pixels, so interpolation is required to obtain the corresponding pixel block).
  • Step e5: The target motion vector is used for the motion compensation of the current block (that is, to obtain the predicted value of each sub-block and thereby the predicted value of the current block) and for the temporal reference of subsequent frames; it is not used for the loop filtering and spatial reference of the current frame.
  • Embodiment 30: Different from Embodiment 29, the reference pixel preparation process is moved into the processing of each dx*dy sub-block.
  • When preparing reference pixels, only pixel blocks of size (dx+(filtersize-1))*(dy+(filtersize-1)) are prepared. If the optimal motion vector obtained by the search is not the original motion vector, the reference pixels are expanded; otherwise, they are not expanded.
  • For each dx*dy sub-block of the current block, the target motion vector is obtained separately, motion compensation is performed based on the target motion vector, and weighting is applied to obtain the final predicted value. The following process is performed for each dx*dy sub-block of the current block:
  • Step f1 If the motion vector adjustment mode is activated for the current block, perform the following process.
  • Step f2: Prepare the whole-pixel blocks used in step f3. For example, for the luminance component only: based on the original motion vectors (the original motion vector of list0 is denoted Org_MV0, and the original motion vector of list1 is denoted Org_MV1), obtain two whole-pixel blocks with an area of (dx+(filtersize-1))*(dy+(filtersize-1)) from the corresponding positions of the corresponding reference frames.
  • filtersize may be the number of filter taps, and the default value is 8.
  • Step f3 For each dx*dy sub-block of the current block, a target motion vector is obtained (the target motion vectors in the two directions are respectively denoted as Refined_MV0 and Refined_MV1).
  • For the implementation of step f3, refer to step e3; it is not described in detail here.
  • the first motion compensation is performed based on the original motion vector.
  • an initial prediction value of size (dx+2*IterNum)*(dy+2*IterNum) is obtained based on bilinear interpolation. IterNum is 2 by default; IterNum can be the search range SR, that is, the maximum horizontal/vertical component difference between the target motion vector and the original motion vector.
  • the initial prediction value of the original motion vector obtained above is stored in m_cYuvPredTempL0/1.
  • the optimal offset MV is obtained and denoted BestMVoffset, where BestMVoffset = IntegerDeltaMV + SPMV.
  • Step f4: If the optimal offset MV is (0, 0), the following steps are not performed (that is, no additional expansion is performed when the original motion vector is used). If the optimal offset MV is not (0, 0), the whole-pixel blocks are re-acquired (because the above steps did not expand the reference pixels, the reference pixels required after the offset exceed the range of the reference pixels obtained in the above steps), and then the following steps are executed:
  • the three components are filled separately (the fill width is 2 for the luminance component and 1 for the chrominance components in 4:2:0 format).
  • the integer pixel values available around the current sub-block (within the current CU block) are not used here.
  • Step f5: Based on the target motion vector of each sub-block and the two reference pixel blocks (obtained in step f4), perform 8-tap interpolation to obtain the predicted values of the three components in the two directions, and weight them to obtain the final predicted values (such as the predicted values of the three components).
  • Embodiment 31 The above embodiments can be implemented individually or in any combination, which is not limited.
  • Embodiment 4 can be implemented in combination with Embodiment 2; Embodiment 4 can be implemented in combination with Embodiment 3.
  • Embodiment 5 can be implemented in combination with Embodiment 2; Embodiment 5 can be implemented in combination with Embodiments 2 and 4; Embodiment 5 can be implemented in combination with Embodiment 3; Embodiment 5 can be implemented in combination with Embodiments 3 and 4.
  • Embodiment 6 can be implemented separately, and Embodiment 7 can be implemented separately. Embodiment 8 can be implemented in combination with Embodiment 7; Embodiment 9 can be implemented in combination with Embodiment 7; Embodiment 10 can be implemented in combination with Embodiment 7; Embodiment 11 can be implemented in combination with Embodiment 7; Embodiment 12 can be implemented in combination with Embodiment 7; Embodiment 13 can be implemented in combination with Embodiment 7; Embodiment 14 can be implemented in combination with Embodiment 7; Embodiment 15 can be implemented in combination with Embodiment 7.
  • Embodiment 16 can be implemented separately, and Embodiment 17 can be implemented separately. Embodiment 18 can be implemented in combination with Embodiment 17; Embodiment 19 can be implemented in combination with Embodiment 17; Embodiment 20 can be implemented in combination with Embodiment 17.
  • Embodiment 21 can be implemented in combination with Embodiment 6; Embodiment 21 can be implemented in combination with Embodiment 16; Embodiment 21 can be implemented in combination with Embodiment 7; Embodiment 21 can be implemented in combination with Embodiment 17. Embodiment 22 can be implemented in combination with Embodiment 21; Embodiment 23 can be implemented in combination with Embodiment 21; Embodiment 24 can be implemented in combination with Embodiment 21.
  • Embodiment 25 can be implemented in combination with Embodiment 2; Embodiment 25 can be implemented in combination with Embodiments 2 and 4; Embodiment 25 can be implemented in combination with Embodiment 3; Embodiment 25 can be implemented in combination with Embodiments 3 and 4.
  • Embodiment 26 can be implemented in combination with Embodiment 25; Embodiment 27 can be implemented in combination with Embodiment 25.
  • Embodiment 28 can be implemented in combination with Embodiment 2; Embodiment 28 can be implemented in combination with Embodiments 2 and 4; Embodiment 28 can be implemented in combination with Embodiment 3; Embodiment 28 can be implemented in combination with Embodiments 3 and 4.
  • Embodiment 29 can be implemented alone, and Embodiment 29 can be implemented in combination with Embodiment 4.
  • Embodiment 30 can be implemented separately, and Embodiment 30 can be implemented in combination with Embodiment 4.
  • All the embodiments involved in this application can be implemented individually or in combination, which will not be described in detail.
  • an embodiment of the application also proposes an encoding and decoding device applied to the encoding end or the decoding end.
  • As shown in FIG. 9A, which is a structural diagram of the device, the device includes:
  • the determining module 911 is configured to determine to start the motion vector adjustment mode for the current block if the following conditions are all met:
  • the control information is to allow the current block to use the motion vector adjustment mode
  • the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display order of the two reference frames is one before and one after the current frame, and the distances between the two reference frames and the current frame are the same;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames
  • the width, height and area of the current block are all within a limited range
  • the size of the two reference frames of the current block is the same as the size of the current frame
  • the motion compensation module 912 is configured to perform motion compensation on the current block if it is determined to start the motion vector adjustment mode for the current block.
  • the motion compensation module 912 is specifically configured to: for each of the at least one sub-block included in the current block:
  • according to the first pixel value of the first reference block and the second pixel value of the second reference block, adjust the first original motion vector and the second original motion vector to obtain a first target motion vector corresponding to the first original motion vector and a second target motion vector corresponding to the second original motion vector; determine the predicted value of the sub-block according to the first target motion vector and the second target motion vector;
  • the predicted value of the current block is determined according to the predicted value of each sub-block.
  • the determining module 911 is further configured to: if any one of the following conditions is not met, determine not to start the motion vector adjustment mode for the current block: the control information is to allow the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
  • the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display order of the two reference frames is one before and one after the current frame, and the distances between the two reference frames and the current frame are the same;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames
  • the width, height and area of the current block are all within a limited range
  • the size of the two reference frames of the current block is the same as the size of the current frame.
  • the control information for allowing the current block to use the motion vector adjustment mode includes: sequence-level control information for allowing the current block to use the motion vector adjustment mode; and/or frame-level control information for allowing the current block to use the motion vector adjustment mode.
  • the width, height and area of the current block being all within a limited range includes: the width is greater than or equal to a first threshold, the height is greater than or equal to a second threshold, and the area is greater than or equal to a third threshold; or, the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than a fourth threshold; wherein the third threshold is greater than the fourth threshold.
  • the first threshold is 8, the second threshold is 8, the third threshold is 128, and the fourth threshold is 64.
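As a concrete illustration of the size gating above, here is a minimal sketch; the function name and default parameters are illustrative (not from any codec API), and the defaults follow the 8/8/128 variant:

```python
# Sketch of the block-size gating check described above.
# Thresholds default to the 8/8/128 variant given in the text.
def size_allows_mv_adjustment(width, height,
                              min_w=8, min_h=8, min_area=128):
    """Return True if width, height and area are all within the limited range."""
    return width >= min_w and height >= min_h and width * height >= min_area
```

With the alternative variant (area strictly greater than 64), only `min_area` and the comparison would change.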
  • when the motion compensation module 912 determines the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block, it is specifically configured to:
  • determine the first reference block corresponding to the sub-block from the first reference frame based on the first original motion vector of the sub-block; the pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or by copying the pixel values of adjacent pixels in the first reference block;
  • determine the second reference block corresponding to the sub-block from the second reference frame based on the second original motion vector of the sub-block; the pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or by copying the pixel values of adjacent pixels in the second reference block.
  • when the motion compensation module 912 adjusts the first original motion vector and the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector, it is specifically configured to:
  • taking an initial motion vector as the center, select part or all of the motion vectors from the motion vectors surrounding, and including, the initial motion vector, and determine the selected motion vectors as candidate motion vectors; wherein the initial motion vector is the first original motion vector or the second original motion vector;
  • according to the first pixel value of the first reference block and the second pixel value of the second reference block, select one motion vector from the initial motion vector and the candidate motion vectors as the optimal motion vector;
  • adjust the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjust the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
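The candidate selection step above (centered on the initial motion vector) can be sketched as follows; the search range of 2 integer pixels is an assumption for illustration:

```python
def candidate_offsets(search_range=2):
    """All integer offsets within +/-search_range of the initial motion vector,
    including the zero offset (the initial motion vector itself)."""
    return [(dx, dy)
            for dy in range(-search_range, search_range + 1)
            for dx in range(-search_range, search_range + 1)]
```

Selecting "part" of the surrounding vectors would correspond to filtering this list (e.g., keeping only a cross- or diamond-shaped subset).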
  • when the motion compensation module 912 adjusts the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusts the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, it is specifically configured to:
  • determine, according to the optimal motion vector, a first integer-pixel motion vector adjustment value, a second integer-pixel motion vector adjustment value, a first sub-pixel motion vector adjustment value and a second sub-pixel motion vector adjustment value of the sub-block;
  • adjust the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block; and adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
  • when the motion compensation module 912 determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector, it is specifically configured to:
  • determine a third reference block corresponding to the sub-block from the first reference frame based on the first target motion vector of the sub-block, and determine a fourth reference block corresponding to the sub-block from the second reference frame based on the second target motion vector of the sub-block;
  • weight the pixel values of the third reference block and the pixel values of the fourth reference block to obtain the predicted value of the sub-block.
  • when the motion compensation module 912 weights the pixel values of the third reference block and the pixel values of the fourth reference block to obtain the predicted value of the sub-block, it is specifically configured to:
  • perform weighting on the pixel values of the third reference block, a first weight corresponding to the pixel values of the third reference block, the pixel values of the fourth reference block, and a second weight corresponding to the pixel values of the fourth reference block, to obtain the predicted value of the sub-block; wherein the first weight is the same as the second weight.
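The equal-weight case described above amounts to a per-pixel average with rounding; a minimal sketch (names and the rounding convention are illustrative):

```python
def weighted_prediction(p0, p1, w0=1, w1=1):
    """Per-pixel weighted average of two reference-block pixel lists; with
    w0 == w1 this is the equal-weight case described in the text."""
    return [(w0 * a + w1 * b + (w0 + w1) // 2) // (w0 + w1)
            for a, b in zip(p0, p1)]
```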
  • For the decoding-side device provided in an embodiment of this application, a schematic diagram of its hardware architecture is shown in FIG. 9B. The device includes a processor 921 and a machine-readable storage medium 922, where the machine-readable storage medium 922 stores machine-executable instructions that can be executed by the processor 921; the processor 921 is configured to execute the machine-executable instructions to implement the method disclosed in the above examples of this application.
  • the processor is configured to execute the machine-executable instructions to implement the following steps:
  • if all of the following conditions are met, determining to start the motion vector adjustment mode for the current block:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
  • the predicted value of the current block is obtained by weighting the reference blocks from two reference frames, the display orders of the two reference frames are located respectively before and after the current frame, and the distances between the two reference frames and the current frame are the same;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height and area of the current block are all within a limited range;
  • the sizes of the two reference frames of the current block are the same as the size of the current frame;
  • if it is determined to start the motion vector adjustment mode for the current block, motion compensation is performed on the current block.
  • For the encoding-side device provided in an embodiment of this application, a schematic diagram of its hardware architecture is shown in FIG. 9C. The device includes a processor 931 and a machine-readable storage medium 932, where the machine-readable storage medium 932 stores machine-executable instructions that can be executed by the processor 931; the processor 931 is configured to execute the machine-executable instructions to implement the method disclosed in the above examples of this application.
  • the processor is configured to execute the machine-executable instructions to implement the following steps:
  • if all of the following conditions are met, determining to start the motion vector adjustment mode for the current block:
  • the control information allows the current block to use the motion vector adjustment mode;
  • the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
  • the predicted value of the current block is obtained by weighting the reference blocks from two reference frames, the display orders of the two reference frames are located respectively before and after the current frame, and the distances between the two reference frames and the current frame are the same;
  • the weights of the two reference frames of the current block are the same;
  • the two reference frames of the current block are both short-term reference frames;
  • the width, height and area of the current block are all within a limited range;
  • the sizes of the two reference frames of the current block are the same as the size of the current frame;
  • if it is determined to start the motion vector adjustment mode for the current block, motion compensation is performed on the current block.
  • an embodiment of this application also provides a machine-readable storage medium having a number of computer instructions stored on it; when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of this application can be implemented.
  • the aforementioned machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information, such as executable instructions, data, and so on.
  • the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disk (such as a CD or DVD), a similar storage medium, or a combination thereof.
  • the embodiments of the present application also provide a computer program product, the computer program product includes computer instructions, when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of this application can be implemented .
  • an embodiment of the present application also provides an encoding and decoding system.
  • the encoding and decoding system includes a processor and a machine-readable storage medium, and the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; when the machine-executable instructions are executed by the processor, the encoding and decoding methods disclosed in the above examples of this application can be implemented.
  • a typical implementation device is a computer.
  • the specific form of the computer can be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email receiving and sending device, and a game control A console, a tablet computer, a wearable device, or a combination of any of these devices.
  • For convenience of description, the above apparatus is described by dividing its functions into various units. Of course, when implementing this application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
  • the embodiments of the present application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present application may adopt the form of computer program products implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • This application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of this application.
  • each process and/or block in the flowchart and/or block diagram, and the combination of processes and/or blocks in the flowchart and/or block diagram can be realized by computer program instructions.
  • These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to generate a machine, so that an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram is generated by the instructions executed by the processor of the computer or other programmable data processing equipment.
  • These computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

This application provides an encoding and decoding method, apparatus, and device, including: determining that the motion vector adjustment mode is started for the current block if all of the following conditions are met: the control information allows the current block to use the motion vector adjustment mode; the prediction mode of the current block is the normal merge mode, or the prediction mode of the current block is a merge mode or a skip mode and is not a mode other than the normal merge mode; the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame; the weighting weights of the two reference frames are the same; both reference frames are short-term reference frames; the width, height, and area of the current block are all within limited ranges; and the sizes of both reference frames are the same as the size of the current frame. If the motion vector adjustment mode is started for the current block, motion compensation is performed on the current block. This application can improve coding performance.

Description

Encoding and decoding method, apparatus, and device — Technical Field
This application relates to the field of encoding and decoding technologies, and in particular to an encoding and decoding method, an apparatus, and a device.
Background
To save space, video images are encoded before transmission. A complete video encoding method may include processes such as prediction, transform, quantization, entropy coding, and filtering. Predictive coding includes intra coding and inter coding. Inter coding exploits the temporal correlation of video, using pixels of neighbouring encoded images to predict the pixels of the current image so as to effectively remove temporal redundancy. In inter coding, a motion vector (MV) can be used to represent the relative displacement between the current block of the current frame and a reference block of a reference frame. For example, current frame A and reference frame B have strong temporal correlation; when current block A1 of frame A needs to be transmitted, a motion search can be performed in reference frame B to find the block B1 that best matches A1 (i.e., the reference block), and the relative displacement between A1 and B1 — which is the motion vector of A1 — is determined. The encoding side can send the motion vector to the decoding side instead of sending A1 itself, and the decoding side can obtain A1 from the motion vector and reference block B1. Since the motion vector occupies fewer bits than A1, a large number of bits can be saved.
In the related art, when the current block is a unidirectional block, after the motion vector of the current block (hereinafter called the original motion vector) is obtained, the original motion vector can be adjusted and encoding/decoding can be performed based on the adjusted motion vector, improving coding performance. However, when the current block is a bidirectional block, there is currently no reasonable solution for how to adjust the first original motion vector and the second original motion vector after they are obtained. That is, in the bidirectional-block scenario there are problems such as low prediction quality and prediction errors, resulting in poor coding performance.
Summary
This application provides an encoding and decoding method, apparatus, and device, which can improve coding performance.
This application provides an encoding and decoding method, the method including:
determining that the motion vector adjustment mode is started for the current block if all of the following conditions are met:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are the same;
both reference frames of the current block are short-term reference frames;
the width, height, and area of the current block are all within the limited ranges;
the sizes of both reference frames of the current block are the same as the size of the current frame;
performing motion compensation on the current block if it is determined that the motion vector adjustment mode is started for the current block.
This application provides an encoding and decoding apparatus, the apparatus including:
a determining module, configured to determine that the motion vector adjustment mode is started for the current block if all of the following conditions are met:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are the same;
both reference frames of the current block are short-term reference frames;
the width, height, and area of the current block are all within the limited ranges;
the sizes of both reference frames of the current block are the same as the size of the current frame;
a motion compensation module, configured to perform motion compensation on the current block if it is determined that the motion vector adjustment mode is started for the current block.
This application provides an encoding-side device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
the processor is configured to execute the machine-executable instructions to implement the following steps:
determining that the motion vector adjustment mode is started for the current block if all of the following conditions are met:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are the same;
both reference frames of the current block are short-term reference frames;
the width, height, and area of the current block are all within the limited ranges;
the sizes of both reference frames of the current block are the same as the size of the current frame;
performing motion compensation on the current block if it is determined that the motion vector adjustment mode is started for the current block.
This application provides a decoding-side device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
the processor is configured to execute the machine-executable instructions to implement the following steps:
determining that the motion vector adjustment mode is started for the current block if all of the following conditions are met:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are the same;
both reference frames of the current block are short-term reference frames;
the width, height, and area of the current block are all within the limited ranges;
the sizes of both reference frames of the current block are the same as the size of the current frame;
performing motion compensation on the current block if it is determined that the motion vector adjustment mode is started for the current block.
As can be seen from the above technical solutions, in the embodiments of this application, if it is determined that the motion vector adjustment mode is started for the current block, the first target motion vector and the second target motion vector are obtained from the first original motion vector and the second original motion vector, and the predicted value is determined according to the first and second target motion vectors rather than the first and second original motion vectors. This solves problems such as low prediction quality and prediction errors, and improves coding performance and coding efficiency.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of interpolation in an embodiment of this application;
FIG. 1B is a schematic diagram of a video encoding framework in an embodiment of this application;
FIG. 2 is a flowchart of an encoding and decoding method in an embodiment of this application;
FIG. 3 is a flowchart of an encoding and decoding method in an embodiment of this application;
FIG. 4 is a flowchart of an encoding and decoding method in an embodiment of this application;
FIG. 5 is a schematic diagram of a reference block obtained in an embodiment of this application;
FIG. 6 is a schematic diagram of motion vector iteration in an embodiment of this application;
FIGS. 7A-7G are schematic diagrams of the order of candidate points in an embodiment of this application;
FIG. 8 is a schematic diagram of extending a reference block in an embodiment of this application;
FIG. 9A is a structural diagram of an encoding and decoding apparatus in an embodiment of this application;
FIG. 9B is a hardware structure diagram of a decoding-side device in an embodiment of this application;
FIG. 9C is a hardware structure diagram of an encoding-side device in an embodiment of this application.
Detailed Description
The terms used in the embodiments of this application are only for the purpose of describing particular embodiments and are not intended to limit the embodiments. The singular forms "a", "said", and "the" used in the embodiments and the claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used herein refers to any or all possible combinations containing one or more of the associated listed items. It should be understood that although the terms first, second, third, etc. may be used in the embodiments of this application to describe various kinds of information, such information should not be limited to these terms; these terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of this application, first information may also be called second information and, similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
The embodiments of this application propose an encoding and decoding method, apparatus, and device, which may involve the following concepts:
Intra prediction and inter prediction: intra prediction exploits the spatial correlation of video, using pixels of already-encoded blocks of the current image to predict the current pixels so as to remove spatial redundancy. Inter prediction exploits the temporal correlation of video; since a video sequence usually has strong temporal correlation, using pixels of neighbouring encoded images to predict the pixels of the current image can effectively remove temporal redundancy. The inter prediction parts of the main video coding standards all adopt block-based motion compensation, whose main principle is to find, for each pixel block of the current image, a best matching block in a previously encoded image — a process called motion estimation.
Motion vector (MV): in inter coding, a motion vector is used to represent the relative displacement between the current block and the best matching block in its reference image. Each partitioned block has a corresponding motion vector to be transmitted to the decoding side. If the motion vector of each block were encoded and transmitted independently, especially when partitioning into small blocks, a considerable number of bits would be consumed. To reduce the number of bits used for encoding motion vectors, the spatial correlation between neighbouring image blocks is exploited: the motion vector of the current block is predicted from the motion vectors of neighbouring encoded blocks, and then the prediction difference is encoded, effectively reducing the number of bits representing motion vectors. When encoding the motion vector of the current block, the motion vectors of neighbouring encoded blocks are used to predict it, and the difference (MVD, Motion Vector Difference) between the motion vector prediction (MVP, Motion Vector Prediction) and the true estimate of the motion vector is encoded, effectively reducing the number of coded bits.
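The MVP/MVD scheme above can be sketched in a few lines; representing motion vectors as `(x, y)` tuples is an assumption for illustration:

```python
# Only the difference MVD = MV - MVP is encoded; the decoder reverses the step.
def mvd(mv, mvp):
    """Motion vector difference between the true MV and its prediction."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def reconstruct_mv(mvp, mvd_val):
    """Decoder-side reconstruction: MV = MVP + MVD."""
    return (mvp[0] + mvd_val[0], mvp[1] + mvd_val[1])
```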
Motion information: since a motion vector represents the positional offset between the current block and a certain reference block, index information of the reference frame image is needed in addition to the motion vector in order to accurately obtain the information pointing to the image block, indicating which reference frame image is used. For the current frame image, a reference frame image list can be established, and the reference frame image index information indicates which reference frame image in the list is used by the current block. Many coding technologies also support multiple reference image lists, so an index value, called the reference direction, can be used to indicate which reference image list is used. Motion-related information such as the motion vector, the reference frame index, and the reference direction may be collectively called motion information.
Interpolation: if the current motion vector has non-integer pixel precision, the existing pixel values cannot be copied directly from the reference frame corresponding to the current block, and the required pixel values of the current block can only be obtained through interpolation. As shown in FIG. 1A, if the pixel value Y1/2 with an offset of 1/2 pixel is needed, it can be obtained by interpolating the surrounding existing pixel values X. Exemplarily, if an interpolation filter with N taps is used, interpolation over the N surrounding integer pixels is needed.
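The half-pixel interpolation described above can be illustrated with the simplest 2-tap filter; real codecs typically use longer filters (e.g., 8-tap), and the names and rounding convention here are illustrative:

```python
def half_pel_bilinear(samples):
    """Half-pixel positions via a 2-tap average of neighbouring integer samples
    (a minimal stand-in for the N-tap interpolation filter in the text)."""
    return [(samples[i] + samples[i + 1] + 1) >> 1
            for i in range(len(samples) - 1)]
```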
Motion compensation: motion compensation is the process of obtaining all the pixel values of the current block through interpolation or copying.
Merge mode: includes the normal merge mode (i.e., the Normal Merge mode, which may also be called the regular merge mode), the sub-block merge mode (a merge mode using sub-block motion information, which may be called the Subblock merge mode), the MMVD mode (a merge mode that encodes a motion difference, which may be called the merge with MVD mode), the CIIP mode (a merge mode in which inter and intra prediction jointly generate a new predicted value, which may be called the combine inter intra prediction mode), the TPM mode (a merge mode for triangular prediction, which may be called the triangular prediction mode), and the GEO mode (a merge mode based on arbitrary geometric partition shapes, which may be called Geometrical Partitioning).
Skip mode: the skip mode is a special merge mode; the difference from the merge mode is that the skip mode does not need to encode a residual. If the current block is in skip mode, the CIIP mode is off by default, while the normal merge mode, sub-block merge mode, MMVD mode, TPM mode, and GEO mode are still applicable.
Exemplarily, how to generate the predicted value may be determined based on the normal merge mode, sub-block merge mode, MMVD mode, CIIP mode, TPM mode, GEO mode, etc. After the predicted value is generated, for the merge mode the reconstructed value can be obtained using the predicted value and the residual value; for the skip mode there is no residual value, and the reconstructed value is obtained directly from the predicted value.
Sequence parameter set (SPS): in the sequence parameter set there are flag bits that determine whether certain tool switches are allowed in the whole sequence. If a flag bit is 1, the tool corresponding to that flag bit is allowed to be enabled in the video sequence; if the flag bit is 0, the corresponding tool is not allowed to be enabled during encoding of the video sequence.
Normal merge mode: one piece of motion information is selected from a candidate motion information list, and the predicted value of the current block is generated based on that motion information. The candidate motion information list includes: candidate motion information of spatially neighbouring blocks, candidate motion information of temporally neighbouring blocks, candidate motion information of spatially non-adjacent blocks, motion information obtained by combining existing motion information, default motion information, etc.
MMVD mode: based on the candidate motion information list of the normal merge mode, one piece of motion information is selected from that list as the base motion information, and a motion information difference is obtained through a table lookup. The final motion information is obtained from the base motion information and the motion information difference, and the predicted value of the current block is generated based on the final motion information.
CIIP mode: a new predicted value of the current block is obtained by combining an intra predicted value and an inter predicted value.
Sub-block merge mode: the sub-block merge mode includes the Affine merge mode and the sub-block TMVP mode.
The Affine merge mode, similar to the normal merge mode, also selects one piece of motion information from a candidate motion information list and generates the predicted value of the current block based on it. Unlike the normal merge mode, the motion information in the candidate list of the normal merge mode is a 2-parameter translational motion vector, while the motion information in the candidate list of the Affine merge mode is 4-parameter or 6-parameter Affine motion information.
Sub-block TMVP (subblock-based temporal motion vector prediction) mode: in a temporal reference frame, the motion information of a block is directly reused to generate the predicted value of the current block, and the motion information of the sub-blocks within that block may differ from each other.
TPM mode: a block is divided into two triangular sub-blocks (there are two kinds, at 45 degrees and 135 degrees) that have different unidirectional motion information. The TPM mode is only used in the prediction process and does not affect the subsequent transform and quantization processes; the unidirectional motion information here is also obtained directly from the candidate motion information list.
GEO mode: the GEO mode is similar to the TPM mode, only with different partition shapes. The GEO mode divides a square block into two sub-blocks of arbitrary shape (any shape other than the two triangular sub-blocks of the TPM mode), such as a triangular sub-block and a pentagonal sub-block; or a triangular sub-block and a quadrilateral sub-block; or two trapezoidal sub-blocks, etc.; the partition shape is not limited. The two sub-blocks of the GEO partition have different unidirectional motion information.
As can be seen from the above examples, the merge mode and skip mode involved in this embodiment refer to a class of prediction modes that directly select one piece of motion information from a candidate motion information list to generate the predicted value of the current block. These prediction modes do not require a motion search process at the encoding side and, except for the MMVD mode, do not require encoding a motion information difference.
Video encoding framework: as shown in FIG. 1B, the video encoding framework can be used to implement the encoding-side processing flow of the embodiments of this application. The schematic diagram of the video decoding framework is similar to FIG. 1B and is not repeated here; the video decoding framework can be used to implement the decoding-side processing flow of the embodiments of this application. Specifically, the video encoding framework and the video decoding framework include modules such as intra prediction, motion estimation/motion compensation, a reference image buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and an entropy coder. On the encoding side, the encoding-side processing flow can be realized through the cooperation of these modules; on the decoding side, the decoding-side processing flow can be realized through their cooperation.
In the related art, when the current block is a bidirectional block, there is no reasonable solution for how to adjust the first original motion vector and the second original motion vector after they are obtained.
In the embodiments of this application, when the current block is a bidirectional block, considering that motion vectors from two different directions often have a mirror-symmetric relationship, this property can be used to further remove redundancy. For example, a motion vector adjustment mode can be provided in which, starting from the predicted value obtained from the original motion vectors, the motion vectors are fine-tuned through a local search at the decoding side, so as to obtain better motion vectors and generate a predicted value with smaller distortion.
Exemplarily, if it is determined that the motion vector adjustment mode is started for the current block, then for each sub-block of the current block, the first reference block corresponding to the sub-block can be determined according to the first original motion vector of the sub-block, and the second reference block corresponding to the sub-block according to the second original motion vector; the first original motion vector and the second original motion vector are adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block, obtaining the first target motion vector and the second target motion vector, and the predicted value of the sub-block can then be determined according to the first and second target motion vectors. This approach can solve problems such as low prediction quality and prediction errors, and improve coding performance and coding efficiency.
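The per-sub-block adjustment flow above (try mirror-symmetric candidate offsets, keep the one with the smallest matching cost, and apply it with opposite signs to the two original motion vectors) can be sketched as a toy model. Using SAD as the matching cost and dict-based block lookup are assumptions for illustration, not the patent's mandated implementation:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def refine_mv(ref0, ref1, mv0, mv1, offsets):
    """Pick the offset whose list0 block best matches the mirrored list1 block,
    then apply it with opposite signs to the two original motion vectors.
    ref0/ref1 map integer offsets to pixel lists in this toy model."""
    best = min(offsets, key=lambda o: sad(ref0[o], ref1[(-o[0], -o[1])]))
    mv0_t = (mv0[0] + best[0], mv0[1] + best[1])
    mv1_t = (mv1[0] - best[0], mv1[1] - best[1])
    return mv0_t, mv1_t
```

The opposite-sign update is exactly the mirror-symmetry assumption described in the text: a single refinement offset moves the two target motion vectors in opposite directions.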
The encoding and decoding method of this application is described in detail below with reference to several specific embodiments.
Embodiment 1: Referring to FIG. 2, which is a schematic flowchart of the encoding and decoding method proposed in an embodiment of this application, the method may be applied to the decoding side or the encoding side and may include the following steps:
Step 201: if all of the following conditions are met, determine that the motion vector adjustment mode is started for the current block:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are the same;
both reference frames of the current block are short-term reference frames;
the width, height, and area of the current block are all within the limited ranges;
the sizes of both reference frames of the current block are the same as the size of the current frame.
In a possible implementation, if any one of the following conditions is not met, it is determined that the motion vector adjustment mode is not started for the current block:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode;
the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are the same;
both reference frames of the current block are short-term reference frames;
the width, height, and area of the current block are all within the limited ranges;
the sizes of both reference frames of the current block are the same as the size of the current frame.
In the above embodiment, seven conditions are given, based on which it is determined whether to start the motion vector adjustment mode for the current block. In practical applications, some of the seven conditions may also be selected, and whether to start the motion vector adjustment mode for the current block is determined based on the selected conditions. For example, five of the seven conditions may be selected; the selection is not limited and may be any five conditions. If all five selected conditions are met, it is determined that the motion vector adjustment mode is started for the current block; if any one of the five selected conditions is not met, it is determined that the motion vector adjustment mode is not started for the current block. Of course, other numbers of conditions may also be selected from the seven conditions, which is not limited.
In the above embodiment, the merge mode or skip mode includes the normal merge mode, the sub-block merge mode, the MMVD mode, the CIIP mode, the TPM mode, and the GEO mode. That the prediction mode of the current block is not a mode other than the normal merge mode means: the prediction mode is not the sub-block merge mode, the MMVD mode, the CIIP mode, the TPM mode, the GEO mode, etc.
For example, when the prediction mode of the current block is a merge mode or a skip mode, the prediction mode of the current block is neither the MMVD mode nor the CIIP mode.
When it is determined that the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not the MMVD mode, not the CIIP mode, not the sub-block merge mode, not the TPM mode, and not the GEO mode, it can be determined that the prediction mode of the current block is not a mode other than the normal merge mode; that is, it is determined by elimination that the prediction mode of the current block is the normal merge mode.
In the above embodiment, that the predicted value of the current block is obtained by weighting reference blocks from two reference frames means that the current block uses the bidirectional prediction mode, i.e., the predicted value of the current block is obtained by weighting reference blocks from two reference frames.
In the above embodiment, the current block may correspond to motion information of two lists, denoted first motion information and second motion information; the first motion information includes the first reference frame and the first original motion vector, and the second motion information includes the second reference frame and the second original motion vector. The above two reference frames may be the first reference frame and the second reference frame. That the display orders of the two reference frames lie respectively before and after the current frame means: the first reference frame precedes the current frame in which the current block is located, and the second reference frame follows the current frame. Exemplarily, the first reference frame may also be called the forward reference frame, located in the first list (e.g., list0), and the second reference frame may also be called the backward reference frame, located in the second list (e.g., list1).
In the above embodiment, that the width, height, and area of the current block are all within the limited ranges includes: the width is greater than or equal to a first threshold, the height is greater than or equal to a second threshold, and the area is greater than or equal to a third threshold; or, the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than a fourth threshold. Exemplarily, the third threshold may be greater than the fourth threshold. For example, the first threshold may be 8, the second threshold may be 8, the third threshold may be 128, and the fourth threshold may be 64. Of course, the above values are only examples and are not limited.
In the above embodiment, that the control information allows the current block to use the motion vector adjustment mode may include but is not limited to: the sequence-level control information (e.g., control information for multiple frames of images) allows the current block to use the motion vector adjustment mode; and/or the frame-level control information (e.g., control information for one frame of image) allows the current block to use the motion vector adjustment mode.
Step 202: if it is determined that the motion vector adjustment mode is started for the current block, perform motion compensation on the current block.
In a possible implementation, if it is determined that the motion vector adjustment mode is started for the current block, then for each of the at least one sub-block included in the current block: determine the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block, and determine the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block; adjust the first original motion vector and the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, obtaining the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector; and determine the predicted value of the sub-block according to the first and second target motion vectors. After the predicted value of each sub-block is obtained, the predicted value of the current block may be determined according to the predicted value of each sub-block.
Exemplarily, determining the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block, and determining the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block, may include but is not limited to:
based on the first original motion vector of the sub-block, determining the first reference block corresponding to the sub-block from the first reference frame, where the pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or by copying the pixel values of adjacent pixels in the first reference block;
based on the second original motion vector of the sub-block, determining the second reference block corresponding to the sub-block from the second reference frame, where the pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or by copying the pixel values of adjacent pixels in the second reference block.
Exemplarily, the size of the first reference block is the same as the size of the second reference block; the width of the first reference block is determined based on the width of the sub-block and the search range, and the height of the first reference block is determined based on the height of the sub-block and the search range.
Exemplarily, for each sub-block included in the current block: adjust the first original motion vector of the sub-block and the second original motion vector of the sub-block according to the first pixel value of the first reference block corresponding to the sub-block and the second pixel value of the second reference block corresponding to the sub-block, obtaining the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector, i.e., the first target motion vector and the second target motion vector of the sub-block.
In a possible implementation, taking an initial motion vector as the center, part or all of the motion vectors may be selected from the motion vectors surrounding, and including, the initial motion vector, and the selected motion vectors are determined as candidate motion vectors, where the initial motion vector is the first original motion vector or the second original motion vector. Then, according to the first pixel value of the first reference block and the second pixel value of the second reference block, one motion vector may be selected from the initial motion vector and the candidate motion vectors as the optimal motion vector. Then, the first original motion vector may be adjusted according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and the second original motion vector may be adjusted according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
Exemplarily, adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, may include: determining, according to the optimal motion vector, a first integer-pixel motion vector adjustment value, a second integer-pixel motion vector adjustment value, a first sub-pixel motion vector adjustment value, and a second sub-pixel motion vector adjustment value of the sub-block; adjusting the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block; and adjusting the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
Exemplarily, for each of the at least one sub-block included in the current block: the predicted value of the sub-block may be determined according to the first target motion vector of the sub-block and the second target motion vector of the sub-block; this process is not described in detail here.
In a possible implementation, if the optimal motion vector is the same as the initial motion vector, a third reference block corresponding to the sub-block may be determined from the first reference frame based on the first target motion vector of the sub-block, and a fourth reference block corresponding to the sub-block may be determined from the second reference frame based on the second target motion vector of the sub-block. Then, the pixel values of the third reference block and the pixel values of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
In another possible implementation, if the optimal motion vector is different from the initial motion vector, a fifth reference block may be determined from the first reference frame and extended to obtain a sixth reference block; then, based on the first target motion vector of the sub-block, the third reference block corresponding to the sub-block is selected from the sixth reference block. Likewise, a seventh reference block may be determined from the second reference frame and extended to obtain an eighth reference block; based on the second target motion vector of the sub-block, the fourth reference block corresponding to the sub-block is selected from the eighth reference block. Then, the pixel values of the third reference block and the pixel values of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
In the above embodiment, weighting the pixel values of the third reference block and the pixel values of the fourth reference block to obtain the predicted value of the sub-block may include but is not limited to: performing weighting on the pixel values of the third reference block, a first weight corresponding to the pixel values of the third reference block, the pixel values of the fourth reference block, and a second weight corresponding to the pixel values of the fourth reference block, to obtain the predicted value of the sub-block; exemplarily, the first weight and the second weight may be the same.
Exemplarily, after the predicted value of each sub-block is obtained, the predicted values of the sub-blocks may be combined to obtain the predicted value of the current block; the determination process of the predicted value of the current block is not limited.
As can be seen from the above technical solutions, in the embodiments of this application, if it is determined that the motion vector adjustment mode is started for the current block, the first and second target motion vectors are obtained from the first and second original motion vectors, and the predicted value is determined according to the first and second target motion vectors rather than the original motion vectors, which solves problems such as low prediction quality and prediction errors, and improves coding performance and coding efficiency.
Embodiment 2: Based on the same concept as the above method, referring to FIG. 3, which is a schematic flowchart of another encoding and decoding method proposed in an embodiment of this application, the method may be applied to the encoding side and may include the following steps:
Step 301: the encoding side determines whether to start the motion vector adjustment mode for the current block. If yes, step 302 is executed; if no, the motion vector adjustment approach proposed in this application does not need to be adopted, and the handling of this case is not limited.
In one example, if the encoding side determines to start the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is not accurate enough; therefore, the motion vector adjustment mode (i.e., the technical solution of this application) is started for the current block, and step 302 is executed.
If the encoding side determines not to start the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is already sufficiently accurate; therefore, the motion vector adjustment mode may not be started for the current block, and the motion vector adjustment approach proposed in this application is not adopted.
Step 302: for each of the at least one sub-block included in the current block: the encoding side determines the first reference block corresponding to the sub-block from the first reference frame according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block from the second reference frame according to the second original motion vector of the sub-block. For ease of distinction, the pixel value of each pixel in the first reference block is called the first pixel value, and the pixel value of each pixel in the second reference block is called the second pixel value.
In one example, if the current block is a block using bidirectional prediction, then for each sub-block of the current block there may be bidirectional motion information, which may include two reference frames and two original motion vectors, i.e., the first reference frame and the first original motion vector, and the second reference frame and the second original motion vector.
Based on the first original motion vector, the encoding side determines the first reference block corresponding to the sub-block from the first reference frame, and the pixel value of each pixel in the first reference block is called the first pixel value. Based on the second original motion vector, the encoding side determines the second reference block corresponding to the sub-block from the second reference frame, and the pixel value of each pixel in the second reference block is called the second pixel value.
In one example, the distance between the current frame in which the current block is located and the first reference frame may be the same as the distance between the second reference frame and the current frame. For example, the first reference frame is frame 1, the current frame is frame 5, and the second reference frame is frame 9.
In one example, the first original motion vector and the second original motion vector may have a mirror-symmetric relationship. For example, the first original motion vector is (4, 4) and the second original motion vector is (-4, -4); or the first original motion vector is (2.5, 3.5) and the second original motion vector is (-2.5, -3.5). Of course, this is only an example and is not limited.
For the manner of determining the first reference block and the second reference block, reference may be made to the subsequent embodiments, which is not repeated here.
Step 303: the encoding side adjusts the first original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, obtaining the first target motion vector of the sub-block; and adjusts the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, obtaining the second target motion vector of the sub-block.
In one example, if the motion vector adjustment mode is started for the current block, the encoding side may fine-tune the first and second original motion vectors through a local search based on the first pixel value of the first reference block and the second pixel value of the second reference block, so as to obtain better first and second target motion vectors, and then use the first and second target motion vectors to generate a predicted value with smaller distortion.
In one example, the current block may include at least one sub-block; if the current block includes only one sub-block, the sub-block is the current block itself. For each sub-block of the current block, the sub-block may correspond to a first original motion vector and a second original motion vector, and after adjustment the sub-block may correspond to a first target motion vector and a second target motion vector.
Exemplarily, if the current block includes sub-block A and sub-block B, then sub-block A corresponds to first original motion vector A1 and second original motion vector A2 and, after adjustment, to first target motion vector A3 and second target motion vector A4. Sub-block B corresponds to first original motion vector B1 and second original motion vector B2 and, after adjustment, to first target motion vector B3 and second target motion vector B4.
Exemplarily, the first original motion vector A1 of sub-block A and the first original motion vector B1 of sub-block B may be the same, both being the first original motion vector of the current block; the second original motion vector A2 of sub-block A and the second original motion vector B2 of sub-block B may be the same, both being the second original motion vector of the current block.
Since the first original motion vector is adjusted separately for each sub-block, the first target motion vector A3 of sub-block A and the first target motion vector B3 of sub-block B may be the same or different.
Since the second original motion vector is adjusted separately for each sub-block, the second target motion vector A4 of sub-block A and the second target motion vector B4 of sub-block B may be the same or different.
For the manner of adjusting the original motion vectors, reference may be made to the subsequent embodiments, which is not repeated here.
Step 304: the encoding side determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector.
Step 305: the encoding side determines the predicted value of the current block according to the predicted value of each sub-block.
For example, if the current block includes sub-block A and sub-block B, the predicted value of sub-block A may be determined using the first and second target motion vectors of sub-block A, and the predicted value of sub-block B using the first and second target motion vectors of sub-block B; the predicted values of sub-block A and sub-block B together form the predicted value of the current block.
Exemplarily, the encoding side stores the first target motion vector and the second target motion vector of each sub-block of the current block; or stores the first original motion vector and the second original motion vector of each sub-block; or stores the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block.
Embodiment 3: Based on the same concept as the above method, referring to FIG. 4, which is a schematic flowchart of another encoding and decoding method proposed in an embodiment of this application, the method may be applied to the decoding side and may include the following steps:
Step 401: the decoding side determines whether to start the motion vector adjustment mode for the current block. If yes, step 402 is executed; if no, the motion vector adjustment approach proposed in this application does not need to be adopted, and the handling of this case is not limited.
In one example, if the decoding side determines to start the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is not accurate enough; therefore, the motion vector adjustment mode (i.e., the technical solution of this application) is started for the current block, and step 402 is executed.
If the decoding side determines not to start the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is already sufficiently accurate; therefore, the motion vector adjustment mode may not be started for the current block, and the motion vector adjustment approach proposed in this application is not adopted.
Step 402: for each of the at least one sub-block included in the current block: the decoding side determines the first reference block corresponding to the sub-block from the first reference frame according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block from the second reference frame according to the second original motion vector of the sub-block. For ease of distinction, the pixel value of each pixel in the first reference block is called the first pixel value, and the pixel value of each pixel in the second reference block is called the second pixel value.
Step 403: the decoding side adjusts the first original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, obtaining the first target motion vector of the sub-block; and adjusts the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, obtaining the second target motion vector of the sub-block.
Step 404: the decoding side determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector.
Step 405: the decoding side determines the predicted value of the current block according to the predicted value of each sub-block.
Exemplarily, the decoding side stores the first target motion vector and the second target motion vector of each sub-block of the current block; or stores the first original motion vector and the second original motion vector of each sub-block; or stores the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block.
Exemplarily, for steps 401-405, reference may be made to steps 301-305, which is not repeated here.
Embodiment 4: The above embodiments involve whether to start the motion vector adjustment mode for the current block, which is explained below.
In a possible implementation, the following starting conditions may be given. Of course, these starting conditions are only an example; in practical applications they may be combined arbitrarily, which is not limited. Exemplarily, when all of the following starting conditions are met, it is determined that the motion vector adjustment mode is started for the current block.
1. The control information allows the current block to use the motion vector adjustment mode.
Exemplarily, the control information may include but is not limited to: sequence-level control information and/or frame-level control information.
In a possible implementation, the sequence-level control information (e.g., for multiple frames of images) may include a control flag bit (e.g., sps_cur_tool_enabled_flag), and the frame-level control information (e.g., for one frame of image) may include a control flag bit (e.g., pic_cur_tool_disabled_flag). When sps_cur_tool_enabled_flag is a first value and pic_cur_tool_disabled_flag is a second value, the current block is allowed to use the motion vector adjustment mode.
Exemplarily, sps_cur_tool_enabled_flag indicates whether all images in the sequence are allowed to use the motion vector adjustment mode, and pic_cur_tool_disabled_flag indicates whether the blocks in the current image are not allowed to use the motion vector adjustment mode. When sps_cur_tool_enabled_flag is the first value, all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_disabled_flag is the second value, the blocks in the current image are allowed to use the motion vector adjustment mode.
Exemplarily, when sps_cur_tool_enabled_flag is the second value and/or pic_cur_tool_disabled_flag is the first value, the current block is not allowed to use the motion vector adjustment mode, i.e., the control information does not allow the current block to use the motion vector adjustment mode.
In another possible implementation, the sequence-level control information may include a control flag bit (e.g., sps_cur_tool_disabled_flag), and the frame-level control information may include a control flag bit (e.g., pic_cur_tool_disabled_flag). When sps_cur_tool_disabled_flag is the second value and pic_cur_tool_disabled_flag is the second value, the current block is allowed to use the motion vector adjustment mode.
Exemplarily, sps_cur_tool_disabled_flag indicates whether all images in the sequence are not allowed to use the motion vector adjustment mode. When sps_cur_tool_disabled_flag is the second value, all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_disabled_flag is the second value, the blocks in the current image are allowed to use the motion vector adjustment mode.
Exemplarily, when sps_cur_tool_disabled_flag is the first value and/or pic_cur_tool_disabled_flag is the first value, the current block is not allowed to use the motion vector adjustment mode, i.e., the control information does not allow the current block to use the motion vector adjustment mode.
In another possible implementation, the sequence-level control information may include a control flag bit (e.g., sps_cur_tool_enabled_flag), and the frame-level control information may include a control flag bit (e.g., pic_cur_tool_enabled_flag). When sps_cur_tool_enabled_flag is the first value and pic_cur_tool_enabled_flag is the first value, the current block is allowed to use the motion vector adjustment mode.
Exemplarily, pic_cur_tool_enabled_flag indicates whether the blocks in the current image are allowed to use the motion vector adjustment mode. When sps_cur_tool_enabled_flag is the first value, all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_enabled_flag is the first value, the blocks in the current image are allowed to use the motion vector adjustment mode.
Exemplarily, when sps_cur_tool_enabled_flag is the second value and/or pic_cur_tool_enabled_flag is the second value, the current block is not allowed to use the motion vector adjustment mode, i.e., the control information does not allow the current block to use the motion vector adjustment mode.
In another possible implementation, the sequence-level control information may include a control flag bit (e.g., sps_cur_tool_disabled_flag), and the frame-level control information may include a control flag bit (e.g., pic_cur_tool_enabled_flag). When sps_cur_tool_disabled_flag is the second value and pic_cur_tool_enabled_flag is the first value, the current block is allowed to use the motion vector adjustment mode.
Exemplarily, when sps_cur_tool_disabled_flag is the second value, all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_enabled_flag is the first value, the blocks in the current image are allowed to use the motion vector adjustment mode.
Exemplarily, when sps_cur_tool_disabled_flag is the first value and/or pic_cur_tool_enabled_flag is the second value, the current block is not allowed to use the motion vector adjustment mode, i.e., the control information does not allow the current block to use the motion vector adjustment mode.
In the above embodiments, the first value may be 1 and the second value 0; or, the first value may be 0 and the second value 1. Of course, these are only examples and are not limited.
Exemplarily, "frame" herein is equivalent to "image"; for example, the current frame denotes the current image, and a reference frame denotes a reference image.
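One of the flag combinations above (sequence-level enable flag set to the first value 1, frame-level disable flag at the second value 0) can be sketched as a simple check; the function name is illustrative:

```python
def mv_adjust_allowed(sps_enabled_flag, pic_disabled_flag):
    """Control-information gate: the sequence-level enable flag is set and
    the frame-level disable flag is not (first value = 1, second value = 0)."""
    return sps_enabled_flag == 1 and pic_disabled_flag == 0
```

The other three flag combinations described in the text differ only in which flag is an enable or a disable flag, so they reduce to the same kind of comparison.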
2. The prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode.
In a possible implementation, if the prediction mode of the current block (e.g., the inter prediction mode) is a merge mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal merge mode (such as the sub-block merge mode, MMVD mode, CIIP mode, TPM mode, GEO mode, etc.), the current block is allowed to use the motion vector adjustment mode. For example, when the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is neither the MMVD mode nor the CIIP mode, the current block is allowed to use the motion vector adjustment mode.
Exemplarily, if the prediction mode of the current block is neither a merge mode nor a skip mode, the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 2 is not met.
Exemplarily, if the prediction mode of the current block is a merge mode or a skip mode but is a mode other than the normal merge mode (such as the sub-block merge mode, MMVD mode, CIIP mode, TPM mode, GEO mode, etc.), the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 2 is not met.
In another possible implementation, if the prediction mode of the current block is the normal merge mode (e.g., the regular merge mode), the current block is allowed to use the motion vector adjustment mode. Exemplarily, the normal merge mode is: reusing a piece of motion information from the motion information list of the current block as the motion information of the current block to generate the predicted value of the current block.
Exemplarily, if the prediction mode of the current block is not the normal merge mode, the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 2 is not met.
3. The predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames lie respectively before and after the current frame, and the two reference frames are at the same distance from the current frame. That the predicted value of the current block is obtained by weighting reference blocks from two reference frames means that the current block uses the bidirectional prediction mode. Exemplarily, the current block may correspond to motion information of two lists, denoted first motion information and second motion information; the first motion information includes the first reference frame and the first original motion vector, and the second motion information includes the second reference frame and the second original motion vector. The above two reference frames may be the first reference frame and the second reference frame. That the display orders of the two reference frames lie respectively before and after the current frame means: the first reference frame precedes the current frame in which the current block is located, and the second reference frame follows the current frame.
In a possible implementation, if the current block has motion information of two lists (e.g., list0 and list1), i.e., two reference frames and two motion vectors, the display orders of the two reference frames lie respectively before and after the current frame, and the distances from the two reference frames to the current frame are the same, the current block is allowed to use the motion vector adjustment mode.
That the display orders of the two reference frames lie respectively before and after the current frame, and that the distances from the two reference frames to the current frame are the same, can be expressed through the relation between the display order number POC_Cur of the current frame, the display order number POC_0 of the list0 reference frame, and the display order number POC_1 of the list1 reference frame: namely, (POC_Cur - POC_0) is exactly equal to (POC_1 - POC_Cur).
Exemplarily, the current block uses bidirectional prediction, and the two reference frames of the current block come from different directions; that is, one reference frame of the current block is located before the current frame and the other reference frame is located after the current frame.
Exemplarily, if the current block has only one reference frame, the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 3 is not met. Or, if the current block has two reference frames but the display orders of both precede the current frame, the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 3 is not met. Or, if the current block has two reference frames but the display orders of both follow the current frame, the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 3 is not met. Or, if the current block has two reference frames whose display orders lie respectively before and after the current frame but whose distances to the current frame differ, the current block is not allowed to use the motion vector adjustment mode, i.e., starting condition 3 is not met.
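The POC relation above can be checked directly; a minimal sketch (function name illustrative):

```python
def refs_symmetric(poc_cur, poc_0, poc_1):
    """True when one reference precedes and one follows the current frame,
    at equal POC distance: (poc_cur - poc_0) == (poc_1 - poc_cur) > 0."""
    return poc_0 < poc_cur < poc_1 and (poc_cur - poc_0) == (poc_1 - poc_cur)
```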
4、当前块的两个参考帧的加权权重相同。
在一种可能的实施方式中,若当前块的两个参考帧的加权权重相同,则表示允许当前块使用运动矢量调整模式。示例性的,若两个参考帧的帧级加权权重相同,如参考帧refIdxL0的亮度加权权重(luma_weight_l0_flag[refIdxL0]),可以等于参考帧refIdxL1的亮度加权权重(luma_weight_l1_flag[refIdxL1]),则表示当前块的两个参考帧的加权权重相同。或者,若两个参考帧的块级加权权重相同,如当前块的块级加权值的索引BcwIdx[xCb][yCb]为0,则表示当前块的两个参考帧的加权权重相同。或者,若两个参考帧的帧级加权权重相同,且两个参考帧的块级加权权重相同,则表示当前块的两个参考帧的加权权重相同。
示例性的,若当前块的两个参考帧的加权权重不同,则表示不允许当前块使用运动矢量调整模式,即启动条件4不满足。例如,若两个参考帧的帧级加权权重不同,则表示当前块的两个参考帧的加权权重不同。或者,若两个参考帧的块级加权权重不同,则表示当前块的两个参考帧的加权权重不同。或者,若两个参考帧的帧级加权权重不同,且两个参考帧的块级加权权重不同,则表示当前块的两个参考帧的加权权重不同。
示例性的,当前块的两个参考帧的加权权重是指双向加权补偿时采用的权重。例如,针对当前块的每个子块,在得到该子块的两个预测值(获取过程参见后续实施例)后,需要对这两个预测值进行加权,得到该子块的最终预测值。在对这两个预测值进行加权时,这两个预测值对应的权重,就是当前块的两个参考帧的加权权重,即这两个预测值对应的权重相同。
5、当前块的两个参考帧均是短期参考帧。或者说当前块的两个参考帧均不是长期参考帧。
在一种可能的实施方式中,若当前块的两个参考帧都为短期参考帧,则表示允许当前块使用运动矢量调整模式。短期参考帧表示离当前帧较近的参考帧,一般是实际的图像帧。
示例性的,若当前块的两个参考帧不都为短期参考帧,则表示不允许当前块使用运动矢量调整模式,即启动条件5不满足。或者,若当前块的一个参考帧不是短期参考帧,则表示不允许当前块使用运动矢量调整模式,即启动条件5不满足。或者,若当前块的两个参考帧都不是短期参考帧,则表示不允许当前块使用运动矢量调整模式,即启动条件5不满足。
在另一种可能的实施方式中,若当前块的两个参考帧都不是长期参考帧,则表示允许当前块使用运动矢量调整模式。长期参考帧的显示序号POC并没有实际含义,长期参考帧表示离当前帧较远的参考帧,或者是通过几帧实际图像合成出来的图像帧。
示例性的,若当前块的一个参考帧是长期参考帧,则表示不允许当前块使用运动矢量调整模式,即启动条件5不满足。或者,若当前块的两个参考帧都是长期参考帧,则表示不允许当前块使用运动矢量调整模式,即启动条件5不满足。
6、当前块的宽度,高度和面积均在限定范围内。
在一种可能的实施方式中,若当前块的宽cbWidth大于或者等于第一阈值(如8),且当前块的高cbHeight大于或者等于第二阈值(如8),且当前块的面积(cbHeight*cbWidth)大于或者等于第三阈值(如128),则表示允许当前块使用运动矢量调整模式。
示例性的，若当前块的宽cbWidth小于第一阈值，则表示不允许当前块使用运动矢量调整模式，即启动条件6不满足。或者，若当前块的高cbHeight小于第二阈值，则表示不允许当前块使用运动矢量调整模式，即启动条件6不满足。或者，若当前块的面积小于第三阈值，则表示不允许当前块使用运动矢量调整模式，即启动条件6不满足。
在另一种可能的实施方式中,若当前块的宽cbWidth大于或者等于第一阈值(如8),且当前块的高cbHeight大于或者等于第二阈值(如8),且当前块的面积(cbHeight*cbWidth)大于第四阈值(如64),则表示允许当前块使用运动矢量调整模式。
示例性的，若当前块的宽cbWidth小于第一阈值，则表示不允许当前块使用运动矢量调整模式，即启动条件6不满足。或者，若当前块的高cbHeight小于第二阈值，则表示不允许当前块使用运动矢量调整模式，即启动条件6不满足。或者，若当前块的面积小于或者等于第四阈值，则表示不允许当前块使用运动矢量调整模式，即启动条件6不满足。
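上述宽、高、面积的限定条件可以用如下Python代码示意（示意性草稿：函数名与参数为本文假设；strict_area用于区分“面积大于或者等于第三阈值(128)”与“面积大于第四阈值(64)”两种实施方式）：

```python
def check_condition6(cb_width, cb_height,
                     thr_w=8, thr_h=8, area_thr=128, strict_area=False):
    """判断当前块的宽、高、面积是否均在限定范围内(启动条件6)。"""
    if cb_width < thr_w or cb_height < thr_h:
        return False  # 宽或高不满足阈值
    area = cb_width * cb_height
    # strict_area=False: 面积 >= 第三阈值；strict_area=True: 面积 > 第四阈值
    return area > area_thr if strict_area else area >= area_thr
```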
7、当前块的两个参考帧的尺寸与当前帧的尺寸相同。
在一种可能的实施方式中,若list0的参考帧的尺寸与当前帧的尺寸相同,例如,list0的参考帧的宽与当前帧的宽相同,list0的参考帧的高与当前帧的高相同,并且,list1的参考帧的尺寸与当前帧的尺寸相同,例如,list1的参考帧的宽与当前帧的宽相同,list1的参考帧的高与当前帧的高相同,则表示允许当前块使用运动矢量调整模式。
示例性的,若当前块的两个参考帧中的至少一个参考帧的尺寸与当前帧的尺寸不同,则表示不允许当前块使用运动矢量调整模式,即启动条件7不满足。例如,若list0的参考帧的宽与当前帧的宽不同,则表示不允许当前块使用运动矢量调整模式。或者,若list0的参考帧的高与当前帧的高不同,则表示不允许当前块使用运动矢量调整模式。或者,若list1的参考帧的宽与当前帧的宽不同,则表示不允许当前块使用运动矢量调整模式。或者,若list1的参考帧的高与当前帧的高不同,则表示不允许当前块使用运动矢量调整模式。
实施例5:在上述实施例中,涉及针对当前块的每个子块,根据该子块的第一原始运动矢量,从第一参考帧中确定该子块对应的第一参考块,第一参考块中每个像素点的像素值称为第一像素值;根据该子块的第二原始运动矢量,从第二参考帧中确定该子块对应的第二参考块,第二参考块中每个像素点的像素值称为第二像素值,以下对此进行说明。
第一参考块中的每个像素点的第一像素值,是通过对第一参考块中的相邻像素点的像素值进行插值得到,或者,通过对第一参考块中的相邻像素点的像素值进行拷贝得到。
第二参考块中的每个像素点的第二像素值，是通过对第二参考块中的相邻像素点的像素值进行插值得到，或者，通过对第二参考块中的相邻像素点的像素值进行拷贝得到。
第一参考块的尺寸与第二参考块的尺寸相同,第一参考块/第二参考块的宽度基于子块的宽度与搜索范围确定,第一参考块/第二参考块的高度基于子块的高度与搜索范围确定。
例如，针对当前块的每个dx*dy的子块，子块的宽度为dx，子块的高度为dy，第一原始运动矢量记为MV0，第二原始运动矢量记为MV1。子块可以为16*16的子块，也可以为更小的子块（如8*8）或更大的子块（如32*32），对此不做限制。示例性的，子块的大小可以与当前块的大小相同，即子块就是当前块：例如，当前块为8*16时，当前块只包括一个子块，该子块的大小为8*16。子块的大小也可以与当前块的大小不同：例如，当前块为8*32时，当前块可以包括两个8*16的子块。当然，上述只是示例，为了方便描述，后续以16*16的子块为例进行说明。
从第一原始运动矢量MV0在第一参考帧的对应位置,得到面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块,可以将这个整像素块记为整像素块A。
从第二原始运动矢量MV1在第二参考帧的对应位置,得到面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块,可以将这个整像素块记为整像素块B。
在一种可能的实施方式中,基于面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块A,可以通过双线性插值的方式,获得尺寸为(dx+2*IterNum)*(dy+2*IterNum)的初始参考像素块,可以将这个初始参考像素块记为第一参考块。基于面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块B,可以通过双线性插值的方式,获得尺寸为(dx+2*IterNum)*(dy+2*IterNum)的初始参考像素块,可以将这个初始参考像素块记为第二参考块。
在另一种可能的实施方式中,基于面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块A,通过直接拷贝(无需插值)的方式,获得尺寸为(dx+2*IterNum)*(dy+2*IterNum)的初始参考像素块,将这个初始参考像素块记为第一参考块。基于面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块B,可以通过直接拷贝的方式,获得尺寸为(dx+2*IterNum)*(dy+2*IterNum)的初始参考像素块,将这个初始参考像素块记为第二参考块。
示例性的,可以仅针对亮度分量(因为后续的搜索过程仅用亮度分量计算代价值,以降低复杂度),基于面积为(dx+filtersize-1)*(dy+filtersize-1)的整像素块(如整像素块A和整像素块B),获得尺寸为(dx+2*IterNum)*(dy+2*IterNum)的初始参考像素块,该初始参考像素块为第一参考块(如Pred_Inter0)和第二参考块(如Pred_Inter1)。
在一个例子中,filtersize可以为插值滤波器的抽头数,如可以为8等,对此不做限制。
在一个例子中,通过双线性插值获得第一参考块/第二参考块是指:第一参考块/第二参考块中每个像素点的像素值,通过对第一参考块/第二参考块中的相邻像素点的像素值进行插值得到。通过拷贝获得第一参考块/第二参考块是指:第一参考块/第二参考块中每个像素点的像素值,通过对第一参考块/第二参考块中的相邻像素点的像素值进行拷贝得到。
参见上述实施例，第一参考块的面积为(dx+2*IterNum)*(dy+2*IterNum)，第二参考块的面积为(dx+2*IterNum)*(dy+2*IterNum)，比如说，第一参考块/第二参考块的宽度值为dx+2*IterNum，第一参考块/第二参考块的高度值为dy+2*IterNum。dx为子块的宽，dy为子块的高，IterNum可以为搜索范围SR（即后续实施例的迭代次数），也就是目标运动矢量与原始运动矢量的最大水平/竖直分量差值，如IterNum可以为2等。
参见图5所示,针对16*16的子块,基于第一原始运动矢量MV0在第一参考帧的对应位置,得到面积为23(即16+8-1)*23的整像素块A。基于面积为23*23的整像素块A,可以通过双线性插值的方式,获得尺寸为20(即16+2*2)*20的第一参考块。同理,针对16*16的子块,基于第二原始运动矢量MV1在第二参考帧的对应位置,得到面积为23*23的整像素块B。基于面积为23*23的整像素块B,获得尺寸为20*20的第二参考块。
第一参考块和第二参考块用于后续过程中的运动矢量调整。
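上述各尺寸之间的关系可以用如下Python代码示意（示意性草稿，函数名为本文假设）：

```python
def ref_block_sizes(dx, dy, filtersize=8, iter_num=2):
    """计算整像素块与第一/第二参考块的尺寸。

    整像素块尺寸：(dx+filtersize-1) * (dy+filtersize-1)；
    初始参考像素块(第一/第二参考块)尺寸：(dx+2*IterNum) * (dy+2*IterNum)。
    """
    int_block = (dx + filtersize - 1, dy + filtersize - 1)
    ref_block = (dx + 2 * iter_num, dy + 2 * iter_num)
    return int_block, ref_block
```

例如，16*16的子块、filtersize=8、IterNum=2时，整像素块为23*23，第一/第二参考块为20*20，与图5的示例一致。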
实施例6:在上述实施例中,涉及针对当前块的每个子块,根据第一参考块的第一像素值和第二参考块的第二像素值,对第一原始运动矢量进行调整,得到子块的第一目标运动矢量;根据第一参考块的第一像素值和第二参考块的第二像素值,对第二原始运动矢量进行调整,得到子块的第二目标运动矢量。以一个子块(如当前块的每个dx*dy大小的子块)的处理过程为例,介绍原始运动矢量的调整过程。
步骤a1、将第一原始运动矢量或者第二原始运动矢量确定为中心运动矢量。
例如,假设第一原始运动矢量为(4,4),第二原始运动矢量为(-4,-4),将第一原始运动矢量(4,4)或第二原始运动矢量(-4,-4)确定为中心运动矢量。
为了方便描述,后续以将第一原始运动矢量(4,4)确定为中心运动矢量为例,将第二原始运动矢量(-4,-4)确定为中心运动矢量的流程类似,在此不再赘述。
步骤a2、确定与中心运动矢量对应的边缘运动矢量。
例如,可以将中心运动矢量(x,y)向不同方向偏移S,得到不同方向的边缘运动矢量(x,y+S)、边缘运动矢量(x,y-S)、边缘运动矢量(x+S,y)、边缘运动矢量(x-S,y)、边缘运动矢量(x+right,y+down)。示例性的,right可以为S或者-S,down可以为S或者-S,关于right和down的确定方式,可以参见后续实施例。参见图6所示,将中心运动矢量(x,y)作为中心,即中心运动矢量为(0,0),以S为1,right和down均为1为例,则中心运动矢量(0,0)对应的边缘运动矢量包括:边缘运动矢量(0,1),边缘运动矢量(0,-1),边缘运动矢量(1,0),边缘运动矢量(-1,0),边缘运动矢量(1,1)。
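上述边缘运动矢量的生成过程可以用如下Python代码示意（示意性草稿，函数名为本文假设；right/down默认取-S，可按后续实施例的代价比较结果改为S）：

```python
def edge_mvs(center, s=1, right=None, down=None):
    """以中心运动矢量(x, y)为中心，得到5个边缘运动矢量。"""
    x, y = center
    if right is None:
        right = -s  # 默认值为 -S
    if down is None:
        down = -s   # 默认值为 -S
    return [(x, y + s), (x, y - s), (x + s, y), (x - s, y),
            (x + right, y + down)]
```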
步骤a3、根据第一参考块的第一像素值和第二参考块的第二像素值,获取中心运动矢量对应的第一代价值、每个边缘运动矢量对应的第二代价值。
例如,从第一参考块中复制获取中心运动矢量(0,0)对应的子参考块A1,子参考块A1是中心运动矢量(0,0)在第一参考块中的子参考块。从第二参考块中复制获取中心运动矢量(0,0)对应的子参考块B1,子参考块B1是中心运动矢量(0,0)在第二参考块中的子参考块。然后,利用子参考块A1的第一像素值和子参考块B1的第二像素值,获取中心运动矢量(0,0)对应的代价值1,关于代价值的确定方式可以参见后续实施例。
从第一参考块中复制获取边缘运动矢量(0,1)对应的子参考块A2，子参考块A2是边缘运动矢量(0,1)在第一参考块中的子参考块。从第二参考块中复制获取边缘运动矢量(0,1)的对称运动矢量(0,-1)对应的子参考块B2，子参考块B2是对称运动矢量(0,-1)在第二参考块中的子参考块。利用子参考块A2的第一像素值和子参考块B2的第二像素值，获取边缘运动矢量(0,1)对应的代价值2，关于代价值的确定方式可以参见后续实施例。
基于边缘运动矢量(0,1)对应的代价值2的确定方式,可以确定边缘运动矢量(0,-1)对应的代价值3、边缘运动矢量(1,0)对应的代价值4、边缘运动矢量(-1,0)对应的代价值5、边缘运动矢量(1,1)对应的代价值6,在此不再重复赘述。
步骤a4、根据第一代价值和第二代价值,从中心运动矢量和边缘运动矢量中选择一个运动矢量,作为最优运动矢量。例如,可以将代价值最小的运动矢量作为最优运动矢量。
例如,假设边缘运动矢量(0,1)对应的代价值2最小,则可以将代价值2对应的边缘运动矢量(0,1)作为最优运动矢量。当然,这里只是一个示例,对此不做限制。
步骤a5、判断是否满足结束条件。如果否,则可以将该最优运动矢量确定为中心运动矢量,并返回步骤a2。如果是,则可以执行步骤a6。
在一个例子中,若迭代次数/搜索范围达到阈值,则满足结束条件;若迭代次数/搜索范围未达到阈值,则不满足结束条件。例如,假设SR为2,即阈值为2,若迭代次数/搜索范围已经达到2次,即步骤a2-步骤a4已经执行两次,则满足结束条件;否则,不满足结束条件。
在另一个例子中,从中心运动矢量和边缘运动矢量中选择一个运动矢量作为最优运动矢量后,若选择中心运动矢量作为最优运动矢量,则可以满足结束条件。
步骤a6、根据最优运动矢量确定第一整像素运动矢量调整值(用于调整第一原始运动矢量)和第二整像素运动矢量调整值(用于调整第二原始运动矢量)。
在一个例子中,可以根据最优运动矢量和第一原始运动矢量确定第一整像素运动矢量调整值,并根据第一整像素运动矢量调整值确定第二整像素运动矢量调整值,示例性的,第二整像素运动矢量调整值可以与第一整像素运动矢量调整值对称。
例如,第一次迭代过程,最优运动矢量为边缘运动矢量(0,1),以边缘运动矢量(0,1)为中心进行第二次迭代,第二次迭代过程,最优运动矢量为边缘运动矢量(0,1),假设至此完成迭代过程,则第一整像素运动矢量调整值为(0,2),即边缘运动矢量(0,1)与边缘运动矢量(0,1)的和。基于此,假设第一原始运动矢量为(4,4),第一次迭代过程,最优运动矢量为边缘运动矢量(0,1),即最优运动矢量可以对应最优运动矢量(4,5)。以边缘运动矢量(0,1)为中心进行第二次迭代,第二次迭代过程,最优运动矢量为边缘运动矢量(0,1),即最优运动矢量可以对应最优运动矢量(4,6)。综上所述,根据最优运动矢量(4,6)和第一原始运动矢量(4,4)确定第一整像素运动矢量调整值,第一整像素运动矢量调整值为最优运动矢量(4,6)与第一原始运动矢量(4,4)的差,即第一整像素运动矢量调整值为(0,2)。根据第一整像素运动矢量调整值(0,2)确定第二整像素运动矢量调整值,第二整像素运动矢量调整值可以为(0,-2),即(0,2)的对称值。
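步骤a6的调整值计算可以用如下Python代码示意（示意性草稿，函数名为本文假设）：

```python
def integer_adjustments(best_mv, org_mv0):
    """第一整像素运动矢量调整值 = 最优运动矢量 - 第一原始运动矢量；
    第二整像素运动矢量调整值为第一整像素运动矢量调整值的对称值。"""
    adj0 = (best_mv[0] - org_mv0[0], best_mv[1] - org_mv0[1])
    adj1 = (-adj0[0], -adj0[1])
    return adj0, adj1
```

例如，最优运动矢量为(4,6)、第一原始运动矢量为(4,4)时，两个调整值分别为(0,2)和(0,-2)。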
步骤a7、根据最优运动矢量确定第一分像素运动矢量调整值(用于调整第一原始运动矢量)和第二分像素运动矢量调整值(用于调整第二原始运动矢量)。
在一个例子中，可以根据最优运动矢量对应的代价值、与最优运动矢量对应的边缘运动矢量对应的代价值，确定第一分像素运动矢量调整值，然后，可以根据所述第一分像素运动矢量调整值确定第二分像素运动矢量调整值。例如，x0=N*(E(-1,0)-E(1,0))/(E(-1,0)+E(1,0)-2*E(0,0))，y0=N*(E(0,-1)-E(0,1))/(E(0,-1)+E(0,1)-2*E(0,0))，对于1/2、1/4、1/8和1/16的运动矢量像素精度，则N=1、2、4和8。然后，将(x0,y0)赋值给deltaMv，SPMV=deltaMv/(2*N)，若当前为1/16的运动矢量像素精度，则SPMV为(x0/16,y0/16)。
在上述公式中,SPMV可以是第一分像素运动矢量调整值,N可以与运动矢量像素精度有关,例如,运动矢量像素精度可以为1/2,N为1,运动矢量像素精度为1/4,N为2,运动矢量像素精度为1/8,N为4,运动矢量像素精度为1/16,N为8。
在上述公式中,E(0,0)表示最优运动矢量的代价值;E(-1,0)是以最优运动矢量为中心,最优运动矢量(0,0)的边缘运动矢量(-1,0)的代价值;E(1,0)是以最优运动矢量为中心,最优运动矢量(0,0)的边缘运动矢量(1,0)的代价值;E(0,-1)是以最优运动矢量为中心,最优运动矢量(0,0)的边缘运动矢量(0,-1)的代价值;E(0,1)是以最优运动矢量为中心,最优运动矢量(0,0)的边缘运动矢量(0,1)的代价值。针对各运动矢量的代价值,确定方式参见上述示例,在此不再赘述。
在采用上述方式确定第一分像素运动矢量调整值后,可以根据第一分像素运动矢量调整值确定第二分像素运动矢量调整值,第二分像素运动矢量调整值是第一分像素运动矢量调整值的对称值。例如,若第一分像素运动矢量调整值为(1,0),则第二分像素运动矢量调整值可以为(-1,0),即第一分像素运动矢量调整值(1,0)的对称值。
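上述分像素调整值的计算公式可以用如下Python代码示意（示意性草稿：函数名为本文假设；分母为0时返回0仅为本文假设的保护性处理，原文未涉及该情况）：

```python
def subpel_offset(e, n):
    """根据最优点及其上下左右边缘点的代价值计算分像素偏移SPMV。

    e 为代价值字典，键为相对最优整像素点的偏移 (dx, dy)；
    x0 = N*(E(-1,0)-E(1,0)) / (E(-1,0)+E(1,0)-2*E(0,0))，y0 同理。
    """
    def axis(neg, pos):
        denom = neg + pos - 2 * e[(0, 0)]
        if denom == 0:
            return 0  # 保护性处理(本文假设)
        return n * (neg - pos) / denom

    x0 = axis(e[(-1, 0)], e[(1, 0)])
    y0 = axis(e[(0, -1)], e[(0, 1)])
    # SPMV = deltaMv/(2*N)，1/16精度(N=8)时即 (x0/16, y0/16)
    return (x0 / (2 * n), y0 / (2 * n))
```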
步骤a8、根据第一整像素运动矢量调整值和/或第一分像素运动矢量调整值,对第一原始运动矢量进行调整,得到第一目标运动矢量。例如,第一目标运动矢量=第一原始运动矢量+第一整像素运动矢量调整值+第一分像素运动矢量调整值。
步骤a9、根据第二整像素运动矢量调整值和/或第二分像素运动矢量调整值,对第二原始运动矢量进行调整,得到第二目标运动矢量。例如,第二目标运动矢量=第二原始运动矢量+第二整像素运动矢量调整值+第二分像素运动矢量调整值。
实施例7:在上述实施例中,涉及针对当前块的每个子块,根据第一参考块的第一像素值和第二参考块的第二像素值,对第一原始运动矢量进行调整,得到子块的第一目标运动矢量;根据第一参考块的第一像素值和第二参考块的第二像素值,对第二原始运动矢量进行调整,得到子块的第二目标运动矢量。以一个子块(如当前块的每个dx*dy大小的子块)的处理过程为例,介绍原始运动矢量的调整过程。
本实施例中,将第一原始运动矢量记为Org_MV0,将第二原始运动矢量记为Org_MV1,对第一原始运动矢量Org_MV0进行调整后,得到的第一目标运动矢量记为Refined_MV0,对第二原始运动矢量Org_MV1进行调整后,得到的第二目标运动矢量记为Refined_MV1。
步骤b1、进行SR次迭代，获得最优的整像素MV点的整像素偏移，将其记为IntegerDeltaMV，IntegerDeltaMV就是上述实施例中的第一整像素运动矢量调整值。例如，先将IntegerDeltaMV初始化为(0,0)，每次迭代执行如下过程：
步骤b11、将deltaMV设为(0,0)。若为首次迭代，基于第一原始运动矢量在第一参考块中的参考像素，复制获得预测值块A1(即第一参考块最中心的dx*dy的块)；基于第二原始运动矢量在第二参考块中的参考像素，复制获得预测值块B1(即第二参考块最中心的dx*dy的块)。基于预测值块A1和预测值块B1获得初始代价值cost(初始代价值为基于预测值块A1和预测值块B1的SAD(绝对值差和，sum of absolute differences)，确定方式参见后续实施例)。若该初始代价值cost小于dx*dy，dx和dy是当前子块的宽度和高度，则直接跳过后续搜索过程，执行步骤b2，并将notZeroCost设为false。
步骤b12、如图6所示,以上述初始点为中心,按照{Mv(0,1),Mv(0,-1),Mv(1,0),Mv(-1,0),Mv(right,down)}的顺序得到五个偏移MV(这五个偏移MV均称为MVOffset),并进行这五个偏移MV的代价值的计算与比较过程。例如,基于某个MVOffset(如Mv(0,1)等),在第一参考块和第二参考块中,通过这个MVOffset获得两块预测值块(如第一参考块中进行中心位置偏移MVOffset的dx*dy块、第二参考块中进行中心位置偏移-MVOffset(与MVOffset相反)的dx*dy块),计算两个预测值块的下采样SAD作为MVOffset的代价值。
然后,保留代价值最小的MVOffset,将代价值最小的MVOffset更新为deltaMV的值,且代价值最小的MVOffset作为下一次迭代的新中心偏移点。
基于deltaMV更新IntegerDeltaMV的值,更新后的IntegerDeltaMV=更新前的IntegerDeltaMV+deltaMV,即在当前IntegerDeltaMV的基础上加上deltaMV。
步骤b13、经过迭代后,若最优MV仍为初始MV(即不是MVOffset)或者最小代价值为0,则不进行下一次迭代搜索过程,执行步骤b2,并将notZeroCost设为false。
否则,若迭代次数达到SR,则执行步骤b2,若迭代次数未达到SR,则可以将最优MV作为中心,进行下一次的迭代搜索过程,即返回步骤b11。
在迭代搜索过程结束后,得到IntegerDeltaMV的取值,即IntegerDeltaMV的最终取值,就是第一整像素运动矢量调整值,后续记为IntegerDeltaMV。
步骤b2、可以以步骤b1的最优整像素MV点为中心,获得最优的分像素偏移MV,记为SPMV,而SPMV就是上述实施例中的第一分像素运动矢量调整值。
例如,可以先将SPMV初始化为(0,0),然后执行如下过程:
步骤b21、只有notZeroCost不为false,且deltaMV为(0,0)时,才可以进行后续处理(即需要获取SPMV),否则,可以直接利用IntegerDeltaMV对原始运动矢量进行调整,而不是利用IntegerDeltaMV和SPMV对原始运动矢量进行调整。
步骤b22、将E(x,y)表示为步骤b1所得最优MV点偏移(x,y)的MV对应代价值（步骤b1计算的代价值）。基于中心及上下左右五个点的E(x,y)，可得E(x,y)最小的点的偏移(x0,y0)为：x0=N*(E(-1,0)-E(1,0))/(E(-1,0)+E(1,0)-2*E(0,0))，y0=N*(E(0,-1)-E(0,1))/(E(0,-1)+E(0,1)-2*E(0,0))。在一个例子中，对于1/2、1/4、1/8和1/16的运动矢量像素精度，则N=1、2、4和8。然后，可以将(x0,y0)赋值给deltaMv，SPMV=deltaMv/(2*N)，若当前为1/16的运动矢量像素精度，则SPMV可以为(x0/16,y0/16)。
若E(-1,0)=E(0,0),则水平向左偏移半个像素(deltaMv[0]=-N)。
若E(1,0)=E(0,0),则水平向右偏移半个像素(deltaMv[0]=N)。
若E(0,-1)=E(0,0),则垂直向上偏移半个像素(deltaMv[1]=-N)。
若E(0,1)=E(0,0),则垂直向下偏移半个像素(deltaMv[1]=N)。
基于上述处理,可以得到SPMV的取值,即第一分像素运动矢量调整值。
步骤b3、基于步骤b1的整像素偏移IntegerDeltaMV和步骤b2的分像素偏移SPMV,获得最优偏移MV,可以将这个最优偏移MV记为BestMVoffset。而且,BestMVoffset=IntegerDeltaMV+SPMV。基于BestMVoffset可以获得两个方向的目标运动矢量:Refined_MV0=Org_MV0+BestMVoffset;Refined_MV1=Org_MV1-BestMVoffset。
显然,BestMVoffset=IntegerDeltaMV+SPMV,即第一整像素运动矢量调整值与第一分像素运动矢量调整值的和。而且,-IntegerDeltaMV是IntegerDeltaMV的对称值,即第二整像素运动矢量调整值,-SPMV是SPMV的对称值,即第二分像素运动矢量调整值,因此,-BestMVoffset=(-IntegerDeltaMV)+(-SPMV)时,即第二整像素运动矢量调整值与第二分像素运动矢量调整值的和。
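步骤b3的合成过程可以用如下Python代码示意（示意性草稿，函数名为本文假设）：

```python
def refine_mvs(org_mv0, org_mv1, integer_delta, spmv):
    """BestMVoffset = IntegerDeltaMV + SPMV；
    Refined_MV0 = Org_MV0 + BestMVoffset；
    Refined_MV1 = Org_MV1 - BestMVoffset(两个方向偏移对称)。"""
    best = (integer_delta[0] + spmv[0], integer_delta[1] + spmv[1])
    mv0 = (org_mv0[0] + best[0], org_mv0[1] + best[1])
    mv1 = (org_mv1[0] - best[0], org_mv1[1] - best[1])
    return mv0, mv1
```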
实施例8:在一个例子中,为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例7类似,不同之处在于:将步骤b11中的“若该初始代价值cost小于dx*dy,则直接跳过后续搜索过程”移除,也就是说,即使初始代价值cost小于dx*dy,也不会“直接跳过后续搜索过程”,而是继续后续搜索过程,即需要执行步骤b12。
实施例9:为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例7类似,不同之处在于:将步骤b11中的“若该初始代价值cost小于dx*dy,则直接跳过后续搜索过程”移除,也就是说,即使初始代价值cost小于dx*dy,也不会“直接跳过后续搜索过程”,而是继续后续搜索过程,即需要执行步骤b12。将步骤b13中的“若最优MV仍为初始MV(即不是MVOffset)或者最小代价值为0,则不进行下一次迭代搜索过程”移除,也就是说,即使最优MV仍为初始MV或者最小代价值为0,也可以进行下一次迭代搜索过程。
实施例10:在一个例子中,为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例7类似,不同之处在于:将“notZeroCost”的相关过程移除,也就是说,在步骤b11和步骤b13中,不设置和保存notZeroCost的值。在步骤b21中,只要deltaMV为(0,0),就可以进行分像素偏移计算过程(即步骤b22),而不是只有当notZeroCost不为false、且deltaMV为(0,0)时,才可以进行分像素偏移计算过程。
实施例11：在一个例子中，为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1，实现方式与实施例7类似，不同之处在于：将步骤b21中的“只有notZeroCost不为false，且deltaMV为(0,0)时，才进行后续处理，否则，直接利用IntegerDeltaMV对原始运动矢量进行调整”，修改为“只有notZeroCost不为false，且当前最优整像素的上下左右相隔1个整像素的四个点的代价值已在步骤b1计算获得时，才进行后续处理，否则，直接利用IntegerDeltaMV对原始运动矢量进行调整”。在一个例子中，“后续处理”是指步骤b22的分像素偏移计算过程。
在一个例子中,步骤b22的分像素偏移计算过程,需要使用最优整像素的上下左右相隔1个整像素的四个点的代价值,因此,步骤b1中已经计算获得“最优整像素的上下左右相隔1个整像素的四个点的代价值”,可以是一个必要条件。
实施例12:为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例7类似,不同之处在于:将步骤b21中的“只有notZeroCost不为false,且deltaMV为(0,0)时,才进行后续处理,否则直接利用IntegerDeltaMV对原始运动矢量进行调整”,修改为“只要当前最优整像素的上下左右相隔1个整像素的四个点的代价值已在步骤b1计算获得时,才进行后续处理(即分像素偏移计算过程),否则利用IntegerDeltaMV对原始运动矢量进行调整”。
实施例13:为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例7类似,不同之处在于:将步骤b21中的“只有notZeroCost不为false,且deltaMV为(0,0)时,才进行后续处理,否则,直接利用IntegerDeltaMV对原始运动矢量进行调整”,修改为“若当前最优整像素的上下左右相隔1个整像素的四个点的代价值已在步骤b1计算获得时,才进行后续处理(步骤b22的分像素偏移计算过程),否则采用步骤b23进行处理”。
步骤b23、将当前最优整像素点MV_inter_org设为距离其最近的,且周围的上下左右相隔1个整像素的四个点的代价值已在步骤b1计算获得的整像素点MV_inter_nearest。然后,以MV_inter_nearest为中心,进行步骤b22的分像素偏移计算过程,也就是说,以MV_inter_nearest为中心获取SPMV。例如,若当前最优整像素点MV_inter_org的上下左右相隔1个整像素的四个点的代价值,没有全部在步骤b1计算获得,则从最优整像素点MV_inter_org的周围选择一个整像素点MV_inter_nearest,且整像素点MV_inter_nearest的上下左右相隔1个整像素的四个点的代价值,均已经在步骤b1计算获得。
然后，可以将整像素点MV_inter_nearest作为当前的最优整像素点，并以整像素点MV_inter_nearest为中心获取SPMV，具体获取方式参见步骤b22。在以整像素点MV_inter_nearest为中心获取SPMV时，参见步骤b22，在计算x0和y0时，x0和y0可以限制在[-2N,2N]的范围内。若x0/y0大于2N，则可以将x0/y0赋值为2N；若x0/y0小于-2N，则可以将x0/y0赋值为-2N。对于1/2、1/4、1/8和1/16的运动矢量像素精度，则N=1、2、4和8。
实施例14:在上述实施例中,需要确定与中心运动矢量对应的边缘运动矢量。如将中心运动矢量(x,y)向不同方向偏移S,顺序得到不同方向的边缘运动矢量(x,y+S)、边缘运动矢量(x,y-S)、边缘运动矢量(x+S,y)、边缘运动矢量(x-S,y)、边缘运动矢量(x+right,y+down)。或者,将中心运动矢量(x,y)向不同方向偏移S,顺序得到不同方向的边缘运动矢量(x,y-S)、边缘运动矢量(x,y+S)、边缘运动矢量(x-S,y)、边缘运动矢量(x+S,y)、边缘运动矢量(x+right,y+down)。例如,假设(x,y)为(0,0),S为1,则按照(0,1)、(0,-1)、(1,0)、(-1,0)、(right,down)的顺序,得到5个边缘运动矢量。或者,按照(0,-1)、(0,1)、(-1,0)、(1,0)、(right,down)的顺序,得到5个边缘运动矢量。
实施例15:在上述实施例中,边缘运动矢量(x+right,y+down)的默认值为(x-S,y-S)。若边缘运动矢量(x+S,y)的代价值小于边缘运动矢量(x-S,y)的代价值,则right为S(从-S修改为S);若边缘运动矢量(x,y+S)的代价值小于边缘运动矢量(x,y-S)的代价值,则down为S(从-S修改为S)。或者,若边缘运动矢量(x+S,y)的代价值小于或等于边缘运动矢量(x-S,y)的代价值,则right为S(从-S修改为S);若边缘运动矢量(x,y+S)的代价值小于或等于边缘运动矢量(x,y-S)的代价值,则down为S(从-S修改为S)。
例如，按照(0,1)、(0,-1)、(1,0)、(-1,0)、(right,down)的顺序，得到5个边缘运动矢量，(right,down)的默认值为(-1,-1)。若边缘运动矢量(1,0)的代价值小于边缘运动矢量(-1,0)的代价值，则right为1；若边缘运动矢量(0,1)的代价值小于边缘运动矢量(0,-1)的代价值，则down为1。或，若边缘运动矢量(1,0)的代价值小于或等于边缘运动矢量(-1,0)的代价值，则right为1；若边缘运动矢量(0,1)的代价值小于或等于边缘运动矢量(0,-1)的代价值，则down为1。又例如，按照(0,-1)、(0,1)、(-1,0)、(1,0)、(right,down)的顺序，得到5个边缘运动矢量，(right,down)的默认值为(-1,-1)。若边缘运动矢量(1,0)的代价值小于边缘运动矢量(-1,0)的代价值，则right为1；若边缘运动矢量(0,1)的代价值小于边缘运动矢量(0,-1)的代价值，则down为1。或，若边缘运动矢量(1,0)的代价值小于或等于边缘运动矢量(-1,0)的代价值，则right为1；若边缘运动矢量(0,1)的代价值小于或等于边缘运动矢量(0,-1)的代价值，则down为1。
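上述right/down的确定逻辑可以用如下Python代码示意（示意性草稿：函数名为本文假设；use_equal用于区分“小于”与“小于或等于”两种实施方式）：

```python
def choose_right_down(cost, s=1, use_equal=False):
    """根据(±S,0)与(0,±S)四个边缘点的代价值确定(right, down)，默认为(-S,-S)。

    cost 为代价值字典，键为边缘运动矢量相对中心的偏移。
    """
    cmp = (lambda a, b: a <= b) if use_equal else (lambda a, b: a < b)
    right = s if cmp(cost[(s, 0)], cost[(-s, 0)]) else -s
    down = s if cmp(cost[(0, s)], cost[(0, -s)]) else -s
    return right, down
```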
实施例16:在上述实施例中,涉及针对当前块的每个子块,根据第一参考块的第一像素值和第二参考块的第二像素值,对第一原始运动矢量进行调整,得到子块的第一目标运动矢量;根据第一参考块的第一像素值和第二参考块的第二像素值,对第二原始运动矢量进行调整,得到子块的第二目标运动矢量。以一个子块(如当前块的每个dx*dy大小的子块)的处理过程为例,介绍原始运动矢量的调整过程。
步骤c1、以初始运动矢量为中心,从初始运动矢量周围的包括该初始运动矢量的运动矢量中选择部分或全部运动矢量,将选择的运动矢量作为候选运动矢量。示例性的,初始运动矢量可以为第一原始运动矢量或者第二原始运动矢量。例如,可以以第一原始运动矢量为中心,从第一原始运动矢量周围的包括第一原始运动矢量的运动矢量中选择部分或全部运动矢量,作为候选运动矢量,对此选择方式参见后续实施例。或者,可以以第二原始运动矢量为中心,从第二原始运动矢量周围的包括第二原始运动矢量的运动矢量中选择部分或全部运动矢量,作为候选运动矢量,对此选择方式参见后续实施例。为了方便描述,后续实施例中,后续以第一原始运动矢量为中心为例,即初始运动矢量为第一原始运动矢量。
示例性的，可以以初始运动矢量为中心，从该初始运动矢量周围的包括该初始运动矢量的(2*SR+1)*(2*SR+1)个运动矢量中，选择部分或者全部运动矢量，并将选择的运动矢量确定为候选运动矢量；其中，所述SR为搜索范围。在从初始运动矢量周围的包括该初始运动矢量的(2*SR+1)*(2*SR+1)个运动矢量中，选择部分或者全部运动矢量，将选择的运动矢量确定为候选运动矢量时，运动矢量的搜索顺序可以包括从左到右，从上到下。
当SR为2时,则从初始运动矢量周围的包括初始运动矢量的25个运动矢量中选择全部运动矢量,将选择的运动矢量确定为候选运动矢量;运动矢量的搜索顺序依次为:{Mv(-2,-2),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(2,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-2,2),Mv(-1,2),Mv(0,2),Mv(1,2),Mv(2,2)}。或者,{Mv(0,0),Mv(-2,-2),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(2,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-2,2),Mv(-1,2),Mv(0,2),Mv(1,2),Mv(2,2)}。
当SR为2时,则从初始运动矢量周围的包括初始运动矢量的21个运动矢量中选择部分运动矢量,将选择的运动矢量确定为候选运动矢量;运动矢量的搜索顺序依次为:{Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-1,2),Mv(0,2),Mv(1,2)}。或者,{Mv(0,0),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-1,2),Mv(0,2),Mv(1,2)}。
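上述从左到右、从上到下的搜索顺序（SR=2时的25个候选偏移）可以用如下Python代码生成（示意性草稿，函数名为本文假设）：

```python
def candidate_offsets(sr=2):
    """以初始运动矢量为中心，按从左到右、从上到下的顺序
    生成 (2*SR+1)*(2*SR+1) 个候选偏移。"""
    return [(x, y) for y in range(-sr, sr + 1) for x in range(-sr, sr + 1)]
```

生成结果与上文{Mv(-2,-2),Mv(-1,-2),…,Mv(2,2)}的顺序一致。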
步骤c2、根据第一参考块的第一像素值和第二参考块的第二像素值,获取第一原始运动矢量(即初始运动矢量)对应的第三代价值、各个候选运动矢量对应的第四代价值。
例如,可以从第一参考块中复制获取第一原始运动矢量对应的子参考块A1,该子参考块A1可以是该第一原始运动矢量在第一参考块中的子参考块。然后,可以从第二参考块中复制获取第二原始运动矢量对应的子参考块B1,该子参考块B1是该第二原始运动矢量在第二参考块中的子参考块。然后,可以利用子参考块A1的第一像素值和子参考块B1的第二像素值,获取第一原始运动矢量对应的第三代价值。针对每个候选运动矢量,可以从第一参考块中复制获取候选运动矢量对应的子参考块A2,该子参考块A2是候选运动矢量在第一参考块中的子参考块。然后,从第二参考块中复制获取候选运动矢量的对称运动矢量对应的子参考块B2,该子参考块B2是对称运动矢量在第二参考块中的子参考块。利用子参考块A2的第一像素值和子参考块B2的第二像素值,获取候选运动矢量对应的第四代价值。
步骤c3、根据第三代价值和第四代价值,从第一原始运动矢量和各个候选运动矢量中选择一个运动矢量,并将选择的运动矢量确定为最优运动矢量。例如,可以将代价值最小的运动矢量(如第一原始运动矢量、或任意一个候选运动矢量)作为最优运动矢量。
步骤c4、根据最优运动矢量确定第一整像素运动矢量调整值(用于调整第一原始运动矢量)和第二整像素运动矢量调整值(用于调整第二原始运动矢量)。如根据最优运动矢量和第一原始运动矢量确定第一整像素运动矢量调整值,根据第一整像素运动矢量调整值确定第二整像素运动矢量调整值,第二整像素运动矢量调整值与第一整像素运动矢量调整值对称。
例如,假设最优运动矢量为(4,6),第一原始运动矢量为(4,4),则根据最优运动矢量(4,6)和第一原始运动矢量(4,4)确定第一整像素运动矢量调整值,第一整像素运动矢量调整值为最优运动矢量(4,6)与第一原始运动矢量(4,4)的差,即第一整像素运动矢量调整值为(0,2)。然后,根据第一整像素运动矢量调整值(0,2)确定第二整像素运动矢量调整值,第二整像素运动矢量调整值可以为(0,-2),即(0,2)的对称值。
步骤c5、根据最优运动矢量确定第一分像素运动矢量调整值(用于调整第一原始运动矢量)和第二分像素运动矢量调整值(用于调整第二原始运动矢量)。如根据最优运动矢量对应的代价值、与最优运动矢量对应的边缘运动矢量对应的代价值,确定第一分像素运动矢量调整值,然后,根据所述第一分像素运动矢量调整值确定第二分像素运动矢量调整值。
例如，x0=N*(E(-1,0)-E(1,0))/(E(-1,0)+E(1,0)-2*E(0,0))，y0=N*(E(0,-1)-E(0,1))/(E(0,-1)+E(0,1)-2*E(0,0))，对于1/2、1/4、1/8和1/16的运动矢量像素精度，则N=1、2、4和8。然后，将(x0,y0)赋值给deltaMv，SPMV=deltaMv/(2*N)，若当前为1/16的运动矢量像素精度，则SPMV为(x0/16,y0/16)。SPMV是第一分像素运动矢量调整值。E(0,0)表示最优运动矢量的代价值；E(-1,0)是以最优运动矢量为中心，最优运动矢量(0,0)的边缘运动矢量(-1,0)的代价值；E(1,0)是以最优运动矢量为中心，最优运动矢量(0,0)的边缘运动矢量(1,0)的代价值；E(0,-1)是以最优运动矢量为中心，最优运动矢量(0,0)的边缘运动矢量(0,-1)的代价值；E(0,1)是以最优运动矢量为中心，最优运动矢量(0,0)的边缘运动矢量(0,1)的代价值。针对各运动矢量的代价值，确定方式参见上述实施例。在采用上述方式确定第一分像素运动矢量调整值后，可以根据第一分像素运动矢量调整值确定第二分像素运动矢量调整值，第二分像素运动矢量调整值是第一分像素运动矢量调整值的对称值。例如，若第一分像素运动矢量调整值为(1,0)，则第二分像素运动矢量调整值为(-1,0)，即(1,0)的对称值。
步骤c6、根据第一整像素运动矢量调整值和/或第一分像素运动矢量调整值,对第一原始运动矢量进行调整,得到第一原始运动矢量对应的第一目标运动矢量。例如,第一目标运动矢量=第一原始运动矢量+第一整像素运动矢量调整值+第一分像素运动矢量调整值。
步骤c7、根据第二整像素运动矢量调整值和/或第二分像素运动矢量调整值,对第二原始运动矢量进行调整,得到第二原始运动矢量对应的第二目标运动矢量。例如,第二目标运动矢量=第二原始运动矢量+第二整像素运动矢量调整值+第二分像素运动矢量调整值。
实施例17:在上述实施例中,涉及针对当前块的每个子块,根据第一参考块的第一像素值和第二参考块的第二像素值,对第一原始运动矢量进行调整,得到子块的第一目标运动矢量;根据第一参考块的第一像素值和第二参考块的第二像素值,对第二原始运动矢量进行调整,得到子块的第二目标运动矢量。以一个子块(如当前块的每个dx*dy大小的子块)的处理过程为例,介绍原始运动矢量的调整过程。
可以将第一原始运动矢量记为Org_MV0,将第二原始运动矢量记为Org_MV1,将第一目标运动矢量记为Refined_MV0,将第二目标运动矢量记为Refined_MV1。
步骤d1、以第一原始运动矢量为中心，从第一原始运动矢量周围的包括第一原始运动矢量的(2*SR+1)*(2*SR+1)个点中，选择部分或者全部运动矢量。例如，若SR=2，则从第一原始运动矢量周围的包括第一原始运动矢量的25个点中选择部分或者全部运动矢量，将选择的这些运动矢量作为候选运动矢量。确定第一原始运动矢量的代价值，并确定每个候选运动矢量的代价值。将代价值最小的运动矢量作为最优运动矢量。与上述实施例的步骤b1相比，在步骤d1中，不需要进行迭代过程，即一次就可以选取所有待处理的候选运动矢量，而不是通过迭代过程，第一次迭代选取部分运动矢量，第二次迭代再选取部分运动矢量。基于此，由于可以一次性的选取所有待处理的候选运动矢量，因此，可以对这些候选运动矢量进行并行处理，得到每个候选运动矢量的代价值，从而能够减少计算复杂度，并提高编码性能。
步骤d2、根据最优运动矢量确定IntegerDeltaMV的取值,IntegerDeltaMV的最终取值就是第一整像素运动矢量调整值,对此确定方式不再赘述,可以参见上述实施例。
步骤d3、以最优运动矢量为中心,获得最优的分像素偏移MV,将最优的分像素偏移记为SPMV,而SPMV的取值就是第一分像素运动矢量调整值。
步骤d3的实现过程可以参见上述步骤b2,在此不再重复赘述。
步骤d4、基于IntegerDeltaMV和SPMV,获得BestMVoffset。例如,BestMVoffset=IntegerDeltaMV+SPMV。然后,可以基于BestMVoffset获得目标运动矢量:Refined_MV0=Org_MV0+BestMVoffset;Refined_MV1=Org_MV1-BestMVoffset。
实施例18:为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例16、实施例17类似。本实施例中,以原始运动矢量为中心,可以从原始运动矢量周围的包括该原始运动矢量的共(2*SR+1)*(2*SR+1)个点中,选择全部运动矢量。例如,若SR=2,则可以从原始运动矢量周围的包括该原始运动矢量的25个点中选择全部运动矢量,确定这些运动矢量的代价值,并确定每个运动矢量的代价值。将代价值最小的运动矢量作为最优运动矢量。
实施例19:为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例16、实施例17类似。本实施例中,由于一次性的选取所有待处理的候选运动矢量,因此,可以对这些候选运动矢量进行并行处理,得到每个候选运动矢量的代价值,从而减少计算复杂度,提高编码性能。本实施例中,以原始运动矢量为中心,从原始运动矢量周围的包括原始运动矢量的共(2*SR+1)*(2*SR+1)个点中,选择偏移不超过SR范围内的部分运动矢量。
例如,从包括原始运动矢量在内的(2*SR+1)*(2*SR+1)个点中,选择N个(N大于等于1,小于等于(2*SR+1)*(2*SR+1))候选点。然后,确定这N个点对应的运动矢量的代价值。示例性的,可以按一定顺序扫描这N个点的代价值,选择最小代价值的运动矢量作为最优运动矢量。若代价值相等则优先选取顺序靠前的候选点。示例性的,代价值的确定方式可以为:基于候选运动矢量获得的两个预测值的下采样SAD确定代价值。
在一个例子中,假设SR=2,则候选点可以为25个,针对这些候选点的顺序,可以采用从左到右,从上到下的顺序。参见图7A所示,这些候选点的顺序可以为:{Mv(-2,-2),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(2,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-2,2),Mv(-1,2),Mv(0,2),Mv(1,2),Mv(2,2)}。或者,参见图7B所示,这些候选点的顺序可以为:{Mv(0,0),Mv(-2,-2),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(2,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-2,2),Mv(-1,2),Mv(0,2),Mv(1,2),Mv(2,2)}。
确定这25个点的运动矢量对应的代价值,并按照上述顺序进行扫描,获得代价值最小的运动矢量作为最优偏移MV,利用最优偏移MV可以确定整像素运动矢量调整值和分像素运动矢量调整值,确定方式参见上述实施例,在此不再重复赘述。
在另一例子中,假设SR=2,候选点可以为21个,针对这些候选点的顺序,可以采用从左到右,从上到下的顺序。参见图7C所示,这些候选点的顺序为:{Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-1,2),Mv(0,2),Mv(1,2)}。或者,参见图7D所示,这些候选点的顺序为:{Mv(0,0),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(0,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-1,2),Mv(0,2),Mv(1,2)}。
确定这21个点的运动矢量对应的代价值,并按照上述顺序进行扫描,获得代价值最小的运动矢量作为最优偏移MV,利用最优偏移MV可以确定整像素运动矢量调整值和分像素运动矢量调整值,确定方式参见上述实施例,在此不再重复赘述。
在另一个例子中,假设SR=2,则候选点可以为25个,针对这些候选点的顺序,以运动矢量(0,0)为中心,采用距离中心从近到远的顺序。参见图7E所示,这些候选点的顺序可以为:{Mv(0,0),Mv(-1,0),Mv(0,-1),Mv(1,0),Mv(0,1),Mv(-1,1),Mv(-1,-1),Mv(1,-1),Mv(1,1),Mv(0,2),Mv(-2,0),Mv(0,-2),Mv(2,0),Mv(1,2),Mv(-1,2),Mv(-2,1),Mv(-2,-1),Mv(-1,-2),Mv(1,-2),Mv(2,-1),Mv(2,1),Mv(-2,2),Mv(-2,-2),Mv(2,-2),Mv(2,2)}。确定这25个点的运动矢量对应的代价值,并按照上述顺序进行扫描,获得代价值最小的运动矢量作为最优偏移MV,利用最优偏移MV可以确定整像素运动矢量调整值和分像素运动矢量调整值,确定方式参见上述实施例,在此不再重复赘述。
在另一个例子中,假设SR=2,则候选点可以为21个,针对这些候选点的顺序,以运动矢量(0,0)为中心,采用距离中心从近到远的顺序。参见图7F所示,这些候选点的顺序为:{Mv(0,0),Mv(-1,0),Mv(0,-1),Mv(1,0),Mv(0,1),Mv(-1,1),Mv(-1,-1),Mv(1,-1),Mv(1,1),Mv(0,2),Mv(-2,0),Mv(0,-2),Mv(2,0),Mv(1,2),Mv(-1,2),Mv(-2,1),Mv(-2,-1),Mv(-1,-2),Mv(1,-2),Mv(2,-1),Mv(2,1)}。确定这21个点的运动矢量对应的代价值,并按照上述顺序进行扫描,获得代价值最小的运动矢量作为最优偏移MV,利用最优偏移MV确定整像素运动矢量调整值和分像素运动矢量调整值,确定方式参见上述实施例。
在另一个例子中，假设SR=2，则候选点可以为13个，针对这些候选点的顺序，以运动矢量(0,0)为中心，采用距离中心从近到远的顺序。参见图7G所示，这些候选点的顺序为：{Mv(0,0),Mv(-1,0),Mv(0,-1),Mv(1,0),Mv(0,1),Mv(-1,1),Mv(-1,-1),Mv(1,-1),Mv(1,1),Mv(0,2),Mv(-2,0),Mv(0,-2),Mv(2,0)}。确定这13个点的运动矢量对应的代价值，并按照上述顺序进行扫描，获得代价值最小的运动矢量作为最优偏移MV，利用最优偏移MV可以确定整像素运动矢量调整值和分像素运动矢量调整值，确定方式参见上述实施例。
在上述实施例中,当第一个候选运动矢量为Mv(0,0)时,对第一个候选运动矢量Mv(0,0)的代价SAD(0,0)进行如下处理:SAD(0,0)=SAD(0,0)-SAD(0,0)/4,即将其强制减少1/4,针对其它候选运动矢量的代价SAD不进行上述处理。
在一种可能的实施方式中,在上述候选运动矢量的检验过程中,存在如下提前结束机制:
若第一个候选运动矢量(Mv(0,0))的代价SAD(0,0)小于阈值dx*dy,则不进行后续候选运动矢量的检验,即子块的最优整数像素偏移为Mv(0,0)。
若某个候选运动矢量的代价为0,则不进行后续候选运动矢量的检验,将当前的候选运动矢量作为最优整数像素偏移。
若在上述候选运动矢量的检验过程中,存在上述任何一个提前结束的情况,则不进行后续分像素偏移的计算过程,即直接通过整像素偏移获得子块的目标运动矢量。
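上述SAD(0,0)的强制减少与提前结束机制可以用如下Python代码示意（示意性草稿：函数名、返回值结构均为本文假设；第三个返回值表示是否因提前结束而跳过分像素偏移计算）：

```python
def scan_with_early_exit(offsets, sad_of, dx, dy):
    """按顺序检验候选偏移，返回(最优偏移, 最优代价, 是否提前结束)。

    1) 第一个候选Mv(0,0)的代价强制减少1/4；若小于dx*dy，直接结束；
    2) 某个候选的代价为0时，直接结束；
    提前结束时跳过后续分像素偏移的计算过程。
    """
    best_off, best_cost = None, None
    for idx, off in enumerate(offsets):
        cost = sad_of(off)
        if idx == 0 and off == (0, 0):
            cost -= cost // 4          # SAD(0,0) = SAD(0,0) - SAD(0,0)/4
            if cost < dx * dy:
                return off, cost, True
        if cost == 0:
            return off, cost, True
        if best_cost is None or cost < best_cost:
            best_off, best_cost = off, cost
    return best_off, best_cost, False
```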
实施例20:为了将第一原始运动矢量Org_MV0和第二原始运动矢量Org_MV1调整为第一目标运动矢量Refined_MV0和第二目标运动矢量Refined_MV1,实现方式与实施例16、实施例17类似。本实施例中,由于一次性的选取所有待处理的候选运动矢量,因此,可以对这些候选运动矢量进行并行处理,得到每个候选运动矢量的代价值,从而减少计算复杂度,提高编码性能。本实施例中,以原始运动矢量为中心,从(2*SR+1)*(2*SR+1)个点中,选择偏移不超过SR范围内的部分运动矢量。例如,从包括原始运动矢量在内的(2*SR+1)*(2*SR+1)个点中,选择N个(N大于等于1,小于等于(2*SR+1)*(2*SR+1))候选点。确定这N个点对应的运动矢量的代价值。按一定顺序扫描这N个点的代价值,选择最小代价值的运动矢量作为最优运动矢量。若代价值相等则优先选取顺序靠前的候选点。
与实施例19不同的是,实施例19的候选点的位置均是固定的,即与原始运动矢量无关,实施例20的候选点的位置与原始运动矢量相关,以下结合几个具体例子进行说明。
在一个例子中，假设SR=2，则候选点可以为13个，针对这些候选点的顺序，以运动矢量(0,0)为中心，采用距离中心从近到远的顺序。而且，在距离中心的第一层候选点，顺序与原始运动矢量的大小无关，而距离中心的第二层候选点，顺序与原始运动矢量的大小有关。这些候选点的顺序为：{Mv(0,0),Mv(-1,0),Mv(0,-1),Mv(1,0),Mv(0,1),Mv(-1,1),Mv(-1,-1),Mv(1,-1),Mv(1,1),Mv(sign_H*2,0),Mv(sign_H*2,sign_V*1),Mv(sign_H*1,sign_V*2),Mv(0,sign_V*2)}。第一原始运动矢量记为MV0，水平分量为MV0_Hor，垂直分量为MV0_Ver。若MV0_Hor大于等于0，则sign_H=1；否则sign_H=-1；若MV0_Ver大于等于0，则sign_V=1；否则sign_V=-1。确定这13个点的运动矢量对应的代价值，并按照上述顺序进行扫描，获得代价值最小的运动矢量作为最优偏移MV，利用最优偏移MV可以确定整像素运动矢量调整值和分像素运动矢量调整值，确定方式参见上述实施例。
在另一个例子中，假设SR=2，则候选点可以为13个，针对这些候选点的顺序，以运动矢量(0,0)为中心，采用距离中心从近到远的顺序。而且，在距离中心的第一层候选点，顺序与原始运动矢量的大小无关，而距离中心的第二层候选点，顺序与原始运动矢量的大小有关，这些候选点的顺序为：{Mv(0,0),Mv(-1,0),Mv(0,-1),Mv(1,0),Mv(0,1),Mv(-1,1),Mv(-1,-1),Mv(1,-1),Mv(1,1),Mv(sign_H*2,0),Mv(sign_H*2,sign_V*1),Mv(sign_H*1,sign_V*2),Mv(0,sign_V*2)}。第一原始运动矢量记为MV0，水平分量为MV0_Hor，垂直分量为MV0_Ver。若MV0_Hor大于0，则sign_H=1；否则sign_H=-1；若MV0_Ver大于0，则sign_V=1；否则sign_V=-1。确定这13个点的运动矢量对应的代价值，并按照上述顺序进行扫描，获得代价值最小的运动矢量作为最优偏移MV，利用最优偏移MV可以确定整像素运动矢量调整值和分像素运动矢量调整值，确定方式参见上述实施例。
实施例21:在上述实施例中,涉及根据第一参考块的第一像素值和第二参考块的第二像素值,获取中心运动矢量对应的第一代价值、边缘运动矢量对应的第二代价值。根据第一参考块的第一像素值和第二参考块的第二像素值,获取第一原始运动矢量对应的第三代价值、候选运动矢量对应的第四代价值。在一个例子中,根据未下采样的第一像素值和未下采样的第二像素值,获取中心运动矢量对应的第一代价值、边缘运动矢量对应的第二代价值、第一原始运动矢量对应的第三代价值、候选运动矢量对应的第四代价值。或者,对第一像素值进行下采样操作,对第二像素值进行下采样操作;根据下采样后的第一像素值和下采样后的第二像素值,获取中心运动矢量对应的第一代价值、边缘运动矢量对应的第二代价值、第一原始运动矢量对应的第三代价值、候选运动矢量对应的第四代价值。或者,对第一像素值进行移位和下采样操作,对第二像素值进行移位和下采样操作;然后,根据操作后的第一像素值和操作后的第二像素值,获取中心运动矢量对应的第一代价值、边缘运动矢量对应的第二代价值、第一原始运动矢量对应的第三代价值、候选运动矢量对应的第四代价值。
针对不同的情况,确定代价值的方式类似。例如,为了获取中心运动矢量对应的代价值,可以从第一参考块中复制获取中心运动矢量对应的子参考块A1,从第二参考块中复制获取中心运动矢量的对称运动矢量对应的子参考块B1,利用子参考块A1的第一像素值和子参考块B1的第二像素值,获取中心运动矢量对应的代价值。为了获取边缘运动矢量对应的代价值,可以从第一参考块中复制获取边缘运动矢量对应的子参考块A2,从第二参考块中复制获取边缘运动矢量的对称运动矢量对应的子参考块B2,利用子参考块A2的第一像素值和子参考块B2的第二像素值,获取边缘运动矢量对应的代价值,以此类推。
综上所述,为了获取运动矢量对应的代价值,可以从第一参考块中获取该运动矢量对应的子参考块,并从第二参考块中获取该运动矢量的对称运动矢量对应的子参考块,然后利用两个子参考块的像素值获取该运动矢量对应的代价值,对此过程不再赘述。
实施例22：在实施例21的基础上，根据未下采样的第一像素值（即第一参考块中的子参考块的未下采样的像素值）和未下采样的第二像素值（即第二参考块中的子参考块的未下采样的像素值），获取运动矢量对应的代价值。例如，假设第一参考块中的子参考块为pred0，第二参考块中的子参考块为pred1，则根据子参考块pred0和子参考块pred1的所有像素值的SAD确定代价值，不需要对子参考块pred0和子参考块pred1的像素进行垂直下采样。
基于子参考块pred0和子参考块pred1的所有像素值，代价值计算公式为：
cost = ∑_{i=1..W} ∑_{j=1..H} abs(pred0(i,j) - pred1(i,j))
在上述公式中，cost可以表示代价值，W可以为子参考块的宽度值，H可以为子参考块的高度值，pred0(i,j)可以表示子参考块pred0的第i列第j行的像素值，pred1(i,j)可以表示子参考块pred1的第i列第j行的像素值，abs(x)可以表示x的绝对值。
实施例23：在实施例21的基础上，可以对第一像素值进行下采样操作，对第二像素值进行下采样操作；可以根据下采样后的第一像素值（即第一参考块中的子参考块的下采样后的像素值）和下采样后的第二像素值（即第二参考块中的子参考块的下采样后的像素值），获取运动矢量对应的代价值。例如，假设第一参考块中的子参考块为pred0，第二参考块中的子参考块为pred1，则根据子参考块pred0和子参考块pred1的所有像素值的SAD确定代价值。在利用所有像素值的SAD确定代价值时，对子参考块pred0和子参考块pred1的像素值进行垂直N倍（N为大于0的整数，可以为2）下采样。
基于子参考块pred0和子参考块pred1的所有像素值，代价值计算公式为：
cost = ∑_{i=1..H/N} ∑_{j=1..W} abs(pred0(1+N(i-1),j) - pred1(1+N(i-1),j))
在上述公式中，cost可以表示代价值，W可以为子参考块的宽度值，H可以为子参考块的高度值，N可以表示下采样的参数，为大于0的整数，可以为2，pred0(1+N(i-1),j)可以表示子参考块pred0的第1+N(i-1)行第j列的像素值，pred1(1+N(i-1),j)可以表示子参考块pred1的第1+N(i-1)行第j列的像素值，abs(x)可以表示x的绝对值。
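上述SAD代价值（含垂直N倍下采样的实施方式）可以用如下Python代码示意（示意性草稿：函数名为本文假设；此处以pred[i][j]表示第i行第j列，n=1时即未下采样的SAD）：

```python
def sad_cost(pred0, pred1, n=1):
    """两个子参考块逐像素绝对差之和(SAD)。

    n > 1 时进行垂直n倍下采样，即只累加第1、n+1、2n+1…行。
    """
    h = len(pred0)
    w = len(pred0[0])
    cost = 0
    for i in range(0, h, n):       # 行方向按n步长采样
        for j in range(w):
            cost += abs(pred0[i][j] - pred1[i][j])
    return cost
```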
实施例24：在实施例21的基础上，对第一像素值进行移位和下采样操作，对第二像素值进行移位和下采样操作；根据操作后的第一像素值（第一参考块中的子参考块的移位和下采样后的像素值）和操作后的第二像素值（第二参考块中的子参考块的移位和下采样后的像素值），获取运动矢量对应的代价值。例如，假设第一参考块中的子参考块为pred0，第二参考块中的子参考块为pred1，pred0和pred1均采用D比特的存储方式，即，pred0中的每个像素值均采用D比特进行存储，pred1中的每个像素值均采用D比特进行存储。
若D小于等于8，则根据子参考块pred0和子参考块pred1的所有像素值的SAD确定代价值。在利用所有像素值的SAD确定代价值时，对子参考块pred0和子参考块pred1的像素值进行垂直N倍（N为大于0的整数，可以为2）下采样。基于子参考块pred0和子参考块pred1的所有像素值，代价值计算公式为：
cost = ∑_{i=1..H/N} ∑_{j=1..W} abs(pred0(1+N(i-1),j) - pred1(1+N(i-1),j))
在上述公式中，cost表示代价值，W为子参考块的宽度值，H为子参考块的高度值，N表示下采样的参数，为大于0的整数，可以为2，pred0(1+N(i-1),j)表示子参考块pred0的第1+N(i-1)行第j列的像素值，pred1(1+N(i-1),j)表示子参考块pred1的第1+N(i-1)行第j列的像素值，abs(x)表示x的绝对值。综上可以看出，即仅计算第1行，第N+1行，第2N+1行…的差的绝对值和。
若D大于8，先将子参考块pred0和子参考块pred1的所有像素值移位到8比特，获得8比特的pred0和8比特的pred1，记为pred0_8bit(i,j)和pred1_8bit(i,j)。其目的是为了节省SAD计算的存储代价，8比特的存储可以实现更高并行度。
pred0_8bit(i,j) = pred0(i,j) >> (D-8)，pred1_8bit(i,j) = pred1(i,j) >> (D-8)
然后，对8比特的pred0和8比特的pred1的像素值进行垂直N倍（N为大于0的整数，可以为2）下采样，这样，代价值的计算公式可以为：
cost = ∑_{i=1..H/N} ∑_{j=1..W} abs(pred0_8bit(1+N(i-1),j) - pred1_8bit(1+N(i-1),j))
在上述公式中,各个表达式的含义参见上述实施例,在此不再重复赘述。
实施例25:在上述实施例中,针对当前块的每个子块,根据该子块的第一目标运动矢量和第二目标运动矢量,确定该子块的预测值,并根据每个子块的预测值确定当前块的预测值。例如,基于子块的第一目标运动矢量和第二目标运动矢量,通过插值(如8抽头插值)获得两个方向的参考块(即第三参考块和第四参考块,可以包括三个分量的预测值,由于目标运动矢量可能为分像素,所以需要插值)。然后,根据第三参考块的第三像素值和第四参考块的第四像素值进行加权,得到最终的预测值(如三个分量的预测值)。
在一种可能的实施方式中,若最优运动矢量与初始运动矢量(即第一原始运动矢量或者第二原始运动矢量)相同,则基于子块的第一目标运动矢量,从第一参考帧中确定该子块对应的第三参考块;基于该子块的第二目标运动矢量,从第二参考帧中确定该子块对应的第四参考块。对该第三参考块的像素值和该第四参考块的像素值进行加权,得到该子块的预测值。
例如,假设子块的大小为dx*dy,基于第一目标运动矢量从第一参考帧中确定大小为dx*dy的第三参考块。例如,从第一参考帧中确定大小为A*B的参考块,A*B的大小与插值方式有关,如A大于dx,B大于dy,对此不做限制。通过对该参考块中的像素值进行插值,可以得到大小为dx*dy的第三参考块,对此插值方式不做限制。基于第二目标运动矢量从第二参考帧中确定大小为dx*dy的第四参考块。例如,从第二参考帧中确定大小为A*B的参考块,A*B的大小与插值方式有关,如A大于dx,B大于dy,对此不做限制。通过对该参考块中的像素值进行插值,可以得到大小为dx*dy的第四参考块,对此插值方式不做限制。
在另一种可能的实施方式中，若最优运动矢量与初始运动矢量不同，则可以从第一参考帧中确定第五参考块，并对该第五参考块进行扩展，得到第六参考块；然后，基于该子块的第一目标运动矢量，从该第六参考块中选择该子块对应的第三参考块。以及，可以从第二参考帧中确定第七参考块，并对该第七参考块进行扩展，得到第八参考块；基于该子块的第二目标运动矢量，从该第八参考块中选择该子块对应的第四参考块。然后，可以对该第三参考块的像素值和该第四参考块的像素值进行加权，得到该子块的预测值。
例如,假设子块的大小为dx*dy,基于第一原始运动矢量从第一参考帧中确定大小为dx*dy的第五参考块。例如,从第一参考帧中确定大小为A*B的参考块,A*B的大小与插值方式有关,如A大于dx,B大于dy,对此不做限制。通过对该参考块中的像素值进行插值,可以得到大小为dx*dy的第五参考块,对此插值方式不做限制。然后,对第五参考块进行扩展,得到第六参考块,例如,通过临近值拷贝的方式,对第五参考块进行上下左右的填充,将填充后的参考块作为第六参考块,第六参考块的大小可以大于dx*dy。然后,基于子块的第一目标运动矢量,从第六参考块中选择子块对应的大小为dx*dy的第三参考块。
假设子块的大小为dx*dy,基于第二原始运动矢量从第二参考帧中确定大小为dx*dy的第七参考块。例如,从第二参考帧中确定大小为A*B的参考块,A*B的大小与插值方式有关,如A大于dx,B大于dy,对此不做限制。通过对该参考块中的像素值进行插值,可以得到大小为dx*dy的第七参考块,对此插值方式不做限制。然后,可以对第七参考块进行扩展,得到第八参考块,例如,通过临近值拷贝的方式,对第七参考块进行上下左右的填充,将填充后的参考块作为第八参考块,第八参考块的大小可以大于dx*dy。然后,基于子块的第二目标运动矢量,从第八参考块中选择子块对应的大小为dx*dy的第四参考块。
实施例26:在获得目标运动矢量后,基于每个子块的目标运动矢量,通过8抽头插值滤波器获得两个方向的预测值(即YUV三个分量,即上述第三参考块的预测值和第四参考块的预测值),并加权获得最终的预测值。或者,基于每个子块的目标运动矢量,通过双线性插值滤波器(此处不再是8抽头插值滤波器)获得两个方向的预测值(即YUV三个分量,即上述第三参考块的预测值和第四参考块的预测值),并加权获得最终的预测值。
实施例27:在获得两个方向的预测值后,通过均值加权平均(即两个方向的预测值的权重相同),获得最终的预测值。或者,在获得两个方向的预测值后,通过加权平均获得最终的预测值,两个预测值的权值可以不同。例如,两个预测值的权值比例可以为1:2,1:3,2:1等。对于编码端,权重表中可以包括1:2,1:3,2:1等权值比例,编码端可以确定每个权值比例的代价值,并确定代价值最小的权值比例,这样,编码端可以基于代价值最小的权值比例,通过加权平均获得最终的预测值。编码端向解码端发送编码比特流时,该编码比特流携带权值比例在权重表中的索引值。这样,解码端通过解析编码比特流的索引值,从权重表中获取与该索引值对应的权值比例,基于权值比例通过加权平均获得最终的预测值。
在一个例子中,权重表可以包括但不限于{-2,3,4,5,10}。示例性的,两个权重的和可以为8,针对每个权重来说,权重可以是负值,只要两个权重的和为8即可。
例如,权重“-2”就是一个负值,当一个预测值的权重为-2时,另一个预测值的权重就是10,即两个权重的和为8,在此情况下,最终预测值=(预测值1*(-2)+预测值2*(8-(-2)))。
又例如,权重“10”表示一个预测值的权重为10,而另一个预测值的权重就是-2,即两个权重的和为8,在此情况下,最终预测值=(预测值1*(10)+预测值2*(-2))。
又例如,权重“3”表示一个预测值的权重为3,而另一个预测值的权重就是5,即两个权重的和为8,在此情况下,最终预测值=(预测值1*(3)+预测值2*(5))。
又例如,权重“5”表示一个预测值的权重为5,而另一个预测值的权重就是3,即两个权重的和为8,在此情况下,最终预测值=(预测值1*(5)+预测值2*(3))。
又例如,权重“4”表示一个预测值的权重为4,而另一个预测值的权重就是4,即两个权重的和为8,在此情况下,最终预测值=(预测值1*(4)+预测值2*(4))。
在一种可能的实施方式中,针对当前块的每个子块,参见上述实施例,可以得到第三参考块的第三像素值和第四参考块的第四像素值,然后,根据第三参考块的第三像素值和第四参考块的第四像素值进行加权,得到最终的预测值。例如,对第三像素值,第三像素值对应的第一权重,第四像素值,第四像素值对应的第二权重进行加权处理,得到该子块的预测值。若通过均值加权平均(即两个权重相同)获得最终的预测值,则第一权重与第二权重相同。
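上述两个方向预测值的加权过程可以用如下Python代码示意（示意性草稿：函数名为本文假设；原文给出的是加权求和的形式，此处假设最终按权重和8进行归一化取整）：

```python
def weighted_pred(p0, p1, w0, weight_table=(-2, 3, 4, 5, 10)):
    """两个方向预测值按权重加权；两个权重之和为8，权重可以为负值。

    w0 取自权重表{-2,3,4,5,10}，另一权重为 8-w0；
    按权重和8归一化(本文假设的归一化处理)。
    """
    assert w0 in weight_table
    w1 = 8 - w0
    return (p0 * w0 + p1 * w1) // 8
```

例如，权重比例为4:4时即均值加权平均；权重为-2时另一权重为10。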
实施例28:在上述实施例中,可以保存当前块的每个子块的第一目标运动矢量和第二目标运动矢量,或者,保存当前块的每个子块的第一原始运动矢量和第二原始运动矢量,或者,保存当前块的每个子块的第一原始运动矢量,第二原始运动矢量,第一目标运动矢量和第二目标运动矢量。针对保存的运动矢量,这些运动矢量可以用于后续块的编码/解码参考。
例如,以保存当前块的每个子块的第一目标运动矢量和第二目标运动矢量为例进行说明,第一目标运动矢量和第二目标运动矢量用于当前帧的环路滤波;第一目标运动矢量和第二目标运动矢量用于后续帧的时域参考;和/或,第一目标运动矢量和第二目标运动矢量用于当前帧的空域参考。例如,当前块的每个子块的第一目标运动矢量和第二目标运动矢量,可以用于当前块的运动补偿,也可以用于后续帧的时域参考。又例如,当前块的每个子块的第一目标运动矢量和第二目标运动矢量,可以用于当前块的运动补偿,也可以用于当前块的环路滤波过程,还可以用于后续帧的时域参考。又例如,当前块的每个子块的第一目标运动矢量和第二目标运动矢量,可以用于当前块的运动补偿,也可以用于当前块的环路滤波过程,还可以用于后续帧的时域参考,还可以用于当前帧的空域参考,以下对此进行说明。
当前块的每个子块的第一目标运动矢量和第二目标运动矢量,可以用于空域中某些LCU(Largest Coding Unit,最大编码单元)内的块的空域参考。由于编解码顺序是从上到下,从左到右,因此,当前块的运动矢量可以被当前LCU内的其他块参考,也可以被后续相邻LCU内的块参考。由于获得的目标运动矢量所需要的计算量较大,若后续块参考当前块的目标运动矢量,则需要等待较长时间。为了避免过多等待引起的时延,只允许少数空域相邻块参考当前块的目标运动矢量,其他块则参考当前块的原始运动矢量。
示例性的,这些少数的块包括位于当前LCU下侧的下侧LCU和右下侧LCU内的子块,而位于右侧LCU和左侧LCU内的子块,则不可参考当前块的目标运动矢量。
实施例29:以下结合一个具体例子,对运动矢量的调整过程进行说明。运动矢量调整的具体步骤可以如下,下文中的“复制”说明不需要插值即可获得,MV(即运动矢量)为整像素偏移则可直接从参考帧中复制,否则需要进行插值获得。
步骤e1、若针对当前块启动运动矢量调整模式,则执行下面的过程。
步骤e2、准备参考像素值(假设当前块的宽为W,高为H)。
准备用于步骤e3的整像素块:基于原始运动矢量(list0的原始运动矢量记为Org_MV0,list1的原始运动矢量记为Org_MV1),在对应参考帧的对应位置复制两块面积为(W+FS-1)*(H+FS-1)的三个分量的整像素块。此外,准备用于步骤e4的整像素块:在上述(W+FS-1)*(H+FS-1)的整像素块的基础上,将(W+FS-1)*(H+FS-1)的三个分量的整像素块,进行上下左右各SR行/列的扩展,扩展后得到面积为(W+FS-1+2*SR)*(H+FS-1+2*SR)的三个分量的整像素块,记为Pred_Inter0和Pred_Inter1,参考图8所示。示例性的,内层黑色区域的尺寸为当前块的尺寸,外拓白色区域为原始运动矢量进行8抽头滤波器插值所需额外参考像素,外层黑色区域为目标运动矢量进行8抽头滤波器插值所需的额外参考像素。
针对内层W*H的黑色区域和白色区域,是从参考帧中获取的像素值,针对外层黑色区域的像素值,不需要从参考帧中获取,而是可以采用拷贝相邻像素值的方式获得。在一个例子中,将白色区域的第一行W+FS-1个像素值,复制给外层黑色区域的前SR行的像素值。将白色区域的最后一行W+FS-1个像素值,复制给外层黑色区域的最后SR行的像素值。然后,将白色区域第一列的H+FS-1个像素值以及上下各SR个已获得的外层黑色区域的像素值,复制给外层黑色区域的前SR列的像素值。将白色区域最后一列的H+FS-1个像素值及上下各SR个已获得的外层黑色区域的像素值,复制给外层黑色的最后SR列的像素值。在另一个例子中,将白色区域的第一列H+FS-1个像素值,复制给外层黑色区域的前SR列的像素值。将白色区域的最后一列H+FS-1个像素值,复制给外层黑色区域的最后SR列的像素值。然后,将白色区域第一行的W+FS-1个像素值及左右各SR个已获得的外层黑色区域的像素值,复制给外层黑色区域的前SR行的像素值。将白色区域的最后一行的W+FS-1个像素值及左右各SR个已获得的外层黑色区域的像素值,复制给外层黑色区域最后SR行的像素值。
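上述通过拷贝相邻像素值进行上下左右各SR行/列扩展的过程，可以用如下Python代码示意（示意性草稿：函数名为本文假设，对应“先复制首行/末行，再复制首列/末列”的实施方式）：

```python
def pad_block(block, sr=2):
    """对整像素块进行上下左右各sr行/列的扩展(复制相邻像素值)。

    block 为二维列表(行×列)，返回扩展后的二维列表。
    """
    # 先在上下各复制sr行(首行/末行)
    rows = [list(block[0])] * sr + [list(r) for r in block] + [list(block[-1])] * sr
    # 再在左右各复制sr列(首列/末列，含已扩展的行)
    out = []
    for r in rows:
        out.append([r[0]] * sr + r + [r[-1]] * sr)
    return out
```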
基于两个不同方向的运动信息进行第一次运动补偿。如对亮度分量（因为后续搜索过程用亮度分量计算代价值），基于两块面积为(W+FS-1)*(H+FS-1)的整像素参考块，通过双线性插值获得两块尺寸为(W+2*SR)*(H+2*SR)的初始参考预测值（记为Pred_Bilinear0和Pred_Bilinear1），FS为滤波器抽头数，默认为8，SR为搜索范围，即目标运动矢量与原始运动矢量的最大水平/竖直分量差值，默认为2。Pred_Bilinear0/1用于步骤e3的使用。
步骤e3、对于当前块的每个dx*dy子块,分别获得目标运动矢量(两个方向的目标运动矢量分别记为Refined_MV0和Refined_MV1)。
步骤e31,进行SR次迭代,获得最优的整像素MV点的整像素偏移,记为IntegerDeltaMV,将IntegerDeltaMV初始化为(0,0),每次迭代执行如下过程:
步骤e311，将deltaMV设为(0,0)。若为首次迭代过程，则基于原始运动矢量在参考像素Pred_Bilinear0/1中，复制获得两块预测值块（其实就是Pred_Bilinear0/1最中心的W*H的块），基于这两个预测值块，获得初始代价值，也就是两个方向预测值块的垂直2倍下采样后的SAD。若该初始代价值小于dx*dy，dx和dy是当前子块的宽度和高度，则直接跳过后续搜索过程，执行步骤e32，并将notZeroCost设为false。
步骤e312,以上述初始点为中心,按照{Mv(-2,-2),Mv(-1,-2),Mv(0,-2),Mv(1,-2),Mv(2,-2),Mv(-2,-1),Mv(-1,-1),Mv(0,-1),Mv(1,-1),Mv(2,-1),Mv(-2,0),Mv(-1,0),Mv(1,0),Mv(2,0),Mv(-2,1),Mv(-1,1),Mv(0,1),Mv(1,1),Mv(2,1),Mv(-2,2),Mv(-1,2),Mv(0,2),Mv(1,2),Mv(2,2)}的顺序得到24个偏移MV(这24个偏移MV均称为MVOffset),并进行这些偏移MV的代价值的计算与比较过程。例如,基于某MVOffset,在参考像素Pred_Bilinear0/1中,通过MVOffset获得两块预测值块(即Pred_Bilinear0中进行中心位置偏移MVOffset的W*H的块,Pred_Bilinear1中进行中心位置偏移-MVOffset(与list0相反)的W*H的块),计算这两个块的下采样SAD作为MVOffset的代价值。保留代价值最小的MVOffset(存于deltaMV)。
基于deltaMV值更新IntegerDeltaMV:IntegerDeltaMV=deltaMV。
步骤e313,经过一次迭代后,若最优MV仍为初始MV或者最小代价值为0,则不进行下一次迭代搜索过程,执行步骤e32,并将notZeroCost设为false。
步骤e32、可以以步骤e31的最优整像素MV点为中心,获得最优的分像素偏移MV,记为SPMV(即subMV),将SPMV初始化为(0,0),然后执行如下过程:
步骤e321、只有notZeroCost不为false,且deltaMV为(0,0)时,才进行后续处理,否则,直接利用IntegerDeltaMV对原始运动矢量进行调整。
步骤e322、将E(x,y)表示为步骤e31所得最优MV点偏移(x,y)的MV对应代价值（步骤e31计算的代价值）。基于中心及上下左右五个点的E(x,y)，可得E(x,y)最小的点的偏移(x0,y0)为：x0=N*(E(-1,0)-E(1,0))/(E(-1,0)+E(1,0)-2*E(0,0))，y0=N*(E(0,-1)-E(0,1))/(E(0,-1)+E(0,1)-2*E(0,0))。在一个例子中，对于1/2、1/4、1/8和1/16的运动矢量像素精度，则N=1、2、4和8。然后，可以将(x0,y0)赋值给deltaMv，SPMV=deltaMv/(2*N)，若当前为1/16的运动矢量像素精度，则SPMV可以为(x0/16,y0/16)。
If E(-1,0) = E(0,0), shift half a pixel to the left horizontally (deltaMv[0] = -N).
If E(1,0) = E(0,0), shift half a pixel to the right horizontally (deltaMv[0] = N).
If E(0,-1) = E(0,0), shift half a pixel upward vertically (deltaMv[1] = -N).
If E(0,1) = E(0,0), shift half a pixel downward vertically (deltaMv[1] = N).
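The step e322 formula is a per-axis parabola fit through the three costs on that axis, with the equal-cost special cases above taking priority. A sketch with N = 8 (1/16-pel precision); the function name `subpel_offset`, the dict-of-costs input, and the use of floor integer division are illustrative simplifications, not from the patent:

```python
def subpel_offset(e, n=8):
    # e: dict mapping integer offsets (x, y) -> cost E(x, y) around the best
    # integer point; n = 8 corresponds to 1/16-pel MV precision, so the
    # returned (x0, y0) is deltaMv in 1/16-pel units (SPMV = deltaMv/(2*n)).
    def axis(minus, plus):
        if minus == e[(0, 0)]:
            return -n  # cost flat toward the negative side: half-pel shift
        if plus == e[(0, 0)]:
            return n   # cost flat toward the positive side: half-pel shift
        # Parabola vertex of (minus, E(0,0), plus), rounded down.
        return n * (minus - plus) // (minus + plus - 2 * e[(0, 0)])
    x0 = axis(e[(-1, 0)], e[(1, 0)])
    y0 = axis(e[(0, -1)], e[(0, 1)])
    return x0, y0

# Symmetric costs around the center give a zero sub-pixel offset.
costs = {(0, 0): 10, (-1, 0): 30, (1, 0): 30, (0, -1): 22, (0, 1): 22}
assert subpel_offset(costs) == (0, 0)

# Lower cost on the left pulls x0 negative; the vertical axis stays at 0.
costs = {(0, 0): 10, (-1, 0): 14, (1, 0): 30, (0, -1): 22, (0, 1): 22}
x0, y0 = subpel_offset(costs)
assert x0 < 0 and y0 == 0
```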
Step e33: based on the integer-pixel offset IntegerDeltaMV of step e31 and the sub-pixel offset SPMV of step e32, the optimal offset MV, denoted BestMVoffset, is obtained: BestMVoffset = IntegerDeltaMV + SPMV. Based on BestMVoffset, the target motion vectors in the two directions can be obtained: Refined_MV0 = Org_MV0 + BestMVoffset; Refined_MV1 = Org_MV1 - BestMVoffset.
Step e4: based on the target motion vectors of each sub-block, 8-tap interpolation is performed to obtain the prediction values of the three components in the two directions, which are weighted to obtain the final prediction value (e.g., the prediction values of the three components). For example, based on the target motion vectors Refined_MV0 and Refined_MV1 of each sub-block, the corresponding prediction blocks are obtained by interpolation from Pred_Inter0/1 prepared in step e2 (the motion vector may be sub-pixel, so interpolation is needed to obtain the corresponding pixel block).
Step e5: the target motion vectors are used for the motion compensation of the current block (i.e., for obtaining the prediction value of each sub-block and of the current block) and for the temporal reference of subsequent frames, but are not used for the in-loop filtering or the spatial reference of the current frame.
Embodiment 30: different from Embodiment 29, the preparation of reference pixels is moved into each dx*dy sub-block. When preparing the reference pixels, only a pixel block of (dx+(filtersize-1))*(dy+(filtersize-1)) is prepared; if the optimal motion vector found by the search is not the original motion vector, the reference pixels are extended, otherwise no extension is performed. For each dx*dy sub-block of the current block, the target motion vectors are obtained separately, motion compensation is performed based on the target motion vectors, and weighting yields the final prediction value. The following process applies to each dx*dy sub-block of the current block:
Step f1: if the motion vector adjustment mode is enabled for the current block, the following process is performed.
Step f2: prepare the integer-pixel blocks for step f3. For example, for the luma component only: based on the original motion vectors (the original motion vector of list0 is denoted Org_MV0, and that of list1 is denoted Org_MV1), two integer-pixel blocks of area (dx+(filtersize-1))*(dy+(filtersize-1)) are obtained from the corresponding positions of the corresponding reference frames.
Exemplarily, filtersize may be the number of filter taps, 8 by default.
Step f3: for each dx*dy sub-block of the current block, obtain the target motion vectors (the target motion vectors in the two directions are denoted Refined_MV0 and Refined_MV1, respectively).
Exemplarily, the implementation of step f3 may refer to step e3 and is not repeated here.
For example, first motion compensation is performed based on the original motion vectors. For the luma component only, an initial prediction value of size (dx+2*IterNum)*(dy+2*IterNum) is obtained based on bilinear interpolation. IterNum is 2 by default; IterNum may be the search range SR, i.e., the maximum horizontal/vertical component difference between the target motion vector and the original motion vector. The initial prediction values obtained for the original motion vectors are stored in m_cYuvPredTempL0/1.
Cost values are computed for 25 points to obtain the integer-pixel offset of the optimal integer-pixel MV point. For the first point (MV offset (0, 0)), the initial cost is obtained (the cost is the SAD of the two directional prediction values after vertical 2:1 downsampling); if this cost is less than dx*dy, the subsequent search process is skipped directly (notZeroCost is set to false). Centered on the above initial point, the costs of 24 points are computed and compared, and the point with the smallest cost is retained as the new center point for the next step. Taking the optimal integer-pixel MV point as the center, the optimal 1/16-pixel sub-pixel offset is obtained. Based on the integer-pixel offset and the sub-pixel offset, the optimal offset MV is obtained, denoted BestMVoffset: BestMVoffset = IntegerDeltaMV + SPMV. Based on BestMVoffset, the target motion vectors in the two directions can be obtained: Refined_MV0 = Org_MV0 + BestMVoffset; Refined_MV1 = Org_MV1 - BestMVoffset.
Step f4: if the optimal offset MV is (0, 0), the following steps are not performed (i.e., no additional extension is performed when the original motion vectors are used). If the optimal offset MV is not (0, 0), the integer pixels are re-fetched (since the above steps performed no reference-pixel extension, the reference pixels needed after the offset exceed the range of the reference pixels obtained above), and the following steps are performed:
For the reference frame of list0 and the reference frame of list1, this is performed for the U/V components separately (since step f2 already fetched the luma component): integer-pixel values of (dxc+(filtersizeC-1))*(dyc+(filtersizeC-1)) are fetched from the reference frame, where dxc and dyc are related to the sampling rate; for a YUV 4:2:0 sampling rate, dxc = dx/2 and dyc = dy/2. Of course, this is only an example, and dxc and dyc are not limited thereto. filtersizeC may be 4; again, this is only an example, and filtersizeC is not limited thereto. As another example, integer-pixel values of dx*dy may be fetched directly from the reference frame, without limitation.
For the reference frame of list0 and the reference frame of list1, padding is performed for each of the three components. For example, by copying the nearest edge values, padding is applied on the top, bottom, left and right of the integer-pixel values obtained in the above steps (e.g., the (dxc+(filtersizeC-1))*(dyc+(filtersizeC-1)) integer-pixel values); for instance, the padding width is 2 for the luma component and 1 for the 4:2:0 chroma components. Exemplarily, the integer-pixel values available around the current sub-block (inside the current CU block) are not used here.
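Under the example parameters above (filtersizeC = 4, 4:2:0 sampling, padding widths of 2 for luma and 1 for chroma), the sizes fetched and padded in step f4 for a 16x16 sub-block work out as follows. This is an illustrative reading of the text, not a normative size table:

```python
dx, dy = 16, 16                 # sub-block dimensions (example values)
filtersize, filtersizeC = 8, 4  # luma / chroma filter taps

dxc, dyc = dx // 2, dy // 2     # chroma dimensions at 4:2:0 sampling
chroma_fetch = (dxc + filtersizeC - 1, dyc + filtersizeC - 1)
assert chroma_fetch == (11, 11)

# Edge-padding widths: 2 for the luma component, 1 for 4:2:0 chroma.
luma_padded = (dx + filtersize - 1 + 2 * 2, dy + filtersize - 1 + 2 * 2)
chroma_padded = (chroma_fetch[0] + 2 * 1, chroma_fetch[1] + 2 * 1)
assert luma_padded == (27, 27)
assert chroma_padded == (13, 13)
```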
Step f5: based on the target motion vectors of each sub-block and the two reference pixel blocks (obtained in step f4), 8-tap interpolation is performed to obtain the prediction values of the three components in the two directions, which are weighted to obtain the final prediction value (e.g., the prediction values of the three components).
Embodiment 31: the above embodiments may be implemented individually or in any combination, without limitation.
For example, Embodiment 4 may be combined with Embodiment 2; Embodiment 4 may be combined with Embodiment 3.
Embodiment 5 may be combined with Embodiment 2; Embodiment 5 may be combined with Embodiments 2 and 4; Embodiment 5 may be combined with Embodiment 3; Embodiment 5 may be combined with Embodiments 3 and 4.
Embodiment 6 may be implemented individually; Embodiment 7 may be implemented individually; Embodiment 8 may be combined with Embodiment 7; Embodiment 9 may be combined with Embodiment 7; Embodiment 10 may be combined with Embodiment 7; Embodiment 11 may be combined with Embodiment 7; Embodiment 12 may be combined with Embodiment 7; Embodiment 13 may be combined with Embodiment 7; Embodiment 14 may be combined with Embodiment 7; Embodiment 15 may be combined with Embodiment 7.
Embodiment 16 may be implemented individually; Embodiment 17 may be implemented individually; Embodiment 18 may be combined with Embodiment 17; Embodiment 19 may be combined with Embodiment 17; Embodiment 20 may be combined with Embodiment 17.
Embodiment 21 may be combined with Embodiment 6, with Embodiment 16, with Embodiment 7, or with Embodiment 17; Embodiment 22 may be combined with Embodiment 21; Embodiment 23 may be combined with Embodiment 21; Embodiment 24 may be combined with Embodiment 21.
Embodiment 25 may be combined with Embodiment 2; with Embodiments 2 and 4; with Embodiment 3; or with Embodiments 3 and 4.
Embodiment 26 may be combined with Embodiment 25; Embodiment 27 may be combined with Embodiment 25.
Embodiment 28 may be combined with Embodiment 2; with Embodiments 2 and 4; with Embodiment 3; or with Embodiments 3 and 4.
Embodiment 29 may be implemented individually, or combined with Embodiment 4. Embodiment 30 may be implemented individually, or combined with Embodiment 4. Of course, these are only a few examples of the present application and are not limiting; all embodiments of the present application may be implemented individually or in combination, which will not be elaborated further.
Embodiment 32:
Based on the same application concept as the above methods, an embodiment of the present application further provides an encoding and decoding apparatus applied to the encoding side or the decoding side. Fig. 9A is a structural diagram of the apparatus, which includes:
a determining module 911, configured to determine to enable the motion vector adjustment mode for the current block if all of the following conditions are satisfied:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is the merge mode or the skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
the prediction value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are identical;
both reference frames of the current block are short-term reference frames;
the width, height and area of the current block are all within limited ranges;
the sizes of the two reference frames of the current block are both identical to the size of the current frame;
and a motion compensation module 912, configured to perform motion compensation on the current block if it is determined to enable the motion vector adjustment mode for the current block.
The motion compensation module 912 is specifically configured to, for each sub-block of at least one sub-block included in the current block:
determine a first reference block corresponding to the sub-block according to a first original motion vector of the sub-block, and determine a second reference block corresponding to the sub-block according to a second original motion vector of the sub-block; adjust the first original motion vector and the second original motion vector according to first pixel values of the first reference block and second pixel values of the second reference block, to obtain a first target motion vector corresponding to the first original motion vector and a second target motion vector corresponding to the second original motion vector; and determine a prediction value of the sub-block according to the first target motion vector and the second target motion vector;
and determine the prediction value of the current block according to the prediction value of each sub-block.
The determining module 911 is further configured to determine not to enable the motion vector adjustment mode for the current block if any one of the following conditions is not satisfied: the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is the merge mode or the skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
the prediction value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are identical;
both reference frames of the current block are short-term reference frames;
the width, height and area of the current block are all within limited ranges;
the sizes of the two reference frames of the current block are both identical to the size of the current frame.
The control information allowing the current block to use the motion vector adjustment mode includes: sequence-level control information allowing the current block to use the motion vector adjustment mode; and/or frame-level control information allowing the current block to use the motion vector adjustment mode.
The width, height and area of the current block all being within limited ranges includes: the width is greater than or equal to a first threshold, the height is greater than or equal to a second threshold, and the area is greater than or equal to a third threshold; or, the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than a fourth threshold, where the third threshold is greater than the fourth threshold.
The first threshold is 8, the second threshold is 8, the third threshold is 128, and the fourth threshold is 64.
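The two size conditions above can be written out directly with the example thresholds. A sketch (the function name `size_in_range` and the `area_rule` switch are illustrative, not from the patent):

```python
def size_in_range(width, height, area_rule="ge_third"):
    # Example thresholds from the text: first = 8, second = 8,
    # third = 128, fourth = 64 (third > fourth, as required).
    first, second, third, fourth = 8, 8, 128, 64
    if width < first or height < second:
        return False
    area = width * height
    # "ge_third": area >= third threshold; "gt_fourth": area > fourth threshold.
    return area >= third if area_rule == "ge_third" else area > fourth

assert size_in_range(8, 16)                                  # area 128 >= 128
assert not size_in_range(8, 8)                               # area 64 < 128
assert size_in_range(8, 8, area_rule="gt_fourth") is False   # 64 > 64 is False
assert size_in_range(16, 8, area_rule="gt_fourth")           # 128 > 64
```

Note that an 8x8 block is rejected under both variants, so the two rules differ only for shapes whose area falls strictly between the fourth and third thresholds.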
When determining the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block and determining the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block, the motion compensation module 912 is specifically configured to:
determine, based on the first original motion vector of the sub-block, the first reference block corresponding to the sub-block from a first reference frame, where the pixel value of each pixel in the first reference block is obtained by interpolating pixel values of adjacent pixels in the first reference block, or by copying pixel values of adjacent pixels in the first reference block;
determine, based on the second original motion vector of the sub-block, the second reference block corresponding to the sub-block from a second reference frame, where the pixel value of each pixel in the second reference block is obtained by interpolating pixel values of adjacent pixels in the second reference block, or by copying pixel values of adjacent pixels in the second reference block.
When adjusting the first original motion vector and the second original motion vector according to the first pixel values of the first reference block and the second pixel values of the second reference block to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector, the motion compensation module 912 is specifically configured to:
take an initial motion vector as the center, select some or all of the motion vectors surrounding, and including, the initial motion vector, and determine the selected motion vectors as candidate motion vectors, where the initial motion vector is the first original motion vector or the second original motion vector;
select, according to the first pixel values of the first reference block and the second pixel values of the second reference block, one motion vector from the initial motion vector and the candidate motion vectors as an optimal motion vector;
adjust the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjust the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
When adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, the motion compensation module 912 is specifically configured to:
determine, according to the optimal motion vector, a first integer-pixel motion vector adjustment value, a second integer-pixel motion vector adjustment value, a first sub-pixel motion vector adjustment value and a second sub-pixel motion vector adjustment value of the sub-block;
adjust the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block;
adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
If the optimal motion vector is the same as the initial motion vector, when determining the prediction value of the sub-block according to the first target motion vector and the second target motion vector, the motion compensation module 912 is specifically configured to:
determine, based on the first target motion vector of the sub-block, a third reference block corresponding to the sub-block from the first reference frame;
determine, based on the second target motion vector of the sub-block, a fourth reference block corresponding to the sub-block from the second reference frame;
weight pixel values of the third reference block and pixel values of the fourth reference block to obtain the prediction value of the sub-block.
If the optimal motion vector is different from the initial motion vector, when determining the prediction value of the sub-block according to the first target motion vector and the second target motion vector, the motion compensation module 912 is specifically configured to:
determine a fifth reference block from the first reference frame, and extend the fifth reference block to obtain a sixth reference block; and based on the first target motion vector of the sub-block, select the third reference block corresponding to the sub-block from the sixth reference block;
determine a seventh reference block from the second reference frame, and extend the seventh reference block to obtain an eighth reference block; and based on the second target motion vector of the sub-block, select the fourth reference block corresponding to the sub-block from the eighth reference block;
weight pixel values of the third reference block and pixel values of the fourth reference block to obtain the prediction value of the sub-block.
When weighting the pixel values of the third reference block and the pixel values of the fourth reference block to obtain the prediction value of the sub-block, the motion compensation module 912 is specifically configured to: perform weighting on the pixel values of the third reference block, a first weight corresponding to the pixel values of the third reference block, the pixel values of the fourth reference block, and a second weight corresponding to the pixel values of the fourth reference block, to obtain the prediction value of the sub-block, where the first weight is the same as the second weight.
For the decoding-side device provided by the embodiments of the present application, from the hardware level, a schematic diagram of its hardware architecture may be seen in Fig. 9B. It includes a processor 921 and a machine-readable storage medium 922, where the machine-readable storage medium 922 stores machine-executable instructions executable by the processor 921; and the processor 921 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor is configured to execute the machine-executable instructions to implement the following steps:
determining to enable the motion vector adjustment mode for the current block if all of the following conditions are satisfied:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is the merge mode or the skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
the prediction value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are identical;
both reference frames of the current block are short-term reference frames;
the width, height and area of the current block are all within limited ranges;
the sizes of the two reference frames of the current block are both identical to the size of the current frame;
and performing motion compensation on the current block if it is determined to enable the motion vector adjustment mode for the current block.
For the encoding-side device provided by the embodiments of the present application, from the hardware level, a schematic diagram of its hardware architecture may be seen in Fig. 9C. It includes a processor 931 and a machine-readable storage medium 932, where the machine-readable storage medium 932 stores machine-executable instructions executable by the processor 931; and the processor 931 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor is configured to execute the machine-executable instructions to implement the following steps:
determining to enable the motion vector adjustment mode for the current block if all of the following conditions are satisfied:
the control information allows the current block to use the motion vector adjustment mode;
the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is the merge mode or the skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
the prediction value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
the weighting weights of the two reference frames of the current block are identical;
both reference frames of the current block are short-term reference frames;
the width, height and area of the current block are all within limited ranges;
the sizes of the two reference frames of the current block are both identical to the size of the current frame;
and performing motion compensation on the current block if it is determined to enable the motion vector adjustment mode for the current block.
Based on the same application concept as the above methods, an embodiment of the present application further provides a machine-readable storage medium storing a number of computer instructions which, when executed by a processor, can implement the encoding and decoding methods disclosed in the above examples of the present application. The machine-readable storage medium may be any electronic, magnetic, optical or other physical storage device capable of containing or storing information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disk (such as an optical disc or a DVD), a similar storage medium, or a combination thereof.
Based on the same application concept as the above methods, an embodiment of the present application further provides a computer program product including computer instructions which, when executed by a processor, can implement the encoding and decoding methods disclosed in the above examples of the present application.
Based on the same application concept as the above methods, an embodiment of the present application further provides an encoding and decoding system including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions executable by the processor. When the machine-executable instructions are executed by the processor, the encoding and decoding methods disclosed in the above examples of the present application can be implemented.
The systems, apparatuses, modules or units described in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer, which may specifically take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet computer, a wearable device, or any combination of several of these devices. For convenience of description, the above apparatus is described by dividing its functions into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code. The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems) and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Moreover, these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus which implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above are only embodiments of the present application and are not intended to limit the present application. Various modifications and variations of the present application are possible for those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (15)

  1. An encoding and decoding method, characterized in that the method comprises:
    determining to enable a motion vector adjustment mode for a current block if all of the following conditions are satisfied:
    control information allows the current block to use the motion vector adjustment mode;
    a prediction mode of the current block is a normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
    a prediction value of the current block is obtained by weighting reference blocks from two reference frames, one of the two reference frames precedes the current frame and the other of the two reference frames follows the current frame, and the two reference frames are at the same distance from the current frame;
    weighting weights of the two reference frames of the current block are identical;
    both reference frames of the current block are short-term reference frames;
    a width, a height and an area of the current block are all within limited ranges;
    sizes of the two reference frames of the current block are both identical to a size of the current frame;
    and performing motion compensation on the current block if it is determined to enable the motion vector adjustment mode for the current block.
  2. The method according to claim 1, characterized in that performing motion compensation on the current block comprises:
    for each sub-block of at least one sub-block comprised in the current block:
    determining a first reference block corresponding to the sub-block according to a first original motion vector of the sub-block, and determining a second reference block corresponding to the sub-block according to a second original motion vector of the sub-block; adjusting the first original motion vector and the second original motion vector according to first pixel values of the first reference block and second pixel values of the second reference block, to obtain a first target motion vector corresponding to the first original motion vector and a second target motion vector corresponding to the second original motion vector; and determining a prediction value of the sub-block according to the first target motion vector and the second target motion vector;
    and determining the prediction value of the current block according to the prediction value of each sub-block.
  3. The method according to claim 1, characterized in that the method further comprises:
    determining not to enable the motion vector adjustment mode for the current block if any one of the following conditions is not satisfied:
    the control information allows the current block to use the motion vector adjustment mode;
    the prediction mode of the current block is the normal merge mode; or, the prediction mode of the current block is the merge mode or the skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
    the prediction value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
    the weighting weights of the two reference frames of the current block are identical;
    both reference frames of the current block are short-term reference frames;
    the width, height and area of the current block are all within the limited ranges;
    the sizes of the two reference frames of the current block are both identical to the size of the current frame.
  4. The method according to claim 1 or 3, characterized in that
    the control information allowing the current block to use the motion vector adjustment mode comprises: frame-level control information allowing the current block to use the motion vector adjustment mode.
  5. The method according to claim 1 or 3, characterized in that
    the width, height and area of the current block all being within limited ranges comprises:
    the width is greater than or equal to a first threshold, the height is greater than or equal to a second threshold, and the area is greater than or equal to a third threshold.
  6. The method according to claim 5, characterized in that
    the first threshold is 8, the second threshold is 8, and the third threshold is 128.
  7. The method according to claim 2, characterized in that
    determining the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block and determining the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block comprises:
    determining, based on the first original motion vector of the sub-block, the first reference block corresponding to the sub-block from a first reference frame, wherein the pixel value of each pixel in the first reference block is obtained by interpolating pixel values of adjacent pixels in the first reference block, or by copying pixel values of adjacent pixels in the first reference block;
    determining, based on the second original motion vector of the sub-block, the second reference block corresponding to the sub-block from a second reference frame, wherein the pixel value of each pixel in the second reference block is obtained by interpolating pixel values of adjacent pixels in the second reference block, or by copying pixel values of adjacent pixels in the second reference block.
  8. The method according to claim 2 or 7, characterized in that
    adjusting the first original motion vector and the second original motion vector according to the first pixel values of the first reference block and the second pixel values of the second reference block to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector comprises:
    taking an initial motion vector as a center, selecting some or all of the motion vectors surrounding, and including, the initial motion vector, and determining the selected motion vectors as candidate motion vectors, wherein the initial motion vector is the first original motion vector or the second original motion vector;
    selecting, according to the first pixel values of the first reference block and the second pixel values of the second reference block, one motion vector from the initial motion vector and the candidate motion vectors as an optimal motion vector;
    adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
  9. The method according to claim 8, characterized in that
    adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector comprises:
    determining, according to the optimal motion vector, a first integer-pixel motion vector adjustment value, a second integer-pixel motion vector adjustment value, a first sub-pixel motion vector adjustment value and a second sub-pixel motion vector adjustment value of the sub-block;
    adjusting the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block;
    adjusting the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
  10. The method according to claim 8, characterized in that, if the optimal motion vector is the same as the initial motion vector, determining the prediction value of the sub-block according to the first target motion vector and the second target motion vector comprises:
    determining, based on the first target motion vector of the sub-block, a third reference block corresponding to the sub-block from the first reference frame;
    determining, based on the second target motion vector of the sub-block, a fourth reference block corresponding to the sub-block from the second reference frame;
    weighting pixel values of the third reference block and pixel values of the fourth reference block to obtain the prediction value of the sub-block.
  11. The method according to claim 8, characterized in that, if the optimal motion vector is different from the initial motion vector, determining the prediction value of the sub-block according to the first target motion vector and the second target motion vector comprises:
    determining a fifth reference block from the first reference frame, and extending the fifth reference block to obtain a sixth reference block; and based on the first target motion vector of the sub-block, selecting a third reference block corresponding to the sub-block from the sixth reference block;
    determining a seventh reference block from the second reference frame, and extending the seventh reference block to obtain an eighth reference block; and based on the second target motion vector of the sub-block, selecting a fourth reference block corresponding to the sub-block from the eighth reference block;
    weighting pixel values of the third reference block and pixel values of the fourth reference block to obtain the prediction value of the sub-block.
  12. The method according to claim 10 or 11, characterized in that weighting the pixel values of the third reference block and the pixel values of the fourth reference block to obtain the prediction value of the sub-block comprises:
    performing weighting on the pixel values of the third reference block, a first weight corresponding to the pixel values of the third reference block, the pixel values of the fourth reference block, and a second weight corresponding to the pixel values of the fourth reference block, to obtain the prediction value of the sub-block, wherein the first weight is the same as the second weight.
  13. An encoding and decoding apparatus, characterized in that the apparatus comprises:
    a determining module, configured to determine to enable a motion vector adjustment mode for a current block if all of the following conditions are satisfied:
    control information allows the current block to use the motion vector adjustment mode;
    a prediction mode of the current block is a normal merge mode; or, the prediction mode of the current block is a merge mode or a skip mode, and the prediction mode of the current block is not any mode other than the normal merge mode;
    a prediction value of the current block is obtained by weighting reference blocks from two reference frames, one of the two reference frames precedes the current frame and the other of the two reference frames follows the current frame, and the two reference frames are at the same distance from the current frame;
    weighting weights of the two reference frames of the current block are identical;
    both reference frames of the current block are short-term reference frames;
    a width, a height and an area of the current block are all within limited ranges;
    sizes of the two reference frames of the current block are both identical to a size of the current frame;
    and a motion compensation module, configured to perform motion compensation on the current block if it is determined to enable the motion vector adjustment mode for the current block.
  14. An encoding-side device, characterized by comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor;
    and the processor is configured to execute the machine-executable instructions to implement the encoding and decoding method according to any one of claims 1-12.
  15. A decoding-side device, characterized by comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor;
    and the processor is configured to execute the machine-executable instructions to implement the encoding and decoding method according to any one of claims 1-12.
PCT/CN2020/124304 2019-11-05 2020-10-28 Encoding and decoding method and apparatus, and device thereof WO2021088695A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020227010788A KR20220050227A (ko) 2019-11-05 2020-10-28 Encoding and decoding method, apparatus and device thereof
US17/766,210 US12114005B2 (en) 2019-11-05 2020-10-28 Encoding and decoding method and apparatus, and devices
JP2022520621A JP7527359B2 (ja) 2019-11-05 2020-10-28 Encoding and decoding method, apparatus and device
JP2024060914A JP2024081785A (ja) 2019-11-05 2024-04-04 Encoding and decoding method, apparatus and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911072766.XA CN112770113B (zh) 2019-11-05 2019-11-05 Encoding and decoding method and apparatus, and device thereof
CN201911072766.X 2019-11-05

Publications (1)

Publication Number Publication Date
WO2021088695A1 true WO2021088695A1 (zh) 2021-05-14

Family

ID=74036798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/124304 WO2021088695A1 (zh) 2019-11-05 2020-10-28 一种编解码方法、装置及其设备

Country Status (5)

Country Link
US (1) US12114005B2 (zh)
JP (2) JP7527359B2 (zh)
KR (1) KR20220050227A (zh)
CN (3) CN112135127B (zh)
WO (1) WO2021088695A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12081751B2 (en) * 2021-04-26 2024-09-03 Tencent America LLC Geometry partition mode and merge mode with motion vector difference signaling
CN113411581B (zh) * 2021-06-28 2022-08-05 Spreadtrum Communications (Shanghai) Co., Ltd. Motion compensation method and system for video sequences, storage medium and terminal
CN113938690B (zh) * 2021-12-03 2023-10-31 Beijing Dajia Internet Information Technology Co., Ltd. Video encoding method and apparatus, electronic device and storage medium
KR20240142301A (ko) * 2023-03-21 2024-09-30 KT Corporation Image encoding/decoding method and recording medium storing a bitstream

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060120613A1 (en) * 2004-12-07 2006-06-08 Sunplus Technology Co., Ltd. Method for fast multiple reference frame motion estimation
CN105578197A (zh) * 2015-12-24 2016-05-11 Fuzhou Rockchip Electronics Co., Ltd. Master control system for implementing inter-frame prediction
CN109391814A (zh) * 2017-08-11 2019-02-26 Huawei Technologies Co., Ltd. Video image encoding and decoding method, apparatus and device
CN109495746A (zh) * 2018-11-07 2019-03-19 Jianhu Yunfei Data Technology Co., Ltd. Video encoding method based on motion vector adjustment
CN110312132A (zh) * 2019-03-11 2019-10-08 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method, apparatus and device thereof

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003281133A1 (en) * 2002-07-15 2004-02-02 Hitachi, Ltd. Moving picture encoding method and decoding method
CN101299799B (zh) * 2008-06-13 2011-11-09 Vimicro Corporation Image detection and repair method, and image detection and repair apparatus
CN102387360B (zh) * 2010-09-02 2016-05-11 LG Electronics (China) R&D Center Co., Ltd. Inter-frame image prediction method for video encoding/decoding and video codec
US8736767B2 (en) * 2010-09-29 2014-05-27 Sharp Laboratories Of America, Inc. Efficient motion vector field estimation
KR20120118780A (ko) 2011-04-19 2012-10-29 Samsung Electronics Co., Ltd. Method and apparatus for encoding motion vectors of multi-view video, and method and apparatus for decoding the same
KR20120140592A (ko) * 2011-06-21 2012-12-31 Electronics and Telecommunications Research Institute Method and apparatus for reducing computational complexity of motion compensation and increasing coding efficiency
CN110650336B (zh) * 2012-01-18 2022-11-29 Electronics and Telecommunications Research Institute Video decoding apparatus, video encoding apparatus, and method of transmitting a bitstream
CN104427345B (zh) * 2013-09-11 2019-01-08 Huawei Technologies Co., Ltd. Motion vector acquisition method and apparatus, video codec and method thereof
WO2015149698A1 (en) * 2014-04-01 2015-10-08 Mediatek Inc. Method of motion information coding
CN105338362B (zh) * 2014-05-26 2018-10-19 Fujitsu Limited Moving object detection method and moving object detection apparatus
KR101908249B1 (ko) * 2014-11-18 2018-10-15 MediaTek Inc. Method of bi-prediction video coding based on motion vectors from uni-prediction and merge candidates
JP2017011458A (ja) * 2015-06-19 2017-01-12 Fujitsu Limited Encoded data generation program, encoded data generation method, and encoded data generation apparatus
WO2017034089A1 (ko) * 2015-08-23 2017-03-02 LG Electronics Inc. Inter-prediction-mode-based image processing method and apparatus therefor
EP3355578B1 (en) * 2015-09-24 2020-12-09 LG Electronics Inc. Motion vector predictor derivation and candidate list construction
WO2017147765A1 (en) * 2016-03-01 2017-09-08 Mediatek Inc. Methods for affine motion compensation
US10271062B2 (en) * 2016-03-18 2019-04-23 Google Llc Motion vector prediction through scaling
EP3264769A1 (en) * 2016-06-30 2018-01-03 Thomson Licensing Method and apparatus for video coding with automatic motion information refinement
CN106101716B (zh) * 2016-07-11 2019-05-07 Peking University Video frame rate up-conversion method
CN110100440B (zh) * 2016-12-22 2023-04-25 KT Corporation Method for decoding and encoding video
EP3343925A1 (en) * 2017-01-03 2018-07-04 Thomson Licensing Method and apparatus for encoding and decoding motion information
US10291928B2 (en) * 2017-01-10 2019-05-14 Blackberry Limited Methods and devices for inter-prediction using motion vectors for video coding
CN109089119B (zh) * 2017-06-13 2021-08-13 Zhejiang University Motion vector prediction method and device
WO2019009498A1 (ko) * 2017-07-03 2019-01-10 LG Electronics Inc. Inter-prediction-mode-based image processing method and apparatus therefor
WO2019072370A1 (en) * 2017-10-09 2019-04-18 Huawei Technologies Co., Ltd. MEMORY ACCESS WINDOW AND FILLING FOR VECTOR MOVEMENT REFINEMENT
CN117336504A (zh) * 2017-12-31 2024-01-02 Huawei Technologies Co., Ltd. Image prediction method and apparatus, and codec
CN111886867B (zh) * 2018-01-09 2023-12-19 Sharp Corporation Motion vector derivation apparatus, moving picture decoding apparatus, and moving picture encoding apparatus
CN110324623B (zh) * 2018-03-30 2021-09-07 Huawei Technologies Co., Ltd. Bidirectional inter-frame prediction method and apparatus
CN111971966A (zh) * 2018-03-30 2020-11-20 Electronics and Telecommunications Research Institute Image encoding/decoding method and device, and recording medium storing a bitstream
CN110891176B (zh) * 2018-09-10 2023-01-13 Huawei Technologies Co., Ltd. Motion vector prediction method and device based on an affine motion model
CN110891180B (zh) * 2018-09-10 2023-11-17 Huawei Technologies Co., Ltd. Video decoding method and video decoder
CN111107354A (zh) * 2018-10-29 2020-05-05 Huawei Technologies Co., Ltd. Video picture prediction method and apparatus
US11758125B2 (en) * 2019-01-02 2023-09-12 Lg Electronics Inc. Device and method for processing video signal by using inter prediction
JP2022527701A (ja) * 2019-03-20 2022-06-03 Huawei Technologies Co., Ltd. Method and apparatus for prediction refinement with optical flow for affine-coded blocks
WO2020242238A1 (ko) * 2019-05-29 2020-12-03 Electronics and Telecommunications Research Institute Image encoding/decoding method and apparatus, and recording medium storing a bitstream
JP7471328B2 (ja) 2019-06-21 2024-04-19 Huawei Technologies Co., Ltd. Encoder, decoder, and corresponding methods
CN110213590B (zh) * 2019-06-25 2022-07-12 Zhejiang Dahua Technology Co., Ltd. Method and device for temporal motion vector acquisition, inter-frame prediction, and video encoding
CN113596460A (zh) 2019-09-23 2021-11-02 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method, apparatus and device
US11683517B2 (en) * 2020-11-23 2023-06-20 Qualcomm Incorporated Block-adaptive search range and cost factors for decoder-side motion vector (MV) derivation techniques


Also Published As

Publication number Publication date
US20240073437A1 (en) 2024-02-29
JP2024081785A (ja) 2024-06-18
CN112135127B (zh) 2021-09-21
KR20220050227A (ko) 2022-04-22
US12114005B2 (en) 2024-10-08
CN112770113A (zh) 2021-05-07
JP7527359B2 (ja) 2024-08-02
CN112135126B (zh) 2021-09-21
JP2022550592A (ja) 2022-12-02
CN112770113B (zh) 2024-08-23
CN112135127A (zh) 2020-12-25
CN112135126A (zh) 2020-12-25

Similar Documents

Publication Publication Date Title
WO2020182162A1 (zh) Encoding and decoding method and apparatus, encoding-side device and decoding-side device
WO2021088695A1 (zh) Encoding and decoding method and apparatus, and device thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20883767; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20227010788; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2022520621; Country of ref document: JP; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 17766210; Country of ref document: US)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20883767; Country of ref document: EP; Kind code of ref document: A1)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17-05-2023))
122 Ep: pct application non-entry in european phase (Ref document number: 20883767; Country of ref document: EP; Kind code of ref document: A1)