WO2021088695A1 - Encoding and decoding method, apparatus and device - Google Patents
Encoding and decoding method, apparatus and device
- Publication number
- WO2021088695A1 (PCT/CN2020/124304)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- block
- sub
- value
- pixel
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 158
- 230000033001 locomotion Effects 0.000 claims abstract description 1316
- 239000013598 vector Substances 0.000 claims abstract description 1215
- 230000004927 fusion Effects 0.000 claims abstract description 93
- 238000012545 processing Methods 0.000 claims description 27
- 238000003860 storage Methods 0.000 claims description 27
- 230000008569 process Effects 0.000 description 74
- 238000010586 diagram Methods 0.000 description 20
- 238000004364 calculation method Methods 0.000 description 17
- 230000002457 bidirectional effect Effects 0.000 description 11
- 238000004590 computer program Methods 0.000 description 9
- 238000005070 sampling Methods 0.000 description 8
- 238000001914 filtration Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 230000007774 longterm Effects 0.000 description 6
- 238000006073 displacement reaction Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 238000012804 iterative process Methods 0.000 description 4
- 238000013139 quantization Methods 0.000 description 4
- 230000009466 transformation Effects 0.000 description 4
- 238000010187 selection method Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 238000012795 verification Methods 0.000 description 2
- 239000006227 byproduct Substances 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
Definitions
- This application relates to the field of coding and decoding technologies, and in particular to a coding and decoding method, device and equipment.
- a complete video encoding method can include processes such as prediction, transformation, quantization, entropy encoding, and filtering.
- predictive coding includes intra-frame coding and inter-frame coding.
- Inter-frame coding exploits temporal correlation in video: the pixels of the current image are predicted from the pixels of adjacent coded images, effectively removing temporal redundancy from the video.
- a motion vector (Motion Vector, MV)
- a motion search can be performed in the reference frame B to find the block B1 that best matches the current block A1 (i.e., the reference block), and the relative displacement between the current block A1 and the reference block B1 is determined; this relative displacement is the motion vector of the current block A1.
- the encoding end may send the motion vector to the decoding end, instead of sending the current block A1 to the decoding end, the decoding end may obtain the current block A1 according to the motion vector and the reference block B1. Obviously, since the number of bits occupied by the motion vector is less than the number of bits occupied by the current block A1, a large number of bits can be saved.
- when the current block is a unidirectional block, after obtaining the motion vector of the current block (hereinafter referred to as the original motion vector), the original motion vector can be adjusted, and encoding/decoding is performed based on the adjusted motion vector, thereby improving coding performance.
- when the current block is a bidirectional block, after obtaining the first original motion vector and the second original motion vector of the current block, there is currently no reasonable solution for how to adjust the first original motion vector and the second original motion vector. In other words, for scenes with bidirectional blocks, there are problems such as poor prediction quality and prediction errors, resulting in poor coding performance.
- This application provides an encoding and decoding method, device and equipment, which can improve encoding performance.
- This application provides an encoding and decoding method, which includes:
- the control information is to allow the current block to use the motion vector adjustment mode
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting reference blocks from two reference frames, where in display order one of the two reference frames precedes the current frame and the other follows it, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- motion compensation is performed on the current block.
- the present application provides a coding and decoding device, the device includes:
- the determining module is used to determine to start the motion vector adjustment mode for the current block if the following conditions are all met:
- the control information is to allow the current block to use the motion vector adjustment mode
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting reference blocks from two reference frames, where in display order one of the two reference frames precedes the current frame and the other follows it, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- the motion compensation module is used to perform motion compensation on the current block if it is determined to start the motion vector adjustment mode for the current block.
- the present application provides an encoding terminal device, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
- the processor is used to execute machine executable instructions to implement the following steps:
- the control information is to allow the current block to use the motion vector adjustment mode
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting reference blocks from two reference frames, where in display order one of the two reference frames precedes the current frame and the other follows it, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- motion compensation is performed on the current block.
- the present application provides a decoding end device, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
- the processor is used to execute machine executable instructions to implement the following steps:
- the control information is to allow the current block to use the motion vector adjustment mode
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting reference blocks from two reference frames, where in display order one of the two reference frames precedes the current frame and the other follows it, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- motion compensation is performed on the current block.
- the first target motion vector and the second target motion vector are obtained according to the first original motion vector and the second original motion vector.
- the predicted value is determined according to the first target motion vector and the second target motion vector, instead of according to the first original motion vector and the second original motion vector, which solves the problems of poor prediction quality and prediction errors and improves coding performance and coding efficiency.
- FIG. 1A is a schematic diagram of interpolation in an embodiment of the present application.
- FIG. 1B is a schematic diagram of a video coding framework in an implementation manner of the present application.
- FIG. 2 is a flowchart of an encoding and decoding method in an embodiment of the present application
- FIG. 3 is a flowchart of an encoding and decoding method in an embodiment of the present application.
- FIG. 4 is a flowchart of an encoding and decoding method in an embodiment of the present application.
- FIG. 5 is a schematic diagram of a reference block obtained in an implementation manner of the present application.
- Fig. 6 is a schematic diagram of motion vector iteration in an embodiment of the present application.
- FIGS. 7A-7G are schematic diagrams of the sequence of candidate points in an embodiment of the present application.
- FIG. 8 is a schematic diagram of extending a reference block in an implementation manner of the present application.
- FIG. 9A is a structural diagram of a coding and decoding device in an embodiment of the present application.
- FIG. 9B is a hardware structure diagram of a decoding end device in an embodiment of the present application.
- Fig. 9C is a hardware structure diagram of an encoding terminal device in an embodiment of the present application.
- first information may also be referred to as second information
- similarly, second information may also be referred to as first information
- An encoding and decoding method, device, and equipment proposed in the embodiments of the present application may involve the following concepts:
- Intra-frame prediction refers to the use of spatial correlation within a video frame: the current pixel is predicted from the pixels of coded blocks of the current image, achieving the purpose of removing spatial redundancy from the video.
- Inter-frame prediction refers to the use of temporal correlation in video. Since video sequences usually contain strong temporal correlation, predicting the pixels of the current image from adjacent encoded image pixels effectively removes temporal redundancy from the video.
- the inter-frame prediction part of the main video coding standards adopts block-based motion compensation technology. The main principle is to find a best matching block in a previously encoded image for each pixel block of the current image; this process is called motion estimation.
- Motion Vector: In inter-frame coding, a motion vector is used to represent the relative displacement between the current block and the best matching block in the reference image. Each divided block has a corresponding motion vector that is transmitted to the decoding end. If the motion vector of each block is independently coded and transmitted, especially for small block sizes, many bits are consumed. To reduce the number of bits used to code the motion vector, the spatial correlation between adjacent image blocks is exploited: the motion vector of the current block is predicted from the motion vectors of adjacent coded blocks, and then only the prediction difference is coded, effectively reducing the number of bits representing the motion vector. Specifically, when coding the motion vector of the current block, the difference between the motion vector and its prediction (Motion Vector Prediction, MVP), called the Motion Vector Difference (MVD), is coded, effectively reducing the number of coded bits.
- MVP Motion Vector Prediction
- MVD Motion Vector Difference
- Motion Information: Since the motion vector only gives the position offset between the current block and a reference block, index information of the reference frame image is also required in order to accurately locate the referenced image block, i.e., to indicate which reference frame image is used.
- a reference frame image list can be established, and the reference frame image index information indicates which reference frame image in the reference frame image list is used in the current block.
- Many coding technologies also support multiple reference image lists. Therefore, an index value can be used to indicate which reference image list is used, and this index value is called the reference direction.
- Motion-related information such as motion vector, reference frame index, reference direction, etc. may be collectively referred to as motion information.
- Interpolation: If the current motion vector has non-integer pixel accuracy, the required pixel values cannot be copied directly from the reference frame corresponding to the current block; they can only be obtained by interpolation. Referring to FIG. 1A, the pixel value Y 1/2 at an offset of 1/2 pixel is obtained by interpolating the surrounding integer pixel values X. Exemplarily, if an interpolation filter with N taps is used, N surrounding integer pixels are needed for the interpolation.
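The half-pel interpolation of Fig. 1A can be sketched as follows. The patent does not specify filter coefficients; the 8-tap coefficients below follow the HEVC luma half-sample filter and are shown purely for illustration:

```python
# Half-pel interpolation with an N-tap filter (here N = 8): the value half a
# pixel to the right of integer position i is a weighted sum of the 8
# surrounding integer pixels. Coefficients sum to 64, so the result is
# normalised by a rounded shift of 6.
TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]  # illustrative 8-tap half-pel filter

def interp_half_pel(x, i):
    """Interpolate at position i + 1/2 using integer pixels x[i-3 .. i+4]."""
    acc = sum(t * x[i - 3 + k] for k, t in enumerate(TAPS))
    return (acc + 32) >> 6  # divide by 64 with rounding
```

On a constant signal the filter reproduces the constant, which is a quick sanity check that the taps are normalised correctly.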
- Motion compensation is the process of obtaining all pixel values of the current block through interpolation or copying.
- Merge mode: includes the normal fusion mode (Normal Merge mode, also called regular Merge mode), the sub-block fusion mode (a fusion mode using sub-block motion information, Subblock Merge mode), the MMVD mode (a fusion mode that codes a motion vector difference, Merge with MVD mode), the CIIP mode (a fusion mode in which a new prediction value is jointly generated by inter-frame and intra-frame prediction, Combined Inter-Intra Prediction mode), the TPM mode (a fusion mode for triangular prediction, Triangular Prediction Mode), and the GEO mode (a fusion mode based on arbitrary geometric partition shapes, Geometrical Partitioning).
- normal fusion mode ie Normal Merge mode, also called regular Merge mode
- sub-block fusion mode a fusion mode that uses sub-block motion information, which can be called Subblock fusion mode
- MMVD mode The fusion mode of coding motion difference can be called merge with MVD mode
- CIIP mode: the fusion mode in which new prediction values are jointly generated by inter-frame and intra-frame prediction (combined inter-intra prediction mode)
- the skip mode is a special fusion mode. The difference between the skip mode and the fusion mode is that the skip mode does not require coding residuals. If the current block is in skip mode, the CIIP mode is disabled by default, while the normal fusion mode, sub-block fusion mode, MMVD mode, TPM mode, and GEO mode remain applicable.
- how to generate the predicted value is determined based on the normal fusion mode, the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, the GEO mode, etc.
- for the fusion mode, the predicted value and the residual value are used to obtain the reconstructed value; for the skip mode, there is no residual value, and the predicted value is used directly to obtain the reconstructed value.
- Sequence parameter set: the sequence parameter set contains flag bits that determine whether certain tools are allowed in the entire sequence. If a flag bit is 1, the tool corresponding to the flag bit may be activated in the video sequence; if the flag bit is 0, the tool corresponding to the flag bit may not be activated during encoding of the video sequence.
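The sequence-level gating can be sketched in one small function. The flag names here are hypothetical, not syntax elements from any standard:

```python
# Sequence-level tool gating: a flag bit in the sequence parameter set (SPS)
# decides whether a tool may be activated anywhere in the video sequence.
# A value of 1 allows the tool; 0 (or an absent flag) forbids it.
def tool_allowed(sps, flag_name):
    """Return True only if the SPS flag for this tool is 1."""
    return sps.get(flag_name, 0) == 1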
- Normal fusion mode select one motion information from the candidate motion information list, and generate the prediction value of the current block based on the motion information.
- the candidate motion information list includes: candidate motion information of spatially adjacent blocks, candidate motion information of temporally adjacent blocks, candidate motion information of spatially non-adjacent blocks, motion information obtained by combining existing motion information, default motion information, etc.
- MMVD mode: based on the candidate motion information list of the normal fusion mode, one motion information is selected from that list as the reference motion information, and a motion information difference is obtained through a table lookup. The final motion information is obtained from the reference motion information and the motion information difference, and the prediction value of the current block is generated based on the final motion information.
- the new prediction value of the current block is obtained by combining the intra-frame prediction value and the inter-frame prediction value.
- Sub-block fusion mode includes Affine fusion mode and sub-block TMVP mode.
- the Affine (affine) fusion mode similar to the normal fusion mode, also selects a motion information from the candidate motion information list, and generates the prediction value of the current block based on the motion information.
- the motion information in the candidate motion information list of the normal fusion mode is a 2-parameter translation motion vector
- the motion information in the candidate motion information list of the Affine fusion mode is 4-parameter or 6-parameter Affine motion information.
- TMVP subblock-based temporal motion vector prediction
- TPM mode: divides a block into two triangular sub-blocks (the diagonal division can be at 45 degrees or 135 degrees). The two triangular sub-blocks have different unidirectional motion information.
- the TPM mode is used only in the prediction process and does not affect the subsequent transformation and quantization processes; the unidirectional motion information here is also obtained directly from the candidate motion information list.
- the GEO mode is similar to the TPM mode, but the shape of the division is different.
- the GEO mode divides a square block into two sub-blocks of arbitrary shape (any shape other than the two triangular sub-blocks of the TPM mode), for example one triangular and one pentagonal sub-block; or one triangular and one quadrilateral sub-block; or two trapezoidal sub-blocks, etc. There is no restriction on the division shape.
- the two sub-blocks divided by GEO mode have different unidirectional motion information.
- the fusion mode and skip mode involved in this embodiment refer to a type of prediction mode that directly selects a motion information from the candidate motion information list to generate the prediction value of the current block.
- These prediction modes do not require a motion search process at the encoding end, and, except for the MMVD mode, they do not need to encode a motion information difference.
- Video encoding framework As shown in Figure 1B, the video encoding framework can be used to implement the encoding end processing flow of the embodiment of the present application.
- the schematic diagram of the video decoding framework is similar to that of Figure 1B, which will not be repeated here.
- the video decoding framework can be used to implement the decoding end processing flow of the embodiment of the present application.
- the video coding framework and the video decoding framework include modules such as intra prediction, motion estimation/motion compensation, reference image buffer, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, and entropy encoder.
- at the encoding end, the cooperation between these modules realizes the encoding end processing flow; at the decoding end, the cooperation between these modules realizes the decoding end processing flow.
- a motion vector adjustment mode can be provided.
- in the motion vector adjustment mode, based on the predicted value obtained from the original motion vector, the motion vector is fine-tuned through a local search at the decoder to obtain a better motion vector and generate a predicted value with less distortion.
- for each sub-block of the current block, the first reference block corresponding to the sub-block may be determined according to the first original motion vector of the sub-block, and the second reference block corresponding to the sub-block may be determined according to the second original motion vector of the sub-block; the first original motion vector and the second original motion vector are adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector and the second target motion vector, and then the predicted value of the sub-block can be determined according to the first target motion vector and the second target motion vector.
- Embodiment 1 As shown in FIG. 2, it is a schematic flowchart of the encoding and decoding method proposed in the embodiment of this application.
- the encoding and decoding method can be applied to the decoding end or the encoding end.
- the encoding and decoding method may include the following steps:
- Step 201 if the following conditions are all met, it is determined to start the motion vector adjustment mode for the current block:
- the control information allows the current block to use the motion vector adjustment mode;
- the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is the fusion mode or the skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting the reference blocks from two reference frames, one of the two reference frames precedes the current frame in display order and the other follows it, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames;
- the width, height, and area of the current block are all within a limited range;
- the size of the two reference frames of the current block is the same as the size of the current frame.
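The start conditions listed above can be collected into a single predicate. The sketch below is illustrative only: the structure, the field names, and the example thresholds (8, 8, 128) are assumptions for demonstration, not the normative definition.

```python
from dataclasses import dataclass

@dataclass
class BlockInfo:
    # Illustrative fields; names are assumptions, not from the source.
    mvr_allowed: bool        # control information permits the mode
    is_normal_merge: bool    # prediction mode is the normal fusion (merge) mode
    poc_cur: int             # display order number of the current frame
    poc_ref0: int            # display order number of the forward reference frame
    poc_ref1: int            # display order number of the backward reference frame
    equal_weights: bool      # the two reference frames have the same weight
    both_short_term: bool    # both reference frames are short-term
    width: int
    height: int
    ref_size_matches: bool   # both reference frames match the current frame size

def may_start_mv_adjustment(b: BlockInfo) -> bool:
    # one reference frame before the current frame, one after, equidistant
    before_and_after = b.poc_ref0 < b.poc_cur < b.poc_ref1
    equal_distance = (b.poc_cur - b.poc_ref0) == (b.poc_ref1 - b.poc_cur)
    # example thresholds from the text: width >= 8, height >= 8, area >= 128
    size_ok = b.width >= 8 and b.height >= 8 and b.width * b.height >= 128
    return (b.mvr_allowed and b.is_normal_merge
            and before_and_after and equal_distance
            and b.equal_weights and b.both_short_term
            and size_ok and b.ref_size_matches)
```

The mode starts only when every condition holds; failing any one of the seven conditions disables it.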
- Seven conditions are given above, and whether to start the motion vector adjustment mode for the current block is determined based on these seven conditions.
- the fusion mode or skip mode includes the normal fusion mode, the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, and the GEO mode.
- that the prediction mode of the current block is not a mode other than the normal fusion mode means: the prediction mode is not the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, the GEO mode, etc.
- if the prediction mode of the current block is the fusion mode or the skip mode, the prediction mode of the current block is not the MMVD mode, and the prediction mode of the current block is not the CIIP mode; or,
- if the prediction mode of the current block is the fusion mode or the skip mode, the prediction mode of the current block is not the MMVD mode, the prediction mode of the current block is not the CIIP mode, and the prediction mode of the current block is not the sub-block fusion mode,
- then it can be determined that the prediction mode of the current block is not a mode other than the normal fusion mode; that is, it is determined by elimination that the prediction mode of the current block is the normal fusion mode.
- that the predicted value of the current block is obtained by weighting the reference blocks from two reference frames means that the current block adopts the bidirectional prediction mode, that is, the predicted value of the current block is obtained by weighting the reference blocks from the two reference frames.
- the current block can correspond to the motion information of the two lists, which are recorded as the first motion information and the second motion information.
- the first motion information includes the first reference frame and the first original motion vector
- the second motion information includes the second reference frame and the second original motion vector.
- the above two reference frames may be the first reference frame and the second reference frame.
- that the two reference frames are located one before and one after the current frame in display order means that the first reference frame is located before the current frame where the current block is located, and the second reference frame is located after the current frame.
- the first reference frame may also be called a forward reference frame, and the forward reference frame is located in the first list (for example, list0); the second reference frame may also be called a backward reference frame, and the backward reference frame is located in the second list (for example, list1).
- the width, height, and area of the current block are all within a limited range including: the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than or equal to the third threshold.
- or: the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than the fourth threshold.
- the third threshold may be greater than the fourth threshold.
- the first threshold may be 8, the second threshold may be 8, the third threshold may be 128, and the fourth threshold may be 64.
- the above values are just a few examples, and there is no restriction on this.
- control information for allowing the current block to use the motion vector adjustment mode may include, but is not limited to: sequence-level control information (such as control information for multi-frame images) allowing the current block to use the motion vector adjustment mode; and/or,
- frame-level control information (such as the control information of a frame of image) allowing the current block to use the motion vector adjustment mode.
- Step 202 If it is determined to start the motion vector adjustment mode for the current block, perform motion compensation on the current block.
- In the motion vector adjustment mode, for each sub-block of at least one sub-block included in the current block: determine the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block, and determine the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block; adjust the first original motion vector and the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector; and determine the predicted value of the sub-block according to the first target motion vector and the second target motion vector. After the predicted value of each sub-block is obtained, the predicted value of the current block can be determined according to the predicted value of each sub-block.
- the first reference block corresponding to the sub-block is determined according to the first original motion vector of the sub-block
- the second reference block corresponding to the sub-block is determined according to the second original motion vector of the sub-block; this may include, but is not limited to:
- the first reference block corresponding to the sub-block is determined from the first reference frame; the pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or by copying the pixel values of adjacent pixels in the first reference block.
- the second reference block corresponding to the sub-block is determined from the second reference frame; the pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or by copying the pixel values of adjacent pixels in the second reference block.
- the size of the first reference block is the same as the size of the second reference block
- the width of the first reference block is determined based on the width of the sub-block and the search range, and the height of the first reference block is determined based on the height of the sub-block and the search range.
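The size relation above can be illustrated with a small helper. That the reference block extends by the search range on every side of the sub-block is an assumption made here for concreteness; `reference_block_size` is a hypothetical name.

```python
def reference_block_size(sub_width, sub_height, search_range):
    """Width/height of a reference block that covers the sub-block plus
    search_range pixels of margin on each side (an illustrative assumption)."""
    return sub_width + 2 * search_range, sub_height + 2 * search_range
```

For a 16x16 sub-block and a search range of 2, this gives a 20x20 reference block.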
- the first original motion vector and the second original motion vector of the sub-block are adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector.
- taking the initial motion vector as the center, some or all of the motion vectors may be selected from the motion vectors surrounding the initial motion vector (including the initial motion vector itself), and the selected motion vectors are determined as the candidate motion vectors.
- the initial motion vector is the first original motion vector or the second original motion vector.
- a motion vector can be selected as the optimal motion vector from the initial motion vector and each candidate motion vector.
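The candidate selection described above can be sketched as enumerating integer offsets around the initial motion vector. Probing every offset in a square window is an assumption made here; the text also allows selecting only part of the surrounding motion vectors.

```python
def candidate_motion_vectors(initial_mv, search_range):
    # All integer-offset motion vectors within the search window centered on
    # the initial motion vector, including the initial motion vector itself.
    mx, my = initial_mv
    return [(mx + dx, my + dy)
            for dy in range(-search_range, search_range + 1)
            for dx in range(-search_range, search_range + 1)]
```

The optimal motion vector is then the candidate whose matching cost (for example, the distortion between the two reference blocks it points at) is smallest.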
- the first original motion vector can be adjusted according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector
- the second original motion vector can be adjusted according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
- adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, may include: determining, according to the optimal motion vector, the first integer-pixel motion vector adjustment value, the second integer-pixel motion vector adjustment value, the first sub-pixel motion vector adjustment value, and the second sub-pixel motion vector adjustment value of the sub-block; adjusting the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block; and adjusting the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
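The adjustment step above amounts to adding an integer-pixel and a sub-pixel adjustment value to each original motion vector, component-wise. The sketch below assumes motion vectors are simple (x, y) pairs in the same units as the adjustment values.

```python
def adjust_mv(original_mv, integer_adjustment, subpel_adjustment):
    # target motion vector = original + integer-pixel part + sub-pixel part
    return (original_mv[0] + integer_adjustment[0] + subpel_adjustment[0],
            original_mv[1] + integer_adjustment[1] + subpel_adjustment[1])
```

The first target motion vector uses the first pair of adjustment values, and the second target motion vector uses the second pair.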
- the prediction value of the sub-block may be determined according to the first target motion vector of the sub-block and the second target motion vector of the sub-block. The process will not be repeated.
- the third reference block corresponding to the sub-block may be determined from the first reference frame based on the first target motion vector of the sub-block, and the fourth reference block corresponding to the sub-block may be determined from the second reference frame based on the second target motion vector of the sub-block. Then, the pixel value of the third reference block and the pixel value of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
- the fifth reference block can be determined from the first reference frame, and the fifth reference block can be extended to obtain the sixth reference block; then, based on the first target motion vector of the sub-block, the third reference block corresponding to the sub-block is selected from the sixth reference block. Likewise, the seventh reference block may be determined from the second reference frame and extended to obtain the eighth reference block; based on the second target motion vector of the sub-block, the fourth reference block corresponding to the sub-block is selected from the eighth reference block. Then, the pixel value of the third reference block and the pixel value of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
- weighting the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block may include, but is not limited to: weighting the pixel value of the third reference block with the first weight corresponding to it, and the pixel value of the fourth reference block with the second weight corresponding to it, to obtain the predicted value of the sub-block; for example, the first weight may be the same as the second weight.
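Weighting the two reference blocks can be sketched pixel-by-pixel as below, using equal weights of 0.5 as in the example above; the rounding and bit-depth handling of a real codec are omitted.

```python
def weighted_prediction(block_a, block_b, weight_a=0.5, weight_b=0.5):
    # Per-pixel weighted sum of two equally sized blocks (lists of rows).
    return [[weight_a * a + weight_b * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(block_a, block_b)]
```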
- the predicted value of each sub-block can be combined to obtain the predicted value of the current block, and the process of determining the predicted value of the current block is not limited.
- the first target motion vector and the second target motion vector are obtained from the first original motion vector and the second original motion vector, and the predicted value is determined according to the first target motion vector and the second target motion vector instead of the first original motion vector and the second original motion vector. This solves the problems of poor prediction quality and prediction errors, and improves coding performance and coding efficiency.
- Embodiment 2: Based on the same concept as the above method, FIG. 3 is a schematic flowchart of another encoding and decoding method proposed in an embodiment of this application. The method can be applied to the encoding end and may include the following steps:
- Step 301: The encoder determines whether to activate the motion vector adjustment mode for the current block. If yes, step 302 is executed; if no, the motion vector adjustment method proposed in this application does not need to be adopted, and there is no restriction on the processing of this situation.
- If the encoder determines to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is not accurate enough; therefore, the motion vector adjustment mode is activated for the current block, and step 302 is executed.
- If the encoder determines not to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is sufficiently accurate; therefore, the motion vector adjustment mode may not be activated for the current block, and the motion vector adjustment method proposed in this application is not used.
- Step 302: For each sub-block of at least one sub-block included in the current block, the encoding end determines the first reference block corresponding to the sub-block from the first reference frame according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block from the second reference frame according to the second original motion vector of the sub-block.
- the pixel value of each pixel in the first reference block is called the first pixel value
- the pixel value of each pixel in the second reference block is called the second pixel value.
- this bidirectional motion information may include two reference frames and two original motion vectors; for example, a first reference frame and a first original motion vector, and a second reference frame and a second original motion vector.
- the encoding end determines the first reference block corresponding to the sub-block from the first reference frame, and calls the pixel value of each pixel in the first reference block the first pixel value. Based on the second original motion vector, the encoding end determines the second reference block corresponding to the sub-block from the second reference frame, and calls the pixel value of each pixel in the second reference block the second pixel value.
- the distance between the current frame where the current block is located and the first reference frame and the distance between the second reference frame and the current frame where the current block is located may be the same.
- for example, the first reference frame is the first frame, the current frame is the fifth frame, and the second reference frame is the ninth frame.
- the first original motion vector and the second original motion vector may have a mirror symmetry relationship; for example, the first original motion vector is (4, 4) and the second original motion vector is (-4, -4), or the first original motion vector is (2.5, 3.5) and the second original motion vector is (-2.5, -3.5).
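The mirror symmetry relationship can be stated directly: when the two reference frames are equidistant from the current frame, the second original motion vector is the negation of the first.

```python
def mirrored(mv):
    # Mirror-symmetric counterpart of a motion vector (x, y).
    return (-mv[0], -mv[1])
```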
- the above is only an example, and there is no restriction on this.
- Step 303: The encoding end adjusts the first original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block, and adjusts the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
- the encoding end can fine-tune the first original motion vector and the second original motion vector through a local search based on the first pixel value of the first reference block and the second pixel value of the second reference block to obtain a better first target motion vector and second target motion vector, and then use the first target motion vector and the second target motion vector to generate a predicted value with less distortion.
- the current block may include at least one sub-block, and if the current block only includes one sub-block, the sub-block is the current block itself.
- the sub-block may correspond to the first original motion vector and the second original motion vector. After adjustment, the sub-block may correspond to the first target motion vector and the second target motion vector.
- sub-block A corresponds to the first original motion vector A1 and the second original motion vector A2.
- after adjustment, sub-block A corresponds to the first target motion vector A3 and the second target motion vector A4.
- sub-block B corresponds to the first original motion vector B1 and the second original motion vector B2.
- the sub-block B corresponds to the first target motion vector B3 and the second target motion vector B4.
- the first original motion vector A1 corresponding to sub-block A and the first original motion vector B1 corresponding to sub-block B may be the same, and both are the first original motion vector of the current block; the second original motion vector A2 corresponding to sub-block A and the second original motion vector B2 corresponding to sub-block B may be the same, and both are the second original motion vector of the current block.
- the first target motion vector A3 corresponding to the sub-block A and the first target motion vector B3 corresponding to the sub-block B may be the same or different.
- the second target motion vector A4 corresponding to the sub-block A and the second target motion vector B4 corresponding to the sub-block B may be the same or different.
- Step 304 The encoding end determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector.
- Step 305 The encoding end determines the prediction value of the current block according to the prediction value of each sub-block.
- the first target motion vector and the second target motion vector of sub-block A can be used to determine the predicted value of sub-block A, and the first target motion vector and the second target motion vector of sub-block B can be used to determine the predicted value of sub-block B; the predicted value of sub-block A and the predicted value of sub-block B together form the predicted value of the current block.
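Assembling the predicted value of the current block from the per-sub-block predictions can be sketched as below; the assumption that the sub-blocks sit side by side in one row is for illustration only.

```python
def splice_subblocks_in_row(predictions):
    # predictions: sub-block predictions (each a list of rows), left to right.
    # Concatenate corresponding rows to form the rows of the block prediction.
    return [sum(rows, []) for rows in zip(*predictions)]
```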
- the encoding end saves the first target motion vector and the second target motion vector of each sub-block of the current block; or saves the first original motion vector and the second original motion vector of each sub-block of the current block; or saves the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block of the current block.
- Embodiment 3: Based on the same concept as the above method, FIG. 4 is a schematic flowchart of another encoding and decoding method proposed in an embodiment of this application. The method can be applied to the decoder and may include the following steps:
- Step 401 The decoder determines whether to activate the motion vector adjustment mode for the current block. If it is, then step 402 is executed. If not, the motion vector adjustment method proposed in this application does not need to be adopted, and there is no restriction on the processing of this situation.
- If the decoder determines to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is not accurate enough; therefore, the motion vector adjustment mode is activated for the current block (i.e., the technical solution of the present application), and step 402 is executed.
- If the decoder determines not to activate the motion vector adjustment mode for the current block, it indicates that the motion information of the current block is sufficiently accurate; therefore, the motion vector adjustment mode may not be activated for the current block, and the motion vector adjustment method proposed in this application is not used.
- Step 402: For each sub-block of at least one sub-block included in the current block, the decoding end determines the first reference block corresponding to the sub-block from the first reference frame according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block from the second reference frame according to the second original motion vector of the sub-block.
- the pixel value of each pixel in the first reference block is called the first pixel value
- the pixel value of each pixel in the second reference block is called the second pixel value.
- Step 403: The decoding end adjusts the first original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block, and adjusts the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
- Step 404 The decoding end determines the predicted value of the sub-block according to the first target motion vector and the second target motion vector.
- Step 405 The decoding end determines the predicted value of the current block according to the predicted value of each sub-block.
- the decoding end saves the first target motion vector and the second target motion vector of each sub-block of the current block; or saves the first original motion vector and the second original motion vector of each sub-block of the current block; or saves the first original motion vector, the second original motion vector, the first target motion vector, and the second target motion vector of each sub-block of the current block.
- for step 401 to step 405, refer to step 301 to step 305; details are not repeated here.
- Embodiment 4: The above embodiments involve determining whether to activate the motion vector adjustment mode for the current block, which is described below.
- the following starting conditions can be given.
- the following starting conditions are just an example. In practical applications, the following starting conditions can be combined arbitrarily, which is not limited. Exemplarily, when all the starting conditions in the following starting conditions are satisfied, it is determined to start the motion vector adjustment mode for the current block.
- the control information allows the current block to use the motion vector adjustment mode.
- control information may include, but is not limited to: sequence-level control information and/or frame-level control information.
- the sequence-level (such as multi-frame image) control information may include a control flag (such as sps_cur_tool_enabled_flag), and the frame-level (such as a frame of image) control information may include a control flag (such as pic_cur_tool_disabled_flag).
- When sps_cur_tool_enabled_flag is the first value and pic_cur_tool_disabled_flag is the second value, it means that the current block is allowed to use the motion vector adjustment mode.
- sps_cur_tool_enabled_flag indicates whether all images in the sequence are allowed to use the motion vector adjustment mode, and pic_cur_tool_disabled_flag indicates whether each block in the current image is not allowed to use the motion vector adjustment mode.
- When sps_cur_tool_enabled_flag is the first value, it means that all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_disabled_flag is the second value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
- When sps_cur_tool_enabled_flag is the second value and/or pic_cur_tool_disabled_flag is the first value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information does not allow the current block to use the motion vector adjustment mode.
- alternatively, the sequence-level control information may include a control flag (such as sps_cur_tool_disabled_flag), and the frame-level control information may include a control flag (such as pic_cur_tool_disabled_flag).
- When sps_cur_tool_disabled_flag is the second value and pic_cur_tool_disabled_flag is the second value, it means that the current block is allowed to use the motion vector adjustment mode.
- sps_cur_tool_disabled_flag indicates whether all images in the sequence are not allowed to use the motion vector adjustment mode, and pic_cur_tool_disabled_flag indicates whether each block in the current image is not allowed to use the motion vector adjustment mode.
- When sps_cur_tool_disabled_flag is the second value, it means that all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_disabled_flag is the second value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
- When sps_cur_tool_disabled_flag is the first value and/or pic_cur_tool_disabled_flag is the first value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information does not allow the current block to use the motion vector adjustment mode.
- alternatively, the sequence-level control information may include a control flag bit (such as sps_cur_tool_enabled_flag), and the frame-level control information may include a control flag bit (such as pic_cur_tool_enabled_flag).
- When sps_cur_tool_enabled_flag is the first value and pic_cur_tool_enabled_flag is the first value, it means that the current block is allowed to use the motion vector adjustment mode.
- sps_cur_tool_enabled_flag indicates whether all images in the sequence are allowed to use the motion vector adjustment mode, and pic_cur_tool_enabled_flag indicates whether each block in the current image is allowed to use the motion vector adjustment mode.
- When sps_cur_tool_enabled_flag is the first value, it means that all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_enabled_flag is the first value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
- When sps_cur_tool_enabled_flag is the second value and/or pic_cur_tool_enabled_flag is the second value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information does not allow the current block to use the motion vector adjustment mode.
- alternatively, the sequence-level control information may include a control flag (such as sps_cur_tool_disabled_flag), and the frame-level control information may include a control flag (such as pic_cur_tool_enabled_flag).
- When sps_cur_tool_disabled_flag is the second value and pic_cur_tool_enabled_flag is the first value, it means that the current block is allowed to use the motion vector adjustment mode.
- sps_cur_tool_disabled_flag indicates whether all images in the sequence are not allowed to use the motion vector adjustment mode, and pic_cur_tool_enabled_flag indicates whether each block in the current image is allowed to use the motion vector adjustment mode.
- When sps_cur_tool_disabled_flag is the second value, it means that all images in the sequence are allowed to use the motion vector adjustment mode; when pic_cur_tool_enabled_flag is the first value, it means that each block in the current image is allowed to use the motion vector adjustment mode.
- When sps_cur_tool_disabled_flag is the first value and/or pic_cur_tool_enabled_flag is the second value, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the control information does not allow the current block to use the motion vector adjustment mode.
- the first value can be 1 and the second value can be 0, or the first value can be 0 and the second value can be 1; the above is just an example, and there is no restriction on this.
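The first flag scheme above can be sketched as a predicate, assuming the convention that the first value is 1 and the second value is 0 (the text allows either convention).

```python
def mv_adjustment_allowed(sps_cur_tool_enabled_flag, pic_cur_tool_disabled_flag):
    # Allowed only when the sequence-level flag enables the tool (first value)
    # and the frame-level flag does not disable it (second value).
    return sps_cur_tool_enabled_flag == 1 and pic_cur_tool_disabled_flag == 0
```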
- the frame herein is equivalent to an image, for example, the current frame represents the current image, and the reference frame represents the reference image.
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode.
- If the prediction mode of the current block (such as the inter prediction mode) is the fusion mode or the skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode (such as the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, the GEO mode, etc.), it means that the current block is allowed to use the motion vector adjustment mode.
- If the prediction mode of the current block is the fusion mode or the skip mode, the prediction mode of the current block is not the MMVD mode, and the prediction mode of the current block is not the CIIP mode, then the current block is allowed to use the motion vector adjustment mode.
- the prediction mode of the current block is not the fusion mode, and the prediction mode of the current block is not the skip mode, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 2 is not satisfied.
- If the prediction mode of the current block is the fusion mode or the skip mode, and the prediction mode of the current block is a mode other than the normal fusion mode (such as the sub-block fusion mode, the MMVD mode, the CIIP mode, the TPM mode, the GEO mode, etc.), it means that the current block is not allowed to use the motion vector adjustment mode, that is, start condition 2 is not met.
- If the prediction mode of the current block is the normal fusion mode (such as the regular merge mode), the current block is allowed to use the motion vector adjustment mode. The normal fusion mode reuses a piece of motion information in the motion information list of the current block as the motion information of the current block to generate the predicted value of the current block.
- the prediction mode of the current block is not the normal fusion mode, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 2 is not satisfied.
- the predicted value of the current block is obtained by weighting the reference blocks from the two reference frames, and the display order of the two reference frames is located one before and one after the current frame, and the distance between the two reference frames and the current frame is the same.
- the prediction value of the current block obtained by weighting the reference blocks from two reference frames means that the current block adopts the bidirectional prediction mode, that is, the prediction value of the current block is obtained by weighting the reference blocks from the two reference frames.
- the current block may correspond to the motion information of the two lists, which are recorded as the first motion information and the second motion information.
- the first motion information includes the first reference frame and the first original motion vector
- the second motion information includes the second reference frame and the second original motion vector.
- the display order of the two reference frames being located one before and one after the current frame means that the first reference frame is located before the current frame where the current block is located, and the second reference frame is located after the current frame.
- the current block has motion information (such as two reference frames and two motion vectors) for two lists (such as list0 and list1), the display order of the two reference frames places them one before and one after the current frame, and the distance between the two reference frames and the current frame is the same; this means that the current block is allowed to use the motion vector adjustment mode.
- the condition that the display order of the two reference frames places them one before and one after the current frame, and that the distance between the two reference frames and the current frame is the same, can be expressed through the relative relationship of the display sequence number POC_Cur of the current frame, the display sequence number POC_0 of the reference frame of list0, and the display sequence number POC_1 of the reference frame of list1: that is, (POC_Cur-POC_0) is exactly equal to (POC_1-POC_Cur).
- the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located in the current frame after that.
- the above condition "there are two reference frames for the current block, the display order of the two reference frames places them one before and one after the current frame, and the distance between the two reference frames and the current frame is the same" can be expressed by the following content:
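As an illustration only, the display-order condition above can be sketched as a small check over plain integer POC values; the function and argument names are illustrative, not from the embodiment:

```python
def dmvr_poc_condition(poc_cur, poc_0, poc_1):
    """Start condition 3 sketch: the list0 and list1 reference frames lie on
    opposite sides of the current frame (one before, one after) and at equal
    distances from it, i.e. (POC_Cur-POC_0) == (POC_1-POC_Cur)."""
    equal_distance = (poc_cur - poc_0) == (poc_1 - poc_cur)
    # (poc_cur - poc_0) > 0 ensures the list0 reference is before the
    # current frame, which together with equal distance puts the list1
    # reference after it.
    return equal_distance and (poc_cur - poc_0) > 0
```

For example, with POC_Cur=8 the pair (POC_0=4, POC_1=12) satisfies the condition, while (4, 10) does not.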
- the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 3 is not satisfied.
- the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 3 is not met.
- the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 3 is not satisfied.
- the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 3 is not met.
- the weights of the two reference frames of the current block are the same, it means that the current block is allowed to use the motion vector adjustment mode.
- the luminance weighting weight of the reference frame refIdxL0 (luma_weight_l0_flag[refIdxL0]) may be equal to the luminance weighting weight of the reference frame refIdxL1 (luma_weight_l1_flag[refIdxL1]), which means that the weights of the two reference frames of the current block are the same.
- the block-level weights of the two reference frames are the same, for example, the index BcwIdx[xCb][yCb] of the block-level weight of the current block is 0, it means that the weights of the two reference frames of the current block are the same.
- the frame-level weights of the two reference frames are the same, and the block-level weights of the two reference frames are the same, it means that the weights of the two reference frames of the current block are the same.
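A minimal sketch of the weight checks above, under stated assumptions: the frame-level condition is modeled here by both luma weighting flags being 0 (default, equal weights), and the block-level condition by the weight index BcwIdx being 0; the real syntax elements are more involved than this illustration:

```python
def equal_weights(luma_weight_l0_flag, luma_weight_l1_flag, bcw_idx):
    """Start condition 4 sketch: the frame-level weights of the two
    reference frames are the same AND the block-level weights are the same.
    Argument names loosely follow the description; the modeling of 'equal
    frame-level weights' as both flags == 0 is an assumption."""
    frame_level_equal = (luma_weight_l0_flag == 0 and luma_weight_l1_flag == 0)
    block_level_equal = (bcw_idx == 0)
    return frame_level_equal and block_level_equal
```

If either level of weights differs, the motion vector adjustment mode is disallowed, matching the "not satisfied" cases listed above.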
- the weights of the two reference frames of the current block are different, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 4 is not satisfied.
- the frame-level weights of the two reference frames are different, it means that the weights of the two reference frames of the current block are different.
- the block-level weights of the two reference frames are different, it means that the weights of the two reference frames of the current block are different.
- the frame-level weights of the two reference frames are different, and the block-level weights of the two reference frames are different, it means that the weights of the two reference frames of the current block are different.
- the weighted weight of the two reference frames of the current block refers to the weight used in bidirectional weight compensation.
- the two predicted values need to be weighted to obtain the final predicted value of the sub-block.
- the weights corresponding to the two predicted values are the weighted weights of the two reference frames of the current block, that is, the weights corresponding to the two predicted values are the same.
- the two reference frames of the current block are both short-term reference frames. In other words, neither of the two reference frames of the current block is a long-term reference frame.
- the short-term reference frame refers to a reference frame that is closer to the current frame, and is generally an actual image frame.
- the two reference frames of the current block are not both short-term reference frames, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 5 is not met. Or, if a reference frame of the current block is not a short-term reference frame, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 5 is not satisfied. Or, if the two reference frames of the current block are not short-term reference frames, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 5 is not satisfied.
- the two reference frames of the current block are not long-term reference frames, it means that the current block is allowed to use the motion vector adjustment mode.
- the display sequence number POC of the long-term reference frame has no actual meaning.
- the long-term reference frame refers to a reference frame far away from the current frame, or an image frame synthesized from several actual images.
- a reference frame of the current block is a long-term reference frame, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 5 is not satisfied.
- the two reference frames of the current block are both long-term reference frames, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 5 is not satisfied.
- the width, height and area of the current block are all within a limited range.
- the width cbWidth of the current block is greater than or equal to the first threshold (such as 8), the height cbHeight of the current block is greater than or equal to the second threshold (such as 8), and the area of the current block (cbHeight*cbWidth) is greater than or equal to the third threshold (such as 128), which means that the current block is allowed to use the motion vector adjustment mode.
- the width cbWidth of the current block is less than the first threshold, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 6 is not satisfied.
- the height cbHeight of the current block is less than the second threshold, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 6 is not met.
- the area of the current block is smaller than the third threshold, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 6 is not satisfied.
- the width cbWidth of the current block is greater than or equal to the first threshold (such as 8), the height cbHeight of the current block is greater than or equal to the second threshold (such as 8), and the area of the current block (cbHeight*cbWidth) is greater than the fourth threshold (such as 64), which means that the current block is allowed to use the motion vector adjustment mode.
- the width cbWidth of the current block is less than the first threshold, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 6 is not satisfied.
- the height cbHeight of the current block is less than the second threshold, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 6 is not met.
- the area of the current block is less than or equal to the fourth threshold, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 6 is not satisfied.
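The size conditions above can be sketched as follows, using the example thresholds from the text (8, 8, and 128; the variant with the fourth threshold would test area > 64 instead). Function and parameter names are illustrative:

```python
def size_condition(cb_width, cb_height,
                   min_width=8, min_height=8, min_area=128):
    """Start condition 6 sketch: width >= first threshold, height >= second
    threshold, and area >= third threshold; failing any one of the three
    checks disallows the motion vector adjustment mode."""
    return (cb_width >= min_width and cb_height >= min_height
            and cb_width * cb_height >= min_area)
```

For example, an 8*16 block passes (area 128), while an 8*8 block fails on the area check and a 4*64 block fails on the width check.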
- the size of the two reference frames of the current block is the same as the size of the current frame.
- the size of the reference frame of list0 is the same as the size of the current frame, for example, the width of the reference frame of list0 is the same as the width of the current frame, and the height of the reference frame of list0 is the same as the height of the current frame.
- the size of the reference frame of list1 is the same as the size of the current frame, for example, the width of the reference frame of list1 is the same as the width of the current frame, and the height of the reference frame of list1 is the same as the height of the current frame, which means that the current block is allowed to be used Motion vector adjustment mode.
- the size of at least one of the two reference frames of the current block is different from the size of the current frame, it means that the current block is not allowed to use the motion vector adjustment mode, that is, the start condition 7 is not satisfied.
- the width of the reference frame of list0 is different from the width of the current frame, it means that the current block is not allowed to use the motion vector adjustment mode.
- the height of the reference frame of list0 is different from the height of the current frame, it means that the current block is not allowed to use the motion vector adjustment mode.
- the width of the reference frame of list1 is different from the width of the current frame, it means that the current block is not allowed to use the motion vector adjustment mode.
- the height of the reference frame of list1 is different from the height of the current frame, it means that the current block is not allowed to use the motion vector adjustment mode.
- Embodiment 5: In the above embodiment, for each sub-block of the current block, the first reference block corresponding to the sub-block is determined from the first reference frame according to the first original motion vector of the sub-block, and the pixel value of each pixel in the first reference block is called the first pixel value; the second reference block corresponding to the sub-block is determined from the second reference frame according to the second original motion vector of the sub-block, and the pixel value of each pixel in the second reference block is called the second pixel value. This is described below.
- the first pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or is obtained by copying the pixel values of adjacent pixels in the first reference block.
- the second pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or is obtained by copying the pixel values of adjacent pixels in the second reference block.
- the size of the first reference block is the same as the size of the second reference block; the width of the first reference block/second reference block is determined based on the width of the sub-block and the search range, and the height of the first reference block/second reference block is determined based on the height of the sub-block and the search range.
- the smaller sub-block can be 8*8, and the larger sub-block can be 32*32.
- the size of the sub-block can be the same as the size of the current block, that is, the sub-block is the current block.
- the current block is 8*16
- the block includes only one sub-block, and the size of the sub-block is 8*16.
- the size of the sub-block can also be different from the size of the current block.
- when the current block is 8*32, the current block can include two 8*16 sub-blocks.
- the above is just an example.
- the width of the sub-block is dx
- the height of the sub-block is dy
- the first original motion vector is denoted as MV0
- the second original motion vector is denoted as MV1.
- an entire pixel block with an area of (dx+filtersize-1)*(dy+filtersize-1) can be obtained, which can be recorded as an entire pixel block A.
- an entire pixel block with an area of (dx+filtersize-1)*(dy+filtersize-1) can be obtained, which can be recorded as an entire pixel block B.
- this initial reference pixel block can be recorded as the first reference block.
- the initial reference pixel block with a size of (dx+2*IterNum)*(dy+2*IterNum) can be obtained by bilinear interpolation, and this initial reference pixel block can be marked as the second reference block.
- this initial reference pixel block is recorded as the first reference block.
- for the entire pixel block B with an area of (dx+filtersize-1)*(dy+filtersize-1), direct copying can be used to obtain the initial reference pixel block with a size of (dx+2*IterNum)*(dy+2*IterNum), and this initial reference pixel block is marked as the second reference block.
- the subsequent search process only uses the brightness component to calculate the cost value to reduce complexity
- an integer-pixel block (such as the whole pixel block A and the whole pixel block B) with an area of (dx+filtersize-1)*(dy+filtersize-1)
- the initial reference pixel block is the first reference block (such as Pred_Inter0) and the second reference block (such as Pred_Inter1).
- filtersize may be the number of taps of the interpolation filter, for example, it may be 8, etc., and there is no restriction on this.
- obtaining the first reference block/second reference block through bilinear interpolation means that the pixel value of each pixel in the first reference block/second reference block is obtained by interpolating the pixel values of adjacent pixels.
- obtaining the first reference block/second reference block by copying means that the pixel value of each pixel in the first reference block/second reference block is obtained by copying the pixel values of adjacent pixels.
- the area of the first reference block is (dx+2*IterNum)*(dy+2*IterNum), and the area of the second reference block is (dx+2*IterNum)*(dy+2*IterNum)
- the width value of the first reference block/second reference block is dx+2*IterNum
- the height value of the first reference block/second reference block is dy+2*IterNum.
- IterNum can be the search range SR, for example, the number of iterations in subsequent embodiments; IterNum can be the maximum horizontal/vertical component difference between the target motion vector and the original motion vector; for example, IterNum can be 2, etc.
- an entire pixel block A with an area of 23*23 (that is, (16+8-1)*(16+8-1)) is obtained.
- the first reference block with a size of 20*20 (that is, (16+2*2)*(16+2*2)) can be obtained by bilinear interpolation.
- an entire pixel block B with an area of 23*23 is obtained.
- a second reference block with a size of 20*20 is obtained.
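The size relationships in this example can be sketched as follows, assuming dx=dy=16, filtersize=8 and IterNum=SR=2 as in the text (the function name is illustrative):

```python
def block_sizes(dx, dy, filter_size=8, iter_num=2):
    """Embodiment 5 size sketch: the whole-pixel block fetched from the
    reference frame has area (dx+filtersize-1)*(dy+filtersize-1); the
    initial reference block produced by bilinear interpolation or copying
    has area (dx+2*IterNum)*(dy+2*IterNum)."""
    whole_pixel_block = (dx + filter_size - 1, dy + filter_size - 1)
    reference_block = (dx + 2 * iter_num, dy + 2 * iter_num)
    return whole_pixel_block, reference_block
```

With a 16*16 sub-block this yields a 23*23 whole-pixel block and a 20*20 reference block, matching the worked example above.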
- after the first reference block and the second reference block are obtained, they are used for the motion vector adjustment in the subsequent process.
- Embodiment 6: In the above embodiment, for each sub-block of the current block, the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
- take a sub-block (such as each sub-block of the current block, with a size of dx*dy) as an example.
- Step a1: Determine the first original motion vector or the second original motion vector as the center motion vector.
- the first original motion vector is (4, 4) and the second original motion vector is (-4, -4)
- for example, the first original motion vector (4, 4) or the second original motion vector (-4, -4) is determined as the center motion vector.
- the following takes the determination of the first original motion vector (4, 4) as the center motion vector as an example; the process of determining the second original motion vector (-4, -4) as the center motion vector is similar and will not be repeated here.
- Step a2: Determine the edge motion vectors corresponding to the center motion vector.
- the center motion vector (x, y) can be shifted by S in different directions to obtain the edge motion vector (x, y+S), the edge motion vector (x, y-S), the edge motion vector (x+S, y), the edge motion vector (x-S, y), and the edge motion vector (x+right, y+down) in different directions, where right may be S or -S, and down may be S or -S.
- the center motion vector (x, y) is taken as the center, that is, the center motion vector is (0, 0).
- edge motion vector includes: edge motion vector (0, 1), edge motion vector (0, -1), edge motion vector (1, 0), edge motion vector (-1, 0), edge motion vector (1, 1).
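The candidate set in this example can be sketched as follows; for simplicity the diagonal candidate is fixed to (x+S, y+S), as in the (0, 0)-centered example above, although in general right/down may each be S or -S:

```python
def edge_motion_vectors(x, y, s=1):
    """Step a2 sketch: the edge motion vectors around the center motion
    vector (x, y) with step S.  The fifth (diagonal) candidate is fixed to
    (x+S, y+S) here as an illustrative simplification."""
    return [(x, y + s), (x, y - s),  # vertical shifts
            (x + s, y), (x - s, y),  # horizontal shifts
            (x + s, y + s)]          # one diagonal candidate
```

With center (0, 0) and S=1 this reproduces the five edge motion vectors listed above.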
- Step a3: Obtain the first cost value corresponding to the center motion vector and the second cost value corresponding to each edge motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block.
- the sub-reference block A1 corresponding to the center motion vector (0, 0) is copied from the first reference block, and the sub-reference block A1 is the sub-reference block of the center motion vector (0, 0) in the first reference block.
- the sub-reference block B1 corresponding to the central motion vector (0, 0) is obtained by copying from the second reference block.
- the sub-reference block B1 is the sub-reference block of the central motion vector (0, 0) in the second reference block.
- the cost value 1 corresponding to the center motion vector (0, 0) is obtained by using the first pixel value of the sub-reference block A1 and the second pixel value of the sub-reference block B1.
- the sub-reference block A2 corresponding to the edge motion vector (0, 1) is obtained by copying from the first reference block.
- the sub-reference block A2 is the sub-reference block of the edge motion vector (0, 1) in the first reference block.
- the sub-reference block B2 corresponding to the symmetric motion vector (0, -1) of the edge motion vector (0, 1) is copied from the second reference block; the sub-reference block B2 is the sub-reference block of the symmetric motion vector (0, -1) in the second reference block.
- the cost value 2 corresponding to the edge motion vector (0, 1) is obtained by using the first pixel value of the sub-reference block A2 and the second pixel value of the sub-reference block B2.
- for the determination method of the cost value, please refer to the subsequent embodiments.
- in the same way, the cost value 3 corresponding to the edge motion vector (0, -1), the cost value 4 corresponding to the edge motion vector (1, 0), the cost value 5 corresponding to the edge motion vector (-1, 0), and the cost value 6 corresponding to the edge motion vector (1, 1) can be determined, which will not be repeated here.
- Step a4: According to the first cost value and the second cost values, a motion vector is selected from the center motion vector and the edge motion vectors as the optimal motion vector. For example, the motion vector with the smallest cost value can be used as the optimal motion vector.
- the edge motion vector (0, 1) corresponding to the cost value 2 can be used as the optimal motion vector.
- Step a5: Determine whether the end condition is met; if not, the optimal motion vector is determined as the center motion vector and the process returns to step a2; if it is met, step a6 is performed.
- if the number of iterations/search range reaches the threshold, the end condition is met; if the number of iterations/search range does not reach the threshold, the end condition is not met. For example, assuming that SR is 2, that is, the threshold is 2, if the number of iterations/search range has reached 2, that is, steps a2-a4 have been executed twice, the end condition is met; otherwise, the end condition is not met.
- the end condition can be satisfied.
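Steps a1-a5 above can be sketched as an iterative integer search; this is an illustrative simplification in which cost_fn stands in for the SAD-based cost between the two sub-reference blocks selected by a candidate motion vector and its symmetric counterpart (and the diagonal candidate is fixed to the (+step, +step) direction):

```python
def integer_search(cost_fn, start_mv, sr=2, step=1):
    """Sketch of steps a1-a5: iterate SR times; in each round, evaluate the
    center motion vector and its edge motion vectors with cost_fn(mv) and
    move the center to the candidate with the smallest cost value."""
    center = start_mv
    for _ in range(sr):
        x, y = center
        candidates = [center,
                      (x, y + step), (x, y - step),
                      (x + step, y), (x - step, y),
                      (x + step, y + step)]
        # Step a4: the motion vector with the smallest cost value wins.
        center = min(candidates, key=cost_fn)
    return center
```

For instance, with a toy cost function that is minimized at (4, 6) and a start of (4, 4), two rounds move the center to (4, 5) and then (4, 6), mirroring the two-iteration example above.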
- Step a6: Determine the first integer-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second integer-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector.
- the first integer-pixel motion vector adjustment value may be determined according to the optimal motion vector and the first original motion vector
- the second integer-pixel motion vector adjustment value may be determined according to the first integer-pixel motion vector adjustment value.
- the second integer-pixel motion vector adjustment value may be symmetrical to the first integer-pixel motion vector adjustment value.
- the optimal motion vector is the edge motion vector (0, 1), and the second iteration is performed with the edge motion vector (0, 1) as the center.
- assuming that after the second iteration, the optimal motion vector is again the edge motion vector (0, 1).
- the first integer-pixel motion vector adjustment value is (0, 2), that is, the sum of the edge motion vector (0, 1) from the first iteration and the edge motion vector (0, 1) from the second iteration.
- the first original motion vector is (4, 4).
- in the first iteration, the optimal motion vector is the edge motion vector (0, 1), that is, the optimal motion vector can correspond to the motion vector (4, 5).
- the second iteration takes the edge motion vector (0, 1) as the center.
- in the second iteration, the optimal motion vector is the edge motion vector (0, 1), that is, the optimal motion vector can correspond to the motion vector (4, 6).
- the first integer-pixel motion vector adjustment value is determined according to the optimal motion vector (4, 6) and the first original motion vector (4, 4); the first integer-pixel motion vector adjustment value is the difference between the optimal motion vector (4, 6) and the first original motion vector (4, 4), that is, the first integer-pixel motion vector adjustment value is (0, 2).
- the second integer-pixel motion vector adjustment value can be (0, -2), that is, the symmetric value of (0, 2).
- Step a7: Determine the first sub-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second sub-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector.
- the first sub-pixel motion vector adjustment value can be determined according to the cost value corresponding to the optimal motion vector and the cost value corresponding to the edge motion vector corresponding to the optimal motion vector.
- SPMV can be the first sub-pixel motion vector adjustment value
- N can be related to the pixel precision of the motion vector.
- for example, when the pixel precision of the motion vector is 1/2, N is 1; when the pixel precision of the motion vector is 1/4, N is 2; when the pixel precision of the motion vector is 1/8, N is 4; when the pixel precision of the motion vector is 1/16, N is 8.
- E(0,0) represents the cost value of the optimal motion vector;
- E(-1,0) is the cost value of the edge motion vector (-1,0) of the optimal motion vector (0,0), centered on the optimal motion vector;
- E(1,0) is the cost value of the edge motion vector (1,0) of the optimal motion vector (0,0), centered on the optimal motion vector;
- E(0,-1) is the cost value of the edge motion vector (0,-1) of the optimal motion vector (0,0), centered on the optimal motion vector;
- E(0,1) is the cost value of the edge motion vector (0,1) of the optimal motion vector (0,0), centered on the optimal motion vector.
- for the determination method of the cost value of each motion vector, refer to the above example, which will not be repeated here.
- the second sub-pixel motion vector adjustment value can be determined according to the first sub-pixel motion vector adjustment value, and the second sub-pixel motion vector adjustment value is the symmetric value of the first sub-pixel motion vector adjustment value. For example, if the first sub-pixel motion vector adjustment value is (1, 0), the second sub-pixel motion vector adjustment value can be (-1, 0), that is, the symmetric value of the first sub-pixel motion vector adjustment value (1, 0).
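The sub-pixel adjustment in step a7 can be sketched with the usual error-surface (parabolic) fit over the five cost values E(0,0), E(-1,0), E(1,0), E(0,-1), E(0,1). The exact formula below is an assumption based on that standard fit, since the text only names the inputs and the precision factor N:

```python
def subpel_offset(e, n=1):
    """Sketch of the first sub-pixel motion vector adjustment value (SPMV).
    e is a dict mapping the five offsets around the optimal integer
    position to their cost values, e.g. {(0,0): ..., (-1,0): ..., ...}.
    N depends on the motion vector pixel precision (N=1 for 1/2, N=2 for
    1/4, N=4 for 1/8, N=8 for 1/16)."""
    denom_x = 2 * (e[(-1, 0)] + e[(1, 0)] - 2 * e[(0, 0)])
    denom_y = 2 * (e[(0, -1)] + e[(0, 1)] - 2 * e[(0, 0)])
    # Guard against a flat error surface (zero denominator).
    spmv_x = n * (e[(-1, 0)] - e[(1, 0)]) / denom_x if denom_x else 0.0
    spmv_y = n * (e[(0, -1)] - e[(0, 1)]) / denom_y if denom_y else 0.0
    return spmv_x, spmv_y
```

A perfectly symmetric cost surface yields a zero offset; a surface tilted toward the left yields a negative horizontal offset, as expected.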
- Step a8: Adjust the first original motion vector according to the first integer-pixel motion vector adjustment value and/or the first sub-pixel motion vector adjustment value to obtain the first target motion vector.
- the first target motion vector = the first original motion vector + the first integer-pixel motion vector adjustment value + the first sub-pixel motion vector adjustment value.
- Step a9: Adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and/or the second sub-pixel motion vector adjustment value to obtain the second target motion vector.
- the second target motion vector = the second original motion vector + the second integer-pixel motion vector adjustment value + the second sub-pixel motion vector adjustment value.
- Embodiment 7: In the above embodiment, for each sub-block of the current block, the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
- take a sub-block (such as each sub-block of the current block, with a size of dx*dy) as an example.
- the first original motion vector is recorded as Org_MV0
- the second original motion vector is recorded as Org_MV1.
- the obtained first target motion vector is recorded as Refined_MV0.
- the second target motion vector obtained is denoted as Refined_MV1.
- Step b1: Perform SR iterations to obtain the optimal integer-pixel offset of the integer-pixel MV point, recorded as IntegerDeltaMV; IntegerDeltaMV is the first integer-pixel motion vector adjustment value in the above embodiment. For example, IntegerDeltaMV is first initialized to (0, 0), and the following process is performed in each iteration:
- Step b11: Set deltaMV to (0, 0). If it is the first iteration, the predicted value block A1 (that is, the most central dx*dy block of the first reference block) is obtained by copying based on the reference pixels of the first original motion vector in the first reference block; the predicted value block B1 (that is, the most central dx*dy block of the second reference block) is obtained by copying based on the reference pixels of the second original motion vector in the second reference block. The initial cost value cost is obtained based on the predicted value block A1 and the predicted value block B1 (the initial cost value is the SAD (sum of absolute differences) based on the predicted value block A1 and the predicted value block B1; see the subsequent embodiments for the determination method). If the initial cost value cost is less than dx*dy, where dx and dy are the width and height of the current sub-block, the subsequent search process is skipped directly, step b2 is executed, and notZeroCost is set to false.
- Step b12: As shown in Figure 6, centering on the above initial point, five offset MVs (these five offset MVs are called MVOffset) are obtained in the order of {Mv(0,1), Mv(0,-1), Mv(1,0), Mv(-1,0), Mv(right, down)}, and the calculation and comparison process of the cost values of these five offset MVs is performed.
- through this MVOffset (such as Mv(0,1), etc.), two blocks of predicted values are obtained: the dx*dy block in the first reference block whose center position is offset by MVOffset, and the dx*dy block in the second reference block whose center position is offset by -MVOffset (the opposite of MVOffset); the down-sampled SAD of these two predicted value blocks is calculated as the cost value of the MVOffset.
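The down-sampled SAD mentioned above can be sketched as follows; the exact down-sampling pattern is not spelled out here, so computing the SAD on every other row is an assumption for illustration:

```python
def downsampled_sad(block_a, block_b):
    """Cost value sketch for step b12: sum of absolute differences between
    the two predicted value blocks, computed only on every other row (an
    assumed down-sampling).  Blocks are lists of equal-length rows of luma
    samples."""
    total = 0
    for r in range(0, len(block_a), 2):  # skip every other row
        total += sum(abs(a - b) for a, b in zip(block_a[r], block_b[r]))
    return total
```

For two 2*4 toy blocks, only rows 0 and 2 contribute to the cost, which is what makes this cheaper than a full SAD.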
- Step b13: After the iteration, if the optimal MV is still the initial MV (that is, not an MVOffset) or the minimum cost value is 0, the next iterative search process is not performed, step b2 is executed, and notZeroCost is set to false.
- otherwise, if the number of iterations reaches SR, step b2 is executed; if the number of iterations does not reach SR, the optimal MV can be used as the center to perform the next iterative search process, that is, the process returns to step b11.
- through the above iterations, the final value of IntegerDeltaMV is obtained, which is the first integer-pixel motion vector adjustment value and is subsequently still recorded as IntegerDeltaMV.
- Step b2: Taking the optimal integer-pixel MV point of step b1 as the center, the optimal sub-pixel offset MV can be obtained, which is recorded as SPMV; SPMV is the first sub-pixel motion vector adjustment value in the above-mentioned embodiment.
- Step b21: Subsequent processing (that is, obtaining SPMV) can be performed only when notZeroCost is not false and deltaMV is (0, 0); otherwise, IntegerDeltaMV is directly used to adjust the original motion vector, instead of using both IntegerDeltaMV and SPMV to adjust the original motion vector.
- through calculation, the value of SPMV, that is, the first sub-pixel motion vector adjustment value, can be obtained.
- BestMVoffset = IntegerDeltaMV + SPMV, that is, the sum of the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value.
- -IntegerDeltaMV is the symmetric value of IntegerDeltaMV, that is, the second integer pixel motion vector adjustment value
- -SPMV is the symmetric value of SPMV, that is, the second sub-pixel motion vector adjustment value
- -BestMVoffset = (-IntegerDeltaMV) + (-SPMV), that is, the sum of the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value.
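The final assembly of the target motion vectors from BestMVoffset and its symmetric value can be sketched as follows (names follow the embodiment; the tuple arithmetic is illustrative):

```python
def refine_motion_vectors(org_mv0, org_mv1, integer_delta_mv, spmv):
    """Sketch of the final adjustment: the first target motion vector adds
    BestMVoffset = IntegerDeltaMV + SPMV to the first original motion
    vector, and the second target motion vector adds the symmetric offset
    -BestMVoffset to the second original motion vector."""
    best = (integer_delta_mv[0] + spmv[0], integer_delta_mv[1] + spmv[1])
    refined_mv0 = (org_mv0[0] + best[0], org_mv0[1] + best[1])
    refined_mv1 = (org_mv1[0] - best[0], org_mv1[1] - best[1])
    return refined_mv0, refined_mv1
```

Using the worked example above (Org_MV0=(4, 4), Org_MV1=(-4, -4), IntegerDeltaMV=(0, 2), SPMV=(0, 0)), this yields Refined_MV0=(4, 6) and Refined_MV1=(-4, -6).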
- Embodiment 8: In an example, in order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is that the condition "if the initial cost value is less than dx*dy, skip the subsequent search process" in step b11 is removed, that is, even if the initial cost value is less than dx*dy, the subsequent search process is not skipped directly but is continued, that is, step b12 needs to be performed.
- Embodiment 9 In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of the seventh embodiment, the difference is: "If the initial cost is less than dx*dy, then skip the subsequent search process" in b11 is removed, that is, even if the initial cost is less than dx*dy, it will not “directly skip the subsequent search process" ", but to continue the subsequent search process, that is, step b12 needs to be performed.
- In addition, the rule "if the optimal MV is still the initial MV (that is, not an MVOffset) or the minimum cost value is 0, do not perform the next iteration of the search process" in step b13 is removed; that is, even if the optimal MV is still the initial MV or the minimum cost value is 0, the next iteration of the search process can still be performed.
- Embodiment 10: In an example, in order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is: the processing related to "notZeroCost" is removed, that is, the value of notZeroCost is not set or saved in step b11 and step b13. In step b21, the sub-pixel offset calculation process (i.e., step b22) can be performed as long as deltaMV is (0, 0), not only when notZeroCost is not false and deltaMV is (0, 0).
- Embodiment 11: In an example, in order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is: the condition in step b21, "only when notZeroCost is not false and deltaMV is (0, 0) can the subsequent processing be performed; otherwise, IntegerDeltaMV is used directly to adjust the original motion vector", is modified to "the subsequent processing is performed only when notZeroCost is not false and the cost values of the four points located 1 integer pixel above, below, left of, and right of the current optimal integer pixel have all been calculated in step b1; otherwise, IntegerDeltaMV is used directly to adjust the original motion vector". In an example, "subsequent processing" refers to the sub-pixel offset calculation process in step b22.
- In an example, the sub-pixel offset calculation process in step b22 needs to use the cost values of the four points located 1 integer pixel above, below, left of, and right of the optimal integer pixel. Therefore, having the cost values of these four points already calculated in step b1 may be a necessary condition.
- Embodiment 12: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is: the condition in step b21, "only when notZeroCost is not false and deltaMV is (0, 0) can the subsequent processing be performed; otherwise, IntegerDeltaMV is used to adjust the original motion vector", is modified to "as long as the cost values of the four points located 1 integer pixel above, below, left of, and right of the current optimal integer pixel have been calculated in step b1, the subsequent processing (that is, the sub-pixel offset calculation process) is performed; otherwise, the original motion vector is adjusted by IntegerDeltaMV".
- Embodiment 13: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation is similar to that of Embodiment 7. The difference is: the condition in step b21, "only when notZeroCost is not false and deltaMV is (0, 0) can the subsequent processing be performed; otherwise, IntegerDeltaMV is used directly to adjust the original motion vector", is modified to "if the cost values of the four points located 1 integer pixel above, below, left of, and right of the current optimal integer pixel have been calculated in step b1, the subsequent processing (the sub-pixel offset calculation process in step b22) is performed; otherwise, step b23 is used for processing".
- Step b23: Take the integer pixel nearest to the current optimal integer pixel MV_inter_org whose four surrounding points (1 integer pixel above, below, left, and right) all have cost values already calculated in step b1, and denote it MV_inter_nearest. Then, the sub-pixel offset calculation process of step b22 is performed with MV_inter_nearest as the center, that is, SPMV is obtained with MV_inter_nearest as the center.
- For example, if the cost values of the four points located 1 integer pixel above, below, left of, and right of the current optimal integer pixel MV_inter_org have not all been calculated in step b1, then an integer pixel MV_inter_nearest is selected from the periphery of the optimal integer pixel MV_inter_org such that the cost values of the four points located 1 integer pixel above, below, left of, and right of MV_inter_nearest have all been calculated in step b1.
- Then, the integer pixel MV_inter_nearest can be taken as the current optimal integer pixel, and the SPMV can be obtained with the integer pixel MV_inter_nearest as the center.
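The fallback of step b23 can be sketched as follows. This is a hedged, illustrative sketch: the helper names are hypothetical, costed_points stands for the set of integer positions whose cost values were computed in step b1, and the nearest-first search uses squared Euclidean distance as a simplifying assumption.

```python
# Sketch of step b23: if the four 1-integer-pixel neighbours (above, below,
# left, right) of MV_inter_org were not all costed in step b1, pick the nearest
# integer pixel whose four neighbours were all costed (MV_inter_nearest).

def nearest_fully_costed_point(mv_inter_org, costed_points):
    def fully_costed(p):
        x, y = p
        return all(q in costed_points
                   for q in [(x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)])
    if fully_costed(mv_inter_org):
        return mv_inter_org
    # Search outward from MV_inter_org, nearest candidates first.
    candidates = sorted(costed_points,
                        key=lambda p: (p[0] - mv_inter_org[0]) ** 2
                                      + (p[1] - mv_inter_org[1]) ** 2)
    for p in candidates:
        if fully_costed(p):
            return p
    return None
```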
- If x0/y0 is greater than 2N, x0/y0 can be assigned the value 2N; if x0/y0 is less than -2N, x0/y0 can be assigned the value -2N.
- Embodiment 14: In the above embodiments, it is necessary to determine the edge motion vectors corresponding to the center motion vector. For example, the center motion vector (x, y) is shifted by S in different directions, and the edge motion vector (x, y+S), edge motion vector (x, y-S), edge motion vector (x+S, y), edge motion vector (x-S, y), and edge motion vector (x+right, y+down) are obtained in sequence. Alternatively, the center motion vector (x, y) is shifted by S in different directions, and the edge motion vector (x, y-S), edge motion vector (x, y+S), edge motion vector (x-S, y), edge motion vector (x+S, y), and edge motion vector (x+right, y+down) are obtained in sequence.
- For example, if S is 1, the 5 edge motion vectors are obtained in the order (0, 1), (0, -1), (1, 0), (-1, 0), (right, down).
- Alternatively, the 5 edge motion vectors are obtained in the order (0, -1), (0, 1), (-1, 0), (1, 0), (right, down).
- Embodiment 15: In the above embodiments, the default value of the edge motion vector (x+right, y+down) is (x-S, y-S). If the cost value of the edge motion vector (x+S, y) is less than the cost value of the edge motion vector (x-S, y), then right is S (modified from -S to S); if the cost value of the edge motion vector (x, y+S) is less than the cost value of the edge motion vector (x, y-S), then down is S (modified from -S to S).
- Alternatively, if the cost value of the edge motion vector (x+S, y) is less than or equal to the cost value of the edge motion vector (x-S, y), then right is S (modified from -S to S); if the cost value of the edge motion vector (x, y+S) is less than or equal to the cost value of the edge motion vector (x, y-S), then down is S (modified from -S to S).
- For example, if the cost value of the edge motion vector (1, 0) is less than or equal to the cost value of the edge motion vector (-1, 0), then right is 1; if the cost value of the edge motion vector (0, 1) is less than or equal to the cost value of the edge motion vector (0, -1), then down is 1.
- the default value is (-1, -1).
- If the cost value of the edge motion vector (1, 0) is less than the cost value of the edge motion vector (-1, 0), then right is 1; if the cost value of the edge motion vector (0, 1) is less than the cost value of the edge motion vector (0, -1), then down is 1. Alternatively, if the cost value of the edge motion vector (1, 0) is less than or equal to the cost value of the edge motion vector (-1, 0), then right is 1; if the cost value of the edge motion vector (0, 1) is less than or equal to the cost value of the edge motion vector (0, -1), then down is 1.
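The determination of (right, down) in Embodiment 15 can be sketched as follows; this is an illustrative sketch, where default_fifth_edge is a hypothetical name and cost is a mapping from edge offset to cost value. The inclusive flag switches between the strict ("less than") and non-strict ("less than or equal to") variants described above.

```python
# Sketch of Embodiment 15: (right, down) defaults to (-S, -S) and each
# component flips to +S when the cost on the positive side compares favourably
# with the cost on the negative side.

def default_fifth_edge(cost, s, inclusive=False):
    cmp = (lambda a, b: a <= b) if inclusive else (lambda a, b: a < b)
    right = s if cmp(cost[(s, 0)], cost[(-s, 0)]) else -s
    down = s if cmp(cost[(0, s)], cost[(0, -s)]) else -s
    return right, down
```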
- Embodiment 16: In the above embodiments, for each sub-block of the current block, the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
- Take a sub-block, such as each sub-block of the current block with a size of dx*dy, as an example.
- Step c1: With the initial motion vector as the center, select some or all of the motion vectors from the motion vectors that surround and include the initial motion vector, and use the selected motion vectors as candidate motion vectors.
- the initial motion vector may be the first original motion vector or the second original motion vector.
- the first original motion vector is taken as an example, that is, the initial motion vector is the first original motion vector.
- In an example, with the initial motion vector as the center, some or all of the motion vectors may be selected from the (2*SR+1)*(2*SR+1) motion vectors that surround and include the initial motion vector, and the selected motion vectors are determined as candidate motion vectors; here, SR is the search range.
- In an example, the search order of the motion vectors can be from left to right and from top to bottom.
- For example, all 25 motion vectors that surround and include the initial motion vector are selected, and the selected motion vectors are determined as candidate motion vectors;
- the search order of motion vectors is: {Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2)}.
- Alternatively, the search order of motion vectors is: {Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-1,2), Mv(0,2), Mv(1,2)}.
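The left-to-right, top-to-bottom scan order above can be generated programmatically; the sketch below assumes SR = 2 so that (2*SR+1)*(2*SR+1) = 25 offsets result, and the helper name is illustrative.

```python
# Sketch: generate candidate offsets in raster order (left to right within a
# row, rows from top to bottom), matching the 25-point list in the text.

def raster_scan_candidates(sr):
    return [(dx, dy) for dy in range(-sr, sr + 1) for dx in range(-sr, sr + 1)]
```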
- Step c2: According to the first pixel value of the first reference block and the second pixel value of the second reference block, obtain the third cost value corresponding to the first original motion vector (that is, the initial motion vector) and the fourth cost value corresponding to each candidate motion vector.
- the sub-reference block A1 corresponding to the first original motion vector may be obtained by copying from the first reference block, and the sub-reference block A1 may be the sub-reference block of the first original motion vector in the first reference block. Then, the sub-reference block B1 corresponding to the second original motion vector may be copied from the second reference block, and the sub-reference block B1 is the sub-reference block of the second original motion vector in the second reference block. Then, the first pixel value of the sub-reference block A1 and the second pixel value of the sub-reference block B1 can be used to obtain the third generation value corresponding to the first original motion vector.
- the sub-reference block A2 corresponding to the candidate motion vector can be obtained by copying from the first reference block, and the sub-reference block A2 is the sub-reference block of the candidate motion vector in the first reference block. Then, the sub-reference block B2 corresponding to the symmetric motion vector of the candidate motion vector is copied from the second reference block, and the sub-reference block B2 is the sub-reference block of the symmetric motion vector in the second reference block. The first pixel value of the sub-reference block A2 and the second pixel value of the sub-reference block B2 are used to obtain the fourth generation value corresponding to the candidate motion vector.
- Step c3: According to the third cost value and the fourth cost values, a motion vector is selected from the first original motion vector and the candidate motion vectors, and the selected motion vector is determined as the optimal motion vector. For example, the motion vector with the smallest cost value (which may be the first original motion vector or any candidate motion vector) may be used as the optimal motion vector.
- Step c4 Determine the first integer pixel motion vector adjustment value (used to adjust the first original motion vector) and the second integer pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector.
- In an example, the first integer-pixel motion vector adjustment value is determined according to the optimal motion vector and the first original motion vector, and the second integer-pixel motion vector adjustment value is determined according to the first integer-pixel motion vector adjustment value; the second integer-pixel motion vector adjustment value is symmetric to the first integer-pixel motion vector adjustment value.
- For example, the first integer-pixel motion vector adjustment value is the difference between the optimal motion vector (4, 6) and the first original motion vector (4, 4), that is, the first integer-pixel motion vector adjustment value is (0, 2). Then, the second integer-pixel motion vector adjustment value is determined according to the first integer-pixel motion vector adjustment value (0, 2): it can be (0, -2), that is, the symmetric value of (0, 2).
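The symmetric relation of step c4 can be illustrated with a small sketch (illustrative names, not the patent's normative code), using the example values from the text.

```python
# Sketch of step c4: the first integer-pixel adjustment value is the optimal
# motion vector minus the first original motion vector; the second adjustment
# value is its symmetric value.

def integer_adjustments(optimal_mv, org_mv0):
    delta0 = (optimal_mv[0] - org_mv0[0], optimal_mv[1] - org_mv0[1])
    delta1 = (-delta0[0], -delta0[1])  # symmetric value of the first adjustment
    return delta0, delta1
```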
- Step c5: Determine the first sub-pixel motion vector adjustment value (used to adjust the first original motion vector) and the second sub-pixel motion vector adjustment value (used to adjust the second original motion vector) according to the optimal motion vector. For example, the first sub-pixel motion vector adjustment value is determined according to the cost value corresponding to the optimal motion vector and the cost values corresponding to the edge motion vectors of the optimal motion vector, and then the second sub-pixel motion vector adjustment value is determined according to the first sub-pixel motion vector adjustment value.
- x0 = N*(E(-1,0) - E(1,0)) / (E(-1,0) + E(1,0) - 2*E(0,0))
- y0 = N*(E(0,-1) - E(0,1)) / (E(0,-1) + E(0,1) - 2*E(0,0))
- For example, N may be 1, 2, 4, or 8.
- SPMV = deltaMv/2N; if the current pixel accuracy of the motion vector is 1/16, then SPMV is (x0/16, y0/16).
- SPMV is the first sub-pixel motion vector adjustment value.
- E(0,0) represents the cost value of the optimal motion vector;
- E(-1,0) represents the cost value of the edge motion vector (-1,0) of the optimal motion vector (0,0), with the optimal motion vector as the center;
- E(1,0) represents the cost value of the edge motion vector (1,0) of the optimal motion vector (0,0), with the optimal motion vector as the center;
- E(0,-1) represents the cost value of the edge motion vector (0,-1) of the optimal motion vector (0,0), with the optimal motion vector as the center;
- E(0,1) represents the cost value of the edge motion vector (0,1) of the optimal motion vector (0,0), with the optimal motion vector as the center.
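The sub-pixel offset formulas above, together with the earlier [-2N, 2N] clamping and the 1/16 accuracy example, can be sketched as follows. This is a hedged sketch: the function name is hypothetical, e stands for the cost values E(dx, dy) around the optimal motion vector, and the use of integer truncation is an assumption rather than something the text specifies.

```python
# Sketch of step c5: parabolic-fit sub-pixel offset from the five cost values,
# with x0/y0 clamped to [-2N, 2N] and SPMV expressed at 1/16-pixel accuracy.

def subpixel_offset(e, n=8, accuracy=16):
    def component(neg, pos):
        denom = neg + pos - 2 * e[(0, 0)]
        if denom == 0:
            return 0
        v = int(n * (neg - pos) / denom)  # truncation is an assumption here
        return max(-2 * n, min(2 * n, v))
    x0 = component(e[(-1, 0)], e[(1, 0)])
    y0 = component(e[(0, -1)], e[(0, 1)])
    return x0 / accuracy, y0 / accuracy
```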
- the second sub-pixel motion vector adjustment value can be determined according to the first sub-pixel motion vector adjustment value, and the second sub-pixel motion vector adjustment value is the first sub-pixel motion vector The symmetric value of the adjustment value. For example, if the first sub-pixel motion vector adjustment value is (1, 0), the second sub-pixel motion vector adjustment value is (-1, 0), that is, the symmetric value of (1, 0).
- Step c6 Adjust the first original motion vector according to the first full-pixel motion vector adjustment value and/or the first sub-pixel motion vector adjustment value to obtain a first target motion vector corresponding to the first original motion vector.
- For example, the first target motion vector = the first original motion vector + the first integer-pixel motion vector adjustment value + the first sub-pixel motion vector adjustment value.
- Step c7 Adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and/or the second sub-pixel motion vector adjustment value to obtain a second target motion vector corresponding to the second original motion vector.
- For example, the second target motion vector = the second original motion vector + the second integer-pixel motion vector adjustment value + the second sub-pixel motion vector adjustment value.
- Embodiment 17: In the above embodiments, for each sub-block of the current block, the first original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the first target motion vector of the sub-block; the second original motion vector is adjusted according to the first pixel value of the first reference block and the second pixel value of the second reference block to obtain the second target motion vector of the sub-block.
- Take a sub-block, such as each sub-block of the current block with a size of dx*dy, as an example. Denote the first original motion vector as Org_MV0, the second original motion vector as Org_MV1, the first target motion vector as Refined_MV0, and the second target motion vector as Refined_MV1.
- Step d1: There is no need to perform an iterative process; that is, all candidate motion vectors to be processed can be selected at one time, instead of an iterative process in which some motion vectors are selected in the first iteration and some in the second iteration. Based on this, since all candidate motion vectors to be processed can be selected at one time, these candidate motion vectors can be processed in parallel to obtain the cost value of each candidate motion vector, thereby reducing computational complexity and improving coding performance.
- Step d2 Determine the value of IntegerDeltaMV according to the optimal motion vector.
- the final value of IntegerDeltaMV is the first integer-pixel motion vector adjustment value.
- Step d3 Obtain the optimal sub-pixel offset MV with the optimal motion vector as the center, record the optimal sub-pixel offset as SPMV, and the value of SPMV is the first sub-pixel motion vector adjustment value.
- For the implementation process of step d3, refer to step b2 above, which will not be repeated here.
- Embodiment 18 In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation manner is similar to that of Embodiment 16 and Embodiment 17.
- Embodiment 19: In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation manner is similar to that of Embodiment 16 and Embodiment 17.
- Since all candidate motion vectors to be processed are selected at one time, these candidate motion vectors can be processed in parallel to obtain the cost value of each candidate motion vector, thereby reducing computational complexity and improving coding performance.
- In an example, with the original motion vector as the center, some motion vectors whose offsets do not exceed the SR range are selected from the (2*SR+1)*(2*SR+1) points, giving N candidate points (N is greater than or equal to 1 and less than or equal to (2*SR+1)*(2*SR+1)). The cost values of the motion vectors corresponding to these N points are then determined.
- In an example, the cost values of these N points may be scanned in a certain order, and the motion vector with the smallest cost value may be selected as the optimal motion vector. If cost values are equal, the candidate point ranked earlier in the scan order is preferred.
- the method for determining the cost value may be: determining the cost value based on the down-sampled SAD of the two predicted values obtained by the candidate motion vector.
- the number of candidate points may be 25, and the order of these candidate points may be from left to right and from top to bottom.
- the order of these candidate points can be: {Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2)}.
- Alternatively, the sequence of these candidate points can be: {Mv(0,0), Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2)}.
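The selection rule that pairs a scan order like the ones above with the cost values can be sketched as follows; the names are illustrative, and the tie-break (the earlier candidate in the scan order wins on equal cost) follows the description above.

```python
# Sketch: scan candidates in order, keep the motion vector with the smallest
# cost; the strict '<' comparison makes the earlier candidate win ties.

def select_optimal(candidates, cost_of):
    best_mv, best_cost = None, None
    for mv in candidates:
        c = cost_of(mv)
        if best_cost is None or c < best_cost:
            best_mv, best_cost = mv, c
    return best_mv, best_cost
```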
- The optimal offset MV can be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value.
- the order of these candidate points is: {Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-1,2), Mv(0,2), Mv(1,2)}.
- Alternatively, the sequence of these candidate points is: {Mv(0,0), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(0,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-1,2), Mv(0,2), Mv(1,2)}.
- The optimal offset MV can be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value.
- the number of candidate points can be 25.
- the motion vector (0, 0) is used as the center, and the order from near to far from the center is adopted.
- the sequence of these candidate points can be: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(0,2), Mv(-2,0), Mv(0,-2), Mv(2,0), Mv(1,2), Mv(-1,2), Mv(-2,1), Mv(-2,-1), Mv(-1,-2), Mv(1,-2), Mv(2,-1), Mv(2,1), Mv(-2,2), Mv(-2,-2), Mv(2,-2), Mv(2,2)}.
- The optimal offset MV can be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value.
- the number of candidate points can be 21.
- the motion vector (0, 0) is used as the center, and the distance from the center is in the order from near to far.
- the order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(0,2), Mv(-2,0), Mv(0,-2), Mv(2,0), Mv(1,2), Mv(-1,2), Mv(-2,1), Mv(-2,-1), Mv(-1,-2), Mv(1,-2), Mv(2,-1), Mv(2,1)}.
- the number of candidate points can be 13.
- the motion vector (0, 0) is the center, and the distance from the center is in the order from near to far.
- the order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(0,2), Mv(-2,0), Mv(0,-2), Mv(2,0)}.
- The optimal offset MV can be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value. For the determination method of the sub-pixel motion vector adjustment value, refer to the above-mentioned embodiments.
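One way to realize a near-to-far ordering like the lists above is to sort the offsets within the search range by squared distance from the center Mv(0,0). The tie-breaking among equidistant points here is a simplifying assumption and need not reproduce the exact sequences given in the text; the helper name is illustrative.

```python
# Sketch: candidate offsets ordered from the centre outward (near to far),
# optionally truncated to the first `limit` points (e.g. 25, 21, or 13).

def near_to_far(sr, limit=None):
    pts = [(dx, dy) for dy in range(-sr, sr + 1) for dx in range(-sr, sr + 1)]
    pts.sort(key=lambda p: p[0] ** 2 + p[1] ** 2)  # stable sort keeps raster order on ties
    return pts[:limit] if limit else pts
```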
- In an example, if the cost value SAD(0, 0) of the first candidate motion vector Mv(0, 0) is less than the threshold dx*dy, the subsequent candidate motion vectors are not tested; that is, the optimal integer-pixel offset of the sub-block is Mv(0, 0).
- If the cost value of a certain candidate motion vector is 0, the subsequent candidate motion vectors are not tested, and the current candidate motion vector is used as the optimal integer-pixel offset.
- In these cases, the subsequent sub-pixel offset calculation process is not performed; that is, the target motion vector of the sub-block is obtained directly through the integer-pixel offset.
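The two early-termination rules above can be sketched as follows; this is a hedged, illustrative sketch with hypothetical names, and cost_of stands for the (possibly down-sampled) SAD of a candidate.

```python
# Sketch: stop the scan when the first candidate Mv(0,0) already costs less
# than the threshold dx*dy, or when any candidate reaches cost 0; in either
# case later candidates are not tested.

def scan_with_early_exit(candidates, cost_of, dx, dy):
    best_mv, best_cost = None, None
    for idx, mv in enumerate(candidates):
        c = cost_of(mv)
        if best_cost is None or c < best_cost:
            best_mv, best_cost = mv, c
        if idx == 0 and c < dx * dy:  # SAD(0,0) below threshold: stop at once
            break
        if c == 0:                    # zero cost: skip remaining candidates
            break
    return best_mv, best_cost
```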
- Embodiment 20 In order to adjust the first original motion vector Org_MV0 and the second original motion vector Org_MV1 to the first target motion vector Refined_MV0 and the second target motion vector Refined_MV1, the implementation manner is similar to that of Embodiment 16 and Embodiment 17.
- In this embodiment, since all candidate motion vectors to be processed are selected at one time, these candidate motion vectors can be processed in parallel to obtain the cost value of each candidate motion vector, thereby reducing computational complexity and improving coding performance.
- In an example, with the original motion vector as the center, a part of the motion vectors whose offsets do not exceed the SR range is selected from the (2*SR+1)*(2*SR+1) points.
- From these N candidate points (N is greater than or equal to 1 and less than or equal to (2*SR+1)*(2*SR+1)), the cost values of the motion vectors corresponding to the N points are determined. The cost values of these N points are scanned in a certain order, and the motion vector with the smallest cost value is selected as the optimal motion vector. If cost values are equal, the candidate point ranked earlier in the scan order is preferred.
- The difference from Embodiment 19 is that the positions of the candidate points in Embodiment 19 are fixed, that is, independent of the original motion vector, while the positions of the candidate points in Embodiment 20 are related to the original motion vector, which will be described below in conjunction with several specific examples.
- the number of candidate points may be 13.
- the motion vector (0, 0) is used as the center, and the order from near to far from the center is adopted.
- the order of the candidate points in the first layer from the center has nothing to do with the size of the original motion vector, while the order of the candidate points in the second layer from the center is related to the size of the original motion vector.
- the order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(sign_H*2,0), Mv(sign_H*2,sign_V*1), Mv(0,sign_V*2), Mv(0,sign_V*2)}.
- The first original motion vector is denoted as MV0, its horizontal component as MV0_Hor, and its vertical component as MV0_Ver.
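The extracted text does not define sign_H and sign_V. A common reading, taken here purely as an assumption, is that they are the signs of MV0's horizontal and vertical components (MV0_Hor, MV0_Ver). The sketch below builds a candidate order whose first layer is fixed and whose second layer depends on those signs, following the distinct entries of the list above; all names are illustrative.

```python
# Hypothetical sketch: sign_H/sign_V derived from MV0's components
# (an assumption, not stated in the extracted text).

def layer_signs(mv0_hor, mv0_ver):
    sign_h = 1 if mv0_hor >= 0 else -1
    sign_v = 1 if mv0_ver >= 0 else -1
    return sign_h, sign_v

def candidate_order(mv0_hor, mv0_ver):
    sign_h, sign_v = layer_signs(mv0_hor, mv0_ver)
    # First layer: centre plus its eight neighbours, independent of MV0.
    first_layer = [(0, 0), (-1, 0), (0, -1), (1, 0), (0, 1),
                   (-1, 1), (-1, -1), (1, -1), (1, 1)]
    # Second layer: direction depends on the signs of MV0's components.
    second_layer = [(sign_h * 2, 0), (sign_h * 2, sign_v * 1), (0, sign_v * 2)]
    return first_layer + second_layer
```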
- The optimal offset MV can be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value. For the determination method of the sub-pixel motion vector adjustment value, refer to the above-mentioned embodiments.
- the motion vector (0, 0) is used as the center, and the distance from the center is in the order from near to far.
- the order of the candidate points in the first layer from the center is independent of the size of the original motion vector, while the order of the candidate points in the second layer from the center is related to the size of the original motion vector.
- the order of these candidate points is: {Mv(0,0), Mv(-1,0), Mv(0,-1), Mv(1,0), Mv(0,1), Mv(-1,1), Mv(-1,-1), Mv(1,-1), Mv(1,1), Mv(sign_H*2,0), Mv(sign_H*2,sign_V*1), Mv(0,sign_V*2), Mv(0,sign_V*2)}.
- The first original motion vector is denoted as MV0, its horizontal component as MV0_Hor, and its vertical component as MV0_Ver.
- The optimal offset MV can be used to determine the integer-pixel motion vector adjustment value and the sub-pixel motion vector adjustment value. For the determination method of the sub-pixel motion vector adjustment value, refer to the above-mentioned embodiments.
- Embodiment 21: The above embodiments involve obtaining, according to the first pixel value of the first reference block and the second pixel value of the second reference block, the first cost value corresponding to the center motion vector and the second cost value corresponding to the edge motion vector, as well as the third cost value corresponding to the first original motion vector and the fourth cost value corresponding to the candidate motion vector. In an example, the first cost value corresponding to the center motion vector, the second cost value corresponding to the edge motion vector, the third cost value corresponding to the first original motion vector, and the fourth cost value corresponding to the candidate motion vector are obtained according to the first pixel value that is not down-sampled and the second pixel value that is not down-sampled. Alternatively, a down-sampling operation is performed on the first pixel value and on the second pixel value, and the first cost value corresponding to the center motion vector, the second cost value corresponding to the edge motion vector, the third cost value corresponding to the first original motion vector, and the fourth cost value corresponding to the candidate motion vector are obtained according to the down-sampled first pixel value and the down-sampled second pixel value.
- the method of determining the cost value is similar.
- In an example, the sub-reference block A1 corresponding to the center motion vector can be copied from the first reference block, and the sub-reference block B1 corresponding to the symmetric motion vector of the center motion vector can be copied from the second reference block.
- the first pixel value of the sub-reference block A1 and the second pixel value of the sub-reference block B1 are used to obtain the cost value corresponding to the center motion vector.
- the sub-reference block A2 corresponding to the edge motion vector can be copied from the first reference block, and the sub-reference block B2 corresponding to the symmetrical motion vector of the edge motion vector can be copied from the second reference block.
- Then, the first pixel value of the sub-reference block A2 and the second pixel value of the sub-reference block B2 can be used to obtain the cost value corresponding to the edge motion vector, and so on.
- That is, for any motion vector whose cost value is to be determined, the sub-reference block corresponding to the motion vector can be obtained from the first reference block, the sub-reference block corresponding to the symmetric motion vector of the motion vector can be obtained from the second reference block, and then the pixel values of the two sub-reference blocks are used to obtain the cost value corresponding to the motion vector; this process will not be repeated.
- Embodiment 22: On the basis of Embodiment 21, the cost value corresponding to the motion vector is obtained according to the un-down-sampled first pixel value (that is, the un-down-sampled pixel value of the sub-reference block in the first reference block) and the un-down-sampled second pixel value (that is, the un-down-sampled pixel value of the sub-reference block in the second reference block).
- The cost value is determined based on the SAD of all pixel values of the sub-reference block pred0 and the sub-reference block pred1; there is no need to vertically down-sample the pixels of the sub-reference block pred0 and the sub-reference block pred1.
- the cost value calculation formula is: cost = Σ(i=1..W) Σ(j=1..H) abs(pred0(i, j) - pred1(i, j)), where:
- cost can represent the cost value
- W can be the width value of the sub-reference block
- H can be the height value of the sub-reference block
- pred0(i, j) can represent the pixel value of the i-th column and the j-th row of the sub-reference block pred0;
- pred1(i, j) can represent the pixel value of the i-th column and the j-th row of the sub-reference block pred1;
- abs(x) can represent the absolute value of x.
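A minimal sketch of this cost, assuming the sub-reference blocks are plain row-major arrays (an illustrative representation, not the patent's data layout):

```python
# Sketch of the Embodiment 22 cost: SAD over all W*H pixel values of the two
# sub-reference blocks, with no vertical down-sampling.

def sad_cost(pred0, pred1):
    h = len(pred0)     # height of the sub-reference block
    w = len(pred0[0])  # width of the sub-reference block
    return sum(abs(pred0[j][i] - pred1[j][i])
               for j in range(h) for i in range(w))
```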
- Embodiment 23: On the basis of Embodiment 21, the first pixel value can be down-sampled, and the second pixel value can be down-sampled; the cost value corresponding to the motion vector can be obtained according to the down-sampled first pixel value (that is, the down-sampled pixel value of the sub-reference block in the first reference block) and the down-sampled second pixel value (that is, the down-sampled pixel value of the sub-reference block in the second reference block).
- the cost value is determined based on the SAD of all pixel values of the sub-reference block pred0 and the sub-reference block pred1.
- the pixel values of the sub-reference block pred0 and the sub-reference block pred1 are vertically down-sampled by a factor of N (N is an integer greater than 0, and may be 2).
- the cost value calculation formula is: cost = Σ_i Σ_j abs(pred0(1+N(i-1), j) - pred1(1+N(i-1), j)), where the first index takes the values 1, 1+N, 1+2N, ...
- cost can represent the cost value
- W can be the width value of the sub-reference block
- H can be the height value of the sub-reference block
- N can represent the down-sampling parameter, which is an integer greater than 0, which can be 2
- pred0(1+N(i-1), j) can represent the pixel value of the j-th row in the 1+N(i-1) column of the sub-reference block pred0
- pred1(1+N(i-1), j) can represent the pixel value of the j-th row in the 1+N(i-1) column of the sub-reference block pred1
- abs(x) can represent the absolute value of x.
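The down-sampled cost of Embodiment 23 can be sketched the same way; a minimal Python illustration (names are assumptions), taking every N-th line of the two sub-reference blocks:

```python
def sad_downsampled(pred0, pred1, n=2):
    # SAD computed only over lines 1, 1+N, 1+2N, ... (i.e. indices
    # 0, n, 2n, ... in 0-based terms) after N-fold down-sampling.
    return sum(abs(a - b)
               for row0, row1 in zip(pred0[::n], pred1[::n])
               for a, b in zip(row0, row1))
```

With n=1 this degenerates to the full SAD of Embodiment 22, which matches the relationship between the two embodiments.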
- Embodiment 24 On the basis of Embodiment 21, the first pixel value is shifted and down-sampled, and the second pixel value is shifted and down-sampled; the cost value corresponding to the motion vector is obtained according to the operated first pixel value (the shifted and down-sampled pixel value of the sub-reference block in the first reference block) and the operated second pixel value (the shifted and down-sampled pixel value of the sub-reference block in the second reference block).
- the sub-reference block in the first reference block is pred 0
- the sub-reference block in the second reference block is pred 1
- both pred0 and pred1 are stored in D bits, that is, each pixel value in pred0 is stored in D bits, and each pixel value in pred1 is stored in D bits.
- the cost value is determined according to the SAD of all pixel values of the sub-reference block pred0 and the sub-reference block pred1.
- the pixel values of the sub-reference block pred0 and the sub-reference block pred1 are vertically down-sampled by a factor of N (N is an integer greater than 0, and may be 2). Based on all pixel values of the sub-reference block pred0 and the sub-reference block pred1, the cost value calculation formula is: cost = Σ_i Σ_j abs(pred0(1+N(i-1), j) - pred1(1+N(i-1), j)), where the first index takes the values 1, 1+N, 1+2N, ...
- cost represents the cost value
- W is the width value of the sub-reference block
- H is the height value of the sub-reference block
- N is the down-sampling parameter, which is an integer greater than 0, which can be 2
- pred 0 (1+ N(i-1), j) represents the pixel value of the jth row in the 1+N(i-1) column of the sub-reference block pred 0
- pred1(1+N(i-1), j) represents the pixel value of the j-th row in the 1+N(i-1) column of the sub-reference block pred1
- abs(x) represents the absolute value of x; it can be seen from the above that only lines 1, N+1, 2N+1, and so on, contribute to the sum of the absolute values of the differences.
- when D is greater than 8, the cost value calculation formula can additionally right-shift each pixel value (for example, by D-8 bits, so that the values are compared at 8-bit precision) before taking the absolute differences.
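A hedged sketch of the shifted, down-sampled cost of Embodiment 24. The shift amount D-8 is an assumption chosen so that D-bit samples are compared at 8-bit precision; the patent's exact formula is not reproduced here:

```python
def sad_shift_downsampled(pred0, pred1, d, n=2):
    # Hypothetical sketch: samples are stored in D bits; when D > 8 each
    # value is right-shifted by (D - 8) before the down-sampled SAD.
    shift = max(d - 8, 0)
    return sum(abs((a >> shift) - (b >> shift))
               for row0, row1 in zip(pred0[::n], pred1[::n])
               for a, b in zip(row0, row1))
```

For d=8 the shift is zero and the result equals the plain down-sampled SAD of Embodiment 23.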
- Embodiment 25 In the above embodiments, for each sub-block of the current block, the prediction value of the sub-block is determined according to the first target motion vector and the second target motion vector of the sub-block, and the prediction value of the current block is determined according to the prediction value of each sub-block. For example, based on the first target motion vector and the second target motion vector of the sub-block, reference blocks in two directions (i.e., the third reference block and the fourth reference block, which may include the three components) can be obtained through interpolation (such as 8-tap interpolation; interpolation is required because the target motion vector may point to a sub-pixel position). Then, weighting is performed according to the third pixel value of the third reference block and the fourth pixel value of the fourth reference block to obtain the final predicted value (such as the predicted value of the three components).
- interpolation such as 8-tap interpolation
- the optimal motion vector is the same as the initial motion vector (that is, the first original motion vector or the second original motion vector)
- the third reference block corresponding to the sub-block is determined from the first reference frame based on the first target motion vector of the sub-block
- the fourth reference block corresponding to the sub-block is determined from the second reference frame based on the second target motion vector of the sub-block.
- the pixel value of the third reference block and the pixel value of the fourth reference block are weighted to obtain the predicted value of the sub-block.
- a third reference block of size dx*dy is determined from the first reference frame based on the first target motion vector.
- a reference block with a size of A*B is determined from the first reference frame.
- the size of A*B is related to the interpolation method, such as A is greater than dx, and B is greater than dy, and there is no restriction on this.
- a fourth reference block with a size of dx*dy is determined from the second reference frame based on the second target motion vector.
- a reference block with a size of A*B is determined from the second reference frame.
- the size of A*B is related to the interpolation mode, such as A is greater than dx and B is greater than dy, and there is no restriction on this.
- the fifth reference block can be determined from the first reference frame, and the fifth reference block can be extended to obtain the sixth reference block; then, based on the first target motion vector of the sub-block, a third reference block corresponding to the sub-block is selected from the sixth reference block. Likewise, the seventh reference block may be determined from the second reference frame, and the seventh reference block may be expanded to obtain the eighth reference block; based on the second target motion vector of the sub-block, the fourth reference block corresponding to the sub-block may be selected from the eighth reference block. Then, the pixel value of the third reference block and the pixel value of the fourth reference block may be weighted to obtain the predicted value of the sub-block.
- a fifth reference block of size dx*dy is determined from the first reference frame based on the first original motion vector. For example, a reference block with a size of A*B is determined from the first reference frame.
- the size of A*B is related to the interpolation method, such as A is greater than dx, and B is greater than dy, and there is no restriction on this.
- the fifth reference block is filled up, down, left, and right by copying adjacent values, and the filled reference block is regarded as the sixth reference block.
- the size of the sixth reference block can be larger than dx*dy. Then, based on the first target motion vector of the sub-block, a third reference block with a size of dx*dy corresponding to the sub-block is selected from the sixth reference block.
- a seventh reference block of size dx*dy is determined from the second reference frame based on the second original motion vector. For example, a reference block with a size of A*B is determined from the second reference frame.
- the size of A*B is related to the interpolation mode, such as A is greater than dx and B is greater than dy, and there is no restriction on this.
- the seventh reference block can be filled up, down, left and right by copying adjacent values, and the filled reference block is used as the eighth reference block.
- the size of the eighth reference block can be larger than dx*dy. Then, based on the second target motion vector of the sub-block, a fourth reference block with a size of dx*dy corresponding to the sub-block is selected from the eighth reference block.
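The selection step above can be sketched as follows. This is an illustrative Python helper (the name, the SR margin parameter, and the integer-offset convention are assumptions): given an expanded reference block with an SR-sample margin on each side, the dx*dy sub-block is picked according to the integer offset of the target motion vector from the original motion vector:

```python
def select_sub_block(expanded, off_x, off_y, sr, dx, dy):
    # Pick the dx*dy sub-block from an expanded reference block whose
    # margin is `sr` samples on each side; (off_x, off_y) is the integer
    # offset of the target MV relative to the original MV, |off| <= sr.
    y0 = sr + off_y
    x0 = sr + off_x
    return [row[x0:x0 + dx] for row in expanded[y0:y0 + dy]]
```

With offset (0, 0) this returns the central dx*dy block, i.e. the case where the target motion vector equals the original motion vector.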
- Embodiment 26 After obtaining the target motion vectors, based on the target motion vector of each sub-block, the predicted values in two directions (i.e., for the three components of YUV, the predicted value of the third reference block and the predicted value of the fourth reference block) are obtained through an 8-tap interpolation filter and weighted to obtain the final predicted value. Or, based on the target motion vector of each sub-block, a bilinear interpolation filter (rather than an 8-tap interpolation filter) is used to obtain the predicted values in the two directions (i.e., for the three components of YUV, the predicted value of the third reference block and the predicted value of the fourth reference block), which are weighted to obtain the final predicted value.
- a bilinear interpolation filter (rather than an 8-tap interpolation filter)
- Embodiment 27 After obtaining the predicted values in the two directions, the final predicted value is obtained by means of weighted average (that is, the weights of the predicted values in the two directions are the same). Or, after obtaining the predicted values in the two directions, the final predicted value is obtained by weighted average, and the weights of the two predicted values may be different.
- the weight ratio of the two predicted values can be 1:2, 1:3, 2:1, etc.
- the weight table can include 1:2, 1:3, 2:1 and other weight ratios.
- the encoding end can determine the cost value of each weight ratio and determine the weight ratio with the smallest cost value. In this way, the encoding end can obtain the final predicted value through a weighted average based on the weight ratio with the smallest cost value.
- the encoded bit stream carries the index value of the weight ratio in the weight table.
- the decoding end parses the index value from the encoded bit stream, obtains the weight ratio corresponding to the index value from the weight table, and obtains the final predicted value through a weighted average based on that weight ratio.
- the weight table may include but is not limited to {-2, 3, 4, 5, 10}.
- the sum of the two weights may be 8.
- the weight may be a negative value, as long as the sum of the two weights is 8.
- the weight "-2" is a negative value.
- the weight of one predicted value is -2
- the weight of the other predicted value is 10, that is, the sum of the two weights is 8.
- the final predicted value (Predicted value 1*(-2)+predicted value 2*(8-(-2))).
- the weight "10” means that the weight of one predicted value is 10, and the weight of the other predicted value is -2, that is, the sum of the two weights is 8.
- the final predicted value (predicted value 1* (10)+predicted value 2*(-2)).
- the weight "3" means that the weight of one predicted value is 3, and the weight of the other predicted value is 5, that is, the sum of the two weights is 8.
- the final predicted value (predicted value 1*( 3)+predicted value 2*(5)).
- the weight "5" means that the weight of one predicted value is 5, and the weight of the other predicted value is 3, that is, the sum of the two weights is 8.
- the final predicted value (predicted value 1*( 5)+predicted value 2*(3)).
- the weight "4" means that the weight of one predicted value is 4, and the weight of the other predicted value is 4, that is, the sum of the two weights is 8.
- the final predicted value (predicted value 1*( 4)+predicted value 2*(4)).
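The weighting scheme above (two weights summing to 8, one of which may be negative) can be sketched in Python. The function name and the integer-division normalization are illustrative assumptions; the weight table {-2, 3, 4, 5, 10} is from the text:

```python
def weighted_pred(p0, p1, w0, total=8):
    # Weighted combination of two prediction values; the two weights sum
    # to `total` (8 in the example weight table), and w0 may be negative,
    # e.g. w0 = -2 pairs with 8 - (-2) = 10 for the other prediction.
    return (p0 * w0 + p1 * (total - w0)) // total
```

With w0 = 4 the two weights are equal, which corresponds to the plain weighted-average case described in Embodiment 27.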
- the third pixel value of the third reference block and the fourth pixel value of the fourth reference block can be obtained; then, the third pixel value of the third reference block and the fourth pixel value of the fourth reference block are weighted to obtain the final predicted value.
- the third pixel value, the first weight corresponding to the third pixel value, the fourth pixel value, and the second weight corresponding to the fourth pixel value are weighted to obtain the predicted value of the sub-block. If the final predicted value is obtained by means of weighted average (that is, the two weights are the same), the first weight and the second weight are the same.
- Embodiment 28 In the above embodiments, the first target motion vector and the second target motion vector of each sub-block of the current block can be saved, or the first original motion vector and the second original motion vector of each sub-block of the current block can be saved, or the first original motion vector, the second original motion vector, the first target motion vector and the second target motion vector of each sub-block of the current block can all be saved.
- these motion vectors can be used as a reference for encoding/decoding of subsequent blocks.
- the first target motion vector and the second target motion vector of each sub-block of the current block are saved as an example.
- the first target motion vector and the second target motion vector are used for loop filtering of the current frame; the first target motion vector and the second target motion vector are used for the time domain reference of subsequent frames; and/or, the first target motion vector and the second target motion vector are used for the spatial reference of the current frame.
- the first target motion vector and the second target motion vector of each sub-block of the current block can be used for motion compensation of the current block, and can also be used for time-domain reference of subsequent frames.
- the first target motion vector and the second target motion vector of each sub-block of the current block can be used for motion compensation of the current block, can also be used for the loop filtering process of the current block, and can also be used for the time domain reference of subsequent frames.
- the first target motion vector and the second target motion vector of each sub-block of the current block can be used for motion compensation of the current block, for the loop filtering process of the current block, and for the time domain reference of subsequent frames, and can also be used for the spatial reference of the current frame, which will be described below.
- the first target motion vector and the second target motion vector of each sub-block of the current block may be used for spatial reference by blocks in certain LCUs (Largest Coding Units) in the spatial domain. Since the coding/decoding order is from top to bottom and from left to right, the motion vector of the current block can be referenced by other blocks in the current LCU, and can also be referenced by blocks in subsequent adjacent LCUs. Since obtaining the target motion vector requires a large amount of calculation, if a subsequent block refers to the target motion vector of the current block, it has to wait for a long time. In order to avoid the time delay caused by excessive waiting, only a few spatially neighboring blocks are allowed to refer to the target motion vector of the current block, while the other blocks refer to the original motion vector of the current block.
- LCU: Largest Coding Unit
- these few blocks include the sub-blocks in the lower LCU and the lower-right LCU located below the current LCU, while the sub-blocks located in the right LCU and the left LCU cannot refer to the target motion vector of the current block.
- Embodiment 29 The following describes the adjustment process of the motion vector in conjunction with a specific example.
- the specific steps of the motion vector adjustment can be as follows.
- the "copy” below shows that it can be obtained without interpolation. If the MV (ie motion vector) is an integer pixel offset, it can be directly copied from the reference frame, otherwise it needs to be obtained by interpolation.
- Step e1 If the motion vector adjustment mode is activated for the current block, perform the following process.
- Step e2 Prepare reference pixel values (assuming that the width of the current block is W and the height is H).
- Step e21 Based on the original motion vectors (the original motion vector of list0 is recorded as Org_MV0, and the original motion vector of list1 is recorded as Org_MV1), copy two whole-pixel blocks of the three components with a size of (W+FS-1)*(H+FS-1) from the corresponding positions of the corresponding reference frames.
- Step e22 On the basis of the (W+FS-1)*(H+FS-1) whole-pixel block, the whole-pixel block of the three components is expanded by SR rows/columns up, down, left and right; the area obtained after expansion is (W+FS-1+2*SR)*(H+FS-1+2*SR), and the whole-pixel blocks of the three components are denoted as Pred_Inter0 and Pred_Inter1, as shown in FIG. 8.
- the size of the inner black area is the size of the current block
- the outer white area is the additional reference pixels required for the 8-tap filter interpolation of the original motion vector
- the outer black area is the target motion vector for the 8-tap filter interpolation Additional reference pixels required.
- for the inner black area (W*H) and the white area, the pixel values are obtained from the reference frame.
- the pixel values of the outer black area do not need to be obtained from the reference frame; they can be obtained by copying adjacent pixel values.
- the W+FS-1 pixel values of the first row of the white area are copied to the first SR rows of the outer black area, and the W+FS-1 pixel values of the last row of the white area are copied to the last SR rows of the outer black area.
- the H+FS-1 pixel values of the first column of the white area, together with the pixel values already obtained for the upper and lower SR rows of the outer black area, are copied to the first SR columns of the outer black area.
- the H+FS-1 pixel values of the last column of the white area, together with the pixel values already obtained for the upper and lower SR rows of the outer black area, are copied to the last SR columns of the outer black area.
- the H+FS-1 pixel values of the first column of the white area are copied to the pixel values of the first SR column of the outer black area.
- the H+FS-1 pixel values of the last column of the white area are copied to the last SR columns of the outer black area.
- the W+FS-1 pixel values of the first row of the white area, together with the pixel values already obtained for the left and right SR columns of the outer black area, are copied to the first SR rows of the outer black area.
- the W+FS-1 pixel values of the last row of the white area, together with the pixel values already obtained for the left and right SR columns of the outer black area, are copied to the last SR rows of the outer black area.
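The expansion described above can be sketched as a small Python routine (the function name is illustrative, and for simplicity the sketch fills the rows first and then the columns, i.e. one of the two orders given in the text):

```python
def expand_sr(block, sr):
    # Expand a block by SR rows/columns on every side by copying the
    # nearest edge values: first replicate the first/last rows up/down,
    # then replicate the first/last columns (corners included) left/right.
    rows = [block[0][:] for _ in range(sr)] \
         + [r[:] for r in block] \
         + [block[-1][:] for _ in range(sr)]
    return [r[:1] * sr + r + r[-1:] * sr for r in rows]
```

Because the corner samples of the outer area are produced from the already-filled rows, filling rows first and columns second (or vice versa) yields the same replicated corners here.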
- for the luminance component (because the luminance component is used to calculate the cost value in the subsequent search process), two initial reference prediction blocks of size (W+2*SR)*(H+2*SR) (denoted as Pred_Bilinear0 and Pred_Bilinear1) are obtained through bilinear interpolation based on the two whole-pixel reference blocks with an area of (W+FS-1)*(H+FS-1).
- FS is the number of filter taps, and the default is 8.
- SR is the search range, that is, the maximum horizontal/vertical component difference between the target motion vector and the original motion vector, and the default is 2.
- Pred_Bilinear0/1 are used in step e3.
- Step e3 Obtain a target motion vector for each dx*dy sub-block of the current block (the target motion vectors in the two directions are respectively denoted as Refined_MV0 and Refined_MV1).
- Step e31 Perform SR iterations to obtain the optimal whole-pixel offset of the whole-pixel MV point, which is recorded as IntegerDeltaMV; initialize IntegerDeltaMV to (0, 0), and perform the following process in each iteration:
- Step e311 Set deltaMV to (0, 0). If this is the first iteration, copy two prediction value blocks (actually the most central W*H blocks of Pred_Bilinear0/1) based on the original motion vectors in the reference pixels Pred_Bilinear0/1, and obtain the initial cost value based on these two prediction value blocks, that is, the SAD after vertical 2-fold down-sampling of the prediction value blocks in the two directions. If the initial cost value is less than dx*dy (where dx and dy are the width and height of the current sub-block), skip the subsequent search process directly, execute step e32, and set notZeroCost to false.
- Step e312 Centering on the above initial point, obtain 24 offset MVs (all referred to as MVOffsets) in the order Mv(-2,-2), Mv(-1,-2), Mv(0,-2), Mv(1,-2), Mv(2,-2), Mv(-2,-1), Mv(-1,-1), Mv(0,-1), Mv(1,-1), Mv(2,-1), Mv(-2,0), Mv(-1,0), Mv(1,0), Mv(2,0), Mv(-2,1), Mv(-1,1), Mv(0,1), Mv(1,1), Mv(2,1), Mv(-2,2), Mv(-1,2), Mv(0,2), Mv(1,2), Mv(2,2), and perform the calculation and comparison of the cost values of these offset MVs.
- for example, based on a certain MVOffset, two blocks of prediction values are obtained through MVOffset in the reference pixels Pred_Bilinear0/1 (that is, the W*H block in Pred_Bilinear0 whose center position is offset by MVOffset, and the W*H block in Pred_Bilinear1 whose center position is offset by -MVOffset (opposite to list0)); the down-sampled SAD of these two blocks is calculated as the cost value of MVOffset, and the MVOffset with the smallest cost value is kept (stored in deltaMV).
- Step e313 After one iteration, if the optimal MV is still the initial MV or the minimum cost value is 0, the next iteration of the search process is not performed; step e32 is executed, and notZeroCost is set to false.
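One iteration of the whole-pixel search of steps e311 to e313 can be sketched as follows. This is a minimal sketch: `cost_of` is a hypothetical callback returning the down-sampled SAD for a candidate MV, and the offset order matches the list in step e312 (the (0, 0) offset is the initial point itself and is evaluated as the starting cost):

```python
def search_iteration(cost_of, best_mv=(0, 0)):
    # Evaluate the 24 offsets around the current best whole-pixel point
    # and keep the offset with the smallest cost (stored in deltaMV).
    offsets = [(x, y) for y in range(-2, 3) for x in range(-2, 3)
               if (x, y) != (0, 0)]
    best_cost = cost_of(best_mv)
    delta = (0, 0)
    for off in offsets:
        mv = (best_mv[0] + off[0], best_mv[1] + off[1])
        c = cost_of(mv)
        if c < best_cost:
            best_cost, delta = c, off
    return delta, best_cost
```

If the returned delta is (0, 0), the optimal MV is still the initial MV, which is the early-termination condition of step e313.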
- Step e32 The optimal sub-pixel offset MV can be obtained with the optimal whole-pixel MV point of step e31 as the center; it is recorded as SPMV (that is, subMV), initialized to (0, 0), and the following process is then performed:
- Step e321 Only when notZeroCost is not false and deltaMV is (0, 0) is the subsequent processing performed; otherwise, the original motion vector is directly adjusted by IntegerDeltaMV.
- Step e4 Based on the target motion vector of each sub-block, perform 8-tap interpolation to obtain the predicted value of the three components in two directions, and weight the predicted value to obtain the final predicted value (such as the predicted value of the three components). For example, based on the target motion vectors Refined_MV0 and Refined_MV1 of each sub-block, in Pred_Inter0/1 prepared in step e2, the corresponding prediction block is obtained through interpolation (the motion vector may be sub-pixels, and interpolation is required to obtain the corresponding pixel block).
- Step e5 The target motion vector is used for the motion compensation of the current block (that is, to obtain the predicted value of each sub-block and the predicted value of the current block) and the time domain reference of subsequent frames; it is not used for the loop filtering or spatial reference of the current frame.
- Embodiment 30 Different from Embodiment 29, the preparation process of the reference pixels is moved to each dx*dy sub-block.
- When preparing reference pixels, only pixel blocks of (dx+(filtersize-1))*(dy+(filtersize-1)) are prepared. If the optimal motion vector obtained by the search is not the original motion vector, the reference pixels are expanded; otherwise no expansion is performed.
- For each dx*dy sub-block of the current block, the target motion vector is obtained separately, motion compensation is performed based on the target motion vector, and weighting is applied to obtain the final predicted value. The following process is performed for each dx*dy sub-block of the current block:
- Step f1 If the motion vector adjustment mode is activated for the current block, perform the following process.
- Step f2 Prepare the whole-pixel blocks used in step f3 (for example, only the luminance component): based on the original motion vectors (the original motion vector of list0 is recorded as Org_MV0, and the original motion vector of list1 is recorded as Org_MV1), obtain two whole-pixel blocks with an area of (dx+(filtersize-1))*(dy+(filtersize-1)) from the corresponding positions of the corresponding reference frames.
- filtersize may be the number of filter taps, and the default value is 8.
- Step f3 For each dx*dy sub-block of the current block, a target motion vector is obtained (the target motion vectors in the two directions are respectively denoted as Refined_MV0 and Refined_MV1).
- the implementation process of step f3 can refer to step e3, which will not be described in detail here.
- the first motion vector compensation is performed based on the original motion vector.
- the initial prediction value of size (dx+2*IterNum)*(dy+2*IterNum) is obtained based on bilinear interpolation. IterNum is 2 by default; IterNum can be the search range SR, that is, the maximum horizontal/vertical component difference between the target motion vector and the original motion vector.
- the initial prediction value of the original motion vector obtained above is stored in m_cYuvPredTempL0/1.
- the optimal offset MV is obtained, and the optimal offset MV is recorded as BestMVoffset.
- BestMVoffset = IntegerDeltaMV + SPMV.
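How the pieces combine can be sketched as follows. The negation of the offset for list1 follows the symmetric search described in step e312 (the list1 block is offset by -MVOffset); the function name and tuple layout are illustrative assumptions:

```python
def refine_mv(org_mvs, integer_delta, spmv):
    # BestMVoffset = IntegerDeltaMV + SPMV; the list0 target MV adds the
    # offset, while the list1 target MV subtracts it (symmetric search).
    best = (integer_delta[0] + spmv[0], integer_delta[1] + spmv[1])
    org0, org1 = org_mvs
    refined0 = (org0[0] + best[0], org0[1] + best[1])
    refined1 = (org1[0] - best[0], org1[1] - best[1])
    return refined0, refined1
```

With a zero offset the target motion vectors equal the original motion vectors, matching the "no additional expansion" case of step f4.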
- Step f4 If the optimal offset MV is (0, 0), the following steps are not performed (that is, no additional expansion is performed when the original motion vector is used). If the optimal offset MV is not (0, 0), the whole-pixel block is re-acquired (because the above steps did not expand the reference pixels, the reference pixels required after the offset exceed the range of the reference pixels obtained in the above steps), and the following steps are executed:
- the three components are respectively filled.
- the fill width is 2 for the luminance component and 1 for the chrominance components in 4:2:0 format.
- the integer pixel values that can be used around the current sub-block (in the current CU block) are not used here.
- Step f5 Based on the target motion vector of each sub-block and the two reference pixel blocks (obtained in step f4), perform 8-tap interpolation to obtain the predicted values of the three components in two directions, and weight to obtain the final predicted values (such as three components) Predicted value).
- Embodiment 31 The above embodiments can be implemented individually or in any combination, which is not limited.
- Embodiment 4 can be implemented in combination with Embodiment 2; Embodiment 4 can be implemented in combination with Embodiment 3.
- Embodiment 5 can be implemented in combination with Embodiment 2; Embodiment 5 can be implemented in combination with Embodiment 2 and Embodiment 4; Embodiment 5 can be implemented in combination with Embodiment 3; Embodiment 5 can be implemented in combination with Embodiment 3 and Embodiment 4.
- Embodiment 6 can be implemented separately, and Embodiment 7 can be implemented separately. Embodiment 8 can be implemented in combination with Embodiment 7; Embodiment 9 can be implemented in combination with Embodiment 7; Embodiment 10 can be implemented in combination with Embodiment 7; Embodiment 11 can be implemented in combination with Embodiment 7; Embodiment 12 can be implemented in combination with Embodiment 7; Embodiment 13 can be implemented in combination with Embodiment 7; Embodiment 14 can be implemented in combination with Embodiment 7; Embodiment 15 can be implemented in combination with Embodiment 7.
- Embodiment 16 can be implemented separately, and Embodiment 17 can be implemented separately. Embodiment 18 can be implemented in combination with Embodiment 17; Embodiment 19 can be implemented in combination with Embodiment 17; Embodiment 20 can be implemented in combination with Embodiment 17.
- Embodiment 21 can be implemented in combination with Embodiment 6; Embodiment 21 can be implemented in combination with Embodiment 16; Embodiment 21 can be implemented in combination with Embodiment 7; Embodiment 21 can be implemented in combination with Embodiment 17. Embodiment 22 can be implemented in combination with Embodiment 21; Embodiment 23 can be implemented in combination with Embodiment 21; Embodiment 24 can be implemented in combination with Embodiment 21.
- Embodiment 25 can be implemented in combination with Embodiment 2; Embodiment 25 can be implemented in combination with Embodiment 2 and Embodiment 4; Embodiment 25 can be implemented in combination with Embodiment 3; Embodiment 25 can be implemented in combination with Embodiment 3 and Embodiment 4.
- Embodiment 26 can be implemented in combination with Embodiment 25; Embodiment 27 can be implemented in combination with Embodiment 25.
- Embodiment 28 can be implemented in combination with Embodiment 2; Embodiment 28 can be implemented in combination with Embodiment 2 and Embodiment 4; Embodiment 28 can be implemented in combination with Embodiment 3; Embodiment 28 can be implemented in combination with Embodiment 3 and Embodiment 4.
- Embodiment 29 can be implemented alone, and Embodiment 29 can be implemented in combination with Embodiment 4.
- Embodiment 30 can be implemented separately, and Embodiment 30 can be implemented in combination with Embodiment 4.
- All the embodiments involved in this application can be implemented individually or in combination, which will not be described in detail.
- an embodiment of the application also proposes an encoding and decoding device applied to the encoding end or the decoding end.
- FIG. 9A is a structural diagram of the device, and the device includes:
- the determining module 911 is configured to determine to start the motion vector adjustment mode for the current block if the following conditions are all met:
- the control information is to allow the current block to use the motion vector adjustment mode
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display order of the two reference frames is one before and one after the current frame, and the distances between the two reference frames and the current frame are the same;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- the motion compensation module 912 is configured to perform motion compensation on the current block if it is determined to start the motion vector adjustment mode for the current block.
- the motion compensation module 912 is specifically configured to: for each of the at least one sub-block included in the current block:
- according to the first pixel value of the first reference block and the second pixel value of the second reference block, the first original motion vector and the second original motion vector are adjusted to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector; the predicted value of the sub-block is determined according to the first target motion vector and the second target motion vector;
- the predicted value of the current block is determined according to the predicted value of each sub-block.
- the determining module 911 is further configured to: if any one of the following conditions is not met, determine not to start the motion vector adjustment mode for the current block: the control information is to allow the current block to use the motion vector adjustment mode;
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display order of the two reference frames is one before and one after the current frame, and the distances between the two reference frames and the current frame are the same;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame.
- the control information for allowing the current block to use the motion vector adjustment mode includes: sequence-level control information for allowing the current block to use the motion vector adjustment mode; and/or frame-level control information for allowing the current block to use the motion vector adjustment mode.
- the width, height and area of the current block are all within a limited range, including: the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than or equal to the third threshold; or, the width is greater than or equal to the first threshold, the height is greater than or equal to the second threshold, and the area is greater than the fourth threshold; wherein the third threshold is greater than the fourth threshold.
- the first threshold is 8, the second threshold is 8, the third threshold is 128, and the fourth threshold is 64.
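The size conditions above can be sketched in code. This is an illustrative sketch only, not the patent's normative text; the function and constant names are our own, and the threshold values follow the example values given (first = 8, second = 8, third = 128):

```python
# Illustrative sketch: checking the size conditions described above
# before enabling the motion vector adjustment mode for a block.
FIRST_THRESHOLD = 8    # minimum width
SECOND_THRESHOLD = 8   # minimum height
THIRD_THRESHOLD = 128  # minimum area

def size_allows_mv_adjustment(width: int, height: int) -> bool:
    """Return True if width, height and area are all within the limited range."""
    return (width >= FIRST_THRESHOLD
            and height >= SECOND_THRESHOLD
            and width * height >= THIRD_THRESHOLD)

print(size_allows_mv_adjustment(8, 16))  # 8x16 has area 128 -> True
print(size_allows_mv_adjustment(8, 8))   # 8x8 has area 64 -> False
```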
- when the motion compensation module 912 determines the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block, and determines the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block, it is specifically configured to:
- the first reference block corresponding to the sub-block is determined from the first reference frame based on the first original motion vector of the sub-block; the pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or by copying the pixel values of adjacent pixels in the first reference block;
- the second reference block corresponding to the sub-block is determined from the second reference frame based on the second original motion vector of the sub-block; the pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or by copying the pixel values of adjacent pixels in the second reference block.
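The two ways of obtaining a reference-block pixel value described above (copying at integer positions, interpolating at fractional positions) can be sketched as follows. The function name, the use of bilinear interpolation, and the toy frame are our own illustrative assumptions; the patent does not fix a particular interpolation filter here:

```python
# Illustrative sketch: a reference pixel is copied when the motion
# vector points to an integer position, or interpolated from adjacent
# pixels when it points to a fractional position (bilinear assumed).
def sample_reference(frame, x: float, y: float) -> float:
    """frame is a 2D list of sample values; (x, y) may be fractional."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    if fx == 0 and fy == 0:
        return float(frame[y0][x0])  # copy: integer position
    x1 = min(x0 + 1, len(frame[0]) - 1)
    y1 = min(y0 + 1, len(frame) - 1)
    # bilinear interpolation from the four adjacent pixels
    top = (1 - fx) * frame[y0][x0] + fx * frame[y0][x1]
    bottom = (1 - fx) * frame[y1][x0] + fx * frame[y1][x1]
    return (1 - fy) * top + fy * bottom

frame = [[0, 10], [20, 30]]
print(sample_reference(frame, 0, 0))      # 0.0 (copied)
print(sample_reference(frame, 0.5, 0.5))  # 15.0 (interpolated)
```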
- when the motion compensation module 912 adjusts the first original motion vector and the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector, it is specifically configured to:
- taking the initial motion vector as the center, select part or all of the motion vectors surrounding and including the initial motion vector, and determine the selected motion vectors as candidate motion vectors; wherein the initial motion vector is the first original motion vector or the second original motion vector; according to the first pixel value of the first reference block and the second pixel value of the second reference block, select one motion vector from the initial motion vector and the candidate motion vectors as the optimal motion vector;
- the first original motion vector is adjusted according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and the second original motion vector is adjusted according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
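The candidate-selection step above can be sketched as a small search: candidate motion vectors are taken from positions around (and including) the initial motion vector, and the one with the lowest matching cost is kept as the optimal motion vector. The ±2 integer search range and the cost function are illustrative assumptions, not values mandated by the text:

```python
# Illustrative sketch: pick the optimal motion vector from candidates
# around (and including) the initial motion vector by minimizing a
# matching cost (e.g. a SAD between the two prediction blocks).
def select_optimal_mv(initial_mv, cost_of_mv, search_range=2):
    candidates = [(initial_mv[0] + dx, initial_mv[1] + dy)
                  for dx in range(-search_range, search_range + 1)
                  for dy in range(-search_range, search_range + 1)]
    # the initial motion vector itself is among the candidates (dx = dy = 0)
    return min(candidates, key=cost_of_mv)

# toy cost: pretend the best match is one pixel to the right
cost = lambda mv: abs(mv[0] - 1) + abs(mv[1])
print(select_optimal_mv((0, 0), cost))  # (1, 0)
```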
- when the motion compensation module 912 adjusts the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusts the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, it is specifically configured to:
- determine a first integer-pixel motion vector adjustment value, a second integer-pixel motion vector adjustment value, a first sub-pixel motion vector adjustment value and a second sub-pixel motion vector adjustment value of the sub-block according to the optimal motion vector; adjust the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block; and adjust the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
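The adjustment step above can be sketched as splitting the refined offset into an integer-pixel part and a sub-pixel part and applying them to the original motion vectors. The 1/16-pel precision and the mirrored second adjustment (second offset = negative of the first, in the style of decoder-side refinement) are our own illustrative assumptions, not the patent's exact rule:

```python
# Illustrative sketch: split an offset (assumed 1/16-pel units) into
# integer-pel and sub-pel adjustment values, then apply them.
PRECISION = 16  # 1/16-pel motion vector precision (assumed)

def split_adjustment(offset: int):
    """Split a 1/16-pel offset into an integer-pel part and a sub-pel part."""
    integer_part = (offset // PRECISION) * PRECISION
    return integer_part, offset - integer_part

def adjust(mv, offset):
    return (mv[0] + offset[0], mv[1] + offset[1])

optimal = (18, -5)  # refined offset in 1/16-pel units (toy value)
int_x, frac_x = split_adjustment(optimal[0])
int_y, frac_y = split_adjustment(optimal[1])

# first target = original + integer part + sub-pixel part
first_target = adjust(adjust((100, 200), (int_x, int_y)), (frac_x, frac_y))
# second target mirrors the offset (assumption for illustration)
second_target = adjust((300, 400), (-optimal[0], -optimal[1]))
print(first_target)   # (118, 195)
print(second_target)  # (282, 405)
```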
- when determining the predicted value of the sub-block according to the first target motion vector and the second target motion vector, the motion compensation module 912 is specifically configured to:
- determine the third reference block corresponding to the sub-block from the first reference frame based on the first target motion vector of the sub-block, determine the fourth reference block corresponding to the sub-block from the second reference frame based on the second target motion vector of the sub-block, and weight the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block.
- when the motion compensation module 912 weights the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block, it is specifically configured to:
- perform weighting on the pixel value of the third reference block, a first weight corresponding to the pixel value of the third reference block, the pixel value of the fourth reference block, and a second weight corresponding to the pixel value of the fourth reference block, to obtain the predicted value of the sub-block; wherein the first weight is the same as the second weight.
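Since the first and second weights are equal, the weighting above reduces to a plain average of the two reference blocks. A minimal sketch, with the rounding behavior as our own assumption:

```python
# Illustrative sketch: equally weighted combination of the third and
# fourth reference blocks to form the sub-block's predicted value.
def weighted_prediction(block3, block4):
    """Average two equally weighted reference blocks element-wise."""
    return [[(a + b + 1) // 2  # +1 rounds to the nearest integer
             for a, b in zip(row3, row4)]
            for row3, row4 in zip(block3, block4)]

p = weighted_prediction([[10, 20]], [[30, 21]])
print(p)  # [[20, 21]]
```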
- for a schematic diagram of the hardware architecture of the device, reference may be made to FIG. 9B. It includes a processor 921 and a machine-readable storage medium 922, where the machine-readable storage medium 922 stores machine-executable instructions that can be executed by the processor 921; the processor 921 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of this application.
- the processor is configured to execute machine-executable instructions to implement the following steps: if all of the following conditions are met, it is determined to start the motion vector adjustment mode for the current block:
- the control information allows the current block to use the motion vector adjustment mode;
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting the reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- if it is determined to start the motion vector adjustment mode for the current block, motion compensation is performed on the current block.
- for a schematic diagram of the hardware architecture of the device, reference may be made to FIG. 9C. It includes a processor 931 and a machine-readable storage medium 932, where the machine-readable storage medium 932 stores machine-executable instructions that can be executed by the processor 931; the processor 931 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of this application.
- the processor is configured to execute machine-executable instructions to implement the following steps: if all of the following conditions are met, it is determined to start the motion vector adjustment mode for the current block:
- the control information allows the current block to use the motion vector adjustment mode;
- the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode;
- the predicted value of the current block is obtained by weighting the reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame;
- the weights of the two reference frames of the current block are the same;
- the two reference frames of the current block are both short-term reference frames
- the width, height and area of the current block are all within a limited range
- the size of the two reference frames of the current block is the same as the size of the current frame
- if it is determined to start the motion vector adjustment mode for the current block, motion compensation is performed on the current block.
- an embodiment of the present application also provides a machine-readable storage medium having a number of computer instructions stored thereon; when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of this application can be implemented.
- the aforementioned machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device, and may contain or store information, such as executable instructions, data, and so on.
- the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid-state drive, any type of storage disk (such as a CD or DVD), or a similar storage medium, or a combination thereof.
- the embodiments of the present application also provide a computer program product, the computer program product including computer instructions; when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of this application can be implemented.
- an embodiment of the present application also provides an encoding and decoding system.
- the encoding and decoding system includes a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; when the machine-executable instructions are executed by the processor, the encoding and decoding methods disclosed in the above examples of this application can be implemented.
- a typical implementation device is a computer.
- the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
- for convenience of description, the above apparatus is divided into various units by function and the units are described separately. Of course, when implementing this application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
- the embodiments of the present application can be provided as methods, systems, or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present application may adopt the form of computer program products implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
- This application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of this application.
- each process and/or block in the flowchart and/or block diagram, and the combination of processes and/or blocks in the flowchart and/or block diagram can be realized by computer program instructions.
- these computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
- these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, where the instruction device implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
- these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Claims (15)
- An encoding and decoding method, characterized in that the method comprises: if all of the following conditions are met, determining to start a motion vector adjustment mode for a current block: control information allows the current block to use the motion vector adjustment mode; the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode; the predicted value of the current block is obtained by weighting reference blocks from two reference frames, one of the two reference frames precedes the current frame, the other of the two reference frames follows the current frame, and the two reference frames are at the same distance from the current frame; the weighting weights of the two reference frames of the current block are the same; the two reference frames of the current block are both short-term reference frames; the width, height and area of the current block are all within a limited range; the sizes of the two reference frames of the current block are both the same as the size of the current frame; and if it is determined to start the motion vector adjustment mode for the current block, performing motion compensation on the current block.
- The method according to claim 1, wherein performing motion compensation on the current block comprises: for each of at least one sub-block included in the current block: determining a first reference block corresponding to the sub-block according to a first original motion vector of the sub-block, and determining a second reference block corresponding to the sub-block according to a second original motion vector of the sub-block; adjusting the first original motion vector and the second original motion vector according to a first pixel value of the first reference block and a second pixel value of the second reference block, to obtain a first target motion vector corresponding to the first original motion vector and a second target motion vector corresponding to the second original motion vector; determining a predicted value of the sub-block according to the first target motion vector and the second target motion vector; and determining the predicted value of the current block according to the predicted value of each sub-block.
- The method according to claim 1, wherein the method further comprises: if any one of the following conditions is not met, determining not to start the motion vector adjustment mode for the current block: the control information allows the current block to use the motion vector adjustment mode; the prediction mode of the current block is the normal fusion mode; or, the prediction mode of the current block is the fusion mode or the skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode; the predicted value of the current block is obtained by weighting reference blocks from two reference frames, the display orders of the two reference frames are respectively before and after the current frame, and the two reference frames are at the same distance from the current frame; the weighting weights of the two reference frames of the current block are the same; the two reference frames of the current block are both short-term reference frames; the width, height and area of the current block are all within a limited range; and the sizes of the two reference frames of the current block are both the same as the size of the current frame.
- The method according to claim 1 or 3, wherein the control information allowing the current block to use the motion vector adjustment mode comprises: frame-level control information allowing the current block to use the motion vector adjustment mode.
- The method according to claim 1 or 3, wherein the width, height and area of the current block all being within a limited range comprises: the width is greater than or equal to a first threshold, the height is greater than or equal to a second threshold, and the area is greater than or equal to a third threshold.
- The method according to claim 5, wherein the first threshold is 8, the second threshold is 8, and the third threshold is 128.
- The method according to claim 2, wherein determining the first reference block corresponding to the sub-block according to the first original motion vector of the sub-block and determining the second reference block corresponding to the sub-block according to the second original motion vector of the sub-block comprises: determining, based on the first original motion vector of the sub-block, the first reference block corresponding to the sub-block from a first reference frame, where the pixel value of each pixel in the first reference block is obtained by interpolating the pixel values of adjacent pixels in the first reference block, or by copying the pixel values of adjacent pixels in the first reference block; and determining, based on the second original motion vector of the sub-block, the second reference block corresponding to the sub-block from a second reference frame, where the pixel value of each pixel in the second reference block is obtained by interpolating the pixel values of adjacent pixels in the second reference block, or by copying the pixel values of adjacent pixels in the second reference block.
- The method according to claim 2 or 7, wherein adjusting the first original motion vector and the second original motion vector according to the first pixel value of the first reference block and the second pixel value of the second reference block, to obtain the first target motion vector corresponding to the first original motion vector and the second target motion vector corresponding to the second original motion vector, comprises: taking an initial motion vector as a center, selecting part or all of the motion vectors surrounding and including the initial motion vector, and determining the selected motion vectors as candidate motion vectors, where the initial motion vector is the first original motion vector or the second original motion vector; selecting, according to the first pixel value of the first reference block and the second pixel value of the second reference block, one motion vector from the initial motion vector and the candidate motion vectors as an optimal motion vector; and adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector.
- The method according to claim 8, wherein adjusting the first original motion vector according to the optimal motion vector to obtain the first target motion vector corresponding to the first original motion vector, and adjusting the second original motion vector according to the optimal motion vector to obtain the second target motion vector corresponding to the second original motion vector, comprises: determining a first integer-pixel motion vector adjustment value, a second integer-pixel motion vector adjustment value, a first sub-pixel motion vector adjustment value and a second sub-pixel motion vector adjustment value of the sub-block according to the optimal motion vector; adjusting the first original motion vector according to the first integer-pixel motion vector adjustment value and the first sub-pixel motion vector adjustment value to obtain the first target motion vector of the sub-block; and adjusting the second original motion vector according to the second integer-pixel motion vector adjustment value and the second sub-pixel motion vector adjustment value to obtain the second target motion vector of the sub-block.
- The method according to claim 8, wherein if the optimal motion vector is the same as the initial motion vector, determining the predicted value of the sub-block according to the first target motion vector and the second target motion vector comprises: determining, based on the first target motion vector of the sub-block, a third reference block corresponding to the sub-block from the first reference frame; determining, based on the second target motion vector of the sub-block, a fourth reference block corresponding to the sub-block from the second reference frame; and weighting the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block.
- The method according to claim 8, wherein if the optimal motion vector is different from the initial motion vector, determining the predicted value of the sub-block according to the first target motion vector and the second target motion vector comprises: determining a fifth reference block from the first reference frame, and expanding the fifth reference block to obtain a sixth reference block; selecting, based on the first target motion vector of the sub-block, a third reference block corresponding to the sub-block from the sixth reference block; determining a seventh reference block from the second reference frame, and expanding the seventh reference block to obtain an eighth reference block; selecting, based on the second target motion vector of the sub-block, a fourth reference block corresponding to the sub-block from the eighth reference block; and weighting the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block.
- The method according to claim 10 or 11, wherein weighting the pixel value of the third reference block and the pixel value of the fourth reference block to obtain the predicted value of the sub-block comprises: performing weighting on the pixel value of the third reference block, a first weight corresponding to the pixel value of the third reference block, the pixel value of the fourth reference block, and a second weight corresponding to the pixel value of the fourth reference block, to obtain the predicted value of the sub-block, where the first weight is the same as the second weight.
- An encoding and decoding apparatus, characterized in that the apparatus comprises: a determining module, configured to determine to start a motion vector adjustment mode for a current block if all of the following conditions are met: control information allows the current block to use the motion vector adjustment mode; the prediction mode of the current block is a normal fusion mode; or, the prediction mode of the current block is a fusion mode or a skip mode, and the prediction mode of the current block is not a mode other than the normal fusion mode; the predicted value of the current block is obtained by weighting reference blocks from two reference frames, one of the two reference frames precedes the current frame, the other of the two reference frames follows the current frame, and the two reference frames are at the same distance from the current frame; the weighting weights of the two reference frames of the current block are the same; the two reference frames of the current block are both short-term reference frames; the width, height and area of the current block are all within a limited range; the sizes of the two reference frames of the current block are both the same as the size of the current frame; and a motion compensation module, configured to perform motion compensation on the current block if it is determined to start the motion vector adjustment mode for the current block.
- An encoding-side device, characterized by comprising: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; the processor is configured to execute the machine-executable instructions to implement the encoding and decoding method according to any one of claims 1 to 12.
- A decoding-side device, characterized by comprising: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; the processor is configured to execute the machine-executable instructions to implement the encoding and decoding method according to any one of claims 1 to 12.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020227010788A KR20220050227A (ko) | 2019-11-05 | 2020-10-28 | 인코딩 및 디코딩 방법, 장치 및 이의 기기 |
US17/766,210 US12114005B2 (en) | 2019-11-05 | 2020-10-28 | Encoding and decoding method and apparatus, and devices |
JP2022520621A JP7527359B2 (ja) | 2019-11-05 | 2020-10-28 | 符号化及び復号方法、装置及びデバイス |
JP2024060914A JP2024081785A (ja) | 2019-11-05 | 2024-04-04 | 符号化及び復号方法、装置及びデバイス |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911072766.XA CN112770113B (zh) | 2019-11-05 | 2019-11-05 | 一种编解码方法、装置及其设备 |
CN201911072766.X | 2019-11-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021088695A1 true WO2021088695A1 (zh) | 2021-05-14 |
Family
ID=74036798
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/124304 WO2021088695A1 (zh) | 2019-11-05 | 2020-10-28 | 一种编解码方法、装置及其设备 |
Country Status (5)
Country | Link |
---|---|
US (1) | US12114005B2 (zh) |
JP (2) | JP7527359B2 (zh) |
KR (1) | KR20220050227A (zh) |
CN (3) | CN112135127B (zh) |
WO (1) | WO2021088695A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12081751B2 (en) * | 2021-04-26 | 2024-09-03 | Tencent America LLC | Geometry partition mode and merge mode with motion vector difference signaling |
CN113411581B (zh) * | 2021-06-28 | 2022-08-05 | 展讯通信(上海)有限公司 | 视频序列的运动补偿方法、系统、存储介质及终端 |
CN113938690B (zh) * | 2021-12-03 | 2023-10-31 | 北京达佳互联信息技术有限公司 | 视频编码方法、装置、电子设备及存储介质 |
KR20240142301A (ko) * | 2023-03-21 | 2024-09-30 | 주식회사 케이티 | 영상 부호화/복호화 방법 및 비트스트림을 저장하는 기록 매체 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060120613A1 (en) * | 2004-12-07 | 2006-06-08 | Sunplus Technology Co., Ltd. | Method for fast multiple reference frame motion estimation |
CN105578197A (zh) * | 2015-12-24 | 2016-05-11 | 福州瑞芯微电子股份有限公司 | 一种实现帧间预测主控系统 |
CN109391814A (zh) * | 2017-08-11 | 2019-02-26 | 华为技术有限公司 | 视频图像编码和解码的方法、装置及设备 |
CN109495746A (zh) * | 2018-11-07 | 2019-03-19 | 建湖云飞数据科技有限公司 | 一种基于运动矢量调整的视频编码方法 |
CN110312132A (zh) * | 2019-03-11 | 2019-10-08 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置及其设备 |
Family Cites Families (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003281133A1 (en) * | 2002-07-15 | 2004-02-02 | Hitachi, Ltd. | Moving picture encoding method and decoding method |
CN101299799B (zh) * | 2008-06-13 | 2011-11-09 | 北京中星微电子有限公司 | 图像检测、修复方法和图像检测、修复装置 |
CN102387360B (zh) * | 2010-09-02 | 2016-05-11 | 乐金电子(中国)研究开发中心有限公司 | 视频编解码帧间图像预测方法及视频编解码器 |
US8736767B2 (en) * | 2010-09-29 | 2014-05-27 | Sharp Laboratories Of America, Inc. | Efficient motion vector field estimation |
KR20120118780A (ko) | 2011-04-19 | 2012-10-29 | 삼성전자주식회사 | 다시점 비디오의 움직임 벡터 부호화 방법 및 장치, 그 복호화 방법 및 장치 |
KR20120140592A (ko) * | 2011-06-21 | 2012-12-31 | 한국전자통신연구원 | 움직임 보상의 계산 복잡도 감소 및 부호화 효율을 증가시키는 방법 및 장치 |
CN110650336B (zh) * | 2012-01-18 | 2022-11-29 | 韩国电子通信研究院 | 视频解码装置、视频编码装置和传输比特流的方法 |
CN104427345B (zh) * | 2013-09-11 | 2019-01-08 | 华为技术有限公司 | 运动矢量的获取方法、获取装置、视频编解码器及其方法 |
WO2015149698A1 (en) * | 2014-04-01 | 2015-10-08 | Mediatek Inc. | Method of motion information coding |
CN105338362B (zh) * | 2014-05-26 | 2018-10-19 | 富士通株式会社 | 运动目标检测方法和运动目标检测装置 |
KR101908249B1 (ko) * | 2014-11-18 | 2018-10-15 | 미디어텍 인크. | 단방향 예측 및 병합 후보로부터의 모션 벡터에 기초한 양방향 예측 비디오 코딩 방법 |
JP2017011458A (ja) * | 2015-06-19 | 2017-01-12 | 富士通株式会社 | 符号化データ生成プログラム、符号化データ生成方法および符号化データ生成装置 |
WO2017034089A1 (ko) * | 2015-08-23 | 2017-03-02 | 엘지전자(주) | 인터 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 |
EP3355578B1 (en) * | 2015-09-24 | 2020-12-09 | LG Electronics Inc. | Motion vector predictor derivation and candidate list construction |
WO2017147765A1 (en) * | 2016-03-01 | 2017-09-08 | Mediatek Inc. | Methods for affine motion compensation |
US10271062B2 (en) * | 2016-03-18 | 2019-04-23 | Google Llc | Motion vector prediction through scaling |
EP3264769A1 (en) * | 2016-06-30 | 2018-01-03 | Thomson Licensing | Method and apparatus for video coding with automatic motion information refinement |
CN106101716B (zh) * | 2016-07-11 | 2019-05-07 | 北京大学 | 一种视频帧率上变换方法 |
CN110100440B (zh) * | 2016-12-22 | 2023-04-25 | 株式会社Kt | 一种用于对视频进行解码、编码的方法 |
EP3343925A1 (en) * | 2017-01-03 | 2018-07-04 | Thomson Licensing | Method and apparatus for encoding and decoding motion information |
US10291928B2 (en) * | 2017-01-10 | 2019-05-14 | Blackberry Limited | Methods and devices for inter-prediction using motion vectors for video coding |
CN109089119B (zh) * | 2017-06-13 | 2021-08-13 | 浙江大学 | 一种运动矢量预测的方法及设备 |
WO2019009498A1 (ko) * | 2017-07-03 | 2019-01-10 | 엘지전자 주식회사 | 인터 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 |
WO2019072370A1 (en) * | 2017-10-09 | 2019-04-18 | Huawei Technologies Co., Ltd. | MEMORY ACCESS WINDOW AND FILLING FOR VECTOR MOVEMENT REFINEMENT |
CN117336504A (zh) * | 2017-12-31 | 2024-01-02 | 华为技术有限公司 | 图像预测方法、装置以及编解码器 |
CN111886867B (zh) * | 2018-01-09 | 2023-12-19 | 夏普株式会社 | 运动矢量推导装置、运动图像解码装置以及运动图像编码装置 |
CN110324623B (zh) * | 2018-03-30 | 2021-09-07 | 华为技术有限公司 | 一种双向帧间预测方法及装置 |
CN111971966A (zh) * | 2018-03-30 | 2020-11-20 | 韩国电子通信研究院 | 图像编码/解码方法和设备以及存储比特流的记录介质 |
CN110891176B (zh) * | 2018-09-10 | 2023-01-13 | 华为技术有限公司 | 基于仿射运动模型的运动矢量预测方法及设备 |
CN110891180B (zh) * | 2018-09-10 | 2023-11-17 | 华为技术有限公司 | 视频解码方法及视频解码器 |
CN111107354A (zh) * | 2018-10-29 | 2020-05-05 | 华为技术有限公司 | 一种视频图像预测方法及装置 |
US11758125B2 (en) * | 2019-01-02 | 2023-09-12 | Lg Electronics Inc. | Device and method for processing video signal by using inter prediction |
JP2022527701A (ja) * | 2019-03-20 | 2022-06-03 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | アフィンコーディングされたブロックに対するオプティカルフローを用いた予測洗練化のための方法および装置 |
WO2020242238A1 (ko) * | 2019-05-29 | 2020-12-03 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
JP7471328B2 (ja) | 2019-06-21 | 2024-04-19 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | エンコーダ、デコーダ、および対応する方法 |
CN110213590B (zh) * | 2019-06-25 | 2022-07-12 | 浙江大华技术股份有限公司 | 时域运动矢量获取、帧间预测、视频编码的方法及设备 |
CN113596460A (zh) | 2019-09-23 | 2021-11-02 | 杭州海康威视数字技术股份有限公司 | 编解码方法方法、装置及设备 |
US11683517B2 (en) * | 2020-11-23 | 2023-06-20 | Qualcomm Incorporated | Block-adaptive search range and cost factors for decoder-side motion vector (MV) derivation techniques |
-
2019
- 2019-11-05 CN CN202010990344.7A patent/CN112135127B/zh active Active
- 2019-11-05 CN CN201911072766.XA patent/CN112770113B/zh active Active
- 2019-11-05 CN CN202010988743.XA patent/CN112135126B/zh active Active
-
2020
- 2020-10-28 KR KR1020227010788A patent/KR20220050227A/ko active Search and Examination
- 2020-10-28 WO PCT/CN2020/124304 patent/WO2021088695A1/zh active Application Filing
- 2020-10-28 JP JP2022520621A patent/JP7527359B2/ja active Active
- 2020-10-28 US US17/766,210 patent/US12114005B2/en active Active
-
2024
- 2024-04-04 JP JP2024060914A patent/JP2024081785A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240073437A1 (en) | 2024-02-29 |
JP2024081785A (ja) | 2024-06-18 |
CN112135127B (zh) | 2021-09-21 |
KR20220050227A (ko) | 2022-04-22 |
US12114005B2 (en) | 2024-10-08 |
CN112770113A (zh) | 2021-05-07 |
JP7527359B2 (ja) | 2024-08-02 |
CN112135126B (zh) | 2021-09-21 |
JP2022550592A (ja) | 2022-12-02 |
CN112770113B (zh) | 2024-08-23 |
CN112135127A (zh) | 2020-12-25 |
CN112135126A (zh) | 2020-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020182162A1 (zh) | 编解码方法与装置、编码端设备和解码端设备 | |
WO2021088695A1 (zh) | 一种编解码方法、装置及其设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20883767 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20227010788 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2022520621 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 17766210 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20883767 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17-05-2023) |