WO2020253769A1 - Coding and decoding method, apparatus, and device thereof - Google Patents

Coding and decoding method, apparatus, and device thereof

Info

Publication number
WO2020253769A1
WO2020253769A1 · PCT/CN2020/096788
Authority
WO
WIPO (PCT)
Prior art keywords
current block
value
block
threshold
optical flow
Prior art date
Application number
PCT/CN2020/096788
Other languages
English (en)
French (fr)
Inventor
陈方栋 (Chen Fangdong)
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Publication of WO2020253769A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - ... using predictive coding
    • H04N19/503 - ... involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation
    • H04N19/513 - Processing of motion vectors
    • H04N19/577 - Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/59 - ... involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/10 - ... using adaptive coding
    • H04N19/102 - ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H04N19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/134 - ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169 - ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - ... the unit being an image region, e.g. an object
    • H04N19/176 - ... the region being a block, e.g. a macroblock
    • H04N19/184 - ... the unit being bits, e.g. of the compressed video stream
    • H04N19/186 - ... the unit being a colour or a chrominance component

Definitions

  • This application relates to the field of coding and decoding technologies, and in particular to a coding and decoding method, device and equipment.
  • a complete video encoding method can include processes such as prediction, transformation, quantization, entropy encoding, and filtering.
  • predictive coding can include intra-frame coding and inter-frame coding.
  • Inter-frame coding uses temporal correlation in the video to predict the pixels of the current image from the pixels of adjacent coded images, so as to effectively remove temporal redundancy.
  • a motion vector (Motion Vector, MV) can be used to represent the relative displacement between the current block of the current frame video image and the reference block of the reference frame video image.
  • the video image A of the current frame and the video image B of the reference frame have a strong temporal correlation.
  • a motion search can be performed in video image B to find the image block B1 (i.e., the reference block) that best matches image block A1, and the relative displacement between image block A1 and image block B1 is determined; this relative displacement is the motion vector of image block A1.
  • the encoding end can send the motion vector to the decoding end instead of sending image block A1, and the decoding end can obtain image block A1 from the motion vector and image block B1. Since the number of bits occupied by the motion vector is much smaller than the number of bits occupied by image block A1, this approach saves a large amount of bit overhead.
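To illustrate why transmitting the motion vector is cheap, the decoder-side copy described above can be sketched as follows. This is an illustrative helper only (names are hypothetical, and an integer-pixel motion vector is assumed):

```python
import numpy as np

def reconstruct_block(reference_frame, mv, block_pos, block_size):
    """Fetch the prediction block that an integer-pixel motion vector
    points to in the reference frame.  block_pos is the (row, col) of the
    current block; mv is the (d_row, d_col) displacement."""
    r, c = block_pos
    dr, dc = mv
    h, w = block_size
    return reference_frame[r + dr:r + dr + h, c + dc:c + dc + w].copy()

# Toy 8x8 reference frame: the decoder receives only the motion vector,
# not the pixels of image block A1 itself.
ref = np.arange(64).reshape(8, 8)
pred = reconstruct_block(ref, mv=(1, 2), block_pos=(0, 0), block_size=(4, 4))
```

Sending the two-component vector (1, 2) costs far fewer bits than the 16 pixel values it reconstructs.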
  • when the current block is a unidirectional block, encoding/decoding can be performed according to the motion information after the motion information of the current block is obtained, thereby improving encoding performance.
  • when the current block is a bidirectional block, predicted images from two different directions can be obtained according to the bidirectional motion information after that motion information is obtained, and the predicted images from the two directions often have a mirror symmetry relationship. The current coding framework does not fully utilize this feature to further remove redundancy; in other words, for the application scenario of bidirectional blocks, there are currently problems such as poor coding performance.
  • This application provides an encoding and decoding method, device and equipment, which can improve encoding performance.
  • This application provides an encoding and decoding method, which includes:
  • the conditions met by the current block include: CIIP mode is disabled for the current block, or symmetrical motion vector difference mode is disabled;
  • This application provides an encoding and decoding method, which includes:
  • the conditions that the current block meets include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of the width and the height is greater than or equal to 128;
  • This application provides an encoding and decoding method, which includes:
  • the conditions that the current block meets at the same time include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled
  • This application provides an encoding and decoding method, which includes:
  • the conditions that the current block meets include:
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • This application provides an encoding and decoding method, which includes:
  • the conditions that the current block meets at the same time include:
  • the switch control information is to allow the current block to adopt the bidirectional optical flow mode
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
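The simultaneous enabling conditions above can be collected into a single predicate. The sketch below is illustrative only: the field names are hypothetical, the picture-order counts stand in for temporal positions, and the size rule is read as width ≥ 8, height ≥ 8, and width × height ≥ 128:

```python
from types import SimpleNamespace

def bdof_allowed(blk, switch_control_on):
    """Hedged sketch of the joint conditions for the bidirectional
    optical flow mode; `blk` is a hypothetical block descriptor."""
    size_ok = (blk.width >= 8 and blk.height >= 8
               and blk.width * blk.height >= 128)
    mode_ok = (not blk.uses_subblock_motion
               and not blk.uses_ciip
               and not blk.uses_smvd)
    # Bidirectional prediction, reference frames on opposite sides of the
    # current frame, equal temporal distance, equal weights.
    bi_ok = (blk.bi_pred
             and blk.ref0_poc < blk.cur_poc < blk.ref1_poc
             and blk.cur_poc - blk.ref0_poc == blk.ref1_poc - blk.cur_poc
             and blk.weight0 == blk.weight1)
    return switch_control_on and size_ok and mode_ok and bi_ok

blk = SimpleNamespace(width=16, height=16,
                      uses_subblock_motion=False, uses_ciip=False,
                      uses_smvd=False, bi_pred=True,
                      ref0_poc=8, cur_poc=12, ref1_poc=16,
                      weight0=1, weight1=1)
```

With these assumptions, a 16×16 bidirectional block whose two reference frames lie symmetrically around the current frame passes the check, while disabling the sequence-level switch or shrinking the block below the size limits fails it.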
  • the present application provides a coding and decoding device, which includes:
  • the determination module is configured to determine, when the bidirectional optical flow mode is enabled for the current block, that the conditions met by the current block include: the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled; the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined that the bidirectional optical flow mode is enabled for the current block.
  • the present application provides a coding and decoding device, which includes:
  • the determining module is configured to determine, when the bidirectional optical flow mode is enabled for the current block, that the conditions met by the current block include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of the width and the height is greater than or equal to 128; the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined that the bidirectional optical flow mode is enabled for the current block.
  • the present application provides a coding and decoding device, which includes:
  • the determining module is configured to determine, when the bidirectional optical flow mode is enabled for the current block, that the conditions that the current block meets at the same time include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • the present application provides a coding and decoding device, which includes:
  • the determining module is configured to determine, when the bidirectional optical flow mode is enabled for the current block, that the conditions that the current block meets include:
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • the present application provides a coding and decoding device, which includes:
  • the determining module is used to determine that the conditions that the current block meets at the same time include:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • the present application provides a video encoder, including: a processor and a machine-readable storage medium, the machine-readable storage medium storing machine executable instructions that can be executed by the processor;
  • the processor is used to execute machine executable instructions to implement the following steps:
  • the conditions met by the current block include: CIIP mode is disabled for the current block, or symmetrical motion vector difference mode is disabled;
  • the conditions that the current block meets include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the width and height product is greater than or equal to 128;
  • the conditions that the current block meets at the same time include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled
  • the conditions that the current block meets include:
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the conditions that the current block meets at the same time include:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
  • the present application provides a video decoder, including: a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine executable instructions that can be executed by the processor;
  • the processor is used to execute machine executable instructions to implement the following steps:
  • the conditions met by the current block include: CIIP mode is disabled for the current block, or symmetrical motion vector difference mode is disabled;
  • the conditions that the current block meets include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the width and height product is greater than or equal to 128;
  • the conditions that the current block meets at the same time include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled
  • the conditions met by the current block include: the current block adopts bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
  • the conditions that the current block meets at the same time include:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
  • the first original prediction value can be determined according to the first unidirectional motion information of the current block
  • the second original prediction value can be determined according to the second unidirectional motion information of the current block
  • according to the first original prediction value and the second original prediction value, the horizontal velocity and the vertical velocity can be determined; the prediction compensation value is obtained according to the horizontal velocity and the vertical velocity, and the target prediction value is obtained according to the prediction compensation value.
  • the above method can obtain the target predicted value of the current block or sub-blocks of the current block based on the optical flow method, thereby improving the friendliness of hardware implementation and bringing about an improvement in coding performance.
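A greatly simplified sketch of this flow is shown below. The derivation of the horizontal and vertical velocities from gradient correlations is omitted; here they are taken as given per-block values, which is an assumption of this illustration, not the method of the application:

```python
import numpy as np

def bio_target_prediction(i0, i1, vx, vy):
    """Combine the two original prediction values with an optical-flow
    compensation term: target = (I0 + I1)/2 + compensation, where the
    compensation uses the horizontal/vertical gradients of each block."""
    gy0, gx0 = np.gradient(i0.astype(np.float64))
    gy1, gx1 = np.gradient(i1.astype(np.float64))
    compensation = (vx * (gx0 - gx1) + vy * (gy0 - gy1)) / 2.0
    return (i0 + i1) / 2.0 + compensation
```

When the two prediction blocks are identical, the compensation vanishes and the target prediction value reduces to their average, as one would expect.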
  • FIG. 1A is a schematic diagram of interpolation in an embodiment of the present application.
  • FIG. 1B is a schematic diagram of a video coding framework in an implementation manner of the present application.
  • FIG. 2 is a flowchart of an encoding and decoding method in an embodiment of the present application
  • FIG. 3 is a flowchart of an encoding and decoding method in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a reference block corresponding to a sub-block of a current block in an embodiment of the present application
  • FIG. 5 is a structural diagram of a coding and decoding device in an embodiment of the present application.
  • Fig. 6 is a hardware structure diagram of a decoding end device in an embodiment of the present application.
  • Fig. 7 is a hardware structure diagram of an encoding terminal device in an embodiment of the present application.
  • first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • An encoding and decoding method, device, and equipment proposed in the embodiments of the present application may involve the following concepts:
  • Intra prediction and inter prediction: intra-frame prediction refers to the use of spatial correlation of the video to predict the current pixels from the pixels of coded blocks of the current image, so as to remove spatial redundancy from the video.
  • Inter-frame prediction refers to the use of the temporal correlation of the video. Since the video sequence contains strong temporal correlation, the pixels of the current image are predicted by neighboring coded image pixels to achieve the purpose of effectively removing video temporal redundancy.
  • the inter-frame prediction part of video coding standards basically uses block-based motion compensation technology. The principle is to find a best matching block in a previously encoded image for each pixel block of the current image; this process is called motion estimation (Motion Estimation, ME).
  • Motion Vector: in inter-frame coding, a motion vector can be used to represent the relative displacement between the current coding block and the best matching block in the reference image. Each divided block has a corresponding motion vector that needs to be transmitted to the decoding end. If the motion vector of each block were independently coded and transmitted, especially for small-sized blocks, it would consume a large number of bits. To reduce the number of bits used to code motion vectors, the spatial correlation between adjacent image blocks can be used to predict the motion vector of the current block to be coded from the motion vectors of adjacent coded blocks, and the prediction difference is then encoded. In this way, the number of bits representing the motion vector can be effectively reduced.
  • the motion vector of an adjacent coded block is used to predict the motion vector of the current block, and then the difference between the motion vector prediction value (Motion Vector Prediction, MVP) and the real motion vector, that is, the motion vector difference (Motion Vector Difference, MVD), is encoded, thereby effectively reducing the number of MV encoding bits.
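The MVP/MVD scheme described above can be sketched in a few lines: only the small residual is coded and transmitted (function names are illustrative):

```python
def encode_mvd(mv, mvp):
    """Encoder side: transmit MVD = MV - MVP instead of MV itself."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvd, mvp):
    """Decoder side: reconstruct MV = MVP + MVD."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# The predicted vector (4, -1) comes from an adjacent coded block, so
# only the small difference (1, -2) needs to be entropy-coded.
mvd = encode_mvd(mv=(5, -3), mvp=(4, -1))
```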
  • Motion Information: since the motion vector represents the position offset between the current image block and a reference image block, in order to accurately obtain the information pointing to the image block, index information of the reference frame image is needed in addition to the motion vector, to indicate which reference frame image is used.
  • a reference frame image list can usually be established, and the reference frame image index information indicates which reference frame image in the reference frame image list is used by the current image block.
  • many coding technologies also support multiple reference image lists. Therefore, an index value can also be used to indicate which reference image list is used, and this index value can be called a reference direction.
  • motion-related information such as motion vector, reference frame index, and reference direction can be collectively referred to as motion information.
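The three motion-related pieces of information listed above can be grouped into one record, as sketched below (the field names are illustrative, not taken from the application):

```python
from dataclasses import dataclass

@dataclass
class MotionInfo:
    mv: tuple       # motion vector (d_row, d_col), possibly fractional
    ref_idx: int    # index into the chosen reference frame image list
    ref_list: int   # "reference direction": which reference image list

# A block predicted from entry 0 of reference image list 1.
info = MotionInfo(mv=(5, -3), ref_idx=0, ref_list=1)
```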
  • Interpolation: if the current motion vector has non-integer pixel accuracy, the required pixel values cannot be copied directly from the corresponding reference frame and are instead obtained through interpolation. As shown in FIG. 1A, to obtain the pixel value Y_1/2 at an offset of 1/2 pixel, it is interpolated from the surrounding integer pixel values X. If an interpolation filter with N taps is used, the value is interpolated from the N surrounding integer pixels; if the number of taps N is 8, then Y_1/2 = a_0·X_0 + a_1·X_1 + ... + a_7·X_7, where a_k are the filter coefficients, that is, the weighting coefficients.
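A numeric sketch of the weighted sum Y_1/2 = sum(a_k · X_k) with N = 8 taps. The coefficients below are a commonly used half-pel luma filter shown purely for illustration; the application itself does not fix them:

```python
import numpy as np

def half_pel_interpolate(x, taps):
    """Weighted sum of the N surrounding integer pixels: Y = sum(a_k * x_k)."""
    return float(np.dot(taps, x))

# Illustrative 8-tap half-pel filter, normalized so the taps sum to 1
# (real codecs keep integer taps and apply a final right shift).
taps = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

flat = np.full(8, 100.0)   # a flat region interpolates to itself
ramp = np.arange(8.0)      # a ramp interpolates to the midpoint 3.5
```

On the flat input the interpolated value stays 100; on the ramp it lands exactly halfway between the two central pixels, which is the behavior expected of a half-pel filter.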
  • Motion compensation The process of motion compensation is the process of obtaining all predicted pixel values of the current block through interpolation or copying.
  • Video encoding framework: as shown in Figure 1B, the video encoding framework can be used to implement the encoding-end processing flow of the embodiments of this application.
  • the schematic diagram of the video decoding framework is similar to Figure 1B and is not repeated here; the video decoding framework can be used to implement the decoding-end processing flow of the embodiments of this application.
  • the video encoding framework and the video decoding framework include modules such as intra prediction, motion estimation/motion compensation, reference image buffer, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, and entropy encoder.
  • at the encoding end, the cooperation between these modules realizes the encoding-end processing flow; at the decoding end, the cooperation between these modules realizes the decoding-end processing flow.
  • when the current block is a bidirectional block (that is, a block that uses bidirectional prediction), the predicted images from two different directions often have a mirror symmetry relationship.
  • this feature is not fully utilized in the current coding framework to further remove redundancy, resulting in problems such as poor coding performance.
  • in view of this, a prediction signal adjustment method based on the optical flow method is proposed, which obtains the original prediction value based on the original motion information, obtains the prediction compensation value of the current block through the optical flow equation based on the original prediction value, and obtains the target prediction value of the current block based on the prediction compensation value and the original prediction value.
  • the above method can obtain the target prediction value of the current block based on the optical flow method, thereby improving the friendliness of hardware implementation, and bringing about an improvement in coding performance, that is, coding performance and coding efficiency can be improved.
  • Embodiment 1: FIG. 2 is a schematic flowchart of an encoding and decoding method.
  • the method can be applied to the decoding end or the encoding end.
  • the method obtains the target prediction value of the current block or of sub-blocks of the current block by performing the following steps. If the following steps are performed for the current block, the target prediction value of the current block can be obtained; if the current block is divided into at least one sub-block and the following steps are performed for each sub-block of the current block, the target prediction values of the sub-blocks of the current block can be obtained. The method includes:
  • Step 201: if the characteristic information of the current block satisfies a specific condition, a first original prediction value is determined according to the first unidirectional motion information of the current block, and a second original prediction value is determined according to the second unidirectional motion information of the current block.
  • the current block may be a bidirectional block (that is, the current block is a block using bidirectional prediction), that is, the motion information corresponding to the current block is bidirectional motion information, and this bidirectional motion information may include motion in two different directions Information, these two motion information in different directions are called the first one-way motion information and the second one-way motion information.
  • the first one-way motion information may correspond to a first reference frame, and the first reference frame is located before the current frame where the current block is located; the second one-way motion information may correspond to a second reference frame, and the second reference frame is located after the current frame where the current block is located.
  • the feature information may include, but is not limited to, one or more of the following: motion information attributes; prediction mode attributes; size information; sequence-level switch control information.
• The motion information attribute satisfying a specific condition may include, but is not limited to, one or more of the following:
• 1. The current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions;
• 2. The current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is the same;
• 3. The current block adopts bidirectional prediction, and the weights of the two reference frames corresponding to the current block are the same;
• 4. The current block adopts bidirectional prediction, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
• 5. The current block adopts bidirectional prediction, and the difference between the predicted values of the two reference frames corresponding to the current block is less than a preset threshold.
• Regarding condition 5, it is also necessary to obtain the difference between the predicted values of the two reference frames corresponding to the current block. For example, the following method can be used: obtain the first prediction block from the first reference frame according to the first unidirectional motion information of the current block, and obtain the second prediction block from the second reference frame according to the second unidirectional motion information of the current block; then obtain the difference between the predicted values of the two reference frames according to the SAD of the down-sampled predicted value of the first prediction block and the down-sampled predicted value of the second prediction block. Alternatively, the SAD of the predicted value of the first prediction block and the predicted value of the second prediction block (without down-sampling) may be used to obtain the difference between the predicted values of the first reference frame and the second reference frame.
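• The SAD-based difference check described above can be sketched as follows. This is an illustrative sketch only: the function name, the row-sampling factor (used to approximate the down-sampled variant), and the decision threshold are assumptions, not values fixed by the text.

```python
def prediction_difference_sad(pred0, pred1, row_step=1):
    """Sum of absolute differences between two equally sized 2-D
    prediction blocks; row_step > 1 approximates the down-sampled
    variant by sampling every row_step-th row."""
    sad = 0
    for row0, row1 in zip(pred0[::row_step], pred1[::row_step]):
        sad += sum(abs(a - b) for a, b in zip(row0, row1))
    return sad
```

• The resulting SAD would then be compared with the preset threshold mentioned in condition 5 to decide whether the difference between the two predicted values is small enough.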
• If the prediction mode attribute is that the fusion mode based on intra-frame and inter-frame joint prediction is not used, and/or the symmetric motion vector difference mode is not used, it is determined that the prediction mode attribute meets a specific condition.
• If the characteristic information includes sequence-level switch control information, and the sequence-level switch control information allows the current block to adopt the bidirectional optical flow mode, it is determined that the sequence-level switch control information meets a specific condition.
• If the size information includes at least one of a width value, a height value, and an area value, then when at least one of the width value, height value, and area value meets the corresponding threshold condition, the size information meets a specific condition. For example, if the size information meets at least one of the following conditions, it is determined that the size information meets a specific condition: the width value of the current block is greater than or equal to the first threshold and less than or equal to the second threshold; the height value of the current block is greater than or equal to the third threshold and less than or equal to the fourth threshold; the area value of the current block is greater than or equal to the fifth threshold and less than or equal to the sixth threshold.
  • the above are just a few examples, and there is no restriction on this.
• The first threshold may be less than the second threshold, and both are positive integer powers of 2; for example, the first threshold may be 8 and the second threshold may be 128. The third threshold may be less than the fourth threshold, and both are positive integer powers of 2; for example, the third threshold may be 8 and the fourth threshold may be 128. The fifth threshold may be less than the sixth threshold, and both are positive integer powers of 2; for example, the fifth threshold may be 64 (that is, 8*8) and the sixth threshold may be 16384 (that is, 128*128).
  • the above thresholds are just examples, and there are no restrictions on these thresholds.
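• As a hedged sketch, the size check with the example thresholds above (8 and 128 for width and height, 64 and 16384 for area) could look like the following. The function name is illustrative, and the text states the conditions are combined as "at least one"; other embodiments might require all of them.

```python
def size_info_meets_condition(width, height):
    """'At least one' combination of the example threshold conditions:
    width in [8, 128], height in [8, 128], area in [64, 16384]."""
    width_ok = 8 <= width <= 128
    height_ok = 8 <= height <= 128
    area_ok = 64 <= width * height <= 16384
    return width_ok or height_ok or area_ok
```

• All six bounds are powers of 2, so in a hardware implementation each comparison can be realized with simple bit-width checks.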
• As described above, the feature information may include one or more of motion information attributes, prediction mode attributes, size information, and sequence-level switch control information. When the feature information includes motion information attributes, the motion information attributes meeting specific conditions indicates that the feature information meets specific conditions; when the feature information includes prediction mode attributes, the prediction mode attributes meeting specific conditions indicates that the feature information meets specific conditions; when the feature information includes size information, the size information meeting specific conditions indicates that the feature information meets specific conditions; when the feature information includes sequence-level switch control information, the sequence-level switch control information meeting specific conditions indicates that the feature information meets specific conditions.
• When the feature information includes at least two of motion information attributes, prediction mode attributes, size information, and sequence-level switch control information, the feature information meets a specific condition only when every included item meets its specific condition. For example, taking feature information that includes motion information attributes, size information, and sequence-level switch control information as an example: when the motion information attributes meet specific conditions, the size information meets specific conditions, and the sequence-level switch control information meets specific conditions, it indicates that the feature information meets the specific conditions.
  • the above process is just a few examples, and there is no restriction on this.
• In a possible implementation, determining the first original prediction value according to the first unidirectional motion information of the current block, and determining the second original prediction value according to the second unidirectional motion information of the current block, may include but is not limited to: based on the first unidirectional motion information of the current block, the first reference block is determined from the first reference frame, and the first original prediction value of the first reference block is determined; the first original prediction value of the central area of the first reference block is obtained by interpolating the pixel values in the first reference frame, and the first original prediction value of the edge area of the first reference block is obtained by copying pixel values in the first reference frame. Based on the second unidirectional motion information of the current block, the second reference block is determined from the second reference frame, and the second original prediction value of the second reference block is determined; the second original prediction value of the central area of the second reference block is obtained by interpolating the pixel values in the second reference frame, and the second original prediction value of the edge area of the second reference block is obtained by copying pixel values in the second reference frame.
• For example, based on the first unidirectional motion information of the current block, the first reference block corresponding to the current block can be determined from the first reference frame; assuming that the size of the current block is M*M and the size of the first reference block is N*N, N can be greater than M, for example, M is 4 and N is 6.
  • the first reference block can be divided into a central area and an edge area.
  • the central area of the first reference block refers to an area with a size of M*M centered on the central point of the first reference block; the edge area of the first reference block Refers to: areas other than the central area in the first reference block.
• The first original prediction value of the central area of the first reference block is obtained by interpolating the pixel values in the first reference frame; for the edge area of the first reference block, the first original prediction value of the edge area is obtained by copying pixel values in the first reference frame.
• Similarly, based on the second unidirectional motion information of the current block, the second reference block corresponding to the current block can be determined from the second reference frame; assuming that the size of the current block is M*M and the size of the second reference block is N*N, N can be greater than M, for example, M is 4 and N is 6.
  • the second reference block can be divided into a central area and an edge area.
  • the central area of the second reference block refers to an area with a size of M*M centered on the central point of the second reference block; the edge area of the second reference block Refers to: areas other than the central area in the second reference block.
• The second original prediction value of the central area of the second reference block is obtained by interpolating the pixel values in the second reference frame; for the edge area of the second reference block, the second original prediction value of the edge area is obtained by copying pixel values in the second reference frame.
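• The interpolate-center / copy-edge scheme above can be sketched as follows. This is a hypothetical illustration: the interpolated M*M center is passed in ready-made (a real codec would produce it with a sub-pixel interpolation filter), and the one-pixel edge ring of the N*N block (N = M + 2) is filled by copying nearest integer-position reference-frame samples, clipped to the frame boundary. The function and argument names are assumptions.

```python
def assemble_reference_block(frame, x0, y0, center):
    """frame: 2-D list of integer reference samples; (x0, y0): top-left
    integer position of the N*N block inside the frame; center: M*M
    interpolated samples placed at offset (1, 1) inside the N*N block."""
    m = len(center)
    n = m + 2
    h, w = len(frame), len(frame[0])
    block = [[0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            if 1 <= j <= m and 1 <= i <= m:
                block[j][i] = center[j - 1][i - 1]  # interpolated center
            else:
                # edge ring: copy the nearest integer-position sample,
                # clipped so we never read outside the reference frame
                yy = min(max(y0 + j, 0), h - 1)
                xx = min(max(x0 + i, 0), w - 1)
                block[j][i] = frame[yy][xx]
    return block
```

• Copying instead of interpolating the edge ring avoids extending the interpolation filter's memory footprint beyond the M*M area, which is the hardware-friendliness point made above.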
• In another possible implementation, determining the first original prediction value according to the first unidirectional motion information of the current block, and determining the second original prediction value according to the second unidirectional motion information of the current block, may include but is not limited to: based on the first unidirectional motion information of the current block, the first reference block is determined from the first reference frame, and the first original prediction value of the first reference block is determined, the first original prediction value being obtained by interpolating the pixel values in the first reference frame; based on the second unidirectional motion information of the current block, the second reference block is determined from the second reference frame, and the second original prediction value of the second reference block is determined, the second original prediction value being obtained by interpolating the pixel values in the second reference frame. For example, based on the first unidirectional motion information of the current block, the first reference block corresponding to the current block can be determined from the first reference frame; if the size of the current block is M*M, the size of the first reference block can be M*M. Based on the second unidirectional motion information of the current block, the second reference block corresponding to the current block can be determined from the second reference frame; if the size of the current block is M*M, the size of the second reference block can be M*M.
• Step 202: Determine the horizontal velocity according to the first original predicted value and the second original predicted value.
• The horizontal velocity refers to the velocity in the horizontal direction (that is, the X direction) of the sub-block in the reference frame corresponding to the current block; or, it refers to the velocity in the horizontal direction (that is, the X direction) of the sub-block in the reference frame corresponding to the sub-block of the current block.
  • determining the horizontal velocity according to the first original predicted value and the second original predicted value includes but is not limited to:
• Manner 1: Determine the autocorrelation coefficient S1 of the horizontal gradient sum, and the cross-correlation coefficient S3 of the temporal predicted value difference and the horizontal gradient sum, according to the first original predicted value and the second original predicted value. Then, the horizontal velocity is determined according to the autocorrelation coefficient S1, the rate threshold, the cross-correlation coefficient S3, the first amplification factor and the second amplification factor.
• Manner 2: If the first preset condition is met, determine the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal predicted value difference and the vertical gradient sum, according to the first original predicted value and the second original predicted value; the horizontal velocity is then determined according to the cross-correlation coefficient S2, the rate threshold, the cross-correlation coefficient S6, the first amplification factor and the second amplification factor. If the first preset condition is not met, determine the autocorrelation coefficient S1 and the cross-correlation coefficient S3 according to the first original predicted value and the second original predicted value, and determine the horizontal velocity according to the autocorrelation coefficient S1, the rate threshold, the cross-correlation coefficient S3, the first amplification factor and the second amplification factor. The first preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5 of the vertical gradient sum.
• Manner 3: If the second preset condition is met, determine, according to the first original predicted value and the second original predicted value, the autocorrelation coefficient S1 of the horizontal gradient sum, the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, the cross-correlation coefficient S3 of the temporal predicted value difference and the horizontal gradient sum, the autocorrelation coefficient S5 of the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal predicted value difference and the vertical gradient sum; the horizontal velocity is then determined according to the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the first amplification factor and the second amplification factor. If the second preset condition is not met, the horizontal velocity is determined according to the autocorrelation coefficient S1, the rate threshold, the cross-correlation coefficient S3, the first amplification factor and the second amplification factor. The second preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5.
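• A much-simplified sketch of the truncation step in Manner 1: the horizontal velocity is derived from the ratio of S3 to S1 (scaled here by the difference of the two amplification factors, which is an assumption for illustration) and clipped to the rate threshold. The exact fixed-point arithmetic of a production codec differs in detail.

```python
def clip3(lo, hi, v):
    """Clip v into the closed interval [lo, hi]."""
    return max(lo, min(hi, v))

def horizontal_velocity(s1, s3, rate_threshold, amp1, amp2):
    """Toy Manner-1 derivation: scale S3, divide by S1, then truncate
    the result to +/- rate_threshold."""
    if s1 <= 0:
        return 0  # no usable horizontal gradient energy
    vx = (s3 << (amp2 - amp1)) // s1
    return clip3(-rate_threshold, rate_threshold, vx)
```

• With the example BD = 10 parameters discussed later (amplification factors 3 and 6), the scale applied before the division would be a left shift by 3.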
• Step 203: Determine the vertical velocity according to the first original prediction value and the second original prediction value.
• The vertical velocity refers to the velocity in the vertical direction (that is, the Y direction) of the sub-block in the reference frame corresponding to the current block; or, it refers to the velocity in the vertical direction (that is, the Y direction) of the sub-block in the reference frame corresponding to the sub-block of the current block.
  • determining the vertical velocity according to the first original prediction value and the second original prediction value includes but is not limited to:
• Manner 1: Determine the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, the autocorrelation coefficient S5 of the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal predicted value difference and the vertical gradient sum, according to the first original predicted value and the second original predicted value. Then, the vertical velocity can be determined according to the cross-correlation coefficient S2, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the horizontal velocity, the first amplification factor, and the second amplification factor.
• Manner 2: Obtain the untruncated horizontal velocity (that is, the horizontal velocity without truncation processing) according to the first original predicted value and the second original predicted value, and determine the vertical velocity according to the untruncated horizontal velocity. For example, the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6 can be determined according to the first original predicted value and the second original predicted value; the untruncated horizontal velocity is determined according to the autocorrelation coefficient S1, the cross-correlation coefficient S3, the first amplification factor and the second amplification factor, and the vertical velocity is then determined according to the cross-correlation coefficient S2, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the untruncated horizontal velocity, the first amplification factor, and the second amplification factor.
• Manner 3: If the third preset condition is met, determine the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, the autocorrelation coefficient S5 of the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal predicted value difference and the vertical gradient sum, according to the first original predicted value and the second original predicted value; the vertical velocity is then determined according to the cross-correlation coefficient S2, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the horizontal velocity, the first amplification factor and the second amplification factor. The third preset condition may be determined based on the horizontal velocity.
• In the above manners, the horizontal gradient sum, the vertical gradient sum, and the temporal predicted value difference can be determined according to the first original predicted value, the second original predicted value, the first amplification factor, and the second amplification factor; then, according to the horizontal gradient sum, the vertical gradient sum, and the temporal predicted value difference, the autocorrelation coefficient S1 of the horizontal gradient sum, the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, the cross-correlation coefficient S3 of the temporal predicted value difference and the horizontal gradient sum, the autocorrelation coefficient S5 of the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal predicted value difference and the vertical gradient sum are determined.
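• The coefficient definitions above can be written out directly as sums over the sub-block samples. This sketch uses plain products; practical implementations may substitute sign-based approximations, and the inputs are assumed to be the already-amplified per-sample gradient sums and temporal differences.

```python
def correlation_coefficients(grad_h_sum, grad_v_sum, temporal_diff):
    """Accumulate S1, S2, S3, S5, S6 over per-sample values."""
    s1 = s2 = s3 = s5 = s6 = 0
    for gh, gv, dt in zip(grad_h_sum, grad_v_sum, temporal_diff):
        s1 += gh * gh  # autocorrelation of horizontal gradient sum
        s2 += gh * gv  # cross-correlation of horizontal and vertical sums
        s3 += dt * gh  # temporal difference vs horizontal gradient sum
        s5 += gv * gv  # autocorrelation of vertical gradient sum
        s6 += dt * gv  # temporal difference vs vertical gradient sum
    return s1, s2, s3, s5, s6
```

• S1 and S5 are non-negative by construction, which is why the velocity derivations above only need to guard against a zero (rather than negative) denominator.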
• In addition, if the cross-correlation coefficient S2 obtained according to the first original predicted value, the second original predicted value, the first amplification factor, and the second amplification factor is less than the first cross-correlation coefficient threshold, the cross-correlation coefficient S2 is updated to the first cross-correlation coefficient threshold; if it is greater than the second cross-correlation coefficient threshold, the cross-correlation coefficient S2 is updated to the second cross-correlation coefficient threshold; if it is greater than or equal to the first cross-correlation coefficient threshold and less than or equal to the second cross-correlation coefficient threshold, the cross-correlation coefficient S2 remains unchanged.
• Similarly, if the cross-correlation coefficient S6 obtained according to the first original predicted value, the second original predicted value, the first amplification factor, and the second amplification factor is less than the third cross-correlation coefficient threshold, the cross-correlation coefficient S6 is updated to the third cross-correlation coefficient threshold; if it is greater than the fourth cross-correlation coefficient threshold, the cross-correlation coefficient S6 is updated to the fourth cross-correlation coefficient threshold; if it is greater than or equal to the third cross-correlation coefficient threshold and less than or equal to the fourth cross-correlation coefficient threshold, the cross-correlation coefficient S6 remains unchanged.
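• The updates to S2 and S6 described above are both plain interval clamps. The threshold values are left as parameters here, since the text does not fix them.

```python
def clamp_coefficient(s, lower_threshold, upper_threshold):
    """Clamp a cross-correlation coefficient into [lower, upper]:
    below the lower threshold -> lower; above the upper -> upper;
    otherwise unchanged."""
    if s < lower_threshold:
        return lower_threshold
    if s > upper_threshold:
        return upper_threshold
    return s
```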
  • the first amplification factor may be the smaller value of 5 and (BD-7), or the larger value of 1 and (BD-11).
  • the above is only an example of the first amplification factor, and there is no restriction on this, and it can be configured according to experience.
  • the second amplification factor may be the smaller value of 8 and (BD-4), or the larger value of 4 and (BD-8).
  • the above is only an example of the second amplification factor, and there is no restriction on this, and it can be configured based on experience.
• The rate threshold may be 2 to the power of M, where M is the difference between 13 and BD, or the larger value of 5 and (BD-7). The above is only an example of the rate threshold; there is no restriction on this, and it can be configured based on experience. Here, BD denotes the bit depth, that is, the bit width required by each chroma or luma pixel value.
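• The BD-dependent quantities above can be tabulated as follows. Both variants named in the text are shown; which one applies is embodiment-specific, and the function name is illustrative.

```python
def bdof_parameters(bd, variant=0):
    """First/second amplification factors and rate threshold from BD."""
    if variant == 0:
        amp1 = min(5, bd - 7)   # smaller of 5 and (BD-7)
        amp2 = min(8, bd - 4)   # smaller of 8 and (BD-4)
        m = 13 - bd             # M = 13 - BD
    else:
        amp1 = max(1, bd - 11)  # larger of 1 and (BD-11)
        amp2 = max(4, bd - 8)   # larger of 4 and (BD-8)
        m = max(5, bd - 7)      # M = larger of 5 and (BD-7)
    return amp1, amp2, 1 << m   # rate threshold = 2 to the power of M
```

• For BD = 10, variant 0 gives amplification factors 3 and 6 and a rate threshold of 2^3 = 8.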
• Step 204: Obtain a predicted compensation value according to the horizontal velocity and the vertical velocity.
• Obtaining the predicted compensation value according to the horizontal velocity and the vertical velocity may include but is not limited to: determining the horizontal gradient and the vertical gradient according to the first original predicted value, the second original predicted value, and the gradient right-shift number, and obtaining the predicted compensation value according to the horizontal velocity, the vertical velocity, the horizontal gradient and the vertical gradient.
  • the number of gradient right shifts may be the larger value of 2 and (14-BD), or the larger value of 6 and (BD-6).
  • the above is only an example of the number of right shifts of the gradient, there is no restriction on this, and it can be configured based on experience.
• Step 205: Obtain the target predicted value according to the first original predicted value, the second original predicted value, and the predicted compensation value.
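• Steps 204 and 205 can be sketched per sample as below. The right-shift of each gradient, the halved velocity-gradient product as the compensation value, and the rounded average of the two predictions are assumptions for illustration; rounding and sign conventions differ between real codecs.

```python
def target_prediction(i0, i1, gh0, gh1, gv0, gv1, vx, vy, grad_shift):
    """Per-sample target value: average of the two original predictions
    (i0, i1) plus the optical-flow compensation built from the velocities
    (vx, vy) and the right-shifted gradients of each prediction."""
    grad_h_diff = (gh0 >> grad_shift) - (gh1 >> grad_shift)
    grad_v_diff = (gv0 >> grad_shift) - (gv1 >> grad_shift)
    compensation = (vx * grad_h_diff + vy * grad_v_diff) // 2
    return (i0 + i1 + 1) // 2 + compensation
```

• When the two gradients agree (grad_h_diff and grad_v_diff are zero), the compensation vanishes and the target value reduces to the ordinary bidirectional average.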
• If the above steps are performed for the current block, the first original prediction value corresponding to the current block is determined according to the first unidirectional motion information of the current block, and the second original prediction value corresponding to the current block is determined according to the second unidirectional motion information of the current block.
  • the horizontal velocity corresponding to the current block is determined according to the first original prediction value and the second original prediction value, and the vertical velocity corresponding to the current block is determined according to the first original prediction value and the second original prediction value.
  • the prediction compensation value corresponding to the current block is obtained according to the horizontal direction rate and the vertical direction rate, and the target prediction value corresponding to the current block is obtained according to the first original prediction value, the second original prediction value and the prediction compensation value. So far, the target predicted value corresponding to the current block is successfully obtained.
• If the current block is divided into at least one sub-block and the above-mentioned method is used to obtain the target prediction value of each sub-block of the current block, then: if the characteristic information of the current block meets a specific condition, for each sub-block of the current block, the first original prediction value corresponding to the sub-block is determined according to the first unidirectional motion information of the sub-block (the same as the first unidirectional motion information of the current block), and the second original prediction value corresponding to the sub-block is determined according to the second unidirectional motion information of the sub-block (the same as the second unidirectional motion information of the current block).
  • the horizontal velocity corresponding to the sub-block is determined according to the first original predicted value and the second original predicted value, and the vertical velocity corresponding to the sub-block is determined according to the first original predicted value and the second original predicted value. Then, the prediction compensation value corresponding to the sub-block is obtained according to the horizontal direction rate and the vertical direction rate, and the target prediction value corresponding to the sub-block is obtained according to the first original prediction value, the second original prediction value and the prediction compensation value.
  • the target predicted value corresponding to the sub-block is successfully obtained. After obtaining the target prediction value of each sub-block of the current block, in fact, the target prediction value of the current block is obtained.
• It can be seen from the above technical solution that the first original prediction value can be determined according to the first unidirectional motion information of the current block, the second original prediction value can be determined according to the second unidirectional motion information of the current block, the horizontal velocity and the vertical velocity can be determined according to the first original prediction value and the second original prediction value, the predicted compensation value can be obtained according to the horizontal velocity and the vertical velocity, and the target prediction value can be obtained according to the predicted compensation value.
  • the above method can obtain the target predicted value of the current block or sub-blocks of the current block based on the optical flow method, thereby improving the friendliness of hardware implementation and bringing about an improvement in coding performance.
• Embodiment 2: An encoding and decoding method is proposed in this embodiment of the application, which can be applied to the decoding end or the encoding end.
  • FIG. 3 is a schematic flowchart of the encoding and decoding method. Exemplarily, if the characteristic information of the current block satisfies a specific condition, the following steps may be performed for each sub-block of the current block to obtain the target prediction value of each sub-block of the current block.
• Step 301: If the characteristic information of the current block satisfies a specific condition, determine the first original prediction value of the sub-block according to the first unidirectional motion information of the current block (that is, the first unidirectional motion information of the sub-block of the current block), and determine the second original prediction value of the sub-block according to the second unidirectional motion information of the current block (that is, the second unidirectional motion information of the sub-block of the current block).
  • the bidirectional motion information corresponding to the current block can be acquired, and there is no restriction on the acquisition method.
  • This two-way motion information includes motion information in two different directions.
• The motion information in the two different directions is called the first unidirectional motion information (such as the first motion vector and the first reference frame index) and the second unidirectional motion information (such as the second motion vector and the second reference frame index).
• For example, the first reference frame (such as reference frame 0) can be determined based on the first unidirectional motion information, and the first reference frame is located in front of the current frame where the current block is located; the second reference frame (such as reference frame 1) can be determined based on the second unidirectional motion information, and the second reference frame is located behind the current frame where the current block is located.
• The first unidirectional motion information of the sub-block is the same as the first unidirectional motion information of the current block, and the second unidirectional motion information of the sub-block is the same as the second unidirectional motion information of the current block.
  • the feature information may include one or more of the following: motion information attributes; prediction mode attributes; size information; sequence-level switch control information.
• Determining the first original prediction value of the sub-block according to the first unidirectional motion information of the current block, and determining the second original prediction value of the sub-block according to the second unidirectional motion information of the current block, may include: based on the first unidirectional motion information of the current block, determining the first reference block corresponding to the sub-block of the current block from the first reference frame, and determining the first original prediction value I(0)(x, y) of the first reference block; based on the second unidirectional motion information of the current block, determining the second reference block corresponding to the sub-block of the current block from the second reference frame, and determining the second original prediction value I(1)(x, y) of the second reference block. Regarding the determination method of the first original predicted value and the second original predicted value, refer to step 201, which will not be repeated here.
• Step 302: Determine the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference according to the first original predicted value and the second original predicted value of the sub-block. For example, according to the first original predicted value of the sub-block, the second original predicted value of the sub-block, the first amplification factor and the second amplification factor, the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference are determined.
• Step 303: Determine, according to the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference, the autocorrelation coefficient S1 of the horizontal gradient sum (hereinafter referred to as the autocorrelation coefficient S1), the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum (hereinafter referred to as the cross-correlation coefficient S2), the cross-correlation coefficient S3 of the temporal prediction value difference and the horizontal gradient sum (hereinafter referred to as the cross-correlation coefficient S3), the autocorrelation coefficient S5 of the vertical gradient sum (hereinafter referred to as the autocorrelation coefficient S5), and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical gradient sum (hereinafter referred to as the cross-correlation coefficient S6).
  • Step 304 According to one or more of the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6, determine the horizontal velocity of the sub-block of the current block relative to the corresponding sub-block in the reference frame.
  • Step 305 According to one or more of the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6, determine the vertical velocity of the sub-block of the current block relative to the corresponding sub-block in the reference frame.
  • Step 306 Obtain the prediction compensation value of the sub-block of the current block according to the horizontal velocity and the vertical velocity.
  • Step 307 Obtain the target prediction value of the sub-block of the current block according to the first original prediction value of the sub-block of the current block, the second original prediction value of the sub-block of the current block, and the prediction compensation value of the sub-block of the current block.
  • For details of step 301 to step 307, please refer to Embodiment 1, which will not be repeated here.
  • Embodiment 3 The encoding end/decoding end needs to determine whether the characteristic information of the current block meets specific conditions. If so, the technical solution of the embodiment of the present application is adopted to obtain the target predicted value of the current block or sub-blocks of the current block. This technical solution may also be referred to as a bidirectional optical flow mode. If not, there is no need to adopt the target prediction value acquisition method proposed in this application.
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located after the current frame;
  • the size information of the current block (such as width value, height value, area value, etc.) is within a limited range.
  • Embodiment 4 When the feature information meets at least the following conditions at the same time, it is determined that the feature information of the current block meets the specific condition.
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located after the current frame;
  • the size information of the current block (such as width value, height value, area value, etc.) is within a limited range
  • the current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is the same; that is, the motion information of each sub-block of the current block is exactly the same, so the current block does not adopt the sub-block motion information mode.
  • That the current block does not use the sub-block motion information mode may include: the current block does not use the Affine mode or the SBTMVP mode. The Affine mode is a mode using an affine motion model, and the SBTMVP (subblock-based temporal motion vector prediction) mode is a mode of obtaining the motion information of the whole block from the temporal domain.
  • If the current block adopts the Affine mode or the SBTMVP mode, the motion information of the sub-blocks within the current block may not be exactly the same; therefore, it is required that the current block does not adopt the Affine mode or the SBTMVP mode.
  • Embodiment 5 When the feature information meets at least the following conditions at the same time, it is determined that the feature information of the current block meets the specific condition.
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located after the current frame;
  • the size information of the current block (such as width value, height value, area value, etc.) is within a limited range
  • the current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is the same; that is, the motion information of each sub-block of the current block is exactly the same, so the current block does not adopt the sub-block motion information mode;
  • the current block adopts bidirectional prediction, and the weights of the two reference frames corresponding to the current block are the same.
  • Embodiment 6 When the feature information meets at least the following conditions at the same time, it is determined that the feature information of the current block meets the specific condition.
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located after the current frame;
  • the size information of the current block (such as width value, height value, area value, etc.) is within a limited range
  • the current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is the same; that is, the motion information of each sub-block of the current block is exactly the same, so the current block does not adopt the sub-block motion information mode;
  • the current block adopts bidirectional prediction, and the weights of the two reference frames corresponding to the current block are the same;
  • the current block does not use the CIIP mode (combined intra inter prediction, a mode based on joint intra-frame and inter-frame prediction).
  • Embodiment 7 When the feature information meets at least the following conditions at the same time, it is determined that the feature information of the current block meets a specific condition.
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located after the current frame;
  • the size information of the current block (such as width value, height value, area value, etc.) is within a limited range
  • the current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is the same; that is, the motion information of each sub-block of the current block is exactly the same, so the current block does not adopt the sub-block motion information mode;
  • the current block adopts bidirectional prediction, and the weights of the two reference frames corresponding to the current block are the same;
  • the current block does not use the SMVD (Symmetric Motion Vector Difference) mode.
  • In the SMVD (Symmetric Motion Vector Difference) mode, the two MVDs in the bidirectional motion information are symmetric; that is, only one of the motion vector differences MVD needs to be coded, and the other motion vector difference is the negative of the coded MVD.
  • Embodiment 8 When the feature information meets at least the following conditions at the same time, it is determined that the feature information of the current block meets the specific condition.
  • the sequence-level switch control information allows the current block to use the bidirectional optical flow mode; that is, the sequence-level control switch is on, indicating that the bidirectional optical flow mode is allowed to be enabled for the current block;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions, that is, one reference frame corresponding to the current block is located before the current frame, and the other reference frame corresponding to the current block is located after the current frame;
  • the size information of the current block (such as width value, height value, area value, etc.) is within a limited range
  • the current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is the same; that is, the motion information of each sub-block of the current block is exactly the same, so the current block does not adopt the sub-block motion information mode;
  • the current block adopts bidirectional prediction, and the weights of the two reference frames corresponding to the current block are the same.
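The condition set of Embodiment 8 can be summarized as a single boolean check. The sketch below is illustrative only: the function name bdof_allowed and all dictionary keys (seq_bdof_enabled, bi_pred, poc_cur, and so on) are assumed placeholders rather than identifiers from this application, the "different directions" test assumes one reference frame before and one after the current frame in picture order, and the size bounds use the example values Wmin = Hmin = 8 and Wmax = Hmax = 128.

```python
# Hedged sketch of the Embodiment 8 feature-information check.
# All field names below are illustrative assumptions, not patent identifiers.
def bdof_allowed(blk):
    return (blk["seq_bdof_enabled"]                                  # sequence-level switch is on
            and blk["bi_pred"]                                       # bidirectional prediction
            and blk["poc_ref0"] < blk["poc_cur"] < blk["poc_ref1"]   # refs on opposite sides
            and 8 <= blk["width"] <= 128                             # example Wmin/Wmax
            and 8 <= blk["height"] <= 128                            # example Hmin/Hmax
            and not blk["subblock_motion_mode"]                      # no Affine / SBTMVP
            and blk["weight0"] == blk["weight1"])                    # equal reference weights

blk = {"seq_bdof_enabled": True, "bi_pred": True,
       "poc_cur": 10, "poc_ref0": 8, "poc_ref1": 12,
       "width": 16, "height": 16,
       "subblock_motion_mode": False, "weight0": 4, "weight1": 4}
print(bdof_allowed(blk))  # True: every condition holds
```

If any single condition fails (for example, unequal weights), the check returns False and the bidirectional optical flow mode is not used for the block.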
  • Embodiment 9 In any of the foregoing Embodiment 3 to Embodiment 8, the condition "the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions" is modified to "the current block adopts bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the two reference frames corresponding to the current block are at the same distance from the current frame".
  • That the two reference frames come from different directions is equivalent to (POC - POC0) * (POC - POC1) < 0, where POC is the picture order count of the current frame and POC0 and POC1 are the picture order counts of the two reference frames; that the distance between the two reference frames and the current frame is the same is equivalent to the value of (POC - POC0) being equal to the value of (POC1 - POC).
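The two POC conditions above can be sketched directly. The function name below is an illustrative assumption; picture order counts are modeled as plain integers.

```python
# Minimal sketch of the Embodiment 9 POC conditions: the two reference
# frames lie on opposite sides of the current frame, at equal distance.
def refs_opposite_and_equidistant(poc, poc0, poc1):
    opposite = (poc - poc0) * (poc - poc1) < 0      # different directions
    equidistant = (poc - poc0) == (poc1 - poc)      # same distance to current frame
    return opposite and equidistant

print(refs_opposite_and_equidistant(10, 8, 12))  # True: distances are 2 and 2
print(refs_opposite_and_equidistant(10, 7, 12))  # False: distances are 3 and 2
```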
  • Embodiment 10 When the feature information meets at least the following conditions, it is determined that the feature information of the current block meets a specific condition.
  • the sequence-level switch control information allows the current block to use the bidirectional optical flow mode; that is, the sequence-level control switch is on, indicating that the bidirectional optical flow mode is allowed to be enabled for the current block.
  • Embodiment 11 When the feature information meets at least the following conditions, it is determined that the feature information of the current block meets a specific condition.
  • the current block adopts bidirectional prediction, and the difference value between the prediction values of the two reference frames corresponding to the current block is less than a preset threshold TH_SAD.
  • the following method may be used to obtain the difference value of the predicted values of the two reference frames corresponding to the current block:
  • Manner 1 Obtain the first prediction block corresponding to the sub-block of the current block from the first reference frame according to the first unidirectional motion information of the current block, and obtain the second prediction block corresponding to the sub-block of the current block from the second reference frame according to the second unidirectional motion information of the current block; then obtain the difference value between the prediction values of the first reference frame and the second reference frame as the SAD (Sum of Absolute Difference) of the prediction value of the first prediction block and the prediction value of the second prediction block.
  • That is, the difference value between the prediction values is the SAD of the prediction value of the first prediction block (denoted pred0 below) and the prediction value of the second prediction block (denoted pred1 below), i.e., the SAD over all pixels of pred0 and pred1.
  • The difference value between the prediction values of the first reference frame and the second reference frame can be determined by the following formula, where pred 0 (i, j) is the prediction value in the i-th column and j-th row of pred0, pred 1 (i, j) is the prediction value in the i-th column and j-th row of pred1, n is the total number of pixels, abs(x) is the absolute value of x, H is the height value, and W is the width value.
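A minimal sketch of the Manner 1 difference value, assuming pred0 and pred1 are stored as row-major lists of lists of predicted pixel values; the function name sad is an illustrative assumption.

```python
# SAD over all pixels of the two prediction blocks (Manner 1).
def sad(pred0, pred1):
    return sum(abs(a - b)
               for row0, row1 in zip(pred0, pred1)   # pair rows of the two blocks
               for a, b in zip(row0, row1))          # pair pixels within each row

pred0 = [[100, 102], [98, 101]]
pred1 = [[101, 100], [98, 105]]
print(sad(pred0, pred1))  # 1 + 2 + 0 + 4 = 7
```

The result is then compared against the preset threshold TH_SAD to decide whether the bidirectional optical flow mode applies.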
  • Manner 2 Obtain the first prediction block corresponding to the sub-block of the current block from the first reference frame according to the first unidirectional motion information of the current block, and obtain the second prediction block corresponding to the sub-block of the current block from the second reference frame according to the second unidirectional motion information of the current block; then obtain the difference value between the prediction values of the first reference frame and the second reference frame as the SAD of the down-sampled prediction value of the first prediction block (that is, the prediction value obtained after down-sampling the prediction value of the first prediction block) and the down-sampled prediction value of the second prediction block (that is, the prediction value obtained after down-sampling the prediction value of the second prediction block).
  • That is, the difference value between the prediction values is the SAD of the prediction value of the first prediction block after being down-sampled N times (denoted pred0 below) and the prediction value of the second prediction block after being down-sampled N times (denoted pred1 below).
  • The difference value between the prediction values of the first reference frame and the second reference frame can be determined by the following formula, where pred 0 (i, j) is the prediction value in the i-th column and j-th row of pred0, pred 1 (i, j) is the prediction value in the i-th column and j-th row of pred1, n is the total number of pixels, abs(x) is the absolute value of x, H is the height value, W is the width value, and N is a positive integer, preferably 2.
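Manner 2 can be sketched as follows. The exact down-sampling filter is not specified in this text, so simple decimation (keeping every N-th row and column) is assumed here; the function names are illustrative.

```python
# Hedged sketch of Manner 2: down-sample both prediction blocks by a
# factor of N (simple decimation assumed), then take the SAD.
def downsample(pred, n):
    return [row[::n] for row in pred[::n]]   # keep every n-th row and column

def sad(pred0, pred1):
    return sum(abs(a - b)
               for r0, r1 in zip(pred0, pred1)
               for a, b in zip(r0, r1))

pred0 = [[10, 11, 12, 13], [14, 15, 16, 17],
         [18, 19, 20, 21], [22, 23, 24, 25]]
pred1 = [[12, 11, 12, 13], [14, 15, 16, 17],
         [18, 19, 23, 21], [22, 23, 24, 25]]
n = 2  # N is a positive integer, preferably 2
print(sad(downsample(pred0, n), downsample(pred1, n)))  # |10-12| + |20-23| = 5
```

Down-sampling reduces the number of absolute-difference operations by roughly a factor of N*N while still giving a usable estimate of the prediction difference.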
  • Embodiment 12 In the above embodiment, the size information of the current block is within a limited range, which may be one of the following situations:
  • Case 1 The width value of the current block is within the range of the first interval [Wmin, Wmax], and the height value of the current block is within the range of the second interval [Hmin, Hmax]; Wmin, Wmax, Hmin, and Hmax are all positive integer powers of 2; for example, Wmin is 8, Wmax is 128, Hmin is 8, and Hmax is 128.
  • [a, b] means greater than or equal to a and less than or equal to b.
  • Case 2 The width value of the current block is within the range of the first interval [Wmin, Wmax]; Wmin and Wmax are both positive integer powers of 2; for example, Wmin is 8, Wmax is 128.
  • Case 3 The height value of the current block is within the range of the second interval [Hmin, Hmax]; Hmin and Hmax are both positive integer powers of 2; for example, Hmin is 8, Hmax is 128.
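A minimal sketch of the Case 1 size check, using the example bounds Wmin = Hmin = 8 and Wmax = Hmax = 128 from the text; the function name is an illustrative assumption.

```python
# Embodiment 12, Case 1: width in [Wmin, Wmax] and height in [Hmin, Hmax].
def size_in_range(w, h, wmin=8, wmax=128, hmin=8, hmax=128):
    # [a, b] means greater than or equal to a and less than or equal to b
    return wmin <= w <= wmax and hmin <= h <= hmax

print(size_in_range(16, 32))  # True
print(size_in_range(4, 32))   # False: width below Wmin
```

Cases 2 and 3 are the same check restricted to the width interval alone or the height interval alone.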
  • The condition that the weights of the two reference frames corresponding to the current block are the same may include any one of the following:
  • Condition 1 The current block does not use a method that allows different weights.
  • Condition 2 The current block allows a method with different weights (for example, the block-level weighted prediction method BCW (Bi-prediction with CU based weighting) is enabled), but the two weighted weights of the current block are exactly the same.
  • Condition 3 The current frame where the current block is located does not use a method that allows different weights.
  • Condition 4 The current frame where the current block is located allows a method with different weights (for example, the frame-level weighted prediction method is enabled), but the two weighted weights of the current frame are exactly the same.
  • Embodiment 13 The encoding end/decoding end needs to determine the first original prediction value of the sub-block according to the first unidirectional motion information of the current block, and determine the second original prediction value of the sub-block according to the second unidirectional motion information of the current block. For example, based on the first unidirectional motion information of the current block, the first reference block corresponding to the sub-block of the current block is determined from the first reference frame, and the first original prediction value I (0) (x, y) of the first reference block is determined; based on the second unidirectional motion information of the current block, the second reference block corresponding to the sub-block of the current block is determined from the second reference frame, and the second original prediction value I (1) (x, y) of the second reference block is determined.
  • In one approach, the first reference block is determined from the first reference frame, and the first original prediction value I (0) (x, y) of the first reference block is determined; the first original prediction value I (0) (x, y) of the central area of the first reference block is obtained by interpolating the pixel values in the first reference frame, and the first original prediction value I (0) (x, y) of the edge area of the first reference block is obtained by copying pixel values in the first reference frame.
  • Similarly, based on the second unidirectional motion information of the current block, the second reference block is determined from the second reference frame, and the second original prediction value I (1) (x, y) of the second reference block is determined; the second original prediction value I (1) (x, y) of the central area of the second reference block is obtained by interpolating the pixel values in the second reference frame, and the second original prediction value I (1) (x, y) of the edge area of the second reference block is obtained by copying pixel values in the second reference frame.
  • For example, the central area of the first reference block refers to the 4*4 area around the center point of the first reference block, and the first original prediction value of the central area of the first reference block is obtained by interpolating the pixel values in the first reference frame, which will not be repeated here.
  • The edge area of the first reference block refers to the area other than the central area in the first reference block (that is, a ring of 1 row and 1 column on each side around the central area), and the first original prediction value of the edge area of the first reference block is obtained by copying pixel values in the first reference frame.
  • the pixel value of the pixel point in the first reference frame is copied to the edge area of the first reference block.
  • FIG. 4 is only an example, and the pixel values of other pixels can also be used for copying.
  • The edge area of the first reference block can be obtained by copying the nearest integer pixel value in the first reference frame, so as to avoid an additional interpolation process and indirectly avoid accessing additional reference pixels.
  • The process of obtaining the second original prediction value may refer to the process of obtaining the first original prediction value, and will not be repeated here.
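The first approach above (interpolated 4*4 center plus copied 1-pixel edge ring) can be sketched as follows. The sketch is a simplification under stated assumptions: the sub-pixel interpolation filter is abstracted away (a real codec would use a multi-tap filter), "nearest integer pixel" is modeled as clamping the edge coordinate to the nearest central position, and the function name is illustrative.

```python
# Hedged sketch of Embodiment 13: build a 6*6 reference block for a 4*4
# sub-block, interpolating the central 4*4 area and filling the 1-pixel
# edge ring by copying the nearest integer pixel (no extra interpolation).
def build_reference_block(ref_frame, x0, y0):
    blk = [[0] * 6 for _ in range(6)]
    for j in range(6):
        for i in range(6):
            if 1 <= i <= 4 and 1 <= j <= 4:
                # central 4*4 area: interpolated from the reference frame
                # (placeholder: integer-position value stands in for the filter)
                blk[j][i] = ref_frame[y0 + j][x0 + i]
            else:
                # edge ring: copy the nearest integer pixel value instead,
                # avoiding an additional interpolation pass
                jj = min(max(j, 1), 4)
                ii = min(max(i, 1), 4)
                blk[j][i] = ref_frame[y0 + jj][x0 + ii]
    return blk

frame = [[10 * r + c for c in range(8)] for r in range(8)]
blk = build_reference_block(frame, 0, 0)
print(blk[0][0], blk[1][1])  # corner is copied from its nearest central pixel
```

Because the edge ring never triggers interpolation, no reference pixels outside the interpolation footprint of the central area need to be fetched.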
  • In another approach, the first reference block is determined from the first reference frame, and the first original prediction value of the first reference block is determined; the first original prediction value of the first reference block is obtained by interpolating the pixel values in the first reference frame. Similarly, the second reference block is determined from the second reference frame, and the second original prediction value of the second reference block is determined; the second original prediction value of the second reference block is obtained by interpolating the pixel values in the second reference frame.
  • For example, if the size of the sub-block is 4*4, then the size of the first reference block is 4*4 and the size of the second reference block is 4*4; the first original prediction values of all regions of the first reference block are obtained by interpolating the pixel values in the first reference frame, and the second original prediction values of all regions of the second reference block are obtained by interpolating the pixel values in the second reference frame.
  • Embodiment 14 After obtaining the first original prediction value of the sub-block and the second original prediction value of the sub-block, the encoding end/decoding end determines the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference according to the first original prediction value and the second original prediction value of the sub-block. For example, according to the first original prediction value of the sub-block, the second original prediction value of the sub-block, the first amplification factor, and the second amplification factor, the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference are determined. For example, the horizontal gradient sum is determined by formula (1), the vertical gradient sum is determined by formula (2), and the temporal prediction value difference is determined by formula (3):
  • Exemplarily, ∂I (0) /∂x and ∂I (1) /∂x represent the horizontal gradients, and ∂I (0) /∂y and ∂I (1) /∂y represent the vertical gradients; these are the remaining parameters in formula (1) to formula (3), and they can be determined by formula (4) and formula (5).
  • ψ x (i,j) represents the sum of gradients in the horizontal direction, ψ y (i,j) represents the sum of gradients in the vertical direction, and θ(i,j) represents the temporal prediction value difference.
  • I (0) (x, y) represents the first original prediction value of the sub-block
  • I (1) (x, y) represents the second original prediction value of the sub-block.
  • If the size of the sub-block is 4*4, then I (0) (x, y) is the first original prediction value of the first reference block with a size of 4*4, or the first original prediction value of the first reference block with a size of 6*6; I (1) (x, y) is the second original prediction value of the second reference block with a size of 4*4, or the second original prediction value of the second reference block with a size of 6*6. The following takes I (0) (x, y) being the first original prediction value of the first reference block of size 4*4, and I (1) (x, y) being the second original prediction value of the second reference block of size 4*4, as an example.
  • I (k) (i, j) represents the pixel value at coordinates (i, j); for example, I (0) (i, j) represents the pixel value at coordinates (i, j) in the first reference block and corresponds to the first original prediction value of the sub-block, and I (1) (i, j) represents the pixel value at coordinates (i, j) in the second reference block and corresponds to the second original prediction value of the sub-block.
  • n a may represent the first amplification factor, and the first amplification factor n a may be the smaller value of 5 and (BD-7), or the larger value of 1 and (BD-11).
  • n b may represent the second amplification factor, and the second amplification factor n b may be the smaller value of 8 and (BD-4), or the larger value of 4 and (BD-8).
  • shift1 can represent the number of right shifts of the gradient, and the number shift1 of right shifts of the gradient can be the greater of 2 and (14-BD), or the greater of 6 and (BD-6).
  • >> represents a right shift; >> n a means shifting right by n a , that is, dividing by 2 to the power of n a ; >> n b means shifting right by n b , that is, dividing by 2 to the power of n b ; >> shift1 means shifting right by shift1, that is, dividing by 2 to the power of shift1.
  • BD represents the bit depth, which can be expressed as the bit width required for each chroma or luminance pixel value.
  • BD can be 10 or 8.
  • Normally, BD can be a known value.
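Since formulas (1) to (5) are referenced but not reproduced in this text, the sketch below assumes the standard bidirectional-optical-flow forms consistent with the parameter definitions above: central-difference gradients shifted right by shift1, gradient sums shifted right by n a, and a temporal difference of values each shifted right by n b. It picks the first of the two stated options for n a, n b, and shift1, with BD = 10; all names are illustrative assumptions.

```python
# Hedged sketch of the Embodiment 14 quantities at one sample position,
# assuming:  dI/dx(i,j) = (I[j][i+1] - I[j][i-1]) >> shift1   (and dI/dy likewise)
#            psi_x = (dI1/dx + dI0/dx) >> n_a    (horizontal gradient sum)
#            psi_y = (dI1/dy + dI0/dy) >> n_a    (vertical gradient sum)
#            theta = (I1 >> n_b) - (I0 >> n_b)   (temporal prediction difference)
BD = 10
shift1 = max(2, 14 - BD)  # per the text: the greater of 2 and (14 - BD)
n_a = min(5, BD - 7)      # per the text: the smaller of 5 and (BD - 7)
n_b = min(8, BD - 4)      # per the text: the smaller of 8 and (BD - 4)

def grad_x(I, i, j):
    return (I[j][i + 1] - I[j][i - 1]) >> shift1

def grad_y(I, i, j):
    return (I[j + 1][i] - I[j - 1][i]) >> shift1

# 3*3 neighborhoods of original prediction values I(0) and I(1)
I0 = [[0, 200, 400], [40, 240, 440], [80, 280, 480]]
I1 = [[0, 210, 420], [50, 260, 470], [100, 310, 520]]
i = j = 1
psi_x = (grad_x(I1, i, j) + grad_x(I0, i, j)) >> n_a
psi_y = (grad_y(I1, i, j) + grad_y(I0, i, j)) >> n_a
theta = (I1[j][i] >> n_b) - (I0[j][i] >> n_b)
print(psi_x, psi_y, theta)  # 6 1 1
```

With BD = 10 this gives shift1 = 4, n a = 3, and n b = 6, so all intermediate values stay within a small bit width.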
  • Embodiment 15 After obtaining the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference, the encoding end/decoding end can determine, according to the horizontal gradient sum, the vertical gradient sum, and the temporal prediction value difference: the autocorrelation coefficient S1 of the horizontal gradient sum (hereinafter referred to as the autocorrelation coefficient S1), the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum (hereinafter referred to as the cross-correlation coefficient S2), the cross-correlation coefficient S3 of the temporal prediction value difference and the horizontal gradient sum (hereinafter referred to as the cross-correlation coefficient S3), the autocorrelation coefficient S5 of the vertical gradient sum (hereinafter referred to as the autocorrelation coefficient S5), and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical gradient sum (hereinafter referred to as the cross-correlation coefficient S6).
  • ψ x (i,j) represents the sum of gradients in the horizontal direction, ψ y (i,j) represents the sum of gradients in the vertical direction, and θ(i,j) represents the temporal prediction value difference.
  • Ω represents the window corresponding to the 4*4 sub-block, or Ω represents the 6*6 window around the 4*4 sub-block.
  • For example, ψ x (i, j), ψ y (i, j), and θ(i, j) can be determined first through the above embodiment, and then S1, S2, S3, S5, and S6 are determined according to ψ x (i, j), ψ y (i, j), and θ(i, j).
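Formulas (6) to (10) are referenced but not reproduced in this text, so the sketch below assumes the standard auto- and cross-correlation definitions consistent with the coefficient names: sums over the window Ω of products of the gradient sums and the temporal difference. The flattened lists stand in for the samples of the window; all names are illustrative.

```python
# Hedged sketch of the Embodiment 15 correlation coefficients, assuming
# S1 = sum(psi_x^2), S2 = sum(psi_x*psi_y), S3 = sum(theta*psi_x),
# S5 = sum(psi_y^2), S6 = sum(theta*psi_y) over the window.
def correlations(psi_x, psi_y, theta):
    S1 = S2 = S3 = S5 = S6 = 0
    for px, py, t in zip(psi_x, psi_y, theta):
        S1 += px * px   # autocorrelation of the horizontal gradient sum
        S2 += px * py   # cross-correlation: horizontal and vertical gradient sums
        S3 += t * px    # cross-correlation: temporal difference and horizontal sum
        S5 += py * py   # autocorrelation of the vertical gradient sum
        S6 += t * py    # cross-correlation: temporal difference and vertical sum
    return S1, S2, S3, S5, S6

psi_x = [2, -1, 3]
psi_y = [1, 0, -2]
theta = [4, 2, -1]
print(correlations(psi_x, psi_y, theta))  # (14, -4, 3, 5, 6)
```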
  • Embodiment 16 After obtaining the cross-correlation coefficient S2 and the cross-correlation coefficient S6, the cross-correlation coefficient S2 can be limited between the first cross-correlation coefficient threshold and the second cross-correlation coefficient threshold. For example, if the cross-correlation coefficient S2 is less than the first cross-correlation coefficient threshold, the cross-correlation coefficient S2 is updated to the first cross-correlation coefficient threshold; if the cross-correlation coefficient S2 is greater than the second cross-correlation coefficient threshold, the cross-correlation coefficient S2 is updated to the second cross-correlation coefficient threshold.
  • Similarly, the cross-correlation coefficient S6 can be limited between the third cross-correlation coefficient threshold and the fourth cross-correlation coefficient threshold. For example, if the cross-correlation coefficient S6 is less than the third cross-correlation coefficient threshold, the cross-correlation coefficient S6 is updated to the third cross-correlation coefficient threshold; if the cross-correlation coefficient S6 is greater than the fourth cross-correlation coefficient threshold, the cross-correlation coefficient S6 is updated to the fourth cross-correlation coefficient threshold. The first cross-correlation coefficient threshold may be less than the second cross-correlation coefficient threshold, and the third cross-correlation coefficient threshold may be less than the fourth cross-correlation coefficient threshold.
  • For example, the following size restriction may be performed on the cross-correlation coefficient S2 and the cross-correlation coefficient S6, so as to prevent intermediate results from overflowing, that is, so that the bit width does not exceed a certain range.
  • -(1 << THS2) represents the first cross-correlation coefficient threshold, 1 << THS2 represents the second cross-correlation coefficient threshold, -(1 << THS6) represents the third cross-correlation coefficient threshold, and 1 << THS6 represents the fourth cross-correlation coefficient threshold. For example, to prevent the bit width from exceeding 32 bits, THS2 can be 25 and THS6 can be 27. Of course, the above values are only examples, and there is no restriction on them.
  • Formula (12) shows that if S 6 is less than -(1 << THS6), then S 6 is set to -(1 << THS6); if S 6 is greater than (1 << THS6), then S 6 is set to (1 << THS6); otherwise, S 6 remains unchanged. << represents a left shift.
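The clamping above can be sketched with the example thresholds THS2 = 25 and THS6 = 27 that keep intermediates within 32 bits; the function names are illustrative.

```python
# Minimal sketch of the Embodiment 16 clamping of S2 and S6.
THS2, THS6 = 25, 27  # example threshold exponents from the text

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def limit_s2_s6(s2, s6):
    s2 = clamp(s2, -(1 << THS2), 1 << THS2)  # S2 in [-(2^25), 2^25]
    s6 = clamp(s6, -(1 << THS6), 1 << THS6)  # S6 in [-(2^27), 2^27]
    return s2, s6

print(limit_s2_s6(1 << 30, -(1 << 30)))  # (33554432, -134217728)
```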
  • Embodiment 17 After obtaining the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6, the horizontal velocity of the sub-block of the current block relative to the corresponding sub-block in the reference frame is determined according to one or more of the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6; the vertical velocity of the sub-block of the current block in the reference frame can be determined in a similar way.
  • the horizontal velocity can be determined according to the autocorrelation coefficient S1, the velocity threshold, the cross-correlation coefficient S3, the first amplification factor, and the second amplification factor. See the following formula as an example of determining the horizontal velocity based on the above parameters.
  • v x represents the horizontal velocity
  • th′ BIO represents the rate threshold, which is used to limit the horizontal velocity v x between -th′ BIO and th′ BIO ; that is, the horizontal velocity v x is greater than or equal to -th′ BIO and less than or equal to th′ BIO .
  • n a may represent the first amplification factor, and the first amplification factor n a may be the smaller value of 5 and (BD-7), or the larger value of 1 and (BD-11).
  • n b may represent the second amplification factor, and the second amplification factor n b may be the smaller value of 8 and (BD-4), or the larger value of 4 and (BD-8).
  • >> represents a right shift, and ⌊·⌋ represents rounding down.
  • The first amplification factor n a and the second amplification factor n b can be configured based on experience, and the second amplification factor n b may be greater than the first amplification factor n a .
  • The first amplification factor n a and the second amplification factor n b are used to amplify the value interval of the horizontal velocity v x . For example, if n b - n a is 3, then 2^(n b -n a ) is 8, and the value interval of the horizontal velocity v x can be enlarged 8 times; if n b - n a is 4, then 2^(n b -n a ) is 16, and the value interval of the horizontal velocity v x can be enlarged 16 times, and so on.
  • the vertical velocity can be determined according to the cross-correlation coefficient S2, the auto-correlation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the horizontal velocity, the first amplification factor, and the second amplification factor, as shown in the following formula.
  • v y represents the vertical velocity
  • v x represents the horizontal velocity
  • th′ BIO represents the rate threshold, which is used to limit the vertical velocity between -th′ BIO and th′ BIO ; that is, the vertical velocity is greater than or equal to -th′ BIO and less than or equal to th′ BIO .
  • n a may represent the first amplification factor, and the first amplification factor n a may be the smaller value of 5 and (BD-7), or the larger value of 1 and (BD-11).
  • n b may represent the second amplification factor, and the second amplification factor n b may be the smaller value of 8 and (BD-4), or the larger value of 4 and (BD-8).
  • >> represents a right shift, and ⌊·⌋ represents rounding down.
  • The first amplification factor n a and the second amplification factor n b can be configured based on experience, and the second amplification factor n b may be greater than the first amplification factor n a .
  • Value interval n a first amplification factor and a second amplifier for amplifying factor n b v y vertical rate for example, assuming n b -n a is 3, Is 8, the value interval of the vertical velocity v y can be enlarged by 8 times, assuming that n b -n a is 4, then If it is 16, the value interval of the vertical velocity v y can be enlarged by 16 times, and so on.
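The clipping and amplification described above can be illustrated with a short sketch. The patent's formulas (13) and (14) appear as images in the original, so the concrete expressions below (the floor-log2 normalization by S1 and S5, the cross term (v x *S2)>>1, and the default BD of 10) are assumptions in the style of a VVC-like bidirectional optical flow derivation, not the patent's literal formulas:

```python
def clip3(lo, hi, v):
    # Limit v to the interval [lo, hi].
    return max(lo, min(hi, v))

def floor_log2(x):
    # Floor of log2 for a positive integer.
    return x.bit_length() - 1

def bdof_velocities(S1, S2, S3, S5, S6, BD=10):
    """Hypothetical sketch of the horizontal/vertical velocity derivation.
    S1..S6 are the (cross-)correlation coefficients named in the text."""
    na = max(1, BD - 11)       # first amplification factor n_a (one option in the text)
    nb = max(4, BD - 8)        # second amplification factor n_b (one option in the text)
    th = 1 << max(5, BD - 7)   # rate threshold th'_BIO (one option in the text)
    # Horizontal velocity: value interval amplified by 2^(nb-na), clipped to [-th, th].
    vx = 0
    if S1 > 0:
        vx = clip3(-th, th, -((S3 << (nb - na)) >> floor_log2(S1)))
    # Vertical velocity: depends on vx through the cross-correlation coefficient S2.
    vy = 0
    if S5 > 0:
        vy = clip3(-th, th,
                   -(((S6 << (nb - na)) - ((vx * S2) >> 1)) >> floor_log2(S5)))
    return vx, vy
```

With BD = 10 the amplification gap n b -n a is 3, so the value interval is enlarged 8 times, matching the example in the text.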
  • Embodiment 18: After the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6 are obtained, one or more of these coefficients are used to determine the horizontal velocity of the sub-block of the current block corresponding to the sub-block in the reference frame.
  • Similarly, the vertical velocity of the sub-block of the current block in the reference frame can be determined.
  • the horizontal velocity can be determined according to the autocorrelation coefficient S1, the velocity threshold, the cross-correlation coefficient S3, the first amplification factor, and the second amplification factor. See the following formula as an example of determining the horizontal velocity based on the above parameters.
  • Formula (15) removes the negative sign in front; the other content is the same as that of formula (13), and the meaning of each parameter is also the same as in formula (13), which will not be repeated here.
  • the vertical velocity can be determined according to the cross-correlation coefficient S2, the auto-correlation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the horizontal velocity, the first amplification factor, and the second amplification factor, as shown in the following formula.
  • Embodiment 19: After the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6 are obtained, one or more of these coefficients are used to determine the horizontal velocity of the sub-block of the current block corresponding to the sub-block in the reference frame.
  • If the first preset condition is met, the horizontal velocity can be determined according to the cross-correlation coefficient S2, the rate threshold, the cross-correlation coefficient S6, the first amplification factor, and the second amplification factor. If the first preset condition is not met, the horizontal velocity can be determined according to the autocorrelation coefficient S1, the rate threshold, the cross-correlation coefficient S3, the first amplification factor, and the second amplification factor.
  • the first preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5, as shown in the following formula:
  • the above-mentioned first preset condition may include:
  • formula (18) is the same as formula (13), and will not be repeated here.
  • Embodiment 20: After the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6 are obtained, one or more of these coefficients are used to determine the horizontal velocity of the sub-block of the current block corresponding to the sub-block in the reference frame.
  • The second preset condition can be determined based on the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the first amplification factor, and the second amplification factor.
  • the horizontal velocity can be determined according to the autocorrelation coefficient S1, the velocity threshold, the cross-correlation coefficient S3, the first amplification factor and the second amplification factor.
  • formula (20) is the same as formula (13), and will not be repeated here.
  • the foregoing second preset condition may include but is not limited to:
  • this second preset condition is only an example, and there is no restriction on this.
  • Embodiment 21: For Embodiment 19, when the first preset condition is not satisfied, the formula for determining the horizontal velocity becomes formula (15). For Embodiment 20, when the second preset condition is not satisfied, the formula for determining the horizontal velocity also becomes formula (15).
  • Embodiment 22: After the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6 are obtained, one or more of these coefficients are used to determine the vertical velocity of the sub-block of the current block corresponding to the sub-block in the reference frame.
  • The untruncated horizontal velocity can be determined according to the autocorrelation coefficient S1, the cross-correlation coefficient S3, the first amplification factor, and the second amplification factor; the vertical velocity is then determined according to the cross-correlation coefficient S2, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the untruncated horizontal velocity, the first amplification factor, and the second amplification factor.
  • Otherwise, v x_org is 0.
  • In the above formula, the rate threshold th' BIO is not used to limit the horizontal velocity between -th' BIO and th' BIO ; therefore, v x_org is called the untruncated horizontal velocity, that is, no truncation processing is performed, and the untruncated horizontal velocity v x_org is not limited between -th' BIO and th' BIO .
  • v y represents the vertical velocity
  • v x_org represents the untruncated horizontal velocity
  • th' BIO represents the rate threshold
  • n a represents the first amplification factor
  • n b represents the second amplification factor
  • >> represents right shift
  • Embodiment 23: After the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6 are obtained, one or more of these coefficients are used to determine the vertical velocity of the sub-block of the current block corresponding to the sub-block in the reference frame.
  • If the third preset condition is met, the vertical velocity is determined according to the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the first amplification factor, and the second amplification factor.
  • If the third preset condition is not met, the vertical velocity is determined according to the cross-correlation coefficient S2, the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the horizontal velocity, the first amplification factor, and the second amplification factor.
  • the third preset condition may be determined based on the horizontal velocity.
  • For example, the third preset condition may be: v x is -th' BIO or th' BIO , that is, v x takes the minimum value or the maximum value.
  • v y represents the vertical velocity
  • th' BIO represents the rate threshold
  • n a represents the first amplification factor
  • n b represents the second amplification factor
  • >> represents a right shift, and the result of the shift is rounded down.
  • formula (24) is the same as formula (14), and will not be repeated here.
  • Embodiment 24 For Embodiment 23, when the third preset condition is not satisfied, the formula for determining the vertical velocity is changed to formula (16).
  • the rate threshold th' BIO can be 2 to the power of M, and M is the difference between 13 and BD, or the larger value of 5 and (BD-7).
  • the above is only an example of the rate threshold, and there is no restriction on this, and it can be configured based on experience.
  • By setting the first amplification factor n a to the larger value of 1 and (BD-11), the number of right shifts of the autocorrelation coefficients and cross-correlation coefficients of the gradients can be reduced, and the bit width available for preserving the accuracy of the autocorrelation coefficients and cross-correlation coefficients can be increased.
  • By setting the second amplification factor n b to the larger value of 4 and (BD-8), the number of right shifts of the autocorrelation coefficients and cross-correlation coefficients of the gradients can likewise be reduced, and the bit width available for preserving their accuracy can be increased.
  • By setting the rate threshold th' BIO to 2 to the power of max(5, BD-7), the bit width available for preserving the accuracy of the horizontal velocity v x and the vertical velocity v y can be increased.
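The parameter choices discussed above (n a as the larger of 1 and BD-11, n b as the larger of 4 and BD-8, th' BIO as 2 to the power of max(5, BD-7)) can be sketched as a small helper; the function name is illustrative, and the amplification gain 2^(n b -n a ) is included to show the enlargement of the velocity value interval:

```python
def bdof_params(BD):
    """Derive the amplification factors and rate threshold described in the
    text for a given bit depth BD (generally 8 or 10)."""
    na = max(1, BD - 11)        # first amplification factor n_a
    nb = max(4, BD - 8)         # second amplification factor n_b
    th = 2 ** max(5, BD - 7)    # rate threshold th'_BIO
    gain = 2 ** (nb - na)       # enlargement of the velocity value interval
    return na, nb, th, gain
```

For BD = 8 and BD = 10 this gives n a = 1, n b = 4, th' BIO = 32, and an 8-times enlargement of the value interval; only at higher bit depths do the parameters grow.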
  • Embodiment 26: After the encoding end/decoding end obtains the horizontal velocity v x and the vertical velocity v y , it can obtain the prediction compensation value of the sub-block of the current block according to the horizontal velocity v x and the vertical velocity v y . For example, the horizontal gradient and the vertical gradient are determined according to the first original predicted value of the sub-block, the second original predicted value of the sub-block, and the number of right shifts of the gradient, and the prediction compensation value is obtained according to the horizontal velocity v x , the vertical velocity v y , the horizontal gradient, and the vertical gradient.
  • the encoder/decoder can obtain the prediction compensation value b(x, y) of the sub-block of the current block through the following formula:
  • In the above formula, the horizontal gradient and the vertical gradient appear as parameters; these parameters can also be determined in the following ways.
  • I (k) (i, j) represents the pixel value at coordinates (i, j); for example, I (0) (i, j) represents the pixel value at coordinates (i, j) in the first reference block, corresponding to the first original prediction value of the sub-block, and I (1) (i, j) represents the pixel value at coordinates (i, j) in the second reference block, corresponding to the second original prediction value of the sub-block.
  • I (0) (x, y) represents the first original prediction value of the sub-block
  • I (1) (x, y) represents the second original prediction value of the sub-block.
  • I (0) (x, y) may be the first original prediction value of the first reference block with a size of 4*4, or the first original prediction value of the first reference block with a size of 6*6; I (1) (x, y) may be the second original prediction value of the second reference block with a size of 4*4, or the second original prediction value of the second reference block with a size of 6*6. The following takes I (0) (x, y) as the first original prediction value of the first reference block of size 4*4 and I (1) (x, y) as the second original prediction value of the second reference block of size 4*4 as an example.
  • v x represents the horizontal velocity
  • v y represents the vertical velocity
  • >> represents right shift
  • rnd represents rounding
  • shift1 represents the number of right shifts of the gradient
  • >> shift1 represents a right shift by shift1 bits.
  • the above is only an example of the number of right shifts of the gradient, there is no restriction on this, and it can be configured based on experience.
  • BD represents the bit depth, that is, the bit width required for the luminance value, generally 10 or 8.
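The gradient and compensation computations of Embodiment 26 can be sketched as follows. Since the patent's formulas are images lost in extraction, the central-difference form of the gradients, the (difference of gradients) weighting, and the rounding by ">> 1" below are assumptions modeled on a VVC-style BDOF compensation; the function names are illustrative:

```python
def gradients(I, x, y, shift1):
    """Central-difference horizontal/vertical gradients of a prediction block
    I (a list of rows), with each sample right-shifted by shift1 bits before
    differencing, as described for the number of gradient right shifts."""
    gx = (I[y][x + 1] >> shift1) - (I[y][x - 1] >> shift1)  # horizontal gradient
    gy = (I[y + 1][x] >> shift1) - (I[y - 1][x] >> shift1)  # vertical gradient
    return gx, gy

def compensation(vx, vy, gx0, gy0, gx1, gy1):
    """Prediction compensation value b(x, y) from the velocities and the
    gradients of the two reference blocks; the +1 rounding offset and the
    final >> 1 are assumed, not taken from the (missing) patent formula."""
    return (vx * (gx0 - gx1) + vy * (gy0 - gy1) + 1) >> 1
```

A usage sketch: compute (gx0, gy0) from the first reference block and (gx1, gy1) from the second, then combine them with v x and v y per pixel.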
  • Embodiment 27: The encoding end/decoding end can obtain the target predicted value of the sub-block of the current block according to the first original prediction value of the sub-block of the current block, the second original prediction value of the sub-block of the current block, and the prediction compensation value of the sub-block of the current block. For example, the encoding end/decoding end can obtain the target predicted value pred BDOF (x, y) of the sub-block of the current block through the following formula:
  • I (0) (x, y) represents the first original prediction value of the sub-block
  • I (1) (x, y) represents the second original prediction value of the sub-block
  • b(x, y) represents the prediction compensation value of the sub-block
  • >> shift represents a right shift by shift bits
  • o offset represents the offset, which is 2 to the power of (shift-1) plus 2 to the power of 14
  • shift is 15-BD
  • BD represents the bit depth
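The weighting of Embodiment 27 can be sketched directly from the parameters listed above. The combination pred BDOF (x, y) = (I (0) + I (1) + b + o offset ) >> shift is an assumption about the missing formula image, consistent with the stated shift = 15-BD and o offset = 2^(shift-1) + 2^14; a real codec would additionally clip the result to the sample range, which is omitted here:

```python
def target_pred(I0, I1, b, BD=10):
    """Sketch of pred_BDOF(x, y) = (I0 + I1 + b + o_offset) >> shift,
    with shift = 15 - BD and o_offset = 2**(shift - 1) + 2**14 as stated.
    I0/I1 are the two original prediction values, b the compensation value."""
    shift = 15 - BD
    o_offset = (1 << (shift - 1)) + (1 << 14)
    return (I0 + I1 + b + o_offset) >> shift
```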
  • Embodiment 28: When the current block is divided into multiple sub-blocks, the above-mentioned embodiments can be used to determine the target predicted value of each sub-block; that is, the prediction signal of each sub-block of the current block (such as a sub-block of 4*4 size) is adjusted to obtain the target predicted value of each sub-block.
  • In some cases, the signal adjustment process of certain sub-blocks can also be exited early; that is, the method of the foregoing embodiments is no longer used to determine the target predicted value of those sub-blocks.
  • the current block is divided into sub-block 1, sub-block 2, sub-block 3, and sub-block 4.
  • For example, the encoding end/decoding end first determines the target predicted value of sub-block 1 in the manner of the above-mentioned embodiments, and then determines the target predicted value of sub-block 2 in the same manner. Assuming that the target condition is then met, the encoding end/decoding end no longer uses the above-mentioned embodiments to determine the target predicted value of sub-block 3 or of sub-block 4; that is, it exits the signal adjustment process of sub-block 3 and sub-block 4 early.
  • For example, if the difference between the predicted values of two sub-blocks of the current block is less than a certain threshold TH_SUB_SAD, it can be determined that the target condition is met, and the signal adjustment process of the remaining sub-blocks can be exited early. For instance, if the difference value of the predicted value of sub-block 1 is less than the threshold TH_SUB_SAD and the difference value of the predicted value of sub-block 2 is less than the threshold TH_SUB_SAD, it is determined that the target condition is met, and the signal adjustment process of the remaining sub-blocks (i.e., sub-block 3 and sub-block 4) is exited early.
  • TH_SUB_SAD represents a certain threshold, which can be configured based on experience.
  • Alternatively, the current block is divided into multiple sub-blocks, and for each sub-block, the encoding end/decoding end determines whether the target condition is satisfied before determining the target predicted value of that sub-block using the above-mentioned embodiments. If the target condition is met, the method of the foregoing embodiments is no longer used to determine the target predicted value of the sub-block; that is, the signal adjustment process of the sub-block is exited early. If the target condition is not met, the method of the foregoing embodiments may be used to determine the target predicted value of the sub-block.
  • For example, if the difference value of the predicted value of the sub-block is less than a certain threshold TH_SUB_SAD, it can be determined that the target condition is satisfied, and the signal adjustment process of the sub-block is exited early. For instance, if the difference value of the predicted value of sub-block 1 is less than the threshold TH_SUB_SAD, it is determined that the target condition is satisfied, and the signal adjustment process of sub-block 1 is exited early.
  • The difference value of the predicted value may be the SAD between the predicted value of the first prediction block (that is, the first prediction block corresponding to the sub-block obtained from the first reference frame according to the first unidirectional motion information of the current block, subsequently denoted pred0) and the predicted value of the second prediction block (that is, the second prediction block corresponding to the sub-block obtained from the second reference frame according to the second unidirectional motion information of the current block, subsequently denoted pred1); that is, the SAD over all pixels of pred0 and pred1.
  • the difference between the predicted values of the first reference frame and the second reference frame can be determined by the following formula.
  • pred 0 (i, j) is the predicted value of pred0 in the i-th column and j-th row of pred0
  • pred 1 (i, j) is the predicted value of pred1 in the i-th column and j-th row of pred1
  • n is the total number of pixels
  • abs(x) is the absolute value of x
  • H is the height value
  • W is the width value.
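The SAD computation over all pixels of pred0 and pred1 described above can be sketched as follows; the function name is illustrative, and the blocks are represented as equally sized lists of rows:

```python
def sad(pred0, pred1):
    """Sum of absolute differences over all pixels of two equally sized
    prediction blocks, as in the difference-value formula of the text."""
    return sum(abs(a - b)
               for row0, row1 in zip(pred0, pred1)   # iterate rows (height H)
               for a, b in zip(row0, row1))          # iterate columns (width W)
```

The result is then compared against the threshold TH_SUB_SAD to decide whether the target condition is met.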
  • The difference value of the predicted value may also be the SAD between the predicted value of the first prediction block after down-sampling N times (denoted pred0, that is, the predicted value obtained after down-sampling the predicted value of the first prediction block) and the predicted value of the second prediction block after down-sampling N times (denoted pred1, that is, the predicted value obtained after down-sampling the predicted value of the second prediction block).
  • pred 0 (i, j) is the predicted value of pred0 in the i-th column and j-th row of pred0
  • pred 1 (i, j) is the predicted value of pred1 in the i-th column and j-th row of pred1
  • n is the total number of pixels
  • abs(x) is the absolute value of x
  • H is the height value
  • W is the width value
  • N is a positive integer, preferably 2.
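The down-sampled variant can be sketched by evaluating the SAD only on every N-th row, which is one plausible reading of "down-sampling N times" (the exact down-sampling pattern is not fully specified by the extracted text, so this row-skipping interpretation is an assumption):

```python
def sad_downsampled(pred0, pred1, N=2):
    """SAD computed on every N-th row of the two prediction blocks only;
    N = 2 (the preferred value in the text) keeps rows 0, 2, 4, ..."""
    return sum(abs(a - b)
               for j in range(0, len(pred0), N)       # every N-th row
               for a, b in zip(pred0[j], pred1[j]))   # all columns of that row
```

Compared with the full SAD, this reduces the cost of the early-exit check by roughly a factor of N.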
  • Based on the same application concept as the above method, an embodiment of the application also proposes an encoding and decoding device applied to the encoding end or the decoding end, and the device is used to obtain the target predicted value of the current block or of a sub-block of the current block if the characteristic information of the current block meets a specific condition. As shown in FIG. 5, which is a structural diagram of the apparatus, the apparatus includes:
  • the first determining module 51 is configured to determine a first original prediction value according to the first unidirectional motion information of the current block if the characteristic information of the current block meets a specific condition, and determine according to the second unidirectional motion information of the current block The second original predicted value;
  • the second determining module 52 is configured to determine the horizontal velocity according to the first original predicted value and the second original predicted value; determine the vertical velocity according to the first original predicted value and the second original predicted value;
  • the first obtaining module 53 is configured to obtain a predicted compensation value according to the horizontal velocity and the vertical velocity;
  • the second obtaining module 54 is configured to obtain a target predicted value according to the first original predicted value, the second original predicted value, and the predicted compensation value.
  • the feature information may include, but is not limited to, one or more of the following: motion information attributes; prediction mode attributes; size information; sequence-level switch control information.
  • the first determining module 51 is further configured to: if the characteristic information includes the motion information attribute, and the motion information attribute meets at least one of the following conditions, determine that the motion information attribute meets a specific condition;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block come from different directions;
  • the current block includes multiple sub-blocks, and motion information of the multiple sub-blocks is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of two reference frames corresponding to the current block are the same;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are at the same distance from the current frame;
  • the current block adopts bidirectional prediction, and the difference between the prediction values of the two reference frames corresponding to the current block is less than a preset threshold.
  • The first determining module 51 is further configured to: obtain a first prediction block from a first reference frame according to the first unidirectional motion information of the current block, and obtain a second prediction block from a second reference frame according to the second unidirectional motion information of the current block;
  • The first determining module 51 is further configured to: if the feature information includes the prediction mode attribute, and the prediction mode attribute is not to use a fusion mode based on intra-frame and inter-frame joint prediction, and/or not to use the symmetrical motion vector difference mode, determine that the prediction mode attribute meets a specific condition.
  • the first determining module 51 is further configured to: if the characteristic information includes the sequence-level switch control information, and the sequence-level switch control information is to allow the current block to adopt a bidirectional optical flow mode, determine the sequence The level switch control information satisfies certain conditions.
  • The first determining module 51 is further configured to: if the feature information includes the size information, and the size information meets at least one of the following conditions, determine that the size information meets a specific condition: the width value of the current block is greater than or equal to the first threshold, and the width value of the current block is less than or equal to the second threshold; the height value of the current block is greater than or equal to the third threshold, and the height value of the current block is less than or equal to the fourth threshold; the area value of the current block is greater than or equal to the fifth threshold, and the area value of the current block is less than or equal to the sixth threshold.
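The size conditions listed above can be sketched as a predicate. The six threshold values below are illustrative placeholders only (the text leaves them configurable), and for simplicity the sketch requires all three ranges at once rather than "at least one":

```python
def size_ok(width, height,
            t1=8, t2=128, t3=8, t4=128, t5=64, t6=16384):
    """Check the width/height/area ranges of the size information; the
    threshold values t1..t6 are hypothetical, not fixed by the text."""
    return (t1 <= width <= t2 and            # width in [first, second] threshold
            t3 <= height <= t4 and           # height in [third, fourth] threshold
            t5 <= width * height <= t6)      # area in [fifth, sixth] threshold
```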
  • the first determining module 51 determines the first original prediction value according to the first unidirectional motion information of the current block, and is specifically used to determine the second original prediction value according to the second unidirectional motion information of the current block:
  • According to the first unidirectional motion information of the current block, the first reference block is determined from the first reference frame, and the first original prediction value of the first reference block is determined; wherein the first original predicted value of the central area of the first reference block is obtained by interpolating the pixel values in the first reference frame, and the first original predicted value of the edge area of the first reference block is obtained by copying the pixel values in the first reference frame;
  • According to the second unidirectional motion information of the current block, a second reference block is determined from a second reference frame, and a second original prediction value of the second reference block is determined; wherein the second original prediction value of the central area of the second reference block is obtained by interpolating the pixel values in the second reference frame, and the second original prediction value of the edge area of the second reference block is obtained by copying the pixel values in the second reference frame.
  • When the second determining module 52 determines the horizontal velocity according to the first original predicted value and the second original predicted value, it is specifically configured to: when the first preset condition is met, determine, according to the first original predicted value and the second original predicted value, the cross-correlation coefficient S2 between the horizontal gradient sum and the vertical gradient sum, and the cross-correlation coefficient S6 between the time-domain prediction value difference and the vertical gradient sum; and determine the horizontal velocity according to the cross-correlation coefficient S2, the rate threshold, the cross-correlation coefficient S6, the first amplification factor, and the second amplification factor. The first preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5 of the vertical gradient sum.
  • When the second determining module 52 determines the horizontal velocity according to the first original prediction value and the second original prediction value, it is specifically configured to: if a second preset condition is met, determine, according to the first original prediction value and the second original prediction value, the autocorrelation coefficient S1 of the horizontal gradient sum, the cross-correlation coefficient S2 between the horizontal gradient sum and the vertical gradient sum, the cross-correlation coefficient S3 between the time-domain predicted value difference and the horizontal gradient sum, and the autocorrelation coefficient S5 of the vertical gradient sum; and determine the horizontal velocity according to the autocorrelation coefficient S1, the rate threshold, the cross-correlation coefficient S3, the first amplification factor, and the second amplification factor; wherein the second preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5.
  • When the second determining module 52 determines the vertical velocity according to the first original prediction value and the second original prediction value, it is specifically configured to: obtain, according to the first original prediction value and the second original prediction value, the untruncated horizontal velocity that has not been subjected to truncation processing, and determine the vertical velocity according to the untruncated horizontal velocity.
  • When the second determining module 52 determines the vertical velocity according to the first original prediction value and the second original prediction value, it is specifically configured to: when a third preset condition is satisfied, determine, according to the first original prediction value and the second original prediction value, the autocorrelation coefficient S5 of the vertical gradient sum and the cross-correlation coefficient S6 between the time-domain prediction value difference and the vertical gradient sum; and determine the vertical velocity according to the autocorrelation coefficient S5, the cross-correlation coefficient S6, the rate threshold, the first amplification factor, and the second amplification factor; wherein the third preset condition is determined based on the horizontal velocity.
  • the cross-correlation coefficient S2 is located between the first cross-correlation coefficient threshold and the second cross-correlation coefficient threshold;
  • the cross-correlation coefficient S6 is located between the third cross-correlation coefficient threshold and the fourth cross-correlation coefficient threshold.
  • The first amplification factor is the smaller value of 5 and (BD-7), or the larger value of 1 and (BD-11); the second amplification factor is the smaller value of 8 and (BD-4), or the larger value of 4 and (BD-8); the rate threshold is 2 to the M-th power, where M is the difference between 13 and BD, or the larger value of 5 and (BD-7); where BD is the bit depth.
  • When the first obtaining module 53 obtains the prediction compensation value according to the horizontal velocity and the vertical velocity, it is specifically configured to: determine the horizontal gradient and the vertical gradient according to the first original prediction value, the second original prediction value, and the number of right shifts of the gradient, and obtain the prediction compensation value according to the horizontal velocity, the vertical velocity, the horizontal gradient, and the vertical gradient; wherein the number of right shifts of the gradient is the larger of 2 and (14-BD), or the larger of 6 and (BD-6), and BD is the bit depth.
  • The schematic diagram of its hardware architecture can be seen in FIG. 6. It includes: a processor 61 and a machine-readable storage medium 62, where the machine-readable storage medium 62 stores machine-executable instructions that can be executed by the processor 61; the processor 61 is configured to execute the machine-executable instructions in order to implement the method disclosed in the above examples of this application.
  • the processor is used to execute machine executable instructions to implement the following steps:
  • If the characteristic information of the current block meets a specific condition, perform the following steps to obtain the target prediction value of the current block or a sub-block of the current block: determine the first original prediction value according to the first unidirectional motion information of the current block, and determine the second original prediction value according to the second unidirectional motion information of the current block; determine the horizontal velocity according to the first original prediction value and the second original prediction value; determine the vertical velocity according to the first original prediction value and the second original prediction value; obtain the prediction compensation value according to the horizontal velocity and the vertical velocity; and obtain the target predicted value according to the first original prediction value, the second original prediction value, and the prediction compensation value.
  • the schematic diagram of the hardware architecture of the device may be specifically shown in FIG. 7. It includes: a processor 71 and a machine-readable storage medium 72, where the machine-readable storage medium 72 stores machine-executable instructions that can be executed by the processor 71; the processor 71 is configured to execute machine-executable instructions, In order to realize the method disclosed in the above examples of this application.
  • the processor is used to execute machine executable instructions to implement the following steps:
  • If the characteristic information of the current block meets a specific condition, perform the following steps to obtain the target prediction value of the current block or a sub-block of the current block: determine the first original prediction value according to the first unidirectional motion information of the current block, and determine the second original prediction value according to the second unidirectional motion information of the current block; determine the horizontal velocity according to the first original prediction value and the second original prediction value; determine the vertical velocity according to the first original prediction value and the second original prediction value; obtain the prediction compensation value according to the horizontal velocity and the vertical velocity; and obtain the target predicted value according to the first original prediction value, the second original prediction value, and the prediction compensation value.
  • an embodiment of the present application also proposes an encoding and decoding device, which includes:
  • the determining module is used to determine the conditions that the current block meets when the bidirectional optical flow mode is enabled for the current block including:
  • Disable CIIP mode for the current block or disable symmetrical motion vector difference mode
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • an embodiment of the present application also proposes an encoding and decoding device, which includes:
  • the determining module is configured to determine that, when the bidirectional optical flow mode is enabled for the current block, the conditions met by the current block include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of width and height is greater than or equal to 128;
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • an embodiment of the present application also proposes an encoding and decoding device, which includes:
  • the determining module is configured to determine that, when the bidirectional optical flow mode is enabled for the current block, the conditions met simultaneously by the current block include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • an embodiment of the present application also proposes an encoding and decoding device, which includes:
  • the determining module is configured to determine that, when the bidirectional optical flow mode is enabled for the current block, the conditions met by the current block include:
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • an embodiment of the present application also proposes an encoding and decoding device, which includes:
  • the determining module is configured to determine that the conditions met simultaneously by the current block include:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode;
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
  • the motion compensation module is configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
  • the determining module is further configured to determine not to start the bidirectional optical flow mode for the current block when the current block does not satisfy any one of the following conditions:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode;
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within the limited range.
  • the current block does not adopt the sub-block motion information mode, including:
  • the current block does not adopt the Affine mode; or, the current block does not adopt the SBTMVP mode.
  • the width value, height value, and area value of the current block all being within a limited range includes:
  • the width value of the current block is greater than or equal to a first threshold, and the width value of the current block is less than or equal to a second threshold;
  • the height value of the current block is greater than or equal to a third threshold, and the height value of the current block is less than or equal to a fourth threshold;
  • the area value of the current block is greater than or equal to the fifth threshold, and the area value of the current block is less than or equal to the sixth threshold.
  • the first threshold is 8, and the second threshold is 16;
  • the third threshold is 8, and the fourth threshold is 16;
  • the fifth threshold is 64, and the sixth threshold is 256.
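  As a hedged illustration of the threshold conditions above, the check can be sketched as follows. The function name and default parameter values are my own (mirroring the example values 8/16 for width and height and 64/256 for area given in this variant), not language taken from the claims:

```python
# Illustrative sketch: checking whether a block's width, height, and area
# all fall within the limited ranges described above. The threshold values
# mirror the example values given here; other variants use other limits.

def size_within_limits(width, height,
                       w_min=8, w_max=16,
                       h_min=8, h_max=16,
                       s_min=64, s_max=256):
    """Return True if width, height, and area are all within their ranges."""
    area = width * height
    return (w_min <= width <= w_max and
            h_min <= height <= h_max and
            s_min <= area <= s_max)

print(size_within_limits(8, 8))    # area 64, all ranges satisfied
print(size_within_limits(4, 16))   # width below the minimum
```

  A block failing any one of the three range checks fails the whole condition, matching the "all within a limited range" wording above.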
  • the width value, height value, and area value of the current block all being within a limited range includes:
  • the width value of the current block is greater than or equal to the first threshold
  • the height value of the current block is greater than or equal to the third threshold
  • the area value of the current block is greater than or equal to the fifth threshold.
  • the first threshold is 8
  • the third threshold is 8
  • the fifth threshold is 128.
  • the motion compensation module is specifically used for:
  • an embodiment of the present application also proposes a video encoder, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is used to execute machine executable instructions to implement the following steps:
  • the conditions that the current block meets include:
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
  • the conditions that the current block meets include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of width and height is greater than or equal to 128;
  • the conditions that the current block meets at the same time include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
  • the conditions that the current block meets include:
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the conditions that the current block meets at the same time include:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode;
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
  • an embodiment of the present application also proposes a video decoder, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is used to execute machine executable instructions to implement the following steps:
  • the conditions that the current block meets include:
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
  • the conditions that the current block meets include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of width and height is greater than or equal to 128;
  • the conditions that the current block meets at the same time include:
  • the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of width and height is greater than or equal to 128, and
  • the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
  • the conditions that the current block meets include:
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the conditions that the current block meets at the same time include:
  • the switch control information allows the current block to adopt the bidirectional optical flow mode;
  • the current block does not use the sub-block motion information mode, and the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
  • the current block adopts bidirectional prediction, and the two reference frames corresponding to the current block are from different directions, and the distance between the two reference frames corresponding to the current block and the current frame is the same;
  • the current block adopts bidirectional prediction, and the weighted weights of the two reference frames corresponding to the current block are the same;
  • the width value, height value and area value of the current block are all within a limited range
  • an embodiment of the application also provides a machine-readable storage medium.
  • the machine-readable storage medium stores a number of computer instructions; when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of the present application can be implemented.
  • the above-mentioned machine-readable storage medium may be any electronic, magnetic, optical or other physical storage device, and may contain or store information, such as executable instructions, data, and so on.
  • the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard drive), a solid-state drive, any type of storage disk (such as a CD or DVD), a similar storage medium, or a combination thereof.
  • the systems, devices, modules or units explained in the above embodiments may be implemented by computer chips or entities, or implemented by products with certain functions.
  • a typical implementation is a computer.
  • the computer may take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver, a tablet computer, a wearable device, or a combination of any of these devices.
  • the embodiments of the present application can be provided as methods, systems, or computer program products.
  • This application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware.
  • the embodiments of the present application may adopt the form of computer program products implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • these computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device,
  • the instruction device realizes the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.


Abstract

The present application provides an encoding and decoding method, apparatus, and device. The method includes: when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled; if it is determined to start the bidirectional optical flow mode for the current block, motion compensation based on the bidirectional optical flow mode is performed on the current block.

Description

Encoding and decoding method, apparatus, and device — Technical Field
The present application relates to the technical field of encoding and decoding, and in particular to an encoding and decoding method, apparatus, and device.
Background
To save space, video images are transmitted only after being encoded. A complete video encoding method may include processes such as prediction, transform, quantization, entropy coding, and filtering. Predictive coding may include intra coding and inter coding; inter coding exploits the correlation of the video temporal domain, using pixels of neighboring encoded images to predict the pixels of the current image, so as to effectively remove video temporal redundancy.
In inter coding, a motion vector (MV) may be used to represent the relative displacement between a current block of the current video frame and a reference block of a reference video frame. For example, video image A of the current frame and video image B of the reference frame have a strong temporal correlation. When image block A1 (the current block) of video image A needs to be transmitted, a motion search may be performed in video image B to find the image block B1 (i.e., the reference block) that best matches image block A1, and to determine the relative displacement between image block A1 and image block B1; this relative displacement is the motion vector of image block A1.
The encoder may send the motion vector to the decoder instead of sending image block A1; the decoder can then obtain image block A1 from the motion vector and image block B1. Since the number of bits occupied by the motion vector is far smaller than the number of bits occupied by image block A1, this approach saves a large number of bits.
In the conventional approach, when the current block is a unidirectional block, encoding/decoding can be performed according to the motion information of the current block once it is obtained, thereby improving coding performance. However, when the current block is a bidirectional block, after the bidirectional motion information of the current block is obtained, prediction images from two different directions can be obtained according to the bidirectional motion information. Prediction images from two different directions often have a mirror-symmetric relationship, and the current coding framework does not fully exploit this property to further remove redundancy. That is, for the application scenario of bidirectional blocks, problems such as relatively poor coding performance currently exist.
Summary
The present application provides an encoding and decoding method, apparatus, and device, which can improve coding performance.
The present application provides an encoding and decoding method, the method including:
when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
The present application provides an encoding and decoding method, the method including:
when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of the width and height is greater than or equal to 128;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
The present application provides an encoding and decoding method, the method including:
when the bidirectional optical flow mode is enabled for a current block, the conditions met simultaneously by the current block include:
the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of the width and height is greater than or equal to 128, and
the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
The present application provides an encoding and decoding method, the method including:
when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include:
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
The present application provides an encoding and decoding method, the method including:
when the bidirectional optical flow mode is started for a current block, the conditions met simultaneously by the current block include:
the switch control information allows the current block to adopt the bidirectional optical flow mode;
the current block does not use the sub-block motion information mode, the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
the current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same;
the width value, height value, and area value of the current block are all within a limited range;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
The present application provides an encoding and decoding apparatus, the apparatus including:
a determining module, configured to determine that, when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled; and a motion compensation module, configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
The present application provides an encoding and decoding apparatus, the apparatus including:
a determining module, configured to determine that, when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of the width and height is greater than or equal to 128; and a motion compensation module, configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
The present application provides an encoding and decoding apparatus, the apparatus including:
a determining module, configured to determine that, when the bidirectional optical flow mode is enabled for a current block, the conditions met simultaneously by the current block include:
the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of the width and height is greater than or equal to 128, and
the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
and a motion compensation module, configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
The present application provides an encoding and decoding apparatus, the apparatus including:
a determining module, configured to determine that, when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include:
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
and a motion compensation module, configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
The present application provides an encoding and decoding apparatus, the apparatus including:
a determining module, configured to determine that, when the bidirectional optical flow mode is started for a current block, the conditions met simultaneously by the current block include:
the switch control information allows the current block to adopt the bidirectional optical flow mode;
the current block does not use the sub-block motion information mode, the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
the current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same;
the width value, height value, and area value of the current block are all within a limited range;
and a motion compensation module, configured to perform motion compensation based on the bidirectional optical flow mode on the current block if it is determined to start the bidirectional optical flow mode for the current block.
The present application provides a video encoder, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
the processor is configured to execute the machine-executable instructions to implement the following steps:
when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is enabled for the current block, the conditions met by the current block include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of the width and height is greater than or equal to 128;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is enabled for the current block, the conditions met simultaneously by the current block include:
the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of the width and height is greater than or equal to 128, and
the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is enabled for the current block, the conditions met by the current block include:
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is started for the current block, the conditions met simultaneously by the current block include:
the switch control information allows the current block to adopt the bidirectional optical flow mode;
the current block does not use the sub-block motion information mode, the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
the current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same;
the width value, height value, and area value of the current block are all within a limited range;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
The present application provides a video decoder, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
the processor is configured to execute the machine-executable instructions to implement the following steps:
when the bidirectional optical flow mode is enabled for a current block, the conditions met by the current block include: the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is enabled for the current block, the conditions met by the current block include: the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, and the product of the width and height is greater than or equal to 128;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is enabled for the current block, the conditions met simultaneously by the current block include:
the width of the current block is greater than or equal to 8, or the height is greater than or equal to 8, the product of the width and height is greater than or equal to 128, and
the CIIP mode is disabled for the current block, or the symmetric motion vector difference mode is disabled;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is enabled for the current block, the conditions met by the current block include: the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block;
or,
when the bidirectional optical flow mode is started for the current block, the conditions met simultaneously by the current block include:
the switch control information allows the current block to adopt the bidirectional optical flow mode;
the current block does not use the sub-block motion information mode, the current block does not use the CIIP mode, and the current block does not use the SMVD mode;
the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same;
the current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same;
the width value, height value, and area value of the current block are all within a limited range;
if it is determined to start the bidirectional optical flow mode for the current block, performing motion compensation based on the bidirectional optical flow mode on the current block.
As can be seen from the above technical solutions, in the embodiments of the present application, the first original prediction value can be determined according to the first unidirectional motion information of the current block, and the second original prediction value can be determined according to the second unidirectional motion information of the current block; the horizontal-direction velocity and the vertical-direction velocity are determined according to the first and second original prediction values, the prediction compensation value is obtained according to the horizontal- and vertical-direction velocities, and the target prediction value is obtained according to the prediction compensation value. This approach can obtain the target prediction value of the current block or of a sub-block of the current block based on the optical flow method, thereby improving hardware-implementation friendliness and bringing an improvement in coding performance.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in the present application; a person of ordinary skill in the art can also obtain other drawings from these drawings.
FIG. 1A is a schematic diagram of interpolation in an embodiment of the present application;
FIG. 1B is a schematic diagram of a video encoding framework in an embodiment of the present application;
FIG. 2 is a flowchart of an encoding and decoding method in an embodiment of the present application;
FIG. 3 is a flowchart of an encoding and decoding method in an embodiment of the present application;
FIG. 4 is a schematic diagram of reference blocks corresponding to sub-blocks of a current block in an embodiment of the present application;
FIG. 5 is a structural diagram of an encoding and decoding apparatus in an embodiment of the present application;
FIG. 6 is a hardware structure diagram of a decoder-side device in an embodiment of the present application;
FIG. 7 is a hardware structure diagram of an encoder-side device in an embodiment of the present application.
Detailed Description
The terms used in the embodiments of the present application are only for the purpose of describing specific embodiments, and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the embodiments of the present application and the claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations of one or more of the associated listed items. It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various kinds of information, the information should not be limited by these terms; these terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. In addition, depending on the context, the word "if" as used may be interpreted as "upon", "when", or "in response to determining".
The embodiments of the present application propose an encoding and decoding method, apparatus, and device, which may involve the following concepts:
Intra prediction and inter prediction: Intra prediction uses the correlation of the video spatial domain, predicting the current pixels from pixels of already-encoded blocks of the current image, in order to remove video spatial redundancy. Inter prediction uses the correlation of the video temporal domain; since a video sequence contains strong temporal correlation, pixels of neighboring encoded images are used to predict the pixels of the current image, effectively removing video temporal redundancy. The inter prediction parts of video coding standards basically all adopt block-based motion compensation technology, whose principle is to find, for each pixel block of the current image, a best matching block in a previously encoded image; this process is called motion estimation (ME).
Motion Vector (MV): In inter coding, a motion vector can be used to represent the relative displacement between the current coding block and the best matching block in its reference image. Each partitioned block has a corresponding motion vector that needs to be transmitted to the decoder. If the motion vector of each block were encoded and transmitted independently, especially when blocks are partitioned into small sizes, a considerable number of bits would be consumed. To reduce the number of bits used to encode motion vectors, the spatial correlation between neighboring image blocks can be exploited: the motion vector of the current block to be encoded is predicted from the motion vectors of neighboring encoded blocks, and then the prediction difference is encoded. In this way, the number of bits representing motion vectors can be effectively reduced. In the process of encoding the motion vector of the current block, the motion vector of the current block is first predicted using the motion vectors of neighboring encoded blocks, and then the difference (MVD, Motion Vector Difference) between the motion vector prediction (MVP) and the true estimate of the motion vector is encoded, thereby effectively reducing the number of MV coding bits.
Motion Information: Since a motion vector represents the position offset between the current image block and some reference image block, index information of the reference frame image is also needed in addition to the motion vector, to accurately indicate which reference frame image is used. In video coding technology, for the current frame image, a reference frame image list can usually be established, and the reference frame image index information indicates which reference frame image in the reference frame image list is adopted by the current image block. In addition, many coding technologies also support multiple reference image lists, so an index value may also be used to indicate which reference image list is used; this index value may be called the reference direction. In video coding technology, motion-related information such as the motion vector, reference frame index, and reference direction may be collectively referred to as motion information.
Interpolation: If the current motion vector has non-integer pixel precision, the required pixel values cannot be copied directly from the corresponding reference frame and must be obtained by interpolation. As shown in FIG. 1A, to obtain the pixel value Y_{1/2} at a half-pixel offset, it is interpolated from the surrounding existing pixel values X. With an interpolation filter of N taps, the value is interpolated from N surrounding integer-position pixels. If the number of taps N is 8, then:
Y_{1/2} = sum_{k=0}^{7} a_k * X_k
where the a_k are the filter coefficients, i.e., the weighting coefficients.
Motion compensation: Motion compensation is the process of obtaining all predicted pixel values of the current block by interpolation or copying.
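The interpolation step above can be sketched as follows. This is a minimal illustration of an 8-tap weighted sum; the coefficient values below are illustrative placeholders of the general shape used by half-pel filters, not coefficients taken from this application, and the integer normalization by the coefficient sum is my own assumption:

```python
# Minimal sketch of half-pixel interpolation with an 8-tap filter:
# a weighted sum of 8 neighboring integer-position pixels, normalized
# by the sum of the (placeholder) coefficients.

def interpolate_half_pel(x, a):
    """x: 8 integer-position pixel values; a: 8 filter (weighting) coefficients."""
    assert len(x) == len(a) == 8
    total = sum(ak * xk for ak, xk in zip(a, x))
    # Integer normalization; assumes the coefficient sum is non-zero.
    return total // sum(a)

coeffs = [-1, 4, -11, 40, 40, -11, 4, -1]  # placeholder taps, sum = 64
print(interpolate_half_pel([100] * 8, coeffs))  # flat area stays at 100
```

In a flat region every neighboring pixel is identical, so the interpolated half-pel value equals that pixel value, which is a quick sanity check on any tap set.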
Video encoding framework: As shown in FIG. 1B, the video encoding framework can be used to implement the encoder-side processing flow of the embodiments of the present application. The schematic diagram of the video decoding framework is similar to FIG. 1B and is not repeated here; the video decoding framework can be used to implement the decoder-side processing flow of the embodiments of the present application. Specifically, the video encoding framework and the video decoding framework include modules such as intra prediction, motion estimation/motion compensation, a reference image buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and an entropy encoder. On the encoder side, the encoder-side processing flow can be realized through the cooperation of these modules; on the decoder side, the decoder-side processing flow can be realized through the cooperation of these modules.
In the conventional approach, when the current block is a bidirectional block (i.e., a block using bidirectional prediction), the prediction images from two different directions often have a mirror-symmetric relationship, and the current coding framework does not fully exploit this property to further remove redundancy, leading to problems such as relatively poor coding performance. In view of this finding, the embodiments of the present application propose, for the case where the current block is a bidirectional block, a prediction signal adjustment approach based on the optical flow method: the original prediction values are obtained based on the original motion information; based on the original prediction values, the prediction compensation value of the current block is obtained through the optical flow equation; and the target prediction value of the current block is obtained based on the prediction compensation value and the original prediction values. This approach can obtain the target prediction value of the current block based on the optical flow method, improving hardware-implementation friendliness and bringing an improvement in coding performance, i.e., coding performance and coding efficiency can be improved.
The encoding and decoding methods of the embodiments of the present application are described in detail below with reference to several specific embodiments.
Embodiment 1: Referring to FIG. 2, a schematic flowchart of the encoding and decoding method, the method may be applied to the decoder side or the encoder side and obtains the target prediction value of the current block or of a sub-block of the current block by performing the following steps. If the following steps are performed for the current block, the target prediction value of the current block can be obtained; if the current block is divided into at least one sub-block and the following steps are performed for each sub-block of the current block, the target prediction value of each sub-block of the current block can be obtained. The method includes:
Step 201: if the characteristic information of the current block meets a specific condition, determine the first original prediction value according to the first unidirectional motion information of the current block, and determine the second original prediction value according to the second unidirectional motion information of the current block.
Exemplarily, the current block may be a bidirectional block (i.e., a block using bidirectional prediction); that is, the motion information corresponding to the current block is bidirectional motion information, and this bidirectional motion information may include motion information in two different directions, referred to as the first unidirectional motion information and the second unidirectional motion information. The first unidirectional motion information may correspond to a first reference frame, which is located before the current frame in which the current block is located; the second unidirectional motion information may correspond to a second reference frame, which is located after the current frame in which the current block is located.
Exemplarily, the characteristic information may include, but is not limited to, one or more of the following: a motion information attribute; a prediction mode attribute; size information; sequence-level switch control information. Of course, the above are only examples, and no limitation is imposed.
If the characteristic information includes the motion information attribute, the motion information attribute is determined to meet the specific condition when at least one of the following holds: 1. the current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; 2. the current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is all the same; 3. the current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same; 4. the current block uses bidirectional prediction, and the distances between the two reference frames corresponding to the current block and the current frame are the same; 5. the current block uses bidirectional prediction, and the difference between the prediction values of the two reference frames corresponding to the current block is smaller than a preset threshold. Of course, the above are only examples, and no limitation is imposed.
For case 5, the difference between the prediction values of the two reference frames corresponding to the current block also needs to be obtained. To obtain this difference, the following may be done: obtain a first prediction block from the first reference frame according to the first unidirectional motion information of the current block, and obtain a second prediction block from the second reference frame according to the second unidirectional motion information of the current block; obtain the difference between the prediction values of the first and second reference frames according to the SAD of the downsampled prediction values of the first prediction block and the downsampled prediction values of the second prediction block. Alternatively: obtain the first and second prediction blocks as above, and obtain the difference between the prediction values of the first and second reference frames according to the SAD of the prediction values of the first prediction block and the prediction values of the second prediction block.
If the characteristic information includes the prediction mode attribute, and the prediction mode attribute is that the fusion mode based on combined intra-inter prediction is not used and/or the symmetric motion vector difference mode is not used, it is determined that the prediction mode attribute meets the specific condition.
If the characteristic information includes the sequence-level switch control information, and the sequence-level switch control information allows the current block to adopt the bidirectional optical flow mode, it is determined that the sequence-level switch control information meets the specific condition.
If the characteristic information includes size information, where the size information includes at least one of a width value, a height value, and an area value, the size information meets the specific condition when at least one of the width value, height value, and area value meets its corresponding threshold condition. Exemplarily, the size information is determined to meet the specific condition when at least one of the following holds: the width value of the current block is greater than or equal to a first threshold, and the width value of the current block is less than or equal to a second threshold; the height value of the current block is greater than or equal to a third threshold, and the height value of the current block is less than or equal to a fourth threshold; the area value of the current block is greater than or equal to a fifth threshold, and the area value of the current block is less than or equal to a sixth threshold. Of course, the above are only examples, and no limitation is imposed.
Exemplarily, the first threshold may be smaller than the second threshold, and both the first and second thresholds are positive-integer powers of 2, with no limitation on either; for example, the first threshold may be 8 and the second threshold may be 128. The third threshold may be smaller than the fourth threshold, and both are also positive-integer powers of 2, with no limitation on either; for example, the third threshold may be 8 and the fourth threshold may be 128. The fifth threshold may be smaller than the sixth threshold, and both are also positive-integer powers of 2, with no limitation on either; for example, the fifth threshold may be 64 (i.e., 8*8) and the sixth threshold may be 16384 (i.e., 128*128). Of course, the above thresholds are only examples and are not limiting.
Exemplarily, the characteristic information may include one or more of the motion information attribute, the prediction mode attribute, the size information, and the sequence-level switch control information. When the characteristic information includes only one of these items, that item meeting the specific condition indicates that the characteristic information meets the specific condition.
When the characteristic information includes at least two of the motion information attribute, the prediction mode attribute, the size information, and the sequence-level switch control information — for example, the motion information attribute and the prediction mode attribute — the characteristic information meets the specific condition when the motion information attribute meets the specific condition and the prediction mode attribute meets the specific condition. As another example, when the characteristic information includes the motion information attribute, the size information, and the sequence-level switch control information, the characteristic information meets the specific condition when all three meet the specific condition. Of course, the above are only examples, and no limitation is imposed.
Exemplarily, determining the first original prediction value according to the first unidirectional motion information of the current block and the second original prediction value according to the second unidirectional motion information of the current block may include, but is not limited to: determining a first reference block from the first reference frame based on the first unidirectional motion information of the current block, and determining the first original prediction value of the first reference block, where the first original prediction value of the central region of the first reference block is obtained by interpolating pixel values in the first reference frame, and the first original prediction value of the edge region of the first reference block is obtained by copying pixel values in the first reference frame; and determining a second reference block from the second reference frame based on the second unidirectional motion information of the current block, and determining the second original prediction value of the second reference block, where the second original prediction value of the central region of the second reference block is obtained by interpolating pixel values in the second reference frame, and the second original prediction value of the edge region of the second reference block is obtained by copying pixel values in the second reference frame.
For example, based on the first unidirectional motion information of the current block, the first reference block corresponding to the current block can be determined from the first reference frame. Suppose the size of the current block is M*M and the size of the first reference block is N*N, where N may be greater than M, e.g., M is 4 and N is 6. The first reference block can be divided into a central region and an edge region: the central region of the first reference block refers to the region of size M*M centered at the center point of the first reference block, and the edge region refers to the region of the first reference block other than the central region. For the central region, the first original prediction value is obtained by interpolating pixel values in the first reference frame; for the edge region, the first original prediction value is obtained by copying pixel values in the first reference frame.
Based on the second unidirectional motion information of the current block, the second reference block corresponding to the current block can be determined from the second reference frame analogously: with the current block of size M*M and the second reference block of size N*N (N may be greater than M, e.g., M is 4 and N is 6), the second reference block is divided into a central region (the M*M region centered at its center point) and an edge region (the rest). The second original prediction value of the central region is obtained by interpolating pixel values in the second reference frame, and the second original prediction value of the edge region is obtained by copying pixel values in the second reference frame.
Exemplarily, determining the first and second original prediction values may alternatively include, but is not limited to: determining the first reference block from the first reference frame based on the first unidirectional motion information of the current block, and determining the first original prediction value of the first reference block, the first original prediction value being obtained by interpolating pixel values in the first reference frame; and determining the second reference block from the second reference frame based on the second unidirectional motion information of the current block, and determining the second original prediction value of the second reference block, the second original prediction value being obtained by interpolating pixel values in the second reference frame. For example, if the size of the current block is M*M, the size of the first reference block may be M*M, and likewise the size of the second reference block may be M*M.
Step 202: determine the horizontal-direction velocity according to the first original prediction value and the second original prediction value.
Exemplarily, the horizontal-direction velocity refers to the horizontal-direction (X-direction) velocity of the sub-block on the reference frame corresponding to the current block (i.e., the sub-block in the reference frame that corresponds to the current block), or the horizontal-direction (X-direction) velocity of the sub-block on the reference frame corresponding to a sub-block of the current block.
Exemplarily, determining the horizontal-direction velocity according to the first and second original prediction values includes, but is not limited to:
Manner 1: determine the autocorrelation coefficient S1 of the horizontal-direction gradient sum and the cross-correlation coefficient S3 of the temporal prediction value difference and the horizontal-direction gradient sum according to the first and second original prediction values; then determine the horizontal-direction velocity according to the autocorrelation coefficient S1, the velocity threshold, the cross-correlation coefficient S3, the first amplification factor, and the second amplification factor.
Manner 2: if a first preset condition is met, determine the cross-correlation coefficient S2 of the horizontal-direction gradient sum and the vertical-direction gradient sum and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical-direction gradient sum according to the first and second original prediction values, and determine the horizontal-direction velocity according to S2, the velocity threshold, S6, the first amplification factor, and the second amplification factor. If the first preset condition is not met, determine S1 and S3 according to the first and second original prediction values, and determine the horizontal-direction velocity according to S1, the velocity threshold, S3, the first amplification factor, and the second amplification factor.
Exemplarily, the first preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5 of the vertical-direction gradient sum.
Manner 3: if a second preset condition is met, determine S1, S2, S3, S5, and S6 according to the first and second original prediction values, and determine the horizontal-direction velocity according to S1, S2, S3, S5, S6, the velocity threshold, the first amplification factor, and the second amplification factor. If the second preset condition is not met, determine S1 and S3 according to the first and second original prediction values, and determine the horizontal-direction velocity according to S1, the velocity threshold, S3, the first amplification factor, and the second amplification factor.
Exemplarily, the second preset condition is determined based on the cross-correlation coefficient S2 and the autocorrelation coefficient S5.
Step 203: determine the vertical-direction velocity according to the first original prediction value and the second original prediction value.
Exemplarily, the vertical-direction velocity refers to the vertical-direction (Y-direction) velocity of the sub-block on the reference frame corresponding to the current block (i.e., the sub-block in the reference frame that corresponds to the current block), or the vertical-direction (Y-direction) velocity of the sub-block on the reference frame corresponding to a sub-block of the current block.
Exemplarily, determining the vertical-direction velocity according to the first and second original prediction values includes, but is not limited to:
Manner 1: determine the cross-correlation coefficient S2 of the horizontal-direction gradient sum and the vertical-direction gradient sum, the autocorrelation coefficient S5 of the vertical-direction gradient sum, and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical-direction gradient sum according to the first and second original prediction values; then the vertical-direction velocity may be determined according to S2, S5, S6, the velocity threshold, the horizontal-direction velocity, the first amplification factor, and the second amplification factor.
Manner 2: obtain an untruncated horizontal-direction velocity (i.e., one without truncation processing) according to the first and second original prediction values, and determine the vertical-direction velocity according to the untruncated horizontal-direction velocity.
Exemplarily, S1, S2, S3, S5, and S6 may be determined according to the first and second original prediction values; then the untruncated horizontal-direction velocity is determined according to S1, S3, the first amplification factor, and the second amplification factor, and the vertical-direction velocity is determined according to S2, S5, S6, the velocity threshold, the untruncated horizontal-direction velocity, the first amplification factor, and the second amplification factor.
Manner 3: when a third preset condition is met, determine the autocorrelation coefficient S5 of the vertical-direction gradient sum and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical-direction gradient sum according to the first and second original prediction values; then determine the vertical-direction velocity according to S5, S6, the velocity threshold, the first amplification factor, and the second amplification factor. When the third preset condition is not met, determine S2, S5, and S6 according to the first and second original prediction values; then determine the vertical-direction velocity according to S2, S5, S6, the velocity threshold, the horizontal-direction velocity, the first amplification factor, and the second amplification factor.
Exemplarily, the third preset condition may be determined based on the horizontal-direction velocity.
In steps 202 and 203, the horizontal-direction gradient sum, the vertical-direction gradient sum, and the temporal prediction value difference may be determined according to the first original prediction value, the second original prediction value, the first amplification factor, and the second amplification factor; then, according to the horizontal-direction gradient sum, the vertical-direction gradient sum, and the temporal prediction value difference, the autocorrelation coefficient S1 of the horizontal-direction gradient sum, the cross-correlation coefficient S2 of the horizontal-direction gradient sum and the vertical-direction gradient sum, the cross-correlation coefficient S3 of the temporal prediction value difference and the horizontal-direction gradient sum, the autocorrelation coefficient S5 of the vertical-direction gradient sum, and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical-direction gradient sum are determined.
Exemplarily, the cross-correlation coefficient S2 may lie between a first cross-correlation coefficient threshold and a second cross-correlation coefficient threshold, and the cross-correlation coefficient S6 may lie between a third cross-correlation coefficient threshold and a fourth cross-correlation coefficient threshold.
For example, if the cross-correlation coefficient S2 obtained from the first original prediction value, the second original prediction value, the first amplification factor, and the second amplification factor is smaller than the first cross-correlation coefficient threshold, S2 is updated to the first cross-correlation coefficient threshold; if it is greater than the second cross-correlation coefficient threshold, S2 is updated to the second cross-correlation coefficient threshold; if it is greater than or equal to the first threshold and less than or equal to the second threshold, S2 is kept unchanged.
Likewise, if the cross-correlation coefficient S6 obtained from the first original prediction value, the second original prediction value, the first amplification factor, and the second amplification factor is smaller than the third cross-correlation coefficient threshold, S6 is updated to the third cross-correlation coefficient threshold; if it is greater than the fourth cross-correlation coefficient threshold, S6 is updated to the fourth cross-correlation coefficient threshold; if it is greater than or equal to the third threshold and less than or equal to the fourth threshold, S6 is kept unchanged.
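The update rule above is an ordinary clamp. A minimal sketch, with arbitrary placeholder threshold values of my own choosing (the text does not give concrete values for the cross-correlation coefficient thresholds):

```python
# Sketch of the clamping described above: a coefficient below the lower
# threshold is raised to it, one above the upper threshold is lowered to it,
# and one already inside the range is kept unchanged.

def clamp(value, lower, upper):
    """Clamp value into [lower, upper]."""
    if value < lower:
        return lower
    if value > upper:
        return upper
    return value

S2 = clamp(-5000, -1024, 1023)  # below the lower threshold -> raised
S6 = clamp(120, -1024, 1023)    # inside the range -> unchanged
```

The same `clamp` applies to S2 (against the first/second thresholds) and to S6 (against the third/fourth thresholds).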
Exemplarily, the first amplification factor may be the smaller of 5 and (BD-7), or the larger of 1 and (BD-11). Of course, these are only examples of the first amplification factor; it is not limited thereto and may be configured empirically.
Exemplarily, the second amplification factor may be the smaller of 8 and (BD-4), or the larger of 4 and (BD-8). Of course, these are only examples of the second amplification factor; it is not limited thereto and may be configured empirically.
Exemplarily, the velocity threshold may be 2 to the power M, where M is the difference between 13 and BD, or the larger of 5 and (BD-7). Of course, these are only examples of the velocity threshold; it is not limited thereto and may be configured empirically.
Exemplarily, BD (bit depth) is the bit width required for each chroma or luma pixel value.
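The bit-depth-dependent parameters above can be sketched as follows. The text gives two stated variants for each parameter; both are computed here side by side, and which variant applies is a choice of the particular scheme, so treat this as illustrative only (the function name is my own):

```python
# Sketch of the BD-dependent parameters listed above. Variant "a" of the
# velocity threshold, 2^(13-BD), assumes BD <= 13 so the exponent is
# non-negative.

def bdof_params(bd):
    first_amp_a = min(5, bd - 7)     # smaller of 5 and (BD-7)
    first_amp_b = max(1, bd - 11)    # larger of 1 and (BD-11)
    second_amp_a = min(8, bd - 4)    # smaller of 8 and (BD-4)
    second_amp_b = max(4, bd - 8)    # larger of 4 and (BD-8)
    thr_a = 1 << (13 - bd)           # velocity threshold 2^(13-BD)
    thr_b = 1 << max(5, bd - 7)      # velocity threshold 2^max(5, BD-7)
    return first_amp_a, first_amp_b, second_amp_a, second_amp_b, thr_a, thr_b

print(bdof_params(10))  # parameters for 10-bit content
```

For common 8- and 10-bit content the two variants of each amplification factor differ, which is consistent with the text presenting them as alternatives rather than equivalents.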
Step 204: obtain the prediction compensation value according to the horizontal-direction velocity and the vertical-direction velocity.
Exemplarily, obtaining the prediction compensation value according to the horizontal- and vertical-direction velocities may include, but is not limited to: determining the horizontal-direction gradient and the vertical-direction gradient according to the first original prediction value, the second original prediction value, and the gradient right-shift amount, and obtaining the prediction compensation value according to the horizontal-direction velocity, the vertical-direction velocity, the horizontal-direction gradient, and the vertical-direction gradient.
Exemplarily, the gradient right-shift amount may be the larger of 2 and (14-BD), or the larger of 6 and (BD-6). Of course, these are only examples of the gradient right-shift amount; it is not limited thereto and may be configured empirically.
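A hedged sketch of gradient computation with a right shift, in the spirit of step 204. The 2-pixel central difference used here and the way the shift is applied are my own illustrative assumptions; the text only states that the gradients are determined from the prediction values and a gradient right-shift amount:

```python
# Illustrative sketch: horizontal and vertical gradients of a 2-D
# prediction block via central differences, with each difference
# right-shifted by `shift`. Border positions are left at zero.

def gradients(pred, shift):
    """Return (gx, gy) gradient arrays for a 2-D list of prediction values."""
    h, w = len(pred), len(pred[0])
    gx = [[0] * w for _ in range(h)]
    gy = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 < x < w - 1:
                gx[y][x] = (pred[y][x + 1] - pred[y][x - 1]) >> shift
            if 0 < y < h - 1:
                gy[y][x] = (pred[y + 1][x] - pred[y - 1][x]) >> shift
    return gx, gy
```

On a block whose values increase only from left to right, the vertical gradient is zero everywhere, which is a quick sanity check.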
Step 205: obtain the target prediction value according to the first original prediction value, the second original prediction value, and the prediction compensation value.
Exemplarily, if the above method is used to obtain the target prediction value of the current block: if the characteristic information of the current block meets the specific condition, determine the first original prediction value corresponding to the current block according to the first unidirectional motion information of the current block, and determine the second original prediction value corresponding to the current block according to the second unidirectional motion information of the current block. Determine the horizontal-direction velocity corresponding to the current block according to the first and second original prediction values, and determine the vertical-direction velocity corresponding to the current block according to the first and second original prediction values. Then obtain the prediction compensation value corresponding to the current block according to the horizontal- and vertical-direction velocities, and obtain the target prediction value corresponding to the current block according to the first original prediction value, the second original prediction value, and the prediction compensation value. The target prediction value corresponding to the current block is thereby successfully obtained.
Exemplarily, if the current block is divided into at least one sub-block and the above method is used to obtain the target prediction value of each sub-block of the current block: if the characteristic information of the current block meets the specific condition, then for each sub-block of the current block, determine the first original prediction value corresponding to the sub-block according to the first unidirectional motion information of the sub-block (which is the same as the first unidirectional motion information of the current block), and determine the second original prediction value corresponding to the sub-block according to the second unidirectional motion information of the sub-block (which is the same as the second unidirectional motion information of the current block). Determine the horizontal-direction velocity and the vertical-direction velocity corresponding to the sub-block according to the first and second original prediction values. Then obtain the prediction compensation value corresponding to the sub-block according to the horizontal- and vertical-direction velocities, and obtain the target prediction value corresponding to the sub-block according to the first original prediction value, the second original prediction value, and the prediction compensation value.
The target prediction value corresponding to each sub-block is thereby successfully obtained; after the target prediction value of every sub-block of the current block has been obtained, the target prediction value of the current block has in effect been obtained.
As can be seen from the above technical solutions, in the embodiments of the present application, the first original prediction value can be determined according to the first unidirectional motion information of the current block, and the second original prediction value can be determined according to the second unidirectional motion information of the current block; the horizontal- and vertical-direction velocities are determined according to the first and second original prediction values, the prediction compensation value is obtained according to the horizontal- and vertical-direction velocities, and the target prediction value is obtained according to the prediction compensation value. This approach can obtain the target prediction value of the current block or of a sub-block of the current block based on the optical flow method, thereby improving hardware-implementation friendliness and bringing an improvement in coding performance.
Embodiment 2: The embodiments of the present application propose an encoding and decoding method that may be applied to the decoder side or the encoder side; see FIG. 3 for a schematic flowchart of the method. Exemplarily, if the characteristic information of the current block meets the specific condition, the following steps may be performed for each sub-block of the current block to obtain the target prediction value of each sub-block of the current block.
Step 301: if the characteristic information of the current block meets the specific condition, determine the first original prediction value of the sub-block according to the first unidirectional motion information of the current block (i.e., the first unidirectional motion information of the sub-block of the current block), and determine the second original prediction value of the sub-block according to the second unidirectional motion information of the current block (i.e., the second unidirectional motion information of the sub-block of the current block).
Exemplarily, if the current block is a bidirectional block (i.e., a block using bidirectional prediction), the bidirectional motion information corresponding to the current block can be obtained, with no limitation on the manner of obtaining it. This bidirectional motion information includes motion information in two different directions, referred to as the first unidirectional motion information (e.g., a first motion vector and a first reference frame index) and the second unidirectional motion information (e.g., a second motion vector and a second reference frame index). The first reference frame (e.g., reference frame 0) can be determined based on the first unidirectional motion information and is located before the current frame in which the current block is located; the second reference frame (e.g., reference frame 1) can be determined based on the second unidirectional motion information and is located after the current frame in which the current block is located.
Exemplarily, for each sub-block of the current block, the first unidirectional motion information of the sub-block is the same as the first unidirectional motion information of the current block, and the second unidirectional motion information of the sub-block is the same as the second unidirectional motion information of the current block.
Exemplarily, the characteristic information may include one or more of the following: a motion information attribute; a prediction mode attribute; size information; sequence-level switch control information. Regarding the characteristic information meeting the specific condition, see step 201, which is not repeated here.
Exemplarily, determining the first original prediction value of the sub-block according to the first unidirectional motion information of the current block and the second original prediction value of the sub-block according to the second unidirectional motion information of the current block may include: based on the first unidirectional motion information of the current block, determining from the first reference frame the first reference block corresponding to the sub-block of the current block, and determining the first original prediction value I^(0)(x, y) of the first reference block; and based on the second unidirectional motion information of the current block, determining from the second reference frame the second reference block corresponding to the sub-block of the current block, and determining the second original prediction value I^(1)(x, y) of the second reference block. For the manner of determining the first and second original prediction values, see step 201, which is not repeated here.
Step 302: determine the horizontal-direction gradient sum, the vertical-direction gradient sum, and the temporal prediction value difference according to the first and second original prediction values of the sub-block. For example, determine them according to the first original prediction value of the sub-block, the second original prediction value of the sub-block, the first amplification factor, and the second amplification factor.
Step 303: determine, from the horizontal-direction gradient sum, the vertical-direction gradient sum, and the temporal prediction value difference: the autocorrelation coefficient S1 of the horizontal-direction gradient sum (hereinafter autocorrelation coefficient S1), the cross-correlation coefficient S2 of the horizontal-direction gradient sum and the vertical-direction gradient sum (hereinafter cross-correlation coefficient S2), the cross-correlation coefficient S3 of the temporal prediction value difference and the horizontal-direction gradient sum (hereinafter cross-correlation coefficient S3), the autocorrelation coefficient S5 of the vertical-direction gradient sum (hereinafter autocorrelation coefficient S5), and the cross-correlation coefficient S6 of the temporal prediction value difference and the vertical-direction gradient sum (hereinafter cross-correlation coefficient S6).
Step 304: determine the horizontal-direction velocity of the sub-block on the reference frame corresponding to the sub-block of the current block according to one or more of the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6.
Step 305: determine the vertical-direction velocity of the sub-block on the reference frame corresponding to the sub-block of the current block according to one or more of the autocorrelation coefficient S1, the cross-correlation coefficient S2, the cross-correlation coefficient S3, the autocorrelation coefficient S5, and the cross-correlation coefficient S6.
Step 306: obtain the prediction compensation value of the sub-block of the current block according to the horizontal-direction velocity and the vertical-direction velocity.
Step 307: obtain the target prediction value of the sub-block of the current block according to the first original prediction value of the sub-block, the second original prediction value of the sub-block, and the prediction compensation value of the sub-block.
Exemplarily, for the flow of steps 301-307, see Embodiment 1, which is not repeated here.
Embodiment 3: The encoder/decoder side needs to judge whether the characteristic information of the current block meets the specific condition. If so, the technical solution of the embodiments of the present application is adopted to obtain the target prediction value of the current block or of a sub-block of the current block; this technical solution may also be called the bidirectional optical flow mode. If not, the target-prediction-value acquisition approach proposed by the present application does not need to be adopted.
It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least all of the following conditions simultaneously:
The current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; that is, one reference frame corresponding to the current block is located before the current frame and the other is located after the current frame;
The size information of the current block (such as the width value, height value, and area value) is within the limited range.
Embodiment 4: It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least all of the following conditions simultaneously:
The current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; that is, one reference frame corresponding to the current block is located before the current frame and the other is located after the current frame;
The size information of the current block (such as the width value, height value, and area value) is within the limited range;
The current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is all the same; that is, the motion information of every sub-block of the current block may be exactly the same, i.e., the current block does not use the sub-block motion information mode.
Exemplarily, the current block not using the sub-block motion information mode may include: the current block does not use the Affine mode or the SBTMVP mode. The Affine mode is a mode that uses an affine motion model, and the SBTMVP (subblock-based temporal motion vector prediction) mode is a mode that obtains the motion information of the whole block in the temporal domain. When the current block uses the Affine mode or the SBTMVP mode, the motion information of the individual sub-blocks inside the current block is, with high probability, not exactly the same; therefore, the current block may refrain from using the Affine mode or the SBTMVP mode.
Embodiment 5: It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least all of the following conditions simultaneously:
The current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; that is, one reference frame corresponding to the current block is located before the current frame and the other is located after the current frame;
The size information of the current block (such as the width value, height value, and area value) is within the limited range;
The current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is all the same; that is, the motion information of every sub-block of the current block may be exactly the same, i.e., the current block does not use the sub-block motion information mode;
The current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same.
Embodiment 6: It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least all of the following conditions simultaneously:
The current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; that is, one reference frame corresponding to the current block is located before the current frame and the other is located after the current frame;
The size information of the current block (such as the width value, height value, and area value) is within the limited range;
The current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is all the same; that is, the motion information of every sub-block of the current block may be exactly the same, i.e., the current block does not use the sub-block motion information mode;
The current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same;
The current block does not use the CIIP mode (the fusion mode based on combined intra-inter prediction, combine intra inter prediction mode).
Embodiment 7: It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least all of the following conditions simultaneously:
The current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; that is, one reference frame corresponding to the current block is located before the current frame and the other is located after the current frame;
The size information of the current block (such as the width value, height value, and area value) is within the limited range;
The current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is all the same; that is, the motion information of every sub-block of the current block may be exactly the same, i.e., the current block does not use the sub-block motion information mode;
The current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same;
The current block does not use the SMVD (Symmetric Motion Vector Difference) mode. In the SMVD mode, the two MVDs in the bidirectional motion information are symmetric; that is, only one of the motion vector differences (MVD) needs to be encoded, and the other motion vector difference is the negative of that MVD.
Embodiment 8: It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least all of the following conditions simultaneously:
The sequence-level switch control information allows the current block to adopt the bidirectional optical flow mode; that is, the sequence level allows the bidirectional optical flow mode to be enabled, i.e., the sequence-level control switch is on, indicating that the current block is allowed to adopt the bidirectional optical flow mode;
The current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions; that is, one reference frame corresponding to the current block is located before the current frame and the other is located after the current frame;
The size information of the current block (such as the width value, height value, and area value) is within the limited range;
The current block includes multiple sub-blocks, and the motion information of the multiple sub-blocks is all the same; that is, the motion information of every sub-block of the current block may be exactly the same, i.e., the current block does not use the sub-block motion information mode;
The current block uses bidirectional prediction, and the weighting weights of the two reference frames corresponding to the current block are the same.
Embodiment 9: The condition "the current block uses bidirectional prediction, and the two reference frames corresponding to the current block come from different directions" in any one of Embodiments 3 to 8 above is modified to "the current block uses bidirectional prediction, the two reference frames corresponding to the current block come from different directions, and the distances between the two reference frames corresponding to the current block and the current frame are the same".
For example, if the display order number of the current frame is POC and the display order numbers of the two reference frames corresponding to the current block are POC0 and POC1, then the two reference frames coming from different directions is equivalent to (POC-POC0)*(POC-POC1)<0, and the two reference frames being at the same distance from the current frame is equivalent to the value of (POC-POC0) being equal to the value of (POC1-POC).
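The two POC conditions above can be checked directly. A minimal sketch (the function name is my own; the two expressions are exactly the equivalences just stated):

```python
# Sketch of the reference-frame condition: the two reference frames come
# from different directions iff (POC - POC0) * (POC - POC1) < 0, and they
# are equidistant from the current frame iff POC - POC0 == POC1 - POC.

def ref_frames_ok(poc, poc0, poc1):
    different_directions = (poc - poc0) * (poc - poc1) < 0
    same_distance = (poc - poc0) == (poc1 - poc)
    return different_directions and same_distance

print(ref_frames_ok(8, 4, 12))  # one before, one after, equidistant
print(ref_frames_ok(8, 4, 16))  # different distances
print(ref_frames_ok(8, 2, 4))   # both reference frames before the current frame
```

Note that equidistance alone does not imply different directions (both checks are needed), which is why the modified condition in this embodiment states both.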
Embodiment 10: It is determined that the characteristic information of the current block meets the specific condition when the characteristic information meets at least the following condition:
The sequence-level switch control information allows the current block to adopt the bidirectional optical flow mode; that is, the sequence level allows the bidirectional optical flow mode to be enabled, i.e., the sequence-level control switch is on, indicating that the current block is allowed to adopt the bidirectional optical flow mode.
实施例11:当特征信息至少满足下面条件时,确定当前块的特征信息满足特定条件。
当前块采用双向预测,且当前块对应的两个参考帧的预测值的差异值可以小于预设阈值TH_SAD。示例性的,可以采用如下方式获取当前块对应的两个参考帧的预测值的差异值:
方式一、根据当前块的第一单向运动信息从第一参考帧中获取与当前块的子块对应的第一预测块,并根据当前块的第二单向运动信息从第二参考帧中获取与当前块的子块对应的第二预测块;根据第一预测块的预测值与第二预测块的预测值的SAD(Sum of Absolute Difference,绝对差值和),获取第一参考帧与第二参考帧的预测值的差异值。
在方式一中，预测值的差异值为第一预测块的预测值（后续记为pred0）与第二预测块的预测值（后续记为pred1）的SAD，即pred0和pred1中所有像素的SAD。例如，可以通过如下公式确定第一参考帧与第二参考帧的预测值的差异值，在该公式中，$pred_0(i,j)$为pred0第i列、第j行的预测值，$pred_1(i,j)$为pred1第i列、第j行的预测值，n为像素总数，abs(x)表示x的绝对值，H表示高度值，W表示宽度值。

$$SAD=\sum_{j=0}^{H-1}\sum_{i=0}^{W-1}\mathrm{abs}\left(pred_0(i,j)-pred_1(i,j)\right)$$
方式二、根据当前块的第一单向运动信息从第一参考帧中获取与当前块的子块对应的第一预测块,并根据当前块的第二单向运动信息从第二参考帧中获取与当前块的子块对应的第二预测块;根据第一预测块的下采样后的预测值(即对第一预测块的预测值进行下采样后,得到的预测值)与第二预测块的下采样后的预测值(即对第二预测块的预测值进行下采样后,得到的预测值)的SAD,获取第一参考帧与第二参考帧的预测值的差异值。
在方式二中，预测值的差异值为第一预测块的下采样N倍后的预测值（后续记为pred0）与第二预测块的下采样N倍后的预测值（后续记为pred1）的SAD。例如，可以通过如下公式确定第一参考帧与第二参考帧的预测值的差异值，在该公式中，$pred_0(i,j)$为pred0第i列、第j行的预测值，$pred_1(i,j)$为pred1第i列、第j行的预测值，n为像素总数，abs(x)表示x的绝对值，H表示高度值，W表示宽度值，N为正整数，优选为2，求和遍历下采样后的全部像素。

$$SAD=\sum_{j=0}^{H/N-1}\sum_{i=0}^{W-1}\mathrm{abs}\left(pred_0(i,j)-pred_1(i,j)\right)$$
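方式一与方式二的SAD计算可以用如下Python片段示意（函数名为假设；下采样以按行抽取近似表示，实际的下采样方式以上文描述为准）：

```python
def sad(pred0, pred1):
    # 方式一：pred0与pred1中所有像素的绝对差值和
    H, W = len(pred0), len(pred0[0])
    return sum(abs(pred0[j][i] - pred1[j][i])
               for j in range(H) for i in range(W))

def sad_downsampled(pred0, pred1, N=2):
    # 方式二：对预测块按行下采样N倍后再求SAD，降低判决的计算量
    W = len(pred0[0])
    return sum(abs(pred0[j][i] - pred1[j][i])
               for j in range(0, len(pred0), N) for i in range(W))
```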
实施例12:在上述实施例中,当前块的尺寸信息在限定范围内,可为如下情况的一种:
情况一、当前块的宽度值在第一区间[Wmin,Wmax]的范围内;当前块的高度值位于第二区间[Hmin,Hmax]的范围内;Wmin、Wmax、Hmin、Hmax均为2的正整数次幂;例如,Wmin为8,Wmax为128,Hmin为8,Hmax为128。
当前块的面积值位于第三区间[Smin,Smax]的范围内;Smin、Smax均为2的正整数次幂;例如,Smin为64,Smax为128*128=16384。
在上述实施例中,[a,b]表示大于等于a,且小于等于b。
情况二、当前块的宽度值在第一区间[Wmin,Wmax]的范围内;Wmin、Wmax均为2的正整数次幂;例如,Wmin为8,Wmax为128。
情况三、当前块的高度值位于第二区间[Hmin,Hmax]的范围内;Hmin、Hmax均为2的正整数次幂;例如,Hmin为8,Hmax为128。
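情况一的尺寸判断可以写成如下示意性的Python片段（阈值取上文示例值Wmin=Hmin=8、Wmax=Hmax=128、Smin=64、Smax=16384，函数名为假设）：

```python
def size_in_range(w, h,
                  wmin=8, wmax=128,
                  hmin=8, hmax=128,
                  smin=64, smax=128 * 128):
    # 宽度、高度、面积分别落在[Wmin,Wmax]、[Hmin,Hmax]、[Smin,Smax]内
    return (wmin <= w <= wmax) and (hmin <= h <= hmax) and (smin <= w * h <= smax)
```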
示例性的,若满足如下任何一种条件,则当前块对应的两个参考帧的加权权重相同:
条件1:当前块不采用允许不同加权权重的方法。
条件2:当前块允许采用不同加权权重的方法(如块级加权预测方法BCW(Bi-prediction with CU based weighting)启用),且当前块的两个加权权重完全一样。
条件3:当前块所在当前帧不采用不同加权权重的方法。
条件4:当前块所在当前帧允许采用不同加权权重的方法(如帧级加权预测方法启用),且当前帧的两个加权权重完全一样。
实施例13:编码端/解码端需要根据当前块的第一单向运动信息确定子块的第一原始预测值,并根据当前块的第二单向运动信息确定子块的第二原始预测值。例如,基于当前块的第一单向运动信息,从第一参考帧中确定与当前块的子块对应的第一参考块,并确定所述第一参考块的第一原始预测值I (0)(x,y);基于当前块的第二单向运动信息,从第二参考帧中确定与当前块的子块对应的第二参考块,并确定所述第二参考块的第二原始预测值I (1)(x,y)。
例如,基于当前块的第一单向运动信息,从第一参考帧中确定第一参考块,并确定第一参考块的第一原始预测 值I (0)(x,y);第一参考块的中心区域的第一原始预测值I (0)(x,y)是通过对第一参考帧中的像素值进行插值得到,第一参考块的边缘区域的第一原始预测值I (0)(x,y)是通过对第一参考帧中的像素值进行拷贝得到。基于当前块的第二单向运动信息,从第二参考帧中确定第二参考块,并确定第二参考块的第二原始预测值I (1)(x,y);第二参考块的中心区域的第二原始预测值I (1)(x,y)是通过对第二参考帧中的像素值进行插值得到,第二参考块的边缘区域的第二原始预测值I (1)(x,y)是通过对第二参考帧中的像素值进行拷贝得到。
参见图4所示,假设子块的大小为4*4,第一参考块的大小为6*6,第一参考块的中心区域是指:以第一参考块的中心点为中心,大小为4*4的区域,第一参考块的中心区域的第一原始预测值是通过对第一参考帧中的像素值进行插值得到,对此不再赘述。第一参考块的边缘区域是指:第一参考块中除了中心区域之外的其它区域(即中心区域之外,上下左右各1行1列的区域),第一参考块的边缘区域的第一原始预测值是通过对第一参考帧中的像素值进行拷贝得到,在图4中,示出了将第一参考帧中的像素点的像素值,拷贝到第一参考块的边缘区域。当然,图4只是一个示例,还可以利用其它像素点的像素值进行拷贝。
显然,在上述方式中,针对第一参考块的边缘区域,可以通过第一参考帧中最近的整像素值拷贝获取,以避免额外的插值过程,间接地避免访问额外的参考像素。
关于第二参考块的中心区域的第二原始预测值、第二参考块的边缘区域的第二原始预测值,第二原始预测值的获取过程可以参见第一原始预测值的获取过程,在此不再重复赘述。
例如，基于当前块的第一单向运动信息，从第一参考帧中确定第一参考块，确定第一参考块的第一原始预测值，第一参考块的第一原始预测值均是通过对第一参考帧中的像素值进行插值得到。基于当前块的第二单向运动信息，从第二参考帧中确定第二参考块，确定第二参考块的第二原始预测值，第二参考块的第二原始预测值均是通过对第二参考帧中的像素值进行插值得到。例如，假设子块的大小为4*4，第一参考块的大小为4*4，第二参考块的大小为4*4，第一参考块的所有区域的第一原始预测值是通过对第一参考帧中的像素值进行插值得到，第二参考块的所有区域的第二原始预测值是通过对第二参考帧中的像素值进行插值得到。
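实施例13中"中心区域插值、边缘区域拷贝"的参考块构造可以用如下Python片段示意。为简化，这里以整像素直接取值代替中心区域的插值结果，边缘一圈用最近整像素的钳位拷贝扩展，仅用于说明6*6参考块的构造思路（函数名与参数均为假设）：

```python
def build_ref_block(ref, x0, y0, size=4):
    # ref: 参考帧像素二维数组; (x0, y0): 子块在参考帧中的左上角整像素位置
    # 返回 (size+2) x (size+2) 的参考块：中心 size x size 区域取对应像素
    # （实际实现中为插值结果），上下左右各扩1行1列用最近整像素拷贝，
    # 以避免额外的插值过程
    H, W = len(ref), len(ref[0])
    block = []
    for j in range(-1, size + 1):
        row = []
        for i in range(-1, size + 1):
            y = min(max(y0 + j, 0), H - 1)  # 越界时钳位到帧内最近像素
            x = min(max(x0 + i, 0), W - 1)
            row.append(ref[y][x])
        block.append(row)
    return block
```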
实施例14:在得到子块的第一原始预测值和子块的第二原始预测值后,编码端/解码端根据子块的第一原始预测值和第二原始预测值确定水平方向梯度和、垂直方向梯度和、时域预测值差值。例如,根据子块的第一原始预测值、子块的第二原始预测值、第一放大因子和第二放大因子,确定水平方向梯度和、垂直方向梯度和、时域预测值差值。例如,通过公式(1)确定水平方向梯度和,通过公式(2)确定垂直方向梯度和,通过公式(3)确定时域预测值差值:
$$\psi_x(i,j)=\left(\frac{\partial I^{(1)}}{\partial x}(i,j)+\frac{\partial I^{(0)}}{\partial x}(i,j)\right)\gg n_a \qquad (1)$$
$$\psi_y(i,j)=\left(\frac{\partial I^{(1)}}{\partial y}(i,j)+\frac{\partial I^{(0)}}{\partial y}(i,j)\right)\gg n_a \qquad (2)$$
$$\theta(i,j)=\left(I^{(1)}(i,j)\gg n_b\right)-\left(I^{(0)}(i,j)\gg n_b\right) \qquad (3)$$
示例性的，$\frac{\partial I^{(k)}}{\partial x}$表示水平方向梯度，$\frac{\partial I^{(k)}}{\partial y}$表示垂直方向梯度。公式(1)-公式(3)中的梯度$\frac{\partial I^{(k)}}{\partial x}(i,j)$与$\frac{\partial I^{(k)}}{\partial y}(i,j)$（k=0,1）可以通过公式(4)和公式(5)确定：
$$\frac{\partial I^{(k)}}{\partial x}(i,j)=\left(I^{(k)}(i+1,j)-I^{(k)}(i-1,j)\right)\gg shift1 \qquad (4)$$
$$\frac{\partial I^{(k)}}{\partial y}(i,j)=\left(I^{(k)}(i,j+1)-I^{(k)}(i,j-1)\right)\gg shift1 \qquad (5)$$
$\psi_x(i,j)$表示水平方向梯度和，$\psi_y(i,j)$表示垂直方向梯度和，$\theta(i,j)$表示时域预测值差值。
$I^{(0)}(x,y)$表示子块的第一原始预测值，$I^{(1)}(x,y)$表示子块的第二原始预测值。假设子块的大小为4*4，参见实施例13，$I^{(0)}(x,y)$是大小为4*4或6*6的第一参考块的第一原始预测值，$I^{(1)}(x,y)$是大小为4*4或6*6的第二参考块的第二原始预测值，后续以$I^{(0)}(x,y)$是大小为4*4的第一参考块的第一原始预测值、$I^{(1)}(x,y)$是大小为4*4的第二参考块的第二原始预测值为例。$I^{(k)}(i,j)$表示坐标(i,j)的像素值，如$I^{(0)}(i,j)$表示第一参考块中坐标(i,j)的像素值，对应子块的第一原始预测值，$I^{(1)}(i,j)$表示第二参考块中坐标(i,j)的像素值，对应子块的第二原始预测值。
$n_a$可以表示第一放大因子，第一放大因子$n_a$可以为5与(BD-7)中的较小值，或者，1与(BD-11)中的较大值。$n_b$可以表示第二放大因子，第二放大因子$n_b$可以为8与(BD-4)中的较小值，或者，4与(BD-8)中的较大值。shift1可以表示梯度右移位数，梯度右移位数shift1可以为2与(14-BD)中的较大值，或者，6与(BD-6)中的较大值。
$\gg$表示右移，如$\gg n_a$表示右移$n_a$位，即除以2的$n_a$次方；$\gg n_b$表示右移$n_b$位，即除以2的$n_b$次方；$\gg shift1$表示右移shift1位，即除以2的shift1次方。
BD（bit depth）为比特深度，即每个色度或亮度像素值所需的比特宽度。例如，BD可以为10或者8，通常情况下，BD为已知值。
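公式(1)-公式(5)的定点计算可以用如下Python片段示意（Python的>>对负数为算术右移，与此处定点运算一致；函数名为假设）：

```python
def grad_x(I, i, j, shift1):
    # 公式(4)：水平方向梯度
    return (I[j][i + 1] - I[j][i - 1]) >> shift1

def grad_y(I, i, j, shift1):
    # 公式(5)：垂直方向梯度
    return (I[j + 1][i] - I[j - 1][i]) >> shift1

def psi_theta(I0, I1, i, j, n_a, n_b, shift1):
    # 公式(1)(2)：水平/垂直方向梯度和；公式(3)：时域预测值差值
    psi_x = (grad_x(I1, i, j, shift1) + grad_x(I0, i, j, shift1)) >> n_a
    psi_y = (grad_y(I1, i, j, shift1) + grad_y(I0, i, j, shift1)) >> n_a
    theta = (I1[j][i] >> n_b) - (I0[j][i] >> n_b)
    return psi_x, psi_y, theta
```

例如，BD=10时，可取n_a=min(5,BD-7)=3，n_b=min(8,BD-4)=6，shift1=max(2,14-BD)=4。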
实施例15:在得到水平方向梯度和、垂直方向梯度和、时域预测值差值后,编码端/解码端还可以根据水平方向 梯度和、垂直方向梯度和、时域预测值差值,确定水平方向梯度和的自相关系数S1(后续称自相关系数S1)、水平方向梯度和与垂直方向梯度和的互相关系数S2(后续称互相关系数S2)、时域预测值差值与水平方向梯度和的互相关系数S3(后续称互相关系数S3)、垂直方向梯度和的自相关系数S5(后续称自相关系数S5)、时域预测值差值与垂直方向梯度和的互相关系数S6(后续称互相关系数S6)。例如,可以通过如下公式确定自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6。
S 1=∑ (i,j)∈Ωψ x(i,j)·ψ x(i,j)   (6)
S 2=∑ (i,j)∈Ωψ x(i,j)·ψ y(i,j)  (7)
S 3=∑ (i,j)∈Ωθ(i,j)·ψ x(i,j)   (8)
S 5=∑ (i,j)∈Ωψ y(i,j)·ψ y(i,j)   (9)
S 6=∑ (i,j)∈Ωθ(i,j)·ψ y(i,j)   (10)
ψ x(i,j)表示水平方向梯度和,ψ y(i,j)表示垂直方向梯度和,θ(i,j)表示时域预测值差值。
假设子块的大小为4*4,Ω表示4*4子块对应的窗口,或者,Ω表示4*4子块周围的6*6窗口。针对Ω中的每个坐标点(i,j),可以先通过上述实施例确定ψ x(i,j)、ψ y(i,j)和θ(i,j),然后,根据ψ x(i,j)、ψ y(i,j)和θ(i,j),确定S1、S2、S3、S5、S6。
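公式(6)-公式(10)在窗口Ω上的累加可以示意如下（samples为Ω内各坐标点的(ψ_x, ψ_y, θ)三元组，函数名为假设）：

```python
def correlations(samples):
    # 公式(6)-(10)：在窗口Ω上累加自相关/互相关系数
    S1 = S2 = S3 = S5 = S6 = 0
    for psi_x, psi_y, theta in samples:
        S1 += psi_x * psi_x   # (6) 水平方向梯度和的自相关
        S2 += psi_x * psi_y   # (7) 水平与垂直方向梯度和的互相关
        S3 += theta * psi_x   # (8) 时域差值与水平方向梯度和的互相关
        S5 += psi_y * psi_y   # (9) 垂直方向梯度和的自相关
        S6 += theta * psi_y   # (10) 时域差值与垂直方向梯度和的互相关
    return S1, S2, S3, S5, S6
```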
实施例16:在得到互相关系数S2和互相关系数S6之后,可以将互相关系数S2限制在第一互相关系数阈值与第二互相关系数阈值之间,例如,若互相关系数S2小于第一互相关系数阈值,则可以将互相关系数S2更新为第一互相关系数阈值,若互相关系数S2大于第二互相关系数阈值,则可以将互相关系数S2更新为第二互相关系数阈值。可以将互相关系数S6限制在第三互相关系数阈值与第四互相关系数阈值之间,例如,若互相关系数S6小于第三互相关系数阈值,则可以将互相关系数S6更新为第三互相关系数阈值,若互相关系数S6大于第四互相关系数阈值,则可以将互相关系数S6更新为第四互相关系数阈值;第一互相关系数阈值可以小于第二互相关系数阈值,第三互相关系数阈值可以小于第四互相关系数阈值。
示例性的,可以对互相关系数S2和互相关系数S6进行如下大小限制,从而防止中间结果溢出,即使得位宽不超过一定范围。-(1<<THS2)表示第一互相关系数阈值,1<<THS2表示第二互相关系数阈值,-(1<<THS6)表示第三互相关系数阈值,1<<THS6表示第四互相关系数阈值。例如,为了防止位宽超过32比特,则THS2可以为25,THS6可以为27。当然,上述数值只是一个示例,对此不做限制。
S 2=clip3(-(1<<THS2),(1<<THS2),S 2)   (11)
S 6=clip3(-(1<<THS6),(1<<THS6),S 6)   (12)
Clip3(a,b,x)表示,若x小于a,则Clip3(a,b,x)=a;若x大于b,则Clip3(a,b,x)=b;若x大于等于a且小于等于b,则Clip3(a,b,x)=x。因此,公式(11)表示,若S 2小于-(1<<THS2),则S 2为-(1<<THS2),若S 2大于(1<<THS2),则S 2为(1<<THS2),否则,S 2保持不变。同理,公式(12)表示,若S 6小于-(1<<THS6),则S 6为-(1<<THS6),若S 6大于(1<<THS6),则S 6为(1<<THS6),否则,S 6保持不变。<<表示左移。
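公式(11)(12)的限幅可以示意如下（THS2、THS6取上文防止位宽超过32比特的示例值25、27）：

```python
def clip3(a, b, x):
    # 若x<a取a，若x>b取b，否则取x本身
    return a if x < a else b if x > b else x

def clamp_s2_s6(S2, S6, THS2=25, THS6=27):
    # 公式(11)(12)：将S2、S6限制在阈值区间内，防止中间结果位宽溢出
    S2 = clip3(-(1 << THS2), 1 << THS2, S2)
    S6 = clip3(-(1 << THS6), 1 << THS6, S6)
    return S2, S6
```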
实施例17:在得到自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6后,可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的水平方向速率。可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的垂直方向速率。
例如,可以根据自相关系数S1、速率阈值、互相关系数S3、第一放大因子和第二放大因子确定水平方向速率,参见如下公式所示,为基于上述参数确定水平方向速率的示例。
$$v_x = S_1>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2 S_1\right\rfloor\right)\right)\ :\ 0 \qquad (13)$$

在上述公式中，若$S_1>0$成立，则$v_x=\mathrm{clip3}\left(-th'_{BIO},th'_{BIO},-\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_1\rfloor\right)\right)$；若$S_1>0$不成立，则$v_x=0$。$v_x$表示水平方向速率，$th'_{BIO}$表示速率阈值，用于将水平方向速率$v_x$限制在$-th'_{BIO}$与$th'_{BIO}$之间，即$v_x$大于或者等于$-th'_{BIO}$，且小于或者等于$th'_{BIO}$。速率阈值$th'_{BIO}$可以为2的M次方，M为13与BD的差值，或者，5与(BD-7)中的较大值。例如，$th'_{BIO}=2^{13-BD}$或者$th'_{BIO}=2^{\max(5,BD-7)}$，BD为比特深度。

$n_a$可以表示第一放大因子，第一放大因子$n_a$可以为5与(BD-7)中的较小值，或者，1与(BD-11)中的较大值。$n_b$可以表示第二放大因子，第二放大因子$n_b$可以为8与(BD-4)中的较小值，或者，4与(BD-8)中的较大值。$\gg$表示右移，$\lfloor\cdot\rfloor$为向下取整。

第一放大因子$n_a$和第二放大因子$n_b$均可以根据经验配置，第二放大因子$n_b$可以大于第一放大因子$n_a$。$n_a$和$n_b$用于放大水平方向速率$v_x$的取值区间，例如，假设$n_b-n_a$为3，则$2^{n_b-n_a}$为8，可以将$v_x$的取值区间放大8倍；假设$n_b-n_a$为4，则$2^{n_b-n_a}$为16，可以将$v_x$的取值区间放大16倍，以此类推。

Clip3(a,b,x)表示：若x小于a，则Clip3(a,b,x)=a；若x大于b，则Clip3(a,b,x)=b；若x大于等于a且小于等于b，则Clip3(a,b,x)=x。在上述公式中，$-th'_{BIO}$为a，$th'_{BIO}$为b，$-\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_1\rfloor\right)$为x。综上所述，若该值大于$-th'_{BIO}$且小于$th'_{BIO}$，则水平方向速率$v_x$即为$-\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_1\rfloor\right)$。
例如,可以根据互相关系数S2、自相关系数S5、互相关系数S6、速率阈值、水平方向速率、第一放大因子和第二放大因子确定垂直方向速率,参见如下公式所示。
$$v_y = S_5>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_x\cdot S_{2,m}\right)\ll n_{S_2}+v_x\cdot S_{2,s}\right)/2\right)\gg\left\lfloor\log_2 S_5\right\rfloor\right)\right)\ :\ 0 \qquad (14)$$

从上述公式中可以看出，若$S_5>0$成立，则$v_y$取公式(14)中clip3的结果；若$S_5>0$不成立，则$v_y=0$。

$v_y$表示垂直方向速率，$v_x$表示水平方向速率，$th'_{BIO}$表示速率阈值，用于将垂直方向速率限制在$-th'_{BIO}$与$th'_{BIO}$之间，即垂直方向速率大于或者等于$-th'_{BIO}$，且小于或者等于$th'_{BIO}$。速率阈值$th'_{BIO}$可以为2的M次方，M为13与BD的差值，或者，5与(BD-7)中的较大值。例如，$th'_{BIO}=2^{13-BD}$或者$th'_{BIO}=2^{\max(5,BD-7)}$，BD为比特深度。

$n_a$可以表示第一放大因子，第一放大因子$n_a$可以为5与(BD-7)中的较小值，或者，1与(BD-11)中的较大值。$n_b$可以表示第二放大因子，第二放大因子$n_b$可以为8与(BD-4)中的较小值，或者，4与(BD-8)中的较大值。$\gg$表示右移，$\ll$表示左移，$\lfloor\cdot\rfloor$为向下取整。

第一放大因子$n_a$和第二放大因子$n_b$均可以根据经验配置，第二放大因子$n_b$可以大于第一放大因子$n_a$。$n_a$和$n_b$用于放大垂直方向速率$v_y$的取值区间，例如，假设$n_b-n_a$为3，则$2^{n_b-n_a}$为8，可以将$v_y$的取值区间放大8倍；假设$n_b-n_a$为4，则$2^{n_b-n_a}$为16，可以将$v_y$的取值区间放大16倍，以此类推。

Clip3(a,b,x)表示：若x小于a，则Clip3(a,b,x)=a；若x大于b，则Clip3(a,b,x)=b；若x大于等于a且小于等于b，则Clip3(a,b,x)=x。在公式(14)中，$-th'_{BIO}$为a，$th'_{BIO}$为b，$-\left(\left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_x\cdot S_{2,m}\right)\ll n_{S_2}+v_x\cdot S_{2,s}\right)/2\right)\gg\lfloor\log_2 S_5\rfloor\right)$为x。综上所述，若该值大于$-th'_{BIO}$且小于$th'_{BIO}$，则垂直方向速率$v_y$即为该值。

示例性的，在上述公式中，$S_{2,m}=S_2\gg n_{S_2}$，$S_{2,s}=S_2\ \&\ \left(2^{n_{S_2}}-1\right)$。
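公式(13)(14)的定点计算可以用如下Python片段示意（n_S2的取值与负数除法的取整方式为示意性假设，实际实现以标准文本为准）：

```python
def clip3(a, b, x):
    return a if x < a else b if x > b else x

def floor_log2(x):
    # 计算 ⌊log2(x)⌋，要求 x > 0
    return x.bit_length() - 1

def bdof_velocity(S1, S2, S3, S5, S6, n_a, n_b, th_bio, n_s2=12):
    # 公式(13)：水平方向速率，S1>0时取clip3结果，否则为0
    vx = 0
    if S1 > 0:
        vx = clip3(-th_bio, th_bio, -((S3 << (n_b - n_a)) >> floor_log2(S1)))
    # 公式(14)：垂直方向速率，S2拆分为高位S2m与低位S2s以控制乘法位宽
    vy = 0
    if S5 > 0:
        s2m, s2s = S2 >> n_s2, S2 & ((1 << n_s2) - 1)
        num = (S6 << (n_b - n_a)) - (((vx * s2m) << n_s2) + vx * s2s) // 2
        vy = clip3(-th_bio, th_bio, -(num >> floor_log2(S5)))
    return vx, vy
```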
实施例18:在得到自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6后,可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的水平方向速率。可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的垂直方向速率。
例如,可以根据自相关系数S1、速率阈值、互相关系数S3、第一放大因子和第二放大因子确定水平方向速率,参见如下公式所示,为基于上述参数确定水平方向速率的示例。
$$v_x = S_1>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ \left(S_3\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2 S_1\right\rfloor\right)\ :\ 0 \qquad (15)$$
公式(15)与公式(13)相比，去除了$\left(S_3\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_1\rfloor$前面的负号，其它内容与公式(13)相同，各参数的含义也与公式(13)相同，在此不再重复赘述。
例如,可以根据互相关系数S2、自相关系数S5、互相关系数S6、速率阈值、水平方向速率、第一放大因子和第二放大因子确定垂直方向速率,参见如下公式所示。
$$v_y = S_5>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ \left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_x\cdot S_{2,m}\right)\ll n_{S_2}+v_x\cdot S_{2,s}\right)/2\right)\gg\left\lfloor\log_2 S_5\right\rfloor\right)\ :\ 0 \qquad (16)$$
公式(16)与公式(14)相比，去除了$\left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_x\cdot S_{2,m}\right)\ll n_{S_2}+v_x\cdot S_{2,s}\right)/2\right)\gg\lfloor\log_2 S_5\rfloor$前面的负号，其它内容与公式(14)相同，各参数的含义也与公式(14)相同，在此不再重复赘述。
实施例19:在得到自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6后,可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的水平方向速率。
例如,若满足第一预设条件,则可以根据互相关系数S2、速率阈值、互相关系数S6、第一放大因子和第二放大因子确定水平方向速率。若不满足第一预设条件,则可以根据自相关系数S1、速率阈值、互相关系数S3、第一放大因子和第二放大因子确定水平方向速率。第一预设条件基于互相关系数S2、自相关系数S5确定,参见如下公式所示:
当满足第一预设条件时，
$$v_x = S_2\neq 0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_6\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2\left|S_2\right|\right\rfloor\right)\right)\ :\ 0 \qquad (17)$$
当不满足第一预设条件时，
$$v_x = S_1>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2 S_1\right\rfloor\right)\right)\ :\ 0 \qquad (18)$$
上述第一预设条件可以包括：$|S_2|>k|S_5|$，|·|表示取绝对值，k为阈值，可任意设置，如设为8。公式(17)中的$S_2\neq 0\ ?$表示判断$S_2$是否不等于0。
当满足第一预设条件时，若$S_2\neq 0$成立，则$v_x$取公式(17)中clip3的结果；若$S_2\neq 0$不成立，则$v_x=0$。$v_x$表示水平方向速率，$th'_{BIO}$表示速率阈值，$n_a$表示第一放大因子，$n_b$表示第二放大因子，$\gg$表示右移，$\lfloor\cdot\rfloor$为向下取整。
Clip3(a,b,x)表示：若x小于a，则Clip3(a,b,x)=a；若x大于b，则Clip3(a,b,x)=b；若x大于等于a且小于等于b，则Clip3(a,b,x)=x。在公式(17)中，$-th'_{BIO}$为a，$th'_{BIO}$为b，$-\left(\left(S_6\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2|S_2|\rfloor\right)$为x。综上所述，若该值大于$-th'_{BIO}$且小于$th'_{BIO}$，则水平方向速率$v_x$即为该值。
当不满足第一预设条件时,公式(18)与公式(13)相同,在此不再重复赘述。
实施例20:在得到自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6后,可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的水平方向速率。
例如,若满足第二预设条件,则可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6、速率阈值、第一放大因子和第二放大因子确定水平方向速率。若不满足第二预设条件,则可以根据自相关系数S1、速率阈值、互相关系数S3、第一放大因子和第二放大因子确定水平方向速率。第二预设条件基于互相关系数S2、自相关系数S5确定,参见如下公式所示:当满足第二预设条件时,则S tmp=S 1·S 5-S 2·S 2
$$v_x = S_{tmp}\neq 0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ \left(\left(S_2\cdot S_6-S_3\cdot S_5\right)\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2\left|S_{tmp}\right|\right\rfloor\right)\ :\ 0 \qquad (19)$$
当不满足第二预设条件时，则
$$v_x = S_1>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2 S_1\right\rfloor\right)\right)\ :\ 0 \qquad (20)$$
当满足第二预设条件时，先根据$S_1$、$S_2$、$S_5$确定$S_{tmp}=S_1\cdot S_5-S_2\cdot S_2$，具体确定方式参见上述公式；然后，若$S_{tmp}\neq 0$成立，则$v_x$取公式(19)中clip3的结果；若$S_{tmp}\neq 0$不成立，则$v_x=0$。$v_x$表示水平方向速率，$th'_{BIO}$表示速率阈值，$n_a$表示第一放大因子，$n_b$表示第二放大因子，$\gg$表示右移，$\lfloor\cdot\rfloor$为向下取整。
在上述公式中，Clip3(a,b,x)表示：若x小于a，则Clip3(a,b,x)=a；若x大于b，则Clip3(a,b,x)=b；若x大于等于a且小于等于b，则Clip3(a,b,x)=x。在公式(19)中，$-th'_{BIO}$为a，$th'_{BIO}$为b，$\left(\left(S_2\cdot S_6-S_3\cdot S_5\right)\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2|S_{tmp}|\rfloor$为x。综上所述，若该值大于$-th'_{BIO}$且小于$th'_{BIO}$，则水平方向速率$v_x$即为该值。
当不满足第二预设条件时,公式(20)与公式(13)相同,在此不再重复赘述。
上述第二预设条件可以包括但不限于:|S 2|>k|S 5|,|.|表示取绝对值,k为阈值,可任意设置,如设为8。当然,这个第二预设条件只是一个示例,对此不做限制。
实施例21:针对实施例19,当不满足第一预设条件时,水平方向速率的确定公式变为公式(15),针对实施例20,当不满足第二预设条件时,水平方向速率的确定公式变为公式(15)。
实施例22:在得到自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6后,可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的垂直方向速率。
例如,可以根据自相关系数S1、互相关系数S3、第一放大因子和第二放大因子确定未截断水平方向速率,并根据互相关系数S2、自相关系数S5、互相关系数S6、速率阈值、所述未截断水平方向速率、第一放大因子和第二放大因子确定垂直方向速率。
参见如下公式所示:
$$v_{x\_org} = S_1>0\ ?\ -\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2 S_1\right\rfloor\right)\ :\ 0 \qquad (21)$$
$$v_y = S_5>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_{x\_org}\cdot S_{2,m}\right)\ll n_{S_2}+v_{x\_org}\cdot S_{2,s}\right)/2\right)\gg\left\lfloor\log_2 S_5\right\rfloor\right)\right)\ :\ 0 \qquad (22)$$
从上述公式可以看出，若$S_1>0$成立，则$v_{x\_org}=-\left(\left(S_3\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_1\rfloor\right)$；若$S_1>0$不成立，则$v_{x\_org}=0$。与上述实施例中的$v_x$相比，本实施例中，并未使用速率阈值$th'_{BIO}$将水平方向速率限制在$-th'_{BIO}$与$th'_{BIO}$之间，因此，将$v_{x\_org}$称为未截断水平方向速率，即未进行截断处理，也就是说，这个未截断水平方向速率$v_{x\_org}$没有被限制在$-th'_{BIO}$与$th'_{BIO}$之间。
从上述公式中可以看出，若$S_5>0$成立，则$v_y$取公式(22)中clip3的结果；若$S_5>0$不成立，则$v_y=0$。
$v_y$表示垂直方向速率，$v_{x\_org}$表示未截断水平方向速率，$th'_{BIO}$表示速率阈值，$n_a$表示第一放大因子，$n_b$表示第二放大因子，$\gg$表示右移，$\ll$表示左移，$\lfloor\cdot\rfloor$为向下取整。Clip3(a,b,x)表示：若x小于a，则Clip3(a,b,x)=a；若x大于b，则Clip3(a,b,x)=b；若x大于等于a且小于等于b，则Clip3(a,b,x)=x。在公式(22)中，$-th'_{BIO}$为a，$th'_{BIO}$为b，$-\left(\left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_{x\_org}\cdot S_{2,m}\right)\ll n_{S_2}+v_{x\_org}\cdot S_{2,s}\right)/2\right)\gg\lfloor\log_2 S_5\rfloor\right)$为x。综上所述，若该值大于$-th'_{BIO}$且小于$th'_{BIO}$，则垂直方向速率$v_y$即为该值。
示例性的，在上述公式中，$S_{2,m}=S_2\gg n_{S_2}$，$S_{2,s}=S_2\ \&\ \left(2^{n_{S_2}}-1\right)$。
实施例23:在得到自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6后,可以根据自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6中的一个或者多个,确定当前块的子块在参考帧上对应子块的垂直方向速率。
例如,当满足第三预设条件时,根据自相关系数S5、互相关系数S6、速率阈值、第一放大因子和第二放大因子确定垂直方向速率。当不满足第三预设条件时,根据互相关系数S2、自相关系数S5、互相关系数S6、速率阈值、水平方向速率、第一放大因子和第二放大因子确定垂直方向速率。示例性的,第三预设条件可以基于所述水平方向速率确定。
参见如下公式所示:当满足第三预设条件时,则:
$$v_y = S_5>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_6\cdot 2^{n_b-n_a}\right)\gg\left\lfloor\log_2 S_5\right\rfloor\right)\right)\ :\ 0 \qquad (23)$$
当不满足第三预设条件时，则：
$$v_y = S_5>0\ ?\ \mathrm{clip3}\!\left(-th'_{BIO},\ th'_{BIO},\ -\left(\left(S_6\cdot 2^{n_b-n_a}-\left(\left(v_x\cdot S_{2,m}\right)\ll n_{S_2}+v_x\cdot S_{2,s}\right)/2\right)\gg\left\lfloor\log_2 S_5\right\rfloor\right)\right)\ :\ 0 \qquad (24)$$
示例性的，第三预设条件可以为：$v_x$为$-th'_{BIO}$或$th'_{BIO}$，即$v_x$为最小值或最大值。
当满足第三预设条件时，若$S_5>0$成立，则$v_y=\mathrm{clip3}\left(-th'_{BIO},th'_{BIO},-\left(\left(S_6\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_5\rfloor\right)\right)$；若$S_5>0$不成立，则$v_y=0$。示例性的，$v_y$表示垂直方向速率，$th'_{BIO}$表示速率阈值，$n_a$表示第一放大因子，$n_b$表示第二放大因子，$\gg$表示右移，$\lfloor\cdot\rfloor$为向下取整。Clip3(a,b,x)表示：若x小于a，则Clip3(a,b,x)=a；若x大于b，则Clip3(a,b,x)=b；若x大于等于a且小于等于b，则Clip3(a,b,x)=x。在公式(23)中，$-th'_{BIO}$为a，$th'_{BIO}$为b，$-\left(\left(S_6\cdot 2^{n_b-n_a}\right)\gg\lfloor\log_2 S_5\rfloor\right)$为x。综上所述，若该值大于$-th'_{BIO}$且小于$th'_{BIO}$，则垂直方向速率$v_y$即为该值。
当不满足第三预设条件时,公式(24)与公式(14)相同,在此不再重复赘述。
实施例24:针对实施例23,当不满足第三预设条件时,垂直方向速率的确定公式变更为公式(16)。
实施例25：在上述实施例中，第一放大因子$n_a$可以为5与(BD-7)中的较小值，即$n_a=\min(5,BD-7)$；或者，第一放大因子$n_a$可以为1与(BD-11)中的较大值，即$n_a=\max(1,BD-11)$。当然，上述只是第一放大因子的示例，对此不做限制，可以根据经验配置。第二放大因子$n_b$可以为8与(BD-4)中的较小值，即$n_b=\min(8,BD-4)$；或者，第二放大因子$n_b$可以为4与(BD-8)中的较大值，即$n_b=\max(4,BD-8)$。当然，上述只是第二放大因子的示例，对此不做限制，可以根据经验配置。速率阈值$th'_{BIO}$可以为2的M次方，M为13与BD的差值，或者，5与(BD-7)中的较大值，例如$th'_{BIO}=2^{13-BD}$或者$th'_{BIO}=2^{\max(5,BD-7)}$。当然，上述只是速率阈值的示例，对此不做限制，可以根据经验配置。
通过将第一放大因子$n_a$设置为1与(BD-11)中的较大值，可以减少梯度的自相关系数和互相关系数的右移位数，增加自相关系数和互相关系数的保存精度（所需位宽）。通过将第二放大因子$n_b$设置为4与(BD-8)中的较大值，同样可以减少右移位数、增加保存精度。通过将速率阈值$th'_{BIO}$设置为$2^{\max(5,BD-7)}$，可以增加水平方向速率$v_x$和垂直方向速率$v_y$的保存精度（所需位宽）。
实施例26:编码端/解码端得到水平方向速率v x和垂直方向速率v y后,可以根据水平方向速率v x和垂直方向速率v y获取当前块的子块的预测补偿值,例如,根据子块的第一原始预测值、子块的第二原始预测值和梯度右移位数,确定水平方向梯度和垂直方向梯度,并根据水平方向速率v x、垂直方向速率v y、水平方向梯度和所述垂直方向梯度,获取预测补偿值。
例如,编码端/解码端可以通过如下公式获取当前块的子块的预测补偿值b(x,y):
$$b(x,y)=\mathrm{rnd}\!\left(\left(v_x\cdot\left(\frac{\partial I^{(1)}}{\partial x}(x,y)-\frac{\partial I^{(0)}}{\partial x}(x,y)\right)\right)/2\right)+\mathrm{rnd}\!\left(\left(v_y\cdot\left(\frac{\partial I^{(1)}}{\partial y}(x,y)-\frac{\partial I^{(0)}}{\partial y}(x,y)\right)\right)/2\right) \qquad (25)$$
示例性的，$\frac{\partial I^{(k)}}{\partial x}$表示水平方向梯度，$\frac{\partial I^{(k)}}{\partial y}$表示垂直方向梯度（k=0,1），上述公式中的梯度还可以通过如下方式确定：
$$\frac{\partial I^{(k)}}{\partial x}(i,j)=\left(I^{(k)}(i+1,j)-I^{(k)}(i-1,j)\right)\gg shift1 \qquad (26)$$
$$\frac{\partial I^{(k)}}{\partial y}(i,j)=\left(I^{(k)}(i,j+1)-I^{(k)}(i,j-1)\right)\gg shift1 \qquad (27)$$
I (k)(i,j)表示坐标(i,j)的像素值,如I (0)(i,j)表示第一参考块中坐标(i,j)的像素值,对应子块的第一原始预测值,I (1)(i,j)表示第二参考块中坐标(i,j)的像素值,对应子块的第二原始预测值。例如,I (0)(x,y)表示子块的第一原始预测值,I (1)(x,y)表示子块的第二原始预测值。假设子块的大小为4*4,I (0)(x,y)是大小为4*4的第一参考块的第一原始预测值,或者大小为6*6的第一参考块的第一原始预测值,I (1)(x,y)是大小为4*4的第二参考块的第二原始预测值,或者大小为6*6的第二参考块的第二原始预测值,以I (0)(x,y)是大小为4*4的第一参考块的第一原始预测值,I (1)(x,y)是大小为4*4的第二参考块的第二原始预测值为例。
v x表示水平方向速率,v y表示垂直方向速率,>>表示右移,rnd为round,表示取整操作,shift1表示梯度右移位数,>>shift1表示右移shift1。梯度右移位数shift1可以为2与(14-BD)中的较大值,即shift1=max(2,14-BD),或者,shift1可以为6与(BD-6)中的较大值,即shift1=max(6,BD-6)。当然,上述只是梯度右移位数的示例,对此不做限制,可以根据经验配置。通过将shift1设置为6与(BD-6)中的较大值,可以增加梯度的右移位数,即减少梯度所需位宽(保存精度)。BD表示比特深度,即亮度值所需比特宽度,一般为10或8。
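预测补偿值的计算可以示意如下（两个方向的梯度差以参数形式传入，rnd以四舍五入示意，函数名为假设）：

```python
def pred_compensation(vx, vy, gx1, gx0, gy1, gy0):
    # b = rnd(vx*(∂I1/∂x - ∂I0/∂x)/2) + rnd(vy*(∂I1/∂y - ∂I0/∂y)/2)
    def rnd(v):
        # 四舍五入到最近整数（示意性实现）
        return int(v + 0.5) if v >= 0 else -int(-v + 0.5)
    return rnd(vx * (gx1 - gx0) / 2) + rnd(vy * (gy1 - gy0) / 2)
```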
实施例27：编码端/解码端可以根据当前块的子块的第一原始预测值、当前块的子块的第二原始预测值和当前块的子块的预测补偿值，获取当前块的子块的目标预测值。例如，编码端/解码端可以通过如下公式获取当前块的子块的目标预测值$pred_{BDOF}(x,y)$：
$$pred_{BDOF}(x,y)=\left(I^{(0)}(x,y)+I^{(1)}(x,y)+b(x,y)+o_{offset}\right)\gg shift \qquad (28)$$
在上述公式中，$I^{(0)}(x,y)$表示子块的第一原始预测值，$I^{(1)}(x,y)$表示子块的第二原始预测值，$b(x,y)$表示子块的预测补偿值，$\gg$表示右移，$\gg shift$表示右移shift位。
示例性的，$o_{offset}=2^{shift-1}+2^{14}$，$shift=15-BD$，BD表示比特深度。
当然,上述方式只是获取子块的目标预测值的示例,对此不做限制。
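目标预测值的加权合成可以示意如下（未包含最终的像素值范围裁剪，以BD取10为例）：

```python
def bdof_target(i0, i1, b, BD=10):
    # 目标预测值：pred = (I0 + I1 + b + o_offset) >> shift
    shift = 15 - BD
    o_offset = (1 << (shift - 1)) + (1 << 14)
    return (i0 + i1 + b + o_offset) >> shift
```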
实施例28:在当前块被划分为多个子块时,可以采用上述实施例,确定每个子块的目标预测值,即针对当前块的每个子块(如4*4大小的子块)进行预测信号的调整,从而得到每个子块的目标预测值。本实施例中,当满足目标条件时,还可以提前跳出某个子块的信号调整过程,也就是说,不再采用上述实施例的方式确定子块的目标预测值。
例如,当前块被划分为子块1、子块2、子块3和子块4,编码端/解码端先采用上述实施例的方式确定子块1的目标预测值,然后采用上述实施例的方式确定子块2的目标预测值,假设满足目标条件,则编码端/解码端不再采用上述实施例的方式确定子块3的目标预测值和子块4的目标预测值,即提前跳出子块3和子块4的信号调整过程。
示例性的,若当前块的两个子块的预测值的差异值均小于某个阈值TH_SUB_SAD时,可以确定满足目标条件,提前跳出剩余子块的信号调整过程。例如,若子块1的预测值的差异值小于阈值TH_SUB_SAD,且子块2的预测值的差异值小于阈值TH_SUB_SAD,则确定满足目标条件,提前跳出剩余子块(即子块3和子块4)的信号调整过程。
又例如,当前块被划分为多个子块,针对每个子块,编码端/解码端采用上述实施例的方式确定所述子块的目标预测值之前,判断是否满足目标条件。若满足目标条件,则不再采用上述实施例的方式确定所述子块的目标预测值,即提前跳出所述子块的信号调整过程。若不满足目标条件,则可以采用上述实施例的方式确定所述子块的目标预测值。
示例性的,若所述子块的预测值的差异值小于某个阈值TH_SUB_SAD时,可以确定满足目标条件,提前跳出所述子块的信号调整过程。例如,若子块1的预测值的差异值小于阈值TH_SUB_SAD,则确定满足目标条件,提前跳出子块1的信号调整过程。
示例性的,为了确定子块的预测值的差异值,可以采用如下方式实现:
方式一、预测值的差异值可以为第一预测块(即根据当前块的第一单向运动信息从第一参考帧中获取的与子块对应的第一预测块)的预测值(后续记为pred0)与第二预测块(即根据当前块的第二单向运动信息从第二参考帧中获取的与子块对应的第二预测块)的预测值(后续记为pred1)的SAD,即pred0和pred1中所有像素的SAD。例如,可以通过如下公式确定第一参考帧与第二参考帧的预测值的差异值,在该公式中,pred 0(i,j)为pred0的第i列,第j行的pred0预测值,pred 1(i,j)为pred1的第i列,第j行的pred1预测值,n为像素总数,abs(x)表示x的绝对值,H表示高度值,W表示宽度值。
$$SAD=\sum_{j=0}^{H-1}\sum_{i=0}^{W-1}\mathrm{abs}\left(pred_0(i,j)-pred_1(i,j)\right)$$
方式二、预测值的差异值还可以为第一预测块的下采样N倍后的预测值(记为pred0,即对第一预测块的预测值进行下采样后,得到的预测值)与第二预测块的下采样N倍后的预测值(记为pred1,即对第二预测块的预测值进行下采样后,得到的预测值)的SAD。例如,通过如下公式确定第一参考帧与第二参考帧的预测值的差异值,pred 0(i,j)为pred0的第i列,第j行的pred0预测值,pred 1(i,j)为pred1的第i列,第j行的pred1预测值,n为像素总数,abs(x)表示x的绝对值,H表示高度值,W表示宽度值,N为正整数,优选为2。
$$SAD=\sum_{j=0}^{H/N-1}\sum_{i=0}^{W-1}\mathrm{abs}\left(pred_0(i,j)-pred_1(i,j)\right)$$
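实施例28的提前跳出逻辑可以示意如下（diff_fn计算子块预测值的差异值，refine_fn表示上述实施例的信号调整过程，均为假设的占位函数）：

```python
def adjust_subblocks(subblocks, diff_fn, refine_fn, th_sub_sad):
    # 对每个子块：预测差异小于阈值TH_SUB_SAD时跳过该子块的信号调整，
    # 否则按上述实施例的方式确定该子块的目标预测值
    results = []
    for sb in subblocks:
        if diff_fn(sb) < th_sub_sad:
            results.append(sb)          # 提前跳出：沿用原始预测
        else:
            results.append(refine_fn(sb))
    return results
```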
实施例29:
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,应用于编码端或者解码端,所述装置用于若当前块的特征信息满足特定条件,则获取所述当前块或者所述当前块的子块的目标预测值,如图5所示,为所述装置的结构图,所述装置包括:
第一确定模块51,用于若当前块的特征信息满足特定条件,根据所述当前块的第一单向运动信息确定第一原始预测值,根据所述当前块的第二单向运动信息确定第二原始预测值;
第二确定模块52,用于根据所述第一原始预测值和所述第二原始预测值确定水平方向速率;根据所述第一原始预测值和所述第二原始预测值确定垂直方向速率;
第一获取模块53,用于根据所述水平方向速率和所述垂直方向速率获取预测补偿值;
第二获取模块54,用于根据所述第一原始预测值、所述第二原始预测值和所述预测补偿值获取目标预测值。
示例性的,所述特征信息可以包括但不限于以下一种或多种:运动信息属性;预测模式属性;尺寸信息;序列级开关控制信息。
所述第一确定模块51还用于:若所述特征信息包括所述运动信息属性,所述运动信息属性满足如下情况的至少一种时,确定所述运动信息属性满足特定条件;
所述当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向;
所述当前块包括多个子块,所述多个子块的运动信息均相同;
所述当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
所述当前块采用双向预测,且所述当前块对应的两个参考帧与当前帧的距离相同;
所述当前块采用双向预测,且当前块对应的两个参考帧的预测值的差异值小于预设阈值。
所述第一确定模块51还用于:根据所述当前块的第一单向运动信息从第一参考帧中获取第一预测块,并根据所述当前块的第二单向运动信息从第二参考帧中获取第二预测块;
根据所述第一预测块的下采样后的预测值与所述第二预测块的下采样后的预测值的SAD,获取所述第一参考帧与所述第二参考帧的预测值的差异值。
所述第一确定模块51还用于:若所述特征信息包括所述预测模式属性,所述预测模式属性为不采用基于帧内帧间联合预测的融合模式,和/或,不采用对称运动矢量差模式,确定所述预测模式属性满足特定条件。
所述第一确定模块51还用于:若所述特征信息包括所述序列级开关控制信息,且所述序列级开关控制信息为允许所述当前块采用双向光流模式,则确定所述序列级开关控制信息满足特定条件。
所述第一确定模块51还用于:若所述特征信息包括所述尺寸信息,所述尺寸信息满足如下情况的至少一种时,则确定所述尺寸信息满足特定条件;所述当前块的宽度值大于或等于第一阈值,所述当前块的宽度值小于或等于第二阈值;所述当前块的高度值大于或等于第三阈值,所述当前块的高度值小于或等于第四阈值;所述当前块的面积值大于或等于第五阈值,所述当前块的面积值小于或等于第六阈值。
所述第一确定模块51根据所述当前块的第一单向运动信息确定第一原始预测值,根据所述当前块的第二单向运动信息确定第二原始预测值时具体用于:
基于所述当前块的第一单向运动信息,从第一参考帧中确定第一参考块,并确定所述第一参考块的第一原始预测值;其中,所述第一参考块的中心区域的第一原始预测值是通过对第一参考帧中的像素值进行插值得到,所述第一参考块的边缘区域的第一原始预测值是通过对第一参考帧中的像素值进行拷贝得到;基于所述当前块的第二单向运动信息,从第二参考帧中确定第二参考块,并确定所述第二参考块的第二原始预测值;其中,所述第二参考块的中心区域的第二原始预测值是通过对第二参考帧中的像素值进行插值得到,所述第二参考块的边缘区域的第二原始预测值是通过对第二参考帧中的像素值进行拷贝得到。
所述第二确定模块52根据所述第一原始预测值和所述第二原始预测值确定水平方向速率时具体用于:当满足第一预设条件时,根据第一原始预测值和第二原始预测值确定水平方向梯度和与垂直方向梯度和的互相关系数S2、时域预测值差值与垂直方向梯度和的互相关系数S6;根据互相关系数S2、速率阈值、互相关系数S6、第一放大因子和第二放大因子确定水平方向速率;第一预设条件基于互相关系数S2、垂直方向梯度和的自相关系数S5确定。
所述第二确定模块52根据所述第一原始预测值和所述第二原始预测值确定水平方向速率时具体用于:若满足第二预设条件,根据第一原始预测值和第二原始预测值确定水平方向梯度和的自相关系数S1、水平方向梯度和与垂直方向梯度和的互相关系数S2、时域预测值差值与水平方向梯度和的互相关系数S3、垂直方向梯度和的自相关系数S5、时域预测值差值与垂直方向梯度和的互相关系数S6;根据所述自相关系数S1、互相关系数S2、互相关系数S3、自相关系数S5、互相关系数S6、速率阈值、第一放大因子和第二放大因子确定水平方向速率;其中,所述第二预设条件基于所述互相关系数S2、所述自相关系数S5确定。
所述第二确定模块52根据所述第一原始预测值和所述第二原始预测值确定垂直方向速率时具体用于:根据所述第一原始预测值和所述第二原始预测值获取未进行截断处理的未截断水平方向速率,并根据所述未截断水平方向速 率确定垂直方向速率。
所述第二确定模块52根据所述第一原始预测值和所述第二原始预测值确定垂直方向速率时具体用于:当满足第三预设条件时,根据所述第一原始预测值和所述第二原始预测值确定垂直方向梯度和的自相关系数S5、时域预测值差值与垂直方向梯度和的互相关系数S6;根据所述自相关系数S5、互相关系数S6、速率阈值、第一放大因子和第二放大因子确定垂直方向速率;其中,所述第三预设条件基于所述水平方向速率确定。
所述互相关系数S2位于第一互相关系数阈值与第二互相关系数阈值之间;
所述互相关系数S6位于第三互相关系数阈值与第四互相关系数阈值之间。
第一放大因子为5与(BD-7)中的较小值,或,1与(BD-11)中的较大值;第二放大因子为8与(BD-4)中的较小值,或,4与(BD-8)中的较大值;速率阈值为2的M次方,M为13与BD的差值,或,5与(BD-7)中的较大值;其中,BD为比特深度。
所述第一获取模块53根据所述水平方向速率和所述垂直方向速率获取预测补偿值时具体用于:根据所述第一原始预测值、所述第二原始预测值和梯度右移位数,确定水平方向梯度和垂直方向梯度,并根据所述水平方向速率、所述垂直方向速率、所述水平方向梯度和所述垂直方向梯度,获取所述预测补偿值;其中,所述梯度右移位数为2与(14-BD)中的较大值,或,6与(BD-6)中的较大值,BD为比特深度。
本申请实施例提供的解码端设备,从硬件层面而言,其硬件架构示意图具体可以参见图6所示。包括:处理器61和机器可读存储介质62,所述机器可读存储介质62存储有能够被所述处理器61执行的机器可执行指令;所述处理器61用于执行机器可执行指令,以实现本申请上述示例公开的方法。例如,处理器用于执行机器可执行指令,以实现如下步骤:
若当前块的特征信息满足特定条件,则执行以下步骤获取所述当前块或者所述当前块的子块的目标预测值:根据所述当前块的第一单向运动信息确定第一原始预测值,根据所述当前块的第二单向运动信息确定第二原始预测值;根据所述第一原始预测值和所述第二原始预测值确定水平方向速率;根据所述第一原始预测值和所述第二原始预测值确定垂直方向速率;根据所述水平方向速率和所述垂直方向速率获取预测补偿值;根据所述第一原始预测值、所述第二原始预测值和所述预测补偿值获取目标预测值。
本申请实施例提供的编码端设备,从硬件层面而言,其硬件架构示意图具体可以参见图7所示。包括:处理器71和机器可读存储介质72,所述机器可读存储介质72存储有能够被所述处理器71执行的机器可执行指令;所述处理器71用于执行机器可执行指令,以实现本申请上述示例公开的方法。例如,处理器用于执行机器可执行指令,以实现如下步骤:
若当前块的特征信息满足特定条件,则执行以下步骤获取所述当前块或者所述当前块的子块的目标预测值:根据所述当前块的第一单向运动信息确定第一原始预测值,根据所述当前块的第二单向运动信息确定第二原始预测值;根据所述第一原始预测值和所述第二原始预测值确定水平方向速率;根据所述第一原始预测值和所述第二原始预测值确定垂直方向速率;根据所述水平方向速率和所述垂直方向速率获取预测补偿值;根据所述第一原始预测值、所述第二原始预测值和所述预测补偿值获取目标预测值。
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置包括:
确定模块,用于当前块启用双向光流模式时,确定当前块满足的条件包括:
当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置包括:
确定模块,用于当前块启用双向光流模式时,确定当前块满足的条件包括:
当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128;
运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置包括:
确定模块,用于当前块启用双向光流模式时,确定当前块同时满足的条件包括:
当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128,且
当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置包括:
确定模块,用于当前块启用双向光流模式时,确定当前块满足的条件包括:
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置包括:
确定模块,用于在当前块启动双向光流模式时,确定当前块同时满足的条件包括:
开关控制信息为允许所述当前块采用双向光流模式;
当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
当前块的宽度值,高度值和面积值均在限定范围内;
运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
示例性的,所述确定模块,还用于在当前块不启动双向光流模式时,确定当前块不满足如下条件中的任意一个条件:
开关控制信息为允许所述当前块采用双向光流模式;
当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
当前块的宽度值,高度值和面积值均在限定范围内。
示例性的,所述当前块不采用子块运动信息模式,包括:
当前块不采用Affine模式;或者,当前块不采用SBTMVP模式。
示例性的,当前块的宽度值,高度值和面积值均在限定范围内包括:
所述当前块的宽度值大于或等于第一阈值,且所述当前块的宽度值小于或等于第二阈值;
所述当前块的高度值大于或等于第三阈值,且所述当前块的高度值小于或等于第四阈值;
所述当前块的面积值大于或等于第五阈值,且所述当前块的面积值小于或等于第六阈值。
示例性的,所述第一阈值为8,所述第二阈值为16;
所述第三阈值为8,所述第四阈值为16;
所述第五阈值为64,所述第六阈值为256。
示例性的,当前块的宽度值,高度值和面积值均在限定范围内包括:
所述当前块的宽度值大于或等于第一阈值;
所述当前块的高度值大于或等于第三阈值;
所述当前块的面积值大于或等于第五阈值。
示例性的,所述第一阈值为8,所述第三阈值为8,所述第五阈值为128。
示例性的,所述运动补偿模块具体用于:
针对所述当前块包括的至少一个子块中的每个子块:
确定所述子块的第一原始预测值和第二原始预测值;
根据所述第一原始预测值和所述第二原始预测值确定所述子块的水平方向速率;
根据所述第一原始预测值和所述第二原始预测值确定所述子块的垂直方向速率;
根据所述水平方向速率和所述垂直方向速率获取所述子块的预测补偿值;
根据所述第一原始预测值,所述第二原始预测值和所述预测补偿值获取所述子块的目标预测值;根据每个子块的目标预测值确定所述当前块的预测值。
基于与上述方法同样的申请构思,本申请实施例还提出一种视频编码器,包括处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
所述处理器用于执行机器可执行指令,以实现如下步骤:
当前块启用双向光流模式时,当前块满足的条件包括:
当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
当前块启用双向光流模式时,当前块满足的条件包括:
当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
当前块启用双向光流模式时,当前块同时满足的条件包括:
当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128,且
当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
当前块启用双向光流模式时,当前块满足的条件包括:
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
在当前块启动双向光流模式时,当前块同时满足的条件包括:
开关控制信息为允许所述当前块采用双向光流模式;
当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
当前块的宽度值,高度值和面积值均在限定范围内;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
基于与上述方法同样的申请构思,本申请实施例还提出一种视频解码器,包括处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
所述处理器用于执行机器可执行指令,以实现如下步骤:
当前块启用双向光流模式时,当前块满足的条件包括:
当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
当前块启用双向光流模式时,当前块满足的条件包括:
当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
当前块启用双向光流模式时,当前块同时满足的条件包括:
当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128,且
当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
当前块启用双向光流模式时,当前块满足的条件包括:
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿;
或者,
在当前块启动双向光流模式时,当前块同时满足的条件包括:
开关控制信息为允许所述当前块采用双向光流模式;
当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
当前块的宽度值,高度值和面积值均在限定范围内;
若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
基于与上述方法同样的申请构思,本申请实施例还提供一种机器可读存储介质,所述机器可读存储介质上存储有若干计算机指令,所述计算机指令被处理器执行时,能够实现本申请上述示例公开的编解码方法。其中,上述机器可读存储介质可以是任何电子、磁性、光学或其它物理存储装置,可以包含或存储信息,如可执行指令、数据,等等。例如,机器可读存储介质可以是:RAM(Radom Access Memory,随机存取存储器)、易失存储器、非易失性存储器、闪存、存储驱动器(如硬盘驱动器)、固态硬盘、任何类型的存储盘(如光盘、dvd等),或者类似的存储介质,或者它们的组合。
上述实施例阐明的系统、装置、模块或单元,可以由计算机芯片或实体实现,或由具有某种功能的产品来实现。一种典型的实现为计算机,计算机的形式可以是个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件收发设备、平板计算机、可穿戴设备或者这些设备中的任意几种设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本申请时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可以由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其它可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其它可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。而且,这些计算机程序指令也可以存储在能引导计算机或其它可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或者多个流程和/或方框图一个方框或者多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其它可编程数据处理设备上,使得在计算机或者其它可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其它可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (26)

  1. 一种编解码方法,包括:
    当前块启用双向光流模式时,当前块满足的条件包括:当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
    若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  2. 一种编解码方法,包括:
    当前块启用双向光流模式时,当前块满足的条件包括:当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128;
    若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  3. 一种编解码方法,包括:
    当前块启用双向光流模式时,当前块同时满足的条件包括:
    当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128,且
    当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
    若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  4. 一种编解码方法,包括:
    当前块启用双向光流模式时,当前块满足的条件包括:
    当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
    若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  5. 一种编解码方法,包括:
    在当前块启动双向光流模式时,当前块同时满足的条件包括:
    开关控制信息为允许所述当前块采用双向光流模式;
    当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
    当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
    当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
    当前块的宽度值,高度值和面积值均在限定范围内;
    若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    在当前块不启动双向光流模式时,当前块不满足如下条件中的任意一个条件:
    开关控制信息为允许所述当前块采用双向光流模式;
    当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
    当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
    当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;当前块的宽度值,高度值和面积值均在限定范围内。
  7. 根据权利要求5或6所述的方法,其特征在于,所述当前块不采用子块运动信息模式,包括:
    当前块不采用Affine模式;或者
    当前块不采用SBTMVP模式。
  8. 根据权利要求5或6所述的方法,其特征在于,所述当前块的宽度值、高度值和面积值均在限定范围内包括:
    所述当前块的宽度值大于或等于第一阈值,且所述当前块的宽度值小于或等于第二阈值;
    所述当前块的高度值大于或等于第三阈值,且所述当前块的高度值小于或等于第四阈值;
    所述当前块的面积值大于或等于第五阈值,且所述当前块的面积值小于或等于第六阈值。
  9. 根据权利要求8所述的方法,其特征在于,
    所述第一阈值为8,所述第二阈值为16;
    所述第三阈值为8,所述第四阈值为16;
    所述第五阈值为64,所述第六阈值为256。
  10. 根据权利要求5或6所述的方法,其特征在于,所述当前块的宽度值、高度值和面积值均在限定范围内包括:
    所述当前块的宽度值大于或等于第一阈值;
    所述当前块的高度值大于或等于第三阈值;
    所述当前块的面积值大于或等于第五阈值。
  11. 根据权利要求10所述的方法,其特征在于,
    所述第一阈值为8,所述第三阈值为8,所述第五阈值为128。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述对所述当前块进行基于双向光流模式的运动补偿,包括:
    针对所述当前块包括的至少一个子块中的每个子块:
    确定所述子块的第一原始预测值和第二原始预测值;根据所述第一原始预测值和所述第二原始预测值确定所述子块的水平方向速率;
    根据所述第一原始预测值和所述第二原始预测值确定所述子块的垂直方向速率;根据所述水平方向速率和所述垂直方向速率获取所述子块的预测补偿值;
    根据所述第一原始预测值,所述第二原始预测值和所述预测补偿值获取所述子块的目标预测值;
    根据每个子块的目标预测值确定所述当前块的预测值。
  13. 一种编解码装置,包括:
    确定模块,用于当前块启用双向光流模式时,确定当前块满足的条件包括:
    当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
    运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  14. 一种编解码装置,包括:
    确定模块,用于当前块启用双向光流模式时,确定当前块满足的条件包括:
    当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128;
    运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  15. 一种编解码装置,包括:
    确定模块,用于当前块启用双向光流模式时,确定当前块同时满足的条件包括:
    当前块的宽度大于等于8,或者高度大于等于8,宽高乘积大于等于128,且
    当前块禁用CIIP模式,或者禁用对称运动矢量差模式;
    运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  16. 一种编解码装置,包括:
    确定模块,用于当前块启用双向光流模式时,确定当前块满足的条件包括:
    当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
    运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  17. 一种编解码装置,包括:
    确定模块,用于在当前块启动双向光流模式时,确定当前块同时满足的条件包括:
    开关控制信息为允许所述当前块采用双向光流模式;
    当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
    当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
    当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
    当前块的宽度值,高度值和面积值均在限定范围内;
    运动补偿模块,用于若确定对当前块启动双向光流模式,则对所述当前块进行基于双向光流模式的运动补偿。
  18. 根据权利要求17所述的装置,其特征在于,所述确定模块,还用于在当前块不启动双向光流模式时,确定当前块不满足如下条件中的任意一个条件:
    开关控制信息为允许所述当前块采用双向光流模式;
    当前块不采用子块运动信息模式,且当前块不采用CIIP模式,且当前块不采用SMVD模式;
    当前块采用双向预测,且所述当前块对应的两个参考帧来自不同方向,且所述当前块对应的所述两个参考帧与当前帧的距离相同;
    当前块采用双向预测,且所述当前块对应的两个参考帧的加权权重相同;
    当前块的宽度值,高度值和面积值均在限定范围内。
  19. 根据权利要求17或18所述的装置,其特征在于,
    所述当前块不采用子块运动信息模式,包括:
    当前块不采用Affine模式;或者,当前块不采用SBTMVP模式。
  20. 根据权利要求17或18所述的装置,其特征在于,
    当前块的宽度值,高度值和面积值均在限定范围内包括:
    所述当前块的宽度值大于或等于第一阈值,且所述当前块的宽度值小于或等于第二阈值;
    所述当前块的高度值大于或等于第三阈值,且所述当前块的高度值小于或等于第四阈值;
    所述当前块的面积值大于或等于第五阈值,且所述当前块的面积值小于或等于第六阈值。
  21. 根据权利要求20所述的装置,其特征在于,
    所述第一阈值为8,所述第二阈值为16;
    所述第三阈值为8,所述第四阈值为16;
    所述第五阈值为64,所述第六阈值为256。
  22. 根据权利要求17或18所述的装置,其特征在于,
    当前块的宽度值,高度值和面积值均在限定范围内包括:
    所述当前块的宽度值大于或等于第一阈值;
    所述当前块的高度值大于或等于第三阈值;
    所述当前块的面积值大于或等于第五阈值。
  23. 根据权利要求22所述的装置,其特征在于,
    所述第一阈值为8,所述第三阈值为8,所述第五阈值为128。
  24. 根据权利要求13-23任一项所述的装置,其特征在于,所述运动补偿模块用于:
    针对所述当前块包括的至少一个子块中的每个子块:
    确定所述子块的第一原始预测值和第二原始预测值;
    根据所述第一原始预测值和所述第二原始预测值确定所述子块的水平方向速率;
    根据所述第一原始预测值和所述第二原始预测值确定所述子块的垂直方向速率;
    根据所述水平方向速率和所述垂直方向速率获取所述子块的预测补偿值;
    根据所述第一原始预测值,所述第二原始预测值和所述预测补偿值获取所述子块的目标预测值;根据每个子块的目标预测值确定所述当前块的预测值。
  25. 一种视频编码器,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
    所述处理器用于执行机器可执行指令,以实现权利要求1-12任一所述的编解码方法。
  26. 一种视频解码器,其特征在于,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
    所述处理器用于执行机器可执行指令,以实现权利要求1-12任一所述的编解码方法。
PCT/CN2020/096788 2019-06-21 2020-06-18 一种编解码方法、装置及其设备 WO2020253769A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910544562.5A CN112118455B (zh) 2019-06-21 2019-06-21 一种编解码方法、装置及其设备
CN201910544562.5 2019-06-21

Publications (1)

Publication Number Publication Date
WO2020253769A1 true WO2020253769A1 (zh) 2020-12-24

Family

ID=70220964

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/096600 WO2020253730A1 (zh) 2019-06-21 2020-06-17 一种编解码方法、装置及其设备
PCT/CN2020/096788 WO2020253769A1 (zh) 2019-06-21 2020-06-18 一种编解码方法、装置及其设备

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096600 WO2020253730A1 (zh) 2019-06-21 2020-06-17 一种编解码方法、装置及其设备

Country Status (8)

Country Link
US (1) US20220232242A1 (zh)
EP (1) EP3989576A4 (zh)
JP (2) JP7334272B2 (zh)
KR (2) KR20240005233A (zh)
CN (30) CN113411593B (zh)
CA (1) CA3139466A1 (zh)
MX (1) MX2021015977A (zh)
WO (2) WO2020253730A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113411593B (zh) * 2019-06-21 2022-05-27 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备
CN112135145B (zh) * 2019-11-14 2022-01-25 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备
WO2023172002A1 (ko) * 2022-03-11 2023-09-14 현대자동차주식회사 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170339405A1 (en) * 2016-05-20 2017-11-23 Arris Enterprises Llc System and method for intra coding
WO2018212578A1 (ko) * 2017-05-17 2018-11-22 주식회사 케이티 비디오 신호 처리 방법 및 장치
WO2019045427A1 (ko) * 2017-08-29 2019-03-07 에스케이텔레콤 주식회사 양방향 옵티컬 플로우를 이용한 움직임 보상 방법 및 장치
CN109479141A (zh) * 2016-07-12 2019-03-15 韩国电子通信研究院 图像编码/解码方法和用于所述方法的记录介质
CN111031318A (zh) * 2019-06-21 2020-04-17 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及其设备

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008048235A (ja) * 2006-08-18 2008-02-28 Sony Corp 可変長符号の復号化方法および復号化装置
JP2008289134A (ja) * 2007-04-20 2008-11-27 Panasonic Corp 符号化レート変換装置、集積回路、符号化レート変換方法、及びプログラム
CN100493194C (zh) * 2007-05-17 2009-05-27 上海交通大学 用于视频感兴趣区域编解码的泄漏运动补偿方法
CN102934444A (zh) * 2010-04-06 2013-02-13 三星电子株式会社 用于对视频进行编码的方法和设备以及用于对视频进行解码的方法和设备
JPWO2014054267A1 (ja) * 2012-10-01 2016-08-25 パナソニックIpマネジメント株式会社 画像符号化装置及び画像符号化方法
CN104427345B (zh) * 2013-09-11 2019-01-08 华为技术有限公司 运动矢量的获取方法、获取装置、视频编解码器及其方法
WO2016187776A1 (zh) * 2015-05-25 2016-12-01 北京大学深圳研究生院 一种基于光流法的视频插帧方法及系统
US20180249172A1 (en) * 2015-09-02 2018-08-30 Mediatek Inc. Method and apparatus of motion compensation for video coding based on bi prediction optical flow techniques
US10375413B2 (en) * 2015-09-28 2019-08-06 Qualcomm Incorporated Bi-directional optical flow for video coding
WO2017134957A1 (ja) * 2016-02-03 2017-08-10 シャープ株式会社 動画像復号装置、動画像符号化装置、および予測画像生成装置
MY201069A (en) * 2016-02-05 2024-02-01 Hfi Innovation Inc Method and apparatus of motion compensation based on bi-directional optical flow techniques for video coding
CN114449288A (zh) * 2016-03-16 2022-05-06 MediaTek Inc. Method and apparatus of pattern-based motion vector derivation for video coding
CN109417620B (zh) * 2016-03-25 2021-04-27 Panasonic Intellectual Property Management Co., Ltd. Method and apparatus for encoding and decoding moving images using signal-dependent adaptive quantization
CN116156200A (zh) * 2016-07-14 2023-05-23 Samsung Electronics Co., Ltd. Video decoding method and device, and video encoding method and device
CN116567223A (zh) * 2016-08-11 2023-08-08 LX Semicon Co., Ltd. Image encoding/decoding device and image data transmission device
CN106454378B (zh) * 2016-09-07 2019-01-29 Sun Yat-sen University Frame-rate up-conversion video coding method and system based on a deformable motion model
KR102499139B1 (ko) * 2016-09-21 2023-02-13 Samsung Electronics Co., Ltd. Electronic device for displaying an image and control method thereof
EP3556096B1 (en) * 2016-12-22 2024-05-22 HFI Innovation Inc. Method and apparatus of motion refinement for video coding
US10931969B2 (en) * 2017-01-04 2021-02-23 Qualcomm Incorporated Motion vector reconstructions for bi-directional optical flow (BIO)
US10523964B2 (en) * 2017-03-13 2019-12-31 Qualcomm Incorporated Inter prediction refinement based on bi-directional optical flow (BIO)
WO2018174618A1 (ko) * 2017-03-22 2018-09-27 Electronics and Telecommunications Research Institute Prediction method and device using a reference block
KR102409430B1 (ko) * 2017-04-24 2022-06-15 SK Telecom Co., Ltd. Optical flow estimation method and device for motion compensation
CN116708831A (zh) * 2017-04-24 2023-09-05 SK Telecom Co., Ltd. Method for encoding/decoding video data, and method for transmitting an encoded video data bitstream
US10904565B2 (en) * 2017-06-23 2021-01-26 Qualcomm Incorporated Memory-bandwidth-efficient design for bi-directional optical flow (BIO)
EP3649780A1 (en) * 2017-07-03 2020-05-13 Vid Scale, Inc. Motion-compensation prediction based on bi-directional optical flow
WO2019074273A1 (ko) * 2017-10-10 2019-04-18 Electronics and Telecommunications Research Institute Method and device using inter prediction information
CN111630859B (zh) * 2017-12-14 2024-04-16 LG Electronics Inc. Method and device for image decoding based on inter prediction in an image coding system
WO2020093999A1 (en) * 2018-11-05 2020-05-14 Beijing Bytedance Network Technology Co., Ltd. Inter prediction with refinement in video processing
CN111385581B (zh) * 2018-12-28 2022-04-26 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding/decoding method and device
JP2021513755A (ja) * 2019-01-15 2021-05-27 LG Electronics Inc. Video coding method and device using a transform skip flag
EP3907998A4 (en) * 2019-02-14 2022-03-16 LG Electronics Inc. METHOD AND DEVICE FOR INTER-PREDICTION BASED ON A DMVR
CN117014634A (zh) * 2019-03-11 2023-11-07 Alibaba Group Holding Limited Inter prediction method for encoding video data
KR20210114060A (ko) * 2019-03-13 2021-09-17 LG Electronics Inc. DMVR-based inter prediction method and device
EP3925220A4 (en) * 2019-03-17 2022-06-29 Beijing Bytedance Network Technology Co., Ltd. Calculation of prediction refinement based on optical flow
US11140409B2 (en) * 2019-03-22 2021-10-05 Lg Electronics Inc. DMVR and BDOF based inter prediction method and apparatus thereof
WO2020197083A1 (ko) * 2019-03-22 2020-10-01 LG Electronics Inc. DMVR and BDOF based inter prediction method and device
US11509910B2 (en) * 2019-09-16 2022-11-22 Tencent America LLC Video coding method and device for avoiding small chroma block intra prediction
US20210092404A1 (en) * 2019-09-20 2021-03-25 Qualcomm Incorporated Reference picture constraint for decoder side motion refinement and bi-directional optical flow
US11563980B2 (en) * 2020-04-02 2023-01-24 Qualcomm Incorporated General constraint information syntax in video coding
US11611778B2 (en) * 2020-05-20 2023-03-21 Sharp Kabushiki Kaisha Systems and methods for signaling general constraint information in video coding

Also Published As

Publication number Publication date
CN113411594B (zh) 2022-05-31
CN113411600B (zh) 2022-05-31
CN113411596A (zh) 2021-09-17
CN113411600A (zh) 2021-09-17
CN113411604A (zh) 2021-09-17
CN113411609A (zh) 2021-09-17
CN113411603B (zh) 2022-03-25
CN113596476B (zh) 2022-07-01
CN113411593B (zh) 2022-05-27
CN113411602B (zh) 2022-05-31
JP7334272B2 (ja) 2023-08-28
CN113596481A (zh) 2021-11-02
EP3989576A1 (en) 2022-04-27
CN113411598A (zh) 2021-09-17
CN113411603A (zh) 2021-09-17
CN111031318B (zh) 2020-12-29
CN113411601A (zh) 2021-09-17
CN113411609B (zh) 2022-05-31
CN113411597A (zh) 2021-09-17
CN113596478A (zh) 2021-11-02
CN113411610B (zh) 2022-05-27
CN113411596B (zh) 2022-05-31
KR102620203B1 (ko) 2023-12-29
KR20220003027A (ko) 2022-01-07
CN113596480B (zh) 2022-08-12
CN113596479A (zh) 2021-11-02
CN113411608A (zh) 2021-09-17
MX2021015977A (es) 2022-02-10
CN113596478B (zh) 2022-04-26
CA3139466A1 (en) 2020-12-24
CN113411605B (zh) 2022-05-31
CN113596481B (zh) 2022-07-01
CN113411594A (zh) 2021-09-17
CN113596477B (zh) 2022-07-01
CN113411610A (zh) 2021-09-17
US20220232242A1 (en) 2022-07-21
CN113411593A (zh) 2021-09-17
JP2022534240A (ja) 2022-07-28
CN113411599A (zh) 2021-09-17
CN111031318A (zh) 2020-04-17
CN113613020A (zh) 2021-11-05
CN112118455A (zh) 2020-12-22
CN113411607A (zh) 2021-09-17
CN113411598B (zh) 2022-05-31
CN112118455B (zh) 2022-05-31
CN113709503A (zh) 2021-11-26
CN113596476A (zh) 2021-11-02
CN113613021A (zh) 2021-11-05
CN113596480A (zh) 2021-11-02
CN113411601B (zh) 2022-05-31
CN113411606B (zh) 2022-07-01
CN113596477A (zh) 2021-11-02
CN113411606A (zh) 2021-09-17
JP2023156444A (ja) 2023-10-24
CN113411597B (zh) 2022-05-27
CN113411602A (zh) 2021-09-17
WO2020253730A1 (zh) 2020-12-24
CN113411608B (zh) 2022-05-31
CN113411605A (zh) 2021-09-17
CN113411604B (zh) 2022-05-31
CN113411595A (zh) 2021-09-17
CN113411595B (zh) 2022-05-31
CN113411607B (zh) 2022-05-31
EP3989576A4 (en) 2022-08-31
CN113747175A (zh) 2021-12-03
KR20240005233A (ko) 2024-01-11
CN113411599B (zh) 2022-05-31

Similar Documents

Publication Publication Date Title
WO2020253769A1 (zh) Encoding/decoding method, apparatus, and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20826562

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20826562

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16-09-2022)
