WO2022001837A1 - Encoding and decoding method, apparatus, and device thereof - Google Patents

Encoding and decoding method, apparatus, and device thereof

Info

Publication number
WO2022001837A1
Authority
WO
WIPO (PCT)
Prior art keywords: motion information, current block, weight, value, information
Prior art date
Application number
PCT/CN2021/102199
Other languages
English (en)
French (fr)
Inventor
孙煜程
曹小强
陈方栋
王莉
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority to US18/009,949 priority Critical patent/US20230344985A1/en
Priority to AU2021298606A priority patent/AU2021298606C1/en
Priority to JP2022578722A priority patent/JP2023531010A/ja
Priority to KR1020227043009A priority patent/KR20230006017A/ko
Priority to EP21834647.6A priority patent/EP4152750A4/en
Publication of WO2022001837A1 publication Critical patent/WO2022001837A1/zh
Priority to ZA2022/13605A priority patent/ZA202213605B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 … using adaptive coding
    • H04N19/102 … characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/134 … characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169 … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 … the unit being an image region, e.g. an object
    • H04N19/176 … the region being a block, e.g. a macroblock
    • H04N19/50 … using predictive coding
    • H04N19/503 … involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/593 … involving spatial prediction techniques

Definitions

  • The present application relates to the technical field of video encoding and decoding, and in particular to an encoding and decoding method, apparatus, and device.
  • Video images are transmitted after being encoded; video encoding may include processes such as prediction, transformation, quantization, entropy encoding, and filtering.
  • Prediction may include intra prediction and inter prediction.
  • Inter prediction uses the temporal correlation of a video to predict the pixels of the current image from pixels of adjacent encoded images, thereby effectively removing temporal redundancy.
  • Intra prediction uses the spatial correlation of a video to predict the current pixels from pixels of already-encoded blocks of the current frame image, thereby removing spatial redundancy.
  • The current block is a rectangle, but the edge of an actual object is often not rectangular, so a block at an object edge often contains two different objects (such as a foreground object and the background). When the motions of the two objects are inconsistent, a rectangular partition cannot separate them well; even when the current block is divided into two non-rectangular sub-blocks and predicted through those sub-blocks, problems such as poor prediction quality and poor coding performance may remain.
  • The present application provides an encoding and decoding method, apparatus, and device, which improve the accuracy of prediction.
  • The present application provides an encoding and decoding method, the method comprising:
  • for each pixel position of the current block, determining, according to the weight prediction angle, the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining the associated weight value of the pixel position according to the target weight value of the pixel position;
  • determining the first predicted value of the pixel position according to the first target motion information of the current block, and the second predicted value of the pixel position according to the second target motion information of the current block; determining the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value;
  • determining the weighted predicted value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • The present application provides an encoding and decoding apparatus, the apparatus comprising: an obtaining module, configured to obtain the weight prediction angle and the weight configuration parameters of the current block when it is determined to start weighted prediction for the current block; a configuration module, configured to configure reference weight values for the peripheral positions outside the current block according to the weight configuration parameters; and a determining module, configured to determine, for each pixel position of the current block, the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle.
  • The obtaining module is further configured to obtain a motion information candidate list including at least one piece of candidate motion information, and to obtain the first target motion information and the second target motion information of the current block based on the motion information candidate list. The determining module is further configured to determine the first predicted value of the pixel position according to the first target motion information of the current block and the second predicted value of the pixel position according to the second target motion information of the current block; to determine the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and to determine the weighted predicted value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • The present application provides a decoding-end device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is configured to execute the machine-executable instructions to implement the following steps:
  • for each pixel position of the current block, determining, according to the weight prediction angle, the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining the associated weight value of the pixel position according to the target weight value of the pixel position;
  • determining the first predicted value of the pixel position according to the first target motion information of the current block, and the second predicted value of the pixel position according to the second target motion information of the current block; determining the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value;
  • determining the weighted predicted value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • The present application provides an encoding-end device, including a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions that can be executed by the processor;
  • the processor is configured to execute the machine-executable instructions to implement the following steps:
  • for each pixel position of the current block, determining, according to the weight prediction angle, the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining the associated weight value of the pixel position according to the target weight value of the pixel position;
  • determining the first predicted value of the pixel position according to the first target motion information of the current block, and the second predicted value of the pixel position according to the second target motion information of the current block; determining the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value;
  • determining the weighted predicted value of the current block according to the weighted predicted values of all pixel positions of the current block.
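  • The per-pixel weighting summarized in the above aspects can be sketched roughly as follows. This is an illustrative sketch only, not the normative derivation of the present application: the 8-level weight scale, the integer rounding, and the shift-based mapping from a pixel position to its peripheral matching position are assumptions, and all names are hypothetical.

```python
def weighted_prediction(pred1, pred2, ref_weights, angle_shift, width, height):
    """Blend two prediction blocks with per-pixel weights derived from
    reference weights configured for peripheral positions outside the block.

    pred1, pred2 : height x width lists of predicted sample values
    ref_weights  : reference weight values of the peripheral positions
    angle_shift  : models the weight prediction angle; pixel (x, y) points
                   to peripheral matching position x + (y >> angle_shift)
    """
    MAX_W = 8                              # total weight scale (assumption)
    out = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            pos = x + (y >> angle_shift)   # peripheral matching position
            w1 = ref_weights[pos]          # target weight value of the pixel
            w2 = MAX_W - w1                # associated weight value
            # weighted predicted value with rounding
            out[y][x] = (pred1[y][x] * w1 + pred2[y][x] * w2 + MAX_W // 2) // MAX_W
    return out
```

With a ramp of reference weights, pixels on one side of the implied dividing line take their value mostly from `pred1`, pixels on the other side mostly from `pred2`, and positions near the transition are blended.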
  • FIG. 1 is a schematic diagram of a video coding framework.
  • FIGS. 2A-2E are schematic diagrams of weighted prediction.
  • FIG. 3 is a flowchart of an encoding and decoding method in an embodiment of the present application.
  • FIGS. 4A-4D are schematic diagrams of peripheral positions outside the current block.
  • FIG. 5 is a flowchart of an encoding and decoding method in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a weight prediction angle in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of reference weight values of four weight transform rates in an embodiment of the present application.
  • FIGS. 8A-8C are schematic diagrams of weight prediction angles and angle partitions in an embodiment of the present application.
  • FIG. 9A is a schematic diagram of a neighboring block of the current block in an embodiment of the present application.
  • FIGS. 9B-9D are schematic diagrams of the target position of the current block in an embodiment of the present application.
  • FIG. 10A is a schematic structural diagram of an encoding and decoding apparatus in an embodiment of the present application.
  • FIG. 10B is a schematic structural diagram of an encoding and decoding apparatus in an embodiment of the present application.
  • FIG. 10C is a hardware structure diagram of a decoding-end device in an embodiment of the present application.
  • FIG. 10D is a hardware structure diagram of an encoding-end device in an embodiment of the present application.
  • The first information may also be referred to as the second information and, similarly, the second information may also be referred to as the first information, depending on the context.
  • The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".
  • Intra prediction
  • Inter prediction
  • IBC: Intra Block Copy
  • Intra prediction uses already-encoded blocks for prediction based on the spatial correlation of the video, in order to remove spatial redundancy.
  • Intra prediction specifies multiple prediction modes, each of which (except the DC mode) corresponds to a texture direction. For example, if the image texture is arranged horizontally, the horizontal prediction mode can better predict the image information.
  • Inter prediction relies on the temporal correlation of the video: since a video sequence contains strong temporal correlation, using pixels of adjacent encoded images to predict the pixels of the current image can effectively remove temporal redundancy.
  • Intra Block Copy (IBC) allows references within the same frame, i.e., the reference data of the current block comes from the same frame. Using the block vector of the current block to obtain its predicted value can improve the compression efficiency of screen content sequences.
  • A predicted pixel is a pixel value derived from pixels that have already been encoded and decoded.
  • The residual is obtained as the difference between the original pixel and the predicted pixel; the residual is then transformed and quantized, and the coefficients are encoded.
  • An inter predicted pixel is a pixel value derived from a reference frame of the current block; because pixel positions are discrete, an interpolation operation may be required to obtain the final predicted pixel. The closer the predicted pixel is to the original pixel, the smaller the residual energy obtained by subtracting the two, and the higher the coding compression performance.
  • Motion Vector (MV): in inter prediction, a motion vector represents the relative displacement between the current block of the current frame and a reference block of a reference frame. Each divided block has a corresponding motion vector that is sent to the decoding end. If the motion vector of each block were encoded and transmitted independently, a large number of bits would be consumed, especially for many small blocks. To reduce the number of bits used to encode motion vectors, the spatial correlation between adjacent blocks can be exploited: the motion vector of the current block is predicted from the motion vectors of adjacent encoded blocks, and only the prediction difference is encoded, which effectively reduces the number of bits representing motion vectors.
  • The motion vector of an adjacent encoded block can be used to predict the motion vector of the current block, and the difference between this Motion Vector Prediction (MVP) and the actual estimate of the motion vector, the Motion Vector Difference (MVD), is then encoded.
  • In inter prediction, index information of the reference frame image is also required to indicate which reference frame image the current block uses.
  • A reference frame image list can usually be established for the current frame, and the reference frame index indicates which reference frame image in the list is used by the current block.
  • Motion-related information such as the motion vector, the reference frame index, and the reference direction may be collectively referred to as motion information.
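  • The MVP/MVD signalling described above can be sketched as follows (a toy illustration with hypothetical function names; real codecs additionally quantize, clip, and entropy-code these values):

```python
def encode_mv(mv, mvp):
    """Encoder side: only the Motion Vector Difference (MVD) between the
    actual motion vector and its prediction (MVP) is signalled."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvd, mvp):
    """Decoder side: reconstruct the motion vector as MVP + MVD."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

Because the MVP from neighboring encoded blocks is usually close to the true motion vector, the MVD components are small and cheap to encode.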
  • Rate-Distortion Optimization (RDO): the cost of an encoding mode may be written as J(mode) = D + λ·R, where D is the distortion, λ is the Lagrange multiplier, and R is the actual number of bits required for encoding the image block in this mode, including the bits required for encoding the mode information, the motion information, and the residual.
  • If the RDO principle is used to compare and decide among encoding modes, the best coding performance can usually be guaranteed.
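  • The RDO decision J(mode) = D + λ·R can be illustrated by the following sketch (function and variable names are hypothetical; how D and R are actually measured is codec-specific):

```python
def rd_cost(distortion, bits, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def best_mode(candidates, lam):
    """candidates: list of (mode_name, distortion, bits) tuples.
    Return the name of the mode with the lowest RD cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]
```

Note that the winning mode depends on λ: a small λ favors low distortion at the cost of more bits, while a large λ favors cheap modes.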
  • FIG. 1 shows a schematic diagram of a video coding framework, which can be used to implement the processing flow of the encoding end in the embodiments of the present application. The schematic diagram of the video decoding framework is similar to FIG. 1 and is not repeated here; the video decoding framework can be used to implement the processing flow of the decoding end in the embodiments of the present application.
  • The video coding framework and the video decoding framework may include, but are not limited to, modules such as intra prediction/inter prediction, motion estimation/motion compensation, reference image buffer, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, and entropy coding.
  • The current block may be a rectangle, but the edge of an actual object is often not rectangular, so a block at an object edge often contains two different objects (such as a foreground object and the background). When the motions of the two objects are inconsistent, a rectangular partition cannot separate them well. To this end, the current block can be divided into two non-rectangular sub-blocks, and weighted prediction can be performed on the two non-rectangular sub-blocks.
  • Weighted prediction performs a weighted operation on multiple predicted values to obtain a final predicted value.
  • Weighted prediction may include combined inter/intra prediction (CIIP), inter-inter joint weighted prediction, intra-intra joint weighted prediction, and the like.
  • The same weight value may be configured for all pixel positions of the current block, or different weight values may be configured for different pixel positions of the current block.
  • Referring to FIG. 2A, it is a schematic diagram of inter-frame and intra-frame joint weighted prediction.
  • The CIIP prediction block is obtained by weighting an intra prediction block (i.e., the intra prediction value of each pixel position, obtained with an intra prediction mode) and an inter prediction block (i.e., the inter prediction value of each pixel position, obtained with an inter prediction mode).
  • The weight ratio of the intra predicted value to the inter predicted value at each pixel position is 1:1. For example, for each pixel position, the intra predicted value and the inter predicted value of the pixel position are weighted to obtain the joint predicted value of that position, and the joint predicted values of all pixel positions finally compose the CIIP prediction block.
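  • The 1:1 CIIP blend described above amounts to a rounded per-pixel average, which can be sketched as follows (an illustration only; the function name and the exact rounding offset are assumptions):

```python
def ciip_blend(intra_pred, inter_pred):
    """Equal-weight (1:1) CIIP blend: for each pixel position, the joint
    predicted value is the rounded average (intra + inter + 1) >> 1."""
    return [[(a + b + 1) >> 1 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(intra_pred, inter_pred)]
```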
  • Referring to FIG. 2B, it is a schematic diagram of inter triangular partition weighted prediction (Triangular Partition Mode, TPM).
  • The TPM prediction block is composed of inter prediction block 1 (i.e., the inter prediction values obtained with inter prediction mode 1) and inter prediction block 2 (i.e., the inter prediction values obtained with inter prediction mode 2).
  • The TPM prediction block can be divided into two regions: one region can be inter region 1, and the other can be inter region 2.
  • The two inter regions of the TPM prediction block can be non-rectangularly distributed, and the angle of the dashed dividing line can be either the main diagonal or the sub-diagonal.
  • For each pixel position in inter region 1, the predicted value is determined mainly based on the inter predicted value of inter prediction block 1: when the inter predicted value of inter prediction block 1 at the pixel position is weighted with the inter predicted value of inter prediction block 2 at the pixel position, the weight of inter prediction block 1 is larger and the weight of inter prediction block 2 is smaller (possibly 0), yielding the joint predicted value of the pixel position.
  • For each pixel position in inter region 2, the predicted value is determined mainly based on the inter predicted value of inter prediction block 2: when the two inter predicted values at the pixel position are weighted, the weight of inter prediction block 2 is larger and the weight of inter prediction block 1 is smaller (possibly 0), yielding the joint predicted value of the pixel position.
  • Referring to FIG. 2C, it is a schematic diagram of inter-frame and intra-frame joint triangular weighted prediction, in which the inter region and the intra region of the CIIP prediction block exhibit the weight distribution of a triangular weighted partition.
  • The CIIP prediction block is obtained by combining an intra prediction block (i.e., the intra prediction values obtained with an intra prediction mode) and an inter prediction block (i.e., the inter prediction values obtained with an inter prediction mode).
  • The CIIP prediction block can be divided into two regions: one region can be an intra region, and the other an inter region.
  • The intra and inter regions of the CIIP prediction block can be non-rectangularly distributed; the area around the dashed dividing line can adopt a hybrid weighting, or the partition can be performed directly. The angle of the dashed dividing line can be either the main diagonal or the sub-diagonal, and the positions of the intra region and the inter region are variable.
  • For each pixel position in the intra region, the predicted value is determined mainly based on the intra predicted value: when the intra predicted value of the pixel position is weighted with the inter predicted value of the pixel position, the weight of the intra predicted value is larger and the weight of the inter predicted value is smaller, yielding the joint predicted value. For each pixel position in the inter region, the predicted value is determined mainly based on the inter predicted value: when the two predicted values are weighted, the weight of the inter predicted value is larger and the weight of the intra predicted value is smaller, yielding the joint predicted value of the pixel position.
  • The GEO mode divides an inter prediction block into two sub-blocks with a dividing line. Unlike the TPM mode, the GEO mode can adopt more division directions, while its weighted prediction process is similar to that of the TPM mode.
  • The GEO prediction block is composed of inter prediction block 1 (i.e., the inter prediction values obtained with inter prediction mode 1) and inter prediction block 2 (i.e., the inter prediction values obtained with inter prediction mode 2).
  • The GEO prediction block may be divided into two regions: one region may be inter region 1, and the other inter region 2.
  • For each pixel position in inter region 1, the predicted value is determined mainly based on the inter predicted value of inter prediction block 1: when the inter predicted value of inter prediction block 1 at the pixel position is weighted with the inter predicted value of inter prediction block 2 at the pixel position, the weight of inter prediction block 1 is larger and the weight of inter prediction block 2 is smaller.
  • For each pixel position in inter region 2, the predicted value is determined mainly based on the inter predicted value of inter prediction block 2: when the two inter predicted values at the pixel position are weighted, the weight of inter prediction block 2 is larger and the weight of inter prediction block 1 is smaller.
  • the weight value configuration of the GEO prediction block is related to the distance of the pixel position from the dividing line, as shown in FIG. 2E , the pixel position A, the pixel position B and the pixel position C are located on the lower right side of the dividing line, and the pixel position D, the pixel position Position E and pixel position F are located on the upper left side of the dividing line.
  • the weight values of inter-frame area 2 are sorted as B < A < C
  • the weight values of inter-frame area 1 are sorted as C < A < B.
  • the weight values of the inter-frame area 1 are sorted as D < F < E
  • the weight values of the inter-frame area 2 are sorted as E < F < D.
  • the above method needs to calculate the distance between each pixel position and the dividing line, and then determine the weight value of the pixel position accordingly.
  • in view of this, a method for deriving weight values is proposed in the embodiments of the present application, which can determine the target weight value of each pixel position of the current block according to the reference weight values of the peripheral positions outside the current block, and can configure a reasonable target weight value for each pixel position, thereby improving the prediction accuracy, the prediction performance and the coding performance, and making the prediction value closer to the original pixel.
  • Embodiment 1: Referring to FIG. 3, it is a schematic flowchart of an encoding and decoding method. This method can be applied to a decoding end (also called a video decoder) or an encoding end (also called a video encoder), and the method can include:
  • Step 301 when it is determined to start the weighted prediction for the current block, obtain the weight prediction angle and weight configuration parameters of the current block, where the weight configuration parameters include the weight transformation rate and the starting position of the weight transformation.
  • the starting position of the weight transformation may be determined by at least one of the following parameters: the weight prediction angle of the current block, the weight prediction position of the current block, and the size of the current block.
  • the decoding end or the encoding end may first determine whether to start weighted prediction for the current block. If the weighted prediction is started for the current block, the encoding and decoding method of the embodiment of the present application is adopted, that is, step 301 and subsequent steps are performed. If the weighted prediction is not started for the current block, the implementation manner is not limited in this embodiment of the present application.
  • the weighted prediction angle of the current block, the weighted prediction position of the current block, and the weighted transformation rate of the current block may be obtained. Then, the starting position of the weight transformation of the current block may be determined based on at least one of the weight prediction angle of the current block, the weight prediction position of the current block, and the size of the current block. So far, the weight prediction angle of the current block, the weight transformation rate of the current block, and the starting position of the weight transformation of the current block can be obtained.
  • Step 302 Configure a reference weight value for a peripheral position outside the current block according to the weight configuration parameter of the current block.
  • the number of peripheral positions outside the current block may be determined based on the size of the current block and/or the weight prediction angle of the current block. For example, the number M of peripheral positions outside the current block is determined based on the size of the current block and/or the weight prediction angle of the current block, and reference weight values are configured for the M peripheral positions according to the weight configuration parameter of the current block.
  • the reference weight value of the peripheral position outside the current block may increase monotonically; or, the reference weight value of the peripheral position outside the current block may decrease monotonically.
  • the arrangement of the reference weight values of the peripheral positions outside the current block may be 00...0024688...88, or the arrangement of the reference weight values of the peripheral positions outside the current block may be 88...8864200...00.
  • the peripheral position outside the current block may include an integer pixel position, or a sub-pixel position, or an integer pixel position and a sub-pixel position.
  • the peripheral position outside the current block may include, but is not limited to: the peripheral position in the upper row outside the current block, or the peripheral position in the left column outside the current block, or the peripheral position in the lower row outside the current block, or the peripheral position in the right column outside the current block.
  • the above is just an example of the surrounding location, which is not limited.
  • the reference weight value of the peripheral position outside the current block includes the reference weight value of the target area, the reference weight value of the first adjacent area of the target area, and the reference weight value of the second adjacent area of the target area.
  • the reference weight values of the first adjacent area are all first reference weight values, and the reference weight values of the second adjacent area are monotonically increasing.
  • the reference weight values of the first adjacent area are all the first reference weight values, and the reference weight values of the second adjacent area are monotonically decreasing.
  • the reference weight values of the first adjacent area are all second reference weight values, the reference weight values of the second adjacent area are all third reference weight values, and the second reference weight value is different from the third reference weight value.
  • the reference weight value of the first adjacent area increases monotonically, and the reference weight value of the second adjacent area increases monotonically.
  • the reference weight value of the first adjacent area decreases monotonically, and the reference weight value of the second adjacent area decreases monotonically.
  • the target area includes one reference weight value or at least two reference weight values; if the target area includes at least two reference weight values, the at least two reference weight values of the target area monotonically increase or decrease monotonically.
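The monotonic reference-weight arrangements described above (for example 00...0024688...88) can be sketched as a clipped linear ramp. The helper below is a hypothetical illustration only: the function name, the 0-to-8 weight range, and the clipping formula are assumptions drawn from the examples in this description, not the application's normative derivation.

```python
def configure_reference_weights(num_positions, rate, start):
    """Configure reference weight values for peripheral positions.

    Each position i gets clip(rate * (i - start), 0, 8), yielding a
    monotonically increasing arrangement for a positive rate; a negative
    rate starts from the maximum weight and decreases monotonically.
    """
    weights = []
    for i in range(num_positions):
        if rate >= 0:
            w = rate * (i - start)
        else:
            w = 8 + rate * (i - start)  # start from the maximum weight 8
        weights.append(max(0, min(8, w)))  # clip to the 0..8 range
    return weights

print(configure_reference_weights(12, 2, 4))
# [0, 0, 0, 0, 0, 2, 4, 6, 8, 8, 8, 8]
```

With weight transformation rate 2 and starting position 4 this yields the increasing arrangement 0 0 0 0 0 2 4 6 8 8 8 8; a rate of -2 yields the mirrored decreasing arrangement 8 8 8 8 8 6 4 2 0 0 0 0.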
  • Step 303: for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle of the current block; determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position.
  • the weighted prediction angle represents the angular direction pointed by the pixel positions inside the current block. For example, based on a certain weighted prediction angle, the angular direction corresponding to the weighted prediction angle points to a certain outer peripheral position of the current block. Based on this, for each pixel position of the current block, the angular direction pointed by the pixel position is determined based on the weighted prediction angle, and then the surrounding matching position pointed by the pixel position is determined from the surrounding positions outside the current block according to the angular direction.
  • the target weight value of the pixel position is determined based on the reference weight value associated with the surrounding matching position; for example, the reference weight value associated with the surrounding matching position is determined as the target weight value of the pixel position. Then, the associated weight value of the pixel position is determined according to the target weight value of the pixel position. For example, the sum of the target weight value and the associated weight value of each pixel position may be a fixed preset value, so that the associated weight value can be the difference between the preset value and the target weight value.
  • for example, assuming the preset value is 8: if the target weight value of the pixel position is 0, the associated weight value of the pixel position is 8; if the target weight value of the pixel position is 1, the associated weight value of the pixel position is 7; and so on, as long as the sum of the target weight value and the associated weight value is 8.
  • Step 304 Acquire a motion information candidate list, where the motion information candidate list includes at least one candidate motion information; acquire first target motion information and second target motion information of the current block based on the motion information candidate list.
  • Step 305 for each pixel position of the current block, determine the first predicted value of the pixel position according to the first target motion information of the current block, and determine the second predicted value of the pixel position according to the second target motion information of the current block; Based on the first predicted value, the target weight value, the second predicted value and the associated weight value, a weighted predicted value for the pixel position is determined.
  • the weighted prediction value of the pixel position may be: (the first predicted value of the pixel position * the target weight value of the pixel position + the second predicted value of the pixel position * the associated weight value of the pixel position) / the fixed preset value.
  • alternatively, the weighted prediction value of the pixel position may be: (the second predicted value of the pixel position * the target weight value of the pixel position + the first predicted value of the pixel position * the associated weight value of the pixel position) / the fixed preset value.
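Steps 303 to 305 combine at each pixel position into a single weighted blend. A minimal sketch, assuming the fixed preset value is 8 (matching the 0-to-8 weight examples above) and integer division; `weighted_pred` is a hypothetical helper name:

```python
def weighted_pred(p1, p2, target_weight, preset=8):
    """Blend the two inter prediction values at one pixel position.

    The associated weight is the difference between the preset value and
    the target weight, so the two weights always sum to `preset`.
    """
    assoc_weight = preset - target_weight
    return (p1 * target_weight + p2 * assoc_weight) // preset

# a pixel predicted as 100 by block 1 and 60 by block 2, target weight 6:
print(weighted_pred(100, 60, 6))  # (100*6 + 60*2) / 8 = 90
```

A target weight of 8 selects the first prediction outright, and a target weight of 0 selects the second, so the blend degenerates to plain inter prediction at positions far from the transition region.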
  • Step 306 Determine the weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
  • the weighted prediction value of the current block is composed of the weighted prediction values of all pixel positions of the current block.
  • Embodiment 2 The embodiment of this application proposes another encoding and decoding method, which can be applied to the encoding end, and the method includes:
  • step a1 when it is determined to start the weighted prediction for the current block, the encoder obtains the weighted prediction angle of the current block, the weighted prediction position of the current block, and the weight transformation rate of the current block.
  • the encoder determines whether to start weighted prediction for the current block, and if so, executes step a1 and subsequent steps, and if not, the processing method is not limited in this application.
  • if the current block satisfies the condition for starting the weighted prediction, it is determined to start the weighted prediction for the current block; if the current block does not meet the condition for starting the weighted prediction, it is determined that the weighted prediction is not started for the current block. For example, it is judged whether the feature information of the current block satisfies a specific condition: if yes, it is determined to start the weighted prediction for the current block; if not, it is determined not to start the weighted prediction for the current block.
  • the feature information includes but is not limited to one or any combination of the following: the frame type of the current frame where the current block is located, the size information of the current block, and the switch control information.
  • the switch control information may include, but is not limited to: sequence level (SPS, SH) switch control information, or picture level (PPS, PH) switch control information, or slice level (Slice, Tile, Patch) switch control information, or maximum coding unit level (LCU, CTU) switch control information, or block level (CU, PU, TU) switch control information.
  • the frame type of the current frame where the current block is located satisfying certain conditions may include, but is not limited to: if the frame type of the current frame where the current block is located is a B frame, it is determined that the frame type satisfies a specific condition; or, if the frame type of the current frame where the current block is located is an I frame, it is determined that the frame type satisfies a specific condition.
  • the size information satisfying certain conditions includes but is not limited to: if the width is greater than or equal to the first value and the height is greater than or equal to the second value, it is determined that the size information of the current block satisfies a specific condition. Or, if the width is greater than or equal to the third value, the height is greater than or equal to the fourth value, the width is less than or equal to the fifth value, and the height is less than or equal to the sixth value, it is determined that the size information of the current block satisfies a specific condition. Or, if the product of the width and the height is greater than or equal to the seventh value, it is determined that the size information of the current block satisfies a specific condition.
  • the above values can be configured according to experience, such as 8, 16, 32, 64, 128 and so on. For example, the first value is 8, the second value is 8, the third value is 8, the fourth value is 8, the fifth value is 64, the sixth value is 64, and the seventh value is 64. To sum up: if the width is greater than or equal to 8 and the height is greater than or equal to 8, it is determined that the size information of the current block satisfies a specific condition; or, if the width is greater than or equal to 8, the height is greater than or equal to 8, the width is less than or equal to 64, and the height is less than or equal to 64, it is determined that the size information of the current block satisfies a specific condition; or, if the product of the width and the height is greater than or equal to 64, it is determined that the size information of the current block satisfies a specific condition.
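Using the example values above (8 for the first through fourth values, 64 for the fifth through seventh), the three alternative size conditions can be sketched as follows; `size_conditions` is a hypothetical helper, and which alternative applies depends on the embodiment:

```python
def size_conditions(width, height):
    """Evaluate the three alternative size conditions from this
    embodiment, with the example thresholds 8 and 64 given above.
    Returns one boolean per alternative."""
    cond1 = width >= 8 and height >= 8                    # first/second values
    cond2 = 8 <= width <= 64 and 8 <= height <= 64        # third..sixth values
    cond3 = width * height >= 64                          # seventh value
    return cond1, cond2, cond3

print(size_conditions(16, 32))   # (True, True, True)
print(size_conditions(128, 8))   # (True, False, True): width exceeds 64
```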
  • the size information of the current block satisfying certain conditions may include but is not limited to: the width is not less than a and not greater than b, and the height is not less than a and not greater than b. a may be less than or equal to 16, and b may be greater than or equal to 16. For example, a equals 8, and b equals 64 or 32.
  • the switch control information satisfying a specific condition may include, but is not limited to: if the switch control information allows the current block to enable weighted prediction, it is determined that the switch control information satisfies the specific condition.
  • if the feature information is the frame type of the current frame where the current block is located and the size information of the current block, then when the frame type satisfies a specific condition and the size information satisfies a specific condition, it can be determined that the feature information of the current block satisfies the specific condition.
  • if the feature information is the frame type of the current frame where the current block is located and the switch control information, then when the frame type satisfies a specific condition and the switch control information satisfies a specific condition, it can be determined that the feature information of the current block satisfies the specific condition.
  • if the feature information is the size information of the current block and the switch control information, then when the size information satisfies a specific condition and the switch control information satisfies a specific condition, it can be determined that the feature information of the current block satisfies the specific condition.
  • if the feature information is the frame type of the current frame where the current block is located, the size information of the current block, and the switch control information, then when the frame type satisfies a specific condition, the size information satisfies a specific condition, and the switch control information satisfies a specific condition, it can be determined that the feature information of the current block satisfies the specific condition.
  • the encoder may obtain the weight prediction angle of the current block, the weight prediction position of the current block, and the weight transformation rate of the current block.
  • the weighted prediction angle represents the angular direction pointed to by a pixel position in the current block. For example, based on one weighted prediction angle, the angular direction pointed to by pixel positions such as pixel position 1, pixel position 2 and pixel position 3 is shown; based on another weighted prediction angle, the angular direction pointed to by pixel positions such as pixel position 2, pixel position 3 and pixel position 4 is shown. In each case, the angular direction points to a certain peripheral position outside the current block.
  • the weighted prediction position (which may also be referred to as a distance parameter) is used to configure the reference weight value of the outer peripheral position of the current block.
  • the range of peripheral positions outside the current block (that is, the number of peripheral positions outside the current block) is determined according to parameters such as the weight prediction angle of the current block and the size of the current block, as shown in FIG. 4A or FIG. 4B .
  • the weight prediction position is used to indicate which peripheral position outside the current block is used as the starting position of the weight transformation of the current block.
  • for example, after dividing all surrounding positions into 8 equal parts, 7 weight prediction positions can be obtained.
  • when the weight prediction position is 0, the peripheral position a0 (that is, the peripheral position pointed to by dotted line 0; in practical applications there is no dotted line 0, and dotted lines 0 to 6 are only examples given for the convenience of understanding, used to divide all peripheral positions into 8 equal parts) is used as the starting position of the weight transformation of the peripheral positions outside the current block.
  • when the weight prediction position is 6, the peripheral position a6 is used as the starting position of the weight transformation of the peripheral positions outside the current block.
  • for different weight prediction angles, the value of N can be different. For example, for the weight prediction angle A, the value of N is 6, which means that the range of surrounding positions determined based on the weight prediction angle A is divided into 6 equal parts; for the weight prediction angle B, the value of N is 8, which means that the range of surrounding positions determined based on the weight prediction angle B is divided into 8 equal parts.
  • for different weight prediction angles, the value of N can also be the same, while the number of weight prediction positions can be different. For example, for both the weight prediction angle A and the weight prediction angle B the value of N is 8, which means that the range of surrounding positions determined based on each angle is divided into 8 equal parts; however, a total of 5 positions a1 to a5 are selected as weight prediction positions for the weight prediction angle A, while a total of 7 positions a0 to a6 are selected as weight prediction positions for the weight prediction angle B.
  • after dividing the range into 8 equal parts, 7 weight prediction positions can be obtained. The encoder can obtain a weight prediction position from the 7 weight prediction positions, or can select a part of the 7 weight prediction positions (such as 5 weight prediction positions) and obtain a weight prediction position from those 5 weight prediction positions.
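As a rough illustration of the 8-equal-parts example, the candidate starting positions (the positions pointed to by dotted lines 0 to 6) can be taken as the internal division points of the range of peripheral positions. The sketch below is an assumption for intuition only; the actual starting position of the weight transformation is derived as described elsewhere in this application:

```python
def candidate_start_positions(num_positions, n=8):
    """Return the n-1 internal division points obtained by dividing the
    range of peripheral positions into n equal parts (dotted lines 0..6
    in the example for n = 8)."""
    return [num_positions * (k + 1) // n for k in range(n - 1)]

# a range of 32 peripheral positions yields 7 candidate starting positions
print(candidate_start_positions(32))  # [4, 8, 12, 16, 20, 24, 28]
```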
  • the weight transformation rate represents the transformation rate of the reference weight value at the peripheral position outside the current block, and is used to indicate the change speed of the reference weight value.
  • the weight transformation rate may be any number other than 0.
  • the weight transformation rate may be -4, -2, -1, 1, 2, 4, 0.5, 0.75, 1.5, etc.
  • when the absolute value of the weight transformation rate is 1, that is, the weight transformation rate is -1 or 1, the change speed of the reference weight value is 1, and the reference weight value from 0 to 8 needs to go through the values 0, 1, 2, 3, 4, 5, 6, 7, 8; from 8 to 0 it needs to go through the values 8, 7, 6, 5, 4, 3, 2, 1, 0.
  • when the absolute value of the weight transformation rate is 2, that is, the weight transformation rate is -2 or 2, the change speed of the reference weight value is 2, and the reference weight value from 0 to 8 needs to go through the values 0, 2, 4, 6, 8; from 8 to 0 it needs to go through the values 8, 6, 4, 2, 0.
  • when the absolute value of the weight transformation rate is 4, that is, the weight transformation rate is -4 or 4, the change speed of the reference weight value is 4, and the reference weight value from 0 to 8 needs to go through the values 0, 4, 8; from 8 to 0 it needs to go through the values 8, 4, 0.
  • when the absolute value of the weight transformation rate is 0.5, that is, the weight transformation rate is -0.5 or 0.5, the change speed of the reference weight value is 0.5, and the reference weight value from 0 to 8 needs to go through the values 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8; from 8 to 0 it needs to go through the values 8, 8, 7, 7, 6, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 0, 0.
  • the above example is from 0 to 8, and 0 and 8 can be replaced with any number.
  • step a2 the encoder configures reference weight values for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation (the weight transformation rate and the starting position of the weight transformation can be called weight configuration parameters).
  • the starting position of the weight transformation may be determined by at least one of the following parameters: the weight prediction angle of the current block, the weight prediction position of the current block, and the size of the current block. That is, the starting position of the weight transformation of the current block is determined based on at least one of the weight prediction angle of the current block, the weight prediction position of the current block, and the size of the current block. Then, reference weight values are configured for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation.
  • Step a3 for each pixel position of the current block, the encoder determines the surrounding matching position pointed to by the pixel position from the surrounding positions outside the current block according to the weight prediction angle of the current block.
  • the peripheral position outside the current block pointed to by the pixel position may be referred to as the peripheral matching position of the pixel position.
  • since the weighted prediction angle represents the angular direction pointed to by a pixel position inside the current block, for each pixel position the angular direction pointed to by the pixel position is determined based on the weighted prediction angle, and then the peripheral matching position pointed to by the pixel position is determined from the peripheral positions outside the current block according to the angular direction.
  • the peripheral position outside the current block may include: the peripheral position of the upper row outside the current block, such as the peripheral position of the n1th row above the current block, where n1 may be 1, 2, 3, etc.; or the peripheral position of the left column outside the current block, such as the peripheral position of the n2th column on the left outside the current block, where n2 may be 1, 2, 3, etc.; or the peripheral position of the lower row outside the current block, such as the peripheral position of the n3th row below the current block, where n3 may be 1, 2, 3, etc.; or the peripheral position of the right column outside the current block, such as the peripheral position of the n4th column on the right outside the current block, where n4 may be 1, 2, 3, etc. These values are not limited.
  • the inner position of the current block can also be used, that is, the inner position of the current block is used to replace the outer surrounding position of the current block.
  • for example, the internal position of the n5th row inside the current block may be used, where n5 can be 1, 2, 3, etc.; or the internal position of the n6th column inside the current block may be used, where n6 can be 1, 2, 3, etc.
  • the length of the row or column where the internal position is located may exceed the scope of the current block; for example, the position of the n7th row may exceed the scope of the current block, that is, it can extend to both sides.
  • the inner position of the current block and the outer peripheral position of the current block can also be used at the same time.
  • for example, the current block can be divided into an upper small block and a lower small block by the row where the internal position is located, or divided into a left small block and a right small block by the column where the internal position is located. At this time, the two small blocks have the same weight prediction angle and the same reference weight configuration.
  • the peripheral position outside the current block may be located between pixel positions, that is, at a sub-pixel position. In this case, the peripheral position of the current block cannot simply be described as the xth row, but is the sub-pixel position row located between the xth row and the yth row.
  • for convenience of description, the peripheral position of the first row on the upper side outside the current block, or the peripheral position of the first column on the left side outside the current block, is taken as an example in the following; the implementation of other peripheral positions is similar.
  • a certain range may be pre-specified as the range of peripheral positions outside the current block; or, the range of peripheral positions outside the current block may be determined according to the weight prediction angle. For example, the peripheral position pointed to by each pixel position inside the current block is determined according to the weight prediction angle, and the boundary of the peripheral positions pointed to by all pixel positions gives the range of peripheral positions outside the current block. The range of the peripheral positions is not limited.
  • the outer peripheral position of the current block may include an integer pixel position; or, the outer peripheral position of the current block may include a non-integer pixel position, and the non-integer pixel position may be a sub-pixel position, such as a 1/2 sub-pixel position, a 1/4 sub-pixel position, 3/4 sub-pixel positions, etc., which are not limited; alternatively, the outer peripheral positions of the current block may include integer pixel positions and sub-pixel positions.
  • two peripheral positions outside the current block may correspond to one integer pixel position; or, four peripheral positions outside the current block may correspond to one integer pixel position; or, one peripheral position outside the current block may correspond to one integer pixel position; or, one peripheral position outside the current block may correspond to two integer pixel positions.
  • the above are just a few examples, which are not limited, and the relationship between the peripheral position and the integer pixel position can be arbitrarily configured.
  • one peripheral position corresponds to an integer pixel position.
  • two peripheral positions correspond to one integer pixel position. For other cases, details are not repeated in this embodiment.
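For intuition on step a3, the sketch below projects a pixel position onto the peripheral row above the current block along the angular direction. The power-of-two slope representation and the shift-based projection are assumptions (such representations are common in codec angle tables), not the application's normative derivation:

```python
def peripheral_matching_position(x, y, angle_slope_log2):
    """Project pixel (x, y) inside the block onto the peripheral row
    above the block along an angular direction whose horizontal offset
    grows with the row index (slope expressed as a power of two)."""
    # moving up (y + 1) rows shifts the column by (y + 1) >> angle_slope_log2
    return x + ((y + 1) >> angle_slope_log2)

# pixel (3, 7) with a slope of 1/2 (log2 = 1) points at peripheral column 7
print(peripheral_matching_position(3, 7, 1))  # 3 + (8 >> 1) = 7
```

Once the matching position is known, its reference weight (configured in step a2) supplies the target weight value for the pixel, as described in step a4 below.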
  • Step a4 the encoding end determines the target weight value of the pixel position according to the reference weight value associated with the surrounding matching position.
  • the encoding end determines the reference weight value associated with the surrounding matching position, and determines the target weight value of the pixel position according to the reference weight value associated with the surrounding matching position.
  • the reference weight value associated with the surrounding matching position is determined as the target weight value of the pixel position.
  • the encoding end determines the target weight value of the pixel position according to the reference weight value associated with the surrounding matching position, which may include the following cases. Case 1: if the surrounding matching position is an integer pixel position and a reference weight value has been configured for the integer pixel position, the target weight value of the pixel position is determined according to the reference weight value of the integer pixel position.
  • Case 2: if the surrounding matching position is an integer pixel position and no reference weight value has been configured for the integer pixel position, the target weight value of the pixel position can be determined according to the reference weight values of the adjacent positions of the integer pixel position.
  • for example, the reference weight value of the adjacent position can be rounded up to obtain the target weight value of the pixel position; or, the reference weight value of the adjacent position can be rounded down to obtain the target weight value of the pixel position; or, the target weight value of the pixel position is determined according to interpolation of the reference weight values of the adjacent positions of the integer pixel position, which is not limited.
  • Case 3: if the peripheral matching position is a sub-pixel position and a reference weight value has been configured for the sub-pixel position, the target weight value of the pixel position can be determined according to the reference weight value of the sub-pixel position.
  • Case 4: if the peripheral matching position is a sub-pixel position and no reference weight value has been configured for the sub-pixel position, the target weight value of the pixel position can be determined according to the reference weight values of the adjacent positions of the sub-pixel position.
  • for example, the reference weight value of the adjacent position can be rounded up to obtain the target weight value of the pixel position; or, the reference weight value of the adjacent position can be rounded down to obtain the target weight value of the pixel position; or, the target weight value of the pixel position is determined according to interpolation of the reference weight values of the adjacent positions of the sub-pixel position, which is not limited.
  • Case 5: the target weight value of the pixel position is determined according to multiple reference weight values associated with the surrounding matching position. For example, when the surrounding matching position is an integer pixel position or a sub-pixel position, the reference weight values of multiple adjacent positions of the surrounding matching position are obtained. If a reference weight value has been configured for the surrounding matching position, a weighting operation is performed on the reference weight value of the surrounding matching position and the reference weight values of the multiple adjacent positions to obtain the target weight value of the pixel position; if no reference weight value has been configured for the surrounding matching position, a weighting operation is performed on the reference weight values of the multiple adjacent positions to obtain the target weight value of the pixel position.
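Cases 1 to 4 above reduce to: use the reference weight configured at the matching position when available, otherwise fall back to its neighbours. A hypothetical sketch using linear interpolation of the two nearest configured neighbours for sub-pixel matching positions:

```python
def target_weight(ref_weights, matching_pos):
    """Look up the target weight for a (possibly fractional) matching
    position in a list of reference weights configured at integer
    positions, interpolating between the two nearest neighbours when the
    matching position is a sub-pixel position."""
    lo = int(matching_pos)
    if lo == matching_pos:          # integer position with a configured weight
        return ref_weights[lo]
    hi = min(lo + 1, len(ref_weights) - 1)
    frac = matching_pos - lo        # linear interpolation of the neighbours
    return round(ref_weights[lo] * (1 - frac) + ref_weights[hi] * frac)

ref = [0, 0, 2, 4, 6, 8, 8]
print(target_weight(ref, 3))    # 4
print(target_weight(ref, 2.5))  # midway between 2 and 4 -> 3
```

Rounding up or down instead of interpolating, as the description permits, amounts to replacing the final `round` with a ceiling or floor of the neighbour's weight.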
  • Step a5 the encoding end determines the associated weight value of the pixel position according to the target weight value of the pixel position.
  • for example, the sum of the target weight value of the pixel position and the associated weight value of the pixel position may be a fixed preset value, that is, the associated weight value may be the difference between the preset value and the target weight value. Based on this, assuming that the preset value is 8 and the target weight value of the pixel position is 2, the associated weight value of the pixel position is 6.
  • Step a6: the encoding end obtains a motion information candidate list, where the motion information candidate list includes at least one candidate motion information, and obtains the first target motion information and the second target motion information of the current block based on the motion information candidate list.
  • Step a7: for each pixel position of the current block, the encoding end determines the first predicted value of the pixel position according to the first target motion information of the current block, and determines the second predicted value of the pixel position according to the second target motion information of the current block.
  • Step a8: the encoding end determines the weighted predicted value of the pixel position according to the first predicted value of the pixel position, the target weight value of the pixel position, the second predicted value of the pixel position, and the associated weight value of the pixel position.
  • the weighted predicted value of the pixel position may be: (the first predicted value of the pixel position * the target weight value of the pixel position + the second predicted value of the pixel position * the associated weight value of the pixel position) / the fixed preset value.
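The per-pixel blend of steps a5 and a8 can be sketched as follows, assuming the fixed preset value is 8 (consistent with the example where a target weight of 2 yields an associated weight of 6); the function name is illustrative:

```python
# Minimal sketch of steps a5 and a8 for one pixel position, assuming the
# fixed preset value is 8, so target_weight + associated_weight == 8.

PRESET = 8  # assumed fixed preset value

def weighted_pred(pred1, pred2, target_weight):
    """Blend the two predicted values of one pixel position."""
    associated_weight = PRESET - target_weight            # step a5
    # step a8: (p1 * target + p2 * associated) / preset, integer arithmetic
    return (pred1 * target_weight + pred2 * associated_weight) // PRESET

# target weight 2 -> associated weight 6; the blend leans toward pred2
print(weighted_pred(100, 40, 2))  # (100*2 + 40*6) / 8 = 55
```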
  • Step a9: the encoding end determines the weighted prediction value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • Embodiment 3: An embodiment of this application proposes another encoding and decoding method, which can be applied to the decoding end, and the method includes:
  • Step b1: when it is determined to start the weighted prediction for the current block, the decoding end obtains the weight prediction angle of the current block, the weight prediction position of the current block, and the weight transformation rate of the current block.
  • the decoding end determines whether to start the weighted prediction for the current block, and if so, executes step b1 and subsequent steps, and if not, the processing method is not limited in this application.
  • the encoder determines whether the feature information of the current block satisfies a specific condition, and if so, determines to start weighted prediction for the current block.
  • the decoding end also determines whether the feature information of the current block satisfies a specific condition, and if so, determines to start the weighted prediction for the current block; if not, determines not to start the weighted prediction for the current block. How the decoding end determines whether to start the weighted prediction of the current block based on the feature information is similar to the determination method of the encoding end, and will not be repeated here.
  • the encoder determines whether the current block supports weighted prediction according to the feature information of the current block.
  • other strategies may also be used to determine whether to start the weighted prediction for the current block, such as using the rate-distortion principle to determine whether weighted prediction is enabled for the current block.
  • the encoded bit stream may include a syntax for enabling the weighted prediction, the syntax indicating whether the weighted prediction is enabled for the current block.
  • the decoding end determines whether the current block supports weighted prediction according to the feature information of the current block. When the current block supports weighted prediction, the decoding end may further parse out the syntax of whether to enable the weighted prediction from the encoded bit stream, and determine whether to enable the weighted prediction for the current block according to the syntax.
  • the decoding end may also obtain the weight prediction angle of the current block, the weight prediction position of the current block, and the weight transformation rate of the current block.
  • For the relevant description of the weight prediction angle, the weight prediction position, and the weight transformation rate, reference may be made to step a1, which will not be repeated here.
  • Step b2: the decoding end configures reference weight values for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation (the weight transformation rate and the starting position of the weight transformation can be called weight configuration parameters).
  • the decoding end may determine the starting position of the weight transformation of the current block based on at least one of the weight prediction angle of the current block, the weight prediction position of the current block, and the size of the current block. Then, the decoding end configures reference weight values for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation.
  • Step b3: for each pixel position of the current block, the decoding end determines the surrounding matching position pointed to by the pixel position from the surrounding positions outside the current block according to the weight prediction angle of the current block.
  • Step b4: the decoding end determines the target weight value of the pixel position according to the reference weight value associated with the surrounding matching position.
  • Step b5: the decoding end determines the associated weight value of the pixel position according to the target weight value of the pixel position.
  • Step b6: the decoding end obtains a motion information candidate list, where the motion information candidate list includes at least one candidate motion information, and obtains the first target motion information and the second target motion information of the current block based on the motion information candidate list.
  • Step b7: for each pixel position of the current block, the decoding end determines the first predicted value of the pixel position according to the first target motion information of the current block, and determines the second predicted value of the pixel position according to the second target motion information of the current block.
  • Step b8: the decoding end determines the weighted predicted value of the pixel position according to the first predicted value of the pixel position, the target weight value of the pixel position, the second predicted value of the pixel position, and the associated weight value of the pixel position.
  • Step b9: the decoding end determines the weighted prediction value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • For the implementation process of step b2 to step b8, reference may be made to step a2 to step a8; the difference is that step b2 to step b8 are the processing flow of the decoding end rather than the encoding end, which will not be repeated here.
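Steps b2 to b9 can be sketched end to end for the simplest conceivable configuration: a horizontal weight prediction angle where each pixel row projects straight onto one peripheral column position. Everything here is an illustrative assumption (the 4x4 block, the weight transformation rate of 4, the start position of 1, and all names), not the normative procedure:

```python
# Highly simplified per-pixel flow of steps b2-b9 for a horizontal weight
# prediction angle: row y matches the y-th peripheral position, so the target
# weight of every pixel in row y is the reference weight of that position.

PRESET = 8  # assumed fixed preset value (target + associated weight)

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def predict_block(pred1, pred2, weight_rate, first_pos):
    h, w = len(pred1), len(pred1[0])
    # step b2: configure reference weights for the peripheral column
    ref = [clip3(0, 8, weight_rate * (i - first_pos)) for i in range(h)]
    out = []
    for y in range(h):
        tw = ref[y]              # steps b3/b4: matching position -> target weight
        aw = PRESET - tw         # step b5: associated weight
        # steps b7/b8: blend the two predicted values of each pixel position
        out.append([(pred1[y][x] * tw + pred2[y][x] * aw) // PRESET
                    for x in range(w)])
    return out                   # step b9: weighted prediction of the block

p1 = [[80] * 4 for _ in range(4)]   # prediction from first target motion info
p2 = [[16] * 4 for _ in range(4)]   # prediction from second target motion info
print(predict_block(p1, p2, weight_rate=4, first_pos=1))
```

Rows near the top take the second prediction, rows near the bottom take the first, and the middle rows blend the two, which is the intended effect of the weight transformation.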
  • Embodiment 4: Referring to FIG. 5, which is a schematic flowchart of an encoding and decoding method. This method can be applied to a decoding end (also called a video decoder) or an encoding end (also called a video encoder), and the method can include:
  • Step 501: when it is determined to start the weighted prediction for the current block, obtain the weight prediction angle and the weight configuration parameters of the current block, where the weight configuration parameters include the weight transformation rate and the starting position of the weight transformation.
  • the starting position of the weight transformation may be determined by at least one of the following parameters: the weight prediction angle of the current block, the weight prediction position of the current block, and the size of the current block.
  • Step 502: configure a reference weight value for each peripheral position outside the current block according to the weight configuration parameters of the current block.
  • Step 503: for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle of the current block; determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position.
  • For steps 501 to 503, reference may be made to steps 301 to 303, and details are not repeated here.
  • Step 504: obtain reference frame information, and obtain a motion vector candidate list corresponding to the reference frame information, where the motion vector candidate list includes at least one candidate motion vector, and the reference frame information includes first reference frame information and second reference frame information; obtain the first target motion vector and the second target motion vector of the current block based on the motion vector candidate list.
  • Step 505: for each pixel position of the current block, determine the first predicted value of the pixel position according to the first target motion information of the current block, and determine the second predicted value of the pixel position according to the second target motion information of the current block; based on the first predicted value, the target weight value, the second predicted value, and the associated weight value, determine the weighted predicted value of the pixel position.
  • the first target motion information of the current block may include the first target motion vector of the current block and the first reference frame information corresponding to the first target motion vector;
  • the second target motion information of the current block may include the second target motion vector of the current block and the second reference frame information corresponding to the second target motion vector.
  • the first reference frame information and the second reference frame information may be the same, or they may be different. If the first reference frame information is the same as the second reference frame information, the reference frame pointed to by the first target motion vector and the reference frame pointed to by the second target motion vector are the same frame; if the first reference frame information and the second reference frame information are different, the reference frame pointed to by the first target motion vector and the reference frame pointed to by the second target motion vector are different frames.
  • Step 506: determine the weighted prediction value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • In step a6 of this embodiment, the encoding end obtains a motion vector candidate list, obtains the first target motion vector and the second target motion vector of the current block based on the motion vector candidate list, obtains the first target motion information based on the first target motion vector and the first reference frame information corresponding to the first target motion vector, and obtains the second target motion information based on the second target motion vector and the second reference frame information corresponding to the second target motion vector, which will not be repeated here.
  • In step b6 of this embodiment, the decoding end obtains a motion vector candidate list, obtains the first target motion vector and the second target motion vector of the current block based on the motion vector candidate list, obtains the first target motion information based on the first target motion vector and the first reference frame information corresponding to the first target motion vector, and obtains the second target motion information based on the second target motion vector and the second reference frame information corresponding to the second target motion vector, which will not be repeated here.
  • Embodiment 5: In Embodiment 1 to Embodiment 4, weighted processing needs to be performed based on the weight prediction angle, and this weighted processing method can be recorded as the inter-frame Angular Weighted Prediction (AWP) mode; that is, when the current block supports the AWP mode, Embodiment 1 to Embodiment 4 are used to predict the current block to obtain the predicted value of the current block.
  • Embodiment 1 to Embodiment 4 relate to the weighted prediction angle.
  • the weighted prediction angle can be any angle, such as any angle within 180 degrees, or any angle within 360 degrees. There is no restriction on the weighted prediction angle, such as 10 degrees, 20 degrees. degrees, 30 degrees, etc.
  • the weighted prediction angle may be a horizontal angle; or, the weighted prediction angle may be a vertical angle; or, the absolute value of the slope of the weighted prediction angle (the absolute value of the slope of the weighted prediction angle is also It is the tan value of the weight prediction angle) can be 2 to the nth power, n is an integer, such as a positive integer, 0, a negative integer, etc.
  • the absolute value of the slope of the weight prediction angle can be 1 (that is, 2 to the 0th power), or 2 (that is, 2 to the 1st power), or 1/2 (that is, 2 to the -1th power), Either 4 (ie 2 to the power of 2), or 1/4 (ie 2 to the -2 power), or 8 (ie 2 to the third power), or 1/8 (ie 2 to the -3 power) power) etc.
  • 8 weighted prediction angles are shown, and the absolute value of the slope of these weighted prediction angles is 2 to the nth power.
  • a shift operation may be performed for the weight prediction angle.
  • since the absolute value of the slope of the weight prediction angle is 2 to the power of n, the division operation can be avoided, thereby facilitating the shift implementation.
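To illustrate why restricting the slope to powers of two matters, the following sketch (the function name and its usage are assumptions, not the normative derivation) replaces the multiplication or division by the tan value with a shift:

```python
# Sketch of the shift implementation: the offset of the surrounding matching
# position is proportional to x * tan(angle); when tan(angle) = 2**n this is
# a left shift (n >= 0) or a right shift (n < 0) instead of a division.

def matching_offset(x, n):
    """Offset of the peripheral matching position for pixel coordinate x when
    the absolute slope of the weight prediction angle is 2**n."""
    if n >= 0:
        return x << n        # tan = 2**n >= 1: multiply by power of two
    return x >> (-n)         # tan = 1/2**(-n) < 1: integer divide by shift

print(matching_offset(6, 1))   # slope 2:   6 * 2  = 12
print(matching_offset(6, -1))  # slope 1/2: 6 // 2 = 3
```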
  • the number of weighted prediction angles supported by different block sizes may be the same or different.
  • block size A supports 8 weight prediction angles
  • block size B and block size C support 6 weight prediction angles, and so on.
  • Embodiment 6: In the above Embodiment 1 to Embodiment 4, the encoding end/decoding end needs to configure reference weight values for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation of the current block. In a possible implementation manner, the following method may be adopted: for each peripheral position outside the current block, configure the reference weight value of the peripheral position according to the coordinate value of the peripheral position, the coordinate value of the starting position of the weight transformation, and the weight transformation rate.
  • the coordinate value of the peripheral position may be the abscissa value, and the coordinate value of the starting position of the weight transformation may be the abscissa value.
  • the coordinate value of the peripheral position may be an ordinate value, and the coordinate value of the starting position of the weight transformation may be an ordinate value.
  • the pixel position of the upper left corner of the current block (such as the first pixel position of the upper left corner) can be used as the coordinate origin; the coordinate value of the peripheral position of the current block (such as the abscissa value or the ordinate value) and the coordinate value of the starting position of the weight transformation (such as the abscissa value or the ordinate value) are coordinate values relative to this coordinate origin.
  • other pixel positions of the current block may also be used as the coordinate origin, and the implementation manner is similar to the implementation manner in which the pixel position of the upper left corner is used as the coordinate origin.
  • the first value is the minimum value of the reference weight value, such as 0.
  • the reference weight value associated with the surrounding position can also be directly determined according to the magnitude relationship between the coordinate value of the surrounding position and the coordinate value of the starting position of the weight transformation. For example, if the coordinate value of the surrounding position is smaller than the coordinate value of the starting position of the weight transformation, the reference weight value associated with the surrounding position is determined to be the first value; if the coordinate value of the surrounding position is not less than the coordinate value of the starting position of the weight transformation, the reference weight value associated with the surrounding position is determined to be the second value.
  • Alternatively, if the coordinate value of the surrounding position is smaller than the coordinate value of the starting position of the weight transformation, the reference weight value associated with the surrounding position is determined to be the second value; if the coordinate value of the surrounding position is not less than the coordinate value of the starting position of the weight transformation, the reference weight value associated with the surrounding position is determined to be the first value.
  • both the first numerical value and the second numerical value can be configured according to experience, and the first numerical value is smaller than the second numerical value, and neither the first numerical value nor the second numerical value is limited.
  • the first value is the minimum value of the pre-agreed reference weight value, such as 0, and the second value is the maximum value of the pre-agreed reference weight value, such as 8.
  • 0 and 8 are just examples.
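The direct comparison rule above amounts to a step function over the peripheral positions. A minimal sketch, where the function name is illustrative and the 0/8 defaults are taken from the example values:

```python
# Sketch of the coordinate-comparison rule: positions before the starting
# position of the weight transformation get the first value, the rest get
# the second value. First/second values default to the example 0 and 8 but
# are configurable, as stated above.

def configure_step_weights(num_positions, start, first_value=0, second_value=8):
    """Reference weights for peripheral positions 0..num_positions-1."""
    return [first_value if i < start else second_value
            for i in range(num_positions)]

print(configure_step_weights(10, 4))  # [0, 0, 0, 0, 8, 8, 8, 8, 8, 8]
```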
  • For the weight prediction positions: after dividing all the surrounding positions into 8 equal parts, 7 weight prediction positions can be obtained.
  • When the weight prediction position is 0, it indicates the surrounding position a0, and the coordinate value of the starting position of the weight transformation is the coordinate value of the surrounding position a0.
  • When the weight prediction position is 1, it indicates the surrounding position a1, and the coordinate value of the starting position of the weight transformation is the coordinate value of the surrounding position a1, and so on; the determination method of the coordinate value of the starting position of the weight transformation is not repeated here.
  • Embodiment 7 In Embodiment 1 to Embodiment 4, the encoder/decoder needs to configure reference weight values for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation of the current block.
  • the following method may be adopted: obtain the weight prediction angle of the current block, the weight transformation rate of the current block, and the weight prediction position of the current block, and determine the starting position of the weight transformation of the current block based on the weight prediction position of the current block.
  • the weight configuration parameters are determined based on the starting position of the weight transformation and the weight transformation rate, that is, the weight configuration parameters include the starting position of the weight transformation and the weight transformation rate.
  • then, the reference weight values of the peripheral positions outside the current block are determined according to the weight configuration parameters.
  • Step c1: Obtain an effective number of reference weight values.
  • the number of peripheral positions outside the current block is a valid number.
  • a valid number of reference weight values needs to be obtained, and the valid number may be determined based on the size of the current block and/or the weight prediction angle of the current block.
  • for example, the effective number may be related to the log2 logarithm of the tan value of the weight prediction angle, such as 0 or 1.
  • an effective number of reference weight values it can be monotonically increasing, or monotonically decreasing.
  • a plurality of first values may be included first, and then a plurality of second values may be included, or a plurality of second values may be included first, and then a plurality of first values may be included. This will be described below with reference to several specific situations.
  • Case 1 For a valid number of reference weight values, it can be monotonically increasing, or monotonically decreasing.
  • a valid number of reference weight values may be [88...88765432100...00], ie monotonically decreasing.
  • the effective number of reference weight values may be [00...00123456788...88], that is, monotonically increasing.
  • the above is just an example, which is not limited.
  • the reference weight value may be configured according to the weight configuration parameter, and the weight configuration parameter may include the weight transformation rate and the starting position of the weight transformation.
  • the starting position of the weight transformation can be a value configured according to experience; or, the starting position of the weight transformation can be determined by the weight prediction position; or, the starting position of the weight transformation can be determined jointly by the weight prediction angle and the weight prediction position, which is not limited.
  • the multiple reference weights can be monotonically decreasing from 8 to 0; or monotonically increasing from 0 to 8.
  • the weight transformation rate and the starting position of the weight transformation may be obtained first, and then, according to the weight transformation rate and the starting position of the weight transformation, multiple reference weight values are determined.
  • for example, the reference weight value of the x-th position may be determined as Clip3(minimum value, maximum value, a*(x - s)), where a represents the weight transformation rate, and s represents the starting position of the weight transformation.
  • Clip3 is used to limit the reference weight between the minimum value and the maximum value.
  • the minimum value and the maximum value can be configured according to experience. For convenience of description, in the subsequent process, the minimum value of 0 and the maximum value of 8 are used for illustration.
  • a represents the weight conversion rate
  • a may be an integer other than 0, for example, a may be -4, -2, -1, 1, 2, 4, etc.
  • the value of a is not limited. If the absolute value of a is 1, the reference weight values from 0 to 8 need to pass through 0, 1, 2, 3, 4, 5, 6, 7, 8 in turn; or, the reference weight values from 8 to 0 need to pass through 8, 7, 6, 5, 4, 3, 2, 1, 0 in turn.
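The effect of the weight transformation rate a on the ramp can be sketched with the Clip3 form described above; the helper names are illustrative, and the start position s is set to 0 for simplicity. With |a| = 1 the ramp passes through every value from 0 to 8, while larger |a| makes the transition steeper and skips intermediate values:

```python
# Sketch of the reference weight ramp y = Clip3(0, 8, a * (x - s)) for
# several weight transformation rates a, with start position s = 0.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def ramp(a, s, length):
    """Reference weight values for positions x = 0..length-1."""
    return [clip3(0, 8, a * (x - s)) for x in range(length)]

print(ramp(1, 0, 10))  # every value 0..8 appears once
print(ramp(2, 0, 10))  # 0, 2, 4, 6, 8: intermediate odd values are skipped
print(ramp(4, 0, 10))  # 0, 4, 8: steeper still
```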
  • s = f(weight prediction position), that is, s is a function related to the weight prediction position.
  • N can be arbitrarily configured, such as 4, 6, 8, etc., while the weight prediction position is used to indicate which surrounding position outside the current block is used as the target surrounding area of the current block, and the surrounding position corresponding to this weight prediction position is the starting position of the weight transformation.
  • the range of the peripheral positions outside the current block can be determined according to the weight prediction angle, after the range of the peripheral positions outside the current block is determined, the effective number of the peripheral positions can be determined, and all the peripheral positions are divided into N equally, the weight prediction position It is used to indicate which surrounding position outside the current block is used as the target surrounding area of the current block, and the surrounding position corresponding to this weight prediction position is the starting position of the weight transformation.
  • the weight transformation rate a and the starting position s of the weight transformation are both known values
  • the reference weight value of the peripheral position outside the current block includes the reference weight value of the target area, the reference weight value of the first adjacent area of the target area, and the reference weight value of the second adjacent area of the target area.
  • the target area includes one reference weight value or at least two reference weight values.
  • a reference weight value is determined, and the reference weight value is used as the reference weight value of the target area.
  • at least two reference weight values are determined, and the at least two reference weight values are used as the reference weight values of the target area.
  • the at least two reference weight values of the target area are monotonically increasing or monotonically decreasing.
  • Monotonically increasing may be strictly monotonically increasing (that is, at least two reference weight values of the target area are strictly monotonically increasing); and monotonically decreasing may be strictly monotonically decreasing (that is, at least two reference weight values of the target area are strictly monotonically decreasing).
  • the reference weight value of the target area increases monotonically from 1 to 7, or the reference weight value of the target area decreases monotonically from 7 to 1.
  • the reference weight values of the first adjacent area may all be the first reference weight value, and the reference weight values of the second adjacent area may be monotonically increasing.
  • the reference weight values of the first adjacent area may all be 0, the target area includes a reference weight value of 1, and the reference weight value of the second adjacent area monotonically increases from 2 to 8.
  • the reference weight values of the first adjacent area may all be the first reference weight values, and the reference weight values of the second adjacent area may decrease monotonically.
  • the reference weight values of the first adjacent area may all be 8, the target area includes a reference weight value of 7, and the reference weight value of the second adjacent area decreases monotonically from 6 to 0.
  • the reference weight values of the first adjacent area are all second reference weight values
  • the reference weight values of the second adjacent area are all third reference weight values
  • the second reference weight value is different from the third reference weight value.
  • for example, the reference weight values of the first adjacent area are all 0, the target area includes at least two reference weight values that monotonically increase from 1 to 7, and the reference weight values of the second adjacent area are all 8.
  • in another possible implementation, the reference weight value of the first adjacent area is different from the reference weight value of the second adjacent area.
  • the reference weight value of the first adjacent area is monotonically increasing or decreasing, and the reference weight value of the second adjacent area is monotonically increasing or decreasing; for example, the reference weight value of the first adjacent area is monotonically increasing, and the reference weight value of the second adjacent area is also monotonically increasing; for another example, the reference weight value of the first adjacent area is monotonically decreasing, and the reference weight value of the second adjacent area is also monotonically decreasing.
  • for example, the reference weight value of the first adjacent area increases monotonically from 0 to 3, the target area includes a reference weight value of 4, and the reference weight value of the second adjacent area increases monotonically from 5 to 8.
  • Case 2 For the effective number of reference weight values, a plurality of first values may be included first, and then a plurality of second values may be included, or a plurality of second values may be included first, and then a plurality of first values may be included.
  • a valid number of reference weight values may be [88...8800...00] or [00...0088...88].
  • multiple reference weight values may be determined according to the starting position of the weight transformation. For example, the starting position of the weight transformation indicates the s-th reference weight value; therefore, all reference weight values before the s-th reference weight value (excluding the s-th reference weight value) are the first value (such as 8).
  • the s-th reference weight value and all reference weight values after it are the second value (such as 0).
  • alternatively, all the reference weight values before the s-th reference weight value are the second value, and the s-th reference weight value and all reference weight values after it are the first value.
  • Step c2: configure the reference weight values of the peripheral positions outside the current block according to the effective number of reference weight values.
  • the number of surrounding locations outside the current block is a valid number
  • the number of reference weight values is a valid number. Therefore, the valid number of reference weight values may be configured as reference weight values for surrounding locations outside the current block.
  • the first reference weight value can be configured as the reference weight value of the first peripheral position outside the current block, the second reference weight value can be configured as the reference weight value of the second peripheral position outside the current block, and so on.
  • since a reference weight value has been configured for each peripheral position outside the current block, after the peripheral matching position pointed to by the pixel position is determined from the peripheral positions outside the current block, the reference weight value associated with the peripheral matching position, that is, the target weight value of the pixel position, can be determined.
  • the size of the current block is M*N, where M is the width of the current block, and N is the height of the current block.
  • X is the log2 logarithm of the tan value of the weight prediction angle, such as 0 or 1.
  • Y is the index value of the weight prediction position, and a, b, c, and d are preset constant values.
  • ValidLength indicates the valid number
  • FirstPos indicates the starting position of the weight transformation
  • ReferenceWeights[i] indicates the reference weight value of the i-th surrounding position
  • Clip3 is used to limit the reference weight value between the minimum value of 0 and the maximum value of 8
  • i indicates the index of the peripheral position outside the current block, and a represents the absolute value of the weight transformation rate.
  • Application Scenario 1: determine the effective number (also referred to as the effective length of the reference weights, i.e., ValidLength) based on the size of the current block and the weight prediction angle of the current block, and obtain the starting position of the weight transformation (i.e., FirstPos).
  • the value range of i can be 0 to ValidLength-1; or 1 to ValidLength.
  • ReferenceWeights[i] = Clip3(0, 8, a*(i - FirstPos)).
  • Application Scenario 5 Referring to FIG. 7 , a schematic diagram of reference weight values of four weight transformation ratios is shown.
  • FirstPos can be 4, that is, the reference weight value of the first to fourth surrounding positions is 0, and the reference weight value of the fifth surrounding position is 1, The reference weight of the sixth surrounding location is 2, and so on.
  • FirstPos can be 6, that is, the reference weight value of the 1st to 6th surrounding positions is 0, and the reference weight value of the 7th surrounding position is 2, The reference weight of the 8th surrounding location is 4, and so on.
  • FirstPos may be 7, the reference weight value of the 1st to 7th surrounding positions is 0, the reference weight value of the 8th surrounding position is 4, and the reference weight value of the 8th surrounding position is 4. 9 to 17 surrounding locations have a reference weight of 8, and so on.
  • FirstPos can be 8, that is, the reference weight value of the 1st to 8th surrounding positions is 0, the reference weight value of the 9th surrounding position is 8, and the reference weight value of the 10th to 17th surrounding positions is also 8, and so on.
  • When the absolute value of the weight transformation rate is 1, FirstPos is 4; when the absolute value of the weight transformation rate is 2, FirstPos is 6 (i.e., the FirstPos for rate 1 plus 2); when the absolute value of the weight transformation rate is 4, FirstPos is 7 (i.e., the FirstPos for rate 1 plus 3). Based on this, the positions with the reference weight value of 4 can be aligned across the transformation rates.
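The reference weight configuration above can be sketched in a few lines; this is a minimal illustration of ReferenceWeights[i] = Clip3(0, 8, a*(i - FirstPos)) using the 1-based surrounding-position numbering of Application Scenario 5 (the helper names are ours, not from the patent):

```python
def clip3(lo, hi, v):
    # Limit v to the range [lo, hi], as the Clip3 operator does.
    return max(lo, min(hi, v))

def reference_weight(i, first_pos, a):
    # Reference weight of the i-th surrounding position (1-based here),
    # with minimum 0, maximum 8 and weight transformation rate a.
    return clip3(0, 8, a * (i - first_pos))

# FirstPos values for |a| = 1, 2, 4, 8 in Application Scenario 5.
first_pos_for_rate = {1: 4, 2: 6, 4: 7, 8: 8}

# Rates 1, 2 and 4 all reach the reference weight value 4 at the same
# surrounding position (position 8), which aligns the blending center.
for a in (1, 2, 4):
    assert reference_weight(8, first_pos_for_rate[a], a) == 4
```

With rate 8 the weight jumps directly from 0 to 8, so there is no position with value 4; the FirstPos of 8 still keeps the transition centered at the same place.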
  • When weight transformation rate switching is supported and enabled, one distribution can be selected from the examples of reference weight value distributions of the four weight transformation rates shown in FIG. 7.
  • In this way, the prominent jump of the image display in some image display scenes can be reduced.
  • the mixed image content includes part of the screen content, cartoons, images containing cartoons, etc., and the weight conversion rate can be switched for an area containing screen content to solve the problem of prominent jumps.
  • ValidLength is related to the weight prediction angle of the current block and the size of the current block.
  • some parameters can be fixed to achieve optimization purposes.
  • the weight prediction angle of the current block can be configured as a fixed parameter value
  • ValidLength is only related to the size of the current block.
  • FirstPos is related to the weight prediction angle of the current block, the size of the current block, and the weight prediction position of the current block.
  • certain parameters can be fixed for optimization purposes.
  • the weight prediction angle of the current block can be configured as a fixed parameter value
  • FirstPos is only related to the size of the current block and the weight prediction position of the current block.
  • the weight prediction position of the current block is configured as a fixed parameter value
  • FirstPos is only related to the size of the current block and the weight prediction angle of the current block.
  • both the weight prediction angle of the current block and the weight prediction position of the current block are configured as fixed parameter values, the two fixed parameter values may be the same or different, and FirstPos is only related to the size of the current block.
  • Embodiment 8 In Embodiment 1 to Embodiment 4, the encoder/decoder needs to configure reference weight values for peripheral positions outside the current block according to the weight transformation rate of the current block and the starting position of the weight transformation of the current block.
  • Denote M and N as the width and height of the current block, respectively.
  • The derivation method of the weight array of the angular weighted prediction (AWP) mode includes the following steps:
  • Step d1: the parameters stepIdx, angleIdx and subAngleIdx are obtained according to AwpIdx.
  • AwpIdx represents the joint index value of the weight prediction position and the weight prediction angle. Assuming that there are 7 weight prediction positions and 8 weight prediction angles, the value range of AwpIdx is 0-55. If the weight prediction position ranges from -3 to 3 (taking the fourth weight prediction position as the center, whose value is 0), and the index of the weight prediction angle is 0-7, then the weight prediction positions and weight prediction angles corresponding to the 56 index values of AwpIdx can be seen in Table 1.
  • stepIdx represents the weight prediction position (ie, the index value of the weight prediction position), and the range of the weight prediction position may be -3 to 3.
  • For the first weight prediction position, the weight prediction position value is -3; for the second weight prediction position, the value is -2; and so on, up to the seventh weight prediction position, whose value is 3.
  • angleIdx represents the log2 logarithm of the absolute value of the slope of the weighted prediction angle (eg, 0, or 1, or a larger constant), and subAngleIdx represents the angle partition where the weighted prediction angle is located.
  • Referring to FIG. 8A, 8 weight prediction angles are shown. The angleIdx of weight prediction angle 0 is the log2 logarithm of the absolute value of the slope of weight prediction angle 0, the angleIdx of weight prediction angle 1 is the log2 logarithm of the absolute value of the slope of weight prediction angle 1, and so on; the angleIdx of weight prediction angle 7 is the log2 logarithm of the absolute value of the slope of weight prediction angle 7.
  • Weight prediction angle 0 and weight prediction angle 1 are located in angle partition 0
  • weight prediction angle 2 and weight prediction angle 3 are located in angle partition 1
  • weight prediction angle 4 and weight prediction angle 5 are located in angle partition 2
  • weight prediction angle 6 and weight prediction angle 7 are located in angle partition 3.
  • stepIdx = (AwpIdx >> 3) - 3.
  • subAngleIdx = modAngNum >> 1.
  • the encoder can determine the value of AwpIdx based on the weight prediction angle and the weight prediction position, as shown in Table 1.
  • the encoded bitstream can carry the value of AwpIdx.
  • the decoder can obtain the value of AwpIdx and obtain stepIdx, angleIdx and subAngleIdx according to AwpIdx.
  • angleIdx and subAngleIdx can uniquely determine a weight prediction angle, as shown in Table 2.
  • other methods can also be used to determine the weight prediction angle, such as changing the partition number.
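The step-d1 decomposition above can be sketched as follows. The derivation of modAngNum as AwpIdx modulo 8 is our assumption, consistent with the 0-55 range (7 positions × 8 angles) and with the partition mapping above (angles 0-1 in partition 0, 2-3 in partition 1, etc.); angleIdx itself comes from the modAngNum-to-angle mapping of Table 2, which is not reproduced in this excerpt:

```python
def decompose_awp_idx(awp_idx):
    # AwpIdx in 0..55 jointly indexes 7 weight prediction positions
    # and 8 weight prediction angles.
    step_idx = (awp_idx >> 3) - 3        # weight prediction position, -3..3
    mod_ang_num = awp_idx % 8            # weight prediction angle index, 0..7 (assumed)
    sub_angle_idx = mod_ang_num >> 1     # angle partition, 0..3
    return step_idx, mod_ang_num, sub_angle_idx

def compose_awp_idx(step_idx, mod_ang_num):
    # Inverse mapping: rebuild AwpIdx from position and angle index,
    # as the encoder would before writing it to the bitstream.
    return ((step_idx + 3) << 3) + mod_ang_num
```

For example, AwpIdx = 55 decomposes into position 3, angle index 7, partition 3, and composing (3, 7) gives 55 back.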
  • Step d2: configure reference weight values for the peripheral positions outside the current block according to stepIdx, angleIdx and subAngleIdx.
  • ReferenceWeights[x] = Clip3(0, 8, x-FirstPos)
  • x may be an index of a peripheral position outside the current block, the value of x ranges from 0 to ValidLength_H-1, and a represents the weight transformation rate.
  • ValidLength_H can represent the number of peripheral positions outside the current block (that is, the effective number, which can also be called effective length).
  • Since the peripheral position outside the current block pointed to by the weight prediction angle can be the peripheral position in the left column outside the current block, the valid number can be recorded as ValidLength_H.
  • one bit is shifted to the left because the formula uses 1/2-pel precision.
  • the parts of >>1 operations involved may change due to different pixel precisions.
  • DeltaPos_H represents the position change parameter (that is, an intermediate parameter).
  • The peripheral position outside the current block pointed to by the weight prediction angle can be the peripheral position in the left column outside the current block; therefore, the position change parameter can be recorded as DeltaPos_H.
  • ReferenceWeights[x] = Clip3(0, 8, a*(x-FirstPos)); this formula takes the minimum reference weight value 0, the maximum reference weight value 8, and the weight transformation rate a as an example.
  • x may be an index of a peripheral position outside the current block, and the value of x ranges from 0 to ValidLength_H-1.
  • ValidLength_H and DeltaPos_H can refer to Case 1, and details are not repeated here.
  • ReferenceWeights[x] = Clip3(0, 8, a*(x-FirstPos)); this formula takes the minimum reference weight value 0, the maximum reference weight value 8, and the weight transformation rate a as an example.
  • x may be an index of a peripheral position outside the current block, and the value of x ranges from 0 to ValidLength_W-1.
  • ValidLength_W represents the number of peripheral positions outside the current block (that is, the effective number, which can also be called effective length).
  • Since the peripheral position outside the current block pointed to by the weight prediction angle can be the peripheral position in the row on the upper side outside the current block, the valid number is denoted as ValidLength_W.
  • DeltaPos_W represents the position change parameter (that is, an intermediate parameter).
  • the position change parameter can be denoted as DeltaPos_W.
  • x may be an index of a peripheral position outside the current block, and the value of x ranges from 0 to ValidLength_W-1.
  • ValidLength_W and DeltaPos_W can refer to Case 3, which will not be repeated here.
  • Which case should be used for processing can be determined according to subAngleIdx.
  • ValidLength_H and DeltaPos_H can be determined according to angleIdx and stepIdx
  • FirstPos can be determined according to ValidLength_H and DeltaPos_H, and then the reference weight value can be configured according to FirstPos.
  • ValidLength_W and DeltaPos_W may be determined according to angleIdx and stepIdx
  • FirstPos may be determined according to ValidLength_W and DeltaPos_W, and then the reference weight value may be configured according to FirstPos.
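The per-case configuration can be sketched as below. How ValidLength_H/ValidLength_W and FirstPos are derived from angleIdx, stepIdx and DeltaPos is not detailed in this excerpt, so they are taken as inputs here; which partitions use the left column versus the upper row is likewise an assumption of this sketch, and all names are illustrative:

```python
def clip3(lo, hi, v):
    # Limit v to the range [lo, hi].
    return max(lo, min(hi, v))

def configure_reference_weights(valid_length, first_pos, a):
    # ReferenceWeights[x] = Clip3(0, 8, a * (x - FirstPos)),
    # with x ranging over 0 .. ValidLength - 1.
    return [clip3(0, 8, a * (x - first_pos)) for x in range(valid_length)]

def reference_weights_for_block(sub_angle_idx, params_h, params_w, a):
    # params_h = (ValidLength_H, FirstPos) when the weight prediction angle
    # points to the left column outside the block; params_w likewise for
    # the upper row. The dispatch on subAngleIdx (partitions 2 and 3 using
    # the left column, 0 and 1 the upper row) is an assumption here.
    valid_length, first_pos = params_h if sub_angle_idx in (2, 3) else params_w
    return configure_reference_weights(valid_length, first_pos, a)
```

For instance, with ValidLength 8, FirstPos 3 and rate a = 2, the configured weights ramp from 0 up to 8 in steps of 2 starting right after FirstPos.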
  • The starting position of the reference weight value ReferenceWeights[x] changes. Referring to Fig. 8B, which shows an example of angle partition 2 and angle partition 3 at 1/2-pel precision, the starting position of ReferenceWeights[x] is shifted by (N << 1) >> angleIdx, which is the offset "-((N << 1) >> angleIdx)" in the formula.
  • The implementation of angle partition 0 and angle partition 1 is similar, except that the offset in the formula is "-((M << 1) >> angleIdx)", that is, the height N is replaced by the width M.
  • Step d3: obtain the luminance weight value of the pixel position according to angleIdx and the reference weight value ReferenceWeights[x].
  • Step d2 and step d3 can be combined into one step; that is, step d2 (configuring reference weight values for the peripheral positions outside the current block according to stepIdx, angleIdx and subAngleIdx) and step d3 (obtaining the luminance weight value according to angleIdx and the reference weight value ReferenceWeights[x]) are combined as: obtaining the luminance weight value of the pixel position according to stepIdx, angleIdx and subAngleIdx, that is, determining it according to the coordinate values of the surrounding positions and the coordinate value of the starting position of the weight transformation.
  • Step d4: the chrominance weight value of the pixel position is obtained according to the luminance weight value of the pixel position, and the luminance weight value and the chrominance weight value of the pixel position can constitute the target weight value of the pixel position.
  • the format of the chrominance resolution is 4:2:0
  • the format of the chrominance resolution is 4:4:4
  • the value range of x is 0 ⁇ M/2-1; the value range of y is 0 ⁇ N/2-1.
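Step d4 can be sketched as follows, assuming (as is common for 4:2:0) that each chrominance position takes the luminance weight of the top-left sample of its 2x2 luma block; this subsampling rule is our assumption, while for 4:4:4 the chrominance weights simply equal the luminance weights:

```python
def chroma_weights_from_luma(luma_w, chroma_format):
    # luma_w is an N x M array of luminance weight values, indexed [y][x].
    m = len(luma_w[0])  # block width M
    n = len(luma_w)     # block height N
    if chroma_format == "4:4:4":
        # Chroma has the same resolution: weights are copied as-is.
        return [row[:] for row in luma_w]
    # 4:2:0 -- x ranges over 0..M/2-1 and y over 0..N/2-1; each chroma
    # position reuses the luma weight at (x << 1, y << 1).
    return [[luma_w[y << 1][x << 1] for x in range(m // 2)]
            for y in range(n // 2)]
```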
  • Another implementation manner of step d4 is: obtaining the chrominance weight value of the pixel position according to angleIdx and the reference weight value ReferenceWeights[x], instead of obtaining the chrominance weight value according to the luminance weight value. For example, if the format of the chrominance resolution is 4:2:0, the chrominance weight value of the pixel position is obtained according to angleIdx and the reference weight value ReferenceWeights[x].
  • the value range of x is 0 to M-1
  • the value range of y is 0 to N-1.
  • For step d3 and step d4, the difference in the formulas of each case is as follows; referring to FIG. 8C, examples of angle partition 2 and angle partition 3 are shown.
  • The calculation formula of the matching position of (x, y) in angle partition 2 can be: x - (y >> angleIdx), and the calculation formula of the matching position of (x, y) in angle partition 3 can be: x + (y >> angleIdx).
  • At 1/2-pel precision, the calculation formula of the matching position of (x, y) in angle partition 2 can be: (x << 1) - ((y << 1) >> angleIdx), and the calculation formula of the matching position of (x, y) in angle partition 3 can be: (x << 1) + ((y << 1) >> angleIdx).
  • the implementation of angle partition 0 and angle partition 1 is similar, just swap the position of (x, y).
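The matching-position formulas above can be illustrated as follows; applying the shift to the y term before the add/subtract is our reading of the expressions (angleIdx being the log2 of the slope, the shift realizes the division y / tan), and the function names are illustrative:

```python
def match_pos_int_pel(x, y, angle_idx, sub_angle_idx):
    # Matching position of pixel (x, y) at integer-pel precision.
    if sub_angle_idx == 2:
        return x - (y >> angle_idx)
    if sub_angle_idx == 3:
        return x + (y >> angle_idx)
    # Partitions 0 and 1 are analogous with the roles of x and y swapped.
    raise NotImplementedError("swap x and y for partitions 0 and 1")

def match_pos_half_pel(x, y, angle_idx, sub_angle_idx):
    # Matching position of pixel (x, y) at 1/2-pel precision:
    # both coordinates are first scaled by << 1.
    if sub_angle_idx == 2:
        return (x << 1) - ((y << 1) >> angle_idx)
    if sub_angle_idx == 3:
        return (x << 1) + ((y << 1) >> angle_idx)
    raise NotImplementedError("swap x and y for partitions 0 and 1")
```

The luminance weight of (x, y) is then looked up in ReferenceWeights at this matching position (after the FirstPos/offset adjustment described above).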
  • Embodiment 9: In Embodiment 1 to Embodiment 4, the encoding end/decoding end needs to obtain the weight transformation rate of the current block. If the current block supports the weight transformation rate switching mode, the following method is adopted to obtain the weight transformation rate of the current block: obtain the weight transformation rate indication information of the current block, and determine the weight transformation rate of the current block according to the weight transformation rate indication information. Exemplarily, if the weight transformation rate indication information is the first indication information, the weight transformation rate of the current block is the first weight transformation rate; if the weight transformation rate indication information is the second indication information, the weight transformation rate of the current block is the second weight transformation rate. If the current block does not support the weight transformation rate switching mode, the preset weight transformation rate is determined as the weight transformation rate of the current block.
  • the weight conversion rate of the current block may be the first weight conversion rate or the second weight conversion rate, and the first weight conversion rate is different from the second weight conversion rate, That is, the weight transformation ratio of the current block is variable, so that the weight transformation ratio can be switched adaptively instead of using a uniform weight transformation ratio.
  • If the switching control information allows the current block to enable the weight transformation rate switching mode, the current block supports the weight transformation rate switching mode; if the switching control information does not allow the current block to enable the weight transformation rate switching mode, the current block does not support the weight transformation rate switching mode.
  • The switching control information may include, but is not limited to: sequence-level switching control information, frame-level switching control information, Slice-level switching control information, Tile-level switching control information, Patch-level switching control information, CTU (Coding Tree Unit) level switching control information, LCU (Largest Coding Unit) level switching control information, block-level switching control information, CU (Coding Unit) level switching control information, PU (Prediction Unit) level switching control information, etc., which is not limited.
  • The switching control information can be obtained, and it can be determined whether the switching control information allows the current block to enable the weight transformation rate switching mode, and then whether the current block supports the weight transformation rate switching mode.
  • The encoding end can encode the switching control information into the code stream, so that the decoding end parses the switching control information from the code stream to know whether the switching control information allows the current block to enable the weight transformation rate switching mode, and then determines whether the current block supports the weight transformation rate switching mode.
  • Alternatively, the encoding end may not encode the switching control information into the code stream; instead, the decoding end implicitly derives the switching control information to know whether the switching control information allows the current block to enable the weight transformation rate switching mode, and then determines whether the current block supports the weight transformation rate switching mode.
  • The sequence-level switching control information can be awp_adptive_flag (inter-frame angle weighted prediction adaptation flag). If awp_adptive_flag is the first value, the sequence-level switching control information allows the current sequence to enable the weight transformation rate switching mode, thereby allowing the current block to enable the weight transformation rate switching mode. If awp_adptive_flag is the second value, the sequence-level switching control information does not allow the current sequence to enable the weight transformation rate switching mode, so that the current block is not allowed to enable the weight transformation rate switching mode. Exemplarily, the first value is 1 and the second value is 0, or the first value is 0 and the second value is 1. Of course, the above are only examples of the first value and the second value, which are not limited. For other types of switching control information, the implementation process is similar to that of the sequence-level switching control information, and details are not repeated here.
  • The weight transformation rate indication information of the current block may be an SCC (Screen Content Coding) identifier corresponding to the current block; the first indication information is used to indicate that the current block belongs to screen content coding, and the second indication information is used to indicate that the current block belongs to non-screen content coding.
  • the SCC identifier corresponding to the current block can be obtained, and the weight transformation rate of the current block can be determined according to the SCC identifier.
  • If the current block belongs to screen content coding, the weight transformation rate of the current block is the first weight transformation rate; if the current block belongs to non-screen content coding, the weight transformation rate of the current block is the second weight transformation rate.
  • the absolute value of the first weight transformation rate may be greater than the absolute value of the second weight transformation rate.
  • the absolute value of the first weight transformation rate may be 4, and the absolute value of the second weight transformation rate may be 1 or 2.
  • the absolute value of the first weight transformation rate may be 2, and the absolute value of the second weight transformation rate may be 1.
  • the absolute value of the first weight transformation rate may be 8, and the absolute value of the second weight transformation rate may be 1, 2, or 4.
  • the absolute value of the first weight transformation rate may be 8 or 4, and the absolute value of the second weight transformation rate may be 1 or 2.
  • the above are just a few examples, which are not limited, as long as the absolute value of the first weight transformation rate is greater than the absolute value of the second weight transformation rate.
  • The SCC identifiers may include, but are not limited to: sequence-level SCC identifiers, frame-level SCC identifiers, Slice-level SCC identifiers, Tile-level SCC identifiers, Patch-level SCC identifiers, CTU-level SCC identifiers, LCU-level SCC identifiers, block-level SCC identifiers, CU-level SCC identifiers, PU-level SCC identifiers, etc., which is not limited.
  • The sequence-level SCC identifier corresponding to the current block may be determined as the SCC identifier corresponding to the current block, or the frame-level SCC identifier corresponding to the current block may be determined as the SCC identifier corresponding to the current block, and so on, as long as the SCC identifier corresponding to the current block can be obtained.
  • For the encoding end, it is possible to decide whether the current block belongs to screen content coding or non-screen content coding. If the current block belongs to screen content coding, the encoding end determines the weight transformation rate of the current block to be the first weight transformation rate. If the current block belongs to non-screen content coding, the encoding end determines the weight transformation rate of the current block to be the second weight transformation rate.
  • the SCC identifier corresponding to the current block can be obtained. If the SCC identifier is used to indicate that the current block belongs to screen content encoding, the encoder determines the weight transformation rate of the current block to be the first weight transformation rate. If the SCC identifier is used to indicate that the current block belongs to non-screen content coding, the encoder determines that the weight transformation rate of the current block is the second weight transformation rate.
  • The encoding end can encode the SCC identifier (such as the sequence-level SCC identifier, frame-level SCC identifier, Slice-level SCC identifier, etc.) into the code stream, so that the decoding end parses the SCC identifier from the code stream and determines it as the SCC identifier corresponding to the current block; for example, the sequence-level SCC identifier may be determined as the SCC identifier corresponding to the current block. To sum up, the decoding end can obtain the SCC identifier corresponding to the current block.
  • If the SCC identifier is used to indicate that the current block belongs to screen content coding, the decoding end determines the weight transformation rate of the current block to be the first weight transformation rate. If the SCC identifier is used to indicate that the current block belongs to non-screen content coding, the decoding end determines that the weight transformation rate of the current block is the second weight transformation rate. For example, if the SCC identifier is the first value, it is used to indicate that the current block belongs to screen content coding, and if the SCC identifier is the second value, it is used to indicate that the current block belongs to non-screen content coding. The first value is 1 and the second value is 0, or the first value is 0 and the second value is 1. Of course, the above are only examples of the first value and the second value, which are not limited.
  • the encoding end may also not encode the SCC identification into the code stream, but implicitly derive the SCC identification by using information consistent with the decoding end.
  • The decoding end may also implicitly derive the SCC identifier and determine it as the SCC identifier corresponding to the current block. For example, if multiple consecutive frames are all screen content coding, it is deduced that the current frame is also screen content coding; therefore, the decoding end implicitly derives the frame-level SCC identifier, determines it as the SCC identifier corresponding to the current block, and the SCC identifier is used to indicate that the current block belongs to screen content coding.
  • the decoding end implicitly derives a frame-level SCC flag, and the SCC flag is used to indicate that the current block belongs to non-screen content coding.
  • For example, if the proportion of the IBC mode is less than a certain percentage, the next frame is determined to be non-screen content coding; otherwise, the next frame continues to be treated as screen content coding.
  • the above method is only an example of implicitly deriving the SCC identifier, and this implicit derivation method is not limited.
  • the decoding end can obtain the SCC identifier corresponding to the current block, and if the SCC identifier is used to indicate that the current block belongs to screen content coding, the decoding end determines the weight transformation rate of the current block to be the first weight transformation rate. If the SCC identifier is used to indicate that the current block belongs to non-screen content coding, the decoding end determines that the weight transformation rate of the current block is the second weight transformation rate. For example, if the SCC identifier is the first value, it is used to indicate that the current block belongs to the screen content encoding, and if the SCC identifier is the second value, it is used to indicate that the current block belongs to the non-screen content encoding.
  • The weight transformation rate of the current block can be the first weight transformation rate or the second weight transformation rate, that is, the weight transformation rate of the current block can be switched, and the switching of the weight transformation rate depends on the explicit SCC identifier of a certain level.
  • The explicit SCC identifier means that the scc_flag (SCC identifier) is encoded into the code stream, so that the decoding end parses the SCC identifier from the code stream; the implicit SCC identifier means that the decoding end adaptively derives the SCC identifier according to the available information.
  • A certain level of SCC identifier means: the sequence level indicates the SCC identifier of the current sequence, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current sequence; the frame level indicates the SCC identifier of the current frame, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current frame;
  • the Slice level indicates the SCC identifier of the current Slice, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current Slice;
  • the Tile level indicates the SCC identifier of the current Tile, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current Tile;
  • the Patch level indicates the SCC identifier of the current Patch, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current Patch;
  • the CTU level indicates the SCC identifier of the current CTU, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current CTU;
  • the LCU level indicates the SCC identifier of the current LCU, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current LCU;
  • the block level indicates the SCC identifier of the current block, and the SCC identifier is used as the SCC identifier of the current block;
  • the CU level indicates the SCC identifier of the current CU, and the SCC identifier is used as the SCC identifier of all blocks belonging to the current CU.
  • For example, the second weight transformation rate may be used as the default weight transformation rate: when the SCC identifier is used to indicate that the current block belongs to non-screen content coding, the weight transformation rate does not need to be switched, that is, the weight transformation rate of the current block is determined to be the second weight transformation rate; when the SCC identifier is used to indicate that the current block belongs to screen content coding, the weight transformation rate needs to be switched, that is, the weight transformation rate of the current block is determined to be the first weight transformation rate.
  • Alternatively, the first weight transformation rate can be used as the default weight transformation rate: when the SCC identifier is used to indicate that the current block belongs to non-screen content coding, the weight transformation rate needs to be switched, that is, the weight transformation rate of the current block is determined to be the second weight transformation rate; when the SCC identifier is used to indicate that the current block belongs to screen content coding, the weight transformation rate does not need to be switched, that is, the weight transformation rate of the current block is determined to be the first weight transformation rate.
  • In summary, if the current block belongs to screen content coding, the weight transformation rate of the current block is the first weight transformation rate; if the current block belongs to non-screen content coding, the weight transformation rate of the current block is the second weight transformation rate. The absolute value of the first weight transformation rate is greater than the absolute value of the second weight transformation rate; for example, the absolute value of the first weight transformation rate is 4 and the absolute value of the second weight transformation rate is 1, so that for a current block belonging to an SCC sequence, the absolute value of the weight transformation rate increases, that is, the weight transformation speed increases.
  • the indication information of the weight conversion rate of the current block may be a weight conversion rate switching identifier corresponding to the current block
  • The first indication information is used to indicate that the current block does not need to perform weight transformation rate switching, and the second indication information is used to indicate that the current block needs to perform weight transformation rate switching.
  • the weight transformation rate switching identifier corresponding to the current block can be obtained, and the weight transformation rate of the current block can be determined according to the weight transformation rate switching identifier.
  • If the weight transformation rate switching identifier is used to indicate that the current block does not need to perform weight transformation rate switching, the weight transformation rate of the current block may be the first weight transformation rate; if the weight transformation rate switching identifier is used to indicate that the current block needs to perform weight transformation rate switching, the weight transformation rate of the current block may be the second weight transformation rate.
  • the absolute value of the first weight conversion rate is not equal to the absolute value of the second weight conversion rate.
  • the absolute value of the first weight conversion rate may be greater than the absolute value of the second weight conversion rate.
  • the absolute value of the first weight conversion rate may be 4, and the absolute value of the second weight conversion rate may be 1 or 2.
  • the absolute value of the first weight transformation rate may be 2, and the absolute value of the second weight transformation rate may be 1.
  • the absolute value of the first weight transformation rate may be 8, and the absolute value of the second weight transformation rate may be 1, 2, or 4.
  • the absolute value of the first weight conversion rate may be smaller than the absolute value of the second weight conversion rate.
  • the absolute value of the first weight transformation rate may be 1, and the absolute value of the second weight transformation rate may be 2, 4, or 8.
  • the absolute value of the first weight transformation rate may be 2, and the absolute value of the second weight transformation rate may be 4 or 8.
  • the absolute value of the first weight transformation rate may be 4, and the absolute value of the second weight transformation rate may be 8.
  • the above are just a few examples, which are not limited, as long as the absolute value of the first weight transformation rate is not equal to the absolute value of the second weight transformation rate.
  • The weight transformation rate switching identifiers may include, but are not limited to: sequence-level weight transformation rate switching identifiers, frame-level weight transformation rate switching identifiers, Slice-level weight transformation rate switching identifiers, Tile-level weight transformation rate switching identifiers, Patch-level weight transformation rate switching identifiers, CTU-level weight transformation rate switching identifiers, LCU-level weight transformation rate switching identifiers, block-level weight transformation rate switching identifiers, CU-level weight transformation rate switching identifiers, PU-level weight transformation rate switching identifiers, etc., which is not limited.
  • The sequence-level weight transformation rate switching identifier corresponding to the current block may be determined as the weight transformation rate switching identifier corresponding to the current block, or the frame-level weight transformation rate switching identifier corresponding to the current block may be determined as the weight transformation rate switching identifier corresponding to the current block, and so on.
  • the first weight conversion rate can be used as the default weight conversion rate.
  • for the encoding end, it can be known whether the current block needs to perform weight transformation rate switching. If the current block does not need to perform weight transformation rate switching, the encoding end determines that the weight transformation rate of the current block is the first weight transformation rate; if the current block needs to perform weight transformation rate switching, the encoding end determines that the weight transformation rate of the current block is the second weight transformation rate. Alternatively, for the encoding end, the weight transformation rate switching flag corresponding to the current block can be known. If the switching flag indicates that the current block does not need to perform weight transformation rate switching, the encoding end determines that the weight transformation rate of the current block is the first weight transformation rate; if the switching flag indicates that the current block needs to perform weight transformation rate switching, the encoding end determines that the weight transformation rate of the current block is the second weight transformation rate.
  • for example, the encoding end determines a rate-distortion cost value 1 corresponding to the first weight transformation rate and a rate-distortion cost value 2 corresponding to the second weight transformation rate. If rate-distortion cost value 1 is less than rate-distortion cost value 2, it is determined that the current block does not need to perform weight transformation rate switching; if rate-distortion cost value 2 is less than rate-distortion cost value 1, it is determined that the current block needs to perform weight transformation rate switching.
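The encoder-side decision above can be sketched as follows. This is an illustrative Python sketch, not part of any codec specification: the function name, the example rate values (4 and 1), and breaking a cost tie in favor of the default rate are all assumptions.

```python
def choose_weight_transform_rate(rd_cost_1, rd_cost_2,
                                 first_rate=4, second_rate=1):
    """Return (weight transformation rate, needs_switch) for the current block.

    rd_cost_1 / rd_cost_2 are the rate-distortion cost values computed for
    the first (default) and second weight transformation rates. Tie-breaking
    toward the first rate is an assumption; the text leaves ties unspecified.
    """
    if rd_cost_1 <= rd_cost_2:
        # Cost of the default rate is not larger: no switching is performed.
        return first_rate, False
    # Cost of the alternative rate is smaller: switch to the second rate.
    return second_rate, True
```

A caller would compare the two costs once per block, e.g. `choose_weight_transform_rate(10.5, 12.0)` keeps the first rate and signals that no switching is needed.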
  • the encoding end can encode the weight transformation rate switching flag (such as the sequence-level weight transformation rate switching flag) into the code stream, so that the decoding end parses the weight transformation rate switching flag from the code stream and determines it as the weight transformation rate switching flag corresponding to the current block. In this way, the decoding end can know the weight transformation rate switching flag corresponding to the current block. If the switching flag indicates that the current block does not need to perform weight transformation rate switching, the decoding end determines that the weight transformation rate of the current block is the first weight transformation rate.
  • if the switching flag indicates that the current block needs to perform weight transformation rate switching, the decoding end determines that the weight transformation rate of the current block is the second weight transformation rate. For example, if the weight transformation rate switching flag is the first value, it indicates that the current block does not need to perform weight transformation rate switching; if the flag is the second value, it indicates that the current block needs to perform weight transformation rate switching.
  • the first value is 1 and the second value is 0, or the first value is 0 and the second value is 1.
  • the above are only examples of the first value and the second value, which are not limited.
  • the encoding end may also not encode the weight transformation rate switching flag into the code stream; instead, the decoding end implicitly derives the weight transformation rate switching flag and determines it as the weight transformation rate switching flag corresponding to the current block. For example, if multiple consecutive blocks need to perform weight transformation rate switching, the current block also needs to perform weight transformation rate switching; the decoding end implicitly derives the weight transformation rate switching flag corresponding to the current block, and this switching flag indicates that the current block needs to perform weight transformation rate switching.
  • for another example, if multiple consecutive blocks do not need to perform weight transformation rate switching, the decoding end implicitly derives the weight transformation rate switching flag, and the switching flag indicates that the current block does not need to perform weight transformation rate switching.
  • the above method is only an example of implicitly deriving the weight conversion rate switching flag, and this derivation method is not limited.
  • the decoding end can know the weight transformation rate switching flag corresponding to the current block. If the switching flag indicates that the current block does not need to perform weight transformation rate switching, the weight transformation rate of the current block is determined to be the first weight transformation rate; if the switching flag indicates that the current block needs to perform weight transformation rate switching, the weight transformation rate of the current block is determined to be the second weight transformation rate.
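A minimal sketch of the decoder-side mapping from the parsed (or implicitly derived) switching flag to a weight transformation rate, assuming the convention from the example above where the first value means "no switching". The function name, the default flag value, and the example rates are illustrative assumptions, not mandated values.

```python
def weight_transform_rate_from_flag(switch_flag, first_value=1,
                                    first_rate=4, second_rate=1):
    """Map a weight transformation rate switching flag to a rate.

    first_value is whichever value (0 or 1) the encoder and decoder have
    agreed means "no switching"; the text allows either convention.
    """
    if switch_flag == first_value:
        return first_rate   # no switching: keep the default (first) rate
    return second_rate      # switching: use the second rate
```

With the opposite convention one would simply call it with `first_value=0`.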
  • the weight transformation rate of the current block can be the first weight transformation rate or the second weight transformation rate; that is, the weight transformation rate of the current block can be switched, and the switching depends on a weight transformation rate switching flag (refine_flag) at a certain level.
  • the refine_flag is an explicit flag or an implicit flag
  • an explicit flag means that the refine_flag is encoded into the code stream, so that the decoding end parses the refine_flag from the code stream
  • an implicit flag means that the encoding end and the decoding end adaptively derive the refine_flag according to information that both ends can obtain.
  • a refine_flag at a certain level means: a sequence-level flag indicates the refine_flag of the current sequence and serves as the refine_flag of all blocks belonging to the current sequence; a frame-level flag indicates the refine_flag of the current frame and serves as the refine_flag of all blocks belonging to the current frame; a slice-level flag indicates the refine_flag of the current slice and serves as the refine_flag of all blocks belonging to the current slice; a tile-level flag indicates the refine_flag of the current tile and serves as the refine_flag of all blocks belonging to the current tile; a patch-level flag indicates the refine_flag of the current patch and serves as the refine_flag of all blocks belonging to the current patch; a CTU-level flag indicates the refine_flag of the current CTU and serves as the refine_flag of all blocks belonging to the current CTU; an LCU-level flag indicates the refine_flag of the current LCU and serves as the refine_flag of all blocks belonging to the current LCU; a block-level flag indicates the refine_flag of the current block; and so on for the CU level and the PU level.
  • the first weight transformation rate may be used as the default weight transformation rate. When the weight transformation rate switching flag indicates that the current block does not need to perform weight transformation rate switching, the weight transformation rate is not switched, that is, the weight transformation rate of the current block is determined to be the first weight transformation rate.
  • when the weight transformation rate switching flag indicates that the current block needs to perform weight transformation rate switching, the weight transformation rate is switched, that is, the weight transformation rate of the current block is determined to be the second weight transformation rate.
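The refine_flag can be signalled at any one of the listed levels, and the flag at that level then applies to all blocks belonging to it. The text does not fix a precedence when several levels carry a flag, so the sketch below assumes one plausible rule (the most specific signalled level wins); the level names follow the list above, while the dict-based container and the default value are assumptions.

```python
# Most specific level first; this precedence order is an assumption.
LEVELS = ["pu", "cu", "block", "lcu", "ctu", "patch",
          "tile", "slice", "frame", "sequence"]

def resolve_refine_flag(signalled, default=False):
    """Resolve the refine_flag that applies to the current block.

    signalled maps a level name (e.g. "frame") to the refine_flag value
    explicitly carried at that level, if any.
    """
    for level in LEVELS:
        if level in signalled:
            return signalled[level]   # this level's flag applies to the block
    return default                    # nothing signalled: fall back to default
```

For example, a frame-level flag covers every block of that frame unless a more specific level (under this assumed rule) overrides it.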
  • Embodiment 10 In Embodiment 1 to Embodiment 4, the encoding end/decoding end needs to obtain the weight prediction angle and weight prediction position of the current block.
  • after the weight transformation rate of the current block is obtained (for example, the first weight transformation rate or the second weight transformation rate), the weight prediction angle and weight prediction position of the current block are obtained in the following manner:
  • Manner 1 The encoding end and the decoding end agree on the same weight prediction angle as the weight prediction angle of the current block, and agree on the same weight prediction position as the weight prediction position of the current block.
  • the encoding end and the decoding end take the weight prediction angle A as the weight prediction angle of the current block
  • the encoding end and the decoding end take the weight prediction position 4 as the weight prediction position of the current block.
  • the encoder builds a weight prediction angle list, and the weight prediction angle list includes at least one weight prediction angle, such as weight prediction angle A and weight prediction angle B.
  • the encoding end constructs a weight prediction position list, and the weight prediction position list includes at least one weight prediction position, such as weight prediction position 0-weight prediction position 6.
  • each weight prediction angle in the weight prediction angle list is traversed in turn, and each weight prediction position in the weight prediction position list is traversed; that is, every combination of a weight prediction angle and a weight prediction position is traversed. Each combination is taken as the weight prediction angle and weight prediction position obtained in step a1, and the weighted prediction value of the current block is obtained based on the weight prediction angle, the weight prediction position, and the weight transformation rate.
  • the weighted prediction value of the current block is obtained based on the weight prediction angle A and the weight prediction position 0.
  • the weighted prediction value of the current block is obtained based on the weight prediction angle A and the weight prediction position 1.
  • the weighted prediction value of the current block is obtained based on the weight prediction angle B and the weight prediction position 0, and so on.
  • the encoder can obtain the weighted prediction value of the current block based on each combination of the weighted prediction angle and the weighted prediction position.
  • after the encoding end obtains the weighted prediction value of the current block based on a combination of the weight prediction angle and the weight prediction position, it can determine the rate-distortion cost value according to the weighted prediction value of the current block.
  • the method for determining the rate-distortion cost value is not limited. The rate-distortion cost value of each combination can be obtained, and the smallest rate-distortion cost value can be selected from all rate-distortion cost values.
  • the encoding end takes the combination of the weight prediction angle and the weight prediction position corresponding to the minimum rate-distortion cost value as the target weight prediction angle and the target weight prediction position, and finally encodes the index value of the target weight prediction angle in the weight prediction angle list and the index value of the target weight prediction position in the weight prediction position list into the code stream.
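The exhaustive search in Manner 2 can be sketched as follows. The cost computation itself is not specified in the text, so `rd_cost` is a caller-supplied stand-in, and all names are illustrative.

```python
def search_angle_position(angles, positions, rd_cost):
    """Try every (weight prediction angle, weight prediction position) pair
    and return the index values of the pair with the smallest
    rate-distortion cost; these indices are what gets encoded into the
    code stream."""
    best_cost, best_ai, best_pi = None, None, None
    for ai, angle in enumerate(angles):
        for pi, position in enumerate(positions):
            cost = rd_cost(angle, position)   # cost of this combination
            if best_cost is None or cost < best_cost:
                best_cost, best_ai, best_pi = cost, ai, pi
    return best_ai, best_pi
```

The same structure extends to Embodiment 12 by adding a third loop over the weight transformation rates in the target lookup table.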
  • the decoding end constructs a weight prediction angle list, the weight prediction angle list is the same as the weight prediction angle list of the encoder end, and the weight prediction angle list includes at least one weight prediction angle.
  • a weight prediction location list is constructed, the weight prediction location list is the same as the weight prediction location list at the encoder, and the weight prediction location list includes at least one weight prediction location.
  • the encoded bit stream may include indication information 1, which is used to indicate the weight prediction angle of the current block (that is, the target weight prediction angle) and the weight prediction position of the current block (that is, the target weight prediction position). For example, when the indication information 1 is 0, it indicates the first weight prediction angle in the weight prediction angle list and the first weight prediction position in the weight prediction position list.
  • when the indication information 1 is 1, it indicates the first weight prediction angle in the weight prediction angle list and the second weight prediction position in the weight prediction position list, and so on. Which weight prediction angle and which weight prediction position each value of the indication information 1 indicates only needs to be agreed between the encoding end and the decoding end, and is not limited in this embodiment.
  • after receiving the encoded bit stream, the decoding end parses out the indication information 1 from the encoded bit stream. Based on the indication information 1, the decoding end can select the weight prediction angle corresponding to the indication information 1 from the weight prediction angle list and use it as the weight prediction angle of the current block, and can select the weight prediction position corresponding to the indication information 1 from the weight prediction position list and use it as the weight prediction position of the current block.
  • the encoded bit stream may include indication information 2 and indication information 3.
  • the indication information 2 is used to indicate the target weight prediction angle of the current block, such as the index value 1 of the target weight prediction angle in the weight prediction angle list; the index value 1 indicates that the target weight prediction angle is the first weight prediction angle in the weight prediction angle list.
  • the indication information 3 is used to indicate the target weight prediction position of the current block, such as the index value 2 of the target weight prediction position in the weight prediction position list; the index value 2 indicates that the target weight prediction position is the second weight prediction position in the weight prediction position list.
  • after receiving the encoded bit stream, the decoding end parses out the indication information 2 and the indication information 3 from the encoded bit stream. Based on the indication information 2, it selects the weight prediction angle corresponding to the index value 1 from the weight prediction angle list and uses it as the weight prediction angle of the current block. Based on the indication information 3, it selects the weight prediction position corresponding to the index value 2 from the weight prediction position list and uses it as the weight prediction position of the current block.
  • Application Scenario 3 The encoder and the decoder can agree on a preferred configuration combination, which is not limited, and can be configured according to actual experience.
  • for example, the encoding end and the decoding end agree on the weight prediction angle A and the weight prediction position 4 as preferred configuration combination 1, and on the weight prediction angle B and the weight prediction position 4 as preferred configuration combination 2.
  • after determining the target weight prediction angle and target weight prediction position of the current block, the encoding end determines whether the target weight prediction angle and the target weight prediction position form a preferred configuration combination. If yes, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream may include indication information 4 and indication information 5.
  • the indication information 4 is used to indicate whether the current block adopts the preferred configuration combination. If the indication information 4 is the first value (such as 0), it means that the current block adopts the preferred configuration combination.
  • the indication information 5 is used to indicate which preferred configuration combination the current block adopts. For example, when the indication information 5 is 0, it indicates that the current block adopts preferred configuration combination 1; when the indication information 5 is 1, it indicates that the current block adopts preferred configuration combination 2.
  • after receiving the encoded bit stream, the decoding end parses the indication information 4 and the indication information 5 from the encoded bit stream. Based on the indication information 4, the decoding end determines whether the current block adopts a preferred configuration combination; if the indication information 4 is the first value, it is determined that the current block adopts a preferred configuration combination. In that case, the decoding end determines which preferred configuration combination the current block adopts based on the indication information 5. For example, when the indication information 5 is 0, it is determined that the current block adopts preferred configuration combination 1, that is, the weight prediction angle of the current block is the weight prediction angle A and the weight prediction position of the current block is the weight prediction position 4. For another example, when the indication information 5 is 1, it is determined that the current block adopts preferred configuration combination 2, that is, the weight prediction angle of the current block is the weight prediction angle B and the weight prediction position of the current block is the weight prediction position 4.
  • alternatively, the encoded bit stream may include the indication information 4 but not the indication information 5.
  • the indication information 4 is used to indicate that the current block adopts the preferred configuration combination.
  • the decoding end parses the indication information 4 from the encoded bit stream; if the indication information 4 is the first value, it is determined that the current block adopts the preferred configuration combination, and based on the preferred configuration combination, it is determined that the weight prediction angle of the current block is the weight prediction angle A and the weight prediction position of the current block is the weight prediction position 4.
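The decoder-side handling of Application Scenario 3 can be sketched as follows. The combination table reuses the example agreement from the text (angle A / position 4, angle B / position 4); the function name, the assumed first value 0, and the rule that a missing indication information 5 falls back to the single agreed combination are illustrative assumptions.

```python
# Example agreed combinations: (weight prediction angle, weight prediction position).
PREFERRED = [("A", 4),   # preferred configuration combination 1
             ("B", 4)]   # preferred configuration combination 2

def decode_preferred(ind4, ind5=None, first_value=0):
    """Return the (angle, position) of the preferred combination in use,
    or None when indication information 4 says no preferred combination
    is adopted (the combination is then signalled explicitly instead)."""
    if ind4 == first_value:
        index = 0 if ind5 is None else ind5   # only one combo agreed -> no ind5
        return PREFERRED[index]
    return None
```

For instance, `ind4 == 0` with `ind5 == 1` selects angle B with position 4.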
  • Application Scenario 4 The encoding end and the decoding end can agree on a preferred configuration combination. After the encoding end determines the target weight prediction angle and target weight prediction position of the current block, it determines whether the target weight prediction angle and the target weight prediction position are the preferred configuration combination. If not, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream includes indication information 4 and indication information 6 .
  • the indication information 4 is used to indicate whether the current block adopts the preferred configuration combination. If the indication information 4 is the second value (eg 1), it means that the current block does not adopt the preferred configuration combination.
  • the indication information 6 is used to indicate the target weight prediction angle of the current block and the target weight prediction position of the current block. For example, when the indication information 6 is 0, it is used to indicate the first weighted prediction angle in the weighted prediction angle list, and the first weighted prediction position in the weighted prediction position list, and so on.
  • after receiving the encoded bit stream, the decoding end parses the indication information 4 and the indication information 6 from the encoded bit stream. Based on the indication information 4, the decoding end determines whether the current block adopts the preferred configuration combination; if the indication information 4 is the second value, it is determined that the current block does not adopt the preferred configuration combination. In that case, based on the indication information 6, the decoding end can select the weight prediction angle corresponding to the indication information 6 from the weight prediction angle list and use it as the weight prediction angle of the current block, and can select the weight prediction position corresponding to the indication information 6 from the weight prediction position list and use it as the weight prediction position of the current block.
  • Application scenario 5 The encoder and the decoder can agree on a preferred configuration combination. After determining the target weight prediction angle and target weight prediction position of the current block, the encoder determines whether the target weight prediction angle and the target weight prediction position are the preferred configuration combination. If not, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream includes indication information 4 , indication information 7 and indication information 8 . Exemplarily, the indication information 4 is used to indicate whether the current block adopts the preferred configuration combination. For example, if the indication information 4 is the second value, it means that the current block does not adopt the preferred configuration combination.
  • the indication information 7 is used to indicate the target weight prediction angle of the current block, such as the index value 1 of the target weight prediction angle in the weight prediction angle list; the index value 1 indicates that the target weight prediction angle is the first weight prediction angle in the weight prediction angle list.
  • the indication information 8 is used to indicate the target weight prediction position of the current block, such as the index value 2 of the target weight prediction position in the weight prediction position list; the index value 2 indicates that the target weight prediction position is the second weight prediction position in the weight prediction position list.
  • after receiving the encoded bit stream, the decoding end parses out the indication information 4, the indication information 7, and the indication information 8 from the encoded bit stream. Based on the indication information 4, the decoding end determines whether the current block adopts the preferred configuration combination; if the indication information 4 is the second value, it is determined that the current block does not adopt the preferred configuration combination. In that case, based on the indication information 7, the decoding end selects the weight prediction angle corresponding to the index value 1 from the weight prediction angle list and uses it as the weight prediction angle of the current block; based on the indication information 8, it selects the weight prediction position corresponding to the index value 2 from the weight prediction position list and uses it as the weight prediction position of the current block.
  • Embodiment 11 In Embodiment 1 to Embodiment 4, the encoding end/decoding end needs to obtain the weight transformation rate of the current block. If the current block supports the weight transformation rate switching mode, the weight transformation rate of the current block is obtained in the following manner: obtain the weight transformation rate indication information of the current block; select the weight transformation rate corresponding to the weight transformation rate indication information from a preset lookup table, where the preset lookup table includes at least two weight transformation rates; and determine the selected weight transformation rate as the weight transformation rate of the current block.
  • the weight transformation rate of the current block can be selected from the at least two weight transformation rates; that is, the weight transformation rate of the current block is variable, so that the weight transformation rate can be switched adaptively instead of using a uniform weight transformation rate.
  • if the switching control information allows the current block to enable the weight transformation rate switching mode, the current block supports the weight transformation rate switching mode; if the switching control information does not allow the current block to enable the weight transformation rate switching mode, the current block does not support the weight transformation rate switching mode. For whether the current block supports the weight transformation rate switching mode, see Embodiment 9.
  • the preset lookup table may include at least two weight transformation rates
  • the weight transformation rate indication information may include weight transformation rate index information, which is used to indicate a certain weight transformation rate among all the weight transformation rates in the lookup table. Based on this, the weight transformation rate corresponding to the weight transformation rate index information can be selected from the lookup table.
  • for each weight transformation rate in the lookup table, the encoding end can determine the corresponding rate-distortion cost value, take the weight transformation rate with the smallest rate-distortion cost value as the target weight transformation rate of the current block, and determine the index information of the target weight transformation rate in the lookup table, that is, the weight transformation rate index information, which indicates which weight transformation rate in the lookup table is used.
  • the encoded bit stream may carry the weight transformation rate index information, which indicates the index of the target weight transformation rate in the lookup table.
  • the decoding end selects the weight transformation rate corresponding to the weight transformation rate index information from the lookup table, and uses it as the target weight transformation rate of the current block.
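The lookup-table selection in Embodiment 11 can be sketched as follows. The table contents here (absolute rates 1, 2, 4, 8) are example values taken from the earlier discussion of weight transformation rates, not mandated by the text; the function name and the range check are assumptions.

```python
# Example preset lookup table of weight transformation rates.
LOOKUP_TABLE = [1, 2, 4, 8]

def select_weight_transform_rate(index_info, table=None):
    """Select the weight transformation rate indicated by the parsed
    weight transformation rate index information."""
    table = LOOKUP_TABLE if table is None else table
    if not 0 <= index_info < len(table):
        raise ValueError("weight transformation rate index out of range")
    return table[index_info]
```

Both ends must hold the same table so that the index parsed from the bit stream selects the same rate on each side.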
  • Embodiment 12 In Embodiment 1 to Embodiment 4, the encoding end/decoding end needs to obtain the weight prediction angle, the weight prediction position, and the weight transformation rate of the current block. The weight transformation rate of the current block can be obtained as in Embodiment 11; on that basis, the weight prediction angle and weight prediction position of the current block can be obtained in the following ways:
  • Manner 1 The encoding end and the decoding end agree on the same weight prediction angle as the weight prediction angle of the current block, and agree on the same weight prediction position as the weight prediction position of the current block.
  • the encoding end and the decoding end take the weight prediction angle A as the weight prediction angle of the current block
  • the encoding end and the decoding end take the weight prediction position 4 as the weight prediction position of the current block.
  • Manner 2 The encoder builds a weight prediction angle list, where the weight prediction angle list includes at least one weight prediction angle.
  • the encoding end constructs a weight prediction position list, and the weight prediction position list includes at least one weight prediction position.
  • the encoding end constructs at least two look-up tables, taking the first look-up table and the second look-up table as an example, the first look-up table includes at least one weight conversion rate, and the second look-up table includes at least one weight conversion rate.
  • the encoding end determines the target look-up table. For the determination method, refer to Embodiment 11, taking the target look-up table as the first look-up table as an example.
  • the encoding end traverses each weight prediction angle in the weight prediction angle list in turn, traverses each weight prediction position in the weight prediction position list, and traverses each weight transformation rate in the target lookup table; that is, every combination of a weight prediction angle, a weight prediction position, and a weight transformation rate is traversed. Each combination is taken as the weight prediction angle, weight prediction position, and weight transformation rate obtained in step a1, and the weighted prediction value of the current block is obtained based on the weight prediction angle, the weight prediction position, and the weight transformation rate.
  • the encoding end can obtain the weighted prediction value of the current block based on each combination. After obtaining the weighted prediction value of the current block, the encoding end can determine the rate-distortion cost value according to the weighted prediction value of the current block; that is, the encoding end can obtain the rate-distortion cost value of each combination and select the minimum rate-distortion cost value from all the rate-distortion cost values.
  • the encoding end takes the weight prediction angle, weight prediction position, and weight transformation rate corresponding to the minimum rate-distortion cost value as the target weight prediction angle, target weight prediction position, and target weight transformation rate, respectively. Finally, the index value of the target weight prediction angle in the weight prediction angle list, the index value of the target weight prediction position in the weight prediction position list, and the index value of the target weight transformation rate in the target lookup table are all encoded into the code stream of the current block.
  • the decoding end constructs a weight prediction angle list, which is the same as the weight prediction angle list of the encoding end, and the decoding end constructs a weight prediction position list, which is the same as the weight prediction position list of the encoding end.
  • the decoding end constructs a first look-up table and a second look-up table, the first look-up table is the same as the first look-up table of the encoding end, and the second look-up table is the same as the second look-up table of the encoding end.
  • after receiving the encoded bit stream of the current block, the decoding end parses the indication information from the encoded bit stream, selects a weight prediction angle from the weight prediction angle list as the weight prediction angle of the current block according to the indication information, and selects a weight prediction position from the weight prediction position list as the weight prediction position of the current block according to the indication information. For the acquisition method of the weight prediction angle and the weight prediction position, refer to Embodiment 10; details are not repeated here.
  • after receiving the encoded bit stream of the current block, the decoding end can determine the target lookup table (such as the first lookup table or the second lookup table), and select a weight transformation rate from the target lookup table as the weight transformation rate of the current block according to the weight transformation rate index information. For the acquisition method of the weight transformation rate, refer to Embodiment 11; details are not repeated here.
  • Embodiment 13: In Embodiments 1 to 3, the encoder/decoder needs to obtain the motion information candidate list.
  • the motion information candidate list is obtained in the following manner: at least one piece of available motion information to be added to the motion information candidate list is obtained, and the motion information candidate list is obtained based on the at least one piece of available motion information.
  • the at least one piece of available motion information includes, but is not limited to, at least one of the following: spatial motion information; temporal motion information; HMVP (History-based Motion Vector Prediction) motion information; preset motion information.
  • at least one piece of available motion information to be added to the motion information candidate list may be acquired in the following manner: for a spatially adjacent block of the current block, if the spatially adjacent block exists and is coded in the inter prediction mode, the motion information of the spatially adjacent block is determined as available motion information.
  • at least one piece of available motion information to be added to the motion information candidate list may be acquired in the following manner: based on a preset position of the current block (for example, the pixel position of the upper left corner of the current block), a temporally adjacent block corresponding to the preset position is selected from the reference frame of the current block, and the motion information of the temporally adjacent block is determined as available motion information.
  • at least one piece of available motion information to be added to the motion information candidate list may be acquired in the following manner: preset motion information is determined as available motion information, and the preset motion information may include, but is not limited to: default motion information derived based on candidate motion information already in the motion information candidate list.
  • the motion information candidate list may be obtained in the following manner: for the available motion information currently to be added to the motion information candidate list, if the available motion information is unidirectional motion information, and the unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the unidirectional motion information is added to the motion information candidate list; if the available motion information is bidirectional motion information, the bidirectional motion information is trimmed into first unidirectional motion information and second unidirectional motion information, and if the first unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the first unidirectional motion information is added to the motion information candidate list.
  • the first unidirectional motion information may be unidirectional motion information pointing to a reference frame in the first reference frame list; the second unidirectional motion information may be unidirectional motion information pointing to a reference frame in the second reference frame list.
  • List0 may be a forward reference frame list
  • List1 may be a backward reference frame list
  • the duplication check operation between the unidirectional motion information and the candidate motion information already in the motion information candidate list may include, but is not limited to: if the POC (Picture Order Count, indicating display order) of the reference frame pointed to by the unidirectional motion information is the same as the POC of the reference frame pointed to by the candidate motion information, and the motion vector of the unidirectional motion information is the same as the motion vector of the candidate motion information, it is determined that the unidirectional motion information duplicates the candidate motion information; otherwise, it is determined that the unidirectional motion information does not duplicate the candidate motion information.
  • the duplication check operation between the first unidirectional motion information and the candidate motion information already in the motion information candidate list may include, but is not limited to: if the reference frame list pointed to by the first unidirectional motion information is the same as the reference frame list pointed to by the candidate motion information, the refIdx of the first unidirectional motion information is the same as the refIdx of the candidate motion information, and the motion vector of the first unidirectional motion information is the same as the motion vector of the candidate motion information, it is determined that the first unidirectional motion information duplicates the candidate motion information; otherwise, it is determined that the first unidirectional motion information does not duplicate the candidate motion information.
  • alternatively, if the POC of the reference frame pointed to by the first unidirectional motion information is the same as the POC of the reference frame pointed to by the candidate motion information, and the motion vector of the first unidirectional motion information is the same as the motion vector of the candidate motion information, it is determined that the first unidirectional motion information duplicates the candidate motion information; otherwise, it is determined that the first unidirectional motion information does not duplicate the candidate motion information.
  • the duplication check operation between the second unidirectional motion information and the candidate motion information already in the motion information candidate list may include, but is not limited to: if the reference frame list pointed to by the second unidirectional motion information is the same as the reference frame list pointed to by the candidate motion information, the refIdx of the second unidirectional motion information is the same as the refIdx of the candidate motion information, and the motion vector of the second unidirectional motion information is the same as the motion vector of the candidate motion information, it is determined that the second unidirectional motion information duplicates the candidate motion information; otherwise, it is determined that the second unidirectional motion information does not duplicate the candidate motion information.
  • alternatively, if the POC of the reference frame pointed to by the second unidirectional motion information is the same as the POC of the reference frame pointed to by the candidate motion information, and the motion vector of the second unidirectional motion information is the same as the motion vector of the candidate motion information, it is determined that the second unidirectional motion information duplicates the candidate motion information; otherwise, it is determined that the second unidirectional motion information does not duplicate the candidate motion information.
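The POC-based duplication check above reduces to comparing the reference-frame POC and the two motion-vector components. A minimal sketch, with a record layout that is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class UniMotionInfo:
    ref_poc: int   # POC of the reference frame this motion information points to
    mv: tuple      # motion vector as (MV_x, MV_y)

def duplicates(a: UniMotionInfo, b: UniMotionInfo) -> bool:
    """POC-based duplication check: two pieces of unidirectional motion
    information duplicate each other iff the POCs of their reference
    frames are the same and their motion vectors are identical."""
    return a.ref_poc == b.ref_poc and a.mv == b.mv
```

The same predicate is reused below when adding available motion information to the candidate list with pruning.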
  • the above process needs to compare whether the POC of the reference frame pointed to by the unidirectional motion information is the same as the POC of the reference frame pointed to by the candidate motion information.
  • the POC of the reference frame is only one example of an attribute used to confirm whether the reference frames are the same; other attributes that can confirm whether the reference frames are the same may also be used, such as the DOC (decoding order mark) of the reference frames, which is not limited herein.
  • Embodiment 14: For Embodiment 13, spatial motion information (herein, the motion information of the spatially adjacent blocks of the current block is referred to as spatial motion information) and/or temporal motion information (herein, the motion information of the temporally adjacent blocks of the current block is referred to as temporal motion information) may be used to obtain the motion information candidate list. Therefore, it is necessary to first select the available motion information from the spatial motion information and/or the temporal motion information.
  • referring to FIG. 9A, which is a schematic diagram of the spatially adjacent blocks of the current block, the spatial motion information may be the motion information of spatially adjacent blocks such as F, G, C, A, B, and D, and the temporal motion information may be the motion information of at least one temporally adjacent block.
  • for the spatial motion information, the available motion information to be added to the motion information candidate list can be obtained in the following manner:
  • Mode 11: F, G, C, A, B, and D are the spatially adjacent blocks of the current block E, and the "availability" of the motion information of F, G, C, A, B, and D can be determined as follows.
  • if F exists and the inter prediction mode is adopted, the motion information of F is available motion information; otherwise, the motion information of F is unavailable motion information.
  • if G exists and the inter prediction mode is adopted, the motion information of G is available motion information; otherwise, the motion information of G is unavailable motion information.
  • if C exists and the inter prediction mode is adopted, the motion information of C is available motion information; otherwise, the motion information of C is unavailable motion information.
  • if A exists and the inter prediction mode is adopted, the motion information of A is available motion information; otherwise, the motion information of A is unavailable motion information.
  • if B exists and the inter prediction mode is adopted, the motion information of B is available motion information; otherwise, the motion information of B is unavailable motion information.
  • if D exists and the inter prediction mode is adopted, the motion information of D is available motion information; otherwise, the motion information of D is unavailable motion information.
  • Mode 12: F, G, C, A, B, and D are the spatially adjacent blocks of the current block E; determine the "availability" of the motion information of F, G, C, A, B, and D as follows. If F exists and the inter prediction mode is adopted, the motion information of F is available motion information; otherwise, the motion information of F is unavailable motion information. If G exists and the inter prediction mode is adopted, and the motion information of G is different from the motion information of F, the motion information of G is available motion information; otherwise, the motion information of G is unavailable motion information. If C exists and the inter prediction mode is adopted, and the motion information of C is different from the motion information of G, the motion information of C is available motion information; otherwise, the motion information of C is unavailable motion information.
  • if A exists and the inter prediction mode is adopted, and the motion information of A is different from the motion information of F, the motion information of A is available motion information; otherwise, the motion information of A is unavailable motion information.
  • if B exists and the inter prediction mode is adopted, and the motion information of B is different from the motion information of G, the motion information of B is available motion information; otherwise, the motion information of B is unavailable motion information.
  • if D exists and the inter prediction mode is adopted, the motion information of D is different from the motion information of A, and the motion information of D is different from the motion information of G, the motion information of D is available motion information; otherwise, the motion information of D is unavailable motion information.
  • Mode 13: F, G, C, A, B, and D are the spatially adjacent blocks of the current block E, and the "availability" of the motion information of F, G, C, A, B, and D can be determined as follows.
  • if F exists and the inter prediction mode is adopted, the motion information of F is available motion information; otherwise, the motion information of F is unavailable motion information.
  • if G exists and the inter prediction mode is adopted, and the motion information of G is different from the motion information of F, the motion information of G is available motion information; otherwise, the motion information of G is unavailable motion information.
  • if C exists and the inter prediction mode is adopted, and the motion information of C is different from the motion information of G, the motion information of C is available motion information; otherwise, the motion information of C is unavailable motion information.
  • if A exists and the inter prediction mode is adopted, and the motion information of A is different from the motion information of F, the motion information of A is available motion information; otherwise, the motion information of A is unavailable motion information.
  • if B exists and the inter prediction mode is adopted, the motion information of B is available motion information; otherwise, the motion information of B is unavailable motion information.
  • if D exists and the inter prediction mode is adopted, the motion information of D is different from the motion information of A, and the motion information of D is different from the motion information of G, the motion information of D is available motion information; otherwise, the motion information of D is unavailable motion information.
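The pairwise-comparison variant (Mode 12) can be sketched as follows. This is an illustrative assumption about the data layout: neighbours are passed as a dict mapping the names F, G, C, A, B, D to their motion information (with `None` meaning the block does not exist or is not inter-coded), and `same` is the motion-information duplication check; when the comparison target does not exist, no comparison is performed.

```python
def spatial_availability(neigh, same):
    """Determine the 'availability' of the motion information of the
    spatially adjacent blocks F, G, C, A, B, D of current block E,
    following the Mode 12 pairwise comparisons: G vs F, C vs G, A vs F,
    B vs G, and D vs both A and G."""
    f, g, c, a, b, d = (neigh.get(k) for k in "FGCABD")
    avail = {}
    avail["F"] = f is not None
    avail["G"] = g is not None and not (f is not None and same(g, f))
    avail["C"] = c is not None and not (g is not None and same(c, g))
    avail["A"] = a is not None and not (f is not None and same(a, f))
    avail["B"] = b is not None and not (g is not None and same(b, g))
    avail["D"] = (d is not None
                  and not (a is not None and same(d, a))
                  and not (g is not None and same(d, g)))
    return avail
```

Mode 13 differs only in that B is not compared against G; the same skeleton applies with that one condition dropped.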
  • in Modes 12 and 13, when determining whether the motion information of a spatially adjacent block is available motion information, it may be necessary to compare whether the motion information of two spatially adjacent blocks is the same, for example, to compare whether the motion information of G is the same as the motion information of F. The comparison of whether two pieces of motion information are the same is actually a duplication check operation on the motion information: if the duplication check result is "duplicate", the comparison result is that the motion information is the same; otherwise, the comparison result is that the motion information is different. Regarding the duplication check operation on the motion information, reference may be made to the subsequent embodiments, and details are not repeated here.
  • for the temporal motion information, the available motion information to be added to the motion information candidate list can be obtained in the following manner:
  • Mode 21: Based on the preset position of the current block, select a temporally adjacent block corresponding to the preset position from a reference frame of the current block, and determine the motion information of the temporally adjacent block as available motion information. For example, if the current frame where the current block is located is a B frame, unidirectional motion information or bidirectional motion information is derived according to the co-located block of the co-located reference frame, and the unidirectional motion information or bidirectional motion information is used as the available motion information. If the current frame is a P frame, unidirectional motion information is derived according to the co-located block of the co-located reference frame, and the unidirectional motion information is used as the available motion information.
  • the co-located block is a temporally adjacent block, in the co-located reference frame, corresponding to the preset position of the current block.
  • the preset position of the current block can be configured based on experience and is not limited herein; for example, it may be the pixel position of the upper left corner of the current block, the pixel position of the upper right corner of the current block, the pixel position of the lower left corner of the current block, the pixel position of the lower right corner of the current block, the center pixel position of the current block, etc.
  • the co-located reference frame can be a preset reference frame, such as the first reference frame in List0 of the current block; or a derived reference frame, such as the reference frame in List0 of the current block that is nearest to the current frame; or a reference frame parsed from the code stream. For example, for the decoding end, the information of the co-located reference frame can be parsed from the code stream, and the co-located reference frame is then determined accordingly.
  • if the current frame where the current block is located is a B frame, unidirectional motion information or bidirectional motion information is derived according to the motion information of the co-located block; if the current frame where the current block is located is a P frame, unidirectional motion information is derived according to the motion information of the co-located block.
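The frame-type rule of Mode 21 can be sketched as below. The dict-based representation of the co-located block's motion information (keys `"L0"`/`"L1"` for the two prediction directions) is an assumption for illustration.

```python
def derive_temporal_motion(frame_type, colocated_mi):
    """Derive available temporal motion information from the co-located
    block of the co-located reference frame: a B frame may yield
    unidirectional or bidirectional motion information, while a P frame
    yields unidirectional (List0) motion information only."""
    if frame_type == "P":
        return {"L0": colocated_mi.get("L0")}
    # B frame: keep whichever prediction directions the co-located block has
    return {k: v for k, v in colocated_mi.items() if v is not None}
```

Whether the result is unidirectional or bidirectional for a B frame thus depends on how the co-located block itself is coded.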
  • Mode 22: Determine the target position of the current block based on the weight prediction angle and the weight prediction position of the current block; select a temporally adjacent block corresponding to the target position from the reference frame of the current block, and determine the motion information of the temporally adjacent block as the available motion information.
  • the target position of the current block can be determined based on the weight prediction angle and the weight prediction position of the current block, for example, the target position of the current block can be determined based on the weight prediction angle and the index value of the weight prediction position of the current block.
  • the co-located block of the co-located reference frame can be determined based on the target position of the current block.
  • if the current frame where the current block is located is a B frame, unidirectional motion information or bidirectional motion information is derived according to the co-located block of the co-located reference frame, and the unidirectional motion information or bidirectional motion information is used as the available motion information.
  • if the current frame is a P frame, unidirectional motion information is derived according to the co-located block of the co-located reference frame, and the unidirectional motion information is used as the available motion information.
  • the co-located block is the temporally adjacent block in the co-located reference frame corresponding to the target position of the current block.
  • the target position can be, for example, the pixel position of the upper left corner, the pixel position of the upper right corner, the pixel position of the lower left corner, or the pixel position of the lower right corner of the current block, etc.
  • after the weight prediction angle and the weight prediction position of the current block are determined, the weight matrix of the current block can be obtained. As shown in FIG. 9B, since the upper right weight part accounts for a small proportion (the black part), the selection of temporal motion information can be biased towards the upper right weight part, so as to provide suitable candidate motion information. For example, the co-located block can be the temporally adjacent block corresponding to the upper right corner pixel position of the current block (i.e., the weight portion with the smaller proportion); that is, the target position of the current block is the pixel position of the upper right corner of the current block.
  • the target position of the current block is the pixel position of the lower right corner of the current block.
  • the target position of the current block is the pixel position of the lower left corner of the current block.
  • for each piece of available motion information (such as spatial motion information, temporal motion information, etc.), one or a combination of the following modes can be used to add the available motion information to the motion information candidate list:
  • Mode 31: Do not perform any duplication check. For example, if the available motion information is unidirectional motion information, the unidirectional motion information is added to the motion information candidate list. If the available motion information is bidirectional motion information, the bidirectional motion information is trimmed into first unidirectional motion information and second unidirectional motion information, and the trimmed unidirectional motion information is added to the motion information candidate list; for example, the first unidirectional motion information can be added to the motion information candidate list.
  • Mode 32: Perform a half duplication check; that is, the other unidirectional part of the bidirectional motion information is not checked. For example, if the available motion information is unidirectional motion information, and the unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the unidirectional motion information is added to the motion information candidate list. If the available motion information is bidirectional motion information, the bidirectional motion information is trimmed into first unidirectional motion information and second unidirectional motion information, and one of them is added to the motion information candidate list; for example, if the first unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the first unidirectional motion information is added to the motion information candidate list.
  • Mode 33: Perform a full duplication check. For example, if the available motion information is unidirectional motion information, and the unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the unidirectional motion information is added to the motion information candidate list. If the available motion information is bidirectional motion information, the bidirectional motion information is trimmed into first unidirectional motion information and second unidirectional motion information. If the first unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the first unidirectional motion information is added to the motion information candidate list; if the first unidirectional motion information duplicates a candidate motion information already in the motion information candidate list, and the second unidirectional motion information does not duplicate any candidate motion information already in the motion information candidate list, the second unidirectional motion information is added to the motion information candidate list.
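The full-duplication-check insertion (Mode 33) can be sketched as follows, using an assumed dict-based record layout: a unidirectional record is `{"poc": ..., "mv": ...}`, and bidirectional motion information arrives already trimmed into `{"first": uni, "second": uni}`. The first unidirectional part is tried first; the second is the fallback when the first duplicates an existing candidate.

```python
def dup(mi, cand_list):
    # POC + MV duplication check, as described in the embodiment above
    return any(mi["poc"] == c["poc"] and mi["mv"] == c["mv"] for c in cand_list)

def add_mode33(cand_list, info):
    """Add one piece of available motion information to the candidate list
    under the full duplication check (Mode 33)."""
    if "first" in info:                        # trimmed bidirectional info
        if not dup(info["first"], cand_list):
            cand_list.append(info["first"])
        elif not dup(info["second"], cand_list):
            cand_list.append(info["second"])
    elif not dup(info, cand_list):             # unidirectional info
        cand_list.append(info)
```

Mode 32 is the same sketch without the `elif` fallback, and Mode 31 appends without calling `dup` at all.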
  • the first unidirectional motion information may be unidirectional motion information pointing to a reference frame in the first reference frame list; the second unidirectional motion information may be unidirectional motion information pointing to a reference frame in the second reference frame list.
  • for example, if the total number of candidate motion information already in the motion information candidate list is even, the first reference frame list is List0 and the second reference frame list is List1; if the total number of candidate motion information already in the motion information candidate list is odd, the first reference frame list is List1 and the second reference frame list is List0.
  • or, if the total number of candidate motion information already in the motion information candidate list is odd, the first reference frame list is List0 and the second reference frame list is List1; if the total number is even, the first reference frame list is List1 and the second reference frame list is List0.
  • or, the first reference frame list is List0, and the second reference frame list is List1.
  • or, the first reference frame list is List1, and the second reference frame list is List0.
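The parity-based alternatives above can be sketched in one helper. The flag name is an assumption used purely to select between the two stated conventions; the two fixed alternatives correspond to ignoring the parity entirely.

```python
def choose_reference_lists(num_candidates, even_first_is_list0=True):
    """Return (first_reference_frame_list, second_reference_frame_list)
    based on the parity of the number of candidates already in the motion
    information candidate list. With even_first_is_list0=True: even ->
    (List0, List1), odd -> (List1, List0); the other convention flips it."""
    first_is_list0 = (num_candidates % 2 == 0) == even_first_is_list0
    return ("List0", "List1") if first_is_list0 else ("List1", "List0")
```

Alternating the first list with the parity spreads the retained unidirectional parts across both reference frame lists as the candidate list grows.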
  • in the above process, a duplication check operation is performed between the unidirectional motion information and the candidate motion information already in the motion information candidate list, and the result of the duplication check operation may be "duplicate" or "not duplicate". The process also involves checking the motion information of two spatially adjacent blocks against each other, and the result of the check may likewise be "duplicate" or "not duplicate". It further involves performing a duplication check operation between the bidirectional motion information and the candidate motion information already in the motion information candidate list, and the result of the duplication check operation may be "duplicate" or "not duplicate".
  • for convenience of description, the motion information to be checked is denoted as motion information 1, and the motion information against which it is checked is denoted as motion information 2. For example, motion information 1 is the unidirectional motion information to be checked, and motion information 2 is candidate motion information already in the motion information candidate list; or, motion information 1 is the bidirectional motion information to be checked, and motion information 2 is candidate motion information already in the motion information candidate list; or, motion information 1 is the motion information of a spatially adjacent block whose availability is being determined, and motion information 2 is the motion information of a spatially adjacent block whose availability has already been determined (see Mode 12: for example, when the motion information of F is available and it is necessary to determine whether the motion information of G is available, motion information 1 is the motion information of G and motion information 2 is the motion information of F).
  • the duplicate checking operation can be implemented in the following ways:
  • Mode 41: Perform the duplication check operation based on List+refIdx+MV_x+MV_y. For example, first check whether the pointed reference frame lists are the same (that is, whether each points to List0, to List1, or to both List0 and List1), then check whether the refIdx values are the same, and then check whether the MVs are the same (that is, whether the horizontal components are the same and whether the vertical components are the same).
  • for example, if the reference frame list pointed to by motion information 1 and the reference frame list pointed to by motion information 2 are both List0, the two are the same; or, if the reference frame list pointed to by motion information 1 and the reference frame list pointed to by motion information 2 are both List1, the two are the same; or, if the reference frame list pointed to by motion information 1 is List0 and the reference frame list pointed to by motion information 2 is List1, the two are different; or, if the reference frame list pointed to by motion information 1 is List1 and the reference frame list pointed to by motion information 2 is List0, the two are different; or, if the reference frame list pointed to by motion information 1 is List0 and the reference frame lists pointed to by motion information 2 are List0 and List1, the two are different; or, if the reference frame list pointed to by motion information 1 is List1 and the reference frame lists pointed to by motion information 2 are List0 and List1, the two are different.
  • if the reference frame index pointed to by motion information 1 in the reference frame list is the same as the reference frame index pointed to by motion information 2 in the reference frame list, the two refIdx values are the same.
  • if the horizontal component of the motion vector of motion information 1 is the same as the horizontal component of the motion vector of motion information 2, and the vertical component of the motion vector of motion information 1 is the same as the vertical component of the motion vector of motion information 2, the motion vector of motion information 1 is the same as the motion vector of motion information 2.
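The List+refIdx+MV comparison of Mode 41 reduces to three equality checks. A minimal sketch; the dict keys (`"lists"`, `"refidx"`, `"mv"`) are an assumed record layout for illustration:

```python
def same_mode41(a, b):
    """Mode 41 duplication check: the pointed reference frame list(s),
    the reference frame index, and the motion vector components must all
    match for the two pieces of motion information to be duplicates."""
    return (a["lists"] == b["lists"]        # List0, List1, or both
            and a["refidx"] == b["refidx"]  # reference frame index refIdx
            and a["mv"] == b["mv"])         # (MV_x, MV_y)
```

Unlike the POC-based check of Mode 42, this variant can report "not duplicate" for two records that reference the same physical frame through different lists.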
  • Mode 42: Perform the duplication check operation based on POC+MV_x+MV_y. For example, first check whether the POCs of the pointed reference frames are the same; that is, even if motion information 1 points to the reference frame with reference frame index refIdx0 in List0 and motion information 2 points to the reference frame with reference frame index refIdx1 in List1, the two are determined to be the same as long as the POCs are equal (this applies to duplication checks between unidirectional motion information and unidirectional motion information, and between bidirectional motion information and bidirectional motion information); then check whether the MVs are the same (that is, whether the horizontal components are the same and whether the vertical components are the same).
  • for example, first query whether the POC of the reference frame pointed to by motion information 1 and the POC of the reference frame pointed to by motion information 2 are the same; if they are different, motion information 1 and motion information 2 do not duplicate. If they are the same, continue to check whether the motion vector of motion information 1 and the motion vector of motion information 2 are the same; if they are different, motion information 1 and motion information 2 do not duplicate; if they are the same, motion information 1 and motion information 2 duplicate.
  • that the POCs of the pointed reference frames are the same may include: motion information 1 points to the reference frame with reference frame index refIdx0 in List0, motion information 2 points to the reference frame with reference frame index refIdx0 in List0, and the POC of the reference frame pointed to by motion information 1 is the same as the POC of the reference frame pointed to by motion information 2.
  • or, motion information 1 points to the reference frame with reference frame index refIdx1 in List1, motion information 2 points to the reference frame with reference frame index refIdx1 in List1, and the POCs of the two pointed reference frames are the same.
  • or, motion information 1 points to the reference frame with reference frame index refIdx0 in List0, motion information 2 points to the reference frame with reference frame index refIdx1 in List1, and the POCs of the two pointed reference frames are the same.
  • or, motion information 1 points to the reference frame with reference frame index refIdx1 in List1, motion information 2 points to the reference frame with reference frame index refIdx0 in List0, and the POCs of the two pointed reference frames are the same.
  • or, motion information 1 points to the reference frame with reference frame index refIdx0 in List0 and the reference frame with reference frame index refIdx1 in List1, and motion information 2 points to the reference frame with reference frame index refIdx2 in List0 and the reference frame with reference frame index refIdx3 in List1; if the POC of the reference frame with index refIdx0 in List0 pointed to by motion information 1 is the same as the POC of the reference frame with index refIdx3 in List1 pointed to by motion information 2, and the POC of the reference frame with index refIdx1 in List1 pointed to by motion information 1 is the same as the POC of the reference frame with index refIdx2 in List0 pointed to by motion information 2, the POCs of the pointed reference frames are the same.
  • the above are just a few examples in which the POCs of the reference frames are the same, which is not limited herein.
  • the above process checks the POC of the reference frame; the POC is only one example for the check operation, and other attributes that can confirm whether the reference frames are the same may also be used, such as the DOC of the reference frame, which is not limited herein. To sum up, the POC above can be replaced by the DOC.
  • the motion information candidate list can be obtained, and the motion information candidate list can also be called AwpUniArray.
  • the following describes the process of obtaining the motion information candidate list with reference to several specific application scenarios. In the subsequent application scenarios, it is assumed that the length of the motion information candidate list is X, that is, X pieces of available motion information need to be added; after a piece of available motion information is added to the motion information candidate list, it is called candidate motion information.
  • Application Scenario 1: Add the spatial motion information to the motion information candidate list. If the list length is equal to X, end the adding process; if the list length is less than X, add the preset motion information to the motion information candidate list until the list length reaches X.
• the available motion information (i.e., spatial motion information) to be added to the motion information candidate list is determined first, for example, by using Manner 11, Manner 12, or Manner 13.
• if the determination involves a duplication check on the motion information of two spatially adjacent blocks, the duplication checking operation may be performed in Manner 41 or Manner 42.
• then, the available motion information is added to the motion information candidate list, for example, by using Manner 31, Manner 32, or Manner 33.
  • a duplication check operation may also be performed on the unidirectional motion information and the candidate motion information already in the motion information candidate list, for example, Mode 41 may be used to perform the duplication checking operation, or, Mode 42 may be used to perform the duplication checking operation.
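The flow of Application Scenario 1 can be sketched as below, under heavy simplifications: motion information entries are plain tuples, tuple equality stands in for the Manner 41/42 duplication check, and preset entries are cycled to pad the list. All names are illustrative.

```python
def build_candidate_list(spatial_infos, preset_infos, X):
    """Application Scenario 1 sketch: add spatial motion information with a
    duplication check, then pad with preset motion information up to length X."""
    candidates = []
    for info in spatial_infos:            # available (spatial) motion information
        if len(candidates) == X:          # list full: end the adding process
            break
        if info not in candidates:        # duplication check before adding
            candidates.append(info)
    i = 0
    while len(candidates) < X and preset_infos:
        candidates.append(preset_infos[i % len(preset_infos)])  # preset padding
        i += 1
    return candidates
```

After the loop the list length is exactly X (given at least one preset entry), matching the "until the list length is X" condition in the text.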
• Application Scenario 2: Add the temporal motion information to the motion information candidate list. If the list length is equal to X, end the adding process. If the list length is less than X, add the preset motion information to the motion information candidate list until the list length is X.
• the available motion information (i.e., temporal motion information) to be added to the motion information candidate list is determined first, for example, by using Manner 21 or Manner 22.
• the available motion information can be added to the motion information candidate list by Manner 31, Manner 32, or Manner 33.
  • a duplication check operation may also be performed on the unidirectional motion information and the candidate motion information already in the motion information candidate list, for example, Mode 41 may be used to perform the duplication checking operation, or, Mode 42 may be used to perform the duplication checking operation.
• Application Scenario 3: Add the spatial motion information and the temporal motion information to the motion information candidate list (the spatial motion information may be located before the temporal motion information, or the temporal motion information may be located before the spatial motion information; the following takes the case where the spatial motion information is located before the temporal motion information as an example).
• if the list length is equal to X, the adding process is ended; if the list length is less than X, the preset motion information is added to the motion information candidate list until the list length is X.
• the available motion information (i.e., spatial motion information and temporal motion information) to be added to the motion information candidate list is determined first.
  • the duplication checking operation involving the motion information of the two adjacent blocks in the spatial domain may be performed in the manner 41, or the duplication checking operation may be performed in the manner 42.
  • the available motion information is added to the motion information candidate list.
  • a duplication check operation may also be performed on the unidirectional motion information and the candidate motion information already in the motion information candidate list, for example, Mode 41 may be used to perform the duplication checking operation, or, Mode 42 may be used to perform the duplication checking operation.
• the spatial motion information and the temporal motion information can be added to the motion information candidate list until the list length is equal to X; or, if the list length is still less than X when the traversal ends, the preset motion information is added to the motion information candidate list until the list length is X.
• Application Scenario 4: After adding the spatial motion information to the motion information candidate list, at least Y positions are reserved for the temporal motion information; the temporal motion information is bidirectional motion information or unidirectional motion information.
• the spatial motion information is added to the motion information candidate list; if the list length is equal to X-Y, the process of adding the spatial motion information is ended, or, if the list length is still less than X-Y when the traversal of the spatial motion information ends, the process of adding the spatial motion information is also ended. Then, the temporal motion information is added to the motion information candidate list.
• if the list length is equal to X, the process of adding the temporal motion information is ended; or, if the list length is still less than X when the traversal of the temporal motion information ends, the addition of the temporal motion information ends.
  • the preset motion information is added to the motion information candidate list until the length of the list is X.
• the available motion information (i.e., spatial motion information and temporal motion information) to be added to the motion information candidate list is determined first.
  • the duplication checking operation involving the motion information of the two adjacent blocks in the spatial domain may be performed in the manner 41, or the duplication checking operation may be performed in the manner 42.
• the available motion information is added to the motion information candidate list until the list length is equal to X-Y, or until the traversal of the spatial motion information ends.
• then the temporal motion information (i.e., the available motion information) is determined.
  • the available motion information is added to the motion information candidate list until the length of the list is equal to X, or the traversal of the temporal motion information ends.
• the available motion information is added to the motion information candidate list by using Manner 31, Manner 32, or Manner 33.
  • a duplication check operation may also be performed on the unidirectional motion information and the candidate motion information already in the motion information candidate list, for example, Mode 41 may be used to perform the duplication checking operation, or, Mode 42 may be used to perform the duplication checking operation.
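Application Scenario 4 differs from the earlier scenarios only in capping the spatial entries at X - Y so that at least Y slots remain for temporal motion information. A sketch under the same simplifying assumptions as before (tuple entries, equality as the duplication check, illustrative names):

```python
def build_candidate_list_reserved(spatial_infos, temporal_infos,
                                  preset_infos, X, Y):
    """Application Scenario 4 sketch: spatial entries fill at most X - Y slots,
    temporal entries may then fill up to X, preset entries pad the rest."""
    candidates = []
    for info in spatial_infos:            # spatial phase, capped at X - Y
        if len(candidates) == X - Y:
            break
        if info not in candidates:
            candidates.append(info)
    for info in temporal_infos:           # temporal phase, capped at X
        if len(candidates) == X:
            break
        if info not in candidates:
            candidates.append(info)
    i = 0
    while len(candidates) < X and preset_infos:
        candidates.append(preset_infos[i % len(preset_infos)])
        i += 1
    return candidates
```

With X = 5 and Y = 1, even if five distinct spatial entries are available only four are taken, leaving one reserved slot for the temporal entry.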
  • X can take any positive integer, for example, the value of X can be 4, 5, and so on.
  • preset motion information (also referred to as default motion information) may be added to the motion information candidate list.
  • the implementation of the preset motion information may include but not be limited to the following methods:
  • the preset motion information is zero motion information, such as a zero motion vector that points to ListX and whose refIdx is less than the number of reference frames in ListX.
  • a 0-fill operation can be performed, that is, the motion information can be (0, 0, ref_idx3, ListZ).
• Manner 52: Default motion information derived based on candidate motion information existing in the motion information candidate list.
• let temp_val denote the absolute value of the x-axis or y-axis motion vector component to be enlarged or reduced.
  • the motion vector in the x-axis direction of the default motion information is 8, or the motion vector in the x-axis direction of the default motion information is -8.
  • the motion vector in the y-axis direction of the default motion information is 8, or the motion vector in the y-axis direction of the default motion information is -8.
• if the motion vector in the x-axis direction of the candidate motion information is positive, and the absolute value of the motion vector in the x-axis direction (that is, temp_val) is less than or equal to 64, then the motion vector in the x-axis direction of the default motion information is (temp_val*5+2)>>2; if the motion vector in the x-axis direction of the candidate motion information is negative, and the absolute value of the motion vector in the x-axis direction (i.e., temp_val) is less than or equal to 64, then the motion vector in the x-axis direction of the default motion information is -((temp_val*5+2)>>2).
• the derivation of the motion vector in the y-axis direction of the default motion information is similar to that of the motion vector in the x-axis direction.
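The per-component derivation of Manner 52 can be written out as below. Only the 0 < |v| <= 64 branch is specified explicitly in the text; mapping the remaining cases to the fixed value +/-8 mentioned above is an assumption made here to complete the sketch.

```python
def derive_default_component(v):
    """Derive one default-motion-vector component from a candidate component v.

    Documented branch: for 0 < |v| <= 64, the magnitude is enlarged to
    (|v| * 5 + 2) >> 2 (roughly 5/4 of |v|, rounded), keeping the sign.
    Other cases fall back to +/-8, which is an assumption.
    """
    temp_val = abs(v)
    sign = -1 if v < 0 else 1
    if 0 < temp_val <= 64:
        return sign * ((temp_val * 5 + 2) >> 2)
    return sign * 8
```

For example, a candidate component of 64 yields (64*5+2)>>2 = 80, and -4 yields -5.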
• Manner 53: Default motion information derived based on candidate motion information existing in the motion information candidate list.
• ref_idx and ListX are the reference frame index and the reference frame list respectively, and the two are collectively referred to as reference frame information.
• at least one of the following motion information can be added: (x+a, y+b, ref_idx, ListX), where a and b can be any integer; (k1*x, k1*y, ref_idx_new1, ListX), where k1 is any non-zero positive integer, that is, a scaling operation is performed on the motion vector; (k2*x, k2*y, ref_idx_new2, ListY), where k2 is any non-zero positive integer, that is, a scaling operation is performed on the motion vector.
• Manner 54: The preset motion information is candidate motion information existing in the motion information candidate list, that is, a padding operation is performed; the repeated padding operation may be performed by using the candidate motion information existing in the motion information candidate list.
  • the last unidirectional motion information existing in the motion information candidate list may be used for repeated filling.
  • the preset motion information of Mode 51 can be added to the motion information candidate list until the list length is X.
  • the preset motion information in mode 52 may be added to the motion information candidate list until the length of the list is X.
  • the preset motion information in mode 53 may be added to the motion information candidate list until the length of the list is X.
  • the preset motion information of mode 51 and mode 52 may be added to the motion information candidate list until the length of the list is X.
  • the preset motion information of mode 51 and mode 53 may be added to the motion information candidate list until the length of the list is X.
  • the preset motion information of mode 52 and mode 53 may be added to the motion information candidate list until the length of the list is X.
  • the preset motion information of mode 51, mode 52 and mode 53 may be added to the motion information candidate list until the list length is X.
• the preset motion information in Manner 54 may be added to the motion information candidate list, that is, the candidate motion information existing in the motion information candidate list is used for repeated filling until the list length is X.
• alternatively, the preset motion information of Manner 54 may be directly added to the motion information candidate list, that is, the candidate motion information existing in the motion information candidate list is used for repeated filling until the list length is X.
  • the preset motion information may be unidirectional motion information or bidirectional motion information. If the preset motion information is unidirectional motion information, when the preset motion information is added to the motion information candidate list, a duplicate checking operation may or may not be performed. If the duplicate checking operation is not performed, the preset motion information is directly added to the motion information candidate list. If the duplication checking operation is performed, the duplication checking operation may be performed in the manner 41, or the duplication checking operation may be performed in the manner 42. If the result of the duplicate checking operation is non-repetitive, the preset motion information is added to the motion information candidate list. If the result of the duplicate checking operation is duplicate, the preset motion information is not added to the motion information candidate list.
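The add-with-optional-check behavior for preset motion information can be summarized as a small helper. Tuple equality again stands in for the Manner 41/42 duplication check, and all names are illustrative.

```python
def add_preset(candidates, preset, X, check_duplicates=True):
    """Try to add one preset motion information entry to the candidate list.

    Returns True if the entry was added. With check_duplicates set, a duplicate
    entry is rejected (the "result is duplicate" case); without it, the preset
    motion information is added to the list directly.
    """
    if len(candidates) >= X:
        return False                      # list already at its target length
    if check_duplicates and preset in candidates:
        return False                      # duplicate: do not add
    candidates.append(preset)
    return True
```

Calling the helper with check_duplicates=False models the "duplicate checking operation is not performed" branch described above.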
• Embodiment 15: In Embodiment 1 to Embodiment 3, after obtaining the motion information candidate list (see Embodiments 13 and 14), the encoder/decoder may obtain the first target motion information and the second target motion information of the current block based on the motion information candidate list. The first target motion information and the second target motion information may be acquired in the following ways:
  • Manner 1 Select one candidate motion information from the motion information candidate list as the first target motion information of the current block, and select another candidate motion information from the motion information candidate list as the second target motion information of the current block.
• for both the encoding end and the decoding end, a motion information candidate list can be obtained; the motion information candidate list of the encoder is the same as that of the decoder, and the manner of constructing the motion information candidate list is not limited.
• one candidate motion information can be selected from the motion information candidate list as the first target motion information of the current block, and another candidate motion information can be selected from the motion information candidate list as the second target motion information of the current block; the first target motion information is different from the second target motion information, which is not limited.
• the encoded bit stream may carry indication information a and indication information b. The indication information a is used to indicate the index value 1 of the first target motion information of the current block, and the index value 1 indicates which candidate motion information in the motion information candidate list is the first target motion information.
• the indication information b is used to indicate the index value 2 of the second target motion information of the current block, and the index value 2 indicates which candidate motion information in the motion information candidate list is the second target motion information.
  • index value 1 and index value 2 may be different.
  • the decoding end After receiving the encoded bit stream, the decoding end parses out the indication information a and the indication information b from the encoded bit stream. Based on the indication information a, the decoding end selects the candidate motion information corresponding to the index value 1 from the motion information candidate list, and the candidate motion information is used as the first target motion information of the current block. Based on the indication information b, the decoding end selects the candidate motion information corresponding to the index value 2 from the motion information candidate list, and the candidate motion information is used as the second target motion information of the current block.
• the encoded bit stream may carry indication information a and indication information c. The indication information a may be used to indicate the index value 1 of the first target motion information of the current block, and the index value 1 indicates which candidate motion information in the motion information candidate list is the first target motion information.
• the indication information c may be used to indicate the difference between the index value 2 and the index value 1, and the index value 2 indicates which candidate motion information in the motion information candidate list is the second target motion information.
  • the index value 1 and the index value 2 may be different.
  • the decoding end After receiving the encoded bit stream, the decoding end can parse out the indication information a and the indication information c from the encoded bit stream. Based on the indication information a, the decoding end may select candidate motion information corresponding to the index value 1 from the motion information candidate list, and the candidate motion information is used as the first target motion information of the current block. Based on the indication information c, the decoding end first determines the index value 2 according to the difference between the index value 2 and the index value 1, and the index value 1, and then the decoding end can select the candidate motion information corresponding to the index value 2 from the motion information candidate list, The candidate motion information is used as the second target motion information of the current block.
  • the indication information of the first target motion information and the indication information of the second target motion information can be exchanged, and the encoding end and the decoding end only need to be consistent.
  • the exchange of the indication information does not affect the parsing process. That is, there is no parsing dependency.
  • the indication information of the first target motion information and the indication information of the second target motion information cannot be equal.
• for example, if the index value a is 1 and the index value b is 3: when the index value a is encoded first, the index value b can be encoded as 2 (i.e., 3-1); when the index value b is encoded first, the index value b needs to be encoded as 3.
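The saving in the example comes from the fact that the two index values can never be equal: once the first index is known, codewords for the second can skip over it. A sketch of this trick (the handling of the b < a case is an inference, since the text only shows the b > a case):

```python
def encode_second_index(a, b):
    """Encode index b after index a has been encoded; a != b is guaranteed,
    so values above a can be shifted down by one."""
    assert a != b
    return b - 1 if b > a else b

def decode_second_index(a, coded):
    """Invert encode_second_index once a is known."""
    return coded + 1 if coded >= a else coded
```

With a = 1 and b = 3, the second index is transmitted as 2, matching the example above; decoding maps it back to 3.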
• that is, the indication information of the first target motion information (such as the index value a) and the indication information of the second target motion information (such as the index value b) may be encoded in either order: when the index value a is encoded first, the index value b is encoded afterwards; when the index value b is encoded first, the index value a is encoded afterwards.
• Manner 2: Select candidate motion information from the motion information candidate list as the first original motion information of the current block, and select candidate motion information from the motion information candidate list as the second original motion information of the current block. The first original motion information and the second original motion information may be different, that is, two different candidate motion information are selected from the motion information candidate list as the first original motion information and the second original motion information; or, the first original motion information may be the same as the second original motion information, that is, the same candidate motion information is selected from the motion information candidate list as the first original motion information and the second original motion information.
  • the first target motion information of the current block is determined according to the first original motion information
  • the second target motion information of the current block is determined according to the second original motion information.
  • the first target motion information and the second target motion information may be different.
• the first original motion information includes the first original motion vector, the first target motion information includes the first target motion vector, the second original motion information includes the second original motion vector, and the second target motion information includes the second target motion vector.
• the first motion vector difference (that is, the MVD) corresponding to the first original motion vector may be acquired, and the first target motion vector is determined according to the first motion vector difference and the first original motion vector (that is, the sum of the first motion vector difference and the first original motion vector is used as the first target motion vector).
• similarly, the second motion vector difference corresponding to the second original motion vector can be acquired, and the second target motion vector is determined according to the second motion vector difference and the second original motion vector (that is, the sum of the second motion vector difference and the second original motion vector is used as the second target motion vector).
• alternatively, when determining the first target motion vector, the first motion vector difference may not be superimposed, that is, the first original motion vector is determined as the first target motion vector; and when determining the second target motion vector, the second motion vector difference may be superimposed, that is, the second target motion vector is determined according to the second motion vector difference and the second original motion vector.
• or, the second motion vector difference may not be superimposed, that is, the second original motion vector is determined as the second target motion vector, while the first motion vector difference may be superimposed, that is, the first target motion vector is determined according to the first motion vector difference and the first original motion vector.
• or, the first motion vector difference may be superimposed, that is, the first target motion vector is determined according to the first motion vector difference and the first original motion vector, and the second motion vector difference may also be superimposed, that is, the second target motion vector is determined according to the second motion vector difference and the second original motion vector.
  • direction information and amplitude information of the first motion vector difference may be acquired, and the first motion vector difference may be determined according to the direction information and amplitude information of the first motion vector difference.
  • the direction information and amplitude information of the second motion vector difference may be acquired, and the second motion vector difference may be determined according to the direction information and the amplitude information of the second motion vector difference.
• the direction information of the first motion vector difference may be obtained in the following manner: the decoding end parses the direction information of the first motion vector difference from the encoded bit stream of the current block; or, the decoding end derives the direction information of the first motion vector difference from the weight prediction angle of the current block.
• the direction information of the second motion vector difference can be obtained in the following manner: the decoding end parses the direction information of the second motion vector difference from the encoded bit stream of the current block; or, the decoding end derives the direction information of the second motion vector difference from the weight prediction angle of the current block.
  • the amplitude information of the first motion vector difference may be acquired in the following manner: the amplitude information of the first motion vector difference is parsed from the encoded bit stream of the current block.
  • the amplitude information of the second motion vector difference may be acquired in the following manner: the amplitude information of the second motion vector difference is parsed from the coded bit stream of the current block.
• the encoding end and the decoding end may agree on the direction information and amplitude information of the motion vector difference. If the direction information indicates that the direction is to the right, and the amplitude information indicates that the amplitude is A, the motion vector difference is (A, 0); if the direction is downward, the motion vector difference is (0, -A); if the direction is to the left, the motion vector difference is (-A, 0); if the direction is upward, the motion vector difference is (0, A); if the direction is up-right, the motion vector difference is (A, A); if the direction is up-left, the motion vector difference is (-A, A); if the direction is down-left, the motion vector difference is (-A, -A); if the direction is down-right, the motion vector difference is (A, -A).
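The agreed mapping can be tabulated directly. The y component is positive for "up", matching the convention in the text (down is (0, -A)); the direction name strings are illustrative labels.

```python
# Unit vectors for the agreed directions; "down" maps to (0, -1) as in the text.
DIRECTIONS = {
    "right": (1, 0),   "down": (0, -1),  "left": (-1, 0),   "up": (0, 1),
    "up-right": (1, 1), "up-left": (-1, 1),
    "down-left": (-1, -1), "down-right": (1, -1),
}

def motion_vector_difference(direction, amplitude):
    """Combine direction information and amplitude information into an MVD."""
    dx, dy = DIRECTIONS[direction]
    return (dx * amplitude, dy * amplitude)
```

For instance, direction "down" with amplitude 8 yields the motion vector difference (0, -8).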
• the motion vector difference can support part or all of the above-mentioned direction information, and the value range of the amplitude A supported by the motion vector difference is configured empirically to include at least one value, which is not limited.
• for example, the motion vector difference supports the four directions up, down, left and right, and the motion vector difference supports the following 5 types of step size configurations: 1/4-pel, 1/2-pel, 1-pel, 2-pel, and 4-pel; that is, the values of the amplitude A are 1, 2, 4, 8, and 16 (in quarter-pel units).
• when the direction is up, the motion vector difference can be (0, 1), (0, 2), (0, 4), (0, 8), or (0, 16).
• when the direction is down, the motion vector difference can be (0, -1), (0, -2), (0, -4), (0, -8), or (0, -16).
• when the direction is left, the motion vector difference can be (-1, 0), (-2, 0), (-4, 0), (-8, 0), or (-16, 0).
• when the direction is right, the motion vector difference can be (1, 0), (2, 0), (4, 0), (8, 0), or (16, 0).
• for another example, the motion vector difference supports the four directions up, down, left and right, and the motion vector difference supports the following 6 types of step size configurations: 1/4-pel, 1/2-pel, 1-pel, 2-pel, 3-pel, and 4-pel; that is, the value of the amplitude A can be 1, 2, 4, 8, 12, or 16.
• for another example, the motion vector difference supports the eight directions up, down, left, right, upper left, lower left, upper right, and lower right, and the motion vector difference supports the following 3 types of step size configurations: 1/4-pel, 1/2-pel, and 1-pel; that is, the value of the amplitude A can be 1, 2, or 4.
  • the motion vector difference supports four directions such as up, down, left, and right, and the motion vector difference supports the following 4 types of step size configurations: 1/4-pel, 1/2-pel, 1-pel, 2-pel, That is, the value of the amplitude A can be 1, 2, 4, or 8.
  • the direction supported by the motion vector difference can be selected arbitrarily, for example, six directions of up, down, left, right, upper left, and lower left can be supported, or two directions of up and down can be supported.
  • the step size configuration supported by the motion vector difference is variable, which can be flexibly configured.
• for example, the step size configuration can be adaptively configured according to coding parameters such as the quantization parameter QP, e.g., selecting among step sizes such as 1/4-pel, 1/2-pel, 1-pel, and 2-pel.
• alternatively, a suitable step size configuration can be specified at the sequence level, picture level, frame level, slice level, tile level, patch level, CTU level, etc., so that the decoder can use the step size configuration parsed at the corresponding level for decoding operations.
• for example, the motion vector difference can be (0, 4), (0, 8), (0, -4), or (0, -8), i.e., (0, 1<<2), (0, 1<<3), (0, -1<<2), (0, -1<<3).
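The amplitude values follow from storing motion vectors in quarter-pel units: a step of s pel corresponds to an amplitude A = s * 4, which for power-of-two steps is exactly the left-shift form used above. A small sketch:

```python
from fractions import Fraction

def amplitude_from_step(step_pel):
    """Convert a step size in pel (e.g. "1/2") to an amplitude in quarter-pel
    units; the step must be a multiple of 1/4 pel."""
    a = Fraction(step_pel) * 4
    assert a.denominator == 1, "step must be a multiple of 1/4 pel"
    return int(a)
```

The 5-step configuration 1/4-, 1/2-, 1-, 2-, 4-pel yields the amplitudes 1, 2, 4, 8, 16; a 2-pel step gives 8, i.e. 1 << 3.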
• after acquiring the motion information candidate list, the encoder traverses each candidate motion information combination in the motion information candidate list in turn. A candidate motion information combination includes two candidate motion information: one candidate motion information is used as the first original motion information, and the other candidate motion information is used as the second original motion information.
  • the first original motion information and the second original motion information may be the same (that is, the two candidate motion information selected from the motion information candidate list are the same), or may be different. If the first original motion information is the same as the second original motion information, it can be ensured that the first target motion information and the second target motion information are different by superimposing different motion vector differences.
  • the motion vector difference combinations are sequentially traversed, and the motion vector difference combinations include a first motion vector difference and a second motion vector difference, and the first motion vector difference and the second motion vector difference may be the same or different.
• for example, if there are two motion vector differences, motion vector difference 1 and motion vector difference 2, then motion vector difference combination 1 is (motion vector difference 1, motion vector difference 1), motion vector difference combination 2 is (motion vector difference 1, motion vector difference 2), motion vector difference combination 3 is (motion vector difference 2, motion vector difference 1), and motion vector difference combination 4 is (motion vector difference 2, motion vector difference 2), where the first element of each combination is the first motion vector difference and the second element is the second motion vector difference.
  • the first motion vector difference and the second motion vector difference may be the same or different. If the first original motion information and the second original motion information are the same, the first motion vector difference and the second motion vector difference may be different.
• for each combination, the sum of the motion vector of the first original motion information and the first motion vector difference is used as the first target motion vector, the sum of the motion vector of the second original motion information and the second motion vector difference is used as the second target motion vector, and the rate-distortion cost value is determined based on the first target motion vector and the second target motion vector; the determination method is not limited.
  • the above processing is performed for each combination of candidate motion information and each combination of motion vector differences, and the rate-distortion cost value is obtained.
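The encoder-side traversal can be sketched as a brute-force double loop. The cost function is a stand-in (the text leaves the rate-distortion computation open), and motion information is reduced here to bare motion vectors; all names are illustrative.

```python
from itertools import product

def search_best(candidates, mvds, cost_of):
    """Traverse every (first, second) candidate pair and every MVD combination,
    returning (cost, i, j, mvd1, mvd2) with the minimum rate-distortion cost."""
    best = None
    for (i, mv1), (j, mv2) in product(enumerate(candidates), repeat=2):
        for mvd1, mvd2 in product(mvds, repeat=2):
            t1 = (mv1[0] + mvd1[0], mv1[1] + mvd1[1])  # first target MV
            t2 = (mv2[0] + mvd2[0], mv2[1] + mvd2[1])  # second target MV
            if t1 == t2:
                continue  # the two target motion vectors must differ
            cost = cost_of(t1, t2)
            if best is None or cost < best[0]:
                best = (cost, i, j, mvd1, mvd2)
    return best
```

The winning indices i and j (and, where applicable, the winning MVD information) are what the encoder writes into the bitstream.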
• the encoder encodes, in the encoded bitstream of the current block, the index value in the motion information candidate list of the first original motion information corresponding to the minimum rate-distortion cost value, and the index value in the motion information candidate list of the second original motion information corresponding to the minimum rate-distortion cost value.
  • the indication information of the direction information may be 0, which is used to represent the first direction in the direction list.
  • the indication information of the magnitude information may be 0, indicating the first step size configuration in the step size configuration list.
  • the motion vector difference supports four directions such as up, down, left, and right
• the motion vector difference supports 1/4-pel, 1/2-pel, 1-pel, 2-pel, and 4-pel, i.e., 5 types of step size configurations.
  • the direction information of the motion vector difference can be encoded with a 2bin fixed length code (4 types of values in total), and the four values of the 2bin fixed length code represent four directions of up, down, left and right respectively.
  • the amplitude information of the motion vector difference can be encoded by using a truncated unary code, that is, five types of step size configurations are represented by the truncated unary code.
  • the motion vector difference supports four directions of up, down, left and right
• the motion vector difference supports 1/4-pel, 1/2-pel, 1-pel, 2-pel, 3-pel, and 4-pel, i.e., 6 types of step size configurations.
  • the direction information of the motion vector difference can be encoded with a 2bin fixed-length code (4 types of values in total), and the amplitude information of the motion vector difference can be encoded with a truncated unary code.
  • the motion vector difference supports eight directions of up, down, left, right, upper left, lower left, upper right, and lower right
  • the motion vector difference supports 3 types of step size configurations, such as 1/4-pel, 1/2-pel, and 1-pel.
  • the direction information of the motion vector difference can be coded with a 3-bin fixed-length code (8 types of values in total), and the amplitude information of the motion vector difference can be coded with a truncated unary code.
  • the motion vector difference supports four directions of up, down, left, and right, and the motion vector difference supports 4 types of step size configurations: 1/4-pel, 1/2-pel, 1-pel, and 2-pel.
  • the direction information of the motion vector difference can be coded by using a truncated unary code
  • the amplitude information of the motion vector difference can be coded by using a 2-bin fixed-length code (4 types of values in total).
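The fixed-length and truncated unary binarizations referred to in the configurations above can be sketched as follows. This is a minimal illustration of the two binarization schemes, not codec-conformant entropy coding; the helper names are hypothetical.

```python
def fixed_length_code(index, num_bins):
    # Fixed-length code: write the index as num_bins binary digits (MSB first).
    # With num_bins = 2 this yields the four codewords 00, 01, 10, 11,
    # e.g. one codeword per direction (up, down, left, right).
    return format(index, "0{}b".format(num_bins))

def truncated_unary_code(index, max_index):
    # Truncated unary code: 'index' ones followed by a terminating zero,
    # except that the largest index omits the terminating zero.
    if index < max_index:
        return "1" * index + "0"
    return "1" * index

# Example: 4 directions with a 2-bin fixed-length code, and
# 5 step-size configurations with a truncated unary code.
directions = {d: fixed_length_code(i, 2)
              for i, d in enumerate(["up", "down", "left", "right"])}
steps = {s: truncated_unary_code(i, 4)
         for i, s in enumerate(["1/4-pel", "1/2-pel", "1-pel", "2-pel", "4-pel"])}
```

Note that the truncated unary code gives shorter codewords to smaller indices, which is why it suits the step-size index, while the fixed-length code treats all four directions as equally likely.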
  • the optimal motion vector (i.e., the target motion vector)
  • the difference between the optimal motion vector and the original motion vector is used as the motion vector difference (MVD)
  • when the coding end searches for the best motion vector in a certain area, it needs to agree on the direction and magnitude of the motion vector difference; that is, it searches for the best motion vector among motion vector differences within a limited range, such as (A, 0), (0, -A), (-A, 0), (0, A), (A, A), (-A, A), (-A, -A), and (A, -A).
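The limited search described above can be sketched as follows. The cost function here is a stand-in for a real rate-distortion cost, and all names are illustrative rather than part of any codec.

```python
def candidate_mvds(amplitudes, eight_directions=False):
    # Enumerate motion vector differences of the agreed directions and amplitudes.
    offsets = []
    for a in amplitudes:
        offsets += [(a, 0), (0, -a), (-a, 0), (0, a)]        # right, down, left, up
        if eight_directions:
            offsets += [(a, a), (-a, a), (-a, -a), (a, -a)]  # diagonal directions
    return offsets

def search_best_mv(original_mv, amplitudes, cost_fn):
    # Try each agreed MVD around the original motion vector and keep the
    # candidate with the smallest cost; the chosen MVD is what gets signalled.
    best = None
    for dx, dy in candidate_mvds(amplitudes):
        mv = (original_mv[0] + dx, original_mv[1] + dy)
        c = cost_fn(mv)
        if best is None or c < best[0]:
            best = (c, mv, (dx, dy))
    return best[1], best[2]  # (target motion vector, motion vector difference)
```

Because the search is restricted to the agreed direction/amplitude grid, the encoder only needs to signal a direction index and an amplitude index rather than a full motion vector.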
  • the decoding end can parse the index value of the first original motion information in the motion information candidate list from the encoded bit stream, select the candidate motion information corresponding to the index value from the motion information candidate list, and use the candidate motion information as the first original motion information of the current block.
  • the decoding end can parse the index value of the second original motion information in the motion information candidate list from the encoded bit stream, select the candidate motion information corresponding to the index value from the motion information candidate list, and use the candidate motion information as the second original motion information of the current block.
  • the decoding end may also parse out the direction information and amplitude information of the first motion vector difference from the encoded bit stream, and determine the first motion vector difference according to the direction information and the amplitude information. And, the direction information and amplitude information of the second motion vector difference are parsed from the encoded bit stream, and the second motion vector difference is determined according to the direction information and the amplitude information.
  • the decoding end may determine the first target motion information of the current block according to the first motion vector difference and the first original motion information, and determine the second target motion information of the current block according to the second motion vector difference and the second original motion information.
  • when the first motion vector difference is determined according to the direction information and the magnitude information of the first motion vector difference: if the direction information indicates that the direction is rightward and the amplitude information indicates that the amplitude is A, the first motion vector difference is (A, 0); if the direction is downward, the first motion vector difference is (0, -A); if the direction is leftward, the first motion vector difference is (-A, 0); and if the direction is upward, the first motion vector difference is (0, A).
  • when the second motion vector difference is determined according to the direction information and the magnitude information of the second motion vector difference: if the direction information indicates that the direction is rightward and the amplitude information indicates that the amplitude is A, the second motion vector difference is (A, 0); if the direction is downward, the second motion vector difference is (0, -A); if the direction is leftward, the second motion vector difference is (-A, 0); and if the direction is upward, the second motion vector difference is (0, A).
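The per-direction mapping spelled out above reduces to a small lookup table. This is a hedged sketch with hypothetical helper names, following exactly the right/down/left/up cases given in the text.

```python
# Mapping from direction information to a unit offset, per the text:
# right -> (A, 0), down -> (0, -A), left -> (-A, 0), up -> (0, A).
_DIRECTION_TO_UNIT = {
    "right": (1, 0),
    "down": (0, -1),
    "left": (-1, 0),
    "up": (0, 1),
}

def motion_vector_difference(direction, amplitude):
    # Build the motion vector difference from its direction and amplitude A.
    ux, uy = _DIRECTION_TO_UNIT[direction]
    return (ux * amplitude, uy * amplitude)

def target_motion_vector(original_mv, direction, amplitude):
    # Target motion information = original motion information + motion vector difference.
    dx, dy = motion_vector_difference(direction, amplitude)
    return (original_mv[0] + dx, original_mv[1] + dy)
```

The same helper serves both the first and the second motion vector difference, since the mapping rules are identical for the two.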
  • when encoding the direction information of the motion vector difference, the encoding end can use fixed-length codes, truncated unary codes, etc.; therefore, the decoding end can use fixed-length codes, truncated unary codes, etc. to decode the direction information of the motion vector difference, obtaining direction information such as up, down, left, right, upper left, lower left, upper right, and lower right.
  • when encoding the amplitude information of the motion vector difference, the encoding end can use fixed-length codes, truncated unary codes, etc.; therefore, the decoding end can use fixed-length codes, truncated unary codes, etc. to decode the amplitude information of the motion vector difference, obtaining a step size configuration such as 1/4-pel, 1/2-pel, 1-pel, or 2-pel, and then determine the value of the amplitude A of the motion vector difference according to that step size configuration.
  • the encoding end may further encode the first sub-mode flag and the second sub-mode flag of the enhanced angle-weighted prediction mode in the encoded bit stream, where the first sub-mode flag indicates that the motion vector difference is superimposed on the first original motion information, or that the motion vector difference is not superimposed on the first original motion information.
  • the second sub-mode flag indicates that the motion vector difference is superimposed on the second original motion information, or the motion vector difference is not superimposed on the second original motion information.
  • the decoding end may first parse the first sub-mode flag and the second sub-mode flag of the enhanced angle-weighted prediction mode from the encoded bit stream of the current block. If the first sub-mode flag indicates that the motion vector difference is superimposed on the first original motion information, the direction information and amplitude information of the first motion vector difference are parsed from the encoded bit stream of the current block, the first motion vector difference is determined according to that direction information and amplitude information, and then the first target motion information of the current block is determined according to the first original motion information and the first motion vector difference.
  • if the first sub-mode flag indicates that the motion vector difference is not superimposed on the first original motion information, the direction information and amplitude information of the first motion vector difference will not be parsed, and the first original motion information can be directly used as the first target motion information of the current block.
  • if the second sub-mode flag indicates that the motion vector difference is superimposed on the second original motion information, the direction information and amplitude information of the second motion vector difference are parsed from the encoded bit stream of the current block, the second motion vector difference is determined according to that direction information and amplitude information, and then the second target motion information of the current block is determined according to the second original motion information and the second motion vector difference.
  • if the second sub-mode flag indicates that the motion vector difference is not superimposed on the second original motion information, the direction information and amplitude information of the second motion vector difference will not be parsed, and the second original motion information can be directly used as the second target motion information of the current block.
  • when the first sub-mode flag of the enhanced angle-weighted prediction mode is the first value, it indicates that the motion vector difference is superimposed on the first original motion information; when the first sub-mode flag of the enhanced angle-weighted prediction mode is the second value, it indicates that the motion vector difference is not superimposed on the first original motion information.
  • when the second sub-mode flag of the enhanced angle-weighted prediction mode is the first value, it indicates that the motion vector difference is superimposed on the second original motion information; when the second sub-mode flag of the enhanced angle-weighted prediction mode is the second value, it indicates that the motion vector difference is not superimposed on the second original motion information.
  • the first value and the second value may be configured according to experience, for example, the first value is 1 and the second value is 0, or, for example, the first value is 0 and the second value is 1.
  • the decoding end parses the direction information of the first motion vector difference and the direction information of the second motion vector difference from the encoded bit stream of the current block.
  • the direction information of the first motion vector difference may be derived according to the weight prediction angle of the current block
  • the direction information of the second motion vector difference is derived according to the weight prediction angle of the current block.
  • the weight prediction angle of the current block represents an angular direction; referring to the 8 angular directions shown in FIG. 6, the weight prediction angle of the current block represents one of the 8 angular directions. The decoding end can select the direction information matching that angular direction from all the direction information (such as up, down, left, right, upper left, lower left, upper right, lower right, etc.), and directly use this direction information as the direction information of the first motion vector difference and the direction information of the second motion vector difference.
  • the direction information matching the angular direction may include: direction information whose angular difference from the angular direction is a preset angle or close to the preset angle, or, among all the direction information, the direction information for which the difference between its angular difference from the angular direction and the preset angle is the smallest.
  • the preset angle can be configured according to experience, for example, the preset angle can be 90 degrees.
  • the direction information of the first motion vector difference may also be derived according to the weight prediction angle of the current block, and the direction information of the second motion vector difference may also be derived according to the weight prediction angle of the current block, without determining the direction information of the first motion vector difference and the direction information of the second motion vector difference by means of a rate-distortion cost value.
  • in this case, the direction information of the first motion vector difference and the direction information of the second motion vector difference may not be encoded; instead, the direction information of the first motion vector difference and the direction information of the second motion vector difference are deduced by the decoding end.
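One way to realize the derivation described above is to pick, among the available directions, the one whose angular difference from the weight prediction angle is closest to the preset angle (90 degrees in the text). The concrete angle assignments and function names below are illustrative assumptions, not taken from the document.

```python
# Illustrative compass angles (degrees) for the eight candidate directions.
DIRECTION_ANGLES = {
    "right": 0, "upper right": 45, "up": 90, "upper left": 135,
    "left": 180, "lower left": 225, "down": 270, "lower right": 315,
}

def derive_mvd_direction(weight_prediction_angle, preset=90):
    # Choose the direction whose angular difference from the weight
    # prediction angle is closest to the preset angle (e.g. 90 degrees).
    def score(direction):
        diff = abs(DIRECTION_ANGLES[direction] - weight_prediction_angle) % 360
        diff = min(diff, 360 - diff)  # angular difference folded into [0, 180]
        return abs(diff - preset)
    return min(DIRECTION_ANGLES, key=score)
```

Because both encoder and decoder apply the same deterministic rule to the same weight prediction angle, the derived direction never needs to be signalled in the bitstream.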
  • Embodiment 16: the decoding end may parse out the amplitude information of the first motion vector difference and the amplitude information of the second motion vector difference from the encoded bit stream.
  • the encoding end and the decoding end can construct the same motion vector difference amplitude list; the encoding end determines the amplitude index of the amplitude information of the first motion vector difference in the motion vector difference amplitude list, and the encoded bit stream includes the amplitude index of the first motion vector difference.
  • the decoding end parses the amplitude index of the first motion vector difference from the encoded bit stream of the current block, and selects the amplitude information corresponding to the amplitude index from the motion vector difference amplitude list; this amplitude information is the amplitude information of the first motion vector difference.
  • the encoding end determines the magnitude index of the magnitude information of the second motion vector difference in the motion vector difference magnitude list, and the encoded bit stream includes the magnitude index of the second motion vector difference.
  • the decoding end parses out the magnitude index of the second motion vector difference from the encoded bit stream of the current block, and selects the magnitude information corresponding to the magnitude index from the motion vector difference magnitude list; this magnitude information is the magnitude information of the second motion vector difference.
  • the encoding end and the decoding end may construct the same at least two motion vector difference amplitude lists, for example, the encoding end and the decoding end construct the same motion vector difference amplitude list 1, and construct the same Motion Vector Difference Magnitude List 2.
  • the encoding end first selects the target motion vector difference amplitude list from all the motion vector difference amplitude lists based on the indication information of the motion vector difference amplitude list; the encoding end then determines the magnitude index of the magnitude information of the first motion vector difference in the target motion vector difference magnitude list, and the encoded bitstream includes the magnitude index of the first motion vector difference.
  • the encoding end may further determine the magnitude index of the magnitude information of the second motion vector difference in the target motion vector difference magnitude list, and the encoded bit stream may include the magnitude index of the second motion vector difference.
  • the decoding end can parse out the magnitude index of the second motion vector difference from the encoded bit stream of the current block, and select the magnitude information corresponding to the magnitude index from the target motion vector difference magnitude list; this is the magnitude information of the second motion vector difference.
  • the indication information of the motion vector difference magnitude list may be indication information at any level, for example, indication information of a sequence-level motion vector difference magnitude list, indication information of a frame-level motion vector difference magnitude list, indication information of a Slice-level motion vector difference magnitude list, indication information of a Tile-level motion vector difference magnitude list, indication information of a Patch-level motion vector difference magnitude list, or indication information of a CTU-level motion vector difference magnitude list; the level of the indication information is not limited.
  • the indication information of the frame-level motion vector difference magnitude list is taken as an example.
  • the indication information of the frame-level motion vector difference magnitude list can be awp_umve_offset_list_flag.
  • awp_umve_offset_list_flag controls the switching of the motion vector offset list.
  • the encoding end and the decoding end may construct a motion vector difference magnitude list 1 and a motion vector difference magnitude list 2, as shown in Table 3 and Table 4.
  • Binarization processing may be performed on the motion vector difference amplitude list 1, and binarization processing may be performed on the motion vector difference amplitude list 2, and this processing method is not limited.
  • for example, a truncated unary code may be used for motion vector difference magnitude list 1, and a truncated unary code may be used for motion vector difference magnitude list 2; the binarization methods of the two lists may be the same or different.
  • awp_umve_offset_list_flag is used to control the switching of the motion vector difference magnitude list, that is, to control whether the motion vector difference magnitude list shown in Table 3 or the motion vector difference magnitude list shown in Table 4 is used. For example, if the value of awp_umve_offset_list_flag is the first value, the motion vector difference magnitude list shown in Table 3 is the target motion vector difference magnitude list, and if the value of awp_umve_offset_list_flag is the second value, the motion vector difference magnitude list shown in Table 4 is the target motion vector difference magnitude list; or, if the value of awp_umve_offset_list_flag is the second value, the motion vector difference magnitude list shown in Table 3 is the target motion vector difference magnitude list, and if the value of awp_umve_offset_list_flag is the first value, the motion vector difference magnitude list shown in Table 4 is the target motion vector difference magnitude list.
  • if the target motion vector difference magnitude list is Table 3, the encoding end uses the binarization method shown in Table 5 to perform encoding, and the decoding end uses the binarization method shown in Table 5 to perform decoding.
  • if the target motion vector difference magnitude list is Table 4, the encoding end uses the binarization method shown in Table 6 to perform encoding, and the decoding end uses the binarization method shown in Table 6 to perform decoding.
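The frame-level switch described above amounts to selecting one of two fixed tables before index lookup. In this sketch the list contents are placeholders loosely based on the 5- and 6-entry configurations mentioned earlier in the text, and the convention that the first value is 0 is an assumption.

```python
# Illustrative placeholders; the actual contents of Table 3 and Table 4
# are defined by the codec and are not reproduced here.
MVD_MAGNITUDE_LIST_1 = ["1/4-pel", "1/2-pel", "1-pel", "2-pel", "4-pel"]            # "Table 3"
MVD_MAGNITUDE_LIST_2 = ["1/4-pel", "1/2-pel", "1-pel", "2-pel", "3-pel", "4-pel"]   # "Table 4"

def target_magnitude_list(awp_umve_offset_list_flag, first_value=0):
    # awp_umve_offset_list_flag switches between the two magnitude lists;
    # which value selects which list is a convention fixed by the codec.
    if awp_umve_offset_list_flag == first_value:
        return MVD_MAGNITUDE_LIST_1
    return MVD_MAGNITUDE_LIST_2

def magnitude_from_index(flag, magnitude_index):
    # The decoder selects the magnitude information corresponding to the
    # parsed magnitude index from the target magnitude list.
    return target_magnitude_list(flag)[magnitude_index]
```

Since the flag is frame-level, the table selection is made once per frame and every parsed magnitude index within that frame is resolved against the same target list.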
  • Embodiment 17: on the basis of Embodiment 15 or Embodiment 16, for the first motion vector difference and the second motion vector difference, the following describes the related syntax of the unidirectional motion information superimposed motion vector difference (AWP_MVR) in combination with specific application scenarios.
  • Application Scenario 1: see Table 7 for an example of the relevant syntax.
  • SkipFlag indicates whether the current block is in Skip mode
  • DirectFlag indicates whether the current block is in Direct mode
  • AwpFlag indicates whether the current block is in AWP mode.
  • awp_idx (angle weighted prediction mode index): the angle weighted prediction mode index value in skip mode or direct mode. The value of AwpIdx may be equal to the value of awp_idx. If awp_idx does not exist in the code stream, the value of AwpIdx is equal to 0.
  • awp_cand_idx0 (first motion information index of the angle-weighted prediction mode): The first motion information index value of the angle-weighted prediction mode in skip mode or direct mode.
  • the value of AwpCandIdx0 is equal to the value of awp_cand_idx0. If awp_cand_idx0 does not exist in the code stream, the value of AwpCandIdx0 is equal to 0.
  • awp_cand_idx1 (second motion information index of the angle-weighted prediction mode): the second motion information index value of the angle-weighted prediction mode in skip mode or direct mode.
  • the value of AwpCandIdx1 is equal to the value of awp_cand_idx1. If awp_cand_idx1 does not exist in the code stream, the value of AwpCandIdx1 is equal to 0.
  • awp_mvd_flag is a binary variable.
  • when awp_mvd_flag is the first value (such as 1), it indicates that the current block is in the enhanced angle weighted prediction mode.
  • when awp_mvd_flag is the second value (such as 0), it indicates that the current block is in the non-enhanced angle weighted prediction mode.
  • the value of AwpMvdFlag may be equal to the value of awp_mvd_flag. If awp_mvd_flag does not exist in the code stream, the value of AwpMvdFlag is equal to 0.
  • awp_mvd_sub_flag0 (the first sub-mode flag of the enhanced angle-weighted prediction mode), which can be a binary variable.
  • when awp_mvd_sub_flag0 is the first value, it can indicate that the first motion information of the angle-weighted prediction mode needs to superimpose the motion information difference;
  • when awp_mvd_sub_flag0 is the second value, it can indicate that the first motion information of the angle-weighted prediction mode does not need to superimpose the motion information difference.
  • the value of AwpMvdSubFlag0 may be equal to the value of awp_mvd_sub_flag0. If awp_mvd_sub_flag0 does not exist in the code stream, the value of AwpMvdSubFlag0 is equal to 0.
  • awp_mvd_sub_flag1 (the second sub-mode flag of the enhanced angle-weighted prediction mode), which can be a binary variable.
  • when awp_mvd_sub_flag1 is the first value, it can indicate that the second motion information of the angle-weighted prediction mode needs to superimpose the motion information difference;
  • when awp_mvd_sub_flag1 is the second value, it can indicate that the second motion information of the angle-weighted prediction mode does not need to superimpose the motion information difference.
  • the value of AwpMvdSubFlag1 may be equal to the value of awp_mvd_sub_flag1.
  • awp_mvd_sub_flag1 does not exist in the code stream, the following conditions may also exist: if AwpMvdFlag is equal to 1, then the value of AwpMvdSubFlag1 may be equal to 1, otherwise, the value of AwpMvdSubFlag1 may be equal to 0.
  • awp_mvd_dir0 (first motion information motion vector difference direction index value): the motion vector difference direction index value of the first motion information in the angle-weighted prediction mode. The value of AwpMvdDir0 may be equal to the value of awp_mvd_dir0; if awp_mvd_dir0 does not exist in the code stream, the value of AwpMvdDir0 may be equal to 0.
  • awp_mvd_step0 (first motion information motion vector difference step size index value): the motion vector difference step size index value of the first motion information in the angle weighted prediction mode.
  • the value of AwpMvdStep0 may be equal to the value of awp_mvd_step0, and if awp_mvd_step0 does not exist in the code stream, the value of AwpMvdStep0 may be equal to 0.
  • awp_mvd_dir1 (second motion information motion vector difference direction index value): the motion vector difference direction index value of the second motion information in the angle-weighted prediction mode. The value of AwpMvdDir1 may be equal to the value of awp_mvd_dir1. If awp_mvd_dir1 does not exist in the code stream, the value of AwpMvdDir1 may be equal to 0.
  • awp_mvd_step1 (second motion information motion vector difference step size index value), the motion vector difference step size index value of the second motion information in the angle-weighted prediction mode.
  • the value of AwpMvdStep1 may be equal to the value of awp_mvd_step1. If awp_mvd_step1 does not exist in the code stream, the value of AwpMvdStep1 may be equal to 0.
  • SkipFlag indicates whether the current block is in Skip mode
  • DirectFlag indicates whether the current block is in Direct mode
  • AwpFlag indicates whether the current block is in AWP mode.
  • awp_mvd_sub_flag0 (the first sub-mode flag of the enhanced angle-weighted prediction mode), which can be a binary variable.
  • when awp_mvd_sub_flag0 is the first value, it can indicate that the first motion information of the angle-weighted prediction mode needs to superimpose the motion information difference;
  • when awp_mvd_sub_flag0 is the second value, it can indicate that the first motion information of the angle-weighted prediction mode does not need to superimpose the motion information difference.
  • the value of AwpMvdSubFlag0 may be equal to the value of awp_mvd_sub_flag0. If awp_mvd_sub_flag0 does not exist in the code stream, the value of AwpMvdSubFlag0 is equal to 0.
  • awp_mvd_sub_flag1 (the second sub-mode flag of the enhanced angle-weighted prediction mode), which can be a binary variable.
  • when awp_mvd_sub_flag1 is the first value, it can indicate that the second motion information of the angle-weighted prediction mode needs to superimpose the motion information difference;
  • when awp_mvd_sub_flag1 is the second value, it can indicate that the second motion information of the angle-weighted prediction mode does not need to superimpose the motion information difference.
  • the value of AwpMvdSubFlag1 may be equal to the value of awp_mvd_sub_flag1, and if awp_mvd_sub_flag1 does not exist in the code stream, the value of AwpMvdSubFlag1 may be equal to 0.
  • for awp_mvd_dir0, awp_mvd_step0, awp_mvd_dir1, and awp_mvd_step1, see application scenario 1.
  • the difference between the two is: in the application scenario 1, the syntax awp_mvd_flag exists, and in the application scenario 2, the syntax awp_mvd_flag does not exist.
  • the enhanced angle weighted prediction mode can be controlled through awp_mvd_flag, that is, the enhanced angle weighted prediction mode can be controlled through the main switch.
  • Application scenario 3: derived from application scenario 1 and application scenario 2, the AWP mode and the AWP_MVR mode can be combined, that is, a 0 step size is added, so that the flag bit indicating whether to superimpose the motion vector difference does not need to be encoded.
  • motion vector difference supports up, down, left, right and other directions
  • the motion vector difference supports the following step size configurations: 0-pel, 1/4-pel, 1/2-pel, 1-pel, 2-pel, and 4-pel, that is, the 0-pel step size configuration is added.
  • Table 7/Table 8 can be updated to Table 9; see Table 9 for the related syntax.
  • (Table 9: coding unit definition syntax, with a descriptor column; the full table is not reproduced here.)
  • the order of the syntax elements in the relevant syntax can be adjusted accordingly, and the order shown in Table 10 can be obtained; see Table 10 for the related syntax.
  • the parsing method of awp_cand_idx0 and awp_cand_idx1 can be adjusted based on one or more values of AwpMvdSubFlag0, AwpMvdSubFlag1, AwpMvdDir0, AwpMvdStep0, AwpMvdDir1, and AwpMvdStep1.
  • AwpMvdSubFlag0 indicates whether to superimpose MVD on the first original motion information. If superimposed, the MVD value corresponding to the first original motion information is determined based on AwpMvdDir0 and AwpMvdStep0.
  • AwpMvdSubFlag1 indicates whether to superimpose MVD on the second original motion information. If superimposed, the MVD value corresponding to the second original motion information is determined based on AwpMvdDir1 and AwpMvdStep1.
  • awp_cand_idx0 represents the index value of the first original motion information
  • awp_cand_idx1 represents the index value of the second original motion information
  • the parsing methods of awp_cand_idx0 and awp_cand_idx1 are exactly the same, that is, the first original motion information corresponding to awp_cand_idx0 is parsed from the complete motion information candidate list, And parse out the second original motion information corresponding to awp_cand_idx1 from the complete motion information candidate list.
  • the first original motion information and the second original motion information are different; based on this, the parsing methods of awp_cand_idx0 and awp_cand_idx1 are inconsistent.
  • the first original motion information corresponding to awp_cand_idx0 is parsed from the complete motion information candidate list. Since the second original motion information is different from the first original motion information, the second original motion information corresponding to awp_cand_idx1 is not parsed from the complete motion information candidate list.
  • the second original motion information corresponding to awp_cand_idx1 is parsed from the incomplete motion information candidate list.
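Since the two original motion informations are constrained to differ, the second index can be signalled against the candidate list with the first pick removed. A common way to realize this in two-index designs is to shift the second index up by one when it is not smaller than the first; this exact rule is an assumption for illustration, not the normative parsing of the text.

```python
def resolve_candidate_indices(idx0, idx1_coded, candidate_list):
    # idx0 indexes the complete candidate list.
    first = candidate_list[idx0]
    # idx1_coded indexes the incomplete list (first pick removed), so coded
    # indices at or above idx0 are shifted up by one to skip the first pick.
    idx1 = idx1_coded + 1 if idx1_coded >= idx0 else idx1_coded
    second = candidate_list[idx1]
    return first, second
```

Signalling the second index against the shorter list saves a codeword, because the maximum value of the truncated code for awp_cand_idx1 is one smaller than that of awp_cand_idx0.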
  • the decoding end parses AwpCandIdx0 and AwpCandIdx1 from the encoded bit stream, and assigns the (AwpCandIdx0+1)-th motion information in the AwpUniArray to mvAwp0L0, mvAwp0L1, RefIdxAwp0L0, and RefIdxAwp0L1.
  • alternatively, the (AwpCandIdx0+1)-th motion information in AwpUniArray can be assigned to mvAwp1L0, mvAwp1L1, RefIdxAwp1L0, and RefIdxAwp1L1, and the (AwpCandIdx1+1)-th motion information in AwpUniArray can be assigned to mvAwp0L0, mvAwp0L1, RefIdxAwp0L0, and RefIdxAwp0L1.
  • AwpCandIdx0 represents the index value of the first target motion information. Therefore, the (AwpCandIdx0+1)-th motion information in the AwpUniArray may be assigned to the first target motion information. For example, if AwpCandIdx0 is 0, the first motion information in AwpUniArray is assigned to the first target motion information, and so on.
  • the first target motion information includes unidirectional motion information pointing to List0 and unidirectional motion information pointing to List1.
  • if the (AwpCandIdx0+1)-th motion information in the AwpUniArray is the unidirectional motion information pointing to List0
  • the first target motion information includes the unidirectional motion information pointing to List0
  • the unidirectional motion information pointing to List1 is empty.
  • if the (AwpCandIdx0+1)-th motion information in the AwpUniArray is the unidirectional motion information pointing to List1
  • the first target motion information includes the unidirectional motion information pointing to List1
  • the unidirectional motion information pointing to List0 is empty.
  • mvAwp0L0 and RefIdxAwp0L0 represent unidirectional motion information in the first target motion information pointing to List0
  • mvAwp0L1 and RefIdxAwp0L1 represent unidirectional motion information in the first target motion information pointing to List1.
• if RefIdxAwp0L0 is valid, it means that the unidirectional motion information pointing to List0 is valid; therefore, the prediction mode of the first target motion information is PRED_List0. If RefIdxAwp0L1 is valid, it means that the unidirectional motion information pointing to List1 is valid; therefore, the prediction mode of the first target motion information is PRED_List1.
  • AwpCandIdx1 represents the index value of the second target motion information. Therefore, the AwpCandIdx1+1 th motion information in the AwpUniArray may be assigned to the second target motion information. For example, if AwpCandIdx1 is 0, the first motion information in AwpUniArray is assigned to the second target motion information, and so on.
• the second target motion information may include unidirectional motion information pointing to List0 and unidirectional motion information pointing to List1. If the AwpCandIdx1+1 th motion information in AwpUniArray is the unidirectional motion information pointing to List0, the second target motion information includes the unidirectional motion information pointing to List0, and the unidirectional motion information pointing to List1 is empty. If the AwpCandIdx1+1 th motion information in AwpUniArray is the unidirectional motion information pointing to List1, the second target motion information includes the unidirectional motion information pointing to List1, and the unidirectional motion information pointing to List0 is empty.
  • mvAwp1L0 and RefIdxAwp1L0 represent unidirectional motion information in the second target motion information that points to List0
  • mvAwp1L1 and RefIdxAwp1L1 represent unidirectional motion information in the second target motion information that points to List1.
• if RefIdxAwp1L0 is valid, it indicates that the unidirectional motion information pointing to List0 is valid; therefore, the prediction mode of the second target motion information is PRED_List0. If RefIdxAwp1L1 is valid, it indicates that the unidirectional motion information pointing to List1 is valid; therefore, the prediction mode of the second target motion information is PRED_List1.
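The assignment and prediction-mode rules above can be sketched as follows. This is a hypothetical Python illustration: the entry layout of AwpUniArray, the dict fields, and the use of -1 as an invalid reference index are assumptions for clarity, not part of the standard text.

```python
def assign_target_motion_info(awp_uni_array, awp_cand_idx):
    """Assign the AwpCandIdx+1 th motion information in AwpUniArray to one
    target motion information and derive its prediction mode.

    Each entry is assumed to be a dict of unidirectional motion information:
    {'list': 'List0' or 'List1', 'mv': (mv_x, mv_y), 'ref_idx': int}.
    """
    info = awp_uni_array[awp_cand_idx]  # 0-based index gives the AwpCandIdx+1 th entry

    if info['list'] == 'List0':
        # Unidirectional motion information pointing to List0 is valid;
        # the List1 side is empty (ref_idx = -1, i.e. invalid).
        return {'mvL0': info['mv'], 'refIdxL0': info['ref_idx'],
                'mvL1': (0, 0), 'refIdxL1': -1,
                'pred_mode': 'PRED_List0'}
    # Otherwise the List1 side is valid and the List0 side is empty.
    return {'mvL0': (0, 0), 'refIdxL0': -1,
            'mvL1': info['mv'], 'refIdxL1': info['ref_idx'],
            'pred_mode': 'PRED_List1'}

# Illustrative candidate list and index values.
awp_uni_array = [
    {'list': 'List0', 'mv': (4, -2), 'ref_idx': 0},
    {'list': 'List1', 'mv': (1, 3), 'ref_idx': 1},
]
first_target = assign_target_motion_info(awp_uni_array, 0)   # AwpCandIdx0 = 0
second_target = assign_target_motion_info(awp_uni_array, 1)  # AwpCandIdx1 = 1
```

With these example values, the first target motion information has prediction mode PRED_List0 and the second has PRED_List1, matching the validity rules for RefIdxAwp0L0/RefIdxAwp1L1 above.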
• if the current frame is a B frame and the POCs of the reference frames in List0 and List1 are the same (that is, they are the same frame), the syntax elements of the two pieces of transmitted motion information can be redesigned; that is, the motion information is transmitted in the manner of this embodiment.
  • the AWP mode can be applied to the P frame.
• the motion vector of the second target motion information can be derived from the motion vector of the first target motion information; that is, the first motion information conveys an index value, and the second motion information superimposes an MVD on the basis of the first motion information. For the encoding method of the MVD, see Embodiment 15 or Embodiment 16.
• awp_mvd_dir0 represents the first motion information motion vector difference direction index value of the restricted angle weighted prediction mode, and the value of AwpMvdDir0 is equal to the value of awp_mvd_dir0. If awp_mvd_dir0 does not exist in the bit stream, the value of AwpMvdDir0 is equal to 0.
• awp_mvd_step0 represents the first motion information motion vector difference step size index value of the restricted angle weighted prediction mode; the value of AwpMvdStep0 is equal to the value of awp_mvd_step0.
• awp_mvd_dir1 represents the motion vector difference direction index value of the second motion information in the restricted angle weighted prediction mode; the value of AwpMvdDir1 is equal to the value of awp_mvd_dir1. If awp_mvd_dir1 does not exist in the bit stream, the value of AwpMvdDir1 is equal to 0.
• awp_mvd_step1 represents the second motion information motion vector difference step size index value of the restricted angle weighted prediction mode; the value of AwpMvdStep1 is equal to the value of awp_mvd_step1. If awp_mvd_step1 does not exist in the bit stream, the value of AwpMvdStep1 is equal to 0.
• (Syntax table: coding unit definition, with a Descriptor column; the table body is truncated in the source at "... if((SkipFlag".)
• the original motion information (the original motion information includes the original motion vector) can be determined based on awp_cand_idx; the first motion vector difference MVD0 can be determined according to awp_mvd_dir0 and awp_mvd_step0, and the second motion vector difference MVD1 can be determined according to awp_mvd_dir1 and awp_mvd_step1.
  • the first target motion vector is determined according to the original motion vector and the first motion vector difference MVD0, such as the original motion vector+MVD0.
  • the second target motion vector is determined according to the first target motion vector and the second motion vector difference MVD1, for example, the first target motion vector+MVD1.
  • first target motion information is obtained based on the first target motion vector, where the first target motion information includes the first target motion vector.
  • second target motion information is obtained based on the second target motion vector, where the second target motion information includes the second target motion vector.
  • the first target motion vector is determined according to the original motion vector and the first motion vector difference MVD0, such as the original motion vector+MVD0.
• the second target motion vector is determined according to the original motion vector and the second motion vector difference MVD1, such as the original motion vector+MVD1.
  • first target motion information is obtained based on the first target motion vector, where the first target motion information includes the first target motion vector.
  • second target motion information is obtained based on the second target motion vector, where the second target motion information includes the second target motion vector.
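The two derivation orders described above can be sketched as follows. This is an illustrative assumption about the component arithmetic (motion vectors as (x, y) tuples); the concrete values are invented for the example.

```python
def add_mv(a, b):
    """Component-wise sum of two motion vectors."""
    return (a[0] + b[0], a[1] + b[1])

orig_mv = (10, -4)   # original motion vector, determined from awp_cand_idx
mvd0 = (2, 0)        # MVD0, from awp_mvd_dir0 / awp_mvd_step0
mvd1 = (0, -2)       # MVD1, from awp_mvd_dir1 / awp_mvd_step1

# Chained derivation: the second target is superimposed on the first target.
first_target = add_mv(orig_mv, mvd0)                # original + MVD0
second_target_chained = add_mv(first_target, mvd1)  # first target + MVD1

# Independent derivation: both targets are superimposed on the original vector.
second_target_independent = add_mv(orig_mv, mvd1)   # original + MVD1
```

The chained form yields (12, -6) for the second target while the independent form yields (10, -6), showing how the two variants in the text differ only in the base vector MVD1 is added to.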
• awp_mvd_dir and awp_mvd_step form the syntax expression of the MVD of the second motion information (that is, the second target motion information) relative to the first motion information (that is, the first target motion information).
  • awp_mvd_dir represents the limited angle weighted prediction mode motion vector difference direction index value
  • the value of AwpMvdDir is equal to the value of awp_mvd_dir. If awp_mvd_dir does not exist in the bitstream, the value of AwpMvdDir is equal to 0.
• awp_mvd_step represents the restricted angle weighted prediction mode motion vector difference step size index value.
  • the value of AwpMvdStep is equal to the value of awp_mvd_step. If awp_mvd_step does not exist in the bit stream, the value of AwpMvdStep is equal to 0.
  • the original motion information (the original motion information includes the original motion vector) can be determined based on awp_cand_idx, and the motion vector difference MVD can be determined based on awp_mvd_dir and awp_mvd_step.
  • the first target motion vector is determined according to the original motion vector, for example, the first target motion vector is the original motion vector.
  • the second target motion vector is determined according to the first target motion vector and the motion vector difference MVD, such as the first target motion vector+MVD.
  • first target motion information is obtained based on the first target motion vector, where the first target motion information includes the first target motion vector.
  • second target motion information is obtained based on the second target motion vector, where the second target motion information includes the second target motion vector.
  • the first target motion vector is determined according to the original motion vector and the motion vector difference MVD, such as the original motion vector+MVD.
  • the second target motion vector is determined according to the original motion vector and the motion vector difference MVD, such as the original motion vector-MVD.
• alternatively, the first target motion vector is determined according to the original motion vector and the motion vector difference MVD, such as the original motion vector-MVD, and the second target motion vector is determined according to the original motion vector and the motion vector difference MVD, such as the original motion vector+MVD.
  • second target motion information is obtained based on the second target motion vector, where the second target motion information includes the second target motion vector.
  • MVD acts in two opposite directions at the same time, so that the original motion information can derive two target motion information.
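The single-MVD case, in which one MVD acts in two opposite directions so that one original motion vector derives both target motion vectors, can be sketched as follows; the tuple arithmetic and the example values are illustrative assumptions.

```python
def derive_targets(orig_mv, mvd, first_plus=True):
    """Derive both target motion vectors from one original vector and one MVD.

    first_plus selects which of the two symmetric variants in the text is used:
    (original+MVD, original-MVD) or (original-MVD, original+MVD).
    """
    add = lambda a, b: (a[0] + b[0], a[1] + b[1])
    sub = lambda a, b: (a[0] - b[0], a[1] - b[1])
    if first_plus:
        return add(orig_mv, mvd), sub(orig_mv, mvd)
    return sub(orig_mv, mvd), add(orig_mv, mvd)

first, second = derive_targets((8, 2), (1, -1))
# first is original+MVD, second is original-MVD
```

Only one MVD needs to be signaled, yet two distinct target motion vectors are obtained, which is the point of this variant.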
  • the syntax expression of the MVD may also be a separate expression of the horizontal component and the vertical component, as shown in Table 13, which is an example of the separate expression.
  • awp_mv_diff_x_abs represents the absolute value of the horizontal component difference of the restricted angle weighted prediction mode motion vector
  • awp_mv_diff_y_abs represents the absolute value of the vertical component difference of the restricted angle weighted prediction mode motion vector.
  • awp_mv_diff_x_sign represents the restricted angle weighted prediction mode motion vector horizontal component difference sign value
  • awp_mv_diff_y_sign represents the restricted angle weighted prediction mode motion vector vertical component difference sign value.
• AwpMvDiffXAbs, the absolute value of the horizontal component difference of the restricted angle weighted prediction mode motion vector, is equal to the value of awp_mv_diff_x_abs; AwpMvDiffYAbs is equal to the value of awp_mv_diff_y_abs.
  • the value of AwpMvDiffXSign is equal to the value of awp_mv_diff_x_sign
• the value of AwpMvDiffYSign is equal to the value of awp_mv_diff_y_sign.
  • the value of AwpMvDiffXSign or AwpMvDiffYSign is 0 if awp_mv_diff_x_sign or awp_mv_diff_y_sign does not exist in the bitstream.
• if the value of AwpMvDiffXSign is 0, AwpMvDiffX is equal to AwpMvDiffXAbs; if the value of AwpMvDiffXSign is 1, AwpMvDiffX is equal to -AwpMvDiffXAbs. If the value of AwpMvDiffYSign is 0, AwpMvDiffY is equal to AwpMvDiffYAbs; if the value of AwpMvDiffYSign is 1, AwpMvDiffY is equal to -AwpMvDiffYAbs.
  • the value range of AwpMvDiffX and AwpMvDiffY is -32768 to 32767.
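Recovering the signed components from the parsed absolute values and sign flags follows directly from the rules above; the clamp reflects the stated value range of -32768 to 32767. The helper name and example values are illustrative.

```python
def signed_component(abs_value, sign_flag):
    """Apply the sign flag (0 = positive, 1 = negative) to an absolute value
    and clamp to the stated range of AwpMvDiffX / AwpMvDiffY."""
    value = abs_value if sign_flag == 0 else -abs_value
    return max(-32768, min(32767, value))

# Example parsed syntax element values.
AwpMvDiffXAbs, AwpMvDiffXSign = 5, 1
AwpMvDiffYAbs, AwpMvDiffYSign = 3, 0

AwpMvDiffX = signed_component(AwpMvDiffXAbs, AwpMvDiffXSign)  # negative x component
AwpMvDiffY = signed_component(AwpMvDiffYAbs, AwpMvDiffYSign)  # positive y component
```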
• coding unit definition (Descriptor): awp_mv_diff_x_abs ae(v); if(AwpMvDiffXAbs) awp_mv_diff_x_sign ae(v); awp_mv_diff_y_abs ae(v); if(AwpMvDiffYAbs) awp_mv_diff_y_sign ae(v)
  • Table 14 is another example of separate expressions. Based on what is shown in Table 14, it can be determined whether
• Embodiment 19: In Embodiments 1 to 3, the encoder/decoder needs to obtain the motion information candidate list.
• the following method is used to obtain the motion information candidate list: obtain at least one piece of available motion information to be added to the motion information candidate list; obtain the motion information candidate list based on the at least one piece of available motion information.
  • the at least one available motion information includes, but is not limited to, at least one of the following motion information: spatial motion information; temporal motion information; preset motion information.
• for the acquisition method of each piece of available motion information, refer to Embodiment 13; this will not be repeated here.
• acquiring the motion information candidate list based on at least one piece of available motion information may include: for the available motion information currently to be added to the motion information candidate list, adding the available motion information to the motion information candidate list. For example, regardless of whether the available motion information is unidirectional motion information or bidirectional motion information, it is added to the motion information candidate list. Different from Embodiment 13 and Embodiment 14, when the available motion information is bidirectional motion information, the bidirectional motion information does not need to be cut into first unidirectional motion information and second unidirectional motion information; instead, the bidirectional motion information is directly added to the motion information candidate list, that is, the motion information candidate list may include bidirectional motion information.
• the duplication checking operation may or may not be performed on the available motion information, which is not limited. If the duplication checking operation is performed, it can be performed based on List+refIdx+MV_x+MV_y, or based on POC+MV_x+MV_y. The method of the duplication checking operation will not be described again; see Embodiment 14.
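A hypothetical sketch of this list construction: each piece of available motion information (unidirectional or bidirectional) is added whole, with an optional duplication check keyed on (List, refIdx, MV_x, MV_y). The tuple representation of a candidate is an assumption for illustration.

```python
def build_candidate_list(available_motion_infos, check_duplicates=True):
    """Build the motion information candidate list of Embodiment 19.

    A candidate is a tuple of one or two unidirectional parts, each
    (list_id, ref_idx, mv_x, mv_y); two parts means bidirectional motion
    information, which is added directly without being cut in two.
    """
    candidate_list, seen = [], set()
    for info in available_motion_infos:
        key = tuple(sorted(info))  # duplication check on List+refIdx+MV_x+MV_y
        if check_duplicates and key in seen:
            continue
        seen.add(key)
        candidate_list.append(info)
    return candidate_list

spatial = (('List0', 0, 4, -2),)                       # unidirectional candidate
temporal = (('List0', 0, 4, -2), ('List1', 1, 1, 3))   # bidirectional, added whole
dup = (('List0', 0, 4, -2),)                           # duplicate of the spatial one
cands = build_candidate_list([spatial, temporal, dup])
```

With the duplication check enabled, `cands` keeps the spatial and temporal candidates and discards the duplicate; with `check_duplicates=False` all three would be kept, matching the "may or may not be performed" wording.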
• Embodiment 20: In Embodiments 1 to 3, after obtaining the motion information candidate list, the encoder/decoder may obtain the first target motion information and the second target motion information of the current block based on the motion information candidate list. When it is determined that the reference frame pointed to by the first target motion information and the reference frame pointed to by the second target motion information are the same frame, the candidate motion information in the motion information candidate list may all be bidirectional motion information.
• in this case, all candidate motion information in the motion information candidate list is bidirectional motion information.
• for the encoding end, the rate-distortion cost value can be used to select a candidate motion information from the motion information candidate list. The encoding end can carry, in the encoded bit stream, the index value of the candidate motion information in the motion information candidate list; for the decoding end, a candidate motion information can be selected from the motion information candidate list based on the index value.
• Embodiment 21: In Embodiment 4, the encoder/decoder needs to obtain the motion vector candidate list. For example, the reference frame information (that is, the reference frame information of the current block) is obtained, and the motion vector candidate list corresponding to the reference frame information (that is, the motion vector candidate list of the current block) is obtained; in other words, the motion vector candidate list is created for the reference frame information.
  • the reference frame information may include first reference frame information and second reference frame information.
• the motion vector candidate list may include the motion vector candidate list corresponding to the first reference frame information (such as reference frame index and reference frame direction, etc.) and the motion vector candidate list corresponding to the second reference frame information (such as reference frame index and reference frame direction, etc.).
• the first reference frame information is the reference frame information corresponding to the first target motion vector, and the second reference frame information is the reference frame information corresponding to the second target motion vector.
  • the first reference frame information and the second reference frame information can be obtained.
  • the first reference frame information and the second reference frame information can be selected from a reference frame list.
  • the first reference frame information and the second reference frame information can also be selected from two reference frame lists.
• for example, the two reference frame lists are List0 and List1 respectively; the first reference frame information is selected from List0, and the second reference frame information is selected from List1.
  • the information of the first reference frame and the information of the second reference frame can be obtained.
• the first reference frame information can be selected from a reference frame list.
  • the first reference frame information and the second reference frame information can also be selected from two reference frame lists.
  • the two reference frame lists are List0 and List1 respectively.
  • the first reference frame information is selected from List0
  • the second reference frame information is selected from List1 based on the index information of the second reference frame information.
  • the above is just an example of acquiring the first reference frame information and the second reference frame information, which is not limited.
  • the first reference frame information and the second reference frame information may be the same.
  • the reference frame pointed to by the first target motion vector and the reference frame pointed to by the second target motion vector are the same frame
  • the motion vector candidate list corresponding to the first reference frame information and the motion vector candidate list corresponding to the second reference frame information are the same motion vector candidate list, that is, the encoder/decoder obtains one motion vector candidate list.
• the first reference frame information and the second reference frame information may be different. In this case, the reference frame pointed to by the first target motion vector and the reference frame pointed to by the second target motion vector are different frames, and the motion vector candidate list corresponding to the first reference frame information and the motion vector candidate list corresponding to the second reference frame information are different motion vector candidate lists; that is, the encoder/decoder obtains two different motion vector candidate lists.
  • both are recorded as the motion vector candidate list corresponding to the first reference frame information and the motion vector candidate list corresponding to the second reference frame information.
  • the first target motion vector and the second target motion vector of the current block can be obtained in the following manner:
• Mode 1: Select a candidate motion vector from the motion vector candidate list corresponding to the first reference frame information as the first target motion vector of the current block, and select a candidate motion vector from the motion vector candidate list corresponding to the second reference frame information as the second target motion vector of the current block.
  • the second target motion vector may be different from the first target motion vector.
• how a candidate motion vector is selected from the motion vector candidate list corresponding to the first reference frame information as the first target motion vector of the current block, and how a candidate motion vector is selected from the motion vector candidate list corresponding to the second reference frame information as the second target motion vector of the current block, is not limited.
• the encoded bit stream may carry indication information a and indication information b. Indication information a is used to indicate index value 1 of the first target motion vector of the current block; index value 1 indicates which candidate motion vector in the motion vector candidate list corresponding to the first reference frame information the first target motion vector is. Indication information b is used to indicate index value 2 of the second target motion vector of the current block; index value 2 indicates which candidate motion vector in the motion vector candidate list corresponding to the second reference frame information the second target motion vector is.
• after receiving the encoded bit stream, the decoding end parses out indication information a and indication information b from the encoded bit stream. Based on indication information a, the decoding end selects the candidate motion vector corresponding to index value 1 from the motion vector candidate list corresponding to the first reference frame information, and this candidate motion vector is used as the first target motion vector of the current block. Based on indication information b, the decoding end selects the candidate motion vector corresponding to index value 2 from the motion vector candidate list corresponding to the second reference frame information, and this candidate motion vector is used as the second target motion vector of the current block.
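Mode 1 at the decoding end reduces to two list lookups, as the following minimal sketch shows; the list contents and index values are illustrative assumptions.

```python
# Candidate lists shared with the encoding end (contents are invented examples).
mv_list_ref0 = [(2, 0), (4, -2), (6, 1)]   # list for the first reference frame information
mv_list_ref1 = [(0, 3), (1, -1)]           # list for the second reference frame information

# Index values parsed from indication information a and b in the bit stream.
index_value_1 = 1
index_value_2 = 0

# Each target motion vector is simply the indexed candidate in its list.
first_target_mv = mv_list_ref0[index_value_1]
second_target_mv = mv_list_ref1[index_value_2]
```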
• Mode 2: Select a candidate motion vector from the motion vector candidate list corresponding to the first reference frame information as the first original motion vector of the current block, and select a candidate motion vector from the motion vector candidate list corresponding to the second reference frame information as the second original motion vector of the current block. For example, the first original motion vector and the second original motion vector may be different, or they may be the same.
  • the first target motion vector of the current block is determined according to the first original motion vector
  • the second target motion vector of the current block is determined according to the second original motion vector.
  • the first target motion vector and the second target motion vector may be different.
• this embodiment provides a scheme of superimposing a motion vector difference on a unidirectional motion vector.
• the first motion vector difference (i.e., MVD) corresponding to the first original motion vector can be obtained, and the first target motion vector is determined according to the first motion vector difference and the first original motion vector; that is, the sum of the first motion vector difference and the first original motion vector is taken as the first target motion vector.
  • the first original motion vector may be determined as the first target motion vector.
• the second motion vector difference corresponding to the second original motion vector can be obtained, and the second target motion vector can be determined according to the second motion vector difference and the second original motion vector (that is, the sum of the second motion vector difference and the second original motion vector is taken as the second target motion vector).
  • the second original motion vector is determined as the second target motion vector.
  • direction information and amplitude information of the first motion vector difference may be acquired, and the first motion vector difference may be determined according to the direction information and amplitude information of the first motion vector difference.
  • the direction information and amplitude information of the second motion vector difference can be acquired, and the second motion vector difference can be determined according to the direction information and the amplitude information of the second motion vector difference.
• the direction information of the first motion vector difference may be obtained in the following manner: the decoding end parses the direction information of the first motion vector difference from the encoded bit stream of the current block; or, the direction information of the first motion vector difference is derived according to the weight prediction angle of the current block.
• the direction information of the second motion vector difference can be obtained in the following manner: the decoding end parses the direction information of the second motion vector difference from the encoded bit stream of the current block; or, the direction information of the second motion vector difference is derived according to the weight prediction angle of the current block.
  • the amplitude information of the first motion vector difference may be acquired in the following manner: the amplitude information of the first motion vector difference is parsed from the encoded bit stream of the current block.
  • the amplitude information of the second motion vector difference may be acquired in the following manner: the amplitude information of the second motion vector difference is parsed from the coded bit stream of the current block.
• the encoding end and the decoding end may agree on the correspondence between the direction information and amplitude information of the motion vector difference and its value. If the direction information indicates that the direction is right, and the amplitude information indicates that the amplitude is A, the motion vector difference is (A, 0); if the direction is down, the motion vector difference is (0, -A); if the direction is left, the motion vector difference is (-A, 0); if the direction is up, the motion vector difference is (0, A); if the direction is up-right, the motion vector difference is (A, A); if the direction is up-left, the motion vector difference is (-A, A).
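The agreed direction/amplitude-to-MVD mapping can be sketched as a table lookup. The coordinate convention (x to the right, y up, so "down" gives (0, -A)) follows the text; the dictionary form and the completion of the up-left entry as (-A, A) are assumptions consistent with the other directions.

```python
def mvd_from_direction(direction, amplitude):
    """Map agreed direction information plus amplitude A to a motion vector
    difference (dx*A, dy*A), following the convention in the text."""
    unit = {
        'right':    (1, 0),
        'down':     (0, -1),
        'left':     (-1, 0),
        'up':       (0, 1),
        'up-right': (1, 1),
        'up-left':  (-1, 1),   # assumed by symmetry with the listed entries
    }
    dx, dy = unit[direction]
    return (dx * amplitude, dy * amplitude)

mvd = mvd_from_direction('down', 4)   # direction down, amplitude 4
```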
• for the encoding end, the first original motion vector can be selected from the motion vector candidate list corresponding to the first reference frame information based on the rate-distortion cost value, and the second original motion vector can be selected from the motion vector candidate list corresponding to the second reference frame information. The direction information and magnitude information of the first motion vector difference corresponding to the first original motion vector, and the direction information and magnitude information of the second motion vector difference corresponding to the second original motion vector, are also determined based on the rate-distortion cost value.
• the index value of the first original motion vector in the motion vector candidate list corresponding to the first reference frame information is encoded in the encoded bit stream, and the index value of the second original motion vector in the motion vector candidate list corresponding to the second reference frame information is also encoded in the encoded bit stream.
• for the decoding end, after receiving the encoded bit stream of the current block, the first original motion vector is selected from the motion vector candidate list corresponding to the first reference frame information based on the index value of the first original motion vector in that list, and the second original motion vector is selected from the motion vector candidate list corresponding to the second reference frame information based on the index value of the second original motion vector in that list.
  • the decoding end may also parse out the direction information and amplitude information of the first motion vector difference from the encoded bit stream, and determine the first motion vector difference according to the direction information and the amplitude information. And, the direction information and amplitude information of the second motion vector difference are parsed from the encoded bit stream, and the second motion vector difference is determined according to the direction information and the amplitude information.
• the encoded bit stream may further include the first reference frame information corresponding to the first original motion vector, and the decoding end may determine the first original motion information according to the first original motion vector and the first reference frame information corresponding to the first original motion vector. Similarly, the encoded bit stream may further include the second reference frame information corresponding to the second original motion vector, and the decoding end may determine the second original motion information according to the second original motion vector and the second reference frame information corresponding to the second original motion vector.
  • the decoding end may determine the first target motion information of the current block according to the first motion vector difference and the first original motion information, and determine the second target motion information of the current block according to the second motion vector difference and the second original motion information.
• the encoding end may also encode, in the encoded bit stream, the first sub-mode flag and the second sub-mode flag of the enhanced angle weighted prediction mode. The first sub-mode flag indicates whether the motion vector difference is superimposed on the first original motion vector, and the second sub-mode flag indicates whether the motion vector difference is superimposed on the second original motion vector.
• the decoding end parses the direction information of the first motion vector difference and the direction information of the second motion vector difference from the encoded bit stream of the current block; or, the decoding end derives the direction information of the first motion vector difference according to the weight prediction angle of the current block, and derives the direction information of the second motion vector difference according to the weight prediction angle of the current block.
  • the decoding end may parse out the amplitude information of the first motion vector difference and the amplitude information of the second motion vector difference from the encoded bit stream.
• for example, the encoding end and the decoding end may construct the same motion vector difference magnitude list. The encoding end determines the magnitude index of the magnitude information of the first motion vector difference in the motion vector difference magnitude list, and the encoded bit stream includes the magnitude index of the first motion vector difference.
• the decoding end parses the magnitude index of the first motion vector difference from the encoded bit stream of the current block, and selects the magnitude information corresponding to the magnitude index from the motion vector difference magnitude list; this magnitude information is the magnitude information of the first motion vector difference.
  • the encoding end determines the magnitude index of the magnitude information of the second motion vector difference in the motion vector difference magnitude list, and the encoded bit stream includes the magnitude index of the second motion vector difference.
  • the decoding end parses out the magnitude index of the second motion vector difference from the encoded bit stream of the current block, and selects the magnitude information corresponding to the magnitude index from the motion vector difference magnitude list; this magnitude information is the magnitude information of the second motion vector difference.
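The magnitude signaling described above amounts to an index lookup into a list shared by encoder and decoder, so only the index needs to be transmitted. A minimal sketch, assuming illustrative list values (the actual magnitudes are codec-defined and not given here):

```python
# Hypothetical shared motion-vector-difference magnitude list; the values
# below are illustrative only (e.g. magnitudes in quarter-pel units).
MVD_MAGNITUDE_LIST = [1, 2, 4, 8, 16, 32]

def encode_magnitude_index(magnitude):
    # Encoder side: write the index of the magnitude into the bit stream.
    return MVD_MAGNITUDE_LIST.index(magnitude)

def decode_magnitude(index):
    # Decoder side: parse the index and look the magnitude back up.
    return MVD_MAGNITUDE_LIST[index]
```

Because both ends construct the same list, the round trip through the index is lossless.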
  • the encoding end and the decoding end may construct the same at least two motion vector difference magnitude lists; for example, the encoding end and the decoding end construct the same motion vector difference magnitude list 1 and the same motion vector difference magnitude list 2.
  • the encoding end first selects the target motion vector difference magnitude list from all the motion vector difference magnitude lists based on the indication information of the motion vector difference magnitude list; the encoding end determines the magnitude index of the magnitude information of the first motion vector difference in the target motion vector difference magnitude list, and the encoded bit stream includes the magnitude index of the first motion vector difference.
  • the encoding end may further determine the magnitude index of the magnitude information of the second motion vector difference in the target motion vector difference magnitude list, and the encoded bit stream may include the magnitude index of the second motion vector difference.
  • the decoding end can parse out the magnitude index of the second motion vector difference from the encoded bit stream of the current block, and select the magnitude information corresponding to the magnitude index from the target motion vector difference magnitude list; this magnitude information is the magnitude information of the second motion vector difference.
  • Embodiment 22: In Embodiment 4, the encoder/decoder may obtain the first target motion vector and the second target motion vector of the current block based on the motion vector candidate list corresponding to the first reference frame information and the motion vector candidate list corresponding to the second reference frame information. When the reference frame pointed to by the first target motion vector and the reference frame pointed to by the second target motion vector are the same frame, the first reference frame information is the same as the second reference frame information, that is, the motion vector candidate list corresponding to the first reference frame information is the same as the motion vector candidate list corresponding to the second reference frame information.
  • the first target motion vector and the second target motion vector are obtained in the following manner: select one candidate motion vector from the motion vector candidate list as the original motion vector of the current block; determine the first target motion vector of the current block according to the original motion vector; and determine the second target motion vector of the current block according to the first target motion vector, or determine the second target motion vector of the current block according to the original motion vector.
  • alternatively, the motion vector difference corresponding to the original motion vector is obtained; the first target motion vector of the current block is determined according to the original motion vector, and the second target motion vector of the current block is determined according to the first target motion vector and the motion vector difference.
  • alternatively, the motion vector difference corresponding to the original motion vector is obtained; the first target motion vector of the current block is determined according to the original motion vector and the motion vector difference, and the second target motion vector of the current block is determined according to the original motion vector and the motion vector difference.
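The two derivation variants just described can be sketched as follows. Motion vectors and differences are modeled as (x, y) tuples, and simple component-wise addition is an assumption about how the difference is superimposed:

```python
def derive_targets_v1(orig_mv, mvd):
    # Variant 1: the first target MV is the original MV; the second target MV
    # is derived from the first target MV and the motion vector difference.
    mv1 = orig_mv
    mv2 = (mv1[0] + mvd[0], mv1[1] + mvd[1])
    return mv1, mv2

def derive_targets_v2(orig_mv, mvd):
    # Variant 2: both target MVs are derived from the original MV and the
    # motion vector difference; using opposite signs for the two targets is
    # an illustrative assumption, not something mandated by the text.
    mv1 = (orig_mv[0] + mvd[0], orig_mv[1] + mvd[1])
    mv2 = (orig_mv[0] - mvd[0], orig_mv[1] - mvd[1])
    return mv1, mv2
```

Either way, a single original motion vector yields two (possibly different) target motion vectors for the weighted prediction.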
  • the first target motion vector and the second target motion vector may be different.
  • the motion vector candidate list corresponding to the first reference frame information and the motion vector candidate list corresponding to the second reference frame information are constructed instead of the motion information candidate list.
  • a motion vector candidate list may be constructed for the refIdx-th frame of list ListX; the construction method may be the construction method of the motion vector candidate list in the normal inter-frame mode, or, on the basis of Embodiment 13 or Embodiment 14, a restriction on the pointed-to reference frame may be added. When such a restriction on the reference frame is added, the motion information candidate list may serve as a motion vector candidate list.
  • a judgment on the reference frame is added when judging availability; or, when adding a unidirectional motion vector, means such as a scaling operation (Scale) may be used.
  • the encoding end/decoding end may determine the first predicted value of the pixel position according to the first target motion information, and determine the second predicted value of the pixel position according to the second target motion information.
  • the inter-frame prediction process is not limited herein.
  • an inter-frame weighted prediction mode may be used to obtain the first predicted value of the pixel position.
  • the first target motion information is used to determine the initial predicted value of the pixel position, and then the initial predicted value is multiplied by a preset factor to obtain the adjusted predicted value.
  • if the adjusted predicted value is greater than the maximum predicted value, the maximum predicted value is used as the first predicted value of the current block; if the adjusted predicted value is smaller than the minimum predicted value, the minimum predicted value is used as the first predicted value of the current block; if the adjusted predicted value is neither less than the minimum predicted value nor greater than the maximum predicted value, the adjusted predicted value is used as the first predicted value of the current block.
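The scale-then-clamp rule above can be sketched directly; the preset factor and the min/max prediction bounds would come from the codec configuration and are passed in here as parameters:

```python
def adjusted_first_prediction(initial_pred, factor, min_pred, max_pred):
    # Multiply the initial predicted value by the preset factor, then clamp
    # the result into [min_pred, max_pred] as described above.
    adjusted = initial_pred * factor
    if adjusted > max_pred:
        return max_pred
    if adjusted < min_pred:
        return min_pred
    return adjusted
```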
  • the inter-frame weighted prediction mode can also be used to obtain the second predicted value of the pixel position; for the specific implementation, refer to the above example, which will not be repeated here.
  • Embodiment 1 to Embodiment 3, Embodiment 5 to Embodiment 12, and Embodiment 13 to Embodiment 20 may be implemented independently or in combination.
  • for example, Embodiment 1 and Embodiment 2 are implemented in combination; Embodiment 1 and Embodiment 3 are implemented in combination; Embodiment 1 and at least one of Embodiment 5 to Embodiment 12 are implemented in combination; Embodiment 1 and at least one of Embodiment 13 to Embodiment 20 are implemented in combination; Embodiment 2 and at least one of Embodiment 5 to Embodiment 12 are implemented in combination; Embodiment 2 and at least one of Embodiment 13 to Embodiment 20 are implemented in combination; Embodiment 3 and at least one of Embodiment 5 to Embodiment 12 are implemented in combination; Embodiment 3 and at least one of Embodiment 13 to Embodiment 20 are implemented in combination. Of course, the combination manner is not limited.
  • Embodiment 4, Embodiment 5 to Embodiment 12, and Embodiment 21 to Embodiment 22 may be implemented individually or in combination.
  • Embodiment 4 and at least one of Embodiment 5 to Embodiment 12 are implemented in combination;
  • Embodiment 4 and Embodiment 21 are implemented in combination;
  • Embodiment 4 and Embodiment 22 are implemented in combination;
  • Embodiment 21 and Embodiment 22 are implemented in combination; of course, the combination of the embodiments is not limited.
  • Embodiment 23 Based on the same application concept as the above method, this embodiment of the present application also proposes an encoding and decoding device, which is applied to the encoding end or the decoding end.
  • referring to FIG. 10A, which is a structural diagram of the device, the device includes:
  • the obtaining module 111 is configured to obtain the weight prediction angle and the weight configuration parameters of the current block when it is determined that weighted prediction is started for the current block; the configuration module 112 is configured to configure reference weight values for the peripheral positions outside the current block according to the weight configuration parameters; the determining module 113 is configured to, for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle, determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position; the obtaining module 111 is further configured to obtain a motion information candidate list, where the motion information candidate list includes at least one candidate motion information, and to obtain the first target motion information and the second target motion information of the current block based on the motion information candidate list; the determining module 113 is further configured to determine the first predicted value of the pixel position according to the first target motion information of the current block, determine the second predicted value of the pixel position according to the second target motion information of the current block, determine the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value and the associated weight value, and determine the weighted predicted value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • when the obtaining module 111 obtains the motion information candidate list, it is specifically configured to: acquire at least one piece of available motion information, and obtain the motion information candidate list based on the at least one piece of available motion information.
  • when the obtaining module 111 obtains the motion information candidate list based on the at least one piece of available motion information, it is specifically configured to: for the available motion information currently to be added to the motion information candidate list,
  • if the available motion information is unidirectional motion information, add the unidirectional motion information to the motion information candidate list;
  • if the available motion information is bidirectional motion information, trim the bidirectional motion information into first unidirectional motion information and second unidirectional motion information, and add the first unidirectional motion information to the motion information candidate list.
  • alternatively, if the available motion information is unidirectional motion information, and the unidirectional motion information does not overlap with the candidate motion information existing in the motion information candidate list, add the unidirectional motion information to the motion information candidate list;
  • if the available motion information is bidirectional motion information, trim the bidirectional motion information into first unidirectional motion information and second unidirectional motion information; if the first unidirectional motion information does not overlap with the candidate motion information existing in the motion information candidate list, add the first unidirectional motion information to the motion information candidate list.
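The list-construction rules above (add unidirectional information directly, trim bidirectional information and keep its first unidirectional part, optionally skipping duplicates) can be sketched as follows; representing motion information as small dicts is purely illustrative:

```python
def first_unidirectional(info):
    # The first unidirectional part of the motion information is the
    # (mv, ref) pair of the first prediction direction; for unidirectional
    # information this is the information itself.
    return {'mv': info['mv0'], 'ref': info['ref0']}

def add_available_motion_info(candidate_list, info, check_duplicates=True):
    # 'bi' marks bidirectional motion information, which also carries
    # mv1/ref1; only its first unidirectional part is added to the list.
    uni = first_unidirectional(info)
    if check_duplicates and uni in candidate_list:
        return  # overlaps with existing candidate motion information
    candidate_list.append(uni)

candidates = []
add_available_motion_info(candidates, {'bi': False, 'mv0': (1, 0), 'ref0': 0})
add_available_motion_info(candidates, {'bi': False, 'mv0': (1, 0), 'ref0': 0})  # duplicate, skipped
add_available_motion_info(candidates, {'bi': True, 'mv0': (2, 1), 'ref0': 0,
                                       'mv1': (-2, -1), 'ref1': 1})
```

After these three calls the list holds two candidates: the duplicate unidirectional entry was pruned, and the bidirectional entry contributed only its first unidirectional part.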
  • when the acquiring module 111 acquires the first target motion information and the second target motion information of the current block based on the motion information candidate list, it is specifically configured to: select one candidate motion information from the motion information candidate list as the first original motion information of the current block, and select one candidate motion information from the motion information candidate list as the second original motion information of the current block; determine the first target motion information of the current block according to the first original motion information, and determine the second target motion information of the current block according to the second original motion information.
  • the first original motion information includes a first original motion vector
  • the first target motion information includes a first target motion vector
  • when the obtaining module 111 determines the first target motion information of the current block according to the first original motion information, it is specifically configured to: obtain the first motion vector difference corresponding to the first original motion vector; determine the first target motion vector according to the first motion vector difference and the first original motion vector; or, determine the first original motion vector as the first target motion vector;
  • the second original motion information includes a second original motion vector, and the second target motion information includes a second target motion vector; when the obtaining module 111 determines the second target motion information of the current block according to the second original motion information, it is specifically configured to: obtain a second motion vector difference corresponding to the second original motion vector; determine the second target motion vector according to the second motion vector difference and the second original motion vector; or, determine the second original motion vector as the second target motion vector.
  • when the obtaining module 111 obtains the first target motion information and the second target motion information of the current block based on the motion information candidate list, it is specifically configured to: select one candidate motion information from the motion information candidate list as the original motion information of the current block; determine the first target motion information of the current block according to the original motion information; and determine the second target motion information of the current block according to the first target motion information, or determine the second target motion information of the current block according to the original motion information.
  • the original motion information includes an original motion vector
  • the first target motion information includes a first target motion vector
  • the second target motion information includes a second target motion vector
  • the obtaining module 111 is specifically configured to: obtain the motion vector difference corresponding to the original motion vector, determine the first target motion vector of the current block according to the original motion vector, and determine the second target motion vector of the current block according to the first target motion vector and the motion vector difference; or, obtain the motion vector difference corresponding to the original motion vector, determine the first target motion vector of the current block according to the original motion vector and the motion vector difference, and determine the second target motion vector of the current block according to the original motion vector and the motion vector difference; or, obtain the first motion vector difference and the second motion vector difference corresponding to the original motion vector, determine the first target motion vector of the current block according to the original motion vector and the first motion vector difference, and determine the second target motion vector of the current block according to the first target motion vector and the second motion vector difference; or, obtain the first motion vector difference and the second motion vector difference corresponding to the original motion vector, determine the first target motion vector of the current block according to the original motion vector and the first motion vector difference, and determine the second target motion vector of the current block according to the original motion vector and the second motion vector difference.
  • the first target motion vector and the second target motion vector may be different.
  • an embodiment of the present application also proposes an encoding and decoding device, which is applied to the encoding end or the decoding end.
  • referring to FIG. 10B, which is a structural diagram of the device, the device includes:
  • the obtaining module 121 is configured to obtain the weight prediction angle and the weight configuration parameters of the current block when it is determined to start weighted prediction for the current block; the configuration module 122 is configured to configure reference weight values for the peripheral positions outside the current block according to the weight configuration parameters; the determining module 123 is configured to, for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle, determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position; the obtaining module 121 is further configured to obtain reference frame information and a motion vector candidate list corresponding to the reference frame information, where the motion vector candidate list includes at least one candidate motion vector and the reference frame information includes first reference frame information and second reference frame information, and to obtain the first target motion vector and the second target motion vector of the current block based on the motion vector candidate list; the determining module 123 is further configured to determine the first predicted value of the pixel position according to the first target motion information of the current block, determine the second predicted value of the pixel position according to the second target motion information of the current block, determine the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value and the associated weight value, and determine the weighted predicted value of the current block according to the weighted predicted values of all pixel positions of the current block.
  • the weight configuration parameters include a weight transformation rate. If the current block supports a weight transformation rate switching mode, the obtaining module 121 obtains the weight transformation rate of the current block in the following manner: obtain the weight transformation rate indication information of the current block; determine the weight transformation rate of the current block according to the weight transformation rate indication information, wherein, if the weight transformation rate indication information is the first indication information, the weight transformation rate of the current block is the first weight transformation rate, and if the weight transformation rate indication information is the second indication information, the weight transformation rate of the current block is the second weight transformation rate.
  • the weight transformation rate indication information of the current block is a weight transformation rate switching identifier corresponding to the current block; the first indication information is used to indicate that the current block does not need to perform weight transformation rate switching, and the second indication information is used to indicate that the current block needs to perform weight transformation rate switching.
  • the weight configuration parameters include the weight transformation rate and the starting position of the weight transformation.
  • when the configuration module 122 configures reference weight values for the peripheral positions outside the current block according to the weight configuration parameters, it is specifically configured to:
  • for each peripheral position, configure the reference weight value of the peripheral position according to the coordinate value of the peripheral position, the coordinate value of the starting position of the weight transformation, and the weight transformation rate.
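One plausible way to realize this configuration rule is a clamped linear ramp: the reference weight stays at zero before the weight transformation start position and then grows with the weight transformation rate until it saturates. The maximum weight of 8 here is an illustrative assumption, not a value given by the text:

```python
def reference_weight(peripheral_coord, transform_start, transform_rate,
                     max_weight=8):
    # Clamped linear ramp: weight 0 before the start position, then rising
    # with the transformation rate, saturating at max_weight.
    w = transform_rate * (peripheral_coord - transform_start)
    return max(0, min(max_weight, w))
```

A larger transformation rate makes the transition from weight 0 to the maximum weight steeper, which is what switching the weight transformation rate changes.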
  • the motion vector candidate list includes a motion vector candidate list corresponding to the first reference frame information and a motion vector candidate list corresponding to the second reference frame information; when the acquiring module 121 acquires the first target motion vector and the second target motion vector of the current block based on the motion vector candidate lists, it is specifically configured to: select one candidate motion vector from the motion vector candidate list corresponding to the first reference frame information as the first original motion vector of the current block, and select one candidate motion vector from the motion vector candidate list corresponding to the second reference frame information as the second original motion vector of the current block; determine the first target motion vector of the current block according to the first original motion vector, and determine the second target motion vector of the current block according to the second original motion vector.
  • when the obtaining module 121 determines the first target motion vector of the current block according to the first original motion vector, it is specifically configured to: obtain a first motion vector difference corresponding to the first original motion vector; determine the first target motion vector according to the first motion vector difference and the first original motion vector; or, determine the first original motion vector as the first target motion vector.
  • when the obtaining module 121 determines the second target motion vector of the current block according to the second original motion vector, it is specifically configured to: obtain a second motion vector difference corresponding to the second original motion vector; determine the second target motion vector according to the second motion vector difference and the second original motion vector; or, determine the second original motion vector as the second target motion vector.
  • when the motion vector candidate list corresponding to the first reference frame information is the same as the motion vector candidate list corresponding to the second reference frame information, and the acquiring module 121 acquires the first target motion vector and the second target motion vector of the current block based on the motion vector candidate list, it is specifically configured to: select one candidate motion vector from the motion vector candidate list as the original motion vector of the current block; determine the first target motion vector of the current block according to the original motion vector; and determine the second target motion vector of the current block according to the first target motion vector, or determine the second target motion vector of the current block according to the original motion vector.
  • the obtaining module 121 is specifically configured to:
  • obtain the first motion vector difference and the second motion vector difference corresponding to the original motion vector, determine the first target motion vector of the current block according to the original motion vector and the first motion vector difference, and determine the second target motion vector of the current block according to the first target motion vector and the second motion vector difference; or, obtain the first motion vector difference and the second motion vector difference corresponding to the original motion vector, determine the first target motion vector of the current block according to the original motion vector and the first motion vector difference, and determine the second target motion vector of the current block according to the original motion vector and the second motion vector difference.
  • the first target motion vector and the second target motion vector may be different.
  • for the decoding end device (which may also be referred to as a video decoder) provided by the embodiments of the present application, a schematic diagram of its hardware architecture may be seen in FIG. 10C. It includes a processor 131 and a machine-readable storage medium 132, wherein the machine-readable storage medium 132 stores machine-executable instructions that can be executed by the processor 131, and the processor 131 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of this application. For example, the processor 131 is configured to execute machine-executable instructions to implement the following steps:
  • for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle; determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position;
  • the first prediction value of the pixel position is determined according to the first target motion information of the current block, and the second prediction value of the pixel position is determined according to the second target motion information of the current block; according to the first prediction value, the target weight value, the second predicted value and the associated weight value, determine the weighted predicted value of the pixel position;
  • the weighted prediction value of the current block is determined according to the weighted prediction values of all pixel positions of the current block. or,
  • for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle; determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position;
  • the motion vector candidate list includes at least one candidate motion vector
  • the reference frame information includes first reference frame information and second reference frame information;
  • the first prediction value of the pixel position is determined according to the first target motion information of the current block, and the second prediction value of the pixel position is determined according to the second target motion information of the current block; according to the first prediction value, the target weight value, the second predicted value and the associated weight value, determine the weighted predicted value of the pixel position;
  • the first target motion information includes a first target motion vector and first reference frame information corresponding to the first target motion vector;
  • the second target motion information includes a second target motion vector and second reference frame information corresponding to the second target motion vector;
  • the weighted prediction value of the current block is determined according to the weighted prediction values of all pixel positions of the current block.
  • for the encoding end device (which may also be referred to as a video encoder) provided by the embodiments of the present application, a schematic diagram of its hardware architecture may be seen in FIG. 10D.
  • it includes a processor 141 and a machine-readable storage medium 142, wherein the machine-readable storage medium 142 stores machine-executable instructions that can be executed by the processor 141, and the processor 141 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of this application.
  • the processor 141 is configured to execute machine-executable instructions to implement the following steps:
  • for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle; determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position;
  • the first prediction value of the pixel position is determined according to the first target motion information of the current block, and the second prediction value of the pixel position is determined according to the second target motion information of the current block; according to the first prediction value, the target weight value, the second predicted value and the associated weight value, determine the weighted predicted value of the pixel position;
  • the weighted prediction value of the current block is determined according to the weighted prediction values of all pixel positions of the current block. or,
  • for each pixel position of the current block, determine the peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block according to the weight prediction angle; determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine the associated weight value of the pixel position according to the target weight value of the pixel position;
  • the motion vector candidate list includes at least one candidate motion vector
  • the reference frame information includes first reference frame information and second reference frame information;
  • the first prediction value of the pixel position is determined according to the first target motion information of the current block, and the second prediction value of the pixel position is determined according to the second target motion information of the current block; according to the first prediction value, the target weight value, the second predicted value and the associated weight value, determine the weighted predicted value of the pixel position;
  • the first target motion information includes a first target motion vector and first reference frame information corresponding to the first target motion vector;
  • the second target motion information includes a second target motion vector and second reference frame information corresponding to the second target motion vector;
  • the weighted prediction value of the current block is determined according to the weighted prediction values of all pixel positions of the current block.
  • an embodiment of the present application further provides a machine-readable storage medium on which several computer instructions are stored; when the computer instructions are executed by a processor, the methods disclosed in the above examples of this application, such as the encoding and decoding methods in the above embodiments, can be implemented.
  • a typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular phone, camera phone, smart phone, personal digital assistant, media player, navigation device, e-mail device, game console, tablet, wearable device, or a combination of any of these devices.
  • for convenience of description, the functions are divided into various units and described separately; when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
  • Embodiments of the present application may be provided as a method, a system, or a computer program product.
  • the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.


Abstract

The present application provides an encoding and decoding method, apparatus, and device thereof, including: obtaining a weight prediction angle and weight configuration parameters; configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters; determining, according to the weight prediction angle, the peripheral matching position pointed to by a pixel position from the peripheral positions outside the current block; determining the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining the associated weight value of the pixel position according to the target weight value of the pixel position; determining the first predicted value of the pixel position according to first target motion information, and determining the second predicted value of the pixel position according to second target motion information; determining the weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and determining the weighted predicted value of the current block according to the weighted predicted values of all pixel positions. The present application improves prediction accuracy.

Description

一种编解码方法、装置及其设备
相关申请的交叉引用
本专利申请要求于2020年6月30日提交的、申请号为202010622752.7、发明名称为“一种编解码方法、装置及其设备”的中国专利申请的优先权,该申请的全文以引用的方式并入本文中。
技术领域
本申请涉及编解码技术领域,尤其是涉及一种编解码方法、装置及其设备。
背景技术
为了达到节约空间的目的,视频图像都是经过编码后才传输的,视频编码可以包括预测、变换、量化、熵编码、滤波等过程。预测可以包括帧内预测和帧间预测。帧间预测是指利用视频时域的相关性,使用邻近已编码图像的像素预测当前像素,以达到有效去除视频时域冗余的目的。帧内预测是指利用视频空域的相关性,使用当前帧图像的已编码块的像素预测当前像素,以达到去除视频空域冗余的目的。
在相关技术中,当前块为矩形,而实际物体的边缘往往不是矩形,因此,对于物体边缘来说,往往存在两个不同对象(如存在前景的物体和背景)。基于此,当两个对象的运动不一致时,矩形划分不能很好的将两个对象分割,即使将当前块划分为两个非矩形子块,通过两个非矩形子块预测当前块,目前,也存在预测效果不佳,编码性能较差等问题。
发明内容
有鉴于此,本申请提供了一种编解码方法、装置及其设备,提高了预测的准确性。
本申请提供一种编解码方法,所述方法包括:
在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;
根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;
针对所述当前块的每个像素位置,根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置;根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值,根据所述像素位置的目标权重值确定所述像素位置的关联权重值;
获取运动信息候选列表,所述运动信息候选列表包括至少一个候选运动信息;基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息;
根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;
根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。
本申请提供一种编解码装置,所述装置包括:获取模块,用于在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;配置模块,用于根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;确定模块,用于针对所述当前块的每个像素位置,根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置;根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值,根据所述像素位置的目标权重值确定所述像素位置的关联权重值;所述获取模块,还用于获取运动信息候选列表,所述运动信息候选列表包括至少一个候选运动信息;基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息;所述确定模块,还用于根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。
本申请提供一种解码端设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
所述处理器用于执行机器可执行指令,以实现如下步骤:
在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;
根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;
针对所述当前块的每个像素位置,根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置;根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值,根据所述像素位置的目标权重值确定所述像素位置的关联权重值;
获取运动信息候选列表,所述运动信息候选列表包括至少一个候选运动信息;基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息;
根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;
根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。
本申请提供一种编码端设备,包括:处理器和机器可读存储介质,所述机器可读存储介质存储有能够被所述处理器执行的机器可执行指令;
所述处理器用于执行机器可执行指令,以实现如下步骤:
在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;
根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;
针对所述当前块的每个像素位置，根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置；根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值，根据所述像素位置的目标权重值确定所述像素位置的关联权重值；
获取运动信息候选列表,所述运动信息候选列表包括至少一个候选运动信息;基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息;
根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;
根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。
由以上技术方案可见，本申请实施例中，提出一种配置权重值的有效方式，能够为当前块的每个像素位置配置合理的目标权重值，从而提高预测的准确性和预测性能，能够使当前块的预测值更加接近原始像素，并带来编码性能的提高。
附图说明
图1是视频编码框架的示意图;
图2A-图2E是加权预测的示意图;
图3是本申请一种实施方式中的编解码方法的流程图;
图4A-图4D是当前块外部的周边位置的示意图;
图5是本申请一种实施方式中的编解码方法的流程图;
图6是本申请一种实施方式中的权重预测角度的示意图;
图7是本申请一种实施方式中的四种权重变换率的参考权重值的示意图;
图8A-图8C是本申请一种实施方式中的权重预测角度与角度分区的示意图;
图9A是本申请一种实施方式中的当前块的相邻块的示意图;
图9B-图9D是本申请一种实施方式中的当前块的目标位置的示意图;
图10A是本申请一种实施方式中的编解码装置的结构示意图;
图10B是本申请一种实施方式中的编解码装置的结构示意图;
图10C是本申请一种实施方式中的解码端设备的硬件结构图;
图10D是本申请一种实施方式中的编码端设备的硬件结构图。
具体实施方式
在本申请实施例使用的术语仅仅是出于描述特定实施例的目的,而非限制本申请。本申请实施例和权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其它含义。还应当理解,本文中使用的术语“和/或”是指 包含一个或多个相关联的列出项目的任何或所有可能组合。应当理解,尽管在本申请实施例可能采用术语第一、第二、第三等来描述各种信息,但是,这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请实施例范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息,取决于语境。此外,所使用的词语“如果”可以被解释成为“在……时”,或“当……时”,或“响应于确定”。
本申请实施例中提出一种编解码方法、装置及其设备,可以涉及如下概念:
帧内预测(intra prediction),帧间预测(inter prediction)与IBC(帧内块拷贝)预测:
帧内预测是指,基于视频空域的相关性,使用已编码块进行预测,以达到去除视频空域冗余的目的。帧内预测规定了多种预测模式,每种预测模式对应一种纹理方向(DC模式除外),例如,若图像纹理呈现水平状排布,则水平预测模式可以更好的预测图像信息。
帧间预测是指,基于视频时域的相关性,由于视频序列包含较强的时域相关性,使用邻近已编码图像像素预测当前图像的像素,可以达到有效去除视频时域冗余的目的。
帧内块拷贝(IBC,Intra Block Copy)是指,允许同帧参考,当前块的参考数据来自同一帧,帧内块拷贝也可以称为帧内块复制。帧内块拷贝技术中,可以使用当前块的块矢量获取当前块的预测值,示例性的,基于屏幕内容中同一帧内存在大量重复出现的纹理这一特性,在采用块矢量获取当前块的预测值时,能够提升屏幕内容序列的压缩效率。
预测像素(Prediction Signal):预测像素是指从已编解码的像素中导出的像素值,通过原始像素与预测像素之差获得残差,进而进行残差变换量化以及系数编码。帧间的预测像素指的是当前块从参考帧导出的像素值,由于像素位置离散,需要通过插值运算来获取最终的预测像素。预测像素与原始像素越接近,两者相减得到的残差能量越小,编码压缩性能越高。
运动矢量(Motion Vector,MV):在帧间预测中,可以使用运动矢量表示当前帧的当前块与参考帧的参考块之间的相对位移。每个划分的块都有相应的运动矢量传送到解码端,如果对每个块的运动矢量进行独立编码和传输,特别是小尺寸的大量块,则消耗很多比特。为降低用于编码运动矢量的比特数,可以利用相邻块之间的空间相关性,根据相邻已编码块的运动矢量对当前块的运动矢量进行预测,然后对预测差进行编码,这样可以有效降低表示运动矢量的比特数。在对当前块的运动矢量进行编码时,可以先使用相邻已编码块的运动矢量预测当前块的运动矢量,然后对该运动矢量的预测值(MVP,Motion Vector Prediction)与运动矢量的真正估值之间的差值(MVD,Motion Vector Difference)进行编码。
运动信息(Motion Information):由于运动矢量表示当前块与某个参考块之间的位置偏移,为了准确的获取指向块的信息,除了运动矢量,还需要参考帧图像的索引信息来表示当前块使用哪个参考帧图像。在视频编码技术中,对于当前帧,通常可以建立一 个参考帧图像列表,参考帧图像索引信息则表示当前块采用了参考帧图像列表中的第几个参考帧图像。
此外,很多编码技术还支持多个参考帧图像列表,因此,还可以使用一个索引值来表示使用了哪一个参考帧图像列表,这个索引值可以称为参考方向。综上所述,在视频编码技术中,可以将运动矢量,参考帧索引,参考方向等与运动相关的信息统称为运动信息。
率失真原则（Rate-Distortion Optimization，RDO）：评价编码效率有两大指标：码率和PSNR（Peak Signal to Noise Ratio，峰值信噪比），比特流越小，则压缩率越大，PSNR越大，则重建图像质量越好；在模式选择时，判别公式实质上也就是对二者的综合评价。例如，模式对应的代价：J(mode)=D+λ*R，其中，D表示Distortion（失真），通常可以使用SSE指标来进行衡量，SSE是指重建图像块与源图像的差值的平方和；λ是拉格朗日乘子，R就是该模式下图像块编码所需的实际比特数，包括编码模式信息、运动信息、残差等所需的比特总和。在模式选择时，若使用RDO原则去对编码模式做比较决策，通常可以保证编码性能最佳。
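示例性的，上述率失真代价的计算与模式比较过程可以用如下示意代码表示（候选模式及其D、R取值仅为假设）：

```python
def rd_cost(distortion, rate, lam):
    # 率失真代价：J(mode) = D + lambda * R
    return distortion + lam * rate

def best_mode(candidates, lam):
    # candidates为(模式名, 失真D, 比特数R)的列表，选取代价最小的模式
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))
```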
视频编码框架:参见图1所示,可以使用视频编码框架实现本申请实施例的编码端处理流程;视频解码框架的示意图与图1类似,在此不再赘述,可以使用视频解码框架实现本申请实施例的解码端处理流程。示例性的,在视频编码框架和视频解码框架中,可以包括但不限于帧内预测/帧间预测、运动估计/运动补偿、参考图像缓冲器、环内滤波、重建、变换、量化、反变换、反量化、熵编码器等模块。在编码端,通过这些模块之间的配合,可以实现编码端处理流程,在解码端,通过这些模块之间的配合,可以实现解码端处理流程。
在相关技术中,当前块可以为矩形,而实际物体的边缘往往不是矩形,因此,对于物体的边缘来说,往往存在两个不同对象(如存在前景的物体和背景等)。当两个对象的运动不一致时,则矩形划分不能很好的将这两个对象进行分割,为此,可以将当前块划分为两个非矩形子块,并对两个非矩形子块进行加权预测。示例性的,加权预测是利用多个预测值进行加权操作,从而获得最终预测值,加权预测可以包括:帧间和帧内的联合加权预测(Combined inter/intra prediction,CIIP),帧间和帧间的联合加权预测,帧内和帧内的联合加权预测等。针对联合加权预测的权重值,可以为当前块的不同像素位置配置相同权重值,也可以为当前块的不同像素位置配置不同权重值。
参见图2A所示,为帧间帧内联合加权预测的示意图。
CIIP预测块由帧内预测块(即采用帧内预测模式得到像素位置的帧内预测值)和帧间预测块(即采用帧间预测模式得到像素位置的帧间预测值)加权得到,每个像素位置采用的帧内预测值与帧间预测值的权重比是1:1。例如,针对每个像素位置,将该像素位置的帧内预测值与该像素位置的帧间预测值进行加权,得到该像素位置的联合预测值,最终将每个像素位置的联合预测值组成CIIP预测块。
参见图2B所示,为帧间三角划分加权预测(Triangular Partition Mode,TPM)的示意图。
TPM预测块由帧间预测块1(即采用帧间预测模式1得到像素位置的帧间预测值)和帧间预测块2(即采用帧间预测模式2得到像素位置的帧间预测值)组合得到。TPM预测块可以划分为两个区域,一个区域可以为帧间区域1,另一个区域可以为帧间区域2,TPM预测块的两个帧间区域可以呈非矩形分布,虚线分界线的角度可以为主对角线或者副对角线两种。
针对帧间区域1的每个像素位置,主要基于帧间预测块1的帧间预测值确定该像素位置的预测值,例如,将该像素位置的帧间预测块1的帧间预测值与该像素位置的帧间预测块2的帧间预测值进行加权时,帧间预测块1的帧间预测值的权重值较大,帧间预测块2的帧间预测值的权重值较小(甚至为0),得到该像素位置的联合预测值。针对帧间区域2的每个像素位置,主要基于帧间预测块2的帧间预测值确定该像素位置的预测值,例如,将该像素位置的帧间预测块1的帧间预测值与该像素位置的帧间预测块2的帧间预测值进行加权时,帧间预测块2的帧间预测值的权重值较大,帧间预测块1的帧间预测值的权重值较小(甚至为0),得到该像素位置的联合预测值。
参见图2C所示,为帧间帧内联合三角加权预测的示意图。通过对帧间帧内联合加权预测进行修改,使CIIP预测块的帧间区域和帧内区域呈现三角加权划分预测的权重分布。
CIIP预测块由帧内预测块(即采用帧内预测模式得到像素位置的帧内预测值)和帧间预测块(即采用帧间预测模式得到像素位置的帧间预测值)组合得到。CIIP预测块可以划分为两个区域,一个区域可以为帧内区域,另一个区域可以为帧间区域,CIIP预测块的帧间帧内可以呈非矩形分布,虚线分界线区域可采用混合加权方式或者直接进行分割,且该虚线分界线的角度可以为主对角线或者副对角线两种,帧内区域和帧间区域的位置可变。
针对帧内区域的每个像素位置,主要基于帧内预测值确定该像素位置的预测值,例如,将该像素位置的帧内预测值与该像素位置的帧间预测值进行加权时,帧内预测值的权重值较大,帧间预测值的权重值较小,得到该像素位置的联合预测值。针对帧间区域的每个像素位置,主要基于帧间预测值确定该像素位置的预测值,例如,将该像素位置的帧内预测值与该像素位置的帧间预测值进行加权时,帧间预测值的权重值较大,帧内预测值的权重值较小,得到该像素位置的联合预测值。
参见图2D所示,为帧间块几何分割模式(Geometrical partitioning for inter blocks,GEO)的示意图,GEO模式用于利用一条分割线将帧间预测块划分为两个子块,不同于TPM模式,GEO模式可以采用更多的划分方向,GEO模式的加权预测过程与TPM模式类似。
GEO预测块由帧间预测块1(即采用帧间预测模式1得到像素位置的帧间预测值)和帧间预测块2(即采用帧间预测模式2得到像素位置的帧间预测值)组合得到。GEO预测块可以划分为两个区域,一个区域可以为帧间区域1,另一个区域可以为帧间区域2。
针对帧间区域1的每个像素位置,主要基于帧间预测块1的帧间预测值确定该像素位置的预测值,例如,将该像素位置的帧间预测块1的帧间预测值与该像素位置的帧间 预测块2的帧间预测值进行加权时,帧间预测块1的帧间预测值的权重值较大,帧间预测块2的帧间预测值的权重值较小。针对帧间区域2的每个像素位置,主要基于帧间预测块2的帧间预测值确定该像素位置的预测值,例如,将该像素位置的帧间预测块1的帧间预测值与该像素位置的帧间预测块2的帧间预测值进行加权时,帧间预测块2的帧间预测值的权重值较大,帧间预测块1的帧间预测值的权重值较小。
示例性的,GEO预测块的权重值配置与像素位置离分割线的距离有关,参见图2E所示,像素位置A,像素位置B和像素位置C位于分割线右下侧,像素位置D,像素位置E和像素位置F位于分割线左上侧。对于像素位置A,像素位置B和像素位置C来说,帧间区域2的权重值排序为B≥A≥C,帧间区域1的权重值排序为C≥A≥B。对于像素位置D,像素位置E和像素位置F来说,帧间区域1的权重值排序为D≥F≥E,帧间区域2的权重值排序为E≥F≥D。上述方式需要计算像素位置与分割线的距离,继而确定像素位置的权重值。
针对上述各种情况,为了实现加权预测,均需要确定当前块的每个像素位置的权重值,并基于像素位置的权重值对该像素位置进行加权预测。但是,相关技术中,并没有配置权重值的有效方式,无法配置合理的权重值,导致预测效果不佳,编码性能差等问题。
针对上述发现,本申请实施例中提出权重值的导出方式,可以根据当前块外部的周边位置的参考权重值,确定当前块的每个像素位置的目标权重值,能够为每个像素位置配置合理的目标权重值,提高预测准确性,提高预测性能,提高编码性能,预测值更接近原始像素。
以下结合几个具体实施例,对本申请实施例中的编解码方法进行详细说明。
实施例1:参见图3所示,为编解码方法的流程示意图,该方法可以应用于解码端(也可以称为视频解码器)或者编码端(也可以称为视频编码器),该方法可以包括:
步骤301,在确定对当前块启动加权预测时,获取当前块的权重预测角度和权重配置参数,该权重配置参数包括权重变换率和权重变换的起始位置。权重变换的起始位置可以由如下参数的至少一个确定:当前块的权重预测角度,当前块的权重预测位置,当前块的尺寸。
示例性的,在需要对当前块进行预测时,解码端或者编码端可以先确定是否对当前块启动加权预测。若对当前块启动加权预测,则采用本申请实施例的编解码方法,即执行步骤301和后续步骤。若对当前块不启动加权预测,则实现方式本申请实施例中不做限制。
示例性的,在确定对当前块启动加权预测时,可以获取当前块的权重预测角度,当前块的权重预测位置,及当前块的权重变换率。然后,可以基于当前块的权重预测角度,当前块的权重预测位置和当前块的尺寸中的至少一个,确定当前块的权重变换的起始位置。至此,可以得到当前块的权重预测角度,当前块的权重变换率和当前块的权重变换的起始位置。
步骤302,根据当前块的权重配置参数为当前块外部的周边位置配置参考权重值。
示例性的,当前块外部的周边位置的数量可以是基于当前块的尺寸和/或当前块的权重预测角度确定,例如,基于当前块的尺寸和/或当前块的权重预测角度确定当前块外部的周边位置的数量M,并根据当前块的权重配置参数为M个周边位置配置参考权重值。
示例性的,当前块外部的周边位置的参考权重值可以单调递增;或者,当前块外部的周边位置的参考权重值可以单调递减。比如说,当前块外部的周边位置的参考权重值的排布可以为00…0024688…88,或者,当前块外部的周边位置的参考权重值的排布可以为88…8864200…00。
示例性的,当前块外部的周边位置可以包括整像素位置,或者,亚像素位置,或者,整像素位置和亚像素位置。当前块外部的周边位置可以包括但不限于:当前块外部上侧一行的周边位置,或者,当前块外部左侧一列的周边位置,或者,当前块外部下侧一行的周边位置,或者,当前块外部右侧一列的周边位置。当然,上述只是周边位置的示例,对此不做限制。
在一种可能的实施方式中,当前块外部的周边位置的参考权重值包括目标区域的参考权重值,目标区域的第一邻近区域的参考权重值,目标区域的第二邻近区域的参考权重值。
示例性的,第一邻近区域的参考权重值均为第一参考权重值,第二邻近区域的参考权重值单调递增。或者,第一邻近区域的参考权重值均为第一参考权重值,第二邻近区域的参考权重值单调递减。或者,第一邻近区域的参考权重值均为第二参考权重值,第二邻近区域的参考权重值均为第三参考权重值,且该第二参考权重值与该第三参考权重值不同。或者,第一邻近区域的参考权重值单调递增,第二邻近区域的参考权重值单调递增。或者,第一邻近区域的参考权重值单调递减,第二邻近区域的参考权重值单调递减。
示例性的,目标区域包括一个参考权重值或者至少两个参考权重值;若目标区域包括至少两个参考权重值,则目标区域的至少两个参考权重值单调递增或单调递减。
步骤303,针对当前块的每个像素位置,根据当前块的权重预测角度从当前块外部的周边位置中确定该像素位置指向的周边匹配位置;根据该周边匹配位置关联的参考权重值确定该像素位置的目标权重值,根据该像素位置的目标权重值确定该像素位置的关联权重值。
示例性的,权重预测角度表示当前块内部的像素位置所指向的角度方向,例如,基于某一种权重预测角度,该权重预测角度对应的角度方向指向当前块的某个外部周边位置。基于此,针对当前块的每个像素位置,基于该权重预测角度确定该像素位置所指向的角度方向,继而根据该角度方向从当前块外部的周边位置中确定该像素位置指向的周边匹配位置。
针对当前块的每个像素位置,在确定该像素位置指向的周边匹配位置后,基于该周边匹配位置关联的参考权重值确定该像素位置的目标权重值,例如,将该周边匹配位置关联的参考权重值确定为该像素位置的目标权重值。然后,根据该像素位置的目标权重值确定该像素位置的关联权重值,例如,每个像素位置的目标权重值与关联权重值的和, 可以均为固定的预设数值,因此,关联权重值可以为预设数值与目标权重值之差。假设预设数值为8,像素位置的目标权重值为0,则该像素位置的关联权重值为8;若像素位置的目标权重值为1,则该像素位置的关联权重值为7,以此类推,只要目标权重值与关联权重值之和为8即可。
步骤304,获取运动信息候选列表,该运动信息候选列表包括至少一个候选运动信息;基于该运动信息候选列表获取当前块的第一目标运动信息和第二目标运动信息。
步骤305,针对当前块的每个像素位置,根据当前块的第一目标运动信息确定该像素位置的第一预测值,根据当前块的第二目标运动信息确定该像素位置的第二预测值;根据该第一预测值,该目标权重值,该第二预测值和该关联权重值,确定该像素位置的加权预测值。
示例性的,假设目标权重值是第一目标运动信息对应的权重值,关联权重值是第二目标运动信息对应的权重值,则该像素位置的加权预测值可以为:(该像素位置的第一预测值*该像素位置的目标权重值+该像素位置的第二预测值*该像素位置的关联权重值)/固定的预设数值。或者,假设目标权重值是第二目标运动信息对应的权重值,关联权重值是第一目标运动信息对应的权重值,则该像素位置的加权预测值可以为:(该像素位置的第二预测值*该像素位置的目标权重值+该像素位置的第一预测值*该像素位置的关联权重值)/固定的预设数值。
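示例性的，以预设数值为8为例，上述加权计算过程可以用如下示意代码表示（此处采用整数除法，未考虑实际标准中可能存在的舍入偏移，属于简化假设）：

```python
def weighted_pred(pred1, pred2, target_weight, total=8):
    # target_weight为目标权重值（此处假设对应第一预测值），
    # 关联权重值为total - target_weight，两者之和恒为预设数值total
    assoc_weight = total - target_weight
    return (pred1 * target_weight + pred2 * assoc_weight) // total
```

当target_weight为8时结果等于第一预测值，为0时结果等于第二预测值。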
步骤306,根据当前块的所有像素位置的加权预测值确定当前块的加权预测值。
例如,将当前块的所有像素位置的加权预测值组成当前块的加权预测值。
由以上技术方案可见，本申请实施例中，提出一种配置权值的有效方式，能够为当前块的每个像素位置配置合理的目标权重值，从而提高预测的准确性和预测性能，能够使当前块的预测值更加接近原始像素，并带来编码性能的提高。
实施例2:本申请实施例提出另一种编解码方法,可以应用于编码端,该方法包括:
步骤a1,在确定对当前块启动加权预测时,编码端获取当前块的权重预测角度,当前块的权重预测位置,及当前块的权重变换率。示例性的,编码端确定是否对当前块启动加权预测,如果是,则执行步骤a1及后续步骤,如果否,则处理方式本申请不做限制。
在一种可能的实施方式中,若当前块满足启动加权预测的条件,则确定对当前块启动加权预测。若当前块不满足启动加权预测的条件,则确定不对当前块启动加权预测。例如,判断当前块的特征信息是否满足特定条件。如果是,确定对当前块启动加权预测;如果否,确定对当前块不启动加权预测。特征信息包括但不限于以下之一或任意组合:当前块所在当前帧的帧类型,当前块的尺寸信息,开关控制信息。开关控制信息可以包括但不限于:序列级(SPS、SH)开关控制信息,或,图像级(PPS、PH)开关控制信息,或,片级(Slice、Tile、Patch),或,最大编码单元级(LCU、CTU),或块级(CU、PU、TU)开关控制信息。
例如,若特征信息为当前块所在当前帧的帧类型,当前块所在当前帧的帧类型满足特定条件,可以包括但不限于:若当前块所在当前帧的帧类型为B帧,则确定帧类型满 足特定条件。或者,若当前块所在当前帧的帧类型为I帧,则确定帧类型满足特定条件。
例如,若特征信息为当前块的尺寸信息,如当前块的宽度和高度,尺寸信息满足特定条件包括但不限于:若宽度大于或等于第一数值,高度大于或等于第二数值,确定当前块的尺寸信息满足特定条件。或,若宽度大于或等于第三数值,高度大于或等于第四数值,宽度小于或等于第五数值,高度小于或等于第六数值,确定当前块的尺寸信息满足特定条件。或,若宽度和高度的乘积大于或等于第七数值,确定当前块的尺寸信息满足特定条件。上述数值可以根据经验配置,如8、16、32、64、128等。比如说,第一数值为8,第二数值为8,第三数值为8,第四数值为8,第五数值为64,第六数值为64,第七数值为64。综上所述,若宽度大于或等于8,高度大于或等于8,确定当前块的尺寸信息满足特定条件。或,若宽度大于或等于8,高度大于或等于8,宽度小于或等于64,高度小于或等于64,确定当前块的尺寸信息满足特定条件。或,若宽度和高度的乘积大于或等于64,确定当前块的尺寸信息满足特定条件。
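示例性的，以上述举例的数值（宽高下限为8、上限为64、面积下限为64）为例，文中给出的三种可选尺寸判断方式可以示意如下（此处以任一条件成立即认为满足为例，这种组合方式属于本示例的假设）：

```python
def size_ok(width, height):
    # 三种可选判断方式，任一成立即认为尺寸信息满足特定条件
    cond1 = width >= 8 and height >= 8
    cond2 = 8 <= width <= 64 and 8 <= height <= 64
    cond3 = width * height >= 64
    return cond1 or cond2 or cond3
```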
例如,若特征信息为当前块的尺寸信息,如当前块的宽度和高度,则当前块的尺寸信息满足特定条件,可以包括但不限于:宽度不小于a,且不大于b,高度不小于a,且不大于b。a可以小于或等于16,b可以大于或等于16。例如,a等于8,b等于64或者b等于32。
例如,若特征信息为开关控制信息,则该开关控制信息满足特定条件,可以包括但不限于:若开关控制信息为允许当前块启用加权预测,则确定该开关控制信息满足特定条件。
例如,若特征信息为当前块所在当前帧的帧类型,当前块的尺寸信息,则帧类型满足特定条件,且尺寸信息满足特定条件时,可以确定当前块的特征信息满足特定条件。或者,若特征信息为当前块所在当前帧的帧类型,开关控制信息,则帧类型满足特定条件,且开关控制信息满足特定条件时,可以确定当前块的特征信息满足特定条件。或者,若特征信息为当前块的尺寸信息、开关控制信息,则尺寸信息满足特定条件,且开关控制信息满足特定条件时,可以确定当前块的特征信息满足特定条件。或者,若特征信息为当前块所在当前帧的帧类型、当前块的尺寸信息、开关控制信息,则帧类型满足特定条件,尺寸信息满足特定条件,且开关控制信息满足特定条件时,可以确定当前块的特征信息满足特定条件。
在一种可能的实施方式中,在确定对当前块启动加权预测时,编码端可以获取当前块的权重预测角度,当前块的权重预测位置,及当前块的权重变换率。
示例性的,权重预测角度表示当前块内部的像素位置所指向的角度方向,参见图4A所示,基于某一种权重预测角度,示出了当前块内部的像素位置(如像素位置1、像素位置2和像素位置3)所指向的角度方向,该角度方向指向当前块外部的某个周边位置。参见图4B所示,基于另一种权重预测角度,示出了当前块内部的像素位置(如像素位置2、像素位置3和像素位置4)所指向的角度方向,该角度方向指向当前块外部的某个周边位置。
示例性的,权重预测位置(也可以称为距离参数)用于配置当前块外部周边位置的参考权重值。例如,根据当前块的权重预测角度、当前块的尺寸等参数,确定当前 块外部的周边位置的范围(即当前块外部的周边位置的数量),参见图4A或者图4B所示。
然后,将周边位置的范围进行N等分,N的取值可以任意配置,如4、6、8等,以8为例进行说明,权重预测位置用于表示当前块外部的哪个周边位置作为当前块的权重变换的起始位置,从而根据权重变换的起始位置配置当前块外部的周边位置的参考权重值。
参见图4C所示,在将所有周边位置8等分后,可以得到7个权重预测位置。在此基础上,当权重预测位置为0时,可以表示周边位置a0(即虚线0指向的周边位置,在实际应用中,并不存在虚线0,虚线0只是为了方便理解给出的示例,虚线0-虚线6用于将所有周边位置8等分)作为当前块外部周边位置的权重变换的起始位置。以此类推,当权重预测位置为6时,表示周边位置a6作为当前块外部周边位置的权重变换的起始位置。
针对不同的权重预测角度,N的取值可以不同,例如,针对权重预测角度A,N的取值为6,表示将基于权重预测角度A确定的周边位置的范围进行6等分,针对权重预测角度B,N的取值为8,表示将基于权重预测角度B确定的周边位置的范围进行8等分。
针对不同的权重预测角度,N的取值也可以相同,在N的取值相同的情况下,权重预测位置数量可以不同,如针对权重预测角度A,N的取值为8,表示将基于权重预测角度A确定的周边位置的范围进行8等分,针对权重预测角度B,N的取值为8,表示将基于权重预测角度B确定的周边位置的范围进行8等分,但是,权重预测角度A对应的权重预测位置选择a1至a5共5个位置,权重预测角度B对应的权重预测位置选择a0至a6共7个位置。
上述是以将周边位置的范围进行N等分为例,在实际应用中,还可以采用不均匀的划分方式,比如说,将周边位置的范围划分成N份,而不是N等分,对此不做限制。
在将所有的周边位置8等分后,可以得到7个权重预测位置,步骤a1中,编码端可以从7个权重预测位置中获取一个权重预测位置,也可以从7个权重预测位置中选择部分权重预测位置(如5个权重预测位置),从5个权重预测位置获取一个权重预测位置。
示例性的,权重变换率表示当前块外部的周边位置的参考权重值的变换率,用于表示参考权重值的变化速度,权重变换率可以是不为0的任意数,如权重变换率可以是-4、-2、-1、1、2、4、0.5、0.75、1.5等。权重变换率的绝对值为1时,即权重变换率为-1或者1,用于表示参考权重值的变化速度为1,参考权重值从0到8需要经过0,1,2,3,4,5,6,7,8等数值,参考权重值从8到0需要经过8,7,6,5,4,3,2,1,0等数值。权重变换率的绝对值为2时,即权重变换率为-2或者2,用于表示参考权重值的变化速度为2,参考权重值从0到8需要经过0,2,4,6,8等数值,参考权重值从8到0需要经过8,6,4,2,0等数值。权重变换率的绝对值为4时,即权重变换率为-4或者4,用于表示参考权重值的变化速度为4,参考权重值从0到8需要经过0, 4,8等数值,参考权重值从8到0需要经过8,4,0等数值。权重变换率的绝对值为0.5时,即权重变换率为-0.5或者0.5,用于表示参考权重值的变化速度为0.5,参考权重值从0到8需要经过0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8等数值,参考权重值从8到0需要经过8,8,7,7,6,6,5,5,4,4,3,3,2,2,1,1,0,0等数值。当然,上述举例为从0到8,可以将0和8替换为任意数。
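示例性的，不同权重变换率下参考权重值的变化过程，可以用如下示意代码生成（以参考权重值最小值0、最大值8为例；对分数变换率采用向下取整，仅为示意假设）：

```python
import math

def clip3(lo, hi, v):
    # 将v限制在[lo, hi]区间内
    return max(lo, min(hi, v))

def ref_weights(rate, first_pos, length):
    # 第i个周边位置的参考权重值：Clip3(0, 8, rate * (i - first_pos))
    return [clip3(0, 8, math.floor(rate * (i - first_pos))) for i in range(length)]
```

例如，ref_weights(2, 0, 6)依次经过0，2，4，6，8等数值，与上述权重变换率绝对值为2的描述一致。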
步骤a2,编码端根据当前块的权重变换率和权重变换的起始位置(权重变换率和权重变换的起始位置可以称为权重配置参数)为当前块外部的周边位置配置参考权重值。
示例性的,权重变换的起始位置可以由如下参数的至少一个确定:当前块的权重预测角度,当前块的权重预测位置,当前块的尺寸,因此,可以基于当前块的权重预测角度,当前块的权重预测位置和当前块的尺寸中的至少一个,确定当前块的权重变换的起始位置。然后,根据当前块的权重变换率和权重变换的起始位置为当前块外部的周边位置配置参考权重值。
步骤a3,针对当前块的每个像素位置,编码端根据当前块的权重预测角度从当前块外部的周边位置中确定该像素位置指向的周边匹配位置。为了区分方便,本实施例中,可以将该像素位置指向的当前块外部的周边位置,称为该像素位置的周边匹配位置。
示例性的,由于权重预测角度表示当前块内部的像素位置所指向的角度方向,因此,针对当前块的每个像素位置,基于该权重预测角度确定该像素位置所指向的角度方向,继而根据该角度方向从当前块外部的周边位置中确定该像素位置指向的周边匹配位置。
当前块外部周边位置可以包括:当前块外部上侧一行的周边位置,如当前块外部上侧第n1行的周边位置,n1可以为1,也可以为2、3等,对此不做限制。或者,当前块外部左侧一列的周边位置,如当前块外部左侧第n2列的周边位置,n2可以为1,也可以为2、3等,对此不做限制。或者,当前块外部下侧一行的周边位置,如当前块外部下侧第n3行的周边位置,n3可以为1,也可以为2、3等,对此不做限制。或者,当前块外部右侧一列的周边位置,如当前块外部右侧第n4列的周边位置,n4可以为1,也可以为2、3等,对此不做限制。
当然,上述只是周边位置的几个示例,对此不做限制,实际应用中,除了利用当前块外部周边位置,还可以利用当前块内部位置,即使用当前块的内部位置替换当前块外部周边位置,例如,位于当前块内部第n5行的内部位置,n5可以为1,也可以为2、3等,又例如,位于当前块内部第n6列的内部位置,n6可以为1,也可以为2、3等。当然,内部位置的长度(即,当前块内位于该内部位置的一行)可以超出当前块的范围,如第n7行的位置可以超出当前块的范围,即往两边可延伸。
当然,还可以同时使用当前块的内部位置和当前块外部周边位置。
针对使用当前块的内部位置,或者,同时使用当前块的内部位置和当前块外部周边位置的情况,可以将该当前块通过内部位置所在行分为上下两个小块,或者,通过内部位置所在列分为左右两个小块,此时,两个小块拥有相同的权重预测角度以及相同 的参考权重配置。
示例性的,当前块外部周边位置可以位于像素位置之间,即亚像素位置,此时,当前块的外部周边位置不能简单描述为第x行,而是位于第x行与第y行之间的亚像素位置行。
为了方便描述,在后续实施例中,以当前块外部上侧第1行的周边位置,或者,当前块外部左侧第1列的周边位置为例,针对其它周边位置的情况,实现方式与此类似。
示例性的,针对当前块外部周边位置的范围,可以预先指定某个范围是当前块外部的周边位置的范围;或者,可以根据权重预测角度确定当前块外部的周边位置的范围,例如,根据权重预测角度确定当前块内部的每个像素位置指向的周边位置,所有像素位置指向的周边位置的边界,可以是当前块外部的周边位置的范围,对此周边位置的范围不做限制。
当前块外部周边位置可以包括整像素位置;或者,当前块外部周边位置可以包括非整像素位置,非整像素位置可以为亚像素位置,如1/2亚像素位置,1/4亚像素位置,3/4亚像素位置等,对此不做限制;或者,当前块外部周边位置可以包括整像素位置和亚像素位置。
示例性的,当前块外部的两个周边位置,可以对应一个整像素位置;或者,当前块外部的四个周边位置,可以对应一个整像素位置;或者,当前块外部的一个周边位置,可以对应一个整像素位置;或者,当前块外部的一个周边位置,可以对应两个整像素位置。当然,上述只是几个示例,对此不做限制,周边位置与整像素位置的关系可以任意配置。
参见图4A和图4B所示,是一个周边位置对应一个整像素位置,参见图4D所示,是两个周边位置对应一个整像素位置,对于其它情况,本实施例中不再赘述。
步骤a4,编码端根据周边匹配位置关联的参考权重值确定该像素位置的目标权重值。
针对当前块的每个像素位置,在确定该像素位置指向的周边匹配位置后,编码端确定该周边匹配位置关联的参考权重值,并根据该周边匹配位置关联的参考权重值确定该像素位置的目标权重值,例如,将该周边匹配位置关联的参考权重值确定为该像素位置的目标权重值。
在一种可能的实施方式中,编码端根据周边匹配位置关联的参考权重值确定该像素位置的目标权重值,可以包括:情况一、若该周边匹配位置是整像素位置,且该整像素位置已配置参考权重值,则根据该整像素位置的参考权重值确定该像素位置的目标权重值。
情况二、若该周边匹配位置是整像素位置,且该整像素位置未配置参考权重值,则可以根据该整像素位置的相邻位置的参考权重值确定该像素位置的目标权重值。例如,可以对相邻位置的参考权重值进行向上取整操作,得到该像素位置的目标权重值;或者,对相邻位置的参考权重值进行向下取整操作,得到该像素位置的目标权重值;或者,根据该整像素位置的相邻位置的参考权重值的插值确定该像素位置的目标权重值,对此不 做限制。
情况三、若该周边匹配位置是亚像素位置,且该亚像素位置已配置参考权重值,则可以根据该亚像素位置的参考权重值确定该像素位置的目标权重值。
情况四、若该周边匹配位置是亚像素位置,且该亚像素位置未配置参考权重值,则可以根据该亚像素位置的相邻位置的参考权重值确定该像素位置的目标权重值。例如,可以对相邻位置的参考权重值进行向上取整操作,得到该像素位置的目标权重值;或者,对相邻位置的参考权重值进行向下取整操作,得到该像素位置的目标权重值;或者,根据该亚像素位置的相邻位置的参考权重值的插值确定该像素位置的目标权重值,对此不做限制。
情况五、根据周边匹配位置关联的多个参考权重值确定该像素位置的目标权重值,例如,周边匹配位置是整像素位置或亚像素位置时,获取周边匹配位置的多个相邻位置的参考权重值。若周边匹配位置已配置参考权重值,则根据周边匹配位置的参考权重值和多个相邻位置的参考权重值进行加权运算,得到该像素位置的目标权重值;若周边匹配位置未配置参考权重值,则根据多个相邻位置的参考权重值进行加权运算,得到该像素位置的目标权重值。
步骤a5,编码端根据该像素位置的目标权重值确定该像素位置的关联权重值。
示例性的,针对每个像素位置来说,该像素位置的目标权重值与该像素位置的关联权重值的和,可以为固定的预设数值,即关联权重值可以为预设数值与目标权重值之差。基于此,假设预设数值为8,该像素位置的目标权重值为2,则该像素位置的关联权重值为6。
步骤a6,编码端获取运动信息候选列表,该运动信息候选列表包括至少一个候选运动信息;基于该运动信息候选列表获取当前块的第一目标运动信息和第二目标运动信息。
步骤a7,针对当前块的每个像素位置,编码端根据当前块的第一目标运动信息确定该像素位置的第一预测值,并根据当前块的第二目标运动信息确定该像素位置的第二预测值。
步骤a8,编码端根据该像素位置的第一预测值,该像素位置的目标权重值,该像素位置的第二预测值和该像素位置的关联权重值,确定该像素位置的加权预测值。
例如,该像素位置的加权预测值可以为:(该像素位置的第一预测值*该像素位置的目标权重值+该像素位置的第二预测值*该像素位置的关联权重值)/固定的预设数值。
步骤a9,编码端根据当前块的所有像素位置的加权预测值确定当前块的加权预测值。
实施例3:本申请实施例提出另一种编解码方法,可以应用于解码端,该方法包括:
步骤b1,在确定对当前块启动加权预测时,解码端获取当前块的权重预测角度, 当前块的权重预测位置,及当前块的权重变换率。示例性的,解码端确定是否对当前块启动加权预测,如果是,则执行步骤b1及后续步骤,如果否,则处理方式本申请不做限制。
在一种可能的实施方式中,编码端判断当前块的特征信息是否满足特定条件,如果是,则确定对当前块启动加权预测。解码端也判断当前块的特征信息是否满足特定条件,如果是,则确定对当前块启动加权预测;如果否,则确定不对当前块启动加权预测。关于解码端如何基于特征信息确定当前块是否启动加权预测,与编码端的确定方式类似,在此不再重复赘述。
在另一种可能的实施方式中,编码端根据当前块的特征信息确定当前块是否支持加权预测,在当前块支持加权预测时,还可以采用其它策略确定是否对当前块启动加权预测,如采用率失真原则确定是否对当前块启动加权预测。在确定是否对当前块启动加权预测后,编码端在发送当前块的编码比特流时,该编码比特流可以包括是否启动加权预测的语法,该语法表示当前块是否启动加权预测。解码端根据当前块的特征信息确定当前块是否支持加权预测,具体方式与编码端的确定方式类似,在此不再赘述。在当前块支持加权预测时,解码端还可以从编码比特流中解析出是否启动加权预测的语法,并根据该语法确定是否对当前块启动加权预测。
示例性的,在确定对当前块启动加权预测时,解码端还可以获取当前块的权重预测角度,当前块的权重预测位置,及当前块的权重变换率,关于该权重预测角度,该权重预测位置及该权重变换率的相关说明,可以参见步骤a1,在此不再重复赘述。
步骤b2,解码端根据当前块的权重变换率和权重变换的起始位置(权重变换率和权重变换的起始位置可以称为权重配置参数)为当前块外部的周边位置配置参考权重值。
示例性的,解码端可以基于当前块的权重预测角度,当前块的权重预测位置和当前块的尺寸中的至少一个,确定当前块的权重变换的起始位置。然后,解码端根据当前块的权重变换率和权重变换的起始位置为当前块外部的周边位置配置参考权重值。
步骤b3,针对当前块的每个像素位置,解码端根据当前块的权重预测角度从当前块外部的周边位置中确定该像素位置指向的周边匹配位置。
步骤b4,解码端根据该周边匹配位置关联的参考权重值确定该像素位置的目标权重值。
步骤b5,解码端根据该像素位置的目标权重值确定该像素位置的关联权重值。
步骤b6,解码端获取运动信息候选列表,该运动信息候选列表包括至少一个候选运动信息;基于该运动信息候选列表获取当前块的第一目标运动信息和第二目标运动信息。
步骤b7,针对当前块的每个像素位置,解码端根据当前块的第一目标运动信息确定该像素位置的第一预测值,并根据当前块的第二目标运动信息确定该像素位置的第二预测值。
步骤b8,解码端根据该像素位置的第一预测值,该像素位置的目标权重值,该像素位置的第二预测值和该像素位置的关联权重值,确定该像素位置的加权预测值。
步骤b9,解码端根据当前块的所有像素位置的加权预测值确定当前块的加权预测值。
示例性的,针对步骤b2-步骤b8,其实现过程可以参见步骤a2-步骤a8,不同之处在于,步骤b2-步骤b8是解码端的处理流程,而不是编码端的处理流程,在此不再赘述。
实施例4:参见图5所示,为编解码方法的流程示意图,该方法可以应用于解码端(也可以称为视频解码器)或者编码端(也可以称为视频编码器),该方法可以包括:
步骤501,在确定对当前块启动加权预测时,获取当前块的权重预测角度和权重配置参数,该权重配置参数包括权重变换率和权重变换的起始位置。权重变换的起始位置可以由如下参数的至少一个确定:当前块的权重预测角度,当前块的权重预测位置,当前块的尺寸。
步骤502,根据当前块的权重配置参数为当前块外部的周边位置配置参考权重值。
步骤503,针对当前块的每个像素位置,根据当前块的权重预测角度从当前块外部的周边位置中确定该像素位置指向的周边匹配位置;根据该周边匹配位置关联的参考权重值确定该像素位置的目标权重值,根据该像素位置的目标权重值确定该像素位置的关联权重值。
示例性的,步骤501-步骤503可以参见步骤301-步骤303,在此不再重复赘述。
步骤504,获取参考帧信息,并获取与该参考帧信息对应的运动矢量候选列表,该运动矢量候选列表包括至少一个候选运动矢量,该参考帧信息包括第一参考帧信息和第二参考帧信息;基于该运动矢量候选列表获取当前块的第一目标运动矢量和第二目标运动矢量。
步骤505,针对当前块的每个像素位置,根据当前块的第一目标运动信息确定该像素位置的第一预测值,根据当前块的第二目标运动信息确定该像素位置的第二预测值;根据该第一预测值,该目标权重值,该第二预测值和该关联权重值,确定该像素位置的加权预测值。
在一种可能的实施方式中,当前块的该第一目标运动信息可以包括当前块的该第一目标运动矢量和该第一目标运动矢量对应的第一参考帧信息,当前块的该第二目标运动信息可以包括当前块的该第二目标运动矢量和该第二目标运动矢量对应的第二参考帧信息。
示例性的,第一参考帧信息与第二参考帧信息可以相同,或者,第一参考帧信息与第二参考帧信息可以不同。若第一参考帧信息与第二参考帧信息相同,则第一目标运动矢量指向的参考帧与第二目标运动矢量指向的参考帧为同一帧,若第一参考帧信息与第二参考帧信息不同,则第一目标运动矢量指向的参考帧与第二目标运动矢量指向的参考帧为不同帧。
步骤506,根据当前块的所有像素位置的加权预测值确定当前块的加权预测值。
示例性的,在编解码方法应用于编码端时,针对步骤501-步骤506的详细流程,还可以通过实施例2实现,不同之处在于:在步骤a6中,编码端获取的是运动矢量候选列表,并基于该运动矢量候选列表获取当前块的第一目标运动矢量和第二目标运动矢量,基于第一目标运动矢量和第一目标运动矢量对应的第一参考帧信息得到第一目标运动信息,基于第二目标运动矢量和第二目标运动矢量对应的第二参考帧信息得到第二目标运动信息,在此不再赘述。
示例性的,在编解码方法应用于解码端时,针对步骤501-步骤506的详细流程,还可以通过实施例3实现,不同之处在于:在步骤b6中,解码端获取的是运动矢量候选列表,并基于该运动矢量候选列表获取当前块的第一目标运动矢量和第二目标运动矢量,基于第一目标运动矢量和第一目标运动矢量对应的第一参考帧信息得到第一目标运动信息,基于第二目标运动矢量和第二目标运动矢量对应的第二参考帧信息得到第二目标运动信息,在此不再赘述。
由以上技术方案可见，本申请实施例中，提出一种配置权重值的有效方式，能够为当前块的每个像素位置配置合理的目标权重值，从而提高预测的准确性和预测性能，能够使当前块的预测值更加接近原始像素，并带来编码性能的提高。
实施例5:在实施例1-实施例4中,需要基于权重预测角度进行加权处理,可以将这种加权处理方式记为帧间角度加权预测(Angular Weighted Prediction,AWP)模式,即,在当前块支持AWP模式时,采用实施例1-实施例4对当前块进行预测,得到当前块的预测值。
实施例1-实施例4涉及权重预测角度,该权重预测角度可以是任意角度,如180度内任意角度,或,360度内任意角度,对此权重预测角度不做限制,如10度,20度,30度等。
在一种可能的实施方式中,该权重预测角度可以为水平角度;或者,该权重预测角度可以为垂直角度;或者,该权重预测角度的斜率的绝对值(权重预测角度的斜率的绝对值也就是权重预测角度的tan值)可以为2的n次方,n为整数,如正整数,0,负整数等。
例如,该权重预测角度的斜率的绝对值可以为1(即2的0次方),或者为2(即2的1次方),或者为1/2(即2的-1次方),或者为4(即2的2次方),或者为1/4(即2的-2次方),或者为8(即2的3次方),或者为1/8(即2的-3次方)等。示例性的,参见图6所示,示出了8种权重预测角度,这些权重预测角度的斜率的绝对值为2的n次方。
本申请实施例中,可以对权重预测角度进行移位操作,关于对权重预测角度进行移位操作的例子参见后续实施例,因此,在权重预测角度的斜率的绝对值为2的n次方时,在对权重预测角度进行移位操作时,可以避免出现除法操作,从而方便的进行移位实现。
示例性的,不同块尺寸(即当前块的尺寸)支持的权重预测角度的数量可以相同或者不同。例如,块尺寸A支持8种权重预测角度,块尺寸B和块尺寸C支持6种权重预测角度等。
实施例6:在上述实施例1-实施例4中,编码端/解码端需要根据当前块的权重变换率和当前块的权重变换的起始位置为当前块外部的周边位置配置参考权重值。在一种可能的实施方式中,可以采用如下方式:针对当前块外部的每个周边位置,根据该周边位置的坐标值,该权重变换的起始位置的坐标值,以及该权重变换率,配置该周边位置的参考权重值。
示例性的,针对当前块外部的每个周边位置,若该周边位置是当前块外部上侧一行或者下侧一行的周边位置,则该周边位置的坐标值可以是横坐标值,权重变换的起始位置的坐标值可以是横坐标值。或者,若该周边位置是当前块外部左侧一列或者右侧一列的周边位置,则该周边位置的坐标值可以是纵坐标值,权重变换的起始位置的坐标值可以是纵坐标值。
示例性的,可以将当前块左上角的像素位置(如左上角的第一个像素位置)作为坐标原点,当前块的周边位置的坐标值(如横坐标值或纵坐标值)和权重变换的起始位置的坐标值(如横坐标值或纵坐标值),均是相对于该坐标原点的坐标值。当然,也可以将当前块的其它像素位置作为坐标原点,实现方式与左上角的像素位置作为坐标原点的实现方式类似。
在一种可能的实施方式中,先确定该周边位置的坐标值与权重变换的起始位置的坐标值的差值,并确定该差值与当前块的权重变换率的乘积。若该乘积小于第一数值(即参考权重值的最小值,如0等),则确定该周边位置关联的参考权重值为第一数值;若该乘积大于第二数值(即参考权重值的最大值,如8等),则确定该周边位置关联的参考权重值为第二数值;若该乘积不小于第一数值,且该乘积不大于第二数值,则确定该周边位置关联的参考权重值为该乘积。在另一种可能的实施方式中,还可以根据该周边位置的坐标值与权重变换的起始位置的坐标值的大小关系,直接确定该周边位置关联的参考权重值。例如,若该周边位置的坐标值小于权重变换的起始位置的坐标值,确定该周边位置关联的参考权重值为第一数值;若该周边位置的坐标值不小于权重变换的起始位置的坐标值,确定该周边位置关联的参考权重值为第二数值。又例如,若该周边位置的坐标值小于权重变换的起始位置的坐标值,确定该周边位置关联的参考权重值为第二数值;若该周边位置的坐标值不小于权重变换的起始位置的坐标值,确定该周边位置关联的参考权重值为第一数值。
示例性的,第一数值和第二数值均可以根据经验配置,且第一数值小于第二数值,对此第一数值和第二数值均不做限制。例如,第一数值是预先约定的参考权重值的最小值,如0,第二数值是预先约定的参考权重值的最大值,如8,当然,0和8也只是示例。
示例性的,参见图4C所示,在将所有的周边位置8等分后,可以得到7个权重预测位置,当权重预测位置为0时,表示周边位置a0,权重变换的起始位置的坐标值为周边位置a0的坐标值。当权重预测位置为1时,表示周边位置a1,权重变换的起始位 置的坐标值为周边位置a1的坐标值,以此类推,关于权重变换的起始位置的坐标值的确定方式,在此不再赘述。
实施例7:在实施例1-实施例4中,编码端/解码端需要根据当前块的权重变换率和当前块的权重变换的起始位置为当前块外部的周边位置配置参考权重值。在一种可能的实施方式中,可以采用如下方式:获取当前块的权重预测角度、当前块的权重变换率和当前块的权重预测位置,基于当前块的权重预测位置确定当前块的权重变换的起始位置,基于该权重变换的起始位置和该权重变换率确定权重配置参数,即该权重配置参数包括权重变换的起始位置和权重变换率。然后,根据该权重配置参数确定当前块外部的周边位置的参考权重值。
以下结合具体步骤,对为当前块外部的周边位置配置参考权重值的过程进行说明。
步骤c1、获取有效数量个参考权重值。
示例性的,当前块外部的周边位置的数量为有效数量,步骤c1中,需要获取有效数量个参考权重值,该有效数量可以是基于当前块的尺寸和/或当前块的权重预测角度确定。例如,采用如下方式确定该有效数量:ValidLength=(N+(M>>X))<<1,N和M分别是当前块的高和宽,X为当前块的权重预测角度的斜率的绝对值的log2对数值,如0或1。
在一种可能的实施方式中,针对有效数量个参考权重值,可以单调递增,或,单调递减。或者,针对有效数量个参考权重值,可以先包括多个第一数值,再包括多个第二数值,或,先包括多个第二数值,再包括多个第一数值。以下结合几个具体情况,对此进行说明。
情况1:针对有效数量个参考权重值,可以单调递增,或,单调递减。例如,有效数量个参考权重值可以为[88...88765432100...00],即单调递减。又例如,有效数量个参考权重值可以为[00...00123456788...88],即单调递增。当然,上述只是示例,对此不做限制。
示例性的,参考权重值可以是根据权重配置参数配置的,该权重配置参数可以包括权重变换率和权重变换的起始位置,权重变换率的获取方式可以参见后续实施例,权重变换的起始位置可以是根据经验配置的数值,也可以由权重预测位置确定权重变换的起始位置,还可以由权重预测角度和权重预测位置确定权重变换的起始位置,对此不做限制。
针对有效数量个参考权重值,按照从第一个到最后一个的顺序,可以单调递增或者单调递减。例如,参考权重值的最大值为M1,参考权重值的最小值为M2,针对有效数量个参考权重值,从最大值M1至最小值M2单调递减;或,从最小值M2至最大值M1单调递增。假设M1为8,M2为0,则多个参考权重值,可以从8至0单调递减;或从0至8单调递增。
示例性的，可以先获取权重变换率和权重变换的起始位置，然后，根据权重变换率和权重变换的起始位置，确定多个参考权重值。例如，采用如下方式确定参考权重值：y=Clip3(最小值，最大值，a*(x-s))，x表示周边位置的索引，即x的取值范围是1-有效数量值，如x为1，表示第1个周边位置，y表示第1个周边位置的参考权重值，x为2，表示第2个周边位置，y表示第2个周边位置的参考权重值。a表示权重变换率，s表示权重变换的起始位置。
Clip3用于限制参考权重值位于最小值与最大值之间,最小值和最大值均可以根据经验配置,为了方便描述,在后续过程中,以最小值为0,最大值为8为例进行说明。
a表示权重变换率,a可以是不为0的整数,如a可以是-4、-2、-1、1、2、4等,对此a的取值不做限制。若a的绝对值为1,则参考权重值从0到8需要经过0,1,2,3,4,5,6,7,8,或者,参考权重值从8到0需要经过8,7,6,5,4,3,2,1,0。
s表示权重变换的起始位置,s可以由权重预测位置确定,例如,s=f(权重预测位置),即s是一个与权重预测位置有关的函数。例如,在当前块外部的周边位置的范围确定后,可以确定周边位置的有效数量,并将所有周边位置进行N等分,N的取值可以任意配置,如4、6、8等,而权重预测位置用于表示采用当前块外部的哪个周边位置作为当前块的目标周边区域,而这个权重预测位置对应的周边位置就是权重变换的起始位置。或者,s可以由权重预测角度和权重预测位置确定,例如,s=f(权重预测角度,权重预测位置),即s是一个与权重预测角度和权重预测位置有关的函数。例如,可以根据权重预测角度确定当前块外部的周边位置的范围,在当前块外部的周边位置的范围确定后,可以确定周边位置的有效数量,并将所有周边位置进行N等分,权重预测位置用于表示采用当前块外部的哪个周边位置作为当前块的目标周边区域,而这个权重预测位置对应的周边位置就是权重变换的起始位置。
综上所述,在y=Clip3(最小值,最大值,a*(x-s))中,权重变换率a和权重变换的起始位置s均为已知值,针对当前块外部的每个周边位置,可以通过该函数关系确定该周边位置的参考权重值。例如,假设权重变换率a为2,权重变换的起始位置s为2,则该函数关系可以为y=2*(x-2),针对当前块外部的每个周边位置x,可以得到参考权重值y。
综上所述,可以得到当前块的有效数量个参考权重值,这些参考权重值单调递增或单调递减。在一种可能的实施方式中,当前块外部的周边位置的参考权重值包括目标区域的参考权重值,目标区域的第一邻近区域的参考权重值,目标区域的第二邻近区域的参考权重值。
示例性的,目标区域包括一个参考权重值或者至少两个参考权重值。例如,基于权重变换的起始位置,确定一个参考权重值,将这个参考权重值作为目标区域的参考权重值。又例如,基于权重变换的起始位置,确定至少两个参考权重值,将这至少两个参考权重值作为目标区域的参考权重值。
若目标区域包括至少两个参考权重值,则目标区域的至少两个参考权重值单调递增或单调递减。单调递增可以是严格单调递增(即目标区域的至少两个参考权重值严格单调递增);单调递减可以是严格单调递减(即目标区域的至少两个参考权重值严格单调递减)。例如,目标区域的参考权重值从1至7单调递增,或者,目标区域的参考权重值从7至1单调递减。
示例性的,第一邻近区域的参考权重值可以均为第一参考权重值,第二邻近区域的参考权重值可以单调递增。例如,第一邻近区域的参考权重值可以均为0,目标区域包括一个参考权重值,该参考权重值为1,第二邻近区域的参考权重值从2至8单调递增。
或者,第一邻近区域的参考权重值可以均为第一参考权重值,第二邻近区域的参考权重值可以单调递减。例如,第一邻近区域的参考权重值可以均为8,目标区域包括一个参考权重值,该参考权重值为7,第二邻近区域的参考权重值从6至0单调递减。
或者,第一邻近区域的参考权重值均为第二参考权重值,第二邻近区域的参考权重值均为第三参考权重值,第二参考权重值与第三参考权重值不同。例如,第一邻近区域的参考权重值均为0,目标区域包括至少两个参考权重值,参考权重值从1至7单调递增,第二邻近区域的参考权重值均为8,显然,第一邻近区域的参考权重值与第二邻近区域的参考权重值不同。
或者,第一邻近区域的参考权重值单调递增或单调递减,第二邻近区域的参考权重值单调递增或单调递减;例如,第一邻近区域的参考权重值单调递增,第二邻近区域的参考权重值也单调递增;又例如,第一邻近区域的参考权重值单调递减,第二邻近区域的参考权重值也单调递减。例如,第一邻近区域的参考权重值从0至3单调递增,目标区域包括一个参考权重值,该参考权重值为4,第二邻近区域的参考权重值从5至8单调递增。
情况2:针对有效数量个参考权重值,可以先包括多个第一数值,再包括多个第二数值,或者,可以先包括多个第二数值,再包括多个第一数值。例如,有效数量个参考权重值可以为[88...8800...00]或[00...0088...88]。示例性的,可以根据权重变换的起始位置确定多个参考权重值。例如,权重变换的起始位置表示第s个参考权重值,因此,第s个参考权重值之前(不包括第s个参考权重值)的所有参考权重值为第一数值(如8),第s个参考权重值之后(包括第s个参考权重值)的所有参考权重值为第二数值(如0)。或者,第s个参考权重值之前的所有参考权重值为第二数值,第s个参考权重值之后的所有参考权重值为第一数值。
步骤c2,根据有效数量个参考权重值,配置当前块外部的周边位置的参考权重值。
示例性的,当前块外部的周边位置的数量为有效数量,且参考权重值的数量为有效数量,因此,可以将有效数量个参考权重值,配置为当前块外部的周边位置的参考权重值。
例如,可以将第1个参考权重值配置为当前块外部的第1个周边位置的参考权重值,将第2个参考权重值配置为当前块外部的第2个周边位置的参考权重值,以此类推。
综上所述,由于已经为当前块外部的周边位置配置参考权重值,即每个周边位置均具有参考权重值,因此,在从当前块外部的周边位置中确定像素位置指向的周边匹配位置后,可以确定该周边匹配位置关联的参考权重值,也就是该像素位置的目标权重 值。
以下结合几个具体的应用场景,对上述过程的实施方式进行说明。示例性的,在后续几个应用场景中,假设当前块的尺寸为M*N,M为当前块的宽,N为当前块的高。X为权重预测角度的tan值的log2对数值,如0或者1。Y为权重预测位置的索引值,a,b,c,d为预设的常数值。ValidLength表示有效数量,FirstPos表示权重变换的起始位置,ReferenceWeights[i]表示第i个周边位置的参考权重值,Clip3用于限制参考权重值位于最小值0与最大值8之间,i表示当前块外部的周边位置的索引,a表示权重变换率的绝对值。
应用场景1:基于当前块的尺寸和当前块的权重预测角度确定有效数量(也可以称为参考权重有效长度,即ValidLength),并获取权重变换的起始位置(即FirstPos)。例如,可以通过如下公式确定ValidLength:ValidLength=(N+(M>>X))<<1;通过如下公式确定FirstPos:FirstPos=(ValidLength>>1)-a+Y*((ValidLength-1)>>3)。在此基础上,通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8,a*(i-FirstPos))。i的取值范围可以是0~ValidLength-1;或1~ValidLength。在得到当前块的周边位置的参考权重值ReferenceWeights[i]后,通过如下公式导出当前块的像素位置(x,y)的目标权重值:SampleWeight[x][y]=ReferenceWeights[(y<<1)+((x<<1)>>X)],<<表示左移,>>表示右移。
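示例性的，应用场景1的完整导出过程可以整理为如下示意代码（其中参数a同时作为FirstPos公式中的预设常数与权重变换率的绝对值使用，这一取值方式为本示例的假设）：

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def sample_weights_scenario1(M, N, X, Y, a):
    # ValidLength = (N + (M >> X)) << 1
    valid_length = (N + (M >> X)) << 1
    # FirstPos = (ValidLength >> 1) - a + Y * ((ValidLength - 1) >> 3)
    first_pos = (valid_length >> 1) - a + Y * ((valid_length - 1) >> 3)
    # ReferenceWeights[i] = Clip3(0, 8, a * (i - FirstPos))
    ref = [clip3(0, 8, a * (i - first_pos)) for i in range(valid_length)]
    # SampleWeight[x][y] = ReferenceWeights[(y << 1) + ((x << 1) >> X)]
    return [[ref[(y << 1) + ((x << 1) >> X)] for x in range(M)]
            for y in range(N)]
```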
应用场景2:可以通过如下公式确定ValidLength:ValidLength=(N+(M>>X))<<1;通过如下公式确定FirstPos:FirstPos=(ValidLength>>1)-b+Y*((ValidLength-1)>>3)–((M<<1)>>X)。在此基础上,通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8,a*(i-FirstPos))。可以通过如下公式导出当前块的每个像素位置(x,y)的目标权重值:SampleWeight[x][y]=ReferenceWeights[(y<<1)-((x<<1)>>X)]。
应用场景3:可以通过如下公式确定ValidLength:ValidLength=(M+(N>>X))<<1;通过如下公式确定FirstPos:FirstPos=(ValidLength>>1)-c+Y*((ValidLength-1)>>3)–((N<<1)>>X)。在此基础上,可以通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8,a*(i-FirstPos))。可以通过如下公式导出当前块的每个像素位置(x,y)的目标权重值:SampleWeight[x][y]=ReferenceWeights[(x<<1)-((y<<1)>>X)]。
应用场景4:可以通过如下公式确定ValidLength:ValidLength=(M+(N>>X))<<1;可以通过如下公式确定FirstPos:FirstPos=(ValidLength>>1)-d+Y*((ValidLength-1)>>3);在此基础上,可以通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8,a*(i-FirstPos))。可以通过如下公式导出当前块的每个像素位置(x,y)的目标权重值:SampleWeight[x][y]=ReferenceWeights[(x<<1)+((y<<1)>>X)]。
应用场景5:参见图7所示,示出了四种权重变换率的参考权重值的示意图。
在权重变换率的绝对值为1时,即权重变换率为1或者权重变换率为-1,可以通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8, 1*(i-FirstPos)),上述公式可以等价为ReferenceWeight[i]=Clip3(0,8,i–FirstPos)。在该情况中,参见图7所示的第一类情况,FirstPos可以为4,即第1个到第4个周边位置的参考权重值为0,第5个周边位置的参考权重值为1,第6个周边位置的参考权重值为2,以此类推。
在权重变换率的绝对值为2时,即权重变换率为2或者权重变换率为-2,通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8,2*(i-FirstPos)),上述公式可以等价为ReferenceWeight[i]=Clip3(0,8,(i–FirstPos)<<1)。在该情况中,参见图7所示的第二类情况,FirstPos可以为6,即第1个到第6个周边位置的参考权重值为0,第7个周边位置的参考权重值为2,第8个周边位置的参考权重值为4,以此类推。
在权重变换率的绝对值为4时,即权重变换率为4或者权重变换率为-4,可以通过如下公式导出当前块的每个周边位置的参考权重值:ReferenceWeights[i]=Clip3(0,8,4*(i-FirstPos)),上述公式可以等价为ReferenceWeight[i]=Clip3(0,8,(i–FirstPos)<<2)。在该情况中,参见图7所示的第三类情况,FirstPos可以为7,第1个到第7个周边位置的参考权重值为0,第8个周边位置的参考权重值为4,第9至17个周边位置的参考权重值为8,以此类推。
在权重变换率的绝对值为8时，即权重变换率为8或者权重变换率为-8，通过如下公式导出当前块的每个周边位置的参考权重值：ReferenceWeights[i]=Clip3(0,8,8*(i-FirstPos))，上述公式可以等价为ReferenceWeight[i]=Clip3(0,8,(i–FirstPos)<<3)。在该情况中，参见图7所示的第四类情况，FirstPos可以为8，即第1个到第8个周边位置的参考权重值为0，第9个周边位置的参考权重值为8，第10至17个周边位置的参考权重值为8，以此类推。
综上所述,在权重变换率的绝对值为1时,FirstPos为4,在权重变换率的绝对值为2时,FirstPos为6(即权重变换率1时的FirstPos+2),在权重变换率的绝对值为4时,FirstPos为7(即权重变换率1时的FirstPos+3),基于此,可以对齐参考权重值为4的位置。
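示例性的，上述“对齐参考权重值为4的位置”可以验证如下：在权重变换率绝对值分别为1、2、4且FirstPos分别为4、6、7时，参考权重值恰好等于4的周边位置索引相同（均为8），即权重过渡区的中点对齐：

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def ref_weight(i, rate, first_pos):
    # 第i个周边位置的参考权重值
    return clip3(0, 8, rate * (i - first_pos))

# 三种权重变换率下，索引8处的参考权重值均为4
```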
示例性的,针对当前块来说,在支持权重变换率切换并启动权重变换率切换时,可以从图7所示的4类权重变换率的参考权重值分布示例中选择一种进行切换,从而通过对图像或者图像的局部区域进行权重变换率的切换,达到减弱一些图像显示场景中的图像显示的跳变突出。比如说,有一些图像显示场景中需要解决跳变比较突出的问题,AWP模式的权重变化率切换能够解决这个问题。例如,混合图像内容包括部分屏幕内容,动画片,包含动画片的图像等,可以对某个含有屏幕内容的区域进行权重变换率切换,解决跳变比较突出的问题。
在上述过程中,ValidLength与当前块的权重预测角度和当前块的尺寸相关,为了方案简化,可以固化某些参数来达到优化目的,例如,可以将当前块的权重预测角度配置为固定参数值,ValidLength只与当前块的尺寸相关。FirstPos与当前块的权重预测角度,当前块的尺寸,当前块的权重预测位置相关,为了方案简化,可以固化某些参数来达到优化目的,例如,可以将当前块的权重预测角度配置为固定参数值,FirstPos只 与当前块的尺寸和当前块的权重预测位置相关。或,将当前块的权重预测位置配置为固定参数值,FirstPos只与当前块的尺寸和当前块的权重预测角度相关。或,将当前块的权重预测角度和当前块的权重预测位置均配置为固定参数值,这两个固定参数值可以相同或者不同,FirstPos只与当前块的尺寸相关。
实施例8:在实施例1-实施例4中,编码端/解码端需要根据当前块的权重变换率和当前块的权重变换的起始位置为当前块外部的周边位置配置参考权重值。在一种可能的实施方式中,记M和N是当前块的宽和高,角度加权预测模式(AWP)的权重阵列导出方式,包括:
步骤d1,根据AwpIdx获取stepIdx,angleIdx以及subAngleIdx等参数。
示例性的,AwpIdx表示权重预测位置和权重预测角度的索引值,假设存在7种权重预测位置,8种权重预测角度,则AwpIdx的取值范围是0-55。若权重预测位置为-3至3(表示第4个权重预测位置是中心,第4个权重预测位置为0),权重预测角度的索引为0-7,则AwpIdx的56个索引值对应的权重预测位置和权重预测角度,可以参见表1所示。
表1
| AwpIdx | 权重预测位置 | 权重预测角度 |
| --- | --- | --- |
| 0 | -3 | 0 |
| 1 | -3 | 1 |
| …… | …… | …… |
| 7 | -3 | 7 |
| 8 | -2 | 0 |
| 9 | -2 | 1 |
| …… | …… | …… |
| 55 | 3 | 7 |
示例性的,stepIdx表示权重预测位置(即权重预测位置的索引值),权重预测位置的范围可以是-3至3。例如,针对第1个权重预测位置,权重预测位置为-3,针对第2个权重预测位置,权重预测位置为-2,以此类推,针对第7个权重预测位置,权重预测位置为3。
angleIdx表示权重预测角度的斜率的绝对值的log2对数值(如0,或1,或较大常数),subAngleIdx表示权重预测角度所在的角度分区。参见图8A所示,示出8种权重预测角度,权重预测角度0的angleIdx为权重预测角度0的斜率的绝对值的log2对数值,权重预测角度1的angleIdx为权重预测角度1的斜率的绝对值的log2对数值,以此类推,权重预测角度7的angleIdx为权重预测角度7的斜率的绝对值的log2对数值。权重预测角度0和权重预测角度1位于角度分区0,权重预测角度2和权重预测角度3位于角度分区1,权重预测角度4和权重预测角度5位于角度分区2,权重预测角度6和 权重预测角度7位于角度分区3。
示例性的,可以采用如下公式确定stepIdx:stepIdx=(AwpIdx>>3)–3。
示例性的,可以根据如下公式确定modAngNum(角度编号):modAngNum=AwpIdx%8;基于modAngNum,可以采用如下方式确定angleIdx:若modAngNum等于2,则angleIdx=7;若modAngNum等于6,则angleIdx=8;否则,angleIdx=modAngNum%2。
示例性的,可以采用如下公式确定subAngleIdx:subAngleIdx=modAngNum>>1。
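示例性的，步骤d1中由AwpIdx导出各参数的过程可以整理为如下示意代码：

```python
def awp_params(awp_idx):
    # AwpIdx的取值范围是0-55，由权重预测位置与权重预测角度联合索引
    step_idx = (awp_idx >> 3) - 3        # stepIdx：权重预测位置，范围-3至3
    mod_ang_num = awp_idx % 8            # modAngNum：角度编号
    if mod_ang_num == 2:
        angle_idx = 7
    elif mod_ang_num == 6:
        angle_idx = 8
    else:
        angle_idx = mod_ang_num % 2
    sub_angle_idx = mod_ang_num >> 1     # subAngleIdx：角度分区
    return step_idx, angle_idx, sub_angle_idx
```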
综上所述,编码端在确定出当前块的权重预测角度和当前块的权重预测位置后,可以基于该权重预测角度和该权重预测位置确定AwpIdx的取值,参见表1所示。编码端在向解码端发送编码比特流时,该编码比特流可以携带AwpIdx的取值,基于此,解码端可以得到AwpIdx的取值,并根据AwpIdx获取stepIdx,angleIdx以及subAngleIdx。
示例性的,angleIdx以及subAngleIdx能够唯一确定一个权重预测角度,参见表2所示,当然,也可以采用其它方式确定权重预测角度,例如,更改分区编号等。
表2
（表2的内容在原文中以图片形式给出，此处从略；表2给出angleIdx、subAngleIdx与权重预测角度的对应关系）
步骤d2,根据stepIdx,angleIdx和subAngleIdx为当前块外部周边位置配置参考权重值。
情况一、若subAngleIdx为0,即权重预测角度位于角度分区0,比如说,权重预测角度为权重预测角度0或权重预测角度1,则可以采用如下公式确定权重变换的起始位置FirstPos:FirstPos=(ValidLength_H>>1)-6+DeltaPos_H。然后,采用如下公式确定当前块外部的周边位置的参考权重值:ReferenceWeights[x]=Clip3(0,8,x-FirstPos),在该公式中,是以参考权重值的最小值为0,参考权重值的最大值为8,权重变换率为1 为例进行说明,也就是说,上述公式可以等价为ReferenceWeights[x]=Clip3(最小值,最大值,a*(x-FirstPos))。x可以是当前块外部的周边位置的索引,x的取值范围是0~ValidLength_H-1,a表示权重变换率。
在上述公式中，ValidLength_H可以表示当前块外部的周边位置的数量（即有效数量，也可以称为有效长度），在subAngleIdx为0时，权重预测角度指向的当前块外部的周边位置，可以是左侧一列的周边位置，因此，可以将有效数量记为ValidLength_H。示例性的，可以采用如下公式确定有效数量ValidLength_H：ValidLength_H=(N+(M>>angleIdx))<<1。此处左移一位，是因为公式采用1/2-pel精度，若为1-pel精度，则ValidLength_H=(N+(M>>angleIdx))；若为1/4-pel精度，则ValidLength_H=(N+(M>>angleIdx))<<2；若为2-pel精度，则ValidLength_H=(N+(M>>angleIdx))>>1，其他像素精度以此类推，在此不再赘述。之后的公式中，涉及的部分>>1操作，均可能因为像素精度的不同而发生改变。
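示例性的，不同像素精度下ValidLength_H的计算可以统一写成如下示意形式（precision_shift为精度对应的移位位数：1-pel为0，1/2-pel为1，1/4-pel为2，2-pel为-1；参数命名为本示例的假设）：

```python
def valid_length_h(M, N, angle_idx, precision_shift=1):
    # 默认precision_shift=1对应1/2-pel精度，即(N + (M >> angleIdx)) << 1
    base = N + (M >> angle_idx)
    if precision_shift >= 0:
        return base << precision_shift
    return base >> (-precision_shift)
```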
在上述公式中,DeltaPos_H表示位置变化量参数(即一个中间参数),在subAngleIdx为0时,权重预测角度指向的当前块外部的周边位置,可以是当前块外部左侧一列的周边位置,因此,可以将位置变化量参数记为DeltaPos_H。示例性的,可以采用如下公式确定DeltaPos_H:DeltaPos_H=stepIdx*((ValidLength_H>>3)-1)。
情况二、若subAngleIdx为1,即权重预测角度位于角度分区1,比如说,权重预测角度为权重预测角度2或权重预测角度3,则可以采用如下公式确定权重变换的起始位置FirstPos:FirstPos=(ValidLength_H>>1)-4+DeltaPos_H–((M<<1)>>angleIdx)。然后,采用如下公式确定当前块外部的周边位置的参考权重值:ReferenceWeights[x]=Clip3(0,8,a*(x-FirstPos)),在该公式中,是以参考权重值的最小值为0,参考权重值的最大值为8,权重变换率为a为例进行说明。x可以是当前块外部的周边位置的索引,x的取值范围是0~ValidLength_H-1。
在上述公式中,ValidLength_H和DeltaPos_H可以参见情况一,在此不再赘述。
情况三、若subAngleIdx为2,即权重预测角度位于角度分区2,比如说,权重预测角度为权重预测角度4或权重预测角度5,则可以采用如下公式确定权重变换的起始位置FirstPos:FirstPos=(ValidLength_W>>1)-4+DeltaPos_W–((N<<1)>>angleIdx)。然后,采用如下公式确定当前块外部的周边位置的参考权重值:ReferenceWeights[x]=Clip3(0,8,a*(x-FirstPos)),在该公式中,是以参考权重值的最小值为0,参考权重值的最大值为8,权重变换率为a为例进行说明。x可以是当前块外部的周边位置的索引,x的取值范围是0~ValidLength_W-1。
在上述公式中,ValidLength_W表示当前块外部的周边位置的数量(即有效数量,也可以称为有效长度),在subAngleIdx为2时,权重预测角度指向的当前块外部的周边位置,可以是上侧一行的周边位置,因此,将有效数量记为ValidLength_W。示例性的,可以采用如下公式确定有效数量ValidLength_W:ValidLength_W=(M+(N>>angleIdx))<<1。
在上述公式中,DeltaPos_W表示位置变化量参数(即一个中间参数),在subAngleIdx为2时,权重预测角度指向的当前块外部的周边位置,可以是当前块外部 上侧一行的周边位置,因此,可以将位置变化量参数记为DeltaPos_W。示例性的,可以采用如下公式确定DeltaPos_W:DeltaPos_W=stepIdx*((ValidLength_W>>3)-1)。
情况四、若subAngleIdx为3,即权重预测角度位于角度分区3,比如说,权重预测角度为权重预测角度6或权重预测角度7,则可以采用如下公式确定权重变换的起始位置FirstPos:FirstPos=(ValidLength_W>>1)-6+DeltaPos_W。然后,可以采用如下公式确定当前块外部的周边位置的参考权重值:ReferenceWeights[x]=Clip3(0,8,a*(x-FirstPos)),在该公式中,是以参考权重值的最小值为0,参考权重值的最大值为8,权重变换率为a为例进行说明。x可以是当前块外部的周边位置的索引,x的取值范围是0~ValidLength_W-1。
在上述公式中,ValidLength_W和DeltaPos_W可以参见情况三,在此不再赘述。
综上所述,可以根据subAngleIdx确定应该采用哪种情况进行处理,例如,在情况一和情况二中,可以根据angleIdx和stepIdx确定ValidLength_H和DeltaPos_H,并可以根据ValidLength_H和DeltaPos_H确定FirstPos,继而根据FirstPos配置参考权重值。在情况三和情况四中,可以根据angleIdx和stepIdx确定ValidLength_W和DeltaPos_W,并可以根据ValidLength_W和DeltaPos_W确定FirstPos,继而根据FirstPos配置参考权重值。
上述各情况中的公式的区别在于,将当前块的左上角作为坐标原点时,参考权重值ReferenceWeights[x]的起始位置发生了变化。例如,参见图8B所示,示出了角度分区2和角度分区3的示例,在1/2-pel精度时,参考权重值ReferenceWeights[x]的起始位置为(高<<1)>>angleIdx,即公式中的偏移“-((N<<1)>>angleIdx)”。角度分区0和角度分区1的实现类似,只是公式中的偏移为“-((M<<1)>>angleIdx)”,即高度改为宽度即可。
步骤d3,根据angleIdx和参考权重值ReferenceWeights[x]获取像素位置的亮度权重值。
情况一、若subAngleIdx为0,则可以采用如下公式确定像素位置(x,y)的亮度权重值:AwpWeightArrayY[x][y]=ReferenceWeights[(y<<1)+((x<<1)>>angleIdx)],(y<<1)+((x<<1)>>angleIdx)表示像素位置(x,y)指向的周边位置,ReferenceWeights[(y<<1)+((x<<1)>>angleIdx)]表示该周边位置的参考权重值。x的取值范围是0~M-1,y的取值范围是0~N-1。
情况二、若subAngleIdx为1,则可以采用如下公式确定像素位置(x,y)的亮度权重值:AwpWeightArrayY[x][y]=ReferenceWeights[(y<<1)-((x<<1)>>angleIdx)],(y<<1)-((x<<1)>>angleIdx)表示像素位置(x,y)指向的周边位置,ReferenceWeights[(y<<1)-((x<<1)>>angleIdx)]表示该周边位置的参考权重值。x的取值范围是0~M-1;y的取值范围是0~N-1。
情况三、若subAngleIdx为2,则可以采用如下公式确定像素位置(x,y)的亮度权重值:AwpWeightArrayY[x][y]=ReferenceWeights[(x<<1)-((y<<1)>>angleIdx)],(x<<1)-((y<<1)>>angleIdx)表示像素位置(x,y)指向的周边位置,ReferenceWeights[(x<<1)-((y<<1)>>angleIdx)]表示该周边位置的参考权重值。x的取值范围是0~M-1;y的取值范围是0~N-1。
情况四、若subAngleIdx为3,则可以采用如下公式确定像素位置(x,y)的亮度权重值:AwpWeightArrayY[x][y]=ReferenceWeights[(x<<1)+((y<<1)>>angleIdx)],(x<<1)+((y<<1)>>angleIdx)表示像素位置(x,y)指向的周边位置,ReferenceWeights[(x<<1)+((y<<1)>>angleIdx)]表示该周边位置的参考权重值。x的取值范围是0~M-1;y的取值范围是0~N-1。
上述步骤d2和步骤d3可联合为一个步骤,即将步骤d2(根据stepIdx,angleIdx以及subAngleIdx为当前块外部的周边位置配置参考权重值)及步骤d3(根据angleIdx和参考权重值ReferenceWeights[x]获取亮度权重值)联合为:根据stepIdx,angleIdx及subAngleIdx获取像素位置的亮度权重值,即根据周边位置的坐标值与权重变换起始位置的坐标值确定。
以情况一为例:若subAngleIdx为0,即权重预测角度位于角度分区0,如权重预测角度为权重预测角度0或权重预测角度1,则采用如下公式确定权重变换的起始位置FirstPos:FirstPos=(ValidLength_H>>1)-6+DeltaPos_H。然后,采用如下公式确定像素位置(x,y)的亮度权重值:AwpWeightArrayY[x][y]=Clip3(0,8,(y<<1)+((x<<1)>>angleIdx)-FirstPos),其中(y<<1)+((x<<1)>>angleIdx)表示像素位置(x,y)指向的周边匹配位置。
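将步骤d2与步骤d3联合后的情况一计算,可以示意如下(同样假设1/2-pel精度、权重变换率为1;此草图直接由像素位置得到亮度权重,省去了显式的参考权重数组,函数名为示例命名):

```python
def awp_luma_weights_case0(M, N, angle_idx, step_idx):
    # 情况一联合公式:
    # AwpWeightArrayY[x][y] = Clip3(0, 8, (y<<1)+((x<<1)>>angleIdx)-FirstPos)
    valid_length_h = (N + (M >> angle_idx)) << 1
    first_pos = (valid_length_h >> 1) - 6 + step_idx * ((valid_length_h >> 3) - 1)
    return [[max(0, min(8, (y << 1) + ((x << 1) >> angle_idx) - first_pos))
             for y in range(N)] for x in range(M)]
```

该结果与“先配置ReferenceWeights、再按周边位置索引取值”的两步做法一致。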
步骤d4,根据像素位置的亮度权重值获取该像素位置的色度权重值,而该像素位置的亮度权重值和该像素位置的色度权重值,就可以组成该像素位置的目标权重值。
例如,若色度分辨率的格式为4:2:0,则采用如下公式确定像素位置(x,y)的色度权重值:AwpWeightArrayUV[x][y]=AwpWeightArrayY[x<<1][y<<1]。又例如,若色度分辨率的格式为4:4:4,则采用如下公式确定像素位置(x,y)的色度权重值:AwpWeightArrayUV[x][y]=AwpWeightArrayY[x][y]。其中,x的取值范围是0~M/2-1;y的取值范围是0~N/2-1。
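4:2:0格式下由亮度权重降采样得到色度权重的过程,可以示意为如下草图(假设亮度权重矩阵按[x][y]索引,函数名为示例命名):

```python
def awp_chroma_weights_420(luma_weights, M, N):
    # AwpWeightArrayUV[x][y] = AwpWeightArrayY[x<<1][y<<1]:
    # 色度块尺寸为M/2 x N/2,即每个2x2亮度位置取其左上角的权重值
    return [[luma_weights[x << 1][y << 1] for y in range(N >> 1)]
            for x in range(M >> 1)]
```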
步骤d4的另一种实现方式为:根据angleIdx和参考权重值ReferenceWeights[x]获取像素位置的色度权重值,而不需要根据亮度权重值获取色度权重值。例如,若色度分辨率的格式为4:2:0,则根据angleIdx和参考权重值ReferenceWeights[x]获取像素位置的色度权重值。
例如,若subAngleIdx为0,则可以采用如下公式确定像素位置(x,y)的色度权重值:AwpWeightArrayUV[x][y]=ReferenceWeights[(y<<2)+((x<<2)>>angleIdx)]。
例如,若subAngleIdx为1,则可以采用如下公式确定像素位置(x,y)的色度权重值:AwpWeightArrayUV[x][y]=ReferenceWeights[(y<<2)-((x<<2)>>angleIdx)]。
例如,若subAngleIdx为2,则可以采用如下公式确定像素位置(x,y)的色度权重值:AwpWeightArrayUV[x][y]=ReferenceWeights[(x<<2)-((y<<2)>>angleIdx)]。
例如,若subAngleIdx为3,则可以采用如下公式确定像素位置(x,y)的色度权重值:AwpWeightArrayUV[x][y]=ReferenceWeights[(x<<2)+((y<<2)>>angleIdx)]。
在上述各公式中,x的取值范围是0~M-1,y的取值范围是0~N-1。
在步骤d3和步骤d4中,各情况中的公式的区别在于,参见图8C所示,示出了角度分区2和角度分区3的示例。将当前块的左上角作为坐标原点时,(x,y)的匹配位置在角度分区2的计算公式可以为:x-y>>angleIdx,(x,y)的匹配位置在角度分区3的计算公式可以为:x+y>>angleIdx。在1/2-pel精度时,(x,y)的匹配位置在角度分区2的计算公式可以为:(x<<1)-(y<<1)>>angleIdx,(x,y)的匹配位置在角度分区3的计算公式可以为:(x<<1)+(y<<1)>>angleIdx。角度分区0和角度分区1的实现类似,只是交换(x,y)的位置即可。
实施例9:在实施例1-实施例4中,编码端/解码端需要获取当前块的权重变换率,若当前块支持权重变换率切换模式,则采用如下方式获取当前块的权重变换率:获取当前块的权重变换率指示信息,并根据该权重变换率指示信息确定当前块的权重变换率。示例性的,若该权重变换率指示信息为第一指示信息,则当前块的权重变换率为第一权重变换率;若权重变换率指示信息为第二指示信息,则当前块的权重变换率为第二权重变换率。若当前块不支持权重变换率切换模式,则将预设的权重变换率确定为当前块的权重变换率。
综上所述,若当前块支持权重变换率切换模式,则当前块的权重变换率可以为第一权重变换率或者第二权重变换率,且第一权重变换率与第二权重变换率不同,即,当前块的权重变换率是可变的,从而能够自适应的切换权重变换率,而不是采用统一的一个权重变换率。
示例性的,若切换控制信息允许当前块启用权重变换率切换模式,则当前块支持权重变换率切换模式,若切换控制信息不允许当前块启用权重变换率切换模式,则当前块不支持权重变换率切换模式。该切换控制信息可以包括但不限于:序列级切换控制信息,帧级切换控制信息,Slice(片)级切换控制信息,Tile(片)级切换控制信息,Patch(片)级切换控制信息,CTU(Coding Tree Unit,编码树单元)级切换控制信息,LCU(Largest Coding Unit,最大编码单元)级切换控制信息,块级切换控制信息,CU(Coding Unit,编码单元)级切换控制信息,PU(Prediction Unit,预测单元)级切换控制信息等,对此不做限制。
对于编码端来说,可以获知切换控制信息,且获知切换控制信息是否允许当前块启用权重变换率切换模式,继而确定当前块是否支持权重变换率切换模式。编码端可以将切换控制信息编码到码流,使得解码端从码流中解析出切换控制信息,获知切换控制信息是否允许当前块启用权重变换率切换模式,继而确定当前块是否支持权重变换率切换模式。编码端也可以不将切换控制信息编码到码流,而是由解码端隐式推导出切换控制信息,获知切换控制信息是否允许当前块启用权重变换率切换模式,继而确定当前块是否支持权重变换率切换模式。
以序列级切换控制信息为例,序列级切换控制信息可以为awp_adptive_flag(帧间角度加权预测适应标志位),若awp_adptive_flag为第一取值,则表示序列级切换控制信息允许当前序列启用权重变换率切换模式,从而允许当前块启用权重变换率切换模式,若awp_adptive_flag为第二取值,则表示序列级切换控制信息不允许当前序列启用权重变换率切换模式,从而不允许当前块启用权重变换率切换模式。示例性的,第一取值为1,第二取值为0,或者,第一取值为0,第二取值为1。当然,上述只是第一取值和第二取值的示例,对此不做限制。针对其它类型的切换控制信息,实现过程与序列级切换控制信息类似,在此不再赘述。
在一种可能的实施方式中,当前块的权重变换率指示信息可以为当前块对应的SCC(Screen Content Coding,屏幕内容编码)标识,第一指示信息用于指示当前块属于屏幕内容编码,第二指示信息用于指示当前块属于非屏幕内容编码。在此基础上,可以获取当前块对应的SCC标识,并根据该SCC标识确定当前块的权重变换率。例如,若该SCC标识用于指示当前块属于屏幕内容编码,则当前块的权重变换率为第一权重变换率;若该SCC标识用于指示当前块属于非屏幕内容编码,则当前块的权重变换率为第二权重变换率。
示例性的,第一权重变换率的绝对值可以大于第二权重变换率的绝对值。例如,第一权重变换率的绝对值可以为4,第二权重变换率的绝对值可以为1或2。又例如,第一权重变换率的绝对值可以为2,第二权重变换率的绝对值可以为1。又例如,第一权重变换率的绝对值可以为8,第二权重变换率的绝对值可以为1或2或4。又例如,第一权重变换率的绝对值可以为8或4,第二权重变换率的绝对值可以为1或2。当然,上述只是几个示例,对此不做限制,只要第一权重变换率的绝对值大于第二权重变换率的绝对值即可。
示例性的,SCC标识可以包括但不限于:序列级SCC标识,帧级SCC标识,Slice级SCC标识,Tile级SCC标识,Patch级SCC标识,CTU级SCC标识,LCU级SCC标识,块级SCC标识,CU级SCC标识,PU级SCC标识等等,对此不做限制。例如,可以将当前块对应的序列级SCC标识确定为当前块对应的SCC标识,或者,将当前块对应的帧级SCC标识确定为当前块对应的SCC标识,以此类推,只要能够得到当前块对应的SCC标识即可。
示例性的,对于编码端来说,可以决策当前块属于屏幕内容编码,还是属于非屏幕内容编码。若当前块属于屏幕内容编码,则编码端确定当前块的权重变换率为第一权重变换率。若当前块属于非屏幕内容编码,则编码端确定当前块的权重变换率为第二权重变换率。或者,对于编码端来说,可以获取当前块对应的SCC标识,若该SCC标识用于指示当前块属于屏幕内容编码,则编码端确定当前块的权重变换率为第一权重变换率。若该SCC标识用于指示当前块属于非屏幕内容编码,则编码端确定当前块的权重变换率为第二权重变换率。
编码端可以将SCC标识(如序列级SCC标识,帧级SCC标识,Slice级SCC标识等等)编码到码流,使得解码端从码流中解析出该SCC标识,并将该SCC标识确定为当前块对应的SCC标识,例如,可以将当前块对应的序列级SCC标识确定为当前块对应的SCC标识。综上所述,解码端可以获知当前块对应的SCC标识,若该SCC标识用于指示当前块属于屏幕内容编码,则解码端确定当前块的权重变换率为第一权重变换率。若该SCC标识用于指示当前块属于非屏幕内容编码,则解码端确定当前块的权重变换率为第二权重变换率。例如,若SCC标识为第一取值,则用于指示当前块属于 屏幕内容编码,若SCC标识为第二取值,则用于指示当前块属于非屏幕内容编码。第一取值为1,第二取值为0,或者,第一取值为0,第二取值为1。当然,上述只是第一取值和第二取值的示例,对此不做限制。
编码端也可以不将SCC标识编码到码流,而是利用与解码端一致的信息隐式推导出SCC标识,此时解码端也可以隐式推导出SCC标识,并将该SCC标识确定为当前块对应的SCC标识。例如,若连续多帧均是屏幕内容编码,则推导出当前帧为屏幕内容编码,因此,解码端隐式推导出帧级SCC标识,并将该SCC标识确定为当前块对应的SCC标识,且该SCC标识用于指示当前块属于屏幕内容编码。例如,若连续多帧均是非屏幕内容编码,则推导出当前帧为非屏幕内容编码,因此,解码端隐式推导出帧级SCC标识,且该SCC标识用于指示当前块属于非屏幕内容编码。例如,若IBC模式占比小于一定百分比,将下一帧确定为非屏幕内容编码,否则继续为屏幕内容编码。当然,上述方式只是隐式推导出SCC标识的示例,对此隐式推导方式不做限制。综上所述,解码端可以获取当前块对应的SCC标识,若该SCC标识用于指示当前块属于屏幕内容编码,则解码端确定当前块的权重变换率为第一权重变换率。若该SCC标识用于指示当前块属于非屏幕内容编码,则解码端确定当前块的权重变换率为第二权重变换率。例如,若SCC标识为第一取值,则用于指示当前块属于屏幕内容编码,若SCC标识为第二取值,则用于指示当前块属于非屏幕内容编码。
综上所述,当前块的权重变换率可以为第一权重变换率或者第二权重变换率,即当前块的权重变换率可以进行切换,且权重变换率的切换依赖于某级的SCC显式标识或SCC隐式标识,SCC显式标识是指,将scc_flag(SCC标识)编入码流,使得解码端从码流中解析出SCC标识,SCC隐式标识是指,解码端根据能够得到的信息自适应推导出SCC标识。
某级SCC标识是指:序列级指示当前序列的SCC标识,该SCC标识作为属于当前序列的所有块的SCC标识;帧级指示当前帧的SCC标识,该SCC标识作为属于当前帧的所有块的SCC标识;Slice级指示当前Slice的SCC标识,该SCC标识作为属于当前Slice的所有块的SCC标识;Tile级指示当前Tile的SCC标识,该SCC标识作为属于当前Tile的所有块的SCC标识;Patch级指示当前Patch的SCC标识,该SCC标识作为属于当前Patch的所有块的SCC标识;CTU级指示当前CTU的SCC标识,该SCC标识作为属于当前CTU的所有块的SCC标识;LCU级指示当前LCU的SCC标识,该SCC标识作为属于当前LCU的所有块的SCC标识;块级指示当前块的SCC标识,该SCC标识作为当前块的SCC标识;CU级指示当前CU的SCC标识,该SCC标识作为当前CU的SCC标识;PU级指示当前PU的SCC标识,该SCC标识作为当前PU的SCC标识。
示例性的,可以将第二权重变换率作为默认的权重变换率,在SCC标识用于指示当前块属于非屏幕内容编码时,不需要对权重变换率进行切换,即确定当前块的权重变换率为第二权重变换率。在SCC标识用于指示当前块属于屏幕内容编码时,需要对权重变换率进行切换,即确定当前块的权重变换率为第一权重变换率。或者,可以将第一权重变换率作为默认的权重变换率,在SCC标识用于指示当前块属于非屏幕内容编码时,需要对权重变换率进行切换,即确定当前块的权重变换率为第二权重变换率。在 SCC标识用于指示当前块属于屏幕内容编码时,不需要对权重变换率进行切换,即确定当前块的权重变换率为第一权重变换率。
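实施例9中基于SCC标识的权重变换率选择逻辑可示意如下(草图;第一/第二权重变换率取4与1仅为本文列举的示例取值之一,函数名为示例命名):

```python
def weight_transform_rate(supports_switch, is_scc,
                          first_rate=4, second_rate=1, default_rate=1):
    # 若当前块支持权重变换率切换模式:SCC标识指示屏幕内容编码时
    # 选用第一权重变换率,指示非屏幕内容编码时选用第二权重变换率;
    # 不支持切换模式时,采用预设的权重变换率
    if not supports_switch:
        return default_rate
    return first_rate if is_scc else second_rate
```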
综上所述,在当前块属于屏幕内容编码时,当前块的权重变换率为第一权重变换率,在当前块属于非屏幕内容编码时,当前块的权重变换率为第二权重变换率,且第一权重变换率的绝对值大于第二权重变换率的绝对值,如第一权重变换率的绝对值为4,第二权重变换率的绝对值为1,使得属于SCC序列的当前块的权重变换率的绝对值增加,即变换速度增加。
在另一种可能的实施方式中,当前块的权重变换率指示信息可以为当前块对应的权重变换率切换标识,第一指示信息用于指示当前块不需要进行权重变换率切换,第二指示信息用于指示当前块需要进行权重变换率切换。在此基础上,可以获取当前块对应的权重变换率切换标识,并根据该权重变换率切换标识确定当前块的权重变换率。例如,若该权重变换率切换标识用于指示当前块不需要进行权重变换率切换,则当前块的权重变换率可以为第一权重变换率;若该权重变换率切换标识用于指示当前块需要进行权重变换率切换,则当前块的权重变换率可以为第二权重变换率。第一权重变换率的绝对值不等于第二权重变换率的绝对值。
例如,第一权重变换率的绝对值可以大于第二权重变换率的绝对值,如第一权重变换率的绝对值可以为4,第二权重变换率的绝对值为1或2。或,第一权重变换率的绝对值可以为2,第二权重变换率的绝对值为1。或,第一权重变换率的绝对值可以为8,第二权重变换率的绝对值为1或2或4。又例如,第一权重变换率的绝对值可以小于第二权重变换率的绝对值,如第一权重变换率的绝对值可以为1,第二权重变换率的绝对值可以为2或4或8。或者,第一权重变换率的绝对值可以为2,第二权重变换率的绝对值可以为4或8。或者,第一权重变换率的绝对值可以为4,第二权重变换率的绝对值可以为8。当然,上述只是几个示例,对此不做限制,只要第一权重变换率的绝对值不等于第二权重变换率的绝对值即可。
示例性的,权重变换率切换标识可以包括但不限于:序列级权重变换率切换标识,帧级权重变换率切换标识,Slice级权重变换率切换标识,Tile级权重变换率切换标识,Patch级权重变换率切换标识,CTU级权重变换率切换标识,LCU级权重变换率切换标识,块级权重变换率切换标识,CU级权重变换率切换标识,PU级权重变换率切换标识等等,对此不做限制。
例如,可以将当前块对应的序列级权重变换率切换标识确定为当前块对应的权重变换率切换标识,或者,将当前块对应的帧级权重变换率切换标识确定为当前块对应的权重变换率切换标识,以此类推,只要能够得到当前块对应的权重变换率切换标识即可。
示例性的,可以将第一权重变换率作为默认的权重变换率,对于编码端来说,可以获知当前块是否需要进行权重变换率切换,若当前块不需要进行权重变换率切换,则编码端确定当前块的权重变换率为第一权重变换率。若当前块需要进行权重变换率切换,则编码端确定当前块的权重变换率为第二权重变换率。或者,对于编码端来说,可以获知当前块对应的权重变换率切换标识,若该权重变换率切换标识用于指示当前块不 需要进行权重变换率切换,则编码端可以确定当前块的权重变换率为第一权重变换率。若该权重变换率切换标识用于指示当前块需要进行权重变换率切换,则编码端确定当前块的权重变换率为第二权重变换率。
例如,编码端确定与第一权重变换率对应的率失真代价值1,与第二权重变换率对应的率失真代价值2。若率失真代价值1小于率失真代价值2,则确定当前块不需要进行权重变换率切换,若率失真代价值2小于率失真代价值1,则确定当前块需要进行权重变换率切换。
编码端可以将权重变换率切换标识(如序列级权重变换率切换标识等)编码到码流,使得解码端从码流中解析出该权重变换率切换标识,并将该权重变换率切换标识确定为当前块对应的权重变换率切换标识。综上所述,解码端可以获知当前块对应的权重变换率切换标识,若该权重变换率切换标识用于指示当前块不需要进行权重变换率切换,则解码端确定当前块的权重变换率为第一权重变换率。若该权重变换率切换标识用于指示当前块需要进行权重变换率切换,则解码端确定当前块的权重变换率为第二权重变换率。例如,若权重变换率切换标识为第一取值,则指示当前块不需要进行权重变换率切换,若权重变换率切换标识为第二取值,则指示当前块需要进行权重变换率切换。第一取值为1,第二取值为0,或者,第一取值为0,第二取值为1。当然,上述只是第一取值和第二取值的示例,对此不做限制。
编码端也可以不将权重变换率切换标识编码到码流,而是由解码端隐式推导权重变换率切换标识,并将该权重变换率切换标识确定为当前块对应的权重变换率切换标识。例如,若连续多个块均需要进行权重变换率切换,则当前块也需要进行权重变换率切换,解码端隐式推导出权重变换率切换标识,将该权重变换率切换标识确定为当前块对应的权重变换率切换标识,且该权重变换率切换标识指示当前块需要进行权重变换率切换。若连续多个块均不需要进行权重变换率切换,则当前块也不需要进行权重变换率切换,解码端隐式推导出权重变换率切换标识,且权重变换率切换标识指示当前块不需要进行权重变换率切换。当然,上述方式只是隐式推导出权重变换率切换标识的示例,对此推导方式不做限制。综上所述,解码端可以获知当前块对应的权重变换率切换标识,若该权重变换率切换标识指示当前块不需要进行权重变换率切换,则确定当前块的权重变换率为第一权重变换率。若该权重变换率切换标识指示当前块需要进行权重变换率切换,则确定当前块的权重变换率为第二权重变换率。
综上所述,当前块的权重变换率可以为第一权重变换率或第二权重变换率,即当前块的权重变换率可以进行切换,权重变换率的切换依赖于某级的权重变换率切换标识(refine_flag),该refine_flag为显式标识或隐式标识,显式标识是指将refine_flag编入码流,使得解码端从码流中解析出refine_flag,隐式标识是指编解码端根据能够得到的信息自适应推导refine_flag。
示例性的,某级refine_flag是指:序列级指示当前序列的refine_flag,作为属于当前序列的所有块的refine_flag;帧级指示当前帧的refine_flag,作为属于当前帧的所有块的refine_flag;Slice级指示当前Slice的refine_flag,作为属于当前Slice的所有块的refine_flag;Tile级指示当前Tile的refine_flag,作为属于当前Tile的所有块的refine_flag; Patch级指示当前Patch的refine_flag,作为属于当前Patch的所有块的refine_flag;CTU级指示当前CTU的refine_flag,作为属于当前CTU的所有块的refine_flag;LCU级指示当前LCU的refine_flag,作为属于当前LCU的所有块的refine_flag;块级指示当前块的refine_flag,作为属于当前块的refine_flag;CU级指示当前CU的refine_flag,作为属于当前CU的refine_flag;PU级指示当前PU的refine_flag,作为属于当前PU的refine_flag。当然,上述只是几个示例,对此不做限制。
示例性的,可以将第一权重变换率作为默认的权重变换率,在权重变换率切换标识用于指示当前块不需要进行权重变换率切换时,不对权重变换率进行切换,即确定当前块的权重变换率为第一权重变换率。在权重变换率切换标识用于指示当前块需要进行权重变换率切换时,对权重变换率进行切换,即确定当前块的权重变换率为第二权重变换率。
实施例10:在实施例1-实施例4中,编码端/解码端需要获取当前块的权重预测角度和权重预测位置,在实施例9中,可以得到当前块的权重变换率,如第一权重变换率或者第二权重变换率,在此基础上,采用如下方式获取当前块的权重预测角度和权重预测位置:
方式一、编码端和解码端约定相同的权重预测角度作为当前块的权重预测角度,并约定相同的权重预测位置作为当前块的权重预测位置。例如,编码端和解码端将权重预测角度A作为当前块的权重预测角度,编码端和解码端将权重预测位置4作为当前块的权重预测位置。
方式二、编码端构建权重预测角度列表,权重预测角度列表包括至少一个权重预测角度,如权重预测角度A和权重预测角度B。编码端构建权重预测位置列表,权重预测位置列表包括至少一个权重预测位置,如权重预测位置0-权重预测位置6。依次遍历权重预测角度列表中的每个权重预测角度,遍历权重预测位置列表中的每个权重预测位置,即遍历每个权重预测角度及每个权重预测位置的组合。将每个组合作为步骤a1中获取的权重预测角度和权重预测位置,基于该权重预测角度,该权重预测位置及权重变换率,得到当前块的加权预测值。
例如,编码端遍历到权重预测角度A和权重预测位置0时,基于权重预测角度A和权重预测位置0得到当前块的加权预测值。遍历到权重预测角度A和权重预测位置1时,基于权重预测角度A和权重预测位置1得到当前块的加权预测值。遍历到权重预测角度B和权重预测位置0时,基于权重预测角度B和权重预测位置0得到当前块的加权预测值,以此类推。编码端可以基于权重预测角度和权重预测位置的每个组合,得到当前块的加权预测值。
编码端基于权重预测角度和权重预测位置的组合得到当前块的加权预测值后,可以根据当前块的加权预测值确定率失真代价值,对此率失真代价值的确定方式不做限制,编码端可以得到每个组合的率失真代价值,并从所有率失真代价值中选择最小率失真代价值。
然后,编码端将最小率失真代价值对应的权重预测角度和权重预测位置组合作为目标权重预测角度和目标权重预测位置,最后将目标权重预测角度在权重预测角度列表中的索引值和目标权重预测位置在权重预测位置列表中的索引值编入码流。
针对解码端来说,解码端构建权重预测角度列表,该权重预测角度列表与编码端的权重预测角度列表相同,权重预测角度列表包括至少一个权重预测角度。构建权重预测位置列表,该权重预测位置列表与编码端的权重预测位置列表相同,权重预测位置列表包括至少一个权重预测位置。解码端在接收到当前块的编码比特流后,从编码比特流中解析出指示信息,根据该指示信息从权重预测角度列表中选择一个权重预测角度作为当前块的权重预测角度,根据该指示信息从权重预测位置列表中选择一个权重预测位置作为当前块的权重预测位置。
应用场景1:编码端在向解码端发送编码比特流时,该编码比特流可以包括指示信息1,指示信息1用于指示当前块的权重预测角度(即目标权重预测角度)和当前块的权重预测位置(即目标权重预测位置)。例如,当指示信息1为0时,用于指示权重预测角度列表中的第一个权重预测角度,且指示权重预测位置列表中的第一个权重预测位置,当指示信息1为1时,用于指示权重预测角度列表中的第一个权重预测角度,且指示权重预测位置列表中的第二个权重预测位置,以此类推,对于指示信息1的取值,用于指示哪个权重预测角度和哪个权重预测位置,只要编码端与解码端进行约定即可,本实施例中对此不做限制。
解码端在接收到编码比特流后,从该编码比特流中解析出指示信息1,基于该指示信息1,解码端可以从权重预测角度列表中选择与该指示信息1对应的权重预测角度,该权重预测角度作为当前块的权重预测角度。基于该指示信息1,解码端可以从权重预测位置列表中选择与该指示信息1对应的权重预测位置,该权重预测位置作为当前块的权重预测位置。
应用场景2:编码端在向解码端发送编码比特流时,该编码比特流可以包括指示信息2和指示信息3。指示信息2用于指示当前块的目标权重预测角度,如目标权重预测角度在权重预测角度列表中的索引值1,索引值1表示目标权重预测角度是权重预测角度列表中的第几个权重预测角度。指示信息3用于指示当前块的目标权重预测位置,如目标权重预测位置在权重预测位置列表中的索引值2,索引值2表示目标权重预测位置是权重预测位置列表中的第几个权重预测位置。解码端接收到编码比特流后,从编码比特流中解析出指示信息2和指示信息3,基于指示信息2,从权重预测角度列表中选择与该索引值1对应的权重预测角度,该权重预测角度作为当前块的权重预测角度。基于该指示信息3,从权重预测位置列表中选择与该索引值2对应的权重预测位置,该权重预测位置作为当前块的权重预测位置。
应用场景3:编码端和解码端可以约定优选配置组合,对此优选配置组合不做限制,可以根据实际经验进行配置,例如,编码端和解码端约定包括权重预测角度A和权重预测位置4的优选配置组合1,包括权重预测角度B和权重预测位置4的优选配置组合2。
编码端确定当前块的目标权重预测角度和目标权重预测位置后,确定目标权重预测角度和目标权重预测位置是否为优选配置组合。如果是,则编码端在向解码端发送编码比特流时,该编码比特流可以包括指示信息4和指示信息5。指示信息4用于指示 当前块是否采用优选配置组合,如指示信息4为第一取值(如0),表示当前块采用优选配置组合。指示信息5用于指示当前块采用哪个优选配置组合,如指示信息5为0时,用于指示当前块采用优选配置组合1,指示信息5为1时,用于指示当前块采用优选配置组合2。
解码端在接收到编码比特流后,从该编码比特流中解析出指示信息4和指示信息5,基于该指示信息4,解码端确定当前块是否采用优选配置组合。若指示信息4为第一取值,则确定当前块采用优选配置组合。在当前块采用优选配置组合时,解码端基于指示信息5,确定当前块采用哪个优选配置组合,如当指示信息5为0时,确定当前块采用优选配置组合1,即当前块的权重预测角度为权重预测角度A,当前块的权重预测位置为权重预测位置4。又例如,当指示信息5为1时,确定当前块采用优选配置组合2,即当前块的权重预测角度为权重预测角度B,当前块的权重预测位置为权重预测位置4。
示例性的,若编码端和解码端只约定一组优选配置组合,如包括权重预测角度A和权重预测位置4的优选配置组合,则该编码比特流可以包括指示信息4,而不包括指示信息5,指示信息4用于指示当前块采用优选配置组合。解码端从该编码比特流中解析出指示信息4后,若指示信息4为第一取值,则确定当前块采用优选配置组合,基于该优选配置组合,确定当前块的权重预测角度为权重预测角度A,当前块的权重预测位置为权重预测位置4。
应用场景4:编码端和解码端可以约定优选配置组合,编码端确定当前块的目标权重预测角度和目标权重预测位置后,确定目标权重预测角度和目标权重预测位置是否为优选配置组合。如果否,则编码端向解码端发送编码比特流时,该编码比特流包括指示信息4和指示信息6。指示信息4用于指示当前块是否采用优选配置组合,如指示信息4为第二取值(如1),表示当前块未采用优选配置组合。指示信息6用于指示当前块的目标权重预测角度和当前块的目标权重预测位置。例如,当指示信息6为0时,用于指示权重预测角度列表中的第一个权重预测角度,且指示权重预测位置列表中的第一个权重预测位置,以此类推。
解码端在接收到编码比特流后,从该编码比特流中解析出指示信息4和指示信息6,基于该指示信息4,解码端确定当前块是否采用优选配置组合。若指示信息4为第二取值,则确定当前块未采用优选配置组合。在当前块未采用优选配置组合时,解码端基于指示信息6,可以从权重预测角度列表中选择与该指示信息6对应的权重预测角度,该权重预测角度作为当前块的权重预测角度,基于该指示信息6,解码端可以从权重预测位置列表中选择与该指示信息6对应的权重预测位置,该权重预测位置作为当前块的权重预测位置。
应用场景5:编码端和解码端可以约定优选配置组合,编码端确定当前块的目标权重预测角度和目标权重预测位置后,确定目标权重预测角度和目标权重预测位置是否为优选配置组合。如果否,则编码端向解码端发送编码比特流时,该编码比特流包括指示信息4,指示信息7和指示信息8。示例性的,指示信息4用于指示当前块是否采用优选配置组合,如指示信息4为第二取值,表示当前块未采用优选配置组合。指示信息 7用于指示当前块的目标权重预测角度,如目标权重预测角度在权重预测角度列表中的索引值1,索引值1表示目标权重预测角度是权重预测角度列表中的第几个权重预测角度。指示信息8用于指示当前块的目标权重预测位置,如目标权重预测位置在权重预测位置列表中的索引值2,索引值2表示目标权重预测位置是权重预测位置列表中的第几个权重预测位置。
解码端在接收到编码比特流后,从该编码比特流中解析出指示信息4,指示信息7和指示信息8,基于该指示信息4,解码端确定当前块是否采用优选配置组合。若指示信息4为第二取值,则确定当前块未采用优选配置组合。在当前块未采用优选配置组合时,解码端基于该指示信息7,从权重预测角度列表中选择与该索引值1对应的权重预测角度,该权重预测角度作为当前块的权重预测角度。解码端基于该指示信息8,从权重预测位置列表中选择与该索引值2对应的权重预测位置,该权重预测位置作为当前块的权重预测位置。
实施例11:在实施例1-实施例4中,编码端/解码端需要获取当前块的权重变换率,若当前块支持权重变换率切换模式,则采用如下方式获取当前块的权重变换率:获取当前块的权重变换率指示信息;从预设的查找表中选择与该权重变换率指示信息对应的权重变换率;该预设的查找表包括至少两个权重变换率。将选择的权重变换率确定为当前块的权重变换率。
由于预设的查找表包括至少两个权重变换率,因此,若当前块支持权重变换率切换模式,则可以从至少两个权重变换率中选择当前块的权重变换率,即,当前块的权重变换率是可变的,从而能够自适应的切换权重变换率,而不是采用统一的一个权重变换率。
示例性的,若切换控制信息允许当前块启用权重变换率切换模式,则当前块支持权重变换率切换模式,若切换控制信息不允许当前块启用权重变换率切换模式,则当前块不支持权重变换率切换模式,关于当前块是否支持权重变换率切换模式的内容,参见实施例9。
在一种可能的实施方式中,预设的查找表可以包括至少两个权重变换率,权重变换率指示信息可以包括权重变换率索引信息(用于指示查找表中所有权重变换率中的某个权重变换率)。基于此,可以从该查找表中选择与该权重变换率索引信息对应的权重变换率。
针对编码端来说,若只存在一个查找表,则针对查找表中的每个权重变换率,编码端可以确定与该权重变换率对应的率失真代价值,并将最小的率失真代价值对应的权重变换率作为当前块的目标权重变换率,确定目标权重变换率在查找表中的索引信息,即权重变换率索引信息,该权重变换率索引信息表示该查找表中的第几个权重变换率。
针对解码端来说,若只存在一个查找表,编码端向解码端发送当前块的编码比特流时,该编码比特流可以携带权重变换率索引信息,该权重变换率索引信息用于指示目标权重变换率在查找表中的索引信息。解码端从该查找表中选择与该权重变换率索引信息对应的权重变换率,该权重变换率作为当前块的目标权重变换率。
实施例12:在实施例1-实施例4中,编码端/解码端需要获取当前块的权重预测角度,当前块的权重预测位置和当前块的权重变换率,在实施例11中,可以得到当前块的权重变换率,在此基础上,可以采用如下方式获取当前块的权重预测角度和权重预测位置:
方式一、编码端和解码端约定相同的权重预测角度作为当前块的权重预测角度,并约定相同的权重预测位置作为当前块的权重预测位置。例如,编码端和解码端将权重预测角度A作为当前块的权重预测角度,编码端和解码端将权重预测位置4作为当前块的权重预测位置。
方式二、编码端构建权重预测角度列表,权重预测角度列表包括至少一个权重预测角度。编码端构建权重预测位置列表,权重预测位置列表包括至少一个权重预测位置。编码端构建至少两个查找表,以第一查找表和第二查找表为例,第一查找表包括至少一个权重变换率,第二查找表包括至少一个权重变换率。编码端确定出目标查找表,确定方式参见实施例11,以目标查找表是第一查找表为例。编码端依次遍历权重预测角度列表中每个权重预测角度,遍历权重预测位置列表中每个权重预测位置,遍历目标查找表中每个权重变换率,即遍历每个权重预测角度,每个权重预测位置,每个权重变换率的组合。针对权重预测角度,权重预测位置及权重变换率的每个组合,作为步骤a1中获取的权重预测角度,权重预测位置及权重变换率,基于权重预测角度,权重预测位置及权重变换率,得到当前块的加权预测值。
综上所述,编码端可以基于每个组合得到当前块的加权预测值。在得到当前块的加权预测值后,编码端可以根据当前块的加权预测值确定率失真代价值,即,编码端可以得到每个组合的率失真代价值,并从所有率失真代价值中选择最小率失真代价值。
然后,编码端将最小率失真代价值对应的权重预测角度,权重预测位置和权重变换率分别作为目标权重预测角度,目标权重预测位置和目标权重变换率,最后,将目标权重预测角度在权重预测角度列表中的索引值,目标权重预测位置在权重预测位置列表中的索引值,及目标权重变换率在目标查找表中的索引值,均编入当前块的码流。
针对解码端来说,解码端构建权重预测角度列表,该权重预测角度列表与编码端的权重预测角度列表相同,解码端构建权重预测位置列表,该权重预测位置列表与编码端的权重预测位置列表相同,解码端构建第一查找表和第二查找表,该第一查找表与编码端的第一查找表相同,该第二查找表与编码端的第二查找表相同。解码端在接收到当前块的编码比特流后,从该编码比特流中解析出指示信息,根据该指示信息从权重预测角度列表中选择一个权重预测角度作为当前块的权重预测角度,根据该指示信息从权重预测位置列表中选择一个权重预测位置作为当前块的权重预测位置,权重预测角度和权重预测位置的获取方式,参见实施例10,在此不再赘述。解码端在接收到当前块的编码比特流后,可以确定目标查找表(如第一查找表或者第二查找表),并根据权重变换率索引信息从目标查找表中选择一个权重变换率作为当前块的权重变换率,关于权重变换率的获取方式,参见实施例11,在此不再赘述。
实施例13:在实施例1-实施例3中,编码端/解码端需要获取运动信息候选列表,如采用如下方式获取运动信息候选列表:获取待加入到运动信息候选列表中的至少一个 可用运动信息;基于至少一个可用运动信息获取运动信息候选列表。示例性的,至少一个可用运动信息包括但不限于如下运动信息的至少一种:空域运动信息;时域运动信息;HMVP(History-based Motion Vector Prediction,基于历史的运动矢量预测)运动信息;预设运动信息。
示例性的,针对空域运动信息,可以采用如下方式获取待加入到运动信息候选列表中的至少一个可用运动信息:针对当前块的空域相邻块,若空域相邻块存在,且空域相邻块采用帧间预测模式,则将该空域相邻块的运动信息确定为可用运动信息。
示例性的,针对时域运动信息,可以采用如下方式获取待加入到运动信息候选列表中的至少一个可用运动信息:基于当前块的预设位置(例如,当前块的左上角像素位置),从当前块的参考帧中选取与该预设位置对应的时域相邻块,并将该时域相邻块的运动信息确定为可用运动信息。
示例性的,针对预设运动信息,可以采用如下方式获取待加入到运动信息候选列表中的至少一个可用运动信息:将预设运动信息确定为可用运动信息,该预设运动信息可以包括但不限于:基于运动信息候选列表中已存在的候选运动信息所导出的缺省运动信息。
在一种可能的实施方式中,基于至少一个可用运动信息,可以采用如下方式获取运动信息候选列表:针对当前待加入到运动信息候选列表的可用运动信息,若可用运动信息为单向运动信息,且该单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将该单向运动信息加入到运动信息候选列表;若可用运动信息为双向运动信息,则将该双向运动信息裁剪为第一单向运动信息和第二单向运动信息;若第一单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将该第一单向运动信息加入到运动信息候选列表。
在上述实施例中,第一单向运动信息可以是指向第一参考帧列表中参考帧的单向运动信息;第二单向运动信息可以是指向第二参考帧列表中参考帧的单向运动信息。
在一种可能的实施方式中,List0可以是前向参考帧列表,List1可以是后向参考帧列表。
在上述实施例中,针对单向运动信息与运动信息候选列表中已存在的候选运动信息的查重操作,可以包括但不限于:若单向运动信息指向的参考帧的POC(显示顺序)与候选运动信息指向的参考帧的POC相同,且该单向运动信息的运动矢量与该候选运动信息的运动矢量相同,则确定该单向运动信息与该候选运动信息重复;否则,确定该单向运动信息与该候选运动信息不重复。
针对第一单向运动信息与运动信息候选列表中已存在的候选运动信息的查重操作,可以包括但不限于:若第一单向运动信息指向的参考帧列表与候选运动信息指向的参考帧列表相同,且该第一单向运动信息的refIdx与该候选运动信息的refIdx相同,且该第一单向运动信息的运动矢量与该候选运动信息的运动矢量相同,则确定该第一单向运动信息与该候选运动信息重复;否则,确定该第一单向运动信息与该候选运动信息不重复。或者,若第一单向运动信息指向的参考帧的POC与候选运动信息指向的参考帧 的POC相同,且该第一单向运动信息的运动矢量与该候选运动信息的运动矢量相同,则确定该第一单向运动信息与该候选运动信息重复;否则,确定该第一单向运动信息与该候选运动信息不重复。
针对第二单向运动信息与运动信息候选列表中已存在的候选运动信息的查重操作,可以包括但不限于:若第二单向运动信息指向的参考帧列表与候选运动信息指向的参考帧列表相同,且该第二单向运动信息的refIdx与该候选运动信息的refIdx相同,且该第二单向运动信息的运动矢量与该候选运动信息的运动矢量相同,则确定该第二单向运动信息与该候选运动信息重复;否则,确定该第二单向运动信息与该候选运动信息不重复。或者,若第二单向运动信息指向的参考帧的POC与候选运动信息指向的参考帧的POC相同,且该第二单向运动信息的运动矢量与该候选运动信息的运动矢量相同,则确定该第二单向运动信息与该候选运动信息重复;否则,确定该第二单向运动信息与该候选运动信息不重复。
示例性的,上述过程需要比较单向运动信息指向的参考帧的POC与候选运动信息指向的参考帧的POC是否相同,需要注意的是,参考帧的POC只是确认参考帧是否相同的一个示例,除了POC以外,还可以采用其它能够确认参考帧是否相同的属性,如参考帧的DOC(解码顺序标记)等,对此不作限制。综上所述,还可以比较单向运动信息指向的参考帧的DOC与候选运动信息指向的参考帧的DOC是否相同,实现原理与POC的实现原理类似。
实施例14:针对实施例13,可以利用空域运动信息(本文将当前块的空域相邻块的运动信息称为空域运动信息)和/或时域运动信息(本文将当前块的时域相邻块的运动信息称为时域运动信息)获取运动信息候选列表,因此,需要先从空域运动信息和/或时域运动信息中选取出可用运动信息。参见图9A所示,为当前块的空域相邻块的示意图,空域运动信息可以是F,G,C,A,B,D等空域相邻块的运动信息,时域运动信息可以为至少一个。
针对空域运动信息,可以采用如下方式获取待加入到运动信息候选列表的可用运动信息:
方式11、参见图9A所示,F,G,C,A,B,D是当前块E的空域相邻块,可以确定F,G,C,A,B,D的运动信息的“可用”性。示例性的,如果F存在且采用帧间预测模式,则F的运动信息为可用运动信息;否则,F的运动信息为不可用运动信息。如果G存在且采用帧间预测模式,则G的运动信息为可用运动信息;否则,G的运动信息为不可用运动信息。如果C存在且采用帧间预测模式,则C的运动信息为可用运动信息;否则,C的运动信息为不可用运动信息。如果A存在且采用帧间预测模式,则A的运动信息为可用运动信息;否则,A的运动信息为不可用运动信息。如果B存在且采用帧间预测模式,则B的运动信息为可用运动信息;否则,B的运动信息为不可用运动信息。如果D存在且采用帧间预测模式,则D的运动信息为可用运动信息;否则,D的运动信息为不可用运动信息。
方式12、F,G,C,A,B,D是当前块E的空域相邻块,确定F,G,C,A,B,D的运动信息的“可用”性。如果F存在且采用帧间预测模式,则F的运动信息为 可用运动信息;否则,F的运动信息为不可用运动信息。如果G存在且采用帧间预测模式,G的运动信息与F的运动信息不相同,则G的运动信息为可用运动信息;否则,G的运动信息为不可用运动信息。如果C存在且采用帧间预测模式,C的运动信息与G的运动信息不相同,则C的运动信息为可用运动信息;否则,C的运动信息为不可用运动信息。如果A存在且采用帧间预测模式,A的运动信息与F的运动信息不相同,则A的运动信息为可用运动信息;否则,A的运动信息为不可用运动信息。如果B存在且采用帧间预测模式,B的运动信息与G的运动信息不相同,则B的运动信息为可用运动信息;否则,B的运动信息为不可用运动信息。如果D存在且采用帧间预测模式,D的运动信息与A的运动信息不相同,且D的运动信息与G的运动信息不相同,则D的运动信息为可用运动信息;否则,D的运动信息为不可用运动信息。
方式13、F,G,C,A,B,D是当前块E的空域相邻块,可以确定F,G,C,A,B,D的运动信息的“可用”性。示例性的,如果F存在且采用帧间预测模式,则F的运动信息为可用运动信息;否则,F的运动信息为不可用运动信息。如果G存在且采用帧间预测模式,G的运动信息与F的运动信息不相同,则G的运动信息为可用运动信息;否则,G的运动信息为不可用运动信息。如果C存在且采用帧间预测模式,C的运动信息与G的运动信息不相同,则C的运动信息为可用运动信息;否则,C的运动信息为不可用运动信息。如果A存在且采用帧间预测模式,A的运动信息与F的运动信息不相同,则A的运动信息为可用运动信息;否则,A的运动信息为不可用运动信息。如果B存在且采用帧间预测模式,则B的运动信息为可用运动信息;否则,B的运动信息为不可用运动信息。如果D存在且采用帧间预测模式,D的运动信息与A的运动信息不相同,且D的运动信息与G的运动信息不相同,则D的运动信息为可用运动信息;否则,D的运动信息为不可用运动信息。
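方式12的可用性判断可以写成如下草图(以None表示“空域相邻块不存在或未采用帧间预测模式”,运动信息用可比较对象示意;方式11不做比较,方式13仅在B的比较对象上有所不同,函数名为示例命名):

```python
def spatial_available_mode12(mi):
    # mi为 {'F':..., 'G':..., 'C':..., 'A':..., 'B':..., 'D':...},
    # 值为对应空域相邻块的运动信息,None表示该块不可用
    avail = {}
    avail['F'] = mi['F'] is not None
    avail['G'] = mi['G'] is not None and mi['G'] != mi['F']
    avail['C'] = mi['C'] is not None and mi['C'] != mi['G']
    avail['A'] = mi['A'] is not None and mi['A'] != mi['F']
    avail['B'] = mi['B'] is not None and mi['B'] != mi['G']
    avail['D'] = (mi['D'] is not None and mi['D'] != mi['A']
                  and mi['D'] != mi['G'])
    return avail
```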
示例性的,在方式12和方式13中,在确定空域相邻块的运动信息是否为可用运动信息时,可能需要比较两个空域相邻块的运动信息是否相同,例如,比较G的运动信息与F的运动信息是否相同,运动信息是否相同的比较过程,实际上就是运动信息的查重操作,若查重结果为重复,则运动信息的比较结果为相同,若查重结果为不重复,则运动信息的比较结果为不同。关于运动信息的查重操作,可以参见后续实施例,在此不再赘述。
针对时域运动信息,可以采用如下方式获取待加入到运动信息候选列表的可用运动信息:
方式21、基于当前块的预设位置,从当前块的参考帧中选取与该预设位置对应的时域相邻块,并将该时域相邻块的运动信息确定为可用运动信息。比如说,若当前块所在当前帧为B帧,则根据co-located参考帧的co-located块导出单向运动信息或双向运动信息,将该单向运动信息或双向运动信息作为可用运动信息。若当前帧为P帧,则根据co-located参考帧的co-located块导出单向运动信息,将该单向运动信息作为可用运动信息。
示例性的,co-located块是co-located参考帧中与当前块的预设位置对应的时域相邻块,当前块的预设位置可以根据经验配置,对此当前块的预设位置不做限制,如当 前块的左上角像素位置,当前块的右上角像素位置,当前块的左下角像素位置,当前块的右下角像素位置,当前块的中心像素位置等。关于co-located参考帧可以是预设的参考帧,如将当前块的List0中的第一个参考帧作为co-located参考帧,也可以是导出的参考帧,如将当前块的List0中与当前帧最近的参考帧作为co-located参考帧,还可以是从码流中解析出的参考帧,如针对解码端来说,可以从码流中解析出co-located参考帧的信息,继而确定co-located参考帧。
若当前块所在当前帧为B帧,则根据co-located块的运动信息导出单向运动信息或双向运动信息,若当前块所在当前帧为P帧,则根据co-located块的运动信息导出单向运动信息。
方式22、基于当前块的权重预测角度和权重预测位置,确定当前块的目标位置;从当前块的参考帧中选取与该目标位置对应的时域相邻块,并将该时域相邻块的运动信息确定为可用运动信息。比如说,基于当前块的权重预测角度和权重预测位置,可以确定当前块的目标位置,如基于当前块的权重预测角度和权重预测位置的索引值,确定当前块的目标位置。然后,可以基于当前块的目标位置确定co-located参考帧的co-located块,若当前块所在当前帧为B帧,则根据co-located参考帧的co-located块导出单向运动信息或双向运动信息,将该单向运动信息或双向运动信息作为可用运动信息。若当前帧为P帧,则根据co-located参考帧的co-located块导出单向运动信息,将该单向运动信息作为可用运动信息。
co-located块是co-located参考帧中与当前块的目标位置对应的时域相邻块,目标位置可以是当前块的左上角像素位置,右上角像素位置,左下角像素位置,右下角像素位置等。
例如,基于当前块的权重预测角度和当前块的权重预测位置,可以得到当前块的权重矩阵,参见图9B所示,由于右上侧权重部分的占比较小(黑色部分),空域运动信息与黑色部分的相关性较低,因此,时域运动信息的选择可以偏向于右上侧权重部分,以此来提供合适的候选运动信息,在此基础上,co-located块可以为当前块的右上角像素位置(即占比较小的权重部分)对应的时域相邻块,即,当前块的目标位置为当前块的右上角像素位置。
同理,参见图9C所示,基于当前块的权重预测角度和当前块的权重预测位置,可以确定当前块的目标位置为当前块的右下角像素位置。参见图9D所示,基于当前块的权重预测角度和当前块的权重预测位置,可以确定当前块的目标位置为当前块的左下角像素位置。
在得到可用运动信息后,针对每个可用运动信息(如空域运动信息,时域运动信息等),可以采用如下方式之一或组合将可用运动信息加入到运动信息候选列表:
方式31、不进行查重处理。比如说,若可用运动信息为单向运动信息,则将该单向运动信息加入到运动信息候选列表。若可用运动信息为双向运动信息,则将该双向运动信息裁剪为第一单向运动信息和第二单向运动信息,并将裁剪后的某个单向运动信息加入到运动信息候选列表,例如,可以将该第一单向运动信息加入到运动信息候选列表。
方式32、进行半查重处理,即不查重双向运动信息的另一半。比如说,若可用运动信息为单向运动信息,且单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将该单向运动信息加入到运动信息候选列表。若可用运动信息为双向运动信息,则将该双向运动信息裁剪为第一单向运动信息和第二单向运动信息,并将某个单向运动信息加入到运动信息候选列表,例如,若双向运动信息中的第一单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将该第一单向运动信息加入到运动信息候选列表。
方式33、进行全查重处理。比如说,若可用运动信息为单向运动信息,且单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将单向运动信息加入到运动信息候选列表。若可用运动信息为双向运动信息,则将该双向运动信息裁剪为第一单向运动信息和第二单向运动信息,并将某个单向运动信息加入到运动信息候选列表,例如,若双向运动信息中的第一单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将第一单向运动信息加入到运动信息候选列表;若第一单向运动信息与运动信息候选列表中已存在的候选运动信息重复,且双向运动信息中的第二单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将第二单向运动信息加入到运动信息候选列表。
在方式31,方式32和方式33中,第一单向运动信息可以是指向第一参考帧列表中参考帧的单向运动信息;第二单向运动信息可以是指向第二参考帧列表中参考帧的单向运动信息。
示例性的,若运动信息候选列表中已存在的候选运动信息的总数量(即在将可用运动信息添加到运动信息候选列表时,该运动信息候选列表中当前已存在的候选运动信息的总数量)为偶数,则第一参考帧列表为List0,第二参考帧列表为List1;若运动信息候选列表中已存在的候选运动信息的总数量为奇数,则第一参考帧列表为List1,第二参考帧列表为List0。或者,若运动信息候选列表中已存在的候选运动信息的总数量为奇数,则第一参考帧列表为List0,第二参考帧列表为List1;若运动信息候选列表中已存在的候选运动信息的总数量为偶数,则第一参考帧列表为List1,第二参考帧列表为List0。或者,第一参考帧列表为List0,第二参考帧列表为List1。或者,第一参考帧列表为List1,第二参考帧列表为List0。
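以方式32(半查重)为例,可用运动信息的加入过程可示意如下(草图;运动信息结构为示意,第一参考帧列表按“已存在候选数量为偶数时取List0、奇数时取List1”的第一种约定取例,列表长度X取5仅为示例):

```python
def add_available_mode32(cand_list, info, X=5):
    # info以dict示意:单向为 {'dir': 'uni', 'mv': mv},
    # 双向为 {'dir': 'bi', 'L0': mv0, 'L1': mv1}
    if len(cand_list) >= X:
        return cand_list
    if info['dir'] == 'uni':
        uni = info['mv']
    else:
        # 双向运动信息裁剪为两个单向运动信息,仅尝试加入第一单向运动信息;
        # 第一参考帧列表由已存在候选运动信息总数量的奇偶性决定
        first_list = 'L0' if len(cand_list) % 2 == 0 else 'L1'
        uni = info[first_list]
    if uni not in cand_list:  # 查重:与已存在候选运动信息重复则不加入
        cand_list.append(uni)
    return cand_list
```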
在上述实施例中,涉及对单向运动信息(如单向运动信息、第一单向运动信息、第二单向运动信息等)与运动信息候选列表中已存在的候选运动信息进行查重操作,查重操作的结果可以为重复或者不重复。还涉及对两个空域相邻块的运动信息进行查重,查重操作的结果可以为重复或者不重复。在后续实施例19中,还涉及对双向运动信息与运动信息候选列表中已存在的候选运动信息进行查重操作,查重操作的结果可以为重复或者不重复。
为了方便描述,将进行查重操作的两个运动信息称为运动信息1和运动信息2,当运动信息1为待查重的单向运动信息时,运动信息2为运动信息候选列表中已存在的候选运动信息;当运动信息1为待查重的双向运动信息时,运动信息2为运动信息候选列表中已存在的候选运动信息;当运动信息1为需要确定可用性的空域相邻块的运动信息时,运动信息2为已确定可用性的空域相邻块的运动信息,参见方式12,如F的运动信息可用时,当需要确定G的运动信息是否可用时,运动信息1为G的运动信息,运动信息2为F的运动信息。
针对该查重操作,可以采用如下方式实现:
方式41、基于List+refIdx+MV_x+MV_y进行查重操作。比如说,先查指向的List列表是否相同(即是指向List0,还是指向List1,还是同时指向List0和List1),再查refidx是否相同,再查MV是否相同(即水平分量是否相同,以及垂直分量是否相同)。
示例性的,先查询运动信息1指向的参考帧列表与运动信息2指向的参考帧列表是否相同,如果不同,则运动信息1与运动信息2不重复。如果相同,则查询运动信息1的refIdx与运动信息2的refIdx是否相同,若不同,则运动信息1与运动信息2不重复。若相同,则查询运动信息1的运动矢量与运动信息2的运动矢量是否相同,若否,则运动信息1与运动信息2不重复,若是,则确定运动信息1与运动信息2重复。
示例性的,若运动信息1指向的参考帧列表与运动信息2指向的参考帧列表均是List0,则二者相同,或者,若运动信息1指向的参考帧列表与运动信息2指向的参考帧列表均是List1,则二者相同,或者,若运动信息1指向的参考帧列表为List0,运动信息2指向的参考帧列表为List1,则二者不同,或者,若运动信息1指向的参考帧列表为List1,运动信息2指向的参考帧列表为List0,则二者不同,或者,若运动信息1指向的参考帧列表为List0,运动信息2指向的参考帧列表为List0和List1,则二者不同,或者,若运动信息1指向的参考帧列表为List1,运动信息2指向的参考帧列表为List0和List1,则二者不同,或者,若运动信息1指向的参考帧列表List0中参考帧索引为refIdx0的参考帧,并指向List1中参考帧索引为refIdx1的参考帧,运动信息2指向List0中参考帧索引为refIdx2的参考帧,并指向List1中参考帧索引为refIdx3的参考帧,refIdx0不等于refIdx2,或refIdx1不等于refIdx3,则二者不同。当然,上述只是对参考帧列表进行比较的几个示例,对此不做限制。
示例性的,若运动信息1的运动矢量的水平分量与运动信息2的运动矢量的水平分量相同,且运动信息1的运动矢量的垂直分量与运动信息2的运动矢量的垂直分量相同,则说明运动信息1的运动矢量与运动信息2的运动矢量相同。
方式42、基于POC+MV_x+MV_y进行查重操作。
比如说,先查指向的参考帧的POC是否相同(即若指向List0中参考帧索引为refIdx0的参考帧的POC等于指向List1中参考帧索引为refIdx1的参考帧的POC,也判定为相同,针对单向运动信息与单向运动信息的查重,以及,双向运动信息与双向运动信息的查重都适用);再查MV是否相同(即水平分量是否相同,以及垂直分量是否相同)。
示例性的,先查询运动信息1指向的参考帧的POC与运动信息2指向的参考帧的POC是否相同,如果不同,则运动信息1与运动信息2不重复。如果相同,则继续查询运动信息1的运动矢量与运动信息2的运动矢量是否相同,若不同,则运动信息1 与运动信息2不重复,若相同,则确定运动信息1与运动信息2重复。
示例性的,指向的参考帧的POC相同,可以包括:运动信息1指向List0中参考帧索引为refIdx0的参考帧,且运动信息2指向List0中参考帧索引为refIdx0的参考帧,且运动信息1指向的参考帧的POC与运动信息2指向的参考帧的POC相同。或者,运动信息1指向List1中参考帧索引为refIdx1的参考帧,且运动信息2指向List1中参考帧索引为refIdx1的参考帧,且运动信息1指向的参考帧的POC与运动信息2指向的参考帧的POC相同。或者,运动信息1指向List0中参考帧索引为refIdx0的参考帧,且运动信息2指向List1中参考帧索引为refIdx1的参考帧,且运动信息1指向的参考帧的POC与运动信息2指向的参考帧的POC相同。或者,运动信息1指向List1中参考帧索引为refIdx1的参考帧,且运动信息2指向List0中参考帧索引为refIdx0的参考帧,且运动信息1指向的参考帧的POC与运动信息2指向的参考帧的POC相同。或者,运动信息1指向List0中参考帧索引为refIdx0的参考帧,并指向List1中参考帧索引为refIdx1的参考帧,运动信息2指向List0中参考帧索引为refIdx2的参考帧,并指向List1中参考帧索引为refIdx3的参考帧,运动信息1指向的List0中参考帧索引为refIdx0的参考帧的POC与运动信息2指向的List1中参考帧索引为refIdx3的参考帧的POC相同,且运动信息1指向的List1中参考帧为refIdx1的参考帧的POC与运动信息2指向的List0中参考帧索引为refIdx2的参考帧的POC相同。当然,上述只是参考帧的POC相同的几个示例,对此不做限制。
示例性的,上述过程是对参考帧的POC进行查重,需要注意的是,参考帧的POC只是查重操作的一个示例,除了POC以外,还可以采用其它能够确认参考帧是否相同的属性,如参考帧的DOC等,对此不作限制。综上所述,可以将上述POC替换为DOC。
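方式41与方式42的查重判断可以写成如下草图(运动信息字段为示意;方式42中不同参考帧列表指向同一POC参考帧的情形也判为相同,函数名为示例命名):

```python
def dup_mode41(mi1, mi2):
    # 方式41:先比指向的List列表,再比refIdx,再比MV的水平与垂直分量
    return (mi1['list'] == mi2['list'] and mi1['refIdx'] == mi2['refIdx']
            and mi1['mv'] == mi2['mv'])

def dup_mode42(mi1, mi2):
    # 方式42:只比指向参考帧的POC与MV,不区分List0/List1
    return mi1['poc'] == mi2['poc'] and mi1['mv'] == mi2['mv']
```

例如,两条单向运动信息分别指向List0与List1中POC相同的参考帧且MV相同时,方式41判为不重复,而方式42判为重复。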
综上所述,可以获取运动信息候选列表,该运动信息候选列表也可以称为AwpUniArray,以下结合几个具体应用场景,对上述运动信息候选列表的获取过程进行说明。在后续应用场景中,假设运动信息候选列表的长度为X,即需要添加X个可用运动信息,在将可用运动信息添加到运动信息候选列表后,将已添加的运动信息称为候选运动信息。
应用场景1:将空域运动信息加入到运动信息候选列表,若列表长度等于X,结束添加过程,若列表长度小于X,将预设运动信息加入到运动信息候选列表,直至列表长度为X。
示例性的,先确定待加入到运动信息候选列表的可用运动信息(即空域运动信息)。比如说,采用方式11确定可用运动信息,或,采用方式12确定可用运动信息,或,采用方式13确定可用运动信息。在采用方式12或方式13确定可用运动信息时,涉及两个空域相邻块的运动信息的查重操作,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
示例性的,参见图9A所示,按照F、G、C、A、B、D中可用运动信息的顺序(该顺序可变),将可用运动信息加入到运动信息候选列表。针对每个可用运动信息,可以采用方式31将该可用运动信息加入到运动信息候选列表,也可以采用方式32将该可用运动信息加入到运动信息候选列表,还可以采用方式33将该可用运动信息加入到 运动信息候选列表。
示例性的,在采用方式32或者方式33将该可用运动信息加入到运动信息候选列表时,还可以对单向运动信息与运动信息候选列表中已存在的候选运动信息进行查重操作,例如,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
应用场景2:将时域运动信息加入到运动信息候选列表,若列表长度等于X,结束添加过程,若列表长度小于X,将预设运动信息加入到运动信息候选列表,直至列表长度为X。
示例性的,先确定待加入到运动信息候选列表的可用运动信息(即时域运动信息)。比如说,采用方式21确定可用运动信息,或,采用方式22确定可用运动信息。
示例性的,针对每个可用运动信息,可以采用方式31将该可用运动信息加入到运动信息候选列表,也可以采用方式32将该可用运动信息加入到运动信息候选列表,还可以采用方式33将该可用运动信息加入到运动信息候选列表。
示例性的,在采用方式32或者方式33将该可用运动信息加入到运动信息候选列表时,还可以对单向运动信息与运动信息候选列表中已存在的候选运动信息进行查重操作,例如,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
应用场景3:将空域运动信息与时域运动信息一起加入运动信息候选列表(空域运动信息可以位于时域运动信息之前,时域运动信息也可以位于空域运动信息之前,为了方便描述,后续以空域运动信息位于时域运动信息之前为例),若列表长度等于X,结束添加过程,若列表长度小于X,将预设运动信息加入到运动信息候选列表,直至列表长度为X。
示例性的,确定待加入到运动信息候选列表的可用运动信息(即空域运动信息和时域运动信息)。比如说,采用方式11和方式21确定可用运动信息,或,采用方式11和方式22确定可用运动信息,或,采用方式12和方式21确定可用运动信息,或,采用方式12和方式22确定可用运动信息,或,采用方式13和方式21确定可用运动信息,或,采用方式13和方式22确定可用运动信息。当然,上述只是几个示例,对此不做限制。
在采用方式12或方式13确定可用运动信息时,涉及两个空域相邻块的运动信息的查重操作,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
示例性的,按照F、G、C、A、B、D中可用运动信息的顺序,及时域运动信息(即可用运动信息)的顺序,将可用运动信息加入到运动信息候选列表。针对每个可用运动信息,采用方式31将该可用运动信息加入到运动信息候选列表,或采用方式32将该可用运动信息加入到运动信息候选列表,或采用方式33将该可用运动信息加入到运动信息候选列表。
示例性的,在采用方式32或者方式33将该可用运动信息加入到运动信息候选列表时,还可以对单向运动信息与运动信息候选列表中已存在的候选运动信息进行查重操作,例如,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
综上所述,基于空域运动信息和时域运动信息的顺序,可以将空域运动信息与时域运动信息一起加入运动信息候选列表,一直到列表长度等于X,或者,遍历结束,列表长度小于X,并将预设运动信息加入到运动信息候选列表,直至列表长度为X。
应用场景4:将空域运动信息加入运动信息候选列表后,预留至少Y个位置给时域运动信息,时域运动信息为双向运动信息或单向运动信息。示例性的,将空域运动信息加入运动信息候选列表,若列表长度等于X-Y,则结束空域运动信息的添加过程,或直至空域运动信息遍历结束,列表长度小于X-Y,结束空域运动信息的添加过程。然后,将时域运动信息加入运动信息候选列表,若列表长度等于X,则结束时域运动信息的添加过程,或直至时域运动信息遍历结束,列表长度小于X,结束时域运动信息的添加过程,将预设运动信息加入到运动信息候选列表,直至列表长度为X。
示例性的,确定待加入到运动信息候选列表的可用运动信息(即空域运动信息和时域运动信息)。比如说,采用方式11和方式21确定可用运动信息,或,采用方式11和方式22确定可用运动信息,或,采用方式12和方式21确定可用运动信息,或,采用方式12和方式22确定可用运动信息,或,采用方式13和方式21确定可用运动信息,或,采用方式13和方式22确定可用运动信息。当然,上述只是几个示例,对此不做限制。
在采用方式12或方式13确定可用运动信息时,涉及两个空域相邻块的运动信息的查重操作,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
示例性的,按照F、G、C、A、B、D中可用运动信息(即空域运动信息)的顺序,将可用运动信息加入到运动信息候选列表,一直到列表长度等于X-Y,或空域运动信息遍历结束。然后,将时域运动信息(即可用运动信息)加入到运动信息候选列表,一直到列表长度等于X,或时域运动信息遍历结束。在可用运动信息的添加过程中,针对每个可用运动信息,采用方式31将该可用运动信息加入到运动信息候选列表,或采用方式32将该可用运动信息加入到运动信息候选列表,或采用方式33将该可用运动信息加入到运动信息候选列表。
示例性的,在采用方式32或者方式33将该可用运动信息加入到运动信息候选列表时,还可以对单向运动信息与运动信息候选列表中已存在的候选运动信息进行查重操作,例如,可以采用方式41进行查重操作,或者,采用方式42进行查重操作。
在上述各应用场景中,X可以取任意的正整数,例如,X的取值可以为4,5等。
在上述各应用场景中,均可能将预设运动信息(也可以称为缺省运动信息)加入到运动信息候选列表,关于预设运动信息的实现方式,可以包括但不限于如下方式:
方式51、预设运动信息为零运动信息,如指向ListX且refIdx小于ListX中参考帧数量的零运动矢量。比如说,可以进行补0操作,即运动信息可以为(0,0,ref_idx3,ListZ)。
方式52、基于运动信息候选列表中已存在的候选运动信息所导出的缺省运动信息。
比如说,记需要放大或缩小的x轴方向或y轴方向的绝对值为temp_val,结果为result:
1、如果temp_val<8,则result=8,或者,result=-8。
示例性的,若候选运动信息的x轴方向的temp_val小于8,则缺省运动信息的x轴方向的运动矢量为8,或者,缺省运动信息的x轴方向的运动矢量为-8。
若候选运动信息的y轴方向的temp_val小于8,则缺省运动信息的y轴方向的运动矢量为8,或者,缺省运动信息的y轴方向的运动矢量为-8。
2、假设不满足1,如果temp_val<=64,则result=(temp_val*5+2)>>2,或者,result=(temp_val*3+2)>>2,正负号与候选运动信息的运动矢量的正负号相同。
示例性的,若候选运动信息的x轴方向的运动矢量为正,且x轴方向的运动矢量的绝对值(即temp_val)小于或等于64,则缺省运动信息的x轴方向的运动矢量为(temp_val*5+2)>>2,若候选运动信息的x轴方向的运动矢量为负,且x轴方向的运动矢量的绝对值(即temp_val)小于或等于64,则缺省运动信息的x轴方向的运动矢量为(temp_val*3+2)>>2。
针对缺省运动信息的y轴方向的运动矢量,与x轴方向的运动矢量类似。
3、假设不满足1和2,如果temp_val<=128,则result=(temp_val*9+4)>>3,或者,result=(temp_val*7+4)>>3,正负号与候选运动信息的运动矢量的正负号相同。
4、假设不满足1,不满足2,且不满足3,则result=(temp_val*33+16)>>5,或者,result=(temp_val*31+16)>>5,正负号与候选运动信息的运动矢量的正负号相同。
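上述方式52中对运动矢量分量的放大/缩小规则,可汇总为如下草图(enlarge为True表示放大、False表示缩小,符号处理从略;各系数均为本文给出的取值,函数名为示例命名):

```python
def scale_mv_component(temp_val, enlarge):
    # temp_val为x或y方向运动矢量的绝对值,返回缺省运动信息对应分量的绝对值
    if temp_val < 8:
        return 8
    if temp_val <= 64:
        return (temp_val * 5 + 2) >> 2 if enlarge else (temp_val * 3 + 2) >> 2
    if temp_val <= 128:
        return (temp_val * 9 + 4) >> 3 if enlarge else (temp_val * 7 + 4) >> 3
    return (temp_val * 33 + 16) >> 5 if enlarge else (temp_val * 31 + 16) >> 5
```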
方式53、基于运动信息候选列表中已存在的候选运动信息所导出的缺省运动信息。
比如说,以运动信息候选列表中的任意一个有效的候选运动信息(x,y,ref_idx,ListX)为基础,ref_idx和ListX分别为参考帧索引和参考帧列表,且二者统称为参考帧信息,可添加以下至少一个运动信息:(x+a,y+b,ref_idx,ListX),a,b可以取任意整数;(k1*x,k1*y,ref_idx_new1,ListX),k1为任意非0正整数,即对运动矢量进行伸缩操作;(k2*x,k2*y,ref_idx_new2,ListY),k2为任意非0正整数,即对运动矢量进行伸缩操作。
方式54、预设运动信息为运动信息候选列表中已存在的候选运动信息,即进行扩充(padding)操作,可采用运动信息候选列表中已存在的候选运动信息进行重复填充操作。例如,可采用运动信息候选列表中已存在的最后一个单向运动信息进行重复填充。
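方式54的重复填充操作可示意为如下草图(以列表中已存在的最后一个候选运动信息填充至列表长度X,函数名为示例命名):

```python
def pad_candidate_list(cand_list, X):
    # 列表非空且长度不足X时,用最后一个已存在的候选运动信息重复填充
    while cand_list and len(cand_list) < X:
        cand_list.append(cand_list[-1])
    return cand_list
```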
综上所述,在列表长度小于X,需要将预设运动信息加入到运动信息候选列表时,可以将方式51的预设运动信息加入到运动信息候选列表,直至列表长度为X。或者,可以将方式52的预设运动信息加入到运动信息候选列表,直至列表长度为X。或者,可以将方式53的预设运动信息加入到运动信息候选列表,直至列表长度为X。或者,可以将方式51和方式52的预设运动信息加入到运动信息候选列表,直至列表长度为X。或者,可以将方式51和方式53的预设运动信息加入到运动信息候选列表,直至 列表长度为X。或者,可以将方式52和方式53的预设运动信息加入到运动信息候选列表,直至列表长度为X。或者,可以将方式51,方式52和方式53的预设运动信息加入到运动信息候选列表,直至列表长度为X。
示例性的,在采用上述方式,将预设运动信息加入到运动信息候选列表后,若列表长度仍然小于X,则可以将方式54的预设运动信息加入到运动信息候选列表,即,采用运动信息候选列表中已存在的候选运动信息进行重复填充操作,一直到列表长度为X。
在另一种可能的实施方式中,在列表长度小于X,需要将预设运动信息加入到运动信息候选列表时,可以直接将方式54的预设运动信息加入到运动信息候选列表,即,采用运动信息候选列表中已存在的候选运动信息进行重复填充操作,一直到列表长度为X。
在上述实施例中,预设运动信息可以为单向运动信息,也可以为双向运动信息。若预设运动信息为单向运动信息,在将该预设运动信息添加到运动信息候选列表时,可以进行查重操作,也可以不进行查重操作。若不进行查重操作,则直接将该预设运动信息添加到运动信息候选列表。若进行查重操作,则可以采用方式41进行查重操作,或者,采用方式42进行查重操作。若查重操作的结果为不重复,则将该预设运动信息添加到运动信息候选列表。若查重操作的结果为重复,则不将该预设运动信息添加到运动信息候选列表。
实施例15:在实施例1-实施例3中,编码端/解码端在得到运动信息候选列表(参见实施例13和14)后,可以基于该运动信息候选列表获取当前块的第一目标运动信息和第二目标运动信息,关于第一目标运动信息和第二目标运动信息的获取方式,可以采用如下方式实现:
方式一、从运动信息候选列表中选择一个候选运动信息作为当前块的第一目标运动信息,从运动信息候选列表中选择另一个候选运动信息作为当前块的第二目标运动信息。
示例性的,针对编码端和解码端来说,均可以获取运动信息候选列表,且编码端的运动信息候选列表与解码端的运动信息候选列表相同,对此运动信息候选列表不做限制。
针对编码端来说,可以基于率失真原则,从运动信息候选列表中选择一个候选运动信息作为当前块的第一目标运动信息,从运动信息候选列表中选择另一个候选运动信息作为当前块的第二目标运动信息,第一目标运动信息与第二目标运动信息不同,对此不做限制。
在一种可能的实施方式中,编码端在向解码端发送编码比特流时,该编码比特流可以携带指示信息a和指示信息b,指示信息a用于指示当前块的第一目标运动信息的索引值1,索引值1表示第一目标运动信息是运动信息候选列表中的第几个候选运动信息。指示信息b用于指示当前块的第二目标运动信息的索引值2,索引值2表示第二目标运动信息是运动信息候选列表中的第几个候选运动信息。示例性的,索引值1和索 引值2可以不同。
解码端在接收到编码比特流后,从编码比特流中解析出指示信息a和指示信息b。基于指示信息a,解码端从运动信息候选列表中选择索引值1对应的候选运动信息,该候选运动信息作为当前块的第一目标运动信息。基于指示信息b,解码端从运动信息候选列表中选择索引值2对应的候选运动信息,该候选运动信息作为当前块的第二目标运动信息。
在另一种可能的实施方式中,编码端在向解码端发送编码比特流时,该编码比特流可以携带指示信息a和指示信息c,该指示信息a可以用于指示当前块的第一目标运动信息的索引值1,索引值1表示第一目标运动信息是运动信息候选列表中的第几个候选运动信息。该指示信息c可以用于指示索引值2与索引值1的差值,索引值2表示第二目标运动信息是运动信息候选列表中的第几个候选运动信息。示例性的,索引值1和索引值2可以不同。
解码端在接收到编码比特流后,可以从编码比特流中解析出指示信息a和指示信息c。基于指示信息a,解码端可以从运动信息候选列表中选择索引值1对应的候选运动信息,该候选运动信息作为当前块的第一目标运动信息。基于指示信息c,解码端先根据索引值2与索引值1的差值,以及索引值1确定索引值2,然后,解码端可以从运动信息候选列表中选择索引值2对应的候选运动信息,该候选运动信息作为当前块的第二目标运动信息。
示例性的,第一目标运动信息的指示信息与第二目标运动信息的指示信息,可以进行互换,编码端与解码端一致即可,此处是指示信息的互换,不影响解析过程,即不存在解析依赖。第一目标运动信息的指示信息与第二目标运动信息的指示信息不能相等,假设编码两个索引值,索引值a的取值为1,索引值b的取值为3,先编码索引值a时,索引值b可以编码2(即,3-1),在先编码索引值b时,则索引值b需要编码3。综上所述,先编码索引值小的指示信息,可以降低较大索引值的编码代价。比如说,先编码第一目标运动信息的指示信息,如索引值a,后编码第二目标运动信息的指示信息,如索引值b。也可以先编码第二目标运动信息的指示信息,如索引值b,后编码第一目标运动信息的指示信息,如索引值a。例如,假设索引值a的取值为1,索引值b的取值为3,则先编码索引值a,后编码索引值b。又例如,假设索引值b的取值为1,索引值a的取值为3,则先编码索引值b,后编码索引值a。
方式二、从运动信息候选列表中选择候选运动信息作为当前块的第一原始运动信息,并从运动信息候选列表中选择候选运动信息作为当前块的第二原始运动信息,示例性的,该第一原始运动信息与该第二原始运动信息可以不同,即从运动信息候选列表中选择两个不同的候选运动信息作为第一原始运动信息和第二原始运动信息;或者,该第一原始运动信息与该第二原始运动信息也可以相同,即从运动信息候选列表中选择相同的候选运动信息作为第一原始运动信息和第二原始运动信息。然后,根据该第一原始运动信息确定当前块的第一目标运动信息,并根据该第二原始运动信息确定当前块的第二目标运动信息。示例性的,该第一目标运动信息与该第二目标运动信息可以不同。
关于如何根据原始运动信息确定目标运动信息,本实施例中给出一种单向运动 信息叠加运动矢量差(Angular Weighted Prediction with Motion Vector Refinement,AWP_MVR)的方案,比如说,第一原始运动信息包括第一原始运动矢量,第一目标运动信息包括第一目标运动矢量,第二原始运动信息包括第二原始运动矢量,第二目标运动信息包括第二目标运动矢量,在此基础上,可以获取与第一原始运动矢量对应的第一运动矢量差(即MVD);根据第一运动矢量差和第一原始运动矢量确定第一目标运动矢量(即第一运动矢量差与第一原始运动矢量的和作为第一目标运动矢量)。可以获取与第二原始运动矢量对应的第二运动矢量差;根据第二运动矢量差和第二原始运动矢量确定第二目标运动矢量(即第二运动矢量差与第二原始运动矢量的和作为第二目标运动矢量)。
示例性的,在确定第一目标运动矢量时,也可以不叠加第一运动矢量差,即将第一原始运动矢量确定为第一目标运动矢量;但在确定第二目标运动矢量时,可以叠加第二运动矢量差,即根据第二运动矢量差和第二原始运动矢量确定第二目标运动矢量。或者,
在确定第二目标运动矢量时,也可以不叠加第二运动矢量差,即将第二原始运动矢量确定为第二目标运动矢量;但在确定第一目标运动矢量时,可以叠加第一运动矢量差,即根据第一运动矢量差和第一原始运动矢量确定第一目标运动矢量。或者,
在确定第一目标运动矢量时,叠加第一运动矢量差,即根据第一运动矢量差和第一原始运动矢量确定第一目标运动矢量;同时,在确定第二目标运动矢量时,叠加第二运动矢量差,即根据第二运动矢量差和第二原始运动矢量确定第二目标运动矢量。
示例性的,可以获取第一运动矢量差的方向信息和幅值信息,并根据第一运动矢量差的方向信息和幅值信息确定第一运动矢量差。以及,可以获取第二运动矢量差的方向信息和幅值信息,并根据第二运动矢量差的方向信息和幅值信息确定第二运动矢量差。
示例性的,针对解码端来说,可以采用如下方式获取第一运动矢量差的方向信息:解码端从当前块的编码比特流中解析出第一运动矢量差的方向信息;或者,解码端根据当前块的权重预测角度推导第一运动矢量差的方向信息。针对解码端来说,可以采用如下方式获取第二运动矢量差的方向信息:解码端从当前块的编码比特流中解析出第二运动矢量差的方向信息;或者,解码端根据当前块的权重预测角度推导第二运动矢量差的方向信息。
示例性的,针对解码端来说,可以采用如下方式获取第一运动矢量差的幅值信息:从当前块的编码比特流中解析出第一运动矢量差的幅值信息。可以采用如下方式获取第二运动矢量差的幅值信息:从当前块的编码比特流中解析出第二运动矢量差的幅值信息。
在一种可能的实施方式中,编码端和解码端可以约定运动矢量差的方向信息和幅值信息,若方向信息表示方向为向右,幅值信息表示幅值为A,则运动矢量差为(A,0);若方向信息表示方向为向下,幅值信息表示幅值为A,则运动矢量差为(0,-A);若方向信息表示方向为向左,幅值信息表示幅值为A,则运动矢量差为(-A,0);若方向信息表示方向为向上,幅值信息表示幅值为A,则运动矢量差为(0,A);若方向 信息表示方向为向右上,幅值信息表示幅值为A,则运动矢量差为(A,A);若方向信息表示方向为向左上,幅值信息表示幅值为A,则运动矢量差为(-A,A);若方向信息表示方向为向左下,幅值信息表示幅值为A,则运动矢量差为(-A,-A);若方向信息表示方向为向右下,幅值信息表示幅值为A,则运动矢量差为(A,-A)。当然,上述只是几个示例,对此方向信息和幅值信息不做限制。
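上述方向信息与幅值信息到运动矢量差的映射,可以用如下示意代码概括(映射关系取自上文约定,字典与函数命名为示意,并非规范实现):

```python
# 示意代码:由方向信息与幅值 A 推导运动矢量差 (x, y),映射关系与上文约定一致。

DIRECTION_TO_MVD = {
    '右':   lambda A: (A, 0),
    '下':   lambda A: (0, -A),
    '左':   lambda A: (-A, 0),
    '上':   lambda A: (0, A),
    '右上': lambda A: (A, A),
    '左上': lambda A: (-A, A),
    '左下': lambda A: (-A, -A),
    '右下': lambda A: (A, -A),
}

def mvd_from_direction_and_amplitude(direction, A):
    """根据方向信息与幅值信息确定运动矢量差。"""
    return DIRECTION_TO_MVD[direction](A)
```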
示例性的,运动矢量差可支持上述方向信息的部分或全部,运动矢量差支持的幅值A的取值范围,根据经验进行配置,至少为一个取值,对此不做限制。如运动矢量差支持上,下,左,右等方向,运动矢量差支持如下5类步长配置:1/4-pel,1/2-pel,1-pel,2-pel,4-pel,即幅值A的取值为1,2,4,8,16。综上所述,在方向为向上时,运动矢量差可以为(0,1),(0,2),(0,4),(0,8),(0,16)。在方向为向下时,运动矢量差可以为(0,-1),(0,-2),(0,-4),(0,-8),(0,-16)。在方向为向左时,运动矢量差可以为(-1,0),(-2,0),(-4,0),(-8,0),(-16,0)。在方向为向右时,运动矢量差可以为(1,0),(2,0),(4,0),(8,0),(16,0)。
又例如,运动矢量差支持上,下,左,右等方向,运动矢量差支持如下6类步长配置:1/4-pel,1/2-pel,1-pel,2-pel,3-pel,4-pel,即幅值A的取值可以为1,2,4,8,12,16。
又例如,运动矢量差支持上,下,左,右,左上,左下,右上,右下等八个方向,运动矢量差支持如下3类步长配置:1/4-pel,1/2-pel,1-pel,即幅值A的取值可以为1,2,4。
又例如,运动矢量差支持上,下,左,右等四个方向,运动矢量差支持如下4类步长配置:1/4-pel,1/2-pel,1-pel,2-pel,即幅值A的取值可以为1,2,4,8。
当然,上述只是给出了几个示例,对此不做限制。例如,运动矢量差支持的方向可以任意选择,如可以支持上,下,左,右,左上,左下六个方向,或者,可以支持上,下两个方向。又例如,运动矢量差支持的步长配置可变,可以进行灵活配置。又例如,可根据量化参数QP等编码参数对步长配置进行自适应配置,如对较大QP采用1-pel,2-pel,4-pel,8-pel,对较小QP采用1/4-pel,1/2-pel,1-pel,2-pel。又例如,可以在序列级,图像级,帧级,Slice级,tile级,patch级,CTU级等配置合适的步长配置,使得解码端可以根据序列级,图像级,帧级,Slice级,tile级,patch级,CTU级解析到的步长配置进行解码操作。
为了方便描述,在后续实施例中,假设运动矢量差支持上,下等方向,支持1-pel,2-pel等步长配置,按照1/4-pel精度描述,则运动矢量差可以为(0,4),(0,8),(0,-4),(0,-8),即,(0,1<<2),(0,1<<3),(0,-1<<2),(0,-1<<3)。
针对编码端来说,在获取到运动信息候选列表后,依次遍历运动信息候选列表中的每个候选运动信息组合,该候选运动信息组合包括两个候选运动信息,一个候选运动信息作为第一原始运动信息,另一个候选运动信息作为第二原始运动信息。需要注意的是,第一原始运动信息与第二原始运动信息可以相同(即从运动信息候选列表中选出的两个候选运动信息相同),也可以不同。若第一原始运动信息与第二原始运动信息相同,则可以通过叠加不同的运动矢量差来保证第一目标运动信息和第二目标运动信息不同。针对每个候选运动信息组合,依次遍历运动矢量差组合,该运动矢量差组合包括第一运动矢量差和第二运动矢量差,第一运动矢量差和第二运动矢量差可以相同或不同。比如说,存在两个运动矢量差,分别为运动矢量差1和运动矢量差2,运动矢量差组合1为运动矢量差1和运动矢量差1,运动矢量差组合2为运动矢量差1(即第一运动矢量差)和运动矢量差2,运动矢量差组合3为运动矢量差2(即第一运动矢量差)和运动矢量差1,运动矢量差组合4为运动矢量差2和运动矢量差2。
示例性的,针对当前遍历的候选运动信息组合和运动矢量差组合,若第一原始运动信息与第二原始运动信息不同,则第一运动矢量差和第二运动矢量差可以相同或不同。若第一原始运动信息与第二原始运动信息相同,则第一运动矢量差和第二运动矢量差可以不同。
针对当前遍历的候选运动信息组合和运动矢量差组合,将第一原始运动信息的运动矢量与第一运动矢量差的和作为第一目标运动矢量,将第二原始运动信息的运动矢量与第二运动矢量差的和作为第二目标运动矢量,基于第一目标运动矢量和第二目标运动矢量确定率失真代价值,对此确定方式不做限制。针对每个候选运动信息组合和每个运动矢量差组合进行上述处理,均得到率失真代价值。然后,从所有率失真代价值中选择最小率失真代价值,在当前块的编码比特流中编码最小率失真代价值对应的候选运动信息组合(第一原始运动信息和第二原始运动信息)的信息和运动矢量差组合(第一运动矢量差和第二运动矢量差)的信息。
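上述编码端的遍历搜索过程,可以用如下示意代码概括(rd_cost 抽象表示率失真代价的计算,候选运动信息与运动矢量差集合均为示意输入,函数命名为示意,并非规范实现):

```python
# 示意代码:遍历候选运动信息组合与运动矢量差组合,选择率失真代价最小的组合。
from itertools import product

def search_best(candidates, mvds, rd_cost):
    best = None
    for info0, info1 in product(candidates, repeat=2):
        for mvd0, mvd1 in product(mvds, repeat=2):
            # 第一原始运动信息与第二原始运动信息相同时,
            # 需通过叠加不同的运动矢量差保证两个目标运动信息不同
            if info0 == info1 and mvd0 == mvd1:
                continue
            cost = rd_cost(info0, mvd0, info1, mvd1)
            if best is None or cost < best[0]:
                best = (cost, info0, mvd0, info1, mvd1)
    return best  # (最小代价, 第一原始运动信息, MVD0, 第二原始运动信息, MVD1)
```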
比如说,在当前块的编码比特流中编码最小率失真代价值对应的第一原始运动信息在运动信息候选列表中的索引值,最小率失真代价值对应的第二原始运动信息在运动信息候选列表中的索引值,最小率失真代价值对应的第一运动矢量差的方向信息和幅值信息,最小率失真代价值对应的第二运动矢量差的方向信息和幅值信息。例如,针对第一运动矢量差的方向信息或者第二运动矢量差的方向信息来说,该方向信息的指示信息可以为0,用于表示方向列表中的第一个方向。针对第一运动矢量差的幅值信息或者第二运动矢量差的幅值信息来说,该幅值信息的指示信息可以为0,表示步长配置列表中的第一个步长配置。
示例性的,若运动矢量差支持上,下,左,右等四个方向,运动矢量差支持1/4-pel,1/2-pel,1-pel,2-pel,4-pel等5类步长配置时,则运动矢量差的方向信息可以采用2bin定长码(共4类值)进行编码,2bin定长码的4个取值分别表示上,下,左,右四个方向。运动矢量差的幅值信息可以采用截断一元码进行编码,即通过截断一元码表示5类步长配置。
又例如,若运动矢量差支持上,下,左,右四个方向,运动矢量差支持1/4-pel,1/2-pel,1-pel,2-pel,3-pel,4-pel等6类步长配置时,则运动矢量差的方向信息可以采用2bin定长码(共4类值)进行编码,运动矢量差的幅值信息可以采用截断一元码进行编码。
又例如,若运动矢量差支持上,下,左,右,左上,左下,右上,右下八个方向,运动矢量差支持1/4-pel,1/2-pel,1-pel等3类步长配置时,则运动矢量差的方向信息可以采用3bin定长码(共8类值)进行编码,运动矢量差的幅值信息可以采用截断一元码进行编码。
又例如,若运动矢量差支持上,下,左,右四个方向,运动矢量差支持1/4-pel,1/2-pel,1-pel,2-pel等4类步长配置时,则运动矢量差的方向信息可以采用截断一元码进行编码,运动矢量差的幅值信息可以采用2bin定长码(共4类值)进行编码。
当然,上述只是编码方式的几个示例,对此编码方式不做限制。
综上所述,针对编码端来说,可以在一定区域内搜索到最佳运动矢量(即目标运动矢量),而后将最佳运动矢量与原始运动矢量的差值,作为运动矢量差(MVD),将运动矢量差的幅值信息和方向信息编入码流。编码端在一定区域内搜索最佳运动矢量时,需要约定运动矢量差的方向以及幅值,即,在(A,0),(0,-A),(-A,0),(0,A),(A,A),(-A,A),(-A,-A),(A,-A)等运动矢量差的限定范围内,搜索最佳运动矢量。
针对解码端来说,在接收到当前块的编码比特流后,可以从编码比特流中解析出第一原始运动信息在运动信息候选列表中的索引值,并从运动信息候选列表中选择与该索引值对应的候选运动信息,将该候选运动信息作为当前块的第一原始运动信息。解码端可以从编码比特流中解析出第二原始运动信息在运动信息候选列表中的索引值,并从运动信息候选列表中选择与该索引值对应的候选运动信息,将该候选运动信息作为当前块的第二原始运动信息。
解码端还可以从该编码比特流中解析出第一运动矢量差的方向信息和幅值信息,并根据该方向信息和该幅值信息确定第一运动矢量差。以及,从该编码比特流中解析出第二运动矢量差的方向信息和幅值信息,并根据该方向信息和该幅值信息确定第二运动矢量差。
然后,解码端可以根据第一运动矢量差和第一原始运动信息确定当前块的第一目标运动信息,并根据第二运动矢量差和第二原始运动信息确定当前块的第二目标运动信息。
示例性的,在根据第一运动矢量差的方向信息和第一运动矢量差的幅值信息确定第一运动矢量差时,若第一运动矢量差的方向信息表示方向为向右,第一运动矢量差的幅值信息表示幅值为A,则第一运动矢量差为(A,0);若第一运动矢量差的方向信息表示方向为向下,第一运动矢量差的幅值信息表示幅值为A,则第一运动矢量差为(0,-A);若第一运动矢量差的方向信息表示方向为向左,第一运动矢量差的幅值信息表示幅值为A,则第一运动矢量差为(-A,0);若第一运动矢量差的方向信息表示方向为向上,第一运动矢量差的幅值信息表示幅值为A,则第一运动矢量差为(0,A)。在根据第二运动矢量差的方向信息和第二运动矢量差的幅值信息确定第二运动矢量差时,若第二运动矢量差的方向信息表示方向为向右,第二运动矢量差的幅值信息表示幅值为A,则第二运动矢量差为(A,0);若第二运动矢量差的方向信息表示方向为向下,第二运动矢量差的幅值信息表示幅值为A,则第二运动矢量差为(0,-A);若第二运动矢量差的方向信息表示方向为向左,第二运动矢量差的幅值信息表示幅值为A,则第二运动矢量差为(-A,0);若第二运动矢量差的方向信息表示方向为向上,第二运动矢量差的幅值信息表示幅值为A,则第二运动矢量差为(0,A)。
参见上述实施例,编码端在对运动矢量差的方向信息进行编码时,可以采用定长码,截断一元码等方式,因此,解码端可以采用定长码,截断一元码等方式,对运动矢量差的方向信息进行解码,得到运动矢量差的方向信息,如上,下,左,右,左上,左下,右上,右下等。
参见上述实施例,编码端在对运动矢量差的幅值信息进行编码时,可以采用定长码,截断一元码等方式,因此,解码端可以采用定长码,截断一元码等方式,对运动矢量差的幅值信息进行解码,得到运动矢量差的幅值信息,如1/4-pel,1/2-pel,1-pel,2-pel等步长配置,继而根据1/4-pel,1/2-pel,1-pel,2-pel等步长配置确定出运动矢量差的幅值A的取值。
在一种可能的实施方式中,编码端还可以在编码比特流中编码增强角度加权预测模式的第一子模式标志和第二子模式标志,该第一子模式标志指示对第一原始运动信息叠加运动矢量差,或者,对第一原始运动信息不叠加运动矢量差。该第二子模式标志指示对第二原始运动信息叠加运动矢量差,或者,对第二原始运动信息不叠加运动矢量差。
解码端在接收到当前块的编码比特流后,可以先从当前块的编码比特流中解析出增强角度加权预测模式的第一子模式标志和第二子模式标志。若第一子模式标志指示对第一原始运动信息叠加运动矢量差,则从当前块的编码比特流中解析出第一运动矢量差的方向信息和幅值信息,并根据第一运动矢量差的方向信息和幅值信息确定第一运动矢量差,继而根据第一原始运动信息和第一运动矢量差确定当前块的第一目标运动信息。若第一子模式标志指示对第一原始运动信息不叠加运动矢量差,则不会解析第一运动矢量差的方向信息和幅值信息,可以直接将第一原始运动信息作为当前块的第一目标运动信息。若第二子模式标志指示对第二原始运动信息叠加运动矢量差,则从当前块的编码比特流中解析出第二运动矢量差的方向信息和幅值信息,并根据第二运动矢量差的方向信息和幅值信息确定第二运动矢量差,继而根据第二原始运动信息和第二运动矢量差确定当前块的第二目标运动信息。若第二子模式标志指示对第二原始运动信息不叠加运动矢量差,则不会解析第二运动矢量差的方向信息和幅值信息,可以直接将第二原始运动信息作为当前块的第二目标运动信息。
示例性的,当增强角度加权预测模式的第一子模式标志为第一取值时,指示对第一原始运动信息叠加运动矢量差,当增强角度加权预测模式的第一子模式标志为第二取值时,表示对第一原始运动信息不叠加运动矢量差。当增强角度加权预测模式的第二子模式标志为第一取值时,指示对第二原始运动信息叠加运动矢量差,当增强角度加权预测模式的第二子模式标志为第二取值时,表示对第二原始运动信息不叠加运动矢量差。第一取值和第二取值可以根据经验配置,如第一取值为1,第二取值为0,或者,如第一取值为0,第二取值为1。
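上述由子模式标志决定是否解析并叠加运动矢量差的过程,可以用如下示意代码概括(此处假设第一取值为1、第二取值为0,parse_mvd 抽象表示"解析方向信息与幅值信息并确定运动矢量差"的过程,均为示意性假设,并非规范实现):

```python
# 示意代码:依据子模式标志决定是否叠加运动矢量差。

def derive_target_mv(sub_flag, original_mv, parse_mvd):
    if sub_flag == 1:                    # 第一取值:需要解析并叠加运动矢量差
        mvd = parse_mvd()
        return (original_mv[0] + mvd[0], original_mv[1] + mvd[1])
    return original_mv                   # 第二取值:直接将原始运动矢量作为目标运动矢量
```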
在上述实施例中,解码端是从当前块的编码比特流中解析出第一运动矢量差的方向信息和第二运动矢量差的方向信息,在实际应用中,还可以根据当前块的权重预测角度推导第一运动矢量差的方向信息,并根据当前块的权重预测角度推导第二运动矢量差的方向信息。
比如说,当前块的权重预测角度表示一种角度方向,参见图6所示的8种角度方向,当前块的权重预测角度表示8种角度方向中的某一种角度方向,对于解码端来说,可以从所有方向信息(如上,下,左,右,左上,左下,右上,右下等)中选择与该角度方向匹配的方向信息,直接将该方向信息作为第一运动矢量差的方向信息和第二运动矢量差的方向信息。
与该角度方向匹配的方向信息,可以包括:该方向信息与该角度方向之间的角度差为预设角度或接近预设角度,或者,在所有方向信息中,该方向信息与该角度方向之间的角度差与预设角度的差值最小。该预设角度可以根据经验配置,如该预设角度可以为90度。
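上述"与角度方向匹配的方向信息"的选择过程,可以用如下示意代码概括(各方向对应的角度取值以及预设角度90度均为示意性假设,并非规范实现):

```python
# 示意代码:从候选方向中选择方向角与权重预测角度之差最接近预设角度的方向。

DIRECTION_ANGLES = {'右': 0, '右上': 45, '上': 90, '左上': 135,
                    '左': 180, '左下': 225, '下': 270, '右下': 315}

def derive_mvd_direction(weight_pred_angle, preset=90):
    def score(item):
        diff = abs(item[1] - weight_pred_angle) % 360
        diff = min(diff, 360 - diff)          # 取两方向间的最小夹角
        return abs(diff - preset)             # 与预设角度的差值越小越匹配
    return min(DIRECTION_ANGLES.items(), key=score)[0]
```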
示例性的,针对编码端来说,也可以根据当前块的权重预测角度推导第一运动矢量差的方向信息,并根据当前块的权重预测角度推导第二运动矢量差的方向信息,而不需要采用率失真代价值的方式确定第一运动矢量差的方向信息和第二运动矢量差的方向信息。编码端在向解码端发送当前块的编码比特流时,也可以不编码第一运动矢量差的方向信息和第二运动矢量差的方向信息,由解码端推导第一运动矢量差的方向信息和第二运动矢量差的方向信息。
实施例16:在实施例15中,解码端可以从编码比特流中解析出第一运动矢量差的幅值信息和第二运动矢量差的幅值信息,在一种可能的实施方式中,编码端和解码端可以构建相同的一个运动矢量差幅值列表,编码端确定第一运动矢量差的幅值信息在该运动矢量差幅值列表中的幅值索引,且编码比特流包括第一运动矢量差的幅值索引。解码端从当前块的编码比特流中解析出第一运动矢量差的幅值索引,并从该运动矢量差幅值列表选取与该幅值索引对应的幅值信息,该幅值信息即第一运动矢量差的幅值信息。编码端确定第二运动矢量差的幅值信息在该运动矢量差幅值列表中的幅值索引,且编码比特流包括第二运动矢量差的幅值索引。解码端从当前块的编码比特流中解析出第二运动矢量差的幅值索引,并从该运动矢量差幅值列表选取与该幅值索引对应的幅值信息,该幅值信息即第二运动矢量差的幅值信息。
在另一种可能的实施方式中,编码端和解码端可以构建相同的至少两个运动矢量差幅值列表,如编码端和解码端构建相同的运动矢量差幅值列表1,并构建相同的运动矢量差幅值列表2。编码端先基于运动矢量差幅值列表的指示信息,从所有运动矢量差幅值列表中选取目标运动矢量差幅值列表;编码端确定第一运动矢量差的幅值信息在该目标运动矢量差幅值列表中的幅值索引,且编码比特流包括第一运动矢量差的幅值索引。
编码端还可以确定第二运动矢量差的幅值信息在该目标运动矢量差幅值列表中的幅值索引,且编码比特流可以包括第二运动矢量差的幅值索引。解码端可以从当前块的编码比特流中解析出第二运动矢量差的幅值索引,并从该目标运动矢量差幅值列表中选取与该幅值索引对应的幅值信息,该幅值信息即第二运动矢量差的幅值信息。
示例性的,运动矢量差幅值列表的指示信息,可以是任意级别的指示信息,比如说,可以是序列级的运动矢量差幅值列表的指示信息,帧级的运动矢量差幅值列表的指示信息,Slice级的运动矢量差幅值列表的指示信息,Tile级的运动矢量差幅值列表的指示信息,Patch级的运动矢量差幅值列表的指示信息,CTU级的运动矢量差幅值列表的指示信息,LCU级的运动矢量差幅值列表的指示信息,块级的运动矢量差幅值列表的指示信息,CU级的运动矢量差幅值列表的指示信息,PU级的运动矢量差幅值列表的指示信息,对此不做限制,为了方便描述,以帧级的运动矢量差幅值列表的指示信息为例,帧级的运动矢量差幅值列表的指示信息可以为awp_umve_offset_list_flag,通过awp_umve_offset_list_flag控制运动矢量差幅值列表的切换。
比如说,编码端和解码端可以构建运动矢量差幅值列表1和运动矢量差幅值列表2,参见表3和表4所示。可以对运动矢量差幅值列表1进行二值化处理,并对运动矢量差幅值列表2进行二值化处理,对此处理方式不做限制。例如,对运动矢量差幅值列表1采用截断一元码,对运动矢量差幅值列表2采用截断一元码,或者,对运动矢量差幅值列表1采用截断一元码,对运动矢量差幅值列表2采用定长码,或者,对运动矢量差幅值列表1采用定长码,对运动矢量差幅值列表2采用截断一元码,对此不做限制,参见表5和表6所示。
表3
MVD幅值(像素) 1/4 1/2 1 2 4
表4
MVD幅值(像素) 1/4 1/2 1 2 4 8 16 32
表5
MVD幅值(像素)   1/4    1/2    1      2      4
二值化          1      01     001    0001   0000
表6
MVD幅值(像素)   1/4    1/2    1      2      4      8      16     32
二值化          000    001    011    010    10     110    1110   1111
在上述应用场景下,通过awp_umve_offset_list_flag控制运动矢量差幅值列表的切换,即控制采用表3所示的运动矢量差幅值列表,或表4所示的运动矢量差幅值列表。例如,若awp_umve_offset_list_flag的取值为第一取值,则表3所示的运动矢量差幅值列表为目标运动矢量差幅值列表,若awp_umve_offset_list_flag的取值为第二取值,则表4所示的运动矢量差幅值列表为目标运动矢量差幅值列表;或者,若awp_umve_offset_list_flag的取值为第二取值,则表3所示的运动矢量差幅值列表为目标运动矢量差幅值列表,若awp_umve_offset_list_flag的取值为第一取值,则表4所示的运动矢量差幅值列表为目标运动矢量差幅值列表。
在目标运动矢量差幅值列表为表3时,编码端采用表5所示的二值化方式进行编码,解码端采用表5所示的二值化方式进行解码。在目标运动矢量差幅值列表为表4时,编码端采用表6所示的二值化方式进行编码,解码端采用表6所示的二值化方式进行解码。
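表5所示的截断一元码编解码过程,可以用如下示意代码概括(码字取自表5,列表与函数命名为示意,并非规范实现):

```python
# 示意代码:表5所示5类步长配置的截断一元码编码/解码。
# 索引 idx 编码为 idx 个'0'后跟一个'1';最后一档(idx == max_idx)省略结尾的'1'。

STEP_LIST = ['1/4-pel', '1/2-pel', '1-pel', '2-pel', '4-pel']

def encode_step_index(idx, max_idx=4):
    if idx == max_idx:
        return '0' * max_idx
    return '0' * idx + '1'

def decode_step_index(bits, max_idx=4):
    """从码流前缀解析截断一元码,返回(索引, 消耗的比特数)。"""
    idx = 0
    while idx < max_idx and bits[idx] == '0':
        idx += 1
    consumed = idx if idx == max_idx else idx + 1
    return idx, consumed
```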
实施例17:在实施例15或实施例16的基础上,针对第一运动矢量差和第二运动矢量差,以下结合具体应用场景,对单向运动信息叠加运动矢量差AWP_MVR的相关语法进行说明:
应用场景1:参见表7所示,为相关语法的示例,SkipFlag表示当前块是否为Skip模式,DirectFlag表示当前块是否为Direct模式,AwpFlag表示当前块是否为AWP模式。
awp_idx(角度加权预测模式索引):跳过模式或直接模式下的角度加权预测模式索引值,AwpIdx的值,可以等于awp_idx的值。如果码流中不存在awp_idx,则AwpIdx的值等于0。
awp_cand_idx0(角度加权预测模式的第一运动信息索引):跳过模式或直接模式下的角度加权预测模式的第一运动信息索引值。AwpCandIdx0的值等于awp_cand_idx0的值,如果码流中不存在awp_cand_idx0,则AwpCandIdx0的值等于0。
awp_cand_idx1(角度加权预测模式第二运动信息索引):跳过模式或直接模式下的角度加权预测模式的第二运动信息索引值。AwpCandIdx1的值等于awp_cand_idx1的值,如果码流中不存在awp_cand_idx1,则AwpCandIdx1的值等于0。
awp_mvd_flag(增强角度加权预测模式标志),是一个二值变量,awp_mvd_flag为第一取值(如1)时,表示当前块为增强角度加权预测模式,awp_mvd_flag为第二取值(如0)时,表示当前块为非增强角度加权预测模式。示例性的,AwpMvdFlag的取值,可以等于awp_mvd_flag的值,如果码流中不存在awp_mvd_flag,则AwpMvdFlag的值等于0。
awp_mvd_sub_flag0(增强角度加权预测模式第一子模式标志),可以是一个二值变量,awp_mvd_sub_flag0为第一取值时,可以表示角度加权预测模式的第一运动信息需要叠加运动信息差;awp_mvd_sub_flag0为第二取值时,可以表示角度加权预测模式的第一运动信息不需要叠加运动信息差。示例性的,AwpMvdSubFlag0的值可以等于awp_mvd_sub_flag0的值,如果码流中不存在awp_mvd_sub_flag0,则AwpMvdSubFlag0的值等于0。
awp_mvd_sub_flag1(增强角度加权预测模式第二子模式标志),可以是一个二值变量,awp_mvd_sub_flag1为第一取值时,可以表示角度加权预测模式的第二运动信息需要叠加运动信息差;awp_mvd_sub_flag1为第二取值时,可以表示角度加权预测模式的第二运动信息不需要叠加运动信息差。示例性的,AwpMvdSubFlag1的值可以等于awp_mvd_sub_flag1的值;如果码流中不存在awp_mvd_sub_flag1,则:若AwpMvdFlag等于1,AwpMvdSubFlag1的值等于1;否则,AwpMvdSubFlag1的值等于0。
awp_mvd_dir0(第一运动信息运动矢量差方向索引值),角度加权预测模式第一运动信息的运动矢量差方向索引值。示例性的,AwpMvdDir0的值可以等于awp_mvd_dir0的值,如果码流中不存在awp_mvd_dir0,则AwpMvdDir0的值可以等于0。
awp_mvd_step0(第一运动信息运动矢量差步长索引值),角度加权预测模式第一运动信息的运动矢量差步长索引值。示例性的,AwpMvdStep0的值可以等于awp_mvd_step0的值,如果码流中不存在awp_mvd_step0,则AwpMvdStep0的值可以等于0。
awp_mvd_dir1(第二运动信息运动矢量差方向索引值),角度加权预测模式第二运动信息的运动矢量差方向索引值。示例性的,AwpMvdDir1的值可以等于awp_mvd_dir1的值。如果码流中不存在awp_mvd_dir1,则AwpMvdDir1的值可以等于0。
awp_mvd_step1(第二运动信息运动矢量差步长索引值),角度加权预测模式第二运动信息的运动矢量差步长索引值。示例性的,AwpMvdStep1的值可以等于awp_mvd_step1的值。如果码流中不存在awp_mvd_step1,则AwpMvdStep1的值可以等于0。
表7
(表7的具体内容在原文中以图片形式给出,此处从略。)
应用场景2:参见表8所示,为相关语法的示例,SkipFlag表示当前块是否为Skip模式,DirectFlag表示当前块是否为Direct模式,AwpFlag表示当前块是否为AWP模式。
关于awp_idx,awp_cand_idx0和awp_cand_idx1,可以参见应用场景1,在此不再赘述。
awp_mvd_sub_flag0(增强角度加权预测模式第一子模式标志),可以是一个二值变量,awp_mvd_sub_flag0为第一取值时,可以表示角度加权预测模式的第一运动信息需要叠加运动信息差;awp_mvd_sub_flag0为第二取值时,可以表示角度加权预测模式的第一运动信息不需要叠加运动信息差。示例性的,AwpMvdSubFlag0的值可以等于awp_mvd_sub_flag0的值,如果码流中不存在awp_mvd_sub_flag0,则AwpMvdSubFlag0的值等于0。
awp_mvd_sub_flag1(增强角度加权预测模式第二子模式标志),可以是一个二值变量,awp_mvd_sub_flag1为第一取值时,可以表示角度加权预测模式的第二运动信息需要叠加运动信息差;awp_mvd_sub_flag1为第二取值时,可以表示角度加权预测模式的第二运动信息不需要叠加运动信息差。示例性的,AwpMvdSubFlag1的值可以等于awp_mvd_sub_flag1的值,如果码流中不存在awp_mvd_sub_flag1,则AwpMvdSubFlag1的值可以等于0。
关于awp_mvd_dir0,awp_mvd_step0,awp_mvd_dir1,awp_mvd_step1,参见应用场景1。
表8
(表8的具体内容在原文中以图片形式给出,此处从略。)
示例性的,针对应用场景1和应用场景2,二者的区别在于:在应用场景1中,存在语法awp_mvd_flag,在应用场景2中,不存在语法awp_mvd_flag。在应用场景1中,可以通过awp_mvd_flag控制增强角度加权预测模式,即通过总开关控制增强角度加权预测模式。
应用场景3:可以对应用场景1和应用场景2进行衍生,将AWP模式与AWP_MVR模式融合,即在步长配置中增加0步长,从而不需要编码是否使能的标志位。例如,运动矢量差支持上,下,左,右等方向,运动矢量差支持如下步长配置:0-pel,1/4-pel,1/2-pel,1-pel,2-pel,4-pel,即增加步长配置0-pel。在此基础上,表7/表8可以更新为表9,相关语法参见表7。
表9
编码单元定义                                  描述符
...
if((SkipFlag||DirectFlag)&&AwpFlag){
    awp_idx                                   ae(v)
    awp_cand_idx0                             ae(v)
    awp_cand_idx1                             ae(v)
    awp_mvd_step0                             ae(v)
    if(AwpMvdStep0!=0)
        awp_mvd_dir0                          ae(v)
    awp_mvd_step1                             ae(v)
    if(AwpMvdStep1!=0)
        awp_mvd_dir1                          ae(v)
}
...
示例性的,针对各应用场景来说,相关语法中语法元素的顺序可进行相应调整,比如说,针对表8所示的相关语法,可以对语法元素的顺序进行相应调整,得到表10所示的相关语法。
表10
(表10的具体内容在原文中以图片形式给出,此处从略。)
从表8和表10可以看出,可以对awp_cand_idx0和awp_cand_idx1的顺序进行调整,示例性的,在表10中,awp_cand_idx0和awp_cand_idx1的解析方式,可以根据AwpMvdSubFlag0,AwpMvdSubFlag1,AwpMvdDir0,AwpMvdStep0,AwpMvdDir1,AwpMvdStep1中一个或多个的值进行调整。例如,当AwpMvdSubFlag0,AwpMvdSubFlag1中的至少一个值为1时,awp_cand_idx0以及awp_cand_idx1的解析方式完全一致,否则,解析方式不一致。
示例性的,AwpMvdSubFlag0表示是否对第一原始运动信息叠加MVD,如果叠加,基于AwpMvdDir0和AwpMvdStep0确定出第一原始运动信息对应的MVD值,AwpMvdSubFlag1表示是否对第二原始运动信息叠加MVD,如果叠加,基于AwpMvdDir1和AwpMvdStep1确定出第二原始运动信息对应的MVD值。显然,若一个原始运动信息叠加MVD,另一个原始运动信息不叠加MVD,或两个原始运动信息叠加的MVD值不同,则允许第一原始运动信息和第二原始运动信息相同,awp_cand_idx0表示第一原始运动信息的索引值,awp_cand_idx1表示第二原始运动信息的索引值,因此,awp_cand_idx0和awp_cand_idx1的解析方式完全一致,即从完整的运动信息候选列表中解析出与awp_cand_idx0对应的第一原始运动信息,并从完整的运动信息候选列表中解析出与awp_cand_idx1对应的第二原始运动信息。
若两个原始运动信息均不叠加MVD,或者,两个原始运动信息叠加的MVD值相同,则第一原始运动信息和第二原始运动信息不同,基于此,awp_cand_idx0和awp_cand_idx1的解析方式不一致,从完整的运动信息候选列表中解析出与awp_cand_idx0对应的第一原始运动信息,由于第二原始运动信息与第一原始运动信息不同,因此,不是从完整的运动信息候选列表中解析出与awp_cand_idx1对应的第二原始运动信息,而是在排除第一原始运动信息的基础上,从不完整的运动信息候选列表中解析出与awp_cand_idx1对应的第二原始运动信息。
基于运动信息候选列表AwpUniArray,从AwpUniArray中选择两个候选运动信息,基于该两个候选运动信息确定当前块的第一目标运动信息和第二目标运动信息。比如说,解码端从编码比特流中解析出AwpCandIdx0和AwpCandIdx1,将AwpUniArray中的第AwpCandIdx0+1个运动信息赋值给mvAwp0L0,mvAwp0L1,RefIdxAwp0L0,RefIdxAwp0L1。将AwpUniArray中的第AwpCandIdx1+1个运动信息赋值给mvAwp1L0,mvAwp1L1,RefIdxAwp1L0,RefIdxAwp1L1。当然,也可将AwpUniArray中第AwpCandIdx0+1个运动信息赋值给mvAwp1L0,mvAwp1L1,RefIdxAwp1L0,RefIdxAwp1L1,将AwpUniArray中第AwpCandIdx1+1个运动信息赋值给mvAwp0L0,mvAwp0L1,RefIdxAwp0L0,RefIdxAwp0L1。
AwpCandIdx0表示第一目标运动信息的索引值,因此,可以将AwpUniArray中的第AwpCandIdx0+1个运动信息赋值给第一目标运动信息。例如,若AwpCandIdx0为0,则将AwpUniArray中的第1个运动信息赋值给第一目标运动信息,以此类推。
mvAwp0L0,mvAwp0L1,RefIdxAwp0L0,RefIdxAwp0L1合起来作为第一目标运动信息,即第一目标运动信息包括指向List0的单向运动信息以及指向List1的单向运动信息。
若AwpUniArray中的第AwpCandIdx0+1个运动信息是指向List0的单向运动信息,则第一目标运动信息包括指向List0的单向运动信息,且指向List1的单向运动信息为空。
若AwpUniArray中的第AwpCandIdx0+1个运动信息是指向List1的单向运动信息,则第一目标运动信息包括指向List1的单向运动信息,且指向List0的单向运动信息为空。
示例性的,mvAwp0L0和RefIdxAwp0L0表示第一目标运动信息中指向List0的单向运动信息,mvAwp0L1和RefIdxAwp0L1表示第一目标运动信息中指向List1的单向运动信息。
若RefIdxAwp0L0有效,则表示指向List0的单向运动信息为有效,因此,第一目标运动信息的预测模式为PRED_List0。若RefIdxAwp0L1有效,则表示指向List1的单向运动信息为有效,因此,第一目标运动信息的预测模式为PRED_List1。
示例性的,AwpCandIdx1表示第二目标运动信息的索引值,因此,可以将AwpUniArray中的第AwpCandIdx1+1个运动信息赋值给第二目标运动信息。例如,若AwpCandIdx1为0,则将AwpUniArray中的第1个运动信息赋值给第二目标运动信息,以此类推。
mvAwp1L0,mvAwp1L1,RefIdxAwp1L0,RefIdxAwp1L1合起来作为第二目标运动信息,即第二目标运动信息包括指向List0的单向运动信息以及指向List1的单向运动信息。
若AwpUniArray中的第AwpCandIdx1+1个运动信息是指向List0的单向运动信息,则第二目标运动信息包括指向List0的单向运动信息,且指向List1的单向运动信息为空。
若AwpUniArray中的第AwpCandIdx1+1个运动信息是指向List1的单向运动信息,则第二目标运动信息包括指向List1的单向运动信息,且指向List0的单向运动信息为空。
示例性的,mvAwp1L0和RefIdxAwp1L0表示第二目标运动信息中指向List0的单向运动信息,mvAwp1L1和RefIdxAwp1L1表示第二目标运动信息中指向List1的单向运动信息。
若RefIdxAwp1L0有效,则表示指向List0的单向运动信息为有效,因此,第二目标运动信息的预测模式为PRED_List0。若RefIdxAwp1L1有效,则表示指向List1的单向运动信息为有效,因此,第二目标运动信息的预测模式为PRED_List1。
在若干应用中,由于参考帧的存储空间有限,因此,若当前帧为B帧,但List0与List1中的参考帧的POC一致,即为同一帧,则为了最大化AWP模式的性能,可以对传输的两个运动信息的语法元素进行重新设计,即采用本实施例的方式传输运动信息。另一方面,针对P帧的情况,若指向同一帧的两个运动矢量限制在一定范围内,不增加P帧的带宽限制,则AWP模式可以应用于P帧。
针对上述发现,本实施例中,在第一目标运动信息指向的参考帧与第二目标运动信息指向的参考帧为同一帧时,即AWP模式的两个运动矢量指向同一帧,则第二目标运动信息的运动矢量可以由第一目标运动信息的运动矢量导出,即第一个运动信息传递索引值,第二个运动信息为在第一个运动信息的基础上叠加MVD,MVD的编码方式参见实施例15或16。
参见表11所示,为语法元素的一个示例,awp_mvd_dir0表示受限角度加权预测模式第一运动信息运动矢量差方向索引值,AwpMvdDir0的值等于awp_mvd_dir0的值,如果位流中不存在awp_mvd_dir0,则AwpMvdDir0的值等于0。awp_mvd_step0表示受限角度加权预测模式第一运动信息运动矢量差步长索引值,AwpMvdStep0的值等于awp_mvd_step0的值,如果位流中不存在awp_mvd_step0,则AwpMvdStep0的值等于0。awp_mvd_dir1表示受限角度加权预测模式第二运动信息运动矢量差方向索引值,AwpMvdDir1的值等于awp_mvd_dir1的值,如果位流中不存在awp_mvd_dir1,则AwpMvdDir1的值等于0。awp_mvd_step1表示受限角度加权预测模式第二运动信息运动矢量差步长索引值,AwpMvdStep1的值等于awp_mvd_step1的值,如果位流中不存在awp_mvd_step1,则AwpMvdStep1的值等于0。
表11
编码单元定义                                  描述符
...
if((SkipFlag||DirectFlag)&&AwpFlag&&Limited){
    awp_idx                                   ae(v)
    awp_cand_idx                              ae(v)
    awp_mvd_dir0                              ae(v)
    awp_mvd_step0                             ae(v)
    awp_mvd_dir1                              ae(v)
    awp_mvd_step1                             ae(v)
}
...
综上所述,可以基于awp_cand_idx确定出原始运动信息(原始运动信息包括原始运动矢量),根据awp_mvd_dir0和awp_mvd_step0确定出第一运动矢量差MVD0和第二运动矢量差MVD1。在此基础上,根据原始运动矢量和第一运动矢量差MVD0确定第一目标运动矢量,如原始运动矢量+MVD0。根据第一目标运动矢量和第二运动矢量差MVD1确定第二目标运动矢量,如第一目标运动矢量+MVD1。得到第一目标运动矢量后,基于第一目标运动矢量得到第一目标运动信息,第一目标运动信息包括第一目标运动矢量。得到第二目标运动矢量后,基于第二目标运动矢量得到第二目标运动信息,第二目标运动信息包括第二目标运动矢量。
又例如,根据原始运动矢量和第一运动矢量差MVD0确定第一目标运动矢量,如原始运动矢量+MVD0。根据原始运动矢量和第二运动矢量差MVD1确定第二目标运动矢量,如原始运动矢量+MVD1。在得到第一目标运动矢量后,基于第一目标运动矢量得到第一目标运动信息,第一目标运动信息包括第一目标运动矢量。在得到第二目标运动矢量后,基于第二目标运动矢量得到第二目标运动信息,第二目标运动信息包括第二目标运动矢量。
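上述两种由一个原始运动矢量与MVD0/MVD1导出两个目标运动矢量的方式,可以用如下示意代码概括(函数命名为示意,并非规范实现):

```python
# 示意代码:受限模式下由一个原始运动矢量导出两个目标运动矢量。

def add(mv, mvd):
    return (mv[0] + mvd[0], mv[1] + mvd[1])

def derive_chained(orig, mvd0, mvd1):
    """第二目标运动矢量在第一目标运动矢量的基础上叠加MVD1。"""
    mv0 = add(orig, mvd0)
    return mv0, add(mv0, mvd1)

def derive_independent(orig, mvd0, mvd1):
    """两个目标运动矢量分别在原始运动矢量的基础上叠加MVD0/MVD1。"""
    return add(orig, mvd0), add(orig, mvd1)
```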
参见表12所示,为语法元素的另一个示例,awp_mvd_dir与awp_mvd_step组成第二个运动信息(即第二目标运动信息)在第一个运动信息(即第一目标运动信息)上的MVD的语法表达。awp_mvd_dir表示受限角度加权预测模式运动矢量差方向索引值,AwpMvdDir的值等于awp_mvd_dir的值,如果位流中不存在awp_mvd_dir,AwpMvdDir的值等于0。awp_mvd_step表示受限角度加权预测模式运动矢量差步长索引值,AwpMvdStep的值等于awp_mvd_step的值,如果位流中不存在awp_mvd_step,AwpMvdStep的值等于0。
表12
编码单元定义                                  描述符
...
if((SkipFlag||DirectFlag)&&AwpFlag&&Limited){
    awp_idx                                   ae(v)
    awp_cand_idx                              ae(v)
    awp_mvd_dir                               ae(v)
    awp_mvd_step                              ae(v)
}
...
综上所述,可以基于awp_cand_idx确定出原始运动信息(原始运动信息包括原始运动矢量),根据awp_mvd_dir和awp_mvd_step确定出运动矢量差MVD。
在此基础上,根据原始运动矢量确定第一目标运动矢量,如第一目标运动矢量为原始运动矢量。根据第一目标运动矢量和运动矢量差MVD确定第二目标运动矢量,如第一目标运动矢量+MVD。在得到第一目标运动矢量后,基于第一目标运动矢量得到第一目标运动信息,第一目标运动信息包括第一目标运动矢量。在得到第二目标运动矢量后,基于第二目标运动矢量得到第二目标运动信息,第二目标运动信息包括第二目标运动矢量。
又例如,根据原始运动矢量和运动矢量差MVD确定第一目标运动矢量,如原始运动矢量+MVD。根据原始运动矢量和运动矢量差MVD确定第二目标运动矢量,如原始运动矢量-MVD。或者,根据原始运动矢量和运动矢量差MVD确定第一目标运动矢量,如原始运动矢量-MVD。根据原始运动矢量和运动矢量差MVD确定第二目标运动矢量,如原始运动矢量+MVD。在得到第一目标运动矢量后,基于第一目标运动矢量得到第一目标运动信息,第一目标运动信息包括第一目标运动矢量。在得到第二目标运动矢量后,基于第二目标运动矢量得到第二目标运动信息,第二目标运动信息包括第二目标运动矢量。综上可以看出,MVD同时作用在相反的两个方向,使得原始运动信息能够导出两个目标运动信息。
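上述"MVD同时作用在相反的两个方向"的导出方式,可以用如下示意代码概括(函数命名为示意,并非规范实现):

```python
# 示意代码:同一个MVD分别以正、负方向叠加到原始运动矢量上,导出两个目标运动矢量。

def derive_symmetric(orig, mvd):
    mv_plus = (orig[0] + mvd[0], orig[1] + mvd[1])    # 原始运动矢量+MVD
    mv_minus = (orig[0] - mvd[0], orig[1] - mvd[1])   # 原始运动矢量-MVD
    return mv_plus, mv_minus
```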
在一种可能的实施方式中,MVD的语法表述还可以为水平分量和垂直分量的分开表述,参见表13所示,为分开表述的一个示例。awp_mv_diff_x_abs表示受限角度加权预测模式运动矢量水平分量差绝对值,awp_mv_diff_y_abs表示受限角度加权预测模式运动矢量垂直分量差绝对值。awp_mv_diff_x_sign表示受限角度加权预测模式运动矢量水平分量差符号值,awp_mv_diff_y_sign表示受限角度加权预测模式运动矢量垂直分量差符号值。
示例性的,受限角度加权预测模式的运动矢量差值的绝对值,AwpMvDiffXAbs等于awp_mv_diff_x_abs的值,AwpMvDiffYAbs等于awp_mv_diff_y_abs的值。
示例性的,受限角度加权预测模式的运动矢量差值的符号位,AwpMvDiffXSign的值等于awp_mv_diff_x_sign的值,AwpMvDiffYSign的值等于awp_mv_diff_y_sign的值。如果位流中不存在awp_mv_diff_x_sign或awp_mv_diff_y_sign,则AwpMvDiffXSign或AwpMvDiffYSign的值为0。如果AwpMvDiffXSign的值为0,AwpMvDiffX等于AwpMvDiffXAbs;如果AwpMvDiffXSign的值为1,AwpMvDiffX等于-AwpMvDiffXAbs。如果AwpMvDiffYSign的值为0,AwpMvDiffY等于AwpMvDiffYAbs;如果AwpMvDiffYSign的值为1,AwpMvDiffY等于-AwpMvDiffYAbs。AwpMvDiffX和AwpMvDiffY的取值范围是-32768~32767。
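上述由绝对值与符号位重建运动矢量差分量的过程,可以用如下示意代码概括(函数命名为示意,并非规范实现):

```python
# 示意代码:由绝对值与符号位重建MVD分量,符号位为1表示负值,为0表示正值,
# 并检查结果落在-32768~32767的取值范围内。

def mv_diff_component(abs_value, sign):
    value = -abs_value if sign == 1 else abs_value
    assert -32768 <= value <= 32767
    return value
```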
表13
编码单元定义                                  描述符
awp_mv_diff_x_abs                             ae(v)
if(AwpMvDiffXAbs)
    awp_mv_diff_x_sign                        ae(v)
awp_mv_diff_y_abs                             ae(v)
if(AwpMvDiffYAbs)
    awp_mv_diff_y_sign                        ae(v)
在另一种可能的实施方式中,参见表14所示,为分开表述的另一个示例。基于表14所示,可以判断|MVD_X|是否大于0,|...|为绝对值符号,MVD_X表示MVD的横向分量;判断|MVD_Y|是否大于0,|...|为绝对值符号,MVD_Y表示MVD的纵向分量;如果|MVD_X|大于0,判断|MVD_X|是否大于1;如果|MVD_Y|大于0,判断|MVD_Y|是否大于1;如果|MVD_X|大于1,编码|MVD_X|-2;如果|MVD_X|大于0,编码MVD_X的符号位;如果|MVD_Y|大于1,编码|MVD_Y|-2;如果|MVD_Y|大于0,编码MVD_Y的符号位。
表14
(表14的具体内容在原文中以图片形式给出,此处从略。)
实施例19:在实施例1-实施例3中,编码端/解码端需要获取运动信息候选列表,如采用如下方式获取运动信息候选列表:获取待加入到运动信息候选列表中的至少一个可用运动信息;基于至少一个可用运动信息获取运动信息候选列表。示例性的,至少一个可用运动信息包括但不限于如下运动信息的至少一种:空域运动信息;时域运动信息;预设运动信息,各可用运动信息的获取方式,可以参见实施例13,在此不再赘述。
示例性的,基于至少一个可用运动信息获取运动信息候选列表,可以包括:针对当前待加入到运动信息候选列表的可用运动信息,将可用运动信息加入到运动信息候选列表。比如说,针对可用运动信息,无论可用运动信息是单向运动信息还是双向运动信息,均将可用运动信息加入到运动信息候选列表。与实施例13和实施例14不同的是,当可用运动信息为双向运动信息时,不需要将双向运动信息裁剪为第一单向运动信息和第二单向运动信息,而是直接将双向运动信息加入到运动信息候选列表,即运动信息候选列表可以包括双向运动信息。
示例性的,在将可用运动信息加入到运动信息候选列表时,可以对可用运动信息进行查重操作,也可以对可用运动信息不进行查重操作,对此不做限制。若对可用运动信息进行查重操作,则可以基于List+refIdx+MV_x+MV_y进行查重操作,或者,可以基于POC+MV_x+MV_y进行查重操作,对此查重操作的方式不再赘述,可以参见实施例14。
实施例20:在实施例1-实施例3中,编码端/解码端在得到运动信息候选列表后,可以基于该运动信息候选列表获取当前块的第一目标运动信息和第二目标运动信息,在确定第一目标运动信息指向的参考帧与第二目标运动信息指向的参考帧为同一帧时,运动信息候选列表中的候选运动信息可以均为双向运动信息。
示例性的,如果限定第一目标运动信息指向的参考帧与第二目标运动信息指向的参考帧为同一帧,即AWP模式的两个运动矢量指向同一帧,则在构建运动信息候选列表时,添加到运动信息候选列表中的可用运动信息均为双向运动信息。基于此,针对编码端来说,可以采用率失真代价值从运动信息候选列表中选择一个候选运动信息,编码端在向解码端发送当前块的编码比特流时,可以携带该候选运动信息在运动信息候选列表中的索引值,针对解码端来说,可以基于该索引值从运动信息候选列表中选择一个候选运动信息。
实施例21:在实施例4中,编码端/解码端需要获取运动矢量候选列表,例如,获取参考帧信息(即当前块的参考帧信息),并获取与该参考帧信息对应的运动矢量候选列表(即当前块的运动矢量候选列表),即运动矢量候选列表是针对参考帧信息创建的。示例性的,该参考帧信息可以包括第一参考帧信息和第二参考帧信息,因此,该运动矢量候选列表可以包括第一参考帧信息(如参考帧索引和参考帧方向等)对应的运动矢量候选列表和第二参考帧信息(如参考帧索引和参考帧方向等)对应的运动矢量候选列表,第一参考帧信息是第一目标运动矢量对应的参考帧信息,第二参考帧信息是第二目标运动矢量对应的参考帧信息。
示例性的,针对编码端来说,可以获取第一参考帧信息和第二参考帧信息,比如说,基于率失真代价值,可以从一个参考帧列表中选取出第一参考帧信息和第二参考帧信息,也可以从两个参考帧列表中选取出第一参考帧信息和第二参考帧信息,例如,两个参考帧列表分别为List0和List1,从List0中选取出第一参考帧信息,从List1中选取出第二参考帧信息。
示例性的,针对解码端来说,可以获取第一参考帧信息和第二参考帧信息,比如说,基于当前块的编码比特流中的索引信息,可以从一个参考帧列表中选取出第一参考帧信息和第二参考帧信息,也可以从两个参考帧列表中选取出第一参考帧信息和第二参考帧信息,例如,两个参考帧列表分别为List0和List1,基于第一参考帧信息的索引信息,从List0中选取出第一参考帧信息,基于第二参考帧信息的索引信息,从List1中选取出第二参考帧信息。
当然,上述只是获取第一参考帧信息和第二参考帧信息的示例,对此不做限制。
在一种可能的实施方式中,第一参考帧信息和第二参考帧信息可以相同,在此情况下,第一目标运动矢量指向的参考帧与第二目标运动矢量指向的参考帧为同一帧,第一参考帧信息对应的运动矢量候选列表与第二参考帧信息对应的运动矢量候选列表为同一个运动矢量候选列表,即,编码端/解码端获取的是一个运动矢量候选列表。
在另一种可能的实施方式中,第一参考帧信息与第二参考帧信息可以不同,在此情况下,第一目标运动矢量指向的参考帧与第二目标运动矢量指向的参考帧为不同帧,第一参考帧信息对应的运动矢量候选列表与第二参考帧信息对应的运动矢量候选列表为不同的运动矢量候选列表,即编码端/解码端获取的是两个不同的运动矢量候选列表。
为了方便描述,无论是一个运动矢量候选列表,还是两个不同的运动矢量候选列表,均记为第一参考帧信息对应的运动矢量候选列表与第二参考帧信息对应的运动矢量候选列表。
基于此,可以采用如下方式获取当前块的第一目标运动矢量和第二目标运动矢量:
方式一、从第一参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为当前块的第一目标运动矢量,从第二参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为当前块的第二目标运动矢量。第二目标运动矢量与第一目标运动矢量可以不同。
针对编码端来说,可以基于率失真原则,从第一参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为当前块的第一目标运动矢量,从第二参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为当前块的第二目标运动矢量,对此不做限制。
在一种可能的实施方式中,编码端在向解码端发送编码比特流时,该编码比特流可以携带指示信息a和指示信息b,指示信息a用于指示当前块的第一目标运动矢量的索引值1,索引值1表示第一目标运动矢量是第一参考帧信息对应的运动矢量候选列表中的第几个候选运动矢量。指示信息b用于指示当前块的第二目标运动矢量的索引值2,索引值2表示第二目标运动矢量是第二参考帧信息对应的运动矢量候选列表中的第 几个候选运动矢量。
解码端在接收到编码比特流后,从编码比特流中解析出指示信息a和指示信息b。基于指示信息a,解码端从第一参考帧信息对应的运动矢量候选列表中选择索引值1对应的候选运动矢量,该候选运动矢量作为当前块的第一目标运动矢量。基于指示信息b,解码端从第二参考帧信息对应的运动矢量候选列表中选择索引值2对应的候选运动矢量,该候选运动矢量作为当前块的第二目标运动矢量。
方式二、从第一参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为当前块的第一原始运动矢量,并从第二参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为当前块的第二原始运动矢量,示例性的,该第一原始运动矢量与该第二原始运动矢量可以不同,或者,该第一原始运动矢量与该第二原始运动矢量也可以相同。然后,根据该第一原始运动矢量确定当前块的第一目标运动矢量,并根据该第二原始运动矢量确定当前块的第二目标运动矢量。示例性的,该第一目标运动矢量与该第二目标运动矢量可以不同。
关于如何根据原始运动矢量确定目标运动矢量,本实施例中给出一种单向运动矢量叠加运动矢量差的方案,比如说,可以获取与第一原始运动矢量对应的第一运动矢量差(即MVD),并根据第一运动矢量差和第一原始运动矢量确定第一目标运动矢量(即第一运动矢量差与第一原始运动矢量的和作为第一目标运动矢量)。或者,可以将第一原始运动矢量确定为第一目标运动矢量。以及,可以获取与第二原始运动矢量对应的第二运动矢量差,并根据第二运动矢量差和第二原始运动矢量确定第二目标运动矢量(即第二运动矢量差与第二原始运动矢量的和作为第二目标运动矢量)。或者,将第二原始运动矢量确定为第二目标运动矢量。
示例性的,可以获取第一运动矢量差的方向信息和幅值信息,并根据第一运动矢量差的方向信息和幅值信息确定第一运动矢量差。以及,可以获取第二运动矢量差的方向信息和幅值信息,并根据第二运动矢量差的方向信息和幅值信息确定所述第二运动矢量差。
示例性的,针对解码端来说,可以采用如下方式获取第一运动矢量差的方向信息:解码端从当前块的编码比特流中解析出第一运动矢量差的方向信息;或者,解码端根据当前块的权重预测角度推导第一运动矢量差的方向信息。针对解码端来说,可以采用如下方式获取第二运动矢量差的方向信息:解码端从当前块的编码比特流中解析出第二运动矢量差的方向信息;或者,解码端根据当前块的权重预测角度推导第二运动矢量差的方向信息。
示例性的,针对解码端来说,可以采用如下方式获取第一运动矢量差的幅值信息:从当前块的编码比特流中解析出第一运动矢量差的幅值信息。可以采用如下方式获取第二运动矢量差的幅值信息:从当前块的编码比特流中解析出第二运动矢量差的幅值信息。
在一种可能的实施方式中,编码端和解码端可以约定运动矢量差的方向信息和幅值信息,若方向信息表示方向为向右,幅值信息表示幅值为A,则运动矢量差为(A,0);若方向信息表示方向为向下,幅值信息表示幅值为A,则运动矢量差为(0,-A); 若方向信息表示方向为向左,幅值信息表示幅值为A,则运动矢量差为(-A,0);若方向信息表示方向为向上,幅值信息表示幅值为A,则运动矢量差为(0,A);若方向信息表示方向为向右上,幅值信息表示幅值为A,则运动矢量差为(A,A);若方向信息表示方向为向左上,幅值信息表示幅值为A,则运动矢量差为(-A,A);若方向信息表示方向为向左下,幅值信息表示幅值为A,则运动矢量差为(-A,-A);若方向信息表示方向为向右下,幅值信息表示幅值为A,则运动矢量差为(A,-A)。当然,上述只是几个示例,对此方向信息和幅值信息不做限制。
示例性的,关于运动矢量差的相关介绍,可以参见实施例15,在此不再赘述。
针对编码端来说,可以基于率失真代价值从第一参考帧信息对应的运动矢量候选列表中选择第一原始运动矢量,从第二参考帧信息对应的运动矢量候选列表中选择第二原始运动矢量,基于率失真代价值确定与第一原始运动矢量对应的第一运动矢量差的方向信息和幅值信息,与第二原始运动矢量对应的第二运动矢量差的方向信息和幅值信息。编码端向解码端发送编码比特流时,在编码比特流中编码第一原始运动矢量在第一参考帧信息对应的运动矢量候选列表中的索引值,第二原始运动矢量在第二参考帧信息对应的运动矢量候选列表中的索引值,第一运动矢量差的方向信息和幅值信息,第二运动矢量差的方向信息和幅值信息。
针对解码端来说,在接收到当前块的编码比特流后,基于第一原始运动矢量在第一参考帧信息对应的运动矢量候选列表中的索引值,从第一参考帧信息对应的运动矢量候选列表中选择第一原始运动矢量,基于第二原始运动矢量在第二参考帧信息对应的运动矢量候选列表中的索引值,从第二参考帧信息对应的运动矢量候选列表中选择第二原始运动矢量。
解码端还可以从编码比特流中解析出第一运动矢量差的方向信息和幅值信息,并根据该方向信息和该幅值信息确定第一运动矢量差。以及,从该编码比特流中解析出第二运动矢量差的方向信息和幅值信息,并根据该方向信息和该幅值信息确定第二运动矢量差。
示例性的,编码比特流中还可以包括第一原始运动矢量对应的第一参考帧信息,解码端可以根据第一原始运动矢量和第一原始运动矢量对应的第一参考帧信息,确定第一原始运动信息。编码比特流中还可以包括第二原始运动矢量对应的第二参考帧信息,解码端可以根据第二原始运动矢量和第二原始运动矢量对应的第二参考帧信息,确定第二原始运动信息。
然后,解码端可以根据第一运动矢量差和第一原始运动信息确定当前块的第一目标运动信息,并根据第二运动矢量差和第二原始运动信息确定当前块的第二目标运动信息。
示例性的,编码端还可以在编码比特流中编码增强角度加权预测模式的第一子模式标志和第二子模式标志,第一子模式标志指示对第一原始运动矢量叠加运动矢量差,或对第一原始运动矢量不叠加运动矢量差。第二子模式标志指示对第二原始运动矢量叠加运动矢量差,或对第二原始运动矢量不叠加运动矢量差。相关处理参见实施例15,在此不再赘述。
在上述实施例中,解码端是从当前块的编码比特流中解析出第一运动矢量差的方向信息和第二运动矢量差的方向信息,在实际应用中,还可以根据当前块的权重预测角度推导第一运动矢量差的方向信息,并根据当前块的权重预测角度推导第二运动矢量差的方向信息。
针对编码端来说,也可以根据当前块的权重预测角度推导第一运动矢量差的方向信息,并根据当前块的权重预测角度推导第二运动矢量差的方向信息。
示例性的,解码端可以从编码比特流中解析出第一运动矢量差的幅值信息和第二运动矢量差的幅值信息,在一种可能的实施方式中,编码端和解码端可以构建相同的一个运动矢量差幅值列表,编码端确定第一运动矢量差的幅值信息在该运动矢量差幅值列表中的幅值索引,且编码比特流包括第一运动矢量差的幅值索引。解码端从当前块的编码比特流中解析出第一运动矢量差的幅值索引,并从该运动矢量差幅值列表选取与该幅值索引对应的幅值信息,该幅值信息即第一运动矢量差的幅值信息。编码端确定第二运动矢量差的幅值信息在该运动矢量差幅值列表中的幅值索引,且编码比特流包括第二运动矢量差的幅值索引。解码端从当前块的编码比特流中解析出第二运动矢量差的幅值索引,并从该运动矢量差幅值列表选取与该幅值索引对应的幅值信息,该幅值信息即第二运动矢量差的幅值信息。
在另一种可能的实施方式中,编码端和解码端可以构建相同的至少两个运动矢量差幅值列表,如编码端和解码端构建相同的运动矢量差幅值列表1,并构建相同的运动矢量差幅值列表2。编码端先基于运动矢量差幅值列表的指示信息,从所有运动矢量差幅值列表中选取目标运动矢量差幅值列表;编码端确定第一运动矢量差的幅值信息在该目标运动矢量差幅值列表中的幅值索引,且编码比特流包括第一运动矢量差的幅值索引。
编码端还可以确定第二运动矢量差的幅值信息在该目标运动矢量差幅值列表中的幅值索引,且编码比特流可以包括第二运动矢量差的幅值索引。解码端可以从当前块的编码比特流中解析出第二运动矢量差的幅值索引,并从该目标运动矢量差幅值列表中选取与该幅值索引对应的幅值信息,该幅值信息即第二运动矢量差的幅值信息。
示例性的,关于运动矢量差幅值列表的相关内容,可以参见实施例16,在此不再赘述。
实施例22:在实施例4中,编码端/解码端可以基于第一参考帧信息对应的运动矢量候选列表和第二参考帧信息对应的运动矢量候选列表,获取当前块的第一目标运动矢量和第二目标运动矢量,在第一目标运动矢量指向的参考帧与第二目标运动矢量指向的参考帧为同一帧时,则第一参考帧信息与第二参考帧信息相同,即第一参考帧信息对应的运动矢量候选列表与第二参考帧信息对应的运动矢量候选列表为同一个,基于此,采用如下方式获取第一目标运动矢量和第二目标运动矢量:从运动矢量候选列表中选择一个候选运动矢量作为当前块的原始运动矢量;根据原始运动矢量确定当前块的第一目标运动矢量。根据第一目标运动矢量确定当前块的第二目标运动矢量,或,根据原始运动矢量确定当前块的第二目标运动矢量。
示例性的,获取与原始运动矢量对应的运动矢量差,根据该原始运动矢量确定当前块的第一目标运动矢量,根据第一目标运动矢量和该运动矢量差确定当前块的第二目标运动矢量。或者,获取与该原始运动矢量对应的运动矢量差,根据该原始运动矢量和该运动矢量差确定当前块的第一目标运动矢量,根据该原始运动矢量和该运动矢量差确定当前块的第二目标运动矢量。或者,获取与该原始运动矢量对应的第一运动矢量差和第二运动矢量差;根据该原始运动矢量和第一运动矢量差确定当前块的第一目标运动矢量,根据该第一目标运动矢量和第二运动矢量差确定当前块的第二目标运动矢量。或者,获取与该原始运动矢量对应的第一运动矢量差和第二运动矢量差;根据该原始运动矢量和第一运动矢量差确定当前块的第一目标运动矢量;根据该原始运动矢量和第二运动矢量差确定当前块的第二目标运动矢量。
在上述实施例中,第一目标运动矢量和第二目标运动矢量可以不同。
在实施例21或者实施例22中,是构建第一参考帧信息对应的运动矢量候选列表和第二参考帧信息对应的运动矢量候选列表,而不是构建运动信息候选列表,例如,采用针对参考帧列表ListX的第refIdx帧构建运动矢量候选列表,构建方式可以为帧间普通模式的运动矢量候选列表构建方式,也可以在实施例13或者实施例14的基础上,添加指向参考帧的限制,在添加指向参考帧的限制时,运动信息候选列表可以为运动矢量候选列表。示例性的,在构建运动矢量候选列表时,在判定可用性时增加对参考帧的判断,或者,在加入单向运动矢量时,可以采用伸缩操作(Scale)等手段。
在实施例1-实施例4中,编码端/解码端可以根据第一目标运动信息确定像素位置的第一预测值,根据第二目标运动信息确定像素位置的第二预测值,该过程可以参见帧间预测过程,对此不做限制。示例性的,在根据第一目标运动信息确定像素位置的第一预测值时,可以采用帧间加权预测模式得到像素位置的第一预测值。例如,先利用第一目标运动信息确定像素位置的初始预测值,然后对该初始预测值乘以预设因子,得到调整预测值。若调整预测值大于最大预测值,则将最大预测值作为当前块的第一预测值,若调整预测值小于最小预测值,则将最小预测值作为当前块的第一预测值,若调整预测值不小于最小预测值,且不大于最大预测值,则将调整预测值作为当前块的第一预测值。当然,上述方式只是示例,对此不做限制。同理,在根据第二目标运动信息确定像素位置的第二预测值时,也可以采用帧间加权预测模式得到像素位置的第二预测值,具体实现方式参见上述示例,在此不再重复赘述。
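上述帧间加权预测中"初始预测值乘以预设因子后做上下限裁剪"的过程,可以用如下示意代码概括(预设因子与预测值上下限均为示意参数,函数命名为示意,并非规范实现):

```python
# 示意代码:对初始预测值乘以预设因子得到调整预测值,
# 再按最大/最小预测值进行裁剪,得到最终预测值。

def weighted_pred_value(init_pred, factor, min_pred, max_pred):
    adjusted = init_pred * factor
    if adjusted > max_pred:       # 调整预测值大于最大预测值,取最大预测值
        return max_pred
    if adjusted < min_pred:       # 调整预测值小于最小预测值,取最小预测值
        return min_pred
    return adjusted               # 否则直接取调整预测值
```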
示例性的,实施例1-实施例3,实施例5-实施例12,实施例13-实施例20,可以单独实现,也可以组合实现。例如,实施例1和实施例2组合实现;实施例1和实施例3组合实现;实施例1和实施例5-实施例12中的至少一个实施例组合实现;实施例1和实施例13-实施例20中的至少一个实施例组合实现;实施例2和实施例5-实施例12中的至少一个实施例组合实现;实施例2和实施例13-实施例20中的至少一个实施例组合实现;实施例3和实施例5-实施例12中的至少一个实施例组合实现;实施例3和实施例13-实施例20中的至少一个实施例组合实现;当然,对实施例之间的组合方式不做限制。
示例性的,实施例4,实施例5-实施例12,实施例21-实施例22,可以单独实现,也可以组合实现。例如,实施例4和实施例5-实施例12中的至少一个实施例组合实现;实施例4和实施例21组合实现;实施例4和实施例22组合实现;实施例4和实施例21,实施例22组合实现;当然,对实施例之间的组合方式不做限制。
实施例23:基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置应用于编码端或者解码端,参见图10A所示,为所述装置的结构图,包括:
获取模块111,用于在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;配置模块112,用于根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;确定模块113,用于针对所述当前块的每个像素位置,根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置;根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值,根据所述像素位置的目标权重值确定所述像素位置的关联权重值;所述获取模块111,还用于获取运动信息候选列表,所述运动信息候选列表包括至少一个候选运动信息;基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息;所述确定模块113,还用于根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。
示例性的,所述获取模块111获取运动信息候选列表时具体用于:
获取待加入到运动信息候选列表中的至少一个可用运动信息;基于所述至少一个可用运动信息,获取所述运动信息候选列表;其中,所述可用运动信息包括如下运动信息的至少一种:空域运动信息;时域运动信息;预设运动信息。
示例性的,所述获取模块111基于所述至少一个可用运动信息,获取所述运动信息候选列表时具体用于:针对当前待加入到运动信息候选列表的可用运动信息,
若所述可用运动信息为单向运动信息,则将所述单向运动信息加入到运动信息候选列表;
若所述可用运动信息为双向运动信息,则将所述双向运动信息裁剪为第一单向运动信息和第二单向运动信息,将所述第一单向运动信息加入到运动信息候选列表。
或者,针对当前待加入到运动信息候选列表的可用运动信息,
若所述可用运动信息为单向运动信息,且所述单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将所述单向运动信息加入到运动信息候选列表;
若所述可用运动信息为双向运动信息,则将所述双向运动信息裁剪为第一单向运动信息和第二单向运动信息;若所述第一单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将所述第一单向运动信息加入到运动信息候选列表。
或者,针对当前待加入到运动信息候选列表的可用运动信息,
若所述可用运动信息为单向运动信息,且所述单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将所述单向运动信息加入到运动信息候选列表;
若所述可用运动信息为双向运动信息,则将所述双向运动信息裁剪为第一单向运动信息和第二单向运动信息;若第一单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将所述第一单向运动信息加入到运动信息候选列表;若第一单向运动信息与运动信息候选列表中已存在的候选运动信息重复,且第二单向运动信息与运动信息候选列表中已存在的候选运动信息不重复,则将所述第二单向运动信息加入到运动信息候选列表。
示例性的,所述获取模块111基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息时具体用于:从所述运动信息候选列表中选择候选运动信息作为所述当前块的第一原始运动信息,并从所述运动信息候选列表中选择候选运动信息作为所述当前块的第二原始运动信息;根据所述第一原始运动信息确定所述当前块的第一目标运动信息;根据所述第二原始运动信息确定所述当前块的第二目标运动信息。
示例性的,所述第一原始运动信息包括第一原始运动矢量,所述第一目标运动信息包括第一目标运动矢量,所述获取模块111根据所述第一原始运动信息确定所述当前块的第一目标运动信息时具体用于:获取与所述第一原始运动矢量对应的第一运动矢量差;根据所述第一运动矢量差和所述第一原始运动矢量确定所述第一目标运动矢量;或者,将第一原始运动矢量确定为第一目标运动矢量;所述第二原始运动信息包括第二原始运动矢量,所述第二目标运动信息包括第二目标运动矢量,所述获取模块111根据所述第二原始运动信息确定所述当前块的第二目标运动信息时具体用于:获取与所述第二原始运动矢量对应的第二运动矢量差;根据所述第二运动矢量差和所述第二原始运动矢量确定所述第二目标运动矢量;或者,将第二原始运动矢量确定为第二目标运动矢量。
示例性的,在所述第一目标运动信息指向的参考帧与所述第二目标运动信息指向的参考帧为同一帧时,所述获取模块111基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息时具体用于:从所述运动信息候选列表中选择一个候选运动信息作为所述当前块的原始运动信息;根据所述原始运动信息确定所述当前块的第一目标运动信息;根据所述第一目标运动信息确定所述当前块的第二目标运动信息,或者,根据所述原始运动信息确定所述当前块的第二目标运动信息。
示例性的,所述原始运动信息包括原始运动矢量,所述第一目标运动信息包括第一目标运动矢量,所述第二目标运动信息包括第二目标运动矢量,所述获取模块111具体用于:获取与所述原始运动矢量对应的运动矢量差;根据所述原始运动矢量确定所述当前块的第一目标运动矢量;根据所述第一目标运动矢量和所述运动矢量差确定所述当前块的第二目标运动矢量;或者,获取与所述原始运动矢量对应的运动矢量差;根据所述原始运动矢量和所述运动矢量差确定所述当前块的第一目标运动矢量;根据所述原始运动矢量和所述运动矢量差确定所述当前块的第二目标运动矢量;或者,获取与所述原始运动矢量对应的第一运动矢量差和第二运动矢量差;根据所述原始运动矢量和所述第一运动矢量差确定所述当前块的第一目标运动矢量;根据所述第一目标运动矢量和所述第二运动矢量差确定所述当前块的第二目标运动矢量;或者,获取与所述原始运动矢 量对应的第一运动矢量差和第二运动矢量差;根据所述原始运动矢量和所述第一运动矢量差确定所述当前块的第一目标运动矢量;根据所述原始运动矢量和所述第二运动矢量差确定所述当前块的第二目标运动矢量。
在上述实施例中,第一目标运动矢量与第二目标运动矢量可以不同。
基于与上述方法同样的申请构思,本申请实施例还提出一种编解码装置,所述装置应用于编码端或者解码端,参见图10B所示,为所述装置的结构图,包括:
获取模块121,用于在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;配置模块122,用于根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;确定模块123,用于针对所述当前块的每个像素位置,根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置;根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值,根据所述像素位置的目标权重值确定所述像素位置的关联权重值;所述获取模块121,还用于获取参考帧信息,并获取与所述参考帧信息对应的运动矢量候选列表,所述运动矢量候选列表包括至少一个候选运动矢量,所述参考帧信息包括第一参考帧信息和第二参考帧信息;基于所述运动矢量候选列表获取所述当前块的第一目标运动矢量和第二目标运动矢量;所述确定模块123,还用于根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;其中,所述第一目标运动信息包括第一目标运动矢量和所述第一目标运动矢量对应的第一参考帧信息,所述第二目标运动信息包括第二目标运动矢量和所述第二目标运动矢量对应的第二参考帧信息;根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。
示例性的,所述权重配置参数包括权重变换率,若所述当前块支持权重变换率切换模式,所述获取模块121采用如下方式获取所述当前块的权重变换率:获取所述当前块的权重变换率指示信息;根据所述权重变换率指示信息确定所述当前块的权重变换率:其中,若所述权重变换率指示信息为第一指示信息,则所述当前块的权重变换率为第一权重变换率;若所述权重变换率指示信息为第二指示信息,则所述当前块的权重变换率为第二权重变换率。
示例性的,所述当前块的权重变换率指示信息为所述当前块对应的权重变换率切换标识,所述第一指示信息用于指示所述当前块不需要进行权重变换率切换,所述第二指示信息用于指示所述当前块需要进行权重变换率切换。
所述权重配置参数包括权重变换率和权重变换的起始位置,所述配置模块122根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值时具体用于:
针对所述当前块外部的周边位置,根据所述周边位置的坐标值,所述权重变换的起始位置的坐标值,以及所述权重变换率,配置所述周边位置的参考权重值。
示例性的,所述运动矢量候选列表包括第一参考帧信息对应的运动矢量候选列表和第二参考帧信息对应的运动矢量候选列表,所述获取模块121基于所述运动矢量候选列表获取所述当前块的第一目标运动矢量和第二目标运动矢量时具体用于:
从所述第一参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为所述当前块的第一原始运动矢量,并从所述第二参考帧信息对应的运动矢量候选列表中选择一个候选运动矢量作为所述当前块的第二原始运动矢量;根据所述第一原始运动矢量确定所述当前块的第一目标运动矢量;根据所述第二原始运动矢量确定所述当前块的第二目标运动矢量。
示例性的,所述获取模块121根据所述第一原始运动矢量确定所述当前块的第一目标运动矢量时具体用于:获取与所述第一原始运动矢量对应的第一运动矢量差;根据所述第一运动矢量差和所述第一原始运动矢量确定所述第一目标运动矢量;或者,将第一原始运动矢量确定为第一目标运动矢量。
示例性的,所述获取模块121根据所述第二原始运动矢量确定所述当前块的第二目标运动矢量时具体用于:获取与所述第二原始运动矢量对应的第二运动矢量差;根据所述第二运动矢量差和所述第二原始运动矢量确定所述第二目标运动矢量;或者,将第二原始运动矢量确定为第二目标运动矢量。
示例性的,在所述第一目标运动矢量指向的参考帧与所述第二目标运动矢量指向的参考帧为同一帧时,所述第一参考帧信息与所述第二参考帧信息相同,所述运动矢量候选列表为一个运动矢量候选列表,所述获取模块121基于所述运动矢量候选列表获取所述当前块的第一目标运动矢量和第二目标运动矢量时具体用于:从所述运动矢量候选列表中选择一个候选运动矢量作为所述当前块的原始运动矢量;根据所述原始运动矢量确定所述当前块的第一目标运动矢量;根据所述第一目标运动矢量确定所述当前块的第二目标运动矢量,或者,根据所述原始运动矢量确定所述当前块的第二目标运动矢量。
示例性的,所述获取模块121具体用于:
获取与所述原始运动矢量对应的运动矢量差;根据所述原始运动矢量确定所述当前块的第一目标运动矢量;根据所述第一目标运动矢量和所述运动矢量差确定所述当前块的第二目标运动矢量;或者,获取与所述原始运动矢量对应的运动矢量差;根据所述原始运动矢量和所述运动矢量差确定所述当前块的第一目标运动矢量;根据所述原始运动矢量和所述运动矢量差确定所述当前块的第二目标运动矢量。或者,获取与所述原始运动矢量对应的第一运动矢量差和第二运动矢量差;根据所述原始运动矢量和所述第一运动矢量差确定所述当前块的第一目标运动矢量;根据所述第一目标运动矢量和所述第二运动矢量差确定所述当前块的第二目标运动矢量;或者,获取与所述原始运动矢量对应的第一运动矢量差和第二运动矢量差;根据所述原始运动矢量和所述第一运动矢量差确定所述当前块的第一目标运动矢量;根据所述原始运动矢量和所述第二运动矢量差确定所述当前块的第二目标运动矢量。
在上述实施例中,第一目标运动矢量和第二目标运动矢量可以不同。
基于与上述方法同样的申请构思,本申请实施例提供的解码端设备(也可以称为视频解码器),从硬件层面而言,其硬件架构示意图具体可以参见图10C所示。包括:处理器131和机器可读存储介质132,其中:所述机器可读存储介质132存储有能够被所述处理器131执行的机器可执行指令;所述处理器131用于执行机器可执行指令,以实现本申请上述示例公开的方法。例如,所述处理器131用于执行机器可执行指令,以实现如下步骤:
在确定对当前块启动加权预测时,获取所述当前块的权重预测角度和权重配置参数;
根据所述权重配置参数为所述当前块外部的周边位置配置参考权重值;
针对所述当前块的每个像素位置,根据所述权重预测角度从所述当前块外部的周边位置中确定所述像素位置指向的周边匹配位置;根据所述周边匹配位置关联的参考权重值确定所述像素位置的目标权重值,根据所述像素位置的目标权重值确定所述像素位置的关联权重值;
获取运动信息候选列表,所述运动信息候选列表包括至少一个候选运动信息;基于所述运动信息候选列表获取所述当前块的第一目标运动信息和第二目标运动信息;
根据所述当前块的第一目标运动信息确定所述像素位置的第一预测值,根据所述当前块的第二目标运动信息确定所述像素位置的第二预测值;根据所述第一预测值,所述目标权重值,所述第二预测值和所述关联权重值,确定所述像素位置的加权预测值;
根据所述当前块的所有像素位置的加权预测值确定所述当前块的加权预测值。或者,
when it is determined that weighted prediction is enabled for a current block, acquiring a weight prediction angle and weight configuration parameters of the current block;
configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters;
for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
acquiring reference frame information, and acquiring a motion vector candidate list corresponding to the reference frame information, the motion vector candidate list including at least one candidate motion vector, and the reference frame information including first reference frame information and second reference frame information; and acquiring a first target motion vector and a second target motion vector of the current block based on the motion vector candidate list;
determining a first prediction value of the pixel position according to first target motion information of the current block, and determining a second prediction value of the pixel position according to second target motion information of the current block; and determining a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value; where the first target motion information includes the first target motion vector and the first reference frame information corresponding to the first target motion vector, and the second target motion information includes the second target motion vector and the second reference frame information corresponding to the second target motion vector;
determining a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
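The per-pixel blend in the steps above can be sketched as follows (a minimal illustration assuming, as is common in such schemes, that the target weight and the associated weight sum to a fixed maximum of 8 and that integer rounding is used; the function and parameter names are illustrative, not from the patent):

```python
def weighted_prediction(pred1, pred2, target_weights, max_weight=8):
    """Blend two per-pixel prediction values. The associated weight of a
    pixel is max_weight minus its target weight, so the two weights always
    sum to max_weight; the sum is rounded and normalized by a right shift."""
    shift = max_weight.bit_length() - 1      # max_weight = 8 -> shift by 3
    offset = 1 << (shift - 1)                # rounding offset = 4
    return [
        (p1 * w + p2 * (max_weight - w) + offset) >> shift
        for p1, p2, w in zip(pred1, pred2, target_weights)
    ]
```

A pixel whose target weight is the maximum takes its value entirely from the first prediction, a pixel with target weight 0 entirely from the second, and intermediate weights blend the two.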
Based on the same application concept as the above method, an embodiment of the present application provides an encoder-side device (which may also be referred to as a video encoder). In terms of hardware, a schematic diagram of its hardware architecture may be seen in FIG. 10D. The device includes a processor 141 and a machine-readable storage medium 142, where the machine-readable storage medium 142 stores machine-executable instructions executable by the processor 141, and the processor 141 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor 141 is configured to execute the machine-executable instructions to implement the following steps:
when it is determined that weighted prediction is enabled for a current block, acquiring a weight prediction angle and weight configuration parameters of the current block;
configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters;
for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
acquiring a motion information candidate list, the motion information candidate list including at least one candidate motion information; and acquiring first target motion information and second target motion information of the current block based on the motion information candidate list;
determining a first prediction value of the pixel position according to the first target motion information of the current block, and determining a second prediction value of the pixel position according to the second target motion information of the current block; and determining a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value;
determining a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block. Alternatively:
when it is determined that weighted prediction is enabled for a current block, acquiring a weight prediction angle and weight configuration parameters of the current block;
configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters;
for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
acquiring reference frame information, and acquiring a motion vector candidate list corresponding to the reference frame information, the motion vector candidate list including at least one candidate motion vector, and the reference frame information including first reference frame information and second reference frame information; and acquiring a first target motion vector and a second target motion vector of the current block based on the motion vector candidate list;
determining a first prediction value of the pixel position according to first target motion information of the current block, and determining a second prediction value of the pixel position according to second target motion information of the current block; and determining a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value; where the first target motion information includes the first target motion vector and the first reference frame information corresponding to the first target motion vector, and the second target motion information includes the second target motion vector and the second reference frame information corresponding to the second target motion vector;
determining a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
Based on the same application concept as the above method, an embodiment of the present application further provides a machine-readable storage medium storing a number of computer instructions which, when executed by a processor, implement the methods disclosed in the above examples of the present application, such as the encoding and decoding methods in the above embodiments.
The systems, apparatuses, modules or units set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, which may take the form of a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet computer, a wearable device, or a combination of any of these devices. For convenience of description, the above apparatuses are described as being divided into units by function. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system, or a computer program product. The present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Embodiments of the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The above are merely embodiments of the present application and are not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (18)

  1. An encoding and decoding method, comprising:
    when it is determined that weighted prediction is enabled for a current block, acquiring a weight prediction angle and weight configuration parameters of the current block;
    configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters;
    for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
    acquiring a motion information candidate list comprising at least one candidate motion information; and acquiring first target motion information and second target motion information of the current block based on the motion information candidate list;
    determining a first prediction value of the pixel position according to the first target motion information of the current block, and determining a second prediction value of the pixel position according to the second target motion information of the current block; and determining a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value; and
    determining a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
  2. The method according to claim 1, wherein acquiring the motion information candidate list comprises:
    acquiring at least one available motion information to be added to the motion information candidate list; and
    acquiring the motion information candidate list based on the at least one available motion information;
    wherein the available motion information comprises at least one of the following:
    spatial motion information; temporal motion information; preset motion information.
  3. The method according to claim 2, wherein
    acquiring the at least one available motion information to be added to the motion information candidate list comprises:
    for a spatial neighboring block of the current block, if the spatial neighboring block exists and the spatial neighboring block is coded in an inter prediction mode, determining the motion information of the spatial neighboring block as available motion information.
  4. The method according to claim 2, wherein
    acquiring the at least one available motion information to be added to the motion information candidate list comprises:
    based on a preset position of the current block, selecting a temporal neighboring block corresponding to the preset position from a reference frame of the current block, and determining the motion information of the temporal neighboring block as available motion information.
  5. The method according to claim 2, wherein
    acquiring the at least one available motion information to be added to the motion information candidate list comprises:
    determining preset motion information as available motion information, the preset motion information comprising:
    default motion information derived from candidate motion information already present in the motion information candidate list.
  6. The method according to claim 2, wherein
    acquiring the motion information candidate list based on the at least one available motion information comprises:
    for an available motion information currently to be added to the motion information candidate list:
    if the available motion information is unidirectional motion information and the unidirectional motion information does not duplicate any candidate motion information already present in the motion information candidate list, adding the unidirectional motion information to the motion information candidate list; and
    if the available motion information is bidirectional motion information, splitting the bidirectional motion information into first unidirectional motion information and second unidirectional motion information, and if the first unidirectional motion information does not duplicate any candidate motion information already present in the motion information candidate list, adding the first unidirectional motion information to the motion information candidate list.
  7. The method according to claim 2, wherein
    acquiring the motion information candidate list based on the at least one available motion information comprises: for an available motion information currently to be added to the motion information candidate list, adding the available motion information to the motion information candidate list.
  8. The method according to claim 6, wherein
    the first unidirectional motion information is unidirectional motion information pointing to a reference frame in a first reference frame list;
    the second unidirectional motion information is unidirectional motion information pointing to a reference frame in a second reference frame list;
    wherein the first reference frame list is List0 and the second reference frame list is List1; or
    the first reference frame list is List1 and the second reference frame list is List0.
  9. The method according to claim 6, wherein
    the duplicate check between unidirectional motion information and candidate motion information already present in the motion information candidate list comprises:
    if the display order (POC) of the reference frame pointed to by the unidirectional motion information is the same as the POC of the reference frame pointed to by the candidate motion information, and the motion vector of the unidirectional motion information is the same as the motion vector of the candidate motion information, determining that the unidirectional motion information duplicates the candidate motion information; otherwise, determining that the unidirectional motion information does not duplicate the candidate motion information.
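The duplicate check described in claim 9 can be sketched as follows (illustrative Python; representing a unidirectional motion information as a (ref_poc, motion_vector) pair is an assumption made for illustration):

```python
def is_duplicate(uni, candidate):
    # Duplicate iff the POC (display order) of the referenced frames match
    # and the motion vectors match.
    ref_poc_a, mv_a = uni
    ref_poc_b, mv_b = candidate
    return ref_poc_a == ref_poc_b and mv_a == mv_b

def add_if_not_duplicate(candidate_list, uni):
    # Append the unidirectional motion information only if it duplicates
    # no candidate already present in the list.
    if not any(is_duplicate(uni, c) for c in candidate_list):
        candidate_list.append(uni)
```

Note that a matching motion vector alone is not enough: two motion infos pointing to frames with different POCs are not duplicates.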
  10. The method according to claim 1, wherein acquiring the first target motion information and the second target motion information of the current block based on the motion information candidate list comprises:
    selecting a candidate motion information from the motion information candidate list as first original motion information of the current block, and selecting a candidate motion information from the motion information candidate list as second original motion information of the current block;
    determining the first target motion information of the current block according to the first original motion information; and
    determining the second target motion information of the current block according to the second original motion information.
  11. The method according to claim 10, wherein
    the first original motion information comprises a first original motion vector, the first target motion information comprises a first target motion vector, and determining the first target motion information of the current block according to the first original motion information comprises:
    acquiring a first motion vector difference corresponding to the first original motion vector, and determining the first target motion vector according to the first motion vector difference and the first original motion vector; or
    determining the first original motion vector as the first target motion vector;
    the second original motion information comprises a second original motion vector, the second target motion information comprises a second target motion vector, and determining the second target motion information of the current block according to the second original motion information comprises:
    acquiring a second motion vector difference corresponding to the second original motion vector, and determining the second target motion vector according to the second motion vector difference and the second original motion vector; or
    determining the second original motion vector as the second target motion vector.
  12. The method according to claim 11, wherein
    acquiring the first motion vector difference corresponding to the first original motion vector comprises:
    acquiring direction information and amplitude information of the first motion vector difference; and
    determining the first motion vector difference according to the direction information and the amplitude information of the first motion vector difference;
    acquiring the second motion vector difference corresponding to the second original motion vector comprises:
    acquiring direction information and amplitude information of the second motion vector difference; and
    determining the second motion vector difference according to the direction information and the amplitude information of the second motion vector difference.
  13. The method according to claim 12, wherein, when the method is applied to a decoder side:
    acquiring the direction information of the first motion vector difference comprises:
    parsing the direction information of the first motion vector difference from an encoded bitstream of the current block; or
    deriving the direction information of the first motion vector difference according to the weight prediction angle of the current block;
    acquiring the direction information of the second motion vector difference comprises:
    parsing the direction information of the second motion vector difference from the encoded bitstream of the current block; or
    deriving the direction information of the second motion vector difference according to the weight prediction angle of the current block.
  14. The method according to claim 12, wherein, when the method is applied to a decoder side:
    acquiring the amplitude information of the first motion vector difference comprises:
    parsing the amplitude information of the first motion vector difference from an encoded bitstream of the current block;
    acquiring the amplitude information of the second motion vector difference comprises:
    parsing the amplitude information of the second motion vector difference from the encoded bitstream of the current block.
  15. The method according to any one of claims 12 to 14, wherein:
    if the direction information of the first motion vector difference indicates rightward and the amplitude information of the first motion vector difference is Ar1, the first motion vector difference is (Ar1, 0); if the direction information of the first motion vector difference indicates downward and the amplitude information is Ad1, the first motion vector difference is (0, -Ad1); if the direction information of the first motion vector difference indicates leftward and the amplitude information is Al1, the first motion vector difference is (-Al1, 0); if the direction information of the first motion vector difference indicates upward and the amplitude information is Au1, the first motion vector difference is (0, Au1);
    if the direction information of the second motion vector difference indicates rightward and the amplitude information of the second motion vector difference is Ar2, the second motion vector difference is (Ar2, 0); if the direction information of the second motion vector difference indicates downward and the amplitude information is Ad2, the second motion vector difference is (0, -Ad2); if the direction information of the second motion vector difference indicates leftward and the amplitude information is Al2, the second motion vector difference is (-Al2, 0); if the direction information of the second motion vector difference indicates upward and the amplitude information is Au2, the second motion vector difference is (0, Au2).
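The direction/amplitude-to-MVD mapping above can be sketched as follows (illustrative Python, not part of the claims; note the sign convention, under which "down" maps to a negative vertical component and "up" to a positive one):

```python
def mvd_from_direction_and_amplitude(direction, amplitude):
    """Reconstruct a motion vector difference (x, y) from its parsed
    direction information and amplitude information."""
    table = {
        "right": (amplitude, 0),
        "down": (0, -amplitude),
        "left": (-amplitude, 0),
        "up": (0, amplitude),
    }
    return table[direction]
```

The same mapping applies to the first and the second motion vector difference; only the parsed amplitudes (Ar1/Ad1/Al1/Au1 versus Ar2/Ad2/Al2/Au2) differ.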
  16. An encoding and decoding apparatus, comprising:
    an acquisition module configured to, when it is determined that weighted prediction is enabled for a current block, acquire a weight prediction angle and weight configuration parameters of the current block;
    a configuration module configured to configure reference weight values for peripheral positions outside the current block according to the weight configuration parameters; and
    a determination module configured to, for each pixel position of the current block, determine, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determine a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determine an associated weight value of the pixel position according to the target weight value of the pixel position;
    wherein the acquisition module is further configured to acquire a motion information candidate list comprising at least one candidate motion information, and acquire first target motion information and second target motion information of the current block based on the motion information candidate list; and
    the determination module is further configured to determine a first prediction value of the pixel position according to the first target motion information of the current block, determine a second prediction value of the pixel position according to the second target motion information of the current block, determine a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value, and determine a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
  17. A decoder-side device, comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor;
    the processor is configured to execute the machine-executable instructions to implement the following steps:
    when it is determined that weighted prediction is enabled for a current block, acquiring a weight prediction angle and weight configuration parameters of the current block;
    configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters;
    for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
    acquiring a motion information candidate list comprising at least one candidate motion information; and acquiring first target motion information and second target motion information of the current block based on the motion information candidate list;
    determining a first prediction value of the pixel position according to the first target motion information of the current block, and determining a second prediction value of the pixel position according to the second target motion information of the current block; and determining a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value; and
    determining a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
  18. An encoder-side device, comprising a processor and a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions executable by the processor;
    the processor is configured to execute the machine-executable instructions to implement the following steps:
    when it is determined that weighted prediction is enabled for a current block, acquiring a weight prediction angle and weight configuration parameters of the current block;
    configuring reference weight values for peripheral positions outside the current block according to the weight configuration parameters;
    for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from the peripheral positions outside the current block; determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
    acquiring a motion information candidate list comprising at least one candidate motion information; and acquiring first target motion information and second target motion information of the current block based on the motion information candidate list;
    determining a first prediction value of the pixel position according to the first target motion information of the current block, and determining a second prediction value of the pixel position according to the second target motion information of the current block; and determining a weighted prediction value of the pixel position according to the first prediction value, the target weight value, the second prediction value and the associated weight value; and
    determining a weighted prediction value of the current block according to the weighted prediction values of all pixel positions of the current block.
PCT/CN2021/102199 2020-06-30 2021-06-24 Encoding and decoding method and apparatus, and device therefor WO2022001837A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US18/009,949 US20230344985A1 (en) 2020-06-30 2021-06-24 Encoding and decoding method, apparatus, and device
AU2021298606A AU2021298606C1 (en) 2020-06-30 2021-06-24 Encoding and decoding method and apparatus, and device therefor
JP2022578722A JP2023531010A (ja) 2020-06-30 2021-06-24 符号化・復号方法、装置及びその機器
KR1020227043009A KR20230006017A (ko) 2020-06-30 2021-06-24 코딩 및 디코딩 방법, 장치 및 이의 기기
EP21834647.6A EP4152750A4 (en) 2020-06-30 2021-06-24 ENCODING AND DECODING APPARATUS AND METHOD, AND ASSOCIATED DEVICE
ZA2022/13605A ZA202213605B (en) 2020-06-30 2022-12-15 Encoding and decoding method and apparatus, and device therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010622752.7 2020-06-30
CN202010622752.7A CN113873249B (zh) 2020-06-30 2020-06-30 Encoding and decoding method and apparatus, and device therefor

Publications (1)

Publication Number Publication Date
WO2022001837A1 true WO2022001837A1 (zh) 2022-01-06

Family

ID=78668983

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102199 WO2022001837A1 (zh) 2020-06-30 2021-06-24 一种编解码方法、装置及其设备

Country Status (9)

Country Link
US (1) US20230344985A1 (zh)
EP (1) EP4152750A4 (zh)
JP (1) JP2023531010A (zh)
KR (1) KR20230006017A (zh)
CN (2) CN113873249B (zh)
AU (1) AU2021298606C1 (zh)
TW (1) TWI790662B (zh)
WO (1) WO2022001837A1 (zh)
ZA (1) ZA202213605B (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113873249B (zh) 2020-06-30 2023-02-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method and apparatus, and device therefor
CN114598889B (zh) * 2020-12-03 2023-03-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method, apparatus and device
CN115002486A (zh) * 2022-05-26 2022-09-02 百果园技术(新加坡)有限公司 Method and apparatus for determining weights of a prediction block of a coding unit

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107113425A (zh) * 2014-11-06 2017-08-29 三星电子株式会社 Video encoding method and apparatus, and video decoding method and apparatus
US20180288425A1 (en) * 2017-04-04 2018-10-04 Arris Enterprises Llc Memory Reduction Implementation for Weighted Angular Prediction
CN109819255A (zh) * 2018-12-28 2019-05-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method and device therefor
EP3644612A1 (en) * 2018-10-23 2020-04-29 InterDigital VC Holdings, Inc. Method and device for picture encoding and decoding
CN112543323A (zh) * 2019-09-23 2021-03-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, apparatus and device
CN112584142A (zh) * 2019-09-30 2021-03-30 杭州海康威视数字技术股份有限公司 Encoding and decoding method, apparatus and device

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN103533376B (zh) * 2012-07-02 2017-04-12 华为技术有限公司 Processing method and apparatus for motion information of inter-frame prediction coding, and coding and decoding system
US10523949B2 (en) * 2016-05-25 2019-12-31 Arris Enterprises Llc Weighted angular prediction for intra coding
CN110800302A (zh) * 2017-06-07 2020-02-14 联发科技股份有限公司 Method and apparatus of intra-inter prediction for video encoding and decoding
CN117560506A (zh) * 2018-03-29 2024-02-13 华为技术有限公司 Bidirectional inter-frame prediction method and apparatus
MX2021001833A (es) * 2018-08-17 2021-05-12 Hfi Innovation Inc Methods and apparatuses of video processing with bidirectional prediction in video coding systems
CN110933426B (zh) * 2018-09-20 2022-03-01 杭州海康威视数字技术股份有限公司 Decoding and encoding method and device therefor
CN111385569B (zh) * 2018-12-28 2022-04-26 杭州海康威视数字技术股份有限公司 Encoding and decoding method and device therefor
CN113452997B (zh) * 2020-03-25 2022-07-29 杭州海康威视数字技术股份有限公司 Encoding and decoding method, apparatus and device
CN113873249B (zh) * 2020-06-30 2023-02-28 杭州海康威视数字技术股份有限公司 Encoding and decoding method and apparatus, and device therefor


Non-Patent Citations (2)

Title
See also references of EP4152750A4 *
SUN YUCHENG, CHEN FANGDONG, WANG LI, PU SHILIANG: "Angular Weighted Prediction for Next-Generation Video Coding Standard", 2021 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 5 July 2021 (2021-07-05) - 9 July 2021 (2021-07-09), pages 1 - 6, XP055883777, ISBN: 978-1-6654-3864-3, DOI: 10.1109/ICME51207.2021.9428315 *

Also Published As

Publication number Publication date
CN113873249B (zh) 2023-02-28
CN113873249A (zh) 2021-12-31
TWI790662B (zh) 2023-01-21
JP2023531010A (ja) 2023-07-20
CN113709488B (zh) 2023-02-28
ZA202213605B (en) 2024-04-24
AU2021298606C1 (en) 2024-05-09
EP4152750A4 (en) 2023-10-18
TW202205852A (zh) 2022-02-01
EP4152750A1 (en) 2023-03-22
AU2021298606B2 (en) 2024-02-01
AU2021298606A1 (en) 2023-02-02
CN113709488A (zh) 2021-11-26
KR20230006017A (ko) 2023-01-10
US20230344985A1 (en) 2023-10-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21834647; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 20227043009; Country of ref document: KR; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2022578722; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 2021834647; Country of ref document: EP; Effective date: 20221215)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2021298606; Country of ref document: AU; Date of ref document: 20210624; Kind code of ref document: A)