WO2024007165A1 - Encoding and decoding method, apparatus, encoding device, decoding device and storage medium


Info

Publication number: WO2024007165A1
Authority: WO (WIPO PCT)
Application number: PCT/CN2022/103963
Prior art keywords: value, current block, color component, predicted, sampling point
Other languages: English (en), French (fr)
Inventors: 霍俊彦, 马彦卓, 杨付正, 张振尧, 李明
Original assignee / Application filed by: Oppo广东移动通信有限公司
Priority to: PCT/CN2022/103963 (WO2024007165A1); TW112124424A (TW202404358A)
Publication: WO2024007165A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/59: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N 19/593: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • the embodiments of the present application relate to the field of video coding and decoding technology, and in particular, to a coding and decoding method, device, encoding device, decoding device, and storage medium.
  • JVET: Joint Video Exploration Team.
  • H.266/VVC includes cross-component prediction technology.
  • There can be a large deviation between the value predicted by the cross-component prediction technology of H.266/VVC and the original value, which results in low prediction accuracy, a decrease in the quality of the decoded video, and reduced coding performance.
  • Embodiments of the present application provide a coding and decoding method, device, encoding device, decoding device and storage medium, which can not only improve the accuracy of chroma prediction and reduce the computational complexity of chroma prediction, but also improve coding and decoding performance.
  • Embodiments of the present application provide a decoding method, including: determining the reference value of the first color component of the current block and the reference value of the second color component of the current block; determining the weighting coefficient according to the reference value of the first color component of the current block; determining the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block; determining the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block; determining the predicted value of the second color component sampling point in the current block according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block and the corresponding weighting coefficient; and determining the reconstructed value of the second color component sampling point in the current block according to the predicted value of the second color component sampling point in the current block.
  • Embodiments of the present application provide an encoding method that performs the same prediction steps, except that the predicted difference value of the second color component sampling point in the current block is determined according to the predicted value of the second color component sampling point in the current block.
  • Embodiments of the present application provide an encoding device, including a first determination unit, a first calculation unit and a first prediction unit; wherein,
  • the first determination unit is configured to determine the reference value of the first color component of the current block and the reference value of the second color component of the current block, and to determine the weighting coefficient according to the reference value of the first color component of the current block;
  • the first calculation unit is configured to determine the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block, and to determine the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block;
  • the first prediction unit is configured to determine the predicted value of the second color component sampling point in the current block according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block and the corresponding weighting coefficient;
  • the first determination unit is further configured to determine the predicted difference value of the second color component sampling point in the current block according to the predicted value of the second color component sampling point in the current block.
  • embodiments of the present application provide an encoding device, including a first memory and a first processor; wherein,
  • a first memory for storing a computer program capable of running on the first processor
  • the first processor is configured to execute the method described in the second aspect when running the computer program.
  • embodiments of the present application provide a decoding device, including a second determination unit, a second calculation unit, and a second prediction unit; wherein,
  • a second determination unit configured to determine the reference value of the first color component of the current block and the reference value of the second color component of the current block; and determine the weighting coefficient according to the reference value of the first color component of the current block;
  • the second calculation unit is configured to determine the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block, and to determine the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block;
  • the second prediction unit is configured to determine the predicted value of the second color component sampling point in the current block based on the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient;
  • the second determination unit is further configured to determine the reconstructed value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block.
  • embodiments of the present application provide a decoding device including a second memory and a second processor; wherein,
  • a second memory for storing a computer program capable of running on the second processor
  • the second processor is configured to execute the method described in the first aspect when running the computer program.
  • Embodiments of the present application provide a computer-readable storage medium that stores a computer program; when the computer program is executed, the method described in the first aspect or the method described in the second aspect is implemented.
  • Embodiments of the present application provide a coding and decoding method, device, encoding device, decoding device and storage medium. At both the encoding end and the decoding end, the reference value of the first color component of the current block and the reference value of the second color component of the current block are determined; the weighting coefficient is determined according to the reference value of the first color component of the current block; the reference mean value of the second color component of the current block is determined according to the reference value of the second color component of the current block; the reference sample value of the second color component of the current block is determined according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block; and the predicted value of the second color component sampling point in the current block is determined according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block and the corresponding weighting coefficient.
  • In this way, at the encoding end the predicted difference value of the second color component sampling point in the current block can be determined, and at the decoding end the reconstructed value of the second color component sampling point in the current block can be determined from the predicted value of the second color component sampling point in the current block.
  • That is to say, based on the color component reference information in the adjacent area of the current block and the color component reconstruction information in the current block, it is not only necessary to construct a brightness difference vector in order to determine the weighting coefficient; it is also necessary to determine the chroma mean from the chroma reference information, determine the chroma difference vector from the chroma reference information and the chroma mean, and then determine the chroma predicted value from the chroma difference vector and the corresponding weighting coefficients, plus the chroma mean. Thus, by optimizing the calculation process of weight-based chroma prediction, it is possible to use integer operations throughout and to take full advantage of the characteristics of the weight model to reasonably optimize the integer operation process; while fully ensuring the accuracy of chroma prediction, this also reduces computational complexity, improves encoding and decoding efficiency, and thereby improves encoding and decoding performance (a simplified sketch of this prediction flow is given below).
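As an informal illustration of the flow just described, the following Python sketch computes a weight-based chroma prediction. It assumes floating-point weights and weight normalization for readability; the names recY, refY, refC and weight_fn are illustrative, and the exact weight function and integerization are described later in the text.

```python
def wcp_predict(recY, refY, refC, weight_fn):
    """Weight-based chroma prediction for one block (illustrative sketch only).

    recY: 2-D list of reconstructed luma samples of the current block,
          already aligned to the chroma resolution.
    refY: list of N reference luma values from the adjacent area.
    refC: list of N reference chroma values from the adjacent area.
    weight_fn: maps a luma difference to a weighting coefficient.
    """
    n = len(refC)
    meanC = sum(refC) / n                     # reference mean of the chroma component
    diffC = [c - meanC for c in refC]         # chroma difference vector (reference samples)
    predC = []
    for row in recY:
        pred_row = []
        for y in row:
            diffY = [abs(y - ry) for ry in refY]       # luma difference vector
            w = [weight_fn(d) for d in diffY]          # weighting coefficients
            wsum = sum(w) or 1.0                       # normalization (assumption)
            # weighted sum of chroma differences, then add back the chroma mean
            pred_row.append(meanC + sum(wk * dk for wk, dk in zip(w, diffC)) / wsum)
        predC.append(pred_row)
    return predC
```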
  • Figure 1 is a schematic diagram of the distribution of effective adjacent areas
  • Figure 2 is a schematic diagram of the distribution of selected areas under different prediction modes
  • Figure 3 is a schematic flow chart of a model parameter derivation scheme
  • Figure 4A is a schematic block diagram of an encoder provided by an embodiment of the present application.
  • Figure 4B is a schematic block diagram of a decoder provided by an embodiment of the present application.
  • Figure 5 is a schematic flow chart of a decoding method provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of a reference area of a current block provided by an embodiment of the present application.
  • Figure 7 is a schematic flow chart of another decoding method provided by an embodiment of the present application.
  • Figure 8 is a schematic flow chart of another decoding method provided by an embodiment of the present application.
  • Figure 9 is a schematic flow chart of an encoding method provided by an embodiment of the present application.
  • Figure 10 is a schematic structural diagram of an encoding device provided by an embodiment of the present application.
  • Figure 11 is a schematic diagram of the specific hardware structure of an encoding device provided by an embodiment of the present application.
  • Figure 12 is a schematic structural diagram of a decoding device provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of the specific hardware structure of a decoding device provided by an embodiment of the present application.
  • Figure 14 is a schematic structural diagram of a coding and decoding system provided by an embodiment of the present application.
  • the first color component, the second color component and the third color component are generally used to characterize the coding block (Coding Block, CB).
  • the three color components are a brightness component, a blue chroma component and a red chroma component respectively.
  • the brightness component is usually represented by the symbol Y
  • the blue chroma component is usually represented by the symbol Cb or U
  • the red chroma component is usually represented by the symbol Cr or V; in this way, the video image can be represented in the YCbCr format or the YUV format.
  • the video image may also be in RGB format, YCgCo format, etc., and the embodiments of this application do not impose any limitations.
  • The cross-component prediction technology mainly includes the cross-component linear model (Cross-component Linear Model, CCLM) prediction mode and the multi-directional linear model (Multi-Directional Linear Model, MDLM) prediction mode. Whether the model parameters are derived according to the CCLM prediction mode or according to the MDLM prediction mode, the corresponding prediction model can realize prediction between color components, for example from the first color component to the second color component, from the first color component to the third color component, from the second color component to the first or third color component, or from the third color component to the first or second color component.
  • VVC: Versatile Video Coding.
  • Taking the CCLM prediction mode as an example, the prediction model can be expressed as Pred_C(i, j) = α · Rec_L(i, j) + β, where i, j represent the position coordinates of the pixel to be predicted in the coding block, i representing the horizontal direction and j the vertical direction; Pred_C(i, j) represents the chroma prediction value corresponding to the pixel to be predicted at position (i, j) in the coding block; Rec_L(i, j) represents the (downsampled) brightness reconstruction value corresponding to the pixel to be predicted at position (i, j) in the same coding block; and α and β represent the model parameters, which can be derived from reference pixels.
  • H.266/VVC includes three cross-component linear model prediction modes, namely: left and upper adjacent intra-frame CCLM prediction mode (can be represented by INTRA_LT_CCLM), left and lower left adjacent intra-frame The CCLM prediction mode (can be represented by INTRA_L_CCLM) and the upper and upper right adjacent intra-frame CCLM prediction mode (can be represented by INTRA_T_CCLM).
  • Each prediction mode can select a preset number (such as 4) of reference pixels for the derivation of the model parameters α and β. The biggest difference between these three prediction modes is that the selection areas of the reference pixels used to derive the model parameters α and β are different.
  • For a coding block whose chroma component size is W×H, assume that the upper selection area corresponding to the reference pixels has length W' and the left selection area has length H'. Although the selection areas of the INTRA_L_CCLM mode and the INTRA_T_CCLM mode are nominally defined as W+H, in practical applications the selection area of the INTRA_L_CCLM mode is limited to H+H and the selection area of the INTRA_T_CCLM mode is limited to W+W.
  • Figure 1 shows a schematic distribution diagram of effective adjacent areas.
  • In Figure 1, the left adjacent area, the lower left adjacent area, the upper adjacent area and the upper right adjacent area are all valid; in addition, the block filled with gray is the pixel to be predicted at position coordinate (i, j) in the coding block.
  • Figure 2 shows the selection areas of the three prediction modes: (a) shows the selection area of the INTRA_LT_CCLM mode, including the left adjacent area and the upper adjacent area; (b) shows the selection area of the INTRA_L_CCLM mode, including the left adjacent area and the lower left adjacent area; (c) shows the selection area of the INTRA_T_CCLM mode, including the upper adjacent area and the upper right adjacent area.
  • pixel selection for model parameter derivation can be performed within the selection area.
  • the pixels selected in this way can be called reference pixels, and usually the number of reference pixels is four; and for a W ⁇ H coding block with a certain size, the position of the reference pixel is generally determined.
  • the chromaticity prediction is currently performed according to the flow diagram of the model parameter derivation scheme shown in Figure 3.
  • the process can include:
  • In VVC, whether the number of effective reference pixels is 0 is judged based on the validity of the adjacent areas.
  • the prediction model is constructed using the principle of "two points determine a straight line", and the two points here can be called fitting points.
  • Specifically, two reference pixels with larger brightness values and two reference pixels with smaller brightness values are obtained through comparison; one mean point (denoted mean_max) is computed from the two reference pixels with larger values and another mean point (denoted mean_min) from the two reference pixels with smaller values, giving two mean points mean_max and mean_min; then, using mean_max and mean_min as the two fitting points, the model parameters (denoted α and β) can be derived; finally, a prediction model is constructed from α and β, and the prediction processing of the chroma component is performed based on this prediction model (a simplified sketch follows below).
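The following Python sketch illustrates the min/max fitting described above, assuming four (luma, chroma) reference pairs and floating-point arithmetic rather than the integer form used in the VVC specification.

```python
def derive_cclm_params(ref_pixels):
    """Derive linear model parameters (alpha, beta) from reference pixels.

    ref_pixels: list of (luma, chroma) tuples, typically 4 entries.
    Returns (alpha, beta) such that pred_chroma = alpha * luma + beta.
    """
    pixels = sorted(ref_pixels, key=lambda p: p[0])   # sort by luma value
    lo, hi = pixels[:2], pixels[-2:]                  # two smaller / two larger pixels
    y_min = (lo[0][0] + lo[1][0]) / 2.0               # mean_min fitting point
    c_min = (lo[0][1] + lo[1][1]) / 2.0
    y_max = (hi[0][0] + hi[1][0]) / 2.0               # mean_max fitting point
    c_max = (hi[0][1] + hi[1][1]) / 2.0
    if y_max == y_min:                                # degenerate case: flat luma
        return 0.0, (c_min + c_max) / 2.0
    alpha = (c_max - c_min) / (y_max - y_min)
    beta = c_min - alpha * y_min
    return alpha, beta

def cclm_predict(rec_luma, alpha, beta):
    """Apply the linear model Pred_C(i, j) = alpha * Rec_L(i, j) + beta."""
    return [[alpha * y + beta for y in row] for row in rec_luma]
```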
  • Coding blocks with different content characteristics all use such a simple linear model to map brightness to chroma in order to achieve chroma prediction, but the mapping function from brightness to chroma is not the same in every coding block and cannot always be accurately fitted by this simple linear model, which results in inaccurate prediction results for some coding blocks.
  • pixels at different positions in the coding block use the same model parameters ⁇ and ⁇ .
  • Embodiments of the present application provide an encoding method: determine the reference value of the first color component of the current block and the reference value of the second color component of the current block; determine the weighting coefficient according to the reference value of the first color component of the current block; determine the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block; determine the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block; determine the predicted value of the second color component sampling point in the current block according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block and the corresponding weighting coefficient; and, based on the predicted value of the second color component sampling point in the current block, determine the predicted difference value of the second color component sampling point in the current block.
  • The embodiment of the present application also provides a corresponding decoding method, which determines the reference value of the first color component of the current block and the reference value of the second color component of the current block, and determines the weighting coefficient according to the reference value of the first color component of the current block.
  • The encoder 100 may include a transformation and quantization unit 101, an intra estimation unit 102, an intra prediction unit 103, a motion compensation unit 104, a motion estimation unit 105, an inverse transformation and inverse quantization unit 106, a filter control analysis unit 107, a filtering unit 108, an encoding unit 109 and a decoded image cache unit 110, etc.;
  • the filtering unit 108 can implement deblocking filtering and sample adaptive offset (Sample Adaptive Offset, SAO) filtering,
  • the encoding unit 109 can implement header information encoding and context-based adaptive binary arithmetic coding (Context-based Adaptive Binary Arithmetic Coding, CABAC).
  • A video coding block can be obtained by partitioning a coding tree unit (Coding Tree Unit, CTU); the residual pixel information obtained after intra-frame or inter-frame prediction is then processed by the transformation and quantization unit 101,
  • which transforms the video coding block, including transforming the residual information from the pixel domain to the transform domain, and quantizes the resulting transform coefficients to further reduce the bit rate;
  • the intra estimation unit 102 and the intra prediction unit 103 are used to perform intra prediction on the video coding block; specifically, the intra estimation unit 102 and the intra prediction unit 103 are used to determine the intra prediction mode to be used to encode the video coding block;
  • the motion compensation unit 104 and the motion estimation unit 105 are used to perform inter-frame prediction encoding of the received video coding block with respect to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 105 is the process of generating a motion vector,
  • and the motion vector can estimate the motion of the video coding block, after which the motion compensation unit 104 performs motion compensation based on the motion vector determined by the motion estimation unit 105; after determining the intra prediction mode, the intra prediction unit 103 is also used to provide the selected intra prediction data to the encoding unit 109, and the motion estimation unit 105 also sends the calculated motion vector data to the encoding unit 109; in addition, the inverse transformation and inverse quantization unit 106 is used for reconstruction of the video coding block, reconstructing the residual block in the pixel domain; the reconstructed residual block has blocking artifacts removed by the filter control analysis unit 107 and the filtering unit 108, and is then added to a predictive block in the frame of the decoded image cache unit 110 to generate a reconstructed video coding block; the encoding unit 109 is used to encode various encoding parameters and quantized transform coefficients,
  • and the contextual content can be based on adjacent coding blocks and can be used to encode information indicating the determined intra prediction mode, after which the code stream of the video signal is output; the decoded image cache unit 110 is used to store the reconstructed video coding blocks for prediction reference. As video image encoding proceeds, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are all stored in the decoded image cache unit 110.
  • the decoder 200 includes a decoding unit 201, an inverse transform and inverse quantization unit 202, an intra prediction unit 203, a motion compensation unit 204, a filtering unit 205 and a decoded image cache unit. 206, etc., wherein the decoding unit 201 can implement header information decoding and CABAC decoding, and the filtering unit 205 can implement deblocking filtering and SAO filtering.
  • After the encoder 100 outputs the code stream of the video signal, the code stream is input into the decoder 200 and first passes through the decoding unit 201 to obtain the decoded transform coefficients; the transform coefficients are processed by the inverse transform and inverse quantization unit 202 to generate a residual block in the pixel domain; the intra prediction unit 203 may be used to generate prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture; the motion compensation unit 204 determines prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block for the video decoding block being decoded;
  • a decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 202 with the corresponding predictive block produced by the intra prediction unit 203 or the motion compensation unit 204; the decoded video signal then passes through the filtering unit 205, which removes blocking artifacts and thereby improves video quality; the decoded video blocks are then stored in the decoded image cache unit 206, which stores reference images for subsequent intra prediction or motion compensation and is also used for the output of the video signal, that is, the restored original video signal is obtained.
  • The method in the embodiments of the present application is mainly applied to the intra prediction unit 103 shown in Figure 4A and the intra prediction unit 203 shown in Figure 4B. That is to say, the embodiments of the present application can be applied to an encoder, to a decoder, or even to both an encoder and a decoder at the same time; this is not specifically limited in the embodiments of the present application.
  • the "current block” specifically refers to the coding block currently to be intra-predicted; when applied to the intra prediction unit 203 part, the "current block” specifically refers to Refers to the decoding block currently to be intra-predicted.
  • FIG. 5 shows a schematic flow chart of a decoding method provided by an embodiment of the present application. As shown in Figure 5, the method may include:
  • S501 Determine the reference value of the first color component of the current block and the reference value of the second color component of the current block.
  • the decoding method in the embodiment of the present application is applied to a decoding device, or a decoding device integrated with the decoding device (which may also be referred to as a "decoder" for short).
  • the decoding method in the embodiment of the present application may specifically refer to an intra-frame prediction method, and more specifically, an integer operation method of weight-based chroma prediction (Weight-based Chroma Prediction, WCP).
  • The video image may be divided into multiple decoding blocks, and each decoding block may include a first color component, a second color component and a third color component; the current block here refers to the decoding block in the video image on which intra prediction is currently to be performed.
  • Assuming that the current block performs prediction of the first color component, and the first color component is a brightness component, that is, the component to be predicted is a brightness component, then the current block may also be called a brightness prediction block; alternatively, assuming that the current block performs prediction of the second color component, and the second color component is a chroma component, that is, the component to be predicted is a chroma component, then the current block may also be called a chroma prediction block.
  • The reference information of the current block may include the value of the first color component sampling point in the adjacent area of the current block and the value of the second color component sampling point in the adjacent area of the current block;
  • these values can be determined based on the decoded pixels in the adjacent area of the current block.
  • the adjacent area of the current block may include at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • the upper adjacent area and the upper right adjacent area can be regarded as the upper area
  • the left adjacent area and the lower left adjacent area can be regarded as the left area
  • the adjacent area can also include the upper left area; see Figure 6 for details.
  • In Figure 6, the upper area, left area and upper left area of the current block, as adjacent areas, can be called the reference area of the current block, and the pixels in the reference area are reconstructed reference pixels.
  • determining the reference value of the first color component of the current block and the reference value of the second color component of the current block may include:
  • the reference value of the first color component of the current block is determined according to the value of the first color component sampling point in the adjacent area of the current block, and the reference value of the second color component of the current block is determined according to the value of the second color component sampling point in the adjacent area of the current block.
  • The reference pixel of the current block may refer to a reference pixel point adjacent to the current block, which may also be called the first color component sampling point or the second color component sampling point in the adjacent area of the current block,
  • and may be represented by Neighboring Sample or Reference Sample.
  • the adjacency here may be spatial adjacency, but is not limited to this.
  • Adjacency can also be adjacency in the time domain, or adjacency in both space and time; the reference pixel of the current block may even be obtained by performing some processing on a reference pixel point that is spatially adjacent, temporally adjacent, or spatio-temporally adjacent;
  • the reference pixels obtained after such processing are not limited in any way by the embodiments of this application.
  • the first color component is the brightness component and the second color component is the chrominance component; then, the value of the first color component sampling point in the adjacent area of the current block It is expressed as the reference brightness information corresponding to the reference pixel of the current block, and the value of the second color component sampling point in the adjacent area of the current block is expressed as the reference chrominance information corresponding to the reference pixel of the current block.
  • the value of the first color component sampling point or the value of the second color component sampling point is determined from the adjacent area of the current block.
  • The adjacent area here may include only the upper adjacent area, or only the left adjacent area, or the upper adjacent area and the upper right adjacent area, or the left adjacent area and the lower left adjacent area, or the upper adjacent area and the left adjacent area, or may even include the upper adjacent area, the upper right adjacent area, the left adjacent area, etc.; this is not limited in any way by the embodiments of the present application.
  • the adjacent area may also be determined based on the prediction mode of the current block. In a specific embodiment, this may include: if the prediction mode of the current block is the upper neighbor mode, determining that the adjacent area of the current block includes the upper adjacent area and/or the upper right adjacent area; if the current block If the prediction mode is the left adjacent mode, it is determined that the adjacent area of the current block includes the left adjacent area and/or the lower left adjacent area.
  • the upper adjacent mode includes a prediction mode using an upper adjacent reference sampling point
  • the left adjacent mode includes a prediction mode using a left adjacent reference sampling point.
  • For example, if the prediction mode of the current block is the vertical mode in the upper adjacent mode, then only the upper adjacent area and/or the upper right adjacent area can be selected as the adjacent area in the weight-based chroma component prediction; if the prediction mode of the current block is the horizontal mode in the left adjacent mode, then only the left adjacent area and/or the lower left adjacent area can be selected as the adjacent area in the weight-based chroma component prediction.
  • determining the reference pixel of the current block may include: filtering pixels in adjacent areas of the current block to determine the reference pixel.
  • a first reference pixel set is formed according to pixels in adjacent areas of the current block; then the first reference pixel set can be filtered to determine the reference pixels.
  • the number of reference pixels can be M, and M is a positive integer.
  • M reference pixels can be selected from pixels in adjacent areas.
  • the value of M can generally be 4, but there is no specific limit.
  • filtering pixels in adjacent areas and determining reference pixels may include:
  • the reference pixel is determined from the pixels in the adjacent area.
  • The color component intensity can be represented by color component information, such as reference brightness information, reference chroma information, etc.; the larger the value of the color component information, the higher the color component intensity.
  • In this way, the pixels in the adjacent area can be filtered according to pixel position or according to color component intensity, so that the reference pixels of the current block can be determined from the filtered pixels; from these, the value of the first color component sampling point in the adjacent area of the current block and the value of the second color component sampling point in the adjacent area of the current block can be determined; then
  • the reference value of the first color component of the current block is determined based on the value of the first color component sampling point in the adjacent area of the current block,
  • and the reference value of the second color component of the current block is determined based on the value of the second color component sampling point in the adjacent area of the current block.
  • In some embodiments, determining the reference value of the first color component of the current block based on the value of the first color component sampling point in the adjacent area of the current block may include: performing a first filtering process on the value of the first color component sampling point in the adjacent area of the current block to determine the reference value of the first color component of the current block.
  • the first filtering process is a downsampling filtering process.
  • the first color component is a brightness component.
  • In this way, the reference brightness information can be downsampled and filtered so that the spatial resolution of the filtered reference brightness information is the same as the spatial resolution of the reference chroma information. For example, if the size of the current block is 2M×2N and the amount of reference brightness information is 2M+2N, then after downsampling and filtering it can be reduced to M+N, giving the reference value of the first color component of the current block (see the sketch below).
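For example, a simple 2:1 averaging filter brings a row of neighbouring luma values down to the chroma resolution; the actual filter taps used by the first filtering process are not specified in the text above, so this is only an assumption for illustration.

```python
def downsample_ref_luma(ref_luma):
    """Halve the number of reference luma values by averaging neighbouring pairs.

    For 4:2:0 video, 2M + 2N neighbouring luma values become M + N values,
    matching the spatial resolution of the reference chroma information.
    """
    return [(ref_luma[2 * k] + ref_luma[2 * k + 1] + 1) >> 1
            for k in range(len(ref_luma) // 2)]
```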
  • In some embodiments, determining the reference value of the second color component of the current block based on the value of the second color component sampling point in the adjacent area of the current block may include: performing a second filtering process on the value of the second color component sampling point in the adjacent area of the current block to determine the reference value of the second color component of the current block.
  • the second filtering process is an upsampling filtering process.
  • the upsampling rate is a positive integer multiple of 2.
  • the first color component is the brightness component
  • the second color component is the chrominance component.
  • Alternatively, the embodiment of the present application can also perform upsampling filtering on the reference chroma information, so that the spatial resolution of the filtered reference chroma information is the same as the spatial resolution of the reference brightness information. For example, if the amount of reference brightness information is 2M+2N and the amount of reference chroma information is M+N, then after the reference chroma information is upsampled and filtered it can be expanded to 2M+2N, giving the reference value of the second color component of the current block.
  • S502 Determine the weighting coefficient according to the reference value of the first color component of the current block.
  • the reference information of the current block may also include the reconstructed value of the first color component sampling point in the current block. Assuming that the first color component is a brightness component, the reconstructed value of the first color component sampling point in the current block is the reconstructed brightness information of the current block.
  • determining the weighting coefficient according to the reference value of the first color component of the current block may include:
  • determining the reference sample value of the first color component of the current block based on the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block, and determining the weighting coefficient based on the reference sample value of the first color component of the current block.
  • In some embodiments, determining the reference sample value of the first color component of the current block based on the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block may include:
  • determining the reference sample value of the first color component of the current block according to the difference between the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block;
  • for example, the reference sample value of the first color component of the current block may be set equal to the absolute value of the difference,
  • or the reference sample value of the first color component of the current block may be determined based on the difference value in another way:
  • the difference value may also be squared, or the difference value may be subjected to some related processing and mapping to determine the reference sample value of the first color component of the current block;
  • this is not limited here.
  • In some embodiments, determining the reference sample value of the first color component of the current block based on the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block may also include:
  • performing a third filtering process on the reconstructed value of the first color component sampling point in the current block to obtain a filtered sample value, and determining the reference sample value of the first color component of the current block according to the filtered sample value of the first color component sampling point in the current block and the reference value of the first color component of the current block.
  • the third filtering process is a downsampling filtering process.
  • the first color component is a brightness component.
  • the reconstructed brightness information in the current block may also be subjected to downsampling filtering processing. For example, if the amount of reconstructed brightness information in the current block is 2M ⁇ 2N, it can be transformed to M ⁇ N after downsampling filtering.
  • In this case, the reference sample value of the first color component of the current block may be determined according to the difference between the filtered sample value of the first color component sampling point in the current block and the reference value of the first color component of the current block; more specifically, it may be determined according to the absolute value of that difference, and this is not limited here.
  • the embodiment of the present application may further determine the weighting coefficient.
  • the reference sample value of the first color component of the current block may be the absolute value of the difference between the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block.
  • the first color component is a chrominance component and the second color component is a brightness component.
  • This embodiment of the present application mainly predicts the chrominance component of the pixel to be predicted in the current block. First, select at least one pixel to be predicted in the current block, and calculate the chromaticity difference between its reconstructed chromaticity and the reference chromaticity in the adjacent area (expressed by
  • represents the reference sample value of the first color component
  • ) represents the value corresponding to the reference sample value of the first color component under the preset mapping relationship
  • w k represents the weighting coefficient, that is, w k is set equal to f(
  • the first color component is the brightness component and the second color component is the chrominance component.
  • In this way, at least one pixel to be predicted in the current block can also be selected, and the brightness difference between its reconstructed brightness and the reference brightness in the adjacent area can be calculated (expressed by
  • the corresponding weighting coefficient (expressed by w_k) can be given a larger weight; conversely, if the similarity is relatively weak, w_k can be given a smaller weight.
  • the reference sample value of the first color component can also be
  • , and the weighting coefficient can also be calculated according to w k f(
  • the reference sample value of the first color component it can be
  • the preset multiplier here is the model factor described in the embodiment of this application.
  • In some embodiments, the method may also include: performing a least squares calculation based on the first color component values of the reference pixels and the second color component values of the reference pixels, and determining the model factor.
  • the least squares calculation is performed on the chrominance component values and brightness component values of the N reference pixels to obtain the model factors.
  • the least squares regression is calculated as follows,
  • Lk represents the brightness component value of the k-th reference pixel
  • Ck represents the chrominance component value of the k-th reference pixel
  • N represents the number of reference pixels
  • represents the model factor, which can be calculated using least squares regression (see the sketch below).
  • the model factors can also be fixed values or fine-tuned based on fixed values, etc., which are not specifically limited in the embodiments of this application.
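The regression formula itself is not reproduced in the excerpt above; as one plausible reading, the sketch below estimates a single model factor as the ordinary least-squares slope of the chroma values against the luma values of the N reference pixels.

```python
def derive_model_factor(luma_refs, chroma_refs):
    """Least-squares estimate of a single model factor from N reference pixels.

    luma_refs[k]   = L_k, luma value of the k-th reference pixel
    chroma_refs[k] = C_k, chroma value of the k-th reference pixel
    Assumption: the factor is the ordinary least-squares slope of C against L.
    """
    n = len(luma_refs)
    sum_l = sum(luma_refs)
    sum_c = sum(chroma_refs)
    sum_lc = sum(l * c for l, c in zip(luma_refs, chroma_refs))
    sum_ll = sum(l * l for l in luma_refs)
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        return 0.0           # degenerate case: all reference luma values equal
    return (n * sum_lc - sum_l * sum_c) / denom
```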
  • The reference sample value of the first color component of the current block may be the absolute value of the difference between the brightness reconstruction information (represented by recY) in the current block and each of the inSize reference brightness values (represented by refY).
  • For a sample at position (i, j), its corresponding brightness difference vector diffY[i][j][k] can be obtained by subtracting each of the inSize reference brightness values refY[k] from the corresponding brightness reconstruction value recY[i][j] and taking the absolute value; that is, in the embodiment of the present application, the reference sample value of the first color component of the current block can be represented by diffY[i][j][k] (see the sketch below).
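Written out directly from the description above, diffY[i][j][k] = |recY[i][j] - refY[k]|:

```python
def luma_difference_vectors(recY, refY):
    """diffY[i][j][k] = abs(recY[i][j] - refY[k]) for every sample position (i, j)
    in the current block and every one of the inSize reference luma values refY[k]."""
    return [[[abs(y - ry) for ry in refY] for y in row] for row in recY]
```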
  • In some embodiments, determining the weighting coefficient according to the reference sample value of the first color component of the current block may include: determining the weight index value according to the reference sample value of the first color component of the current block; and, according to the weight index value, using the first preset mapping relationship to determine the weighting coefficient.
  • determining the weight index value based on the reference sample value of the first color component of the current block may include:
  • the reference sample value of the first color component is corrected according to the maximum weight index value and the minimum weight index value to determine the weight index value.
  • the maximum weight index value can be represented by theMaxPos
  • the minimum weight index value can be represented by zero
  • the weight index value can be represented by index.
  • the weight index value is limited between theMaxPos and zero.
  • the weight index value can be calculated according to the following formula, as follows:
  • the maximum weight index value may be related to the bit depth (represented by BitDepth) of the color component.
  • the maximum weight index value can be calculated according to the following formula, as follows:
  • the value of theMaxPos includes but is not limited to being calculated according to equation (3), and it may also be determined in the core parameters of the WCP.
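Formulas (2) and (3) are not reproduced in the excerpt, so the sketch below only shows the structure the text describes: the weight index is the luma difference limited to the range [0, theMaxPos], and theMaxPos is some value derived from the bit depth. The expressions used here are assumptions, not the patent's formulas.

```python
def clip3(lo, hi, x):
    """Clip3 operation commonly used in video codecs: clip x to [lo, hi]."""
    return max(lo, min(hi, x))

def weight_index(diff_y, the_max_pos):
    """Weight index limited to the range [0, theMaxPos]; the exact formula (2)
    is not shown above, so the clipped luma difference is used directly here."""
    return clip3(0, the_max_pos, diff_y)

def max_weight_index(bit_depth):
    """Illustrative only: assume theMaxPos covers the full range of luma
    differences for the given bit depth (formula (3) is not shown above)."""
    return (1 << bit_depth) - 1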
  • the first preset mapping relationship is a numerical mapping lookup table of weight index values and weighting coefficients. That is to say, in the embodiment of the present application, the decoding end may be pre-set with a corresponding look-up table (Look Up Table, LUT). The corresponding weighting coefficient can be determined through this lookup table and combined with the index.
  • the weighting coefficient cWeightInt[i][j][k] can be represented by the following mapping relationship:
  • the first preset mapping relationship may also be a preset functional relationship.
  • using the first preset mapping relationship to determine the weighting coefficient according to the weight index value may include: determining the first value corresponding to the weight index value under the first preset mapping relationship; setting the weighting coefficient equal to The first value.
  • determining the first value corresponding to the weight index value under the first preset mapping relationship may include:
  • according to the weight index value, using the second preset mapping relationship to determine the second value;
  • the first value is set equal to the corresponding value of the first product value under the first preset mapping relationship.
  • the first factor can be represented by ModelScale
  • the weight index value can be represented by index.
  • the weighting coefficient cWeightInt[i][j][k] can also be expressed by the following functional relationship:
  • where n represents the weight index value;
  • the first preset mapping relationship can be set to Round(x); then, when x is equal to the first product value, the value of Round(x) is the first value, that is, the weighting coefficient cWeightInt[i][j][k].
  • the first preset mapping relationship can be expressed as Round(x) = Floor(x + 0.5),
  • where Floor(x) represents the largest integer less than or equal to x.
  • the first factor may be a preset constant value. That is, the first factor may be a preset constant, which is independent of the block size parameter.
  • the value of the first factor may also be related to the block size parameter.
  • In some embodiments, determining the first factor may include: determining the value of the first factor according to the size parameter of the current block, where the size parameter of the current block includes at least one of the following parameters: the width of the current block, the height of the current block. That is to say, in this embodiment of the present application, a classification method can be used to fix the value of the first factor; for example, the current block size parameter is divided into three categories, and the value of the first factor corresponding to each category is determined. Alternatively, embodiments of the present application may also pre-store a lookup table mapping the size parameters of the current block to the value of the first factor, and then determine the value of the first factor based on the lookup table.
  • W represents the width of the current block
  • H represents the height of the current block
  • If the size parameter of the current block meets the third preset condition, that is, Min(W, H) > 16, then the value of the first factor is set to the third value.
  • In short, the first factor can be a preset constant, or it can be determined based on the size parameters of the current block, or even by other methods (for example, based on the BitDepth of the color component); there are no restrictions here.
  • In some embodiments, determining the weight index value based on the reference sample value of the first color component of the current block may include: determining the second factor; and determining the weight index value based on the reference sample value of the first color component of the current block and the second factor.
  • the weighting coefficient can also be adjusted according to the control parameters in the core parameters of the WCP under certain conditions.
  • the second factor is the control parameter described in this embodiment (which may also be called “scale parameter”, “scale factor”, etc.) and is represented by S.
  • the weight coefficient can be adjusted according to the second factor.
  • the nonlinear function may be, for example, the Softmax function;
  • different selections can be made according to the block classification category to which the current block belongs.
  • the second factor is used to adjust the function so that the weighting coefficient is determined based on the adjusted function.
  • In some embodiments, the second factor may be a preset constant value. That is, in this case, S can adjust the weighting coefficient distribution of the neighboring chroma values according to the relatively flat characteristics of chroma, thereby obtaining a weighting coefficient distribution suitable for chroma prediction of natural images.
  • the given S set is traversed, and the suitability of S is measured by the difference between the predicted chroma and the original chroma under different S.
  • For example, S can take the value 2^(-x), where x ∈ {1, 0, -1, -2, -3}; after experiments, it is found that within this set of S values the best value of S is 4. Therefore, in a specific embodiment, S may be set to 4, but this is not specifically limited in the embodiment of this application (see the sketch below).
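Encoder-side selection of S as described above can be sketched as follows: traverse the candidate set and keep the value whose prediction is closest to the original chroma. Using the sum of absolute differences as the distortion measure is an assumption here.

```python
def select_scale_factor(candidates, predict_with_s, orig_chroma):
    """Traverse candidate values of S and return the one whose chroma prediction
    has the smallest sum of absolute differences from the original chroma.

    candidates: e.g. [0.5, 1, 2, 4, 8], i.e. 2**(-x) for x in {1, 0, -1, -2, -3}
    predict_with_s: callable returning a 2-D chroma prediction for a given S
    orig_chroma: 2-D list of original chroma samples
    """
    best_s, best_cost = None, None
    for s in candidates:
        pred = predict_with_s(s)
        cost = sum(abs(p - o)
                   for prow, orow in zip(pred, orig_chroma)
                   for p, o in zip(prow, orow))
        if best_cost is None or cost < best_cost:
            best_s, best_cost = s, cost
    return best_s
```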
  • the value of the second factor may also be related to the block size parameter.
  • determining the second factor may include: determining the value of the second factor according to the size parameter of the current block; wherein the size parameter of the current block includes at least one of the following parameters: Width, the height of the current block.
  • determining the value of the second factor according to the size parameter of the current block may include:
  • the second factor is determined to be 16.
  • a classification method may be used to fix the value of the second factor.
  • the current block size parameter is divided into three categories, and the value of the second factor corresponding to each category is determined.
  • embodiments of the present application may also pre-store a lookup table mapping the size parameters of the current block and the value of the second factor, and then determine the value of the second factor based on the lookup table.
  • Table 1 shows the correspondence between a second factor provided by the embodiment of the present application and the size parameter of the current block.
  • the corresponding relationship between the above-mentioned second factor and the size parameter of the current block can also be fine-tuned.
  • Table 2 shows the correspondence between another second factor provided by the embodiment of the present application and the size parameter of the current block.
  • determining the value of the second factor according to the size parameter of the current block may include:
  • the second factor is determined to be 15.
  • Table 3 shows the correspondence between another second factor provided by the embodiment of the present application and the size parameter of the current block.
  • determining the value of the second factor according to the size parameter of the current block may include: determining the value of the second factor according to the block type index value.
  • Table 4 shows the correspondence between a second factor and the block type index value provided by the embodiment of the present application.
  • Table 5 shows the correspondence between another second factor and the block type index value provided by the embodiment of the present application.
  • Block type index value 0: second factor 10; block type index value 1: second factor 8; block type index value 2: second factor 1.
  • Table 6 shows the correspondence between yet another second factor and the block type index value provided by the embodiment of the present application.
  • Block type index value 0: second factor 16; block type index value 1: second factor 4; block type index value 2: second factor 1.
  • the second factor can also be classified according to the number of reference pixels of the current block.
  • determining the second factor may include: determining the value of the second factor according to the number of reference pixels of the current block; where N represents the number of reference pixels.
  • determining the value of the second factor based on the number of reference pixels of the current block may include:
  • classification is performed according to the number of reference pixels of the current block.
  • Table 7 shows the correspondence between a second factor and the number of reference pixels provided by the embodiment of the present application.
  • In some embodiments, determining the weight index value according to the reference sample value of the first color component of the current block and the second factor may include: using a third preset mapping relationship to determine the third value based on the reference sample value of the first color component and the second factor; and
  • correcting the third value according to the maximum weight index value and the minimum weight index value to determine the weight index value.
  • the maximum weight index value can be represented by theMaxPos
  • the minimum weight index value can be represented by zero
  • the weight index value can be represented by index.
  • the weight index value is limited to between theMaxPos and zero
  • the third value is f(S,diffY[i][j][k]).
  • the weight index value can be calculated according to the following formula, as follows:
  • the value of theMaxPos can be calculated according to the above formula (3), or can be determined in the core parameters of WCP, without any limitation.
  • f() refers to a function whose inputs are the second factor S and the brightness difference vector diffY[i][j][k].
  • f(S,diffY[i][j][k]) can be realized by the following formula, as shown below:
  • f(S,diffY[i][j][k]) can be implemented through the following operations.
  • using a third preset mapping relationship to determine the third value based on the reference sample value of the first color component and the second factor may include:
  • f(S,diffY[i][j][k]) can be realized by the following formula, as shown below:
  • the target offset is equal to 0; if the second factor is equal to 4, then the target offset is equal to 1; if the second factor is equal to 8, then the target offset is equal to 2; a right shift operation by the target offset is then performed on the reference sample value of the first color component to determine the third value (see the sketch below).
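In integer form, f(S, diffY[i][j][k]) then reduces to a right shift of the luma difference by a target offset that depends on S. Only the S = 4 and S = 8 entries are stated above; the remaining entries in the sketch below are placeholders.

```python
# Mapping from the second factor S to the target offset; only the entries for
# S = 4 and S = 8 are given in the text above, other values are placeholders.
TARGET_OFFSET = {4: 1, 8: 2}

def f_shift(s, diff_y):
    """Third value f(S, diffY): right-shift the luma difference by the target offset."""
    offset = TARGET_OFFSET.get(s, 0)   # assumption: unlisted S values use offset 0
    return diff_y >> offset
```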
  • using the first preset mapping relationship to determine the weighting coefficient according to the weight index value may include:
  • determining the fourth value corresponding to the second product value under the first preset mapping relationship may include:
  • the fourth value is set equal to the corresponding value of the third product value under the first preset mapping relationship.
  • the first factor can be represented by ModelScale
  • the second factor can be represented by S
  • the weight index value can be represented by index
• the first preset mapping relationship is a numerical mapping lookup table of the second factor, the weight index value and the weighting coefficient. That is to say, in the embodiment of the present application, the decoding end may be pre-set with a corresponding look-up table (Look Up Table, LUT). Combined with the weight index value index, the corresponding weighting coefficient can be determined through this lookup table.
  • the weighting coefficient cWeightInt[i][j][k] can be represented by the following mapping relationship:
  • the first preset mapping relationship may also be a preset functional relationship.
  • the inputs of the function are index and S, and the output is a weighting coefficient.
  • the weighting coefficient cWeightInt[i][j][k] can also be expressed by the following functional relationship:
  • the embodiment of the present application uses LUT to determine the weighting coefficient cWeightInt[i][j][k].
• the number of elements contained in the LUT is theMaxPos; as shown in equation (3), theMaxPos is related to the bit depth of brightness or chroma; as shown in equations (4) and (11), the values of the elements in the LUT are all constant values.
• the LUT can be stored in the decoder, or it can be calculated before encoding or decoding; as shown in equation (5) or equation (12), it can be determined based on an exponential function relationship with exponent n; as shown in equation (4) and equation (11), the index parameters of the LUT are determined based on diffY[i][j][k], and the LUT stores the cWeightInt[i][j][k] values corresponding to different index parameters.
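• The LUT-based derivation of cWeightInt[i][j][k] can be sketched as follows; the clipped lookup follows the description above, while the exponential construction of the table is only one possible realization consistent with the exponential relationship mentioned, and its constants are assumptions rather than values taken from the document:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the LUT-based weighting-coefficient derivation.
// Building the table from a decaying exponential scaled by ModelScale is an
// assumed construction; the exact shape and constants are not specified here.
std::vector<int> buildWcpLut(int theMaxPos, int modelScale, double S) {
    std::vector<int> lut(theMaxPos + 1);
    for (int n = 0; n <= theMaxPos; ++n)
        lut[n] = static_cast<int>(std::lround(modelScale * std::exp(-n / S)));
    return lut;
}

// cWeightInt[i][j][k] = LUT[Clip3(0, theMaxPos, index)]
int lookupWeight(const std::vector<int>& lut, int index) {
    const int theMaxPos = static_cast<int>(lut.size()) - 1;
    return lut[std::clamp(index, 0, theMaxPos)];
}
```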
• S503 Determine the reference mean value of the second color component of the current block based on the reference value of the second color component of the current block; and based on the reference value of the second color component of the current block and the reference mean value of the second color component of the current block, determine the reference sample value of the second color component of the current block.
  • the number of input reference pixels predicted based on weights can be represented by N or inSize.
  • the number of input reference pixels predicted based on the weight is the same as the number of reference samples of the second color component. It can also be said that N represents the number of reference samples of the second color component, and N is a positive integer.
• determining the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block may include: performing an average calculation on the N reference values of the second color component of the current block to obtain the reference mean value of the second color component of the current block.
  • the reference value of the second color component of the current block can be represented by refC[k]
  • the method may further include: determining the block type index value according to the size parameter of the current block; and using the fifth preset mapping relationship to determine the value of N based on the block type index value.
  • the fifth preset mapping relationship represents the numerical mapping lookup table between the block type index value and N.
  • the block type index value may be represented by wcpSizeId.
• the number of input reference pixels for weight-based prediction will also be different; that is, the values of N (or inSize) are different.
  • Table 8 shows the correspondence between a block type index value and the value of N(inSize) provided by the embodiment of the present application.
• Block type index value 0: N(inSize) = 2×H+2×W; block type index value 1: 2×H+2×W; block type index value 2: 2×H+2×W.
  • Table 9 shows the correspondence between another block type index value and the value of N(inSize) provided by the embodiment of the present application.
• Block type index value 0: N(inSize) = 2×H+2×W; block type index value 1: 1.5×H+1.5×W; block type index value 2: H+W.
• determining the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block may include:
  • Subtraction calculation is performed on the reference value of the second color component of the current block and the reference mean value of the second color component of the current block to obtain a reference sample value of the second color component of the current block.
• the average avgC of the N pieces of reference chromaticity information refC is calculated, and then the reference chromaticity difference vector diffC is obtained by subtracting the average avgC from the N pieces of reference chromaticity information refC.
  • diffC is calculated as follows:
• where k = 0, 1, ..., N-1.
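• A minimal sketch of this preprocessing step is shown below; the rounding of the integer mean and the direction of the subtraction (consistent with the later prediction formula, which adds the reference mean back) are assumptions:

```cpp
#include <vector>

// Sketch of the reference chroma preprocessing: compute the mean avgC of the
// N reference chroma values refC and subtract it from each value to obtain
// the reference chroma difference vector diffC.
void buildDiffC(const std::vector<int>& refC, int& avgC, std::vector<int>& diffC) {
    const int N = static_cast<int>(refC.size());
    long long sum = 0;
    for (int k = 0; k < N; ++k) sum += refC[k];
    avgC = static_cast<int>((sum + N / 2) / N);  // rounded integer mean (assumed rounding)
    diffC.assign(N, 0);
    for (int k = 0; k < N; ++k) diffC[k] = refC[k] - avgC;
}
```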
  • S504 Determine the predicted value of the second color component sampling point in the current block based on the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient.
  • S505 Determine the reconstructed value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block.
  • the predicted value of the second color component sampling point in the current block can be further determined.
  • the predicted value of the second color component sampling point in the current block is determined according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient.
• the weighted sum value of the pixel to be predicted in the current block is set equal to the sum of N weighted values, and the coefficient sum value of the pixel to be predicted in the current block is set equal to the sum of N weighting coefficients;
• N represents the number of reference samples of the second color component
  • N is a positive integer
  • the predicted value of the second color component of the pixel to be predicted in the current block is determined.
  • (i,j) represents the position information of the pixel to be predicted in the current block
• k represents the index of the k-th reference sample of the second color component used in the WCP calculation process
• k = 0, ..., N-1.
• the N weighting coefficients can be added to obtain the coefficient sum value of the pixel to be predicted in the current block, which can be represented by sum. Specifically, its calculation formula is as follows:
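• As an illustration of the accumulation just described (the document's own formula is referenced above but not reproduced here), the weighted sum value and the coefficient sum value for one pixel to be predicted can be sketched as follows; the function name is illustrative:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the per-pixel accumulation for the pixel to be predicted (i, j):
// the weighted sum accumulates cWeightInt[k] * diffC[k] over the N reference
// samples, and the coefficient sum ("sum") accumulates the N weighting
// coefficients themselves.
void accumulateSums(const std::vector<int>& cWeightInt,  // weights of pixel (i, j), size N
                    const std::vector<int>& diffC,       // reference chroma differences, size N
                    long long& weightedSum, long long& coeffSum) {
    weightedSum = 0;
    coeffSum = 0;
    for (std::size_t k = 0; k < diffC.size(); ++k) {
        weightedSum += static_cast<long long>(cWeightInt[k]) * diffC[k];
        coeffSum += cWeightInt[k];
    }
}
```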
  • using the fourth preset mapping relationship to determine the sixth value based on the weighted sum value and the coefficient sum value may include:
  • the preset offset can be represented by Shift
  • the array index value can be represented by normDiff
  • the first value can be represented by x
  • the second value can be represented by v
• the third value can be represented by y
  • the preset addition value is represented by add.
  • the value of Shift may be set to 5, but this is not specifically limited.
  • the sixth preset mapping relationship can be expressed by the following formula:
• the calculation of the first value may include: determining the base-2 logarithm of the coefficient sum value; determining the maximum integer value that is less than or equal to the logarithm; the maximum integer value determined here is the first value.
• the calculation of the first numerical value may include: setting the first numerical value equal to the number of binary symbols required in the binary representation of the coefficient sum value, minus one.
• the calculation of the first numerical value, described according to a possible integer implementation (text description), may include: performing a binary right shift operation on the coefficient sum value and determining the number of right shifts at which the shifted value becomes exactly equal to zero; the first value is set equal to that number of right shifts minus one.
• the calculation of the first numerical value, described according to a possible integer implementation (pseudocode description), as shown in equation (18), may include:
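• As an illustration (this is not the document's equation (18)), the right-shift-based computation of the first value described above can be sketched as follows:

```cpp
// Sketch of the integer computation of the first value x: right-shift the
// coefficient sum value until it becomes zero and return the number of
// shifts minus one, i.e. x = floor(log2(coeffSum)) for coeffSum > 0
// (coeffSum is assumed to be positive).
int firstValue(long long coeffSum) {
    int shifts = 0;
    while (coeffSum != 0) {
        coeffSum >>= 1;
        ++shifts;
    }
    return shifts - 1;
}
```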
• using the seventh preset mapping relationship to determine the array index value based on the coefficient sum value and the first value may include: using the coefficient sum value and the first value as the inputs of the seventh preset mapping relationship (a preset functional relationship), and taking the output of the functional relationship as the array index value.
  • the calculation of the array index value can be expressed by the following formula:
  • Func() is a function related to Shift.
  • a specific form of Func() is as follows:
  • using the eighth preset mapping relationship to determine the second value based on the array index value and the preset offset may include:
  • the second value is determined using an eighth preset mapping relationship according to the index indication value and the preset offset.
• if the array mapping table is represented by DivSigTable, then the index indication value corresponding to the array index value normDiff in DivSigTable is DivSigTable[normDiff].
• the eighth preset mapping relationship between DivSigTable[normDiff] and Shift is as follows:
• v = DivSigTable[normDiff] | 32
• where the "|" operator represents a bitwise OR operation, that is, v is obtained by performing a bitwise OR operation on DivSigTable[normDiff] and 32.
  • determining the third value based on the first value may include:
• the third value is set equal to the first value plus one.
  • the first numerical value is represented by x
  • the third numerical value is represented by y.
  • it can be expressed by the following formula:
  • the preset addition value is determined according to the third value and the preset offset, and the calculation formula is as follows:
• the first offset is determined by the third value and the preset offset; for example, the third value and the preset offset are added to obtain the first offset, that is, the first offset can be y+Shift.
• the "<<" operator represents a left shift operation
• the ">>" operator represents a right shift operation
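• Putting the quantities above together, one possible integerized realization of the sixth value (the division of the weighted sum value by the coefficient sum value) can be sketched as follows; the forms of Func() and of the preset addition value are assumptions made only to keep the sketch self-contained, since the corresponding formulas are referenced but not reproduced here:

```cpp
// Sketch of an integerized division tying together x (first value),
// normDiff (array index value), v (second value), y = x + 1 (third value),
// Shift = 5 and the first offset y + Shift. DivSigTable is the assumed-given
// array mapping table with (1 << Shift) entries; coeffSum is assumed positive.
long long approximateDivision(long long weightedSum, long long coeffSum,
                              const int* DivSigTable, int Shift /* = 5 */) {
    // x = floor(log2(coeffSum)) via repeated right shifts, as described above.
    int x = -1;
    for (long long t = coeffSum; t != 0; t >>= 1) ++x;

    // Assumed form of Func(): the normalized fractional part of coeffSum.
    int normDiff = static_cast<int>(((coeffSum << Shift) >> x) & ((1LL << Shift) - 1));

    int v = DivSigTable[normDiff] | 32;        // bitwise OR with 32, per the text
    int y = x + 1;                             // third value
    int firstOffset = y + Shift;               // first offset
    long long add = 1LL << (firstOffset - 1);  // assumed rounding term

    // sixth value ~ weightedSum / coeffSum, realized with shifts only.
    return (weightedSum * v + add) >> firstOffset;
}
```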
• the predicted value of the second color component of the pixel to be predicted in the current block is expressed by C pred [i][j]; at this time, based on an addition calculation of the reference mean value of the second color component and the sixth value, the predicted value of the second color component of the pixel to be predicted in the current block can be obtained.
  • the calculation formula is as follows:
• C pred [i][j] usually needs to be limited to a preset range. Therefore, in some embodiments, the method may further include: performing a correction operation on the predicted value of the second color component of the pixel to be predicted, and using the corrected predicted value as the predicted value of the second color component of the pixel to be predicted in the current block.
• the preset range may be between 0 and (1<<BitDepth)-1, where BitDepth is the bit depth required for the chroma component. If the predicted value exceeds the preset range, the predicted value needs to be corrected accordingly.
  • C pred [i][j] can be clamped as follows:
• when the value of C pred [i][j] is greater than or equal to 0 and less than or equal to (1<<BitDepth)-1, the clamped value is equal to C pred [i][j];
• the predicted values of the second color component of the pixels to be predicted in the current block are between 0 and (1<<BitDepth)-1.
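• A minimal sketch of the final per-pixel prediction with clamping is given below; the function name is illustrative:

```cpp
// Sketch of the final per-pixel prediction: add the reference mean avgC of
// the second color component to the sixth value, then clamp the result to
// the valid range [0, (1 << BitDepth) - 1].
int predictChromaSample(int avgC, long long sixthValue, int bitDepth) {
    long long pred = avgC + sixthValue;
    const long long maxVal = (1LL << bitDepth) - 1;
    if (pred < 0) pred = 0;
    if (pred > maxVal) pred = maxVal;
    return static_cast<int>(pred);  // C_pred[i][j]
}
```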
  • determining the predicted value of the second color component sampling point in the current block based on the predicted value of the second color component of the pixel to be predicted in the current block may include:
  • the predicted value of the second color component of the pixel to be predicted includes the predicted value of at least some of the second color component sampling points in the current block.
  • the first prediction block includes the prediction value of at least part of the second color component sampling points in the current block.
  • the method may further include: performing an upsampling filtering process on the first prediction block, and determining a second prediction block of the second color component of the current block.
  • the method may further include: performing filtering enhancement processing on the first prediction block, and determining a second prediction block of the second color component of the current block.
• if the first prediction block includes the prediction values of all the second color component sampling points in the current block, there is no need to perform any processing on the first prediction block at this time, and the first prediction block can directly serve as the final second prediction block.
  • the first prediction block may contain prediction values of at least part of the second color component sampling points in the current block.
• the predicted values of the second color component sampling points in the current block can be set equal to the values of the first prediction block; if the first prediction block contains the predicted values of only some of the second color component sampling points in the current block, then the values of the first prediction block can be upsampling filtered, and the predicted values of the second color component sampling points in the current block are set equal to the upsampling-filtered output values.
  • the second prediction block includes the prediction values of all the second color component sampling points in the current block.
  • the weight-based chroma prediction output predWcp requires post-processing under certain conditions before it can be used as the final chroma prediction value predSamples. Otherwise, the final chroma prediction value predSamples is predWcp.
• determining the reconstructed value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block may include:
  • the reconstructed value of the second color component sampling point in the current block is determined based on the predicted difference value of the second color component sampling point in the current block and the predicted value of the second color component sampling point in the current block.
• determining the reconstruction value of the second color component sampling point in the current block includes: adding the predicted difference value of the second color component sampling point in the current block and the predicted value of the second color component sampling point in the current block to determine the reconstructed value of the second color component sampling point in the current block.
• determining the prediction difference (residual) of the second color component sampling point in the current block may be: parsing the code stream, and determining the prediction difference of the second color component sampling point in the current block.
  • the chroma prediction difference value of the current block is determined; after determining the chroma prediction value of the current block, the chroma prediction value and the chroma prediction difference value are added together.
  • the chroma reconstruction value of the current block can be obtained.
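• A minimal sketch of this reconstruction step is shown below; the final clipping to the valid sample range is an assumption consistent with the clamping used for the prediction values:

```cpp
// Sketch of the reconstruction step: the prediction difference (residual)
// parsed from the bitstream is added to the chroma prediction value, and
// the result is clipped to the valid sample range.
int reconstructChromaSample(int predSample, int residual, int bitDepth) {
    int rec = predSample + residual;
    const int maxVal = (1 << bitDepth) - 1;
    if (rec < 0) rec = 0;
    if (rec > maxVal) rec = maxVal;
    return rec;
}
```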
• the embodiment of the present application optimizes the floating-point operations in the WCP prediction process and implements them using integer operations. On the one hand, it makes full use of the content characteristics of the current block and adaptively selects the best shift amount for the integer operations; on the other hand, it fully ensures the accuracy of the WCP prediction technology; in addition, it fully considers the characteristics of the weight model and reasonably designs the integer operation process; thus, the computational complexity of the WCP prediction technology can be reduced while ensuring a certain accuracy of the WCP prediction technology.
• This embodiment provides a decoding method: determining the reference value of the first color component of the current block and the reference value of the second color component of the current block; determining the weighting coefficient according to the reference value of the first color component of the current block; determining the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block; determining the reference sample value of the second color component of the current block based on the reference value of the second color component of the current block and the reference mean value of the second color component of the current block; determining the predicted value of the second color component sampling point in the current block according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient; and determining the reconstructed value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block.
  • the embodiment of the present application proposes a weight-based chroma prediction technology that utilizes the above information, and all operations in the implementation process of this technology use integer operations.
  • the technical solution of the embodiment of the present application mainly proposes: using integer operations to replace floating point operations.
  • the detailed steps of the chromaticity prediction process of WCP prediction technology are as follows:
• Inputs of the WCP mode: the position of the current block (xTbCmp, yTbCmp), the width of the current block nTbW, and the height of the current block nTbH.
  • the prediction process of WCP prediction technology can include steps such as determining WCP core parameters, obtaining input information, weight-based chroma prediction, and post-processing processes. After these steps, the chroma prediction value of the current block can be obtained.
  • FIG. 7 shows a schematic flowchart of another decoding method provided by an embodiment of the present application. As shown in Figure 7, the method may include:
  • the core parameters involved in WCP are determined, that is, the WCP core parameters can be obtained or inferred through configuration or in some way, for example, the WCP core parameters are obtained from the code stream at the decoding end.
• WCP core parameters include but are not limited to: the control parameter (S), the number of weight-based chroma prediction inputs (inSize), the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH), the weight model LUT, and the maximum weight index value of the weight model LUT, theMaxPos.
  • the control parameter (S) can be used to adjust the nonlinear function in subsequent links or to adjust the data involved in subsequent links.
  • the weight model LUT can be predefined, or it can be calculated in real time according to different WCP control parameters (S); the maximum weight index value theMaxPos of the weight model LUT can be adjusted according to different weight model LUTs, or it can be fixed.
  • WCP core parameters are affected by the block size or block content or the number of pixels within the block under certain conditions. For example:
• the current block can be classified according to its block size or block content or the number of pixels within the block, with the same or different core parameters determined according to different categories. That is, the control parameters (S) corresponding to different categories or the number of weight-based chroma prediction inputs inSize or the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be the same or different. Note that predSizeW and predSizeH can also be the same or different.
• the current block can also be classified based on its block size or block content or the number of pixels within the block, and the same or different core parameters are determined according to different categories. That is, the control parameters (S) corresponding to different categories or the number of weight-based chroma prediction inputs inSize or the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be the same or different. Note that predSizeW and predSizeH can also be the same or different.
  • WCP can classify the current block according to the width and height of the current block.
• the control parameter (S), the number of weight-based chroma prediction inputs inSize, and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be the same or different.
  • the current block is divided into 3 categories according to the width and height of the current block.
• the control parameters (S) of different categories can be set to different values; the inSize of different categories and the number of weight-based chroma prediction outputs of different categories (arranged as predSizeW×predSizeH) can be set to the same.
  • nTbW is the width of the current block
  • nTbH is the height of the current block
  • type of block wcpSizeId is defined as follows:
  • the control parameter (S) is 8
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
  • the control parameter (S) is 12
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
  • the control parameter (S) is 16
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
  • the current block is divided into 3 categories according to the width and height of the current block.
• the control parameters (S) of different categories can be set to different values; the inSize of different categories and the number of weight-based chroma prediction outputs of different categories (arranged as predSizeW×predSizeH) can be set to the same.
  • nTbW is the width of the current block
  • nTbH is the height of the current block
  • type of block wcpSizeId is defined as follows:
  • the control parameter (S) is 8
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
  • the control parameter (S) is 12
• inSize is (1.5×nTbH+1.5×nTbW)
• the weight-based chroma prediction outputs nTbH/2×nTbW/2 chroma prediction values
  • wcpSizeId 2: indicates the current block with min(nTbW,nTbH)>16.
  • the WCP control parameter (S) is 16
  • inSize is (nTbH+nTbW)
• the weight-based chroma prediction outputs nTbH/4×nTbW/4 chroma prediction values
  • WCP mode can also classify the current block according to the number of pixels of the current block, and use wcpSizeId to indicate the type of block.
• the control parameter (S), the number of weight-based chroma prediction inputs inSize, and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be the same or different.
  • the current block is divided into 3 categories according to the number of pixels in the current block.
  • the control parameters (S) of different categories can be set to different values.
• the inSize of different categories and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) can be set to the same.
  • nTbW is the width of the current block
  • nTbH is the height of the current block
• nTbW×nTbH represents the number of pixels in the current block.
  • the type of block wcpSizeId is defined as follows:
• wcpSizeId 0: indicates the current block with (nTbW×nTbH)≤128.
  • the control parameter (S) is 10
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
  • the control parameter (S) is 8
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
• wcpSizeId 2: indicates the current block with (nTbW×nTbH)>256.
• the control parameter (S) is 1, inSize is (2×nTbH+2×nTbW), and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
  • the current block is divided into 3 categories according to the number of pixels in the current block.
  • the control parameters (S) of different categories can be set to different values.
• the inSize of different categories and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) can be set to the same.
  • nTbW is the width of the current block
  • nTbH is the height of the current block
• nTbW×nTbH represents the number of pixels in the current block.
  • the type of block wcpSizeId is defined as follows:
• wcpSizeId 0: indicates the current block with (nTbW×nTbH)≤64.
  • the control parameter (S) is 16
• inSize is (2×nTbH+2×nTbW)
• the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values
• the control parameter (S) is 4, inSize is (1.5×nTbH+1.5×nTbW), and the weight-based chroma prediction outputs nTbH/2×nTbW/2 chroma prediction values;
• wcpSizeId 2: indicates the current block with (nTbW×nTbH)>512.
• the control parameter (S) is 1, inSize is (nTbH+nTbW), and the weight-based chroma prediction outputs nTbH/4×nTbW/4 chroma prediction values;
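• The second pixel-count-based classification above can be sketched as follows; the structure name and the handling of the implied middle range are illustrative assumptions:

```cpp
// Sketch of the pixel-count-based classification: wcpSizeId 0 for blocks
// with at most 64 pixels, wcpSizeId 2 for blocks with more than 512 pixels,
// and wcpSizeId 1 otherwise (the middle range is implied by the two stated
// thresholds). The returned core parameters follow the values listed above.
struct WcpCoreParams {
    int wcpSizeId;   // block type index value
    int S;           // control parameter
    int inSize;      // number of weight-based chroma prediction inputs
    int predSizeW;   // width of the weight-based prediction output
    int predSizeH;   // height of the weight-based prediction output
};

WcpCoreParams deriveCoreParams(int nTbW, int nTbH) {
    const int numPixels = nTbW * nTbH;
    if (numPixels <= 64)
        return {0, 16, 2 * nTbH + 2 * nTbW, nTbW, nTbH};
    if (numPixels > 512)
        return {2, 1, nTbH + nTbW, nTbW / 4, nTbH / 4};
    return {1, 4, (3 * (nTbH + nTbW)) / 2, nTbW / 2, nTbH / 2};  // 1.5*(H+W) as an integer
}
```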
  • the filtering processing here can be an interpolation filtering method, an upsampling filtering method, etc., and there is no limit to this.
  • S702 Obtain input information based on WCP core parameters.
  • the input information may include reference chrominance information (refC), reference luminance information (refY) and reconstructed luminance information (recY).
  • reference chrominance information refC
  • reference luminance information refY
• reconstructed luminance information recY
  • the reference chromaticity information refC and the reference brightness information refY are obtained from adjacent areas.
  • the obtained reference chromaticity information includes but is not limited to: the reference chromaticity reconstruction value of the upper area of the selected current block, and/or the reference chromaticity reconstruction value of the left area.
  • the obtained reference brightness information includes but is not limited to: obtaining the corresponding reference brightness information according to the position of the reference chrominance information.
  • the acquisition method includes but is not limited to: acquiring the corresponding reconstructed brightness information according to the position of the chrominance information in the current block as the reconstructed brightness information of the current block.
• Obtaining the input information includes: obtaining inSize pieces of reference chroma information refC (after the pre-processing operation, if pre-processing is required), obtaining inSize pieces of reference brightness information refY (after the pre-processing operation, if pre-processing is required), and obtaining the brightness reconstruction information recY of the current prediction block (after the pre-processing operation, if pre-processing is required).
  • S703 Perform weight-based chroma prediction calculation according to the input information to determine the chroma prediction value of the current block.
  • predSizeH and predSizeW are determined WCP core parameters, which may be the same as or different from the height nTbH or width nTbW of the current chroma block to be predicted. In this way, under certain conditions, the following calculations can also be performed on only some of the pixels to be predicted in the current block.
  • the chroma prediction calculation of WCP includes the following operations: preprocessing the reference chroma information, obtaining the weight vector, performing weighted prediction based on the weight vector to obtain the chroma prediction value based on the weight, and then correcting it.
  • the preprocessing process of the reference information includes calculating the average value and constructing the reference chromaticity difference vector; the obtaining process of the weight vector includes constructing the brightness difference vector and calculating the weight vector.
  • FIG. 8 shows a schematic flowchart of yet another decoding method provided by an embodiment of the present application. As shown in Figure 8, the method may include:
• the average value avgC is calculated for the inSize pieces of reference chromaticity information refC, and the reference chromaticity difference vector diffC is obtained by subtracting the average avgC from the inSize pieces of reference chromaticity information refC.
  • S802 For each pixel to be predicted, construct a brightness difference vector using the reference brightness information and the brightness reconstruction information of the current block.
  • linear or nonlinear numerical processing can be performed on the brightness difference vector of the pixel to be predicted.
  • the value of the brightness difference vector of the pixel to be predicted can be scaled according to the WCP control parameter S in the WCP core parameter.
• the LUT of the nonlinear weight model is used to process the brightness difference vector diffY[i][j] corresponding to each pixel to be predicted C pred [i][j], and the corresponding integer weight vector cWeightInt[i][j] can be obtained.
  • a nonlinear Softmax function can be used as the weight model.
  • the methods for obtaining the weight model LUT include but are not limited to the following methods:
• the value of theMaxPos can be, but is not limited to being, calculated according to equation (29); it can also be a parameter determined in the WCP core parameters.
• ModelScale can be, but is not limited to, a parameter determined by the WCP core parameters. It represents the amplification factor of the weighting coefficient and is a preset constant.
  • the weight model can also be adjusted according to the WCP control parameter (S) in the core parameters.
  • the weight model LUT can be adjusted according to the WCP control parameter (S). Taking the nonlinear Softmax function as an example, different control parameters can be selected to adjust the function according to the block type category to which the current block belongs.
  • the methods for obtaining the weight model LUT include but are not limited to the following methods:
• the value of theMaxPos can be, but is not limited to being, calculated according to equation (29). It can be a parameter determined in the WCP core parameters, and can be assigned different values for different WCP control parameters (S), that is, theMaxPos[S].
• ModelScale can be, but is not limited to, a parameter determined by the WCP core parameters, and it represents the amplification factor of the weighting coefficient.
  • f() refers to the function of the WCP control parameter S and the brightness difference diffY[i][j][k], and its output is the weight index value of the LUT.
  • f() includes but is not limited to the implementation of the following examples:
• the chroma prediction value of the pixel to be predicted is calculated based on the integer weight vector cWeightInt[i][j] corresponding to each pixel to be predicted, the reference chroma difference vector diffC, and the average value of the reference chroma information. Specifically, the reference chromaticity difference vector diffC and the weight vector elements corresponding to each pixel to be predicted are multiplied one by one to obtain subC[i][j], and the accumulated multiplication result is divided by the sum of the integer weight vector corresponding to each pixel to be predicted.
• the method of obtaining Shift includes but is not limited to: Shift being a parameter determined by the WCP core parameters.
  • the value of Shift can be assigned a value of 5.
  • Func() is a function related to Shift.
  • the input is the sum of the weight vectors and the calculated x, and the output is the array index value of DivSigTable[].
  • a specific form of Func() can be expressed as:
  • the correction operation is mainly performed on the chroma prediction value in the first prediction block predWcp.
  • the chroma prediction value in predWcp should be limited to the preset range. If it exceeds the preset range, corresponding correction operations need to be performed. For example:
  • the chroma prediction value of C pred [i][j] can be clamped, as follows:
• BitDepth is the bit depth required for the chroma pixel value, to ensure that all chroma prediction values in predWcp are between 0 and (1<<BitDepth)-1. That is, as shown in the following formula:
  • S704 Perform post-processing operations on the chroma prediction value to determine the target chroma prediction value of the current block.
  • the weight-based chroma prediction output predWcp needs to be post-processed as the final target chroma prediction value predSamples under certain conditions, otherwise the final target chroma prediction value predSamples is predWcp.
  • predWcp can be smoothed and filtered as the final chroma prediction value predSamples.
  • a position-related correction process can be performed on predWcp. For example: use reference pixels with close spatial positions to calculate the chroma compensation value for each pixel to be predicted, use this chroma compensation value to correct predWcp, and use the corrected prediction value as the final chroma prediction value predSamples.
  • the chroma prediction values calculated by other chroma prediction modes can be weighted and fused with the chroma prediction value predWcp calculated by WCP, and this fusion result can be used as the final chroma prediction value predSamples .
  • the chroma prediction value predicted by CCLM mode and the chroma prediction value predWcp calculated by WCP can be weighted with equal or unequal weight, and the weighted result is used as the final chroma prediction value predSamples.
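• For example, an equal-weight fusion could be sketched as follows; the function name is illustrative:

```cpp
// Sketch of the equal-weight fusion: the chroma prediction from CCLM mode
// and the WCP prediction predWcp are averaged with rounding to form the
// final chroma prediction predSamples; unequal weights would simply replace
// the fixed 1:1 ratio.
int fuseChromaPredictions(int predCclm, int predWcp) {
    return (predCclm + predWcp + 1) >> 1;
}
```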
  • a neural network model can be used to modify the WCP prediction output predWcp, etc. This embodiment of the present application does not limit this in any way.
• the weight model LUT can be adjusted according to the WCP control parameter (S) in the WCP core parameters. For example, if the size of the current block is flexible, the weight model can be adjusted according to the WCP control parameter (S). Taking the nonlinear Softmax function as an example, different control parameters can be selected to adjust the function according to the block type category to which the current prediction block belongs.
  • the methods for obtaining the weight model LUT include but are not limited to the following methods:
• the value of theMaxPos can be, but is not limited to being, calculated according to equation (50). It can be a parameter determined in the WCP core parameters, and can be assigned different values for different WCP control parameters (S), that is, theMaxPos[S].
• ModelScale can be, but is not limited to, a parameter determined by the WCP core parameters, and it represents the amplification factor of the weighting coefficient.
  • f() refers to the function of the WCP control parameter (S) and the brightness difference vector diffY[i][j][k], and its output is the weight index value of the LUT.
  • f() includes but is not limited to the implementation of the following examples:
  • f() refers to the function of the WCP control parameter (S) and the brightness difference vector diffY[i][j][k], and its output is the weight index value of the LUT.
  • f() includes but is not limited to the implementation of the following examples:
  • equation (58) corresponds to (baseDiffL+(blockIndex&1)*(baseDiffL>>1)-(blockIndex&2)*(baseDiffL>>2)+indexOffset[cnt]) in the decoding specification text.
  • the wcpSizeId in Equation (58) is a variable used to determine the WCP core parameters.
  • the indexOffset is calculated as follows:
• indexOffset = ((wcpSizeId&1) - (wcpSizeId&2)) * (diffY[i][j][k]&1)    (59)
  • Inputs to this process include:
• the sample position (xTbC, yTbC) of the upper left sample of the current transform block relative to the upper left sample of the current picture.
  • the brightness position (xTbY, yTbY) corresponding to the current block is derived as follows:
  • availL and availT are derived as follows:
• the current block's luminance position (xCurr, yCurr) is set equal to (xTbY, yTbY), the neighboring luma position is (xTbY-1, yTbY), checkPredModeY is set to FALSE, cIdx is taken as input, and the output is assigned to availL.
• the current block's luminance position (xCurr, yCurr) is set equal to (xTbY, yTbY), the neighboring luma position is (xTbY, yTbY-1), checkPredModeY is set to FALSE, cIdx is taken as input, and the output is assigned to availT.
• the number of available upper right adjacent chroma samples, numTopRight, is derived as follows:
  • variable numTopRight is set to 0 and availableTR is set to TRUE.
• the current luma position (xCurr, yCurr) is set equal to (xTbY, yTbY), the neighboring luma position is (xTbY+x*SubWidthC, yTbY-1), checkPredModeY is set to FALSE, cIdx is taken as input, and the output is assigned to availableTR.
• the variable numLeftBelow is set to 0 and availableLB is set to TRUE.
• the current luma position (xCurr, yCurr) is set equal to (xTbY, yTbY), the adjacent luma position is (xTbY-1, yTbY+y*SubHeightC), checkPredModeY is set to FALSE, cIdx is taken as input, and the output is assigned to availableLB.
  • the number of available adjacent chroma samples numSampT for the upper and upper right sides and the number of available adjacent chroma samples numSampL for the left and lower left sides are derived as follows:
  • predModeIntra equals INTRA_WCP
  • predModeIntra equals INTRA_WCP
• pDsY[x][y] = (pY[SubWidthC*x-1][y] + 2*pY[SubWidthC*x][y] + pY[SubWidthC*x+1][y] + 2) >> 2
• blockIndex is set equal to 0
• blockIndex is set equal to 1
  • blockIndex is set equal to 2
• indexOffset[cnt] = ((blockIndex&1) - (blockIndex&2)) * (baseDiffY[cnt]&1)
• LUTindex[cnt] = Clip3(0, WCP_LUT_Max_Index, (baseDiffL + (blockIndex&1)*(baseDiffL>>1) - (blockIndex&2)*(baseDiffL>>2) + indexOffset[cnt]))
  • WCP_LUT_Max_Index is equal to 49
• numerator[cnt] = WcpLUT[LUTindex[cnt]]
• Clip1(x) = Clip3(0, (1<<BitDepth)-1, x)
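• The formulas above can be restated in code as the following sketch; the surrounding context (how baseDiffL and baseDiffY[cnt] are obtained) is assumed to be available:

```cpp
// Sketch restating the specification-style formulas: the variable names
// (baseDiffL, baseDiffY, blockIndex, WcpLUT) follow the extracted text,
// Clip3 has its usual meaning, and WCP_LUT_Max_Index is equal to 49.
static inline int Clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

constexpr int WCP_LUT_Max_Index = 49;

int lutNumerator(const int* WcpLUT, int baseDiffL, int baseDiffY_cnt, int blockIndex) {
    const int indexOffset = ((blockIndex & 1) - (blockIndex & 2)) * (baseDiffY_cnt & 1);
    const int lutIndex = Clip3(0, WCP_LUT_Max_Index,
                               baseDiffL + (blockIndex & 1) * (baseDiffL >> 1)
                                         - (blockIndex & 2) * (baseDiffL >> 2)
                                         + indexOffset);
    return WcpLUT[lutIndex];  // numerator[cnt]
}
```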
  • Inputs to this process include:
  • variable checkPredModeY is used to identify whether availability depends on the prediction mode.
  • the output of this process is the availability of the adjacent block containing the position (xNbY, yNbY), denoted by availableN.
• the neighboring block availability availableN is derived as follows:
  • – availableN is set to FALSE if one or more of the following conditions are TRUE:
  • –xNbY is less than 0;
  • –yNbY is less than 0;
  • –xNbY is greater than or equal to pps_pic_width_in_luma_samples
  • –yNbY is greater than or equal to pps_pic_height_in_luma_samples
• – the adjacent block is contained in a different slice than the current block;
• – the adjacent block is contained in a different tile than the current block;
• – sps_entropy_coding_sync_enabled_flag is equal to 1 and (xNbY>>CtbLog2SizeY) is greater than or equal to (xCurr>>CtbLog2SizeY)+1;
• Otherwise, availableN is set equal to TRUE.
  • availableN is set to FALSE when all of the following conditions are true:
  • This embodiment provides a decoding method.
• Through the above embodiment, the specific implementation of the foregoing embodiments is described in detail.
• the floating-point operations in the WCP prediction technology process are optimized and implemented using integer operations; on the one hand, this makes full use of the content characteristics of the current block and adaptively selects the best shift amount for the integer operations; on the other hand, it fully ensures the accuracy of the WCP prediction technology; in addition, it fully considers the characteristics of the weight model and reasonably designs the integer operation process. That is to say, by optimizing the calculation process of weight-based chroma prediction in the WCP prediction technology, this technical solution uses integer operations throughout, and can also adaptively select the optimal shift amount and other information required for integerization.
  • WCP prediction technology improves the encoding and decoding performance.
  • FIG. 9 shows a schematic flow chart of an encoding method provided by an embodiment of the present application. As shown in Figure 9, the method may include:
  • S901 Determine the reference value of the first color component of the current block and the reference value of the second color component of the current block.
  • the encoding method in the embodiment of the present application is applied to an encoding device, or an encoding device (which may also be referred to as an "encoder" for short) integrated with the encoding device.
  • the encoding method in the embodiment of the present application may specifically refer to an intra-frame prediction method, and more specifically, an integer operation method of weight-based chroma prediction (Weight-based Chroma Prediction, WCP).
  • the video image may be divided into multiple coding blocks, and each coding block may include a first color component, a second color component, and a third color component
• the current block here refers to the coding block in the video image on which intra prediction is currently to be performed.
• if the current block performs first color component prediction
  • the first color component is a brightness component
  • the component to be predicted is a brightness component
  • the current block can also be called a brightness prediction block
• if the current block performs second color component prediction
  • the second color component is a chroma component, that is, the component to be predicted is a chroma component
  • the current block can also be called a chroma prediction block.
• the reference information of the current block may include the value of the first color component sampling point in the adjacent area of the current block and the value of the second color component sampling point in the adjacent area of the current block.
  • the value of the sampling point can be determined based on the coded pixels in the adjacent area of the current block.
  • the adjacent area of the current block may include at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • the upper adjacent area and the upper right adjacent area can be regarded as the upper area
  • the left adjacent area and the lower left adjacent area can be regarded as the left area
• the adjacent area can also include the upper left area; see the aforementioned Figure 6 for details.
• the upper area, left area and upper left area of the current block, as adjacent areas, can be called the reference area of the current block, and the pixels in the reference area are reconstructed reference pixels.
  • determining the reference value of the first color component of the current block and the reference value of the second color component of the current block may include:
  • the reference value of the second color component of the current block is determined according to the value of the second color component sampling point in the adjacent area of the current block.
• the reference pixel of the current block may refer to a reference pixel point adjacent to the current block, and may also be called the first color component sampling point or the second color component sampling point in the adjacent area of the current block, which can be represented by Neighboring Sample or Reference Sample.
  • the adjacency here may be spatial adjacency, but is not limited to this.
  • adjacency can also be adjacent in time domain, adjacent in space and time domain, and even the reference pixel of the current block can be a certain reference pixel point adjacent in space, adjacent in time domain, or adjacent in space and time domain.
  • the reference pixels obtained after such processing are not limited in any way by the embodiments of this application.
  • the first color component is the brightness component and the second color component is the chrominance component; then, the value of the first color component sampling point in the adjacent area of the current block It is expressed as the reference brightness information corresponding to the reference pixel of the current block, and the value of the second color component sampling point in the adjacent area of the current block is expressed as the reference chrominance information corresponding to the reference pixel of the current block.
  • the value of the first color component sampling point or the value of the second color component sampling point is determined from the adjacent area of the current block.
• the adjacent area here may include only the upper adjacent area, or only the left adjacent area, or the upper adjacent area and the upper right adjacent area, or the left adjacent area and the lower left adjacent area, or the upper adjacent area and the left adjacent area, or may even include the upper adjacent area, the upper right adjacent area, the left adjacent area, etc.; this is not limited in any way by the embodiment of the present application.
  • determining the reference pixel of the current block may include: filtering pixels in adjacent areas of the current block to determine the reference pixel.
  • a first reference pixel set is formed according to pixels in adjacent areas of the current block; then the first reference pixel set can be filtered to determine the reference pixels.
  • the number of reference pixels can be M, and M is a positive integer.
  • M reference pixels can be selected from pixels in adjacent areas.
  • the value of M can generally be 4, but there is no specific limit.
  • filtering pixels in adjacent areas and determining reference pixels may include:
  • the reference pixel is determined from the pixels in the adjacent area.
• the color component intensity can be represented by color component information, such as reference brightness information, reference chromaticity information, etc.; here, the larger the value of the color component information, the higher the color component intensity.
• the pixels in the adjacent area can be filtered according to the positions of the pixels, or according to the intensities of the color components, so that the reference pixels of the current block can be determined based on the filtered pixels, and further the value of the first color component sampling point in the adjacent area of the current block and the value of the second color component sampling point in the adjacent area of the current block can be determined; then the reference value of the first color component of the current block is determined based on the value of the first color component sampling point in the adjacent area of the current block, and the reference value of the second color component of the current block is determined based on the value of the second color component sampling point in the adjacent area of the current block.
• determining the reference value of the first color component of the current block based on the value of the first color component sampling point in the adjacent area of the current block may include: performing a first filtering process on the value of the first color component sampling point in the adjacent area of the current block to determine the reference value of the first color component of the current block.
  • the first filtering process is a downsampling filtering process.
  • the first color component is a brightness component.
• the reference brightness information can be downsampling filtered so that the spatial resolution of the filtered reference brightness information is the same as the spatial resolution of the reference chrominance information. For example, if the size of the current block is 2M×2N and the amount of reference brightness information is 2M+2N, then after downsampling filtering it can be transformed to M+N, so as to obtain the reference value of the first color component of the current block.
• determining the reference value of the second color component of the current block based on the value of the second color component sampling point in the adjacent area of the current block may include: performing a second filtering process on the value of the second color component sampling point in the adjacent area of the current block to determine the reference value of the second color component of the current block.
  • the second filtering process is an upsampling filtering process.
  • the upsampling rate is a positive integer multiple of 2.
  • the first color component is the brightness component
  • the second color component is the chrominance component.
• the embodiment of the present application can also perform upsampling filtering on the reference chrominance information, so that the spatial resolution of the filtered reference chrominance information is the same as the spatial resolution of the reference brightness information. For example, if the amount of reference luminance information is 2M+2N and the amount of reference chrominance information is M+N, then after the reference chrominance information is upsampling filtered, it can be transformed to 2M+2N, so as to obtain the reference value of the second color component of the current block.
  • S902 Determine the weighting coefficient according to the reference value of the first color component of the current block.
  • the reference information of the current block may also include the reconstructed value of the first color component sampling point in the current block. Assuming that the first color component is a brightness component, the reconstructed value of the first color component sampling point in the current block is the reconstructed brightness information of the current block.
  • determining the weighting coefficient according to the reference value of the first color component of the current block may include:
  • the weighting coefficient is determined based on the reference sample of the first color component of the current block.
  • determining the reference sample value of the first color component of the current block based on the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block may include:
• a subtraction calculation is performed on the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block, and the reference sample value of the first color component of the current block is determined based on the resulting difference.
  • the reference sample value of the first color component of the current block may be set equal to the absolute value of the difference.
  • the reference sample value of the first color component of the current block is determined based on the difference value.
• the difference value may also be squared, or the difference value may be subjected to some related processing and mapping to determine the reference sample value of the first color component of the current block; this is not limited here.
  • determining the reference sample value of the first color component of the current block based on the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block may include :
  • the reference sample value of the first color component of the current block is determined according to the filtered sample value of the first color component sampling point in the current block and the reference value of the first color component of the current block.
  • the third filtering process is a downsampling filtering process.
  • the first color component is a brightness component.
• the reconstructed brightness information in the current block may also be subjected to downsampling filtering processing. For example, if the amount of reconstructed brightness information in the current block is 2M×2N, it can be transformed to M×N after downsampling filtering.
• determining the reference sample value of the first color component of the current block based on the filtered sample value of the first color component sampling point in the current block and the reference value of the first color component of the current block may be: determining the reference sample value of the first color component of the current block based on the difference between the filtered sample value of the first color component sampling point in the current block and the reference value of the first color component of the current block; more specifically, the reference sample value of the first color component of the current block may be determined based on the absolute value of the difference between the filtered sample value of the first color component sampling point in the current block and the reference value of the first color component of the current block, and there is no limitation here.
  • the embodiment of the present application may further determine the weighting coefficient.
  • the reference sample value of the first color component of the current block may be the absolute value of the difference between the reconstructed value of the first color component sampling point in the current block and the reference value of the first color component of the current block.
  • the reference sample value of the first color component of the current block may be the absolute value of the difference between the brightness reconstruction information (represented by recY) in the current block and the inSize number of reference brightness information (represented by refY).
• for the pixel to be predicted, its corresponding brightness difference vector diffY[i][j][k] can be obtained by subtracting the inSize pieces of reference brightness information refY[k] from the corresponding brightness reconstruction information recY[i][j] and taking the absolute value; that is, in the embodiment of the present application, the reference sample value of the first color component of the current block can be represented by diffY[i][j][k].
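• A minimal sketch of this per-pixel construction of diffY is given below; the array layout is an illustrative assumption:

```cpp
#include <cstdlib>

// Sketch of building diffY for one pixel to be predicted (i, j): the
// reference sample value of the first color component is the absolute
// difference between the reconstructed luma recY[i][j] and each of the
// inSize reference luma values refY[k].
void buildDiffY(int recY_ij, const int* refY, int inSize, int* diffY_ij) {
    for (int k = 0; k < inSize; ++k)
        diffY_ij[k] = std::abs(recY_ij - refY[k]);
}
```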
• determining the weighting coefficient according to the reference sample value of the first color component of the current block may include: determining the weight index value according to the reference sample value of the first color component of the current block; and determining the weighting coefficient according to the weight index value using the first preset mapping relationship.
  • determining the weight index value based on the reference sample value of the first color component of the current block may include:
  • the reference sample value of the first color component is corrected according to the maximum weight index value and the minimum weight index value to determine the weight index value.
  • the maximum weight index value can be represented by theMaxPos
  • the minimum weight index value can be represented by zero
  • the weight index value can be represented by index.
  • the weight index value is limited between theMaxPos and zero.
  • the weight index value can be calculated according to the following formula, as shown in the aforementioned formula (2).
  • the maximum weight index value may be related to the bit depth (represented by BitDepth) of luminance or chrominance.
  • the maximum weight index value can be calculated according to the following formula, as shown in the aforementioned formula (3).
  • the value of theMaxPos includes but is not limited to being calculated according to equation (3), and it may also be determined in the core parameters of the WCP.
  • the first preset mapping relationship is a numerical mapping lookup table of weight index values and weighting coefficients. That is to say, in the embodiment of the present application, the decoding end may be pre-set with a corresponding look-up table (Look Up Table, LUT), and the corresponding weighting coefficient can be determined through this lookup table in combination with the weight index value.
  • the weighting coefficient cWeightInt[i][j][k] can be represented by the following mapping relationship. For details, see the aforementioned equation (4).
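A hedged sketch of the LUT-based derivation follows; the table size and entries are placeholders, since the actual values corresponding to equation (4) are not reproduced in this excerpt.

```c
#define THE_MAX_POS 31  /* placeholder; the real theMaxPos depends on the bit depth or the WCP core parameters */

/* Placeholder LUT: the actual entries correspond to equation (4) and are not
 * reproduced here. */
static const int cWeightLut[THE_MAX_POS + 1] = { 0 };

/* cWeightInt[i][j][k] = cWeightLut[index], where index was derived from
 * diffY[i][j][k] and already limited to [0, THE_MAX_POS]. */
static int weight_from_lut(int index)
{
    return cWeightLut[index];
}
```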
  • the first preset mapping relationship may also be a preset functional relationship.
  • using the first preset mapping relationship to determine the weighting coefficient according to the weight index value may include: determining the first value corresponding to the weight index value under the first preset mapping relationship; and setting the weighting coefficient equal to the first value.
  • determining the first value corresponding to the weight index value under the first preset mapping relationship may include:
  • according to the weight index value, a second value is determined using the second preset mapping relationship, and the first product value of the first factor and the second value is calculated;
  • the first value is set equal to the corresponding value of the first product value under the first preset mapping relationship.
  • the first factor can be represented by ModelScale
  • the weight index value can be represented by index.
  • the weighting coefficient cWeightInt[i][j][k] can also be expressed by the following functional relationship, as shown in the aforementioned equation (5).
  • the first preset mapping relationship can be set to Round(x); then, when x is equal to the first product value, the value of Round(x) is the first value, that is, the weighting coefficient cWeightInt[i][j][k].
  • the first preset mapping relationship may be as shown in the aforementioned equation (6).
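The following sketch assumes that equation (6) denotes the rounding function Round(x) = Sign(x)·Floor(Abs(x) + 0.5) that is common in video coding specifications, and leaves the second preset mapping relationship as an unspecified callback f2; both are assumptions rather than the application's exact formulas.

```c
#include <math.h>

/* Round(x) = Sign(x) * Floor(Abs(x) + 0.5); whether equation (6) is exactly
 * this form is an assumption. */
static long wcp_round(double x)
{
    return (long)((x >= 0.0) ? floor(x + 0.5) : -floor(-x + 0.5));
}

/* Sketch of the functional alternative: f2() is the second preset mapping
 * (its exact shape is not given here), ModelScale is the first factor, and
 * the weighting coefficient is cWeightInt = Round(ModelScale * f2(index)). */
static int weight_from_function(int index, int ModelScale, double (*f2)(int))
{
    double firstProduct = (double)ModelScale * f2(index); /* the "first product value" */
    return (int)wcp_round(firstProduct);
}
```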
  • the first factor may be a preset constant value. That is, the first factor may be a preset constant, which is independent of the block size parameter.
  • the value of the first factor may also be related to the block size parameter.
  • determining the first factor may include: determining the value of the first factor according to the size parameter of the current block; wherein the size parameter of the current block includes at least one of the following parameters: the width of the current block, the height of the current block. That is to say, in this embodiment of the present application, a classification method can be used to fix the value of the first factor. For example, the current block size parameter is divided into three categories, and the value of the first factor corresponding to each category is determined.
  • if the size parameter of the current block meets the third preset condition, that is, Min(W, H) > 16, then the value of the first factor is set to the third value.
  • W represents the width of the current block
  • H represents the height of the current block.
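A sketch of the size-based classification follows. Only the third condition Min(W, H) > 16 is stated in the text; the other two conditions and all three returned values are hypothetical placeholders.

```c
static int min_int(int a, int b) { return a < b ? a : b; }

/* Hypothetical three-way classification of the first factor by block size. */
static int first_factor_from_size(int W, int H)
{
    if (min_int(W, H) > 16)      /* third preset condition (from the text) */
        return 64;               /* "third value" -- placeholder */
    else if (min_int(W, H) > 4)  /* hypothetical second condition */
        return 32;               /* placeholder */
    else
        return 16;               /* placeholder */
}
```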
  • determining the weight index value based on the reference sample value of the first color component of the current block may include: determining the second factor; and determining the weight index value based on the reference sample value of the first color component of the current block and the second factor.
  • the weighting coefficient can also be adjusted according to the control parameters in the core parameters of the WCP under certain conditions.
  • the second factor is the control parameter described in this embodiment (which may also be called “scale parameter”, “scale factor”, etc.) and is represented by S.
  • the weight coefficient can be adjusted according to the second factor.
  • for the nonlinear function, such as the Softmax function, different selections can be made according to the block classification category to which the current block belongs.
  • the second factor is used to adjust the function so that the weighting coefficient is determined based on the adjusted function.
  • the second factor may be a preset constant value. That is, in this case, for S, the weighted coefficient distribution of neighboring chromaticities can be adjusted according to the relatively flat characteristics of the chromaticity, thereby capturing a weighted coefficient distribution suitable for natural image chromaticity prediction.
  • the given S set is traversed, and the suitability of S is measured by the difference between the predicted chroma and the original chroma under different S.
  • S can be 2^(-α), where α ∈ {1, 0, -1, -2, -3}; after experiments, it is found that within this set of S values, the best value of S is 4 (corresponding to α = -2). Therefore, in a specific embodiment, S may be set to 4, but this is not specifically limited in the embodiment of this application.
  • the value of the second factor may also be related to the block size parameter.
  • determining the second factor may include: determining the value of the second factor according to the size parameter of the current block; wherein the size parameter of the current block includes at least one of the following parameters: the width of the current block, the height of the current block.
  • determining the value of the second factor according to the size parameter of the current block may include:
  • the second factor is determined to be 16.
  • a classification method may be used to fix the value of the second factor.
  • the current block size parameter is divided into three categories, and the value of the second factor corresponding to each category is determined.
  • embodiments of the present application may also pre-store a lookup table mapping the size parameters of the current block and the value of the second factor, and then determine the value of the second factor based on the lookup table.
  • Table 1 shows the correspondence between a second factor provided by the embodiment of the present application and the size parameter of the current block.
  • the corresponding relationship between the above-mentioned second factor and the size parameter of the current block can also be fine-tuned.
  • Table 2 shows the correspondence between another second factor provided by the embodiment of the present application and the size parameter of the current block.
  • determining the value of the second factor according to the size parameter of the current block may include:
  • the second factor is determined to be 15.
  • Table 3 shows the correspondence between another second factor provided by the embodiment of the present application and the size parameter of the current block.
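In the same spirit as Tables 1 to 3 (whose entries are not reproduced in this excerpt), the size-based classification can be driven by a small lookup structure; the (minSize, S) pairs below are hypothetical placeholders.

```c
/* Hypothetical size-to-second-factor mapping; the real entries come from the
 * application's Tables 1-3. */
struct SizeToS { int minSize; int S; };

static const struct SizeToS kSizeToS[] = {
    { 32, 16 },   /* placeholder: large blocks */
    { 16,  8 },   /* placeholder: medium blocks */
    {  0,  4 },   /* placeholder default: the value the text reports as working well */
};

static int second_factor_from_size(int W, int H)
{
    int m = (W < H) ? W : H;
    for (unsigned i = 0; i < sizeof(kSizeToS) / sizeof(kSizeToS[0]); i++)
        if (m >= kSizeToS[i].minSize)
            return kSizeToS[i].S;
    return 4;
}
```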
  • determining the value of the second factor according to the size parameter of the current block may include: determining the value of the second factor according to the block type index value.
  • the aforementioned Table 4 shows the correspondence between a second factor and the block type index value provided by the embodiment of the present application.
  • the aforementioned Table 5 shows the correspondence between another second factor and the block type index value provided by the embodiment of the present application.
  • the aforementioned Table 6 shows another correspondence between the second factor and the block type index value provided by the embodiment of the present application.
  • the second factor can also be classified according to the number of reference pixels of the current block.
  • determining the second factor may include: determining the value of the second factor according to the number of reference pixels of the current block; where N represents the number of reference pixels.
  • determining the value of the second factor based on the number of reference pixels of the current block may include:
  • classification is performed according to the number of reference pixels of the current block.
  • Table 7 shows a correspondence between a second factor and the number of reference pixels provided by the embodiment of the present application.
  • determining the weight index value according to the reference sample value of the first color component of the current block and the second factor may include:
  • the third value is corrected according to the maximum weight index value and the minimum weight index value to determine the weight index value.
  • the maximum weight index value can be represented by theMaxPos
  • the minimum weight index value can be represented by zero
  • the weight index value can be represented by index.
  • the weight index value is limited to between theMaxPos and zero
  • the third value is f(S,diffY[i][j][k]).
  • the weight index value can be calculated according to the following formula, as shown in the aforementioned formula (7).
  • the value of theMaxPos can be calculated according to the above formula (3), or can be determined in the core parameters of WCP, without any limitation.
  • f() refers to the function with the second factor S and the brightness difference vector diffY[i][j][k].
  • f(S,diffY[i][j][k]) can be realized by the following formula, as shown in the aforementioned formula (8).
  • f(S,diffY[i][j][k]) can be implemented through the following operations.
  • using the third preset mapping relationship to determine the third value based on the reference sample value of the first color component and the second factor may include:
  • if the second factor is equal to 2, then the target offset is equal to 0; if the second factor is equal to 4, then the target offset is equal to 1; if the second factor is equal to 8, then the target offset is equal to 2; then a right shift operation is performed on the reference sample value of the first color component according to the target offset to determine the third value.
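A sketch of the shift-based realization of f(S, diffY) follows, using exactly the S-to-offset mapping listed above; any behaviour for other values of S would be an extension not covered by this excerpt.

```c
/* Shift-based f(S, diffY): the target offset is derived from the second factor
 * S, and the luma difference is right-shifted by that offset. */
static int f_shift(int S, int diffY)
{
    int offset = 0;                  /* S == 2 -> offset 0 (per the text) */
    if (S == 4) offset = 1;          /* per the text */
    else if (S == 8) offset = 2;     /* per the text */
    return diffY >> offset;
}
```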
  • using the first preset mapping relationship to determine the weighting coefficient according to the weight index value may include:
  • determining the fourth value corresponding to the second product value under the first preset mapping relationship may include:
  • the fourth value is set equal to the corresponding value of the third product value under the first preset mapping relationship.
  • the first factor can be represented by ModelScale
  • the second factor can be represented by S
  • the weight index value can be represented by index
  • the first preset mapping relationship is a numerical mapping lookup table of the second factor, the weight index value and the weighting coefficient. That is to say, in the embodiment of the present application, the decoding end may be pre-set with a corresponding look-up table (Look Up Table, LUT). The corresponding weighting coefficient can be determined through this lookup table and combined with the index.
  • the weighting coefficient cWeightInt[i][j][k] can be represented by the following mapping relationship as shown in the aforementioned equation (11).
  • the first preset mapping relationship may also be a preset functional relationship.
  • the inputs of the function are index and S, and the output is a weighting coefficient.
  • the weighting coefficient cWeightInt[i][j][k] can also be represented by the following functional relationship as shown in the aforementioned formula (12).
  • S903: Determine the reference mean value of the second color component of the current block based on the reference value of the second color component of the current block; and, based on the reference value of the second color component of the current block and the reference mean value of the second color component of the current block, determine the reference sample value of the second color component of the current block.
  • the number of input reference pixels predicted based on weights can be represented by N or inSize.
  • the number of input reference pixels predicted based on the weight is the same as the number of reference samples of the second color component. It can also be said that N represents the number of reference samples of the second color component, and N is a positive integer.
  • determining the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block may include: performing an average calculation on the N reference values of the second color component of the current block to obtain the reference mean value of the second color component of the current block.
  • the reference value of the second color component of the current block can be represented by refC[k];
  • the reference mean value of the second color component of the current block can be represented by avgC, and the calculation of avgC is as shown in the aforementioned formula (13).
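A sketch of the reference chroma mean follows; whether formula (13) uses this exact rounded integer division is an assumption.

```c
/* avgC: (rounded) average of the N reference chroma samples refC[k]. */
static int chroma_reference_mean(const int *refC, int N)
{
    long long sum = 0;
    for (int k = 0; k < N; k++)
        sum += refC[k];
    return (int)((sum + (N >> 1)) / N);   /* rounded integer mean */
}
```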
  • the method may further include: determining the block type index value according to the size parameter of the current block; and using the fifth preset mapping relationship to determine the value of N based on the block type index value.
  • the fifth preset mapping relationship represents a numerical mapping lookup table between the block type index value and N.
  • the block type index value may be represented by wcpSizeId.
  • the number of input reference pixels predicted based on weights will also be different; that is, the values of N or (inSize) are different.
  • the aforementioned Table 8 and Table 9 respectively show the corresponding relationship between the block type index value and the value of N(inSize).
  • determining the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block may include: performing a subtraction calculation on the reference value of the second color component of the current block and the reference mean value of the second color component of the current block to obtain the reference sample value of the second color component of the current block.
  • the average avgC of the N pieces of reference chromaticity information refC is calculated, and then the reference chromaticity difference vector diffC is obtained by subtracting the average avgC from the N pieces of reference chromaticity information refC. Specifically, diffC is calculated as shown in the aforementioned equation (14).
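The reference chroma difference vector can be sketched as below; the sign convention diffC[k] = refC[k] - avgC is assumed and must match how the weighted result is later added back to avgC.

```c
/* diffC[k] = refC[k] - avgC for each of the N reference chroma samples. */
static void chroma_difference_vector(const int *refC, int N, int avgC, int *diffC)
{
    for (int k = 0; k < N; k++)
        diffC[k] = refC[k] - avgC;
}
```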
  • S904: Determine the predicted value of the second color component sampling point in the current block based on the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient.
  • S905: Determine the predicted difference value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block.
  • the predicted value of the second color component sampling point in the current block can be further determined.
  • the predicted value of the second color component sampling point in the current block is determined according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient.
  • the weighted sum value of the pixels to be predicted in the current block is set to be equal to the sum of N weighted values, and the sum of coefficients of the pixels to be predicted in the current block is set to be equal to the sum of N weighted coefficients;
  • N represents the number of reference samples of the second color component
  • N is a positive integer
  • the predicted value of the second color component of the pixel to be predicted in the current block is determined.
  • the N weighting coefficients can be added to obtain the coefficient sum value of the pixels to be predicted in the current block, which can be represented by sum.
  • the calculation formula is as shown in the aforementioned formula (17).
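The per-pixel accumulation of the weighted sum and the coefficient sum can be sketched as follows; the 64-bit accumulators are a precaution, since the actual bit widths are not specified in the text.

```c
/* For one pixel to be predicted: weightedSum = sum_k cWeight[k] * diffC[k],
 * coeffSum = sum_k cWeight[k]. */
static void accumulate(const int *cWeight, const int *diffC, int N,
                       long long *weightedSum, long long *coeffSum)
{
    long long ws = 0, cs = 0;
    for (int k = 0; k < N; k++) {
        ws += (long long)cWeight[k] * diffC[k];
        cs += cWeight[k];
    }
    *weightedSum = ws;
    *coeffSum = cs;
}
```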
  • using the fourth preset mapping relationship to determine the sixth value based on the weighted sum value and the coefficient sum value may include:
  • the preset offset can be represented by Shift
  • the array index value can be represented by normDiff
  • the first value can be represented by x
  • the second value can be represented by v
  • the third value can be represented by y
  • the preset addition value can be represented by add.
  • the value of Shift may be set to 5, but this is not specifically limited.
  • the sixth preset mapping relationship may be as shown in the aforementioned equation (18).
  • the calculation of the first value can be: first determine the base-2 logarithm of the coefficient sum value, and then take the maximum integer value that is less than or equal to that logarithm;
  • the maximum integer value determined here is the first value; alternatively, the first value can be set equal to the number of binary digits required for the binary representation of the coefficient sum value minus one; or a binary right-shift operation can be performed on the coefficient sum value, the number of right shifts needed for the shifted value to become equal to zero is determined, and the first value is set equal to that number of right shifts minus one, and so on.
  • This embodiment of the present application does not impose any limitations.
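Two of the equivalent computations of the first value listed above are sketched below for a positive coefficient sum value; the floating-point logarithm is shown only for comparison, while an integer-only implementation would use the shift-counting form.

```c
#include <math.h>

/* First value x = floor(log2(coeffSum)), via the base-2 logarithm. */
static int first_value_via_log(long long coeffSum)
{
    return (int)floor(log2((double)coeffSum));
}

/* Same value via shift counting: number of binary digits minus one. */
static int first_value_via_shifts(long long coeffSum)
{
    int shifts = 0;
    while ((coeffSum >> shifts) != 0)   /* count shifts until the value becomes zero */
        shifts++;
    return shifts - 1;
}
```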
  • using the seventh preset mapping relationship to determine the array index value based on the coefficient sum value and the first value may include: taking the coefficient sum value and the first value as the inputs of a preset functional relationship (the seventh preset mapping relationship), and taking the array index value as its output.
  • the calculation of the array index value can be shown by the aforementioned formula (19).
  • Func() is a function related to Shift
  • the aforementioned equation (20) shows a specific form of Func().
  • using the eighth preset mapping relationship to determine the second value based on the array index value and the preset offset may include:
  • the second value is determined using an eighth preset mapping relationship according to the index indication value and the preset offset.
  • the array mapping table is represented by DivSigTable, then the corresponding index indication value of the array index value normDiff in the DivSigTable is DivSigTable[normDiff].
  • the eighth preset mapping relationship between DivSigTable[normDiff] and Shift is as shown in the aforementioned equation (21).
  • determining the third value based on the first value may include:
  • the third value is set equal to the first value plus one.
  • the first numerical value is represented by x
  • the third numerical value is represented by y.
  • it can be represented by the aforementioned formula (22).
  • the preset addition value is determined according to the third value and the preset offset, and its calculation formula is as shown in the aforementioned equation (23).
  • the first offset is determined by the third value and the preset offset; for example, the first offset is obtained by adding the third value and the preset offset, that is, the first offset can be y + Shift.
  • the predicted value of the second color component of the pixel to be predicted in the current block is expressed by Cpred[i][j]; at this time, the predicted value of the second color component of the pixel to be predicted in the current block can be obtained by adding the reference mean value of the second color component and the sixth value.
  • the calculation formula is as shown in the aforementioned formula (25).
  • Cpred[i][j] usually needs to be limited to a preset range. Therefore, in some embodiments, the method may further include: performing a correction operation on the predicted value of the second color component of the pixel to be predicted, and using the corrected predicted value as the predicted value of the second color component of the pixel to be predicted in the current block.
  • the preset range may be between 0 and (1 << BitDepth) - 1, where BitDepth is the bit depth required for the chroma component. If the predicted value exceeds the preset range, the predicted value needs to be corrected accordingly.
  • Cpred[i][j] can be clamped as follows:
  • when the value of Cpred[i][j] is greater than or equal to 0 and less than or equal to (1 << BitDepth) - 1, the corrected value is equal to Cpred[i][j]; when it is less than 0, it is set to 0; and when it is greater than (1 << BitDepth) - 1, it is set to (1 << BitDepth) - 1;
  • in this way, the predicted values of the second color component of the pixels to be predicted in the current block are between 0 and (1 << BitDepth) - 1.
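The clamping described above can be sketched directly; for 10-bit chroma, for example, predictions are limited to [0, 1023].

```c
/* Clamp the chroma prediction to the valid sample range [0, (1<<BitDepth)-1]. */
static int clamp_pred(int cPred, int bitDepth)
{
    int maxVal = (1 << bitDepth) - 1;
    if (cPred < 0) return 0;
    if (cPred > maxVal) return maxVal;
    return cPred;
}
```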
  • determining the predicted value of the second color component sampling point in the current block based on the predicted value of the second color component of the pixel to be predicted in the current block may include:
  • the predicted value of the second color component of the pixel to be predicted includes the predicted value of at least some of the second color component sampling points in the current block.
  • the first prediction block includes the prediction value of at least part of the second color component sampling points in the current block.
  • the method may further include: performing an upsampling filtering process on the first prediction block, and determining a second prediction block of the second color component of the current block.
  • the method may further include: performing filtering enhancement processing on the first prediction block, and determining a second prediction block of the second color component of the current block.
  • if the first prediction block includes the prediction values of all the second color component sampling points in the current block, there is no need to perform any processing on the first prediction block at this time, and the first prediction block can directly serve as the final second prediction block.
  • the first prediction block may contain prediction values of at least part of the second color component sampling points in the current block.
  • if the first prediction block contains the predicted values of all the second color component sampling points in the current block, the predicted values of the second color component sampling points in the current block can be set equal to the values of the first prediction block; if the first prediction block contains the predicted values of only some of the second color component sampling points in the current block, then the values of the first prediction block can be upsampling filtered, and the predicted values of the second color component sampling points in the current block are set equal to the upsampling filtered output values.
  • the second prediction block includes the prediction values of all the second color component sampling points in the current block.
  • the weight-based chroma prediction output predWcp requires post-processing under certain conditions before it can be used as the final chroma prediction value predSamples. Otherwise, the final chroma prediction value predSamples is predWcp.
  • the prediction difference of the second color component sampling point in the current block is determined based on the predicted value of the second color component sampling point in the current block.
  • the predicted difference value of the second color component sampling point in the current block is determined based on the original value of the second color component sampling point in the current block and the predicted value of the second color component sampling point in the current block.
  • the method may further include: encoding the prediction difference value of the second color component sampling point in the current block, and writing the resulting encoded bits into the code stream.
  • the predicted difference value of the second color component sampling point can be determined. Specifically, the predicted value of the second color component sampling point can be subtracted from the original value of the second color component sampling point, so that the predicted difference value of the second color component sampling point in the current block can be determined. In this way, after the predicted difference value of the second color component sampling point is written into the code stream, the predicted difference value of the second color component sampling point can be obtained through decoding at the decoding end, in order to restore the reconstructed value of the second color component sampling point in the current block.
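A sketch of the encoder-side difference and the matching decoder-side reconstruction for one chroma sampling point follows; transform, quantization and entropy coding of the difference, as well as any clamping of the reconstruction, are omitted here.

```c
/* Encoder side: prediction difference = original - predicted. */
static int chroma_pred_difference(int origC, int predC)
{
    return origC - predC;   /* encoded and written to the bitstream */
}

/* Decoder side: reconstruction = predicted + decoded difference. */
static int chroma_reconstruction(int predC, int decodedDiffC)
{
    return predC + decodedDiffC;
}
```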
  • the embodiment of the present application also provides a code stream, which is generated by bit encoding according to the information to be encoded; wherein the information to be encoded at least includes: the prediction difference of the second color component sampling point in the current block value.
  • the embodiment of the present application optimizes the floating-point operations in the WCP prediction process and implements them using integer operations. On the one hand, it makes full use of the content characteristics of the current block and adaptively selects the best integer-operation shift; on the other hand, it fully ensures the accuracy of the WCP prediction technology; in addition, it fully considers the characteristics of the weight model and rationally designs the integer operation process. Thus, the computational complexity of the WCP prediction technology can be reduced while ensuring a certain accuracy of the WCP prediction technology.
  • the embodiment of the present application also provides an encoding method: determining the reference value of the first color component of the current block and the reference value of the second color component of the current block; and determining the weighting coefficient according to the reference value of the first color component of the current block.
  • FIG. 10 shows a schematic structural diagram of an encoding device 310 provided by an embodiment of the present application.
  • the encoding device 310 may include: a first determination unit 3101, a first calculation unit 3102, and a first prediction unit 3103; wherein,
  • the first determination unit 3101 is configured to determine the reference value of the first color component of the current block and the reference value of the second color component of the current block; and determine the weighting coefficient according to the reference value of the first color component of the current block;
  • the first calculation unit 3102 is configured to determine the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block; and to determine the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block;
  • the first prediction unit 3103 is configured to determine the predicted value of the second color component sampling point in the current block based on the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient;
  • the first determination unit 3101 is further configured to determine the predicted difference value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block.
  • the first determination unit 3101 is further configured to obtain the original value of the second color component sampling point in the current block; and according to the original value of the second color component sampling point in the current block and the second color in the current block The predicted value of the component sampling point determines the predicted difference value of the second color component sampling point in the current block.
  • the encoding device 310 may further include an encoding unit 3104 configured to encode the prediction difference value of the second color component sampling point in the current block and write the resulting encoded bits into the code stream.
  • the "unit" may be part of a circuit, part of a processor, part of a program or software, etc., and of course may also be a module, or may be non-modular.
  • each component in this embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software function modules.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this embodiment, or in essence the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
  • the embodiment of the present application provides a computer-readable storage medium for use in the encoding device 310.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is executed by the first processor, the steps of the method described in any one of the foregoing embodiments are implemented.
  • the encoding device 320 may include: a first communication interface 3201, a first memory 3202, and a first processor 3203; the various components are coupled together through a first bus system 3204. It can be understood that the first bus system 3204 is used to implement connection communication between these components. In addition to the data bus, the first bus system 3204 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clear explanation, the various buses are all labeled as the first bus system 3204 in FIG. 11. Wherein:
  • the first communication interface 3201 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the first memory 3202 is used to store a computer program capable of running on the first processor 3203;
  • the first processor 3203 is configured to execute: when running the computer program:
  • the predicted difference value of the second color component sampling point in the current block is determined.
  • the first memory 3202 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
  • non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory.
  • Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • many forms of RAM are available, such as: static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DR RAM).
  • the first memory 3202 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • the first processor 3203 may be an integrated circuit chip with signal processing capabilities. During the implementation process, each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the first processor 3203 .
  • the above-mentioned first processor 3203 can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the first memory 3202.
  • the first processor 3203 reads the information in the first memory 3202 and completes the steps of the above method in combination with its hardware.
  • the embodiments described in this application can be implemented using hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in this application, or a combination thereof.
  • the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
  • Software code may be stored in memory and executed by a processor.
  • the memory can be implemented in the processor or external to the processor.
  • the first processor 3203 is further configured to perform the method described in any one of the preceding embodiments when running the computer program.
  • This embodiment provides an encoding device, which may further include the encoding device 310 described in the previous embodiment.
  • for the encoding device, based on the color component reference information in the adjacent area of the current block and the color component reconstruction information within the current block, it is not only necessary to construct a brightness difference vector in order to determine the weighting coefficient; it is also necessary to determine the chromaticity average based on the chromaticity reference information, then determine the chromaticity difference vector based on the chromaticity reference information and the chromaticity average, and then determine the chromaticity prediction value based on the chromaticity difference vector and the corresponding weighting coefficient, plus the chromaticity average. In this way, by optimizing the calculation process of weight-based chroma prediction, it is possible to use integer operations entirely, and the characteristics of the weight model are fully considered to reasonably optimize the integer operation process; while fully ensuring the accuracy of chroma prediction, the computational complexity can also be reduced and the encoding and decoding efficiency improved, thereby improving the encoding and decoding performance.
  • FIG. 12 shows a schematic structural diagram of a decoding device 330 provided by an embodiment of the present application.
  • the decoding device 330 may include: a second determination unit 3301, a second calculation unit 3302, and a second prediction unit 3303; wherein,
  • the second determination unit 3301 is configured to determine the reference value of the first color component of the current block and the reference value of the second color component of the current block; and determine the weighting coefficient according to the reference value of the first color component of the current block;
  • the second calculation unit 3302 is configured to determine the reference mean value of the second color component of the current block according to the reference value of the second color component of the current block; and to determine the reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block;
  • the second prediction unit 3303 is configured to determine the predicted value of the second color component sampling point in the current block based on the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient;
  • the second determination unit 3301 is further configured to determine the reconstructed value of the second color component sampling point in the current block based on the predicted value of the second color component sampling point in the current block.
  • the second determination unit 3301 is further configured to determine the predicted difference value of the second color component sampling point in the current block; and to determine the reconstructed value of the second color component sampling point in the current block based on the predicted difference value of the second color component sampling point in the current block and the predicted value of the second color component sampling point in the current block.
  • the decoding device 330 may further include a decoding unit 3304 configured to parse the code stream and determine the prediction difference value of the second color component sampling point in the current block.
  • the "unit" may be part of a circuit, part of a processor, part of a program or software, etc., and of course may also be a module, or may be non-modular.
  • each component in this embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software function modules.
  • the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • this embodiment provides a computer-readable storage medium for use in the decoding device 330.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is executed by the second processor, the steps of the method described in any one of the foregoing embodiments are implemented.
  • FIG. 13 shows a schematic structural diagram of the decoding device 340 provided by the embodiment of the present application.
  • the decoding device 340 may include: a second communication interface 3401, a second memory 3402, and a second processor 3403; the various components are coupled together through a second bus system 3404.
  • the second bus system 3404 is used to implement connection communication between these components.
  • the second bus system 3404 also includes a power bus, a control bus and a status signal bus.
  • however, for the sake of clear explanation, the various buses are all labeled as the second bus system 3404 in FIG. 13. Wherein:
  • the second communication interface 3401 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the second memory 3402 is used to store computer programs that can be run on the second processor 3403;
  • the second processor 3403 is used to execute: when running the computer program:
  • the reconstructed value of the second color component sampling point in the current block is determined according to the predicted value of the second color component sampling point in the current block.
  • the second processor 3403 is further configured to perform the method described in any one of the preceding embodiments when running the computer program.
  • This embodiment provides a decoding device, which may further include the decoding device 340 described in the previous embodiment.
  • for the decoding device, based on the color component reference information in the adjacent area of the current block and the color component reconstruction information in the current block, it is not only necessary to construct a brightness difference vector in order to determine the weighting coefficient; it is also necessary to determine the chromaticity average based on the chromaticity reference information, then determine the chromaticity difference vector based on the chromaticity reference information and the chromaticity average, and then determine the chromaticity prediction value based on the chromaticity difference vector and the corresponding weighting coefficient, plus the chromaticity average. In this way, by optimizing the calculation process of weight-based chroma prediction, it is possible to use integer operations entirely, and the characteristics of the weight model are fully considered to reasonably optimize the integer operation process; while fully ensuring the accuracy of chroma prediction, the computational complexity can also be reduced and the encoding and decoding efficiency improved, thereby improving the encoding and decoding performance.
  • the encoding and decoding system 350 may include an encoder 3501 and a decoder 3502.
  • the encoder 3501 can be a device integrated with the encoding device 310 described in the previous embodiment, or can also be the encoding device 320 described in the previous embodiment
  • the decoder 3502 can be a device integrated with the decoding device 330 described in the previous embodiment, or may also be the decoding device 340 described in the previous embodiment.
  • for the encoding and decoding system 350, whether it is the encoder 3501 or the decoder 3502, by optimizing the calculation process of weight-based chroma prediction, it is possible to use integer operations entirely and to fully consider the characteristics of the weight model so as to reasonably optimize the integer operation process; while fully ensuring the accuracy of chroma prediction, it can also reduce the computational complexity and improve the encoding and decoding efficiency, thereby improving the encoding and decoding performance.
  • whether at the encoding end or the decoding end, the reference value of the first color component of the current block and the reference value of the second color component of the current block are determined; the weighting coefficient is determined according to the reference value of the first color component of the current block; the reference mean value of the second color component of the current block is determined based on the reference value of the second color component of the current block; the reference sample value of the second color component of the current block is determined based on the reference value of the second color component of the current block and the reference mean value of the second color component of the current block; and then, based on the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient, the predicted value of the second color component sampling point in the current block is determined.
  • at the encoding end, the predicted difference value of the second color component sampling point of the current block can be determined based on the predicted value of the second color component sampling point in the current block; so that at the decoding end, after the predicted difference value of the second color component sampling point of the current block is obtained during decoding, the reconstructed value of the second color component sampling point of the current block can be determined by combining it with the predicted value of the second color component sampling point in the current block.
  • that is to say, based on the color component reference information in the adjacent area of the current block and the color component reconstruction information in the current block, it is not only necessary to construct a brightness difference vector in order to determine the weighting coefficient; it is also necessary to determine the chromaticity average based on the chromaticity reference information, then determine the chromaticity difference vector based on the chromaticity reference information and the chromaticity average, and then determine the chromaticity prediction value based on the chromaticity difference vector and the corresponding weighting coefficient, plus the chromaticity average. Thus, by optimizing the calculation process of weight-based chroma prediction, it is possible to use integer operations entirely, and the characteristics of the weight model are fully considered to reasonably optimize the integer operation process; while fully ensuring the accuracy of chroma prediction, the computational complexity can also be reduced and the encoding and decoding efficiency improved, thereby improving the encoding and decoding performance.


Abstract

The embodiments of the present application disclose an encoding and decoding method, an apparatus, an encoding device, a decoding device, and a storage medium. The method includes: determining a reference value of a first color component of a current block and a reference value of a second color component of the current block; determining a weighting coefficient according to the reference value of the first color component of the current block; determining a reference mean value of the second color component of the current block according to the reference value of the second color component of the current block, and determining a reference sample value of the second color component of the current block according to the reference value of the second color component of the current block and the reference mean value of the second color component of the current block; determining a predicted value of a second color component sampling point in the current block according to the reference mean value of the second color component of the current block, the reference sample value of the second color component of the current block, and the corresponding weighting coefficient; and determining a reconstructed value of the second color component sampling point in the current block according to the predicted value of the second color component sampling point in the current block.

Description

编解码方法、装置、编码设备、解码设备以及存储介质 技术领域
本申请实施例涉及视频编解码技术领域,尤其涉及一种编解码方法、装置、编码设备、解码设备以及存储介质。
背景技术
随着人们对视频显示质量要求的提高,高清和超高清视频等新视频应用形式应运而生。国际标准组织ISO/IEC和ITU-T的联合视频研究组(Joint Video Exploration Team,JVET)制定了下一代视频编码标准H.266/多功能视频编码(Versatile Video Coding,VVC)。
H.266/VVC中包含了颜色分量间预测技术。然而,H.266/VVC的颜色分量间预测技术计算得到的编码块的预测值与原始值之间存在较大偏差,这导致预测准确度低,造成解码视频的质量下降,降低了编码性能。
发明内容
本申请实施例提供一种编解码方法、装置、编码设备、解码设备以及存储介质,不仅可以提高色度预测的准确性,降低色度预测的计算复杂度,而且还能够提升编解码性能。
本申请实施例的技术方案可以如下实现:
第一方面,本申请实施例提供了一种解码方法,包括:
确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;
根据当前块的第一颜色分量的参考值,确定加权系数;
根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
第二方面,本申请实施例提供了一种编码方法,包括:
确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;
根据当前块的第一颜色分量的参考值,确定加权系数;
根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
第三方面,本申请实施例提供了一种编码装置,包括第一确定单元、第一计算单元和第一预测单元;其中,
第一确定单元,配置为确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;以及根据当前块的第一颜色分量的参考值,确定加权系数;
第一计算单元,配置为根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
第一预测单元,配置为根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
第一确定单元,还配置为根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
第四方面,本申请实施例提供了一种编码设备,包括第一存储器和第一处理器;其中,
第一存储器,用于存储能够在第一处理器上运行的计算机程序;
第一处理器,用于在运行计算机程序时,执行如第二方面所述的方法。
第五方面,本申请实施例提供了一种解码装置,包括第二确定单元、第二计算单元和第二预测单元;其中,
第二确定单元,配置为确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;以及根据当前块的第一颜色分量的参考值,确定加权系数;
第二计算单元,配置为根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
第二预测单元,配置为根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
第二确定单元,还配置为根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
第六方面,本申请实施例提供了一种解码设备,包括第二存储器和第二处理器;其中,
第二存储器,用于存储能够在第二处理器上运行的计算机程序;
第二处理器,用于在运行计算机程序时,执行如第一方面所述的方法。
第七方面,本申请实施例提供了一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,所述计算机程序被执行时实现如第一方面所述的方法、或者如第二方面所述的方法。
本申请实施例提供了一种编解码方法、装置、编码设备、解码设备以及存储介质,无论是编码端还是解码端,通过确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;根据当前块的第一颜色分量的参考值,确定加权系数;根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值。这样,在编码端根据当前块中第二颜色分量采样点的预测值,能够确定当前块的第二颜色分量采样点的预测差值;使得在解码端,根据当前块中第二颜色分量采样点的预测值可以确定出当前块的第二颜色分量采样点的重建值。也就是说,基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息,不仅需要构造亮度差向量,以便确定出加权系数;而且还需要根据色度参考信息确定色度平均值,然后根据色度参考信息和色度平均值来确定出色度差向量,进而根据色度差向量及对应的加权系数,再加上色度平均值即可确定出色度预测值;如此,通过对基于权重的色度预测的计算过程进行优化,能够实现完全采用整型运算,而且充分考虑了权重模型的特性,合理优化整型运算过程;在充分保证色度预测准确性的同时,还可以降低计算复杂度,提高编解码效率,进而提升编解码性能。
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the distribution of effective adjacent areas;
FIG. 2 is a schematic diagram of the distribution of selection areas under different prediction modes;
FIG. 3 is a schematic flowchart of a model parameter derivation scheme;
FIG. 4A is a schematic block diagram of an encoder provided by an embodiment of the present application;
FIG. 4B is a schematic block diagram of a decoder provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a reference area of a current block provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of another decoding method provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of yet another decoding method provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of an encoding method provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an encoding apparatus provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of a specific hardware structure of an encoding device provided by an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a decoding apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of a specific hardware structure of a decoding device provided by an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application.
Detailed Description
为了能够更加详尽地了解本申请实施例的特点与技术内容,下面结合附图对本申请实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本申请实施例。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。还需要指出,本申请实施例所涉及的术语“第一\第二\第三”仅是用于区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
在视频图像中,一般采用第一颜色分量、第二颜色分量和第三颜色分量来表征编码块(Coding Block,CB)。其中,这三个颜色分量分别为一个亮度分量、一个蓝色色度分量和一个红色色度分量。示例性地,亮度分量通常使用符号Y表示,蓝色色度分量通常使用符号Cb或者U表示,红色色度分量通常使用符号Cr或者V表示;这样,视频图像可以用YCbCr格式表示,也可以用YUV格式表示。除此之外,视频图像也可以是RGB格式、YCgCo格式等,本申请实施例不作任何限定。
可以理解,在当前的视频图像或者视频编解码过程中,对于跨分量预测技术,主要包括分量间线性模型(Cross-component Linear Model,CCLM)预测模式和多方向线性模型(Multi-Directional Linear Model,MDLM)预测模式,无论是根据CCLM预测模式推导的模型参数,还是根据MDLM预测模式推导的模型参数,其对应的预测模型均可以实现第一颜色分量到第二颜色分量、第二颜色分量到第一颜色分量、第一颜色分量到第三颜色分量、第三颜色分量到第一颜色分量、第二颜色分量到第三颜色分量、或者第三颜色分量到第二颜色分量等颜色分量间的预测。
以第一颜色分量到第二颜色分量的预测为例,假定第一颜色分量为亮度分量,第二颜色分量为色度分量,为了减少亮度分量与色度分量之间的冗余,在H.266/多功能视频编码(Versatile Video Coding,VVC)中使用CCLM预测模式,即根据同一编码块的亮度重建值来构造色度的预测值,如:Pred C(i,j)=α·Rec L(i,j)+β。
其中,i,j表示编码块中待预测像素的位置坐标,i表示水平方向,j表示竖直方向;Pred C(i,j)表示编码块中位置坐标(i,j)的待预测像素对应的色度预测值,Rec L(i,j)表示同一编码块中(经过下采样的)位置坐标(i,j)的待预测像素对应的亮度重建值。另外,α和β表示模型参数,可通过参考像素推导得到。
对于编码块而言,其相邻区域可以分为五部分:左侧相邻区域、上侧相邻区域、左下侧相邻区域、左上侧相邻区域和右上侧相邻区域。在H.266/VVC中包括三种跨分量线性模型预测模式,分别为:左侧及上侧相邻的帧内CCLM预测模式(可以用INTRA_LT_CCLM表示)、左侧及左下侧相邻的帧内CCLM预测模式(可以用INTRA_L_CCLM表示)和上侧及右上侧相邻的帧内CCLM预测模式(可以用INTRA_T_CCLM表示)。在这三种预测模式中,每种预测模式都可以选取预设数量(比如4个)的参考像素用于模型参数α和β的推导,而这三种预测模式的最大区别在于用于推导模型参数α和β的参考像素对应的选择区域是不同的。
具体地,针对色度分量对应的编码块尺寸为W×H,假定参考像素对应的上侧选择区域为W′,参考像素对应的左侧选择区域为H′;这样,
对于INTRA_LT_CCLM模式,参考像素可以在上侧相邻区域和左侧相邻区域进行选取,即W′=W,H′=H;
对于INTRA_L_CCLM模式,参考像素可以在左侧相邻区域和左下侧相邻区域进行选取,即H′=W+H,并设置W′=0;
对于INTRA_T_CCLM模式,参考像素可以在上侧相邻区域和右上侧相邻区域进行选取,即W′=W+H,并设置H′=0。
需要注意的是,在VTM中,对于右上侧相邻区域内最多只存储了W范围的像素点,对于左下侧相邻区域内最多只存储了H范围的像素点。虽然INTRA_L_CCLM模式和INTRA_T_CCLM模式的选择区域的范围定义为W+H,但是在实际应用中,INTRA_L_CCLM模式的选择区域将限制在H+H之内,INTRA_T_CCLM模式的选择区域将限制在W+W之内;这样,
对于INTRA_L_CCLM模式,参考像素可以在左侧相邻区域和左下侧相邻区域进行选取,H′=min{W+H,H+H};
对于INTRA_T_CCLM模式,参考像素可以在上侧相邻区域和右上侧相邻区域进行选取,W′=min{W+H,W+W}。
参见图1,其示出了一种有效相邻区域的分布示意图。在图1中,左侧相邻区域、左下侧相邻区域、上侧相邻区域和右上侧相邻区域都是有效的;另外,灰色填充的块即为编码块中位置坐标为(i,j)的待预测像素。
如此,在图1的基础上,针对三种预测模式的选择区域如图2所示。其中,在图2中,(a)表示了INTRA_LT_CCLM模式的选择区域,包括了左侧相邻区域和上侧相邻区域;(b)表示了INTRA_L_CCLM模式的选择区域,包括了左侧相邻区域和左下侧相邻区域;(c)表示了INTRA_T_CCLM模式的选择区域,包括了上侧相邻区域和右上侧相邻区域。这样,在确定出三种预测模式的选择区域之后,可以在选择区域内进行用于模型参数推导的像素选取。如此选取到的像素可以称为参考像素,通常参考像素的个数为4个;而且对于一个尺寸确定的W×H的编码块,其参考像素的位置一般是确定的。
在获取到预设数量的参考像素之后,目前是按照图3所示的模型参数推导方案的流程示意图进行色度预测。根据图3所示的流程,假定预设数量为4个,该流程可以包括:
S301:在选择区域中获取参考像素。
S302:判断有效参考像素的个数。
S303:若有效参考像素的个数为0,则将模型参数α设置为0,β设置为默认值。
S304:色度预测值填充为默认值。
S305:若有效参考像素的个数为4,则经过比较获得亮度分量中较大值的两个参考像素和较小值的两个参考像素。
S306:计算较大值对应的均值点和较小值对应的均值点。
S307:根据两个均值点推导得到模型参数α和β。
S308:使用α和β所构建的预测模型进行色度预测。
需要说明的是,在VVC中,有效参考像素为0的这一步骤是根据相邻区域的有效性进行判断的。
还需要说明的是,利用“两点确定一条直线”原则来构建预测模型,这里的两点可以称为拟合点。目前的技术方案中,在获取到4个参考像素之后,经过比较获得亮度分量中较大值的两个参考像素和较小值的两个参考像素;然后根据较大值的两个参考像素,求取一均值点(可以用mean max表示),根据较小值的两个参考像素,求取另一均值点(可以用mean min表示),即可得到两个均值点mean max和mean min;再将mean max和mean min作为两个拟合点,能够推导出模型参数(用α和β表示);最后根据α和β构建出预测模型,并根据该预测模型进行色度分量的预测处理。
然而,在相关技术中,针对每个编码块都使用简单的线性模型Pred C(i,j)=α·Rec L(i,j)+β来预测色度分量,并且每个编码块任意位置像素都使用相同的模型参数α和β进行预测。这样会导致以下缺陷:一方面,不同内容特性的编码块都利用简单线性模型进行亮度到色度的映射,以此实现色度预测,但是并非任意编码块内的亮度到色度的映射函数都可以准确由此简单线性模型拟合,这导致部分编码块预测效果不够准确;另一方面,在预测过程中,编码块内不同位置的像素点均使用相同的模型参数α和β,编码块内不同位置的预测准确性的同样存在较大差异;又一方面,在CCLM技术的预测过程中,未充分考虑当前块的重建亮度信息与相邻区域的参考信息之间的高度相关性,这会导致部分编码块在此技术下无法准确预测,从而影响该技术的增益效果。简单来说,目前的CCLM技术下部分编码块的预测值与原始值之间存在较大偏差,导致预测准确度低,造成质量下降,进而降低了编解码效率。
基于此,本申请实施例提供了一种编码方法,通过确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;根据当前块的第一颜色分量的参考值,确定加权系数;根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
本申请实施例还提供了一种解码方法,通过确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;根据当前块的第一颜色分量的参考值,确定加权系数;根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
这样,基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息,不仅需要构造亮度差向量,以便确定出加权系数;而且还需要根据色度参考信息确定色度平均值,然后根据色度参考信息和色度平均值来确定出色度差向量,进而根据色度差向量及对应的加权系数,再加上色度平均值 即可确定出色度预测值;如此,通过对基于权重的色度预测的计算过程进行优化,能够实现完全采用整型运算,而且充分考虑了权重模型的特性,合理优化整型运算过程;在充分保证色度预测准确性的同时,还可以降低计算复杂度,提高编解码效率,进而提升编解码性能。
下面将结合附图对本申请各实施例进行详细说明。
参见图4A,其示出了本申请实施例提供的一种编码器的组成框图示意图。如图4A所示,编码器(具体为“视频编码器”)100可以包括变换与量化单元101、帧内估计单元102、帧内预测单元103、运动补偿单元104、运动估计单元105、反变换与反量化单元106、滤波器控制分析单元107、滤波单元108、编码单元109和解码图像缓存单元110等,其中,滤波单元108可以实现去方块滤波及样本自适应缩进(Sample Adaptive 0ffset,SAO)滤波,编码单元109可以实现头信息编码及基于上下文的自适应二进制算术编码(Context-based Adaptive Binary Arithmetic Coding,CABAC)。针对输入的原始视频信号,通过编码树块(Coding Tree Unit,CTU)的划分可以得到一个视频编码块,然后对经过帧内或帧间预测后得到的残差像素信息通过变换与量化单元101对该视频编码块进行变换,包括将残差信息从像素域变换到变换域,并对所得的变换系数进行量化,用以进一步减少比特率;帧内估计单元102和帧内预测单元103是用于对该视频编码块进行帧内预测;明确地说,帧内估计单元102和帧内预测单元103用于确定待用以编码该视频编码块的帧内预测模式;运动补偿单元104和运动估计单元105用于执行所接收的视频编码块相对于一或多个参考帧中的一或多个块的帧间预测编码以提供时间预测信息;由运动估计单元105执行的运动估计为产生运动向量的过程,所述运动向量可以估计该视频编码块的运动,然后由运动补偿单元104基于由运动估计单元105所确定的运动向量执行运动补偿;在确定帧内预测模式之后,帧内预测单元103还用于将所选择的帧内预测数据提供到编码单元109,而且运动估计单元105将所计算确定的运动向量数据也发送到编码单元109;此外,反变换与反量化单元106是用于该视频编码块的重构建,在像素域中重构建残差块,该重构建残差块通过滤波器控制分析单元107和滤波单元108去除方块效应伪影,然后将该重构残差块添加到解码图像缓存单元110的帧中的一个预测性块,用以产生经重构建的视频编码块;编码单元109是用于编码各种编码参数及量化后的变换系数,在基于CABAC的编码算法中,上下文内容可基于相邻编码块,可用于编码指示所确定的帧内预测模式的信息,输出该视频信号的码流;而解码图像缓存单元110是用于存放重构建的视频编码块,用于预测参考。随着视频图像编码的进行,会不断生成新的重构建的视频编码块,这些重构建的视频编码块都会被存放在解码图像缓存单元110中。
参见图4B,其示出了本申请实施例提供的一种解码器的组成框图示意图。如图4B所示,解码器(具体为“视频解码器”)200包括解码单元201、反变换与反量化单元202、帧内预测单元203、运动补偿单元204、滤波单元205和解码图像缓存单元206等,其中,解码单元201可以实现头信息解码以及CABAC解码,滤波单元205可以实现去方块滤波以及SAO滤波。输入的视频信号经过图4A的编码处理之后,输出该视频信号的码流;该码流输入解码器200中,首先经过解码单元201,用于得到解码后的变换系数;针对该变换系数通过反变换与反量化单元202进行处理,以便在像素域中产生残差块;帧内预测单元203可用于基于所确定的帧内预测模式和来自当前帧或图片的先前经解码块的数据而产生当前视频解码块的预测数据;运动补偿单元204是通过剖析运动向量和其他关联语法元素来确定用于视频解码块的预测信息,并使用该预测信息以产生正被解码的视频解码块的预测性块;通过对来自反变换与反量化单元202的残差块与由帧内预测单元203或运动补偿单元204产生的对应预测性块进行求和,而形成解码的视频块;该解码的视频信号通过滤波单元205以便去除方块效应伪影,可以改善视频质量;然后将经解码的视频块存储于解码图像缓存单元206中,解码图像缓存单元206存储用于后续帧内预测或运动补偿的参考图像,同时也用于视频信号的输出,即得到了所恢复的原始视频信号。
需要说明的是,本申请实施例的方法主要应用在如图4A所示的帧内预测单元103部分和如图4B所示的帧内预测单元203部分。也就是说,本申请实施例既可以应用于编码器,也可以应用于解码器,甚至还可以同时应用于编码器和解码器,但是本申请实施例不作具体限定。
还需要说明的是,当应用于帧内预测单元103部分时,“当前块”具体是指当前待进行帧内预测的编码块;当应用于帧内预测单元203部分时,“当前块”具体是指当前待进行帧内预测的解码块。
在本申请的一实施例中,参见图5,其示出了本申请实施例提供的一种解码方法的流程示意图。如图5所示,该方法可以包括:
S501:确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值。
需要说明的是,本申请实施例的解码方法应用于解码装置,或者集成有该解码装置的解码设备(也可简称为“解码器”)。另外,本申请实施例的解码方法具体可以是指一种帧内预测方法,更具体地,是一种基于权重的色度预测(Weight-based Chroma Prediction,WCP)的整数化运算方法。
在本申请实施例中,视频图像可以划分为多个解码块,每个解码块可以包括第一颜色分量、第二颜色分量和第三颜色分量,而这里的当前块是指视频图像中当前待进行帧内预测的解码块。另外,假定当前块进行第一颜色分量的预测,而且第一颜色分量为亮度分量,即待预测分量为亮度分量,那么当前块也可以称为亮度预测块;或者,假定当前块进行第二颜色分量的预测,而且第二颜色分量为色度分量,即待预测分量为色度分量,那么当前块也可以称为色度预测块。
还需要说明的是,在本申请实施例中,当前块的参考信息可以包括当前块的相邻区域中的第一颜色分量采样点的取值和当前块的相邻区域中的第二颜色分量采样点的取值,这些采样点(Sample)可以是根据当前块的相邻区域中的已解码像素确定的。在一些实施例中,当前块的相邻区域可以包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在这里,上侧相邻区域和右上侧相邻区域整体可以看作上方区域,左侧相邻区域和左下侧相邻区域整体可以看作左方区域;除此之外,相邻区域还可以包括左上方区域,详见图6所示。其中,在对当前块进行第二颜色分量的预测时,当前块的上方区域、左方区域和左上方区域作为相邻区域均可被称为当前块的参考区域,而且参考区域中的像素都是已重建的参考像素。
在一些实施例中,确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值,可以包括:
根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考值;
根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考值。
需要说明的是,在本申请实施例中,当前块的参考像素可以是指与当前块相邻的参考像素点,也可称为当前块的相邻区域中的第一颜色分量采样点、第二颜色分量采样点,用Neighboring Sample或Reference Sample表示。其中,这里的相邻可以是空间相邻,但是并不局限于此。例如,相邻也可以是时域相邻、空间与时域相邻,甚至当前块的参考像素还可以是对空间相邻、时域相邻、空间和时域相邻的参考像素点进行某种处理后得到的参考像素等等,本申请实施例不作任何限定。
还需要说明的是,在本申请实施例中,假定第一颜色分量为亮度分量,第二颜色分量为色度分量;那么,当前块的相邻区域中的第一颜色分量采样点的取值表示为当前块的参考像素对应的参考亮度信息,当前块的相邻区域中的第二颜色分量采样点的取值表示为当前块的参考像素对应的参考色度信息。
还需要说明的是,在本申请实施例中,从当前块的相邻区域中确定第一颜色分量采样点的取值或第二颜色分量采样点的取值,这里的相邻区域可以是仅包括上侧相邻区域,或者仅包括左侧相邻区域,也可以是包括上侧相邻区域和右上侧相邻区域,或者包括左侧相邻区域和左下侧相邻区域,或者包括上侧相邻区域和左侧相邻区域,或者甚至还可以包括上侧相邻区域、右上侧相邻区域和左侧相邻区域等等,本申请实施例不作任何限定。
还需要说明的是,在本申请实施例中,相邻区域也可以是根据当前块的预测模式进行确定。在一种具体的实施例中,可以包括:若当前块的预测模式为上相邻模式,则确定当前块的相邻区域包括上侧相邻区域和/或右上侧相邻区域;若当前块的预测模式为左相邻模式,则确定当前块的相邻区域包括左侧相邻区域和/或左下侧相邻区域。其中,所述上相邻模式包括使用上相邻参考采样点的预测模式,所述左相邻模式包括使用左相邻参考采样点的预测模式。
示例性地,如果当前块的预测模式是上相邻模式中的垂直模式,那么基于权重的色度分量预测中相邻区域可以只选取上侧相邻区域和/或右上侧相邻区域;如果当前块的预测模式是左相邻模式中的水平模式,那么基于权重的色度分量预测中相邻区域可以只选取左侧相邻区域和/或左下侧相邻区域。
进一步地,在一些实施例中,对于当前块的参考像素的确定,可以包括:对当前块的相邻区域中的像素进行筛选处理,确定参考像素。
具体来说,在本申请实施例中,根据当前块的相邻区域中的像素,组成第一参考像素集合;那么可以对第一参考像素集合进行筛选处理,确定参考像素。在这里,参考像素的数量可以为M个,M为正整数。换句话说,可以从相邻区域中的像素中选取出M个参考像素。其中,M的取值一般可以为4,但是并不作具体限定。
还需要说明的是,在当前块的相邻区域中的像素中,可能会存在部分不重要的像素(例如,这些像素的相关性较差)或者部分异常的像素,为了保证预测的准确性,需要将这些像素剔除掉,以便得到有效的参考像素。因此,在一种具体的实施例中,对相邻区域中的像素进行筛选处理,确定参考像素,可以包括:
基于相邻区域中的像素的位置和/或颜色分量强度,确定待选择像素位置;
根据待选择像素位置,从相邻区域中的像素中确定参考像素。
需要说明的是,在本申请实施例中,颜色分量强度可以用颜色分量信息来表示,比如参考亮度信息、参考色度信息等;这里,颜色分量信息的值越大,表明了颜色分量强度越高。这样,针对相邻区域中的 像素进行筛选,可以是根据像素的位置来进行筛选的,也可以是根据颜色分量强度来进行筛选的,从而根据筛选出的像素确定当前块的参考像素,进一步可以确定出当前块的相邻区域中的第一颜色分量采样点的取值和当前块的相邻区域中的第二颜色分量采样点的取值;然后根据当前块的相邻区域中的第一颜色分量采样点的取值来确定当前块的第一颜色分量的参考值,根据当前块的相邻区域中的第二颜色分量采样点的取值来确定当前块的第二颜色分量的参考值。
在一些实施例中,根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考值,可以包括:对当前块的相邻区域中的第一颜色分量采样点的取值进行第一滤波处理,确定当前块的第一颜色分量的参考值。
在本申请实施例中,第一滤波处理为下采样滤波处理。其中,第一颜色分量为亮度分量,此时可以通过对参考亮度信息进行下采样滤波处理,使得滤波后的参考亮度信息的空间分辨率与参考色度信息的空间分辨率相同。示例性地,如果当前块的尺寸大小为2M×2N,参考亮度信息为2M+2N个,那么在经过下采样滤波之后,可以变换至M+N个,以得到当前块的第一颜色分量的参考值。
在一些实施例中,根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考值,可以包括:对当前块的相邻区域中的第二颜色分量采样点的取值进行第二滤波处理,确定当前块的第二颜色分量的参考值。
在本申请实施例中,第二滤波处理为上采样滤波处理。其中,上采样率是2的正整数倍。
也就是说,第一颜色分量为亮度分量,第二颜色分量为色度分量,本申请实施例也可以对参考色度信息进行上采样滤波,使得滤波后的参考色度信息的空间分辨率与参考亮度的空间分辨率相同。示例性地,如果参考亮度信息为2M+2N个,而参考色度信息为M+N个;那么在对参考色度信息经过上采样滤波之后,可以变换至2M+2N个,以得到当前块的第二颜色分量的参考值。
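作为上述参考亮度下采样（分辨率对齐）的一个极简示意，下面给出一段C语言代码；其中函数名downsampleRefLuma以及“两点平均”的滤波器抽头均为本示例引入的假设，并非本申请限定的具体滤波方式：
void downsampleRefLuma(const int *refLumaIn, int lenIn, int *refLumaOut)
{
    /* lenIn为下采样前的参考亮度个数(如2M+2N)，输出个数为lenIn/2(如M+N) */
    for (int k = 0; k < lenIn / 2; k++)
        refLumaOut[k] = (refLumaIn[2 * k] + refLumaIn[2 * k + 1] + 1) >> 1;   /* 简单两点平均并四舍五入 */
}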
S502:根据当前块的第一颜色分量的参考值,确定加权系数。
需要说明的是,在本申请实施例中,当前块的参考信息还可以包括当前块中第一颜色分量采样点的重建值。假定第一颜色分量为亮度分量,那么当前块中第一颜色分量采样点的重建值即为当前块的重建亮度信息。
在一些实施例中,根据所述当前块的第一颜色分量的参考值,确定加权系数,可以包括:
确定当前块中第一颜色分量采样点的重建值;
根据当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值;
根据当前块的第一颜色分量的参考样值,确定加权系数。
在一种可能的实施例中,根据当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值,可以包括:
确定当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值的差值;
根据差值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,可以将当前块的第一颜色分量的参考样值设置为等于差值的绝对值。另外,根据差值来确定当前块的第一颜色分量的参考样值,还可以是对差值进行平方计算、或者将差值进行一些相关处理和映射等,以确定出当前块的第一颜色分量的参考样值,这里并不作任何限定。
在另一种可能的实施例中,根据当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值,可以包括:
对当前块中第一颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一颜色分量采样点的滤波样值;
根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,第三滤波处理为下采样滤波处理。其中,第一颜色分量为亮度分量,此时也可以对当前块内的重建亮度信息进行下采样滤波处理。示例性地,如果当前块内的重建亮度信息的数量为2M×2N,那么在经过下采样滤波之后,可以变换至M×N。
在本申请实施例中,根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值,可以是根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值之差来确定当前块的第一颜色分量的参考样值;更具体地,可以是根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值之差的绝对值来确定当前块的第一颜色分量的参考样值,这里也不作任何限定。
可以理解地,在确定出当前块的第一颜色分量的参考样值之后,本申请实施例可以进一步确定加权系数。其中,当前块的第一颜色分量的参考样值可以是当前块中第一颜色分量采样点的重建值和当前块 的第一颜色分量的参考值之差的绝对值。
在一种可能的实现方式中,假定第一颜色分量为色度分量,第二颜色分量为亮度分量,本申请实施例主要是对当前块中待预测像素的色度分量进行预测。首先选取当前块中的至少一个待预测像素,分别计算其重建色度与相邻区域中的参考色度之间的色度差(用|ΔC k|表示);不同位置的待预测像素在相邻区域的色度差存在差异,色度差最小的参考像素位置会跟随当前块中待预测像素的变化,通常色度差的大小代表色度之间的相近程度。如果|ΔC k|较小,表明色度值的相似性比较强,对应的加权系数(用w k表示)可以赋予较大的权重;反之,如果|ΔC k|较大,表明色度值的相似性比较弱,w k可以赋予较小的权重。也就是说,w k与|ΔC k|之间的关系近似呈反比。这样,根据|ΔC k|可以建立预设的映射关系,w k=f(|ΔC k|)。其中,|ΔC k|表示第一颜色分量的参考样值,f(|ΔC k|)表示在预设映射关系下第一颜色分量的参考样值对应的取值,w k表示加权系数,即将w k设置为等于f(|ΔC k|)。
在另一种可能的实现方式中,假定第一颜色分量为亮度分量,第二颜色分量为色度分量,这时候也可以选取当前块中的至少一个待预测像素,分别计算其重建亮度与相邻区域中的参考亮度之间的亮度差(用|ΔY k|表示)。其中,如果|ΔY k|较小,表明亮度值的相似性比较强,对应的加权系数(用w k表示)可以赋予较大的权重;反之,如果|ΔY k|较大,表明亮度值的相似性比较弱,w k可以赋予较小的权重。这样,在计算加权系数时,第一颜色分量的参考样值也可以为|ΔY k|,根据w k=f(|ΔY k|)也可计算出加权系数。
需要注意的是,在对当前块中待预测像素的色度分量进行预测时,由于待预测像素的色度分量值无法直接确定,因而参考像素与待预测像素之间的色度差|ΔC k|也无法直接得到;但是当前块的局部区域内,分量间存在强相关性,这时候可以根据参考像素与待预测像素之间的亮度差|ΔY k|来推导得到|ΔC k|;即根据|ΔY k|和模型因子的乘积,可以得到|ΔC k|;这样,该乘积值等于模型因子与|ΔY k|的乘积。也就是说,在本申请实施例中,对于第一颜色分量的参考样值而言,其可以是|ΔC k|,即色度差的绝对值;或者也可以是|ΔY k|,即亮度差的绝对值;或者还可以是|αΔY k|,即亮度差的绝对值与预设乘子之积等等。其中,这里的预设乘子即为本申请实施例所述的模型因子。
进一步地,对于模型因子而言,在一种具体的实施例中,该方法还可以包括:根据参考像素的第一颜色分量值与参考像素的第二颜色分量值进行最小二乘法计算,确定模型因子。
也就是说,假定参考像素的数量为N个,参考像素的第一颜色分量值即为当前块的参考亮度信息,参考像素的第二颜色分量值即为当前块的参考色度信息,那么可以对N个参考像素的色度分量值和亮度分量值进行最小二乘法计算,得到模型因子。示例性地,最小二乘法回归计算如下所示,
α=(N×∑(L_k×C_k)-∑L_k×∑C_k)/(N×∑(L_k)²-(∑L_k)²)             (1)
其中，L_k表示第k个参考像素的亮度分量值，C_k表示第k个参考像素的色度分量值，N表示参考像素的个数；α表示模型因子，其可以是采用最小二乘法回归计算得到。需要注意的是，模型因子也可以是固定取值或者基于固定取值进行微调整等等，本申请实施例并不作具体限定。
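为便于理解式(1)的最小二乘回归，下面给出一个极简的C语言示意（仅为说明性草图：函数名computeModelFactor、采用浮点运算等均为本示例的假设，并非本申请的规范实现）：
double computeModelFactor(const int *L, const int *C, int N)
{
    double sumL = 0, sumC = 0, sumLC = 0, sumLL = 0;
    for (int k = 0; k < N; k++) {
        sumL  += L[k];                         /* ∑L_k */
        sumC  += C[k];                         /* ∑C_k */
        sumLC += (double)L[k] * C[k];          /* ∑(L_k×C_k) */
        sumLL += (double)L[k] * L[k];          /* ∑(L_k)² */
    }
    double den = N * sumLL - sumL * sumL;
    if (den == 0)
        return 0;                              /* 参考亮度完全相同时的退化处理(假设性约定) */
    return (N * sumLC - sumL * sumC) / den;    /* 对应式(1)的模型因子α */
}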
在又一种可能的实施例中,当前块的第一颜色分量的参考样值可以是当前块中亮度重建信息(用recY表示)和inSize数量的参考亮度信息(用refY表示)之差的绝对值。其中,对于当前块中的待预测像素C pred[i][j],其对应的亮度差向量diffY[i][j][k]可以是由其对应的亮度重建信息recY[i][j]与inSize数量的参考亮度信息refY[k]相减并取绝对值得到的;即在本申请实施例中,当前块的第一颜色分量的参考样值可以用diffY[i][j][k]表示。
进一步地,在一些实施例中,根据当前块的第一颜色分量的参考样值,确定加权系数,可以包括:根据当前块的第一颜色分量的参考样值,确定权重索引值;根据权重索引值,使用第一预设映射关系确定加权系数。
在一种具体的实施例中,根据当前块的第一颜色分量的参考样值,确定权重索引值,可以包括:
确定当前块的最大权重索引值和最小权重索引值;
根据最大权重索引值和最小权重索引值对第一颜色分量的参考样值进行修正处理,确定权重索引值。
需要说明的是，在本申请实施例中，最大权重索引值可以用theMaxPos表示，最小权重索引值可以用零表示，权重索引值用index表示。在这里，权重索引值是限定在theMaxPos和零之间。示例性地，权重索引值可以根据下式计算得到，如下所示：
index=Clip3(0,theMaxPos,diffY[i][j][k])          (2)
其中,k=0,1,...,inSize-1,
Clip3(x,y,z)={x，z<x；y，z>y；z，其它}
也就是说,在式(2)中,如果diffY[i][j][k]<0,那么index=0;如果diffY[i][j][k]>theMaxPos,那么index=theMaxPos;否则,如果0≤diffY[i][j][k]≤theMaxPos,那么index=diffY[i][j][k]。
在一种具体的实施例中,最大权重索引值可以与颜色分量的比特深度(用BitDepth表示)有关。示例性地,最大权重索引值可以根据下式计算得到,如下所示:
theMaxPos=1<<BitDepth-1              (3)
需要说明的是,在本申请实施例中,theMaxPos的取值包括但不限于根据式(3)计算得到,它还可以是WCP的核心参数中所确定的。
进一步地,对于第一预设映射关系而言,在一些实施例中,第一预设映射关系是权重索引值与加权系数的数值映射查找表。也就是说,在本申请实施例中,解码端可以预先设置有对应的查找表(Look Up Table,LUT)。通过该查找表,结合index即可确定出对应的加权系数。示例性地,加权系数cWeightInt[i][j][k]可以用下述的映射关系表示:
cWeightInt[i][j][k]=LUT[index]           (4)
对于第一预设映射关系而言,在一些实施例中,第一预设映射关系还可以是预设函数关系。在一些实施例中,根据权重索引值,使用第一预设映射关系确定加权系数,可以包括:确定在第一预设映射关系下权重索引值对应的第一取值;将加权系数设置为等于第一取值。
在一种具体的实施例中,确定在第一预设映射关系下权重索引值对应的第一取值,可以包括:
确定第一因子;
根据权重索引值,使用第二预设映射关系确定第二取值;
计算第一因子与第二取值的第一乘积值;
将第一取值设置为等于第一乘积值在第一预设映射关系下对应的取值。
需要说明的是,在本申请实施例中,第一因子可以用ModelScale表示,权重索引值用index表示。示例性地,加权系数cWeightInt[i][j][k]还可以用下述的函数关系表示:
cWeightInt[i][j][k]=Round(e^(-index)×ModelScale)          (5)
在这里，第二预设映射关系可以为基于n的指数函数关系，例如e^(-n)；其中，n的取值等于权重索引值，即n=0,1,…,theMaxPos。这样，当n的取值等于index时，那么第二取值等于e^(-index)，第一乘积值等于e^(-index)×ModelScale。另外，第一预设映射关系可以设置为Round(x)；那么当x等于第一乘积值时，Round(x)的值即为第一取值，也即加权系数cWeightInt[i][j][k]。
还需要说明的是,在本申请实施例中,对于第一预设映射关系而言,可以用下式表示:
Round(x)=Sign(x)×Floor(Abs(x)+0.5)                (6)
其中,
Sign(x)={1，x>0；0，x==0；-1，x<0}
Floor(x)表示小于或等于x的最大整数。
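结合式(2)至式(6)，下面给出一个离线建表并查表得到加权系数的C语言极简示意；其中theMaxPos按(1<<BitDepth)-1理解、ModelScale的具体取值、函数名buildWeightLut/getWeight等均为本示例的假设：
#include <math.h>

#define BIT_DEPTH    10
#define THE_MAX_POS  ((1 << BIT_DEPTH) - 1)          /* 对应式(3)，此处按(1<<BitDepth)-1理解 */

static int lut[THE_MAX_POS + 1];

static int roundHalfAway(double x)                   /* 对应式(6)的Round()：Sign(x)×Floor(Abs(x)+0.5) */
{
    return (x >= 0) ? (int)floor(x + 0.5) : -(int)floor(-x + 0.5);
}

void buildWeightLut(int modelScale)                  /* modelScale即第一因子ModelScale，示例中取常数 */
{
    for (int n = 0; n <= THE_MAX_POS; n++)
        lut[n] = roundHalfAway(exp(-(double)n) * modelScale);   /* 对应式(5)的离线建表形式 */
}

int getWeight(int diffY)                             /* diffY为第一颜色分量的参考样值 */
{
    int index = diffY;
    if (index < 0) index = 0;                        /* 式(2)：index=Clip3(0,theMaxPos,diffY) */
    if (index > THE_MAX_POS) index = THE_MAX_POS;
    return lut[index];                               /* 式(4)：cWeightInt=LUT[index] */
}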
进一步地,对于第一因子而言,在一些实施例中,第一因子可以为预设常数值。也就是说,第一因子可以是预先设定的常数,其与块尺寸参数无关。
对于第一因子而言,在一些实施例中,第一因子的取值还可以与块尺寸参数有关。在一种具体的实施例中,确定第一因子,可以包括:根据当前块的尺寸参数,确定第一因子的取值;其中,当前块的尺寸参数包括以下参数的至少之一:当前块的宽度,当前块的高度。也就是说,本申请实施例可以采用分类方式固定第一因子的取值。例如,将根据当前块的尺寸参数分为三类,确定每一类对应的第一因子的取值。针对这种情况,本申请实施例还可以预先存储当前块的尺寸参数与第一因子的取值映射查找表,然后根据该查找表即可确定出第一因子的取值。
示例性地,W表示当前块的宽度,H表示当前块的高度;若当前块的尺寸参数满足第一预设条件,即Min(W,H)<=4,则设置第一因子的取值为第一值;若当前块的尺寸参数满足第二预设条件,即Min(W,H)>4&&Min(W,H)<=16,则设置第一因子的取值为第二值;若当前块的尺寸参数满足第三预设条件,即Min(W,H)>16,则设置第一因子的取值为第三值。简单来说,在本申请实施例中,第一因子可以是预先设定的常数,也可以是根据当前块的尺寸参数来确定的,甚至还可以是其它方式(例如,根据颜色分量的BitDepth等)来确定,这里不作任何限定。
进一步地,在一些实施例中,所述根据当前块的第一颜色分量的参考样值,确定权重索引值,可以包括:确定第二因子;根据当前块的第一颜色分量的参考样值和第二因子,确定权重索引值。
需要说明的是,在本申请实施例中,一定条件下还可以根据WCP的核心参数中的控制参数对加权系数进行调整。在这里,第二因子即为本实施例所述的控制参数(也可称为“尺度参数”、“尺度因子”等),用S表示。示例性地,如果当前块的尺寸灵活性好,可以根据该第二因子调整权重系数,以非线性函数(例如Softmax函数)为例,可以根据当前块所归属的块分类类别的不同,选择不同的第二因子来调整函数,以便根据调整后的函数确定出加权系数。
还需要说明的是，对于第二因子而言，在一些实施例中，第二因子可以是预设常数值。也就是说，在这种情况下，对于S而言，可以根据色度相对平坦的特性调整邻近色度的加权系数分布，从而捕获适合自然图像色度预测的加权系数分布。为了确定适合自然图像色度预测的参数S，遍历给定的S集合，通过不同S下预测色度与原始色度间的差距来衡量S合适与否。示例性地，S可以取2^(-ε)，其中ε∈{1,0,-1,-2,-3}；经过试验发现，在此S集合中，S的最佳取值为4。因此，在一种具体的实施例中，S可以设置为4，但是本申请实施例并不作具体限定。
还需要说明的是,对于第二因子而言,在一些实施例中,第二因子的取值还可以与块尺寸参数有关。在一种具体的实施例中,确定第二因子,可以包括:根据当前块的尺寸参数,确定第二因子的取值;其中,当前块的尺寸参数包括以下参数的至少之一:当前块的宽度,当前块的高度。
在一种可能的实现方式中,所述根据当前块的尺寸参数,确定第二因子的取值,可以包括:
若当前块的高度和宽度的最小值小于或等于4,则确定第二因子为8;
若当前块的高度和宽度的最小值大于4且小于或等于16,则确定第二因子为12;
若当前块的高度和宽度的最小值大于16,则确定第二因子为16。
需要说明的是,本申请实施例可以采用分类方式固定第二因子的取值。例如,将根据当前块的尺寸参数分为三类,确定每一类对应的第二因子的取值。针对这种情况,本申请实施例还可以预先存储当前块的尺寸参数与第二因子的取值映射查找表,然后根据该查找表即可确定出第二因子的取值。示例性地,表1示出了本申请实施例提供的一种第二因子与当前块的尺寸参数之间的对应关系。
表1
当前块的尺寸参数 第二因子
Min(W,H)<=4 8
Min(W,H)>4&&Min(W,H)<=16 12
Min(W,H)>16 16
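按照上述表1的映射关系，一个极简的C语言示意如下（函数名selectScaleS为本示例的假设）：
int selectScaleS(int width, int height)
{
    int minWH = (width < height) ? width : height;   /* Min(W,H) */
    if (minWH <= 4)
        return 8;
    else if (minWH <= 16)
        return 12;                                   /* Min(W,H)>4 && Min(W,H)<=16 */
    else
        return 16;                                   /* Min(W,H)>16 */
}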
在另一种可能的实现方式中,在对权重系数的调整中,还可以对上述的第二因子与当前块的尺寸参数之间的对应关系进行微调。表2示出了本申请实施例提供的另一种第二因子与当前块的尺寸参数之间的对应关系。
表2
当前块的尺寸参数 第二因子
Min(W,H)<=4 8
Min(W,H)>4&&Min(W,H)<16 12
Min(W,H)>=16 16
在又一种可能的实现方式中,在对权重系数的调整中,还可以对上述的第二因子的取值进行微调。在一些实施例中,所述根据当前块的尺寸参数,确定第二因子的取值,可以包括:
若当前块的高度和宽度的最小值小于或等于4,则确定第二因子为7;
若当前块的高度和宽度的最小值大于4且小于或等于16,则确定第二因子为11;
若当前块的高度和宽度的最小值大于16,则确定第二因子为15。
也就是说,通过对上述的第二因子的取值进行微调,表3示出了本申请实施例提供的又一种第二因子与当前块的尺寸参数之间的对应关系。
表3
当前块的尺寸参数 第二因子
Min(W,H)<=4 7
Min(W,H)>4&&Min(W,H)<=16 11
Min(W,H)>16 15
还需要说明的是,在本申请实施例中,将根据当前块的尺寸参数分为三类,还可以通过块种类索引值(用wcpSizeId表示)来指示不同的尺寸参数。在再一种可能的实现方式下,所述根据当前块的尺寸参数,确定第二因子的取值,可以包括:根据块种类索引值,确定第二因子的取值。
示例性地,若块种类索引值等于0,则指示Min(W,H)<=4的当前块;若块种类索引值等于1,则指示Min(W,H)>4&&Min(W,H)<=16的当前块;若块种类索引值等于2,则指示Min(W,H)>16的当前块。在这种情况下,表4示出了本申请实施例提供的一种第二因子与块种类索引值之间的对应关系。
表4
块种类索引值 第二因子
0 8
1 12
2 16
示例性地,若块种类索引值等于0,则指示Min(W,H)<128的当前块;若块种类索引值等于1,则指示Min(W,H)>=128&&Min(W,H)<=256的当前块;若块种类索引值等于2,则指示Min(W,H)>256的当前块。在这种情况下,表5示出了本申请实施例提供的另一种第二因子与块种类索引值之间的对应关系。
表5
块种类索引值 第二因子
0 10
1 8
2 1
示例性地,若块种类索引值等于0,则指示Min(W,H)<64的当前块;若块种类索引值等于1,则指示Min(W,H)>=64&&Min(W,H)<=512的当前块;若块种类索引值等于2,则指示Min(W,H)>512的当前块。在这种情况下,表6示出了本申请实施例提供的又一种第二因子与块种类索引值之间的对应关系。
表6
块种类索引值 第二因子
0 16
1 4
2 1
还需要说明的是,对于第二因子而言,在一些实施例中,第二因子还可以根据当前块的参考像素数量进行分类。在另一种具体的实施例中,确定第二因子,可以包括:根据当前块的参考像素数量,确定第二因子的取值;其中,N表示参考像素数量。
在一种可能的实现方式中,所述根据当前块的参考像素数量,确定第二因子的取值,可以包括:
若N的取值小于16,则确定第二因子为8;
若N的取值大于或等于16且小于32,则确定第二因子为12;
若N的取值大于或等于32,则确定第二因子为16。
也就是说,根据当前块的参考像素数量进行分类,表7示出了本申请实施例提供的一种第二因子与参考像素数量之间的对应关系。
表7
参考像素数量 第二因子
N<16 8
N>=16&&N<32 12
N>=32 16
需要注意的是,无论是表1、表2、表3,还是表4、表5、表6、表7等,这里仅是一种示例性查找表,本申请实施例并不作具体限定。
进一步地,在一些实施例中,根据当前块的第一颜色分量的参考样值和第二因子,确定权重索引值,可以包括:
根据第一颜色分量的参考样值和第二因子,使用第三预设映射关系确定第三取值;
确定当前块的最大权重索引值和最小权重索引值;
根据最大权重索引值和最小权重索引值对第三取值进行修正处理,确定权重索引值。
需要说明的是，在本申请实施例中，最大权重索引值可以用theMaxPos表示，最小权重索引值可以用零表示，权重索引值用index表示。在这里，权重索引值是限定在theMaxPos和零之间，而第三取值是f(S,diffY[i][j][k])。示例性地，权重索引值可以根据下式计算得到，如下所示：
index=Clip3(0,theMaxPos,f(S,diffY[i][j][k]))           (7)
其中,k=0,1,...,inSize-1,
Clip3(x,y,z)={x，z<x；y，z>y；z，其它}
也就是说，在式(7)中，如果f(S,diffY[i][j][k])<0，那么index=0；如果f(S,diffY[i][j][k])>theMaxPos，那么index=theMaxPos；否则，如果0≤f(S,diffY[i][j][k])≤theMaxPos，那么index=f(S,diffY[i][j][k])。在这里，theMaxPos的取值可以是根据上述的式(3)计算得到，还可以是WCP的核心参数中所确定的，对此并不作任何限定。
还需要说明的是,对于第三预设映射关系而言,f()是指与第二因子S和亮度差向量diffY[i][j][k]的函数。在一种可能的实现方式中,f(S,diffY[i][j][k])可以通过下式来实现,如下所示:
f(S,diffY[i][j][k])=diffY[i][j][k]            (8)
在另一种可能的实现方式中,f(S,diffY[i][j][k])可以通过如下操作来实现。在一些实施例中,所示根据第一颜色分量的参考样值和第二因子,使用第三预设映射关系确定第三取值,可以包括:
确定至少一个移位数组;
根据第二因子,从至少一个移位数组中确定目标偏移量;
对第一颜色分量的参考样值进行目标偏移量的右移运算,确定第三取值。
在这里,由于不同的第二因子S可以使用各自的LUT[S],本申请实施例仅存储几个基础的LUT。例如,当S={2,4,8}时,仅存储S=2时的LUT;对于其他的第二因子S,可以通过移位运算得到。这时候,f(S,diffY[i][j][k])可以通过下式来实现,如下所示:
f(S,diffY[i][j][k])=diffY[i][j][k]>>LUTshift[S]          (9)
其中,对于移位数组LUTshift[S],其定义如下:
LUTshift[S]={0，S=2；1，S=4；2，S=8}            (10)
在本申请实施例中,如果第二因子等于2,那么目标偏移量等于0;如果第二因子等于4,那么目标偏移量等于1;如果第二因子等于8,那么目标偏移量等于2;然后对第一颜色分量的参考样值进行目标偏移量的右移运算,即可确定出第三取值。
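按照式(9)和式(10)，当S∈{2,4,8}且仅存储S=2的LUT时，可以用移位把其它S折算到同一张表的索引上；下面给出一个极简的C语言示意（函数名mapDiffToIndex为本示例的假设）：
int mapDiffToIndex(int S, int diffY)
{
    int shift;
    switch (S) {                      /* 对应式(10)的移位数组LUTshift[S] */
        case 2:  shift = 0; break;
        case 4:  shift = 1; break;
        case 8:  shift = 2; break;
        default: shift = 0; break;    /* 其它S不在本示例讨论范围内 */
    }
    return diffY >> shift;            /* 式(9)：f(S,diffY)=diffY>>LUTshift[S]，结果再经Clip3限定 */
}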
另外,需要注意的是,对于f(S,diffY[i][j][k])来说,其实现方式并不局限于式(8)或式(9),还可以是其他实现方式,本申请实施例也不作任何限定。
进一步地,在一些实施例中,所述根据权重索引值,使用第一预设映射关系确定加权系数,可以包括:
根据第二因子和权重索引值,确定第二乘积值;
确定第二乘积值在第一预设映射关系下对应的第四取值,将加权系数设置为等于第四取值。
在一种具体的实施例中,确定第二乘积值在第一预设映射关系下对应的第四取值,可以包括:
确定第一因子;
根据第二乘积值,使用第二预设映射关系确定第五取值;
计算第一因子与第五取值的第三乘积值;
将第四取值设置为等于第三乘积值在第一预设映射关系下对应的取值。
需要说明的是,在本申请实施例中,第一因子可以用ModelScale表示,第二因子可以用S表示,权重索引值可以用index表示。
对于第一预设映射关系而言,在一些实施例中,第一预设映射关系是第二因子、权重索引值与加权系数的数值映射查找表。也就是说,在本申请实施例中,解码端可以预先设置有对应的查找表(Look Up Table,LUT)。通过该查找表,结合index即可确定出对应的加权系数。示例性地,加权系数cWeightInt[i][j][k]可以用下述的映射关系表示:
cWeightInt[i][j][k]=LUT[S][index]             (11)
对于第一预设映射关系而言,在一些实施例中,第一预设映射关系还可以是预设函数关系,该函数的输入为index和S,输出为加权系数。示例性地,加权系数cWeightInt[i][j][k]还可以用下述的函数关系表示:
cWeightInt[i][j][k]=Round(e^(-index/S)×ModelScale)             (12)
在这里，第二预设映射关系仍可以为基于n的指数函数关系，例如e^(-n/S)；其中，n的取值等于权重索引值，即n=0,1,…,theMaxPos。这样，当n的取值等于index时，那么第五取值等于e^(-index/S)，第三乘积值等于e^(-index/S)×ModelScale。
另外,第一预设映射关系可以设置为Round(x),具体如上述的式(6)所示;那么当x等于第三乘积值时,Round(x)的值即为第四取值,也即加权系数cWeightInt[i][j][k]。
简单来说，本申请实施例使用的是LUT来确定加权系数cWeightInt[i][j][k]。其中，LUT中包含的元素个数是theMaxPos个，如式(3)所示，theMaxPos与亮度或色度的比特深度有关；如式(4)和式(11)所示，LUT中元素的值都是常数值，可以是预先存储在解码器中，也可以是在编码或解码之前计算得到的；如式(5)或式(12)所示，可以说是根据一个指数为n的指数函数关系来确定；如式(4)和式(11)所示，LUT的索引参数是根据diffY[i][j][k]确定的，LUT存储的是不同索引参数对应的cWeightInt[i][j][k]。
S503:根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值。
需要说明的是,在本申请实施例中,基于权重预测的输入参考像素数量可以用N表示,也可以用inSize表示。在这里,基于权重预测的输入参考像素数量与第二颜色分量的参考样值的数量相同,也可以说是N表示第二颜色分量的参考样值的数量,N是正整数。
在一些实施例中,根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值,可以包括:对N个当前块的第二颜色分量的参考值进行平均值计算,得到当前块的第二颜色分量的参考均值。
还需要说明的是,在本申请实施例中,当前块的第二颜色分量的参考值可以用refC[k]表示,当前块的第二颜色分量的参考均值可以用avgC表示,k=0,1,…,N-1。avgC的计算如下:
avgC=(∑_{k=0}^{N-1} refC[k])/N             (13)
对于N的取值,在一些实施例中,该方法还可以包括:根据当前块的尺寸参数,确定块种类索引值;根据块种类索引值,使用第五预设映射关系确定N的取值。
在一种具体的实施例中,第五预设映射关系表示块种类索引值与N的数值映射查找表。
需要说明的是,在本申请实施例中,块种类索引值可以用wcpSizeId表示。针对不同的块种类索引值,基于权重预测的输入参考像素数量也会存在差异;即N或者(inSize)的取值不同。
示例性地,若Min(W,H)<=4的当前块,则确定块种类索引值等于0;若Min(W,H)>4&&Min(W,H)<=16的当前块,则确定块种类索引值等于1;若Min(W,H)>16的当前块,则确定块种类索引值等于2;或者,若Min(W,H)<128的当前块,则确定块种类索引值等于0;若Min(W,H)>=128&&Min(W,H)<=256的当前块,则确定块种类索引值等于1;若Min(W,H)>256的当前块,则确定块种类索引值等于2;对此并不作任何限定。
在一种可能的实现方式中,表8示出了本申请实施例提供的一种块种类索引值与N(inSize)的取值之间的对应关系。
表8
块种类索引值 N(inSize)
0 2×H+2×W
1 2×H+2×W
2 2×H+2×W
在另一种可能的实现方式中,表9示出了本申请实施例提供的另一种块种类索引值与N(inSize)的取值之间的对应关系。
表9
块种类索引值 N(inSize)
0 2×H+2×W
1 1.5×H+1.5×W
2 H+W
进一步地,对于当前块的第二颜色分量的参考样值而言,在一些实施例中,所述根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值,可以包括:
对当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值进行减法计算,得到当前块的第二颜色分量的参考样值。
在本申请实施例中,对N个参考色度信息refC计算其平均值avgC,然后将N个参考色度信息refC与平均值avgC相减即可得到参考色度差向量diffC。具体地,diffC的计算如下:
diffC[k]=refC[k]-avgC           (14)
其中,k=0,1,…,N-1。
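结合式(13)和式(14)，下面给出计算参考色度均值avgC与参考色度差向量diffC的C语言极简示意（函数名buildChromaDiff以及整除取整方式均为本示例的假设）：
void buildChromaDiff(const int *refC, int N, int *avgC, int *diffC)
{
    int sum = 0;
    for (int k = 0; k < N; k++)
        sum += refC[k];
    *avgC = sum / N;                     /* 式(13)：平均值计算，此处假设直接整除 */
    for (int k = 0; k < N; k++)
        diffC[k] = refC[k] - *avgC;      /* 式(14)：diffC[k]=refC[k]-avgC */
}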
S504:根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值。
S505:根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
需要说明的是,在确定当前块的第二颜色分量的参考均值avgC、当前块的第二颜色分量的参考样值 diffC[k]以及对应的加权系数cWeightInt[i][j][k]之后,就可以进一步确定当前块中第二颜色分量采样点的预测值。
在一些实施例中,根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值,可以包括:
确定第二颜色分量的参考样值与对应的加权系数的加权值;
将当前块中待预测像素的加权和值设置为等于N个加权值之和,将当前块中待预测像素的系数和值设置为等于N个加权系数之和;其中,N表示第二颜色分量的参考样值的数量,N是正整数;
根据加权和值和系数和值,使用第四预设映射关系确定第六取值;
对第二颜色分量的参考均值与第六取值进行加法计算,得到当前块中待预测像素的第二颜色分量的预测值;
根据当前块中待预测像素的第二颜色分量的预测值,确定当前块中第二颜色分量采样点的预测值。
需要说明的是,在本申请实施例中,如果第二颜色分量的参考样值的数量有N个,那么首先确定每一个第二颜色分量的参考样值与对应的加权系数的加权值(即subC[i][j][k]),然后对这N个加权值进行加法运算,可以得到当前块中待预测像素的加权和值,可以用calVal表示。具体地,其计算公式具体如下,
subC[i][j][k]=(cWeightInt[i][j][k]×diffC[k])           (15)
calVal=∑_{k=0}^{N-1} subC[i][j][k]             (16)
在这里,(i,j)表示当前块中待预测像素的位置信息,k表示WCP计算过程中使用的第k个第二颜色分量的参考样值,且k=0,…,N-1。
还需要说明的是,在本申请实施例中,针对当前块中待预测像素对应的N个加权系数,可以对这N个加权系数进行加法运算,得到当前块中待预测像素的系数和值,可以用sum表示。具体地,其计算公式具体如下:
sum=∑_{k=0}^{N-1} cWeightInt[i][j][k]             (17)
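结合式(15)至式(17)，对某一待预测像素(i,j)，加权和calVal与系数和sum可按如下C语言极简示意累加得到（函数名accumulateWeighted为本示例的假设，cWeightInt与diffC沿用前文含义）：
void accumulateWeighted(const int *cWeightInt, const int *diffC, int N,
                        long long *calVal, long long *sum)
{
    *calVal = 0;
    *sum = 0;
    for (int k = 0; k < N; k++) {
        long long subC = (long long)cWeightInt[k] * diffC[k];   /* 式(15) */
        *calVal += subC;                                        /* 式(16)：加权和值 */
        *sum    += cWeightInt[k];                               /* 式(17)：系数和值 */
    }
}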
进一步地,在一些实施例中,根据加权和值和系数和值,使用第四预设映射关系确定第六取值,可以包括:
确定预设偏移量;
根据系数和值,使用第六预设映射关系确定第一数值;以及根据系数和值与第一数值,使用第七预设映射关系确定数组索引值,并根据数组索引值与预设偏移量,使用第八预设映射关系确定第二数值;
在数组索引值是否等于零的情况下,根据第一数值确定第三数值;以及根据第三数值与预设偏移量,确定第一偏移量;
根据第二数值与加权和值确定第四乘积值,根据第三数值和预设偏移量确定预设加法值,对第四乘积值和预设加法值进行加法运算,得到目标和值;
对目标和值进行第一偏移量的右移运算,确定第六取值。
需要说明的是,在本申请实施例中,预设偏移量可以用Shift表示,数组索引值可以用normDiff表示,第一数值可以用x表示,第二数值可以用v表示,第三数值可以用y表示,预设加法值用add表示。在一种具体的实施例中,Shift的取值可以设置为5,但是并不作具体限定。
还需要说明的是,在本申请实施例中,对于第一数值的计算,第六预设映射关系可以用下式表示:
x=Floor(Log2(sum))            (18)
在一种可能的实现方式中,对于第一数值的计算,按照数学上的直接描述,可以包括:确定系数和值的以2为底的对数值;确定小于或等于该对数值的最大整数值;这里所确定的最大整数值即为第一数值。
在另一种可能的实现方式中,对于第一数值的计算,按照计算结果进行描述,可以包括:将第一数值设置为等于系数和值的二进制表示时所需二进制符号的位数减一。
在又一种可能的实现方式中,对于第一数值的计算,按照可能的整数化实施方式进行描述(文字描述),可以包括:对系数和值进行二进制右移操作,确定右移后数值恰等于零时的右移位数;将第一数值设置为等于该右移位数减一。
在又一种可能的实现方式中,对于第一数值的计算,按照可能的整数化实施方式进行描述(伪代码描述),如式(18)所示,可以包括:
x=0
while(sum!=0){
sum=sum>>1;
x++;
}
x=x-1
等价地,该伪代码也可以描述如下:
x=0
while(sum>1){
sum=sum>>1;
x++;
}
或者,该伪代码还可以描述如下:
x=0
while(sum!=1){
sum=sum>>1;
x++;
}
需要注意的是,在伪代码描述中,本申请实施例所涉及的符号按照C语言进行理解。
在一些实施例中,所述根据系数和值与第一数值,使用第七预设映射关系确定数组索引值,可以包括:将系数和值与第一数值作为预设函数关系的输入,根据预设函数关系输出数组索引值。
在本申请实施例中,对于数组索引值的计算,可以用下式表示:
normDiff=Func(sum,x)                (19)
其中,sum表示系数和值,x表示第一数值,normDiff表示数组索引值。在这里,Func()是与Shift有关的函数,示例性地,Func()的一种具体形式如下所示:
Func(sum,x)=(sum<<(Shift+1)>>x)&(1<<(Shift+1)-1)             (20)
对应地,当Shift=5时,normDiff=((sum<<5)>>x)&31。
在一些实施例中,根据数组索引值与预设偏移量,使用第八预设映射关系确定第二数值,可以包括:
根据数组索引值,在数组映射表中确定索引指示值;
根据索引指示值与预设偏移量,使用第八预设映射关系确定第二数值。
需要说明的是,在本申请实施例中,数组映射表用DivSigTable表示,那么数组索引值normDiff在DivSigTable中对应的索引指示值为DivSigTable[normDiff]。示例性地,根据DivSigTable[normDiff]和Shift,第八预设映射关系如下式所示:
v=DivSigTable[normDiff]|(1<<Shift)                (21)
对应地,当Shift=5时,v=DivSigTable[normDiff]|32。在这里,“|”运算符表示按位或运算,即v是由DivSigTable[normDiff]与32进行按位或运算得到的。
在一些实施例中,在数组索引值是否等于零的情况下,根据第一数值确定第三数值,可以包括:
若数组索引值等于零，则将第三数值设置为等于第一数值；
若数组索引值不等于零，则将第三数值设置为等于第一数值与一的和值。
需要说明的是,在本申请实施例中,第一数值用x表示,第三数值用y表示。示例性地,可以用下式表示:
y={x，normDiff==0；x+1，normDiff!=0}             (22)
在这里,“==”运算符表示等于运算,“!=”运算符表示不等于运算。
还需要说明的是,在本申请实施例中,对于预设加法值而言,根据第三数值和预设偏移量确定预设加法值,其计算公式如下所示:
add=1<<y<<(Shift-1)             (23)
还需要说明的是,在本申请实施例中,第一偏移量是由第三数值和预设偏移量确定的,示例性地,通过对第三数值和预设偏移量进行加法运算,以得到第一偏移量,即第一偏移量可以为y+Shift。
如此,假定第六取值用C表示,那么对于第六取值而言,可以用下式表示:
C=(calVal×v+add)>>(y+Shift)             (24)
在这里,“<<”运算符表示左移运算,“>>”运算符表示右移运算。
进一步地,对于当前块中待预测像素的第二颜色分量的预测值的确定,假定待预测像素为(i,j),当前块中待预测像素的第二颜色分量的预测值用C pred[i][j]表示,这时候根据第二颜色分量的参考均值与第六取值进行加法计算,可以得到当前块中待预测像素的第二颜色分量的预测值,其计算公式如下:
C_pred[i][j]=avgC+C             (25)
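为说明上述用DivSigTable与移位代替除法的整型计算过程（式(18)至式(25)），下面给出一个C语言极简示意；其中Shift取5、DivSigTable取自后文解码规范文本，函数名wcpPredictPixel为本示例的假设，且假设sum大于0，未包含后续的钳位修正：
static const int DivSigTable[32] = {
    0,15,14,13,12,12,11,10,10,9,8,8,7,7,6,6,
    5,5,4,4,4,3,3,3,2,2,2,1,1,1,1,0
};

int wcpPredictPixel(long long calVal, long long sum, int avgC)
{
    const int Shift = 5;
    int x = 0;
    long long t = sum;
    while (t > 1) { t >>= 1; x++; }                                      /* 式(18)：x=Floor(Log2(sum)) */
    int normDiff = (int)(((sum << Shift) >> x) & ((1 << Shift) - 1));    /* 对应Shift=5时normDiff=((sum<<5)>>x)&31 */
    int v = DivSigTable[normDiff] | (1 << Shift);                        /* 式(21) */
    int y = (normDiff != 0) ? (x + 1) : x;                               /* 式(22) */
    long long add = 1LL << y << (Shift - 1);                             /* 式(23) */
    long long C = (calVal * v + add) >> (y + Shift);                     /* 式(24) */
    return (int)(avgC + C);                                              /* 式(25)：再加上参考均值 */
}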
进一步地,在本申请实施例中,对于C pred[i][j],通常还需要限定在一预设范围内。因此,在一些实施例中,该方法还可以包括:对待预测像素的第二颜色分量的预测值进行修正操作,将修正后的预测值作为当前块中待预测像素的第二颜色分量的预测值。
需要说明的是,在本申请实施例中,预设范围可以为:0到(1<<BitDepth)-1之间;其中,BitDepth为色度分量所要求的比特深度。如果预测值超出该预设范围的取值,那么需要对预测值进行相应的修正操作。示例性地,可以对C pred[i][j]进行钳位操作,具体如下:
当C pred[i][j]的值小于0时,将其置为0;
当C pred[i][j]的值大于或等于0且小于或等于(1<<BitDepth)-1时,其等于C pred[i][j];
当C pred[i][j]的值大于(1<<BitDepth)-1时,其置为(1<<BitDepth)-1。
这样,在对预测值进行修正处理之后,可以保证当前块中待预测像素的第二颜色分量的预测值都在0到(1<<BitDepth)-1之间。
进一步地,在确定出预测值之后,在一定条件下需要后处理操作后作为最终的色度预测值。因此,在一些实施例中,根据当前块中待预测像素的第二颜色分量的预测值,确定当前块中第二颜色分量采样点的预测值,可以包括:
对待预测像素的第二颜色分量的预测值进行滤波处理,确定当前块中第二颜色分量采样点的预测值。
在本申请实施例中,待预测像素的第二颜色分量的预测值包含当前块中至少部分第二颜色分量采样点的预测值。换句话说,根据待预测像素的第二颜色分量的预测值所组成的第一预测块,该第一预测块包含当前块中至少部分第二颜色分量采样点的预测值。
需要说明的是,在本申请实施例中,如果第一预测块包括了当前块中部分第二颜色分量采样点的预测值,这时候需要对第一预测块进行上采样滤波,以得到最终的第二预测块。因此,在一些实施例中,该方法还可以包括:对第一预测块进行上采样滤波处理,确定当前块的第二颜色分量的第二预测块。
还需要说明的是,在本申请实施例中,如果第一预测块中包含的第二颜色分量的预测值的数量与当前块中包含的第二颜色分量采样点的数量相同,但是并未包含当前块的第二颜色分量采样点的预测值,这时候需要用滤波对预测值进行增强,以得到最终的第二预测块。因此,在一些实施例中,该方法还可以包括:对第一预测块进行滤波增强处理,确定当前块的第二颜色分量的第二预测块。
还需要说明的是,在本申请实施例中,如果第一预测块包括了当前块中全部第二颜色分量采样点的预测值,这时候不需要对第一预测块进行任何处理,可以直接将第一预测块作为最终的第二预测块。
也就是说,第一预测块可以包含当前块中至少部分第二颜色分量采样点的预测值。其中,如果第一预测块包含当前块中全部第二颜色分量采样点的预测值,那么可以将当前块中第二颜色分量采样点的预测值设置为等于第一预测块的值;如果第一预测块包含当前块中部分第二颜色分量采样点的预测值,那么可以对第一预测块的值进行上采样滤波,将当前块中第二颜色分量采样点的预测值设置为等于所述上采样滤波后的输出值。
这样,在经过上述操作后,第二预测块包括当前块中全部第二颜色分量采样点的预测值。如此,基于权重的色度预测输出predWcp在一定条件下需要后处理后才能够作为最终的色度预测值predSamples,否则最终的色度预测值predSamples即为predWcp。
在一些实施例中,在确定出当前块中第二颜色分量采样点的预测值之后,根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值,可以包括:
确定当前块中第二颜色分量采样点的预测差值;
根据当前块中第二颜色分量采样点的预测差值和当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
在一种具体的实施例中,确定当前块中第二颜色分量采样点的重建值,包括:对当前块中第二颜色分量采样点的预测差值和当前块中第二颜色分量采样点的预测值进行加法运算,确定当前块中第二颜色分量采样点的重建值。
还需要说明的是,在本申请实施例中,确定当前块中第二颜色分量采样点的预测差值(residual),可以是:解析码流,确定当前块中第二颜色分量采样点的预测差值。
这样,以色度分量为例,通过解析码流,确定当前块的色度预测差值;在确定当前块的色度预测值之后,对色度预测值和色度预测差值进行加法计算,可以得到当前块的色度重建值。
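上述“预测值加预测差值得到重建值”的过程可以用如下C语言极简示意表示（函数名reconstructChroma为本示例的假设，未包含可能需要的钳位操作）：
void reconstructChroma(const int *predSamples, const int *residual,
                       int numSamples, int *recSamples)
{
    for (int n = 0; n < numSamples; n++)
        recSamples[n] = predSamples[n] + residual[n];   /* 重建值 = 预测值 + 预测差值 */
}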
可以理解地,本申请实施例是对WCP预测技术过程中对浮点型运算的优化,采用整型运算实现。一方面,充分利用当前块的内容特性,自适应选择最佳的整型运算位移量;另一方面,充分保证WCP预测技术的精确性;又一方面,充分考虑权重模型的特性,合理设计整型运算过程;从而可以在保证WCP预测技术的一定准确性的前提下,降低了WCP预测技术的计算复杂度。
本实施例提供了一种解码方法,通过确定当前块的第一颜色分量的参考值和当前块的第二颜色分量 的参考值;根据当前块的第一颜色分量的参考值,确定加权系数;根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。这样,基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息,不仅需要构造亮度差向量,以便确定出加权系数;而且还需要根据色度参考信息确定色度平均值,然后根据色度参考信息和色度平均值来确定出色度差向量,进而根据色度差向量及对应的加权系数,再加上色度平均值即可确定出色度预测值;如此,通过对基于权重的色度预测的计算过程进行优化,能够实现完全采用整型运算,而且充分考虑了权重模型的特性,合理优化整型运算过程;在充分保证色度预测准确性的同时,还可以降低计算复杂度,提高编解码效率,进而提升编解码性能。
在本申请的另一实施例中,基于前述实施例所述的解码方法,以当前块进行色度预测为例,在对当前块进行色度预测时,当前块的重建亮度信息、相邻区域的参考亮度信息及参考色度信息都是已解码的参考信息。因此,本申请实施例提出了一种利用上述这些信息的基于权重的色度预测技术,该技术实现过程中的全部运算使用整数运算。本申请实施例的技术方案主要提出了:采用整数运算替代浮点数运算。在这里,WCP预测技术的色度预测过程的详细步骤如下:
WCP模式的输入:当前块的位置(xTbCmp,yTbCmp),当前块的宽度nTbW及当前块的高度nTbH。
WCP模式的输出:当前块的预测值predSamples[x][y],其中以当前块内左上角位置为坐标原点,x=0,...,nTbW-1,y=0,...,nTbH-1。
其中,WCP预测技术的预测过程可以包含确定WCP核心参数、获取输入信息、基于权重的色度预测、后处理过程等步骤,经过这些步骤之后,可以得到当前块的色度预测值。
在一种具体的实施例中,参见图7,其示出了本申请实施例提供的另一种解码方法的流程示意图。如图7所示,该方法可以包括:
S701:确定WCP核心参数。
需要说明的是,对于S701而言,对WCP涉及的核心参数进行确定,即可以通过配置或通过某种方式获取或推断WCP核心参数,例如在解码端从码流获取WCP核心参数。
在这里,WCP核心参数包括但不限于控制参数(S)、基于权重的色度预测输入的个数(inSize)、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)、权重模型LUT、权重模型LUT的最大权重索引值theMaxPos。其中,基于权重的色度预测输出的第一预测块可以用predWcp表示,这里基于权重的色度预测输出的个数可以设置为相同的值(如predSizeW=predSizeH=S/4)或者与当前块的尺寸参数相关(如predSizeW=nTbW,predSizeH=nTbH)。其中,控制参数(S)可以用于对后续环节中非线性函数进行调整或用来对后续环节涉及的数据进行调整。在这里,权重模型LUT可以预定义,也可以根据不同的WCP控制参数(S)实时计算得到;权重模型LUT的最大权重索引值theMaxPos可以根据不同的权重模型LUT进行调整,或者是固定的。
对于WCP核心参数的确定,在一定条件下受块尺寸或块内容或块内像素数的影响。例如:
可以对当前块根据其块尺寸或块内容或块内像素数进行分类,根据不同的类别确定相同或不同的核心参数。即不同类别对应的控制参数(S)或基于权重的色度预测输入的个数inSize或基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。注意,predSizeW和高predSizeH也可以相同或不同。
如果WCP应用的块尺寸种类较多或块尺寸之间的差异较大或块内容差异很大或块内像素数差异较大,可以对当前块根据其块尺寸或块内容或块内像素数进行分类,根据不同的类别确定相同或不同的核心参数。即不同类别对应的控制参数(S)或基于权重的色度预测输入的个数inSize或基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。注意,predSizeW和高predSizeH也可以相同或不同。
下面为了更好说明核心参数的确定,以两种简单分类为例进行说明:
分类示例1:WCP可以根据当前块的宽度和高度将当前块分类,用wcpSizeId表示块的种类,也可称为块种类索引值。对不同种类的块,控制参数(S)、基于权重的色度预测输入的个数inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。这里以分为3类的一种示例进行说明:
根据当前块的宽度和高度将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当 前块的宽度,nTbH为当前块的高度,块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示min(nTbW,nTbH)<=4的当前块。其中,控制参数(S)为8,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示4<min(nTbW,nTbH)<=16的当前块。其中,控制参数(S)为12,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=2:表示min(nTbW,nTbH)>16的当前块。其中,控制参数(S)为16,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
以表格形式表示上述WCP核心参数的数量关系,如表10所示。
表10
wcpSizeId S inSize predSizeH predSizeW
0 8 2×nTbH+2×nTbW nTbH nTbW
1 12 2×nTbH+2×nTbW nTbH nTbW
2 16 2×nTbH+2×nTbW nTbH nTbW
分为3类还可以另一种示例进行说明:
根据当前块的宽度和高度将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示min(nTbW,nTbH)<=4的当前块。其中,控制参数(S)为8,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示4<min(nTbW,nTbH)<=16的当前块。其中,控制参数(S)为12,inSize为(1.5×nTbH+1.5×nTbW),基于权重的色度预测输出nTbH/2×nTbW/2个色度预测值;
wcpSizeId=2:表示min(nTbW,nTbH)>16的当前块。其中,WCP控制参数(S)为16,inSize为(nTbH+nTbW),基于权重的色度预测输出nTbH/4×nTbW/4个色度预测值;
以表格形式表示上述核心参数的数量关系,如表11所示。
表11
wcpSizeId S inSize predSizeH predSizeW
0 8 2×nTbH+2×nTbW nTbH nTbW
1 12 1.5×nTbH+1.5×nTbW nTbH/2 nTbW/2
2 16 nTbH+nTbW nTbH/4 nTbW/4
分类示例2:WCP模式也可以根据当前块的像素数将当前块分类,用wcpSizeId表示块的种类。对不同种类的块,控制参数(S)、基于权重的色度预测输入的个数inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。这里以分为3类的一种示例进行说明:
根据当前块的像素数将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,nTbW×nTbH表示当前块的像素数。块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示(nTbW×nTbH)<128的当前块。其中,控制参数(S)为10,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示128<=(nTbW×nTbH)<=256的当前块。其中,控制参数(S)为8,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=2:表示(nTbW×nTbH)>256当前块。其中,控制参数(S)为1,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
以表格形式表示上述核心参数的数量关系,如表12所示。
表12
wcpSizeId S inSize predSizeH predSizeW
0 10 2×nTbH+2×nTbW nTbH nTbW
1 8 2×nTbH+2×nTbW nTbH nTbW
2 1 2×nTbH+2×nTbW nTbH nTbW
分为3类还可以另一种示例进行说明:
根据当前块的像素数将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、 基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,nTbW×nTbH表示当前块的像素数。块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示(nTbW×nTbH)<64的当前块。其中,控制参数(S)为16,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示64<=(nTbW×nTbH)<=512的当前块。其中,控制参数(S)为4,inSize为(1.5×nTbH+1.5×nTbW),基于权重的色度预测输出nTbH/2×nTbW/2个色度预测值;
wcpSizeId=2:表示(nTbW×nTbH)>512的当前块。其中,控制参数(S)为1,inSize为(nTbH+nTbW),基于权重的色度预测输出nTbH/4×nTbW/4个色度预测值;
以表格形式表示上述核心参数的数量关系,如表13所示。
表13
wcpSizeId S inSize predSizeH predSizeW
0 16 2×nTbH+2×nTbW nTbH nTbW
1 4 1.5×nTbH+1.5×nTbW nTbH/2 nTbW/2
2 1 nTbH+nTbW nTbH/4 nTbW/4
还需要说明的是,对于表11、表13来说,predSizeW×predSizeH是基于权重的色度预测计算得到的;但是当wcpSizeId=1时,仅部分即nTbH/2×nTbW/2的色度预测值是根据WCP计算得到的;当wcpSizeId=2时,仅部分即nTbH/4×nTbW/4的色度预测值是根据WCP计算得到的;那么剩下一部分的色度预测值,则是根据WCP计算得到的色度预测值进行滤波处理所得到的,这里的滤波处理可以是插值滤波方式、上采样滤波方式等,对此不作任何限定。
S702:根据WCP核心参数,获取输入信息。
需要说明的是,对于S702而言,输入信息可以包括参考色度信息(refC)、参考亮度信息(refY)和重建亮度信息(recY)。其中,对于输入信息的获取,在预测当前块时,当前块的上方区域、左上方区域及左方区域作为当前块的相邻区域(也可被称为“参考区域”),如前述的图6所示,相邻区域中的像素都是已重建的参考像素。
还需要说明的是,从相邻区域中获取参考色度信息refC和参考亮度信息refY。获取的参考色度信息包括但不限于:选取当前块的上方区域的参考色度重建值,和/或,左方区域的参考色度重建值。获取的参考亮度信息包括但不限于:根据参考色度信息位置获取对应的参考亮度信息。
获取当前块的重建亮度信息recY,获取方式包括但不限于:根据当前块内的色度信息位置获取对应的重建亮度信息作为当前块的重建亮度信息。
获取输入信息包括获取inSize数量的参考色度信息refC(若需要进行前处理,则是经过前处理操作后的)、获取inSize数量的参考亮度信息refY(若需要进行前处理,则是经过前处理操作后的)及获取当前预测块的亮度重建信息recY(若需要进行前处理,则是经过前处理操作后的)。
S703:根据输入信息进行基于权重的色度预测计算,确定当前块的色度预测值。
需要说明的是,对于S703而言,对WCP核心参数规定的尺寸内的色度预测值C pred[i][j],i=0…predSizeW-1,j=0…predSizeH-1,逐个进行获取。注意,predSizeH和predSizeW为确定的WCP核心参数,可与当前待预测色度块的高nTbH或宽nTbW相同或不同。这样在一定条件下,也可只对当前块内的部分待预测像素进行以下计算。
WCP的色度预测计算包括以下操作:通过参考色度信息的预处理、获取权重向量、根据权重向量进行加权预测得到基于权重的色度预测值,再对其进行修正。其中,参考信息的预处理过程包括计算平均值、构造参考色度差向量;权重向量的获取过程包括构造亮度差向量、计算权重向量。
详细计算过程如下:
计算参考色度信息的平均值avgC
对于k=0,1...inSize-1
构造参考色度差向量diffC
对于i=0…predSizeW-1,j=0…predSizeH-1
对于k=0,1...inSize-1
构造亮度差向量中各个元素diffY[i][j][k]
计算权重向量中各个元素cWeightInt[i][j][k],然后由cWeightInt[i][j]、diffC和avgC计算色度预测 值C pred[i][j]。
在一种具体的实施例中,参见图8,其示出了本申请实施例提供的又一种解码方法的流程示意图。如图8所示,该方法可以包括:
S801:对于当前块,利用获取的参考色度信息计算色度平均值,并利用色度平均值构建参考色度差向量。
需要说明的是,对于S801而言,主要是对参考色度信息的预处理。其中,对inSize数量的参考色度信息refC计算其平均值avgC,将inSize数量的参考色度信息refC与平均值avgC相减得到参考色度差向量diffC。
avgC=(∑_{k=0}^{inSize-1} refC[k])/inSize             (26)
diffC[k]=refC[k]-avgC                 (27)
S802:对于每个待预测像素,利用参考亮度信息及当前块的亮度重建信息构造亮度差向量。
需要说明的是,对于S802而言,主要是构造亮度差向量。其中,对WCP核心参数所规定尺寸内的每个待预测像素C pred[i][j],将其对应的亮度重建信息recY[i][j]与inSize数量的参考亮度信息refY相减并取绝对值得到亮度差向量diffY[i][j]。
diffY[i][j][k]=Abs(refY[k]-recY[i][j])         (28)
在一定条件下,可以对待预测像素的亮度差向量进行线性或非线性数值处理。例如:可根据WCP核心参数中的WCP控制参数S来缩放待预测像素的亮度差向量的数值。
S803:对于每个待预测像素,利用所得到的亮度差向量,根据权重模型LUT确定权重向量。
需要说明的是,对于S803而言,主要是计算权重向量。其中,采用非线性权重模型的LUT对每个待预测像素C pred[i][j]对应的亮度差向量diffY[i][j]进行处理,可以得到对应的整型权重向量cWeightInt[i][j]。
示例性地,在一种可能的实现方式中,可以采用非线性Softmax函数作为权重模型。此时的权重模型LUT的获取方式包括但不限于以下方法:
theMaxPos=1<<BitDepth-1            (29)
在这里,theMaxPos的值包括但不限于根据式(29)计算得到,它可以是WCP核心参数中所确定的参数。
对于n=0,1…theMaxPos
LUT[n]=Round(e^(-n)×ModelScale)             (30)
在这里,ModelScale的值包括但不限于由WCP核心参数中所确定的参数,它代表的是权重系数的放大倍数,是一个预先设定的常数。
对于k=0,1...inSize-1
index=Clip3(0,theMaxPos,diffY[i][j][k])          (31)
cWeightInt[i][j][k]=LUT[index]         (32)
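结合式(28)、式(31)和式(32)，对某一待预测像素(i,j)构造亮度差并查表得到整型权重向量的过程，可用如下C语言极简示意（函数名buildWeightVector为本示例的假设，LUT与theMaxPos沿用前文含义）：
void buildWeightVector(const int *refY, int inSize, int recYij,
                       const int *LUT, int theMaxPos, int *cWeightInt)
{
    for (int k = 0; k < inSize; k++) {
        int diffY = refY[k] - recYij;
        if (diffY < 0) diffY = -diffY;                          /* 式(28)：diffY=Abs(refY[k]-recY[i][j]) */
        int index = (diffY > theMaxPos) ? theMaxPos : diffY;    /* 式(31)：Clip3(0,theMaxPos,diffY) */
        cWeightInt[k] = LUT[index];                             /* 式(32)：查权重模型LUT */
    }
}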
示例性地,在另一种可能的实现方式中,在一定条件下,还可以根据核心参数中的WCP控制参数(S)对权重模型进行调整。
如果当前块尺寸灵活,那么可以根据WCP控制参数(S)调整权重模型LUT,以非线性Softmax函数为例,可根据当前块属于的块种类类别不同,选取不同的控制参数来调整函数。此时的权重模型LUT的获取方式包括但不限于以下方式:
其中,theMaxPos的值包括但不限于根据式(29)计算得到,它可以是WCP核心参数中所确定的参数,并可以与不同的WCP控制参数(S)赋值不同的值,即theMaxPos[S]。
对于n=0,1…theMaxPos
LUT[S][n]=Round(e^(-n/S)×ModelScale)             (33)
在这里,ModelScale的值包括但不限于由WCP核心参数中所确定的参数,它代表的是权重系数的放大倍数。
对于k=0,1...inSize-1
index=Clip3(0,theMaxPos,f(S,diffY[i][j][k]))       (34)
cWeightInt[i][j][k]=LUT[S][index]         (35)
其中,f()指与WCP控制参数S和亮度差diffY[i][j][k]的函数,其输出为LUT的权重索引值。这里,f()包括但不限于以下例子的实现方式:
f(S,diffY[i][j][k])=diffY[i][j][k]            (36)
其中,
Clip3(x,y,z)={x，z<x；y，z>y；z，其它}             (37)
Abs(x)={x，x>=0；-x，x<0}             (38)
Floor(x)表示小于或等于x的最大整数;
Log2(x)表示以2为底的对数;
Sign(x)={1，x>0；0，x==0；-1，x<0}             (39)
Round(x)=Sign(x)×Floor(Abs(x)+0.5)             (40)
S804:对于每个待预测像素,利用所得到的参考色度差向量与所得到的权重向量的乘积和色度平均值,计算色度预测值。
需要说明的是,对于S804而言,主要是计算色度预测值。其中,根据每个待预测像素对应的整型权重向量cWeightInt[i][j]和参考色度差向量diffC、参考色度信息的平均值来计算待预测像素的色度预测值。具体地,将参考色度差向量diffC与每个待预测像素对应的权重向量元素逐一对应相乘得到subC[i][j],将相乘结果累加除以每个待预测像素对应的整型权重向量cWeightInt[i][j]的和,再加上参考色度信息的平均值就得到了每个待预测像素的色度预测值C pred[i][j]。在这里,本申请实施例所涉及的除法操作一律使用右移操作,具体过程如下:
对于k=0,1...inSize-1
subC[i][j][k]=(cWeightInt[i][j][k]×diffC[k])           (41)
对于i=0…predSizeW-1,j=0…predSizeH-1
calVal=∑_{k=0}^{inSize-1} subC[i][j][k]，sum=∑_{k=0}^{inSize-1} cWeightInt[i][j][k]，
C_pred[i][j]=((calVal×v+add)>>(y+Shift))+avgC             (42)
其中,
x=Floor(Log2(sum))             (43)
v=DivSigTable[normDiff]|(1<<Shift)           (44)
normDiff=Func(sum,x)             (45)
y={x，normDiff==0；x+1，normDiff!=0}             (46)
add=1<<y<<(Shift-1)           (47)
在这里，Shift的获取方式包括但不限于由WCP核心参数所确定，在解码规范文本中，Shift的取值可以被赋值为5。
另外,关于x的计算公式对应于解码规范文本中,x=Floor(Log2(sum))。
关于v的计算公式对应于解码规范文本中,在Shift=5时的divSigTable[normDiff]|32。
关于normDiff的计算公式对应于解码规范文本中,对应于解码规范文本中,在Shift=5时的normDiff=((sum<<5)>>x)&31。
关于y的计算公式对应于解码规范文本中,x+=(normDiff!=0)?1:0。
关于add的计算公式对应于解码规范文本中,在Shift=5时的add=1<<x<<4。
还需要说明的是,在本申请实施例中,DivSigTable是一个预定义的数组,与Shift有关,DivSigTable对应于解码规范文本中的divSigTable[]={0,15,14,13,12,12,11,10,10,9,8,8,7,7,6,6,5,5,4,4,4,3,3,3,2,2,2,1,1,1,1,0}。
还需要说明的是,在本申请实施例中,Func()是与Shift有关的函数,输入是权重向量之和与所计算得到的x,输出是DivSigTable[]的数组索引值。示例性地,Func()的一种具体形式可以表示为:
Func(sum,x)=(sum<<(Shift+1)>>x)&(1<<(Shift+1)-1)             (48)
其中,式(48)对应于解码规范文本中,在Shift=5时的normDiff=((sum<<5)>>x)&31。
S805:对于每个待预测像素,对计算得到的色度预测值进行修正处理,确定当前块的色度预测值。
需要说明的是,对于S805而言,主要是对第一预测块predWcp内的色度预测值进行修正操作。其中,predWcp内的色度预测值应该在限定在预设范围内,如果超出该预设范围,那么需要进行相应的修正操作。例如:
可以对C pred[i][j]的色度预测值进行钳位操作,具体如下:
●当C pred[i][j]的值小于0时,将其置为0;
●当C pred[i][j]的值大于(1<<BitDepth)–1时,其置为(1<<BitDepth)–1。
其中,BitDepth为色度像素值所要求的比特深度,以保证predWcp中所有的色度预测值都在0到(1<<BitDepth)–1之间。即,如下式所示:
C pred[i][j]=Clip3(0,(1<<BitDepth)-1,C pred[i][j])           (49)
S704:对色度预测值进行后处理操作,确定当前块的目标色度预测值。
需要说明的是,对于S704而言,基于权重的色度预测输出predWcp在一定条件下需要后处理后作为最终的目标色度预测值predSamples,否则最终的目标色度预测值predSamples即为predWcp。
示例性地,为了降低WCP逐像素独立并行预测带来的不稳定性,可以对predWcp进行平滑滤波作为最终的色度预测值predSamples。或者,为了进一步提升WCP预测值的准确性,可以对predWcp进行位置相关的修正过程。比如:利用空间位置接近的参考像素对每个待预测像素计算色度补偿值,用此色度补偿值对predWcp进行修正,将修正后的预测值作为最终的色度预测值predSamples。或者,为了进一步提升WCP预测值的准确性,可以将其他色度预测模式计算的色度预测值与WCP计算的色度预测值predWcp进行加权融合,将此融合结果作为最终的色度预测值predSamples。比如:可以将CCLM模式预测得到的色度预测值与WCP计算的色度预测值predWcp进行等权重或不等权重加权,加权结果作为最终的色度预测值predSamples。或者,为了提高WCP预测性能,可以采用神经网络模型对WCP的预测输出predWcp进行修正等等,本申请实施例对此不作任何限定。
可以理解地,在本申请实施例中,对于权重向量的获取,其中关于具有WCP控制参数(S)的权重模型的LUT以及index的推导仍占据较大的储存空间,因此,本申请实施例还可以针对具有WCP控制参数(S)的权重模型LUT进行修改,其它保持不变。具体步骤如下:
在一定条件下,可根据WCP核心参数中的WCP控制参数(S)对权重模型LUT进行调整。示例性地,如果当前块的尺寸灵活,那么可以根据WCP控制参数(S)调整权重模型,以非线性Softmax函数为例,可根据当前预测块属于的块种类类别不同,选取不同的控制参数来调整函数。此时的权重模型LUT的获取方式包括但不限于以下方式:
theMaxPos=1<<BitDepth-1            (50)
在这里,theMaxPos的值包括但不限于根据式(50)计算得到,它可以是WCP核心参数中所确定的参数,并可以与不同的WCP控制参数(S)赋值不同的值,即theMaxPos[S]。
对于n=0,1…theMaxPos
LUT[S][n]=Round(e^(-n/S)×ModelScale)             (51)
在这里,ModelScale的值包括但不限于由WCP核心参数中所确定的参数,它代表的是权重系数的放大倍数。
由于不同的WCP控制参数(S)使用各自的LUT[S],这里可以仅存储几个基础的LUT。例如,当S={2,4,8}时,仅存储S=2时的LUT。这时候代码如下:
对于k=0,1...inSize-1
index=Clip3(0,theMaxPos,f(S,diffY[i][j][k]))         (52)
cWeightInt[i][j][k]=LUT[index]           (53)
其中,f()指与WCP控制参数(S)和亮度差向量diffY[i][j][k]的函数,其输出为LUT的权重索引值。这里,f()包括但不限于以下例子的实现方式:
f(S,diffY[i][j][k])=diffY[i][j][k]>>LUTshift[S]          (54)
其中,式(54)中的数组LUTshift的定义如下:
LUTshift[S]={0，S=2；1，S=4；2，S=8}             (55)
例如,当S={8,12,16}时,仅存储S=12时的LUT。这时候代码如下:
对于k=0,1...inSize-1
index=Clip3(0,theMaxPos,f(S,diffY[i][j][k]))        (56)
cWeightInt[i][j][k]=LUT[index]         (57)
其中,f()指与WCP控制参数(S)和亮度差向量diffY[i][j][k]的函数,其输出为LUT的权重索引值。这里,f()包括但不限于以下例子的实现方式:
f(S,diffY[i][j][k])=diffY[i][j][k]+(wcpSizeId&1)×(diffY[i][j][k]>>1)-(wcpSizeId&2)×(diffY[i][j][k]>>2)+indexOffset         (58)
还需要说明的是,式(58)对应于解码规范文本中的(baseDiffL+(blockIndex&1)*(baseDiffL>>1)-(blockIndex&2)*(baseDiffL>>2)+indexOffset[cnt])。
式(58)中的wcpSizeId为确定WCP核心参数中的变量,另外,indexOffset的计算方式如下:
indexOffset=((wcpSizeId&1)-(wcpSizeId&2))×(diffY[i][j][k]&1)       (59)
还需要说明的是,式(59)对应于解码规范文本中的indexOffset[cnt]=((blockIndex&1)–(blockIndex&2))*(baseDiffY[cnt]&1)。
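按照式(58)和式(59)，当S∈{8,12,16}且仅存储一张基础LUT时，可由wcpSizeId把亮度差折算为统一的查表索引；下面给出一个C语言极简示意（函数名mapDiffBySizeId为本示例的假设，所得结果仍需按式(56)经Clip3限定）：
int mapDiffBySizeId(int wcpSizeId, int diffY)
{
    int indexOffset = ((wcpSizeId & 1) - (wcpSizeId & 2)) * (diffY & 1);   /* 式(59) */
    return diffY
         + (wcpSizeId & 1) * (diffY >> 1)      /* wcpSizeId=1时附加diffY>>1 */
         - (wcpSizeId & 2) * (diffY >> 2)      /* wcpSizeId=2时减去2×(diffY>>2) */
         + indexOffset;                        /* 式(58) */
}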
在又一种具体的实施例中,基于前述实施例所述的解码方法,本申请实施例提供的一种解码规范文本的具体描述如下:
(1)关于INTRAL_WCP帧内预测模式的规范说明。
该过程的输入包括:
–帧内预测模式的标识predModeIntra。
–当前变换块的左上的样本位置相对于当前图片的左上的样本位置(xTbC、yTbC)。
–表示变换块宽度的变量nTbW。
–表示变换块高度的变量nTbH。
–表示当前块的颜色分量的变量cIdx。
–相邻色度样本p[x][y],x=-1,y=-1…2*nTbH-1和x=0…2*nTbW-1,y=-1。
此过程的输出为预测样本predSamples[x][y],x=0…nTbW-1,y=0…nTbH-1。
当前块对应的亮度位置(xTbY,yTbY)的推导如下:
(xTbY,yTbY)=(xTbC<<(SubWidthC-1),yTbC<<(SubHeightC-1))
变量availL和availT的推导如下:
–调用第(2)部分中规定的相邻块可用性的推导过程时,当前块的亮度位置(xCurr,yCurr)设置为等于(xTbY,yTbY),相邻亮度位置(xTbY-1,yTbY),将checkPredModeY设置为FALSE,cIdx作为输入,输出分配给availL。
–调用第(2)部分中规定的相邻块可用性的推导过程时,当前块的亮度位置(xCurr,yCurr)设置为等于(xTbY,yTbY),相邻亮度位置(xTbY,yTbY-1),将checkPredModeY设置为FALSE,将cIdx作为输入,并将输出分配给availT。
可用的右上方相邻色度样本数numTopRight如下所示:
–变量numTopRight设置为0,availTR设置为TRUE。
–当predModeIntra等于INTRA_WCP时,以下适用于x=nTbW…2*nTbW-1,直到availTR等于FALSE:
–调用第(2)部分中规定的相邻块可用性的推导过程时,当前亮度位置(xCurr,yCurr)设置为等于(xTbY,yTbY)相邻亮度位置(xTbY+x*SubWidthC,yTbY-1),将checkPredModeY设置为FALSE,将cIdx作为输入,并将输出分配给availTR。
–当availTR等于TRUE时,numTopRight增加1。
可用的左下方相邻色度样本数numLeftBelow如下所示：
–变量numLeftBelow设置为0，availLB设置为TRUE。
–当predModeIntra等于INTRA_WCP时，以下适用于y=nTbH…2*nTbH-1，直到availLB等于FALSE：
–调用第(2)部分中规定的相邻块可用性的推导过程时，当前亮度位置(xCurr,yCurr)设置为等于(xTbY,yTbY)，相邻亮度位置(xTbY-1,yTbY+y*SubHeightC)，将checkPredModeY设置为FALSE，cIdx作为输入，输出分配给availLB。
–当availLB等于TRUE时,numLeftBelow将增加1。
上方和右上方的可用相邻色度样本数numSampT以及左方和左下方的可用相邻色度样本数numSampL推导出如下:
–如果predModeIntra等于INTRA_WCP,则以下情况适用:
numSampT=availT?(nTbW+numTopRight):0
numSampL=availL?(nTbH+numLeftBelow):0
所有可用的相邻色度样本数numSamp和变量enableWcp的推导如下:
–如果predModeIntra等于INTRA_WCP,则以下情况适用:
numSamp=numSampT+numSampL
enableWcp=numSamp?true:false
预测样本predSamples[x][y]的x=0…nTbW-1,y=0…nTbH-1的推导如下:
–如果enableWcp等于FALSE,则以下内容适用:
predSamples[x][y]=1<<(BitDepth-1)
–否则,按照以下步骤顺序进行:
1、亮度样本pY[x][y]的x=0…nTbW*SubWidthC-1,y=0…nTbH*SubHeightC-1被设置为等于在位置(xTbY+x,yTbY+y)处的解码滤波处理之前的重建亮度样本。
2、相邻的亮度样本pY[x][y]推导如下:
–当availL等于TRUE时,相邻的亮度样本pY[x][y],x=-3…-1,y=(availT?-1:0)…SubHeightC*Max(nTbH,numSampL)-1,被设置为在位置(xTbY+x,yTbY+y)处的解码滤波处理之前的重建亮度样本。
–当availT等于FALSE时,相邻的亮度样本pY[x][y],x=-2..SubWidthC*nTbW-1,y=-2..-1,被设置为等于重建亮度样本pY[x][0]。
–当availT等于TRUE时,相邻的亮度样本pY[x][y],x=(availL?-1:0)…SubWidthC*Max(nTbW,numSampT)-1,y=-3…-1,被设置为等于在位置(xTbY+x,yTbY+y)处的解码滤波处理之前的重建亮度样本。
–当availL等于FALSE时，相邻的亮度样本pY[x][y]，x=-1,y=-2..SubHeightC*nTbH-1，被设置为等于重建亮度样本pY[0][y]。
3、下采样的同位的亮度样本pDsY[x][y],x=0..nTbW-1,y=0..nTbH-1的推导如下:
–如果SubWidthC和SubHeightC均等于1,则以下各项适用:
pDsY[x][y]=pY[x][y]
–否则,如果SubHeightC等于1,则以下内容适用:
pDsY[x][y]=(pY[SubWidthC*x-1][y]+2*pY[SubWidthC*x][y]+pY[SubWidthC*x+1][y]+2)>>2
–否则(SubHeightC不等于1),以下适用:
–如果sps_chroma_vertical_collocated_flag等于1,则以下情况适用:
pDsY[x][y]=(pY[SubWidthC*x][SubHeightC*y-1]+
pY[SubWidthC*x-1][SubHeightC*y]+
4*pY[SubWidthC*x][SubHeightC*y]+
pY[SubWidthC*x+1][SubHeightC*y]+
pY[SubWidthC*x][SubHeightC*y+1]+4)>>3
–否则(sps_chroma_vertical_collocated_flag等于0),以下内容适用:
pDsY[x][y]=(pY[SubWidthC*x-1][SubHeightC*y]+
pY[SubWidthC*x-1][SubHeightC*y+1]+
2*pY[SubWidthC*x][SubHeightC*y]+
2*pY[SubWidthC*x][SubHeightC*y+1]+
pY[SubWidthC*x+1][SubHeightC*y]+
pY[SubWidthC*x+1][SubHeightC*y+1]+4)>>3
4、当numSampT大于0时,相邻上侧色度样本refC[idx]设置为等于p[idx][-1],其中,idx=0…numSampT-1,并且下采样的相邻上侧亮度样本refY[idx],idx=0…numSampT-1规定如下:
如果SubWidthC和SubHeightC均等于1,则以下情况适用:
refY[idx]=pY[idx][-1]
否则,以下内容适用:
如果SubHeightC不等于1且bCTUboundary等于FALSE,则以下情况适用:
如果sps_chroma_vertical_collocated_flag等于1,则以下情况适用:
refY[idx]=(pY[SubWidthC*x][-3]+
pY[SubWidthC*x-1][-2]+
4*pY[SubWidthC*x][-2]+
pY[SubWidthC*x+1][-2]+
pY[SubWidthC*x][-1]+4)>>3
否则(sps_chroma_vertical_collocated_flag等于0),以下内容适用:
refY[idx]=(pY[SubWidthC*x-1][-1]+
pY[SubWidthC*x-1][-2]+
2*pY[SubWidthC*x][-1]+
2*pY[SubWidthC*x][-2]+
pY[SubWidthC*x+1][-1]+
pY[SubWidthC*x+1][-2]+4)>>3
否则(SubHeightC等于1或者bCTUboundary等于TRUE),以下情况适用:
refY[idx]=(pY[SubWidthC*x-1][-1]+
2*pY[SubWidthC*x][-1]+
pY[SubWidthC*x+1][-1]+2)>>2
5、当numSampL大于0时,相邻的左侧色度样本refC[idx]设置为等于p[-1][idx-numSampT],其中,idx=numSampT…numSamp-1,并且下采样的相邻左侧亮度样本refY[idx],其中,idx=numSampT…numSamp-1的推导如下:
如果SubWidthC和SubHeightC均等于1,则以下情况适用:
refY[idx]=pY[-1][y]
否则,如果SubHeightC等于1,则以下内容适用:
refY[idx]=(pY[-1-SubWidthC][y]+
2*pY[-SubWidthC][y]+
pY[1-SubWidthC][y]+2)>>2
否则,以下内容适用:
如果sps_chroma_vertical_collocated_flag等于1,则以下情况适用:
refY[idx]=(pY[-SubWidthC][SubHeightC*y-1]+
pY[-1-SubWidthC][SubHeightC*y]+
4*pY[-SubWidthC][SubHeightC*y]+
pY[1-SubWidthC][SubHeightC*y]+
pY[-SubWidthC][SubHeightC*y+1]+4)>>3
否则(sps_chroma_vertical_collocated_flag等于0),以下内容适用:
refY[idx]=(pY[-1-SubWidthC][SubHeightC*y]+
pY[-1-SubWidthC][SubHeightC*y+1]+
2*pY[-SubWidthC][SubHeightC*y]+
2*pY[-SubWidthC][SubHeightC*y+1]+
pY[1-SubWidthC][SubHeightC*y]+
pY[1-SubWidthC][SubHeightC*y+1]+4)>>3
6、变量avgC和色度参考样本差refDiffC[idx],其中,idx=0..numSamp-1的推导如下:
RedNumSamp=1<<Floor(Log2(numSamp))
bDwn=numSamp/RedNumSamp
avgC=(∑_{cnt=0}^{RedNumSamp-1} refC[cnt×bDwn]+(RedNumSamp>>1))>>Floor(Log2(numSamp))
refDiffC[idx]=refC[idx]-avgC
7、变量minBlockSize、blockIndex、亮度参考样本和重建样本的差baseDiffY[cnt],其中,cnt=0…numSamp–1;变量indexDiffY[cnt],其中,cnt=0..numSamp–1;变量indexOffset[cnt],其中,cnt=0…numSamp–1;变量LUTindex[cnt],其中,cnt=0…numSamp–1;最终的预测样本predSamples[x][y],其中,x=0...nTbW-1,y=0...nTbH–1的推导如下:
minBlockSize=Min(nTbW,nTbH)
–若minBlockSize等于2或4，则blockIndex设置为等于0
–若minBlockSize等于8或16，则blockIndex设置为等于1
–若minBlockSize大于16，则blockIndex设置为等于2
baseDiffY[cnt]=Abs(refY[cnt]-predSamples[x][y])
indexOffset[cnt]=((blockIndex&1)–(blockIndex&2))*(baseDiffY[cnt]&1)
LUTindex[cnt]=Clip3(0,WCP_LUT_Max_Index,(baseDiffL+(blockIndex&1)*(baseDiffL>>1)-(blockIndex&2)*(baseDiffL>>2)+indexOffset[cnt]))
其中,WCP_LUT_Max_Index等于49
8、变量numerator[cnt],其中,cnt=0…numSamp–1;用于预测样本的predSamples[x][y]的变量sum和finalVal,其中,x=0…nTbW-1,y=0…nTbH–1的推导如下:
numerator[cnt]=WcpLUT[LUTindex[cnt]]
其中，WcpLUT[i]={31,29,26,24,22,20,19,17,16,15,13,12,11,10,10,9,8,8,7,6,6,5,5,5,4,4,4,3,3,3,3,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1}
calVal=∑_{cnt=0}^{numSamp-1} numerator[cnt]×refDiffC[cnt]
sum=∑_{cnt=0}^{numSamp-1} numerator[cnt]
x=Floor(Log2(sum))
normDiff=((sum<<5)>>x)&31
x+=(normDiff!=0)?1:0
add=1<<x<<4
finalVal=(calVal*(divSigTable[normDiff]|32)+add)>>(x+5)+avgC
其中，divSigTable[]定义如下：
divSigTable[]={0,15,14,13,12,12,11,10,10,9,8,8,7,7,6,6,5,5,4,4,4,3,3,3,2,2,2,1,1,1,1,0}
9、预测样本predSamples[x][y],其中,x=0…nTbW-1,y=0…nTbH-1的推导如下:
predSamples[x][y]=Clip1(finalVal[x][y])
Clip1(x)=Clip3(0,(1<<BitDepth)-1,x)
(2)关于相邻块可用性的推导过程的规范说明。
该过程的输入包括:
–当前块的左上样本相对于当前图片的左上样本的亮度位置(xCurr,yCurr)。
–相邻块包含的亮度位置(xNbY,yNbY),该位置相对于当前图片左上位置的亮度样本。
–变量checkPredModeY用于标识可用性是否依赖于预测模式。
–表示当前块颜色分量的变量cIdx。
该过程的输出是包含该位置(xNbY,yNbY)的相邻块的可用性,用availableN表示。
相邻块的可用性availableN的推导如下：
–如果以下一个或多个条件为TRUE,则availableN设置为FALSE:
–xNbY小于0;
–yNbY小于0;
–xNbY大于或等于pps_pic_width_in_luma_samples;
–yNbY大于或等于pps_pic_height_in_luma_samples;
–(xNbY>>CtbLog2SizeY)大于(xCurr>>CtbLog2SizeY)且(yNbY>>CtbLog2SizeY)大于或等于(yCurr>>CtbLog2SizeY);
–(yNbY>>CtbLog2SizeY)大于或等于(yCurr>>CtbLog2SizeY)+1;
–IsAvailable[cIdx][xNbY][yNbY]等于FALSE;
–相邻块包含在与当前块不同的slice中;
–相邻块包含在与当前块不同的tile中;
–sps_entropy_coding_sync_enabled_flag等于1且(xNbY>>CtbLog2SizeY)大于或等于(xCurr>>CtbLog2SizeY)+1;
–否则,availableN设置为等于TRUE。
当以下所有条件均为true时,availableN设置为FALSE:
–checkPredModeY等于TRUE;
–CuPredMode[0][xNbY][yNbY]不等于CuPredMode[0][xCurr][yCurr]。
本实施例提供了一种解码方法,通过上述实施例对前述实施例的具体实现进行详细阐述,根据前述实施例的技术方案,从中可以看出,对WCP预测技术过程中对浮点型运算的优化,采用整型运算实现;一方面,充分利用当前块的内容特性,自适应选择最佳的整型运算位移量;另一方面,充分保证WCP预测技术的精确性;又一方面,充分考虑权重模型的特性,合理设计整型运算过程。也就是说,通过对WCP预测技术中的基于权重的色度预测的计算过程的优化,本技术方案完全采用整数运算,同时也可以自适应选择最佳的整数化所需的位移量等信息,进一步降低计算复杂度;而且由于WCP预测技术所采用的计算方式均为整数运算,这样对硬件实现是极为友好的;这样,可以在保证WCP预测技术的一定的准确性的前提下,降低WCP预测技术的计算复杂度,提升编解码性能。
在本申请的又一实施例中,参见图9,其示出了本申请实施例提供的一种编码方法的流程示意图。如图9所示,该方法可以包括:
S901:确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值。
需要说明的是,本申请实施例的编码方法应用于编码装置,或者集成有该编码装置的编码设备(也可简称为“编码器”)。另外,本申请实施例的编码方法具体可以是指一种帧内预测方法,更具体地,是一种基于权重的色度预测(Weight-based Chroma Prediction,WCP)的整数化运算方法。
在本申请实施例中,视频图像可以划分为多个编码块,每个编码块可以包括第一颜色分量、第二颜色分量和第三颜色分量,而这里的当前块是指视频图像中当前待进行帧内预测的编码块。另外,假定当前块进行第一颜色分量的预测,而且第一颜色分量为亮度分量,即待预测分量为亮度分量,那么当前块也可以称为亮度预测块;或者,假定当前块进行第二颜色分量的预测,而且第二颜色分量为色度分量,即待预测分量为色度分量,那么当前块也可以称为色度预测块。
还需要说明的是,在本申请实施例中,当前块的参考信息可以包括当前块的相邻区域中的第一颜色分量采样点的取值和当前块的相邻区域中的第二颜色分量采样点的取值,这些采样点(Sample)可以是根据当前块的相邻区域中的已编码像素确定的。在一些实施例中,当前块的相邻区域可以包括下述至少 之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在这里,上侧相邻区域和右上侧相邻区域整体可以看作上方区域,左侧相邻区域和左下侧相邻区域整体可以看作左方区域;除此之外,相邻区域还可以包括左上方区域,详见前述的图6所示。其中,在对当前块进行第二颜色分量的预测时,当前块的上方区域、左方区域和左上方区域作为相邻区域均可被称为当前块的参考区域,而且参考区域中的像素都是已重建的参考像素。
在一些实施例中,确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值,可以包括:
根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考值;
根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考值。
需要说明的是,在本申请实施例中,当前块的参考像素可以是指与当前块相邻的参考像素点,也可称为当前块的相邻区域中的第一颜色分量采样点、第二颜色分量采样点,用Neighboring Sample或Reference Sample表示。其中,这里的相邻可以是空间相邻,但是并不局限于此。例如,相邻也可以是时域相邻、空间与时域相邻,甚至当前块的参考像素还可以是对空间相邻、时域相邻、空间和时域相邻的参考像素点进行某种处理后得到的参考像素等等,本申请实施例不作任何限定。
还需要说明的是,在本申请实施例中,假定第一颜色分量为亮度分量,第二颜色分量为色度分量;那么,当前块的相邻区域中的第一颜色分量采样点的取值表示为当前块的参考像素对应的参考亮度信息,当前块的相邻区域中的第二颜色分量采样点的取值表示为当前块的参考像素对应的参考色度信息。
还需要说明的是,在本申请实施例中,从当前块的相邻区域中确定第一颜色分量采样点的取值或第二颜色分量采样点的取值,这里的相邻区域可以是仅包括上侧相邻区域,或者仅包括左侧相邻区域,也可以是包括上侧相邻区域和右上侧相邻区域,或者包括左侧相邻区域和左下侧相邻区域,或者包括上侧相邻区域和左侧相邻区域,或者甚至还可以包括上侧相邻区域、右上侧相邻区域和左侧相邻区域等等,本申请实施例不作任何限定。
进一步地,在一些实施例中,对于当前块的参考像素的确定,可以包括:对当前块的相邻区域中的像素进行筛选处理,确定参考像素。
具体来说,在本申请实施例中,根据当前块的相邻区域中的像素,组成第一参考像素集合;那么可以对第一参考像素集合进行筛选处理,确定参考像素。在这里,参考像素的数量可以为M个,M为正整数。换句话说,可以从相邻区域中的像素中选取出M个参考像素。其中,M的取值一般可以为4,但是并不作具体限定。
还需要说明的是,在当前块的相邻区域中的像素中,可能会存在部分不重要的像素(例如,这些像素的相关性较差)或者部分异常的像素,为了保证预测的准确性,需要将这些像素剔除掉,以便得到有效的参考像素。因此,在一种具体的实施例中,对相邻区域中的像素进行筛选处理,确定参考像素,可以包括:
基于相邻区域中的像素的位置和/或颜色分量强度,确定待选择像素位置;
根据待选择像素位置,从相邻区域中的像素中确定参考像素。
需要说明的是,在本申请实施例中,颜色分量强度可以用颜色分量信息来表示,比如参考亮度信息、参考色度信息等;这里,颜色分量信息的值越大,表明了颜色分量强度越高。这样,针对相邻区域中的像素进行筛选,可以是根据像素的位置来进行筛选的,也可以是根据颜色分量强度来进行筛选的,从而根据筛选出的像素确定当前块的参考像素,进一步可以确定出当前块的相邻区域中的第一颜色分量采样点的取值和当前块的相邻区域中的第二颜色分量采样点的取值;然后根据当前块的相邻区域中的第一颜色分量采样点的取值来确定当前块的第一颜色分量的参考值,根据当前块的相邻区域中的第二颜色分量采样点的取值来确定当前块的第二颜色分量的参考值。
在一些实施例中,根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考值,可以包括:对当前块的相邻区域中的第一颜色分量采样点的取值进行第一滤波处理,确定当前块的第一颜色分量的参考值。
在本申请实施例中,第一滤波处理为下采样滤波处理。其中,第一颜色分量为亮度分量,此时可以通过对参考亮度信息进行下采样滤波处理,使得滤波后的参考亮度信息的空间分辨率与参考色度信息的空间分辨率相同。示例性地,如果当前块的尺寸大小为2M×2N,参考亮度信息为2M+2N个,那么在经过下采样滤波之后,可以变换至M+N个,以得到当前块的第一颜色分量的参考值。
在一些实施例中,根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考值,可以包括:对当前块的相邻区域中的第二颜色分量采样点的取值进行第二滤波处理,确定当前块的第二颜色分量的参考值。
在本申请实施例中,第二滤波处理为上采样滤波处理。其中,上采样率是2的正整数倍。
也就是说,第一颜色分量为亮度分量,第二颜色分量为色度分量,本申请实施例也可以对参考色度信息进行上采样滤波,使得滤波后的参考色度信息的空间分辨率与参考亮度的空间分辨率相同。示例性地,如果参考亮度信息为2M+2N个,而参考色度信息为M+N个;那么在对参考色度信息经过上采样滤波之后,可以变换至2M+2N个,以得到当前块的第二颜色分量的参考值。
S902:根据当前块的第一颜色分量的参考值,确定加权系数。
需要说明的是,在本申请实施例中,当前块的参考信息还可以包括当前块中第一颜色分量采样点的重建值。假定第一颜色分量为亮度分量,那么当前块中第一颜色分量采样点的重建值即为当前块的重建亮度信息。
在一些实施例中,根据所述当前块的第一颜色分量的参考值,确定加权系数,可以包括:
确定当前块中第一颜色分量采样点的重建值;
根据当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值;
根据当前块的第一颜色分量的参考样值,确定加权系数。
在一种可能的实施例中,根据当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值,可以包括:
确定当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值的差值;
根据差值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,可以将当前块的第一颜色分量的参考样值设置为等于差值的绝对值。另外,根据差值来确定当前块的第一颜色分量的参考样值,还可以是对差值进行平方计算、或者将差值进行一些相关处理和映射等,以确定出当前块的第一颜色分量的参考样值,这里并不作任何限定。
在另一种可能的实施例中,根据当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值,可以包括:
对当前块中第一颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一颜色分量采样点的滤波样值;
根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,第三滤波处理为下采样滤波处理。其中,第一颜色分量为亮度分量,此时也可以对当前块内的重建亮度信息进行下采样滤波处理。示例性地,如果当前块内的重建亮度信息的数量为2M×2N,那么在经过下采样滤波之后,可以变换至M×N。
在本申请实施例中,根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值,确定当前块的第一颜色分量的参考样值,可以是根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值之差来确定当前块的第一颜色分量的参考样值;更具体地,可以是根据当前块中第一颜色分量采样点的滤波样值和当前块的第一颜色分量的参考值之差的绝对值来确定当前块的第一颜色分量的参考样值,这里也不作任何限定。
可以理解地,在确定出当前块的第一颜色分量的参考样值之后,本申请实施例可以进一步确定加权系数。其中,当前块的第一颜色分量的参考样值可以是当前块中第一颜色分量采样点的重建值和当前块的第一颜色分量的参考值之差的绝对值。
在本申请实施例中,当前块的第一颜色分量的参考样值可以是当前块中亮度重建信息(用recY表示)和inSize数量的参考亮度信息(用refY表示)之差的绝对值。其中,对于当前块中的待预测像素C pred[i][j],其对应的亮度差向量diffY[i][j][k]可以是由其对应的亮度重建信息recY[i][j]与inSize数量的参考亮度信息refY[k]相减并取绝对值得到的;即在本申请实施例中,当前块的第一颜色分量的参考样值可以用diffY[i][j][k]表示。
进一步地,在一些实施例中,根据当前块的第一颜色分量的参考样值,确定加权系数,可以包括:根据当前块的第一颜色分量的参考样值,确定权重索引值;根据权重索引值,使用第一预设映射关系确定加权系数。
在一种具体的实施例中,根据当前块的第一颜色分量的参考样值,确定权重索引值,可以包括:
确定当前块的最大权重索引值和最小权重索引值;
根据最大权重索引值和最小权重索引值对第一颜色分量的参考样值进行修正处理,确定权重索引值。
需要说明的是，在本申请实施例中，最大权重索引值可以用theMaxPos表示，最小权重索引值可以用零表示，权重索引值用index表示。在这里，权重索引值是限定在theMaxPos和零之间。示例性地，权重索引值可以根据下式计算得到，如前述的式(2)所示。
其中,k=0,1,...,inSize-1,
Clip3(x,y,z)={x，z<x；y，z>y；z，其它}
也就是说,在式(2)中,如果diffY[i][j][k]<0,那么index=0;如果diffY[i][j][k]>theMaxPos,那么index=theMaxPos;否则,如果0≤diffY[i][j][k]≤theMaxPos,那么index=diffY[i][j][k]。
在一种具体的实施例中,最大权重索引值可以与亮度或色度的比特深度(用BitDepth表示)有关。示例性地,最大权重索引值可以根据下式计算得到,如前述的式(3)所示。
需要说明的是,在本申请实施例中,theMaxPos的取值包括但不限于根据式(3)计算得到,它还可以是WCP的核心参数中所确定的。
进一步地,对于第一预设映射关系而言,在一些实施例中,第一预设映射关系是权重索引值与加权系数的数值映射查找表。也就是说,在本申请实施例中,解码端可以预先设置有对应的查找表(Look Up Table,LUT)。通过该查找表,结合index即可确定出对应的加权系数。示例性地,加权系数cWeightInt[i][j][k]可以用下述的映射关系表示,详见前述的式(4)。
对于第一预设映射关系而言,在一些实施例中,第一预设映射关系还可以是预设函数关系。在一些实施例中,根据权重索引值,使用第一预设映射关系确定加权系数,可以包括:确定在第一预设映射关系下权重索引值对应的第一取值;将加权系数设置为等于第一取值。
在一种具体的实施例中,确定在第一预设映射关系下权重索引值对应的第一取值,可以包括:
确定第一因子;
根据权重索引值,使用第二预设映射关系确定第二取值;
计算第一因子与第二取值的第一乘积值;
将第一取值设置为等于第一乘积值在第一预设映射关系下对应的取值。
需要说明的是,在本申请实施例中,第一因子可以用ModelScale表示,权重索引值用index表示。示例性地,加权系数cWeightInt[i][j][k]还可以用下述的函数关系表示,如前述的式(5)所示。
在这里，第二预设映射关系可以为基于n的指数函数关系，例如e^(-n)；其中，n的取值等于权重索引值，即n=0,1,…,theMaxPos。这样，当n的取值等于index时，那么第二取值等于e^(-index)，第一乘积值等于e^(-index)×ModelScale。另外，第一预设映射关系可以设置为Round(x)；那么当x等于第一乘积值时，Round(x)的值即为第一取值，也即加权系数cWeightInt[i][j][k]。另外，在本申请实施例中，对于第一预设映射关系而言，可以如前述的式(6)所示。
进一步地,对于第一因子而言,在一些实施例中,第一因子可以为预设常数值。也就是说,第一因子可以是预先设定的常数,其与块尺寸参数无关。
对于第一因子而言,在一些实施例中,第一因子的取值还可以与块尺寸参数有关。在一种具体的实施例中,确定第一因子,可以包括:根据当前块的尺寸参数,确定第一因子的取值;其中,当前块的尺寸参数包括以下参数的至少之一:当前块的宽度,当前块的高度。也就是说,本申请实施例可以采用分类方式固定第一因子的取值。例如,将根据当前块的尺寸参数分为三类,确定每一类对应的第一因子的取值。针对这种情况,本申请实施例还可以预先存储当前块的尺寸参数与第一因子的取值映射查找表,然后根据该查找表即可确定出第一因子的取值。示例性地,当前块的尺寸参数满足第一预设条件,即Min(W,H)<=4,则设置第一因子的取值为第一值;当前块的尺寸参数满足第二预设条件,即Min(W,H)>4&&Min(W,H)<=16,则设置第一因子的取值为第二值;当前块的尺寸参数满足第三预设条件,即Min(W,H)>16,则设置第一因子的取值为第三值。其中,W表示当前块的宽度,H表示当前块的高度。
进一步地,在一些实施例中,所述根据当前块的第一颜色分量的参考样值,确定权重索引值,可以包括:确定第二因子;根据当前块的第一颜色分量的参考样值和第二因子,确定权重索引值。
需要说明的是,在本申请实施例中,一定条件下还可以根据WCP的核心参数中的控制参数对加权系数进行调整。在这里,第二因子即为本实施例所述的控制参数(也可称为“尺度参数”、“尺度因子”等),用S表示。示例性地,如果当前块的尺寸灵活性好,可以根据该第二因子调整权重系数,以非线性函数(例如Softmax函数)为例,可以根据当前块所归属的块分类类别的不同,选择不同的第二因子来调整函数,以便根据调整后的函数确定出加权系数。
还需要说明的是，对于第二因子而言，在一些实施例中，第二因子可以是预设常数值。也就是说，在这种情况下，对于S而言，可以根据色度相对平坦的特性调整邻近色度的加权系数分布，从而捕获适合自然图像色度预测的加权系数分布。为了确定适合自然图像色度预测的参数S，遍历给定的S集合，通过不同S下预测色度与原始色度间的差距来衡量S合适与否。示例性地，S可以取2^(-ε)，其中ε∈{1,0,-1,-2,-3}；经过试验发现，在此S集合中，S的最佳取值为4。因此，在一种具体的实施例中，S可以设置为4，但是本申请实施例并不作具体限定。
还需要说明的是,对于第二因子而言,在一些实施例中,第二因子的取值还可以与块尺寸参数有关。在一种具体的实施例中,确定第二因子,可以包括:根据当前块的尺寸参数,确定第二因子的取值;其中,当前块的尺寸参数包括以下参数的至少之一:当前块的宽度,当前块的高度。
在一种可能的实现方式中,所述根据当前块的尺寸参数,确定第二因子的取值,可以包括:
若当前块的高度和宽度的最小值小于或等于4,则确定第二因子为8;
若当前块的高度和宽度的最小值大于4且小于或等于16,则确定第二因子为12;
若当前块的高度和宽度的最小值大于16,则确定第二因子为16。
需要说明的是,本申请实施例可以采用分类方式固定第二因子的取值。例如,将根据当前块的尺寸参数分为三类,确定每一类对应的第二因子的取值。针对这种情况,本申请实施例还可以预先存储当前块的尺寸参数与第二因子的取值映射查找表,然后根据该查找表即可确定出第二因子的取值。示例性地,前述的表1示出了本申请实施例提供的一种第二因子与当前块的尺寸参数之间的对应关系。
在另一种可能的实现方式中,在对权重系数的调整中,还可以对上述的第二因子与当前块的尺寸参数之间的对应关系进行微调。前述的表2示出了本申请实施例提供的另一种第二因子与当前块的尺寸参数之间的对应关系。
在又一种可能的实现方式中,在对权重系数的调整中,还可以对上述的第二因子的取值进行微调。在一些实施例中,所述根据当前块的尺寸参数,确定第二因子的取值,可以包括:
若当前块的高度和宽度的最小值小于或等于4,则确定第二因子为7;
若当前块的高度和宽度的最小值大于4且小于或等于16,则确定第二因子为11;
若当前块的高度和宽度的最小值大于16,则确定第二因子为15。
也就是说,通过对上述的第二因子的取值进行微调,前述的表3示出了本申请实施例提供的又一种第二因子与当前块的尺寸参数之间的对应关系。
还需要说明的是,在本申请实施例中,将根据当前块的尺寸参数分为三类,还可以通过块种类索引值(用wcpSizeId表示)来指示不同的尺寸参数。在再一种可能的实现方式下,所述根据当前块的尺寸参数,确定第二因子的取值,可以包括:根据块种类索引值,确定第二因子的取值。
示例性地,若块种类索引值等于0,则指示Min(W,H)<=4的当前块;若块种类索引值等于1,则指示Min(W,H)>4&&Min(W,H)<=16的当前块;若块种类索引值等于2,则指示Min(W,H)>16的当前块。在这种情况下,前述的表4示出了本申请实施例提供的一种第二因子与块种类索引值之间的对应关系。
示例性地,若块种类索引值等于0,则指示Min(W,H)<128的当前块;若块种类索引值等于1,则指示Min(W,H)>=128&&Min(W,H)<=256的当前块;若块种类索引值等于2,则指示Min(W,H)>256的当前块。在这种情况下,前述的表5示出了本申请实施例提供的另一种第二因子与块种类索引值之间的对应关系。
示例性地,若块种类索引值等于0,则指示Min(W,H)<64的当前块;若块种类索引值等于1,则指示Min(W,H)>=64&&Min(W,H)<=512的当前块;若块种类索引值等于2,则指示Min(W,H)>512的当前块。在这种情况下,前述的表6示出了本申请实施例提供的又一种第二因子与块种类索引值之间的对应关系。
还需要说明的是,对于第二因子而言,在一些实施例中,第二因子还可以根据当前块的参考像素数量进行分类。在另一种具体的实施例中,确定第二因子,可以包括:根据当前块的参考像素数量,确定第二因子的取值;其中,N表示参考像素数量。
在一种可能的实现方式中,所述根据当前块的参考像素数量,确定第二因子的取值,可以包括:
若N的取值小于16,则确定第二因子为8;
若N的取值大于或等于16且小于32,则确定第二因子为12;
若N的取值大于或等于32,则确定第二因子为16。
也就是说,根据当前块的参考像素数量进行分类,前述的表7示出了本申请实施例提供的一种第二因子与参考像素数量之间的对应关系。
进一步地,在一些实施例中,根据当前块的第一颜色分量的参考样值和第二因子,确定权重索引值,可以包括:
根据第一颜色分量的参考样值和第二因子,使用第三预设映射关系确定第三取值;
确定当前块的最大权重索引值和最小权重索引值;
根据最大权重索引值和最小权重索引值对第三取值进行修正处理,确定权重索引值。
需要说明的是，在本申请实施例中，最大权重索引值可以用theMaxPos表示，最小权重索引值可以用零表示，权重索引值用index表示。在这里，权重索引值是限定在theMaxPos和零之间，而第三取值是f(S,diffY[i][j][k])。示例性地，权重索引值可以根据下式计算得到，如前述的式(7)所示。
其中,k=0,1,...,inSize-1,
Clip3(x,y,z)={x，z<x；y，z>y；z，其它}
也就是说,在式(7)中,如果f(S,diffY[i][j][k])<0,那么index=0;如果f(S,diffY[i][j][k])>theMaxPos,那么index=theMaxPos;否则,如果0≤f(S,diffY[i][j][k])≤theMaxPos,那么index=f(S,diffY[i][j][k])。在这里,theMaxPos的取值可以是根据上述的式(3)计算得到,还可以是WCP的核心参数中所确定的,对此并不作任何限定。
还需要说明的是,对于第三预设映射关系而言,f()是指与第二因子S和亮度差向量diffY[i][j][k]的函数。在一种可能的实现方式中,f(S,diffY[i][j][k])可以通过下式来实现,如前述的式(8)所示。
在另一种可能的实现方式中,f(S,diffY[i][j][k])可以通过如下操作来实现。在一些实施例中,所示根据第一颜色分量的参考样值和第二因子,使用第三预设映射关系确定第三取值,可以包括:
确定至少一个移位数组;
根据第二因子,从至少一个移位数组中确定目标偏移量;
对第一颜色分量的参考样值进行目标偏移量的右移运算,确定第三取值。
在这里,由于不同的第二因子S可以使用各自的LUT[S],本申请实施例仅存储几个基础的LUT。例如,当S={2,4,8}时,仅存储S=2时的LUT;对于其他的第二因子S,可以通过移位运算得到。这时候,f(S,diffY[i][j][k])可以通过下式来实现,如前述的式(9)和式(10)所示。
在本申请实施例中,如果第二因子等于2,那么目标偏移量等于0;如果第二因子等于4,那么目标偏移量等于1;如果第二因子等于8,那么目标偏移量等于2;然后对第一颜色分量的参考样值进行目标偏移量的右移运算,即可确定出第三取值。
另外,需要注意的是,对于f(S,diffY[i][j][k])来说,其实现方式并不局限于式(8)或式(9),还可以是其他实现方式,本申请实施例也不作任何限定。
进一步地,在一些实施例中,所述根据权重索引值,使用第一预设映射关系确定加权系数,可以包括:
根据第二因子和权重索引值,确定第二乘积值;
确定第二乘积值在第一预设映射关系下对应的第四取值,将加权系数设置为等于第四取值。
在一种具体的实施例中,确定第二乘积值在第一预设映射关系下对应的第四取值,可以包括:
确定第一因子;
根据第二乘积值,使用第二预设映射关系确定第五取值;
计算第一因子与第五取值的第三乘积值;
将第四取值设置为等于第三乘积值在第一预设映射关系下对应的取值。
需要说明的是,在本申请实施例中,第一因子可以用ModelScale表示,第二因子可以用S表示,权重索引值可以用index表示。
对于第一预设映射关系而言,在一些实施例中,第一预设映射关系是第二因子、权重索引值与加权系数的数值映射查找表。也就是说,在本申请实施例中,解码端可以预先设置有对应的查找表(Look Up Table,LUT)。通过该查找表,结合index即可确定出对应的加权系数。示例性地,加权系数cWeightInt[i][j][k]可以用下述的映射关系如前述的式(11)所示。
对于第一预设映射关系而言,在一些实施例中,第一预设映射关系还可以是预设函数关系,该函数的输入为index和S,输出为加权系数。示例性地,加权系数cWeightInt[i][j][k]还可以用下述的函数关系如前述的式(12)所示。
在这里，第二预设映射关系仍可以为基于n的指数函数关系，例如e^(-n/S)；其中，n的取值等于权重索引值，即n=0,1,…,theMaxPos。这样，当n的取值等于index时，那么第五取值等于e^(-index/S)，第三乘积值等于e^(-index/S)×ModelScale。
另外,第一预设映射关系可以设置为Round(x),具体如上述的式(6)所示;那么当x等于第三乘积值时,Round(x)的值即为第四取值,也即加权系数cWeightInt[i][j][k]。
S903:根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值。
需要说明的是,在本申请实施例中,基于权重预测的输入参考像素数量可以用N表示,也可以用inSize表示。在这里,基于权重预测的输入参考像素数量与第二颜色分量的参考样值的数量相同,也可以说是N表示第二颜色分量的参考样值的数量,N是正整数。
在一些实施例中,根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值, 可以包括:对N个当前块的第二颜色分量的参考值进行平均值计算,得到当前块的第二颜色分量的参考均值。
还需要说明的是,在本申请实施例中,当前块的第二颜色分量的参考值可以用refC[k]表示,当前块的第二颜色分量的参考均值可以用avgC表示,而且avgC的计算如前述的式(13)所示。
对于N的取值,在一些实施例中,该方法还可以包括:根据当前块的尺寸参数,确定块种类索引值;根据块种类索引值,使用第五预设映射关系确定N的取值。
在一种具体的实施例中,第五预设映射关系表示块种类索引值与N的数值映射查找表。
需要说明的是,在本申请实施例中,块种类索引值可以用wcpSizeId表示。针对不同的块种类索引值,基于权重预测的输入参考像素数量也会存在差异;即N或者(inSize)的取值不同。
示例性地,若Min(W,H)<=4的当前块,则确定块种类索引值等于0;若Min(W,H)>4&&Min(W,H)<=16的当前块,则确定块种类索引值等于1;若Min(W,H)>16的当前块,则确定块种类索引值等于2;或者,若Min(W,H)<128的当前块,则确定块种类索引值等于0;若Min(W,H)>=128&&Min(W,H)<=256的当前块,则确定块种类索引值等于1;若Min(W,H)>256的当前块,则确定块种类索引值等于2;对此并不作任何限定。示例性地,前述的表8和表9分别示出了块种类索引值与N(inSize)的取值之间的对应关系示意。
进一步地,对于当前块的第二颜色分量的参考样值而言,在一些实施例中,所述根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值,可以包括:
对当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值进行减法计算,得到当前块的第二颜色分量的参考样值。
在本申请实施例中,对N个参考色度信息refC计算其平均值avgC,然后将N个参考色度信息refC与平均值avgC相减即可得到参考色度差向量diffC。具体地,diffC的计算如前述的式(14)所示。
S904:根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值。
S905:根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
需要说明的是,在确定当前块的第二颜色分量的参考均值avgC、当前块的第二颜色分量的参考样值diffC[k]以及对应的加权系数cWeightInt[i][j][k]之后,就可以进一步确定当前块中第二颜色分量采样点的预测值。
在一些实施例中,根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值,可以包括:
确定第二颜色分量的参考样值与对应的加权系数的加权值;
将当前块中待预测像素的加权和值设置为等于N个加权值之和,将当前块中待预测像素的系数和值设置为等于N个加权系数之和;其中,N表示第二颜色分量的参考样值的数量,N是正整数;
根据加权和值和系数和值,使用第四预设映射关系确定第六取值;
对第二颜色分量的参考均值与第六取值进行加法计算,得到当前块中待预测像素的第二颜色分量的预测值;
根据当前块中待预测像素的第二颜色分量的预测值,确定当前块中第二颜色分量采样点的预测值。
需要说明的是,在本申请实施例中,如果第二颜色分量的参考样值的数量有N个,那么首先确定每一个第二颜色分量的参考样值与对应的加权系数的加权值(即subC[i][j][k]),然后对这N个加权值进行加法运算,可以得到当前块中待预测像素的加权和值,可以用calVal表示。具体地,其计算公式具体如前述的式(15)和式(16)所示。
还需要说明的是,在本申请实施例中,针对当前块中待预测像素对应的N个加权系数,可以对这N个加权系数进行加法运算,得到当前块中待预测像素的系数和值,可以用sum表示。具体地,其计算公式具体如前述的式(17)所示。
进一步地,在一些实施例中,根据加权和值和系数和值,使用第四预设映射关系确定第六取值,可以包括:
确定预设偏移量;
根据系数和值,使用第六预设映射关系确定第一数值;以及根据系数和值与第一数值,使用第七预设映射关系确定数组索引值,并根据数组索引值与预设偏移量,使用第八预设映射关系确定第二数值;
在数组索引值是否等于零的情况下,根据第一数值确定第三数值;以及根据第三数值与预设偏移量,确定第一偏移量;
根据第二数值与加权和值确定第四乘积值,根据第三数值和预设偏移量确定预设加法值,对第四乘积值和预设加法值进行加法运算,得到目标和值;
对目标和值进行第一偏移量的右移运算,确定第六取值。
需要说明的是,在本申请实施例中,预设偏移量可以用Shift表示,数组索引值可以用normDiff表示,第一数值可以用x表示,第二数值可以用v表示,第三数值可以用y表示,预设加法值用add表示。在一种具体的实施例中,Shift的取值可以设置为5,但是并不作具体限定。
还需要说明的是,在本申请实施例中,对于第一数值的计算,第六预设映射关系可以如前述的式(18)所示。
在这里,对于第一数值的计算,可以是:先确定系数和值的以2为底的对数值,然后再确定小于或等于该对数值的最大整数值,这里所确定的最大整数值即为第一数值;或者,也可以是:将第一数值设置为等于系数和值的二进制表示时所需二进制符号的位数减一;或者,还可以是:对系数和值进行二进制右移操作,确定右移后数值恰等于零时的右移位数,然后将第一数值设置为等于该右移位数减一等等,本申请实施例不作任何限定。
在一些实施例中,所述根据系数和值与第一数值,使用第七预设映射关系确定数组索引值,可以包括:将系数和值与第一数值作为预设函数关系的输入,根据预设函数关系输出数组索引值。
在本申请实施例中,对于数组索引值的计算,可以用前述的式(19)所示。在这里,Func()是与Shift有关的函数,前述的式(20)示出了Func()的一种具体形式。
在一些实施例中,根据数组索引值与预设偏移量,使用第八预设映射关系确定第二数值,可以包括:
根据数组索引值,在数组映射表中确定索引指示值;
根据索引指示值与预设偏移量,使用第八预设映射关系确定第二数值。
需要说明的是,在本申请实施例中,数组映射表用DivSigTable表示,那么数组索引值normDiff在DivSigTable中对应的索引指示值为DivSigTable[normDiff]。示例性地,根据DivSigTable[normDiff]和Shift,第八预设映射关系如前述的式(21)所示。
在一些实施例中,在数组索引值是否等于零的情况下,根据第一数值确定第三数值,可以包括:
若数组索引值等于零，则将第三数值设置为等于第一数值；
若数组索引值不等于零，则将第三数值设置为等于第一数值与一的和值。
需要说明的是,在本申请实施例中,第一数值用x表示,第三数值用y表示。示例性地,可以用前述的式(22)所示。
还需要说明的是,在本申请实施例中,对于预设加法值而言,根据第三数值和预设偏移量确定预设加法值,其计算公式如前述的式(23)所示。
还需要说明的是,在本申请实施例中,第一偏移量是由第三数值和预设偏移量确定的,示例性地,通过对第三数值和预设偏移量进行加法运算,以得到第一偏移量,即第一偏移量可以为y+Shift。
如此,假定第六取值用C表示,那么对于第六取值而言,可以用前述的式(24)所示。
进一步地,对于当前块中待预测像素的第二颜色分量的预测值的确定,假定待预测像素为(i,j),当前块中待预测像素的第二颜色分量的预测值用C pred[i][j]表示,这时候根据第二颜色分量的参考均值与第六取值进行加法计算,可以得到当前块中待预测像素的第二颜色分量的预测值,其计算公式如前述的式(25)所示。
进一步地,在本申请实施例中,对于C pred[i][j],通常还需要限定在一预设范围内。因此,在一些实施例中,该方法还可以包括:对待预测像素的第二颜色分量的预测值进行修正操作,将修正后的预测值作为当前块中待预测像素的第二颜色分量的预测值。
需要说明的是,在本申请实施例中,预设范围可以为:0到(1<<BitDepth)-1之间;其中,BitDepth为色度分量所要求的比特深度。如果预测值超出该预设范围的取值,那么需要对预测值进行相应的修正操作。示例性地,可以对C pred[i][j]进行钳位操作,具体如下:
当C pred[i][j]的值小于0时,将其置为0;
当C pred[i][j]的值大于或等于0且小于或等于(1<<BitDepth)-1时,其等于C pred[i][j];
当C pred[i][j]的值大于(1<<BitDepth)-1时,其置为(1<<BitDepth)-1。
这样,在对预测值进行修正处理之后,可以保证当前块中待预测像素的第二颜色分量的预测值都在0到(1<<BitDepth)-1之间。
进一步地,在确定出预测值之后,在一定条件下需要后处理操作后作为最终的色度预测值。因此,在一些实施例中,根据当前块中待预测像素的第二颜色分量的预测值,确定当前块中第二颜色分量采样点的预测值,可以包括:
对待预测像素的第二颜色分量的预测值进行滤波处理,确定当前块中第二颜色分量采样点的预测值。
在本申请实施例中,待预测像素的第二颜色分量的预测值包含当前块中至少部分第二颜色分量采样点的预测值。换句话说,根据待预测像素的第二颜色分量的预测值所组成的第一预测块,该第一预测块包含当前块中至少部分第二颜色分量采样点的预测值。
需要说明的是,在本申请实施例中,如果第一预测块包括了当前块中部分第二颜色分量采样点的预测值,这时候需要对第一预测块进行上采样滤波,以得到最终的第二预测块。因此,在一些实施例中,该方法还可以包括:对第一预测块进行上采样滤波处理,确定当前块的第二颜色分量的第二预测块。
还需要说明的是,在本申请实施例中,如果第一预测块中包含的第二颜色分量的预测值的数量与当前块中包含的第二颜色分量采样点的数量相同,但是并未包含当前块的第二颜色分量采样点的预测值,这时候需要用滤波对预测值进行增强,以得到最终的第二预测块。因此,在一些实施例中,该方法还可以包括:对第一预测块进行滤波增强处理,确定当前块的第二颜色分量的第二预测块。
还需要说明的是,在本申请实施例中,如果第一预测块包括了当前块中全部第二颜色分量采样点的预测值,这时候不需要对第一预测块进行任何处理,可以直接将第一预测块作为最终的第二预测块。
也就是说,第一预测块可以包含当前块中至少部分第二颜色分量采样点的预测值。其中,如果第一预测块包含当前块中全部第二颜色分量采样点的预测值,那么可以将当前块中第二颜色分量采样点的预测值设置为等于第一预测块的值;如果第一预测块包含当前块中部分第二颜色分量采样点的预测值,那么可以对第一预测块的值进行上采样滤波,将当前块中第二颜色分量采样点的预测值设置为等于所述上采样滤波后的输出值。
这样,在经过上述操作后,第二预测块包括当前块中全部第二颜色分量采样点的预测值。如此,基于权重的色度预测输出predWcp在一定条件下需要后处理后才能够作为最终的色度预测值predSamples,否则最终的色度预测值predSamples即为predWcp。
在一些实施例中,在确定出当前块中第二颜色分量采样点的预测值之后,根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值,可以包括:
获取当前块中第二颜色分量采样点的原始值;
根据当前块中第二颜色分量采样点的原始值和当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
进一步地,在一些实施例中,该方法还可以包括:对当前块中第二颜色分量采样点的预测差值进行编码,将所得到的编码比特写入码流。
需要说明的是,在本申请实施例中,在确定出当前块中第二颜色分量采样点的预测值后,根据第二颜色分量采样点的原始值和第二颜色分量采样点的预测值,即可确定出第二颜色分量采样点的预测差值,具体可以是对第二颜色分量采样点的原始值和第二颜色分量采样点的预测值进行减法计算,从而能够确定出当前块的第二颜色分量采样点的预测差值。这样,在将第二颜色分量采样点的预测差值写入码流后,后续在解码端,通过解码即可获得第二颜色分量采样点的预测差值,以便恢复当前块中第二颜色分量采样点的重建值。
可以理解地,本申请实施例还提供了一种码流,该码流是根据待编码信息进行比特编码生成的;其中,待编码信息至少包括:当前块中第二颜色分量采样点的预测差值。
还可以理解地,本申请实施例是对WCP预测技术过程中对浮点型运算的优化,采用整型运算实现。一方面,充分利用当前块的内容特性,自适应选择最佳的整型运算位移量;另一方面,充分保证WCP预测技术的精确性;又一方面,充分考虑权重模型的特性,合理设计整型运算过程;从而可以在保证WCP预测技术的一定准确性的前提下,降低了WCP预测技术的计算复杂度。
本申请实施例还提供了一种编码方法,通过确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;根据当前块的第一颜色分量的参考值,确定加权系数;根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。这样,基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息,不仅需要构造亮度差向量,以便确定出加权系数;而且还需要根据色度参考信息确定色度平均值,然后根据色度参考信息和色度平均值来确定出色度差向量,进而根据色度差向量及对应的加权系数,再加上色度平均值即可确定出色度预测值;如此,通过对基于权重的色度预测的计算过程进行优化,能够实现完全采用整型运算,而且充分考虑了权重模型的特性,合理优化整型运算过程;在充分保证色度预测准确性的同时,还可以降低计算复杂度,提高编解码效率,进而提升编解码性能。
在本申请的再一实施例中,基于前述实施例相同的发明构思,参见图10,其示出了本申请实施例提供的一种编码装置310的组成结构示意图。如图10所示,该编码装置310可以包括:第一确定单元3101、第一计算单元3102和第一预测单元3103;其中,
第一确定单元3101,配置为确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;以及根据当前块的第一颜色分量的参考值,确定加权系数;
第一计算单元3102,配置为根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
第一预测单元3103,配置为根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
第一确定单元3101,还配置为根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
在一些实施例中,第一确定单元3101,还配置为获取当前块中第二颜色分量采样点的原始值;以及根据当前块中第二颜色分量采样点的原始值和当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
在一些实施例中,参见图10,编码装置310还可以包括编码单元3104,配置为对当前块中第二颜色分量采样点的预测差值进行编码,将所得到的编码比特写入码流。
可以理解地,在本申请实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机可读存储介质,应用于编码装置310,该计算机可读存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法的步骤。
基于上述编码装置310的组成以及计算机可读存储介质,参见图11,其示出了本申请实施例提供的编码设备320的组成结构示意图。如图11所示,编码设备320可以包括:第一通信接口3201、第一存储器3202和第一处理器3203;各个组件通过第一总线系统3204耦合在一起。可理解,第一总线系统3204用于实现这些组件之间的连接通信。第一总线系统3204除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图11中将各种总线都标为第一总线系统3204。其中,
第一通信接口3201,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第一存储器3202,用于存储能够在第一处理器3203上运行的计算机程序;
第一处理器3203,用于在运行所述计算机程序时,执行:
确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;
根据当前块的第一颜色分量的参考值,确定加权系数;
根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的预测差值。
可以理解,本申请实施例中的第一存储器3202可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic  RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器3202旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器3203可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过第一处理器3203中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器3203可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器3202,第一处理器3203读取第一存储器3202中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器3203还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
本实施例提供了一种编码设备,该编码设备还可以包括前述实施例所述的编码装置310。对于编码设备而言,基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息,不仅需要构造亮度差向量,以便确定出加权系数;而且还需要根据色度参考信息确定色度平均值,然后根据色度参考信息和色度平均值来确定出色度差向量,进而根据色度差向量及对应的加权系数,再加上色度平均值即可确定出色度预测值;如此,通过对基于权重的色度预测的计算过程进行优化,能够实现完全采用整型运算,而且充分考虑了权重模型的特性,合理优化整型运算过程;在充分保证色度预测准确性的同时,还可以降低计算复杂度,提高编解码效率,进而提升编解码性能。
基于前述实施例相同的发明构思,参见图12,其示出了本申请实施例提供的一种解码装置330的组成结构示意图。如图12所示,该解码装置330可以包括:第二确定单元3301、第二计算单元3302和第二预测单元3303;其中,
第二确定单元3301,配置为确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;以及根据当前块的第一颜色分量的参考值,确定加权系数;
第二计算单元3302,配置为根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
第二预测单元3303,配置为根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
第二确定单元3301,还配置为根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
在一些实施例中,第二确定单元3301,还配置为确定当前块中第二颜色分量采样点的预测差值;以及根据当前块中第二颜色分量采样点的预测差值和当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
在一些实施例中,参见图12,该解码装置330还可以包括解码单元3304,配置为解析码流,确定当前块中第二颜色分量采样点的预测差值。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本实施例提供了一种计算机可读存储介质,应用于解码装置330,该计算机可读存储介质存储有计算机程序,所述计算机程序被第二处理器执行时实现前述实施例中任一项所述的方法的步骤。
基于上述解码装置330的组成以及计算机可读存储介质,参见图13,其示出了本申请实施例提供的解码设备340的组成结构示意图。如图13所示,解码设备340可以包括:第二通信接口3401、第二存储器3402和第二处理器3403;各个组件通过第二总线系统3404耦合在一起。可理解,第二总线系统3404用于实现这些组件之间的连接通信。第二总线系统3404除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图13中将各种总线都标为第二总线系统3404。其中,
第二通信接口3401,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第二存储器3402,用于存储能够在第二处理器3403上运行的计算机程序;
第二处理器3403,用于在运行所述计算机程序时,执行:
确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值;
根据当前块的第一颜色分量的参考值,确定加权系数;
根据当前块的第二颜色分量的参考值,确定当前块的第二颜色分量的参考均值;以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值,确定当前块的第二颜色分量的参考样值;
根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数,确定当前块中第二颜色分量采样点的预测值;
根据当前块中第二颜色分量采样点的预测值,确定当前块中第二颜色分量采样点的重建值。
可选地,作为另一个实施例,第二处理器3403还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
可以理解，第二存储器3402与第一存储器3202的硬件功能类似，第二处理器3403与第一处理器3203的硬件功能类似；这里不再详述。
本实施例提供了一种解码设备，该解码设备还可以包括前述实施例所述的解码装置330。对于解码设备而言，基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息，不仅需要构造亮度差向量，以便确定出加权系数；而且还需要根据色度参考信息确定色度平均值，然后根据色度参考信息和色度平均值来确定出色度差向量，进而根据色度差向量及对应的加权系数，再加上色度平均值即可确定出色度预测值；如此，通过对基于权重的色度预测的计算过程进行优化，能够实现完全采用整型运算，而且充分考虑了权重模型的特性，合理优化整型运算过程；在充分保证色度预测准确性的同时，还可以降低计算复杂度，提高编解码效率，进而提升编解码性能。
在本申请的再一实施例中,参见图14,其示出了本申请实施例提供的一种编解码系统的组成结构示意图。如图14所示,编解码系统350可以包括编码器3501和解码器3502。其中,编码器3501可以为集成有前述实施例所述编码装置310的设备,或者也可以为前述实施例所述的编码设备320;解码器3502可以为集成有前述实施例所述解码装置330的设备,或者也可以为前述实施例所述的解码设备340。
在本申请实施例中,在该编解码系统350中,无论是编码器3501还是解码器3502,通过对基于权重的色度预测的计算过程进行优化,能够实现完全采用整型运算,而且充分考虑了权重模型的特性,合理优化整型运算过程;在充分保证色度预测准确性的同时,还可以降低计算复杂度,提高编解码效率,进而提升编解码性能。
需要说明的是,在本申请中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合, 得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例中，无论是编码端还是解码端，通过确定当前块的第一颜色分量的参考值和当前块的第二颜色分量的参考值；根据当前块的第一颜色分量的参考值，确定加权系数；根据当前块的第二颜色分量的参考值，确定当前块的第二颜色分量的参考均值；以及根据当前块的第二颜色分量的参考值与当前块的第二颜色分量的参考均值，确定当前块的第二颜色分量的参考样值；根据当前块的第二颜色分量的参考均值、当前块的第二颜色分量的参考样值以及对应的加权系数，确定当前块中第二颜色分量采样点的预测值。这样，在编码端根据当前块中第二颜色分量采样点的预测值，能够确定当前块的第二颜色分量采样点的预测差值；使得在解码端，在解码获得当前块的第二颜色分量采样点的预测差值之后，结合当前块中第二颜色分量采样点的预测值可以确定出当前块的第二颜色分量采样点的重建值。也就是说，基于当前块的相邻区域中的颜色分量参考信息与当前块内的颜色分量重建信息，不仅需要构造亮度差向量，以便确定出加权系数；而且还需要根据色度参考信息确定色度平均值，然后根据色度参考信息和色度平均值来确定出色度差向量，进而根据色度差向量及对应的加权系数，再加上色度平均值即可确定出色度预测值；如此，通过对基于权重的色度预测的计算过程进行优化，能够实现完全采用整型运算，而且充分考虑了权重模型的特性，合理优化整型运算过程；在充分保证色度预测准确性的同时，还可以降低计算复杂度，提高编解码效率，进而提升编解码性能。

Claims (82)

  1. 一种解码方法,包括:
    确定当前块的第一颜色分量的参考值和所述当前块的第二颜色分量的参考值;
    根据所述当前块的第一颜色分量的参考值,确定加权系数;
    根据所述当前块的第二颜色分量的参考值,确定所述当前块的第二颜色分量的参考均值;以及根据所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值,确定所述当前块的第二颜色分量的参考样值;
    根据所述当前块的第二颜色分量的参考均值、所述当前块的第二颜色分量的参考样值以及对应的所述加权系数,确定所述当前块中第二颜色分量采样点的预测值;
    根据所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的重建值。
  2. 根据权利要求1所述的方法,其中,所述确定当前块的第一颜色分量的参考值和所述当前块的第二颜色分量的参考值,包括:
    根据所述当前块的相邻区域中的第一颜色分量采样点的取值,确定所述当前块的第一颜色分量的参考值;
    根据所述当前块的相邻区域中的第二颜色分量采样点的取值,确定所述当前块的第二颜色分量的参考值;
    其中,所述相邻区域包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
  3. 根据权利要求2所述的方法,其中,所述根据所述当前块的相邻区域中的第一颜色分量采样点的取值,确定所述当前块的第一颜色分量的参考值,包括:
    对所述当前块的相邻区域中的第一颜色分量采样点的取值进行第一滤波处理,确定所述当前块的第一颜色分量的参考值。
  4. 根据权利要求3所述的方法,其中,所述第一滤波处理为下采样滤波处理。
  5. 根据权利要求2所述的方法,其中,所述根据所述当前块的相邻区域中的第二颜色分量采样点的取值,确定所述当前块的第二颜色分量的参考值,包括:
    对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第二滤波处理,确定所述当前块的第二颜色分量的参考值。
  6. 根据权利要求5所述的方法,其中,所述第二滤波处理为上采样滤波处理。
  7. 根据权利要求1所述的方法,其中,所述根据所述当前块的第一颜色分量的参考值,确定加权系数,包括:
    确定所述当前块中第一颜色分量采样点的重建值;
    根据所述当前块中第一颜色分量采样点的重建值和所述当前块的第一颜色分量的参考值,确定所述当前块的第一颜色分量的参考样值;
    根据所述当前块的第一颜色分量的参考样值,确定所述加权系数。
  8. 根据权利要求7所述的方法,其中,所述根据所述当前块中第一颜色分量采样点的重建值和所述当前块的第一颜色分量的参考值,确定所述当前块的第一颜色分量的参考样值,包括:
    确定所述当前块中第一颜色分量采样点的重建值和所述当前块的第一颜色分量的参考值的差值;
    根据所述差值,确定所述当前块的第一颜色分量的参考样值。
  9. 根据权利要求8所述的方法,其中,所述根据所述差值,确定所述当前块的第一颜色分量的参考样值,包括:
    将所述当前块的第一颜色分量的参考样值设置为等于所述差值的绝对值。
  10. 根据权利要求7所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定所述加权系数,包括:
    根据所述当前块的第一颜色分量的参考样值,确定权重索引值;
    根据所述权重索引值,使用第一预设映射关系确定所述加权系数。
  11. 根据权利要求10所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定权重索引值,包括:
    确定所述当前块的最大权重索引值和最小权重索引值;
    根据所述最大权重索引值和所述最小权重索引值对所述第一颜色分量的参考样值进行修正处理,确定所述权重索引值。
  12. 根据权利要求10所述的方法,其中,所述根据所述权重索引值,使用第一预设映射关系确定所述加权系数,包括:
    所述第一预设映射关系是所述权重索引值与所述加权系数的数值映射查找表。
  13. 根据权利要求10所述的方法,其中,所述根据所述权重索引值,使用第一预设映射关系确定所述加权系数,包括:
    确定在所述第一预设映射关系下所述权重索引值对应的第一取值;
    将所述加权系数设置为等于所述第一取值。
  14. 根据权利要求13所述的方法,其中,所述确定在所述第一预设映射关系下所述权重索引值对应的第一取值,包括:
    确定第一因子;
    根据所述权重索引值,使用第二预设映射关系确定第二取值;
    计算所述第一因子与所述第二取值的第一乘积值;
    将所述第一取值设置为等于所述第一乘积值在所述第一预设映射关系下对应的取值。
  15. 根据权利要求14所述的方法,其中,所述第二预设映射关系为基于n的指数函数关系;其中,n的取值等于所述权重索引值。
  16. 根据权利要求14所述的方法,其中,所述确定第一因子,包括:
    所述第一因子为预设常数值。
  17. 根据权利要求14所述的方法,其中,所述确定第一因子,包括:
    根据所述当前块的尺寸参数,确定所述第一因子的取值;
    其中,所述当前块的尺寸参数包括以下参数的至少之一:所述当前块的宽度,所述当前块的高度。
  18. 根据权利要求10所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定权重索引值,包括:
    确定第二因子;
    根据所述当前块的第一颜色分量的参考样值和所述第二因子,确定所述权重索引值。
  19. 根据权利要求18所述的方法,其中,所述确定第二因子,包括:
    所述第二因子是预设常数值。
  20. 根据权利要求18所述的方法,其中,所述确定第二因子,包括:
    根据所述当前块的尺寸参数,确定所述第二因子的取值;
    其中,所述当前块的尺寸参数包括以下参数的至少之一:所述当前块的宽度,所述当前块的高度。
  21. 根据权利要求18所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值和所述第二因子,确定所述权重索引值,包括:
    根据所述第一颜色分量的参考样值和所述第二因子,使用第三预设映射关系确定第三取值;
    确定所述当前块的最大权重索引值和最小权重索引值;
    根据所述最大权重索引值和所述最小权重索引值对所述第三取值进行修正处理,确定所述权重索引值。
  22. 根据权利要求21所述的方法,其中,所述根据所述第一颜色分量的参考样值和所述第二因子,使用第三预设映射关系确定第三取值,包括:
    确定至少一个移位数组;
    根据所述第二因子,从所述至少一个移位数组中确定目标偏移量;
    对所述第一颜色分量的参考样值进行所述目标偏移量的右移运算,确定所述第三取值。
  23. 根据权利要求18所述的方法,其中,所述根据所述权重索引值,使用第一预设映射关系确定所述加权系数,包括:
    根据所述第二因子和所述权重索引值,确定第二乘积值;
    确定所述第二乘积值在所述第一预设映射关系下对应的第四取值;
    将所述加权系数设置为等于所述第四取值。
  24. 根据权利要求23所述的方法,其中,所述确定所述第二乘积值在所述第一预设映射关系下对应的第四取值,包括:
    确定第一因子;
    根据所述第二乘积值,使用第二预设映射关系确定第五取值;
    计算所述第一因子与所述第五取值的第三乘积值;
    将所述第四取值设置为等于所述第三乘积值在所述第一预设映射关系下对应的取值。
  25. 根据权利要求1所述的方法,其中,所述根据所述当前块的第二颜色分量的参考均值、所述当 前块的第二颜色分量的参考样值以及对应的所述加权系数,确定所述当前块中第二颜色分量采样点的预测值,包括:
    确定所述第二颜色分量的参考样值与对应的所述加权系数的加权值;
    将所述当前块中待预测像素的加权和值设置为等于N个所述加权值之和,将所述当前块中待预测像素的系数和值设置为等于N个所述加权系数之和;其中,N表示所述第二颜色分量的参考样值的数量,N是正整数;
    根据所述加权和值和所述系数和值,使用第四预设映射关系确定第六取值;
    对所述第二颜色分量的参考均值与所述第六取值进行加法计算,得到所述当前块中待预测像素的第二颜色分量的预测值;
    根据所述当前块中待预测像素的第二颜色分量的预测值,确定所述当前块中第二颜色分量采样点的预测值。
  26. 根据权利要求25所述的方法,其中,所述待预测像素的第二颜色分量的预测值包含所述当前块中至少部分第二颜色分量采样点的预测值。
  27. 根据权利要求1所述的方法,其中,所述根据所述当前块的第二颜色分量的参考值,确定所述当前块的第二颜色分量的参考均值,包括:
    对N个所述当前块的第二颜色分量的参考值进行平均值计算,得到所述当前块的第二颜色分量的参考均值。
  28. 根据权利要求27所述的方法,其中,所述方法还包括:
    根据所述当前块的尺寸参数,确定块种类索引值;
    根据所述块种类索引值,使用第五预设映射关系确定N的取值。
  29. 根据权利要求28所述的方法,其中,所述第五预设映射关系表示所述块种类索引值与N的数值映射查找表。
  30. 根据权利要求1所述的方法,其中,所述根据所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值,确定所述当前块的第二颜色分量的参考样值,包括:
    对所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值进行减法计算,得到所述当前块的第二颜色分量的参考样值。
  31. 根据权利要求25所述的方法,其中,所述根据所述加权和值和所述系数和值,使用第四预设映射关系确定第六取值,包括:
    确定预设偏移量;
    根据所述系数和值,使用第六预设映射关系确定第一数值;以及根据所述系数和值与所述第一数值,使用第七预设映射关系确定数组索引值,并根据所述数组索引值与所述预设偏移量,使用第八预设映射关系确定第二数值;
    根据所述数组索引值是否等于零，由所述第一数值确定第三数值；以及根据所述第三数值与所述预设偏移量，确定第一偏移量；
    根据所述第二数值与所述加权和值确定第四乘积值,根据所述第三数值和所述预设偏移量确定预设加法值,对所述第四乘积值和所述预设加法值进行加法运算,得到目标和值;
    对所述目标和值进行所述第一偏移量的右移运算,确定所述第六取值。
  32. 根据权利要求31所述的方法,其中,所述根据所述系数和值,使用第六预设映射关系确定第一数值,包括:
    将所述第一数值设置为等于所述系数和值的二进制表示时所需二进制符号的位数减一。
  33. 根据权利要求31所述的方法,其中,所述根据所述系数和值与所述第一数值,使用第七预设映射关系确定数组索引值,包括:
    将所述系数和值与所述第一数值作为预设函数关系的输入,根据所述预设函数关系输出所述数组索引值。
  34. 根据权利要求31所述的方法,其中,所述根据所述数组索引值与所述预设偏移量,使用第八预设映射关系确定第二数值,包括:
    根据所述数组索引值,在数组映射表中确定索引指示值;
    根据所述索引指示值与所述预设偏移量,使用所述第八预设映射关系确定所述第二数值。
  35. 根据权利要求31所述的方法，其中，所述根据所述数组索引值是否等于零，由所述第一数值确定第三数值，包括：
    若所述数组索引值等于零，则将所述第三数值设置为等于所述第一数值；
    若所述数组索引值不等于零，则将所述第三数值设置为等于所述第一数值与一的和值。
  36. 根据权利要求25所述的方法,其中,所述方法还包括:
    对所述待预测像素的第二颜色分量的预测值进行修正操作,将修正后的预测值作为所述当前块中待预测像素的第二颜色分量的预测值。
  37. 根据权利要求25所述的方法,其中,所述根据所述当前块中待预测像素的第二颜色分量的预测值,确定所述当前块中第二颜色分量采样点的预测值,包括:
    对所述待预测像素的第二颜色分量的预测值进行滤波处理,确定所述当前块中第二颜色分量采样点的预测值。
  38. 根据权利要求1至33任一项所述的方法,其中,所述根据所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的重建值,包括:
    确定所述当前块中第二颜色分量采样点的预测差值;
    根据所述当前块中第二颜色分量采样点的预测差值和所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的重建值。
  39. 一种编码方法,包括:
    确定当前块的第一颜色分量的参考值和所述当前块的第二颜色分量的参考值;
    根据所述当前块的第一颜色分量的参考值,确定加权系数;
    根据所述当前块的第二颜色分量的参考值,确定所述当前块的第二颜色分量的参考均值;以及根据所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值,确定所述当前块的第二颜色分量的参考样值;
    根据所述当前块的第二颜色分量的参考均值、所述当前块的第二颜色分量的参考样值以及对应的所述加权系数,确定所述当前块中第二颜色分量采样点的预测值;
    根据所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的预测差值。
  40. 根据权利要求39所述的方法,其中,所述确定当前块的第一颜色分量的参考值和所述当前块的第二颜色分量的参考值,包括:
    根据所述当前块的相邻区域中的第一颜色分量采样点的取值,确定所述当前块的第一颜色分量的参考值;
    根据所述当前块的相邻区域中的第二颜色分量采样点的取值,确定所述当前块的第二颜色分量的参考值;
    其中,所述相邻区域包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
  41. 根据权利要求40所述的方法,其中,所述根据所述当前块的相邻区域中的第一颜色分量采样点的取值,确定所述当前块的第一颜色分量的参考值,包括:
    对所述当前块的相邻区域中的第一颜色分量采样点的取值进行第一滤波处理,确定所述当前块的第一颜色分量的参考值。
  42. 根据权利要求41所述的方法,其中,所述第一滤波处理为下采样滤波处理。
  43. 根据权利要求40所述的方法,其中,所述根据所述当前块的相邻区域中的第二颜色分量采样点的取值,确定所述当前块的第二颜色分量的参考值,包括:
    对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第二滤波处理,确定所述当前块的第二颜色分量的参考值。
  44. 根据权利要求43所述的方法,其中,所述第二滤波处理为上采样滤波处理。
  45. 根据权利要求39所述的方法,其中,所述根据所述当前块的第一颜色分量的参考值,确定加权系数,包括:
    确定所述当前块中第一颜色分量采样点的重建值;
    根据所述当前块中第一颜色分量采样点的重建值和所述当前块的第一颜色分量的参考值,确定所述当前块的第一颜色分量的参考样值;
    根据所述当前块的第一颜色分量的参考样值,确定所述加权系数。
  46. 根据权利要求45所述的方法,其中,所述根据所述当前块中第一颜色分量采样点的重建值和所述当前块的第一颜色分量的参考值,确定所述当前块的第一颜色分量的参考样值,包括:
    确定所述当前块中第一颜色分量采样点的重建值和所述当前块的第一颜色分量的参考值的差值;
    根据所述差值,确定所述当前块的第一颜色分量的参考样值。
  47. 根据权利要求46所述的方法,其中,所述根据所述差值,确定所述当前块的第一颜色分量的参考样值,包括:
    将所述当前块的第一颜色分量的参考样值设置为等于所述差值的绝对值。
  48. 根据权利要求45所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定所述加权系数,包括:
    根据所述当前块的第一颜色分量的参考样值,确定权重索引值;
    根据所述权重索引值,使用第一预设映射关系确定所述加权系数。
  49. 根据权利要求48所述的方法，其中，所述根据所述当前块的第一颜色分量的参考样值，确定权重索引值，包括：
    确定所述当前块的最大权重索引值和最小权重索引值;
    根据所述最大权重索引值和所述最小权重索引值对所述第一颜色分量的参考样值进行修正处理,确定所述权重索引值。
  50. 根据权利要求48所述的方法,其中,所述根据所述权重索引值,使用第一预设映射关系确定所述加权系数,包括:
    所述第一预设映射关系是所述权重索引值与所述加权系数的数值映射查找表。
  51. 根据权利要求48所述的方法,其中,所述根据所述权重索引值,使用第一预设映射关系确定所述加权系数,包括:
    确定在所述第一预设映射关系下所述权重索引值对应的第一取值;
    将所述加权系数设置为等于所述第一取值。
  52. 根据权利要求51所述的方法,其中,所述确定在所述第一预设映射关系下所述权重索引值对应的第一取值,包括:
    确定第一因子;
    根据所述权重索引值,使用第二预设映射关系确定第二取值;
    计算所述第一因子与所述第二取值的第一乘积值;
    将所述第一取值设置为等于所述第一乘积值在所述第一预设映射关系下对应的取值。
  53. 根据权利要求52所述的方法,其中,所述第二预设映射关系为基于n的指数函数关系;其中,n的取值等于所述权重索引值。
  54. 根据权利要求52所述的方法,其中,所述确定第一因子,包括:
    所述第一因子为预设常数值。
  55. 根据权利要求52所述的方法,其中,所述确定第一因子,包括:
    根据所述当前块的尺寸参数,确定所述第一因子的取值;
    其中,所述当前块的尺寸参数包括以下参数的至少之一:所述当前块的宽度,所述当前块的高度。
  56. 根据权利要求48所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定权重索引值,包括:
    确定第二因子;
    根据所述当前块的第一颜色分量的参考样值和所述第二因子,确定所述权重索引值。
  57. 根据权利要求56所述的方法,其中,所述确定第二因子,包括:
    所述第二因子是预设常数值。
  58. 根据权利要求56所述的方法,其中,所述确定第二因子,包括:
    根据所述当前块的尺寸参数,确定所述第二因子的取值;
    其中,所述当前块的尺寸参数包括以下参数的至少之一:所述当前块的宽度,所述当前块的高度。
  59. 根据权利要求56所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值和所述第二因子,确定所述权重索引值,包括:
    根据所述第一颜色分量的参考样值和所述第二因子,使用第三预设映射关系确定第三取值;
    确定所述当前块的最大权重索引值和最小权重索引值;
    根据所述最大权重索引值和所述最小权重索引值对所述第三取值进行修正处理,确定所述权重索引值。
  60. 根据权利要求59所述的方法,其中,所述根据所述第一颜色分量的参考样值和所述第二因子,使用第三预设映射关系确定第三取值,包括:
    确定至少一个移位数组;
    根据所述第二因子,从所述至少一个移位数组中确定目标偏移量;
    对所述第一颜色分量的参考样值进行所述目标偏移量的右移运算,确定所述第三取值。
  61. 根据权利要求56所述的方法,其中,所述根据所述权重索引值,使用第一预设映射关系确定所述加权系数,包括:
    根据所述第二因子和所述权重索引值,确定第二乘积值;
    确定所述第二乘积值在所述第一预设映射关系下对应的第四取值;
    将所述加权系数设置为等于所述第四取值。
  62. 根据权利要求61所述的方法,其中,所述确定所述第二乘积值在所述第一预设映射关系下对应的第四取值,包括:
    确定第一因子;
    根据所述第二乘积值,使用第二预设映射关系确定第五取值;
    计算所述第一因子与所述第五取值的第三乘积值;
    将所述第四取值设置为等于所述第三乘积值在所述第一预设映射关系下对应的取值。
  63. 根据权利要求39所述的方法,其中,所述根据所述当前块的第二颜色分量的参考均值、所述当前块的第二颜色分量的参考样值以及对应的所述加权系数,确定所述当前块中第二颜色分量采样点的预测值,包括:
    确定所述第二颜色分量的参考样值与对应的所述加权系数的加权值;
    将所述当前块中待预测像素的加权和值设置为等于N个所述加权值之和,将所述当前块中待预测像素的系数和值设置为等于N个所述加权系数之和;其中,N表示所述第二颜色分量的参考样值的数量,N是正整数;
    根据所述加权和值和所述系数和值,使用第四预设映射关系确定第六取值;
    对所述第二颜色分量的参考均值与所述第六取值进行加法计算,得到所述当前块中待预测像素的第二颜色分量的预测值;
    根据所述当前块中待预测像素的第二颜色分量的预测值,确定所述当前块中第二颜色分量采样点的预测值。
  64. 根据权利要求63所述的方法,其中,所述待预测像素的第二颜色分量的预测值包含所述当前块中至少部分第二颜色分量采样点的预测值。
  65. 根据权利要求39所述的方法,其中,所述根据所述当前块的第二颜色分量的参考值,确定所述当前块的第二颜色分量的参考均值,包括:
    对N个所述当前块的第二颜色分量的参考值进行平均值计算,得到所述当前块的第二颜色分量的参考均值。
  66. 根据权利要求65所述的方法,其中,所述方法还包括:
    根据所述当前块的尺寸参数,确定块种类索引值;
    根据所述块种类索引值,使用第五预设映射关系确定N的取值。
  67. 根据权利要求66所述的方法,其中,所述第五预设映射关系表示所述块种类索引值与N的数值映射查找表。
  68. 根据权利要求39所述的方法,其中,所述根据所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值,确定所述当前块的第二颜色分量的参考样值,包括:
    对所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值进行减法计算,得到所述当前块的第二颜色分量的参考样值。
  69. 根据权利要求63所述的方法,其中,所述根据所述加权和值和所述系数和值,使用第四预设映射关系确定第六取值,包括:
    确定预设偏移量;
    根据所述系数和值,使用第六预设映射关系确定第一数值;以及根据所述系数和值与所述第一数值,使用第七预设映射关系确定数组索引值,并根据所述数组索引值与所述预设偏移量,使用第八预设映射关系确定第二数值;
    根据所述数组索引值是否等于零，由所述第一数值确定第三数值；以及根据所述第三数值与所述预设偏移量，确定第一偏移量；
    根据所述第二数值与所述加权和值确定第四乘积值,根据所述第三数值和所述预设偏移量确定预设加法值,对所述第四乘积值和所述预设加法值进行加法运算,得到目标和值;
    对所述目标和值进行所述第一偏移量的右移运算,确定所述第六取值。
  70. 根据权利要求69所述的方法,其中,所述根据所述系数和值,使用第六预设映射关系确定第一数值,包括:
    将所述第一数值设置为等于所述系数和值的二进制表示时所需二进制符号的位数减一。
  71. 根据权利要求69所述的方法,其中,所述根据所述系数和值与所述第一数值,使用第七预设映射关系确定数组索引值,包括:
    将所述系数和值与所述第一数值作为预设函数关系的输入,根据所述预设函数关系输出所述数组索引值。
  72. 根据权利要求69所述的方法,其中,所述根据所述数组索引值与所述预设偏移量,使用第八预设映射关系确定第二数值,包括:
    根据所述数组索引值,在数组映射表中确定索引指示值;
    根据所述索引指示值与所述预设偏移量,使用所述第八预设映射关系确定所述第二数值。
  73. 根据权利要求69所述的方法，其中，所述根据所述数组索引值是否等于零，由所述第一数值确定第三数值，包括：
    若所述数组索引值等于零，则将所述第三数值设置为等于所述第一数值；
    若所述数组索引值不等于零，则将所述第三数值设置为等于所述第一数值与一的和值。
  74. 根据权利要求63所述的方法,其中,所述方法还包括:
    对所述待预测像素的第二颜色分量的预测值进行修正操作,将修正后的预测值作为所述当前块中待预测像素的第二颜色分量的预测值。
  75. 根据权利要求63所述的方法,其中,所述根据所述当前块中待预测像素的第二颜色分量的预测值,确定所述当前块中第二颜色分量采样点的预测值,包括:
    对所述待预测像素的第二颜色分量的预测值进行滤波处理,确定所述当前块中第二颜色分量采样点的预测值。
  76. 根据权利要求39至75任一项所述的方法,其中,所述根据所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的预测差值,包括:
    获取所述当前块中第二颜色分量采样点的原始值;
    根据所述当前块中第二颜色分量采样点的原始值和所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的预测差值。
  77. 根据权利要求76所述的方法,其中,所述方法还包括:
    对所述当前块中第二颜色分量采样点的预测差值进行编码,将所得到的编码比特写入码流。
  78. 一种编码装置,包括第一确定单元、第一计算单元和第一预测单元;其中,
    所述第一确定单元,配置为确定当前块的第一颜色分量的参考值和所述当前块的第二颜色分量的参考值;以及根据所述当前块的第一颜色分量的参考值,确定加权系数;
    所述第一计算单元,配置为根据所述当前块的第二颜色分量的参考值,确定所述当前块的第二颜色分量的参考均值;以及根据所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值,确定所述当前块的第二颜色分量的参考样值;
    所述第一预测单元,配置为根据所述当前块的第二颜色分量的参考均值、所述当前块的第二颜色分量的参考样值以及对应的所述加权系数,确定所述当前块中第二颜色分量采样点的预测值;
    所述第一确定单元,还配置为根据所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的预测差值。
  79. 一种编码设备,包括第一存储器和第一处理器;其中,
    所述第一存储器,用于存储能够在所述第一处理器上运行的计算机程序;
    所述第一处理器,用于在运行所述计算机程序时,执行如权利要求39至77任一项所述的方法。
  80. 一种解码装置,包括第二确定单元、第二计算单元和第二预测单元;其中,
    所述第二确定单元,配置为确定当前块的第一颜色分量的参考值和所述当前块的第二颜色分量的参考值;以及根据所述当前块的第一颜色分量的参考值,确定加权系数;
    所述第二计算单元,配置为根据所述当前块的第二颜色分量的参考值,确定所述当前块的第二颜色分量的参考均值;以及根据所述当前块的第二颜色分量的参考值与所述当前块的第二颜色分量的参考均值,确定所述当前块的第二颜色分量的参考样值;
    所述第二预测单元,配置为根据所述当前块的第二颜色分量的参考均值、所述当前块的第二颜色分量的参考样值以及对应的所述加权系数,确定所述当前块中第二颜色分量采样点的预测值;
    所述第二确定单元,还配置为根据所述当前块中第二颜色分量采样点的预测值,确定所述当前块中第二颜色分量采样点的重建值。
  81. 一种解码设备,包括第二存储器和第二处理器;其中,
    所述第二存储器,用于存储能够在所述第二处理器上运行的计算机程序;
    所述第二处理器,用于在运行所述计算机程序时,执行如权利要求1至38任一项所述的方法。
  82. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序被执行时实现如权利要求1至38任一项所述的方法、或者如权利要求39至77任一项所述的方法。
PCT/CN2022/103963 2022-07-05 2022-07-05 编解码方法、装置、编码设备、解码设备以及存储介质 WO2024007165A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/103963 WO2024007165A1 (zh) 2022-07-05 2022-07-05 编解码方法、装置、编码设备、解码设备以及存储介质
TW112124424A TW202404358A (zh) 2022-07-05 2023-06-29 編解碼方法、裝置、編碼設備、解碼設備以及儲存媒介

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/103963 WO2024007165A1 (zh) 2022-07-05 2022-07-05 编解码方法、装置、编码设备、解码设备以及存储介质

Publications (1)

Publication Number Publication Date
WO2024007165A1 true WO2024007165A1 (zh) 2024-01-11

Family

ID=89454711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103963 WO2024007165A1 (zh) 2022-07-05 2022-07-05 编解码方法、装置、编码设备、解码设备以及存储介质

Country Status (2)

Country Link
TW (1) TW202404358A (zh)
WO (1) WO2024007165A1 (zh)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113261286A (zh) * 2018-12-28 2021-08-13 韩国电子通信研究院 用于推导帧内预测模式的方法和设备
CN113711610A (zh) * 2019-04-23 2021-11-26 北京字节跳动网络技术有限公司 降低跨分量依赖性的方法
CN113747176A (zh) * 2020-05-29 2021-12-03 Oppo广东移动通信有限公司 图像编码方法、图像解码方法及相关装置
WO2021251787A1 (ko) * 2020-06-11 2021-12-16 현대자동차주식회사 루마 매핑 크로마 스케일링을 이용하는 영상 부호화 및 복호화

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J.-Y. HUO, H.-Q. DU, Z.-Y. ZHANG, Y.-Z. MA, F.-Z. YANG (XIDIAN UNIV.), M. LI (OPPO), Y. LIU (OPPO): "Non-EE2: Weighted Chroma Prediction (WCP)", 26. JVET MEETING; 20220420 - 20220429; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 13 April 2022 (2022-04-13), XP030300916 *

Also Published As

Publication number Publication date
TW202404358A (zh) 2024-01-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22949753

Country of ref document: EP

Kind code of ref document: A1