WO2023197194A1 - Encoding and decoding method, apparatus, encoding device, decoding device, and storage medium - Google Patents

Encoding and decoding method, apparatus, encoding device, decoding device, and storage medium

Info

Publication number
WO2023197194A1
Authority
WO
WIPO (PCT)
Prior art keywords
color component
current block
value
block
prediction block
Prior art date
Application number
PCT/CN2022/086471
Other languages
English (en)
French (fr)
Inventor
霍俊彦
马彦卓
杨付正
杜红青
李明
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to PCT/CN2022/086471 priority Critical patent/WO2023197194A1/zh
Publication of WO2023197194A1 publication Critical patent/WO2023197194A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction

Definitions

  • the embodiments of the present application relate to the field of video coding and decoding technology, and in particular, to a coding and decoding method, device, encoding device, decoding device, and storage medium.
  • JVET Joint Video Exploration Team
  • H.266/VVC includes inter-color component prediction technology.
  • However, there is a gap between the predicted value obtained by the inter-color component prediction technology of H.266/VVC and the original value, which results in low prediction accuracy, a decrease in the quality of the decoded video, and reduced encoding performance.
  • Embodiments of the present application provide a coding and decoding method, device, coding equipment, decoding equipment and storage medium, which can not only improve the accuracy of chroma prediction and save code rate, but also improve coding and decoding performance.
  • embodiments of the present application provide a decoding method, including: determining the reference sample value of the first color component of the current block; determining the weighting coefficient according to the reference sample value of the first color component of the current block; determining the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block; and determining the reconstructed value of the second color component sampling point of the current block according to the second prediction block.
  • embodiments of the present application provide an encoding method, including: determining the reference sample value of the first color component of the current block; determining the weighting coefficient according to the reference sample value of the first color component of the current block; determining the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block; and determining the prediction difference value of the second color component sampling point of the current block according to the second prediction block.
  • embodiments of the present application provide a coding device, including a first determination unit, a first prediction unit and a first filtering unit; wherein,
  • the first determination unit is configured to determine the reference sample value of the first color component of the current block; and determine the weighting coefficient according to the reference sample value of the first color component of the current block;
  • the first prediction unit is configured to determine the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block; wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
  • a first filtering unit configured to perform a first filtering process on the first prediction block and determine a second prediction block of the second color component of the current block
  • the first determination unit is further configured to determine the prediction difference value of the second color component sampling point of the current block according to the second prediction block.
  • embodiments of the present application provide an encoding device, including a first memory and a first processor; wherein,
  • a first memory for storing a computer program capable of running on the first processor
  • the first processor is configured to execute the method described in the second aspect when running the computer program.
  • embodiments of the present application provide a decoding device, including a second determination unit, a second prediction unit, and a second filtering unit; wherein,
  • the second determination unit is configured to determine the reference sample value of the first color component of the current block; and determine the weighting coefficient according to the reference sample value of the first color component of the current block;
  • the second prediction unit is configured to determine the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block; wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
  • a second filtering unit configured to perform a first filtering process on the first prediction block and determine a second prediction block of the second color component of the current block
  • the second determination unit is further configured to determine the reconstruction value of the second color component sampling point of the current block according to the second prediction block.
  • embodiments of the present application provide a decoding device including a second memory and a second processor; wherein,
  • a second memory for storing a computer program capable of running on the second processor
  • the second processor is configured to execute the method described in the first aspect when running the computer program.
  • embodiments of the present application provide a computer-readable storage medium that stores a computer program.
  • when the computer program is executed, the method described in the first aspect is implemented, or the method described in the second aspect is implemented.
  • Embodiments of the present application provide a coding and decoding method, apparatus, encoding device, decoding device and storage medium. Whether at the encoding end or the decoding end, the reference sample value of the first color component of the current block is determined; the weighting coefficient is determined according to the reference sample value of the first color component of the current block; the first prediction block of the second color component of the current block is determined according to the weighting coefficient and the reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; and a first filtering process is performed on the first prediction block to determine a second prediction block of the second color component of the current block.
  • In this way, at the encoding end, the prediction difference of the second color component sampling point of the current block can be determined based on the second prediction block; and at the decoding end, the reconstructed value of the second color component sampling point of the current block can be determined based on the second prediction block.
  • Thus, using the adjacent reference pixels of the current block and the color component information within the current block not only fully considers the existing color component information, but also enables a more accurate nonlinear mapping model to be established for each image block without losing the luma information.
  • Specifically, the reference sample values of the chroma component are assigned weights for weighted prediction; and for the first filtering process, different color format information is fully considered, and chroma and/or luma sample filtering is performed based on the color format information, which can always ensure that the spatial resolutions of the chroma component and the luma component are consistent. This can not only ensure the accuracy of the existing luma information, but also, when the unlost luma information is used to predict the chroma component, improve the accuracy of chroma prediction, save bit rate, and improve encoding and decoding performance.
  • Figure 1 is a schematic diagram of the distribution of effective adjacent areas
  • Figure 2 is a schematic diagram of the distribution of selected areas under different prediction modes
  • Figure 3 is a schematic flow chart of a model parameter derivation scheme
  • Figure 4A is a schematic block diagram of an encoder provided by an embodiment of the present application.
  • Figure 4B is a schematic block diagram of a decoder provided by an embodiment of the present application.
  • Figure 5 is a schematic flowchart 1 of a decoding method provided by an embodiment of the present application.
  • Figure 6A is a schematic diagram of a reference area of a current block provided by an embodiment of the present application.
  • Figure 6B is a schematic diagram of upsampling interpolation of reference chromaticity information provided by an embodiment of the present application.
  • Figure 7 is a schematic diagram of weighted prediction of a WCP mode and other prediction modes provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of a weight-based chroma prediction framework provided by an embodiment of the present application.
  • Figure 9 is a schematic flow chart 2 of a decoding method provided by an embodiment of the present application.
  • Figure 10 is a schematic flowchart 3 of a decoding method provided by an embodiment of the present application.
  • Figure 11 is a schematic flow chart 4 of a decoding method provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram 1 of an upsampling interpolation processing process provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram 2 of an upsampling interpolation processing process provided by an embodiment of the present application.
  • Figure 14 is a schematic diagram of weight values for upsampling interpolation provided by an embodiment of the present application.
  • Figure 15 is a schematic diagram 3 of an upsampling interpolation processing process provided by an embodiment of the present application.
  • Figure 16 is a schematic flow chart 5 of a decoding method provided by an embodiment of the present application.
  • Figure 17 is a schematic flow chart 6 of a decoding method provided by an embodiment of the present application.
  • Figure 18 is a schematic flow chart 1 of an encoding method provided by an embodiment of the present application.
  • Figure 19 is a schematic flow chart 2 of an encoding method provided by an embodiment of the present application.
  • Figure 20 is a schematic structural diagram of an encoding device provided by an embodiment of the present application.
  • Figure 21 is a schematic diagram of the specific hardware structure of an encoding device provided by an embodiment of the present application.
  • Figure 22 is a schematic structural diagram of a decoding device provided by an embodiment of the present application.
  • Figure 23 is a schematic diagram of the specific hardware structure of a decoding device provided by an embodiment of the present application.
  • Figure 24 is a schematic structural diagram of a coding and decoding system provided by an embodiment of the present application.
  • the first color component, the second color component and the third color component are generally used to represent the coding block (CB); these three color components are a luma component, a blue chroma component and a red chroma component, respectively.
  • the brightness component is usually represented by the symbol Y
  • the blue chroma component is usually represented by the symbol Cb or U
  • the red chroma component is usually represented by the symbol Cr or V; in this way, the video image can be represented in YCbCr format or YUV format; in addition, the video image may also be in RGB format, YCgCo format, etc., and the embodiments of this application do not impose any limitations.
  • the cross-component prediction technology mainly includes the cross-component linear model (Cross-component Linear Model, CCLM) prediction mode and the multi-directional linear model (Multi-Directional Linear Model, MDLM) prediction mode. Whether the model parameters are derived according to the CCLM prediction mode or the MDLM prediction mode, the corresponding prediction model can realize prediction between color components, such as from the first color component to the second color component, from the first color component to the third color component, from the second color component to the first color component, from the second color component to the third color component, from the third color component to the first color component, or from the third color component to the second color component, etc.
  • the first color component is the brightness component and the second color component is the chrominance component.
  • The prediction model can be expressed as Pred_C(i, j) = α · Rec_L(i, j) + β, where (i, j) represents the position coordinates of the pixel to be predicted in the coding block, i represents the horizontal direction, and j represents the vertical direction; Pred_C(i, j) represents the chroma prediction value corresponding to the pixel to be predicted at position coordinates (i, j) in the coding block, and Rec_L(i, j) represents the (downsampled) luma reconstruction value corresponding to the pixel to be predicted at position coordinates (i, j) in the same coding block.
  • α and β represent model parameters, which can be derived from reference pixels.
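  • For illustration only, the following sketch (not part of the patent text) applies the CCLM linear model described above to a downsampled luma reconstruction block; the function and array names, and the default bit depth, are assumptions made for this example.

```python
import numpy as np

def cclm_predict(rec_l_down: np.ndarray, alpha: float, beta: float,
                 bit_depth: int = 10) -> np.ndarray:
    """Apply the CCLM linear model Pred_C(i, j) = alpha * Rec_L(i, j) + beta.

    rec_l_down : downsampled luma reconstruction aligned with the chroma grid.
    alpha, beta: model parameters derived from neighboring reference pixels.
    """
    pred_c = alpha * rec_l_down.astype(np.float64) + beta
    # Clip to the valid sample range for the given bit depth.
    return np.clip(np.rint(pred_c), 0, (1 << bit_depth) - 1).astype(np.int32)
```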
  • H.266/VVC includes three cross-component linear model prediction modes, namely: the intra CCLM prediction mode using the left and upper adjacent areas (which can be represented by INTRA_LT_CCLM), the intra CCLM prediction mode using the left and lower-left adjacent areas (which can be represented by INTRA_L_CCLM), and the intra CCLM prediction mode using the upper and upper-right adjacent areas (which can be represented by INTRA_T_CCLM).
  • Each prediction mode can select a preset number (such as 4) of reference pixels for the derivation of the model parameters α and β. The biggest difference between these three prediction modes is that the selection areas of the reference pixels used to derive the model parameters α and β are different.
  • Assuming that the coding block size corresponding to the chroma component is W × H, the upper selection area corresponding to the reference pixels is W', and the left selection area corresponding to the reference pixels is H'. Although the selection areas of the INTRA_L_CCLM mode and the INTRA_T_CCLM mode are both defined as W + H, in actual applications the selection area of the INTRA_L_CCLM mode is limited to H + H, and the selection area of the INTRA_T_CCLM mode is limited to W + W.
  • Figure 1 shows a schematic distribution diagram of effective adjacent areas.
  • In Figure 1, the left adjacent area, the lower-left adjacent area, the upper adjacent area and the upper-right adjacent area are all valid; in addition, the gray-filled block is the pixel to be predicted at position coordinates (i, j).
  • Figure 2 shows the selection areas of the three prediction modes: (a) shows the selection area of the INTRA_LT_CCLM mode, including the left adjacent area and the upper adjacent area; (b) shows the selection area of the INTRA_L_CCLM mode, including the left adjacent area and the lower-left adjacent area; (c) shows the selection area of the INTRA_T_CCLM mode, including the upper adjacent area and the upper-right adjacent area.
  • pixel selection for model parameter derivation can be performed within the selection areas.
  • the pixels selected in this way can be called reference pixels, and usually the number of reference pixels is four; for a W × H coding block of a certain size, the positions of the reference pixels are generally fixed.
  • the chromaticity prediction is currently performed according to the flow diagram of the model parameter derivation scheme shown in Figure 3.
  • the process can include:
  • In VVC, whether the number of valid reference pixels is 0 is judged based on the validity of the adjacent areas.
  • the prediction model is constructed using the principle of "two points determine a straight line", and the two points here can be called fitting points.
  • Specifically, the two reference pixels with larger luma values and the two reference pixels with smaller luma values are obtained through comparison; then one mean point (which can be expressed as mean_max) is calculated from the two reference pixels with larger values, and another mean point (which can be expressed as mean_min) is calculated from the two reference pixels with smaller values, giving the two mean points mean_max and mean_min. Then, using mean_max and mean_min as the two fitting points, the model parameters (represented by α and β) can be derived; finally, a prediction model is constructed based on α and β, and the prediction processing of the chroma component is performed based on the prediction model.
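  • The following minimal sketch illustrates the "two points determine a straight line" derivation described above, assuming four (luma, chroma) reference pixel pairs have already been selected; floating-point arithmetic is used for clarity, whereas the normative derivation uses integer operations that are not reproduced here.

```python
def derive_cclm_params(ref_luma, ref_chroma):
    """Derive (alpha, beta) by fitting a line through mean_max and mean_min,
    the mean points of the two larger-luma and two smaller-luma reference pairs."""
    pairs = sorted(zip(ref_luma, ref_chroma), key=lambda p: p[0])
    lo, hi = pairs[:2], pairs[2:]                      # two smaller / two larger luma pairs
    mean_min = (sum(p[0] for p in lo) / 2.0, sum(p[1] for p in lo) / 2.0)
    mean_max = (sum(p[0] for p in hi) / 2.0, sum(p[1] for p in hi) / 2.0)
    if mean_max[0] == mean_min[0]:                     # degenerate case: flat luma
        return 0.0, mean_min[1]
    alpha = (mean_max[1] - mean_min[1]) / (mean_max[0] - mean_min[0])
    beta = mean_min[1] - alpha * mean_min[0]
    return alpha, beta
```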
  • However, coding blocks with different content characteristics all use this simple linear model to map luma to chroma; the luma-to-chroma mapping function is not the same in every coding block, and it cannot always be accurately fitted by this simple linear model, which results in inaccurate prediction results for some coding blocks.
  • Moreover, pixels at different positions in the coding block all use the same model parameters α and β.
  • To this end, embodiments of the present application provide an encoding method: determining the reference sample value of the first color component of the current block; determining the weighting coefficient according to the reference sample value of the first color component of the current block; determining the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing a first filtering process on the first prediction block to determine the second prediction block of the second color component of the current block; and determining the prediction difference of the second color component sampling points of the current block based on the second prediction block.
  • The embodiments of the present application also provide a decoding method: determining the reference sample value of the first color component of the current block; determining the weighting coefficient according to the reference sample value of the first color component of the current block; determining the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing a first filtering process on the first prediction block to determine the second prediction block of the second color component of the current block; and determining the reconstructed value of the second color component sampling points of the current block based on the second prediction block.
  • In this way, the reference sample values of the chroma component are assigned weights for weighted prediction; and for the first filtering process, different color format information is fully considered, and chroma and/or luma sample filtering is performed based on the color format information, which can always ensure that the spatial resolutions of the chroma component and the luma component are consistent. This can not only ensure the accuracy of the existing luma information, but also, when the unlost luma information is used to predict the chroma component, help improve the accuracy and stability of the nonlinear mapping model based on accurate luma information, thereby improving the accuracy of chroma prediction, saving bit rate, improving encoding and decoding efficiency, and thus improving encoding and decoding performance.
  • As shown in Figure 4A, the encoder 100 may include a transformation and quantization unit 101, an intra estimation unit 102, an intra prediction unit 103, a motion compensation unit 104, a motion estimation unit 105, an inverse transform and inverse quantization unit 106, a filter control analysis unit 107, a filtering unit 108, an encoding unit 109, a decoded image cache unit 110, and the like.
  • the filtering unit 108 can implement deblocking filtering and sample adaptive offset (Sample Adaptive Offset, SAO) filtering
  • the encoding unit 109 can implement header information encoding and context-based adaptive binary arithmetic coding (Context-based Adaptive Binary Arithmetic Coding, CABAC).
  • For the input original video signal, a video coding block can be obtained by dividing the coding tree unit (Coding Tree Unit, CTU); then the residual pixel information obtained after intra-frame or inter-frame prediction is processed by the transformation and quantization unit 101 to transform the video coding block, including transforming the residual information from the pixel domain to the transform domain and quantizing the resulting transform coefficients to further reduce the bit rate;
  • the intra estimation unit 102 and the intra prediction unit 103 are used to perform intra prediction on the video coding block; specifically, the intra estimation unit 102 and the intra prediction unit 103 are used to determine the intra prediction mode to be used to encode the video coding block;
  • The motion compensation unit 104 and the motion estimation unit 105 are used to perform inter-frame prediction encoding of the received video coding block with respect to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 105 is the process of generating a motion vector, which can estimate the motion of the video coding block, and then the motion compensation unit 104 performs motion compensation based on the motion vector determined by the motion estimation unit 105.
  • After determining the intra prediction mode, the intra prediction unit 103 is also used to provide the selected intra prediction data to the encoding unit 109, and the motion estimation unit 105 also sends the calculated motion vector data to the encoding unit 109. In addition, the inverse transform and inverse quantization unit 106 is used for reconstruction of the video coding block: the residual block is reconstructed in the pixel domain, blocking artifacts of the reconstructed residual block are removed through the filter control analysis unit 107 and the filtering unit 108, and the reconstructed residual block is then added to a predictive block in the frame of the decoded image cache unit 110 to generate a reconstructed video coding block. The encoding unit 109 is used to encode various encoding parameters and quantized transform coefficients.
  • The context content can be based on adjacent coding blocks, and can be used to encode information indicating the determined intra prediction mode and to output the code stream of the video signal; the decoded image cache unit 110 is used to store reconstructed video coding blocks for prediction reference. As video image encoding proceeds, new reconstructed video coding blocks are continuously generated, and these reconstructed video coding blocks are stored in the decoded image cache unit 110.
  • the decoder 200 includes a decoding unit 201, an inverse transform and inverse quantization unit 202, an intra prediction unit 203, a motion compensation unit 204, a filtering unit 205, a decoded image cache unit 206, etc., wherein the decoding unit 201 can implement header information decoding and CABAC decoding, and the filtering unit 205 can implement deblocking filtering and SAO filtering.
  • After the code stream of the video signal is output by the encoder, the code stream is input into the decoder 200 and first passes through the decoding unit 201 to obtain the decoded transform coefficients; the transform coefficients are processed by the inverse transform and inverse quantization unit 202 to generate a residual block in the pixel domain; the intra prediction unit 203 may be operable to generate prediction data for the current video decoding block based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture; the motion compensation unit 204 determines prediction information for the video decoding block by parsing motion vectors and other associated syntax elements, and uses the prediction information to generate the predictive block for the video decoding block being decoded.
  • A decoded video block is formed by summing the residual block from the inverse transform and inverse quantization unit 202 and the corresponding predictive block produced by the intra prediction unit 203 or the motion compensation unit 204; the filtering unit 205 then removes blocking artifacts from the decoded video signal, which can improve video quality; the decoded video blocks are then stored in the decoded image cache unit 206, which stores reference images for subsequent intra prediction or motion compensation and is also used for the output of the video signal, that is, the restored original video signal is obtained.
  • The method in the embodiments of the present application is mainly applied to the intra prediction unit 103 part shown in Figure 4A and the intra prediction unit 203 part shown in Figure 4B. That is to say, the embodiments of the present application can be applied to an encoder, to a decoder, or even to both an encoder and a decoder at the same time; the embodiments of the present application are not specifically limited in this regard.
  • When applied to the intra prediction unit 103 part, the "current block" specifically refers to the coding block currently to be intra-predicted; when applied to the intra prediction unit 203 part, the "current block" specifically refers to the decoding block currently to be intra-predicted.
  • FIG. 5 shows a schematic flowchart 1 of a decoding method provided by an embodiment of the present application.
  • the method may include:
  • S501 Determine the reference sample value of the first color component of the current block.
  • The decoding method in the embodiment of the present application is applied to a decoding apparatus, or a decoding device integrated with the decoding apparatus (which may also be referred to as a "decoder" for short).
  • the decoding method in the embodiment of the present application may specifically refer to an intra-frame prediction method, and more specifically, a weight-based chroma prediction (Weight-based Chroma Prediction, WCP) method.
  • WCP Weight-based Chroma Prediction
  • the video image may be divided into multiple decoding blocks, and each decoding block may include a first color component, a second color component, and a third color component
  • The current block here refers to the decoding block in the video image that is currently to be intra-predicted. Assuming that the current block performs prediction of the first color component, and the first color component is a luma component, that is, the component to be predicted is the luma component, then the current block may also be called a luma prediction block; alternatively, assuming that the current block performs prediction of the second color component, and the second color component is a chroma component, that is, the component to be predicted is the chroma component, then the current block may also be called a chroma prediction block.
  • In addition, the reference information of the current block may include the values of the first color component sampling points in the adjacent area of the current block and the values of the second color component sampling points in the adjacent area of the current block, which can be determined based on the decoded pixels in the adjacent area of the current block.
  • the adjacent area of the current block may include at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • the upper adjacent area and the upper right adjacent area can be regarded as the upper area
  • the left adjacent area and the lower left adjacent area can be regarded as the left area
  • the adjacent area can also include the upper-left area; see Figure 6A for details.
  • The upper area, the left area and the upper-left area of the current block, as adjacent areas, can be called the reference area of the current block, and the pixels in the reference area are decoded reference pixels.
  • the adjacent area of the current block may include multiple rows or columns adjacent to the current block.
  • the left area may include one or more columns
  • the upper area may include one or more rows. Even if the number of rows or columns increases or decreases, the embodiment of the present application does not impose any limitation.
  • determining the reference sample value (Sample) of the first color component of the current block may include: determining the reference sample value of the first color component of the current block based on the values of the first color component sampling points in the adjacent area of the current block.
  • The reference pixels of the current block may refer to the reference pixel points adjacent to the current block, and may also be called the first color component sampling points and the second color component sampling points in the adjacent area of the current block, represented by Neighboring Sample or Reference Sample.
  • the adjacency here may be spatial adjacency, but is not limited to this.
  • The adjacency can also be temporal adjacency, or spatial and temporal adjacency; the reference pixel of the current block can even be obtained after some processing of a reference pixel point that is spatially adjacent, temporally adjacent, or spatially and temporally adjacent, and the embodiments of this application do not impose any limitation on this.
  • the first color component is the brightness component
  • the second color component is the chrominance component
  • the value of the first color component sampling point is determined from the adjacent area of the current block.
  • The adjacent area here may include only the upper adjacent area, or only the left adjacent area, or may include the upper adjacent area and the upper-right adjacent area, or the left adjacent area and the lower-left adjacent area, or the upper adjacent area and the left adjacent area, or may even include the upper adjacent area, the upper-right adjacent area, the left adjacent area, etc., which is not limited in this embodiment of the present application.
  • the adjacent area may also be determined based on the prediction mode of the current block. In a specific embodiment, it may include:
  • if the prediction mode of the current block is the horizontal mode, the reference pixel is determined based on the pixels in the upper adjacent area and/or the upper-right adjacent area;
  • if the prediction mode of the current block is the vertical mode, the reference pixel is determined based on the pixels in the left adjacent area and/or the lower-left adjacent area.
  • That is to say, if the prediction mode of the current block is the horizontal mode, the adjacent areas used in the chroma component prediction can only select the upper adjacent area and/or the upper-right adjacent area; if the prediction mode of the current block is the vertical mode, then only the left adjacent area and/or the lower-left adjacent area can be selected for the adjacent area in the chroma component prediction.
  • the method may also include: filtering the first color component sampling points in the adjacent area to determine the values of the first color component sampling points.
  • Specifically, a first sampling point set is formed from the first color component sampling points in the adjacent area of the current block; then the first sampling point set can be filtered to determine the values of the first color component sampling points.
  • filtering the first color component sampling points in adjacent areas and determining the value of the first color component sampling point may include:
  • the value of the first color component sampling point is determined from the adjacent area.
  • the color component intensity can be represented by color component information, such as reference luma information, reference chroma information, etc.; here, the larger the value of the color component information, the higher the color component intensity.
  • In this way, the first color component sampling points in the adjacent area can be filtered according to the positions of the sampling points or the color component intensities, so as to determine the valid first color component sampling points obtained by the filtering, and then determine the values of the first color component sampling points.
  • determining the reference sample value of the first color component of the current block may further include:
  • a reference sample of the first color component of the current block is determined based on filtered neighboring samples of the first color component of the current block.
  • the number of filtered adjacent samples of the first color component of the current block is greater than the number of values of the first color component sampling points.
  • the second filtering process may be an upsampling filtering process.
  • the first color component is a brightness component.
  • the reference brightness information may remain unchanged, or the reference brightness information may be subjected to upsampling filtering. For example, if the size of the current block is 2M ⁇ 2N and the reference brightness information is 2M+2N, it can be transformed to 4M+4N after upsampling filtering.
  • determining the reference sample value of the first color component of the current block may further include: determining the reference sample value of the first color component of the current block based on the reconstructed values of the first reference color component sampling points in the current block.
  • the first reference color component may be a brightness component; then, the reconstructed value of the first reference color component sampling point in the current block is the reconstructed brightness information of the current block.
  • determining the reference sample value of the first color component of the current block may also include:
  • the reference sample value of the first color component of the current block is determined according to the filtered sample value of the first reference color component sampling point in the current block.
  • the number of filtered samples of the first reference color component sampling point in the current block is greater than the number of reconstructed values of the first reference color component sampling point in the current block.
  • the third filtering process may be an upsampling filtering process.
  • the first reference color component is the brightness component.
  • That is to say, the reconstructed brightness information in the current block can be kept unchanged, or the reconstructed brightness information in the current block can be subjected to upsampling filtering. For example, if the amount of reconstructed brightness information in the current block is 2M × 2N, it can be transformed to 4M × 4N after upsampling filtering.
  • For the filtering process of the reference information before the weighted prediction, it may be that only the reference brightness information is filtered, or only the reconstructed brightness information is filtered, or both the reference brightness information and the reconstructed brightness information are filtered; there is no limitation here.
  • The calculation of the brightness difference can be the absolute value of the difference between the reference brightness information and the reconstructed brightness information, or the absolute value of the difference between the filtered reference brightness information and the reconstructed brightness information, or the absolute value of the difference between the reference brightness information and the filtered reconstructed brightness information, or even the absolute value of the difference between the filtered reference brightness information and the filtered reconstructed brightness information.
  • the reference sample value of the first color component of the current block can also be set as brightness difference information. Therefore, in a possible implementation, the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the reconstructed value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the reconstructed value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the filtered sample value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the filtered sample value of the first reference color component sampling point.
  • That is to say, the reference brightness information in the adjacent area of the current block can be upsampled and filtered, the reconstructed brightness information in the current block can also be upsampled and filtered, both can be upsampled and filtered, or even neither can be upsampled and filtered; the brightness difference information is then determined according to the different combinations, and the brightness difference information is used as the reference sample value of the first color component of the current block.
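  • As an illustration of the brightness difference information described above, the following sketch computes, for one pixel to be predicted, the absolute differences between its reconstructed luma value and the (optionally filtered) reference luma values in the adjacent area; the function and variable names are illustrative assumptions, not notation from the patent.

```python
import numpy as np

def luma_difference(rec_luma_ij: int, ref_luma: np.ndarray) -> np.ndarray:
    """Absolute luma differences |Rec_L(i, j) - RefL_k| used as the reference
    sample value of the first color component for the pixel at (i, j).

    rec_luma_ij : reconstructed (possibly upsampled) luma value inside the current block.
    ref_luma    : reference luma values from the adjacent area (possibly upsampled).
    """
    return np.abs(ref_luma.astype(np.int32) - int(rec_luma_ij))
```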
  • S502 Determine the weighting coefficient according to the reference sample value of the first color component of the current block.
  • determining the weighting coefficient based on the reference sample of the first color component of the current block may include:
  • Here, the reference sample value of the first color component may be the absolute value of the difference between the filtered adjacent sample value of the first color component of the current block and the filtered sample value of the first reference color component sampling point in the current block. The first color component is a color component different from the second color component to be predicted in this embodiment of the present application; the embodiment of the present application mainly predicts the chroma component of the pixels to be predicted in the current block.
  • First, at least one pixel to be predicted in the current block is selected, and the difference between its reconstructed luma and each reference luma in the adjacent area is calculated; the reference pixel position with the smallest difference changes as the pixel to be predicted in the current block changes, and the size of the difference represents the degree of similarity between the reference pixel and the pixel to be predicted.
  • Assuming the number of reference pixels is N, the number of weighting coefficients is also N; the sum of the N weighting coefficients is equal to 1, and each weighting coefficient is a value greater than or equal to 0 and less than or equal to 1, that is, 0 ≤ w_k ≤ 1.
  • In another possible implementation, it is not limited that the sum of the N weighting coefficients is equal to 1, and the absolute value of a weighting coefficient can be greater than 1.
  • The normalized exponential function (Softmax) is a generalization of the logistic function. It can "compress" an N-dimensional vector z containing arbitrary real numbers into another N-dimensional vector σ(z), so that each element lies in the range (0, 1) and the sum of all elements is 1; it is often used as a nonlinear activation function for multi-class neural networks.
  • The Softmax function is defined as σ(z)_k = e^(z_k) / (e^(z_1) + e^(z_2) + ... + e^(z_N)), for k = 1, ..., N.
  • The Softmax function can satisfy the conditional constraints on w_k, but its value increases as the vector element increases, which does not conform to the approximately inverse relationship between w_k and the reference sample value of the first color component (the luma difference).
  • Therefore, the input to the Softmax function can be limited to a negative number; that is to say, the first reference color component parameter can be multiplied by a first factor that is less than zero before the mapping is applied.
  • In some embodiments, determining the value corresponding to the reference sample value of the first color component under the preset mapping relationship may include: determining a first factor, where the first factor is a constant value less than zero; determining a first product value based on the first factor and the reference sample value of the first color component (i.e., the first reference color component parameter); and determining the value of the first product value under the preset mapping relationship.
  • determining the first factor may include: the first factor is a preset constant value.
  • In this way, the weighting coefficient distribution of the adjacent chroma can be adjusted according to the relatively flat characteristics of chroma, thereby obtaining a weighting coefficient distribution suitable for natural image chroma prediction.
  • Specifically, a given set of candidate values of the first factor is traversed, and the appropriateness of the first factor is measured by the difference between the predicted chroma and the original chroma under different values of the first factor.
  • For example, the first factor can take values of the form −2^t, where t ∈ {1, 0, −1, −2, −3}; experiments show that, within this set, the best value of the first factor is −0.25. Therefore, in a specific embodiment, the first factor can be set to −0.25, but this is not specifically limited in the embodiment of the present application.
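  • A minimal sketch of the Softmax-style weighting described above, assuming the first factor is a negative constant (for example −0.25) that multiplies the luma-difference reference samples before normalization; this is an illustrative floating-point form, not the integer implementation of the patent.

```python
import numpy as np

def wcp_weights(luma_diff: np.ndarray, first_factor: float = -0.25) -> np.ndarray:
    """Map luma differences to weighting coefficients with a Softmax-like model:
    w_k = exp(first_factor * |diff_k|) / sum_n exp(first_factor * |diff_n|).

    A negative first factor gives larger weights to reference pixels whose luma
    is closer to the reconstructed luma of the pixel to be predicted."""
    z = first_factor * luma_diff.astype(np.float64)   # first product values
    z -= z.max()                                      # numerical stability only
    e = np.exp(z)
    return e / e.sum()                                # weights sum to 1, each in [0, 1]
```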
  • determining the first factor may include: determining the value of the first factor according to the size parameter of the current block.
  • the method may further include: determining the value of the first factor according to a preset mapping lookup table between the size parameter of the current block and the value of the first factor.
  • the size parameter of the current block may include at least one of the following parameters: the width of the current block, the height of the current block, and the product of the width and height of the current block.
  • a classification method may be used to fix the value of the first factor.
  • the current block size parameter is divided into three categories, and the value of the first factor corresponding to each category is determined.
  • embodiments of the present application may also pre-store a lookup table mapping the size parameters of the current block and the value of the first factor, and then determine the value of the first factor based on the lookup table.
  • Table 1 shows the correspondence between a first factor and the size parameter of the current block provided by the embodiment of the present application. It should be noted that Table 1 is only an exemplary lookup table and is not specifically limited.
  • determining the first factor may include: determining the value of the first factor based on the number of reference pixels in the current block.
  • the method may further include: determining the value of the first factor according to a preset mapping lookup table between the number of reference pixels of the current block and the value of the first factor.
  • this embodiment of the present application can divide the number of reference pixels into three categories, and still use a classification method to fix the value of the first factor.
  • the current block is divided into three categories according to the number of reference pixels, and the value of the first factor corresponding to each category is determined.
  • embodiments of the present application may also pre-store a lookup table mapping the number of reference pixels of the current block and the value of the first factor, and then determine the value of the first factor based on the lookup table.
  • Table 2 shows the correspondence between a first factor provided by the embodiment of the present application and the number of reference pixels of the current block. It should be noted that Table 2 is only an exemplary lookup table and is not specifically limited.
  • determining the first product value based on the first factor and the reference sample of the first color component may include:
  • the first product value is set to a value obtained by adding and bit-shifting the reference sample of the first color component according to the first factor.
  • For example, assuming the first factor is equal to 0.25 and the reference sample value of the first color component is represented by Ref, the first product value can be equal to 0.25 × Ref; and 0.25 × Ref can be expressed as Ref/4, that is, Ref >> 2.
  • In other words, floating-point multiplications may be converted into addition and shift operations; that is to say, for the first product value, there is no restriction on its calculation method.
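  • To illustrate the add-and-shift realization mentioned above (for example, 0.25 × Ref written as Ref >> 2), the following sketch replaces the floating-point multiplication by integer shifts; the particular scaling values are assumptions chosen only for this example.

```python
def first_product_shift(ref_sample: int) -> int:
    """Compute 0.25 * ref_sample with integer arithmetic: Ref >> 2."""
    return ref_sample >> 2

# A factor such as 0.375 can likewise be split into adds and shifts:
# 0.375 * Ref = (Ref >> 2) + (Ref >> 3).
```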
  • the first color component may also be a brightness component
  • the second color component may be a chrominance component.
  • Specifically, at least one pixel to be predicted in the current block is selected, and the luma difference between its reconstructed luma and each reference luma in the adjacent area is calculated. If the luma difference is small, the similarity is relatively strong, and the corresponding weighting coefficient (expressed by w_k) can be given a larger weight; conversely, if the luma difference is large, the similarity is relatively weak, and w_k can be given a smaller weight. That is to say, when calculating the weighting coefficient, the reference sample value of the first color component can also be multiplied by a preset multiplier; the preset multiplier here is the second factor described in the embodiment of this application.
  • In some embodiments, the method may also include: performing a least squares calculation based on the first color component values of the reference pixels and the second color component values of the reference pixels to determine the second factor.
  • the least squares calculation is performed on the chrominance component values and brightness component values of the N reference pixels to obtain the second factor.
  • The least squares regression is calculated from the luma and chroma values of the N reference pixels, where:
  • Lk represents the brightness component value of the k-th reference pixel
  • Ck represents the chrominance component value of the k-th reference pixel
  • N represents the number of reference pixels
  • The result of the regression represents the second factor, which can be calculated using the least squares method. It should be noted that the second factor can also be a fixed value, or be fine-tuned based on a fixed value, etc., which is not specifically limited in the embodiments of this application.
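  • The exact regression formula is not reproduced above; as an assumption, the sketch below fits the standard least-squares slope of the reference chroma values C_k against the reference luma values L_k of the N reference pixels, which is one common way of obtaining such a multiplier.

```python
def second_factor_least_squares(L, C):
    """Least-squares slope of chroma C_k against luma L_k over N reference pixels:
    slope = (N * sum(L*C) - sum(L) * sum(C)) / (N * sum(L^2) - sum(L)^2)."""
    n = len(L)
    sum_l, sum_c = sum(L), sum(C)
    sum_lc = sum(l * c for l, c in zip(L, C))
    sum_ll = sum(l * l for l in L)
    denom = n * sum_ll - sum_l * sum_l
    return 0.0 if denom == 0 else (n * sum_lc - sum_l * sum_c) / denom
```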
  • the preset mapping relationship may be a preset functional relationship.
  • the preset mapping relationship may be a Softmax function.
  • the Softmax function is a normalized exponential function, but the embodiment of the present application may not require normalization, and its value is not limited to the range [0,1].
  • the weighting coefficient w k corresponding to the k-th reference pixel can be calculated by formula (4), or can be replaced as follows:
  • where S represents the control parameter, which has a fixed correspondence with the first factor.
  • the value of S is related to the size parameter of the current block.
  • the size parameters of the current block include the width and height of the current block. In a possible implementation, if the minimum value of the width and height is less than or equal to 4, then the value of S is equal to 8; if the minimum value of the width and height is greater than 4 and less than or equal to 16, then the value of S is equal to 12; If the minimum value of width and height is greater than 16, then the value of S is equal to 16.
  • Alternatively, the value of S is related to the number of reference pixels (R) of the current block. In a possible implementation, if R is less than 16, then the value of S is equal to 8; if R is greater than or equal to 16 and less than 32, then the value of S is equal to 12; if R is greater than or equal to 32, then the value of S is equal to 16; the embodiments of this application do not impose any limitation on this.
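  • The thresholds above can be expressed as a simple selection rule; the sketch below follows the block-size values quoted in the text (S ∈ {8, 12, 16}), while the function name and structure are illustrative assumptions.

```python
def select_control_parameter(width: int, height: int) -> int:
    """Select the control parameter S from the current block size."""
    min_side = min(width, height)
    if min_side <= 4:
        return 8
    if min_side <= 16:
        return 12
    return 16
```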
  • the preset mapping relationship may be a weighting function that has an inverse relationship with the reference sample value of the first color component.
  • In short, when the preset mapping relationship is a preset functional relationship, it can be as shown in Equation (4), Equation (6) or (7), or Equation (8) or Equation (9); it can even be any function model of the weighting coefficient constructed such that the closer the reference luma value of a reference pixel is to the luma reconstruction value of the pixel to be predicted in the current block, the more important the reference chroma value of that reference pixel is to the pixel to be predicted in the current block; this is not specifically limited in the embodiments of this application.
  • the preset mapping relationship may also be a preset look-up table method.
  • the embodiments of the present application can also simplify operations, for example, using an array element table lookup method to reduce part of the calculation operations.
  • For the preset mapping relationship, it can be: determining an array element value according to the reference sample value of the first color component, the first factor and a preset array element mapping lookup table; then determining the value corresponding to the array element value under the preset mapping relationship; and then setting the weighting coefficient equal to that value.
  • the f model for calculating weighted coefficients can be implemented through simplified operations, such as table lookup.
  • The luma difference between the pixel (i, j) to be predicted and the k-th reference pixel can be denoted by a single variable; in this way, the weighting coefficient w_kij = f(that luma difference), and the numerator in the f model can be stored in an array indexed by this independent variable.
  • The storage here can be divided into complete storage and partial storage.
  • Complete storage stores all the numerator values of the calculated weighting coefficients; an array space equal to the value range of the independent variable multiplied by the number of S categories needs to be opened.
  • For example, when the independent variable is an integer in the range of 0 to 1023 and the number of S categories is 3, the required storage array size is 1024 × 3, and the numerators can be stored completely, indexed into the two-dimensional array storMole.
  • Alternatively, the complete storage can set a corresponding array offset according to the classification of S, and then use the independent variable plus the offset as an index into a one-dimensional storage space.
  • Partial storage stores only a selected part of all the numerator values of the calculated weighting coefficients; a partial value range of the independent variable and/or the numerator values of part of the S categories can be selected for storage.
  • The f model value outside the selected range defaults to 0, and the required storage array size is determined by the selected range; the storage range of partial storage can be set according to actual needs, and the embodiments of this application do not impose any limitation on this.
  • In addition, enlarged (scaled-up) integer values can also be stored here; in this way, a corresponding scaling-down operation needs to be performed after the predicted values are determined.
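  • A sketch of the complete-storage variant described above, assuming the independent variable is an integer in [0, 1023], there are three S categories, and the numerator has the form exp(−diff / S); the numerator form, the scaling factor and the array layout are all assumptions for illustration, with storMole pre-filled so the per-pixel exponential becomes a table lookup.

```python
import math

S_VALUES = (8, 12, 16)     # assumed three categories of the control parameter S
DIFF_RANGE = 1024          # independent variable assumed to be an integer in [0, 1023]
SCALE = 1 << 16            # store enlarged integer numerators (scaled down later)

# Two-dimensional table storMole[s_idx][diff] of scaled numerator values.
storMole = [[int(round(math.exp(-d / s) * SCALE)) for d in range(DIFF_RANGE)]
            for s in S_VALUES]

def numerator_lookup(diff: int, s_idx: int) -> int:
    """Fetch the pre-computed (scaled) numerator of the weighting coefficient."""
    return storMole[s_idx][diff]
```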
  • In this way, the weighting coefficient can be determined based on the reference sample value of the first color component (for example, the luma difference information), where N represents the number of reference sample values of the second color component.
  • In some embodiments, the sum of these N weighting coefficients is equal to 1, and each weighting coefficient is a value greater than or equal to 0 and less than or equal to 1; in other embodiments, it is not limited that the sum of the N weighting coefficients is equal to 1.
  • S503 Determine the first prediction block of the second color component of the current block based on the weighting coefficient and the reference sample value of the second color component of the current block.
  • In the embodiment of the present application, chroma prediction is performed for each reconstructed luma position in the current block, so that the size of the obtained first prediction block is larger than the original chroma block size of the current block; that is, the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block.
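  • The following sketch shows how such a first prediction block can be formed by weighted prediction, combining, for each luma-resolution position of the current block, the reference chroma samples with position-dependent weighting coefficients; the floating-point form, the fixed first factor, and the function names are assumptions consistent with the earlier sketches, not the normative integer implementation.

```python
import numpy as np

def wcp_first_prediction_block(rec_luma: np.ndarray, ref_luma: np.ndarray,
                               ref_chroma: np.ndarray,
                               first_factor: float = -0.25) -> np.ndarray:
    """For every reconstructed luma position (i, j) in the current block, compute
    predC(i, j) = sum_k w_k(i, j) * refC_k, where the weights depend on the
    absolute difference between Rec_L(i, j) and the reference luma RefL_k."""
    h, w = rec_luma.shape
    pred = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            diff = np.abs(ref_luma.astype(np.int32) - int(rec_luma[i, j]))
            z = first_factor * diff
            z = z - z.max()                 # numerical stability
            wk = np.exp(z)
            wk /= wk.sum()                  # weights sum to 1
            pred[i, j] = float(np.dot(wk, ref_chroma))
    return pred   # size equals the luma resolution, i.e. larger than the chroma block
```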
  • In some embodiments, the method may also include: determining the reference sample value of the second color component of the current block based on the values of the second color component sampling points in the adjacent area of the current block.
  • the adjacent area may include at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • the method may further include:
  • a reference sample of the second color component of the current block is determined based on filtered adjacent samples of the second color component of the current block.
  • the number of filtered adjacent sample values of the second color component of the current block is greater than the number of values of the second color component sampling points in adjacent areas of the current block.
  • the fourth filtering process may be upsampling filtering; where the upsampling rate is a positive integer multiple of 2.
  • the first color component is the brightness component
  • the second color component is the chrominance component.
  • the reference chrominance information can also be upsampled and filtered here.
  • For example, when the chroma block size is M × N and the number of reference chroma information in the adjacent areas is M + N, 2M + 2N chroma reference samples can be obtained after upsampling filtering; these chroma reference samples are then used to obtain 2M × 2N chroma prediction values with the weighted prediction method, followed by downsampling filtering to obtain M × N chroma prediction values as the final prediction values.
  • Alternatively, if the chroma block size is M × N and the number of reference chroma information in the adjacent areas is M + N, only these M + N reference chroma information can be used to calculate 2M × 2N chroma prediction values with the weighting coefficients, which are then downsampled and filtered to obtain M × N chroma prediction values as the final prediction values.
  • Alternatively, if the chroma block size is M × N and the number of reference chroma information in the adjacent areas is M + N, 4M + 4N chroma reference samples can be obtained after upsampling filtering; or, for a 2M × 2N current block in YUV444 format, the chroma block size is 2M × 2N, the number of reference chroma information in the adjacent areas is 2M + 2N, and 4M + 4N chroma reference samples are obtained after upsampling filtering. Subsequently, by using weighted prediction and downsampling filtering based on these chroma reference samples, M × N chroma prediction values can also be obtained as the final prediction values.
  • The embodiments of the present application do not impose any limitation on this.
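  • To illustrate how M × N final chroma prediction values can be obtained from a 2M × 2N first prediction block, the sketch below uses a simple 2 × 2 average as the downsampling filter; the actual filter taps of the first filtering process are not specified here, so this choice is an assumption.

```python
import numpy as np

def downsample_prediction_block(first_pred: np.ndarray) -> np.ndarray:
    """Downsample a 2M x 2N first prediction block to an M x N second prediction
    block by averaging each non-overlapping 2 x 2 group of predicted values."""
    h, w = first_pred.shape
    assert h % 2 == 0 and w % 2 == 0
    return first_pred.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```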
  • the second filtering process can use the first filter
  • the third filtering process can use the second filter
• the fourth filtering process can use the third filter;
• here, the first filter, the second filter and the third filter may all be upsampling filters,
• and the upsampling rates of the filters may be the same or different, so these three filters can be the same or different. Furthermore, the first filter, the second filter and the third filter may all be neural network filters, and the embodiments of this application do not impose any limitation on this.
• when the first color component is a brightness component and the second color component is a chrominance component,
• the spatial resolution of the values of the first color component sampling points in the adjacent area (i.e., the reference brightness information),
• the spatial resolution of the values of the second color component sampling points in the adjacent area (i.e., the reference chrominance information),
• and the spatial resolution of the reconstructed values of the first color component sampling points in the current block (i.e., the reconstructed luminance information) may differ;
• therefore, the second filtering process, the third filtering process or the fourth filtering process can also be performed according to the current color format information.
  • the method may also include: based on the color format information, performing a fourth filtering process on the values of the second color component sampling points in the adjacent areas of the current block, Get the filtered adjacent samples of the second color component of the current block.
  • the fourth filtering process may also include: if the color format information indicates 4:2:0 sampling, sampling the second color component in the adjacent area of the current block The value of the point is upsampled and filtered; where the upsampling rate is a positive integer multiple of 2.
  • the color format information may include 4:4:4 sampling, 4:2:2 sampling, 4:1:1 sampling, 4:2:0 sampling, and so on.
• if the color format information indicates 4:4:4 sampling (which can also be expressed as YUV444), that is, the spatial resolutions of brightness and chroma are equal, then no processing is required for the reference chroma information; if the color format information indicates 4:2:2 sampling (also expressed as YUV422), 4:1:1 sampling (also expressed as YUV411), 4:2:0 sampling (also expressed as YUV420), etc.,
• then the reference chroma information obtained from the adjacent area needs to be upsampled and filtered.
  • the upsampling filtering method can be any one of the linear interpolation methods, such as nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, mean interpolation, median interpolation, Copy interpolation, etc.; it can also be any nonlinear interpolation method, such as an interpolation algorithm based on wavelet transform, an interpolation algorithm based on edge information, etc.; it can also be used for upsampling filtering based on a convolutional neural network.
• the embodiments of this application do not impose any limitation on this. Exemplarily, referring to FIG. 6B, the YUV420 video format and copy interpolation are used as an example for explanation.
• each pixel in the 4×1 grid-filled block is equivalent to
• the upper-left corner pixel of each 2×2 sub-block in the upsampled 8×2 chroma block; that is, the pixel value of each chroma pixel in the interpolated 2×2 sub-block is the same,
• and the other three chroma pixels are all copied from the upper-left corner pixel (grid-filled block).
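• As an informal illustration of the copy interpolation described above, the following Python sketch repeats each reference chroma sample horizontally and vertically so that a 4×1 chroma row is expanded to match an 8×2 luma block; the function name and list-based layout are illustrative, not part of the described method.

```python
# Hedged sketch of copy (nearest-neighbor) interpolation of reference chroma,
# assuming a 2x upsampling rate in each direction as in the YUV420 example.
def copy_upsample_chroma_row(ref_chroma_row, up_hor=2, up_ver=2):
    """Repeat every reference chroma sample up_hor times horizontally and
    up_ver times vertically (copy interpolation)."""
    one_row = [c for c in ref_chroma_row for _ in range(up_hor)]
    return [list(one_row) for _ in range(up_ver)]

# A 4x1 reference chroma row becomes an 8x2 block whose 2x2 sub-blocks
# each share the value of the original (upper-left) sample.
print(copy_upsample_chroma_row([100, 102, 98, 101]))
```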
  • the first prediction block of the second color component may include:
  • the predicted value of the second color component sampling point in the first prediction block is set equal to the sum of N weighted values; where N represents the number of reference samples of the second color component, and N is a positive integer.
• Equation (16) gives the predicted value of the chroma component at the (i, j) position in the prediction block.
  • this method is conducive to parallel processing and can speed up calculations.
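• The per-position weighted prediction can be pictured with the short sketch below: the predicted chroma at one position is the sum of N reference chroma samples, each multiplied by its weighting coefficient. The names `weights_ij` and `ref_chroma` are illustrative, and the weights are assumed to already sum to 1.

```python
# Minimal sketch of the weighted chroma prediction at one position (i, j):
# predC[i][j] = sum_k ( weight[i][j][k] * refC[k] ), with N weights summing to 1.
def predict_chroma_sample(weights_ij, ref_chroma):
    assert len(weights_ij) == len(ref_chroma)      # both have N entries
    return sum(w * c for w, c in zip(weights_ij, ref_chroma))

# Example with N = 4 reference chroma samples.
print(predict_chroma_sample([0.4, 0.3, 0.2, 0.1], [100, 110, 90, 120]))  # 103.0
```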
• when upsampling filtering is performed on both the reference brightness information and the reference chrominance information in the adjacent area of the current block, the method may also include: based on the color format information, using the first horizontal upsampling factor and the first vertical upsampling factor to perform the second filtering process on the values of the first color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the first color component of the current block; and using the second horizontal upsampling factor and the second vertical upsampling factor to perform the fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the second color component of the current block.
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, it is determined that the second horizontal upsampling factor is equal to the first horizontal upsampling factor, and the second vertical upsampling factor is equal to the first vertical upsampling factor;
• if the color format information indicates 4:2:2 sampling, it is determined that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor, and the second vertical upsampling factor is equal to the first vertical upsampling factor;
• if the color format information indicates 4:1:1 sampling, it is determined that the second horizontal upsampling factor is equal to 4 times the first horizontal upsampling factor, and the second vertical upsampling factor is equal to the first vertical upsampling factor;
• if the color format information indicates 4:2:0 sampling, it is determined that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor, and the second vertical upsampling factor is equal to 2 times the first vertical upsampling factor.
• in this case, upsampling filtering is performed on the reference brightness information, and there is no loss of the brightness information.
• as for the reference chrominance information, since the spatial resolution of the luminance component in YUV video is always greater than or equal to the spatial resolution of the chrominance component, the reference chrominance information must be upsampled and filtered to stay consistent with the spatial resolution of the reference luminance information.
  • the first horizontal upsampling factor is represented by S_Hor_RefLuma
  • the first vertical upsampling factor is represented by S_Ver_RefLuma
  • the second horizontal upsampling factor is represented by S_Hor_RefChroma
  • the second vertical upsampling factor is represented by S_Ver_RefChroma.
• for videos of YUV444 format, S_Hor_RefChroma can be set equal to S_Hor_RefLuma,
• and S_Ver_RefChroma can be set equal to S_Ver_RefLuma;
• for videos of YUV422 format, S_Hor_RefChroma can be set equal to 2 times S_Hor_RefLuma,
• and S_Ver_RefChroma can be set equal to S_Ver_RefLuma;
• for videos of YUV411 format, S_Hor_RefChroma can be set equal to 4 times S_Hor_RefLuma,
• and S_Ver_RefChroma can be set equal to S_Ver_RefLuma;
• for videos of YUV420 format, S_Hor_RefChroma can be set equal to 2 times S_Hor_RefLuma,
• and S_Ver_RefChroma can be set equal to 2 times S_Ver_RefLuma.
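• The factor rules above can be summarized by a small lookup, sketched below under the assumption that the color format is available as a string label; the function is illustrative only.

```python
# Hedged sketch: derive the reference chroma upsampling factors from the
# reference luma upsampling factors and the color format, per the rules above.
def ref_chroma_upsampling_factors(color_format, s_hor_ref_luma, s_ver_ref_luma):
    ratio = {                # (horizontal multiplier, vertical multiplier)
        "4:4:4": (1, 1),
        "4:2:2": (2, 1),
        "4:1:1": (4, 1),
        "4:2:0": (2, 2),
    }[color_format]
    return s_hor_ref_luma * ratio[0], s_ver_ref_luma * ratio[1]

print(ref_chroma_upsampling_factors("4:2:0", 1, 1))  # (2, 2)
```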
  • the method may also include:
  • the color format information indicates 4:4:4 sampling, it is determined that the width of the first prediction block is equal to the width of the current block, and the height of the first prediction block is equal to the height of the current block;
  • the color format information indicates 4:2:2 sampling, then it is determined that the width of the first prediction block is equal to 2 times the width of the current block, and the height of the first prediction block is equal to the height of the current block;
  • the color format information indicates 4:1:1 sampling, then it is determined that the width of the first prediction block is equal to 4 times the width of the current block, and the height of the first prediction block is equal to the height of the current block;
• if the color format information indicates 4:2:0 sampling, it is determined that the width of the first prediction block is equal to 2 times the width of the current block,
• and the height of the first prediction block is equal to 2 times the height of the current block.
• the size parameters of the first prediction block include width and height, where the width of the first prediction block is represented by predSizeW and the height of the first prediction block is represented by predSizeH; the size parameters of the current block also include width and height, where the width of the current block is represented by nTbW and the height of the current block is represented by nTbH.
• for YUV444 format video, the spatial resolutions of the luminance component and the chrominance component are equal,
• so predSizeW can be set equal to nTbW
• and predSizeH can be set equal to nTbH;
• for YUV422 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/2 of that of the luminance component,
• so predSizeW can be set equal to 2 times nTbW
• and predSizeH can be set equal to nTbH;
• for YUV411 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/4 of that of the luminance component,
• so predSizeW can be set equal to 4 times nTbW and predSizeH can be set equal to nTbH; for YUV420 format video, the horizontal resolution and vertical resolution of the chrominance component are both 1/2 of those of the luminance component,
• so predSizeW can be set equal to 2 times nTbW, and predSizeH can be set equal to 2 times nTbH.
• when the brightness of the current block is upsampled, the size of the first prediction block needs to be inferred according to the YUV video format and the spatial upsampling frequency of the current block brightness.
• it may include: based on the color format information, using the third horizontal upsampling factor and the third vertical upsampling factor to perform the third filtering process on the reconstructed values of the first color component sampling points in the current block,
• to obtain the filtered sample values of the first color component sampling points in the current block.
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, the width of the first prediction block is equal to the product of the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor;
• if the color format information indicates 4:2:2 sampling, the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor;
• if the color format information indicates 4:1:1 sampling, the width of the first prediction block is equal to the product of 4 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor;
• if the color format information indicates 4:2:0 sampling, the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of 2 times the height of the current block and the third vertical upsampling factor.
  • the third horizontal upsampling factor is represented by S_Hor_RecLuma
  • the third vertical upsampling factor is represented by S_Ver_RecLuma.
• for YUV444 format video, predSizeW can be set equal to the product of S_Hor_RecLuma and nTbW,
• and predSizeH can be set equal to the product of S_Ver_RecLuma and nTbH; for YUV422 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/2 of that of the luminance component,
• so predSizeW can be set equal to 2 times the product of S_Hor_RecLuma and nTbW,
• and predSizeH can be set equal to the product of S_Ver_RecLuma and nTbH; for YUV411 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/4 of that of the luminance component, so predSizeW can be set equal to 4 times the product of S_Hor_RecLuma and nTbW, and predSizeH can be set equal to the product of S_Ver_RecLuma and nTbH; for YUV420 format video, the horizontal resolution and vertical resolution of the chrominance component are both 1/2 of those of the luminance component,
• so predSizeW can be set equal to 2 times the product of S_Hor_RecLuma and nTbW,
• and predSizeH can be set equal to 2 times the product of S_Ver_RecLuma and nTbH.
• in this way, predSizeH is greater than or equal to the height nTbH of the current block, or predSizeW is greater than or equal to the width nTbW of the current block, so that the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block.
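• The two size-derivation rules above can be combined into one sketch, assuming the reconstructed-luma upsampling factors default to 1 when the reconstructed luma is not upsampled; this is illustrative and not the normative derivation.

```python
# Hedged sketch: derive (predSizeW, predSizeH) from the current block size, the
# color format, and the reconstructed-luma upsampling factors.
def prediction_block_size(color_format, nTbW, nTbH,
                          s_hor_rec_luma=1, s_ver_rec_luma=1):
    fmt = {"4:4:4": (1, 1), "4:2:2": (2, 1),
           "4:1:1": (4, 1), "4:2:0": (2, 2)}[color_format]
    predSizeW = fmt[0] * s_hor_rec_luma * nTbW
    predSizeH = fmt[1] * s_ver_rec_luma * nTbH
    return predSizeW, predSizeH

print(prediction_block_size("4:2:0", 8, 8))        # (16, 16)
print(prediction_block_size("4:2:2", 8, 8, 2, 2))  # (32, 16)
```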
  • S504 Perform the first filtering process on the first prediction block, and determine the second prediction block of the second color component of the current block.
  • S505 Determine the reconstruction value of the second color component sampling point of the current block according to the second prediction block.
  • the first filtering process may be a downsampling filtering process.
  • the number of predicted values of the second color component contained in the second prediction block is the same as the number of second color component sample points contained in the current block.
  • performing the first filtering process on the first prediction block and determining the second prediction block of the second color component of the current block may include:
  • a preset filter is used to perform downsampling filtering processing on the first prediction block to determine a second prediction block of the second color component of the current block.
  • the preset filter may be a downsampling filter.
  • the downsampling filter here may be a neural network filter, which is not limited in this embodiment of the present application.
  • performing the first filtering process on the first prediction block and determining the second prediction block of the second color component of the current block may include:
• performing downsampling filtering on the first prediction block according to the horizontal downsampling factor and the vertical downsampling factor to obtain the second prediction block of the second color component of the current block may include: if the horizontal downsampling factor is greater than 1, or the vertical downsampling factor is greater than 1, then downsampling filtering is performed on the first prediction block to obtain the second prediction block.
  • performing downsampling filtering on the first prediction block may include at least one of the following:
• the first prediction block is subjected to downsampling filtering in the vertical direction and then downsampling filtering is performed in the horizontal direction; or, the first prediction block is subjected to downsampling filtering in the horizontal direction and then in the vertical direction.
• the horizontal downsampling factor can be calculated according to the width of the first prediction block and the width of the current block,
• and the vertical downsampling factor can be calculated according to the height of the first prediction block and the height of the current block; then,
• downsampling filtering is performed on the first prediction block according to the horizontal downsampling factor and the vertical downsampling factor.
• if the horizontal downsampling factor is greater than 1 and the vertical downsampling factor is equal to 1, then the first prediction block only needs to be downsampled in the horizontal direction; if the horizontal downsampling factor is equal to 1 and the vertical downsampling factor is greater than 1, then the first prediction block only needs to be downsampled in the vertical direction; if the horizontal downsampling factor is greater than 1 and the vertical downsampling factor is greater than 1, then the first prediction block needs to be downsampled in both the horizontal and vertical directions, where downsampling can be performed first in the horizontal direction and then in the vertical direction, or first in the vertical direction and then in the horizontal direction.
  • the convolution operation in the neural network structure can even be used to replace the down-sampling operation here.
• the embodiments of this application do not impose any limitations on this.
  • down-sampling filtering can also be performed by point extraction, such as a two-dimensional filter, a one-dimensional filter, and so on.
• for a one-dimensional filter, it can be "vertical direction first and then horizontal direction", or it can be "horizontal direction first and then vertical direction".
  • It can also be a fixed filtering order, or even a flexibly adjustable filtering order.
  • the embodiment of the present application does not impose any limitation on this.
• performing the first filtering process on the first prediction block and determining the second prediction block of the second color component of the current block may include: the first filtering process includes a downsampling filtering process, wherein the input of the downsampling filtering process is the first downsampling input block and the output of the downsampling filtering process is the first downsampling output block.
• the downsampling filtering process may include:
  • the down-sampling factor includes at least one of the following: a horizontal down-sampling factor, a vertical down-sampling factor;
  • performing downsampling filtering on the first downsampling input block according to the downsampling factor to obtain the first downsampling output block may include: if the horizontal downsampling factor is greater than 1, or the vertical downsampling factor If it is greater than 1, the first down-sampling input block is subjected to down-sampling filtering processing to obtain the first down-sampling output block.
  • performing downsampling filtering on the first downsampling input block includes at least one of the following:
• the first downsampling input block is subjected to downsampling filtering in the vertical direction and then downsampling filtering is performed in the horizontal direction; or, the first downsampling input block is subjected to downsampling filtering in the horizontal direction and then in the vertical direction.
  • the horizontal downsampling factor can first be calculated based on the width of the first downsampling input block and the width of the first downsampling output block, and the vertical downsampling factor can be calculated based on the height of the first downsampling input block and the height of the first downsampling output block. sampling factor; then, the first downsampling input block is downsampled and filtered according to the horizontal downsampling factor and the vertical downsampling factor.
  • the horizontal down-sampling factor is greater than 1 and the vertical down-sampling factor is equal to 1, then only the first down-sampling input block needs to be down-sampled in the horizontal direction; if the horizontal down-sampling factor is equal to 1, the vertical down-sampling factor is greater than 1 , then only the first downsampling input block needs to be downsampled in the vertical direction; if the horizontal downsampling factor is greater than 1 and the vertical downsampling factor is greater than 1, then the first downsampling input block needs to be downsampled in both the horizontal and vertical directions.
  • downsampling filtering is performed on the first prediction block to determine the second prediction block.
  • the method may further include: using the first prediction block as a first down-sampling input block; using the first down-sampling output block as a second prediction block of the second color component of the current block.
  • enhancement filtering is first performed on the first prediction block, and then downsampling filtering is performed to determine the second prediction block.
• the method may also include: performing filtering enhancement processing on the first prediction block to determine the first enhanced prediction block; using the first enhanced prediction block as the first downsampling input block; and using the first downsampling output block as
• the second prediction block of the second color component of the current block.
  • downsampling filtering is first performed on the first prediction block, and then enhancement filtering is performed to determine the second prediction block.
  • the method may also include: using the first prediction block as the first down-sampling input block; using the first down-sampling output block as the first down-sampling filter prediction block; and performing filter enhancement on the first down-sampling filter prediction block. Processing to determine a second prediction block of a second color component of the current block.
  • enhancement filtering is first performed on the first prediction block, then downsampling filtering is performed, and then enhancement filtering is performed to determine the second prediction block.
• the method may also include: performing a first filtering enhancement process on the first prediction block to determine the second enhanced prediction block; using the second enhanced prediction block as the first downsampling input block; using the first downsampling output block as
• the second downsampling filtered prediction block; and performing a second filtering enhancement process on the second downsampling filtered prediction block to determine the second prediction block of the second color component of the current block.
• performing the first filtering process on the first prediction block and determining the second prediction block of the second color component of the current block may also include: according to the horizontal downsampling factor and the vertical downsampling factor,
• performing a weighted sum calculation on every preset number of predicted values of the second color component of the first prediction block in the horizontal direction and/or the vertical direction to obtain the second prediction block.
• in one case, this may include: performing a weighted sum calculation on every horizontal-downsampling-factor number of predicted values of the second color component of the first prediction block in the horizontal direction to obtain the second prediction block.
• in another case, this may include: performing a weighted sum calculation on every vertical-downsampling-factor number of predicted values of the second color component of the first prediction block in the vertical direction to obtain the second prediction block.
• in yet another case, this may include: performing a weighted sum calculation on every horizontal-downsampling-factor number of predicted values of the second color component in the horizontal direction and on every vertical-downsampling-factor number of predicted values of the second color component in the vertical direction to obtain the second prediction block.
• in addition, without distinguishing between the horizontal downsampling and vertical downsampling cases, the embodiment of the present application may also perform a weighted sum calculation on every preset number of chroma prediction values in the direction where downsampling is required (the vertical direction or the horizontal direction); if the weight of each chroma prediction value is equal, then as a special case the weighted sum calculation of every preset number of chroma prediction values can also be regarded as averaging the preset number of chroma prediction values, and the average is used as the predicted value after downsampling filtering.
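• The equal-weight special case can be sketched as below: every (vertical factor × horizontal factor) group of chroma prediction values in the first prediction block is averaged to produce one value of the second prediction block. Plain nested lists stand in for the codec's internal buffers.

```python
# Hedged sketch of average downsampling of the first prediction block.
def average_downsample(pred_block, down_hor, down_ver):
    """pred_block: predSizeH x predSizeW chroma prediction values."""
    out = []
    for y in range(0, len(pred_block), down_ver):
        row = []
        for x in range(0, len(pred_block[0]), down_hor):
            vals = [pred_block[y + dy][x + dx]
                    for dy in range(down_ver) for dx in range(down_hor)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

block_4x4 = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(average_downsample(block_4x4, 2, 2))  # [[3.5, 5.5], [11.5, 13.5]]
```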
• the method may also include: determining the weighting coefficient according to the reference sample values of the first color component of some sampling points in the first prediction block; and determining,
• according to the weighting coefficient and the reference sample values of the second color component of some sampling points in the first prediction block, the second prediction block of the second color component of the current block.
• determining the second prediction block of the second color component of the current block based on the weighting coefficient and the reference sample values of the second color component of some sampling points in the first prediction block may include: performing, according to the weighting coefficient, a weighted calculation on the reference sample values of the second color component for the sampling point at the (i, j)-th position in the first prediction block to obtain the predicted value of the second color component of the sampling point at the (x, y)-th position in the current block; where i, j, x and y are all integers greater than or equal to zero.
• in this case, the reconstructed brightness information in the current block is not upsampled and filtered; accordingly, it may be decided not to perform co-located chroma prediction for every brightness position in the reconstructed brightness information of the current block.
• instead, some brightness positions can be selected for prediction of the co-located chroma points, so that no downsampling filtering is required after obtaining the prediction block, which can ensure the accuracy of the existing brightness information.
• accurate brightness information is conducive to improving the accuracy and stability of the nonlinear mapping model, thereby improving the accuracy of the chroma prediction values.
• specifically, the corresponding position can be selected for chroma prediction according to the characteristics of the YUV video format; assuming that the position of the current brightness point is CurRecLuma(i, j), the position of the sampling point whose chroma needs to be predicted is CurPredChroma(x, y).
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, x is set equal to i and y is set equal to j;
• if the color format information indicates 4:2:2 sampling, x is set equal to the product of i and 2, and y is set equal to j;
• if the color format information indicates 4:1:1 sampling, x is set equal to the product of i and 4, and y is set equal to j;
• if the color format information indicates 4:2:0 sampling, x is set equal to the product of i and 2,
• and y is set equal to the product of j and 2.
  • the method may also include: determining the horizontal sampling position factor and the vertical sampling position factor; setting x equal to the product of i and the horizontal sampling position factor, and setting y equal to j and the vertical sampling position product of factors.
  • x may be set equal to the product of i and S_Pos_Hor, and y may be set equal to the product of j and S_Pos_Ver.
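• The per-format position mapping listed above can be written as a small table lookup; the table below mirrors the four cases in the text, and the function name is illustrative.

```python
# Hedged sketch: derive (x, y) from (i, j) using the horizontal/vertical
# sampling position factors (S_Pos_Hor, S_Pos_Ver) per color format.
SAMPLING_POSITION_FACTORS = {
    "4:4:4": (1, 1),  # x = i,     y = j
    "4:2:2": (2, 1),  # x = 2 * i, y = j
    "4:1:1": (4, 1),  # x = 4 * i, y = j
    "4:2:0": (2, 2),  # x = 2 * i, y = 2 * j
}

def map_position(color_format, i, j):
    s_pos_hor, s_pos_ver = SAMPLING_POSITION_FACTORS[color_format]
    return i * s_pos_hor, j * s_pos_ver

print(map_position("4:2:0", 3, 5))  # (6, 10)
```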
  • the method may further include: after determining the second prediction block of the second color component of the current block, performing correlation processing on the second prediction block, and using the processed second prediction block as the second prediction block. prediction block.
  • relevant processing is performed on the second prediction block, including at least one of the following:
  • Weighted fusion processing is performed on the second prediction block using the prediction value of the second color component of the current block in at least one prediction mode.
  • the third filtering process may be a smoothing filtering process.
• in order to reduce the instability caused by pixel-by-pixel independent and parallel prediction, for example, the second prediction block can be smoothed and filtered, and then the processed second prediction block can be used as the final second prediction block.
  • the preset compensation value of the second color component of the second prediction block is determined based on the reference sample value in the adjacent area of the current block;
  • the predicted value of the second color component sampling point is corrected to determine the final second prediction block.
• a position-related correction process may be performed on the second prediction block. For example, a reference pixel with a close spatial position is used to calculate a chroma compensation value for each second color component sampling point to be predicted, and this chroma compensation value is used to correct the second color component sampling point in the second prediction block; according to the corrected value,
• the final predicted value of the second color component sampling point is determined, and the final second prediction block is obtained.
• prediction processing is performed on the second color component sampling points in the second prediction block according to at least one prediction mode, and at least one initial prediction value of the second color component sampling points in the second prediction block is determined;
• weighted fusion processing is then performed based on the at least one initial prediction value and the prediction values of the second color component sampling points in the second prediction block to determine the final second prediction block.
  • the chroma prediction values calculated in other prediction modes can also be weighted and fused with the chroma prediction values calculated in the WCP mode, and the final color prediction value is determined based on this fusion result.
• the other prediction modes may include: Planar mode, direct current (DC) mode, vertical mode, horizontal mode, CCLM mode, etc., and each prediction mode is connected to a switch. The switch is used to control whether the chroma prediction values of this prediction mode participate in the weighted fusion process.
  • the weight value corresponding to Planar mode is W_Planar
  • the weight value corresponding to DC mode is W_DC
  • the weight value corresponding to vertical mode is W_Ver
  • the weight value corresponding to horizontal mode is W_Hor
  • the weight value corresponding to CCLM mode is W_CCLM
• the weight value corresponding to WCP mode is W_Wcp; for the chroma prediction value in Planar mode, the chroma prediction value in DC mode, the chroma prediction value in vertical mode, the chroma prediction value in horizontal mode and the chroma prediction value in CCLM mode, if only the switch connected to CCLM mode is in the closed state, then the chroma prediction value in CCLM mode and the chroma prediction value in WCP mode can be weighted and fused according to W_CCLM and W_Wcp, where the values of W_CCLM and W_Wcp determine whether the weighted fusion uses equal weights or unequal weights.
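• As a rough illustration of this fusion step, the sketch below blends a CCLM prediction block and a WCP prediction block sample by sample with weights W_CCLM and W_Wcp; the equal weights used here are only an example, not values mandated by the method.

```python
# Hedged sketch of weighted fusion of two chroma prediction blocks.
def fuse_predictions(pred_cclm, pred_wcp, w_cclm=0.5, w_wcp=0.5):
    assert abs(w_cclm + w_wcp - 1.0) < 1e-9        # weights assumed to sum to 1
    return [[w_cclm * c + w_wcp * w for c, w in zip(row_c, row_w)]
            for row_c, row_w in zip(pred_cclm, pred_wcp)]

print(fuse_predictions([[100, 104]], [[108, 100]]))  # [[104.0, 102.0]]
```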
  • determining the reconstruction value of the second color component sampling point of the current block according to the second prediction block may include:
  • the reconstructed value of the second color component sampling point of the current block is determined according to the predicted difference value of the second color component sampling point of the current block and the predicted value of the second color component sampling point of the current block.
• determining the prediction difference (residual) of the second color component sampling points of the current block may be performed by parsing the code stream to determine the prediction difference value of the second color component sampling points of the current block.
  • determining the predicted value of the second color component sampling point of the current block based on the second prediction block may be to set the predicted value of the second color component sampling point of the current block equal to The value of the second prediction block; or the value of the second prediction block may be upsampled and filtered, and the predicted value of the second color component sampling point of the current block is set equal to the output value after upsampling and filtering.
• in this way, the chroma prediction difference of the current block is determined; then, based on the second prediction block, the chroma prediction value of the current block can be determined; then the chroma prediction value and
• the chroma prediction difference value are added to obtain the chroma reconstruction value of the current block.
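• The reconstruction step itself is a per-sample addition followed by clipping to the valid sample range, as in the sketch below; the 10-bit depth and the clip are assumptions for illustration.

```python
# Hedged sketch: chroma reconstruction = clip(prediction + residual).
def reconstruct_chroma(pred, resid, bit_depth=10):
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val) for p, r in zip(pr, rr)]
            for pr, rr in zip(pred, resid)]

print(reconstruct_chroma([[512, 1020]], [[-5, 10]]))  # [[507, 1023]]
```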
• the embodiment of the present application uses unlost brightness information to perform chroma prediction during the prediction process of the WCP mode, which mainly involves three aspects: first, the brightness information of the reference pixels and of the current block is fully used to calculate the chroma weighting coefficients of the reference pixels; second, the importance of the existing brightness information is fully considered, and a more accurate nonlinear mapping model is established to assign weights to the reference chroma points without losing brightness information; and third, when performing reference chroma upsampling and in-situ chroma prediction based on the position of each brightness point of the current block, the characteristics of the various YUV video formats are fully considered.
• for the chroma subsampling formats and the brightness upsampling situations of different YUV videos, it is always ensured that the spatial resolutions of the chroma component and the luminance component are consistent, and the existing unlost luminance information is fully utilized for in-situ chroma prediction to improve the accuracy of the chroma prediction value in the WCP mode.
  • the embodiment of the present application also provides a weight-based chroma prediction framework.
• for each sampling point recY[i][j] in the downsampled reconstructed luminance information in the current block (i.e., the downsampled luminance block), firstly, the brightness difference vector diffY[i][j] is obtained based on the absolute value of the difference between recY[i][j] and the adjacent brightness vector refY[k]; secondly, according to the nonlinear mapping model related to diffY[i][j], the normalized weight vector cWeight[i][j] is derived; finally, the weight vector is vector-multiplied with the adjacent chroma vector of the current block to obtain the chroma prediction value.
• This embodiment provides a decoding method: determining the reference sample value of the first color component of the current block; determining the weighting coefficient according to the reference sample value of the first color component of the current block; determining the first prediction block of the second color component of the current block based on the weighting coefficient and
• the reference sample value of the second color component of the current block, wherein the number of prediction values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing the first filtering process on the first prediction block to determine the second prediction block of the second color component of the current block; and determining the reconstruction value of the second color component sampling points of the current block based on the second prediction block.
• In this way, the reference sample values of the chroma component are assigned weights for weighted prediction; and for the first filtering process, different color format information is fully considered, and chroma and/or luminance sampling filtering is performed based on the different color format information, which can always ensure
• that the spatial resolutions of the chroma component and the luminance component are consistent. This not only ensures the accuracy of the existing luminance information, but also, when using unlost luminance information to predict the chroma component, helps to improve
• the accuracy and stability of the nonlinear mapping model based on the accurate luminance information, which improves the accuracy of chroma prediction, saves code rate, improves encoding and decoding efficiency, and thus improves encoding and decoding performance.
• the embodiment of the present application proposes a weight-based chroma prediction technology that utilizes the above information; on this basis, in order to ensure that the reference luminance information and the reconstructed luminance information in the current block are not lost,
• the reference chroma is matched to the spatial resolution of the reference luminance, and the chroma currently being predicted is matched to the spatial resolution of the currently existing reconstructed luminance; the predicted chroma is then downsampled and filtered on the basis of this matching, and finally the chroma prediction value is post-processed.
• The input of the WCP mode includes: the position of the current block (xTbCmp, yTbCmp), the width of the current block nTbW, and the height of the current block nTbH.
  • the prediction process of the WCP mode can include steps such as determining core parameters, upsampling filtering of reference chroma information, obtaining target information, weight-based chroma prediction, downsampling filtering of the current predicted chroma, and post-processing. After these steps, the chroma prediction value of the current block can be obtained.
  • FIG. 9 shows a schematic flowchart 2 of a decoding method provided by an embodiment of the present application.
  • the method may include:
  • S901 Determine the core parameters of the WCP mode.
  • the core parameters involved in the WCP mode are determined, that is, the core parameters of the WCP mode can be obtained or inferred through configuration or in some way, for example, the core parameters are obtained from the code stream at the decoding end.
• the core parameters of the WCP mode include but are not limited to the control parameter (S), the number of weight-based chroma prediction inputs (inSize), and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH).
  • the control parameter (S) can be used to adjust the nonlinear function in subsequent links or to adjust the data involved in subsequent links.
  • the determination of core parameters is affected by the block size or block content or the number of pixels within the block under certain conditions. For example:
• the current block can be classified according to its block size or block content or the number of pixels within the block, and the same or different core parameters can be determined for the different categories. That is, the control parameter (S), the number of weight-based chroma prediction inputs inSize, or the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) corresponding to different categories may be the same or different. Note that predSizeW and predSizeH can also be the same or different.
  • WCP mode can classify the current block according to the width and height of the current block, and use wcpSizeId to indicate the type of block.
• the control parameter (S), the number of weight-based chroma prediction inputs inSize, and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be the same or different between categories.
• in a first example, the current block is divided into 3 categories according to the width and height of the current block;
• the control parameters (S) of different categories can be set to different values, while the inSize and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) of the different categories can be set to the same values.
• nTbW is the width of the current block,
• nTbH is the height of the current block,
• and the type of block wcpSizeId is defined as follows:
• for the first category, the control parameter (S) is 8,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
• for the second category, the control parameter (S) is 12,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
• for the third category, the control parameter (S) is 16,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values.
• in a second example, the current block is again divided into 3 categories according to the width and height of the current block;
• the control parameter (S), inSize and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be set per category.
• nTbW is the width of the current block,
• nTbH is the height of the current block,
• and the type of block wcpSizeId is defined as follows:
• for the first category, the control parameter (S) is 8,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
• for the second category, the control parameter (S) is 12,
• inSize is (1.5×nTbH+1.5×nTbW),
• and the weight-based chroma prediction outputs nTbH/2×nTbW/2 chroma prediction values;
• wcpSizeId 2: indicates the current block with min(nTbW, nTbH)>16;
• for this category, the WCP control parameter (S) is 16,
• inSize is (nTbH+nTbW),
• and the weight-based chroma prediction outputs nTbH/4×nTbW/4 chroma prediction values.
  • WCP mode can also classify the current block according to the width and height of the current block, using wcpSizeId to indicate the type of block.
• the control parameter (S), the number of weight-based chroma prediction inputs inSize, and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be the same or different between categories.
• in a third example, the current block is divided into 3 categories according to the width and height of the current block;
• the control parameters (S) of different categories can be set to different values, while the inSize and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) of the different categories can be set to the same values.
• nTbW is the width of the current block,
• nTbH is the height of the current block,
• and nTbW×nTbH represents the number of pixels in the current block.
• the type of block wcpSizeId is defined as follows:
• wcpSizeId 0: indicates a current block with (nTbW×nTbH)≤128;
• the control parameter (S) is 10,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
• for wcpSizeId 1, the control parameter (S) is 8,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
• wcpSizeId 2: indicates a current block with (nTbW×nTbH)>256;
• the control parameter (S) is 1, inSize is (2×nTbH+2×nTbW), and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values.
• in a fourth example, the current block is divided into 3 categories according to the width and height of the current block;
• the control parameter (S), inSize and the number of weight-based chroma prediction outputs (arranged as predSizeW×predSizeH) may be set per category.
• nTbW is the width of the current block,
• nTbH is the height of the current block,
• and nTbW×nTbH represents the number of pixels in the current block.
• the type of block wcpSizeId is defined as follows:
• wcpSizeId 0: indicates a current block with (nTbW×nTbH)≤64;
• the control parameter (S) is 16,
• inSize is (2×nTbH+2×nTbW),
• and the weight-based chroma prediction outputs nTbH×nTbW chroma prediction values;
• for wcpSizeId 1, the control parameter (S) is 4, inSize is (1.5×nTbH+1.5×nTbW), and the weight-based chroma prediction outputs nTbH/2×nTbW/2 chroma prediction values;
• wcpSizeId 2: indicates a current block with (nTbW×nTbH)>512;
• the control parameter (S) is 1, inSize is (nTbH+nTbW), and the weight-based chroma prediction outputs nTbH/4×nTbW/4 chroma prediction values.
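• A core-parameter lookup of the kind described in these examples could look like the sketch below, which uses the pixel-count thresholds of the last example (64 and 512); the thresholds, S values, inSize formulas and output sizes differ between the examples, so this table is illustrative rather than normative.

```python
# Hedged sketch of wcpSizeId-based core-parameter selection (pixel-count variant).
def wcp_core_parameters(nTbW, nTbH):
    pixels = nTbW * nTbH
    if pixels <= 64:
        wcp_size_id, S = 0, 16
        in_size = 2 * nTbH + 2 * nTbW
        out_w, out_h = nTbW, nTbH
    elif pixels <= 512:                      # middle-category bound is assumed
        wcp_size_id, S = 1, 4
        in_size = int(1.5 * nTbH + 1.5 * nTbW)
        out_w, out_h = nTbW // 2, nTbH // 2
    else:
        wcp_size_id, S = 2, 1
        in_size = nTbH + nTbW
        out_w, out_h = nTbW // 4, nTbH // 4
    return wcp_size_id, S, in_size, (out_w, out_h)

print(wcp_core_parameters(16, 16))  # (1, 4, 48, (8, 8))
```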
  • the upper area, upper left area, and left area of the current block are used as adjacent areas of the current block (which may also be called “reference areas”), as mentioned above.
• the pixels in these adjacent areas (reference areas) are all decoded reference pixels.
• the reference chroma information refC and the reference luminance information refY can be obtained from the adjacent area. Since the human eye is sensitive to brightness information, and the embodiment of the present application mainly uses the existing brightness information to build a model to predict chroma, from the perspective of not losing the reference brightness information, the spatial resolution of the reference chroma information is matched to follow the spatial resolution of the reference luminance information, and the reference chroma information is upsampled and filtered for the different YUV video formats. In a specific embodiment, as shown in FIG. 10, for S902, this step may include:
  • S1003 Perform upsampling filtering on the reference brightness information.
  • the reference brightness information while ensuring that the reference brightness information is not lost, it can be divided into two operating conditions: (1) the reference brightness information remains unchanged; (2) the reference brightness information is upsampled and filtered.
  • the operation case (1) can be divided into two types of sub-cases: (a) the reference chromaticity information remains unchanged; (b) the reference chromaticity information is upsampled and filtered.
  • operation case (2) there is no subcase in which the reference chromaticity information remains unchanged, and the reference chromaticity information needs to be upsampled and filtered.
• in the case of step S1002, the reference brightness information remains unchanged, and there is no loss of brightness information at this time;
• different sub-case processing of the reference chroma information is then required:
  • Sub-case 1 For video in YUV444 format, the spatial resolutions of the luminance component and the chrominance component are equal, and no processing is required on the reference chrominance component at this time.
  • Sub-case 2 For videos with chroma subsampling characteristics such as YUV422, YUV411, YUV420, etc., the spatial resolution of the luminance component and the chroma component are inconsistent, and the spatial resolution of the chroma component is smaller than the spatial resolution of the luminance component.
  • the reference chroma component needs to be upsampled and filtered according to the YUV video format.
  • the upsampling filtering method can be any of the linear interpolation methods, such as nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, mean interpolation, median interpolation, copy interpolation, etc.; it can also be any of the nonlinear interpolation methods.
• FIG. 6B shows the copy interpolation of the reference chroma using 4×1 reference chroma information and 8×2 reference brightness information.
• the 8×2 diagonally filled block is the reference brightness information,
• and the 4×1 grid-filled block is the reference chroma information at the same position.
• each pixel in the 4×1 grid-filled block is equivalent to the upper-left corner pixel of each 2×2 sub-block in the 8×2 chroma block after upsampling filtering; that is, the pixel value of each chroma pixel in the 2×2 sub-block after copy interpolation is the same,
• and the other three chroma pixels are copied from the upper-left corner pixel (grid-filled).
• in the case of step S1003, upsampling filtering is performed on the reference brightness information, and there is no loss of brightness information at this time.
• as for the reference chrominance information, since the spatial resolution of the luminance component in YUV format video is always greater than or equal to the spatial resolution of the chrominance component, the reference chrominance information must be upsampled and filtered to remain consistent with the spatial resolution of the reference luminance information. At this time, the spatial upsampling frequency of the reference chroma information needs to be determined based on the YUV video format and the spatial upsampling frequency of the reference luminance information.
• the horizontal upsampling frequency of the reference brightness information (i.e., the first horizontal upsampling factor of the aforementioned embodiment) is denoted as S_Hor_RefLuma,
• and the vertical upsampling frequency of the reference brightness information (i.e., the first vertical upsampling factor of the aforementioned embodiment)
• is denoted as S_Ver_RefLuma;
• the horizontal upsampling frequency of the reference chroma information (i.e., the second horizontal upsampling factor of the foregoing embodiment) is denoted as S_Hor_RefChroma,
• and the vertical upsampling frequency of the reference chroma information (i.e., the second vertical upsampling factor of the foregoing embodiment)
• is denoted as S_Ver_RefChroma.
  • S903 Determine the target information of the current block according to the core parameters.
  • the target information of the current block may include reference chroma information (refC), reference luminance information (refY) and reconstructed luminance information (recY).
  • the reference chromaticity information refC and the reference brightness information refY are obtained from the reference area in FIG. 6A.
  • the obtained reference chromaticity information includes but is not limited to obtaining corresponding reference chromaticity information according to the position of the reference brightness information.
  • the obtained reference brightness information includes but is not limited to: the reference brightness reconstruction value of the upper area of the current block and the reference brightness reconstruction value of the left area.
  • refC or refY or recY can be used as weight-based chroma prediction input after pre-processing under certain conditions.
  • obtaining target information can include:
  • the refC here may or may not have gone through upsampling filtering; if pre-processing operations such as filtering are required, refC is obtained after pre-processing operations.
  • the refY here may or may not have gone through upsampling filtering; if pre-processing operations such as filtering are required, refY is obtained after pre-processing operations.
  • obtaining the target information may also include obtaining reconstructed brightness information recY within the current block.
  • recY will be described in detail in subsequent steps.
  • the obtained reconstructed brightness information recY may be without any processing, or may be after upsampling filtering.
  • the following steps can be included:
  • S1103 Perform upsampling filtering on the reconstructed brightness information in the current block.
• the embodiment of the present application compares each brightness pixel in the current block with the reference brightness to obtain the brightness difference, and then obtains, based on the brightness difference,
• the weight of the reference chroma co-located with each reference brightness; finally, weighted prediction is performed based on the weights to obtain the weight-based chroma prediction value of each co-located chroma.
• in the case of step S1102, if no processing is performed on the reconstructed brightness information in the current block, the size of the first prediction block needs to be inferred and output according to the YUV video format:
• for YUV444 format video, predSizeW is set equal to nTbW
• and predSizeH is set equal to nTbH;
• for YUV422 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/2 of that of the luminance component, so predSizeW is set equal to 2×nTbW and predSizeH is set equal to nTbH;
• for YUV411 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/4 of that of the luminance component, so predSizeW is set equal to 4×nTbW and predSizeH is set equal to nTbH;
• for YUV420 format video, predSizeW is set equal to 2×nTbW
• and predSizeH is set equal to 2×nTbH.
• the upsampling filtering method can be any linear interpolation method, such as nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, mean interpolation, median interpolation, copy interpolation, etc.; it can also be any nonlinear interpolation method, such as an interpolation algorithm based on wavelet transform, an interpolation algorithm based on edge information, etc.; upsampling can also be performed based on a convolutional neural network, and so on.
  • the size of the first prediction block needs to be inferred and output based on the YUV video format and the spatial upsampling frequency of the reconstructed brightness information.
• the horizontal upsampling frequency of the reconstructed brightness information (i.e., the third horizontal upsampling factor of the aforementioned embodiment) is denoted as S_Hor_RecLuma,
• and the vertical upsampling frequency of the reconstructed brightness information (i.e., the third vertical upsampling factor of the aforementioned embodiment) is denoted as S_Ver_RecLuma.
• for YUV444 format video, predSizeW is set equal to S_Hor_RecLuma×nTbW and predSizeH is set equal to S_Ver_RecLuma×nTbH;
• for YUV422 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/2 of that of the luminance component, so predSizeW is set equal to 2×S_Hor_RecLuma×nTbW and predSizeH is set equal to S_Ver_RecLuma×nTbH;
• for YUV411 format video, the luminance component and the chrominance component have the same vertical resolution, but the horizontal resolution of the chrominance component is 1/4 of that of the luminance component, so predSizeW is set equal to 4×S_Hor_RecLuma×nTbW and predSizeH is set equal to S_Ver_RecLuma×nTbH;
• for YUV420 format video, the horizontal resolution and vertical resolution of the chrominance component are both 1/2 of those of the luminance component,
• so predSizeW is set equal to 2×S_Hor_RecLuma×nTbW
• and predSizeH is set equal to 2×S_Ver_RecLuma×nTbH.
• the reconstructed brightness information recY is upsampled and filtered to generate the input reconstructed brightness information in_recY; an example of the interpolation process is as follows:
• in_recY is equally divided into recSizeW×recSizeH sub-blocks, and each sample of recY is placed at the lower-right corner of its corresponding sub-block.
  • the diagonal filling is the upper reference reconstructed brightness pixel refY_T
  • the vertical bar filling is the left reference reconstructed brightness pixel refY_L
  • the grid filling is The position where the reconstructed brightness information recY of the current block is filled.
• the upsampling method uses linear interpolation; that is, the value of each interpolated point (bar-filled) between two upsampling reference brightness points (grid-filled) is the weighted average of the two upsampling reference brightness points.
• the weight of the upsampling reference point on the left is (upHor-dX)/upHor, and the weight of the upsampling reference point on the right is dX/upHor.
• FIG. 14 shows an example of the weights, where
• upHor is 4:
• for the first interpolated point, the weight of the upsampling reference point on the left is 3/4 and the weight of the upsampling reference point on the right is 1/4;
• for the second interpolated point, the weight of the upsampling reference point on the left is 2/4 and the weight of the upsampling reference point on the right is 2/4;
• for the third interpolated point, the weight of the upsampling reference point on the left is 1/4 and the weight of the upsampling reference point on the right is 3/4.
  • the upsampling method also uses linear interpolation.
  • the weight of the upper upsampling reference point is (upVer-dY)/upVer
  • the weight is only related to the vertical upsampling frequency upVer.
  • the final in_recY can be obtained according to the above process.
  • the "first vertical and then horizontal” upsampling method can also be used, and the reconstructed brightness value after the "first horizontal and then vertical” upsampling can also be used.
  • the reconstructed brightness values after "first vertical and then horizontal” upsampling are averaged to obtain the final input reconstructed brightness value in_recY.
• the convolution operation in a neural network can also be used to replace the upsampling operation; the embodiments of this application do not impose any restrictions on this. A sketch of the linear interpolation upsampling described above is given below.
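• As an illustration only, the following is a minimal sketch of the "first horizontal, then vertical" linear interpolation upsampling described above, assuming simple edge clamping instead of the refY_T/refY_L boundary references and ignoring the lower-right sub-block placement:

```python
import numpy as np

def upsample_bilinear(rec_y: np.ndarray, up_hor: int, up_ver: int) -> np.ndarray:
    """Sketch of 'first horizontal, then vertical' linear interpolation: each
    interpolated sample between two reference samples is a weighted average with
    weights (upHor - dX)/upHor and dX/upHor (analogously in the vertical pass)."""
    h, w = rec_y.shape
    hor = np.zeros((h, w * up_hor), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            left = rec_y[y, x]
            right = rec_y[y, min(x + 1, w - 1)]        # clamp at the right edge
            for dX in range(up_hor):
                hor[y, x * up_hor + dX] = ((up_hor - dX) * left + dX * right) / up_hor
    out = np.zeros((h * up_ver, w * up_hor), dtype=np.float64)
    for x in range(w * up_hor):
        for y in range(h):
            top = hor[y, x]
            bottom = hor[min(y + 1, h - 1), x]         # clamp at the bottom edge
            for dY in range(up_ver):
                out[y * up_ver + dY, x] = ((up_ver - dY) * top + dY * bottom) / up_ver
    return out
```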
• the finally acquired input target information may include: inSize pieces of reference chroma information refC, inSize pieces of reference brightness information refY, and the reconstructed brightness information recY in the current block.
• predSizeH is greater than or equal to the height nTbH of the current block, and predSizeW is greater than or equal to the width nTbW of the current block; in each case the two values may be the same or different.
  • weight-based chroma prediction includes the following operations: obtaining weights and performing weighted prediction based on the weights to obtain weight-based chroma prediction values.
  • the process of obtaining the weight includes constructing the brightness difference vector and calculating the weight. The detailed calculation process is as follows:
  • the method may further include:
  • S1601 For the pixel to be predicted at each brightness position, construct a brightness difference vector using the reference chroma information, the reference brightness information included in the target information, and the reconstructed brightness information of the current block.
  • abs(x) represents the absolute value of x.
  • linear or nonlinear numerical processing can be performed on the brightness difference vector of the pixel to be predicted.
  • the value of the brightness difference vector of the pixel to be predicted can be scaled according to the control parameter S in the core parameter.
  • S1602 For the pixel to be predicted at each brightness position, use a nonlinear function to calculate a weight vector according to the brightness difference vector.
• a weight model is used to process the brightness difference vector diffY[i][j] corresponding to each pixel to be predicted Cpred[i][j] to obtain the corresponding floating-point weight.
  • the weight model here includes but is not limited to nonlinear functions such as nonlinear normalization functions, nonlinear exponential normalization functions, etc., without any limitation.
  • the nonlinear Softmax function can be used as the weight model
  • the brightness difference vector diffY[i][j] corresponding to each pixel to be predicted is used as the input of the weight model
• the weight model outputs the floating-point weight vector cWeightFloat[i][j] corresponding to each pixel to be predicted; the calculation formula is as follows:
• the weight model can also be adjusted according to the control parameter (S) in the core parameters. For example, since the size of the current block is flexible, the weight model can be adjusted according to the control parameter (S). Taking the nonlinear Softmax function as an example, different control parameters can be selected to adjust the function according to the block classification category to which the current block belongs; in this case, the weight vector calculation formula corresponding to each pixel to be predicted is as follows:
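• As an illustration only (not the exact formula referred to above), the following is a minimal sketch of a Softmax-style weight model with a control parameter S; diffY holds the absolute luma differences for all reference samples of one pixel to be predicted, and smaller differences receive larger weights:

```python
import numpy as np

def wcp_weights(diff_y: np.ndarray, S: float = 1.0) -> np.ndarray:
    """Softmax-style weight model sketch: scale the luma differences by the
    control parameter S, negate them so smaller differences score higher,
    and normalize with an exponential so the weights sum to 1."""
    scores = -diff_y.astype(np.float64) / S
    scores -= scores.max()                 # numerical stability
    e = np.exp(scores)
    return e / e.sum()                     # floating-point weight vector cWeightFloat

# e.g. for one pixel to be predicted, with ref_y the reference luma vector
# and rec_y_ij the reconstructed luma at (i, j):
# w = wcp_weights(np.abs(ref_y - rec_y_ij), S=1.0)
```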
  • the weighting coefficient can also be fixed-pointed. Therefore, in some embodiments, the method further includes: if the weighting coefficient is a floating-point weighting coefficient, performing fixed-point processing on the floating-point weighting coefficient to obtain a fixed-point weighting coefficient.
  • cWeightFloat can be fixed-point processed, as follows:
• round(x) = Sign(x) × Floor(abs(x) + 0.5).
  • Floor(x) represents the largest integer less than or equal to x
  • abs(x) represents the absolute value of x.
  • Shift is the shift amount set in the fixed-point operation to improve accuracy.
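• A minimal sketch of the fixed-pointing just described, assuming the floating-point weights are scaled by 2^Shift before rounding (the Shift value used here is an assumption):

```python
import math

def to_fixed_point(c_weight_float, shift=16):
    """Convert floating-point weights to fixed-point integers by scaling with
    2^Shift and rounding with round(x) = Sign(x) * Floor(abs(x) + 0.5)."""
    def rnd(x):
        return int(math.copysign(math.floor(abs(x) + 0.5), x))
    return [rnd(w * (1 << shift)) for w in c_weight_float]
```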
  • the chroma prediction value of the pixel to be predicted is calculated based on the weight vector cWeight[i][j] (or cWeightFloat[i][j]) corresponding to each pixel to be predicted and the reference chroma information refC.
• the reference chromaticity information refC of each pixel to be predicted Cpred[i][j] is multiplied element by element by the corresponding weight vector to obtain subC[i][j] (or subCFloat[i][j]); the multiplication results are accumulated to obtain the chroma prediction value Cpred[i][j] of each pixel to be predicted, that is, weighted prediction of the chroma component is achieved.
• this may include: performing a weighted calculation based on the floating-point weighting coefficients and the reference chromaticity information to obtain the initial prediction value of the pixel to be predicted; and performing fixed-point processing on the initial prediction value to obtain the target prediction value of the pixel to be predicted.
  • calculation formula is as follows:
  • subCFloat[i][j][k] can also be fixed-point processed. In order to retain a certain calculation accuracy during the fixed-point process, it can be multiplied by a coefficient, as shown below:
• CpredFloat[i][j] is fixed-pointed, for example:
• a weighted calculation can be performed based on the fixed-point weighting coefficients and the reference chromaticity information to obtain the initial prediction value of the pixel to be predicted; the initial prediction value is then subjected to fixed-point compensation processing to obtain the target prediction value of the pixel to be predicted.
  • calculation formula is as follows:
• Offset = 1 << (Shift1 - 1)
• the chroma prediction value should be limited to a preset range. If it exceeds the preset range, corresponding corrective operations are required. For example, in a possible implementation, a clamping operation can be performed on the chroma prediction value Cpred[i][j], as follows:
• BitDepth is the bit depth required for the chroma pixel value, ensuring that all chroma prediction values in the prediction block are between 0 and (1 << BitDepth) - 1.
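• Putting the weighted accumulation, the rounding offset and the clamping together, a minimal sketch for one pixel to be predicted is given below (the exact fixed-point layout is an assumption):

```python
def predict_chroma_pixel(ref_c, c_weight, shift, bit_depth):
    """Weighted chroma prediction sketch: accumulate cWeight[k] * refC[k],
    add the rounding offset, shift back down, and clip to the valid range."""
    acc = 0
    for k in range(len(ref_c)):
        acc += c_weight[k] * ref_c[k]          # subC[i][j][k], accumulated
    offset = 1 << (shift - 1)                  # rounding offset
    pred = (acc + offset) >> shift
    return max(0, min(pred, (1 << bit_depth) - 1))   # clamp to [0, (1<<BitDepth)-1]
```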
  • S905 Perform downsampling filtering on the calculated first prediction block to determine the second prediction block of the current block, and the second prediction block has the same spatial resolution as the chroma component of the current block.
  • chroma prediction is performed on each position in the reconstructed brightness information in the current block to finally obtain the first prediction block of predSizeW ⁇ predSizeH size.
• the size of the first prediction block must be greater than or equal to the size of the current block, so the first prediction block at this point needs to be down-sampled and filtered to restore its size to that of the current block.
  • the horizontal down-sampling factor downHor is calculated based on the width predSizeW of the output first prediction block and the width nTbW of the current block; similarly, the vertical down-sampling factor is calculated based on the height predSizeH of the output first prediction block and the height nTbH of the current block.
  • the calculation method is as follows:
  • the first prediction block predWcp is down-sampled and filtered according to the following conditions, and the sample values are filled into the second prediction block predSamples:
• predSamples[x][y] = (predWcp[downHor×x-1][y] + 2×predWcp[downHor×x][y] + predWcp[downHor×x+1][y] + 2) >> 2
• predSamples[x][y] = (predWcp[x][downVer×y-1] + 2×predWcp[x][downVer×y] + predWcp[x][downVer×y+1] + 2) >> 2
• predSamples[x][y] = (predWcp[downHor×x-1][downVer×y] + predWcp[downHor×x-1][downVer×y+1] + 2×predWcp[downHor×x][downVer×y] + 2×predWcp[downHor×x][downVer×y+1] + predWcp[downHor×x+1][downVer×y] + predWcp[downHor×x+1][downVer×y+1] + 4) >> 3
  • predSamples is the chroma prediction value after final downsampling.
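• As an illustration of the case where both directions are down-sampled, a minimal sketch following the 1,2,1 / 1,1 weighting and the +4 >> 3 rounding of the formula above is given below (rows are treated as the vertical coordinate; the boundary clamping is an assumption):

```python
import numpy as np

def downsample_pred(pred_wcp: np.ndarray, down_hor: int, down_ver: int) -> np.ndarray:
    """Down-sample the first prediction block predWcp into predSamples using a
    6-tap weighted average (weights 1,2,1 horizontally and 1,1 vertically)."""
    ph, pw = pred_wcp.shape
    n_tb_h, n_tb_w = ph // down_ver, pw // down_hor
    pred_samples = np.zeros((n_tb_h, n_tb_w), dtype=np.int64)
    p = pred_wcp.astype(np.int64)
    for y in range(n_tb_h):
        for x in range(n_tb_w):
            cx, cy = down_hor * x, down_ver * y
            lx, rx = max(cx - 1, 0), min(cx + 1, pw - 1)   # clamp left/right
            by = min(cy + 1, ph - 1)                       # clamp bottom
            pred_samples[y, x] = (p[cy, lx] + p[by, lx]
                                  + 2 * p[cy, cx] + 2 * p[by, cx]
                                  + p[cy, rx] + p[by, rx] + 4) >> 3
    return pred_samples
```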
  • S906 Perform a post-processing operation on the chroma prediction value of the second prediction block to determine the target prediction block of the current block.
• predWcp is the chroma prediction value output by the weight-based chroma prediction.
• predWcp can be smoothing-filtered and the result used as the final chroma prediction value predSamples.
  • a position-related correction process can be performed on predWcp. For example: use reference pixels with close spatial positions to calculate the chroma compensation value for each pixel to be predicted, use this chroma compensation value to correct predWcp, and use the corrected prediction value as the final chroma prediction value predSamples.
  • the chroma prediction values calculated by other chroma prediction modes can be weighted and fused with the chroma prediction value predWcp calculated by WCP, and this fusion result can be used as the final chroma prediction value predSamples .
  • the chroma prediction value predicted by CCLM mode and the chroma prediction value predWcp calculated by WCP can be weighted with equal or unequal weight, and the weighted result is used as the final chroma prediction value predSamples.
  • a neural network model can be used to modify the WCP prediction output predWcp, etc. This embodiment of the present application does not limit this in any way.
  • the number of chroma points that need to be predicted is the same as the number of luma points in the currently reconstructed luma area, so the chroma prediction block needs to be down-sampled to restore the same size as the original chroma block.
  • This approach can ensure the accuracy of the existing brightness information. Accurate brightness information is conducive to improving the accuracy and stability of the nonlinear mapping model, thereby increasing the accuracy of the chroma prediction value.
  • the method may include:
  • S1703 For the pixels to be predicted at partial brightness positions, construct a brightness difference vector using the reference chroma information, the reference brightness information included in the target information, and the reconstructed brightness information of the current block.
  • the correction process here includes a clip operation.
  • the corresponding position can be selected according to the YUV video format characteristics for chroma prediction. It is assumed that the position of the current brightness point is CurRecLuma(i,j) , then the position of the point where chromaticity prediction needs to be performed is CurPredChroma(x,y), and the coordinate relationship between the two positions is as follows:
• This embodiment provides a decoding method, and the specific implementation of the foregoing embodiment is elaborated through this embodiment. It can be seen that, according to the technical solution of the foregoing embodiment, the accuracy of the chroma prediction value under the WCP prediction technology can be improved. Specifically, the weighting coefficient is calculated from the reference information of the adjacent area and the brightness information of the current block, and this weighting coefficient is used in the chroma prediction of the current block, fully exploiting the correlation between the brightness information of the current block and the adjacent area and involving this correlation in the chroma prediction of the current block, thereby improving the accuracy of chroma prediction.
• this decoding method was tested on the test software ECM3.1 with a 48-frame interval under All Intra conditions, and can obtain -0.10%, -0.74%, and -0.61% BD-rate changes on Y, Cb, and Cr respectively (that is, the average code rate change under the same Peak Signal to Noise Ratio (PSNR)). It performs especially well on large-resolution sequences: on the Class A1 test sequences, it can achieve a -0.24% BD-rate change on Y.
• the embodiment of the present application does not perform down-sampling filtering on the reference brightness of the adjacent areas or on the currently reconstructed brightness area, but instead keeps the brightness unchanged or upsamples it. Then, the absolute difference between each brightness point in the currently reconstructed brightness information and the reference brightness is calculated, nonlinear mapping is performed to obtain the weight of each reference chroma, and all reference chroma points are weighted to obtain the predicted value of the co-located chroma point.
  • This can ensure the accuracy of existing brightness information, and accurate brightness information is conducive to improving the accuracy and stability of the nonlinear mapping model, further increasing the accuracy of chroma prediction.
  • FIG. 18 shows a schematic flowchart 1 of an encoding method provided by an embodiment of the present application. As shown in Figure 18, the method may include:
  • S1801 Determine the reference sample value of the first color component of the current block.
  • the encoding method in the embodiment of the present application is applied to an encoding device, or an encoding device (which may also be referred to as an "encoder" for short) integrated with the encoding device.
  • the encoding method in the embodiment of the present application may specifically refer to an intra-frame prediction method, and more specifically, a weight-based chroma prediction method.
  • the video image may be divided into multiple coding blocks, and each coding block may include a first color component, a second color component, and a third color component
• the current block here refers to the coding block in the video image that is currently to be processed for intra prediction.
• if the current block performs prediction on the first color component, and the first color component is a brightness component, that is, the component to be predicted is a brightness component, then the current block can also be called a brightness prediction block;
• if the current block performs prediction on the second color component, and the second color component is a chroma component, that is, the component to be predicted is a chroma component, then the current block can also be called a chroma prediction block.
• the reference information of the current block may include the value of the first color component sampling point in the adjacent area of the current block and the value of the second color component sampling point in the adjacent area of the current block, which can be determined based on the coded pixels in the adjacent area of the current block.
  • the adjacent area of the current block may include at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • determining the reference sample value of the first color component of the current block may include: determining the first color of the current block based on the value of the first color component sampling point in an adjacent area of the current block. The reference sample value of the component.
• the reference pixel of the current block may refer to a reference pixel point adjacent to the current block, and may also be called a first color component sampling point or a second color component sampling point in the adjacent area of the current block, represented by Neighboring Sample or Reference Sample.
  • the first color component is the brightness component
• the second color component is the chrominance component; at this time, the value of the first color component sampling point in the adjacent area of the current block is expressed as the reference brightness information corresponding to the reference pixel of the current block, and the value of the second color component sampling point in the adjacent area of the current block is expressed as the reference chromaticity information corresponding to the reference pixel of the current block.
• the method may further include: filtering the first color component sampling point in the adjacent area, and determining the value of the first color component sampling point.
  • the first sampling point set is formed according to the first color component sampling points in the adjacent area of the current block; then the first sampling point set can be filtered to determine the first color The value of the component sampling point.
  • filtering the first color component sampling points in adjacent areas and determining the value of the first color component sampling point may include:
• the color component intensity can be represented by color component information, such as reference brightness information, reference chromaticity information, etc.; here, the larger the value of the color component information, the higher the color component intensity.
• the first color component sampling point in the adjacent area can be filtered according to the position of the sampling point, or filtered according to the color component intensity, so as to determine the effective first color component sampling points obtained by filtering, and then determine the value of the first color component sampling point.
• determining the reference sample value of the first color component of the current block may also include: performing a second filtering process on the value of the first color component sampling point to obtain the filtered adjacent sample value of the first color component of the current block; and determining the reference sample value of the first color component of the current block according to the filtered adjacent sample value of the first color component of the current block. In this embodiment of the present application, the number of filtered adjacent sample values of the first color component of the current block is greater than the number of values of the first color component sampling points.
  • the second filtering process may be an upsampling filtering process.
  • the first color component is a brightness component.
  • the reference brightness information may remain unchanged, or the reference brightness information may be subjected to upsampling filtering.
• determining the reference sample value of the first color component of the current block may further include: determining the reference sample value of the first color component of the current block based on the reconstructed value of the first reference color component sampling point in the current block.
  • the first reference color component may be a brightness component; then, the reconstructed value of the first reference color component sampling point in the current block is the reconstructed brightness information of the current block.
• determining the reference sample value of the first color component of the current block may also include: performing a third filtering process on the reconstructed value of the first reference color component sampling point in the current block to obtain the filtered sample value of the first reference color component sampling point in the current block; and determining the reference sample value of the first color component of the current block based on the filtered sample value of the first reference color component sampling point in the current block.
  • the number of filtered samples of the first reference color component sampling point in the current block is greater than the number of reconstructed values of the first reference color component sampling point in the current block.
  • the third filtering process may be an upsampling filtering process.
  • the first reference color component is the brightness component.
• the reconstructed brightness information in the current block can be kept unchanged, or the reconstructed brightness information in the current block can be subjected to upsampling filtering.
  • the reference sample value of the first color component of the current block can also be set as the brightness difference information. Therefore, in a possible implementation, the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the reconstructed value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the reconstructed value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the filtered sample value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the filtered sample value of the first reference color component sampling point.
  • the reference brightness information in the adjacent area of the current block can be upsampled and filtered, and the reconstructed brightness information in the current block can also be upsampled and filtered.
• Upsampling filtering can be performed on both, or on neither; the brightness difference information is then determined according to the different combinations, and the brightness difference information is used as the reference sample value of the first color component of the current block.
  • S1802 Determine the weighting coefficient according to the reference sample value of the first color component of the current block.
• determining the weighting coefficient based on the reference sample value of the first color component of the current block may include: determining the value corresponding to the reference sample value of the first color component under a preset mapping relationship; and setting the weighting coefficient equal to that value.
• the reference sample value of the first color component may be the absolute value of the difference between the filtered adjacent sample value of the first color component of the current block and the filtered sample value of the first reference color component sampling point in the current block.
• the first color component is a color component different from the second color component to be predicted in this embodiment of the present application.
• At least one pixel to be predicted in the current block can be selected, and the brightness difference between its reconstructed brightness and the reference brightness in the adjacent area can be calculated respectively (expressed as an absolute difference); if this brightness difference is small, the corresponding weighting coefficient can be given a larger weight, and conversely a smaller weight; that is, the weighting coefficient and the brightness difference are approximately inversely proportional.
• determining the value corresponding to the reference sample value of the first color component under the preset mapping relationship may include: determining the first factor; determining the first product value based on the first factor and the reference sample value of the first color component; and determining the value corresponding to the first product value under the preset mapping relationship.
  • determining the first factor may include: the first factor is a preset constant value.
  • determining the first factor may include: determining the value of the first factor according to the size parameter of the current block.
  • the method may further include: determining the value of the first factor according to a preset mapping lookup table between the size parameter of the current block and the value of the first factor.
  • the size parameter of the current block may include at least one of the following parameters: the width of the current block, the height of the current block, and the product of the width and height of the current block.
  • a classification method may be used to fix the value of the first factor.
  • the current block size parameter is divided into three categories, and the value of the first factor corresponding to each category is determined.
  • embodiments of the present application may also pre-store a lookup table mapping the size parameters of the current block and the value of the first factor, and then determine the value of the first factor based on the lookup table.
  • Table 1 shows a schematic diagram of the correspondence between the first factor and the size parameter of the current block.
  • determining the first factor may include: determining the value of the first factor based on the number of reference pixels in the current block.
  • the method may further include: determining the value of the first factor according to a preset mapping lookup table between the number of reference pixels of the current block and the value of the first factor.
  • this embodiment of the present application can divide the number of reference pixels into three categories, and still use a classification method to fix the value of the first factor.
  • the current block is divided into three categories according to the number of reference pixels, and the value of the first factor corresponding to each category is determined.
  • embodiments of the present application may also pre-store a lookup table mapping the number of reference pixels of the current block and the value of the first factor, and then determine the value of the first factor based on the lookup table.
  • the aforementioned Table 2 shows a schematic diagram of the correspondence between the first factor and the number of reference pixels of the current block.
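• As an illustration only, a minimal sketch of determining the first factor through a pre-stored lookup table is given below; the classification thresholds and factor values are purely illustrative placeholders, not the contents of Table 1 or Table 2:

```python
def first_factor_from_size(n_tb_w: int, n_tb_h: int, size_to_factor: dict) -> float:
    """Derive the first factor from the size parameter of the current block
    (here: width * height) via a classification lookup table."""
    size = n_tb_w * n_tb_h
    for threshold in sorted(size_to_factor):
        if size <= threshold:
            return size_to_factor[threshold]
    return size_to_factor[max(size_to_factor)]   # largest category

# Usage with a purely illustrative 3-category table:
# factor = first_factor_from_size(16, 16, {256: 0.25, 1024: 0.5, 1 << 30: 1.0})
```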
  • determining the first product value based on the first factor and the reference sample of the first color component may include:
  • the first product value is set to a value obtained by adding and bit-shifting the reference sample of the first color component according to the first factor.
  • the first factor is equal to 0.25 and the reference sample value of the first color component is represented by Ref
• the first product value can be equal to 0.25 × Ref
• 0.25 × Ref can be expressed as Ref/4, that is, Ref >> 2.
• floating-point numbers may be converted into addition and shift operations; that is to say, there are no restrictions on how the first product value is calculated.
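• A one-line sketch of this replacement, assuming the first factor is 0.25 (i.e. 1/2^2):

```python
def first_product(ref: int) -> int:
    """When the first factor is 0.25, the multiplication 0.25 * Ref can be
    replaced by a right shift: Ref >> 2."""
    return ref >> 2
```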
• the first reference color component can also be a chrominance component; that is, at least one pixel to be predicted in the current block is selected, and the chromaticity difference between its reconstructed chromaticity and the reference chromaticity in the adjacent area is calculated respectively (expressed as an absolute difference).
• if this chromaticity difference is small, the corresponding weighting coefficient (expressed by w_k) can be given a larger weight; conversely, if it is large, a smaller weight can be given.
• for the reference sample value of the first color component, it can also be multiplied by a preset multiplier; the preset multiplier here is the second factor described in the embodiment of this application.
• the method may also include: performing a least squares calculation based on the first color component values of the reference pixels and the second color component values of the reference pixels to determine the second factor.
  • the least squares calculation is performed on the chrominance component values and brightness component values of the N reference pixels to obtain the second factor.
  • the least squares regression calculation is as shown in the aforementioned equation (5), and the second factor can be calculated.
  • the preset mapping relationship may be a Softmax function, as shown in the aforementioned equation (6) or equation (7).
  • the Softmax function is a normalized exponential function, but the embodiment of the present application may not require normalization, and its value is not limited to the range [0,1].
  • the preset mapping relationship in addition to the Softmax function, in other embodiments, may be a weighted function that has an inverse relationship with the reference sample value of the first color component, such as It is represented by the aforementioned formula (8) or formula (9).
• the preset mapping relationship can be as shown in Equation (4), Equation (6) or Equation (7), or Equation (8) or Equation (9); it can even be any function model reflecting that the closer the reference brightness value of a reference pixel is to the brightness reconstruction value of the pixel to be predicted in the current block, the more important the reference chromaticity value of that reference pixel is to the pixel to be predicted in the current block.
• the operation can even be simplified, for example by using an array-element table lookup method to reduce part of the calculation operations, etc.; the embodiments of this application are not specifically limited in this regard.
• the weighting coefficient can be determined according to the reference sample value of the first color component.
• N represents the number of reference samples of the second color component.
  • the sum of these N weighting coefficients is equal to 1, and each weighting coefficient is a value greater than or equal to 0 and less than or equal to 1.
  • S1803 Determine the first prediction block of the second color component of the current block based on the weighting coefficient and the reference sample value of the second color component of the current block.
• chroma prediction is performed for each reconstructed brightness position in the current block, so that the obtained first prediction block is larger than the original current block; that is, the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block.
• the method may also include: determining the reference sample value of the second color component of the current block based on the value of the second color component sampling point in the adjacent area of the current block.
  • the adjacent area may include at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
• the method may further include: performing a fourth filtering process on the values of the second color component sampling points in the adjacent areas of the current block to obtain the filtered adjacent sample values of the second color component of the current block; and determining the reference sample value of the second color component of the current block based on the filtered adjacent sample values of the second color component of the current block. In this embodiment of the present application, the number of filtered adjacent sample values of the second color component of the current block is greater than the number of values of the second color component sampling points in the adjacent areas of the current block.
  • the fourth filtering process may be upsampling filtering; where the upsampling rate is a positive integer multiple of 2.
  • the first color component is the brightness component
  • the second color component is the chrominance component.
  • the reference chrominance information can be upsampled and filtered.
  • the chroma block size is M ⁇ N
  • the number of reference chroma information in adjacent areas is M+N.
• after upsampling filtering, 2M+2N chroma reference sample values can be obtained; weighted prediction is then used to obtain 2M×2N chroma prediction values, which are down-sampled and filtered to obtain M×N chroma prediction values as the final prediction values.
• alternatively, if the chroma block size is M×N and the number of reference chroma information in adjacent areas is M+N, only these M+N pieces of reference chroma information can be used to calculate 2M×2N chroma prediction values with the weighting coefficients, which are then down-sampled and filtered to obtain M×N chroma prediction values as the final prediction values. In addition, for a current block of 2M×2N in YUV420 format, the chroma block size is M×N and the number of reference chroma information in adjacent areas is M+N; after upsampling and filtering, 4M+4N chroma references are obtained.
• for example, if the chroma block size is 2M×2N and the number of reference chroma information in adjacent areas is 2M+2N, then 4M+4N chroma reference sample values are obtained after upsampling and filtering.
  • the second filtering process can use the first filter
  • the third filtering process can use the second filter
• the fourth filtering process can use the third filter. For these three filters, the first filter, the second filter and the third filter may all be upsampling filters; since the data being processed differs, the upsampling rates of the filters may also differ, so the three filters may be the same or different.
  • the first filter, the second filter and the third filter may all be neural network filters, and the embodiments of the present application do not limit this in any way.
• the second filtering process, the third filtering process or the fourth filtering process can also be performed based on the current color format information.
• the method may also include: based on the color format information, performing a fourth filtering process on the values of the second color component sampling points in the adjacent areas of the current block, to obtain the filtered adjacent sample values of the second color component of the current block.
• the fourth filtering process may also include: if the color format information indicates 4:2:0 sampling, performing upsampling filtering on the values of the second color component sampling points in the adjacent area of the current block, where the upsampling rate is a positive integer multiple of 2.
• if the color format information indicates that the spatial resolutions of brightness and chrominance are equal (such as the YUV444 format), then no processing is required for the reference chromaticity information; if the color format information indicates that the spatial resolutions of brightness and chroma are inconsistent (such as YUV422/YUV411/YUV420 and other videos with chroma subsampling characteristics), and the spatial resolution of the chroma component is smaller than that of the luminance component, then the reference chroma information obtained from the adjacent areas needs to be upsampled and filtered.
• determining the first prediction block of the second color component may include: determining the weighted values of the reference sample values of the second color component and the corresponding weighting coefficients; and setting the predicted value of the second color component sampling point in the first prediction block equal to the sum of the N weighted values, where N represents the number of reference samples of the second color component and N is a positive integer.
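• Restated as a formula (a sketch; predC, refC and w_k are names assumed to match the description above, with the weights as characterized earlier):

```latex
\mathrm{predC}[i][j] = \sum_{k=0}^{N-1} w_k \cdot \mathrm{refC}_k ,
\qquad \sum_{k=0}^{N-1} w_k = 1 , \quad 0 \le w_k \le 1
```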
• when upsampling filtering is performed on both the reference brightness information and the reference chrominance information in the adjacent area of the current block, the method may also include: based on the color format information, using the first horizontal upsampling factor and the first vertical upsampling factor to perform a second filtering process on the values of the first color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the first color component of the current block; and using the second horizontal upsampling factor and the second vertical upsampling factor to perform a fourth filtering process on the values of the second color component sampling points in the adjacent areas of the current block to obtain the filtered adjacent sample values of the second color component of the current block.
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, it is determined that the second horizontal upsampling factor is equal to the first horizontal upsampling factor, and the second vertical upsampling factor is equal to the first vertical upsampling factor;
• if the color format information indicates 4:2:2 sampling, it is determined that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor, and the second vertical upsampling factor is equal to the first vertical upsampling factor;
• if the color format information indicates 4:1:1 sampling, it is determined that the second horizontal upsampling factor is equal to 4 times the first horizontal upsampling factor, and the second vertical upsampling factor is equal to the first vertical upsampling factor;
• if the color format information indicates 4:2:0 sampling, it is determined that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor, and the second vertical upsampling factor is equal to 2 times the first vertical upsampling factor.
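• A minimal sketch of this per-format derivation (the function name and string keys are assumptions):

```python
def chroma_upsampling_factors(color_format: str, up_hor_luma: int, up_ver_luma: int):
    """Derive the second (chroma) upsampling factors from the first (luma)
    upsampling factors according to the color format, as enumerated above."""
    if color_format == "4:4:4":
        return up_hor_luma, up_ver_luma
    if color_format == "4:2:2":
        return 2 * up_hor_luma, up_ver_luma
    if color_format == "4:1:1":
        return 4 * up_hor_luma, up_ver_luma
    if color_format == "4:2:0":
        return 2 * up_hor_luma, 2 * up_ver_luma
    raise ValueError(f"unsupported color format: {color_format}")
```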
  • upsampling filtering is performed on the reference brightness information, and at this time there is no loss of the brightness information.
• for the reference chrominance information, since the spatial resolution of the luminance component in YUV video is always greater than or equal to that of the chrominance component, the reference chrominance information must be upsampled and filtered so that it remains consistent with the spatial resolution of the reference luminance information.
• the spatial upsampling frequency of the reference chrominance information (i.e., the second horizontal upsampling factor and the second vertical upsampling factor) is related to the spatial upsampling frequency of the reference luminance information (i.e., the first horizontal upsampling factor and the first vertical upsampling factor).
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, it is determined that the width of the first prediction block is equal to the width of the current block, and the height of the first prediction block is equal to the height of the current block;
• if the color format information indicates 4:2:2 sampling, it is determined that the width of the first prediction block is equal to 2 times the width of the current block, and the height of the first prediction block is equal to the height of the current block;
• if the color format information indicates 4:1:1 sampling, it is determined that the width of the first prediction block is equal to 4 times the width of the current block, and the height of the first prediction block is equal to the height of the current block;
• if the color format information indicates 4:2:0 sampling, it is determined that the width of the first prediction block is equal to 2 times the width of the current block, and the height of the first prediction block is equal to 2 times the height of the current block.
• after the brightness of the current block is upsampled, the size of the first prediction block needs to be inferred according to the YUV video format and the spatial upsampling frequency of the current block brightness.
• it may include: based on the color format information, using the third horizontal upsampling factor and the third vertical upsampling factor to perform a third filtering process on the reconstructed value of the first reference color component sampling point in the current block, to obtain the filtered sample value of the first reference color component sampling point in the current block.
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, the width of the first prediction block is equal to the product of the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor;
• if the color format information indicates 4:2:2 sampling, the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor;
• if the color format information indicates 4:1:1 sampling, the width of the first prediction block is equal to the product of 4 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor;
• if the color format information indicates 4:2:0 sampling, the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of 2 times the height of the current block and the third vertical upsampling factor.
  • the width of the first prediction block is represented by predSizeW
  • the height of the first prediction block is represented by predSizeH
  • the width of the current block is represented by nTbW
  • the height of the current block is represented by nTbH.
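• As an illustration only, a minimal sketch combining the per-format ratio and the third upsampling factors to compute predSizeW and predSizeH is given below (the function and variable names are assumptions):

```python
def first_pred_block_size(color_format: str, n_tb_w: int, n_tb_h: int,
                          s_hor_rec_luma: int = 1, s_ver_rec_luma: int = 1):
    """Compute predSizeW and predSizeH from the current block size (nTbW, nTbH),
    the color format, and the third horizontal/vertical upsampling factors."""
    hor_ratio = {"4:4:4": 1, "4:2:2": 2, "4:1:1": 4, "4:2:0": 2}[color_format]
    ver_ratio = {"4:4:4": 1, "4:2:2": 1, "4:1:1": 1, "4:2:0": 2}[color_format]
    return hor_ratio * n_tb_w * s_hor_rec_luma, ver_ratio * n_tb_h * s_ver_rec_luma

# e.g. a 4:2:0 current block with nTbW = nTbH = 8 and no extra luma upsampling
# gives predSizeW = predSizeH = 16.
```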
  • S1804 Perform the first filtering process on the first prediction block to determine the second prediction block of the second color component of the current block.
  • S1805 Determine the prediction difference value of the second color component sampling point of the current block according to the second prediction block.
  • the first filtering process may be a downsampling filtering process.
  • the number of predicted values of the second color component contained in the second prediction block is the same as the number of second color component sample points contained in the current block.
• performing a first filtering process on the first prediction block and determining a second prediction block of the second color component of the current block may include: performing a down-sampling filtering process on the first prediction block using a preset filter to determine the second prediction block of the second color component of the current block.
  • the preset filter may be a downsampling filter.
  • the downsampling filter here may be a neural network filter, which is not limited in this embodiment of the present application.
  • performing the first filtering process on the first prediction block and determining the second prediction block of the second color component of the current block may include:
• performing downsampling filtering on the first prediction block according to the horizontal down-sampling factor and the vertical down-sampling factor to obtain the second prediction block of the second color component of the current block may include: if the horizontal down-sampling factor is greater than 1, or the vertical down-sampling factor is greater than 1, performing down-sampling filtering on the first prediction block to obtain the second prediction block.
  • performing downsampling filtering on the first prediction block may include at least one of the following:
  • the first prediction block is subjected to down-sampling filtering processing in the vertical direction and then down-sampling filtering processing is performed in the horizontal direction.
  • the horizontal downsampling factor can be calculated according to the width of the first prediction block and the width of the current block
• the vertical downsampling factor can be calculated according to the height of the first prediction block and the height of the current block; then, down-sampling filtering is performed on the first prediction block according to the horizontal down-sampling factor and the vertical down-sampling factor.
• if the horizontal down-sampling factor is greater than 1 and the vertical down-sampling factor is equal to 1, then the first prediction block only needs to be down-sampled in the horizontal direction; if the horizontal down-sampling factor is equal to 1 and the vertical down-sampling factor is greater than 1, then the first prediction block only needs to be down-sampled in the vertical direction; if the horizontal down-sampling factor is greater than 1 and the vertical down-sampling factor is greater than 1, then the first prediction block needs to be down-sampled in both the horizontal and vertical directions, in which case down-sampling can be performed first in the horizontal direction and then in the vertical direction, or first in the vertical direction and then in the horizontal direction.
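• A minimal sketch of this decision logic, using simple averaging as a stand-in for the actual down-sampling filter (the averaging special case is mentioned later in the text):

```python
import numpy as np

def downsample_to_block(pred_block: np.ndarray, n_tb_w: int, n_tb_h: int) -> np.ndarray:
    """Derive the horizontal/vertical down-sampling factors from the first
    prediction block size and the current block size, and down-sample only
    along the directions whose factor is greater than 1."""
    pred_size_h, pred_size_w = pred_block.shape
    down_hor = pred_size_w // n_tb_w
    down_ver = pred_size_h // n_tb_h
    out = pred_block.astype(np.float64)
    if down_hor > 1:   # horizontal direction needs down-sampling
        out = out.reshape(out.shape[0], n_tb_w, down_hor).mean(axis=2)
    if down_ver > 1:   # vertical direction needs down-sampling
        out = out.reshape(n_tb_h, down_ver, out.shape[1]).mean(axis=1)
    return out
```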
  • the convolution operation in the neural network structure can even be used to replace the down-sampling operation here.
• the embodiments of this application do not impose any limitations on this.
• down-sampling filtering can also be performed by point extraction, or by filters such as a two-dimensional filter, a one-dimensional filter, and so on.
• for a one-dimensional filter, it can be "vertical direction first and then horizontal direction", or "horizontal direction first and then vertical direction".
  • It can also be a fixed filtering order, or even a flexibly adjustable filtering order.
  • the embodiment of the present application does not impose any limitation on this.
• performing the first filtering process on the first prediction block and determining the second prediction block of the second color component of the current block may also include: according to the horizontal down-sampling factor and the vertical down-sampling factor, performing a weighted sum calculation on every preset number of predicted values of the second color component of the first prediction block in the horizontal direction and/or vertical direction to obtain the second prediction block.
  • performing a weighted sum calculation on the predicted values of each preset number of second color components in the horizontal direction and/or vertical direction of the first prediction block to obtain the second prediction block may include:
• the embodiment of the present application may also, regardless of whether horizontal or vertical down-sampling is involved, perform a weighted sum calculation for every preset number of chroma prediction values in the direction where down-sampling is required (vertical direction or horizontal direction); here, if the weights of the chroma prediction values are all equal, then as a special case the weighted sum calculation of every preset number of chroma prediction values can also be regarded as averaging that preset number of chroma prediction values, and the average is used as the predicted value after down-sampling filtering.
• the method may also include: determining the weighting coefficient according to the reference sample values of the first color component of some sampling points in the first prediction block; and determining the second prediction block of the second color component of the current block according to the weighting coefficient and the reference sample values of the second color component of some sampling points in the first prediction block.
  • determining the second prediction block of the second color component of the current block based on the weighting coefficient and the reference sample value of the second color component of some sampling points in the first prediction block may include: according to the weighted The coefficient performs a weighted calculation on the reference sample value of the second color component of the sampling point at the (i, j)-th position in the first prediction block to obtain the second color component of the sampling point at the (x, y)-th position in the current block. Predicted value; where i, j, x, j are all integers greater than or equal to zero.
• if the reconstructed brightness information in the current block is not upsampled and filtered, it may be considered not to perform co-located chroma prediction for every brightness position in the reconstructed brightness information in the current block.
  • some brightness positions can be selected for prediction of co-located chroma points, so that no downsampling filtering is required after obtaining the prediction block, which can ensure the accuracy of the existing brightness information.
• Accurate brightness information is conducive to improving the accuracy and stability of the nonlinear mapping model, thereby improving the accuracy of the chroma prediction values.
  • the corresponding position can be selected according to the YUV video format characteristics for chroma prediction, assuming that the position of the current brightness point is CurRecLuma(i,j) , then the position of the sampling point that needs to be predicted for chroma is CurPredChroma(x,y).
  • the method may also include:
• if the color format information indicates 4:4:4 sampling, x is set equal to i and y is set equal to j;
• if the color format information indicates 4:2:2 sampling, x is set equal to the product of i and 2, and y is set equal to j;
• if the color format information indicates 4:1:1 sampling, x is set equal to the product of i and 4, and y is set equal to j;
• if the color format information indicates 4:2:0 sampling, x is set equal to the product of i and 2, and y is set equal to the product of j and 2.
• the method may also include: determining the horizontal sampling position factor and the vertical sampling position factor; setting x equal to the product of i and the horizontal sampling position factor, and setting y equal to the product of j and the vertical sampling position factor.
  • x may be set equal to the product of i and S_Pos_Hor, and y may be set equal to the product of j and S_Pos_Ver.
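• A minimal sketch of this position mapping (the dictionary values follow the per-format enumeration above; the function name and string keys are assumptions):

```python
def luma_to_chroma_pos(i: int, j: int, color_format: str):
    """Map the current brightness position CurRecLuma(i, j) to the chroma
    position CurPredChroma(x, y) using the sampling position factors
    S_Pos_Hor and S_Pos_Ver implied by the color format."""
    s_pos_hor = {"4:4:4": 1, "4:2:2": 2, "4:1:1": 4, "4:2:0": 2}[color_format]
    s_pos_ver = {"4:4:4": 1, "4:2:2": 1, "4:1:1": 1, "4:2:0": 2}[color_format]
    return i * s_pos_hor, j * s_pos_ver
```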
• the method may further include: after determining the second prediction block of the second color component of the current block, performing relevant processing on the second prediction block, and using the processed second prediction block as the second prediction block.
  • relevant processing is performed on the second prediction block, including at least one of the following:
  • Weighted fusion processing is performed on the second prediction block using the prediction value of the second color component of the current block in at least one prediction mode.
• in order to reduce the instability caused by WCP pixel-by-pixel independent parallel prediction, the second prediction block can be smoothing-filtered and the result used as the final chroma prediction value.
  • a position-related correction process can be performed on the second prediction block. For example: use reference pixels with close spatial positions to calculate the chroma compensation value for each pixel to be predicted, use this chroma compensation value to correct the prediction block, and use the corrected prediction value as the final chroma prediction value.
  • the chroma prediction value calculated by other chroma prediction modes can be weighted and fused with the chroma prediction value calculated by WCP, and the fusion result can be used as the final chroma prediction value.
  • a neural network model can be used to correct the chromaticity prediction value calculated by the WCP, etc. This embodiment of the present application does not impose any limitation on this.
  • S1901 Determine the predicted value of the second color component sampling point of the current block according to the second prediction block.
  • S1902 Determine the predicted difference value of the second color component sampling point of the current block based on the original value of the second color component sampling point of the current block and the predicted value of the second color component sampling point of the current block.
  • S1903 Encode the prediction difference value of the second color component sampling point of the current block, and write the resulting encoded bits into the code stream.
• determining the predicted value of the second color component sampling point of the current block according to the second prediction block may be: setting the predicted value of the second color component sampling point of the current block equal to the value of the second prediction block; or upsampling and filtering the value of the second prediction block and setting the predicted value of the second color component sampling point of the current block equal to the output value after the upsampling filtering.
• the prediction difference value of the second color component sampling point can be determined based on the original value of the second color component sampling point and the predicted value of the second color component sampling point; specifically, it may be calculated by subtracting the predicted value of the second color component sampling point from the original value of the second color component sampling point, so as to determine the prediction difference value of the second color component sampling point of the current block.
• the prediction difference value of the second color component sampling point can be obtained through decoding at the decoding end, in order to restore the reconstructed value of the second color component sampling point of the current block.
• the use of non-lost brightness information for chroma prediction mainly includes three aspects: first, the brightness information of the reference pixels and of the current block is fully utilized to calculate the weighting coefficients of the reference pixel chroma; second, the importance of the existing brightness information is fully considered, and a more accurate nonlinear mapping model is established to allocate weights to the reference chromaticity points without losing brightness information; third, when upsampling the reference chroma and performing in-situ chroma prediction based on the position of each brightness point in the current block, the characteristics of various YUV video formats are fully considered, so that according to the chroma subsampling format and the brightness upsampling conditions of different YUV videos, the spatial resolutions of the chroma component and the luminance component are always kept consistent, and the existing non-lost luminance information is fully utilized for in-situ chromaticity prediction to improve the accuracy of chromaticity prediction.
• the embodiment of the present application also provides an encoding method: determining the reference sample value of the first color component of the current block; determining the weighting coefficient according to the reference sample value of the first color component of the current block; determining the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block, wherein the number of prediction values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing a first filtering process on the first prediction block to determine the second prediction block of the second color component of the current block; and determining the prediction difference value of the second color component sampling point of the current block based on the second prediction block.
• in this way, the reference sample values of the chroma component are assigned weights for weighted prediction; and for the first filtering process, different color format information is fully considered, and chroma and/or brightness sampling filtering is performed based on the different color format information, which can always ensure that the spatial resolutions of the chroma component and the luminance component are consistent. This can not only ensure the accuracy of the existing luminance information, but also, when predicting the chrominance component using non-lost luminance information, help improve the accuracy and stability of the nonlinear mapping model based on accurate luminance information, which can improve the accuracy of chroma prediction, save bit rate, and improve encoding and decoding performance.
• the embodiment of the present application also provides a code stream, which is generated by bit encoding according to the information to be encoded; the information to be encoded at least includes the prediction difference value of the second color component sampling point of the current block.
• the decoding end obtains the prediction difference value of the second color component sampling point through decoding, and then combines it with the predicted value of the second color component sampling point of the current block, so that the reconstructed value of the second color component sampling point of the current block can be restored.
  • In this way, not only the available color component information is fully considered, but also different color format information is fully considered, which both preserves the accuracy of the available luma information and allows the non-downsampled luma information to be used for chroma component prediction.
  • the encoding device 300 may include: a first determination unit 3001, a first prediction unit 3002, and a first filtering unit 3003; wherein,
  • the first determination unit 3001 is configured to determine the reference sample value of the first color component of the current block; and determine the weighting coefficient according to the reference sample value of the first color component of the current block;
  • the first prediction unit 3002 is configured to determine the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block; wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
  • the first filtering unit 3003 is configured to perform a first filtering process on the first prediction block and determine the second prediction block of the second color component of the current block;
  • the first determination unit 3001 is further configured to determine the prediction difference value of the second color component sampling point of the current block according to the second prediction block.
  • the number of predicted values of the second color component contained in the second prediction block is the same as the number of second color component sample points contained in the current block.
  • the first determination unit 3001 is further configured to determine the reference sample value of the first color component of the current block based on the values of the first color component sampling points in the adjacent area of the current block; wherein the adjacent area includes at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • the first determination unit 3001 is further configured to perform filtering processing on the first color component sampling points in the adjacent area, and determine the value of the first color component sampling point.
  • the first determining unit 3001 is further configured to determine the position of the sampling point to be selected based on the position and/or intensity of the color component sampling point in the adjacent area; and based on the position of the sampling point to be selected , determine the value of the first color component sampling point from the adjacent area.
  • the first filtering unit 3003 is also configured to perform a second filtering process on the values of the first color component sampling points to obtain the filtered adjacent sample values of the first color component of the current block; and to determine the reference sample value of the first color component of the current block according to the filtered adjacent sample values of the first color component of the current block.
  • the number of filtered adjacent samples of the first color component of the current block is greater than the number of values of the first color component sampling points.
  • the first determination unit 3001 is further configured to determine the reference sample value of the first color component of the current block based on the reconstructed value of the first reference color component sampling point in the current block.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the reconstructed value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the reconstructed value of the first reference color component sampling point.
  • the first filtering unit 3003 is also configured to perform a third filtering process on the reconstructed value of the first reference color component sampling point in the current block to obtain the filtered sample value of the first reference color component sampling point in the current block. ; and determine the reference sample value of the first color component of the current block according to the filtered sample value of the first reference color component sampling point in the current block.
  • the number of filtered samples of the first reference color component sampling point in the current block is greater than the number of reconstructed values of the first reference color component sampling point in the current block.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the filtered sample value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the filtered sample value of the first reference color component sampling point.
  • the first determination unit 3001 is further configured to determine the value corresponding to the reference sample of the first color component under the preset mapping relationship; and set the weighting coefficient equal to the value.
  • the first determining unit 3001 is further configured to determine the first factor; determine the first product value according to the first factor and the reference sample of the first color component; and determine the value corresponding to the first product value under the preset mapping relationship.
  • the first factor is a preset constant value.
  • the first determining unit 3001 is further configured to determine the value of the first factor according to the size parameter of the current block; wherein the size parameter of the current block includes at least one of the following parameters: the width of the current block, The height of the current block.
  • the preset mapping relationship is a Softmax function.
  • the preset mapping relationship is a weighting function having an inverse relationship with the reference sample of the first color component.
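As an illustrative sketch only (not the normative derivation of this application), the following Python fragment shows one way such a Softmax-based weighting could be computed, taking the reference sample of the first color component to be the absolute luma difference between each reference position and the current sampling point, with the scale argument playing the role of the first factor; the function and variable names are assumptions for illustration.

    import numpy as np

    def chroma_weights(ref_luma, cur_luma, scale=1.0):
        # ref_luma: luma values at the N reference positions (1-D array-like)
        # cur_luma: luma value co-located with the chroma sample being predicted
        # scale:    the "first factor"; assumed constant here, it may instead depend on block size
        diff = np.abs(np.asarray(ref_luma, dtype=np.float64) - float(cur_luma))
        logits = -scale * diff            # smaller luma difference -> larger weight
        logits -= logits.max()            # numerical stability for the exponential
        w = np.exp(logits)
        return w / w.sum()                # Softmax: N weighting coefficients summing to 1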
  • the first determination unit 3001 is further configured to determine the reference sample value of the second color component of the current block based on the value of the second color component sampling point in the adjacent area of the current block.
  • the first filtering unit 3003 is further configured to perform a fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the second color component of the current block; and to determine the reference sample value of the second color component of the current block based on the filtered adjacent sample values of the second color component of the current block.
  • the number of filtered adjacent samples of the second color component of the current block is greater than the number of values of the second color component sampling points in adjacent areas of the current block.
  • the fourth filtering process is upsampling filtering; wherein the upsampling rate is a positive integer multiple of 2.
  • the first filtering unit 3003 is further configured to perform a fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block based on the color format information, to obtain the filtered adjacent sample values of the second color component of the current block.
  • the first filtering unit 3003 is also configured to upsample the value of the second color component sampling point in the adjacent area of the current block if the color format information indicates 4:2:0 sampling. Filtering; where the upsampling rate is a positive integer multiple of 2.
  • the first filtering unit 3003 is further configured to use the first horizontal upsampling factor and the first vertical upsampling factor to perform a second filtering process on the values of the first color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the first color component of the current block; and to use the second horizontal upsampling factor and the second vertical upsampling factor to perform a fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the second color component of the current block;
  • the first determining unit 3001 is further configured to: if the color format information indicates 4:4:4 sampling, determine that the second horizontal upsampling factor is equal to the first horizontal upsampling factor and the second vertical upsampling factor is equal to the first vertical upsampling factor; if the color format information indicates 4:2:2 sampling, determine that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor and the second vertical upsampling factor is equal to the first vertical upsampling factor; if the color format information indicates 4:1:1 sampling, determine that the second horizontal upsampling factor is equal to 4 times the first horizontal upsampling factor and the second vertical upsampling factor is equal to the first vertical upsampling factor; if the color format information indicates 4:2:0 sampling, determine that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor and the second vertical upsampling factor is equal to 2 times the first vertical upsampling factor.
  • the first determining unit 3001 is further configured to: if the color format information indicates 4:4:4 sampling, determine that the width of the first prediction block is equal to the width of the current block and the height of the first prediction block is equal to the height of the current block; if the color format information indicates 4:2:2 sampling, determine that the width of the first prediction block is equal to 2 times the width of the current block and the height of the first prediction block is equal to the height of the current block; if the color format information indicates 4:1:1 sampling, determine that the width of the first prediction block is equal to 4 times the width of the current block and the height of the first prediction block is equal to the height of the current block; if the color format information indicates 4:2:0 sampling, determine that the width of the first prediction block is equal to 2 times the width of the current block and the height of the first prediction block is equal to 2 times the height of the current block.
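For illustration, the color-format-dependent cases listed above can be summarized as in the sketch below, which returns the chroma-to-luma resampling multipliers and the resulting size of the first prediction block; the 4:4:4/4:2:2/4:1:1/4:2:0 mapping follows the cases enumerated above, while the function names and data structures are illustrative assumptions.

    def chroma_scale_factors(color_format):
        # (horizontal multiplier, vertical multiplier) of luma resolution relative to chroma
        return {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:1:1": (4, 1), "4:2:0": (2, 2)}[color_format]

    def first_prediction_block_size(block_width, block_height, color_format):
        # block_width, block_height: size of the current (chroma) block
        sx, sy = chroma_scale_factors(color_format)
        return block_width * sx, block_height * sy   # first prediction block at luma resolution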
  • the first filtering unit 3003 is further configured to perform a third filtering process on the reconstructed value of the first reference color component sampling point in the current block using a third horizontal upsampling factor and a third vertical upsampling factor, to obtain The filtered sample value of the first reference color component sampling point in the current block;
  • the first determination unit 3001 is also configured to: if the color format information indicates 4:4:4 sampling, determine that the width of the first prediction block is equal to the product of the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor; if the color format information indicates 4:2:2 sampling, determine that the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor; if the color format information indicates 4:1:1 sampling, determine that the width of the first prediction block is equal to the product of 4 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor; if the color format information indicates 4:2:0 sampling, determine that the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of 2 times the height of the current block and the third vertical upsampling factor.
  • the first prediction unit 3002 is further configured to determine the weighted values of the reference sample values of the second color component and the corresponding weighting coefficients; and to set the predicted value of the second color component sampling point in the first prediction block equal to the sum of the N weighted values; where N represents the number of reference samples of the second color component, and N is a positive integer.
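A minimal sketch of this weighted prediction, assuming the N weighting coefficients have already been computed for the sampling point in question (function and parameter names are illustrative):

    def predict_chroma_sample(ref_chroma, weights):
        # ref_chroma: N reference sample values of the second color component
        # weights:    N weighting coefficients for this sampling point
        return sum(c * w for c, w in zip(ref_chroma, weights))   # sum of N weighted values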
  • the first filtering process is a downsampling filtering process.
  • the first filtering unit 3003 is further configured to perform downsampling filtering processing on the first prediction block using a preset filter to determine the second prediction block of the second color component of the current block.
  • the first filtering unit 3003 is further configured to determine a horizontal downsampling factor and a vertical downsampling factor; and perform downsampling filtering processing on the first prediction block according to the horizontal downsampling factor and the vertical downsampling factor to obtain the current The second prediction block of the second color component of the block.
  • the first filtering unit 3003 is also configured to perform downsampling filtering on the first prediction block to obtain a second prediction block if the horizontal downsampling factor is greater than 1 or the vertical downsampling factor is greater than 1.
  • the first filtering unit 3003 is also configured to perform at least one of the following down-sampling filtering processes on the first prediction block:
  • the first prediction block is subjected to down-sampling filtering processing in the vertical direction and then down-sampling filtering processing is performed in the horizontal direction.
  • the first filtering unit 3003 is also configured to perform, according to the horizontal downsampling factor and the vertical downsampling factor, a weighted sum calculation on every preset number of predicted values of the second color component of the first prediction block in the horizontal direction and/or the vertical direction, to obtain the second prediction block.
  • the first filtering unit 3003 is further configured to perform a weighted sum calculation, in the horizontal direction, on every horizontal-downsampling-factor number of predicted values of the second color component of the first prediction block to obtain the second prediction block; or to perform a weighted sum calculation, in the vertical direction, on every vertical-downsampling-factor number of predicted values of the second color component of the first prediction block to obtain the second prediction block; or to perform a weighted sum calculation on every horizontal-downsampling-factor number of predicted values of the second color component in the horizontal direction and on every vertical-downsampling-factor number of predicted values of the second color component in the vertical direction to obtain the second prediction block.
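As a sketch under simplifying assumptions (equal filter weights and block dimensions divisible by the factors; other filter taps could equally be used), such a weighted-sum downsampling reduces to a block average over each horizontal-factor by vertical-factor window:

    import numpy as np

    def downsample_prediction(first_prediction_block, fx, fy):
        # first_prediction_block: 2-D array at luma resolution
        # fx, fy: horizontal / vertical downsampling factors (assumed to divide the block size)
        block = np.asarray(first_prediction_block, dtype=np.float64)
        h, w = block.shape
        grouped = block.reshape(h // fy, fy, w // fx, fx)
        return grouped.mean(axis=(1, 3))   # weighted sum with equal weights 1 / (fx * fy)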
  • the first filtering unit 3003 is further configured to determine the weighting coefficient according to the reference sample value of the first color component of some sample points in the first prediction block; and to determine the second prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component for those partial sample points in the first prediction block.
  • the first filtering unit 3003 is further configured to perform a weighted calculation, according to the weighting coefficient determined for the sampling point at the (x, y)-th position in the first prediction block, on the reference sample values of the second color component, to obtain the predicted value of the second color component sampling point at the (i, j)-th position in the second prediction block of the current block;
  • the first determining unit 3001 is also configured to set x equal to i and set y equal to j if the color format information indicates 4:4:4 sampling; if the color format information indicates 4 :2:2 sampling, then set x equal to the product of i and 2, and set y equal to j; if the color format information indicates 4:1:1 sampling, set x equal to the product of i and 4, Set y equal to j; if the color format information indicates 4:2:0 sampling, set x equal to the product of i and 2, and set y equal to the product of j and 2.
  • the first determining unit 3001 is further configured to determine the horizontal sampling position factor and the vertical sampling position factor; and to set x equal to the product of i and the horizontal sampling position factor, and set y equal to the product of j and the vertical sampling position factor.
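The (i, j)-to-(x, y) mapping described in the two preceding alternatives can be sketched as follows, assuming the sampling-position factors simply equal the chroma subsampling multipliers of the color format (an assumption consistent with the cases listed above):

    def colocated_position(i, j, color_format):
        # maps the (i, j)-th chroma position of the current block to the (x, y)-th
        # position of the first prediction block
        sx, sy = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:1:1": (4, 1), "4:2:0": (2, 2)}[color_format]
        return i * sx, j * sy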
  • the first filtering unit 3003 is further configured to, after determining the second prediction block of the second color component of the current block, perform related processing on the second prediction block and use the processed second prediction block as the second prediction block; wherein the related processing performed on the second prediction block includes at least one of the following:
  • Weighted fusion processing is performed on the second prediction block using the prediction value of the second color component of the current block in at least one prediction mode.
  • the first determination unit 3001 is further configured to determine the predicted values of the second color component sampling points of the current block according to the second prediction block; and to determine the prediction differences of the second color component sampling points of the current block based on the original values of the second color component sampling points of the current block and the predicted values of the second color component sampling points of the current block.
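On the encoder side this step reduces to a per-sample residual computation (sketch only; the subsequent entropy coding of the residual is not shown, and the array representation is an assumption):

    import numpy as np

    def prediction_difference(original_block, second_prediction_block):
        # both arguments have the chroma resolution of the current block
        return np.asarray(original_block, dtype=np.int64) - np.asarray(second_prediction_block, dtype=np.int64)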
  • the encoding device 300 may also include an encoding unit 3004 configured to encode the prediction differences of the second color component sampling points of the current block and write the resulting encoded bits into the code stream.
  • the "unit" may be part of a circuit, part of a processor, part of a program or software, etc., and of course may also be a module, or may be non-modular.
  • each component in this embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software function modules.
  • If the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method described in this embodiment.
  • The aforementioned storage media include: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
  • Accordingly, an embodiment of the present application provides a computer-readable storage medium, applied to the encoding device 300, which stores a computer program that, when executed by the first processor, implements the method described in any one of the foregoing embodiments.
  • the encoding device 310 may include: a first communication interface 3101, a first memory 3102, and a first processor 3103; the various components are coupled together through a first bus system 3104. It can be understood that the first bus system 3104 is used to implement connection communication between these components. In addition to the data bus, the first bus system 3104 also includes a power bus, a control bus and a status signal bus. However, for the sake of clear explanation, the various buses are labeled as the first bus system 3104 in Figure 21. Among them,
  • the first communication interface 3101 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the first memory 3102 is used to store a computer program capable of running on the first processor 3103;
  • the first processor 3103 is configured to, when running the computer program, execute:
  • a prediction difference value of the second color component sampling point of the current block is determined.
  • the first memory 3102 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
  • the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
  • the first memory 3102 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • the first processor 3103 may be an integrated circuit chip with signal processing capabilities. During the implementation process, each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the first processor 3103 .
  • the above-mentioned first processor 3103 can be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or a ready-made programmable gate array (Field Programmable Gate Array, FPGA). or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • the steps of the method disclosed in conjunction with the embodiments of the present application can be directly implemented by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other mature storage media in this field.
  • the storage medium is located in the first memory 3102.
  • the first processor 3103 reads the information in the first memory 3102 and completes the steps of the above method in combination with its hardware.
  • the embodiments described in this application can be implemented using hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units used to perform the functions described in this application, or combinations thereof.
  • the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
  • Software code may be stored in memory and executed by a processor.
  • the memory can be implemented in the processor or external to the processor.
  • the first processor 3103 is further configured to perform the method described in any one of the preceding embodiments when running the computer program.
  • This embodiment provides an encoding device, which may include the encoding device 300 described in any of the preceding embodiments. In this way, not only the available color component information is fully considered, but also different color format information is fully considered, which both preserves the accuracy of the available luma information and allows the non-downsampled luma information to be used for chroma component prediction.
  • FIG. 22 shows a schematic structural diagram of a decoding device 320 provided by an embodiment of the present application.
  • the decoding device 320 may include: a second determination unit 3201, a second prediction unit 3202, and a second filtering unit 3203; wherein,
  • the second determination unit 3201 is configured to determine the reference sample value of the first color component of the current block; and determine the weighting coefficient according to the reference sample value of the first color component of the current block;
  • the second prediction unit 3202 is configured to determine the first prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component of the current block; wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
  • the second filtering unit 3203 is configured to perform a first filtering process on the first prediction block and determine the second prediction block of the second color component of the current block;
  • the second determination unit 3201 is further configured to determine the reconstruction value of the second color component sampling point of the current block according to the second prediction block.
  • the number of predicted values of the second color component contained in the second prediction block is the same as the number of second color component sample points contained in the current block.
  • the second determination unit 3201 is further configured to determine the reference sample value of the first color component of the current block based on the values of the first color component sampling points in the adjacent area of the current block; wherein the adjacent area includes at least one of the following: an upper adjacent area, an upper right adjacent area, a left adjacent area, and a lower left adjacent area.
  • the second determination unit 3201 is further configured to perform filtering processing on the first color component sampling points in the adjacent area, and determine the value of the first color component sampling point.
  • the second determination unit 3201 is further configured to determine the location of the sampling point to be selected based on the location and/or intensity of the color component sampling point in the adjacent area; and based on the location of the sampling point to be selected , determine the value of the first color component sampling point from the adjacent area.
  • the second filtering unit 3203 is also configured to perform a second filtering process on the values of the first color component sampling points to obtain the filtered adjacent sample values of the first color component of the current block; and to determine the reference sample value of the first color component of the current block according to the filtered adjacent sample values of the first color component of the current block.
  • the number of filtered adjacent samples of the first color component of the current block is greater than the number of values of the first color component sampling points.
  • the second determination unit 3201 is further configured to determine the reference sample value of the first color component of the current block based on the reconstructed value of the first reference color component sampling point in the current block.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the reconstructed value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the reconstructed value of the first reference color component sampling point.
  • the second filtering unit 3203 is also configured to perform a third filtering process on the reconstructed value of the first reference color component sampling point in the current block to obtain the filtered sample value of the first reference color component sampling point in the current block. ; and determine the reference sample value of the first color component of the current block according to the filtered sample value of the first reference color component sampling point in the current block.
  • the number of filtered samples of the first reference color component sampling point in the current block is greater than the number of reconstructed values of the first reference color component sampling point in the current block.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the filtered adjacent sample value of the first color component and the filtered sample value of the first reference color component sampling point.
  • the reference sample value of the first color component of the current block is set to the absolute value of the difference between the value of the first color component sampling point and the filtered sample value of the first reference color component sampling point.
  • the second determination unit 3201 is further configured to determine the value corresponding to the reference sample of the first color component under the preset mapping relationship; and set the weighting coefficient equal to the value.
  • the second determination unit 3201 is further configured to determine the first factor; determine the first product value according to the first factor and the reference sample value of the first color component; and determine the value corresponding to the first product value under the preset mapping relationship.
  • the first factor is a preset constant value.
  • the second determination unit 3201 is further configured to determine the value of the first factor according to the size parameter of the current block; wherein the size parameter of the current block includes at least one of the following parameters: the width of the current block, The height of the current block.
  • the preset mapping relationship is a Softmax function.
  • the preset mapping relationship is a weighting function having an inverse relationship with the reference sample of the first color component.
  • the second determination unit 3201 is further configured to determine the reference sample value of the second color component of the current block based on the value of the second color component sampling point in the adjacent area of the current block.
  • the second filtering unit 3203 is also configured to perform a fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the second color component of the current block; and to determine the reference sample value of the second color component of the current block based on the filtered adjacent sample values of the second color component of the current block.
  • the number of filtered adjacent samples of the second color component of the current block is greater than the number of values of the second color component sampling points in adjacent areas of the current block.
  • the fourth filtering process is upsampling filtering; wherein the upsampling rate is a positive integer multiple of 2.
  • the second filtering unit 3203 is further configured to perform a fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block based on the color format information, to obtain the filtered adjacent sample values of the second color component of the current block.
  • the second filtering unit 3203 is also configured to upsample the value of the second color component sampling point in the adjacent area of the current block if the color format information indicates 4:2:0 sampling. Filtering; where the upsampling rate is a positive integer multiple of 2.
  • the second filtering unit 3203 is further configured to use the first horizontal upsampling factor and the first vertical upsampling factor to perform a second filtering process on the values of the first color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the first color component of the current block; and to use the second horizontal upsampling factor and the second vertical upsampling factor to perform a fourth filtering process on the values of the second color component sampling points in the adjacent area of the current block to obtain the filtered adjacent sample values of the second color component of the current block;
  • the second determination unit 3201 is also configured to: if the color format information indicates 4:4:4 sampling, determine that the second horizontal upsampling factor is equal to the first horizontal upsampling factor and the second vertical upsampling factor is equal to the first vertical upsampling factor; if the color format information indicates 4:2:2 sampling, determine that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor and the second vertical upsampling factor is equal to the first vertical upsampling factor; if the color format information indicates 4:1:1 sampling, determine that the second horizontal upsampling factor is equal to 4 times the first horizontal upsampling factor and the second vertical upsampling factor is equal to the first vertical upsampling factor; if the color format information indicates 4:2:0 sampling, determine that the second horizontal upsampling factor is equal to 2 times the first horizontal upsampling factor and the second vertical upsampling factor is equal to 2 times the first vertical upsampling factor.
  • the second determination unit 3201 is further configured to: if the color format information indicates 4:4:4 sampling, determine that the width of the first prediction block is equal to the width of the current block and the height of the first prediction block is equal to the height of the current block; if the color format information indicates 4:2:2 sampling, determine that the width of the first prediction block is equal to 2 times the width of the current block and the height of the first prediction block is equal to the height of the current block; if the color format information indicates 4:1:1 sampling, determine that the width of the first prediction block is equal to 4 times the width of the current block and the height of the first prediction block is equal to the height of the current block; if the color format information indicates 4:2:0 sampling, determine that the width of the first prediction block is equal to 2 times the width of the current block and the height of the first prediction block is equal to 2 times the height of the current block.
  • the second filtering unit 3203 is further configured to perform a third filtering process on the reconstructed value of the first reference color component sampling point in the current block using a third horizontal upsampling factor and a third vertical upsampling factor, to obtain The filtered sample value of the first reference color component sampling point in the current block;
  • the second determination unit 3201 is also configured to: if the color format information indicates 4:4:4 sampling, determine that the width of the first prediction block is equal to the product of the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor; if the color format information indicates 4:2:2 sampling, determine that the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor; if the color format information indicates 4:1:1 sampling, determine that the width of the first prediction block is equal to the product of 4 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of the height of the current block and the third vertical upsampling factor; if the color format information indicates 4:2:0 sampling, determine that the width of the first prediction block is equal to the product of 2 times the width of the current block and the third horizontal upsampling factor, and the height of the first prediction block is equal to the product of 2 times the height of the current block and the third vertical upsampling factor.
  • the second prediction unit 3202 is further configured to determine the weighted values of the reference sample values of the second color component and the corresponding weighting coefficients; and to set the predicted value of the second color component sampling point in the first prediction block equal to the sum of the N weighted values; where N represents the number of reference samples of the second color component, and N is a positive integer.
  • the first filtering process is a downsampling filtering process.
  • the second filtering unit 3203 is further configured to perform downsampling filtering processing on the first prediction block using a preset filter to determine the second prediction block of the second color component of the current block.
  • the second filtering unit 3203 is further configured to determine a horizontal downsampling factor and a vertical downsampling factor; and perform downsampling filtering processing on the first prediction block according to the horizontal downsampling factor and the vertical downsampling factor to obtain the current The second prediction block of the second color component of the block.
  • the second filtering unit 3203 is also configured to perform downsampling filtering on the first prediction block to obtain a second prediction block if the horizontal downsampling factor is greater than 1 or the vertical downsampling factor is greater than 1.
  • the second filtering unit 3203 is also configured to perform at least one of the following downsampling filtering processes on the first prediction block:
  • the first prediction block is subjected to down-sampling filtering processing in the vertical direction and then down-sampling filtering processing is performed in the horizontal direction.
  • the second filtering unit 3203 is further configured to perform, according to the horizontal downsampling factor and the vertical downsampling factor, a weighted sum calculation on every preset number of predicted values of the second color component of the first prediction block in the horizontal direction and/or the vertical direction, to obtain the second prediction block.
  • the second filtering unit 3203 is further configured to perform a weighted sum calculation, in the horizontal direction, on every horizontal-downsampling-factor number of predicted values of the second color component of the first prediction block to obtain the second prediction block; or to perform a weighted sum calculation, in the vertical direction, on every vertical-downsampling-factor number of predicted values of the second color component of the first prediction block to obtain the second prediction block; or to perform a weighted sum calculation on every horizontal-downsampling-factor number of predicted values of the second color component in the horizontal direction and on every vertical-downsampling-factor number of predicted values of the second color component in the vertical direction to obtain the second prediction block.
  • the second filtering unit 3203 is further configured to determine the weighting coefficient according to the reference sample value of the first color component of some sample points in the first prediction block; and to determine the second prediction block of the second color component of the current block according to the weighting coefficient and the reference sample value of the second color component for those partial sample points in the first prediction block.
  • the second filtering unit 3203 is further configured to perform a weighted calculation, according to the weighting coefficient determined for the sampling point at the (x, y)-th position in the first prediction block, on the reference sample values of the second color component, to obtain the predicted value of the second color component sampling point at the (i, j)-th position in the second prediction block of the current block;
  • the second determining unit 3201 is also configured to set x equal to i and set y equal to j if the color format information indicates 4:4:4 sampling; if the color format information indicates 4 :2:2 sampling, then set x equal to the product of i and 2, and set y equal to j; if the color format information indicates 4:1:1 sampling, set x equal to the product of i and 4, Set y equal to j; if the color format information indicates 4:2:0 sampling, set x equal to the product of i and 2, and set y equal to the product of j and 2.
  • the second determination unit 3201 is further configured to determine the horizontal sampling position factor and the vertical sampling position factor; and to set x equal to the product of i and the horizontal sampling position factor, and set y equal to the product of j and the vertical sampling position factor.
  • the second filtering unit 3203 is further configured to, after determining the second prediction block of the second color component of the current block, perform related processing on the second prediction block and use the processed second prediction block as the second prediction block; wherein the related processing performed on the second prediction block includes at least one of the following:
  • Weighted fusion processing is performed on the second prediction block using the prediction value of the second color component of the current block in at least one prediction mode.
  • the second determination unit 3201 is further configured to determine the predicted difference value of the second color component sampling point of the current block; and determine the predicted value of the second color component sampling point of the current block according to the second prediction block. ; and determine the reconstructed value of the second color component sampling point of the current block based on the predicted difference value of the second color component sampling point of the current block and the predicted value of the second color component sampling point of the current block.
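The mirrored decoder-side step is a clip-and-add per sampling point; a minimal sketch, assuming a given bit depth for the clipping range (the bit depth and array handling are assumptions for illustration):

    import numpy as np

    def reconstruct_chroma(second_prediction_block, prediction_difference, bit_depth=10):
        pred = np.asarray(second_prediction_block, dtype=np.int64)
        diff = np.asarray(prediction_difference, dtype=np.int64)
        return np.clip(pred + diff, 0, (1 << bit_depth) - 1)   # reconstructed chroma samples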
  • the decoding device 320 may further include a decoding unit 3204 configured to parse the code stream and determine the prediction difference value of the second color component sampling point of the current block.
  • the "unit" may be part of a circuit, part of a processor, part of a program or software, etc., and of course may also be a module, or may be non-modular.
  • each component in this embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software function modules.
  • If the integrated unit is implemented in the form of a software function module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Accordingly, this embodiment provides a computer-readable storage medium, applied to the decoding device 320, which stores a computer program that, when executed by the second processor, implements the method described in any one of the foregoing embodiments.
  • the decoding device 330 may include: a second communication interface 3301, a second memory 3302, and a second processor 3303; the various components are coupled together through a second bus system 3304. It can be understood that the second bus system 3304 is used to implement connection communication between these components. In addition to the data bus, the second bus system 3304 also includes a power bus, a control bus and a status signal bus. However, for the sake of clarity of explanation, the various buses are labeled as the second bus system 3304 in FIG. 23. Among them,
  • the second communication interface 3301 is used for receiving and sending signals during the process of sending and receiving information with other external network elements;
  • the second memory 3302 is used to store computer programs that can run on the second processor 3303;
  • the second processor 3303 is configured to, when running the computer program, execute:
  • the reconstructed value of the second color component sampling point of the current block is determined.
  • the second processor 3303 is also configured to perform the method described in any one of the preceding embodiments when running the computer program.
  • This embodiment provides a decoding device, which may include the decoding device 320 described in the previous embodiment. In this way, not only the available color component information is fully considered, but also different color format information is fully considered, which both preserves the accuracy of the available luma information and allows the non-downsampled luma information to be used for chroma component prediction.
  • FIG. 24 shows a schematic structural diagram of a coding and decoding system provided by an embodiment of the present application.
  • the encoding and decoding system 340 may include an encoder 3401 and a decoder 3402.
  • the encoder 3401 can be a device integrating the encoding device 300 described in the previous embodiments, or it can be the encoding device 310 described in the previous embodiments; the decoder 3402 can be a device integrating the decoding device 320 described in the previous embodiments, or it can be the decoding device 330 described in the previous embodiments.
  • In this way, both the encoder 3401 and the decoder 3402 fully consider the available color component information as well as the different color format information, which preserves the accuracy of the available luma information and allows the non-downsampled luma information to be used to predict the chroma components. Based on accurate luma information, this helps to improve the accuracy and stability of the nonlinear mapping model, thereby improving the accuracy of chroma prediction and saving bit rate, while also improving coding and decoding performance.
  • In the embodiments of the present application, whether at the encoding end or the decoding end, the reference sample value of the first color component of the current block is determined; the weighting coefficient is determined according to the reference sample value of the first color component of the current block; the first prediction block of the second color component of the current block is determined according to the weighting coefficient and the reference sample value of the second color component of the current block, where the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; and the first filtering process is performed on the first prediction block to determine the second prediction block of the second color component of the current block. In this way, at the encoding end, the prediction differences of the second color component sampling points of the current block can be determined according to the second prediction block and written into the code stream, so that at the decoding end the reconstructed values of the second color component sampling points of the current block can be determined according to the second prediction block and the decoded prediction differences.
  • Thus, by using the reference pixels adjacent to the current block together with the color component information inside the current block, the available color component information is fully considered, so that a more accurate nonlinear mapping model can be established without losing luma information and weights can be assigned to the reference sample values of each chroma component for weighted prediction; and for the first filtering process, different color format information is fully considered, with chroma and/or luma resampling filtering performed according to the color format information, which always keeps the spatial resolutions of the chroma component and the luma component consistent. This both preserves the accuracy of the available luma information and, when the non-downsampled luma information is used to predict the chroma components, helps to improve the accuracy and stability of the nonlinear mapping model, thereby improving the accuracy of chroma prediction, saving bit rate, improving coding and decoding efficiency, and thus improving coding and decoding performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application disclose an encoding and decoding method, an apparatus, an encoding device, a decoding device, and a storage medium. The method includes: determining a reference sample value of a first color component of a current block; determining a weighting coefficient according to the reference sample value of the first color component of the current block; determining a first prediction block of a second color component of the current block according to the weighting coefficient and a reference sample value of the second color component of the current block, where the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; performing a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block; and determining reconstructed values of the second color component sampling points of the current block according to the second prediction block. In this way, not only can the accuracy of chroma prediction be improved and bit rate be saved, but coding and decoding performance can also be improved.

Description

Encoding and decoding method, apparatus, encoding device, decoding device, and storage medium
Technical Field
The embodiments of the present application relate to the technical field of video encoding and decoding, and in particular to an encoding and decoding method, an apparatus, an encoding device, a decoding device, and a storage medium.
Background
As requirements on video display quality increase, new forms of video applications such as high-definition and ultra-high-definition video have emerged. The Joint Video Exploration Team (JVET) of the international standards organizations ISO/IEC and ITU-T has formulated the next-generation video coding standard H.266/Versatile Video Coding (VVC).
H.266/VVC includes inter-color-component prediction technology. However, there is a large deviation between the predicted values of a coding block computed by the inter-color-component prediction technology of H.266/VVC and the original values, which leads to low prediction accuracy, degrades the quality of the decoded video, and reduces coding performance.
Summary
The embodiments of the present application provide an encoding and decoding method, an apparatus, an encoding device, a decoding device, and a storage medium, which can not only improve the accuracy of chroma prediction and save bit rate, but also improve coding and decoding performance.
The technical solutions of the embodiments of the present application can be implemented as follows:
In a first aspect, an embodiment of the present application provides a decoding method, including:
determining a reference sample value of a first color component of a current block;
determining a weighting coefficient according to the reference sample value of the first color component of the current block;
determining a first prediction block of a second color component of the current block according to the weighting coefficient and a reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
performing a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block;
determining reconstructed values of the second color component sampling points of the current block according to the second prediction block.
In a second aspect, an embodiment of the present application provides an encoding method, including:
determining a reference sample value of a first color component of a current block;
determining a weighting coefficient according to the reference sample value of the first color component of the current block;
determining a first prediction block of a second color component of the current block according to the weighting coefficient and a reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
performing a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block;
determining prediction differences of the second color component sampling points of the current block according to the second prediction block.
In a third aspect, an embodiment of the present application provides an encoding apparatus, including a first determination unit, a first prediction unit, and a first filtering unit; wherein
the first determination unit is configured to determine a reference sample value of a first color component of a current block, and determine a weighting coefficient according to the reference sample value of the first color component of the current block;
the first prediction unit is configured to determine a first prediction block of a second color component of the current block according to the weighting coefficient and a reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
the first filtering unit is configured to perform a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block;
the first determination unit is further configured to determine prediction differences of the second color component sampling points of the current block according to the second prediction block.
In a fourth aspect, an embodiment of the present application provides an encoding device, including a first memory and a first processor; wherein
the first memory is configured to store a computer program capable of running on the first processor;
the first processor is configured to perform the method described in the second aspect when running the computer program.
In a fifth aspect, an embodiment of the present application provides a decoding apparatus, including a second determination unit, a second prediction unit, and a second filtering unit; wherein
the second determination unit is configured to determine a reference sample value of a first color component of a current block, and determine a weighting coefficient according to the reference sample value of the first color component of the current block;
the second prediction unit is configured to determine a first prediction block of a second color component of the current block according to the weighting coefficient and a reference sample value of the second color component of the current block, wherein the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block;
the second filtering unit is configured to perform a first filtering process on the first prediction block to determine a second prediction block of the second color component of the current block;
the second determination unit is further configured to determine reconstructed values of the second color component sampling points of the current block according to the second prediction block.
In a sixth aspect, an embodiment of the present application provides a decoding device, including a second memory and a second processor; wherein
the second memory is configured to store a computer program capable of running on the second processor;
the second processor is configured to perform the method described in the first aspect when running the computer program.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed, implements the method described in the first aspect or the method described in the second aspect.
The embodiments of the present application provide an encoding and decoding method, an apparatus, an encoding device, a decoding device, and a storage medium. At both the encoding end and the decoding end, the reference sample value of the first color component of the current block is determined; the weighting coefficient is determined according to the reference sample value of the first color component of the current block; the first prediction block of the second color component of the current block is determined according to the weighting coefficient and the reference sample value of the second color component of the current block, where the number of predicted values of the second color component contained in the first prediction block is greater than the number of second color component sampling points contained in the current block; and the first filtering process is performed on the first prediction block to determine the second prediction block of the second color component of the current block. In this way, the encoding end can determine the prediction differences of the second color component sampling points of the current block according to the second prediction block, so that the decoding end can determine the reconstructed values of the second color component sampling points of the current block according to the second prediction block. Thus, by using the reference pixels adjacent to the current block together with the color component information inside the current block, the available color component information is fully considered, so that a more accurate nonlinear mapping model can be established without losing luma information and weights can be assigned to the reference sample values of each chroma component for weighted prediction; moreover, for the first filtering process, different color format information is fully considered, and chroma and/or luma resampling filtering is performed according to the color format, which always keeps the spatial resolutions of the chroma component and the luma component consistent. This not only preserves the accuracy of the available luma information, but also improves the accuracy of chroma prediction when the non-downsampled luma information is used for chroma component prediction, saving bit rate while improving coding and decoding performance.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the distribution of valid adjacent areas;
FIG. 2 is a schematic diagram of the distribution of selection regions under different prediction modes;
FIG. 3 is a schematic flowchart of a model parameter derivation scheme;
FIG. 4A is a schematic block diagram of an encoder provided by an embodiment of the present application;
FIG. 4B is a schematic block diagram of a decoder provided by an embodiment of the present application;
FIG. 5 is a first schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 6A is a schematic diagram of a reference area of a current block provided by an embodiment of the present application;
FIG. 6B is a schematic diagram of upsampling interpolation of reference chroma information provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of weighted prediction between the WCP mode and other prediction modes provided by an embodiment of the present application;
FIG. 8 is a schematic framework diagram of weight-based chroma prediction provided by an embodiment of the present application;
FIG. 9 is a second schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 10 is a third schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 11 is a fourth schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 12 is a first schematic diagram of an upsampling interpolation process provided by an embodiment of the present application;
FIG. 13 is a second schematic diagram of an upsampling interpolation process provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of weight values for upsampling interpolation provided by an embodiment of the present application;
FIG. 15 is a third schematic diagram of an upsampling interpolation process provided by an embodiment of the present application;
FIG. 16 is a fifth schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 17 is a sixth schematic flowchart of a decoding method provided by an embodiment of the present application;
FIG. 18 is a first schematic flowchart of an encoding method provided by an embodiment of the present application;
FIG. 19 is a second schematic flowchart of an encoding method provided by an embodiment of the present application;
FIG. 20 is a schematic diagram of the composition structure of an encoding apparatus provided by an embodiment of the present application;
FIG. 21 is a schematic diagram of the specific hardware structure of an encoding device provided by an embodiment of the present application;
FIG. 22 is a schematic diagram of the composition structure of a decoding apparatus provided by an embodiment of the present application;
FIG. 23 is a schematic diagram of the specific hardware structure of a decoding device provided by an embodiment of the present application;
FIG. 24 is a schematic diagram of the composition structure of an encoding and decoding system provided by an embodiment of the present application.
Detailed Description
In order to understand the features and technical content of the embodiments of the present application in more detail, the implementation of the embodiments of the present application is described in detail below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used herein are only for the purpose of describing the embodiments of the present application and are not intended to limit the present application.
In the following description, reference is made to "some embodiments", which describes a subset of all possible embodiments; it can be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with each other without conflict. It should also be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not imply a specific ordering of the objects; it is understood that, where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein.
In a video image, a coding block (Coding Block, CB) is generally characterized by a first color component, a second color component, and a third color component, where the three color components are a luma component, a blue chroma component, and a red chroma component, respectively. Exemplarily, the luma component is usually denoted by the symbol Y, the blue chroma component by Cb or U, and the red chroma component by Cr or V; thus, a video image can be expressed in the YCbCr format or in the YUV format. In addition, a video image may also be in the RGB format, the YCgCo format, and so on, which is not limited in the embodiments of the present application.
It can be understood that, in current video images or video coding and decoding processes, cross-component prediction technology mainly includes the Cross-component Linear Model (CCLM) prediction mode and the Multi-Directional Linear Model (MDLM) prediction mode. Whether the model parameters are derived according to the CCLM prediction mode or the MDLM prediction mode, the corresponding prediction model can implement prediction between color components, such as from the first color component to the second color component, from the second color component to the first color component, from the first color component to the third color component, from the third color component to the first color component, from the second color component to the third color component, or from the third color component to the second color component.
以第一颜色分量到第二颜色分量的预测为例,假定第一颜色分量为亮度分量,第二颜色分量为色度分量,为了减少亮度分量与色度分量之间的冗余,在VVC中使用CCLM预测模式,即根据同一编码块的亮度重建值来构造色度的预测值,如:Pred C(i,j)=α·Rec L(i,j)+β。
其中,i,j表示编码块中待预测像素的位置坐标,i表示水平方向,j表示竖直方向;Pred C(i,j)表示编码块中位置坐标(i,j)的待预测像素对应的色度预测值,Rec L(i,j)表示同一编码块中(经过下采样的)位置坐标(i,j)的待预测像素对应的亮度重建值。另外,α和β表示模型参数,可通过参考像素推导得到。
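为便于理解上述线性模型的计算方式,下面给出一个仅作示意的Python片段(假设模型参数α和β已经由参考像素推导得到;函数名与变量名均为本示例的假设,并非标准规定的实现):

```python
def cclm_predict(rec_luma, alpha, beta):
    """按 PredC(i, j) = alpha * RecL(i, j) + beta 逐像素计算色度预测值(示意)。

    rec_luma: 与色度块同尺寸的(经过下采样的)亮度重建值二维列表
    alpha, beta: 已推导得到的模型参数
    """
    height = len(rec_luma)
    width = len(rec_luma[0])
    pred_c = [[0] * width for _ in range(height)]
    for j in range(height):        # j 为竖直方向
        for i in range(width):     # i 为水平方向
            pred_c[j][i] = alpha * rec_luma[j][i] + beta
    return pred_c
```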
对于编码块而言,其相邻区域可以分为五部分:左侧相邻区域、上侧相邻区域、左下侧相邻区域、左上侧相邻区域和右上侧相邻区域。在H.266/VVC中包括三种跨分量线性模型预测模式,分别为:左侧及上侧相邻的帧内CCLM预测模式(可以用INTRA_LT_CCLM表示)、左侧及左下侧相邻的帧内CCLM预测模式(可以用INTRA_L_CCLM表示)和上侧及右上侧相邻的帧内CCLM预测模式(可以用INTRA_T_CCLM表示)。在这三种预测模式中,每种预测模式都可以选取预设数量(比如4个)的参考像素用于模型参数α和β的推导,而这三种预测模式的最大区别在于用于推导模型参数α和β的参考像素对应的选择区域是不同的。
具体地,针对色度分量对应的编码块尺寸为W×H,假定参考像素对应的上侧选择区域为W′,参考像素对应的左侧选择区域为H′;这样,
对于INTRA_LT_CCLM模式,参考像素可以在上侧相邻区域和左侧相邻区域进行选取,即W′=W,H′=H;
对于INTRA_L_CCLM模式,参考像素可以在左侧相邻区域和左下侧相邻区域进行选取,即H′=W+H,并设置W′=0;
对于INTRA_T_CCLM模式,参考像素可以在上侧相邻区域和右上侧相邻区域进行选取,即W′=W+H,并设置H′=0。
需要注意的是,在VTM中,对于右上侧相邻区域内最多只存储了W范围的像素点,对于左下侧相邻区域内最多只存储了H范围的像素点。虽然INTRA_L_CCLM模式和INTRA_T_CCLM模式的选择区域的范围定义为W+H,但是在实际应用中,INTRA_L_CCLM模式的选择区域将限制在H+H之内,INTRA_T_CCLM模式的选择区域将限制在W+W之内;这样,
对于INTRA_L_CCLM模式,参考像素可以在左侧相邻区域和左下侧相邻区域进行选取,H′=min{W+H,H+H};
对于INTRA_T_CCLM模式,参考像素可以在上侧相邻区域和右上侧相邻区域进行选取,W′=min{W+H,W+W}。
参见图1,其示出了一种有效相邻区域的分布示意图。在图1中,左侧相邻区域、左下侧相邻区域、 上侧相邻区域和右上侧相邻区域都是有效的;另外,灰色填充的块即为编码块中位置坐标为(i,j)的待预测像素。
如此,在图1的基础上,针对三种预测模式的选择区域如图2所示。其中,在图2中,(a)表示了INTRA_LT_CCLM模式的选择区域,包括了左侧相邻区域和上侧相邻区域;(b)表示了INTRA_L_CCLM模式的选择区域,包括了左侧相邻区域和左下侧相邻区域;(c)表示了INTRA_T_CCLM模式的选择区域,包括了上侧相邻区域和右上侧相邻区域。这样,在确定出三种预测模式的选择区域之后,可以在选择区域内进行用于模型参数推导的像素选取。如此选取到的像素可以称为参考像素,通常参考像素的个数为4个;而且对于一个尺寸确定的W×H的编码块,其参考像素的位置一般是确定的。
在获取到预设数量的参考像素之后,目前是按照图3所示的模型参数推导方案的流程示意图进行色度预测。根据图3所示的流程,假定预设数量为4个,该流程可以包括:
S301:在选择区域中获取参考像素。
S302:判断有效参考像素的个数。
S303:若有效参考像素的个数为0,则将模型参数α设置为0,β设置为默认值。
S304:色度预测值填充为默认值。
S305:若有效参考像素的个数为4,则经过比较获得亮度分量中较大值的两个参考像素和较小值的两个参考像素。
S306:计算较大值对应的均值点和较小值对应的均值点。
S307:根据两个均值点推导得到模型参数α和β。
S308:使用α和β所构建的预测模型进行色度预测。
需要说明的是,在VVC中,有效参考像素为0的这一步骤是根据相邻区域的有效性进行判断的。
还需要说明的是,利用“两点确定一条直线”原则来构建预测模型,这里的两点可以称为拟合点。目前的技术方案中,在获取到4个参考像素之后,经过比较获得亮度分量中较大值的两个参考像素和较小值的两个参考像素;然后根据较大值的两个参考像素,求取一均值点(可以用mean max表示),根据较小值的两个参考像素,求取另一均值点(可以用mean min表示),即可得到两个均值点mean max和mean min;再将mean max和mean min作为两个拟合点,能够推导出模型参数(用α和β表示);最后根据α和β构建出预测模型,并根据该预测模型进行色度分量的预测处理。
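下面用一个示意性的Python片段说明上述"两点确定一条直线"的模型参数推导过程(假定已选取4个有效参考像素,未包含标准中的定点化与边界保护细节,变量命名为本示例的假设):

```python
def derive_cclm_params(ref_luma, ref_chroma):
    """由4个参考像素推导模型参数 alpha、beta(浮点示意)。"""
    # 按亮度值排序,得到较小值的两个参考像素和较大值的两个参考像素
    pairs = sorted(zip(ref_luma, ref_chroma), key=lambda p: p[0])
    # 分别求较小值对应的均值点 mean_min 和较大值对应的均值点 mean_max
    luma_min = (pairs[0][0] + pairs[1][0]) / 2.0
    chroma_min = (pairs[0][1] + pairs[1][1]) / 2.0
    luma_max = (pairs[2][0] + pairs[3][0]) / 2.0
    chroma_max = (pairs[2][1] + pairs[3][1]) / 2.0
    # 将两个均值点作为拟合点,推导 alpha 和 beta
    if luma_max == luma_min:
        alpha = 0.0
    else:
        alpha = (chroma_max - chroma_min) / (luma_max - luma_min)
    beta = chroma_min - alpha * luma_min
    return alpha, beta
```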
然而,在相关技术中,针对每个编码块都使用简单的线性模型Pred C(i,j)=α·Rec L(i,j)+β来预测色度分量,并且每个编码块任意位置像素都使用相同的模型参数α和β进行预测。这样会导致以下缺陷:一方面,不同内容特性的编码块都利用简单线性模型进行亮度到色度的映射,以此实现色度预测,但是并非任意编码块内的亮度到色度的映射函数都可以准确由此简单线性模型拟合,这导致部分编码块预测效果不够准确;另一方面,在预测过程中,编码块内不同位置的像素点均使用相同的模型参数α和β,编码块内不同位置的预测准确性同样存在较大差异;又一方面,在CCLM技术的预测过程中,在YUV422、YUV411、YUV420等具有色度亚采样格式的视频下获取相邻区域的参考亮度像素和当前块内的重建亮度像素时需要对亮度重建值进行下采样,但是人眼对亮度信号很敏感,亮度下采样后造成的亮度信息损失会影响亮度到色度映射模型的准确性,导致色度预测的准确性下降。简单来说,目前的CCLM技术下部分编码块的预测值与原始值之间存在较大偏差,导致预测准确度低,造成质量下降,进而降低了编解码效率。
基于此,本申请实施例提供了一种编码方法,通过确定当前块的第一颜色分量的参考样值;根据当前块的第一颜色分量的参考样值,确定加权系数;根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;根据第二预测块,确定当前块的第二颜色分量采样点的预测差值。
本申请实施例还提供了一种解码方法,通过确定当前块的第一颜色分量的参考样值;根据当前块的第一颜色分量的参考样值,确定加权系数;根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;根据第二预测块,确定当前块的第二颜色分量采样点的重建值。
这样,利用当前块相邻的参考像素与当前块内的颜色分量信息,不仅充分考虑了已有的颜色分量信息,使得在不损失亮度信息的基础上能够建立更准确的非线性映射模型对每个色度分量的参考样值分配权重进行加权预测;而且对于第一滤波处理还充分考虑了不同的颜色格式信息,根据不同的颜色格式信息进行色度和/或亮度的采样滤波,能够始终保证色度分量和亮度分量的空间分辨率一致,这样既可以保证已有亮度信息的准确性,又可以在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信 息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,提升编解码效率,进而提升编解码性能。
下面将结合附图对本申请各实施例进行详细说明。
参见图4A,其示出了本申请实施例提供的一种编码器的组成框图示意图。如图4A所示,编码器(具体为“视频编码器”)100可以包括变换与量化单元101、帧内估计单元102、帧内预测单元103、运动补偿单元104、运动估计单元105、反变换与反量化单元106、滤波器控制分析单元107、滤波单元108、编码单元109和解码图像缓存单元110等,其中,滤波单元108可以实现去方块滤波及样本自适应缩进(Sample Adaptive 0ffset,SAO)滤波,编码单元109可以实现头信息编码及基于上下文的自适应二进制算术编码(Context-based Adaptive Binary Arithmetic Coding,CABAC)。针对输入的原始视频信号,通过编码树块(Coding Tree Unit,CTU)的划分可以得到一个视频编码块,然后对经过帧内或帧间预测后得到的残差像素信息通过变换与量化单元101对该视频编码块进行变换,包括将残差信息从像素域变换到变换域,并对所得的变换系数进行量化,用以进一步减少比特率;帧内估计单元102和帧内预测单元103是用于对该视频编码块进行帧内预测;明确地说,帧内估计单元102和帧内预测单元103用于确定待用以编码该视频编码块的帧内预测模式;运动补偿单元104和运动估计单元105用于执行所接收的视频编码块相对于一或多个参考帧中的一或多个块的帧间预测编码以提供时间预测信息;由运动估计单元105执行的运动估计为产生运动向量的过程,所述运动向量可以估计该视频编码块的运动,然后由运动补偿单元104基于由运动估计单元105所确定的运动向量执行运动补偿;在确定帧内预测模式之后,帧内预测单元103还用于将所选择的帧内预测数据提供到编码单元109,而且运动估计单元105将所计算确定的运动向量数据也发送到编码单元109;此外,反变换与反量化单元106是用于该视频编码块的重构建,在像素域中重构建残差块,该重构建残差块通过滤波器控制分析单元107和滤波单元108去除方块效应伪影,然后将该重构残差块添加到解码图像缓存单元110的帧中的一个预测性块,用以产生经重构建的视频编码块;编码单元109是用于编码各种编码参数及量化后的变换系数,在基于CABAC的编码算法中,上下文内容可基于相邻编码块,可用于编码指示所确定的帧内预测模式的信息,输出该视频信号的码流;而解码图像缓存单元110是用于存放重构建的视频编码块,用于预测参考。随着视频图像编码的进行,会不断生成新的重构建的视频编码块,这些重构建的视频编码块都会被存放在解码图像缓存单元110中。
参见图4B,其示出了本申请实施例提供的一种解码器的组成框图示意图。如图4B所示,解码器(具体为“视频解码器”)200包括解码单元201、反变换与反量化单元202、帧内预测单元203、运动补偿单元204、滤波单元205和解码图像缓存单元206等,其中,解码单元201可以实现头信息解码以及CABAC解码,滤波单元205可以实现去方块滤波以及SAO滤波。输入的视频信号经过图4A的编码处理之后,输出该视频信号的码流;该码流输入解码器200中,首先经过解码单元201,用于得到解码后的变换系数;针对该变换系数通过反变换与反量化单元202进行处理,以便在像素域中产生残差块;帧内预测单元203可用于基于所确定的帧内预测模式和来自当前帧或图片的先前经解码块的数据而产生当前视频解码块的预测数据;运动补偿单元204是通过剖析运动向量和其他关联语法元素来确定用于视频解码块的预测信息,并使用该预测信息以产生正被解码的视频解码块的预测性块;通过对来自反变换与反量化单元202的残差块与由帧内预测单元203或运动补偿单元204产生的对应预测性块进行求和,而形成解码的视频块;该解码的视频信号通过滤波单元205以便去除方块效应伪影,可以改善视频质量;然后将经解码的视频块存储于解码图像缓存单元206中,解码图像缓存单元206存储用于后续帧内预测或运动补偿的参考图像,同时也用于视频信号的输出,即得到了所恢复的原始视频信号。
需要说明的是,本申请实施例的方法主要应用在如图4A所示的帧内预测单元103部分和如图4B所示的帧内预测单元203部分。也就是说,本申请实施例既可以应用于编码器,也可以应用于解码器,甚至还可以同时应用于编码器和解码器,但是本申请实施例不作具体限定。
还需要说明的是,当应用于帧内预测单元103部分时,“当前块”具体是指当前待进行帧内预测的编码块;当应用于帧内预测单元203部分时,“当前块”具体是指当前待进行帧内预测的解码块。
在本申请的一实施例中,参见图5,其示出了本申请实施例提供的一种解码方法的流程示意图一。如图5所示,该方法可以包括:
S501:确定当前块的第一颜色分量的参考样值。
需要说明的是,本申请实施例的解码方法应用于解码装置,或者集成有该解码装置的解码设备(也可简称为“解码器”)。另外,本申请实施例的解码方法具体可以是指一种帧内预测方法,更具体地,是一种基于权重的色度预测(Weight-based Chroma Prediction,WCP)方法。
在本申请实施例中,视频图像可以划分为多个解码块,每个解码块可以包括第一颜色分量、第二颜色分量和第三颜色分量,而这里的当前块是指视频图像中当前待进行帧内预测的解码块。另外,假定当前块进行第一颜色分量的预测,而且第一颜色分量为亮度分量,即待预测分量为亮度分量,那么当前块也可以称为亮度预测块;或者,假定当前块进行第二颜色分量的预测,而且第二颜色分量为色度分量,即待预测分量为色度分量,那么当前块也可以称为色度预测块。
还需要说明的是,在本申请实施例中,当前块的参考信息可以包括有当前块的相邻区域中的第一颜色分量采样点的取值和当前块的相邻区域中的第二颜色分量采样点的取值,这些采样点(Sample)可以是根据当前块的相邻区域中的已解码像素确定的。在一些实施例中,当前块的相邻区域可以包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在这里,上侧相邻区域和右上侧相邻区域整体可以看作上方区域,左侧相邻区域和左下侧相邻区域整体可以看作左方区域;除此之外,相邻区域还可以包括左上方区域,详见图6所示。其中,在对当前块进行第二颜色分量的预测时,当前块的上方区域、左方区域和左上方区域作为相邻区域均可被称为当前块的参考区域,而且参考区域中的像素都是已解码的参考像素。
还需要说明的是,在本申请实施例中,当前块的相邻区域可以包括与当前块相邻的多行或者多列。例如,左方区域可以包括一列或者多列,上方区域可以包括一行或者多行,甚至对于行或者列的数量增多或者减少,本申请实施例均不作任何限定。
在一些实施例中,确定当前块的第一颜色分量的参考样值(Sample),可以包括:根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考样值。
需要说明的是,在本申请实施例中,当前块的参考像素可以是指与当前块相邻的参考像素点,也可称为当前块的相邻区域中的第一颜色分量采样点、第二颜色分量采样点,用Neighboring Sample或Reference Sample表示。其中,这里的相邻可以是空间相邻,但是并不局限于此。例如,相邻也可以是时域相邻、空间与时域相邻,甚至当前块的参考像素还可以是对空间相邻、时域相邻、空间和时域相邻的参考像素点进行某种处理后得到的参考像素等等,本申请实施例不作任何限定。
还需要说明的是,在本申请实施例中,第一颜色分量为亮度分量,第二颜色分量为色度分量。那么,当前块的相邻区域中的第一颜色分量采样点的取值表示为当前块的参考像素对应的参考亮度信息,当前块的相邻区域中的第二颜色分量采样点的取值则表示为当前块的参考像素对应的参考色度信息。
还需要说明的是,在本申请实施例中,从当前块的相邻区域中确定第一颜色分量采样点的取值,这里的相邻区域可以是仅包括上侧相邻区域,或者仅包括左侧相邻区域,也可以是包括上侧相邻区域和右上侧相邻区域,或者包括左侧相邻区域和左下侧相邻区域,或者包括上侧相邻区域和左侧相邻区域,或者甚至还可以包括上侧相邻区域、右上侧相邻区域和左侧相邻区域等等,本申请实施例不作任何限定。
还需要说明的是,在本申请实施例中,相邻区域也可以是根据当前块的预测模式进行确定。在一种具体的实施例中,可以包括:
若当前块的预测模式为水平模式,则根据上侧相邻区域和/或右上侧相邻区域中的像素,确定参考像素;
若当前块的预测模式为垂直模式,则根据左侧相邻区域和/或左下侧相邻区域中的像素,确定参考像素。
示例性地,如果当前块的预测模式是水平模式,那么色度分量的预测中相邻区域可以只选取上侧相邻区域和/或右上侧相邻区域;如果当前块的预测模式是垂直模式,那么色度分量预测中相邻区域可以只选取左侧相邻区域和/或左下侧相邻区域。
进一步地,在一些实施例中,对于第一颜色分量采样点的取值的确定,该方法还可以包括:对相邻区域中的第一颜色分量采样点进行筛选处理,确定第一颜色分量采样点的取值。
需要说明的是,在相邻区域中的第一颜色分量采样点中,可能会存在部分不重要的采样点(例如,这些采样点的相关性较差)或者部分异常的采样点,为了保证预测的准确性,需要将这些采样点剔除掉,以便得到有效的第一颜色分量采样点的取值。也就是说,在本申请实施例中,根据当前块的相邻区域中的第一颜色分量采样点,组成第一采样点集合;那么可以对第一采样点集合进行筛选处理,确定第一颜色分量采样点的取值。
在一种具体的实施例中,所述对相邻区域中的第一颜色分量采样点进行筛选处理,确定第一颜色分量采样点的取值,可以包括:
基于相邻区域中的第一颜色分量采样点的位置和/或颜色分量强度,确定待选择采样点位置;
根据待选择采样点位置,从相邻区域中确定第一颜色分量采样点的取值。
需要说明的是,在本申请实施例中,颜色分量强度可以用颜色分量信息来表示,比如参考亮度信息、参考色度信息等;这里,颜色分量信息的值越大,表明了颜色分量强度越高。这样,针对相邻区域中的 第一颜色分量采样点进行筛选,可以是根据采样点的位置来进行筛选的,也可以是根据颜色分量强度来进行筛选的,从而根据筛选得到的采样点确定出有效的第一颜色分量采样点,进而确定出第一颜色分量采样点的取值。
在一些实施例中,所述确定当前块的第一颜色分量的参考样值,还可以包括:
对第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;
根据当前块的第一颜色分量的滤波相邻样值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,当前块的第一颜色分量的滤波相邻样值的数量大于第一颜色分量采样点的取值的数量。
在本申请实施例中,第二滤波处理可以为上采样滤波处理。其中,第一颜色分量为亮度分量,为了保证参考亮度信息不损失,这里可以是参考亮度信息保持不变,也可以是对参考亮度信息进行上采样滤波处理。示例性地,如果当前块的尺寸大小为2M×2N,参考亮度信息为2M+2N个,那么在经过上采样滤波之后,可以变换至4M+4N个。
在一些实施例中,所述确定当前块的第一颜色分量的参考样值,还可以包括:基于当前块中第一参考颜色分量采样点的重建值,确定当前块的第一颜色分量的参考样值。
需要说明的是,在本申请实施例中,第一参考颜色分量可以为亮度分量;那么,当前块中第一参考颜色分量采样点的重建值即为当前块的重建亮度信息。
进一步地,在一些实施例中,所述确定当前块的第一颜色分量的参考样值,还可以包括:
对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值;
根据当前块中第一参考颜色分量采样点的滤波样值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,当前块中第一参考颜色分量采样点的滤波样值的数量大于当前块中第一参考颜色分量采样点的重建值的数量。
在本申请实施例中,第三滤波处理可以为上采样滤波处理。其中,第一参考颜色分量为亮度分量,为了保证当前块内的重建亮度信息不损失,获得更精确的色度预测值,这里可以是当前块内的重建亮度信息保持不变,也可以是对当前块内的重建亮度信息进行上采样滤波处理。示例性地,如果当前块内的重建亮度信息的数量为2M×2N,那么在经过上采样滤波之后,可以变换至4M×4N。
需要说明的是,在本申请实施例中,考虑到在加权预测之前对参考信息的滤波处理,可以是仅对参考亮度信息进行滤波处理,也可以是仅对重建亮度信息进行滤波处理,还可以是对参考亮度信息和重建亮度信息均进行滤波处理,这里并不作任何限定。那么对于亮度差的计算,可以是参考亮度信息和重建亮度信息之差的绝对值,也可以是滤波处理后的参考亮度信息和重建亮度信息之差的绝对值,还可以是参考亮度信息和滤波处理后的重建亮度信息之差的绝对值,甚至还可以是滤波处理后的参考亮度信息和滤波处理后的重建亮度信息之差的绝对值。
还需要说明的是,在本申请实施例中,当前块的第一颜色分量的参考样值也可以设置为亮度差信息。因此,在一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的重建值之差的绝对值。
在另一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的重建值之差的绝对值。
在又一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的滤波样值之差的绝对值。
在再一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的滤波样值之差的绝对值。
也就是说,在保持亮度信息不损失的情况下,可以对当前块的相邻区域中的参考亮度信息进行上采样滤波处理,也可以对当前块内的重建亮度信息进行上采样滤波处理,还可以对这两者都进行上采样滤波处理,甚至还可以对这两者都不进行上采样滤波处理,然后根据不同组合来确定亮度差信息,将亮度差信息作为当前块的第一颜色分量的参考样值。
S502:根据当前块的第一颜色分量的参考样值,确定加权系数。
需要说明的是,在本申请实施例中,所述根据当前块的第一颜色分量的参考样值,确定加权系数,可以包括:
确定在预设映射关系下第一颜色分量的参考样值对应的取值;
将加权系数设置为等于取值。
可以理解地,在本申请实施例中,第一颜色分量的参考样值可以为当前块的第一颜色分量的滤波相邻样值与当前块中第一参考颜色分量采样点的滤波样值之差的绝对值。其中,第一参考颜色分量为第一 颜色分量,且第一颜色分量是与本申请实施例待预测的第二颜色分量不同的颜色分量。
示例性地,假定第一颜色分量为色度分量,第二颜色分量为亮度分量,本申请实施例主要是对当前块中待预测像素的色度分量进行预测。首先选取当前块中的至少一个待预测像素,分别计算其重建色度与相邻区域中的参考色度之间的亮度差(用|ΔC k|表示);不同位置的待预测像素在相邻区域的色度差存在差异,色度差最小的参考像素位置会跟随当前块中待预测像素的变化,通常色度差的大小代表色度之间的相近程度。如果|ΔC k|较小,表明色度值的相似性比较强,对应的加权系数(用w k表示)可以赋予较大的权重;反之,如果|ΔC k|较大,表明色度值的相似性比较弱,w k可以赋予较小的权重。也就是说,w k与|ΔC k|之间的关系近似呈反比。这样,根据|ΔC k|可以建立预设的映射关系,如下所示,
w k=f(|ΔC k|)         (1)
在这里,以式(1)为例,|ΔC k|表示第一颜色分量的参考样值,f(|ΔC k|)表示在预设映射关系下第一颜色分量的参考样值对应的取值,w k表示加权系数,即将w k设置为等于f(|ΔC k|)。
还需要说明的是,在本申请实施例中,如果第二颜色分量的参考样值的数量为N个,那么加权系数的数量也为N个;其中,N个加权系数之和等于1,且每一个加权系数均为大于或等于0且小于或等于1的值,即0≤w k≤1。但是需要注意的是,“N个加权系数之和等于1”仅属于理论概念;在实际的定点化实现过程中,加权系数的绝对值可以大于1。
可以理解地,在概率论和相关领域中,归一化指数函数,或称Softmax函数,是逻辑函数的一种推广。它能将一个含任意实数的N维向量z“压缩”到另一个N维向量σ(z)中,使得每一个元素的范围都在(0,1)之间,并且所有元素的和为1,经常被作为多分类神经网络的非线性激活函数。Softmax函数如下所示,
σ(z)_k = exp(z_k) / Σ_{j=1}^{N} exp(z_j),k=1,2,…,N        (2)
然而,Softmax函数可以满足w k的条件约束,但是其函数值随向量元素的增大而增大,不符合w k与|ΔC k|近似反比的关系。因此,为了适应w k的设计,并且满足对w k非线性的期望,可以带参数γ的Softmax函数来实现式(1)的f函数,具体如下所示,
w_k = f(|ΔC_k|) = exp(γ×|ΔC_k|) / Σ_{j=1}^{N} exp(γ×|ΔC_j|)        (3)
其中,为了实现w k与|ΔC k|近似反比的关系,可以将γ限定为负数。也就是说,在第一参考颜色分量参数为|ΔC k|的情况下,|ΔC k|与w k之间的映射关系可以表示为:
w_k = exp(γ×|ΔC_k|) / Σ_{j=1}^{N} exp(γ×|ΔC_j|),k=1,2,…,N        (4)
根据式(4),在一些实施例中,所述确定在预设映射关系下第一颜色分量的参考样值对应的取值,可以包括:
确定第一因子;
根据第一因子和第一颜色分量的参考样值,确定第一乘积值;
确定第一乘积值在预设映射关系下对应的取值。
需要说明的是,在本申请实施例中,第一因子为小于零的常数值。其中,以Softmax函数为例,在式(4)中,γ表示第一因子,|ΔC k|表示第一参考颜色分量参数,γ|ΔC k|表示第一乘积值,w k表示加权系数;k=1,2,…,N。
在一种具体的实施例中,所述确定第一因子,可以包括:第一因子是预设常数值。
需要说明的是,在这种情况下,对于γ而言,可以根据色度相对平坦的特性调整邻近色度的加权系数分布,从而捕获适合自然图像色度预测的加权系数分布。为了确定适合自然图像色度预测的参数γ,遍历给定的γ集合,通过不同γ下预测色度与原始色度间的差距来衡量γ合适与否。示例性地,γ可以取-2 ε,其中ε∈{1,0,-1,-2,-3};经过试验发现,在此γ集合中,γ的最佳取值为-0.25。因此,在一种具体的实施例中,γ可以设置为-0.25,但是本申请实施例并不作具体限定。
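以式(4)为例,下面给出一个计算加权系数的示意性Python片段(γ取-0.25仅为沿用上文的示例取值,函数名为本示例的假设,未体现定点化实现):

```python
import math

def softmax_weights(abs_diffs, gamma=-0.25):
    """按 w_k = exp(gamma*|ΔC_k|) / Σ exp(gamma*|ΔC_j|) 计算N个加权系数(示意)。"""
    exps = [math.exp(gamma * d) for d in abs_diffs]
    total = sum(exps)
    return [e / total for e in exps]

# 用法示例:差值越小,得到的权重越大
# softmax_weights([2, 10, 40]) 返回的第一个加权系数最大
```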
在另一种具体的实施例中,所述确定第一因子,可以包括:根据当前块的尺寸参数,确定第一因子的取值。
进一步地,在一些实施例中,该方法还可以包括:根据预设的当前块的尺寸参数与第一因子的取值映射查找表,确定第一因子的取值。
在这里,当前块的尺寸参数可以包括以下参数的至少之一:当前块的宽度,当前块的高度、当前块的宽度与高度的乘积。
需要说明的是,本申请实施例可以采用分类方式固定第一因子的取值。例如,将根据当前块的尺寸参数分为三类,确定每一类对应的第一因子的取值。针对这种情况,本申请实施例还可以预先存储当前块的尺寸参数与第一因子的取值映射查找表,然后根据该查找表即可确定出第一因子的取值。示例性地, 表1示出了本申请实施例提供的一种第一因子与当前块的尺寸参数之间的对应关系。需要注意的是,表1仅是一种示例性查找表,并不作具体限定。
表1
当前块的尺寸参数 第一因子
Min(W,H)<=4 -0.125
Min(W,H)>4&&Min(W,H)<=16 -0.0833
Min(W,H)>16 -0.0625
在又一种具体的实施例中,所述确定第一因子,可以包括:根据当前块的参考像素数量,确定第一因子的取值。
进一步地,在一些实施例中,该方法还可以包括:根据预设的当前块的参考像素数量与第一因子的取值映射查找表,确定第一因子的取值。
需要说明的是,本申请实施例可以将参考像素数量划分为三类,仍然采用分类方式固定第一因子的取值。例如,将根据当前块的参考像素数量划分为三类,确定每一类对应的第一因子的取值。针对这种情况,本申请实施例也可以预先存储当前块的参考像素数量与第一因子的取值映射查找表,然后根据该查找表即可确定出第一因子的取值。示例性地,表2示出了本申请实施例提供的一种第一因子与当前块的参考像素数量之间的对应关系。需要注意的是,表2仅是一种示例性查找表,并不作具体限定。
表2
参考像素数量 第一因子
0<R<16 -0.1113
16<=R<32 -0.0773
R>=32 -0.0507
进一步地,对于第一乘积值而言,所述根据第一因子和第一颜色分量的参考样值,确定第一乘积值,可以包括:
将第一乘积值设置为等于第一因子和第一颜色分量的参考样值的乘积;
或者,
将第一乘积值设置为等于对第一颜色分量的参考样值进行比特右移后得到的数值,其中,比特右移的位数等于第一因子;
或者,
将第一乘积值设置为根据第一因子对第一颜色分量的参考样值进行加法和比特移位操作后得到的数值。
示例性地,假定第一因子等于0.25,第一颜色分量的参考样值用Ref表示,那么第一乘积值可以等于0.25×Ref,而0.25×Ref又可表示为Ref/4,即Ref>>2。另外,在定点化计算过程中,还有可能会将浮点数转换为加法和位移操作;也就是说,对于第一乘积值而言,其计算方式不作任何限定。
还可以理解地,在本申请实施例中,第一颜色分量也可以为亮度分量,第二颜色分量为色度分量,这时候选取当前块中的至少一个待预测像素,分别计算其重建亮度与相邻区域中的参考亮度之间的亮度差(用|ΔY k|表示)。其中,如果|ΔY k|较小,表明亮度值的相似性比较强,对应的加权系数(用w k表示)可以赋予较大的权重;反之,如果|ΔY k|较大,表明亮度值的相似性比较弱,w k可以赋予较小的权重,也就是说,在计算加权系数时,第一颜色分量的参考样值也可以为|ΔY k|,进而计算出加权系数。
需要注意的是,在对当前块中待预测像素的色度分量进行预测时,由于待预测像素的色度分量值无法直接确定,因而参考像素与待预测像素之间的色度差|ΔC k|也无法直接得到;但是当前块的局部区域内,分量间存在强相关性,这时候可以根据参考像素与待预测像素之间的亮度差|ΔY k|来推导得到|ΔC k|;即根据|ΔY k|和第二因子的乘积,可以得到|ΔC k|;这样,第一乘积值等于第一因子与|ΔY k|的乘积。
也就是说,在本申请实施例中,对于第一颜色分量的参考样值而言,其可以是|ΔC k|,即色度差的绝对值;或者也可以是|ΔY k|,即亮度差的绝对值;或者还可以是|αΔY k|,即亮度差的绝对值与预设乘子之积等等。其中,这里的预设乘子即为本申请实施例所述的第二因子。
进一步地,对于第二因子而言,在一种具体的实施例中,该方法还可以包括:根据参考像素的第一颜色分量值与参考像素的第二颜色分量值进行最小二乘法计算,确定第二因子。
也就是说,假定参考像素的数量为N个,参考像素的第一颜色分量值即为当前块的参考亮度信息,参考像素的第二颜色分量值即为当前块的参考色度信息,那么可以对N个参考像素的色度分量值和亮度分量值进行最小二乘法计算,得到第二因子。示例性地,最小二乘法回归计算如下所示,
α = (N×Σ_{k=1}^{N}(L_k×C_k) - Σ_{k=1}^{N}L_k×Σ_{k=1}^{N}C_k) / (N×Σ_{k=1}^{N}(L_k×L_k) - (Σ_{k=1}^{N}L_k)²)        (5)
其中,L k表示第k个参考像素的亮度分量值,C k表示第k个参考像素的色度分量值,N表示参考像素的个数;α表示第二因子,其可以是采用最小二乘法回归计算得到。需要注意的是,第二因子也可以是固定取值或者基于固定取值进行微调整等等,本申请实施例并不作具体限定。
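对于式(5)的最小二乘法回归,下面给出一个示意性的Python片段(按常见的最小二乘斜率公式实现,式(5)的具体形式以此为假设;函数名为本示例的假设):

```python
def least_squares_alpha(luma_refs, chroma_refs):
    """对N个参考像素的亮度值L_k与色度值C_k做最小二乘回归,返回第二因子alpha(示意)。"""
    n = len(luma_refs)
    sum_l = sum(luma_refs)
    sum_c = sum(chroma_refs)
    sum_lc = sum(l * c for l, c in zip(luma_refs, chroma_refs))
    sum_ll = sum(l * l for l in luma_refs)
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        return 0.0
    return (n * sum_lc - sum_l * sum_c) / denom
```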
另外,对于预设映射关系而言,预设映射关系可以为预设函数关系。在一些实施例中,预设映射关系可以是Softmax函数。其中,Softmax函数是归一化指数函数,但是本申请实施例也可以不用归一化,其取值并不局限在[0,1]范围内。
示例性地,以|ΔY k|作为第一颜色分量的参考样值为例,那么第k个参考像素对应的加权系数w k可以由式(4)计算得到,也可以替换如下:
w_k = exp(γ×|ΔY_k|) / Σ_{j=1}^{N} exp(γ×|ΔY_j|)        (6)
w_k = exp(-|ΔY_k|/S) / Σ_{j=1}^{N} exp(-|ΔY_j|/S)        (7)
其中,S表示控制参数,S与第一因子(γ)之间的关系为:
γ = -1/S
示例性地,S的取值与当前块的尺寸参数有关。其中,当前块的尺寸参数包括当前块的宽度和高度。在一种可能的实现方式中,如果宽度和高度的最小值小于或等于4,那么S的取值等于8;如果宽度和高度的最小值大于4且小于或等于16,那么S的取值等于12;如果宽度和高度的最小值大于16,那么S的取值等于16。在另一种可能的实现方式中,如果宽度和高度的最小值小于或等于4,那么S的取值等于7;如果宽度和高度的最小值大于4且小于或等于16,那么S的取值等于11;如果宽度和高度的最小值大于16,那么S的取值等于15。或者,S的取值与当前块的参考像素数量(R)有关。在一种可能的实现方式中,如果R小于16,那么S的取值等于8;如果R大于或等于16且小于32,那么S的取值等于12;如果R大于或等于32,那么S的取值等于16;本申请实施例对此均不作任何限定。
还需要说明的是,除了Softmax函数之外,在另一些实施例中,预设映射关系可以是与第一颜色分量的参考样值具有反比关系的加权函数。
示例性地,仍以|ΔY k|作为第一颜色分量的参考样值为例,还可以将式(4)的Softmax函数替换如下:
w_k = 1/(|ΔY_k| + offset)        (8)
w_k = (1/(|ΔY_k| + offset)) / Σ_{j=1}^{N}(1/(|ΔY_j| + offset))        (9)
其中,k=1,2,…N,offset=1或2或0.5或0.25。
这样,预设映射关系为预设函数关系时,可以是如式(4)所示,也可以是如式(6)或式(7)所示,还可以是如式(8)或式(9)所示,甚至还可以是其他能够拟合参考像素的参考亮度值与当前块中待预测像素的亮度重建值越接近、参考像素的参考色度值对当前块中待预测像素的重要性越高的趋势构建加权系数的函数模型等,本申请实施例不作具体限定。
除此之外,在一些实施例中,预设映射关系还可以为预设的查表(look-up table)方式。也就是说,本申请实施例还可以简化操作,例如采用数组元素查表方式来减少一部分的计算操作。其中,对于预设映射关系而言,可以是:根据预设的第一颜色分量的参考样值、第一因子与数组元素的映射查找表,确定数组元素值;然后确定数组元素值在预设映射关系下对应的取值;再将加权系数设置为等于该取值。
具体来说,可以将计算加权系数的f模型通过简化操作实现,比如查表方式。其中,对于待预测像素(i,j)的亮度差|ΔY k|,也可以用|ΔY kij|表示;这样,通过f模型计算加权系数w kij=f(|ΔY kij|)。在这里,可将f模型中的分子以自变量|ΔY kij|和S种类数作为数组索引进行存储,后续f模型中涉及与分子计算相同的操作均通过查表得到,从而能够避免分子、分母部分的计算操作。
需要说明的是,在本申请实施例中,这里的存储可以分为完全存储或部分存储。
在一种具体的实施例中,完全存储是将|ΔY kij|值域范围、S种类数的分子全部存储,完全存储下需要开辟(|ΔY kij|值域范围乘以S种类数)大小的数组空间。其中,所需存储数组大小为|ΔY kij|值域范围大小,此时性能不变。
示例性地,以10比特(bit)像素、式(7)为例,变量|ΔY kij|值域范围为0~1023内的整数,S种类数为3,以|ΔY kij|和S种类数作为二维数组storMole的索引来完全存储分子:
storMole[|ΔY_kij|][sIdx] = exp(-|ΔY_kij|/S),|ΔY_kij| = 0,1,…,1023,sIdx = 0,1,2        (10)
这时候,式(7)的加权系数计算可简化为:
w_kij = storMole[|ΔY_kij|][sIdx] / Σ_{l=0}^{inSize-1} storMole[|ΔY_lij|][sIdx]        (11)
如此,二维数组storMole[索引1][索引2]的元素如表3所示。
表3
进一步地,完全存储也可依照S的分类设置对应的数组偏移量,再根据(|ΔY kij|加偏移量)作为一维数组ostorMole的索引来完全存储分子,具体如下所示:
ostorMole[|ΔY_kij| + offset_S] = exp(-|ΔY_kij|/S),|ΔY_kij| = 0,1,…,1023        (12)
这时候,式(7)的加权系数计算可简化为:
w_kij = ostorMole[|ΔY_kij| + offset_S] / Σ_{l=0}^{inSize-1} ostorMole[|ΔY_lij| + offset_S]        (13)
如此,一维数组ostorMole[索引]的元素如表4所示。
表4
在另一种具体的实施例中,部分存储是将计算加权系数中的所有分子值选取部分进行存储,可选取|ΔY kij|部分值域范围和/或S部分种类的分子值进行存储,不在选取范围内的f模型值默认为0,所需的 存储数组大小为(选取的|ΔY kij|值域范围乘以S的选取种类数)大小。
示例性地,以10bit像素、式(7)为例,假设选取|ΔY kij|值域范围为0~99整数、3种S的分子值进行部分存储,那么|ΔY kij|值域范围为0~99内的整数,S种类数为3,以此|ΔY kij|和S种类数作为二维数组partstorMole的索引来存储分子:
partstorMole[|ΔY_kij|][sIdx] = exp(-|ΔY_kij|/S),|ΔY_kij| = 0,1,…,99,sIdx = 0,1,2        (14)
这时候,式(7)的加权系数计算可简化为:
w_kij = partstorMole[|ΔY_kij|][sIdx] / Σ_{l=0}^{inSize-1} partstorMole[|ΔY_lij|][sIdx]        (15)
如此,二维数组partstorMole[索引1][索引2]的元素如表5所示。
表5
也就是说,部分存储的存储范围可根据实际需求进行存储,不局限于示例中的|ΔY kij|在100整数以内、3种S种类;部分存储同样也可对选取的部分S种类设置对应的偏移量从而开辟一维存储空间存储,本申请实施例对此不作任何限定。另外,对于表3~表5来说,为了定点化计算,这里也可以存储放大后的整数值;这样,后续在确定出预测值之后还需要进行相应的缩小操作。
这样,根据第一颜色分量的参考样值(例如|ΔY kij|),即可确定出加权系数,具体可以包括:w 1、w 2、…、w N等N个加权系数,其中,N表示第二颜色分量的参考样值的数量。在这里,理论上而言,这N个加权系数之和等于1,而且每一个加权系数均为大于或等于0且小于或等于1的值。但是需要注意的是,“N个加权系数之和等于1”仅属于理论概念;在实际的定点化实现过程中,加权系数的绝对值可以大于1。
S503:根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块。
需要说明的是,在本申请实施例中,为了保证亮度信息不损失,此时针对当前块中每个重建亮度位置都进行了色度预测,使得所得到的第一预测块尺寸大于原来的当前块尺寸,即:第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量。
还需要说明的是,在本申请实施例中,对于当前块的第二颜色分量的参考样值的确定,该方法还可以包括:根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考样值。
在这里,相邻区域可以包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在一种具体的实施例中,该方法还可以包括:
对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值;
根据当前块的第二颜色分量的滤波相邻样值,确定当前块的第二颜色分量的参考样值。
在本申请实施例中,当前块的第二颜色分量的滤波相邻样值的数量大于当前块的相邻区域中的第二颜色分量采样点的取值的数量。
在本申请实施例中,第四滤波处理可以为上采样滤波;其中,上采样率是2的正整数倍。
也就是说,第一颜色分量为亮度分量,第二颜色分量为色度分量,为了保证参考亮度信息不损失,这里也可以对参考色度信息进行上采样滤波。示例性地,对于YUV420格式的2M×2N的当前块,其色度块大小为M×N,相邻区域的参考色度信息的数量为M+N,上采样滤波后可以得到2M+2N个色度参考样值,然后使用加权预测的方式得到2M×2N个色度预测值,后续再下采样滤波处理以得到M×N个色度预测值作为最终预测值。需要注意的是,对于YUV420格式的2M×2N的当前块,其色度块大小为M×N,相邻区域的参考色度信息的数量为M+N,也可以仅使用这M+N个参考色度信息,使用加权系数计算得到2M×2N个色度预测值,后续再下采样滤波处理以得到M×N个色度预测值作为最终预测值。
另外,对于YUV420格式的2M×2N的当前块,其色度块大小为M×N,相邻区域的参考色度信息的数量为M+N,上采样滤波后得到4M+4N个色度参考样值;或者,对于YUV444格式的2M×2N的当前块,其色度块大小为2M×2N,相邻区域的参考色度信息的数量为2M×2N,上采样滤波后得到4M+4N个色度参考样值;后续根据这些色度参考样值使用加权预测以及下采样滤波处理的方式,也可以得到M×N个色度预测值作为最终预测值,本申请实施例对此并不作任何限定。
还需要说明的是,在本申请实施例中,无论是对相邻区域中第一颜色分量采样点的取值进行的第二滤波处理,还是对当前块中第一参考颜色分量采样点的重建值进行的第三滤波处理,甚至还是对相邻区域中第二颜色分量采样点的取值进行的第四滤波处理,这三种滤波处理均可以为上采样滤波处理。其中,第二滤波处理可以采用第一滤波器,第三滤波处理可以采用第二滤波器,第四滤波处理可以采用第三滤波器;对于这三种滤波器而言,在这里,第一滤波器、第二滤波器和第三滤波器均可以为上采样滤波器。由于处理的数据不同,滤波器的上采样率也可能存在不同,因此,这三种滤波器可以相同,也可以不同。进一步地,第一滤波器、第二滤波器和第三滤波器均可以为神经网络滤波器,本申请实施例对此均不作任何限定。
可以理解地,假定第一颜色分量为亮度分量,第二颜色分量为色度分量,那么由于相邻区域中第一颜色分量采样点的取值(即参考亮度信息)的空间分辨率、相邻区域中第二颜色分量采样点的取值(即参考色度信息)的空间分辨率,甚至当前块中第一参考颜色分量采样点的重建值(即重建亮度信息)的空间分辨率都会受到颜色格式信息的影响;因此,还可以根据当前的颜色格式信息进行第二滤波处理、第三滤波处理或者第四滤波处理。下面以第四滤波处理为例,在一些实施例中,该方法还可以包括:基于颜色格式信息,对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值。
在一种具体的实施例中,对于第四滤波处理而言,还可以包括:若颜色格式信息指示为4:2:0采样,则对当前块的相邻区域中的第二颜色分量采样点的取值进行上采样滤波;其中,上采样率是2的正整数倍。
需要说明的是,在本申请实施例中,颜色格式信息可以包括4:4:4采样、4:2:2采样、4:1:1采样、4:2:0采样等等。其中,如果颜色格式信息指示为4:4:4采样(也可表示为YUV444)的视频,即亮度与色度的空间分辨率相等,那么不需要对参考色度信息作任何处理;如果颜色格式信息指示为4:2:2采样(也可表示为YUV422)、或者4:1:1采样(也可表示为YUV411)、4:2:0采样(也可表示为YUV420)等具有色度亚采样特征的视频,即亮度和色度的空间分辨率不一致,且色度分量的空间分辨率小于亮度分量的空间分辨率,那么需要对从相邻区域中获取的参考色度信息进行上采样滤波处理。
这样,对于参考色度信息进行的上采样滤波,上采样滤波方式可以是线性插值方法中的任何一种,比如最邻近插值、双线性内插值、双三次插值、均值插值、中值插值、复制插值等;也可以是非线性插值方法中的任何一种,比如基于小波变换的插值算法、基于边缘信息的插值算法等;也可以基于卷积神经网络进行上采样滤波,本申请实施例不作任何限定。示例性地,参见图6B,以YUV420视频格式、复制插值为例进行说明,这里示出了4×1的参考色度信息、8×2的参考亮度信息的示例。其中,8×2的用斜纹填充块为参考亮度信息,4×1的用网格填充块为同位的参考色度信息,4×1的用网格填充块中每个像素点相当于上采样滤波后的8×2的色度块中每2×2子块的左上角像素点,即复制插值后的2×2子块中每个色度像素点的像素值相同。具体地,对于一个2×2子块,另外三个色度像素点(用点填充块)均是左上角像素点(用网格填充块)进行复制得到的。
进一步地,在确定出当前块的第二颜色分量的参考样值之后,在一些实施例中,对于S503来说,根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块,可以包括:
确定第二颜色分量的参考样值与对应的加权系数的加权值;
将第一预测块中第二颜色分量采样点的预测值设置为等于N个加权值之和;其中,N表示第二颜色分量的参考样值的数量,N是正整数。
也就是说,如果第二颜色分量的参考样值的数量有N个,那么首先确定每一个第二颜色分量的参考样值与对应的加权系数的加权值(即w kC k),然后将这N个加权值之和作为预测块中的第二颜色分量采样点的预测值。具体地,其计算公式具体如下,
Pred[i][j] = Σ_{k=1}^{N} w_k×C_k        (16)
示例性地,首先根据当前块中(i,j)位置的重建亮度信息与相邻区域中的参考亮度信息,计算绝对亮度差值;然后再根据Softmax函数计算加权系数,进而可以利用式(16)得到预测块中(i,j)位置的色度分量的预测值。特别地,需要注意的是,这种方式利于并行处理,可以加快计算速度。
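结合式(16),下面给出对单个待预测位置进行加权预测的示意性Python片段(沿用上文基于Softmax计算加权系数的思路,变量命名为本示例的假设,未体现定点化实现):

```python
import math

def predict_chroma_sample(rec_luma_ij, ref_luma, ref_chroma, gamma=-0.25):
    """对当前块(i, j)位置的色度采样点做基于权重的预测(浮点示意)。"""
    # 绝对亮度差:|ΔY_k| = |refY[k] - recY[i][j]|
    diffs = [abs(r - rec_luma_ij) for r in ref_luma]
    # Softmax 加权系数
    exps = [math.exp(gamma * d) for d in diffs]
    total = sum(exps)
    weights = [e / total for e in exps]
    # 式(16):预测值为 N 个加权值之和
    return sum(w * c for w, c in zip(weights, ref_chroma))
```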
还可以理解地,在对当前块的相邻区域中的参考亮度信息和参考色度信息均进行上采样滤波处理时,这时候,还可以包括:基于颜色格式信息,利用第一水平上采样因子和第一垂直上采样因子对当前块的相邻区域中的第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;以及利用第二水平上采样因子和第二垂直上采样因子对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值。
在一种可能的实现方式中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则确定第二水平上采样因子等于第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;
若颜色格式信息指示为4:2:2采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;
若颜色格式信息指示为4:1:1采样,则确定第二水平上采样因子等于4倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;
若颜色格式信息指示为4:2:0采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于2倍的第一垂直上采样因子。
需要说明的是,在本申请实施例中,对参考亮度信息进行上采样滤波,此时亮度信息没有任何损失。在这种情况下,由于YUV视频中的亮度分量的空间分辨率总是大于等于色度分量的空间分辨率,因此参考色度信息必然要进行上采样滤波才能和参考亮度信息的空间分辨率保持一致。这时候需要根据YUV视频格式和对参考亮度信息的空间上采样频率(第一水平上采样因子和第一垂直上采样因子)来决定参考色度信息的空间上采样频率(第二水平上采样因子和第二垂直上采样因子)。
其中,将第一水平上采样因子用S_Hor_RefLuma表示,将第一垂直上采样因子用S_Ver_RefLuma表示,将第二水平上采样因子用S_Hor_RefChroma表示,将第二垂直上采样因子用S_Ver_RefChroma表示。这样,对于YUV444格式的视频,S_Hor_RefChroma可以设置为等于S_Hor_RefLuma,S_Ver_RefChroma可以设置为等于S_Ver_RefLuma;对于YUV422格式的视频:S_Hor_RefChroma可以设置为等于2倍的S_Hor_RefLuma,S_Ver_RefChroma可以设置为等于S_Ver_RefLuma;对于YUV411格式的视频:S_Hor_RefChroma可以设置为等于4倍的S_Hor_RefLuma,S_Ver_RefChroma可以设置为等于S_Ver_RefLuma;对于YUV420格式的视频:S_Hor_RefChroma可以设置为等于2倍的S_Hor_RefLuma,S_Ver_RefChroma可以设置为等于2倍的S_Ver_RefLuma。
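上述参考色度的空间上采样因子与颜色格式、参考亮度上采样因子之间的对应关系,可以用如下示意性的Python片段表达(函数名与返回形式均为本示例的假设):

```python
def ref_chroma_upsample_factors(color_format, s_hor_ref_luma, s_ver_ref_luma):
    """根据颜色格式与参考亮度的上采样因子,返回参考色度的(水平, 垂直)上采样因子(示意)。"""
    if color_format == "4:4:4":
        return s_hor_ref_luma, s_ver_ref_luma
    if color_format == "4:2:2":
        return 2 * s_hor_ref_luma, s_ver_ref_luma
    if color_format == "4:1:1":
        return 4 * s_hor_ref_luma, s_ver_ref_luma
    if color_format == "4:2:0":
        return 2 * s_hor_ref_luma, 2 * s_ver_ref_luma
    raise ValueError("未支持的颜色格式: " + color_format)
```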
还可以理解地,在不对当前块内的重建亮度信息进行上采样滤波处理时,这时候可以根据YUV视频格式推断第一预测块的尺寸大小。因此,在另一种可能的实现方式中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度,第一预测块的高度等于当前块的高度;
若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于当前块的高度;
若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度,第一预测块的高度等于当前块的高度;
若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于2倍的当前块的高度。
需要说明的是,在本申请实施例中,第一预测块的尺寸参数包括宽度和高度,其中,第一预测块的宽度用predSizeW表示,第一预测块的高度用predSizeH表示;当前块的尺寸参数也包括宽度和高度,其中,当前块的宽度用nTbW表示,当前块的高度用nTbH表示。这样,对于YUV444格式的视频,亮度分量和色度分量的空间分辨率相等,此时predSizeW可以设置为等于nTbW,predSizeH可以设置为等于nTbH;对于YUV422格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的 水平分辨率是亮度分量的1/2,此时predSizeW可以设置为等于2倍的nTbW,predSizeH可以设置为等于nTbH;对于YUV411格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/4,此时predSizeW可以设置为等于4倍的nTbW,predSizeH可以设置为等于nTbH;对于YUV420格式的视频,色度分量的水平分辨率和垂直分辨率都是亮度分量的1/2,此时predSizeW可以设置为等于2倍的nTbW,predSizeH可以设置为等于2倍的nTbH。
还可以理解地,在对当前块内的重建亮度信息进行上采样滤波处理时,当前块亮度上采样之后需要根据YUV视频格式和对当前块亮度的空间上采样频率来推断第一预测块的尺寸大小。这时候,可以包括:基于颜色格式信息,利用第三水平上采样因子和第三垂直上采样因子对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值。
在又一种可能的实现方式中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;
若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;
若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;
若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于2倍的当前块的高度与第三垂直上采样因子的乘积。
需要说明的是,在本申请实施例中,将第三水平上采样因子用S_Hor_RecLuma表示,将第三垂直上采样因子用S_Ver_RecLuma表示,然后,根据S_Hor_RecLuma和S_Ver_RecLuma对当前块内的重建亮度信息进行上采样滤波。这时候对于第一预测块的尺寸而言,对于YUV444格式的视频,亮度分量和色度分量的空间分辨率相等,此时predSizeW可以设置为等于S_Hor_RecLuma与nTbW的乘积,predSizeH可以设置为等于S_Ver_RecLuma与nTbH的乘积;对于YUV422格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/2,此时predSizeW可以设置为等于2倍的S_Hor_RecLuma与nTbW的乘积,predSizeH可以设置为等于S_Ver_RecLuma与nTbH的乘积;对于YUV411格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/4,此时predSizeW可以设置为等于4倍的S_Hor_RecLuma与nTbW的乘积,predSizeH可以设置为等于S_Ver_RecLuma与nTbH的乘积;对于YUV420格式的视频,色度分量的水平分辨率和垂直分辨率都是亮度分量的1/2,此时predSizeW可以设置为等于2倍的S_Hor_RecLuma与nTbW的乘积,predSizeH可以设置为等于2倍的S_Ver_RecLuma与nTbH的乘积。
需要注意的是,在本申请实施例中,predSizeH大于或等于当前块的高度nTbH,或者predSizeW大于或等于当前块的宽度nTbW,使得第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量。
S504:对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块。
S505:根据第二预测块,确定当前块的第二颜色分量采样点的重建值。
需要说明的是,在本申请实施例中,第一滤波处理可以为下采样滤波处理。这样,对于经过下采样滤波之后的第二预测块,第二预测块中包含的第二颜色分量的预测值的数量与当前块中包含的第二颜色分量采样点的数量相同。
在一种可能的实现方式中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,可以包括:
利用预设滤波器对第一预测块进行下采样滤波处理,确定当前块的第二颜色分量的第二预测块。
还需要说明的是,在本申请实施例中,预设滤波器可以为下采样滤波器。进一步地,这里的下采样滤波器可以为神经网络滤波器,本申请实施例对此不作任何限定。
在另一种可能的实现方式中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,可以包括:
确定水平下采样因子和垂直下采样因子;
根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波处理,得到当前块的第二颜色分量的第二预测块。
在一种具体的实施例中,根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波处理,得到当前块的第二颜色分量的第二预测块,可以包括:若水平下采样因子大于1,或者垂直下采样因子大于1,则对第一预测块进行下采样滤波处理,得到第二预测块。
在本申请实施例中,对第一预测块进行下采样滤波处理,可以包括下述至少一项:
对第一预测块在水平方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理;
对第一预测块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
在这里,首先可以根据第一预测块的宽度和当前块的宽度计算水平下采样因子,根据第一预测块的高度和当前块的高度计算垂直下采样因子;然后,根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波。具体来说,如果水平下采样因子大于1,垂直下采样因子等于1,那么只需要对第一预测块在水平方向进行下采样;如果水平下采样因子等于1,垂直下采样因子大于1,那么只需要对第一预测块在垂直方向进行下采样;如果水平下采样因子大于1,垂直下采样因子大于1,那么对第一预测块在水平方向和垂直方向都需要进行下采样,其中,可以执行先水平方向再垂直方向的下采样,也可以执行先垂直方向再水平方向的下采样,甚至还可以采用神经网络结构中的卷积操作代替这里的下采样操作,本申请实施例不作任何限定。
还需要说明的是,对于第一滤波处理,还可以采用隔点抽取的方式进行下采样滤波,比如二维滤波器、一维滤波器等等。其中,对于一维滤波器而言,可以是“先垂直方向再水平方向”,也可以是“先水平方向再垂直方向”,还可以是固定滤波顺序,甚至还可以是可灵活调整的滤波顺序(如有标识信息指示的滤波顺序、与预测模式或块大小绑定的顺序等),本申请实施例对此不作任何限定。
在一些实施例中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,可以包括:第一滤波处理包括下采样滤波处理;其中,下采样滤波处理的输入是第一下采样输入块;下采样滤波处理的输出是第一下采样输出块。
进一步地,在一些实施例中,上采样滤波处理,可以包括:
确定下采样因子,其中,下采样因子包括以下至少之一:水平下采样因子、垂直下采样因子;
根据下采样因子对第一下采样输入块进行下采样滤波处理,得到第一下采样输出块。
在一种可能的实现方式中,根据下采样因子对第一下采样输入块进行下采样滤波处理,得到第一下采样输出块,可以包括:若水平下采样因子大于1,或者垂直下采样因子大于1,则对第一下采样输入块进行下采样滤波处理,得到第一下采样输出块。
在本申请实施例中,对第一下采样输入块进行下采样滤波处理,包括下述至少一项:
对第一下采样输入块在水平方向进行下采样滤波处理;
对第一下采样输入块在垂直方向进行下采样滤波处理;
对第一下采样输入块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
对第一下采样输入块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
在这里,首先可以根据第一下采样输入块的宽度和第一下采样输出块的宽度计算水平下采样因子,根据第一下采样输入块的高度和第一下采样输出块的高度计算垂直下采样因子;然后,根据水平下采样因子和垂直下采样因子对第一下采样输入块进行下采样滤波。具体来说,如果水平下采样因子大于1,垂直下采样因子等于1,那么只需要对第一下采样输入块在水平方向进行下采样;如果水平下采样因子等于1,垂直下采样因子大于1,那么只需要对第一下采样输入块在垂直方向进行下采样;如果水平下采样因子大于1,垂直下采样因子大于1,那么对第一下采样输入块在水平方向和垂直方向都需要进行下采样,其中,可以执行先水平方向再垂直方向的下采样,也可以执行先垂直方向再水平方向的下采样,甚至还可以采用神经网络结构中的卷积操作代替这里的下采样滤波操作,本申请实施例不作任何限定。
在另一种可能的实现方式中,对第一预测块进行下采样滤波,确定第二预测块。这时候,该方法还可以包括:将第一预测块作为第一下采样输入块;将第一下采样输出块作为当前块的第二颜色分量的第二预测块。
在另一种可能的实现方式中,对第一预测块先进行增强滤波,然后再进行下采样滤波,确定第二预测块。这时候,该方法还可以包括:对第一预测块进行滤波增强处理,确定第一增强预测块;将第一增强预测块作为第一下采样输入块;将第一下采样输出块作为当前块的第二颜色分量的第二预测块。
在又一种可能的实现方式中,对第一预测块先进行下采样滤波,然后再进行增强滤波,确定第二预测块。这时候,该方法还可以包括:将第一预测块作为第一下采样输入块;将第一下采样输出块作为第一下采样滤波预测块;对第一下采样滤波预测块进行滤波增强处理,确定当前块的第二颜色分量的第二预测块。
在再一种可能的实现方式中,对第一预测块先进行增强滤波,再进行下采样滤波,然后再进行增强滤波,确定第二预测块。这时候,该方法还可以包括:对第一预测块进行第一滤波增强处理,确定第二增强预测块;将第二增强预测块作为第一下采样输入块;将第一下采样输出块作为第二下采样滤波预测块;对第二下采样滤波预测块进行第二滤波增强处理,确定当前块的第二颜色分量的第二预测块。
除此之外,在一些实施例中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,还可以包括:根据水平下采样因子和垂直下采样因子,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一种可能的实现方式中,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块,可以包括:对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在另一种可能的实现方式中,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块,可以包括:对第一预测块在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在又一种可能的实现方式中,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块,可以包括:对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,以及在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
也就是说,本申请实施例也可以不考虑水平下采样和垂直下采样的情况,针对需要下采样的方向(垂直方向或水平方向)每预设个数的色度预测值进行加权和计算;其中,如果每一个色度预测值的权值相等,那么在一种特殊情况下,每预设个数的色度预测值进行加权和计算也可以看作是对这预设个数的色度预测值求均值,将该均值作为下采样滤波后的预测值。
进一步地,在一些实施例中,该方法还可以包括:根据第一预测块中部分采样点的第一颜色分量的参考样值,确定加权系数;根据加权系数和第一预测块中部分采样点的第二颜色分量的参考样值,确定当前块的第二颜色分量的第二预测块。
在一种具体的实施例中,根据加权系数和第一预测块中部分采样点的第二颜色分量的参考样值,确定当前块的第二颜色分量的第二预测块,可以包括:根据加权系数对第一预测块中第(i,j)位置处采样点的第二颜色分量的参考样值进行加权计算,得到当前块中第(x,y)位置处采样点的第二颜色分量的预测值;其中,i、j、x、y均为大于或等于零的整数。
需要说明的是,在本申请实施例中,为了减少计算复杂度,不对当前块内的重建亮度信息进行上采样滤波,这时候可以考虑不对当前块内的重建亮度信息中每个亮度位置进行同位色度点的预测,可以选取部分亮度位置进行同位色度点的预测,从而在得到预测块之后可以不进行下采样滤波,能够保证已有亮度信息的准确性,准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,进而提高了色度预测值的准确性。
在这种实现方式中,为了节省后续对预测块进行上采样或者下采样的操作,可以根据YUV视频格式特征选取相应的位置进行色度预测,假设当前亮度点的位置为CurRecLuma(i,j),则需要进行色度预测的采样点的位置为CurPredChroma(x,y)。这时候,在一些实施例中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则将x设置为等于i,将y设置为等于j;
若颜色格式信息指示为4:2:2采样,则将x设置为等于i与2的乘积,将y设置为等于j;
若颜色格式信息指示为4:1:1采样,则将x设置为等于i与4的乘积,将y设置为等于j;
若颜色格式信息指示为4:2:0采样,则将x设置为等于i与2的乘积,将y设置为等于j与2的乘积。
进一步地,在一些实施例中,该方法还可以包括:确定水平取样位置因子和垂直取样位置因子;将x设置为等于i与水平取样位置因子的乘积,将y设置为等于j与垂直取样位置因子的乘积。
也就是说,为了进一步减少预测时的计算复杂度,可以选择更少的同位色度点进行预测。假设当前亮度点的位置为CurRecLuma(i,j),则需要进行色度预测的点的位置为CurPredChroma(x,y)。假设当前块内重建亮度信息的水平取样位置因子为S_Pos_Hor,垂直取样位置因子为S_Pos_Ver。那么当前亮度点位置和需要进行色度预测的采样点的位置之间的关系如下:
当颜色格式信息指示为YUV444格式/YUV422格式/YUV411格式/YUV420格式的视频时,x可以设置为等于i与S_Pos_Hor的乘积,y可以设置为等于j与S_Pos_Ver的乘积。
进一步地,在确定出第二预测块之后,该第二预测块在一定条件下还需要后处理作为最终的第二预测块。因此,在一些实施例中,该方法还可以包括:在确定当前块的第二颜色分量的第二预测块之后,对第二预测块进行相关处理,将处理后的第二预测块作为第二预测块。
在本申请实施例中,对第二预测块进行相关处理,包括下述至少一项:
对第二预测块进行第三滤波处理;
利用预设补偿值对第二预测块进行修正处理;
利用至少一种预测模式下当前块的第二颜色分量的预测值对第二预测块进行加权融合处理。
需要说明的是,在一种可能的实现方式中,第三滤波处理可以为平滑滤波处理。示例性地,在WCP模式下,为了降低逐像素独立且并行预测带来的不稳定性,例如可以对第二预测块进行平滑滤波处理,然后将处理后的第二预测块作为最终的第二预测块。
在另一种可能的实现方式中,根据当前块的相邻区域中的参考样值,确定第二预测块的第二颜色分量的预设补偿值;根据预设补偿值对第二预测块中的第二颜色分量采样点的预测值进行修正处理,确定最终的第二预测块。示例性地,为了进一步提升WCP模式的预测准确度,可以对第二预测块进行位置相关的修正过程。例如,利用空间位置接近的参考像素对每个待预测第二颜色分量采样点计算色度补偿值,用此色度补偿值对第二预测块中的第二颜色分量采样点进行修正,根据修正后的预测值确定第二颜色分量采样点的最终预测值,进而得到最终的第二预测块。
在又一种可能的实现方式中,根据至少一种预测模式对第二预测块中的第二颜色分量采样点进行预测处理,确定第二预测块中的第二颜色分量采样点的至少一个初始预测值;根据至少一个初始预测值与第二预测块中第二颜色分量采样点的预测值进行加权融合处理,确定最终的第二预测块。
需要说明的是,为了进一步提升WCP模式的预测准确度,还可以将其他预测模式下计算的色度预测值与WCP模式下计算的色度预测值进行加权融合,根据此融合结果确定最终的色度预测块。示例性地,如图7所示,其他预测模式可以包括:平面(Planar)模式、直流(DC)模式、垂直模式、水平模式和CCLM模式等,而且每一个预测模式均与一个开关连接,该开关用于控制这种预测模式下的色度预测值是否参与加权融合处理。假定Planar模式对应的权重值为W_Planar、DC模式对应的权重值为W_DC、垂直模式对应的权重值为W_Ver、水平模式对应的权重值为W_Hor、CCLM模式对应的权重值为W_CCLM、WCP模式对应的权重值为W_Wcp;对于Planar模式下的色度预测值、DC模式下的色度预测值、垂直模式下的色度预测值、水平模式下的色度预测值和CCLM模式下的色度预测值,如果只有CCLM模式连接的开关处于闭合状态,那么可以根据W_CCLM和W_Wcp对CCLM模式下的色度预测值和WCP模式下的色度预测值进行加权融合,而根据W_CCLM和W_Wcp的取值可以决定是进行等权重或不等权重的加权融合,加权结果即为第二颜色分量采样点的最终色度预测值,进而得到最终的第二预测块。
在一些实施例中,在确定出第二预测块之后,所述根据第二预测块,确定当前块的第二颜色分量采样点的重建值,可以包括:
确定当前块的第二颜色分量采样点的预测差值;
根据第二预测块,确定当前块的第二颜色分量采样点的预测值;
根据当前块的第二颜色分量采样点的预测差值和当前块的第二颜色分量采样点的预测值,确定当前块的第二颜色分量采样点的重建值。
需要说明的是,在本申请实施例中,确定当前块的第二颜色分量采样点的预测差值(residual),可以是通过解析码流,确定当前块的第二颜色分量采样点的预测差值。
还需要说明的是,在本申请实施例中,根据第二预测块确定当前块的第二颜色分量采样点的预测值,可以是将当前块的第二颜色分量采样点的预测值设置为等于第二预测块的值;或者也可以是对第二预测块的值进行上采样滤波,将当前块的第二颜色分量采样点的预测值设置为等于所述上采样滤波后的输出值。
这样,以色度分量为例,通过解析码流,确定当前块的色度预测差值;然后根据第二预测块,可以确定当前块的色度预测值;然后对色度预测值和色度预测差值进行加法计算,可以得到当前块的色度重建值。
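解码端由预测值与预测差值恢复重建值的过程,可用如下示意性Python片段表示(其中钳位到有效像素范围的处理为常见做法,函数名为本示例的假设):

```python
def reconstruct_chroma(pred_samples, residuals, bit_depth=10):
    """色度重建值 = 色度预测值 + 色度预测差值,并钳位到 [0, (1<<bit_depth)-1](示意)。"""
    max_val = (1 << bit_depth) - 1
    height = len(pred_samples)
    width = len(pred_samples[0])
    rec = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            value = pred_samples[y][x] + residuals[y][x]
            rec[y][x] = min(max(value, 0), max_val)
    return rec
```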
可以理解地,本申请实施例是在WCP模式的预测过程中,利用未损失的亮度信息进行色度预测,主要包括三方面:一方面,充分利用参考像素与当前块的亮度信息,实现了参考像素色度加权系数的计算;另一方面,充分考虑已有亮度信息的重要性,在不损失亮度信息的基础上建立更准确的非线性映射模型对参考色度点分配权重进行加权预测;又一方面,在进行参考色度上采样和根据当前块每个亮度点位置进行同位色度预测时,充分考虑各种YUV视频格式的特征,根据不同YUV视频的色度亚采样格式和亮度上采样情况始终保证色度分量和亮度分量的空间分辨率一致,充分利用已有的未损失亮度信息进行同位色度预测,以提高WCP模式中色度预测值的准确性。
简单来说,本申请实施例还提供了一种基于权重的色度预测的框架,如图8所示,对于下采样的当前块内重建亮度信息(即下采样亮度块)中的每个采样点recY[i][j],首先根据recY[i][j]与相邻亮度向量refY[k]的差值的绝对值得到亮度差向量diffY[i][j];其次,根据与diffY[i][j]相关的非线性映射模型,导出归一化权值向量cWeight[i][j];再次,利用权值向量,将权值向量与当前块的相邻色度向量进行向量乘法,可以得到预测的色度预测值
predC[i][j] = Σ_{k} cWeight[i][j][k]×refC[k]。
本实施例提供了一种解码方法,通过确定当前块的第一颜色分量的参考样值;根据当前块的第一颜 色分量的参考样值,确定加权系数;根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;根据第二预测块,确定当前块的第二颜色分量采样点的重建值。这样,利用当前块相邻的参考像素与当前块内的颜色分量信息,不仅充分考虑了已有的颜色分量信息,使得在不损失亮度信息的基础上能够建立更准确的非线性映射模型对每个色度分量的参考样值分配权重进行加权预测;而且对于第一滤波处理还充分考虑了不同的颜色格式信息,根据不同的颜色格式信息进行色度和/或亮度的采样滤波,能够始终保证色度分量和亮度分量的空间分辨率一致,从而既保证了已有亮度信息的准确性,又在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,进而提高了色度预测的准确性,节省码率,提升编解码效率,进而提升编解码性能。
在本申请的另一实施例中,基于前述实施例所述的解码方法,以当前块进行色度预测为例,在本申请实施例中,当前块的重建亮度信息、相邻区域的参考亮度信息及参考色度信息都是已解码的参考信息。因此,本申请实施例提出了一种利用上述这些信息的基于权重的色度预测技术,并在此基础上,为了保证参考亮度信息和当前块内的重建亮度信息不损失,实现参考色度匹配参考亮度的空间分辨率、当前预测色度匹配当前已有的重建亮度的空间分辨率,在已匹配的基础上对预测色度进行下采样滤波,最后对色度预测值进行后处理。
可以理解地,在本申请实施例中,在WCP预测技术的基础上主要提出了:在不损失亮度信息的情况下进一步提高色度预测的准确性。在这里,WCP模式的色度预测过程的详细步骤如下:
WCP模式的输入:当前块的位置(xTbCmp,yTbCmp),当前块的宽度nTbW及当前块的高度nTbH。
WCP模式的输出:当前块的预测值predSamples[x][y],其中以当前块内左上角位置为坐标原点,x=0,...,nTbW-1,y=0,...,nTbH-1。
其中,WCP模式的预测过程可以包含确定核心参数、参考色度信息的上采样滤波、获取目标信息、基于权重的色度预测、当前预测色度的下采样滤波、后处理过程等步骤。在经过这些步骤之后,可以得到当前块的色度预测值。
在一种具体的实施例中,参见图9,其示出了本申请实施例提供的一种解码方法的流程示意图二。如图9所示,该方法可以包括:
S901:确定WCP模式的核心参数。
需要说明的是,对于S901而言,对WCP模式涉及的核心参数进行确定,即可以通过配置或通过某种方式获取或推断WCP模式的核心参数,例如在解码端从码流获取核心参数。
在这里,WCP模式的核心参数包括但不限于控制参数(S)、基于权重的色度预测输入的个数(inSize)、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)。其中,基于权重的色度预测输出的第一预测块可以用predWcp表示,这里基于权重的色度预测输出的个数可以设置为相同的值(如predSizeW=predSizeH=T/4)或者与当前块的尺寸参数相关(如predSizeW=nTbW,predSizeH=nTbH)。其中,控制参数(S)可以用于对后续环节中非线性函数进行调整或用来对后续环节涉及的数据进行调整。
对于核心参数的确定,在一定条件下受块尺寸或块内容或块内像素数的影响。例如:
如果WCP模式应用的块尺寸种类较多或块尺寸之间的差异较大或块内容差异很大或块内像素数差异较大,可以对当前块根据其块尺寸或块内容或块内像素数进行分类,根据不同的类别确定相同或不同的核心参数。即不同类别对应的控制参数(S)或基于权重的色度预测输入的个数inSize或基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。注意,宽predSizeW和高predSizeH也可以相同或不同。
下面为了更好说明核心参数的确定,以两种简单分类为例进行说明:
分类示例1:WCP模式可以根据当前块的宽度和高度将当前块分类,用wcpSizeId表示块的种类。对不同种类的块,控制参数(S)、基于权重的色度预测输入的个数inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。这里以分为3类的一种示例进行说明:
根据当前块的宽度和高度将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示min(nTbW,nTbH)<=4的当前块。其中,控制参数(S)为8,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示4<min(nTbW,nTbH)<=16的当前块。其中,控制参数(S)为12,inSize为 (2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=2:表示min(nTbW,nTbH)>16的当前块。其中,控制参数(S)为16,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
以表格形式表示上述核心参数的数量关系,如表6所示。
表6
wcpSizeId S inSize predSizeH predSizeW
0 8 2×nTbH+2×nTbW nTbH nTbW
1 12 2×nTbH+2×nTbW nTbH nTbW
2 16 2×nTbH+2×nTbW nTbH nTbW
分为3类还可以另一种示例进行说明:
根据当前块的宽度和高度将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示min(nTbW,nTbH)<=4的当前块。其中,控制参数(S)为8,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示4<min(nTbW,nTbH)<=16的当前块。其中,控制参数(S)为12,inSize为(1.5×nTbH+1.5×nTbW),基于权重的色度预测输出nTbH/2×nTbW/2个色度预测值;
wcpSizeId=2:表示min(nTbW,nTbH)>16的当前块。其中,WCP控制参数(S)为16,inSize为(nTbH+nTbW),基于权重的色度预测输出nTbH/4×nTbW/4个色度预测值;
以表格形式表示上述核心参数的数量关系,如表7所示。
表7
wcpSizeId S inSize predSizeH predSizeW
0 8 2×nTbH+2×nTbW nTbH nTbW
1 12 1.5×nTbH+1.5×nTbW nTbH/2 nTbW/2
2 16 nTbH+nTbW nTbH/4 nTbW/4
分类示例2:WCP模式也可以根据当前块的宽度和高度将当前块分类,用wcpSizeId表示块的种类。对不同种类的块,控制参数(S)、基于权重的色度预测输入的个数inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以相同或不同。这里以分为3类的一种示例进行说明:
根据当前块的宽度和高度将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,nTbW×nTbH表示当前块的像素数。块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示(nTbW×nTbH)<128的当前块。其中,控制参数(S)为10,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示128<=(nTbW×nTbH)<=256的当前块。其中,控制参数(S)为8,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=2:表示(nTbW×nTbH)>256的当前块。其中,控制参数(S)为1,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
以表格形式表示上述核心参数的数量关系,如表8所示。
表8
wcpSizeId S inSize predSizeH predSizeW
0 10 2×nTbH+2×nTbW nTbH nTbW
1 8 2×nTbH+2×nTbW nTbH nTbW
2 1 2×nTbH+2×nTbW nTbH nTbW
分为3类还可以另一种示例进行说明:
根据当前块的宽度和高度将当前块分为3类,不同类别的控制参数(S)可以设为不同,不同类别的inSize、基于权重的色度预测输出的个数(排列成predSizeW×predSizeH)可以设为相同。nTbW为当前块的宽度,nTbH为当前块的高度,nTbW×nTbH表示当前块的像素数。块的种类wcpSizeId的定义如下:
wcpSizeId=0:表示(nTbW×nTbH)<64的当前块。其中,控制参数(S)为16,inSize为(2×nTbH+2×nTbW),基于权重的色度预测输出nTbH×nTbW个色度预测值;
wcpSizeId=1:表示64<=(nTbW×nTbH)<=512的当前块。其中,控制参数(S)为4,inSize为(1.5×nTbH+1.5×nTbW),基于权重的色度预测输出nTbH/2×nTbW/2个色度预测值;
wcpSizeId=2:表示(nTbW×nTbH)>512的当前块。其中,控制参数(S)为1,inSize为(nTbH+nTbW),基于权重的色度预测输出nTbH/4×nTbW/4个色度预测值;
以表格形式表示上述核心参数的数量关系,如表9所示。
表9
wcpSizeId S inSize predSizeH predSizeW
0 16 2×nTbH+2×nTbW nTbH nTbW
1 4 1.5×nTbH+1.5×nTbW nTbH/2 nTbW/2
2 1 nTbH+nTbW nTbH/4 nTbW/4
S902:在参考亮度信息不损失的情况下,对参考色度信息进行上采样滤波,以保持与参考亮度信息具有相同的空间分辨率。
需要说明的是,对于S902而言,在预测当前块时,当前块的上方区域、左上方区域及左方区域作为当前块的相邻区域(也可被称为“参考区域”),如前述的图6A所示,相邻区域中的像素都是已解码的参考像素。
在这里,从相邻区域中可以获取参考色度信息refC和参考亮度信息refY。由于人眼对亮度信息较敏感,且本申请实施例主要利用已有亮度信息建立模型来预测色度,因此基于不损失参考亮度信息的角度考虑,实现参考色度信息的空间分辨率匹配跟随参考亮度信息的空间分辨率,针对不同YUV视频格式对参考色度信息进行上采样滤波。在一种具体的实施例中,如图10所示,对于S902来说,该步骤可以包括:
S1001:参考亮度信息不损失。
S1002:参考亮度信息保持不变。
S1003:对参考亮度信息进行上采样滤波。
S1004:在参考亮度信息保持不变时,参考色度信息保持不变。
S1005:在参考亮度信息保持不变时,对参考色度信息进行上采样滤波。
S1006:在参考亮度信息进行上采样滤波时,对参考色度信息进行上采样滤波。
需要说明的是,在保证参考亮度信息不损失的情况下,可以分为两种操作情况:(1)参考亮度信息保持不变;(2)对参考亮度信息进行上采样滤波。其中,对于操作情况(1),又可以分为两类子情况:(a)参考色度信息保持不变;(b)对参考色度信息进行上采样滤波。而对于操作情况(2),此时不存在参考色度信息保持不变的子情况,需要对参考色度信息进行上采样滤波。
对于操作情况(1),即步骤S1002而言,参考亮度信息保持不变,此时亮度信息没有任何损失。在这种操作情况下,由于YUV视频格式的多样性,需要对参考色度信息进行不同的子情况处理:
子情况1:对于YUV444格式的视频,亮度分量和色度分量的空间分辨率相等,此时不需要对参考色度分量作任何处理。
子情况2:对于YUV422、YUV411、YUV420等具有色度亚采样特征的视频,亮度分量和色度分量的空间分辨率不一致,且色度分量的空间分辨率小于亮度分量的空间分辨率,此时需要根据YUV视频格式对参考色度分量进行上采样滤波处理。上采样滤波方式可以是线性插值方法中的任何一种,比如最邻近插值、双线性内插值、双三次插值、均值插值、中值插值、复制插值等;也可以是非线性插值方法中的任何一种,比如基于小波变换的插值算法、基于边缘信息的插值算法等;也可以基于卷积神经网络进行上采样滤波,本申请实施例不作任何限定。这里以YUV420视频格式、复制插值为例进行说明,图6B示出了4×1的参考色度信息、8×2的参考亮度信息进行参考色度复制的插值方法。如图6B所示,8×2的用斜纹填充块为参考亮度信息,4×1的用网格填充块为同位的参考色度信息,4×1的用网格填充块中每个像素点相当于上采样滤波后的8×2的色度块中每2×2子块的左上角像素点,即复制插值后的2×2子块中每个色度像素点的像素值相同。具体地,对于一个2×2子块,另外三个色度像素点(用点填充块)均是左上角像素点(用网格填充块)进行复制得到的
对于操作情况(2),即步骤S1003而言,对参考亮度信息进行上采样滤波,此时亮度信息没有任何损失。在这种操作情况下,由于YUV格式的视频中亮度分量的空间分辨率总是大于或等于色度分量的空间分辨率,因此参考色度信息必然需要进行上采样滤波才能和参考亮度信息的空间分辨率保持一致。 这时候需要根据YUV视频格式和对参考亮度信息的空间上采样频率来决定参考色度信息的空间上采样频率。这里将参考亮度信息的水平上采样频率(即前述实施例的第一水平上采样因子)记为S_Hor_RefLuma,将参考亮度信息的垂直上采样频率(即前述实施例的第一垂直上采样因子)记为S_Ver_RefLuma,将参考色度信息的水平上采样频率(即前述实施例的第二水平上采样因子)记为S_Hor_RefChroma,将参考色度信息的垂直上采样频率(即前述实施例的第二垂直上采样因子)记为S_Ver_RefChroma。
对于YUV444格式的视频:S_Hor_RefChroma=S_Hor_RefLuma,
S_Ver_RefChroma=S_Ver_RefLuma;
对于YUV422格式的视频:S_Hor_RefChroma=2×S_Hor_RefLuma,
S_Ver_RefChroma=S_Ver_RefLuma;
对于YUV411格式的视频:S_Hor_RefChroma=4×S_Hor_RefLuma,
S_Ver_RefChroma=S_Ver_RefLuma;
对于YUV420格式的视频:S_Hor_RefChroma=2×S_Hor_RefLuma,
S_Ver_RefChroma=2×S_Ver_RefLuma。
S903:根据核心参数,确定当前块的目标信息。
需要说明的是,对于S903而言,当前块的目标信息可以包括参考色度信息(refC)、参考亮度信息(refY)和重建亮度信息(recY)。其中,从图6A的参考区域中获取到参考色度信息refC和参考亮度信息refY。注意,获取的参考色度信息包括但不限于根据参考亮度信息位置获取对应的参考色度信息。获取的参考亮度信息包括但不限于:选取当前块的上方区域的参考亮度重建值和左方区域的参考亮度重建值。
还需要说明的是,在本申请实施例中,refC或refY或recY在一定条件下可以进行前处理操作后作为基于权重的色度预测输入。
示例性地,对不同尺寸的当前块,refC或refY或recY可以进行不同的滤波操作。其中,获取目标信息可以包括:
获取inSize数量的参考色度信息refC。这里的refC有可能经过了上采样滤波,也有可能未经过上采样滤波;若需要进行前处理操作例如滤波处理,则refC是经过前处理操作后得到的。
获取inSize数量的参考亮度信息refY。这里的refY有可能经过了上采样滤波,也有可能未经过上采样滤波;若需要进行前处理操作例如滤波处理,则refY是经过前处理操作后得到的。
此外,获取目标信息还可以包括获取当前块内的重建亮度信息recY。对于recY的情况将在后续步骤中对其进行详细描述。
S904:在当前块内的重建亮度信息不损失的情况下,根据目标信息进行基于权重的色度预测计算,得到当前块的第一预测块。
需要说明的是,为了不损失当前块内的重建亮度信息,获取的重建亮度信息recY可以是不经过任何处理的,也可以是经过上采样滤波之后的。对于获取重建亮度信息来说,如图11所示,可以包括以下步骤:
S1101:当前块内的重建亮度信息不损失。
S1102:当前块内的重建亮度信息保持不变。
S1103:对当前块内的重建亮度信息进行上采样滤波。
也就是说,为了得到更精确的色度预测值,本申请实施例对于当前块中的每个亮度像素,都将其与参考亮度进行作差得到亮度差,再而根据亮度差获取每个参考亮度同位参考色度的权重,最后根据权重进行加权预测得到每个同位色度的基于权重的色度预测值。如此,经过上述过程得到的预测块的空间分辨率与当前块的空间分辨率一致。因此可针对当前块内的重建亮度信息分为以下两种情况进行操作:
对于操作情况1(即步骤S1102而言):若不对当前块内的重建亮度信息进行任何处理,则需要根据YUV视频格式推断输出第一预测块的尺寸:
对于YUV444格式的视频,亮度分量和色度分量的空间分辨率相等,此时:predSizeW=nTbW,predSizeH=nTbH;
对于YUV422格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/2,此时:predSizeW=2×nTbW,predSizeH=nTbH;
对于YUV411格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/4,此时:predSizeW=4×nTbW,predSizeH=nTbH;
对于YUV420格式的视频,色度分量的水平分辨率和垂直分辨率都是亮度分量的1/2,此时:predSizeW=2×nTbW,predSizeH=2×nTbH。
对于操作情况2(即步骤S1103而言):若对当前块内的重建亮度信息进行上采样滤波,上采样滤波方式可以是线性插值方法中的任何一种,比如最邻近插值、双线性内插值、双三次插值、均值插值、中值插值、复制插值等;也可以是非线性插值方法中的任何一种,比如基于小波变换的插值算法、基于边缘信息的插值算法等;也可以基于卷积神经网络进行上采样等等。当前块内的重建亮度信息在上采样滤波之后需要根据YUV视频格式和对重建亮度信息的空间上采样频率来推断输出第一预测块的尺寸。这里将重建亮度信息的水平上采样频率(即前述实施例的第三水平上采样因子)记为S_Hor_RecLuma,将重建亮度信息的垂直上采样频率(即前述实施例的第三垂直上采样因子)记为S_Ver_RecLuma。
对于YUV444格式的视频,亮度分量和色度分量的空间分辨率相等,此时:
predSizeW=S_Hor_RecLuma×nTbW,predSizeH=S_Ver_RecLuma×nTbH;
对于YUV422格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/2,此时:
predSizeW=2×nTbW×S_Hor_RecLuma,predSizeH=nTbH×S_Ver_RecLuma;
对于YUV411格式的视频,亮度分量和色度分量具有相同的垂直分辨率,但色度分量的水平分辨率是亮度分量的1/4,此时:
predSizeW=4×nTbW×S_Hor_RecLuma,predSizeH=nTbH×S_Ver_RecLuma;
对于YUV420格式的视频,色度分量的水平分辨率和垂直分辨率都是亮度分量的1/2,此时:
predSizeW=2×nTbW×S_Hor_RecLuma,predSizeH=2×nTbH×S_Ver_RecLuma。
示例性地,假定重建亮度信息的水平上采样频率S_Hor_RecLuma简称为upHor,重建亮度信息的垂直上采样频率S_Ver_RecLuma简称为upVer,那么对重建亮度信息recY进行上采样滤波,从而生成输入的重建亮度信息in_recY,一种插值过程的示例如下:
先将上参考重建亮度像素refY_T填入到将要输入的重建亮度信息的上一行in_recY[x][-1],将左参考重建亮度像素refY_L填入到将要输入的重建亮度信息的左一列in_recY[-1][y]。其中,x=0,…,predSizeW-1,y=0,…,predSizeH-1。此时通过下式将recY放入in_recY中对应位置((x+1)×upHor-1,(y+1)×upVer-1)中:
in_recY[(x+1)×upHor-1][(y+1)×upVer-1]=recY[x][y]       (17)
可以理解为将in_recY均等划分为recSizeW×recSizeH个子块,则放入的位置就是每个子块的右下角位置。参见图12,以predSizeW=8和predSizeH=8、当前块尺寸为4×4为例,其中,斜纹填充为上参考重建亮度像素refY_T,竖条填充为左参考重建亮度像素refY_L,网格填充为当前块的重建亮度信息recY填入的位置。
●若水平上采样因子upHor大于1,则先进行水平方向上的上采样,上采样过程如下式所示:
sum=(upHor-dX)×rec_Y[xHor][yHor]+dX×rec_Y[xHor+upHor][yHor]      (18)
in_recY[xHor+dX][yHor] = (sum + upHor/2) / upHor        (19)
该过程参见图13,以predSizeW=8和predSizeH=8、当前块尺寸为4×4为例,水平上采样时左参考亮度像素refY_L中与rec_Y填入水平位置对应的点也会作为上采样的参考点,这里也将其用网格填充标识。则此时网格填充点均为上采样的参考点,水平方向上每两个网格填充点之间通过线性插值得到的预测点用横条填充标识。
上采样方法采用线性插值上采样,即两个上采样亮度参考点(网格填充)之间的每个插值出来的点(横条填充)的值都是两个上采样参考亮度点的加权平均。根据式(18)和式(19),左侧上采样参考亮度点的权重为(upHor-dX)/upHor,右侧上采样参考亮度点的权重为dX/upHor,其中,dX=1,…,upHor-1(即当前插值点与左侧参考点之间的距离)。这样,在水平方向插值时,权重仅与水平上采样频率upHor有关。图14展示了一个权重的示例。在该示例中,upHor=4。其中,对于图14中的第一个插值点,左侧上采样参考亮度点的权重为3/4,右侧上采样参考亮度点的权重为1/4;对于图14中的第二个插值点,左侧上采样参考亮度点的权重为2/4,右侧上采样参考亮度点的权重为2/4;对于图14中的第三个插值点,左侧上采样参考亮度点的权重为1/4,右侧上采样参考亮度点的权重为3/4。
●若垂直上采样因子upVer大于1,需要进行垂直方向上的上采样,该过程与水平上采样过程类似。具体过程如下式所示:
sum=(upVer-dY)×rec_Y[xVer][yVer]+dY×rec_Y[xVer][yVer+upVer]     (20)
in_recY[xVer][yVer+dY] = (sum + upVer/2) / upVer        (21)
该过程参见图15,以predSizeW=8和predSizeH=8、当前块尺寸为4×4为例,在完成如图13所示的水平上采样后,上参考亮度像素refY_T和in_recY所有已有的点都将作为垂直上采样的参考点,这里也都将其用网格填充标识。那么此时网格填充点均为上采样的参考点,垂直方向上每两个网格填充点 之间通过线性插值得到预测点用横条填充标识。
上采样方法同样使用线性插值,根据式(20)和式(21),上侧上采样参考点的权重为(upVer-dY)/upVer,下侧参考点的权重为dY/upVer,其中,dY=1,…,upVer-1(即当前插值点与上侧参考点之间的距离)。这样,在垂直插值时,权重仅与垂直上采样频率upVer有关。
这样,根据上述过程即可得到最终的in_recY。针对上述的插值过程,除了采用“先水平再垂直”的上采样方法,也可以采用“先垂直再水平”的上采样方法,还可以将“先水平再垂直”上采样后的重建亮度值和“先垂直再水平”上采样后的重建亮度值进行平均以得到最终输入的重建亮度值in_recY,也可以采用神经网络中的卷积操作代替上采样操作,本申请实施例对此均不作任何限定。
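以水平方向为例,式(18)和式(19)所描述的线性插值上采样可以用如下示意性Python片段表达(只演示一行像素,假设upHor为正整数且左侧参考点已给出;其中的取整方式(sum + upHor/2)/upHor以及变量命名均为本示例的假设):

```python
def upsample_row_linear(left_ref, known_samples, up_hor):
    """对一行亮度样值做水平线性插值上采样(示意)。

    left_ref: 该行左侧的参考亮度值(相当于 refY_L 中对应位置的点)
    known_samples: 已填入的亮度重建值,位于每个子块的最后一列
    up_hor: 水平上采样因子
    返回长度为 len(known_samples) * up_hor 的插值结果。
    """
    out = [0] * (len(known_samples) * up_hor)
    prev = left_ref
    for idx, cur in enumerate(known_samples):
        base = idx * up_hor
        for dx in range(1, up_hor):
            # sum = (upHor - dX) * 左侧参考点 + dX * 右侧参考点
            s = (up_hor - dx) * prev + dx * cur
            out[base + dx - 1] = (s + up_hor // 2) // up_hor
        out[base + up_hor - 1] = cur   # 已知点直接填入子块的最后一个位置
        prev = cur
    return out
```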
如此,最终获取输入的目标信息可以包括:inSize数量的参考色度信息refC、inSize数量的参考亮度信息refY以及当前块内的重建亮度信息recY。
可以理解地,对于S904而言,对于核心参数所规定的尺寸内的色度预测值用C pred[i][j]表示,其中,i=0,…,predSizeW-1,j=0,…,predSizeH-1,逐个进行获取。注意,predSizeH和predSizeW为核心参数,在本申请实施例中,predSizeH大于或等于当前块的高度nTbH,或predSizeW大于或等于当前块的宽度nTbW。这样,基于权重的色度预测包括以下操作:通过获取权重、根据权重进行加权预测得到基于权重的色度预测值。而权重的获取过程包括构造亮度差向量和计算权重。其详细计算过程如下:
对于i=0,…,predSizeW-1,j=0,…,predSizeH-1
对于k=0,1...,inSize-1
构造亮度差向量中各个元素diffY[i][j][k]
计算权重向量中各个元素cWeight[i][j][k](或cWeightFloat[i][j][k]),然后由cWeight[i][j](或cWeightFloat[i][j][k])和refC计算色度预测值C pred[i][j]。
在一种具体的实施例中,在获取重建亮度信息之后,如图16所示,该方法还可以包括:
S1601:对于每个亮度位置的待预测像素,利用目标信息所包括的参考色度信息、参考亮度信息和当前块的重建亮度信息构造亮度差向量。
需要说明的是,对核心参数所规定尺寸内的每个待预测像素C pred[i][j],将其对应的重建亮度信息recY[i][j]与inSize数量的参考亮度信息refY相减并取绝对值得到亮度差向量diffY[i][j][k]。具体计算公式如下:
diffY[i][j][k]=abs(refY[k]-recY[i][j])       (22)
其中,abs(x)表示x的绝对值,即abs(x) = x(当x≥0时),abs(x) = -x(当x<0时)。
还需要说明的是,在一定条件下,可以对待预测像素的亮度差向量进行线性或非线性数值处理。例如,可根据核心参数中的控制参数S来缩放待预测像素的亮度差向量的数值。
S1602:对于每个亮度位置的待预测像素,根据亮度差向量,利用非线性函数计算权重向量。
需要说明的是,在本申请实施例中,采用权重模型对每个待预测像素C pred[i][j]对应的亮度差向量diffY[i][j]进行处理得到对应的浮点型权重向量cWeightFloat[i][j]。这里的权重模型包括但不限于非线性归一化函数、非线性指数归一化函数等等非线性函数,并不作任何限定。
示例性地,可以采用非线性Softmax函数作为权重模型,每个待预测像素对应的亮度差向量diffY[i][j]作为权重模型的输入,该权重模型输出每个待预测像素对应的浮点型权重向量cWeightFloat[i][j],计算公式如下所示:
cWeightFloat[i][j][k] = exp(-diffY[i][j][k]) / Σ_{l=0}^{inSize-1} exp(-diffY[i][j][l])        (23)
在一定条件下,也可以根据核心参数中的控制参数(S)对权重模型进行调整。示例性地,如果当前块的尺寸灵活,那么可以根据控制参数(S)调整权重模型,以非线性Softmax函数为例,可根据当前块所属的块分类类别不同,选取不同的控制参数来调整函数,此时每个待预测像素对应的权重向量计算公式如下所示:
cWeightFloat[i][j][k] = exp(-diffY[i][j][k]/S) / Σ_{l=0}^{inSize-1} exp(-diffY[i][j][l]/S)        (24)
在本申请实施例中,还可以对加权系数进行定点化处理。因此,在一些实施例中,该方法还包括:若加权系数为浮点加权系数,则对浮点加权系数进行定点化处理,得到定点加权系数。
如此,在上述式(23)或者式(24)计算完毕之后,可以对cWeightFloat进行定点化处理,如下所示:
cWeight[i][j][k]=round(cWeightFloat[i][j][k]×2 Shift)     (25)
其中,round(x)=Sign(x)×Floor(abs(x)+0.5)。这里,Floor(x)表示小于或等于x的最大整数;Sign(x)为符号函数,即Sign(x) = 1(当x>0时)、0(当x=0时)、-1(当x<0时);abs(x)表示x的绝对值,即abs(x) = x(当x≥0时)、-x(当x<0时)。另外,Shift是为了提高精度采取的定点化操作中所设置的移位量。
S1603:对于每个亮度位置的待预测像素,根据权重向量与目标信息所包括的参考色度信息进行加权计算,得到色度预测值。
需要说明的是,根据每个待预测像素对应的权重向量cWeight[i][j](或cWeightFloat[i][j])和参考色度信息refC计算待预测像素的色度预测值。具体地,将每个待预测像素C pred[i][j]的参考色度信息refC与每个待预测像素对应的权重向量元素逐一对应相乘得到subC[i][j](或subCFloat[i][j]),将相乘结果累加即为每个待预测像素的色度预测值C pred[i][j],即实现对色度分量的加权预测。
在一种可能的实现方式中,可以包括:根据浮点加权系数和参考色度信息进行加权计算,得到待预测像素的初始预测值;对初始预测值进行定点化处理,得到待预测像素的目标预测值。
示例性地,计算公式如下所示:
对于k=0,1,...,inSize-1
subCFloat[i][j][k]=(cWeightFloat[i][j][k]×refC[k])        (26)
在计算完毕之后,还可以对subCFloat[i][j][k]进行定点化处理,定点化过程中为保留一定的计算精度,这里可以乘上一个系数,具体如下所示:
subC[i][j][k]=round(subCFloat[i][j][k]×2 Shift)        (27)
或者:
对于i=0,…,predSizeW-1,j=0,…,predSizeH-1
C predFloat[i][j] = Σ_{k=0}^{inSize-1} cWeightFloat[i][j][k]×refC[k]        (28)
在计算完毕后,对C predFloat[i][j]进行定点化,如:
C pred[i][j]=round(C predFloat[i][j])       (29)
在另一种可能的实现方式中,可以根据定点加权系数和参考色度信息进行加权计算,得到待预测像素的初始预测值;对初始预测值进行定点化补偿处理,得到待预测像素的目标预测值。
示例性地,计算公式如下所示:
对于k=0,1,...,inSize-1
subC[i][j][k]=(cWeight[i][j][k]×refC[k])        (30)
然后使用定点化的subC[i][j][k]进行计算,具体如下:
对于i=0,…,predSizeW-1,j=0,…,predSizeH-1
C pred[i][j] = (Σ_{k=0}^{inSize-1} subC[i][j][k] + Offset) >> Shift 1        (31)
其中,Offset=1<<(Shift 1-1),Shift 1是在计算cWeight[i][j][k]或者subC[i][j][k]时(Shift 1=Shift),也可以是其他环节为了提高精度采取的定点化操作中需要的移位量。
S1604:对于每个亮度位置的待预测像素,对计算得到的色度预测值进行修正处理,确定当前块的第一预测块。
需要说明的是,色度预测值应该限定在一预设范围内。如果超出该预设范围,那么需要进行相应的修正操作。示例性地,在一种可能的实现方式中,可以对C pred[i][j]的色度预测值进行钳位(clip)操作,具体如下:
●当C pred[i][j]的值小于0时,将其置为0;
●当C pred[i][j]的值大于(1<<BitDepth)-1时,其置为(1<<BitDepth)-1。
其中,BitDepth为色度像素值所要求的比特深度,以保证预测块中所有色度预测值都在0到(1<<BitDepth)-1之间。
即:
C pred[i][j]=Clip3(0,(1<<BitDepth)-1,C pred[i][j])       (32)
其中,
Clip3(x,y,z) = x(当z<x时);y(当z>y时);z(其他情况)        (33)
或者,将式(32)和式(33)进行合并,得到如下公式所示:
C pred[i][j] = 0(当C pred[i][j]<0时);(1<<BitDepth)-1(当C pred[i][j]>(1<<BitDepth)-1时);C pred[i][j](其他情况)        (34)
S905:对计算得到的第一预测块进行下采样滤波,确定当前块的第二预测块,且第二预测块与当前块的色度分量具有相同的空间分辨率。
需要说明的是,对于S905而言,对当前块内的重建亮度信息中每个位置都进行了色度预测最终得到predSizeW×predSizeH大小的第一预测块,此时第一预测块的尺寸必定是大于或等于当前块的尺寸,因此需要对此时的第一预测块进行下采样滤波,以恢复到和当前块一样的尺寸。
首先根据输出的第一预测块的宽度predSizeW和当前块的宽度nTbW,计算得到水平下采样因子downHor;同样根据输出的第一预测块的高度predSizeH和当前块的高度nTbH,计算得到垂直下采样因子downVer,计算方法如下所示:
downHor=predSizeW/nTbW       (35)
downVer=predSizeH/nTbH        (36)
紧接着,按照如下情况将第一预测块predWcp进行下采样滤波后将采样值填入到第二预测块predSamples中:
(ⅰ)若downHor大于1,downVer等于1,则只需要水平方向进行下采样滤波,具体公式如下:
predSamples[x][y]=(predWcp[downHor×x-1][y]+2×predWcp
[downHor×x][y]+predWcp[downHor×x+1][y]+2)>>2        (37)
(ⅱ)若downHor等于1,downVer大于1,则只需要垂直方向进行下采样,具体公式如下:
predSamples[x][y]=(predWcp[x][downVer×y-1]+2×predWcp
[x][downVer×y]+predWcp[x][downVer×y+1]+2)>>2    (38)
(ⅲ)若downHor和downVer都大于1,则水平和垂直方向都需要进行下采样,具体公式如下:
predSamples[x][y]=(predWcp[downHor×x-1][downVer×y]+predWcp[downHor×x-1][downVer×y+1]+2×predWcp[downHor×x][downVer×y]+2×predWcp[downHor×x][downVer×y+1]+predWcp[downHor×x+1][downVer×y]+predWcp[downHor×x+1][downVer×y+1]+4)>>3
                                (39)
(ⅳ)若downHor和downVer都等于1,则水平和垂直方向都不需要进行下采样,具体公式如下:
predSamples[x][y]=predWcp[x][y]          (40)
其中,x=0,…,predSizeW-1,y=0,...,predSizeH-1,predSamples为最终下采样后的色度预测值。
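上述式(35)至式(40)的下采样过程可用如下示意性的Python片段表达(边界处的索引处理在上文中未展开,这里简单地将越界索引钳位到有效范围内,属于本示例的假设):

```python
def downsample_pred(pred_wcp, n_tb_w, n_tb_h):
    """将 predSizeW x predSizeH 的第一预测块下采样为 nTbW x nTbH 的第二预测块(示意)。"""
    pred_size_h = len(pred_wcp)
    pred_size_w = len(pred_wcp[0])
    down_hor = pred_size_w // n_tb_w   # 式(35)
    down_ver = pred_size_h // n_tb_h   # 式(36)

    def p(px, py):
        # 越界索引钳位到有效范围(本示例的假设)
        px = min(max(px, 0), pred_size_w - 1)
        py = min(max(py, 0), pred_size_h - 1)
        return pred_wcp[py][px]

    out = [[0] * n_tb_w for _ in range(n_tb_h)]
    for y in range(n_tb_h):
        for x in range(n_tb_w):
            if down_hor > 1 and down_ver == 1:      # 式(37):仅水平方向下采样
                out[y][x] = (p(down_hor*x - 1, y) + 2*p(down_hor*x, y)
                             + p(down_hor*x + 1, y) + 2) >> 2
            elif down_hor == 1 and down_ver > 1:    # 式(38):仅垂直方向下采样
                out[y][x] = (p(x, down_ver*y - 1) + 2*p(x, down_ver*y)
                             + p(x, down_ver*y + 1) + 2) >> 2
            elif down_hor > 1 and down_ver > 1:     # 式(39):水平和垂直方向都下采样
                out[y][x] = (p(down_hor*x - 1, down_ver*y) + p(down_hor*x - 1, down_ver*y + 1)
                             + 2*p(down_hor*x, down_ver*y) + 2*p(down_hor*x, down_ver*y + 1)
                             + p(down_hor*x + 1, down_ver*y) + p(down_hor*x + 1, down_ver*y + 1)
                             + 4) >> 3
            else:                                   # 式(40):无需下采样
                out[y][x] = p(x, y)
    return out
```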
S906:对第二预测块的色度预测值进行后处理操作,确定当前块的目标预测块。
需要说明的是,对于S906而言,基于权重的色度预测输出的色度预测值(predWcp)之后,在一定条件下还需要后处理操作后作为最终的色度预测值(predSamples),否则最终的色度预测值predSamples即为predWcp。
示例性地,为了降低WCP逐像素独立并行预测带来的不稳定性,可以对predWcp进行平滑滤波作为最终的色度预测值predSamples。或者,为了进一步提升WCP预测值的准确性,可以对predWcp进行位置相关的修正过程。比如:利用空间位置接近的参考像素对每个待预测像素计算色度补偿值,用此色度补偿值对predWcp进行修正,将修正后的预测值作为最终的色度预测值predSamples。或者,为了进一步提升WCP预测值的准确性,可以将其他色度预测模式计算的色度预测值与WCP计算的色度预测值predWcp进行加权融合,将此融合结果作为最终的色度预测值predSamples。比如:可以将CCLM模式预测得到的色度预测值与WCP计算的色度预测值predWcp进行等权重或不等权重加权,加权结果作为最终的色度预测值predSamples。或者,为了提高WCP预测性能,可以采用神经网络模型对WCP的预测输出predWcp进行修正等等,本申请实施例对此不作任何限定。
还可以理解地,在本申请实施例中,在进行色度预测时为了不损失亮度信息,还可以不对参考亮度信息和当前已重建亮度信息进行下采样滤波,采取保持亮度或者对其上采样的操作,然后对当前已重建亮度信息的每个亮度点计算与参考亮度信息的绝对差值进行非线性映射得到每个参考色度点的权重并对所有参考色度点进行加权获得同位色度点的预测值。如此,需要预测的色度点数和当前已重建亮度区域的亮度点数相同,从而需要对色度预测块进行下采样恢复和原始色度块一样大小的尺寸。这种做法可以保证已有亮度信息的准确性,准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而增加色度预测值的准确性。
为了减少计算复杂度,不对当前已重建亮度信息进行上采样滤波,然后考虑不对当前当前已重建亮度信息中每个亮度位置进行同位色度点的预测,可以选取部分亮度位置进行同位色度点的预测。如此,后续也不一定必须对色度预测块进行下采样滤波。在另一种具体的实施例中,如图17所示,该方法可以包括:
S1701:当前块内的重建亮度信息不损失。
S1702:当前块内的重建亮度信息保持不变。
S1703:对于部分亮度位置的待预测像素,利用目标信息所包括的参考色度信息、参考亮度信息和当前块的重建亮度信息构造亮度差向量。
S1704:对于部分亮度位置的待预测像素,根据亮度差向量,利用非线性函数计算权重向量。
S1705:对于部分亮度位置的待预测像素,根据权重向量与目标信息所包括的参考色度信息进行加权计算,得到色度预测值。
S1706:对于部分亮度位置的待预测像素,对计算得到的色度预测值进行修正处理,确定当前块的第一预测块。
需要说明的是,在本申请实施例中,这里的修正处理,包括clip操作。
还需要说明的是,在本申请实施例中,可以选取部分亮度位置进行同位色度点的预测,该过程说明如下:
(ⅰ)为了节省后续对色度预测块进行上采样滤波或者下采样滤波的操作,可以根据YUV视频格式特征选取相应的位置进行色度预测,假设当前亮度点的位置为CurRecLuma(i,j),则需要进行色度预测的点的位置为CurPredChroma(x,y),两者位置的坐标关系具体如下:
当指示为YUV444格式的视频时:x=i,y=j;
当指示为YUV422格式的视频时:x=2×i,y=j;
当指示为YUV411格式的视频时:x=4×i,y=j;
当指示为YUV420格式的视频时:x=2×i,y=2×j。
(ⅱ)为了进一步减少色度预测时的计算复杂度,可以选择更少的同位色度点进行预测。假设当前亮度点的位置为CurRecLuma(i,j),则需要进行色度预测的点的位置为CurPredChroma(x,y)。假设当前亮度块的水平取样位置因子为S_Pos_Hor,垂直取样位置因子为S_Pos_Ver。当前亮度点位置和需要进行色度预测的点的位置两者关系具体如下:
当指示为YUV444格式/YUV422格式/YUV411格式/YUV420格式的视频时:x=i×S_Pos_Hor,y=j×S_Pos_Ver。
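当前亮度点位置与需要进行色度预测的采样点位置之间的对应关系,可用如下示意性的Python片段表达(函数名为本示例的假设):

```python
def luma_to_chroma_pos(i, j, color_format):
    """根据颜色格式,由当前亮度点位置(i, j)得到需要进行色度预测的点位置(x, y)(示意)。"""
    if color_format == "4:4:4":
        return i, j
    if color_format == "4:2:2":
        return 2 * i, j
    if color_format == "4:1:1":
        return 4 * i, j
    if color_format == "4:2:0":
        return 2 * i, 2 * j
    raise ValueError("未支持的颜色格式: " + color_format)
```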
还可以理解地,在本申请实施例中,除了本申请实施例针对不同情况下采取的不同下采样滤波方法,也可以针对所有情况,即不考虑水平下采样因子和垂直下采样因子的情况,对需要下采样滤波的方向(垂直方向或水平方向)每nTbW/predSizeW或nTbH/predSizeH个预测值求这些预测值的均值,然后将该均值作为下采样后的预测值。或者,也可以采用神经网络模型中的卷积和池化操作代替下采样滤波的操作,本申请实施例对此不作任何限定。
本实施例提供了一种解码方法,通过本实施例对前述实施例的具体实现进行详细阐述,从中可以看出,根据前述实施例的技术方案,可以提高WCP预测技术下色度预测值的准确性。具体地,通过相邻区域的参考信息与当前块的亮度信息计算加权系数,将此加权系数参与当前块的色度预测,充分挖掘了当前块与相邻区域在亮度信息上的相关性,并将此相关性参与当前块的色度预测,从而提升了色度预测的准确性。例如,该解码方法在测试软件ECM3.1上,以48帧间隔在全帧内(All Intra)条件下进行测试,可以在Y,Cb,Cr上分别获得-0.10%,-0.74%,-0.61%的BD-rate变化(即同等峰值信噪比(Peak Signal to Noise Ratio,PSNR)下平均码率变化)。尤其在大分辨率的序列上,具有更好的性能表现,在Class A1测试序列上可以达到Y有-0.24%的BD-rate变化。另外,本申请实施例在基于WCP进行色度预测时为了不损失亮度信息,不对相邻区域的参考亮度和当前已重建的亮度区域进行下采样滤波,采取保持亮度或者对其上采样的操作,然后对当前已重建亮度信息中的每个亮度点计算与参考亮度的绝对差值进行非线性映射得到每个参考色度的权重并对所有参考色度点进行加权获得同位色度点的预测值。这样可以保证已有亮度信息的准确性,而且准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,进一步增加了色度预测的准确性。
在本申请的又一实施例中,参见图18,其示出了本申请实施例提供的一种编码方法的流程示意图一。如图18所示,该方法可以包括:
S1801:确定当前块的第一颜色分量的参考样值。
需要说明的是,本申请实施例的编码方法应用于编码装置,或者集成有该编码装置的编码设备(也可简称为“编码器”)。另外,本申请实施例的编码方法具体可以是指一种帧内预测方法,更具体地,是一种基于权重的色度预测方法。
在本申请实施例中,视频图像可以划分为多个编码块,每个编码块可以包括第一颜色分量、第二颜色分量和第三颜色分量,而这里的当前块是指视频图像中当前待进行帧内预测的编码块。另外,假定当前块进行第一颜色分量的预测,而且第一颜色分量为亮度分量,即待预测分量为亮度分量,那么当前块也可以称为亮度预测块;或者,假定当前块进行第二颜色分量的预测,而且第二颜色分量为色度分量,即待预测分量为色度分量,那么当前块也可以称为色度预测块。
还需要说明的是,在本申请实施例中,当前块的参考信息可以包括有当前块的相邻区域中的第一颜色分量采样点的取值和当前块的相邻区域中的第二颜色分量采样点的取值,这些采样点(Sample)可以 是根据当前块的相邻区域中的已编码像素确定的。在一些实施例中,当前块的相邻区域可以包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在一些实施例中,所述确定当前块的第一颜色分量的参考样值,可以包括:根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考样值。
需要说明的是,在本申请实施例中,当前块的参考像素可以是指与当前块相邻的参考像素点,也可称为当前块的相邻区域中的第一颜色分量采样点、第二颜色分量采样点,用Neighboring Sample或Reference Sample表示。另外,在本申请实施例中,第一颜色分量为亮度分量,第二颜色分量为色度分量;此时当前块的相邻区域中的第一颜色分量采样点的取值表示为当前块的参考像素对应的参考亮度信息,当前块的相邻区域中的第二颜色分量采样点的取值则表示为当前块的参考像素对应的参考色度信息。
在一些实施例中,对于第一颜色分量采样点的取值的确定,该方法还可以包括:对相邻区域中的第一颜色分量采样点进行筛选处理,确定第一颜色分量采样点的取值。
需要说明的是,在相邻区域中的第一颜色分量采样点中,可能会存在部分不重要的采样点(例如,这些采样点的相关性较差)或者部分异常的采样点,为了保证预测的准确性,需要将这些采样点剔除掉,以便得到有效的第一颜色分量采样点的取值。也就是说,在本申请实施例中,根据当前块的相邻区域中的第一颜色分量采样点,组成第一采样点集合;那么可以对第一采样点集合进行筛选处理,确定第一颜色分量采样点的取值。
在一种具体的实施例中,所述对相邻区域中的第一颜色分量采样点进行筛选处理,确定第一颜色分量采样点的取值,可以包括:
基于相邻区域中的第一颜色分量采样点的位置和/或颜色分量强度,确定待选择采样点位置;根据待选择采样点位置,从相邻区域中确定第一颜色分量采样点的取值。
需要说明的是,在本申请实施例中,颜色分量强度可以用颜色分量信息来表示,比如参考亮度信息、参考色度信息等;这里,颜色分量信息的值越大,表明了颜色分量强度越高。这样,针对相邻区域中的第一颜色分量采样点进行筛选,可以是根据采样点的位置来进行筛选的,也可以是根据颜色分量强度来进行筛选的,从而根据筛选得到的采样点确定出有效的第一颜色分量采样点,进而确定出第一颜色分量采样点的取值。
在一些实施例中,所述确定当前块的第一颜色分量的参考样值,还可以包括:对第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;根据当前块的第一颜色分量的滤波相邻样值,确定当前块的第一颜色分量的参考样值。在本申请实施例中,当前块的第一颜色分量的滤波相邻样值的数量大于第一颜色分量采样点的取值的数量。
在本申请实施例中,第二滤波处理可以为上采样滤波处理。其中,第一颜色分量为亮度分量,为了保证参考亮度信息不损失,这里可以是参考亮度信息保持不变,也可以是对参考亮度信息进行上采样滤波处理。
在一些实施例中,所述确定当前块的第一颜色分量的参考样值,还可以包括:基于当前块中第一参考颜色分量采样点的重建值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,第一参考颜色分量可以为亮度分量;那么,当前块中第一参考颜色分量采样点的重建值即为当前块的重建亮度信息。
进一步地,在一些实施例中,所述确定当前块的第一颜色分量的参考样值,还可以包括:对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值;根据当前块中第一参考颜色分量采样点的滤波样值,确定当前块的第一颜色分量的参考样值。
在本申请实施例中,当前块中第一参考颜色分量采样点的滤波样值的数量大于当前块中第一参考颜色分量采样点的重建值的数量。
在本申请实施例中,第三滤波处理可以为上采样滤波处理。其中,第一参考颜色分量为亮度分量,为了保证当前块内的重建亮度信息不损失,获得更精确的色度预测值,这里可以是当前块内的重建亮度信息保持不变,也可以是对当前块内的重建亮度信息进行上采样滤波处理。
可以理解地,在本申请实施例中,当前块的第一颜色分量的参考样值还可以设置为亮度差信息。因此,在一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的重建值之差的绝对值。
在另一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的重建值之差的绝对值。
在又一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的滤波样值之差的绝对值。
在再一种可能的实现方式中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值 与第一参考颜色分量采样点的滤波样值之差的绝对值。
也就是说,在保持亮度信息不损失的情况下,可以对当前块的相邻区域中的参考亮度信息进行上采样滤波处理,也可以对当前块内的重建亮度信息进行上采样滤波处理,还可以对这两者都进行上采样滤波处理,甚至还可以对这两者都不进行上采样滤波处理,然后根据不同组合来确定亮度差信息,将亮度差信息作为当前块的第一颜色分量的参考样值。
S1802:根据当前块的第一颜色分量的参考样值,确定加权系数。
需要说明的是,在本申请实施例中,所述根据当前块的第一颜色分量的参考样值,确定加权系数,可以包括:确定在预设映射关系下第一颜色分量的参考样值对应的取值;将加权系数设置为等于取值。
可以理解地,在本申请实施例中,第一颜色分量的参考样值可以为当前块的第一颜色分量的滤波相邻样值与当前块中第一参考颜色分量采样点的滤波样值之差的绝对值。其中,第一参考颜色分量为第一颜色分量,且第一颜色分量是与本申请实施例待预测的第二颜色分量不同的颜色分量。
需要说明的是,在本申请实施例中,对当前块中待预测像素的色度分量进行预测,可以选取当前块中的至少一个待预测像素,分别计算其重建亮度与相邻区域中的参考亮度之间的亮度差(用|ΔY k|表示)。其中,如果|ΔY k|较小,表明亮度值的相似性比较强,对应的加权系数(用w k表示)赋予较大的权重;反之,如果|ΔY k|较大,表明亮度值的相似性比较弱,w k赋予较小的权重。也就是说,预设映射关系下,w k与|ΔY k|之间近似呈反比。
进一步地,在一些实施例中,所述确定在预设映射关系下第一颜色分量的参考样值对应的取值,可以包括:确定第一因子;根据第一因子和第一颜色分量的参考样值,确定第一乘积值;确定第一乘积值在预设映射关系下对应的取值。
在一种具体的实施例中,所述确定第一因子,可以包括:第一因子是预设常数值。
在另一种具体的实施例中,所述确定第一因子,可以包括:根据当前块的尺寸参数,确定第一因子的取值。
进一步地,在一些实施例中,该方法还可以包括:根据预设的当前块的尺寸参数与第一因子的取值映射查找表,确定第一因子的取值。
在这里,当前块的尺寸参数可以包括以下参数的至少之一:当前块的宽度,当前块的高度、当前块的宽度与高度的乘积。
需要说明的是,本申请实施例可以采用分类方式固定第一因子的取值。例如,将根据当前块的尺寸参数分为三类,确定每一类对应的第一因子的取值。针对这种情况,本申请实施例还可以预先存储当前块的尺寸参数与第一因子的取值映射查找表,然后根据该查找表即可确定出第一因子的取值。示例性地,前述的表1示出了一种第一因子与当前块的尺寸参数之间的对应关系示意。
在又一种具体的实施例中,所述确定第一因子,可以包括:根据当前块的参考像素数量,确定第一因子的取值。
进一步地,在一些实施例中,该方法还可以包括:根据预设的当前块的参考像素数量与第一因子的取值映射查找表,确定第一因子的取值。
需要说明的是,本申请实施例可以将参考像素数量划分为三类,仍然采用分类方式固定第一因子的取值。例如,将根据当前块的参考像素数量划分为三类,确定每一类对应的第一因子的取值。针对这种情况,本申请实施例也可以预先存储当前块的参考像素数量与第一因子的取值映射查找表,然后根据该查找表即可确定出第一因子的取值。示例性地,前述的表2示出了一种第一因子与当前块的参考像素数量之间的对应关系示意。
进一步地,对于第一乘积值而言,所述根据第一因子和第一颜色分量的参考样值,确定第一乘积值,可以包括:
将第一乘积值设置为等于第一因子和第一颜色分量的参考样值的乘积;或者,
将第一乘积值设置为等于对第一颜色分量的参考样值进行比特右移后得到的数值,其中,比特右移的位数等于第一因子;或者,
将第一乘积值设置为根据第一因子对第一颜色分量的参考样值进行加法和比特移位操作后得到的数值。
示例性地,假定第一因子等于0.25,第一颜色分量的参考样值用Ref表示,那么第一乘积值可以等于0.25×Ref,而0.25×Ref又可表示为Ref/4,即Ref>>2。另外,在定点化计算过程中,还有可能会将浮点数转换为加法和位移操作;也就是说,对于第一乘积值而言,其计算方式不作任何限定。
还可以理解地,在本申请实施例中,第一参考颜色分量也可以为色度分量,即选取当前块中的至少一个待预测像素,分别计算其重建色度与相邻区域中的参考色度之间的色度差(用|ΔC k|表示)。其中,如果|ΔC k|较小,表明色度值的相似性比较强,对应的加权系数(用w k表示)可以赋予较大的权重;反 之,如果|ΔC k|较大,表明色度值的相似性比较弱,w k可以赋予较小的权重,也就是说,在计算加权系数时,第一颜色分量的参考样值也可以为|ΔC k|,进而计算出加权系数。
简单来说,在本申请实施例中,对于第一颜色分量的参考样值而言,其可以是|ΔC k|,即色度差的绝对值;或者也可以是|ΔY k|,即亮度差的绝对值;或者还可以是|αΔY k|,即亮度差的绝对值与预设乘子之积等等。其中,这里的预设乘子即为本申请实施例所述的第二因子。
进一步地,对于第二因子而言,在一种具体的实施例中,该方法还可以包括:根据参考像素的第一颜色分量值与参考像素的第二颜色分量值进行最小二乘法计算,确定第二因子。
也就是说,假定参考像素的数量为N个,参考像素的第一颜色分量值即为当前块的参考亮度信息,参考像素的第二颜色分量值即为当前块的参考色度信息,那么可以对N个参考像素的色度分量值和亮度分量值进行最小二乘法计算,得到第二因子。示例性地,最小二乘法回归计算如前述的式(5)所示,可以计算得到第二因子。
另外,对于预设映射关系而言,在一些实施例中,预设映射关系可以是Softmax函数,如前述的式(6)或式(7)所示。其中,Softmax函数是归一化指数函数,但是本申请实施例也可以不用归一化,其取值并不局限在[0,1]范围内。
还需要说明的是,对于预设映射关系而言,除了Softmax函数之外,在另一些实施例中,预设映射关系可以是与第一颜色分量的参考样值具有反比关系的加权函数,如前述的式(8)或式(9)所示。
这样,预设映射关系可以是如式(4)所示,也可以是如式(6)或式(7)所示,还可以是如式(8)或式(9)所示,甚至还可以是其他能够拟合参考像素的参考亮度值与当前块中待预测像素的亮度重建值越接近、参考像素的参考色度值对当前块中待预测像素的重要性越高的趋势构建加权系数的函数模型等,甚至也可以简化操作,例如采用数组元素查表方式来减少一部分的计算操作等等,本申请实施例不作具体限定。
如此,根据第一颜色分量的参考样值(例如|ΔY kij|),即可确定出加权系数,具体可以包括:w 1、w 2、…、w N等N个加权系数,其中,N表示第二颜色分量的参考样值的数量。在这里,理论上而言,这N个加权系数之和等于1,而且每一个加权系数均为大于或等于0且小于或等于1的值。
S1803:根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块。
需要说明的是,在本申请实施例中,为了保证亮度信息不损失,此时针对当前块中每个重建亮度位置都进行了色度预测,使得所得到的第一预测块尺寸大于原来的当前块尺寸,即:第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量。
还需要说明的是,在本申请实施例中,对于当前块的第二颜色分量的参考样值的确定,该方法还可以包括:根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考样值。在这里,相邻区域可以包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在一种具体的实施例中,该方法还可以包括:对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值;根据当前块的第二颜色分量的滤波相邻样值,确定当前块的第二颜色分量的参考样值。在本申请实施例中,当前块的第二颜色分量的滤波相邻样值的数量大于当前块的相邻区域中的第二颜色分量采样点的取值的数量。
在本申请实施例中,第四滤波处理可以为上采样滤波;其中,上采样率是2的正整数倍。
具体来说,第一颜色分量为亮度分量,第二颜色分量为色度分量,为了保证参考亮度信息不损失,可以对参考色度信息进行上采样滤波。示例性地,对于YUV420格式的2M×2N的当前块,其色度块大小为M×N,相邻区域的参考色度信息的数量为M+N,上采样滤波后可以得到2M+2N个色度参考样值,然后使用加权预测的方式得到2M×2N个色度预测值,后续再下采样滤波以得到M×N个色度预测值作为最终预测值。需要注意的是,对于YUV420格式的2M×2N的当前块,其色度块大小为M×N,相邻区域的参考色度信息的数量为M+N,也可以仅使用这M+N个参考色度信息,使用加权系数计算得到2M×2N个色度预测值,后续再下采样滤波处理以得到M×N个色度预测值作为最终预测值。另外,对于YUV420格式的2M×2N的当前块,其色度块大小为M×N,相邻区域的参考色度信息的数量为M+N,上采样滤波后得到4M+4N个色度参考样值;或者,对于YUV444格式的2M×2N的当前块,其色度块大小为2M×2N,相邻区域的参考色度信息的数量为2M×2N,上采样滤波后得到4M+4N个色度参考样值;后续根据这些色度参考样值使用加权预测以及下采样滤波的方式,也可以得到M×N个色度预测值作为最终预测值,这里不作任何限定。
还需要说明的是,在本申请实施例中,无论是对相邻区域中第一颜色分量采样点的取值进行的第二滤波处理,还是对当前块中第一参考颜色分量采样点的重建值进行的第三滤波处理,甚至还是对相邻区 域中第二颜色分量采样点的取值进行的第四滤波处理,这三种滤波处理均为上采样滤波处理。其中,第二滤波处理可以采用第一滤波器,第三滤波处理可以采用第二滤波器,第四滤波处理可以采用第三滤波器;对于这三种滤波器而言,在这里,第一滤波器、第二滤波器和第三滤波器均可以为上采样滤波器。由于处理的数据不同,滤波器的上采样率也可能存在不同,因此,这三种滤波器可以相同,也可以不同。进一步地,第一滤波器、第二滤波器和第三滤波器均可以为神经网络滤波器,本申请实施例对此也不作任何限定。
可以理解地,由于相邻区域中第一颜色分量采样点的取值(即参考亮度信息)的空间分辨率、相邻区域中第二颜色分量采样点的取值(即参考色度信息)的空间分辨率,甚至当前块中第一参考颜色分量采样点的重建值(即重建亮度信息)的空间分辨率都会受到颜色格式信息的影响;因此,还可以根据当前的颜色格式信息进行第二滤波处理、第三滤波处理或者第四滤波处理。下面以第四滤波处理为例,在一些实施例中,该方法还可以包括:基于颜色格式信息,对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值。
在一种具体的实施例中,对于第四滤波处理而言,还可以包括:若颜色格式信息指示为4:2:0采样,则对当前块的相邻区域中的第二颜色分量采样点的取值进行上采样滤波;其中,上采样率是2的正整数倍。
需要说明的是,在本申请实施例中,如果颜色格式信息指示为亮度与色度的空间分辨率相等(如YUV444格式),那么不需要对参考色度信息作任何处理;如果颜色格式信息指示为亮度和色度的空间分辨率不一致(如YUV422格式/YUV411格式/YUV420格式等具有色度亚采样特征的视频),且色度分量的空间分辨率小于亮度分量的空间分辨率,那么需要对从相邻区域中获取的参考色度信息进行上采样滤波处理。
进一步地,在确定出当前块的第二颜色分量的参考样值之后,在一些实施例中,对于S1803来说,根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块,可以包括:确定第二颜色分量的参考样值与对应的加权系数的加权值;将第一预测块中第二颜色分量采样点的预测值设置为等于N个加权值之和;其中,N表示第二颜色分量的参考样值的数量,N是正整数。
也就是说,如果第二颜色分量的参考样值的数量有N个,那么首先确定每一个第二颜色分量的参考样值与对应的加权系数的加权值(即w kC k),然后将这N个加权值之和作为预测块中的第二颜色分量采样点的预测值。具体地,其计算公式具体如前述的式(16)所示。需要注意的是,这种方式利于并行处理,可以加快计算速度。
还可以理解地,在对当前块的相邻区域中的参考亮度信息和参考色度信息均进行上采样滤波处理时,这时候,还可以包括:基于颜色格式信息,利用第一水平上采样因子和第一垂直上采样因子对当前块的相邻区域中的第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;以及利用第二水平上采样因子和第二垂直上采样因子对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值。
在一种可能的实现方式中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则确定第二水平上采样因子等于第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;
若颜色格式信息指示为4:2:2采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;
若颜色格式信息指示为4:1:1采样,则确定第二水平上采样因子等于4倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;
若颜色格式信息指示为4:2:0采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于2倍的第一垂直上采样因子。
需要说明的是,在本申请实施例中,对参考亮度信息进行上采样滤波,此时亮度信息没有任何损失。在这种情况下,由于YUV视频中的亮度分量的空间分辨率总是大于等于色度分量的空间分辨率,因此参考色度信息必然要进行上采样滤波才能和参考亮度信息的空间分辨率保持一致。这时候可以根据YUV视频格式和对参考亮度信息的空间上采样频率(即第一水平上采样因子和第一垂直上采样因子)来决定参考色度信息的空间上采样频率(即第二水平上采样因子和第二垂直上采样因子)。
还可以理解地,在不对当前块内的重建亮度信息进行上采样滤波处理时,这时候可以根据YUV视频格式推断第一预测块的尺寸大小。因此,在另一种可能的实现方式中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度,第一预测块的高度等于当前块的高度;
若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测 块的高度等于当前块的高度;
若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度,第一预测块的高度等于当前块的高度;
若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于2倍的当前块的高度。
还可以理解地,在对当前块内的重建亮度信息进行上采样滤波处理时,当前块亮度上采样之后需要根据YUV视频格式和对当前块亮度的空间上采样频率来推断第一预测块的尺寸大小。这时候,可以包括:基于颜色格式信息,利用第三水平上采样因子和第三垂直上采样因子对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值。
在又一种可能的实现方式中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;
若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;
若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;
若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于2倍的当前块的高度与第三垂直上采样因子的乘积。
需要说明的是,在本申请实施例中,第一预测块的宽度用predSizeW表示,第一预测块的高度用predSizeH表示,当前块的宽度用nTbW表示,当前块的高度用nTbH表示。这样,根据前述方法所得到的第一预测块,predSizeH大于或等于当前块的高度nTbH,或者predSizeW大于或等于当前块的宽度nTbW;也就是说,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量。
S1804:对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块。
S1805:根据第二预测块,确定当前块的第二颜色分量采样点的预测差值。
需要说明的是,在本申请实施例中,第一滤波处理可以为下采样滤波处理。这样,对于经过下采样滤波之后的第二预测块,第二预测块中包含的第二颜色分量的预测值的数量与当前块中包含的第二颜色分量采样点的数量相同。
在一种可能的实现方式中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,可以包括:利用预设滤波器对第一预测块进行下采样滤波处理,确定当前块的第二颜色分量的第二预测块。
还需要说明的是,在本申请实施例中,预设滤波器可以为下采样滤波器。进一步地,这里的下采样滤波器可以为神经网络滤波器,本申请实施例对此不作任何限定。
在另一种可能的实现方式中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,可以包括:
确定水平下采样因子和垂直下采样因子;
根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波处理,得到当前块的第二颜色分量的第二预测块。
在一种具体的实施例中,根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波处理,得到当前块的第二颜色分量的第二预测块,可以包括:若水平下采样因子大于1,或者垂直下采样因子大于1,则对第一预测块进行下采样滤波处理,得到第二预测块。
在本申请实施例中,对第一预测块进行下采样滤波处理,可以包括下述至少一项:
对第一预测块在水平方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理;
对第一预测块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
在这里,首先可以根据第一预测块的宽度和当前块的宽度计算水平下采样因子,根据第一预测块的高度和当前块的高度计算垂直下采样因子;然后,根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波。具体来说,如果水平下采样因子大于1,垂直下采样因子等于1,那么只需要对第一预测块在水平方向进行下采样;如果水平下采样因子等于1,垂直下采样因子大于1,那么只需要对第一预测块在垂直方向进行下采样;如果水平下采样因子大于1,垂直下采样因子大于1,那么对第一预测块在水平方向和垂直方向都需要进行下采样,其中,可以执行先水平方向再垂直方向的下采样,也可以执行先垂直方向再水平方向的下采样,甚至还可以采用神经网络结构中的卷积操作代替这里的下采样操作,本申请实施例不作任何限定。
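下面给出一段示意性的Python代码,演示由第一预测块与当前块的尺寸之比得到水平/垂直下采样因子,并按“先水平后垂直”的顺序以求均值的方式做下采样;滤波器形式、处理顺序与取整方式均为本示例的假设:

def downsample_pred_block(pred1, nTbW, nTbH):
    # 对第一预测块pred1做第一滤波处理(下采样),得到尺寸为nTbH×nTbW的第二预测块
    predSizeH, predSizeW = len(pred1), len(pred1[0])
    fh = predSizeW // nTbW  # 水平下采样因子
    fv = predSizeH // nTbH  # 垂直下采样因子
    out = pred1
    if fh > 1:  # 水平方向:每fh个预测值求均值
        out = [[sum(row[x * fh:(x + 1) * fh]) // fh for x in range(nTbW)] for row in out]
    if fv > 1:  # 垂直方向:每fv个预测值求均值
        out = [[sum(out[y * fv + d][x] for d in range(fv)) // fv for x in range(nTbW)]
               for y in range(nTbH)]
    return out

pred1 = [[10, 12, 20, 22], [14, 16, 24, 26]]
print(downsample_pred_block(pred1, 2, 1))  # [[13, 23]]

其中,当水平下采样因子或垂直下采样因子等于1时,对应方向不作处理;在各预测值权值相等的特殊情况下,上述均值计算也可以视为加权和计算的一个特例。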
还需要说明的是,对于第一滤波处理,还可以采用隔点抽取的方式进行下采样滤波,比如二维滤波器、一维滤波器等等。其中,对于一维滤波器而言,可以是“先垂直方向再水平方向”,也可以是“先水平方向再垂直方向”,还可以是固定滤波顺序,甚至还可以是可灵活调整的滤波顺序(如有标识信息指示的滤波顺序、与预测模式或块大小绑定的顺序等),本申请实施例对此不作任何限定。
除此之外,在一些实施例中,所述对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块,还可以包括:根据水平下采样因子和垂直下采样因子,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在这里,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块,可以包括:
对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块;或者,
对第一预测块在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块;或者,
对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,以及在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
也就是说,本申请实施例也可以不区分水平下采样与垂直下采样的具体组合情况,仅针对需要下采样的方向(垂直方向或水平方向),对每预设个数的色度预测值进行加权和计算;其中,在每一个色度预测值的权值相等的特殊情况下,每预设个数的色度预测值的加权和计算也可以看作是对这预设个数的色度预测值求均值,将该均值作为下采样滤波后的预测值。
进一步地,在一些实施例中,该方法还可以包括:根据第一预测块中部分采样点的第一颜色分量的参考样值,确定加权系数;根据加权系数和第一预测块中部分采样点的第二颜色分量的参考样值,确定当前块的第二颜色分量的第二预测块。
在一种具体的实施例中,根据加权系数和第一预测块中部分采样点的第二颜色分量的参考样值,确定当前块的第二颜色分量的第二预测块,可以包括:根据加权系数对第一预测块中第(i,j)位置处采样点的第二颜色分量的参考样值进行加权计算,得到当前块中第(x,y)位置处采样点的第二颜色分量的预测值;其中,i、j、x、y均为大于或等于零的整数。
需要说明的是,在本申请实施例中,为了减少计算复杂度,不对当前块内的重建亮度信息进行上采样滤波,这时候可以考虑不对当前块内的重建亮度信息中每个亮度位置进行同位色度点的预测,可以选取部分亮度位置进行同位色度点的预测,从而在得到预测块之后可以不进行下采样滤波,能够保证已有亮度信息的准确性,准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,进而提高了色度预测值的准确性。
在这种实现方式中,为了节省后续对预测块进行上采样或者下采样的操作,可以根据YUV视频格式特征选取相应的位置进行色度预测,假设当前亮度点的位置为CurRecLuma(i,j),则需要进行色度预测的采样点的位置为CurPredChroma(x,y)。这时候,在一些实施例中,该方法还可以包括:
若颜色格式信息指示为4:4:4采样,则将x设置为等于i,将y设置为等于j;
若颜色格式信息指示为4:2:2采样,则将x设置为等于i与2的乘积,将y设置为等于j;
若颜色格式信息指示为4:1:1采样,则将x设置为等于i与4的乘积,将y设置为等于j;
若颜色格式信息指示为4:2:0采样,则将x设置为等于i与2的乘积,将y设置为等于j与2的乘积。
进一步地,在一些实施例中,该方法还可以包括:确定水平取样位置因子和垂直取样位置因子;将x设置为等于i与水平取样位置因子的乘积,将y设置为等于j与垂直取样位置因子的乘积。
也就是说,为了进一步减少预测时的计算复杂度,可以选择更少的同位色度点进行预测。假设当前亮度点的位置为CurRecLuma(i,j),则需要进行色度预测的点的位置为CurPredChroma(x,y)。假设当前块内重建亮度信息的水平取样位置因子为S_Pos_Hor,垂直取样位置因子为S_Pos_Ver。那么当前亮度点位置和需要进行色度预测的采样点的位置之间的关系如下:
当颜色格式信息指示为YUV444格式/YUV422格式/YUV411格式/YUV420格式的视频时,x可以设置为等于i与S_Pos_Hor的乘积,y可以设置为等于j与S_Pos_Ver的乘积。
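下面用一段示意性的Python代码表示上述位置对应关系;函数名为说明性假设:

def colocated_chroma_pos(i, j, s_pos_hor, s_pos_ver):
    # 由当前亮度点位置CurRecLuma(i, j)得到需要进行色度预测的采样点位置CurPredChroma(x, y)
    x = i * s_pos_hor  # 水平取样位置因子
    y = j * s_pos_ver  # 垂直取样位置因子
    return x, y

print(colocated_chroma_pos(3, 5, 2, 2))  # (6, 10),对应4:2:0采样时x=2i、y=2j的情形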
进一步地,在确定出第二预测块之后,该第二预测块在一定条件下还需要经过后处理,才能作为最终的第二预测块。因此,在一些实施例中,该方法还可以包括:在确定当前块的第二颜色分量的第二预测块之后,对第二预测块进行相关处理,将处理后的第二预测块作为第二预测块。
在本申请实施例中,对第二预测块进行相关处理,包括下述至少一项:
对第二预测块进行第三滤波处理;
利用预设补偿值对第二预测块进行修正处理;
利用至少一种预测模式下当前块的第二颜色分量的预测值对第二预测块进行加权融合处理。
也就是说,针对第二预测块的相关处理,为了降低WCP逐像素独立并行预测带来的不稳定性,可以对第二预测块进行平滑滤波作为最终的色度预测值。或者,为了进一步提升WCP预测值的准确性,可以对第二预测块进行位置相关的修正过程。比如:利用空间位置接近的参考像素对每个待预测像素计算色度补偿值,用此色度补偿值对预测块进行修正,将修正后的预测值作为最终的色度预测值。或者,为了进一步提升WCP预测值的准确性,可以将其他色度预测模式计算的色度预测值与WCP计算的色度预测值进行加权融合,将此融合结果作为最终的色度预测值。或者,为了提高WCP预测性能,可以采用神经网络模型对WCP计算的色度预测值进行修正等等,本申请实施例对此不作任何限定。
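以其中的加权融合为例,下面给出一段示意性的Python代码;其中的权重、移位与舍入方式均为本示例的假设,并非对融合方式的限定:

def fuse_predictions(wcp_pred, other_pred, w_wcp=3, w_other=1, shift=2):
    # 将WCP计算的色度预测值与其它预测模式的色度预测值按权重加权融合(定点实现,含舍入)
    assert w_wcp + w_other == (1 << shift)
    offset = 1 << (shift - 1)
    return [[(w_wcp * a + w_other * b + offset) >> shift
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(wcp_pred, other_pred)]

print(fuse_predictions([[100, 104]], [[96, 112]]))  # [[99, 106]]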
在一些实施例中,在确定出第二预测块之后,如图19所示,在S1804之后,还可以包括以下步骤:
S1901:根据第二预测块,确定当前块的第二颜色分量采样点的预测值。
S1902:根据当前块的第二颜色分量采样点的原始值和当前块的第二颜色分量采样点的预测值,确定当前块的第二颜色分量采样点的预测差值。
S1903:对当前块的第二颜色分量采样点的预测差值进行编码,将所得到的编码比特写入码流。
需要说明的是,在本申请实施例中,根据第二预测块确定当前块的第二颜色分量采样点的预测值,可以是将当前块的第二颜色分量采样点的预测值设置为等于第二预测块的值;或者也可以是对第二预测块的值进行上采样滤波,将当前块的第二颜色分量采样点的预测值设置为等于所述上采样滤波后的输出值。
还需要说明的是,在确定出当前块的第二颜色分量采样点的预测值后,根据第二颜色分量采样点的原始值和第二颜色分量采样点的预测值,即可确定出第二颜色分量采样点的预测差值,具体可以是对第二颜色分量采样点的原始值和第二颜色分量采样点的预测值进行减法计算,从而能够确定出当前块的第二颜色分量采样点的预测差值。这样,在将第二颜色分量采样点的预测差值写入码流后,后续在解码端,通过解码即可获得第二颜色分量采样点的预测差值,以便恢复当前块的第二颜色分量采样点的重建值。
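下面用一段示意性的Python代码概括编码端求预测差值与解码端恢复重建值的对应关系;其中对重建值进行裁剪以及位深取10比特均为本示例的假设:

def chroma_residual(original, predicted):
    # 编码端:第二颜色分量采样点的预测差值 = 原始值 - 预测值(逐点相减)
    return [[o - p for o, p in zip(ro, rp)] for ro, rp in zip(original, predicted)]

def chroma_reconstruct(residual, predicted, bit_depth=10):
    # 解码端:重建值 = 预测差值 + 预测值,并裁剪到样值范围[0, 2^bit_depth - 1]
    max_val = (1 << bit_depth) - 1
    return [[min(max(r + p, 0), max_val) for r, p in zip(rr, rp)]
            for rr, rp in zip(residual, predicted)]

res = chroma_residual([[518, 520]], [[512, 524]])
print(res)                                    # [[6, -4]]
print(chroma_reconstruct(res, [[512, 524]]))  # [[518, 520]]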
综上可知,在WCP模式的预测过程中,利用未损失的亮度信息进行色度预测,主要包括三方面:一方面,充分利用参考像素与当前块的亮度信息,实现了参考像素色度加权系数的计算;另一方面,充分考虑已有亮度信息的重要性,在不损失亮度信息的基础上建立更准确的非线性映射模型对参考色度点分配权重进行加权预测;又一方面,在进行参考色度上采样和根据当前块每个亮度点位置进行同位色度预测时,充分考虑各种YUV视频格式的特征,根据不同YUV视频的色度亚采样格式和亮度上采样情况始终保证色度分量和亮度分量的空间分辨率一致,充分利用已有的未损失亮度信息进行同位色度预测,以提高WCP模式中色度预测值的准确性。
本申请实施例还提供了一种编码方法,通过确定当前块的第一颜色分量的参考样值;根据当前块的第一颜色分量的参考样值,确定加权系数;根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;根据第二预测块,确定当前块的第二颜色分量采样点的预测差值。这样,利用当前块相邻的参考像素与当前块内的颜色分量信息,不仅充分考虑了已有的颜色分量信息,使得在不损失亮度信息的基础上能够建立更准确的非线性映射模型对每个色度分量的参考样值分配权重进行加权预测;而且对于第一滤波处理还充分考虑了不同的颜色格式信息,根据不同的颜色格式信息进行色度和/或亮度的采样滤波,能够始终保证色度分量和亮度分量的空间分辨率一致,这样既可以保证已有亮度信息的准确性,又可以在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,同时提升编解码性能。
在本申请的又一实施例中,本申请实施例还提供了一种码流,该码流是根据待编码信息进行比特编码生成的;其中,待编码信息至少包括:当前块的第二颜色分量采样点的预测差值。
在本申请实施例中,在当前块的第二颜色分量采样点的预测差值由编码端传递到解码端之后,解码端通过解码获得第二颜色分量采样点的预测差值,然后结合当前块的第二颜色分量采样点的预测值,从而能够恢复出当前块的第二颜色分量采样点的重建值。这样,不仅充分考虑了已有的颜色分量信息,而且还充分考虑了不同的颜色格式信息,既能够保证已有亮度信息的准确性,又能够在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,同时提升编解码性能。
在本申请的再一实施例中,基于前述实施例相同的发明构思,参见图20,其示出了本申请实施例提供的一种编码装置300的组成结构示意图。如图20所示,该编码装置300可以包括:第一确定单元3001、第一预测单元3002和第一滤波单元3003;其中,
第一确定单元3001,配置为确定当前块的第一颜色分量的参考样值;以及根据当前块的第一颜色分量的参考样值,确定加权系数;
第一预测单元3002,配置为根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;
第一滤波单元3003,配置为对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;
第一确定单元3001,还配置为根据第二预测块,确定当前块的第二颜色分量采样点的预测差值。
在一些实施例中,第二预测块中包含的第二颜色分量的预测值的数量与当前块中包含的第二颜色分量采样点的数量相同。
在一些实施例中,第一确定单元3001,还配置为根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考样值;其中,相邻区域包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在一些实施例中,第一确定单元3001,还配置为对相邻区域中的第一颜色分量采样点进行筛选处理,确定第一颜色分量采样点的取值。
在一些实施例中,第一确定单元3001,还配置为基于相邻区域中的第一颜色分量采样点的位置和/或颜色分量强度,确定待选择采样点位置;以及根据待选择采样点位置,从相邻区域中确定第一颜色分量采样点的取值。
在一些实施例中,第一滤波单元3003,还配置为对第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;以及根据当前块的第一颜色分量的滤波相邻样值,确定当前块的第一颜色分量的参考样值。
在一些实施例中,当前块的第一颜色分量的滤波相邻样值的数量大于第一颜色分量采样点的取值的数量。
在一些实施例中,第一确定单元3001,还配置为基于当前块中第一参考颜色分量采样点的重建值,确定当前块的第一颜色分量的参考样值。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的重建值之差的绝对值。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的重建值之差的绝对值。
在一些实施例中,第一滤波单元3003,还配置为对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值;以及根据当前块中第一参考颜色分量采样点的滤波样值,确定当前块的第一颜色分量的参考样值。
在一些实施例中,当前块中第一参考颜色分量采样点的滤波样值的数量大于当前块中第一参考颜色分量采样点的重建值的数量。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的滤波样值之差的绝对值。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的滤波样值之差的绝对值。
在一些实施例中,第一确定单元3001,还配置为确定在预设映射关系下第一颜色分量的参考样值对应的取值;以及将加权系数设置为等于取值。
在一些实施例中,第一确定单元3001,还配置为确定第一因子;以及根据第一因子和第一颜色分量的参考样值,确定第一乘积值;以及确定第一乘积值在预设映射关系下对应的取值。
在一些实施例中,第一因子是预设常数值。
在一些实施例中,第一确定单元3001,还配置为根据当前块的尺寸参数,确定第一因子的取值;其中,当前块的尺寸参数包括以下参数的至少之一:当前块的宽度,当前块的高度。
在一些实施例中,预设映射关系是Softmax函数。
在一些实施例中,预设映射关系是与第一颜色分量的参考样值具有反比关系的加权函数。
在一些实施例中,第一确定单元3001,还配置为根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考样值。
在一些实施例中,第一滤波单元3003,还配置为对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值;以及根据当前块的第二颜色分量的滤波相邻样值,确定当前块的第二颜色分量的参考样值。
在一些实施例中,当前块的第二颜色分量的滤波相邻样值的数量大于当前块的相邻区域中的第二颜色分量采样点的取值的数量。
在一些实施例中,第四滤波处理是上采样滤波;其中,上采样率是2的正整数倍。
在一些实施例中,第一滤波单元3003,还配置为基于颜色格式信息,对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值。
在一些实施例中,第一滤波单元3003,还配置为若颜色格式信息指示为4:2:0采样,则对当前块的相邻区域中的第二颜色分量采样点的取值进行上采样滤波;其中,上采样率是2的正整数倍。
在一些实施例中,第一滤波单元3003,还配置为利用第一水平上采样因子和第一垂直上采样因子对当前块的相邻区域中的第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;以及利用第二水平上采样因子和第二垂直上采样因子对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值;
第一确定单元3001,还配置为若颜色格式信息指示为4:4:4采样,则确定第二水平上采样因子等于第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;若颜色格式信息指示为4:2:2采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;若颜色格式信息指示为4:1:1采样,则确定第二水平上采样因子等于4倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;若颜色格式信息指示为4:2:0采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于2倍的第一垂直上采样因子。
在一些实施例中,第一确定单元3001,还配置为若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度,第一预测块的高度等于当前块的高度;若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于当前块的高度;若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度,第一预测块的高度等于当前块的高度;若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于2倍的当前块的高度。
在一些实施例中,第一滤波单元3003,还配置为利用第三水平上采样因子和第三垂直上采样因子对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值;
第一确定单元3001,还配置为若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于2倍的当前块的高度与第三垂直上采样因子的乘积。
在一些实施例中,第一预测单元3002,还配置为确定第二颜色分量的参考样值与对应的加权系数的加权值;以及将第一预测块中第二颜色分量采样点的预测值设置为等于N个加权值之和;其中,N表示第二颜色分量的参考样值的数量,N是正整数。
在一些实施例中,第一滤波处理为下采样滤波处理。
在一些实施例中,第一滤波单元3003,还配置为利用预设滤波器对第一预测块进行下采样滤波处理,确定当前块的第二颜色分量的第二预测块。
在一些实施例中,第一滤波单元3003,还配置为确定水平下采样因子和垂直下采样因子;以及根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波处理,得到当前块的第二颜色分量的第二预测块。
在一些实施例中,第一滤波单元3003,还配置为若水平下采样因子大于1,或者垂直下采样因子大于1,则对第一预测块进行下采样滤波处理,得到第二预测块。
在一些实施例中,第一滤波单元3003,还配置为对第一预测块进行下述至少一项的下采样滤波处理:
对第一预测块在水平方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理;
对第一预测块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
在一些实施例中,第一滤波单元3003,还配置为根据水平下采样因子和垂直下采样因子,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第一滤波单元3003,还配置为对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第一滤波单元3003,还配置为对第一预测块在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第一滤波单元3003,还配置为对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,以及在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第一滤波单元3003,还配置为根据第一预测块中部分采样点的第一颜色分量的参考样值,确定加权系数;以及根据加权系数和第一预测块中部分采样点的第二颜色分量的参考样值,确定当前块的第二颜色分量的第二预测块。
在一些实施例中,第一滤波单元3003,还配置为根据加权系数对第一预测块中第(i,j)位置处采样点的第二颜色分量的参考样值进行加权计算,得到当前块中第(x,y)位置处采样点的第二颜色分量的预测值;其中,i、j、x、y均为大于或等于零的整数。
在一些实施例中,第一确定单元3001,还配置为若颜色格式信息指示为4:4:4采样,则将x设置为等于i,将y设置为等于j;若颜色格式信息指示为4:2:2采样,则将x设置为等于i与2的乘积,将y设置为等于j;若颜色格式信息指示为4:1:1采样,则将x设置为等于i与4的乘积,将y设置为等于j;若颜色格式信息指示为4:2:0采样,则将x设置为等于i与2的乘积,将y设置为等于j与2的乘积。
在一些实施例中,第一确定单元3001,还配置为确定水平取样位置因子和垂直取样位置因子;以及将x设置为等于i与水平取样位置因子的乘积,将y设置为等于j与垂直取样位置因子的乘积。
在一些实施例中,第一滤波单元3003,还配置为在确定当前块的第二颜色分量的第二预测块之后,对第二预测块进行相关处理,将处理后的第二预测块作为第二预测块;其中,对第二预测块进行相关处理,包括下述至少一项:
对第二预测块进行第三滤波处理;
利用预设补偿值对第二预测块进行修正处理;
利用至少一种预测模式下当前块的第二颜色分量的预测值对第二预测块进行加权融合处理。
在一些实施例中,第一确定单元3001,还配置为根据第二预测块,确定当前块的第二颜色分量采样点的预测值;以及根据当前块的第二颜色分量采样点的原始值和当前块的第二颜色分量采样点的预测值,确定当前块的第二颜色分量采样点的预测差值。
在一些实施例中,参见图20,该编码装置300还可以包括编码单元3004,配置为对当前块的第二颜色分量采样点的预测差值进行编码,将所得到的编码比特写入码流。
可以理解地,在本申请实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机可读存储介质,应用于编码装置300,该计算机可读存储介质存储有计算机程序,所述计算机程序被第一处理器执行时实现前述实施例中任一项所述的方法。
基于上述编码装置300的组成以及计算机可读存储介质,参见图21,其示出了本申请实施例提供的编码设备310的具体硬件结构示意图。如图21所示,编码设备310可以包括:第一通信接口3101、第一存储器3102和第一处理器3103;各个组件通过第一总线系统3104耦合在一起。可理解,第一总线系统3104用于实现这些组件之间的连接通信。第一总线系统3104除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图21中将各种总线都标为第一总线系统3104。其中,
第一通信接口3101,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第一存储器3102,用于存储能够在第一处理器3103上运行的计算机程序;
第一处理器3103,用于在运行所述计算机程序时,执行:
确定当前块的第一颜色分量的参考样值;
根据当前块的第一颜色分量的参考样值,确定加权系数;
根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;
对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;
根据第二预测块,确定当前块的第二颜色分量采样点的预测差值。
可以理解,本申请实施例中的第一存储器3102可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器3102旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器3103可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过第一处理器3103中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器3103可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器3102,第一处理器3103读取第一存储器3102中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器3103还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
本实施例提供了一种编码设备,该编码设备还可以包括前述实施例任一项所述的编码装置300。对于编码设备而言,不仅充分考虑了已有的颜色分量信息,而且还充分考虑了不同的颜色格式信息,既能够保证已有亮度信息的准确性,又能够在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,同时提升编解码性能。
基于前述实施例相同的发明构思,参见图22,其示出了本申请实施例提供的一种解码装置320的组成结构示意图。如图22所示,该解码装置320可以包括:第二确定单元3201、第二预测单元3202和第二滤波单元3203;其中,
第二确定单元3201,配置为确定当前块的第一颜色分量的参考样值;以及根据当前块的第一颜色分量的参考样值,确定加权系数;
第二预测单元3202,配置为根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;
第二滤波单元3203,配置为对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;
第二确定单元3201,还配置为根据第二预测块,确定当前块的第二颜色分量采样点的重建值。
在一些实施例中,第二预测块中包含的第二颜色分量的预测值的数量与当前块中包含的第二颜色分量采样点的数量相同。
在一些实施例中,第二确定单元3201,还配置为根据当前块的相邻区域中的第一颜色分量采样点的取值,确定当前块的第一颜色分量的参考样值;其中,相邻区域包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
在一些实施例中,第二确定单元3201,还配置为对相邻区域中的第一颜色分量采样点进行筛选处理,确定第一颜色分量采样点的取值。
在一些实施例中,第二确定单元3201,还配置为基于相邻区域中的第一颜色分量采样点的位置和/或颜色分量强度,确定待选择采样点位置;以及根据待选择采样点位置,从相邻区域中确定第一颜色分量采样点的取值。
在一些实施例中,第二滤波单元3203,还配置为对第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;以及根据当前块的第一颜色分量的滤波相邻样值,确定当前块的第一颜色分量的参考样值。
在一些实施例中,当前块的第一颜色分量的滤波相邻样值的数量大于第一颜色分量采样点的取值的数量。
在一些实施例中,第二确定单元3201,还配置为基于当前块中第一参考颜色分量采样点的重建值,确定当前块的第一颜色分量的参考样值。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的重建值之差的绝对值。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的重建值之差的绝对值。
在一些实施例中,第二滤波单元3203,还配置为对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值;以及根据当前块中第一参考颜色分量采样点的滤波样值,确定当前块的第一颜色分量的参考样值。
在一些实施例中,当前块中第一参考颜色分量采样点的滤波样值的数量大于当前块中第一参考颜色分量采样点的重建值的数量。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量的滤波相邻样值与第一参考颜色分量采样点的滤波样值之差的绝对值。
在一些实施例中,当前块的第一颜色分量的参考样值设置为第一颜色分量采样点的取值与第一参考颜色分量采样点的滤波样值之差的绝对值。
在一些实施例中,第二确定单元3201,还配置为确定在预设映射关系下第一颜色分量的参考样值对应的取值;以及将加权系数设置为等于取值。
在一些实施例中,第二确定单元3201,还配置为确定第一因子;以及根据第一因子和第一颜色分量的参考样值,确定第一乘积值;以及确定第一乘积值在预设映射关系下对应的取值。
在一些实施例中,第一因子是预设常数值。
在一些实施例中,第二确定单元3201,还配置为根据当前块的尺寸参数,确定第一因子的取值;其中,当前块的尺寸参数包括以下参数的至少之一:当前块的宽度,当前块的高度。
在一些实施例中,预设映射关系是Softmax函数。
在一些实施例中,预设映射关系是与第一颜色分量的参考样值具有反比关系的加权函数。
在一些实施例中,第二确定单元3201,还配置为根据当前块的相邻区域中的第二颜色分量采样点的取值,确定当前块的第二颜色分量的参考样值。
在一些实施例中,第二滤波单元3203,还配置为对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值;以及根据当前块的第二颜色分量的滤波相邻样值,确定当前块的第二颜色分量的参考样值。
在一些实施例中,当前块的第二颜色分量的滤波相邻样值的数量大于当前块的相邻区域中的第二颜色分量采样点的取值的数量。
在一些实施例中,第四滤波处理是上采样滤波;其中,上采样率是2的正整数倍。
在一些实施例中,第二滤波单元3203,还配置为基于颜色格式信息,对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值。
在一些实施例中,第二滤波单元3203,还配置为若颜色格式信息指示为4:2:0采样,则对当前块的相邻区域中的第二颜色分量采样点的取值进行上采样滤波;其中,上采样率是2的正整数倍。
在一些实施例中,第二滤波单元3203,还配置为利用第一水平上采样因子和第一垂直上采样因子对当前块的相邻区域中的第一颜色分量采样点的取值进行第二滤波处理,得到当前块的第一颜色分量的滤波相邻样值;以及利用第二水平上采样因子和第二垂直上采样因子对当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到当前块的第二颜色分量的滤波相邻样值;
第二确定单元3201,还配置为若颜色格式信息指示为4:4:4采样,则确定第二水平上采样因子等于第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;若颜色格式信息指示为4:2:2采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;若颜色格式信息指示为4:1:1采样,则确定第二水平上采样因子等于4倍的第一水平上采样因子,第二垂直上采样因子等于第一垂直上采样因子;若颜色格式信息指示为4:2:0采样,则确定第二水平上采样因子等于2倍的第一水平上采样因子,第二垂直上采样因子等于2倍的第一垂直上采样因子。
在一些实施例中,第二确定单元3201,还配置为若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度,第一预测块的高度等于当前块的高度;若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于当前块的高度;若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度,第一预测块的高度等于当前块的高度;若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度,第一预测块的高度等于2倍的当前块的高度。
在一些实施例中,第二滤波单元3203,还配置为利用第三水平上采样因子和第三垂直上采样因子对当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到当前块中第一参考颜色分量采样点的滤波样值;
第二确定单元3201,还配置为若颜色格式信息指示为4:4:4采样,则确定第一预测块的宽度等于当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;若颜色格式信息指示为4:2:2采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;若颜色格式信息指示为4:1:1采样,则确定第一预测块的宽度等于4倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于当前块的高度与第三垂直上采样因子的乘积;若颜色格式信息指示为4:2:0采样,则确定第一预测块的宽度等于2倍的当前块的宽度与第三水平上采样因子的乘积,第一预测块的高度等于2倍的当前块的高度与第三垂直上采样因子的乘积。
在一些实施例中,第二预测单元3202,还配置为确定第二颜色分量的参考样值与对应的加权系数的加权值;以及将第一预测块中第二颜色分量采样点的预测值设置为等于N个加权值之和;其中,N表示第二颜色分量的参考样值的数量,N是正整数。
在一些实施例中,第一滤波处理为下采样滤波处理。
在一些实施例中,第二滤波单元3203,还配置为利用预设滤波器对第一预测块进行下采样滤波处理,确定当前块的第二颜色分量的第二预测块。
在一些实施例中,第二滤波单元3203,还配置为确定水平下采样因子和垂直下采样因子;以及根据水平下采样因子和垂直下采样因子对第一预测块进行下采样滤波处理,得到当前块的第二颜色分量的第二预测块。
在一些实施例中,第二滤波单元3203,还配置为若水平下采样因子大于1,或者垂直下采样因子大于1,则对第一预测块进行下采样滤波处理,得到第二预测块。
在一些实施例中,第二滤波单元3203,还配置为对第一预测块进行下述至少一项的下采样滤波处理:
对第一预测块在水平方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理;
对第一预测块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
对第一预测块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
在一些实施例中,第二滤波单元3203,还配置为根据水平下采样因子和垂直下采样因子,对第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第二滤波单元3203,还配置为对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第二滤波单元3203,还配置为对第一预测块在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第二滤波单元3203,还配置为对第一预测块在水平方向上每水平下采样因子数量的第二颜色分量的预测值进行加权和计算,以及在垂直方向上每垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到第二预测块。
在一些实施例中,第二滤波单元3203,还配置为根据第一预测块中部分采样点的第一颜色分量的参考样值,确定加权系数;以及根据加权系数和第一预测块中部分采样点的第二颜色分量的参考样值,确定当前块的第二颜色分量的第二预测块。
在一些实施例中,第二滤波单元3203,还配置为根据加权系数对第一预测块中第(i,j)位置处采样点的第二颜色分量的参考样值进行加权计算,得到当前块中第(x,y)位置处采样点的第二颜色分量的预测值;其中,i、j、x、y均为大于或等于零的整数。
在一些实施例中,第二确定单元3201,还配置为若颜色格式信息指示为4:4:4采样,则将x设置为等于i,将y设置为等于j;若颜色格式信息指示为4:2:2采样,则将x设置为等于i与2的乘积,将y设置为等于j;若颜色格式信息指示为4:1:1采样,则将x设置为等于i与4的乘积,将y设置为等于j;若颜色格式信息指示为4:2:0采样,则将x设置为等于i与2的乘积,将y设置为等于j与2的乘积。
在一些实施例中,第二确定单元3201,还配置为确定水平取样位置因子和垂直取样位置因子;以及将x设置为等于i与水平取样位置因子的乘积,将y设置为等于j与垂直取样位置因子的乘积。
在一些实施例中,第二滤波单元3203,还配置为在确定当前块的第二颜色分量的第二预测块之后,对第二预测块进行相关处理,将处理后的第二预测块作为第二预测块;其中,对第二预测块进行相关处理,包括下述至少一项:
对第二预测块进行第三滤波处理;
利用预设补偿值对第二预测块进行修正处理;
利用至少一种预测模式下当前块的第二颜色分量的预测值对第二预测块进行加权融合处理。
在一些实施例中,第二确定单元3201,还配置为确定当前块的第二颜色分量采样点的预测差值;以及根据第二预测块,确定当前块的第二颜色分量采样点的预测值;以及根据当前块的第二颜色分量采样点的预测差值和当前块的第二颜色分量采样点的预测值,确定当前块的第二颜色分量采样点的重建值。
在一些实施例中,参见图22,该解码装置320还可以包括解码单元3204,配置为解析码流,确定当前块的第二颜色分量采样点的预测差值。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本实施例提供了一种计算机可读存储介质,应用于解码装置320,该计算机可读存储介质存储有计算机程序,所述计算机程序被第二处理器执行时实现前述实施例中任一项所述的方法。
基于上述解码装置320的组成以及计算机可读存储介质,参见图23,其示出了本申请实施例提供的解码设备330的具体硬件结构示意图。如图23所示,解码设备330可以包括:第二通信接口3301、第二存储器3302和第二处理器3303;各个组件通过第二总线系统3304耦合在一起。可理解,第二总线系统3304用于实现这些组件之间的连接通信。第二总线系统3304除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图23中将各种总线都标为第二总线系统3304。其中,
第二通信接口3301,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第二存储器3302,用于存储能够在第二处理器3303上运行的计算机程序;
第二处理器3303,用于在运行所述计算机程序时,执行:
确定当前块的第一颜色分量的参考样值;
根据当前块的第一颜色分量的参考样值,确定加权系数;
根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;
对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块;
根据第二预测块,确定当前块的第二颜色分量采样点的重建值。
可选地,作为另一个实施例,第二处理器3303还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
可以理解,第二存储器3302与第一存储器3102的硬件功能类似,第二处理器3303与第一处理器3103的硬件功能类似;这里不再详述。
本实施例提供了一种解码设备,该解码设备还可以包括前述实施例所述的解码装置320。对于解码设备而言,不仅充分考虑了已有的颜色分量信息,而且还充分考虑了不同的颜色格式信息,既能够保证已有亮度信息的准确性,又能够在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,同时提升编解码性能。
在本申请的再一实施例中,参见图24,其示出了本申请实施例提供的一种编解码系统的组成结构示意图。如图24所示,编解码系统340可以包括编码器3401和解码器3402。其中,编码器3401可以为集成有前述实施例所述编码装置300的设备,或者也可以为前述实施例所述的编码设备310;解码器3402可以为集成有前述实施例所述解码装置320的设备,或者也可以为前述实施例所述的解码设备330。
在本申请实施例中,在该编解码系统340中,无论是编码器3401还是解码器3402,均充分考虑了已有的颜色分量信息,而且还充分考虑了不同的颜色格式信息,既能够保证已有亮度信息的准确性,又能够在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,同时提升编解码性能。
需要说明的是,在本申请中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例中,无论是编码端还是解码端,通过确定当前块的第一颜色分量的参考样值;根据当前块的第一颜色分量的参考样值,确定加权系数;根据加权系数和当前块的第二颜色分量的参考样值,确定当前块的第二颜色分量的第一预测块;其中,第一预测块中包含的第二颜色分量的预测值的数量大于当前块中包含的第二颜色分量采样点的数量;对第一预测块进行第一滤波处理,确定当前块的第二颜色分量的第二预测块。这样,在编码端根据第二预测块,能够确定当前块的第二颜色分量采样点的预测差值;将预测差值写入码流中;使得在解码端,根据第二预测块和解码得到的预测差值可以确定出当前块的第二颜色分量采样点的重建值。如此,利用当前块相邻的参考像素与当前块内的颜色分量信息,不仅充分考虑了已有的颜色分量信息,使得在不损失亮度信息的基础上能够建立更准确的非线性映射模型对每个色度分量的参考样值分配权重进行加权预测;而且对于第一滤波处理还充分考虑了不同的颜色格式信息,根据不同的颜色格式信息进行色度和/或亮度的采样滤波,能够始终保证色度分量和亮度分量的空间分辨率一致,这样既可以保证已有亮度信息的准确性,又可以在利用未损失的亮度信息进行色度分量预测时,基于准确的亮度信息有利于提高非线性映射模型的准确性和稳定性,从而能够提高色度预测的准确性,节省码率,提升编解码效率,进而提升编解码性能。

Claims (96)

  1. 一种解码方法,包括:
    确定当前块的第一颜色分量的参考样值;
    根据所述当前块的第一颜色分量的参考样值,确定加权系数;
    根据所述加权系数和所述当前块的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第一预测块;其中,所述第一预测块中包含的所述第二颜色分量的预测值的数量大于所述当前块中包含的第二颜色分量采样点的数量;
    对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块;
    根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的重建值。
  2. 根据权利要求1所述的方法,其中,所述第二预测块中包含的所述第二颜色分量的预测值的数量与所述当前块中包含的第二颜色分量采样点的数量相同。
  3. 根据权利要求1所述的方法,其中,所述确定当前块的第一颜色分量的参考样值,包括:
    根据所述当前块的相邻区域中的第一颜色分量采样点的取值,确定所述当前块的第一颜色分量的参考样值;
    其中,所述相邻区域包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
  4. 根据权利要求3所述的方法,其中,所述方法还包括:
    对所述相邻区域中的第一颜色分量采样点进行筛选处理,确定所述第一颜色分量采样点的取值。
  5. 根据权利要求4所述的方法,其中,所述对所述相邻区域中的第一颜色分量采样点进行筛选处理,确定所述第一颜色分量采样点的取值,包括:
    基于所述相邻区域中的第一颜色分量采样点的位置和/或颜色分量强度,确定待选择采样点位置;
    根据所述待选择采样点位置,从所述相邻区域中确定所述第一颜色分量采样点的取值。
  6. 根据权利要求3所述的方法,其中,所述确定所述当前块的第一颜色分量的参考样值,还包括:
    对所述第一颜色分量采样点的取值进行第二滤波处理,得到所述当前块的第一颜色分量的滤波相邻样值;
    根据所述当前块的第一颜色分量的滤波相邻样值,确定所述当前块的第一颜色分量的参考样值。
  7. 根据权利要求6所述的方法,其中,所述当前块的第一颜色分量的滤波相邻样值的数量大于所述第一颜色分量采样点的取值的数量。
  8. 根据权利要求3或6所述的方法,其中,所述确定当前块的第一颜色分量的参考样值,还包括:
    基于所述当前块中第一参考颜色分量采样点的重建值,确定所述当前块的第一颜色分量的参考样值。
  9. 根据权利要求8所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量采样点的取值与所述第一参考颜色分量采样点的重建值之差的绝对值。
  10. 根据权利要求8所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量的滤波相邻样值与所述第一参考颜色分量采样点的重建值之差的绝对值。
  11. 根据权利要求3或6所述的方法,其中,所述确定所述当前块的第一颜色分量的参考样值,还包括:
    对所述当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到所述当前块中第一参考颜色分量采样点的滤波样值;
    根据所述当前块中第一参考颜色分量采样点的滤波样值,确定所述当前块的第一颜色分量的参考样值。
  12. 根据权利要求11所述的方法,其中,所述当前块中第一参考颜色分量采样点的滤波样值的数量大于所述当前块中第一参考颜色分量采样点的重建值的数量。
  13. 根据权利要求11所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量的滤波相邻样值与所述第一参考颜色分量采样点的滤波样值之差的绝对值。
  14. 根据权利要求11所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量采样点的取值与所述第一参考颜色分量采样点的滤波样值之差的绝对值。
  15. 根据权利要求1所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定加权系数,包括:
    确定在预设映射关系下所述第一颜色分量的参考样值对应的取值;
    将所述加权系数设置为等于所述取值。
  16. 根据权利要求15所述的方法,其中,所述确定在预设映射关系下所述第一颜色分量的参考样值对应的取值,包括:
    确定第一因子;
    根据所述第一因子和所述第一颜色分量的参考样值,确定第一乘积值;
    确定所述第一乘积值在所述预设映射关系下对应的取值。
  17. 根据权利要求16所述的方法,其中,所述确定第一因子,包括:
    所述第一因子是预设常数值。
  18. 根据权利要求16所述的方法,其中,所述确定第一因子,包括:
    根据所述当前块的尺寸参数,确定所述第一因子的取值;
    其中,所述当前块的尺寸参数包括以下参数的至少之一:所述当前块的宽度,所述当前块的高度。
  19. 根据权利要求15所述的方法,其中,所述确定在预设映射关系下所述第一颜色分量的参考样值对应的取值,包括:
    所述预设映射关系是Softmax函数。
  20. 根据权利要求15所述的方法,其中,所述确定在预设映射关系下所述第一颜色分量的参考样值对应的取值,包括:
    所述预设映射关系是与所述第一颜色分量的参考样值具有反比关系的加权函数。
  21. 根据权利要求1所述的方法,其中,所述方法还包括:
    根据所述当前块的相邻区域中的第二颜色分量采样点的取值,确定所述当前块的第二颜色分量的参考样值。
  22. 根据权利要求21所述的方法,其中,所述方法还包括:
    对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到所述当前块的第二颜色分量的滤波相邻样值;
    根据所述当前块的第二颜色分量的滤波相邻样值,确定所述当前块的第二颜色分量的参考样值。
  23. 根据权利要求22所述的方法,其中,所述当前块的第二颜色分量的滤波相邻样值的数量大于所述当前块的相邻区域中的第二颜色分量采样点的取值的数量。
  24. 根据权利要求22所述的方法,其中,所述方法还包括:
    所述第四滤波处理是上采样滤波;其中,上采样率是2的正整数倍。
  25. 根据权利要求22所述的方法,其中,所述方法还包括:
    基于颜色格式信息,对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到所述当前块的第二颜色分量的滤波相邻样值。
  26. 根据权利要求25所述的方法,其中,所述第四滤波处理,还包括:
    若所述颜色格式信息指示为4:2:0采样,则对所述当前块的相邻区域中的第二颜色分量采样点的取值进行上采样滤波;其中,上采样率是2的正整数倍。
  27. 根据权利要求22所述的方法,其中,所述方法还包括:
    利用第一水平上采样因子和第一垂直上采样因子对所述当前块的相邻区域中的第一颜色分量采样点的取值进行第二滤波处理,得到所述当前块的第一颜色分量的滤波相邻样值;以及利用第二水平上采样因子和第二垂直上采样因子对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到所述当前块的第二颜色分量的滤波相邻样值;
    相应地,所述方法还包括:
    若所述颜色格式信息指示为4:4:4采样,则确定所述第二水平上采样因子等于所述第一水平上采样因子,所述第二垂直上采样因子等于所述第一垂直上采样因子;
    若所述颜色格式信息指示为4:2:2采样,则确定所述第二水平上采样因子等于2倍的所述第一水平上采样因子,所述第二垂直上采样因子等于所述第一垂直上采样因子;
    若所述颜色格式信息指示为4:1:1采样,则确定所述第二水平上采样因子等于4倍的所述第一水平上采样因子,所述第二垂直上采样因子等于所述第一垂直上采样因子;
    若所述颜色格式信息指示为4:2:0采样,则确定所述第二水平上采样因子等于2倍的所述第一水平上采样因子,所述第二垂直上采样因子等于2倍的所述第一垂直上采样因子。
  28. 根据权利要求8所述的方法,其中,所述方法还包括:
    若颜色格式信息指示为4:4:4采样,则确定所述第一预测块的宽度等于所述当前块的宽度,所述第一预测块的高度等于所述当前块的高度;
    若颜色格式信息指示为4:2:2采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度,所述第一预测块的高度等于所述当前块的高度;
    若颜色格式信息指示为4:1:1采样,则确定所述第一预测块的宽度等于4倍的所述当前块的宽度,所述第一预测块的高度等于所述当前块的高度;
    若颜色格式信息指示为4:2:0采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度,所述第一预测块的高度等于2倍的所述当前块的高度。
  29. 根据权利要求11所述的方法,其中,所述方法还包括:
    利用第三水平上采样因子和第三垂直上采样因子对所述当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到所述当前块中第一参考颜色分量采样点的滤波样值;
    相应地,所述方法还包括:
    若颜色格式信息指示为4:4:4采样,则确定所述第一预测块的宽度等于所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于所述当前块的高度与所述第三垂直上采样因子的乘积;
    若颜色格式信息指示为4:2:2采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于所述当前块的高度与所述第三垂直上采样因子的乘积;
    若颜色格式信息指示为4:1:1采样,则确定所述第一预测块的宽度等于4倍的所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于所述当前块的高度与所述第三垂直上采样因子的乘积;
    若颜色格式信息指示为4:2:0采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于2倍的所述当前块的高度与所述第三垂直上采样因子的乘积。
  30. 根据权利要求1所述的方法,其中,所述根据所述加权系数和所述当前块的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第一预测块,包括:
    确定所述第二颜色分量的参考样值与对应的所述加权系数的加权值;
    将所述第一预测块中所述第二颜色分量采样点的预测值设置为等于N个所述加权值之和;其中,N表示所述第二颜色分量的参考样值的数量,N是正整数。
  31. 根据权利要求1所述的方法,其中,所述第一滤波处理为下采样滤波处理。
  32. 根据权利要求31所述的方法,其中,所述对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块,包括:
    利用预设滤波器对所述第一预测块进行下采样滤波处理,确定所述当前块的第二颜色分量的第二预测块。
  33. 根据权利要求31所述的方法,其中,所述对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块,包括:
    确定水平下采样因子和垂直下采样因子;
    根据所述水平下采样因子和所述垂直下采样因子对所述第一预测块进行下采样滤波处理,得到所述当前块的第二颜色分量的第二预测块。
  34. 根据权利要求33所述的方法,其中,所述根据所述水平下采样因子和所述垂直下采样因子对所述第一预测块进行下采样滤波处理,得到所述当前块的第二颜色分量的第二预测块,包括:
    若所述水平下采样因子大于1,或者所述垂直下采样因子大于1,则对所述第一预测块进行下采样滤波处理,得到所述第二预测块。
  35. 根据权利要求34所述的方法,其中,所述对所述第一预测块进行下采样滤波处理,包括下述至少一项:
    对所述第一预测块在水平方向进行下采样滤波处理;
    对所述第一预测块在垂直方向进行下采样滤波处理;
    对所述第一预测块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
    对所述第一预测块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
  36. 根据权利要求33所述的方法,其中,所述对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块,包括:
    根据所述水平下采样因子和所述垂直下采样因子,对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  37. 根据权利要求36所述的方法,其中,所述对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块,包括:
    对所述第一预测块在水平方向上每所述水平下采样因子数量的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  38. 根据权利要求36所述的方法,其中,所述对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块,包括:
    对所述第一预测块在垂直方向上每所述垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  39. 根据权利要求36所述的方法,其中,所述对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块,包括:
    对所述第一预测块在水平方向上每所述水平下采样因子数量的第二颜色分量的预测值进行加权和计算,以及在垂直方向上每所述垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  40. 根据权利要求1所述的方法,其中,所述方法还包括:
    根据所述第一预测块中部分采样点的第一颜色分量的参考样值,确定加权系数;
    根据所述加权系数和所述第一预测块中部分采样点的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第二预测块。
  41. 根据权利要求40所述的方法,其中,所述根据所述加权系数和所述第一预测块中部分采样点的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第二预测块,包括:
    根据所述加权系数对所述第一预测块中第(i,j)位置处采样点的第二颜色分量的参考样值进行加权计算,得到所述当前块中第(x,y)位置处采样点的第二颜色分量的预测值;
    其中,i、j、x、y均为大于或等于零的整数。
  42. 根据权利要求41所述的方法,其中,所述方法还包括:
    若颜色格式信息指示为4:4:4采样,则将x设置为等于i,将y设置为等于j;
    若颜色格式信息指示为4:2:2采样,则将x设置为等于i与2的乘积,将y设置为等于j;
    若颜色格式信息指示为4:1:1采样,则将x设置为等于i与4的乘积,将y设置为等于j;
    若颜色格式信息指示为4:2:0采样,则将x设置为等于i与2的乘积,将y设置为等于j与2的乘积。
  43. 根据权利要求41所述的方法,其中,所述方法还包括:
    确定水平取样位置因子和垂直取样位置因子;
    将x设置为等于i与所述水平取样位置因子的乘积,将y设置为等于j与所述垂直取样位置因子的乘积。
  44. 根据权利要求1所述的方法,其中,所述方法还包括:
    在确定所述当前块的第二颜色分量的第二预测块之后,对所述第二预测块进行相关处理,将处理后的第二预测块作为所述第二预测块;
    其中,所述对所述第二预测块进行相关处理,包括下述至少一项:
    对所述第二预测块进行第三滤波处理;
    利用预设补偿值对所述第二预测块进行修正处理;
    利用至少一种预测模式下所述当前块的第二颜色分量的预测值对所述第二预测块进行加权融合处理。
  45. 根据权利要求1所述的方法,其中,所述根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的重建值,包括:
    确定所述当前块的所述第二颜色分量采样点的预测差值;
    根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的预测值;
    根据所述当前块的所述第二颜色分量采样点的预测差值和所述当前块的所述第二颜色分量采样点的预测值,确定所述当前块的所述第二颜色分量采样点的重建值。
  46. 一种编码方法,包括:
    确定当前块的第一颜色分量的参考样值;
    根据所述当前块的第一颜色分量的参考样值,确定加权系数;
    根据所述加权系数和所述当前块的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第一预测块;其中,所述第一预测块中包含的所述第二颜色分量的预测值的数量大于所述当前块中包含的第二颜色分量采样点的数量;
    对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块;
    根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的预测差值。
  47. 根据权利要求46所述的方法,其中,所述第二预测块中包含的所述第二颜色分量的预测值的数量与所述当前块中包含的第二颜色分量采样点的数量相同。
  48. 根据权利要求46所述的方法,其中,所述确定当前块的第一颜色分量的参考样值,包括:
    根据所述当前块的相邻区域中的第一颜色分量采样点的取值,确定所述当前块的第一颜色分量的参考样值;
    其中,所述相邻区域包括下述至少之一:上侧相邻区域、右上侧相邻区域、左侧相邻区域和左下侧相邻区域。
  49. 根据权利要求48所述的方法,其中,所述方法还包括:
    对所述相邻区域中的第一颜色分量采样点进行筛选处理,确定所述第一颜色分量采样点的取值。
  50. 根据权利要求49所述的方法,其中,所述对所述相邻区域中的第一颜色分量采样点进行筛选处理,确定所述第一颜色分量采样点的取值,包括:
    基于所述相邻区域中的第一颜色分量采样点的位置和/或颜色分量强度,确定待选择采样点位置;
    根据所述待选择采样点位置,从所述相邻区域中确定所述第一颜色分量采样点的取值。
  51. 根据权利要求48所述的方法,其中,所述确定所述当前块的第一颜色分量的参考样值,还包括:
    对所述第一颜色分量采样点的取值进行第二滤波处理,得到所述当前块的第一颜色分量的滤波相邻样值;
    根据所述当前块的第一颜色分量的滤波相邻样值,确定所述当前块的第一颜色分量的参考样值。
  52. 根据权利要求51所述的方法,其中,所述当前块的第一颜色分量的滤波相邻样值的数量大于所述第一颜色分量采样点的取值的数量。
  53. 根据权利要求48或51所述的方法,其中,所述确定当前块的第一颜色分量的参考样值,还包括:
    基于所述当前块中第一参考颜色分量采样点的重建值,确定所述当前块的第一颜色分量的参考样值。
  54. 根据权利要求53所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量采样点的取值与所述第一参考颜色分量采样点的重建值之差的绝对值。
  55. 根据权利要求53所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量的滤波相邻样值与所述第一参考颜色分量采样点的重建值之差的绝对值。
  56. 根据权利要求48或51所述的方法,其中,所述确定所述当前块的第一颜色分量的参考样值,还包括:
    对所述当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到所述当前块中第一参考颜色分量采样点的滤波样值;
    根据所述当前块中第一参考颜色分量采样点的滤波样值,确定所述当前块的第一颜色分量的参考样值。
  57. 根据权利要求56所述的方法,其中,所述当前块中第一参考颜色分量采样点的滤波样值的数量大于所述当前块中第一参考颜色分量采样点的重建值的数量。
  58. 根据权利要求56所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量的滤波相邻样值与所述第一参考颜色分量采样点的滤波样值之差的绝对值。
  59. 根据权利要求56所述的方法,其中,所述当前块的第一颜色分量的参考样值设置为所述第一颜色分量采样点的取值与所述第一参考颜色分量采样点的滤波样值之差的绝对值。
  60. 根据权利要求46所述的方法,其中,所述根据所述当前块的第一颜色分量的参考样值,确定加权系数,包括:
    确定在预设映射关系下所述第一颜色分量的参考样值对应的取值;
    将所述加权系数设置为等于所述取值。
  61. 根据权利要求60所述的方法,其中,所述确定在预设映射关系下所述第一颜色分量的参考样值对应的取值,包括:
    确定第一因子;
    根据所述第一因子和所述第一颜色分量的参考样值,确定第一乘积值;
    确定所述第一乘积值在所述预设映射关系下对应的取值。
  62. 根据权利要求61所述的方法,其中,所述确定第一因子,包括:
    所述第一因子是预设常数值。
  63. 根据权利要求61所述的方法,其中,所述确定第一因子,包括:
    根据所述当前块的尺寸参数,确定所述第一因子的取值;
    其中,所述当前块的尺寸参数包括以下参数的至少之一:所述当前块的宽度,所述当前块的高度。
  64. 根据权利要求60所述的方法,其中,所述确定在预设映射关系下所述第一颜色分量的参考样值对应的取值,包括:
    所述预设映射关系是Softmax函数。
  65. 根据权利要求60所述的方法,其中,所述确定在预设映射关系下所述第一颜色分量的参考样值对应的取值,包括:
    所述预设映射关系是与所述第一颜色分量的参考样值具有反比关系的加权函数。
  66. 根据权利要求46所述的方法,其中,所述方法还包括:
    根据所述当前块的相邻区域中的第二颜色分量采样点的取值,确定所述当前块的第二颜色分量的参考样值。
  67. 根据权利要求66所述的方法,其中,所述方法还包括:
    对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到所述当前块的第二颜色分量的滤波相邻样值;
    根据所述当前块的第二颜色分量的滤波相邻样值,确定所述当前块的第二颜色分量的参考样值。
  68. 根据权利要求67所述的方法,其中,所述当前块的第二颜色分量的滤波相邻样值的数量大于所述当前块的相邻区域中的第二颜色分量采样点的取值的数量。
  69. 根据权利要求67所述的方法,其中,所述方法还包括:
    所述第四滤波处理是上采样滤波;其中,上采样率是2的正整数倍。
  70. 根据权利要求67所述的方法,其中,所述方法还包括:
    基于颜色格式信息,对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到所述当前块的第二颜色分量的滤波相邻样值。
  71. 根据权利要求70所述的方法,其中,所述第四滤波处理,还包括:
    若所述颜色格式信息指示为4:2:0采样,则对所述当前块的相邻区域中的第二颜色分量采样点的取值进行上采样滤波;其中,上采样率是2的正整数倍。
  72. 根据权利要求67所述的方法,其中,所述方法还包括:
    利用第一水平上采样因子和第一垂直上采样因子对所述当前块的相邻区域中的第一颜色分量采样点的取值进行第二滤波处理,得到所述当前块的第一颜色分量的滤波相邻样值;以及利用第二水平上采样因子和第二垂直上采样因子对所述当前块的相邻区域中的第二颜色分量采样点的取值进行第四滤波处理,得到所述当前块的第二颜色分量的滤波相邻样值;
    相应地,所述方法还包括:
    若所述颜色格式信息指示为4:4:4采样,则确定所述第二水平上采样因子等于所述第一水平上采样因子,所述第二垂直上采样因子等于所述第一垂直上采样因子;
    若所述颜色格式信息指示为4:2:2采样,则确定所述第二水平上采样因子等于2倍的所述第一水平上采样因子,所述第二垂直上采样因子等于所述第一垂直上采样因子;
    若所述颜色格式信息指示为4:1:1采样,则确定所述第二水平上采样因子等于4倍的所述第一水平上采样因子,所述第二垂直上采样因子等于所述第一垂直上采样因子;
    若所述颜色格式信息指示为4:2:0采样,则确定所述第二水平上采样因子等于2倍的所述第一水平上采样因子,所述第二垂直上采样因子等于2倍的所述第一垂直上采样因子。
  73. 根据权利要求53所述的方法,其中,所述方法还包括:
    若颜色格式信息指示为4:4:4采样,则确定所述第一预测块的宽度等于所述当前块的宽度,所述第一预测块的高度等于所述当前块的高度;
    若颜色格式信息指示为4:2:2采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度,所述第一预测块的高度等于所述当前块的高度;
    若颜色格式信息指示为4:1:1采样,则确定所述第一预测块的宽度等于4倍的所述当前块的宽度,所述第一预测块的高度等于所述当前块的高度;
    若颜色格式信息指示为4:2:0采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度,所述第一预测块的高度等于2倍的所述当前块的高度。
  74. 根据权利要求56所述的方法,其中,所述方法还包括:
    利用第三水平上采样因子和第三垂直上采样因子对所述当前块中第一参考颜色分量采样点的重建值进行第三滤波处理,得到所述当前块中第一参考颜色分量采样点的滤波样值;
    相应地,所述方法还包括:
    若颜色格式信息指示为4:4:4采样,则确定所述第一预测块的宽度等于所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于所述当前块的高度与所述第三垂直上采样因子的乘积;
    若颜色格式信息指示为4:2:2采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于所述当前块的高度与所述第三垂直上采样因子的乘积;
    若颜色格式信息指示为4:1:1采样,则确定所述第一预测块的宽度等于4倍的所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于所述当前块的高度与所述第三垂直上采样因子的乘积;
    若颜色格式信息指示为4:2:0采样,则确定所述第一预测块的宽度等于2倍的所述当前块的宽度与所述第三水平上采样因子的乘积,所述第一预测块的高度等于2倍的所述当前块的高度与所述第三垂直上采样因子的乘积。
  75. 根据权利要求46所述的方法,其中,所述根据所述加权系数和所述当前块的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第一预测块,包括:
    确定所述第二颜色分量的参考样值与对应的所述加权系数的加权值;
    将所述第一预测块中所述第二颜色分量采样点的预测值设置为等于N个所述加权值之和;其中,N表示所述第二颜色分量的参考样值的数量,N是正整数。
  76. 根据权利要求46所述的方法,其中,所述第一滤波处理为下采样滤波处理。
  77. 根据权利要求76所述的方法,其中,所述对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块,包括:
    利用预设滤波器对所述第一预测块进行下采样滤波处理,确定所述当前块的第二颜色分量的第二预测块。
  78. 根据权利要求76所述的方法,其中,所述对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块,包括:
    确定水平下采样因子和垂直下采样因子;
    根据所述水平下采样因子和所述垂直下采样因子对所述第一预测块进行下采样滤波处理,得到所述当前块的第二颜色分量的第二预测块。
  79. 根据权利要求78所述的方法,其中,所述根据所述水平下采样因子和所述垂直下采样因子对所述第一预测块进行下采样滤波处理,得到所述当前块的第二颜色分量的第二预测块,包括:
    若所述水平下采样因子大于1,或者所述垂直下采样因子大于1,则对所述第一预测块进行下采样滤波处理,得到所述第二预测块。
  80. 根据权利要求79所述的方法,其中,所述对所述第一预测块进行下采样滤波处理,包括下述至少一项:
    对所述第一预测块在水平方向进行下采样滤波处理;
    对所述第一预测块在垂直方向进行下采样滤波处理;
    对所述第一预测块在水平方向进行下采样滤波处理后再在垂直方向进行下采样滤波处理;
    对所述第一预测块在垂直方向进行下采样滤波处理后再在水平方向进行下采样滤波处理。
  81. 根据权利要求78所述的方法,其中,所述对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块,包括:
    根据所述水平下采样因子和所述垂直下采样因子,对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  82. 根据权利要求81所述的方法,其中,所述对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块,包括:
    对所述第一预测块在水平方向上每所述水平下采样因子数量的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  83. 根据权利要求81所述的方法,其中,所述对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块,包括:
    对所述第一预测块在垂直方向上每所述垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  84. 根据权利要求81所述的方法,其中,所述对所述第一预测块在水平方向和/或垂直方向上每预设个数的第二颜色分量的预测值进行加权和计算,得到所述第二预测块,包括:
    对所述第一预测块在水平方向上每所述水平下采样因子数量的第二颜色分量的预测值进行加权和计算,以及在垂直方向上每所述垂直下采样因子数量的第二颜色分量的预测值进行加权和计算,得到所述第二预测块。
  85. 根据权利要求46所述的方法,其中,所述方法还包括:
    根据所述第一预测块中部分采样点的第一颜色分量的参考样值,确定加权系数;
    根据所述加权系数和所述第一预测块中部分采样点的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第二预测块。
  86. 根据权利要求85所述的方法,其中,所述根据所述加权系数和所述第一预测块中部分采样点的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第二预测块,包括:
    根据所述加权系数对所述第一预测块中第(i,j)位置处采样点的第二颜色分量的参考样值进行加权计算,得到所述当前块中第(x,y)位置处采样点的第二颜色分量的预测值;
    其中,i、j、x、y均为大于或等于零的整数。
  87. 根据权利要求86所述的方法,其中,所述方法还包括:
    若颜色格式信息指示为4:4:4采样,则将x设置为等于i,将y设置为等于j;
    若颜色格式信息指示为4:2:2采样,则将x设置为等于i与2的乘积,将y设置为等于j;
    若颜色格式信息指示为4:1:1采样,则将x设置为等于i与4的乘积,将y设置为等于j;
    若颜色格式信息指示为4:2:0采样,则将x设置为等于i与2的乘积,将y设置为等于j与2的乘积。
  88. 根据权利要求86所述的方法,其中,所述方法还包括:
    确定水平取样位置因子和垂直取样位置因子;
    将x设置为等于i与所述水平取样位置因子的乘积,将y设置为等于j与所述垂直取样位置因子的乘积。
  89. 根据权利要求46所述的方法,其中,所述方法还包括:
    在确定所述当前块的第二颜色分量的第二预测块之后,对所述第二预测块进行相关处理,将处理后的第二预测块作为所述第二预测块;
    其中,所述对所述第二预测块进行相关处理,包括下述至少一项:
    对所述第二预测块进行第三滤波处理;
    利用预设补偿值对所述第二预测块进行修正处理;
    利用至少一种预测模式下所述当前块的第二颜色分量的预测值对所述第二预测块进行加权融合处理。
  90. 根据权利要求46所述的方法,其中,所述根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的预测差值,包括:
    根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的预测值;
    根据所述当前块的所述第二颜色分量采样点的原始值和所述当前块的所述第二颜色分量采样点的预测值,确定所述当前块的所述第二颜色分量采样点的预测差值。
  91. 根据权利要求90所述的方法,其中,所述方法还包括:
    对所述当前块的所述第二颜色分量采样点的预测差值进行编码,将所得到的编码比特写入码流。
  92. 一种编码装置,包括第一确定单元、第一预测单元和第一滤波单元;其中,
    所述第一确定单元,配置为确定当前块的第一颜色分量的参考样值;以及根据所述当前块的第一颜色分量的参考样值,确定加权系数;
    所述第一预测单元,配置为根据所述加权系数和所述当前块的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第一预测块;其中,所述第一预测块中包含的所述第二颜色分量的预测值的数量大于所述当前块中包含的第二颜色分量采样点的数量;
    所述第一滤波单元,配置为对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块;
    所述第一确定单元,还配置为根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的预测差值。
  93. 一种编码设备,包括第一存储器和第一处理器;其中,
    所述第一存储器,用于存储能够在所述第一处理器上运行的计算机程序;
    所述第一处理器,用于在运行所述计算机程序时,执行如权利要求46至91任一项所述的方法。
  94. 一种解码装置,包括第二确定单元、第二预测单元和第二滤波单元;其中,
    所述第二确定单元,配置为确定当前块的第一颜色分量的参考样值;以及根据所述当前块的第一颜色分量的参考样值,确定加权系数;
    所述第二预测单元,配置为根据所述加权系数和所述当前块的第二颜色分量的参考样值,确定所述当前块的第二颜色分量的第一预测块;其中,所述第一预测块中包含的所述第二颜色分量的预测值的数量大于所述当前块中包含的第二颜色分量采样点的数量;
    所述第二滤波单元,配置为对所述第一预测块进行第一滤波处理,确定所述当前块的第二颜色分量的第二预测块;
    所述第二确定单元,还配置为根据所述第二预测块,确定所述当前块的所述第二颜色分量采样点的重建值。
  95. 一种解码设备,包括第二存储器和第二处理器;其中,
    所述第二存储器,用于存储能够在所述第二处理器上运行的计算机程序;
    所述第二处理器,用于在运行所述计算机程序时,执行如权利要求1至45任一项所述的方法。
  96. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有计算机程序,所述计算机程序被执行时实现如权利要求1至45任一项所述的方法、或者实现如权利要求46至91任一项所述的方法。
PCT/CN2022/086471 2022-04-12 2022-04-12 编解码方法、装置、编码设备、解码设备以及存储介质 WO2023197194A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/086471 WO2023197194A1 (zh) 2022-04-12 2022-04-12 编解码方法、装置、编码设备、解码设备以及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/086471 WO2023197194A1 (zh) 2022-04-12 2022-04-12 编解码方法、装置、编码设备、解码设备以及存储介质

Publications (1)

Publication Number Publication Date
WO2023197194A1 true WO2023197194A1 (zh) 2023-10-19

Family

ID=88328746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/086471 WO2023197194A1 (zh) 2022-04-12 2022-04-12 编解码方法、装置、编码设备、解码设备以及存储介质

Country Status (1)

Country Link
WO (1) WO2023197194A1 (zh)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020029187A1 (zh) * 2018-08-09 2020-02-13 Oppo广东移动通信有限公司 视频图像分量的预测方法、装置及计算机存储介质
CN113412617A (zh) * 2018-12-21 2021-09-17 三星电子株式会社 编码方法及其装置和解码方法及其装置
CN113891082A (zh) * 2019-12-19 2022-01-04 Oppo广东移动通信有限公司 图像分量预测方法、编码器、解码器以及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J. LAINEMA, A. AMINLOU, P. ASTOLA, R. G. YOUVALARI (NOKIA): "AHG12: Slope adjustment for CCLM", 25. JVET MEETING; 20220112 - 20220121; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 5 January 2022 (2022-01-05), XP030300252 *

Similar Documents

Publication Publication Date Title
CN109842799B (zh) 颜色分量的帧内预测方法、装置及计算机设备
US11843781B2 (en) Encoding method, decoding method, and decoder
EP4094442A1 (en) Learned downsampling based cnn filter for image and video coding using learned downsampling feature
US20230262212A1 (en) Picture prediction method, encoder, decoder, and computer storage medium
TW201937927A (zh) 使用雙重分類用於線性分量樣本預測的方法以及裝置
CN114128269A (zh) 内预测模式的编码
WO2023197191A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
EP3843399A1 (en) Video image component prediction method and apparatus, and computer storage medium
WO2023197194A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
WO2023056364A1 (en) Method, device, and medium for video processing
WO2023197192A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
CN113766233B (zh) 图像预测方法、编码器、解码器以及存储介质
WO2024007165A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
WO2023141781A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
WO2023197189A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
WO2023051654A1 (en) Method, apparatus, and medium for video processing
WO2023197195A1 (zh) 视频编解码方法、编码器、解码器及存储介质
WO2024098263A1 (zh) 编解码方法、码流、编码器、解码器以及存储介质
WO2023197193A1 (zh) 编解码方法、装置、编码设备、解码设备以及存储介质
WO2024078598A1 (en) Method, apparatus, and medium for video processing
WO2024077569A1 (zh) 编解码方法、码流、编码器、解码器以及存储介质
WO2023241634A1 (en) Method, apparatus, and medium for video processing
WO2023051653A1 (en) Method, apparatus, and medium for video processing
WO2024007120A1 (zh) 编解码方法、编码器、解码器以及存储介质
WO2023274392A1 (en) Utilizing Coded Information During Super Resolution Process

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22936848

Country of ref document: EP

Kind code of ref document: A1