WO2020192085A1 - Image prediction method, encoder, decoder, and storage medium - Google Patents

Image prediction method, encoder, decoder, and storage medium

Info

Publication number
WO2020192085A1
Authority
WO
WIPO (PCT)
Prior art keywords: image component, predicted, current block, prediction, value
Prior art date
Application number
PCT/CN2019/110834
Other languages
English (en)
French (fr)
Inventor
万帅
霍俊彦
马彦卓
张伟
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to CN201980083504.5A (publication CN113196765A)
Priority to KR1020217032518A (publication KR20210139328A)
Priority to CN202111096320.8A (publication CN113784128B)
Priority to CN202310354072.5A (publication CN116320472A)
Priority to EP19922170.6A (publication EP3944621A4)
Priority to JP2021557118A (publication JP7480170B2)
Publication of WO2020192085A1
Priority to US17/483,507 (publication US20220014772A1)
Priority to JP2024069934A (publication JP2024095842A)

Classifications

    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, with the following subgroups:
    • H04N 19/186: adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/105: selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/117: filters, e.g. for pre-processing or post-processing
    • H04N 19/124: quantisation
    • H04N 19/176: adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/42: characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/50: using predictive coding
    • H04N 19/593: predictive coding involving spatial prediction techniques
    • H04N 19/80: details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82: details of filtering operations involving filtering within a prediction loop

Definitions

  • the embodiments of the present application relate to the technical field of video coding and decoding, and in particular, to an image prediction method, an encoder, a decoder, and a storage medium.
  • In the latest video coding standard, H.266/Versatile Video Coding (VVC), cross-component prediction is allowed; among these techniques, Cross-Component Linear Model Prediction (CCLM) is one of the typical cross-component prediction techniques.
  • Using cross-component prediction, one component can be used to predict another component (or its residual), such as predicting a chrominance component from the luminance component, predicting the luminance component from a chrominance component, or predicting one chrominance component from another chrominance component, and so on.
  • The embodiments of the present application provide an image prediction method, an encoder, a decoder, and a storage medium, which balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of video images.
  • In a first aspect, an embodiment of the present application provides an image prediction method applied to an encoder or a decoder, and the method includes: obtaining, through a prediction model, an initial prediction value of an image component to be predicted of a current block in an image; and performing filtering processing on the initial prediction value to obtain a target prediction value of the image component to be predicted of the current block.
  • In another aspect, an embodiment of the present application provides an encoder, which includes a first prediction unit and a first processing unit, wherein:
  • the first prediction unit is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through the prediction model;
  • the first processing unit is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
  • an embodiment of the present application provides an encoder.
  • the encoder includes a first memory and a first processor, where:
  • the first memory is configured to store a computer program that can run on the first processor
  • the first processor is configured to execute the method described in the first aspect when running the computer program.
  • an embodiment of the present application provides a decoder, which includes a second prediction unit and a second processing unit, wherein:
  • the second prediction unit is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through the prediction model;
  • the second processing unit is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
  • an embodiment of the present application provides a decoder, which includes a second memory and a second processor, wherein:
  • the second memory is used to store a computer program that can run on the second processor
  • the second processor is configured to execute the method described in the first aspect when running the computer program.
  • In yet another aspect, an embodiment of the present application provides a computer storage medium that stores an image prediction program, and when the image prediction program is executed by a first processor or a second processor, the method described in the first aspect is implemented.
  • the embodiments of the present application provide an image prediction method, an encoder, a decoder, and a storage medium.
  • In these embodiments, the initial prediction value of the image component to be predicted of the current block in the image is first obtained through a prediction model; the initial prediction value is then filtered to obtain the target predicted value of the image component to be predicted of the current block. In this way, after at least one image component of the current block is predicted, filtering processing is further performed on the at least one image component, which balances the statistical characteristics of the image components after cross-component prediction; this not only improves the prediction efficiency, but also, because the obtained target predicted value is closer to the true value, makes the prediction residual of the image component smaller, so that the bit rate transmitted during the encoding and decoding process is lower and the encoding and decoding efficiency of video images is also improved.
  • FIG. 1 is a schematic diagram of the composition structure of a traditional cross-component prediction architecture provided by related technical solutions
  • FIG. 2 is a schematic diagram of the composition of a video encoding system provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of the composition of a video decoding system provided by an embodiment of the application.
  • FIG. 4 is a schematic flowchart of an image prediction method provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of the composition structure of an improved cross-component prediction architecture provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the composition structure of an encoder provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of a specific hardware structure of an encoder provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of the composition structure of a decoder provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of a specific hardware structure of a decoder provided by an embodiment of the application.
  • the first image component, the second image component, and the third image component are generally used to characterize the coding block; among them, the three image components are a luminance component, a blue chrominance component, and a red chrominance component.
  • the luminance component is usually represented by the symbol Y
  • the blue chrominance component is usually represented by the symbol Cb or U
  • the red chrominance component is usually represented by the symbol Cr or V; in this way, the video image can be represented in YCbCr format or in YUV format.
  • the first image component may be a luminance component
  • the second image component may be a blue chrominance component
  • the third image component may be a red chrominance component
  • H.266/VVC proposed the CCLM cross-component prediction technology.
  • The CCLM-based cross-component prediction technology can realize not only prediction from the luminance component to a chrominance component, that is, prediction from the first image component to the second image component or from the first image component to the third image component, but also prediction from a chrominance component to the luminance component, that is, prediction from the second image component to the first image component or from the third image component to the first image component, and even prediction between chrominance components, that is, prediction from the second image component to the third image component or from the third image component to the second image component, etc.
  • the following will take the prediction from the first image component to the second image component as an example for description, but the technical solutions of the embodiments of the present application can also be applied to the prediction of other image components.
  • FIG. 1 shows a schematic diagram of the composition structure of a traditional cross-component prediction architecture provided by related technical solutions.
  • Taking the use of the first image component (for example, represented by the Y component) to predict the second image component (for example, represented by the U component) as an example, and assuming that the video image adopts the YUV 4:2:0 format, the Y component and the U component have different resolutions; the same approach also applies when the Y component is used to predict the third image component (for example, represented by the V component).
  • the traditional cross-component prediction architecture 10 may include a Y component coding block 110, a resolution adjustment unit 120, a Y1 component coding block 130, a U component coding block 140, a prediction model 150, and a cross-component prediction unit 160.
  • Among them, the Y component of the video image is represented by a Y component coding block 110 of size 2N×2N; the bold box highlights the Y component coding block 110, and the surrounding gray solid circles indicate the neighboring reference values Y(n) of the Y component coding block 110. The U component of the video image is represented by a U component coding block 140 of size N×N; the bold box highlights the U component coding block 140, and the surrounding gray solid circles indicate the neighboring reference values C(n) of the U component coding block 140.
  • Since the Y component and the U component have different resolutions, the resolution adjustment unit 120 needs to adjust the resolution of the Y component to obtain a Y1 component coding block 130 of size N×N; for the Y1 component coding block 130, the surrounding gray solid circles indicate its neighboring reference values Y1(n).
  • A prediction model 150 can then be constructed from the neighboring reference values Y1(n) of the Y1 component coding block 130 and the neighboring reference values C(n) of the U component coding block 140; according to the reconstructed Y component pixel values of the Y1 component coding block 130 and the prediction model 150, the cross-component prediction unit 160 can perform cross-component prediction and output the U component prediction value.
  • However, in this architecture, image component prediction is not considered comprehensively; for example, the difference in statistical characteristics between the image components is not taken into account, which makes the prediction efficiency low.
  • The embodiment of the present application provides an image prediction method. First, the initial prediction value of the image component to be predicted of the current block in the image is obtained through the prediction model; then the initial prediction value is filtered to obtain the target predicted value of the image component to be predicted of the current block. In this way, after at least one image component of the current block is predicted, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of video images.
  • the video encoding system 20 includes a transform and quantization unit 201, an intra-frame estimation unit 202, an intra-frame prediction unit 203, a motion compensation unit 204, a motion estimation unit 205, an encoding unit 209, and a decoded image buffer unit 210, among other units.
  • the encoding unit 209 can implement header information encoding and Context-based Adaptive Binary Arithmetic Coding (CABAC).
  • A coding block can be obtained by dividing a coding tree unit (Coding Tree Unit, CTU); the residual pixel information obtained after intra-frame or inter-frame prediction is then transformed by the transform and quantization unit 201, including transforming the residual information from the pixel domain to the transform domain, and the resulting transform coefficients are quantized to further reduce the bit rate;
  • the intra-frame estimation unit 202 and the intra-frame prediction unit 203 are used to perform intra prediction on the coding block; specifically, the intra-frame estimation unit 202 and the intra-frame prediction unit 203 are used to determine the intra prediction mode to be used to encode the coding block;
  • the motion compensation unit 204 and the motion estimation unit 205 are used to perform inter-frame prediction coding of the received coding block with respect to one or more blocks in one or more reference frames to provide temporal prediction information; the motion estimation performed by the motion estimation unit 205 is a process of generating a motion vector, which can estimate the motion of the coding block, and motion compensation is then performed by the motion compensation unit 204 based on the motion vector;
  • the context content can be based on adjacent coding blocks and can be used to encode information indicating the determined intra-frame prediction mode, so as to output the code stream of the video signal; and the decoded image buffer unit 210 is used to store reconstructed video blocks for prediction reference. As the encoding of the video image progresses, new reconstructed video blocks are continuously generated, and these reconstructed video blocks are all stored in the decoded image buffer unit 210.
  • the video decoding system 30 includes a decoding unit 301, an inverse transform and inverse quantization unit 302, an intra prediction unit 303, a motion compensation unit 304, and a filtering unit 305, among other units.
  • the code stream of the video signal is output; the code stream is input into the video decoding system 30, and first passes through the decoding unit 301 to obtain the decoded transform coefficient;
  • the inverse transform and inverse quantization unit 302 performs processing to generate a residual block in the pixel domain;
  • the intra prediction unit 303 can be used to generate the prediction data of the current video block to be decoded based on the determined intra prediction mode and data from previously decoded blocks of the current frame or picture;
  • the motion compensation unit 304 determines the prediction information for the video block to be decoded by parsing the motion vector and other associated syntax elements, and uses the prediction information to generate the predictive block of the video block being decoded; the residual block from the inverse transform and inverse quantization unit 302 and the corresponding predictive block generated by the intra prediction unit 303 or the motion compensation unit 304 are summed to form a decoded video block;
  • the decoded video block is passed through the filtering unit 305 in order to remove blocking artifacts, thereby improving the video quality.
  • The embodiment of this application is mainly applied to the intra prediction unit 203 shown in FIG. 2 and the intra prediction unit 303 shown in FIG. 3; that is, the embodiment of this application can be applied to a video encoding system, to a video decoding system, or even to both at the same time, which is not specifically limited in the embodiment of the present application.
  • FIG. 4 shows a schematic flowchart of an image prediction method provided by an embodiment of the present application.
  • the method may include:
  • S401: Obtain the initial prediction value of the image component to be predicted of the current block in the image through a prediction model;
  • S402: Perform filtering processing on the initial predicted value to obtain a target predicted value of the image component to be predicted of the current block.
  • each image block currently to be encoded can be called an encoding block.
  • each coding block may include a first image component, a second image component, and a third image component, and the current block is the coding block whose first image component, second image component, or third image component is currently to be predicted in the video image.
  • the image prediction method in the embodiments of the present application can be applied to a video encoding system, to a video decoding system, or even to both a video encoding system and a video decoding system at the same time, which is not specifically limited in the embodiments of the present application.
  • In the embodiment of the present application, the initial prediction value of the image component to be predicted of the current block in the image is first obtained through the prediction model; the initial prediction value is then filtered to obtain the target prediction value of the image component to be predicted of the current block. In this way, after at least one image component is predicted, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of video images.
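  • A minimal sketch of the flow of S401 and S402 is given below, assuming a linear prediction model and leaving the concrete filtering step as a pluggable function; the function names are illustrative only and not taken from the application:

```python
import numpy as np
from typing import Callable

def image_component_prediction(rec_ref: np.ndarray, alpha: float, beta: float,
                               filter_fn: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    # S401: obtain the initial prediction value of the image component to be predicted
    # of the current block through the prediction model (a linear model is assumed here)
    initial_pred = alpha * rec_ref + beta
    # S402: perform filtering processing on the initial prediction value to obtain
    # the target prediction value of the image component to be predicted
    target_pred = filter_fn(initial_pred)
    return target_pred

# usage sketch: an identity "filter" simply returns the initial prediction unchanged
# target = image_component_prediction(rec_luma_downsampled, 0.5, 10.0, lambda v: v)
```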
  • the method may further include:
  • performing characteristic statistics on at least one image component of the current block, wherein the at least one image component includes an image component to be predicted and/or an image component to be referenced, and the image component to be predicted is different from the image component to be referenced;
  • according to the result of the characteristic statistics, obtaining the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block; wherein the image component to be predicted is the component that is predicted when the prediction model is constructed, and the image component to be referenced is the component used for prediction when the prediction model is constructed.
  • At least one image component of the current block may be an image component to be predicted, an image component to be referenced, or even an image component to be predicted and an image component to be referenced.
  • Assuming that prediction from the first image component to the second image component is realized by the prediction model, the image component to be predicted is the second image component and the image component to be referenced is the first image component; or, assuming that prediction from the first image component to the third image component is realized by the prediction model, the image component to be predicted is the third image component and the image component to be referenced is still the first image component.
  • the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced in the current block can be obtained.
  • the initial prediction value of the image component to be predicted in the current block may be filtered according to the reference value of the image component to be predicted in the current block and/or the reference value of the image component to be referenced in the current block.
  • The performing of filtering processing on the initial predicted value according to the reference value of the at least one image component may include: according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, filtering the initial predicted value using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing and dequantization processing; and obtaining the target predicted value according to the result of the processing.
  • That is, the preset processing mode is used to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block.
  • Specifically, filtering processing may be used to filter the initial predicted value, or grouping processing may be used, or value correction processing may be used, or quantization processing may be used, or inverse quantization processing (also called dequantization processing) may be used, etc., which is not specifically limited in the embodiment of the present application.
  • Take the case where the luminance component is used to predict the chrominance component as an example.
  • Assume that the preset processing mode adopts value correction processing: since the luminance component and the chrominance component have different statistical characteristics, a deviation factor can be obtained based on the difference in the statistical characteristics of the two image components; the deviation factor is then used to correct the initial predicted value (for example, by adding the deviation factor to the initial predicted value), so as to balance the statistical characteristics of the image components after cross-component prediction and obtain a target predicted value of the chrominance component that is closer to its true value.
  • If the preset processing mode adopts filtering processing: according to the difference in the statistical characteristics of the two image components, the initial predicted value can be filtered to balance the statistical characteristics of the image components after cross-component prediction, so as to obtain a target predicted value of the chrominance component that is closer to its true value.
  • If the preset processing mode adopts grouping processing: according to the difference in the statistical characteristics of the two image components, the initial predicted values can be grouped to balance the statistical characteristics of the image components after cross-component prediction, so as to obtain a target predicted value of the chrominance component that is closer to its true value.
  • In addition, the process of determining the initial prediction value involves quantization and dequantization of the luminance component and the chrominance component, and the difference in the statistical characteristics of the two image components may lead to differences between their quantization and dequantization processing. If the preset processing mode adopts quantization processing, the initial predicted value can be quantized to balance the statistical characteristics of the image components after cross-component prediction; if the preset processing mode adopts dequantization processing, the initial predicted value can be dequantized for the same purpose. In both cases, the obtained target predicted value of the chrominance component is closer to the true value of the chrominance component, thereby improving the accuracy of the predicted value and the prediction efficiency.
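  • As an illustration of the value correction processing described above, the sketch below estimates a deviation factor as the mean offset between the true chrominance reference samples of the neighbors and their model-predicted values; this particular estimator, and the function name, are assumptions made here for illustration rather than a formulation prescribed by the application:

```python
import numpy as np

def value_correct(initial_pred: np.ndarray,
                  neighbor_chroma_ref: np.ndarray,
                  neighbor_chroma_pred: np.ndarray) -> np.ndarray:
    """Value-correction processing of the initial chroma prediction (illustrative only)."""
    # Deviation factor: mean offset between the true chroma reference samples of the
    # neighbours and their model-predicted values (one possible way to obtain it).
    deviation = float(np.mean(neighbor_chroma_ref) - np.mean(neighbor_chroma_pred))
    # Add the deviation factor to the initial predicted value, as described above.
    return initial_pred + deviation
```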
  • the method may further include:
  • calculating an initial prediction residual of the image component to be predicted of the current block using the initial prediction value, and filtering the initial prediction residual using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing and dequantization processing;
  • according to the result of the processing, obtaining a target prediction residual.
  • the obtaining the target prediction value of the image component to be predicted of the current block may include:
  • according to the target prediction residual, calculating the target prediction value of the image component to be predicted of the current block.
  • The prediction residual is obtained from the difference between the predicted value of the image component and the true value of the image component; in order to improve the coding and decoding efficiency of the video image, it is necessary to ensure that the prediction residual transmitted for the current block is as small as possible.
  • On the one hand, the initial prediction value can be filtered according to the preset processing mode to obtain the target predicted value of the image component to be predicted; because the target predicted value of the image component to be predicted is as close as possible to the true value of the image component to be predicted, the prediction residual between the two is as small as possible.
  • On the other hand, after the prediction model obtains the initial prediction value of the image component to be predicted, the initial prediction residual of the image component to be predicted can also be determined based on the difference between the initial prediction value and the true value of the image component to be predicted; the initial prediction residual is then filtered according to the preset processing mode to obtain the target prediction residual of the image component to be predicted, and the target prediction value of the image component to be predicted can be obtained according to the target prediction residual. Since the target prediction residual is as small as possible, the target prediction value of the image component to be predicted is as close as possible to the true value of the image component to be predicted.
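  • A residual-domain sketch of this variant is given below; how the target prediction value is recovered from the target prediction residual is described only abstractly above, so the last step is one possible reading rather than the application's definitive formulation, and the function name is hypothetical:

```python
import numpy as np
from typing import Callable, Tuple

def residual_path(true_value: np.ndarray, initial_pred: np.ndarray,
                  process: Callable[[np.ndarray], np.ndarray]) -> Tuple[np.ndarray, np.ndarray]:
    # initial prediction residual of the image component to be predicted
    initial_residual = true_value - initial_pred
    # filter the initial prediction residual with the chosen preset processing mode
    target_residual = process(initial_residual)
    # one possible way of obtaining the target prediction value from the target residual
    target_pred = true_value - target_residual
    return target_residual, target_pred
```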
  • In other words, the embodiment of the present application can be applied not only to filtering the initial prediction value of the image component to be predicted of the current block, but also to filtering the initial prediction residual of the image component to be predicted of the current block. After filtering, the statistical characteristics of the image components after cross-component prediction can be balanced, which not only improves the prediction efficiency; moreover, because the obtained target prediction value is closer to the true value, the prediction residual of the image component to be predicted is smaller, so that the bit rate transmitted is lower and the encoding and decoding efficiency of the video image is also improved.
  • the method may further include:
  • determining the reference value of the image component to be predicted of the current block, wherein the reference value of the image component to be predicted of the current block is the value of the image component to be predicted of adjacent pixels of the current block; determining the reference value of the image component to be referenced of the current block, wherein the image component to be referenced is different from the image component to be predicted, and the reference value of the image component to be referenced of the current block is the value of the image component to be referenced of adjacent pixels of the current block; calculating the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block;
  • the prediction model is constructed according to the calculated model parameters, wherein the prediction model is used to perform cross-component prediction processing on the to-be-predicted image component of the current block according to the to-be-referenced image component of the current block.
  • the prediction model in the embodiment of the present application may be a linear model, such as the CCLM cross-component prediction technology; the prediction model may also be a non-linear model, such as the Multiple Model CCLM (MMLM) cross-component prediction technology, which is composed of multiple linear models.
  • the embodiment of the present application will take the prediction model as a linear model as an example for the following description, but the image prediction method of the embodiment of the present application can also be applied to a nonlinear model.
  • the model parameters include a first model parameter (denoted by α) and a second model parameter (denoted by β).
  • There are many ways to calculate α and β: for example, a preset factor calculation model constructed by the least squares method, a preset factor calculation model constructed by the maximum and minimum values, or even other ways.
  • the preset factor calculation model is not specifically limited in the embodiment of this application.
  • Taking the preset factor calculation model constructed by the least squares method as an example, the model parameters are calculated according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block. Here, the reference value of the image component to be referenced of the current block can be the value of the image component to be referenced of neighboring pixels of the current block (such as the adjacent reference value of the first image component), and the reference value of the image component to be predicted of the current block can be the value of the image component to be predicted of neighboring pixels of the current block (such as the adjacent reference value of the second image component). The model parameters of the prediction model are derived by minimizing the regression error between the adjacent reference pixel values of the first image component and the adjacent reference pixel values of the second image component, as shown in formula (1):
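  • (The formula body is not present in this text; the standard CCLM least-squares expression consistent with the definitions below, where N is the number of adjacent reference sample pairs, is:)

$$\alpha=\frac{N\sum_{n}L(n)\,C(n)-\sum_{n}L(n)\sum_{n}C(n)}{N\sum_{n}L(n)\,L(n)-\left(\sum_{n}L(n)\right)^{2}},\qquad \beta=\frac{\sum_{n}C(n)-\alpha\sum_{n}L(n)}{N} \tag{1}$$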
  • L(n) represents the adjacent reference value of the first image component corresponding to the left side and the upper side of the current block after down-sampling
  • C(n) represents the adjacent reference value of the second image component corresponding to the left side and the upper side of the current block;
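  • (Prediction of the second image component with these parameters then follows formula (2); its body is likewise not present in this text, and the standard CCLM linear form consistent with the definitions below is:)

$$\mathrm{Pred}_{C}[i,j]=\alpha\cdot \mathrm{Rec}_{L}[i,j]+\beta \tag{2}$$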
  • i, j represent the position coordinates of the pixel in the current block
  • i represents the horizontal direction
  • j represents the vertical direction
  • Pred_C[i,j] represents the predicted value of the second image component corresponding to the pixel at position [i,j] in the current block;
  • Rec_L[i,j] represents the reconstructed value of the first image component (after down-sampling) corresponding to the pixel at position [i,j] in the same current block.
  • Taking the preset factor calculation model constructed by the maximum and minimum values as an example, a simplified derivation of the model parameters is provided: specifically, the largest and smallest adjacent reference values of the first image component are searched for, and, together with the adjacent reference values of the second image component at the corresponding positions, the model parameters of the prediction model are derived according to the principle that "two points determine one line", as shown in formula (3):
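  • (The formula body is not present in this text; the usual "two points determine one line" expression consistent with the definitions below is:)

$$\alpha=\frac{C_{\max}-C_{\min}}{L_{\max}-L_{\min}},\qquad \beta=C_{\min}-\alpha\cdot L_{\min} \tag{3}$$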
  • L_max and L_min represent the maximum and minimum values found among the adjacent reference values of the first image component corresponding to the left side and the upper side of the current block after down-sampling;
  • C_max and C_min represent the adjacent reference values of the second image component corresponding to the reference pixels at the positions of L_max and L_min, respectively.
  • In this way, the first model parameter α and the second model parameter β can also be obtained; based on α and β, assuming that the second image component is predicted from the first image component, the constructed prediction model is still as shown in the above formula (2).
  • After the model parameters are determined, image component prediction can be performed according to the prediction model. For example, according to the prediction model shown in formula (2), the first image component can be used to predict the second image component, such as using the luminance component to predict the chrominance component, thereby obtaining the initial predicted value of the chrominance component; subsequently, according to the reference value of the luminance component and/or the reference value of the chrominance component, the initial predicted value can be filtered using the preset processing mode to obtain the target predicted value of the chrominance component. The second image component can also be used to predict the first image component, for example, using the chrominance component to predict the luminance component, thereby obtaining the initial predicted value of the luminance component; then, according to the reference value of the luminance component and/or the reference value of the chrominance component, the initial predicted value is filtered using the preset processing mode to obtain the target predicted value of the luminance component. The second image component can even be used to predict the third image component, for example, using the blue chrominance component to predict the red chrominance component; then, according to the reference value of the blue chrominance component and/or the reference value of the red chrominance component, the initial predicted value can be filtered using the preset processing mode to obtain the target predicted value of the red chrominance component. In this way, the purpose of improving the prediction efficiency can be achieved.
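  • A compact sketch of the max/min parameter derivation of formula (3) followed by the linear prediction of formula (2) is given below; the 3x3 mean filter at the end is only an illustrative placeholder for the preset processing mode, not a filter prescribed by the application, and the degenerate-case handling when L_max equals L_min is a simplification:

```python
import numpy as np

def cclm_max_min(neigh_luma: np.ndarray, neigh_chroma: np.ndarray,
                 rec_luma: np.ndarray) -> np.ndarray:
    """Derive (alpha, beta) from the max/min neighbouring references (formula (3))
    and apply the linear model of formula (2) to the reconstructed, down-sampled luma."""
    i_max, i_min = int(np.argmax(neigh_luma)), int(np.argmin(neigh_luma))
    l_max, l_min = float(neigh_luma[i_max]), float(neigh_luma[i_min])
    c_max, c_min = float(neigh_chroma[i_max]), float(neigh_chroma[i_min])
    alpha = (c_max - c_min) / (l_max - l_min) if l_max != l_min else 0.0
    beta = c_min - alpha * l_min
    return alpha * rec_luma + beta  # initial prediction value (before filtering)

def mean_filter_3x3(block: np.ndarray) -> np.ndarray:
    """3x3 mean filter used here only as a stand-in for the preset processing mode."""
    padded = np.pad(block, 1, mode="edge")
    h, w = block.shape
    out = np.zeros((h, w), dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return out / 9.0
```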
  • It should also be noted that, in a video image, the resolutions of the image components are not necessarily the same. In order to facilitate prediction, the resolution of an image component may also need to be adjusted (including up-sampling or down-sampling the image component) so as to reach the target resolution of the image component to be predicted.
  • the method may further include:
  • when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjusting the resolution of the image component to be referenced, wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
  • based on the adjusted resolution of the image component to be referenced, updating the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, wherein the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
  • the method may further include:
  • when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjusting the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, wherein the adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, or cascaded filtering of up-sampling filtering and low-pass filtering.
  • That is to say, when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, the resolution of the image component to be referenced can be adjusted so that the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted; the resolution adjustment here includes up-sampling adjustment or down-sampling adjustment; and according to the adjusted resolution of the image component to be referenced, the reference value of the image component to be referenced is updated to obtain the first reference value of the image component to be referenced of the current block.
  • Alternatively, the reference value of the image component to be referenced of the current block can also be adjusted directly to obtain the first reference value of the image component to be referenced of the current block; the adjustment processing here includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, or cascaded filtering of up-sampling filtering and low-pass filtering.
  • the calculation of the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block may include:
  • the model parameter of the prediction model is calculated according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
  • That is, in this case, the model parameters of the prediction model are calculated according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
  • For example, assume that the luminance component is used to predict the chrominance component, that is, the image component to be referenced is the luminance component and the image component to be predicted is the chrominance component. After the target resolution of the chrominance component is obtained, since the resolution of the luminance component does not meet the target resolution, the resolution of the luminance component needs to be adjusted, for example by down-sampling the luminance component, so that the adjusted resolution of the luminance component meets the target resolution. On the contrary, if the chrominance component is used to predict the luminance component, after the target resolution of the luminance component is obtained, since the resolution of the chrominance component does not meet the target resolution, the resolution of the chrominance component needs to be adjusted, for example by up-sampling the chrominance component, so that the adjusted resolution of the chrominance component meets the target resolution. In addition, if the blue chrominance component is used to predict the red chrominance component, after the target resolution of the red chrominance component is obtained, the blue chrominance component and the red chrominance component generally have the same resolution, so the resolution of the blue chrominance component already meets the target resolution.
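  • For the 4:2:0 case described above, a simple down-sampling adjustment of the luminance component could look like the sketch below; a 2x2 averaging filter is used purely as an illustration, since the application does not fix a particular down-sampling or low-pass filter:

```python
import numpy as np

def downsample_luma_2x(luma: np.ndarray) -> np.ndarray:
    """Down-sample a 2Nx2N luma block to NxN so its resolution matches the chroma
    component to be predicted (4:2:0); a 2x2 average stands in for the down-sampling
    (or cascaded down-sampling + low-pass) filtering mentioned above."""
    h, w = luma.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even block dimensions"
    return luma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```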
  • the initial prediction value of the image component to be predicted in the current block may be filtered according to the reference value of the image component to be predicted in the current block.
  • the filtering process on the initial predicted value may include:
  • filtering the initial predicted value according to the reference value of the image component to be predicted of the current block to obtain the target predicted value; wherein the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the image or on the image component to be predicted of the current block.
  • the filtering of the initial prediction value according to the reference value of the image component to be predicted of the current block may include:
  • a preset processing mode is used to perform filtering processing on the initial predicted value; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing and adaptive filtering processing.
  • Optionally, the method further includes: using the initial prediction value to calculate an initial prediction residual of the image component to be predicted of the current block; and performing filtering processing on the initial prediction residual, which includes: according to the reference value of the image component to be predicted of the current block, using a preset processing mode to perform filtering processing on the initial prediction residual; wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing and adaptive filtering processing.
  • the preset processing mode may be filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing or adaptive filtering processing, etc.
  • the reference value of the to-be-predicted image component of the current block may be obtained by performing characteristic statistics on the to-be-predicted image component of the image or the to-be-predicted image component of the current block.
  • the characteristic statistics here are not limited to the image component to be predicted of the current block, and can also be extended to the image component to be predicted of the image to which the current block belongs.
  • the initial predicted value can be filtered using a preset processing mode according to the reference value of the image component to be predicted of the current block.
  • the initial prediction value can also be used to calculate the initial prediction residual of the image component to be predicted of the current block; then, according to the reference value of the image component to be predicted of the current block, the initial prediction residual is filtered using the preset processing mode to obtain the target prediction residual, and the target prediction value can also be obtained according to the target prediction residual.
  • the initial prediction value of the image component to be predicted in the current block may also be filtered according to the reference value of the image component to be predicted in the current block and the reference value of the image component to be referenced in the current block.
  • the method may further include:
  • the model parameters of the prediction model are calculated according to the reference value of the image component to be predicted in the current block and the reference value of the image component to be referenced in the current block.
  • the method may further include:
  • according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, the initial prediction value is filtered using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing and adaptive filtering processing.
  • the initial prediction value can also be filtered, which can balance the statistical characteristics of each image component after cross-component prediction, thereby improving the prediction efficiency.
  • FIG. 5 shows a schematic diagram of the composition structure of an improved cross-component prediction architecture provided by an embodiment of the present application.
  • Different from the traditional cross-component prediction architecture 10 shown in FIG. 1, the improved cross-component prediction architecture 50 may further include a processing unit 510, which is mainly used to process the predicted value output by the cross-component prediction unit 160 so as to obtain a more accurate target predicted value.
  • the processing unit 510 can also perform related processing on the initial predicted value of the U component, such as filtering, grouping, value correction, quantization, and inverse quantization, so as to obtain the target predicted value of the U component; since the target predicted value of the U component is closer to the true value of the U component, the prediction efficiency is improved.
  • It should also be noted that, when the image prediction method is applied to the encoder side, after the target prediction value is obtained, the prediction residual can be determined according to the difference between the target prediction value and the true value, and the prediction residual is then written into the code stream; at the same time, the model parameters of the prediction model can be calculated according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, and the calculated model parameters can also be written into the code stream; the code stream is transmitted from the encoder side to the decoder side. Correspondingly, when the image prediction method is applied to the decoder side, the prediction residual can be obtained by parsing the code stream, and the model parameters of the prediction model can also be obtained by parsing the code stream so as to construct the prediction model; in this way, on the decoder side, the prediction model is still used to obtain the initial prediction value of the image component to be predicted of the current block, and the initial prediction value is then filtered to obtain the target prediction value of the image component to be predicted of the current block.
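  • Putting the pieces of the previous paragraph together, an encoder-side / decoder-side flow might be sketched as follows; the write_* and parse_* helpers are hypothetical placeholders for whatever bitstream syntax the codec actually uses, and the final reconstruction step is an assumption:

```python
def encode_block(true_comp, rec_ref, alpha, beta, process, bitstream):
    """Encoder side: predict, filter, form the prediction residual, and signal it."""
    initial_pred = alpha * rec_ref + beta          # initial prediction value (prediction model)
    target_pred = process(initial_pred)            # filtering -> target prediction value
    residual = true_comp - target_pred             # prediction residual
    bitstream.write_residual(residual)             # hypothetical syntax writer
    bitstream.write_model_params(alpha, beta)      # model parameters, per the text above

def decode_block(rec_ref, process, bitstream):
    """Decoder side: rebuild the model, predict, filter, and add the parsed residual."""
    alpha, beta = bitstream.parse_model_params()   # hypothetical syntax parser
    residual = bitstream.parse_residual()
    initial_pred = alpha * rec_ref + beta
    target_pred = process(initial_pred)
    return target_pred + residual                  # reconstructed image component (assumed)
```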
  • This embodiment provides an image prediction method.
  • In this embodiment, the initial prediction value of the image component to be predicted of the current block in the image is obtained through the prediction model; the initial prediction value is then filtered to obtain the target predicted value of the image component to be predicted of the current block. In this way, the statistical characteristics of the image components after cross-component prediction can be balanced, which not only improves the prediction efficiency; moreover, because the obtained target predicted value is closer to the true value, the prediction residual of the image component is smaller, so that the bit rate transmitted during the encoding and decoding process is lower, and the encoding and decoding efficiency of the video image can also be improved.
  • FIG. 6 shows a schematic diagram of the composition structure of an encoder 60 provided by an embodiment of the present application.
  • the encoder 60 may include a first prediction unit 601 and a first processing unit 602, where
  • the first prediction unit 601 is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through a prediction model
  • the first processing unit 602 is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
  • the encoder 60 may further include a first statistics unit 603 and a first acquisition unit 604, where:
  • the first statistical unit 603 is configured to perform characteristic statistics on at least one image component of the current block; wherein the at least one image component includes an image component to be predicted and/or an image component to be referenced, and the image component to be predicted is different from the image component to be referenced;
  • the first obtaining unit 604 is configured to obtain the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block according to the result of characteristic statistics;
  • the predicted image component is the predicted component when constructing the prediction model, and the to-be-referenced image component is the component used for prediction when constructing the prediction model.
  • the first processing unit 602 is specifically configured to, according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, perform filtering processing on the initial prediction value using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
  • the first obtaining unit 604 is configured to obtain the target predicted value according to the result of the processing.
  • the encoder 60 may further include a calculation unit 605, configured to calculate the initial prediction residual of the image component to be predicted of the current block based on the initial prediction value;
  • the first processing unit 602 is further configured to, according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, perform filtering processing on the initial prediction residual using a preset processing mode, wherein the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
  • the first obtaining unit 604 is further configured to obtain the target prediction residual according to the processing result.
  • the calculation unit 605 is further configured to calculate the target prediction value of the image component to be predicted of the current block according to the target prediction residual.
  • the encoder 60 may further include a first determining unit 606 and a first constructing unit 607, where
  • the first determining unit 606 is configured to determine the reference value of the image component to be predicted of the current block, wherein the reference value of the image component to be predicted of the current block is the value of the image component to be predicted of neighboring pixels of the current block; and to determine the reference value of the image component to be referenced of the current block, wherein the image component to be referenced of the current block is different from the image component to be predicted, and the reference value of the image component to be referenced of the current block is the value of the image component to be referenced of neighboring pixels of the current block;
  • the calculation unit 605 is further configured to calculate the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block;
  • the first construction unit 607 is configured to construct the prediction model according to the calculated model parameters, wherein the prediction model is used to perform cross-component prediction processing on the image component to be predicted of the current block according to the image component to be referenced of the current block.
  • the encoder 60 may further include a first adjustment unit 608, configured to adjust the resolution of the image component to be referenced when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, wherein the resolution adjustment includes up-sampling adjustment or down-sampling adjustment; and, based on the adjusted resolution of the image component to be referenced, to update the reference value of the image component to be referenced of the current block to obtain the first reference value of the image component to be referenced of the current block, wherein the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
  • the first adjustment unit 608 is further configured to, when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjust the reference value of the image component to be referenced of the current block to obtain the first reference value of the image component to be referenced of the current block, wherein the adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, or cascaded filtering of up-sampling filtering and low-pass filtering.
  • the calculation unit 605 is further configured to calculate the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the first reference value of the image component to be referenced of the current block.
  • the first processing unit 602 is further configured to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block to obtain the target prediction value, where the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the image or on the image component to be predicted of the current block.
  • the first processing unit 602 is further configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
  • the calculation unit 605 is further configured to use the initial prediction value to calculate the initial prediction residual of the image component to be predicted of the current block;
  • the first processing unit 602 is further configured to use a preset processing mode to perform filtering processing on the initial prediction residual according to the reference value of the image component to be predicted of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
  • the first statistical unit 603 is further configured to perform characteristic statistics on the image component to be predicted of the image;
  • the first determining unit 606 is further configured to determine the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block according to the result of the characteristic statistics, where the image component to be referenced is different from the image component to be predicted;
  • the calculation unit 605 is further configured to calculate the model parameters of the prediction model according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced in the current block.
  • the first processing unit 602 is further configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, dequantization processing, low-pass filtering processing, and adaptive filtering processing.
  • a “unit” may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may also be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
  • if the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solution of this embodiment, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method described in this embodiment.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program codes.
  • an embodiment of the present application provides a computer storage medium that stores an image prediction program that implements the steps of the method described in the foregoing embodiment when the image prediction program is executed by at least one processor.
  • FIG. 7 shows the specific hardware structure of the encoder 60 provided by the embodiment of the present application, which may include: a first communication interface 701, a first memory 702, and a first processor 703; the components are coupled together through a first bus system 704.
  • the first bus system 704 is used to implement connection and communication between these components.
  • the first bus system 704 also includes a power bus, a control bus, and a status signal bus.
  • however, for clarity of illustration, the various buses are all labeled as the first bus system 704 in FIG. 7.
  • the first communication interface 701 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
  • the first memory 702 is configured to store a computer program that can run on the first processor 703;
  • the first processor 703 is configured to perform the following when running the computer program: obtaining, through a prediction model, the initial prediction value of the image component to be predicted of the current block in the image; and performing filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block.
  • the first memory 702 in the embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (Synchlink DRAM, SLDRAM), and direct rambus random access memory (Direct Rambus RAM, DRRAM).
  • the first processor 703 may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the foregoing method can be completed by an integrated logic circuit of hardware in the first processor 703 or instructions in the form of software.
  • the above-mentioned first processor 703 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic devices, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers.
  • the storage medium is located in the first memory 702, and the first processor 703 reads the information in the first memory 702, and completes the steps of the foregoing method in combination with its hardware.
  • the embodiments described in this application can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
  • the technology described in this application can be implemented through modules (such as procedures, functions, etc.) that perform the functions described in this application.
  • the first processor 703 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • This embodiment provides an encoder, which may include a first prediction unit and a first processing unit, wherein the first prediction unit is configured to obtain, through a prediction model, an initial prediction value of the image component to be predicted of the current block in the image, and the first processing unit is configured to perform filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block; in this way, after at least one image component of the current block is predicted, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also, because the obtained target prediction value is closer to the true value, making the prediction residual of the image component smaller, so that fewer bits are transmitted during encoding and decoding, and the coding and decoding efficiency of the video image is also improved.
  • FIG. 8 shows a schematic diagram of the composition structure of a decoder 80 provided by an embodiment of the present application.
  • the decoder 80 may include a second prediction unit 801 and a second processing unit 802, where:
  • the second prediction unit 801 is configured to obtain the initial prediction value of the image component to be predicted of the current block in the image through a prediction model
  • the second processing unit 802 is configured to perform filtering processing on the initial predicted value to obtain the target predicted value of the image component to be predicted of the current block.
  • the decoder 80 may further include a second statistics unit 803 and a second acquisition unit 804, where:
  • the second statistical unit 803 is configured to perform characteristic statistics on at least one image component of the current block, wherein the at least one image component includes an image component to be predicted and/or an image component to be referenced, and the image component to be predicted is different from the image component to be referenced;
  • the second obtaining unit 804 is configured to obtain the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block according to the result of characteristic statistics;
  • the image component to be predicted is the component that is predicted when the prediction model is constructed, and the image component to be referenced is the component that is used for prediction when the prediction model is constructed.
  • the second processing unit 802 is configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
  • the second obtaining unit 804 is configured to obtain the target predicted value according to the result of the processing.
  • the decoder 80 may further include a parsing unit 805, configured to parse the code stream to obtain the initial prediction residual of the image component to be predicted of the current block;
  • the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial prediction residual according to the reference value of the image component to be predicted of the current block and/or the reference value of the image component to be referenced of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, and dequantization processing;
  • the second obtaining unit 804 is further configured to obtain the target prediction residual according to the processing result.
  • the decoder 80 may further include a second construction unit 806, wherein:
  • the parsing unit 805 is further configured to analyze the code stream to obtain model parameters of the prediction model;
  • the second construction unit 806 is configured to construct the prediction model according to the model parameters obtained by parsing, where the prediction model is used to perform cross-component prediction processing on the image component to be predicted of the current block according to the image component to be referenced of the current block.
  • the decoder 80 may further include a second adjustment unit 807, configured to: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjust the resolution of the image component to be referenced, where the resolution adjustment includes up-sampling adjustment or down-sampling adjustment; and, based on the adjusted resolution of the image component to be referenced, update the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, where the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
  • the second adjustment unit 807 is further configured to: when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, adjust the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, where the adjustment processing includes one of the following: down-sampling filtering, up-sampling filtering, cascaded filtering of down-sampling filtering and low-pass filtering, and cascaded filtering of up-sampling filtering and low-pass filtering.
  • the second processing unit 802 is further configured to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block to obtain the target prediction value, where the reference value of the image component to be predicted of the current block is obtained by performing characteristic statistics on the image component to be predicted of the image or on the image component to be predicted of the current block.
  • the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
  • the parsing unit 805 is configured to analyze the code stream to obtain the initial prediction residual of the image component to be predicted of the current block;
  • the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial prediction residual according to the reference value of the image component to be predicted of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
  • the decoder 80 may further include a second determining unit 808, where:
  • the second statistical unit 803 is further configured to perform characteristic statistics on the image components to be predicted of the image;
  • the second determining unit 808 is further configured to determine the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block according to the result of the characteristic statistics, where the image component to be referenced is different from the image component to be predicted.
  • the second processing unit 802 is further configured to use a preset processing mode to perform filtering processing on the initial prediction value according to the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block, where the preset processing mode includes at least one of the following: filtering processing, grouping processing, value correction processing, quantization processing, inverse quantization processing, low-pass filtering processing, and adaptive filtering processing.
  • a "unit" may be a part of a circuit, a part of a processor, a part of a program, or software, etc., of course, may also be a module, or may be non-modular.
  • the various components in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be realized in the form of hardware or software function module.
  • if the integrated unit is implemented in the form of a software functional module and is not sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on such an understanding, this embodiment provides a computer storage medium that stores an image prediction program, and when the image prediction program is executed by a second processor, the method described in any one of the preceding embodiments is implemented.
  • FIG. 9 shows the specific hardware structure of the decoder 80 provided by the embodiment of the present application, which may include: a second communication interface 901, a second memory 902, and a second processor 903; the components are coupled together through a second bus system 904.
  • the second bus system 904 is used to implement connection and communication between these components.
  • the second bus system 904 also includes a power bus, a control bus, and a status signal bus.
  • however, for clarity of illustration, the various buses are all labeled as the second bus system 904 in FIG. 9.
  • the second communication interface 901 is used for receiving and sending signals in the process of sending and receiving information with other external network elements;
  • the second memory 902 is configured to store a computer program that can run on the second processor 903;
  • the second processor 903 is configured to perform the following when running the computer program: obtaining, through a prediction model, the initial prediction value of the image component to be predicted of the current block in the image; and performing filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block.
  • the second processor 903 is further configured to execute the method described in any one of the foregoing embodiments when the computer program is running.
  • the hardware function of the second memory 902 is similar to that of the first memory 702, and the hardware function of the second processor 903 is similar to that of the first processor 703; it will not be detailed here.
  • This embodiment provides a decoder, which may include a second prediction unit and a second processing unit, wherein the second prediction unit is configured to obtain, through a prediction model, an initial prediction value of the image component to be predicted of the current block in the image, and the second processing unit is configured to perform filtering processing on the initial prediction value to obtain the target prediction value of the image component to be predicted of the current block; in this way, after at least one image component of the current block is predicted, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, thereby not only improving the prediction efficiency but also improving the coding and decoding efficiency of the video image.
  • In summary, the initial prediction value of the image component to be predicted of the current block in the image is first obtained through the prediction model; the initial prediction value is then filtered to obtain the target prediction value of the image component to be predicted of the current block; in this way, after at least one image component of the current block is predicted, filtering processing is further performed on the at least one image component, which can balance the statistical characteristics of the image components after cross-component prediction, so that not only is the prediction efficiency improved, but also, because the obtained target prediction value is closer to the true value, the prediction residual of the image component is smaller, fewer bits are transmitted during encoding and decoding, and the coding and decoding efficiency of the video image is improved; a minimal sketch of this predict-then-filter flow is given directly after this list.
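The units above repeatedly name the same set of preset processing modes for turning an initial prediction value into a target prediction value: filtering, grouping, value correction, quantization, dequantization, low-pass filtering, and adaptive filtering. The Python sketch below illustrates only two of those modes; it is not the reference implementation, and every name and parameter in it (refine_initial_prediction, neighbor_true, neighbor_pred, the [1, 2, 1]/4 kernel, the 8-bit clipping) is an assumption made for illustration.

```python
import numpy as np

def refine_initial_prediction(init_pred, mode, neighbor_true=None, neighbor_pred=None):
    """Toy refinement of an initial cross-component prediction block.

    init_pred     : 2-D array, initial prediction of the component to be predicted
    mode          : 'value_correction' or 'low_pass' (two of the preset modes)
    neighbor_true : reconstructed neighbouring samples of the predicted component
    neighbor_pred : the model's prediction of those same neighbouring samples
    """
    pred = np.asarray(init_pred, dtype=np.float64)

    if mode == 'value_correction':
        # Offset derived from the statistical mismatch between prediction and
        # reconstruction on the neighbourhood ("characteristic statistics").
        offset = float(np.mean(neighbor_true) - np.mean(neighbor_pred))
        pred = pred + offset
    elif mode == 'low_pass':
        # Separable 3-tap [1, 2, 1] / 4 smoothing as a simple low-pass filter.
        kernel = np.array([1.0, 2.0, 1.0]) / 4.0
        padded = np.pad(pred, 1, mode='edge')
        rows = np.stack([np.convolve(r, kernel, mode='valid') for r in padded])
        pred = np.stack([np.convolve(c, kernel, mode='valid') for c in rows.T]).T
    else:
        raise ValueError('unknown preset processing mode: %s' % mode)

    # Clip back to the sample range (8-bit samples assumed in this sketch).
    return np.clip(np.rint(pred), 0, 255).astype(np.uint8)
```

On the encoder side the refined block would replace the initial prediction before the residual is formed; the decoder applies the same refinement so that both sides stay synchronized.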

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of the present application disclose an image prediction method, an encoder, a decoder, and a storage medium. The method includes: obtaining, through a prediction model, an initial prediction value of an image component to be predicted of a current block in an image; and performing filtering processing on the initial prediction value to obtain a target prediction value of the image component to be predicted of the current block.

Description

图像预测方法、编码器、解码器以及存储介质 技术领域
本申请实施例涉及视频编解码的技术领域,尤其涉及一种图像预测方法、编码器、解码器以及存储介质。
背景技术
在最新的视频编码标准H.266/多功能视频编码(Versatile Video Coding,VVC)中,已经允许跨分量预测的存在;其中,跨分量线性模型预测(Cross-component Linear Model Prediction,CCLM)是典型的跨分量预测技术之一。利用跨分量预测技术,可以实现由一个分量预测另一个分量(或者其残差),例如由亮度分量预测色度分量、或者由色度分量预测亮度分量、或者由色度分量预测色度分量等。
不同的分量具有不同的统计特性,使得各分量间的统计特性存在差异。然而在进行分量预测时,现有的跨分量预测技术考虑不全面,导致预测效率较低。
发明内容
本申请实施例提供一种图像预测方法、编码器、解码器以及存储介质,通过平衡跨分量预测后各图像分量的统计特性,从而不仅能够提高预测效率,而且还能够提高视频图像的编解码效率。
本申请实施例的技术方案可以如下实现:
第一方面,本申请实施例提供了一种图像预测方法,应用于编码器或解码器,该方法包括:
通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
第二方面,本申请实施例提供了一种编码器,该编码器包括第一预测单元和第一处理单元,其中,
第一预测单元,配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
第一处理单元,配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
第三方面,本申请实施例提供了一种编码器,该编码器包括第一存储器和第一处理器,其中,
第一存储器,用于存储能够在所述第一处理器上运行的计算机程序;
第一处理器,用于在运行所述计算机程序时,执行如第一方面所述的方法。
第四方面,本申请实施例提供了一种解码器,该解码器包括第二预测单元和第二处理单元,其中,
第二预测单元,配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
第二处理单元,配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测 图像分量的目标预测值。
第五方面,本申请实施例提供了一种解码器,该解码器包括第二存储器和第二处理器,其中,
第二存储器,用于存储能够在所述第二处理器上运行的计算机程序;
第二处理器,用于在运行所述计算机程序时,执行如第一方面所述的方法。
第六方面,本申请实施例提供了一种计算机存储介质,该计算机存储介质存储有图像预测程序,所述图像预测程序被第一处理器或第二处理器执行时实现如第一方面所述的方法。
本申请实施例提供了一种图像预测方法、编码器、解码器以及存储介质,首先通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;然后对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值;这样,在对当前块的至少一个图像分量进行预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而不仅提高了预测效率,而且由于所得到的目标预测值更接近于真实值,使得图像分量的预测残差较小,这样在编解码过程中所传输的比特率少,同时还能够提高视频图像的编解码效率。
附图说明
图1为相关技术方案提供的一种传统跨分量预测架构的组成结构示意图;
图2为本申请实施例提供的一种视频编码系统的组成框图示意图;
图3为本申请实施例提供的一种视频解码系统的组成框图示意图;
图4为本申请实施例提供的一种图像预测方法的流程示意图;
图5为本申请实施例提供的一种改进型跨分量预测架构的组成结构示意图;
图6为本申请实施例提供的一种编码器的组成结构示意图;
图7为本申请实施例提供的一种编码器的具体硬件结构示意图;
图8为本申请实施例提供的一种解码器的组成结构示意图;
图9为本申请实施例提供的一种解码器的具体硬件结构示意图。
具体实施方式
为了能够更加详尽地了解本申请实施例的特点与技术内容,下面结合附图对本申请实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本申请实施例。
在视频图像中,一般采用第一图像分量、第二图像分量和第三图像分量来表征编码块;其中,这三个图像分量分别为一个亮度分量、一个蓝色色度分量和一个红色色度分量,具体地,亮度分量通常使用符号Y表示,蓝色色度分量通常使用符号Cb或者U表示,红色色度分量通常使用符号Cr或者V表示;这样,视频图像可以用YCbCr格式表示,也可以用YUV格式表示。
在本申请实施例中,第一图像分量可以为亮度分量,第二图像分量可以为蓝色色度分量,第三图像分量可以为红色色度分量,但是本申请实施例不作具体限定。
为了进一步提升编解码性能,H.266/VCC提出了CCLM的跨分量预测技术。其中,基于CCLM的跨分量预测技术,不仅可以实现亮度分量到色度分量的预测,即第一图像分量到第二图像分量、或者第一图像分量到第三图像分量的预测,还可以实现色度分量到亮度分量的预测,即第二图像分量到第一图像分量、或者第三图像分量到第一图像分量的预测,甚至也可以实现色度分量与色度分量之间的预测,即第二图像分量到第三图像分量、或者第三图像分量到第二图像分量的预测等。在本申请实施例中,下述将以 第一图像分量到第二图像分量的预测为例进行描述,但是本申请实施例的技术方案同样也可以适用于其他图像分量的预测。
参见图1,其示出了相关技术方案提供的一种传统跨分量预测架构的组成结构示意图。如图1所示,利用第一图像分量(例如用Y分量表示)预测第二图像分量(例如用U分量表示);假定视频图像采用YUV为4:2:0格式,则Y分量与U分量具有不同的分辨率,此时需要对Y分量进行下采样处理或者对U分量进行上采样处理,以达到待预测分量的目标分辨率,这样就可以在分量之间以相同的分辨率进行预测。本示例中,使用Y分量预测第三图像分量(例如用V分量表示)的方法与此相同。
在图1中,传统跨分量预测架构10可以包括Y分量编码块110、分辨率调整单元120、Y 1分量编码块130、U分量编码块140、预测模型150、跨分量预测单元160。其中,视频图像的Y分量用2N×2N大小的Y分量编码块110表示,这里加粗的较大方框用于突出指示Y分量编码块110,而周围的灰色实心圆圈用于指示Y分量编码块110的相邻参考值Y(n);视频图像的U分量用N×N大小的U分量编码块140表示,这里加粗的较大方框用于突出指示U分量编码块140,而周围的灰色实心圆圈用于指示U分量编码块140的相邻参考值C(n);由于Y分量与U分量具有不同的分辨率,需要经过分辨率调整单元120对Y分量进行分辨率调整,得到N×N大小的Y 1分量编码块130;对于Y 1分量编码块130,这里加粗的较大方框用于突出指示Y 1分量编码块130,而周围的灰色实心圆圈用于指示Y 1分量编码块130的相邻参考值Y 1(n);然后通过Y 1分量编码块130的相邻参考值Y 1(n)和U分量编码块140的相邻参考值C(n)可以构建出预测模型150;根据Y 1分量编码块130的Y分量重建像素值和预测模型150,可以跨分量预测单元160进行分量预测,最终输出U分量预测值。
针对传统跨分量预测架构10,在进行图像分量预测时考虑不全面,比如没有考虑到各图像分量间统计特性的差异性,使得预测效率较低。为了提高预测效率,本申请实施例提供了一种图像预测方法,首先通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;然后对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值;这样,在对当前块的至少一个图像分量进行预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而不仅提高了预测效率,而且还提高了视频图像的编解码效率。
下面将结合附图对本申请各实施例进行详细描述。
参见图2,其示出了本申请实施例提供的一种视频编码系统的组成框图示例;如图2所示,该视频编码系统20包括变换与量化单元201、帧内估计单元202、帧内预测单元203、运动补偿单元204、运动估计单元205、反变换与反量化单元206、滤波器控制分析单元207、滤波单元208、编码单元209和解码图像缓存单元210等,其中,滤波单元208可以实现去方块滤波及样本自适应缩进(Sample Adaptive 0ffset,SAO)滤波,编码单元209可以实现头信息编码及基于上下文的自适应二进制算术编码(Context-based Adaptive Binary Arithmatic Coding,CABAC)。针对输入的原始视频信号,通过编码树块(Coding Tree Unit,CTU)的划分可以得到一个编码块,然后对经过帧内或帧间预测后得到的残差像素信息通过变换与量化单元201对该编码块进行变换,包括将残差信息从像素域变换到变换域,并对所得的变换系数进行量化,用以进一步减少比特率;帧内估计单元202和帧内预测单元203是用于对该编码块进行帧内预测;明确地说,帧内估计单元202和帧内预测单元203用于确定待用以编码该编码块的帧内预测模式;运动补偿单元204和运动估计单元205用于执行所接收的该编码块相对于一或多个参考帧中的一或多个块的帧间预测编码以提供时间预测信息;由运动估计单元205执行的运动估计为产生运动向量的过程,所述运动向量可以估计该编码块的运动,然后由运 动补偿单元204基于由运动估计单元205所确定的运动向量执行运动补偿;在确定帧内预测模式之后,帧内预测单元203还用于将所选择的帧内预测数据提供到编码单元209,而且运动估计单元205将所计算确定的运动向量数据也发送到编码单元209;此外,反变换与反量化单元206是用于该编码块的重构建,在像素域中重构建残差块,该重构建残差块通过滤波器控制分析单元207和滤波单元208去除方块效应伪影,然后将该重构残差块添加到解码图像缓存单元210的帧中的一个预测性块,用以产生经重构建的视频块;编码单元209是用于编码各种编码参数及量化后的变换系数,在基于CABAC的编码算法中,上下文内容可基于相邻编码块,可用于编码指示所确定的帧内预测模式的信息,输出该视频信号的码流;而解码图像缓存单元210是用于存放重构建的视频块,用于预测参考。随着视频图像编码的进行,会不断生成新的重构建的视频块,这些重构建的视频块都会被存放在解码图像缓存单元210中。
参见图3,其示出了本申请实施例提供的一种视频解码系统的组成框图示例;如图3所示,该视频解码系统30包括解码单元301、反变换与反量化单元302、帧内预测单元303、运动补偿单元304、滤波单元305和解码图像缓存单元306等,其中,解码单元301可以实现头信息解码以及CABAC解码,滤波单元305可以实现去方块滤波以及SAO滤波。输入的视频信号经过图2的编码处理之后,输出该视频信号的码流;该码流输入视频解码系统30中,首先经过解码单元301,用于得到解码后的变换系数;针对该变换系数通过反变换与反量化单元302进行处理,以便在像素域中产生残差块;帧内预测单元303可用于基于所确定的帧内预测模式和来自当前帧或图片的先前经解码块的数据而产生当前待解码的视频块的预测数据;运动补偿单元304是通过剖析运动向量和其他关联语法元素来确定用于该待解码的视频块的预测信息,并使用该预测信息以产生正被解码的视频块的预测性块;通过对来自反变换与反量化单元302的残差块与由帧内预测单元303或运动补偿单元304产生的对应预测性块进行求和,而形成经解码的视频块;经解码的视频块通过滤波单元305以便去除方块效应伪影,可以改善视频质量;然后将经解码的视频块存储于解码图像缓存单元306中,解码图像缓存单元306存储用于后续帧内预测或运动补偿的参考图像,同时也用于视频信号的输出,即得到了所恢复的原始视频信号。
本申请实施例主要应用在如图2所示的帧内预测单元203部分和如图3所示的帧内预测单元303部分;也就是说,本申请实施例既可以应用于视频编码系统,也可以应用于视频解码系统,本申请实施例不作具体限定。
Based on the application scenario examples of FIG. 2 or FIG. 3 above, refer to FIG. 4, which shows a schematic flowchart of an image prediction method provided by an embodiment of the present application. The method may include:
S401: obtaining, through a prediction model, an initial prediction value of the image component to be predicted of the current block in the image;
S402: performing filtering processing on the initial prediction value to obtain a target prediction value of the image component to be predicted of the current block.
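Read as pseudocode, S401 and S402 amount to composing a cross-component predictor with a post-filter. A minimal sketch follows; the function and argument names are placeholders introduced here, not part of the VVC specification or of any reference software.

```python
def predict_component(ref_samples, model, post_filter):
    """Two-step prediction of one image component of the current block.

    ref_samples : reconstructed samples of the reference component of the current
                  block (e.g. luma), already at the resolution of the predicted one
    model       : the prediction model, e.g. a CCLM-style linear mapping
    post_filter : a callable implementing one of the preset processing modes
    """
    initial_pred = model(ref_samples)        # S401: initial prediction value
    target_pred = post_filter(initial_pred)  # S402: filtering yields the target value
    return target_pred
```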
需要说明的是,视频图像可以划分为多个图像块,每个当前待编码的图像块可以称为编码块。其中,每个编码块可以包括第一图像分量、第二图像分量和第三图像分量,而当前块为视频图像中当前待进行第一图像分量、第二图像分量或者第三图像分量预测的编码块。
还需要说明的是,本申请实施例的图像预测方法,既可以应用于视频编码系统,又可以应用于视频解码系统,甚至还可以同时应用于视频编码系统和视频解码系统,本申请实施例不作具体限定。
本申请实施例中,首先通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;再对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标 预测值;这样,在对至少一个图像分量预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量预测后各图像分量的统计特性,从而不仅提高了预测效率,而且还提高了视频图像的编解码效率。
进一步地,由于不同的图像分量具有不同的统计特性,而且各个图像分量间的统计特性存在差异,比如亮度分量具有丰富的纹理特性,而色度分量则更趋于均匀平坦;为了能够更好地平衡跨分量预测后各图像分量的统计特性,这时候还需要对当前块的至少一个图像分量进行特性统计。因此,在一些实施例中,对于S402来说,在所述对所述初始预测值进行滤波处理之前,该方法还可以包括:
对所述当前块的至少一个图像分量进行特性统计;其中,所述至少一个图像分量包括待预测图像分量和/或待参考图像分量,所述待预测图像分量不同于所述待参考图像分量;
根据特性统计的结果,获取所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值;其中,所述待预测图像分量为构建所述预测模型时所被预测的分量,所述待参考图像分量为构建所述预测模型时所用于预测的分量。
需要说明的是,当前块的至少一个图像分量可以是待预测图像分量,也可以是待参考图像分量,甚至还可以是待预测图像分量和待参考图像分量。假定通过预测模型来实现第一图像分量对第二图像分量的预测,那么待预测图像分量为第二图像分量,待参考图像分量为第一图像分量;或者,假定通过预测模型来实现第一图像分量对第三图像分量的预测,那么待预测图像分量为第三图像分量,待参考图像分量仍为第一图像分量。
这样,通过对当前块的至少一个图像分量进行特性统计,根据特性统计的结果,可以得到当前块的待预测图像分量的参考值和/或当前块的待参考图像分量的参考值。
进一步地,为了提高预测效率,可以根据当前块的待预测图像分量的参考值和/或当前块的待参考图像分量的参考值,对当前块的待预测图像分量的初始预测值进行滤波处理。
在一些实施例中,所述根据所述至少一个图像分量的参考值对所述至少一个图像分量对应的初始预测值进行处理,可以包括:
根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
根据所述处理的结果,得到所述目标预测值。
需要说明的是,根据当前块的至少一个图像分量的特性统计的结果,在获得当前块的待预测图像分量的参考值和/或当前块的待参考图像分量的参考值之后,可以利用预设处理模式对初始预测值进行滤波处理。具体来说,可以利用过滤处理对初始预测值进行滤波处理,或者,也可以利用分组处理对初始预测值进行滤波处理,或者,还可以利用值修正处理对初始预测值进行滤波处理,或者,还可以利用量化处理对初始预测值进行滤波处理,或者,还可以利用反量化处理(也可以称为去量化处理)对初始预测值进行滤波处理等等,本申请实施例不作具体限定。
示例性地,为了提高预测效率,即提高预测值的准确性,假定利用亮度分量预测色度分量,针对预测模型所获取的色度分量的初始预测值,如果预设处理模式采用值修正处理,由于亮度分量与色度分量具有不同的统计特性,根据两个图像分量的统计特性的差异,那么可以得到一个偏差因子;然后利用该偏差因子对初始预测值进行值修正处理(比如将初始预测值与该偏差因子进行相加处理)以平衡跨分量预测后各图像分量间的统计特性,从而所获取到色度分量的目标预测值,此时该色度分量的目标预测值与色度分量的真实值更为接近;如果预设处理模式采用过滤处理,由于亮度分量与色度分量具 有不同的统计特性,根据两个图像分量的统计特性的差异,那么可以对初始预测值进行过滤处理以平衡跨分量预测后各图像分量间的统计特性,从而所获取到色度分量对应的目标预测值,此时该色度分量的目标预测值与色度分量的真实值更为接近;如果预设处理模式采用分组处理,由于亮度分量与色度分量具有不同的统计特性,根据两个图像分量的统计特性的差异,那么可以对初始预测值进行分组处理以平衡跨分量预测后各图像分量间的统计特性,根据分组处理后的初始预测值可以获取到色度分量对应的目标预测值,此时该色度分量的目标预测值与色度分量的真实值更为接近;除此之外,由于在确定初始预测值的过程中涉及到对亮度分量和色度分量的量化处理和反量化处理,同时由于亮度分量与色度分量具有不同的统计特性,根据两个图像分量的统计特性的差异可能会导致量化处理和反量化处理存在差异性,这时候如果预设处理模式采用量化处理,那么可以对初始预测值进行量化处理以平衡跨分量预测后各图像分量间的统计特性,从而所获取到色度分量对应的目标预测值,此时该色度分量的目标预测值与色度分量的真实值更为接近;如果预设处理模式反量化处理,那么可以对初始预测值进行去量化处理以平衡跨分量预测后各图像分量间的统计特性,从而所获取到色度分量对应的目标预测值,此时该色度分量的目标预测值与色度分量的真实值更为接近;从而提高了预测值的准确性,也就提高了预测效率。
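The paragraph above runs through how each preset processing mode could be used to pull the initial chroma prediction closer to its true value. The text does not specify how "grouping processing" partitions the samples, so the sketch below is only one hypothetical interpretation: samples are split into two classes by the reference (luma) level, and a per-class offset is learned from the neighbouring samples. All names and the thresholding rule are assumptions made for illustration.

```python
import numpy as np

def group_wise_correction(init_pred, rec_ref, nb_true, nb_pred, nb_ref):
    """Hypothetical 'grouping' refinement of an initial prediction block.

    init_pred : 2-D initial prediction of the component to be predicted
    rec_ref   : reconstructed reference component of the block, same shape
    nb_true   : 1-D reconstructed neighbouring samples of the predicted component
    nb_pred   : 1-D model prediction of those neighbouring samples
    nb_ref    : 1-D neighbouring samples of the reference component
    """
    init_pred = np.asarray(init_pred, dtype=np.float64)
    rec_ref = np.asarray(rec_ref, dtype=np.float64)
    nb_true, nb_pred, nb_ref = (np.asarray(a, dtype=np.float64)
                                for a in (nb_true, nb_pred, nb_ref))

    thr = float(np.mean(nb_ref))          # class boundary: mean reference level
    out = init_pred.copy()
    for sel_nb, sel_blk in ((nb_ref <= thr, rec_ref <= thr),
                            (nb_ref > thr, rec_ref > thr)):
        if sel_nb.any() and sel_blk.any():
            # Per-class offset: mean mismatch of the prediction on the neighbours.
            out[sel_blk] += float(np.mean(nb_true[sel_nb]) - np.mean(nb_pred[sel_nb]))
    return out
```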
进一步地,为了提高预测效率,还可以根据当前块的待预测图像分量的参考值和/或当前块的待参考图像分量的参考值,对当前块的待预测图像分量的初始预测残差进行滤波处理。
在一些实施例中,对于S401来说,在所述通过预测模型,获得图像中当前块的待预测图像分量的初始预测值之后,该方法还可以包括:
基于所述初始预测分量值,计算得到所述当前块的待预测图像分量的初始预测残差;
根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
根据所述处理的结果,得到所述目标预测残差。
进一步地,在一些实施例中,对于S402来说,所述得到所述当前块的待预测图像分量的目标预测值,可以包括:
根据所述目标预测残差,计算得到所述当前块的待预测图像分量的目标预测值。
需要说明的是,预测残差是由图像分量的预测值和图像分量的真实值之间的差值得到;为了提高视频图像的编解码效率,则需要保证当前块所传输的预测残差尽可能地小。
为了使得预测残差尽可能地小,一方面,在经过预测模型获取到待预测图像分量对应的初始预测值之后,可以是对该初始预测值按照预设处理模式对其进行滤波处理,得到待预测图像分量的目标预测值,由于待预测图像分量的目标预测值尽可能地接近待预测图像分量的真实值,从而使得两者之间的预测残差尽可能地小;另一方面,在经过预测模型获取到待预测图像分量对应的初始预测值之后,还可以是根据待预测图像分量的初始预测值与待预测图像分量的真实值之间的差值,先确定出待预测图像分量的初始预测残差,然后针对该初始预测残差按照预设处理模式对其进行滤波处理,得到待预测图像分量的目标预测残差,根据该目标预测残差可以得到待预测图像分量的目标预测值;由于该目标预测残差尽可能地小,从而待预测图像分量的目标预测值尽可能地接近待预测图像分量的真实值。也就是说,本申请实施例不仅可以应用于对当前块的待预测图像分量的初始预测值进行滤波处理,也可以应用于对当前块的待预测图像分量的初始预测残差进行滤波处理;经过滤波处理之后,可以平衡跨分量预测后各图像分量的统计特性,从而不仅提高了预测效率,而且由于得到的目标预测值更接近于真实值,使得待预测图 像分量的预测残差较小,这样在编解码过程中所传输的比特率少,同时还提高了视频图像的编解码效率。
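The residual path described above can be sketched in the same style. Note that the text only says that the target prediction value is obtained "according to" the target prediction residual; the last line of the sketch is one possible reading of that step, not the normative mapping, and the sign convention is likewise an assumption.

```python
def residual_path(original, initial_pred, post_filter):
    """Refine the prediction residual instead of the prediction value (encoder side).

    original     : true samples of the component to be predicted
    initial_pred : initial prediction produced by the prediction model
    post_filter  : one of the preset processing modes, applied to the residual
    """
    initial_residual = initial_pred - original       # difference between prediction and true value
    target_residual = post_filter(initial_residual)  # e.g. low-pass or value correction
    target_pred = original + target_residual         # one possible reading of "obtained from"
    return target_residual, target_pred
```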
进一步地,在获得当前块的待预测图像分量的初始预测值之前,还需要确定预测模型的模型参数,以构建预测模型。因此,在一些实施例中,对于S401来说,在所述通过预测模型,获得图像中当前块的待预测图像分量的初始预测值之前,该方法还可以包括:
确定所述当前块的待预测图像分量的参考值,其中,所述当前块的待预测图像分量的参考值是所述当前块相邻像素的所述待预测图像分量值;
确定所述当前块的待参考图像分量的参考值,其中,所述当前块的待参考图像分量不同于所述待预测图像分量,所述当前块的待参考图像分量的参考值是所述当前块相邻像素的所述参考图像分量值;
根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数;
根据计算得到的模型参数,构建所述预测模型,其中,所述预测模型用于根据所述当前块的待参考图像分量对所述当前块的待预测图像分量进行跨分量预测处理。
需要说明的是,本申请实施例中的预测模型可以是线性模型,比如CCLM的跨分量预测技术;该预测模型也可以是非线性模型,比如多模型CCLM(Multiple Model CCLM,MMLM)的跨分量预测技术,它是由多个线性模型构成的。本申请实施例将以预测模型为线性模型为例进行如下描述,但是本申请实施例的图像预测方法同样可以适用于非线性模型。
模型参数包括第一模型参数(用α表示)和第二模型参数(用β表示)。而针对α和β的计算具有多种方式,可以是以最小二乘法构造的预设因子计算模型,也可以是以最大值与最小值构造的预设因子计算模型,甚至还可以是其他方式构造的预设因子计算模型,本申请实施例不作具体限定。
Taking the preset factor calculation model constructed by the least squares method as an example, the reference value of the image component to be predicted of the current block and the reference value of the image component to be referenced of the current block first need to be determined. Here, the reference value of the image component to be referenced of the current block may be the reference image component value of the neighbouring pixels of the current block (for example, the neighbouring reference values of the first image component), and the reference value of the image component to be predicted of the current block may be the to-be-predicted image component value of the neighbouring pixels of the current block (for example, the neighbouring reference values of the second image component). The model parameters of the prediction model are derived by minimizing the regression error between the neighbouring reference pixel values of the first image component and those of the second image component, as shown in Equation (1):
$$\alpha=\frac{2N\cdot\sum_{n=1}^{2N}L(n)\cdot C(n)-\sum_{n=1}^{2N}L(n)\cdot\sum_{n=1}^{2N}C(n)}{2N\cdot\sum_{n=1}^{2N}L(n)\cdot L(n)-\left(\sum_{n=1}^{2N}L(n)\right)^{2}},\qquad \beta=\frac{\sum_{n=1}^{2N}C(n)-\alpha\cdot\sum_{n=1}^{2N}L(n)}{2N} \tag{1}$$
where L(n) denotes the down-sampled neighbouring reference values of the first image component corresponding to the left side and the top side of the current block, C(n) denotes the neighbouring reference values of the second image component corresponding to the left side and the top side of the current block, N is the side length of the current block of the second image component, and n = 1, 2, ..., 2N. The first model parameter α and the second model parameter β can be obtained through Equation (1). Based on α and β, assuming that the second image component is predicted from the first image component, the constructed prediction model is as shown in Equation (2):
$$\mathrm{Pred}_{C}[i,j]=\alpha\cdot \mathrm{Rec}_{L}[i,j]+\beta \tag{2}$$
where i, j denote the position coordinates of a pixel in the current block, i being the horizontal direction and j the vertical direction; Pred_C[i, j] denotes the prediction value of the second image component for the pixel at position [i, j] in the current block, and Rec_L[i, j] denotes the (down-sampled) reconstructed value of the first image component for the pixel at position [i, j] in the same current block.
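As a cross-check of Equations (1) and (2), the least-squares derivation and the application of the linear model can be written in a few lines of Python. This is a plain illustration under the stated definitions (2N neighbouring sample pairs), not the integerized fixed-point computation used in an actual codec; the flat-neighbourhood fallback is an assumption added here to keep the sketch well defined.

```python
import numpy as np

def cclm_params_least_squares(L, C):
    """alpha, beta from the 2N neighbouring reference pairs (Equation (1))."""
    L = np.asarray(L, dtype=np.float64)   # down-sampled luma neighbours L(n)
    C = np.asarray(C, dtype=np.float64)   # co-located chroma neighbours C(n)
    n = L.size                            # n = 2N samples in total
    denom = n * np.sum(L * L) - np.sum(L) ** 2
    if denom == 0:                        # flat neighbourhood: fall back to the mean
        return 0.0, float(np.mean(C))
    alpha = (n * np.sum(L * C) - np.sum(L) * np.sum(C)) / denom
    beta = (np.sum(C) - alpha * np.sum(L)) / n
    return float(alpha), float(beta)

def apply_linear_model(rec_luma, alpha, beta):
    """Initial prediction Pred_C[i, j] = alpha * Rec_L[i, j] + beta (Equation (2))."""
    return alpha * np.asarray(rec_luma, dtype=np.float64) + beta
```

Given the 2N neighbouring pairs, alpha and beta are computed once per block and then applied to every down-sampled reconstruction sample of the reference component of the block.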
Taking the preset factor calculation model constructed from the maximum and minimum values as an example, it provides a simplified derivation of the model parameters. Specifically, the largest and smallest neighbouring reference values of the first image component can be searched for, and the model parameters of the prediction model are derived according to the principle that "two points determine a line", as shown in Equation (3):
$$\alpha=\frac{C_{\max}-C_{\min}}{L_{\max}-L_{\min}},\qquad \beta=C_{\min}-\alpha\cdot L_{\min} \tag{3}$$
where L_max and L_min denote the maximum and minimum values found among the down-sampled neighbouring reference values of the first image component corresponding to the left side and the top side of the current block, and C_max and C_min denote the neighbouring reference values of the second image component at the reference pixel positions corresponding to L_max and L_min. The first model parameter α and the second model parameter β can also be obtained from L_max, L_min, C_max and C_min through Equation (3); based on α and β, assuming that the second image component is predicted from the first image component, the constructed prediction model is still as shown in Equation (2) above.
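The max-min variant of Equation (3) needs only the two extreme neighbouring points. A sketch under the same assumptions (floating point, with a fallback added here for the case where the reference neighbours have no spread):

```python
def cclm_params_max_min(L, C):
    """Simplified alpha, beta from the two extreme reference points (Equation (3))."""
    i_max = max(range(len(L)), key=lambda i: L[i])   # position of the largest L(n)
    i_min = min(range(len(L)), key=lambda i: L[i])   # position of the smallest L(n)
    l_max, l_min = L[i_max], L[i_min]
    c_max, c_min = C[i_max], C[i_min]
    if l_max == l_min:                               # degenerate: no luma spread
        return 0.0, float(c_min)
    alpha = (c_max - c_min) / (l_max - l_min)        # "two points determine a line"
    beta = c_min - alpha * l_min
    return float(alpha), float(beta)
```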
在构建出预测模型之后,可以根据该预测模型进行图像分量的预测;比如根据式(2)所示的预测模型,可以利用第一图像分量预测第二图像分量,比如利用亮度分量预测色度分量,从而得到了色度分量的初始预测值,后续可以根据亮度分量的参考值和/或色度分量的参考值,利用预设处理模式对该初始预测值进行滤波处理,从而得到该色度分量的目标预测值;也可以利用第二图像分量预测第一图像分量,比如利用色度分量预测亮度分量,从而得到了亮度分量的初始预测值,后续可以根据亮度分量的参考值和/或色度分量的参考值,利用预设处理模式对该初始预测值进行滤波处理,从而得到该亮度分量对应的目标预测值;甚至还可以利用第二图像分量预测第三图像分量,比如利用蓝色色度分量预测红色色度分量,从而得到了红色色度分量的初始预测值,后续可以根据蓝色色度分量的参考值和/或红色色度分量的参考值,利用预设处理模式对该初始预测值进行滤波处理,从而得到该红色色度分量的目标预测值;可以达到提高预测效率的目的。
Further, the resolutions of the individual image components are not the same. To facilitate construction of the prediction model, the resolution of an image component also needs to be adjusted (including up-sampling or down-sampling the image component) so as to reach the target resolution of the image component to be predicted.
Optionally, in some embodiments, before the calculating of the model parameters of the prediction model, the method may further include:
when the resolution of the image component to be predicted of the current block is different from the resolution of the image component to be referenced of the current block, performing resolution adjustment on the resolution of the image component to be referenced, where the resolution adjustment includes up-sampling adjustment or down-sampling adjustment;
based on the adjusted resolution of the image component to be referenced, updating the reference value of the image component to be referenced of the current block to obtain a first reference value of the image component to be referenced of the current block, where the adjusted resolution of the image component to be referenced is the same as the resolution of the image component to be predicted.
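For the common YUV 4:2:0 case, the resolution adjustment in these steps amounts to down-sampling the reference (luma) component to the chroma grid before the model parameters are computed. The sketch below uses simple 2x2 block averaging purely for illustration; real codecs use longer-tap down-sampling filters, and up-sampling would be used in the opposite case.

```python
import numpy as np

def downsample_reference(ref, factor=2):
    """Bring the reference component to the resolution of the predicted component."""
    ref = np.asarray(ref, dtype=np.float64)
    h, w = ref.shape
    assert h % factor == 0 and w % factor == 0, 'sketch assumes divisible block sizes'
    # Average each factor x factor neighbourhood (one of many possible filters).
    return ref.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```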
可选地,在一些实施例中,在所述计算所述预测模型的模型参数之前,该方法还可以包括:
当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨率不同时,对所述当前块的待参考图像分量的参考值进行调整处理,得到所述当前块的待参考图像分量的第一参考值,其中,所述调整处理包括下述其中之一:下采样滤波,上采样滤波,下采样滤波及低通滤波的级联滤波,上采样滤波及低通滤波的级联滤波。
需要说明的是,如果当前块的待预测图像分量的分辨率与当前块的待参考图像分量 的分辨率不同,那么可以对待参考图像分量的分辨率进行分辨率调整,以使得调整后的待参考图像分量的分辨率与待预测图像分量的分辨率相同;这里的分辨率调整包括上采样调整或下采样调整;并且根据调整后的待参考图像分量的分辨率,对当前块的待参考图像分量的参考值进行更新,得到当前块的待参考图像分量的第一参考值。
另外,如果当前块的待预测图像分量的分辨率与当前块的待参考图像分量的分辨率不同,那么还可以对当前块的待参考图像分量的参考值进行调整处理,得到当前块的待参考图像分量的第一参考值;这里的调整处理包括下述其中之一:下采样滤波,上采样滤波,下采样滤波及低通滤波的级联滤波,上采样滤波及低通滤波的级联滤波。
进一步地,在一些实施例中,所述根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数,可以包括:
根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的第一参考值,计算所述预测模型的模型参数。
需要说明的是,如果当前块的待预测图像分量的分辨率与当前块的待参考图像分量的分辨率不同,那么在得到更新后的当前块的待参考图像分量的第一参考值之后,可以根据该当前块的待参考图像分量的第一参考值和当前块的待参考图像分量的第一参考值来计算预测模型的模型参数。
举例来说,假定利用亮度分量预测色度分量,此时待使用图像分量为亮度分量,而待预测图像分量为色度分量;由于亮度分量和色度分量的分辨率是不同的,在获取到色度分量的目标分辨率之后,由于亮度分量的分辨率不符合目标分辨率,这时候需要对亮度分量的分辨率进行调整,比如对亮度分量进行下采样处理,可以使得调整后的亮度分量的分辨率符合目标分辨率;反之,如果利用色度分量预测亮度分量,在获取到亮度分量的目标分辨率之后,由于色度分量的分辨率不符合目标分辨率,这时候需要对色度分量的分辨率进行调整,比如对色度分量进行上采样处理,可以使得调整后的色度分量的分辨率符合目标分辨率;另外,如果利用蓝色色度分量预测红色色度分量,在获取到红色色度分量的目标分辨率之后,由于蓝色色度分量的分辨率符合目标分辨率,这时候不需要对蓝色色度分量的分辨率进行调整,已经保证蓝色色度分量的分辨率符合目标分辨率;这样,可以根据相同分辨率来获得更新后的当前块的待参考图像分量的第一参考值,并且构建预测模型进行图像分量的预测。
除此之外,为了提高预测效率,还可以只是根据当前块的待预测图像分量的参考值对当前块的待预测图像分量的初始预测值进行滤波处理。
在一些实施例中,对于S402来说,所述对所述初始预测值进行滤波处理,可以包括:
根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,得到所述目标预测值;其中,所述当前块的待预测图像分量的参考值是通过对所述图像的待预测图像分量或所述当前块的待预测图像分量进行特性统计得到。
进一步地,所述根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,可以包括:
根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
在一些实施例中,对于S402来说,所述对所述初始预测值进行滤波处理,包括:
利用所述初始预测值,计算得到所述当前块的待预测图像分量的初始预测残差;
根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处 理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
需要说明的是,预设处理模式可以是过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理或自适应滤波处理等等。另外,当前块的待预测图像分量的参考值可以是通过对图像的待预测图像分量或当前块的待预测图像分量进行特性统计得到的,这里的特性统计并不局限于当前块的待预测图像分量,还可以扩展到当前块所属图像的待预测图像分量。
这样,针对滤波处理的过程,在得到当前块的待预测图像分量的初始预测值之后,可以根据当前块的待预测图像分量的参考值,利用预设处理模式对该初始预测值进行滤波处理,以得到目标预测值;或者,也可以利用初始预测值来计算出当前块的待预测图像分量的初始预测残差,然后根据当前块的待预测图像分量的参考值,利用预设处理模式对该初始预测残差进行滤波处理,以得到目标预测残差,根据该目标预测残差也可以得到目标预测值。
为了提高预测效率,也可以根据当前块的待预测图像分量的参考值和当前块的待参考图像分量的参考值对当前块的待预测图像分量的初始预测值进行滤波处理。
在一些实施例中,对于S401来说,在所述通过预测模型,获得图像中当前块的待预测图像分量的初始预测值之前,该方法还可以包括:
对所述图像的待预测图像分量进行特性统计;
根据所述特性统计的结果,确定所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值;其中,所述待参考图像分量不同于所述待预测图像分量;
根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数。
进一步地,在一些实施例中,该方法还可以包括:
根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
需要说明的是,由于不同的图像分量具有不同的统计特性,而且各个图像分量间的统计特性存在差异,比如亮度分量具有丰富的纹理特性,而色度分量则更趋于均匀平坦;为了能够更好地平衡跨分量预测后各图像分量的统计特性,这时候需要对当前块的至少一个图像分量进行特性统计,比如对所述图像的待预测图像分量进行特性统计;然后根据特性统计的结果,确定当前块的待预测图像分量的参考值和当前块的待参考图像分量的参考值;根据当前块的待预测图像分量的参考值和当前块的待参考图像分量的参考值,除了可以计算预测模型的模型参数,以构建预测模型之外,还可以对初始预测值进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而提高了预测效率。
示例性地,参见图5,其示出了本申请实施例提供的一种改进型跨分量预测架构的组成结构示意图。如图5所示,在图1所示的传统跨分量预测架构10的基础上,改进型跨分量预测架构50还可以包括处理单元510,该处理单元510主要用于对经过跨分量预测单元160之后的预测值进行相关处理,以获得较为准确的目标预测值。
在图5中,对于当前块来说,假定以Y分量来预测U分量,由于Y分量编码块110与U分量编码块140具有不同的分辨率,此时需要通过分辨率调整单元120对Y分量进行分辨率调整,从而得到与U分量编码块140具有相同分辨率的Y 1分量编码块130;然后利用Y 1分量编码块130的相邻参考值Y 1(n)和U分量编码块140的相邻参考值C(n),可以构建出预测模型150;根据Y 1分量编码块130的Y分量重建像素值和预测模型150,通过跨分量预测单元160进行图像分量预测,得到U分量初始预测值;为了提高预测效 率,还可以通过处理单元510对U分量初始预测值进行相关处理,比如过滤处理、分组处理、值修正处理、量化处理和反量化处理等,从而得到U分量目标预测值;由于该U分量目标预测值较为接近U分量真实值,提高了预测效率,同时还提高了视频图像的编解码效率。
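Pulling the pieces together, the improved architecture of FIG. 5 can be read as: adjust the luma resolution, derive the linear model from the neighbouring references, perform the cross-component prediction, and then run the additional processing unit. The sketch below reuses the helper functions sketched earlier in this document and is, again, only an illustrative reading of the figure, not the reference software.

```python
def improved_cross_component_prediction(rec_luma, luma_nb, chroma_nb, post_process):
    """End-to-end sketch of FIG. 5 for predicting the U component from Y.

    rec_luma     : reconstructed Y block of the current block (YUV 4:2:0 assumed)
    luma_nb      : down-sampled neighbouring Y reference samples, Y1(n)
    chroma_nb    : co-located neighbouring U reference samples, C(n)
    post_process : the additional processing unit 510 (e.g. refine_initial_prediction)
    """
    luma_small = downsample_reference(rec_luma)                  # resolution adjustment
    alpha, beta = cclm_params_least_squares(luma_nb, chroma_nb)  # build prediction model
    initial_pred = apply_linear_model(luma_small, alpha, beta)   # cross-component prediction
    return post_process(initial_pred)                            # target U prediction value
```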
本申请实施例中,当该图像预测方法应用于编码器侧时,在得到目标预测值之后,可以根据目标预测值与真实值之间的差值,以确定出预测残差,然后将预测残差写入码流中;同时还可以根据当前块的待预测图像分量的参考值和当前块的待参考图像分量的参考值来计算得到预测模型的模型参数,然后将计算得到的模型参数写入码流中;该码流由编码器侧传输到解码器侧;对应地,当该图像预测方法应用于解码器侧时,可以通过解析码流来获得预测残差,而且还可以通过解析码流来获得预测模型的模型参数,从而构建出预测模型;这样,在解码器侧,仍然通过预测模型,获得当前块的待预测图像分量的初始预测值;再对初始预测值进行滤波处理,可以得到当前块的待预测图像分量的目标预测值。
本实施例提供了一种图像预测方法,通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值;这样,在对当前块的至少一个图像分量进行预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而不仅提高了预测效率,而且由于所得到的目标预测值更接近于真实值,使得图像分量的预测残差较小,这样在编解码过程中所传输的比特率少,同时还能够提高视频图像的编解码效率。
基于前述实施例相同的发明构思,参见图6,其示出了本申请实施例提供的一种编码器60的组成结构示意图。该编码器60可以包括第一预测单元601和第一处理单元602,其中,
所述第一预测单元601,配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
所述第一处理单元602,配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
在上述方案中,参见图6,编码器60还可以包括第一统计单元603和第一获取单元604,其中,
所述第一统计单元603,配置为对所述当前块的至少一个图像分量进行特性统计;其中,所述至少一个图像分量包括待预测图像分量和/或待参考图像分量,所述待预测图像分量不同于所述待参考图像分量;
所述第一获取单元604,配置为根据特性统计的结果,获取所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值;其中,所述待预测图像分量为构建所述预测模型时所被预测的分量,所述待参考图像分量为构建所述预测模型时所用于预测的分量。
在上述方案中,所述第一处理单元602,配置为根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
所述第一获取单元604,配置为根据所述处理的结果,得到所述目标预测值。
在上述方案中,参见图6,编码器60还可以包括计算单元605,配置为基于所述初始预测分量值,计算得到所述当前块的待预测图像分量的初始预测残差;
所述第一处理单元602,还配置为根据所述当前块的待预测图像分量的参考值和/ 或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
所述第一获取单元604,还配置为根据所述处理的结果,得到所述目标预测残差。
在上述方案中,所述计算单元605,还配置为根据所述目标预测残差,计算得到所述当前块的待预测图像分量的目标预测值。
在上述方案中,参见图6,编码器60还可以包括第一确定单元606和第一构建单元607,其中,
所述第一确定单元606,配置为确定所述当前块的待预测图像分量的参考值,其中,所述当前块的待预测图像分量的参考值是所述当前块相邻像素的所述待预测图像分量值;以及确定所述当前块的待参考图像分量的参考值,其中,所述当前块的待参考图像分量不同于所述待预测图像分量,所述当前块的待参考图像分量的参考值是所述当前块相邻像素的所述参考图像分量值;
所述计算单元605,还配置为根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数;
所述第一构建单元607,配置为根据计算得到的模型参数,构建所述预测模型,其中,所述预测模型用于根据所述当前块的待参考图像分量对所述当前块的待预测图像分量进行跨分量预测处理。
在上述方案中,参见图6,编码器60还可以包括第一调整单元608,配置为当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨率不同时,对所述待参考图像分量的分辨率进行分辨率调整;其中,所述分辨率调整包括上采样调整或下采样调整;以及基于调整后的所述待参考图像分量的分辨率,对所述当前块的待参考图像分量的参考值进行更新,得到所述当前块的待参考图像分量的第一参考值;其中,调整后的所述待参考图像分量的分辨率与所述待预测图像分量的分辨率相同。
在上述方案中,所述第一调整单元608,还配置为当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨率不同时,对所述当前块的待参考图像分量的参考值进行调整处理,得到所述当前块的待参考图像分量的第一参考值,其中,所述调整处理包括下述其中之一:下采样滤波,上采样滤波,下采样滤波及低通滤波的级联滤波,上采样滤波及低通滤波的级联滤波。
在上述方案中,所述计算单元605,还配置为根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的第一参考值,计算所述预测模型的模型参数。
在上述方案中,所述第一处理单元602,还配置为根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,得到所述目标预测值;其中,所述当前块的待预测图像分量的参考值是通过对所述图像的待预测图像分量或所述当前块的待预测图像分量进行特性统计得到。
在上述方案中,所述第一处理单元602,还配置为根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
在上述方案中,所述计算单元605,还配置为利用所述初始预测值,计算得到所述当前块的待预测图像分量的初始预测残差;
所述第一处理单元602,还配置为根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波 处理和自适应滤波处理。
在上述方案中,所述第一统计单元603,还配置为对所述图像的待预测图像分量进行特性统计;
所述第一确定单元606,还配置为根据所述特性统计的结果,确定所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值;其中,所述待参考图像分量不同于所述待预测图像分量;
所述计算单元605,还配置为根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数。
在上述方案中,所述第一处理单元602,还配置为根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
可以理解地,在本申请实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中,基于这样的理解,本实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或processor(处理器)执行本实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
因此,本申请实施例提供了一种计算机存储介质,该计算机存储介质存储有图像预测程序,所述图像预测程序被至少一个处理器执行时实现前述实施例所述方法的步骤。
基于上述编码器60的组成以及计算机存储介质,参见图7,其示出了本申请实施例提供的编码器60的具体硬件结构,可以包括:第一通信接口701、第一存储器702和第一处理器703;各个组件通过第一总线系统704耦合在一起。可理解,第一总线系统704用于实现这些组件之间的连接通信。第一总线系统704除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图7中将各种总线都标为第一总线系统704。其中,
第一通信接口701,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第一存储器702,用于存储能够在第一处理器703上运行的计算机程序;
第一处理器703,用于在运行所述计算机程序时,执行:
通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
可以理解,本申请实施例中的第一存储器702可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random  Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本申请描述的系统和方法的第一存储器702旨在包括但不限于这些和任意其它适合类型的存储器。
而第一处理器703可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过第一处理器703中的硬件的集成逻辑电路或者软件形式的指令完成。上述的第一处理器703可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于第一存储器702,第一处理器703读取第一存储器702中的信息,结合其硬件完成上述方法的步骤。
可以理解的是,本申请描述的这些实施例可以用硬件、软件、固件、中间件、微码或其组合来实现。对于硬件实现,处理单元可以实现在一个或多个专用集成电路(Application Specific Integrated Circuits,ASIC)、数字信号处理器(Digital Signal Processing,DSP)、数字信号处理设备(DSP Device,DSPD)、可编程逻辑设备(Programmable Logic Device,PLD)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、通用处理器、控制器、微控制器、微处理器、用于执行本申请所述功能的其它电子单元或其组合中。对于软件实现,可通过执行本申请所述功能的模块(例如过程、函数等)来实现本申请所述的技术。软件代码可存储在存储器中并通过处理器执行。存储器可以在处理器中或在处理器外部实现。
可选地,作为另一个实施例,第一处理器703还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
本实施例提供了一种编码器,该编码器可以包括第一预测单元和第一处理单元,其中,第一预测单元配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;第一处理单元配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值;这样,在对当前块的至少一个图像分量进行预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而不仅提高了预测效率,而且由于所得到的目标预测值更接近于真实值,使得图像分量的预测残差较小,这样在编解码过程中所传输的比特率少,同时还能够提高视频图像的编解码效率。
基于前述实施例相同的发明构思,参见图8,其示出了本申请实施例提供的一种解码器80的组成结构示意图。该解码器80可以包括第二预测单元801和第二处理单元802,其中,
所述第二预测单元801,配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
所述第二处理单元802,配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
在上述方案中,参见图8,解码器80还可以包括第二统计单元803和第二获取单元804,其中,
所述第二统计单元803,配置为对所述当前块的至少一个图像分量进行特性统计;其中,所述至少一个图像分量包括待预测图像分量和/或待参考图像分量,所述待预测图像分量不同于所述待参考图像分量;
所述第二获取单元804,配置为根据特性统计的结果,获取所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值;其中,所述待预测图像分量为构建所述预测模型时所被预测的分量,所述待参考图像分量为构建所述预测模型时所用于预测的分量。
在上述方案中,所述第二处理单元802,配置为根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
所述第二获取单元804,配置为根据所述处理的结果,得到所述目标预测值。
在上述方案中,参见图8,解码器80还可以包括解析单元805,配置为解析码流,获得所述当前块的待预测图像分量的初始预测残差;
所述第二处理单元802,还配置为根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
所述第二获取单元804,还配置为根据所述处理的结果,得到所述目标预测残差。
在上述方案中,参见图8,解码器80还可以包括第二构建单元806,其中,
所述解析单元805,还配置为解析码流,获得所述预测模型的模型参数;
所述第二构建单元806,配置为根据解析得到的模型参数,构建所述预测模型,其中,所述预测模型用于根据所述当前块的待参考图像分量对所述当前块的待预测图像分量进行跨分量预测处理。
在上述方案中,参见图8,解码器80还可以包括第二调整单元807,配置为当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨率不同时,对所述待参考图像分量的分辨率进行分辨率调整;其中,所述分辨率调整包括上采样调整或下采样调整;以及基于调整后的所述待参考图像分量的分辨率,对所述当前块的待参考图像分量的参考值进行更新,得到所述当前块的待参考图像分量的第一参考值;其中,调整后的所述待参考图像分量的分辨率与所述待预测图像分量的分辨率相同。
在上述方案中,所述第二调整单元807,还配置为当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨率不同时,对所述当前块的待参考图像分量的参考值进行调整处理,得到所述当前块的待参考图像分量的第一参考值,其中,所述调整处理包括下述其中之一:下采样滤波,上采样滤波,下采样滤波及低通滤波的级联滤波,上采样滤波及低通滤波的级联滤波。
在上述方案中,所述第二处理单元802,还配置为根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,得到所述目标预测值;其中,所述当前块的待预测图像分量的参考值是通过对所述图像的待预测图像分量或所述当前块的待预测图像分量进行特性统计得到。
在上述方案中,所述第二处理单元802,还配置为根据所述当前块的待预测图像分 量的参考值,利用预设处理模式对所述初始预测值进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
在上述方案中,所述解析单元805,配置为解析码流,获得所述当前块的待预测图像分量的初始预测残差;
所述第二处理单元802,还配置为根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
在上述方案中,参见图8,解码器80还可以包括第二确定单元808,其中,
所述第二统计单元803,还配置为对所述图像的待预测图像分量进行特性统计;
所述第二确定单元808,还配置为根据所述特性统计的结果,确定所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值;其中,所述待参考图像分量不同于所述待预测图像分量。
在上述方案中,所述第二处理单元802,还配置为根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
可以理解地,在本实施例中,“单元”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是模块,还可以是非模块化的。而且在本实施例中的各组成部分可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
所述集成的单元如果以软件功能模块的形式实现并非作为独立的产品进行销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本实施例提供了一种计算机存储介质,该计算机存储介质存储有图像预测程序,所述图像预测程序被第二处理器执行时实现前述实施例中任一项所述的方法。
基于上述解码器80的组成以及计算机存储介质,参见图9,其示出了本申请实施例提供的解码器80的具体硬件结构,可以包括:第二通信接口901、第二存储器902和第二处理器903;各个组件通过第二总线系统904耦合在一起。可理解,第二总线系统904用于实现这些组件之间的连接通信。第二总线系统904除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图9中将各种总线都标为第二总线系统904。其中,
第二通信接口901,用于在与其他外部网元之间进行收发信息过程中,信号的接收和发送;
第二存储器902,用于存储能够在第二处理器903上运行的计算机程序;
第二处理器903,用于在运行所述计算机程序时,执行:
通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
可选地,作为另一个实施例,第二处理器903还配置为在运行所述计算机程序时,执行前述实施例中任一项所述的方法。
可以理解,第二存储器902与第一存储器702的硬件功能类似,第二处理器903与第一处理器703的硬件功能类似;这里不再详述。
本实施例提供了一种解码器,该解码器可以包括第二预测单元和第二处理单元,其 中,第二预测单元配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;第二处理单元配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值;这样,在对当前块的至少一个图像分量进行预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而不仅提高了预测效率,而且还可以提高视频图像的编解码效率。
需要说明的是,在本申请中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例中,首先通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;然后对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值;这样,在对当前块的至少一个图像分量进行预测之后继续对该至少一个图像分量进行滤波处理,可以平衡跨分量间预测之后各图像分量的统计特性,从而不仅提高了预测效率,而且由于所得到的目标预测值更接近于真实值,使得图像分量的预测残差较小,这样在编解码过程中所传输的比特率少,同时还能够提高视频图像的编解码效率。

Claims (19)

  1. 一种图像预测方法,应用于编码器或解码器,所述方法包括:
    通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
    对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
  2. 根据权利要求1所述的方法,其中,在所述对所述初始预测值进行滤波处理之前,所述方法还包括:
    对所述当前块的至少一个图像分量进行特性统计;其中,所述至少一个图像分量包括待预测图像分量和/或待参考图像分量,所述待预测图像分量不同于所述待参考图像分量;
    根据特性统计的结果,获取所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值;其中,所述待预测图像分量为构建所述预测模型时所被预测的分量,所述待参考图像分量为构建所述预测模型时所用于预测的分量。
  3. 根据权利要求2所述的方法,其中,所述对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值,包括:
    根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
    根据所述处理的结果,得到所述目标预测值。
  4. 根据权利要求2所述的方法,其中,在所述通过预测模型,获得图像中当前块的待预测图像分量的初始预测值之后,所述方法还包括:
    基于所述初始预测分量值,计算得到所述当前块的待预测图像分量的初始预测残差;
    根据所述当前块的待预测图像分量的参考值和/或所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理和去量化处理;
    根据所述处理的结果,得到所述目标预测残差。
  5. 根据权利要求4所述的方法,其中,所述得到所述当前块的待预测图像分量的目标预测值,包括:
    根据所述目标预测残差,计算得到所述当前块的待预测图像分量的目标预测值。
  6. 根据权利要求1至5任一项所述的方法,其中,在所述通过预测模型,获得图像中当前块的待预测图像分量的初始预测值之前,所述方法还包括:
    确定所述当前块的待预测图像分量的参考值,其中,所述当前块的待预测图像分量的参考值是所述当前块相邻像素的所述待预测图像分量值;
    确定所述当前块的待参考图像分量的参考值,其中,所述当前块的待参考图像分量不同于所述待预测图像分量,所述当前块的待参考图像分量的参考值是所述当前块相邻像素的所述参考图像分量值;
    根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数;
    根据计算得到的模型参数,构建所述预测模型,其中,所述预测模型用于根据所述当前块的待参考图像分量对所述当前块的待预测图像分量进行跨分量预测处理。
  7. 根据权利要求6所述的方法,其中,在所述计算所述预测模型的模型参数之前,所述方法还包括:
    当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨 率不同时,对所述待参考图像分量的分辨率进行分辨率调整;其中,所述分辨率调整包括上采样调整或下采样调整;
    基于调整后的所述待参考图像分量的分辨率,对所述当前块的待参考图像分量的参考值进行更新,得到所述当前块的待参考图像分量的第一参考值;其中,调整后的所述待参考图像分量的分辨率与所述待预测图像分量的分辨率相同。
  8. 根据权利要求6所述的方法,其中,在所述计算所述预测模型的模型参数之前,所述方法还包括:
    当所述当前块的待预测图像分量的分辨率与所述当前块的待参考图像分量的分辨率不同时,对所述当前块的待参考图像分量的参考值进行调整处理,得到所述当前块的待参考图像分量的第一参考值,其中,所述调整处理包括下述其中之一:下采样滤波,上采样滤波,下采样滤波及低通滤波的级联滤波,上采样滤波及低通滤波的级联滤波。
  9. 根据权利要求7或8所述的方法,其中,所述根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数,包括:
    根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的第一参考值,计算所述预测模型的模型参数。
  10. 根据权利要求1所述的方法,其中,所述对所述初始预测值进行滤波处理,包括:
    根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,得到所述目标预测值;其中,所述当前块的待预测图像分量的参考值是通过对所述图像的待预测图像分量或所述当前块的待预测图像分量进行特性统计得到。
  11. 根据权利要求10所述的方法,其中,所述根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,包括:
    根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
  12. 根据权利要求10所述的方法,其中,所述根据所述当前块的待预测图像分量的参考值对所述初始预测值进行滤波处理,包括:
    利用所述初始预测值,计算得到所述当前块的待预测图像分量的初始预测残差;
    根据所述当前块的待预测图像分量的参考值,利用预设处理模式对所述初始预测残差进行滤波处理;其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
  13. 根据权利要求1所述的方法,其中,在所述通过预测模型,获得图像中当前块的待预测图像分量的初始预测值之前,所述方法还包括:
    对所述图像的待预测图像分量进行特性统计;
    根据所述特性统计的结果,确定所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值;其中,所述待参考图像分量不同于所述待预测图像分量;
    根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,计算所述预测模型的模型参数。
  14. 根据权利要求13所述的方法,其中,所述方法还包括:
    根据所述当前块的待预测图像分量的参考值和所述当前块的待参考图像分量的参考值,利用预设处理模式对所述初始预测值进行滤波处理,其中,所述预设处理模式至少包括下述其中之一:过滤处理、分组处理、值修正处理、量化处理、反量化处理、低通滤波处理和自适应滤波处理。
  15. 一种编码器,所述编码器包括第一预测单元和第一处理单元,其中,
    所述第一预测单元,配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
    所述第一处理单元,配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
  16. 一种编码器,所述编码器包括第一存储器和第一处理器,其中,
    所述第一存储器,用于存储能够在所述第一处理器上运行的计算机程序;
    所述第一处理器,用于在运行所述计算机程序时,执行如权利要求1至14任一项所述的方法。
  17. 一种解码器,所述解码器包括第二预测单元和第二处理单元,其中,
    所述第二预测单元,配置为通过预测模型,获得图像中当前块的待预测图像分量的初始预测值;
    所述第二处理单元,配置为对所述初始预测值进行滤波处理,得到所述当前块的待预测图像分量的目标预测值。
  18. 一种解码器,所述解码器包括第二存储器和第二处理器,其中,
    所述第二存储器,用于存储能够在所述第二处理器上运行的计算机程序;
    所述第二处理器,用于在运行所述计算机程序时,执行如权利要求1至14任一项所述的方法。
  19. 一种计算机存储介质,其中,所述计算机存储介质存储有图像预测程序,所述图像预测程序被第一处理器或第二处理器执行时实现如权利要求1至14任一项所述的方法。
PCT/CN2019/110834 2019-03-25 2019-10-12 图像预测方法、编码器、解码器以及存储介质 WO2020192085A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
CN201980083504.5A CN113196765A (zh) 2019-03-25 2019-10-12 图像预测方法、编码器、解码器以及存储介质
KR1020217032518A KR20210139328A (ko) 2019-03-25 2019-10-12 화상 예측 방법, 인코더, 디코더 및 저장 매체
CN202111096320.8A CN113784128B (zh) 2019-03-25 2019-10-12 图像预测方法、编码器、解码器以及存储介质
CN202310354072.5A CN116320472A (zh) 2019-03-25 2019-10-12 图像预测方法、编码器、解码器以及存储介质
EP19922170.6A EP3944621A4 (en) 2019-03-25 2019-10-12 FRAME PREDICTION METHOD, ENCODER, DECODER, AND STORAGE MEDIA
JP2021557118A JP7480170B2 (ja) 2019-03-25 2019-10-12 画像予測方法、エンコーダー、デコーダー及び記憶媒体
US17/483,507 US20220014772A1 (en) 2019-03-25 2021-09-23 Method for picture prediction, encoder, and decoder
JP2024069934A JP2024095842A (ja) 2019-03-25 2024-04-23 画像予測方法、エンコーダー、デコーダー及び記憶媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962823613P 2019-03-25 2019-03-25
US62/823,613 2019-03-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/483,507 Continuation US20220014772A1 (en) 2019-03-25 2021-09-23 Method for picture prediction, encoder, and decoder

Publications (1)

Publication Number Publication Date
WO2020192085A1 true WO2020192085A1 (zh) 2020-10-01

Family

ID=72611276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110834 WO2020192085A1 (zh) 2019-03-25 2019-10-12 图像预测方法、编码器、解码器以及存储介质

Country Status (6)

Country Link
US (1) US20220014772A1 (zh)
EP (1) EP3944621A4 (zh)
JP (2) JP7480170B2 (zh)
KR (1) KR20210139328A (zh)
CN (3) CN116320472A (zh)
WO (1) WO2020192085A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110572673B (zh) * 2019-09-27 2024-04-09 腾讯科技(深圳)有限公司 视频编解码方法和装置、存储介质及电子装置
WO2021136498A1 (en) * 2019-12-31 2021-07-08 Beijing Bytedance Network Technology Co., Ltd. Multiple reference line chroma prediction
CN116472707A (zh) * 2020-09-30 2023-07-21 Oppo广东移动通信有限公司 图像预测方法、编码器、解码器以及计算机存储介质
WO2024148016A1 (en) * 2023-01-02 2024-07-11 Bytedance Inc. Method, apparatus, and medium for video processing
CN117528098B (zh) * 2024-01-08 2024-03-26 北京小鸟科技股份有限公司 基于深压缩码流提升画质的编解码系统、方法及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106664425A (zh) * 2014-06-20 2017-05-10 高通股份有限公司 视频译码中的跨分量预测
CN106717004A (zh) * 2014-10-10 2017-05-24 高通股份有限公司 视频译码中的跨分量预测和自适应色彩变换的协调
CN107079157A (zh) * 2014-09-12 2017-08-18 Vid拓展公司 用于视频编码的分量间去相关
CN107211124A (zh) * 2015-01-27 2017-09-26 高通股份有限公司 适应性跨分量残差预测
WO2018132710A1 (en) * 2017-01-13 2018-07-19 Qualcomm Incorporated Coding video data using derived chroma mode

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008048489A2 (en) * 2006-10-18 2008-04-24 Thomson Licensing Method and apparatus for video coding using prediction data refinement
KR101455647B1 (ko) * 2007-09-14 2014-10-28 삼성전자주식회사 컬러 기반으로 예측 값을 보정하는 방법 및 장치, 이것을이용한 이미지 압축/복원 방법과 장치
GB2501535A (en) * 2012-04-26 2013-10-30 Sony Corp Chrominance Processing in High Efficiency Video Codecs
US20140192862A1 (en) * 2013-01-07 2014-07-10 Research In Motion Limited Methods and systems for prediction filtering in video coding
US20150373362A1 (en) * 2014-06-19 2015-12-24 Qualcomm Incorporated Deblocking filter design for intra block copy
CN107079166A (zh) * 2014-10-28 2017-08-18 联发科技(新加坡)私人有限公司 用于视频编码的引导交叉分量预测的方法
US10045023B2 (en) * 2015-10-09 2018-08-07 Telefonaktiebolaget Lm Ericsson (Publ) Cross component prediction in video coding
US10484712B2 (en) * 2016-06-08 2019-11-19 Qualcomm Incorporated Implicit coding of reference line index used in intra prediction
WO2018070914A1 (en) * 2016-10-12 2018-04-19 Telefonaktiebolaget Lm Ericsson (Publ) Residual refinement of color components
US11184636B2 (en) * 2017-06-28 2021-11-23 Sharp Kabushiki Kaisha Video encoding device and video decoding device
WO2020036130A1 (ja) * 2018-08-15 2020-02-20 日本放送協会 イントラ予測装置、画像符号化装置、画像復号装置、及びプログラム

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106664425A (zh) * 2014-06-20 2017-05-10 高通股份有限公司 视频译码中的跨分量预测
CN107079157A (zh) * 2014-09-12 2017-08-18 Vid拓展公司 用于视频编码的分量间去相关
CN106717004A (zh) * 2014-10-10 2017-05-24 高通股份有限公司 视频译码中的跨分量预测和自适应色彩变换的协调
CN107211124A (zh) * 2015-01-27 2017-09-26 高通股份有限公司 适应性跨分量残差预测
WO2018132710A1 (en) * 2017-01-13 2018-07-19 Qualcomm Incorporated Coding video data using derived chroma mode

Also Published As

Publication number Publication date
EP3944621A1 (en) 2022-01-26
KR20210139328A (ko) 2021-11-22
US20220014772A1 (en) 2022-01-13
JP2024095842A (ja) 2024-07-10
CN113784128B (zh) 2023-04-21
JP7480170B2 (ja) 2024-05-09
CN116320472A (zh) 2023-06-23
CN113196765A (zh) 2021-07-30
EP3944621A4 (en) 2022-05-18
CN113784128A (zh) 2021-12-10
JP2022528635A (ja) 2022-06-15

Similar Documents

Publication Publication Date Title
WO2020192085A1 (zh) 图像预测方法、编码器、解码器以及存储介质
CN113068028B (zh) 视频图像分量的预测方法、装置及计算机存储介质
WO2021120122A1 (zh) 图像分量预测方法、编码器、解码器以及存储介质
WO2021004155A1 (zh) 图像分量预测方法、编码器、解码器以及存储介质
WO2021134706A1 (zh) 环路滤波的方法与装置
WO2020186763A1 (zh) 图像分量预测方法、编码器、解码器以及存储介质
CN113068025A (zh) 解码预测方法、装置及计算机存储介质
WO2020192084A1 (zh) 图像预测方法、编码器、解码器以及存储介质
US12101495B2 (en) Colour component prediction method, encoder, decoder, and computer storage medium
WO2020056767A1 (zh) 视频图像分量的预测方法、装置及计算机存储介质
RU2805048C2 (ru) Способ предсказания изображения, кодер и декодер
CN112970257A (zh) 解码预测方法、装置及计算机存储介质
JP2024147829A (ja) 画像予測方法、エンコーダ、デコーダ及び記憶媒体
RU2827054C2 (ru) Способ предсказания изображения, кодер, декодер
WO2021174396A1 (zh) 图像预测方法、编码器、解码器以及存储介质
WO2024011370A1 (zh) 视频图像处理方法及装置、编解码器、码流、存储介质
TW202404349A (zh) 一種濾波方法、解碼器、編碼器及電腦可讀儲存媒介

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19922170

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021557118

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217032518

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019922170

Country of ref document: EP

Effective date: 20211015