WO2023185806A9 - Image encoding and decoding method and apparatus, electronic device, and storage medium - Google Patents

Image encoding and decoding method and apparatus, electronic device, and storage medium Download PDF

Info

Publication number
WO2023185806A9
Authority
WO
WIPO (PCT)
Prior art keywords
block
decoded
residual
pixel
prediction mode
Prior art date
Application number
PCT/CN2023/084295
Other languages
English (en)
French (fr)
Other versions
WO2023185806A1 (zh)
Inventor
陈方栋
魏亮
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Publication of WO2023185806A1
Publication of WO2023185806A9


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/182 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a pixel

Definitions

  • the present application relates to the technical field of image coding and decoding, and in particular to an image coding and decoding method, device, electronic equipment and storage medium.
  • Video sequences have a series of redundant information such as spatial redundancy, temporal redundancy, visual redundancy, information entropy redundancy, structural redundancy, knowledge redundancy, and importance redundancy.
  • therefore, video coding technology has been proposed to reduce storage space and save transmission bandwidth.
  • Video encoding technology is also called video compression technology.
  • the present application provides an image encoding and decoding method, device, electronic equipment and storage medium.
  • the image encoding and decoding method can improve the efficiency of image encoding and decoding.
  • the present application provides an image decoding method, which includes: parsing a code stream of a block to be decoded to determine a target prediction mode for predicting pixels in the block to be decoded; determining, based on the target prediction mode, a target prediction order corresponding to the target prediction mode; predicting, according to the target prediction mode, each pixel in the block to be decoded in the target prediction order; and reconstructing each pixel based on the predicted value of each pixel to obtain a reconstructed block of the block to be decoded.
  • according to the image decoding method provided by this application, when the decoder predicts pixels in the block to be decoded in the target prediction mode, it predicts them in the target prediction order corresponding to that mode. In this process, when the decoder predicts any pixel in the block to be decoded, the pixels used to predict that pixel have already been reconstructed. Therefore, the image decoding method provided by this application, by predicting each pixel in the block to be decoded in the target prediction order according to the target prediction mode, can predict some pixels in the block to be decoded in parallel, improving the efficiency of predicting pixels in the block to be decoded.
  • the decoding method can not only save the cache space used to cache residual values, but also improve decoding efficiency.
  • when any pixel in the block to be decoded is predicted in the above target prediction order, the pixels used to predict that pixel have already been reconstructed.
  • Using reconstructed pixels for pixel prediction can make the pixels used in the prediction during the decoding process consistent with the pixels used in the encoding process, thereby reducing the decoding error, making the prediction value of the pixel during the decoding process more accurate, and improving the accuracy of decoding.
  • in a possible design, if the target prediction mode indicates that each pixel in the block to be decoded is predicted point by point in the target prediction order, then predicting each pixel in the block to be decoded in the target prediction order includes: predicting each pixel in the block to be decoded point by point along the direction indicated by the target prediction order, according to the target prediction mode.
  • different prediction modes correspond to different prediction orders.
  • selecting the prediction order suitable for different prediction modes can reduce the difference between the pixels and the predicted values in the block to be encoded, so that the block to be encoded is encoded with fewer bits.
  • the decoding method provided by this possible design can further improve the efficiency of image decoding.
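The idea that each prediction mode has a prediction order under which the reference pixels are always already reconstructed can be sketched as follows. This is a minimal illustration, assuming two hypothetical modes: "vertical" predicts each pixel from the reconstructed pixel directly above it (so rows are processed top to bottom, and all pixels of a row are independent of one another), and "horizontal" predicts from the reconstructed left neighbour (so columns are processed left to right). The patent does not disclose its concrete modes or orders.

```python
def predict_block(recon_above, recon_left, residuals, mode):
    """Point-by-point reconstruction of a block in the order suited to the
    prediction mode (hypothetical modes, for illustration only).

    recon_above: reconstructed row above the block (length = block width)
    recon_left:  reconstructed column left of the block (length = block height)
    residuals:   2-D list of residual values for the block
    """
    h, w = len(residuals), len(residuals[0])
    block = [[0] * w for _ in range(h)]
    if mode == "vertical":
        for r in range(h):
            for c in range(w):  # pixels of a row are independent -> parallelisable
                pred = recon_above[c] if r == 0 else block[r - 1][c]
                block[r][c] = pred + residuals[r][c]
    else:  # "horizontal"
        for c in range(w):
            for r in range(h):  # pixels of a column are independent -> parallelisable
                pred = recon_left[r] if c == 0 else block[r][c - 1]
                block[r][c] = pred + residuals[r][c]
    return block
```

Because the reference pixel of every prediction is reconstructed before it is needed, the inner loop of each order could run in parallel, which is the efficiency gain described above.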
  • in a possible design, when the size of the block to be decoded is a first size, a third prediction order is used to predict the block to be decoded in the above target prediction mode; when the size of the block to be decoded is a second size, a fourth prediction order is used to predict the block to be decoded in the above target prediction mode; and the third prediction order and the fourth prediction order are different. In other words, when the sizes of blocks to be decoded are different, even if the same prediction mode is used to predict the pixels in them, the prediction order may also be different.
  • in this way, selecting a prediction order suited to the size of the block to be encoded can reduce the difference between the pixels and the predicted values in the block to be encoded, so that the block to be encoded is encoded with fewer bits. Correspondingly, fewer bits need to be decoded on the decoding side, and decoding efficiency is improved. Therefore, the decoding method provided by this possible design can further improve the efficiency of image decoding.
  • in a possible design, if the target prediction mode indicates that the pixels of each sub-block in the block to be decoded are predicted sequentially, in units of sub-blocks with a preset size, then predicting each pixel in the block to be decoded in the target prediction order includes: sequentially predicting the pixels of each sub-block in the block to be decoded along the direction indicated by the target prediction order, according to the target prediction mode.
  • in a possible design, the above target prediction mode includes the prediction mode of each sub-block in the block to be decoded. For a first sub-block in the block to be decoded, if the first sub-block includes a first pixel and a second pixel, the prediction mode of the first sub-block is used to predict the first pixel and the second pixel in parallel based on the reconstructed pixels around the first sub-block.
  • in this way, when the decoder predicts a sub-block, it can predict multiple pixels in the sub-block in parallel based on the reconstructed pixels around the sub-block. That is, this prediction mode can further improve the efficiency of the decoder in predicting the block to be decoded.
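The parallel sub-block prediction just described can be sketched for a hypothetical 1x2 sub-block whose two pixels are each predicted only from reconstructed pixels outside the sub-block, never from each other. The averaging predictor below is an assumption made purely for illustration; the patent does not specify the predictor.

```python
def predict_subblock_parallel(recon_above, recon_left):
    """Predict both pixels of a hypothetical 1x2 sub-block in parallel.

    recon_above: the two reconstructed pixels directly above the sub-block
    recon_left:  the reconstructed pixel to the left of the sub-block

    Neither prediction depends on the other pixel of the same sub-block,
    so the two computations have no data dependency and could run
    concurrently in hardware.
    """
    p0 = (recon_above[0] + recon_left) // 2  # first pixel
    p1 = (recon_above[1] + recon_left) // 2  # second pixel, independent of p0
    return p0, p1
```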
  • since the decoder obtains the reconstructed value based on the predicted value and the residual value, there is no need to cache the residual value. Therefore, the decoding methods provided by these two possible designs further save cache space and improve decoding efficiency.
  • in a possible design, reconstructing each pixel based on the predicted value of each pixel to obtain the reconstructed block of the block to be decoded includes: performing inverse quantization on the first residual block of the block to be decoded, obtained by parsing the code stream of the block to be decoded, based on the inverse quantization parameter of each pixel and the inverse quantization preset array, to obtain a second residual block; and reconstructing each pixel based on the predicted value of each pixel and the second residual block to obtain the reconstructed block.
  • in this way, inverse quantization can be performed based on a preset inverse quantization array, thereby reducing multiplication operations in the inverse quantization process. Since multiplication operations take a long time, reducing them improves the computational efficiency of the inverse quantization process; that is, the inverse quantization of the first residual block of the block to be decoded can be realized efficiently. Therefore, the decoding method provided by this possible design can further improve the efficiency of image decoding.
  • in a possible design, parsing the code stream of the block to be decoded includes: using a variable code length decoding method to parse the code stream of the block to be decoded to obtain each residual block corresponding to the block to be decoded.
  • the decoding method provided by this possible design can further improve the efficiency of image decoding.
  • the present application provides an image encoding method, which includes: determining a target prediction mode of a block to be encoded, and determining a target prediction order corresponding to the target prediction mode; predicting, according to the target prediction mode, each pixel in the block to be encoded in the target prediction order; determining the residual block of the block to be encoded based on the predicted value of each pixel; and encoding the residual block in the target prediction order to obtain the code stream of the block to be encoded.
  • according to the image coding method provided by this application, when the encoding end predicts the pixels in the block to be encoded in the target prediction mode, it predicts them in the target prediction order corresponding to that mode. In this process, when the encoding end predicts any pixel in the block to be encoded, the pixels used to predict that pixel have already been reconstructed. Therefore, the image coding method provided by this application, by predicting each pixel in the block to be encoded in the target prediction order according to the target prediction mode, can predict some pixels in the block to be encoded in parallel, improving the efficiency of predicting pixels in the block to be encoded.
  • since the encoding end obtains the reconstructed value based on the predicted value and the residual value, there is no need to cache the residual value. Therefore, the encoding method provided by the embodiment of the present application can not only save the cache space used to cache residual values, but also improve coding efficiency.
  • in a possible design, predicting each pixel in the block to be encoded in the target prediction order includes: predicting each pixel in the block to be encoded point by point along the direction indicated by the target prediction order, according to the target prediction mode.
  • when the target prediction mode is a first prediction mode, the target prediction order is a first prediction order; when the target prediction mode is a second prediction mode, the target prediction order is a second prediction order; and the first prediction order and the second prediction order are different. In other words, different prediction modes correspond to different prediction orders.
  • in a possible design, when the size of the block to be encoded is a first size, a third prediction order is used to predict the block to be encoded in the above target prediction mode; when the size of the block to be encoded is a second size, a fourth prediction order is used to predict the block to be encoded in the above target prediction mode; and the third prediction order and the fourth prediction order are different. In other words, when the sizes of blocks to be encoded are different, even if the same prediction mode is used to predict the pixels in them, the prediction order may also be different.
  • in a possible design, if the above target prediction mode indicates that the pixels of each sub-block in the block to be encoded are predicted sequentially, in units of sub-blocks with a preset size, then predicting each pixel in the block to be encoded in the target prediction order includes: sequentially predicting the pixels of each sub-block in the block to be encoded along the direction indicated by the target prediction order, according to the target prediction mode.
  • in a possible design, the above target prediction mode includes the prediction mode of each sub-block in the block to be encoded. For a first sub-block in the block to be encoded, if the first sub-block includes a first pixel and a second pixel, the prediction mode of the first sub-block is used to predict the first pixel and the second pixel in parallel based on the reconstructed pixels around the first sub-block.
  • in a possible design, before the residual block is encoded in the target prediction order to obtain the code stream of the block to be encoded, the method further includes: determining the quantization parameter (QP) of each pixel in the block to be encoded; and quantizing the residual block of the block to be encoded based on the QP of each pixel and the quantization preset array to obtain a first residual block.
  • the above-mentioned encoding of the residual block in the target prediction order to obtain the code stream of the block to be encoded includes: encoding the first residual block in the target prediction order to obtain the code stream of the block to be encoded.
  • in a possible design, encoding the first residual block in the target prediction order to obtain the code stream of the block to be encoded includes: encoding the first residual block in the target prediction order using a variable code length encoding method to obtain the code stream of the block to be encoded.
  • the image encoding method provided by the second aspect and any possible design thereof corresponds to the image decoding method provided by the first aspect and any possible design thereof. Therefore, for the beneficial effects of the technical solutions of the second aspect and its possible designs, please refer to the description of the beneficial effects of the corresponding method in the first aspect; they are not repeated here.
  • the present application provides an image decoding method, which method includes: parsing the code stream of the block to be decoded, and obtaining the inverse quantization parameter of each pixel in the block to be decoded and the first residual block of the block to be decoded. Based on the QP indicated by the inverse quantization parameter of each pixel and the inverse quantization preset array, the first residual block is inversely quantized to obtain the second residual block. The block to be decoded is reconstructed based on the second residual block to obtain a reconstructed block.
  • the quantization processing method implemented based on the quantization preset array provided by this application reduces multiplication operations in the quantization process. Since multiplication operations occupy substantial computing resources, reducing them lowers the computing resources consumed during quantization, greatly saving computing resources on the decoding end. In addition, because multiplication operations are slow, the efficiency of the quantization process in this image decoding method is greatly improved compared with the existing technology, so the image decoding method greatly improves image decoding efficiency.
  • in a possible design, parsing the code stream of the block to be decoded and obtaining the inverse quantization parameter of each pixel in the block to be decoded and the first residual block of the block to be decoded includes: determining, based on the code stream of the block to be decoded, the target prediction mode for predicting the pixels in the block to be decoded and the inverse quantization parameter of each pixel in the block to be decoded; determining, based on the target prediction mode, the residual scanning order corresponding to the target prediction mode; and parsing the code stream of the block to be decoded based on the residual scanning order to obtain the first residual block.
  • when the target prediction mode is a first target prediction mode, the residual scanning order is a first scanning order; when the target prediction mode is a second target prediction mode, the residual scanning order is a second scanning order; and the first scanning order is different from the second scanning order.
  • different prediction modes correspond to different residual scanning orders.
  • in a possible design, when the size of the block to be decoded is a first size, a third scanning order is used to parse the code stream of the block to be decoded in the above target prediction mode; when the size of the block to be decoded is a second size, a fourth scanning order is used to parse the code stream of the block to be decoded in the target prediction mode; wherein the third scanning order and the fourth scanning order are different.
  • in this way, the scanning order can be selected to match the target prediction mode, so that the residual block can be encoded with fewer bits during encoding; that is, the block to be encoded can be encoded with fewer bits. Correspondingly, fewer bits need to be decoded on the decoding side, and decoding efficiency is improved. Therefore, the decoding method provided by this possible design can further improve the efficiency of image decoding.
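A mode-dependent residual scan can be sketched as follows. The two concrete scans here (row-major and column-major) are assumptions for illustration; the point is only that the scan chosen per prediction mode flattens the 2-D residual block so that small residuals tend to cluster, which later entropy coding can exploit.

```python
def scan_residuals(residual_block, mode):
    """Flatten a 2-D residual block in a scanning order selected by the
    (hypothetical) prediction mode: "first" -> row-major raster scan,
    anything else -> column-major scan."""
    h, w = len(residual_block), len(residual_block[0])
    if mode == "first":
        return [residual_block[r][c] for r in range(h) for c in range(w)]
    return [residual_block[r][c] for c in range(w) for r in range(h)]
```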
  • in a possible design, the inverse quantization preset array has the following rule: the interval between two adjacent numbers among the 1st to nth numbers is 1; the interval between two adjacent numbers among the (n+1)th to (n+m)th numbers is 2; and the numerical interval between two adjacent numbers among the (n+k*m+1)th to (n+k*m+m)th numbers is 2^(k+1), where n and m are integers greater than 1 and k is a positive integer.
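The interval rule for the preset array can be illustrated by constructing one such array. The starting value (0), the boundary intervals between groups, and the group count are free choices not fixed by the rule; only the within-group intervals (1, then 2, then 2^(k+1)) are taken from the description above.

```python
def build_preset_array(n, m, groups):
    """Build an array obeying the stated interval rule: adjacent entries
    among the first n differ by 1, among the next m differ by 2, and among
    the k-th following group of m entries differ by 2**(k+1)
    (n, m > 1; k >= 1).  Start value and number of groups are illustrative."""
    arr = [0]
    for _ in range(n - 1):
        arr.append(arr[-1] + 1)                  # interval 1
    for _ in range(m):
        arr.append(arr[-1] + 2)                  # interval 2
    for k in range(1, groups + 1):
        for _ in range(m):
            arr.append(arr[-1] + 2 ** (k + 1))   # interval 2**(k+1)
    return arr
```

The geometric growth of the intervals mirrors how quantization step sizes typically double every fixed number of QP steps.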
  • in a possible design, inversely quantizing the first residual block to obtain the second residual block includes: determining, based on the QP of each pixel, the amplification coefficient corresponding to each pixel in the inverse quantization preset array; and performing an inverse quantization operation on the first residual block based on the amplification coefficient corresponding to each pixel to obtain the second residual block.
  • in a possible design, inversely quantizing the first residual block based on the QP of each pixel and the inverse quantization preset array to obtain the second residual block includes: determining, based on the QP of each pixel and the inverse quantization preset array, the amplification parameter and the shift parameter corresponding to each pixel; and performing an inverse quantization operation on the first residual block based on the amplification parameter and shift parameter corresponding to each pixel to obtain the second residual block.
  • the value of the amplification parameter corresponding to each pixel is the value in the inverse quantization preset array indexed by the bitwise AND of the pixel's QP and 7, and the value of the shift (displacement) parameter corresponding to each pixel is the difference between 7 and the quotient of the pixel's QP divided by 2^3.
  • in this way, the multiplication operations in the inverse quantization process can be further reduced based on the preset inverse quantization array. Since multiplication operations occupy relatively high computing resources, reducing them lowers the computing resources consumed during inverse quantization; that is, the inverse quantization processing method implemented based on the preset array provided by this application greatly saves computing resources on the decoding end. In addition, because multiplication operations are slow, the efficiency of the inverse quantization process in this image decoding method is greatly improved compared with the existing technology. Therefore, the decoding method provided by this possible design can further improve the efficiency of image decoding.
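The amplification-plus-shift dequantization described above can be sketched as follows. The 8-entry table values are hypothetical placeholders (the patent does not disclose them); the indexing by `qp & 7` and the shift `7 - qp // 8` follow the parameter definitions in the design above, with the assumption that 0 <= qp < 64 so the shift stays non-negative.

```python
# Hypothetical 8-entry amplification table (values are placeholders, roughly
# 128 * 2**(i/8), so that every 8 QP steps the effective step size doubles).
DEQ_AMP = [128, 140, 152, 166, 181, 197, 215, 235]

def dequantize(level, qp):
    """Inverse-quantize one residual level with a table lookup and a shift.

    amplification parameter = DEQ_AMP[qp & 7]
    shift parameter         = 7 - qp // 8   (7 minus the quotient of qp / 2**3)

    Replacing a general multiply/divide by the quantization step with a
    lookup plus shift is what removes most multiplications from the loop.
    """
    amp = DEQ_AMP[qp & 7]
    shift = 7 - (qp >> 3)
    return (level * amp) >> shift
```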
  • the above-mentioned reconstruction of the block to be decoded based on the second residual block to obtain the reconstructed block includes: performing inverse transformation on the second residual block to reconstruct the residual value block of the block to be decoded.
  • the block to be decoded is reconstructed based on the residual value block to obtain the reconstructed block.
  • in a possible design, reconstructing the block to be decoded based on the second residual block includes: predicting each pixel in the block to be decoded in the target prediction order corresponding to the target prediction mode, according to the target prediction mode.
  • the block to be decoded is reconstructed based on the predicted value of each pixel and the second residual block to obtain a reconstructed block.
  • the prediction efficiency can be improved when predicting the block to be decoded based on the target prediction mode provided by this application. Therefore, the decoding efficiency of the image is further improved through this possible design method.
  • in a possible design, parsing the code stream of the block to be decoded includes: using a variable code length decoding method to parse the code stream of the block to be decoded to obtain each residual block corresponding to the block to be decoded.
  • the decoding method provided by this possible design method can further improve the efficiency of image decoding.
  • the present application provides an image encoding method, which method includes: determining a second residual block of a block to be encoded and a quantization parameter QP of each pixel in the block to be encoded. Based on the QP of each pixel and the quantization preset array, the second residual block is quantized to obtain the first residual block. Encode the first residual block to obtain the code stream of the block to be encoded.
  • the multiplication operations in the quantization process are reduced by the quantization processing method implemented based on the quantization preset array provided by this application. Since multiplication takes up substantial computing resources, reducing multiplication operations lowers the computing resources consumed during quantization, greatly saving coding-side computing resources. And because multiplication operations take a long time, reducing them improves the computational efficiency of the quantization process; thus the image coding method greatly improves the coding efficiency of the image.
  • in a possible design, encoding the first residual block to obtain the code stream of the block to be encoded includes: determining the target prediction mode of the block to be encoded, and determining the residual scanning order corresponding to the target prediction mode; and encoding the first residual block in the residual scanning order to obtain the code stream of the block to be encoded. Wherein, when the target prediction mode is a first target prediction mode, the residual scanning order is a first scanning order; when the target prediction mode is a second target prediction mode, the residual scanning order is a second scanning order; and the first scanning order is different from the second scanning order.
  • in a possible design, when the size of the block to be encoded is a first size, a third scanning order is used to encode the block to be encoded in the target prediction mode; when the size of the block to be encoded is a second size, a fourth scanning order is used to encode the block to be encoded in the above target prediction mode; wherein the third scanning order and the fourth scanning order are different.
  • the above-mentioned quantization preset array includes an amplification parameter array and a displacement parameter array.
  • the amplification parameter array and the displacement parameter array include the same number of values.
  • the inverse quantization preset array composed of the quotients of shift[i] and amp[i] has the following rule: the interval between two adjacent numbers among the 1st to nth numbers is 1; the interval between two adjacent numbers among the (n+1)th to (n+m)th numbers is 2; and the numerical interval between two adjacent numbers among the (n+k*m+1)th to (n+k*m+m)th numbers is 2^(k+1); where n and m are integers greater than 1, and i and k are both positive integers.
  • in a possible design, quantizing the second residual block based on the QP of each pixel and the quantization preset array to obtain the first residual block includes: determining, based on the QP of each pixel, the amplification parameter of each pixel from the amplification parameter array and the shift parameter of each pixel from the shift parameter array; and performing a quantization operation on the second residual block based on the amplification parameter and shift parameter corresponding to each pixel to obtain the first residual block.
  • in a possible design, quantizing the second residual block based on the QP of each pixel and the quantization preset array to obtain the first residual block includes: determining, based on the QP of each pixel and the quantization preset array, the amplification parameter and shift parameter corresponding to each pixel; and performing a quantization operation on the second residual block based on the amplification parameter and shift parameter corresponding to each pixel to obtain the first residual block.
  • the value of the amplification parameter corresponding to each pixel is the value in the quantization preset array indexed by the bitwise AND of the pixel's QP and 7, and the value of the shift (displacement) parameter corresponding to each pixel is the sum of 7 and the quotient of the pixel's QP divided by 2^3.
  • in a possible design, the above second residual block is the original residual value block of the block to be encoded, or the above second residual block is a residual coefficient block obtained by transforming the residual value block.
  • the above-described determination of the second residual block of the block to be encoded includes: predicting each pixel in the block to be encoded in a target prediction order according to the target prediction mode.
  • a second residual block is determined based on the predicted value of each pixel in the block to be encoded.
  • in a possible design, encoding the first residual block to obtain the code stream of the block to be encoded includes: encoding the first residual block using a variable code length encoding method to obtain the code stream of the block to be encoded.
  • the image encoding method provided by the fourth aspect and any possible design thereof corresponds to the image decoding method provided by the third aspect and any possible design thereof. Therefore, for the beneficial effects of the technical solutions of the fourth aspect and its possible designs, please refer to the description of the beneficial effects of the corresponding method in the third aspect; they are not repeated here.
  • the present application provides an image coding method, which method includes: determining a residual block corresponding to a block to be coded.
  • the residual block is encoded using a variable code length encoding method to obtain the code stream of the block to be encoded.
  • according to the coding method provided by this application, when the coding end uses a variable code length coding method, for example a variable-order exponential Golomb coding algorithm, to code the residual block of the block to be coded, fewer bits can be used adaptively to encode smaller residual values, thereby saving bits. That is, while improving the compression rate of image encoding, the encoding method provided by this application also improves encoding efficiency.
  • in a possible design, when the above variable code length coding method includes a variable-order exponential Golomb coding method, using the variable code length coding method to encode the residual block to obtain the code stream of the block to be encoded includes: determining the attribute type of each pixel in the block to be encoded; for a first value in the residual block corresponding to a third pixel of the block to be encoded, determining, based on a preset strategy and the attribute type of the third pixel, a target order for encoding the first value; and encoding the first value using the exponential Golomb coding algorithm of the target order to obtain the code stream.
  • in a possible design, when the variable code length coding method includes a preset-order exponential Golomb coding method, using the variable code length coding method to encode the residual block to obtain the code stream of the block to be encoded includes: for a first value in the residual block corresponding to a third pixel of the block to be encoded, encoding the first value using the exponential Golomb coding algorithm of the preset order to obtain the code stream.
  • in this way, variable-length coding of the residual values in the residual block is achieved through variable-order or preset-order exponential Golomb coding. This coding method can adaptively use fewer bits to encode smaller residual values, thereby achieving the purpose of saving bits.
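Order-k exponential Golomb coding, the variable code length method named above, can be sketched with the standard construction (this is the textbook code, not a disclosure of the patent's exact bitstream):

```python
def exp_golomb_encode(value, k):
    """Encode a non-negative integer with order-k exponential Golomb coding.

    Standard construction: add 2**k to the value, write the sum in binary,
    and prefix it with (bit_length - k - 1) zeros.  Order 0 is the classic
    Exp-Golomb code; a larger order spends fewer bits on larger values,
    which is why varying the order can adapt to the residual statistics.
    """
    x = value + (1 << k)
    bits = x.bit_length()
    return "0" * (bits - k - 1) + format(x, "b")
```

For example, small values under order 0 get the shortest codes ("1" for 0), while a higher order flattens the length growth for larger values.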
  • the above method further includes: determining the semantic elements corresponding to the residual block, where the semantic elements include the encoding code length CL for encoding each value in the residual block.
  • the CL of each value is encoded using a variable code length encoding method to obtain a code stream.
  • in a possible design, when the above variable code length coding method includes a variable-order exponential Golomb coding method, using the variable code length coding method to encode the CL of each value to obtain the code stream includes: determining the target order for encoding the CL of any value, and encoding the CL of that value using the exponential Golomb coding algorithm of the target order to obtain the code stream.
  • using the variable code length coding method to encode the CL of each value to obtain a code stream includes: when the CL of any value is less than or equal to a threshold, encoding that CL with a preset number of bits to obtain the code stream; when the CL of any value is greater than the threshold, encoding that CL with a truncated unary code to obtain the code stream.
  • this possible design achieves variable-length coding of the CLs of the residual values in the residual block through fixed-length coding and truncated unary codes.
  • this coding method can adaptively encode a smaller CL with fewer bits, thereby saving bits.
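The two primitives of this hybrid CL coder can be sketched as below. The threshold, bit width, and maximum CL are illustrative values; the patent leaves them, and the bitstream convention that lets the decoder tell the two branches apart, to the concrete bitstream design.

```python
def truncated_unary(n, n_max):
    """Truncated unary code: n ones then a terminating zero, with the
    zero dropped when n equals the known maximum n_max."""
    assert 0 <= n <= n_max
    return "1" * n + ("0" if n < n_max else "")

def encode_cl(cl, threshold=3, fixed_bits=2, cl_max=16):
    """Hybrid CL coder following the design above: CLs up to `threshold`
    are written with a fixed number of bits, larger CLs with a truncated
    unary code. All parameter values here are illustrative."""
    if cl <= threshold:
        return format(cl, "0%db" % fixed_bits)   # fixed-length branch
    return truncated_unary(cl, cl_max)           # truncated-unary branch
```

With these defaults, a small CL of 2 costs two bits (`10`) while a CL of 5 costs six (`111110`), matching the "fewer bits for smaller CL" property claimed above.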
  • the above-mentioned residual block is an original residual value block of the block to be encoded; or, the residual block is a residual coefficient block obtained by transforming the original residual value block; or, the residual block is a quantized coefficient block obtained by quantizing the residual coefficient block.
  • the above-mentioned determination of the residual block corresponding to the block to be encoded includes: determining the target prediction mode of the block to be encoded, and determining a target prediction sequence corresponding to the target prediction mode. According to the target prediction mode, each pixel in the block to be encoded is predicted in the target prediction order. Based on the predicted value of each pixel in the block to be encoded, a residual block is determined.
  • in this possible design, since prediction efficiency can be improved when the block to be encoded is predicted based on the target prediction mode provided by this application, the coding efficiency of the image is further improved.
  • when the above-mentioned residual block is a quantized coefficient block obtained by quantizing the residual coefficient block, before the variable code length coding method is used to encode the residual block to obtain the code stream of the block to be encoded, the method further includes: determining the quantization parameter QP of each pixel in the block to be encoded, and quantizing the residual value block to be encoded based on the QP of each pixel and a preset quantization array to obtain the residual block.
  • in this way, quantization of the residual value block to be encoded can be realized with fewer multiplication operations, that is, efficiently. Therefore, the encoding method provided by this possible design can further improve the efficiency of image encoding.
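One common way to realize such a table-driven quantizer with few multiplications is to store, per QP, a precomputed integer reciprocal of the quantization step, turning the per-pixel division into one integer multiply and one shift. The table contents below are purely illustrative and are not the patent's preset array.

```python
SHIFT = 16
# Hypothetical preset array: entry i approximates (1 / step_i) * 2**SHIFT
# for quantization steps 1, 2, 4, 8, 16.
QUANT_PRESET = [65536, 32768, 16384, 8192, 4096]

def quantize(residual, qp):
    """Quantize one residual value with a single integer multiply and a
    shift; the added half-step term implements rounding."""
    m = QUANT_PRESET[qp]
    sign = -1 if residual < 0 else 1
    return sign * ((abs(residual) * m + (1 << (SHIFT - 1))) >> SHIFT)
```

For example, `quantize(10, 1)` divides by step 2 and yields 5, but without ever executing a division instruction, which is the efficiency point made above.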
  • the present application provides an image decoding method.
  • the method includes: using a variable code length decoding method to parse the code stream of the block to be decoded to obtain the coding code length CL of each value in the residual block corresponding to the block to be decoded.
  • the residual block is determined based on the CL of each value.
  • the block to be decoded is reconstructed based on the residual block to obtain the reconstructed block.
  • if the above-mentioned variable code length decoding method includes a variable-order or preset-order exponential Golomb decoding method, then using the variable code length decoding method to parse the code stream of the block to be decoded to obtain the coding code length CL of each value in the residual block includes: determining the target order for parsing the CL of each value in the residual block.
  • the code stream is parsed using the exponential Golomb decoding algorithm of the target order to obtain the CL of each value in the residual block.
  • using the variable code length decoding method to parse the code stream of the block to be decoded to obtain the coding code length CL of each value in the residual block includes: when the number of bits used to encode the CL of any value in the residual block is the preset number, parsing the code stream based on a fixed-length decoding strategy to obtain that CL; when the number of bits used to encode the CL of any value is greater than the preset number, parsing the code stream based on the rules of truncated unary codes to obtain that CL.
  • the above-mentioned determination of the residual block based on the CL of each value includes: determining, based on the CL of each value, a bit group corresponding to each pixel of the block to be decoded in the code stream; determining the attribute type of each pixel in the block to be decoded; for the first bit group corresponding to the third pixel of the block to be decoded, determining, based on the preset strategy and the attribute type of the third pixel, a target order for parsing the first bit group; and parsing the first bit group using the exponential Golomb decoding algorithm of the target order to obtain the residual block.
  • the above-mentioned determination of the residual block based on the CL of encoding each value includes: determining a bit group corresponding to each pixel in the block to be decoded in the code stream based on the CL of encoding each value. For the first bit group corresponding to the third pixel in the block to be decoded, an exponential Golomb decoding algorithm of a preset order is used to parse the first bit group to obtain a residual block.
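The decoder side of the exponential Golomb scheme mirrors the encoder: count the leading zeros of the codeword, read the payload bits, and subtract the order offset. A minimal sketch, using a bit string as the stream for clarity:

```python
def exp_golomb_decode(bits, pos=0, k=0):
    """Decode one order-k Exp-Golomb codeword from the bit string `bits`,
    starting at `pos`; returns (value, position after the codeword)."""
    zeros = 0
    while bits[pos + zeros] == "0":   # unary-style length prefix
        zeros += 1
    start = pos + zeros
    end = start + zeros + 1 + k       # '1' marker, `zeros` info bits, k suffix bits
    value = int(bits[start:end], 2) - (1 << k)
    return value, end
```

For example, parsing the concatenated stream `011` + `0100` with order 0 for the first codeword and order 1 for the second recovers the values 2 and 2, which is how a decoder with a per-value target order would consume the stream.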
  • the above-mentioned reconstruction of the block to be decoded based on the residual block, and obtaining the reconstructed block includes: performing inverse quantization and inverse transformation on the residual block, or performing inverse quantization on the residual block, to reconstruct the block of residual values of the block to be decoded.
  • the block to be decoded is reconstructed based on the residual value block to obtain the reconstructed block.
  • the above-mentioned reconstruction of the block to be decoded based on the residual block to obtain the reconstructed block includes: determining a target prediction mode for predicting the pixels in the block to be decoded based on the code stream of the block to be decoded. Based on the target prediction mode, a target prediction sequence corresponding to the target prediction mode is determined. According to the target prediction mode, each pixel in the block to be decoded is predicted in target prediction order. The block to be decoded is reconstructed based on the prediction value of each pixel in the block to be decoded and the residual block to obtain a reconstructed block.
  • the above-mentioned inverse quantization of the residual block includes: dequantizing the residual block based on the inverse quantization parameter of each pixel in the block to be decoded, obtained by parsing the code stream of the block to be decoded, and a preset inverse quantization array.
  • the image decoding method provided by the sixth aspect and any of its possible designs corresponds to the image encoding method provided by the fifth aspect and any of its possible designs. Therefore, for the beneficial effects of the technical solutions provided by the sixth aspect and any of its possible designs, refer to the description of the beneficial effects of the corresponding methods in the fifth aspect; they will not be described again.
  • the present application provides an image decoding device.
  • the decoding device may be a video decoder or a device containing a video decoder.
  • the decoding device includes various modules for implementing the method in any possible implementation manner of the first aspect, the third aspect or the sixth aspect.
  • the decoding device has the function of realizing the behaviors in the above related method examples.
  • the functions described can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions. For its beneficial effects, see the description of the corresponding method; no further details are given here.
  • the present application provides an image encoding device.
  • the encoding device may be a video encoder or a device containing a video encoder.
  • the encoding device includes various modules for implementing the method in any possible implementation manner of the second aspect, the fourth aspect, or the fifth aspect.
  • the encoding device has the function of realizing the behaviors in the above related method examples.
  • the functions described can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions. Its beneficial effects can be found in the description of the corresponding method and will not be described again here.
  • the present application provides an electronic device, including a processor and a memory.
  • the memory is used to store computer instructions.
  • the processor is used to call and run the computer instructions from the memory to implement the method in any one of the first to sixth aspects.
  • the electronic device may refer to a video encoder, or a device including a video encoder.
  • the electronic device may refer to a video decoder, or a device including a video decoder.
  • the present application provides a computer-readable storage medium.
  • Computer programs or instructions are stored in the storage medium.
  • when the computer program or instructions are executed by a computing device or by the storage system in which the computing device is located, the method in any one of the first to sixth aspects is implemented.
  • the present application provides a computer program product.
  • the computer program product includes instructions.
  • when a computing device or processor executes the instructions, the method in any one of the first to sixth aspects is implemented.
  • the present application provides a chip, including a memory and a processor.
  • the memory is used to store computer instructions
  • the processor is used to call and run the computer instructions from the memory to implement any one of the first to sixth aspects.
  • the present application provides an image decoding system.
  • the image decoding system includes an encoding end and a decoding end.
  • the decoding end is used to implement the corresponding decoding method provided in the first, third or sixth aspect.
  • the encoding end is used to implement the corresponding encoding method.
  • the implementations provided by this application can be further combined to provide more implementations.
  • any possible implementation of any of the above aspects can be applied to other aspects without conflict, thereby obtaining new embodiments.
  • any two or all three of the image decoding methods provided by the first, third and sixth aspects can be combined, provided there is no conflict, to obtain a new image decoding method.
  • Figure 1 is a schematic architectural diagram of the encoding and decoding system 10 applied in the embodiment of the present application;
  • Figure 2 is a schematic block diagram of an example of an encoder 112 used to implement the method according to the embodiment of the present application;
  • Figure 3 is a schematic diagram of the correspondence between an image, a parallel coding unit, an independent coding unit and a coding unit provided by an embodiment of the present application;
  • Figure 4 is a schematic flow chart of an encoding process provided by an embodiment of the present application.
  • Figure 5 is a schematic block diagram of an example of a decoder 122 used to implement the method according to the embodiment of the present application;
  • Figure 6a is a schematic flow chart of an image encoding method provided by an embodiment of the present application.
  • Figure 6b is a schematic flow chart of an image decoding method provided by an embodiment of the present application.
  • Figure 6c is a schematic flow chart of another image encoding method provided by an embodiment of the present application.
  • Figure 7a is a schematic diagram of a prediction sequence provided by an embodiment of the present application.
  • Figure 7b is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7c-1 is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7c-2 is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7d-1 is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7d-2 is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7d-3 is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7e is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7f is a schematic diagram of another prediction sequence provided by the embodiment of the present application.
  • Figure 7g is a schematic diagram of another encoding sequence provided by an embodiment of the present application.
  • Figure 8 is a schematic flow chart of another image decoding method provided by an embodiment of the present application.
  • Figure 9a is a schematic flow chart of another image encoding method provided by an embodiment of the present application.
  • Figure 9b is a schematic flow chart of another image decoding method provided by an embodiment of the present application.
  • Figure 10a is a schematic flowchart of yet another image encoding method provided by an embodiment of the present application.
  • Figure 10b is a schematic flow chart of another image decoding method provided by an embodiment of the present application.
  • Figure 11 is a schematic structural diagram of a decoding device 1100 provided by an embodiment of the present application.
  • Figure 12 is a schematic structural diagram of an encoding device 1200 provided by an embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a decoding device 1300 provided by an embodiment of the present application.
  • Figure 14 is a schematic structural diagram of an encoding device 1400 provided by an embodiment of the present application.
  • Figure 15 is a schematic structural diagram of an encoding device 1500 provided by an embodiment of the present application.
  • Figure 16 is a schematic structural diagram of a decoding device 1600 provided by an embodiment of the present application.
  • Figure 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the combination of prediction methods used to obtain the predicted value of each pixel in the current image block (such as the image block to be encoded (hereinafter, the block to be encoded) or the image block to be decoded (hereinafter, the block to be decoded)) is called a prediction mode.
  • different prediction methods can be used to predict different pixels in the current image block, or the same prediction method can be used for all of them.
  • the prediction methods used to predict all pixels in the current image block can be collectively referred to as the prediction mode of (or corresponding to) the current image block.
  • the prediction modes include: the point-by-point prediction mode, the intra-frame prediction mode, the block copy mode and the original value mode (that is, a mode that directly decodes reconstructed values with a fixed bit width), etc.
  • the point-by-point prediction mode refers to a prediction mode that uses the reconstructed values of adjacent pixels around the pixel to be predicted as the predicted value of the pixel to be predicted.
  • the point-by-point prediction mode includes one or a combination of one or more prediction methods such as vertical prediction, horizontal prediction, vertical mean prediction, and horizontal mean prediction.
  • vertical prediction uses the reconstructed value of the pixel on the upper side of the pixel to be predicted (either the adjacent upper side or the non-adjacent but close upper side) to obtain the predicted value (PointPredData) of the pixel to be predicted.
  • the prediction method using vertical prediction is called the T prediction method.
  • One example is: using the reconstructed value of the adjacent pixel above the pixel to be predicted as the predicted value of the pixel to be predicted.
  • Horizontal prediction uses the reconstructed value of the pixel to the left of the pixel to be predicted (either the adjacent left side or the non-adjacent but close left side) to obtain the predicted value of the pixel to be predicted.
  • the prediction method using horizontal prediction is called the L prediction method.
  • One example is: using the reconstructed value of the adjacent pixel to the left of the pixel to be predicted as the predicted value of the pixel to be predicted.
  • Vertical mean prediction uses the reconstructed values of pixels above and below the pixel to be predicted to obtain the predicted value of the pixel to be predicted.
  • the prediction method using vertical mean prediction is called the TB prediction method.
  • One example is: using the average of the reconstruction values of the adjacent pixels vertically above the pixel to be predicted and the reconstruction values of the adjacent pixels vertically below the pixel to be predicted as the predicted value of the pixel to be predicted.
  • Horizontal mean prediction uses the reconstructed values of the pixels on the left and right sides of the pixel to be predicted to obtain the predicted value of the pixel to be predicted.
  • the prediction method using horizontal mean prediction is called the RL prediction method.
  • One example is: taking the average of the reconstruction values of the horizontally adjacent pixels on the left side of the pixel to be predicted and the reconstruction values of the horizontally adjacent pixels on the right side as the predicted value of the pixel to be predicted.
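The four point-by-point prediction methods above can be sketched as a single dispatch function. Here `rec` is a 2-D array of already-reconstructed values; the availability of each referenced neighbour is guaranteed by the chosen prediction order (as in Figures 7a to 7g), and the rounding used in the mean methods is an assumption of this sketch.

```python
def predict_pixel(rec, x, y, method):
    """Illustrative realisation of the T / L / TB / RL prediction methods
    using immediately adjacent reconstructed neighbours."""
    if method == "T":    # vertical: reconstructed pixel above
        return rec[y - 1][x]
    if method == "L":    # horizontal: reconstructed pixel to the left
        return rec[y][x - 1]
    if method == "TB":   # vertical mean: pixels above and below
        return (rec[y - 1][x] + rec[y + 1][x] + 1) // 2
    if method == "RL":   # horizontal mean: pixels left and right
        return (rec[y][x - 1] + rec[y][x + 1] + 1) // 2
    raise ValueError("unknown prediction method: %s" % method)
```

For the centre pixel of a 3x3 reconstructed patch, "T" returns the value directly above it and "RL" returns the rounded average of its left and right neighbours.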
  • the intra prediction mode is a prediction mode that uses the reconstructed values of pixels in adjacent blocks around the block to be predicted as prediction values.
  • the block copy mode is a prediction mode that uses the reconstructed values of the pixels of an already decoded block (not necessarily adjacent) as predicted values.
  • the original value mode is a mode that directly decodes reconstructed values with a fixed bit width, that is, a reference-free prediction mode.
  • the method of encoding the residual of the current image block (such as the block to be encoded or the block to be decoded), that is, the residual block composed of the residual values of the pixels in the current image block, is called the residual coding mode.
  • the residual coding mode may include a skip residual coding mode and a normal residual coding mode.
  • in the skip residual coding mode, the residual values of the pixels in the current image block are all 0, and the reconstructed value of each pixel is equal to its predicted value.
  • in the normal residual coding mode, the residual coefficients need to be encoded (decoded); in this case, the residual values of the pixels in the current image block are not all 0.
  • the reconstructed value of each pixel can then be obtained based on the predicted value and residual value of the pixel.
  • the residual coefficient of a pixel may be equivalent to the residual value of the pixel; in another example, the residual coefficient of the pixel may be obtained by performing certain processing on the residual value of the pixel.
  • the residual block of the block to be encoded is usually quantized, or the residual coefficient block obtained after certain processing of the residual block is quantized, so that the quantized residual block or residual coefficient block can be encoded with fewer bits.
  • the residual block is a residual value block obtained based on the original pixel block and the prediction block of the block to be encoded
  • the residual coefficient block is a coefficient block obtained by performing certain processing and transformation on the residual block.
  • the encoding device may divide each residual value in the residual block of the block to be encoded by a quantization coefficient to reduce the residual value in the residual block. In this way, the reduced residual value after quantization can be encoded with fewer bits than the residual value without quantization, thus achieving compression coding of the image.
  • the decoding device can inversely quantize the residual block or residual coefficient block parsed from the code stream, so that the unquantized residual block or residual coefficient block corresponding to the image block can be reconstructed.
  • further, the decoding device reconstructs the image block according to the reconstructed residual block or residual coefficient block, thereby obtaining a reconstructed block of the image.
  • the decoding device may perform inverse quantization on the residual block. Specifically, the decoding device may multiply each residual value in the parsed residual block by a quantization coefficient to reconstruct the residual values of the unquantized residual block corresponding to the block to be decoded, thereby obtaining the reconstructed residual values.
  • the quantization coefficient is a quantization coefficient used when the encoding device quantizes the residual block of the block to be decoded when encoding the block to be decoded. In this way, the decoding device can reconstruct the block to be decoded based on the residual block reconstructed after inverse quantization, and obtain the reconstructed block of the block to be decoded.
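The divide-then-multiply round trip described in the last few paragraphs can be sketched as a pair of functions. The rounding rule is an assumption of this sketch; the essential point is that the decoder's multiply only approximates the original residuals.

```python
def quantize_block(residuals, step):
    """Encoder side: shrink each residual by the quantization step."""
    return [round(r / step) for r in residuals]

def dequantize_block(levels, step):
    """Decoder side: scale the parsed levels back up. The result only
    approximates the original residuals, which is why quantization is
    the lossy stage of the pipeline."""
    return [level * step for level in levels]
```

For example, quantizing the residuals [9, -6, 3] with step 4 gives the levels [2, -2, 1], which dequantize to [8, -8, 4] rather than the original values: the smaller levels cost fewer bits, at the price of this reconstruction error.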
  • in the embodiments of this application, “at least one (kind)” includes one (kind) or more (kinds).
  • “multiple (kinds)” means two (kinds) or more.
  • at least one of A, B and C includes: A alone, B alone, A and B at the same time, A and C at the same time, B and C at the same time, and A, B and C at the same time.
  • “/” means or; for example, A/B can mean A or B.
  • “and/or” herein merely describes an association relationship between associated objects, indicating that three relationships can exist; for example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • “Plural” means two or more than two.
  • words such as “first” and “second” are used to distinguish identical or similar items with basically the same functions and effects. Those skilled in the art can understand that such words do not limit quantity or execution order.
  • Figure 1 shows a schematic architectural diagram of the encoding and decoding system 10 applied in the embodiment of the present application.
  • the encoding and decoding system 10 may include a source device 11 and a destination device 12 .
  • the source device 11 is used to encode images, therefore, the source device 11 may be called an image encoding device or a video encoding device.
  • the destination device 12 is used to decode the encoded image data generated by the source device 11. Therefore, the destination device 12 may be called an image decoding device or a video decoding device.
  • source device 11 and the destination device 12 may be various devices, which are not limited in the embodiments of this application.
  • source device 11 and destination device 12 may be a desktop computer, a mobile computing device, a notebook (eg, laptop) computer, a tablet computer, a set-top box, a telephone handset such as a so-called "smart" phone, a television, Cameras, display devices, digital media players, video game consoles, in-vehicle computers or other similar devices, etc.
  • the source device 11 and the destination device 12 in Figure 1 may be two separate devices.
  • the source device 11 and the destination device 12 are the same device, that is, the source device 11 or corresponding functions and the destination device 12 or the corresponding functions can be integrated on the same device.
  • Communication can take place between the source device 11 and the destination device 12.
  • destination device 12 may receive encoded image data from source device 11 .
  • one or more communication media for transmitting encoded image data may be included between the source device 11 and the destination device 12 .
  • the one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from the source device 11 to the destination device 12, which is not limited in the embodiments of the present application.
  • source device 11 includes an encoder 112.
  • the source device 11 may also include an image preprocessor 111 and a communication interface 113.
  • the image preprocessor 111 is used to perform preprocessing on the received image to be encoded.
  • the preprocessing performed by the image preprocessor 111 may include trimming, color format conversion (for example, from RGB format to YUV format), color correction, or denoising, etc.
  • the encoder 112 is configured to receive the image preprocessed by the image preprocessor 111, and use the relevant prediction mode to process the preprocessed image, thereby providing encoded image data.
  • the encoder 112 may be used to perform the encoding process in various embodiments described below.
  • the communication interface 113 can be used to transmit the encoded image data to the destination device 12 or any other device (such as a memory) for storage or direct reconstruction.
  • the other devices can be any device used for decoding or storage.
  • the communication interface 113 can also encapsulate the encoded image data into a suitable format before transmitting it.
  • the above-mentioned image preprocessor 111, encoder 112 and communication interface 113 may be hardware components in the source device 11, or they may be software programs in the source device 11, which are not limited in the embodiment of this application.
  • Destination device 12 includes decoder 122 .
  • the destination device 12 may also include a communication interface 121 and an image post-processor 123.
  • the communication interface 121 may be used to receive the encoded image data from the source device 11 or any other source device, such as a storage device.
  • Communication interface 121 may also decapsulate data transmitted by communication interface 113 to obtain encoded image data.
  • the decoder 122 is configured to receive encoded image data and output decoded image data (also referred to as reconstructed image data). In some embodiments, the decoder 122 may be used to perform the decoding process described in various embodiments below.
  • the image post-processor 123 is configured to perform post-processing on the decoded image data to obtain post-processed image data.
  • the post-processing performed by the image post-processor 123 may include: color format conversion (for example, from YUV format to RGB format), color correction, trimming or resampling, or any other processing.
  • the image post-processor 123 may also be used to transmit the post-processed image data to a display device for display.
  • the above-mentioned communication interface 121, decoder 122 and image post-processor 123 may be hardware components in the destination device 12, or they may be software programs in the destination device 12, which are not limited in the embodiment of this application.
  • Figure 2 shows a schematic block diagram of an example of an encoder 112 for implementing the method of the embodiment of the present application.
  • the encoder 112 includes a prediction processing unit 201, a residual calculation unit 202, a residual transform unit 203, a quantization unit 204, a coding unit 205, an inverse quantization unit (which may also be called a dequantization unit) 206, a reconstruction unit 208 and a filter unit 209.
  • the encoder 112 may also include a buffer and a decoded image buffer. The buffer is used to cache the reconstructed block output by the reconstruction unit 208, and the decoded image buffer is used to cache the filtered image block output by the filter unit 209.
  • the input of the encoder 112 is an image block of the image to be encoded (ie, a block to be encoded or a coding unit).
  • the input of the encoder 112 may alternatively be an image to be encoded, in which case the encoder 112 may include a segmentation unit (not shown in Figure 2) used to divide the image to be encoded into multiple image blocks.
  • the encoder 112 then encodes these image blocks block by block, performing the encoding process on each block in turn, thereby completing the encoding of the image to be encoded.
  • a method of dividing an image to be encoded into multiple image blocks may include:
  • Step 1 Divide a frame of image into one or more parallel coding units that do not overlap each other. There is no dependency between the parallel coding units and they can be encoded and decoded in parallel/independently.
  • Step 2 For each parallel coding unit, the coding end can divide it into one or more independent coding units that do not overlap with each other.
  • the independent coding units may not depend on each other, but they may share some header information of the parallel coding unit.
  • Step 3 For each independent coding unit, the coding end can further divide it into one or more coding units that do not overlap with each other.
  • the division method may be a horizontal equal division method, a vertical equal division method or a horizontal and vertical equal division method.
  • the specific implementation is not limited to this.
  • Individual coding units within independent coding units may be interdependent, ie may refer to each other during the execution of the prediction step.
  • the width of the coding unit is w_cu and the height is h_cu. Optionally, its width is greater than its height (unless it is an edge area).
  • the coding unit can have a fixed size w_cu × h_cu, where w_cu and h_cu are both powers of 2 (2^N with N ≥ 0), such as 16×4, 8×4, 16×2, 8×2, 4×2, 8×1, 4×1, etc.
  • the coding unit may include three components of brightness Y, chroma Cb, and chroma Cr (or three components of red R, green G, and blue B), or may include only one of the components. If it contains three components, the sizes of the components can be exactly the same or different, depending on the image input format.
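The step-3 tiling of an independent coding unit into fixed-size coding units can be sketched as below; the 16×2 unit size is one of those listed above, and tiles at the right or bottom edge simply shrink to fit, matching the "edge area" caveat.

```python
def split_into_coding_units(width, height, cu_w=16, cu_h=2):
    """Tile an independent coding unit into (x, y, w, h) coding units in
    raster order; units at the right/bottom edge may be smaller."""
    units = []
    for y in range(0, height, cu_h):
        for x in range(0, width, cu_w):
            units.append((x, y, min(cu_w, width - x), min(cu_h, height - y)))
    return units
```

A 32×4 independent coding unit splits into four 16×2 coding units, while a 20×2 one yields a full 16×2 unit plus a narrower 4×2 edge unit.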
  • FIG. 3 it is a schematic diagram of the correspondence between an image, a parallel coding unit, an independent coding unit and a coding unit.
  • parallel coding unit 1 and parallel coding unit 2 in Figure 3 divide an image according to an image area ratio of 3:1, where parallel coding unit 1 includes an independent coding unit divided into four coding units.
  • the prediction processing unit 201 is configured to receive or obtain the true value of the block to be encoded and the reconstructed image data, predict the block to be encoded based on the relevant data in the reconstructed image data, and obtain the prediction block of the block to be encoded.
  • the residual calculation unit 202 is used to calculate the residual value between the real value of the block to be encoded and the prediction block of the block to be encoded, to obtain a residual block.
  • the residual block is obtained by subtracting the pixel value of the predicted block from the real pixel value of the block to be encoded pixel by pixel.
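As an illustrative sketch (not part of the patent text), the pixel-by-pixel subtraction described above might look like the following, where `residual_block` is a hypothetical helper name:

```python
# Hypothetical sketch of the residual calculation performed by unit 202:
# each residual is the original pixel value minus the co-located predicted value.
def residual_block(original, predicted):
    """Subtract the prediction block from the block to be encoded, pixel by pixel."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]

orig = [[52, 55], [61, 59]]        # original pixel values of the block to be encoded
pred = [[50, 50], [60, 60]]        # predicted values from the prediction block
res = residual_block(orig, pred)   # [[2, 5], [1, -1]]
```

The decoder later reverses this step by adding the reconstructed residuals back to the prediction.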
  • residual transform unit 203 is used to determine residual coefficients based on the residual block.
  • this process may include: performing a transformation such as a discrete cosine transform (DCT) or a discrete sine transform (DST) on the residual block to obtain the transform coefficients in the transform domain; the transform coefficients can also be called transform residual coefficients or residual coefficients, and can represent the residual block in the transform domain.
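For illustration only, a 1-D orthonormal DCT-II and its inverse can be sketched in Python as below; the function names and the 1-D (rather than 2-D) form are simplifications for the sketch, not the codec's actual transform:

```python
import math

def dct(x):
    """1-D DCT-II (orthonormal), one possible residual transform."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(c):
    """Inverse DCT-II: the operation a residual inverse transform unit would apply."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] * math.sqrt(1 / n)
        s += sum(c[k] * math.sqrt(2 / n) *
                 math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                 for k in range(1, n))
        out.append(s)
    return out

resid = [4.0, 2.0, -1.0, 0.0]
round_trip = [round(v, 6) for v in idct(dct(resid))]   # recovers the residual
```

Applying `idct(dct(x))` recovers `x` up to floating-point error, which is why the inverse transform step can undo the forward transform before any quantization loss is introduced.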
  • the encoder 112 may not include the step of residual transformation in the process of encoding the block to be encoded.
  • the quantization unit 204 is configured to quantize the transform coefficients or residual values by applying scalar quantization or vector quantization to obtain quantized residual coefficients (or quantized residual values).
  • the quantization process can reduce the bit depth associated with some or all of the residual coefficients. For example, p-bit transform coefficients may be rounded down to q-bit transform coefficients during quantization, where p is greater than q.
  • the degree of quantization can be modified by adjusting the quantization parameter (QP). For example, with scalar quantization, different scales can be applied to achieve finer or coarser quantization. A smaller quantization step size corresponds to finer quantization, while a larger quantization step size corresponds to coarser quantization. The appropriate quantization step size can be indicated by QP.
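The scalar quantization described above can be sketched as follows; the direct division by a quantization step is illustrative only, and the actual mapping from QP to step size in the codec is not specified here:

```python
def quantize(coeffs, qstep):
    """Scalar quantization: divide each coefficient by the step size and round."""
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Inverse quantization: multiply each quantized level back by the step size."""
    return [lvl * qstep for lvl in levels]

coeffs = [100, -37, 8, 0]
levels = quantize(coeffs, 10)     # [10, -4, 1, 0] -- fewer distinct values to code
approx = dequantize(levels, 10)   # [100, -40, 10, 0] -- quantization is lossy
```

A smaller `qstep` preserves more precision (finer quantization); a larger one discards more (coarser quantization), matching the QP discussion above.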
  • the encoding unit 205 is used to encode the above-mentioned quantized residual coefficient (or quantized residual value), and output the encoded image data (i.e., the encoding result of the current block to be encoded) in the form of an encoded bit stream (or code stream); the encoded bitstream can then be transmitted to the decoder, or stored for later transmission to the decoder or for retrieval.
  • the encoding unit 205 may also be used to encode syntax elements of the block to be encoded, such as encoding the prediction mode adopted by the block to be encoded into a code stream.
  • the encoding unit 205 encodes the residual coefficients, and one possible way is: semi-fixed length encoding.
  • the maximum value of the absolute value of the residual within a residual block (RB) is defined as modified maximum (mm).
  • the number of coding bits of the residual coefficient in the RB is determined based on the above mm (the number of coding bits of the residual coefficient in the same RB is consistent), that is, the coding length CL. For example, if the CL of the current RB is 2 and the current residual coefficient is 1, then 2 bits are required to encode the residual coefficient 1, which is expressed as 01. In a special case, if the CL of the current RB is 7, it means encoding an 8-bit residual coefficient and a 1-bit sign bit.
  • the way to determine CL is to find the smallest M value such that all residuals in the current RB are within the range [-2^(M-1), 2^(M-1)], and use the found M as the CL of the current RB. If both boundary values -2^(M-1) and 2^(M-1) exist in the current RB, M should be increased by 1, that is, M+1 bits are needed to encode all the residuals of the current RB; if only one of the two boundary values -2^(M-1) and 2^(M-1) exists in the current RB, a Trailing bit (the last bit) needs to be encoded to determine whether the boundary value is -2^(M-1) or 2^(M-1); if neither -2^(M-1) nor 2^(M-1) exists among the residuals in the current RB, there is no need to encode the Trailing bit.
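The CL rule above can be sketched as follows; the function name and the `(bits, trailing)` return shape are illustrative assumptions, not the patent's wording:

```python
def coding_length(residuals):
    """Find the smallest M with every residual in [-2**(M-1), 2**(M-1)],
    then apply the boundary-value rules: both boundaries present -> M+1 bits,
    exactly one present -> a Trailing bit is needed.
    Returns (bits_per_coefficient, needs_trailing_bit)."""
    m = 1
    while not all(-2 ** (m - 1) <= r <= 2 ** (m - 1) for r in residuals):
        m += 1
    has_neg = -2 ** (m - 1) in residuals
    has_pos = 2 ** (m - 1) in residuals
    if has_neg and has_pos:
        return m + 1, False   # both boundary values: one extra bit, no Trailing bit
    if has_neg or has_pos:
        return m, True        # one boundary value: encode a Trailing bit
    return m, False           # no boundary value: M bits, no Trailing bit

coding_length([1, 0, -1])   # both boundaries of [-1, 1] present -> (2, False)
coding_length([1, 0])       # only +1 present -> (1, True)
coding_length([3, -2])      # fits in [-4, 4], no boundary hit -> (3, False)
```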
  • other residual coefficient coding methods can also be used, such as the exponential Golomb coding method, the Golomb-Rice coding method (a variant of Golomb coding), the truncated unary code coding method, the run length coding method, direct encoding of the original residual, etc.
  • the encoding unit 205 can also directly encode the original value instead of the residual value.
  • the inverse quantization unit 206 is used to perform inverse quantization on the above-mentioned quantized residual coefficient (or quantized residual value) to obtain the inverse quantized residual coefficient (inverse quantized residual value).
  • the inverse quantization is the inverse application of the quantization scheme applied by the quantization unit 204, for example, based on or using the same quantization step size as the quantization unit 204.
  • the residual inverse transformation unit 207 is used to inversely transform the above-mentioned inverse quantized residual coefficients to obtain a reconstructed residual block.
  • the inverse transformation may include an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST).
  • the value obtained after inverse transformation of the above-mentioned inverse quantized residual coefficient is the residual value reconstructed in the pixel domain (or called the sample domain). That is, after the inverse quantized residual coefficient block is inversely transformed by the residual inverse transform unit 207, the resulting block is a reconstructed residual block.
  • the encoder 112 may not include the inverse transformation step.
  • the reconstruction unit 208 is used to add the reconstructed residual block to the prediction block to obtain the reconstructed block in the sample domain, and the reconstruction unit 208 may be a summer. For example, the reconstruction unit 208 adds the residual value in the reconstructed residual block and the predicted value of the corresponding pixel in the prediction block to obtain the reconstructed value of the corresponding pixel.
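A minimal sketch of the summer in reconstruction unit 208; the clipping to the valid sample range is an added assumption for the sketch (the text above only describes the addition):

```python
def reconstruct(pred, resid, bit_depth=8):
    """Reconstruction as a summer: add each reconstructed residual to the
    co-located predicted value, clipping to the valid pixel range."""
    hi = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), hi) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]

rec = reconstruct([[50, 250]], [[2, 10]])   # [[52, 255]] -- 260 clips to 255
```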
  • the reconstructed block output by the reconstruction unit 208 can be subsequently used to predict other image blocks to be encoded.
  • the filter unit 209 (or simply "filter") is used to filter the reconstructed block to obtain the filtered block, thereby smoothing pixel transitions and improving image quality.
  • an encoding process is shown in Figure 4. Specifically, the encoder determines whether to use the point-by-point prediction mode. If the point-by-point prediction mode is used, the encoder predicts the pixels in the block to be encoded based on the point-by-point prediction mode to encode the block to be encoded, and performs inverse quantization and reconstruction steps on the encoding result to implement the encoding process; if the point-by-point prediction mode is not used, the encoder determines whether to use the original value mode. If the original value mode is used, original value mode encoding is used; if the original value mode is not used, the encoder determines to use another prediction mode, such as the intra prediction mode or the block copy mode, for prediction and encoding.
  • the encoder determines whether to use the point-by-point prediction mode. If the point-by-point prediction mode is used, the encoder predicts the pixels in the block to be encoded based on the point-by-point prediction mode to encode the block to be encoded, and then determines whether to perform residual skipping: when it is determined to perform residual skipping, the reconstruction step is performed directly on the coding result; when it is determined not to perform residual skipping, an inverse quantization step is first performed on the coding result to obtain the inverse quantized residual block, and it is determined whether to adopt the block copy prediction mode. If it is determined to adopt the block copy prediction mode, then in one case, when transform skipping is performed, the reconstruction step is performed directly on the inverse quantized residual block, and in the other case, when transform skipping is not performed, the inverse transformation and reconstruction steps are performed on the inverse quantized residual block to implement the encoding process. If it is determined that the block copy mode is not to be used (in this case, one of the prediction modes used is the intra prediction mode), an inverse transformation step and a reconstruction step are performed on the inverse quantized residual block to implement the encoding process.
  • the encoder 112 is used to implement the encoding method described in the embodiments below.
  • an encoding process implemented by encoder 112 may include the following steps:
  • Step 11 The prediction processing unit 201 determines the prediction mode, and predicts the block to be encoded based on the determined prediction mode and the reconstructed block of the encoded image block to obtain the prediction block of the block to be encoded.
  • the reconstructed block of the encoded image block is obtained by sequentially processing the quantized residual coefficient block of the encoded image block by the inverse quantization unit 206, the residual inverse transform unit 207, and the reconstruction unit 208.
  • Step 12 The residual calculation unit 202 obtains the residual block of the block to be encoded based on the original pixel values of the prediction block and the block to be encoded.
  • Step 13 The residual transformation unit 203 transforms the residual block to obtain a residual coefficient block.
  • Step 14 The quantization unit 204 quantizes the residual coefficient block to obtain a quantized residual coefficient block.
  • Step 15 The encoding unit 205 encodes the quantized residual coefficient block and encodes related syntax elements (such as prediction mode, encoding mode) to obtain a code stream of the block to be encoded.
  • Figure 5 shows a schematic block diagram of an example of a decoder 122 for implementing the method of the embodiment of the present application.
  • the decoder 122 is configured to receive image data (ie, an encoded bitstream, eg, an encoded bitstream including image blocks and associated syntax elements) encoded by the encoder 112 to obtain decoded image blocks.
  • the decoder 122 includes a code stream analysis unit 301 , an inverse quantization unit 302 , a residual inverse transformation unit 303 , a prediction processing unit 304 , a reconstruction unit 305 , and a filter unit 306 .
  • decoder 122 may perform a decoding process that is generally reciprocal to the encoding process described for encoder 112 of FIG. 2 .
  • the decoder 122 may also include a buffer and a filtered image buffer, wherein the buffer is used to cache the reconstructed image block output by the reconstruction unit 305, and the filtered image buffer is used to cache the filtered image block output by the filter unit 306.
  • the code stream parsing unit 301 is configured to decode the encoded bit stream to obtain quantized residual coefficients (or quantized residual values) and/or decoding parameters (for example, the decoding parameters may include any or all of the inter prediction parameters, intra prediction parameters, filter parameters and/or other syntax elements used on the encoding side).
  • the code stream parsing unit 301 is also configured to forward the above-mentioned decoding parameters to the prediction processing unit 304, so that the prediction processing unit 304 can perform a prediction process according to the decoding parameters.
  • the function of the inverse quantization unit 302 may be the same as the function of the inverse quantization unit 206 of the encoder 112, and is used to inversely quantize the quantized residual coefficients decoded and output by the code stream parsing unit 301.
  • the function of the residual inverse transform unit 303 may be the same as the function of the residual inverse transform unit 207 of the encoder 112, and is used to inversely transform the above-mentioned inverse quantized residual coefficients (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transformation process) to obtain the reconstructed residual value.
  • the block obtained by inverse transformation is the residual block in the pixel domain of the reconstructed block to be decoded.
  • the functionality of the reconstruction unit 305 may be the same as the functionality of the reconstruction unit 208 of the encoder 112 .
  • the prediction processing unit 304 is configured to receive or obtain encoded image data (such as the encoded bit stream of the current image block) and reconstructed image data.
  • the prediction processing unit 304 may also receive or obtain, for example from the code stream parsing unit 301, the relevant parameters of the prediction mode and/or information about the selected prediction mode (i.e., the above-mentioned decoding parameters), and predict the current image block based on the relevant data in the reconstructed image data and the decoding parameters to obtain the prediction block of the current image block.
  • the reconstruction unit 305 is used to add the reconstructed residual block to the prediction block to obtain the reconstructed block of the image to be decoded in the sample domain, for example, the residual value in the reconstructed residual block and the prediction value in the prediction block Add up.
  • the filter unit 306 is used to filter the reconstructed block to obtain a filtered block, which is a decoded image block.
  • the decoder 122 is used to implement the decoding method described in the embodiments below.
  • the processing result of a certain step can also be output to the next step after further processing; for example, after steps such as interpolation filtering, motion vector derivation or filtering, the processing results of the corresponding steps may be further subjected to operations such as truncation, clipping or shifting.
  • a decoding process implemented by decoder 122 may include the following steps:
  • Step 21 The code stream analysis unit 301 analyzes the prediction mode and residual coding mode
  • Step 22 The code stream analysis unit 301 analyzes the quantization related values (such as near values (a quantization step representation value), or QP values, etc.) based on the prediction mode and the residual coding mode;
  • Step 23 The inverse quantization unit 302 parses the residual coefficient based on the prediction mode and the quantization related value;
  • Step 24 The prediction processing unit 304 obtains the prediction value of each pixel of the current image block based on the prediction mode
  • Step 25 The residual inverse transformation unit 303 inversely transforms the residual coefficients to reconstruct the residual values of each pixel of the current image block;
  • Step 26 The reconstruction unit 305 obtains the reconstruction value based on the prediction value and residual value of each pixel of the current coding unit.
  • the encoding end in any embodiment of the present application may be the encoder 112 in the above-mentioned Figure 1 or Figure 2, or it may be the source device 11 in the above-mentioned Figure 1.
  • the decoding end in any embodiment of the present application may be the decoder 122 in the above-mentioned Figure 1 or Figure 5, or it may be the destination device 12 in the above-mentioned Figure 1, which is not limited in the embodiment of the present application.
  • Figure 6a shows a schematic flowchart of an image encoding method provided by an embodiment of the present application.
  • the method may include the following steps:
  • the encoding end determines the target prediction mode of the block to be encoded, and determines the target prediction order corresponding to the target prediction mode.
  • the encoding end predicts each pixel in the block to be encoded in the above target prediction order according to the above target prediction mode.
  • the encoding end determines the residual block of the block to be encoded based on the predicted value of each pixel in the block to be encoded.
  • the encoding end performs transformation processing on the residual block of the to-be-encoded block to obtain a transformed residual coefficient block.
  • the encoding end performs quantization processing on the above-mentioned residual coefficient block to obtain a quantized residual coefficient block.
  • when the encoding end does not execute S14, the encoding end can directly perform quantization processing on the above-mentioned residual block, thereby obtaining a quantized residual block.
  • the encoding end encodes the above-mentioned quantized residual coefficient block to obtain the code stream of the block to be encoded.
  • when the encoding end does not execute S14, the encoding end encodes the above-mentioned quantized residual block, thereby obtaining a code stream of the block to be encoded.
  • the residual scanning order of the encoding residual coefficient block at the encoding end can be the same as the target prediction order in S11.
  • the decoding end can predict the predicted values of the pixels in the block to be decoded in the target prediction order, and at the same time decode the residual block of the block to be decoded in the same residual scanning order as the target prediction order, thus enabling the reconstructed block of the block to be decoded to be obtained efficiently.
  • In this way, the target prediction mode used by the encoding end to predict pixels in the block to be encoded has high prediction efficiency; the quantization method used by the encoding end to quantize the residual block or residual coefficient block can reduce the multiplication operations on the encoding side, that is, improve the quantization efficiency; and the encoding method used on the encoding side can reduce the number of bits used to encode the residual block or the residual coefficient block. Therefore, the method provided by the embodiment of the present application can greatly improve the coding efficiency of the encoding side.
  • Figure 6b shows a schematic flowchart of an image decoding method provided by an embodiment of the present application.
  • the method may include the following steps:
  • the decoder parses the code stream of the block to be decoded to determine the target prediction mode for predicting the pixels in the block to be decoded.
  • the decoder determines the target prediction sequence corresponding to the target prediction mode based on the target prediction mode.
  • the decoding end predicts each pixel in the block to be decoded in the target prediction order according to the target prediction mode, and obtains the prediction value of each pixel.
  • the decoder uses a variable code length decoding method to parse the code stream of the block to be decoded, obtains the CL of each value in the residual block corresponding to the block to be decoded, and parses the code stream of the block to be decoded based on the CL and in accordance with the above residual scanning order to obtain the first residual block of the block to be decoded.
  • the residual scanning order and the above-mentioned target prediction order may be the same.
  • When the residual scanning order is the same as the above-mentioned target prediction order, that is, while the decoding end predicts the pixels in the block to be decoded in the target prediction order, the decoding end also parses the code stream in the same residual scanning order as the target prediction order to obtain the first residual block of the block to be decoded. This can improve the decoding efficiency of the decoding end.
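To illustrate why a shared order helps, the following hypothetical point-by-point decode predicts each pixel from its already-reconstructed left neighbour while consuming residuals in the same order; the left-neighbour predictor and the default value 128 are assumptions for the sketch, not the patent's prediction modes:

```python
# Hypothetical point-by-point decode in which prediction order and residual
# scanning order coincide: each residual is consumed exactly when the pixel
# it corrects is being predicted, so no buffering of a full block is needed.
def decode_row(residuals, first_pred=128):
    recon = []
    for i, r in enumerate(residuals):
        pred = first_pred if i == 0 else recon[i - 1]  # left-neighbour predictor
        recon.append(pred + r)                          # reconstruct immediately
    return recon

decode_row([2, -1, 3])   # [130, 129, 132]
```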
  • the decoder uses a variable code length decoding method to parse the code stream of the block to be decoded, and obtains the CL of each value in the residual block corresponding to the block to be decoded.
  • In S601, the description of the decoding end determining the first residual block based on the CL is similar to the implementation of obtaining the residual block of the block to be decoded described in S602, and will not be described again.
  • the decoding end obtains the inverse quantization parameter of each pixel in the block to be decoded by parsing the code stream of the block to be decoded.
  • the decoding end performs inverse quantization on the first residual block based on the QP and the inverse quantization preset array indicated by the inverse quantization parameter of each pixel in the block to be decoded, to obtain the second residual block.
  • S23 and S24-S26 may be executed simultaneously.
  • the decoding end performs inverse transformation processing on the second residual block to obtain an inversely transformed second residual block.
  • the decoding end performs S27.
  • the decoding end reconstructs the block to be decoded based on the inversely transformed second residual block and the prediction value of each pixel in the block to be decoded, to obtain a reconstructed block.
  • when the decoding end does not execute S27, the decoding end directly reconstructs the block to be decoded based on the second residual block and the prediction value of each pixel in the block to be decoded, and obtains the reconstructed block.
  • the image decoding method of S21-S28 corresponds to the image encoding method of S11-S16.
  • In this way, the target prediction mode used by the decoding end for predicting pixels in the block to be decoded has high prediction efficiency; the inverse quantization method used by the decoding end to inversely quantize the residual block or residual coefficient block can reduce the multiplication operations at the decoding end, that is, improve the inverse quantization efficiency; and the decoding method used at the decoding end can reduce the number of bits used to encode the residual block or residual coefficient block. Therefore, the method provided by the embodiment of the present application can greatly improve the decoding efficiency of the decoding end.
  • FIG. 6c is a schematic flow chart of another image encoding method provided by an embodiment of the present application.
  • the method shown in Figure 6c includes the following steps:
  • the encoding end determines the target prediction mode of the block to be encoded, and determines the target prediction order corresponding to the target prediction mode.
  • the encoding end can use different prediction modes to predict the blocks to be encoded respectively, and predict the encoding performance of the blocks to be encoded based on the prediction values in different prediction modes, thereby determining the target prediction mode.
  • the encoding end can use different prediction modes to predict the block to be encoded respectively, and after obtaining the prediction blocks based on the different prediction modes, the encoding end performs steps 13 to 16 described above to obtain the code stream of the block to be encoded in the different prediction modes. The encoding end determines the time taken to obtain the code stream of the block to be encoded in each prediction mode, and determines the prediction mode with the shortest time as the target prediction mode. In other words, the encoding end determines the prediction mode with the highest encoding efficiency as the target prediction mode.
  • embodiments of the present application also provide multiple prediction modes.
  • the encoding end can sequentially predict pixels in the block to be encoded in a preset order.
  • the encoding end sequentially predicts the pixels in the block to be encoded in a preset order, which means that the encoding end sequentially predicts the pixels in the block to be encoded in a preset trajectory.
  • the pixels used to predict any pixel have been reconstructed.
  • the target prediction order when predicting pixels in the block to be encoded using the target prediction mode can be determined.
  • the target prediction order may be a prediction order for pixels in the block to be encoded, or a prediction order for sub-blocks in the block to be encoded.
  • the encoding end predicts each pixel in the block to be encoded in the above target prediction order according to the above target prediction mode.
  • the target prediction mode may be used to indicate that each pixel in the encoding block is predicted point by point in the target prediction order.
  • the target prediction order is used to indicate point-by-point prediction of each pixel in the encoding block in the trajectory direction indicated by the target prediction order.
  • the encoding end follows the trajectory direction indicated by the target prediction sequence and predicts each pixel in the block to be encoded point by point along this direction to obtain the predicted value of each pixel.
  • the target prediction mode may also be used to indicate that the pixels of each sub-block in the block to be encoded are sequentially predicted in units of sub-blocks with a preset size in the block to be encoded.
  • the target prediction mode includes the prediction mode of each sub-block in the block to be encoded.
  • the encoding end can sequentially predict the pixels of each sub-block in the block to be encoded in the direction indicated by the target prediction order according to the target prediction mode.
  • the encoding end can predict each pixel in the sub-block in parallel based on the reconstructed pixels around the sub-block.
  • the encoding end determines the residual block of the block to be encoded based on the predicted value of each pixel in the block to be encoded.
  • the encoding end can determine the residual block of the block to be encoded based on the predicted value of each pixel in the block to be encoded and the original pixel value of the block to be encoded.
  • the encoding end can use the residual calculation unit 202 shown in FIG. 2 to perform a difference operation between the predicted value of each pixel in the block to be encoded and the original pixel value of the block to be encoded, thereby obtaining the residual block of the block to be encoded.
  • the encoding end encodes the above-mentioned residual block in residual scanning order to obtain the code stream of the block to be encoded.
  • the residual scanning order corresponds to the above-mentioned target prediction mode.
  • the residual scanning order can be the same as the above-mentioned target prediction order.
  • If the target prediction mode is the prediction mode shown in Table 1 below, the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7a. If the target prediction mode is the prediction mode shown in Table 2 below, the residual scanning order may be the order indicated by the preset trajectory shown in FIG. 7b. If the target prediction mode is the prediction mode shown in Table 3-1 below, the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7c-1. If the target prediction mode is the prediction mode shown in Table 4-1 below, the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7d-1.
  • the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7c-2. If the target prediction mode is the prediction mode shown in Table 4-2 below, the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7d-2.
  • the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7d-3. If the target prediction mode is the prediction mode shown in Table 5 below, the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7e.
  • the residual scanning order may be the order indicated by the preset trajectory shown in Figure 7g.
  • the encoding end may first transform the residual block of the block to be encoded to obtain the residual coefficient block of the block to be encoded.
  • the encoding end can also quantize the residual coefficient block to obtain a quantized residual coefficient block. Then, the encoding end encodes the quantized residual coefficient block in the above-mentioned target prediction order, thereby obtaining an encoded code stream of the block to be encoded.
  • the encoding end may first transform the residual block of the block to be encoded through the residual transformation unit 203 shown in FIG. 2 to obtain the residual coefficient block of the block to be encoded.
  • the encoding end can also quantize the residual coefficient block through the quantization unit 204 shown in FIG. 2 to obtain a quantized residual coefficient block.
  • the encoding end uses the encoding unit 205 shown in FIG. 2 to encode the quantized residual coefficient block in the above-mentioned target prediction order, thereby obtaining the encoded code stream of the block to be encoded.
  • the encoding end can directly perform quantization processing on the residual block of the to-be-encoded block to obtain a quantized residual block. Then, the encoding end encodes the quantized residual block in the above-mentioned target prediction order, thereby obtaining the encoded code stream of the block to be encoded.
  • the encoding end can directly perform quantization processing on the residual block of the block to be encoded through the quantization unit 204 shown in FIG. 2 to obtain the quantized residual block. Then, the encoding end encodes the quantized residual block in the above-mentioned target prediction order through the encoding unit 205 shown in FIG. 2 to obtain the coded stream of the block to be coded.
  • the encoding end also uses the above-mentioned target prediction mode as a syntax element of the block to be encoded, or uses the above-mentioned target prediction mode and the corresponding target prediction order as syntax elements of the block to be encoded, and encodes the syntax elements.
  • the encoding end can also add the encoded syntax element data to the encoded code stream of the block to be encoded.
  • the embodiment of the present application does not specifically limit the method of quantizing the residual block or residual coefficient block of the to-be-coded block.
  • For example, the quantization method described in Embodiment 2 below can be used to quantize the residual block or residual coefficient block of the to-be-coded block, but it is of course not limited to this.
  • the embodiment of the present application does not specifically limit the specific encoding method for encoding the quantized residual blocks or residual coefficients at the encoding end, as well as encoding-related semantic elements.
  • the variable-length encoding method described in Embodiment 3 below can be used for encoding, but it is of course not limited to this.
  • the prediction mode for predicting a block to be encoded may include at least one prediction mode among the T prediction mode, TB prediction mode, L prediction mode and RL prediction mode mentioned above.
  • a pixel in the block to be encoded can be predicted by any one of the T prediction method, the TB prediction method, the L prediction method or the RL prediction method.
  • any of the following prediction modes can be applied to the process of encoding images at the encoding end, or can also be applied to the process of decoding image data at the decoding end. This is not limited in the embodiments of the present application.
  • Table 1 shows a prediction mode for predicting each pixel in a block to be encoded with a size of 16×2.
  • the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7a. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one using the preset trajectory shown in Figure 7a, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 1.
  • the prediction method shown in each grid in Table 1 is used to predict the predicted value of the pixel at the corresponding position in the block to be encoded shown in Figure 7a.
  • the T prediction method shown in the first cell of the first row of Table 1 is used to predict the predicted value of pixel 1-1 located in the first cell of the first row of the block to be encoded shown in Figure 7a.
  • the RL prediction method shown in the second cell of the first row of Table 1 is used to predict the predicted value of pixel 1-2 located in the second cell of the first row of the block to be encoded shown in Figure 7a.
  • the T prediction method shown in the first cell of the second row of Table 1 is used to predict the predicted value of pixel 2-1 located in the first cell of the second row of the block to be encoded shown in Figure 7a.
  • the T prediction method shown in the 15th cell of the second row of Table 1 is used to predict the predicted value of pixel 2-15 located in the 15th cell of the second row of the block to be encoded shown in Figure 7a.
  • the two blocks of size 16×2 shown in Figure 7a represent the same image block in the image to be encoded (e.g., the block to be encoded).
  • two blocks are used to represent the block to be encoded. This is only to clearly show the preset trajectory when the encoding end predicts the pixels in the block to be encoded in sequence.
  • the preset trajectory is the trajectory indicated by the black solid line with an arrow in Figure 7a. Among them, the origin is the starting point of the preset trajectory, and the pixels at both ends of the black dotted line are two adjacent pixels on the preset trajectory.
  • the reconstructed value of pixel 1-1 can be determined based on the predicted value of pixel 1-1 (for example, after the encoding end obtains the predicted value of pixel 1-1, by executing Steps 13 to 16 described above, and by performing inverse quantization and inverse transformation to reconstruct the residual value of the pixel, and then obtain the reconstructed value of the pixel 1-1 based on the predicted value and the reconstructed residual value).
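The per-pixel predict-then-reconstruct loop described above can be sketched as follows, assuming (as an illustration only) that the T prediction method copies the reconstructed value of the pixel directly above and the L prediction method copies the reconstructed pixel to the left; the trajectory, residual values, and helper names are toy assumptions, not the embodiment's actual data.

```python
def predict_pixel(mode, row, col, above_row, recon):
    # `above_row` holds reconstructed pixels of the image block above the
    # current block; `recon` holds pixels already reconstructed in-block.
    if mode == "T":   # predict from the pixel directly above
        return above_row[col] if row == 0 else recon[(row - 1, col)]
    if mode == "L":   # predict from the pixel directly to the left
        return recon[(row, col - 1)]
    raise ValueError(mode)

# Reconstruct a toy 1x3 strip pixel by pixel: prediction + residual.
above = [10, 12, 14]
residuals = {(0, 0): 1, (0, 1): -2, (0, 2): 3}
modes = {(0, 0): "T", (0, 1): "L", (0, 2): "T"}
recon = {}
for pos in [(0, 0), (0, 1), (0, 2)]:          # the preset trajectory
    pred = predict_pixel(modes[pos], *pos, above, recon)
    recon[pos] = pred + residuals[pos]        # reconstruction step
```

Note how each reconstructed value becomes immediately available to later pixels on the trajectory, which is why the prediction order matters.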
  • Table 2 shows another prediction mode for predicting each pixel in a block to be encoded with a size of 16×2.
  • the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in FIG. 7b. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one using the preset trajectory shown in Figure 7b, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 2.
  • P T1-1 is the reconstruction value of the pixel on the upper side of pixel 1-1 (for example, the pixel in the image block on the upper side of the block to be encoded)
  • P B1-1 is the reconstructed value of the pixel on the lower side of pixel 1-1 (for example, the reconstructed value of pixel 2-1, which is available because pixel 2-1 comes before pixel 1-1 in the prediction order).
  • the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7c-1. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one along the preset trajectory shown in Figure 7c-1, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 3-1.
  • Table 3-2 shows a prediction mode for predicting each pixel in a block to be encoded with a size of 8×2.
  • the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7c-2. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one along the preset trajectory shown in Figure 7c-2, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 3-2.
  • For the fourth prediction mode, taking the size of the block to be encoded as 16×2 as an example, Table 4-1 shows another prediction mode for predicting each pixel in a block to be encoded of size 16×2. In this prediction mode, the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7d-1. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one along the preset trajectory shown in Figure 7d-1, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 4-1.
  • For the fourth prediction mode, taking the size of the block to be encoded as 8×2 as an example, Table 4-2 shows another prediction mode for predicting each pixel in a block to be encoded of size 8×2. In this prediction mode, the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7d-2. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one along the preset trajectory shown in Figure 7d-2, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 4-2.
  • For the fourth prediction mode, taking the size of the block to be encoded as 8×1 as an example, Table 4-3 shows another prediction mode for predicting each pixel in a block to be encoded of size 8×1. In this prediction mode, the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7d-3. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one along the preset trajectory shown in Figure 7d-3, it will predict each pixel in the block to be encoded using the specific prediction method in the prediction mode shown in Table 4-3.
  • the prediction order in which the encoding end predicts pixels in the block to be encoded may be the order indicated by the preset trajectory shown in Figure 7e. That is to say, when the encoding end predicts the pixels in the block to be encoded one by one using the preset trajectory shown in Figure 7e, each pixel in the block to be encoded will be predicted using the specific prediction method in the prediction mode shown in Table 5.
  • each of the above prediction modes corresponds to a different prediction order.
  • the prediction order is the first prediction order
  • the prediction order is the second prediction order, where the first prediction order and the second prediction order are different.
  • the first prediction order and the second prediction order may be one of the prediction orders corresponding to the first prediction mode to the sixth prediction mode in the above example.
  • the corresponding prediction orders are also different.
  • the third prediction order is used to predict the block to be decoded in prediction mode 1
  • the fourth prediction order is used to predict the block to be decoded under prediction mode 1, wherein the third prediction order and the fourth prediction order are different.
  • Figure 7d-1, Figure 7d-2, and Figure 7d-3 can respectively correspond to three different prediction orders.
  • the first prediction order, the second prediction order, the third prediction order and the fourth prediction order can all represent prediction trajectories for pixels in the block to be encoded, or can also represent prediction trajectories for sub-blocks in the block to be encoded.
  • the prediction mode is used to instruct the encoding end to take sub-blocks of a preset size in the block to be encoded as units, and to sequentially predict each sub-block in the block to be encoded along the direction indicated by the prediction order corresponding to the prediction mode. Moreover, in this prediction mode, when the encoding end predicts any sub-block of the block to be encoded, it can predict the pixels in that sub-block in parallel based on the reconstructed pixels around the sub-block.
  • the encoding end can divide the block to be encoded into multiple non-overlapping sub-blocks of a preset size, and use the arrangement direction of the sub-blocks obtained by the division as the direction indicated by the prediction order corresponding to the prediction mode.
  • Figure 7f shows a schematic diagram of a sub-block in a block to be encoded according to an embodiment of the present application.
  • the block to be encoded can be divided into the 8 non-overlapping 2×2 sub-blocks shown by the thick black outlines in (a) of Figure 7f. Furthermore, the arrangement direction of these eight sub-blocks, that is, the direction indicated by the prediction order corresponding to the prediction mode, is, for example, the direction pointed to by the arrow shown in (a) of Figure 7f.
  • the block to be encoded can be divided into the 4 non-overlapping 4×2 sub-blocks shown by the thick black outlines in (b) of Figure 7f. Furthermore, the arrangement direction of these four sub-blocks, that is, the direction indicated by the prediction order corresponding to the prediction mode, is, for example, the direction pointed to by the arrow shown in (b) of Figure 7f.
  • the block to be encoded can be divided into the 2 non-overlapping 8×2 sub-blocks shown by the thick black outlines in (c) of Figure 7f. Furthermore, the arrangement direction of these two sub-blocks, that is, the direction indicated by the prediction order corresponding to the prediction mode, is, for example, the direction pointed to by the arrow shown in (c) of Figure 7f.
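The three divisions above can be sketched with a small helper; the function name and the (x, y) top-left coordinate convention are illustrative assumptions.

```python
def split_into_subblocks(width, height, sub_w, sub_h):
    # Returns the top-left coordinates of non-overlapping sub-blocks,
    # listed in left-to-right arrangement order (the direction indicated
    # by the prediction order corresponding to the prediction mode).
    assert width % sub_w == 0 and height % sub_h == 0
    return [(x, y) for x in range(0, width, sub_w)
                   for y in range(0, height, sub_h)]

# A 16x2 block splits into 8 sub-blocks of 2x2, 4 of 4x2, or 2 of 8x2,
# matching (a), (b) and (c) of Figure 7f respectively:
eight = split_into_subblocks(16, 2, 2, 2)
four = split_into_subblocks(16, 2, 4, 2)
two = split_into_subblocks(16, 2, 8, 2)
```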
  • the sixth prediction mode indicates that the pixels in any sub-block are predicted based on the reconstructed values of the pixels on the upper side (adjacent or non-adjacent), the left side (adjacent or non-adjacent), and the diagonally upper side (adjacent or non-adjacent) of that sub-block.
  • a pixel in a sub-block will not rely on other pixels in the same sub-block for prediction.
  • the embodiment of the present application uses the prediction mode to indicate that the pixels in any sub-block are predicted based on the reconstruction values of the adjacent pixels on the upper side, left side, and diagonally above the any sub-block.
  • each sub-block in the block to be encoded is predicted sequentially along the direction indicated by the prediction order corresponding to the prediction mode, taking the sub-block shown in (a) of Figure 7f as a unit. As shown in (a) of Figure 7f, assume that any of the above sub-blocks is sub-block a, and the gray squares shown in (a) of Figure 7f are the adjacent pixels on the upper side, left side, and diagonally upper side of sub-block a.
  • assume that the pixels in sub-block a are Y0, Y1, Y2 and Y3 respectively, the pixels on the upper side of sub-block a include T0 and T1, the pixels on the left side of sub-block a include L0 and L1, and the pixel diagonally above sub-block a is LT:
  • the sixth prediction mode can specifically indicate: the predicted value of Y0 in sub-block a is obtained based on T0, L0 and LT, and the predicted value of Y1 in sub-block a is obtained based on T1, L0 and LT. , the predicted value of Y2 in sub-block a is obtained based on T0, L1 and LT, and the predicted value of Y3 in sub-block a is obtained based on T1, L1 and LT.
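The mapping above can be sketched as follows. Because Y0 to Y3 each depend only on reconstructed pixels outside sub-block a (T0, T1, L0, L1, LT) and never on each other, the four predictions are mutually independent and can be computed in parallel. The combiner shown borrows the (L + T + 2*LT) >> 2 weighting of prediction method 2 purely as one possible choice; how the three neighbours are actually combined is mode-specific, and the function names are assumptions.

```python
def predict_subblock_a(T0, T1, L0, L1, LT,
                       combine=lambda t, l, lt: (l + t + 2 * lt) >> 2):
    # Each of Y0..Y3 uses only reconstructed neighbours outside the
    # sub-block, so all four predictions are independent of each other
    # and could run in parallel.
    return {
        "Y0": combine(T0, L0, LT),  # Y0 from T0, L0 and LT
        "Y1": combine(T1, L0, LT),  # Y1 from T1, L0 and LT
        "Y2": combine(T0, L1, LT),  # Y2 from T0, L1 and LT
        "Y3": combine(T1, L1, LT),  # Y3 from T1, L1 and LT
    }

preds = predict_subblock_a(T0=8, T1=12, L0=4, L1=6, LT=10)
```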
  • the prediction method 1 specifically indicated by the sixth prediction mode may be: based on the horizontal gradient or vertical gradient of the upper pixels, left pixels and diagonally upper pixels of any pixel in any sub-block in the block to be encoded, Determine the predicted value of any pixel.
  • condition 1 is used to characterize that the horizontal gradient of the pixels around Y0 is the smallest.
  • Condition 1 is specifically:
  • condition 2 is used to characterize the minimum vertical gradient of pixels around Y0.
  • Condition 2 is specifically:
  • the encoding end determines that the reconstructed values of the upper pixel T0, left pixel L0 and obliquely upper pixel LT of Y0 do not meet the above conditions 1 and 2, then the reconstructed value of the obliquely upper pixel LT of Y0 is determined as the predicted value of Y0 .
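Conditions 1 and 2 themselves are not reproduced in this excerpt. As an illustration only, a common concrete instance of such gradient-based selection compares the horizontal and vertical gradients formed by T0, L0 and LT and falls back to the diagonal neighbour LT when neither condition holds, as the text describes; the specific gradient tests below are assumptions, not the patent's exact conditions.

```python
def gradient_predict(T0, L0, LT):
    # Illustrative stand-in for conditions 1 and 2.
    horiz_grad = abs(L0 - LT)  # change along the horizontal direction
    vert_grad = abs(T0 - LT)   # change along the vertical direction
    if horiz_grad < vert_grad:   # assumed reading of condition 1
        return L0
    if vert_grad < horiz_grad:   # assumed reading of condition 2
        return T0
    return LT                    # neither condition met: use LT, as in the text
```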
  • the prediction method 2 specifically indicated by the sixth prediction mode may be: based on the average of the reconstructed values of the upper pixels, left pixels and diagonally upper pixels of any pixel in any sub-block in the block to be encoded, determine The predicted value of any pixel.
  • the predicted value of Y0 can be: (L0 reconstruction value + T0 reconstruction value + 2*LT reconstruction value) >> 2.
  • (L0 reconstruction value + T0 reconstruction value + 2*LT reconstruction value) >> 2 represents the value obtained by right-shifting the binary value of (L0 reconstruction value + T0 reconstruction value + 2*LT reconstruction value) by 2 bits.
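In other words, the expression weights the diagonal neighbour LT twice and floor-divides the sum by 4 via the 2-bit right shift:

```python
def weighted_pred(L0, T0, LT):
    # (L0 + T0 + 2*LT) >> 2: a 2-bit right shift floor-divides by 4,
    # so this equals the integer weighted average (L0 + T0 + 2*LT) // 4.
    return (L0 + T0 + 2 * LT) >> 2

p = weighted_pred(L0=7, T0=9, LT=10)   # (7 + 9 + 20) >> 2 = 9
```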
  • the sixth prediction mode can specifically indicate: the reconstructed value of the upper pixel, left pixel, or diagonally upper pixel located in the prediction direction of any pixel in any sub-block of the block to be encoded is determined as the predicted value of that pixel.
  • the prediction direction may be a 45-degree diagonal direction to the left or a 45-degree diagonal right direction of any pixel, which is not limited in the embodiments of the present application.
  • each sub-block to be encoded is predicted sequentially along the prediction order corresponding to the prediction mode in units of sub-blocks shown in (b) in Figure 7f
  • the gray squares shown in (b) in Figure 7f are the adjacent pixels on the upper, left and diagonally upper sides of sub-block b.
  • the pixels in sub-block b are Y0, Y1, Y2, Y3, Y4, Y5, Y6 and Y7 respectively, and the pixels on the upper side of sub-block b include T0, T1, T2, T3, T4 and T5.
  • the pixels on the left side of sub-block b include L0 and L1, and the pixels on the diagonally upper side of sub-block b are LT:
  • the sixth prediction mode can specifically indicate: the reconstructed value of LT, located obliquely 45 degrees to the left of Y0 in sub-block b, is determined as the predicted value of Y0; the reconstructed value of T0, located obliquely 45 degrees to the left of Y1, is determined as the predicted value of Y1; the reconstructed value of T1, located obliquely 45 degrees to the left of Y2, is determined as the predicted value of Y2; the reconstructed value of T2, located obliquely 45 degrees to the left of Y3, is determined as the predicted value of Y3; the reconstructed value of L0, located obliquely 45 degrees to the left of Y4, is determined as the predicted value of Y4; the reconstructed value of LT, located obliquely 45 degrees to the left of Y5, is determined as the predicted value of Y5; the reconstructed value of T0, located obliquely 45 degrees to the left of Y6, is determined as the predicted value of Y6; and the reconstructed value of T1, located obliquely 45 degrees to the left of Y7, is determined as the predicted value of Y7.
  • the sixth prediction mode can specifically indicate: the reconstructed value of T1, located obliquely 45 degrees to the right of Y0 in sub-block b, is determined as the predicted value of Y0; the reconstructed value of T2, located obliquely 45 degrees to the right of Y1, is determined as the predicted value of Y1; the reconstructed value of T3, located obliquely 45 degrees to the right of Y2, is determined as the predicted value of Y2; the reconstructed value of T4, located obliquely 45 degrees to the right of Y3, is determined as the predicted value of Y3; the reconstructed value of T2, located obliquely 45 degrees to the right of Y4, is determined as the predicted value of Y4; the reconstructed value of T3
  • the target prediction mode includes the prediction mode of each sub-block in the block to be decoded
  • the first sub-block includes the first pixel and the second pixel
  • the prediction mode of the first sub-block is used to predict the first pixel and the second pixel in parallel according to the reconstructed pixels around the first sub-block.
  • the first sub-block is a sub-block in the block to be decoded that is currently predicted.
  • assuming the size of each sub-block in the block to be decoded is 2×2, the block to be decoded can be divided into 8 sub-blocks.
  • the first sub-block is the first sub-block from left to right.
  • after the pixels in the first sub-block from left to right have been reconstructed, reconstruction of the pixels in the second sub-block from left to right can be started, and the first sub-block is then the above-mentioned second sub-block.
  • the first pixel and the second pixel included in the first sub-block may be two groups of non-overlapping pixels in the first sub-block, and the first pixel or the second pixel may include one or more pixels in the first sub-block.
  • the first pixel in sub-block a may include Y0
  • the second pixel in sub-block a may include Y1, Y2, and Y3
  • the first pixel in sub-block a may include Y0 and Y1
  • the second pixel in sub-block a may include Y2 and Y3.
  • the first pixel in sub-block a may include Y1, and the second pixel in sub-block a may include Y2.
  • when the encoding end predicts the pixels in the block to be encoded in the target prediction mode, it predicts the pixels in the block to be encoded in the target prediction order corresponding to the target prediction mode.
  • the encoding method provided by the embodiment of the present application predicts each pixel in the block to be encoded in the target prediction order according to the target prediction mode, and can predict some pixels in the block to be encoded in parallel, improving the efficiency of predicting pixels in the block to be encoded.
  • the encoding method provided by the embodiment of the present application can not only save the cache space for caching the residual values, but also improve coding efficiency.
  • the target prediction mode indicates that each sub-block in the block to be encoded is predicted sequentially in sub-block units
  • in this prediction mode, when the encoding end predicts a sub-block, it can predict multiple pixels within the sub-block in parallel based on the reconstructed pixels around the sub-block; that is, this prediction mode can further improve the efficiency of the encoding end in predicting the block to be encoded.
  • the encoding end when the encoding end re-obtains the reconstruction value based on the prediction value and the residual value, the encoding end does not need to cache the residual value. Therefore, the encoding method provided by the embodiment of the present application further saves cache space and improves encoding efficiency.
  • FIG. 8 it is a schematic flow chart of another image decoding method provided by an embodiment of the present application.
  • the method shown in Figure 8 includes the following steps:
  • the decoder parses the code stream of the block to be decoded to determine a target prediction mode for predicting pixels in the block to be decoded.
  • the code stream of the block to be decoded may be a code stream received by the decoding end from the encoding end, or a code stream obtained from other devices, such as a code stream obtained from a storage device, which is not limited in this embodiment of the present application.
  • the target prediction mode is used to predict pixels in the block to be decoded to obtain predicted values of the pixels in the block to be decoded. It can be understood that the target prediction mode here is the prediction mode used by the encoding end to predict pixels in the image block during encoding.
  • the decoding end can analyze the code stream of the block to be decoded through the decoding method corresponding to the encoding end, thereby obtaining the target prediction mode.
  • the decoder determines the target prediction sequence corresponding to the target prediction mode based on the target prediction mode.
  • the decoding end may preset the correspondence between multiple prediction modes and their corresponding prediction orders. In this way, when the decoding end determines that the prediction mode is the target prediction mode, the target prediction order corresponding to the target prediction mode can be determined in the preset correspondence relationship.
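The preset correspondence can be as simple as a lookup table; the mode and order identifiers below are hypothetical placeholders for the modes of Tables 1 to 5 and the trajectories of Figures 7a to 7e.

```python
# Hypothetical identifiers; the real modes and orders are those defined
# by the tables and preset trajectories in the document.
MODE_TO_ORDER = {
    "mode_1": "order_fig7a",
    "mode_2": "order_fig7b",
    "mode_3": "order_fig7c",
}

def target_prediction_order(target_mode):
    # After parsing the code stream yields `target_mode`, the target
    # prediction order is a simple lookup in the preset correspondence.
    return MODE_TO_ORDER[target_mode]
```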
  • the decoding end predicts each pixel in the block to be decoded in the target prediction order according to the target prediction mode, and obtains the prediction value of each pixel.
  • the description of predicting each pixel in the block to be decoded in the prediction order corresponding to the prediction mode can be referred to the detailed description of the prediction mode above, which will not be described again here.
  • the decoding end reconstructs each pixel in the block to be decoded based on the predicted value of each pixel, thereby obtaining a reconstructed block of the block to be decoded.
  • the decoder can first parse the code stream to obtain the residual block of the block to be decoded. Then, the decoding end performs inverse quantization on the residual block to obtain the reconstructed residual block of the block to be decoded. In this way, the decoding end can obtain the reconstructed block of the block to be decoded based on the obtained prediction value of the pixel in the block to be decoded and the residual value in the reconstructed residual block.
  • the decoding end may first analyze the code stream of the block to be decoded through the code stream parsing unit 301 shown in FIG. 5 to obtain the residual block of the block to be decoded. Then, the decoding end can perform inverse quantization on the residual block through the inverse quantization unit 302 shown in Figure 5, thereby obtaining the reconstructed residual block of the block to be decoded. In this way, the decoding end can obtain the reconstructed block of the block to be decoded through the reconstruction unit 305 and based on the obtained prediction value of the pixel in the block to be decoded and the residual value in the reconstructed residual block.
  • the decoder can first parse the code stream to obtain the residual coefficient block of the block to be decoded. Then, the decoding end performs inverse quantization on the residual block to obtain an inverse quantized residual coefficient block. Then, the decoding end performs inverse transformation on the inverse quantized residual coefficient block to obtain the reconstructed residual block of the block to be decoded. In this way, the decoding end can obtain the reconstructed block of the block to be decoded based on the obtained prediction value of the pixel in the block to be decoded and the residual value in the reconstructed residual block.
  • the decoding end may first analyze the code stream of the block to be decoded through the code stream analysis unit 301 shown in FIG. 5 to obtain the residual coefficient block of the block to be decoded. Then, the decoding end can perform inverse quantization on the residual coefficient block through the inverse quantization unit 302 shown in FIG. 5, thereby obtaining an inverse quantized residual coefficient block. Next, the decoding end performs inverse transformation on the inverse quantized residual coefficient block through the residual inverse transformation unit 303 shown in FIG. 5 to obtain the reconstructed residual block of the block to be decoded. In this way, the decoding end can obtain the reconstructed block of the block to be decoded through the reconstruction unit 305 and based on the obtained prediction value of the pixel in the block to be decoded and the residual value in the reconstructed residual block.
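The two decoding paths above (inverse quantization only, or inverse quantization followed by inverse transformation) mirror the encoder and can be sketched as follows; the toy doubling dequantizer and the function names stand in for units 301 to 305 of FIG. 5 and are assumptions for illustration.

```python
def decode_block(stream, predictions, inverse_transform=None):
    # Unit 301: parse the code stream into a (quantized) residual block.
    residual = list(stream)
    # Unit 302: inverse quantization (here: undo a toy halving quantizer).
    residual = [v * 2 for v in residual]
    # Unit 303: inverse transformation, only on the transform path.
    if inverse_transform is not None:
        residual = inverse_transform(residual)
    # Unit 305: reconstruction = predicted value + reconstructed residual.
    return [p + r for p, r in zip(predictions, residual)]

recon = decode_block([4, -3], predictions=[10, 20])
```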
  • the decoding end can analyze the code stream of the block to be decoded through a decoding method corresponding to the encoding end, thereby obtaining the residual block or residual coefficient block of the block to be decoded.
  • the residual scanning order in which the decoding end decodes the code stream of the block to be decoded is the target prediction order in which the pixels in the block to be decoded are predicted.
  • the target prediction order corresponds to the target prediction mode for predicting the block to be decoded.
  • relevant instructions on the residual scanning sequence please refer to the above and will not be repeated here.
  • the image decoding method shown in Figure 8 corresponds to the image encoding method shown in Figure 6c. Therefore, this image decoding method helps to improve the efficiency of predicting the block to be decoded, and when the block to be decoded is reconstructed based on the predicted values and the residual values, the cache space for caching residual values can be saved and decoding efficiency can be improved.
  • the embodiment of the present application does not specifically limit the method of inverse quantization of the residual block or residual coefficient block of the block to be decoded.
  • the inverse quantization method described in Embodiment 2 below can be used to perform inverse quantization on the residual block or residual coefficient block of the block to be decoded, but it is of course not limited to this.
  • the embodiment of the present application does not specifically limit the way in which the decoding end decodes the code stream of the block to be decoded.
  • the variable length decoding method described in the following Embodiment 3 can be used for decoding, but is of course not limited to this.
  • FIG. 9a it is a schematic flow chart of yet another image encoding method provided by an embodiment of the present application.
  • the method shown in Figure 9a includes the following steps:
  • the encoding end determines the second residual block of the block to be encoded and the quantization parameter QP of each pixel in the block to be encoded.
  • the second residual block may be an original residual value block of the block to be encoded, or the second residual block may also be a residual coefficient block obtained by transforming the original residual value block.
  • the original residual value block of the block to be encoded is the residual block obtained by the encoding end based on the original pixel value of the block to be encoded and the prediction block of the block to be encoded. It can be understood that the process of the encoding end predicting the pixels in the block to be encoded to obtain the prediction block can be implemented based on the method described in Embodiment 1. Of course, it can also be obtained based on any method in the prior art that can obtain the prediction block of the block to be encoded. , the embodiment of the present application does not limit this.
  • the residual coefficient block is obtained by transforming the original residual value block at the coding end. The embodiment of the present application does not specifically limit the process of transforming the original residual value block at the coding end.
  • the embodiment of the present application will be described below by taking the second residual block as an original residual value block of the block to be encoded as an example.
  • the encoding end may also determine the quantization parameter QP of each pixel in the block to be encoded before or after obtaining the second residual block of the block to be encoded.
  • the encoding end can read the preset quantization parameter QP.
  • the embodiment of the present application does not limit the specific process by which the encoding end determines the quantization parameter QP of each pixel in the block to be encoded.
  • the encoding end quantizes the second residual block based on the QP of each pixel in the block to be encoded and the quantization preset array, to obtain the first residual block.
  • the quantization preset array is used to quantize the values in the second residual block.
  • amp[QP] represents the QP-th amplification parameter in the amplification parameter array
  • shift[QP] represents the QP-th displacement parameter in the displacement parameter array
  • (residual value before quantization × amp[QP]) >> shift[QP] means right-shifting the binary value of (residual value before quantization × amp[QP]) by shift[QP] bits.
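Formula (1) therefore replaces a division-based quantizer with one multiplication and one right shift, both looked up by QP. A sketch with toy (assumed) amplification and displacement arrays follows; note the patent indexes the arrays by "the QP-th" value, whereas the sketch uses Python's 0-based indexing.

```python
# Toy amplification and displacement arrays indexed by QP; the real values
# are derived from the inverse quantization preset array described in the
# text, so these numbers are assumptions for demonstration only.
amp = [1, 1, 1, 3]     # amp[QP]
shift = [0, 0, 1, 2]   # shift[QP]

def quantize(value, qp):
    # Formula (1): quantized = (value * amp[QP]) >> shift[QP]
    return (value * amp[qp]) >> shift[qp]

q = quantize(9, 2)   # (9 * 1) >> 1 = 4
```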
  • the inverse quantization preset array composed of the quotients of amp[i] has the following rules: the interval between two adjacent numbers among the 1st to n-th numbers in the inverse quantization preset array is 1; the interval between two adjacent numbers among the (n+1)-th to (n+m)-th numbers in the inverse quantization preset array is 2; and the interval between two adjacent numbers among the (n+k*m+1)-th to (n+k*m+m)-th numbers in the inverse quantization preset array is 2^(k+1). Among them, n and m are integers greater than 1, and i and k are both positive integers.
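The rules describe a step pattern: step 1 for the first n entries, step 2 for the next m entries, and step 2^(k+1) for the k-th subsequent group of m entries. A sketch generating such an array follows; the starting value and the example parameters are assumptions, not values from the patent.

```python
def build_dequant_array(n, m, groups, start=1):
    # First n entries step by 1, next m entries step by 2, then the
    # k-th subsequent group of m entries steps by 2**(k+1).
    arr = [start]
    for _ in range(n - 1):
        arr.append(arr[-1] + 1)      # interval 1 for the 1st..n-th numbers
    for _ in range(m):
        arr.append(arr[-1] + 2)      # interval 2 for the (n+1)..(n+m)-th
    for k in range(1, groups + 1):
        for _ in range(m):
            arr.append(arr[-1] + 2 ** (k + 1))  # interval 2^(k+1)
    return arr

a = build_dequant_array(n=4, m=2, groups=2)
```

With n=4, m=2 and two extra groups this yields steps of 1, 1, 1, 2, 2, 4, 4, 8, 8, illustrating the doubling interval rule.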
  • the inverse quantization preset array is used to implement inverse quantization processing on the first residual block obtained after quantization, and the inverse quantization operation implemented by the inverse quantization preset array is inverse to the quantization operation implemented by the quantization preset array. Therefore, based on the inverse quantization preset array with the above rules, the amplification parameter array and the displacement parameter array used to implement the quantization operation can be determined inversely.
  • the encoding end can determine the amplification parameter of each pixel in the above-mentioned amplification parameter array based on the determined QP of each pixel in the block to be encoded, and determine the displacement parameter of each pixel in the above-mentioned displacement parameter array. Then, the encoding end performs a quantization operation on the second residual block based on the amplification parameter and displacement parameter corresponding to each pixel in the block to be encoded, thereby obtaining the first residual block.
  • for any pixel, the encoding end can search the above-mentioned amplification parameter array based on the determined QP of that pixel and take the QP-th value in the amplification parameter array as the amplification parameter of that pixel; likewise, it searches the above-mentioned displacement parameter array and takes the QP-th value in the displacement parameter array as the displacement parameter of that pixel.
  • the encoding end then performs a quantization operation on the residual value corresponding to that pixel in the second residual block through the above formula (1), thereby obtaining the quantized residual value corresponding to that pixel. When the encoding end has quantized all the residual values in the second residual block, the first residual block is obtained.
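The per-pixel lookup-and-shift quantization described above can be sketched as follows. The contents of the amplification and displacement parameter arrays here are hypothetical placeholders; the real arrays would be derived from the inverse quantization preset array described above.

```python
# Hypothetical parameter arrays indexed by QP (illustrative values only).
AMP = [32, 29, 26, 23, 21, 19, 17, 16]
SHIFT = [5, 5, 5, 5, 5, 5, 5, 5]

def quantize_block_formula1(residuals, qps):
    """Formula (1): quantized = (residual * amp[QP]) >> shift[QP],
    applied per pixel with that pixel's own QP."""
    return [(r * AMP[qp]) >> SHIFT[qp] for r, qp in zip(residuals, qps)]
```

With these placeholder values, a residual of 10 at QP 0 stays 10 ((10 × 32) >> 5), while the same residual at QP 7 quantizes to 5 ((10 × 16) >> 5), i.e. larger QPs coarsen the quantization.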
  • the encoding end needs to perform up to 6 multiplication operations when quantizing the residual value in the second residual block.
  • this method greatly reduces the amount of calculation on the encoding end. That is to say, this method greatly saves the computing resources on the encoding side.
  • the quantization preset array in this implementation manner may include fewer values.
  • the encoding end can implement quantization processing of each residual value in the second residual block based on formula (2):
  • Formula (2): residual value after quantization = (residual value before quantization × amp + offset) >> shift
  • amp is the amplification parameter corresponding to each pixel, determined by the encoding end based on the QP of each pixel in the block to be encoded and the quantization preset array; shift is the displacement parameter corresponding to each pixel, determined in the same way.
  • the value of the amplification parameter amp corresponding to each pixel in the block to be encoded, as determined by the encoding end, is the value in the quantization preset array indexed by the bitwise AND of that pixel's QP with 7; the value of the displacement parameter shift corresponding to each pixel is the sum of 7 and the quotient of that pixel's QP divided by 2^3.
  • quant_scale represents the values in the quantization preset array
  • the amplification parameter amp can be calculated by quant_scale[QP&0x07]
  • the displacement parameter shift can be calculated by 7+(QP>>3).
  • QP&0x07 represents the bitwise AND of the binary value of QP with 7 (mathematically equivalent to QP modulo 8), and quant_scale[QP&0x07] is the (QP&0x07)-th value in the quantization preset array.
  • QP>>3 means shifting the binary value of QP to the right by 3 bits.
  • offset can be calculated by 1 << (shift−1); 1 << (shift−1) means shifting the binary value of 1 to the left by (shift−1) bits (mathematically, 2^(shift−1)).
  • the encoding end calculates the amplification parameter and displacement parameter corresponding to each pixel based on the determined QP and quantization preset array of each pixel in the block to be encoded. Then, the encoding end quantizes the second residual block based on the calculated amplification parameters, displacement parameters, and offset parameters of each pixel in the block to be encoded, and uses formula (2) to obtain the first residual block.
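A sketch of this second quantization variant in Python. The 8-entry quant_scale array below is a made-up example chosen so that QP = 0 gives an identity mapping; it is not the array actually used by the codec.

```python
QUANT_SCALE = [128, 116, 104, 96, 88, 80, 72, 64]  # hypothetical 8-entry array

def quantize_formula2(residual, qp):
    """Formula (2): quantized = (residual * amp + offset) >> shift, with
    amp = quant_scale[QP & 0x07], shift = 7 + (QP >> 3),
    offset = 1 << (shift - 1)."""
    amp = QUANT_SCALE[qp & 0x07]   # QP mod 8 selects the array entry
    shift = 7 + (qp >> 3)          # shift grows by 1 for every 8 QP steps
    offset = 1 << (shift - 1)      # rounding offset
    return (residual * amp + offset) >> shift
```

With this particular array, increasing QP by 8 reuses the same amp but increases shift by 1, so the quantization step doubles: quantize_formula2(10, 0) gives 10 while quantize_formula2(10, 8) gives 5.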
  • the encoding end needs to perform up to 8 multiplication operations when quantizing the residual value in the second residual block.
  • this method greatly reduces the amount of calculation on the encoding side, that is, this method greatly saves the computing resources on the encoding side.
  • the encoding end can implement quantization processing of each residual value in the second residual block through formula (3):
  • Formula (3): residual value after quantization = (residual value before quantization + offset) >> shift
  • shift represents the displacement parameter
  • the value of shift is an integer determined by QP and is monotonically non-decreasing; that is, shift corresponds to QP one-to-one, and as QP increases, the value of shift does not decrease.
  • the encoding end can determine the displacement parameter of each pixel based on the determined QP of each pixel in the block to be encoded, and further determine the corresponding offset parameter. Then, the encoding end can quantize the second residual block based on the determined displacement parameter and offset parameter of each pixel in the block to be encoded, and use formula (3) to obtain the first residual block.
  • the encoding end does not need to perform multiplication when quantizing the residual value in the second residual block.
  • this method greatly reduces the amount of calculation on the encoding side, that is, this method greatly saves the computing resources on the encoding side.
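The multiplication-free variant can be sketched as below. The QP-to-shift mapping is an illustrative monotonically non-decreasing choice, not the mapping defined by this application.

```python
def shift_for_qp(qp):
    """Illustrative monotonically non-decreasing QP -> shift mapping."""
    return qp >> 2                  # shift grows by 1 every 4 QP steps

def quantize_formula3(residual, qp):
    """Formula (3): quantized = (residual + offset) >> shift; only an
    addition and a right shift are needed, no multiplication."""
    shift = shift_for_qp(qp)
    offset = (1 << (shift - 1)) if shift > 0 else 0  # rounding offset
    return (residual + offset) >> shift
```

Because the whole operation is one add and one shift per residual value, this variant is the cheapest of the three in terms of arithmetic.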
  • the encoding end encodes the first residual block to obtain the code stream of the block to be encoded.
  • after the encoding end obtains the first residual block, it encodes it, thereby obtaining the encoded code stream of the block to be encoded.
  • the encoding end takes the QP corresponding to each pixel used when quantizing the second residual block, together with the specific quantization method, as syntax elements of the block to be encoded, encodes these syntax elements, and adds the encoded syntax element data to the encoded code stream of the block to be encoded.
  • the residual scanning order used by the encoding end when encoding the first residual block may be the target prediction order in which the encoding end predicts the pixels in the block to be encoded; the target prediction order corresponds to the target prediction mode used to predict the block to be encoded.
  • the specific mode for predicting the block to be encoded is not specifically limited in this embodiment of the present application.
  • the prediction mode described in the solution of Embodiment 1 above can be used to predict the block to be encoded. Of course, it is not limited to this.
  • this embodiment of the present application does not specifically limit the specific encoding method used by the encoding end to encode the first residual block and the related syntax elements.
  • the variable length encoding method described in Embodiment 3 can be used for encoding.
  • in the image coding method described in S301-S303 above, a quantization processing method that saves computing resources at the encoding end is adopted in the image coding process; that is, the efficiency of the quantization step in the image coding method is greatly improved, and thus the image coding method greatly improves the coding efficiency of images.
  • FIG. 9b is a schematic flow chart of another image decoding method provided by an embodiment of the present application.
  • the method shown in Figure 9b includes the following steps:
  • the decoding end parses the code stream of the block to be decoded, and obtains the inverse quantization parameter of each pixel in the block to be decoded and the first residual block of the block to be decoded.
  • the code stream of the block to be decoded may be a code stream received by the decoding end from the encoding end, or a code stream obtained from other devices, such as a code stream obtained from a storage device, which is not limited in this embodiment of the present application.
  • the inverse quantization parameter is used to instruct the decoding end to use an inverse quantization method corresponding to the inverse quantization parameter to perform inverse quantization processing on the residual value in the first residual block.
  • the inverse quantization parameter may include the quantization parameter QP.
  • the decoding end can parse the code stream of the block to be decoded through a decoding method corresponding to the encoding end, thereby obtaining the inverse quantization parameter of each pixel in the block to be decoded and the first residual block of the block to be decoded.
  • the residual scanning order in which the decoding end parses the code stream of the block to be decoded is the target prediction order for predicting the pixels in the block to be decoded.
  • the target prediction order corresponds to the target prediction mode for predicting the block to be decoded.
  • the decoding end can use a variable code length decoding method to parse the code stream of the block to be decoded to obtain the encoding code length CL and the first residual block of each value in the residual block corresponding to the block to be decoded.
  • in S601, the decoder uses a variable code length decoding method to parse the code stream of the block to be decoded and obtains the CL of each value in the residual block corresponding to the block to be decoded; for how the decoder determines the first residual block based on the CL, please refer to the description of S602, which will not be repeated here.
  • possible implementations of step S401 will be described below and will not be repeated here.
  • the decoding end performs inverse quantization on the first residual block based on the QP indicated by the inverse quantization parameter of each pixel in the block to be decoded and the inverse quantization preset array, to obtain the second residual block.
  • in a first possible implementation, the inverse quantization method that the decoder determines for the first residual block based on the inverse quantization parameter of each pixel in the block to be decoded is the inverse of the quantization method described in the first possible implementation in S302; for the description of the inverse quantization preset array, refer to the relevant description of the inverse quantization preset array in S302 above, which will not be repeated here.
  • the decoder can determine the amplification coefficient corresponding to each pixel in the inverse quantization preset array based on the QP of each pixel in the block to be decoded. Then, the decoding end can perform inverse quantization processing on the first residual block based on the amplification coefficient corresponding to each pixel, thereby obtaining the second residual block.
  • for any pixel, the decoder can search the above-mentioned inverse quantization preset array based on the QP of that pixel and take the QP-th value in the inverse quantization preset array as the amplification coefficient of that pixel. Then, the decoder multiplies the residual value corresponding to that pixel in the first residual block by the amplification coefficient, thereby obtaining the inverse quantized residual value corresponding to that pixel. When all residual values have been processed, the second residual block is obtained.
  • after determining the amplification coefficient corresponding to each pixel in the block to be decoded, the decoder may also implement the operation as a left shift of the residual value in the first residual block corresponding to each pixel; the embodiments of this application do not specifically limit how the multiplication of a residual value by its corresponding amplification coefficient is implemented.
  • in a second possible implementation, the inverse quantization method that the decoder determines for the first residual block based on the inverse quantization parameter of each pixel in the block to be decoded is the inverse of the quantization method described in the second possible implementation in S302; the decoder can then implement inverse quantization processing on each residual value in the first residual block based on the above formula (2) to obtain the second residual block.
  • the amplification parameter mult in formula (2) (different from the amp in the quantization process above) can be calculated by dequant_scale[QP&0x07], which is the (QP&0x07)-th value in the dequantization preset array.
  • the decoding end calculates the amplification parameter and displacement parameter corresponding to each pixel based on the QP of each pixel in the block to be decoded and the inverse quantization preset array. Then, the decoding end can implement inverse quantization of the first residual block through formula (2), based on the calculated amplification parameters, displacement parameters, and offset parameters of each pixel in the block to be decoded, to obtain the second residual block.
  • the decoder needs to perform up to 8 multiplication operations when performing inverse quantization processing on the residual value in the first residual block.
  • this method greatly reduces the calculation amount of the decoding end, that is, this method greatly saves the computing resources of the decoding end.
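A sketch of this inverse quantization in the same (x × mult + offset) >> shift form. The dequant_scale contents and the way the step size scales with QP >> 3 are assumptions chosen so that the example roughly inverts the formula (2) quantizer sketched on the encoding side; they are not values taken from this application.

```python
DEQUANT_SCALE = [64, 70, 77, 84, 92, 101, 111, 122]  # hypothetical 8-entry array

def dequantize_formula2(level, qp):
    """Inverse quantization: reconstructed = (level * mult + offset) >> shift,
    with mult derived from dequant_scale[QP & 0x07]."""
    mult = DEQUANT_SCALE[qp & 0x07] << (qp >> 3)  # step size doubles every 8 QPs
    shift = 6
    offset = 1 << (shift - 1)                     # rounding offset
    return (level * mult + offset) >> shift
```

With these placeholder values, a quantized level of 10 at QP 0 reconstructs to 10, and a level of 5 at QP 8 also reconstructs to 10, mirroring a quantizer whose step doubles every 8 QP steps.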
  • in a third possible implementation, the inverse quantization method that the decoder determines for the first residual block based on the inverse quantization parameter of each pixel in the block to be decoded is the inverse of the quantization method described in the third possible implementation in S302; the decoder can then implement inverse quantization processing of each residual value in the first residual block based on formula (4) to obtain the second residual block:
  • the description of the offset parameter offset and the displacement parameter shift can refer to the description in the third possible implementation method in S302, and will not be described again here.
  • the decoder can determine the displacement parameter of each pixel based on the QP of each pixel in the block to be decoded, and further determine the corresponding offset parameter. Then, the decoding end can implement inverse quantization of the first residual block based on the determined displacement parameter and offset parameter of each pixel in the block to be decoded through formula (4), thereby obtaining the second residual block.
  • the decoding end does not need to perform multiplication when performing inverse quantization processing on the residual value in the first residual block.
  • this method greatly reduces the amount of calculation on the decoder, that is, this method greatly saves the computing resources on the decoder.
  • the inverse quantization methods among the above possible implementations can be applied to the process of encoding the image at the encoding end (for example, to obtain a reconstruction block used for predicting pixels in the image block), and can also be applied to the decoding process.
  • the embodiment of the present application does not limit this.
  • the decoding end reconstructs the block to be decoded based on the second residual block to obtain the reconstructed block.
  • the decoding end can directly reconstruct the second residual block to obtain a reconstructed block of the block to be decoded.
  • the decoding end can directly reconstruct the block to be decoded based on the second residual block and the prediction block of the block to be decoded, to obtain the reconstructed block of the block to be decoded.
  • the decoder can obtain the reconstructed block of the block to be decoded by summing the second residual block and the prediction block of the block to be decoded.
  • the prediction block of the block to be decoded may be obtained by the decoder predicting the block to be decoded based on the prediction mode.
  • the prediction mode may be obtained by parsing the code stream of the block to be decoded.
  • it may be any prediction mode in the above-mentioned Embodiment 1, or any prediction mode in the prior art; the embodiment of the present application does not limit this.
  • the process of the decoder predicting the prediction block of the block to be decoded will not be described in detail here.
  • the decoder can predict each pixel in the block to be decoded in a target prediction order corresponding to the target prediction mode according to the target prediction mode.
  • the decoder reconstructs the block to be decoded based on the predicted value of each pixel and the second residual block to obtain the reconstructed block.
  • the decoding end may first perform inverse transformation on the second residual block to reconstruct the residual value block of the block to be decoded.
  • the second residual block is actually a residual coefficient block.
  • the decoder can reconstruct the block to be decoded based on the reconstructed residual value block and the prediction block of the block to be decoded, to obtain a reconstructed block of the block to be decoded.
  • the decoding end can obtain the reconstructed block of the block to be decoded by summing the reconstructed residual value block and the prediction block of the block to be decoded.
  • the image decoding method shown in Figure 9b corresponds to the image encoding method shown in Figure 9a. Since an inverse quantization processing method that saves computing resources at the decoding end is adopted, the efficiency of the inverse quantization step in the image decoding method is greatly improved, and thus the image decoding method greatly improves the decoding efficiency of images.
  • the specific mode for predicting the block to be decoded in this embodiment of the present application is not specifically limited.
  • the prediction mode described in the solution of Embodiment 1 above can be used to predict the block to be decoded, but of course it is not limited to this.
  • this embodiment of the present application does not specifically limit the way in which the decoding end decodes the code stream of the block to be decoded.
  • the variable length decoding method described in the following Embodiment 3 can be used for decoding, but is of course not limited to this.
  • possible implementations of step S401 are described below.
  • the decoding end determines a target prediction mode for predicting pixels in the block to be decoded and an inverse quantization parameter for each pixel in the block to be decoded based on the code stream of the block to be decoded.
  • the decoder determines the residual scanning order corresponding to the target prediction mode. The decoder parses the code stream of the block to be decoded based on the residual scanning order to obtain the first residual block.
  • when the target prediction mode is the first target prediction mode, the residual scanning order is the first scanning order; when the target prediction mode is the second target prediction mode, the residual scanning order is the second scanning order; the first scanning order is different from the second scanning order.
  • the above-mentioned first scanning order and second scanning order are each a kind of residual scanning order, and each corresponds to the above-mentioned target prediction mode.
  • the first scanning order corresponds to the first target prediction mode
  • the second scanning order corresponds to the second target prediction mode.
  • the residual scanning order can be the same as the above-mentioned target prediction order.
  • the above-mentioned first scanning order and second scanning order are used to represent two different scanning orders in different target prediction modes.
  • the first scanning order may be the order indicated by the preset trajectory shown in Figure 7a .
  • when the second target prediction mode is the prediction mode shown in Table 2 above, the second scanning order may be the order indicated by the preset trajectory shown in FIG. 7b.
  • the first scanning order may be the order indicated by the preset trajectory shown in Figure 7c-1.
  • the second scanning order may be the order indicated by the preset trajectory shown in Figure 7d-1.
  • the residual scanning order determined corresponding to the target prediction mode is also different in different prediction modes.
  • when the target prediction mode is determined, the decoder can also, for a block to be decoded of the first size, parse the code stream of the block to be decoded in the third scanning order in the target prediction mode; and, for a block to be decoded of the second size, parse the code stream of the block to be decoded in the fourth scanning order in the target prediction mode. The third scanning order and the fourth scanning order are different.
  • the third scanning order and the fourth scanning order are each a kind of residual scanning order.
  • the residual scanning order represented by the first scanning order may be the same as the residual scanning order represented by the third scanning order or the fourth scanning order, or may be different.
  • the residual scanning order represented by the second scanning order may be the same as the residual scanning order represented by the third scanning order or the fourth scanning order, or may be different. The embodiments of the present application do not limit this.
  • when the target prediction mode is the prediction mode shown in Table 3-2 above or the prediction mode characterized by Table 5 above, the residual scanning order may be the third scanning order, such as the scanning order shown in Figure 7e, or the fourth scanning order, such as the scanning order shown in Figure 7c-2.
  • the prediction modes shown in Table 3-2 above and Table 5 above can represent the same target prediction mode at two different block sizes.
  • when the target prediction mode is the prediction mode shown in Table 4-2 above or the prediction mode represented by Table 4-3 above, the residual scanning order may be the third scanning order, or the fourth scanning order, such as the scanning order shown in Figure 7d-2.
  • the prediction modes shown in Table 4-2 above and Table 4-3 above can represent the same target prediction mode at two different block sizes.
  • FIG. 10a is a schematic flow chart of yet another image encoding method provided by an embodiment of the present application.
  • the method shown in Figure 10a includes the following steps:
  • the encoding end determines the residual block corresponding to the block to be encoded.
  • the residual block may be an original residual value block of the block to be encoded, or a residual coefficient block obtained by transforming the original residual value block, or a quantized residual block obtained by the encoding end quantizing the residual coefficient block; it is not limited to these.
  • the original residual value block of the block to be encoded is the residual block obtained by the encoding end based on the original pixel value of the block to be encoded and the prediction block of the block to be encoded.
  • the process by which the encoding end predicts the pixels in the block to be encoded to obtain the prediction block can be implemented based on the method described in Embodiment 1, or based on any prior art method that can obtain the prediction block of the block to be encoded; the embodiment of the present application does not limit this.
  • the residual coefficient block is obtained by transforming the original residual value block at the coding end.
  • the embodiment of the present application does not specifically limit the process of transforming the original residual value block at the coding end.
  • the process by which the encoding end quantizes the residual coefficient block to obtain a quantized residual block can be implemented based on the method described in Embodiment 2, or, of course, based on any prior art method that can quantize the residual block; this is not limited in the embodiments of this application.
  • the embodiment of the present application will be described below by taking the residual block as a residual coefficient block of the block to be encoded as an example.
  • the encoding end uses a variable code length encoding method to encode the residual block of the block to be encoded to obtain the code stream of the block to be encoded.
  • variable code length coding method may include a variable order exponential Golomb coding method.
  • the encoding end can first determine the attribute type of each pixel in the block to be encoded. For the first value in the residual block of the block to be encoded that corresponds to the third pixel in the block to be encoded, the encoding end can determine the target order for encoding the first value based on the preset strategy and the attribute type of the third pixel. Then, the encoding end can use the exponential Golomb coding algorithm of the target order to encode the first value in the residual block of the block to be encoded; in this way, the encoded code stream of the block to be encoded is obtained.
  • the encoding end may be preconfigured with the above-mentioned preset strategy, which is used to indicate the order used for encoding residual values corresponding to pixels of different attribute types.
  • the third pixel represents at least one pixel in the block to be encoded.
  • the encoding end determines, based on the preset strategy and the attribute type of the third pixel, the target order used when encoding the first value. Then, the encoding end can determine the codeword structure used to encode the first value based on the exponential Golomb encoding algorithm of the target order and the size of the first value, and encode the first value with that codeword structure. Similarly, when the encoding end has encoded each value in the residual block of the block to be encoded, the encoded code stream of the block to be encoded is obtained.
  • for example, suppose the encoding end determines, based on the preset strategy and the attribute type of the third pixel, that the target order for encoding the first value corresponding to the third pixel is 0. If the value of the first value is 2, then as shown in Table 6 the first value belongs to the coding range 1 to 2, and the encoding end can determine, based on the 0th-order exponential Golomb coding algorithm and the size of the first value, that the codeword structure for encoding the first value is 011 (that is, the codeword structure 01X corresponding to the coding range 1 to 2); the encoding end thus encodes the first value as 011. If the value of the first value is 7, then as shown in Table 6 the first value belongs to the coding range 7 to 14, and the codeword structure is 00010000 (that is, the codeword structure 0001XXXX corresponding to the coding range 7 to 14); the encoding end thus encodes the first value as 00010000. When the encoding end has encoded each value in the residual block of the block to be encoded in this way, the encoded code stream of the block to be encoded is obtained.
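The order-k exponential Golomb construction referred to above can be sketched with the textbook version below. For order 0 it reproduces the 01X structure for the range 1 to 2 (the value 2 encodes as 011); the exact codeword layouts of Table 6 for higher ranges may differ from this standard construction, so treat it as an illustration rather than the patent's precise table.

```python
def exp_golomb_encode(value, k=0):
    """Standard order-k exponential Golomb codeword for a non-negative
    integer, returned as a bit string: write value + 2**k in binary and
    prepend (len(binary) - 1 - k) zeros."""
    x = value + (1 << k)
    binary = format(x, "b")
    prefix = "0" * (len(binary) - 1 - k)
    return prefix + binary
```

For example, exp_golomb_encode(0) gives "1", exp_golomb_encode(2) gives "011", and higher orders shorten the codewords of large values at the cost of small ones.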
  • variable code length encoding method may also include a preset order Exponential Golomb encoding method.
  • the preset order may be a value preset by the encoding end, such as 0 or 1 (that is, the value of K may be a preset value of 0 or 1), which is not limited in the embodiments of the present application.
  • variable length encoding can likewise be implemented for the residual values in the residual block (or the residual coefficient values in the residual coefficient block).
  • the encoding end can also determine the syntax element corresponding to the residual block; the syntax element includes the code length (CL) used to encode each value in the residual block.
  • the encoding end can use the above-mentioned exponential Golomb encoding algorithm with variable order to encode the CL to save bits, thereby improving the compression rate of image encoding and improving coding efficiency.
  • for any value in the residual block, the encoding end can determine the target order for encoding its CL based on the CL of that value. Then, the encoding end can use the exponential Golomb coding algorithm of the target order to encode the CL, and add the encoded data to the encoded code stream of the block to be encoded.
  • when the encoding end encodes the residual block of the block to be encoded (or the CL of the residual values in the residual block) through the variable-order exponential Golomb coding algorithm, it can adaptively use fewer bits to encode smaller residual values (or CL values), thereby saving bits. That is to say, while improving the compression rate of image coding, the method provided by the embodiments of the present application also improves coding efficiency.
  • the encoding end can also use fixed-length coding and truncated unary codes to encode the CL of each value in the residual block of the block to be encoded.
  • specifically, when the CL of any value is less than or equal to a threshold, the encoding end uses a preset number of bits to perform fixed-length encoding on the CL of that value and adds the encoded data to the code stream of the block to be encoded; when the CL of any value is greater than the threshold, the encoding end uses a truncated unary code to encode the CL of that value and adds the encoded data to the code stream of the block to be encoded. The embodiment of the present application does not limit the specific value of the threshold.
  • Table 7 shows the codewords obtained when the coding end uses 2 bits for fixed-length encoding of CL values less than or equal to 2, and the codewords obtained when the coding end uses the truncated unary code for CL values greater than 2. If CLmax represents the maximum CL value, then for CL values greater than 2, the codeword of CLmax consists of (CLmax − 1) ones, the codeword of (CLmax − 1) consists of (CLmax − 2) ones followed by one 0, ..., and the codeword of (CLmax − j) consists of (CLmax − j − 1) ones followed by one 0, where j is a positive integer.
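The mixed fixed-length / truncated-unary scheme of Table 7 can be sketched as follows. The threshold of 2 and the 2-bit fixed-length code follow the example in the text; the helper name and argument defaults are hypothetical.

```python
def encode_cl(cl, cl_max, threshold=2, fixed_bits=2):
    """Encode a code length CL: values <= threshold get a fixed-length code
    of fixed_bits bits; larger values get a truncated unary code in which
    CLmax is (CLmax - 1) ones and CLmax - j is (CLmax - j - 1) ones
    followed by a single 0."""
    if cl <= threshold:
        return format(cl, f"0{fixed_bits}b")
    if cl == cl_max:
        return "1" * (cl_max - 1)   # the maximum value needs no terminating 0
    return "1" * (cl - 1) + "0"
```

Note that the 2-bit fixed-length codes 00, 01, and 10 never begin with 11, so a decoder can distinguish them from the truncated unary codewords, which all begin with two ones.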
  • similarly, the encoding end can also use fixed-length coding and truncated unary codes to encode each value in the residual block of the block to be encoded; for details, refer to the above description of encoding the CL of each value, which will not be repeated here.
  • Figure 10b is a schematic flow chart of yet another image decoding method provided by an embodiment of the present application.
  • the method shown in Figure 10b includes the following steps:
  • the decoder uses a variable code length decoding method to parse the code stream of the block to be decoded, and obtains the CL of each value in the residual block corresponding to the block to be decoded.
  • variable code length decoding method may be an exponential Golomb decoding method with variable order.
  • in one case, the decoding end can first parse, from the code stream, the target order used to decode the code stream of the block to be decoded, and then use the exponential Golomb decoding algorithm of the target order to parse the code stream of the block to be decoded, thereby obtaining the CL of each value in the residual block of the block to be decoded.
  • variable code length decoding method may be a preset order Exponential Golomb decoding method.
  • the decoding end can use the exponential Golomb decoding algorithm of the preset order to parse the code stream of the block to be decoded, thereby obtaining the CL of each value in the residual block of the block to be decoded.
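For reference, a k-th order exponential Golomb code of the kind mentioned above can be sketched as follows. This is the textbook exp-Golomb construction over non-negative integers; in the scheme above, `k` would be either the parsed target order or the preset order.

```python
def eg_encode(x, k):
    """k-th order exponential Golomb codeword for a non-negative integer x."""
    u = x + (1 << k)                        # shift x into the k-th order range
    n = u.bit_length()
    return "0" * (n - k - 1) + format(u, "b")  # zero prefix, then u in binary

def eg_decode(bits, pos, k):
    """Decode one k-th order exp-Golomb codeword at pos; return (x, new_pos)."""
    q = 0
    while bits[pos + q] == "0":             # count the leading zeros
        q += 1
    u = int(bits[pos + q:pos + q + q + k + 1], 2)  # read q + k + 1 value bits
    return u - (1 << k), pos + q + q + k + 1
```

With k = 0 this reduces to the familiar ue(v)-style code (0 → "1", 1 → "010", 2 → "011"); larger orders shorten the codewords of larger values at the cost of small ones.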
  • the decoder can also parse the code stream of the block to be decoded based on a fixed-length decoding strategy to obtain the CL of any value in the residual block of the block to be decoded. And, when the number of bits used in the code stream to encode the CL of any value in the residual block is greater than the preset number, the decoder can parse the code stream of the block to be decoded based on the truncated unary code rule to obtain the CL of that value.
  • the decoding end determines the residual block of the block to be decoded based on the CL obtained above.
  • after the decoding end determines the CL of each value in the residual block of the block to be decoded, the number of bits used to encode each value in the residual block is known. In this way, the decoding end can determine, in the code stream of the block to be decoded, the bit group corresponding to each pixel in the block to be decoded based on the CL of each value, and determine the target order used to parse each bit group.
  • in one possible case, after parsing the CL of each value in the residual block of the block to be decoded, the decoder can first determine the attribute type of each pixel in the block to be decoded. Then, for the first bit group corresponding to the third pixel in the block to be decoded, the decoding end may determine the target order for parsing the first bit group based on a preset strategy and the attribute type of the third pixel. In another possible case, the decoding end is preconfigured with a preset order for the bit group corresponding to each pixel in the block to be decoded, and determines that preset order as the target order for each bit group. For example, the attribute type of a pixel may include: whether the pixel value is signed, the number of bits of the pixel, the format information of the pixel, etc.
  • the decoding end uses the exponential Golomb decoding algorithm of the target order to analyze the bit group corresponding to each pixel in the block to be decoded, and then the residual value of each pixel can be obtained, thereby obtaining the residual block of the block to be decoded.
  • the order in which the decoding end obtains the residual values in the residual block based on the code stream analysis of the block to be decoded is the same as the order in which the encoding end encodes the residual values in the residual block.
  • the decoding end reconstructs the block to be decoded based on the residual block of the block to be decoded, and obtains the reconstructed block.
  • the decoder can sequentially perform inverse quantization and inverse transformation on the residual block of the block to be decoded to obtain a reconstructed residual value block of the block to be decoded. Then, the decoding end can reconstruct the block to be decoded based on the reconstructed residual value block, thereby obtaining a reconstructed block. For example, the decoding end can obtain the reconstructed block of the block to be decoded by summing the reconstructed residual value block and the prediction block of the block to be decoded.
  • the decoding end can perform inverse quantization on the residual block of the block to be decoded to obtain a reconstructed residual value block of the block to be decoded. Then, the decoding end can reconstruct the block to be decoded based on the reconstructed residual value block, thereby obtaining a reconstructed block. For example, the decoding end can obtain the reconstructed block of the block to be decoded by summing the reconstructed residual value block and the prediction block of the block to be decoded.
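The two reconstruction paths above (with and without an inverse transformation) can be combined into one sketch. Here `inverse_transform` is a hypothetical callable standing in for whichever transform the codec applies, and the 8-bit clipping range is an assumption for illustration.

```python
def reconstruct_block(residual, prediction, inverse_transform=None):
    """Reconstruct a block from its dequantized residual and its prediction.

    If an inverse transform is supplied, apply it to the residual first;
    then add prediction and residual element-wise and clip to 8 bits.
    """
    r = inverse_transform(residual) if inverse_transform is not None else residual
    return [[min(255, max(0, p + d)) for p, d in zip(prow, rrow)]
            for prow, rrow in zip(prediction, r)]
```

Passing `inverse_transform=None` corresponds to the transform-skip path, where the dequantized residual is summed with the prediction block directly.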
  • the decoding end may also determine a target prediction mode for predicting pixels in the block to be decoded based on the code stream of the block to be decoded.
  • the decoder determines the target prediction order corresponding to the target prediction mode based on the target prediction mode.
  • the decoder predicts each pixel in the block to be decoded in the target prediction order according to the target prediction mode.
  • the decoder reconstructs the block to be decoded based on the prediction value of each pixel in the block to be decoded and the residual block to obtain the reconstructed block.
  • the prediction block of the block to be decoded may be obtained by predicting the block to be decoded based on a prediction mode, and the prediction mode may be obtained by the decoding end based on parsing the code stream of the block to be decoded.
  • the prediction mode may be any prediction mode in Embodiment 1, or the prediction mode may be any prediction mode in the prior art, which is not limited in this embodiment of the present application.
  • the process of the decoder predicting the prediction block of the block to be decoded will not be described in detail here.
  • the inverse quantization of the residual block of the block to be decoded at the decoding end can be performed based on the inverse quantization parameters of each pixel in the block to be decoded, obtained by parsing the code stream of the block to be decoded, and the inverse quantization preset array. Specifically, it can be implemented based on the method in Embodiment 2. Of course, it can also be implemented based on any method in the prior art that can realize inverse quantization of the residual block, which is not limited in the embodiment of the present application.
  • the image decoding method shown in Fig. 10b corresponds to the image encoding method shown in Fig. 10a. Therefore, this method can improve the decoding efficiency while improving the compression rate of image encoding.
  • the variable-length encoding/decoding method provided in Embodiment 3 can also be applied to Embodiment 1 and Embodiment 2, or to any scenario that requires image encoding/decoding; this is not limited in the embodiments of this application.
  • the encoding end/decoding end includes hardware structures and/or software modules corresponding to each function.
  • the units and method steps of each example described in conjunction with the embodiments disclosed in this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software driving the hardware depends on the specific application scenarios and design constraints of the technical solution.
  • any decoding device provided by the embodiment of the present application may be the destination device 12 or the decoder 122 in Figure 1 .
  • any encoding device provided below may be the source device 11 or the encoder 112 in FIG. 1 .
  • a unified explanation is given here and will not be repeated below.
  • FIG 11 is a schematic structural diagram of a decoding device 1100 provided by this application. Any of the above decoding method embodiments can be executed by the decoding device 1100.
  • the decoding device 1100 includes an analysis unit 1101, a determination unit 1102, a prediction unit 1103 and a reconstruction unit 1104.
  • the parsing unit 1101 is used to parse the code stream of the block to be decoded to determine the target prediction mode for predicting the pixels in the block to be decoded.
  • the determining unit 1102 is configured to determine a target prediction order corresponding to the target prediction mode based on the target prediction mode.
  • the prediction unit 1103 is configured to predict each pixel in the block to be decoded in a target prediction order corresponding to the target prediction mode according to the determined target prediction mode.
  • the reconstruction unit 1104 is configured to reconstruct each pixel in the block to be decoded based on the predicted value of each pixel, thereby obtaining a reconstructed block of the block to be decoded.
  • the parsing unit 1101 can be implemented by the code stream parsing unit 301 in FIG. 5 .
  • the determination unit 1102 and the prediction unit 1103 may be implemented by the prediction processing unit 304 in FIG. 5
  • the reconstruction unit 1104 may be implemented by the reconstruction unit 305 in FIG. 5 .
  • the encoded bit stream in Figure 5 may be the code stream of the block to be decoded in this embodiment of the present application.
  • the prediction unit 1103 is specifically configured so that, when any pixel in the block to be decoded is predicted in the target prediction order, the pixels used to predict that pixel have already been reconstructed.
  • the prediction unit 1103 is specifically configured to predict each pixel in the block to be decoded point by point along the direction indicated by the target prediction order, according to the target prediction mode; where, when the target prediction mode is the first target prediction mode, the target prediction order is the first prediction order, and when the target prediction mode is the second target prediction mode, the target prediction order is the second prediction order, and the first prediction order and the second prediction order are different.
  • for a block to be decoded whose size is the first size, the prediction unit 1103 is specifically configured to predict the block to be decoded using a third prediction order in the target prediction mode; for a block to be decoded whose size is the second size, the prediction unit 1103 is specifically configured to predict the block to be decoded using a fourth prediction order in the target prediction mode; wherein the third prediction order and the fourth prediction order are different.
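A minimal sketch of mode-dependent prediction orders as described in the two bullets above. The mode names `'horizontal'` and `'vertical'` are illustrative placeholders, not the patent's mode identifiers; the point is only that each mode maps to a fixed visiting order in which a pixel's reference neighbours are reached (and can be reconstructed) before the pixel itself.

```python
def prediction_order(mode, width, height):
    """Return the pixel visiting order (y, x) for a width x height block.

    A 'horizontal' mode scans row by row; a 'vertical' mode scans column
    by column, so the two modes yield different prediction orders.
    """
    if mode == "horizontal":
        return [(y, x) for y in range(height) for x in range(width)]
    if mode == "vertical":
        return [(y, x) for x in range(width) for y in range(height)]
    raise ValueError(f"unknown prediction mode: {mode}")
```

A real codec would additionally branch on the block size, per the bullet above, so the same mode can yield a third or fourth order for differently sized blocks.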
  • the prediction unit 1103 is specifically configured to, according to the target prediction mode, sequentially predict the pixels in each sub-block of the block to be decoded along the direction indicated by the target prediction order.
  • the target prediction mode includes the prediction mode of each sub-block in the block to be decoded.
  • for the first sub-block in the block to be decoded, when the first sub-block includes a first pixel and a second pixel, the prediction mode of the first sub-block is used to predict the first pixel and the second pixel in parallel according to the reconstructed pixels around the first sub-block.
  • the reconstruction unit 1104 includes: an inverse quantization subunit and a reconstruction subunit.
  • the inverse quantization subunit is used to inversely quantize the first residual block, obtained by parsing the code stream of the block to be decoded, based on the inverse quantization parameters of each pixel in the block to be decoded and the inverse quantization preset array, to obtain the second residual block.
  • the reconstruction subunit is used to reconstruct each pixel based on the predicted value of each pixel and the second residual block to obtain a reconstruction block.
  • the above-mentioned inverse quantization subunit is specifically used to parse the code stream of the block to be decoded using a variable code length decoding method to obtain the encoding code length CL of each value in the residual block corresponding to the block to be decoded, and the first residual block.
  • FIG. 12 is a schematic structural diagram of an encoding device 1200 provided by this application. Any of the above encoding method embodiments can be executed by the encoding device 1200.
  • the encoding device 1200 includes a determination unit 1201, a prediction unit 1202 and an encoding unit 1203. Among them, the determining unit 1201 is used to determine the target prediction mode of the block to be encoded, and determine the target prediction order corresponding to the target prediction mode.
  • the prediction unit 1202 is configured to predict each pixel in the block to be encoded in a target prediction order according to the target prediction mode.
  • the determination unit 1201 is also configured to determine the residual block of the block to be encoded based on the predicted value of each pixel.
  • the encoding unit 1203 is used to encode the residual blocks in the target prediction order to obtain the code stream of the block to be encoded.
  • the determination unit 1201 and the prediction unit 1202 may be implemented by the prediction processing unit 201 in FIG. 2 .
  • the determination unit 1201 may also be implemented by the residual calculation unit 202 in FIG. 2 .
  • the encoding unit 1203 may be implemented by the encoding unit 205 in FIG. 2 .
  • the block to be encoded in Figure 2 may be the block to be encoded in the embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a decoding device 1300 provided by this application. Any of the above decoding method embodiments can be executed by the decoding device 1300.
  • the decoding device 1300 includes an analysis unit 1301, an inverse quantization unit 1302 and a reconstruction unit 1303. Among them, the analysis unit 1301 is used to analyze the code stream of the block to be decoded, and obtain the inverse quantization parameter of each pixel in the block to be decoded and the first residual block of the block to be decoded.
  • the inverse quantization unit 1302 is configured to inversely quantize the first residual block based on the QP indicated by the inverse quantization parameter of each pixel and the inverse quantization preset array to obtain a second residual block.
  • the reconstruction unit 1303 is configured to reconstruct the block to be decoded based on the second residual block to obtain a reconstructed block.
  • the parsing unit 1301 can be implemented by the code stream parsing unit 301 in FIG. 5 .
  • the inverse quantization unit 1302 may be implemented by the inverse quantization unit 302 in FIG. 5 .
  • the reconstruction unit 1303 may be implemented by the reconstruction unit 305 in FIG. 5 . The encoded bit stream in Figure 5 may be the code stream of the block to be decoded in this embodiment of the present application.
  • the parsing unit 1301 is specifically configured to determine a target prediction mode for predicting pixels in the block to be decoded and an inverse quantization parameter for each pixel in the block to be decoded based on the code stream of the block to be decoded; based on the target prediction mode, determine The residual scanning order corresponding to the target prediction mode; where, when the target prediction mode is the first target prediction mode, the residual scanning order is the first scanning order, and when the target prediction mode is the second target prediction mode, the residual scanning order The order is the second scanning order, and the first scanning order and the second scanning order are different; the code stream of the block to be decoded is parsed based on the residual scanning order to obtain the first residual block.
  • for the block to be decoded whose size is the first size, the parsing unit 1301 uses the third scanning order to parse the code stream of the block to be decoded in the target prediction mode; for the block to be decoded whose size is the second size, the parsing unit 1301 uses the fourth scanning order to parse the code stream of the block to be decoded in the target prediction mode; wherein the third scanning order and the fourth scanning order are different.
  • in the inverse quantization preset array, the interval between two adjacent numbers among the 1st to nth numbers is 1, the interval between two adjacent numbers among the (n+1)th to (n+m)th numbers is 2, and the interval between two adjacent numbers among the (n+k·m+1)th to (n+k·m+m)th numbers is 2^(k+1), where n and m are integers greater than 1 and k is a positive integer.
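The interval pattern just described can be generated as follows. Note the text does not specify the gap between the last entry of one group and the first entry of the next; this sketch assumes the new group's own step applies there, and the starting value `start` is likewise an assumption.

```python
def build_preset_array(n, m, groups, start=0):
    """Build an array with the described interval pattern.

    Steps of 1 within the first n entries; then, for each further group
    of m entries indexed k = 0, 1, ..., steps of 2 ** (k + 1) (so the
    first extra group steps by 2, the next by 4, and so on).
    """
    arr = [start]
    plan = [(n - 1, 1)] + [(m, 2 ** (k + 1)) for k in range(groups)]
    for count, step in plan:
        for _ in range(count):
            arr.append(arr[-1] + step)
    return arr
```

The doubling step means the array covers a wide dynamic range with few entries, which is what lets the dequantizer replace multiplications with table lookups and shifts.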
  • the inverse quantization unit 1302 is specifically configured to determine the amplification coefficient corresponding to each pixel in the inverse quantization preset array based on the QP of each pixel, and to perform an inverse quantization operation on the first residual block based on the amplification coefficient corresponding to each pixel to obtain the second residual block.
  • the inverse quantization unit 1302 is specifically configured to determine the amplification parameter and displacement parameter corresponding to each pixel based on the QP of each pixel and the inverse quantization preset array; wherein the value of the amplification parameter corresponding to each pixel is the value in the inverse quantization preset array indexed by the bitwise AND of the QP of each pixel and 7, and the value of the displacement parameter corresponding to each pixel is the difference between 7 and the quotient of the QP of each pixel divided by 2^3; and to perform an inverse quantization operation on the first residual block based on the amplification parameter and displacement parameter corresponding to each pixel, to obtain the second residual block.
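A sketch of the per-pixel rule above. Reading the displacement parameter as shift = 7 - (QP >> 3) is our reconstruction of the garbled sentence (the opposite sign, (QP >> 3) - 7, is also a plausible reading), and any 8-entry `preset` array indexed by `QP & 7` would fit the description; none of this is the patent's normative formula.

```python
def dequantize_value(level, qp, preset):
    """Inverse-quantize one residual level using table lookup and shifts.

    amplification parameter = preset[qp & 7]  (low 3 bits of QP index the array)
    displacement parameter  = 7 - (qp >> 3)   (assumed reading; see lead-in)
    Applied as a right shift when non-negative, a left shift otherwise,
    so larger QPs amplify the level more.
    """
    amp = preset[qp & 7]
    shift = 7 - (qp >> 3)
    product = level * amp
    return product >> shift if shift >= 0 else product << -shift
```

Because the only operations are a table lookup, one multiplication by a small constant, and a shift, this matches the stated goal of reducing multiplication work in the inverse quantization path.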
  • the reconstruction unit 1303 is specifically configured to perform inverse transformation on the second residual block to reconstruct the residual value block of the block to be decoded; to reconstruct the block to be decoded based on the residual value block to obtain a reconstructed block.
  • the reconstruction unit 1303 is specifically configured to predict each pixel in the block to be decoded in the target prediction order corresponding to the target prediction mode according to the target prediction mode, and to reconstruct the block to be decoded based on the predicted value of each pixel and the second residual block to obtain the reconstructed block.
  • the parsing unit 1301 is specifically configured to parse the code stream of the block to be decoded using a variable code length decoding method to obtain the encoding code length CL of each value in the residual block corresponding to the block to be decoded. and the first residual block.
  • FIG 14 is a schematic structural diagram of an encoding device 1400 provided by this application. Any of the above encoding method embodiments can be executed by the encoding device 1400.
  • the encoding device 1400 includes a determination unit 1401, a quantization unit 1402, and an encoding unit 1403. Among them, the determining unit 1401 is used to determine the second residual block of the block to be encoded and the quantization parameter QP of each pixel in the block to be encoded.
  • the quantization unit 1402 is configured to quantize the second residual block based on the QP of each pixel and the quantization preset array to obtain the first residual block.
  • the encoding unit 1403 is used to encode the first residual block to obtain the code stream of the block to be encoded.
  • the determination unit 1401 may be implemented by the residual calculation unit 202 in FIG. 2 , or the determination unit 1401 may be implemented by a combination of the residual calculation unit 202 and the residual transformation unit 203 in FIG. 2 .
  • the quantization unit 1402 may be implemented by the quantization unit 204 in FIG. 2 .
  • the encoding unit 1403 may be implemented by the encoding unit 205 in FIG. 2 .
  • the block to be encoded in Figure 2 may be the block to be encoded in the embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an encoding device 1500 provided by this application. Any of the above encoding method embodiments can be executed by the encoding device 1500.
  • the encoding device 1500 includes a determining unit 1501 and an encoding unit 1502. Among them, the determining unit 1501 is used to determine the residual block corresponding to the block to be encoded.
  • the encoding unit 1502 is used to encode the aforementioned residual block using a variable code length encoding method to obtain a code stream of the block to be encoded.
  • the determination unit 1501 may be implemented by the residual calculation unit 202 in FIG. 2 , or the determination unit 1501 may be implemented by a combination of the residual calculation unit 202 and the residual transformation unit 203 in FIG. 2 , or the determination unit 1501 may be implemented by a combination of the residual calculation unit 202 , the residual transformation unit 203 and the quantization unit 204 in FIG. 2 .
  • the encoding unit 1502 may be implemented by the encoding unit 205 in FIG. 2 .
  • the block to be encoded in Figure 2 may be the block to be encoded in the embodiment of the present application.
  • FIG 16 is a schematic structural diagram of a decoding device 1600 provided by this application. Any of the above decoding method embodiments can be executed by the decoding device 1600.
  • the decoding device 1600 includes an analysis unit 1601, a determination unit 1602 and a reconstruction unit 1603.
  • the parsing unit 1601 is used to parse the code stream of the block to be decoded using a variable code length decoding method, and obtain the encoding code length CL of each value in the residual block corresponding to the block to be decoded.
  • the determining unit 1602 is configured to determine the residual block of the block to be decoded based on the CL of each value.
  • the reconstruction unit 1603 is configured to reconstruct the block to be decoded based on the residual block of the block to be decoded to obtain a reconstructed block.
  • the parsing unit 1601 can be implemented by the code stream parsing unit 301 in Figure 5 .
  • the determination unit 1602 may be implemented by the inverse quantization unit 302 in FIG. 5 , or the determination unit 1602 may be implemented by a combination of the inverse quantization unit 302 and the residual inverse transformation unit 303 in FIG. 5 .
  • the reconstruction unit 1603 may be implemented by the reconstruction unit 305 in FIG. 5 .
  • the encoded bit stream in Figure 5 may be the code stream of the block to be decoded in this embodiment of the present application.
  • the parsing unit 1601 is specifically configured to determine the CL used to encode each value in the residual block when parsing the code stream.
  • the parsing unit 1601 is specifically configured to parse the code stream based on a fixed-length decoding strategy when the number of bits used to encode the CL of any value in the residual block is a preset number, to obtain the CL of that value; and, when the number of bits used to encode the CL of any value in the residual block is greater than the preset number, to parse the code stream based on the truncated unary code rule to obtain the CL of that value.
  • the determining unit 1602 is specifically configured to determine, in the code stream, the bit group corresponding to each pixel in the block to be decoded based on the CL of each value; and to parse the first bit group, corresponding to the third pixel in the block to be decoded, using an exponential Golomb decoding algorithm of a preset order to obtain the residual block.
  • the reconstruction unit 1603 is specifically configured to perform inverse quantization and inverse transformation on the residual block, or to perform inverse quantization on the residual block, to reconstruct the residual value block of the block to be decoded; and to reconstruct the block to be decoded based on the residual value block to obtain the reconstructed block.
  • the reconstruction unit 1603 is specifically configured to determine a target prediction mode for predicting the pixels in the block to be decoded based on the code stream of the block to be decoded; determine, based on the target prediction mode, the target prediction order corresponding to the target prediction mode; predict each pixel in the block to be decoded in the target prediction order according to the target prediction mode; and reconstruct the block to be decoded based on the prediction value of each pixel in the block to be decoded and the residual block to obtain the reconstructed block.
  • the reconstruction unit 1603 is specifically configured to inversely quantize the residual block based on the inverse quantization parameters of each pixel in the block to be decoded, obtained by parsing the code stream of the block to be decoded, and the inverse quantization preset array.
  • FIG. 17 is a schematic structural diagram of an electronic device provided by this application.
  • the electronic device 1700 includes a processor 1701 and a communication interface 1702.
  • the processor 1701 and the communication interface 1702 are coupled to each other.
  • the communication interface 1702 may be a transceiver or an input-output interface.
  • the electronic device 1700 may further include a memory 1703 for storing instructions executed by the processor 1701 or input data required for the processor 1701 to execute the instructions or data generated after the processor 1701 executes the instructions.
  • connection medium between the above-mentioned communication interface 1702, processor 1701 and memory 1703 is not limited in the embodiment of the present application.
  • the communication interface 1702, the processor 1701 and the memory 1703 are connected through a bus 1704 in Figure 17.
  • the bus is represented by a thick line in Figure 17.
  • the connection manner between other components is only schematically illustrated and is not limited thereto.
  • the bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in Figure 17, but it does not mean that there is only one bus or one type of bus.
  • the memory 1703 can be used to store software programs and modules, such as program instructions/modules corresponding to the image decoding method or image encoding method provided in the embodiment of the present application.
  • the processor 1701 executes the software programs and modules stored in the memory 1703 to perform various functional applications and data processing, thereby implementing any of the image decoding methods or image encoding methods provided above.
  • the communication interface 1702 can be used to communicate signaling or data with other devices. In this application, the electronic device 1700 may have multiple communication interfaces 1702.
  • the processor in the embodiments of the present application may be a central processing unit (CPU), a neural processing unit (NPU) or a graphics processing unit (GPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • a general-purpose processor can be a microprocessor or any conventional processor.
  • the method steps in the embodiments of the present application can be implemented by hardware or by a processor executing software instructions.
  • software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from the storage medium and write information to the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage media may be located in an ASIC.
  • the ASIC can be located in network equipment or terminal equipment.
  • the processor and the storage medium can also exist as discrete components in network equipment or terminal equipment.
  • This application also provides a computer-readable storage medium that stores computer programs or instructions.
  • when the computer program or instructions are executed by an electronic device, any of the above image encoding/decoding method embodiments is implemented.
  • Embodiments of the present application also provide a coding and decoding system, including an encoding end and a decoding end.
  • the encoding end can be used to execute any of the image encoding methods provided above, and the decoding end is used to execute the corresponding image decoding method.
  • the computer program product includes one or more computer programs or instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, a network device, a user equipment, or other programmable device.
  • the computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
  • the computer program or instructions may be transmitted from a website, computer, server or data center via wired or wireless means to another website, computer, server or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center that integrates one or more available media.
  • the available media may be magnetic media, such as floppy disks, hard disks, and magnetic tapes; optical media, such as digital video discs (DVDs); or semiconductor media, such as solid state drives (SSDs).

Abstract

An image encoding/decoding method and apparatus, an electronic device, and a storage medium, relating to the technical field of image encoding and decoding. The image encoding method includes: determining a target prediction mode of a block to be encoded, and determining a target prediction order corresponding to the target prediction mode (S101); predicting each pixel in the block to be encoded in the target prediction order according to the target prediction mode (S102); determining a residual block of the block to be encoded based on the predicted value of each pixel (S103); and encoding the residual block in the target prediction order to obtain a code stream of the block to be encoded. This method can improve the efficiency of image encoding.

Description

An image encoding/decoding method and apparatus, electronic device, and storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on March 29, 2022, with application number 202210320915.5 and entitled "Image encoding/decoding method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image encoding and decoding, and in particular to an image encoding/decoding method and apparatus, an electronic device, and a storage medium.
Background
A complete image in a video is usually called a "frame", and a video composed of multiple frames in temporal order is called a video sequence. A video sequence contains a series of redundant information, such as spatial redundancy, temporal redundancy, visual redundancy, information entropy redundancy, structural redundancy, knowledge redundancy, and importance redundancy. To remove as much of the redundant information in a video sequence as possible and reduce the amount of data representing the video, video coding technology has been proposed, so as to reduce storage space and save transmission bandwidth. Video coding technology is also called video compression technology.
With the continuous development of technology, collecting video data has become increasingly convenient, and the scale of the collected video data has grown ever larger. Therefore, how to effectively encode and decode video data has become an urgent problem to be solved.
Summary
This application provides an image encoding/decoding method and apparatus, an electronic device, and a storage medium. The image encoding/decoding method can improve the efficiency of image encoding and decoding.
为达上述目的,本申请提供如下技术方案:
第一方面,本申请提供了一种图像解码方法,该方法包括:解析待解码块的码流,以确定预测待解码块中像素的目标预测模式。基于目标预测模式,确定与目标预测模式对应的目标预测顺序。按照目标预测模式,以目标预测顺序预测待解码块中的每个像素。基于每个像素的预测值对每个像素进行重建,得到待解码块的重建块。
通过本申请提供的图像解码方法,解码端在以目标预测模式对待解码块中的像素进行预测时,是以该目标预测模式对应的目标预测顺序对待解码块中的像素进行预测的。在这一过程中,解码端预测待解码块中的任一个像素时,用于预测该像素的像素均已完成重建。因此,本申请提供的图像解码方法按照目标预测模式,以目标预测顺序预测待解码块中的每个像素,可以并行预测待解码块中的部分像素,提高了预测待解码块中像素的效率,解码端在基于预测值和残差值获得重建值时,无需对上述残差值进行缓存,因此本申请实施例提供的解码方法不仅可以节约用于缓存残差值的缓存空间,还能提高解码效率。
在一种可能的设计方式中,在以上述的目标预测顺序预测待解码块中的任一像素时,用于预测任一像素的像素已完成重建。使用已完成重建的像素进行像素预测,能够使得解码过程中预测所使用的像素与编码过程预测所使用的像素一致,从而减少解码误差,使得解码过程中像素的预测值更准确,提高解码的准确度。
在另一种可能的设计方式中,若上述目标预测模式指示以目标预测顺序逐点预测待解码块中的每个像素,则上述的按照目标预测模式,以目标预测顺序预测待解码块中的每个像素包括:按照目标预测模式,沿目标预测顺序指示的方向逐点预测待解码块中的每个像素。其中,当目标预测模式为第一目标预测模式时,目标预测顺序为第一预测顺序,当目标预测模式为第二目标预测模式时,目标预测顺序为第二预测顺序,第一预测顺序和第二预测顺序不同。
换言之,不同的预测模式对应不同的预测顺序。这样在编码侧编码时针对不同的预测模式选择与其相适用的预测顺序,可以使得待编码块中像素与预测值间差异减小,从而使得待编码块以更少的比特被编码,相应的,在解码侧需要解码的比特变少,解码效率得以提高。因此该可能的设计提供的解码方法能够进一步提高图像解码的效率。
在另一种可能的设计方式中,对于尺寸为第一尺寸的待解码块,在上述目标预测模式下采用第三预测顺序对待解码块进行预测。对于尺寸为第二尺寸的待解码块,在上述目标预测模式下采用第四预测顺序对待解码块进行预测。其中,第三预测顺序和第四预测顺序不同。
换言之,在待解码块的大小不同时,若采用相同预测模式预测待解码块中的像素时,预测顺序也可以是不同的。这样在编码侧编码时针对不同的待编码块大小选择与其相适用的预测顺序,可以使得待编码块中像素与预测值间差异减小,从而使得待编码块以更少的比特被编码,相应的,在解码侧需要解码的比特变少,解码效率得以提高。因此该可能的设计提供的解码方法能够进一步提高图像解码的效率。
在一种可能的设计方式中,若上述目标预测模式指示以待解码块中具有预设大小的子块为单位依次预测待解码块中每个子块的像素,则上述按照目标预测模式,以目标预测顺序预测待解码块中的每个像素包括:按照目标预测模式,沿目标预测顺序指示的方向依次预测待解码块中每个子块中的像素。
在另一种可能的设计方式中,上述目标预测模式包括待解码块中每个子块的预测模式,对于待解码块中第一子块而言,当第一子块中包括第一像素和第二像素,则第一子块的预测模式用于根据第一子块周围已重建的像素并行的对第一像素和第二像素进行预测。
通过该两种可能的设计,解码端在对一个子块进行预测时,可以并行的基于该子块周围已重建的像素对该子块内的多个像素进行预测,即这种预测模式能够进一步提高解码端预测待解码块的效率。解码端在基于预测值和残差值获得重建值时,无需对残差值进行缓存,因此该两种可能的设计提供的解码方法进一步的节约了缓存空间,并提高解码效率。
在另一种可能的设计方式中,上述的基于每个像素的预测值对每个像素进行重建,得到待解码块的重建块包括:基于解析待解码块的码流得到的待解码块中每个像素的反量化参数和反量化预设数组,对解析待解码块的码流得到的待解码块的第一残差块进行反量化,得到第二残差块。基于上述每个像素的预测值和第二残差块对每个像素进行重建,得到重建块。
通过该可能的设计,能够基于预先设定的反量化预设数组进行反量化,减少反量化过程中的乘法运算。由于乘法运算的耗时较长,所以减少乘法运算,可以提高反量化过程的计算效率,即能够高效的实现对待解码块的第一残差块的反量化,因此该可能的设计提供的解码方法能够进一步提高图像解码的效率。
在一种可能的设计方式中,上述的解析待解码块的码流包括:采用可变码长的解码方式解析待解码块的码流,以得到编码待解码块对应的残差块中每个值的编码码长(code length,CL)和第一残差块。
通过该可能的设计,待解码块的残差块和CL在编码侧被以更少的比特被编码,因此在相应的解码侧需要解码的比特变少,解码效率得以提高。因此该可能的设计提供的解码方法能够进一步提高图像解码的效率。
第二方面,本申请提供了一种图像编码方法,该方法包括:确定待编码块的目标预测模式,以及确定与目标预测模式对应的目标预测顺序。按照目标预测模式,以目标预测顺序预测待编码块中的每个像素。基于每个像素的预测值确定待编码块的残差块。以目标预测顺序编码残差块,以得到待编码块的码流。
通过本申请提供的图像编码方法,编码端在以目标预测模式对待编码块中的像素进行预测时,是以该目标预测模式对应的目标预测顺序对待编码块中的像素进行预测的。在这一过程中,编码端预测待编码块中的任一个像素时,用于预测该像素的像素均已完成重建。因此,本申请提供的图像编码方法按照目标预测模式,以目标预测顺序预测待编码块中的每个像素,可以并行预测待编码块中的部分像素,提高了预测待编码块中像素的效率,编码端在基于预测值和残差值获得重建值时,无需对上述残差值进行缓存,因此本申请实施例提供的编码方法不仅可以节约用于缓存残差值的缓存空间,还能提高编码效率。
在一种可能的设计方式中,在以上述目标预测顺序预测待编码块中的任一像素时,用于预测任一像素的像素已完成重建。
在另一种可能的设计方式中,若上述的目标预测模式指示以目标预测顺序逐点预测编码块中的每个像素,则上述的按照目标预测模式,以目标预测顺序预测待编码块中的每个像素包括:按照目标预测模式,沿目标预测顺序指示的方向逐点预测待编码块中的每个像素。其中,当目标预测模式为第一目标预测模式时,目标预测顺序为第一预测顺序,当目标预测模式为第二目标预测模式时,目标预测顺序为第二预测顺序,第一预测顺序和第二预测顺序不同。换言之,不同的预测模式对应不同的预测顺序。
在另一种可能的设计方式中,对于尺寸为第一尺寸的待编码块,在上述目标预测模式下采用第三预测顺序对待编码块进行预测。对于尺寸为第二尺寸的待编码块,在上述目标预测模式下采用第四预测顺序对待编码块进行预测。其中,第三预测顺序和第四预测顺序不同。换言之,在待编码块的大小不同时,若采用相同预测模式预测待编码块中的像素时,预测顺序也可以是不同的。
在另一种可能的设计方式中,若上述的目标预测模式指示以待编码块中具有预设大小的子块为单位依次预测待编码块中每个子块的像素,则上述的按照目标预测模式,以目标预测顺序预测待编码块中的每个像素包括:按照目标预测模式,沿目标预测顺序指示的方向依次预测待编码块中每个子块中的像素。
在另一种可能的设计方式中,上述的目标预测模式包括待编码块中每个子块的预测模式,对于待编码块中第一子块而言,如果第一子块中包括第一像素和第二像素,则第一子块的预测模式用于根据第一子块周围已重建的像素并行的对第一像素和第二像素进行预测。
在另一种可能的设计方式中,在上述的以目标预测顺序编码残差块,以得到待编码块的码流之前,上述方法包括:确定待编码块中每个像素的量化参数QP。基于每个像素的QP和量化预设数组,对待编码的残差块进行量化,得到第一残差块。这样的话,则上述的以目标预测顺序编码残差块,以得到待编码块的码流包括:以目标预测顺序编码第一残差块,以得到待编码块的码流。
在另一种可能的设计方式中,上述的以目标预测顺序编码第一残差块,以得到待编码块的码流包括:以目标预测顺序、且采用可变码长的编码方式对第一残差块进行编码,以得到待编码块的码流。
可以理解,第二方面及其任一种可能的设计方式提供的图像编码方法与第一方面及其任一种可能的设计方式提供的图像解码方法是对应的,因此,第二方面及其任一种可能的设计方式提供的技术方案的有益效果,均可以参考第一方面中对应的方法的有益效果的描述,不再赘述。
第三方面,本申请提供了一种图像解码方法,该方法包括:解析待解码块的码流,得到待解码块中每个像素的反量化参数和待解码块的第一残差块。基于每个像素的反量化参数指示的QP和反量化预设数组,对第一残差块进行反量化,得到第二残差块。基于第二残差块对待解码块进行重建,得到重建块。
通过本申请提供的图像解码方法,在图像解码过程中,基于本申请提供的量化预设数组所实现的量化处理方式,减少了量化过程中的乘法运算。由于乘法运算占用的计算资源较高,所以减少乘法运算,可以减少量化过程中占用的计算资源,从而大大节省了解码端计算资源,另外,乘法运算的速度慢,因此该图像解码方法中的量化处理过程的效率相比现有技术大大提高,进而该图像解码方法大大提高了图像的解码效率。
在一种可能的设计方式中,上述解析待解码块的码流,得到待解码块中每个像素的反量化参数和待解码块的第一残差块包括:基于待解码块的码流确定预测待解码块中像素的目标预测模式和待解码块中每个像素的反量化参数;基于目标预测模式,确定与目标预测模式对应的残差扫描顺序;基于残差扫描顺序解析待解码块的码流,得到第一残差块。其中,当目标预测模式为第一目标预测模式时,残差扫描顺序为第一扫描顺序,当目标预测模式为第二目标预测模式时,残差扫描顺序为第二扫描顺序,第一扫描顺序和第二扫描顺序不同。换言之,不同的预测模式对应不同的残差扫描顺序。
在另一种可能的设计方式中,对于尺寸为第一尺寸的待解码块,在上述目标预测模式下采用第三扫描顺序解析待解码块的码流。对于尺寸为第二尺寸的待解码块,在上述目标预测模式下采用第四扫描顺序解析待解码块的码流;其中,第三扫描顺序和第四扫描顺序不同。换言之,在待解码块的大小不同时,若采用相同预测模式预测待解码块中的像素时,解析待解码块的码流时的残差扫描顺序也可以是不同的。
基于该两种可能的设计方式,当基于残差扫描顺序解析得到待解码的残差块时,若以与该残差扫描顺序相同的目标预测顺序预测待解码块的预测块,则基于残差块和预测块重建待解码块的重建块的效率能够得以提高,即该两种可能的设计方法进一步提高了图像的解码效率。另外,在编码侧进行编码时,可以选择与目标预测模式相适应的扫描顺序,从而在编码时能够以较少的比特对残差块进行编码,也就是,待编码块以更少的比特被编码,相应的,在解码侧需要解码的比特变少,解码效率得以提高。因此该可能的设计提供的解码方法能够进一步提高图像解码的效率。
在另一种可能的设计方式中,在上述的反量化预设数组中,第1至第n个数中相邻的两个数之间的间隔为1,第n+1至第n+m个数中相邻的两个数之间的间隔为2,第n+k*m+1至第n+k*m+m个数中相邻两个数之间的数值间隔为2^(k+1),其中,n、m为大于1的整数,k为正整数。
在另一种可能的设计方式中,上述的基于每个像素的QP和反量化预设数组,对第一残差块进行反量化,得到第二残差块包括:基于每个像素的QP在反量化预设数组中确定每个像素对应的放大系数。基于每个像素对应的放大系数,对第一残差块进行反量化运算,得到第二残差块。
在另一种可能的设计方式中,上述的基于每个像素的QP和反量化预设数组,对第一残差块进行反量化,得到第二残差块包括:基于每个像素的QP和反量化预设数组,确定每个像素对应的放大参数和位移参数。基于每个像素对应的放大参数和位移参数,对第一残差块进行反量化运算,得到第二残差块。其中,每个像素对应的放大参数的值为每个像素的QP与7按位与后的值在反量化预设数组中对应的值,每个像素对应的位移参数的值为7与每个像素的QP除以2^3的商的差值。
基于上述三种可能的实现方式,能够基于预先设定的反量化预设数组,进一步地减少反量化过程中的乘法运算。由于乘法运算占用的计算资源较高,所以减少乘法运算,可以减少反量化过程中占用的计算资源,也就是,基于本申请提供的量化预设数组所实现的反量化处理方式大大节省了解码端计算资源,另外,乘法运算的速度慢,因此该图像解码方法中的反量化处理过程的效率相比现有技术大大提高。因此该可能的设计提供的解码方法能够进一步提高图像解码的效率。
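作为示意,下述代码给出了该查表反量化过程的一个最小实现。其中AMP数组的具体数值、函数名均为本文为便于说明所假设,并非标准规定的反量化预设数组;放大参数按QP与7按位与后查表,位移参数取7与QP除以2^3的商的差值,整个过程仅保留一次乘法和一次移位,避免了除法运算:

```python
# 假设的8项反量化放大系数数组(示例值,并非标准中的真实数组)
AMP = [128, 140, 152, 166, 181, 197, 215, 234]

def dequantize_pixel(coeff, qp):
    """按 QP 查表反量化一个残差系数:一次乘法加一次移位,不含除法。"""
    amp = AMP[qp & 7]        # 放大参数:QP 与 7 按位与后在数组中查得
    shift = 7 - (qp >> 3)    # 位移参数:7 减去 QP 除以 2^3 的商
    return (coeff * amp) >> shift
```

例如,当QP为0时,amp取AMP[0]=128、shift为7,反量化即恢复原值;QP每增大8,位移参数减1,反量化的放大倍数随之翻倍。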
在另一种可能的设计方式中,上述的基于第二残差块对待解码块进行重建,得到重建块包括:对第二残差块进行反变换,以重建待解码块的残差值块。基于残差值块对待解码块进行重建,得到重建块。
在另一种可能的设计方式中,上述的基于第二残差块对待解码块进行重建包括:按照目标预测模式,以与该目标预测模式对应的目标预测顺序预测待解码块中的每个像素。基于每个像素的预测值和第二残差块对待解码块进行重建,得到重建块。
从前面的论述可以得知,基于本申请提供的目标预测模式对待解码块进行预测时能够提高预测效率,因此通过该可能的设计方式进一步提高了图像的解码效率。
在另一种可能的设计方式中,上述的解析待解码块的码流包括:采用可变码长的解码方式解析待解码块的码流,以得到编码待解码块对应的残差块中每个值的编码码长CL和第一残差块。
通过该可能的设计方式,待解码块的残差块和CL在编码侧被以更少的比特被编码,因此在相应的解码侧需要解码的比特变少,解码效率得以提高。因此该可能的设计方式提供的解码方法能够进一步提高图像解码的效率。
第四方面,本申请提供了一种图像编码方法,该方法包括:确定待编码块的第二残差块和待编码块中每个像素的量化参数QP。基于每个像素的QP和量化预设数组,对第二残差块进行量化,得到第一残差块。编码第一残差块,得到待编码块的码流。
通过本申请提供的图像编码方法,在图像编码过程中,基于本申请提供的量化预设数组所实现的量化处理方式,减少量化过程中的乘法运算。由于乘法占用的计算资源较高,所以减少乘法运算,就可以减少量化过程中占用的计算资源,大大节省了编码端计算资源,又因为乘法运算的耗时较长,所以减少乘法运算,可以提高量化过程的计算效率,进而该图像编码方法大大提高了图像的编码效率。
在另一种可能的设计方式中,上述的编码第一残差块,得到待编码块的码流包括:确定待编码块的目标预测模式,以及确定与目标预测模式对应的残差扫描顺序。以残差扫描顺序编码残差块,得到待编码块的码流。其中,当目标预测模式为第一目标预测模式时,残差扫描顺序为第一扫描顺序,当目标预测模式为第二目标预测模式时,残差扫描顺序为第二扫描顺序,第一扫描顺序和第二扫描顺序不同。
在另一种可能的设计方式中,对于尺寸为第一尺寸的待编码块,在上述目标预测模式下采用第三扫描顺序对待编码块进行编码。对于尺寸为第二尺寸的待编码块,在上述目标预测模式下采用第四扫描顺序对待编码块进行编码。其中,第三扫描顺序和第四扫描顺序不同。
在另一种可能的设计方式中,上述的量化预设数组包括放大参数数组和位移参数数组,放大参数数组和位移参数数组包括相同数量个数值,对于放大参数数组的第i个值amp[i]和位移参数数组中的第i个值shift[i],2^shift[i]与amp[i]的商构成的反量化预设数组具有以下规律:第1至第n个数中相邻的两个数之间的间隔为1,第n+1至第n+m个数中相邻的两个数之间的间隔为2,第n+k*m+1至第n+k*m+m个数中相邻两个数之间的数值间隔为2^(k+1);其中,n、m为大于1的整数,i、k均为正整数。
在另一种可能的设计方式中,上述的基于每个像素的QP和量化预设数组,对第二残差块进行量化,得到第一残差块包括:基于每个像素的QP,在放大参数数组中确定每个像素的放大参数,以及在位移参数数组中确定每个像素的位移参数。基于每个像素的放大参数和位移参数,对第二残差块进行量化运算,得到第一残差块。
在另一种可能的设计方式中,上述的基于每个像素的QP和量化预设数组,对第二残差块进行量化,得到第一残差块包括:基于每个像素的QP和量化预设数组,确定每个像素对应的放大参数和位移参数。基于每个像素对应的放大参数和位移参数,对第二残差块进行量化运算,得到第一残差块。其中,每个像素对应的放大参数的值为每个像素的QP与7按位与后的值在量化预设数组中对应的值,每个像素对应的位移参数的值为7与每个像素的QP除以2^3的商的加和值。
在另一种可能的设计方式中,上述的第二残差块为待编码块的原始残差值块,或者,上述的第二残差块为残差值块经变换后得到的残差系数块。
在另一种可能的设计方式中,上述的确定待编码块的第二残差块包括:按照目标预测模式,以目标预测顺序预测待编码块中的每个像素。基于待编码块中的每个像素的预测值,确定第二残差块。
在另一种可能的设计方式中,上述的编码第一残差块,得到待编码块的码流包括:采用可变码长的编码方式对第一残差块进行编码,以得到待编码块的码流。
可以理解,第四方面及其任一种可能的设计方式提供的图像编码方法与第三方面及其任一种可能的设计方式提供的图像解码方法是对应的,因此,第四方面及其任一种可能的设计方式提供的技术方案的有益效果,均可以参考第三方面中对应的方法的有益效果的描述,不再赘述。
第五方面,本申请提供了一种图像编码方法,该方法包括:确定待编码块对应的残差块。采用可变码长的编码方式对残差块进行编码,以得到待编码块的码流。
通过本申请提供的编码方法,当编码端通过可变码长的编码方式,例如,可变换阶数的指数哥伦布编码算法对待编码块的残差块进行编码,可以自适应的用较少的比特编码较小的残差值,从而可以达到节省比特的目的,也即在提高图像编码的压缩率的同时,本申请提供的编码方法还提高了编码效率。
在一种可能的设计方式中,上述的可变码长的编码方式包括可变换阶数的指数哥伦布编码方式,则上述的采用可变码长的编码方式对残差块进行编码,得到待编码块的码流包括:确定待编码块中每个像素的属性类型。对于残差块中与待编码块的第三像素对应的第一值,基于预设策略和第三像素的属性类型,确定编码第一值时的目标阶数。采用目标阶数的指数哥伦布编码算法编码第一值,得到码流。
在另一种可能的设计方式中,上述的可变码长的编码方式包括预设阶数的指数哥伦布编码方式,采用可变码长的编码方式对残差块进行编码,得到待编码块的码流包括:对于残差块中与待编码块的第三像素对应的第一值,采用预设阶数的指数哥伦布编码算法编码第一值,得到码流。
该两种可能的设计方式通过可变或指定阶数的指数哥伦布编码方式,实现了对残差块中残差值的变长编码,该编码方式可以自适应的用较少的比特编码较小的残差值,从而可以达到节省比特的目的。
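k阶指数哥伦布编码的基本形式可以用如下示意性代码说明(针对非负整数的最小示例,函数名为本文假设;负残差值到非负整数的映射及符号处理此处从略):

```python
def exp_golomb_encode(value, k):
    """k 阶指数哥伦布编码,返回码字比特串:数值越小,码字越短。"""
    v = value + (1 << k)                 # 加上 2^k 后取其二进制表示
    prefix_len = v.bit_length() - k - 1  # 前缀 0 的个数随数值大小自适应
    return '0' * prefix_len + format(v, 'b')
```

例如0阶时,数值0、1、2分别被编码为“1”“010”“011”,即较小的残差值自适应地占用较少比特。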
在另一种可能的设计方式中,上述的方法还包括:确定残差块对应的语义元素,语义元素包括编码残差块中每个值的编码码长CL。采用可变码长的编码方式对每个值的CL进行编码,以得到码流。
在另一种可能的设计方式中,上述的可变码长的编码方式包括可变换阶数的指数哥伦布编码方式,对于残差块中的任一值,上述采用可变码长的编码方式对每个值的CL进行编码,以得到码流包括:确定编码任一值的CL时的目标阶数。采用目标阶数的指数哥伦布编码算法编码任一值的CL,以得到码流。
该两种可能的设计方式通过可变或指定阶数的指数哥伦布编码方式,实现了对残差块中残差值的CL的变长编码,该编码方式可以自适应的用较少的比特编码较小的CL,从而可以达到节省比特的目的。
在另一种可能的设计方式中,对于残差块中的任一值,上述的采用可变码长的编码方式对每个值的CL进行编码,以得到码流包括:当任一值的CL小于等于阈值,采用预设数量个比特编码任一值的CL,以得到码流。当任一值的CL大于阈值,采用截断一元码编码任一值的CL,以得到码流。
该可能的设计方式,通过定长编码和截断一元码实现了对残差块中残差值的CL的变长编码,该编码方式可以自适应的用较少的比特编码较小的CL,从而可以达到节省比特的目的。
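该CL编码方式可以用如下示意性代码说明。其中阈值、定长比特数、CL最大值等取值均为本文为演示而假设;两类码字在码流中如何区分取决于具体的码流语法设计,此处仅分别示意两种编码形式:

```python
def truncated_unary(v, v_max):
    """截断一元码:v 个 1 后跟一个 0;v 达到最大值时省略结尾的 0。"""
    return '1' * v if v == v_max else '1' * v + '0'

def encode_cl(cl, threshold=3, bits=2, cl_max=16):
    """CL 不大于阈值时用预设数量个比特定长编码,否则用截断一元码编码。"""
    if cl <= threshold:
        return format(cl, '0{}b'.format(bits))
    return truncated_unary(cl, cl_max)
```

可以看出,较小的CL(更常见的情形)只占用固定的少量比特,较大的CL才使用较长的一元码字,从而整体上节省比特。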
在另一种可能的设计方式中,上述的残差块为待编码块的原始残差值块;或者,上述的残差块为原始残差值块经变换后得到的残差系数块;或者,上述的残差块为残差系数块经量化后得到的量化系数块。
在另一种可能的设计方式中,如果上述的残差块为待编码块的原始残差值块,则上述的确定待编码块对应的残差块包括:确定待编码块的目标预测模式,以及确定与目标预测模式对应的目标预测顺序。按照目标预测模式,以目标预测顺序预测待编码块中的每个像素。基于待编码块中的每个像素的预测值,确定残差块。
通过该可能的设计方式,由于基于本申请提供的目标预测模式对待编码块进行预测时能够提高预测效率,因此该可能的设计方式进一步提高了图像的编码效率。
在另一种可能的设计方式中,如果上述的残差块为残差系数块经量化后得到的量化系数块,则在上述的采用可变码长的编码方式对残差块进行编码,以得到待编码块的码流之前,上述方法还包括:确定待编码块中每个像素的量化参数QP。基于每个像素的QP和量化预设数组,对待编码的残差值块进行量化,得到残差块。
通过该可能的设计,能够基于更少的乘法运算实现对待编码的残差值块的量化,即能够高效的实现对待编码的残差值块的量化,因此该可能的设计提供的编码方法能够进一步提高图像编码的效率。
第六方面,本申请提供了一种图像解码方法,该方法包括:采用可变码长的解码方式解析待解码块的码流,得到编码待解码块对应的残差块中每个值的编码码长CL。基于编码每个值的CL确定残差块。基于残差块对待解码块进行重建,得到重建块。
在另一种可能的设计方式中,上述的可变码长的解码方式包括可变换阶数或预设阶数的指数哥伦布解码方式,则上述的采用可变码长的解码方式解析待解码块的码流,得到编码待解码块对应的残差块中每个值的编码码长CL包括:确定解析编码残差块中每个值的CL时的目标阶数。采用目标阶数的指数哥伦布解码算法解析码流,得到编码残差块中每个值的CL。
在另一种可能的设计方式中,上述的采用可变码长的解码方式解析待解码块的码流,得到编码待解码块对应的残差块中每个值的编码码长CL包括:当用于编码残差块中任一值CL的比特数量为预设数量,则基于定长解码策略解析码流,以得到编码任一值的CL。当用于编码残差块中任一值CL的比特数量大于预设数量,则基于截断一元码的规则解析码流,以得到编码任一值的CL。
在另一种可能的设计方式中,上述的基于编码每个值的CL确定残差块包括:基于编码每个值的CL在码流中确定与待解码块中每个像素对应的比特组。确定待解码块中每个像素的属性类型。对于与待解码块中第三像素对应的第一比特组,基于预设策略和第三像素的属性类型,确定解析第一比特组的目标阶数。采用目标阶数的指数哥伦布解码算法解析第一比特组,以得到残差块。
在另一种可能的设计方式中,上述的基于编码每个值的CL确定残差块包括:基于编码每个值的CL在码流中确定与待解码块中每个像素对应的比特组。对于与待解码块中第三像素对应的第一比特组,采用预设阶数的指数哥伦布解码算法解析第一比特组,以得到残差块。
在另一种可能的设计方式中,上述的基于残差块对待解码块进行重建,得到重建块包括:对该残差块进行反量化和反变换,或者,对该残差块进行反量化,以重建待解码块的残差值块。基于残差值块对待解码块进行重建,得到重建块。
在另一种可能的设计方式中,上述的基于残差块对待解码块进行重建,得到重建块包括:基于待解码块的码流确定预测待解码块中像素的目标预测模式。基于目标预测模式,确定与目标预测模式对应的目标预测顺序。按照目标预测模式,以目标预测顺序预测待解码块中的每个像素。基于待解码块中的每个像素的预测值和残差块对待解码块进行重建,得到重建块。
在另一种可能的设计方式中,上述的对残差块进行反量化包括:基于解析待解码块的码流得到的待解码块中每个像素的反量化参数和反量化预设数组,对残差块进行反量化。
可以理解,第六方面及其任一种可能的设计方式提供的图像解码方法与第五方面及其任一种可能的设计方式提供的图像编码方法是对应的,因此,第六方面及其任一种可能的设计方式提供的技术方案的有益效果,均可以参考第五方面中对应的方法的有益效果的描述,不再赘述。
第七方面,本申请提供了一种图像解码装置。该解码装置可以是视频解码器或包含视频解码器的设备。该解码装置包括用于实现第一方面、第三方面或第六方面中任一种可能实现方式中方法的各个模块。所述解码装置具有实现上述相关方法实例中行为的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能对应的模块。其有益效果可以参见相应方法中的描述,此处不再赘述。
第八方面,本申请提供了一种图像编码装置。该编码装置可以是视频编码器或包含视频编码器的设备。该编码装置包括用于实现第二方面、第四方面或第五方面中任一种可能实现方式中方法的各个模块。所述编码装置具有实现上述相关方法实例中行为的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能对应的模块。其有益效果可以参见相应方法中的描述,此处不再赘述。
第九方面,本申请提供一种电子设备,包括处理器和存储器,所述存储器用于存储计算机指令,所述处理器用于从存储器中调用并运行所述计算机指令,以实现第一方面至第六方面中任一种实现方式的方法。
例如,该电子设备可以是指视频编码器,或包括视频编码器的设备。
又如,该电子设备可以是指视频解码器,或包括视频解码器的设备。
第十方面,本申请提供一种计算机可读存储介质,存储介质中存储有计算机程序或指令,当计算机程序或指令被计算设备或计算设备所在的存储系统执行时,以实现第一方面至第六方面中任一种实现方式的方法。
第十一方面,本申请提供一种计算机程序产品,该计算机程序产品包括指令,当计算机程序产品在计算设备或处理器上运行时,使得计算设备或处理器执行该指令,以实现第一方面至第六方面中任一种实现方式的方法。
第十二方面,本申请提供一种芯片,包括存储器和处理器,存储器用于存储计算机指令,处理器用于从存储器中调用并运行该计算机指令,以实现第一方面至第六方面中任一种实现方式的方法。
第十三方面,本申请提供一种图像译码系统,该图像译码系统包括编码端和解码端,解码端用于实现第一、第三或第六方面提供的相应的解码方法,编码端用于实现与此对应的编码方法。
本申请在上述各方面提供的实现方式的基础上,还可以进行进一步组合以提供更多实现方式。或者说,上述任意一个方面的任意一种可能的实现方式,在不冲突的情况下,均可以应用于其他方面,从而得到新的实施例。例如,上述第一、第三以及第六方面提供的任意一种图像解码方法,可以在不冲突的情况下两两组合、或三个方面进行组合,从而可以得到新的图像解码方法。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。
图1为本申请实施例所应用的编解码系统10的架构示意图;
图2为用于实现本申请实施例方法的编码器112的实例的示意性框图;
图3为本申请实施例提供的一种图像、并行编码单元、独立编码单元和编码单元之间的对应关系的示意图;
图4为本申请实施例提供的一种编码过程的流程示意图;
图5为用于实现本申请实施例方法的解码器122的实例的示意性框图;
图6a为本申请实施例提供的一种图像编码方法的流程示意图;
图6b为本申请实施例提供的一种图像解码方法的流程示意图;
图6c为本申请实施例提供的另一种图像编码方法的流程示意图;
图7a为本申请实施例提供的一种预测顺序的示意图;
图7b为本申请实施例提供的另一种预测顺序的示意图;
图7c-1为本申请实施例提供的又一种预测顺序的示意图;
图7c-2为本申请实施例提供的又一种预测顺序的示意图;
图7d-1为本申请实施例提供的又一种预测顺序的示意图;
图7d-2为本申请实施例提供的又一种预测顺序的示意图;
图7d-3为本申请实施例提供的又一种预测顺序的示意图;
图7e为本申请实施例提供的又一种预测顺序的示意图;
图7f为本申请实施例提供的又一种预测顺序的示意图;
图7g为本申请实施例提供的又一种编码顺序的示意图;
图8为本申请实施例提供的另一种图像解码方法的流程示意图;
图9a为本申请实施例提供的又一种图像编码方法的流程示意图;
图9b为本申请实施例提供的又一种图像解码方法的流程示意图;
图10a为本申请实施例提供的又一种图像编码方法的流程示意图;
图10b为本申请实施例提供的又一种图像解码方法的流程示意图;
图11为本申请实施例提供的一种解码装置1100的结构示意图;
图12为本申请实施例提供的一种编码装置1200的结构示意图;
图13为本申请实施例提供的一种解码装置1300的结构示意图;
图14为本申请实施例提供的一种编码装置1400的结构示意图;
图15为本申请实施例提供的一种编码装置1500的结构示意图;
图16为本申请实施例提供的一种解码装置1600的结构示意图;
图17为本申请实施例提供的一种电子设备的结构示意图。
具体实施方式
为了更清楚的理解本申请实施例,下面对本申请实施例中涉及的部分术语或技术进行说明:
1)、预测模式
预测当前图像块(如待编码图像块(以下简称待编码块)或待解码图像块(以下简称待解码块))中每个像素的预测值所采用的预测方式的组合称为预测模式。其中,预测当前图像块中的不同像素可以采用不同的预测方式,也可以采用相同的预测方式,预测当前图像块中的所有像素所采用的预测方式可以共同称为该当前图像块的(或对应的)预测模式。
可选的,预测模式包括:逐点预测模式、帧内预测模式、块复制模式和原始值模式(即直接解码固定位宽的重建值模式)等。
示例的,逐点预测模式是指将待预测像素周围相邻像素的重建值作为待预测像素的预测值的预测模式。逐点预测模式包括垂直预测、水平预测、垂直均值预测和水平均值预测等预测方式中的一种或多种的组合。
其中,垂直预测为用待预测像素上侧(既可以是相邻的上侧,也可以是非相邻但距离较近的上侧)像素的重建值来获得待预测像素的预测值(PointPredData)。在本申请实施例中,将采用垂直预测的预测方式称为T预测方式。一种示例为:将待预测像素上侧相邻像素的重建值作为待预测像素的预测值。
水平预测为用待预测像素左侧(既可以是相邻的左侧,也可以是非相邻但距离较近的左侧)像素的重建值来获得待预测像素的预测值。在本申请实施例中,将采用水平预测的预测方式称为L预测方式。一种示例为:将待预测像素左侧相邻像素的重建值作为待预测像素的预测值。
垂直均值预测为用待预测像素上下方像素的重建值来获得待预测像素的预测值。在本申请实施例中,将采用垂直均值预测的预测方式称为TB预测方式。一种示例为:将待预测像素垂直上方相邻像素的重建值和垂直下方相邻像素的重建值的均值作为待预测像素的预测值。
水平均值预测为用待预测像素左右两侧像素的重建值来获得待预测像素的预测值。在本申请实施例中,将采用水平均值预测的预测方式称为RL预测方式。一种示例为:将待预测像素水平左侧相邻像素的重建值和水平右侧相邻像素的重建值的均值作为待预测像素的预测值。
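上述T、L、TB、RL四种逐点预测方式的取值规则,可以用如下示意性代码概括(为便于理解的最小示例,函数名为本文假设,并非标准实现;均值预测中加1后右移1位,等价于对两像素重建值之和做四舍五入的除以2):

```python
def predict_T(top):
    # 垂直预测(T):取上侧相邻像素的重建值
    return top

def predict_L(left):
    # 水平预测(L):取左侧相邻像素的重建值
    return left

def predict_TB(top, bottom):
    # 垂直均值预测(TB):上、下相邻像素重建值取均值
    return (top + bottom + 1) >> 1

def predict_RL(left, right):
    # 水平均值预测(RL):左、右相邻像素重建值取均值
    return (left + right + 1) >> 1
```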
示例的,帧内预测模式是将待预测块周围相邻块中像素的重建值作为预测值的预测模式。
示例的,块复制模式是将已解码块(不一定相邻)中像素的重建值作为预测值的预测模式。
示例的,原始值模式是直接解码固定位宽的重建值模式,即无参考预测模式。
2)、残差编码模式
对当前图像块(如待编码块或待解码块)的残差(即残差块,由当前图像块中的每个像素的残差值构成)进行编码的方式,被称为残差编码模式。残差编码模式可以包括跳过残差编码模式和正常残差编码模式。
在跳过残差编码模式下,无需编码(解码)残差系数,此时当前图像块中的像素的残差值为全0,每个像素的重建值等于该像素的预测值。
在正常残差编码模式下,需要编码(解码)残差系数,此时当前图像块中的像素的残差值不全为0,每个像素的重建值可以基于该像素的预测值和残差值得到。
在一个示例中,像素的残差系数可以等价于该像素的残差值;在另一个示例中,像素的残差系数可以是该像素的残差值经一定处理得到的。
3)、量化和反量化
在图像编码过程中,为实现对图像的压缩,通常会对待编码块的残差块进行量化,或者对该残差块经一定处理后得到的残差系数块进行量化,从而使得量化后的残差块或残差系数块可以以更少的比特进行编码。可以理解,残差块为基于待编码块的原始像素块和预测块得到的残差值块,残差系数块为对残差块进行一定处理变换后得到的系数块。
作为示例,以编码装置对残差块进行量化为例,编码装置可以为待编码块的残差块中的每个残差值除以量化系数,以缩小该残差块中的残差值。这样,相比未进行量化的残差值,量化后被缩小的残差值即可通过更少的比特来编码,这样即实现了图像的压缩编码。
相应的,为从压缩编码后的码流中重建图像块,解码装置可以对从码流中解析到的残差块或残差系数块进行反量化,从而可以重建图像块对应的未被量化的残差块或残差系数块,进而,解码装置根据重建的残差块或残差系数块对图像块进行重建,从而得到图像的重建块。
作为示例,以解码装置从码流中解析到待解码块经量化后的残差块为例,解码装置可以对该残差块进行反量化。具体的,解码装置可以为解析到的残差块中的每个残差值乘以量化系数,以重建待解码块对应的未被量化的残差块中的残差值,从而得到重建的残差块。其中,量化系数为编码装置在编码待解码块时,对待解码块的残差块进行量化时的量化系数。这样,解码装置基于反量化后重建的残差块,即可实现对待解码块的重建,并得到待解码块的重建块。
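上述量化与反量化的基本关系可以用如下示意性代码说明(标量量化的最小示例,函数名为本文假设,未包含移位优化与舍入偏置等实现细节):

```python
def quantize(residual, q_step):
    """编码端:残差值除以量化步长并取整,使其可用更少比特编码。"""
    return round(residual / q_step)

def dequantize(level, q_step):
    """解码端:量化值乘以相同的量化步长,重建近似的残差值。"""
    return level * q_step
```

可以看出,量化通常是有损的:例如残差值17以步长4量化为4,反量化后重建为16,而非原始的17;步长越大,压缩越强,失真也越大。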
4)、其他术语
本申请实施例中的术语“至少一个(种)”包括一个(种)或多个(种)。“多个(种)”是指两个(种)或两个(种)以上。例如,A、B和C中的至少一种,包括:单独存在A、单独存在B、同时存在A和B、同时存在A和C、同时存在B和C,以及同时存在A、B和C。在本申请的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。“多个”是指两个或多于两个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
下面描述本申请实施例所应用的系统架构。
参见图1,图1示出了本申请实施例所应用的编解码系统10的架构示意图。如图1所示,编解码系统10可以包括源设备11和目的设备12。其中,源设备11用于对图像进行编码,因此,源设备11可被称为图像编码装置或视频编码装置。目的设备12用于对由源设备11所产生的经编码的图像数据进行解码,因此,目的设备12可被称为图像解码装置或视频解码装置。
源设备11和目的设备12的具体形态可以为各种装置,本申请实施例对此不作限定。例如,源设备11和目的设备12可以为桌上型计算机、移动计算装置、笔记本(例如,膝上型)计算机、平板计算机、机顶盒、例如所谓的“智能”电话等电话手持机、电视机、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机或其他类似设备等。
可选的,图1中的源设备11和目的设备12可以是两个单独的设备。或者,源设备11和目的设备12为同一设备,即源设备11或对应的功能以及目的设备12或对应的功能可以集成在同一个设备上。
源设备11和目的设备12之间可以进行通信。例如,目的设备12可从源设备11接收经编码的图像数据。在一个示例中,源设备11和目的设备12之间可以包括一个或多个通信媒体,该一个或多个通信媒体用于传输经编码的图像数据。该一个或多个通信媒体可以包含路由器、交换器、基站或促进从源设备11到目的设备12的通信的其它设备,本申请实施例对此不作限定。
如图1所示,源设备11包括编码器112。可选的,源设备11还可以包括图像预处理器111以及通信接口113。其中,图像预处理器111,用于对接收到的待编码图像执行预处理,例如,图像预处理器111执行的预处理可以包括整修、色彩格式转换(例如,从RGB格式转换为YUV格式)、调色或去噪等。编码器112,用于接收经图像预处理器111预处理的图像,采用相关预测模式对预处理后的图像进行处理,从而提供经编码的图像数据。在一些实施例中,编码器112可以用于执行下文中所描述的各个实施例中的编码过程。通信接口113,可用于将经编码的图像数据传输至目的设备12或任何其它设备(如存储器),以用于存储或直接重构,其它设备可为任何用于解码或存储的设备。通信接口113也可以将经编码的图像数据封装成合适的格式之后再传输。
可选的,上述图像预处理器111、编码器112以及通信接口113可能是源设备11中的硬件部件,也可能是源设备11中的软件程序,本申请实施例不作限定。
目的设备12包括解码器122。可选的,目的设备12还可以包括通信接口121、图像后处理器123。其中,通信接口121可用于从源设备11或任何其它源设备接收经编码的图像数据,该任何其它源设备例如为存储设备。通信接口121还可以解封装通信接口113所传输的数据以获取经编码的图像数据。解码器122,用于接收经编码的图像数据并输出经解码的图像数据(也称为经重构图像数据或已重构图像数据)。在一些实施例中,解码器122可以用于执行下文中所描述的各个实施例所述的解码过程。图像后处理器123,用于对经解码的图像数据执行后处理,以获得经后处理的图像数据。图像后处理器123执行的后处理可以包括:色彩格式转换(例如,从YUV格式转换为RGB格式)、调色、整修或重采样,或任何其它处理,图像后处理器123还可用于将经后处理的图像数据传输至显示设备进行显示。
可选的,上述通信接口121、解码器122以及图像后处理器123可能是目的设备12中的硬件部件,也可能是目的设备12中的软件程序,本申请实施例不作限定。
下面对图1中的编码器112和解码器122的结构进行简单介绍。
参见图2,图2示出用于实现本申请实施例方法的编码器112的实例的示意性框图。如图2所示,编码器112包括预测处理单元201、残差计算单元202、残差变换单元203、量化单元204、编码单元205、反量化单元(也可以称为逆量化单元)206、残差逆变换单元207、重构单元(或者称为重建单元)208以及滤波器单元209。可选的,编码器112还可以包括缓冲器、经解码图像缓冲器。其中,缓冲器用于缓存重构单元208输出的重构块(或称为重建块),经解码图像缓冲器用于缓存滤波器单元209输出的滤波后的图像块。
在一个示例中,编码器112的输入为待编码图像的图像块(即待编码块或编码单元)。在另一个示例中,编码器112的输入为待编码图像,编码器112中可以包括分割单元(图2中未示出),该分割单元用于将待编码图像分割成多个图像块。编码器112用于逐块编码从而完成对待编码图像的编码。例如,编码器112对每个图像块执行编码过程,从而完成对待编码图像的编码。
在一个示例中,一种将待编码图像划分成多个图像块的方法可以包括:
步骤1:将一帧图像分成一个或多个互相不重叠的并行编码单元,各个并行编码单元间无依赖关系,可并行/独立编解码。
步骤2:对于每个并行编码单元,编码端可将其分成一个或多个互相不重叠的独立编码单元,各个独立编码单元间可相互不依赖,但可以共用该并行编码单元的一些头信息。
步骤3:对于每个独立编码单元,编码端可再将其分成一个或多个互相不重叠的编码单元。其中,若将独立编码单元划分成多个互相不重叠的编码单元,则划分方式可以为水平等分法、垂直等分法或水平垂直等分法。当然具体实现时不限于此。独立编码单元内的各个编码单元可相互依赖,即在执行预测步骤的过程中可以相互参考。
编码单元的宽为w_cu,高为h_cu,可选的,其宽大于高(除非是边缘区域)。通常的,编码单元可为固定的w_cu×h_cu,w_cu和h_cu均为2的N次方(N大于等于0),如16×4,8×4,16×2,8×2,4×2,8×1,4×1等。
编码单元既可以是包括亮度Y、色度Cb、色度Cr三个分量(或红R、绿G、蓝B三分量),也可以仅包含其中的某一个分量。若包含三个分量,几个分量的尺寸可以完全一样,也可以不一样,具体与图像输入格式相关。
如图3所示,为一种图像、并行编码单元、独立编码单元和编码单元之间的对应关系的示意图。例如,图3中的并行编码单元1和并行编码单元2是按照图像面积比例3:1对一个图像进行划分的,其中,并行编码单元1包括一个被划分为4个编码单元的独立编码单元。
残差计算单元202用于计算待编码块的真实值和待编码块的预测块之间的残差值,得到残差块。例如,通过逐像素将待编码块的真实像素值减去预测块的像素值得到残差块。
在一个示例中,残差变换单元203用于基于残差块确定残差系数。可选的,在该过程中,可以包括:对残差块进行例如离散余弦变换(discrete cosine transform,DCT)或离散正弦变换(discrete sine transform,DST)的变换,以在变换域中获取变换系数,变换系数也可以称为变换残差系数或残差系数,该残差系数可以在变换域中表示残差块。当然,编码器112在对待编码块进行编码的过程中也可以不包含残差变换的步骤。
量化单元204用于通过应用标量量化或向量量化来量化变换系数或残差值,以获取经量化残差系数(或经量化残差值)。量化过程可以减少与部分或全部残差系数有关的位深度(bitdepth)。例如,可在量化期间将p位变换系数向下舍入到q位变换系数,其中p大于q。可通过调整量化参数(quantization parameter,QP)修改量化程度。例如,对于标量量化,可以应用不同的标度来实现较细或较粗的量化。较小量化步长对应较细量化,而较大量化步长对应较粗量化。可以通过QP指示合适的量化步长。
编码单元205用于对上述经量化残差系数(或经量化残差值)进行编码,以经编码比特流(或称为码流)的形式输出的经编码的图像数据(即当前待编码块的编码结果),然后可以将经编码比特流传输到解码器,或将其存储起来,后续传输至解码器或用于检索。编码单元205还可用于对待编码块的语法元素进行编码,例如将待编码块采用的预测模式编码至码流等。
在一个示例中,编码单元205对残差系数编码,一种可行方式为:半定长编码方式。首先将一个残差小块(residual block,RB)内残差绝对值的最大值定义为modified maximum(mm)。根据上述mm确定该RB内残差系数的编码比特数(同一个RB内残差系数的编码比特数一致),也就是编码长度CL。例如,若当前RB的CL为2,并且当前残差系数为1,则编码残差系数1需要2个比特,表示为01。在一种特殊情况下,若当前RB的CL为7,则表示编码8-bit的残差系数和1-bit的符号位。其中,CL的确定方式是找满足当前RB内所有残差都在[-2^(M-1),2^(M-1)]范围之内的最小M值,将找到的M作为当前RB的CL。若当前RB内同时存在-2^(M-1)和2^(M-1)两个边界值,则M应增加1,即需要M+1个比特编码当前RB的所有残差;若当前RB内仅存在-2^(M-1)和2^(M-1)两个边界值中的一个,则需要编码一个Trailing位(最末位)来确定该边界值是-2^(M-1)还是2^(M-1);若当前RB内所有残差均不存在-2^(M-1)和2^(M-1)中的任何一个,则无需编码该Trailing位。
当然,也可以采用其他残差系数编码方法,如指数Golomb(哥伦布编码算法)编码方法、Golomb-Rice(哥伦布编码算法的变种)编码方法、截断一元码编码方法、游程编码方法、直接编码原始残差值等。
另外,对于某些特殊的情况,编码单元205也可以直接编码原始值,而不是残差值。
反量化单元206用于对上述经量化残差系数(或经量化残差值)进行反量化,以获取经反量化残差系数(经反量化残差值),该反量化是上述量化单元204的反向应用,例如,基于或使用与量化单元204相同的量化步长,应用量化单元204应用的量化方案的逆量化方案。
残差逆变换单元207用于对上述反量化残差系数逆变换(或反变换),以得到重建的残差块。可选的,反变换可以包括逆离散余弦变换(discrete cosine transform,DCT)或逆离散正弦变换(discrete sine transform,DST)。这样,对上述反量化残差系数逆变换(或反变换)后得到的逆变换值,即为在像素域(或者称为样本域)中重建的残差值。也即,经反量化的残差系数块在经残差逆变换单元207逆变换后,得到的块为重建的残差块。当然,在编码器112中不包括上述的残差变换单元203时,编码器112也可以不包含反变换的步骤。
重构单元208用于将重建的残差块添加至预测块,以在样本域中获取经重构块,重构单元208可以为求和器。例如,重构单元208将重建的残差块中的残差值与预测块中对应像素的预测值相加,以得到对应像素的重建值。该重构单元208输出的重构块可以后续用于预测其他待编码的图像块。
滤波器单元209(或简称“滤波器”)用于对经重构块进行滤波以获取经滤波块,从而顺利进行像素 转变,进而提高图像质量。
在一个示例中,一种编码过程如图4所示。具体的,编码器判断是否采用逐点预测模式,若采用逐点预测模式,则编码器基于逐点预测模式对待编码块中的像素进行预测以对待编码块进行编码,并对编码结果执行反量化和重建步骤,从而实现编码过程;若未采用逐点预测模式,编码器则判断是否采用原始值模式。若采用原始值模式,则采用原始值模式编码;若未采用原始值模式,编码器则确定采用其他预测模式如帧内预测模式或块复制模式进行预测并编码。后续,在确定执行残差跳过时直接对编码结果执行重建步骤;在确定不执行残差跳过时,对编码结果先执行反量化步骤得到经反量化残差块,并判断是否采用块复制预测模式。若确定采用块复制预测模式,则一种情况下,在确定执行变换跳过时,直接对经反量化残差块执行重建步骤,在另一种情况下,在确定没有执行变换跳过时,通过对经反量化残差块执行反变换和重建步骤,实现编码过程。若确定不采用块复制模式(该情况下,所采用的一种预测模式为帧内预测模式)时,对经反量化残差块执行反变换步骤和重建步骤,从而实现编码过程。
具体的,在本申请实施例中,编码器112用于实现后文实施例中描述的编码方法。
在一个示例中,编码器112实现的一种编码过程可以包括以下步骤:
步骤11:预测处理单元201确定预测模式,并基于确定的预测模式和已编码图像块的重构块对待编码块进行预测,得到待编码块的预测块。
其中,已编码图像块的重构块是反量化单元206、残差逆变换单元207以及重构单元208依次对该已编码图像块的经量化残差系数块处理后得到的。
步骤12:残差计算单元202基于预测块和待编码块的原始像素值,得到待编码块的残差块。
步骤13:残差变换单元203对残差块进行变换,得到残差系数块。
步骤14:量化单元204对残差系数块进行量化,得到经量化残差系数块。
步骤15:编码单元205对经量化残差系数块进行编码,以及对相关语法元素(例如预测模式,编码模式)进行编码,得到待编码块的码流。
参见图5,图5示出用于实现本申请实施例方法的解码器122的实例的示意性框图。解码器122用于接收例如由编码器112编码的图像数据(即经编码比特流,例如,包括图像块的经编码比特流及相关联的语法元素),以获取经解码图像块。
如图5所示,解码器122包括码流解析单元301、反量化单元302、残差逆变换单元303、预测处理单元304、重构单元305、滤波器单元306。在一些实例中,解码器122可执行大体上与图2的编码器112描述的编码过程互逆的解码过程。可选的,解码器122还可以包括缓冲器、经滤波图像缓冲器,其中,缓冲器用于缓存重构单元305输出的重构图像块,经滤波图像缓冲器用于缓存滤波器单元306输出的滤波后的图像块。
码流解析单元301用于对经编码比特流执行解码,以获取经量化残差系数(或经量化残差值)和/或解码参数(例如,解码参数可以包括编码侧执行的帧间预测参数、帧内预测参数、滤波器参数和/或其它语法元素中的任意一个或全部)。码流解析单元301还用于将上述解码参数转发至预测处理单元304,以供预测处理单元304根据解码参数执行预测过程。
反量化单元302的功能可与编码器112的反量化单元206的功能相同,用于反量化(即逆量化)经码流解析单元301解码输出的经量化残差系数。
残差逆变换单元303的功能可与编码器112的残差逆变换单元207的功能相同,用于对上述经反量化残差系数进行逆变换(例如,逆DCT、逆整数变换或概念上类似的逆变换过程),得到重建的残差值。逆变换得到块即为重建的待解码块在像素域中的残差块。
重构单元305(例如求和器)的功能可与编码器112的重构单元208的功能相同。
预测处理单元304,用于接收或获取经编码图像数据(例如当前图像块的经编码比特流)和已重构图像数据,预测处理单元304还可以从例如码流解析单元301接收或获取预测模式的相关参数和/或关于所选择的预测模式的信息(即上述的解码参数),并且基于已重构图像数据中的相关数据和解码参数对当前图像块进行预测,得到当前图像块的预测块。
重构单元305用于将重建的残差块添加到预测块,以在样本域中获取待解码图像的重构块,例如将重建的残差块中的残差值与预测块中的预测值相加。
滤波器单元306用于对经重构块进行滤波以获取经滤波块,该经滤波块即为经解码图像块。
具体的,在本申请实施例中,解码器122用于实现后文实施例中描述的解码方法。
应当理解的是,在本申请实施例的编码器112和解码器122中,针对某个环节的处理结果也可以经过进一步处理后,输出到下一个环节,例如,在插值滤波、运动矢量推导或滤波等环节之后,对相应环节的处理结果进一步进行截断Clip或移位shift等操作。
在一个示例中,解码器122实现的一种解码过程可以包括以下步骤:
步骤21:码流解析单元301解析预测模式和残差编码模式;
步骤22:码流解析单元301基于预测模式和残差编码模式解析量化相关值(如near值(一种量化步长表征值),或QP值等);
步骤23:反量化单元302基于预测模式和量化相关值解析残差系数;
步骤24:预测处理单元304基于预测模式,获得当前图像块各个像素的预测值;
步骤25:残差逆变换单元303对残差系数逆变换,以重建当前图像块各个像素的残差值;
步骤26:重构单元305基于当前编码单元各个像素的预测值和残差值,获得其重建值。
以下,结合附图,本申请实施例对以下图像编解码方法进行说明。
需要说明的是,本申请任一实施例中的编码端可以是上述图1或图2中的编码器112,也可以是上述图1中的源设备11。本申请任一实施例中的解码端可以是上述图1或图5中的解码器122,也可以是上述图1中的目的设备12,本申请实施例对此不作限定。
参考图6a,图6a示出了本申请实施例提供的一种图像编码方法的流程示意图,该方法可以包括如下步骤:
S11、编码端确定待编码块的目标预测模式,以及确定与目标预测模式对应的目标预测顺序。
其中,S11的详细说明可以参考下文S101的描述,不再赘述。
S12、编码端按照上述的目标预测模式,以上述目标预测顺序预测待编码块中的每个像素。
其中,S12的详细说明可以参考下文S102的描述,不再赘述。
S13、编码端基于待编码块中每个像素的预测值确定待编码块的残差块。
其中,S13的详细说明可以参考下文S103的描述,不再赘述。
S14(可选的)、编码端对待编码块的残差块进行变换处理,得到经变换的残差系数块。
S15、编码端对上述残差系数块进行量化处理,得到经量化的残差系数块。
可以理解,当编码端未执行S14时,编码端可以直接对上述残差块进行量化处理,从而得到经量化的残差块。
其中,编码端对残差块或残差系数块进行量化处理的详细说明,可以参考下文S301-S302中对残差块进行量化的详细描述,不再赘述。
S16、编码端编码上述经量化的残差系数块,得到待编码块的码流。
可以理解,当编码端未执行S14时,编码端编码上述经量化的残差块,从而得到待编码块的码流。
其中,编码端编码经量化的残差系数块(或残差块)的过程,可以参考下文S502的详细描述,不再赘述。
需要说明,编码端编码残差系数块的残差扫描顺序,可以和S11中的目标预测顺序相同。这样的话,在解码侧,解码端可以以目标预测顺序预测待解码块中像素的预测值,同时以与该目标预测顺序相同的残差扫描顺序解码待解码块的残差块,进而能够高效的得到待解码块的重建块。
在S11-S16所述的图像编码方法中,编码端采用的用于预测待编码块中像素的目标预测模式具有较高的预测效率,编码端采用的用于量化残差块或残差系数块的量化方法能够减少编码端的乘法运算,即提高了量化效率,并且,编码端所采用的编码方法能够减少用于编码残差块或残差系数块的比特数,因此,通过本申请实施例提供的方法,能够大大提高编码端的编码效率。
参考图6b,图6b示出了本申请实施例提供的一种图像解码方法的流程示意图,该方法可以包括如下步骤:
S21、解码端解析待解码块的码流,以确定预测待解码块中像素的目标预测模式。
其中,S21的详细说明可以参考下文S201的描述,不再赘述。
S22、解码端基于目标预测模式,确定与目标预测模式对应的目标预测顺序。
其中,S22的详细说明可以参考下文S202的描述,不再赘述。
S23、解码端按照目标预测模式,以目标预测顺序预测待解码块中的每个像素,得到该每个像素的预测值。
S24、解码端采用可变码长的解码方式解析待解码块的码流,得到待解码块对应的残差块中每个值的CL,以及基于该CL、并按照上述残差扫描顺序解析待解码块的码流,得到待解码块的第一残差块。
其中,残差扫描顺序和上述的目标预测顺序可以相同。当残差扫描顺序和上述的目标预测顺序相同时,即解码端以目标预测顺序预测待解码块中的像素时,解码端还按照与该目标预测顺序相同的残差扫描顺序从待解码块的码流中解析得到待解码块的第一残差块。这样可以提高解码端的解码效率。
其中,解码端采用可变码长的解码方式解析待解码块的码流,得到待解码块对应的残差块中每个值的CL的过程,可以参考下文S601的描述。解码端基于CL确定第一残差块的说明,与S602中描述的得到待解码块的残差块的实现方式相似,不再赘述。
S25、解码端通过解析待解码块的码流得到待解码块中每个像素的反量化参数。
其中,反量化参数的详细说明可以参考下文S401中的描述,不再赘述。
S26、解码端基于待解码块中每个像素的反量化参数所指示的QP和反量化预设数组,对第一残差块进行反量化,得到第二残差块。
其中,S26的详细说明可以参考下文S402的描述,不再赘述。
需要说明,本申请实施例对S23和S24-S26的执行顺序不作限定,例如可以同时执行S23和S24-S26。
S27(可选的)、解码端对第二残差块进行反变换处理,以得到经反变换的第二残差块。
应理解,当图像编码过程包括上述的S14时,则解码端执行S27。
S28、解码端基于经反变换的第二残差块和待解码块中每个像素的预测值,对待解码块进行重建,得到重建块。
可以理解,当解码端未执行S27时,则解码端直接基于第二残差块和待解码块中每个像素的预测值,对待解码块进行重建,得到重建块。
需要说明,S21-S28的图像解码方法与S11-S16的图像编码方法对应。在S21-S28所述的图像解码方法中,解码端采用的用于预测待解码块中像素的目标预测模式具有较高的预测效率,解码端采用的用于反量化残差块或残差系数块的反量化方法能够减少解码端的乘法运算,即提高了反量化效率,并且,解码端所采用的解码方法能够减少用于编码残差块或残差系数块的比特数,因此,通过本申请实施例提供的方法,能够大大提高解码端的解码效率。
实施例一
如图6c所示,为本申请实施例提供的另一种图像编码方法的流程示意图。图6c所示的方法包括如下步骤:
S101、编码端确定待编码块的目标预测模式,以及确定与目标预测模式对应的目标预测顺序。
具体的,编码端可以采用不同的预测模式分别对待编码块进行预测,并基于不同预测模式下的预测值预测待编码块的编码性能,从而确定目标预测模式。
例如,编码端可以采用不同的预测模式分别对待编码块进行预测,并在基于不同预测模式预测得到预测块后,编码端执行上文描述的步骤13-步骤16以得到不同预测模式下的待编码块的码流。编码端通过确定在不同预测模式下得到待编码块的码流的时间,并将用时最短的预测模式确定为目标预测模式。换句话说,编码端将编码效率最高的预测模式确定为目标预测模式。
除现有的预测模式,本申请实施例还提供了多种预测模式。在本申请实施例提供的预测模式中,编码端可以通过预设顺序依次预测待编码块中的像素。编码端通过预设顺序依次预测待编码块中的像素,是指编码端以预设轨迹依次预测待编码块中的像素。在这一过程中,编码端预测待编码块中的任一个像素时,用于预测该任一像素的像素均已完成重建。其中,本申请实施例所提供的预测模式的详细说明可以参考下文,这里不作赘述。
当编码端基于不同预测模式预测待编码块的编码性能,确定目标预测模式后,即可确定以目标预测模式预测待编码块中像素时的目标预测顺序。
其中,目标预测顺序可以是针对待编码块中的像素的预测顺序,也可以是针对待编码块中子块的预测顺序。
S102、编码端按照上述的目标预测模式,以上述目标预测顺序预测待编码块中的每个像素。
可选的,目标预测模式可以用于指示以目标预测顺序逐点预测编码块中的每个像素。例如,当目标预测模式是下文所述的第一种-第五种预测模式时,目标预测顺序即用于指示以目标预测顺序指示的轨迹方向逐点预测编码块中的每个像素。
这种情况下,编码端即按照目标预测顺序指示的轨迹方向,沿该方向依次逐点预测待编码块中的每个像素,以得到每个像素的预测值。
可选的,目标预测模式还可以用于指示以待编码块中具有预设大小的子块为单位依次预测待编码块中每个子块的像素。例如,当目标预测模式是下文所述的第六种预测模式时,目标预测模式即用于指示以待编码块中具有预设大小的子块为单位,依次预测待编码块中每个子块的像素。
这种情况下,目标预测模式包括待编码块中每个子块的预测模式。这样,编码端即可按照目标预测模式,沿目标预测顺序指示的方向依次预测待编码块中每个子块的像素。并且,在编码端对待编码块的一个子块进行预测时,编码端可以基于该子块周围已重建的像素,并行的对该子块内的每个像素进行预测。
S103、编码端基于待编码块中每个像素的预测值确定待编码块的残差块。
编码端可以根据待编码块中每个像素的预测值和待编码块的原始像素值,确定待编码块的残差块。
例如,编码端可以通过图2所示的残差计算单元202,对待编码块中每个像素的预测值和待编码块的原始像素值做差值运算,从而得到待编码块的残差块。
S104、编码端以残差扫描顺序编码上述残差块,以得到待编码块的码流。
其中,残差扫描顺序和上述的目标预测模式对应。可选的,残差扫描顺序和上述的目标预测顺序可以相同。
例如,当待编码块是16×2大小的图像块时,如果目标预测模式是下文表1所示的预测模式,则残差扫描顺序可以是图7a所示预设轨迹指示的顺序。如果目标预测模式是下文表2所示的预测模式,则残差扫描顺序可以是图7b所示预设轨迹指示的顺序。如果目标预测模式是下文表3-1所示的预测模式,则残差扫描顺序可以是图7c-1所示预设轨迹指示的顺序。如果目标预测模式是下文表4-1所示的预测模式,则残差扫描顺序可以是图7d-1所示预设轨迹指示的顺序。
再例如,当待编码块是8×2大小的图像块时,如果目标预测模式是下文表3-2所示的预测模式,则残差扫描顺序可以是图7c-2所示预设轨迹指示的顺序。如果目标预测模式是下文表4-2所示的预测模式,则残差扫描顺序可以是图7d-2所示预设轨迹指示的顺序。
又例如,当待编码块是8×1大小的图像块时,如果目标预测模式是下文表4-3所示的预测模式,则残差扫描顺序可以是图7d-3所示预设轨迹指示的顺序。如果目标预测模式是下文表5所示的预测模式,则残差扫描顺序可以是图7e所示预设轨迹指示的顺序。
又例如,当待编码块是16×2大小的图像块时,如果目标预测模式是图7f所示的预测模式,则残差扫描顺序可以是图7g所示预设轨迹指示的顺序。
可选的,编码端可以先对待编码块的残差块进行变换处理,得到待编码块的残差系数块。编码端还可以对该残差系数块进行量化,得到量化后的残差系数块。然后,编码端以上述的目标预测顺序对量化后的残差系数块进行编码,从而得到待编码块经编码后的码流。
例如,编码端可以先通过图2所示的残差变换单元203对待编码块的残差块进行变换处理,得到待编码块的残差系数块。编码端还可以通过图2所示的量化单元204对该残差系数块进行量化,得到量化后的残差系数块。然后,编码端通过图2所示的编码单元205以上述的目标预测顺序,对量化后的残差系数块进行编码,从而得到待编码块经编码的码流。
可选的,编码端可以直接对待编码块的残差块进行量化处理,得到量化后的残差块。然后,编码端以上述的目标预测顺序对量化后的残差块进行编码,从而得到待编码块经编码的码流。
例如,编码端可以直接通过图2所示的量化单元204对待编码块的残差块进行量化处理,得到量化后的残差块。然后,编码端通过图2所示的编码单元205以上述的目标预测顺序,对量化后的残差块进行编码,从而得到待编码块经编码的码流。
可选的,编码端还可以将上述的目标预测模式作为待编码块的语义元素,或者将上述的目标预测模式和对应的目标预测顺序作为待编码块的语义元素,并对该语义元素进行编码。编码端还可以将编码后的语义元素数据添加至待编码块经编码的码流中。
需要说明,本申请实施例对待编码块的残差块或残差系数块进行量化的方式不作具体限定,例如可以采用下述实施例二方案中描述的量化方式对待编码块的残差块或残差系数块进行量化,当然不限于此。
还需要说明,本申请实施例对编码端编码量化后的残差块或残差系数、以及编码相关的语义元素的具体编码方式不作具体限定,例如可以采用下述实施例三中描述的可变长的编码方式进行编码,当然不限于此。
下面对本申请实施例提供的预测模式进行详细说明。需要预先说明的是,在对一个待编码块进行预测的预测模式中,可以包括上文所述的T预测方式、TB预测方式、L预测方式以及RL预测方式中至少一种预测方式。其中,待编码块中的一个像素可以通过T预测方式、TB预测方式、L预测方式或RL预测方式中的任一种预测方式进行预测。
需要说明的是,下述的任一种预测模式可以应用于编码端对图像编码的流程中,也可以应用于解码端对图像数据进行解码的流程中,本申请实施例对此不作限定。
在第一种预测模式中,以待编码块的大小为16×2为例,如表1所示,表1示出了一种对大小为16×2的待编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是图7a所示的预设轨迹指示的顺序。也就是说,编码端以图7a所示的预设轨迹依次逐个预测待编码块中的像素时,会以表1所示预测模式中的具体预测方式对待编码块中的每个像素进行预测。
表1
可以理解,表1中的每一格显示的预测方式,用于预测图7a所示待编码块中对应位置像素的预测值。例如表1中的第1行第1格显示的T预测方式,用于预测图7a所示待编码块中位于第1行第1格的像素1-1的预测值。再例如,表1中的第1行第2格显示的RL预测方式,用于预测图7a所示待编码块中位于第1行第2格的像素1-2的预测值。又例如,表1中的第2行第1格显示的T预测方式,用于预测图7a所示待编码块中位于第2行第1格的像素2-1的预测值。又例如,表1中的第2行第15格显示的T预测方式,用于预测图7a所示待编码块中位于第2行第15格的像素1-15的预测值。
还应理解,图7a所示的两个大小为16×2的块表示待编码图像中的同一个图像块(如待编码块)。图7a中通过两个块表征待编码块,仅为了清楚的展示编码端对待编码块中的像素依次进行预测时的预设轨迹,该预设轨迹即为图7a中带箭头的黑色实线所示的轨迹。其中,原点为该预设轨迹的起点,黑色虚线两端的像素点为预设轨迹上相邻的两个像素点。
这样,示例性的,在编码端按照图7a所示的预设轨迹对待编码块进行预测时,编码端首先对图7a所示的像素1-1以表1所示的T预测方式进行预测。即,像素1-1的预测值=PT1-1,PT1-1为像素1-1上侧像素(例如待编码块上侧图像块中的像素。应理解,待编码块的上侧图像块和左侧图像块通常早于待编码块被执行编码流程,因此待编码块的上侧图像块和左侧图像块中的像素值已完成重建)的重建值。应理解,编码端得到像素1-1的预测值后,即可基于像素1-1的预测值确定像素1-1的重建值(例如在编码端得到像素1-1的预测值后,通过执行上文所述的步骤13-步骤16,并通过执行反量化和反变换重建像素的残差值,进而基于预测值和重建的残差值得到像素1-1的重建值)。
又示例性的,按照图7a所示的预设轨迹,编码端对像素1-1完成预测后,对图7a所示的像素2-1以表1所示的T预测方式对其进行预测。即,像素2-1的预测值=PT2-1,PT2-1为像素2-1上侧像素(例如已完成重建的像素1-1)的重建值。
再示例性的,按照图7a所示的预设轨迹,编码端对像素2-15完成预测后,对图7a所示的像素1-2以表1所示的RL预测方式进行预测。即,像素1-2的预测值=(PR1-2+PL1-2+1)>>1,其中,(PR1-2+PL1-2+1)>>1表示对(PR1-2+PL1-2+1)的二进制值右移1位后得到的值,数学上的效果相当于(PR1-2+PL1-2+1)除以2^1的值。PR1-2为像素1-2右侧像素(例如已完成重建的像素1-3,这是由于像素1-3的预测顺序在像素1-2之前)的重建值,PL1-2为像素1-2左侧像素(例如已完成重建的像素1-1)的重建值。
表2
这样,示例性的,在编码端按照图7b所示的预设轨迹对待编码块进行预测时,编码端首先对图7b所示的像素2-1以表2所示的L预测方式进行预测。即,像素2-1的预测值=PL2-1,PL2-1为像素2-1左侧像素(例如待编码块左侧图像块中的像素)的重建值。
再示例性的,按照图7b所示的预设轨迹,编码端对像素2-16完成预测后,对图7b所示的像素1-1以表2所示的TB预测方式进行预测。即,像素1-1的预测值=(PT1-1+PB1-1+1)>>1。PT1-1为像素1-1上侧像素(例如待编码块上侧图像块中的像素)的重建值,PB1-1为像素1-1下侧像素(例如已完成重建的像素2-1,这是由于像素2-1的预测顺序在像素1-1之前)的重建值。
在第三种预测模式中,以待编码块的大小为16×2为例,如表3-1所示,表3-1示出了又一种对大小为16×2的编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是以图7c-1所示的预设轨迹指示的顺序。也就是说,编码端以图7c-1所示的预设轨迹依次逐个预测待编码块中的像素时,会以表3-1所示预测模式中的具体预测方式对待编码块中的每个像素进行预测。其中,以表3-1所示预测方式预测待编码块中每个像素的详细说明,可以参考上文中对以表1所示预测方式预测待编码块中每个像素的相关描述,图7c-1中显示的预设轨迹的说明也可以参考图7a中对预设轨迹的描述,不再赘述。
表3-1
在第三种预测模式中,以待编码块的大小为8×2为例,如表3-2所示,表3-2示出了一种对大小为8×2的编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是以图7c-2所示的预设轨迹指示的顺序。也就是说,编码端以图7c-2所示的预设轨迹依次逐个预测待编码块中的像素时,会以表3-2所示预测模式中的具体预测方式对待编码块中的每个像素进行预测。其中,以表3-2所示预测方式预测待编码块中每个像素的详细说明,可以参考上文中对以表1所示预测方式预测待编码块中每个像素的相关描述,图7c-2中显示的预设轨迹的说明也可以参考图7a中对预设轨迹的描述,不再赘述。
表3-2
在第四种预测模式中,以待编码块的大小为16×2为例,如表4-1所示,表4-1示出了又一种对大小为16×2的编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是以图7d-1所示的预设轨迹指示的顺序。也就是说,编码端以图7d-1所示的预设轨迹依次逐个预测待编码块中的像素时,会以表4-1所示预测模式中的具体预测方式对待编码块中的每个像素进行预测。其中,以表4-1所示预测方式预测待编码块中每个像素的详细说明,可以参考上文中对以表1所示预测方式预测待编码块中每个像素的相关描述,图7d-1中显示的预设轨迹的说明也可以参考图7a中对预设轨迹的描述,不再赘述。
表4-1
在第四种预测模式中,以待编码块的大小为8×2为例,如表4-2所示,表4-2示出了另一种对大小为8×2的编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是以图7d-2所示的预设轨迹指示的顺序。也就是说,编码端以图7d-2所示的预设轨迹依次逐个预测待编码块中的像素时,会以表4-2所示的预测模式中的具体预测方式对待编码块中的每个像素进行预测。其中,以表4-2所示预测方式预测待编码块中每个像素的详细说明,可以参考上文中对以表1所示预测方式预测待编码块中每个像素的相关描述,图7d-2中显示的预设轨迹的说明也可以参考图7a中对预设轨迹的描述,不再赘述。
表4-2
在第四种预测模式中,以待编码块的大小为8×1为例,如表4-3所示,表4-3示出了又一种对大小为8×1的编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是以图7d-3所示的预设轨迹指示的顺序。也就是说,编码端以图7d-3所示的预设轨迹依次逐个预测待编码块中的像素时,会以表4-3所示预测模式中的具体预测方式对待编码块中的每个像素进行预测。其中,以表4-3所示预测方式预测待编码块中每个像素的详细说明,可以参考上文中对以表1所示预测方式预测待编码块中每个像素的相关描述,图7d-3中显示的预设轨迹的说明也可以参考图7a中对预设轨迹的描述,不再赘述。
表4-3
在第五种预测模式中,以待编码块的大小为8×1为例,如表5所示,表5示出了一种对大小为8×1的编码块中每个像素进行预测的预测模式。在该预测模式下,编码端预测待编码块中像素的预测顺序可以是以图7e所示的预设轨迹指示的顺序。也就是说,编码端以图7e所示的预设轨迹依次逐个预测待编码块中的像素时,会以表5所示预测模式中的具体预测方式对待编码块中的每个像素进行预测。其中,以表5所示预测方式预测待编码块中每个像素的详细说明,可以参考上文中对以表1所示预测方式预测待编码块中每个像素的相关描述,图7e中显示的预设轨迹的说明也可以参考图7a中对预设轨迹的描述,不再赘述。
表5
可以看出,上述的每种预测模式对应不同的预测顺序。例如,当预测模式为第一预测模式时,预测顺序为第一预测顺序,当预测模式为第二预测模式时,预测顺序为第二预测顺序,其中,第一预测顺序和第二预测顺序不同。第一预测顺序、第二预测顺序可以为上文中实例的第一种预测模式到第六种预测模式对应的预测顺序中的一种。
还可以看出,在相同的预测模式下,当待编码块的尺寸大小不同时,对应的预测顺序也是不同的。例如,对于尺寸为第一尺寸的待解码块,在预测模式1下采用第三预测顺序对待解码块进行预测,对于尺寸为第二尺寸的待解码块,在预测模式1下采用第四预测顺序对待解码块进行预测,其中,第三预测顺序和第四预测顺序不同。例如,对于16×2、8×2和8×1三种不同尺寸的待解码块,在上述第四种预测模式下,可以分别采用如图7d-1、图7d-2、图7d-3三种不同的预测顺序。
其中,第一预测顺序、第二预测顺序、第三预测顺序和第四预测顺序都可以表征针对待编码块中的像素的预测轨迹,也可以表征针对待编码块中子块的预测轨迹。
在第六种预测模式中,该预测模式用于指示编码端以待编码块中具有预设大小的子块为单位,沿与该预测模式对应的预测顺序所指示的方向依次对待编码块中的每个子块进行预测。并且,在该预测模式下,编码端在对待编码块的任一个子块进行预测时,可以并行基于该任一个子块周围已重建的像素对该任一个子块内的像素进行预测。
在这种预测模式下,编码端可以将待编码块划分为具有预设大小的多个不重合的子块,并将划分得到的子块的排列方向作为该预测模式对应的预测顺序所指示的方向。
如图7f中的(a)所示,假设具有预设大小的子块为2×2大小的子块,则对于16×2的待编码块,待编码块可以划分如图7f中的(a)中黑色粗线框所示的8个不重合的2×2子块。并且,这8个子块的排列方向,即作为该预测模式对应的预测顺序所指示的方向,例如图7f中的(a)所示箭头指向的方向。
如图7f中的(b)所示,假设具有预设大小的子块为4×2大小的子块,则对于16×2的待编码块,待编码块可以划分如图7f中的(b)中黑色粗线框所示的4个不重合的4×2子块。并且,这4个子块的排列方向,即作为该预测模式对应的预测顺序所指示的方向,例如图7f中的(b)所示箭头指向的方向。
如图7f中的(c)所示,假设具有预设大小的子块为8×2大小的子块,则对于16×2的待编码块,待编码块可以划分如图7f中的(c)中黑色粗线框所示的2个不重合的8×2子块。并且,这2个子块的排列方向,即作为该预测模式对应的预测顺序所指示的方向,例如图7f中的(c)所示箭头指向的方向。
具体的,对于待编码块中的任一子块,第六种预测模式指示基于该任一子块的上侧(相邻或不相邻)、左侧(相邻或不相邻)以及斜上侧(相邻或不相邻)的像素的重建值对该任一子块内的像素进行预测。需要说明,在该预测模式下,一个子块内的一个像素不会依赖相同子块内的其他像素进行预测。为简单描述,本申请实施例以该预测模式指示基于该任一子块的上侧、左侧以及斜上侧相邻的像素的重建值对该任一子块内的像素进行预测。
以第六种预测模式用于指示以图7f中的(a)所示的子块为单位,沿与该预测模式对应的预测顺序所指示方向依次对待编码块中的每个子块进行预测为例,如图7f中的(a)所示,假设上述任一个子块为子块a,且图7f中的(a)所示的灰色方格为子块a上侧、左侧以及斜上侧相邻的像素。以子块a中的像素分别为Y0、Y1、Y2以及Y3,子块a上侧的像素包括T0和T1,子块a左侧的像素包括L0和L1,子块a斜上侧的像素为LT为例:
则在一种可能的实现方式中,第六种预测模式具体可以指示:子块a中Y0的预测值基于T0、L0以及LT得到,子块a中Y1的预测值基于T1、L0以及LT得到,子块a中Y2的预测值基于T0、L1以及LT得到,子块a中Y3的预测值基于T1、L1以及LT得到。
可选的,第六种预测模式具体指示的预测方式1可以是:基于待编码块中任一子块内任一像素的上侧像素、左侧像素以及斜上方像素的水平梯度或垂直梯度,确定该任一像素的预测值。
参考图7f中的(a),以子块a中的像素Y0为例,当编码端确定Y0的上侧像素T0、左侧像素L0以及斜上侧像素LT的重建值满足条件1,则将Y0左侧像素L0的重建值确定为Y0预测值。其中,条件1用于表征Y0周围像素的水平梯度最小。条件1具体为:|T0重建值-LT重建值|≤|L0重建值-LT重建值|,且|T0重建值-LT重建值|≤|L0重建值+T0重建值-2*LT重建值|。当编码端确定Y0的上侧像素T0、左侧像素L0以及斜上侧像素LT的重建值不满足条件1但满足条件2,则将Y0上侧像素T0的重建值确定为Y0预测值。其中,条件2用于表征Y0周围像素的垂直梯度最小。条件2具体为:|L0重建值-LT重建值|≤|L0重建值+T0重建值-2*LT重建值|。当编码端确定Y0的上侧像素T0、左侧像素L0以及斜上侧像素LT的重建值不满足上述条件1和条件2时,则将Y0斜上侧像素LT的重建值确定为Y0预测值。
可选的,第六种预测模式具体指示的预测方式2可以是:基于待编码块中任一子块内任一像素的上侧像素、左侧像素以及斜上方像素的重建值的均值,确定该任一像素的预测值。
继续参考图7f中的(a),以子块a中的像素Y0为例,Y0的预测值可以是:(L0重建值+T0重建值+2*LT重建值)>>2。其中,(L0重建值+T0重建值+2*LT重建值)>>2表示对(L0重建值+T0重建值+2*LT重建值)的二进制值右移2位后得到的值。
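预测方式1与预测方式2可以分别用如下示意性的Python代码表达(函数名均为本说明引入的示例假设;predict_grad按正文条件1、条件2的判断顺序实现,predict_mean对应上述右移2位的加权均值):

```python
def predict_grad(T0, L0, LT):
    # 预测方式1:按 Y0 周围像素的水平/垂直梯度选择参考像素
    if abs(T0 - LT) <= abs(L0 - LT) and abs(T0 - LT) <= abs(L0 + T0 - 2 * LT):
        return L0   # 条件1成立:水平梯度最小,取左侧像素的重建值
    if abs(T0 - LT) <= abs(L0 + T0 - 2 * LT):
        return T0   # 条件2成立:垂直梯度最小,取上侧像素的重建值
    return LT       # 条件1、条件2均不成立:取斜上侧像素的重建值

def predict_mean(T0, L0, LT):
    # 预测方式2:(L0重建值 + T0重建值 + 2*LT重建值) >> 2,即加权均值
    return (L0 + T0 + 2 * LT) >> 2
```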
另一种可能的实现方式,第六种预测模式具体可以指示:将位于待编码块中任一子块内任一像素预测方向的上侧像素、左侧像素或斜上方像素的重建值,确定为该任一像素的预测值。其中,预测方向可以是该任一像素的左斜向45度方向,或者是右斜向45度方向,本申请实施例对此不作限定。
以第六种预测模式用于指示以图7f中的(b)所示的子块为单位,沿与该预测模式对应的预测顺序依次对待编码块中的每个子块进行预测为例,参考图7f中的(b),假设上述任一个子块为子块b,且图7f中的(b)所示的灰色方格为子块b上侧、左侧以及斜上侧相邻的像素。以子块b中的像素分别为Y0、Y1、Y2、Y3、Y4、Y5、Y6以及Y7,子块b上侧的像素包括T0、T1、T2、T3、T4以及T5,子块b左侧的像素包括L0和L1,子块b斜上侧的像素为LT:
则当预测方向为待预测像素的左斜向45度方向时,第六种预测模式具体可以指示:将位于子块b中Y0左斜向45度方向上的LT的重建值确定为Y0的预测值,将位于子块b中Y1左斜向45度方向上的T0的重建值确定为Y1的预测值,将位于子块b中Y2左斜向45度方向上的T1的重建值确定为Y2的预测值,将位于子块b中Y3左斜向45度方向上的T2的重建值确定为Y3的预测值,将位于子块b中Y4左斜向45度方向上的L0的重建值确定为Y4的预测值,将位于子块b中Y5左斜向45度方向上的LT的重建值确定为Y5的预测值,将位于子块b中Y6左斜向45度方向上的T0的重建值确定为Y6的预测值,以及,将位于子块b中Y7左斜向45度方向上的T1的重建值确定为Y7的预测值。
当预测方向为待预测像素的右斜向45度方向时,第六种预测模式具体可以指示:将位于子块b中Y0右斜向45度方向上的T1的重建值确定为Y0的预测值,将位于子块b中Y1右斜向45度方向上的T2的重建值确定为Y1的预测值,将位于子块b中Y2右斜向45度方向上的T3的重建值确定为Y2的预测值,将位于子块b中Y3右斜向45度方向上的T4的重建值确定为Y3的预测值,将位于子块b中Y4右斜向45度方向上的T2的重建值确定为Y4的预测值,将位于子块b中Y5右斜向45度方向上的T3的重建值确定为Y5的预测值,将位于子块b中Y6右斜向45度方向上的T4的重建值确定为Y6的预测值,以及,将位于子块b中Y7右斜向45度方向上的T5的重建值确定为Y7的预测值。
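上述两段逐像素的文字描述,本质上是两张固定的“待预测像素→参考像素”映射表,可以用如下示意性的Python代码表达(字典名LEFT_45、RIGHT_45与函数名predict_subblock_45均为本说明引入的示例假设):

```python
# 4×2 子块(图7f中的(b))内 Y0~Y7 的 45 度方向预测映射,
# 映射关系与正文的逐像素描述一致,参考像素以名称表示。
LEFT_45 = {   # 左斜向45度:待预测像素 -> 参考像素
    "Y0": "LT", "Y1": "T0", "Y2": "T1", "Y3": "T2",
    "Y4": "L0", "Y5": "LT", "Y6": "T0", "Y7": "T1",
}
RIGHT_45 = {  # 右斜向45度:待预测像素 -> 参考像素
    "Y0": "T1", "Y1": "T2", "Y2": "T3", "Y3": "T4",
    "Y4": "T2", "Y5": "T3", "Y6": "T4", "Y7": "T5",
}

def predict_subblock_45(recon, direction):
    # recon: 参考像素名称到其重建值的映射;direction: LEFT_45 或 RIGHT_45
    return {pix: recon[ref] for pix, ref in direction.items()}
```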
可选的,在目标预测模式包括上述待解码块中每个子块的预测模式的情况下,对于上述待解码块中第一子块,第一子块中包括第一像素和第二像素,则第一子块的预测模式用于根据第一子块周围已重建的像素并行的对第一像素和第二像素进行预测。
其中,第一子块为当前进行预测的待解码块中的子块。例如,如图7f中的(a)所示,待解码块中每个子块的大小为2×2,那么,待解码块可以分为8个子块,当开始对图7f中的(a)所示的待解码块进行预测时,第一子块为从左到右的第一个子块,当第一个子块中的像素完成重建后,可以开始对从左到右的第二个子块中的像素进行重建,这时第一子块为上述第二个子块。第一子块中包括的第一像素和第二像素可以为第一子块中的互不重叠的两组像素,第一像素或第二像素可以包括第一子块中的一个或多个像素。例如,以图7f中的(a)中的子块a为例,一种情况下,子块a中的第一像素可以包括Y0,子块a中的第二像素可以包括Y1、Y2和Y3。另一种情况下,子块a中的第一像素可以包括Y0和Y1,子块a中的第二像素可以包括Y2和Y3。再一种情况下,子块a中的第一像素可以包括Y1,子块a中的第二像素可以包括Y2。
需要说明,当编码端采用第六种预测模式对待编码块中的像素进行预测时,则在编码端对待编码块对应的残差块或残差系数块进行编码时,可以以图7g所示轨迹指示的顺序进行编码。
至此,基于上述S101-S104所述的编码方法,编码端在以目标预测模式对待编码块中的像素进行预测时,是以该目标预测模式对应的目标预测顺序对待编码块中的像素进行预测的。在这一过程中,编码端预测待编码块中的任一个像素时,用于预测该像素的像素均已完成重建。因此,本申请实施例提供的编码方法按照目标预测模式,以目标预测顺序预测待编码块中的每个像素,可以并行预测待编码块中的部分像素,提高了预测待编码块中像素的效率;并且,编码端在基于预测值和残差值进行重建时,无需对上述残差值进行缓存,因此本申请实施例提供的编码方法不仅可以节约用于缓存残差值的缓存空间,还能提高编码效率。
此外,当目标预测模式指示以子块为单位依次对待编码块中的每个子块进行预测时,由于该预测模式下,编码端在对一个子块进行预测时,可以并行的基于该子块周围已重建的像素对该子块内的多个像素进行预测,即这种预测模式能够进一步提高编码端预测待编码块的效率。进而,当编码端基于预测值和残差值进行重建时,无需对残差值进行缓存,因此本申请实施例提供的编码方法进一步节约了缓存空间,并提高了编码效率。
如图8所示,为本申请实施例提供的另一种图像解码方法的流程示意图。图8所示的方法包括如下步骤:
S201、解码端解析待解码块的码流,以确定预测待解码块中像素的目标预测模式。
其中,待解码块的码流可以是解码端从编码端接收到的码流,或者是从其他设备获取的码流,例如从存储设备获取的码流,本申请实施例对此不作限定。
其中,目标预测模式用于对待解码块中的像素进行预测,以得到待解码块中像素的预测值。可以理解,这里的目标预测模式为编码端在编码时用于预测图像块中像素的预测模式。
具体的,解码端可以通过与编码端对应的解码方式,对待解码块的码流进行解析,从而得到目标预测模式。
S202、解码端基于目标预测模式,确定与目标预测模式对应的目标预测顺序。
其中,预测模式、与预测模式对应的预测顺序的描述,可以参考上文中的描述,这里不再赘述。
可选的,解码端可以预置有多个预测模式及其对应的预测顺序的对应关系。这样,当解码端确定预测模式为目标预测模式后,即可在预置的对应关系中确定出与目标预测模式对应的目标预测顺序。
S203、解码端按照目标预测模式,以目标预测顺序预测待解码块中的每个像素,得到该每个像素的预测值。
这里,基于预测模式,以与预测模式对应的预测顺序预测待解码块中每个像素的说明,可以参考上文中对预测模式的详细描述,这里不再赘述。
S204、解码端基于待解码块中每个像素的预测值对该每个像素进行重建,从而得到待解码块的重建块。
可选的,解码端可以先通过解析码流,得到待解码块的残差块。然后,解码端对该残差块进行反量化,从而得到重建的待解码块的残差块。这样,解码端可以基于上述得到的待解码块中像素的预测值和重建的残差块中的残差值,得到待解码块的重建块。
例如,解码端可以先通过图5所示的码流解析单元301解析待解码块的码流,得到待解码块的残差块。然后,解码端可以通过图5所示的反量化单元302对该残差块进行反量化,从而得到重建的待解码块的残差块。这样,解码端可以通过重构单元305,并基于上述得到的待解码块中像素的预测值和重建的残差块中的残差值,得到待解码块的重建块。
可选的,解码端可以先通过解析码流,得到待解码块的残差系数块。然后,解码端对该残差系数块进行反量化,从而得到反量化的残差系数块。接着,解码端对反量化后的残差系数块进行反变换,得到重建的待解码块的残差块。这样,解码端可以基于上述得到的待解码块中像素的预测值和重建的残差块中的残差值,得到待解码块的重建块。
例如,解码端可以先通过图5所示的码流解析单元301解析待解码块的码流,得到待解码块的残差系数块。然后,解码端可以通过图5所示的反量化单元302对该残差系数块进行反量化,从而得到反量化的残差系数块。接着,解码端通过图5所示的残差逆变换单元303对反量化后的残差系数块进行反变换,得到重建的待解码块的残差块。这样,解码端可以通过重构单元305,并基于上述得到的待解码块中像素的预测值和重建的残差块中的残差值,得到待解码块的重建块。
其中,解码端可以通过与编码端对应的解码方式,对待解码块的码流进行解析,从而得到待解码块的残差块或残差系数块。例如,解码端解码待解码块码流的残差扫描顺序为上述预测待解码块中像素的目标预测顺序。其中,目标预测顺序与预测待解码块的目标预测模式对应。残差扫描顺序的相关说明可以参考上文,这里不做赘述。
需要说明,图8所示的图像解码方法与图6c所示的图像编码方法对应,因此,该图像解码方法有助于提高预测待解码块的效率,在基于预测值和残差值重构待解码块以得到待解码块的重建值时,能够节约用于缓存残差值的缓存空间,并能提高解码效率。
需要说明,本申请实施例对待解码块的残差块或残差系数块进行反量化的方式不作具体限定,例如可以采用下述实施例二方案中描述的反量化方式对待解码块的残差块或残差系数块进行反量化,当然不限于此。
还需要说明,本申请实施例对解码端解码待解码块的码流的方式不作具体限定,例如可以采用下述实施例三中描述的可变长的解码方式进行解码,当然不限于此。
实施例二
如图9a所示,为本申请实施例提供的又一种图像编码方法的流程示意图。图9a所示的方法包括如下步骤:
S301、编码端确定待编码块的第二残差块和待编码块中每个像素的量化参数QP。
这里,第二残差块可以是待编码块的原始残差值块,或者,第二残差块也可以是该原始残差值块经变换后得到的残差系数块。
其中,待编码块的原始残差值块即为编码端基于待编码块的原始像素值和待编码块的预测块得到的残差块。可以理解,编码端对待编码块中的像素进行预测以得到预测块的过程,可以基于实施例一中所述方法实现,当然也可以基于现有技术中任意能够得到待编码块预测块的方法得到,本申请实施例对此不作限定。残差系数块为编码端对原始残差值块进行变换处理后得到的,本申请实施例对编码端变换处理原始残差值块的过程不作具体限定。
为简单描述,本申请实施例在下文中以第二残差块是待编码块的原始残差值块为例进行说明。
另外,编码端在获得待编码块的第二残差块之前或之后,还可以确定待编码块中每个像素的量化参数QP。例如,编码端可以读取预设的量化参数QP。这里,本申请实施例对编码端确定待编码块中每个像素的量化参数QP的具体过程不作限定。
S302、编码端基于待编码块中每个像素的QP和量化预设数组,对第二残差块进行量化,得到第一残差块。
其中,量化预设数组用于对第二残差块中的值进行量化处理。
第一种可能的实现方式中,上述量化预设数组包括放大参数数组和位移参数数组,且放大参数数组和位移参数数组包括相同数量个数值。若以amp表示放大参数数组中的放大参数,以shift表示位移参数数组中的位移参数,则编码端可以基于公式(1)对第二残差块中每个值进行量化处理:
公式(1)量化后的残差值=(量化前的残差值×amp[QP])>>shift[QP]
其中,amp[QP]表示放大参数数组中第QP个放大参数,shift[QP]表示位移参数数组中第QP个位移参数,(量化前的残差值×amp[QP])>>shift[QP]表示对(量化前的残差值×amp[QP])的二进制值右移shift[QP]位。
在本申请实施例中,对于放大参数数组中的第i个放大参数的值amp[i]和位移参数数组中的第i个位移参数的值shift[i]而言,2^shift[i]与amp[i]的商构成的反量化预设数组具有以下规律:该反量化预设数组中的第1至第n个数中相邻的两个数之间的间隔为1,该反量化预设数组中的第n+1至第n+m个数中相邻的两个数之间的间隔为2,该反量化预设数组中的第n+k*m+1至第n+k*m+m个数中相邻两个数之间的数值间隔为2^(k+1)。其中,n、m为大于1的整数,i、k均为正整数。
其中,反量化预设数组用于实现对量化后得到的第一残差块进行反量化处理,并且,反量化预设数组实现的反量化运算与量化预设数组实现的量化运算互逆。因此,基于具有上述规律的反量化预设数组,可以反向确定出用于实现量化运算的放大参数数组和位移参数数组。
作为示例,假设以mult表示反量化预设数组中的放大参数,且反量化预设数组包括42个放大参数,以上述的n取值12,m取值6为例,则基于具有上述规律的反量化预设数组可以为:
mult[42]={1,2,3,4,5,6,
7,8,9,10,11,12,
14,16,18,20,22,24,
28,32,36,40,44,48,
56,64,72,80,88,96,
112,128,144,160,176,192,
224,256,288,320,352,384}
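按上述规律(前n个数间隔为1,随后每m个一组,第k组内相邻数间隔为2^(k+1)),可以用如下示意性的Python代码生成该反量化预设数组,并与上文列出的mult数组逐项对照(函数名build_mult为本说明引入的示例假设):

```python
def build_mult(n=12, m=6, total=42):
    # 前 n 个数:1,2,...,n,相邻间隔为 1
    mult = list(range(1, n + 1))
    # 随后每 m 个数为一组,第 k 组(k>=1)内相邻间隔为 2**(k+1)
    step, val = 2, n
    while len(mult) < total:
        for _ in range(m):
            val += step
            mult.append(val)
            if len(mult) == total:
                break
        step *= 2
    return mult
```

以n=12、m=6、共42个数调用时,生成结果与上文列出的mult数组一致。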
则放大参数数组可以为:
amp[42]={1,2048,2731,2048,3277,2731,
2341,2048,3641,3277,2979,2731,
2341,2048,3641,3277,2979,2731,
2341,2048,3641,3277,2979,2731,
2341,2048,3641,3277,2979,2731,
2341,2048,3641,3277,2979,2731,
2341,2048,3641,3277,2979,2731}
位移参数数组可以为:
shift[42]={0,12,13,13,14,14,
14,14,15,15,15,15,
15,15,16,16,16,16,
16,16,17,17,17,17,
17,17,18,18,18,18,
18,18,19,19,19,19,
19,19,20,20,20,20}
这样,编码端可以基于确定的待编码块中每个像素的QP,在上述放大参数数组中确定该每个像素的放大参数,以及在上述的位移参数数组确定该每个像素的位移参数。然后,编码端基于待编码块中每个像素对应的放大参数和位移参数,对第二残差块进行量化运算,从而得到第一残差块。
例如,对于待编码块中任一像素而言,编码端可以基于确定的该任一像素的QP,在上述放大参数数组中查找,并将放大参数数组中的第QP个值确定为该任一像素的放大参数,以及在上述的位移参数数组中查找,并将位移参数数组中的第QP个值确定为该任一像素的位移参数。然后,编码端基于该任一像素对应的放大参数和位移参数,并通过上述的公式(1)对第二残差块中与该任一像素对应的残差值进行量化运算,从而得到该任一像素对应的量化后的残差值。当编码端对第二残差块中的全部残差值完成量化处理时,即得到第一残差块。
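公式(1)的量化过程可以用如下示意性的Python代码表达(AMP、SHIFT取自上文的amp[42]与shift[42],但下标按Python习惯从0起算,即AMP[qp]对应正文中的“第QP个”放大参数;函数名quantize_v1为本说明引入的示例假设):

```python
# 放大参数数组:首行为 {1,2048,2731,2048,3277,2731},其后 6 行均为同一组值
AMP = [1, 2048, 2731, 2048, 3277, 2731] + [2341, 2048, 3641, 3277, 2979, 2731] * 6
# 位移参数数组,与正文 shift[42] 一致
SHIFT = ([0, 12, 13, 13, 14, 14] + [14, 14, 15, 15, 15, 15]
         + [15, 15, 16, 16, 16, 16] + [16, 16, 17, 17, 17, 17]
         + [17, 17, 18, 18, 18, 18] + [18, 18, 19, 19, 19, 19]
         + [19, 19, 20, 20, 20, 20])

def quantize_v1(residual, qp):
    # 公式(1):量化后的残差值 = (量化前的残差值 × amp[QP]) >> shift[QP]
    return (residual * AMP[qp]) >> SHIFT[qp]
```

可以验证2^SHIFT[i]/AMP[i]近似等于反量化预设数组mult中对应的放大系数,例如2^15/2341≈14。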
通过该可能的实现方式,编码端在对第二残差块中的残差值进行量化处理时,最多需要进行6种乘法运算,相比现有技术,该方式大大减少了编码端的计算量,即该方式大大节省了编码端的计算资源。
第二种可能的实现方式中,相比上述第一种可能的实现方式,该实现方式中的量化预设数组可以包括更少的数值。在该实现方式中,编码端可以基于公式(2)实现对第二残差块中每个残差值的量化处理:
公式(2)量化后的残差值=(量化前的残差值×amp+offset)>>shift
其中,amp为编码端基于待编码块中每个像素的QP和量化预设数组确定出的该每个像素对应的放大参数,shift为编码端基于待编码块中每个像素的QP和量化预设数组确定出的该每个像素对应的位移参数。offset为偏移参数,用于实现量化后的残差值能够四舍五入取整,(量化前的残差值×amp+offset)>>shift表示对(量化前的残差值×amp+offset)的二进制值向右移位shift位。
具体的,编码端确定出的待编码块中每个像素对应的放大参数amp的值为该每个像素的QP与7按位与后的值在量化预设数组中对应的值,编码端确定出的待编码块中每个像素对应的位移参数shift的值为7与该每个像素的QP除以2^3的商的加和值。
也即,假设以quant_scale表示量化预设数组中的值,则放大参数amp可以通过quant_scale[QP&0x07]计算得到,位移参数shift可以通过7+(QP>>3)计算得到。其中,[QP&0x07]表示QP的二进制值与7按位与(相当于QP除以8取余的数学效果),quant_scale[QP&0x07]即为量化预设数组中的第QP&0x07个数值。QP>>3表示对QP的二进制值向右移位3位。
另外,offset可以通过1<<(shift-1)计算得到,1<<(shift-1)表示对1的二进制值向左移位(shift-1)位(数学上的效果为1乘以2的(shift-1)次方)。
在本申请实施例中,当编码端以公式(2)实现第二残差块的量化处理时,量化预设数组可以是:quant_scale[8]={128,140,153,166,182,197,216,234}。
这样,编码端基于确定的待编码块中每个像素的QP和量化预设数组,计算得到该每个像素对应的放大参数和位移参数。然后,编码端基于计算得到的待编码块中每个像素的放大参数、位移参数以及偏移参数,并通过公式(2)实现对第二残差块的量化,从而得到第一残差块。
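第二种实现方式可以用如下示意性的Python代码表达(函数名quantize_v2为本说明引入的示例假设):

```python
QUANT_SCALE = [128, 140, 153, 166, 182, 197, 216, 234]

def quantize_v2(residual, qp):
    amp = QUANT_SCALE[qp & 0x07]   # QP 与 7 按位与,相当于 QP 对 8 取余
    shift = 7 + (qp >> 3)          # 7 加上 QP 除以 2^3 的商
    offset = 1 << (shift - 1)      # 偏移参数,使量化结果四舍五入取整
    # 公式(2):量化后的残差值 = (量化前的残差值 × amp + offset) >> shift
    return (residual * amp + offset) >> shift
```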
可以看出,通过该可能的实现方式,编码端在对第二残差块中的残差值进行量化处理时,最多需要进行8种乘法运算。相比现有技术,该方式大大减少了编码端的计算量,即该方式大大节省了编码端的计算资源。
第三种可能的实现方式中,编码端可以通过公式(3)实现对第二残差块中每个残差值的量化处理:
公式(3)量化后的残差值=(量化前的残差值+offset)>>shift
其中,shift表示位移参数,shift的取值为与QP相关、且为单调不递减的整数。或者可以理解为,在QP递增的情况下,shift与QP一一对应,且shift的取值为单调不递减的整数。offset的说明可以参考上文,这里不再赘述。并且,offset可以通过(shift==0)?0:(1<<(shift-1))确定,具体为:当shift取值为0时,offset取值为0,当shift取值不为0,则offset取值为(1<<(shift-1))(即为对1的二进制值左移(shift-1)位后得到的值)。
这样,编码端基于确定的待编码块中每个像素的QP即可确定出该每个像素的位移参数,并进一步确定出对应的偏移参数。然后,编码端即可基于确定出的待编码块中每个像素的位移参数和偏移参数,并通过公式(3)实现对第二残差块的量化,从而得到第一残差块。
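第三种实现方式只含加法与移位,可以用如下示意性的Python代码表达(shift在实际方案中由QP确定且单调不递减,此处直接作为入参;函数名quantize_v3为本说明引入的示例假设):

```python
def quantize_v3(residual, shift):
    # 公式(3):量化后的残差值 = (量化前的残差值 + offset) >> shift
    # offset 按正文规则:shift 为 0 时取 0,否则取 1 << (shift - 1)
    offset = 0 if shift == 0 else 1 << (shift - 1)
    return (residual + offset) >> shift
```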
可以看出,通过该可能的实现方式,编码端在对第二残差块中的残差值进行量化处理时,无需进行乘法运算。相比现有技术,在对量化粒度要求不高的编码场景中,该方式大大减少了编码端的计算量,即该方式大大节省了编码端的计算资源。
S303、编码端编码第一残差块,得到待编码块的码流。
编码端在得到第一残差块后,对其进行编码,从而得到待编码块经编码的码流。
可选的,编码端将上述对第二残差块进行量化时的每个像素对应的QP,以及具体采用的量化方式作为待编码块的语义元素,并对该语义元素进行编码。编码端还可以将编码后的语义元素数据添加至待编码块经编码的码流中。
可选的,编码端编码第一残差块的残差扫描顺序,可以为编码端预测待编码块中像素的目标预测顺序。其中,目标预测顺序与预测待编码块的目标预测模式对应。目标预测模式和目标预测顺序的相关说明可以参考上文,这里不做赘述。
需要说明,本申请实施例对待编码块进行预测的具体模式不作具体限定,例如可以采用上文实施例一方案中描述的预测模式对待编码块进行预测,当然不限于此。
还需要说明,本申请实施例对编码端编码第一残差块以及相关的语义元素的具体编码方式不作具体限定,例如可以采用实施例三中描述的可变长的编码方式进行编码。
通过上述的S301-S303所述的图像编码方法,由于在图像编码过程中采用了能够节省编码端计算资源的量化处理方式,即该图像编码方法中的量化处理过程的效率被大大提高,进而该图像编码方法大大提高了图像的编码效率。
如图9b所示,为本申请实施例提供的又一种图像解码方法的流程示意图。图9b所示的方法包括如下步骤:
S401、解码端解析待解码块的码流,得到待解码块中每个像素的反量化参数和待解码块的第一残差块。
其中,待解码块的码流可以是解码端从编码端接收到的码流,或者是从其他设备获取的码流,例如从存储设备获取的码流,本申请实施例对此不作限定。
其中,反量化参数用于指示解码端采用与该反量化参数对应的反量化方式对第一残差块中的残差值进行反量化处理。反量化参数中可以包括量化参数QP。
具体的,解码端可以通过与编码端对应的解码方式,对待解码块的码流进行解析,从而得到待解码块中每个像素的反量化参数和待解码块的第一残差块。
例如,解码端解码待解码块码流的残差扫描顺序为预测待解码块中像素的目标预测顺序。其中,目标预测顺序与预测待解码块的目标预测模式对应。目标预测模式和目标预测顺序的相关说明可以参考上文,这里不做赘述。
可选的,解码端可以采用可变码长的解码方式解析待解码块的码流,以得到编码待解码块对应的残差块中每个值的编码码长CL和第一残差块。其中,解码端采用可变码长的解码方式解析待解码块的码流,得到待解码块对应的残差块中每个值的CL的过程,可以参考下文S601的描述。解码端基于CL确定第一残差块的说明,可以参考S602的描述,这里不再赘述。
步骤S401的其它可能的实现方式将在下文进行说明,这里暂不赘述。
S402、解码端基于待解码块中每个像素的反量化参数所指示的QP和反量化预设数组,对第一残差块进行反量化,得到第二残差块。
第一种可能的实现方式,当解码端基于待解码块中每个像素的反量化参数,确定对第一残差块的反量化方式与上述S302中第一种可能的实现方式中描述的量化方式互逆,则反量化预设数组的说明可以参见上文S302中反量化预设数组的相关说明,这里不做赘述。
这种情况下,解码端可以基于待解码块中每个像素的QP,在反量化预设数组中确定该每个像素对应的放大系数。然后,解码端可以基于该每个像素对应的放大系数,对第一残差块进行反量化处理,从而得到第二残差块。
例如,对于待解码块中任一像素而言,解码端可以基于该任一像素的QP,在上述反量化预设数组中查找,并将反量化预设数组中的第QP个值确定为该任一像素的放大系数。然后,解码端基于该任一像素对应的放大系数,对第一残差块中与该任一像素对应的残差值进行乘法运算,从而得到该任一像素对应的反量化后的残差值。当解码端对第一残差块中的全部残差值完成反量化处理,即得到第二残差块。
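该查表乘法的反量化过程可以用如下示意性的Python代码表达(MULT即上文列出的反量化预设数组,下标按Python习惯从0起算;函数名dequantize_v1为本说明引入的示例假设):

```python
# 反量化预设数组,与正文 mult 数组一致
MULT = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
        14, 16, 18, 20, 22, 24, 28, 32, 36, 40, 44, 48,
        56, 64, 72, 80, 88, 96, 112, 128, 144, 160, 176, 192,
        224, 256, 288, 320, 352, 384]

def dequantize_v1(level, qp):
    # 将量化后的残差值乘以该像素 QP 对应的放大系数
    return level * MULT[qp]
```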
可选的,解码端还可以在确定待解码块中每个像素对应的放大系数后,通过对该每个像素对应的第一残差块中的残差值做向左移位处理,以实现对残差值和与该残差值对应的放大系数的乘法运算,本申请实施例对此不作具体限定。
第二种可能的实现方式,当解码端基于待解码块中每个像素的反量化参数,确定对第一残差块的反量化方式与上述S302中第二种可能的实现方式中描述的量化方式互逆,则解码端可以基于上述公式(2)实现对第一残差块中每个残差值的反量化处理,以得到第二残差块。
这种情况下,公式(2)中的放大参数mult(和上文量化处理时的amp区分)可以通过dequant_scale[QP&0x07]计算得到,dequant_scale[QP&0x07]即为反量化预设数组中的第QP&0x07个数值。公式(2)中的位移参数shift可以通过7-(QP>>3)计算得到。公式(2)中的偏移参数offset可以通过(shift==0)?0:(1<<(shift-1))计算得到。
在本申请实施例中,当解码端以公式(2)实现第一残差块的反量化处理时,反量化预设数组可以是:dequant_scale[8]={128,117,107,99,90,83,76,70}。
这样,解码端基于待解码块中每个像素的QP和反量化预设数组,计算得到该每个像素对应的放大参数和位移参数。然后,解码端即可基于计算得到的待解码块中每个像素的放大参数、位移参数以及偏移参数,并通过公式(2)实现对第一残差块的反量化,从而得到第二残差块。
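该实现方式可以用如下示意性的Python代码表达(函数名dequantize_v2为本说明引入的示例假设),它与上文编码端基于quant_scale的量化运算互逆:

```python
DEQUANT_SCALE = [128, 117, 107, 99, 90, 83, 76, 70]

def dequantize_v2(level, qp):
    mult = DEQUANT_SCALE[qp & 0x07]            # QP 对 8 取余对应的放大参数
    shift = 7 - (qp >> 3)                      # 7 减去 QP 除以 2^3 的商
    offset = 0 if shift == 0 else 1 << (shift - 1)
    # 按公式(2)的形式:反量化后的残差值 = (level × mult + offset) >> shift
    return (level * mult + offset) >> shift
```

例如QP=8时,量化相当于将残差右移1位,上述反量化则将其放大回约2倍,二者互逆。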
可以看出,通过该可能的实现方式,解码端在对第一残差块中的残差值进行反量化处理时,最多需要进行8种乘法运算。相比现有技术,该方式大大减少了解码端的计算量,即该方式大大节省了解码端的计算资源。
第三种可能的实现方式,当解码端基于待解码块中每个像素的反量化参数,确定对第一残差块的反量化方式与上述S302中第三种可能的实现方式中描述的量化方式互逆,则解码端可以基于公式(4)实现对第一残差块中每个残差值的反量化处理,以得到第二残差块:
公式(4)反量化后的残差值=(反量化前的残差值+offset)<<shift
其中,偏移参数offset和位移参数shift的描述均可以参考S302中第三种可能的实现方式中的描述,这里不再赘述。
这样,解码端基于待解码块中每个像素的QP即可确定出该每个像素的位移参数,并进一步确定出对应的偏移参数。然后,解码端即可基于确定出的待解码块中每个像素的位移参数和偏移参数,并通过公式(4)实现对第一残差块的反量化,从而得到第二残差块。
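公式(4)的反量化过程可以用如下示意性的Python代码表达(shift同样由QP确定,此处直接作为入参;函数名dequantize_v3为本说明引入的示例假设):

```python
def dequantize_v3(level, shift):
    # 公式(4):反量化后的残差值 = (反量化前的残差值 + offset) << shift
    # offset 按正文规则:shift 为 0 时取 0,否则取 1 << (shift - 1)
    offset = 0 if shift == 0 else 1 << (shift - 1)
    return (level + offset) << shift
```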
可以看出,通过该可能的实现方式,解码端在对第一残差块中的残差值进行反量化处理时,无需进行乘法运算。相比现有技术,在对量化粒度要求不高的编码场景中,该方式大大减少了解码端的计算量,即该方式大大节省了解码端的计算资源。
需要说明的是,上述几种可能的实现方式中的反量化方式,可以应用于编码端对图像进行编码的过程中(例如获得用于预测图像块中像素的重建块),也可以应用于解码端对图像数据进行解码的过程中,本申请实施例对此不作限定。
S403、解码端基于第二残差块对待解码块进行重建,得到重建块。
可选的,解码端可以直接对第二残差块进行重建,得到待解码块的重建块。具体的,解码端可以直接根据第二残差块以及待解码块的预测块对待解码块进行重建,得到待解码块的重建块。例如,解码端可以通过对第二残差块和待解码块的预测块求和,从而得到待解码块的重建块。
其中,待解码块的预测块可以是解码端基于预测模式对待解码块进行预测得到的。可选的,该预测模式可以是通过解析待解码块的码流得到的,例如,可以是上述实施例一中的任一种预测模式,或者,该预测模式可以是现有技术中任意的预测模式,本申请实施例对此不作限定。此外,这里对解码端预测待解码块的预测块的过程不做详述。
可选的,解码端可以按照目标预测模式,以与目标预测模式对应的目标预测顺序预测待解码块中的每个像素。解码端基于每个像素的预测值和第二残差块对待解码块进行重建,得到重建块。
可选的,解码端可以先对第二残差块进行反变换,以重建待解码块的残差值块。这种情况下,第二残差块实际为残差系数块。然后,解码端可以根据重建的残差值块和待解码块的预测块对待解码块进行重建,得到待解码块的重建块。例如,解码端可以通过对重建的残差值块和待解码块的预测块求和,从而得到待解码块的重建块。
其中,待解码块的预测块的说明可以参考上文,不再赘述。此外,本申请实施例中,解码端对第二残差块进行反变换处理的过程不做详述。
需要说明,图9b所示的图像解码方法与图9a所示的图像编码方法对应,基于上述S401-S403所述的图像解码方法,由于在图像解码过程中采用了能够节省解码端计算资源的反量化处理方式,即该图像解码方法中的反量化处理过程的效率被大大提高,进而该图像解码方法大大提高了图像的解码效率。
还需要说明,本申请实施例对待解码块进行预测的具体模式不作具体限定,例如可以采用上文实施例一方案中描述的预测模式对待解码块进行预测,当然不限于此。
还需要说明,本申请实施例对解码端解码待解码块的码流的方式不作具体限定,例如可以采用下述实施例三中描述的可变长的解码方式进行解码,当然不限于此。
下面对步骤S401的可能的实现方式进行说明。
可选的,解码端基于待解码块的码流确定预测待解码块中像素的目标预测模式和待解码块中每个像素的反量化参数。解码端基于目标预测模式,确定与目标预测模式对应的残差扫描顺序。解码端基于残差扫描顺序解析待解码块的码流,得到第一残差块。
其中,当目标预测模式为第一目标预测模式时,残差扫描顺序为第一扫描顺序,当目标预测模式为第二目标预测模式时,残差扫描顺序为第二扫描顺序,第一扫描顺序和第二扫描顺序不同。
上述第一扫描顺序或第二扫描顺序为残差扫描顺序的一种,第一扫描顺序或第二扫描顺序和上述的目标预测模式对应。例如,第一扫描顺序与第一目标预测模式对应,第二扫描顺序与第二目标预测模式对应。可选的,残差扫描顺序可以和上述的目标预测顺序相同。上述第一扫描顺序和第二扫描顺序用于表征在不同目标预测模式下的两种不同的扫描顺序。
例如,当待解码块是16×2大小的图像块时,如果第一目标预测模式是上文表1所示的预测模式,则第一扫描顺序可以是图7a所示预设轨迹指示的顺序。如果第二目标预测模式是上文表2所示的预测模式,则第二扫描顺序可以是图7b所示预设轨迹指示的顺序。
再例如,如果第一目标预测模式是上文表3-1所示的预测模式,则第一扫描顺序可以是图7c-1所示预设轨迹指示的顺序。如果第二目标预测模式是上文表4-1所示的预测模式,则第二扫描顺序可以是图7d-1所示预设轨迹指示的顺序。
可见,在不同的预测模式下确定与目标预测模式对应的残差扫描顺序也不同。
可选的,在上述步骤S401的可能的实现方式的基础上,解码端还可以在确定目标预测模式的情况下,对于尺寸为第一尺寸的待解码块,在目标预测模式下采用第三扫描顺序解析待解码块的码流;对于尺寸为第二尺寸的待解码块,在目标预测模式下采用第四扫描顺序解析待解码块的码流。
其中,所述第三扫描顺序和所述第四扫描顺序不同。第三扫描顺序和第四扫描顺序为残差扫描顺序的一种。
另外,第一扫描顺序表征的残差扫描顺序可以与第三扫描顺序或第四扫描顺序表征的残差扫描顺序相同,也可以不同。同理,第二扫描顺序表征的残差扫描顺序可以与第三扫描顺序或第四扫描顺序表征的残差扫描顺序相同,也可以不同。本申请实施例不对此进行限定。
下面以第一尺寸为8×1、第二尺寸为8×2为例对上述可选的实现方式进行说明。
例如,当目标预测模式为上文表3-2所示或上文表5所表征的预测模式时,如果待解码块为第一尺寸,则残差扫描顺序可以是第三扫描顺序,如图7e所示的扫描顺序,如果待解码块为第二尺寸,则残差扫描顺序可以是第四扫描顺序,如图7c-2所示的扫描顺序。其中,上文表3-2和上文表5所示的预测模式可以表征同一种目标预测模式在不同尺寸下的两种预测模式。
又例如,当目标预测模式为上文表4-2所示或上文表4-3所表征的预测模式时,如果待解码块为第一尺寸,则残差扫描顺序可以是第三扫描顺序,如图7d-3所示的扫描顺序,如果待解码块为第二尺寸,则残差扫描顺序可以是第四扫描顺序,如图7d-2所示的扫描顺序。其中,上文表4-2和上文表4-3所示的预测模式可以表征同一种目标预测模式在不同尺寸下的两种预测模式。
实施例三
如图10a所示,为本申请实施例提供的又一种图像编码方法的流程示意图。图10a所示的方法包括如下步骤:
S501、编码端确定待编码块对应的残差块。
这里,该残差块可以是待编码块的原始残差值块,或者,该残差块也可以是该原始残差值块经变换后得到的残差系数块,或者,该残差块也可以是编码端对该残差系数块进行量化后得到的经量化的残差块,对此不作限定。
其中,待编码块的原始残差值块即为编码端基于待编码块的原始像素值和待编码块的预测块得到的残差块。
可以理解,编码端对待编码块中的像素进行预测以得到预测块的过程,可以基于实施例一中所述方法实现,当然也可以基于现有技术中任意能够得到待编码块预测块的方法得到,本申请实施例对此不作限定。
残差系数块为编码端对原始残差值块进行变换处理后得到的,本申请实施例对编码端变换处理原始残差值块的过程不作具体限定。
编码端对残差系数块进行量化后得到经量化的残差块的过程,可以基于实施例二中所述的方法实现,当然也可以基于现有技术中任意能够实现对残差块进行量化的方法得到,本申请实施例对此不作限定。
为简单描述,本申请实施例在下文中以残差块是待编码块的残差系数块为例进行说明。
S502、编码端采用可变码长的编码方式对待编码块的残差块进行编码,以得到待编码块的码流。
可选的,上述的可变码长的编码方式可以包括可变换阶数的指数哥伦布编码方式。这样,编码端可以先确定出待编码块中每个像素的属性类型。对于待编码块的残差块中与待编码块中第三像素对应的第一值而言,编码端可以基于预设策略和第三像素的属性类型,确定用于编码第一值时的目标阶数。然后,编码端可以采用目标阶数的指数哥伦布编码算法对待编码块的残差块中的第一值进行编码。当编码端对待编码块的残差块中的每个值进行编码,即得到待编码块经编码的码流。其中,编码端可以预置有上述预设策略,该预设策略用于指示用于编码不同属性类型的像素对应残差值时的阶数。其中,第三像素表征待编码块中的至少一个像素。
应理解,不同阶数的指数哥伦布编码方式的编码规则不同。参考表6,表6示出了阶数k取不同值时指数哥伦布编码方式的编码规则(包括码字结构及对应的编码范围)。如表6所示,表6所示的码字结构中的X可以为0或1。
表6

这样,对于待编码块的残差块中与待编码块的第三像素对应的第一值而言,当编码端基于预设策略和第三像素的属性类型,确定出用于编码上述第一值时的目标阶数。然后,编码端可以基于目标阶数的指数哥伦布编码算法以及第一值的大小,确定用于编码第一值的码字结构,并以该码字结构编码第一值。类似的,当编码端对待编码块的残差块中的每个值进行编码,即得到待编码块经编码的码流。
例如,当编码端基于预设策略和第三像素的属性类型,确定出编码第三像素对应的第一值时的目标阶数为0。假设第一值的取值为2,则如表6所示,第一值属于编码范围1~2,则编码端可以基于0阶的指数哥伦布编码算法以及第一值的大小,确定用于编码第一值的码字结构为011(即编码范围1~2对应的码字结构01X)。这样,编码端即可以以011编码第一值。假设第一值的取值为7,则如表6所示,第一值属于编码范围7~14,则编码端可以基于0阶的指数哥伦布编码算法以及第一值的大小,确定用于编码第一值的码字结构为0001000(即编码范围7~14对应的码字结构0001XXX)。这样,编码端即可以以0001000编码第一值。当编码端对待编码块的残差块中的每个值进行编码,即得到待编码块经编码的码流。
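k阶指数哥伦布编码的通用规则(0阶时即为上述1、01X、001XX、0001XXX等码字结构),可以用如下示意性的Python代码表达(函数名exp_golomb_encode为本说明引入的示例假设):

```python
def exp_golomb_encode(value, k=0):
    # k 阶指数哥伦布编码:若干前导 0 + 以 1 开头的信息位
    value += 1 << k                    # 将待编码值映射到 [2^k, ...)
    num_bits = value.bit_length()
    prefix = "0" * (num_bits - 1 - k)  # 前导 0 的个数由码字长度规则决定
    return prefix + format(value, "b")
```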
可选的,上述的可变码长的编码方式还可以包括预设阶数的指数哥伦布编码方式。其中,该预设阶数可以是编码端预先设置的值,例如是0或1(即上述k的取值可以是预设的0或1),本申请实施例对此不作限定。
由表6可以看出,当编码端预先指定指数哥伦布编码方式的阶数值(即k值),也可以实现对残差块中的残差值(或残差系数块中的残差系数值)的变长编码。
应理解,编码端在对待编码块的残差块编码后,还可以确定该残差块对应的语义元素,例如,该语义元素包括编码该残差块中每个值的编码码长(code length,CL)。
可选的,编码端可以采用上述的可变换阶数的指数哥伦布编码算法对CL进行编码,以实现节省比特的目的,进而实现了在提高图像编码的压缩率的同时,提高编码效率的目的。
例如,对于待编码块的残差块中的任一值,编码端可以基于该任一值的CL确定编码该CL时的目标阶数。然后,编码端可以采用目标阶数的指数哥伦布编码算法对该CL进行编码,并将编码后的数据添加至待编码块经编码的码流。
可以看出,当编码端通过可变换阶数的指数哥伦布编码算法对待编码块的残差块(或残差块中残差值的CL)进行编码,可以自适应的用较少的比特编码较小的残差值(或CL值),从而可以达到节省比特的目的。也就是,在提高图像编码的压缩率的同时,本申请实施例提供的方法还提高了编码效率。
可选的,编码端也可以采用定长编码和截断一元码来编码待编码块的残差块中每个值的CL。例如,对于待编码块的残差块中的任一值,编码端可以采用定长编码或截断一元码对该任一值的CL进行编码。
具体的,当上述任一值的CL小于等于阈值,则编码端采用预设数量个比特对该任一值的CL进行定长编码,并将编码后的数据添加至待编码块的码流。当上述任一值的CL大于前述阈值,则编码端采用截断一元码编码该任一值的CL,并将编码后的数据添加至待编码块的码流。其中,本申请实施例对该阈值的具体取值不作限定。
作为示例,以该阈值的取值为2为例,如表7所示,表7示出了编码端对小于等于2的CL值采用2比特进行定长编码得到的码字,表7还示出了编码端对大于2的CL值采用截断一元码编码得到的码字。其中,若以CLmax表示最大的CL值,则对于大于2的CL值,CLmax的码字中包括CLmax-1个1,CLmax-1的码字包括CLmax-2个1和1个0,…,CLmax-j的码字包括CLmax-j-1个1和1个0,其中,j是正整数。
表7

可以理解,编码端也可以采用定长编码和截断一元码来编码待编码块的残差块中每个值,具体可以参考编码端采用定长编码或截断一元码来编码待编码块的残差块中每个值的CL的描述,不再赘述。
可以看出,当编码端采用定长编码和截断一元码来编码CL(或残差值),可以自适应的用较少的比特编码较小的CL值(或残差值),从而可以达到节省比特的目的。也就是,在提高图像编码的压缩率的同时,还提高了编码效率。
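上述“定长编码+截断一元码”的CL编码方式,可以用如下示意性的Python代码表达(以阈值取2、2比特定长编码为例,cl_max为最大CL值;函数名encode_cl为本说明引入的示例假设):

```python
def encode_cl(cl, cl_max, threshold=2):
    # CL <= 阈值:采用 2 比特定长编码(码字 00、01、10)
    if cl <= threshold:
        return format(cl, "02b")
    # CL > 阈值:截断一元码,CL_max 编码为 CL_max-1 个 1,
    # CL_max-j 编码为 CL_max-j-1 个 1 加 1 个 0
    return "1" * (cl - 1) + ("" if cl == cl_max else "0")
```

注意定长码字中未使用的组合11恰好可以作为进入截断一元码的前缀,使两段码字可被唯一解析。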
如图10b所示,图10b为本申请实施例提供的又一种图像解码方法的流程示意图。图10b所示的方法包括如下步骤:
S601、解码端采用可变码长的解码方式解析待解码块的码流,得到待解码块对应的残差块中每个值的CL。
可选的,上述可变码长的解码方式可以是可变换阶数的指数哥伦布解码方式。解码端可以先从码流中解析出用于解码待解码块的码流的目标阶数,然后采用目标阶数的指数哥伦布解码算法解析待解码块的码流,从而得到待解码块的残差块中每个值的CL。
其中,可变换阶数的指数哥伦布解码方式可以参考上文中可变换阶数的指数哥伦布编码方式的描述。可以理解,解码为编码的逆运算,不再赘述。
可选的,上述可变码长的解码方式可以是预设阶数的指数哥伦布解码方式。这种情况下,解码端可以采用该预设阶数的指数哥伦布解码算法解析待解码块的码流,从而得到待解码块的残差块中每个值的CL。
可选的,当码流中用于编码待解码块的残差块中任一值CL的比特数量为预设数量时,解码端还可以基于定长解码策略解析待解码块的码流,以得到编码该待解码块的残差块中任一值的CL。以及,当码流中用于编码待解码块的残差块中任一值CL的比特数量大于预设数量时,解码端可以基于截断一元码的规则解析待解码块的码流,以得到编码该任一值的CL。其中,具体描述可以参考上文中编码端以定长编码和截断一元码编码CL的说明。可以理解,解码为编码的逆运算,不再赘述。
S602、解码端基于上述获得的CL确定待解码块的残差块。
解码端确定待解码块的残差块中每个值的CL后,即确定了用于编码该残差块中每个值的比特数。这样,解码端即可基于编码该每个值的CL,在待解码块的码流中确定与待解码块中每个像素对应的比特组,并确定用于解析每个比特组的目标阶数。
一种可能的情况下,解码端可以在解析出待解码块的残差块中每个值的CL后,先确定待解码块中每个像素的属性类型。然后,对于与待解码块中第三像素对应的第一比特组而言,解码端可以基于预设策略和第三像素的属性类型,确定用于解析第一比特组的目标阶数。另一种可能的情况下,解码端预置有与待解码块中每个像素对应的比特组的预设阶数,则解码端将与待解码块中每个像素对应的比特组的预设阶数确定为每个比特组的目标阶数。例如,像素的属性类型可以包括:像素是否有符号、像素的位数信息以及像素的格式信息等。
然后,解码端采用目标阶数的指数哥伦布解码算法对与待解码块中每个像素对应的比特组进行解析,即可得到该每个像素的残差值,从而得到待解码块的残差块。
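与k阶指数哥伦布编码相逆的解码过程,可以用如下示意性的Python代码表达(函数名exp_golomb_decode为本说明引入的示例假设,返回解码出的数值与消耗的比特数):

```python
def exp_golomb_decode(bits, k=0):
    # 从比特串起始位置解析一个 k 阶指数哥伦布码字
    zeros = 0
    while bits[zeros] == "0":          # 统计前导 0 的个数
        zeros += 1
    total = zeros * 2 + 1 + k          # 码字总长度:前导 0 + 信息位
    value = int(bits[zeros:total], 2) - (1 << k)
    return value, total
```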
可以理解,解码端根据待解码块的码流解析得到残差块中残差值的顺序,和编码端编码残差块中残差值的顺序相同。
S603、解码端基于待解码块的残差块对待解码块进行重建,得到重建块。
可选的,当待解码块的残差块为残差系数块时,解码端可以对待解码块的残差块依次进行反量化和反变换,以得到重建的待解码块的残差值块。然后,解码端可以基于重建的残差值块对待解码块进行重建,从而得到重建块。例如,解码端可以通过对重建的残差值块和待解码块的预测块求和,从而得到待解码块的重建块。
可选的,当待解码块的残差块为残差值块时,解码端可以对待解码块的残差块进行反量化,以得到重建的待解码块的残差值块。然后,解码端可以基于重建的残差值块对待解码块进行重建,从而得到重建块。例如,解码端可以通过对重建的残差值块和待解码块的预测块求和,从而得到待解码块的重建块。
可选的,解码端还可以基于待解码块的码流,确定用于预测待解码块中像素的目标预测模式。解码端基于目标预测模式,确定与目标预测模式对应的目标预测顺序。解码端按照目标预测模式,以目标预测顺序预测待解码块中的每个像素。解码端基于待解码块中的每个像素的预测值和残差块对待解码块进行重建,得到重建块。具体的,上述可选的实现方式的具体内容可以参考实施例一的描述,这里不再赘述。
其中,上述的待解码块的预测块可以是基于预测模式对待解码块进行预测得到的,其中,上述预测模式可以是解码端基于解析待解码块的码流得到的。可选的,该预测模式可以是实施例一中的任一种预测模式,或者,该预测模式可以是现有技术中任意的预测模式,本申请实施例对此不作限定。此外,这里对解码端预测待解码块的预测块的过程不做详述。
其中,解码端对待解码块的残差块进行反量化的过程,可以基于解析待解码块的码流得到的待解码块中每个像素的反量化参数和反量化预设数组,对上述残差块进行反量化。具体的,可以基于实施例二中的方法实现,当然也可以基于现有技术中任意能够实现对残差块进行反量化的方法得到,本申请实施例对此不作限定。
需要说明,图10b所示的图像解码方法与图10a所示的图像编码方法对应,因此该方法可以在提高图像编码的压缩率的同时,提高解码效率。
还需说明,实施例三提供的可变长的编/解码方式也可以应用于实施例一和实施例二,或者应用于任意需要进行图像编/解码的场景中,本申请实施例对此不作限定。
可以理解的是,为了实现上述实施例中功能,编码端/解码端包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请中所公开的实施例描述的各示例的单元及方法步骤,本申请能够以硬件或硬件和计算机软件相结合的形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用场景和设计约束条件。
以下,说明本申请实施例提供的解码装置和编码装置。
在一个示例中,本申请实施例提供的任一种解码装置均可以是图1中的目的设备12或解码器122。在另一个示例中,以下提供的任一种编码装置均可以是图1中的源设备11或编码器112。此处统一说明,下文不再赘述。
图11为本申请提供的一种解码装置1100的结构示意图,上述任一种解码方法实施例都可以由该解码装置1100执行。该解码装置1100包括解析单元1101、确定单元1102、预测单元1103以及重建单元1104。其中,解析单元1101,用于解析待解码块的码流,以确定预测待解码块中像素的目标预测模式。确定单元1102,用于基于该目标预测模式,确定与该目标预测模式对应的目标预测顺序。预测单元1103,用于按照确定的目标预测模式,以与该目标预测模式对应的目标预测顺序预测待解码块中的每个像素。重建单元1104,用于基于待解码块中的每个像素的预测值对该每个像素进行重建,从而得到待解码块的重建块。
在一个示例中,解析单元1101可以通过图5中的码流解析单元301实现。确定单元1102和预测单元1103可以通过图5中的预测处理单元304实现,重建单元1104可以通过图5中的重构单元305实现。图5中的经编码比特流可以是本申请实施例中的待解码块的码流。
有关上述解析单元1101、确定单元1102、预测单元1103和重建单元1104更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
一种可能的设计中,预测单元1103具体用于在以目标预测顺序预测待解码块中的任一像素时,用于预测任一像素的像素已完成重建。
另一种可能的设计中,若目标预测模式指示以目标预测顺序逐点预测待解码块中的每个像素,则预测单元1103具体用于按照目标预测模式,沿目标预测顺序指示的方向逐点预测待解码块中的每个像素;其中,当目标预测模式为第一目标预测模式时,目标预测顺序为第一预测顺序,当目标预测模式为第二目标预测模式时,目标预测顺序为第二预测顺序,第一预测顺序和第二预测顺序不同。
再一种可能的设计中,对于尺寸为第一尺寸的待解码块,预测单元1103具体用于在目标预测模式下采用第三预测顺序对待解码块进行预测;对于尺寸为第二尺寸的待解码块,预测单元1103具体用于在目标预测模式下采用第四预测顺序对待解码块进行预测;其中,第三预测顺序和第四预测顺序不同。
又一种可能的设计中,若目标预测模式指示以待解码块中具有预设大小的子块为单位依次预测待解码块中每个子块的像素,则预测单元1103具体用于按照目标预测模式,沿目标预测顺序指示的方向依次预测待解码块中每个子块中的像素。
又一种可能的设计中,目标预测模式包括待解码块中每个子块的预测模式,对于待解码块中第一子块,第一子块中包括第一像素和第二像素,则第一子块的预测模式用于根据第一子块周围已重建的像素并行的对第一像素和第二像素进行预测。
又一种可能的设计中,重建单元1104包括:反量化子单元和重建子单元。其中,反量化子单元,用于基于解析待解码块的码流得到的待解码块中每个像素的反量化参数和反量化预设数组,对解析待解码块的码流得到的待解码块的第一残差块进行反量化,得到第二残差块。重建子单元,用于基于每个像素的预测值和第二残差块对每个像素进行重建,得到重建块。
又一种可能的设计中,上述反量化子单元,具体用于采用可变码长的解码方式解析待解码块的码流,以得到编码待解码块对应的残差块中每个值的编码码长CL和第一残差块。
有关上述可能的设计的更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
图12为本申请提供的一种编码装置1200的结构示意图,上述任一种编码方法实施例都可以由该编码装置1200执行。该编码装置1200包括确定单元1201、预测单元1202以及编码单元1203。其中,确定单元1201,用于确定待编码块的目标预测模式,以及确定与目标预测模式对应的目标预测顺序。预测单元1202,用于按照目标预测模式,以目标预测顺序预测待编码块中的每个像素。确定单元1201,还用于基于该每个像素的预测值确定待编码块的残差块。编码单元1203,用于以目标预测顺序编码残差块,以得到待编码块的码流。
在一个示例中,确定单元1201和预测单元1202可以通过图2中的预测处理单元201实现。确定单元1201还可以通过图2中的残差计算单元202实现。编码单元1203可以通过图2中的编码单元205实现。图2中的待编码块可以是本申请实施例中的待编码块。
有关上述确定单元1201、预测单元1202和编码单元1203更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
图13为本申请提供的一种解码装置1300的结构示意图,上述任一种解码方法实施例都可以由该解码装置1300执行。该解码装置1300包括解析单元1301、反量化单元1302以及重建单元1303。其中,解析单元1301,用于解析待解码块的码流,得到待解码块中每个像素的反量化参数和待解码块的第一残差块。反量化单元1302,用于基于该每个像素的反量化参数指示的QP和反量化预设数组,对第一残差块进行反量化,得到第二残差块。重建单元1303,用于基于第二残差块对待解码块进行重建,得到重建块。
在一个示例中,解析单元1301可以通过图5中的码流解析单元301实现。反量化单元1302可以通过图5中的反量化单元302实现。重建单元1303可以通过图5中的重构单元305实现。图5中的经编码比特流可以是本申请实施例中的待解码块的码流。
有关上述解析单元1301、反量化单元1302和重建单元1303更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
一种可能的设计中,解析单元1301具体用于基于待解码块的码流确定预测待解码块中像素的目标预测模式和待解码块中每个像素的反量化参数;基于目标预测模式,确定与目标预测模式对应的残差扫描顺序;其中,当目标预测模式为第一目标预测模式时,残差扫描顺序为第一扫描顺序,当目标预测模式为第二目标预测模式时,残差扫描顺序为第二扫描顺序,第一扫描顺序和第二扫描顺序不同;基于残差扫描顺序解析待解码块的码流,得到第一残差块。
另一种可能的设计中,对于尺寸为第一尺寸的待解码块,解析单元1301在目标预测模式下采用第三扫描顺序解析待解码块的码流;对于尺寸为第二尺寸的待解码块,解析单元1301在目标预测模式下采用第四扫描顺序解析待解码块的码流;其中,第三扫描顺序和第四扫描顺序不同。
再一种可能的设计中,在反量化预设数组中,第1至第n个数中相邻的两个数之间的间隔为1,第n+1至第n+m个数中相邻的两个数之间的间隔为2,第n+k*m+1至第n+k*m+m个数中相邻两个数之间的数值间隔为2^(k+1),其中,n、m为大于1的整数,k为正整数。
又一种可能的设计中,反量化单元1302具体用于基于每个像素的QP在反量化预设数组中确定每个像素对应的放大系数;基于每个像素对应的放大系数,对第一残差块进行反量化运算,得到第二残差块。
又一种可能的设计中,反量化单元1302具体用于基于每个像素的QP和反量化预设数组,确定每个像素对应的放大参数和位移参数;其中,每个像素对应的放大参数的值为每个像素的QP与7按位与后的值在反量化预设数组中对应的值,每个像素对应的位移参数的值为7与每个像素的QP除以2^3的商的差值;基于每个像素对应的放大参数和位移参数,对第一残差块进行反量化运算,得到第二残差块。
又一种可能的设计中,重建单元1303具体用于对第二残差块进行反变换,以重建待解码块的残差值块;基于残差值块对待解码块进行重建,得到重建块。
又一种可能的设计中,重建单元1303具体用于按照目标预测模式,以与目标预测模式对应的目标预测顺序预测待解码块中的每个像素;基于每个像素的预测值和第二残差块对待解码块进行重建,得到重建块。
又一种可能的设计中,解析单元1301具体用于采用可变码长的解码方式解析待解码块的码流,以得到编码待解码块对应的残差块中每个值的编码码长CL和第一残差块。
有关上述可能的设计的更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
图14为本申请提供的一种编码装置1400的结构示意图,上述任一种编码方法实施例都可以由该编码装置1400执行。该编码装置1400包括确定单元1401、量化单元1402以及编码单元1403。其中,确定单元1401,用于确定待编码块的第二残差块和待编码块中每个像素的量化参数QP。量化单元1402,用于基于该每个像素的QP和量化预设数组,对第二残差块进行量化,得到第一残差块。编码单元1403,用于编码第一残差块,得到待编码块的码流。
在一个示例中,确定单元1401可以通过图2中的残差计算单元202实现,或者,确定单元1401可以通过图2中的残差计算单元202和残差变换单元203结合实现。量化单元1402可以通过图2中的量化单元204实现。编码单元1403可以通过图2中的编码单元205实现。图2中的待编码块可以是本申请实施例中的待编码块。
有关上述确定单元1401、量化单元1402和编码单元1403更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
图15为本申请提供的一种编码装置1500的结构示意图,上述任一种编码方法实施例都可以由该编码装置1500执行。该编码装置1500包括确定单元1501和编码单元1502。其中,确定单元1501,用于确定待编码块对应的残差块。编码单元1502,用于采用可变码长的编码方式对前述的残差块进行编码,以得到待编码块的码流。
在一个示例中,确定单元1501可以通过图2中的残差计算单元202实现,或者,确定单元1501可以通过图2中的残差计算单元202和残差变换单元203结合实现,或者,确定单元1501可以通过图2中的残差计算单元202、残差变换单元203以及量化单元204结合实现。编码单元1502可以通过图2中的编码单元205实现。图2中的待编码块可以是本申请实施例中的待编码块。
有关上述确定单元1501和编码单元1502更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
图16为本申请提供的一种解码装置1600的结构示意图,上述任一种解码方法实施例都可以由该解码装置1600执行。该解码装置1600包括解析单元1601、确定单元1602以及重建单元1603。其中,解析单元1601,用于采用可变码长的解码方式解析待解码块的码流,得到编码待解码块对应的残差块中每个值的编码码长CL。确定单元1602,用于基于编码每个值的CL确定待解码块的残差块。重建单元1603,用于基于待解码块的残差块对待解码块进行重建,得到重建块。
在一个示例中,解析单元1601可以通过图5中的码流解析单元301实现。确定单元1602可以通过图5中的反量化单元302实现,或者,确定单元1602可以通过图5中的反量化单元302和残差逆变换单元303结合实现。重建单元1603可以通过图5中的重构单元305实现。图5中的经编码比特流可以是本申请实施例中的待解码块的码流。
有关上述解析单元1601、确定单元1602和重建单元1603更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
一种可能的设计中,在可变码长的解码方式包括可变换阶数或预设阶数的指数哥伦布解码方式的情况下,解析单元1601具体用于确定解析编码所述残差块中每个值的CL时的目标阶数;采用所述目标阶数的指数哥伦布解码算法解析所述码流,得到编码所述残差块中每个值的CL。
另一种可能的设计中,解析单元1601具体用于当用于编码所述残差块中任一值CL的比特数量为预设数量,则基于定长解码策略解析所述码流,以得到编码所述任一值的CL;当用于编码所述残差块中任一值CL的比特数量大于预设数量,则基于截断一元码的规则解析所述码流,以得到编码所述任一值的CL。
再一种可能的设计中,确定单元1602具体用于基于编码所述每个值的CL在所述码流中确定与所述待解码块中每个像素对应的比特组;确定所述待解码块中每个像素的属性类型;对于与所述待解码块中第三像素对应的第一比特组,基于预设策略和所述第三像素的属性类型,确定解析所述第一比特组的目标阶数;采用所述目标阶数的指数哥伦布解码算法解析所述第一比特组,以得到所述残差块。
又一种可能的设计中,确定单元1602具体用于基于编码所述每个值的CL在所述码流中确定与所述待解码块中每个像素对应的比特组;对于与所述待解码块中第三像素对应的第一比特组,采用预设阶数的指数哥伦布解码算法解析所述第一比特组,以得到所述残差块。
又一种可能的设计中,重建单元1603具体用于对所述残差块进行反量化和反变换,或者,对所述残差块进行反量化,以重建所述待解码块的残差值块;基于所述残差值块对所述待解码块进行重建,得到所述重建块。
又一种可能的设计中,重建单元1603具体用于基于待解码块的码流确定预测所述待解码块中像素的目标预测模式;基于所述目标预测模式,确定与所述目标预测模式对应的目标预测顺序;按照所述目标预测模式,以所述目标预测顺序预测所述待解码块中的每个像素;基于所述待解码块中的每个像素的预测值和所述残差块对所述待解码块进行重建,得到所述重建块。
又一种可能的设计中,重建单元1603具体用于基于解析所述待解码块的码流得到的所述待解码块中每个像素的反量化参数和反量化预设数组,对所述残差块进行反量化。
有关上述可能的设计的更详细的描述、以及其中各技术特征更详细的描述,以及有益效果的描述等,均可以参考上述相应的方法实施例部分,此处不再赘述。
本申请还提供一种电子设备,用于执行上述任意图像编码/解码方法的实施例。如图17所示,图17为本申请提供的一种电子设备的结构示意图,电子设备1700包括处理器1701和通信接口1702。处理器1701和通信接口1702之间相互耦合。可以理解的是,通信接口1702可以为收发器或输入输出接口。
在一个示例中,电子设备1700还可以包括存储器1703,用于存储处理器1701执行的指令或存储处理器1701运行指令所需要的输入数据或存储处理器1701运行指令后产生的数据。
本申请实施例中不限定上述通信接口1702、处理器1701以及存储器1703之间的具体连接介质。本申请实施例在图17中以通信接口1702、处理器1701以及存储器1703之间通过总线1704连接为例进行说明,总线在图17中以粗线表示;其它部件之间的连接方式仅为示意性说明,并不以此为限。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图17中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
存储器1703可用于存储软件程序及模块,如本申请实施例所提供的图像解码方法或图像编码方法对应的程序指令/模块,处理器1701通过执行存储在存储器1703内的软件程序及模块,从而执行各种功能应用以及数据处理,以实现上文提供的任一种图像解码方法或图像编码方法。该通信接口1702可用于与其他设备进行信令或数据的通信。在本申请中该电子设备1700可以具有多个通信接口1702。
可以理解的是,本申请的实施例中的处理器可以是中央处理单元(central processing Unit,CPU)、神经处理器(neural processing unit,NPU)或图形处理器(graphic processing unit,GPU),还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件、晶体管逻辑器件,硬件部件或者其任意组合。通用处理器可以是微处理器,也可以是任何常规的处理器。
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于网络设备或终端设备中。当然,处理器和存储介质也可以作为分立组件存在于网络设备或终端设备中。
本申请还提供一种计算机可读存储介质,该存储介质中存储有计算机程序或指令,当上述计算机程序或指令被电子设备执行时,实现上述任意图像编码/解码方法的实施例。
本申请实施例还提供一种编解码系统,包括编码端和解码端,该编码端可以用于执行上文提供的任意一种图像编码方法,解码端用于执行对应的图像解码方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时,全部或部分地执行本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其它可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘(digital video disc,DVD);还可以是半导体介质,例如,固态硬盘(solid state drive,SSD)。
在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定。以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (25)

  1. 一种图像解码方法,其特征在于,包括:
    解析待解码块的码流,以确定预测所述待解码块中像素的目标预测模式;
    基于所述目标预测模式,确定与所述目标预测模式对应的目标预测顺序;
    按照所述目标预测模式,以所述目标预测顺序预测所述待解码块中的每个像素;
    基于所述每个像素的预测值对所述每个像素进行重建,得到所述待解码块的重建块。
  2. 根据权利要求1所述的方法,其特征在于,在以所述目标预测顺序预测所述待解码块中的任一像素时,用于预测所述任一像素的像素已完成重建。
  3. 根据权利要求2所述的方法,其特征在于,若所述目标预测模式指示以所述目标预测顺序逐点预测所述待解码块中的每个像素,则所述按照所述目标预测模式,以所述目标预测顺序预测所述待解码块中的每个像素包括:
    按照所述目标预测模式,沿所述目标预测顺序指示的方向逐点预测所述待解码块中的每个像素;其中,当所述目标预测模式为第一目标预测模式时,所述目标预测顺序为第一预测顺序,当所述目标预测模式为第二目标预测模式时,所述目标预测顺序为第二预测顺序,所述第一预测顺序和所述第二预测顺序不同。
  4. 根据权利要求3所述的方法,其特征在于,对于尺寸为第一尺寸的所述待解码块,在所述目标预测模式下采用第三预测顺序对所述待解码块进行预测;对于尺寸为第二尺寸的所述待解码块,在所述目标预测模式下采用第四预测顺序对所述待解码块进行预测;其中,所述第三预测顺序和所述第四预测顺序不同。
  5. 根据权利要求2所述的方法,其特征在于,若所述目标预测模式指示以所述待解码块中具有预设大小的子块为单位依次预测所述待解码块中每个子块的像素,则所述按照所述目标预测模式,以所述目标预测顺序预测所述待解码块中的每个像素包括:
    按照所述目标预测模式,沿所述目标预测顺序指示的方向依次预测所述待解码块中每个子块中的像素。
  6. 根据权利要求5所述的方法,其特征在于,所述目标预测模式包括所述待解码块中每个子块的预测模式,对于所述待解码块中第一子块,所述第一子块中包括第一像素和第二像素,则所述第一子块的预测模式用于根据所述第一子块周围已重建的像素并行的对所述第一像素和所述第二像素进行预测。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述基于所述每个像素的预测值对所述每个像素进行重建,得到所述待解码块的重建块包括:
    基于解析所述待解码块的码流得到的所述待解码块中每个像素的反量化参数和反量化预设数组,对解析所述待解码块的码流得到的所述待解码块的第一残差块进行反量化,得到第二残差块;
    基于所述每个像素的预测值和所述第二残差块对所述每个像素进行重建,得到所述重建块。
  8. 根据权利要求7所述的方法,其特征在于,所述解析所述待解码块的码流包括:
    采用可变码长的解码方式解析所述待解码块的码流,以得到编码所述待解码块对应的残差块中每个值的编码码长CL和所述第一残差块。
  9. 一种图像编码方法,其特征在于,包括:
    确定待编码块的目标预测模式,以及确定与所述目标预测模式对应的目标预测顺序;
    按照所述目标预测模式,以所述目标预测顺序预测所述待编码块中的每个像素;
    基于所述每个像素的预测值确定所述待编码块的残差块;
    以所述目标预测顺序编码所述残差块,以得到所述待编码块的码流。
  10. 一种图像解码方法,其特征在于,包括:
    解析待解码块的码流,得到所述待解码块中每个像素的反量化参数和所述待解码块的第一残差块;
    基于所述每个像素的反量化参数指示的量化参数QP和反量化预设数组,对所述第一残差块进行反量化,得到第二残差块;
    基于所述第二残差块对所述待解码块进行重建,得到重建块。
  11. 根据权利要求10所述的方法,其特征在于,所述解析待解码块的码流,得到所述待解码块中每个像素的反量化参数和所述待解码块的第一残差块包括:
    基于待解码块的码流确定预测所述待解码块中像素的目标预测模式和所述待解码块中每个像素的反量化参数;
    基于所述目标预测模式,确定与所述目标预测模式对应的残差扫描顺序;其中,当所述目标预测模式为第一目标预测模式时,所述残差扫描顺序为第一扫描顺序,当所述目标预测模式为第二目标预测模式时,所述残差扫描顺序为第二扫描顺序,所述第一扫描顺序和所述第二扫描顺序不同;
    基于所述残差扫描顺序解析所述待解码块的码流,得到所述第一残差块。
  12. 根据权利要求11所述的方法,其特征在于,对于尺寸为第一尺寸的所述待解码块,在所述目标预测模式下采用第三扫描顺序解析所述待解码块的码流;对于尺寸为第二尺寸的所述待解码块,在所述目标预测模式下采用第四扫描顺序解析所述待解码块的码流;其中,所述第三扫描顺序和所述第四扫描顺序不同。
  13. 一种图像解码方法,其特征在于,包括:
    采用可变码长的解码方式解析待解码块的码流,得到编码所述待解码块对应的残差块中每个值的编码码长CL;
    基于编码所述每个值的CL确定所述残差块;
    基于所述残差块对所述待解码块进行重建,得到重建块。
  14. 根据权利要求13所述的方法,其特征在于,所述可变码长的解码方式包括可变换阶数或预设阶数的指数哥伦布解码方式,所述采用可变码长的解码方式解析待解码块的码流,得到编码所述待解码块对应的残差块中每个值的编码码长CL包括:
    确定解析编码所述残差块中每个值的CL时的目标阶数;
    采用所述目标阶数的指数哥伦布解码算法解析所述码流,得到编码所述残差块中每个值的CL。
  15. 根据权利要求13所述的方法,其特征在于,所述采用可变码长的解码方式解析待解码块的码流,得到编码所述待解码块对应的残差块中每个值的编码码长CL包括:
    当用于编码所述残差块中任一值CL的比特数量为预设数量,则基于定长解码策略解析所述码流,以得到编码所述任一值的CL;
    当用于编码所述残差块中任一值CL的比特数量大于预设数量,则基于截断一元码的规则解析所述码流,以得到编码所述任一值的CL。
  16. 根据权利要求13所述的方法,其特征在于,所述基于编码所述每个值的CL确定所述残差块包括:
    基于编码所述每个值的CL在所述码流中确定与所述待解码块中每个像素对应的比特组;
    确定所述待解码块中每个像素的属性类型;
    对于与所述待解码块中第三像素对应的第一比特组,基于预设策略和所述第三像素的属性类型,确定解析所述第一比特组的目标阶数;
    采用所述目标阶数的指数哥伦布解码算法解析所述第一比特组,以得到所述残差块。
  17. 根据权利要求13所述的方法,其特征在于,所述基于编码所述每个值的CL确定所述残差块包括:
    基于编码所述每个值的CL在所述码流中确定与所述待解码块中每个像素对应的比特组;
    对于与所述待解码块中第三像素对应的第一比特组,采用预设阶数的指数哥伦布解码算法解析所述第一比特组,以得到所述残差块。
  18. 根据权利要求13-17中任一项所述的方法,其特征在于,所述基于所述残差块对所述待解码块进行重建,得到重建块包括:
    对所述残差块进行反量化和反变换,或者,对所述残差块进行反量化,以重建所述待解码块的残差值块;
    基于所述残差值块对所述待解码块进行重建,得到所述重建块。
  19. 根据权利要求18所述的方法,其特征在于,所述基于所述残差块对所述待解码块进行重建,得到重建块包括:
    基于待解码块的码流确定预测所述待解码块中像素的目标预测模式;
    基于所述目标预测模式,确定与所述目标预测模式对应的目标预测顺序;
    按照所述目标预测模式,以所述目标预测顺序预测所述待解码块中的每个像素;
    基于所述待解码块中的每个像素的预测值和所述残差块对所述待解码块进行重建,得到所述重建块。
  20. 根据权利要求18所述的方法,其特征在于,所述对所述残差块进行反量化包括:
    基于解析所述待解码块的码流得到的所述待解码块中每个像素的反量化参数和反量化预设数组,对所述残差块进行反量化。
  21. 一种图像解码装置,其特征在于,包括:
    解析单元,用于解析待解码块的码流,以确定预测所述待解码块中像素的目标预测模式;
    确定单元,用于基于所述目标预测模式,确定与所述目标预测模式对应的目标预测顺序;
    预测单元,用于按照所述目标预测模式,以所述目标预测顺序预测所述待解码块中的每个像素;
    重建单元,用于基于所述每个像素的预测值对所述每个像素进行重建,得到所述待解码块的重建块。
  22. 一种图像解码装置,其特征在于,包括:
    解析单元,用于解析待解码块的码流,得到所述待解码块中每个像素的反量化参数和所述待解码块的第一残差块;
    反量化单元,用于基于所述每个像素的反量化参数指示的QP和反量化预设数组,对所述第一残差块进行反量化,得到第二残差块;
    重建单元,用于基于所述第二残差块对所述待解码块进行重建,得到重建块。
  23. 一种图像解码装置,其特征在于,包括:
    解析单元,用于采用可变码长的解码方式解析待解码块的码流,得到编码所述待解码块对应的残差块中每个值的编码码长CL;
    确定单元,用于基于编码所述每个值的CL确定所述残差块;
    重建单元,用于基于所述残差块对所述待解码块进行重建,得到重建块。
  24. 一种电子设备,其特征在于,包括处理器和存储器,所述存储器用于存储计算机指令,所述处理器用于从存储器中调用并运行所述计算机指令,实现权利要求1-20中任一项所述的方法。
  25. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有计算机程序或指令,当所述计算机程序或指令被电子设备执行时,实现权利要求1-20中任一项所述的方法。
PCT/CN2023/084295 2022-03-29 2023-03-28 一种图像编解码方法、装置、电子设备及存储介质 WO2023185806A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210320915.5A CN116095310A (zh) 2022-03-29 2022-03-29 图像编解码方法、装置、电子设备及存储介质
CN202210320915.5 2022-03-29

Publications (2)

Publication Number Publication Date
WO2023185806A1 WO2023185806A1 (zh) 2023-10-05
WO2023185806A9 true WO2023185806A9 (zh) 2023-11-16

Family

ID=86205129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/084295 WO2023185806A1 (zh) 2022-03-29 2023-03-28 一种图像编解码方法、装置、电子设备及存储介质

Country Status (3)

Country Link
CN (3) CN116095310A (zh)
TW (1) TW202348026A (zh)
WO (1) WO2023185806A1 (zh)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60008716T2 (de) * 2000-07-10 2005-02-10 Stmicroelectronics S.R.L., Agrate Brianza Verfahren zur Kompression digitaler Bilder
KR101379188B1 (ko) * 2010-05-17 2014-04-18 에스케이 텔레콤주식회사 인트라 블록 및 인터 블록이 혼합된 코딩블록을 이용하는 영상 부호화/복호화 장치 및 그 방법
GB2559062B (en) * 2011-10-17 2018-11-14 Kt Corp Video decoding method using transform method selected from a transform method set
WO2019050299A1 (ko) * 2017-09-06 2019-03-14 가온미디어 주식회사 변화계수 서브그룹 스캐닝 방법에 따른 부/복호화 방법 및 장치
CN110650343A (zh) * 2018-06-27 2020-01-03 中兴通讯股份有限公司 图像的编码、解码方法及装置、电子设备及系统
US11245920B2 (en) * 2019-08-23 2022-02-08 Qualcomm Incorporated Escape code coding for motion vector difference in video coding

Also Published As

Publication number Publication date
WO2023185806A1 (zh) 2023-10-05
TW202348026A (zh) 2023-12-01
CN116405664A (zh) 2023-07-07
CN116095310A (zh) 2023-05-09
CN116405663A (zh) 2023-07-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23778171

Country of ref document: EP

Kind code of ref document: A1