WO2022166462A1 - Encoding and decoding methods and related devices

Encoding and decoding methods and related devices

Info

Publication number
WO2022166462A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
map
fidelity
value
quantization parameter
Prior art date
Application number
PCT/CN2021/141403
Other languages
English (en)
French (fr)
Inventor
杨海涛
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022166462A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 - Image coding
    • G06T9/002 - Image coding using neural networks

Definitions

  • The embodiments of the present application relate to the technical field of artificial intelligence (AI) based video or image compression, and in particular to an encoding and decoding method and related devices.
  • Video coding (video encoding and decoding) is widely used in digital video applications, such as broadcast digital television, video transmission over the Internet and mobile networks, real-time conversational applications such as video chat and video conferencing, DVD and Blu-ray discs, video content capture and editing systems, and camcorder security applications.
  • Video compression devices typically use software and/or hardware on the source side to encode video data prior to transmission or storage, thereby reducing the amount of data required to represent digital video images. Then, the compressed data is received by the video decompression device at the destination side.
  • In view of this, the present application provides an encoding and decoding method and related devices, by which distortion intensity information of an encoded image can be obtained at the decoding end.
  • According to a first aspect, the present application relates to an encoding method. The method is performed by an encoding device and includes: encoding an original image to obtain a first code stream; and encoding a fidelity map to obtain a second code stream, wherein the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of a reconstructed image, the reconstructed image being obtained by decoding the first code stream.
  • In this solution, the original image is encoded to obtain the first code stream, and the fidelity map is encoded to obtain the second code stream; the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • The decoding end decodes the first code stream to obtain a reconstructed image of the original image, and decodes the second code stream to obtain a reconstructed map of the fidelity map (which may also be referred to as a reconstruction fidelity map). If the encoding is lossless, the reconstructed map of the fidelity map is identical to the fidelity map; if the encoding is lossy, the reconstructed map includes the coding distortion generated by encoding the fidelity map. In either case, the fidelity map can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image; therefore, the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
  • The method further includes: dividing the original image into a plurality of first image blocks and dividing the reconstructed image into a plurality of second image blocks, wherein the division strategy used for the original image is the same as that used for the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or dividing a preset area of the original image into a plurality of first image blocks and dividing a preset area of the reconstructed image into a plurality of second image blocks, wherein the two division strategies are the same and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and calculating, for any second image block among the plurality of second image blocks, a fidelity value from that second image block and its corresponding first image block, wherein the fidelity value of that second image block is used to represent the distortion between that second image block and its corresponding first image block, and the fidelity map includes the fidelity value of each second image block.
  • The position of any second image block in the reconstructed image is the same as the position of its corresponding first image block in the original image; alternatively, the position of the preset area of the original image within the original image is the same as the position of the preset area of the reconstructed image within the reconstructed image, and the position of any second image block within the preset area of the reconstructed image is the same as the position of its corresponding first image block within the preset area of the original image.
  • In this solution, the size of the original image is the same as that of the reconstructed image, and the preset region has the same size and position in both. The original image is divided into a plurality of first image blocks and the reconstructed image into a plurality of second image blocks according to the same division strategy; or the preset area of the original image is divided into a plurality of first image blocks and the preset area of the reconstructed image into a plurality of second image blocks according to the same division strategy. The first image blocks obtained by division correspond one-to-one to the second image blocks, all first image blocks have the same size, all second image blocks have the same size, and the first and second image blocks are of equal size. The first and second image blocks can therefore serve as the basic unit of the fidelity calculation: from any second image block and its corresponding first image block, the fidelity value of that second image block is calculated, and the fidelity values of the plurality of second image blocks, i.e., of the individual areas of the reconstructed image, yield the fidelity map. When the whole images are divided, the fidelity map characterizes the fidelity of the reconstructed image; when the preset areas are divided, it characterizes the fidelity of the preset region of the reconstructed image. This makes it convenient to obtain a fidelity map that characterizes the distortion intensity information (see the sketch below).
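  • As an illustrative sketch only (not part of the patent text): the following Python function computes such a per-block fidelity map for one color component, assuming square blocks of a fixed size and PSNR as the fidelity metric; the function and parameter names are hypothetical, and the patent does not prescribe a particular block shape or distortion metric.

```python
import numpy as np

def fidelity_map(original: np.ndarray, reconstructed: np.ndarray,
                 block: int = 64) -> np.ndarray:
    """Per-block fidelity of a reconstructed image (one color component).

    A minimal sketch: the block size and the PSNR metric are illustrative
    assumptions; the patent only requires some per-block distortion measure.
    """
    assert original.shape == reconstructed.shape
    h, w = original.shape
    rows, cols = h // block, w // block   # identical division strategy for both images
    fmap = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            orig_blk = original[r*block:(r+1)*block, c*block:(c+1)*block]
            reco_blk = reconstructed[r*block:(r+1)*block, c*block:(c+1)*block]
            mse = np.mean((orig_blk.astype(np.float64) - reco_blk) ** 2)
            # The first element's position (r, c) mirrors the block's
            # position in the reconstructed image, as the scheme requires.
            fmap[r, c] = 10 * np.log10(255.0**2 / mse) if mse > 0 else 99.0
    return fmap
```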
  • The fidelity map includes a plurality of first elements, and the plurality of second image blocks correspond one-to-one to the plurality of first elements; the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • A first element may also be referred to as a pixel point of the fidelity map.
  • The image is divided into basic units for the fidelity calculation, so the fidelity map contains as many first elements as there are basic units; any first element has two properties: its fidelity value and the position of that fidelity value in the fidelity map.
  • In this solution, the fidelity map is a two-dimensional array. The reconstructed image is divided to obtain a plurality of second image blocks, and the fidelity map is obtained from the fidelity values of these blocks: the plurality of second image blocks determine a plurality of first elements in one-to-one correspondence, the value of any first element is the fidelity value of its corresponding second image block, and the position of any first element in the fidelity map is the same as the position of its corresponding second image block in the reconstructed image or in the preset area of the reconstructed image. Each first element thus characterizes the fidelity of the region of the reconstructed image, or of its preset area, corresponding to the element's position, which makes the fidelity map suitable for characterizing the distortion intensity information of the encoded image.
  • The second image block may include three color components, in which case the fidelity map is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of the second image block corresponding to that first element; and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The fidelity map is thus a three-dimensional array with the dimensions of color component, width and height: the height indicates that the two-dimensional array under any color component A includes multiple rows of first elements, the width indicates that it includes multiple columns of first elements, the number of first elements equals the product of the width and the height, and color component A is any one of the three color components.
  • In this solution, the original image and the reconstructed image each include three color components. When calculating the fidelity map, a two-dimensional fidelity map is calculated under each color component, and the two-dimensional arrays under the three color components together constitute the three-dimensional fidelity map. The first element in the two-dimensional array under any color component A represents, for that component, the fidelity of the region of the reconstructed image, or of its preset area, corresponding to the element's position; the three-dimensional fidelity map can therefore characterize the distortion intensity information of the three color components of the encoded image (see the sketch below).
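  • A minimal sketch of the three-dimensional form, assuming (H, W, 3) inputs with one plane per color component and per-block MSE as the illustrative metric; all names are hypothetical.

```python
import numpy as np

def fidelity_map_3d(original: np.ndarray, reconstructed: np.ndarray,
                    block: int = 64) -> np.ndarray:
    """Three-dimensional fidelity map: (color component, height, width)."""
    h, w, _ = original.shape
    rows, cols = h // block, w // block
    fmap = np.zeros((3, rows, cols), dtype=np.float32)
    for k in range(3):                         # one 2-D array per component
        for r in range(rows):
            for c in range(cols):
                o = original[r*block:(r+1)*block, c*block:(c+1)*block, k]
                x = reconstructed[r*block:(r+1)*block, c*block:(c+1)*block, k]
                fmap[k, r, c] = np.mean((o.astype(np.float64) - x) ** 2)
    return fmap                                # the three 2-D arrays stacked
```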
  • Encoding the fidelity map to obtain the second code stream may include: performing entropy encoding on any first element to obtain the second code stream, where the entropy encoding of any first element is independent of the entropy encoding of the other first elements; or determining the probability distribution of the value of any first element, or a predicted value of that first element, according to the value of at least one already-encoded first element, and entropy encoding that first element according to this probability distribution or predicted value, to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
  • In this solution, for any first element in the fidelity map: if no first element has been encoded yet, entropy encoding is performed on that first element directly to obtain its code stream; if at least one first element has already been encoded, the probability distribution of the value of that first element, or its predicted value, is determined according to the value of at least one already-encoded first element, and that first element is then entropy encoded accordingly to obtain its code stream.
  • The fidelity map is encoded to obtain the second code stream, that is, each first element in the fidelity map is encoded to obtain its code stream, and the second code stream includes the code stream of every first element. In the entropy encoding process, the values of already-encoded first elements can be used to determine the probability distribution of the value of the currently encoded first element or its predicted value; for example, the values of the first elements adjacent to the left, above and upper-left of the current first element are used. Entropy coding the current first element according to this probability distribution or predicted value helps improve the entropy coding efficiency (see the sketch below).
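  • A minimal sketch of neighbor-based prediction for the entropy coding stage, assuming a simple average of the left, above and upper-left already-encoded first elements; the actual arithmetic coder driven by the prediction is omitted, and the averaging rule is an illustrative assumption.

```python
import numpy as np

def predict_element(fmap: np.ndarray, r: int, c: int) -> float:
    """Predict the current first element from already-encoded neighbors.

    The patent only requires that the prediction (or probability model)
    depend on previously encoded first elements; averaging is one choice.
    """
    neighbors = []
    if c > 0:
        neighbors.append(fmap[r, c - 1])        # left
    if r > 0:
        neighbors.append(fmap[r - 1, c])        # above
    if r > 0 and c > 0:
        neighbors.append(fmap[r - 1, c - 1])    # upper-left
    if not neighbors:                           # first element: no context,
        return 0.0                              # encode it independently
    return float(np.mean(neighbors))

# The encoder would then entropy-code the residual fmap[r, c] - prediction,
# which is typically close to zero and cheaper to code than the raw value.
```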
  • Encoding the fidelity map to obtain the second code stream may alternatively include: quantizing any first element to obtain a quantized first element, and encoding the quantized first element to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements.
  • The quantization step sizes used for the plurality of first elements may be the same or different.
  • In this solution, the fidelity map is encoded to obtain the second code stream, that is, each first element in the fidelity map is quantized and then encoded to obtain its code stream; quantizing the first elements before encoding helps reduce the coding overhead (see the sketch below).
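  • A minimal sketch of the quantization and the matching inverse quantization, assuming a uniform quantizer with a per-element step size; both function names are hypothetical.

```python
def quantize(value: float, step: float) -> int:
    """Quantize a first element with a given step size (uniform quantizer).

    The patent allows the step size to differ from element to element.
    """
    return int(round(value / step))

def dequantize(level: int, step: float) -> float:
    """Inverse quantization performed at the decoding end; the result is the
    reconstruction fidelity value (the original value plus quantization
    error)."""
    return level * step
```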
  • According to a second aspect, the present application relates to a decoding method. The method is performed by a decoding device and includes: decoding a first code stream to obtain a reconstructed image of an original image; and decoding a second code stream to obtain a reconstructed map of a fidelity map, wherein the second code stream is obtained by encoding the fidelity map, and the reconstructed map of the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • In this solution, the original image is encoded to obtain the first code stream, and the fidelity map is encoded to obtain the second code stream; the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image, where the distortion reflects the difference between the two.
  • The decoding end decodes the first code stream to obtain a reconstructed image of the original image, and decodes the second code stream to obtain a reconstructed map of the fidelity map. If the encoding is lossless, the reconstructed map of the fidelity map is identical to the fidelity map; if the encoding is lossy, the reconstructed map includes the coding distortion generated by encoding the fidelity map.
  • In either case, the reconstructed map of the fidelity map can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image; therefore, the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
  • The fidelity map includes the fidelity value of any second image block among a plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and the original image block corresponding to it.
  • The plurality of second image blocks are obtained by dividing the reconstructed image and correspond one-to-one to a plurality of original image blocks, where an original image block is an image block in the original image (for example, the aforementioned first image block). The plurality of original image blocks are obtained by dividing the original image and the plurality of second image blocks by dividing the reconstructed image, with the same division strategy used for both; or the plurality of original image blocks are obtained by dividing a preset area of the original image and the plurality of second image blocks by dividing a preset area of the reconstructed image, again with the same division strategy used for both.
  • The fidelity map includes a plurality of first elements, and the plurality of second image blocks correspond one-to-one to the plurality of first elements; the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The second image block may include three color components, in which case the fidelity map is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of the second image block corresponding to that first element; and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • Decoding the second code stream to obtain the reconstructed map of the fidelity map may include: decoding the second code stream to obtain the reconstruction fidelity value of any first element, and obtaining the reconstructed map of the fidelity map according to the reconstruction fidelity value of any first element.
  • The reconstruction fidelity value of a first element is the reconstruction of that first element's value.
  • The position of the reconstruction fidelity value of any first element in the reconstructed map of the fidelity map is determined according to the position of the second image block corresponding to that first element in the reconstructed image; or the second code stream includes the position of that first element in the fidelity map, and the position of its reconstruction fidelity value in the reconstructed map is determined according to that position.
  • In this solution, the second code stream includes the code stream of every first element in the fidelity map, so the reconstruction fidelity value of any first element can be obtained by decoding the second code stream. If the encoding is lossless, the reconstruction fidelity value of a first element equals the value of that first element; if the encoding is lossy, the reconstruction fidelity value includes the coding distortion generated by encoding that first element, i.e., it is the sum of the first element's value and the coding distortion. The reconstructed map of the fidelity map can thus be obtained from the reconstruction fidelity values of the first elements and can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image; therefore, the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
  • The second code stream may be obtained by encoding quantized first elements. In that case, decoding the second code stream to obtain the reconstructed map of the fidelity map includes: decoding the second code stream to obtain the reconstruction fidelity value of the quantized first element; performing inverse quantization on the reconstruction fidelity value of the quantized first element to obtain the reconstruction fidelity value of the first element; and obtaining the reconstructed map of the fidelity map according to the reconstruction fidelity values of the first elements.
  • In this solution, the encoding end may quantize a first element and then encode it to obtain its code stream, so what the decoding end obtains by decoding the second code stream may be the reconstruction fidelity value of the quantized first element, which must then be inverse quantized to recover the reconstruction fidelity value of the first element. In this way, the distortion intensity information is obtained at the decoding end while the coding overhead is reduced.
  • The method may further include: processing the reconstructed image or a preset area of the reconstructed image according to the reconstructed map of the fidelity map, so as to improve the image quality of the reconstructed image or of the preset area; or determining, according to the reconstructed map of the fidelity map, whether to apply the reconstructed image.
  • In this solution, the decoding end may process the reconstructed image or the preset area of the reconstructed image according to the reconstructed map of the fidelity map, so as to improve its image quality, or may determine according to the reconstructed map whether to apply the reconstructed image, thus facilitating the application of the reconstructed image (see the sketch below).
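  • A minimal sketch of map-guided post-processing for one color component, assuming PSNR-valued fidelity elements (as in the earlier sketch), a hypothetical quality threshold, and a Gaussian filter from SciPy standing in for whatever enhancement filter a real decoder would apply.

```python
import numpy as np
from scipy.ndimage import gaussian_filter  # illustrative filter choice

def enhance_low_fidelity_regions(recon: np.ndarray, fid_recon: np.ndarray,
                                 block: int = 64, thresh: float = 35.0):
    """Post-process a reconstructed image guided by the reconstructed
    fidelity map; only regions reported as low-fidelity are filtered.

    The threshold and filter are illustrative assumptions; the patent only
    requires that the processing be driven by the (reconstructed) map.
    """
    out = recon.copy().astype(np.float32)
    rows, cols = fid_recon.shape
    for r in range(rows):
        for c in range(cols):
            if fid_recon[r, c] < thresh:       # low-fidelity region
                blk = out[r*block:(r+1)*block, c*block:(c+1)*block]
                out[r*block:(r+1)*block, c*block:(c+1)*block] = \
                    gaussian_filter(blk, sigma=0.8)
    return out
```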
  • According to a third aspect, the present application relates to a decoding method. The method is performed by a decoding device and includes: decoding a first code stream to obtain a reconstructed image of an original image and target quantization parameter information, where the target quantization parameter information includes the quantization parameter values of all or some of a plurality of second image blocks of the reconstructed image; and constructing a quantization parameter map of the reconstructed image according to the target quantization parameter information, wherein the quantization parameter map of the reconstructed image is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • The main purpose of quantization parameters is to perform inverse quantization operations; at the same time, the quantization parameters themselves reflect the signal distortion (fidelity), so the quantization parameter map of the reconstructed image constructed from the quantization parameters can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • In some decoding schemes, the decoding end performs the decoding operation on the first code stream obtained by encoding the original image without obtaining the quantization parameter values of the individual regions (second image blocks) of the reconstructed image. In this solution, the decoding end performs the decoding operation on the first code stream and does obtain these quantization parameter values; from the quantization parameter values of all or some of the second image blocks, a quantization parameter map of the reconstructed image can be constructed, and this map can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image. Therefore, the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
  • The second image block is, for example, a coding unit.
  • In this solution, the encoding end divides the original image into a plurality of coding units and encodes them to obtain the first code stream. The decoding end decodes the first code stream and obtains the reconstructed image of the original image together with the target quantization parameter information, which includes the quantization parameter values of all or some of the plurality of coding units; the quantization parameter map of the reconstructed image can then be constructed from the target quantization parameter information. The quantization parameter map of the reconstructed image is a form of fidelity map: when the target quantization parameter information includes the quantization parameter values of all coding units, the quantization parameter map is the fidelity map of the entire reconstructed image; when it includes the quantization parameter values of only some coding units, the quantization parameter map is the fidelity map of a preset area of the reconstructed image. The quantization parameter map can therefore be used to characterize the fidelity of the reconstructed image or of a preset area of the reconstructed image, and the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end (see the sketch below).
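  • A minimal sketch of assembling a quantization parameter map from per-CU values parsed out of the first code stream, assuming a uniform CU grid; the cu_qps mapping and the NaN placeholder for unsignaled CUs are illustrative assumptions.

```python
import numpy as np

def build_qp_map(cu_qps: dict, img_h: int, img_w: int, cu: int = 64):
    """Assemble a quantization parameter map for a reconstructed image.

    cu_qps maps a CU's (row, col) grid position to the QP value parsed from
    the first code stream. Entries missing because QP was signaled for only
    some CUs are left as NaN here and would be filled from spatial or
    temporal neighbors, as described below.
    """
    rows, cols = img_h // cu, img_w // cu
    qp_map = np.full((rows, cols), np.nan, dtype=np.float32)
    for (r, c), qp in cu_qps.items():
        # The second element's position mirrors the CU's position
        # in the reconstructed image.
        qp_map[r, c] = qp
    return qp_map
```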
  • The quantization parameter map of the reconstructed image includes a plurality of second elements, and the plurality of second image blocks correspond one-to-one to the plurality of second elements; the value of any second element is the quantization parameter value of the second image block corresponding to that second element, and the position of that second element in the quantization parameter map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • A second element may also be referred to as a pixel of the quantization parameter map.
  • The second image block may include three color components, in which case the quantization parameter map of the reconstructed image is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of the color component A of the second image block corresponding to that second element; and the position of that second element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • Constructing the quantization parameter map of the reconstructed image according to the target quantization parameter information may include: when the target quantization parameter information includes the quantization parameter values of some of the plurality of coding units, obtaining the quantization parameter value of a target coding unit according to the quantization parameter values of the partial coding units and/or a reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image and the target coding unit is a coding unit among the plurality of coding units other than the partial coding units; and obtaining the quantization parameter map of the reconstructed image according to the quantization parameter values of the partial coding units and the quantization parameter value of the target coding unit.
  • In this solution, when the target quantization parameter information includes the quantization parameter values of all coding units, the quantization parameter map of the reconstructed image is obtained directly from those values, and the resulting map can be used to characterize the fidelity of the entire reconstructed image. When the target quantization parameter information includes the quantization parameter values of only some coding units, the quantization parameter map may be obtained from the quantization parameter values of the partial coding units alone, in which case it can be used to characterize the fidelity of a preset area of the reconstructed image; or it may be obtained from the quantization parameter values of the partial coding units together with the reference quantization parameter map, i.e., the quantization parameter map of a reference image of the reconstructed image. From the reference quantization parameter map, the quantization parameter value of any coding unit other than the partial coding units can be derived, so that the quantization parameter value of every coding unit is available; the map obtained in this case can characterize the fidelity of the entire reconstructed image or of a preset area of the reconstructed image. Therefore, whether the decoded target quantization parameter information includes the quantization parameter values of all or only some of the plurality of coding units, a quantization parameter map characterizing the fidelity of the reconstructed image or of a preset area of the reconstructed image can be obtained.
  • Obtaining the quantization parameter value of the target coding unit according to the quantization parameter values of the partial coding units may include: determining the quantization parameter value of the target coding unit according to the quantization parameter value of at least one coding unit among the partial coding units.
  • That is, the quantization parameters of the spatial neighborhood of a coding unit may be used for filling.
  • In this solution, when the target quantization parameter information includes the quantization parameter values of only some coding units, the quantization parameter value of any coding unit other than the partial coding units can be determined from the quantization parameter value of at least one of the partial coding units, which guarantees that the quantization parameter value of every coding unit can be obtained and thus that the quantization parameter map of the reconstructed image can be constructed. When the map is obtained from the quantization parameter values of all coding units, it can be used to characterize the fidelity of the entire reconstructed image; when it is obtained from the quantization parameter values of only some coding units, it can be used to characterize the fidelity of a preset area of the reconstructed image (a sketch of such spatial filling follows).
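  • A minimal sketch of spatial filling, assuming the missing entries are marked NaN as in the previous sketch and that averaging the available four-connected neighbors is an acceptable fill rule; the patent only requires that the target CU's value be derived from at least one known coding unit.

```python
import numpy as np

def fill_from_spatial_neighbors(qp_map: np.ndarray) -> np.ndarray:
    """Fill missing CU QP values from the spatial neighborhood.

    Isolated blocks whose neighbors are all missing stay NaN; a real
    implementation would iterate or fall back to a default QP.
    """
    out = qp_map.copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            if np.isnan(out[r, c]):
                neigh = [qp_map[rr, cc]
                         for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                         if 0 <= rr < rows and 0 <= cc < cols
                         and not np.isnan(qp_map[rr, cc])]
                if neigh:
                    out[r, c] = float(np.mean(neigh))
    return out
```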
  • The reference quantization parameter map includes a plurality of reference elements, and the value of any reference element is the quantization parameter value of a coding unit in the reference image. Obtaining the quantization parameter value of the target coding unit according to the reference quantization parameter map may include: using the value of a target element as the quantization parameter value of any target coding unit, where the target element is a reference element in the reference quantization parameter map and its position in the reference quantization parameter map is determined according to the position of that target coding unit in the reconstructed image, or according to the position of that target coding unit in the reconstructed image together with the motion vector of that target coding unit.
  • A reference element is another name for a second element.
  • That is, the quantization parameters of the temporal neighborhood of a coding unit may be used for filling.
  • In this solution, the value of the target element in the reference quantization parameter map can be used as the quantization parameter value of a coding unit, where the position of the target element in the reference quantization parameter map is determined according to the position of the coding unit in the reconstructed image, optionally offset by the motion vector of the coding unit. This guarantees that the quantization parameter value of every coding unit can be obtained, so the quantization parameter map of the reconstructed image can be constructed from the quantization parameter values of some or all of the plurality of coding units. When it is constructed from the quantization parameter values of all coding units, the map can be used to characterize the fidelity of the entire reconstructed image; when it is constructed from the quantization parameter values of only some coding units, it can be used to characterize the fidelity of a preset area of the reconstructed image (a sketch of such temporal filling follows).
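  • A minimal sketch of temporal filling from a reference quantization parameter map, assuming a hypothetical motion_vectors mapping from CU grid position to a pixel-unit motion vector; rounding the motion vector to CU-grid resolution is an illustrative simplification.

```python
import numpy as np

def fill_from_reference(qp_map, ref_qp_map, motion_vectors, cu: int = 64):
    """Fill missing CU QP values from the reference quantization parameter map.

    The target element sits at the CU's own grid position, offset by the
    CU's motion vector (dy, dx) in pixels when one is available.
    """
    out = qp_map.copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            if np.isnan(out[r, c]):
                dy, dx = motion_vectors.get((r, c), (0, 0))
                rr = min(max(r + int(round(dy / cu)), 0), rows - 1)
                cc = min(max(c + int(round(dx / cu)), 0), cols - 1)
                out[r, c] = ref_qp_map[rr, cc]   # value of the target element
    return out
```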
  • The method may further include: storing the reconstructed image in association with the quantization parameter map of the reconstructed image, so that the reconstructed image can be used as a reference image and its quantization parameter map as a reference quantization parameter map.
  • In this solution, the quantization parameter map of the reconstructed image may be stored and used as a reference quantization parameter map for constructing the quantization parameter maps of subsequently decoded images, thereby facilitating their construction.
  • The method may further include: processing the reconstructed image or a preset region of the reconstructed image according to the quantization parameter map of the reconstructed image, so as to improve the image quality of the reconstructed image or of the preset region; or determining, according to the quantization parameter map of the reconstructed image, whether to apply the reconstructed image.
  • In this solution, the decoding end may process the reconstructed image or the preset region of the reconstructed image according to the quantization parameter map of the reconstructed image, so as to improve its image quality, or may determine according to the quantization parameter map whether to apply the reconstructed image, thus facilitating the application of the reconstructed image.
  • The present application further relates to an encoding device; for the beneficial effects, refer to the description of the first aspect, which is not repeated here. The encoding device includes: a video encoder for encoding an original image to obtain a first code stream; and a fidelity map encoder for encoding a fidelity map to obtain a second code stream, wherein the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of a reconstructed image, the reconstructed image being obtained by decoding the first code stream.
  • The encoding device may further include a fidelity map calculator configured to: divide the original image into a plurality of first image blocks and divide the reconstructed image into a plurality of second image blocks, wherein the division strategy used for the original image is the same as that used for the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or divide a preset area of the original image into a plurality of first image blocks and divide a preset area of the reconstructed image into a plurality of second image blocks, wherein the division strategy used for the preset area of the original image is the same as that used for the preset area of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and calculate the fidelity value of any second image block from that second image block and its corresponding first image block, the fidelity map including the fidelity value of that second image block.
  • The fidelity map includes a plurality of first elements, and the plurality of second image blocks correspond one-to-one to the plurality of first elements; the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The second image block may include three color components, in which case the fidelity map is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of the second image block corresponding to that first element; and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The fidelity map encoder may be specifically configured to: perform entropy encoding on any first element to obtain the second code stream, where the entropy encoding of any first element is independent of the entropy encoding of the other first elements; or determine the probability distribution of the value of any first element, or a predicted value of that first element, according to the value of at least one already-encoded first element, and entropy encode that first element according to this probability distribution or predicted value, to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
  • The fidelity map encoder may alternatively be configured to: quantize any first element to obtain a quantized first element, and encode the quantized first element to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
  • The decoding device includes: a video decoder for decoding a first code stream to obtain a reconstructed image of an original image; and a fidelity map decoder for decoding a second code stream to obtain a reconstructed map of a fidelity map, wherein the second code stream is obtained by encoding the fidelity map, and the reconstructed map of the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • The fidelity map includes the fidelity value of any second image block among a plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and the original image block corresponding to it.
  • The fidelity map includes a plurality of first elements, and the plurality of second image blocks correspond one-to-one to the plurality of first elements; the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The second image block may include three color components, in which case the fidelity map is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of the second image block corresponding to that first element; and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The fidelity map decoder may be specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of any first element, and obtain the reconstructed map of the fidelity map according to the reconstruction fidelity value of any first element.
  • The second code stream may be obtained by encoding quantized first elements; in that case the fidelity map decoder is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of the quantized first element; perform inverse quantization on the reconstruction fidelity value of the quantized first element to obtain the reconstruction fidelity value of the first element; and obtain the reconstructed map of the fidelity map according to the reconstruction fidelity values of the first elements.
  • The decoding device includes: a video decoder for decoding a first code stream to obtain a reconstructed image of an original image and target quantization parameter information, where the target quantization parameter information includes the quantization parameter values of all or some of a plurality of second image blocks in the reconstructed image; and a quantization parameter map builder for constructing a quantization parameter map of the reconstructed image according to the target quantization parameter information, wherein the quantization parameter map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • The second image block is, for example, a coding unit.
  • The quantization parameter map of the reconstructed image includes a plurality of second elements, and the plurality of second image blocks correspond one-to-one to the plurality of second elements; the value of any second element is the quantization parameter value of the second image block corresponding to that second element, and the position of that second element in the quantization parameter map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The second image block may include three color components, in which case the quantization parameter map of the reconstructed image is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of the color component A of the second image block corresponding to that second element; and the position of that second element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The quantization parameter map builder may be specifically configured to: when the target quantization parameter information includes the quantization parameter values of some of the plurality of coding units, obtain the quantization parameter value of a target coding unit according to the quantization parameter values of the partial coding units and/or a reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image and the target coding unit is a coding unit among the plurality of coding units other than the partial coding units; and obtain the quantization parameter map of the reconstructed image according to the quantization parameter values of the partial coding units and the quantization parameter value of the target coding unit.
  • The reference quantization parameter map includes a plurality of reference elements, and the value of any reference element is the quantization parameter value of a coding unit in the reference image; the quantization parameter map builder is specifically configured to: use the value of a target element as the quantization parameter value of any target coding unit, where the target element is a reference element in the reference quantization parameter map and its position in the reference quantization parameter map is determined according to the position of that target coding unit in the reconstructed image, or according to the position of that target coding unit in the reconstructed image together with the motion vector of that target coding unit.
  • The present application further relates to an encoding device; for the beneficial effects, refer to the description of the first aspect, which is not repeated here. The encoding device has the function of implementing the behavior in the method example of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above function.
  • The encoding device includes a processing unit configured to: encode an original image to obtain a first code stream; and encode a fidelity map to obtain a second code stream, wherein the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of a reconstructed image, the reconstructed image being obtained by decoding the first code stream.
  • The processing unit may be further configured to: divide the original image into a plurality of first image blocks and divide the reconstructed image into a plurality of second image blocks, wherein the division strategy used for the original image is the same as that used for the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or divide a preset area of the original image into a plurality of first image blocks and divide a preset area of the reconstructed image into a plurality of second image blocks, wherein the division strategy used for the preset area of the original image is the same as that used for the preset area of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and calculate the fidelity value of any second image block from that second image block and its corresponding first image block, wherein the fidelity map includes the fidelity value of that second image block, and the fidelity value of that second image block is used to represent the distortion between that second image block and its corresponding first image block.
  • The fidelity map includes a plurality of first elements, and the plurality of second image blocks correspond one-to-one to the plurality of first elements; the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • The second image block may include three color components, in which case the fidelity map is a three-dimensional array with the three dimensions of color component, width and height. The two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of the second image block corresponding to that first element; and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • the processing unit is specifically configured to: perform entropy coding on any first element to obtain the second code stream, and the entropy coding on any first element is independent of other Entropy encoding of the first element; or, determining the probability distribution of the value of any first element or the prediction of the any first element according to the value of at least one of the encoded first elements value, and perform entropy coding on any first element according to the probability distribution of the value of the any first element or the predicted value of the any first element, so as to obtain the second code stream; wherein, The second code stream includes the code streams of the plurality of first elements.
• the processing unit is specifically configured to: quantize any first element to obtain a quantized first element, and encode the quantized first element to obtain the second code stream; wherein the second code stream includes the code streams of the plurality of first elements.
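A minimal sketch of this quantize-then-encode variant is shown below; the quantization step of 0.5 is an assumed parameter, and the integer level is what would actually be entropy coded into the second code stream:

```python
def quantize_fidelity(value, step=0.5):
    """Encoder-side quantization of one fidelity value and the matching
    decoder-side dequantization (reconstruction)."""
    level = round(value / step)      # integer level written to the code stream
    reconstruction = level * step    # value recovered at the decoder
    return level, reconstruction
```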
• the present application relates to a decoding device, and the beneficial effects can be found in the description of the second aspect, which will not be repeated here.
• the decoding device has the function of implementing the behavior in the method example of the second aspect above.
  • the functions can be implemented by hardware, and can also be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
• the decoding device includes a processing unit, and the processing unit is configured to: decode the first code stream to obtain a reconstructed image of the original image; and decode the second code stream to obtain a reconstruction map of the fidelity map, wherein the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
• the fidelity map includes a fidelity value of any second image block in the plurality of second image blocks, and the fidelity value of the any second image block is used to represent the distortion between the any second image block and the original image block corresponding to the any second image block.
• the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, and the value of any first element in the plurality of first elements is the fidelity value of the second image block corresponding to the any first element; the position of the any first element in the fidelity map is determined according to the position of the second image block corresponding to the any first element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
• the second image block includes three color components, and the fidelity map is a three-dimensional array with the three dimensions of color component, width and height; the two-dimensional array under any color component A in the fidelity map includes a plurality of first elements, and the value of any first element in the plurality of first elements is the fidelity value of the color component A of the second image block corresponding to the any first element; the position of the any first element in the two-dimensional array under the color component A is determined according to the position of the second image block corresponding to the any first element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
• the processing unit is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of any first element, and obtain the reconstruction map of the fidelity map according to the reconstruction fidelity values of the plurality of first elements.
• the second code stream is obtained by encoding the quantized first elements; the processing unit is specifically configured to: decode the second code stream to obtain the reconstruction value of the quantized first element, dequantize the reconstruction value to obtain the reconstruction fidelity value of the any first element, and obtain the reconstruction map of the fidelity map according to the reconstruction fidelity values of the plurality of first elements.
• the present application relates to a decoding device, and the beneficial effects can be found in the description of the third aspect, which will not be repeated here.
• the decoding device has the function of implementing the behavior in the method example of the third aspect above.
  • the functions can be implemented by hardware, and can also be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
• the decoding device includes a processing unit, and the processing unit is configured to: decode the first code stream to obtain a reconstructed image of the original image and target quantization parameter information, where the target quantization parameter information includes the quantization parameter values of all or some of the second image blocks in the reconstructed image; and construct a quantization parameter map of the reconstructed image according to the target quantization parameter information, wherein the quantization parameter map of the reconstructed image is used to represent the distortion between at least part of the original image and at least part of the reconstructed image.
  • the second image block is a coding unit.
• the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of second elements, and the value of any second element in the plurality of second elements is the quantization parameter value of the second image block corresponding to the any second element; the position of the any second element in the quantization parameter map of the reconstructed image is determined according to the position of the second image block corresponding to the any second element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
• the second image block includes three color components, and the quantization parameter map of the reconstructed image is a three-dimensional array with the three dimensions of color component, width and height; the two-dimensional array under any color component A in the quantization parameter map includes a plurality of second elements, and the value of any second element in the plurality of second elements is the quantization parameter value of the color component A of the second image block corresponding to the any second element; the position of the any second element in the two-dimensional array under the color component A is determined according to the position of the second image block corresponding to the any second element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
• the processing unit is specifically configured to: when the target quantization parameter information includes the quantization parameter values of some of the plurality of coding units, obtain the quantization parameter values of the target coding units according to the quantization parameter values of those coding units and/or a reference quantization parameter map, wherein the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image, and the target coding units are the coding units other than the partial coding units among the plurality of coding units; and obtain the quantization parameter map of the reconstructed image according to the quantization parameter values of the partial coding units and the quantization parameter values of the target coding units.
• the reference quantization parameter map includes a plurality of reference elements, and the value of any reference element in the plurality of reference elements is the quantization parameter value of a coding unit in the reference image; the processing unit is specifically configured to use the value of a target element as the quantization parameter value of any target coding unit, wherein the target element is a reference element in the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of the any target coding unit in the reconstructed image, or according to the position of the any target coding unit in the reconstructed image together with the motion vector of the any target coding unit.
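A sketch of this co-located or motion-compensated lookup is given below; representing the motion vector in pixels and rounding it to whole map elements are assumptions of the sketch, not requirements of the scheme:

```python
def infer_qp(ref_qp_map, cu_row, cu_col, mv=None, cu_size=64):
    """Take the QP of the co-located (or motion-shifted) reference element
    as the QP of a target coding unit whose QP was not signalled.

    `mv` is an optional (dy, dx) motion vector in pixels; positions are
    clamped so the lookup stays inside the reference QP map.
    """
    rows, cols = ref_qp_map.shape
    r, c = cu_row, cu_col
    if mv is not None:
        r += int(round(mv[0] / cu_size))
        c += int(round(mv[1] / cu_size))
    r = min(max(r, 0), rows - 1)
    c = min(max(c, 0), cols - 1)
    return ref_qp_map[r, c]
```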
  • the method described in the first aspect of the present application may be performed by the apparatus described in the seventh aspect of the present application.
  • Other features and implementations of the method described in the first aspect of the present application directly depend on the functionality and implementation of the device described in the seventh aspect of the present application.
  • the method described in the second aspect of the present application may be performed by the apparatus described in the eighth aspect of the present application.
  • Other features and implementations of the method described in the second aspect of the present application directly depend on the functionality and implementation of the device described in the eighth aspect of the present application.
  • the method described in the third aspect of the present application may be performed by the apparatus described in the ninth aspect of the present application.
• Other features and implementations of the method described in the third aspect of the present application directly depend on the functionality and implementation of the apparatus described in the ninth aspect of the present application.
  • the present application relates to an apparatus for encoding a video stream, comprising a processor and a memory.
  • the memory stores instructions that cause the processor to perform the method of the first aspect.
  • the present application relates to an apparatus for decoding a video stream, comprising a processor and a memory.
  • the memory stores instructions that cause the processor to perform the method of the second aspect.
  • the present application relates to an apparatus for decoding a video stream, comprising a processor and a memory.
  • the memory stores instructions that cause the processor to perform the method of the third aspect.
  • the present application provides a computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to encode video data.
  • the instructions cause the one or more processors to perform the method of the first, second or third aspect or any possible embodiment of the first, second or third aspect.
• the present application relates to a computer program product comprising program code which, when run, performs the method of the first, second or third aspect, or of any possible embodiment of the first, second or third aspect.
  • the present application relates to an encoder (20) comprising a processing circuit for performing the method of the first aspect or any possible embodiment of the first aspect.
  • the present application relates to a decoder (30) comprising a processing circuit for performing the method of the second or third aspect or any of the possible embodiments of the second or third aspect.
• the present application relates to an encoder comprising: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program executed by the processors, wherein the program, when executed by the processors, causes the encoder to perform the method of the first aspect or of any possible embodiment of the first aspect.
• the present application relates to a decoder comprising: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program executed by the processors, wherein the program, when executed by the processors, causes the decoder to perform the method of the second or third aspect or of any possible embodiment of the second or third aspect.
• the application relates to a non-transitory computer-readable storage medium comprising program code which, when executed by a computer device, performs the method of the first, second or third aspect, or of any possible embodiment of the first, second or third aspect.
  • the present application relates to a non-transitory storage medium comprising a bitstream encoded according to the first aspect or the method in any of the possible embodiments of the first aspect.
  • the present application relates to an electronic device, the electronic device comprising the encoding device described in the fourth aspect and/or the decoding device described in the fifth aspect or the sixth aspect.
  • FIG. 1A is a block diagram of an example video coding system for implementing embodiments of the present application, wherein the system utilizes a neural network to encode or decode video images;
  • FIG. 1B is a block diagram of another example of a video coding system for implementing embodiments of the present application, wherein the video encoder and/or video decoder use a neural network to encode or decode video images;
• FIG. 2 is a block diagram of an example of a video encoder for implementing embodiments of the present application, wherein the video encoder 20 uses a neural network to encode video images;
• FIG. 3 is a block diagram of an example of a video decoder for implementing embodiments of the present application, wherein the video decoder 30 uses a neural network to decode video images;
  • FIG. 4 is a schematic block diagram of a video decoding apparatus for implementing an embodiment of the present application.
  • FIG. 5 is a schematic block diagram of a video decoding apparatus for implementing an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an image codec based on a deep neural network for implementing an embodiment of the present application
  • FIG. 7 is a schematic block diagram of an encoding method provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of division of an original image or a preset area of an original image according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of division of a reconstructed image or a preset area of a reconstructed image according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a fidelity map provided by an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a decoding method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a reconstruction map of a fidelity map provided by an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of another decoding method provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a quantization parameter map provided by an embodiment of the present application.
• FIG. 15 is a schematic diagram of another quantization parameter map provided by an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of an encoding device provided by an embodiment of the application.
• FIG. 17 is a schematic block diagram of a decoding device provided by an embodiment of the present application.
  • FIG. 18 is a schematic block diagram of a decoding device provided by an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of an encoding apparatus provided by an embodiment of the present application.
  • FIG. 20 is a schematic block diagram of a decoding apparatus provided by an embodiment of the present application.
  • FIG. 21 is a schematic block diagram of a decoding apparatus provided by an embodiment of the present application.
• the embodiments of the present application provide a video image compression technology based on artificial intelligence, in particular a neural network (Neural Network, NN)-based video compression technology, and specifically provide neural-network-based inter-frame prediction, intra-frame prediction and filtering techniques to improve traditional hybrid video codec systems.
  • Video coding generally refers to the processing of sequences of images that form a video or video sequence. In the field of video coding, the terms “picture”, “frame” or “image” may be used as synonyms.
• Video coding (or coding for short) comprises two parts: video encoding and video decoding. Video encoding is performed on the source side and typically involves processing (e.g., compressing) the original video image to reduce the amount of data required to represent the video image (and thus store and/or transmit more efficiently). Video decoding is performed on the destination side and typically involves inverse processing with respect to the encoder to reconstruct the video image.
  • the "encoding" of a video image should be understood as the "encoding” or “decoding” of a video image or a video sequence.
  • the encoding part and the decoding part are also collectively referred to as codec (encoding and decoding, CODEC).
• In the case of lossless video coding, the original video image can be reconstructed, i.e., the reconstructed video image has the same quality as the original video image (assuming no transmission loss or other data loss during storage or transmission).
• In the case of lossy video coding, further compression is performed through quantization, etc. to reduce the amount of data required to represent the video image, and the decoder side cannot completely reconstruct the video image, i.e., the quality of the reconstructed video image is lower or worse than that of the original video image.
  • Video coding standards fall under the category of "lossy hybrid video codecs" (ie, combining spatial and temporal prediction in the pixel domain with 2D transform coding in the transform domain for applying quantization).
  • Any image in a video sequence is usually partitioned into sets of non-overlapping blocks, usually encoded at the block level.
• the encoder typically processes, i.e., encodes, the video at the block (video block) level, e.g., generates a prediction block through spatial (intra) prediction and temporal (inter) prediction, subtracts the prediction block from the current block to obtain a residual block, and transforms and quantizes the residual block to reduce the amount of data to be transmitted (compressed).
  • the encoder needs to repeat the decoder's processing steps so that the encoder and decoder generate the same predictions (eg, intra- and inter-prediction) and/or reconstructed pixels for processing, ie, encoding subsequent blocks.
  • FIG. 1A is a schematic block diagram of an exemplary coding system 10, such as a video coding system 10 (or simply coding system 10) that may utilize the techniques of this application.
  • Video encoder 20 (or encoder 20 for short) and video decoder 30 (or decoder 30 for short) in video coding system 10 represent devices, etc. that may be used to perform techniques in accordance with the various examples described in this application .
  • the decoding system 10 includes a source device 12 for providing encoded image data 21 such as encoded images to a destination device 14 for decoding the encoded image data 21 .
  • the encoded image data is also referred to as a bit stream, a compressed code stream or a code stream, so the encoded image data 21 may also be referred to as a bit stream 21 , a compressed code stream 21 or a code stream 21 .
• the source device 12 includes an encoder 20 and may additionally, i.e., optionally, include an image source 16 , a preprocessor (or preprocessing unit) 18 such as an image preprocessor, and a communication interface (or communication unit) 22 .
• Image source 16 may include or be any type of image capture device for capturing real-world images, etc., and/or any type of image generation device, such as a computer graphics processor for generating computer-animated images, or any type of device for acquiring and/or providing real-world images or computer-generated images (e.g., screen content, virtual reality (VR) images, and/or any combination thereof, e.g., augmented reality (AR) images).
  • the image source may be any type of memory or storage that stores any of the above-mentioned images.
  • the image (or image data) 17 may also be referred to as the original image (or original image data) 17 .
  • the preprocessor 18 is used to receive the (raw) image data 17 and preprocess the image data 17 to obtain a preprocessed image (or preprocessed image data) 19 .
  • the preprocessing performed by the preprocessor 18 may include trimming, color format conversion (eg, from RGB to YCbCr), toning, or denoising. It is understood that the preprocessing unit 18 may be an optional component.
  • a video encoder (or encoder) 20 is used to receive preprocessed image data 19 and to provide encoded image data 21 (described further below with respect to FIG. 2 etc.).
  • the communication interface 22 in the source device 12 can be used to: receive the encoded image data 21 and send the encoded image data 21 (or any other processed version) over the communication channel 13 to another device such as the destination device 14 or any other device for storage or rebuild directly.
  • the destination device 14 includes a decoder 30 , and may additionally, alternatively, include a communication interface (or communication unit) 28 , a post-processor (or post-processing unit) 32 and a display device 34 .
  • the communication interface 28 in the destination device 14 is used to receive the encoded image data 21 (or any other processed version) directly from the source device 12 or from any other source device such as a storage device, for example, the storage device is an encoded image data storage device, The encoded image data 21 is supplied to the decoder 30 .
• Communication interface 22 and communication interface 28 may be used to send or receive the encoded image data (or encoded data) 21 over a direct communication link between source device 12 and destination device 14, such as a direct wired or wireless connection, or over any type of network, such as a wired network, a wireless network or any combination thereof, any type of private network and public network, or any combination thereof.
  • the communication interface 22 may be used to encapsulate the encoded image data 21 into a suitable format such as a message, and/or use any type of transfer encoding or processing to process the encoded image data for transmission over a communication link or communication network transfer on.
  • the communication interface 28 corresponds to the communication interface 22 and may be used, for example, to receive transmission data and process the transmission data using any type of corresponding transmission decoding or processing and/or decapsulation to obtain encoded image data 21 .
• Both the communication interface 22 and the communication interface 28 can be configured as a one-way communication interface, as indicated by the arrow from the source device 12 to the corresponding communication channel 13 of the destination device 14 in FIG. 1A, or as a two-way communication interface, and can be used to send and receive messages, etc., to establish a connection, and to acknowledge and exchange any other information related to the communication link and/or data transfer, such as the transfer of encoded image data.
  • a video decoder (or decoder) 30 is used to receive the encoded image data 21 and provide decoded image data (or decoded image, reconstructed image) 31 (which will be further described below with reference to FIG. 3 etc.).
  • the post-processor 32 is configured to perform post-processing on the decoded image data 31 (also referred to as reconstructed image data) such as a decoded image to obtain post-processed image data 33 such as a post-processed image.
• Post-processing performed by post-processing unit 32 may include, for example, color format conversion (e.g., from YCbCr to RGB), toning, trimming, or resampling, or any other processing used to prepare the decoded image data 31 for display by display device 34, etc.
  • a display device 34 is used to receive post-processed image data 33 to display the image to a user or viewer or the like.
  • Display device 34 may be or include any type of display for representing the reconstructed image, eg, an integrated or external display screen or display.
  • the display screen may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display ), digital light processor (DLP), or any other type of display.
• the decoding system 10 also includes a training engine 25 for training the encoder 20 (especially the mode selection unit 260 in the encoder 20) or the decoder 30 (especially the mode application unit 360 in the decoder 30) to process an input image or image region or image block to generate predicted values for the input image or image region or image block.
• the training data used to train the encoder 20 or the decoder 30 in this embodiment of the present application may be stored in a database (not shown), and the training engine 25 trains a target model based on the training data (for example, a neural network for image inter-frame prediction, image intra-frame prediction or in-loop filtering, etc.). It should be noted that the embodiments of the present application do not limit the source of the training data; for example, the training data may be obtained from the cloud or elsewhere to perform model training.
  • the target model trained by the training engine 25 can be applied to the decoding systems 10, 40, eg, the source device 12 (eg, the encoder 20) or the destination device 14 (eg, the decoder 30) shown in FIG. 1A.
• the training engine 25 can train on the cloud to obtain the target model, and the decoding system 10 then downloads the target model from the cloud and uses it; or the training engine 25 can train on the cloud to obtain the target model and use the target model there, and the decoding system 10 directly obtains the processing result from the cloud.
• For example, the training engine 25 trains a target model with a filtering function, the decoding system 10 downloads the target model from the cloud, and then the loop filter 220 in the encoder 20 or the loop filter 320 in the decoder 30 filters the input reconstructed image or image block based on the target model to obtain a filtered image or image block.
• For another example, the training engine 25 trains a target model with a filtering function, and the decoding system 10 does not need to download the target model from the cloud; the encoder 20 or the decoder 30 transmits the reconstructed image or image block to the cloud, and the cloud filters the reconstructed image or image block through the target model to obtain the filtered image or image block and transmits it to the encoder 20 or the decoder 30 .
• Although FIG. 1A shows source device 12 and destination device 14 as separate devices, device embodiments may also include both source device 12 and destination device 14, or the functions of both, i.e., source device 12 or the corresponding functionality and destination device 14 or the corresponding functionality. In these embodiments, source device 12 or the corresponding functionality and destination device 14 or the corresponding functionality may be implemented using the same hardware and/or software, or by separate hardware and/or software, or any combination thereof.
• Encoder 20 (e.g., video encoder 20) or decoder 30 (e.g., video decoder 30), or both, may be implemented by processing circuitry as shown in FIG. 1B, e.g., one or more microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), discrete logic, hardware, special-purpose processors for video encoding, or any combination thereof.
  • Encoder 20 may be implemented by processing circuitry 46 to include the various modules discussed with reference to encoder 20 of FIG. 2 and/or any other encoder system or subsystem described herein.
• Decoder 30 may be implemented by processing circuitry 46 to include the various modules discussed with reference to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
• the processing circuitry 46 may be used to perform various operations discussed below. As shown in FIG. 5, if parts of the techniques are implemented in software, a device may store the instructions of the software in a suitable non-transitory computer-readable storage medium and execute the instructions in hardware using one or more processors, thereby implementing the techniques of this application.
• Video encoder 20 and video decoder 30 may be integrated in a single device as part of a combined codec (encoder/decoder, CODEC), as shown in FIG. 1B .
• Source device 12 and destination device 14 may include any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, cell phone, smartphone, tablet or tablet computer, camera, desktop computer, set-top box, television, display device, digital media player, video game console, video streaming device (e.g., content service server or content distribution server), broadcast receiving device, broadcast transmitting device, etc., and may use no operating system or any type of operating system.
  • source device 12 and destination device 14 may be equipped with components for wireless communication.
  • source device 12 and destination device 14 may be wireless communication devices.
• the video coding system 10 shown in FIG. 1A is merely exemplary, and the techniques provided herein may be applicable to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding device and the decoding device.
  • data is retrieved from local storage, sent over a network, and so on.
  • the video encoding device may encode and store the data in memory, and/or the video decoding device may retrieve and decode the data from the memory.
  • encoding and decoding are performed by devices that do not communicate with each other but merely encode data to and/or retrieve and decode data from memory.
• Video coding system 40 may include imaging device 41, video encoder 20, video decoder 30 (and/or a video encoder/decoder implemented by processing circuit 46), antenna 42, one or more processors 43, one or more memory stores 44 and/or display devices 45 .
  • imaging device 41, antenna 42, processing circuit 46, video encoder 20, video decoder 30, processor 43, memory storage 44, and/or display device 45 can communicate with each other.
  • video coding system 40 may include only video encoder 20 or only video decoder 30 .
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • display device 45 may be used to present video data.
  • Processing circuitry 46 may include application-specific integrated circuit (ASIC) logic, graphics processors, general purpose processors, and the like.
  • Video coding system 40 may also include an optional processor 43, which may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
• the memory store 44 may be any type of memory, such as volatile memory (e.g., static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.), etc.
  • memory storage 44 may be implemented by cache memory.
  • processing circuitry 46 may include memory (eg, cache memory, etc.) for implementing image buffers, and the like.
• video encoder 20 implemented by logic circuitry may include an image buffer (e.g., implemented by processing circuitry 46 or memory store 44 ) and a graphics processing unit (e.g., implemented by processing circuitry 46 ).
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include video encoder 20 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein.
  • Logic circuits may be used to perform the various operations discussed herein.
• video decoder 30 may be implemented by processing circuitry 46 in a similar manner to implement the various modules discussed with reference to video decoder 30 of FIG. 3 and/or any other decoder systems or subsystems described herein.
• video decoder 30 implemented by logic circuitry may include an image buffer (implemented by processing circuit 46 or memory store 44) and a graphics processing unit (e.g., implemented by processing circuit 46).
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include video decoder 30 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 3 and/or any other decoder system or subsystem described herein.
  • antenna 42 may be used to receive an encoded bitstream of video data.
• the encoded bitstream may include data related to encoded video frames as discussed herein, such as data related to coding partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the coding partitions), indicators, index values, mode selection data, etc.
  • Video coding system 40 may also include video decoder 30 coupled to antenna 42 for decoding the encoded bitstream.
  • Display device 45 is used to present video frames.
  • video decoder 30 may be used to perform the opposite process.
  • video decoder 30 may be operable to receive and parse such syntax elements, decoding the associated video data accordingly.
  • video encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, video decoder 30 may parse such syntax elements and decode related video data accordingly.
  • VVC Versatile Video Coding
  • VCEG ITU-T Video Coding Experts Group
  • MPEG ISO/IEC Motion Picture Experts Group
  • HEVC High-Efficiency Video Coding
  • JCT-VC Joint Collaboration Team on Video Coding
  • the encoder and encoding method and the decoder and decoding method are described below.
  • FIG. 2 is a schematic block diagram of an example of a video encoder 20 for implementing the techniques of this application.
  • the video encoder 20 includes an input terminal (or input interface) 201 , a residual calculation unit 204 , a transform processing unit 206 , a quantization unit 208 , an inverse quantization unit 210 , an inverse transform processing unit 212 , and a reconstruction unit 214 , a loop filter 220 , a decoded picture buffer (DPB) 230 , a mode selection unit 260 , an entropy encoding unit 270 and an output terminal (or output interface) 272 .
  • Mode selection unit 260 may include inter prediction unit 244 , intra prediction unit 254 , and partition unit 262 .
  • Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown).
  • the video encoder 20 shown in FIG. 2 may also be referred to as a hybrid video encoder or a hybrid video codec-based video encoder.
• the inter prediction module/intra prediction module/loop filtering module includes a trained target model (also referred to as a neural network) for processing an input image or image region or image block to generate a predicted value of the input image block.
• For example, the neural network for inter prediction/intra prediction/loop filtering is used to receive an input image or image region or image block.
• the residual calculation unit 204, the transform processing unit 206, the quantization unit 208 and the mode selection unit 260 constitute the forward signal path of the encoder 20, while the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, the inter-frame prediction unit 244 and the intra-frame prediction unit 254 constitute the backward signal path of the encoder, wherein the backward signal path of the encoder 20 corresponds to the decoding signal path of the decoder (see decoder 30 in FIG. 3).
• Inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, loop filter 220, decoded image buffer 230, inter prediction unit 244, and intra prediction unit 254 also make up the "built-in decoder" of video encoder 20 .
  • the encoder 20 may be operable to receive images (or image data) 17, eg, images in a sequence of images forming a video or video sequence, via an input 201 or the like.
  • the received image or image data may also be a preprocessed image (or preprocessed image data) 19 .
• image 17 may also be referred to as the current image, the original image or the image to be coded (especially when distinguishing the current image from other images in video coding, such as other images in the same video sequence, i.e., the video sequence that also includes the current image, e.g., previously encoded and/or decoded images).
  • a (digital) image is or can be viewed as a two-dimensional array or matrix of pixel points with intensity values.
  • the pixels in the array can also be called pixels or pels (short for picture elements).
  • the number of pixels in the array or image in the horizontal and vertical directions (or axes) determines the size and/or resolution of the image.
  • three color components are usually used, that is, an image can be represented as or include an array of three pixel points.
  • an image includes an array of corresponding red, green and blue pixel points.
  • any pixel is usually represented in a luma/chroma format or color space, such as YCbCr, including a luma component denoted by Y (and sometimes also an L) and two chroma components denoted by Cb and Cr.
  • the luminance (luma) component Y represents the luminance or gray level intensity (eg, both are the same in a grayscale image), while the two chrominance (chroma) components Cb and Cr represent the chrominance or color information components .
  • an image in YCbCr format includes a luminance pixel array of luminance pixel value (Y) and two chrominance pixel arrays of chrominance values (Cb and Cr).
• Images in RGB format can be converted or transformed to YCbCr format and vice versa; this process is also known as color transformation or conversion. If an image is black and white, the image may only include an array of luminance pixels. Correspondingly, an image may be, for example, a luminance pixel array in monochrome format, or a luminance pixel array and two corresponding chrominance pixel arrays in 4:2:0, 4:2:2 or 4:4:4 color format.
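For illustration, a full-range BT.601-style RGB-to-YCbCr conversion (the matrix used by JPEG) is sketched below; the exact coefficients depend on the colour standard in use and are an assumption here:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float RGB array in [0, 255] to YCbCr
    using full-range BT.601 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```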
• an embodiment of the video encoder 20 may include an image segmentation unit (not shown in FIG. 2) for segmenting the image 17 into a plurality of (usually non-overlapping) image blocks 203, which are blocks of pixels.
  • the image block 203 is sometimes called the current block 203 , the original block 203 or the split block 203 .
  • an image 17 to be encoded is first divided into non-overlapping image blocks 203, and each image block 203 is processed in turn in a given order (eg, line scan order).
• These image blocks 203 may also be referred to as root blocks, macroblocks (H.264/AVC), or coding tree blocks (Coding Tree Block, CTB) or coding tree units (Coding Tree Unit, CTU) in the H.265/HEVC and VVC standards.
• the segmentation unit can be used to use the same block size for all images in a video sequence and use a corresponding grid that defines the block size, or to vary the block size between images or subsets or groups of images, and to segment any image into corresponding blocks.
  • the video encoder may be used to directly receive image blocks 203 of the image 17, eg, one, several or all of the blocks that make up the image 17.
  • the image block 203 may also be referred to as a current image block or an image block to be encoded.
  • image block 203 is also or can be considered as a two-dimensional array or matrix of pixels with intensity values (pixel values), but image block 203 is smaller than image 17 .
• the image block 203 may comprise one pixel array (e.g., a luminance array in the case of a monochrome image 17, or a luminance or chrominance array in the case of a color image), or three pixel arrays (e.g., one luminance array and two chrominance arrays in the case of a color image 17), or any other number and/or type of arrays depending on the color format employed.
  • the number of pixels in the horizontal and vertical directions (or axes) of the image block 203 defines the size of the image block 203 .
  • the image block 203 may be an array of M ⁇ N (M columns ⁇ N rows) pixel points, or an array of M ⁇ N transform coefficients, or the like.
  • the size of the image block 203 is N ⁇ N, it means that the image block 203 is a two-dimensional pixel array, and its size in both the horizontal and vertical directions is N.
  • the video encoder 20 shown in FIG. 2 is used to encode the image 17 block by block, eg, performing encoding and prediction on any image block 203 .
• the video encoder 20 shown in FIG. 2 may also be used to segment and/or encode an image using slices (also referred to as video slices), where an image may be segmented or encoded using one or more slices (typically non-overlapping).
• Any slice may include one or more blocks (e.g., coding tree units (CTUs)) or one or more groups of blocks (e.g., coding tiles in the H.265/HEVC/VVC standards and bricks in the VVC standard).
  • the video encoder 20 shown in FIG. 2 may also be used to use slice/coding block groups (also referred to as video coding block groups) and/or coding blocks (also referred to as video coding blocks) ) partitioning and/or encoding an image, wherein the image may be partitioned or encoded using one or more slices/encoding block groups (usually non-overlapping), any slice/encoding block group may include one or more blocks (eg CTU) or one or more coding blocks, etc., any coding block may be rectangular or the like, and may include one or more complete or partial blocks (eg CTU).
• the residual calculation unit 204 is configured to calculate the residual block 205 according to the image block 203 and the prediction block 265 (the prediction block 265 will be described in detail later) in the following manner: the pixel values of the prediction block 265 are subtracted from the pixel values of the image block 203 to obtain the residual block 205 in the pixel domain.
  • the encoder 20 performs intra-frame prediction or inter-frame prediction on the image block to obtain the predicted value of the pixels in the image block, and the set of predicted values of the pixels in the image block is called the prediction of the image block, also called the prediction block.
  • the difference between the original value of the pixel in the image block and the predicted value of the pixel in the image block is further calculated.
  • the set of the difference between the original value of the pixel in the image block and the predicted value of the pixel in the image block is called the residual of the image block, also called the residual block.
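In code, this residual computation is a signed element-wise difference, e.g.:

```python
import numpy as np

def residual_block(original_block, prediction_block):
    """Residual = original pixel values minus predicted pixel values.
    int16 is used because the difference of two uint8 blocks can be negative."""
    return original_block.astype(np.int16) - prediction_block.astype(np.int16)
```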
  • the transform processing unit 206 is configured to perform discrete cosine transform (discrete cosine transform, DCT) or discrete sine transform (discrete sine transform, DST) etc. on the pixel point values of the residual block 205 to obtain transform coefficients 207 in the transform domain.
  • Transform coefficients 207 which may also be referred to as transform residual coefficients, represent the residual block 205 in the transform domain.
  • Transform processing unit 206 may be used to apply integer approximations of DCT/DST, such as transforms specified for H.265/HEVC. Compared to the orthogonal DCT transform, this integer approximation is usually scaled by some factor. In order to maintain the norm of the forward and inversely transformed residual blocks, other scaling factors are used as part of the transformation process.
  • the scaling factor is usually chosen according to certain constraints, such as the scaling factor being a power of 2 for the shift operation, the bit depth of the transform coefficients, the trade-off between accuracy and implementation cost, etc.
  • specific scaling factors are specified for the inverse transform by the inverse transform processing unit 212 at the encoder 20 side (and for the corresponding inverse transform at the decoder 30 side by, for example, the inverse transform processing unit 312), and accordingly, can be used at the encoder
  • the 20 side specifies the corresponding scaling factor for the forward transformation through the transformation processing unit 206 .
  • the video encoder 20 (correspondingly, the transform processing unit 206 ) may be configured to output transform parameters such as the type of one or more transforms, eg, directly or after being encoded or compressed by the entropy encoding unit 270 , eg, so that video decoder 30 can receive and decode using transform parameters.
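As a floating-point illustration of the transform step (real codecs use integer approximations with the scaling factors discussed above), a separable 2-D DCT/IDCT pair can be written as:

```python
from scipy.fft import dct, idct

def dct2(block):
    """2-D type-II DCT of a residual block with orthonormal scaling."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """Inverse 2-D DCT matching dct2 above."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')
```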
• the quantization unit 208 is configured to quantize the transform coefficients 207 through, for example, scalar quantization or vector quantization, to obtain quantized transform coefficients 209, abbreviated as quantized coefficients 209 .
  • quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209 .
  • the quantization process may reduce the bit depth associated with some or all of the transform coefficients 207 .
  • n-bit transform coefficients may be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
  • the degree of quantization can be modified by adjusting the quantization parameter (QP).
  • the quantization parameter may be an index into a predefined set of suitable quantization step sizes.
  • Quantization may include dividing by the quantization step size, and corresponding or inverse dequantization performed by the inverse quantization unit 210 or the like may include multiplying by the quantization step size.
  • Embodiments according to some standards such as HEVC may be used to use quantization parameters to determine the quantization step size.
  • the quantization step size can be calculated from the quantization parameter using a fixed-point approximation of an equation involving division.
  • the video encoder 20 may be used to output a quantization parameter (QP), eg, directly or after being encoded or compressed by the entropy encoding unit 270, eg, such that the video Decoder 30 may receive and decode using the quantization parameters.
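The QP-to-step-size relationship can be illustrated as follows; the HEVC-style rule that the step size roughly doubles every 6 QP values (Qstep ≈ 2^((QP−4)/6)) is used as an example, whereas real codecs use fixed-point tables rather than this floating-point form:

```python
def qstep_from_qp(qp):
    """Approximate HEVC-style quantization step size for a given QP."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    return int(round(coeff / qstep_from_qp(qp)))   # level sent to entropy coding

def dequantize(level, qp):
    return level * qstep_from_qp(qp)               # reconstructed (lossy) coefficient
```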
• the inverse quantization unit 210 is configured to perform, on the quantized coefficients 209, the inverse quantization of the quantization performed by the quantization unit 208 to obtain the dequantized coefficients 211, for example, performing, according to or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme performed by the quantization unit 208 .
  • Dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 or inverse quantized coefficients 211, corresponding to transform coefficients 207, but inverse quantized coefficients 211 are usually not identical to transform coefficients 207 due to losses caused by quantization.
  • the inverse transform processing unit 212 is used to perform the inverse transform of the transform performed by the transform processing unit 206, for example, an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to A reconstructed residual block 213 is obtained.
  • the reconstructed residual block 213 may also be referred to as a transform block 213 .
• the reconstruction unit 214 (e.g., summer 214) is used to add the transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the pixel domain, e.g., the pixel values of the reconstructed residual block 213 and the pixel values of the prediction block 265 are added.
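A sketch of this reconstruction step for 8-bit video follows (the clipping range is an assumption tied to the bit depth):

```python
import numpy as np

def reconstruct_block(prediction_block, reconstructed_residual):
    """Add the reconstructed residual to the prediction and clip the sum
    back to the valid 8-bit pixel range."""
    recon = prediction_block.astype(np.int16) + reconstructed_residual.astype(np.int16)
    return np.clip(recon, 0, 255).astype(np.uint8)
```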
  • the loop filter unit 220 (or “loop filter” 220 for short) is used to filter the reconstruction block 215 to obtain the filter block 221, or generally to filter the reconstructed pixels to obtain filtered pixel values.
• the loop filter unit is used to smooth pixel transitions or otherwise improve video quality, and can remove coding distortions such as blocking and ringing effects through loop filtering.
  • the loop filter unit 220 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as self- Adaptive loop filter (ALF), noise suppression filter (NSF), or any combination.
  • the loop filter unit 220 may include a deblocking filter, a SAO filter, and an ALF filter.
  • the order of the filtering process can be deblocking filter, SAO filter and ALF filter.
• In another example, a process called luma mapping with chroma scaling (LMCS) (i.e., the adaptive in-loop reshaper) is added. This process is performed before deblocking.
  • the deblocking filtering process can also be applied to internal sub-block edges, such as affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges, and intra sub-partition (ISP) edges.
• Although loop filter unit 220 is shown in FIG. 2 as an in-loop filter, in other configurations, loop filter unit 220 may be implemented as a post-loop filter.
  • Filter block 221 may also be referred to as filter reconstruction block 221 .
• video encoder 20 may be used to output loop filter parameters (e.g., SAO filter parameters, ALF filter parameters, or LMCS parameters), e.g., directly or after entropy encoding by the entropy encoding unit 270, e.g., so that the decoder 30 can receive and decode using the same or different loop filter parameters.
  • a decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference pictures (or reference picture data) for use by the video encoder 20 in encoding the video data.
  • DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), Resistive RAM (RRAM) or other types of storage devices.
  • Decoded image buffer 230 may be used to store one or more filter blocks 221 .
  • the decoded image buffer 230 may also be used to store other previously filtered blocks of the same current image or a different image, such as a previous reconstructed image, such as the previously reconstructed and filtered block 221, and may provide a complete previously reconstructed or decoded image (and corresponding reference blocks and pixels) and/or a partially reconstructed current image (and corresponding reference blocks and pixels), eg for inter prediction.
  • the decoded image buffer 230 may also be used to store one or more unfiltered reconstructed blocks 215, or generally unfiltered reconstructed pixels, eg, reconstructed blocks 215 not filtered by the in-loop filtering unit 220, or unfiltered Any other processed reconstructed blocks or reconstructed pixels.
• Mode selection unit 260 includes partition unit 262, inter prediction unit 244, and intra prediction unit 254, and is used for receiving or obtaining original image data, such as the original block 203 (the current block 203 of the current image 17), and reconstructed image data, e.g., filtered and/or unfiltered reconstructed pixels or reconstructed blocks of the same (current) image and/or one or more previously decoded images, from the decoded image buffer 230 or other buffers (e.g., line buffers, not shown).
  • the reconstructed image data is used as reference image data required for prediction such as inter prediction or intra prediction to obtain the prediction block 265 or the prediction value 265 .
• the mode selection unit 260 may be used to determine or select a partitioning for the current block (including no partitioning) and a prediction mode (e.g., an intra or inter prediction mode) to generate a corresponding prediction block 265, which is used for the calculation of the residual block 205 and the reconstruction of the reconstruction block 215 .
  • mode selection unit 260 may be used to select a partitioning and prediction mode (eg, from among those supported or available by mode selection unit 260) that provides the best match or the smallest residual (minimum Residual refers to better compression in transmission or storage), or provides minimal signaling overhead (minimum signaling overhead refers to better compression in transmission or storage), or considers or balances both.
  • the mode selection unit 260 may be configured to determine the segmentation and prediction mode according to rate distortion optimization (RDO), ie select the prediction mode that provides the least rate distortion optimization.
• the partitioning unit 262 may be used to partition the images in the video sequence into a sequence of coding tree units (CTUs) 203, which may be further partitioned into smaller block parts or sub-blocks (which again form blocks), for example by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning or triple-tree (TT) partitioning, or any combination thereof.
• The segmentation (e.g., performed by segmentation unit 262) and the prediction processing (e.g., performed by inter-prediction unit 244 and intra-prediction unit 254) performed by video encoder 20 are described in more detail below.
  • Partition unit 262 may partition (or divide) one coding tree unit 203 into smaller parts, such as square or rectangular shaped pieces.
• For an image with three pixel arrays, a CTU consists of an N×N block of luminance pixels and two corresponding blocks of chrominance pixels.
  • the maximum allowable size of a luma block in a CTU is specified as 128x128 in the under-development Versatile Video Coding (VVC) standard, but may be specified to a value other than 128x128 in the future, such as 256x256.
  • the CTUs of a picture can be aggregated/grouped into slices/coded block groups, coded blocks or bricks.
• a coding block covers a rectangular area of an image, and a coding block can be divided into one or more bricks; a brick consists of multiple CTU rows within a coding block.
• A coding block that is not divided into multiple bricks can also be called a brick; however, a brick that is a true subset of a coding block is not referred to as a coding block.
• VVC supports two coding block group modes, namely raster-scan slice/coding block group mode and rectangular slice mode. In raster-scan coding block group mode, a slice/coding block group contains a sequence of coding blocks in the coding-block raster scan of an image. In rectangular slice mode, a slice contains multiple bricks of an image that together form a rectangular area of the image; the bricks within a rectangular slice are arranged in the order of the brick raster scan of the slice.
• These smaller blocks can be further divided into even smaller parts. This is also known as tree splitting or hierarchical tree splitting, where a root block at root tree level 0 (hierarchy level 0, depth 0) can be recursively split into two or more blocks of the next lower tree level, for example nodes at tree level 1 (hierarchy level 1, depth 1). These blocks can in turn be split into two or more blocks of the next lower level, e.g. tree level 2 (hierarchy level 2, depth 2), and so on, until the splitting ends (because an ending criterion is met, such as reaching a maximum tree depth or minimum block size).
  • Blocks that are not further divided are also called leaf blocks or leaf nodes of the tree.
  • a tree divided into two parts is called a binary-tree (BT)
  • a tree divided into three parts is called a ternary-tree (TT)
• a tree divided into four parts is called a quad-tree (QT).
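• To make the recursive tree splitting concrete, the following minimal sketch (an illustrative assumption, not the normative partitioning logic of any standard) splits a block by quad-tree until a hypothetical should_split() criterion fails or a minimum block size is reached:

```python
def quadtree_partition(x, y, w, h, should_split, min_size=8, depth=0):
    """Recursively split the block at (x, y) of size w x h into four equal sub-blocks.

    Returns the leaf blocks as (x, y, w, h) tuples. should_split is a hypothetical
    criterion (e.g., content- or RD-based); splitting stops when it returns False
    or when the minimum block size is reached, matching the ending criteria above.
    """
    if w <= min_size or h <= min_size or not should_split(x, y, w, h, depth):
        return [(x, y, w, h)]  # leaf block / leaf node
    hw, hh = w // 2, h // 2
    leaves = []
    for nx, ny in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
        leaves += quadtree_partition(nx, ny, hw, hh, should_split, min_size, depth + 1)
    return leaves
```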
• a coding tree unit may be or include a coding tree block (CTB) of luma pixels and two corresponding CTBs of chroma pixels of an image having three pixel arrays, or a CTB of pixels of a monochrome image, or a CTB of pixels of an image coded using three separate color planes and syntax structures (used to code the pixels).
• a coding tree block can be an N×N block of pixels, where N can be set to a certain value such that a component is divided into CTBs; this is the partitioning.
• a coding unit may be or include a coding block of luminance pixels and two corresponding coding blocks of chrominance pixels of an image having three pixel arrays, or a coding block of pixels of a monochrome image, or a coding block of pixels of an image coded using three separate color planes and syntax structures (used to code the pixels).
• a coding block can be an M×N block of pixels, where M and N can be set to certain values such that a CTB is divided into coding blocks; this is the partitioning.
  • a coding tree unit may be divided into multiple CUs according to HEVC by using a quad-tree structure represented as a coding tree.
  • the decision whether to use inter (temporal) prediction or intra (spatial) prediction to encode image regions is made at the leaf-CU level.
  • Any leaf-CU may be further divided into one, two, or four PUs depending on the PU partition type.
  • the same prediction process is used within a PU, and relevant information is transmitted to the decoder on a PU basis.
  • the leaf CU may be partitioned into transform units (TUs) according to other quad-tree structures similar to the coding tree used for the CU.
• in the developing Versatile Video Coding (VVC) standard, a combined quad-tree with nested multi-type tree (e.g., binary-tree and ternary-tree) segmentation structure is used to divide the coding tree unit.
  • the CU can be a square or a rectangle.
• the coding tree unit (CTU) is first divided by a quad-tree structure, and the quad-tree leaf nodes are further divided by a multi-type tree structure.
• Multi-type tree leaf nodes are called coding units (CUs); unless the CU is too large for the maximum transform length, such a segment is used for prediction and transform processing without any further partitioning. In most cases, this means that the block sizes of the CU, PU, and TU are the same in the coding block structure of the quad-tree with nested multi-type tree. The exception occurs when the maximum supported transform length is smaller than the width or height of a color component of the CU.
• VVC has formulated a unique signaling mechanism for the partitioning information in the coding structure of the quad-tree with nested multi-type tree. In this signaling mechanism, the coding tree unit (CTU), as the root of the quad-tree, is first divided by the quad-tree structure; then each quad-tree leaf node (when large enough) can be further divided by the multi-type tree structure.
  • the decoder can derive the multi-type tree division mode (MttSplitMode) of the CU based on a predefined rule or table.
• when the width or height of the luma coding block is greater than 64, TT division is not allowed.
• when the width or height of the chroma coding block is greater than 32, TT division is also not allowed.
• the pipeline design divides the image into multiple virtual pipeline data units (VPDUs), and the VPDUs are defined as mutually non-overlapping units in the image.
• the VPDU size is roughly proportional to the buffer size, so it is necessary to keep the VPDU small.
  • the VPDU size can be set to the maximum transform block (TB) size.
• when a part of a tree node block exceeds the bottom or the right border of the image, the tree node block is forced to be divided until all the pixels of every coded CU are located within the image border.
  • the intra sub-partitions (ISP) tool may divide the luma intra prediction block vertically or horizontally into two or four sub-parts depending on the block size.
  • mode selection unit 260 of video encoder 20 may be used to perform any combination of the partitioning techniques described above.
  • video encoder 20 is used to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes.
  • the set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
• the set of intra prediction modes may include 35 different intra prediction modes, for example non-directional modes like DC (or mean) mode and planar mode, or directional modes as defined in HEVC, or may include 67 different intra prediction modes, for example non-directional modes like DC (or mean) mode and planar mode, or directional modes as defined in VVC.
• the intra prediction result of the planar mode may also be modified using a position-dependent intra prediction combination (PDPC) method.
  • the intra-frame prediction unit 254 is configured to generate an intra-frame prediction block 265 using reconstructed pixels of adjacent blocks of the same current image according to the intra-frame prediction mode in the intra-frame prediction mode set.
  • Intra-prediction unit 254 (or generally mode selection unit 260 ) is also used to output intra-prediction parameters (or generally information indicating the selected intra-prediction mode of the block) to entropy encoding unit 270 in the form of syntax element 266 , to be included in the encoded image data 21 so that the video decoder 30 may perform operations such as receiving and using prediction parameters for decoding.
• the set of inter-prediction modes depends on the available reference pictures (i.e., for example, the aforementioned at least partially previously decoded pictures stored in DPB 230) and other inter-prediction parameters, for example on whether to use the entire reference picture or only a part of the reference image (e.g., the search window area near the area of the current block) to search for the best matching reference block, and/or, for example, on whether pixel interpolation at half-pixel, quarter-pixel and/or 1/16-pixel precision is performed.
  • skip mode and/or direct mode may also be employed.
• the merge candidate list for this mode consists of the following five candidate types in order: spatial MVP from spatially adjacent CUs, temporal MVP from collocated CUs, history-based MVP from a FIFO table, pairwise average MVP, and zero MV.
  • Decoder side motion vector refinement (DMVR) based on bilateral matching can be used to increase the accuracy of MV for merge mode.
• the merge mode with motion vector difference (MMVD) is derived from the merge mode. An MMVD flag is sent immediately after the skip flag and the merge flag to specify whether the CU uses the MMVD mode.
  • a CU-level adaptive motion vector resolution (AMVR) scheme may be used. AMVR supports the MVD of the CU to be encoded in different precisions.
• the MVD resolution of the current CU is adaptively selected.
  • a combined inter/intra prediction (CIIP) mode may be applied to the current CU.
  • a weighted average is performed on the inter and intra prediction signals to obtain the CIIP prediction.
• the affine motion field of the block is described by the motion information of two control-point motion vectors (4-parameter model) or three control-point motion vectors (6-parameter model).
• Subblock-based temporal motion vector prediction (SbTMVP) is similar to temporal motion vector prediction (TMVP) in HEVC, but predicts the motion vectors of the sub-CUs in the current CU.
• Bi-directional optical flow (BDOF), formerly known as BIO, is a simplified version of BIO that reduces computation, especially in terms of the number of multiplications and the size of the multipliers.
• in the triangular partition mode, a CU is evenly divided into two triangular parts using either diagonal division or anti-diagonal division.
  • the bidirectional prediction mode is extended on the basis of simple averaging to support weighted average of two prediction signals.
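• As a sketch of this weighted averaging idea (the weight set and rounding shown are illustrative assumptions, not the normative ones), two prediction signals can be blended as follows:

```python
import numpy as np

def weighted_bi_prediction(pred0, pred1, w=4, shift=3):
    """Blend two prediction blocks: P = ((8 - w) * P0 + w * P1 + 4) >> 3.

    w = 4 reduces to the simple average; other weights realize the weighted
    average of the two prediction signals described above.
    """
    p0 = pred0.astype(np.int32)
    p1 = pred1.astype(np.int32)
    blended = ((8 - w) * p0 + w * p1 + (1 << (shift - 1))) >> shift
    return blended.astype(pred0.dtype)
```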
  • Inter prediction unit 244 may include a motion estimation (ME) unit and a motion compensation (MC) unit (both not shown in FIG. 2 ).
  • the motion estimation unit may be used to receive or obtain the image block 203 (the current image block 203 of the current image 17 ) and the decoded image 231 , or at least one or more previously reconstructed blocks, eg, one or more other/different previously decoded images 231 .
• the images used for motion estimation may include the current image and the previously decoded images 231; in other words, the current image and the previously decoded images 231 may be part of, or form, the sequence of images forming the video sequence.
• the encoder 20 may be operable to select a reference block from a plurality of reference blocks of the same or different images among a plurality of other images, and to provide the reference image (or reference image index) and/or the offset (spatial offset) between the position (x, y coordinates) of the reference block and the position of the current block to the motion estimation unit as inter prediction parameters.
  • This offset is also called a motion vector (MV).
  • the motion compensation unit is used to obtain, eg, receive, inter-prediction parameters, and perform inter-prediction based on or using the inter-prediction parameters, resulting in the inter-prediction block 246 .
• the motion compensation performed by the motion compensation unit may involve extracting or generating prediction blocks from the motion/block vectors determined through motion estimation, and may also include performing interpolation to sub-pixel precision. Interpolation filtering can generate additional pixels from known pixels, thereby potentially increasing the number of candidate prediction blocks that can be used to encode an image block.
  • the motion compensation unit may locate the prediction block pointed to by the motion vector in one of the reference image lists.
  • the motion compensation unit may also generate block- and video slice-related syntax elements for use by video decoder 30 in decoding image blocks of the video slice.
• in addition to or instead of slices and corresponding syntax elements, coding block groups and/or coding blocks and corresponding syntax elements may be generated or used.
• the entropy coding unit 270 is used to apply an entropy coding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC scheme (CAVLC), an arithmetic coding scheme, a binarization algorithm, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, loop filter parameters and/or other syntax elements, resulting in encoded image data 21 that can be output through output 272 in the form of an encoded bitstream 21 or the like, so that the video decoder 30 or the like can receive and use the parameters for decoding.
• other structural variations of video encoder 20 may be used to encode the video stream.
  • the non-transform based encoder 20 may directly quantize the residual signal without transform processing unit 206 for certain blocks or frames.
  • encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
  • FIG. 3 illustrates an exemplary video decoder 30 for implementing the techniques of this application.
  • the video decoder 30 is adapted to receive the encoded image data 21 (eg, the encoded bitstream 21 ) encoded by the encoder 20 , for example, to obtain a decoded image 331 .
  • the encoded image data 21 or bitstream includes information for decoding said encoded image data 21, such as data representing image blocks of an encoded video slice (and/or encoded block groups or encoded blocks) and associated syntax elements.
• decoder 30 includes entropy decoding unit 304, inverse quantization unit 310, inverse transform processing unit 312, reconstruction unit 314 (eg, summer 314), loop filter 320, decoded picture buffer (DPB) 330, a mode application unit 360, an inter prediction unit 344 and an intra prediction unit 354.
  • Inter prediction unit 344 may be or include a motion compensation unit.
• video decoder 30 may perform a decoding process that is substantially the inverse of the encoding process described with reference to video encoder 20 of FIG. 2.
• the inter prediction module/intra prediction module/loop filtering module includes (is) a trained target model (also known as a neural network) for processing an input image or image region or image block to generate a prediction value for the input image block.
  • neural networks for inter prediction/intra prediction/loop filtering are used to receive input images or image regions or image blocks.
• the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded image buffer 230, the inter prediction unit 244 and the intra prediction unit 254 also constitute the "built-in decoder" of the video encoder 20.
• the inverse quantization unit 310 may be functionally the same as the inverse quantization unit 210
• the inverse transform processing unit 312 may be functionally the same as the inverse transform processing unit 212
  • the reconstruction unit 314 may be functionally the same as the reconstruction unit 214
• the loop filter 320 may be functionally identical to loop filter 220
• the decoded image buffer 330 may be functionally identical to decoded image buffer 230. Therefore, the explanations of the corresponding units and functions of the video encoder 20 apply correspondingly to the corresponding units and functions of the video decoder 30.
• the entropy decoding unit 304 is used to parse the bitstream 21 (or, in general, the encoded image data 21) and perform entropy decoding on the encoded image data 21 to obtain quantization coefficients 309 and/or decoded coding parameters (not shown in FIG. 3), such as any or all of inter prediction parameters (such as reference picture indices and motion vectors), intra prediction parameters (such as intra prediction mode or index), transform parameters, quantization parameters, loop filter parameters and/or other syntax elements.
  • the entropy decoding unit 304 may be configured to apply a decoding algorithm or scheme corresponding to the encoding scheme of the entropy encoding unit 270 of the encoder 20 .
  • Entropy decoding unit 304 may also be used to provide inter-prediction parameters, intra-prediction parameters, and/or other syntax elements to mode application unit 360 , as well as other parameters to other units of decoder 30 .
  • Video decoder 30 may receive syntax elements at the video slice and/or video block level. In addition, or instead of slices and corresponding syntax elements, encoded block groups and/or encoded blocks and corresponding syntax elements may be received or used.
  • Inverse quantization unit 310 may be operable to receive quantization parameters (QPs) (or information related to inverse quantization in general) and quantization coefficients from encoded image data 21 (eg, parsed and/or decoded by entropy decoding unit 304), and based on The quantization parameters inverse quantize the decoded quantized coefficients 309 to obtain inverse quantized coefficients 311 , which may also be referred to as transform coefficients 311 or dequantized coefficients 311 .
  • the inverse quantization process may include using quantization parameters calculated by video encoder 20 for any video block in the video slice to determine the degree of quantization, as well as the degree of inverse quantization that needs to be performed.
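• A minimal sketch of the quantization/inverse quantization pair discussed here, assuming a plain uniform scalar quantizer with step size Qstep (the real HEVC/VVC dequantization additionally involves scaling lists and bit-depth dependent shifts, which are omitted):

```python
def quantize(coeff, qstep):
    """Uniform scalar quantization: map a transform coefficient to a quantization level."""
    sign = -1 if coeff < 0 else 1
    return sign * int(abs(coeff) / qstep + 0.5)  # round to the nearest level

def dequantize(level, qstep):
    """Inverse quantization: reconstruct the (approximate) coefficient from its level."""
    return level * qstep

# Both the encoder (unit 210) and the decoder (unit 310) run the same dequantize(),
# which is why the quantization step size used on both sides must be identical.
```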
  • An inverse transform processing unit 312 may be operable to receive dequantized coefficients 311, also referred to as transform coefficients 311, and apply a transform to the dequantized coefficients 311 to obtain a reconstructed residual block 313 in the pixel domain.
  • the reconstructed residual block 313 may also be referred to as a transform block 313 .
  • the transform may be an inverse transform, such as an inverse DCT, an inverse DST, an inverse integer transform, or a conceptually similar inverse transform process.
  • Inverse transform processing unit 312 may also be operable to receive transform parameters or corresponding information from encoded image data 21 (eg, parsed and/or decoded by entropy decoding unit 304 ) to determine transforms to apply to dequantized coefficients 311 .
  • the reconstruction unit 314 (eg, summer 314) is used to add the reconstructed residual block 313 to the prediction block 365 to obtain the reconstructed block 315 in the pixel domain, for example, the pixel point values of the reconstructed residual block 313 and the prediction block 365 pixel values are added.
• the loop filter unit 320 (in or after the coding loop) is used to filter the reconstruction block 315 to obtain a filter block 321, so as to smooth pixel transitions or otherwise improve video quality.
• the loop filter unit 320 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof.
  • the loop filter unit 320 may include a deblocking filter, a SAO filter, and an ALF filter. The order of the filtering process can be deblocking filter, SAO filter and ALF filter.
• although loop filter unit 320 is shown in FIG. 3 as an in-loop filter, in other configurations, loop filter unit 320 may be implemented as a post-loop filter.
  • the decoded video block 321 in one picture is then stored in a decoded picture buffer 330 which stores the decoded picture 331 as a reference picture for subsequent motion compensation of other pictures and/or output display respectively.
  • the decoder 30 is used for outputting the decoded image 331 through the output terminal 312 and the like, for displaying to the user or for the user to view.
• the inter prediction unit 344 may be functionally the same as the inter prediction unit 244 (in particular, the motion compensation unit), and the intra prediction unit 354 may be functionally the same as the intra prediction unit 254; partitioning decisions are made and prediction is performed based on the partitioning and/or prediction parameters or corresponding information obtained from the encoded image data 21 (e.g., parsed and/or decoded by the entropy decoding unit 304).
  • the mode application unit 360 may be configured to perform prediction (intra or inter prediction) of any block according to the reconstructed image, block or corresponding pixel points (filtered or unfiltered), resulting in a prediction block 365 .
• the intra-prediction unit 354 in the mode application unit 360 is used to generate a prediction block 365 for the current image block based on the indicated intra-prediction mode and data from previously decoded blocks of the current image.
• the inter-prediction unit 344 (e.g., a motion compensation unit) in the mode application unit 360 is used to generate the prediction block 365 for a video block of the current video slice according to the motion vectors and other syntax elements received from the entropy decoding unit 304.
  • these prediction blocks may be generated from one of the reference pictures in one of the reference picture lists.
  • Video decoder 30 may construct reference frame List 0 and List 1 from reference pictures stored in decoded picture buffer 330 using a default construction technique.
• in addition to or instead of slices (e.g., video slices), the same or a similar process may be applied to embodiments of coding block groups (e.g., video coding block groups) and/or coding blocks (e.g., video coding blocks); for example, video may be encoded using I, P, or B coding block groups and/or coding blocks.
• Mode application unit 360 is operable to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and to use the prediction information to generate a prediction block for the current video block being decoded. For example, mode application unit 360 uses some of the received syntax elements to determine the prediction mode (e.g., intra-prediction or inter-prediction) used to encode the video blocks of the video slice, the inter-prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists of the slice, the motion vector of each inter-coded video block of the slice, and the inter-prediction status of each inter-coded video block of the slice, among other information, to decode the video blocks within the current video slice.
• the video decoder 30 shown in FIG. 3 may also be used to segment and/or decode images using slices (also referred to as video slices), where an image may be segmented or decoded using one or more slices (typically non-overlapping slices).
• Any slice may include one or more blocks (e.g., CTUs) or one or more groups of blocks (e.g., coded blocks in the H.265/HEVC/VVC standard and bricks in the VVC standard).
  • the video decoder 30 shown in FIG. 3 may also be used to use slice/coding block groups (also referred to as video coding block groups) and/or coding blocks (also referred to as video coding blocks) ) to segment and/or decode an image, where the image may be segmented or decoded using one or more slices/encoded block groups (usually non-overlapping), any slice/encoded block group may include one or more blocks (eg CTU) or one or more coding blocks, etc., any coding block may be rectangular or the like, and may include one or more complete or partial blocks (eg CTU).
• other variations of video decoder 30 may be used to decode the encoded image data 21.
  • decoder 30 may generate the output video stream without loop filter unit 320 .
  • the non-transform based decoder 30 may directly inverse quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames.
  • video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
  • the processing result of the current step can be further processed, and then output to the next step.
  • further operations such as clip or shift operations, may be performed on the processing results of interpolation filtering, motion vector derivation or loop filtering.
• the value of the motion vector is limited to a predefined range according to the representation bits of the motion vector. If the representation bit depth of the motion vector is bitDepth, the range is -2^(bitDepth-1) to 2^(bitDepth-1)-1, where "^" represents exponentiation. For example, if bitDepth is set to 16, the range is -32768 to 32767; if bitDepth is set to 18, the range is -131072 to 131071.
  • the value of the derived motion vector (eg, the MVs of four 4x4 subblocks in an 8x8 block) is limited such that the maximum difference between the integer parts of the four 4x4 subblock MVs does not More than N pixels, eg no more than 1 pixel.
• there are two ways to limit motion vectors based on bitDepth.
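• A minimal sketch of the first limiting method described above, clipping the motion vector components to the representable range of bitDepth bits (the function name is an illustrative assumption):

```python
def clip_mv(mv_x, mv_y, bit_depth=16):
    """Clamp motion vector components to [-2^(bitDepth-1), 2^(bitDepth-1) - 1].

    For bit_depth = 16 the range is -32768..32767 and for bit_depth = 18 it is
    -131072..131071, matching the examples in the text above.
    """
    lo = -(1 << (bit_depth - 1))
    hi = (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, mv_x)), max(lo, min(hi, mv_y))
```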
• embodiments of the coding system 10, encoder 20 and decoder 30, as well as other embodiments described herein, may also be used for still image processing or coding, that is, the processing or coding of a single image independent of any preceding or consecutive image, as in video coding.
• if image processing is limited to a single image 17, the inter prediction unit 244 (encoder) and the inter prediction unit 344 (decoder) may not be available.
  • All other functions (also referred to as tools or techniques) of video encoder 20 and video decoder 30 are also available for still image processing, such as residual calculation 204/304, transform processing unit 206, quantization unit 208, inverse quantization unit 210/ 310, inverse transform processing unit 212/312, segmentation unit 262, intra prediction unit 254/354, loop filter 220/320, entropy encoding unit 270, entropy decoding unit 304, and the like.
  • each module in the encoder 20 and the decoder 30 has a corresponding relationship.
• the operations of the inter prediction unit 244 and the inter prediction unit 344 are exactly the same, as are those of the intra prediction unit 254 and the intra prediction unit 354; while the entropy encoding unit 270 and the entropy decoding unit 304, the transform processing unit 206 and the inverse transform processing units 212/312, the quantization unit 208 and the inverse quantization units 210/310, etc., are all pairs of mutually inverse operations. Therefore, once the prediction, transform, quantization, and entropy encoding operations of the encoder 20 are specified, the prediction, inverse transform, inverse quantization, and entropy decoding operations of the decoder 30 are also determined accordingly.
  • FIG. 4 is a schematic diagram of a video decoding apparatus 400 provided by an embodiment of the present application.
  • Video coding apparatus 400 is suitable for implementing the disclosed embodiments described herein.
  • the video coding apparatus 400 may be a decoder, such as the video decoder 30 in FIG. 1A, or an encoder, such as the video encoder 20 in FIG. 1A.
• the video decoding apparatus 400 includes: an ingress port 410 (or input port 410) and a receiver unit (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 for processing data (for example, the processor 430 here can be a neural network processor 430); a transmitter unit (Tx) 440 and an egress port 450 (or output port 450) for transmitting data; and a memory 460 for storing data.
• the video coding apparatus 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the input port 410, the receiving unit 420, the transmitting unit 440, and the output port 450, serving as the egress or ingress for optical or electrical signals.
  • the processor 430 is implemented by hardware and software.
  • Processor 430 may be implemented as one or more processor chips, cores (eg, multi-core processors), FPGAs, ASICs, and DSPs.
  • the processor 430 communicates with the ingress port 410 , the receiving unit 420 , the sending unit 440 , the egress port 450 and the memory 460 .
  • the processor 430 includes a decoding module 470 (eg, a neural network-based decoding module 470).
• the decoding module 470 implements the embodiments disclosed above. For example, the decoding module 470 performs, processes, prepares, or provides various encoding operations.
  • decoding module 470 is implemented as instructions stored in memory 460 and executed by processor 430 .
  • Memory 460 includes one or more magnetic disks, tape drives, and solid-state drives, and may serve as an overflow data storage device for storing programs when such programs are selected for execution, and for storing instructions and data read during program execution.
  • Memory 460 may be volatile and/or non-volatile, and may be read-only memory (ROM), random access memory (RAM), ternary content addressable memory (ternary) content-addressable memory, TCAM) and/or static random-access memory (SRAM).
  • FIG. 5 is a simplified block diagram of an apparatus 500 provided by an exemplary embodiment, and the apparatus 500 can be used as either or both of the source device 12 and the destination device 14 in FIG. 1A .
  • the processor 502 in the apparatus 500 may be a central processing unit.
  • the processor 502 may be any other type of device or devices, existing or to be developed in the future, capable of manipulating or processing information.
• although the disclosed implementations may be implemented using a single processor, such as processor 502 as shown, using more than one processor achieves advantages in speed and efficiency.
  • the memory 504 in the apparatus 500 may be a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may be used as memory 504 .
  • Memory 504 may include code and data 506 accessed by processor 502 via bus 512 .
  • the memory 504 may also include an operating system 508 and application programs 510 including at least one program that allows the processor 502 to perform the methods described herein.
  • applications 510 may include applications 1 through N, and also include video coding applications that perform the methods described herein.
  • Apparatus 500 may also include one or more output devices, such as display 518 .
  • display 518 may be a touch-sensitive display that combines a display with touch-sensitive elements that may be used to sense touch input.
  • Display 518 may be coupled to processor 502 through bus 512.
• although the bus 512 in the apparatus 500 is described herein as a single bus, the bus 512 may include multiple buses.
  • secondary storage may be directly coupled to other components of the device 500 or accessed through a network, and may include a single integrated unit, such as a memory card, or multiple units, such as multiple memory cards. Accordingly, the apparatus 500 may have various configurations.
• End-to-end: usually refers to a technical solution in which all modules of a learning task are implemented using neural networks, so that all parameters of the neural networks can be optimized simultaneously through gradient backpropagation.
• Coded picture: usually contains three matrices of an image, which respectively store the reconstructions of the intensity values of the three YUV or RGB color components of the image, and also contains coding information such as the block division, coding mode, and quantization parameters of all coding units of the image.
• Decoded picture: usually contains three matrices of an image, respectively storing the reconstructions of the intensity values of the three YUV or RGB color components of the image.
• CU: usually contains three matrices of an image block, which respectively store the reconstructions of the intensity values of the three YUV or RGB color components of the image block, and also contains coding information such as the block division, coding mode, and quantization parameters of the image block.
• Reconstructed image: refers to the image that contains coding distortion after the encoding operation.
  • Video compression coding technology has a wide range of applications in multimedia services, broadcasting, video communication and storage.
• ITU-T and ISO/IEC have jointly formulated and released three video codec standards, H.264/AVC, H.265/HEVC and H.266/VVC, in 2003, 2013 and 2020 respectively.
• the AVS standard group has also formulated and published a series of video image codec standards such as AVS1, AVS2, and AVS3.
  • the AOM Alliance also released the AV1 video codec in 2018.
• the above video encoding and decoding technologies all adopt a hybrid coding scheme based on block division, transform, and quantization, and perform continuous technical iteration in specific modules such as block division, prediction, transform, entropy coding, and loop filtering, continuously improving video image compression efficiency.
  • FIG. 6 shows a typical deep neural network-based image encoding and decoding scheme, which adopts an autoencoder network structure.
• the input x is the original image to be encoded, which can be expressed as a w_x × h_x × c_x array, where w_x, h_x, and c_x represent the width, height, and number of color components of the input image, respectively.
  • the analyzers and synthesizers are usually deep Convolutional Neural Network (CNN) networks.
• the latent representation y can usually be expressed as a w_y × h_y × c_y array, where w_y, h_y, and c_y represent the width, height, and number of channels of the latent representation, respectively.
• Each element in the latent layer representation y output by the analyzer can be a floating-point number or an integer, which can be quantized or simply rounded to obtain a more compact integer representation y'.
  • Entropy decoding is the inverse operation of entropy encoding, and the purpose is to obtain y' from the compressed code stream.
  • Entropy encoding and entropy decoding use the same probability model, and the state of the probability model can be updated synchronously in the encoder and decoder to ensure the matching of encoding and decoding.
  • the encoder can transmit the probability model parameters to the decoder to ensure that the entropy encoding and entropy decoding use the same probability model.
• the synthesizer reconstructs the coded image x' from y'.
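• The data flow of FIG. 6 can be summarized by the following sketch; the analyzer, synthesizer, entropy_encode and entropy_decode callables are assumptions standing in for the trained CNN sub-networks and the arithmetic coder:

```python
import numpy as np

def encode(x, analyzer, entropy_encode):
    """Encoder side of FIG. 6: analyze, round to integers, entropy-code."""
    y = analyzer(x)                        # latent representation, w_y x h_y x c_y
    y_int = np.round(y).astype(np.int32)   # quantization / rounding to y'
    return entropy_encode(y_int)           # compressed code stream

def decode(bitstream, entropy_decode, synthesizer):
    """Decoder side of FIG. 6: entropy-decode y', then synthesize x'."""
    y_int = entropy_decode(bitstream)      # same probability model as the encoder
    return synthesizer(y_int)              # reconstructed image x'
```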
• the block division operation can be performed on the input image, and each image block is sent to the encoder shown in FIG. 6 for the encoding operation to output a compressed code stream; the reconstruction of each image block can be obtained by decoding the compressed code stream.
  • the quantization unit 208 in FIG. 2 is configured to perform a quantization operation on the transform coefficient 207 output by the transform processing unit 206 by applying scalar quantization or vector quantization to obtain a quantization level (quantization level) 209 of the transform coefficient 207, wherein the quantization level 209 It is the output of the quantized transform coefficient, also called the quantized transform coefficient 209, or the quantized coefficient 209.
  • quantization step size can be indicated by a quantization parameter (QP).
  • the quantization parameter may be an index into a predefined set of suitable quantization step sizes.
  • a smaller quantization parameter may correspond to fine quantization (smaller quantization step size)
  • a larger quantization parameter may correspond to coarse quantization (larger quantization step size)
  • the inverse quantization unit 210 in FIG. 2 performs the same inverse quantization operation as the inverse quantization unit 310 in FIG. 3 , its inputs are all quantized coefficients, and its outputs are dequantized coefficients.
  • Embodiments of encoder 20 may be used to output the quantization scheme and quantization step size, eg, directly or after entropy encoding by entropy encoding unit 270 or any other entropy encoding unit, such that decoder 30 may receive And do the corresponding inverse quantization operation.
  • the quantization step size, or equivalent quantization parameter, used in the quantization unit 208 and the inverse quantization unit 210/310 must be the same.
  • the quantization parameter is specified by the encoder 20 and output directly or after entropy encoding by the entropy encoding unit 270 or any other entropy encoding unit.
  • the decoder 30 can receive the quantization parameter information output by the encoder 20, and obtain the quantization parameter specified by the encoder 20 after entropy decoding by the entropy decoding unit 304 or any other entropy decoding unit.
  • the following takes the HEVC standard scheme as an example to illustrate how the encoder transmits the quantization parameters to the decoder, as shown in Table 1.
  • the initial value of QP is transmitted in the picture parameter set (Picture Parameter Set, PPS), and the initial value of QP is init_qp_minus26+26.
• whether different QPs can be specified for different CUs is signaled through the control flag cu_qp_delta_enabled_flag. If this control flag indicates false, all CUs in the whole picture use the same QP, so it is not possible to specify a different QP for each CU in the picture. If the control flag indicates true, a different QP can be specified for each CU in the image, and the QP information can be written into the video stream when a CU is actually encoded.
  • CUs in the HEVC standard can have different sizes, from 64x64 to 8x8.
• if a coded image is entirely divided into 8×8 CUs, one QP needs to be transmitted for each 8×8 CU, which may bring a huge QP coding overhead and cause a significant increase in the coded video bit rate.
  • the HEVC standard specifies the Quantization Group (QG) size through the syntax diff_cu_qp_delta_depth in the PPS.
• the mapping between diff_cu_qp_delta_depth and the QG size is:

  diff_cu_qp_delta_depth | QG size
  ---------------------- | -------
  0                      | 64×64
  1                      | 32×32
  2                      | 16×16
  3                      | 8×8
  • QG is the basic transmission unit for transmitting QP. In other words, each QG can only transmit at most one QP. If the CU size is smaller than the QG size, that is, a QG contains multiple CUs, the QP is only transmitted in the first CU containing a non-zero quantization level, and the QP will be used for inverse quantization operations for all CUs in the QG. If the CU size is greater than or equal to the QG size, that is, one CU contains one or more QGs, it is determined whether to transmit the QP information of the CU according to whether the CU contains a non-zero quantization level.
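• The QG sizes in the table above are equivalent to right-shifting the 64×64 CTU size by diff_cu_qp_delta_depth; a one-line illustrative sketch:

```python
def qg_size(diff_cu_qp_delta_depth):
    """QG size for a 64x64 CTU: depth 0 -> 64, 1 -> 32, 2 -> 16, 3 -> 8."""
    return 64 >> diff_cu_qp_delta_depth
```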
  • the initial value of QP transmitted in the PPS applies to all coded pictures within the scope of the PPS.
  • the initial value of QP can be further adjusted to obtain the QP reference value of the processed coded picture, slice, sub-picture and slice.
  • the HEVC standard scheme transmits the syntax slice_qp_delta in the slice header (Slice Header, SH), which means that a differential value is superimposed on the initial QP value transmitted in the PPS, so as to obtain the QP reference value of the slice, as shown in Table 3.
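• Combining the PPS-level initial value with the slice-level differential, the slice QP reference value described above can be sketched as follows (the function wrapper is illustrative; the syntax element names are those from Tables 1 and 3):

```python
def derive_slice_qp(init_qp_minus26, slice_qp_delta=0):
    """Slice QP reference value: (init_qp_minus26 + 26) from the PPS,
    plus the differential slice_qp_delta from the slice header."""
    return init_qp_minus26 + 26 + slice_qp_delta

# e.g., init_qp_minus26 = 6 and slice_qp_delta = -2 give a slice QP reference of 30
```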
• the QP difference information of the CU includes the QP difference absolute value cu_qp_delta_abs and the QP difference sign cu_qp_delta_sign_flag, as shown in Table 4; the QP difference value of the CU is cu_qp_delta_abs × (1 - 2 × cu_qp_delta_sign_flag). Since a CU transmits at most one piece of QP difference information, in the case where a CU contains multiple TUs, the QP difference information is transmitted only when the first transform unit (TU) containing a non-zero quantization level is processed.
  • TU Transfom Unit
  • the QP predicted value of a CU will be derived according to the QP value of the left adjacent QG, the upper adjacent QG and the previous encoded QG, that is, the QP value of the processed QG in the current QG neighborhood is used. Generate a prediction of the QP value of the current QG.
• after the encoder determines the QP value of a CU according to the content complexity and the rate control strategy, it only needs to transmit the difference between the QP value of the CU and the QP prediction value of the CU; after decoding, the decoder obtains the QP difference value of the CU, obtains the QP prediction value through the same prediction operation, and obtains the QP value of the CU by superimposing the prediction value and the difference value.
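• Combining the syntax of Table 4 with the prediction described above, the decoder-side QP reconstruction can be sketched as follows (the averaging predictor shown is a simplified assumption of the neighborhood-based prediction):

```python
def decode_cu_qp(cu_qp_delta_abs, cu_qp_delta_sign_flag, qp_left, qp_above):
    """Reconstruct the CU QP as predicted QP plus the signaled differential.

    The QP difference value is cu_qp_delta_abs * (1 - 2 * cu_qp_delta_sign_flag),
    i.e., the sign flag turns the absolute value into a signed delta.
    """
    qp_delta = cu_qp_delta_abs * (1 - 2 * cu_qp_delta_sign_flag)
    qp_pred = (qp_left + qp_above + 1) >> 1  # prediction from neighboring QG QPs
    return qp_pred + qp_delta
```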
  • the intensity of the quantization distortion can be obtained by theoretical analysis. For example, assuming a uniformly distributed source, the mean square error of the quantization distortion for uniform scalar quantization is Qstep 2 /12.
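• This value follows directly from assuming the quantization error is uniformly distributed over one quantization step; a short derivation:

```latex
% Quantization error e assumed uniform on [-Qstep/2, Qstep/2]:
\mathrm{MSE}
  = \int_{-Q_{\mathrm{step}}/2}^{Q_{\mathrm{step}}/2} e^{2}\,\frac{1}{Q_{\mathrm{step}}}\,\mathrm{d}e
  = \frac{1}{Q_{\mathrm{step}}}\left[\frac{e^{3}}{3}\right]_{-Q_{\mathrm{step}}/2}^{Q_{\mathrm{step}}/2}
  = \frac{Q_{\mathrm{step}}^{2}}{12}
```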
• in some cases, the encoder will quantize the residual of an image block to all zeros. This situation is more common in P-frame and B-frame coding, and occurs when the quality of the reference image is high and the Qstep setting of the currently coded image block is large. In this case, although the decoding end can obtain valid QP information for the current block, the QP cannot reflect the actual distortion of the image block.
• when the quantization levels of an image block are all zero, the decoder will skip the inverse quantization operation. To avoid transmitting useless QP information, the encoder will not pass the QP information of the image block to the decoding end; in this case, the decoding end cannot obtain valid QP information for the current block at all.
  • the purpose of the current QP mechanism is to perform correct inverse quantization operations at the decoding end, not to obtain accurate distortion strength information at the decoding end.
  • Academia has also proposed a learning-based deep image encoding and decoding scheme.
• the image to be encoded is passed through the encoder (Encoder) sub-network to obtain the latent layer representation of the input image, which is then processed by the quantization (Quantization) module to obtain multiple quantized coding blocks (quantized codes).
  • the encoder calculates the importance map based on the input image.
  • the encoding end uses the importance map to perform a clipping operation on the individual quantized encoded blocks, and entropy encodes the clipped individual quantized encoded blocks together with the importance map and transmits it to the decoding end.
  • the importance map can be used to control the number of quantized coding blocks to be transmitted, so as to realize the function of code rate control. Therefore, the importance map is essentially equivalent to the quantization parameter (QP), and the region-level bit allocation is performed according to the content of the image.
• the role of the importance map in this scheme is to tell the decoder which coding blocks are included in the code stream, to instruct the decoder to decode and obtain these coding blocks, and to set the coding blocks not included in the code stream to 0, so as to obtain a complete latent layer representation that is input into the decoder (Decoder) sub-network for subsequent decoding operations.
  • the to-be-coded image is first input into an analyzer sub-network (such as an Encoder sub-network) to extract the latent layer representation of the input image.
• the latent layer representation is a dimensionality-reduced representation of the input image, in which information loss has already been introduced; that is, distortion of the image signal has already been introduced in the latent layer representation, and this distortion cannot be determined from the importance map of the above scheme.
  • the importance map is only used to determine which coded blocks do not need to be coded for transmission, and does not indicate how much signal distortion is caused by discarding these coded blocks. Therefore, the above-mentioned importance map cannot indicate the distortion intensity information of each region in the encoded image.
  • various existing video and image encoding and decoding schemes based on deep neural networks use the trained model to perform video or image encoding and decoding operations.
  • Such methods usually optimize subjective visual quality for human perception.
• the purpose of the above is to perform flexible bit allocation between different coded images and between different regions within an image.
  • these codec schemes do not transmit the signal quality (or distortion strength) of each coded image or each region within a coded image to the decoding end.
• the real distortion intensity of the encoded image cannot be obtained by human observation; for example, some encoded images obtained from an encoder-decoder network trained with the Generative Adversarial Network (GAN) method will contain false texture information whose authenticity the human eye cannot distinguish.
• in other words, the decoding end cannot obtain the distortion intensity information of the currently encoded image.
  • the distortion intensity information of the encoded image can be used to assist in judging whether the content of a certain area in the image is credible, so it is very important for applications such as video surveillance. Therefore, if a deep neural network-based video image encoding and decoding scheme is to be used in an actual product or service, this information needs to be communicated to the decoder.
  • embodiments of the present application provide an encoding and decoding method and related equipment.
  • the encoding end can compare the original image and the reconstructed image, and obtain the fidelity information of the reconstructed image (including the fidelity information of each image area in the reconstructed image) , and carry the fidelity information in the compressed code stream to inform the decoding end; wherein, the reconstructed image is the reconstructed image of the original image, that is, the encoded output image.
• any full-reference quality evaluation method, such as MSE (Mean Squared Error), SAD (Sum of Absolute Differences), or SSIM (Structural Similarity), can be used to calculate the distortion intensity value of the reconstructed image relative to the original image, or to obtain an indication of whether synthetically generated false image content exists in the reconstructed image content.
  • Fidelity can be calculated at any granularity, such as the entire image, or any M ⁇ N patch in an image, and so on.
• Fidelity can be calculated separately for any color component; for example, three fidelities are calculated for the three RGB or YUV color components of an image, or the distortion intensities of the three color components can be combined to obtain one fidelity. Even for a traditional hybrid video image coding scheme, the implementation of this application can use both the QP information already obtained at the decoding end and the fidelity of the image area referenced by the current image block in the inter prediction operation to derive the fidelity of the current image block.
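• As a sketch of two of the full-reference metrics named above (MSE and SAD; SSIM omitted for brevity), each computed over one color component of an image or of any block:

```python
import numpy as np

def mse(orig, recon):
    """Mean squared error between an original and a reconstructed component or block."""
    d = orig.astype(np.float64) - recon.astype(np.float64)
    return float(np.mean(d * d))

def sad(orig, recon):
    """Sum of absolute differences between an original and a reconstructed component or block."""
    d = orig.astype(np.int64) - recon.astype(np.int64)
    return int(np.sum(np.abs(d)))
```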
  • FIG. 7 is a flowchart illustrating a process 700 of an encoding method according to an embodiment of the present application.
  • Process 700 may be performed by an encoding apparatus, such as by video encoder 20 .
  • Process 700 is described as a series of steps or operations, and it should be understood that process 700 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 7 .
  • Process 700 includes, but is not limited to, the following steps or operations:
  • the original image is also the image 17 , so the original image is an encoded image; the first code stream is also the encoded image data 21 .
• encoding a fidelity map to obtain a second code stream, where the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image, and the reconstructed image is obtained after decoding the first code stream.
  • the reconstructed image is also the decoded image 231, so the reconstructed image is a decoded image. Since the first code stream is obtained by encoding the original image, the reconstructed image obtained by decoding the first code stream is a reconstructed image of the original image, and the reconstructed image has the same image size as the original image.
• the fidelity map may be calculated from the original image and the reconstructed image, or from a preset area of the original image and a preset area of the reconstructed image. When the fidelity map is calculated from the original image and the reconstructed image, it is used to characterize the fidelity of the entire reconstructed image; when it is calculated from the preset area of the original image and the preset area of the reconstructed image, it is used to represent the fidelity of the preset area of the reconstructed image.
  • the preset area of the original image is an image of a certain area in the original image, which can be an image block;
  • the preset area of the reconstructed image is an image of a certain area in the reconstructed image, which can also be an image block.
  • the first code stream and the second code stream need to be transmitted from the encoding end to the decoding end, and the first code stream and the second code stream can be combined and transmitted from the encoding end to the decoding end.
  • the first code stream and the second code stream can also be separately transmitted from the encoding end to the decoding end.
• the position of the preset area in the original image and/or the position of the preset area in the reconstructed image needs to be transmitted from the encoding end to the decoding end. When the preset area is a rectangle, the position of the preset area can be represented by the coordinates of the preset area, and the coordinates of the preset area are usually expressed as the coordinates of the upper-left luminance pixel of the preset area. The position of the preset area in the original image and/or in the reconstructed image can be combined with at least one of the first code stream and the second code stream and then transmitted from the encoding end to the decoding end; alternatively, the position information may not be merged with the first code stream or the second code stream, and is transmitted from the encoding end to the decoding end separately.
• the reconstruction map of the fidelity map can be obtained after the second code stream is decoded.
• the reconstruction map of the fidelity map obtained by decoding may be the same as the fidelity map, or may be different from the fidelity map. Specifically, if the encoding is lossless encoding, the reconstruction map of the fidelity map is the same as the fidelity map; if the encoding is lossy encoding, the reconstruction map of the fidelity map includes the coding distortion generated by encoding the fidelity map.
• In this way, the original image is encoded to obtain the first code stream, and the fidelity map is encoded to obtain the second code stream, where the fidelity map is used to represent the distortion (including differences) between at least a partial area of the original image and at least a partial area of the reconstructed image.
• The decoding end decodes the first code stream to obtain the reconstructed image of the original image, and decodes the second code stream to obtain the reconstruction map of the fidelity map.
• If the encoding is lossless encoding, the reconstruction map of the fidelity map is the same as the fidelity map; if the encoding is lossy encoding, the reconstruction map of the fidelity map includes the encoding distortion generated by encoding the fidelity map. In either case, the reconstruction map of the fidelity map can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image; therefore, the embodiment of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
• In a possible implementation, the method further includes: dividing the original image into a plurality of first image blocks and dividing the reconstructed image into a plurality of second image blocks, where the division strategy for dividing the original image is the same as the division strategy for dividing the reconstructed image, and the plurality of first image blocks correspond to the plurality of second image blocks one-to-one; or dividing the preset area of the original image into a plurality of first image blocks and dividing the preset area of the reconstructed image into a plurality of second image blocks, where the plurality of first image blocks correspond to the plurality of second image blocks one-to-one; and calculating, for any second image block, the fidelity value of that second image block according to the second image block and the first image block corresponding to it, where the fidelity value of the second image block is used to represent the distortion between the second image block and the first image block corresponding to it; the fidelity map includes the fidelity values of the plurality of second image blocks.
• the position of any second image block in the reconstructed image is the same as the position of the first image block corresponding to that second image block in the original image; or the position of the preset area of the original image in the original image is the same as the position of the preset area of the reconstructed image in the reconstructed image, and the position of any second image block in the preset area of the reconstructed image is the same as the position of the first image block corresponding to that second image block in the preset area of the original image.
  • the division strategy for dividing the original image is the same as the dividing strategy for dividing the reconstructed image
  • the dividing strategy for dividing the preset area of the original image is the same as the dividing strategy for dividing the preset area of the reconstructed image.
• the size of any first image block is the same as the size of the corresponding second image block, and the position in the original image of any first image block obtained by division is the same as the position of the corresponding second image block in the reconstructed image; when the image block is a rectangle, the position of the image block can be represented by the coordinates of the image block, and the coordinates of the image block are usually expressed as the coordinates of the upper-left luminance pixel of the image block.
• the division may be performed according to the size of a basic unit, so that all first image blocks obtained by division have the same size and all second image blocks obtained by division have the same size; the size of the first image blocks and the size of the second image blocks are both the size of the basic unit.
• the size of the basic unit can be determined according to the size of the original image or the reconstructed image, or according to the number of first image blocks or second image blocks to be divided; the minimum size of the basic unit can be 1×1 pixel, and the maximum size can be the size of the original or reconstructed image.
  • the fidelity can be calculated separately for any color component, for example, three fidelities are calculated for the three color components of RGB or YUV of an image, or the distortion intensities of the three color components can be combined to obtain a Fidelity.
• when fidelity is calculated separately for each color component, the fidelity map is a three-dimensional array; when the three color components are fused to calculate one fidelity, the fidelity map is a two-dimensional array.
  • the fidelity map computed for a reconstructed image or a preset region of the reconstructed image is a (W/M) ⁇ (H/N) two-dimensional array, or a (W/M) ⁇ (H/ N) ⁇ C three-dimensional array, where W and H represent the width and height of the original image or the reconstructed image, W and H represent the width and height of the preset area of the original image or the preset area of the reconstructed image, and M and N represent The width and height of the basic unit used in the fidelity calculation, and C represents the number of color components of the original or reconstructed image.
• without loss of generality, the case C = 1 is taken below to describe the specific implementation of the embodiments of the present application.
• the size of each image block is M × N.
• the division of the image blocks is as shown in FIG. 9, and the size of the first image blocks and second image blocks obtained by division is M × N.
• the same method is used to divide the preset area of the original image and the preset area of the reconstructed image: the preset area of the original image can be divided into R rows and S columns, for a total of R × S first image blocks, and the preset area of the reconstructed image is divided into R rows and S columns, for a total of R × S second image blocks; the size of the first image blocks and second image blocks obtained by division is M × N.
• the fidelity of any second image block relative to its corresponding first image block can then be calculated, that is, the fidelity values of the R × S second image blocks of the reconstructed image are calculated.
• the fidelity map can then be obtained from the fidelity values of the R × S second image blocks, as illustrated by the sketch below.
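• As an illustration of the division and fidelity calculation described above, the following sketch (Python with NumPy; the helper name fidelity_map_mse and the use of MSE as the quality metric are illustrative assumptions, not a normative implementation) divides one color component of the original image and of the reconstructed image into R rows and S columns of M × N basic units and computes the R × S fidelity map:

    import numpy as np

    def fidelity_map_mse(original, reconstructed, M, N):
        # original and reconstructed: H x W arrays of one color component (C = 1)
        H, W = original.shape
        R, S = H // N, W // M  # R rows and S columns of M x N basic units
        fmap = np.empty((R, S), dtype=np.float64)
        for i in range(R):
            for j in range(S):
                blk_o = original[i*N:(i+1)*N, j*M:(j+1)*M].astype(np.float64)
                blk_r = reconstructed[i*N:(i+1)*N, j*M:(j+1)*M].astype(np.float64)
                # fidelity value of the second image block in row i, column j
                fmap[i, j] = np.mean((blk_o - blk_r) ** 2)
        return fmap

• The element fmap[i, j] occupies the same relative position in the fidelity map as its second image block occupies in the reconstructed image (or in the preset area of the reconstructed image), matching the position rule described above.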
• the decoding is performed by a decoder or a decoding device; the decoder or decoding device stores the size of the first image block and/or the second image block; or, the decoder or decoding device stores the number of first image blocks and/or the number of second image blocks; or, the size of the first image block and/or the second image block is an input of the decoding; or, the number of first image blocks and/or the number of second image blocks is an input of the decoding; or, the decoder or decoding device stores the size and/or number of the basic unit; or, the size and/or number of the basic unit is an input of the decoding.
• in other words, the decoding end stores the values of M and N or the values of R and S, or the values of M and N or the values of R and S are an input of the decoding; this ensures that, after the decoding end decodes the reconstructed image and the reconstruction map of the fidelity map, it knows which image block in the reconstructed image the value of any element in the reconstruction map of the fidelity map characterizes.
• the division of the original image in FIG. 8 and the division of the reconstructed image in FIG. 9 use uniform division, so all first image blocks and second image blocks obtained have the same size. It can be understood that the original image or the reconstructed image can also be divided non-uniformly; in that case, the sizes of the first image blocks or second image blocks obtained by division may differ from one another, and the decoding end needs to know the size and position of every first image block and/or second image block.
• the size of the original image and the size of the reconstructed image are the same, and the size and position of the preset area in the original image are the same as those in the reconstructed image; the original image is divided into multiple first image blocks and the reconstructed image is divided into multiple second image blocks according to the same division strategy, or the preset area of the original image is divided into multiple first image blocks and the preset area of the reconstructed image is divided into multiple second image blocks according to the same division strategy; the sizes of all first image blocks are the same, the sizes of all second image blocks are the same, and the sizes of the first image blocks and the second image blocks are also the same. Therefore, the first image blocks and second image blocks can serve as the basic units of the fidelity calculation: the fidelity value of any second image block can be calculated from that second image block and its corresponding first image block, the fidelity values of the multiple second image blocks are the fidelity values of the corresponding areas of the reconstructed image, and a fidelity map can be obtained from the fidelity values of the multiple second image blocks. When the first image blocks are obtained by dividing the original image and the second image blocks are obtained by dividing the reconstructed image, the fidelity map is used to characterize the fidelity of the reconstructed image; when the first image blocks are obtained by dividing the preset area of the original image and the second image blocks are obtained by dividing the preset area of the reconstructed image, the fidelity map is used to characterize the fidelity of the preset region of the reconstructed image.
• the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, and the value of any first element of the plurality of first elements is the fidelity value of the second image block corresponding to that first element; the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of its corresponding second image block in the preset area of the reconstructed image.
• the first element may also be referred to as a pixel point of the fidelity map.
• the construction process of the fidelity map is: determining a plurality of first elements according to the plurality of second image blocks, wherein the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements and the value of any first element is the fidelity value of its corresponding second image block; and obtaining the fidelity map according to the plurality of first elements, wherein the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of its corresponding second image block in the preset area of the reconstructed image.
• the image is divided into basic units for the fidelity calculation, so the fidelity map has as many first elements as there are basic units; that is, there are as many first elements in the fidelity map as there are first image blocks or second image blocks. Any first element has two attributes: its fidelity value and the position of that fidelity value in the fidelity map. Therefore, any first element in the fidelity map is used to represent the fidelity value of its corresponding second image block, and the value of any first element in the fidelity map is the fidelity value of its corresponding second image block.
• if the reconstructed image or the preset area of the reconstructed image is divided into R × S second image blocks, the fidelity map is a two-dimensional array with R rows and S columns, for a total of R × S first elements; the R × S second image blocks are in one-to-one correspondence with the R × S first elements, and the value of any first element of the R × S first elements is the fidelity value of its corresponding second image block. If the fidelity map of the entire reconstructed image is calculated, the position of any first element in the fidelity map is the same as the position of its corresponding second image block in the reconstructed image; if the fidelity map of the preset area of the reconstructed image is calculated, the position of any first element in the fidelity map is the same as the position of its corresponding second image block in the preset area of the reconstructed image. That is, the first element in the i-th row and j-th column of the fidelity map corresponds to the second image block in the i-th row and j-th column of the reconstructed image or of the preset area of the reconstructed image, and the value of the first element in the i-th row and j-th column of the fidelity map is the fidelity value of the second image block in the i-th row and j-th column of the reconstructed image or of the preset area of the reconstructed image, where 1 ≤ i ≤ R, 1 ≤ j ≤ S.
• each grid cell in FIG. 10 represents a first element, and the number in any grid cell represents the value of that first element, that is, the fidelity value of the second image block corresponding to that first element;
• the value of the first element in the i-th row and j-th column of FIG. 10 is the fidelity value of the second image block in the i-th row and j-th column of FIG. 9, where 1 ≤ i ≤ 4, 1 ≤ j ≤ 7.
• in summary, the fidelity map is a two-dimensional array: the reconstructed image is divided to obtain a plurality of second image blocks, and the fidelity map is obtained from the fidelity values of those second image blocks. That is, a plurality of first elements are determined from the plurality of second image blocks, the two are in one-to-one correspondence, and the value of any first element is the fidelity value of its corresponding second image block; the position of any first element in the fidelity map is determined according to the position of its corresponding second image block, and is specifically the same as the position of that second image block in the reconstructed image or in the preset area of the reconstructed image. Each first element thus characterizes the fidelity of the region at the corresponding position in the reconstructed image or in the preset region of the reconstructed image, which makes the fidelity map suitable for characterizing the distortion intensity information of the encoded image.
• when the second image block includes three color components, the fidelity map is a three-dimensional array including the three dimensions of color component, width and height; the two-dimensional array under any color component A in the fidelity map includes a plurality of first elements, the value of any first element of the plurality of first elements is the fidelity value of color component A of the second image block corresponding to that first element, and the position of any first element in the two-dimensional array under color component A is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of its corresponding second image block in the preset area of the reconstructed image.
• the fidelity map is a three-dimensional array including the three dimensions of color component, width and height; the height indicates that the two-dimensional array under any color component A includes multiple rows of first elements, the width indicates that the two-dimensional array under any color component A includes multiple columns of first elements, the number of first elements is equal to the product of the width and the height, and color component A is any one of the three color components.
• the original image or the reconstructed image includes three color components; when calculating the fidelity map, a two-dimensional fidelity array is calculated under each color component, and the two-dimensional arrays under the three color components constitute the three-dimensional fidelity map. A first element in the two-dimensional array under any color component A of the three-dimensional fidelity map represents, under color component A, the fidelity of the region at the corresponding position in the reconstructed image or in the preset region of the reconstructed image; the three-dimensional fidelity map can therefore characterize the distortion intensity information of the three color components of the encoded image. A sketch extending the per-component calculation follows.
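• Under the same assumptions as the earlier sketch, and reusing fidelity_map_mse from it, the per-component case can be sketched by stacking the three two-dimensional arrays into the three-dimensional fidelity map:

    def fidelity_map_3d(original, reconstructed, M, N):
        # original / reconstructed: H x W x C arrays (e.g. C = 3 for YUV or RGB)
        C = original.shape[2]
        planes = [fidelity_map_mse(original[:, :, c], reconstructed[:, :, c], M, N)
                  for c in range(C)]
        return np.stack(planes, axis=-1)  # R x S x C three-dimensional array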
• the encoding of the fidelity map to obtain the second code stream includes: performing entropy encoding on any first element to obtain the second code stream, where the entropy encoding of one first element is independent of the entropy encoding of the other first elements; or, determining the probability distribution of the value of any first element, or a predicted value of any first element, according to the values of at least one already-encoded first element, and entropy encoding that first element according to the probability distribution of its value or according to its predicted value, so as to obtain the second code stream; wherein the second code stream includes the code streams of the plurality of first elements.
• for any first element in the fidelity map: if there is no encoded first element yet, entropy encoding is performed directly on that first element to obtain its code stream; if there are encoded first elements, the probability distribution of the value of that first element, or a predicted value of that first element, is determined according to the values of at least one encoded first element, and that first element is entropy encoded according to the probability distribution of its value or according to its predicted value, so as to obtain its code stream.
• in the entropy encoding process, the value of an encoded first element can be used to determine the probability distribution of the value of the first element currently being encoded; since the value of a first element is the fidelity value of a second image block, this means using already-encoded fidelity values to determine the probability distribution of the currently encoded fidelity value, which helps improve entropy encoding efficiency.
• the input for arithmetic coding of a symbol is the symbol probability distribution.
• any encoded fidelity value can be used to determine the probability distribution of the currently encoded fidelity value.
• for example, the encoded fidelity values at the positions to the left, above, and upper left of the currently encoded fidelity value in the fidelity map are used to determine the probability distribution of the currently encoded fidelity value, to help improve entropy coding efficiency.
• different Huffman code tables can be selected according to the determined probability distribution of the currently encoded fidelity value, or the sub-interval division mode of the arithmetic encoding can be determined from it.
• alternatively, the values of encoded first elements may be used to determine a predicted value of the current first element, that is, encoded fidelity values may be used to determine a predicted value of the currently encoded fidelity value; entropy coding is then performed on the difference between the value of the first element and its predicted value, so as to obtain the code stream of that first element.
• in summary, the fidelity map is encoded to obtain the second code stream: any first element in the fidelity map is encoded to obtain the code stream of that first element, and the second code stream includes the code streams of all first elements in the fidelity map. In the entropy encoding process, the values of encoded first elements can be used to determine the probability distribution of the value of the currently encoded first element, for example using the values of the adjacent first elements to the left, above, and upper left of the currently encoded first element to determine the probability distribution of its value or its predicted value; the currently encoded first element is then encoded according to that probability distribution or predicted value, which helps improve entropy coding efficiency. A prediction sketch is given below.
• the above is only an example of an entropy coding method; in the entropy coding process of the present application, various existing entropy coding techniques can be used, such as Huffman coding, arithmetic coding, context-modeling arithmetic coding (a context modeling method used to assist arithmetic coding), binary arithmetic coding, and so on; this application does not specifically limit this.
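• As a minimal sketch of the neighbour-based prediction described above (the planar left + above − upper-left predictor is one possible choice, not mandated by this application), the residuals that would be passed to an entropy coder can be formed as follows:

    import numpy as np

    def prediction_residuals(fmap):
        # fmap: R x S array of integer fidelity values, processed in raster-scan
        # order so that left, above and upper-left neighbours are already coded
        R, S = fmap.shape
        residuals = np.empty_like(fmap)
        for i in range(R):
            for j in range(S):
                left  = fmap[i, j-1] if j > 0 else 0
                above = fmap[i-1, j] if i > 0 else 0
                ul    = fmap[i-1, j-1] if i > 0 and j > 0 else 0
                pred = left + above - ul  # simple planar predictor (assumption)
                residuals[i, j] = fmap[i, j] - pred  # entropy encode this difference
        return residuals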
• the encoding of the fidelity map to obtain the second code stream may also include: quantizing any first element to obtain a quantized first element, and encoding the quantized first element to obtain the second code stream; wherein the second code stream includes the code streams of the plurality of first elements.
• the quantization step sizes used for quantizing different first elements of the plurality of first elements may be the same or different.
• the decoding is performed by a decoder or a decoding device; the decoder or decoding device stores the quantization step size for quantizing any first element of the plurality of first elements, or the quantization step size for quantizing any first element of the plurality of first elements is an input of the decoding.
• in this way, the decoding end can recover the fidelity value at its original magnitude.
• in summary, the fidelity map is encoded to obtain the second code stream: any first element in the fidelity map is quantized and then encoded to obtain the code stream of that first element, which reduces the coding overhead. A quantization sketch follows.
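• A minimal sketch of the quantization step (the function name and rounding rule are assumptions; the step size corresponds to the quantization step signalled to, or stored at, the decoding end):

    import numpy as np

    def quantize_fidelity(fmap, q_step):
        # scaling the fidelity values down by the quantization step reduces
        # their dynamic range and hence the coding overhead of the fidelity map
        return np.rint(fmap / q_step).astype(np.int64)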
  • FIG. 11 is a flowchart illustrating a process 1100 of a decoding method according to an embodiment of the present application.
  • Process 1100 may be performed by a decoding device, such as by video decoder 30 .
  • Process 1100 is described as a series of steps or operations, and it should be understood that process 1100 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 11 .
  • Process 1100 includes, but is not limited to, the following steps or operations:
  • the first code stream is a code stream obtained by encoding the original image, that is, the encoded image data 21; the reconstructed image of the original image is the decoded image 331, hereinafter referred to as the reconstructed image.
• regarding the reconstruction map of the fidelity map: since the reconstruction map of the fidelity map is obtained by decoding the code stream of the fidelity map, it has the same size and properties as the fidelity map. When the fidelity map is used to characterize the fidelity of the entire reconstructed image, the reconstruction map of the fidelity map is also used to characterize the fidelity of the entire reconstructed image; when the fidelity map is used to characterize the fidelity of a preset area of the reconstructed image, the reconstruction map of the fidelity map is also used to characterize the fidelity of that preset area of the reconstructed image.
• the decoding end needs to obtain from the encoding end the position of the preset area in the original image and/or the position of the preset area in the reconstructed image, so that after decoding the reconstruction map of the fidelity map, the decoding end knows which specific region of the reconstructed image the reconstruction map of the fidelity map characterizes.
• the reconstruction map of the fidelity map may be obtained after the first code stream has been decoded.
• the reconstruction map of the fidelity map obtained by decoding may be the same as the fidelity map, or may differ from it. Specifically, if the encoding is lossless encoding, the reconstruction map of the fidelity map is the same as the fidelity map; if the encoding is lossy encoding, the reconstruction map of the fidelity map includes the coding distortion generated by encoding the fidelity map.
  • the original image is encoded to obtain the first code stream
  • the fidelity map is encoded to obtain the second code stream
  • the fidelity map is used to represent at least a part of the original image and the reconstructed image. Distortion between at least some regions, wherein the distortion includes differences; the decoding end decodes the first code stream to obtain a reconstructed image of the original image, and the decoding end decodes the second code stream to obtain a reconstructed image of the fidelity map; If the encoding is lossless encoding, the reconstructed image of the fidelity map is the same as the fidelity map; if the encoding is lossy encoding, the reconstructed image of the fidelity map includes the encoding distortion generated by encoding the fidelity map. Therefore, the reconstruction map of the fidelity map can be used to represent the distortion between at least part of the original image and at least part of the reconstructed image; therefore, the embodiment of the present application can obtain the distortion intensity information of the encoded image at the de
  • the fidelity map includes a fidelity value of any second image block in the plurality of second image blocks, and the fidelity value of any second image block is used for represents the distortion between the any second image block and the original image block corresponding to the any second image block.
  • the plurality of second image blocks are obtained by dividing the reconstructed image, the plurality of second image blocks are in one-to-one correspondence with a plurality of original image blocks, and the original image blocks are image blocks in the original image , for example, the original image block is the aforementioned first image block; the multiple original image blocks are obtained by dividing the original image, and the multiple second image blocks are obtained by dividing the reconstructed image.
  • the division strategy of the original image is the same as the division strategy of the reconstructed image; or the multiple original image blocks are obtained by dividing a preset area of the original image, and the multiple second image blocks are Obtained by dividing the preset area of the reconstructed image, the division strategy for dividing the preset area of the original image is the same as the division strategy for dividing the preset area of the reconstructed image.
• the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, and the value of any first element of the plurality of first elements is the fidelity value of the second image block corresponding to that first element; the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of its corresponding second image block in the preset area of the reconstructed image.
• when the second image block includes three color components, the fidelity map is a three-dimensional array including the three dimensions of color component, width and height; the two-dimensional array under any color component A in the fidelity map includes a plurality of first elements, the value of any first element of the plurality of first elements is the fidelity value of color component A of the second image block corresponding to that first element, and the position of any first element in the two-dimensional array under color component A is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of its corresponding second image block in the preset area of the reconstructed image.
• the decoding of the second code stream to obtain the reconstruction map of the fidelity map includes: decoding the second code stream to obtain the reconstruction fidelity value of any first element, and obtaining the reconstruction map of the fidelity map according to the reconstruction fidelity value of any first element.
• the reconstruction fidelity value of a first element is the reconstructed value of that first element.
• in one implementation, the position of the reconstruction fidelity value of any first element in the reconstruction map of the fidelity map is determined according to the position of the second image block corresponding to that first element in the reconstructed image.
• in another implementation, the second code stream includes the position of any first element in the fidelity map, and the position of the reconstruction fidelity value of any first element in the reconstruction map of the fidelity map is determined according to the position of that first element in the fidelity map.
• any first element in the reconstruction map of the fidelity map has two attributes: the reconstruction fidelity value of that first element and the position of that reconstruction fidelity value in the reconstruction map of the fidelity map.
• if the encoding is lossless, the reconstruction fidelity value of any first element is the value of that first element.
• for example, the values of a, b, c, d, e, f, g, and h in the reconstruction map of the fidelity map are 15, 67, 99, 134, 16, 76, 123, and 187, respectively, as shown in FIG. 10 and FIG. 12, where each grid cell in FIG. 12 represents a first element and the number in any grid cell represents the reconstruction fidelity value of that first element.
• if the encoding is lossy, the reconstruction fidelity value of any first element is the sum of the value of that first element and the coding distortion.
• similarly to the encoding side, the reconstructed image or the preset area of the reconstructed image can be divided into multiple second image blocks, with the multiple first elements in one-to-one correspondence with the multiple second image blocks, and the value of any first element being the fidelity value of its corresponding second image block. Since the reconstruction map of the fidelity map contains the reconstruction fidelity values of the multiple first elements, and these reconstruction fidelity values are in one-to-one correspondence with the multiple first elements, the reconstruction fidelity values of the multiple first elements are also in one-to-one correspondence with the multiple second image blocks, and the reconstruction fidelity value of any first element is the fidelity value of its corresponding second image block.
• if the reconstructed image or the preset area of the reconstructed image is divided into R × S second image blocks, the reconstruction map of the fidelity map is a two-dimensional array with R rows and S columns, for a total of R × S first elements; the R × S second image blocks are in one-to-one correspondence with the R × S first elements, and the reconstruction fidelity value of any first element among the R × S first elements is the fidelity value of its corresponding second image block.
• if the reconstruction map of the fidelity map of the entire reconstructed image is calculated, the position of any first element among the R × S first elements in the reconstruction map of the fidelity map is the same as the position of its corresponding second image block in the reconstructed image; if the reconstruction map of the fidelity map of the preset area of the reconstructed image is calculated, the position of any first element among the R × S first elements in the reconstruction map of the fidelity map is the same as the position of its corresponding second image block in the preset area of the reconstructed image.
• that is, the first element in the i-th row and j-th column of the reconstruction map of the fidelity map corresponds to the second image block in the i-th row and j-th column of the reconstructed image or of the preset area of the reconstructed image, and the reconstruction fidelity value of the first element in the i-th row and j-th column of the reconstruction map of the fidelity map is the fidelity value of the second image block in the i-th row and j-th column of the reconstructed image or of the preset area of the reconstructed image, where 1 ≤ i ≤ R, 1 ≤ j ≤ S.
• the second code stream includes the code stream of any first element in the fidelity map, so the reconstruction fidelity value of any first element can be obtained by decoding the second code stream;
• if the encoding is lossless, the reconstruction fidelity value of the first element is the value of that first element;
• if the encoding is lossy, the reconstruction fidelity value of the first element includes the coding distortion generated by encoding that first element, that is, the reconstruction fidelity value of the first element is the sum of the value of that first element and the coding distortion. The reconstruction map of the fidelity map can thus be obtained according to the reconstruction fidelity values of the first elements, and can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image; the embodiment of the present application can therefore obtain the distortion intensity information of the encoded image at the decoding end.
• the second code stream may be obtained by encoding the quantized first elements; in this case, the decoding of the second code stream to obtain the reconstruction map of the fidelity map includes: decoding the second code stream to obtain the reconstruction fidelity value of the quantized first element; performing inverse quantization on the reconstruction fidelity value of the quantized first element to obtain the reconstruction fidelity value of any first element; and obtaining the reconstruction map of the fidelity map according to the reconstruction fidelity value of any first element.
• the quantization step sizes used for quantizing different first elements of the plurality of first elements may be the same or different; accordingly, the quantization step sizes used for inverse quantization of different first elements may be the same or different, and the quantization step size used for inverse quantization of any first element is the same as the quantization step size used for quantizing that first element.
• the decoding is performed by a decoder or a decoding device; the decoder or decoding device stores the quantization step size for quantizing any first element of the plurality of first elements; or, the decoder or decoding device stores the quantization step size for performing inverse quantization on any first element of the plurality of first elements; or, the quantization step size for quantizing any first element of the plurality of first elements is an input of the decoding; or, the quantization step size for performing inverse quantization on any first element of the plurality of first elements is an input of the decoding. In this way, the decoding end can recover the fidelity value at its original magnitude.
• in summary, the encoding end may quantize a first element and then encode it to obtain the code stream of that first element, so the code stream of a first element received by the decoding end may have been obtained by encoding the quantized first element. In this case, the reconstruction fidelity value of the quantized first element is obtained by decoding the second code stream, and that value must then be inverse quantized to obtain the reconstruction fidelity value of the first element; the reconstruction map of the fidelity map can then be obtained from the reconstruction fidelity values of the first elements. In this way, the distortion intensity information of the encoded image can be obtained at the decoding end while also reducing the coding overhead. A decoder-side sketch follows.
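• A decoder-side sketch under the same assumptions as the earlier quantization sketch (the parsed values and their raster-scan order are illustrative):

    import numpy as np

    def reconstruct_fidelity_map(decoded_q_values, R, S, q_step):
        # decoded_q_values: quantized reconstruction fidelity values parsed from
        # the second code stream, one per first element, in raster-scan order
        fmap_rec = np.empty((R, S), dtype=np.float64)
        for idx, q in enumerate(decoded_q_values):
            i, j = divmod(idx, S)
            fmap_rec[i, j] = q * q_step  # inverse quantization
        return fmap_rec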
  • the method further includes: processing the reconstructed image or a preset area of the reconstructed image according to the reconstruction map of the fidelity map, so as to improve the reconstructed image or the image quality of a preset area of the reconstructed image; or determining whether to apply the reconstructed image according to the reconstruction map of the fidelity map.
• the image quality of the reconstructed image or of the preset area of the reconstructed image can be determined according to the reconstruction map of the fidelity map, and an image quality enhancement algorithm can then be used to process the reconstructed image or the preset area of the reconstructed image, so as to improve its image quality; or the fidelity of the reconstructed image or of the preset area of the reconstructed image is determined according to the reconstruction map of the fidelity map, and whether to apply the reconstructed image is decided accordingly, for example, when the fidelity value of the reconstructed image or of the preset area of the reconstructed image is lower than a preset fidelity threshold, it is determined not to apply the reconstructed image.
• when processing the reconstructed image or the preset area of the reconstructed image with a learning-based post-processing enhancement algorithm, images can be divided into B distortion ranges according to the degree of image distortion and a model trained for each range, obtaining multiple image enhancement models, where B is an integer greater than 1.
• the decoding end can then determine the degree of distortion of different regions of the reconstructed image according to the fidelity map and select different models for image enhancement in different regions; using a trained model that better matches the distortion distribution yields a better picture quality improvement.
• another example is a single-model image enhancement algorithm: during training and use, the quantization parameter map can be taken as additional input information to the network; under the guidance of the quantization parameter map, the network can output images with a better enhancement effect.
• in summary, the decoding end may process the reconstructed image or the preset area of the reconstructed image according to the reconstruction map of the fidelity map, so as to improve the image quality of the reconstructed image or of the preset area of the reconstructed image; or it may determine whether to apply the reconstructed image according to the reconstruction map of the fidelity map; this facilitates the application of the reconstructed image. A model-selection sketch follows.
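• A sketch of selecting one of B enhancement models by distortion range (the thresholds, the model list, and the per-region application are all illustrative assumptions):

    def select_enhancement_model(fidelity_value, thresholds, models):
        # thresholds: B - 1 ascending boundaries splitting the distortion scale
        # into B ranges; models: one trained enhancement model per range
        for b, t in enumerate(thresholds):
            if fidelity_value <= t:
                return models[b]
        return models[-1]  # worst distortion range

• Applied per region, each second image block of the reconstructed image would be enhanced by the model whose distortion range contains the corresponding reconstruction fidelity value.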
  • the decoding method described in FIG. 11 is an inverse process of the encoding method described in FIG. 7 , and the steps or operations described in FIG. 11 may refer to the related descriptions in the steps or operations described in FIG. 7 .
• The technical solutions provided in FIG. 7 to FIG. 12 are further introduced below through the whole process of video image encoding and decoding.
• for any video image to be encoded (that is, the original image) in the video sequence, input it into the video encoder for the encoding operation, and output the first code stream of the video image and the reconstructed image of the video image after the encoding operation.
• the video image to be encoded is often referred to as the encoded image when the encoding operation is performed; for the specific encoding operation, reference can be made to the foregoing description, which is not repeated here.
• the above encoding operation can be any video image encoding method, such as existing domestic and international standard schemes or industry schemes including H.264, H.265, H.266, AVS2, AVS3 and AV1, a video image coding scheme based on deep neural networks that is currently under research, or any other scheme that can compress the input video image.
  • a reconstructed image refers to an image that contains coding distortions after an encoding operation.
  • Typical coding artifacts include blocking, ringing, blurring, and the like.
  • Some encoding and decoding schemes based on deep learning technology, such as those based on Generative adversarial network (GAN), can also generate new encoding distortions such as false image content or texture details.
  • the above method can be used to process each video image in a video sequence, so that the code stream of the entire video sequence and the reconstructed image of each video image in the video sequence can be generated.
  • the code stream includes the first code stream of each video image in the video sequence.
• the fidelity map can be calculated for each video image in the video sequence, so as to obtain the fidelity map of each video image in the video sequence; fidelity maps can also be calculated for only some of the video images in the video sequence, so as to obtain the fidelity maps of those video images. For example, only intra-coded images may be selected and their fidelity maps calculated; or only scene-switch frames, that is, the first frame after a scene switch in the video sequence, may be selected and their fidelity maps calculated; or only key frames, that is, the lowest temporal layer pictures when coding with a hierarchical B-frame reference structure, may be selected and their fidelity maps calculated; other selection rules and combinations of multiple selection rules are also possible, as sketched below.
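• One possible combination of these selection rules, as a sketch (the frame attributes is_intra, is_scene_cut and temporal_layer are hypothetical):

    def needs_fidelity_map(frame):
        # compute a fidelity map only for intra-coded images, scene-switch
        # frames, or key frames (lowest temporal layer in a hierarchical
        # B-frame reference structure)
        return frame.is_intra or frame.is_scene_cut or frame.temporal_layer == 0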
  • the fidelity map of a coded image can be calculated according to the preset basic unit size.
• for example, an encoded image can be divided into multiple image blocks of size M × N, and each image block is used as a basic unit for calculating fidelity; M and N are the width and height of the image block, respectively, and the values of M and N can be equal or unequal.
• the size of the basic unit can be preset and stored in the encoder and the decoder; the size of the basic unit can also be flexibly set by the encoder according to the richness of the content of all or some of the video images in the video sequence, with the decoder notified of the size of the basic unit that was set.
• the number of basic units in an encoded image may also be specified, for example by specifying the number of basic units in the horizontal and vertical directions of the encoded image; because an encoded image is divided into a group of basic units (that is, a plurality of basic units) in a uniform division manner, the two representations above are equivalent.
• the fidelity can be calculated using quality evaluation metrics such as mean squared error (MSE), sum of absolute differences (SAD), or the structural similarity index measurement (SSIM).
• both the original image and the reconstructed image are evenly divided according to the size of the basic unit or the number of basic units, so as to divide the original image into multiple first image blocks and the reconstructed image into multiple second image blocks; the fidelity of any second image block relative to its corresponding first image block, referred to as the fidelity value, is then calculated, so that the fidelity values of the multiple second image blocks are obtained; the fidelity map of the reconstructed image can then be obtained from the fidelity values of the multiple second image blocks.
• the fidelity map is an array that includes a plurality of first elements, the plurality of first elements are in one-to-one correspondence with the plurality of second image blocks obtained by dividing the reconstructed image, any first element of the plurality of first elements is used to represent the fidelity value of its corresponding second image block, the value of any first element is the fidelity value of its corresponding second image block, and the position of any first element in the array is the same as the position of its corresponding second image block in the reconstructed image.
• the specified quantization step size can be used to scale the values of the first elements in the fidelity map, that is, to scale the fidelity values of the second image blocks corresponding to the first elements in the fidelity map.
  • the purpose of quantization is to reduce the dynamic range of fidelity values in the fidelity map to reduce the coding overhead of the fidelity map. Quantization will bring distortion to the fidelity value. Therefore, when setting the quantization step size at the encoding end, the corresponding quantization step size can be set according to the accuracy requirements of the decoding end for the fidelity of the reconstructed image.
• the same quantization step size can be set for all coded images of a video sequence, that is, the fidelity maps of all coded images are quantized with the same quantization step size; or one quantization step size can be set for a part of the coded images of the video sequence and another quantization step size for another part, that is, the fidelity maps of some of the coded images are quantized with the same quantization step size; or different quantization step sizes can be set for different coded images, that is, the fidelity maps of different coded images are quantized with different quantization step sizes; different quantization step sizes can even be set for different image blocks of each coded image in the video sequence, that is, the fidelity values of different coding blocks are quantized using different quantization step sizes.
• the quantization step size set by the encoding end needs to be communicated to the decoding end; that is, the encoding end needs to inform the decoding end which quantization step size is used to quantize the fidelity map of each encoded image, or which quantization step size is used to quantize the fidelity value of each coding block.
• alternatively, preset quantization step sizes can be stored at both the encoding end and the decoding end, so that both ends know which quantization step size is used when the fidelity map of an encoded image is quantized.
• fidelity_metric_idc is an indication of the quality evaluation method used to calculate the fidelity, and indicates which evaluation method in a preset quality evaluation method list is used to calculate the fidelity map; the evaluation method list is preset and can include quality evaluation indicators such as MSE, SSE, and SSIM. base_unit_width and base_unit_height are the width and height of the basic unit used for calculating fidelity, respectively; quantization_step is the quantization step size; fidelity_value is the fidelity value calculated for one basic unit. A serialization sketch follows.
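• A sketch of serializing these syntax elements in the order suggested by Table 5 (the bitstream writer bs and its write_uint method are hypothetical; the field widths and binarization are not specified here):

    def write_fidelity_payload(bs, fmap_q, fidelity_metric_idc, base_unit_width,
                               base_unit_height, quantization_step):
        bs.write_uint(fidelity_metric_idc)  # index into the preset quality evaluation method list
        bs.write_uint(base_unit_width)
        bs.write_uint(base_unit_height)
        bs.write_uint(quantization_step)
        for v in fmap_q.flatten():          # one fidelity_value per basic unit, raster order
            bs.write_uint(int(v))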
• the encoding end can calculate the fidelity map for the entire encoded image, or for a preset area in the encoded image; the process of calculating the fidelity map for a preset area in the encoded image is the same as the process of calculating it for the entire encoded image and is not repeated here. If the encoding end calculates the fidelity map for a preset area in the encoded image, it needs to inform the decoding end of the position of the preset area in the encoded image, or of the coordinates of the preset area in the encoded image.
• when encoding the fidelity map, each fidelity_value in the fidelity map of the currently encoded image can be traversed and compression-encoded to obtain the code streams of all fidelity_value fields; the second code stream obtained by encoding the fidelity map includes the code streams of all fidelity_value fields.
• a value can be binarized by means of fixed-length coding, exponential Golomb coding, and so on, to obtain a binary string, and entropy encoding is then performed on each binary character in the string; or the value can be entropy encoded directly using methods such as Huffman coding or multi-value arithmetic coding. An exponential Golomb sketch follows.
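• A sketch of the order-0 exponential Golomb binarization mentioned above for a non-negative integer value (this is the standard construction; returning the bits as a string is an illustrative simplification):

    def exp_golomb_binarize(value):
        bits = bin(value + 1)[2:]        # binary representation of value + 1
        prefix = '0' * (len(bits) - 1)   # leading zeros, one fewer than the bit length
        return prefix + bits

    # e.g. 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'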
• the coded fidelity values in the fidelity map can be used to determine the probability distribution of the currently coded fidelity value or a predicted value of the currently coded fidelity value, for example using the spatially adjacent coded fidelity values to the left, above, or upper left of the currently coded fidelity value, to help improve entropy coding efficiency.
  • different Huffman code tables can be selected according to the obtained probability distribution of the fidelity value of the current encoding or the predicted value of the fidelity value of the current encoding, or the sub-interval division method of the arithmetic encoding can be determined.
• since the fidelity map is represented as a two-dimensional or three-dimensional array, the fidelity map can be compression-encoded using any existing encoding method for monochrome or color images; in this case it is not necessary to traverse the fidelity values calculated for all basic units as in Table 5, and the fidelity map is instead encoded directly with the existing coding method to obtain the second code stream.
  • the second code stream output by the fidelity map encoding can be embedded in the first code stream output by the original image encoding, or can be independently managed for transmission or storage operations.
• in summary, the encoding end can obtain the first code stream of the original image and the second code stream of the fidelity map, and after obtaining them, transmits the first code stream and the second code stream to the decoding end; the code streams transmitted from the encoding end to the decoding end may be collectively referred to as a compressed code stream.
• after receiving the compressed code stream from the encoding end, the decoding end obtains the first code stream of the original image from the compressed code stream, inputs the first code stream into the video decoder, and obtains the reconstructed image through the decoding operation.
  • the decoding operation of the decoding end on the first code stream is the inverse operation of the encoding operation of the encoding end on the original image, and the reconstructed image decoded by the decoding end is the same as the reconstructed image obtained by the encoding end.
• similarly, after receiving the compressed code stream from the encoding end, the decoding end obtains the second code stream of the fidelity map from the compressed code stream, and decodes the second code stream to obtain the reconstruction map of the fidelity map.
• the decoding operation performed on the second code stream by the decoding end is the inverse of the encoding operation performed on the fidelity map by the encoding end. If the encoding end performs lossless entropy encoding on each fidelity value in the fidelity map, the reconstruction map of the fidelity map obtained by the decoding operation at the decoding end is the same as the fidelity map calculated by the encoding end.
• if the encoding end performs lossy encoding on the fidelity map (for example, quantizing the fidelity values), the reconstruction map of the fidelity map obtained by the decoding operation at the decoding end differs from the fidelity map calculated by the encoding end, because the reconstruction map of the fidelity map obtained at the decoding end contains the coding distortion introduced by the encoding operation on the fidelity map at the encoding end.
• the decoding end judges the signal distortion of the reconstructed image, or of a preset area in the reconstructed image, according to the reconstruction map of the fidelity map, and applies this in different business environments. For example, in a video surveillance scene, the degree of distortion of an image area in the reconstructed image is judged according to the reconstruction map of the fidelity map, and if it is greater than a preset threshold, the reconstructed image is not applied; in a video conference scene, whether what is seen is real or false can be judged based on the fidelity map. For another example, according to the degree of distortion of a certain image area in the reconstructed image, one of a set of image enhancement methods may be selected according to a preset rule and applied to that image area to improve the image quality.
  • FIG. 13 is a flowchart illustrating a process 1300 of a decoding method according to another embodiment of the present application.
  • Process 1300 may be performed by a decoding device, such as by video decoder 30 .
  • Process 1300 is described as a series of steps or operations, and it should be understood that process 1300 may be performed in various orders and/or concurrently, and is not limited to the order of execution shown in FIG. 13 .
  • Process 1300 includes, but is not limited to, the following steps or operations:
  • the first code stream is also the encoded image data 21; the reconstructed image is also the decoded image 331, so the reconstructed image is a decoded image; the original image is also the image 17, so the original image is an encoded image.
  • the second image block is a coding unit.
• the first code stream can be obtained by encoding the original image with an encoder specified by any existing mainstream video image coding standard (H.264, H.265, H.266, AVS3, etc.); during the image encoding process, the original image is divided into multiple coding units (that is, coding blocks), and any coding unit of the multiple coding units is encoded to obtain the code stream of that coding unit; the first code stream includes the code streams of the multiple coding units. It should be understood that whether and how the original image is divided into multiple coding units is determined by the encoder.
  • the decoder for decoding the first code stream is any decoder that can perform the decoding operations specified in the aforementioned video image coding and decoding standards.
  • the decoder performs a standard-specified decoding operation on the input first code stream and outputs a reconstructed image of the original image.
• the embodiment of the present application decodes the first code stream and, in addition to outputting the reconstructed image of the original image, also outputs target quantization parameter information obtained in the decoding process, where the target quantization parameter information is the quantization parameter information used in the encoding process of the original image.
• the quantization parameter information includes the quantization parameter values of all or some of the coding units obtained by dividing an original image during encoding, the positions in the original image of all or some of those coding units, and the sizes of all or some of those coding units; the quantization parameter value is the value of the quantization parameter, that is, the quantization parameter used by the quantization unit 208.
• accordingly, the target quantization parameter information includes the quantization parameter values of all or some of the multiple coding units obtained by dividing the original image, as well as the position in the original image and the size of any such coding unit.
  • the position of a coding unit in the original image can be represented by the coordinates of the coding unit, and the coordinates of the coding unit are usually represented as the luminance pixel coordinates of the upper left corner of the coding unit. Since a coding unit is usually a rectangle, its size can be expressed as its width and height, usually in terms of the number of luminance pixels; if a coding unit is square, its size can be expressed only by the side length or area.
• the quantization parameter map is the quantization parameter map of the reconstructed image, a data structure indicating the quantization parameter values of the coding units in the reconstructed image; the quantization parameter value of a coding unit in the reconstructed image is the quantization parameter value of the corresponding coding unit in the original image. It should be understood that the main purpose of the quantization parameter is to perform the inverse quantization operation; at the same time, the quantization parameter itself reflects the signal distortion (fidelity), so the quantization parameter map can be used to characterize the fidelity of the reconstructed image, that is, the quantization parameter map is a form of fidelity map.
  • the quantization parameter map of the reconstructed image constructed according to the quantization parameters can be used to characterize the fidelity of the reconstructed image or can be used to characterize the fidelity of a preset area of the reconstructed image.
  • the quantization parameter map is a two-dimensional array or a three-dimensional array, and the value of the second element in the quantization parameter map is the quantization parameter value of the coding unit; wherein, any second element in the quantization parameter map has two attributes, which are the quantization parameter value and the position of the quantization parameter value in the quantization parameter map. It should be understood that each quantization parameter value in the quantization parameter map represents the distortion degree of a certain image block in the corresponding reconstructed image.
  • the quantization parameter map is also a form of fidelity map. If the quantization parameter map is constructed from the quantization parameter values of the three color components of RGB or YUV in a reconstructed image, the quantization parameter map is a three-dimensional array; if the quantization parameter value after fusion of the three color components of a reconstructed image is used A quantization parameter map is constructed, which is a two-dimensional array. Without loss of generality, in order to simplify the description, the specific implementation is described below by taking the quantization parameter map as a two-dimensional array.
  • the quantization parameter map can be represented in various ways. Some examples are listed below.
• the first representation: the original image is divided into multiple coding units, so that the reconstructed image also includes multiple coding units, and the coding unit is used as the basic unit for constructing the quantization parameter map; a coding unit in the reconstructed image is thus a second image block, and the multiple coding units are the multiple basic units used to construct the quantization parameter map. These multiple basic units correspond one-to-one to multiple second elements in the quantization parameter map, so the multiple coding units correspond one-to-one to the multiple second elements in the quantization parameter map, and the value of any second element in the quantization parameter map is set to the quantization parameter value of its corresponding coding unit.
• the position of any second element in the quantization parameter map is the same as the position of its corresponding coding unit in the reconstructed image.
  • the quantization parameter map may also have the same spatial resolution as the corresponding original image or the reconstructed image, that is, the quantization parameter map has the same size as the original image or the reconstructed image.
• the original image or the reconstructed image includes 6 coding units, so there are 6 grids in the quantization parameter map.
• FIG. 14 only exemplarily depicts the situation in which the quantization parameter map has the same size as the corresponding original image or reconstructed image. It should be understood that the size of the quantization parameter map and the size of the corresponding original image or reconstructed image may also differ by a certain scaling ratio.
  • the second is to divide the reconstructed image into multiple basic units of equal size for constructing the quantization parameter map.
• the basic unit in the reconstructed image is also the second image block; the size of the basic unit should be less than or equal to the size of the smallest coding unit. For any one of the multiple basic units, the quantization parameter value of the coding unit containing that basic unit is used as the quantization parameter value of the basic unit; the multiple basic units in the reconstructed image correspond one-to-one to the multiple second elements in the quantization parameter map.
• the original image and the reconstructed image are each divided into multiple basic units: a basic unit obtained by dividing the original image is called basic unit i, and a basic unit obtained by dividing the reconstructed image is called basic unit j. The multiple basic units i are in one-to-one correspondence with the multiple basic units j, and the multiple basic units j are in one-to-one correspondence with the multiple second elements, so the multiple basic units i are in one-to-one correspondence with the multiple second elements.
• the value of any second element in the plurality of second elements is the quantization parameter value of the corresponding basic unit i, and the quantization parameter value of basic unit i is the quantization parameter value of the coding unit in the original image that contains basic unit i.
• the quantization parameter value of any reconstructed block among the multiple reconstructed blocks in the reconstructed image is the quantization parameter value of the corresponding coding unit; the reconstructed image is divided into multiple basic units of equal size for constructing the quantization parameter map, and the multiple basic units in the reconstructed image correspond one-to-one to the multiple second elements in the quantization parameter map. Therefore, the value of any second element in the quantization parameter map is the quantization parameter value of the corresponding basic unit in the reconstructed image, that is, the quantization parameter value of the reconstructed block that includes the basic unit.
  • the position of any second element in the quantization parameter map is the same as the position of its corresponding basic unit in the original image or the reconstructed image.
• for example, the size of the reconstructed image is W × H, and the coding-unit division of the original image corresponding to the reconstructed image is shown in FIG. 14. The reconstructed image is divided into R × S basic units of size M × N; the quantization parameter map obtained is thus a two-dimensional array with R rows and S columns, as shown in FIG. 15. The two-dimensional array contains a total of R × S second elements, which correspond one-to-one to the R × S basic units; the position of any second element in the two-dimensional array is the same as the position of the corresponding basic unit in the reconstructed image, and the value of any of the R × S second elements is the quantization parameter value of the corresponding basic unit. Specifically, the value 22 of a second element in FIG. 15 is the quantization parameter value of coding unit 1 in FIG. 14, the value 24 is the quantization parameter value of coding unit 2, the value 26 is the quantization parameter value of coding unit 3, and so on for the remaining coding units.
• the size of the quantization parameter map may be the same as the size of the corresponding original image or reconstructed image, or may differ from it by a certain scaling ratio; the quantization parameter map shown in FIG. 15 is reduced in size relative to the original image or the reconstructed image.
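• As an illustrative sketch of this representation, the following Python code builds an R × S quantization parameter map from a list of coding units; the coding-unit record layout (top-left luma coordinates, width, height, QP value) is a hypothetical convention for this example, and coding units are assumed to align with the basic-unit grid. For the layout of FIG. 14, calling it with the six coding units and their quantization parameter values would reproduce an array of the kind shown in FIG. 15.

    import numpy as np

    def build_qp_map(coding_units, width, height, m, n):
        # Basic-unit grid: S = width // m columns, R = height // n rows.
        rows, cols = height // n, width // m
        qp_map = np.zeros((rows, cols), dtype=np.int32)
        for x, y, w, h, qp in coding_units:
            # Every basic unit covered by this coding unit inherits its QP
            # value ("the QP of the coding unit containing the basic unit").
            qp_map[y // n:(y + h) // n, x // m:(x + w) // m] = qp
        return qp_map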
  • the quantization parameter map can also be used to represent the fidelity of the preset area of the reconstructed image.
• the value of the second element in the quantization parameter map is the quantization parameter value of some of the coding units in the original image; that is, only the quantization parameter values of some coding units are used when constructing the quantization parameter map.
• the quantization parameter map used to characterize the fidelity of the preset area of the reconstructed image can also adopt the representation methods described above; in this case, the multiple basic units used to construct the quantization parameter map are obtained by dividing the preset area of the reconstructed image.
• the encoding end divides the original image into multiple coding units, and encodes the multiple coding units obtained by the division to obtain the first code stream; the decoding end decodes the first code stream, and can obtain the reconstructed image of the original image and the target quantization parameter information, where the target quantization parameter information includes the quantization parameter values of all or some of the multiple coding units; the quantization parameter map of the reconstructed image can then be constructed according to the target quantization parameter information, and the quantization parameter map of the reconstructed image is a form of fidelity map. When the target quantization parameter information includes the quantization parameter values of all the coding units, the quantization parameter map of the reconstructed image is the fidelity map of the entire reconstructed image; when it includes the quantization parameter values of only some of the coding units, the quantization parameter map of the reconstructed image is the fidelity map of a preset area of the reconstructed image. The quantization parameter map of the reconstructed image can therefore be used to characterize the fidelity of the reconstructed image or the fidelity of a preset area of the reconstructed image; thus, the embodiment of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
• the decoding end performs a decoding operation on the first code stream obtained by encoding the original image, and can obtain the quantization parameter values of the regions (second image blocks) of the reconstructed image; the quantization parameter values of all or some of the second image blocks can then be used to construct a quantization parameter map of the reconstructed image, which can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image. Therefore, the embodiment of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
• the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of second elements, the value of any second element is the quantization parameter value of the second image block corresponding to that second element, and the position of that second element in the quantization parameter map of the reconstructed image is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  • the second element may also be referred to as a pixel of the quantization parameter map.
• when the second image block includes three color components, the quantization parameter map of the reconstructed image is a three-dimensional array including the three dimensions of color component, width and height. The two-dimensional array under any color component A in the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of color component A of the second image block corresponding to that second element, and the position of that second element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• the constructing of the quantization parameter map of the reconstructed image according to the target quantization parameter information includes: when the target quantization parameter information includes the quantization parameter values of some of the multiple coding units, obtaining the quantization parameter value of the target coding unit according to the quantization parameter values of the partial coding units and/or the reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of the reference image of the reconstructed image, and the target coding unit is a coding unit among the multiple coding units other than the partial coding units; and obtaining the quantization parameter map of the reconstructed image according to the quantization parameter values of the partial coding units and the quantization parameter value of the target coding unit.
• when the target quantization parameter information includes the quantization parameter values of all the coding units, the quantization parameter map of the reconstructed image is obtained according to the quantization parameter values of all the coding units; when the target quantization parameter information includes the quantization parameter values of only some of the multiple coding units, the quantization parameter map of the reconstructed image is obtained according to the quantization parameter values of the partial coding units, or according to the quantization parameter values of the partial coding units together with the reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of the reference image of the reconstructed image.
  • the reference quantization parameter map is a quantization parameter map of a reference image of the reconstructed image
• the reference image of the reconstructed image is also the reference picture of the coding picture in the sense of video coding standards.
• the current mainstream video image encoding and decoding standards do not guarantee that a real and effective quantization parameter value can be derived for each coding unit at the decoding end. Taking the H.265 standard scheme as an example, if the quantized residual of a coding unit is all zeros, that is, no residual is transmitted for it, the decoding end skips the inverse quantization operation; to avoid transmitting useless quantization parameter information, the encoding end does not transmit the quantization parameter value of that coding unit to the decoding end.
  • the decoding end cannot obtain the quantization parameter value of the coding unit from the first code stream at all.
• in another case, the quantization parameter value of a coding unit cannot accurately represent the degree of distortion of that coding unit. For example, when the quantization step size calculated from the quantization parameter value is large, the encoding end will quantize the residual of the coding unit to all zeros. This situation is common in P- and B-frame coding, and occurs when the quality of the reference image is high and the quantization step size of the coding unit is set large. Although the decoding end can obtain the quantization parameter value of the coding unit through the decoding operation, this value does not reflect the actual distortion of the coding unit.
• the solution for constructing a quantization parameter map provided by the embodiments of the present application can be combined with any existing mainstream video image coding and decoding standard (H.264, H.265, H.266, AVS3, etc.) to build a quantization parameter map according to the quantization parameter information obtained in the decoding process.
  • the inaccurate quantization parameter values identified in the quantization parameter map can also be corrected to obtain a corrected quantization parameter map, that is, a fidelity map, and finally the fidelity map is applied to the reconstructed image.
• the quantization parameter values of coding units in the current original image can be used to correct the quantization parameter values of other coding units in the current original image, or the quantization parameter map of the reference image of the current original image (hereinafter referred to as the reference quantization parameter map) can be used to correct inaccurate quantization parameter values.
• after the quantization parameter map of each reconstructed image is constructed and corrected, it is saved and used as an input for constructing the quantization parameter maps of subsequent reconstructed images, that is, as a reference quantization parameter map.
• when the target quantization parameter information includes the quantization parameter values of all the coding units, the quantization parameter map of the reconstructed image can be constructed according to the quantization parameter values of all the coding units; in this case the quantization parameter map is used to characterize the fidelity of the entire reconstructed image. Alternatively, the quantization parameter values of the coding units in the preset area of the reconstructed image can be selected from all the coding units, and the quantization parameter map of the reconstructed image can be constructed from them; in this case the quantization parameter map is used to characterize the fidelity of the preset area of the reconstructed image.
• when the target quantization parameter information includes the quantization parameter values of only some of the multiple coding units, the quantization parameter map of the reconstructed image can be constructed according to the quantization parameter values of the coding units in the preset area of the reconstructed image; or the quantization parameter values can be filled according to the quantization parameter values of the partial coding units or the reference quantization parameter map to obtain the quantization parameter values of all the coding units, and the quantization parameter map of the reconstructed image can then be constructed according to the quantization parameter values of all the coding units. In particular, when the quantization parameter values of the partial coding units do not completely cover the coding units in the preset area of the reconstructed image, the quantization parameter values are first filled according to the quantization parameter values of the partial coding units or the reference quantization parameter map to obtain the quantization parameter values of all the coding units; the quantization parameter map of the reconstructed image is then constructed from the quantization parameter values of all the coding units, or the quantization parameter values of the coding units in the preset area of the reconstructed image are selected from all the coding units and used to construct the quantization parameter map of the reconstructed image.
• when the target quantization parameter information includes the quantization parameter values of all the coding units, the quantization parameter map of the reconstructed image can be obtained according to the quantization parameter values of all the coding units, and the resulting map can be used to characterize the fidelity of the entire reconstructed image. When the target quantization parameter information includes the quantization parameter values of only some of the coding units, the quantization parameter map can be obtained according to the quantization parameter values of those partial coding units, in which case it can be used to characterize the fidelity of a preset area of the reconstructed image; or the quantization parameter map can be obtained according to the quantization parameter values of the partial coding units together with the reference quantization parameter map. Since the reference quantization parameter map is the quantization parameter map of the reference image of the reconstructed image, the quantization parameter value of any coding unit other than the partial coding units can be derived from it, so that a quantization parameter value is available for every one of the multiple coding units; the quantization parameter map obtained in this case can be used to characterize the fidelity of the entire reconstructed image or the fidelity of a preset area of it. Therefore, whether the target quantization parameter information obtained by decoding includes the quantization parameter values of all or only some of the multiple coding units, a quantization parameter map of the reconstructed image for characterizing the fidelity of the reconstructed image or of a preset area of it can be obtained.
• the obtaining of the quantization parameter value of the target coding unit according to the quantization parameter values of the partial coding units includes: determining the quantization parameter value of the target coding unit according to the quantization parameter value of at least one coding unit among the partial coding units.
• for a coding unit whose quantization parameter value cannot be obtained from the first code stream, the quantization parameter values of its spatial neighborhood will be used to fill it.
• specifically, the quantization parameter value of at least one coding unit among the coding units whose quantization parameter values were obtained by decoding can be used to determine the quantization parameter value of the target coding unit. Methods include: taking any one of the quantization parameter values obtained by decoding as the quantization parameter value of the target coding unit, or calculating the average of several quantization parameter values obtained by decoding and using the average as the quantization parameter value of the target coding unit.
• for example, the quantization parameter value of a coding unit to the left of, above, or to the upper left of the target coding unit in the original image may be used as the quantization parameter value of the target coding unit.
• by filling in this way, the quantization parameter values of some of the multiple coding units can be extended to obtain the quantization parameter values of all the coding units; the quantization parameter map of the reconstructed image can then be constructed according to the quantization parameter values of all the coding units, or the quantization parameter values of the coding units in the preset area of the reconstructed image can be selected and used to construct the quantization parameter map of the reconstructed image.
• in this embodiment, for a coding unit whose quantization parameter value cannot be obtained from the first code stream, the quantization parameter values of its spatial neighborhood may be used for filling. Specifically, when the target quantization parameter information includes the quantization parameter values of only some of the multiple coding units, the quantization parameter value of any coding unit other than those partial coding units may be determined according to the quantization parameter value of at least one of the partial coding units, so that a quantization parameter value is guaranteed for every coding unit, and the quantization parameter map of the reconstructed image can then be obtained. When the quantization parameter map of the reconstructed image is obtained according to the quantization parameter values of all the coding units, it can be used to characterize the fidelity of the entire reconstructed image; when it is obtained according to the quantization parameter values of only some of the coding units, it can be used to characterize the fidelity of the preset area of the reconstructed image.
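• A minimal sketch of such spatial filling, assuming the quantization parameter map is held as a 2-D array with a boolean mask marking the entries actually decoded from the code stream (the function name and the choice to average the left, upper and upper-left neighbours are illustrative):

    import numpy as np

    def fill_spatial(qp_map, known):
        # Scan in raster order so the left, upper and upper-left neighbours
        # have already been decoded or filled when a missing entry is visited.
        out = qp_map.astype(np.float64).copy()
        rows, cols = out.shape
        for r in range(rows):
            for c in range(cols):
                if known[r, c]:
                    continue
                neighbours = []
                if c > 0:
                    neighbours.append(out[r, c - 1])      # left
                if r > 0:
                    neighbours.append(out[r - 1, c])      # upper
                if r > 0 and c > 0:
                    neighbours.append(out[r - 1, c - 1])  # upper-left
                if neighbours:                            # (0, 0) may have none
                    out[r, c] = sum(neighbours) / len(neighbours)
        return np.rint(out).astype(qp_map.dtype)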
• the reference quantization parameter map includes multiple reference elements, and the value of any reference element among them is the quantization parameter value of a coding unit in the reference image. Obtaining the quantization parameter value of the target coding unit according to the reference quantization parameter map includes: taking the value of the target element as the quantization parameter value of any target coding unit, where the target element is a reference element in the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of that target coding unit in the reconstructed image, or according to the position of that target coding unit in the reconstructed image together with the motion vector of that target coding unit.
  • the reference element is another name for the second element.
  • the quantization parameter value of the temporal neighborhood of the coding unit may be used for filling.
  • the filling methods include:
• Manner 1: use the value of the target element in the reference quantization parameter map of the reconstructed image as the quantization parameter value of the target coding unit, where the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the reconstructed image. Specifically, a reference coding unit is determined on the reference image of the reconstructed image according to the target coding unit, where the position of the reference coding unit in the reference image is the same as the position of the target coding unit in the reconstructed image; the value of the target element is then used as the quantization parameter value of the target coding unit, where the target element is the second element corresponding to the reference coding unit in the reference quantization parameter map.
• Manner 2: use the value of the target element in the reference quantization parameter map of the reconstructed image as the quantization parameter value of the target coding unit, where the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the original image and the motion vector of the target coding unit. Specifically, if the coordinates of the target coding unit on the reconstructed image are (x, y), offsetting (x, y) by the motion vector (mvx, mvy) of the target coding unit gives (x + mvx, y + mvy); the reference coding unit is determined at the (x + mvx, y + mvy) position of the reference image of the reconstructed image, and the value of the target element is used as the quantization parameter value of the target coding unit, where the target element is the second element corresponding to the reference coding unit in the reference quantization parameter map.
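• A minimal sketch covering both manners, assuming the reference quantization parameter map uses basic units of size M × N so that pixel coordinates map to array indices by integer division (the clamping of out-of-range positions is an added assumption, not from the text):

    def qp_from_reference(ref_qp_map, x, y, mvx=0, mvy=0, m=1, n=1):
        # Manner 1: mvx = mvy = 0 reads the co-located position (x, y);
        # Manner 2: a real motion vector offsets it to (x + mvx, y + mvy).
        row = (y + mvy) // n
        col = (x + mvx) // m
        # Clamp in case the motion vector points outside the reference map.
        row = max(0, min(row, ref_qp_map.shape[0] - 1))
        col = max(0, min(col, ref_qp_map.shape[1] - 1))
        return ref_qp_map[row, col]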
• the representation of the reference quantization parameter map is the same as that of the quantization parameter map of the reconstructed image; that is, it can be assumed that the quantization parameter map of the reconstructed image, the reference quantization parameter map and the corresponding original images have the same size, or are scaled proportionally.
  • a plurality of quantization parameter values may be obtained using the aforementioned various methods, and an arithmetic average operation may be performed on the plurality of quantization parameter values to determine the quantization parameter value of the target coding unit.
• in this embodiment, quantization parameter value filling can be performed for the coding units whose quantization parameter values cannot be obtained from the first code stream, so as to obtain the quantization parameter values of all the coding units; the quantization parameter map of the reconstructed image is then constructed according to the quantization parameter values of all the coding units, or the quantization parameter values of the coding units in the preset area of the reconstructed image are selected from all the coding units and used to construct the quantization parameter map of the reconstructed image.
• the quantization parameter values of the temporal neighborhood of the coding unit may be used for filling: the value of the target element in the reference quantization parameter map is used as the quantization parameter value of the coding unit, where the position of the target element in the reference quantization parameter map is determined according to the position of the coding unit in the reconstructed image, or according to that position together with the motion vector of the coding unit. In this way, a quantization parameter value is guaranteed for every one of the multiple coding units, and the quantization parameter map of the reconstructed image can be obtained according to the quantization parameter values of some or all of the multiple coding units. The quantization parameter map obtained from all the coding units can be used to characterize the fidelity of the entire reconstructed image, and the one obtained from some of the coding units can be used to characterize the fidelity of the preset area of the reconstructed image.
• the method further includes: storing the reconstructed image in association with the quantization parameter map of the reconstructed image, so as to use the reconstructed image as a reference image and the quantization parameter map of the reconstructed image as a reference quantization parameter map.
  • a quantization parameter map will be saved after construction and correction is completed, and will be used as an input for constructing a quantization parameter map of a subsequent coded image; for example, a quantization parameter map buffer is designed to store the quantization parameter map.
  • the decoder of any mainstream video codec includes a decoded picture buffer to store decoded pictures, and a reference picture management mechanism to manage the addition and removal of decoded pictures.
• each quantization parameter map corresponds to a decoded image and records the quantization parameter information of that decoded image; therefore, a quantization parameter map can be managed in exactly the same way as a decoded picture.
  • a decoded picture and its quantization parameter map are managed in the decoded picture buffer and the reference quantization parameter map buffer respectively, and their management operations are completely synchronized.
  • the quantization parameter map of the reconstructed image may be stored and used as a reference quantization parameter map for constructing the quantization parameter map of the subsequent decoded image, thereby facilitating the construction of the quantization parameter map of the subsequent decoded image.
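• A sketch of such synchronized management, where a decoded picture and its quantization parameter map are added to and removed from their respective buffers together (class and method names are illustrative, not from any standard API):

    class ReferencePictureStore:
        def __init__(self):
            self.decoded_picture_buffer = {}  # picture id -> decoded picture
            self.qp_map_buffer = {}           # picture id -> its QP map

        def add(self, pic_id, picture, qp_map):
            # Insertion is synchronized: a picture never exists without its map.
            self.decoded_picture_buffer[pic_id] = picture
            self.qp_map_buffer[pic_id] = qp_map

        def remove(self, pic_id):
            # Removal mirrors reference-picture management, also synchronized.
            self.decoded_picture_buffer.pop(pic_id, None)
            self.qp_map_buffer.pop(pic_id, None)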
  • the method further includes: processing the reconstructed image or a preset region of the reconstructed image according to the quantization parameter map of the reconstructed image, so as to improve the reconstructed image or the reconstructed image. image quality of a preset area of the image; or determining whether to apply the reconstructed image according to the quantization parameter map of the reconstructed image.
• the decoding end determines the signal distortion of the reconstructed image or of a preset area in the reconstructed image according to the reconstruction map of the fidelity map, and applies it to different service environments. For example, in a video surveillance scenario, the degree of distortion of a certain image area in the reconstructed image is determined according to the reconstruction map of the fidelity map, and if it is greater than a preset threshold, the reconstructed image is not applied. For another example, according to the degree of distortion of a certain image area in the reconstructed image, one of a set of image enhancement methods may be selected according to a preset rule and applied to that image area to improve the image quality.
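• A sketch of the surveillance-scenario decision, assuming the mean value over a region of the quantization parameter map is used as the distortion measure and the region is given as index bounds into the map (both are assumptions for illustration):

    import numpy as np

    def should_apply(qp_map, region, threshold):
        # region = (row0, row1, col0, col1): index bounds into the QP map.
        r0, r1, c0, c1 = region
        distortion = float(np.mean(qp_map[r0:r1, c0:c1]))
        # Do not apply the reconstructed image when the estimated
        # distortion of the region exceeds the preset threshold.
        return distortion <= threshold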
• when processing the reconstructed image or the preset area of the reconstructed image with a learning-based post-processing enhancement algorithm, images can be divided into B distortion ranges according to the degree of image distortion, and a model can be trained for each range to obtain multiple image enhancement models, where B is an integer greater than 1.
• the decoding end can determine the degree of distortion in different areas of the reconstructed image according to the quantization parameter map of the reconstructed image and select a different model for image enhancement in each area; using a trained model that better matches the distortion distribution yields a better picture quality improvement.
• another example is a single-model image enhancement algorithm: both training and inference can take the quantization parameter map as additional input information to the network, and under the guidance of the quantization parameter map the network can output images with a better enhancement effect.
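• A sketch of the range-based model selection, assuming B trained models ordered from low to high distortion and B - 1 ascending QP thresholds separating the B ranges (the use of QP as the distortion measure is an assumption). The selected model is then applied to the corresponding image area; in the single-model variant, the quantization parameter map would instead be fed to the network as additional input.

    import bisect

    def select_enhancement_model(qp_value, models, thresholds):
        # bisect_right counts how many thresholds the QP value exceeds,
        # which is exactly the index of the matching distortion range.
        return models[bisect.bisect_right(thresholds, qp_value)]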
• in this embodiment, the decoding end may process the reconstructed image or the preset area of the reconstructed image according to the reconstruction map of the fidelity map, so as to improve the image quality of the reconstructed image or of the preset area; or it may determine, according to the reconstruction map of the fidelity map, whether to apply the reconstructed image, thereby facilitating the application of the reconstructed image.
• FIG. 16 is a schematic block diagram of an encoding device provided by an embodiment of the application; the encoding device includes a video encoder and a fidelity map encoder, wherein:
  • a video encoder for encoding the original image to obtain the first code stream
• a fidelity map encoder, configured to encode a fidelity map to obtain a second code stream, wherein the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image, and the reconstructed image is obtained after decoding the first code stream.
  • the compressed code stream in FIG. 16 is a general term for the code stream transmitted from the encoding end to the decoding end, and the compressed code stream includes a first code stream and a second code stream.
• the encoding device further includes a fidelity map calculator, configured to: divide the original image into a plurality of first image blocks and divide the reconstructed image into a plurality of second image blocks, where the division strategy for dividing the original image is the same as that for dividing the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or divide the preset area of the original image into a plurality of first image blocks and divide the preset area of the reconstructed image into a plurality of second image blocks, where the division strategy for dividing the preset area of the original image is the same as that for dividing the preset area of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and calculate, from any second image block and the first image block corresponding to it, the fidelity value of that second image block, the fidelity map including the fidelity value of that second image block.
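• The text does not fix a specific fidelity metric at this point; as one common choice, the sketch below computes the mean squared error between a second image block and its corresponding first image block:

    import numpy as np

    def block_fidelity(first_block, second_block):
        # Mean squared error between the original (first) image block and
        # the reconstructed (second) image block, computed in float to
        # avoid overflow with 8-bit samples.
        diff = first_block.astype(np.float64) - second_block.astype(np.float64)
        return float(np.mean(diff * diff))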
• the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• when the second image block includes three color components, the fidelity map is a three-dimensional array including the three dimensions of color component, width and height. The two-dimensional array under any color component A in the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of color component A of the second image block corresponding to that first element, and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• the fidelity map encoder is specifically configured to: perform entropy encoding on any first element to obtain the second code stream, where the entropy coding of that first element is independent of the entropy coding of the other first elements; or determine the probability distribution of the value of that first element, or a predicted value of that first element, according to the value of at least one already-encoded first element, and perform entropy coding on that first element according to the probability distribution or the predicted value, so as to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
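• A sketch of the predictive variant, assuming a raster scan and the already-encoded left neighbour as the predictor (the actual entropy coder is left abstract, since the text does not prescribe one):

    import numpy as np

    def prediction_residuals(fidelity_map):
        # Predict each first element from its left neighbour; the first
        # column has no encoded neighbour and is kept as-is. Small residuals
        # entropy-code more compactly than the raw values.
        residuals = fidelity_map.copy()
        residuals[:, 1:] = fidelity_map[:, 1:] - fidelity_map[:, :-1]
        return residuals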
• the fidelity map encoder is specifically configured to: quantize any first element to obtain a quantized first element, and encode the quantized first element to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
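• A sketch of the quantization step, assuming a uniform scalar quantizer with step size `step` (the quantizer design is not mandated by the text):

    import numpy as np

    def quantize_first_elements(fidelity_map, step):
        # Map each fidelity value to the nearest integer level; the levels
        # are what the encoder then entropy-codes into the second stream.
        return np.rint(fidelity_map / step).astype(np.int32)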
  • each unit may also correspond to the corresponding description with reference to the method embodiment shown in FIG. 7 .
• in this embodiment, the original image is encoded to obtain the first code stream, and the fidelity map is encoded to obtain the second code stream, where the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image. The decoding end decodes the first code stream to obtain the reconstructed image of the original image, and decodes the second code stream to obtain the reconstruction map of the fidelity map. If the encoding is lossless, the reconstruction map of the fidelity map is identical to the fidelity map; if the encoding is lossy, the reconstruction map of the fidelity map includes the encoding distortion generated by encoding the fidelity map. Either way, the reconstruction map of the fidelity map can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image; therefore, the embodiment of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
• FIG. 17 is a schematic block diagram of a decoding device provided by an embodiment of the application; the decoding device includes a video decoder and a fidelity map decoder, wherein:
  • a video decoder for decoding the first code stream to obtain a reconstructed image of the original image
• a fidelity map decoder, configured to decode a second code stream to obtain a reconstruction map of the fidelity map, wherein the second code stream is obtained by encoding the fidelity map, and the reconstruction map of the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  • the compressed code stream in FIG. 17 is a general term for the code stream transmitted from the encoding end to the decoding end, and the compressed code stream includes a first code stream and a second code stream.
• the fidelity map includes the fidelity value of any second image block among the plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and the original image block corresponding to it.
• the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• when the second image block includes three color components, the fidelity map is a three-dimensional array including the three dimensions of color component, width and height. The two-dimensional array under any color component A in the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of color component A of the second image block corresponding to that first element, and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• the fidelity map decoder is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of any first element, and obtain the reconstruction map of the fidelity map according to the reconstruction fidelity value of that first element.
• the second code stream is obtained by encoding the quantized first element; the fidelity map decoder is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of the quantized first element; perform inverse quantization on the reconstruction fidelity value of the quantized first element to obtain the reconstruction fidelity value of any first element; and obtain the reconstruction map of the fidelity map according to the reconstruction fidelity value of that first element.
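• A sketch of the matching inverse quantization at the decoding end, under the same uniform-quantizer assumption as above; with a lossy step size greater than 1, each reconstruction fidelity value is recovered to within step / 2 of the original:

    def dequantize_first_elements(levels, step):
        # Invert the encoder's uniform quantization to recover the
        # reconstruction fidelity values of the first elements.
        return levels.astype('float64') * step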
  • each unit may also correspond to the corresponding description with reference to the method embodiment shown in FIG. 11 .
• in this embodiment, the reconstructed image of the original image can be obtained by decoding the first code stream, and the reconstruction map of the fidelity map can be obtained by decoding the second code stream, where the second code stream is obtained by encoding the fidelity map, which is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image. If the encoding is lossless, the reconstruction map of the fidelity map is identical to the fidelity map; if the encoding is lossy, the reconstruction map of the fidelity map includes the encoding distortion generated by encoding the fidelity map. Therefore, the reconstruction map of the fidelity map can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image, and the embodiment of the present application can obtain the distortion intensity information of the encoded image in the decoding device.
• FIG. 18 is a schematic block diagram of another decoding device provided by an embodiment of the application; the decoding device includes a video decoder and a quantization parameter map builder, wherein:
• a video decoder, configured to decode the first code stream to obtain a reconstructed image of the original image and target quantization parameter information, where the target quantization parameter information includes the quantization parameter values of all or some of the second image blocks of the reconstructed image;
  • a quantization parameter map builder configured to construct a quantization parameter map of the reconstructed image according to the target quantization parameter information, wherein the quantization parameter map of the reconstructed image is used to represent at least a partial area of the original image and the reconstructed image Distortion between at least partial regions of an image.
  • the compressed code stream in FIG. 18 is a general term for the code stream transmitted from the encoding end to the decoding end, and the compressed code stream includes the first code stream.
  • the second image block is a coding unit.
• the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of second elements, the value of any second element is the quantization parameter value of the second image block corresponding to that second element, and the position of that second element in the quantization parameter map of the reconstructed image is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• when the second image block includes three color components, the quantization parameter map of the reconstructed image is a three-dimensional array including the three dimensions of color component, width and height. The two-dimensional array under any color component A in the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of color component A of the second image block corresponding to that second element, and the position of that second element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• the quantization parameter map builder is specifically configured to: when the target quantization parameter information includes the quantization parameter values of some of the multiple coding units, obtain the quantization parameter value of the target coding unit according to the quantization parameter values of the partial coding units and/or the reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of the reference image of the reconstructed image and the target coding unit is a coding unit among the multiple coding units other than the partial coding units; and obtain the quantization parameter map of the reconstructed image according to the quantization parameter values of the partial coding units and the quantization parameter value of the target coding unit.
• the reference quantization parameter map includes multiple reference elements, and the value of any reference element among them is the quantization parameter value of a coding unit in the reference image; the quantization parameter map builder is specifically configured to: use the value of the target element as the quantization parameter value of any target coding unit, where the target element is a reference element in the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of that target coding unit in the reconstructed image, or according to the position of that target coding unit in the reconstructed image together with the motion vector of that target coding unit.
  • each unit may also correspond to the corresponding description with reference to the method embodiment shown in FIG. 13 .
• in this embodiment, the quantization parameter values of the regions (second image blocks) of the reconstructed image can be obtained by decoding the first code stream obtained by encoding the original image; the quantization parameter values of all or some of the second image blocks can then be used to construct the quantization parameter map of the reconstructed image, which can be used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image. Therefore, in this embodiment of the present application, the distortion intensity information of the encoded image can be obtained at the decoding end.
• FIG. 19 is a schematic block diagram of an encoding apparatus provided by an embodiment of the application. The encoding apparatus 1900 is applied to an encoding device and includes a processing unit 1901 and a communication unit 1902, where the processing unit 1901 is configured to perform any step in the method embodiment shown in FIG. 7, and when performing data transmission such as acquisition, may selectively invoke the communication unit 1902 to complete the corresponding operation.
• the processing unit 1901 is configured to: encode the original image to obtain a first code stream; and encode a fidelity map to obtain a second code stream, wherein the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of a reconstructed image obtained after decoding the first code stream.
• the processing unit 1901 is further configured to: divide the original image into a plurality of first image blocks and divide the reconstructed image into a plurality of second image blocks, where the division strategy for dividing the original image is the same as that for dividing the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or divide the preset area of the original image into a plurality of first image blocks and divide the preset area of the reconstructed image into a plurality of second image blocks, where the division strategy for dividing the preset area of the original image is the same as that for dividing the preset area of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and calculate, from any second image block among the plurality of second image blocks and the first image block corresponding to it, the fidelity value of that second image block, the fidelity map including the fidelity value of that second image block.
• the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of that first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• when the second image block includes three color components, the fidelity map is a three-dimensional array including the three dimensions of color component, width and height. The two-dimensional array under any color component A in the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of color component A of the second image block corresponding to that first element, and the position of that first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
• the processing unit 1901 is specifically configured to: perform entropy coding on any first element to obtain the second code stream, where the entropy coding of that first element is independent of the entropy coding of the other first elements; or determine the probability distribution of the value of that first element, or a predicted value of that first element, according to the value of at least one already-encoded first element, and perform entropy coding on that first element according to the probability distribution or the predicted value, so as to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
• the processing unit 1901 is specifically configured to: quantize any first element to obtain a quantized first element, and encode the quantized first element to obtain the second code stream; the second code stream includes the code streams of the plurality of first elements.
  • the encoding apparatus 1900 may further include a storage unit 1903 for storing program codes and data of the encoding device.
  • the processing unit 1901 may be a processor
  • the communication unit 1902 may be a transceiver
  • the storage unit 1903 may be a memory.
  • each unit may also correspond to the corresponding description with reference to the method embodiment shown in FIG. 7 .
• in this embodiment, the original image is encoded to obtain a first code stream, and the fidelity map is encoded to obtain a second code stream, where the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image. The decoding end decodes the first code stream to obtain the reconstructed image of the original image, and decodes the second code stream to obtain the reconstruction map of the fidelity map. If the encoding is lossless, the reconstruction map of the fidelity map is identical to the fidelity map; if the encoding is lossy, the reconstruction map of the fidelity map includes the encoding distortion generated by encoding the fidelity map. Therefore, the embodiment of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
• FIG. 20 is a schematic block diagram of a decoding apparatus provided by an embodiment of the application. The decoding apparatus 2000 is applied to a decoding device and includes a processing unit 2001 and a communication unit 2002, where the processing unit 2001 is configured to perform any step in the method embodiment shown in FIG. 11, and when performing data transmission such as acquisition, may selectively invoke the communication unit 2002 to complete the corresponding operation.
  • the processing unit 2001 is configured to: decode the first code stream to obtain the reconstructed image of the original image; and decode the second code stream to obtain the reconstructed map of the fidelity map, where the second code stream is obtained by encoding the fidelity map, and the reconstructed map of the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image.
  • the fidelity map includes a fidelity value of any second image block in the plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and the original image block corresponding to it.
  • the fidelity map includes a plurality of first elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of the second image block corresponding to that first element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
  • the second image block includes three color components, the fidelity map is a three-dimensional array including the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element in the plurality of first elements is the fidelity value of the color component A of the second image block corresponding to that first element, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of the second image block corresponding to that first element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
  • the processing unit 2001 is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of any first element, and obtain the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements.
  • the second code stream is obtained by encoding the quantized first element; the processing unit 2001 is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of the quantized first element; perform inverse quantization on the reconstruction fidelity value of the quantized first element to obtain the reconstruction fidelity value of any first element; and obtain the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements.
  • the decoding apparatus 2000 may further include a storage unit 2003 for storing program codes and data of the decoding device.
  • the processing unit 2001 may be a processor
  • the communication unit 2002 may be a transceiver
  • the storage unit 2003 may be a memory.
  • for details of each unit, reference may also be made to the corresponding description of the method embodiment shown in FIG. 11 .
  • the reconstructed image of the original image can be obtained by decoding the first code stream, and the reconstructed map of the fidelity map can be obtained by decoding the second code stream; the second code stream is obtained by encoding the fidelity map, and the fidelity map is used to represent the distortion between at least part of the original image and at least part of the reconstructed image; if the encoding is lossless encoding, the reconstructed map of the fidelity map is identical to the fidelity map, and if the encoding is lossy encoding, the reconstructed map of the fidelity map includes the coding distortion generated by encoding the fidelity map; the reconstructed map of the fidelity map can therefore be used to represent the distortion between at least part of the original image and at least part of the reconstructed image, so the embodiment of the present application can obtain the distortion intensity information of the encoded image at the decoding device.
  • FIG. 21 is a schematic block diagram of a decoding apparatus provided by an embodiment of the application; the decoding apparatus 2100 is applied to a decoding device and includes a processing unit 2101 and a communication unit 2102, where the processing unit 2101 is configured to perform any step of the method embodiment shown in FIG. 13 and, when performing data transmission such as acquisition, may selectively invoke the communication unit 2102 to complete the corresponding operation.
  • the processing unit 2101 is configured to: decode the first code stream to obtain a reconstructed image of the original image and target quantization parameter information, where the target quantization parameter information includes quantization parameter values of all or some of a plurality of second image blocks of the reconstructed image; and construct a quantization parameter map of the reconstructed image according to the target quantization parameter information, where the quantization parameter map of the reconstructed image is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image.
  • the second image block is a coding unit.
  • the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks are in one-to-one correspondence with the plurality of second elements, the value of any second element is the quantization parameter value of the second image block corresponding to that second element, and the position of any second element in the quantization parameter map of the reconstructed image is determined according to the position of the second image block corresponding to that second element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
  • the second image block includes three color components, the quantization parameter map of the reconstructed image is a three-dimensional array including the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the quantization parameter map includes a plurality of second elements; the value of any second element in the plurality of second elements is the quantization parameter value of the color component A of the second image block corresponding to that second element, and the position of any second element in the two-dimensional array under any color component A of the quantization parameter map is determined according to the position of the second image block corresponding to that second element in the reconstructed image, or according to the position of that second image block in the preset area of the reconstructed image.
  • the processing unit 2101 is specifically configured to: when the target quantization parameter information includes the quantization parameter values of some coding units in the plurality of coding units, obtain the quantization parameter value of a target coding unit according to the quantization parameter values of the partial coding units and/or a reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image, and the target coding unit is a coding unit in the plurality of coding units other than the partial coding units; and obtain the quantization parameter map of the reconstructed image according to the quantization parameter values of the partial coding units and the quantization parameter value of the target coding unit.
  • the reference quantization parameter map includes multiple reference elements, and a value of any reference element in the multiple reference elements is a quantization parameter value of a coding unit in the reference image;
  • the processing unit 2101 is specifically configured to: use the value of a target element as the quantization parameter value of any target coding unit, where the target element is a reference element in the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the reconstructed image, or according to the position of the target coding unit in the reconstructed image and the motion vector of the target coding unit.
  • the decoding apparatus 2100 may further include a storage unit 2103 for storing program codes and data of the decoding device.
  • the processing unit 2101 may be a processor
  • the communication unit 2102 may be a transceiver
  • the storage unit 2103 may be a memory.
  • for details of each unit, reference may also be made to the corresponding description of the method embodiment shown in FIG. 13 .
  • the quantization parameter values of the regions (second image blocks) of the reconstructed image can be obtained by decoding the first code stream obtained by encoding the original image; a quantization parameter map of the reconstructed image can be constructed from the quantization parameter values of all or some of the second image blocks, and this quantization parameter map can be used to represent the distortion between at least part of the original image and at least part of the reconstructed image; therefore, in this embodiment of the present application, the distortion intensity information of the encoded image can be obtained at the decoding end.
  • Embodiments of the present application provide an apparatus for encoding a video stream, including a processor and a memory.
  • the memory stores instructions that cause the processor to perform the method shown in FIG. 7 .
  • Embodiments of the present application provide an apparatus for decoding a video stream, including a processor and a memory.
  • the memory stores instructions that cause the processor to perform the method shown in FIG. 11 .
  • Embodiments of the present application provide an apparatus for decoding a video stream, including a processor and a memory.
  • the memory stores instructions that cause the processor to perform the method shown in FIG. 13 .
  • Embodiments of the present application provide a computer-readable storage medium on which instructions are stored, and when the instructions are executed, cause one or more processors to encode video data.
  • the instructions cause the one or more processors to perform the method shown in FIG. 7 , FIG. 11 or FIG. 13 .
  • Embodiments of the present application provide a computer program product including program code that, when run, performs the method shown in FIG. 7 , FIG. 11 or FIG. 13 .
  • An embodiment of the present application provides an encoder (20), including a processing circuit, for executing the method shown in FIG. 7 .
  • An embodiment of the present application provides a decoder (30), including a processing circuit, for executing the method shown in FIG. 11 or FIG. 13 .
  • An embodiment of the present application provides an encoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where the program, when executed by the processors, causes the encoder to perform the method shown in FIG. 7 .
  • An embodiment of the present application provides a decoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where the program, when executed by the processors, causes the decoder to perform the method shown in FIG. 11 or FIG. 13 .
  • Embodiments of the present application provide a non-transitory computer-readable storage medium including program code which, when executed by a computer device, performs the method shown in FIG. 7 , FIG. 11 or FIG. 13 .
  • An embodiment of the present application provides a non-transitory storage medium, including a bit stream encoded according to the method shown in FIG. 7 .
  • Computer-readable media may include computer-readable storage media, which correspond to tangible media such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol).
  • a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave.
  • Data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this application.
  • the computer program product may comprise a computer-readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL or the wireless technologies such as infrared, radio and microwave are included in the definition of medium.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media.
  • magnetic disks and optical discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs) and Blu-ray discs, where magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field programmable logic arrays (FPGAs) or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • the functions described by the various illustrative logical blocks, modules and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec.
  • the techniques may be fully implemented in one or more circuits or logic elements.
  • the techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (eg, a chip set).
  • Various components, modules or units are described herein to emphasize functional aspects of means for performing the disclosed techniques, but they do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit in conjunction with suitable software and/or firmware, or provided by interoperating hardware units (including the one or more processors described above).

Abstract

The present application provides encoding and decoding methods and related devices, and relates to the technical field of artificial-intelligence-based video or image compression, in particular to the field of neural-network-based video compression. The method includes: encoding an original image to obtain a first code stream, and encoding a fidelity map to obtain a second code stream; a decoding end decodes the first code stream to obtain a reconstructed image of the original image, and decodes the second code stream to obtain a reconstructed map of the fidelity map, where the reconstructed map of the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image. The embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.

Description

Encoding and decoding methods and related devices
This application claims priority to Chinese Patent Application No. 202110170984.8, entitled "Encoding and decoding methods and related devices" and filed with the Chinese Patent Office on February 8, 2021, the entire content of which is incorporated herein by reference.
Technical field
The embodiments of the present application relate to the technical field of artificial intelligence (AI) based video or image compression, and in particular to encoding and decoding methods and related devices.
Background
Video coding (video encoding and decoding) is widely used in digital video applications, such as broadcast digital television, video transmission over the Internet and mobile networks, real-time conversational applications such as video chat and video conferencing, DVD and Blu-ray discs, video content capture and editing systems, and security applications of camcorders.
Even a short film requires a large amount of video data to describe, which can cause difficulties when the data is to be sent or otherwise transmitted over a network with limited bandwidth capacity. Video data therefore usually has to be compressed before being transmitted over modern telecommunication networks. Since memory resources may be limited, the size of the video can also be an issue when the video is stored on a storage device. Video compression devices typically use software and/or hardware at the source side to encode the video data before transmission or storage, thereby reducing the amount of data required to represent digital video images. The compressed data is then received at the destination side by a video decompression device. With limited network resources and an ever-growing demand for higher video quality, improved compression and decompression techniques are needed that can increase the compression ratio with almost no sacrifice in image quality.
In recent years, applying deep learning to the field of image and video encoding and decoding has gradually become a trend. However, none of the existing neural-network-based video or image codec schemes can obtain the distortion intensity information of an encoded image at the decoding end; for example, they can obtain neither the distortion intensity information of the individual regions of an encoded image nor the overall distortion intensity information of an encoded image at the decoding end.
Summary
The present application provides encoding and decoding methods and related devices, which can obtain the distortion intensity information of an encoded image at the decoding end.
The above and other objects are achieved by the subject matter of the independent claims. Further implementations are apparent from the dependent claims, the detailed description and the accompanying drawings.
Specific embodiments are outlined in the attached independent claims, and other embodiments are outlined in the dependent claims.
According to a first aspect, the present application relates to an encoding method. The method is performed by an encoding device. The method includes: encoding an original image to obtain a first code stream; and encoding a fidelity map to obtain a second code stream, where the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of a reconstructed image, the reconstructed image being obtained by decoding the first code stream. In this embodiment of the application, the original image is encoded to obtain the first code stream and the fidelity map is encoded to obtain the second code stream, the fidelity map being used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image, where distortion includes difference; the decoding end decodes the first code stream to obtain the reconstructed image of the original image, and decodes the second code stream to obtain a reconstructed map of the fidelity map (which may also be referred to as a reconstructed fidelity map); if the encoding is lossless encoding, the reconstructed map of the fidelity map is identical to the fidelity map, and if the encoding is lossy encoding, the reconstructed map of the fidelity map includes the coding distortion generated by encoding the fidelity map; the fidelity map can therefore be used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image, so the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
In a possible design, the method further includes: dividing the original image into a plurality of first image blocks and dividing the reconstructed image into a plurality of second image blocks, where the division strategy used for the original image is the same as that used for the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or dividing a preset region of the original image into a plurality of first image blocks and dividing a preset region of the reconstructed image into a plurality of second image blocks, where the division strategy used for the preset region of the original image is the same as that used for the preset region of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and computing the fidelity value of any second image block from that second image block and its corresponding first image block, where the fidelity map includes the fidelity value of the second image block and the fidelity value is used to represent the distortion between the second image block and its corresponding first image block. The position of any second image block in the reconstructed image is the same as the position of its corresponding first image block in the original image; the position of the preset region of the original image in the original image is the same as the position of the preset region of the reconstructed image in the reconstructed image, and the position of any second image block in the preset region of the reconstructed image is the same as the position of its corresponding first image block in the preset region of the original image. In this embodiment, the original image and the reconstructed image have the same size, and the preset region has the same size and position in both images; the original image is divided into first image blocks and the reconstructed image into second image blocks according to the same division strategy (or the preset regions are divided likewise), so the first image blocks and the second image blocks are in one-to-one correspondence, all first image blocks have the same size, all second image blocks have the same size, and the first and second image blocks are of equal size. The first and second image blocks can therefore serve as the basic units of fidelity computation: the fidelity value of any second image block is computed from that block and its corresponding first image block, and the fidelity values of the second image blocks, i.e., the fidelity values of the individual regions of the reconstructed image, yield the fidelity map, as sketched below. When the first image blocks are obtained by dividing the original image and the second image blocks by dividing the reconstructed image, the fidelity map characterizes the fidelity of the reconstructed image; when they are obtained by dividing the preset regions, the fidelity map characterizes the fidelity of the preset region of the reconstructed image. This facilitates obtaining a fidelity map that characterizes the distortion intensity information of the encoded image.
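The following is a minimal sketch (in Python with NumPy, both assumptions of this illustration rather than requirements of the method) of the per-block fidelity computation described above, for one color component and square basic units. Mean squared error is used as the fidelity metric purely as an example, since the text does not mandate a particular metric, and the function name fidelity_map is hypothetical:

    import numpy as np

    def fidelity_map(original, reconstructed, block=64):
        # original, reconstructed: 2-D arrays of equal size (one color component).
        # Returns a 2-D array with one fidelity value per block, each placed at
        # the position corresponding to its block's position in the image.
        h, w = original.shape
        rows, cols = h // block, w // block
        fmap = np.empty((rows, cols), dtype=np.float32)
        for r in range(rows):
            for c in range(cols):
                y, x = r * block, c * block
                first = original[y:y + block, x:x + block].astype(np.float32)   # first image block
                second = reconstructed[y:y + block, x:x + block].astype(np.float32)  # second image block
                fmap[r, c] = np.mean((first - second) ** 2)
        return fmap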
In a possible design, the fidelity map includes a plurality of first elements, the plurality of second image blocks correspond one-to-one to the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image. A first element may also be called a pixel of the fidelity map. When the fidelity map is computed, the image is divided into basic units on which fidelity is computed, so the fidelity map contains as many first elements as there are basic units; each first element has two attributes, a fidelity value and the position of that fidelity value in the fidelity map. In this embodiment, the fidelity map is a two-dimensional array: the reconstructed image is divided into second image blocks, and the fidelity map is obtained from their fidelity values; that is, the second image blocks determine the first elements one-to-one, the value of any first element is the fidelity value of its corresponding second image block, and, specifically, the position of any first element in the fidelity map is the same as the position of its corresponding second image block in the reconstructed image or in the preset region of the reconstructed image. The element at each position of the fidelity map thus characterizes the fidelity of the co-located region of the reconstructed image or of its preset region, which makes the fidelity map suitable for characterizing the distortion intensity information of the encoded image.
In a possible design, the second image block includes three color components, the fidelity map is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of its corresponding second image block, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image. When the fidelity map is such a three-dimensional array, the height means that the two-dimensional array under any color component A includes several rows of first elements, the width means that it includes several columns of first elements, the number of first elements equals the product of the width and the height, and color component A is any one of the three color components. In this embodiment, the original or reconstructed image includes three color components; when the fidelity map is computed, a two-dimensional fidelity map is computed for each color component, and the two-dimensional arrays under the three color components form the three-dimensional fidelity map. A first element of the two-dimensional array under color component A characterizes, for that component, the fidelity of the co-located region of the reconstructed image or of its preset region, so the three-dimensional fidelity map can characterize the distortion intensity information of the three color components of the encoded image.
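A sketch of this three-dimensional layout, reusing the fidelity_map function sketched above and assuming three component planes of equal size (i.e., 4:4:4 sampling; subsampled chroma would need its own block grid, which this illustration omits):

    import numpy as np

    def fidelity_volume(orig_planes, recon_planes, block=64):
        # orig_planes / recon_planes: lists of three 2-D arrays [Y, Cb, Cr].
        planes = [fidelity_map(o, r, block) for o, r in zip(orig_planes, recon_planes)]
        return np.stack(planes, axis=0)  # dimensions: (color component, height, width)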
In a possible design, encoding the fidelity map to obtain the second code stream includes: performing entropy encoding on any first element to obtain the second code stream, where the entropy encoding of that first element is independent of the entropy encoding of the other first elements; or determining the probability distribution of the value of the first element, or a predicted value of the first element, according to the value of at least one already-encoded first element, and performing entropy encoding on the first element according to that probability distribution or predicted value, to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements. Specifically, for the entropy encoding of any first element of the fidelity map: if no encoded first element exists, the element is entropy-encoded directly to obtain its code stream; if encoded first elements exist, the probability distribution of the element's value or the element's predicted value is determined from the value of at least one encoded first element, and the element is entropy-encoded accordingly to obtain its code stream. In this embodiment, encoding the fidelity map to obtain the second code stream means encoding each first element of the fidelity map to obtain that element's code stream, the second code stream including those code streams; during entropy encoding, the values of already-encoded first elements can be used to determine the probability distribution of the value of the currently encoded first element or its predicted value, for example using the values of the neighboring first elements to the left of, above and above-left of the current element, and the current element is then entropy-encoded according to that probability distribution or predicted value, which helps improve entropy-coding efficiency.
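A simplified sketch of the neighbor-based prediction described above; the actual entropy coder (for example an arithmetic coder driven by the derived probability distribution) is abstracted away here, and the averaging predictor is only one illustrative choice among those the text permits:

    import numpy as np

    def prediction_residuals(fmap_q):
        # fmap_q: 2-D array of (quantized) first-element values, scanned in
        # raster order. Each element is predicted from its already-encoded
        # left/top neighbors, and the residuals are returned; a real coder
        # would entropy-encode these residuals, or use the neighbors to model
        # the probability distribution of the current element's value.
        v = np.asarray(fmap_q, dtype=np.int64)
        h, w = v.shape
        res = np.zeros((h, w), dtype=np.int64)
        for r in range(h):
            for c in range(w):
                left = v[r, c - 1] if c > 0 else 0
                top = v[r - 1, c] if r > 0 else 0
                pred = (left + top) // 2 if (r > 0 and c > 0) else left + top
                res[r, c] = v[r, c] - pred
        return res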
In a possible design, encoding the fidelity map to obtain the second code stream includes: quantizing any first element to obtain a quantized first element, and encoding the quantized first element to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements. The quantization step sizes used for the individual first elements may be the same or different. In this embodiment, in the process of encoding any first element of the fidelity map, the element may first be quantized and the quantized element then encoded to obtain the element's code stream; quantizing a first element means quantizing the element's value, in other words scaling a fidelity value of the fidelity map. The purpose of quantization is to reduce the dynamic range of the fidelity values in the fidelity map, so as to reduce the coding overhead of the fidelity map.
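A minimal sketch of the scaling described above, assuming a uniform scalar quantizer with a single step size (the text also permits per-element step sizes); the inverse operation is the one used at the decoding end in the corresponding design of the second aspect:

    def quantize(fidelity_value, step):
        # Shrinks the dynamic range of a fidelity value before encoding.
        return int(round(fidelity_value / step))

    def dequantize(level, step):
        # Inverse quantization; the reconstruction error introduced by
        # quantization is at most half a step.
        return level * step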
According to a second aspect, the present application relates to a decoding method. The method is performed by a decoding device. The method includes: decoding a first code stream to obtain a reconstructed image of an original image; and decoding a second code stream to obtain a reconstructed map of a fidelity map, where the second code stream is obtained by encoding the fidelity map, and the reconstructed map of the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image. In this embodiment of the application, the original image is encoded to obtain the first code stream and the fidelity map is encoded to obtain the second code stream, the fidelity map being used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image, where distortion includes difference; the decoding end decodes the first code stream to obtain the reconstructed image of the original image, and decodes the second code stream to obtain the reconstructed map of the fidelity map; if the encoding is lossless encoding, the reconstructed map of the fidelity map is identical to the fidelity map, and if the encoding is lossy encoding, the reconstructed map of the fidelity map includes the coding distortion generated by encoding the fidelity map; the reconstructed map of the fidelity map can therefore be used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image, so the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
In a possible design, the fidelity map includes the fidelity value of any second image block of a plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and its corresponding original image block. The plurality of second image blocks are obtained by dividing the reconstructed image and correspond one-to-one to a plurality of original image blocks, an original image block being an image block of the original image, for example the aforementioned first image block; the plurality of original image blocks are obtained by dividing the original image and the plurality of second image blocks by dividing the reconstructed image, the division strategy used for the original image being the same as that used for the reconstructed image; or the original image blocks are obtained by dividing the preset region of the original image and the second image blocks by dividing the preset region of the reconstructed image, the division strategy used for the preset region of the original image being the same as that used for the preset region of the reconstructed image.
In a possible design, the fidelity map includes a plurality of first elements, the plurality of second image blocks correspond one-to-one to the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the fidelity map is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of its corresponding second image block, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, decoding the second code stream to obtain the reconstructed map of the fidelity map includes: decoding the second code stream to obtain the reconstruction fidelity value of any first element, and obtaining the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements. The reconstruction fidelity value of a first element is the reconstruction of the element's value. The position of the reconstruction fidelity value of any first element in the reconstructed map of the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image; alternatively, the second code stream includes the position of the first element in the fidelity map, and the position of the reconstruction fidelity value in the reconstructed map is determined from that position. In this embodiment, the second code stream includes the code stream of each first element, so decoding the second code stream yields the reconstruction fidelity value of each first element. It should be understood that if the encoding is lossless encoding, the reconstruction fidelity value of a first element equals the element's value, and if the encoding is lossy encoding, the reconstruction fidelity value includes the coding distortion generated by encoding the element, i.e., it is the sum of the element's value and the coding distortion. The reconstructed map of the fidelity map is then obtained from the reconstruction fidelity values of the first elements and can be used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image; the embodiments of the present application can therefore obtain the distortion intensity information of the encoded image at the decoding end.
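A sketch of how the decoding end could place the decoded reconstruction fidelity values into the reconstructed map; the positions are either derived from the co-located second image blocks or parsed from the second code stream, as described above, and the function name is hypothetical:

    import numpy as np

    def place_reconstruction_values(values, positions, shape):
        # values: decoded reconstruction fidelity values, in decoding order.
        # positions: (row, column) of each first element in the map.
        recon_map = np.zeros(shape, dtype=np.float32)
        for value, (r, c) in zip(values, positions):
            recon_map[r, c] = value
        return recon_map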
In a possible design, the second code stream is obtained by encoding quantized first elements; decoding the second code stream to obtain the reconstructed map of the fidelity map includes: decoding the second code stream to obtain the reconstruction fidelity value of a quantized first element; performing inverse quantization on that value to obtain the reconstruction fidelity value of the first element; and obtaining the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements. In this embodiment, to reduce coding overhead, the encoding end may quantize a first element before encoding it, so the code stream of a first element obtained at the decoding end may have been produced by encoding the quantized element; in that case, after decoding the second code stream to obtain the reconstruction fidelity value of the quantized first element, that value must also be inverse-quantized to obtain the reconstruction fidelity value of the first element, and the reconstructed map of the fidelity map is then obtained from the reconstruction fidelity values. In this way, the distortion intensity information of the encoded image is obtained at the decoding end while the coding overhead is reduced.
In a possible design, the method further includes: processing the reconstructed image or the preset region of the reconstructed image according to the reconstructed map of the fidelity map, so as to improve the image quality of the reconstructed image or of its preset region; or determining, according to the reconstructed map of the fidelity map, whether to use the reconstructed image. In this embodiment, the decoding end can process the reconstructed image or its preset region according to the reconstructed map of the fidelity map to improve its image quality, or decide from the reconstructed map whether to use the reconstructed image, which facilitates the application of the reconstructed image.
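One illustrative policy for the decision described above, not mandated by the text: it assumes the reconstructed fidelity values are MSE-like (lower means better quality), and the threshold is an arbitrary assumption of this sketch:

    def accept_reconstruction(recon_fidelity_map, max_mse=100.0):
        # Use the reconstructed image only if no region exceeds the distortion
        # threshold; regions above it could instead be routed to an
        # enhancement step such as post-filtering.
        return float(recon_fidelity_map.max()) <= max_mse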
According to a third aspect, the present application relates to a decoding method. The method is performed by a decoding device. The method includes: decoding a first code stream to obtain a reconstructed image of an original image and target quantization parameter information, the target quantization parameter information including quantization parameter values of all or some of a plurality of second image blocks of the reconstructed image; and constructing a quantization parameter map of the reconstructed image according to the target quantization parameter information, where the quantization parameter map of the reconstructed image is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image. It should be understood that, at the decoding end, the main purpose of the quantization parameter is to perform the inverse quantization operation; however, the quantization parameter itself expresses signal distortion (fidelity), so the quantization parameter map of the reconstructed image constructed from the quantization parameters can be used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image. In the prior art, the decoding operation performed on the first code stream obtained by encoding the original image does not yield the quantization parameter values of the individual regions (second image blocks) of the reconstructed image. In this embodiment, the decoding end can, while decoding the first code stream obtained by encoding the original image, obtain the quantization parameter values of the individual regions (second image blocks) of the reconstructed image; a quantization parameter map of the reconstructed image can be constructed from the quantization parameter values of all or some of the second image blocks, and this map can be used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image. The embodiments of the present application can therefore obtain the distortion intensity information of the encoded image at the decoding end.
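A sketch of the construction described above, assuming the decoder exposes, for each coding unit whose quantization parameter was parsed from the first code stream, its position on a CU grid; entries that cannot be parsed stay marked and can be filled from spatial or temporal neighbors, as described in the designs below:

    import numpy as np

    def build_qp_map(parsed_cus, grid_shape):
        # parsed_cus: iterable of (row, col, qp) for the coding units whose QP
        # values were obtained while decoding the first code stream.
        qp_map = np.full(grid_shape, -1, dtype=np.int32)  # -1 marks "unknown"
        for r, c, qp in parsed_cus:
            qp_map[r, c] = qp
        return qp_map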
In a possible design, the second image block is a coding unit. In this embodiment, the encoding end divides the original image into a plurality of coding units and encodes them to obtain the first code stream; the decoding end decodes the first code stream to obtain the reconstructed image of the original image and the target quantization parameter information, which includes the quantization parameter values of all or some of those coding units, and constructs the quantization parameter map of the reconstructed image from this information. The quantization parameter map of the reconstructed image is a form of fidelity map: when the target quantization parameter information includes the quantization parameter values of all of the coding units, the quantization parameter map is a fidelity map of the entire reconstructed image; when it includes the quantization parameter values of only some of the coding units, the quantization parameter map is a fidelity map of a preset region of the reconstructed image. The quantization parameter map of the reconstructed image can therefore characterize the fidelity of the reconstructed image or of its preset region, so the embodiments of the present application can obtain the distortion intensity information of the encoded image at the decoding end.
In a possible design, the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks correspond one-to-one to the plurality of second elements, the value of any second element is the quantization parameter value of the second image block corresponding to that second element, and the position of any second element in the quantization parameter map of the reconstructed image is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image. A second element may also be called a pixel of the quantization parameter map.
In a possible design, the second image block includes three color components, the quantization parameter map of the reconstructed image is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of the color component A of its corresponding second image block, and the position of any second element in the two-dimensional array under any color component A of the quantization parameter map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, constructing the quantization parameter map of the reconstructed image according to the target quantization parameter information includes: when the target quantization parameter information includes the quantization parameter values of some of the plurality of coding units, obtaining the quantization parameter value of a target coding unit according to the quantization parameter values of those coding units and/or a reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image and a target coding unit is a coding unit of the plurality of coding units other than those coding units; and obtaining the quantization parameter map of the reconstructed image from the quantization parameter values of those coding units and of the target coding units. Specifically, when the target quantization parameter information includes the quantization parameter values of all of the coding units, the quantization parameter map of the reconstructed image is obtained from those values; when it includes the quantization parameter values of only some of the coding units, the map is obtained from the values of those coding units, or from those values together with the reference quantization parameter map, the reference quantization parameter map being the quantization parameter map of a reference image of the reconstructed image. In this embodiment, when the target quantization parameter information includes the quantization parameter values of all of the coding units, the quantization parameter map obtained from them can characterize the fidelity of the entire reconstructed image; when it includes the values of only some of the coding units, the map obtained from them can characterize the fidelity of a preset region of the reconstructed image; and when the reference quantization parameter map is also used, the quantization parameter value of any coding unit other than those coding units can be obtained from the reference quantization parameter map, so that the quantization parameter of every coding unit becomes available, and the resulting map can characterize the fidelity of the entire reconstructed image or of its preset region. Thus, whichever of these cases applies, a quantization parameter map of the reconstructed image characterizing the fidelity of the reconstructed image or of its preset region is obtained.
In a possible design, obtaining the quantization parameter value of the target coding unit according to the quantization parameter values of the partial coding units includes: determining the quantization parameter value of the target coding unit according to the quantization parameter value of at least one of the partial coding units. In this embodiment, when the quantization parameter value of a coding unit cannot be obtained from the first code stream, it can be filled in from the quantization parameters of the coding unit's spatial neighborhood (see the sketch below). Specifically, when the target quantization parameter information includes the quantization parameter values of some of the coding units, the quantization parameter value of any other coding unit can be determined from the value of at least one of those coding units, which ensures that a quantization parameter value is obtained for every coding unit; the quantization parameter map of the reconstructed image can then be obtained from the quantization parameter values of some or all of the coding units, and characterizes, respectively, the fidelity of the preset region of the reconstructed image or the fidelity of the entire reconstructed image.
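A minimal sketch of the spatial filling just described, propagating a known quantization parameter from the left or top neighbor on the CU grid; which neighbors to use, and in what order, is a design choice the text leaves open:

    def fill_from_spatial_neighbors(qp_map):
        h, w = qp_map.shape
        for r in range(h):
            for c in range(w):
                if qp_map[r, c] < 0:  # QP not available from the code stream
                    if c > 0 and qp_map[r, c - 1] >= 0:
                        qp_map[r, c] = qp_map[r, c - 1]   # left neighbor
                    elif r > 0 and qp_map[r - 1, c] >= 0:
                        qp_map[r, c] = qp_map[r - 1, c]   # top neighbor
        return qp_map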
In a possible design, the reference quantization parameter map includes a plurality of reference elements, the value of any reference element being the quantization parameter value of a coding unit of the reference image; obtaining the quantization parameter value of the target coding unit according to the reference quantization parameter map includes: using the value of a target element as the quantization parameter value of any target coding unit, where the target element is a reference element of the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the reconstructed image, or according to that position and the motion vector of the target coding unit. A reference element is another name for a second element. In this embodiment, when the quantization parameter value of a coding unit cannot be obtained from the first code stream, it can be filled in from the quantization parameters of the coding unit's temporal neighborhood (see the sketch below). Specifically, for any coding unit whose quantization parameter value cannot be obtained from the first code stream, the value of a target element of the reference quantization parameter map can be used as that coding unit's quantization parameter value, the position of the target element being determined from the coding unit's position in the reconstructed image, optionally offset by the coding unit's motion vector. This ensures that a quantization parameter value is obtained for every coding unit; the quantization parameter map of the reconstructed image can then be obtained from the quantization parameter values of some or all of the coding units, and characterizes, respectively, the fidelity of the preset region of the reconstructed image or the fidelity of the entire reconstructed image.
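A sketch of the temporal filling, under the assumptions (of this illustration only) that motion vectors are given in pixels and the CU grid uses a fixed CU size, so a motion vector is converted to a grid offset by dividing by the CU size:

    def fill_from_reference(qp_map, ref_qp_map, motion_vectors, cu_size=64):
        # motion_vectors: dict mapping the (row, col) of a target CU to its
        # motion vector (dy, dx) in pixels; an absent entry means a zero
        # vector, i.e., the co-located reference element is used.
        h, w = qp_map.shape
        for r in range(h):
            for c in range(w):
                if qp_map[r, c] < 0:
                    dy, dx = motion_vectors.get((r, c), (0, 0))
                    rr = min(max(r + round(dy / cu_size), 0), h - 1)
                    cc = min(max(c + round(dx / cu_size), 0), w - 1)
                    qp_map[r, c] = ref_qp_map[rr, cc]
        return qp_map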
In a possible design, the method further includes: storing the reconstructed image in association with the quantization parameter map of the reconstructed image, so that the reconstructed image can serve as a reference image and its quantization parameter map as a reference quantization parameter map. In this embodiment, the quantization parameter map of the reconstructed image can be stored for use as the reference quantization parameter map when quantization parameter maps are constructed for subsequently decoded images, which facilitates that construction.
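A sketch of the associated storage, keyed here by picture order count (an assumed identifier; any key that lets a later picture locate its reference image works equally well):

    class ReferencePictureStore:
        def __init__(self):
            self._entries = {}

        def put(self, poc, reconstructed_image, qp_map):
            # Store the reconstructed image together with its QP map, so the
            # image can act as a reference image and the map as a reference
            # quantization parameter map.
            self._entries[poc] = (reconstructed_image, qp_map)

        def reference_qp_map(self, poc):
            return self._entries[poc][1]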
In a possible design, the method further includes: processing the reconstructed image or the preset region of the reconstructed image according to the quantization parameter map of the reconstructed image, so as to improve the image quality of the reconstructed image or of its preset region; or determining, according to the quantization parameter map of the reconstructed image, whether to use the reconstructed image. In this embodiment, the decoding end can process the reconstructed image or its preset region according to the quantization parameter map to improve its image quality, or decide from the quantization parameter map whether to use the reconstructed image, which facilitates the application of the reconstructed image.
According to a fourth aspect, the present application relates to an encoding device; for the beneficial effects, reference may be made to the description of the first aspect, which is not repeated here. The encoding device includes: a video encoder configured to encode an original image to obtain a first code stream; and a fidelity map encoder configured to encode a fidelity map to obtain a second code stream, where the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of a reconstructed image, the reconstructed image being obtained by decoding the first code stream.
In a possible design, the encoding device further includes a fidelity map calculator configured to: divide the original image into a plurality of first image blocks and divide the reconstructed image into a plurality of second image blocks, where the division strategy used for the original image is the same as that used for the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or divide a preset region of the original image into a plurality of first image blocks and divide a preset region of the reconstructed image into a plurality of second image blocks, where the division strategy used for the preset region of the original image is the same as that used for the preset region of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and compute the fidelity value of any second image block from that second image block and its corresponding first image block, where the fidelity map includes the fidelity value of the second image block and the fidelity value is used to represent the distortion between the second image block and its corresponding first image block.
In a possible design, the fidelity map includes a plurality of first elements, the plurality of second image blocks correspond one-to-one to the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the fidelity map is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of its corresponding second image block, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the fidelity map encoder is specifically configured to: entropy-encode any first element to obtain the second code stream, where the entropy encoding of that first element is independent of the entropy encoding of the other first elements; or determine the probability distribution of the value of the first element, or a predicted value of the first element, according to the value of at least one already-encoded first element, and entropy-encode the first element according to that probability distribution or predicted value to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements.
In a possible design, the fidelity map encoder is specifically configured to: quantize any first element to obtain a quantized first element, and encode the quantized first element to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements.
According to a fifth aspect, the present application relates to a decoding device; for the beneficial effects, reference may be made to the description of the second aspect, which is not repeated here. The decoding device includes: a video decoder configured to decode a first code stream to obtain a reconstructed image of an original image; and a fidelity map decoder configured to decode a second code stream to obtain a reconstructed map of a fidelity map, where the second code stream is obtained by encoding the fidelity map, and the reconstructed map of the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image.
In a possible design, the fidelity map includes the fidelity value of any second image block of a plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and its corresponding original image block.
In a possible design, the fidelity map includes a plurality of first elements, the plurality of second image blocks correspond one-to-one to the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the fidelity map is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of its corresponding second image block, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the fidelity map decoder is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of any first element, and obtain the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements.
In a possible design, the second code stream is obtained by encoding quantized first elements; the fidelity map decoder is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of a quantized first element; inverse-quantize that value to obtain the reconstruction fidelity value of the first element; and obtain the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements.
According to a sixth aspect, the present application relates to a decoding device; for the beneficial effects, reference may be made to the description of the third aspect, which is not repeated here. The decoding device includes: a video decoder configured to decode a first code stream to obtain a reconstructed image of an original image and target quantization parameter information, the target quantization parameter information including quantization parameter values of all or some of a plurality of second image blocks of the reconstructed image; and a quantization parameter map constructor configured to construct a quantization parameter map of the reconstructed image according to the target quantization parameter information, where the quantization parameter map of the reconstructed image is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image.
In a possible design, the second image block is a coding unit.
In a possible design, the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks correspond one-to-one to the plurality of second elements, the value of any second element is the quantization parameter value of its corresponding second image block, and the position of any second element in the quantization parameter map of the reconstructed image is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the quantization parameter map of the reconstructed image is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of the color component A of its corresponding second image block, and the position of any second element in the two-dimensional array under any color component A of the quantization parameter map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the quantization parameter map constructor is specifically configured to: when the target quantization parameter information includes the quantization parameter values of some of the plurality of coding units, obtain the quantization parameter value of a target coding unit according to the quantization parameter values of those coding units and/or a reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image and a target coding unit is a coding unit of the plurality of coding units other than those coding units; and obtain the quantization parameter map of the reconstructed image from the quantization parameter values of those coding units and of the target coding unit.
In a possible design, the reference quantization parameter map includes a plurality of reference elements, the value of any reference element being the quantization parameter value of a coding unit of the reference image; the quantization parameter map constructor is specifically configured to: use the value of a target element as the quantization parameter value of any target coding unit, where the target element is a reference element of the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the reconstructed image, or according to that position and the motion vector of the target coding unit.
According to a seventh aspect, the present application relates to an encoding apparatus; for the beneficial effects, reference may be made to the description of the first aspect, which is not repeated here. The encoding apparatus has the function of implementing the behavior in the method example of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software, the hardware or software including one or more modules corresponding to the function. In a possible design, the encoding apparatus includes a processing unit configured to: encode an original image to obtain a first code stream; and encode a fidelity map to obtain a second code stream, where the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of a reconstructed image, the reconstructed image being obtained by decoding the first code stream.
In a possible design, the processing unit is further configured to: divide the original image into a plurality of first image blocks and divide the reconstructed image into a plurality of second image blocks, where the division strategy used for the original image is the same as that used for the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; or divide a preset region of the original image into a plurality of first image blocks and divide a preset region of the reconstructed image into a plurality of second image blocks, where the division strategy used for the preset region of the original image is the same as that used for the preset region of the reconstructed image and the plurality of first image blocks correspond one-to-one to the plurality of second image blocks; and compute the fidelity value of any second image block from that second image block and its corresponding first image block, where the fidelity map includes the fidelity value of the second image block and the fidelity value is used to represent the distortion between the second image block and its corresponding first image block.
In a possible design, the fidelity map includes a plurality of first elements, the plurality of second image blocks correspond one-to-one to the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the fidelity map is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of its corresponding second image block, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the processing unit is specifically configured to: entropy-encode any first element to obtain the second code stream, where the entropy encoding of that first element is independent of the entropy encoding of the other first elements; or determine the probability distribution of the value of the first element, or a predicted value of the first element, according to the value of at least one already-encoded first element, and entropy-encode the first element according to that probability distribution or predicted value to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements.
In a possible design, the processing unit is specifically configured to: quantize any first element to obtain a quantized first element, and encode the quantized first element to obtain the second code stream, where the second code stream includes the code streams of the plurality of first elements.
According to an eighth aspect, the present application relates to a decoding apparatus; for the beneficial effects, reference may be made to the description of the second aspect, which is not repeated here. The decoding apparatus has the function of implementing the behavior in the method example of the second aspect. The function may be implemented by hardware, or by hardware executing corresponding software, the hardware or software including one or more modules corresponding to the function. In a possible design, the decoding apparatus includes a processing unit configured to: decode a first code stream to obtain a reconstructed image of an original image; and decode a second code stream to obtain a reconstructed map of a fidelity map, where the second code stream is obtained by encoding the fidelity map, and the reconstructed map of the fidelity map is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image.
In a possible design, the fidelity map includes the fidelity value of any second image block of a plurality of second image blocks, and the fidelity value of any second image block is used to represent the distortion between that second image block and its corresponding original image block.
In a possible design, the fidelity map includes a plurality of first elements, the plurality of second image blocks correspond one-to-one to the plurality of first elements, the value of any first element is the fidelity value of the second image block corresponding to that first element, and the position of any first element in the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the fidelity map is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the fidelity map includes a plurality of first elements; the value of any first element is the fidelity value of the color component A of its corresponding second image block, and the position of any first element in the two-dimensional array under any color component A of the fidelity map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the processing unit is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of any first element, and obtain the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements.
In a possible design, the second code stream is obtained by encoding quantized first elements; the processing unit is specifically configured to: decode the second code stream to obtain the reconstruction fidelity value of a quantized first element; inverse-quantize that value to obtain the reconstruction fidelity value of the first element; and obtain the reconstructed map of the fidelity map from the reconstruction fidelity values of the first elements.
According to a ninth aspect, the present application relates to a decoding apparatus; for the beneficial effects, reference may be made to the description of the third aspect, which is not repeated here. The decoding apparatus has the function of implementing the behavior in the method example of the third aspect. The function may be implemented by hardware, or by hardware executing corresponding software, the hardware or software including one or more modules corresponding to the function. In a possible design, the decoding apparatus includes a processing unit configured to: decode a first code stream to obtain a reconstructed image of an original image and target quantization parameter information, the target quantization parameter information including quantization parameter values of all or some of a plurality of second image blocks of the reconstructed image; and construct a quantization parameter map of the reconstructed image according to the target quantization parameter information, where the quantization parameter map of the reconstructed image is used to represent the distortion between at least a partial region of the original image and at least a partial region of the reconstructed image.
In a possible design, the second image block is a coding unit.
In a possible design, the quantization parameter map of the reconstructed image includes a plurality of second elements, the plurality of second image blocks correspond one-to-one to the plurality of second elements, the value of any second element is the quantization parameter value of its corresponding second image block, and the position of any second element in the quantization parameter map of the reconstructed image is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the second image block includes three color components, the quantization parameter map of the reconstructed image is a three-dimensional array with the three dimensions of color component, width and height, and the two-dimensional array under any color component A of the quantization parameter map includes a plurality of second elements; the value of any second element is the quantization parameter value of the color component A of its corresponding second image block, and the position of any second element in the two-dimensional array under any color component A of the quantization parameter map is determined according to the position of its corresponding second image block in the reconstructed image, or according to the position of that second image block in the preset region of the reconstructed image.
In a possible design, the processing unit is specifically configured to: when the target quantization parameter information includes the quantization parameter values of some of the plurality of coding units, obtain the quantization parameter value of a target coding unit according to the quantization parameter values of those coding units and/or a reference quantization parameter map, where the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image and a target coding unit is a coding unit of the plurality of coding units other than those coding units; and obtain the quantization parameter map of the reconstructed image from the quantization parameter values of those coding units and of the target coding unit.
In a possible design, the reference quantization parameter map includes a plurality of reference elements, the value of any reference element being the quantization parameter value of a coding unit of the reference image; the processing unit is specifically configured to: use the value of a target element as the quantization parameter value of any target coding unit, where the target element is a reference element of the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the reconstructed image, or according to that position and the motion vector of the target coding unit.
The method according to the first aspect of the present application may be performed by the apparatus according to the seventh aspect of the present application. Other features and implementations of the method according to the first aspect depend directly on the functionality and implementations of the apparatus according to the seventh aspect.
The method according to the second aspect of the present application may be performed by the apparatus according to the eighth aspect of the present application. Other features and implementations of the method according to the second aspect depend directly on the functionality and implementations of the apparatus according to the eighth aspect.
The method according to the third aspect of the present application may be performed by the apparatus according to the ninth aspect of the present application. Other features and implementations of the method according to the third aspect depend directly on the functionality and implementations of the apparatus according to the ninth aspect.
According to a tenth aspect, the present application relates to an apparatus for encoding a video stream, including a processor and a memory, where the memory stores instructions that cause the processor to perform the method according to the first aspect.
According to an eleventh aspect, the present application relates to an apparatus for decoding a video stream, including a processor and a memory, where the memory stores instructions that cause the processor to perform the method according to the second aspect.
According to a twelfth aspect, the present application relates to an apparatus for decoding a video stream, including a processor and a memory, where the memory stores instructions that cause the processor to perform the method according to the third aspect.
According to a thirteenth aspect, the present application provides a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to encode video data, where the instructions cause the one or more processors to perform the method according to the first, second or third aspect or any possible embodiment of the first, second or third aspect.
According to a fourteenth aspect, the present application relates to a computer program product including program code which, when run, performs the method according to the first, second or third aspect or any possible embodiment of the first, second or third aspect.
According to a fifteenth aspect, the present application relates to an encoder (20) including a processing circuit configured to perform the method according to the first aspect or any possible embodiment of the first aspect.
According to a sixteenth aspect, the present application relates to a decoder (30) including a processing circuit configured to perform the method according to the second or third aspect or any possible embodiment of the second or third aspect.
According to a seventeenth aspect, the present application relates to an encoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where the program, when executed by the processors, causes the encoder to perform the method according to the first aspect or any possible embodiment of the first aspect.
According to an eighteenth aspect, the present application relates to a decoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where the program, when executed by the processors, causes the decoder to perform the method according to the second or third aspect or any possible embodiment of the second or third aspect.
According to a nineteenth aspect, the present application relates to a non-transitory computer-readable storage medium including program code which, when executed by a computer device, performs the method according to the first, second or third aspect or any possible embodiment of the first, second or third aspect.
According to a twentieth aspect, the present application relates to a non-transitory storage medium including a bitstream encoded according to the method of the first aspect or any possible embodiment of the first aspect.
According to a twenty-first aspect, the present application relates to an electronic device including the encoding device according to the fourth aspect and/or the decoding device according to the fifth or sixth aspect.
Details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects and advantages will be apparent from the description, the drawings and the claims.
Brief description of the drawings
The accompanying drawings used in the embodiments of the present application are introduced below.
FIG. 1A is a block diagram of an example of a video coding system for implementing an embodiment of the present application, in which the system uses a neural network to encode or decode video images;
FIG. 1B is a block diagram of another example of a video coding system for implementing an embodiment of the present application, in which the video encoder and/or the video decoder use a neural network to encode or decode video images;
FIG. 2 is a block diagram of an example of a video encoder for implementing an embodiment of the present application, in which the video encoder 20 uses a neural network to encode video images;
FIG. 3 is a block diagram of an example of a video decoder for implementing an embodiment of the present application, in which the video decoder 30 uses a neural network to decode video images;
FIG. 4 is a schematic block diagram of a video coding apparatus for implementing an embodiment of the present application;
FIG. 5 is a schematic block diagram of a video coding apparatus for implementing an embodiment of the present application;
FIG. 6 is a schematic diagram of a deep-neural-network-based image codec for implementing an embodiment of the present application;
FIG. 7 is a schematic block diagram of an encoding method provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of the division of an original image or a preset region of an original image provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of the division of a reconstructed image or a preset region of a reconstructed image provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a fidelity map provided by an embodiment of the present application;
FIG. 11 is a schematic block diagram of a decoding method provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a reconstructed map of a fidelity map provided by an embodiment of the present application;
FIG. 13 is a schematic block diagram of another decoding method provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a quantization parameter map provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of another quantization parameter map provided by an embodiment of the present application;
FIG. 16 is a schematic block diagram of an encoding device provided by an embodiment of the present application;
FIG. 17 is a schematic block diagram of a decoding device provided by an embodiment of the present application;
FIG. 18 is a schematic block diagram of a decoding device provided by an embodiment of the present application;
FIG. 19 is a schematic block diagram of an encoding apparatus provided by an embodiment of the present application;
FIG. 20 is a schematic block diagram of a decoding apparatus provided by an embodiment of the present application;
FIG. 21 is a schematic block diagram of a decoding apparatus provided by an embodiment of the present application.
具体实施方式
本申请实施例提供一种基于人工智能的视频图像压缩技术,尤其是提供一种基于神经网络(Neural Network,NN)的视频压缩技术,具体提供一种基于神经网络的帧间预测技术/帧内预测技术/滤波技术,以改进传统的混合视频编解码系统。
视频编码通常是指处理形成视频或视频序列的图像序列。在视频编码领域,术语“图像(picture)”、“帧(frame)”或“图片(image)”可以用作同义词。视频编码(或通常称为编码)包括视频编码和视频解码两部分。视频编码在源侧执行,通常包括处理(例如,压缩)原始视频图像以减少表示该视频图像所需的数据量(从而更高效存储和/或传输)。视频解码在目的地侧执行,通常包括相对于编码器作逆处理,以重建视频图像。实施例涉及的视频图像(或通常称为图像)的“编码”应理解为视频图像或视频序列的“编码”或“解码”。编码部分和解码部分也合称为编解码(编码和解码,CODEC)。
在无损视频编码情况下,可以重建原始视频图像,即重建的视频图像与原始视频图像具有相同的质量(假设存储或传输期间没有传输损耗或其它数据丢失)。在有损视频编码情况下,通过量化等执行进一步压缩,来减少表示视频图像所需的数据量,而解码器侧无法完全重建视频图像,即重建的视频图像的质量比原始视频图像的质量较低或较差。
几个视频编码标准属于“有损混合型视频编解码”(即,将像素域中的空间和时间预测与变换域中用于应用量化的2D变换编码结合)。视频序列中的任一图像通常分割成不重叠的块集合,通常在块级上进行编码。换句话说,编码器通常在块(视频块)级处理即编码视频,例如,通过空间(帧内)预测和时间(帧间)预测来产生预测块;从当前块(当前处理/待处理的块)中减去预测块,得到残差块;在变换域中变换残差块并量化残差块,以减少待传输(压缩)的数据量,而解码器侧将相对于编码器的逆处理部分应用于编码或压缩的块,以重建用于表示的当前块。另外,编码器需要重复解码器的处理步骤,使得编码器和解码器生成相同的预测(例如,帧内预测和帧间预测)和/或重建像素,用于处理,即编码后续块。
在以下译码系统10的实施例中,编码器20和解码器30根据图1A至图3进行描述。
图1A为示例性译码系统10的示意性框图,例如可以利用本申请技术的视频译码系统10(或简称为译码系统10)。视频译码系统10中的视频编码器20(或简称为编码器20)和视频解码器30(或简称为解码器30)代表可用于根据本申请中描述的各种示例执行各技术的设备等。
如图1A所示,译码系统10包括源设备12,源设备12用于将编码图像等编码图像数据21提供给用于对编码图像数据21进行解码的目的设备14。其中,编码图像数据也即比特流、压缩码流或码流,故编码图像数据21也可称为比特流21、压缩码流21或码流21。
源设备12包括编码器20,另外即可选地,可包括图像源16、图像预处理器等预处理器(或预处理单元)18、通信接口(或通信单元)22。
图像源16可包括或可以为任意类型的用于捕获现实世界图像等的图像捕获设备,和/或 任意类型的图像生成设备,例如用于生成计算机动画图像的计算机图形处理器或任意类型的用于获取和/或提供现实世界图像、计算机生成图像(例如,屏幕内容、虚拟现实(virtual reality,VR)图像和/或其任意组合(例如增强现实(augmented reality,AR)图像)的设备。所述图像源可以为存储上述图像中的任意图像的任意类型的内存或存储器。
为了区分预处理器(或预处理单元)18执行的处理,图像(或图像数据)17也可称为原始图像(或原始图像数据)17。
预处理器18用于接收(原始)图像数据17,并对图像数据17进行预处理,得到预处理图像(或预处理图像数据)19。例如,预处理器18执行的预处理可包括修剪、颜色格式转换(例如从RGB转换为YCbCr)、调色或去噪。可以理解的是,预处理单元18可以为可选组件。
视频编码器(或编码器)20用于接收预处理图像数据19并提供编码图像数据21(下面将根据图2等进一步描述)。
源设备12中的通信接口22可用于:接收编码图像数据21并通过通信信道13向目的设备14等另一设备或任何其它设备发送编码图像数据21(或其它任意处理后的版本),以便存储或直接重建。
目的设备14包括解码器30,另外即可选地,可包括通信接口(或通信单元)28、后处理器(或后处理单元)32和显示设备34。
目的设备14中的通信接口28用于直接从源设备12或从存储设备等任意其它源设备接收编码图像数据21(或其它任意处理后的版本),例如,存储设备为编码图像数据存储设备,并将编码图像数据21提供给解码器30。
通信接口22和通信接口28可用于通过源设备12与目的设备14之间的直连通信链路,例如直接有线或无线连接等,或者通过任意类型的网络,例如有线网络、无线网络或其任意组合、任意类型的私网和公网或其任意类型的组合,发送或接收编码图像数据(或编码数据)21。
例如,通信接口22可用于将编码图像数据21封装为报文等合适的格式,和/或使用任意类型的传输编码或处理来处理所述编码后的图像数据,以便在通信链路或通信网络上进行传输。
通信接口28与通信接口22对应,例如,可用于接收传输数据,并使用任意类型的对应传输解码或处理和/或解封装对传输数据进行处理,得到编码图像数据21。
通信接口22和通信接口28均可配置为如图1A中从源设备12指向目的设备14的对应通信信道13的箭头所指示的单向通信接口,或双向通信接口,并且可用于发送和接收消息等,以建立连接,确认并交换与通信链路和/或例如编码后的图像数据传输等数据传输相关的任何其它信息,等等。
视频解码器(或解码器)30用于接收编码图像数据21并提供解码图像数据(或称解码图像、重建图像)31(下面将根据图3等进一步描述)。
后处理器32用于对解码后的图像等解码图像数据31(也称为重建后的图像数据)进行后处理,得到后处理后的图像等后处理图像数据33。后处理单元32执行的后处理可以包括例如颜色格式转换(例如从YCbCr转换为RGB)、调色、修剪或重采样,或者用于产生供显示设备34等显示的解码图像数据31等任何其它处理。
显示设备34用于接收后处理图像数据33,以向用户或观看者等显示图像。显示设备34可以为或包括任意类型的用于表示重建后图像的显示器,例如,集成或外部显示屏或显示器。 例如,显示屏可包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微型LED显示器、硅基液晶显示器(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任意类型的其它显示屏。
译码系统10还包括训练引擎25,训练引擎25用于训练编码器20(尤其是编码器20中的模式选择单元260)或解码器30(尤其是解码器30中的模式应用单元360)以处理输入图像或图像区域或图像块以生成输入图像或图像区域或图像块的预测值。
本申请实施例中用于练编码器20或解码器30的训练数据可以存入数据库(未示意)中,训练引擎25基于训练数据训练得到目标模型(例如:可以是用于图像帧间预测或图像帧内预测或者环路滤波的神经网络等)。需要说明的是,本申请实施例对于训练数据的来源不做限定,例如可以是从云端或其他地方获取训练数据进行模型训练。
训练引擎25训练得到的目标模型可以应用于译码系统10,40中,例如,应用于图1A所示的源设备12(例如编码器20)或目的设备14(例如解码器30)。训练引擎25可以在云端训练得到目标模型,然后译码系统10从云端下载并使用该目标模型;或者,训练引擎25可以在云端训练得到目标模型并使用该目标模型,译码系统10从云端直接获取处理结果。例如,训练引擎25训练得到具备滤波功能的目标模型,译码系统10从云端下载该目标模型,然后编码器20中的环路滤波器220或解码器30中的环路滤波器320可以根据该目标模型对输入的重建的图像或图像块进行滤波处理,得到滤波后的图像或图像块。又例如,训练引擎25训练得到具备滤波功能的目标模型,译码系统10无需从云端下载该目标模型,编码器20或解码器30将重建的图像或图像块传输给云端,由云端通过目标模型对该重建的图像或图像块进行滤波处理,得到滤波后的图像或图像块并传输给编码器20或解码器30。
尽管图1A示出了源设备12和目的设备14作为独立的设备,但设备实施例也可以同时包括源设备12和目的设备14或同时包括源设备12和目的设备14的功能,即同时包括源设备12或对应功能和目的设备14或对应功能。在这些实施例中,源设备12或对应功能和目的设备14或对应功能可以使用相同硬件和/或软件或通过单独的硬件和/或软件或其任意组合来实现。
根据描述,图1A所示的源设备12和/或目的设备14中的不同单元或功能的存在和(准确)划分可能根据实际设备和应用而有所不同,这对技术人员来说是显而易见的。
编码器20(例如视频编码器20)或解码器30(例如视频解码器30)或两者都可通过如图1B所示的处理电路实现,例如一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件、视频编码专用处理器或其任意组合。编码器20可以通过处理电路46实现,以包含参照图2编码器20论述的各种模块和/或本文描述的任何其它编码器系统或子系统。解码器30可以通过处理电路46实现,以包含参照图3解码器30论述的各种模块和/或本文描述的任何其它解码器系统或子系统。所述处理电路46可用于执行下文论述的各种操作。如图5所示,如果部分技术在软件中实施,则设备可以将软件的指令存储在合适的非瞬时性计算机可读存储介质中,并且使用一个或多个处理器在硬件中执行指令,从而执行本申请技术。视频编码器20和视频解码器30中的其中一个可作为组合编解码器(encoder/decoder,CODEC)的一部分集成在单个设备中,如图1B所示。
源设备12和目的设备14可包括各种设备中的任一种,包括任意类型的手持设备或固定 设备,例如,笔记本电脑或膝上型电脑、手机、智能手机、平板或平板电脑、相机、台式计算机、机顶盒、电视机、显示设备、数字媒体播放器、视频游戏控制台、视频流设备(例如,内容业务服务器或内容分发服务器)、广播接收设备、广播发射设备,等等,并可以不使用或使用任意类型的操作系统。在一些情况下,源设备12和目的设备14可配备用于无线通信的组件。因此,源设备12和目的设备14可以是无线通信设备。
在一些情况下,图1A所示的视频译码系统10仅仅是示例性的,本申请提供的技术可适用于视频编码设置(例如,视频编码或视频解码),这些设置不一定包括编码设备与解码设备之间的任何数据通信。在其它示例中,数据从本地存储器中检索,通过网络发送,等等。视频编码设备可以对数据进行编码并将数据存储到存储器中,和/或视频解码设备可以从存储器中检索数据并对数据进行解码。在一些示例中,编码和解码由相互不通信而只是编码数据到存储器和/或从存储器中检索并解码数据的设备来执行。
图1B是根据一示例性实施例的包含图2的视频编码器20和/或图3的视频解码器30的视频译码系统40的实例的说明图。视频译码系统40可以包含成像设备41、视频编码器20、视频解码器30(和/或藉由处理电路46实施的视频编/解码器)、天线42、一个或多个处理器43、一个或多个内存存储器44和/或显示设备45。
如图1B所示,成像设备41、天线42、处理电路46、视频编码器20、视频解码器30、处理器43、内存存储器44和/或显示设备45能够互相通信。在不同实例中,视频译码系统40可以只包含视频编码器20或只包含视频解码器30。
在一些实例中,天线42可以用于传输或接收视频数据的经编码比特流。另外,在一些实例中,显示设备45可以用于呈现视频数据。处理电路46可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。视频译码系统40也可以包含可选的处理器43,该可选处理器43类似地可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。另外,内存存储器44可以是任何类型的存储器,例如易失性存储器(例如,静态随机存取存储器(static random access memory,SRAM)、动态随机存储器(dynamic random access memory,DRAM)等)或非易失性存储器(例如,闪存等)等。在非限制性实例中,内存存储器44可以由超速缓存内存实施。在其它实例中,处理电路46可以包含存储器(例如,缓存等)用于实施图像缓冲器等。
在一些实例中,通过逻辑电路实施的视频编码器20可以包含(例如,通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频编码器20,以实施参照图2和/或本文中所描述的任何其它编码器系统或子系统所论述的各种模块。逻辑电路可以用于执行本文所论述的各种操作。
在一些实例中,视频解码器30可以以类似方式通过处理电路46实施,以实施参照图3的视频解码器30和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。在一些实例中,逻辑电路实施的视频解码器30可以包含(通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频解码器30,以实施参照图3和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。
在一些实例中,天线42可以用于接收视频数据的经编码比特流。如所论述,经编码比特 流可以包含本文所论述的与编码视频帧相关的数据、指示符、索引值、模式选择数据等,例如与编码分割相关的数据(例如,变换系数或经量化变换系数,(如所论述的)可选指示符,和/或定义编码分割的数据)。视频译码系统40还可包含耦合至天线42并用于解码经编码比特流的视频解码器30。显示设备45用于呈现视频帧。
应理解,本申请实施例中对于参考视频编码器20所描述的实例,视频解码器30可以用于执行相反过程。关于信令语法元素,视频解码器30可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,视频编码器20可以将语法元素熵编码成经编码视频比特流。在此类实例中,视频解码器30可以解析这种语法元素,并相应地解码相关视频数据。
为便于描述,参考通用视频编码(Versatile video coding,VVC)参考软件或由ITU-T视频编码专家组(Video Coding Experts Group,VCEG)和ISO/IEC运动图像专家组(Motion Picture Experts Group,MPEG)的视频编码联合工作组(Joint Collaboration Team on Video Coding,JCT-VC)开发的高性能视频编码(High-Efficiency Video Coding,HEVC)描述本申请实施例。本领域普通技术人员理解本申请实施例不限于HEVC或VVC。
下面对编码器和编码方法以及解码器和解码方法进行介绍。
一、编码器和编码方法
图2为用于实现本申请技术的视频编码器20的示例的示意性框图。在图2的示例中,视频编码器20包括输入端(或输入接口)201、残差计算单元204、变换处理单元206、量化单元208、反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、模式选择单元260、熵编码单元270和输出端(或输出接口)272。模式选择单元260可包括帧间预测单元244、帧内预测单元254和分割单元262。帧间预测单元244可包括运动估计单元和运动补偿单元(未示出)。图2所示的视频编码器20也可称为混合型视频编码器或基于混合型视频编解码器的视频编码器。
参见图2,帧间预测模块/帧内预测模块/环路滤波模块包括(为)经过训练的目标模型(亦称为神经网络),该神经网络用于处理输入图像或图像区域或图像块,以生成输入图像块的预测值。例如,用于帧间预测/帧内预测/环路滤波的神经网络用于接收输入的图像或图像区域或图像块。
残差计算单元204、变换处理单元206、量化单元208和模式选择单元260组成编码器20的前向信号路径,而反量化单元210、逆变换处理单元212、重建单元214、缓冲器216、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、帧间预测单元244和帧内预测单元254组成编码器的后向信号路径,其中编码器20的后向信号路径对应于解码器的信号路径(参见图3中的解码器30)。反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器230、帧间预测单元244和帧内预测单元254还组成视频编码器20的“内置解码器”。
(1)图像和图像分割(图像和块)
编码器20可用于通过输入端201等接收图像(或图像数据)17,例如,形成视频或视频序列的图像序列中的图像。接收的图像或图像数据也可以是预处理后的图像(或预处理后的图像数据)19。为简单起见,以下描述使用图像17。图像17也可称为当前图像、原始图像或待编码的图像(尤其是在视频编码中将当前图像与其它图像区分开时,其它图像例如同一视频序列,即也包括当前图像的视频序列中的之前编码后图像和/或解码后图像)。
(数字)图像为或可以视为具有强度值的像素点组成的二维阵列或矩阵。阵列中的像素 点也可以称为像素(pixel或pel)(图像元素的简称)。阵列或图像在水平方向和垂直方向(或轴线)上的像素点数量决定了图像的大小和/或分辨率。为了表示颜色,通常采用三个颜色分量,即图像可以表示为或包括三个像素点阵列。在RBG格式或颜色空间中,图像包括对应的红色、绿色和蓝色像素点阵列。但是,在视频编码中,任一像素通常以亮度/色度格式或颜色空间表示,例如YCbCr,包括Y指示的亮度分量(有时也用L表示)以及Cb、Cr表示的两个色度分量。亮度(luma)分量Y表示亮度或灰度水平强度(例如,在灰度等级图像中两者相同),而两个色度(chrominance,简写为chroma)分量Cb和Cr表示色度或颜色信息分量。相应地,YCbCr格式的图像包括亮度像素点值(Y)的亮度像素点阵列和色度值(Cb和Cr)的两个色度像素点阵列。RGB格式的图像可以转换或变换为YCbCr格式,反之亦然,该过程也称为颜色变换或转换。如果图像是黑白的,则该图像可以只包括亮度像素点阵列。相应地,图像可以为例如单色格式的亮度像素点阵列或4:2:0、4:2:2和4:4:4彩色格式的亮度像素点阵列和两个相应的色度像素点阵列。
在一个实施例中,视频编码器20的实施例可包括图像分割单元(图2中未示出),用于将图像17分割成多个(通常不重叠)图像块203,图像块203为像素的集合,图像块203有时也称当前块203、原始块203或分割块203。例如,一张待编码的图像17首先被划分为不重叠的图像块203,并以给定的顺序(例如行扫描顺序)依次处理每一图像块203。这些图像块203在H.265/HEVC和VVC标准中也可以称为根块、宏块(H.264/AVC)或编码树块(Coding Tree Block,CTB),或编码树单元(Coding Tree Unit,CTU)。分割单元可用于对视频序列中的所有图像使用相同的块大小和使用限定块大小的对应网格,或在图像或图像子集或图像组之间改变块大小,并将任一图像分割成对应块。
在其它实施例中,视频编码器可用于直接接收图像17的图像块203,例如,组成所述图像17的一个、几个或所有块。图像块203也可以称为当前图像块或待编码图像块。
与图像17一样,图像块203同样是或可认为是具有强度值(像素点值)的像素点组成的二维阵列或矩阵,但是图像块203的比图像17的小。换句话说,图像块203可包括一个像素点阵列(例如,单色图像17情况下的亮度阵列或彩色图像情况下的亮度阵列或色度阵列)或三个像素点阵列(例如,彩色图像17情况下的一个亮度阵列和两个色度阵列)或根据所采用的颜色格式的任何其它数量和/或类型的阵列。图像块203的水平方向和垂直方向(或轴线)上的像素点数量限定了图像块203的大小。相应地,图像块203可以为M×N(M列×N行)个像素点阵列,或M×N个变换系数阵列等。例如,若图像块203大小为N×N,则表示该图像块203是一个二维像素阵列,其水平与竖直方向大小均为N。
在一个实施例中,图2所示的视频编码器20用于逐块对图像17进行编码,例如,对任一图像块203执行编码和预测。
在一个实施例中,图2所示的视频编码器20还可以用于使用片(也称为视频片)分割和/或编码图像,其中图像可以使用一个或多个片(通常为不重叠的)进行分割或编码。任一片可包括一个或多个块(例如,编码树单元CTU)或一个或多个块组(例如H.265/HEVC/VVC标准中的编码区块(tile)和VVC标准中的砖(brick)。
在一个实施例中,图2所示的视频编码器20还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或编码,其中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或编码,任一片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中任一编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。
(2)残差计算
残差计算单元204用于通过如下方式根据图像块203和预测块265来计算残差块205(后续详细介绍了预测块265):例如,逐个像素点(逐个像素)从图像块203的像素点值中减去预测块265的像素点值,得到像素域中的残差块205。其中,编码器20对图像块进行帧内预测或帧间预测,来获取其中像素的预测值,图像块中像素的预测值的集合称为该图像块的预测,也称为预测块。进一步计算图像块中像素的原始值与图像块中像素的预测值的差值,图像块中像素的原始值与图像块中像素的预测值的差值的集合称为图像块的残差,也称为残差块。
(3)变换
变换处理单元206用于对残差块205的像素点值执行离散余弦变换(discrete cosine transform,DCT)或离散正弦变换(discrete sine transform,DST)等,得到变换域中的变换系数207。变换系数207也可称为变换残差系数,表示变换域中的残差块205。
变换处理单元206可用于应用DCT/DST的整数化近似,例如为H.265/HEVC指定的变换。与正交DCT变换相比,这种整数化近似通常由某一因子按比例缩放。为了维持经过正变换和逆变换处理的残差块的范数,使用其它比例缩放因子作为变换过程的一部分。比例缩放因子通常是根据某些约束条件来选择的,例如比例缩放因子是用于移位运算的2的幂、变换系数的位深度、准确性与实施成本之间的权衡等。例如,在编码器20侧通过逆变换处理单元212为逆变换(以及在解码器30侧通过例如逆变换处理单元312为对应逆变换)指定具体的比例缩放因子,以及相应地,可以在编码器20侧通过变换处理单元206为正变换指定对应比例缩放因子。
在一个实施例中,视频编码器20(对应地,变换处理单元206)可用于输出一种或多种变换的类型等变换参数,例如,直接输出或由熵编码单元270进行编码或压缩后输出,例如使得视频解码器30可接收并使用变换参数进行解码。
(4)量化
量化单元208用于通过例如标量量化或矢量量化对变换系数207进行量化,得到量化变换系数209,也即量化后变换系数(quantized transform coefficients)209,简称为量化系数209。量化变换系数209也可称为量化残差系数209。
量化过程可减少与部分或全部变换系数207有关的位深度。例如,可在量化期间将n位变换系数向下舍入到m位变换系数,其中n大于m。可通过调整量化参数(quantization parameter,QP)修改量化程度。例如,对于标量量化,可以应用不同程度的比例来实现较细或较粗的量化。较小量化步长对应较细量化,而较大量化步长对应较粗量化。可通过量化参数(quantization parameter,QP)指示合适的量化步长。例如,量化参数可以为合适的量化步长的预定义集合的索引。例如,较小的量化参数可对应精细量化(较小量化步长),较大的量化参数可对应粗糙量化(较大量化步长),反之亦然。量化可包括除以量化步长,而反量化单元210等执行的对应或逆解量化可包括乘以量化步长。根据例如HEVC一些标准的实施例可用于使用量化参数来确定量化步长。一般而言,可以根据量化参数使用包含除法的等式的定点近似来计算量化步长。可以引入其它比例缩放因子来进行量化和解量化,以恢复可能由于在用于量化步长和量化参数的等式的定点近似中使用的比例而修改的残差块的范数。在一种示例性实现方式中,可以合并逆变换和解量化的比例。或者,可以使用自定义量化表并在比特流中等将其从编码器向解码器指示。量化是有损操作,其中量化步长越大,损耗越大。
在一个实施例中,视频编码器20(对应地,量化单元208)可用于输出量化参数 (quantization parameter,QP),例如,直接输出或由熵编码单元270进行编码或压缩后输出,例如使得视频解码器30可接收并使用量化参数进行解码。
(5)反量化
反量化单元210用于对量化系数209执行量化单元208的反量化,得到解量化系数211,例如,根据或使用与量化单元208相同的量化步长执行与量化单元208所执行的量化方案的反量化方案。解量化系数211也可称为解量化残差系数211或反量化系数211,对应于变换系数207,但是由于量化造成损耗,反量化系数211通常与变换系数207不完全相同。
(6)逆变换
逆变换处理单元212用于执行变换处理单元206执行的变换的逆变换,例如,逆离散余弦变换(discrete cosine transform,DCT)或逆离散正弦变换(discrete sine transform,DST),以在像素域中得到重建残差块213。重建残差块213也可称为变换块213。
(7)重建
重建单元214(例如,求和器214)用于将变换块213(即重建残差块213)添加到预测块265,以在像素域中得到重建块215,例如,将重建残差块213的像素点值和预测块265的像素点值相加。
(8)滤波
环路滤波器单元220(或简称“环路滤波器”220)用于对重建块215进行滤波,得到滤波块221,或通常用于对重建像素点进行滤波以得到滤波像素点值。例如,环路滤波器单元用于顺利进行像素转变或提高视频质量,经过环路滤波可去除块效应、振铃效应等编码失真。环路滤波器单元220可包括一个或多个环路滤波器,例如去块滤波器、像素点自适应偏移(sample-adaptive offset,SAO)滤波器或一个或多个其它滤波器,例如自适应环路滤波器(adaptive loop filter,ALF)、噪声抑制滤波器(noise suppression filter,NSF)或任意组合。例如,环路滤波器单元220可以包括去块滤波器、SAO滤波器和ALF滤波器。滤波过程的顺序可以是去块滤波器、SAO滤波器和ALF滤波器。再例如,增加一个称为具有色度缩放的亮度映射(luma mapping with chroma scaling,LMCS)(即自适应环内整形器)的过程。该过程在去块之前执行。再例如,去块滤波过程也可以应用于内部子块边缘,例如仿射子块边缘、ATMVP子块边缘、子块变换(sub-block transform,SBT)边缘和内子部分(intra sub-partition,ISP)边缘。尽管环路滤波器单元220在图2中示为环路滤波器,但在其它配置中,环路滤波器单元220可以实现为环路滤波器。滤波块221也可称为滤波重建块221。
在一个实施例中,视频编码器20(对应地,环路滤波器单元220)可用于输出环路滤波器参数(例如SAO滤波参数、ALF滤波参数或LMCS参数),例如,直接输出或由熵编码单元270进行熵编码后输出,例如使得解码器30可接收并使用相同或不同的环路滤波器参数进行解码。
(9)解码图像缓冲器
解码图像缓冲器(decoded picture buffer,DPB)230可以是存储参考图像(或参考图像数据)以供视频编码器20在编码视频数据时使用的参考图像存储器。DPB 230可以由多种存储器设备中的任一种形成,例如动态随机存取存储器(dynamic random access memory,DRAM),包括同步DRAM(synchronous DRAM,SDRAM)、磁阻RAM(magnetoresistive RAM,MRAM)、电阻RAM(resistive RAM,RRAM)或其它类型的存储设备。解码图像缓冲器230可用于存储一个或多个滤波块221。解码图像缓冲器230还可用于存储同一当前图像或例如之前的重建图像等不同图像的其它之前的滤波块,例如之前重建和滤波的块221,并可提供 完整的之前重建即解码图像(和对应参考块和像素点)和/或部分重建的当前图像(和对应参考块和像素点),例如用于帧间预测。解码图像缓冲器230还可用于存储一个或多个未经滤波的重建块215,或一般存储未经滤波的重建像素点,例如,未被环路滤波单元220滤波的重建块215,或未进行任何其它处理的重建块或重建像素点。
(10)模式选择(分割和预测)
模式选择单元260包括分割单元262、帧间预测单元244和帧内预测单元254,用于从解码图像缓冲器230或其它缓冲器(例如,列缓冲器,图中未显示)接收或获得原始块203(当前图像17的当前块203)和重建图像数据等原始图像数据,例如,同一(当前)图像和/或一个或多个之前解码图像的滤波和/或未经滤波的重建像素点或重建块。重建图像数据用作帧间预测或帧内预测等预测所需的参考图像数据,以得到预测块265或预测值265。
模式选择单元260可用于为当前块预测模式(包括不分割)和预测模式(例如帧内或帧间预测模式)确定或选择一种分割,生成对应的预测块265,以对残差块205进行计算和对重建块215进行重建。
在一个实施例中,模式选择单元260可用于选择分割和预测模式(例如,从模式选择单元260支持的或可用的预测模式中),所述预测模式提供最佳匹配或者说最小残差(最小残差是指传输或存储中更好的压缩),或者提供最小信令开销(最小信令开销是指传输或存储中更好的压缩),或者同时考虑或平衡以上两者。模式选择单元260可用于根据码率失真优化(rate distortion Optimization,RDO)确定分割和预测模式,即选择提供最小码率失真优化的预测模式。本文“最佳”、“最低”、“最优”等术语不一定指总体上“最佳”、“最低”、“最优”的,但也可以指满足终止或选择标准的情况,例如,超过或低于阈值的值或其他限制可能导致“次优选择”,但会降低复杂度和处理时间。
换言之,分割单元262可用于将视频序列中的图像分割为编码树单元(coding tree unit,CTU)203序列,CTU 203可进一步被分割成较小的块部分或子块(再次形成块),例如,通过迭代使用四叉树(quad-tree partitioning,QT)分割、二叉树(binary-tree partitioning,BT)分割或三叉树(triple-tree partitioning,TT)分割或其任意组合,并且用于例如对块部分或子块中的每一个执行预测,其中模式选择包括选择分割块203的树结构和选择应用于块部分或子块中的每一个的预测模式。
下文将详细地描述由视频编码器20执行的分割(例如,由分割单元262执行)和预测处理(例如,由帧间预测单元244和帧内预测单元254执行)。
(11)分割
分割单元262可将一个编码树单元203分割(或划分)为较小的部分,例如正方形或矩形形状的小块。对于具有三个像素点阵列的图像,一个CTU由N×N个亮度像素点块和两个对应的色度像素点块组成。CTU中亮度块的最大允许大小在正在开发的通用视频编码(Versatile Video Coding,VVC)标准中被指定为128×128,但是将来可指定为不同于128×128的值,例如256×256。图像的CTU可以集中/分组为片/编码区块组、编码区块或砖。一个编码区块覆盖着一个图像的矩形区域,一个编码区块可以分成一个或多个砖。一个砖由一个编码区块内的多个CTU行组成。没有分割为多个砖的编码区块可以称为砖。但是,砖是编码区块的真正子集,因此不称为编码区块。VVC支持两种编码区块组模式,分别为光栅扫描片/编码区块组模式和矩形片模式。在光栅扫描编码区块组模式,一个片/编码区块组包含一个图像的编码区块光栅扫描中的编码区块序列。在矩形片模式中,片包含一个图像的多个砖,这些砖共同组成图像的矩形区域。矩形片内的砖按照片的砖光栅扫描顺序排列。这些较小块(也 可称为子块)可进一步分割为更小的部分。这也称为树分割或分层树分割,其中在根树级别0(层次级别0、深度0)等的根块可以递归地分割为两个或两个以上下一个较低树级别的块,例如树级别1(层次级别1、深度1)的节点。这些块可以又分割为两个或两个以上下一个较低级别的块,例如树级别2(层次级别2、深度2)等,直到分割结束(因为满足结束标准,例如达到最大树深度或最小块大小)。未进一步分割的块也称为树的叶块或叶节点。分割为两个部分的树称为二叉树(binary-tree,BT),分割为三个部分的树称为三叉树(ternary-tree,TT),分割为四个部分的树称为四叉树(quad-tree,QT)。
例如,编码树单元(CTU)可以为或包括亮度像素点的CTB、具有三个像素点阵列的图像的色度像素点的两个对应CTB、或单色图像的像素点的CTB或使用三个独立颜色平面和语法结构(用于编码像素点)编码的图像的像素点的CTB。相应地,编码树块(CTB)可以为N×N个像素点块,其中N可以设为某个值使得分量划分为CTB,这就是分割。编码单元(coding unit,CU)可以为或包括亮度像素点的编码块、具有三个像素点阵列的图像的色度像素点的两个对应编码块、或单色图像的像素点的编码块或使用三个独立颜色平面和语法结构(用于编码像素点)编码的图像的像素点的编码块。相应地,编码块(CB)可以为M×N个像素点块,其中M和N可以设为某个值使得CTB划分为编码块,这就是分割。
例如,在实施例中,根据HEVC可通过使用表示为编码树的四叉树结构将编码树单元(CTU)划分为多个CU。在叶CU级作出是否使用帧间(时间)预测或帧内(空间)预测对图像区域进行编码的决定。任一叶CU可以根据PU划分类型进一步划分为一个、两个或四个PU。一个PU内使用相同的预测过程,并以PU为单位向解码器传输相关信息。在根据PU划分类型应用预测过程得到残差块之后,可以根据类似于用于CU的编码树的其它四叉树结构将叶CU分割为变换单元(TU)。
例如,在实施例中,根据当前正在开发的最新视频编码标准(称为通用视频编码(VVC),使用嵌套多类型树(例如二叉树和三叉树)的组合四叉树来划分用于分割编码树单元的分段结构。在编码树单元内的编码树结构中,CU可以为正方形或矩形。例如,编码树单元(CTU)首先由四叉树结构进行分割。四叉树叶节点进一步由多类型树结构分割。多类型树形结构有四种划分类型:垂直二叉树划分(SPLIT_BT_VER)、水平二叉树划分(SPLIT_BT_HOR)、垂直三叉树划分(SPLIT_TT_VER)和水平三叉树划分(SPLIT_TT_HOR)。多类型树叶节点称为编码单元(CU),除非CU对于最大变换长度而言太大,这样的分段用于预测和变换处理,无需其它任何分割。在大多数情况下,这表示CU、PU和TU在四叉树嵌套多类型树的编码块结构中的块大小相同。当最大支持变换长度小于CU的彩色分量的宽度或高度时,就会出现该异常。VVC制定了具有四叉树嵌套多类型树的编码结构中的分割划分信息的唯一信令机制。在信令机制中,编码树单元(CTU)作为四叉树的根首先被四叉树结构分割。然后任一四叉树叶节点(当足够大可以被)被进一步分割为一个多类型树结构。在多类型树结构中,通过第一标识(mtt_split_cu_flag)指示节点是否进一步分割,当对节点进一步分割时,先用第二标识(mtt_split_cu_vertical_flag)指示划分方向,再用第三标识(mtt_split_cu_binary_flag)指示划分是二叉树划分或三叉树划分。根据mtt_split_cu_vertical_flag和mtt_split_cu_binary_flag的值,解码器可以基于预定义规则或表格推导出CU的多类型树划分模式(MttSplitMode)。需要说明的是,对于某种设计,例如VVC硬件解码器中的64×64的亮度块和32×32的色度流水线设计,当亮度编码块的宽度或高度大于64时,不允许进行TT划分。当色度编码块的宽度或高度大于32时,也不允许TT划分。流水线设计将图像分为多个虚拟流水线数据单元(virtual pipeline data unit,VPDU),任一 VPDU在图像中定义为互不重叠的单元。在硬件解码器中,连续的VPDU在多个流水线阶段同时处理。在大多数流水线阶段,VPDU大小与缓冲器大小大致成正比,因此需要保持较小的VPDU。在大多数硬件解码器中,VPDU大小可以设置为最大变换块(transform block,TB)大小。但是,在VVC中,三叉树(TT)和二叉树(BT)的分割可能会增加VPDU的大小。
另外,需要说明的是,当树节点块的一部分超出底部或图像右边界时,强制对该树节点块进行划分,直到任一编码CU的所有像素点都位于图像边界内。
例如,所述帧内子分割(intra sub-partitions,ISP)工具可以根据块大小将亮度帧内预测块垂直或水平地分为两个或四个子部分。
在一个示例中,视频编码器20的模式选择单元260可以用于执行上文描述的分割技术的任意组合。
如上所述,视频编码器20用于从(预定的)预测模式集合中确定或选择最好或最优的预测模式。预测模式集合可包括例如帧内预测模式和/或帧间预测模式。
(12)帧内预测
帧内预测模式集合可包括35种不同的帧内预测模式,例如,像DC(或均值)模式和平面模式的非方向性模式,或如HEVC定义的方向性模式,或者可包括67种不同的帧内预测模式,例如,像DC(或均值)模式和平面模式的非方向性模式,或如VVC中定义的方向性模式。例如,若干传统角度帧内预测模式自适应地替换为VVC中定义的非正方形块的广角帧内预测模式。又例如,为了避免DC预测的除法运算,仅使用较长边来计算非正方形块的平均值。并且,平面模式的帧内预测结果还可以使用位置决定的帧内预测组合(position dependent intra prediction combination,PDPC)方法修改。
帧内预测单元254用于根据帧内预测模式集合中的帧内预测模式使用同一当前图像的相邻块的重建像素点来生成帧内预测块265。
帧内预测单元254(或通常为模式选择单元260)还用于输出帧内预测参数(或通常为指示块的选定帧内预测模式的信息)以语法元素266的形式发送到熵编码单元270,以包含到编码图像数据21中,从而视频解码器30可执行操作,例如接收并使用用于解码的预测参数。
(13)帧间预测
在可能的实现中,帧间预测模式集合取决于可用参考图像(即,例如前述存储在DBP 230中的至少部分之前解码的图像)和其它帧间预测参数,例如取决于是否使用整个参考图像或只使用参考图像的一部分,例如当前块的区域附近的搜索窗口区域,来搜索最佳匹配参考块,和/或例如取决于是否执行半像素、四分之一像素和/或16分之一内插的像素内插。
除上述预测模式外,还可以采用跳过模式和/或直接模式。
例如,扩展合并预测,这种模式的合并候选列表由以下五种候选类型按顺序组成:来自空间相邻CU的空间MVP、来自并置CU的时间MVP、来自FIFO表的基于历史的MVP、成对平均MVP和零MV。可以使用基于双边匹配的解码器侧运动矢量修正(decoder side motion vector refinement,DMVR)来增加合并模式的MV的准确度。带有MVD的合并模式(merge mode with MVD,MMVD)来自有运动矢量差异的合并模式。在发送跳过标志和合并标志之后立即发送MMVD标志,以指定CU是否使用MMVD模式。可以使用CU级自适应运动矢量分辨率(adaptive motion vector resolution,AMVR)方案。AMVR支持CU的MVD以不同的精度进行编码。根据当前CU的预测模式,自适应地选择当前CU的MVD。当CU以合并模式进行编码时,可以将合并的帧间/帧内预测(combined inter/intra prediction,CIIP)模式应用于当前CU。对帧间和帧内预测信号进行加权平均,得到CIIP预测。对于仿射运动 补偿预测,通过2个控制点(4参数)或3个控制点(6参数)运动矢量的运动信息来描述块的仿射运动场。基于子块的时间运动矢量预测(subblock-based temporal motion vector prediction,SbTMVP),与HEVC中的时间运动矢量预测(temporal motion vector prediction,TMVP)类似,但预测的是当前CU内的子CU的运动矢量。双向光流(bi-directional optical flow,BDOF)以前称为BIO,是一种减少计算的简化版本,特别是在乘法次数和乘数大小方面的计算。在三角形分割模式中,CU以对角线划分和反对角线划分两种划分方式被均匀划分为两个三角形部分。此外,双向预测模式在简单平均的基础上进行了扩展,以支持两个预测信号的加权平均。
帧间预测单元244可包括运动估计(motion estimation,ME)单元和运动补偿(motion compensation,MC)单元(两者在图2中未示出)。运动估计单元可用于接收或获取图像块203(当前图像17的当前图像块203)和解码图像231,或至少一个或多个之前重建块,例如,一个或多个其它/不同之前解码图像231的重建块,来进行运动估计。例如,视频序列可包括当前图像和之前的解码图像231,或换句话说,当前图像和之前的解码图像231可以为形成视频序列的图像序列的一部分或形成该图像序列。
例如,编码器20可用于从多个其它图像中的同一或不同图像的多个参考块中选择参考块,并将参考图像(或参考图像索引)和/或参考块的位置(x、y坐标)与当前块的位置之间的偏移(空间偏移)作为帧间预测参数提供给运动估计单元。该偏移也称为运动矢量(motion vector,MV)。
运动补偿单元用于获取,例如接收,帧间预测参数,并根据或使用该帧间预测参数执行帧间预测,得到帧间预测块246。由运动补偿单元执行的运动补偿可能包含根据通过运动估计确定的运动/块矢量来提取或生成预测块,还可能包括对子像素精度执行内插。内插滤波可从已知像素的像素点中产生其它像素的像素点,从而潜在地增加可用于对图像块进行编码的候选预测块的数量。一旦接收到当前图像块的PU对应的运动矢量时,运动补偿单元可在其中一个参考图像列表中定位运动矢量指向的预测块。
运动补偿单元还可以生成与块和视频片相关的语法元素,以供视频解码器30在解码视频片的图像块时使用。此外,或者作为片和相应语法元素的替代,可以生成或使用编码区块组和/或编码区块以及相应语法元素。
(14)熵编码
熵编码单元270用于将熵编码算法或方案(例如,可变长度编码(variable length coding,VLC)方案、上下文自适应VLC方案(context adaptive VLC,CALVC)、算术编码方案、二值化算法、上下文自适应二进制算术编码(context adaptive binary arithmetic coding,CABAC)、基于语法的上下文自适应二进制算术编码(syntax-based context-adaptive binary arithmetic coding,SBAC)、概率区间分割熵(probability interval partitioning entropy,PIPE)编码或其它熵编码方法或技术)应用于量化残差系数209、帧间预测参数、帧内预测参数、环路滤波器参数和/或其它语法元素,得到可以通过输出端272以编码比特流21等形式输出的编码图像数据21,使得视频解码器30等可以接收并使用用于解码的参数。可将编码比特流21传输到视频解码器30,或将其保存在存储器中稍后由视频解码器30传输或检索。
视频编码器20的其它结构变体可用于对视频流进行编码。例如,基于非变换的编码器20可以在某些块或帧没有变换处理单元206的情况下直接量化残差信号。在另一种实现方式中,编码器20可以具有组合成单个单元的量化单元208和反量化单元210。
二、解码器和解码方法
图3示出了用于实现本申请技术的示例性视频解码器30。视频解码器30用于接收例如由编码器20编码的编码图像数据21(例如编码比特流21),得到解码图像331。编码图像数据21或比特流包括用于解码所述编码图像数据21的信息,例如表示编码视频片(和/或编码区块组或编码区块)的图像块的数据和相关的语法元素。
在图3的示例中，解码器30包括熵解码单元304、反量化单元310、逆变换处理单元312、重建单元314(例如求和器314)、环路滤波器320、解码图像缓冲器(DPB)330、模式应用单元360、帧间预测单元344和帧内预测单元354。帧间预测单元344可以为或包括运动补偿单元。在一些示例中，视频解码器30可执行大体上与参照图2的视频编码器20描述的编码过程相反的解码过程。
参见图3,帧间预测模块/帧内预测模块/环路滤波模块包括(为)经过训练的目标模型(亦称为神经网络),该神经网络用于处理输入图像或图像区域或图像块,以生成输入图像块的预测值。例如,用于帧间预测/帧内预测/环路滤波的神经网络用于接收输入的图像或图像区域或图像块。
如编码器20所述，反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器230、帧间预测单元244和帧内预测单元254还组成视频编码器20的“内置解码器”。相应地，反量化单元310在功能上可与反量化单元210相同，逆变换处理单元312在功能上可与逆变换处理单元212相同，重建单元314在功能上可与重建单元214相同，环路滤波器320在功能上可与环路滤波器220相同，解码图像缓冲器330在功能上可与解码图像缓冲器230相同。因此，视频编码器20的相应单元和功能的解释相应地适用于视频解码器30的相应单元和功能。
(1)熵解码
熵解码单元304用于解析比特流21(或一般为编码图像数据21)并对编码图像数据21执行熵解码,得到量化系数309和/或解码后的编码参数(图3中未示出)等,例如帧间预测参数(例如参考图像索引和运动矢量)、帧内预测参数(例如帧内预测模式或索引)、变换参数、量化参数、环路滤波器参数和/或其它语法元素等中的任一个或全部。熵解码单元304可用于应用编码器20的熵编码单元270的编码方案对应的解码算法或方案。熵解码单元304还可用于向模式应用单元360提供帧间预测参数、帧内预测参数和/或其它语法元素,以及向解码器30的其它单元提供其它参数。视频解码器30可以接收视频片和/或视频块级的语法元素。此外,或者作为片和相应语法元素的替代,可以接收或使用编码区块组和/或编码区块以及相应语法元素。
(2)反量化
反量化单元310可用于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收量化参数(quantization parameter,QP)(或一般为与反量化相关的信息)和量化系数,并基于所述量化参数对所述解码的量化系数309进行反量化以获得反量化系数311,所述反量化系数311也可以称为变换系数311或解量化系数311。反量化过程可包括使用视频编码器20为视频片中的任一视频块计算的量化参数来确定量化程度,同样也确定需要执行的反量化的程度。
(3)逆变换
逆变换处理单元312可用于接收解量化系数311,也称为变换系数311,并对解量化系数311应用变换以得到像素域中的重建残差块313。重建残差块313也可称为变换块313。变换 可以为逆变换,例如逆DCT、逆DST、逆整数变换或概念上类似的逆变换过程。逆变换处理单元312还可以用于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收变换参数或相应信息,以确定应用于解量化系数311的变换。
(4)重建
重建单元314(例如,求和器314)用于将重建残差块313添加到预测块365,以在像素域中得到重建块315,例如,将重建残差块313的像素点值和预测块365的像素点值相加。
(5)滤波
环路滤波器单元320(在编码环路中或之后)用于对重建块315进行滤波，得到滤波块321，从而顺利进行像素转变或提高视频质量等。环路滤波器单元320可包括一个或多个环路滤波器，例如去块滤波器、像素点自适应偏移(sample-adaptive offset，SAO)滤波器或一个或多个其它滤波器，例如自适应环路滤波器(adaptive loop filter，ALF)、噪声抑制滤波器(noise suppression filter，NSF)或任意组合。例如，环路滤波器单元320可以包括去块滤波器、SAO滤波器和ALF滤波器。滤波过程的顺序可以是去块滤波器、SAO滤波器和ALF滤波器。再例如，增加一个称为具有色度缩放的亮度映射(luma mapping with chroma scaling，LMCS)(即自适应环内整形器)的过程。该过程在去块之前执行。再例如，去块滤波过程也可以应用于内部子块边缘，例如仿射子块边缘、ATMVP子块边缘、子块变换(sub-block transform，SBT)边缘和内子部分(intra sub-partition，ISP)边缘。尽管环路滤波器单元320在图3中示为环内滤波器，但在其它配置中，环路滤波器单元320可以实现为环后滤波器。
(6)解码图像缓冲器
随后将一个图像中的解码视频块321存储在解码图像缓冲器330中,解码图像缓冲器330存储作为参考图像的解码图像331,参考图像用于其它图像和/或分别输出显示的后续运动补偿。
解码器30用于通过输出端332等输出解码图像331，向用户显示或供用户查看。
(7)预测
帧间预测单元344在功能上可与帧间预测单元244(特别是运动补偿单元)相同,帧内预测单元354在功能上可与帧内预测单元254相同,并基于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收的分割和/或预测参数或相应信息决定划分或分割和执行预测。模式应用单元360可用于根据重建图像、块或相应的像素点(已滤波或未滤波)执行任一块的预测(帧内或帧间预测),得到预测块365。
当将视频片编码为帧内编码(intra coded,I)片时,模式应用单元360中的帧内预测单元354用于根据指示的帧内预测模式和来自当前图像的之前解码块的数据生成用于当前视频片的图像块的预测块365。当视频图像编码为帧间编码(即,B或P)片时,模式应用单元360中的帧间预测单元344(例如运动补偿单元)用于根据运动矢量和从熵解码单元304接收的其它语法元素生成用于当前视频片的视频块的预测块365。对于帧间预测,可从其中一个参考图像列表中的其中一个参考图像产生这些预测块。视频解码器30可以根据存储在解码图像缓冲器330中的参考图像,使用默认构建技术来构建参考帧列表0和列表1。除了片(例如视频片)或作为片的替代,相同或类似的过程可应用于编码区块组(例如视频编码区块组)和/或编码区块(例如视频编码区块)的实施例,例如视频可以使用I、P或B编码区块组和/或编码区块进行编码。
模式应用单元360用于通过解析运动矢量和其它语法元素,确定用于当前视频片的视频块的预测信息,并使用预测信息产生用于正在解码的当前视频块的预测块。例如,模式应用 单元360使用接收到的一些语法元素确定用于编码视频片的视频块的预测模式(例如帧内预测或帧间预测)、帧间预测片类型(例如B片、P片或GPB片)、用于片的一个或多个参考图像列表的构建信息、用于片的任一帧间编码视频块的运动矢量、用于片的任一帧间编码视频块的帧间预测状态、其它信息,以解码当前视频片内的视频块。除了片(例如视频片)或作为片的替代,相同或类似的过程可应用于编码区块组(例如视频编码区块组)和/或编码区块(例如视频编码区块)的实施例,例如视频可以使用I、P或B编码区块组和/或编码区块进行编码。
在一个实施例中，图3所示的视频解码器30还可以用于使用片(也称为视频片)分割和/或解码图像，其中图像可以使用一个或多个片(通常为不重叠的)进行分割或解码。任一片可包括一个或多个块(例如CTU)或一个或多个块组(例如H.265/HEVC/VVC标准中的编码区块和VVC标准中的砖)。
在一个实施例中,图3所示的视频解码器30还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或解码,其中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或解码,任一片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中任一编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。
视频解码器30的其它变型可用于对编码图像数据21进行解码。例如,解码器30可以在没有环路滤波器单元320的情况下产生输出视频流。例如,基于非变换的解码器30可以在某些块或帧没有逆变换处理单元312的情况下直接反量化残差信号。在另一种实现方式中,视频解码器30可以具有组合成单个单元的反量化单元310和逆变换处理单元312。
应理解,在编码器20和解码器30中,可以对当前步骤的处理结果进一步处理,然后输出到下一步骤。例如,在插值滤波、运动矢量推导或环路滤波之后,可以对插值滤波、运动矢量推导或环路滤波的处理结果进行进一步的运算,例如裁剪(clip)或移位(shift)运算。
应该注意的是,可以对当前块的推导运动矢量(包括但不限于仿射模式的控制点运动矢量、仿射、平面、ATMVP模式的子块运动矢量、时间运动矢量等)进行进一步运算。例如,根据运动矢量的表示位将运动矢量的值限制在预定义范围。如果运动矢量的表示位为bitDepth,则范围为-2^(bitDepth-1)至2^(bitDepth-1)-1,其中“^”表示幂次方。例如,如果bitDepth设置为16,则范围为-32768~32767;如果bitDepth设置为18,则范围为-131072~131071。例如,推导运动矢量的值(例如一个8×8块中的4个4×4子块的MV)被限制,使得所述4个4×4子块MV的整数部分之间的最大差值不超过N个像素,例如不超过1个像素。这里提供了两种根据bitDepth限制运动矢量的方法。
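下面给出一个最小的示意性代码草图(Python，仅用于说明，非任何标准的规定实现)，演示按bitDepth将运动矢量的值限制在上述预定义范围内的裁剪操作：
    # 示意：按bitDepth将运动矢量分量限制在[-2^(bitDepth-1), 2^(bitDepth-1)-1]
    def clip_mv(mv, bit_depth=16):
        lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
        return max(lo, min(hi, mv))

    assert clip_mv(40000, 16) == 32767      # bitDepth为16时范围为-32768~32767
    assert clip_mv(-140000, 18) == -131072  # bitDepth为18时范围为-131072~131071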
尽管上述实施例主要描述了视频编解码,但应注意的是,译码系统10、编码器20和解码器30的实施例以及本文描述的其它实施例也可以用于静止图像处理或编解码,即视频编解码中独立于任何先前或连续图像的单个图像的处理或编解码。一般情况下,如果图像处理仅限于单个图像17,帧间预测单元244(编码器)和帧间预测单元344(解码器)可能不可用。视频编码器20和视频解码器30的所有其它功能(也称为工具或技术)同样可用于静态图像处理,例如残差计算204/304、变换处理单元206、量化单元208、反量化单元210/310、逆变换处理单元212/312、分割单元262、帧内预测单元254/354、环路滤波器220/320、熵编码单元270和熵解码单元304等。
需要注意,编码器20与解码器30中各模块操作具有对应关系。例如,在编码器20与解码器30中,帧间预测单元244和帧间预测单元344,以及帧内预测单元254和帧内预测单元 354的操作完全相同;而熵编码单元270与熵解码单元304、变换处理单元206与逆变换处理单元212/312、量化单元208与反量化单元210/310等,均为成对的逆操作。因此,在规定了编码器20的预测、变换、量化、熵编码等操作后,解码器30的预测、反变换、反量化、熵解码等操作也随之确定。
图4为本申请实施例提供的视频译码设备400的示意图。视频译码设备400适用于实现本文描述的公开实施例。在一个实施例中,视频译码设备400可以是解码器,例如图1A中的视频解码器30,也可以是编码器,例如图1A中的视频编码器20。
视频译码设备400包括:用于接收数据的入端口410(或输入端口410)和接收单元(receiver unit,Rx)420;用于处理数据的处理器、逻辑单元或中央处理器(central processing unit,CPU)430;例如,这里的处理器430可以是神经网络处理器430;用于传输数据的发送单元(transmitter unit,Tx)440和出端口450(或输出端口450);用于存储数据的存储器460。视频译码设备400还可包括耦合到入端口410、接收单元420、发送单元440和出端口450的光电(optical-to-electrical,OE)组件和电光(electrical-to-optical,EO)组件,用于光信号或电信号的出口或入口。
处理器430通过硬件和软件实现。处理器430可实现为一个或多个处理器芯片、核(例如,多核处理器)、FPGA、ASIC和DSP。处理器430与入端口410、接收单元420、发送单元440、出端口450和存储器460通信。处理器430包括译码模块470(例如,基于神经网络的译码模块470)。译码模块470实施上文所公开的实施例。例如,译码模块470执行、处理、准备或提供各种编码操作。因此,通过译码模块470为视频译码设备400的功能提供了实质性的改进,并且影响了视频译码设备400到不同状态的切换。或者,以存储在存储器460中并由处理器430执行的指令来实现译码模块470。
存储器460包括一个或多个磁盘、磁带机和固态硬盘,可以用作溢出数据存储设备,用于在选择执行程序时存储此类程序,并且存储在程序执行过程中读取的指令和数据。存储器460可以是易失性和/或非易失性的,可以是只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、三态内容寻址存储器(ternary content-addressable memory,TCAM)和/或静态随机存取存储器(static random-access memory,SRAM)。
图5为示例性实施例提供的装置500的简化框图,装置500可用作图1A中的源设备12和目的设备14中的任一个或两个。
装置500中的处理器502可以是中央处理器。或者，处理器502可以是现有的或今后将研发出的能够操控或处理信息的任何其它类型设备或多个设备。虽然可以使用如图所示的处理器502等单个处理器来实施已公开的实现方式，但使用一个以上的处理器可以获得更快的速度和更高的效率。
在一种实现方式中,装置500中的存储器504可以是只读存储器(ROM)设备或随机存取存储器(RAM)设备。任何其它合适类型的存储设备都可以用作存储器504。存储器504可以包括处理器502通过总线512访问的代码和数据506。存储器504还可包括操作系统508和应用程序510,应用程序510包括允许处理器502执行本文所述方法的至少一个程序。例如,应用程序510可以包括应用1至N,还包括执行本文所述方法的视频译码应用。
装置500还可以包括一个或多个输出设备,例如显示器518。在一个示例中,显示器518可以是将显示器与可用于感测触摸输入的触敏元件组合的触敏显示器。显示器518可以通过 总线512耦合到处理器502。
虽然装置500中的总线512在本文中描述为单个总线,但是总线512可以包括多个总线。此外,辅助储存器可以直接耦合到装置500的其它组件或通过网络访问,并且可以包括存储卡等单个集成单元或多个存储卡等多个单元。因此,装置500可以具有各种各样的配置。
为了便于本领域技术人员理解本申请,下面对本申请实施例中的部分用语进行解释说明,以及对本申请实施例涉及的相关技术知识进行介绍。
一、术语解释
自编码器(Auto-Encoder,AE):一种特定的神经网络结构。
端到端(end to end,E2E):通常指一个学习任务的技术方案全部使用神经网络实现,因此可以通过梯度反传的方式同时优化该神经网络的所有参数。
编码图像(Coding picture):通常包含一张图像的三个矩阵,分别存储该图像的YUV或RGB三个色彩分量强度值的重建,还包含该图像的所有编码单元的块划分、编码模式、量化参数等编码信息。
解码图像(Decoded picture):通常包含一张图像的三个矩阵,分别存储该图像的YUV或RGB三个色彩分量强度值的重建。
编码单元(Coding Unit,CU):通常包含一个图像块的三个矩阵,分别存储该图像块的YUV或RGB三个色彩分量强度值的重建,还包含该图像块的块划分、编码模式、量化参数等编码信息。
重建图像:指的是经编码操作后包含编码失真的图像。
二、相关技术知识
视频压缩编码技术在多媒体服务、广播、视频通信和存储等领域都有广泛的应用。近年来,ITU-T和ISO/IEC两大标准组织在2003年、2013年和2020年先后联合制定并发布H.264/AVC、H.265/HEVC和H.266/VVC三项视频编解码标准。与此同时,AVS标准组也制定并输出了AVS1、AVS2、AVS3等一系列视频图像编解码标准。此外,AOM联盟也在2018年发布了AV1视频编解码方案。上述视频编解码技术均采用了基于块划分与变换量化的混合型编解码方案,并在具体的块划分、预测、变换、熵编码、环路滤波等模块做持续技术迭代,从而不断提升视频图像的压缩效率。
(1)相关技术知识一
近年来，学术界开始研究基于深度神经网络(Deep Neural Network，DNN)的图像编解码方案。图6展示了一种典型的基于深度神经网络的图像编解码方案，该方案采纳了自编码器网络结构。输入x是待编码的原始图像，可表达为一个w_x×h_x×c_x的阵列，w_x、h_x、c_x分别表示输入图像的宽、高、色彩分量数。分析器与合成器通常为深度卷积神经网络(Convolutional Neural Network，CNN)网络。分析器对输入图像x进行降维操作得到其潜层表达(Latent representation)y，潜层表达通常可表示为一个w_y×h_y×c_y阵列，w_y、h_y、c_y分别为潜层表达的宽、高、潜层通道数。分析器输出的潜层表达y中的每一个元素可以为一个浮点数或者为一个整型数，可以进行量化或普通的取整操作，来获得一个更加紧凑的整型化表达y'。接着对y'进行熵编码操作，获得压缩码流。熵解码为熵编码的逆操作，目的是从压缩码流解析获得y'。熵编码与熵解码采用相同的概率模型，概率模型的状态可以在编码器与解码器做同步更新，来保障编解码匹配。可选的，编码器可以将概率模型参数传输到解码器，来保障熵编码与熵解码使用相同的概率模型。合成器基于y'获取编码图像重建x'。其中，可对输入图像做块划分操作，并将每一个图像块送入图6所示的编码器做编码操作输出压缩码流，并从压缩码流中解码获得该图像块的重建。
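为便于理解上述数据流，下面给出一个极简的示意性草图(Python)；其中的线性映射A与S仅为假想的“分析器”与“合成器”占位，并非真实的深度卷积神经网络：
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((16, 64))   # 假想的“分析器”：64维降到16维
    S = rng.standard_normal((64, 16))   # 假想的“合成器”：16维升回64维
    x = rng.standard_normal(64)         # 待编码信号x
    y = A @ x                           # 潜层表达y
    y_int = np.round(y)                 # 取整得到更紧凑的整型化表达y'
    x_rec = S @ y_int                   # 合成器基于y'获得重建x'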
图2中的量化单元208用于通过应用标量量化或矢量量化来对变换处理单元206输出的变换系数207做量化操作,以获取变换系数207的量化等级(quantization level)209,其中,量化等级209就是对变换系数量化后的输出,也叫作量化后变换系数209,或量化系数209。
例如,对于标量量化,可以应用不同的量化步长(quantization step size,QS)来实现较细或较粗的量化。较小量化步长对应较细量化,较大量化步长对应较粗量化。可以通过量化参数(quantization parameter,QP)指示所使用的量化步长。例如,量化参数可以为合适的量化步长的预定义集合的索引。例如,较小的量化参数可以对应精细量化(较小量化步长),较大量化参数可以对应粗糙量化(较大量化步长),反之亦然。图2中的反量化单元210与图3中的反量化单元310执行相同的反量化操作,其输入均为量化系数,其输出为解量化系数。
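下面给出一个示意性草图(Python)，演示均匀标量量化与反量化的基本操作；其中qstep_from_qp按HEVC中“QP每增大6、量化步长约加倍”的关系推导量化步长，仅作说明，并非对所有编解码方案的规定：
    def qstep_from_qp(qp):
        return 2 ** ((qp - 4) / 6.0)  # HEVC中QP与量化步长的近似关系

    def quantize(coeff, qstep):
        return round(coeff / qstep)   # 输出即量化等级(quantization level)

    def dequantize(level, qstep):
        return level * qstep          # 解量化系数

    qs = qstep_from_qp(22)            # QP=22对应的量化步长为8
    level = quantize(37.5, qs)
    print(level, dequantize(level, qs))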
编码器20(或量化单元208)的实施例可以用于输出量化方案和量化步长,例如,直接输出或由熵编码单元270或任何其它熵编码单元熵编码后输出,使得解码器30可以接收并做对应的反量化操作。
量化单元208与反量化单元210/310中所使用的量化步长,或等价的量化参数必须相同。该量化参数由编码器20指定,并直接输出或由熵编码单元270或任何其它熵编码单元熵编码后输出。解码器30可接收编码器20输出的量化参数信息,并由熵解码单元304或任何其它熵解码单元熵解码后得到编码器20所指定的量化参数。下面以HEVC标准方案为例说明编码器如何将量化参数传递到解码器,如表1所示。
表1
表1原文为图片(PCTCN2021141403-appb-000001)，内容为PPS中与QP相关的语法元素，包括init_qp_minus26、cu_qp_delta_enabled_flag和diff_cu_qp_delta_depth等。
由表1可知,在图像参数集(Picture Parameter Set,PPS)中传递QP初始值,该QP初始值即为init_qp_minus26+26。此外,通过控制标志cu_qp_delta_enabled_flag约定是否可以为不同的CU指定不同的QP。如果该控制标志指示为false,则整张图像中所有CU使用相同的QP,因此,无法为图像中每一个CU指定不同的QP。如果该控制标志指示为true,则可以为图像中每一个CU指定不同的QP,并在具体编码一个CU时将该QP信息写入视频码流。
注意到,HEVC标准中CU可以具有不同的大小,从64×64到8×8。在极端情况下,一张编码图像全部划分为8×8大小的CU,此时需要为每一个8×8大小的CU来传输一个QP,这可能带来极大的QP编码开销,带来编码视频速率的显著增加。为避免这种极端情况,HEVC标准通过PPS中语法diff_cu_qp_delta_depth来指定量化组(Quantization Group,QG)大小。在编码树单元的大小为64×64情况下,两者之间映射关系如表2所示。
表2
diff_cu_qp_delta_depth取值 0 1 2 3
QG大小 64×64 32×32 16×16 8×8
QG是传输QP的基本传输单元。换言之,每一个QG只能传输最多一个QP。若CU尺寸小于QG尺寸,也即一个QG包含多个CU,则仅在第一个包含非零量化等级的CU中传输QP,而该QP将被用于该QG内所有CU的反量化操作。若CU尺寸大于或等于QG尺寸, 也即一个CU包含一个或多个QG,则根据该CU是否包含非零量化等级来判断是否传输该CU的QP信息。
PPS中传输的QP初始值应用于PPS作用范围内的所有编码图像。在具体处理每一个编码图像、条带、子图像、片时,还可进一步调节QP初始值,获得所处理编码图像、条带、子图像、片的QP基准值。例如,HEVC标准方案在条带头(Slice Header,SH)中传输语法slice_qp_delta,表示在PPS中传输的QP初始值上叠加一个差分值,从而获得该条带的QP基准值,如表3所示。
表3
slice_segment_header(){ Descriptor
slice_qp_delta se(v)
HEVC在处理每一个CU时，会判断其中的每一个变换单元(Transform Unit，TU)是否为该CU所在QG中的第一个包含非零量化等级的TU。如果是，则传输该CU的QP差分信息。具体的，该CU的QP差分信息包含QP差分绝对值cu_qp_delta_abs和CU差分符号cu_qp_delta_sign_flag，如表4所示；其中，该CU的QP差分值即为cu_qp_delta_abs×(1-2×cu_qp_delta_sign_flag)。因为一个CU仅传输最多一个QP差分信息，因此在一个CU包含多个TU的情况下，仅在处理第一个包含非零量化等级的TU时传输QP差分信息。
表4
表4原文为图片(PCTCN2021141403-appb-000002)，内容为CU的QP差分信息语法元素，包括cu_qp_delta_abs和cu_qp_delta_sign_flag。
即使在QG的约束下，为其中最多一个CU传输其QP值，QP值编码带来的开销仍然会显著降低视频压缩整体效率。因此，普遍地会对QP值进行预测编码。仍以HEVC标准为例，会根据左相邻QG与上相邻QG以及前一个编码QG的QP值来推导得到一个CU的QP预测值，也即使用当前QG邻域内已处理QG的QP值来生成当前QG的QP值的预测。编码器在根据内容复杂度以及码控策略确定一个CU的QP值后，仅需传输该CU的QP值与该CU的QP预测值的差分值；解码器在解码得到一个CU的QP差分值后，经过同样的预测操作获得QP预测值，与QP差分值叠加后即可获得该CU的QP值。
使用指定量化步长Qstep对指定信号做量化操作，可以理论分析获得量化失真的强度。例如，假定均匀分布信源，均匀标量量化的量化失真的均方误差为Qstep²/12。
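该结论可以用如下示意性实验(Python)加以验证：对均匀分布信源做均匀标量量化，统计得到的均方误差应接近Qstep²/12：
    import numpy as np

    rng = np.random.default_rng(0)
    qstep = 8.0
    x = rng.uniform(0, 1024, size=1_000_000)   # 均匀分布信源
    x_hat = np.round(x / qstep) * qstep        # 均匀标量量化再反量化
    print(np.mean((x - x_hat) ** 2), qstep ** 2 / 12)  # 两者应近似相等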
但是现有HEVC标准方案或其它任何混合型编解码方案中的QP机制,尽管也使用了基于Qstep的均匀标量量化,但却无法准确指示每一个图像块的实际失真情况,原因如下:
第一，如果一个图像块的残差块中任一采样点的残差的值的幅度小于量化步长Qstep，编码器会将该图像块的残差量化为全0。这种情况在P、B帧编码中较为常见，当参考图像质量高而当前编码图像块的Qstep设置较大时就会发生。此时，尽管解码端可以获得当前块的有效QP信息，但是该QP并不能反映该图像块的实际失真情况。
第二,如果一个图像块的残差量化为全0,即不会为该图像块传输任何残差的情况,此时解码器会跳过反量化操作。为避免传输无用的QP信息,编码器也不会将该图像块的QP信息传递到解码端。此时,解码端根本无法获得当前块的有效QP信息。
这是由当代视频编解码方案中QP机制的设计目的决定的。当前的QP机制的目的是为了在解码端做正确的反量化操作,而不是为了在解码端获取准确的失真强度信息。
(2)相关技术知识二
学术界还提出一种基于学习的深度图像编解码方案，在该基于学习的深度图像编解码方案中，待编码图像经编码器(Encoder)子网络获得输入图像的潜层表达，再经量化(Quantization)模块处理后获得多个量化后的编码块(quantized codes)。另一方面，编码端根据输入图像计算得到重要性图(importance map)。编码端使用重要性图对多个量化后的编码块进行裁剪操作，并将裁剪后的量化后的编码块与重要性图一起做熵编码并传输到解码端。该方案中，重要性图可以用来控制需要传输的量化后的编码块的数量，从而实现码率控制的功能。因此，重要性图在实质上相当于量化参数(QP)，根据图像的内容做区域级的比特分配。
该方案中重要性图的作用是告诉解码端码流中包含哪些编码块,指导解码器解码获得这些编码块,并将码流中未包含的编码块置0,从而获得完整的潜层表达,输入解码器(Decoder)子网络做后续的解码操作。
在当前主流的基于深度神经网络的图像压缩编码方案中,都先将待编码图像输入一个分析器子网络(例如Encoder子网络)提取输入图像的潜层表达。一方面,由于潜层表达是输入图像的一种降维表示,故已经引入了信息损失,也即潜层表达中已经引入了图像信号的失真,而该失真并不能由上述方案的重要性图来表示。另一方面,重要性图仅用于确定哪些编码块不需要编码传输,并不能表示丢弃这些编码块后带来多大程度的信号失真。因此,上述重要性图不能指示编码图像中各个区域的失真强度信息。
综上可知，现有的各类基于深度神经网络的视频图像编解码方案，使用训练得到的模型做视频或图像的编解码操作，这类方法通常都会针对人眼感知做主观视觉质量优化，本质上是在不同的编码图像之间以及一张图像内的不同区域间做灵活的比特分配。一方面，这些编解码方案并没有向解码端传输各个编码图像或一张编码图像内各个区域的信号质量(或失真强度)。另一方面，在很多情况下，编码图像的真实失真强度情况无法通过人眼观察判断获得；例如，在一些基于生成对抗网络(Generative Adversarial Network，GAN)方法训练得到编解码网络处理获得的编码图像中，会包含人眼无法分辨真伪的虚假纹理信息。因此，现有的各类基于深度神经网络的编解码方案都不能够在解码端获得当前编码图像的失真强度信息，具体地，不能够在解码端获得一张图像中各个区域的失真强度信息以及一张图像的总体失真强度信息。而编码图像的失真强度信息可用于辅助判断图像中某个区域的内容是否可信，故对于视频监控等应用非常重要。因此，如果要在实际产品或服务中使用基于深度神经网络的视频图像编解码方案，需要将这一信息告知解码端。需要注意的是，即使使用传统的基于混合型的编解码方案，也无法通过所传输的量化参数来获取当前编码图像的准确的失真强度信息，具体原因已在前面解释。因此，针对使用传统混合型编解码方案的一些视频产品或服务，也需要通过某种方式使得解码端可以获得编码图像的准确的失真强度信息。
鉴于上述介绍中存在的技术问题，本申请实施例提供了一种编码、解码方法和相关设备。在本申请实施例中，无论使用何种视频图像编解码方案，编码端可对比原始图像和重建图像，计算获得重建图像的保真度信息(包括重建图像中各图像区域的保真度信息)，并将保真度信息携带在压缩码流中以告知解码端；其中，重建图像为原始图像的重建图像，也即编码输出图像。在计算获得重建图像的保真度时，可以采用MSE(Mean Squared Error)、SAD(Sum of Absolute Difference)、SSIM(Structural Similarity)等任何一种有参考的质量评价方法，来计算获得重建图像相对于原始图像的失真强度数值，或者获得重建图像内容中是否有合成得到虚假图像内容的指示标识。保真度可以在任意粒度上计算得到，例如整张图像，或者一张图像中的任一M×N大小的图像块，等等。保真度可以为任一色彩分量分别计算得到，例如为一张图像的RGB或YUV这三个色彩分量分别计算三个保真度，也可以融合三个色彩分量的失真强度来获得一个保真度。即使是针对传统的混合型视频图像编解码方案，本申请实施例也可以同时使用解码端已经获得的QP信息以及当前图像块在帧间预测操作中所参考的图像区域的保真度，来推导获得当前图像块的保真度。
下面结合具体实施方式对本申请提供的技术方案进行详细的介绍。
图7是示出根据本申请一种实施例的编码方法的过程700的流程图。过程700可由编码设备执行,例如可由视频编码器20执行。过程700描述为一系列的步骤或操作,应当理解的是,过程700可以以各种顺序执行和/或同时发生,不限于图7所示的执行顺序。过程700包括但不限于如下步骤或操作:
701、对原始图像进行编码以得到第一码流。
应理解，原始图像也即图像17，故原始图像是一张编码图像；第一码流也就是编码图像数据21。
702、对保真度图进行编码以得到第二码流,其中,所述保真度图用于表示所述原始图像的至少部分区域与重建图像的至少部分区域之间的失真,所述重建图像是对所述第一码流进行解码后得到的。
应理解,重建图像也即解码图像231,故重建图像是一张解码图像。由于第一码流是由原始图像编码得到的,故第一码流解码得到的重建图像是原始图像的重建图像,且重建图像与原始图像有相同的图像尺寸。
其中,保真度图可以根据原始图像和重建图像计算得到,或者保真度图可以根据原始图像的预设区域和重建图像的预设区域计算得到。当保真度图是根据原始图像和重建图像计算得到时,保真度图用于表征整张重建图像的保真度;当保真度图是根据原始图像的预设区域和重建图像的预设区域计算得到时,保真度图用于表征重建图像的预设区域的保真度。原始图像的预设区域也即原始图像中某个区域图像,其可以为一图像块;重建图像的预设区域也即重建图像中某个区域图像,其同样可以为一图像块。
为了使得解码端能够得到编码图像的失真强度信息,需要将第一码流和第二码流从编码端传输到解码端,第一码流和第二码流可以合并后从编码端传输到解码端,第一码流和第二码流也可以分开单独从编码端传输到解码端。此外,如果保真度图是根据原始图像的预设区域和重建图像的预设区域计算得到,需要将该预设区域在原始图像中的位置和/或该预设区域在重建图像中的位置从编码端传输到解码端,其中,预设区域为矩形时,预设区域的位置可以用该预设区域的坐标来表示,预设区域的坐标通常表示为该预设区域的左上角亮度像素坐标;且该预设区域在原始图像中的位置和/或该预设区域在重建图像中的位置可以和第一码流、第二码流中的至少一个合并后,从编码端传输到解码端;该预设区域在原始图像中的位 置和/或该预设区域在重建图像中的位置也可以不和第一码流、第二码流中的至少一个合并,单独从编码端传输到解码端。本申请实施例描述的编码端和解码端可以是不同的电子设备,也可以是同一电子设备的不同硬件单元,本申请对此不作具体限定。
应理解，由于第二码流是由保真度图编码得到的，故第二码流经解码后可以得到保真度图的重建图。而根据编码方式的不同，解码得到的保真度图的重建图可以和保真度图相同，也可以和保真度图不同。具体地，若编码为无损编码，则保真度图的重建图和保真度图相同；若编码为有损编码，则保真度图的重建图包括对保真度图进行编码而生成的编码失真。
在本申请实施例中,对原始图像进行编码以得到第一码流,对保真度图进行编码以得到第二码流,其中,保真度图用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;失真包括差异;解码端对第一码流进行解码可以得到原始图像的重建图像,解码端对第二码流进行解码后得到保真度图的重建图;而若编码为无损编码,则保真度图的重建图和保真度图相同;若编码为有损编码,则保真度图的重建图包括对保真度图进行编码而生成的编码失真;故保真度图的重建图可以用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
在一种可能的设计中,所述方法还包括:将所述原始图像划分为多个第一图像块,以及将所述重建图像划分为多个第二图像块,其中,划分所述原始图像的划分策略与划分所述重建图像的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;或将所述原始图像的预设区域划分为多个第一图像块,以及将所述重建图像的预设区域划分为多个第二图像块,其中,划分所述原始图像的预设区域的划分策略与划分所述重建图像的预设区域的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;根据所述多个第二图像块中的任一第二图像块与所述任一第二图像块对应的第一图像块计算得到所述任一第二图像块的保真度值,所述保真度图包括所述任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的第一图像块之间的失真。
其中,所述多个第二图像块中的任一第二图像块在所述重建图像中的位置与所述任一第二图像块对应的第一图像块在所述原始图像中的位置相同;所述原始图像的预设区域在所述原始图像中的位置与所述重建图像的预设区域在所述重建图像中的位置相同,所述多个第二图像块中的任一第二图像块在所述重建图像的预设区域中的位置与所述任一第二图像块对应的第一图像块在所述原始图像的预设区域中的位置相同。
其中,划分原始图像的划分策略与划分重建图像的划分策略相同,以及划分原始图像的预设区域的划分策略与划分重建图像的预设区域的划分策略相同,是指划分得到的多个第一图像块中的任一第一图像块的尺寸与其对应的第二图像块的尺寸相同,以及划分得到的多个第一图像块中的任一第一图像块在原始图像中的位置与其对应的第二图像块在重建图像中的位置相同;图像块为矩形时,图像块的位置可以用该图像块的坐标来表示,图像块的坐标通常表示为该图像块的左上角亮度像素坐标。在对原始图像或重建图像进行划分时,可以按照基本单元进行划分,也即按照任一图像块等大小进行划分,从而划分得到的任一第一图像块的尺寸相同,划分得到的任一第二图像块的尺寸也相同,且第一图像块的尺寸与第二图像块的尺寸就是基本单元的尺寸。应理解,基本单元的尺寸可以根据原始图像或重建图像的尺寸大小或者需要划分得到的第一图像块或第二图像块的数量确定,基本单元最小可以为1×1像素大小,基本单元最大可以为原始图像或重建图像的尺寸大小。
其中，保真度可以为任一色彩分量分别计算得到，例如为一张图像的RGB或YUV这三个色彩分量分别计算三个保真度，也可以融合三个色彩分量的失真强度来获得一个保真度。当任一色彩分量计算一个保真度时，保真度图为三维阵列；当融合三个色彩分量计算一个保真度时，保真度图为二维阵列。因此，为一张重建图像或重建图像的预设区域计算得到的保真度图是一个(W/M)×(H/N)的二维阵列，或一个(W/M)×(H/N)×C的三维阵列，其中，W与H表示原始图像或重建图像的宽和高(当针对预设区域计算保真度图时，W与H表示该预设区域的宽和高)，M和N表示保真度计算所使用的基本单元的宽和高，C表示原始图像或重建图像的色彩分量数。不失一般性，为简化描述，以下均假设C为1来描述本申请实施例具体的实现方式。
具体地，可以设定基本单元的尺寸大小为M×N，依据该基本单元对原始图像和重建图像进行划分，可以将原始图像划分成R行S列，一共R×S个第一图像块，其中，R=(H/N)，S=(W/M)；以及将重建图像划分成R行S列，一共R×S个第二图像块；且划分得到的第一图像块和第二图像块的尺寸大小为M×N。例如，将原始图像划分成4行7列，一共4×7=28个第一图像块，如图8所示；以及将重建图像划分成4行7列，一共4×7=28个第二图像块，如图9所示；且划分得到的第一图像块和第二图像块的尺寸大小为M×N。其中，对原始图像的预设区域和重建图像的预设区域进行划分也是同样的方法，可以将原始图像的预设区域划分成R行S列，一共R×S个第一图像块；以及将重建图像的预设区域划分成R行S列，一共R×S个第二图像块；且划分得到的第一图像块和第二图像块的尺寸大小为M×N。
其中,在划分得到R×S个第一图像块和R×S个第二图像块之后,可以计算任一第二图像块相对于与其对应的第一图像块的保真度,也即计算重建图像中第i行第j列的第二图像块相对于原始图像中第i行第j列的第一图像块的保真度,其中,1≤i≤R,1≤j≤S,从而可以得到R×S个第二图像块中的任一第二图像块与其对应的第一图像块的保真度,也即得到R×S个第二图像块的保真度值;再根据这R×S个第二图像块的保真度值即可得到保真度图。
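作为一个最小的示意性实现(Python)，下面的函数以M×N基本单元为粒度、以MSE为质量评价方法计算单分量的保真度图；函数名与参数均为说明用途的假设性命名，并假设图像的高与宽可被N与M整除：
    import numpy as np

    def fidelity_map_mse(orig, recon, m, n):
        # orig与recon为H×W的二维数组，m、n为基本单元的宽和高
        h, w = orig.shape
        r, s = h // n, w // m                  # 划分为R行S列个基本单元
        fmap = np.empty((r, s), dtype=np.float64)
        for i in range(r):
            for j in range(s):
                blk_o = orig[i * n:(i + 1) * n, j * m:(j + 1) * m].astype(np.float64)
                blk_r = recon[i * n:(i + 1) * n, j * m:(j + 1) * m].astype(np.float64)
                fmap[i, j] = np.mean((blk_o - blk_r) ** 2)  # 该基本单元的保真度值
        return fmap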
在一种示例中,所述解码由解码器或解码设备执行;所述解码器或解码设备存储有所述第一图像块和/或所述第二图像块的尺寸;或,所述解码器或解码设备存储有所述第一图像块的数量和/或所述第二图像块的数量;或,所述第一图像块和/或所述第二图像块的尺寸为所述解码的输入;或,所述第一图像块的数量和/或所述第二图像块的数量为所述解码的输入;或,所述解码器或解码设备存储有所述基本单元的尺寸和/或所述基本单元的数量;或,所述基本单元的尺寸和/或所述基本单元的数量为所述解码的输入。
具体地,解码端存储有M和N的值或R和S的值,或M和N的值或R和S的值为解码的输入;如此可以确保解码端解码得到重建图像和保真度图的重建图后,知晓保真度图的重建图中的任一元素的值表征重建图像中哪个图像块的保真度。当M和N的值或R和S的值为解码的输入时,其可以与第一码流、第二码流的至少一个合并后传输到解码端,其也可以不与第一码流、第二码流的至少一个合并,而单独传输到解码端。
需要说明的是，图8对原始图像进行划分以及图9对重建图像进行划分是采用均匀划分的方式，得到的任一第一图像块或第二图像块的尺寸是相同的。可以理解的是，也可以采用非均匀划分的方式对原始图像或重建图像进行划分，此时划分得到的各第一图像块或第二图像块的尺寸可以互不相同；采用非均匀划分时，解码端需要知道任一第一图像块和/或所述第二图像块的尺寸、位置。
在本申请实施例中,原始图像的尺寸和重建图像的尺寸是一样的,预设区域在原始图像中的尺寸、位置和在重建图像中的尺寸、位置是一样的;根据相同的划分策略将原始图像划分为多个第一图像块,以及将重建图像划分为多个第二图像块;或者根据相同的划分策略将 原始图像的预设区域划分为多个第一图像块,以及将重建图像的预设区域划分为多个第二图像块;划分得到的多个第一图像块和划分得到的第二图像块是存在一一对应的关系的,其中,任一第一图像块的尺寸相同,任一第二图像块的尺寸也相同,且第一图像块与第二图像块的尺寸也相同;故可以将第一图像块和第二图像块作为保真度计算的基本单元,也即根据这多个第二图像块中的任一第二图像块与其对应的第一图像块可以计算得到任一第二图像块的保真度值,而这多个第二图像块的保真度值也即重建图像的各个区域的保真度值,根据多个第二图像块的保真度值可以得到保真度图;其中,当第一图像块是原始图像划分得到、第二图像块是重建图像划分得到时,保真度图用于表征重建图像的保真度;当第一图像块是原始图像的预设区域划分得到、第二图像块是重建图像的预设区域划分得到时,保真度图用于表征重建图像的预设区域的保真度;从而有利于得到用于表征编码图像的失真强度信息的保真度图。
在一种可能的设计中,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。其中,第一元素也可以称为保真图的像素点。
保真度图的构建过程为:根据所述多个第二图像块确定多个第一元素,其中,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与其对应的第二图像块的保真度值;根据所述多个第一元素得到所述保真度图,其中,所述多个第一元素中的任一第一元素在所述保真度图中的位置根据与其对应的第二图像块在所述重建图像中的位置确定,或所述多个第一元素中的任一第一元素在所述保真度图中的位置根据与其对应的第二图像块在所述重建图像的预设区域中的位置确定。
其中,在计算保真度图时,是将图像划分得到基本单元进行保真度计算的,故有多少个基本单元,保真度图中就有多少个第一元素;也即有多少个第一图像块或第二图像块,保真度图中就有多少个第一元素;其中,任一第一元素有两个属性,分别为保真度值与该保真度值在保真度图中的位置。从而保真度图中任一第一元素用于表征与其对应的第二图像块的保真度值,保真度图中任一第一元素的值为与其对应的第二图像块的保真度值。
具体地,重建图像或重建图像的预设区域划分为R×S个第二图像块,则保真度图为一个R行S列的二维阵列,一共有R×S个第一元素,这R×S个第二图像块与R×S个第一元素是一一对应的,这R×S个第一元素中的任一第一元素的值为与其对应的第二图像块的保真度值。若是计算整张重建图像的保真度图,则这R×S个第一元素中的任一第一元素在保真度图中的位置与其对应的第二图像块在重建图像中的位置相同;若是计算重建图像的预设区域的保真度图,则这R×S个第一元素中的任一第一元素在保真度图中的位置与其对应的第二图像块在重建图像的预设区域中的位置相同。也即保真度图中第i行第j列的第一元素与重建图像或重建图像的预设区域中第i行第j列的第二图像块对应,保真度图中第i行第j列的第一元素的值为重建图像或重建图像的预设区域中第i行第j列的第二图像块的保真度值,其中,1≤i≤R,1≤j≤S。例如,如图10所示,图10是图9所示的重建图像的保真度图,图10中的格子表示一个第一元素,任一格子内的数字表示该第一元素的值,也即该第一元素对应的第二图像块的保真度值;图9中,重建图像划分成4行7列,一共有4×7=28个第二图像块;图10中,保真度图为一个4行7列的二维阵列,一共有4×7=28个第一元素;图10中第i行第j列的 第一元素与图9中第i行第j列的第二图像块对应,图10中第i行第j列的第一元素的值为图9中第i行第j列的第二图像块的保真度值,其中,1≤i≤4,1≤j≤7。
在本申请实施例中,保真度图为二维阵列,将重建图像划分得到多个第二图像块,可以根据多个第二图像块的保真度值得到保真度图,也即根据多个第二图像块可以确定多个第一元素,多个第二图像块与多个第一元素一一对应,多个第一元素中的任一第一元素的值为与其对应的第二图像块的保真度值;且多个第一元素中的任一第一元素在保真度图中的位置根据与其对应的第二图像块在重建图像中的位置确定,具体地,多个第一元素中的任一第一元素在保真度图中的位置与其对应的第二图像块在重建图像或在重建图像的预设区域中的位置相同,从而保真度图各位置上的元素表征的是重建图像或重建图像的预设区域中与其位置对应的区域的保真度,因此有利于保真度图用于表征编码图像的失真强度信息。
在一种可能的设计中，所述第二图像块包括三个色彩分量，所述保真度图为包括色彩分量、宽和高三个维度的三维阵列，所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素，所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值，所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定，或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。其中，当保真度图为包括色彩分量、宽和高三个维度的三维阵列时，高表示任一色彩分量A下的二维阵列所包括的第一元素的行数，宽表示该二维阵列所包括的第一元素的列数，第一元素的数量等于宽和高的乘积，色彩分量A为所述三个色彩分量中的任意一个。
在本申请实施例中,原始图像或重建图像包括三个色彩分量,在计算保真度图时,任一色彩分量下都计算一张二维阵列的保真度图,三个色彩分量下的二维阵列构成三维阵列的保真度图,三维阵列的保真度图中的任一色彩分量A下的二维阵列中的第一元素表征的是任一色彩分量A下,重建图像或重建图像的预设区域中与其位置对应的区域的保真度,因此有利于三维阵列的保真度图可以表征编码图像的三个色彩分量的失真强度信息。
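在上述三维阵列表示下，可对三个色彩分量分别复用前述单分量计算，示意如下(Python，沿用上文假设的fidelity_map_mse函数)：
    import numpy as np

    def fidelity_map_3d(orig, recon, m, n):
        # 假设orig与recon的形状为(C, H, W)，C为色彩分量数
        return np.stack([fidelity_map_mse(orig[c], recon[c], m, n)
                         for c in range(orig.shape[0])])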
在一种可能的设计中,所述对保真度图进行编码以得到第二码流,包括:对所述任一第一元素进行熵编码以得到所述第二码流,对所述任一第一元素的熵编码独立于其他第一元素的熵编码;或者,根据所述已编码的第一元素中的至少一个第一元素的值确定所述任一第一元素的值的概率分布或所述任一第一元素的预测值,并根据所述任一第一元素的值的概率分布或所述任一第一元素的预测值对所述任一第一元素进行熵编码,以得到所述第二码流;其中,所述第二码流包括所述多个第一元素的码流。
具体地,对于保真度图中的任一第一元素的熵编码过程,若不存在已编码的第一元素,则直接对该任一第一元素进行熵编码以得到该任一第一元素的码流;若存在已编码的第一元素,则根据所述已编码的第一元素中的至少一个第一元素的值确定所述任一第一元素的值的概率分布或所述任一第一元素的预测值,并根据所述任一第一元素的值的概率分布或所述任一第一元素的预测值对所述任一第一元素进行熵编码,以得到所述任一第一元素的码流;其中,所述第二码流包括所述多个第一元素的码流。
其中，任意一个第一元素在熵编码过程中，可以利用已编码的第一元素的值来确定该第一元素的值的概率分布，而第一元素的值也即第二图像块的保真度值，也即利用已编码的保真度值来确定当前编码的保真度值的概率分布，以此来辅助提升熵编码效率。其中，对符号进行算数编码的输入为符号概率分布。其中，只要是已编码的保真度值都可以被用来确定当前编码的保真度值的概率分布。例如，可根据保真度图中当前编码的保真度值的左侧、上方、左上方等位置的已编码的保真度值来确定当前编码的保真度值的概率分布，来辅助提升熵编码效率。例如，可以根据所确定的当前编码的保真度值的概率分布来选择不同的哈夫曼码表，或者确定算数编码的子区间划分方式。
或者,任意一个第一元素在熵编码过程中,可以利用已编码的第一元素的值来确定该第一元素的预测值,也即利用已编码的保真度值来确定当前编码的保真度的预测值,再对该第一元素的值与该第一元素的预测值的差值进行熵编码,从而得到该第一元素的码流。
在本申请实施例中,对保真度图进行编码以得到第二码流,也即对保真度图中的任一第一元素中进行编码以得到任一第一元素的码流,第二码流包括保真度图中的任一第一元素的码流;在熵编码过程中,可以利用已编码的第一元素的值来确定当前编码的第一元素的值的概率分布,例如利用当前编码的第一元素的左侧、上方、左上方等相邻的第一元素的值来确定当前编码的第一元素的值的概率分布或所述当前编码的第一元素的预测值,然后根据当前编码的第一元素的值的概率分布或所述当前编码的第一元素的预测值对当前编码的第一元素进行编码,以此来辅助提升熵编码效率。
应理解,上述熵编码只是示例性给出一种熵编码方法;本申请在熵编码过程中,可以采用已有的各种熵编码技术,比如哈夫曼编码、算数编码、上下文建模算数编码(此处描述了一种上下文建模方法来辅助算数编码)、二进制算数编码等;本申请对此不作具体限定。
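下面给出一个仅演示预测步骤的示意性草图(Python)：用左、上相邻的已编码保真度值生成当前第一元素的预测值，实际对差值的熵编码可采用上述任一现有技术：
    def predict_element(fmap_int, i, j):
        # 用左侧与上方已编码的保真度值生成预测；边界处退化为单邻域或0
        left = fmap_int[i][j - 1] if j > 0 else None
        top = fmap_int[i - 1][j] if i > 0 else None
        if left is not None and top is not None:
            return (left + top) // 2
        return left if left is not None else (top if top is not None else 0)

    def residuals_for_coding(fmap_int):
        # 返回逐元素的预测差值，供后续熵编码
        return [[fmap_int[i][j] - predict_element(fmap_int, i, j)
                 for j in range(len(fmap_int[0]))]
                for i in range(len(fmap_int))]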
在一种可能的设计中,所述对保真度图进行编码以得到第二码流,包括:对所述任一第一元素进行量化,以得到量化后的第一元素;对所述量化后的第一元素进行编码,以得到所述第二码流;其中,所述第二码流包括所述多个第一元素的码流。
其中,对多个第一元素中的任一第一元素进行量化的量化步长可以相同,也可以不同。
在一示例中,所述解码由解码器或解码设备执行;所述解码器或所述解码设备存储有对所述多个第一元素中的任一第一元素进行量化的量化步长;或对所述多个第一元素中的任一第一元素进行量化的量化步长为所述解码的输入。如此,以便解码端能够恢复得到原始数量等级的保真度数值。
在本申请实施例中,对保真度图进行编码以得到第二码流,也即对保真度图中的任一第一元素进行编码以得到保真度图中的任一第一元素的码流,第二码流包括保真度图中的任一第一元素的码流;在对保真度图中的任一第一元素进行编码过程中,可以对任一第一元素进行量化,然后采用量化后的任一第一元素进行编码,以得到任一第一元素的码流;对任一第一元素进行量化也即对任一第一元素的值进行量化,或者说对保真度图中的任一保真度值进行缩放;量化的目的是缩小保真度图中的保真度值的动态范围,以减小保真度图的编码开销。
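对第一元素的量化与解码端对应的反量化可示意如下(Python)；其中q_step为上文所述的量化步长，整除仅为一种示例性量化方式：
    def quantize_fidelity(fmap_int, q_step):
        return [[v // q_step for v in row] for row in fmap_int]

    def dequantize_fidelity(fmap_q, q_step):
        # 反量化仅能恢复到量化步长的精度，差异即量化引入的失真
        return [[v * q_step for v in row] for row in fmap_q]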
图11是示出根据本申请一种实施例的解码方法的过程1100的流程图。过程1100可由解码设备执行,例如可由视频解码器30执行。过程1100描述为一系列的步骤或操作,应当理解的是,过程1100可以以各种顺序执行和/或同时发生,不限于图11所示的执行顺序。过程1100包括但不限于如下步骤或操作:
1101、对第一码流进行解码以得到原始图像的重建图像。
其中,第一码流是对原始图像进行编码得到的码流,也即编码图像数据21;原始图像的重建图像也即解码图像331,以下简称重建图像。
1102、对第二码流进行解码以得到保真度图的重建图,其中,所述第二码流是对所述保真度图进行编码得到的,所述保真度图的重建图用于表示所述原始图像的至少部分区域与所 述重建图像的至少部分区域之间的失真。
其中，由于保真度图的重建图是根据保真度图的码流解码得到的，因此保真度图的重建图与保真度图有相同的尺寸大小和性质；当保真度图用于表征整张重建图像的保真度时，保真度图的重建图也用于表征整张重建图像的保真度；当保真度图用于表征重建图像的预设区域的保真度时，保真度图的重建图也用于表征重建图像的预设区域的保真度。如果保真度图用于表征重建图像的预设区域的保真度，则解码端需要从编码端获取该预设区域在原始图像中的位置和/或该预设区域在重建图像中的位置，以便解码端在解码得到保真度图的重建图以后，知晓该保真度图的重建图是用于表征重建图像中的具体哪个区域的保真度。
应理解，由于第二码流是由保真度图编码得到的，故第二码流经解码后可以得到保真度图的重建图。而根据编码方式的不同，解码得到的保真度图的重建图可以和保真度图相同，也可以和保真度图不同。具体地，若编码为无损编码，则保真度图的重建图和保真度图相同；若编码为有损编码，则保真度图的重建图包括对保真度图进行编码而生成的编码失真。
在本申请实施例中,对原始图像进行编码以得到第一码流,对保真度图进行编码以得到第二码流,保真度图用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真,其中,失真包括差异;解码端对第一码流进行解码可以得到原始图像的重建图像,解码端对第二码流进行解码后得到保真度图的重建图;而若编码为无损编码,则保真度图的重建图和保真度图相同;若编码为有损编码,则保真度图的重建图包括对保真度图进行编码而生成的编码失真;故保真度图的重建图可以用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
在一种可能的设计中,所述保真度图包括多个第二图像块中的任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的原始图像块之间的失真。
其中,所述多个第二图像块由所述重建图像划分得到,所述多个第二图像块与多个原始图像块一一对应,所述原始图像块是所述原始图像中的图像块,例如原始图像块为前述第一图像块;所述多个原始图像块是对所述原始图像进行划分得到的,所述多个第二图像块是对所述重建图像进行划分得到的,划分所述原始图像的划分策略与划分所述重建图像的划分策略相同;或所述多个原始图像块是对所述原始图像的预设区域进行划分得到的,所述多个第二图像块是对所述重建图像的预设区域进行划分得到的,划分所述原始图像的预设区域的划分策略与划分所述重建图像的预设区域的划分策略相同。
在一种可能的设计中,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一 第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述对第二码流进行解码以得到保真度图的重建图,包括:对所述第二码流进行解码,以得到所述任一第一元素的重建保真度值;根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
其中,第一元素的重建保真度值也即第一元素的值的重建。所述任一第一元素的重建保真度值在所述保真度图的重建图中的位置,根据所述任一第一元素对应的第二图像块在所述重建图像中的位置确定。或者,所述第二码流包括所述任一第一元素在所述保真度图中的位置;所述任一第一元素的重建保真度值在所述保真度图的重建图中的位置,根据所述任一第一元素在所述保真度图中的位置确定。
其中,保真度图的重建图中的任一第一元素有两个属性,分别为该任一第一元素的重建保真度值与该重建保真度值在保真度图的重建图中的位置。
应理解,而若编码为无损编码,则任一第一元素的重建保真度值就是该第一元素的值,此时保真度图的重建图中的a、b、c、d、e、f、g、h分别为15、67、99、134、16、76、123、187,如图10和图12所示,其中,图12中的一个格子表示一个第一元素,任一格子内的数字表示该第一元素的重建保真度值。若编码为有损编码,则任一第一元素的重建保真度值就是该第一元素的值与编码失真的和。
其中,保真度图中有多个第一元素,那么重建图像或重建图像的预设区域也可以划分为多个第二图像块,这多个第一元素与这多个第二图像块是一一对应的,这多个第一元素中的任一第一元素的值为与其对应的第二图像块的保真度值;由于保真度图的重建图中有多个第一元素的重建保真度值,这多个第一元素的重建保真度值与这多个第一元素是一一对应的,故这多个第一元素的重建保真度值与这多个第二图像块也是一一对应的,这多个第一元素的重建保真度值中的任一第一元素的重建保真度值为与其对应的第二图像块的保真度值。
具体地,如图12所示,由于重建图像或重建图像的预设区域划分为R×S个第二图像块,则保真度图的重建图为一个R行S列的二维阵列,一共有R×S个第一元素,这R×S个第二图像块与R×S个第一元素是一一对应的,这R×S个第一元素中的任一第一元素的重建保真度值为与其对应的第二图像块的保真度值。若是计算整张重建图像的保真度图的重建图,则这R×S个第一元素中的任一第一元素在保真度图的重建图中的位置与其对应的第二图像块在重建图像中的位置相同;若是计算重建图像的预设区域的保真度图的重建图,则这R×S个第一元素中的任一第一元素在保真度图的重建图中的位置与其对应的第二图像块在重建图像的预设区域中的位置相同。也即保真度图的重建图中第i行第j列的第一元素与重建图像或重建图像的预设区域中第i行第j列的第二图像块对应,保真度图的重建图中第i行第j列的第一元素的重建保真度值为重建图像或重建图像的预设区域中第i行第j列的第二图像块的保真度值,其中,1≤i≤R,1≤j≤S。
在本申请实施例中,第二码流包括保真度图中的任一第一元素的码流,故对第二码流进行解码即可得到任一第一元素的重建保真度值;应理解,而若编码为无损编码,则第一元素的重建保真度值为第一元素的值;若编码为有损编码,第一元素的重建保真度值包括对第一元素进行编码而生成的编码失真,第一元素的重建保真度值为第一元素的值与编码失真的和;从而根据任一第一元素的重建保真度值可以得到保真度图的重建图,故保真度图的重建图可以用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
在一种可能的设计中，所述第二码流是对量化后的第一元素进行编码得到的；所述对第二码流进行解码以得到保真度图的重建图，包括：对所述第二码流进行解码，以得到所述量化后的第一元素的重建保真度值；对所述量化后的第一元素的重建保真度值进行反量化，以得到所述任一第一元素的重建保真度值；根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
其中,由于对多个第一元素中的任一第一元素进行量化的量化步长可以相同,也可以不同;故对多个第一元素中的任一第一元素进行反量化的量化步长可以相同,也可以不同;且对多个第一元素中的任一第一元素进行量化的量化步长与其对应的第一元素进行量化时的量化步长相同。
在一示例中,所述解码由解码器或解码设备执行;所述解码器或所述解码设备存储有对所述多个第一元素中的任一第一元素进行量化的量化步长;或,所述解码器或所述解码设备存储有对所述多个第一元素中的任一第一元素进行反量化的量化步长;或对所述多个第一元素中的任一第一元素进行量化的量化步长为所述解码的输入;或对所述多个第一元素中的任一第一元素进行反量化的量化步长为所述解码的输入。如此,以便解码端能够恢复得到原始数量等级的保真度数值。
在本申请实施例中,为了减少编码开销,编码端可以对第一元素进行量化后再编码得到第一元素的码流,故解码端获得的第一元素的码流可以是对量化后的第一元素进行编码得到的;此种情况下,对第二码流进行解码得到量化后的第一元素的重建保真度值,还需要对量化后的第一元素的重建保真度值进行反量化,以得到任一第一元素的重建保真度值;然后根据任一第一元素的重建保真度值即可得到保真度图的重建图;如此,既能够在解码端获得编码图像的失真强度信息,又能够减少编码开销。
在一种可能的设计中,所述方法还包括:根据所述保真度图的重建图对所述重建图像或所述重建图像的预设区域进行处理,以提高所述重建图像或所述重建图像的预设区域的图像质量;或根据所述保真度图的重建图确定是否应用所述重建图像。
具体地,根据保真度图的重建图可以确定重建图像或重建图像的预设区域的图像质量,当重建图像或重建图像的预设区域的图像质量较差时,采用图像质量增强算法对该重建图像或重建图像的预设区域进行处理,以提高所述重建图像或所述重建图像的预设区域的图像质量;或者根据保真度图的重建图确定重建图像或重建图像的预设区域的保真度,根据重建图像或重建图像的预设区域的保真度确定是否应用该重建图像,例如重建图像或重建图像的预设区域的保真度值低于预设保真度阈值时,确定不应用该重建图像。
其中，对所述重建图像或所述重建图像的预设区域进行处理时，可以在基于学习的后处理增强算法中，根据图像失真程度划分为B个失真范围分别进行训练得到多个图像增强模型，其中B为大于1的整数，解码端可以根据保真度图，确定重建图像的不同区域的失真程度，对不同区域分别选用不同的模型进行图像增强，使用更匹配失真分布的训练模型能得到更好的画质提升效果。再例如单模型的图像增强算法，训练和使用可以将保真度图作为网络的额外输入信息，在保真度图的指示下网络能输出更好增强效果的图像。
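按失真范围为各图像区域选择增强模型的过程可示意如下(Python)；其中thresholds与models均为假设性输入，thresholds为B-1个升序失真门限，models按失真从低到高排列：
    def pick_model(fidelity_value, thresholds, models):
        # 返回该保真度值所落入的失真范围对应的图像增强模型
        for k, t in enumerate(thresholds):
            if fidelity_value <= t:
                return models[k]
        return models[-1]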
在本申请实施例中,解码端可以根据保真度图的重建图对重建图像或重建图像的预设区域进行处理,以提高重建图像或重建图像的预设区域的图像质量;或根据保真度图的重建图确定是否应用重建图像;从而有利于重建图像的应用。
需要说明的是,图11所描述是解码方法是图7所述描述的编码方法的逆过程,图11所描述的步骤或操作可以参阅图7所描述的步骤或操作中的相关描述。
下面通过视频图像编解码全过程来对图7至图12提供的技术方案进行进一步的介绍。
一、编码端实现方式
(1)原始图像编码
对于视频序列中的任意一张待编码的视频图像(也即原始图像),将其输入视频编码器做编码操作,经编码操作后输出该视频图像的第一码流和该视频图像的重建图像,也即输出原始图像的第一码流和原始图像的重建图像;其中,待编码的视频图像在进行编码操作时,常被称为编码图像;具体的编码操作可参阅前文的描述,此处不再赘述。上述编码操作可以是任一种视频图像编码方法,例如已有的H.264、H.265、H.266、AVS2、AVS3、AV1等国内外标准方案或行业方案,或学术界和工业界正在研究的基于深度神经网络的视频图像编码方案,或其它能够将输入视频图像进行压缩处理的方案。重建图像指的是经编码操作后包含编码失真的图像。典型的编码失真包括块效应、振铃效应、模糊等。一些基于深度学习技术的编解码方案,例如基于生成对抗网络(Generative adversarial network,GAN)的编解码方案,还可产生虚假图像内容或纹理细节等新型的编码失真。
对于一个视频序列中的每一张视频图像均可采用上述方法进行处理,从而可生成整个视频序列的码流以及该视频序列中的每一张视频图像的重建图像,其中,该整个视频序列的码流包括该视频序列中的每一张视频图像的第一码流。
(2)保真度图的计算与表示
对于一个视频序列，可以为该视频序列中的每一张视频图像计算保真度图，从而得到该视频序列中的每一张视频图像的保真度图；也可以选择该视频序列中的部分视频图像计算保真度图，从而得到该视频序列中的部分视频图像的保真度图。例如，可仅选择帧内编码图像，计算其保真度图；或仅选择场景切换帧，即视频序列中场景发生切换后的第一帧图像，计算其保真度图；或仅选择关键帧，即在按分层B帧参考结构编码的情况下的最低时间层图像，计算其保真度图；或其它选择规则以及多个选择规则的组合。
其中,可以按照预设的基本单元大小来计算一张编码图像的保真度图。例如,可以将编码图像划分成尺寸大小为M×N的多个图像块,将每一个图像块作为用于计算保真度的一个基本单元;M和N分别为图像块的宽和高,M和N的值可以相等,也可以不等。基本单元的尺寸大小可以预先设定并存储于编码器和解码器中;基本单元的尺寸大小也可由编码器根据视频序列的所有视频图像或其中一部分视频图像的内容的丰富程度灵活设定,并将设定好的基本单元的尺寸大小告知解码器。应理解,除直接指定基本单元的宽与高,还可指定一张编码图像中基本单元的数量,例如指定编码图像水平方向与竖直方向的基本单元的数量。因为采用均匀划分的方式将一张编码图像划分为一组基本单元(也即多个基本单元),所以上述两种基本单元的表示方式是等价的。
在计算保真度时,可以采用均方误差(Mean Squared Error,MSE)、绝对误差和(Sum of Absolute Difference,SAD)、结构相似性(structural similarity index measurement,SSIM)等任何一种有参考的质量评价方法,来计算得到重建图像相对于原始图像的失真强度数值,也可以是计算得到重建图像内容中是否有合成得到虚假图像内容的指示标识。具体地,根据基本单元的尺寸大小或基本单元的数量对原始图像和重建图像都进行均匀划分,以将原始图像划分成多个第一图像块,以及将重建图像划分成多个第二图像块;计算多个第二图像块中的任一第二图像块相对于与其对应的第一图像块的保真度,可以将第二图像块相对于与其对应的第一图像块的保真度称为保真度值,从而可以得到多个第二图像块的保真度值;再根据多个第二图像块的保真度值即可得到重建图像的保真度图。保真度图是一个阵列,该阵列中包 括多个第一元素,该多个第一元素与重建图像划分得到的多个第二图像块一一对应,该多个第一元素中的任一元素用来表征与其对应的第二图像块的保真度值,该多个第一元素中的任一第一元素的值为与其对应的第二图像块的保真度值,该多个第一元素中的任一第一元素在该阵列中的位置与其对应的第二图像块在重建图像中的位置相同。
在计算得到保真度图后,可以对其进行量化操作。具体地,可采用指定的量化步长对保真度图中的第一元素的值进行缩放,也即采用指定的量化步长对保真度图中的第一元素对应的第二图像块的保真度值进行缩放。量化的目的是缩小保真度图中保真度值的动态范围,以减小保真度图的编码开销。量化会对保真度值带来失真,因此编码端在设定量化步长时,可以根据解码端对重建图像的保真度的精度要求来设置对应的量化步长。例如,可以为一个视频序列的全部编码图像设置同一个量化步长,也即所有编码图像的保真度图采用同一个量化步长进行量化;或者为一个视频序列的一部分编码图像设置一个量化步长,而为该视频序列的另外一部分编码图像设置另外一个量化步长,也即所有编码图像中的部分编码图像的保真度图采用同一个量化步长进行量化;又或者为一个视频序列的不同编码图像设置不同的量化步长,也即所有编码图像中的任一编码图像的保真度图均采用不同的量化步长进行量化;甚至为一个视频序列的每张编码图像的不同图像块设置不同的量化步长,也即所有的编码块的保真度值采用不同的量化步长进行量化。其中,编码端所设置量化步长需要告知解码端;也即哪张编码图像的保真度图采用哪个量化步长进行量化或哪个编码块的保真度值采用哪个量化步长进行量化,编码端需要告知解码端。此外,除编码端灵活指定量化步长外,也可以将预设的量化步长存储于编码端与解码端使用;也即哪张编码图像的保真度图进行量化时采用哪个量化步长或哪个编码块的保真度值进行量化时采用哪个量化步长,编码端与解码端是预先存储好的。如此,编码端和解码端都能获得相同的量化步长,能够使得解码端恢复得到编码端中原始数量等级的保真度值。
其中,可以采用表5的语法结构来表示上述保真度图。在表5中,fidelity_metric_idc是计算保真度所使用的质量评价方法指示,其可以用于指示在预设的质量评价方法列表中究竟采用哪一种评价方法来计算得到保真度图,该质量评价方法列表预先设定,该质量评价方法列表可以包含MSE、SSE、SSIM等质量评价指标;base_unit_width和base_unit_height分别为计算保真度时采用的基本单元的宽和高;quantization_step为量化步长;fidelity_value为在一个基本单元计算得到的保真度值。
表5
表5原文为图片(PCTCN2021141403-appb-000003)，内容为保真度图的语法结构，包含fidelity_metric_idc、base_unit_width、base_unit_height、quantization_step以及遍历各基本单元得到的fidelity_value。
应理解，编码端可以为整张编码图像计算保真度图，也可以为编码图像中的预设区域计算保真度图，其中，编码端为编码图像中的预设区域计算保真度图的过程与为整张编码图像计算保真度图的过程相同，此处不再赘述。若编码端为编码图像中的预设区域计算保真度图，则编码端需要将预设区域在编码图像中的位置或预设区域在编码图像中的坐标告知解码端。
(3)保真度图编码
如表5所示,在对保真度图编码时,可遍历当前编码图像的保真度图中的每一个fidelity_value,并对它们进行压缩编码,以得到全部fidelity_value的码流,保真度图编码得到的第二码流包括全部fidelity_value的码流。具体地,可采用定长编码、指数哥伦布编码等方式将该数值进行二值化,获得一个二进制字符串,再对该二进制字符串中的每一个二进制字符进行熵编码;也可直接对该数值使用哈夫曼编码、多值算数编码等方法进行熵编码。其中,在熵编码过程中,可以采用保真度图中已编码的保真度值来确定当前编码的保真度值的概率分布或当前编码的保真度值的预测值,例如采用空间上与当前编码的保真度值相邻的左侧、上方或左上方的已编码的保真度值来确定当前编码的保真度值的概率分布或当前编码的保真度值的预测值,来辅助提升熵编码效率。例如,可以根据确定得到的当前编码的保真度值的概率分布或当前编码的保真度值的预测值来选择不同的哈夫曼码表,或者确定算数编码的子区间划分方式。
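作为示例，无符号指数哥伦布码的二值化可示意如下(Python)：
    def exp_golomb_encode(v):
        # 无符号指数哥伦布编码：前缀若干个0，后接(v+1)的二进制表示
        assert v >= 0
        code = bin(v + 1)[2:]
        return '0' * (len(code) - 1) + code

    assert exp_golomb_encode(0) == '1'
    assert exp_golomb_encode(3) == '00100'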
由于保真度图表示为一个二维或三维阵列，因此可使用任一现有的单色图像或彩色图像的编码方法对保真度图进行压缩编码。此时，无需如表5所示遍历所有基本单元计算得到的保真度值，而是直接采用现有编码方法对保真度图整体进行编码操作得到第二码流。
需要注意的是,保真度图编码输出的第二码流可以嵌入原始图像编码输出的第一码流,也可以独立管理做传输或存储等操作。
通过上述编码操作,编码端可以得到原始图像的第一码流和保真度图的第二码流,在得到第一码流和第二码流后,向解码端传输第一码流和第二码流;其中,编码端向解码端传输的码流可以统称为压缩码流。
二、解码端实现方式
(1)解码获得重建图像
解码端接收到来自编码端的压缩码流后,从压缩码流中获取原始图像的第一码流,并将第一码流输入视频解码器,经解码操作获得重建图像。解码端对第一码流的解码操作为编码端对原始图像的编码操作的逆操作,解码端解码得到的重建图像与编码端得到的重建图像相同。具体的解码操作可参阅前文的描述,此处不再赘述。
(2)解码获得保真度图的重建图
解码端接收到来自编码端的压缩码流后,从压缩码流中获取保真度图的第二码流,并将第二码流进行解码操作以获得保真度图的重建图。解码端对第二码流的解码操作为编码端对保真度图的编码操作的逆操作,如果编码端对保真度图的编码操作是采用无损编码的方式,如表5所示,遍历保真度图中每一个保真度值,并对每保真度值做无损熵编码,那么解码端解码操作得到的保真度图的重建图与编码端计算得到的保真度图相同。如果编码端对保真度图的编码操作是采用有损编码的方式,例如采用单色图像编码方式处理保真度图,那么解码端解码操作得到的保真度图的重建图与编码端计算得到的保真度图不同,因为解码端解码操作得到的保真度图的重建图将包含编码端对保真度图编码操作引入的编码失真。
（3）应用保真度图
解码端根据保真度图的重建图来判断一张重建图像或重建图像内预设区域的信号失真情 况,并应用到不同的业务环境中。例如,在视频监控场景中,根据保真度图的重建图来判断重建图像中的某个图像区域的失真程度,如果大于某个预设门限,则不会应用该重建图像;视频会议场景中,可根据保真度图来判断所看到的内容是真实还是虚假。再例如,可以根据重建图像中的某个图像区域的失真程度,根据预设规则从一组图像增强方法中选择其中的一种,应用于该图像区域做画质提升。
图13是示出根据本申请另一种实施例的解码方法的过程1300的流程图。过程1300可由解码设备执行,例如可由视频解码器30执行。过程1300描述为一系列的步骤或操作,应当理解的是,过程1300可以以各种顺序执行和/或同时发生,不限于图13所示的执行顺序。过程1300包括但不限于如下步骤或操作:
1301、对第一码流进行解码以得到原始图像的重建图像和目标量化参数信息,所述目标量化参数信息包括所述重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值。
应理解，第一码流也就是编码图像数据21；重建图像也即解码图像331，故重建图像是一张解码图像；原始图像也即图像17，故原始图像是一张编码图像。
在一种可能的设计中,所述第二图像块为编码单元。
其中,第一码流可以由任一现有主流视频图像编解码标准(H.264、H.265、H.266、AVS3等)所规定的编码器对原始图像进行编码得到;其中,在原始图像编码过程中,将原始图像划分为多个编码单元(也即编码块),并对这多个编码单元中的任一编码单元进行编码,从而得到这多个编码单元中的任一编码单元的码流,第一码流包括这多个编码单元的码流。应理解,将原始图像划分为多个编码单元,具体是如何划分的,由编码器决定。对应的,对第一码流进行解码的解码器则是可执行前述视频图像编解码标准所规定的解码操作的任意一解码器。该解码器对输入的第一码流做标准规定的解码操作后输出原始图像的重建图像。
本申请实施例对第一码流进行解码,除输出原始图像的重建图像外,还输出解码过程中所获取的目标量化参数信息,该目标量化参数信息也即原始图像编码过程中使用的量化参数信息。量化参数信息包括编码过程在任意一张原始图像划分得到的多个编码单元中的全部或部分编码单元的量化参数值、任意一张原始图像划分得到的多个编码单元中的全部或部分编码单元在该原始图像中的位置、任意一张原始图像划分得到的多个编码单元中的全部或部分编码单元的尺寸;其中,量化参数值也即量化参数的值,量化参数也即量化单元208所使用的量化参数,在量化单元208,任一编码单元采用对应的量化参数进行量化。因此,该目标量化参数信息包括上述原始图像划分得到的多个编码单元中的全部或部分编码单元的量化参数值、上述原始图像划分得到的多个编码单元中的任一编码单元在该原始图像中的位置、上述原始图像划分得到的多个编码单元中的任一编码单元的尺寸。一个编码单元在原始图像中的位置可以用该编码单元的坐标来表示,编码单元的坐标通常表示为该编码单元的左上角亮度像素坐标。而由于一个编码单元通常为矩形,所以其尺寸可表示为其宽度和高度,通常以亮度像素数量计;若一个编码单元为方形,则可以仅使用边长或面积来表示其尺寸。
1302、根据所述目标量化参数信息构建所述重建图像的量化参数图,其中,所述重建图像的量化参数图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
其中,量化参数图也即重建图像的量化参数图,其为指示该重建图像中的编码单元的量化参数值的数据结构;而重建图像中的编码单元的量化参数值也即原始图像中的编码单元的 量化参数值。应理解,量化参数的主要目的是为了做反量化操作;当然,量化参数本身就表示了信号失真(保真度),从而量化参数图可用于表征重建图像的保真度,也即量化参数图是一种形式的保真度图。故根据量化参数构建的重建图像的量化参数图可用于表征重建图像的保真度或可用于表征重建图像的预设区域的保真度。量化参数图为二维阵列或三维阵列,量化参数图中的第二元素的值为编码单元的量化参数值;其中,量化参数图中的任一第二元素有两个属性,分别为量化参数值与该量化参数值在量化参数图中的位置。应理解,量化参数图中的每一个量化参数值表示对应重建图像中的某个图像块的失真程度,显然量化参数图也是一种形式的保真度图。如果由一张重建图像的RGB或YUV这三个色彩分量的量化参数值构建量化参数图,该量化参数图为三维阵列;如果由一张重建图像的三个色彩分量的融合后的量化参数值构建量化参数图,该量化参数图为二维阵列。不失一般性,为简化描述,下文以量化参数图为二维阵列来描述具体的实现方式。
其中,量化参数图可以有多种表示方式。下面示例性的列举一些。
第一种,原始图像划分为多个编码单元,从而重建图像包括多个编码单元,将编码单元作为用于构建量化参数图的基本单元,此时重建图像中的编码单元也即第二图像块,故多个编码单元为用于构建量化参数图的多个基本单元;这多个基本单元与量化参数图中的多个第二元素是一一对应的,这多个基本单元就是这多个编码单元,从而多个编码单元与量化参数图中的多个第二元素是一一对应的,量化参数图中的任一第二元素的值设置为与其对应的编码单元的量化参数值。
此种表示方式，量化参数图中的任一第二元素在该量化参数图中的位置与其对应的编码单元在重建图像中的位置是相同的。进一步地，量化参数图还可以与对应的原始图像或重建图像具有相同的空间分辨率，也即量化参数图和原始图像或重建图像的尺寸大小相同。如图14所示，原始图像或重建图像包括6个编码单元，则量化参数图中有6个格子，也即量化参数图中有6个第二元素，其中，这6个编码单元与这6个第二元素一一对应，任一格子内的数字为与其对应的编码单元对应的量化参数；原始图像的尺寸大小为W×H，重建图像的尺寸大小也为W×H，则量化参数图的尺寸大小也为W×H。图14仅是示例性的描述了量化参数图与对应的原始图像或重建图像具有相同的尺寸大小的情形，应理解，量化参数图的尺寸与对应的原始图像或重建图像的尺寸还可以是有一定的缩放比例。
第二种,将重建图像划分为等大小的多个用于构建量化参数图的基本单元,此时重建图像中的基本单元也即第二图像块;其中,基本单元的尺寸大小应小于或等于尺寸最小的编码单元的尺寸大小;对于多个基本单元中的任意一个基本单元,将包含该基本单元的编码单元的量化参数值作为该基本单元的量化参数值;由于重建图像中的多个基本单元与量化参数图中的多个第二元素是一一对应的,因此量化参数图中的任一第二元素的值为重建图像中与其对应的基本单元的量化参数值,也即包含该基本单元的编码单元的量化参数值。
为了更好的理解，可以简单理解为，将原始图像和重建图像都划分为多个基本单元，将原始图像划分得到的基本单元称之为基本单元i，以及将重建图像划分得到的基本单元称之为基本单元j；多个基本单元i与多个基本单元j一一对应，多个基本单元j与多个第二元素一一对应，故多个基本单元i与多个第二元素一一对应，多个第二元素中的任一第二元素的值为与其对应的基本单元i的量化参数值，该基本单元i的量化参数值为原始图像中包含该基本单元i的编码单元的量化参数值。或者采用另外一种方式说明，原始图像中的多个编码单元与重建图像中的多个重建块是一一对应的，重建图像中的多个重建块中的任一重建块的量化参数值为与其对应的编码单元的量化参数值；将重建图像划分为等大小的多个用于构建量化参数图的基本单元，重建图像中的多个基本单元与量化参数图中的多个第二元素是一一对应的，因此量化参数图中的任一第二元素的值为重建图像中与其对应的基本单元的量化参数值，也即包含该基本单元的重建块的量化参数值。此种表示方式，量化参数图中的任一第二元素在该量化参数图中的位置与其对应的基本单元在原始图像或重建图像中的位置是相同的。例如，重建图像的尺寸大小为W×H，该重建图像对应的原始图像的编码单元的划分如图14所示，以编码单元2的尺寸大小M×N为基本单元的尺寸大小，将重建图像划分成R×S个尺寸大小为M×N的基本单元；从而构建得到的量化参数图为一个R行S列的二维阵列，如图15所示，二维阵列中一共有R×S个第二元素，这R×S个第二元素与这R×S个基本单元是一一对应的，这R×S个第二元素中的任一第二元素在二维阵列中的位置与其对应的基本单元在重建图像中的位置相同，这R×S个第二元素中的任一第二元素的值为与其对应的基本单元的量化参数值；具体地，图15中值为22的第二元素对应图14中编码单元1的量化参数值，值为24的第二元素对应图14中编码单元2的量化参数值，值为26的第二元素对应图14中编码单元3的量化参数值，值为18的第二元素对应图14中编码单元4的量化参数值，值为20的第二元素对应图14中编码单元5的量化参数值，值为16的第二元素对应图14中编码单元6的量化参数值。应理解，量化参数图的尺寸与对应的原始图像或重建图像的尺寸可以相同，也可以是有一定的缩放比例；其中，图15所示的量化参数图的尺寸是按照原始图像或重建图像的尺寸缩小后的。
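上述构建过程可用如下最小示意性草图(Python)表示；其中cus的格式(x、y、w、h、qp)为假设性约定，坐标与尺寸以亮度像素计且假设对齐到M×N基本单元：
    import numpy as np

    def build_qp_map(cus, pic_w, pic_h, m, n):
        # 以M×N基本单元为粒度，用包含该基本单元的编码单元的QP值填充量化参数图
        qp_map = np.full((pic_h // n, pic_w // m), -1, dtype=np.int32)  # -1表示暂缺
        for x, y, w, h, qp in cus:
            qp_map[y // n:(y + h) // n, x // m:(x + w) // m] = qp
        return qp_map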
需要说明的是,量化参数图也可以用于表征重建图像的预设区域的保真度,此时,量化参数图中的第二元素的值是原始图像中部分编码单元的量化参数值,也即在构建量化参数图时仅使用了部分编码单元的量化参数值。应理解,用于表征重建图像的预设区域的保真度的量化参数图同样可采用前面描述的表示方式,此时,用于构建量化参数图的多个基本单元是重建图像的预设区域划分得到的。
在本申请实施例中,编码端将原始图像划分为多个编码单元,对由原始图像划分得到的多个编码单元进行编码得到第一码流;解码端对第一码流进行解码,可以得到原始图像的重建图像和目标量化参数信息,目标量化参数信息包括该多个编码单元中的全部或部分编码单元的量化参数值;根据目标量化参数信息可以构建重建图像的量化参数图;而重建图像的量化参数图就是一种形式的保真度图,当目标量化参数信息包括该多个编码单元中的全部编码单元的量化参数值时,重建图像的量化参数图是整张重建图像的保真度图;当目标量化参数信息包括该多个编码单元中的部分编码单元的量化参数值时,重建图像的量化参数图是重建图像的预设区域的保真度图;故重建图像的量化参数图可以用于表征重建图像的保真度或用于表征重建图像的预设区域的保真度;因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
在本申请实施例中,解码端在对原始图像编码得到的第一码流进行解码操作,可以得到重建图像的各个区域(第二图像块)的量化参数值;而根据重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值可以构建重建图像的量化参数图,且重建图像的量化参数图可以用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真,因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
在一种可能的设计中,所述重建图像的量化参数图包括多个第二元素,所述多个第二图像块与所述多个第二元素一一对应,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的量化参数值,所述任一第二元素在所述重建图像的量化参数图 中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。其中,第二元素也可以称为量化参数图的像素点。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述重建图像的量化参数图为包括色彩分量、宽和高三个维度的三维阵列,所述重建图像的量化参数图中的任一色彩分量A下的二维阵列包括多个第二元素,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的色彩分量A的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述根据所述目标量化参数信息构建所述重建图像的量化参数图,包括:当所述目标量化参数信息包括所述多个编码单元中的部分编码单元的量化参数值时,根据所述部分编码单元的量化参数值和/或参考量化参数图,得到目标编码单元的量化参数值,其中,所述参考量化参数图为所述重建图像的参考图像的量化参数图,所述目标编码单元为所述多个编码单元中除所述部分编码单元之外的编码单元;根据所述部分编码单元的量化参数值和所述目标编码单元的量化参数值,得到所述重建图像的量化参数图。
具体地,当所述目标量化参数信息包括所述多个编码单元中的全部编码单元的量化参数值时,根据所述全部编码单元的量化参数值得到所述重建图像的量化参数图;当所述目标量化参数信息包括所述多个编码单元中的部分编码单元的量化参数值时,根据所述部分编码单元的量化参数值得到所述重建图像的量化参数图,或根据所述部分编码单元的量化参数值和参考量化参数图得到所述重建图像的量化参数图,其中,所述参考量化参数图为所述重建图像的参考图像的量化参数图。
应理解，参考量化参数图为重建图像的参考图像的量化参数图，重建图像的参考图像也即编码图像(coding picture)的参考图像(reference picture)。当前主流视频图像编解码标准，在解码端并不保证能够为每一个编码单元推导得到一个真实有效的量化参数值。以H.265标准方案为例，如果一个编码单元的残差量化为全0，即不会为其传输任何残差的情况，此时解码端会跳过反量化操作。为避免传输无用的量化参数信息，编码端也不会将该编码单元的量化参数值传递到解码端。此时，解码端根本无法从第一码流中获得该编码单元的量化参数值。此外，即使在解码端能够通过解码操作获得一个编码单元的量化参数值，但是该量化参数值也无法准确表示该编码单元的失真程度。具体说，如果一个编码单元残差的幅度小于量化步长(量化步长根据量化参数值计算获得)，编码端会将该编码单元的残差量化为全0。这种情况在P、B帧编码中较为常见，当参考图像质量高而编码单元的量化步长设置较大时就会发生。此时，尽管解码端可以通过解码操作获得该编码单元的量化参数值，但是该量化参数值并不能反映该编码单元的实际失真情况。
本申请实施例提供的构建量化参数图的方案可以与任一现有主流视频图像编解码标准(H.264、H.265、H.266、AVS3等)结合，根据解码过程中获取的量化参数信息构建量化参数图。并且，还可以对量化参数图中所识别出的不准确的量化参数值进行修正，获得修正后的量化参数图，也即保真度图，最后将保真度图应用于重建图像。具体地，可以使用当前原始图像中的编码单元的量化参数值修正当前原始图像中其他编码单元的量化参数值，或者使用当前原始图像的参考图像的量化参数图(后面简称为参考量化参数图)对不准确量化参数数据进行修正。每一张重建图像的量化参数图在构建和修正完成后，会保存起来，留待用作构建后续重建图像的量化参数图的输入。
其中,当目标量化参数信息包括重建图像中多个编码单元中的全部编码单元的量化参数值时,可以根据全部编码单元的量化参数值构建重建图像的量化参数图,此时重建图像的量化参数图用于表征整张重建图像的保真度;或者,还可以从全部编码单元中选择重建图像的预设区域中的编码单元的量化参数值,并根据重建图像的预设区域中的编码单元的量化参数值构建重建图像的量化参数图,此时重建图像的量化参数图用于表征重建图像的预设区域的保真度。
其中,当目标量化参数信息包括重建图像中多个编码单元中的部分编码单元的量化参数值,且部分编码单元的量化参数值完全包括重建图像的预设区域中的编码单元的量化参数值时,可以根据重建图像的预设区域中的编码单元的量化参数值构建重建图像的量化参数图;或者,还可以根据多个编码单元中的部分编码单元的量化参数值或参考量化参数图进行修正量化参数值,以得到多个编码单元中的全部编码单元的量化参数值,之后,再根据全部编码单元的量化参数值构建重建图像的量化参数图。
其中,当目标量化参数信息包括重建图像中多个编码单元中的部分编码单元的量化参数值,且部分编码单元的量化参数值不完全包括重建图像的预设区域中的编码单元的量化参数值时,可以根据多个编码单元中的部分编码单元的量化参数值或参考量化参数图进行修正量化参数值,以得到多个编码单元中的全部编码单元的量化参数值;之后,可以根据全部编码单元的量化参数值构建重建图像的量化参数图;或者,还可以从全部编码单元中选择重建图像的预设区域中的编码单元的量化参数值,并根据重建图像的预设区域中的编码单元的量化参数值构建重建图像的量化参数图。
在本申请实施例中,当目标量化参数信息包括多个编码单元中的全部编码单元的量化参数值时,可以根据全部编码单元的量化参数值得到重建图像的量化参数图,此种情况下得到的重建图像的量化参数图可以用于表征整张重建图像的保真度;当目标量化参数信息包括多个编码单元中的部分编码单元的量化参数值时,可以根据部分编码单元的量化参数值得到重建图像的量化参数图,此种情况下得到的重建图像的量化参数图可以用于表征重建图像的预设区域的保真度;当目标量化参数信息包括多个编码单元中的部分编码单元的量化参数值时,可以根据部分编码单元的量化参数值和参考量化参数图得到重建图像的量化参数图,由于参考量化参数图为重建图像的参考图像的量化参数图,可以根据参考量化参数图得到多个编码单元中除该部分编码单元之外的任意一个编码单元的量化参数值,如此可以获得多个编码单元中任意一个编码单元的量化参数,此种情况下得到的重建图像的量化参数图可以用于表征整张重建图像的保真度或用于表征重建图像的预设区域的保真度;因此,本申请实施例在解码得到的目标量化参数信息包括多个编码单元中的全部或部分编码单元的量化参数值的任意一种情况下,均能得到用于表征重建图像的保真度或用于表征重建图像的预设区域的保真度的重建图像的量化参数图。
在一种可能的设计中,所述根据所述部分编码单元的量化参数值,得到目标编码单元的量化参数值,包括:根据所述部分编码单元中的至少一个编码单元的量化参数值确定所述目标编码单元的量化参数值。
其中，在解码端无法从第一码流中获得一个编码单元的量化参数值时，会采用该编码单元空间邻域的量化参数值对该编码单元进行填充。具体地，假设解码端无法从第一码流中获得目标编码单元的量化参数值，可以根据已经解码获得量化参数值的编码单元中的至少一个编码单元的量化参数值确定目标编码单元的量化参数值，包括：将任意一个解码获得的量化参数值作为该目标编码单元的量化参数值，以及计算几个解码获得的量化参数值的平均值，将该平均值作为该目标编码单元的量化参数值。例如，可以将原始图像中目标编码单元左侧、上方、左上方等方位的编码单元的量化参数值作为目标编码单元的量化参数值。
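空间邻域填充可示意如下(Python)；约定值-1表示该位置的QP暂缺，qp_map沿用上文假设的构建结果：
    def fill_qp_spatial(qp_map):
        # 用左、上邻域已有的QP值的均值填充暂缺位置
        h, w = qp_map.shape
        for i in range(h):
            for j in range(w):
                if qp_map[i, j] < 0:
                    cands = [qp_map[i, j - 1] if j > 0 else -1,
                             qp_map[i - 1, j] if i > 0 else -1]
                    cands = [c for c in cands if c >= 0]
                    if cands:
                        qp_map[i, j] = sum(cands) // len(cands)
        return qp_map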
如此,可以根据多个编码单元中的部分编码单元的量化参数值对无法从第一码流中获得的量化参数值的编码单元进行量化参数值填充,得到多个编码单元中的全部编码单元的量化参数值;进而可以根据全部编码单元的量化参数值构建重建图像的量化参数图;或者,从全部编码单元中选择重建图像的预设区域中的编码单元的量化参数值,并根据重建图像的预设区域中的编码单元的量化参数值构建重建图像的量化参数图。
在本申请实施例中,解码端无法从第一码流中获得某个编码单元的量化参数值时,可以采用该编码单元的空间邻域的量化参数值进行填充。具体地,当目标量化参数信息包括多个编码单元中的部分编码单元的量化参数值时,还可以根据该部分编码单元中的至少一个编码单元的量化参数值确定这多个编码单元中除该部分编码单元之外的任意一个编码单元的量化参数值,如此可以确保得到多个编码单元中的任意一个编码单元的量化参数值;并且,可以根据多个编码单元中的部分或全部编码单元的量化参数值得到重建图像的量化参数图;当根据多个编码单元中的全部编码单元的量化参数值得到重建图像的量化参数图时,得到的重建图像的量化参数图也可以用于表征整张重建图像的保真度;当根据多个编码单元中的部分编码单元的量化参数值得到重建图像的量化参数图时,得到的重建图像的量化参数图也可以用于表征重建图像的预设区域的保真度。
在一种可能的设计中,所述参考量化参数图包括多个参考元素,所述多个参考元素中的任一参考元素的值为所述参考图像中的编码单元的量化参数值;所述根据参考量化参数图,得到目标编码单元的量化参数值,包括:将目标元素的值作为所述目标编码单元中任一目标编码单元的量化参数值,其中,所述目标元素为所述参考量化参数图中的参考元素,所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置确定,或所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置和所述任一目标编码单元的运动矢量确定。其中,参考元素为第二元素的另外一种叫法。
其中,解码端无法从第一码流中获得某个编码单元的量化参数值时,可以采用该编码单元的时间邻域的量化参数值进行填充。假设解码端无法从第一码流中获得目标编码单元的量化参数值,填充方式包括:
方式一,使用重建图像的参考量化参数图中目标元素的值作为目标编码单元的量化参数值,该目标元素在参考量化参数图中的位置根据该目标编码单元在重建图像中的位置确定。
具体地,根据目标编码单元在重建图像的参考图像上确定参考编码单元,其中,参考编码单元在参考图像中的位置与目标编码单元在重建图像中的位置相同;将参考量化参数图中目标元素的值作为目标编码单元的量化参数值,其中,目标元素为参考量化参数图中与参考编码单元对应的第二元素。
方式二,使用重建图像的参考量化参数图中目标元素的值作为目标编码单元的量化参数值,该目标元素在参考量化参数图中的位置根据该目标编码单元在原始图像中的位置和该目标编码单元的运动矢量确定。
具体地,目标编码单元在重建图像上的坐标为(x,y),使用目标编码单元在重建图像上的坐标为(x,y)和目标编码单元的运动矢量(mvx,mvy)做偏移,获得(x+mvx,y+mvy),在重建图像的参考图像的(x+mvx,y+mvy)位置确定参考编码单元;将参考量化参数图中目标元素的值作为目标编码单元的量化参数值,其中,目标元素为参考量化参数图中与参考编码单元对应的第二元素。
需要注意，前述各种填充方式中，参考量化参数图的表示方式和重建图像的量化参数图的表示方式是一样的，也即可以假设重建图像的量化参数图、参考量化参数图与对应的原始图像具有相同的大小，或者是按比例缩放。
进一步地,可使用前述各种方式获得多个量化参数值,并对这多个量化参数值做算数平均操作来确定目标编码单元的量化参数值。
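按运动矢量偏移从参考量化参数图取值的过程可示意如下(Python)；其中m、n为基本单元的宽和高，越界时按边界截断仅为一种示例性处理：
    def qp_from_reference(ref_qp_map, x, y, mvx, mvy, m, n):
        # 用(x+mvx, y+mvy)在参考量化参数图中定位对应的基本单元并取其QP值
        i = min(max((y + mvy) // n, 0), ref_qp_map.shape[0] - 1)
        j = min(max((x + mvx) // m, 0), ref_qp_map.shape[1] - 1)
        return int(ref_qp_map[i, j])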
如此,可以根据重建图像的参考量化参数图对无法从第一码流中获得的量化参数值的编码单元进行量化参数值填充,得到多个编码单元中的全部编码单元的量化参数值;进而可以根据全部编码单元的量化参数值构建重建图像的量化参数图;或者,从全部编码单元中选择重建图像的预设区域中的编码单元的量化参数值,并根据重建图像的预设区域中的编码单元的量化参数值构建重建图像的量化参数图。
在本申请实施例中,无法从第一码流中获得某个编码单元的量化参数值时,可以采用该编码单元的时间邻域的量化参数进行填充。具体地,对于任意一个无法从第一码流中获得的量化参数值的编码单元;可以将参考量化参数图中的目标元素的值作为该编码单元的量化参数值,该目标元素在参考量化参数图中的位置根据该编码单元在重建图像中的位置确定,或该目标元素在参考量化参数图中的位置根据该编码单元在重建图像中的位置和该编码单元的运动矢量确定。如此可以确保得到多个编码单元中任意一个编码单元的量化参数值;并且,可以根据多个编码单元中的部分或全部编码单元的量化参数值得到重建图像的量化参数图;当根据多个编码单元中的全部编码单元的量化参数值得到重建图像的量化参数图时,得到的重建图像的量化参数图也可以用于表征整张重建图像的保真度;当根据多个编码单元中的部分编码单元的量化参数值得到重建图像的量化参数图时,得到的重建图像的量化参数图也可以用于表征重建图像的预设区域的保真度。
在一种可能的设计中,所述方法还包括:将所述重建图像和所述重建图像的量化参数图关联存储,以将所述重建图像作为参考图像及将所述重建图像的量化参数图作为参考量化参数图。
其中,一张量化参数图在构建和修正完成后,会保存起来,留待用作构建后续编码图像的量化参数图的输入;例如设计量化参数图缓存器,用于存储量化参数图。
在任一主流视频编解码方案的解码器中都包含一个解码图像缓存器来存储已解码图像,并由一个参考图像管理机制来管理解码图像的添加和移除。在本申请方案中,每一个量化参数图都与一个解码图像对应,记录该解码图像的量化参数信息。因此,可以按照管理一个解码图像完全相同的操作来管理其量化参数图。换言之,一个解码图像与其量化参数图分别在解码图像缓存器和参考量化参数图缓存器中管理,两者的管理操作完全同步。
在本申请实施例中,可以将重建图像的量化参数图存储,用于后续解码图像的构建量化参数图的参考量化参数图,从而有利于后续解码图像的构建量化参数图。
在一种可能的设计中,所述方法还包括:根据所述重建图像的量化参数图对所述重建图像或所述重建图像的预设区域进行处理,以提高所述重建图像或所述重建图像的预设区域的图像质量;或根据所述重建图像的量化参数图确定是否应用所述重建图像。
具体地,解码端根据保真度图的重建图来判断一张重建图像或重建图像内预设区域的信号失真情况,并应用到不同的业务环境中。例如,在视频监控场景中,根据保真度图的重建图来判断重建图像中的某个图像区域的失真程度,如果大于某个预设门限,则不会应用该重建图像。再例如,可以根据重建图像中的某个图像区域的失真程度,根据预设规则从一组图像增强方法中选择其中的一种,应用于该图像区域做画质提升。
其中,对所述重建图像或所述重建图像的预设区域进行处理时,可以在基于学习的后处理增强算法中,根据图像失真程度划分为B个失真范围分别进行训练得到多个图像增强模型,其中B为大于1的整数,解码端可以根据重建图像的量化参数图,确定重建图像的不同区域的失真程度,对不同区域分别选用不同的模型进行图像增强,使用更匹配失真分布的训练模型能得到更好的画质提升效果。再例如单模型的图像增强算法,训练和使用可以将量化参数图作为网络的额外输入信息,在量化参数图的指示下网络能输出更好增强效果的图像。
在本申请实施例中,解码端可以根据保真度图的重建图对重建图像或重建图像的预设区域进行处理,以提高重建图像或重建图像的预设区域的图像质量;或根据保真度图的重建图确定是否应用重建图像;从而有利于重建图像的应用。
图16为本申请实施例提供的一种编码设备的示意性框图;该编码设备包括视频编码器和保真度图编码器,其中:
视频编码器,用于对原始图像进行编码以得到第一码流;
保真度图编码器,用于对保真度图进行编码以得到第二码流,其中,所述保真度图用于表示所述原始图像的至少部分区域与重建图像的至少部分区域之间的失真,所述重建图像是对所述第一码流进行解码后得到的。
其中,图16中的压缩码流为编码端向解码端传输的码流的统称,压缩码流包括第一码流和第二码流。
在一种可能的设计中,所述编码设备还包括保真度图计算器,所述保真度图计算器用于:将所述原始图像划分为多个第一图像块,以及将所述重建图像划分为多个第二图像块,其中,划分所述原始图像的划分策略与划分所述重建图像的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;或将所述原始图像的预设区域划分为多个第一图像块,以及将所述重建图像的预设区域划分为多个第二图像块,其中,划分所述原始图像的预设区域的划分策略与划分所述重建图像的预设区域的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;根据所述多个第二图像块中的任一第二图像块与所述任一第二图像块对应的第一图像块计算得到所述任一第二图像块的保真度值,所述保真度图包括所述任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的第一图像块之间的失真。
在一种可能的设计中,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个 第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述保真度图编码器,具体用于:对所述任一第一元素进行熵编码以得到所述第二码流,对所述任一第一元素的熵编码独立于其他第一元素的熵编码;或者,根据所述已编码的第一元素中的至少一个第一元素的值确定所述任一第一元素的值的概率分布或所述任一第一元素的预测值,并根据所述任一第一元素的值的概率分布或所述任一第一元素的预测值对所述任一第一元素进行熵编码,以得到所述第二码流;其中,所述第二码流包括所述多个第一元素的码流。
在一种可能的设计中,所述保真度图编码器,具体用于:对所述任一第一元素进行量化,以得到量化后的第一元素;对所述量化后的第一元素进行编码,以得到所述第二码流;其中,所述第二码流包括所述多个第一元素的码流。
需要说明的是,各个单元的实现还可以对应参照图7所示的方法实施例的相应描述。
在图16所描述的编码设备中,对原始图像进行编码以得到第一码流,对保真度图进行编码以得到第二码流,其中,保真度图用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;解码端对第一码流进行解码可以得到原始图像的重建图像,解码端对第二码流进行解码后得到保真度图的重建图;而若编码为无损编码,则保真度图的重建图和保真度图相同;若编码为有损编码,则保真度图的重建图包括对保真度图进行编码而生成的编码失真;故保真度图的重建图可以用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
图17为本申请实施例提供的一种解码设备的示意性框图;该解码设备包括视频解码器和保真度图解码器,其中:
视频解码器,用于对第一码流进行解码以得到原始图像的重建图像;
保真度图解码器,用于对第二码流进行解码以得到保真度图的重建图,其中,所述第二码流是对所述保真度图进行编码得到的,所述保真度图的重建图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
其中,图17中的压缩码流为编码端向解码端传输的码流的统称,压缩码流包括第一码流和第二码流。
在一种可能的设计中,所述保真度图包括多个第二图像块中的任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的原始图像块之间的失真。
在一种可能的设计中,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分 量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述保真度图解码器,具体用于:对所述第二码流进行解码,以得到所述任一第一元素的重建保真度值;根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
在一种可能的设计中,所述第二码流是对量化后的第一元素进行编码得到的;所述保真度图解码器,具体用于:对所述第二码流进行解码,以得到所述量化后的第一元素的重建保真度值;对所述量化后的第一元素的重建保真度值进行反量化,以得到所述任一第一元素的重建保真度值;根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
需要说明的是,各个单元的实现还可以对应参照图11所示的方法实施例的相应描述。
在图17所描述的解码设备中,对第一码流进行解码可以得到原始图像的重建图像,对第二码流进行解码后得到保真度图的重建图,第一码流由对原始图像进行编码得到,第二码流由对保真度图进行编码得到,保真度图用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;若编码为无损编码,则保真度图的重建图和保真度图相同;若编码为有损编码,则保真度图的重建图包括对保真度图进行编码而生成的编码失真;故保真度图的重建图可以用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真;因此,本申请实施例能够在解码设备获得编码图像的失真强度信息。
图18为本申请实施例提供的一种解码设备的示意性框图;该解码设备包括视频解码器和保真度图解码器,其中:
视频解码器,用于对第一码流进行解码以得到原始图像的重建图像和目标量化参数信息,所述目标量化参数信息包括所述重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值;
量化参数图构建器,用于根据所述目标量化参数信息构建所述重建图像的量化参数图,其中,所述重建图像的量化参数图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
其中,图18中的压缩码流为编码端向解码端传输的码流的统称,压缩码流包括第一码流。
在一种可能的设计中,所述第二图像块为编码单元。
在一种可能的设计中,所述重建图像的量化参数图包括多个第二元素,所述多个第二图像块与所述多个第二元素一一对应,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述重建图像的量化参数图为包括色彩分量、宽和高三个维度的三维阵列,所述重建图像的量化参数图中的任一色彩分量A下的二维阵列包括多个第二元素,所述多个第二元素中的任一第二元素的值为与所述任 一第二元素对应的第二图像块的色彩分量A的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述量化参数图构建器,具体用于:当所述目标量化参数信息包括所述多个编码单元中的部分编码单元的量化参数值时,根据所述部分编码单元的量化参数值和/或参考量化参数图,得到目标编码单元的量化参数值,其中,所述参考量化参数图为所述重建图像的参考图像的量化参数图,所述目标编码单元为所述多个编码单元中除所述部分编码单元之外的编码单元;根据所述部分编码单元的量化参数值和所述目标编码单元的量化参数值,得到所述重建图像的量化参数图。
在一种可能的设计中,所述参考量化参数图包括多个参考元素,所述多个参考元素中的任一参考元素的值为所述参考图像中的编码单元的量化参数值;所述量化参数图构建器,具体用于:将目标元素的值作为所述目标编码单元中任一目标编码单元的量化参数值,其中,所述目标元素为所述参考量化参数图中的参考元素,所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置确定,或所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置和所述任一目标编码单元的运动矢量确定。
需要说明的是,各个单元的实现还可以对应参照图13所示的方法实施例的相应描述。
在图18所描述的解码设备中,在对原始图像编码得到的第一码流进行解码操作,可以得到重建图像的各个区域(第二图像块)的量化参数值;而根据重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值可以构建重建图像的量化参数图,且重建图像的量化参数图可以用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真,因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
图19为本申请实施例提供的一种编码装置的示意性框图；该编码装置1900应用于编码设备，该编码装置1900包括处理单元1901和通信单元1902，其中，该处理单元1901，用于执行如图7所示的方法实施例中的任一步骤，且在执行诸如获取等数据传输时，可选择地调用该通信单元1902来完成相应操作。下面进行详细说明。
所述处理单元1901用于:对原始图像进行编码以得到第一码流;对保真度图进行编码以得到第二码流,其中,所述保真度图用于表示所述原始图像的至少部分区域与重建图像的至少部分区域之间的失真,所述重建图像是对所述第一码流进行解码后得到的。
在一种可能的设计中,所述处理单元1901还用于:将所述原始图像划分为多个第一图像块,以及将所述重建图像划分为多个第二图像块,其中,划分所述原始图像的划分策略与划分所述重建图像的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;或将所述原始图像的预设区域划分为多个第一图像块,以及将所述重建图像的预设区域划分为多个第二图像块,其中,划分所述原始图像的预设区域的划分策略与划分所述重建图像的预设区域的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;根据所述多个第二图像块中的任一第二图像块与所述任一第二图像块对应的第一图像块计算得到所述任一第二图像块的保真度值,所述保真度图包括所述任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的第一图像块 之间的失真。
在一种可能的设计中,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述处理单元1901具体用于:对所述任一第一元素进行熵编码以得到所述第二码流,对所述任一第一元素的熵编码独立于其他第一元素的熵编码;或者,根据所述已编码的第一元素中的至少一个第一元素的值确定所述任一第一元素的值的概率分布或所述任一第一元素的预测值,并根据所述任一第一元素的值的概率分布或所述任一第一元素的预测值对所述任一第一元素进行熵编码,以得到所述第二码流;其中,所述第二码流包括所述多个第一元素的码流。
在一种可能的设计中,所述处理单元1901具体用于:对所述任一第一元素进行量化,以得到量化后的第一元素;对所述量化后的第一元素进行编码,以得到所述第二码流;其中,所述第二码流包括所述多个第一元素的码流。
其中，该编码装置1900还可以包括存储单元1903，用于存储编码设备的程序代码和数据。该处理单元1901可以是处理器，该通信单元1902可以是收发器，该存储单元1903可以是存储器。
需要说明的是,各个单元的实现还可以对应参照图7所示的方法实施例的相应描述。
在图19所描述的编码装置中，对原始图像进行编码以得到第一码流，对保真度图进行编码以得到第二码流，其中，保真度图用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真；解码端对第一码流进行解码可以得到原始图像的重建图像，解码端对第二码流进行解码后得到保真度图的重建图；而若编码为无损编码，则保真度图的重建图和保真度图相同；若编码为有损编码，则保真度图的重建图包括对保真度图进行编码而生成的编码失真；故保真度图的重建图可以用于表示原始图像的至少部分区域与重建图像的至少部分区域之间的失真；因此，本申请实施例能够在解码端获得编码图像的失真强度信息。
图20为本申请实施例提供的一种解码装置的示意性框图；该解码装置2000应用于解码设备，该解码装置2000包括处理单元2001和通信单元2002，其中，该处理单元2001，用于执行如图11所示的方法实施例中的任一步骤，且在执行诸如获取等数据传输时，可选择地调用该通信单元2002来完成相应操作。下面进行详细说明。
所述处理单元2001用于:对第一码流进行解码以得到原始图像的重建图像;对第二码流进行解码以得到保真度图的重建图,其中,所述第二码流是对所述保真度图进行编码得到的, 所述保真度图的重建图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
在一种可能的设计中,所述保真度图包括多个第二图像块中的任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的原始图像块之间的失真。
在一种可能的设计中,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述处理单元2001具体用于:对所述第二码流进行解码,以得到所述任一第一元素的重建保真度值;根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
在一种可能的设计中,所述第二码流是对量化后的第一元素进行编码得到的;所述处理单元2001具体用于:对所述第二码流进行解码,以得到所述量化后的第一元素的重建保真度值;对所述量化后的第一元素的重建保真度值进行反量化,以得到所述任一第一元素的重建保真度值;根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
其中，该解码装置2000还可以包括存储单元2003，用于存储解码设备的程序代码和数据。该处理单元2001可以是处理器，该通信单元2002可以是收发器，该存储单元2003可以是存储器。
需要说明的是,各个单元的实现还可以对应参照图11所示的方法实施例的相应描述。
在图20所描述的解码装置中,对第一码流进行解码可以得到原始图像的重建图像,对第二码流进行解码后得到保真度图的重建图,第一码流由对原始图像进行编码得到,第二码流由对保真度图进行编码得到,保真度图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真;若编码为无损编码,则保真度图的重建图和保真度图相同;若编码为有损编码,则保真度图的重建图包括对保真度图进行编码而生成的编码失真;故保真度图的重建图可以用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真;因此,本申请实施例能够在解码设备获得编码图像的失真强度信息。
图21为本申请实施例提供的一种解码装置的示意性框图；该解码装置2100应用于解码设备，该解码装置2100包括处理单元2101和通信单元2102，其中，该处理单元2101，用于执行如图13所示的方法实施例中的任一步骤，且在执行诸如获取等数据传输时，可选择地调用该通信单元2102来完成相应操作。下面进行详细说明。
所述处理单元2101用于:对第一码流进行解码以得到原始图像的重建图像和目标量化参数信息,所述目标量化参数信息包括所述重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值;根据所述目标量化参数信息构建所述重建图像的量化参数图,其中,所述重建图像的量化参数图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
在一种可能的设计中,所述第二图像块为编码单元。
在一种可能的设计中,所述重建图像的量化参数图包括多个第二元素,所述多个第二图像块与所述多个第二元素一一对应,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述第二图像块包括三个色彩分量,所述重建图像的量化参数图为包括色彩分量、宽和高三个维度的三维阵列,所述重建图像的量化参数图中的任一色彩分量A下的二维阵列包括多个第二元素,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的色彩分量A的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
在一种可能的设计中,所述处理单元2101具体用于:当所述目标量化参数信息包括所述多个编码单元中的部分编码单元的量化参数值时,根据所述部分编码单元的量化参数值和/或参考量化参数图,得到目标编码单元的量化参数值,其中,所述参考量化参数图为所述重建图像的参考图像的量化参数图,所述目标编码单元为所述多个编码单元中除所述部分编码单元之外的编码单元;根据所述部分编码单元的量化参数值和所述目标编码单元的量化参数值,得到所述重建图像的量化参数图。
在一种可能的设计中,所述参考量化参数图包括多个参考元素,所述多个参考元素中的任一参考元素的值为所述参考图像中的编码单元的量化参数值;所述处理单元2101具体用于:将目标元素的值作为所述目标编码单元中任一目标编码单元的量化参数值,其中,所述目标元素为所述参考量化参数图中的参考元素,所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置确定,或所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置和所述任一目标编码单元的运动矢量确定。
其中，该解码装置2100还可以包括存储单元2103，用于存储解码设备的程序代码和数据。该处理单元2101可以是处理器，该通信单元2102可以是收发器，该存储单元2103可以是存储器。
需要说明的是,各个单元的实现还可以对应参照图13所示的方法实施例的相应描述。
在图21所描述的解码装置中,在对原始图像编码得到的第一码流进行解码操作,可以得到重建图像的各个区域(第二图像块)的量化参数值;而根据重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值可以构建重建图像的量化参数图,且重建图像的量化参数图可以用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失 真,因此,本申请实施例能够在解码端获得编码图像的失真强度信息。
本申请实施例提供编码视频流的装置,包含处理器和存储器。所述存储器存储指令,所述指令使得所述处理器执行图7所示的方法。
本申请实施例提供解码视频流的装置,包含处理器和存储器。所述存储器存储指令,所述指令使得所述处理器执行图11所示的方法。
本申请实施例提供解码视频流的装置,包含处理器和存储器。所述存储器存储指令,所述指令使得所述处理器执行图13所示的方法。
本申请实施例提供一种计算机可读存储介质,其上储存有指令,当所述指令执行时,使得一个或多个处理器编码视频数据。所述指令使得所述一个或多个处理器执行图7、图11或图13所示的方法。
本申请实施例提供包括程序代码的计算机程序产品,所述程序代码在运行时执行图7、图11或图13所示的方法。
本申请实施例提供一种编码器(20),包括处理电路,用于执行图7所示的方法。
本申请实施例提供一种解码器(30),包括处理电路,用于执行图11或图13所示的方法。
本申请实施例提供一种编码器,包括:一个或多个处理器;非瞬时性计算机可读存储介质,耦合到所述处理器并存储由所述处理器执行的程序,其中所述程序在由所述处理器执行时,使得所述编码器执行图7的方法。
本申请实施例提供一种解码器,包括:一个或多个处理器;非瞬时性计算机可读存储介质,耦合到所述处理器并存储由所述处理器执行的程序,其中所述程序在由所述处理器执行时,使得所述解码器执行图11或图13所示的方法。
本申请实施例提供一种非瞬时性计算机可读存储介质,包括程序代码,当其由计算机设备执行时,用于执行图7、图11或图13所示的方法。
本申请实施例提供一种非瞬时性存储介质,包括根据图7所示的方法编码的比特流。
本领域技术人员能够领会,结合本文公开描述的各种说明性逻辑框、模块和算法步骤所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施,那么各种说明性逻辑框、模块、和步骤描述的功能可作为一或多个指令或代码在计算机可读媒体上存储或传输,且由基于硬件的处理单元执行。计算机可读媒体可包含计算机可读存储媒体,其对应于有形媒体,例如数据存储媒体,或包括任何促进将计算机程序从一处传送到另一处的媒体(例如,根据通信协议)的通信媒体。以此方式,计算机可读媒体大体上可对应于(1)非暂时性的有形计算机可读存储媒体,或(2)通信媒体,例如信号或载波。数据存储媒体可为可由一或多个计算机或一或多个处理器存取以检索用于实施本申请中描述的技术的指令、代码和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。
作为实例而非限制,此类计算机可读存储媒体可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来存储指令或数据结构的形式的所要程序代码并且可由计算机存取的任何其它媒体。并且,任何连接被恰当地称作计算机可读媒体。举例来说,如果使用同轴缆线、光纤缆线、双绞线、数字订户线(DSL)或例如红外线、无线电和微波等无线技术从网站、服务器或其它远程源传输指令,那么同轴缆线、光纤缆线、双绞线、DSL或例如红外线、无线电和微波等无线技术包含在媒体的定义中。但是,应理解,所述计算机可读存储媒体和数据存储媒体并不包括连接、载波、信 号或其它暂时媒体,而是实际上针对于非暂时性有形存储媒体。如本文中所使用,磁盘和光盘包含压缩光盘(CD)、激光光盘、光学光盘、数字多功能光盘(DVD)和蓝光光盘,其中磁盘通常以磁性方式再现数据,而光盘利用激光以光学方式再现数据。以上各项的组合也应包含在计算机可读媒体的范围内。
可通过例如一或多个数字信号处理器(DSP)、通用微处理器、专用集成电路(ASIC)、现场可编程逻辑阵列(FPGA)或其它等效集成或离散逻辑电路等一或多个处理器来执行指令。因此,如本文中所使用的术语“处理器”可指前述结构或适合于实施本文中所描述的技术的任一其它结构中的任一者。另外,在一些方面中,本文中所描述的各种说明性逻辑框、模块、和步骤所描述的功能可以提供于经配置以用于编码和解码的专用硬件和/或软件模块内,或者并入在组合编解码器中。而且,所述技术可完全实施于一或多个电路或逻辑元件中。
本申请的技术可在各种各样的装置或设备中实施,包含无线手持机、集成电路(IC)或一组IC(例如,芯片组)。本申请中描述各种组件、模块或单元是为了强调用于执行所揭示的技术的装置的功能方面,但未必需要由不同硬件单元实现。实际上,如上文所描述,各种单元可结合合适的软件和/或固件组合在编码解码器硬件单元中,或者通过互操作硬件单元(包含如上文所描述的一或多个处理器)来提供。
以上所述,仅为本申请示例性的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应该以权利要求的保护范围为准。

Claims (42)

  1. 一种编码方法,其特征在于,包括:
    对原始图像进行编码以得到第一码流;
    对保真度图进行编码以得到第二码流,其中,所述保真度图用于表示所述原始图像的至少部分区域与重建图像的至少部分区域之间的失真,所述重建图像是对所述第一码流进行解码后得到的。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    将所述原始图像划分为多个第一图像块,以及将所述重建图像划分为多个第二图像块,其中,划分所述原始图像的划分策略与划分所述重建图像的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;
    或将所述原始图像的预设区域划分为多个第一图像块,以及将所述重建图像的预设区域划分为多个第二图像块,其中,划分所述原始图像的预设区域的划分策略与划分所述重建图像的预设区域的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;
    根据所述多个第二图像块中的任一第二图像块与所述任一第二图像块对应的第一图像块计算得到所述任一第二图像块的保真度值,所述保真度图包括所述任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的第一图像块之间的失真。
  3. 根据权利要求2所述的方法,其特征在于,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
  4. 根据权利要求2所述的方法,其特征在于,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
  5. 根据权利要求3或4所述的方法,其特征在于,所述对保真度图进行编码以得到第二码流,包括:
    对所述任一第一元素进行熵编码以得到所述第二码流,对所述任一第一元素的熵编码独立于其他第一元素的熵编码;或者,
    根据所述已编码的第一元素中的至少一个第一元素的值确定所述任一第一元素的值的概率分布或所述任一第一元素的预测值,并根据所述任一第一元素的值的概率分布或所述任一 第一元素的预测值对所述任一第一元素进行熵编码,以得到所述第二码流;
    其中,所述第二码流包括所述多个第一元素的码流。
  6. 根据权利要求3或4所述的方法,其特征在于,所述对保真度图进行编码以得到第二码流,包括:
    对所述任一第一元素进行量化,以得到量化后的第一元素;
    对所述量化后的第一元素进行编码,以得到所述第二码流;
    其中,所述第二码流包括所述多个第一元素的码流。
  7. 一种解码方法,其特征在于,包括:
    对第一码流进行解码以得到原始图像的重建图像;
    对第二码流进行解码以得到保真度图的重建图,其中,所述第二码流是对所述保真度图进行编码得到的,所述保真度图的重建图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
  8. 根据权利要求7所述的方法,其特征在于,
    所述保真度图包括多个第二图像块中的任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的原始图像块之间的失真。
  9. 根据权利要求8所述的方法,其特征在于,所述保真度图包括多个第一元素,所述多个第二图像块与所述多个第一元素一一对应,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的保真度值,所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
  10. 根据权利要求8所述的方法,其特征在于,所述第二图像块包括三个色彩分量,所述保真度图为包括色彩分量、宽和高三个维度的三维阵列,所述保真度图中的任一色彩分量A下的二维阵列包括多个第一元素,所述多个第一元素中的任一第一元素的值为与所述任一第一元素对应的第二图像块的色彩分量A的保真度值,所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第一元素在所述保真度图中的任一色彩分量A下的二维阵列的位置根据与所述任一第一元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
  11. 根据权利要求9或10所述的方法,其特征在于,所述对第二码流进行解码以得到保真度图的重建图,包括:
    对所述第二码流进行解码,以得到所述任一第一元素的重建保真度值;
    根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
  12. 根据权利要求9或10所述的方法,其特征在于,所述第二码流是对量化后的第一元素进行编码得到的;所述对第二码流进行解码以得到保真度图的重建图,包括:
    对所述第二码流进行解码,以得到所述量化后的第一元素的重建保真度值;
    对所述量化后的第一元素的重建保真度值进行反量化,以得到所述任一第一元素的重建保真度值;
    根据所述任一第一元素的重建保真度值得到所述保真度图的重建图。
  13. 一种解码方法,其特征在于,包括:
    对第一码流进行解码以得到原始图像的重建图像和目标量化参数信息,所述目标量化参数信息包括所述重建图像的多个第二图像块中的全部或部分第二图像块的量化参数值;
    根据所述目标量化参数信息构建所述重建图像的量化参数图,其中,所述重建图像的量化参数图用于表示所述原始图像的至少部分区域与所述重建图像的至少部分区域之间的失真。
  14. 根据权利要求13所述的方法,其特征在于,所述第二图像块为编码单元。
  15. 根据权利要求13或14所述的方法,其特征在于,所述重建图像的量化参数图包括多个第二元素,所述多个第二图像块与所述多个第二元素一一对应,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
  16. 根据权利要求13或14所述的方法,其特征在于,所述第二图像块包括三个色彩分量,所述重建图像的量化参数图为包括色彩分量、宽和高三个维度的三维阵列,所述重建图像的量化参数图中的任一色彩分量A下的二维阵列包括多个第二元素,所述多个第二元素中的任一第二元素的值为与所述任一第二元素对应的第二图像块的色彩分量A的量化参数值,所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像中的位置确定,或所述任一第二元素在所述重建图像的量化参数图中的任一色彩分量A下的二维阵列的位置根据与所述任一第二元素对应的第二图像块在所述重建图像的预设区域中的位置确定。
  17. 根据权利要求14-16中任一项所述的方法,其特征在于,所述根据所述目标量化参数信息构建所述重建图像的量化参数图,包括:
    当所述目标量化参数信息包括所述多个编码单元中的部分编码单元的量化参数值时,根据所述部分编码单元的量化参数值和/或参考量化参数图,得到目标编码单元的量化参数值,其中,所述参考量化参数图为所述重建图像的参考图像的量化参数图,所述目标编码单元为所述多个编码单元中除所述部分编码单元之外的编码单元;
    根据所述部分编码单元的量化参数值和所述目标编码单元的量化参数值,得到所述重建图像的量化参数图。
  18. 根据权利要求17所述的方法,其特征在于,所述参考量化参数图包括多个参考元素,所述多个参考元素中的任一参考元素的值为所述参考图像中的编码单元的量化参数值;所述根据参考量化参数图,得到目标编码单元的量化参数值,包括:
    将目标元素的值作为所述目标编码单元中任一目标编码单元的量化参数值,其中,所述目标元素为所述参考量化参数图中的参考元素,所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置确定,或所述目标元素在所述参考量化参数图中的位置根据所述任一目标编码单元在所述重建图像中的位置和所述任一目标编码单元的运动矢量确定。
  19. 一种编码设备,其特征在于,包括:
    视频编码器,用于对原始图像进行编码以得到第一码流;
    保真度图编码器,用于对保真度图进行编码以得到第二码流,其中,所述保真度图用于表示所述原始图像的至少部分区域与重建图像的至少部分区域之间的失真,所述重建图像是对所述第一码流进行解码后得到的。
  20. 根据权利要求19所述的编码设备,其特征在于,所述编码设备还包括保真度图计算器,所述保真度图计算器用于:
    将所述原始图像划分为多个第一图像块,以及将所述重建图像划分为多个第二图像块,其中,划分所述原始图像的划分策略与划分所述重建图像的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;
    或将所述原始图像的预设区域划分为多个第一图像块,以及将所述重建图像的预设区域划分为多个第二图像块,其中,划分所述原始图像的预设区域的划分策略与划分所述重建图像的预设区域的划分策略相同,所述多个第一图像块与所述多个第二图像块一一对应;
    根据所述多个第二图像块中的任一第二图像块与所述任一第二图像块对应的第一图像块计算得到所述任一第二图像块的保真度值,所述保真度图包括所述任一第二图像块的保真度值,所述任一第二图像块的保真度值用于表示所述任一第二图像块与所述任一第二图像块对应的第一图像块之间的失真。
  21. The encoding device according to claim 20, wherein the fidelity map comprises a plurality of first elements in one-to-one correspondence with the plurality of second image blocks, and the value of any one of the first elements is the fidelity value of the second image block corresponding to that first element; the position of the first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  22. The encoding device according to claim 20, wherein each second image block comprises three color components and the fidelity map is a three-dimensional array whose three dimensions are color component, width, and height; the two-dimensional array under any color component A of the fidelity map comprises a plurality of first elements, and the value of any one of these first elements is the fidelity value of color component A of the second image block corresponding to that first element; the position of the first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in the preset area of the reconstructed image.
  23. The encoding device according to claim 21 or 22, wherein the fidelity map encoder is specifically configured to:
    entropy-encode any one of the first elements to obtain the second code stream, the entropy encoding of that first element being independent of the entropy encoding of the other first elements; or
    determine a probability distribution of the value of the first element, or a predicted value of the first element, according to the value of at least one already-encoded first element, and entropy-encode the first element according to that probability distribution or predicted value, to obtain the second code stream;
    wherein the second code stream comprises the code streams of the plurality of first elements.
  24. The encoding device according to claim 21 or 22, wherein the fidelity map encoder is specifically configured to:
    quantize any one of the first elements to obtain a quantized first element;
    encode the quantized first element to obtain the second code stream;
    wherein the second code stream comprises the code streams of the plurality of first elements.
  25. A decoding device, comprising:
    a video decoder, configured to decode a first code stream to obtain a reconstructed image of an original image;
    a fidelity map decoder, configured to decode a second code stream to obtain a reconstruction map of a fidelity map, wherein the second code stream is obtained by encoding the fidelity map, and the reconstruction map of the fidelity map is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  26. The decoding device according to claim 25, wherein the fidelity map comprises a fidelity value of any one of a plurality of second image blocks, the fidelity value of a second image block being used to represent the distortion between that second image block and the original image block corresponding to it.
  27. The decoding device according to claim 26, wherein the fidelity map comprises a plurality of first elements in one-to-one correspondence with the plurality of second image blocks, and the value of any one of the first elements is the fidelity value of the second image block corresponding to that first element; the position of the first element in the fidelity map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in a preset area of the reconstructed image.
  28. The decoding device according to claim 26, wherein each second image block comprises three color components and the fidelity map is a three-dimensional array whose three dimensions are color component, width, and height; the two-dimensional array under any color component A of the fidelity map comprises a plurality of first elements, and the value of any one of these first elements is the fidelity value of color component A of the second image block corresponding to that first element; the position of the first element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in a preset area of the reconstructed image.
  29. The decoding device according to claim 27 or 28, wherein the fidelity map decoder is specifically configured to:
    decode the second code stream to obtain a reconstructed fidelity value of each first element;
    obtain the reconstruction map of the fidelity map from the reconstructed fidelity values of the first elements.
  30. The decoding device according to claim 27 or 28, wherein the second code stream is obtained by encoding quantized first elements, and the fidelity map decoder is specifically configured to:
    decode the second code stream to obtain a reconstructed fidelity value of each quantized first element;
    inverse-quantize the reconstructed fidelity value of the quantized first element to obtain the reconstructed fidelity value of the first element;
    obtain the reconstruction map of the fidelity map from the reconstructed fidelity values of the first elements.
  31. A decoding device, comprising:
    a video decoder, configured to decode a first code stream to obtain a reconstructed image of an original image and target quantization parameter information, the target quantization parameter information comprising quantization parameter values of all or some of a plurality of second image blocks of the reconstructed image;
    a quantization parameter map constructor, configured to construct a quantization parameter map of the reconstructed image according to the target quantization parameter information, wherein the quantization parameter map of the reconstructed image is used to represent the distortion between at least a partial area of the original image and at least a partial area of the reconstructed image.
  32. The decoding device according to claim 31, wherein the second image block is a coding unit.
  33. The decoding device according to claim 31 or 32, wherein the quantization parameter map of the reconstructed image comprises a plurality of second elements in one-to-one correspondence with the plurality of second image blocks, and the value of any one of the second elements is the quantization parameter value of the second image block corresponding to that second element; the position of the second element in the quantization parameter map is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in a preset area of the reconstructed image.
  34. The decoding device according to claim 31 or 32, wherein each second image block comprises three color components and the quantization parameter map of the reconstructed image is a three-dimensional array whose three dimensions are color component, width, and height; the two-dimensional array under any color component A of the quantization parameter map comprises a plurality of second elements, and the value of any one of these second elements is the quantization parameter value of color component A of the second image block corresponding to that second element; the position of the second element in the two-dimensional array under color component A is determined according to the position of the corresponding second image block in the reconstructed image, or according to the position of the corresponding second image block in a preset area of the reconstructed image.
  35. The decoding device according to any one of claims 32 to 34, wherein the quantization parameter map constructor is specifically configured to:
    when the target quantization parameter information comprises quantization parameter values of only some of the plurality of coding units, obtain quantization parameter values of target coding units according to the quantization parameter values of those coding units and/or a reference quantization parameter map, wherein the reference quantization parameter map is the quantization parameter map of a reference image of the reconstructed image, and the target coding units are the coding units of the plurality of coding units other than the coding units whose quantization parameter values are included;
    obtain the quantization parameter map of the reconstructed image from the quantization parameter values of the included coding units and the quantization parameter values of the target coding units.
  36. The decoding device according to claim 35, wherein the reference quantization parameter map comprises a plurality of reference elements, the value of any one of the reference elements being the quantization parameter value of a coding unit of the reference image, and the quantization parameter map constructor is specifically configured to:
    take the value of a target element as the quantization parameter value of any one of the target coding units, wherein the target element is a reference element of the reference quantization parameter map, and the position of the target element in the reference quantization parameter map is determined according to the position of the target coding unit in the reconstructed image, or according to the position of the target coding unit in the reconstructed image and a motion vector of the target coding unit.
  37. An encoder (20), comprising processing circuitry configured to perform the method according to any one of claims 1 to 6.
  38. A decoder (30), comprising processing circuitry configured to perform the method according to any one of claims 7 to 12 or 13 to 18.
  39. An encoder, comprising:
    one or more processors; and
    a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, wherein the program, when executed by the processors, causes the encoder to perform the method according to any one of claims 1 to 6.
  40. A decoder, comprising:
    one or more processors; and
    a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, wherein the program, when executed by the processors, causes the decoder to perform the method according to any one of claims 7 to 12 or 13 to 18.
  41. A non-transitory computer-readable storage medium comprising program code which, when executed by a computer device, performs the method according to any one of claims 1 to 6, 7 to 12, or 13 to 18.
  42. A non-transitory storage medium comprising a bitstream encoded according to the method of any one of claims 1 to 6.
PCT/CN2021/141403 2021-02-08 2021-12-25 Encoding and decoding method and related device WO2022166462A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110170984.8 2021-02-08
CN202110170984.8A CN114913249A (zh) 2021-02-08 2021-02-08 Encoding and decoding method and related device

Publications (1)

Publication Number Publication Date
WO2022166462A1 (zh)

Family

ID=82740847

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/141403 WO2022166462A1 (zh) Encoding and decoding method and related device

Country Status (2)

Country Link
CN (1) CN114913249A (zh)
WO (1) WO2022166462A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101601300A (zh) * 2006-12-14 2009-12-09 Thomson Licensing Method and device for coding and/or decoding bit-depth scalable video data using adaptive enhancement layer prediction
CN101459847A (zh) * 2007-12-13 2009-06-17 MediaTek Inc. Encoder, decoder, video frame coding method and bitstream decoding method
CN110035286A (zh) * 2012-07-09 2019-07-19 Vid Scale, Inc. Codec architecture for multi-layer video coding
US20140192884A1 (en) * 2013-01-04 2014-07-10 Canon Kabushiki Kaisha Method and device for processing prediction information for encoding or decoding at least part of an image
CN110446045A (zh) * 2019-07-09 2019-11-12 China Mobile (Hangzhou) Information Technology Co., Ltd. Video encoding method, apparatus, network device and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024077576A1 (zh) * 2022-10-13 2024-04-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Neural-network-based loop filtering and video encoding/decoding method, apparatus, and system

Also Published As

Publication number Publication date
CN114913249A (zh) 2022-08-16

Similar Documents

Publication Publication Date Title
WO2020211765A1 (en) An encoder, a decoder and corresponding methods harmonzting matrix-based intra prediction and secoundary transform core selection
KR102616713B1 (ko) Image prediction method, apparatus, and system, device, and storage medium
WO2020114394A1 (zh) Video encoding and decoding method, video encoder, and video decoder
CA3114341C (en) An encoder, a decoder and corresponding methods using compact mv storage
WO2020244579A1 (zh) MPM list construction method, method for acquiring intra prediction mode of chroma block, and apparatus
WO2020119814A1 (zh) Image reconstruction method and apparatus
CN111385572B (zh) Prediction mode determining method and apparatus, encoding device, and decoding device
CN114424571A (zh) Encoder, decoder, and corresponding methods
CN113455005A (zh) Deblocking filter for sub-partition boundaries generated by intra sub-partition coding tool
US11758134B2 (en) Picture partitioning method and apparatus
WO2020253681A1 (zh) Method and apparatus for constructing merge candidate motion information list, and codec
WO2021164014A1 (zh) Video encoding method and apparatus
WO2022166462A1 (zh) Encoding and decoding method and related device
CN114679583B (zh) Video encoder, video decoder, and corresponding methods
CN115836527A (zh) Encoder, decoder, and corresponding methods for adaptive loop filtering
WO2020259353A1 (zh) Entropy encoding/decoding method and apparatus for syntax elements, and codec
WO2020182194A1 (zh) Inter-frame prediction method and related apparatus
WO2020114393A1 (zh) Transform method, inverse transform method, video encoder, and video decoder
WO2020134817A1 (zh) Prediction mode determining method and apparatus, encoding device, and decoding device
CN113228632A (zh) Encoder, decoder, and corresponding methods for local illumination compensation
CN114503593B (zh) Encoder, decoder, and corresponding methods
US20240146909A1 (en) Image prediction method, apparatus, and system, device, and storage medium
WO2020143292A1 (zh) Inter-frame prediction method and apparatus
TW202133618A (zh) Codec and encoding/decoding methods, computer program product, and non-transitory storage medium
KR20210122800A (ko) Encoder, decoder, and corresponding methods for limiting the size of sub-partitions from the intra sub-partition coding mode tool

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21924455
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21924455
    Country of ref document: EP
    Kind code of ref document: A1