WO2017129147A1 - Image encoding and decoding method and apparatus, and image encoding and decoding system - Google Patents
Image encoding and decoding method and apparatus, and image encoding and decoding system
- Publication number
- WO2017129147A1 WO2017129147A1 PCT/CN2017/077167 CN2017077167W WO2017129147A1 WO 2017129147 A1 WO2017129147 A1 WO 2017129147A1 CN 2017077167 W CN2017077167 W CN 2017077167W WO 2017129147 A1 WO2017129147 A1 WO 2017129147A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- adjustment factor
- value
- video
- code stream
- image
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/98—Adaptive-dynamic-range coding [ADRC]
Definitions
- the present invention relates to the field of video image coding and decoding, and in particular to an image coding and decoding method and apparatus, and an image coding and decoding system.
- High-Dynamic Range (HDR) video can greatly expand contrast and color at the same time, and the bright parts of the picture become brighter, so that the real environment can be better reflected and the visual experience improved.
- the HDR video storage format used in the related art requires a lot of storage space. Therefore, designing a new encoding method based on the characteristics of HDR video is a key issue for HDR video.
- the MPEG standard organization uses the Perceptual Quantizer (PQ) to convert HDR video to accommodate the H.265/HEVC Main 10 Profile encoder.
- the PQ-based HDR video coding method in the related art uniformly encodes a fixed and very large luminance range without considering the actual luminance range of the HDR video, so when encoding a specific HDR video it often cannot make full use of the quantization values (when the number of coded bits is fixed), and quantization loss exists.
- Embodiments of the present invention provide an image encoding and decoding method and apparatus, and an image encoding and decoding system, to at least solve the problem in the related art that the quantization values cannot be fully utilized when encoding a specific HDR video and that quantization loss exists.
- a video encoding method based on adaptive perceptual quantization is provided for the encoding end. The method includes: determining a quantization adjustment factor according to the video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream; and transmitting the input code stream to an encoder/decoder for encoding and decoding.
- determining the quantization adjustment factor according to the video image to be processed includes: performing color space conversion on the video image to be processed and acquiring a luminance component of the converted video image; extracting a luminance maximum value and a luminance minimum value from the luminance component; and determining the quantization adjustment factor based on the luminance maximum value and the luminance minimum value.
- determining the quantization adjustment factor according to the maximum value and the minimum value includes: determining the quantization adjustment factor ratio based on Formula 1,
- where Ymax is the luminance maximum value and Ymin is the luminance minimum value.
- processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream includes: determining an adaptive coding function APQ_TF(L) based on Formula 2,
- where the coefficients m1 and m2 are 0.1593 and 78.8438, respectively, and the coefficients c1, c2 and c3 are 0.8359, 18.8516 and 18.6875, respectively;
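The text quotes the coefficients of Formula 2 but does not reproduce the formula itself. The values match the constants of the SMPTE ST 2084 (PQ) transfer function, so, under that assumption, a plausible baseline form of the coding function (before the quantization adjustment factor is applied) is:

$$\mathrm{PQ\_TF}(L)=\left(\frac{c_1+c_2 L^{m_1}}{1+c_3 L^{m_1}}\right)^{m_2},\qquad L\in[0,1],$$

with $m_1=0.1593$, $m_2=78.8438$, $c_1=0.8359$, $c_2=18.8516$, $c_3=18.6875$. How the quantization adjustment factor ratio modifies this curve is not specified in this text.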
- processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream includes: performing binarization on the quantization adjustment factor, and encoding the processing result to obtain an encoded code stream;
- the encoded code stream is written into a data unit and combined with the video code stream to obtain an input code stream carrying the encoded code stream; wherein the data unit includes: a parameter set, or an auxiliary information unit, or a user-defined data unit.
- a video encoding method based on adaptive perceptual quantization is provided for the decoding end. The high dynamic range video compression coding method based on adaptive perceptual quantization includes: parsing an input code stream to obtain a quantization adjustment factor and a video code stream to be recovered; and processing the video code stream to be recovered according to the quantization adjustment factor to obtain a final video image.
- parsing the input code stream to obtain the quantization adjustment factor and the video code stream to be recovered includes: parsing the input code stream, and acquiring the video code stream to be recovered and a data unit from the input code stream; obtaining an encoded code stream from the data unit; and processing the encoded code stream to obtain the quantization adjustment factor; wherein the data unit comprises: a parameter set, or an auxiliary information unit, or a user-defined data unit.
- processing the video code stream to be recovered according to the quantization adjustment factor to obtain a final video image includes: processing the video code stream to be recovered to obtain a video image to be recovered, and extracting pixel value components of the video image to be recovered; determining an adaptive inverse coding function inverseAPQ_TF from the quantization adjustment factor ratio based on Formula 3,
- the adaptive inverse coding function inverseAPQ_TF corrects the pixel value components of the video image to be recovered to obtain corrected components; and reconstruction is performed based on the corrected components to obtain the final video image.
- a video encoding system based on adaptive perceptual quantization is provided, comprising: a first control unit configured to perform the encoding-end adaptive perceptual quantization based video encoding method described above; and a second control unit configured to perform the decoding-end adaptive perceptual quantization based video encoding method described above.
- an image encoding method is provided, including: determining an adjustment factor according to video image pixel sample values; performing transform processing on the video image according to the adjustment factor, and encoding the transformed video image; and writing the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image.
- determining the adjustment factor according to the video image pixel sample values comprises: converting the video image pixel sample values into pixel luminance values; determining a luminance maximum value and a luminance minimum value among the pixel luminance values; and determining the adjustment factor according to the luminance maximum value and the luminance minimum value.
- determining the adjustment factor according to the luminance maximum value and the luminance minimum value comprises: calculating the difference between the luminance maximum value and the luminance minimum value; setting a linear weighting of the logarithm of the difference as a first adjustment factor; and setting the first adjustment factor as the adjustment factor, or setting the reciprocal of the first adjustment factor as the adjustment factor.
- performing transform processing on the video image according to the adjustment factor comprises: performing correction processing on the sampled components of the pixel sample values of the video image according to the adjustment factor; and obtaining the transformed values of the sampled components according to the output values obtained from the correction.
- the manner of performing correction processing on the sampled components of the pixel sample values of the video image includes: performing, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- writing the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image includes: performing binarization on the value of the adjustment factor; and encoding the binarized output and writing the coded bits into a data unit in the encoded code stream of the video image; wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, a user-defined data unit.
- the manner of performing binarization on the value of the adjustment factor includes at least one of: converting the value of the adjustment factor into a binary representation; converting the value of the adjustment factor into the binary representation of one or more integer parameters.
- an image encoding apparatus is provided, comprising: a determining module configured to determine an adjustment factor according to video image pixel sample values; an encoding module configured to perform transform processing on the video image according to the adjustment factor and to encode the transformed video image; and a writing module configured to write the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image.
- the determining module includes: a conversion unit configured to convert the video image pixel sample values into pixel luminance values; a first determining unit configured to determine a luminance maximum value and a luminance minimum value among the pixel luminance values; and a second determining unit configured to determine the adjustment factor based on the luminance maximum value and the luminance minimum value.
- the second determining unit includes: a calculation subunit configured to calculate the difference between the luminance maximum value and the luminance minimum value; a first setting subunit configured to set a linear weighting of the logarithm of the difference as a first adjustment factor; and a second setting subunit configured to set the first adjustment factor as the adjustment factor, or to set the reciprocal of the first adjustment factor as the adjustment factor.
- the encoding module includes: a first correction unit configured to perform correction processing on the sampled components of the pixel sample values of the video image according to the adjustment factor; and an encoding unit configured to obtain the transformed values of the sampled components according to the output values obtained from the correction.
- the first correction unit includes: a first mapping subunit configured to perform, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- the writing module includes: a binarization unit configured to perform binarization on the value of the adjustment factor; and a writing unit configured to encode the binarized output and to write the coded bits into a data unit in the encoded code stream of the video image; wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, a user-defined data unit.
- the binarization unit includes at least one of: a first conversion subunit configured to convert the value of the adjustment factor into a binary representation; and a second conversion subunit configured to convert the value of the adjustment factor into the binary representation of one or more integer parameters.
- an image decoding method is provided, including: parsing a code stream to obtain an adjustment factor; and transforming a decoded restored image according to the adjustment factor; wherein the decoded restored image includes: an image obtained by decoding the code stream, or a post-processed version of an image obtained by decoding the code stream.
- parsing the code stream and obtaining the adjustment factor from the parsed code stream includes: parsing a data unit in the code stream to obtain a parameter for determining the adjustment factor; wherein the data unit includes at least one of the following: a parameter set, an auxiliary information unit, and a user-defined data unit; and determining the value of the adjustment factor according to the parameter.
- determining the value of the adjustment factor according to the parameter includes: setting the value of the parameter as the value of the adjustment factor; or setting an output value obtained by calculating the parameter according to a preset operation rule as the value of the adjustment factor.
- transforming the decoded restored image according to the adjustment factor comprises: performing correction processing on the sampled components of the pixel sample values of the decoded restored image according to the adjustment factor; and calculating the transformed values of the sampled components according to the output values obtained from the correction processing.
- the manner of performing correction processing on the sampled components of the pixel sample values of the decoded restored image includes: performing, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- an image decoding apparatus is provided, comprising: a decoding module configured to parse a code stream to obtain an adjustment factor; and a transform module configured to transform a decoded restored image according to the adjustment factor; wherein the decoded restored image comprises: an image obtained by decoding the code stream, or a post-processed version of an image obtained by decoding the code stream.
- the decoding module includes: a decoding unit, configured to parse the data unit in the code stream to obtain a parameter for determining the adjustment factor; wherein the data unit includes at least one of the following: a parameter set, an auxiliary information unit, and a user-defined data unit; and a third determining unit configured to determine a value of the adjustment factor according to the parameter.
- the third determining unit includes: a third setting subunit, configured to set a value of the parameter to a value of the adjustment factor; or a fourth setting subunit, configured to The output value calculated by the parameter according to the preset operation rule is set to the value of the adjustment factor.
- the transform module includes: a second correction unit configured to perform correction processing on the sampled components of the pixel sample values of the decoded restored image according to the adjustment factor; and a calculation unit configured to calculate the transformed values of the sampled components according to the output values obtained from the correction processing.
- the second correction unit comprises: a second mapping subunit configured to perform, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- an image encoding and decoding system comprising the encoding device according to any of the above, and the image decoding device according to any of the above.
- a storage medium is also provided.
- the storage medium is arranged to store program code for performing the following steps: determining a quantization adjustment factor according to the video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream; and transmitting the input code stream to an encoder/decoder for encoding and decoding.
- the storage medium is further arranged to store program code for performing the following steps:
- Performing color space conversion on the video image to be processed and acquiring a luminance component of the converted video image; extracting a luminance maximum value and a luminance minimum value from the luminance component; and determining the quantization adjustment factor according to the luminance maximum value and the luminance minimum value.
- a storage medium is also provided.
- the storage medium is arranged to store program code for performing the following steps:
- Parsing the input code stream to obtain a quantization adjustment factor and a video code stream to be recovered, and processing the video code stream to be recovered according to the quantization adjustment factor to obtain a final video image.
- the storage medium is further arranged to store program code for performing the following steps:
- Parsing the input code stream and obtaining a video code stream to be recovered and a data unit from the input code stream; acquiring an encoded code stream from the data unit; and processing the encoded code stream to obtain a quantization adjustment factor;
- the data unit includes: a parameter set, or an auxiliary information unit, or a user-defined data unit.
- In the embodiments of the present invention, the quantization adjustment factor is determined according to the video image to be processed; the video image to be processed is processed according to the quantization adjustment factor to obtain a video code stream; the quantization adjustment factor is processed and combined with the video code stream to obtain an input code stream; and the input code stream is transmitted to the encoder/decoder for encoding and decoding. This solves the problem in the related art that the quantization values cannot be fully utilized when encoding a specific HDR video and that quantization loss exists, so that the quantization values can be more fully utilized, the accuracy of HDR video coding is improved, and the quantization loss is reduced.
- FIG. 1 is a flowchart of a video image encoding and decoding method based on adaptive perceptual quantization according to an embodiment of the present invention
- FIG. 2 is a schematic structural diagram of a video coding system based on adaptive perceptual quantization according to an embodiment of the present invention
- FIG. 3 is a flowchart of an encoding method of an image according to an embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of an image encoding apparatus according to an embodiment of the present invention.
- FIG. 5 is a flowchart of a method of decoding an image according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of an image decoding apparatus according to an embodiment of the present invention.
- FIG. 7(a) is a reconstructed frame obtained by encoding Market3 using the HDR anchor;
- FIG. 7(b) is a partial enlarged view of the reconstructed frame obtained by encoding Market3 using the HDR anchor;
- FIG. 7(c) is a reconstructed frame obtained by encoding Market3 using the adaptive perceptual quantization based video encoding method provided by the present invention;
- FIG. 7(d) is a partial enlarged view of the reconstructed frame obtained by encoding Market3 using the adaptive perceptual quantization based video encoding method provided by the present invention;
- FIG. 8(a) is a reconstructed frame obtained by encoding Balloon using the HDR anchor;
- FIG. 8(b) is a first partial enlarged view of the reconstructed frame obtained by encoding Balloon using the HDR anchor;
- FIG. 8(c) is a second partial enlarged view of the reconstructed frame obtained by encoding Balloon using the HDR anchor;
- FIG. 8(d) is a reconstructed frame obtained by encoding Balloon using the adaptive perceptual quantization based video encoding method provided by the present invention;
- FIG. 8(e) is a first partial enlarged view of the reconstructed frame obtained by encoding Balloon using the adaptive perceptual quantization based video encoding method provided by the present invention;
- FIG. 8(f) is a second partial enlarged view of the reconstructed frame obtained by encoding Balloon using the adaptive perceptual quantization based video encoding method provided by the present invention.
- FIG. 1 is a flowchart of a video image encoding and decoding method based on adaptive perceptual quantization according to an embodiment of the present invention. As shown in FIG. 1, the method is divided into an encoding part and a decoding part, which are described separately below.
- At the encoding end, the high dynamic range video compression coding method based on adaptive perceptual quantization includes:
- Step S11 Determine a quantization adjustment factor according to the video image to be processed.
- Step S12 Process the video image to be processed according to the quantization adjustment factor to obtain a video code stream.
- Step S13 Processing the quantization adjustment factor, and combining the video code stream to obtain an input code stream.
- Step S14 The input code stream is transmitted to an encoder/decoder for encoding and decoding processing.
- the quantization adjustment factor is first acquired, and then the video to be processed is processed according to the quantization adjustment factor to obtain a processed video code stream.
- the quantization adjustment factor is processed, and the processing result is combined with the video code stream to obtain an input code stream.
- the video is processed using adaptive adjustment.
- the size of the adaptively adjusted quantization interval is determined by the calculated quantization adjustment factor, and the quantization adjustment factor is related to the image to be processed. Therefore, when the number of coding bits is fixed, the quantization values can be more fully utilized, the accuracy of HDR video coding is improved, and the quantization loss is reduced, thereby solving the problem that the related art often cannot make full use of the quantization values when encoding a specific HDR video and that quantization loss exists.
- the method proposed in the present invention is described by taking a 16-bit HDR video as an example.
- the invention provides a video coding method based on adaptive perceptual quantization, which comprises determining a quantization adjustment factor according to the video image to be processed, processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream, and processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream.
- determining a quantization adjustment factor according to the video image to be processed including:
- a quantization adjustment factor is determined based on the luminance maximum value and the luminance minimum value.
- step S11 may include:
- Step S101 Perform color space conversion on the video image to be processed, and acquire a luminance component of the converted video image.
- Step S102 Extract a brightness maximum value and a brightness minimum value in the brightness component.
- Step S103 Determine a quantization adjustment factor according to the brightness maximum value and the brightness minimum value.
- the video image to be processed is subjected to color space conversion, that is, converted from the RGB color space to the YCbCr color space, and after conversion the luminance component, i.e. the Y component, of each pixel in the video image is extracted.
- the conversion and extraction formula is:
- R is the value of the red component of a single pixel in the high dynamic range video to be processed
- G is the value of the green component of a single pixel in the high dynamic range video to be processed
- B is the value of the blue component of a single pixel in the high dynamic range video to be processed.
- the luminance maximum value and the luminance minimum value therein are extracted.
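Since the conversion formula itself is not reproduced above, the following minimal sketch illustrates steps S101 and S102 under the assumption of linear-light RGB input and BT.2020 luma weights; the weights are an assumption, not values taken from the patent.

```python
import numpy as np

# Assumed BT.2020 luma weights for extracting the Y component from RGB.
BT2020_WEIGHTS = np.array([0.2627, 0.6780, 0.0593])

def luminance_range(rgb: np.ndarray) -> tuple[float, float]:
    """rgb: H x W x 3 array of linear RGB samples of the frame to process."""
    y = rgb @ BT2020_WEIGHTS                  # per-pixel luminance component Y
    return float(y.max()), float(y.min())     # (Y_max, Y_min) used by Formula 1
```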
- the quantization adjustment factor corresponding to each pixel is determined, and the specific determination process is as follows.
- determining a quantization adjustment factor according to the maximum value and the minimum value including:
- Ymax is the luminance maximum value and Ymin is the luminance minimum value.
- the quantization adjustment factor corresponding to each pixel is determined, and the specific determination process is as shown in Formula 1.
- the expression of the quantization adjustment factor ratio can also be:
- the reason for using the above form is that the two-part addition increases data processing accuracy when floating-point arithmetic is performed on a computer.
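Formula 1 is not reproduced in this text; the claims only state that the factor is a linear weighting of the logarithm of the luminance range and that a two-part sum may be used for floating-point accuracy. The sketch below is therefore only a hypothetical illustration, with placeholder weights `a` and `b`.

```python
import math

# Hypothetical sketch of Formula 1. The weights a and b are illustrative
# placeholders, not values from the patent.
def quantization_adjustment_factor(y_max: float, y_min: float,
                                   a: float = 1.0, b: float = 0.0) -> float:
    log_range = math.log10(max(y_max - y_min, 1e-6))  # guard against a zero range
    part1 = a * log_range    # linear weighting of the log of the luminance range
    part2 = b                # second addend of the two-part form
    return part1 + part2
```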
- processing the to-be-processed video image according to the quantization adjustment factor to obtain a video code stream including:
- where the coefficients m1 and m2 are 0.1593 and 78.8438, respectively, and the coefficients c1, c2 and c3 are 0.8359, 18.8516 and 18.6875, respectively;
- the correction component is processed to obtain a video bitstream.
- the manner of obtaining the video code stream in step S12 can be implemented as follows:
- Step S201 determining an adaptive coding function APQ_TF(L) based on Equation 2,
- where the coefficients m1 and m2 are 0.1593 and 78.8438, respectively, and the coefficients c1, c2 and c3 are 0.8359, 18.8516 and 18.6875, respectively.
- Step S202 Extract pixel value components of the video image to be processed.
- the pixel value components extracted here are the components of the three channels in the RGB color space of each pixel in the video image to be processed.
- Step S203 correcting the pixel value component based on the adaptive encoding function APQ_TF(L) to obtain a corrected component.
- each pixel value in the video image to be processed is corrected in the three channels of the RGB color space, and the specific formula used for this processing is as follows:
- R is the value of the red component of a single pixel in the video image to be processed
- G is the value of the green component of a single pixel in the video image to be processed
- B is the value of the blue component of a single pixel in the video image to be processed
- R' is the value of the red component of a single pixel in the video image to be processed after correction
- G' is the value of the green component of a single pixel in the video image to be processed after correction
- B' is the value of the blue component of a single pixel in the video image to be processed after correction.
- the function max(x, y) returns the larger of the two values, and
- min(x, y) returns the smaller of the two values.
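A hedged sketch of steps S201 to S203: a PQ-style curve built from the quoted coefficients is applied per channel. How the quantization adjustment factor ratio enters APQ_TF(L) is not disclosed in this text, so scaling the normalized 16-bit input by ratio is purely an illustrative assumption.

```python
import numpy as np

M1, M2 = 0.1593, 78.8438
C1, C2, C3 = 0.8359, 18.8516, 18.6875

def pq_tf(l: np.ndarray) -> np.ndarray:
    """PQ-style curve using the coefficients quoted in the text; l in [0, 1]."""
    lm = np.power(np.clip(l, 0.0, 1.0), M1)
    return np.power((C1 + C2 * lm) / (1.0 + C3 * lm), M2)

def apq_correct(rgb16: np.ndarray, ratio: float) -> np.ndarray:
    """Sketch of step S203: per-channel correction of a 16-bit HDR frame.
    Applying `ratio` as a pre-scale of the normalized input is an assumption."""
    l = np.clip(rgb16 / 65535.0 * ratio, 0.0, 1.0)  # normalize 16-bit samples, apply ratio
    return pq_tf(l)                                  # corrected R'G'B' in [0, 1]
```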
- Step S204 Processing the correction component to obtain a video code stream.
- the process of obtaining the video code stream includes the following steps:
- the conversion matrix used when converting from the R'G'B' color space to the Y'CbCr color space is:
- the bit depth BitDepthY of the Y' component and the bit depth BitDepthC of the Cb and Cr components in the color space converted video are extracted.
- both BitDepthY and BitDepthC here take the target value of 10.
- the video output by the decoder is a 10-bit integer while the final reconstructed video requires 16 bits per pixel, so quantization to 10 bits is performed here and a corresponding inverse quantization is required at the decoding end.
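The conversion matrix and quantization formula are not reproduced above; the sketch below assumes the BT.2020 non-constant-luminance matrix and the usual limited-range 10-bit quantization, which matches the HEVC Main 10 target bit depth mentioned here.

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Assumed BT.2020 NCL conversion from corrected R'G'B' (in [0, 1]) to Y'CbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b
    cb = (b - y) / 1.8814
    cr = (r - y) / 1.4746
    return np.stack([y, cb, cr], axis=-1)

def quantize_10bit(ycbcr: np.ndarray, bit_depth: int = 10) -> np.ndarray:
    """Assumed limited-range quantization to 10-bit integers for HEVC Main 10."""
    shift = 1 << (bit_depth - 8)
    y = np.round((219 * ycbcr[..., 0] + 16) * shift)
    cb = np.round((224 * ycbcr[..., 1] + 128) * shift)
    cr = np.round((224 * ycbcr[..., 2] + 128) * shift)
    return np.stack([y, cb, cr], axis=-1).astype(np.uint16)
```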
- processing the quantization adjustment factor, and combining the video code stream to obtain an input code stream including:
- the data unit comprises a parameter set, or an auxiliary information unit, or a user-defined data unit.
- step S13 may include:
- S301 Perform binarization on the quantization adjustment factor, and encode the processing result to obtain an encoded code stream.
- the binarization process can directly convert the value of the quantization adjustment factor into a binary representation, or, where higher data processing precision is required, convert the quantization adjustment factor into the binary representation of one or more integer parameters; for the latter, please refer to the explanation of Formula 1 in the previous section.
- S302 Write the encoded code stream into a data unit, and combine it with the video code stream to obtain an input code stream carrying the encoded code stream.
- the data unit here includes a parameter set, or an auxiliary information unit, or a user-defined data unit.
- the processing shown in S301 to S302 is performed in this step in order to add description parameters of the video code stream so that the video stream can be encoded and decoded accurately; these parameters describe the code stream itself.
- the related description parameters may be stored in any one of a parameter set, an auxiliary information unit, or a user-defined data unit, and the developer may select whichever of the three suits the actual coding situation.
- after the execution of step S302 is completed, an input code stream including the video code stream and the encoded code stream is obtained.
- the input stream is input to the HEVC Main 10 encoder/decoder for subsequent encoding and decoding processing.
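The exact binarization syntax and data-unit format are not specified in this text; the following sketch assumes the ratio is represented as two 32-bit integer parameters (numerator and denominator) and prepended to the video code stream in an illustrative user-data unit.

```python
import struct

def binarize_ratio(ratio: float, denom: int = 1 << 16) -> bytes:
    """Hypothetical S301: represent the quantization adjustment factor as num/denom."""
    num = int(round(ratio * denom))
    return struct.pack(">II", num, denom)      # two 32-bit integer parameters

def build_input_stream(video_stream: bytes, ratio: float) -> bytes:
    """Hypothetical S302: wrap the encoded factor in a data unit and combine streams."""
    payload = binarize_ratio(ratio)
    data_unit = b"\x00\x00\x01\x7f" + payload  # illustrative start code + payload
    return data_unit + video_stream            # input code stream carrying both
```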
- At the decoding end, the high dynamic range video compression coding method based on adaptive perceptual quantization includes:
- Step S21 Obtain an output code stream from the encoder/decoder, parse the output code stream, and obtain a quantization adjustment factor and a video code stream to be recovered.
- Step S22 Process the to-be-recovered video code stream according to the quantization adjustment factor to obtain a final video image.
- the encoder/decoder encodes and decodes the input code stream to obtain an output code stream.
- the output code stream is parsed, processed according to the parsed content, and a final video image capable of reducing the quantization loss is obtained.
- parsing the input code stream, obtaining a quantization adjustment factor, and a video code stream to be recovered including: parsing the input code stream, and acquiring a to-be-recovered video code stream and a data unit from the input code stream; Obtaining an encoded code stream in the data unit; processing the encoded code stream to obtain a quantization adjustment factor; wherein the data unit comprises a parameter set, or an auxiliary information unit, or a user-defined data unit.
- step S21 can be implemented as follows:
- the parsed video stream to be recovered is used for processing in a subsequent step to obtain a final video image.
- the parameters describing the video code stream were stored in one of the parameter set, the auxiliary information unit, or the user-defined data unit in step S302; therefore, this step extracts the previously stored encoded code stream from whichever of these three data units was used.
- the parameter value in the coded code stream may be set as a quantization adjustment factor, or the output value calculated by the parameter in the coded code stream according to the set operation rule may be used as a quantization adjustment factor.
- the video stream to be restored is processed based on the quantization adjustment factor in a subsequent step.
- processing the video code stream to be recovered according to the quantization adjustment factor to obtain a final video image includes: processing the video code stream to be recovered to obtain a video image to be recovered and extracting its pixel value components; determining the adaptive inverse coding function inverseAPQ_TF from the quantization adjustment factor; correcting the pixel value components based on inverseAPQ_TF to obtain corrected components; and performing reconstruction based on the corrected components to obtain the final video image.
- step S22 may include:
- S501 Process the to-be-recovered video code stream to obtain a to-be-recovered video image, and extract a pixel value component of the to-be-recovered video image.
- the process of obtaining a video image to be restored in this step includes the following steps:
- the video format is changed from 4:2:0 to 4:4:4 by the upsampling process.
- the bit depth BitDepthY of the Y' component and the bit depth BitDepthC of the Cb and Cr components in the upsampled video are extracted, and the quantized value DY' corresponding to the Y' component, the quantized value DCb corresponding to the Cb component, and the quantized value DCr corresponding to the Cr component are obtained.
- the upsampled video is inverse quantized to the original bit range according to the following formula, and an inverse quantized video composed of the components Y', Cb, and Cr is obtained.
- the video subjected to the upsampling process in the previous step can be converted from the 10-bit range to the original 16-bit range, so as to facilitate the subsequent processing of the subsequent steps.
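A sketch of the inverse quantization step. The patent's formula is not reproduced here; the inverse of the usual limited-range quantization is assumed, and the output is kept as normalized floating-point values (the text instead speaks of restoring the 16-bit range, which differs only by a constant scale) so that the subsequent ClipRGB(x) = Clip3(0, 1, x) step applies directly.

```python
import numpy as np

def dequantize_10bit(d: np.ndarray, bit_depth: int = 10) -> np.ndarray:
    """Assumed inverse of limited-range 10-bit quantization of Y'CbCr samples."""
    shift = 1 << (bit_depth - 8)
    y = np.clip(d[..., 0] / shift - 16, 0, 219) / 219.0            # Y'  in [0, 1]
    cb = (np.clip(d[..., 1] / shift, 16, 240) - 128) / 224.0       # Cb in about [-0.5, 0.5]
    cr = (np.clip(d[..., 2] / shift, 16, 240) - 128) / 224.0       # Cr in about [-0.5, 0.5]
    return np.stack([y, cb, cr], axis=-1)
```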
- the video output by the decoder is a 10-bit integer, and the final reconstructed video requires that the number of bits per pixel is 16 bits, so an inverse quantization process is required.
- after the inverse quantized video is obtained, the color space of the inverse quantized video needs to be inversely transformed, that is, converted from the Y'CbCr color space back to the original R'G'B' color space.
- the formula for the specific inverse transformation is
- ClipRGB(x) = Clip3(0, 1, x).
- the reason for the inverse color space transformation here is determined by the standard testing framework.
- the video output by the decoder is in YCbCr format, and the final video request is in RGB format.
- where the coefficients m1 and m2 are 0.1593 and 78.8438, respectively, c1, c2 and c3 are 0.8359, 18.8516 and 18.6875, respectively, and the function max(x, y) returns the larger of the two values.
- R' is the value of the red component of a single pixel in the inverse transformed video
- G' is the value of the green component of a single pixel in the inverse transformed video
- B' is the value of the blue component of a single pixel in the inverse transformed video.
- R is the value of the red component of a single pixel in the corrected video
- G is the value of the green component of a single pixel in the corrected video
- B is the value of the blue component of a single pixel in the corrected video.
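A hedged sketch of the inverse correction: the inverse of the PQ-style curve with the quoted coefficients is applied to R', G', B', and the ratio is undone by division, mirroring the assumption made in the encoder-side sketch; neither the exact form of Formula 3 nor the role of ratio in it is given in this text.

```python
import numpy as np

M1, M2 = 0.1593, 78.8438
C1, C2, C3 = 0.8359, 18.8516, 18.6875

def inverse_apq_tf(v: np.ndarray, ratio: float) -> np.ndarray:
    """Sketch of inverseAPQ_TF applied per channel to the R', G', B' components.
    Dividing by `ratio` mirrors the encoder-side assumption and is not taken
    from the patent text."""
    vp = np.power(np.clip(v, 0.0, 1.0), 1.0 / M2)
    l = np.power(np.maximum(vp - C1, 0.0) / (C2 - C3 * vp), 1.0 / M1)
    return np.clip(l / ratio, 0.0, 1.0)   # normalized linear light; rescaling to
                                          # the 16-bit range is done at reconstruction
```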
- the component values of the three channels R, G, and B corresponding to each pixel in the video image to be restored are thus obtained.
- the image is reconstructed based on the R, G, and B component values of each pixel obtained after step S503, yielding the final video image.
- the quantization adjustment factor is calculated based on the maximum and minimum values of the brightness of the input video to be processed.
- an adaptive coding conversion function is obtained, and the input video to be processed is converted.
- the quantization adjustment factor is written to the encoded code stream of the video image.
- the video converted by the adaptive code conversion function is preprocessed and converted into a format supported by HEVC Main 10.
- the preprocessed video is encoded and decoded using HEVC Main 10, and the decoded video is post-processed.
- the code stream is parsed to obtain a quantization adjustment factor.
- an adaptive inverse coding conversion function is obtained, and the post-processed video is converted to obtain a reconstructed HDR video.
- the HDR video is encoded using an HVS-based perception-driven method, so that not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced.
- the size of the quantization interval is adaptively adjusted according to the luminance range of the input HDR video, so that when the number of coding bits is fixed, the quantization values can be more fully utilized and the accuracy of HDR video coding is improved.
- the calculation of the quantization adjustment factor is related to the brightness of the input HDR video.
- the original method (PQ) treats the luminance range as a fixed value, whereas the proposed method calculates the luminance range from the video.
- the larger the luminance range, the larger the corresponding distortion, and the smaller the range, the smaller the distortion (for the same number of bits); therefore the distortion of the proposed method is smaller than that of the original method.
- the invention provides a video coding method based on adaptive perceptual quantization, which comprises determining a quantization adjustment factor according to the video image to be processed, processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream, and processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream.
- FIG. 2 is a schematic structural diagram of a video coding system based on adaptive perceptual quantization according to an embodiment of the present invention. As shown in FIG. 2, the video coding system based on adaptive perceptual quantization includes:
- the first control unit 31 is configured to perform the coding method in the above-described adaptive perceptual quantization based video coding method
- the second control unit 32 is configured to perform the decoding method in the above-described adaptive perceptual quantization based video encoding method.
- the invention provides a video coding system based on adaptive perceptual quantization, in which a quantization adjustment factor is determined according to the video image to be processed, the video image to be processed is processed according to the quantization adjustment factor to obtain a video code stream, and the quantization adjustment factor is processed and combined with the video code stream to obtain an input code stream.
- an embodiment of the present invention further provides an image encoding method and an image decoding method.
- FIG. 3 is a flowchart of an image encoding method according to an embodiment of the present invention. As shown in FIG. 3, the steps of the method include:
- Step S302 determining an adjustment factor according to the pixel sample value of the video image
- Step S304 performing a transform process on the video image according to the adjustment factor, and encoding the video image subjected to the transform process;
- Step S306 Write the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image.
- the manner of determining the adjustment factor according to the pixel sample value of the video image in the foregoing step S302 may be implemented as follows:
- Step S302-1 converting the video image pixel sample value into a pixel brightness value
- Step S302-2 determining a brightness maximum value and a brightness minimum value in the pixel brightness value
- Step S302-3 The adjustment factor is determined according to the maximum value of the brightness and the minimum value of the brightness.
- the way to determine the adjustment factor can include: calculating the difference between the luminance maximum value and the luminance minimum value, and setting a linear weighting of the logarithm of the difference as a first adjustment factor; then:
- S302-33 setting the first adjustment factor as the adjustment factor; or setting the reciprocal value of the first adjustment factor as the adjustment factor.
- the method for performing the transform processing on the video image according to the adjustment factor in step S304 in the embodiment of the present invention may include:
- Step S304-1 performing correction processing on the sampling component of the pixel sample value of the video image according to the adjustment factor
- Step S304-2 obtaining a transformed value of the sampled component based on the output value obtained by performing the correction.
- the manner of performing correction processing on the sampled components of the pixel sample values of the video image includes: performing, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- the manner in which the coded code stream obtained by encoding the adjustment factor is written into the coded code stream of the encoded video image in step S306 of the embodiment may include:
- Step S306-1 performing binarization processing on the value of the adjustment factor
- Step S306-2 encoding the output of the binarization process, and writing the coded bit to the data unit in the coded code stream of the video image; wherein the data unit includes at least one of the following: a parameter set, an auxiliary information element, User-defined data unit.
- the foregoing manner of performing binarization on the value of the adjustment factor includes at least one of: converting the value of the adjustment factor into a binary representation; and converting the value of the adjustment factor into the binary representation of one or more integer parameters.
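A minimal sketch of the generalized encoding method of this embodiment (steps S302 to S306): the adjustment factor is a linear weighting of the logarithm of the luminance range, and the sample components are mapped with the factor (or its reciprocal) as the exponent. The weights `a` and `b` and the normalization of the samples are illustrative assumptions.

```python
import numpy as np

def adjustment_factor(luma: np.ndarray, a: float = 1.0, b: float = 0.0,
                      reciprocal: bool = False) -> float:
    """Steps S302-1 to S302-3: derive the adjustment factor from the luminance range."""
    diff = float(luma.max() - luma.min())
    factor = a * np.log10(max(diff, 1e-6)) + b   # linear weighting of log(difference)
    return 1.0 / factor if reciprocal else factor

def transform_samples(samples: np.ndarray, factor: float) -> np.ndarray:
    """Step S304: mapping with the adjustment factor as the exponent (samples in [0, 1])."""
    return np.power(np.clip(samples, 0.0, 1.0), factor)
```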
- FIG. 4 is a schematic structural diagram of an image encoding device according to an embodiment of the present invention.
- the apparatus includes: a determining module 42 configured to determine an adjustment factor according to the video image pixel sample values; an encoding module 44, coupled to the determining module 42, configured to perform transform processing on the video image according to the adjustment factor and to encode the transformed video image; and a writing module 46, coupled to the encoding module 44, configured to write the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image.
- the determining module includes: a conversion unit configured to convert the video image pixel sample values into pixel luminance values; a first determining unit, coupled to the conversion unit, configured to determine a luminance maximum value and a luminance minimum value among the pixel luminance values; and a second determining unit, coupled to the first determining unit, configured to determine the adjustment factor according to the luminance maximum value and the luminance minimum value.
- the second determining unit comprises: a calculation subunit configured to calculate the difference between the luminance maximum value and the luminance minimum value; a first setting subunit, coupled to the calculation subunit, configured to set a linear weighting of the logarithm of the difference as a first adjustment factor; and a second setting subunit, coupled to the first setting subunit, configured to set the first adjustment factor as the adjustment factor, or to set the reciprocal of the first adjustment factor as the adjustment factor.
- the encoding module 44 includes: a first correction unit configured to perform correction processing on the sampled components of the pixel sample values of the video image according to the adjustment factor; and an encoding unit, coupled to the first correction unit, configured to obtain the transformed values of the sampled components according to the output values obtained from the correction.
- the first correction unit includes: a first mapping subunit configured to perform, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- the writing module 46 includes: a binarization unit configured to perform binarization on the value of the adjustment factor; and a writing unit, coupled to the binarization unit, configured to encode the binarized output and to write the coded bits into a data unit in the encoded code stream of the video image; wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, a user-defined data unit.
- the binarization unit includes at least one of the following: a first conversion subunit configured to convert the value of the adjustment factor into a binary representation; and a second conversion subunit configured to convert the value of the adjustment factor into the binary representation of one or more integer parameters.
- FIG. 5 is a flowchart of a method for decoding an image according to an embodiment of the present invention. As shown in FIG. 5, the steps of the method include:
- Step S502 parsing the code stream to obtain an adjustment factor
- Step S504 Perform transformation on the decoded restored image according to the adjustment factor.
- the decoded restored image includes: an image obtained by decoding the code stream, or a post-processed version of an image obtained by decoding the code stream.
- step S502 parses the code stream, and obtains an adjustment factor in the parsed code stream, including:
- Step S502-1 parsing the data unit in the code stream to obtain a parameter for determining an adjustment factor; wherein the data unit includes at least one of the following: a parameter set, an auxiliary information unit, and a user-defined data unit;
- Step S502-2 determining the value of the adjustment factor according to the parameter.
- the determining the value of the adjustment factor according to the parameter includes: setting the value of the parameter as the value of the adjustment factor; or setting the output value of the parameter according to the preset operation rule as the value of the adjustment factor.
- the manner of transforming the decoded restored image according to the adjustment factor in the foregoing step S504 may include:
- Step S504-1 Perform correction processing on the sampled components of the pixel sample values of the decoded restored image according to the adjustment factor;
- Step S504-2 Calculate the transformed values of the sampled components based on the output values obtained from the correction processing.
- the manner of performing correction processing on the sampled components of the pixel sample values of the decoded restored image includes: performing, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
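A matching sketch of the decoding side (steps S502 to S504): a parameter parsed from a data unit determines the adjustment factor, and the decoded samples are corrected with a power mapping. The parameter-to-factor rule (mirroring the numerator/denominator convention of the earlier encoder sketch) and the use of the reciprocal exponent to undo the encoder mapping are assumptions, not details from the patent.

```python
import numpy as np

def factor_from_parameter(param: int, denom: int = 1 << 16) -> float:
    """Hypothetical preset operation rule: the signalled integer divided by a fixed denominator."""
    return param / denom

def inverse_transform_samples(samples: np.ndarray, factor: float) -> np.ndarray:
    """Step S504: power mapping on the decoded, restored samples (in [0, 1]).
    Using the reciprocal exponent assumes the encoder applied `factor` directly."""
    return np.power(np.clip(samples, 0.0, 1.0), 1.0 / factor)
```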
- FIG. 6 is a schematic structural diagram of an image decoding device according to an embodiment of the present invention.
- the device includes: a decoding module 62 configured to parse the code stream to obtain an adjustment factor;
- the transform module 64 is coupled to the decoding module 62 and configured to transform the decoded restored image according to the adjustment factor.
- the decoded restored image includes: an image obtained by decoding the code stream, or a post-processed version of an image obtained by decoding the code stream.
- the decoding module 62 includes: a decoding unit, configured to parse the data unit in the code stream to obtain a parameter for determining an adjustment factor; wherein the data unit includes at least one of the following: a parameter set, an auxiliary information unit User-defined data unit;
- the third determining unit is configured to determine the value of the adjustment factor according to the parameter.
- the third determining unit includes: a third setting subunit, configured to set a value of the parameter as a value of the adjustment factor; or, a fourth setting subunit, configured to calculate the parameter according to a preset operation rule The subsequent output value is set to the value of the adjustment factor.
- the transform module 64 includes: a second correction unit configured to perform correction processing on the sampled components of the pixel sample values of the decoded restored image according to the adjustment factor; and a calculation unit, coupled to the second correction unit, configured to calculate the transformed values of the sampled components according to the output values obtained from the correction processing.
- the second correction unit comprises: a second mapping subunit configured to perform, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
- the embodiment further provides an image encoding and decoding system, which includes the encoding device in the third embodiment, and the image decoding device in the fourth embodiment.
- the CPU is an Intel(R) Core(TM) i3 M350 processor with a clock frequency of 2.27 GHz and 2 GB of memory; operating system: Windows 7; simulation platform: HEVC Main 10 reference software HM16.6.
- the simulation selects two 16-bit HDR video test sequences (Market3 and Balloon) in 4:4:4 format with a resolution of 1920 ⁇ 1080, which is encoded by Main 10 Profile.
- the value of the HM16.6 quantization parameter QP is set to 21, 25, 29, and 33, the number of coded frames is 50, and the GOP structure is one I frame followed by 49 P frames.
- the performance of the present invention is compared with that of the existing HDR video compression coding system on the two video sequences separately.
- Simulation 1: the Market3 video sequence is encoded using the HDR anchor and the method of the present invention.
- Tables 1 and 2 give the tPSNR and PSNR_DE for the HDR anchor and the method of the present invention, respectively, when encoding the Market3 sequence.
- the tPSNR value indicates the difference between the reconstructed video and the original video.
- the larger the tPSNR the better the quality of the reconstructed video.
- the PSNR_DE value indicates the color difference between the reconstructed video and the original video.
- the larger the PSNR_DE, the better the color of the reconstructed video is maintained. It can be seen from Table 1 and Table 2 that the video reconstructed by the method of the present invention is superior to the HDR anchor and better maintains the color.
- Tables 3 and 4 give the tPSNR and PSNR_DE for the HDR anchor and the method of the present invention, respectively, when encoding the Balloon sequence.
- Figure 7 (a) is a reconstructed frame obtained by HDR anchor processing
- Figure 7 (b) is a partial enlarged view of Figure 7 (a);
- Figure 7 (c) is a reconstructed frame processed using the method of the present invention.
- Fig. 7(d) is a partial enlarged view of Fig. 7(c).
- Figure 8 (a) is a reconstructed frame obtained by HDR anchor processing
- FIG. 8(b) and (c) are partial enlarged views of different regions of Fig. 8(a);
- Figure 8 (d) is a reconstructed frame processed using the method of the present invention.
- Figures 8(e) and (f) are partial enlarged views of Fig. 8(d) in different regions.
- the method of the present invention better preserves the color of the original frame image. Comparing FIGS. 8(c) and (f), the reconstructed frame image obtained by the method of the present invention has clearer structure and detail. Therefore, the reconstructed frame image obtained by the method of the present invention has better visual perceptual quality than the HDR anchor.
- the simulation results show that the present invention encodes HDR video using an adaptive perception-driven method, allocating fewer bits to areas to which the human eye is not sensitive and more bits to areas to which it is sensitive; in this way, not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced.
- the size of the quantization interval can be adaptively adjusted, and the quantization value can be more fully utilized to improve the accuracy of HDR video coding.
- the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by the computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple of the modules or steps may be fabricated into a single integrated circuit module.
- the invention is not limited to any specific combination of hardware and software.
- In the embodiments of the present invention, the quantization adjustment factor is determined according to the video image to be processed; the video image to be processed is processed according to the quantization adjustment factor to obtain a video code stream; the quantization adjustment factor is processed and combined with the video code stream to obtain an input code stream; and the input code stream is transmitted to an encoder/decoder for encoding and decoding. This solves the problem in the related art that the quantization values cannot be fully utilized when encoding a specific HDR video and that quantization loss exists, so that the quantization values can be more fully utilized, the accuracy of HDR video coding is improved, and the quantization loss is reduced.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An image encoding method, an image decoding method, corresponding apparatuses, and an image encoding and decoding system. The image encoding method includes: determining an adjustment factor according to the pixel sample values of a video image; performing transform processing on the video image according to the adjustment factor, and encoding the transformed video image; and writing the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image. This solves the problem in the related art that the quantization values cannot be fully utilized when encoding a specific HDR video and that quantization loss exists, so that the quantization values can be more fully utilized, the accuracy of HDR video coding is improved, and the quantization loss is reduced.
Description
The present invention relates to the field of video image coding and decoding, and in particular to an image encoding method, an image decoding method, corresponding apparatuses, and an image encoding and decoding system.
With the continuous development of broadband networks and display technology, people have higher expectations for the quality of video pictures. Compared with ordinary video, High Dynamic Range (HDR) video can greatly expand contrast and color at the same time; the bright parts of the picture become brighter, so that the real environment can be better reflected and the visual experience improved.
The HDR video storage format used in the related art occupies a large amount of storage space. Therefore, designing a new encoding method based on the characteristics of HDR video is a key issue for HDR video. The MPEG standard organization adopted the Perceptual Quantizer (PQ) to convert HDR video so as to fit the H.265/HEVC Main 10 Profile encoder.
The PQ-based HDR video coding method in the related art uniformly encodes a fixed and very large luminance range without considering the actual luminance range of the HDR video. Therefore, when encoding a specific HDR video, the PQ-based method often cannot make full use of the quantization values (when the number of coding bits is fixed), and quantization loss exists.
No effective solution to the above problem in the related art has been proposed so far.
Summary of the Invention
Embodiments of the present invention provide an image encoding method, an image decoding method, corresponding apparatuses, and an image encoding and decoding system, so as to at least solve the problem in the related art that the quantization values cannot be fully utilized when encoding a specific HDR video and that quantization loss exists.
According to one aspect of the embodiments of the present invention, a video encoding method based on adaptive perceptual quantization is provided for the encoding end. The method includes: determining a quantization adjustment factor according to the video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream; and transmitting the input code stream to an encoder/decoder for encoding and decoding.
Optionally, determining the quantization adjustment factor according to the video image to be processed includes: performing color space conversion on the video image to be processed and acquiring the luminance component of the converted video image; extracting the luminance maximum value and the luminance minimum value from the luminance component; and determining the quantization adjustment factor according to the luminance maximum value and the luminance minimum value.
Optionally, determining the quantization adjustment factor according to the maximum value and the minimum value includes: determining the quantization adjustment factor ratio based on Formula 1, where Ymax is the luminance maximum value and Ymin is the luminance minimum value.
Optionally, processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream includes: determining an adaptive coding function APQ_TF(L) based on Formula 2, where the coefficients m1 and m2 are 0.1593 and 78.8438, respectively, and the coefficients c1, c2 and c3 are 0.8359, 18.8516 and 18.6875, respectively; extracting the pixel value components of the video image to be processed; correcting the pixel value components based on the adaptive coding function APQ_TF(L) to obtain corrected components; and processing the corrected components to obtain the video code stream.
Optionally, processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream includes: performing binarization on the quantization adjustment factor and encoding the result to obtain an encoded code stream; and writing the encoded code stream into a data unit and combining it with the video code stream to obtain an input code stream carrying the encoded code stream, where the data unit includes a parameter set, an auxiliary information unit, or a user-defined data unit.
According to another aspect of the embodiments of the present invention, a video encoding method based on adaptive perceptual quantization is provided for the decoding end. The high dynamic range video compression coding method based on adaptive perceptual quantization includes: parsing an input code stream to obtain a quantization adjustment factor and a video code stream to be recovered; and processing the video code stream to be recovered according to the quantization adjustment factor to obtain a final video image.
Optionally, parsing the input code stream to obtain the quantization adjustment factor and the video code stream to be recovered includes: parsing the input code stream and acquiring the video code stream to be recovered and a data unit from it; acquiring an encoded code stream from the data unit; and processing the encoded code stream to obtain the quantization adjustment factor, where the data unit includes a parameter set, an auxiliary information unit, or a user-defined data unit.
Optionally, processing the video code stream to be recovered according to the quantization adjustment factor to obtain the final video image includes: processing the video code stream to be recovered to obtain a video image to be recovered and extracting its pixel value components; determining an adaptive inverse coding function inverseAPQ_TF from the quantization adjustment factor ratio based on Formula 3, where the coefficients m1 and m2 are 0.1593 and 78.8438, respectively, c1, c2 and c3 are 0.8359, 18.8516 and 18.6875, respectively, and the function max(x, y) returns the larger of the two values; correcting the pixel value components of the video image to be recovered based on inverseAPQ_TF to obtain corrected components; and performing reconstruction based on the corrected components to obtain the final video image.
According to another aspect of the present invention, a video encoding system based on adaptive perceptual quantization is provided, including: a first control unit configured to perform the video encoding method based on adaptive perceptual quantization described above; and a second control unit configured to perform the video encoding method based on adaptive perceptual quantization described above.
According to still another aspect of the present invention, an image encoding method is provided, including: determining an adjustment factor according to the pixel sample values of a video image; performing transform processing on the video image according to the adjustment factor, and encoding the transformed video image; and writing the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image.
Optionally, determining the adjustment factor according to the pixel sample values of the video image includes: converting the pixel sample values of the video image into pixel luminance values; determining the luminance maximum value and the luminance minimum value among the pixel luminance values; and determining the adjustment factor according to the luminance maximum value and the luminance minimum value.
Optionally, determining the adjustment factor according to the luminance maximum value and the luminance minimum value includes: calculating the difference between the luminance maximum value and the luminance minimum value; setting a linear weighting of the logarithm of the difference as a first adjustment factor; and setting the first adjustment factor as the adjustment factor, or setting the reciprocal of the first adjustment factor as the adjustment factor.
Optionally, performing transform processing on the video image according to the adjustment factor includes: performing correction processing on the sampled components of the pixel sample values of the video image according to the adjustment factor; and obtaining the transformed values of the sampled components according to the output values obtained from the correction.
Optionally, the manner of performing correction processing on the sampled components of the pixel sample values of the video image includes: performing, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
Optionally, writing the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image includes: performing binarization on the value of the adjustment factor; and encoding the binarized output and writing the coded bits into a data unit in the encoded code stream of the video image, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit.
Optionally, the manner of performing binarization on the value of the adjustment factor includes at least one of: converting the value of the adjustment factor into a binary representation; and converting the value of the adjustment factor into the binary representation of one or more integer parameters.
According to yet another aspect of the present invention, an image encoding apparatus is provided, including: a determining module configured to determine an adjustment factor according to the pixel sample values of a video image; an encoding module configured to perform transform processing on the video image according to the adjustment factor and to encode the transformed video image; and a writing module configured to write the encoded code stream obtained by encoding the adjustment factor into the encoded code stream of the encoded video image.
Optionally, the determining module includes: a conversion unit configured to convert the pixel sample values of the video image into pixel luminance values; a first determining unit configured to determine the luminance maximum value and the luminance minimum value among the pixel luminance values; and a second determining unit configured to determine the adjustment factor according to the luminance maximum value and the luminance minimum value.
Optionally, the second determining unit includes: a calculation subunit configured to calculate the difference between the luminance maximum value and the luminance minimum value; a first setting subunit configured to set a linear weighting of the logarithm of the difference as a first adjustment factor; and a second setting subunit configured to set the first adjustment factor as the adjustment factor, or to set the reciprocal of the first adjustment factor as the adjustment factor.
Optionally, the encoding module includes: a first correction unit configured to perform correction processing on the sampled components of the pixel sample values of the video image according to the adjustment factor; and an encoding unit configured to obtain the transformed values of the sampled components according to the output values obtained from the correction.
Optionally, the first correction unit includes: a first mapping subunit configured to perform, on the sampled components, a mapping with the adjustment factor or a weighted value of the adjustment factor as the exponent.
Optionally, the writing module includes: a binarization unit configured to perform binarization on the value of the adjustment factor; and a writing unit configured to encode the binarized output and to write the coded bits into a data unit in the encoded code stream of the video image, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit.
Optionally, the binarization unit includes at least one of: a first conversion subunit configured to convert the value of the adjustment factor into a binary representation; and a second conversion subunit configured to convert the value of the adjustment factor into the binary representation of one or more integer parameters.
According to still another aspect of the present invention, an image decoding method is provided, including: parsing a code stream to obtain an adjustment factor; and transforming a decoded and restored image according to the adjustment factor, where the decoded and restored image includes an image obtained by decoding the code stream, or a post-processed version of the image obtained by decoding the code stream.
Optionally, parsing the code stream and obtaining the adjustment factor from the parsed code stream includes: parsing a data unit in the code stream to obtain a parameter used for determining the adjustment factor, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit; and determining the value of the adjustment factor according to the parameter.
Optionally, determining the value of the adjustment factor according to the parameter includes: setting the value of the parameter as the value of the adjustment factor; or setting the output value obtained by computing the parameter according to a preset operation rule as the value of the adjustment factor.
Optionally, transforming the decoded and restored image according to the adjustment factor includes: correcting the sample components of the pixel sampling values of the decoded and restored image according to the adjustment factor; and calculating the transformed values of the sample components from the output values of the correction.
Optionally, correcting the sample components of the pixel sampling values of the decoded and restored image includes: mapping the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
According to still another aspect of the present invention, an image decoding apparatus is provided, including: a decoding module configured to parse a code stream and obtain an adjustment factor; and a transform module configured to transform a decoded and restored image according to the adjustment factor, where the decoded and restored image includes an image obtained by decoding the code stream, or a post-processed version of the image obtained by decoding the code stream.
Optionally, the decoding module includes: a decoding unit configured to parse a data unit in the code stream to obtain a parameter used for determining the adjustment factor, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit; and a third determining unit configured to determine the value of the adjustment factor according to the parameter.
Optionally, the third determining unit includes: a third setting subunit configured to set the value of the parameter as the value of the adjustment factor; or a fourth setting subunit configured to set the output value obtained by computing the parameter according to a preset operation rule as the value of the adjustment factor.
Optionally, the transform module includes: a second correction unit configured to correct the sample components of the pixel sampling values of the decoded and restored image according to the adjustment factor; and a calculation unit configured to calculate the transformed values of the sample components from the output values of the correction.
Optionally, the second correction unit includes a second mapping subunit configured to map the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
According to yet another aspect of the present invention, an image encoding and decoding system is provided, including the encoding apparatus described in any one of the above and the image decoding apparatus described in any one of the above.
According to yet another embodiment of the present invention, a storage medium is further provided. The storage medium is configured to store program code for performing the following steps:
determining a quantization adjustment factor according to a video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; processing the quantization adjustment factor and combining the result with the video code stream to obtain an input code stream; and transmitting the input code stream to an encoder/decoder for encoding and decoding.
Optionally, the storage medium is further configured to store program code for performing the following steps:
performing color space conversion on the video image to be processed and acquiring the luminance component of the converted video image; extracting the maximum luminance value and the minimum luminance value from the luminance component; and determining the quantization adjustment factor according to the maximum luminance value and the minimum luminance value.
According to yet another embodiment of the present invention, a storage medium is further provided. The storage medium is configured to store program code for performing the following steps:
parsing an input code stream to obtain a quantization adjustment factor and a video code stream to be restored; and processing the video code stream to be restored according to the quantization adjustment factor to obtain a final video image.
Optionally, the storage medium is further configured to store program code for performing the following steps:
parsing the input code stream and obtaining from it the video code stream to be restored and a data unit; obtaining an encoded code stream from the data unit; and processing the encoded code stream to obtain the quantization adjustment factor, where the data unit includes a parameter set, or an auxiliary information unit, or a user-defined data unit.
In the embodiments of the present invention, a quantization adjustment factor is determined according to the video image to be processed; the video image to be processed is processed according to the quantization adjustment factor to obtain a video code stream; the quantization adjustment factor is processed and combined with the video code stream to obtain an input code stream; and the input code stream is transmitted to an encoder/decoder for encoding and decoding. This solves the problem in the related art that quantization values cannot be fully utilized and quantization loss occurs when a specific HDR video is encoded, so that the quantization values are used more fully, the accuracy of HDR video encoding is improved, and the quantization loss is reduced.
The accompanying drawings described here are provided for a further understanding of the present invention and form a part of this application. The exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
FIG. 1 is a flowchart of a video image encoding and decoding method based on adaptive perceptual quantization according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a video encoding system based on adaptive perceptual quantization according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image encoding method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image encoding apparatus according to an embodiment of the present invention;
FIG. 5 is a flowchart of an image decoding method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image decoding apparatus according to an embodiment of the present invention;
FIG. 7(a) is a reconstructed frame obtained by encoding Market3 with the HDR anchor;
FIG. 7(b) is a partially enlarged view of the reconstructed frame obtained by encoding Market3 with the HDR anchor;
FIG. 7(c) is a reconstructed frame obtained by encoding Market3 with a video encoding method based on adaptive perceptual quantization provided by the present invention;
FIG. 7(d) is a partially enlarged view of the reconstructed frame obtained by encoding Market3 with the video encoding method based on adaptive perceptual quantization provided by the present invention;
FIG. 8(a) is a reconstructed frame obtained by encoding Balloon with the HDR anchor;
FIG. 8(b) is a first partially enlarged view of the reconstructed frame obtained by encoding Balloon with the HDR anchor;
FIG. 8(c) is a second partially enlarged view of the reconstructed frame obtained by encoding Balloon with the HDR anchor;
FIG. 8(d) is a reconstructed frame obtained by encoding Balloon with a video encoding method based on adaptive perceptual quantization provided by the present invention;
FIG. 8(e) is a first partially enlarged view of the reconstructed frame obtained by encoding Balloon with the video encoding method based on adaptive perceptual quantization provided by the present invention;
FIG. 8(f) is a second partially enlarged view of the reconstructed frame obtained by encoding Balloon with the video encoding method based on adaptive perceptual quantization provided by the present invention.
The present invention will be described in detail below with reference to the accompanying drawings and in combination with the embodiments. It should be noted that, in the case of no conflict, the embodiments of this application and the features in the embodiments can be combined with each other.
It should also be noted that the terms "first", "second" and the like in the specification, claims and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
Embodiment 1
The present invention provides a video encoding method based on adaptive perceptual quantization. FIG. 1 is a flowchart of a video image encoding and decoding method based on adaptive perceptual quantization according to an embodiment of the present invention. As shown in FIG. 1, the method is divided into an encoding part and a decoding part, which are described separately below.
At the encoding end, the adaptive-perceptual-quantization-based high dynamic range video compression encoding method includes:
Step S11: determine a quantization adjustment factor according to the video image to be processed.
Step S12: process the video image to be processed according to the quantization adjustment factor to obtain a video code stream.
Step S13: process the quantization adjustment factor and combine the result with the video code stream to obtain an input code stream.
Step S14: transmit the input code stream to an encoder/decoder for encoding and decoding.
Based on the above steps S11 to S14, in the implementation of the present invention, a quantization adjustment factor is first obtained, and the video to be processed is then processed according to the quantization adjustment factor to obtain the processed video code stream. In addition, the quantization adjustment factor is processed and the result is combined with the video code stream to obtain an input code stream. With the above video encoding method based on adaptive perceptual quantization, it can be seen that the processing of the video in this embodiment uses an adaptive adjustment: the size of the quantization interval is adaptively adjusted by the computed quantization adjustment factor, and the quantization adjustment factor is related to the image to be processed. Therefore, with a fixed number of coding bits, the quantization values can be used more fully, the accuracy of HDR video encoding is improved, and the quantization loss is reduced, which solves the problem in the related art that the quantization values often cannot be fully utilized and quantization loss occurs when a specific HDR video is encoded.
In the embodiments of the present invention, 16-bit HDR video is taken as an example to describe the method proposed in the present invention.
The present invention proposes a video encoding method based on adaptive perceptual quantization, which includes: determining a quantization adjustment factor according to the video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; and processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream. By encoding the HDR video with a perception-driven method, not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced; moreover, the size of the quantization interval is adaptively adjusted according to the luminance range of the input HDR video, so that with a fixed number of coding bits, the quantization values can be used more fully, the accuracy of HDR video encoding is improved, and the quantization loss is reduced.
Optionally, determining the quantization adjustment factor according to the video image to be processed includes:
performing color space conversion on the video image to be processed and acquiring the luminance component of the converted video image;
extracting the maximum luminance value and the minimum luminance value from the luminance component;
determining the quantization adjustment factor according to the maximum luminance value and the minimum luminance value.
In implementation, step S11 can be realized as follows:
Step S101: perform color space conversion on the video image to be processed and acquire the luminance component of the converted video image.
Step S102: extract the maximum luminance value and the minimum luminance value from the luminance component.
Step S103: determine the quantization adjustment factor according to the maximum luminance value and the minimum luminance value.
In the embodiment of the present invention, in order to obtain the quantization adjustment factor, the video image to be processed is first converted in color space, i.e. from the RGB color space to the YCbCr color space, and after the conversion, the luminance component, i.e. the Y component, of each pixel in the video image is extracted.
Optionally, the conversion and extraction formula is:
Y = 0.262700*R + 0.678000*G + 0.059300*B,
where R is the value of the red component of a single pixel in the high dynamic range video to be processed, G is the value of the green component of that pixel, and B is the value of the blue component of that pixel.
Then, after the luminance component of each pixel in the video image has been obtained, the maximum luminance value and the minimum luminance value are extracted.
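As a concrete illustration of this luminance-extraction step, the following minimal Python sketch applies the conversion formula given above to an array of linear RGB samples and returns the luminance extremes; the array layout and the function name are choices made here for illustration only.

```python
import numpy as np

def luminance_extremes(rgb):
    """Return (Ymax, Ymin) for one HDR frame.

    rgb: float array of shape (H, W, 3) holding linear R, G, B samples.
    Uses the conversion given in the text:
    Y = 0.262700*R + 0.678000*G + 0.059300*B.
    """
    y = (0.262700 * rgb[..., 0]
         + 0.678000 * rgb[..., 1]
         + 0.059300 * rgb[..., 2])
    return float(y.max()), float(y.min())
```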
Finally, based on the obtained maximum and minimum luminance values, the quantization adjustment factor corresponding to each pixel is determined; the specific determination process is as follows.
Optionally, determining the quantization adjustment factor according to the maximum value and the minimum value includes:
determining the quantization adjustment factor ratio based on Formula 1,
where Ymax is the maximum luminance value and Ymin is the minimum luminance value.
In implementation, the quantization adjustment factor corresponding to each pixel is determined as shown in Formula 1.
It is worth noting that the quantization adjustment factor ratio can also be expressed in an equivalent form as the sum of two fractions. The reason for using that form is that, when floating-point operations are performed on a computer, adding two fractions improves the precision of the data processing.
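Formula 1 itself appears as a figure in the original filing and is not reproduced in this text. Purely to illustrate the kind of computation described above (a linear weighting of the logarithm of the luminance range, per the summary), the following sketch uses hypothetical weights a and b and an assumed logarithm base; the actual coefficients and form of Formula 1 may differ.

```python
import math

def quantization_adjustment_factor(y_max, y_min, a=1.0, b=0.0):
    """Illustrative only: a linear weighting of log(Ymax - Ymin).

    a, b and the use of log10 are hypothetical stand-ins for the
    coefficients of Formula 1, which is not reproduced in the text.
    The reciprocal variant mentioned in the text would be 1.0 / ratio.
    """
    ratio = a * math.log10(y_max - y_min) + b
    return ratio
```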
Optionally, processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream includes:
determining the adaptive encoding function APQ_TF(L) based on Formula 2,
where the coefficients m1 and m2 are 0.1593 and 78.8438 respectively, and the coefficients c1, c2 and c3 are 0.8359, 18.8516 and 18.6875 respectively;
extracting the pixel value components of the video image to be processed;
correcting the pixel value components based on the adaptive encoding function APQ_TF(L) to obtain corrected components;
processing the corrected components to obtain the video code stream.
In implementation, after the quantization adjustment factor has been obtained, the video code stream is obtained; that is, step S12 can be implemented as follows:
Step S201: determine the adaptive encoding function APQ_TF(L) based on Formula 2, where the coefficients m1 and m2 are 0.1593 and 78.8438 respectively, and the coefficients c1, c2 and c3 are 0.8359, 18.8516 and 18.6875 respectively.
Step S202: extract the pixel value components of the video image to be processed.
The pixel value components extracted here are the components of each pixel of the video image to be processed in the three channels of the RGB color space.
Step S203: correct the pixel value components based on the adaptive encoding function APQ_TF(L) to obtain corrected components.
Based on the adaptive encoding function APQ_TF(L) constructed in step S201, the components of each pixel value of the video image to be processed in the three channels of the RGB color space are corrected according to the specified formula,
where R is the value of the red component of a single pixel in the video image to be processed, G is the value of the green component, and B is the value of the blue component; R' is the value of the red component of that pixel after correction, G' is the value of the green component after correction, and B' is the value of the blue component after correction; the function max(x,y) returns the larger of the two values and min(x,y) returns the smaller of the two values.
After the correction based on the above formula, new component values corresponding to each pixel of the video to be processed are obtained.
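Formula 2 (the adaptive encoding function APQ_TF) is likewise given as a figure. The constants listed above are those of the standard SMPTE ST 2084 PQ curve, so the standard PQ transfer function shown below is a reasonable reference point; how the ratio factor enters APQ_TF is defined by Formula 2 and is not reproduced here, so this sketch should be read only as the non-adaptive baseline.

```python
def pq_tf(l):
    """Standard PQ (SMPTE ST 2084) transfer function.

    l: linear luminance normalized to [0, 1] (absolute luminance divided
    by the 10000 cd/m^2 peak assumed by PQ).
    Constants match those quoted in the text.
    """
    m1, m2 = 0.1593, 78.8438
    c1, c2, c3 = 0.8359, 18.8516, 18.6875
    l = max(0.0, min(1.0, l))          # clip, mirroring the max/min functions in the text
    num = c1 + c2 * (l ** m1)
    den = 1.0 + c3 * (l ** m1)
    return (num / den) ** m2
```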
Step S204: process the corrected components to obtain the video code stream.
Based on the result obtained after the correction in step S203, obtaining the video code stream involves the following steps:
(1) Color space conversion: from R'G'B' to Y'CbCr.
Here the conversion matrix T is the matrix used when converting from the R'G'B' color space to the Y'CbCr color space.
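The matrix T is shown as a figure in the original filing. A consistent forward transform can be recovered from the inverse formulas quoted later in this text (R' = Y' + 1.47460*Cr, G' = Y' - 0.16455*Cb - 0.57135*Cr, B' = Y' + 1.88140*Cb); the sketch below uses that derived form and should be treated as a reconstruction rather than a quotation of T.

```python
def rgb_prime_to_ycbcr(r, g, b):
    """Forward R'G'B' -> Y'CbCr, derived from the inverse transform
    given later in the text (BT.2020-style non-constant luminance)."""
    y = 0.262700 * r + 0.678000 * g + 0.059300 * b
    cb = (b - y) / 1.88140
    cr = (r - y) / 1.47460
    return y, cb, cr
```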
(2) Quantize the color-converted video to the 10-bit range.
Specifically, the following steps are performed:
First, extract the bit depth BitDepthY of the Y' component of the color-space-converted video, and the bit depth BitDepthC of its Cb and Cr components.
In the specific implementation, since the quantization range of the high dynamic range video to be processed needs to be converted from 16 bits to 10 bits, BitDepthY and BitDepthC both take the target value 10.
Next, obtain according to Formula 5 the quantized value DY' corresponding to the Y' component, the quantized value DCb corresponding to the Cb component, and the quantized value DCr corresponding to the Cr component of the quantized video,
where Round(x) = Sign(x)*Floor(Abs(x)+0.5).
To complete the above calculation, the expression of the function Round(x) is first determined from the definition of Sign(x) and the condition that Floor(x) returns the largest integer less than or equal to x.
Second, according to the expressions of the above formulas, the quantized value DY' corresponding to the Y' component, the quantized value DCb corresponding to the Cb component, and the quantized value DCr corresponding to Cr are determined respectively, where << denotes the left-shift operator.
This process is determined by the standard test framework: the video output by the decoder has 10-bit integer samples, whereas the final reconstructed video requires 16 bits per sample, so dequantization is needed.
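Formula 5 itself is not reproduced in the text, but the rounding rule it relies on is stated explicitly: Round(x) = Sign(x)*Floor(Abs(x)+0.5), with Floor(x) the largest integer not greater than x. The sketch below is a direct transcription of that rule; the 10-bit scaling applied by Formula 5 is omitted because its exact expression is not given in the text.

```python
import math

def sign(x):
    """Sign(x): -1, 0 or 1."""
    return (x > 0) - (x < 0)

def round_half_away(x):
    """Round(x) = Sign(x) * Floor(Abs(x) + 0.5), as defined in the text."""
    return sign(x) * math.floor(abs(x) + 0.5)
```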
(3) Convert the video format from 4:4:4 to 4:2:0 by downsampling.
Similar schemes for this part exist in the prior art, so it is not described in detail here.
Optionally, processing the quantization adjustment factor and combining the result with the video code stream to obtain an input code stream includes:
binarizing the quantization adjustment factor and encoding the binarized result to obtain an encoded code stream;
writing the encoded code stream into a data unit and combining it with the video code stream to obtain an input code stream carrying the encoded code stream;
where the data unit includes a parameter set, or an auxiliary information unit, or a user-defined data unit.
In implementation, the specific processing of step S13 can include:
S301: binarize the quantization adjustment factor and encode the binarized result to obtain an encoded code stream.
The binarization here can directly convert the value of the quantization adjustment factor into a binary representation, or, where higher data processing precision is required, convert the value of the quantization adjustment factor into the binary representation of one or more integer parameters. For details, refer to the explanation of Formula 1 above.
S302: write the encoded code stream into a data unit and combine it with the video code stream to obtain an input code stream carrying the encoded code stream.
The data unit here includes a parameter set, or an auxiliary information unit, or a user-defined data unit.
The reason this step performs the processing shown in S301 to S302 is that, in order for the video code stream to be encoded accurately, a variable carrying the description parameters of the video code stream is added, and this variable contains the specific parameters of the video code stream.
In existing video coding protocols, the relevant description parameters can be stored in any one of a parameter set, an auxiliary information unit, or a user-defined data unit; in actual encoding, the developer can choose any one of the three according to the specific situation.
After step S302 is completed, an input code stream containing the video code stream and the encoded code stream is obtained. The input code stream is fed into the HEVC Main 10 encoder/decoder for subsequent encoding and decoding.
Correspondingly, at the decoding end, the adaptive-perceptual-quantization-based high dynamic range video compression encoding method includes:
Step S21: obtain the output code stream from the encoder/decoder, parse the output code stream, and obtain the quantization adjustment factor and the video code stream to be restored.
Step S22: process the video code stream to be restored according to the quantization adjustment factor to obtain the final video image.
In implementation, the encoder/decoder encodes and decodes the input code stream to obtain the output code stream.
At the decoding end, the output code stream is parsed, processing is performed according to the parsed content, and a final video image with reduced quantization loss is obtained.
Optionally, parsing the input code stream to obtain the quantization adjustment factor and the video code stream to be restored includes: parsing the input code stream and obtaining from it the video code stream to be restored and a data unit; obtaining an encoded code stream from the data unit; and processing the encoded code stream to obtain the quantization adjustment factor, where the data unit includes a parameter set, or an auxiliary information unit, or a user-defined data unit.
In the embodiment of the present invention, step S21 can be implemented as follows:
S401: parse the input code stream and obtain from it the video code stream to be restored and a data unit.
The parsed video code stream to be restored is processed in the subsequent steps to obtain the final video image.
S402: obtain the encoded code stream from the data unit.
As mentioned in step S302, the variable carrying the description parameters of the video code stream is stored in any one of the parameter set, the auxiliary information unit, and the user-defined data unit; therefore, this step extracts the previously stored encoded code stream from one of these three.
S403: process the encoded code stream to obtain the quantization adjustment factor.
To obtain the quantization adjustment factor, the value of a parameter in the encoded code stream can be set as the quantization adjustment factor, or the output value obtained by computing a parameter in the encoded code stream according to a set operation rule can be used as the quantization adjustment factor.
After the quantization adjustment factor has been obtained, the video code stream to be restored is processed based on the quantization adjustment factor in the subsequent steps.
Optionally, processing the video code stream to be restored according to the quantization adjustment factor to obtain the final video image includes:
processing the video code stream to be restored to obtain a video image to be restored, and extracting the pixel value components of the video image to be restored;
determining, according to the quantization adjustment factor ratio and based on Formula 3, the adaptive inverse encoding function inverseAPQ_TF,
where the coefficients m1 and m2 are 0.1593 and 78.8438 respectively, c1, c2 and c3 are 0.8359, 18.8516 and 18.6875 respectively, and the function max(x,y) returns the larger of the two values;
correcting the pixel value components of the video image to be restored based on the adaptive inverse encoding function inverseAPQ_TF to obtain corrected components;
performing reconstruction based on the corrected components to obtain the final video image.
In implementation, step S22 can be realized as follows:
S501: process the video code stream to be restored to obtain a video image to be restored, and extract the pixel value components of the video image to be restored.
Obtaining the video image to be restored in this step involves the following steps:
(1) Convert the video format from 4:2:0 to 4:4:4 by upsampling.
This is the inverse of sub-step (3) of step S204 above; as before, similar schemes exist in the prior art, so it is not described in detail here.
(2) Dequantize the chroma-upsampled video.
First, extract the bit depth BitDepthY of the Y' component of the upsampled video and the bit depth BitDepthC of its Cb and Cr components, and at the same time obtain the quantized value DY' corresponding to the Y' component, the quantized value DCb corresponding to the Cb component, and the quantized value DCr corresponding to Cr of the video to be inverse-transformed.
According to the corresponding formula, the upsampled video is dequantized to the original bit range, giving the dequantized video composed of the components Y', Cb and Cr.
After this step, the video upsampled in the previous step is converted from the 10-bit range back to the original 16-bit range for further processing in the subsequent steps.
This process is determined by the standard test framework: the video output by the decoder has 10-bit integer samples, whereas the final reconstructed video requires 16 bits per sample, so dequantization is required.
(3) Inverse color space transform: from Y'CbCr to R'G'B'.
After the processing in (2), the dequantized video is obtained; its color space still needs to be transformed back, i.e. converted from the Y'CbCr color space to the original R'G'B' color space. The inverse transform is based on the following formulas:
R' = ClipRGB(Y' + 1.47460*Cr)
G' = ClipRGB(Y' - 0.16455*Cb - 0.57135*Cr)
B' = ClipRGB(Y' + 1.88140*Cb),
and the dequantized video is inverse color-space transformed according to the above formulas,
where ClipRGB(x) = Clip3(0, 1, x).
The inverse color space transform is needed here because of the standard test framework: the video output by the decoder is in YCbCr format, whereas the final video is required to be in RGB format.
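The inverse color transform above is fully specified in the text, so it can be transcribed directly; only the function names are choices made in this sketch.

```python
def clip3(lo, hi, x):
    """Clip3(lo, hi, x): clamp x to the range [lo, hi]."""
    return max(lo, min(hi, x))

def clip_rgb(x):
    """ClipRGB(x) = Clip3(0, 1, x), as defined in the text."""
    return clip3(0.0, 1.0, x)

def ycbcr_to_rgb_prime(y, cb, cr):
    """Inverse transform Y'CbCr -> R'G'B' using the coefficients in the text."""
    r = clip_rgb(y + 1.47460 * cr)
    g = clip_rgb(y - 0.16455 * cb - 0.57135 * cr)
    b = clip_rgb(y + 1.88140 * cb)
    return r, g, b
```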
S502: determine, according to the quantization adjustment factor ratio and based on Formula 3, the adaptive inverse encoding function inverseAPQ_TF,
where the coefficients m1 and m2 are 0.1593 and 78.8438 respectively, c1, c2 and c3 are 0.8359, 18.8516 and 18.6875 respectively, and the function max(x,y) returns the larger of the two values.
S503: correct the pixel value components of the video image to be restored based on the adaptive inverse encoding function inverseAPQ_TF to obtain corrected components.
According to the adaptive inverse encoding function inverseAPQ_TF, the pixel value components of the video image to be restored are corrected based on the corresponding formula,
where R' is the value of the red component of a single pixel in the inverse-transformed video, G' is the value of the green component, and B' is the value of the blue component; R is the value of the red component of that pixel in the corrected video, G is the value of the green component, and B is the value of the blue component.
After the correction, the component values of the R, G and B channels corresponding to each pixel of the video image to be restored are obtained.
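As with Formula 2, Formula 3 (inverseAPQ_TF) is given as a figure. For orientation only, the inverse of the standard PQ curve with the quoted constants is sketched below; the adaptive version additionally depends on ratio in a way defined by Formula 3 that is not reproduced here.

```python
def inverse_pq_tf(n):
    """Inverse of the standard PQ curve (non-adaptive baseline).

    n: PQ-encoded value in [0, 1]; returns normalized linear luminance.
    The max(..., 0) guard mirrors the max(x, y) function mentioned in the text.
    """
    m1, m2 = 0.1593, 78.8438
    c1, c2, c3 = 0.8359, 18.8516, 18.6875
    p = n ** (1.0 / m2)
    num = max(p - c1, 0.0)
    den = c2 - c3 * p
    return (num / den) ** (1.0 / m1)
```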
S504: perform reconstruction based on the corrected components to obtain the final video image.
Based on the R, G and B channel component values corresponding to each pixel of the video image to be restored obtained in step S503, image reconstruction is performed to obtain the final video image.
In the overall processing flow, the quantization adjustment factor is computed from the maximum and minimum luminance values of the input video to be processed. An adaptive encoding transform function is derived from the quantization adjustment factor and used to convert the input video to be processed. The quantization adjustment factor is written into the encoded code stream of the video image. The video converted by the adaptive encoding transform function is pre-processed and converted into a format supported by HEVC Main 10. HEVC Main 10 is used to encode and decode the pre-processed video. The decoded video is post-processed. The code stream is parsed to obtain the quantization adjustment factor. An adaptive inverse encoding transform function is derived from the quantization adjustment factor and used to convert the post-processed video, yielding the reconstructed HDR video.
By encoding HDR video with an HVS-based perception-driven method, not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced. In addition, the size of the quantization interval is adaptively adjusted according to the luminance range of the input HDR video, so that with a fixed number of coding bits, the quantization values can be used more fully and the accuracy of HDR video encoding is improved.
During processing, fewer bits are allocated to regions to which the human eye is insensitive and more bits are allocated to regions to which it is sensitive, so that a satisfactory result is obtained with a fixed number of coding bits. The calculation of the quantization adjustment factor depends on the luminance of the input HDR video. The original method (i.e. PQ) takes the luminance range as a fixed value, whereas the proposed method computes the luminance range from the video. A larger luminance range corresponds to larger distortion and a smaller range to smaller distortion (for the same number of bits), so the distortion of the proposed method is smaller than that of the original method. For detailed verification, refer to the simulation results later in this text.
The present invention proposes a video encoding method based on adaptive perceptual quantization, which includes: determining a quantization adjustment factor according to the video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; and processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream. By encoding the HDR video with a perception-driven method, not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced; moreover, the size of the quantization interval is adaptively adjusted according to the luminance range of the input HDR video, so that with a fixed number of coding bits, the quantization values can be used more fully, the accuracy of HDR video encoding is improved, and the quantization loss is reduced.
Embodiment 2
A video encoding system based on adaptive perceptual quantization is provided. FIG. 2 is a schematic structural diagram of the video encoding system based on adaptive perceptual quantization according to an embodiment of the present invention. As shown in FIG. 2, the system includes:
a first control unit 31 configured to execute the encoding method in the above video encoding method based on adaptive perceptual quantization;
a second control unit 32 configured to execute the decoding method in the above video encoding method based on adaptive perceptual quantization.
The present invention proposes a video encoding system based on adaptive perceptual quantization, which involves determining a quantization adjustment factor according to the video image to be processed, processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream, and processing the quantization adjustment factor and combining it with the video code stream to obtain an input code stream. By encoding the HDR video with a perception-driven method, not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced; moreover, the size of the quantization interval is adaptively adjusted according to the luminance range of the input HDR video, so that with a fixed number of coding bits, the quantization values can be used more fully, the accuracy of HDR video encoding is improved, and the quantization loss is reduced.
Corresponding to Embodiment 1 and Embodiment 2 above, the embodiments of the present invention further provide an image encoding method and an image decoding method.
Embodiment 3
An embodiment of the present invention provides an image encoding method corresponding to Embodiment 1. FIG. 3 is a flowchart of the image encoding method according to an embodiment of the present invention. As shown in FIG. 3, the steps of the method include:
Step S302: determine an adjustment factor according to pixel sampling values of a video image;
Step S304: transform the video image according to the adjustment factor, and encode the transformed video image;
Step S306: write the code stream obtained by encoding the adjustment factor into the code stream of the encoded video image.
Optionally, determining the adjustment factor according to the pixel sampling values of the video image in step S302 can be implemented as follows:
Step S302-1: convert the pixel sampling values of the video image into pixel luminance values;
Step S302-2: determine the maximum luminance value and the minimum luminance value among the pixel luminance values;
Step S302-3: determine the adjustment factor according to the maximum luminance value and the minimum luminance value.
Optionally, determining the adjustment factor according to the maximum and minimum luminance values in step S302-3 can include:
S302-31: calculate the difference between the maximum luminance value and the minimum luminance value;
S302-32: set a linear weighting of the logarithm of the difference as a first adjustment factor;
S302-33: set the first adjustment factor as the adjustment factor; or set the reciprocal of the first adjustment factor as the adjustment factor.
Optionally, transforming the video image according to the adjustment factor in step S304 of the embodiment of the present invention can include:
Step S304-1: correct the sample components of the pixel sampling values of the video image according to the adjustment factor;
Step S304-2: obtain the transformed values of the sample components from the output values of the correction.
It should be noted that correcting the sample components of the pixel sampling values of the video image includes: mapping the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent, as in the sketch below.
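The correction in steps S304-1 and S304-2 is described as a mapping whose exponent is the adjustment factor (or a weighted value of it). A minimal sketch of that reading follows; the normalization of the sample to [0, 1] and the default weight are assumptions made here, not values prescribed by the text.

```python
def correct_sample(sample, adjustment_factor, weight=1.0):
    """Map a normalized sample component (assumed in [0, 1]) with the
    adjustment factor, optionally weighted, as the exponent."""
    exponent = weight * adjustment_factor
    return sample ** exponent
```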
Optionally, writing the code stream obtained by encoding the adjustment factor into the code stream of the encoded video image in step S306 of this embodiment can include:
Step S306-1: binarize the value of the adjustment factor;
Step S306-2: encode the binarized output, and write the coded bits into a data unit in the code stream of the video image, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit.
Optionally, binarizing the value of the adjustment factor includes at least one of: converting the value of the adjustment factor into a binary representation; and converting the value of the adjustment factor into the binary representation of one or more integer parameters, as illustrated after this paragraph.
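One possible realization of the "one or more integer parameters" option in steps S306-1/S306-2 is to carry the factor as a fixed-point integer and emit its binary representation; the scaling (frac_bits) and bit width below are hypothetical choices for illustration, not values mandated by the text.

```python
def binarize_adjustment_factor(ratio, frac_bits=16, width=32):
    """Return the bit string of one integer parameter representing ratio.

    The factor is carried as a fixed-point integer (an example of the
    'one or more integer parameters' option); frac_bits and width are
    hypothetical parameters chosen for this sketch.
    """
    fixed = int(round(ratio * (1 << frac_bits)))
    return format(fixed & ((1 << width) - 1), '0{}b'.format(width))
```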
Based on the above image encoding method, this embodiment provides an image encoding apparatus. FIG. 4 is a schematic structural diagram of the image encoding apparatus according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes: a determining module 42 configured to determine an adjustment factor according to pixel sampling values of a video image; an encoding module 44, coupled to the determining module 42, configured to transform the video image according to the adjustment factor and encode the transformed video image; and a writing module 46, coupled to the encoding module 44, configured to write the code stream obtained by encoding the adjustment factor into the code stream of the encoded video image.
Optionally, the determining module includes: a conversion unit configured to convert the pixel sampling values of the video image into pixel luminance values; a first determining unit, coupled to the conversion unit, configured to determine the maximum and minimum luminance values among the pixel luminance values; and a second determining unit, coupled to the first determining unit, configured to determine the adjustment factor according to the maximum and minimum luminance values.
Optionally, the second determining unit includes: a calculation subunit configured to calculate the difference between the maximum and minimum luminance values; a first setting subunit, coupled to the calculation subunit, configured to set a linear weighting of the logarithm of the difference as a first adjustment factor; and a second setting subunit, coupled to the first setting subunit, configured to set the first adjustment factor as the adjustment factor, or to set the reciprocal of the first adjustment factor as the adjustment factor.
Optionally, the encoding module 44 includes: a first correction unit configured to correct the sample components of the pixel sampling values of the video image according to the adjustment factor; and an encoding unit, coupled to the first correction unit, configured to obtain the transformed values of the sample components from the output values of the correction.
The first correction unit includes a first mapping subunit configured to map the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
Optionally, the writing module 46 includes: a binarization unit configured to binarize the value of the adjustment factor; and a writing unit, coupled to the binarization unit, configured to encode the binarized output and write the coded bits into a data unit in the code stream of the video image, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit.
Optionally, the binarization unit includes at least one of: a first conversion subunit configured to convert the value of the adjustment factor into a binary representation; and a second conversion subunit configured to convert the value of the adjustment factor into the binary representation of one or more integer parameters.
Embodiment 4
Corresponding to Embodiment 2 above, an embodiment of the present invention further provides an image decoding method. FIG. 5 is a flowchart of the image decoding method according to an embodiment of the present invention. As shown in FIG. 5, the steps of the method include:
Step S502: parse a code stream to obtain an adjustment factor;
Step S504: transform a decoded and restored image according to the adjustment factor;
where the decoded and restored image includes an image obtained by decoding the code stream, or a post-processed version of the image obtained by decoding the code stream.
Optionally, in the embodiment of the present invention, parsing the code stream and obtaining the adjustment factor from the parsed code stream in step S502 includes:
Step S502-1: parse a data unit in the code stream to obtain a parameter used for determining the adjustment factor, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit;
Step S502-2: determine the value of the adjustment factor according to the parameter.
Determining the value of the adjustment factor according to the parameter includes: setting the value of the parameter as the value of the adjustment factor; or setting the output value obtained by computing the parameter according to a preset operation rule as the value of the adjustment factor.
Optionally, transforming the decoded and restored image according to the adjustment factor in step S504 can include:
Step S504-1: correct the sample components of the pixel sampling values of the decoded and restored image according to the adjustment factor;
Step S504-2: calculate the transformed values of the sample components from the output values of the correction.
Correcting the sample components of the pixel sampling values of the decoded and restored image includes: mapping the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
Based on the above image decoding method, an image decoding apparatus is further provided. FIG. 6 is a schematic structural diagram of the image decoding apparatus according to an embodiment of the present invention. As shown in FIG. 6, the apparatus includes: a decoding module 62 configured to parse a code stream and obtain an adjustment factor; and a transform module 64, coupled to the decoding module 62, configured to transform a decoded and restored image according to the adjustment factor, where the decoded and restored image includes an image obtained by decoding the code stream, or a post-processed version of the image obtained by decoding the code stream.
Optionally, the decoding module 62 includes: a decoding unit configured to parse a data unit in the code stream to obtain a parameter used for determining the adjustment factor, where the data unit includes at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit; and a third determining unit configured to determine the value of the adjustment factor according to the parameter.
Optionally, the third determining unit includes: a third setting subunit configured to set the value of the parameter as the value of the adjustment factor; or a fourth setting subunit configured to set the output value obtained by computing the parameter according to a preset operation rule as the value of the adjustment factor.
Optionally, the transform module 64 includes: a second correction unit configured to correct the sample components of the pixel sampling values of the decoded and restored image according to the adjustment factor; and a calculation unit, coupled to the second correction unit, configured to calculate the transformed values of the sample components from the output values of the correction.
Optionally, the second correction unit includes a second mapping subunit configured to map the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
Embodiment 5
Based on Embodiments 3 and 4 above, this embodiment further provides an image encoding and decoding system, which includes the encoding apparatus of Embodiment 3 and the image decoding apparatus of Embodiment 4.
The effect of the embodiments of the present invention can be further illustrated by the following simulation experiments.
1. Simulation conditions:
CPU: Intel(R) Core(TM) i3 processor M350, 2.27 GHz, 2 GB of memory; operating system: Windows 7; simulation platform: HEVC Main 10 reference software HM16.6.
Two 16-bit HDR test sequences in 4:4:4 format (Market3 and Balloon) with a resolution of 1920x1080 were selected and encoded with the Main 10 Profile. The HM16.6 quantization parameter QP was set to 21, 25, 29 and 33, 50 frames were encoded, and the GOP structure was one I frame followed by 49 P frames.
2. Simulation content:
In the simulation experiments, the performance of the method of the present invention and of the existing HDR video compression encoding system was tested on the two video sequences.
Simulation 1: the Market3 video sequence was encoded with the HDR anchor and with the method of the present invention. Tables 1 and 2 give the tPSNR and PSNR_DE of the HDR anchor and of the method of the present invention, respectively, when encoding the Market3 sequence.
Table 1. HDR anchor encoding results (Market3 sequence)
QP | tPSNR_X | tPSNR_Y | tPSNR_Z | tPSNR_XYZ | PSNR_DE |
---|---|---|---|---|---|
33 | 33.890 | 34.085 | 31.873 | 33.162 | 30.643 |
29 | 36.208 | 36.559 | 33.821 | 35.350 | 31.223 |
25 | 38.864 | 39.355 | 36.057 | 37.835 | 32.232 |
21 | 41.633 | 42.391 | 38.308 | 40.394 | 33.090 |
Table 2. Encoding results of the method of the present invention (Market3 sequence)
QP | tPSNR_X | tPSNR_Y | tPSNR_Z | tPSNR_XYZ | PSNR_DE |
---|---|---|---|---|---|
33 | 34.033 | 34.222 | 31.950 | 33.274 | 30.764 |
29 | 36.405 | 36.751 | 33.938 | 35.508 | 31.394 |
25 | 39.108 | 39.612 | 36.183 | 38.023 | 32.319 |
21 | 41.894 | 42.671 | 38.474 | 40.608 | 33.181 |
The tPSNR value reflects the difference between the reconstructed video and the original video: the larger the tPSNR, the better the quality of the reconstructed video. The PSNR_DE value reflects the color difference between the reconstructed video and the original video: the larger the PSNR_DE, the better the color of the reconstructed video is preserved. Tables 1 and 2 show that the quality of the video reconstructed by the method of the present invention is better than that of the HDR anchor, and the color is preserved better.
Simulation 2: the Balloon video sequence was encoded with the HDR anchor and with the method of the present invention. Tables 3 and 4 give the tPSNR and PSNR_DE of the HDR anchor and of the method of the present invention, respectively, when encoding the Balloon sequence.
Table 3. HDR anchor encoding results (Balloon sequence)
QP | tPSNR_X | tPSNR_Y | tPSNR_Z | tPSNR_XYZ | PSNR_DE |
---|---|---|---|---|---|
33 | 36.048 | 37.591 | 33.154 | 35.198 | 32.723 |
29 | 38.374 | 40.221 | 35.094 | 37.368 | 33.734 |
25 | 40.977 | 43.073 | 37.351 | 39.813 | 34.993 |
21 | 43.596 | 46.056 | 39.523 | 42.216 | 36.045 |
Table 4. Encoding results of the method of the present invention (Balloon sequence)
QP | tPSNR_X | tPSNR_Y | tPSNR_Z | tPSNR_XYZ | PSNR_DE |
---|---|---|---|---|---|
33 | 36.170 | 37.703 | 33.197 | 35.276 | 32.842 |
29 | 38.521 | 40.346 | 35.169 | 37.471 | 33.731 |
25 | 41.148 | 43.193 | 37.402 | 39.905 | 35.166 |
21 | 43.733 | 46.162 | 39.589 | 42.305 | 36.168 |
Tables 3 and 4 likewise show that the quality of the video reconstructed by the method of the present invention is better than that of the HDR anchor, and the color is preserved better (PSNR_DE drops slightly at QP=29, but the drop is very small).
Simulation 3: with QP=29, the test sequence Market3 was encoded with the HDR anchor and with the method of the present invention, and the reconstructed images of the 2nd frame of the test video are shown in FIGS. 7(a) to 7(d), where:
FIG. 7(a) is the reconstructed frame obtained with the HDR anchor;
FIG. 7(b) is a partially enlarged view of FIG. 7(a);
FIG. 7(c) is the reconstructed frame obtained with the method of the present invention;
FIG. 7(d) is a partially enlarged view of FIG. 7(c).
Comparing FIGS. 7(a) to 7(d), the visual quality of the reconstructed frame obtained with the method of the present invention is better than that of the HDR anchor: it retains more of the structural and detail information of the original frame and reduces blur (blue box in FIG. 7(d)), and it also preserves the color of the original frame better (red box in FIG. 7(d)).
Simulation 4: with QP=29, the test sequence Balloon was encoded with the HDR anchor and with the method of the present invention, and the reconstructed images of the 8th frame of the test video are shown in FIGS. 8(a) to 8(f), where:
FIG. 8(a) is the reconstructed frame obtained with the HDR anchor;
FIGS. 8(b) and 8(c) are partially enlarged views of different regions of FIG. 8(a);
FIG. 8(d) is the reconstructed frame obtained with the method of the present invention;
FIGS. 8(e) and 8(f) are partially enlarged views of different regions of FIG. 8(d).
Comparing FIGS. 8(b) and 8(e) shows that the method of the present invention preserves the color of the original frame better. Comparing FIGS. 8(c) and 8(f) shows that the reconstructed frame obtained with the method of the present invention has clearer structure and detail. Therefore, the visual quality of the reconstructed frame obtained with the method of the present invention is better than that of the HDR anchor.
The simulation results show that the present invention encodes HDR video with an adaptive perception-driven method, allocating fewer bits to regions to which the human eye is insensitive and more bits to regions to which it is sensitive. Not only can the luminance range visible to the human eye be encoded, but the number of bits required for encoding is also effectively reduced. At the same time, the size of the quantization interval is adaptively adjusted according to the luminance range of the input HDR video, so that the quantization values can be used more fully and the accuracy of HDR video encoding is improved.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; optionally, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
In the image encoding and decoding processes of the embodiments of the present invention, a quantization adjustment factor is determined according to the video image to be processed; the video image to be processed is processed according to the quantization adjustment factor to obtain a video code stream; the quantization adjustment factor is processed and combined with the video code stream to obtain an input code stream; and the input code stream is transmitted to an encoder/decoder for encoding and decoding. This solves the problem in the related art that quantization values cannot be fully utilized and quantization loss occurs when a specific HDR video is encoded, so that the quantization values are used more fully, the accuracy of HDR video encoding is improved, and the quantization loss is reduced.
Claims (34)
- A video encoding method based on adaptive perceptual quantization, applied at an encoding end, wherein the method comprises: determining a quantization adjustment factor according to a video image to be processed; processing the video image to be processed according to the quantization adjustment factor to obtain a video code stream; processing the quantization adjustment factor and combining the result with the video code stream to obtain an input code stream; and transmitting the input code stream to an encoder/decoder for encoding and decoding.
- The video encoding method based on adaptive perceptual quantization according to claim 1, wherein determining the quantization adjustment factor according to the video image to be processed comprises: performing color space conversion on the video image to be processed and acquiring the luminance component of the converted video image; extracting the maximum luminance value and the minimum luminance value from the luminance component; and determining the quantization adjustment factor according to the maximum luminance value and the minimum luminance value.
- The video encoding method based on adaptive perceptual quantization according to claim 1, wherein processing the quantization adjustment factor and combining the result with the video code stream to obtain an input code stream comprises: binarizing the quantization adjustment factor and encoding the binarized result to obtain an encoded code stream; and writing the encoded code stream into a data unit and combining it with the video code stream to obtain an input code stream carrying the encoded code stream, wherein the data unit comprises a parameter set, or an auxiliary information unit, or a user-defined data unit.
- A video encoding method based on adaptive perceptual quantization, applied at a decoding end, wherein the adaptive-perceptual-quantization-based high dynamic range video compression encoding method comprises: parsing an input code stream to obtain a quantization adjustment factor and a video code stream to be restored; and processing the video code stream to be restored according to the quantization adjustment factor to obtain a final video image.
- The video encoding method based on adaptive perceptual quantization according to claim 6, wherein parsing the input code stream to obtain the quantization adjustment factor and the video code stream to be restored comprises: parsing the input code stream and obtaining from it the video code stream to be restored and a data unit; obtaining an encoded code stream from the data unit; and processing the encoded code stream to obtain the quantization adjustment factor, wherein the data unit comprises a parameter set, or an auxiliary information unit, or a user-defined data unit.
- The video encoding method based on adaptive perceptual quantization according to claim 6, wherein processing the video code stream to be restored according to the quantization adjustment factor to obtain a final video image comprises: processing the video code stream to be restored to obtain a video image to be restored and extracting the pixel value components of the video image to be restored; determining, according to the quantization adjustment factor ratio and based on Formula 3, the adaptive inverse encoding function inverseAPQ_TF, wherein the coefficients m1 and m2 are 0.1593 and 78.8438 respectively, c1, c2 and c3 are 0.8359, 18.8516 and 18.6875 respectively, and the function max(x,y) returns the larger of the two values; correcting the pixel value components of the video image to be restored based on the adaptive inverse encoding function inverseAPQ_TF to obtain corrected components; and performing reconstruction based on the corrected components to obtain the final video image.
- A video encoding system based on adaptive perceptual quantization, wherein the system comprises: a first control unit configured to execute the video encoding method based on adaptive perceptual quantization according to claim 1; and a second control unit configured to execute the video encoding method based on adaptive perceptual quantization according to claim 7.
- An image encoding method, comprising: determining an adjustment factor according to pixel sampling values of a video image; transforming the video image according to the adjustment factor and encoding the transformed video image; and writing the code stream obtained by encoding the adjustment factor into the code stream of the encoded video image.
- The method according to claim 10, wherein determining the adjustment factor according to the pixel sampling values of the video image comprises: converting the pixel sampling values of the video image into pixel luminance values; determining the maximum luminance value and the minimum luminance value among the pixel luminance values; and determining the adjustment factor according to the maximum luminance value and the minimum luminance value.
- The method according to claim 11, wherein determining the adjustment factor according to the maximum and minimum luminance values comprises: calculating the difference between the maximum luminance value and the minimum luminance value; setting a linear weighting of the logarithm of the difference as a first adjustment factor; and setting the first adjustment factor as the adjustment factor, or setting the reciprocal of the first adjustment factor as the adjustment factor.
- The method according to claim 10, wherein transforming the video image according to the adjustment factor comprises: correcting the sample components of the pixel sampling values of the video image according to the adjustment factor; and obtaining the transformed values of the sample components from the output values of the correction.
- The method according to claim 13, wherein correcting the sample components of the pixel sampling values of the video image comprises: mapping the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
- The method according to claim 10, wherein writing the code stream obtained by encoding the adjustment factor into the code stream of the encoded video image comprises: binarizing the value of the adjustment factor; and encoding the binarized output and writing the coded bits into a data unit in the code stream of the video image, wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit.
- The method according to claim 15, wherein binarizing the value of the adjustment factor comprises at least one of: converting the value of the adjustment factor into a binary representation; and converting the value of the adjustment factor into the binary representation of one or more integer parameters.
- An image encoding apparatus, comprising: a determining module configured to determine an adjustment factor according to pixel sampling values of a video image; an encoding module configured to transform the video image according to the adjustment factor and encode the transformed video image; and a writing module configured to write the code stream obtained by encoding the adjustment factor into the code stream of the encoded video image.
- The apparatus according to claim 17, wherein the determining module comprises: a conversion unit configured to convert the pixel sampling values of the video image into pixel luminance values; a first determining unit configured to determine the maximum and minimum luminance values among the pixel luminance values; and a second determining unit configured to determine the adjustment factor according to the maximum and minimum luminance values.
- The apparatus according to claim 18, wherein the second determining unit comprises: a calculation subunit configured to calculate the difference between the maximum and minimum luminance values; a first setting subunit configured to set a linear weighting of the logarithm of the difference as a first adjustment factor; and a second setting subunit configured to set the first adjustment factor as the adjustment factor, or to set the reciprocal of the first adjustment factor as the adjustment factor.
- The apparatus according to claim 17, wherein the encoding module comprises: a first correction unit configured to correct the sample components of the pixel sampling values of the video image according to the adjustment factor; and an encoding unit configured to obtain the transformed values of the sample components from the output values of the correction.
- The apparatus according to claim 20, wherein the first correction unit comprises: a first mapping subunit configured to map the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
- The apparatus according to claim 17, wherein the writing module comprises: a binarization unit configured to binarize the value of the adjustment factor; and a writing unit configured to encode the binarized output and write the coded bits into a data unit in the code stream of the video image, wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit.
- The apparatus according to claim 22, wherein the binarization unit comprises at least one of: a first conversion subunit configured to convert the value of the adjustment factor into a binary representation; and a second conversion subunit configured to convert the value of the adjustment factor into the binary representation of one or more integer parameters.
- An image decoding method, comprising: parsing a code stream to obtain an adjustment factor; and transforming a decoded and restored image according to the adjustment factor, wherein the decoded and restored image comprises an image obtained by decoding the code stream, or a post-processed version of the image obtained by decoding the code stream.
- The method according to claim 24, wherein parsing the code stream to obtain the adjustment factor comprises: parsing a data unit in the code stream to obtain a parameter used for determining the adjustment factor, wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit; and determining the value of the adjustment factor according to the parameter.
- The method according to claim 25, wherein determining the value of the adjustment factor according to the parameter comprises: setting the value of the parameter as the value of the adjustment factor; or setting the output value obtained by computing the parameter according to a preset operation rule as the value of the adjustment factor.
- The method according to claim 24, wherein transforming the decoded and restored image according to the adjustment factor comprises: correcting the sample components of the pixel sampling values of the decoded and restored image according to the adjustment factor; and calculating the transformed values of the sample components from the output values of the correction.
- The method according to claim 27, wherein correcting the sample components of the pixel sampling values of the decoded and restored image comprises: mapping the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
- An image decoding apparatus, comprising: a decoding module configured to parse a code stream and obtain an adjustment factor; and a transform module configured to transform a decoded and restored image according to the adjustment factor, wherein the decoded and restored image comprises an image obtained by decoding the code stream, or a post-processed version of the image obtained by decoding the code stream.
- The apparatus according to claim 29, wherein the decoding module comprises: a decoding unit configured to parse a data unit in the code stream to obtain a parameter used for determining the adjustment factor, wherein the data unit comprises at least one of: a parameter set, an auxiliary information unit, and a user-defined data unit; and a third determining unit configured to determine the value of the adjustment factor according to the parameter.
- The apparatus according to claim 30, wherein the third determining unit comprises: a third setting subunit configured to set the value of the parameter as the value of the adjustment factor; or a fourth setting subunit configured to set the output value obtained by computing the parameter according to a preset operation rule as the value of the adjustment factor.
- The apparatus according to claim 29, wherein the transform module comprises: a second correction unit configured to correct the sample components of the pixel sampling values of the decoded and restored image according to the adjustment factor; and a calculation unit configured to calculate the transformed values of the sample components from the output values of the correction.
- The apparatus according to claim 32, wherein the second correction unit comprises: a second mapping subunit configured to map the sample components with the adjustment factor, or a weighted value of the adjustment factor, as the exponent.
- An image encoding and decoding system, comprising the encoding apparatus according to any one of claims 17 to 23 and the image decoding apparatus according to any one of claims 29 to 33.