WO2021136110A1 - Encoding method and encoder - Google Patents

Encoding method and encoder

Info

Publication number: WO2021136110A1
Authority: WO (WIPO, PCT)
Prior art keywords: frequency domain, coded, prediction, transform coefficients, coding
Application number: PCT/CN2020/139681
Other languages: English (en), French (fr)
Inventors: 冯俊凯, 曲波, 王丽萍
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP20910708.5A (published as EP4075803A4)
Publication of WO2021136110A1
Priority to US17/853,714 (published as US20220329818A1)

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including the following subgroups:
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/124 Quantisation
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/152 Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
    • H04N19/176 Coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/18 Coding unit being a set of transform coefficients
    • H04N19/184 Coding unit being bits, e.g. of the compressed video stream
    • H04N19/61 Transform coding in combination with predictive coding

Definitions

  • This application relates to the field of image processing, and in particular to an encoding method and encoder.
  • Compression technology can effectively reduce the memory and bandwidth occupied by a video processing system and thereby reduce the cost of the video processing system.
  • Visually lossless lossy compression usually achieves a higher compression ratio, which can save more memory and bandwidth.
  • Code rate control must ensure that the code rate meets the system memory and bandwidth limitations. Under the limit condition where the encoding code rate is about to reach the upper limit of the system, the prior art usually compresses the code rate by forced means so that it meets the system memory and bandwidth limitations; at this time, large visual distortions are often introduced into the decoded image. Therefore, there is still room for optimization in the existing mandatory rate control methods.
  • The embodiments of the present application provide an encoding method and an encoder, which can improve the quality of decoded images on the premise that the encoding bit rate meets the system memory and bandwidth limitations.
  • In a first aspect, an embodiment of the present application provides an encoding method, which includes: performing bit truncation on the block to be coded when the mandatory rate control condition is met; calculating a first cost value corresponding to the bit truncation; when the mandatory rate control condition is met, predicting the block to be coded to determine the prediction residual of the block to be coded; calculating a second cost value corresponding to the prediction according to the prediction residual; and comparing the first cost value with the second cost value to determine the coded bits.
  • The coded bits carry the coding mode of the coded bits, and the coding mode is bit truncation or prediction.
  • The bit truncation may be bit truncation of the pixel value of the currently coded pixel of the block to be coded, or bit truncation of the prediction residual obtained after prediction of the block to be coded.
  • The method of bit truncation is referred to as the forced rate control method of the spatial domain branch.
  • The prediction of the spatial domain branch can be point-level prediction or block-level prediction.
  • block-level prediction uses a coding block as a prediction unit, and pixels in the current coding block cannot be used as reference pixels for subsequent pixels in the current coding block.
  • Point-level prediction uses points as prediction units, and pixels in the current coding block can be used as reference pixels for subsequent pixels in the current coding block.
  • Bit-truncating the pixel value of the currently coded pixel of the block to be coded is called forced rate control mode one of the spatial domain branch.
  • Bit-truncating the prediction residual of the pixel to be coded is called forced rate control mode two of the spatial domain branch.
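  • The following sketch (illustrative Python, not part of the application; the function name and the parameter `num_truncated_bits` are assumptions) shows what such bit truncation could look like for mode one (truncating pixel values) and mode two (truncating prediction residuals):

```python
def truncate_bits(value: int, num_truncated_bits: int) -> int:
    """Drop the least significant bits of a non-negative sample value.

    Illustrative only: fewer bits are kept, so fewer bits need to be
    written to the bitstream (forced rate control, spatial domain branch).
    """
    return value >> num_truncated_bits


# Mode one: truncate the pixel values of the block to be coded.
pixels = [137, 140, 255, 3]
truncated_pixels = [truncate_bits(p, 2) for p in pixels]        # [34, 35, 63, 0]

# Mode two: truncate the prediction residuals of the pixels to be coded
# (sign kept, magnitude truncated; the sign handling is an assumption).
residuals = [-6, 5, 12, -1]
truncated_residuals = [truncate_bits(abs(r), 2) * (1 if r >= 0 else -1)
                       for r in residuals]                      # [-1, 1, 3, 0]
```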
  • the prediction of the block to be coded is also performed, which can be regarded as a mandatory rate control method of the frequency domain branch.
  • the prediction of the frequency domain branch may be block-level prediction.
  • the coding mode of the coded bits can also be marked in the coded bitstream, so that the decoding end can decode according to the coding mode of the coded bits.
  • The method of calculating the cost value may be any of the following: coding rate (rate, R), rate-distortion cost (RDCOST), coding distortion magnitude, and so on.
  • The coding distortion magnitude can be measured by the sum of absolute differences (SAD), mean absolute difference (MAD), sum of squared differences (SSD), sum of squared errors (SSE), mean squared error (MSE), and other measures.
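  • For reference, the distortion measures listed above can be computed directly from the original and reconstructed samples; the helper below is an illustrative sketch (names are my own, not from the application):

```python
def distortion_metrics(original, reconstructed):
    """Compute common coding-distortion measures for two equal-length sample lists."""
    diffs = [o - r for o, r in zip(original, reconstructed)]
    n = len(diffs)
    sad = sum(abs(d) for d in diffs)      # sum of absolute differences
    mad = sad / n                         # mean absolute difference
    sse = sum(d * d for d in diffs)       # sum of squared differences / errors
    mse = sse / n                         # mean squared error
    return {"SAD": sad, "MAD": mad, "SSD/SSE": sse, "MSE": mse}


print(distortion_metrics([10, 20, 30, 40], [11, 18, 30, 37]))
# {'SAD': 6, 'MAD': 1.5, 'SSD/SSE': 14, 'MSE': 3.5}
```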
  • The method further includes: judging whether the mandatory rate control condition is satisfied; when the mandatory rate control condition is not satisfied, determining a first coding quantization parameter (QP); pre-encoding the block to be coded with the first coding QP in multiple prediction modes to obtain the precoding result information corresponding to each prediction mode; selecting the best prediction mode from the multiple prediction modes; adjusting the first coding QP by using the coding result information corresponding to the best prediction mode to obtain a second coding QP; and, in the best prediction mode, actually encoding the block to be coded with the second coding QP.
  • the precoding result information includes at least one of the following: the number of coded bits of the block to be coded under the first coding QP, the coding distortion magnitude of the block to be coded under the first coding QP, and the block to be coded The coding rate-distortion cost under the first coding QP, the prediction residual of the block to be coded, and the texture complexity of the block to be coded.
  • In the embodiment of the present application, when the system does not need to forcibly control the code rate, multiple prediction modes may be used to pre-encode the block to be coded so as to determine the best prediction mode, and the precoding result information corresponding to the best prediction mode is then used to adjust the first encoding QP; the adjusted encoding QP is used in the best prediction mode to actually encode the block to be coded. This makes reasonable use of the code rate to transmit image data of better quality, thereby improving image compression performance.
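  • A minimal sketch of this non-forced path, assuming placeholder callables `precode`, `cost_of`, `adjust_qp`, and `real_encode` for the pre-coding, cost evaluation, QP adjustment, and actual coding steps (none of these names come from the application):

```python
def encode_block(block, first_qp, prediction_modes,
                 precode, cost_of, adjust_qp, real_encode):
    """Pre-encode in every prediction mode under the first coding QP, pick the
    best mode, adjust the QP with that mode's pre-coding result information,
    then actually encode the block in the best mode with the adjusted QP."""
    results = {mode: precode(block, mode, first_qp) for mode in prediction_modes}
    best_mode = min(results, key=lambda mode: cost_of(results[mode]))
    second_qp = adjust_qp(first_qp, results[best_mode])
    return real_encode(block, best_mode, second_qp)
```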
  • The method further includes: outputting the coded bits to the code stream buffer. The judging whether the mandatory code rate control condition is satisfied includes: judging whether the mandatory code rate control condition is satisfied according to the fullness of the code stream buffer. The adjusting the first encoding QP by using the encoding result information corresponding to the best prediction mode includes: adjusting the first encoding QP according to the fullness of the code stream buffer and the encoding result information corresponding to the best prediction mode.
  • Whether the code rate needs to be forcibly controlled is determined according to the fullness of the code stream buffer.
  • Forcibly controlling the code rate can reduce the output code rate of the block to be coded, thereby preventing the code stream buffer from overflowing and ensuring that the actual code rate is less than the target code rate.
  • The fullness of the code stream buffer can also be used to adjust the encoding QP: if the code stream buffer is relatively full, the QP is increased to prevent the code stream buffer from overflowing; if the code stream buffer is relatively empty, the QP is decreased so that the encoded image carries more image information and the image compression quality is ensured.
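  • One possible (simplified) way to derive the QP adjustment from buffer fullness is sketched below; the watermarks and step size are assumptions for illustration:

```python
def adjust_qp_by_buffer(qp, buffer_bits, buffer_capacity,
                        high_watermark=0.9, low_watermark=0.1, step=1):
    """Increase the QP when the code stream buffer is close to full (to avoid
    overflow) and decrease it when the buffer is close to empty (so the coded
    image can carry more information)."""
    fullness = buffer_bits / buffer_capacity
    if fullness > high_watermark:
        return qp + step
    if fullness < low_watermark:
        return max(qp - step, 0)
    return qp
```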
  • Comparing the first cost value with the second cost value to determine the coded bits specifically includes: when the first cost value is less than the second cost value, determining that the coded bits are the coded bits after the bit truncation.
  • That is, the output coded bits are the coded bits after bit truncation.
  • Comparing the first cost value with the second cost value to determine the coded bits specifically includes: when the first cost value is greater than the second cost value, determining that the coded bits are the coded bits obtained by entropy coding the prediction residual.
  • When the block-level prediction of the frequency domain branch is used to control the code rate, the image loss is small, so the block-level prediction of the frequency domain branch is preferably used to control the code rate; the output coded bits are then the coded bits obtained by entropy coding the residual information output by the block-level prediction of the frequency domain branch.
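  • The selection rule described in the preceding paragraphs amounts to keeping the branch with the smaller cost value; a sketch under assumed inputs (cost values and candidate bitstreams computed elsewhere) could look like this:

```python
def choose_forced_rc_branch(first_cost, truncated_bits,
                            second_cost, predicted_bits):
    """Return the coding mode and coded bits of the cheaper branch.

    first_cost / truncated_bits : cost and coded bits of the bit-truncation
                                  (spatial domain) branch.
    second_cost / predicted_bits: cost and coded bits of entropy coding the
                                  prediction residual (frequency domain branch).
    Tie handling is not restricted by the application; keeping the frequency
    domain branch on a tie is an assumption of this sketch.
    """
    if first_cost < second_cost:
        return "bit_truncation", truncated_bits
    return "prediction", predicted_bits
```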
  • The method of directly performing entropy coding on the residual information of the block to be coded can be regarded as performing frequency domain transformation on the prediction residual and, after obtaining the N frequency domain transform coefficients of the prediction residual, setting all N frequency domain transform coefficients to zero, that is, discarding all transform coefficients. In the embodiment of the present application, this method is referred to as discarding all transform coefficients of the frequency domain branch.
  • Coefficient discarding of the frequency domain branch is not limited to discarding all transform coefficients; the frequency domain branch may also partially discard transform coefficients to forcibly control the code rate.
  • Full discarding of transform coefficients and partial discarding of transform coefficients are collectively referred to as coefficient discarding of the frequency domain branch.
  • Calculating the second cost value corresponding to the prediction according to the prediction residual includes: performing frequency domain transformation on the prediction residual to obtain N frequency domain transform coefficients of the prediction residual, where N is a positive integer; setting M frequency domain transform coefficients among the N frequency domain transform coefficients to zero to obtain N frequency domain transform coefficients after zeroing, where M is a positive integer less than N; and calculating the second cost value corresponding to zeroing the M frequency domain transform coefficients. Comparing the first cost value with the second cost value to determine the coded bits specifically includes: when the first cost value is greater than the second cost value, determining that the coded bits are the coded bits obtained by entropy coding the N frequency domain transform coefficients after zeroing.
  • The calculation method of the second cost value can be the coding rate, RDCOST, coding distortion magnitude, and so on.
  • the magnitude of coding distortion can be measured by measures such as SAD, MAD, SSD, SSE, and MSE.
  • the magnitude of the coding distortion may specifically be the difference between the residual information before the transformation and the residual information after the inverse transformation.
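  • A minimal sketch of coefficient zeroing and of a distortion-based second cost value, assuming placeholder `transform`/`inverse_transform` callables and assuming (purely for illustration) that the M highest-frequency coefficients are the ones zeroed:

```python
def zero_coefficients(coefficients, m):
    """Zero M of the N frequency domain transform coefficients.
    Zeroing the last (highest-frequency) M coefficients is an assumption."""
    n = len(coefficients)
    return coefficients[:n - m] + [0] * m


def second_cost_value(residual, transform, inverse_transform, m):
    """Illustrative second cost value: distortion (SSE here) between the
    residual before the transform and the residual reconstructed after
    zeroing M coefficients and applying the inverse transform."""
    coefficients = transform(residual)           # N coefficients
    zeroed = zero_coefficients(coefficients, m)  # N coefficients, M of them 0
    reconstructed = inverse_transform(zeroed)
    return sum((a - b) ** 2 for a, b in zip(residual, reconstructed))
```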
  • The output coded bits are either the coded bits after bit truncation or the coded bits obtained by entropy coding the residual information output by the prediction.
  • The embodiment of the present application does not limit which mandatory code rate control mode is selected when the cost values are equal.
  • The coefficient discarding of the frequency domain branch only operates on the frequency domain transform coefficients of the residual information. Even if many transform coefficients are discarded, the prediction information of the block to be coded can still retain part of the texture information of the image, so discarding part of the transform coefficients (that is, setting them to zero) can reduce the code rate while ensuring the quality of the decoded image.
  • Before the M frequency domain transform coefficients among the N frequency domain transform coefficients are zeroed, the method may further include: quantizing the N frequency domain transform coefficients to obtain N quantized frequency domain transform coefficients; the zeroing then includes: zeroing M frequency domain transform coefficients among the N quantized frequency domain transform coefficients.
  • Alternatively, before the second cost value corresponding to zeroing the M frequency domain transform coefficients is calculated, the method may further include: quantizing the N frequency domain transform coefficients after zeroing; calculating the second cost value corresponding to zeroing the M frequency domain transform coefficients then includes: calculating the second cost value of the zeroed N frequency domain transform coefficients after quantization.
  • In the latter case the transform coefficients are set to zero and then quantized, which can reduce the number of transform coefficients involved in quantization.
  • The former (quantizing first and then zeroing) can retain more frequency components, since the number of coefficients zeroed when quantization precedes zeroing can be less than or equal to the number of coefficients zeroed when zeroing precedes quantization, which helps to improve the image quality for certain image content.
  • The method of quantizing the transform coefficients and then discarding some of the coefficients may be called forced rate control mode one of the frequency domain branch, and the method of discarding some of the coefficients and then quantizing may be called forced rate control mode two of the frequency domain branch. When forced rate control of the frequency domain branch is performed, the coding costs of mode one and mode two can be compared, and the mode with the smaller coding cost is selected as the forced rate control mode of the frequency domain branch.
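  • The two orderings could be compared as in the sketch below; the uniform quantizer, the choice of which coefficients to drop, and the use of a single M for both modes are assumptions of this illustration (in practice the number of coefficients zeroed may differ between the two modes, as noted above):

```python
def quantize(coefficients, qstep):
    """Placeholder uniform quantizer (illustration only)."""
    return [int(c / qstep) for c in coefficients]


def frequency_mode_one(coefficients, qstep, m):
    """Mode one: quantize first, then zero the last M coefficients."""
    quantized = quantize(coefficients, qstep)
    return quantized[:len(quantized) - m] + [0] * m


def frequency_mode_two(coefficients, qstep, m):
    """Mode two: zero the last M coefficients first, then quantize."""
    zeroed = coefficients[:len(coefficients) - m] + [0] * m
    return quantize(zeroed, qstep)


def choose_frequency_mode(coefficients, qstep, m, cost_of):
    """Pick whichever ordering yields the smaller coding cost."""
    candidates = {"mode_one": frequency_mode_one(coefficients, qstep, m),
                  "mode_two": frequency_mode_two(coefficients, qstep, m)}
    return min(candidates.items(), key=lambda item: cost_of(item[1]))
```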
  • The method further includes: selecting a target prediction mode from multiple prediction modes based on a preset cost calculation rule, where the target prediction mode is the prediction mode with the smallest cost value among the multiple prediction modes, and different prediction modes correspond to different prediction directions and/or different prediction value calculation methods.
  • The predicting the block to be coded and determining the prediction residual of the block to be coded then includes: when the mandatory rate control condition is satisfied, performing block-level prediction on the block to be coded in the target prediction mode, and determining the prediction residual of the block to be coded.
  • In other words, the embodiment of the present application can also analyze multiple prediction modes and select the prediction mode with the smallest coding cost to perform block-level prediction on the block to be coded, so as to reduce coding distortion and ensure the quality of image coding.
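  • Selecting the target prediction mode by minimum cost, as described above, could be organized as follows; `predict` and `cost_rule` stand in for the application's prediction and preset cost calculation rule and are assumptions of this sketch:

```python
def select_target_prediction_mode(block, prediction_modes, predict, cost_rule):
    """Return the prediction mode with the smallest cost value for the block.

    `predict(block, mode)` returns the prediction residual for the mode;
    `cost_rule(residual)` implements the preset cost calculation rule.
    """
    costs = {mode: cost_rule(predict(block, mode)) for mode in prediction_modes}
    return min(costs, key=costs.get)
```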
  • In a second aspect, an embodiment of the present application provides an encoder, including: a bit truncation module, configured to perform bit truncation on the block to be coded when the mandatory rate control condition is met; a first cost calculation module, configured to calculate the first cost value corresponding to the bit truncation; a prediction module, configured to predict the block to be coded when the mandatory rate control condition is met and determine the prediction residual of the block to be coded; a second cost calculation module, configured to calculate the second cost value corresponding to the prediction according to the prediction residual; and a comparison determination module, configured to compare the first cost value with the second cost value to determine the coded bits.
  • The coded bits carry the coding mode of the coded bits, and the coding mode is bit truncation or prediction.
  • the coding mode of the coded bits can also be marked in the coded bitstream, so that the decoding end can decode according to the coding mode of the coded bits.
  • the method of calculating the cost value may include any one of the following: encoding rate, RDCOST, encoding distortion size, and so on.
  • the magnitude of coding distortion can be measured by measures such as SAD, MAD, SSD, SSE, and MSE.
  • The encoder further includes: a judging module, configured to judge whether the mandatory code rate control condition is satisfied; a first code rate control module, configured to determine the first coding quantization parameter (QP) when the mandatory code rate control condition is not satisfied; a precoding module, configured to pre-encode the block to be coded with the first coding QP in multiple prediction modes to obtain the precoding result information corresponding to each prediction mode; a selection module, configured to select the best prediction mode from the multiple prediction modes; a second code rate control module, configured to adjust the first coding QP by using the coding result information corresponding to the best prediction mode to obtain the second coding QP; and an encoding module, configured to actually encode the block to be coded with the second coding QP in the best prediction mode.
  • The encoder further includes: an output module, configured to output the coded bits to the code stream buffer. The judging module is specifically configured to judge whether the mandatory rate control condition is satisfied according to the fullness of the code stream buffer. The second code rate control module is specifically configured to adjust the first encoding QP according to the fullness of the code stream buffer and the coding result information corresponding to the best prediction mode.
  • The comparison determination module is specifically configured to determine, when the first cost value is less than the second cost value, that the coded bits are the coded bits after the bit truncation.
  • The comparison determination module is specifically configured to determine, when the first cost value is greater than the second cost value, that the coded bits are the coded bits obtained by entropy coding the prediction residual.
  • The second cost calculation module includes: a transform unit, configured to perform frequency domain transformation on the prediction residual to obtain N frequency domain transform coefficients of the prediction residual, where N is a positive integer; a zeroing unit, configured to zero M frequency domain transform coefficients among the N frequency domain transform coefficients to obtain N frequency domain transform coefficients after zeroing, where M is a positive integer less than N; and a cost calculation unit, configured to calculate the second cost value corresponding to zeroing the M frequency domain transform coefficients. The comparison determination module is specifically configured to determine, when the first cost value is greater than the second cost value, that the coded bits are the coded bits obtained by entropy coding the N frequency domain transform coefficients after zeroing.
  • The calculation method of the second cost value may be to calculate the coding distortion magnitude, which may specifically be the difference between the residual information before the transformation and the residual information after the inverse transformation.
  • The second cost calculation module may further include a quantization unit, configured to quantize the N frequency domain transform coefficients after the transform unit performs frequency domain transformation on the prediction residual to obtain the N frequency domain transform coefficients and before the zeroing unit zeroes the M frequency domain transform coefficients among the N frequency domain transform coefficients, so as to obtain N quantized frequency domain transform coefficients; the zeroing unit is then specifically configured to zero M frequency domain transform coefficients among the N quantized frequency domain transform coefficients.
  • Alternatively, the second cost calculation module may further include a quantization unit, configured to quantize the N frequency domain transform coefficients after zeroing, after the zeroing unit zeroes the M frequency domain transform coefficients among the N frequency domain transform coefficients and before the cost calculation unit calculates the second cost value corresponding to zeroing the M frequency domain transform coefficients; the cost calculation unit is then specifically configured to calculate the second cost value of the zeroed N frequency domain transform coefficients after quantization.
  • The encoder further includes: a pre-analysis module, configured to select a target prediction mode from multiple prediction modes based on a preset cost calculation rule, where the target prediction mode is the prediction mode with the smallest cost value among the multiple prediction modes, and different prediction modes correspond to different prediction directions and/or different prediction value calculation methods; the block-level prediction module is specifically configured to: when the mandatory rate control condition is met, perform block-level prediction on the block to be coded in the target prediction mode, and determine the prediction residual of the block to be coded.
  • In a third aspect, an embodiment of the present application provides an encoder, including a processor and a transmission interface; the processor is configured to call software instructions stored in a memory to execute the encoding method provided by the first aspect or any possible implementation of the first aspect of the embodiments of the present application.
  • the aforementioned encoder further includes: the aforementioned memory.
  • In a fourth aspect, the embodiments of the present application provide a computer-readable storage medium that stores instructions which, when run on a computer or processor, cause the computer or processor to execute the encoding method provided by the first aspect or any possible implementation of the first aspect of the embodiments of the present application.
  • In a fifth aspect, the embodiments of the present application provide a computer program product containing instructions which, when run on a computer or processor, cause the computer or processor to execute the encoding method provided by the first aspect or any possible implementation of the first aspect of the embodiments of the present application.
  • The encoder provided in the second aspect, the encoder provided in the third aspect, the computer storage medium provided in the fourth aspect, and the computer program product provided in the fifth aspect are all used to execute the encoding method provided in the first aspect. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the encoding method provided in the first aspect, which will not be repeated here.
  • FIG. 1 is a block diagram of a video encoding and decoding system applicable to this application;
  • FIG. 2 is a block diagram of a video decoding system applicable to this application.
  • FIG. 3 is a schematic structural diagram of a video decoding device provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of a block-level prediction mode provided by an embodiment of this application;
  • FIG. 5 is a schematic diagram of several point-level prediction modes provided by an embodiment of this application.
  • FIG. 6 is a schematic flowchart of an encoding method provided by an embodiment of this application.
  • FIG. 7 is a schematic flowchart of another encoding method provided by an embodiment of this application.
  • FIG. 8 is a schematic diagram of a space-frequency domain coding architecture applicable to an embodiment of this application.
  • FIG. 9 is a schematic diagram of another space-frequency domain coding architecture applicable to an embodiment of this application.
  • FIG. 10 is a schematic diagram of a spatial coding architecture applicable to an embodiment of this application.
  • FIG. 11 is a schematic diagram of a frequency domain coding architecture applicable to an embodiment of this application.
  • FIG. 12 is a schematic diagram of another frequency domain coding architecture applicable to an embodiment of this application.
  • FIG. 13 is a schematic structural diagram of an encoder provided by an embodiment of the application.
  • Video coding generally refers to processing a sequence of pictures that form a video or video sequence.
  • the terms "picture”, "frame” or “image” can be used as synonyms.
  • Video encoding is performed on the source side, and usually includes processing (for example, by compressing) the original video picture to reduce the amount of data required to represent the video picture, so as to store and/or transmit more efficiently.
  • Video decoding is performed on the destination side and usually involves inverse processing relative to the encoder to reconstruct the video picture.
  • a video sequence includes a series of pictures, the pictures are further divided into slices, and the slices are further divided into blocks.
  • Video coding is performed in units of blocks.
  • The concept of blocks is further expanded; for example, the macroblock (MB) is used, and the macroblock can be further divided into multiple prediction blocks (partitions) that can be used for predictive coding.
  • In high efficiency video coding (HEVC), basic concepts such as coding unit (CU), prediction unit (PU), and transform unit (TU) are adopted.
  • the coding block to be processed in the current image can be referred to as the current coding block or the coding block to be processed.
  • a reference block is a block that provides a reference signal for the current block, where the reference signal represents the pixel value in the coded block.
  • the block in the reference image that provides the prediction signal for the current block may be a prediction block, where the prediction signal represents a pixel value or a sample value or a sample signal in the prediction block.
  • the best reference block is found. This best reference block will provide prediction for the current block, and this block is called a prediction block.
  • the original video picture can be reconstructed, that is, the reconstructed video picture has the same quality as the original video picture (assuming that there is no transmission loss or other data loss during storage or transmission).
  • Quantization is performed for further compression to reduce the amount of data required to represent the video picture, and the decoder side cannot completely reconstruct the video picture; that is, the quality of the reconstructed video picture is lower or worse than that of the original video picture.
  • Fig. 1 exemplarily shows a schematic block diagram of a video encoding and decoding system 10 applied in an embodiment of the present application.
  • the video encoding and decoding system 10 may include a source device 12 and a destination device 14.
  • the source device 12 generates encoded video data. Therefore, the source device 12 may be referred to as a video encoding device.
  • the destination device 14 can decode the encoded video data generated by the source device 12, and therefore, the destination device 14 can be referred to as a video decoding device.
  • Various implementations of source device 12, destination device 14, or both may include one or more processors and memory coupled to the one or more processors.
  • The memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures accessible by a computer, as described herein.
  • The source device 12 and the destination device 14 may include various devices, including desktop computers, mobile computing devices, notebook (for example, laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, on-board computers, wireless communication equipment, or the like.
  • Although FIG. 1 shows the source device 12 and the destination device 14 as separate devices, a device embodiment may also include both the source device 12 and the destination device 14, or the functionality of both, that is, the source device 12 or its corresponding functionality and the destination device 14 or its corresponding functionality.
  • In such embodiments, the same hardware and/or software, separate hardware and/or software, or any combination thereof may be used to implement the source device 12 or the corresponding functionality and the destination device 14 or the corresponding functionality.
  • the source device 12 and the destination device 14 can communicate with each other via a link 13, and the destination device 14 can receive encoded video data from the source device 12 via the link 13.
  • the link 13 may include one or more media or devices capable of moving the encoded video data from the source device 12 to the destination device 14.
  • link 13 may include one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real time.
  • the source device 12 may modulate the encoded video data according to a communication standard (for example, a wireless communication protocol), and may transmit the modulated video data to the destination device 14.
  • the one or more communication media may include wireless and/or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
  • RF radio frequency
  • the one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the Internet).
  • the one or more communication media may include routers, switches, base stations, or other devices that facilitate communication from source device 12 to destination device 14.
  • the source device 12 includes an encoder 20, and optionally, the source device 12 may also include a picture source 16, a picture preprocessor 18, and a communication interface 22.
  • the encoder 20, the picture source 16, the picture preprocessor 18, and the communication interface 22 may be hardware components in the source device 12, or may be software programs in the source device 12. They are described as follows:
  • The picture source 16, which may include or be any type of picture capture device, for example for capturing real-world pictures, and/or any type of device for generating pictures or comments (for screen content encoding, some text on the screen is also considered part of the picture or image to be encoded), for example a computer graphics processor for generating computer animation pictures, or any type of device for acquiring and/or providing real-world pictures or computer animation pictures (for example, screen content or virtual reality (VR) pictures), and/or any combination thereof (for example, augmented reality (AR) pictures).
  • the picture source 16 may be a camera for capturing pictures or a memory for storing pictures.
  • the picture source 16 may also include any type (internal or external) interface for storing previously captured or generated pictures and/or acquiring or receiving pictures.
  • When the picture source 16 is a camera, the picture source 16 may be, for example, a local camera or a camera integrated in the source device; when the picture source 16 is a memory, the picture source 16 may be, for example, a local memory or a memory integrated in the source device.
  • When the picture source 16 includes an interface, the interface may be, for example, an external interface that receives pictures from an external video source.
  • The external video source is, for example, an external picture capture device such as a camera, an external memory, or an external picture generation device such as an external computer graphics processor, computer, or server.
  • the interface can be any type of interface according to any proprietary or standardized interface protocol, such as a wired or wireless interface, and an optical interface.
  • the picture can be regarded as a two-dimensional array or matrix of picture elements.
  • the picture transmitted from the picture source 16 to the picture processor may also be referred to as original picture data 17.
  • the picture preprocessor 18 is configured to receive the original picture data 17 and perform preprocessing on the original picture data 17 to obtain the preprocessed picture 19 or the preprocessed picture data 19.
  • the pre-processing performed by the picture pre-processor 18 may include retouching, color format conversion, toning, or denoising.
  • The encoder 20 (or video encoder 20) is configured to receive the pre-processed picture data 19 and process the pre-processed picture data 19 using a relevant prediction mode (such as the prediction modes in the various embodiments herein), thereby providing the encoded picture data 21 (the structural details of the encoder 20 will be further described below based on FIG. 3).
  • the encoder 20 may be used to implement the various embodiments described below to implement the encoding method described in this application.
  • the communication interface 22 can be used to receive the encoded picture data 21, and can transmit the encoded picture data 21 to the destination device 14 or any other device (such as a memory) through the link 13 for storage or direct reconstruction.
  • the other device can be any device used for decoding or storage.
  • the communication interface 22 may be used, for example, to encapsulate the encoded picture data 21 into a suitable format, such as a data packet, for transmission on the link 13.
  • the destination device 14 includes a decoder 30, and optionally, the destination device 14 may also include a communication interface 28, a picture post-processor 32, and a display device 34. They are described as follows:
  • the communication interface 28 may be used to receive the encoded picture data 21 from the source device 12 or any other source, for example, a storage device, and the storage device is, for example, an encoded picture data storage device.
  • the communication interface 28 can be used to transmit or receive the encoded picture data 21 via the link 13 between the source device 12 and the destination device 14 or via any type of network.
  • the link 13 is, for example, a direct wired or wireless connection.
  • Any type of network is, for example, a wired or wireless network or any combination thereof, or any type of private network and public network, or any combination thereof.
  • the communication interface 28 may be used, for example, to decapsulate the data packet transmitted by the communication interface 22 to obtain the encoded picture data 21.
  • Both the communication interface 28 and the communication interface 22 can be configured as one-way or two-way communication interfaces, and can be used, for example, to send and receive messages to establish a connection, and to acknowledge and exchange any other information related to the communication link and/or to the data transfer, such as the transfer of encoded picture data.
  • the decoder 30 (or called the decoder 30) is configured to receive the encoded picture data 21 and provide the decoded picture data 31 or the decoded picture 31.
  • the picture post processor 32 is configured to perform post-processing on the decoded picture data 31 (also referred to as reconstructed picture data) to obtain post-processed picture data 33.
  • the post-processing performed by the picture post-processor 32 may include: color format conversion (for example, conversion from YUV format to RGB format), toning, trimming or resampling, or any other processing, and can also be used to convert post-processed picture data 33 Transmitted to the display device 34.
  • the display device 34 is used to receive the post-processed picture data 33 to display the picture to, for example, a user or a viewer.
  • the display device 34 may be or may include any type of display for presenting the reconstructed picture, for example, an integrated or external display or monitor.
  • the display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS), Digital light processor (digital light processor, DLP) or any other type of display.
  • The source device 12 and the destination device 14 may include any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, mobile phone, smartphone, tablet or tablet computer, video camera, desktop computer, set-top box, television, camera, in-vehicle device, display device, digital media player, video game console, video streaming device (such as a content service server or content distribution server), broadcast receiver device, broadcast transmitter device, and the like, and may use no operating system or any type of operating system.
  • Both the encoder 20 and the decoder 30 can be implemented as any of various suitable circuits, for example, one or more microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), discrete logic, hardware, or any combination thereof.
  • If the technology is partially implemented in software, the device can store the software instructions in a suitable non-transitory computer-readable storage medium and can use one or more processors to execute the instructions in hardware to carry out the technology of the present disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, and so on) can be regarded as one or more processors.
  • The video encoding and decoding system 10 shown in FIG. 1 is only an example, and the technology of this application can be applied to video coding settings (for example, video encoding or video decoding) that do not necessarily include any data communication between the encoding device and the decoding device.
  • the data can be retrieved from local storage, streamed on the network, etc.
  • the video encoding device can encode data and store the data to the memory, and/or the video decoding device can retrieve the data from the memory and decode the data.
  • encoding and decoding are performed by devices that do not communicate with each other but only encode data to and/or retrieve data from the memory and decode the data.
  • FIG. 2 is an explanatory diagram of an example of a video coding system 40 including an encoder and/or a decoder according to an exemplary embodiment.
  • The video coding system 40 can implement combinations of the various technologies in the embodiments of the present application.
  • In the illustrated implementation, the video coding system 40 may include an imaging device 41, an encoder 20, a decoder 30 (and/or a video encoder/decoder implemented by the processing circuit 46), an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
  • the imaging device 41, the antenna 42, the processing circuit 46, the encoder 20, the decoder 30, the processor 43, the memory 44, and/or the display device 45 can communicate with each other.
  • the encoder 20 and the decoder 30 are used to illustrate the video coding system 40, in different examples, the video coding system 40 may include only the encoder 20 or only the decoder 30.
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • the display device 45 may be used to present video data.
  • the processing circuit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • The video coding system 40 may also include an optional processor 43, and the optional processor 43 may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the processor 43 may be implemented by general-purpose software, an operating system, and the like.
  • The memory 44 may be any type of memory, such as volatile memory (for example, static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or non-volatile memory (for example, flash memory, etc.). In a non-limiting example, the memory 44 may be implemented by cache memory.
  • antenna 42 may be used to receive an encoded bitstream of video data.
  • The encoded bitstream may include data, indicators, index values, mode selection data, and the like related to the encoded video frames discussed herein, such as data related to coding partitions (for example, transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the coding partitions).
  • the video coding system 40 may also include a decoder 30 coupled to the antenna 42 and used to decode the encoded bitstream.
  • the display device 45 is used to present video frames.
  • the decoder 30 may be used to perform the reverse process.
  • the decoder 30 can be used to receive and parse such syntax elements, and decode related video data accordingly.
  • the encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, the decoder 30 may parse such syntax elements and decode the related video data accordingly.
  • the video image encoding method described in the embodiment of the application occurs at the encoder 20, and the video image decoding method described in the embodiment of the application occurs at the decoder 30.
  • The encoder 20 and the decoder 30 in the embodiments of the application may be, for example, encoders/decoders corresponding to video standard protocols such as H.263, H.264, HEVC, MPEG-2, MPEG-4, VP8, VP9, or next-generation video standard protocols (such as H.266).
  • FIG. 3 is a schematic structural diagram of a video decoding device 300 (for example, a video encoding device 300 or a video decoding device 300) provided by an embodiment of the present application.
  • the video coding device 300 is suitable for implementing the embodiments described herein.
  • the video coding device 300 may be a video decoder (for example, the decoder 30 of FIG. 1) or a video encoder (for example, the encoder 20 of FIG. 1).
  • the video coding device 300 may be one or more components of the decoder 30 in FIG. 1 or the encoder 20 in FIG. 1 described above.
  • The video decoding device 300 includes: an ingress port 310 and a receiver unit 320 (or simply referred to as the receiver 320) for receiving data, a processor, logic unit, or central processing unit 330 for processing data, a transmitter unit 340 (or simply referred to as the transmitter 340) and an egress port 350 for transmitting data, and a memory 360 for storing data.
  • The video decoding device 300 may also include optical-to-electrical conversion components and electrical-to-optical components coupled with the ingress port 310, the receiver unit 320, the transmitter unit 340, and the egress port 350 for the exit or entrance of optical or electrical signals.
  • the processor 330 is implemented by hardware and software.
  • the processor 330 may be implemented as one or more CPU chips, cores (for example, multi-core processors), FPGAs, ASICs, and DSPs.
  • the processor 330 communicates with the ingress port 310, the receiver unit 320, the transmitter unit 340, the egress port 350, and the memory 360.
  • the processor 330 includes a decoding module 370 (for example, an encoding module 370).
  • the encoding module 370 implements the embodiments disclosed herein to implement the encoding method provided in the embodiments of the present application. For example, the encoding module 370 implements, processes, or provides various encoding operations.
  • the encoding module 370 provides a substantial improvement to the function of the video decoding device 300, and affects the conversion of the video decoding device 300 to different states.
  • the encoding module 370 is implemented by instructions stored in the memory 360 and executed by the processor 330.
  • the memory 360 includes one or more magnetic disks, tape drives, and solid-state hard disks, and can be used as an overflow data storage device for storing programs when these programs are selectively executed, and storing instructions and data read during program execution.
  • the memory 360 may be volatile and/or nonvolatile, and may be a read-only memory, a random access memory, a ternary content-addressable memory (TCAM), and/or a static random access memory.
  • Prediction can be divided into block-level prediction and point-level prediction.
  • Block-level prediction: the coding block is used as the prediction unit, and pixels in the current coding block cannot be used as reference pixels for subsequent pixels in the current coding block.
  • Point-level prediction: the point is used as the prediction unit, and pixels in the current coding block can be used as reference pixels for subsequent pixels in the current coding block.
  • the pixel value of the reference pixel mentioned in the above block-level prediction or point-level prediction can be used as the reference value of a certain pixel in the current coding block to calculate the predicted value of a certain pixel in the current coding block.
  • the pixel value of the aforementioned reference pixel may be the reconstructed value of the reference pixel.
  • the block-level prediction is further explained below in conjunction with the schematic diagram of the block-level prediction mode provided in FIG. 4, and the point-level prediction is further explained in conjunction with the schematic diagrams of several prediction modes provided in FIG. 5.
  • each coded block contains 10 pixels.
  • A0, A1, A2, A3, and A4 in Figure 4 and Figure 5 are 5 of the pixels contained in the coding block A (the other 5 pixels are not shown), B1 and B2 are 2 of the pixels contained in the coding block B (the other 8 pixels are not shown), and 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 are the 10 pixels included in the current coding block.
  • the arrow in the figure is used to indicate the prediction direction, that is, the starting point of the arrow is the prediction reference pixel, and the pixels that the arrow passes through and the pixel at the end point of the arrow are the pixels to be predicted.
  • the direction of the arrow in the figure can represent the prediction direction of the prediction mode.
  • pixel B1 is the reference pixel of pixel 5
  • pixel B2 is the reference pixel of pixels 0 and 6
  • pixel A0 is the reference pixel of pixels 1 and 7
  • pixel A1 is the reference pixel of pixels 2 and 8
  • pixel A2 is the reference pixel of pixels 3 and 9
  • the pixel A3 is the reference pixel of pixel 4, etc.
  • the pixels in the current coding block cannot be used as reference pixels for subsequent pixels in the current coding block.
  • pixel 0 cannot be used as the reference pixel of pixel 6, and pixel 1 cannot be used as the reference pixel of pixel 7, and so on.
  • pixel 0 is the reference pixel of pixel 5
  • pixel 1 is the reference pixel of pixel 6
  • pixel 2 is the reference pixel of pixel 7
  • pixel 3 is the reference pixel of pixel 8
  • pixel 4 is the reference pixel of pixel 9.
  • It can be seen that the pixels in the current coding block can be used as reference pixels for subsequent pixels in the current coding block, which is point-level prediction.
  • pixel 0 is the reference pixel of pixel 1
  • pixel 1 is the reference pixel of pixel 2
  • pixel 7 is the reference pixel of pixel 8, and so on. It can be seen that the pixels in the current coding block can be used as reference pixels for subsequent pixels in the current coding block, which is point-level prediction.
  • the prediction directions of the two point-level predictions a and b in Figure 5 are different.
  • the prediction direction of a is the vertical direction
  • the prediction direction of b is the horizontal direction.
  • the prediction direction of block-level prediction in Fig. 4 is the lower right diagonal direction.
  • the predicted value calculation method refers to how to calculate the predicted value of the current coded pixel according to the value of the reference pixel.
  • the value of the reference pixel may be directly used as the predicted value of the current encoded pixel, or the average value of the value of the reference pixel and the values of other pixels around the reference pixel may be used as the predicted value of the current encoded pixel.
  • the embodiment of the present application does not limit the specific prediction direction and prediction value calculation method of the prediction mode.
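  • To make the distinction concrete, the following Python sketch (illustrative only; the 2x5 block layout, the vertical prediction direction, and the use of the reconstructed value directly as the predicted value are assumptions, not the specific modes of FIG. 4 and FIG. 5) predicts the same block once with block-level prediction and once with point-level prediction and compares the prediction residuals:

```python
import numpy as np

def predict_block_level(block, top_row):
    """Block-level prediction (vertical direction assumed): every pixel is
    predicted only from reference pixels outside the current coding block."""
    pred = np.empty_like(block)
    for r in range(block.shape[0]):
        pred[r, :] = top_row            # reference row above the block
    return pred

def predict_point_level(block, top_row):
    """Point-level prediction (vertical direction assumed): a pixel may use the
    reconstructed pixel directly above it, even if that pixel lies inside the
    current coding block (lossless reconstruction assumed for simplicity)."""
    pred = np.empty_like(block)
    recon = block.copy()                # assume reconstruction equals original
    for r in range(block.shape[0]):
        pred[r, :] = top_row if r == 0 else recon[r - 1, :]
    return pred

block = np.array([[10, 12, 11, 13, 12],
                  [12, 14, 13, 15, 14]])     # 2x5 block, 10 pixels as in FIG. 4/5
top_row = np.array([50, 50, 50, 50, 50])     # reconstructed row of the block above

print(block - predict_block_level(block, top_row))  # both rows keep large residuals
print(block - predict_point_level(block, top_row))  # second-row residuals shrink:
                                                    # in-block pixels serve as references
```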
  • FIG. 6 shows a schematic flowchart of an encoding method provided by an embodiment of the present application.
  • the encoding method can include at least the following steps:
  • the mandatory code rate control condition may be that the full state of the code stream buffer reaches a certain level. For example, when the full state of the code stream buffer reaches 90%, mandatory rate control is required.
  • the mandatory code rate control condition is that the full state of the code stream buffer is greater than or equal to 90%.
  • the full state of the code stream buffer is expressed by a percentage. The percentage is the ratio of the current used capacity of the code stream buffer to the total capacity. It is not limited to expressing the full state of the code stream buffer by a percentage, and in specific implementation, the full state of the code stream buffer may also be expressed by a numerical value, which is not limited in the embodiment of the present application.
  • the code stream buffer can be a buffer memory, which is used to buffer the coded data so that the coded data is output at a stable code rate. It can be seen that the code stream buffer can also have other names in different standards, such as video buffer checker, coded image buffer, and so on. The embodiment of the present application does not limit the specific name of the code stream buffer.
  • the mandatory rate control condition can also be the result of the combined effect of the full state of the bitstream buffer and the size of the prediction residual. For example, when the full state of the bitstream buffer reaches a certain level (for example, greater than or equal to 90%) but the prediction residuals are all less than a certain threshold (for example, all zeros), it is still determined that the mandatory rate control condition is not satisfied.
  • the mandatory code rate control condition may be that the full state of the code stream buffer reaches a certain level and the prediction residual is greater than a certain threshold.
  • bit truncation may be bit truncation of the pixel value of the currently coded pixel of the block to be coded.
  • each component requires 8 bits to transmit.
  • Bit interception can be to intercept some of these 8 bits.
  • the bits that are intercepted are the bits that are reserved and transmitted.
  • the number of bits to be intercepted can be determined by the full state of the code stream buffer. The fuller the current bit stream buffer, the more urgently the system needs to reduce the bit rate, and the smaller the number of bits to be intercepted.
  • the high-order bits can be intercepted, that is, the low-order bits are set to zero, and the high-order bits are reserved for transmission, so as to achieve the effect of reducing the code rate.
  • the low-order bits can be set to zero, and the high-order bits can be reserved.
  • the bits of the R component are 11110000
  • the bits of the G component are 11111000
  • the bits of the B component are 11110001.
  • the upper 4 bits of the R component can be intercepted, that is, the lower 4 bits are set to zero.
  • after the upper 4 bits of the R component are intercepted, the brightness value does not change; it is still 240.
  • for the G component, the bits change from 11111000 to 11110000, that is, from 248 to 240, and the brightness value does not change significantly.
  • similarly, the brightness value of the B component will not change significantly after its upper 4 bits are intercepted. It can be seen that, for specific image content, intercepting the high-order bits can effectively reduce the bit rate while ensuring the image coding quality.
  • the low-order bits can be intercepted, that is, the high-order bits are set to zero, and the low-order bits are reserved for transmission.
  • the high-order bits can be set to zero, and the low-order bits can be reserved.
  • the bits of the R component are 00001101
  • the bits of the G component are 00000110
  • the bits of the B component are 00000001.
  • the lower 4 bits of the R component can be intercepted, that is, the upper 4 bits are set to zero.
  • after the lower 4 bits of the R component are intercepted, the brightness value does not change; it is still 13.
  • the G component and the B component are similar. It can be seen that, for specific image content, intercepting the low-order bits can effectively reduce the bit rate while ensuring the image coding quality.
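  • The two truncation directions described above can be sketched as follows in Python (a hypothetical helper, not part of the embodiment), keeping either the high-order or the low-order bits of an 8-bit component and setting the remaining bits to zero:

```python
def truncate_high(value, keep_bits=4, total_bits=8):
    """Keep the high-order bits and set the low-order bits to zero."""
    mask = ((1 << keep_bits) - 1) << (total_bits - keep_bits)   # e.g. 0b11110000
    return value & mask

def truncate_low(value, keep_bits=4):
    """Keep the low-order bits and set the high-order bits to zero."""
    return value & ((1 << keep_bits) - 1)                       # e.g. 0b00001111

# Example from the text: R=0b11110000, G=0b11111000, B=0b11110001
for comp in (0b11110000, 0b11111000, 0b11110001):
    print(comp, '->', truncate_high(comp))   # 240->240, 248->240, 241->240

# Example from the text: R=0b00001101, G=0b00000110, B=0b00000001
for comp in (0b00001101, 0b00000110, 0b00000001):
    print(comp, '->', truncate_low(comp))    # 13->13, 6->6, 1->1
```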
  • bit truncation may be bit truncation of the prediction residual of the current coded pixel of the block to be coded.
  • point-level prediction or block-level prediction can be used to predict each pixel in the block to be coded, and the prediction residual of each pixel is output separately. Then perform bit interception on the prediction residual.
  • the prediction residual is the difference between the original value and the predicted value.
  • the original value is the pixel value of the current coded pixel
  • the predicted value is the predicted value of the current coded pixel.
  • the bit truncation method for the prediction residual is similar to the bit truncation method for the pixel value of the current coded pixel. Refer to the foregoing description of bit truncation of the pixel value of the current coded pixel, which will not be repeated here.
  • the above-mentioned bit interception may be referred to as forced rate control of the spatial branch.
  • Specifically, the method of bit truncation of the pixel value of the pixel to be coded may be called forced rate control mode 1 of the spatial branch, and the method of bit truncation of the prediction residual of the pixel to be coded may be called forced rate control mode 2 of the spatial branch.
  • S602 Calculate the first cost value corresponding to the bit truncation.
  • the cost calculation method may include any one of the following: coding rate (rate, R), rate-distortion cost (RDCOST), coding distortion size, and so on.
  • If the bit truncation is performed on the pixel value of the current coded pixel of the to-be-coded block, the first cost value is the cost value corresponding to the bit truncation of the pixel value of the current coded pixel. If the bit truncation is performed on the prediction residual of the current coded pixel of the to-be-coded block, the first cost value is the cost value corresponding to the bit truncation of the prediction residual of the current coded pixel of the to-be-coded block.
  • the mandatory code rate control condition is as described in S601, which will not be repeated here.
  • prediction of the to-be-coded block is also performed, which can be regarded as a mandatory rate control method of the frequency domain branch.
  • the prediction of the frequency domain branch may be block-level prediction.
  • the target prediction mode can be used to predict the block to be coded, and the difference between the original value and the predicted value is determined, which is the prediction residual.
  • the target prediction mode may be block-level prediction.
  • the prediction residual can be subjected to frequency domain transform to obtain N frequency domain transform coefficients of the prediction residual; then M1 of the N frequency domain transform coefficients are set to zero to obtain N zeroed frequency domain transform coefficients, and the N zeroed frequency domain transform coefficients are quantized.
  • the second cost value is the coding cost value after quantizing the N zeroed frequency domain transform coefficients.
  • the transform coefficients are set to zero and then quantized, which can reduce the transform coefficients involved in quantization.
  • N and M1 are both positive integers, and M1 is less than N.
  • alternatively, the prediction residual can be subjected to frequency domain transform to obtain N frequency domain transform coefficients of the prediction residual; then the N frequency domain transform coefficients are quantized, and M2 of the quantized N frequency domain transform coefficients are set to zero to obtain N zeroed frequency domain transform coefficients.
  • the second cost value is the coding cost value corresponding to setting the M2 frequency domain transform coefficients to zero.
  • N and M2 are both positive integers, and M2 is less than N.
  • when the quantized transform coefficients are set to zero, compared with quantizing after the transform coefficients are set to zero, more transform coefficients can be retained, that is, more frequency components can be retained (M2 may be less than or equal to M1), which is beneficial to improving the image quality of specific image content.
  • alternatively, the prediction residual can be subjected to frequency domain transform to obtain N frequency domain transform coefficients of the prediction residual; then M3 of the N frequency domain transform coefficients are set to zero, and the transform coefficients are not quantized any further.
  • the second cost value is the coding cost value corresponding to setting the M3 frequency domain transform coefficients to zero.
  • setting the M3 frequency domain transform coefficients to zero can be achieved by means of quantization.
  • M1, M2, and M3 may be collectively referred to as M.
  • the number M of transform coefficients that are set to zero can be determined by the full state of the code stream buffer. The fuller the current bit stream buffer, the more urgently the system needs to reduce the bit rate, and the greater the number M of zeroed transform coefficients. Because the human eye is not sensitive to high-frequency signals, the coefficients that are set to zero can be the transform coefficients corresponding to the high-frequency components, which can ensure that the loss in the image after the coefficients are discarded is not noticed by the human eye.
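  • The two orderings described above (zeroing before quantization with M1 coefficients, and quantization before zeroing with M2 coefficients) can be sketched as follows; the uniform scalar quantizer and the assumption that the last coefficients in the array are the highest-frequency components are illustrative choices, not requirements of the embodiment:

```python
import numpy as np

def quantize(coeffs, qp_step):
    """Uniform scalar quantization (assumed; the embodiment does not fix a formula)."""
    return np.round(coeffs / qp_step).astype(int)

def zero_then_quantize(coeffs, m, qp_step):
    """Zero M coefficients first (assumed to be the last, high-frequency ones),
    then quantize the N zeroed coefficients."""
    c = coeffs.copy()
    if m > 0:
        c[-m:] = 0
    return quantize(c, qp_step)

def quantize_then_zero(coeffs, m, qp_step):
    """The other ordering: quantize all N coefficients first, then zero the last M."""
    q = quantize(coeffs, qp_step)
    if m > 0:
        q[-m:] = 0
    return q

coeffs = np.array([120.0, 35.0, -18.0, 9.0, -4.0, 3.0, -2.0, 1.0])  # N = 8
print(zero_then_quantize(coeffs, m=4, qp_step=8))   # only low-frequency coefficients survive
print(quantize_then_zero(coeffs, m=3, qp_step=8))   # M may be smaller here (M2 <= M1)
```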
  • the cost calculation methods corresponding to the second cost values listed above are as described in S602, and can be the coding rate, RDCOST, coding distortion size, and so on.
  • the magnitude of coding distortion can be measured by measures such as SAD, MAD, SSD, SSE, and MSE.
  • the magnitude of the coding distortion may specifically be the difference between the residual information before the transformation and the residual information after the inverse transformation.
  • the above method of zeroing M (or M1 or M2 or M3) frequency domain transform coefficients among the N frequency domain transform coefficients can be regarded as partial discarding of the transform coefficients of the frequency domain branch.
  • alternatively, after block-level prediction, the prediction residual can be discarded entirely and the coding cost value calculated directly; that is, the second cost value is the coding cost value corresponding to discarding the prediction residual after block-level prediction.
  • it can be regarded as performing frequency domain transformation on the prediction residual, and after obtaining the N frequency domain transform coefficients of the prediction residual, all the N frequency domain transform coefficients are set to zero.
  • This method can be regarded as discarding all the transform coefficients of the frequency domain branch.
  • the full discarding of transform coefficients and the partial discarding of transform coefficients may be collectively referred to as coefficient discarding of the frequency domain branch.
  • the foregoing zeroing of transform coefficients may be referred to as forced rate control of the frequency domain branch.
  • Specifically, the method of quantizing the transform coefficients and then discarding some of the coefficients may be referred to as forced rate control mode 1 of the frequency domain branch, the method of discarding some coefficients and then quantizing may be referred to as forced rate control mode 2 of the frequency domain branch, and the method in which all transform coefficients are discarded may be referred to as forced rate control mode 3 of the frequency domain branch.
  • S605 Compare the first cost value with the second cost value, and determine the coded bits.
  • Specifically, the first cost value and the second cost value are compared to determine the best mandatory rate control mode, and the to-be-coded block is further coded according to the best mandatory rate control mode to obtain the final coded bits.
  • the coded bit can also carry its coding mode.
  • the coding mode can be the above-mentioned forced rate control mode 1 of the spatial branch, forced rate control mode 2 of the spatial branch, forced rate control mode 1 of the frequency domain branch, forced rate control mode 2 of the frequency domain branch, or forced rate control mode 3 of the frequency domain branch, etc., so that the decoder can decode the coded bits according to the coding mode.
  • If the best mandatory rate control mode is forced rate control mode 1 of the spatial branch, the coded bits are the coded bits obtained after bit truncation of the pixel value of the current coded pixel. If the best mandatory rate control mode is forced rate control mode 2 of the spatial branch, the coded bits are the coded bits obtained by bit truncation of the prediction residual of the current coded pixel. If the best mandatory rate control mode is forced rate control mode 1 of the frequency domain branch, the coded bits are the coded bits obtained by entropy coding the zeroed transform coefficients. If the best mandatory rate control mode is forced rate control mode 2 of the frequency domain branch, the coded bits are the coded bits obtained by entropy coding the quantized transform coefficients.
  • If the best mandatory rate control mode is forced rate control mode 3 of the frequency domain branch, the coded bits are the coded bits obtained by entropy coding the all-zero transform coefficients that result from discarding the prediction residual after block-level prediction. It can be seen that the finally determined coded bits must be smaller than the output bits of the code stream buffer, so that the code stream buffer does not overflow.
  • the first cost value and the second cost value can be calculated according to the above-mentioned coding cost rule.
  • In some calculation rules, the coding cost value is directly proportional to the coding cost, while in other calculation rules the coding cost value is inversely proportional to the coding cost. For example, if the coding cost value is the sum of the reciprocals of the absolute values of the prediction residuals of each pixel, then the coding cost value is inversely proportional to the coding cost.
  • the embodiments of the present application do not limit the coding cost rule, but no matter which coding cost calculation rule is adopted, the best mandatory rate control mode is always the mode with the least coding cost.
  • When the cost values are equal, the output coded bits may be either the coded bits after bit truncation or the coded bits after entropy coding of the transform coefficients.
  • the embodiment of the present application does not limit the mandatory code rate control mode selected when the cost value is equal.
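  • A possible form of the decision in S605, assuming a cost rule in which a smaller cost value means a smaller coding cost (the mode labels and cost numbers below are purely illustrative):

```python
def select_forced_rc_mode(first_cost, second_costs):
    """Compare the first cost value (spatial branch, bit truncation) with the
    second cost values (frequency domain branch) and return the mode with the
    least coding cost; the cost rule is assumed to be 'smaller is better'."""
    candidates = {'spatial_bit_truncation': first_cost}
    candidates.update(second_costs)
    return min(candidates, key=candidates.get)

second_costs = {'freq_quantize_then_zero': 52.0,   # forced rate control mode 1
                'freq_zero_then_quantize': 47.5,   # forced rate control mode 2
                'freq_discard_all': 60.0}          # forced rate control mode 3
print(select_forced_rc_mode(first_cost=49.0, second_costs=second_costs))
# -> 'freq_zero_then_quantize'; the block is then coded with that mode and the
#    coding mode identifier is carried in the coded bits for the decoder.
```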
  • the encoding method may include the following steps:
  • S701 Select a target prediction mode from multiple prediction modes based on a preset cost rule.
  • At least one optimal spatial domain prediction mode may be selected from a plurality of spatial domain prediction modes, and at least one optimal frequency domain prediction mode may be selected from a plurality of frequency domain prediction modes.
  • the spatial prediction mode may be point-level prediction or block-level prediction. Different spatial prediction modes correspond to different prediction directions and/or different prediction value calculation methods.
  • the frequency domain prediction mode may be block-level prediction. Different frequency domain prediction modes correspond to different prediction directions and/or different prediction value calculation methods.
  • the at least one optimal spatial prediction mode is at least one spatial prediction mode with a lower coding cost among the plurality of spatial prediction modes, and the at least one optimal frequency domain prediction mode is at least one frequency domain prediction mode with a lower coding cost among the plurality of frequency domain prediction modes; that is, the multiple spatial prediction modes (or frequency domain prediction modes) are compared according to the coding cost.
  • the process of S701 may be referred to as pre-analysis.
  • the calculation rule of the coding cost in the pre-analysis stage can be any of the following: SAD of the residual information, MAD of the residual information, SSE of the residual information, MSE of the residual information, coding rate R, RDCOST, coding distortion size, and so on.
  • the magnitude of coding distortion can be measured by measures such as SAD, MAD, SSE, and MSE, and the residual information is the difference between the original value and the predicted value.
  • the coding cost value of spatial precoding and frequency domain precoding can be calculated.
  • In some calculation rules, the coding cost value is directly proportional to the coding cost, while in other calculation rules the coding cost value is inversely proportional to the coding cost. For example, if the coding cost value is the sum of the reciprocals of the absolute values of the prediction residuals of each pixel, then the coding cost value is inversely proportional to the coding cost.
  • the embodiment of the present application does not limit the coding cost calculation rule in the pre-analysis stage, but no matter what coding cost calculation rule is adopted, the optimal prediction mode is always the prediction mode with the least coding cost.
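  • A minimal sketch of the pre-analysis in S701, using the SAD of the residual information as the cost rule (only one of the options listed above; the two candidate modes and the block content are assumptions):

```python
import numpy as np

def sad_cost(block, predictor):
    """Coding cost used in pre-analysis: SAD of the prediction residual."""
    return int(np.abs(block - predictor(block)).sum())

def pre_analysis(block, modes, keep=1):
    """Return the `keep` prediction modes with the lowest coding cost."""
    costs = {name: sad_cost(block, pred) for name, pred in modes.items()}
    return sorted(costs, key=costs.get)[:keep]

block = np.tile(np.arange(5), (2, 1))       # columns are constant, rows form a gradient
modes = {
    # vertical prediction: each pixel predicted from the pixel above
    # (row 0 from itself here, a simplification for the sketch)
    'vertical': lambda b: np.vstack([b[0], b[:-1]]),
    # horizontal prediction: each pixel predicted from the pixel to its left
    'horizontal': lambda b: np.hstack([b[:, :1], b[:, :-1]]),
}
print(pre_analysis(block, modes))   # ['vertical']: constant columns give zero residual
```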
  • S702 Determine whether the mandatory rate control condition is met, if yes, execute S703, if not, execute S708.
  • the mandatory rate control condition can be that the full state of the bit stream buffer reaches a certain level, or it can be the result of the combined influence of the full state of the bit stream buffer and the size of the prediction residual; for example, when the full state of the bit stream buffer reaches a certain level (for example, greater than or equal to 90%) but the prediction residuals are all less than a certain threshold (for example, all zeros), it is still determined that the mandatory rate control condition is not met.
  • the mandatory code rate control condition may be that the full state of the code stream buffer reaches a certain level and the prediction residual is greater than a certain threshold. For details, refer to the description of the mandatory rate control condition in S601, which is not repeated here.
  • S703 Perform bit interception on the block to be coded.
  • S703 is the same as S601, and will not be repeated here.
  • If the bit truncation is performed on the prediction residual of the current coded pixel of the block to be coded, the at least one optimal spatial prediction mode determined in S701 may first be used to predict the block to be coded, and then bit truncation is performed on the prediction residual.
  • S704 is the same as S602, and will not be repeated here.
  • S705 Perform prediction on the block to be coded, and determine the prediction residual of the block to be coded.
  • the block-level prediction mode may be at least one optimal frequency domain prediction mode determined in S701.
  • the block to be coded is predicted in the at least one optimal prediction mode, and at least one prediction residual of the block to be coded is determined.
  • the second cost value corresponding to each optimal frequency domain prediction mode can be calculated according to each prediction residual respectively.
  • the calculation method of each second cost value is consistent with the relevant description in S604, and will not be repeated here.
  • each optimal frequency domain prediction mode can correspondingly execute S705-S706, and finally at least one second cost value is obtained.
  • S703-S704 and S705-S706 can be performed synchronously, and the execution order is not limited in the embodiment of the present application.
  • S707 Compare the first cost value with the second cost value, and determine the coded bits.
  • the coded bit can also carry its coding mode.
  • the coding mode can be the above-mentioned forced rate control mode 1 of the spatial branch, forced rate control mode 2 of the spatial branch, forced rate control mode 1 of the frequency domain branch, forced rate control mode 2 of the frequency domain branch, or forced rate control mode 3 of the frequency domain branch, etc., so that the decoder can decode the coded bits according to the coding mode.
  • Specifically, the first cost value can be compared with the multiple second cost values to determine the mandatory rate control mode with the least coding cost. If the mandatory rate control mode with the least coding cost is bit truncation, the coded bits are the coded bits after bit truncation; if the mandatory rate control mode with the least coding cost is block-level prediction of the block to be coded in an optimal frequency domain prediction mode, the coded bits are the coded bits obtained by entropy coding the prediction residual of the block to be coded in that optimal frequency domain prediction mode.
  • S708 Determine the first coding QP. Specifically, the first coding QP may be determined according to the texture complexity of the block to be coded and/or the full state of the code stream buffer.
  • the first coding QP may be proportional to the texture complexity of the block to be coded. The more complex the texture of the block to be coded, the larger the first code QP; the simpler the texture of the block to be coded, the smaller the first code QP.
  • When the texture of the block to be coded is relatively complex, image distortion is less easily perceived by the human eye, so a larger first coding QP can be used to reduce the code rate that the block to be coded may occupy after coding; when the texture of the block to be coded is relatively simple, the first coding QP can be reduced to reduce the distortion, thereby ensuring that the image distortion is not perceived by the human eye.
  • the first code QP may be proportional to the full state of the code stream buffer. The more full the code stream buffer, the larger the first code QP; the more free the code stream buffer, the smaller the first code QP.
  • When the code stream buffer is relatively full, the bit rate that the block to be coded may occupy needs to be reduced, which can be achieved by using a larger first coding QP; when the code stream buffer is relatively empty, the bit rate that the block to be coded may occupy can be increased to increase the image information carried by the block to be coded, so that the decoded image has a higher degree of restoration, which can be achieved by using a smaller first coding QP.
  • the texture complexity can be quantified, and the first coding QP corresponding to different degrees of texture complexity can be set. It is also possible to quantify the full state of the code stream buffer, and set the first code QP corresponding to the full state of different degrees.
  • When the first coding QP is determined according to both the texture complexity and the full state of the code stream buffer, the first coding QPs corresponding to the two factors can be combined to obtain the final first coding QP; for example, the two first coding QPs can be averaged or weighted and summed to obtain the final first coding QP.
  • the weight occupied by the two can be the default weight obtained according to the prior information.
  • alternatively, the correspondence among the texture complexity of the coding block, the first coding QP, and the coding rate can be obtained by statistics based on the information of the coding blocks that have already been coded.
  • the foregoing corresponding relationship can be searched for according to the texture complexity and coding rate of the block to be coded to determine the first coding QP.
  • the coding rate of the block to be coded can be determined according to the current full state of the code stream buffer.
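  • One possible way to combine the two factors into the first coding QP, as a weighted sum of per-factor QPs (the normalization of the inputs to the range 0..1, the 0..51 QP range, and the equal weights are assumptions):

```python
def first_coding_qp(texture_complexity, buffer_fullness,
                    w_texture=0.5, w_buffer=0.5, qp_min=0, qp_max=51):
    """Map texture complexity (0..1) and code stream buffer fullness (0..1)
    to a first coding QP by a weighted sum of per-factor QPs (assumed scheme)."""
    qp_texture = qp_min + texture_complexity * (qp_max - qp_min)
    qp_buffer = qp_min + buffer_fullness * (qp_max - qp_min)
    return int(round(w_texture * qp_texture + w_buffer * qp_buffer))

print(first_coding_qp(texture_complexity=0.8, buffer_fullness=0.9))  # complex texture, nearly full buffer -> 43
print(first_coding_qp(texture_complexity=0.2, buffer_fullness=0.3))  # simple texture, fairly empty buffer -> 13
```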
  • S709 Precoding the block to be coded by using the first coding QP in multiple prediction modes, respectively, to obtain the precoding result information corresponding to each prediction mode.
  • precoding may be estimating the result of coding the block to be coded based on prior information (or historical statistical information).
  • the prior information or historical statistical information may be the information of the coding blocks that have already been coded.
  • Precoding may also be to encode the block to be coded in advance to obtain the result of coding the block to be coded.
  • Precoding may include spatial precoding and frequency domain precoding.
  • spatial precoding can include prediction, quantization, and cost calculation.
  • Frequency domain precoding can include prediction, transformation, quantization, and cost calculation.
  • In spatial precoding, the predicted value of the block to be coded is determined using the optimal spatial prediction mode, and the residual information of the block to be coded is further determined.
  • In frequency domain precoding, the predicted value of the block to be coded is likewise determined using the optimal frequency domain prediction mode, and the residual information of the block to be coded is further determined.
  • the prediction may be to determine the prediction value and prediction residual of the block to be coded according to the prediction reference direction corresponding to the prediction mode and the prediction value calculation method.
  • the transformation may be a frequency domain transformation of the prediction residuals to obtain transform coefficients in the transform domain.
  • the quantization in spatial precoding may be the quantization of the prediction residual with the first coding QP.
  • the quantization in frequency domain precoding may be to quantize the transform coefficients with the first coding QP.
  • the cost calculation in the spatial precoding may be the calculation of the coding cost corresponding to the quantization of the prediction residual.
  • the cost calculation in the frequency domain precoding may be to calculate the coding cost corresponding to the quantization of the transform coefficient.
  • the method for calculating the cost of the precoding stage may include any one of the following: coding rate, rate-distortion cost (rate-distortion cost, RDCOST), size of coding distortion, and so on.
  • the magnitude of coding distortion can be measured by sum of absolute differences (SAD), mean absolute differences (MAD), sum of squared differences (SSD), sum of squared errors (SSE), mean squared error (MSE), and other measures.
  • the precoding result information may include at least one of the following: the number of coded bits of the block to be coded under the first coding QP, the coding distortion size of the block to be coded under the first coding QP, the texture complexity of the block to be coded, and the prediction residual of the block to be coded.
  • the magnitude of the coding distortion may be the difference between the residual information before quantization and the residual information after inverse quantization.
  • alternatively, the magnitude of coding distortion can be the difference between the residual information before transformation and the residual information after inverse transformation, or the difference between the residual information after transformation and before quantization and the residual information after inverse quantization and before inverse transformation.
  • the block to be coded may include multiple pixels, and each pixel corresponds to a difference of residual information (before quantization and after inverse quantization, or before transformation and after inverse transformation); that is, the block to be coded may correspond to multiple such differences.
  • the magnitude of coding distortion may be obtained by using a certain calculation rule (such as SAD, MAD, SSE, MSE, etc.) to combine the above multiple differences into one value, and the value obtained is the magnitude of the coding distortion.
  • the purpose of rate-distortion cost estimation is to select a suitable coding method so that smaller distortion is obtained at a smaller code rate.
  • the rate-distortion cost can be used to measure the result of image coding by integrating the coding rate and the magnitude of distortion. The lower the rate-distortion cost, the better the performance of characterizing image coding.
  • RDCOST = D + λ*R, where D is the distortion size, R is the code rate, and λ is the Lagrangian optimization factor. The value of λ can be positively correlated with the first coding QP and with the fullness of the code stream buffer.
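  • A sketch of the rate-distortion cost computation; SSE is used here as the distortion measure (one of the options listed above), and the mapping from the first coding QP and the buffer fullness to λ is an assumed example:

```python
import numpy as np

def sse(original, reconstructed):
    """Sum of squared errors, one possible measure of coding distortion."""
    diff = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    return float((diff * diff).sum())

def rdcost(distortion, rate_bits, qp, buffer_fullness):
    """RDCOST = D + lambda * R; lambda grows with the first coding QP and with
    the fullness of the code stream buffer (exact mapping assumed)."""
    lam = 0.1 * qp * (1.0 + buffer_fullness)
    return distortion + lam * rate_bits

orig = [10, 12, 11, 13]
reco = [10, 11, 11, 14]
d = sse(orig, reco)                                          # 2.0
print(rdcost(d, rate_bits=24, qp=20, buffer_fullness=0.5))   # 2.0 + 3.0*24 = 74.0
```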
  • S710 Select the best prediction mode from multiple prediction modes.
  • the optimal prediction mode is the prediction mode with the least coding cost among the at least one optimal spatial prediction mode and the at least one optimal frequency domain prediction mode.
  • S711 Use the coding result information corresponding to the optimal prediction mode to adjust the first coded QP to obtain the second coded QP.
  • different prediction modes correspond to different precoding result information in the precoding stage.
  • the precoding result information corresponding to the optimal prediction mode can be used to adjust the first coding QP to obtain the second coding QP.
  • the encoding result information includes the number of encoded bits of the block to be encoded under the first encoding QP.
  • When the number of coded bits is less than the target number of bits, the first coding QP is reduced; when the number of coded bits is greater than the target number of bits, the first coding QP is increased.
  • the target number of bits of the block to be encoded is determined by the full state of the code stream buffer and the number of output bits of the code stream buffer.
  • the number of output bits of the code stream buffer can be determined by the current target code rate of the encoder. Exemplarily, if the current target code rate of the encoder is 1 megabits per second (Mbps), the current frame rate is 30 frames per second, and each frame of image is divided into 30 coding blocks. If the code rate is divided equally for each code block, the number of output bits in the code stream buffer can be 1 Mbit/(30*30).
  • the above method of calculating the number of output bits of the code stream buffer based on the target code rate of the encoder is only an example. In a specific implementation, there may be other calculation methods (for example, the code rate is not divided equally among the coding blocks), which is not limited in the embodiment of the present application.
  • For example, if the number of output bits of the code stream buffer is 100 bits, then when the full state of the code stream buffer is 50%, the target number of bits is equal to the number of output bits, that is, 100 bits.
  • When the full state of the code stream buffer is higher than 50%, the target number of bits is smaller than the number of output bits, for example 90 bits; when the current full state of the code stream buffer is 80%, the target number of bits is 70 bits; when the current full state of the code stream buffer is 30%, the target number of bits is 120 bits.
  • The full state of the above code stream buffer is expressed as a percentage, where the percentage is the ratio of the currently used capacity of the code stream buffer to the total capacity.
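  • The per-block output bit number and the target bit number in the example above could be computed as follows; the linear adjustment around 50% fullness is an assumption chosen to reproduce the example values, not a rule stated by the embodiment:

```python
def output_bits(target_bitrate_bps, frame_rate, blocks_per_frame):
    """Bits the code stream buffer outputs per coding block when the code rate
    is divided equally over the blocks (the example in the text: 1 Mbit/(30*30))."""
    return target_bitrate_bps / (frame_rate * blocks_per_frame)

def target_bits(out_bits, buffer_fullness):
    """Target bit number of the block to be coded: equal to the output bit number
    at 50% fullness, smaller when the buffer is fuller, larger when it is emptier.
    The linear rule below is an assumption that reproduces the example values."""
    return out_bits * (1.5 - buffer_fullness)

print(round(output_bits(1_000_000, 30, 30)))          # ~1111 bits per block in the 1 Mbps example
for fullness in (0.5, 0.8, 0.3):
    print(fullness, round(target_bits(100, fullness)))  # 100, 70, 120
```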
  • If it is estimated that the number of coded bits output after coding is less than the target number of bits of the block to be coded, the first coding QP can be reduced to increase the bit rate of the block to be coded, thereby improving the quality of the compressed image. If it is estimated that the number of coded bits output after coding is greater than the target number of bits of the block to be coded and the code stream buffer is relatively full, it indicates that the code stream buffer may overflow, and the first coding QP can be increased to reduce the bit rate and ensure that the code stream buffer does not overflow.
  • the encoding result information includes the encoding distortion size of the block to be encoded under the first encoding QP.
  • When the coding distortion is less than a first threshold, the first coding QP is increased; when the coding distortion is greater than a second threshold, the first coding QP is decreased.
  • the magnitude of the coding distortion may be the difference between the residual information before quantization and the residual information after inverse quantization.
  • alternatively, the magnitude of the coding distortion can be the difference between the residual information before transformation and the residual information after inverse transformation, or the difference between the residual information after transformation and before quantization and the residual information after inverse quantization and before inverse transformation.
  • When the coding distortion is less than a certain threshold, it indicates that the coded image quality is relatively high, and the first coding QP can be increased to save coded bits. When the coding distortion is greater than a certain threshold, it indicates that the coded image quality is low, and the first coding QP needs to be reduced to improve the image quality.
  • the encoding result information includes the texture complexity of the block to be encoded.
  • When the texture of the block to be coded is relatively simple, image distortion is easily perceived by the human eye, so the first coding QP can be reduced to increase the bit rate and ensure that the image distortion is not perceived; when the texture of the block to be coded is relatively complex, the first coding QP can be increased to reduce the bit rate.
  • the texture complexity of the block to be coded can be determined according to the prediction residuals corresponding to each prediction mode in S701. Specifically, the prediction direction of the prediction mode with the smaller prediction residual may characterize the texture information of the block to be coded to a certain extent. It is not limited to the prediction residual. In specific implementation, the texture complexity of the block to be coded may also be determined in other ways, which is not limited in the embodiment of the present application.
  • the encoding result information includes the prediction residual of the block to be encoded.
  • When the absolute values of the prediction residuals are all smaller than a third threshold, the first coding QP is reduced; when the absolute values of the prediction residuals are all larger than a fourth threshold, the first coding QP is increased.
  • the prediction residual can reflect the texture complexity of the block to be coded.
  • the more complex the texture of the block to be coded, the less obvious the image distortion caused by quantization, and the less perceptible it is to the human eye.
  • In this case, the first coding QP can be increased to reduce the bit rate.
  • the precoding result information may include any two or more of the above. Specifically, the adjustment amount of the first coding QP corresponding to each item can be determined, and then the final adjustment amount of the first coding QP is calculated according to the weight of each item, so as to obtain the second coding QP.
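  • A sketch of how the per-item adjustments could be weighted and combined into the second coding QP (the step sizes, thresholds, and weights are illustrative assumptions):

```python
def adjust_qp(first_qp, coded_bits, target_bits, distortion,
              low_thr, high_thr, w_bits=0.5, w_dist=0.5):
    """Second coding QP = first coding QP + weighted sum of per-item adjustments:
    fewer bits than the target or large distortion lowers the QP,
    more bits than the target or small distortion raises it (assumed step of 1)."""
    delta_bits = -1 if coded_bits < target_bits else (1 if coded_bits > target_bits else 0)
    if distortion < low_thr:
        delta_dist = 1            # quality is already good, save bits
    elif distortion > high_thr:
        delta_dist = -1           # quality is poor, spend more bits
    else:
        delta_dist = 0
    return round(first_qp + w_bits * delta_bits + w_dist * delta_dist)

print(adjust_qp(first_qp=28, coded_bits=120, target_bits=100,
                distortion=5.0, low_thr=10.0, high_thr=50.0))   # 28 + 0.5 + 0.5 = 29
```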
  • S712 Use the second coding QP to perform real coding on the block to be coded in the best prediction mode.
  • If the best prediction mode is the best spatial prediction mode, the best spatial prediction mode is used to predict the block to be coded, the predicted value and prediction residual are output, and then the prediction residual is quantized, entropy coded, and so on.
  • If the best prediction mode is the best frequency domain prediction mode, the best frequency domain prediction mode is used to predict the block to be coded, the predicted value and prediction residual are output, and then the prediction residual is transformed, quantized, entropy coded, and so on.
  • the prediction may be to determine the prediction value and the prediction residual of the block to be coded according to the prediction reference direction corresponding to the prediction mode and the prediction value calculation method.
  • the transformation may be a frequency domain transformation of the prediction residuals to obtain transform coefficients in the transform domain.
  • the quantization in spatial real coding may be the quantization of the prediction residual with the second coding QP.
  • the quantization in frequency domain real coding may be to quantize the transform coefficients with the second coding QP.
  • Entropy coding can be coding that does not lose any information according to the entropy principle.
  • For example, Shannon coding, Huffman coding, or arithmetic coding is usually used to encode the quantized prediction residuals (spatial real coding) or the quantized transform coefficients (frequency domain real coding).
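  • As an illustration of entropy coding, the following is a generic textbook Huffman construction in Python; it is not the entropy coder of the embodiment or of any particular video coding standard:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table for a sequence of quantized values:
    frequent symbols receive shorter codewords."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: a single symbol
        return {next(iter(freq)): '0'}
    heap = [[count, i, {sym: ''}] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least frequent subtrees
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

residuals = [0, 0, 0, 1, 0, -1, 0, 2, 0, 0]      # quantized prediction residuals
table = huffman_code(residuals)
bitstream = ''.join(table[r] for r in residuals)
print(table, len(bitstream), 'bits')             # the frequent symbol 0 gets a 1-bit code
```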
  • the coded bits can be output to the code stream buffer.
  • the full state of the code stream buffer may change.
  • the full state of the code stream buffer can further affect S702 and S711.
  • the relationship between the full state of the code stream buffer and S702 is explained in S601 and S702, and will not be repeated here.
  • the relationship between the full state of the code stream buffer and S711 is also explained in S711, and will not be repeated here.
  • In this way, when the system needs to perform mandatory rate control, the embodiment of the present application can estimate the coding costs of controlling the code rate by two different methods, bit truncation of the spatial branch and coefficient discarding of the frequency domain branch, respectively, control the code rate in the way with the smaller coding cost, and finally output the coded bits, so that the limited output coded bits carry as much useful information as possible (that is, information that the human eye can easily perceive), and the quality of the decoded image is improved while ensuring that the code rate meets the system memory and bandwidth limitations.
  • When the system does not need to perform mandatory rate control, the embodiment of the present application can also provide multiple prediction modes for the spatial and frequency domain branches through pre-analysis, so that it is not necessary to rely only on block-based prediction operations (that is, the prediction operation in spatial coding); more refined point-level prediction can be used (that is, the reconstructed value in the current prediction block can be used as the prediction reference value of subsequent pixels in the current prediction block), and the image compression performance can be improved through this more refined point-level prediction. Furthermore, in the embodiment of the present application, two-level code rate control can be used to achieve more refined code rate control, so that the code rate is used reasonably to transmit image data of better quality and the performance of image compression is improved.
  • Figure 8 illustrates a space-frequency domain coding architecture, which can be used to implement the coding method provided in Figure 7.
  • the space-frequency domain coding architecture can include the following parts:
  • the pre-analysis 801 can select a target prediction mode from multiple prediction modes based on a preset cost rule. Specifically, at least one optimal spatial domain prediction mode may be selected from a plurality of spatial domain prediction modes, and at least one optimal frequency domain prediction mode may be selected from a plurality of frequency domain prediction modes. For details, please refer to the description of S701, which will not be repeated here.
  • the mandatory code control condition judgment 802 is used to judge whether the mandatory code rate control condition is currently met. For details, please refer to the description of S702, which will not be repeated here. If the compulsory rate control condition is met, the code rate is forced to be controlled through a space-domain branch and a frequency-domain branch at the same time.
  • The following introduces the two spatial branches and the two frequency domain branches respectively.
  • In a specific implementation, the space-frequency domain coding architecture can select one spatial branch from the two spatial branches and one frequency domain branch from the two frequency domain branches to perform forced control of the code rate.
  • Spatial branch one can include the following parts: bit interception 803, cost calculation 1 804.
  • Spatial branch two can include the following parts: prediction 1 803a, bit interception 803, and cost calculation 1 804.
  • bit interception 803 can be used to perform bit interception on the pixel value of the currently coded pixel in the to-be-coded block under the condition that the mandatory rate control condition is met. For details, please refer to the relevant description in S601, which is not repeated here.
  • Cost calculation 1 804 can be used to calculate the first generation value corresponding to bit interception. For details, please refer to the relevant description in S602, which will not be repeated here.
  • Prediction 803a may be used to predict the to-be-coded block by using at least one optimal spatial prediction mode determined in the pre-analysis 801 to obtain a prediction residual.
  • the best spatial prediction mode can be point-level prediction or block-level prediction.
  • Bit interception 803 can be used to perform bit interception on the prediction residual. For details, please refer to the related description in S601, which will not be repeated here.
  • the cost calculation 1 804 can be used to calculate the first generation value corresponding to the bit interception of the prediction residual. For details, please refer to the relevant description in S602, which will not be repeated here.
  • the coding costs of the above-mentioned spatial branch 1 and spatial branch 2 can be compared, and the branch with the smaller coding cost may be selected to perform the forced rate control of the spatial branch.
  • Frequency domain branch 1 can include the following parts: prediction 2 805, transformation 806, quantization 807a, coefficient discarding 808a, cost calculation 2 809.
  • Frequency domain branch two can include the following parts: prediction 2 805, transformation 806, coefficient discarding 807b, quantization 808b, cost calculation 2 809.
  • Prediction 2 805 can be used to perform the mandatory rate control of the frequency domain branch when the mandatory rate control conditions are met. Specifically, block-level prediction can be performed on the block to be coded, and the prediction residual of the block to be coded can be determined.
  • the prediction mode of block-level prediction may be the best frequency domain prediction mode determined in the pre-analysis 801. For details, please refer to the description of S603, which is not repeated here.
  • the transform 806 can be used to perform frequency domain transform on the prediction residual of the block to be coded to obtain N frequency domain transform coefficients of the prediction residual.
  • the quantization 807a can be used to quantize the N frequency domain transform coefficients of the prediction residual.
  • the coefficient discarding 808a can be used to zero out the M frequency domain transform coefficients among the quantized N frequency domain transform coefficients to obtain the zeroed N frequency domain transform coefficients.
  • N and M are both positive integers, and M is less than N.
  • Cost calculation 2 809 can be used to calculate the second cost value corresponding to zeroing the M frequency domain transform coefficients.
  • the number M of transform coefficients that are set to zero can be determined by the full state of the code stream buffer. The fuller the current bit stream buffer, the more urgently the system needs to reduce the bit rate, and the greater the number M of zeroed transform coefficients. Because the human eye is not sensitive to high-frequency signals, the coefficients that are set to zero can be the transform coefficients corresponding to the high-frequency components, which can ensure that the loss in the image after the coefficients are discarded is not noticed by the human eye.
  • Prediction 2 805 can be used to perform the mandatory rate control of the frequency domain branch when the mandatory rate control condition is met.
  • block-level prediction can be performed on the block to be coded, and the prediction residual of the block to be coded can be determined.
  • the prediction mode of block-level prediction may be the best frequency domain prediction mode determined in the pre-analysis 801. For details, please refer to the description of S603, which will not be repeated here.
  • the transform 806 can be used to perform frequency domain transform on the prediction residual of the block to be coded to obtain N frequency domain transform coefficients of the prediction residual.
  • the coefficient discarding 807b can be used to zero out the M frequency domain transform coefficients among the N frequency domain transform coefficients of the prediction residual to obtain the zeroed N frequency domain transform coefficients.
  • N and M are both positive integers, and M is less than N.
  • the quantization 808b can be used to quantize the N frequency domain transform coefficients after zeroing.
  • the cost calculation 2 809 can be used to calculate the coding cost value after quantizing the N zeroed frequency domain transform coefficients, that is, the second cost value.
  • the transform coefficients are set to zero and then quantized, which can reduce the transform coefficients involved in quantization.
  • in frequency domain branch 1, quantization is performed first and coefficients are discarded afterwards, which can retain more frequency components and is beneficial to improving the image quality of specific image content.
  • In a specific implementation, the coding costs of frequency domain branch 1 and frequency domain branch 2 can be compared, and the branch with the smaller coding cost can be selected to perform the mandatory rate control of the frequency domain branch.
  • the cost comparison 810 can be used to compare the first cost value with the second cost value. If the first cost value is smaller, the remaining bits after bit truncation are output to the code stream buffer; if not, the zeroed N frequency domain transform coefficients (frequency domain branch 1) or the quantized N frequency domain transform coefficients (frequency domain branch 2) are entropy coded. For details, please refer to the related description of S605, which is not repeated here.
  • Entropy coding 811 can be used to perform entropy coding on the zeroed N frequency domain transform coefficients (frequency domain branch 1) or the quantized N frequency domain transform coefficients (frequency domain branch 2), and output the coded bits to the code stream buffer 819.
  • Rate control 1 812 can be used to determine the first code QP when the mandatory rate control condition is not met. For details, please refer to the description of S708, which will not be repeated here.
  • the spatial precoding 813 may be used to perform spatial precoding on the block to be coded using the first coding QP in the at least one optimal spatial prediction mode determined in the pre-analysis 801.
  • Spatial precoding can include prediction, quantization, and cost calculation. For details, please refer to the relevant description in S709, which will not be repeated here.
  • the frequency domain precoding 814 may be used to perform frequency domain precoding on the block to be coded using the first coding QP in the at least one optimal frequency domain prediction mode determined in the pre-analysis 801.
  • Frequency domain precoding can include prediction, transformation, quantization, and cost calculation. For details, please refer to the relevant description in S709, which will not be repeated here.
  • the compressed domain decision 815 can be used to select the prediction mode with the least coding cost among the at least one optimal spatial domain prediction mode and the at least one optimal frequency domain prediction mode described above. For details, please refer to the description in S710, which will not be repeated here.
  • the code rate control 2 816 can be used to adjust the first coding QP by using the precoding result information corresponding to the best prediction mode to obtain the second coding QP. For details, please refer to the description in S711, which will not be repeated here.
  • Spatial real coding 817 is used, when the compressed domain decision 815 determines that the best prediction mode is a spatial prediction mode, to perform spatial real coding in the best prediction mode using the second coding QP, and to output the spatial real coded bits to the code stream buffer 819.
  • Frequency domain real coding 818 is used, when the compressed domain decision 815 determines that the best prediction mode is a frequency domain prediction mode, to perform frequency domain real coding in the best prediction mode using the second coding QP, and to output the frequency domain real coded bits to the code stream buffer 819.
  • the code stream buffer 819 can be used to receive the coded bits output by the spatial real coding 817 or the frequency domain real coding 818, and can also be used to receive the remaining bits output after bit truncation or the coded bits obtained by entropy coding the zeroed N frequency domain transform coefficients.
  • the code stream buffer 819 can also act on 802 and 816. The relationship between the code stream buffer 819 and 802 is explained in S601 and S702, and will not be repeated here; the relationship between the code stream buffer 819 and 816 is explained in S711, and will not be repeated here.
  • the forced rate control of the frequency domain branch can also be implemented in the following manner:
  • the prediction residual can be transformed into the frequency domain to obtain the N frequency domain transform coefficients of the prediction residual, then M of the N frequency domain transform coefficients are set to zero, and the transform coefficients are no longer quantized.
  • the second cost value is the coding cost value corresponding to setting the M frequency domain transform coefficients to zero. The zeroing of the frequency domain transform coefficients can be achieved by means of quantization. In the embodiment of the present application, only the transform coefficients are set to zero and are not additionally quantized, which can simplify the calculation and simplify the coding architecture.
  • Figure 9 provides another space-frequency domain coding architecture. As shown in FIG. 9, this space-frequency domain coding architecture differs from the space-frequency domain coding architecture provided in FIG. 8 only in the frequency domain branch used for mandatory rate control (frequency domain branch 3); the other parts are the same. Only the differences are described below; for the similarities, refer to the description of FIG. 8, and the details are not repeated here.
  • Frequency domain branch three can include the following parts: prediction 2 905, cost calculation 2 906.
  • Prediction 2 905 can be used to perform block-level prediction on the block to be coded and output prediction residuals.
  • Cost calculation 2 906 can be used to calculate the second cost value corresponding to discarding the prediction residual after block-level prediction.
  • the space-frequency domain coding architecture shown in Figure 9 discards all transform coefficients (that is, all-zero coefficients) in the forced rate control of the frequency domain branch, and can directly perform entropy coding on the all-zero coefficients and output the coded bits to the code stream buffer 917. This can simplify the coding architecture, reduce the amount of calculation, and improve coding efficiency.
  • the cost calculation method of cost calculation 2 can refer to the description in S602, which will not be repeated here.
  • The coding method provided in the embodiments of this application is not limited to the space-frequency domain coding architecture; it is also applicable to a spatial domain coding architecture.
  • The following introduces the spatial domain coding architecture provided by the embodiments of this application with reference to FIG. 10.
  • As shown in FIG. 10, the spatial domain coding architecture can include the following parts:
  • The mandatory code control condition judgment 1001 is used to judge whether the mandatory code rate control condition is currently met. For details, refer to the description of S702, which is not repeated here. If the mandatory rate control condition is met, the code rate is forcibly controlled through one spatial domain branch and one frequency domain branch at the same time. The following introduces two spatial domain branches and one frequency domain branch. In a specific implementation, the spatial domain coding architecture can select one of the two spatial domain branches to forcibly control the code rate.
  • Spatial domain branch one can include the following parts: bit interception 1002, cost calculation 1 1003.
  • Spatial domain branch two can include the following parts: prediction 1 1002a, bit interception 1002, cost calculation 1 1003.
  • Spatial domain branch one is similar to spatial domain branch one involved in FIG. 8, and spatial domain branch two is similar to spatial domain branch two involved in FIG. 8; neither is repeated here.
  • The coding costs of the above spatial domain branch one and spatial domain branch two can be compared, and the branch with the smaller coding cost may be selected to perform the forced rate control of the spatial domain branch, as sketched below.
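  • The sketch compares the two spatial domain branches: branch one truncates the pixel values directly, branch two predicts first and truncates the residual, and the lower-cost branch is chosen. The mapping from buffer fullness to the number of kept high-order bits, the sign-magnitude handling of the residual, and the SAD cost are assumptions of this example.

```python
import numpy as np

def truncate_high_bits(values, kept_bits, bit_depth=8):
    """Keep the kept_bits most significant bits, zero the rest."""
    drop = bit_depth - kept_bits
    return (values.astype(np.int32) >> drop) << drop

def branch_one_cost(pixels, kept_bits):
    """Spatial domain branch one: truncate the pixel values directly."""
    recon = truncate_high_bits(pixels, kept_bits)
    return float(np.abs(pixels.astype(np.int32) - recon).sum())

def branch_two_cost(pixels, predicted, kept_bits):
    """Spatial domain branch two: predict first, then truncate the residual."""
    residual = pixels.astype(np.int32) - predicted.astype(np.int32)
    recon_residual = truncate_high_bits(np.abs(residual), kept_bits) * np.sign(residual)
    recon = predicted.astype(np.int32) + recon_residual
    return float(np.abs(pixels.astype(np.int32) - recon).sum())

def pick_spatial_branch(pixels, predicted, buffer_fullness):
    # Fuller buffer -> fewer kept bits (assumed mapping for this example).
    kept_bits = max(1, int(round(8 * (1.0 - buffer_fullness))))
    c1 = branch_one_cost(pixels, kept_bits)
    c2 = branch_two_cost(pixels, predicted, kept_bits)
    return ("spatial domain branch one" if c1 <= c2 else "spatial domain branch two"), min(c1, c2)
```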
  • the frequency domain branch can include the following parts: prediction 2 1004, cost calculation 2 1005.
  • The frequency domain branch is similar to frequency domain branch three involved in FIG. 9, and is not repeated here.
  • the cost comparison 1006 is consistent with the cost comparison 810, and will not be repeated here.
  • Entropy coding 1007 can be used to perform entropy coding on the prediction residual (all-zero coefficients), and output the coded bits to the code stream buffer 1012.
  • Prediction 1008 can be used to perform point-level prediction or block-level prediction on the block to be coded under the condition that the mandatory rate control condition is not met, to obtain the prediction residual.
  • The code rate control 1009 can be used to determine the coding QP according to the texture complexity of the block to be coded and/or the fullness state of the code stream buffer. For details, refer to the relevant description in S708, which is not repeated here.
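  • One way rate control 1009 could map texture complexity and buffer fullness to a coding QP is sketched below; the linear weighting, the equal weights, and the 0 to 51 clip range are assumptions of the example, the architecture only requires that the QP grow with both quantities.

```python
def determine_coding_qp(texture_complexity, buffer_fullness,
                        min_qp=0, max_qp=51, w_texture=0.5, w_buffer=0.5):
    """Both inputs are normalized to [0, 1]; a more complex texture or a
    fuller code stream buffer yields a larger (coarser) coding QP."""
    score = w_texture * texture_complexity + w_buffer * buffer_fullness
    qp = min_qp + score * (max_qp - min_qp)
    return int(round(min(max(qp, min_qp), max_qp)))

# Example: moderately complex block, buffer 80% full -> relatively large QP.
print(determine_coding_qp(texture_complexity=0.4, buffer_fullness=0.8))
```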
  • the quantization 1010 can be used to quantize the prediction residual output by the prediction 1008 using the coding QP determined by the rate control 1009.
  • Entropy coding 1011 may be used to perform entropy coding on the quantized prediction residual output by quantization 1010, and output the coded bits to the code stream buffer 1012.
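  • For illustration, the kind of uniform scalar quantization that quantization 1010 could apply before entropy coding 1011 is sketched below; the linear QP-to-step mapping is an assumption of the sketch (practical codecs usually use an exponential mapping).

```python
import numpy as np

def quantize_residual(residual, qp):
    """Uniform scalar quantization of the prediction residual."""
    step = 1 + qp                       # assumed linear QP-to-step mapping
    return np.round(residual / step).astype(np.int32)

def dequantize_residual(levels, qp):
    step = 1 + qp
    return levels * step

# The quantized levels are what entropy coding 1011 would write to the buffer.
res = np.array([[-7, 3, 0, 12], [5, -2, 1, 0]])
levels = quantize_residual(res, qp=3)
print(levels, dequantize_residual(levels, qp=3))
```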
  • the code stream buffer 1012 can further influence 1001 and code rate control 1009.
  • the relationship between the code stream buffer 1012 and 1001 is explained in S601 and S702, and will not be repeated here.
  • For the relationship between the code stream buffer 1012 and the code rate control 1009, refer to the related description of the relationship between the fullness state of the code stream buffer and the first coding QP in S708, which is not repeated here.
  • Not limited to the space-frequency domain coding architecture and the spatial domain coding architecture, the coding method provided in the embodiments of this application is also applicable to a frequency domain coding architecture. FIG. 11 and FIG. 12 provide such frequency domain coding architectures. As shown in FIG. 11, the frequency domain coding architecture can include the following parts:
  • The mandatory code control condition judgment 1101 is consistent with the mandatory code control condition judgment 802. If the mandatory code rate control condition is met, the code rate is forcibly controlled through one spatial domain branch and one frequency domain branch at the same time.
  • The following introduces the two spatial domain branches and the two frequency domain branches respectively.
  • In a specific implementation, this coding architecture can select one spatial domain branch from the two spatial domain branches, and select one frequency domain branch from the two frequency domain branches, to forcibly control the code rate.
  • Spatial domain branch one can include the following parts: bit interception 1102, cost calculation 1 1103.
  • Spatial domain branch two can include the following parts: prediction 1 1102a, bit interception 1102, cost calculation 1 1103.
  • Spatial domain branch one is similar to spatial domain branch one involved in FIG. 8, and spatial domain branch two is similar to spatial domain branch two involved in FIG. 8; neither is repeated here.
  • The coding costs of the above spatial domain branch one and spatial domain branch two can be compared, and the branch with the smaller coding cost may be selected to perform the forced rate control of the spatial domain branch.
  • Frequency domain branch one can include the following parts: prediction 2 1104, transform 1105, quantization 1106a, coefficient discarding 1107a, cost calculation 2 1108.
  • Frequency domain branch two can include the following parts: prediction 2 1104, transform 1105, coefficient discarding 1106b, quantization 1107b, cost calculation 2 1108.
  • Frequency domain branch one is similar to frequency domain branch one involved in FIG. 8, and frequency domain branch two is similar to frequency domain branch two involved in FIG. 8; neither is repeated here.
  • The coding costs of frequency domain branch one and frequency domain branch two can be compared, and the branch with the smaller coding cost can be selected to perform the forced rate control of the frequency domain branch.
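  • The selection between the two frequency domain branches can be sketched as follows. The transform coefficients are taken as input, the cost is measured as the squared error between the coefficients before quantization and after reconstruction (one way of measuring the coding distortion), and m1 and m2 are the numbers of coefficients each branch discards; the simple frequency ordering and the linear QP-to-step mapping are assumptions of the example.

```python
import numpy as np

def _freq_order(n):
    """Indices of an n*n coefficient block ordered from low to high frequency (by i + j)."""
    return np.argsort([i + j for i in range(n) for j in range(n)])

def quantize_then_discard(coeffs, qp, m1):
    """Frequency domain branch one: quantize all coefficients, then zero the
    m1 highest-frequency quantized levels. Quantization already zeroes many
    small high-frequency coefficients, so m1 can be comparatively small."""
    step = 1 + qp
    flat = np.round(coeffs / step).flatten()
    flat[_freq_order(coeffs.shape[0])[-m1:]] = 0
    recon = flat.reshape(coeffs.shape) * step
    return flat, float(((coeffs - recon) ** 2).sum())   # squared error in the transform domain

def discard_then_quantize(coeffs, qp, m2):
    """Frequency domain branch two: zero the m2 highest-frequency coefficients
    first, then quantize the remaining ones."""
    step = 1 + qp
    flat = coeffs.flatten().astype(float)
    flat[_freq_order(coeffs.shape[0])[-m2:]] = 0.0
    levels = np.round(flat / step)
    recon = levels.reshape(coeffs.shape) * step
    return levels, float(((coeffs - recon) ** 2).sum())

def pick_frequency_branch(coeffs, qp, m1, m2):
    _, cost1 = quantize_then_discard(coeffs, qp, m1)
    _, cost2 = discard_then_quantize(coeffs, qp, m2)
    return "frequency domain branch one" if cost1 <= cost2 else "frequency domain branch two"
```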
  • the cost calculation 2 1108 is consistent with the cost calculation 2 809, and will not be repeated here.
  • the cost comparison 1109 is consistent with the cost comparison 810, and will not be repeated here.
  • Entropy coding 1110 is consistent with entropy coding 811, and will not be repeated here.
  • Prediction 1111 can be used to perform block-level prediction on the block to be coded under the condition that the mandatory rate control condition is not met to obtain the prediction residual.
  • the transform 1112 can be used to perform frequency domain transform on the prediction residual of the block to be coded, and output the transformed prediction residual.
  • The code rate control 1113 is consistent with the code rate control 1009, and is not repeated here.
  • the quantization 1114 can be used to quantize the transformed prediction residual output by the transformation 1112 using the coding QP determined by the rate control 1113.
  • Entropy coding 1115 is consistent with entropy coding 1011, and is not repeated here.
  • The code stream buffer 1116 is consistent with the code stream buffer 1012, and is not repeated here.
  • Not limited to the manners provided by frequency domain branch one or frequency domain branch two above, the forced rate control of the frequency domain branch can also be implemented in the following manner:
  • After block-level prediction is performed on the block to be coded, the prediction residual can be transformed into the frequency domain to obtain the N frequency domain transform coefficients of the prediction residual, and then M of the N frequency domain transform coefficients are set to zero; the transform coefficients are no longer quantized.
  • The second cost value is the coding cost value corresponding to zeroing the M frequency domain transform coefficients. Here, zeroing the M frequency domain transform coefficients can be achieved by means of quantization. In this embodiment of the application, the transform coefficients are only set to zero and are not additionally quantized, which simplifies the calculation and the coding architecture.
  • Figure 12 provides another frequency domain coding architecture. As shown in FIG. 12, this architecture differs from the frequency domain coding architecture provided in FIG. 11 only in the frequency domain branch used for forced rate control (frequency domain branch three); the other parts are the same. The following only introduces the differences; for the common parts, refer to the description of FIG. 11, which is not repeated here.
  • Frequency domain branch three can include the following parts: prediction 2 1204, cost calculation 2 1205.
  • Frequency domain branch three is similar to frequency domain branch three involved in FIG. 9, and is not repeated here.
  • That is, the frequency domain coding architecture shown in Figure 12 discards all transform coefficients (that is, all-zero coefficients) in the forced rate control of the frequency domain branch; entropy coding can then be performed directly on the quantized prediction residual (all-zero coefficients), and the coded bits are output to the code stream buffer 1213. This simplifies the coding architecture, reduces the amount of calculation, and improves coding efficiency.
  • An embodiment of this application further provides an encoder. As shown in FIG. 13, the encoder 130 may at least include: a bit interception module 1301, a first cost calculation module 1302, a prediction module 1303, a second cost calculation module 1304, and a comparison determination module 1305. Specifically:
  • The bit interception module 1301 can be used to perform bit interception on the block to be coded when the mandatory rate control condition is met. For details, refer to the description of S601, which is not repeated here.
  • The first cost calculation module 1302 may be used to calculate the first cost value corresponding to the bit interception. For details, refer to the description of S602, which is not repeated here.
  • the prediction module 1303 may be used to perform block-level prediction on the block to be coded when the mandatory rate control condition is met, and determine the prediction residual of the block to be coded. For details, please refer to the description of S603, which is not repeated here.
  • The second cost calculation module 1304 may be used to calculate, according to the prediction residual, the second cost value corresponding to the block-level prediction. For details, refer to the description of S604, which is not repeated here.
  • The comparison determination module 1305 can be used to compare the first cost value with the second cost value to determine the coded bits. For details, refer to the description of S605, which is not repeated here.
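  • The decision made by the comparison determination module 1305 can be sketched as a simple selector; the mode string returned alongside the payload reflects the note elsewhere in this application that the coded bits may carry their coding manner, and the payload contents are placeholders.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ForcedRateControlResult:
    mode: str        # "bit_interception" or "prediction"
    payload: Any     # truncated bits, or entropy-coded residual/coefficients

def compare_and_determine(first_cost, truncated_payload, second_cost, prediction_payload):
    """Pick the forced rate control branch with the smaller cost value.
    When the costs are equal, either branch may be chosen (branch one here)."""
    if first_cost <= second_cost:
        return ForcedRateControlResult("bit_interception", truncated_payload)
    return ForcedRateControlResult("prediction", prediction_payload)
```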
  • The encoder 130 may further include: a judgment module 1306, a first rate control module 1307, a precoding module 1308, a coding domain decision module 1309, a second rate control module 1310, and a real coding module 1311. Specifically:
  • the judging module 1306 can be used to judge whether the mandatory code rate control condition is satisfied. For details, please refer to the description of S702, which will not be repeated here.
  • The first rate control module 1307 may be used to determine the first coding quantization parameter QP when the mandatory rate control condition is not met. For details, refer to the description of S708, which is not repeated here.
  • the precoding module 1308 may be used to precode the block to be coded by using the first coding QP in multiple prediction modes to obtain the precoding result information corresponding to each prediction mode. For details, refer to the description of S709, which is not repeated here.
  • the coding domain decision module 1309 can be used to select the best prediction mode from multiple prediction modes. For details, refer to the description of S710, which is not repeated here.
  • The second code rate control module 1310 may be used to adjust the first coding QP by using the coding result information corresponding to the best prediction mode, to obtain the second coding QP. For details, refer to the description of S711, which is not repeated here.
  • the real coding module 1311 can be used to perform real coding on the block to be coded by adopting the second coding QP in the best prediction mode. For details, please refer to the description of S712, which is not repeated here.
  • the encoder 130 further includes: an output module 1312, which can be used to output coded bits to a code stream buffer.
  • The judging module 1306 can be specifically used to judge, according to the fullness state of the code stream buffer, whether the mandatory code rate control condition is satisfied. For details, refer to the description of S702, which is not repeated here.
  • The second code rate control module 1310 can be specifically used to adjust the first coding QP according to the fullness state of the code stream buffer and the coding result information corresponding to the best prediction mode. For details, refer to the description of S711, which is not repeated here.
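  • A sketch of how the judgment module 1306 and the second code rate control module 1310 could use the buffer fullness is shown below. The 90% trigger threshold and the target-bit mapping reproduce the numerical examples given in this application for the forced rate control condition and for S711; the plus/minus one QP step is an assumption of the sketch.

```python
def forced_rate_control_needed(buffer_fullness, threshold=0.9):
    """Judgment module 1306: trigger forced rate control when the code stream
    buffer is nearly full (0.9 is the example threshold used in this application)."""
    return buffer_fullness >= threshold

def adjust_first_qp(first_qp, estimated_bits, output_bits, buffer_fullness,
                    step=1, max_qp=51):
    """Second code rate control module 1310: derive the target bit budget from
    the buffer fullness and nudge the QP so the block tends toward that budget."""
    # Fuller buffer -> smaller target; e.g. output_bits=100 gives 90 at 60% fullness,
    # 70 at 80% fullness, 120 at 30% fullness, matching the example in S711.
    target_bits = output_bits * (1.5 - buffer_fullness)
    if estimated_bits > target_bits:
        return min(first_qp + step, max_qp)
    if estimated_bits < target_bits:
        return max(first_qp - step, 0)
    return first_qp
```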
  • The comparison determination module 1305 may be specifically used to determine, when the first cost value is less than the second cost value, that the coded bits are the coded bits obtained after the bit interception.
  • The comparison determination module 1305 may be specifically used to determine, when the first cost value is greater than the second cost value, that the coded bits are the coded bits obtained by entropy coding the prediction residual.
  • The second cost calculation module 1304 may include: a transform unit, a zero-setting unit, and a cost calculation unit. Specifically:
  • the transform unit can be used to perform frequency domain transform on the prediction residual to obtain N frequency domain transform coefficients of the prediction residual, where N is a positive integer.
  • The zero-setting unit can be used to zero M frequency domain transform coefficients among the N frequency domain transform coefficients, to obtain the N zeroed frequency domain transform coefficients, where M is a positive integer less than N.
  • The cost calculation unit may be used to calculate the second cost value corresponding to zeroing the M frequency domain transform coefficients.
  • The comparison determination module 1305 can be specifically used to determine, when the first cost value is greater than the second cost value, that the coded bits are the coded bits obtained by entropy coding the N zeroed frequency domain transform coefficients.
  • The second cost calculation module 1304 may further include a quantization unit, configured to quantize the N frequency domain transform coefficients after the transform unit performs the frequency domain transform on the prediction residual to obtain the N frequency domain transform coefficients of the prediction residual and before the zero-setting unit zeros M frequency domain transform coefficients among the N frequency domain transform coefficients, to obtain N quantized frequency domain transform coefficients.
  • In this case, the zero-setting unit may be specifically used to zero M frequency domain transform coefficients among the N quantized frequency domain transform coefficients.
  • Alternatively, the second cost calculation module 1304 may further include a quantization unit, configured to quantize the N zeroed frequency domain transform coefficients after the zero-setting unit zeros M frequency domain transform coefficients among the N frequency domain transform coefficients and before the cost calculation unit calculates the second cost value corresponding to zeroing the M frequency domain transform coefficients.
  • In this case, the cost calculation unit may be specifically used to calculate the second cost value after the N zeroed frequency domain transform coefficients are quantized.
  • the encoder further includes: a pre-analysis module 1313, which can be used to select a target prediction mode from multiple prediction modes based on a preset cost calculation rule.
  • The target prediction mode is the prediction mode with the smallest cost value among the multiple prediction modes, and different prediction modes correspond to different prediction directions and/or different prediction value calculation methods.
  • The prediction module 1303 may be specifically used to, when the mandatory rate control condition is satisfied, predict the block to be coded in the target prediction mode and determine the prediction residual of the block to be coded.
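  • A toy sketch of the pre-analysis module 1313 selecting the target prediction mode is given below: each candidate mode predicts the block, and the mode with the smallest residual cost (SAD assumed here) is kept. The two candidate modes are simplified stand-ins for the prediction modes described with FIG. 4 and FIG. 5.

```python
import numpy as np

def preanalysis_select_mode(block, predictors):
    """Pre-analysis: evaluate every candidate prediction mode and keep the one
    with the smallest cost value (SAD of the prediction residual here)."""
    costs = {}
    for name, predict in predictors.items():
        residual = block.astype(np.int64) - predict(block)
        costs[name] = int(np.abs(residual).sum())
    return min(costs, key=costs.get)

def vertical_pred(block):
    """Toy vertical mode: predict each pixel from the pixel above it."""
    p = np.empty_like(block)
    p[0] = block[0]
    p[1:] = block[:-1]
    return p

def horizontal_pred(block):
    """Toy horizontal mode: predict each pixel from the pixel to its left."""
    p = np.empty_like(block)
    p[:, 0] = block[:, 0]
    p[:, 1:] = block[:, :-1]
    return p

# A block that varies horizontally but not vertically: the vertical mode wins.
blk = np.tile(np.arange(8), (8, 1))
print(preanalysis_select_mode(blk, {"vertical": vertical_pred, "horizontal": horizontal_pred}))
```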
  • The embodiments of this application further provide a computer-readable storage medium. The computer-readable storage medium stores instructions which, when run on a computer or a processor, cause the computer or the processor to perform one or more steps of any one of the foregoing methods. If the component modules of the foregoing signal processing apparatus are implemented in the form of software functional units and sold or used as independent products, they may be stored in the computer-readable storage medium.
  • All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • When software is used for implementation, the embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
  • The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
  • A person of ordinary skill in the art can understand that all or some of the procedures of the foregoing method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the procedures of the foregoing method embodiments may be included.
  • The storage medium may be a magnetic disk, an optical disc, a ROM, a random access memory (RAM), or the like.
  • the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This application relates to the field of image processing and provides an encoding method and an encoder. The method includes: when a forced code rate control condition is met, performing both bit interception of a spatial domain branch and coefficient discarding of a frequency domain branch on a block to be coded; calculating the cost values corresponding to the bit interception and the coefficient discarding respectively; and determining the final coded bits according to the two cost values, so that the quality of the decoded image is improved as much as possible on the premise of preventing the code stream buffer from overflowing.

Description

编码方法及编码器
本申请要求于2019年12月31日提交中国国家知识产权局、申请号为201911409778.7、申请名称为“编码方法及编码器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及图像处理领域,尤其涉及一种编码方法及编码器。
背景技术
近年来,视频图像应用在多个维度(如分辨率、帧率等)的提升,使得视频处理系统的处理数据量大幅增长。视频处理系统的带宽、功耗及成本均大幅增加。为了有效降低视频处理系统的带宽、功耗与成本,压缩技术被应用于视频处理系统中。
压缩技术能够有效地减少视频处理系统的内存与带宽的占用,降低视频处理系统的成本。相比于无损压缩,视觉无损的有损压缩通常有着更高的压缩率,能够节省更多的内存与带宽占用。码率控制作为有损压缩的重要环节,必须保证编码码率满足系统内存和带宽限制。在编码码率即将到达系统上限的极限条件下,现有技术中通常通过强制手段压缩码率,使得编码码率满足系统内存和带宽限制。而此时在解码图像中往往会引入较大的视觉失真。因此,现有的强制码率控制手段还存在优化的空间。
发明内容
本申请实施例提供了一种编码方法及编码器,可以在保证编码码率满足系统内存和带宽限制的前提下,提高解码图像质量。
第一方面,本申请实施例提供了一种编码方法,包括:在满足强制码率控制条件时,对待编码块进行比特截取;计算上述比特截取对应的第一代价值;在满足上述强制码率控制条件时,还对上述待编码块进行预测,确定上述待编码块的预测残差;根据上述预测残差计算上述预测对应的第二代价值;对比上述第一代价值与上述第二代价值,确定编码比特。
在一种可能的实施方式中,上述编码比特携带上述编码比特的编码方式,上述编码方式为比特截取或预测。
具体地,上述比特截取可以是对待编码块的当前编码像素的像素值进行比特截取,也可以是对待编码块进行预测后,对预测残差进行比特截取。本申请中将比特截取的方式称为空域支路的强制码率控制方式。其中,空域支路的预测可以是点级预测,也可以是块级预测。
其中,块级预测以编码块为预测单元,当前编码块内的像素不可作为当前编码块内后续像素的参考像素。点级预测以点为预测单元,当前编码块内的像素可作为当前编码块内后续像素的参考像素。
在一种可能的实现方式中,可以将对待编码块的当前编码像素的像素值进行比特截取 称为空域支路的强制码率控制方式一,将对待编码像素的预测残差进行比特截取的方式称为空域支路的强制码率控制方式二。当执行空域支路的强制码率控制时,可以对比上述方式一与方式二的编码代价,选择编码代价较小的方式作为空域支路的强制码率控制方式。
具体地,上述在满足上述强制码率控制条件时,还对上述待编码块进行预测,可以看作是频域支路的强制码率控制方式。其中,频域支路的预测可以是块级预测。
具体地,确定编码比特后,还可以在编码码流中标记编码比特的编码方式,以使解码端根据编码比特的编码方式进行解码。
可能地,计算代价值的方式可以包括以下任意一项:编码码率(rate,R)、率失真代价(rate-distortion cost,RDCOST)、编码失真大小等。
其中,编码失真大小可以采用绝对误差和(sum of absolute difference,SAD)、平均绝对差(mean absolute differences,MAD)、平方误差和(sum of squared difference,SSD)、差值平方和(sum of squared error,SSE)、均方误差(mean squared error,MSE)等测度进行衡量。
本申请实施例在系统需要强制控制码率时,预估分别采用空域支路及频域支路提供的两种不同方式控制码率后可能造成的编码代价,选择编码代价较小的方式来控制码率,最终输出编码比特,使输出的有限的编码比特尽可能多的携带有用信息(即人眼容易感知的信息),在保证编码码率满足系统内存和带宽限制的前提下,提高解码图像质量。
在一种可能的实现方式中,上述在满足强制码率控制条件时,对待编码块进行比特截取之前,上述方法还包括:判断是否满足上述强制码率控制条件;在不满足上述强制码率控制条件时,确定第一编码量化参数QP;分别在多种预测模式下采用上述第一编码QP对上述待编码块进行预编码,得到各个预测模式各自对应的预编码结果信息;从上述多种预测模式中选择最佳预测模式;采用上述最佳预测模式对应的编码结果信息调整上述第一编码QP,得到第二编码QP;在上述最佳预测模式下采用上述第二编码QP对上述待编码块进行实编码。
可能地,上述预编码结果信息包括以下至少一项:上述待编码块在上述第一编码QP下的编码比特数、上述待编码块在上述第一编码QP下的编码失真大小、上述待编码块在上述第一编码QP下的编码率失真代价、上述待编码块的预测残差及上述待编码块的纹理复杂度。
本申请实施例可以在系统不需要强制控制码率时,采用多种预测模式对待编码块进行预编码,从而确定最佳的预测模式,再采用该最佳的预测模式对应的预编码结果信息对第一编码QP进行调整,在该最佳的预测模式下采用调整后的编码QP对待编码块进行实编码,可以合理利用码率传递质量更好的图像数据,从而提升图像压缩的性能。
在一种可能的实现方式中,上述确定编码比特后,上述方法还包括:输出上述编码比特至码流缓冲区;上述判断是否满足强制码率控制条件,包括:根据上述码流缓冲区的充盈状态判断是否满足上述强制码率控制条件;上述采用上述最佳预测模式对应的编码结果信息调整上述第一编码QP,包括:根据上述码流缓冲区的充盈状态及上述最佳预测模式对应的编码结果信息调整上述第一编码QP。
本申请实施例可以根据码流缓冲区的充盈状态来判断是否需要强制控制码率。当码流 缓冲区快要溢出时强制控制码率可以减小待编码块的输出码率,从而防止码流缓冲区上溢,保证实际编码码率小于目标码率。此外,在不需要强制控制码率时,码流缓冲区的充盈状态还可以用于调节编码QP。若码流缓冲区较满,则增大QP以防码流缓冲区溢出;若码流缓冲区较空,则减小QP,以使编码后的图像携带更多的图像信息,保证图像压缩质量。
在一种可能的实现方式中,上述对比上述第一代价值与上述第二代价值,确定编码比特,具体包括:在上述第一代价值小于上述第二代价值的情况下,确定上述编码比特为上述比特截取后的编码比特。
当第一代价值小于第二代价值时,表明通过空域支路的比特截取的方式来控制码率带来的图像损失较小,则优先选择空域支路的比特截取的方式来控制码率,输出的编码比特即为比特截取后的编码比特。
在一种可能的实现方式中,上述对比上述第一代价值与上述第二代价值,确定编码比特,具体包括:在上述第一代价值大于上述第二代价值的情况下,确定上述编码比特为对上述预测残差进行熵编码后的编码比特。
当第二代价值小于第一代价值时,表明通过频域支路的块级预测的方式来控制码率带来的图像损失较小,则优先选择频域支路的块级预测的方式来控制码率,输出的编码比特即为对频域支路的块级预测输出的残差信息进行熵编码后的编码比特。上述对待编码块进行块级预测后,直接对待编码块的残差信息进行熵编码的方式可以看作是对预测残差进行频域变换,得到预测残差的N个频域变换系数后,将N个频域变换系数全部置零,即将变换系数全部丢弃。本申请实施例将这种方式称为频域支路的变换系数全丢弃。
不限于将变换系数全丢弃,在频域支路中还可以有变换系数部分丢弃的方式来强制控制码率。以下将变换系数全丢弃以及变换系数部分丢弃统称为频域支路的系数丢弃。
在另外一种可能的实现方式中,上述根据上述预测残差计算上述预测对应的第二代价值,包括:对上述预测残差进行频域变换,得到上述预测残差的N个频域变换系数,上述N为正整数;将上述N个频域变换系数中的M个频域变换系数置零,得到置零后的N个频域变换系数,上述M为小于N的正整数;计算将M个频域变换系数置零对应的第二代价值;上述对比上述第一代价值与上述第二代价值,确定编码比特,具体包括:在上述第一代价值大于上述第二代价值的情况下,确定上述编码比特为对上述置零后的N个频域变换系数进行熵编码后的编码比特。上述将N个频域变换系数中的M个频域变换系数置零的方式即可看作频域支路的变换系数部分丢弃。
具体地,第二代价值的计算方式可以是编码码率、RDCOST、编码失真大小等。其中,编码失真大小可以采用SAD、MAD、SSD、SSE、MSE等测度进行衡量。编码失真大小,具体可以是变换之前的残差信息与反变换之后的残差信息的差值。
当第一代价值等于第二代价值时,可以任意选择比特截取的方式或者变换系数丢弃的方式来控制码率,输出的编码比特即为比特截取后的编码比特或者为对预测输出的残差信息进行熵编码后的编码比特。本申请实施例对代价值相等的情况下选择的强制码率控制方式不做限定。
本申请实施例,频域支路的系数丢弃仅对残差信息的频域变换系数进行操作,即使丢弃较多的变换系数,待编码块的预测信息仍能保留图像的部分纹理信息,通过将部分变换 系数进行丢弃(即将变换系数置零),既可以达到降低码率的效果,又可以保证解码图像质量。
在另外一种可能的实现方式中,上述对上述预测残差进行频域变换,得到上述预测残差的N个频域变换系数之后,上述将上述N个频域变换系数中的M个频域变换系数置零之前,上述方法还包括:对上述N个频域变换系数进行量化,得到N个量化后的频域变换系数;上述将上述N个频域变换系数中的M个频域变换系数置零,包括:将上述N个量化后频域变换系数中的M个频域变换系数置零。
在另外一种可能的实现方式中,上述将上述N个频域变换系数中的M个频域变换系数置零之后,上述计算将M个频域变换系数置零对应的第二代价值之前,上述方法还包括:对上述置零后的N个频域变换系数进行量化;上述计算将M个频域变换系数置零对应的第二代价值,包括:计算将上述置零后的N个频域变换系数进行量化后的第二代价值。本申请实施例将变换系数置零后再进行量化,可以减少参与量化的变换系数。
对比上述对量化后的变换系数置零(丢弃)以及将变换系数置零(丢弃)后再量化这两种方式,前者可以保留更多的频率成分(量化在系数置零之前置零的系数的数量可小于或等于量化在置零之后置零的系数的数量),有利于提高特定图像内容的图像质量。
在另外一种可能的实现方式中,可以将先对变换系数量化再丢弃部分系数的方式称为频域支路的强制码率控制方式一,将丢弃部分系数后再量化的方式称为频域支路的强制码率控制方式二。当执行频域支路的强制码率控制时,可以对比上述方式一与方式二的编码代价,选择编码代价较小的方式作为频域支路的强制码率控制方式。
在另外一种可能的实现方式中,上述在满足上述强制码率控制条件时,还对上述待编码块进行预测,确定上述待编码块的预测残差之前,上述方法还包括:基于预设代价计算规则从多个预测模式中选择目标预测模式;上述目标预测模式为上述多个预测模式中代价值最小的预测模式,不同的预测模式对应不同的预测方向和/或不同的预测值计算方法;上述在满足上述强制码率控制条件时,还对上述待编码块进行预测,确定上述待编码块的预测残差,包括:在满足上述强制码率控制条件时,还在上述目标预测模式下对上述待编码块进行上述块级预测,确定上述待编码块的预测残差。
本申请实施例还可以对多种预测模式进行分析,选择编码代价最小的预测模式对待编码块进行块级预测,以减小编码的失真,保证图像编码的质量。
第二方面,本申请实施例提供了一种编码器,包括:比特截取模块,用于在满足强制码率控制条件时,对待编码块进行比特截取;第一代价计算模块,用于计算上述比特截取对应的第一代价值;预测模块,用于在满足上述强制码率控制条件时,对上述待编码块进行预测,确定上述待编码块的预测残差;第二代价计算模块,用于根据上述预测残差计算上述预测对应的第二代价值;对比确定模块,用于对比上述第一代价值与上述第二代价值,确定编码比特。
在一种可能的实施方式中,上述编码比特携带上述编码比特的编码方式,上述编码方式为比特截取或预测。
可能地,确定编码比特后,还可以在编码码流中标记编码比特的编码方式,以使解码端根据编码比特的编码方式进行解码。
可能地,计算代价值的方式可以包括以下任意一项:编码码率、RDCOST、编码失真大小等。其中,编码失真大小可以采用SAD、MAD、SSD、SSE、MSE等测度进行衡量。
在一种可能的实现方式中,上述编码器还包括:判断模块,用于判断是否满足上述强制码率控制条件;第一码率控制模块,用于在不满足上述强制码率控制条件时,确定第一编码量化参数QP;预编码模块,用于分别在多种预测模式下采用上述第一编码QP对上述待编码块进行预编码,得到各个预测模式各自对应的预编码结果信息;选择模块,用于从上述多种预测模式中选择最佳预测模式;第二码率控制模块,用于采用上述最佳预测模式对应的编码结果信息调整上述第一编码QP,得到第二编码QP;实编码模块,用于在上述最佳预测模式下采用上述第二编码QP对上述待编码块进行实编码。
在一种可能的实现方式中,上述编码器还包括:输出模块,用于输出上述编码比特至码流缓冲区;上述判断模块,具体用于:根据上述码流缓冲区的充盈状态判断是否满足上述强制码率控制条件;上述第二码率控制模块,具体用于:根据上述码流缓冲区的充盈状态及上述最佳预测模式对应的编码结果信息调整上述第一编码QP。
在一种可能的实现方式中,上述对比确定模块,具体用于:在上述第一代价值小于上述第二代价值的情况下,确定上述编码比特为上述比特截取后的编码比特。
在另外一种可能的实现方式中,上述对比确定模块,具体用于:在上述第一代价值大于上述第二代价值的情况下,确定上述编码比特为对上述预测残差进行熵编码后的编码比特。
在另外一种可能的实现方式中,上述第二代价计算模块,包括:变换单元,用于对上述预测残差进行频域变换,得到上述预测残差的N个频域变换系数,上述N为正整数;置零单元,用于将上述N个频域变换系数中的M个频域变换系数置零,得到置零后的N个频域变换系数,上述M为小于N的正整数;代价计算单元,用于计算将M个频域变换系数置零对应的第二代价值;上述对比确定模块,具体用于:在上述第一代价值大于上述第二代价值的情况下,确定上述编码比特为对上述置零后的N个频域变换系数进行熵编码后的编码比特。
具体地,第二代价值的计算方式可以是计算编码失真大小,具体可以是变换之前的残差信息与反变换之后的残差信息的差值。
在另外一种可能的实现方式中,上述第二代价计算模块还包括:量化单元,用于在上述变换单元对上述预测残差进行频域变换,得到上述预测残差的N个频域变换系数之后,上述置零单元将上述N个频域变换系数中的M个频域变换系数置零之前,对上述N个频域变换系数进行量化,得到N个量化后的频域变换系数;上述置零单元,具体用于:将上述N个量化后频域变换系数中的M个频域变换系数置零。
在另外一种可能的实现方式中,上述第二代价计算模块还包括:量化单元,用于在上述置零单元将上述N个频域变换系数中的M个频域变换系数置零之后,上述代价计算单元计算将M个频域变换系数置零对应的第二代价值之前,对上述置零后的N个频域变换系数进行量化;上述代价计算单元,具体用于:计算将上述置零后的N个频域变换系数进行量化后的第二代价值。
在另外一种可能的实现方式中,上述编码器还包括:预分析模块,用于基于预设代价 计算规则从多个预测模式中选择目标预测模式;上述目标预测模式为上述多个预测模式中代价值最小的预测模式,不同的预测模式对应不同的预测方向和/或不同的预测值计算方法;上述块级预测模块,具体用于:在满足上述强制码率控制条件时,在上述目标预测模式下对上述待编码块进行上述块级预测,确定上述待编码块的预测残差。
第三方面,本申请实施例提供了一种编码器,包括:处理器和传输接口;上述处理器用于调用存储器中存储的软件指令,以执行本申请实施例第一方面或第一方面的任意一种可能的实现方式提供的编码方法。
在一种可能的实现方式中,上述编码器还包括:上述存储器。
第四方面,本申请实施例提供了一种计算机可读存储介质,上述计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得上述计算机或处理器执行本申请实施例第一方面或第一方面的任意一种可能的实现方式提供的编码方法。
第五方面,本申请实施例提供了一种包含指令的计算机程序产品,当其在计算机或处理器上运行时,使得上述计算机或处理器执行本申请实施例第一方面或第一方面的任意一种可能的实现方式提供的编码方法。
可以理解地,上述提供的第二方面提供的编码器、第三方面提供的编码器、第四方面提供的计算机存储介质,以及第五方面提供的计算机程序产品均用于执行第一方面所提供的编码方法。因此,其所能达到的有益效果可参考第一方面所提供的编码方法中的有益效果,此处不再赘述。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需使用的附图作简单地介绍。
图1为本申请适用的视频编码及解码系统框图;
图2为本申请适用的视频译码系统框图;
图3为本申请实施例提供的视频译码设备的结构示意图;
图4为申请实施例提供的块级预测模式示意图;
图5为本申请实施例提供的几种点级预测模式示意图;
图6为本申请实施例提供的一种编码方法的流程示意图;
图7为本申请实施例提供的另外一种编码方法的流程示意图;
图8为本申请实施例适用的空频域编码架构示意图;
图9为本申请实施例适用的另一种空频域编码架构示意图;
图10为本申请实施例适用的一种空域编码架构示意图;
图11为本申请实施例适用的频域编码架构示意图;
图12为本申请实施例适用的另一种频域编码架构示意图;
图13为本申请实施例提供的一种编码器的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地 描述。
本申请实施例所涉及的技术方案不仅可能应用于现有的视频编码标准中(如H.264、HEVC等标准),还可能应用于未来的视频编码标准中(如H.266标准)。本申请的实施方式部分使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。下面先对本申请实施例可能涉及的一些概念进行简单介绍。
视频编码通常是指处理形成视频或视频序列的图片序列。在视频编码领域,术语“图片(picture)”、“帧(frame)”或“图像(image)”可以用作同义词。视频编码在源侧执行,通常包括处理(例如,通过压缩)原始视频图片以减少表示该视频图片所需的数据量,从而更高效地存储和/或传输。视频解码在目的地侧执行,通常包括相对于编码器作逆处理,以重建视频图片。
视频序列包括一系列图像(picture),图像被进一步划分为切片(slice),切片再被划分为块(block)。视频编码以块为单位进行编码处理,在一些新的视频编码标准中,块的概念被进一步扩展。比如,在H.264标准中有宏块(macroblock,MB),宏块可进一步划分成多个可用于预测编码的预测块(partition)。在高性能视频编码(high efficiency video coding,HEVC)标准中,采用编码单元(coding unit,CU),预测单元(prediction unit,PU)和变换单元(transform unit,TU)等基本概念。
本文中,为了便于描述和理解,可将当前图像中待处理的编码块称为当前编码块或者待处理编码块,例如在编码中,指当前正在编码的块;在解码中,指当前正在解码的块。将参考图像中用于对当前块进行预测的已解码的编码块称为参考块,即参考块是为当前块提供参考信号的块,其中,参考信号表示编码块内的像素值。可将参考图像中为当前块提供预测信号的块为预测块,其中,预测信号表示预测块内的像素值或者采样值或者采样信号。例如,在遍历多个参考块以后,找到了最佳参考块,此最佳参考块将为当前块提供预测,此块称为预测块。
无损视频编码情况下,可以重建原始视频图片,即经重建视频图片具有与原始视频图片相同的质量(假设存储或传输期间没有传输损耗或其它数据丢失)。在有损视频编码情况下,通过例如量化执行进一步压缩,来减少表示视频图片所需的数据量,而解码器侧无法完全重建视频图片,即经重建视频图片的质量相比原始视频图片的质量较低或较差。
下面描述本申请实施例所应用的系统架构。参见图1,图1示例性地给出了本申请实施例所应用的视频编码及解码系统10的示意性框图。如图1所示,视频编码及解码系统10可包括源设备12和目的地设备14,源设备12产生经过编码的视频数据,因此,源设备12可被称为视频编码装置。目的地设备14可对由源设备12所产生的经过编码的视频数据进行解码,因此,目的地设备14可被称为视频解码装置。源设备12、目的地设备14或两个的各种实施方案可包含一或多个处理器以及耦合到所述一或多个处理器的存储器。所述存储器可包含但不限于随机存储记忆体(random access memory,RAM)、只读存储记忆体(read-only memory,ROM)、电可擦可编程只读存储器(electrically erasable programmable read only memory,EEPROM)、快闪存储器或可用于以可由计算机存取的指令或数据结构的形式存储所要的程序代码的任何其它媒体,如本文所描述。源设备12和目的地设备14可以包括各种装置,包含桌上型计算机、移动计算装置、笔记型(例如,膝上型)计算机、平 板计算机、机顶盒、例如所谓的“智能”电话等电话手持机、电视机、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机、无线通信设备或其类似者。
虽然图1将源设备12和目的地设备14绘示为单独的设备,但设备实施例也可以同时包括源设备12和目的地设备14或同时包括两者的功能性,即源设备12或对应的功能性以及目的地设备14或对应的功能性。在此类实施例中,可以使用相同硬件和/或软件,或使用单独的硬件和/或软件,或其任何组合来实施源设备12或对应的功能性以及目的地设备14或对应的功能性。
源设备12和目的地设备14之间可通过链路13进行通信连接,目的地设备14可经由链路13从源设备12接收经过编码的视频数据。链路13可包括能够将经过编码的视频数据从源设备12移动到目的地设备14的一或多个媒体或装置。在一个实例中,链路13可包括使得源设备12能够实时将经过编码的视频数据直接发射到目的地设备14的一或多个通信媒体。在此实例中,源设备12可根据通信标准(例如无线通信协议)来调制经过编码的视频数据,且可将经调制的视频数据发射到目的地设备14。所述一或多个通信媒体可包含无线和/或有线通信媒体,例如射频(RF)频谱或一或多个物理传输线。所述一或多个通信媒体可形成基于分组的网络的一部分,基于分组的网络例如为局域网、广域网或全球网络(例如,因特网)。所述一或多个通信媒体可包含路由器、交换器、基站或促进从源设备12到目的地设备14的通信的其它设备。
源设备12包括编码器20,另外可选地,源设备12还可以包括图片源16、图片预处理器18、以及通信接口22。具体实现形态中,所述编码器20、图片源16、图片预处理器18、以及通信接口22可能是源设备12中的硬件部件,也可能是源设备12中的软件程序。分别描述如下:
图片源16,可以包括或可以为任何类别的图片捕获设备,用于例如捕获现实世界图片,和/或任何类别的图片或评论(对于屏幕内容编码,屏幕上的一些文字也认为是待编码的图片或图像的一部分)生成设备,例如,用于生成计算机动画图片的计算机图形处理器,或用于获取和/或提供现实世界图片、计算机动画图片(例如,屏幕内容、虚拟现实(virtual reality,VR)图片)的任何类别设备,和/或其任何组合(例如,实景(augmented reality,AR)图片)。图片源16可以为用于捕获图片的相机或者用于存储图片的存储器,图片源16还可以包括存储先前捕获或产生的图片和/或获取或接收图片的任何类别的(内部或外部)接口。当图片源16为相机时,图片源16可例如为本地的或集成在源设备中的集成相机;当图片源16为存储器时,图片源16可为本地的或例如集成在源设备中的集成存储器。当所述图片源16包括接口时,接口可例如为从外部视频源接收图片的外部接口,外部视频源例如为外部图片捕获设备,比如相机、外部存储器或外部图片生成设备,外部图片生成设备例如为外部计算机图形处理器、计算机或服务器。接口可以为根据任何专有或标准化接口协议的任何类别的接口,例如有线或无线接口、光接口。
其中,图片可以视为像素点(picture element)的二维阵列或矩阵。本申请实施例中,由图片源16传输至图片处理器的图片也可称为原始图片数据17。
图片预处理器18,用于接收原始图片数据17并对原始图片数据17执行预处理,以获取经预处理的图片19或经预处理的图片数据19。例如,图片预处理器18执行的预处理可 以包括整修、色彩格式转换、调色或去噪。
编码器20(或称视频编码器20),用于接收经预处理的图片数据19,采用相关预测模式(如本文各个实施例中的预测模式)对经预处理的图片数据19进行处理,从而提供经过编码的图片数据21(下文将进一步基于图3描述编码器20的结构细节)。在一些实施例中,编码器20可以用于执行后文所描述的各个实施例,以实现本申请所描述的编码方法。
通信接口22,可用于接收经编码图片数据21,并可通过链路13将经编码图片数据21传输至目的地设备14或任何其它设备(如存储器),以用于存储或直接重建,所述其它设备可为任何用于解码或存储的设备。通信接口22可例如用于将经过编码的图片数据21封装成合适的格式,例如数据包,以在链路13上传输。
目的地设备14包括解码器30,另外可选地,目的地设备14还可以包括通信接口28、图片后处理器32和显示设备34。分别描述如下:
通信接口28,可用于从源设备12或任何其它源接收经过编码的图片数据21,所述任何其它源例如为存储设备,存储设备例如为经过编码的图片数据存储设备。通信接口28可以用于藉由源设备12和目的地设备14之间的链路13或藉由任何类别的网络传输或接收经过编码的图片数据21,链路13例如为直接有线或无线连接,任何类别的网络例如为有线或无线网络或其任何组合,或任何类别的私网和公网,或其任何组合。通信接口28可以例如用于解封装通信接口22所传输的数据包以获取经过编码的图片数据21。
通信接口28和通信接口22都可以配置为单向通信接口或者双向通信接口,以及可以用于例如发送和接收消息来建立连接、确认和交换任何其它与通信链路和/或例如经过编码的图片数据传输的数据传输有关的信息。
解码器30(或称为解码器30),用于接收经过编码的图片数据21并提供经过解码的图片数据31或经过解码的图片31。
图片后处理器32,用于对经解码图片数据31(也称为经重建图片数据)执行后处理,以获得经后处理图片数据33。图片后处理器32执行的后处理可以包括:色彩格式转换(例如,从YUV格式转换为RGB格式)、调色、整修或重采样,或任何其它处理,还可用于将经后处理图片数据33传输至显示设备34。
显示设备34,用于接收经后处理图片数据33以向例如用户或观看者显示图片。显示设备34可以为或可以包括任何类别的用于呈现经重建图片的显示器,例如,集成的或外部的显示器或监视器。例如,显示器可以包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微LED显示器、硅基液晶(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任何类别的其它显示器。
虽然,图1将源设备12和目的地设备14绘示为单独的设备,但设备实施例也可以同时包括源设备12和目的地设备14或同时包括两者的功能性,即源设备12或对应的功能性以及目的地设备14或对应的功能性。在此类实施例中,可以使用相同硬件和/或软件,或使用单独的硬件和/或软件,或其任何组合来实施源设备12或对应的功能性以及目的地设备14或对应的功能性。
本领域技术人员基于描述明显可知,不同单元的功能性或图1所示的源设备12和/或 目的地设备14的功能性的存在和(准确)划分可能根据实际设备和应用有所不同。源设备12和目的地设备14可以包括各种设备中的任一个,包含任何类别的手持或静止设备,例如,笔记本或膝上型计算机、移动电话、智能手机、平板或平板计算机、摄像机、台式计算机、机顶盒、电视机、相机、车载设备、显示设备、数字媒体播放器、视频游戏控制台、视频流式传输设备(例如内容服务服务器或内容分发服务器)、广播接收器设备、广播发射器设备等,并可以不使用或使用任何类别的操作系统。
编码器20和解码器30都可以实施为各种合适电路中的任一个,例如,一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件或其任何组合。如果部分地以软件实施所述技术,则设备可将软件的指令存储于合适的非暂时性计算机可读存储介质中,且可使用一或多个处理器以硬件执行指令从而执行本公开的技术。前述内容(包含硬件、软件、硬件与软件的组合等)中的任一者可视为一或多个处理器。
在一些情况下,图1中所示视频编码及解码系统10仅为示例,本申请的技术可以适用于不必包含编码和解码设备之间的任何数据通信的视频编码设置(例如,视频编码或视频解码)。在其它实例中,数据可从本地存储器检索、在网络上流式传输等。视频编码设备可以对数据进行编码并且将数据存储到存储器,和/或视频解码设备可以从存储器检索数据并且对数据进行解码。在一些实例中,由并不彼此通信而是仅编码数据到存储器和/或从存储器检索数据且解码数据的设备执行编码和解码。
参见图2,图2是根据一示例性实施例的包含编码器和/或解码器的视频译码系统40的实例的说明图。视频译码系统40可以实现本申请实施例的各种技术的组合。在所说明的实施方式中,视频译码系统40可以包含成像设备41、编码器20、解码器30(和/或藉由处理电路46实施的视频编/解码器)、天线42、一个或多个处理器43、一个或多个存储器44和/或显示设备45。
如图2所示,成像设备41、天线42、处理电路46、编码器20、解码器30、处理器43、存储器44和/或显示设备45能够互相通信。如所论述,虽然用编码器20和解码器30绘示视频译码系统40,但在不同实例中,视频译码系统40可以只包含编码器20或只包含解码器30。
在一些实例中,天线42可以用于传输或接收视频数据的经编码比特流。另外,在一些实例中,显示设备45可以用于呈现视频数据。处理电路46可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。视频译码系统40也可以包含可选的处理器43,该可选处理器43类似地可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。在一些实例中,处理器43可以通过通用软件、操作系统等实施。另外,存储器44可以是任何类型的存储器,例如易失性存储器(例如,静态随机存取存储器(static random access memory,SRAM)、动态随机存储器(dynamic random access memory,DRAM)等)或非易失性存储器(例如,闪存等)等。在非限制性实例中,存储器44可以由超速缓存内存实施。
在一些实例中,天线42可以用于接收视频数据的经编码比特流。如所论述,经编码比特流可以包含本文所论述的与编码视频帧相关的数据、指示符、索引值、模式选择数据等,例如与编码分割相关的数据(例如,变换系数或经量化变换系数,(如所论述的)可选指示符,和/或定义编码分割的数据)。视频译码系统40还可包含耦合至天线42并用于解码经编码比特流的解码器30。显示设备45用于呈现视频帧。
应理解,本申请实施例中对于参考编码器20所描述的实例,解码器30可以用于执行相反过程。关于信令语法元素,解码器30可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,编码器20可以将语法元素熵编码成经编码视频比特流。在此类实例中,解码器30可以解析这种语法元素,并相应地解码相关视频数据。
需要说明的是,本申请实施例描述的视频图像编码方法发生在编码器20处,本申请实施例描述的视频图像解码方法发生在解码器30处,本申请实施例中的编码器20和解码器30可以是例如H.263、H.264、HEVV、MPEG-2、MPEG-4、VP8、VP9等视频标准协议或者下一代视频标准协议(如H.266等)对应的编/解码器。
参见图3,图3是本申请实施例提供的视频译码设备300(例如视频编码设备300或视频解码设备300)的结构示意图。视频译码设备300适于实施本文所描述的实施例。在一个实施例中,视频译码设备300可以是视频解码器(例如图1的解码器30)或视频编码器(例如图1的编码器20)。在另一个实施例中,视频译码设备300可以是上述图1的解码器30或图1的编码器20中的一个或多个组件。
视频译码设备300包括:用于接收数据的入口端口310和接收单元320,用于处理数据的处理器、逻辑单元或中央处理器330,用于传输数据的发射器单元340(或者简称为发射器340)和出口端口350,以及,用于存储数据的存储器360(比如内存360)。视频译码设备300还可以包括与入口端口310、接收器单元320(或者简称为接收器320)、发射器单元340和出口端口350耦合的光电转换组件和电光组件,用于光信号或电信号的出口或入口。
处理器330通过硬件和软件实现。处理器330可以实现为一个或多个CPU芯片、核(例如,多核处理器)、FPGA、ASIC和DSP。处理器330与入口端口310、接收器单元320、发射器单元340、出口端口350和存储器360通信。处理器330包括译码模块370(例如编码模块370)。编码模块370实现本文中所公开的实施例,以实现本申请实施例所提供的编码方法。例如,编码模块370实现、处理或提供各种编码操作。因此,通过编码模块370为视频译码设备300的功能提供了实质性的改进,并影响了视频译码设备300到不同状态的转换。或者,以存储在存储器360中并由处理器330执行的指令来实现编码模块370。
存储器360包括一个或多个磁盘、磁带机和固态硬盘,可以用作溢出数据存储设备,用于在选择性地执行这些程序时存储程序,并存储在程序执行过程中读取的指令和数据。存储器360可以是易失性和/或非易失性的,可以是只读存储器、随机存取存储器、随机存取存储器(ternary content-addressable memory,TCAM)和/或静态随机存取存储器。
接下来介绍本申请实施例中涉及的两类预测模式:块级预测、点级预测。
块级预测:以编码块为预测单元,当前编码块内的像素不可作为当前编码块内后续像 素的参考像素。
点级预测:以点为预测单元,当前编码块内的像素可作为当前编码块内后续像素的参考像素。
以上的块级预测或点级预测中提到的参考像素的像素值可以作为当前编码块中某个像素的参考值,用于计算当前编码块中某个像素的预测值。上述参考像素的像素值可以是该参考像素的重建值。
以下结合图4提供的块级预测模式的示意图来进一步解释上述块级预测,并结合图5提供的几种预测模式的示意图来进一步解释上述点级预测。
首先,假设将待编码图像按块划分为若干个大小相等的编码块,每个编码块包含10个像素。图4和图5中的A0、A1、A2、A3、A4为编码块A中包含的5个像素(另外5个像素未示出),B1和B2为编码块B中包含的2个像素另外8个像素未示出),0、1、2、3、4、5、6、7、8、9为当前编码块包含的10个像素。图中的箭头用于表示预测方向,即箭头的起点为预测参考像素,箭头经过的像素以及箭头的终点为待预测的像素。图中箭头的方向可表征该预测模式的预测方向。
如图4所示,像素B1为像素5的参考像素,像素B2为像素0及6的参考像素,像素A0为像素1和7的参考像素,像素A1为像素2和8的参考像素,像素A2为像素3和9的参考像素,像素A3为像素4的参考像素等。可以看出,当前编码块内的像素不可作为当前编码块内后续像素的参考像素。例如像素0不可作为像素6的参考像素,像素1不可作为像素7的参考像素等。此为块级预测。
如图5中的a所示,像素0为像素5的参考像素,像素1为像素6的参考像素,像素2为像素7的参考像素,像素3为像素8的参考像素,像素4为像素9的参考像素。可以看出,当前编码块内的像素可作为当前编码块内后续像素的参考像素,此为点级预测。
如图5中的b所示,像素0为像素1的参考像素,像素1为像素2的参考像素,像素7为像素8的参考像素等。可以看出,当前编码块内的像素可作为当前编码块内后续像素的参考像素,此为点级预测。
通过箭头的方向可以看出,图5中的a和b这两种点级预测的预测方向不同。a的预测方向为竖直方向,b的预测方向为水平方向。而图4中块级预测的预测方向为右下对角方向。
当然,不限于图5中示出的几种预测模式,在具体实现中还可以由其他的预测模式,不同的预测模式有不同的预测方向和/或不同的预测值计算方法。其中,预测值计算方法是指如何根据参考像素的值计算出当前编码像素的预测值。例如可以是直接将参考像素的值作为当前编码像素的预测值,也可以是将参考像素的值与该参考像素周围的其他像素的值的平均值作为当前编码像素的预测值。本申请实施例对预测模式的具体预测方向和预测值计算方法不作限定。
接下来结合图6介绍本申请实施例提供的一种编码方法。图6示出了本申请实施例提供的一种编码方法的流程示意图。如图6所示,编码方法至少可以包括以下几个步骤:
S601:在满足强制码率控制条件时,对待编码块进行比特截取。
可能地,强制码率控制条件可以是码流缓冲区的充盈状态达到一定程度。例如当码流缓冲区的充盈状态达到90%时,需要进行强制码率控制,强制码率控制条件即为码流缓冲区的充盈状态大于或等于90%。码流缓冲区的充盈状态通过百分比来表示。该百分比为码流缓冲区的当前已用容量与总容量的比值。不限于通过百分比来表示码流缓冲区的充盈状态,在具体实现中还可以通过数值来表示码流缓冲区的充盈状态,本申请实施例对此不限定。
其中,码流缓冲区可以是缓冲存储器,用来缓存编码数据,使编码数据以稳定的码率输出。可知,码流缓冲区在不同的标准中还可以有其他称呼,例如视频缓冲检验器、编码图像缓冲区等。本申请实施例对码流缓冲区的具体称呼不作限定。
可能地,强制码率控制条件还可以是码流缓冲区的充盈状态与预测残差大小的综合影响结果,比如当码流缓冲区的充盈状态达到一定程度(例如大于或等于90%),但预测残差均小于某一阈值(比如全0),此时仍判定为不满足强制码率控制条件。此时强制码率控制条件可以是码流缓冲区的充盈状态达到一定程度且预测存在残差大于某阈值。
可能地,比特截取可以是对待编码块的当前编码像素的像素值进行比特截取。以RGB颜色空间为例,若每个分量均为256阶亮度,则每个分量需要8个比特来传输。比特截取可以是截取这8个比特中的部分比特。被截取的比特为被保留并传输的比特。被截取的比特位数可以由码流缓冲区的充盈状态决定。当前码流缓冲区越充盈,表明当前系统越迫切的需要降低码率,则被截取的比特位数则越少。
在一些可能的实现方式中,可以截取高位比特,即将低位比特置零,保留高位比特进行传输,从而达到降低码率的效果。
具体地,对于幅值较高的像素而言,可以将低位比特置零,保留高位比特。假设某像素的R、G、B分量分别为240、248、241,则R分量的比特为11110000、G分量的比特为11111000、B分量的比特为11110001。以R分量为例,若根据码流缓冲区当前的充盈状态确定需要截取4位比特数,则可以截取R分量的高4位比特,即将低4位比特置零,此时对于R分量而言截取之后的亮度值并没有发生变化,依然是240。对于G分量而言,低4位比特被置零后,比特从11111000变成了11110000,即从248变成了240,亮度值并没有发生明显的变化。同理可知B分量被截取高4位比特后亮度值也不会发生明显的变化。可以看出,对于特定图像内容而言,截取高位比特可以有效降低码率,且保证图像编码质量。
在另外一些可能的实现方式中,可以截取低位比特,即将高位比特置零,保留低位比特进行传输。
具体地,对于幅值较低的像素而言,可以将高位比特置零,保留低位比特。假设某像素的R、G、B分量分别为13、6、1,则R分量的比特为00001101、G分量的比特为00000110、B分量的比特为00000001。以R分量为例,若根据码流缓冲区当前的充盈状态确定需要截取4位比特数,则可以截取R分量的低4位比特,即将高4位比特置零,此时对于R分量而言截取之后的亮度值并没有发生变化,依然是13。G分量和B分量也一样。可以看出,对于特定图像内容而言,截取低位比特可以有效降低码率,且保证图像编码质量。
可能地,比特截取可以是对待编码块的当前编码像素的预测残差进行比特截取。具体可以采用点级预测或者块级预测对待编码块中的每个像素进行预测,分别输出每个像素的 预测残差。再对预测残差进行比特截取。其中,预测残差为原始值与预测值的差值。原始值即为当前编码像素的像素值,预测值即为当前编码像素的预测值。对预测残差的比特截取方式与对当前编码像素的像素值进行比特截取的方式类似,可以参考前述关于对当前编码像素的像素值进行比特截取的相关描述,此处不再赘述。
本申请实施例中可以将上述比特截取称为空域支路的强制码率控制。本申请后续实施例中将对待编码像素的像素值进行比特截取的方式称为空域支路的强制码率控制方式一,将对待编码像素的预测残差进行比特截取的方式称为空域支路的强制码率控制方式二。
S602:计算比特截取对应的第一代价值。
具体地,代价计算的方式可以包括以下任意一种:编码码率(rate,R)、编码率失真代价(rate-distortion cost,RDCOST)、编码失真大小等。其中,编码失真大小可以采用SAD、MAD、SSD、SSE、MSE等测度进行衡量。本申请实施例对于代价计算方式不做限定。
具体地,若比特截取是对待编码块的当前编码像素的像素值进行比特截取,则第一代价值为对当前编码像素的像素值进行比特截取对应的代价值。若比特截取是对待编码块的当前编码像素的预测残差进行比特截取,则第一代价值为对待编码块的当前编码像素的预测残差进行比特截取对应的代价值。
S603:在满足强制码率控制条件时,还对待编码块进行预测,确定待编码块的预测残差。
具体地,强制码率控制条件如S601中的描述,此处不再赘述。
具体地,在满足上述强制码率控制条件时,还对待编码块进行预测,可以看作是频域支路的强制码率控制方式。其中,频域支路的预测可以是块级预测。具体可以采用目标预测模式对待编码块进行预测,确定原始值与预测值的差值,即为预测残差。其中,目标预测模式可以是块级预测。
S604:根据预测残差计算预测对应的第二代价值。
可能地,对待编码块进行块级预测后,可以对预测残差进行频域变换,得到预测残差的N个频域变换系数,然后将N个频域变换系数中的M1个频域变换系数置零,得到置零后的N个频域变换系数,再将置零后的N个频域变换系数进行量化。第二代价值即为将置零后的N个频域变换系数进行量化后的编码代价值。本申请实施例将变换系数置零后再进行量化,可以减少参与量化的变换系数。其中,N和M1均为正整数,且M1小于N。
可能地,对待编码块进行块级预测后,可以对预测残差进行频域变换,得到预测残差的N个频域变换系数,再对N个频域变换系数进行量化,将量化后的N个频域变换系数中的M2个频域变换系数置零,得到置零后的N个频域变换系数。第二代价值即为将M2个频域变换系数置零对应的编码代价值。其中,N和M2均为正整数,且M2小于N。
本申请实施例对量化后的变换系数置零,与将变换系数置零后再量化相比,可以保留更多的变换系数,即可以保留更多频率成分(M2可小于等于M1),有利于提高特定图像内容的图像质量。
可能地,对待编码块进行块级预测后,可以对预测残差进行频域变换,得到预测残差的N个频域变换系数,然后将N个频域变换系数中的M3个频域变换系数置零,而不再对变换系数进行量化。第二代价值即为将M3个频域变换系数置零对应的编码代价值。其中, 将M3个频域变换系数置零可以通过量化的方式来实现。本申请实施例只需将变换系数置零,而不再额外对变换系数进行量化,可以简化计算方式,简化编码架构。本申请实施例中可以将M1、M2、M3统称为M。
上述被置零的变换系数的数量M可以由码流缓冲区的充盈状态决定。当前码流缓冲区越充盈,表明当前系统越迫切的需要降低码率,则被置零的变换系数的数量M越大。因为人眼对高频信号不敏感,因此被置零的系数可以是高频分量对应的变换系数,这样可以保证系数被丢弃后图像的损失不被人眼察觉。
以上列举的两种第二代价值对应的代价计算方式如S602中的描述,可以是编码码率、RDCOST、编码失真大小等。其中,编码失真大小可以采用SAD、MAD、SSD、SSE、MSE等测度进行衡量。编码失真大小,具体可以是变换之前的残差信息与反变换之后的残差信息的差值。
以上将N个频域变换系数中的M(或M1或M2或M3)个频域变换系数置零的方式可以看作是频域支路的变换系数部分丢弃。
可能地,对待编码块进行块级预测后,直接计算编码代价值,即第二代价值即为块级预测后丢弃预测残差对应的编码代价值。这种情况下可看作对预测残差进行频域变换,得到预测残差的N个频域变换系数后,将N个频域变换系数均置零。这种方式可以看作是频域支路的变换系数全丢弃。本申请中可以将变换系数全丢弃以及变换系数部分丢弃统称为频域支路的系数丢弃。
以上列举的一种第二代价值对应的代价计算方式可参考S602中的描述,此处不再赘述。
本申请实施例中可以将上述对变换系数置零(系数丢弃)称为频域支路的强制码率控制。本申请后续实施例可中将先对变换系数量化再丢弃部分系数的方式称为频域支路的强制码率控制方式一,将丢弃部分系数后再量化的方式称为频域支路的强制码率控制方式二,将变换系数全部丢弃的方式称为频域支路的强制码率控制方式三。
S605:对比第一代价值和第二代价值,确定编码比特。
具体地,对比第一代价值和第二代价值确定最佳的强制码率控制方式,进一步根据最佳的强制码率控制方式对待编码块进行编码,得到最终的编码比特。
可能地,编码比特还可以携带其编码方式,编码方式可以是上述空域支路的强制码率控制方式一、空域支路的强制码率控制方式二、频域支路的强制码率控制方式一、频域支路的强制码率控制方式二或频域支路的强制码率控制方式三等,以使解码器可以根据该编码比特的编码方式对其进行解码。
若确定最佳的强制码率控制方式为空域支路的强制码率控制方式一,则确定编码比特为对当前编码像素进行比特截取后的编码比特。若确定最佳的强制码率控制方式为空域支路的强制码率控制方式二,则确定编码比特为对当前编码像素的预测残差进行比特截取后的编码比特。若最佳的强制码率控制方式为频域支路的强制码率控制方式一,则确定编码比特为对置零后的变换系数进行熵编码后的编码比特。若最佳的强制码率控制方式为频域支路的强制码率控制方式二,则确定编码比特为对量化后的变换系数进行熵编码后的编码比特。若最佳的强制码率控制方式为频域支路的强制码率控制方式三,则确定编码比特为对块级预测后丢弃预测残差信息的全部为0的变换系数进行熵编码后的编码比特。可知, 最终确定的编码比特必定小于码流缓冲区的输出比特,从而保证码流缓冲区不溢出。
具体地,根据上述编码的代价规则可以计算出第一代价值和第二代价值。通常情况下编码代价值与编码代价成正比。但不排除在某些计算规则中编码代价值与编码代价成反比,例如编码代价值为各个像素点的预测残差绝对值的倒数之和。在这种与预测残差成反比的计算规则中编码代价值与编码代价成反比。
本申请实施例对编码代价规则不作限定,但无论采用何种编码代价的计算规则,最佳的强制码率控制方式始终是编码代价最小的预测模式。
当第一代价值等于第二代价值时,可以任意选择空域比特截取的方式或者频域系数丢弃方式来控制码率,输出的编码比特即为比特截取后的编码比特或者为对变换系数进行熵编码后的编码比特。本申请实施例对代价值相等的情况下选择的强制码率控制方式不做限定。
本申请实施例在系统需要强制控制码率时,预估分别采用空域支路的比特截取及频域支路的系数丢弃这两种不同方式控制码率后可能的编码代价,选择编码代价较小的方式来控制码率,最终输出编码比特,使输出的有限的编码比特尽可能多的携带有用信息(即人眼容易感知的信息),在保证编码码率满足系统内存和带宽限制的前提下,提高解码图像质量。
接下来介绍本申请实施例提供的一种详细的编码方法。如图7所示,编码方法可以包括以下几个步骤:
S701:基于预设代价规则从多个预测模式中选择目标预测模式。
具体地,可以从多个空域预测模式中选择至少一个最佳空域预测模式,并从多个频域预测模式中选择至少一个最佳频域预测模式。具体地,空域预测模式可以是点级预测,也可以是块级预测。不同的空域预测模式对应不同的预测方向和/或不同的预测值计算方法。频域预测模式可以是块级预测。不同的频域预测模式对应不同的预测方向和/或不同的预测值计算方法。
具体地,至少一个最佳空域预测模式为多个空域预测模式中编码代价偏小的至少一个空域预测模式,至少一个最佳频域预测模式为多个频域预测模式中编码代价偏小的至少一个频域预测模式。应当理解,多个空域预测模式(或频域预测模式)中编码代价偏小的至少一个空域预测模式(或频域预测模式)可以为多个空域预测模式(或频域预测模式)按照编码代价从小到大排序靠前的至少一个空域预测模式(或频域预测模式),或者多个空域预测模式(或频域预测模式)按照编码代价从大到小排序靠后的至少一个空域预测模式(或频域预测模式)。本申请实施例可将S701的过程称为预分析。
预分析阶段的编码代价的计算规则可以是以下任意一种:残差信息的SAD、残差信息的MAD、残差信息的SSE、残差信息的MSE、编码码率R、RDCOST、编码失真大小等。其中,编码失真大小可以采用SAD、MAD、SSE、MSE等测度进行衡量,残差信息为原始值与预测值的差值。
根据上述编码的代价规则可以计算出空域预编码及频域预编码的编码代价值。通常情况下编码代价值与编码代价成正比。但不排除在某些计算规则中编码代价值与编码代价成 反比,例如编码代价值为各个像素点的预测残差绝对值的倒数之和。在这种与预测残差成反比的计算规则中编码代价值与编码代价成反比。
本申请实施例对预分析阶段的编码代价计算规则不作限定,但无论采用何种编码代价的计算规则,最佳预测模式始终是编码代价最小的预测模式。
S702:判断是否满足强制码率控制条件,若是,执行S703,若否,执行S708。
具体地,强制码率控制条件可以是码流缓冲区的充盈状态达到一定程度,还可以是码流缓冲区的充盈状态与预测残差大小的综合影响结果,比如当码流缓冲区的充盈状态达到一定程度(>90%),但预测残差均小于某一阈值(比如全0),此时仍判定为不满足强制码率控制条件。此时强制码率控制条件可以是码流缓冲区的充盈状态达到一定程度且预测存在残差大于某阈值。具体可参考S601中关于强制码率控制条件的描述,此处不赘述。
S703:对待编码块进行比特截取。
具体地,S703与S601一致,此处不再赘述。
可知,比特截取是对待编码块的当前编码像素的预测残差进行比特截取,则可以先采用S701中确定的至少一个最佳空域预测模式对待编码块进行预测,再对预测残差进行比特截取。
S704:计算比特截取对应的第一代价值。
具体地,S704与S602一致,此处不再赘述。
S705:对待编码块进行预测,确定待编码块的预测残差。
具体地,块级预测的模式可以是S701中确定的至少一个最佳频域预测模式。分别在这至少一个最佳预测模式下对待编码块进行预测,确定待编码块的至少一个预测残差。
S706:根据预测残差计算预测对应的第二代价值。
具体地,可以分别根据每个预测残差计算每个最佳频域预测模式对应的第二代价值。每个第二代价值的计算方式与S604中的相关描述一致,此处不再赘述。
即每个最佳频域预测模式可以对应执行一遍S705-S706,最终得到至少一个第二代价值。
以上S703-S704,S705-S706可同步进行,本申请实施例对其执行顺序不作限定。
S707:对比第一代价值与第二代价值,确定编码比特。
可能地,编码比特还可以携带其编码方式,编码方式可以是上述空域支路的强制码率控制方式一、空域支路的强制码率控制方式二、频域支路的强制码率控制方式一、频域支路的强制码率控制方式二或频域支路的强制码率控制方式三等,以使解码器可以根据该编码比特的编码方式对其进行解码。
具体地,若存在多个第二代价值,可以将第一代价值与多个第二代价值对比,确定编码代价最小的强制码率控制方式。若编码代价最小的强制码率控制方式为比特截取,则编码比特为比特截取后的编码比特;若编码代价最小的强制码率控制方式为在某个最佳频域预测模式下对待编码块进行块级预测,则编码比特为待编码块在该最佳频域预测模式下的预测残差进行熵编码后的编码比特。
S708:确定第一编码量化参数QP。
具体地,可以根据待编码块的纹理复杂度和/或码流缓冲区的充盈状态确定第一编码QP。
具体地,第一编码QP可以与待编码块的纹理复杂度成正比。待编码块的纹理越复杂,第一编码QP越大;待编码块的纹理越简单,第一编码QP越小。
可知,待编码块的纹理复杂度越高,量化导致的图像失真越不明显,人眼越不容易感知,可以使用较大的第一编码QP降低待编码块编码后可能占用的码率。待编码块的纹理越简单,量化导致的图像失真越明显,人眼越容易感知,可以减小第一编码QP来减少失真,从而保证图像失真不被人眼感知。
具体地,第一编码QP可以与码流缓冲区的充盈状态成正比。码流缓冲区越充盈,第一编码QP越大;码流缓冲区越空闲,第一编码QP越小。
可知,码流缓冲区越充盈,为了防止码流缓冲区溢出,则需降低待编码块可能占用的码率,具体可以通过使用较大的第一编码QP来实现。码流缓冲区越空闲,表明当前码率存在盈余,可以增大待编码块可能占用的码率来增加待编码块编码后携带的图像信息,从而使解码后的图像还原度更高,具体可以通过使用较小的第一编码QP来实现。
在具体实现中,可以将纹理复杂度量化,并设置不同程度的纹理复杂度对应的第一编码QP。也可以将码流缓冲区的充盈状态量化,并设置不同程度的充盈状态对应的第一编码QP。
若第一编码QP可以由待编码块的纹理复杂度以及码流缓冲区的充盈状态共同决定,则可以将两者各自对应的第一编码QP进行整合,得到最终的第一编码QP。具体可以将两者各自对应的第一编码QP求平均,或者加权求和,得到最终的第一编码QP。两者各自占用的权重可以是根据先验信息得到的默认的权重。
可能地,可以根据当前已完成编码的编码块的信息统计得到编码块的纹理复杂度及第一编码QP与编码码率的对应关系。对于待编码块而言,可以根据待编码块的纹理复杂度及编码码率查找上述对应关系,确定第一编码QP。其中,待编码块的编码码率可以根据码流缓冲区当前的充盈状态确定。
以上列举的确定第一编码QP的方式仅为示例性说明,在具体实现中还可以由其他的确定方式,本申请实施例对此不作限定。
S709:分别在多种预测模式下采用第一编码QP对待编码块进行预编码,得到各个预测模式各自对应的预编码结果信息。
具体地,预编码可以是根据先验信息(或历史统计信息)预估待编码块编码后的结果。其中,先验信息或历史统计信息可以是当前已编码完成的编码块的信息。预编码还可以是预先对待编码块进行编码,以获取待编码块编码后的结果。
预编码可以包括空域预编码和频域预编码。其中,空域预编码可以包括预测、量化及代价计算。频域预编码可以包括预测、变换、量化及代价计算。最佳空域预测模式决定了待编码块的预测值,进而决定了待编码块的残差信息。最佳频域模式也决定了待编码块的预测值,进而决定了待编码块的残差信息。
其中,预测可以是根据预测模式对应的预测参考方向以及预测值计算方法,确定待编码块的预测值以及预测残差。
变换可以是对预测残差进行频域变换,以在变换域中获取变换系数。
空域预编码中的量化可以是以第一编码QP对预测残差进行量化。频域预编码中的量 化可以是以第一编码QP对变换系数进行量化。
空域预编码中的代价计算可以是计算将预测残差进行量化对应的编码代价。频域预编码中代价计算可以是计算将变换系数进行量化对应的编码代价。
其中,预编码阶段的代价计算的方式可以包括以下任意一种:编码码率、编码率失真代价(rate-distortion cost,RDCOST)、编码失真大小等。其中,编码失真大小可以采用绝对误差和(sum of absolute difference,SAD)、平均绝对差(mean absolute differences,MAD)、平方误差和(sum of squared difference,SSD)、差值平方和(sum of squared error,SSE)、均方误差(mean squared error,MSE)等测度进行衡量。
具体地,预编码结果信息可以包括以下至少一项:待编码块在第一编码QP下的编码比特数、待编码块在第一编码QP下的编码失真大小、待编码块在第一编码QP下的RDCOST、待编码块的预测残差及待编码块的纹理复杂度。
对于空域预编码来说,编码失真大小可以是量化之前的残差信息与反量化之后的残差信息的差值。对于频域预编码来说,编码失真大小可以是变换之前的残差信息与反变换之后的残差信息的差值,也可以是变换之后量化之前的残差信息与反量化之后反变换之前的残差信息的差值。
可知,待编码块可以包括多个像素,每个像素分别对应一个残差信息的差值(量化之前与反量化之后,或者变换前与反变换之后),即待编码块可以包括多个上述残差信息的差值。编码失真大小可以是采用某种计算规则(如SAD、MAD、SSE、MSE等)将上述多个差值计算成一个值,最终得到的值即为编码失真大小。
率失真代价估计的目标是选择合适的编码方法,保证在码率较小的情况下得到较小的失真。率失真代价可以综合编码码率和失真大小,用来衡量图像编码的结果。率失真代价越小,表征图像编码的性能越好。RDCOST=D+λ*R;其中,D为失真大小,R为码率,λ为拉格朗日优化因子,λ的取值可与第一编码QP及码流缓冲区的充盈程度正相关。
S710:从多个预测模式中选择最佳预测模式。
具体地,最佳预测模式为至少一个最佳空域预测模式及至少一个最佳频域预测模式中编码代价最小的预测模式。
S711:采用最佳预测模式对应的编码结果信息调整第一编码QP,得到第二编码QP。
具体地,不同的预测模式在预编码阶段各自对应不同的预编码结果信息。确定最佳预测模式后,可以采用最佳预测模式对应的预编码结果信息调整第一编码QP,得到第二QP。
可能地,编码结果信息包括待编码块在第一编码QP下的编码比特数。在编码比特数小于目标比特数的情况下,减小第一编码QP;在编码比特数大于目标比特数的情况下,增大第一编码QP。
其中,待编码块的目标比特数由码流缓冲区的充盈状态及码流缓冲区的输出比特数决定。码流缓冲区的充盈状态越满,待编码块的目标比特数在码流缓冲区的输出比特数的基础上减小的越多。码流缓冲区的充盈状态越空闲,待编码块的目标比特数在码流缓冲区的输出比特数的基础上增加的越多。
可知,码流缓冲区的输出比特数可以由编码器当前的目标码率决定。示例性地,若编码器当前的目标码率为1兆比特每秒(Mbps),当前帧率为30帧每秒,每帧图像被划分为 30个编码块。若为每个编码块均分码率,则当码流缓冲区的输出比特数可以是1兆比特/(30*30)。以上通过编码器的目标码率计算码率缓冲区的输出比特数的方式仅为示例性说明,在具体实现中还可以有其他的计算方式(如每个编码块并非均分码率),本申请实施例对此不做限定。
示例性地,码流缓冲区的输出比特数为100比特(bit),当码流缓冲区的充盈状态为50%时,目标比特数与输出比特数相等。当码流缓冲区的当前充盈状态为60%时,目标比特数为90bit;当码流缓冲区的当前充盈状态为80%时,目标比特数为70bit;当码流缓冲区的当前充盈状态为30%时,目标比特数为120bit。上述码流缓冲区的充盈状态通过百分比来表示。该百分比为码流缓冲区的当前已用容量与总容量的比值。
以上输出比特数、码流缓冲区的充盈状态与目标比特数的对应关系仅为示例性说明,在具体实现中还可以有其他,本申请实施例对此不作限定。
若预估编码后输出的编码比特数小于待编码块的目标比特数,可以减小第一编码QP来待编码块的提高码率,从而提高压缩图像的质量。若预估编码后输出的编码比特数大于待编码块的目标比特数,且码流缓冲区较充盈,说明码流缓冲区存在溢出的可能,可增大第一编码QP来降低码率,从而保证码流缓冲区不溢出。
可能地,编码结果信息包括待编码块在第一编码QP下的编码失真大小。在编码失真小于第一阈值的情况下,增大第一编码QP;在编码失真大于第二阈值的情况下,减小第一编码QP。
具体地,对于空域编码来说,编码失真大小可以是量化之前的残差信息与反量化之后的残差信息的差值。对于频域编码来说,编码失真大小可以是变换之前的残差信息与反变换之后的残差信息的差值,也可以是变换之后量化之前的残差信息与反量化之后反变换之前的残差信息的差值。
当编码失真小于某个阈值时,表明编码后的图像质量可观,可以增大第一编码QP来节约编码比特数。当编码失真大于某个阈值时,表明编码后的图像质量较低,需减小第一编码QP来提升图像质量。
可能地,编码结果信息包括待编码块的纹理复杂度。待编码块的纹理越简单,越减小第一编码QP;预待编码块的纹理越复杂,越增大第一编码QP。
待编码块的纹理越简单,量化导致的图像失真越明显,人眼越容易感知,可以减小第一编码QP来提高码率,从而保证图像失真不被人眼感知。待编码块的纹理越复杂,量化导致的图像失真越不明显,人眼越不容易感知,可增大第一编码QP来降低码率。
可知,在以待编码块的纹理复杂度调整第一编码QP时,待编码块的纹理复杂度可以根据S701中各个预测模式对应的预测残差确定。具体地,预测残差较小的预测模式的预测方向可以在一定程度上表征待编码块的纹理信息。不限于预测残差,在具体实现中还可以由其他的方式确定待编码块的纹理复杂度,本申请实施例对此不作限定。
可能地,编码结果信息包括待编码块的预测残差。在预测残差绝对值均小于第三阈值的情况下,减小第一编码QP;在预测残差绝对值均大于第四阈值的情况下,增大第一编码QP。
预测残差可以反映待编码块的纹理复杂度。预测残差越小,表明待编码块的纹理越简 单,预测残差越大,表明待编码块的纹理越复杂。而待编码块的纹理越简单,量化导致的图像失真越明显,人眼越容易感知,可以减小第一编码QP减小失真,从而保证图像失真不被人眼感知。待编码块的纹理越复杂,量化导致的图像失真越不明显,人眼越不容易感知,可增大第一编码QP来降低码率。
可能地,编码结果信息可以包括以上任意两项或多项。具体可以确定每一项各自对应的第一编码QP的调节量,再根据各项占的权重计算第一编码QP的最终调节量,从而获得第二QP。
S712:在最佳预测模式下采用第二编码QP对待编码块进行实编码。
具体地,若最佳预测模式为最佳空域预测模式,则采用最佳空域模式对待编码块进行预测,输出预测值及预测残差,再对该预测残差进行量化、熵编码等。
若最佳预测模式为最佳频域预测模式,则采用最佳频域模式对待编码块进行预测,输出预测值及预测残差,再对该预测残差进行变换、量化、熵编码等。
具体地,预测可以是根据预测模式对应的预测参考方向以及预测值计算方法,确定待编码块的预测值以及预测残差。
变换可以是对预测残差进行频域变换,以在变换域中获取变换系数。
空域预编码中的量化可以是以第一编码QP对预测残差进行量化。频域预编码中的量化可以是以第一编码QP对变换系数进行量化。
熵编码可以是按照熵原理不丢失任何信息的编码,通常采用香农编码、哈夫曼编码或算数编码等方式对量化后的预测残差(空域实编码)或量化后的变换系数(频域实编码)进行编码。
S713:输出编码比特至码流缓冲区。
具体地,S707或S712后,均可输出编码比特至码流缓冲区。此时码流缓冲区的充盈状态可能发生变化。码流缓冲区的充盈状态可进一步影响S702及S711。码流缓冲区的充盈状态与S702的关系在S601及S702中均有说明,此处不再赘述。码流缓冲区的充盈状态与S711的关系在S711中也有说明,此处不再赘述。
本申请实施例可以在系统需要强制控制码率时,预估分别采用空域支路的比特截取及频域支路的系数丢弃这两种不同方式控制码率后可能的编码代价,选择编码代价较小的方式来控制码率,最终输出编码比特,使输出的有限的编码比特尽可能多的携带有用信息(即人眼容易感知的信息),在保证编码码率满足系统内存和带宽限制的前提下,提高解码图像质量。本申请实施例还可以在系统不需要强制控制码率时,通过预分析分别对空域和频域支路提供多种预测模式,可以使无需采用以块为单位的预测操作(即空域编码中的预测操作)实现更加精细的点级预测(即当前预测块内的重建值可以作为当前预测块内后续像素点的预测参考值),进而可以通过更精细的点级预测提升图像压缩性能。进一步地,本申请实施例还可以通过两级码率控制实现更加精细的码率控制,合理利用码率传递质量更好的图像数据,提升图像压缩的性能。
接下来结合图7提供的编码方法,介绍本申请实施例提供的两种空频域编码架构。图8介绍了一种空频域编码架构,均可用于执行图7提供的编码方法。
如图8所示,空频域编码架构可以包括以下几个部分:
预分析801,可以基于预设代价规则从多个预测模式中选择目标预测模式。具体可以从多个空域预测模式中选择至少一个最佳空域预测模式,并从多个频域预测模式中选择至少一个最佳频域预测模式。具体可参考S701的描述,此处不再赘述。
强制码控条件判断802,用于判断当前是否满足强制码率控制条件。具体可参考S702的描述,此处不再赘述。若满足强制码率控制条件,则同时通过一条空域支路及一条频域支路来强制控制码率。以下分别介绍两条空域支路以及两条频域支路。在具体实现中,该空频域编码架构可以从两条空域支路中选择一条空域支路,并从两条频域支路中选择一条频域支路来强制控制码率。
空域支路一可以包括以下几个部分:比特截取803、代价计算1 804。空域支路二可以包括以下几个部分:预测1 803a、比特截取803、代价计算1 804。
对于空域支路一而言:比特截取803,可以用于在满足强制码率控制条件的情况下对待编码块中的当前编码像素的像素值进行比特截取。具体可以参考S601中的相关描述,此处不赘述。
代价计算1 804,可以用于计算比特截取对应的第一代价值。具体可参考S602中的相关描述,此处不赘述。
对于空域支路二而言:
预测1 803a,可以用于采用预分析801中确定的至少一个最佳空域预测模式对待编码块进行预测,得到预测残差。其中,最佳空域预测模式可以是点级预测,也可以是块级预测。
比特截取803,可以用于对预测残差进行比特截取。具体可参考S601中的相关描述,此处不赘述。
代价计算1 804,可以用于计算对预测残差进行比特截取对应的第一代价值。具体可参考S602中的相关描述,此处不赘述。
可能地,当执行空域支路的强制码率控制时,可以对比上述空域支路一与空域支路二的编码代价,选择编码代价较小的支路执行空域支路的强制码率控制。
频域支路一可以包括以下几个部分:预测2 805、变换806、量化807a、系数丢弃808a、代价计算2 809。频域支路二可以包括以下几个部分:预测2 805、变换806、系数丢弃807b、量化808b、代价计算2 809。
对于频域支路一而言:
预测2 805,可以用于在满足强制码率控制条件的情况下执行频域支路的强制码率控制。具体可以对待编码块进行块级预测,确定待编码块的预测残差。其中,块级预测的预测模式可以是预分析801中确定的最佳频域预测模式。具体可参考S603的描述,此处不赘述。
变换806,可以用于对待编码块的预测残差进行频域变换,得到预测残差的N个频域变换系数。
量化807a,可以用于对预测残差的N个频域变换系数进行量化。
系数丢弃808a,可以用于对量化后的N个频域变换系数中的M个频域变换系数置零,得到置零后的N个频域变换系数。其中,N和M均为正整数,且M小于N。
代价计算2 809,可以用于计算将M个频域变换系数置零对应的第二代价值。
上述被置零的变换系数的数量M可以由码流缓冲区的充盈状态决定。当前码流缓冲区越充盈,表明当前系统越迫切的需要降低码率,则被置零的变换系数的数量M越大。因为人眼对高频信号不敏感,因此被置零的系数可以是高频分量对应的变换系数,这样可以保证系数被丢弃后图像的损失不被人眼察觉。
对于频域支路二而言:预测2 805,可以用于在满足强制码率控制条件的情况下执行频域支路的强制码率控制。具体可以对待编码块进行块级预测,确定待编码块的预测残差。其中,块级预测的预测模式可以是预分析801中确定的最佳频域预测模式。具体可参考S603的描述,此处不赘述。
变换806,可以用于对待编码块的预测残差进行频域变换,得到预测残差的N个频域变换系数。
系数丢弃807b,可以用于对预测残差的N个频域变换系数中的M个频域变换系数置零,得到置零后的N个频域变换系数。其中,N和M均为正整数,且M小于N。
量化808b,可以用于对置零后的N个频域变换系数进行量化。
代价计算2 809,则可以用于计算将置零后的N个频域变换系数进行量化后的编码代价值,即第二编码代价值。本申请实施例将变换系数置零后再进行量化,可以减少参与量化的变换系数。
频域支路一与频率支路二相比,由于先对变换系数进行量化后再丢弃,可以保留更多的频率成分,有利于提高特定图像内容的图像质量。
可能地,当执行频域支路的强制码率控制时,可以对比上述频域支路一与频域支路二的编码代价,选择编码代价较小的支路执行频域支路的强制码率控制。
以上代价计算1和代价计算2采用的代价计算方式可参考S602中的相关描述,此处不再赘述。
代价比较810,可以用于对比第一代价值和第二代价值。若是,就输出比特截取后的剩余比特至码流缓冲区;若否,就对置零后的N个频域变换系数(频域支路一)或者对量化后的N个频域变换系数(频域支路二)进行熵编码。具体可参考S605的相关描述,此处不赘述。
熵编码811,可以用于对置零后的N个频域变换系数(频域支路一)或者对量化后的N个频域变换系数(频域支路二)进行熵编码,输出编码后的比特至码流缓冲区819。
码率控制1 812,可以用于在不满足强制码率控制条件的情况下,确定第一编码QP。具体可参考S708的描述,此处不再赘述。
空域预编码813,可以用于在预分析801中确定的至少一个最佳空域预测模式下采用第一编码QP对待编码块进行空域预编码。空域预编码可以包括预测、量化及代价计算。具体可参考S709中的相关描述,此处不再赘述。
频域预编码814,可以用于在预分析801中确定的至少一个最佳频域预测模式下采用第一编码QP对待编码块进行频域预编码。频域预编码可以包括预测、变换、量化及代价计算。具体可参考S709中的相关描述,此处不再赘述。
压缩域决策815,可以用于上述至少一个最佳空域预测模式及至少一个最佳频域预测 模式中选择编码代价最小的预测模式。具体可参考S710中的描述,此处不再赘述。
码率控制2 816,可以用于采用最佳预测模式对应的编码结果信息调整第一编码QP,得到第二编码QP。具体可参考S711中的描述,此处不再赘述。
空域实编码817,用于在压缩域决策815中确定最佳预测模式为空域预测模式后,在该最佳预测模式下采用第二编码QP进行空域实编码,输出空域实编码后的编码比特至码流缓冲区819。具体可参考S712中的描述,此处不再赘述。
频域实编码818,用于在压缩域决策815中确定最佳预测模式为频域预测模式后,在该最佳预测模式下采用第二编码QP进行频域实编码,输出频域实编码后的编码比特至码流缓冲区819。具体可参考S712中的描述,此处不再赘述。
码流缓冲区819,可以用于接收空域实编码817或频域实编码818输出的编码比特,还可以用于接收比特截取后输出的剩余比特或对置零后的N个频域变换系数进行熵编码得到的编码比特。此外,码流缓冲区819还可以作用于802及816,码流缓冲区819与802的关系在S601及S702中均有说明,此处不再赘述。码流缓冲区819与802的关系在S711中也有说明,此处不再赘述。
可能地,不限于上述频域支路一或频域支路二提供的频域支路的强制码率控制方式,在具体实现中还可以通过如下方式实现频域支路的强制码率控制:
对待编码块进行块级预测后,可以对预测残差进行频域变换,得到预测残差的N个频域变换系数,然后将N个频域变换系数中的M个频域变换系数置零,而不再对变换系数进行量化。第二代价值即为将M个频域变换系数置零对应的编码代价值。其中,将个频域变换系数置零可以通过量化的方式来实现。本申请实施例只需将变换系数置零,而不再额外对变换系数进行量化,可以简化计算方式,简化编码架构。
图9提供了另外一种空频域编码架构。如图9所示,空频域编码架构与图8提供的空频域编码架构除了强制码率控制的频域支路(频域支路三)不同,其他部分均一致。以下仅介绍不同之处,相同之处可参考图8的描述,此处均不赘述。
频域支路三可以包括以下几个部分:预测2 905、代价计算2 906。
预测2 905,可以用于对待编码块进行块级预测,输出预测残差。
代价计算2 906,可以用于计算块级预测后丢弃预测残差对应的第二代价值。
也即是说,图9示出的空频域编码架构在频域支路的强制码率控制中将变换系数全部丢弃(即全0系数),后续可直接对量化后的预测残差(全0系数)进行熵编码,输出编码比特至码流缓冲区917。这样可以简化编码架构,减小计算量,提升编码效率。
在这种编码架构下,代价计算2的代价计算方式可参考S602中的描述,此处不再赘述。
不限于空频域编码架构,本申请实施例提供的编码方法还可以适用于空域编架构。接下来结合图10介绍本申请实施例提供的空域编码架构。
如图10所示,空域编码架构可以包括以下几个部分:
强制码控条件判断1001,用于判断当前是否满足强制码率控制条件。具体可参考S702的描述,此处不再赘述。若满足强制码率控制条件,则同时通过一条空域支路及一条频域 支路来强制控制码率。以下介绍两条空域支路以及一条频域支路。在具体实现中,该空域编码架构可以从两条空域支路中选择一条空域支路来强制控制码率。
空域支路一可以包括以下几个部分:比特截取1002、代价计算1 1003。空域支路二可以包括以下几个部分:预测1 1002a、比特截取1002、代价计算1 1003。
空域支路一与图8中涉及的空域支路一类似,空域支路二与图8中涉及的空域支路二类似,此处均不再赘述。
可能地,当执行空域支路的强制码率控制时,可以对比上述空域支路一与空域支路二的编码代价,选择编码代价较小的支路执行空域支路的强制码率控制。
频域支路可以包括以下几个部分:预测2 1004、代价计算2 1005。频域支路与图9中涉及的频域支路三类似,此处不再赘述。
代价比较1006,与代价比较810一致,此处不再赘述。
熵编码1007,可以用于对预测残差(全0系数)进行熵编码,输出编码比特至码流缓冲区1012。
预测1008,可以用于在不满足在强制码率控制条件的情况下,对待编码块进行点级预测或者块级预测,得到预测残差。
码率控制1009,可以用于根据待编码块的纹理复杂度和/或码流缓冲区的充盈状态确定编码QP。具体可参考S708中的相关描述,此处不再赘述。
量化1010,可以用于采用码率控制1009确定的编码QP对预测1008输出的预测残差进行量化。
熵编码1011,可以用于对量化1010输出的量化后的预测残差进行熵编码,输出编码比特至码流缓冲区1012。
码流缓冲区1012,可以进一步影响1001和码率控制1009。码流缓冲区1012与1001的关系在S601及S702中均有说明,此处不再赘述。码流缓冲区1012与码率控制1009的关系可参考S708中关于码流缓冲区的充盈状态与第一编码QP的关系的相关描述,此处不再赘述。
不限于空频域编码架构及空域编码架构,本申请实施例提供的编码方法还可以适用于频域编架构。图11及图12提供了几种频域编码架构。
如图11所示,频域编码架构可以包括以下几个部分:
强制码控条件判断1101,与强制码控条件判断802一致,若满足强制码率控制条件,则同时通过一条空域支路及一条频域支路来强制控制码率。以下分别介绍两条空域支路以及两条频域支路。在具体实现中,该空频域编码架构可以从两条空域支路中选择一条空域支路,并从两条频域支路中选择一条频域支路来强制控制码率。
空域支路一可以包括以下几个部分:比特截取1102、代价计算1 1103。空域支路二可以包括以下几个部分:预测1 1102a、比特截取1102、代价计算1 1103。
空域支路一与图8中涉及的空域支路一类似,空域支路二与图8中涉及的空域支路二类似,此处均不再赘述。
可能地,当执行空域支路的强制码率控制时,可以对比上述空域支路一与空域支路二 的编码代价,选择编码代价较小的支路执行空域支路的强制码率控制。
频域支路一可以包括以下几个部分:预测2 1104、变换1105、量化1106a、系数丢弃1107a、代价计算2 1108。频域支路二可以包括以下几个部分:预测2 1104、变换1105、系数丢弃1106b、量化1107b、代价计算2 1108。
频域支路一与图8中涉及的频域支路一类似,频域支路二与图8中涉及的空域支路二类似,此处均不再赘述。
可能地,当执行频域支路的强制码率控制时,可以对比上述频域支路一与频域支路二的编码代价,选择编码代价较小的支路执行频域支路的强制码率控制。
代价计算2 1108,与代价计算2 809一致,此处不再赘述。
代价比较1109,与代价比较810一致,此处不再赘述。
熵编码1110,与熵编码811一致,此处不再赘述。
预测1111,可以用于在不满足在强制码率控制条件的情况下,对待编码块进行块级预测,得到预测残差。
变换1112,可以用于对待编码块的预测残差进行频域变换,输出变换后的预测残差。
码率控制1113,与码率控制1010一致,此处不再赘述。
量化1114,可以用于采用码率控制1113确定的编码QP对变换1112输出的变换后的预测残差进行量化。
熵编码1115,与熵编码1012一致,此处不再赘述。
码流缓冲区1116,与码流缓冲区1013一致,此处不再赘述。
可能地,不限于上述频域支路一或频域支路二提供的频域支路的强制码率控制方式,在具体实现中还可以通过如下方式实现频域支路的强制码率控制:
对待编码块进行块级预测后,可以对预测残差进行频域变换,得到预测残差的N个频域变换系数,然后将N个频域变换系数中的M个频域变换系数置零,而不再对变换系数进行量化。第二代价值即为将M个频域变换系数置零对应的编码代价值。其中,将M个频域变换系数置零可以通过量化的方式来实现。本申请实施例只需将变换系数置零,而不再额外对变换系数进行量化,可以简化计算方式,简化编码架构。
图12提供了另外一种频域编码架构。如图12所示,频域编码架构与图11提供的频域编码架构除了强制码率控制的频域支路(频域支路三)不同,其他部分均一致。以下仅介绍不同之处,相同之处可参考图11的描述,此处均不赘述。
频域支路三可以包括以下几个部分:预测2 1204、代价计算2 1205。频域支路三与图9中涉及的频域支路三类似,此处不再赘述。
也即是说,图12示出的频域编码架构在频域支路的强制码率控制中将变换系数全部丢弃(即全0系数),后续可直接对量化后的预测残差(全0系数)进行熵编码,输出编码比特至码流缓冲区1213。这样可以简化编码架构,减小计算量,提升编码效率。
本申请实施例还提供了一种编码器,如图13所示,编码器130至少可以包括:比特截取模块1301、第一代价计算模块1302、预测模块1303、第二代价计算模块1304及对比确 定模块1305。其中:
比特截取模块1301,可以用于在满足强制码率控制条件时,对待编码块进行比特截取。具体可参考S601的描述,此处不赘述。
第一代价计算模块1302,可以用于计算比特截取对应的第一代价值。具体可参考S602的描述,此处不赘述。
预测模块1303,可以用于在满足强制码率控制条件时,对待编码块进行块级预测,确定所述待编码块的预测残差。具体可参考S603的描述,此处不赘述。
第二代价计算模块1304,可以用于根据预测残差计算块级预测对应的第二代价值。具体可参考S604的描述,此处不赘述。
对比确定模块1305,可以用于对比第一代价值与第二代价值,确定编码比特。具体可参考S605的描述,此处不赘述。
在一种可能的实施例中,编码器130还可以包括:判断模块1306、第一码率控制模块1307、预编码模块1308、编码域决策模块1309、第二码率控制模块1310及实编码模块1311。其中:
判断模块1306,可以用于判断是否满足强制码率控制条件。具体可参考S702的描述,此处不赘述。
第一码率控制模块1307,可以用于在不满足强制码率控制条件时,确定第一编码量化参数QP。具体可参考S708的描述,此处不赘述。
预编码模块1308,可以用于分别在多种预测模式下采用第一编码QP对所述待编码块进行预编码,得到各个预测模式各自对应的预编码结果信息。具体可参考S709的描述,此处不赘述。
编码域决策模块1309,可以用于从多个预测模式中选择最佳预测模式。具体可参考S710的描述,此处不赘述。
第二码率控制模块1310,可以用于采用最佳预测模式对应的编码结果信息调整第一编码QP,得到第二编码QP。具体可参考S711的描述,此处不赘述。
实编码模块1311,可以用于在最佳预测模式下采用第二编码QP对待编码块进行实编码。具体可参考S712的描述,此处不赘述。
在一种可能的实施例中,编码器130还包括:输出模块1312,可以用于输出编码比特至码流缓冲区。具体可参考S713的描述,此处不赘述。
判断模块1306,具体可以用于:根据码流缓冲区的充盈状态判断是否满足强制码率控制条件。具体可参考S702的描述,此处不赘述。
第二码率控制模块1310,具体可以用于:根据码流缓冲区的充盈状态及最佳预测模式对应的编码结果信息调整第一编码QP。具体可参考S711的描述,此处不赘述。
在一种可能的实施例中,对比确定模块1305,具体可以用于:在第一代价值小于第二代价值的情况下,确定编码比特为所述比特截取后的编码比特。
在一种可能的实施例中,对比确定模块1305,具体可以用于:在比第一代价值大于第二代价值的情况下,确定编码比特为对预测残差进行熵编码后的编码比特。
在一种可能的实施例中,第二代价计算模块1304,可以包括:变换单元、置零单元、 代价计算单元。其中:
变换单元,可以用于对预测残差进行频域变换,得到预测残差的N个频域变换系数,N为正整数。
置零单元,可以用于将N个频域变换系数中的M个频域变换系数置零,得到置零后的N个频域变换系数,M为小于N的正整数。
代价计算单元,可以用于计算将M个频域变换系数置零对应的第二代价值。
对比确定模块1305,具体可以用于:在第一代价值大于第二代价值的情况下,确定编码比特为对置零后的N个频域变换系数进行熵编码后的编码比特。
在一种可能的实施例中,第二代价计算模块1304,还可以包括:量化单元,用于在变换单元对预测残差进行频域变换,得到预测残差的N个频域变换系数之后,置零单元将N个频域变换系数中的M个频域变换系数置零之前,对N个频域变换系数进行量化,得到N个量化后的频域变换系数。
置零单元,具体可以用于:将N个量化后频域变换系数中的M个频域变换系数置零。
在一种可能的实施例中,第二代价计算模块1304还可以包括:量化单元,用于在置零单元将N个频域变换系数中的M个频域变换系数置零之后,代价计算单元计算将M个频域变换系数置零对应的第二代价值,对置零后的N个频域变换系数进行量化。
代价计算单元,具体可以用于:计算将置零后的N个频域变换系数进行量化后的第二代价值。
在一种可能的实施例中,编码器还包括:预分析模块1313,可以用于基于预设代价计算规则从多个预测模式中选择目标预测模式。目标预测模式为多个预测模式中代价值最小的预测模式,不同的预测模式对应不同的预测方向及不同的预测值计算方法。
预测模块1303,具体可以用于:在满足强制码率控制条件时,在目标预测模式下对待编码块进行预测,确定待编码块的预测残差。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得计算机或处理器执行上述任一个方法中的一个或多个步骤。上述信号处理装置的各组成模块如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在所述计算机可读取存储介质中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者通过所述计算机可读存储介质进行传输。所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例 如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、ROM或随机存储记忆RAM等。
本申请实施例方法中的步骤可以根据实际需要进行顺序调整、合并和删减。
本申请实施例装置中的模块可以根据实际需要进行合并、划分和删减。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (21)

  1. 一种编码方法,其特征在于,包括:
    在满足强制码率控制条件时,对待编码块进行比特截取;
    计算所述比特截取对应的第一代价值;
    在满足所述强制码率控制条件时,还对所述待编码块进行预测,确定所述待编码块的预测残差;
    根据所述预测残差计算所述预测对应的第二代价值;
    对比所述第一代价值与所述第二代价值,确定编码比特。
  2. 如权利要求1所述的方法,其特征在于,所述在满足强制码率控制条件时,对待编码块进行比特截取之前,所述方法还包括:判断是否满足所述强制码率控制条件;
    在不满足所述强制码率控制条件时,确定第一编码量化参数QP;
    分别在多种预测模式下采用所述第一编码QP对所述待编码块进行预编码,得到各个预测模式各自对应的预编码结果信息;
    从所述多种预测模式中选择最佳预测模式;
    采用所述最佳预测模式对应的编码结果信息调整所述第一编码QP,得到第二编码QP;
    在所述最佳预测模式下采用所述第二编码QP对所述待编码块进行实编码。
  3. 如权利要求2所述的方法,其特征在于,所述确定编码比特后,所述方法还包括:输出所述编码比特至码流缓冲区;
    所述判断是否满足强制码率控制条件,包括:根据所述码流缓冲区的充盈状态判断是否满足所述强制码率控制条件;
    所述采用所述最佳预测模式对应的编码结果信息调整所述第一编码QP,包括:根据所述码流缓冲区的充盈状态及所述最佳预测模式对应的编码结果信息调整所述第一编码QP。
  4. The method according to any one of claims 1 to 3, wherein the comparing the first cost value with the second cost value to determine coded bits comprises:
    when the first cost value is smaller than the second cost value, determining that the coded bits are coded bits obtained after the bit truncation.
  5. The method according to any one of claims 1 to 3, wherein the comparing the first cost value with the second cost value to determine coded bits comprises:
    when the first cost value is greater than the second cost value, determining that the coded bits are coded bits obtained after entropy coding is performed on the prediction residual.
  6. The method according to any one of claims 1 to 5, wherein the calculating, based on the prediction residual, a second cost value corresponding to the prediction comprises:
    performing frequency-domain transform on the prediction residual to obtain N frequency-domain transform coefficients of the prediction residual, wherein N is a positive integer;
    setting M of the N frequency-domain transform coefficients to zero to obtain N zeroed frequency-domain transform coefficients, wherein M is a positive integer smaller than N; and
    calculating the second cost value corresponding to setting the M frequency-domain transform coefficients to zero; and
    the comparing the first cost value with the second cost value to determine coded bits comprises:
    when the first cost value is greater than the second cost value, determining that the coded bits are coded bits obtained after entropy coding is performed on the N zeroed frequency-domain transform coefficients.
  7. The method according to claim 6, wherein after the performing frequency-domain transform on the prediction residual to obtain N frequency-domain transform coefficients of the prediction residual and before the setting M of the N frequency-domain transform coefficients to zero, the method further comprises: quantizing the N frequency-domain transform coefficients to obtain N quantized frequency-domain transform coefficients; and
    the setting M of the N frequency-domain transform coefficients to zero comprises: setting M of the N quantized frequency-domain transform coefficients to zero.
  8. The method according to claim 6, wherein after the setting M of the N frequency-domain transform coefficients to zero and before the calculating the second cost value corresponding to setting the M frequency-domain transform coefficients to zero, the method further comprises: quantizing the N zeroed frequency-domain transform coefficients; and
    the calculating the second cost value corresponding to setting the M frequency-domain transform coefficients to zero comprises: calculating the second cost value obtained after the N zeroed frequency-domain transform coefficients are quantized.
  9. The method according to any one of claims 1 to 8, wherein before the further performing prediction on the to-be-coded block to determine a prediction residual of the to-be-coded block when the forced rate control condition is met, the method further comprises:
    selecting a target prediction mode from a plurality of prediction modes based on a preset cost calculation rule, wherein the target prediction mode is a prediction mode with a smallest cost value among the plurality of prediction modes, and different prediction modes correspond to different prediction directions and/or different prediction value calculation methods; and
    the further performing prediction on the to-be-coded block to determine a prediction residual of the to-be-coded block when the forced rate control condition is met comprises: when the forced rate control condition is met, further performing prediction on the to-be-coded block in the target prediction mode to determine the prediction residual of the to-be-coded block.
  10. An encoder, comprising:
    a bit truncation module, configured to perform bit truncation on a to-be-coded block when a forced rate control condition is met;
    a first cost calculation module, configured to calculate a first cost value corresponding to the bit truncation;
    a block-level prediction module, configured to perform prediction on the to-be-coded block when the forced rate control condition is met, to determine a prediction residual of the to-be-coded block;
    a second cost calculation module, configured to calculate, based on the prediction residual, a second cost value corresponding to the prediction; and
    a comparison and determination module, configured to compare the first cost value with the second cost value to determine coded bits.
  11. The encoder according to claim 10, wherein the encoder further comprises:
    a judgment module, configured to judge whether the forced rate control condition is met;
    a first rate control module, configured to determine a first coding quantization parameter (QP) when the forced rate control condition is not met;
    a pre-coding module, configured to pre-code the to-be-coded block by using the first coding QP in each of a plurality of prediction modes, to obtain pre-coding result information corresponding to each prediction mode;
    a selection module, configured to select an optimal prediction mode from the plurality of prediction modes;
    a second rate control module, configured to adjust the first coding QP by using coding result information corresponding to the optimal prediction mode, to obtain a second coding QP; and
    a real-coding module, configured to perform real coding on the to-be-coded block by using the second coding QP in the optimal prediction mode.
  12. The encoder according to claim 11, wherein the encoder further comprises: an output module, configured to output the coded bits to a bitstream buffer;
    the judgment module is specifically configured to: judge, based on a fullness status of the bitstream buffer, whether the forced rate control condition is met; and
    the second rate control module is specifically configured to: adjust the first coding QP based on the fullness status of the bitstream buffer and the coding result information corresponding to the optimal prediction mode.
  13. The encoder according to any one of claims 10 to 12, wherein the comparison and determination module is specifically configured to: when the first cost value is smaller than the second cost value, determine that the coded bits are coded bits obtained after the bit truncation.
  14. The encoder according to any one of claims 10 to 12, wherein the comparison and determination module is specifically configured to: when the first cost value is greater than the second cost value, determine that the coded bits are coded bits obtained after entropy coding is performed on the prediction residual.
  15. The encoder according to any one of claims 10 to 14, wherein the second cost calculation module comprises:
    a transform unit, configured to perform frequency-domain transform on the prediction residual to obtain N frequency-domain transform coefficients of the prediction residual, wherein N is a positive integer;
    a zeroing unit, configured to set M of the N frequency-domain transform coefficients to zero to obtain N zeroed frequency-domain transform coefficients, wherein M is a positive integer smaller than N; and
    a cost calculation unit, configured to calculate the second cost value corresponding to setting the M frequency-domain transform coefficients to zero; and
    the comparison and determination module is specifically configured to: when the first cost value is greater than the second cost value, determine that the coded bits are coded bits obtained after entropy coding is performed on the N zeroed frequency-domain transform coefficients.
  16. The encoder according to claim 15, wherein the second cost calculation module further comprises:
    a quantization unit, configured to quantize the N frequency-domain transform coefficients to obtain N quantized frequency-domain transform coefficients after the transform unit performs frequency-domain transform on the prediction residual to obtain the N frequency-domain transform coefficients of the prediction residual and before the zeroing unit sets M of the N frequency-domain transform coefficients to zero; and
    the zeroing unit is specifically configured to: set M of the N quantized frequency-domain transform coefficients to zero.
  17. The encoder according to claim 15, wherein the second cost calculation module further comprises:
    a quantization unit, configured to quantize the N zeroed frequency-domain transform coefficients after the zeroing unit sets M of the N frequency-domain transform coefficients to zero and before the cost calculation unit calculates the second cost value corresponding to setting the M frequency-domain transform coefficients to zero; and
    the cost calculation unit is specifically configured to: calculate the second cost value obtained after the N zeroed frequency-domain transform coefficients are quantized.
  18. The encoder according to any one of claims 10 to 17, wherein the encoder further comprises: a pre-analysis module, configured to select a target prediction mode from a plurality of prediction modes based on a preset cost calculation rule;
    the target prediction mode is a prediction mode with a smallest cost value among the plurality of prediction modes, and different prediction modes correspond to different prediction directions and/or different prediction value calculation methods; and
    the block-level prediction module is specifically configured to: when the forced rate control condition is met, perform prediction on the to-be-coded block in the target prediction mode to determine the prediction residual of the to-be-coded block.
  19. An encoder, comprising a processor and a transmission interface, wherein
    the processor is configured to invoke software instructions stored in a memory to perform the encoding method according to any one of claims 1 to 9.
  20. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions, and when the instructions are run on a computer or a processor, the computer or the processor is caused to perform the method according to any one of claims 1 to 9.
  21. A computer program product comprising instructions, wherein when the computer program product is run on a computer or a processor, the computer or the processor is caused to perform the method according to any one of claims 1 to 9.
PCT/CN2020/139681 2019-12-31 2020-12-25 Encoding method and encoder WO2021136110A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20910708.5A EP4075803A4 (en) 2019-12-31 2020-12-25 CODING PROCESS AND ENCODER
US17/853,714 US20220329818A1 (en) 2019-12-31 2022-06-29 Encoding method and encoder

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911409778.7 2019-12-31
CN201911409778.7A CN113132726B (zh) 2019-12-31 2022-07-29 Encoding method and encoder

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/853,714 Continuation US20220329818A1 (en) 2019-12-31 2022-06-29 Encoding method and encoder

Publications (1)

Publication Number Publication Date
WO2021136110A1 true WO2021136110A1 (zh) 2021-07-08

Family

ID=76685935

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/139681 WO2021136110A1 (zh) 2019-12-31 2020-12-25 Encoding method and encoder

Country Status (4)

Country Link
US (1) US20220329818A1 (zh)
EP (1) EP4075803A4 (zh)
CN (2) CN115348450A (zh)
WO (1) WO2021136110A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117676140A (zh) * 2022-09-08 2024-03-08 华为技术有限公司 Image encoding and decoding method and apparatus, encoder, decoder, and system
CN117676141A (zh) * 2022-09-08 2024-03-08 华为技术有限公司 Image encoding and decoding method and apparatus, encoder, decoder, and system
CN116760987A (zh) * 2022-11-18 2023-09-15 杭州海康威视数字技术股份有限公司 Image encoding and decoding method and apparatus, and storage medium


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8208545B2 (en) * 2006-06-01 2012-06-26 Electronics And Telecommunications Research Institute Method and apparatus for video coding on pixel-wise prediction
US7925101B2 (en) * 2007-09-05 2011-04-12 Himax Technologies Limited Apparatus for controlling image compression
TWI410139B (zh) * 2007-09-12 2013-09-21 Sony Corp Image processing apparatus and image processing method
CN100592798C (zh) * 2007-09-14 2010-02-24 四川虹微技术有限公司 Method for implementing fast transform and quantization in video coding
US9008184B2 (en) * 2012-01-20 2015-04-14 Blackberry Limited Multiple sign bit hiding within a transform unit
CN103686187B (zh) * 2013-12-07 2016-09-28 吉林大学 Transform-domain global high-precision motion vector estimation method
US11070810B2 (en) * 2014-03-14 2021-07-20 Qualcomm Incorporated Modifying bit depths in color-space transform coding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080043841A1 (en) * 2006-08-21 2008-02-21 Ching-Lung Su Method for video coding
CN1953551A (zh) * 2006-11-24 2007-04-25 北京中星微电子有限公司 图像压缩装置及方法
CN101202912A (zh) * 2007-11-30 2008-06-18 上海广电(集团)有限公司中央研究院 一种平衡码率和图像质量的码率控制方法
CN103248891A (zh) * 2013-04-24 2013-08-14 复旦大学 一种基于n-bit截尾量化和块内二维预测的参考帧压缩方法
CN105208390A (zh) * 2014-06-30 2015-12-30 杭州海康威视数字技术股份有限公司 视频编码的码率控制方法及其系统

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He, Z.-L., et al., "Low-Power VLSI Design for Motion Estimation Using Adaptive Pixel Truncation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 10, no. 5, 1 August 2000, pp. 669-678, XP000950199, ISSN: 1051-8215, DOI: 10.1109/76.856445 *
See also references of EP4075803A4

Also Published As

Publication number Publication date
CN113132726A (zh) 2021-07-16
US20220329818A1 (en) 2022-10-13
CN115348450A (zh) 2022-11-15
CN113132726B (zh) 2022-07-29
EP4075803A4 (en) 2023-05-10
EP4075803A1 (en) 2022-10-19

Similar Documents

Publication Publication Date Title
US10148948B1 (en) Selection of transform size in video coding
JP5777080B2 (ja) Lossless coding and associated signaling methods for composite video
CN106170092B (zh) Fast encoding method for lossless coding
WO2021136110A1 (zh) Encoding method and encoder
US11595669B2 (en) Chroma block prediction method and apparatus
WO2021136056A1 (zh) Encoding method and encoder
JP7492051B2 (ja) Chroma block prediction method and apparatus
US20160234496A1 (en) Near visually lossless video recompression
KR101993966B1 (ko) System and method for flatness detection for display stream compression (DSC)
US20220295071A1 (en) Video encoding method, video decoding method, and corresponding apparatus
WO2020119814A1 (zh) Image reconstruction method and apparatus
CN110881126A (zh) Chroma block prediction method and device
CN113784126A (zh) Image encoding method, apparatus, device, and storage medium
WO2022166462A1 (zh) Encoding and decoding methods and related device
WO2020143585A1 (zh) Video encoder, video decoder, and corresponding methods
WO2021164014A1 (zh) Video encoding method and apparatus
WO2020253681A1 (zh) Method and apparatus for constructing a merge candidate motion information list, and codec
WO2021180220A1 (zh) Image encoding and decoding method and apparatus
US11985303B2 (en) Context modeling method and apparatus for flag
WO2020114393A1 (zh) Transform method, inverse transform method, video encoder, and video decoder
KR20200005748A (ko) Compound motion-compensated prediction
CN113766227B (zh) Quantization and inverse quantization methods and apparatuses for image encoding and decoding
CN111405279B (zh) Quantization and inverse quantization method and apparatus
WO2020108168A1 (zh) Video picture prediction method and apparatus
WO2020143292A1 (zh) Inter prediction method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20910708

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020910708

Country of ref document: EP

Effective date: 20220713

NENP Non-entry into the national phase

Ref country code: DE