CN111405279A - Quantization and inverse quantization method and device

Quantization and inverse quantization method and device

Info

Publication number
CN111405279A
Authority
CN
China
Prior art keywords
block
parameter
current sub
quantization parameter
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910005657.XA
Other languages
Chinese (zh)
Other versions
CN111405279B (en)
Inventor
余全合
郑建铧
王力强
何芸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Huawei Technologies Co Ltd
Original Assignee
Tsinghua University
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University and Huawei Technologies Co Ltd
Priority to CN201910005657.XA
Priority to PCT/CN2019/130400
Publication of CN111405279A
Application granted
Publication of CN111405279B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The method includes obtaining a size parameter of a current sub-block, and determining, according to the size parameter of the current sub-block, a first quantization parameter used when the current sub-block is quantized. In the embodiments of the application, determining the first quantization parameter according to the size parameter of the current sub-block helps improve the quantization effect and the amount of information that can be compressed in the quantization process while ensuring a certain image precision, and avoids the conventional configuration of quantization and inverse quantization parameters, in which the quantization parameter and the inverse quantization parameter corresponding to every sub-block in an entire LCU are the same, limiting the amount of information that can be compressed in the quantization process.

Description

Quantization and inverse quantization method and device
Technical Field
The present application relates to the field of video coding and decoding, and more particularly, to quantization and inverse quantization methods and apparatuses.
Background
Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 part 10 Advanced Video Coding (AVC), the video coding standard H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
Currently, compression techniques for digital video signals are a hot topic. Transform coding, a common compression method, aims to transform an image signal described in the spatial domain into the frequency domain by exploiting the strong spatial correlation of an image, and then to code the transformed coefficients, so as to achieve decorrelation and energy concentration. In the transform process, quantization techniques are often introduced to further compress the transform coefficients, thereby reducing the amount of digital video data to be transmitted. Accordingly, a quantized transform coefficient may be inverse quantized by an inverse quantization technique to restore it, mitigating the image precision lost in the quantization process.
However, the quantization parameter (QP) used in the conventional quantization process is configured in units of largest coding units (LCUs), that is, each LCU corresponds to one quantization parameter. This configuration of the quantization parameter is coarse, which limits the quantization effect.
Disclosure of Invention
The embodiments of the application provide quantization and inverse quantization methods and apparatuses, which help improve the quantization effect.
In a first aspect, the present application provides a quantization method, comprising:
obtaining the size parameter of the current sub-block;
and determining a first quantization parameter used when the current subblock is quantized according to the size parameter of the current subblock.
In the embodiment of the application, the first quantization parameter is determined according to the size parameter of the current sub-block, which helps improve the quantization effect and the amount of information that can be compressed in the quantization process while ensuring a certain image precision.
In a possible implementation manner, the determining, according to the size parameter of the current sub-block, the first quantization parameter used when the current sub-block is quantized includes:
and adjusting a second quantization parameter of the current sub-block to the first quantization parameter according to the size parameter of the current sub-block, where the second quantization parameter is a quantization parameter corresponding to the largest coding unit (LCU) in which the current sub-block is located.
In the embodiment of the present application, the second quantization parameter is adjusted to determine the first quantization parameter according to the size parameter of the current sub-block, which is compatible with the conventional method for determining the quantization parameter.
In a possible implementation manner, if the size parameter of the current subblock is greater than a first size threshold, the first quantization parameter is smaller than the second quantization parameter, or
If the size parameter of the current sub-block is smaller than a second size threshold, the first quantization parameter is larger than the second quantization parameter, wherein the first size threshold is larger than or equal to the second size threshold.
In the embodiment of the present application, for a current sub-block whose size parameter is greater than the first size threshold, a first quantization parameter smaller than the second quantization parameter may be used for quantization, so as to improve the compression rate. For a current sub-block whose size parameter is smaller than the second size threshold, quantization can be performed using a first quantization parameter larger than the second quantization parameter, so as to reduce the precision loss of the quantization process.
In one possible implementation manner, the adjusting the second quantization parameter of the current sub-block to the first quantization parameter according to the size parameter of the current sub-block includes:
adjusting the second quantization parameter to the first quantization parameter according to the size parameter of the current sub-block and the characteristic parameter of the current sub-block, wherein the characteristic parameter of the current sub-block comprises at least one of the following parameters: the type of the frame where the current sub-block is located, and the type of the block where the current sub-block is located.
In the embodiment of the application, the second quantization parameter is adjusted to determine the first quantization parameter according to the size parameter of the current sub-block and the characteristic parameter of the current sub-block, which helps make the first quantization parameter more reasonable.
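Before turning to those characteristic parameters, the size-threshold rule above can be made concrete with a minimal sketch (the threshold values, the unit QP offset, and the function name below are hypothetical; only the inequality directions come from the application). Recall the convention stated before the Drawings section: in this application, a smaller quantization parameter means a greater degree (coarser level) of quantization.

```cpp
// Sketch of adjusting the LCU-level (second) QP to the per-sub-block (first) QP.
int adjustQpBySize(int sizeParam, int secondQp,
                   int firstSizeThreshold, int secondSizeThreshold) {
    if (sizeParam > firstSizeThreshold) {
        return secondQp - 1;  // coarser quantization: improves the compression rate
    }
    if (sizeParam < secondSizeThreshold) {
        return secondQp + 1;  // finer quantization: reduces precision loss
    }
    return secondQp;          // otherwise keep the LCU-level (second) QP
}
```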
In a possible implementation manner, when the characteristic parameter of the current subblock includes a type of a frame in which the current subblock is located, the first quantization parameter determined when the frame to which the current subblock belongs is an I frame is greater than the first quantization parameter determined when the frame to which the current subblock belongs is a P frame or a B frame.
In the embodiment of the application, the second quantization parameter is adjusted to determine the first quantization parameter according to the size parameter of the current sub-block and the type of the frame in which the current sub-block is located, which helps achieve a balance between precision and compression rate.
In a possible implementation manner, when the characteristic parameter of the current sub-block includes the type of the block in which the current sub-block is located, where the type includes a chroma block or a luma block, the first quantization parameter determined when the current sub-block is a chroma block is smaller than the first quantization parameter determined when the current sub-block is a luma block.
In the embodiment of the application, the second quantization parameter is adjusted to determine the first quantization parameter according to the size parameter of the current sub-block and the type of the block in which the current sub-block is located, which helps balance the user's visual experience against the compression rate.
In a possible implementation manner, when the characteristic parameter of the current sub-block includes the type of the block in which the current sub-block is located, where the type includes an intra-frame block or an inter-frame block, the first quantization parameter determined when the current sub-block is an intra-frame block is greater than the first quantization parameter determined when the current sub-block is an inter-frame block.
In the embodiment of the application, the second quantization parameter is adjusted to determine the first quantization parameter according to the size parameter of the current sub-block and the prediction type of the block in which the current sub-block is located, which helps balance prediction precision and compression rate.
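A sketch of how the characteristic parameters above might refine the size-adjusted first quantization parameter (the unit offsets and all names are illustrative assumptions; only the inequality directions come from the application):

```cpp
enum class FrameType { I, P, B };
enum class BlockPlane { Luma, Chroma };
enum class BlockPred { Intra, Inter };

// Hypothetical refinement by characteristic parameters; a larger QP means
// finer quantization in this application's convention.
int refineFirstQp(int sizeAdjustedQp,
                  FrameType frameType, BlockPlane plane, BlockPred pred) {
    int qp = sizeAdjustedQp;
    if (frameType == FrameType::I)   qp += 1;  // I frame: larger first QP than P/B
    if (plane == BlockPlane::Chroma) qp -= 1;  // chroma block: smaller first QP than luma
    if (pred == BlockPred::Intra)    qp += 1;  // intra block: larger first QP than inter
    return qp;
}
```

Composed with adjustQpBySize from the earlier sketch, this would yield the first quantization parameter for the current sub-block.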
In a second aspect, the present application provides an inverse quantization method, comprising:
obtaining the size parameter of the current sub-block;
and determining a first inverse quantization parameter used when inverse quantization is carried out on the current subblock according to the size parameter of the current subblock.
In the embodiment of the application, the first inverse quantization parameter is determined according to the size parameter of the current sub-block, which helps improve the inverse quantization effect and the amount of information that can be compressed in the quantization process while ensuring a certain image precision.
In a possible implementation manner, the determining, according to the size parameter of the current sub-block, the first inverse quantization parameter used when the current sub-block is inverse quantized includes:
and adjusting a second inverse quantization parameter of the current sub-block to the first inverse quantization parameter according to the size parameter of the current sub-block, where the second inverse quantization parameter is an inverse quantization parameter corresponding to the largest coding unit (LCU) in which the current sub-block is located.
In the embodiment of the present application, adjusting the second inverse quantization parameter to determine the first inverse quantization parameter according to the size parameter of the current sub-block is compatible with the conventional method for determining inverse quantization parameters.
In a possible implementation manner, if the size parameter of the current subblock is greater than a first size threshold, the first inverse quantization parameter is greater than the second inverse quantization parameter, or
If the size parameter of the current sub-block is smaller than a second size threshold, the first inverse quantization parameter is smaller than the second inverse quantization parameter, wherein the first size threshold is greater than or equal to the second size threshold.
In the embodiment of the present application, for a current sub-block whose size parameter is greater than the first size threshold, a first inverse quantization parameter greater than the second inverse quantization parameter may be used for inverse quantization to match the first quantization parameter, which helps improve the compression rate of the quantization process. For a current sub-block whose size parameter is smaller than the second size threshold, inverse quantization may be performed using a first inverse quantization parameter smaller than the second inverse quantization parameter to match the first quantization parameter, so as to reduce the precision loss of the quantization process.
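Mirroring the encoder-side sketch above (again with hypothetical thresholds and a unit offset; none of these names come from the application):

```cpp
// Decoder-side mirror of the size-threshold rule: the first inverse
// quantization parameter moves opposite to the first quantization parameter
// so that the two match.
int adjustDqBySize(int sizeParam, int secondDq,
                   int firstSizeThreshold, int secondSizeThreshold) {
    if (sizeParam > firstSizeThreshold)  return secondDq + 1;  // pairs with a smaller first QP
    if (sizeParam < secondSizeThreshold) return secondDq - 1;  // pairs with a larger first QP
    return secondDq;
}
```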
In a possible implementation manner, the adjusting the second inverse quantization parameter of the current sub-block to the first inverse quantization parameter according to the size parameter of the current sub-block includes:
adjusting the second dequantization parameter to the first dequantization parameter according to the size parameter of the current sub-block and the characteristic parameter of the current sub-block, where the characteristic parameter of the current sub-block includes at least one of: the type of the frame where the current sub-block is located, and the type of the block where the current sub-block is located.
In the embodiment of the application, the second inverse quantization parameter is adjusted to determine the first inverse quantization parameter according to the size parameter of the current sub-block and the characteristic parameter of the current sub-block, which helps make the first inverse quantization parameter more reasonable.
In a possible implementation manner, when the characteristic parameter of the current subblock includes a type of a frame in which the current subblock is located, the first dequantization parameter determined when the frame to which the current subblock belongs is an I frame is smaller than the first dequantization parameter determined when the frame to which the current subblock belongs is a P frame or a B frame.
In the embodiment of the application, the second inverse quantization parameter is adjusted to determine the first inverse quantization parameter according to the size parameter of the current sub-block and the type of the frame in which the current sub-block is located, which helps achieve a balance between precision and compression rate.
In a possible implementation manner, when the characteristic parameter of the current sub-block includes the type of the block in which the current sub-block is located, where the type includes a chroma block or a luma block, the first inverse quantization parameter determined when the current sub-block is a chroma block is larger than the first inverse quantization parameter determined when the current sub-block is a luma block.
In the embodiment of the application, the second inverse quantization parameter is adjusted to determine the first inverse quantization parameter according to the size parameter of the current sub-block and the type of the block in which the current sub-block is located, which helps balance the user's visual experience against the compression rate.
In a possible implementation manner, when the characteristic parameter of the current sub-block includes the type of the block in which the current sub-block is located, where the type includes an intra-frame block or an inter-frame block, the first inverse quantization parameter determined when the current sub-block is an intra-frame block is smaller than the first inverse quantization parameter determined when the current sub-block is an inter-frame block.
In the embodiment of the application, the second inverse quantization parameter is adjusted to determine the first inverse quantization parameter according to the size parameter of the current sub-block and the prediction type of the block in which the current sub-block is located, which helps balance prediction precision and compression rate.
In a third aspect, a quantization apparatus is provided, which includes modules for performing any one of the possible implementations of the first aspect. For example, the quantization apparatus may include an obtaining module and a processing module.
In a fourth aspect, an inverse quantization apparatus is provided, which includes modules for performing any one of the possible implementations of the second aspect. For example, the inverse quantization apparatus may include an obtaining module and a processing module.
In a fifth aspect, there is provided an encoding apparatus comprising: a memory and a processor coupled to each other, the processor calling program code stored in the memory to perform any one of the possible implementations of the first and second aspects.
In a sixth aspect, there is provided a decoding device comprising: a memory and a processor coupled to each other, the processor calling program code stored in the memory to perform any one of the possible implementations of the second aspect.
In a seventh aspect, a computer program product is provided, the computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the above-mentioned aspects.
It should be noted that, all or part of the computer program code may be stored in the first storage medium, where the first storage medium may be packaged together with the processor or may be packaged separately from the processor, and this is not specifically limited in this embodiment of the present application.
In an eighth aspect, a computer-readable medium is provided, which stores program code, which, when run on a computer, causes the computer to perform the method in the above-mentioned aspects.
In a ninth aspect, a chip system is provided, where the chip system includes a processor for enabling a quantization apparatus to perform the functions referred to in the above aspects, for example, to generate, receive, send, or process the data and/or information referred to in the above methods. In one possible design, the chip system further includes a memory for storing program instructions and data necessary for the terminal device. The chip system may consist of a chip, or may include a chip and other discrete devices.
In a tenth aspect, a chip system is provided, which comprises a processor for enabling an inverse quantization means to perform the functions referred to in the above aspects, such as generating, receiving, transmitting, or processing data and/or information referred to in the above methods. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the network device. The chip system may be formed by a chip, or may include a chip and other discrete devices.
In the embodiments of the present application, the larger the quantization parameter, the lower the degree of quantization; the smaller the quantization parameter, the greater the degree of quantization. It should be understood that the third to tenth aspects of the present application are consistent with the technical solutions of the first and second aspects, and the aspects and their corresponding possible embodiments achieve similar advantageous effects, which are not described again.
Drawings
Fig. 1 is a block diagram of an example video encoding and decoding system 10 for implementing embodiments of the present application.
FIG. 2 is a block diagram of an example of a video coding system 40 for implementing embodiments of the present application.
Fig. 3 is a block diagram of an example structure of an encoder 20 for implementing embodiments of the present application.
Fig. 4 is a block diagram of an example structure of a decoder 30 for implementing embodiments of the present application.
FIG. 5 is a block diagram of an example of a video coding apparatus 400 for implementing an embodiment of the present application.
Fig. 6 is a block diagram of another example of an encoding device or a decoding device for implementing embodiments of the present application.
Fig. 7 is a schematic flow chart of a quantization method of an embodiment of the present application.
Fig. 8 is a schematic diagram illustrating a dividing manner of the size parameter according to an embodiment of the present application.
Fig. 9 is a schematic flow chart of an inverse quantization method of an embodiment of the present application.
Fig. 10 is a schematic diagram of a quantization apparatus according to an embodiment of the present application.
Fig. 11 is a schematic diagram of an inverse quantization apparatus according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings. In the following description, reference is made to the accompanying drawings which form a part hereof and in which is shown by way of illustration specific aspects of embodiments of the present application or in which specific aspects of embodiments of the present application may be employed. It should be understood that embodiments of the present application may be used in other ways and may include structural or logical changes not depicted in the drawings. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present application is defined by the appended claims. For example, it should be understood that the disclosure in connection with the described methods may equally apply to the corresponding apparatus or system for performing the methods, and vice versa. For example, if one or more particular method steps are described, the corresponding apparatus may comprise one or more units, such as functional units, to perform the described one or more method steps (e.g., a unit performs one or more steps, or multiple units, each of which performs one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the figures. On the other hand, for example, if a particular apparatus is described based on one or more units, such as functional units, the corresponding method may comprise one step to perform the functionality of the one or more units (e.g., one step performs the functionality of the one or more units, or multiple steps, each of which performs the functionality of one or more of the plurality of units), even if such one or more steps are not explicitly described or illustrated in the figures. Further, it is to be understood that features of the various exemplary embodiments and/or aspects described herein may be combined with each other, unless explicitly stated otherwise.
The technical scheme related to the embodiment of the application can be applied to the existing video coding standards (such as H.264, HEVC and the like), and can also be applied to the future video coding standards (such as H.266 standard). The terminology used in the description of the embodiments section of the present application is for the purpose of describing particular embodiments of the present application only and is not intended to be limiting of the present application. Some concepts that may be involved in embodiments of the present application are briefly described below.
Video coding generally refers to processing a sequence of pictures that form a video or video sequence. In the field of video coding, the terms "picture", "frame" and "image" may be used as synonyms. Video coding as used herein means video encoding or video decoding. Video encoding is performed on the source side, typically including processing (e.g., compressing) the original video pictures to reduce the amount of data required to represent them, for more efficient storage and/or transmission. Video decoding is performed on the destination side, typically involving inverse processing with respect to the encoder to reconstruct the video pictures. References in the embodiments to the "coding" of video pictures should be understood as referring to the "encoding" or "decoding" of a video sequence. The combination of the encoding part and the decoding part is also called codec (encoding and decoding).
For example, the H.264 standard has macroblocks (MBs), and a macroblock can be further divided into multiple prediction blocks (partitions) that can be used for predictive coding. The High Efficiency Video Coding (HEVC) standard adopts basic concepts such as the coding unit (CU), the prediction unit (PU), and the transform unit (TU). These block units are divided by function, and a completely new tree-based structure is used to describe them. For example, a CU may be divided into smaller CUs according to a quadtree, and a smaller CU may continue to be divided, thereby forming a quadtree structure; the CU is the basic unit for dividing and encoding a picture to be coded. A PU may correspond to a prediction block and is the basic unit of predictive coding; a CU is further divided into multiple PUs according to a partitioning mode. A TU may correspond to a transform block and is the basic unit for transforming a prediction residual; the PU and the TU have similar tree structures of their own.
For example, in HEVC, a CTU is split into multiple CUs by using a quadtree structure represented as a coding tree. A decision is made at the CU level whether to encode a picture region using inter-picture (temporal) or intra-picture (spatial) prediction. Each CU may be further split into one, two, or four PUs according to the PU split type. The same prediction process is applied within one PU, and the relevant information is transmitted to the decoder on a PU basis. After the residual block is obtained by applying the prediction process based on the PU split type, the CU may be partitioned into transform units (TUs) according to other quadtree structures similar to the coding tree used for the CU. In recent developments of video compression techniques, quadtree plus binary tree (QTBT) partitioning is used to partition the coding blocks. In the QTBT block structure, a CU may be square or rectangular in shape.
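As a minimal sketch of the recursive quadtree splitting described above (the split decision below is a stand-in; real encoders decide splits by rate-distortion cost, and none of these names come from the application):

```cpp
#include <functional>
#include <vector>

// Recursive quadtree splitting as used for CTU -> CU partitioning.
struct Block { int x, y, size; };

void splitQuadtree(const Block& b, int minSize,
                   const std::function<bool(const Block&)>& shouldSplit,
                   std::vector<Block>& leafCus) {
    if (b.size > minSize && shouldSplit(b)) {
        int h = b.size / 2;
        // four equally sized square children, as in an HEVC coding tree
        splitQuadtree({b.x,     b.y,     h}, minSize, shouldSplit, leafCus);
        splitQuadtree({b.x + h, b.y,     h}, minSize, shouldSplit, leafCus);
        splitQuadtree({b.x,     b.y + h, h}, minSize, shouldSplit, leafCus);
        splitQuadtree({b.x + h, b.y + h, h}, minSize, shouldSplit, leafCus);
    } else {
        leafCus.push_back(b);  // this block becomes a leaf CU
    }
}
```

For instance, calling splitQuadtree({0, 0, 64}, 8, decideSplit, leaves) on a 64×64 CTU would collect the leaf CUs of one coding tree.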
Herein, for convenience of description and understanding, an image block to be encoded in a currently encoded image may be referred to as a current block, e.g., in encoding, referring to a block currently being encoded; in decoding, refers to the block currently being decoded. Accordingly, the current sub-block can be understood as the sub-block currently being encoded in the encoding process; which can be understood in the decoding process as the subblock currently being decoded.
In the case of lossless video coding, the original video picture can be reconstructed, i.e., the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission). In the case of lossy video coding, the amount of data needed to represent the video picture is reduced by performing further compression, e.g., by quantization, while the decoder side cannot fully reconstruct the video picture, i.e., the quality of the reconstructed video picture is lower or worse than the quality of the original video picture.
Video coding standards since H.261 belong to the class of "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding in the transform domain for applying quantization). Each picture of a video sequence is typically partitioned into non-overlapping sets of blocks, typically encoded at the block level. In other words, the encoder side typically processes, i.e., encodes, video at the block (video block) level, e.g., generates a prediction block by spatial (intra-picture) prediction and temporal (inter-picture) prediction, subtracts the prediction block from the current block (the currently processed or to-be-processed block) to obtain a residual block, and transforms and quantizes the residual block in the transform domain to reduce the amount of data to be transmitted (compressed), while the decoder side applies the inverse processing relative to the encoder to the encoded or compressed block to reconstruct the current block for representation. In addition, the encoder replicates the decoder processing loop such that the encoder and decoder generate the same prediction (e.g., intra-prediction and inter-prediction) and/or reconstruction for processing, i.e., encoding, subsequent blocks.
The system architecture to which the embodiments of the present application apply is described below. Referring to fig. 1, fig. 1 schematically shows a block diagram of a video encoding and decoding system 10 to which an embodiment of the present application is applied. As shown in fig. 1, video encoding and decoding system 10 may include a source device 12 and a destination device 14, source device 12 generating encoded video data and, thus, source device 12 may be referred to as a video encoding apparatus. Destination device 14 may decode the encoded video data generated by source device 12, and thus destination device 14 may be referred to as a video decoding apparatus. Various implementations of source apparatus 12, destination apparatus 14, or both may include one or more processors and memory coupled to the one or more processors. The memory can include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein. Source apparatus 12 and destination apparatus 14 may comprise a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, on-board computers, wireless communication devices, or the like.
Although fig. 1 depicts source apparatus 12 and destination apparatus 14 as separate apparatuses, an apparatus embodiment may also include both source apparatus 12 and destination apparatus 14 or the functionality of both, i.e., source apparatus 12 or corresponding functionality and destination apparatus 14 or corresponding functionality. In such embodiments, source device 12 or corresponding functionality and destination device 14 or corresponding functionality may be implemented using the same hardware and/or software, or using separate hardware and/or software, or any combination thereof.
A communication connection may be made between source device 12 and destination device 14 over link 13, and destination device 14 may receive encoded video data from source device 12 via link 13. Link 13 may include one or more media or devices capable of moving encoded video data from source device 12 to destination device 14. In one example, link 13 may include one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. In this example, source apparatus 12 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination apparatus 14. The one or more communication media may include wireless and/or wired communication media such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include a router, switch, base station, or other apparatus that facilitates communication from source apparatus 12 to destination apparatus 14.
Source device 12 includes an encoder 20, and in the alternative, source device 12 may also include a picture source 16, a picture preprocessor 18, and a communication interface 22. In one implementation, the encoder 20, the picture source 16, the picture preprocessor 18, and the communication interface 22 may be hardware components of the source device 12 or may be software programs of the source device 12. Described below, respectively:
the picture source 16, which may include or be any kind of picture capturing device, is used for capturing, for example, a real-world picture, and/or any kind of picture or comment generation device (for screen content encoding, some text on the screen is also considered as part of the picture or image to be encoded), such as a computer graphics processor for generating a computer animation picture, or any kind of device for acquiring and/or providing a real-world picture, a computer animation picture (e.g., screen content, a Virtual Reality (VR) picture), and/or any combination thereof (e.g., an Augmented Reality (AR) picture). The picture source 16 may be a camera for capturing pictures or a memory for storing pictures, and the picture source 16 may also include any kind of (internal or external) interface for storing previously captured or generated pictures and/or for obtaining or receiving pictures. When picture source 16 is a camera, picture source 16 may be, for example, an integrated camera local or integrated in the source device; when the picture source 16 is a memory, the picture source 16 may be an integrated memory local or integrated in the source device. When the picture source 16 comprises an interface, the interface may illustratively be an external interface that receives pictures from an external video source, the external video source may illustratively be an external picture capturing device such as a camera, an external memory, or an external picture generating device, which may illustratively be an external computer graphics processor, a computer, or a server. The interface may be any kind of interface according to any proprietary or standardized interface protocol, e.g. a wired or wireless interface, an optical interface.
In order to represent color, three color components are typically employed, i.e., the picture may be represented as or contain three sample arrays. For example, in RGB format or color space, a picture includes corresponding red, green, and blue sample arrays. However, in video coding, each pixel is typically represented in a luminance/chrominance format or color space, e.g., a picture in YUV format includes a luminance component indicated by Y (sometimes also indicated by L) and two chrominance components indicated by U and V.
Picture pre-processor 18 is configured to receive original picture data 17 and perform pre-processing on original picture data 17 to obtain pre-processed picture 19 or pre-processed picture data 19. For example, the pre-processing performed by picture pre-processor 18 may include trimming, color format conversion (e.g., from RGB format to YUV format), toning, or de-noising.
An encoder 20 (or video encoder 20) for receiving the pre-processed picture data 19, processing the pre-processed picture data 19 with a relevant prediction mode (such as the prediction mode in various embodiments herein), thereby providing encoded picture data 21 (structural details of the encoder 20 will be described further below based on fig. 3 or fig. 5 or fig. 6). In some embodiments, the encoder 20 may be used to perform various embodiments described hereinafter to implement the application of the quantization and inverse quantization methods described in this application on the encoding side.
A communication interface 22, which may be used to receive encoded picture data 21 and may transmit encoded picture data 21 over link 13 to destination device 14 or any other device (e.g., memory) for storage or direct reconstruction, which may be any device for decoding or storage. Communication interface 22 may, for example, be used to encapsulate encoded picture data 21 into a suitable format, such as a data packet, for transmission over link 13.
Destination device 14 includes a decoder 30, and optionally destination device 14 may also include a communication interface 28, a picture post-processor 32, and a display device 34. Described below, respectively:
communication interface 28 may be used to receive encoded picture data 21 from source device 12 or any other source, such as a storage device, which may be an encoded picture data storage device. The communication interface 28 may be used to transmit or receive the encoded picture data 21 by way of the link 13 between the source device 12 and the destination device 14 or by way of any kind of network, the link 13 may be a direct wired or wireless connection, any kind of network such as a wired or wireless network or any combination thereof, or any kind of private and public networks, or any combination thereof. Communication interface 28 may, for example, be used to decapsulate data packets transmitted by communication interface 22 to obtain encoded picture data 21.
Both communication interface 28 and communication interface 22 may be configured as a one-way communication interface or a two-way communication interface, and may be used, for example, to send and receive messages to establish a connection, acknowledge and exchange any other information related to a communication link and/or data transfer, such as an encoded picture data transfer.
A decoder 30 (otherwise referred to as decoder 30) for receiving the encoded picture data 21 and providing decoded picture data 31 or decoded pictures 31 (structural details of the decoder 30 will be described further below based on fig. 4 or fig. 5 or fig. 6). In some embodiments, the decoder 30 may be configured to perform various embodiments described hereinafter to implement the application of the inverse quantization method described in the present application on the decoding side.
A picture post-processor 32 for performing post-processing on the decoded picture data 31 (also referred to as reconstructed picture data) to obtain post-processed picture data 33. Post-processing performed by picture post-processor 32 may include: color format conversion (e.g., from YUV format to RGB format), toning, trimming or resampling, or any other process may also be used to transmit post-processed picture data 33 to display device 34.
The display device 34 may be or may include any type of display for presenting the reconstructed picture, such as an integrated or external display or monitor. For example, the display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display, a digital light processor (DLP), or any other type of display.
It will be apparent to those skilled in the art from this description that the existence and (exact) division of the functionality of the different elements of source device 12 and/or destination device 14 shown in fig. 1 may vary depending on the actual device and application. Source device 12 and destination device 14 may comprise any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, a mobile phone, a smartphone, a tablet or tablet computer, a camcorder, a desktop computer, a set-top box, a television, a camera, an in-vehicle device, a display device, a digital media player, a video game console, a video streaming device (e.g., a content service server or a content distribution server), a broadcast receiver device, a broadcast transmitter device, etc., and may use no operating system or any type of operating system.
Both encoder 20 and decoder 30 may be implemented as any of a variety of suitable circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combinations thereof. If the techniques are implemented in part in software, the device may store instructions of the software in a suitable non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of this application. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered as one or more processors.
In some cases, the video encoding and decoding system 10 shown in fig. 1 is merely an example, and the techniques of this application may be applicable to video encoding settings (e.g., video encoding or video decoding) that do not necessarily involve any data communication between the encoding and decoding devices. In other examples, the data may be retrieved from local storage, streamed over a network, and so on. A video encoding device may encode and store data to a memory, and/or a video decoding device may retrieve and decode data from a memory. In some examples, the encoding and decoding are performed by devices that do not communicate with each other, but merely encode data to and/or retrieve data from memory and decode data.
Referring to fig. 2, fig. 2 is an illustrative diagram of an example of a video coding system 40 including encoder 20 of fig. 3 and/or decoder 30 of fig. 4, according to an embodiment of the present application. Video coding system 40 may implement a combination of the various techniques of the embodiments of the present application. In the illustrated embodiment, video coding system 40 may include an imaging device 41, an encoder 20, a decoder 30 (and/or a video codec implemented by logic circuitry 47 of a processing unit 46), an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
As shown in fig. 2, the imaging device 41, the antenna 42, the processing unit 46, the logic circuit 47, the encoder 20, the decoder 30, the processor 43, the memory 44, and/or the display device 45 are capable of communicating with each other. As discussed, although video coding system 40 is depicted with encoder 20 and decoder 30, in different examples video coding system 40 may include only encoder 20 or only decoder 30.
In some instances, antenna 42 may be used to transmit or receive an encoded bitstream of video data. Additionally, in some instances, display device 45 may be used to present video data. In some examples, logic circuitry 47 may be implemented by processing unit 46. The processing unit 46 may comprise ASIC logic, a graphics processor, a general purpose processor, or the like. Video coding system 40 may also include an optional processor 43, which similarly may comprise ASIC logic, a graphics processor, a general purpose processor, or the like. In some examples, the logic circuitry 47 may be implemented in hardware, such as video-encoding-specific hardware, and the processor 43 may be implemented by general purpose software, an operating system, and so on. In addition, the memory 44 may be any type of memory, such as a volatile memory (e.g., static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or a nonvolatile memory (e.g., flash memory, etc.). In a non-limiting example, memory 44 may be implemented by a cache memory. In some instances, logic circuitry 47 may access memory 44 (e.g., to implement an image buffer). In other examples, logic circuitry 47 and/or processing unit 46 may include memory (e.g., a cache, etc.) for implementing an image buffer or the like.
In some examples, encoder 20, implemented by logic circuitry, may include an image buffer (e.g., implemented by processing unit 46 or memory 44) and a graphics processing unit (e.g., implemented by processing unit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include an encoder 20 implemented by logic circuitry 47 to implement the various modules discussed with reference to fig. 3 and/or any other encoder system or subsystem described herein. Logic circuitry may be used to perform various operations discussed herein.
In some examples, decoder 30 may be implemented by logic circuitry 47 in a similar manner to implement the various modules discussed with reference to decoder 30 of fig. 4 and/or any other decoder system or subsystem described herein. In some examples, logic-circuit-implemented decoder 30 may include an image buffer (e.g., implemented by processing unit 46 or memory 44) and a graphics processing unit (e.g., implemented by processing unit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include a decoder 30 implemented by logic circuitry 47 to implement the various modules discussed with reference to fig. 4 and/or any other decoder system or subsystem described herein.
In some instances, antenna 42 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data related to the encoded video frame, indicators, index values, mode selection data, etc., discussed herein, such as data related to the encoding partition (e.g., transform coefficients or quantized transform coefficients, (as discussed) optional indicators, and/or data defining the encoding partition). Video coding system 40 may also include a decoder 30 coupled to antenna 42 and used to decode the encoded bitstream. The display device 45 is used to present video frames.
It should be understood that for the example described with reference to encoder 20 in the embodiments of the present application, decoder 30 may be used to perform the reverse process. With respect to signaling syntax elements, decoder 30 may be configured to receive and parse such syntax elements and decode the associated video data accordingly. In some examples, encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, decoder 30 may parse such syntax elements and decode the relevant video data accordingly.
It should be noted that, the quantization and dequantization methods described in the embodiments of the present application exist in both the encoder 20 and the decoder 30, where the encoder 20 and the decoder 30 in the embodiments of the present application may be a video standard protocol such as h.263, h.264, HEVC, MPEG-2, MPEG-4, VP8, VP9, or a corresponding encoder/decoder of a next generation video standard protocol (e.g., h.266).
Referring to fig. 3, fig. 3 shows a schematic/conceptual block diagram of an example of an encoder 20 for implementing embodiments of the present application. In the example of fig. 3, encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220, a Decoded Picture Buffer (DPB) 230, a prediction processing unit 260, and an entropy encoding unit 270. Prediction processing unit 260 may include inter prediction unit 244, intra prediction unit 254, and mode selection unit 262. Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown). The encoder 20 shown in fig. 3 may also be referred to as a hybrid video encoder or a video encoder according to a hybrid video codec.
For example, the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form a forward signal path of the encoder 20, and, for example, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the Decoded Picture Buffer (DPB) 230, the prediction processing unit 260 form a backward signal path of the encoder, wherein the backward signal path of the encoder corresponds to a signal path of a decoder (see the decoder 30 in fig. 4).
The encoder 20 receives, e.g., via an input 202, a picture 201 or an image block 203 of a picture 201, e.g., a picture in a sequence of pictures forming a video or a video sequence. Image block 203 may also be referred to as a current picture block or a picture block to be encoded, and picture 201 may be referred to as a current picture or a picture to be encoded (especially when the current picture is distinguished from other pictures in video encoding, such as previously encoded and/or decoded pictures in the same video sequence, i.e., a video sequence that also includes the current picture).
An embodiment of the encoder 20 may comprise a partitioning unit (not shown in fig. 3) for partitioning the picture 201 into a plurality of blocks, e.g. image blocks 203, typically into a plurality of non-overlapping blocks. The partitioning unit may be used to use the same block size for all pictures in a video sequence and a corresponding grid defining the block size, or to alter the block size between pictures or subsets or groups of pictures and partition each picture into corresponding blocks.
In one example, prediction processing unit 260 of encoder 20 may be used to perform any combination of the above-described segmentation techniques.
Like picture 201, image block 203 is also or can be considered as a two-dimensional array or matrix of sample points having sample values, although its size is smaller than picture 201. In other words, the image block 203 may comprise, for example, one sample array (e.g., a luma array in the case of a black and white picture 201) or three sample arrays (e.g., a luma array and two chroma arrays in the case of a color picture) or any other number and/or class of arrays depending on the color format applied. The number of sampling points in the horizontal and vertical directions (or axes) of the image block 203 defines the size of the image block 203.
The encoder 20 as shown in fig. 3 is used to encode the picture 201 block by block, e.g. performing encoding and prediction for each image block 203.
The residual calculation unit 204 is configured to calculate a residual block 205 based on the picture image block 203 and the prediction block 265 (further details of the prediction block 265 are provided below), e.g. by subtracting sample values of the prediction block 265 from sample values of the picture image block 203 sample by sample (pixel by pixel) to obtain the residual block 205 in the sample domain.
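A minimal sketch of this sample-by-sample residual calculation (the function name and the flat-vector representation of a block are our assumptions for illustration):

```cpp
#include <vector>

// Residual calculation of unit 204: subtract the prediction block from the
// current image block, sample by sample, in the sample domain.
std::vector<int> computeResidual(const std::vector<int>& block,
                                 const std::vector<int>& prediction) {
    std::vector<int> residual(block.size());
    for (size_t i = 0; i < block.size(); ++i) {
        residual[i] = block[i] - prediction[i];  // sample-domain residual
    }
    return residual;
}
```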
The transform processing unit 206 is configured to apply a transform, such as a Discrete Cosine Transform (DCT) or a Discrete Sine Transform (DST), on the sample values of the residual block 205 to obtain transform coefficients 207 in a transform domain. The transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
The transform processing unit 206 may be used to apply integer approximations of DCT/DST, such as the transforms specified for HEVC/H.265. Such integer approximations are typically scaled by some factor compared to the orthogonal DCT transform. To maintain the norm of the residual block processed by the forward transform and the inverse transform, an additional scaling factor is applied as part of the transform process. The scaling factor is typically selected based on certain constraints, e.g., the scaling factor being a power of 2 for shift operations, the bit depth of the transform coefficients, and a trade-off between accuracy and implementation cost. For example, a specific scaling factor may be specified for the inverse transform on the decoder 30 side (and for the corresponding inverse transform on the encoder 20 side, performed, for example, by inverse transform processing unit 212), and correspondingly, a corresponding scaling factor may be specified for the forward transform on the encoder 20 side by transform processing unit 206.
Quantization unit 208 is used to quantize transform coefficients 207, e.g., by applying scalar quantization or vector quantization, to obtain quantized transform coefficients 209. Quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209. The quantization process may reduce the bit depth associated with some or all of transform coefficients 207. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. The degree of quantization may be modified by adjusting a quantization parameter (QP). For example, for scalar quantization, different scales may be applied to achieve finer or coarser quantization. Smaller quantization step sizes correspond to finer quantization and larger quantization step sizes correspond to coarser quantization. An appropriate quantization step size may be indicated by a quantization parameter (QP); for example, the quantization parameter may be an index into a predefined set of suitable quantization step sizes. For example, a smaller quantization parameter may correspond to finer quantization (a smaller quantization step size) and a larger quantization parameter may correspond to coarser quantization (a larger quantization step size), or vice versa. Quantization may comprise division by a quantization step size, and the corresponding dequantization, e.g. performed by inverse quantization unit 210, may comprise multiplication by the quantization step size. Embodiments according to some standards, such as HEVC, may use a quantization parameter to determine the quantization step size. In general, the quantization step size may be calculated based on the quantization parameter using a fixed-point approximation of an equation that includes division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which may be modified because of the scaling used in the fixed-point approximation of the equation for the quantization step size and quantization parameter. In one example implementation, the scaling of the inverse transform and of the dequantization may be combined. Alternatively, a custom quantization table may be used and signaled from the encoder to the decoder, e.g., in the bitstream. Quantization is a lossy operation, where the larger the quantization step size, the greater the loss.
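For concreteness, the well-known mapping used by HEVC between the quantization parameter and the step size is quoted below purely as an example of such an index (it is not the mapping used by this application, whose QP convention, noted before the Drawings section, runs in the opposite direction):

$$Q_{\mathrm{step}}(QP) \approx 2^{(QP - 4)/6}$$

Each increase of QP by 6 thus doubles the quantization step; for instance, QP = 22 gives a step size of 2^3 = 8.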
The process of quantizing the transform coefficient 207 by the quantization unit 208 can be expressed by formula (1):
C_Q = (Y × Q(qp) + (1 << (s - 1)) × m) >> s, where s = 15 + r - n - F

where Y represents the transform coefficient to be quantized, Q(qp) represents the quantization parameter whose index is qp, C_Q represents the quantized transform coefficient, r represents the intermediate limiting bit width (usually, r = 16), F represents the logarithmic value of the transform block area, and n represents the bit width of the residual (transform coefficient); m = 10/31 if the transform coefficient corresponds to an intra-coded block, and m = 10/62 if the transform coefficient corresponds to an inter-coded block.
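A minimal Python sketch of formula (1) follows, with r = 16 and the values of m taken from the description above; representing Q(qp) as a list indexed by qp is an assumption about how the quantization parameter table is stored, and the helper name is hypothetical.

```python
def quantize(y: int, q_table: list, qp: int, n: int, f: int,
             intra: bool, r: int = 16) -> int:
    # Formula (1): C_Q = (Y * Q(qp) + (1 << (s - 1)) * m) >> s,
    # with s = 15 + r - n - F.
    s = 15 + r - n - f
    m = 10 / 31 if intra else 10 / 62      # rounding factor from the text
    offset = int((1 << (s - 1)) * m)       # integer rounding offset
    return (y * q_table[qp] + offset) >> s
```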
The inverse quantization unit 210 is configured to apply the inverse quantization of quantization unit 208 on the quantized coefficients to obtain dequantized coefficients 211, e.g., to apply, based on or using the same quantization step size as quantization unit 208, the inverse of the quantization scheme applied by quantization unit 208. The dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 and correspond to transform coefficients 207, although they are typically not identical to the transform coefficients due to the loss introduced by quantization.
The inverse quantization unit 210 may inverse-quantize the quantized transform coefficient according to formula (2):
C = (C_Q × DQ(qp) + (1 << (s - 1))) >> s, where s = shift(qp) + n + F + 1 - r

where C_Q represents the quantized transform coefficient, DQ(qp) represents the inverse quantization parameter whose index is qp, r represents the intermediate limiting bit width (usually, r = 16), F represents the logarithmic value of the transform block area, n represents the bit width of the residual (transform coefficient), and shift(qp) represents the offset whose index is qp.
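Correspondingly, a minimal sketch of formula (2), under the same assumption that DQ(qp) and shift(qp) are table lookups:

```python
def dequantize(c_q: int, dq_table: list, shift_table: list, qp: int,
               n: int, f: int, r: int = 16) -> int:
    # Formula (2): C = (C_Q * DQ(qp) + (1 << (s - 1))) >> s,
    # with s = shift(qp) + n + F + 1 - r.
    s = shift_table[qp] + n + f + 1 - r
    return (c_q * dq_table[qp] + (1 << (s - 1))) >> s
```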
The inverse transform processing unit 212 is configured to apply an inverse transform of the transform applied by the transform processing unit 206, for example, an inverse Discrete Cosine Transform (DCT) or an inverse Discrete Sine Transform (DST), to obtain an inverse transform block 213 in the sample domain. The inverse transform block 213 may also be referred to as an inverse transform dequantized block 213 or an inverse transform residual block 213.
The reconstruction unit 214 (e.g., summer 214) is used to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the sample domain, e.g., to add sample values of the reconstructed residual block 213 to sample values of the prediction block 265.
Optionally, a buffer unit 216 (or simply "buffer" 216), such as a line buffer 216, is used to buffer or store the reconstructed block 215 and corresponding sample values, for example, for intra prediction. In other embodiments, the encoder may be used to use the unfiltered reconstructed block and/or corresponding sample values stored in buffer unit 216 for any class of estimation and/or prediction, such as intra prediction.
For example, an embodiment of encoder 20 may be configured such that buffer unit 216 is used not only to store reconstructed blocks 215 for intra prediction 254, but also for loop filter unit 220 (not shown in fig. 3), and/or such that buffer unit 216 and decoded picture buffer unit 230 form one buffer, for example. Other embodiments may be used to use filtered block 221 and/or blocks or samples from decoded picture buffer 230 (neither shown in fig. 3) as input or basis for intra prediction 254.
Loop filter unit 220 (or simply "loop filter" 220) is used to filter reconstructed block 215 to obtain filtered block 221, thereby smoothing pixel transitions or otherwise improving video quality. Loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter. Although loop filter unit 220 is shown in FIG. 3 as an in-loop filter, in other configurations loop filter unit 220 may be implemented as a post-loop filter.
Embodiments of encoder 20 (correspondingly, loop filter unit 220) may be configured to output loop filter parameters (e.g., sample adaptive offset information), e.g., directly or after entropy encoding by entropy encoding unit 270 or any other entropy encoding unit, e.g., such that decoder 30 may receive and apply the same loop filter parameters for decoding.
Decoded Picture Buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by encoder 20 in encoding video data. DPB 230 may be formed from any of a variety of memory devices, such as Dynamic Random Access Memory (DRAM) including Synchronous DRAM (SDRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM), or other types of memory devices. The DPB 230 and the buffer 216 may be provided by the same memory device or separate memory devices. In a certain example, a Decoded Picture Buffer (DPB) 230 is used to store filtered blocks 221. Decoded picture buffer 230 may further be used to store other previous filtered blocks, such as previous reconstructed and filtered blocks 221, of the same current picture or of a different picture, such as a previous reconstructed picture, and may provide the complete previous reconstructed, i.e., decoded picture (and corresponding reference blocks and samples) and/or the partially reconstructed current picture (and corresponding reference blocks and samples), e.g., for inter prediction. In a certain example, if reconstructed block 215 is reconstructed without in-loop filtering, Decoded Picture Buffer (DPB) 230 is used to store reconstructed block 215.
Prediction processing unit 260, also referred to as block prediction processing unit 260, is used to receive or obtain image block 203 (current image block 203 of current picture 201) and reconstructed picture data, e.g., reference samples of the same (current) picture from buffer 216 and/or reference picture data 231 of one or more previously decoded pictures from decoded picture buffer 230, and to process such data for prediction, i.e., to provide prediction block 265, which may be inter-predicted block 245 or intra-predicted block 255.
The mode selection unit 262 may be used to select a prediction mode (e.g., intra or inter prediction mode) and/or a corresponding prediction block 245 or 255 used as the prediction block 265 to calculate the residual block 205 and reconstruct the reconstructed block 215.
Embodiments of mode selection unit 262 may be used to select prediction modes (e.g., from those supported by prediction processing unit 260) that provide the best match or the smallest residual (smallest residual means better compression in transmission or storage), or that provide the smallest signaling overhead (smallest signaling overhead means better compression in transmission or storage), or both. The mode selection unit 262 may be configured to determine a prediction mode based on Rate Distortion Optimization (RDO), i.e., select a prediction mode that provides the minimum rate distortion optimization, or select a prediction mode in which the associated rate distortion at least meets the prediction mode selection criteria.
Entropy encoding unit 270 is configured to apply an entropy encoding algorithm or scheme (e.g., a variable length coding (VLC) scheme, a context-adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to individual or all of the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, and/or loop filter parameters (or to apply no entropy encoding) to obtain encoded picture data that may be output by output 272, e.g., in the form of encoded bitstream 21. The encoded bitstream may be transmitted to video decoder 30, or archived for later transmission to or retrieval by video decoder 30.
Other structural variations of video encoder 20 may be used to encode the video stream. For example, the non-transform based encoder 20 may quantize the residual signal directly without the transform processing unit 206 for certain blocks or frames. In another embodiment, encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
In the embodiment of the present application, the encoder 20 may be used to implement the quantization and inverse quantization methods described in the embodiments below. For example, the quantization method of the embodiment of the present application may be performed by the quantization unit 208 in the encoder 20, the inverse quantization parameter (e.g., inverse quantization step size) in the embodiment of the present application may be determined by the inverse quantization unit 210 in the encoder 20, and the inverse quantization method may be performed.
It should be understood that other structural variations of the video encoder 20 may be used to encode the video stream. For example, for some image blocks or image frames, video encoder 20 may quantize the residual signal directly without processing by transform processing unit 206 and, correspondingly, without processing by inverse transform processing unit 212; alternatively, for some image blocks or image frames, the video encoder 20 does not generate residual data and accordingly does not need to be processed by the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212; alternatively, video encoder 20 may store the reconstructed image block directly as a reference block without processing by filter 220; alternatively, the quantization unit 208 and the inverse quantization unit 210 in the video encoder 20 may be merged together. The loop filter 220 is optional, and in the case of lossless compression coding, the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212 are optional. It should be appreciated that the inter prediction unit 244 and the intra prediction unit 254 may be selectively enabled according to different application scenarios.
Referring to fig. 4, fig. 4 shows a schematic/conceptual block diagram of an example of a decoder 30 for implementing embodiments of the present application. Video decoder 30 is operative to receive encoded picture data (e.g., an encoded bitstream) 21, e.g., encoded by encoder 20, to obtain a decoded picture 231. During the decoding process, video decoder 30 receives video data, such as an encoded video bitstream representing picture blocks of an encoded video slice and associated syntax elements, from video encoder 20.
In the example of fig. 4, decoder 30 includes entropy decoding unit 304, inverse quantization unit 310, inverse transform processing unit 312, reconstruction unit 314 (e.g., summer 314), buffer 316, loop filter 320, decoded picture buffer 330, and prediction processing unit 360. The prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362. In some examples, video decoder 30 may perform a decoding pass that is substantially reciprocal to the encoding pass described with reference to video encoder 20 of fig. 3.
Entropy decoding unit 304 is to perform entropy decoding on encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded encoding parameters (not shown in fig. 4), e.g., any or all of inter-prediction, intra-prediction parameters, loop filter parameters, and/or other syntax elements (decoded). The entropy decoding unit 304 is further for forwarding the inter-prediction parameters, the intra-prediction parameters, and/or other syntax elements to the prediction processing unit 360. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
Inverse quantization unit 310 may be functionally identical to inverse quantization unit 210, inverse transform processing unit 312 may be functionally identical to inverse transform processing unit 212, reconstruction unit 314 may be functionally identical to reconstruction unit 214, buffer 316 may be functionally identical to buffer 216, loop filter 320 may be functionally identical to loop filter 220, and decoded picture buffer 330 may be functionally identical to decoded picture buffer 230.
Prediction processing unit 360 may include inter prediction unit 344 and intra prediction unit 354, where inter prediction unit 344 may be functionally similar to inter prediction unit 244 and intra prediction unit 354 may be functionally similar to intra prediction unit 254. The prediction processing unit 360 is typically used to perform block prediction and/or to obtain a prediction block 365 from the encoded data 21, as well as to receive or obtain (explicitly or implicitly) prediction related parameters and/or information about the selected prediction mode from, for example, the entropy decoding unit 304.
When the video slice is encoded as an intra-coded (I) slice, intra prediction unit 354 of prediction processing unit 360 is used to generate a prediction block 365 for the picture block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is encoded as an inter-coded (i.e., B or P) slice, inter prediction unit 344 (e.g., a motion compensation unit) of prediction processing unit 360 is used to generate a prediction block 365 for the video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 304. For inter prediction, a prediction block may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on the reference pictures stored in DPB 330.
Prediction processing unit 360 is used to determine prediction information for the video blocks of the current video slice by parsing the motion vectors and other syntax elements, and to generate a prediction block for the current video block being decoded using the prediction information. In an example of the present application, prediction processing unit 360 uses some of the syntax elements received to determine a prediction mode (e.g., intra or inter prediction) for encoding video blocks of a video slice, an inter prediction slice type (e.g., B-slice, P-slice, or GPB-slice), construction information for one or more of a reference picture list of the slice, a motion vector for each inter-coded video block of the slice, an inter prediction state for each inter-coded video block of the slice, and other information to decode video blocks of a current video slice. In another example of the present application, the syntax elements received by video decoder 30 from the bitstream include syntax elements received in one or more of an Adaptive Parameter Set (APS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), or a slice header.
Inverse quantization unit 310 may be used to inverse quantize (i.e., inverse quantize) the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 304. The inverse quantization process may include using quantization parameters calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization that should be applied and likewise the degree of inverse quantization that should be applied.
Inverse transform processing unit 312 is used to apply an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to produce a block of residuals in the pixel domain.
The reconstruction unit 314 (e.g., summer 314) is used to add the inverse transform block 313 (i.e., reconstructed residual block 313) to the prediction block 365 to obtain the reconstructed block 315 in the sample domain, e.g., by adding sample values of the reconstructed residual block 313 to sample values of the prediction block 365.
Loop filter unit 320 is used (either during or after the encoding cycle) to filter reconstructed block 315 to obtain filtered block 321, thereby smoothing pixel transitions or otherwise improving video quality. In one example, loop filter unit 320 may be used to perform any combination of the filtering techniques described below. Loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
Decoded video block 321 in a given frame or picture is then stored in decoded picture buffer 330, which stores reference pictures for subsequent motion compensation.
Decoder 30 is used to output decoded picture 31, e.g., via output 332, for presentation to or viewing by a user.
Other variations of video decoder 30 may be used to decode the compressed bitstream. For example, decoder 30 may generate an output video stream without loop filter unit 320. For example, the non-transform based decoder 30 may directly inverse quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames. In another embodiment, video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
In the embodiment of the present application, the decoder 30 is configured to implement the inverse quantization method described in the following embodiments, and specifically, the decoder 30 may perform the inverse quantization method of the embodiment of the present application through the inverse quantization unit 310.
It should be understood that other structural variations of the video decoder 30 may be used to decode the encoded video bitstream. For example, video decoder 30 may generate an output video stream without processing by filter 320; alternatively, for some image blocks or image frames, the quantized coefficients are not decoded by entropy decoding unit 304 of video decoder 30 and, accordingly, do not need to be processed by inverse quantization unit 310 and inverse transform processing unit 312. Loop filter 320 is optional; and the inverse quantization unit 310 and the inverse transform processing unit 312 are optional for the case of lossless compression. It should be understood that the inter prediction unit and the intra prediction unit may be selectively enabled according to different application scenarios.
Fig. 5 is a schematic structural diagram of a video coding apparatus 400 (e.g., a video encoding apparatus 400 or a video decoding apparatus 400) provided by an embodiment of the present application. Video coding apparatus 400 is suitable for implementing the embodiments described herein. In one embodiment, video coding device 400 may be a video decoder (e.g., decoder 30 of fig. 1) or a video encoder (e.g., encoder 20 of fig. 1). In another embodiment, video coding device 400 may be one or more components of decoder 30 of fig. 1 or encoder 20 of fig. 1 described above.
Video coding apparatus 400 includes: an ingress port 410 and a receiver unit (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 for processing data; a transmitter unit (Tx) 440 and an egress port 450 for transmitting data; and a memory 460 for storing data. Video coding apparatus 400 may also include electrical-to-optical (EO) components and optical-to-electrical (OE) components coupled with ingress port 410, receiver unit 420, transmitter unit 440, and egress port 450 for the egress or ingress of optical or electrical signals.
The processor 430 is implemented by hardware and software. Processor 430 may be implemented as one or more CPU chips, cores (e.g., multi-core processors), FPGAs, ASICs, and DSPs. Processor 430 is in communication with ingress port 410, receiver unit 420, transmitter unit 440, egress port 450, and memory 460. Processor 430 includes a coding module 470 (e.g., an encoding module 470 or a decoding module 470). The encoding/decoding module 470 implements the embodiments disclosed herein, to implement the quantization and inverse quantization methods provided by the embodiments of the present application. For example, the encoding/decoding module 470 implements, processes, or provides various coding operations. Accordingly, the encoding/decoding module 470 provides a substantial improvement to the functionality of the video coding apparatus 400 and affects the switching of the video coding apparatus 400 to a different state. Alternatively, the encoding/decoding module 470 is implemented as instructions stored in memory 460 and executed by processor 430.
The memory 460, which may include one or more disks, tape drives, and solid-state drives, may be used as an overflow data storage device to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory 460 may be volatile and/or nonvolatile, and may be read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random access memory (SRAM).
Fig. 6 is a simplified block diagram of an apparatus 500 that may be used as either or both of source device 12 and destination device 14 of fig. 1 in an embodiment of the present application. Apparatus 500 may implement the techniques of this application. In other words, fig. 6 is a schematic block diagram of one implementation of an encoding apparatus or a decoding apparatus (simply, a coding apparatus 500) of an embodiment of the present application. The coding apparatus 500 may include a processor 510, a memory 530, and a bus system 550. The processor is connected with the memory through the bus system, the memory is used for storing instructions, and the processor is used for executing the instructions stored in the memory. The memory of the coding apparatus stores program code, and the processor may invoke the program code stored in the memory to perform the various video encoding or decoding methods described in this application, particularly the quantization method and the inverse quantization method in the embodiments of the present application, for example, the methods shown in fig. 7 and fig. 9 below; for brevity, they are not detailed here.
In the embodiment of the present application, the processor 510 may be a central processing unit (CPU), and the processor 510 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 530 may include a Read Only Memory (ROM) device or a Random Access Memory (RAM) device. Any other suitable type of memory device may also be used for memory 530. Memory 530 may include code and data 531 to be accessed by processor 510 using bus 550. Memory 530 may further include an operating system 533 and application programs 535, the application programs 535 including at least one program that allows processor 510 to perform the video encoding or decoding methods described herein (particularly the quantization methods and inverse quantization methods described herein). For example, the application programs 535 may include applications 1 through N, which further include a video encoding or decoding application (simply a video coding application) that performs the video encoding or decoding methods described herein.
The bus system 550 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are designated in the figure as bus system 550.
Optionally, the coding device 500 may also include one or more output devices, such as a display 570. In one example, the display 570 may be a touch-sensitive display that combines a display with a touch-sensitive unit operable to sense touch input. The display 570 may be connected to the processor 510 via the bus 550.
Based on the conventional configuration of quantization parameters and inverse quantization parameters, the quantization parameter and the inverse quantization parameter corresponding to each sub-block in a whole largest coding unit (LCU) are the same. For a sub-block with a small size, if a large quantization/inverse quantization parameter is used for processing, the accuracy of the image restored after quantization and inverse quantization may be low, which degrades user experience.
In order to avoid the above problem, the present application provides a configuration manner for determining quantization parameters and inverse quantization parameters based on the size of sub-blocks, and a schematic flow chart of the quantization method according to the embodiment of the present application is described in detail below with reference to fig. 7. It should be noted that the quantization method shown in fig. 7 may be performed by the above encoding apparatus, for example, may be performed by a quantization unit in the encoding apparatus. The method shown in fig. 7 includes steps 610 through 620.
Step 610: obtain the size parameter of the current sub-block.
The dimensional parameters may include at least one of the following: the length of the sub-block, the width of the sub-block, the area of the sub-block, etc.
The size parameter may be determined according to the partition information of each sub-block, for example, the size parameter of each sub-block is stored in the partition information of the sub-block.
Step 620: determine, according to the size parameter of the current sub-block, a first quantization parameter used for quantizing the current sub-block.
The first quantization parameter may include a quantization step, a quantization coefficient, and the like, which are used when the current subblock is quantized.
It should be noted that there are many ways to determine the first quantization parameter according to the size parameter of the current sub-block, and this is not specifically limited in the embodiments of the present application. For example, since a greater degree of quantization corresponds to a smaller quantization parameter, the value of the first quantization parameter may be inversely related to the size parameter of the sub-block, that is, the value of the first quantization parameter decreases as the size parameter of the sub-block increases. For another example, when the size parameter of the current sub-block satisfies a preset condition, the current sub-block may be quantized using a preconfigured first quantization parameter.
In the embodiment of the application, the first quantization parameter is determined according to the size parameter of the current subblock, so that the quantization effect is improved, the information quantity which can be compressed in the quantization process is improved, and meanwhile, certain image precision is ensured.
The conventional quantization parameter configuration described above may also be used in combination with the method for determining the first quantization parameter provided in the embodiments of the present application, to reduce the complexity of configuring the quantization parameter: that is, the quantization parameter corresponding to the LCU in which the sub-block is located (i.e., the second quantization parameter) is adjusted according to the size parameter of the current sub-block to obtain the first quantization parameter.
The adjusting the second quantization parameter to obtain the first quantization parameter may include increasing the second quantization parameter to obtain the first quantization parameter, decreasing the second quantization parameter to obtain the first quantization parameter, and determining the second quantization parameter as the first quantization parameter, that is, the second quantization parameter may be equal to the first quantization parameter.
For example, the size parameter of a sub-block in an LCU may be inversely related to the difference between the first quantization parameter and the second quantization parameter. That is, the larger the size parameter of a sub-block in the LCU, the more the first quantization parameter is reduced relative to the second quantization parameter, i.e., the quantization is relatively coarse; the smaller the size parameter of a sub-block in the LCU, the smaller the reduction of the first quantization parameter relative to the second quantization parameter, and the first quantization parameter may even be larger than the second quantization parameter, i.e., the quantization is relatively fine.
Optionally, if the size parameter of the current sub-block is greater than a first size threshold, the first quantization parameter is smaller than the second quantization parameter, and/or if the size parameter of the current sub-block is smaller than a second size threshold, the first quantization parameter is greater than the second quantization parameter, where the first size threshold is greater than or equal to the second size threshold.
When the first size threshold is equal to the second size threshold, it can be understood that the first size threshold and the second size threshold are one size threshold. When the first size threshold is larger than the second size threshold, the first size threshold and the second size threshold are two different thresholds.
The first size threshold and the second size threshold may be used alone or in combination. When the first size threshold is used alone: if the size parameter of the current sub-block is greater than the first size threshold, the second quantization parameter is adjusted to obtain the first quantization parameter; correspondingly, if the size parameter of the current sub-block is less than or equal to the first size threshold, the second quantization parameter may be used directly as the first quantization parameter, or the second quantization parameter may be adjusted in combination with the second size threshold to obtain the first quantization parameter.

When the second size threshold is used alone: if the size parameter of the current sub-block is smaller than the second size threshold, the second quantization parameter is adjusted to obtain the first quantization parameter; correspondingly, if the size parameter of the current sub-block is greater than or equal to the second size threshold, the second quantization parameter may be used directly as the first quantization parameter, or the second quantization parameter may be adjusted in combination with the first size threshold to obtain the first quantization parameter.
In the case of combining the first size threshold and the second size threshold, the comparison between the size parameter of the current sub-block and the two thresholds can be divided into three cases: the size parameter of the current sub-block is greater than the first size threshold; the size parameter of the current sub-block is smaller than the first size threshold and greater than the second size threshold; and the size parameter of the current sub-block is smaller than the second size threshold. In the embodiments of the present application, in addition to dividing the size parameter of the sub-block into the above three cases based on the two size thresholds and adjusting the second quantization parameter to obtain the first quantization parameter, the size parameter of the sub-block may be further subdivided, for example, by using 4 size thresholds.
For a current sub-block whose size parameter is extremely small or extremely large, the existing quantization parameters may not include a value that matches a current sub-block with an extremely large size parameter, or a value that matches a current sub-block with an extremely small size parameter. Therefore, for these two special cases, in order to simplify the process of obtaining the first quantization parameter, the second quantization parameter may be used directly as the first quantization parameter.
In order to divide the above two special cases, the third size threshold and the fourth size threshold (see fig. 8) may be added on the basis of the above two size thresholds, i.e. the second quantization parameter is adjusted by 4 size thresholds. The third size threshold may be an upper threshold corresponding to an adjustment manner (710) for reducing the second quantization parameter to obtain the first quantization parameter, that is, if the size parameter of the current subblock is greater than the third size threshold, the second quantization parameter may be directly used as the first quantization parameter. The fourth size threshold may be a lower threshold corresponding to an adjustment manner (720) for increasing the second quantization parameter to obtain the first quantization parameter, that is, if the size parameter of the current sub-block is smaller than the fourth size threshold, the second quantization parameter may be directly used as the first quantization parameter.
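The four-threshold decision just described can be sketched as follows. The threshold values and offset amounts are purely illustrative assumptions; an implementation would configure them as discussed above. The sketch operates on quantization parameter indices, where (per the index table discussed below) a larger index corresponds to a smaller quantization parameter, i.e., coarser quantization in this application's convention.

```python
def adjust_qp_index(size: int, qp2_index: int,
                    t1: int = 64, t2: int = 16,
                    t3: int = 512, t4: int = 8) -> int:
    # t1/t2: first/second size thresholds; t3/t4: third/fourth size
    # thresholds bounding the two adjustment branches (710/720).
    if size > t3 or size < t4:
        # Outside the configurable range: use the second quantization
        # parameter directly as the first quantization parameter.
        return qp2_index
    if size > t1:
        # Branch 710: reduce the quantization parameter for large
        # sub-blocks by increasing the index (offset is assumed).
        return qp2_index + 2
    if size < t2:
        # Branch 720: increase the quantization parameter for small
        # sub-blocks by decreasing the index (offset is assumed).
        return qp2_index - 2
    return qp2_index
```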
It should be noted that there are many ways of dividing the sizes of the sub-blocks by the size threshold, and further subdivision may be performed on the basis of the above division, that is, the size threshold is continuously increased, and the adjustment ways 710 and 720 are subdivided into different size levels (or size intervals), where quantization parameters corresponding to sub-blocks located in different size levels are different, which is not specifically limited in this embodiment of the application.
The above manner of adjusting the second quantization parameter to obtain the first quantization parameter can be implemented based on an existing quantization parameter index table. That is, the first quantization parameter is determined, with the index corresponding to the second quantization parameter as a reference, by an index offset between the index of the first quantization parameter and the index of the second quantization parameter. For example, this can be expressed by the formula qp'1 = qp1 - Δ, where qp'1 denotes the index of the first quantization parameter, qp1 denotes the index of the second quantization parameter, and Δ denotes the index offset between the index of the first quantization parameter and the index of the second quantization parameter.
In the conventional quantization parameter table, the quantization parameter corresponding to an index decreases as the index number increases. Therefore, based on the above formula, the first quantization parameter increases as the index offset Δ increases, and decreases as the index offset Δ decreases.
Based on the above-described representation of the adjusted quantization parameter (i.e. the first quantization parameter) by the index offset, in the embodiment of the present application, one possible implementation form of the quantization principle can be modified on the basis of the formula (1) and is described by the following formula (3), that is:
C_Q = (Y × Q(qp'1) + (1 << (s - 1)) × m) >> s, where s = 15 + r - n - F

where Y represents the transform coefficient corresponding to the current sub-block, Q(qp'1) represents the quantization parameter whose index is qp'1, C_Q represents the quantized transform coefficient corresponding to the current sub-block, r represents the intermediate limiting bit width (usually, r = 16), F represents the logarithmic value of the transform block area, and n represents the bit width of the residual (transform coefficient); m = 10/31 if the current sub-block belongs to an intra-coded block, and m = 10/62 if the current sub-block belongs to an inter-coded block.
The following describes a method for adjusting quantization parameters according to an embodiment of the present application with reference to table 1 and table 2. Table 1 shows correspondence between some quantization parameters in the quantization parameter table and indices. Table 2 shows the correspondence between different sub-block size parameters and index offsets.
TABLE 1
(Table 1 appears as an image in the original publication; it lists the correspondence between quantization parameter indices and quantization parameter values.)
In Table 2, different levels correspond to different sub-block size intervals (or area intervals). The size interval of level 0 includes coding units greater than or equal to 16 and smaller than 64, and the index offset corresponding to level 0 is 0; that is, the first quantization parameter and the second quantization parameter are the same for sub-blocks whose size parameter falls in level 0. The size interval of level 1 includes coding units greater than or equal to 64 and smaller than 256, and the index offset corresponding to level 1 is -2. The size interval of level 2 includes coding units greater than or equal to 256 and smaller than or equal to 512, and the index offset corresponding to level 2 is -3.
TABLE 2
Size interval   | Level 0 [16, 64) | Level 1 [64, 256) | Level 2 [256, 512]
Index offset Δ  | 0                | -2                | -3
Assuming that the index of the second quantization parameter of the current sub-block is 3, the size parameter of the current sub-block is 64, the current sub-block belongs to level 1, and the index offset corresponding to level 1 is-2, the index of the first quantization parameter of the current sub-block is 5.
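The Table 2 mapping and the worked example above can be reproduced by a small sketch; the interval bounds and offsets come directly from Table 2, and the function name is hypothetical.

```python
# (lower bound, upper bound, index offset delta) per Table 2; the level 2
# upper bound 512 is inclusive, hence the half-open bound of 513 below.
LEVELS = [
    (16, 64, 0),     # level 0: [16, 64)   -> delta = 0
    (64, 256, -2),   # level 1: [64, 256)  -> delta = -2
    (256, 513, -3),  # level 2: [256, 512] -> delta = -3
]

def first_qp_index(size: int, qp1_index: int) -> int:
    # qp'_1 = qp_1 - delta: a negative offset increases the index and
    # thus selects a smaller quantization parameter (coarser quantization).
    for low, high, delta in LEVELS:
        if low <= size < high:
            return qp1_index - delta
    return qp1_index  # sizes outside Table 2: keep the second parameter

assert first_qp_index(64, 3) == 5  # the worked example: level 1, delta = -2
```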
On the basis of determining the first quantization parameter based on the size parameter as described above, the first quantization parameter may also be determined in combination with other characteristic parameters of the sub-block. The other characteristic parameters of the sub-block may include one or more of the type of the frame in which the current sub-block is located and the type of the block in which the current sub-block is located. The type of the frame in which the current sub-block is located includes an I frame, a P frame, and a B frame; the type of the block in which the current sub-block is located includes a chrominance block or a luminance block; and the type of the block in which the current sub-block is located may further include the prediction type of the block, i.e., the type of the prediction block in which the sub-block is located includes an inter block or an intra block. The following describes each of these three characteristic parameters in turn.
It should be noted that, for the above process of determining the first quantization parameter by adjusting the second quantization parameter based on the characteristic parameter and the size parameter, reference may be made to the related description above, and details are not described herein again for brevity. The following mainly describes the way of adjusting the second quantization parameter in combination with different characteristic parameters to determine the first quantization parameter.
Combination mode one: combining the size parameter with the type of the frame in which the current sub-block is located.
The I frame is also called an intra-frame coded frame, and is an independent frame with all information, and can be independently decoded without referring to other images, and can be simply understood as a static picture. The first frame in a video sequence is always an I-frame because it is a key frame.
The P frame, also called an inter-frame predictive coding frame, needs to refer to the previous I frame for coding, which indicates the difference between the current frame picture and the previous frame (the previous frame may be an I frame or a P frame). When decoding, the difference defined by the frame is superimposed on the picture buffered before, and the final picture is generated.
The B frame is also called a bidirectional predictive coding frame; a B frame records the differences between the current frame and both the preceding and following frames. That is, to decode a B frame, not only the previously buffered picture but also the picture decoded after it must be obtained, and the final picture is obtained by superimposing the preceding and following pictures with the data of the current frame.
As can be seen from the above three frame types, the accuracy requirement for the I frame is higher than that for the P frame and the B frame, so that a larger quantization parameter can be configured for the sub-blocks in the I frame to reduce the loss in accuracy caused by quantization, and a smaller quantization parameter can be configured for the sub-blocks in the P frame and the B frame to improve the compression rate.
That is, when the characteristic parameter of the current subblock includes the type of the frame where the current subblock is located, the first quantization parameter determined when the frame to which the current subblock belongs is an I frame is greater than the first quantization parameter determined when the frame to which the current subblock belongs is a P frame or a B frame. Accordingly, the larger the quantization parameter, the smaller the degree of quantization, and the higher the accuracy after quantization.
Table 3 shows a possible implementation of an index offset configuration table based on combination mode one. In Table 3, sub-blocks in the same size interval have different offsets depending on the type of the frame to which they belong. Assuming that the size interval of the current sub-block is level 1 and the index of the second quantization parameter of the current sub-block is 3, then, based on the correspondence between quantization parameters and indices shown in Table 1, the index of the first quantization parameter of the current sub-block is 1 when the frame type to which the current sub-block belongs is an I frame, and 6 when the frame type to which the current sub-block belongs is a P frame or a B frame.
TABLE 3
(Table 3 appears as an image in the original publication; its contents are described in the surrounding text.)
It should be noted that the above description uses only one index offset to distinguish sub-blocks on I frames from sub-blocks on other frames (P frames and B frames). In the embodiments of the present application, the index offsets corresponding to the other frames may be further subdivided, that is, the index offsets corresponding to sub-blocks on P frames and on B frames may also differ. Of course, to reduce configuration complexity, the index offsets corresponding to sub-blocks on P frames and B frames may be the same.
Combination mode two: combining the size parameter with the type of the block in which the current sub-block is located, where the type of the block includes a chrominance block or a luminance block.
In general, human eyes are less sensitive to color than luminance, and in order to balance between compression rate and user experience, a smaller quantization parameter may be configured for a chroma block to ensure compression rate of the chroma block, and a larger quantization parameter may be configured for a luminance block to ensure accuracy of the luminance block.
That is, the type of the block in which the current sub-block is located includes a chroma block or a luma block, and the first quantization parameter determined when the current sub-block is a chroma block is smaller than the first quantization parameter determined when the current sub-block is a luma block.
Table 4 shows a possible implementation of an index offset configuration table based on combination mode two. In Table 4, sub-blocks in the same size interval have different offsets depending on the type of block. Assuming that the size interval of the current sub-block is level 2 and the index of the second quantization parameter of the current sub-block is 3, then, based on the correspondence between quantization parameters and indices shown in Table 1, the index of the first quantization parameter of the current sub-block is 1 when the block in which the current sub-block is located is a luminance block, and 6 when it is a chrominance block.
TABLE 4
(Table 4 appears as an image in the original publication; its contents are described in the surrounding text.)
Combination mode three: combining the size parameter with the type of the prediction block in which the current sub-block is located, where the type of the prediction block includes an inter block and an intra block.
In general, the accuracy of an intra block is required to be higher than that of an inter block, and therefore, in order to balance between the compression rate and the prediction accuracy, a smaller quantization parameter may be allocated to the inter block to ensure the compression rate of the inter block, and a larger quantization parameter may be allocated to the intra block to ensure the prediction accuracy of the intra block.
That is, the type of the block in which the current sub-block is located includes an intra block or an inter block, and the first quantization parameter determined if the current sub-block is an intra block is greater than the first quantization parameter determined if the current sub-block is an inter block.
Table 5 shows a possible implementation of an index offset configuration table based on combination mode three. In Table 5, sub-blocks in the same size interval have different offsets depending on the type of block. Assuming that the size interval of the current sub-block is level 1 and the index of the second quantization parameter of the current sub-block is 3, then, based on the correspondence between quantization parameters and indices shown in Table 1, the index of the first quantization parameter of the current sub-block is 1 when the block in which the current sub-block is located is an intra block, and 6 when it is an inter block.
TABLE 5
(Table 5 appears as an image in the original publication; its contents are described in the surrounding text.)
It should be noted that the size parameter of the sub-block may be combined with one or more of the above-mentioned three characteristic parameters at one or more levels. For example, the index offset is determined in level 1 in conjunction with the frame type and in level 2 in conjunction with the prediction block type, i.e., table 2 in conjunction with table 3. For another example, the index offset may be determined in level 1 and level 2 by combining the prediction block type and the frame type at the same time, which is not specifically limited in the embodiment of the present application.
It should be noted that, for convenience of description, the size parameter and the characteristic parameters are collectively referred to as "sub-block characteristics". Sub-block characteristics that differ in one or more parameters each correspond to one index offset, and different sub-block characteristics may correspond to the same or different index offsets.
Table 6 shows one possible implementation of an index offset configuration table based on sub-block characteristics. As can be seen from Table 6, for a sub-block on a P frame, the second quantization parameter corresponding to the sub-block is used directly as the first quantization parameter regardless of the type of block. For a sub-block on a B frame, the index offset between the index of the second quantization parameter and the index of the first quantization parameter is a fixed value (-6) regardless of the type of block. For a sub-block on an I frame: when the block is an intra block and a chrominance block, the index offset between the index of the second quantization parameter and the index of the first quantization parameter is a fixed value (2); when the block is an inter block and a luminance block, the index offset is 2 for a sub-block whose size parameter belongs to level 0 and -3 for a sub-block whose size parameter belongs to level 1.
TABLE 6
(Table 6 appears as an image in the original publication; its contents are described in the surrounding text.)
It should be noted that the subblocks not shown in table 6 may directly use the second quantization parameter as the first quantization parameter, or may determine the index offset according to other configuration manners, which is not limited in this embodiment of the present application.
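A lookup in the spirit of Table 6, keyed on the sub-block characteristics as described in the prose above, might be sketched as follows; the tuple-based key structure and the wildcard handling are illustrative assumptions.

```python
# Index offsets keyed by (frame type, block type, size level), following
# the prose description of Table 6; None acts as a wildcard for a field.
OFFSETS = [
    ("P", None, None, 0),             # P frame: keep the second parameter
    ("B", None, None, -6),            # B frame: fixed offset, any block type
    ("I", "intra_chroma", None, 2),   # I frame, intra + chroma blocks
    ("I", "inter_luma", 0, 2),        # I frame, inter + luma, level 0
    ("I", "inter_luma", 1, -3),       # I frame, inter + luma, level 1
]

def lookup_offset(frame_type: str, block_type: str, level: int) -> int:
    for f, b, lv, delta in OFFSETS:
        if f == frame_type and b in (None, block_type) and lv in (None, level):
            return delta
    return 0  # characteristics not covered: use the second parameter directly
```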
The inverse quantization method according to the embodiment of the present application is described below with reference to fig. 8 based on the quantization method shown in fig. 7. It should be noted that the quantization and inverse quantization processes are a set of corresponding processes, and the quantization parameter used in the quantization process corresponds to the inverse quantization parameter used in the inverse quantization process, that is, the index corresponding to the inverse quantization parameter is the same as the index corresponding to the quantization parameter, or the inverse quantization parameter is determined according to the quantization parameter. Since there is a correspondence between the quantization parameter and the inverse quantization parameter, after the quantization parameter is adjusted, the inverse quantization parameter also needs to be adjusted.
For simplicity, the adjustment procedure for the inverse quantization parameter below may be determined with reference to the adjustment procedure for the quantization parameter above, and the index offsets shown in Tables 2 to 6 may be understood as index offsets for determining the index of the first inverse quantization parameter with reference to the index of the second inverse quantization parameter. It should be understood that the quantization parameter table shown in Table 1 may differ from the inverse quantization parameter table used in the inverse quantization process below, but the two tables have a correspondence: the quantization parameter and the inverse quantization parameter at the same index correspond to each other.
Based on the quantization principle and the inverse quantization principle, the inverse quantization coefficients corresponding to larger quantization coefficients are smaller, and the inverse quantization coefficients corresponding to smaller quantization coefficients are larger.
Fig. 9 is a schematic flow chart of an inverse quantization method of an embodiment of the present application. The method shown in fig. 9 may be performed by a coding device or a decoding device, and in particular may be performed by an inverse quantization unit in a coding device, and may also be performed by an inverse quantization unit in a decoding device. The method shown in fig. 9 includes steps 810 and 820.
Step 810: obtain the size parameter of the current sub-block.
The dimensional parameters may include at least one of the following: the length of the sub-block, the width of the sub-block, the area of the sub-block, etc.
It should be noted that, if the quantization process shown in fig. 7 and the inverse quantization process shown in fig. 9 are executed by an encoding apparatus, the above step 610 and step 810 may be one step, or may be steps executed in the quantization process and the inverse quantization process, respectively, and the embodiment of the present application is not limited thereto.
Step 820: determine, according to the size parameter of the current sub-block, a first inverse quantization parameter used for inverse quantizing the current sub-block.
The first inverse quantization parameter may include an inverse quantization step size, an inverse quantization coefficient, and the like.
The first inverse quantization parameter corresponds to the first quantization parameter described above.
Step 820 may include: adjusting the second inverse quantization parameter of the current sub-block to the first inverse quantization parameter according to the size parameter of the current sub-block, where the second inverse quantization parameter is the inverse quantization parameter corresponding to the largest coding unit (LCU) in which the current sub-block is located.
The adjusting the second inverse quantization parameter to obtain the first inverse quantization parameter may include increasing the second inverse quantization parameter to obtain the first inverse quantization parameter, decreasing the second inverse quantization parameter to obtain the first inverse quantization parameter, and determining the second inverse quantization parameter as the first inverse quantization parameter, that is, the second inverse quantization parameter may be equal to the first inverse quantization parameter.
The second inverse quantization parameter described here and the second quantization parameter described earlier are both parameters configured for the LCU; therefore, the second inverse quantization parameter corresponds to the second quantization parameter.
It should be noted that there are many ways to determine the first inverse quantization parameter according to the size parameter of the current sub-block, which is not specifically limited in this embodiment of the present application, and for a detailed description, reference may be made to the method for determining the first quantization parameter according to the size parameter of the current sub-block.
Optionally, if the size parameter of the current sub-block is greater than a first size threshold, the first inverse quantization parameter is greater than the second inverse quantization parameter; and/or, if the size parameter of the current sub-block is smaller than a second size threshold, the first inverse quantization parameter is smaller than the second inverse quantization parameter, where the first size threshold is greater than or equal to the second size threshold.
When the first size threshold is equal to the second size threshold, it can be understood that the first size threshold and the second size threshold are one size threshold. When the first size threshold is larger than the second size threshold, the first size threshold and the second size threshold are two different thresholds.
The first size threshold and the second size threshold may be used alone or in combination. When the first size threshold is used alone: if the size parameter of the current sub-block is greater than the first size threshold, the second inverse quantization parameter is adjusted to obtain the first inverse quantization parameter; correspondingly, if the size parameter of the current sub-block is less than or equal to the first size threshold, the second inverse quantization parameter may be used directly as the first inverse quantization parameter, or the second inverse quantization parameter may be adjusted in combination with the second size threshold to obtain the first inverse quantization parameter.

When the second size threshold is used alone: if the size parameter of the current sub-block is smaller than the second size threshold, the second inverse quantization parameter is adjusted to obtain the first inverse quantization parameter; correspondingly, if the size parameter of the current sub-block is greater than or equal to the second size threshold, the second inverse quantization parameter may be used directly as the first inverse quantization parameter, or the second inverse quantization parameter may be adjusted in combination with the first size threshold to obtain the first inverse quantization parameter.
In the case of combining the first size threshold and the second size threshold, the comparison between the size parameter of the current sub-block and the two thresholds can be divided into three cases: the size parameter of the current sub-block is greater than the first size threshold; the size parameter of the current sub-block is smaller than the first size threshold and greater than the second size threshold; and the size parameter of the current sub-block is smaller than the second size threshold. In the embodiments of the present application, in addition to dividing the size parameter of the sub-block into the above three cases based on the two size thresholds and adjusting the second inverse quantization parameter to obtain the first inverse quantization parameter, the size parameter of the sub-block may be further subdivided, for example, by using 4 size thresholds.
For a current sub-block whose size parameter is too small or too large, the existing inverse quantization parameters are not configured with a value that matches a current sub-block with a too-large size parameter, nor with a value that matches a current sub-block with a too-small size parameter. For these two special cases, therefore, in order to simplify the process of obtaining the first inverse quantization parameter, the second inverse quantization parameter may be used directly as the first inverse quantization parameter.
In order to separate out these two special cases, a third size threshold and a fourth size threshold may be added on the basis of the above two size thresholds (see fig. 8); that is, the second inverse quantization parameter is adjusted using 4 size thresholds. The third size threshold may be an upper limit for the adjustment manner (710) of increasing the second inverse quantization parameter to obtain the first inverse quantization parameter; that is, if the size parameter of the current sub-block is greater than the third size threshold, the second inverse quantization parameter may be used directly as the first inverse quantization parameter. The fourth size threshold may be a lower limit for the adjustment manner (720) of decreasing the second inverse quantization parameter to obtain the first inverse quantization parameter; that is, if the size parameter of the current sub-block is less than the fourth size threshold, the second inverse quantization parameter may be used directly as the first inverse quantization parameter.
It should be noted that there are many ways of dividing sub-block sizes by size thresholds, and further subdivision may be performed on the basis of the above division; that is, more size thresholds may be added so that the adjustment manners 710 and 720 are subdivided into different size levels (or size intervals), with different inverse quantization parameters corresponding to sub-blocks in different size levels. This is not specifically limited in this embodiment of the application.
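For illustration only, one possible form of the four-threshold decision described above is sketched below in C. The threshold values, the offset values, and the interpretation of the size parameter (taken here as log2(width) + log2(height) of the sub-block) are all assumptions made for the example and are not values specified in this application; the sign convention relating an index offset to a larger or smaller inverse quantization parameter depends on the parameter table and is likewise assumed.

#include <stdio.h>

/* Hypothetical thresholds, ordered T4 < T2 <= T1 < T3. */
#define T1 8   /* first threshold: above it, apply adjustment manner 710      */
#define T2 6   /* second threshold: below it, apply adjustment manner 720     */
#define T3 10  /* third threshold: above it, keep the LCU parameter as-is     */
#define T4 4   /* fourth threshold: below it, keep the LCU parameter as-is    */

/* Returns the index offset applied to the index of the second (LCU-level)
 * inverse quantization parameter for a given sub-block size parameter. */
static int index_offset_for_size(int size_param)
{
    if (size_param > T3 || size_param < T4)
        return 0;   /* too large or too small: reuse the second parameter */
    if (size_param > T1)
        return 2;   /* adjustment manner 710 (illustrative offset) */
    if (size_param < T2)
        return -3;  /* adjustment manner 720 (illustrative offset) */
    return 0;       /* middle range: no adjustment */
}

int main(void)
{
    int s;
    for (s = 2; s <= 12; s += 2)
        printf("size=%d -> index offset=%d\n", s, index_offset_for_size(s));
    return 0;
}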
The above-mentioned manner of adjusting the second inverse quantization parameter to obtain the first inverse quantization parameter can be implemented based on the existing inverse quantization parameter index table. That is, taking the index corresponding to the second inverse quantization parameter as a reference, the first inverse quantization parameter is determined by an index offset between the index of the first inverse quantization parameter and the index of the second inverse quantization parameter. For example, this can be expressed by the formula qp2' = qp2 - Δ, where qp2' denotes the index of the first inverse quantization parameter, qp2 denotes the index of the second inverse quantization parameter, and Δ denotes the index offset between the index of the first inverse quantization parameter and the index of the second inverse quantization parameter.
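Again for illustration only, the following minimal C sketch applies the offset relation qp2' = qp2 - Δ; the valid index range [0, 79] used for clamping is an assumption made for the example, since the full index table is not reproduced here.

#include <stdio.h>

static int adjust_index(int qp2, int delta)
{
    int qp2p = qp2 - delta;  /* qp2' = qp2 - delta */
    if (qp2p < 0)            /* clamp to the assumed bounds of the index table */
        qp2p = 0;
    if (qp2p > 79)
        qp2p = 79;
    return qp2p;
}

int main(void)
{
    /* with qp2 = 60 and delta = -6, the adjusted index qp2' is 66 */
    printf("qp2' = %d\n", adjust_index(60, -6));
    return 0;
}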
Based on the above representation of the adjusted inverse quantization parameter (i.e., the first inverse quantization parameter) by an offset, in this embodiment of the application one possible implementation of the inverse quantization principle can be obtained by modifying formula (2), and is described by the following formula (4):
C = (CQ × DQ(qp2') + (1 << (s - 1))) >> s, s = shift(qp2') + n + F + 1 - r    (4)
where CQ represents the quantized transform coefficient corresponding to the current sub-block, DQ(qp2') represents the inverse quantization parameter corresponding to the index qp2', r represents the intermediate constraint bit width (usually r is 16), F represents the logarithm of the transform block area, n represents the bit width of the residual (transform coefficient), and shift(qp2') represents the shift offset corresponding to the index qp2'.
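For illustration only, the following minimal C sketch evaluates formula (4) for a single coefficient. The DQ and shift table entries and the parameter values in the example (an 8x8 block, so F = log2(64) = 6, and an assumed residual bit width n = 8) are hypothetical placeholders, not values from the inverse quantization parameter index table of this application.

#include <stdint.h>
#include <stdio.h>

static const int32_t DQ_TABLE[]    = { 32, 36, 40, 45 };  /* hypothetical DQ(qp2')    */
static const int     SHIFT_TABLE[] = { 14, 14, 14, 14 };  /* hypothetical shift(qp2') */

/* cq: quantized transform coefficient; qp2p: adjusted index qp2';
 * n: residual bit width; f: log2 of the transform block area;
 * r: intermediate constraint bit width (usually 16). */
static int32_t dequant_coeff(int32_t cq, int qp2p, int n, int f, int r)
{
    int s = SHIFT_TABLE[qp2p] + n + f + 1 - r;
    /* C = (CQ * DQ(qp2') + (1 << (s - 1))) >> s : round, then shift right */
    return (int32_t)(((int64_t)cq * DQ_TABLE[qp2p] + ((int64_t)1 << (s - 1))) >> s);
}

int main(void)
{
    printf("C = %d\n", dequant_coeff(1000, 2, 8, 6, 16));
    return 0;
}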
On the basis of determining the first inverse quantization parameter according to the size parameter as described above, the first inverse quantization parameter may also be determined in combination with other characteristic parameters of the sub-block. The other characteristic parameters of the sub-block may include one or more of the type of the frame in which the current sub-block is located and the type of the block in which the current sub-block is located. The type of the frame in which the current sub-block is located includes an I frame, a P frame, and a B frame; the type of the block in which the current sub-block is located includes a chroma block or a luma block, and may further include the prediction type of the block, that is, the type of the prediction block in which the sub-block is located includes an inter block or an intra block. The following descriptions are given with reference to the three characteristic parameters respectively.
Combination mode four: the size parameter is combined with the type of the frame in which the current sub-block is located. Combination mode four corresponds to combination mode one above, and for the related description of combination mode four, reference may be made to the description of combination mode one.
That is, when the characteristic parameter of the current sub-block includes the type of the frame in which the current sub-block is located, the first inverse quantization parameter determined when the frame to which the current sub-block belongs is an I frame is less than the first inverse quantization parameter determined when the frame to which the current sub-block belongs is a P frame or a B frame.
Combination mode five: the size parameter is combined with the type of the block in which the current sub-block is located, where the type of the block includes a chroma block or a luma block. Combination mode five corresponds to combination mode two above, and for the related description of combination mode five, reference may be made to the description of combination mode two.
That is, the type of the block in which the current sub-block is located includes a chroma block or a luma block, and the first inverse quantization parameter determined when the current sub-block is a chroma block is greater than the first inverse quantization parameter determined when the current sub-block is a luma block.
Combination mode six: the size parameter is combined with the type of the prediction block in which the current sub-block is located, where the type of the prediction block includes an inter block and an intra block. Combination mode six corresponds to combination mode three above, and for the related description of combination mode six, reference may be made to the description of combination mode three.
That is, the type of the block in which the current sub-block is located includes an intra block or an inter block, and the first inverse quantization parameter determined when the current sub-block is an intra block is less than the first inverse quantization parameter determined when the current sub-block is an inter block.
The determination manners in fig. 7 and fig. 9 both obtain the first quantization parameter and the first inverse quantization parameter by adjusting a configured quantization parameter or inverse quantization parameter. Alternatively, the inverse quantization parameter required for the inverse quantization process may be determined from the quantization parameter; that is, after the quantization parameter is determined, the inverse quantization parameter may be determined according to the correspondence between quantization parameters and inverse quantization parameters. This may also be understood as one possible implementation of determining the first inverse quantization parameter according to the size parameter of the sub-block.
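For illustration only, the following minimal C sketch shows this correspondence: the quantization parameter and the inverse quantization parameter at the same index form a pair, so the index fixed at quantization time also determines the parameter used for inverse quantization. The table values are hypothetical placeholders standing in for table 1 and table 7.

#include <stdio.h>

#define NUM_INDICES 4
static const int Q_TAB[NUM_INDICES]  = { 26214, 23302, 20560, 18396 }; /* hypothetical */
static const int DQ_TAB[NUM_INDICES] = { 40, 45, 51, 57 };             /* hypothetical */

int main(void)
{
    int qp_index = 2; /* index chosen when quantizing */
    /* entries at the same index form a corresponding (Q, DQ) pair, so the
     * index alone determines the inverse quantization parameter */
    printf("Q = %d, DQ = %d\n", Q_TAB[qp_index], DQ_TAB[qp_index]);
    return 0;
}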
The inverse quantization method of this embodiment of the application is described below with reference to specific examples. Table 7 shows the correspondence between some inverse quantization parameters in the inverse quantization parameter table and their indices, and table 8 shows the correspondence between different sub-block size parameters and index offsets. Each inverse quantization parameter shown in table 7 corresponds to the quantization parameter with the same index in table 1 above.
TABLE 7
[Table 7, the correspondence between inverse quantization parameters and their indices, is reproduced as an image in the original publication.]
Table 8 shows one possible implementation of an index offset configuration table based on sub-block characteristics. The offsets in table 8 are offsets of the index of the inverse quantization parameter, and correspond to the quantization parameter offsets shown in table 6 above.
As can be seen from table 8: for a sub-block in a P frame, regardless of the block type, the second inverse quantization parameter corresponding to the sub-block is used directly as the first inverse quantization parameter. For a sub-block in a B frame, regardless of the block type, the index offset between the index of the second inverse quantization parameter and the index of the first inverse quantization parameter is a fixed value (-6). For a sub-block in an I frame, when the block is an intra block and a chroma block, the index offset between the index of the second inverse quantization parameter and the index of the first inverse quantization parameter is a fixed value (2); when the block is an inter block and a luma block, the index offset is 2 for a sub-block whose size parameter is at level 0 and -3 for a sub-block whose size parameter is at level 1.
TABLE 8
[Table 8, the index offset configuration based on sub-block characteristics, is reproduced as an image in the original publication.]
It should be noted that, for sub-blocks not shown in table 8, the second inverse quantization parameter may be used directly as the first inverse quantization parameter, or the index offset may be determined according to another configuration manner; this is not limited in this embodiment of the application.
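For illustration only, the following minimal C sketch encodes the example offsets read from table 8 above; combinations not listed fall back to an offset of 0, i.e. the second inverse quantization parameter is used directly, and the enumeration names are illustrative.

#include <stdio.h>

enum frame_type { FRAME_I, FRAME_P, FRAME_B };
enum pred_type  { PRED_INTRA, PRED_INTER };
enum comp_type  { COMP_LUMA, COMP_CHROMA };

static int table8_offset(enum frame_type ft, enum pred_type pt,
                         enum comp_type ct, int size_level)
{
    if (ft == FRAME_P)
        return 0;   /* P frame: use the second parameter directly */
    if (ft == FRAME_B)
        return -6;  /* B frame: fixed offset for all block types */
    /* I frame */
    if (pt == PRED_INTRA && ct == COMP_CHROMA)
        return 2;
    if (pt == PRED_INTER && ct == COMP_LUMA)
        return (size_level == 0) ? 2 : -3;  /* size level 0 vs. level 1 */
    return 0;       /* combinations not shown in table 8 */
}

int main(void)
{
    printf("I/inter/luma/level1 -> %d\n",
           table8_offset(FRAME_I, PRED_INTER, COMP_LUMA, 1));
    return 0;
}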
The methods of the embodiments of the present application are described in detail above with reference to fig. 1 to 9, and the apparatuses of the embodiments of the present application are described below with reference to fig. 10 and fig. 11. It should be noted that the apparatuses shown in fig. 10 and fig. 11 can implement the steps of the above methods; for brevity, details are not repeated here.
Fig. 10 is a schematic diagram of a quantization apparatus according to an embodiment of the present application, and the apparatus shown in fig. 10 can perform the quantization method shown in fig. 7. The apparatus shown in fig. 10 may be located in the encoder 20, and may specifically be the quantization unit 208. The quantization apparatus 900 shown in fig. 10 includes an obtaining module 910 and a processing module 920.
In an alternative embodiment, the processing module 920 may be the processor 510, and the obtaining module 910 may be an input/output interface.
Fig. 11 is a schematic diagram of an inverse quantization apparatus according to an embodiment of the present application, and the apparatus shown in fig. 11 may perform the inverse quantization method shown in fig. 9. The apparatus shown in fig. 11 may be located at the encoder 20 and may also be located at the decoder 30. Specifically, it may be the inverse quantization unit 210, or the inverse quantization unit 310. The inverse quantization apparatus 1000 shown in fig. 11 includes an obtaining module 1010 and a processing module 1020.
In an alternative embodiment, the processing module 1020 may be the processor 510, and the obtaining module 1010 may be an input/output interface.
Those of skill in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps described in the application can be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions described in the various illustrative logical blocks, modules, and steps may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium, such as a data storage medium, or any communication medium including a medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described herein. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functions described by the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this application to emphasize functional aspects of means for performing the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit, in combination with suitable software and/or firmware, or provided by interoperating hardware units (including one or more processors as described above).
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above description is only an exemplary embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. An inverse quantization method, comprising:
obtaining the size parameter of the current sub-block;
and determining a first inverse quantization parameter used when inverse quantization is carried out on the current subblock according to the size parameter of the current subblock.
2. The method of claim 1, wherein the determining, according to the size parameter of the current sub-block, a first inverse quantization parameter used when inverse quantizing the current sub-block comprises:
and adjusting a second inverse quantization parameter of the current sub-block to the first inverse quantization parameter according to the size parameter of the current sub-block, where the second inverse quantization parameter is an inverse quantization parameter corresponding to a largest coding unit (LCU) in which the current sub-block is located.
3. The method of claim 2, wherein the first dequantization parameter is greater than the second dequantization parameter if the size parameter of the current sub-block is greater than a first size threshold, or
If the size parameter of the current sub-block is smaller than a second size threshold, the first dequantization parameter is smaller than the second dequantization parameter, wherein the first size threshold is greater than or equal to the second size threshold.
4. The method of claim 2 or 3, wherein the adjusting the second inverse quantization parameter of the current sub-block to the first inverse quantization parameter according to the size parameter of the current sub-block comprises:
adjusting the second dequantization parameter to the first dequantization parameter according to the size parameter of the current sub-block and the characteristic parameter of the current sub-block, where the characteristic parameter of the current sub-block includes at least one of: the type of the frame where the current sub-block is located or the type of the block where the current sub-block is located.
5. The method of claim 4, wherein when the characteristic parameter of the current sub-block comprises a type of a frame in which the current sub-block is located, the first dequantization parameter determined if the frame to which the current sub-block belongs is an I-frame is smaller than the first dequantization parameter determined if the frame to which the current sub-block belongs is a P-frame or a B-frame.
6. The method of claim 4, wherein the characteristic parameter of the current sub-block comprises a type of a block in which the current sub-block is located, the type of the block comprises a chroma block or a luma block, and the first inverse quantization parameter determined if the current sub-block is a chroma block is greater than the first inverse quantization parameter determined if the current sub-block is a luma block.
7. The method of claim 4, wherein the characteristic parameter of the current sub-block comprises a type of a block in which the current sub-block is located, the type of the block comprises an intra block or an inter block, and the first dequantization parameter determined if the current sub-block is an intra block is smaller than the first dequantization parameter determined if the current sub-block is an inter block.
8. An inverse quantization apparatus, comprising:
the acquisition module is used for acquiring the size parameter of the current sub-block;
and the processing module is used for determining a first inverse quantization parameter used for inverse quantization of the current subblock according to the size parameter of the current subblock acquired by the acquisition module.
9. The apparatus of claim 8,
the processing module is configured to adjust a second dequantization parameter of the current sub-block to the first dequantization parameter according to the size parameter of the current sub-block, where the second dequantization parameter is a dequantization parameter corresponding to a largest coding unit (LCU) in which the current sub-block is located.
10. The apparatus of claim 9, wherein the first dequantization parameter is greater than the second dequantization parameter if the size parameter of the current subblock is greater than a first size threshold, or
If the size parameter of the current sub-block is smaller than a second size threshold, the first dequantization parameter is smaller than the second dequantization parameter, wherein the first size threshold is greater than or equal to the second size threshold.
11. The apparatus of claim 9 or 10,
the processing module is configured to adjust the second dequantization parameter to the first dequantization parameter according to the size parameter of the current sub-block and the characteristic parameter of the current sub-block, where the characteristic parameter of the current sub-block includes at least one of the following parameters: the type of the frame where the current sub-block is located, and the type of the block where the current sub-block is located.
12. The apparatus of claim 11, wherein when the characteristic parameter of the current sub-block comprises a type of a frame in which the current sub-block is located, the first dequantization parameter determined if the frame to which the current sub-block belongs is an I-frame is smaller than the first dequantization parameter determined if the frame to which the current sub-block belongs is a P-frame or a B-frame.
13. The apparatus of claim 11, wherein the characteristic parameter of the current sub-block comprises a type of a block in which the current sub-block is located, the type of the block comprises a chroma block or a luma block, and the first inverse quantization parameter determined if the current sub-block is a chroma block is greater than the first inverse quantization parameter determined if the current sub-block is a luma block.
14. The apparatus of claim 11, wherein the characteristic parameter of the current sub-block comprises a type of a block in which the current sub-block is located, the type of the block comprises an intra block or an inter block, and the first dequantization parameter determined if the current sub-block is an intra block is smaller than the first dequantization parameter determined if the current sub-block is an inter block.
15. An encoding device, characterized by comprising: a memory and a processor coupled to each other, the processor calling program code stored in the memory to perform the method of any of claims 1-7.
16. A decoding device, characterized by comprising: a memory and a processor coupled to each other, the processor calling program code stored in the memory to perform the method of any of claims 1-7.
CN201910005657.XA 2019-01-03 2019-01-03 Quantization and inverse quantization method and device Active CN111405279B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910005657.XA CN111405279B (en) 2019-01-03 2019-01-03 Quantization and inverse quantization method and device
PCT/CN2019/130400 WO2020140889A1 (en) 2019-01-03 2019-12-31 Quantization and dequantization method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910005657.XA CN111405279B (en) 2019-01-03 2019-01-03 Quantization and inverse quantization method and device

Publications (2)

Publication Number Publication Date
CN111405279A true CN111405279A (en) 2020-07-10
CN111405279B CN111405279B (en) 2021-06-29

Family

ID=71406686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910005657.XA Active CN111405279B (en) 2019-01-03 2019-01-03 Quantization and inverse quantization method and device

Country Status (2)

Country Link
CN (1) CN111405279B (en)
WO (1) WO2020140889A1 (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100442848C (en) * 2005-04-11 2008-12-10 华为技术有限公司 Method for controlling code rate in H263 coding
CN100562118C (en) * 2007-07-03 2009-11-18 上海富瀚微电子有限公司 A kind of bit rate control method of video coding
US8451896B2 (en) * 2009-10-19 2013-05-28 Hong Kong Applied Science and Technology Research Institute Company Limited Method and apparatus for adaptive quantization in digital video coding
US20110268180A1 (en) * 2010-04-29 2011-11-03 Naveen Srinivasamurthy Method and System for Low Complexity Adaptive Quantization
JP2013038758A (en) * 2011-07-13 2013-02-21 Canon Inc Image encoder, image encoding method, program, image decoder, image decoding method and program
CN105898299A (en) * 2015-12-14 2016-08-24 乐视云计算有限公司 Self-adaptive quantification method and device based on size of transform block

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120114034A1 (en) * 2010-11-08 2012-05-10 Mediatek Inc. Method and Apparatus of Delta Quantization Parameter Processing for High Efficiency Video Coding
CN103210647A (en) * 2010-11-08 2013-07-17 联发科技股份有限公司 Method and apparatus of delta quantization parameter processing for high efficiency video coding
CN108141590A (en) * 2015-09-29 2018-06-08 高通股份有限公司 Increment QP for the quantization of rectangular transform unit, the video coding based on short distance intra prediction SDIP
WO2017075810A1 (en) * 2015-11-06 2017-05-11 华为技术有限公司 Method and apparatus for de-quantization of transform coefficients, and decoding device
CN107211133A (en) * 2015-11-06 2017-09-26 华为技术有限公司 Method, device and the decoding device of inverse quantization conversion coefficient
US20170155903A1 (en) * 2015-11-30 2017-06-01 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding video data according to local luminance intensity
CN106028032A (en) * 2016-05-24 2016-10-12 西安电子科技大学 Coefficient-level adaptive quantization method
CN106101706A (en) * 2016-06-30 2016-11-09 华为技术有限公司 A kind of method for encoding images and device
CN106851280A (en) * 2017-01-04 2017-06-13 苏睿 The method and apparatus of compression of images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Shiqiang et al., "Image compression algorithm based on adaptive segmentation and adaptive quantization", Journal of Dalian Maritime University *

Also Published As

Publication number Publication date
CN111405279B (en) 2021-06-29
WO2020140889A1 (en) 2020-07-09

Similar Documents

Publication Publication Date Title
CN111327904B (en) Image reconstruction method and device
CN114173114B (en) Image prediction method, device, equipment, system and storage medium
CN111277828B (en) Video encoding and decoding method, video encoder and video decoder
CN111416981B (en) Video image decoding and encoding method and device
CN112055200A (en) MPM list construction method, and chroma block intra-frame prediction mode acquisition method and device
CN111385572A (en) Prediction mode determining method and device, coding equipment and decoding equipment
CN111416977A (en) Video encoder, video decoder and corresponding methods
CN111355959A (en) Image block division method and device
WO2021164014A1 (en) Video encoding method and device
CN112118447A (en) Construction method and device of fusion candidate motion information list and coder/decoder
CN111327899A (en) Video decoder and corresponding method
CN112055211B (en) Video encoder and QP setting method
WO2021180220A1 (en) Image encoding and decoding method and apparatus
CN113366850B (en) Video encoder, video decoder and corresponding methods
WO2020224476A1 (en) Image division method, apparatus and device
CN111327894B (en) Block division method, video coding and decoding method and video coder and decoder
CN111277840B (en) Transform method, inverse transform method, video encoder and video decoder
CN112637590A (en) Video encoder, video decoder and corresponding methods
CN111405279B (en) Quantization and inverse quantization method and device
CN113316939A (en) Context modeling method and device for zone bit
CN111294603A (en) Video coding and decoding method and device
CN112135128A (en) Image prediction method, coding tree node division method and device thereof
CN112135148B (en) Non-separable transformation method and device
CN113170147B (en) Video encoder, video decoder, and corresponding methods
CN111770337B (en) Video encoding method, video decoding method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant