CN116982262A - State transition for dependent quantization in video coding - Google Patents

State transition for dependent quantization in video coding

Info

Publication number
CN116982262A
Authority
CN
China
Prior art keywords
current
quantization
quantizer
block
elements
Prior art date
Legal status
Pending
Application number
CN202180093746.XA
Other languages
Chinese (zh)
Inventor
余越
于浩平
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Innopeak Technology Inc
Priority date
Filing date
Publication date
Application filed by Innopeak Technology Inc
Publication of CN116982262A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - … using adaptive coding
    • H04N19/102 - … characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 - Quantisation
    • H04N19/126 - Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/129 - Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N19/134 - … characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H04N19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/169 - … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - … the unit being an image region, e.g. an object
    • H04N19/176 - … the region being a block, e.g. a macroblock
    • H04N19/18 - … the unit being a set of transform coefficients


Abstract

In some embodiments, a video encoder or decoder reconstructs a block of a video through dependent quantization. The video encoder or decoder accesses quantization elements associated with the block and processes the quantization elements according to an order of the block to generate corresponding dequantized elements. The processing includes obtaining a current quantization element of the block from the quantization elements, and determining a quantizer for the current quantization element based on at least two quantization elements preceding the current quantization element or based on a comparison of the element immediately preceding the current quantization element with zero. The processing also includes dequantizing the current quantization element based on the quantizer to generate a dequantized element. The video encoder or decoder reconstructs the block based on the dequantized elements.

Description

State transition for dependent quantization in video coding
Cross Reference to Related Applications
This application claims priority to U.S. Provisional Application No. 63/151,535, filed on February 19, 2021, entitled "New Dependent Quantization State Transition for Video Coding," the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates generally to computer-implemented methods and systems for video processing. In particular, the present disclosure relates to dependent quantization for video coding.
Background
Ubiquitous camera-enabled devices, such as smartphones, tablets, and computers, make capturing video or images easier than ever before. However, even a short video may contain a considerable amount of data. Video coding techniques (including encoding and decoding) allow video data to be compressed into smaller sizes, so that various videos can be stored and transmitted. Video coding is widely used in applications such as digital television broadcasting, video transmission over the internet and mobile networks, real-time applications (e.g., video chat and video conferencing), DVD, Blu-ray disc, and so on. To reduce the storage space and network bandwidth that video consumes, it is desirable to improve the efficiency of video coding schemes.
Disclosure of Invention
Some embodiments relate to dependent quantization for video coding. In one example, a method for reconstructing a block of a video includes accessing a plurality of quantization elements associated with the block, and processing the plurality of quantization elements according to an order of the block to generate corresponding dequantized elements. The processing includes obtaining a current quantization element of the block from the plurality of quantization elements; determining a quantizer for the current quantization element based on at least two quantization elements preceding the current quantization element or based on a comparison of an element immediately preceding the current quantization element with zero; and dequantizing the current quantization element based on the quantizer to generate a dequantized element. The method also includes reconstructing the block based on the dequantized elements.
In another example, a non-transitory computer-readable medium has program code stored thereon, and the program code is executable by one or more processing devices for performing operations. The operations include accessing a plurality of quantization elements associated with a block of a video, and processing the plurality of quantization elements according to an order of the block to generate corresponding dequantized elements. The processing includes obtaining a current quantization element of the block from the plurality of quantization elements, and determining a current state for quantization based on a state transition table, a previous state used for quantizing the element immediately preceding the current quantization element, and a value v, where v is determined based on at least one quantization element preceding the current quantization element. The processing also includes determining a quantizer for the current quantization element based on the current state for quantization, and dequantizing the current quantization element based on the quantizer to generate a dequantized element. The operations also include reconstructing the block based on the dequantized elements.
In yet another example, a system includes a processing device and a non-transitory computer-readable medium communicatively coupled to the processing device. The processing device is configured to execute program code stored in the non-transitory computer-readable medium to perform operations. The operations include accessing a plurality of elements associated with a block of a video, and processing the plurality of elements according to an order of the block. The processing includes obtaining a current element of the block from the plurality of elements, determining a quantizer for the current element based on at least two elements preceding the current element or based on a comparison of an element immediately preceding the current element with zero, and quantizing the current element based on the quantizer to generate a quantization element. The operations also include encoding the quantization element into a bitstream representing the video.
These illustrative embodiments are mentioned not to limit or define the disclosure, but to provide examples to aid understanding of it. Additional embodiments are discussed in the detailed description, where further description is provided.
Drawings
The features, embodiments, and advantages of the present disclosure may be better understood when the following detailed description is read with reference to the accompanying drawings.
Fig. 1 shows a block diagram of an example video encoder configured to implement an embodiment of the application.
Fig. 2 shows a block diagram of an example video decoder configured to implement an embodiment of the application.
Fig. 3 depicts an example of coding tree unit partitioning of an image in a video according to some embodiments of the present disclosure.
Fig. 4 depicts an example of coding unit partitioning of a coding tree unit according to some embodiments of the present disclosure.
Fig. 5 depicts an example of a coded block having a predetermined order of elements for processing the coded block.
Fig. 6 depicts an example of two quantizers used for dependent quantization in existing video coding techniques.
Fig. 7 depicts an example of a state transition diagram for dependent quantization in accordance with some embodiments of the present disclosure.
Fig. 8 depicts an example of a state transition diagram and an associated state transition table for dependent quantization in accordance with some embodiments of the present disclosure.
Fig. 9 depicts an example of a process for encoding a video block via dependent quantization, according to some embodiments of the present disclosure.
Fig. 10 depicts an example of a process for reconstructing a video block quantized using dependent quantization, according to some embodiments of the disclosure.
FIG. 11 depicts an example of a computing system that may be used to implement some embodiments of the present disclosure.
Detailed Description
Various embodiments may provide dependent quantization for video coding. As described above, more and more video data is being generated, stored, and transmitted. It would be beneficial to improve the coding efficiency of video coding techniques so that less data can be used to represent video without compromising the visual quality of the decoded video. One aspect of improving coding efficiency is improving the quantization scheme of video coding. Recent video coding standards, such as Versatile Video Coding (VVC), have adopted dependent quantization techniques.
In dependent quantization, multiple quantizers may be used when quantizing the elements (e.g., pixel values or transform coefficients) of a coded block (or "block"). The quantizer used to quantize the current element depends on the values of previous elements in the coded block. In some examples, the value of the previous quantized element is used to determine the state of the current element, which in turn is used to determine the quantizer for the current element. Because existing dependent quantization methods use only limited information from previously quantized elements of a block when determining the quantizer for the current element of the block, the quantizer for the current element may be selected inaccurately or inappropriately. Furthermore, the state transition table used to determine the quantizer in existing dependent quantization has fixed values, which may reduce the coding efficiency for video signals whose statistics or distributions do not match those for which the table was designed.
Various embodiments described herein address these problems by considering multiple elements in a coded block when selecting a quantizer for the current element of the coded block. The element of the encoded block may be a residual after inter or intra prediction of the encoded block. The element may be a transform coefficient of the residual in the frequency domain or a value of the residual in the pixel domain. When selecting a quantizer for an element, quantized values of a plurality of elements that have been processed before the element are used. In addition, a state transition table different from the existing state transition table is proposed to improve the coding efficiency of video having statistics or distribution that do not match the existing state transition table.
The following non-limiting examples are provided to introduce some embodiments. In one example, the quantizer of the current element of the encoded block is determined based on a parity check of the sum of quantized values of all previously processed elements in the encoded block. In some implementations, if the elements are transform coefficients in the frequency domain, the elements in the encoded block are processed according to a predetermined order, e.g., from highest frequency to lowest frequency. In these examples, the video encoder or decoder calculates the parity of the sum of the quantization levels of the elements preceding the current element according to a predetermined order. The state of the current element is then determined using the calculated parity according to the state transition table, and the quantizer corresponding to the determined state is the quantizer for the current element.
Alternatively or additionally, the video encoder or decoder calculates the parity of the number of non-zero quantization levels (or zero quantization levels) among the quantization elements preceding the current element in the predetermined order. The state of the current element is then determined using the calculated parity according to the state transition table, and the quantizer corresponding to that state is the quantizer for the current element. Compared to existing dependent quantization methods, which use only the one element immediately preceding the current element to determine the state of the current element, the proposed method considers information from multiple previously quantized elements, making the selection of the quantizer more reliable and accurate.
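The two parity-based selection rules above can be sketched as follows. The four-state transition table and the state-to-quantizer mapping are VVC-style placeholders chosen for illustration, not the table claimed by this application:

```python
# Sketch of parity-based state selection for dependent quantization.
# STATE_TABLE and the state-to-quantizer mapping are VVC-style
# placeholders, not the table claimed by this application.
STATE_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]  # next = STATE_TABLE[state][v]

def parity_of_sum(prev_levels):
    """v = parity of the sum of all previously processed quantization levels."""
    return sum(prev_levels) & 1

def parity_of_nonzero_count(prev_levels):
    """v = parity of the number of non-zero previously processed levels."""
    return sum(1 for level in prev_levels if level != 0) & 1

def next_state(state, prev_levels, rule=parity_of_sum):
    """Advance the quantization state using the chosen parity rule."""
    return STATE_TABLE[state][rule(prev_levels)]

def quantizer_for_state(state):
    """States 0 and 1 select quantizer Q0; states 2 and 3 select Q1."""
    return 0 if state < 2 else 1
```

For instance, with previous levels [1, 2] the sum parity is 1, so from state 0 the next state is 2 and quantizer Q1 would be used for the current element.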
In another example, if the quantization level of the element immediately preceding the current element is not zero, the state of the current element is determined according to a state transition table. Otherwise, the state remains unchanged and the current element is quantized using the quantizer of the previous element. This embodiment eliminates the step of determining the state of the current element when the quantization level of the previous element is zero, thereby reducing the computational complexity of the encoding and decoding algorithms. In yet another example, an alternative state transition table, different from the existing state transition table, is presented to capture statistics or distributions of some videos that are not captured by existing dependent quantization. The state of the current element may be calculated using any of the examples disclosed herein.
The video encoder may quantize the current element using the selected quantizer. The quantized elements of the encoded block may then be encoded into a bitstream of video. On the decoder side or encoder side, when reconstructing a block for prediction purposes, the dequantization process may use any method or any combination of the above methods to determine the state of each quantization element in the block, and then determine the quantizer. The determined quantizer may dequantize the elements, and the dequantized elements of the block are then used to reconstruct the video block for display (at the decoder) or to predict (at the encoder) other blocks or images.
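Putting these steps together, a decoder-side dequantization loop might look like the following sketch. The state table, the parity-based state update, and the two reconstruction rules follow the VVC convention and are assumptions for illustration; the variants described above would substitute their own state update rules:

```python
# Decoder-side dequantization loop (illustrative sketch, VVC-style).
STATE_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]

def sgn(k):
    return (k > 0) - (k < 0)

def dequantize_block(levels, step):
    """Dequantize a list of levels (in scan order) with a 4-state machine.

    States 0/1 use quantizer Q0 (reconstruction 2*k*step); states 2/3 use
    Q1 (reconstruction (2*k - sgn(k))*step). The state is advanced with
    the parity of the current level, as in VVC.
    """
    state = 0
    recon = []
    for k in levels:
        if state < 2:
            recon.append(2 * k * step)
        else:
            recon.append((2 * k - sgn(k)) * step)
        state = STATE_TABLE[state][k & 1]
    return recon
```

Under these assumptions, `dequantize_block([1, 0, 2, 1], 1.0)` yields [2.0, 0.0, 4.0, 1.0]: the first level is reconstructed with Q0, the state then moves to 2, and subsequent levels alternate quantizers as the state machine dictates.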
As described herein, some embodiments improve coding efficiency by utilizing more information than existing dependent quantization methods and by employing different state transition tables. By using elements other than the element immediately preceding the current element, the state of the current element can be determined more accurately, and thus the selected quantizer is more suitable for the current element than in existing methods. Furthermore, by skipping the state determination for some elements, the computational complexity of quantization can be reduced without sacrificing the visual quality of the encoded video. In addition, the use of different state transition tables allows the video coding scheme to achieve high coding efficiency for different sets of video signals. These techniques may serve as an effective coding tool in future video coding standards.
Referring now to the drawings, FIG. 1 illustrates an example block diagram of a video encoder 100 for implementing an embodiment of the present application. In the example shown in fig. 1, video encoder 100 includes a partitioning module 112, a transform module 114, a quantization module 115, an inverse quantization module 118, an inverse transform module 119, an in-loop filter module 120, an intra-prediction module 126, an inter-prediction module 124, a motion estimation module 122, a decoded image buffer 130, and an entropy encoding module 116.
The input to the video encoder 100 is an input video 102 that contains a sequence of images (also referred to as frames or pictures). In a block-based video encoder, for each image, video encoder 100 employs a partitioning module 112 to partition the image into blocks 104, and each block contains a plurality of pixels. The block may be a macroblock, a coding tree unit, a coding unit, a prediction unit, and/or a prediction block. One image may include blocks of different sizes, and the block division of different images of the video may also be different. Each block may be encoded with different predictions, such as intra-prediction or inter-prediction or a hybrid of intra-and inter-prediction.
Typically, the first picture of a video signal is an intra-predicted picture, which is encoded using intra-prediction only. In intra prediction mode, only the blocks of an image are predicted using data from the same image. An intra-predicted image may be decoded without information of other images. To perform intra prediction, the video encoder 100 shown in fig. 1 may use an intra prediction module 126. The intra prediction module 126 is configured to generate an intra prediction block (prediction block 134) using reconstructed samples in a reconstructed block 136 of neighboring blocks of the same image. Intra prediction is performed according to an intra prediction mode selected for the block. Then, the video encoder 100 calculates the difference between the block 104 and the intra prediction block 134. This difference is referred to as residual block 106.
To further remove redundancy from the block, the transform module 114 transforms the residual block 106 into the transform domain by transforming the samples in the block. Examples of transforms may include, but are not limited to, discrete Cosine Transforms (DCTs) or Discrete Sine Transforms (DSTs). The transformed values may be referred to as transform coefficients representing a residual block in the transform domain. In some examples, the residual block may be directly quantized without transformation by the transformation module 114. This is referred to as a transform skip mode.
Video encoder 100 may further quantize the transform coefficients using quantization module 115 to obtain quantized coefficients. Quantization involves dividing a sample by a quantization step size followed by rounding, and inverse quantization involves multiplying the quantized value by the quantization step size. This quantization process is known as scalar quantization. Quantization is used to reduce the dynamic range of video samples (transformed or untransformed) so that fewer bits are needed to represent them.
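The divide-round/multiply relationship can be illustrated with a minimal sketch (real codecs add rounding offsets, scaling matrices, and clipping, all omitted here):

```python
def quantize(coeff, step):
    """Scalar quantization: divide by the step size, then round."""
    return round(coeff / step)

def dequantize(level, step):
    """Inverse quantization: multiply the level by the step size."""
    return level * step

# A larger step size gives coarser quantization: fewer distinct levels
# to code, at the cost of more reconstruction error.
fine = quantize(36.0, 2.0)                # level 18
coarse = quantize(36.0, 10.0)             # level 4
reconstructed = dequantize(coarse, 10.0)  # 40.0, off by 4.0 from the input
```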
Quantization of the coefficients/samples within a block can be done independently, and this quantization method is used in some existing video compression standards, such as H.264 and HEVC. For an N-by-M block, the 2D coefficients of the block may be converted into a 1-D array, using a particular scan order, for coefficient quantization and encoding. Quantization of the coefficients within a block may utilize the scan order information. For example, the quantization of a given coefficient in the block may depend on the state of previously quantized values along the scan order. To further improve coding efficiency, more than one quantizer may be used. Which quantizer is used to quantize the current coefficient depends on information that precedes the current coefficient in the encoding/decoding scan order. This quantization method is called dependent quantization.
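For illustration, VVC's two quantizers are commonly described as mapping a level k to the reconstructed values 2·k·Δ (quantizer Q0) and (2·k - sgn(k))·Δ (quantizer Q1), where Δ is the quantization step size. A minimal sketch of that mapping, ignoring QP-dependent scaling:

```python
def sgn(k):
    return (k > 0) - (k < 0)

def reconstruct(level, step, quantizer):
    """Map a quantization level to a reconstructed value.

    quantizer 0 (Q0): even multiples of the step size, 2*k*step.
    quantizer 1 (Q1): odd multiples of the step size (except level 0).
    """
    if quantizer == 0:
        return 2 * level * step
    return (2 * level - sgn(level)) * step

# With step 1.0, Q0 covers ..., -4, -2, 0, 2, 4, ... and Q1 covers
# ..., -3, -1, 0, 1, 3, ...; switching between them interleaves the
# two grids without signaling extra bits per coefficient.
q0 = [reconstruct(k, 1.0, 0) for k in range(-2, 3)]
q1 = [reconstruct(k, 1.0, 1) for k in range(-2, 3)]
```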
Quantization step sizes may be used to adjust the quantization levels. For example, for scalar quantization, different quantization steps may be applied to achieve finer or coarser quantization. A smaller quantization step corresponds to finer quantization, while a larger quantization step corresponds to coarser quantization. The quantization step size may be indicated by a Quantization Parameter (QP). Quantization parameters are provided in the encoded bitstream of video so that a video decoder can decode with the same quantization parameters.
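As a rough illustration of the QP-to-step relationship: in HEVC and VVC the step size approximately doubles for every increase of 6 in QP, roughly Δ ≈ 2^((QP - 4)/6). The sketch below assumes that convention and omits bit-depth offsets and the integer scaling tables real codecs use:

```python
def quant_step(qp):
    """Approximate quantization step size for a given QP.

    The step doubles every 6 QP values; bit-depth offsets and the exact
    integer scaling used by real codecs are omitted.
    """
    return 2.0 ** ((qp - 4) / 6.0)

# QP 4 -> step 1.0; QP 10 -> step 2.0; QP 16 -> step 4.0.
steps = [quant_step(qp) for qp in (4, 10, 16)]
```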
Next, the entropy encoding module 116 encodes the quantized samples to further reduce the size of the video signal. The entropy encoding module 116 is configured to apply an entropy encoding algorithm to the quantized samples. Examples of entropy coding algorithms include, but are not limited to, variable length coding (VLC) schemes, context-adaptive VLC (CAVLC) schemes, arithmetic coding schemes, binarization, context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or other entropy coding techniques. The entropy-encoded data is added to the bitstream of the output encoded video 132.
As described above, reconstructed blocks 136 from neighboring blocks are used in intra prediction of a block of an image. Generating a reconstructed block 136 of a block includes calculating a reconstructed residual for the block. The reconstructed residual may be determined by applying inverse quantization and inverse transform to the quantized residual of the block. The inverse quantization module 118 is configured to apply inverse quantization to the quantized samples to obtain dequantized coefficients. The inverse quantization module 118 applies an inverse of the quantization scheme used by the quantization module 115 by using the same quantization step size as the quantization module 115. The inverse transform module 119 is configured to apply an inverse transform of the transform applied by the transform module 114 to the dequantized samples, such as an inverse DCT or an inverse DST. The output of inverse transform module 119 is the reconstructed residual of the block in the pixel domain. The reconstructed residual may be added to a prediction block 134 of the block to obtain a reconstructed block 136 in the pixel domain. For blocks that skip transforms, the inverse transform module 119 is not applied to these blocks. The dequantized samples are the reconstructed residuals of the block.
The block in the subsequent image after the first intra-predicted image may be encoded using inter-prediction or intra-prediction. In inter prediction, the prediction of a block in a picture is from one or more previously encoded video pictures. To perform inter prediction, the video encoder 100 uses an inter prediction module 124. The inter prediction module 124 is configured to perform motion compensation on the block based on the motion estimation provided by the motion estimation module 122.
The motion estimation module 122 compares the current block 104 of the current image with the decoded reference image 108 for motion estimation. The decoded reference pictures 108 are stored in a decoded picture buffer 130. The motion estimation module 122 selects the reference block from the decoded reference pictures 108 that best matches the current block. The motion estimation module 122 further identifies an offset between the location of the reference block (e.g., x, y coordinates) and the location of the current block. This offset is referred to as a Motion Vector (MV) and is provided to the inter prediction module 124. In some cases, a plurality of reference blocks are identified for blocks in the plurality of decoded reference pictures 108. Thus, a plurality of motion vectors are generated and provided to the inter prediction module 124.
The inter prediction module 124 performs motion compensation using the motion vector and other inter prediction parameters to generate a prediction of the current block (i.e., the inter prediction block 134). For example, based on the motion vector, the inter prediction module 124 may locate a prediction block pointed to by the motion vector in a corresponding reference image. If there is more than one prediction block, these prediction blocks are combined with some weights to generate the prediction block 134 of the current block.
For inter-prediction blocks, video encoder 100 may subtract inter-prediction block 134 from block 104 to generate residual block 106. The residual block 106 may be transformed, quantized, and entropy encoded in the same manner as the residual of the intra-prediction block discussed above. Likewise, a reconstructed block 136 of the inter prediction block may be obtained by inverse quantization, inverse transforming the residual and then combining with the corresponding prediction block 134.
To obtain the decoded image 108 for motion estimation, the reconstruction block 136 is processed by the in-loop filter module 120. The in-loop filter module 120 is configured to smooth pixel transitions to improve video quality. In-loop filter module 120 may be configured to implement one or more in-loop filters, such as a deblocking filter, or a Sample Adaptive Offset (SAO) filter, or an Adaptive Loop Filter (ALF), or the like.
Fig. 2 depicts an example of a video decoder 200 for implementing an embodiment of the application. The video decoder 200 processes the encoded video 202 in the bitstream and generates decoded images 208. In the example shown in fig. 2, video decoder 200 includes entropy decoding module 216, inverse quantization module 218, inverse transform module 219, in-loop filter module 220, intra prediction module 226, inter prediction module 224, and decoded image buffer 230.
The entropy decoding module 216 is used to entropy decode the encoded video 202. The entropy decoding module 216 decodes the quantized coefficients, the encoding parameters (including intra-prediction parameters and inter-prediction parameters), and other information. The entropy-decoded coefficients are then inverse quantized by the inverse quantization module 218 and inverse transformed to the pixel domain by the inverse transform module 219. The inverse quantization module 218 and the inverse transform module 219 function similarly to the inverse quantization module 118 and the inverse transform module 119, respectively, described above with respect to fig. 1. The inverse-transformed residual block may be added to the corresponding prediction block 234 to generate a reconstructed block 236. For blocks for which the transform is skipped, the inverse transform module 219 is not applied; the dequantized samples generated by the inverse quantization module 218 are used directly to generate the reconstructed block 236.
The prediction block 234 of a particular block is generated based on the prediction mode of the block. If the encoding parameters of the block indicate that the block is intra-predicted, a reconstructed block 236 of a reference block in the same image may be input into the intra-prediction module 226 to generate a predicted block 234 of the block. If the encoding parameters of the block indicate that the block is inter predicted, a prediction block 234 is generated by the inter prediction module 224. The intra-prediction module 226 and the inter-prediction module 224 function similarly to the intra-prediction module 126 and the inter-prediction module 124, respectively, of fig. 1.
As described above with respect to fig. 1, inter prediction involves one or more reference pictures. The video decoder 200 generates a decoded image 208 of the reference image by applying the in-loop filter module 220 to the reconstructed block of the reference image. Decoded image 208 is stored in decoded image buffer 230 for use by inter prediction module 224 and for output.
Referring to fig. 3, fig. 3 depicts an example of coding tree unit partitioning of an image in a video, according to some embodiments of the present disclosure. As discussed above with respect to figs. 1 and 2, to encode an image of a video, the image is divided into blocks, such as the coding tree units (CTUs) 302 in VVC shown in fig. 3. For example, a CTU 302 may be a block of 128×128 pixels. The CTUs are processed in the order shown in fig. 3. In some examples, each CTU 302 in the image may be partitioned into one or more coding units (CUs) 402, as shown in fig. 4, which may be used for prediction and transformation. A CTU 302 may be partitioned into multiple CUs 402 in different ways according to the coding scheme. For example, in VVC, a CU 402 may be rectangular or square, and may be encoded without being further divided into prediction units or transform units. Each CU 402 may be as large as its root CTU 302 or as small as a 4×4 sub-block of the root CTU 302. As shown in fig. 4, the partitioning from a CTU 302 to CUs 402 in VVC may be a quadtree partitioning, a binary tree partitioning, or a ternary tree partitioning. In fig. 4, solid lines represent quadtree partitioning and broken lines represent binary tree partitioning.
Dependency quantization
As discussed above with respect to figs. 1 and 2, quantization reduces the dynamic range of block elements in a video signal so that fewer bits are used to represent the video signal. In some examples, the element at a particular location of a block is referred to as a coefficient prior to quantization. After quantization, the quantized value of the coefficient is called a quantization level, or simply a level. Quantization typically involves division by a quantization step size followed by rounding, while inverse quantization involves multiplication by the quantization step size. This process is also known as scalar quantization. Quantization of the coefficients within a block may be performed independently, and such independent quantization is used in some existing video compression standards, e.g., H.264 and HEVC. In other examples, dependency quantization is employed, such as in VVC.
For an N×M block, a particular scan order may be used to convert the 2-D coefficients of the block into a 1-D array for coefficient quantization and encoding, and the same scan order is used for both encoding and decoding. Fig. 5 shows an example of a coded block with a predetermined scan order for processing its coefficients. In this example, the size of the coded block 500 is 8×8, and processing begins at the lower-right corner position L_0 and ends at the upper-left corner position L_63. If the block 500 is a transformed block, the predetermined order shown in fig. 5 proceeds from the highest frequency to the lowest frequency. In some examples, processing of the block, e.g., quantization, begins at the first non-zero element of the block according to the predetermined scan order. For example, if the coefficients at positions L_0 to L_17 are all zero and the coefficient at L_18 is non-zero, quantization begins at the coefficient at L_18 and is performed for each coefficient from L_18 onward in the scan order.
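As a small illustration of this convention, the following Python sketch (the function name is illustrative, not from the disclosure) locates the first non-zero element along a 1-D scan-order array, which is where quantization would begin:

```python
def first_nonzero_index(scan_levels):
    """Return the scan-order index of the first non-zero element,
    or None if the block is all zero. Quantization starts here."""
    for i, value in enumerate(scan_levels):
        if value != 0:
            return i
    return None

# Mirrors the example in the text: L_0..L_17 are zero, L_18 is not.
block = [0] * 18 + [5, 0, -3]
```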
In dependency quantization, quantization of the coefficients within a block may utilize scan-order information. For example, the quantization of a coefficient may depend on the state associated with the previous quantization level along the scan order. In addition, to further improve coding efficiency, more than one quantizer (e.g., two quantizers) is used in dependency quantization. In existing dependency quantization, the quantizer used to quantize the current coefficient depends on the coefficient immediately preceding the current coefficient in the scan order.
Fig. 6 shows an example of the quantizers used in the dependency quantization employed by VVC. In this example, two quantizers, Q0 and Q1, are used. The quantization step size Δ is determined by a quantization parameter embedded in the bitstream. Ideally, the quantizer used to quantize the current coefficient could be signaled explicitly. However, the overhead of such quantizer signaling reduces coding efficiency. Instead of explicit signaling, the quantizer for the current coefficient may therefore be determined and derived from the quantization level of the coefficient immediately preceding the current coefficient. For example, VVC uses a four-state model, and the parity of the quantization level of the previous coefficient is used to determine the state of the current coefficient. This state in turn determines the quantizer used to quantize the current coefficient.
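To make the two-quantizer idea concrete, here is a minimal Python sketch of the reconstruction grids commonly attributed to Q0 and Q1 in descriptions of VVC dependent quantization (Q0 on even multiples of the step size Δ, Q1 on odd multiples plus zero). The function name is illustrative, and the exact grids in the disclosure are defined by fig. 6, which is not reproduced here:

```python
def reconstruct(level, quantizer, delta):
    """Map a quantization level back to a reconstructed value.

    Assumed grids (a common description of VVC's two quantizers):
      Q0: ..., -4*delta, -2*delta, 0, 2*delta, 4*delta, ...
      Q1: ..., -3*delta,   -delta, 0,   delta, 3*delta, ...
    """
    if quantizer == 0:
        return 2 * level * delta          # Q0: even multiples of delta
    if level == 0:
        return 0.0                        # zero is shared by both grids
    sign = 1 if level > 0 else -1
    return (2 * level - sign) * delta     # Q1: odd multiples of delta
```

The union of the two grids doubles the density of available reconstruction points for a given Δ, which is where the coding gain of switching between quantizers comes from.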
TABLE 1 State transition table for dependency quantization

state_{i-1}    k_{i-1} & 1 = 0    k_{i-1} & 1 = 1
0              0                  2
1              2                  0
2              1                  3
3              3                  1
Table 1 shows the state transition table employed by VVC. The state of a coefficient can take four different values: 0, 1, 2, and 3. The state of the current coefficient is uniquely determined by the parity of the quantization level immediately preceding the current coefficient in the encoding/decoding scan order. At the beginning of quantization on the encoding side, or of inverse quantization on the decoding side of a block, the state is set to a default value, e.g., 0. The coefficients are quantized or dequantized in the predetermined scan order (i.e., the same order used for entropy encoding/decoding). After one coefficient is quantized or dequantized, the process moves to the next coefficient according to the scan order. The next coefficient becomes the new current coefficient, and the coefficient just processed becomes the previous coefficient. The state state_i of the new current coefficient is determined according to Table 1, where k_{i-1} represents the value of the quantization level of the previous coefficient. The index i indicates the position of the coefficient or quantization level along the scan order. Note that in this example, the state depends only on the state state_{i-1} of the previous coefficient and the parity (k_{i-1} & 1) of the level k_{i-1} of the previous coefficient at position i-1. The update process for the state can be formulated as:
state_i = stateTransTable[state_{i-1}][k_{i-1} & 1]    (1)
where stateTransTable represents the table shown in Table 1, and the operator & is the bitwise AND operator in two's-complement arithmetic. Alternatively, the state transition may be specified without a look-up table, as follows:
state_i = (32040 >> ((state_{i-1} << 2) + ((k_{i-1} & 1) << 1))) & 3    (2)
where the 16-bit value 32040 specifies the state transition table. The state uniquely specifies the scalar quantizer to be used. In one example, if the state of the current coefficient is equal to 0 or 1, the scalar quantizer Q0 shown in fig. 6 is used; otherwise (state equal to 2 or 3), the scalar quantizer Q1 shown in fig. 6 is used.
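Equations (1) and (2) describe the same transition. The Python sketch below (function names are mine) checks that the look-up-table form and the packed-constant form agree, with the table entries unpacked from the constant 32040 as in equation (2):

```python
# Table 1 entries: next state indexed by [current state][parity of previous level]
STATE_TRANS_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]

def next_state_table(state, prev_level):
    # Equation (1): table look-up on the parity of the previous level
    return STATE_TRANS_TABLE[state][prev_level & 1]

def next_state_packed(state, prev_level):
    # Equation (2): two bits per (state, parity) entry packed into 32040
    return (32040 >> ((state << 2) + ((prev_level & 1) << 1))) & 3

def quantizer_for_state(state):
    # States 0 and 1 select Q0; states 2 and 3 select Q1
    return 0 if state in (0, 1) else 1

# Both formulations agree for every state and level parity.
assert all(
    next_state_table(s, k) == next_state_packed(s, k)
    for s in range(4) for k in (0, 1, 2, 3)
)
```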
As noted above, performance and complexity in existing dependency quantization may not be optimal. In this disclosure, state transition methods are presented to improve dependency quantization.
In one example, instead of the parity of the quantization level at the immediately preceding position in the scan order, the parity of the sum of the quantization levels at all preceding encoding/decoding positions in the coded block is used for the state transition. Let k_m denote the quantization level of the m-th element in the scan order. For the current i-th element, the indices i-1, i-2, …, 0 represent the positions of the previous quantization levels in the coded block that precede the current element. In the example shown in fig. 5, if the current element is at position L_24, the previous quantization levels in the block include the quantization levels of the elements at L_0 to L_23. In examples where processing starts at the first non-zero element in the block along the scan order, these previous elements include the elements from the first non-zero element to the element immediately preceding the current element. For example, if the first non-zero element in the block shown in fig. 5 is at L_18, the previous elements include the elements at positions L_18 to L_23.
The sum sum(i-1) of all quantization levels before position i can be calculated as:

sum(i-1) = k_0 + k_1 + … + k_{i-1}    (3)
the state of the quantizer used to determine the current element at location i may be calculated as:
state_i = stateTransTable[state_{i-1}][sum(i-1) & 1]    (4)
Here, stateTransTable may be the state transition table shown in Table 2, the state transition table discussed below with reference to fig. 8, or any other state transition table suitable for dependency quantization.
TABLE 2
While equations (3) and (4) calculate sum(i-1) and use it to determine the state of the current element, other methods may be used. For example, a video encoder or decoder may directly compute the parity of the sum (i.e., sum(i-1) & 1) without computing the sum value itself. In one example, the state of the current element may be determined as
state_i = stateTransTable[state_{i-1}][XOR(LSB(k_0), LSB(k_1), …, LSB(k_{i-1}))]    (5)
where XOR is the exclusive-or operator and LSB(x) is a function that returns the least significant bit of the argument x. In another example, the state of the current element may be determined as
state_i = stateTransTable[state_{i-1}][c_{i-1} & 1]    (6)
where c_{i-1} is a counter that counts the total number of 1s in the least significant bits of the previous quantization levels.
The examples discussed above for determining the state of the current element based on all previous elements are for illustration and should not be limiting; various other ways may be used. Further, in some examples, instead of using all previous quantization levels in the block, a subset of these quantization levels may be used to determine the state of the current element, e.g., the previous l quantization levels, where l takes a value between 2 and i-1. For example, the state of the current element may be determined from the parity of the sum of the immediately preceding l quantization levels using any of the methods discussed above (e.g., the methods shown in equation (4), (5), or (6)).
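The three formulations in equations (4), (5), and (6) feed the same bit into the state transition. The following Python sketch (helper names are mine) shows that the sum parity, the XOR of least significant bits, and the odd-level counter parity coincide:

```python
from functools import reduce

def parity_by_sum(levels):
    # Equations (3)/(4): parity of the sum of the previous levels
    return sum(levels) & 1

def parity_by_xor(levels):
    # Equation (5): XOR of the least significant bits of the previous levels
    return reduce(lambda acc, k: acc ^ (k & 1), levels, 0)

def parity_by_counter(levels):
    # Equation (6): parity of the count of odd previous levels
    return sum(1 for k in levels if k & 1) & 1

# All three compute the same transition input.
levels = [3, 0, 2, 5, 1]
assert parity_by_sum(levels) == parity_by_xor(levels) == parity_by_counter(levels)
```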
Compared with existing dependency quantization methods, which use only the single quantization level immediately preceding the current element to determine its state, the method presented in this example considers information from multiple previous elements, making the selection of the quantizer more reliable and accurate.
In another embodiment, a state determination method based on comparing the value of a quantization level with zero is presented. After quantization, the quantization levels of some elements may be 0. In this case, it is unnecessary to use the zero value to compute the state transition. Accordingly, the state used to determine the quantizer for encoding or decoding the current element is computed as follows only when the previous quantization level k_{i-1} is not zero.
if (k_{i-1} != 0)
    state_i = stateTransTable[state_{i-1}][k_{i-1} & 1]    (7)
else
    state_i = state_{i-1}
where k_{i-1} is the quantization level at position i-1 in the encoding/decoding scan order. If the quantization level at position i-1 is zero, the state remains unchanged, and thus the quantizer for the current element at position i also remains unchanged. Fig. 7 shows a state transition diagram for this state determination method when the transition table of Table 1 is used. The state transition in fig. 7 differs from that of Table 1 in that, if the quantization level of the previous element is zero, the state of the current element remains the same as the state of the previous element. Table 3 shows a state transition table equivalent to the state transition diagram of fig. 7. This embodiment eliminates the state determination for the current element when the quantization level of the previous element is zero, thereby reducing the computational complexity of the encoding and decoding algorithms.
TABLE 3
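The zero-skipping rule of equation (7) can be sketched as follows, using the Table 1 transitions (the table values here are unpacked from the constant 32040 in equation (2)):

```python
STATE_TRANS_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]  # Table 1

def update_state_skip_zero(state, prev_level):
    """Equation (7): the state advances only for a non-zero previous
    level; a zero level leaves the state (and quantizer) unchanged."""
    if prev_level != 0:
        return STATE_TRANS_TABLE[state][prev_level & 1]
    return state
```

Runs of zero levels, which are common after quantization, then cost no state computation at all.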
In another embodiment, the state transition is determined using the parity of the number of previous non-zero levels, rather than the parity of the previous level or of the sum of the previous levels. Let cnt denote a counter, and set cnt = 0 before encoding/decoding the current coded block. The value of a quantization level is available after quantizing the element (at encoding) or dequantizing the element (at encoding or decoding). If the level is not zero, the counter is incremented by 1. The encoder or decoder then continues to process the next element of the block in the scan order, which becomes the new current element. The state used to determine the quantizer for the new current element may be updated as follows:
state_i = stateTransTable[state_{i-1}][cnt & 1]    (8)
Alternatively, the counter cnt may count the number of previous levels equal to zero. The state used to determine the quantizer for the current element may then still be determined using equation (8). In these examples, stateTransTable may be the state transition table in Table 1 (with k_{i-1} replaced by cnt), the state transition table in Table 2 (with sum(i-1) replaced by cnt), the state transition table discussed below with reference to fig. 8, or any other state transition table suitable for dependency quantization. Since this embodiment considers information from multiple previous elements, selecting the quantizer according to this embodiment is more accurate or more appropriate than the existing method.
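The counter-based variant of equation (8) can be sketched as below, tracking cnt, the number of non-zero levels seen so far in the block (the transition table is again the Table 1 example):

```python
STATE_TRANS_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]  # Table 1

def states_from_nonzero_counter(levels):
    """For each element in scan order, record the state used to choose
    its quantizer, then update cnt and the state per equation (8)."""
    state, cnt, states = 0, 0, []
    for k in levels:
        states.append(state)   # state that selects this element's quantizer
        if k != 0:
            cnt += 1           # count non-zero levels processed so far
        state = STATE_TRANS_TABLE[state][cnt & 1]
    return states
```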
In the embodiments described above, the state transitions shown in Tables 1 to 3 are used as examples. Alternatively or additionally, different state transition tables may be used to determine the quantizer for the current element. Fig. 8 illustrates an example of a state transition diagram and an associated state transition table for dependency quantization, in accordance with some embodiments of the present disclosure. In fig. 8, v_{i-1} denotes the quantity whose parity is used to determine the state state_i of the current element at position i. The quantity v_{i-1} may be, for example, the quantization level k_{i-1} of the previous element at position i-1 in equations (1), (2), and (7), sum(i-1) in equations (3) and (4), c_{i-1} in equation (6), or cnt in equation (8). In other words, the state transition table shown in fig. 8 may implement stateTransTable in any of the state determination methods described above, or in other state determination methods that use previous element information. In one example, if the determined state of the current element at position i is 0 or 1, the quantizer Q0 is selected for the current element; otherwise, the quantizer Q1 is selected. By using different state transition tables, high coding efficiency can be achieved for video signals with different signal distributions or statistics.
Fig. 9 depicts an example of a process 900 for encoding blocks of video via dependency quantization in accordance with some embodiments of the present disclosure. One or more computing devices (e.g., a computing device implementing video encoder 100) implement the operations described in fig. 9 by executing appropriate program code (e.g., program code implementing quantization module 115). For purposes of illustration, the process 900 is described with reference to some examples depicted in the accompanying drawings. However, other implementations are also possible.
At block 902, the process 900 includes accessing an encoded block (or block) of a video signal. The block may be a portion of an image of the input video, such as the encoding unit 402 discussed in fig. 4, or any type of block that is processed as a unit by the video encoder when performing quantization.
At block 904, which includes 906-910, the process 900 includes processing each element of the block according to a predetermined scan order of the block (e.g., the scan order shown in fig. 5) to generate quantized elements. The elements of the coded block may be residuals after inter or intra prediction. An element may be a transform coefficient of the residual in the frequency domain, or a value of the residual in the pixel domain.
At block 906, process 900 includes retrieving the current element according to the scan order. If the current block has no elements quantized, then the current element will be the first element in the block according to the scan order. As discussed above with respect to fig. 5, in some cases, the video encoder performs quantization starting from the first non-zero element in the block according to the scan order. In these cases, the current element will be the first non-zero element in the block. If there are already quantized elements in the block, the current element will be the element following the last processed element in the scan order.
At block 908, the process 900 includes determining a quantizer for the current element based on elements preceding the current element. As described above, the quantizer may be selected according to the quantization state (or state) of the current element. The quantization state (current state) of the current element may be determined using any of the methods described above. For example, the current state may be determined using a state transition table and a value calculated based on the quantization levels of one or more elements preceding the current element. The state transition table may be any one of Tables 1 to 3, or the state transition table shown in fig. 8. The value may be calculated as the parity of the sum of the quantization levels of multiple elements preceding the current element (e.g., equations (3)-(6)). The value may also be calculated as the parity of the number of previous non-zero levels (or zero levels), as shown in equation (8). The value may also be the parity of the previous quantization level, with the state determined only when the previous quantization level is not zero, as shown in equation (7); if the previous quantization level is zero, the quantizer for the current element remains the same as the quantizer for the previous element. Various other ways of determining the quantizer for the current element may be used. At block 910, the process 900 includes quantizing the current element using the determined quantizer to generate a quantized element.
At block 912, process 900 includes encoding quantization elements (quantization levels) of the block for inclusion in a bitstream of the video. The encoding may include entropy encoding as described above with respect to fig. 1.
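As an end-to-end illustration of blocks 906-910, the following Python sketch greedily quantizes a scan-order list of coefficients with the baseline state machine of equation (1). This is a simplification under stated assumptions: a real encoder would instead search the state trellis with rate-distortion optimization, and the Q0/Q1 reconstruction grids here (even vs. odd multiples of Δ) are an assumed reading of fig. 6, which is not reproduced:

```python
STATE_TRANS_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]  # Table 1

def reconstruct(level, quantizer, delta):
    # Assumed grids: Q0 on even multiples of delta, Q1 on odd (plus 0)
    if quantizer == 0:
        return 2 * level * delta
    if level == 0:
        return 0.0
    return (2 * level - (1 if level > 0 else -1)) * delta

def dependent_quantize(coeffs, delta, max_level=8):
    state, levels = 0, []
    for c in coeffs:
        q = 0 if state in (0, 1) else 1            # quantizer from state
        # Greedy choice: nearest point on the active grid (bounded search)
        k = min(range(-max_level, max_level + 1),
                key=lambda m: abs(c - reconstruct(m, q, delta)))
        levels.append(k)
        state = STATE_TRANS_TABLE[state][k & 1]    # equation (1)
    return levels
```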
Fig. 10 depicts an example of a process 1000 for reconstructing a video block via dependency dequantization according to some embodiments of the present disclosure. One or more computing devices implement the operations described in fig. 10 by executing appropriate program code. For example, a computing device implementing the video encoder 100 may implement the operations described in fig. 10 by executing program code of the inverse quantization module 118. The computing device implementing the video decoder 200 may implement the operations described in fig. 10 by executing program code of the inverse quantization module 218. For purposes of illustration, process 1000 is described with reference to some examples depicted in the accompanying drawings. However, other implementations are also possible.
At block 1002, process 1000 includes accessing a quantization element (quantization level) of an encoded block of a video signal. The block may be a portion of an image of the input video, such as the encoding unit 402 discussed in fig. 4, or any type of block that is processed as a unit by a video encoder or decoder when performing dequantization. For an encoder, the quantization element may be obtained by quantizing an element of a block. For a decoder, quantization elements may be obtained by performing entropy decoding on binary strings parsed from an encoded bitstream of video.
At block 1004, which includes 1006-1010, process 1000 includes processing each quantized element of the block according to a predetermined scan order of the block (e.g., the scan order shown in FIG. 5) to generate dequantized elements. At block 1006, process 1000 includes retrieving a current quantized element according to a scan order. If the current block has no quantized elements dequantized, the current quantized element will be the first quantized element of the block according to the scan order. As described above with respect to fig. 5, in some cases, the video encoder performs quantization starting from the first non-zero element in the block according to the scan order. In these cases, the first quantization element will be the first non-zero quantization level in the block. If there are already dequantized elements in the block, the current quantized element will be the quantization level after the last dequantized element in the scan order.
At block 1008, the process 1000 includes determining a quantizer for the current quantization element based on the quantization levels preceding the current quantization element. As described above, the quantizer may be selected according to the quantization state of the current quantization element, which may be determined using any of the methods described above. For example, the current state may be determined using a state transition table and a value calculated based on the quantization levels of one or more elements preceding the current quantization element. The state transition table may be one of Tables 1 to 3, or the state transition table shown in fig. 8. The value may be calculated as the parity of the sum of the quantization levels of multiple elements preceding the current quantization element (e.g., equations (3)-(6)). The value may also be calculated as the parity of the number of previous non-zero levels (or zero levels), as shown in equation (8). The value may also be the parity of the previous quantization level, with the state determined only when the previous quantization level is not zero, as shown in equation (7); if the previous quantization level is zero, the quantizer for the current quantization element remains the same as the quantizer for the previous quantization element. Various other ways may be used to determine the quantizer for the current quantization element. At block 1010, the process 1000 includes dequantizing the current quantization element using the determined quantizer to generate a dequantized element.
At block 1012, process 1000 includes reconstructing the block in the pixel domain based on the dequantized elements of the block. The reconstruction may comprise an inverse transformation as described above with respect to fig. 1 and 2. The reconstructed block may also be used to perform intra or inter prediction on other blocks or images in the video by an encoder or decoder, as described above with respect to fig. 1 and 2. The reconstructed block may be further processed to generate a decoded block for display at the decoder side with other decoded blocks in the image.
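The decoder side of this loop (blocks 1006-1010) can be sketched in the same way. The state update uses only already-decoded levels, so encoder and decoder stay in lockstep without any quantizer signaling (the Q0/Q1 grids are again the assumed even/odd simplification of fig. 6):

```python
STATE_TRANS_TABLE = [[0, 2], [2, 0], [1, 3], [3, 1]]  # Table 1

def dependent_dequantize(levels, delta):
    """Dequantize scan-order levels, deriving each element's quantizer
    from the state driven by previously decoded levels (equation (1))."""
    state, out = 0, []
    for k in levels:
        if state in (0, 1):            # Q0: even multiples of delta
            out.append(2 * k * delta)
        elif k == 0:                   # Q1, zero level
            out.append(0.0)
        else:                          # Q1: odd multiples of delta
            out.append((2 * k - (1 if k > 0 else -1)) * delta)
        state = STATE_TRANS_TABLE[state][k & 1]
    return out
```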
It should be noted that multiple state transition tables, such as those shown in Table 1 and fig. 8, may be used during the encoding and decoding of a video. For example, one state transition table may be used for some blocks in the video, and another state transition table may be used for other blocks. Likewise, more than two state transition tables may be used in the encoding and decoding of a video signal.
It should be further appreciated that while the above description is directed to the application of dependent quantization to video coding, the same techniques may be applied to image coding. For example, when compressing an image, the image may be divided into blocks, and the elements (with or without transforms) of each block may be quantized as described above. The decompression of the image may be performed by dequantizing the elements in each block as described above.
Computing system examples for implementing dependency quantization for video coding
Any suitable computing system may be used to perform the operations described herein. For example, fig. 11 depicts an example of a computing device 1100 that may implement the video encoder 100 of fig. 1 or the video decoder 200 of fig. 2. In some embodiments, the computing device 1100 may include a processor 1112 communicatively coupled to a memory 1114; the processor 1112 executes computer-executable program code and/or accesses information stored in the memory 1114. The processor 1112 may include a microprocessor, an application-specific integrated circuit (ASIC), a state machine, or another processing device. The processor 1112 may include any number of processing devices, including a single one. Such a processor may include, or may be in communication with, a computer-readable medium storing instructions that, when executed by the processor 1112, cause the processor to perform the operations described herein.
The memory 1114 may include any suitable non-transitory computer-readable medium. The computer-readable medium may include any electronic, optical, magnetic, or other storage device capable of providing computer-readable instructions or other program code to a processor. Non-limiting examples of computer-readable media include a magnetic disk, a memory chip, ROM, RAM, an ASIC, a configured processor, optical storage, magnetic tape or other magnetic storage, or any other medium from which a computer processor can read instructions. The instructions may include processor-specific instructions generated by a compiler and/or an interpreter from code written in any suitable computer programming language, including, for example, C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, and ActionScript.
Computing device 1100 may also include a bus 1116. Bus 1116 may communicatively couple one or more components of computing device 1100. Computing device 1100 may also include a number of external or internal devices, such as input or output devices. For example, computing device 1100 is shown with an input/output (I/O) interface 1118, which interface 1118 may receive input from one or more input devices 1120 or provide output to one or more output devices 1122. One or more input devices 1120 and one or more output devices 1122 may be communicatively coupled to I/O interface 1118. The communicative coupling may be achieved in any suitable manner (e.g., connection through a printed circuit board, connection through a cable, communication through wireless transmission, etc.). Non-limiting examples of input devices 1120 include a touch screen (e.g., one or more cameras for imaging touch areas or pressure sensors for detecting pressure changes caused by touches), a mouse, a keyboard, or any other device that may be used to generate input events in response to physical actions of a user on a computing device. Non-limiting examples of output devices 1122 include an LCD screen, an external monitor, speakers, or any other device that may be used to display or otherwise present output generated by a computing device.
Computing device 1100 may execute program code that configures processor 1112 to perform one or more operations described above with respect to fig. 1-10. The program code may include the video encoder 100 or the video decoder 200. The program code may reside in the memory 1114 or any suitable computer readable medium and may be executed by the processor 1112 or any other suitable processor.
Computing device 1100 can also include at least one network interface device 1124. The network interface device 1124 can include any device or group of devices suitable for establishing a wired or wireless data connection to the one or more data networks 1128. Non-limiting examples of network interface device 1124 include an ethernet network adapter, modem, or the like. Computing device 1100 can send messages as electronic or optical signals through network interface device 1124.
General considerations
The present application has been described in considerable detail to provide a thorough understanding of the claimed subject matter. However, it will be understood by those skilled in the art that the claimed subject matter may be practiced without these specific details. In other instances, methods, devices, or systems known by those skilled in the art have not been described in detail so as not to obscure the claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout the present specification discussions utilizing terms such as "processing," "computing," "calculating," "determining," or "identifying" or the like, refer to the action and processes of a computing device, such as one or more computers or similar electronic computing devices or devices, that manipulate or transform data represented as physical, electronic, or magnetic quantities within the computing platform's memory, registers, or other information storage device, transmission device, or display device.
The one or more systems discussed herein are not limited to any particular hardware architecture or configuration. The computing device may include any suitable arrangement of components that provide results conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computer systems that access storage software that programs or configures a computing system from a general-purpose computing device to a special-purpose computing device that implements one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combination of languages may be used to implement the teachings contained herein in software that is used when programming or configuring a computing device.
Embodiments of the disclosed methods may be performed in the operation of such computing devices. The order of the blocks presented in the above examples may be varied, e.g., blocks may be reordered, combined, and/or broken into sub-blocks. Some blocks or processes may be performed in parallel.
The use of "adapted" or "configured" by the present application is meant to be open and inclusive and does not exclude devices adapted or configured to perform additional tasks or steps. Furthermore, the use of "based on" is intended to mean open and inclusive in that a process, step, calculation, or other action "based on" one or more of the recited conditions or values may in practice be based on additional conditions or values other than the recited conditions or values. Headings, lists, and numbers included herein are for ease of explanation only and are not meant as limitations.
While specific embodiments of the present subject matter have been described in detail, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example and not limitation, and that such modifications, variations and/or additions to the subject matter, including those that are obvious to those of ordinary skill in the art, are not excluded.

Claims (20)

1. A method of reconstructing a block of video, the method comprising:
accessing a plurality of quantization elements associated with the block;
processing the plurality of quantized elements according to the order of the blocks to generate corresponding dequantized elements, the processing comprising:
obtaining a current quantization element of the block from the plurality of quantization elements;
determining a quantizer for the current quantization element based on at least two quantization elements preceding the current quantization element or based on a comparison of an element immediately preceding the current quantization element with zero; and
dequantizing the current quantized element based on the quantizer to generate a dequantized element; and
reconstructing the block based on the dequantized elements.
2. The method of claim 1, wherein the block comprises a coding unit.
3. The method of claim 1, wherein the quantization element associated with the block comprises a quantized pixel of the block or a quantized transform coefficient of the block.
4. The method of claim 1, wherein determining a quantizer for the current quantization element based on at least two quantization elements preceding the current quantization element comprises:
determining the quantizer for the current quantization element based on a parity of a sum of the at least two quantization elements preceding the current quantization element according to the order.
5. The method of claim 4, wherein the at least two quantization elements preceding the current quantization element include all quantization elements preceding the current quantization element in the current block according to the order.
6. The method of claim 1, wherein determining a quantizer for the current quantization element based on a comparison of an element immediately preceding the current quantization element with zero comprises:
in response to determining that the element immediately preceding the current quantization element is zero, determining that the quantizer for the current quantization element is a quantizer used for the element immediately preceding the current quantization element; and
in response to determining that the element immediately preceding the current quantization element is not zero, determining the quantizer for the current quantization element according to a state transition table.
7. The method of claim 1, wherein determining a quantizer for the current quantization element based on at least two quantization elements preceding the current quantization element comprises:
determining the quantizer for the current quantization element based on a parity of a number of non-zero quantization elements or zero quantization elements in the block that precede the current quantization element according to the order.
8. The method of claim 1, wherein determining a quantizer for the current quantization element comprises:
determining a current state for quantization based on a state transition table and a previous state used for quantizing an element immediately preceding the current quantization element,
wherein v is a value determined based on the at least two quantization elements preceding the current quantization element; and
determining a quantizer for the current quantization element from the current state for quantization.
9. A non-transitory computer readable medium storing program code executable by one or more processing devices to perform operations comprising:
accessing a plurality of quantization elements associated with blocks of video;
processing the plurality of quantized elements according to the order of the blocks to generate corresponding dequantized elements, the processing comprising:
obtaining a current quantization element of the block from the plurality of quantization elements;
determining a current state for quantization based on a state transition table and a previous state used for quantizing an element immediately preceding the current quantization element,
wherein v is a value determined based on at least one quantization element preceding the current quantization element;
determining a quantizer of the current quantization element from the current state for quantization; and
dequantizing the current quantized element based on the quantizer to generate a dequantized element; and
reconstructing the block based on the dequantized elements.
10. The non-transitory computer readable medium of claim 9, wherein the processing further comprises:
prior to determining the current state for quantization,
determining whether the quantization element immediately preceding the current quantization element is zero; and
in response to determining that the quantization element immediately preceding the current quantization element is zero, determining that a quantizer for the current quantization element is a quantizer of a quantization element immediately preceding the current quantization element,
wherein the current state for quantization is determined in response to determining that a quantization element immediately preceding the current quantization element is not zero.
11. The non-transitory computer-readable medium of claim 9, wherein determining a quantizer for the current quantized element based on at least one quantized element preceding the current quantized element comprises:
determining a quantizer for the current quantization element based on a parity of a sum of at least two quantization elements preceding the current quantization element according to the order.
12. The non-transitory computer-readable medium of claim 9, wherein determining a quantizer for the current quantized element based on at least one quantized element preceding the current quantized element comprises:
determining a quantizer for the current quantization element based on a parity of a number of non-zero quantization elements or zero quantization elements in the block that precede the current quantization element according to the order.
13. The non-transitory computer readable medium of claim 9, wherein the block comprises a coding unit.
14. The non-transitory computer-readable medium of claim 9, wherein the quantization element associated with the block comprises a quantized pixel of the block or a quantized transform coefficient of the block.
15. A system, comprising:
a processing device; and
a non-transitory computer readable medium communicatively coupled to the processing device, wherein the processing device is configured to execute program code stored in the non-transitory computer readable medium and thereby perform operations comprising:
accessing a plurality of elements associated with a block of a video;
processing the plurality of elements according to an order of the blocks, the processing comprising:
obtaining a current element of the block from the plurality of elements;
determining a quantizer for the current element based on at least two previous elements preceding the current element or based on a comparison of an element immediately preceding the current element with zero; and
quantizing the current element based on the quantizer to generate a quantized element; and
encoding the quantized elements into a bitstream representing the video.
16. The system of claim 15, wherein determining a quantizer for the current element based on at least two elements preceding the current element comprises:
determining a quantizer for the current element based on a parity of a sum of the at least two elements preceding the current element according to the order.
17. The system of claim 16, wherein the at least two elements preceding the current element include all elements in a current block preceding the current element according to the order.
18. The system of claim 15, wherein determining a quantizer for the current element based on a comparison of an element immediately preceding the current element with zero comprises:
in response to determining that the element immediately preceding the current element is zero, determining that the quantizer for the current element is a quantizer used for the element immediately preceding the current element; and
in response to determining that the element immediately preceding the current element is not zero, determining the quantizer for the current element from a state transition table.
19. The system of claim 15, wherein determining a quantizer for the current element based on at least two elements preceding the current element comprises:
determining a quantizer for the current element based on a parity of a number of non-zero elements in the block preceding the current element according to the order.
20. The system of claim 15, wherein determining a quantizer for the current element comprises:
determining a current state for quantization based on a state transition table and a previous state used for quantizing an element immediately preceding the current element,
wherein v is a value determined based on the at least two elements preceding the current element; and
determining a quantizer for the current element based on the current state for quantization.
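Outside the claims language, the state-driven quantizer selection recited above can be sketched in code. The four-state transition table and the Q0/Q1 reconstruction rules below are borrowed from VVC-style dependent quantization purely for illustration; the claims do not fix a particular table, and all function and variable names are hypothetical.

```python
# Illustrative sketch of dependent (de)quantization with a quantizer state
# machine. STATE_TRANS is an assumed VVC-style table, indexed by the current
# state and a parity bit v; states 0-1 select quantizer Q0, states 2-3 Q1.
STATE_TRANS = [[0, 2], [2, 0], [1, 3], [3, 1]]


def dequantize_block(levels, step):
    """Decoder side (cf. claims 1-8): process quantization elements in
    coding order; if the immediately preceding element is zero, keep the
    previous quantizer, otherwise advance the state via the table."""
    state = 0
    recon = []
    for i, level in enumerate(levels):
        if i > 0 and levels[i - 1] != 0:
            v = levels[i - 1] & 1          # parity of the preceding element
            state = STATE_TRANS[state][v]
        if state < 2:                      # Q0: reconstruction points 2k*step
            recon.append(2 * level * step)
        else:                              # Q1: points (2k - sign(k))*step
            sign = (level > 0) - (level < 0)
            recon.append((2 * level - sign) * step)
    return recon


def quantize_block(residuals, step):
    """Encoder side (cf. claims 15-17 variant): choose the quantizer for
    each element from the parity of the sum of the levels produced so far."""
    levels, running_sum = [], 0
    for x in residuals:
        if running_sum % 2 == 0:           # even sum -> Q0
            level = round(x / (2 * step))
        else:                              # odd sum -> Q1
            sign = (x > 0) - (x < 0)
            level = round((x / step + sign) / 2)
        levels.append(level)
        running_sum += level
    return levels
```

Note that a real encoder would pick the levels by a rate-distortion trellis search over the quantizer states rather than by independent rounding; the sketch only shows the state bookkeeping that the claims describe.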
CN202180093746.XA 2021-02-19 2021-09-08 State transition for dependent quantization in video coding Pending CN116982262A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202163151535P 2021-02-19 2021-02-19
US63/151,535 2021-02-19
PCT/US2021/049474 WO2021263251A1 (en) 2021-02-19 2021-09-08 State transition for dependent quantization in video coding

Publications (1)

Publication Number Publication Date
CN116982262A true CN116982262A (en) 2023-10-31

Family

ID=79281977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180093746.XA Pending CN116982262A (en) 2021-02-19 2021-09-08 State transition for dependent quantization in video coding

Country Status (2)

Country Link
CN (1) CN116982262A (en)
WO (1) WO2021263251A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116848842A * 2021-02-22 2023-10-03 Innopeak Technology, Inc. Dependent quantization and residual coding method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101668093B1 (en) * 2010-06-17 2016-10-21 삼성전자주식회사 Method and Apparatus for encoding and decoding data
WO2019185769A1 (en) * 2018-03-29 2019-10-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Dependent quantization

Also Published As

Publication number Publication date
WO2021263251A1 (en) 2021-12-30

Similar Documents

Publication Publication Date Title
CN110199524B (en) Method implemented in computing device
EP4113997A1 (en) Video decoding method, video encoding method, and related device
CN111742552B (en) Method and device for loop filtering
CN112543337B (en) Video decoding method, device, computer readable medium and electronic equipment
KR20030086076A (en) Filtering method for removing block artifacts and/or ringing noise and apparatus therefor
CN112544081A (en) Method and device for loop filtering
CN113068026B (en) Coding prediction method, device and computer storage medium
CN112995671B (en) Video encoding and decoding method and device, computer readable medium and electronic equipment
WO2022116246A1 (en) Inter-frame prediction method, video encoding and decoding method, apparatus, and medium
CN116982262A (en) State transition for dependent quantization in video coding
JP7483029B2 (en) VIDEO DECODING METHOD, VIDEO ENCODING METHOD, DEVICE, MEDIUM, AND ELECTRONIC APPARATUS
WO2022213122A1 (en) State transition for trellis quantization in video coding
CN114079772B (en) Video decoding method and device, computer readable medium and electronic equipment
CN116965028A (en) Residual level binarization for video coding
WO2023130899A1 (en) Loop filtering method, video encoding/decoding method and apparatus, medium, and electronic device
WO2022174637A1 (en) Video encoding and decoding method, video encoding and decoding apparatus, computer-readable medium and electronic device
WO2023212684A1 (en) Subblock coding inference in video coding
CN117981306A (en) Independent history-based rice parameter derivation for video coding
CN118020294A (en) History-based RICE parameter derivation for video coding
WO2022217245A1 (en) Remaining level binarization for video coding
CN117837148A (en) History-based rice coding parameter derivation for video coding
CN115209141A (en) Video encoding and decoding method and device, computer readable medium and electronic equipment
CN117981323A (en) Video encoding using alternative neural network-based encoding tools
WO2023200933A1 (en) Cross-component model adjustment for video coding
CN115834882A (en) Intra-frame prediction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240410

Address after: No. 18 Wusha Haibin Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Country or region after: China

Address before: Room 110, 2479 East Bayshore Road, Palo Alto, California, USA

Applicant before: Innopeak Technology, Inc.

Country or region before: U.S.A.