US20140328395A1 - Method and apparatus for dequantization of transformed coefficients - Google Patents
- Publication number
- US20140328395A1 (application No. US14/363,791)
- Authority
- US
- United States
- Prior art keywords
- clipping
- quantization
- quantization level
- decoded
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
- H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3059—Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
Abstract
A method and apparatus for de-quantizing a transform coefficient from a quantization level are disclosed. Embodiments according to the present invention avoid overflow of the de-quantized transform coefficient by clipping the quantization level adaptively before reconstructing the transform coefficient. In one embodiment, the method comprises receiving a decoded quantization level for the transform coefficient of a transform unit, wherein the decoded quantization level is decoded by an entropy decoder or is being processed by the entropy decoder. The clipping range is determined and then the decoded quantization level is clipped to the clipping range to generate a clipping-processed quantization level. A de-quantized transform coefficient can be generated using the clipping-processed quantization level. In another embodiment, the decoded quantization level is always clipped to [−N, M], where M and N are positive integers.
Description
- The present invention claims priority to PCT Patent Application, Serial No. PCT/CN2011/084083, filed on Dec. 15, 2011, entitled “Method of Clipping Transformed Coefficients before De-Quantization”. The PCT Patent Application is hereby incorporated by reference in its entirety.
- The present invention relates to video coding. In particular, the present invention relates to dequantization of transform coefficients for High Efficiency Video Coding (HEVC).
- High-Efficiency Video Coding (HEVC) is a new international video coding standard that is being developed by the Joint Collaborative Team on Video Coding (JCT-VC). HEVC is based on the hybrid block-based motion-compensated DCT-like transform coding architecture. The basic unit for compression, termed Coding Unit (CU), is a 2N×2N square block, and each CU can be recursively split into four smaller CUs until a predefined minimum size is reached. Each CU contains one or several variable-block-sized Prediction Unit(s) (PUs) and Transform Unit(s) (TUs). For each PU, either intra-picture or inter-picture prediction is selected. Each TU is processed by a spatial block transform and the transform coefficients for the TU are then quantized. The smallest TU size allowed for HEVC is 4×4.
- The quantization of transform coefficients plays an important role in bitrate and quality control in video coding. A set of quantization steps is used to quantize the transform coefficient into a quantization level. A larger quantization step size will result in lower bitrate and lower quality. On the other hand, a smaller quantization step size will result in higher bitrate and higher quality. A straightforward implementation of the quantization process would involve a division operation, which is more complex in hardware-based implementations and consumes more computational resources in software-based implementations. Accordingly, various techniques have been developed in the field for division-free quantization. In HEVC Test Model Revision 5 (HM-5.0), the quantization process is described as follows. A set of parameters is defined:
- B=bit width or bit depth of the input source video,
- DB=B−8,
- N=transform size of the transform unit (TU),
- M=log2(N),
- Q[x]=f(x), where f(x)={26214, 23302, 20560, 18396, 16384, 14564}, x=0, …, 5, and
- IQ[x]=g(x), where g(x)={40, 45, 51, 57, 64, 72}, x=0, …, 5.
- Q[x] and IQ[x] are called quantization step and dequantization step respectively. The quantization process is performed according to:

qlevel = (coeff*Q[QP%6] + offset) >> (21 + QP/6 − M − DB), where offset = 1 << (20 + QP/6 − M − DB),   (1)

where "%" is the modulo operator. The dequantization process is performed according to:

coeffQ = ((qlevel*IQ[QP%6] << (QP/6)) + offset) >> (M − 1 + DB), where offset = 1 << (M − 2 + DB).   (2)

- The variable qlevel in equations (1) and (2) represents the quantization level of a transform coefficient. The variable coeffQ in equation (2) represents the dequantized transform coefficient. IQ[x] indicates the de-quantization step (also called de-quantization step size) and QP represents the quantization parameter. "QP/6" in equations (1) and (2) represents the integer part of QP divided by 6. As shown in equations (1) and (2), the quantization and dequantization processes are implemented by integer multiplication followed by arithmetic shift(s). An offset value is added in both equations (1) and (2) to implement integer conversion using rounding.
- The bit depth of the quantization level is 16 bits (including 1 bit for sign) for HEVC. In other words, the quantization level is represented in 2 bytes or a 16-bit word. Since IQ[x]<=72 and QP<=51, the dynamic range of IQ[x] is 7 bits and the "<<(QP/6)" operation performs a left arithmetic shift of up to 8 bits. Accordingly, the dynamic range of the de-quantized transform coefficient coeffQ, i.e., "(qlevel*IQ[QP%6])<<(QP/6)", is 31 (16+7+8) bits. Therefore, the de-quantization process as described by equation (2) will never cause overflow since the de-quantization process uses 32-bit data representation.
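- For illustration only, the division-free operations of equations (1) and (2) can be sketched in C as below. This is a simplified model written for this description rather than code from HM-5.0; it assumes M = log2(N) and DB = B − 8 as defined above and, like the equations, relies on arithmetic right shifts for negative intermediate values:

```c
#include <stdint.h>

static const int32_t Q[6]  = {26214, 23302, 20560, 18396, 16384, 14564};
static const int32_t IQ[6] = {40, 45, 51, 57, 64, 72};

/* Equation (1): quantize one transform coefficient into a quantization level. */
int32_t quantize(int32_t coeff, int qp, int m, int db)
{
    int shift = 21 + qp / 6 - m - db;
    int64_t offset = (int64_t)1 << (shift - 1);        /* rounding offset of eq. (1) */
    return (int32_t)(((int64_t)coeff * Q[qp % 6] + offset) >> shift);
}

/* Equation (2): de-quantize a level when no quantization matrix is used.
 * The "<<(QP/6)" of eq. (2) is written as a multiplication so the sketch never
 * shifts a negative value; the intermediate product needs at most 31 (16+7+8)
 * bits and therefore also fits 32-bit arithmetic. */
int32_t dequantize(int32_t qlevel, int qp, int m, int db)
{
    int shift = m - 1 + db;
    int64_t offset = (int64_t)1 << (shift - 1);        /* rounding offset of eq. (2) */
    int64_t v = (int64_t)qlevel * IQ[qp % 6] * ((int64_t)1 << (qp / 6));
    return (int32_t)((v + offset) >> shift);
}
```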
- However, when a quantization matrix is introduced, the de-quantization process is modified as shown in equations (3) through (5):

iShift = M − 1 + DB + 4.   (3)

if (iShift > QP/6),

coeffQ[i][j] = (qlevel[i][j]*W[i][j]*IQ[QP%6] + offset) >> (iShift − QP/6), where offset = 1 << (iShift − QP/6 − 1), with i = 0, …, nW−1, j = 0, …, nH−1,   (4)

else

coeffQ[i][j] = (qlevel[i][j]*W[i][j]*IQ[QP%6]) << (QP/6 − iShift),   (5)

- wherein "[i][j]" indicates the position (also called indices) of the transformed coefficient within a transform unit, W denotes the quantization matrix, and nW and nH are the width and height of the transform. If n represents the dynamic range of a quantization level for a transform coefficient, the dynamic range n has to satisfy the following condition to avoid overflow:

n + w + iq + QP/6 − M − DB − 3 ≦ 32,   (6)

- where w is the dynamic range of the quantization matrix W, iq is the dynamic range of IQ[x], and the bit depth of the de-quantized or reconstructed transform coefficient is 32 bits.
- If the dynamic range of the quantization matrix W is 8 bits, the dynamic range of the reconstructed transform coefficient as described by equations (3) through (5) becomes 34 (16+8+7+3) bits for QP=51, M=2 and DB=0. When the de-quantization process uses 32-bit data representation, the reconstructed transform coefficient according to equations (3) through (5) may overflow and cause system failure. Therefore, it is desirable to develop a scheme for transform coefficient reconstruction to avoid possible overflow.
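- To make the headroom problem concrete, the matrix-based de-quantization of equations (3) through (5) can be sketched as follows. The sketch is illustrative only and is not taken from any reference implementation; the 64-bit intermediate merely exposes the value that a 32-bit implementation would have to hold, which, with a 16-bit quantization level, an 8-bit W[i][j] and IQ[x] up to 72, can require up to 34 bits:

```c
#include <stdint.h>

static const int32_t IQ[6] = {40, 45, 51, 57, 64, 72};

/* Equations (3)-(5): de-quantize one coefficient with quantization matrix W.
 * The result is returned in 64 bits purely so the headroom can be inspected;
 * a 32-bit implementation would have to hold this same value. */
int64_t dequantize_with_matrix(int32_t qlevel, int32_t w, int qp, int m, int db)
{
    int iShift = m - 1 + db + 4;                             /* eq. (3) */
    int64_t prod = (int64_t)qlevel * w * IQ[qp % 6];
    if (iShift > qp / 6) {                                   /* eq. (4) */
        int64_t offset = (int64_t)1 << (iShift - qp / 6 - 1);
        return (prod + offset) >> (iShift - qp / 6);
    }
    return prod * ((int64_t)1 << (qp / 6 - iShift));         /* eq. (5) */
}
```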
- A method and apparatus for de-quantizing a transform coefficient from a quantization level are disclosed. Embodiments according to the present invention avoid overflow of the de-quantized transform coefficient by clipping the quantization level adaptively before reconstructing the transform coefficient. In one embodiment of the present invention, the method comprises receiving the quantization level of the transform coefficient associated with a transform unit; clipping the quantization level to generate a clipping-processed quantization level; and generating a de-quantized transform coefficient using the clipping-processed quantization level. The quantization level can be clipped to a first range under a first clipping condition and a second range under a second clipping condition. The first range may correspond to a fixed range related to quantization-level bit-depth and the second range may be related to dynamic range of the quantization level.
- One aspect of the present invention addresses determining clipping condition for the decoded quantization level. In one embodiment, the clipping condition is determined by comparing a first weighted value with a threshold, wherein the first weighted value corresponds to a first linear function of the quantization matrix, the quantization parameter, the de-quantization step, the video source bit-depth, and the transform size of the transform unit. In another embodiment, the clipping condition determination comprises comparing (20+M+DB−QP/6) with a threshold, where M is the transform size, DB is equal to B−8 and B is the video source bit-depth, and QP is the quantization parameter. In yet another embodiment, the clipping condition determination comprises comparing QP with a threshold.
- In another embodiment according to the present invention, overflow of a de-quantized transform coefficient is avoided by clipping the decoded quantization level adaptively, where the clipping may take place either after entropy decoding or during entropy decoding. The method comprises receiving a decoded quantization level for the transform coefficient of a transform unit; determining clipping range for the decoded quantization level; clipping the decoded quantization level to the clipping range to generate a clipping-processed quantization level; and generating a de-quantized transform coefficient using the clipping-processed quantization level. The method may further comprise determining clipping condition for the decoded quantization level, where the clipping condition is related to the quantization matrix, the quantization parameter, de-quantization step, the video source bit-depth, the transform size of the transform unit, the value of the decoded quantization level, predefined values or any combination thereof. Similarly, the clipping range is related to the quantization matrix, the quantization parameter, the de-quantization step, the video source bit-depth, the transform size of the transform unit, the value of the decoded quantization level, predefined values or any combination thereof. In another embodiment, the decoded quantization level is always clipped to [−N, M], where M and N are positive integers.
- FIG. 1 illustrates an exemplary flow chart for a de-quantization process incorporating an embodiment of the present invention to avoid overflow.
- FIG. 2 illustrates an exemplary flow chart for a de-quantization process incorporating an embodiment of the present invention to avoid overflow.
- FIG. 3 illustrates an exemplary flow chart for a de-quantization process incorporating an embodiment of the present invention to avoid overflow.
- FIG. 4 illustrates an exemplary flow chart for a de-quantization process incorporating an embodiment of the present invention to avoid overflow.
- As mentioned before, the coefficient de-quantization (or reconstruction) process as described above may suffer from overflow when a quantization matrix is incorporated. To avoid potential overflow during transform coefficient reconstruction, embodiments according to the present invention restrict the quantization level of the transform coefficient before performing the de-quantization process. The dynamic range of the quantization level of the transform coefficient is represented by an integer n. In the example as described in equations (3) to (5), the reconstructed transform coefficient shall not exceed 32 bits if 32-bit data representation is used for the de-quantized (or reconstructed) transform coefficients. Accordingly, n has to satisfy the following constraint:
n + 8 + 7 + (QP/6 − (M − 1 + DB + 4)) ≦ 32,   (7)

which leads to

n ≦ 20 + M + DB − QP/6,   (8)

- where M represents the transform size, DB is equal to B−8, B is the video source bit-depth, and QP is the quantization parameter. For example, M=2 represents a transform size of 4×4, M=3 represents a transform size of 8×8, and M=5 represents a transform size of 32×32. In this case, the quantization level, qlevel, of the transform coefficient shall be clipped according to equation (9):

qlevel = max(−2^(n−1), min(2^(n−1) − 1, qlevel)).   (9)

- To avoid the overflow, the dynamic range of the quantization level of the transform coefficient has to be constrained according to equation (8). According to equation (8), n has to be less than or equal to (20+M+DB−QP/6) to avoid overflow. However, since the quantization level is represented by 16 bits in this example (i.e., the bit depth of the quantization level is 16), n should not exceed 16. Accordingly, if (20+M+DB−QP/6) is greater than 16, the quantization level of the transform coefficient has to be clipped to a range that does not exceed 16-bit data representation. The following pseudo code (pseudo code A) illustrates an example of clipping the quantization level, qlevel, of the transform coefficient according to an embodiment of the present invention in order to avoid data overflow during transform coefficient reconstruction:
Pseudo code A:

    if (20 + M + DB − QP/6 >= 16)
        qlevel = max(−2^15, min(2^15 − 1, qlevel));
    else
        qlevel = max(−2^(20+M+DB−QP/6−1), min(2^(20+M+DB−QP/6−1) − 1, qlevel));

- As shown in pseudo code A, two clipping ranges are used for two different clipping conditions. The first clipping condition corresponds to "20+M+B−8−QP/6 ≧ 16" and the second clipping condition corresponds to "20+M+B−8−QP/6 < 16". The first clipping range corresponds to a fixed clipping range, i.e., [−2^15, 2^15 − 1], and the second clipping range corresponds to [−2^(20+M+DB−QP/6−1), 2^(20+M+DB−QP/6−1) − 1]. While the test condition "if (20+M+DB−QP/6 ≧ 16)" is used in the exemplary pseudo code A shown above, other test conditions may also be used. For example, the test condition may use the bit depth B of the video source instead of the parameter DB. The test condition then becomes "if (20+M+B−8−QP/6 >= 16)", i.e., "if (12+M+B−QP/6 >= 16)". The corresponding pseudo code (pseudo code B) becomes:
Pseudo code B:

    if (12 + M + B − QP/6 >= 16)
        qlevel = max(−2^15, min(2^15 − 1, qlevel));
    else
        qlevel = max(−2^(12+M+B−QP/6−1), min(2^(12+M+B−QP/6−1) − 1, qlevel));

- If the bit-depth of source video is 8 bits (DB=0) and the transform size is 4×4, equation (8) can be simplified to:
n ≦ 22 − QP/6.

- Therefore, the test condition "if (12+M+B−QP/6 ≧ 16)" becomes "if (22−QP/6 ≧ 16)" in this case. The test condition can be further simplified to "if (QP <= 36)". Consequently, the clipping process for the quantization level of the transform coefficient according to another embodiment of the present invention only depends on QP for video sources with a fixed dynamic range. Exemplary pseudo code (pseudo code C) is shown below:

Pseudo code C:

    if (QP <= 36)
        qlevel = max(−2^15, min(2^15 − 1, qlevel));
    else
        qlevel = max(−2^(21−QP/6), min(2^(21−QP/6) − 1, qlevel));

- When the bit-depth of the source video is 10 bits or higher, i.e., DB ≧ 2, the condition in (7) is always met. In this case, 16-bit clipping, namely qlevel = max(−2^15, min(2^15 − 1, qlevel)) or qlevel = max(−32768, min(32767, qlevel)), is always used unconditionally. While the clipping is performed unconditionally for bit-depths of 10 bits or higher, the quantization level of the transform coefficient may also be clipped unconditionally to a desired bit-depth regardless of the bit-depth of the source video. The desired bit-depth can be 8, 16 or 32 bits and the corresponding clipping ranges can be [−128, 127], [−32768, 32767] and [−2147483648, 2147483647].
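- For illustration, the adaptive clipping of pseudo code A can be collected into a single helper. The following is one possible C rendering written for this description (the function name is illustrative, not from any reference software), assuming a 16-bit quantization level and the 7-bit de-quantization step used above:

```c
#include <stdint.h>

/* Adaptive clipping of the quantization level, following pseudo code A:
 * the allowed dynamic range n from eq. (8) is used, but never more than the
 * 16 bits in which the level itself is stored. */
int32_t clip_qlevel_adaptive(int32_t qlevel, int qp, int m, int db)
{
    int n = 20 + m + db - qp / 6;        /* eq. (8) */
    if (n > 16)
        n = 16;                          /* first clipping condition: fixed 16-bit range */
    int32_t hi = (1 << (n - 1)) - 1;
    int32_t lo = -(1 << (n - 1));
    if (qlevel > hi) return hi;
    if (qlevel < lo) return lo;
    return qlevel;
}
```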
- Three exemplary pseudo codes incorporating embodiments of the present invention are described above. These pseudo codes are intended to illustrate exemplary processes to avoid data overflow during transform coefficient reconstruction. A person skilled in the art may practice the present invention by using other test conditions. For example, instead of testing "if (QP <= 36)", the test condition "if (QP/6 <= 6)" may be used. In another example, the clipping operation may be implemented using another function, such as a clipping function clip(x, y, z), where the variable z is clipped between x and y (x < y). The clipping operation can be implemented using a comparator. For example, clip(x, y, z) can be implemented by comparing z with x and comparing z with y. If z is smaller than x, z is clipped to x and the operation is completed. If z is not smaller than x, z is then compared with y. If z is larger than y, z is clipped to y and the operation is completed. Otherwise, z does not need clipping. The clipping operations for pseudo code C can be expressed as:
qlevel = clip(−2^15, 2^15 − 1, qlevel), and

qlevel = clip(−2^(21−QP/6), 2^(21−QP/6) − 1, qlevel).

- In the above examples, specific parameters are used to illustrate the dequantization process incorporating embodiments of the present invention to avoid data overflow. The specific parameters used shall not be construed as limitations to the present invention. A person skilled in the art may modify the testing for clipping condition based on the parameters provided. For example, if de-quantization step has 6-bit dynamic range instead of 7-bit dynamic range, the constraint of equation (8) becomes n ≦ 19+M+DB−QP/6. The corresponding clipping condition testing in pseudo code A becomes "if (19+M+DB−QP/6 >= 16)".
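- A minimal sketch of the comparator-based clip(x, y, z) described above, together with its use for the two ranges of pseudo code C, might look as follows (the helper names are illustrative only):

```c
#include <stdint.h>

/* clip(x, y, z): clip z to the range [x, y], assuming x < y. */
static int32_t clip(int32_t x, int32_t y, int32_t z)
{
    if (z < x) return x;     /* first comparison: below the lower bound */
    if (z > y) return y;     /* second comparison: above the upper bound */
    return z;                /* no clipping needed */
}

/* Pseudo code C expressed with clip(); valid for 8-bit source video and a
 * 4x4 transform, as derived above. */
int32_t clip_qlevel_qp_only(int32_t qlevel, int qp)
{
    if (qp <= 36)
        return clip(-(1 << 15), (1 << 15) - 1, qlevel);
    return clip(-(1 << (21 - qp / 6)), (1 << (21 - qp / 6)) - 1, qlevel);
}
```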
- To avoid potential overflow of the de-quantized coefficients, embodiments according to the present invention restrict the quantization level of the transform coefficient. The quantization level can be clipped after the quantization level is decoded by the entropy decoder or during the entropy decoding of the quantization level at the decoder side. The quantization level is clipped to a clipping range according to a clipping condition. The clipping condition and clipping range depend on the de-quantization matrix, the de-quantization parameter, the video source bit-depth, the transform size of the transform unit, the value of the decoded quantization level, predefined values, or any combination thereof. The quantization level of the transform coefficient can be clipped unconditionally to a desired bit-depth regardless of the bit-depth of the source video. The desired bit-depth can be 8, 16 or 32 bits and the corresponding clipping ranges can be [−128, 127], [−32768, 32767] and [−2147483648, 2147483647].
- FIG. 1 illustrates the flow chart for an exemplary system incorporating an embodiment of the present invention. The quantization level for the transform coefficient associated with a transform unit is received in step 110. The quantization level is generated by quantizing the transform coefficient according to a quantization matrix and quantization parameter. In step 120, clipping condition is determined based on the quantization matrix, the quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, or any combination thereof. The de-quantization step is dependent on the quantization parameter. The quantization level is then clipped according to the clipping condition to generate a clipping-processed quantization level as shown in step 130. A de-quantized transform coefficient is then generated using the clipping-processed quantization level as shown in step 140.
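- In code, the sequence of steps 110 through 140 might look like the following sketch, which simply chains the illustrative helpers given earlier; it is not intended as a normative decoder path:

```c
#include <stdint.h>

/* Illustrative helpers sketched earlier in this description. */
int32_t clip_qlevel_adaptive(int32_t qlevel, int qp, int m, int db);
int64_t dequantize_with_matrix(int32_t qlevel, int32_t w, int qp, int m, int db);

/* Steps of FIG. 1: receive the quantization level (110), determine the
 * clipping condition and clip (120-130), then de-quantize (140). */
int64_t reconstruct_coefficient(int32_t qlevel, int32_t w, int qp, int m, int db)
{
    int32_t clipped = clip_qlevel_adaptive(qlevel, qp, m, db);   /* steps 120-130 */
    return dequantize_with_matrix(clipped, w, qp, m, db);        /* step 140 */
}
```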
- FIG. 2 illustrates the flow chart for another exemplary system incorporating an embodiment of the present invention. Some processing steps, including steps 110, 120 and 140, are the same as before. After step 120, two different clipping ranges are used depending on whether the clipping condition is the first clipping condition or the second clipping condition (shown in step 210). If the clipping condition is the first clipping condition, the first clipping range is used to clip the quantization level as shown in step 221. If the clipping condition is the second clipping condition, the second clipping range is used to clip the quantization level as shown in step 221.
- FIG. 3 illustrates the flow chart for yet another exemplary system incorporating an embodiment of the present invention. A decoded quantization level for the transform coefficient of a transform unit is received in step 310, wherein the decoded quantization level is decoded by an entropy decoder or is being processed by the entropy decoder. The clipping range for the decoded quantization level is determined in step 320. The decoded quantization level is then clipped to the clipping range according to the clipping condition to generate a clipping-processed quantization level in step 330. A de-quantized transform coefficient is then generated using the clipping-processed quantization level in step 340. The system in FIG. 3 may include an additional step 410 as shown in FIG. 4. In step 410, clipping condition is determined for the decoded quantization level, wherein said clipping the decoded quantization level is performed according to the clipping condition.
- The flow charts in FIG. 1 through FIG. 4 are intended to illustrate examples of quantization level clipping before reconstructing the transform coefficient to avoid data overflow of the de-quantized transform coefficient. A person skilled in the art may practice the present invention by re-arranging the steps, splitting one or more steps, or combining one or more steps.
- The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirements. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art how the present invention may be practiced.
- Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and in different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
- The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (25)
1. A method of de-quantizing a transform coefficient from a quantization level, the method comprising:
receiving a decoded quantization level for the transform coefficient of a transform unit, wherein the decoded quantization level is decoded by an entropy decoder or is being processed by the entropy decoder;
determining clipping range for the decoded quantization level;
clipping the decoded quantization level to the clipping range to generate a clipping-processed quantization level; and
generating a de-quantized transform coefficient using the clipping-processed quantization level.
2. The method of claim 1 , further comprising determining clipping condition for the decoded quantization level, wherein said clipping the decoded quantization level is performed according to the clipping condition, and wherein the clipping condition is related to quantization matrix, quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, a value of the decoded quantization level, predefined values or any combination thereof.
3. The method of claim 2 , wherein the decoded quantization level is clipped to a first range for a first clipping condition and the decoded quantization level is clipped to a second range for a second clipping condition.
4. The method of claim 3 , wherein the first range corresponds to a fixed range related to quantization-level bit-depth.
5. The method of claim 3 , wherein the second range is related to dynamic range of the decoded quantization level.
6. The method of claim 2 , wherein the clipping condition is determined by comparing a first weighted value with a threshold, wherein the first weighted value corresponds to a first linear function of quantization matrix, quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, or any combination thereof.
7. The method of claim 6 , wherein the threshold corresponds to a fixed value or a second weighted value, wherein the second weighted value corresponds to a second linear function of quantization matrix, quantization parameter, de-quantization step, video source bit-depth, and transform size of the transform unit, or any combination thereof.
8. The method of claim 2 , wherein the clipping condition comprises comparing (20+M+DB−QP/6) with a threshold, wherein the threshold is 16, M represents transform size of the transform unit, DB is equal to B−8 and B is video source bit-depth, QP is quantization parameter, dynamic range of de-quantization step is 7 bits, the de-quantized transform coefficient is represented in 32 bits and the decoded quantization level is represented in 16 bits.
9. The method of claim 2 , wherein the clipping condition comprises comparing (12+M+B−QP/6) with a threshold, wherein the threshold is 16, M represents transform size of the transform unit, B is video source bit-depth, QP is quantization parameter, dynamic range of de-quantization step is 7 bits, the de-quantized transform coefficient is represented in 32 bits and the decoded quantization level is represented in 16 bits.
10. The method of claim 2 , wherein the clipping condition comprises comparing QP with a threshold, wherein the threshold is 36, transform size of the transform unit is 4×4, video source bit-depth is 8 bits, QP is quantization parameter, dynamic range of de-quantization step is 7 bits, the de-quantized transform coefficient is represented in 32 bits and the decoded quantization level is represented in 16 bits.
11. The method of claim 1 , wherein the clipping range is related to quantization matrix, quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, a value of the decoded quantization level, predefined values or any combination thereof.
12. The method of claim 1 , wherein the decoded quantization level is clipped to M if the decoded quantization level is larger than M, where M is a first positive integer.
13. The method of claim 12 , wherein the decoded quantization level is clipped to −N if the decoded quantization level is smaller than −N, where N is a second positive integer.
14. The method of claim 13 , wherein M and N correspond to 32767 and 32768 respectively.
15. The method of claim 1 , wherein a comparator is used for clipping the decoded quantization level.
16. The method of claim 1 , wherein said clipping the decoded quantization level corresponds to unconditional fixed-range clipping if video source bit-depth is 10 bits or more, dynamic range of quantization matrix is 8 bits, dynamic range of de-quantization step is 7 bits, transform size of the transform unit is 4×4, dynamic range of quantization parameter is 8 bits, the de-quantized transform coefficient is represented in 32 bits and the decoded quantization level is represented in 16 bits.
17. The method of claim 1 , wherein said generating the de-quantized transform coefficient comprises multiplying the clipping-processed quantization level by quantization matrix and de-quantization step.
18. The method of claim 1 , wherein said clipping the decoded quantization level corresponds to unconditional fixed-range clipping, and the clipped quantization level is represented in n bits.
19. The method of claim 18 , wherein n corresponds to 8, 16, or 32.
20. An apparatus of de-quantizing a transform coefficient from a quantization level, the apparatus comprising:
means for receiving a decoded quantization level for the transform coefficient of a transform unit, wherein the decoded quantization level is decoded by an entropy decoder or is being processed by the entropy decoder;
means for determining clipping range for the decoded quantization level;
means for clipping the decoded quantization level to the clipping range to generate a clipping-processed quantization level; and
means for generating a de-quantized transform coefficient using the clipping-processed quantization level.
21. The apparatus of claim 20 , further comprising means for determining clipping condition for the decoded quantization level, wherein said means for clipping the decoded quantization level is performed according to the clipping condition, and wherein the clipping condition is related to quantization matrix, quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, a value of the decoded quantization level, predefined values or any combination thereof.
22. The apparatus of claim 20 , wherein the clipping range is related to quantization matrix, quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, a value of the decoded quantization level, predefined values or any combination thereof.
23. The apparatus of claim 20 , wherein a comparator is used for clipping the decoded quantization level.
24. The apparatus of claim 20 , wherein the decoded quantization level is clipped to a first range for a first clipping condition and the decoded quantization level is clipped to a second range for a second clipping condition.
25. The apparatus of claim 20 , further comprising means for determining clipping condition for the decoded quantization level, wherein said means for clipping the decoded quantization level is performed according to the clipping condition, and wherein the clipping condition is determined by comparing a first weighted value with a threshold, wherein the first weighted value corresponds to a first linear function of quantization matrix, quantization parameter, de-quantization step, video source bit-depth, transform size of the transform unit, or any combination thereof.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/084083 WO2013086724A1 (en) | 2011-12-15 | 2011-12-15 | Method of clipping transformed coefficients before de-quantization
CNPCTCN2011084083 | 2011-12-15 | ||
PCT/CN2012/086658 WO2013087025A1 (en) | 2011-12-15 | 2012-12-14 | Method and apparatus for dequantization of transformed coefficients |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140328395A1 true US20140328395A1 (en) | 2014-11-06 |
Family
ID=48611829
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/985,779 Active 2032-10-06 US9420296B2 (en) | 2011-12-15 | 2012-12-14 | Method and apparatus for quantization level clipping |
US14/363,791 Abandoned US20140328395A1 (en) | 2011-12-15 | 2012-12-14 | Method and apparatus for dequantization of transformed coefficients |
US15/079,341 Active US9565441B2 (en) | 2011-12-15 | 2016-03-24 | Method and apparatus for quantization level clipping |
US15/375,574 Active US9749635B2 (en) | 2011-12-15 | 2016-12-12 | Method and apparatus for quantization level clipping |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/985,779 Active 2032-10-06 US9420296B2 (en) | 2011-12-15 | 2012-12-14 | Method and apparatus for quantization level clipping |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/079,341 Active US9565441B2 (en) | 2011-12-15 | 2016-03-24 | Method and apparatus for quantization level clipping |
US15/375,574 Active US9749635B2 (en) | 2011-12-15 | 2016-12-12 | Method and apparatus for quantization level clipping |
Country Status (8)
Country | Link |
---|---|
US (4) | US9420296B2 (en) |
EP (2) | EP2737696B8 (en) |
JP (1) | JP5753630B2 (en) |
AU (1) | AU2012350503B2 (en) |
CA (1) | CA2831019C (en) |
MX (1) | MX2013012041A (en) |
RU (1) | RU2600935C1 (en) |
WO (3) | WO2013086724A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6157114B2 (en) * | 2012-12-28 | 2017-07-05 | キヤノン株式会社 | Image encoding device, image encoding method and program, image decoding device, image decoding method and program |
WO2014165960A1 (en) * | 2013-04-08 | 2014-10-16 | Blackberry Limited | Methods for reconstructing an encoded video at a bit-depth lower than at which it was encoded |
US9674538B2 (en) | 2013-04-08 | 2017-06-06 | Blackberry Limited | Methods for reconstructing an encoded video at a bit-depth lower than at which it was encoded |
JP6075875B2 (en) * | 2013-07-31 | 2017-02-08 | 日本電信電話株式会社 | Transform quantization method, transform quantization apparatus, and transform quantization program |
JP6069128B2 (en) * | 2013-08-08 | 2017-02-01 | 日本電信電話株式会社 | Transform quantization method, transform quantization apparatus, and transform quantization program |
KR102139159B1 (en) | 2015-11-06 | 2020-07-29 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Method and apparatus for inverse quantization of transform coefficients, and decoding apparatus |
CN112771865A (en) * | 2018-08-23 | 2021-05-07 | 交互数字Vc控股法国公司 | Encoding and decoding quantization matrices using parameterized models |
US11973958B2 (en) * | 2019-09-22 | 2024-04-30 | Hfi Innovation Inc. | Method and apparatus of sample clipping for prediction refinement with optical flow in video coding |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3202433B2 (en) * | 1993-09-17 | 2001-08-27 | 株式会社リコー | Quantization device, inverse quantization device, image processing device, quantization method, inverse quantization method, and image processing method |
KR100188934B1 (en) | 1995-08-28 | 1999-06-01 | 윤종용 | Apparatus and method of encoding image signal |
US6931058B1 (en) * | 2000-05-19 | 2005-08-16 | Scientific-Atlanta, Inc. | Method and apparatus for the compression and/or transport and/or decompression of a digital signal |
US6628709B2 (en) * | 2000-12-21 | 2003-09-30 | Matsushita Electric Corporation Of America | Bit number prediction for VLC coded DCT coefficients and its application in DV encoding/transcoding |
US6898323B2 (en) * | 2001-02-15 | 2005-05-24 | Ricoh Company, Ltd. | Memory usage scheme for performing wavelet processing |
US20020118743A1 (en) * | 2001-02-28 | 2002-08-29 | Hong Jiang | Method, apparatus and system for multiple-layer scalable video coding |
KR100603592B1 (en) * | 2001-11-26 | 2006-07-24 | 학교법인 고황재단 | Intelligent Water ring scan apparatus and method using Quality Factor, video coding/decoding apparatus and method using that |
US7130876B2 (en) * | 2001-11-30 | 2006-10-31 | General Instrument Corporation | Systems and methods for efficient quantization |
JP4617644B2 (en) * | 2003-07-18 | 2011-01-26 | ソニー株式会社 | Encoding apparatus and method |
US7778813B2 (en) | 2003-08-15 | 2010-08-17 | Texas Instruments Incorporated | Video coding quantization |
US7660355B2 (en) * | 2003-12-18 | 2010-02-09 | Lsi Corporation | Low complexity transcoding between video streams using different entropy coding |
US8031774B2 (en) * | 2005-01-31 | 2011-10-04 | Mediatek Incoropration | Video encoding methods and systems with frame-layer rate control |
JP4645948B2 (en) * | 2005-03-18 | 2011-03-09 | 富士ゼロックス株式会社 | Decoding device and program |
KR100668344B1 (en) * | 2005-09-20 | 2007-01-12 | 삼성전자주식회사 | Image encoding apparatus and method, image decoding apparatus and method, and display driving circuit and method employing the same |
WO2007094100A1 (en) * | 2006-02-13 | 2007-08-23 | Kabushiki Kaisha Toshiba | Moving image encoding/decoding method and device and program |
US8606023B2 (en) * | 2006-06-26 | 2013-12-10 | Qualcomm Incorporated | Reduction of errors during computation of inverse discrete cosine transform |
CN100556144C (en) * | 2007-02-14 | 2009-10-28 | 浙江大学 | Be used for method and the code device of avoiding video or image compression inverse transformation to cross the border |
CN101202912A (en) * | 2007-11-30 | 2008-06-18 | 上海广电(集团)有限公司中央研究院 | Method for controlling balanced code rate and picture quality code rate |
CN100592795C (en) * | 2007-12-27 | 2010-02-24 | 武汉大学 | Integer translation base optimization method in video coding standard |
WO2009132018A1 (en) | 2008-04-22 | 2009-10-29 | The Board Of Regents Of The University Of Texas System | Fluidics-based pulsatile perfusion organ preservation device |
EP2154895A1 (en) * | 2008-08-15 | 2010-02-17 | Thomson Licensing | Method and apparatus for entropy encoding blocks of transform coefficients of a video signal using a zigzag scan sequence, and method and apparatus for a corresponding decoding |
US9819952B2 (en) * | 2009-10-05 | 2017-11-14 | Thomson Licensing Dtv | Methods and apparatus for embedded quantization parameter adjustment in video encoding and decoding |
CA2785036A1 (en) * | 2010-02-05 | 2011-08-11 | Telefonaktiebolaget L M Ericsson (Publ) | De-blocking filtering control |
US10298939B2 (en) * | 2011-06-22 | 2019-05-21 | Qualcomm Incorporated | Quantization in video coding |
MY168044A (en) | 2011-06-30 | 2018-10-11 | Samsung Electronics Co Ltd | Video encoding method with bit depth adjustment for fixed-point conversion and apparatus therefor, and video decoding method and apparatus therefor |
CN102271258A (en) | 2011-08-03 | 2011-12-07 | 中山大学深圳研究院 | Method and device for video coding aiming at low coding rate |
-
2011
- 2011-12-15 WO PCT/CN2011/084083 patent/WO2013086724A1/en active Application Filing
-
2012
- 2012-12-14 WO PCT/CN2012/086648 patent/WO2013087021A1/en active Application Filing
- 2012-12-14 US US13/985,779 patent/US9420296B2/en active Active
- 2012-12-14 CA CA2831019A patent/CA2831019C/en active Active
- 2012-12-14 EP EP12858362.2A patent/EP2737696B8/en active Active
- 2012-12-14 JP JP2014523196A patent/JP5753630B2/en not_active Expired - Fee Related
- 2012-12-14 MX MX2013012041A patent/MX2013012041A/en active IP Right Grant
- 2012-12-14 RU RU2013145895/07A patent/RU2600935C1/en active
- 2012-12-14 WO PCT/CN2012/086658 patent/WO2013087025A1/en unknown
- 2012-12-14 AU AU2012350503A patent/AU2012350503B2/en not_active Ceased
- 2012-12-14 EP EP12857216.1A patent/EP2737706A4/en not_active Withdrawn
- 2012-12-14 US US14/363,791 patent/US20140328395A1/en not_active Abandoned
-
2016
- 2016-03-24 US US15/079,341 patent/US9565441B2/en active Active
- 2016-12-12 US US15/375,574 patent/US9749635B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6163868A (en) * | 1997-10-23 | 2000-12-19 | Sony Corporation | Apparatus and method for providing robust error recovery for errors that occur in a lossy transmission environment |
US9167261B2 (en) * | 2011-11-07 | 2015-10-20 | Sharp Laboratories Of America, Inc. | Video decoder with constrained dynamic range |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020146709A1 (en) * | 2019-01-12 | 2020-07-16 | Tencent America Llc. | Method and apparatus for video coding |
US10904550B2 (en) | 2019-01-12 | 2021-01-26 | Tencent America LLC | Method and apparatus for video coding |
CN113228631A (en) * | 2019-01-12 | 2021-08-06 | 腾讯美国有限责任公司 | Video coding and decoding method and device |
Also Published As
Publication number | Publication date |
---|---|
AU2012350503B2 (en) | 2015-11-05 |
WO2013086724A1 (en) | 2013-06-20 |
US20130322527A1 (en) | 2013-12-05 |
MX2013012041A (en) | 2013-12-16 |
WO2013087021A1 (en) | 2013-06-20 |
EP2737696B8 (en) | 2019-10-02 |
CA2831019C (en) | 2017-12-05 |
EP2737706A4 (en) | 2015-05-27 |
JP2014526199A (en) | 2014-10-02 |
CA2831019A1 (en) | 2013-06-20 |
RU2600935C1 (en) | 2016-10-27 |
JP5753630B2 (en) | 2015-07-22 |
EP2737706A1 (en) | 2014-06-04 |
US9749635B2 (en) | 2017-08-29 |
US20170094275A1 (en) | 2017-03-30 |
US9420296B2 (en) | 2016-08-16 |
US20160205401A1 (en) | 2016-07-14 |
EP2737696A4 (en) | 2015-05-20 |
EP2737696A1 (en) | 2014-06-04 |
WO2013087025A1 (en) | 2013-06-20 |
AU2012350503A1 (en) | 2014-02-06 |
EP2737696B1 (en) | 2019-07-17 |
US9565441B2 (en) | 2017-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9749635B2 (en) | Method and apparatus for quantization level clipping | |
JP4560027B2 (en) | Image and video coding methods | |
EP2323407A1 (en) | Video image encoding method, video image decoding method, video image encoding apparatus, video image decoding apparatus, program and integrated circuit | |
CN114554217A (en) | Flexible band offset mode in sample adaptive offset in HEVC | |
US12075046B2 (en) | Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions | |
JP7402280B2 (en) | Video decoding device, video decoding method and program | |
JP2017500792A (en) | Evaluation measure for HDR video frames | |
US20240291979A1 (en) | Image encoding apparatus, image encoding method, image decoding apparatus, image decoding method, and non-transitory computer-readable storage medium | |
US8811735B2 (en) | System and method for scalar quantization error reduction | |
RU2638009C1 (en) | Method of encoding image quantisation parameters and method of decoding image quantisation parameters | |
WO2020262012A1 (en) | Image decoding device, image decoding method, and program | |
KR20140036172A (en) | Techniques for context-adaptive binary data arithmetic coding(cabac) decoding | |
EP3343919A1 (en) | Video encoding apparatus, video encoding method, video decoding apparatus, and video decoding method | |
CN103975592A (en) | Method and apparatus for dequantization of transformed coefficients | |
KR101659377B1 (en) | Method and system for data encoding | |
US11785204B1 (en) | Frequency domain mode decision for joint chroma coding | |
EP4250728A1 (en) | Image decoding device, image decoding method, and program | |
WO2024027566A1 (en) | Constraining convolution model coefficient | |
US20150365701A1 (en) | Method for encoding and decoding image block, encoder and decoder | |
RU2574279C2 (en) | Method of encoding image quantisation parameters and method of decoding image quantisation parameters | |
CN112911312A (en) | Encoding and decoding method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MEDIATEK SINGAPORE PTE. LTD, SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUO, XUN;CHUANG, TZU-DER;LEI, SHAW-MIN;SIGNING DATES FROM 20140512 TO 20140519;REEL/FRAME:033052/0504 |
|
AS | Assignment |
Owner name: HFI INNOVATION INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEDIATEK SINGAPORE PTE. LTD.;REEL/FRAME:039609/0911 Effective date: 20160713 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |