WO2016115728A1 - Improved escape value coding methods
Classifications
- H04N19/91 — Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
- H04N19/593 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Abstract
Methods are proposed to binarize the escape value in palette coding more efficiently.
Description
The invention relates generally to video/image coding/processing. In particular, it relates to palette coding.
Palette coding [1] [2] is described as follows.
In this first method, proposed by Qualcomm, a palette is utilized to represent a given video block (e.g., a CU). The encoding process is as follows [1]:
1. Transmission of the palette: the palette size is first transmitted followed by the palette elements.
2. Transmission of pixel values: the pixels in the CU are encoded in a raster scan order. For each position, a flag is first transmitted to indicate whether the “run mode” or “copy above mode” is being used.
2.1 “Run mode”: In “run mode”, a palette index is first signaled, followed by “palette_run” (e.g., M). No further information needs to be transmitted for the current position and the following M positions, as they have the same palette index as signaled. The palette index (e.g., i) is shared by all three color components, which means that the reconstructed pixel values are (Y, U, V) = (paletteY[i], paletteU[i], paletteV[i]), assuming the color space is YUV.
2.2 “Copy above mode”: In “copy above mode”, a value “copy_run” (e.g., N) is transmitted to indicate that for the following N positions (including the current one), the palette index is equal to the palette index of the pixel at the same location in the row above.
3. Transmission of residue: the palette indices transmitted in Stage 2 are converted back to pixel values and used as the prediction. Residue information is transmitted using HEVC residue coding and is added to the prediction for the reconstruction.
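The run-mode/copy-above reconstruction in Stage 2 can be illustrated with a short non-normative sketch. The parsed-element representation below is a hypothetical convenience, not the actual bitstream syntax:

```python
def decode_palette_indices(elements, width, height):
    """Rebuild the palette index map from parsed run-mode and
    copy-above-mode elements, in raster scan order (sketch).

    elements: list of ('run', palette_index, M) or ('copy', N), where
    M further positions repeat the signaled index and N positions
    (including the current one) copy the index from the row above.
    """
    indices = [0] * (width * height)
    pos = 0
    for elem in elements:
        if elem[0] == 'run':
            _, idx, m = elem
            for _ in range(m + 1):      # current position plus M following
                indices[pos] = idx
                pos += 1
        else:
            _, n = elem
            for _ in range(n):          # N positions copied from the row above
                indices[pos] = indices[pos - width]
                pos += 1
    return indices
```

For a 4x2 block, `[('run', 2, 3), ('copy', 4)]` fills the first row with index 2 and copies it into the second row.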
In the original version of the work [2], a palette for each component is constructed and transmitted. The palette can be predicted (shared) from its left neighboring CU to reduce the bitrate. Later on, Qualcomm proposed a second version of their palette coding technique [1], in which each element in the palette is a triplet, representing a specific combination of the three color components. The predictive coding of the palette across CUs is removed. This invention proposes that palette prediction/sharing can also be applied to the triplet palette format. Again, the palettes from the left and/or above CU are utilized, as long as the above CU is within the same CTB (LCU) as the current CU being encoded (to reduce the line buffer).
Major-color-based (or palette) coding [3] was proposed by Microsoft. Similar to [1], a palette for each component is constructed and transmitted. However, instead of predicting the entire palette from the left CU, an individual entry in a palette can be predicted from the exactly corresponding palette entry in the above CU or left CU.
For transmission of pixel values, a predictive coding method is applied to the indices [3], in which a pixel line can be predicted by different modes. Specifically, three kinds of line modes are used for a pixel line: horizontal mode, vertical mode and normal mode. In horizontal mode, all the pixels in the same line have the same value. If the value is the same as that of the first pixel of the above pixel line, only line-mode signalling bits are transmitted; otherwise, the index value is also transmitted. In vertical mode, the current pixel line is the same as the above pixel line, so only line-mode signalling bits are transmitted. In normal mode, pixels in a line are predicted individually. For each pixel, the left or above neighbour is used as the predictor, and the prediction symbol is transmitted to the decoder.
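The three line modes can be sketched as follows. This is a non-normative illustration; the mode and payload representation is hypothetical, and normal mode is shown as explicitly carried indices for brevity (the actual scheme predicts each pixel from its left or above neighbour and signals a prediction symbol):

```python
def decode_line(mode, above_line, payload=None):
    """Reconstruct one line of palette indices under the three line
    modes of [3] (sketch)."""
    width = len(above_line)
    if mode == 'vertical':                  # identical to the above line
        return list(above_line)
    if mode == 'horizontal':                # one index fills the whole line;
        idx = above_line[0] if payload is None else payload
        return [idx] * width                # index omitted when it matches
    return list(payload)                    # 'normal': per-pixel indices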
Furthermore, [3] classifies pixels into major color pixels and escape pixels. For major color pixels, the decoder reconstructs the pixel value from the major color index (the palette index in [1] [2]) and the palette. For escape pixels, the encoder further sends the pixel value.
In the reference software of the screen content coding (SCC) standard, SCM-3.0, an improved palette scheme based on [4] is integrated.
The syntax table for palette coding is as follows.
In SCM-3.0, the palette table of the last coded palette CU is used as the palette predictor for the current palette table coding. In palette table coding, a palette_share_flag is first signalled. If palette_share_flag is 1, all the entries in the last coded palette table are reused for the current CU, and the palette size is equal to the palette size of the last coded palette CU. Otherwise (palette_share_flag is 0), the current palette table is signalled by choosing which entries in the last coded palette table can be reused and transmitting the new palette entries. The size of the palette is set as the sum of the size of the predicted part of the palette (numPredPreviousPalette) and the number of transmitted palette entries (num_signalled_palette_entries). The predicting palette is a palette derived from the previously reconstructed palette-coded CUs. When coding the current CU in palette mode, those palette colors which are not predicted from the predicting palette are directly transmitted in the bitstream. For example, suppose the current CU is coded in palette mode with a palette size equal to six, and assume three of the six major colors are predicted from the predicting palette while three are directly transmitted in the bitstream. The transmitted three will be signalled using the sample syntax given below.
num_signalled_palette_entries = 3
for( cIdx = 0; cIdx < 3; cIdx++ ) // signal colors for different components
  for( i = 0; i < num_signalled_palette_entries; i++ )
    palette_entries[ cIdx ][ numPredPreviousPalette + i ]
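The resulting palette table is the concatenation of the reused predictor entries and the newly signalled ones. A minimal sketch (the function and variable names here are hypothetical, not SCM-3.0 identifiers):

```python
def build_palette(predictor, reuse_flags, new_entries):
    """Assemble the current palette table: entries reused from the
    predicting palette come first, followed by the
    num_signalled_palette_entries newly transmitted entries (sketch)."""
    palette = [entry for entry, reused in zip(predictor, reuse_flags) if reused]
    palette.extend(new_entries)
    return palette
```

With three reused and three newly signalled entries, the palette size is six, matching the example above.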
Since the palette size is six in this example, palette indices 0 to 5 are used to indicate that each palette-coded pixel can be reconstructed as the corresponding major color in the palette color table.
In SCM-3.0, to indicate that a pixel is coded as an escape pixel, the color index for the escape pixel is signalled as the value equal to the palette size. Under a different interpretation, when escape pixels are coded in a palette-coded block, the palette size of that block is increased by one and the last major color index is used as the index of escape pixels. In the above example, the major color index 6 indicates that a pixel is an escape pixel. Moreover, in SCM-3.0, one CU-level escape flag, palette_escape_val_present_flag, is signalled for each palette CU to indicate whether escape pixel indexing is coded for that palette CU.
If a pixel is coded as an escape pixel, the escape value for each component, denoted palette_escape_val, will be coded. palette_escape_val represents the original value of the pixel in lossless coding or a quantized value of the pixel in lossy coding. palette_escape_val is binarized with the truncated binary (TB) binarization process as described in sub-clause 9.3.3.6 of JCTVC-S1005, and the parameter cMax input to the process is calculated as described in sub-clause 9.3.3.13 of JCTVC-S1005, recited below.
The variable bitDepth is derived as follows:
bitDepth = ( cIdx == 0 ) ? BitDepthY : BitDepthC
The binarization of palette_escape_val is derived as follows:
– If cu_transquant_bypass_flag is true, the binarization of palette_escape_val is derived by invoking the FL binarization process specified in subclause 9.3.3.5 with the input parameter set to ( 1 << bitDepth ) - 1.
– Otherwise (cu_transquant_bypass_flag is false), the following ordered steps apply:
1. The quantization parameter qP is derived as follows:
qP = ( cIdx == 0 ) ? Qp′Y : ( ( cIdx == 1 ) ? Qp′Cb : Qp′Cr )
2. A quantization step size parameter qStep is derived as follows:
qStep = ( qP == 0 ) ? 1 : Round( 2^( ( qP - 4 ) / 6 ) )
3. A maximum possible quantized value maxValue is derived as follows:
maxValue = ( ( 1 << bitDepth ) - 1 ) / qStep
4. The number of bins, numBins, of the fixed-length binarization codeword is derived from maxValue.
5. The maximum parameter cMax for the fixed-length binarization is derived as follows:
cMax = ( 1 << numBins ) - 1
This process is invoked whenever palette_escape_val is parsed. Since the process is quite complicated, it imposes a high computing burden on the parsing process, which can tolerate only a very low latency.
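The ordered steps recited above can be summarized in a sketch. Note two assumptions: the numBins formula is not recited in the text, so the natural choice Ceil( Log2( maxValue + 1 ) ) is assumed here, and the quantization step for nonzero qP is taken as Round( 2^( ( qP - 4 ) / 6 ) ):

```python
import math

def derive_cmax(qP, bit_depth, transquant_bypass=False):
    """Per-parse cMax derivation for palette_escape_val (sketch of the
    recited steps; the numBins formula is an assumption)."""
    if transquant_bypass:                      # lossless: FL with full range
        return (1 << bit_depth) - 1
    # Step 2: quantization step size (taken as 1 when qP is 0)
    q_step = 1 if qP == 0 else round(2 ** ((qP - 4) / 6))
    # Step 3: maximum possible quantized value
    max_value = ((1 << bit_depth) - 1) // q_step
    # Step 4 (assumed): bins needed for an FL code covering max_value
    num_bins = max(1, math.ceil(math.log2(max_value + 1)))
    # Step 5: cMax for the fixed-length binarization
    return (1 << num_bins) - 1
```

Repeating this chain of exponentiation, rounding and division for every parsed escape value is exactly the burden the table-based methods below are meant to remove.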
Acronyms:
CE: Core Experiments
CU: Coding Unit
CTB (LCU) : Coded tree block (largest coding unit)
HEVC: High Efficiency Video Coding
IntraBC: Intra picture Block Copy
MC: Motion Compensation
MV: Motion Vector
PU: Prediction Unit
RExt: HEVC Range Extensions
WPP: Wavefront Parallel Processing
SUMMARY
In light of the previously described problems, methods are proposed to binarize the escape value in a more efficient way.
Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
Fig. 1 is a diagram illustrating the table based palette escape value binarization.
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
In order to binarize the palette_escape_val more efficiently, several methods are proposed.
In one embodiment, palette_escape_val is binarized with the truncated binary (TB) , and cMax for TB is fetched from a table.
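For reference, the TB binarization of sub-clause 9.3.3.6 can be sketched as follows. This is a non-normative Python sketch returning the bin string for a value in [0, cMax]:

```python
def tb_encode(value, c_max):
    """Truncated binary binarization of value in [0, c_max] (sketch):
    with n = cMax + 1 symbols and k = floor(log2(n)), the first
    u = 2^(k+1) - n values use k bins and the rest use k + 1 bins."""
    n = c_max + 1
    k = n.bit_length() - 1          # floor(log2(n))
    u = (1 << (k + 1)) - n          # number of shorter codewords
    if value < u:
        return format(value, '0{}b'.format(k)) if k > 0 else ''
    return format(value + u, '0{}b'.format(k + 1))
```

With cMax = 4, values 0-2 take two bins ('00', '01', '10') and values 3-4 take three ('110', '111'); when cMax + 1 is a power of two, TB degenerates to fixed-length coding.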
In one embodiment, the cMax is fetched from the table indexed by the quantization parameter (QP) and the bit depth. Table 1 demonstrates an exemplary cMax lookup table indexed by QP and bitDepth. For example, if the QP of the component Y is 22, and the bit depth of the component Y is 8, then the cMax used to parse palette_escape_val should be 5. Fig. 1 shows the procedure of the table-based palette value coding method.
Table 1. cMax Lookup table indexed by QP and bitDepth
QP | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
bitDepth=8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 7 | 7 |
bitDepth=10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 9 | 9 | 9 | 9 | 9 |
bitDepth=12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 11 | 11 | 11 | 11 | 11 |
QP | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
bitDepth=8 | 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 5 | 5 | 5 | 5 |
bitDepth=10 | 9 | 9 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 7 |
bitDepth=12 | 11 | 11 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 9 | 9 | 9 | 9 |
QP | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 |
bitDepth=8 | 5 | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 3 |
bitDepth=10 | 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 5 | 5 | 5 | 5 | 5 |
bitDepth=12 | 9 | 9 | 8 | 8 | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 7 | 7 |
QP | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 |
bitDepth=8 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |
bitDepth=10 | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 3 | 3 |
bitDepth=12 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
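Under this embodiment, parsing reduces to a constant-time lookup. The sketch below reproduces only the bitDepth = 8 row of Table 1; the dictionary layout is an illustration, not a mandated data structure:

```python
# cMax per QP (0..51) for bitDepth = 8, transcribed from Table 1.
CMAX_TABLE = {
    8: [8] * 8 + [7] * 7 + [6] * 7 + [5] * 6
       + [4] * 6 + [3] * 6 + [2] * 6 + [1] * 6,
}

def fetch_cmax(bit_depth, qp):
    """Constant-time fetch of cMax, replacing the per-parse derivation."""
    return CMAX_TABLE[bit_depth][qp]
```

As in the example above, fetch_cmax(8, 22) returns 5.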
In another embodiment, the cMax is fetched from a table indexed by QP, and a different table is used for each bit depth. Tables 2-4 show exemplary tables for bit depths 8, 10 and 12, respectively.
Table 2. cMax Lookup table indexed by QP for bitDepth = 8
QP | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
cMax | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 7 | 7 |
QP | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
cMax | 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 5 | 5 | 5 | 5 |
QP | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 |
cMax | 5 | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 3 |
QP | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 |
cMax | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |
Table 3. cMax Lookup table indexed by QP for bitDepth = 10
QP | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
cMax | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 9 | 9 | 9 | 9 | 9 |
QP | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
cMax | 9 | 9 | 8 | 8 | 8 | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 7 |
QP | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 |
cMax | 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 5 | 5 | 5 | 5 | 5 |
QP | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 |
cMax | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | 3 | 3 | 3 | 3 |
Table 4. cMax Lookup table indexed by QP for bitDepth = 12
QP | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
cMax | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 12 | 11 | 11 | 11 | 11 | 11 |
QP | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
cMax | 11 | 11 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 9 | 9 | 9 | 9 |
QP | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 |
cMax | 9 | 9 | 8 | 8 | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 7 | 7 |
QP | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 |
cMax | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
In one embodiment, the table is predefined and stored in memory before the coding/decoding process.
In another embodiment, the table is built up once at the beginning of the coding/decoding process.
In still another embodiment, the table is built up once at the beginning of coding/decoding a picture or a slice.
In one embodiment, the cMax is fetched from a table only for lossy coding.
In another embodiment, cMax is fetched from a table for lossless coding. Table 5 demonstrates an exemplary table for lossless coding. For example, when the bit depth for the component Y is 8, the cMax used to parse palette_escape_val should be 8.
Table 5. cMax Lookup table for lossless coding.
lossless | |
bitDepth=8 | 8 |
bitDepth=10 | 10 |
bitDepth=12 | 12 |
In another embodiment, cMax can be fetched from a selected one of multiple tables. The tables can be predefined and stored in memory before the coding/decoding process, built up once at the beginning of the coding/decoding process, or built up once at the beginning of coding/decoding a picture or a slice.
In still another embodiment, the table selection can be at sequence level, picture level, slice level, tile level, line level, coding tree unit (CTU) level or coding unit (CU) level.
In still another embodiment, the selected table can be signaled from the encoder to the decoder explicitly. The selection information can be signaled in VPS, SPS, PPS, slice header, CTU, or CU.
In still another embodiment, cMax can be fetched from one of multiple tables obeying the same rule at encoder and decoder without signaling any information from the encoder to the decoder. For example, the table can be selected according to the coding information such as coding modes of neighboring blocks.
In still another embodiment, cMax could be a fixed number such as 8.
The methods described above can be used in a video encoder as well as in a video decoder. Embodiments of the escape value coding methods according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art) . Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such
modifications and similar arrangements.
References
[1] L. Guo, M. Karczewicz, J. Sole, and R. Joshi, “Evaluation of Palette Mode Coding on HM-12.0+RExt-4.1”, JCTVC-O0218, Geneva, CH, October 2013.
[2] L. Guo, M. Karczewicz, and J. Sole, “RCE3: Results of Test 3.1 on Palette Mode for Screen Content Coding” , JCTVC-N0247, Vienna, AT, July 2013.
[3] X. Guo, B. Li, J. Xu, Y. Lu, S. Li, and F. Wu, “AHG8: Major-color-based screen content coding” , JCTVC-O0182, Geneva, CH, October 2013.
[4] P. Onno, X. Xiu, Y. -W. Huang, R. Joshi, “Suggested combined software and text for run-based palette mode” , JCTVC-R0348, Sapporo, JP, July 2014.
Claims (16)
- A method of palette table coding, wherein cMax used in the truncated binary (TB) binarization process is fetched from a table.
- The method as claimed in claim 1, wherein said TB binarization process is used to code the palette escape value.
- The method as claimed in claim 1, wherein the cMax is fetched from the table indexed by the quantization parameter (QP) and the bit depth.
- The method as claimed in claim 1, wherein the cMax is fetched from a table indexed by QP, and a different table is used for each bit depth.
- The method as claimed in claim 1, wherein the table is predefined and stored in memory before the coding/decoding process.
- The method as claimed in claim 1, wherein the table is built up once at the beginning of the coding/decoding process.
- The method as claimed in claim 1, wherein the table is built up once at the beginning of coding/decoding a picture or a slice.
- The method as claimed in claim 1, wherein the cMax is fetched from a table only for lossy coding.
- The method as claimed in claim 1, wherein cMax is fetched from a table for lossless coding.
- The method as claimed in claim 1, wherein cMax can be fetched from a selected one of multiple tables, the tables can be predefined and stored in memory before the coding/decoding process, or the tables can be built up once at the beginning of the coding/decoding process, or the tables can be built up once at the beginning of coding/decoding a picture or a slice.
- The method as claimed in claim 10, wherein the table selection can be at sequence level, at picture level, at slice level, at tile level, at line level, at coding tree unit (CTU) level or at coding unit (CU) level.
- The method as claimed in claim 10, wherein the selected table can be signaled from the encoder to the decoder explicitly, the selection information can be signaled in VPS, SPS, PPS, slice header, CTU, or CU.
- The method as claimed in claim 10, wherein cMax can be fetched from one of multiple tables obeying the same rule at encoder and decoder without signaling any information from the encoder to the decoder, for example, the table can be selected according to the coding information such as coding modes of neighboring blocks.
- The method as claimed in claim 1, wherein cMax can be a fixed number.
- The method as claimed in claim 1, wherein cMax can be obtained in different ways for different components, such as Y, U, V or R, G, B.
- The method as claimed in claim 15, wherein cMax can be fetched from different tables for different components, such as Y, U, V or R, G, B.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/071429 WO2016115728A1 (en) | 2015-01-23 | 2015-01-23 | Improved escape value coding methods |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016115728A1 true WO2016115728A1 (en) | 2016-07-28 |
Family
ID=56416308
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060204086A1 (en) * | 2005-03-10 | 2006-09-14 | Ullas Gargi | Compression of palettized images |
US20140064612A1 (en) * | 2012-09-04 | 2014-03-06 | Kabushiki Kaisha Toshiba | Apparatus and a method for coding an image |
CN104301737A (en) * | 2013-07-15 | 2015-01-21 | 华为技术有限公司 | Decoding method and encoding method for target image block and decoder and encoder |
- 2015-01-23: WO PCT/CN2015/071429 patent/WO2016115728A1/en, active, Application Filing
Non-Patent Citations (2)
Title |
---|
GISQUET, C. ET AL.: "SCCE3 Test C.2:combination of palette coding tools", JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ITU-T SG 16 WP 3 AND ISOLIEC JTC 1/SC 29/ WG 11 18TH MEETING, 9 July 2014 (2014-07-09), Sapporo, JP * |
PU, WEI ET AL.: "SCCE3:Test B.12-Binarization of Escape Sample and Palette Index", JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/ WG 11 18TH MEETING, 9 July 2014 (2014-07-09), Sapporo, JP * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11677953B2 (en) | 2019-02-24 | 2023-06-13 | Beijing Bytedance Network Technology Co., Ltd. | Independent coding of palette mode usage indication |
WO2020207421A1 (en) * | 2019-04-09 | 2020-10-15 | Beijing Bytedance Network Technology Co., Ltd. | Entry construction for palette mode coding |
US11611753B2 (en) | 2019-07-20 | 2023-03-21 | Beijing Bytedance Network Technology Co., Ltd. | Quantization process for palette mode |
US11924432B2 (en) | 2019-07-20 | 2024-03-05 | Beijing Bytedance Network Technology Co., Ltd | Condition dependent coding of palette mode usage indication |
US11677935B2 (en) | 2019-07-23 | 2023-06-13 | Beijing Bytedance Network Technology Co., Ltd | Mode determination for palette mode coding |
US11683503B2 (en) | 2019-07-23 | 2023-06-20 | Beijing Bytedance Network Technology Co., Ltd. | Mode determining for palette mode in prediction process |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10334281B2 (en) | Method of conditional binary tree block partitioning structure for video and image coding | |
US10554974B2 (en) | Method and apparatus enabling adaptive multiple transform for chroma transport blocks using control flags | |
US9788004B2 (en) | Method of color index coding with palette stuffing | |
US10750169B2 (en) | Method and apparatus for intra chroma coding in image and video coding | |
EP3085083B1 (en) | Method and apparatus for palette initialization and management | |
US9749628B2 (en) | Methods of handling escape pixel as a predictor in index map coding | |
US9681135B2 (en) | Method for palette table initialization and management | |
US9860548B2 (en) | Method and apparatus for palette table prediction and signaling | |
CA2948683C (en) | Methods for palette size signaling and conditional palette escape flag signaling | |
WO2016202259A1 (en) | Advanced coding techniques for high efficiency video coding (hevc) screen content coding (scc) extensions | |
US10819990B2 (en) | Method and apparatus for palette predictor initialization for palette coding in video and image compression | |
US10448049B2 (en) | Method for color index coding using a generalized copy previous mode | |
US10652555B2 (en) | Method and apparatus of palette index map coding for screen content coding | |
WO2016115728A1 (en) | Improved escape value coding methods | |
US10904566B2 (en) | Method and apparatus for index map coding in video and image compression | |
WO2016044974A1 (en) | Palette table signalling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15878405 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15878405 Country of ref document: EP Kind code of ref document: A1 |