CN114630131B - Escape pixel coding and decoding method in index map coding and decoding - Google Patents

Escape pixel coding and decoding method in index map coding and decoding

Info

Publication number
CN114630131B
Authority
CN
China
Prior art keywords
palette
index
run
pixel
copy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111452015.8A
Other languages
Chinese (zh)
Other versions
CN114630131A (en)
Inventor
庄子德
陈庆晔
孙域晨
夜静
刘杉
许晓中
金廷宣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MediaTek Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MediaTek Inc
Priority to CN202111452015.8A
Priority claimed from PCT/CN2015/094410 (WO2016074627A1)
Publication of CN114630131A
Application granted
Publication of CN114630131B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/103 — Selection of coding mode or of prediction mode
    • H04N19/147 — Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/159 — Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or macroblock
    • H04N19/186 — Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/426 — Implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements using memory downsizing methods
    • H04N19/44 — Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/463 — Embedding additional information in the video signal by compressing encoding parameters before transmission
    • H04N19/593 — Predictive coding involving spatial prediction techniques
    • H04N19/70 — Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/93 — Run-length coding

Abstract

Disclosed is a video encoding and decoding method that reduces implementation cost by reusing the transform coefficient buffer for palette coding and decoding. If the current prediction mode is an intra prediction mode or an inter prediction mode, information related to the transform coefficients of the prediction residual of the current block, generated by intra or inter prediction, is stored in the transform coefficient buffer. If the current prediction mode is the palette coding mode, information related to the palette data associated with the current block is stored in the transform coefficient buffer instead. The current block is then encoded or decoded based on the transform-coefficient-related information if it is coded in intra or inter prediction mode, or based on the palette-data-related information stored in the transform coefficient buffer if it is coded in palette mode.

Description

Escape pixel coding and decoding method in index map coding and decoding
[ Cross-reference ]
The present application claims priority to U.S. Provisional Application No. 62/078,595, No. 62/087,454, No. 62/119,950 (filed February 24, 2015), No. 62/145,578, No. 62/162,313 (filed May 15, 2015), and No. 62/170,828 (filed June 4, 2015), all of which are incorporated herein by reference.
[ Technical field ]
The present invention relates to palette coding of video data. In particular, the present invention relates to various techniques for conserving system memory or increasing system throughput by reusing transform coefficient buffers, grouping escape values, palette predictor initialization, palette predictor entry semantics, or palette entry semantics.
[ Background art ]
High Efficiency Video Coding (HEVC) is a new coding standard developed in recent years. In HEVC systems, the fixed-size macroblocks of H.264/AVC are replaced by flexible blocks called coding units (CUs). Pixels in a CU share the same coding parameters to improve coding efficiency. A CU may start from a largest CU (LCU), which is also referred to as a coded tree unit (CTU) in HEVC. In addition to the concept of the coding unit, HEVC also introduces the concept of a prediction unit (PU). Once the splitting of the CU hierarchical tree is done, each leaf CU is further split into one or more prediction units (PUs) according to the prediction type and PU partition. Several coding tools have been developed for screen content coding. The tools relevant to the present invention are briefly reviewed below.
Palette coding and decoding
During the development of HEVC screen content coding (SCC), several proposals have been disclosed to address palette-based coding. For example, palette prediction and sharing techniques are disclosed in JCTVC-N0247 (Guo et al., "RCE3: Results of Test 3.1 on Palette Mode for Screen Content Coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July–2 August 2013, Document: JCTVC-N0247) and JCTVC-O0218 (Guo et al., "Evaluation of Palette Mode Coding on HM-12.0+RExt-4.1", JCT-VC, 15th Meeting: Geneva, CH, 23 October–1 November 2013, Document: JCTVC-O0218). In JCTVC-N0247 and JCTVC-O0218, a palette for each color component is constructed and transmitted. A palette can be predicted (or shared) from its left neighboring CU to reduce the bitrate. All pixels within a given block are then coded using their palette indices. An example of the encoding process according to JCTVC-N0247 is shown below.
Transmission of the palette: the size of the color index table (also called the palette table) is transmitted first, followed by the palette elements (i.e., the color values).
Transmission of pixel palette index values (indices pointing to colors in the palette): the index values of the pixels in the CU are encoded in raster scan order. For each position, a flag is first transmitted to indicate whether the "run mode" or the "copy above mode" is used.
"Run mode": the palette index is signaled first, followed by a "palette run" (e.g., M). No further information needs to be transmitted for the current position and the following M positions, since they all have the same palette index as the one just signaled. The palette index (e.g., i) is shared by all three color components, which means that the reconstructed pixel value is (Y, U, V) = (paletteY[i], paletteU[i], paletteV[i]) (assuming the color space is YUV).
"Copy above mode": a value "copy_run" (e.g., N) is transmitted to indicate that for the following N positions (including the current position), the palette index is equal to the palette index at the same position in the row above.
Transmission of the residual: the palette indices transmitted in stage 2 are converted back into pixel values and used as the prediction. Residual information is transmitted using HEVC residual coding and added to the prediction for reconstruction.
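The three-stage process above can be sketched in a few lines (a minimal decoder-side fragment, not SCC reference code: the token format and function names are invented, a raster scan is assumed as in JCTVC-N0247, and residual addition is omitted):

```python
def decode_index_map(width, height, tokens):
    """tokens: list of ('run', index, M) or ('copy_above', N) entries."""
    total = width * height
    indices = [0] * total
    pos = 0
    for tok in tokens:
        if tok[0] == 'run':
            _, idx, m = tok
            for k in range(m + 1):          # current position plus M more share idx
                indices[pos + k] = idx
            pos += m + 1
        else:                               # 'copy_above': copy N indices from row above
            _, n = tok
            for k in range(n):
                indices[pos + k] = indices[pos + k - width]
            pos += n
    assert pos == total
    return indices

def reconstruct_yuv(indices, palette_y, palette_u, palette_v):
    # a shared index i gives (Y, U, V) = (paletteY[i], paletteU[i], paletteV[i])
    return [(palette_y[i], palette_u[i], palette_v[i]) for i in indices]
```

For a 2-wide block, `[('run', 1, 1), ('run', 0, 1), ('copy_above', 2)]` fills the first two rows explicitly and copies the third from the second.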
In JCTVC-N0247, a palette for each component is constructed and transmitted, and the palette can be predicted (or shared) from the left neighboring CU to reduce the bitrate. In JCTVC-O0218, each element in the palette is a triplet representing a specific combination of the three color components, and predictive coding of the palette across CUs is removed.
In JCTVC-O0182 (Guo et al., "AHG8: Major-color-based screen content coding", JCT-VC, 15th Meeting: Geneva, CH, 23 October–1 November 2013, Document: JCTVC-O0182), another palette coding method is disclosed. Instead of predicting the entire palette table from the left CU, individual palette color entries can be predicted from the exactly corresponding palette color entries in the above CU or the left CU.
For the transmission of pixel palette index values, a predictive coding method is applied to the indices according to JCTVC-O0182. Lines of indices can be predicted by different modes; in particular, three line modes are used, namely the horizontal mode, the vertical mode, and the normal mode. In horizontal mode, all indices in the same line have the same value. If that value is the same as the first pixel of the above pixel line, only the line mode signaling bit is transmitted; otherwise, the index value is transmitted as well. In vertical mode, the current index line is the same as the above index line, so only the line mode signaling bit is transmitted. In normal mode, the indices in a line are predicted individually: for each index position, the left or above neighbor is used as the predictor, and the prediction symbol is transmitted to the decoder.
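A minimal sketch of the three line modes (the function and mode names are invented; normal mode is simplified to carrying explicit indices rather than per-sample predictor symbols):

```python
def decode_line(mode, above_line, payload=None):
    """above_line: indices of the line above; payload depends on the mode."""
    if mode == 'vertical':                  # whole line equals the line above
        return list(above_line)
    if mode == 'horizontal':                # whole line shares one value; if it
        value = above_line[0] if payload is None else payload
        return [value] * len(above_line)    # matches above_line[0], no value is sent
    # normal mode: each index is predicted individually; simplified here to
    # an explicit index list
    return list(payload)
```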
Furthermore, according to JCTVC-O0182, pixels are classified into major color pixels (with palette indices pointing to palette colors) and escape pixels. For major color pixels, the decoder reconstructs the pixel value from the major color index (i.e., the palette index in JCTVC-N0247 and JCTVC-O0182) and the palette table. For an escape pixel, the encoder additionally transmits the pixel value.
Palette table signaling
In SCM-2.0, the reference software of the screen content coding (SCC) standard (JCTVC-R1014: Joshi et al., Screen content coding test model 2 (SCM 2), JCT-VC, 18th Meeting: Sapporo, JP, 30 June–9 July 2014, Document: JCTVC-R1014), the palette table of the last palette-coded CU is used as the predictor for the current palette table coding. In palette table coding, palette_share_flag is signaled first. If palette_share_flag is 1, all palette colors in the last coded palette table are reused for the current CU, and the current palette size equals the palette size of the last palette-coded CU. Otherwise (i.e., palette_share_flag is 0), the current palette table is signaled by indicating which palette colors in the last coded palette table can be reused, and by transmitting new palette colors. The size of the current palette is set to the size of the predicted palette (i.e., NumPredPreviousPalette) plus the size of the transmitted palette (i.e., num_signaled_palette_entries). The predicted palette is derived from previously reconstructed palette-coded CUs. When the current CU is coded in palette mode, the palette colors that are not predicted from the palette predictor are transmitted directly in the bitstream. For example, suppose the current CU is coded in palette mode with a palette size equal to 6. If three of the six major colors are predicted from the palette predictor, the other three are transmitted directly in the bitstream.
Since the palette size is 6 in this example, palette indices from 0 to 5 are used to indicate each palette-coded pixel, and each pixel can be reconstructed as a major color from the palette color table.
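The palette construction described above can be sketched as follows (a simplified illustration; the argument names mirror palette_share_flag, the reuse flags, and num_signaled_palette_entries, but the function itself is invented):

```python
def build_palette(predictor, reuse_flags, signaled_entries, palette_share_flag=False):
    """Build the current palette from the last coded palette (the predictor)."""
    if palette_share_flag:
        return list(predictor)              # reuse the whole previous palette
    # keep the predictor colors whose reuse flag is set ...
    reused = [c for c, f in zip(predictor, reuse_flags) if f]
    # ... current size = NumPredPreviousPalette + num_signaled_palette_entries
    return reused + list(signaled_entries)
```

With a 6-entry predictor, reusing three colors and signaling three new ones yields the palette of size 6 from the example.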
In SCM-2.0, if wavefront parallel processing (WPP) is not applied, the last coded palette table is initialized (i.e., reset) at the beginning of each slice or at the beginning of each tile. If WPP is applied, the last coded palette table is initialized not only at the beginning of each slice or each tile, but also at the beginning of each CTU row.
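The reset rule can be summarized in a small predicate (an illustrative helper, not SCM code; the flag names are assumptions):

```python
def should_reset_predictor(first_cu_in_slice, first_cu_in_tile,
                           first_ctu_in_row, wpp_enabled):
    """Decide whether the last coded palette table must be reset here."""
    if first_cu_in_slice or first_cu_in_tile:
        return True                          # always reset at slice/tile starts
    return wpp_enabled and first_ctu_in_row  # with WPP, also at each CTU row
```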
Palette index map scan order
In SCM-3.0 (JCTVC-S1014: Joshi et al., Screen content coding test model 3 (SCM 3), JCT-VC, 19th Meeting: Strasbourg, FR, 17–24 October 2014, Document: JCTVC-S1014) palette mode coding, a traverse scan is used for index map coding, as shown in Fig. 1. Fig. 1 shows an example of the traverse scan of an 8×8 block. In the traverse scan, even rows are scanned from left to right and odd rows from right to left. The traverse scan applies to all block sizes in palette mode.
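The traverse scan order can be generated as follows (a small illustrative helper):

```python
def traverse_scan(width, height):
    """Yield (x, y) positions: even rows left-to-right, odd rows right-to-left."""
    order = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        order.extend((x, y) for x in xs)
    return order
```

Consecutive scan positions are always spatially adjacent, which is what makes runs of equal indices longer than under a raster scan.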
Palette index map codec in SCM-4.0
In SCM-4.0 (JCTVC-T1014: Joshi et al., Screen content coding test model 4 (SCM 4), JCT-VC, 20th Meeting: Geneva, CH, 10–18 February 2015, Document: JCTVC-T1014), the palette indices are grouped together and signaled at the front of the coded data of the corresponding block (i.e., before the palette_run_mode and palette_run coding). The escape pixels, on the other hand, are coded at the end of the coded data of the block. The syntax elements palette_run_mode and palette_run are coded between the palette indices and the escape pixels. Fig. 2 shows an exemplary flow of index map syntax signaling according to SCM-4.0. The number of indices (210), the last run type (230), and the grouped indices (220) are signaled. After the index information is signaled, pairs of run type (240) and run length (250) are signaled repeatedly. Finally, grouped escape values (260) are signaled if necessary.
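The Fig. 2 signaling order can be sketched as a parse routine (the stream layout and end-of-runs sentinel are invented for illustration; real SCM-4.0 parsing is entropy-coded and more involved):

```python
def parse_palette_block(stream):
    """stream: an iterable yielding fields in the Fig. 2 order."""
    s = iter(stream)
    num_index = next(s)                     # (210) number of indices
    last_run_type = next(s)                 # (230) last run type
    indices = [next(s) for _ in range(num_index)]   # (220) indices grouped up front
    runs = []
    while True:
        run_type = next(s)                  # (240) run type
        if run_type is None:                # sentinel: no more runs
            break
        runs.append((run_type, next(s)))    # (250) run length
    escapes = list(s)                       # (260) grouped escape values at the end
    return num_index, last_run_type, indices, runs, escapes
```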
Palette predictor initialization
In SCM-4.0, a global palette predictor set is signaled in the PPS (picture parameter set). Instead of resetting all palette predictor states (including PredictorPaletteSize, PreviousPaletteSize, and PredictorPaletteEntries) to 0, the values obtained from the PPS are used.
Palette syntax
For the runs of indices in the index map, several syntax elements need to be signaled:
1) Run type: whether the run is a copy-above run or a copy-index run.
2) Palette index: for a copy-index run, which index this run refers to.
3) Run length: the length of the run, for both the copy-above and the copy-index type.
4) Escape pixels: if there are N (N >= 1) escape pixels in a run, N pixel values need to be signaled for these N escape pixels.
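An encoder-side sketch of deriving the first three syntax elements from an index map (a greedy strategy chosen for illustration only; a real encoder would use rate-distortion decisions, and escape handling is omitted):

```python
def encode_runs(indices, width):
    """Convert a raster-scanned index map into (run type, index, length) elements."""
    elems, pos, n = [], 0, len(indices)
    while pos < n:
        # try a copy-above run first (only valid from the second row on)
        run = 0
        while (pos + run < n and pos + run >= width
               and indices[pos + run] == indices[pos + run - width]):
            run += 1
        if run > 0:
            elems.append(('copy_above', run))
            pos += run
        else:
            # copy-index run: signal the index plus the number of extra samples
            idx, run = indices[pos], 0
            while pos + run + 1 < n and indices[pos + run + 1] == idx:
                run += 1
            elems.append(('copy_index', idx, run))
            pos += run + 1
    return elems
```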
In JCTVC-T0064 (Joshi et al., Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 20th Meeting: Geneva, CH, 10–18 February 2015), all palette indices are grouped together.
According to the existing HEVC specification, although the palette indices of each color component are grouped together, most of the other palette-coding-related data of the different color components is interleaved in the bitstream. In addition, separate storage spaces are used for Inter/Intra coded blocks and for palette coded blocks. It is therefore desirable to develop techniques that increase system throughput and/or reduce system implementation cost.
[ Summary of the invention ]
Disclosed is a video encoding and decoding method that reduces implementation cost by reusing the transform coefficient buffer for palette coding and decoding. If the current prediction mode is an intra prediction mode or an inter prediction mode, information related to the transform coefficients of the prediction residual of the current block, generated by intra or inter prediction, is stored in the transform coefficient buffer. If the current prediction mode is the palette coding mode, information related to the palette data associated with the current block is stored in the transform coefficient buffer instead. The current block is then encoded or decoded based on the transform-coefficient-related information if it is coded in intra or inter prediction mode, or based on the palette-data-related information stored in the transform coefficient buffer if it is coded in palette mode.
If the current prediction mode is the palette coding mode, the palette data may correspond to the palette run type, palette index, palette run, escape value, escape flag, palette table, or any combination thereof associated with the current block. The information related to the palette data may correspond to the palette data itself, parsed palette data, or reconstructed palette data. For example, the parsed palette indices of the samples of the current block are reconstructed in the parsing stage, and the reconstructed palette indices and reconstructed escape values are stored in the transform coefficient buffer at the decoder side. In another example, the parsed palette indices are reconstructed in the parsing stage, the reconstructed palette indices are further converted into reconstructed pixel values using the palette table, and the reconstructed pixel values and reconstructed escape values are stored in the transform coefficient buffer at the decoder side. Furthermore, a memory area may be designated to store the palette table during the parsing stage, and this memory area may be released from use by the palette table during the reconstruction stage. The escape flags may also be stored in the transform coefficient buffer at the decoder side. In another example, the escape flags are stored in one part of the transform coefficient buffer (e.g., the most significant bit (MSB) part) and the reconstructed pixel values or escape values are stored in another part of the transform coefficient buffer.
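The MSB-packing example above can be sketched as follows (the 16-bit word size and bit layout are illustrative assumptions, not the specification; the point is that the escape flag and the index/escape value share one coefficient-buffer word):

```python
ESCAPE_BIT = 1 << 15   # assumed MSB of a 16-bit coefficient-buffer word

def pack_sample(is_escape, value):
    """Store the escape flag in the MSB and the index or escape value below it."""
    assert 0 <= value < ESCAPE_BIT
    return (ESCAPE_BIT | value) if is_escape else value

def unpack_sample(word):
    """Recover (escape flag, value) from one coefficient-buffer word."""
    return bool(word & ESCAPE_BIT), word & (ESCAPE_BIT - 1)
```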
In another embodiment, if the current prediction mode is the palette coding mode, all escape values of the same color component, grouped together, are parsed from the video bitstream at the decoder side, or all escape values of the same color component are grouped together at the encoder side. The information comprising the escape values is then used for encoding or decoding the current block. The grouped escape values of the same color component may be signaled at the end of the coded palette data of the current block. The grouped escape values of different color components may be signaled separately for the current block. The grouped escape values of the same color component of the current block may be stored in the transform coefficient buffer, and the grouped escape values of different color components may share the transform coefficient buffer by storing the grouped escape values of one color component at a time.
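Grouping escape values of the same color component can be sketched as follows (an encoder-side illustration; the data layout is invented):

```python
def group_escape_values(escape_pixels):
    """escape_pixels: list of (Y, U, V) escape values in scan order.

    Returns all Y escape values first, then all U, then all V, so the
    values of one component can occupy the shared buffer at a time."""
    grouped = []
    for comp in range(3):
        grouped.extend(p[comp] for p in escape_pixels)
    return grouped
```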
In another embodiment, all initial palette predictors of the same color component, grouped together in a sequence parameter set (SPS), picture parameter set (PPS), or slice header, are parsed from the video bitstream at the decoder side, or all initial palette predictors of the same color component are grouped together at the encoder side. At least one palette coded block within the corresponding sequence, picture, or slice is then encoded or decoded using the initial palette predictors.
In another embodiment, all palette predictor entries or palette entries of the same color component for the current block, grouped together, are parsed from the video bitstream at the decoder side, or are grouped together at the encoder side. The current block is then encoded or decoded using a palette predictor consisting of all the palette predictor entries, or a palette table consisting of all the palette entries.
[ Description of the drawings ]
Fig. 1 shows an example of a traversal scan of 8×8 blocks.
Fig. 2 illustrates exemplary palette index map syntax signaling according to screen content codec test module version 4 (SCM-4.0).
Fig. 3A illustrates an example of flipping the index map before index coding.
Fig. 3B shows an example of a flipped index map corresponding to the index map in fig. 3A.
Fig. 4 shows an example of flipping the index map before index map coding, where predicting the pixels of the physically last row from the above neighboring constructed pixels is inefficient.
Figs. 5A-B illustrate an example of prediction from the above CU with a flipped index map according to an embodiment of the invention, where an index in copy-above run mode is always predicted from the position physically above it, regardless of whether the index map is flipped. In Fig. 5A, the line-filled blocks represent the flipped index map, while the transparent blocks in Fig. 5B represent the original index map.
Figs. 6A-B show another example of prediction from the above CU with a flipped index map according to an embodiment of the invention, where pixels in the first line coded in copy-above run mode are predicted from their physically nearest samples. In Fig. 6A, the line-filled blocks represent the flipped index map, while the transparent blocks in Fig. 6B represent the original index map.
Figs. 7A-B illustrate another example of prediction from the above CU with a flipped index map according to an embodiment of the invention, where pixels in the last line coded in copy-above run mode are predicted from the physical pixel positions of the above neighboring CU. In Fig. 7A, the line-filled blocks represent the flipped index map, while the transparent blocks in Fig. 7B represent the original index map.
Fig. 8A shows an example of the extended copy-above run mode, in which two rows of pixels are copied from the rows above the CU boundary (i.e., L=2).
Fig. 8B shows an example of cross-CU prediction, where a syntax element pixel_num (M) is signaled to indicate that M (i.e., M=11) samples are predicted from previously reconstructed pixels.
Fig. 9A shows an example of predicting pixel values of the first two rows of samples by reconstructed pixel values of the last row of the upper CU.
Fig. 9B shows an example of predicting pixel values of the first two columns of samples by reconstructed pixel values of the rightmost column of the left CU.
Fig. 10A-C illustrate three different scan patterns for cross-CU prediction according to an embodiment of the invention.
Fig. 11A-C illustrate three different scan patterns for cross-CU prediction according to another embodiment of the invention.
Fig. 12A-B illustrate two different scan patterns for inverse scanning across CU predictions, according to an embodiment of the present invention.
Fig. 13 shows an example of extending the line-based copying of pixels from a neighboring CU to an 8×8 CU coded in inter mode.
Fig. 14 shows an example of changing the positions of neighboring reference pixels according to an embodiment of the present invention, in which the upper right reference pixel is copied from the upper right CU.
Fig. 15 shows another example of changing the positions of adjacent reference pixels according to an embodiment of the present invention, in which an upper right reference pixel is copied from the rightmost pixel of the third row.
Fig. 16 shows an example of escape color coding according to an embodiment, with N escape pixels at different positions within the current coded block (N=5), where the pixel value of each escape pixel occurrence is still written into the bitstream and a horizontal traverse scan is used.
Fig. 17 shows an example of escape color coding according to another embodiment, with N escape pixels at different positions within the current coded block (N=5), where only non-repeated colors are coded.
Fig. 18 shows an example of using a special index, denoted N, for neighboring constructed pixels (NCPs) across the CU boundary.
Fig. 19 shows an example of using a special index for neighboring constructed pixels (NCPs) in the case where the maximum index value is 0.
Fig. 20 shows an example of using a special index for neighboring constructed pixels (NCPs) in the case where the maximum index value is 1.
Fig. 21 shows an exemplary flow chart of signaling for supporting index prediction across CUs, in which a new flag all_pixel_from_ncp_flag is added and used for index prediction across CUs according to the syntax of SCM 3.0.
Fig. 22 shows an exemplary flow chart of signaling for supporting inter-CU index prediction, in which a new flag all_pixel_from_ncp_flag is added, and when all_pixel_from_ncp_flag is turned off, syntax according to SCM3.0 is used without inter-CU index prediction.
Fig. 23 shows another exemplary flow chart similar to Fig. 22. However, when the maximum index value is not 0, cross-CU index prediction is used according to the syntax of SCM 3.0.
Fig. 24 shows an exemplary flow diagram of signaling for supporting index prediction across CUs according to an embodiment of the invention.
Fig. 25 shows an exemplary flow diagram of signaling for supporting index prediction across CUs according to another embodiment of the invention.
Fig. 26 shows an exemplary flow diagram of signaling for supporting index prediction across CUs according to another embodiment of the invention.
Fig. 27 shows an exemplary flow diagram of signaling for supporting index prediction across CUs according to another embodiment of the invention.
Fig. 28A shows an exemplary flow diagram of signaling for supporting index prediction across CUs according to another embodiment of the invention.
Fig. 28B illustrates an exemplary flow diagram of signaling for supporting index prediction across CUs according to another embodiment of the invention.
Fig. 29 shows an example of source pixels for intra block copy prediction and compensation, in which the dot-filled region corresponds to the unfiltered pixels in the current CTU (coding tree unit) and the left CTU.
Fig. 30 shows another example of source pixels for intra block copy prediction and compensation, in which the dot-filled region corresponds to the unfiltered pixels in the bottom four rows of the current CTU, the left CTU and the upper CTU, and the bottom four rows of the upper-left CTU.
Fig. 31 shows another example of source pixels for intra block copy prediction and compensation, in which the dot-filled region corresponds to the unfiltered pixels in the current CTU, the bottom four rows of the N left CTUs, the upper CTUs, and the bottom four rows of the N upper-left CTUs.
Fig. 32 shows another example of source pixels for intra block copy prediction and compensation, in which the dot-filled region corresponds to the unfiltered pixels in the current CTU, the bottom four rows of the upper CTU, and the bottom four rows of the left CTU.
Fig. 33 shows another example of source pixels for intra block copy prediction and compensation, in which the dot-filled region corresponds to the unfiltered pixels in the current CTU, the bottom four rows of the N left CTUs, the top four rows of the N upper-left CTUs, and the right four columns of the (N+1)-th left CTU.
Fig. 34 shows an exemplary flowchart of a system for palette codec block sharing transform coefficient buffers in conjunction with an embodiment of the present invention.
Detailed Description of the Invention
The following description is of the best contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by the appended claims.
Reuse of HEVC transform coefficient buffers for palette related information
In HEVC, inter prediction and intra prediction are available coding modes in addition to palette coding. When inter or intra prediction is used, transform coding is typically applied to the prediction residual resulting from inter/intra prediction. The transform coefficients are then quantized and entropy coded for inclusion in the coded bitstream. On the decoder side, the inverse operations are applied to the received bitstream. In other words, entropy decoding is applied to the bitstream to recover the coded symbols corresponding to the quantized transform coefficients. The quantized transform coefficients are then dequantized and inverse transformed to reconstruct the inter/intra prediction residual. A transform coefficient buffer is often used at the encoder side and the decoder side to store transform coefficients, as needed between the entropy coding/decoding and the quantization/transform operations.
For color video data, multiple transform coefficient buffers may be required. However, the system may also be configured to process one color component at a time, so that only one transform coefficient buffer is needed. When a block is palette coded, the block is not transformed and the transform coefficient buffer is not used. To save system implementation cost, embodiments of the present invention reuse the transform coefficient buffer to store palette-coding-related data. Thus, for inter/intra coded blocks, the decoded coefficients of the TUs are stored in a coefficient buffer during the coefficient parsing stage. However, for an SCC (screen content coding) coded block, the palette coded block does not require residual coding. Thus, according to one embodiment of the invention, the transform coefficient buffer is used to store palette-coding-related information, which may include the palette run type, the palette index, the palette run, the escape (jump-out) value, the escape flag, the palette table, or any combination thereof.
For example, the parsed palette indices of the samples of the block are reconstructed in the parsing stage. The transform coefficient buffer is then used to store the reconstructed palette indices and the escape values.
In another example, the parsed palette indices of the samples of the block are reconstructed in the parsing stage. The parsed indices are used to reconstruct the pixel values of the corresponding color components by palette table lookup during the parsing stage. Thus, the coefficient buffer is used to store the reconstructed pixel values and the escape values, or to store the reconstructed pixel values, the escape values, and the escape flags. In this case, the palette table only needs to be stored in the parsing stage; no palette table needs to be stored and maintained during the reconstruction stage. The data depth required for the transform coefficients may be higher than the data depth of the palette-coding-related data. For example, the transform coefficients may require a buffer of 16-bit depth. However, for palette coding, if the maximum palette index is 63, the storage of a palette index requires only a depth of 6 bits. Furthermore, the maximum bit length of an escape value is equal to the bit depth, which is typically 8 bits or 10 bits. Thus, a portion of the 16 bits (i.e., 8 bits or 6 bits) is free and can be used to store the escape flag information. For example, the six free MSBs (most significant bits) of a coefficient buffer entry can be used to store the escape flag. The remaining bits are used to store the reconstructed pixel value or the escape value. In the reconstruction stage, the MSBs can be used to directly reconstruct the escape values or the reconstructed pixel values.
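As a concrete sketch of this buffer reuse, the following assumes a 16-bit coefficient word, a palette index of at most 6 bits, an escape (jump-out) value of at most 10 bits, and the escape flag stored in a spare MSB; the names and the exact bit layout are illustrative assumptions, not the patent's normative format.

```python
# Hypothetical 16-bit layout for reusing a transform coefficient word
# in palette mode:
#   bit 15    : escape flag
#   bits 0-9  : escape value (up to 10 bits) or palette index (up to 6 bits)
ESCAPE_FLAG = 1 << 15
VALUE_MASK = 0x3FF

def pack_sample(is_escape: bool, value: int) -> int:
    """Pack one sample's palette data into a 16-bit coefficient word."""
    word = value & VALUE_MASK
    if is_escape:
        word |= ESCAPE_FLAG
    return word

def unpack_sample(word: int):
    """Recover (is_escape, value) from a 16-bit coefficient word."""
    return bool(word & ESCAPE_FLAG), word & VALUE_MASK

# A palette index of 5 and a 10-bit escape value of 731 each fit in one word.
coeff_buf = [pack_sample(False, 5), pack_sample(True, 731)]
assert unpack_sample(coeff_buf[0]) == (False, 5)
assert unpack_sample(coeff_buf[1]) == (True, 731)
```

In the reconstruction stage, testing the MSB is then enough to decide whether the low bits hold an escape value or a palette index / reconstructed pixel value, without consulting any side structure.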
In yet another example, the parsed palette indices of the samples are reconstructed in the parsing stage, and palette table lookup is also used in the parsing stage to reconstruct the values of the different color components. The pixel values of the escape samples are also reconstructed from the escape values in the parsing stage. Thus, the coefficient buffer is used to store the reconstructed pixel values. In this case, the palette table only needs to be stored in the parsing stage. The reconstruction stage does not require the storage and maintenance of palette tables.
Aggregating the escape values of the same color component
In JCTVC-T0064 (Joshi et al., "Screen content coding test model 4 (SCM 4)", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 20th Meeting: Geneva, CH, 10-18 Feb. 2015, Document: JCTVC-T1014), the escape values of the three color components are signaled together.
Table 1.
Since there is no residual coding for the palette mode, the coefficient buffer can be reused to store the palette index map information. In HEVC coefficient parsing, only the TU of one color component is parsed at a time. The syntax parser therefore has, and can access, only one coefficient buffer for a single color component. However, in palette coding, one palette index represents the pixel values of multiple color components, and the palette mode may decode multiple color component values at a time. The palette index can be stored in the coefficient buffer of one color component. However, in the current SCM-4.0 syntax parsing order, the escape values may require three coefficient buffers for the three color components.
Thus, according to another embodiment, the syntax parsing order of the escape values is changed to overcome the need for three coefficient buffers to store the three color components. The escape values of the same color component are grouped together and signaled at the end of the palette syntax coding of a block. The escape values of different color components are signaled separately. An exemplary syntax for palette coding in connection with the present embodiment is shown in Table 2.
Table 2.
In Table 2, text in the shaded region indicates deletion. The do-loop statement over the color components (i.e., "for( cIdx = 0; cIdx < numComps; cIdx++ )") is moved from the original position shown by annotation (2-1) to the new position shown by annotation (2-2). Thus, all the escape values of one color component of the CU can be parsed together. The palette indices and the escape values of the first color component (i.e., cIdx equal to 0) can be written out, and the parser can then reuse the same buffer to parse the information of the second color component. With the syntax modification in Table 2, one coefficient buffer storing the palette-coding-related data of one color component is sufficient for palette mode parsing. The implementation complexity and cost of the SCC parser do not increase.
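The reordering described above can be sketched as follows; the function name and the list-of-triplets representation are illustrative assumptions.

```python
def group_escapes_by_component(escape_samples, num_comps=3):
    """Reorder per-sample escape (jump-out) values into the modified
    signaling order: all values of component 0, then component 1, etc.
    escape_samples is a list of [Y, Cb, Cr] triplets, one per escape pixel."""
    grouped = []
    for c_idx in range(num_comps):       # outer loop over color components
        for sample in escape_samples:    # inner loop over escape pixels
            grouped.append(sample[c_idx])
    return grouped

# Two escape pixels: the per-sample interleaved order would be
# [10, 20, 30, 11, 21, 31]; the grouped order lets the parser finish
# one component (and reuse its single buffer) before starting the next.
assert group_escapes_by_component([[10, 20, 30], [11, 21, 31]]) == \
    [10, 11, 20, 21, 30, 31]
```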
Aggregating palette predictor initializers for the same color component
In SCM 4.0, a set of PPS (picture parameter set) syntax elements is signaled to specify the palette predictor initializers. The existing syntax of the PPS extension is shown in Table 3.
Table 3.
The palette predictor initialization procedure according to the existing SCM 4.0 is as follows:
The outputs of this process are the initialized palette predictor variables PredictorPaletteSize and PredictorPaletteEntries.
PredictorPaletteSize is derived as follows:
- If palette_predictor_initializer_present_flag is equal to 1, PredictorPaletteSize is set equal to num_palette_predictor_initializer_minus1 plus 1.
- Otherwise (palette_predictor_initializer_present_flag is equal to 0), PredictorPaletteSize is set equal to 0.
The PredictorPaletteEntries array is derived as follows:
- If palette_predictor_initializer_present_flag is equal to 1:
for( i = 0; i < PredictorPaletteSize; i++ )
for( comp = 0; comp < 3; comp++ )
PredictorPaletteEntries[ i ][ comp ] = palette_predictor_initializers[ i ][ comp ]
- Otherwise (palette_predictor_initializer_present_flag is equal to 0), PredictorPaletteEntries is set to 0.
In one embodiment, the predictor initializer values can be grouped together for the Y component, the Cb component, and the Cr component, respectively. The syntax changes are shown in Table 4. Compared to Table 3, the two do-loop statements are swapped, as shown by annotation (4-1) and annotation (4-2), to group the palette predictor initializers of the same color component.
Table 4.
An exemplary palette predictor initialization process corresponding to the above-described embodiment is as follows.
The outputs of this process are the initialized palette predictor variables PredictorPaletteSize and PredictorPaletteEntries.
PredictorPaletteSize is derived as follows:
- If palette_predictor_initializer_present_flag is equal to 1, PredictorPaletteSize is set equal to num_palette_predictor_initializer_minus1 plus 1.
- Otherwise (palette_predictor_initializer_present_flag is equal to 0), PredictorPaletteSize is set equal to 0.
The PredictorPaletteEntries array is derived as follows:
- If palette_predictor_initializer_present_flag is equal to 1:
for( comp = 0; comp < 3; comp++ )
for( i = 0; i < PredictorPaletteSize; i++ )
PredictorPaletteEntries[ i ][ comp ] = palette_predictor_initializers[ i ][ comp ]
- Otherwise (palette_predictor_initializer_present_flag is equal to 0), PredictorPaletteEntries is set to 0.
The same aggregation method can be applied if the palette predictor initializers are signaled in the SPS (sequence parameter set) or the slice header. In other words, all the initializer values of the Y component are signaled together, all the initializer values of the Cb component are signaled together, and all the initializer values of the Cr component are signaled together.
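The effect of swapping the two loops can be illustrated with a small sketch; the entry values are arbitrary and purely illustrative. The resulting PredictorPaletteEntries array is identical either way; only the order in which the initializer values appear in the bitstream changes.

```python
# entries[comp][i]: 3 color components, 2 predictor entries (arbitrary values)
entries = [[16, 32], [64, 80], [96, 112]]

# Entry-major order (i outer, comp inner), as in the existing Table 3 syntax:
entry_major = [entries[comp][i] for i in range(2) for comp in range(3)]
# Component-major order (comp outer, i inner), as in the modified Table 4 syntax:
comp_major = [entries[comp][i] for comp in range(3) for i in range(2)]

# All Y initializers come first in the component-major bitstream order.
assert entry_major == [16, 64, 96, 32, 80, 112]
assert comp_major == [16, 32, 64, 80, 96, 112]
```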
Aggregating palette predictor updates for the same color component
The existing palette predictor update procedure is as follows.
The variable numComps is derived as shown in equation (1):
numComps=(ChromaArrayType==0)?1:3 (1)
The variables PredictorPaletteSize and PredictorPaletteEntries are modified as shown by the pseudocode in Table 5.
Table 5.
As shown in Table 5, the first part of the newPredictorPaletteEntries update process is performed in the double do-loop represented by annotation (5-1) and annotation (5-2), where the do-loop over the color components (i.e., cIdx) is the inner do-loop. Thus, for each palette entry, the three color components are updated. The remaining two do-loop statements of the newPredictorPaletteEntries update are represented by annotation (5-3) and annotation (5-4), where the do-loop over the color components is again the inner loop. Thus, for each entry, the three color components are updated. The two do-loop statements of the PredictorPaletteEntries update are represented by annotation (5-5) and annotation (5-6), where the do-loop over the color components is the outer loop. Thus, the PredictorPaletteEntries values are updated one color component at a time.
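Since the table contents are not reproduced here, the following minimal sketch shows only the overall predictor update rule that Tables 5 to 8 implement in different loop orders; the function name, the list-of-tuples representation, and the reuse-flag input are assumptions for illustration.

```python
def update_predictor(current_palette, predictor, reused, max_size):
    """After coding a palette block, the new predictor starts with the
    current palette, followed by the previous predictor entries that were
    not reused, truncated to max_size. Entries are (Y, Cb, Cr) tuples, so
    one loop iteration updates all three components at once (the
    entry-major order); iterating per component instead gives the
    component-major variants without changing the result."""
    new_pred = list(current_palette)
    for entry, was_reused in zip(predictor, reused):
        if len(new_pred) >= max_size:
            break
        if not was_reused:
            new_pred.append(entry)
    return new_pred

cur = [(1, 2, 3), (4, 5, 6)]       # current palette of the just-coded block
old = [(1, 2, 3), (7, 8, 9)]       # previous predictor; entry 0 was reused
assert update_predictor(cur, old, [True, False], 4) == \
    [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
```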
In one embodiment, the palette predictor entries of the three respective components are grouped together and then updated. An exemplary modification of the existing update process is shown below.
The derivation of the variable numComps remains the same as in equation (1). The variables PredictorPaletteSize and PredictorPaletteEntries are modified as shown by the following exemplary pseudocode in Table 6.
Table 6.
In the above example, the two do-loop statements of the first part of the newPredictorPaletteEntries update are represented by annotation (6-1) and annotation (6-2), where the do-loop over the color components is the outer loop. Thus, the remaining newPredictorPaletteEntries values are updated one color component at a time. The newPredictorPaletteEntries update process indicated by the double do-loop of annotation (6-3) and annotation (6-4) remains the same as the existing update process in Table 5. The PredictorPaletteEntries update process indicated by the double do-loop of annotation (6-5) and annotation (6-6) also remains the same as the existing update process in Table 5.
In another embodiment, the variables PredictorPaletteSize and PredictorPaletteEntries are modified as shown by the following exemplary pseudocode in Table 7.
Table 7.
In the above example, the two do-loop statements of the first part of the newPredictorPaletteEntries update are represented by annotation (7-1) and annotation (7-2), where the do-loop over the color components is the outer loop. Thus, the remaining newPredictorPaletteEntries values are updated one color component at a time. The newPredictorPaletteEntries update process indicated by the double do-loop of annotation (7-3) and annotation (7-4) also has the do-loop over the color components as the outer loop. Thus, the newPredictorPaletteEntries values are updated one color component at a time. The PredictorPaletteEntries update process indicated by the double do-loop of annotation (7-5) and annotation (7-6) remains the same as the existing update process in Table 5.
In another embodiment, the variables PredictorPaletteSize and PredictorPaletteEntries are modified as shown by the following exemplary pseudocode in Table 8.
Table 8.
In the above example, the double do-loop statements of the newPredictorPaletteEntries and PredictorPaletteEntries update processes are represented by annotation (8-1), annotation (8-2), annotation (8-3), and annotation (8-4), where the do-loops over the color components are the outer loops. Thus, the newPredictorPaletteEntries and PredictorPaletteEntries values are updated one color component at a time.
Various examples of aggregating the palette predictor entries of the same color component during the update process have been shown in Tables 6 to 8, where the update process has been demonstrated using HEVC syntax elements according to embodiments of the present invention. However, the present invention is not limited to the specific syntax elements and the specific pseudocode listed in these examples. One skilled in the art may practice the invention without departing from its spirit.
Aggregating palette entry semantics for the same color component
In the current SCM 4.0, a syntax element palette_entry is used to specify the value of a component in a palette entry of the current palette. The variable PredictorPaletteEntries[ cIdx ][ i ] specifies the i-th element in the predictor palette for the color component cIdx. The variable numComps is derived as shown in equation (1). The variable CurrentPaletteEntries[ cIdx ][ i ] specifies the i-th element in the current palette for the color component cIdx and is derived as shown by the following exemplary pseudocode in Table 9.
Table 9.
As shown in Table 9, the CurrentPaletteEntries copied from the palette predictor are updated as shown by annotation (9-1) and annotation (9-2), where the do-loop over the color components is the inner loop. Thus, for each entry, the three color components are updated. In addition, the CurrentPaletteEntries from the new palette entries are updated as shown by annotation (9-3) and annotation (9-4), where the do-loop over the color components is the outer loop. Thus, the CurrentPaletteEntries values are updated one color component at a time.
In one embodiment, the palette entries of each of the three components can be grouped together. Exemplary changes to the existing process are shown below.
The variable numComps is derived as shown in equation (1). The variable CurrentPaletteEntries[ cIdx ][ i ] specifies the i-th element in the current palette for the color component cIdx and is derived as shown by the following exemplary pseudocode in Table 10.
Table 10.
As shown in Table 10, the CurrentPaletteEntries copied from the palette predictor are updated as shown by annotation (10-1) and annotation (10-2), where the do-loop over the color components is the outer loop. Thus, the CurrentPaletteEntries values are updated one color component at a time. The CurrentPaletteEntries from the new palette entries are updated as shown by annotation (10-3) and annotation (10-4) and remain the same as in Table 9.
In another embodiment, the three components may be combined together. The modifications are shown below.
The variable numComps is derived as shown in equation (1). The variable CurrentPaletteEntries[ cIdx ][ i ] specifies the i-th element in the current palette for the color component cIdx and is derived as shown by the following exemplary pseudocode in Table 11.
Table 11.
As shown in Table 11, the CurrentPaletteEntries copied from the palette predictor and the CurrentPaletteEntries from the new palette entries are updated as shown by annotation (11-1), annotation (11-2), annotation (11-3), and annotation (11-4), where the do-loops over the color components are the outer loops. Thus, the CurrentPaletteEntries values are updated one color component at a time.
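Since the tables are not reproduced here, the current-palette construction they all perform can be sketched at entry level; the function name, the tuple representation, and the reuse-flag input are assumptions for illustration. The entry-major and component-major loop orders of Tables 9 to 11 produce the same result; only the traversal order differs.

```python
def build_current_palette(predictor, reused_flags, new_entries):
    """CurrentPaletteEntries: the reused predictor entries (kept in
    predictor order) followed by the newly signaled palette entries.
    Entries are (Y, Cb, Cr) tuples."""
    current = [e for e, r in zip(predictor, reused_flags) if r]
    current.extend(new_entries)
    return current

pred = [(0, 0, 0), (10, 20, 30), (40, 50, 60)]
# Entries 0 and 2 of the predictor are reused; one new entry is signaled.
assert build_current_palette(pred, [True, False, True], [(7, 7, 7)]) == \
    [(0, 0, 0), (40, 50, 60), (7, 7, 7)]
```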
Various examples of aggregating the palette entry semantics of the same color component during the update process have been shown in Tables 10 and 11, where the update process has been demonstrated using HEVC syntax elements according to embodiments of the present invention. However, the present invention is not limited to the specific syntax elements and the specific pseudocode listed in these examples. One skilled in the art may practice the invention without departing from its spirit.
Other palette syntax aggregation
In SCM-4.0, the palette indices are grouped at the front and the escape values are grouped at the end. The palette indices and the escape values are coded with bypass bins. Grouping the escape values with the palette indices (i.e., grouping the bypass-coded bins) can increase the parsing throughput. In SCM-4.0, the number of escape values to parse depends on the palette run mode, the palette index, and the palette run. Thus, the escape values can only be grouped at the end. In order to group the escape values with the palette indices, several methods are disclosed as follows.
Method 1: group the escape values at the front with the palette indices.
If the escape values are grouped at the front, the number of escape values to be parsed should be independent of the palette runs. For escape value parsing, to remove the data dependency on the palette runs, embodiments of the present invention change the behavior of the copy-above run mode when escape samples are copied from the above row. If the predictor is an escape sample, the escape sample is treated as a predefined color index in the predictor copy mode.
In the copy-above run mode, a palette run value is transmitted or derived to indicate the number of subsequent samples to be copied from the above row; the color indices are equal to the corresponding color indices in the above row. In one embodiment, if the above or left sample is an escape sample, its color index is treated as a predefined color index (e.g., 0), and the current index is set to that predefined index. These predictor copy modes then require no escape value. In this method, a palette run can be signaled for the index run mode even if the signaled index is equal to the escape index. If the run of the escape index is greater than 0 (e.g., N > 0), the first sample is reconstructed using the coded escape value. The index of the first sample can be set to the escape index or to the predefined color index. The indices of the remaining N samples are set to the predefined index (e.g., index 0), and the remaining N samples are reconstructed using the value of the predefined index. In one embodiment, the maximum codeword index in the index run mode (e.g., adjustIndexMax) is fixed (e.g., fixed to indexMax - 1) except for the first sample of the index run mode; for the first sample in the CU, adjustIndexMax is equal to indexMax. Redundant index removal can still be applied. In this method, the number of escape values that need to be parsed depends on the number of parsed/reconstructed indices that are equal to the escape index. For example, if all the bins of the truncated binary code of a palette index are 1, the parsed index is the escape index. The number of escape values that need to be parsed is therefore independent of the palette runs. Thus, the syntax for the escape values can be placed at the front together with the palette indices, before the palette runs.
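A minimal index-level sketch of method 1 follows; the run representation, the function names, and the predefined index of 0 are assumptions for illustration, not the patent's syntax.

```python
def decode_index_map(runs, width, escape_index, predefined_index=0):
    """Decode a palette index map under method 1.
    runs: list of ('index', idx, length) or ('copy_above', length).
    - In an index run of the escape index, only the first sample is an
      escape sample; the remaining samples take the predefined index.
    - In copy-above mode, copying an escape sample yields the predefined
      index instead, so no extra escape value is ever needed."""
    out = []
    for run in runs:
        if run[0] == 'index':
            _, idx, length = run
            if idx == escape_index:
                out.append(idx)
                out.extend([predefined_index] * (length - 1))
            else:
                out.extend([idx] * length)
        else:
            _, length = run
            for _ in range(length):
                above = out[len(out) - width]
                out.append(predefined_index if above == escape_index else above)
    return out

def num_escape_values(runs, escape_index):
    """The escape value count depends only on the signaled indices, not on
    the run lengths: one per index run whose index is the escape index."""
    return sum(1 for r in runs if r[0] == 'index' and r[1] == escape_index)

# 4-wide block, escape index 3: the copy-above run over the escape sample
# produces the predefined index 0 rather than another escape sample.
runs = [('index', 3, 1), ('index', 0, 3), ('copy_above', 4)]
assert decode_index_map(runs, width=4, escape_index=3) == \
    [3, 0, 0, 0, 0, 0, 0, 0]
assert num_escape_values(runs, escape_index=3) == 1
```

Because num_escape_values can be evaluated from the index aggregation alone, the escape values can be parsed before any run lengths are known.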
Syntax order example 1: number of copy_above_run or number of index_run → last_run_mode → run type aggregation (context coded) → palette index aggregation (bypass coded) → escape value aggregation (bypass coded) → run length aggregation.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes and last_run_mode is index_run, the run type aggregation is terminated. If the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes and last_run_mode is copy_above_run, the run type aggregation is also terminated and a copy_above_run is inserted at the end.
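The termination rule just described can be sketched as follows; the boolean flag representation and the mode strings are assumptions for illustration.

```python
def decode_run_types(type_flags, num_index_runs, last_run_mode):
    """Run type aggregation with early termination.
    type_flags: signaled run-type bins (True = index_run).  Parsing stops
    once num_index_runs index_run flags have been read; if last_run_mode
    is copy_above_run, a final unsignaled copy_above_run is appended."""
    run_types, seen = [], 0
    for flag in type_flags:
        run_types.append('index_run' if flag else 'copy_above_run')
        if flag:
            seen += 1
            if seen == num_index_runs:
                break
    if last_run_mode == 'copy_above_run':
        run_types.append('copy_above_run')
    return run_types

# Two index runs were signaled; the trailing copy_above_run of the block
# is never transmitted and is inserted by the decoder.
assert decode_run_types([True, False, True], 2, 'copy_above_run') == \
    ['index_run', 'copy_above_run', 'index_run', 'copy_above_run']
```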
Syntax order example 2: number of copy_above_run or number of index_run → run type aggregation (context coded) → last_run_mode → palette index aggregation (bypass coded) → escape value aggregation (bypass coded) → run length aggregation.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated, and last_run_mode is then signaled. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end.
Syntax order example 3: number of copy_above_run or number of index_run → run type aggregation (context coded) → palette index aggregation (bypass coded) → escape value aggregation (bypass coded) → last_run_mode → run length aggregation.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated, and last_run_mode is signaled. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end.
Syntax order example 4: number of copy_above_run or number of index_run → run type aggregation (context coded) → palette index aggregation (bypass coded) → last_run_mode → escape value aggregation (bypass coded) → run length aggregation.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated, and last_run_mode is signaled. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end.
Syntax order example 5: number of copy_above_run or number of index_run → palette index aggregation → escape value aggregation → last_run_mode → run type aggregation → run length aggregation.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end without being signaled. For the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
Syntax order example 6: number of copy_above_run or number of index_run → palette index aggregation → last_run_mode → escape value aggregation → run type aggregation → run length aggregation.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end without being signaled. For the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
Syntax order example 7: number of copy_above_run or number of index_run → palette index aggregation → escape value aggregation → last_run_mode → interleaved {palette run type, palette run length}.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end without being signaled. For the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
Syntax order example 8: number of copy_above_run or number of index_run → palette index aggregation → last_run_mode → escape value aggregation → interleaved {palette run type, palette run length}.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when coding/decoding the run type aggregation, if the number of index_run modes already coded/decoded is equal to the signaled number of index_run modes, the run type aggregation is terminated. If last_run_mode is copy_above_run, a copy_above_run is inserted at the end without being signaled. For the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
In the above examples, copy_above_run corresponds to the previously described exemplary "copy-above run mode", and index_run corresponds to the exemplary "index run mode".
In examples 1, 2, 3, and 5 of the above syntax orders, "palette index aggregation (bypass coded) → escape value aggregation (bypass coded)" can be replaced by "interleaved {palette index, escape value}". The palette indices and the escape values can be coded in an interleaved manner: if a parsed index is equal to the escape index, the corresponding escape value can be parsed immediately.
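The interleaved variant can be sketched as follows; the flat symbol stream and the names are illustrative assumptions.

```python
def parse_interleaved(symbols, num_indices, escape_index):
    """Interleaved {palette index, escape value} parsing: whenever a
    parsed index equals the escape index, its escape value follows
    immediately in the symbol stream."""
    indices, escape_values = [], []
    it = iter(symbols)
    for _ in range(num_indices):
        idx = next(it)
        indices.append(idx)
        if idx == escape_index:
            escape_values.append(next(it))  # escape value follows at once
    return indices, escape_values

# Three indices with escape index 3; 255 is the escape value of pixel 2.
assert parse_interleaved([0, 3, 255, 1], 3, 3) == ([0, 3, 1], [255])
```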
In examples 2 to 7 of the above syntax orders, last_run_mode can instead be signaled right after the number of copy_above_run or the number of index_run.
In examples 1 to 8 of the above syntax orders, last_run_mode can be signaled anywhere before the last palette run is signaled.
In examples 1 to 4 of the above syntax orders, the palette run type aggregation is decoded before the palette run length aggregation. Thus, for palette run signaling, the maximum possible run can further be reduced by the number of remaining index run modes, the number of remaining copy-above run modes, or the number of remaining index run modes plus the number of remaining copy-above run modes. For example, maxPaletteRun = nCbS * nCbS - scanPos - 1 - number of remaining copy_index_mode, or maxPaletteRun = nCbS * nCbS - scanPos - 1 - number of remaining copy_above_mode, or maxPaletteRun = nCbS * nCbS - scanPos - 1 - number of remaining copy_above_mode - number of remaining copy_index_mode. In the above, maxPaletteRun corresponds to an exemplary syntax element for the maximum palette run, nCbS corresponds to the size of the current luma coding block, and scanPos corresponds to the scan position of the current pixel.
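The maxPaletteRun bounds above can be sketched as a single helper, assuming a square CU of nCbS × nCbS samples and that each remaining run mode must cover at least one later sample.

```python
def max_palette_run(nCbS, scanPos, remaining_copy_index=0, remaining_copy_above=0):
    """Upper bound on the current run length: samples left in the CU after
    the current one, minus one sample for each run mode still to come."""
    return nCbS * nCbS - scanPos - 1 - remaining_copy_index - remaining_copy_above

# 8x8 CU, current sample at scan position 10, with 2 index runs and
# 1 copy-above run still to be decoded: 64 - 10 - 1 - 3 = 50.
assert max_palette_run(8, 10, remaining_copy_index=2, remaining_copy_above=1) == 50
```

A tighter bound shortens the codeword of the run length, which is why decoding the run type counts first is useful.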
In examples 1 to 7 of the above syntax orders, for the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
Method 2: group the palette indices at the end with the escape values.
In SCM-4.0, the context formation of the palette run depends on the palette index. Thus, the palette index can only be coded before the palette run. In order to group the palette indices at the end, the context formation of the palette run should be independent of the palette index.
Thus, the context formation of palette runs needs to be changed such that the context formation of palette runs will depend only on the current palette run mode, the previous palette run, the palette run mode of the previous sample, the palette run of the previous sample, or a combination of the above. Alternatively, palette run may be encoded and decoded with bypass binary files.
Various examples of the syntax order of palette index map codec are shown below.
Syntax order example 1: the number of copy_above_run or the number of index_run → last_run_mode → run type group (context coded) → run length group (independent of palette index) → palette index group (bypass coded) → escape value group (bypass coded).
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when decoding the run type group, if the number of decoded index_run is equal to the signaled number of index_run and last_run_mode is index_run, the run type group is terminated. If the number of decoded index_run is equal to the signaled number of index_run and last_run_mode is copy_above_run, the run type group is also terminated and one copy_above_run is inserted at the end. For the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
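The termination rule above can be sketched as follows. This is a minimal Python model, not the draft text: `parse_bit` stands in for whatever context-coded binarization the run type uses, and the constants are illustrative.

```python
INDEX_RUN, COPY_ABOVE_RUN = 0, 1

def decode_run_type_group(parse_bit, num_index_run, last_run_mode):
    """Decode run types until the signaled number of index_run modes has
    been seen; the tail is then inferred from last_run_mode."""
    run_types = []
    seen_index = 0
    while seen_index < num_index_run:
        t = parse_bit()
        run_types.append(t)
        if t == INDEX_RUN:
            seen_index += 1
    # All index runs decoded: terminate the group, appending one final
    # copy_above_run if last_run_mode says the block ends with one.
    if last_run_mode == COPY_ABOVE_RUN:
        run_types.append(COPY_ABOVE_RUN)
    return run_types
```

With num_index_run = 2 and the coded bits {copy_above, index, index}, a last_run_mode of copy_above_run yields {copy_above, index, index, copy_above}.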
Syntax order example 2: the number of copy_above_run or the number of index_run → run type group → last_run_mode → run length group → palette index group → escape value group.
In this case, last_run_mode indicates whether the last run mode is copy_above_run or index_run. For example, when decoding the run type group, if the number of decoded index_run is equal to the signaled number of index_run, the run type group is terminated and last_run_mode is signaled. If last_run_mode is copy_above_run, one copy_above_run is inserted at the end. For the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
Syntax order example 3: interleaved {palette run type, palette run length} → palette index group → escape value group.
In examples 1 to 3 of the above syntax orders, "palette index group → escape value group" may be replaced by "interleaved {palette index, escape value}". The palette indices and the escape values may be coded in an interleaved manner. If the parsed index is equal to the escape index, the escape value may be parsed immediately.
In examples 1 to 3 of the above syntax orders, "last_run_mode" may be signaled anywhere before the last palette run is signaled.
In examples 1 and 2 of the above syntax orders, the palette run group may be decoded before the palette index group. Thus, for palette run signaling, the maximum possible run may be further subtracted by the number of remaining index run modes, the number of remaining copy-above run modes, or the number of remaining index run modes plus the number of remaining copy-above run modes. For example, maxPaletteRun = nCbS*nCbS - scanPos - 1 - number of remaining copy_index_mode, or maxPaletteRun = nCbS*nCbS - scanPos - 1 - number of remaining copy_above_mode, or maxPaletteRun = nCbS*nCbS - scanPos - 1 - number of remaining copy_above_mode - number of remaining copy_index_mode.
In examples 1 and 2 of the above syntax orders, for the last palette run mode, the palette run is inferred to extend to the end of the PU/CU.
The present invention also relates to various aspects of palette coding disclosed below.
Removing line buffers in palette index map parsing
In SCM-3.0, the four palette index map syntax elements (i.e., palette run mode, palette index, palette run, and escape values) are coded in an interleaved manner. Although the context formation of the palette run mode is modified in SCM-4.0 to be independent of the palette run mode of the above sample, the index map parsing still requires information from the above row. For example, when the copy-above run mode is used, the number of escape values to be parsed depends on the number of escape pixels to be copied from the above row. When the previous coding mode is the copy-above run mode, the index reconstruction also depends on the palette index of the above sample. To save line buffers in palette index map parsing, several methods for removing these data dependencies are disclosed.
Method-1: in the predictor copy modes, if the predictor is an escape sample, the escape value is copied directly.
In order to remove the dependency in calculating the number of escape pixels to parse in the copy-above run mode, the copy-above run mode behavior is modified according to an embodiment of the present invention to copy the escape samples from the above row.
In the copy-above run mode, a "palette run" value is transmitted or derived to indicate the number of following samples to be copied from the above row. The color indices are equal to the color indices in the above row. According to one embodiment, if the predictor (i.e., the above position) is an escape sample, the current sample copies not only the index (the escape index) but also the escape value from the above row. No escape value needs to be parsed for these samples. In this method, the run may still be signaled for the index run mode even if the signaled index is equal to the escape index. If the run of the escape index is greater than 0 (e.g., N > 0), the decoder fills in the reconstructed values (or escape values) of the N samples starting from the first sample. The escape value may be signaled after the run syntax.
To remove the data dependency of the index parsing and reconstruction of the index run mode when the previous mode is the copy-above run mode, redundant index removal is disabled when the previous mode is the copy-above run mode, as described in JCTVC-T0078 (Kim et al., CE1-related: simplification for index map coding in palette mode, Joint Collaborative Team on Video Coding (JCT-VC), ISO/IEC JTC 1/SC 29/WG 11, 20th Meeting: Geneva, CH, 10-18 Feb. 2015, Document: JCTVC-T0078).
Based on Method-1, the index map parsing is independent of the information from the above row. The entropy decoder can parse all palette index map syntax by using only the dependency on the previously coded palette run modes and palette indices.
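The Method-1 copy-above behavior can be illustrated with the following sketch. This is an assumption-laden toy model: index and escape maps are plain row-major lists of lists, `ESCAPE` is an illustrative marker for the escape index, and the run is allowed to wrap across rows as in a raster scan.

```python
ESCAPE = -1  # illustrative marker for the escape index

def copy_above_run(index_map, escape_map, row, col, run, width):
    """Method-1: in copy-above run mode each sample copies the index from
    the above row; when that index is the escape index, the escape value
    is copied too, so no escape value needs to be parsed here."""
    for i in range(run):
        r, c = row + (col + i) // width, (col + i) % width
        index_map[r][c] = index_map[r - 1][c]
        if index_map[r][c] == ESCAPE:
            escape_map[r][c] = escape_map[r - 1][c]
```

If the above row contains an escape sample with value 77, the copied row reproduces both the escape index and the value 77 without touching the bitstream.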
In one embodiment, in the index run mode, a "palette run" value is transmitted or derived to indicate the number of following samples coded in the bitstream. The color index of the current position is coded. However, if the sample at the current position is an escape sample, not only the index (the escape index) of the current sample but also the escape value is coded. In this method, the run can still be signaled for the index run mode even if the signaled index is equal to the escape index. If the run of the escape index is greater than 0 (e.g., N > 0), the decoder fills in the reconstructed values (or escape values) of the N samples starting from the first sample. The escape value may be signaled after the run syntax.
To remove the data dependency of the index parsing and reconstruction of the index run mode when the previous mode is the index run mode, redundant index removal is disabled when the previous mode is the index run mode.
With the above method, the index map parsing is independent of the previous index information. The entropy decoder can parse all palette index map syntax by using only the dependency on the previously coded palette run modes and palette indices.
Method-2: in the predictor copy modes, if the predictor is an escape sample, the escape sample is treated as a predefined color index.
In the copy-above run mode, a "palette run" value is transmitted or derived to indicate the number of following samples to be copied from the above row. The color indices are equal to the color indices in the above row. According to one embodiment, if the above or left sample is an escape sample, the color index of the above sample is treated as a predefined color index (e.g., 0). The current index is set to the predefined index. No escape value is required for these predictor copy modes. According to one embodiment, a palette run may be signaled for the index run mode even if the signaled index is equal to the escape index. The escape value may be signaled after the run syntax. If the run of the escape index is greater than 0 (e.g., N > 0), the first sample is reconstructed with the coded escape value. The index of the first sample may be set to the escape index or the predefined color index. The indices of the remaining N samples are set to the predefined index (e.g., index 0), and the remaining N samples are reconstructed with the color of the predefined index (e.g., index 0). In another example, if the run of the escape index is greater than 0 (e.g., N > 0), the first sample is reconstructed with its escape value, and the escape values of the following N samples also need to be signaled; each of the remaining samples is reconstructed with its individually signaled escape value. The indices of the first sample and the remaining N samples may be set to the predefined index (e.g., index 0).
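The first reconstruction variant above (first sample from the coded escape value, remaining N samples from the predefined index) can be sketched as follows. The function name and the choice of index 0 as the predefined index are illustrative assumptions.

```python
PREDEFINED_INDEX = 0  # assumed predefined color index

def reconstruct_escape_run(palette, coded_escape_value, run_n):
    """Method-2: an index run of the escape index with run N > 0.
    The first sample is reconstructed with the coded escape value; the
    remaining N samples take the predefined index and are reconstructed
    with that palette color. Returns the N + 1 reconstructed values."""
    return [coded_escape_value] + [palette[PREDEFINED_INDEX]] * run_n
```

With palette {10, 20, 30}, a coded escape value of 99 and N = 2, the run reconstructs to {99, 10, 10}.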
According to Method-2, the maximum index codeword in the index run mode (i.e., adjustedIndexMax) is fixed (e.g., fixed at indexMax-1), except for the first sample in the CU. For the first sample in the CU, adjustedIndexMax is equal to indexMax. Redundant index removal may continue to be applied.
According to Method-2, although the index map reconstruction still requires the index values of the above samples, the entropy decoder can parse all palette index map syntax by using only the dependency on the previously coded palette run modes and palette indices.
According to JCTVC-T0078, in order to remove the data dependency of the index reconstruction of the index run mode when the previous mode is the copy-above run mode, redundant index removal is disabled when the previous mode is the copy-above run mode.
Copy-above mode with index map flipping
The present invention also addresses issues related to flipping the index map before index map coding. After the decoder flips the index map, the prediction source in the copy-above run mode corresponds to physically above pixels at different physical locations than before. Fig. 3A illustrates an example of the index map before flipping, and Fig. 3B shows an example of the flipped index map.
Fig. 3A shows an example of an original coding unit. After the index map is flipped, the pixels in the last row (i.e., pixels 0 through 7) are flipped to the first row, as shown in Fig. 3B. If prediction can cross a CU boundary, the current first row of pixels is predicted from the above neighboring constructed pixels (NCPs). As shown in Fig. 3B, the line-filled blocks indicate the flipped index map, while the clear blocks in Fig. 3A represent the original index map. For pixels in the other rows after flipping, the prediction in the copy-above run mode becomes a prediction from the pixel position that was physically below before flipping. In this approach, the index reconstruction does not require a second pass.
However, flipping the index map before index map coding implies that the above NCPs are used to predict the pixels that are physically located in the last row, as shown in Fig. 4. This prediction is inefficient because the distance between the predictor and the index to be predicted is large.
Accordingly, a method of improving coding efficiency related to index map flipping is disclosed below.
Method 1: as shown in Figs. 5A and 5B, the index in the copy-above run mode is predicted from its physically above position (or the left position if the transpose flag is on) regardless of whether the index map is flipped. As shown in Fig. 5A, the line-filled blocks represent the flipped index map, while the clear blocks in Fig. 5B represent the original index map.
Method 2: different run scan start positions are signaled. This method is similar to Method 1 in that the index in the copy-above run mode is predicted from the pixel at the physically above position. Additional information may be signaled to indicate a "run scan start position" or a "scan mode". The "run scan start position" may be upper-left, upper-right, lower-left, or lower-right. The "scan mode" may be a horizontal scan, a vertical scan, a horizontal traverse scan, or a vertical traverse scan.
Method 3: if the index map is flipped, the copy-above coded pixels in the first row are predicted from their physically nearest samples, as shown in Figs. 6A and 6B. Fig. 6A shows the flipped samples as indicated by the line-filled blocks, and Fig. 6B shows the samples at the original physical locations. If the transpose flag is off, the copy-above coded pixels in the physically last row are predicted from the samples at their physically nearest locations.
Method 4: if the index map is flipped, the copy-above coded pixels in the last row are predicted from the physical pixel locations of the neighboring CU, as shown in Figs. 7A and 7B. Fig. 7A shows the flipped pixels indicated by the line-filled blocks, and Fig. 7B shows the samples at the original physical locations. If the transpose flag is off, the first row (or the first M rows) at the physical locations can be predicted from the physical pixel locations of the above neighboring CU. M may be 1, 2, or 3. M may also depend on the CU size. M may be signaled so that the decoder can decode accordingly.
Cross-CU prediction
To further increase the coding efficiency, a special run is disclosed. This special run extends the copy-above run starting from the first sample of the palette-coded CU. The special run may be signaled once. Samples in the extended copy-above run mode are predicted from the reconstructed pixels in the neighboring CU. The remaining samples in the CU are coded using the palette syntax specified in SCM-4.0 or SCM-3.0, except that the total number of palette-coded samples in the PU/CU is reduced.
Method 1: a syntax element (e.g., line_num, denoted by L) is first signaled to indicate that the first L rows of samples are predicted from the reconstructed pixels in the neighboring CU, where L is a positive integer. The remaining samples are coded using the palette syntax in SCM-4.0 or SCM-3.0, except that the total number of palette-coded samples in the PU/CU is reduced. For the first L rows of samples, their pixel values are predicted from the reconstructed pixels in the neighboring CU. For example, if palette_transpose_flag is 0, the reconstructed pixels in the above CU are used. The pixel values of the first L rows of samples are the reconstructed pixel values of the last row of the above CU. This is similar to applying intra vertical prediction to the first L rows of samples, while the remaining rows are coded with the normal palette mode.
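The prediction of the first L rows can be sketched as below. This is a minimal illustration for the non-transposed case (palette_transpose_flag equal to 0); the function name is not from the draft.

```python
def predict_first_lines(above_last_row, num_lines):
    """Cross-CU prediction, Method 1: each of the first L rows repeats the
    reconstructed last row of the above CU, like vertical intra
    prediction without residual coding."""
    return [list(above_last_row) for _ in range(num_lines)]
```

For L = 2 and an above-CU last row of {5, 6, 7, 8}, both predicted rows are {5, 6, 7, 8}; the remaining rows of the CU are then coded with the normal palette syntax.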
Fig. 8A shows an example of the extended copy-above run mode, in which two rows of pixels are copied from the row located above CU boundary 810 (i.e., L = 2).
Method 2: a syntax element (e.g., pixel_num, denoted by M) is first signaled to indicate that the first M samples are predicted from the reconstructed pixels in the neighboring CU, where M is a positive integer. The remaining samples are coded using the palette syntax in SCM-4.0 or SCM-3.0, except that the total number of palette-coded samples in the PU/CU is reduced. For example, if palette_transpose_flag is 0, the reconstructed pixels in the above CU are used. The pixel values of the first M samples are the reconstructed pixel values of the last row of the above CU, similar to applying intra vertical prediction to the first M samples. If the width of the CU is cu_width, the cu_width samples from the (M+1)-th sample to the (M+cu_width)-th sample in the CU cannot be coded in the copy-above run mode according to the syntax in SCM-4.0. In other words, samples with scan positions equal to M through (M+cu_width-1) cannot be coded in the copy-above run mode. Fig. 8B shows an example of cross-CU prediction in which the first M (i.e., M = 11) samples are predicted from the reconstructed pixels by signaling a syntax element pixel_num (M).
For example, the syntax table of palette_coding in SCM-3.0 may be modified as shown in Table 12.
Table 12.
As shown in Table 12, the syntax element pixel_num is incorporated as shown in note (12-1), where pixel_num is signaled before the palette index map coding. The remaining palette samples are coded from the scan position starting at pixel_num, as shown in note (12-2). The previous palette sample position is derived as shown in note (12-3). The first sample row after the first pixel_num samples does not allow the copy-above mode, as indicated by note (12-4).
The variable adjustedIndexMax representing the adjusted maximum index is derived as follows:
adjustedIndexMax = indexMax
if( scanPos > pixel_num )
    adjustedIndexMax -= 1
The variable adjustedRefIndex representing the adjusted reference index is derived as follows:
in the above-described method 1 and method 2, the syntax element copy_from_neighbor_cu_flag may be first marked. If copy_from_neighbor_cu_flag is 0, line_num and pixel_num are not marked and inferred to be 0. If copy_from_neighbor_CU_flag is 1, then line_num and pixel_num will be indicated. The actual line_num and pixel_num, which are equal to the parsed line_num and pixel_num, are incremented by 1.
Method 3: in this method, neighboring pixels are used to predict the current pixels coded in the palette mode. First, num_copy_pixel_line is signaled, indicating that the first num_copy_pixel_line sample rows are predicted from the reconstructed pixels in the neighboring CU. Apart from the change of the starting position and the reduced total number of palette-coded samples in the PU/CU, the remaining samples are coded by the normal palette index map coding.
For the first num_copy_pixel_line rows of samples, their pixel values are predicted from the reconstructed pixels in the neighboring CU, where the syntax element num_copy_pixel_line corresponds to the number of pixel rows to be copied. For example, if palette_transpose_flag is 0, the reconstructed pixels in the above CU are used, as shown in Fig. 9A. The pixel values of the first num_copy_pixel_line (e.g., K) rows of samples are predicted from the reconstructed pixel values of the last row of the above CU. This is similar to applying intra vertical prediction to the first K rows of samples, while the remaining rows are coded with the normal palette mode. If palette_transpose_flag is 1, the reconstructed pixels in the left CU are used, as shown in Fig. 9B.
The syntax element num_copy_pixel_line may be signaled after the palette mode signaling and before the palette table coding. If num_copy_pixel_line is equal to the CU width or CU height, palette_transpose_flag is signaled to indicate whether the entire CU is predicted from the above pixels or the left pixels. Since all samples in the current CU are predicted, the syntax of the palette table coding and the index map coding is skipped.
If num_copy_pixel_line is smaller than the CU width or CU height, the normal palette table coding and palette index map coding are applied. In SCM-4.0, if MaxPaletteIndex is equal to 0, palette_transpose_flag is inferred to be 0. However, according to the current method, if MaxPaletteIndex is equal to 0 and num_copy_pixel_line is not equal to 0, palette_transpose_flag still needs to be signaled to indicate whether the first num_copy_pixel_line rows or columns of samples are predicted from the reconstructed pixels in the above CU or the left CU. For the index map coding, the starting sample position is set to num_copy_pixel_line * cu_width. For samples with sample positions between num_copy_pixel_line * cu_width and (num_copy_pixel_line+1) * cu_width - 1, the copy-above run mode cannot be selected. In other words, samples with sample positions smaller than (num_copy_pixel_line+1) * cu_width cannot be signaled as the copy-above run mode.
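The starting position and the copy-above restriction can be sketched as below. This toy model assumes a horizontal scan and a fixed illustrative CU width of 8; the function names are not from the draft.

```python
CU_WIDTH = 8  # illustrative CU width

def start_scan_pos(num_copy_pixel_line):
    # Index map coding starts right after the copied rows.
    return num_copy_pixel_line * CU_WIDTH

def copy_above_allowed(scan_pos, num_copy_pixel_line):
    # Copy-above is forbidden for the first coded sample row, whose
    # above neighbors are the copied pixel rows.
    return scan_pos >= (num_copy_pixel_line + 1) * CU_WIDTH
```

With num_copy_pixel_line = 2, coding starts at scan position 16 and the copy-above run mode first becomes selectable at scan position 24.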
Table 13 shows an exemplary syntax table of palette coding according to an embodiment of the above disclosed method.
Table 13.
In Table 13, the syntax element num_copy_pixel_line is incorporated in front of all the syntax, as shown in note (13-1). If num_copy_pixel_line is equal to nCbS (i.e., the CU width or CU height), the syntax element palette_transpose_flag is incorporated as shown in note (13-2). The palette indices of the entire CU are assigned to -1, as indicated by note (13-3), which indicates that the pixel values are copied from the neighboring CU. If num_copy_pixel_line is not equal to 0 and MaxPaletteIndex is not greater than 0 (e.g., MaxPaletteIndex is equal to 0), the syntax element palette_transpose_flag is incorporated as shown in note (13-4). The palette indices of the samples in the first num_copy_pixel_line rows are assigned to -1, as shown in note (13-5). The first sample row after the first num_copy_pixel_line rows does not allow the copy-above mode, as indicated by note (13-6).
The variable AdjustedMaxPaletteIndex is derived as follows:
AdjustedMaxPaletteIndex = MaxPaletteIndex
if( PaletteScanPos > num_copy_pixel_line * nCbS )
    AdjustedMaxPaletteIndex -= 1
The variable adjustedRefPaletteIndex is derived as follows:
If PaletteIndexMap[xC][yC] is equal to -1, the corresponding pixel value is the same as its neighbor. According to this method, if the above or left pixel is not available (e.g., samples at a frame boundary, or inter-coded samples when constrained intra prediction is applied), the color with palette index equal to 0 is used. If the palette table of the current CU is not coded (e.g., num_copy_pixel_line is equal to cu_width), the first palette in the palette predictor table is used.
In another example, if the above or left pixel is not available, the HEVC intra prediction boundary pixel padding method may be used to generate replacement neighboring pixels.
The number of lines using the copy pixel line mode may be derived from a syntax element num_copy_pixel_line_indication instead of being signaled directly as num_copy_pixel_line. If num_copy_pixel_line_indication is 0, num_copy_pixel_line is derived as 0; if num_copy_pixel_line_indication is 1, num_copy_pixel_line is derived as N, where N is a predefined number. If num_copy_pixel_line_indication is k, num_copy_pixel_line is derived as k*N.
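The indication-to-line-count mapping can be written in one line. The granularity N = 4 below is an assumed example value, not a value from the draft.

```python
N = 4  # assumed predefined granularity of copied lines

def derive_num_copy_pixel_line(indication):
    # indication k maps to k * N copied rows (0 -> 0, 1 -> N, k -> k*N).
    return indication * N
```

Signaling the indication instead of the raw line count shortens the codeword when only multiples of N are useful.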
Method 4: in this method, the num_copy_pixel_line syntax is signaled before the palette_transpose_flag syntax. Table 14 shows an exemplary palette coding syntax table according to an embodiment of this method.
Table 14.
In Table 14, the syntax element num_copy_pixel_line is incorporated after the palette_escape_val_present_flag signaling, as shown in note (14-1). If num_copy_pixel_line is not equal to 0 and MaxPaletteIndex is not greater than 0 (e.g., MaxPaletteIndex is equal to 0), the syntax element palette_transpose_flag is incorporated as indicated by note (14-2). The palette indices of the samples in the first num_copy_pixel_line rows are assigned to -1, as indicated by note (14-3). The first sample row after the first num_copy_pixel_line rows does not allow the copy-above mode, as indicated by note (14-4).
The syntax element num_copy_pixel_line may also be signaled before the syntax element palette_escape_val_present_flag. Table 15 shows an exemplary syntax table of palette coding according to this method.
Table 15.
In Table 15, the syntax element num_copy_pixel_line is incorporated before the palette_escape_val_present_flag syntax signaling, as shown in note (15-1). If num_copy_pixel_line is not equal to 0 and MaxPaletteIndex is not greater than 0 (e.g., MaxPaletteIndex is equal to 0), the syntax element palette_transpose_flag is incorporated as shown in note (15-2). The palette indices of the samples in the first num_copy_pixel_line rows are assigned to -1, as indicated by note (15-3). The first sample row after the first num_copy_pixel_line rows does not allow the copy-above mode, as indicated by note (15-4).
In this embodiment, if the syntax element num_copy_pixel_line is equal to cu_width, the syntax elements NumPredictedPaletteEntries and num_signaled_palette_entries should both be 0, and the first coded syntax element palette_predictor_run should be 1.
In the syntax design where num_copy_pixel_line is signaled before palette_escape_val_present_flag (e.g., Table 15), NumPredictedPaletteEntries and num_signaled_palette_entries should both be 0, the first coded palette_predictor_run should be 1, and palette_escape_val_present_flag is inferred to be 0.
Method 5: according to this method, num_copy_pixel_line is signaled after palette_transpose_flag. In this syntax design, the syntax element palette_transpose_flag needs to be signaled even if MaxPaletteIndex is equal to 0. Table 16 shows an exemplary syntax table of palette coding according to an embodiment of this method.
Table 16.
Since the syntax element palette_transpose_flag needs to be signaled even when MaxPaletteIndex is equal to 0, the syntax is incorporated outside the test "if( MaxPaletteIndex > 0 )", as shown in note (16-1). Meanwhile, the syntax element palette_transpose_flag inside the test "if( MaxPaletteIndex > 0 )" is deleted, as shown by the line-filled text in note (16-2). The syntax element num_copy_pixel_line is incorporated as shown in note (16-3). The palette indices of the samples in the first num_copy_pixel_line rows are assigned to -1, as indicated by note (16-4).
If num_copy_pixel_line is even, it is natural to use a left-to-right scan for the first normal line, as shown in Fig. 10A. However, if num_copy_pixel_line is odd, there are two possible scans. One is a left-to-right scan of the first normal line as shown in Fig. 10B, and the other is a right-to-left scan of the first normal line as shown in Fig. 10C.
As shown in Fig. 10B, the traverse scan moves downward starting from the first normal line. In Fig. 10C, the traverse scan starts from the first sample of the current CU and skips the scan of the first num_copy_pixel_line rows. In Tables 13 to 16, the scan in Fig. 10C is used. The syntax table may be modified accordingly for the scan in Fig. 10B.
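The two traverse scans can be sketched as follows. This is an illustrative model, not the draft's scan derivation: `restart=True` corresponds to the Fig. 10B style (the scan restarts left-to-right at the first normal line), while `restart=False` corresponds to the Fig. 10C style (the direction continues as if the copied rows had been scanned, so the first normal line runs right-to-left when num_copy_pixel_line is odd).

```python
def traverse_scan(nCbS, num_copy_pixel_line, restart=False):
    """Traverse (snake) scan over the rows remaining after the first
    num_copy_pixel_line copied rows; returns (x, y) positions in order."""
    order = []
    for y in range(num_copy_pixel_line, nCbS):
        # The row phase decides the horizontal direction of this row.
        phase = y - num_copy_pixel_line if restart else y
        xs = range(nCbS) if phase % 2 == 0 else range(nCbS - 1, -1, -1)
        order.extend((x, y) for x in xs)
    return order
```

For an odd num_copy_pixel_line the two styles visit the first normal line in opposite directions, which is exactly the choice between Figs. 10B and 10C.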
Method 6: table 17 shows an exemplary syntax table for palette coding and decoding scanned in fig. 10B.
Table 17.
In Table 17, the syntax element num_copy_pixel_line is incorporated in front of all the syntax, as shown in note (17-1). If num_copy_pixel_line is equal to nCbS (i.e., the CU width or CU height), the syntax element palette_transpose_flag is incorporated as indicated by note (17-2). The palette indices of the entire CU are assigned to -1, as indicated by note (17-3), which indicates that the pixel values are copied from the neighboring CU. If num_copy_pixel_line is not equal to 0 and MaxPaletteIndex is not greater than 0 (e.g., MaxPaletteIndex is equal to 0), the syntax element palette_transpose_flag is incorporated as shown in note (17-4). The palette indices of the samples in the first num_copy_pixel_line rows are assigned to -1, as shown in note (17-5). PaletteScanPos is reset to 0, as indicated by note (17-6). Since PaletteScanPos is reset, the actual sample index needs to be incremented by num_copy_pixel_line * nCbS, as shown by notes (17-7 and 17-10 to 17-12). The vertical position needs to be incremented by num_copy_pixel_line, as indicated by notes (17-8, 17-9 and 17-13).
The variable AdjustedMaxPaletteIndex is derived as follows:
AdjustedMaxPaletteIndex = MaxPaletteIndex
if( PaletteScanPos > 0 )
    AdjustedMaxPaletteIndex -= 1
The variable adjustedRefPaletteIndex is derived as follows:
note that in methods 3 to 6 described above, the signaling of num_copy_pixel_line may also be constructed by using the palette_run syntax element and context in the existing HEVC SCC specification. For example, it may be identified as shown in table 18.
Table 18.
In Table 18, with the above syntax, the decoded copy_pixel_line_length may semantically be the number of copied pixel rows. The maximum value of the coded element is the block height minus 1.
The decoded copy_pixel_line_length may also be the actual number of samples using the copy pixel mode, in which case the number of copied pixel rows is derived as copy_pixel_line_length / block_width. Note that in this method, a conformance constraint is applied to copy_pixel_line_length such that it must be a multiple of block_width.
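The two interpretations of copy_pixel_line_length can be captured in a short helper. The function name and the `semantics` switch are illustrative; the assertion models the conformance constraint on the sample-count variant.

```python
def num_copied_rows(copy_pixel_line_length, block_width, semantics="samples"):
    """Derive the number of copied pixel rows from copy_pixel_line_length.
    "rows":    the decoded value is already the row count.
    "samples": the decoded value is a sample count; conformance requires
               it to be a multiple of block_width."""
    if semantics == "rows":
        return copy_pixel_line_length
    assert copy_pixel_line_length % block_width == 0, "conformance violated"
    return copy_pixel_line_length // block_width
```

For a block width of 8, a decoded sample count of 16 maps to 2 copied rows; a count of 13 would violate the conformance constraint.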
Method 7: in this method, the current syntax structure with a modified decoding process is used to indicate whether the block starts with a copy pixel mode from outside the current CU, together with the number of lines (or the number of samples) of the copy pixel mode. This may be achieved, for example, as follows.
The copy-above run mode is allowed for the first sample in the block, indicated by the syntax element palette_run_type_flag[0][0]. If palette_run_type_flag[0][0] is 1, the number of lines of copied pixel samples is signaled afterwards. If palette_run_type_flag[0][0] is 0, the remaining syntax signaling stays the same as in the current syntax structure.
When palette_run_type_flag[0][0] is 1, the same syntax element used for signaling the palette run length is used to indicate the number of rows using copied pixels. There are two methods to signal the number of rows with the palette run length syntax.
In one method, the decoded palette run length R for the copy pixel mode semantically means the number of rows (instead of the number of samples). Thus, the actual run of copied pixels is the decoded value R * block_width for a horizontal scan, or the decoded value R * block_height for a vertical scan. The maximum value of the decoded R should be the block height (or width).
In the other method, the decoded palette run length R is the actual copy pixel run. This method does not require modification of the semantics and decoding process of the copy pixel run.
Note that for this approach, a conformance constraint must be imposed on the decoded value R so that it must be a multiple of block_width.
When palette_run_type_flag[0][0] is 1, the sample row following the copied pixel rows cannot use the copy-above run mode. The parsing process is modified according to this condition so as not to parse the run_type flag for this row.
When palette_run_type_flag[0][0] is 0, the remaining samples in the first row cannot use the copy-above run mode. The parsing process is modified according to this condition so as not to parse the run_type flags for these samples.
Table 19 shows an exemplary syntax table of palette coding according to this method, in which the coded "palette run length R" of the copy pixel mode semantically means the number of rows (not the number of samples).
Table 19.
In Table 19, since the syntax element palette_transpose_flag needs to be signaled even when MaxPaletteIndex is equal to 0, the syntax is incorporated outside the test "if( MaxPaletteIndex > 0 )", as shown in note (19-1). Meanwhile, the line-filled text in note (19-2) indicates deleting the syntax element palette_transpose_flag inside the test "if( MaxPaletteIndex > 0 )". The palette indices of the entire CU are first reset to -1, as indicated by note (19-3), which indicates that the pixel values are copied from the neighboring CU. The palette_run_type_flag and palette run of the first sample (PaletteScanPos equal to 0) are used to indicate num_copy_pixel_line. As shown in note (19-5), for the first sample, if palette_run_type_flag is equal to COPY_ABOVE_MODE, maxPaletteRun is set equal to nCbS-1, as shown in note (19-4), and num_copy_pixel_line is equal to the decoded palette run. Otherwise, num_copy_pixel_line is set to 0.
Table 20 shows another exemplary syntax table of palette coding according to this method, in which the coded "palette run length R" for the copy pixel mode is the actual run length, and a conformance constraint is imposed such that R must be a multiple of cu_width.
Table 20.
In Table 20, since the syntax element palette_transpose_flag needs to be signaled even when MaxPaletteIndex is equal to 0, the syntax element is moved outside the test "if( MaxPaletteIndex > 0 )". Meanwhile, the strikethrough text in note (20-2) indicates deleting the syntax element palette_transpose_flag inside the test "if( MaxPaletteIndex > 0 )". The palette indices of the entire CU are first reset to −1, as shown by note (20-3). For the first sample (PaletteScanPos = 0), palette_run_type_flag and the palette run are used to indicate num_copy_pixel_line. As shown in note (20-4), if palette_run_type_flag is equal to COPY_ABOVE_MODE, num_copy_pixel_line is set equal to the decoded palette run / nCbS. Note that in this case a conformance constraint is applied so that the palette run must be a multiple of nCbS. Otherwise (i.e., palette_run_type_flag is not equal to COPY_ABOVE_MODE), num_copy_pixel_line is set to 0.
Method 8: according to this method, only num_copy_pixel_line=0 or cu_width is tested. A shortcut to the palette mode is provided by introducing a new syntax element pred_from_neighbor_pixels. If pred_from_neighbor_pixels is 1, the palette_transfer_flag is displayed.
If pred_from_neighbor_pixels is 1 and palette_transfer_flag is 0, all samples are predicted from the pixels of the upper CU. If pred_from_neighbor_pixels is 1 and palette_transfer_flag is 1, all samples are predicted from pixels of the left CU. If adjacent pixels are not available, there are two methods to generate replacement pixels. According to the first method, an intra prediction boundary (Intra prediction boundary) pixel generation method may be used. It is similar to horizontal or vertical intra prediction without residual codec. According to a second method, a color is used with a palette index equal to 0. If the palette table of the current CU is not coded (e.g., num_copy_pixel_line is equal to cu_width), the first palette in the palette predictor table is used.
Table 21 shows an exemplary syntax table of palette coding according to the method.
Table 21.
In Table 21, the syntax element pred_from_neighbor_pixels is signaled as shown in note (21-1). If pred_from_neighbor_pixels is true, palette_transpose_flag is signaled and the palette indices of the entire CU are set to −1, as shown in note (21-2).
In methods 3 to 8, only one palette_transpose_flag is signaled. If palette_transpose_flag is equal to 0, the vertical copy pixel mode is used first and then horizontal index scanning is used. Otherwise, pixels are copied horizontally first and then vertical index scanning is used.
Alternatively, two transpose flags, such as palette_copy_pixel_transpose_flag and palette_scan_transpose_flag, may be signaled. palette_copy_pixel_transpose_flag indicates the direction of the copy pixel mode from the neighboring CU, and palette_scan_transpose_flag indicates the direction of the palette scan. For example, palette_copy_pixel_transpose_flag equal to 0 indicates the copy pixel mode from the above CU, and palette_copy_pixel_transpose_flag equal to 1 indicates the copy pixel mode from the left CU. palette_scan_transpose_flag equal to 0 indicates that horizontal scanning is used, and palette_scan_transpose_flag equal to 1 indicates that vertical scanning is used.
For example, in Fig. 11A, palette_copy_pixel_transpose_flag is 0 and palette_scan_transpose_flag is 0. In Fig. 11B, palette_copy_pixel_transpose_flag is 0 and palette_scan_transpose_flag is 1. In Fig. 11C, palette_copy_pixel_transpose_flag is 1 and palette_scan_transpose_flag is 0.
Cross-CU prediction with reverse and rotational traverse scans
According to one embodiment, cross-CU prediction may be implemented with a reverse traverse scan or a rotational traverse scan. Fig. 12A shows an example of cross-CU prediction with a reverse traverse scan, and Fig. 12B shows an example of cross-CU prediction with a rotational traverse scan. For both scans, the scan positions of normal index map coding start at 0 and end at nCbS × nCbS − num_copy_pixel_line × nCbS − 1. For the remaining samples, whose scan positions are equal to or greater than nCbS × nCbS − num_copy_pixel_line × nCbS, PaletteIndexMap[xC][yC] is set to −1, which means that their pixel values are the same as the neighboring pixels.
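The split between normally coded scan positions and copied positions can be sketched as follows (an illustrative sketch; the helper name is hypothetical and the map is shown row-major for a square CU of size nCbS):

```python
def reset_copied_lines(nCbS, num_copy_pixel_line):
    """Build a palette index map for an nCbS x nCbS CU where the last
    num_copy_pixel_line rows are set to -1, meaning those samples are
    copied from the neighboring CU.  Normal index map coding covers
    scan positions 0 .. nCbS*nCbS - num_copy_pixel_line*nCbS - 1."""
    first_copied = nCbS * nCbS - num_copy_pixel_line * nCbS
    # 0 stands in for any normally coded index; -1 marks copied samples
    return [0 if pos < first_copied else -1 for pos in range(nCbS * nCbS)]
```

For a 4×4 CU with one copied line, positions 0 to 11 are normally coded and positions 12 to 15 carry the special index −1.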
Context formation and Binarization (Binarization) of num_copy_pixel_line
The syntax element num_copy_pixel_line may be binarized using a K-th order exponential-Golomb code (EG-K code), a K-th order truncated exponential-Golomb code (truncated EG-K code), an N-bit truncated unary code + EG-K code, or the same binarization method used for the palette run (i.e., binarization into palette_run_msb_id_plus1 and palette_run_refinement_bits).
Context-coded bins may be used to code num_copy_pixel_line. For example, the same binarization method used for the palette run may be used for num_copy_pixel_line. The first N bins of palette_run_msb_id_plus1 may be context coded. For example, N may be 3. The remaining bins are coded as bypass bins. The contexts may be shared with palette run coding. For example, the contexts may be shared with those of the extended copy-above run mode.
Since num_copy_pixel_line is more likely to be equal to cu_width or 0 than to any other value, the binarization can be modified to shorten the codeword for num_copy_pixel_line equal to cu_width. For example, the value cu_width may be inserted before the number P (e.g., 1) in the binarization codeword table. A value of num_copy_pixel_line equal to or greater than P is increased by 1 at the encoder side. At the decoder side, if the parsed codeword is P, num_copy_pixel_line is equal to cu_width; if the parsed codeword is less than P, num_copy_pixel_line is equal to the parsed codeword; if the parsed codeword is greater than P, num_copy_pixel_line is equal to the parsed codeword minus 1.
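The codeword remapping just described can be sketched as an encoder/decoder value mapping (an illustrative sketch with P = 1; function names are hypothetical):

```python
def map_num_copy_pixel_line(value, cu_width, p=1):
    """Encoder side: insert cu_width before codeword position P, so that
    num_copy_pixel_line == cu_width gets the short codeword P."""
    if value == cu_width:
        return p
    return value if value < p else value + 1


def unmap_codeword(codeword, cu_width, p=1):
    """Decoder side: inverse of the mapping above."""
    if codeword == p:
        return cu_width
    return codeword if codeword < p else codeword - 1
```

The round trip is lossless for every legal value, while cu_width is moved to the second-shortest codeword.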
In another embodiment, up to two additional bits are used to indicate whether num_copy_pixel_line is equal to 0, cu_width, or another value. For example, "0" means num_copy_pixel_line is equal to 0, "10" means num_copy_pixel_line is equal to cu_width, and "11" followed by a coded value L means num_copy_pixel_line is equal to L+1.
In another embodiment, the roles of the first two codewords are swapped: "0" means num_copy_pixel_line is equal to cu_width, "10" means num_copy_pixel_line is equal to 0, and "11" followed by a coded value L means num_copy_pixel_line is equal to L+1.
For methods 4 to 7 mentioned in the cross-CU region prediction, num_copy_pixel_line is signaled after the palette table coding. The binarization of num_copy_pixel_line may be modified according to the decoded information of the palette table. For example, if NumPredictedPaletteEntries and num_signaled_palette_entries are not both 0, at least one sample row/column is coded by the normal palette mode, so num_copy_pixel_line cannot be equal to cu_width. The codeword range of num_copy_pixel_line is therefore limited to 0 to (cu_width − 1). For example, if a K-th order truncated exponential-Golomb code (truncated EG-K code), an N-bit truncated unary code + EG-K code, or the same binarization method used for the palette run is used to code num_copy_pixel_line, then cMax, MaxPaletteRun, or the maximum possible value is set to cu_width − 1. The binarization method for the palette run binarizes the palette run into palette_run_msb_id_plus1 and palette_run_refinement_bits.
Search method for determining num_copy_pixel_line
In another embodiment, search methods for determining the number of copied pixel lines (i.e., num_copy_pixel_line) are disclosed.
Method 1: at the encoder side, the value of num_copy_pixel_line is determined first. The first num_copy_pixel_line columns/rows are predicted from the neighboring pixels. The remaining samples are used to derive the palette and the index map for the remaining samples.
Method 2: at the encoder side, the samples of the entire CU are first used to derive the palette. With this palette table, a rate-distortion optimization (RDO) process may be used to determine the best value of num_copy_pixel_line. Interpolation can be used to estimate the bit cost for different values of num_copy_pixel_line. For example, if the bit cost for num_copy_pixel_line = 0 is R0 and cu_width is 16, then the bit cost for num_copy_pixel_line = 3 is estimated as R0 × (cu_width − 3)/cu_width (i.e., 13/16 × R0).
After num_copy_pixel_line is determined, the first num_copy_pixel_line columns/rows are predicted from the neighboring pixels. The remaining samples are used to re-derive the palette and the index map for the remaining samples.
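The bit-cost interpolation in Method 2 can be sketched as follows (an illustrative sketch; the function name is hypothetical):

```python
def interpolate_bit_cost(r0, cu_width, num_copy_pixel_line):
    """Estimate the index-map bit cost when num_copy_pixel_line lines
    are copied, by scaling the measured cost R0 (for zero copied lines)
    with the fraction of lines that remain to be coded."""
    return r0 * (cu_width - num_copy_pixel_line) / cu_width
```

With R0 = 160 bits and cu_width = 16, copying 3 lines gives an estimated cost of 160 × 13/16 = 130 bits, matching the example in the text.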
Line-based replicated pixels from neighboring CUs
In the previous section, line-based copy pixels from a neighboring CU are disclosed for palette mode coding. A syntax element num_copy_pixel_row representing the number of copied pixel rows is first signaled to indicate that the first num_copy_pixel_row rows of samples are predicted from the reconstructed pixels in the neighboring CU. A similar concept may be applied to other modes, such as inter mode, intra mode, and IntraBC mode.
In one embodiment, line-based copy pixels from neighboring CUs are applied to inter-mode and/or intra-mode PUs or CUs. The syntax element num_copy_pixel_row is first signaled. If num_copy_pixel_row is not equal to 0, a syntax element copy_pixel_row_direction_flag is signaled to indicate the direction of the copied pixel rows. If copy_pixel_row_direction_flag is 0, num_copy_pixel_row indicates that the first num_copy_pixel_row rows of samples are predicted from the reconstructed pixels in the above CU. If copy_pixel_row_direction_flag is 1, num_copy_pixel_row indicates that the first num_copy_pixel_row columns of samples are predicted from the reconstructed pixels in the left CU. For example, Fig. 13 shows an example of an 8×8 CU coded in inter mode, where num_copy_pixel_row is 3 and copy_pixel_row_direction_flag is 0. The prediction values in the top three rows are replaced by the reconstructed pixel values of the last row of the above CU. The remaining pixels are predicted by the original inter mode. This is equivalent to performing inter mode prediction on the entire CU/PU and then replacing the first num_copy_pixel_row rows or columns with neighboring pixels.
Intra-prediction neighboring pixel constructions may be used to generate neighboring reference pixels. If neighboring pixels are not available (e.g., outside of the image boundary), a reference pixel filling method in intra prediction may be applied to generate neighboring reference pixels. The smoothing filter in intra prediction may be applied or turned off.
The syntax elements num_copy_pixel_row and copy_pixel_row_direction_flag may be signaled at the CU level or the PU level. They may be signaled before, in the middle of, or at the end of the CU or PU. For example, if num_copy_pixel_row and copy_pixel_row_direction_flag are signaled at the CU level and before the CU, they may be signaled before part_mode. The codewords of part_mode may be adaptively changed according to the values of num_copy_pixel_row and copy_pixel_row_direction_flag. For example, if copy_pixel_row_direction_flag is 0 and num_copy_pixel_row is equal to or greater than cu_height/2, then PART_2NxN and PART_2NxnU are not allowed, and the codeword binarization of part_mode is modified. For example, if PART_2NxN and PART_2NxnU are removed, the binarization of part_mode is shown in Table 22.
Table 22.
In Table 22, strikethrough text corresponds to deleted text. In another example, if num_copy_pixel_row and copy_pixel_row_direction_flag are signaled at the CU level, at the end of or in the middle of the CU, they may be signaled after part_mode. After part_mode is received, the values of num_copy_pixel_row and copy_pixel_row_direction_flag may be restricted. For example, if part_mode is PART_2NxN and copy_pixel_row_direction_flag is 0, the value of num_copy_pixel_row is restricted to the range from 0 to cu_height/2.
Intra-frame Boundary (Intra Boundary) reference pixels
In intra mode, if num_copy_pixel_row is not 0, the neighboring reference pixels may be the same as those in HEVC. This is equivalent to performing intra prediction of a PU and then replacing the first few rows or columns with pixels of the neighboring CU.
In another embodiment, if num_copy_pixel_row is not 0, the positions of the neighboring reference pixels are changed. Fig. 14 and Fig. 15 show examples of changing the positions of the neighboring reference pixels according to this embodiment, where num_copy_pixel_row is 3 and copy_pixel_row_direction_flag is 0. The above reference pixels and the above-left reference pixel are moved down to the third row. As shown in Fig. 14, the above-right reference pixels are copied from the above-right CU. As shown in Fig. 15, the above-right reference pixels are copied from the rightmost pixel of the third row.
Residual coding for regions predicted from neighboring CUs
According to one embodiment, if num_copy_pixel_row is N and copy_pixel_row_direction_flag is 0, the top N rows are predicted from the neighboring CU, and the residuals of the top N rows may be constrained to be 0. According to another embodiment, the residuals of the top N rows may be signaled. For inter mode, the HEVC residual quadtree is applied.
Context formation and binarization of num_copy_pixel_row
num_copy_pixel_row may be binarized using a K-th order exponential-Golomb code (EG-K code), a K-th order truncated exponential-Golomb code (truncated EG-K code), an N-bit truncated unary code + EG-K code, or the same binarization method used for the palette run (i.e., binarization into palette_run_msb_id_plus1 and palette_run_refinement_bits).
Context-coded bins may be used to code num_copy_pixel_row. For example, the same binarization method used for the palette run may be used for num_copy_pixel_row. The first N bins of palette_run_msb_id_plus1 may be context coded. For example, N may be 3. The remaining bins are coded as bypass bins. The contexts may be shared with palette run coding. For example, the contexts may be shared with those of the extended copy-above run mode.
Since num_copy_pixel_row is more likely to be equal to cu_width or 0 than to any other value, the binarization can be modified to shorten the codeword for num_copy_pixel_row equal to cu_width. For example, the value cu_width may be inserted before the number M (e.g., 1) in the codeword table. A value of num_copy_pixel_row equal to or greater than M is increased by 1 at the encoder side. At the decoder side, if the parsed codeword is M, num_copy_pixel_row is equal to cu_width; if the parsed codeword is less than M, num_copy_pixel_row is equal to the parsed codeword; if the parsed codeword is greater than M, num_copy_pixel_row is equal to the parsed codeword minus 1.
According to another embodiment, up to two additional bits are used to indicate whether num_copy_pixel_row is equal to 0, cu_width, or another value. For example, "0" means num_copy_pixel_row is equal to 0, "10" means num_copy_pixel_row is equal to cu_width, and "11" followed by a coded value L means num_copy_pixel_row is equal to L+1. In another example, "0" means num_copy_pixel_row is equal to cu_width, "10" means num_copy_pixel_row is equal to 0, and "11" followed by a coded value L means num_copy_pixel_row is equal to L+1.
For methods 4 to 7 mentioned in the cross-CU region prediction, num_copy_pixel_row may be signaled after the palette table coding. The binarization of num_copy_pixel_row may be modified according to the decoded information of the palette table. For example, if NumPredictedPaletteEntries and num_signaled_palette_entries are not both 0, at least one row/column of samples is coded in the normal palette mode, so num_copy_pixel_row cannot be equal to cu_width. The codeword range of num_copy_pixel_row is therefore limited to 0 to (cu_width − 1). For example, if a K-th order truncated exponential-Golomb code (truncated EG-K code), an N-bit truncated unary code + EG-K code, or the same binarization method used for the palette run (i.e., binarization into palette_run_msb_id_plus1 and palette_run_refinement_bits) is used to code num_copy_pixel_row, then cMax, MaxPaletteRun, or the maximum possible value is set to cu_width − 1.
In this section, the cu_width mentioned above may be replaced by cu_height, pu_width, or pu_height for the CU-level or PU-level line-based copy pixels from neighboring CUs.
Coding the number of indices
In SCM-4.0 palette index map coding, the number of indices is signaled first. To code the number of indices, the variable "number of indices − palette size" is first derived. A mapping process is then performed to map "number of indices − palette size" to a mapped value. The mapped value is binarized using the same binarization method as "coeff_abs_level_remaining" and signaled. The prefix part is represented by a truncated Rice code and the suffix part by an exponential-Golomb code. This binarization process takes an input cParam, which is set to (2 + IndexMax/6). However, IndexMax/6 requires a division or lookup-table operation. Thus, according to an embodiment of the present invention, cParam is set to (2 + IndexMax/M), where M is an integer power of 2 (i.e., M = 2^n for an integer n). IndexMax/M can then be implemented by right-shifting IndexMax by n bits. For example, setting cParam to (2 + IndexMax/4) or (2 + IndexMax/8) can be accomplished by right-shifting IndexMax by 2 or 3 bits, respectively.
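The division-free cParam derivation above can be sketched as follows (an illustrative sketch; the function name is hypothetical):

```python
def c_param(index_max, shift=3):
    """cParam = 2 + IndexMax / M with M = 2**shift, implemented as a
    right shift instead of a division (shift=2 gives M=4, shift=3 gives M=8)."""
    return 2 + (index_max >> shift)
```

For IndexMax = 24, shifting by 2 gives cParam = 2 + 6 = 8 and shifting by 3 gives cParam = 2 + 3 = 5, with no division or lookup table.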
Aggregating all escape colors before encoding/parsing the palette index map
In the current HEVC SCC specification and earlier versions, the values of escape pixels in palette coding are either signaled interleaved with the other regular indices during index map coding, or grouped together after index map coding is complete. According to one embodiment of the invention, all escape pixel values are aggregated before index map coding.
Assume there are N escape pixels at different locations within the current coding block, where N is a positive integer. In one embodiment, all color values of the escape pixels are encoded/decoded together before the palette index map of the coding block is encoded/decoded. In this way, when an index is decoded as an escape index, its corresponding pixel value no longer needs to be decoded. Note that some of the escape pixels may have the same color value. In one embodiment, the pixel value of each escape pixel occurrence is still written into the bitstream.
Fig. 16 shows an example of decoding escape colors with N = 5 according to an embodiment of this method, where the pixel value of each escape pixel occurrence is still written into the bitstream. In this example, a horizontal traverse scan is used. Following the decoding order, each escape pixel can find its corresponding color in the decoded table.
In another embodiment, only non-duplicate color values are written into the bitstream, and an index into these written colors is signaled for each escape pixel occurrence.
Fig. 17 shows an example of decoding escape colors with N = 5 according to an embodiment of this method, where only non-duplicate color values are written into the bitstream. In this example, a horizontal traverse scan is used. Only the non-duplicate colors are decoded (e.g., M = 3), and the index into the color table is signaled for each escape pixel occurrence (e.g., N = 5). Following the decoding order, each escape pixel can find its corresponding color in the decoded table.
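The de-duplication of escape colors described above can be sketched as follows (an illustrative sketch; the function name is hypothetical and the colors are shown as plain integers):

```python
def build_escape_color_table(escape_colors):
    """Split the escape pixel colors (in scan order) into a table of
    non-duplicate colors plus one table index per escape occurrence."""
    table, indices = [], []
    for color in escape_colors:
        if color not in table:
            table.append(color)          # new color: add to the table
        indices.append(table.index(color))  # signal its table index
    return table, indices
```

For five escape pixels with colors 10, 20, 10, 30, 20, only three colors (M = 3) are written, plus five table indices (N = 5).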
Escape color signaling termination
In one embodiment, the number of encoded/decoded escape colors is indicated. For example, after each escape color is encoded or decoded, a 1-bit flag end_of_escape_color_flag signals whether it is the last color to be encoded or decoded. When the decoded end_of_escape_color_flag is 1, no more escape colors need to be decoded. Assume there are N escape colors in the current coding block and the last M pixels have the same color value, where M and N are integers and M <= N. In another embodiment, only one color value needs to be transmitted for these M pixels, and end_of_escape_color_flag is set to 1. The last (M − 1) escape pixels are inferred to share the last decoded escape color value. An example of this method with N = 5 and M = 3 is shown in Table 23.
Table 23.
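The termination scheme of Table 23's example (N = 5, M = 3) can be sketched as follows (an illustrative sketch: the bitstream is abstracted as a list of (color, end_flag) pairs, and the function name is hypothetical):

```python
def decode_escape_colors(coded, total_escape_pixels):
    """Decode escape colors terminated by end_of_escape_color_flag.
    After the pair whose flag is 1, the remaining escape pixels are
    inferred to share the last decoded escape color value."""
    colors = []
    for color, end_flag in coded:
        colors.append(color)
        if end_flag == 1:
            break
    colors += [colors[-1]] * (total_escape_pixels - len(colors))
    return colors
```

With N = 5 escape pixels and the last M = 3 sharing one color, only three colors are transmitted and the last two pixels reuse the final decoded color.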
In another embodiment, the total number of escape colors is explicitly indicated before the color values are encoded/decoded. In either case, end_of_escape_color_flag may be bypass coded or context coded.
Limiting the total number of allowed escape colors in a coding block
The total number of decoded escape colors may be limited by setting a maximum allowed number in a high-level header, such as at the sequence level, picture level, or slice level.
If this maximum number is reached, according to one embodiment, the remaining escape pixels are inferred to share the value of the last decoded escape color. In another embodiment, the remaining escape pixels are inferred to share the value of a particular decoded escape color, such as the most frequent color.
Copying pixels from outside a CU
In SCM 3.0, the color index value range depends on the palette size and the escape flag. If the escape flag is off, the maximum index value is equal to the palette size. If the escape flag is on, the maximum index value is equal to (palette size + 1). If the maximum index value is equal to N, the possible index values range from 0 to (N − 1). In SCM 3.0, a maximum index value equal to 0 is prohibited. If the maximum index value is equal to 1, all color indices in the CU are assumed to be 0; that is, if there is only one possible index value, all color indices are assumed to be 0.
As disclosed above, a pixel may be signaled by COPY_ABOVE. In this case, not only the pixel index of the above pixel but also its pixel value is copied. The decoder can reconstruct COPY_ABOVE pixels from the copied pixel values without referring to the palette. If the above pixels cross the CU boundary, a special index (denoted N) is assigned to the neighboring constructed pixels (NCPs) of the neighboring CU according to the disclosure above. When a pixel is signaled by COPY_ABOVE, it copies not only the pixel index (N) of the above pixel but also the pixel value of the above pixel in the dot-filled region of Fig. 18, where Fig. 18 shows the CU boundary (1810).
With the method of copying the pixel values of NCPs, the assumption for processing palette-coded CUs with zero/one index value no longer holds. For the case where the maximum index value is equal to 0, all pixels in the CU can be predicted from NCPs, as shown in Fig. 19.
If the maximum index value is equal to 1, not all color indices need to be 0: some pixels may have index 0 while others are predicted from NCPs, as shown in Fig. 20.
For the examples shown in Fig. 19 and Fig. 20, no corresponding syntax is available in SCM 3.0. Thus, new syntax for signaling these cases is disclosed below.
Syntax element for inter-CU index prediction
In SCM 3.0, a palette-coded CU contains the following syntax:
palette_share_flag equal to 1 specifies that the palette size is equal to that of the previous palette and that the entire set of palette entries is the same as the previous palette entries.
palette_transpose_flag equal to 1 specifies that the transpose process is applied to the associated palette indices of the current CU. palette_transpose_flag equal to 0 specifies that the transpose process is not applied to the associated palette indices of the current CU.
palette_escape_val_present_flag specifies whether escape-coded sample values are present in the current CU.
palette_prediction_run[i] specifies the difference between the index of the currently reused entry and the index of the next reused entry in previousPaletteEntries, with the following exceptions: palette_prediction_run equal to 0 indicates that the difference between the indices of the current and next reused entries is 1, and palette_prediction_run equal to 1 indicates that no more entries of previousPaletteEntries are reused.
num_signaled_palette_entries specifies the number of entries in the palette that are explicitly signaled for the current coding unit.
palette_entries specifies the i-th element in the palette for color component cIdx.
palette_run_coding( ) specifies the run coding of the index map.
To provide syntax for index prediction across CUs, the following syntax examples are disclosed according to embodiments of the present invention:
syntax example 1: a new flag all_pixel_from_ncp_flag is added. If all_pixel_from_NCP_flag is off, the other syntax is the same as SCM 3.0. In the first row, a duplicate mode of operation may be indicated to allow prediction across CUs. If all_pixel_from_NCP_flag is on, this means that (real) all pixels will be predicted from NCP. The palette_transfer_flag may indicate predictions from the left NCP or above. Other prediction directions may also be indicated. If all_pixel_from_NCP_flag is on, signaling palette_share_flag, palette_escape_val_present_flag, palette_prediction_run, num_signaled_palette_ entries, palette _entries, or palette_run_coding () may be skipped.
Fig. 21 shows an exemplary flow diagram of signaling for supporting index prediction across CUs according to the above example. In step 2110, it is tested whether all_pixel_from_ncp_flag is equal to 1. If the result is "yes," then step 2130 is performed. If the result is "NO," then step 2120 is performed. In step 2130, a palette_transfer_flag is marked to indicate predictions from the left NCP or upper NCP. In step 2120, SCM3.0 based syntax is used for index prediction across CUs.
Syntax example 2: a new flag any_pixel_from_NCP_flag is added. If any_pixel_from_NCP_flag is off, the remaining syntax is the same as SCM 3.0, and the copy run mode is not signaled in the first row (no cross-CU prediction). If any_pixel_from_NCP_flag is on, it implies that some pixels are predicted from NCPs. The encoder may signal palette_share_flag, palette_prediction_run, num_signaled_palette_entries, and palette_escape_val_present_flag, and the decoder can calculate the maximum index value from this information. If the maximum index value is equal to 0, all pixels are predicted from NCPs and palette_run_coding( ) may be skipped. If the maximum index value is greater than 0, some pixels are predicted from NCPs and palette_run_coding( ) may be signaled.
Fig. 22 shows an exemplary flowchart of the signaling supporting cross-CU index prediction according to the above example. In step 2210, it is tested whether any_pixel_from_NCP_flag is equal to 1. If the result is "yes", step 2230 is performed; if the result is "no", step 2220 is performed. In step 2220, the SCM 3.0 based syntax is used for index prediction, without cross-CU prediction. In step 2230, the syntax elements palette_share_flag, palette_prediction_run, num_signaled_palette_entries, and palette_escape_val_present_flag are signaled. The decoder calculates the maximum index value from this information and checks in step 2240 whether the maximum index value is equal to 0. If the result is "yes", step 2260 is performed; if the result is "no", step 2250 is performed. In step 2250, palette_transpose_flag and palette_run_coding( ) are signaled. In step 2260, palette_transpose_flag is signaled and palette_run_coding( ) is skipped (i.e., all pixels are predicted from NCPs).
Syntax example 3: a new flag any_pixel_from_NCP_flag is added. If any_pixel_from_NCP_flag is off, the remaining syntax is the same as SCM 3.0, and the copy run mode is not signaled in the first row (no cross-CU prediction). If any_pixel_from_NCP_flag is on, it implies that some pixels are predicted from NCPs. The encoder may signal palette_share_flag, palette_prediction_run, num_signaled_palette_entries, and palette_escape_val_present_flag, and the decoder can calculate the maximum index value from this information. If the maximum index value is equal to 0, all pixels are predicted from NCPs and palette_run_coding( ) may be skipped, as in syntax example 2. Otherwise, the remaining syntax is the same as SCM 3.0, and the copy run mode is signaled in the first row (cross-CU prediction). Note that if the maximum index value is equal to 1, palette_run_coding( ) and palette_transpose_flag may be skipped.
Fig. 23 shows an exemplary flowchart of the signaling supporting cross-CU index prediction according to the above example. The flowchart is substantially the same as Fig. 22, except for the case where the maximum index value is not 0 (i.e., the "no" path from step 2240). In this case, as shown in step 2310, the SCM 3.0 syntax is used for cross-CU prediction.
Syntax example 4: the all_pixel_from_NCP_flag of syntax example 1 and the any_pixel_from_NCP_flag of syntax example 2 or 3 may be merged into palette_prediction_run. In SCM 3.0, palette_prediction_run is run-length coded. If the first run (i.e., palette_prediction_run[0]) is equal to a fixed or derived value, all_pixel_from_NCP_flag or any_pixel_from_NCP_flag is inferred to be on. This value may be 0 or 1.
Syntax example 5: as shown in step 2410 of Fig. 24, the encoder may signal palette_share_flag, palette_prediction_run, and num_signaled_palette_entries. The palette size can then be derived from this information.
The palette size is checked in step 2420 to determine whether it is equal to 0. If the palette size is greater than 0 (i.e., the "no" path from step 2420), the remaining syntax is the same as SCM 3.0, as shown in step 2430. In the first row, the copy run mode may be signaled depending on whether cross-CU prediction is used.
If the palette size is equal to 0 (i.e., the "yes" path from step 2420), any_pixel_from_NCP_flag is signaled. In step 2440, it is checked whether any_pixel_from_NCP_flag is on. If any_pixel_from_NCP_flag is off (i.e., the "no" path from step 2440), palette_escape_val_present_flag is inferred to be on, as shown in step 2450, and the SCM 3.0 based syntax is used for index prediction without cross-CU prediction. If any_pixel_from_NCP_flag is on, palette_escape_val_present_flag is signaled. If any_pixel_from_NCP_flag is on (i.e., the "yes" path from step 2440) and palette_escape_val_present_flag is off (i.e., the "no" path from step 2460), all pixels can be predicted from NCPs and palette_run_coding( ) may be skipped, as shown in step 2470. If any_pixel_from_NCP_flag is on (i.e., the "yes" path from step 2440) and palette_escape_val_present_flag is on, some pixels are predicted from NCPs and palette_run_coding( ) may be signaled, as shown in step 2480.
Syntax example 6: this example is substantially the same as grammar example 5 except that the any_pixel_from_ncp_flag is on (i.e., the "yes" path from step 2440) and the palette_escape_val_present_flag is on (i.e., the "yes" path from step 2460). In this case, as shown in step 2510 of fig. 25, all pixels are the jump index.
Syntax example 7: the encoder may signal palette_share_flag, palette_prediction_run, and num_signaled_palette_entries, as shown in step 2610 of fig. 26. The palette size may then be derived from this information.
The palette size is checked in step 2620 to determine whether it is equal to 0. If the palette size is equal to 0 (i.e., the "yes" path from step 2620), all_pixel_from_NCP_flag is signaled, and step 2640 checks whether all_pixel_from_NCP_flag is on. If all_pixel_from_NCP_flag is on (i.e., the "yes" path from step 2640), all pixels are inferred to be predicted from the NCP, as shown in step 2660. In this case, a palette_transpose_flag may be signaled to indicate whether the prediction is from the left NCP or the upper NCP. Other prediction directions may also be indicated. Otherwise, the syntax is the same as in SCM 3.0, as shown in step 2650. In the first row, the copy-run mode (i.e., cross-CU prediction) may be signaled.
Syntax example 8: in syntax example 8, run coding may be signaled to indicate the case in fig. 27. The flowchart in fig. 27 is similar to that in fig. 26. However, steps 2630 and 2650 in fig. 26 are replaced with steps 2710 and 2720 (i.e., signaling palette_escape_val_present_flag, palette_transpose_flag, and palette_run_coding()).
Syntax example 9: as shown in step 2810 of fig. 28A, the encoder may signal palette_share_flag, palette_reuse_flag(), num_signaled_palette_entries, and palette_escape_val_present_flag, and the decoder may calculate a maximum index value based on this information. If the maximum index value is equal to 0 or 1 (i.e., the "no" path from step 2820), palette_run_coding() may be skipped. If the maximum index value is equal to 0 (i.e., the "no" path from step 2830), all pixels are predicted from the NCP, as shown in step 2850. If the maximum index value is greater than 1 (i.e., the "yes" path from step 2820), a palette_transpose_flag may be signaled to indicate prediction from the left NCP or the upper NCP, as shown in step 2840. Other prediction directions may also be indicated. If the maximum index value is equal to 1 (i.e., the "yes" path from step 2830), all color indices in the CU are inferred to be 0 or escape, as shown in step 2860.
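The three-way branch of fig. 28A on the derived maximum index value can be sketched as follows. The derivation of the maximum index itself and the returned mode labels are illustrative assumptions, not normative decoder behavior.

```python
def example9_branch(max_index):
    """Map the maximum index value to the decoding behavior of fig. 28A."""
    if max_index == 0:
        # Step 2850: all pixels predicted from NCP; run coding skipped.
        return "all_from_ncp"
    if max_index == 1:
        # Step 2860: every index is either 0 or escape; run coding skipped.
        return "index0_or_escape"
    # Step 2840: a direction flag and palette_run_coding() are signaled.
    return "direction_flag_and_runs"
```

In other words, run coding is only needed once the index alphabet has more than two symbols.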
Syntax example 10: as shown in step 2812 of fig. 28B, the encoder may signal palette_share_flag, palette_reuse_flag(), num_signaled_palette_entries, and palette_escape_val_present_flag, and the decoder may calculate a maximum index value based on this information. If the maximum index value is equal to 0 (i.e., the "no" path from step 2822), all pixels are predicted from the NCP, as shown in step 2832, and palette_run_coding() may be skipped. If the maximum index value is greater than 0 (i.e., the "yes" path from step 2822), a palette_transpose_flag may be signaled to indicate prediction from the left NCP or the upper NCP, as shown in step 2842; other prediction directions may also be indicated. In this case, palette_transpose_flag and palette_run_coding() may be signaled as shown in fig. 20.
In the above syntax examples, the NCP may be the nearest upper row or the nearest left column. If the number of NCP rows is greater than 1 (e.g., the two nearest upper rows or the two nearest left columns), additional signaling may be required to indicate which NCP row is used for prediction. For the "all pixels predicted from NCP" cases in the syntax examples, the NCP may be restricted to the nearest upper row or the nearest left column.
Although specific syntax elements are used to illustrate examples of syntax for supporting inter-CU index prediction according to embodiments of the invention, these specific syntax elements should not be construed as limiting the invention. Other syntax elements and semantics may be used by those skilled in the art to perform index prediction across CUs without departing from the spirit of the invention.
Syntax element for enabling inter-CU index prediction
For index prediction across CUs, an enable flag may be signaled in the PPS (picture parameter set) or SPS (sequence parameter set). In addition, this flag may be signaled only when palette_mode_enabled_flag in the SPS is true; otherwise it is inferred to be false. When the enable flag for cross-CU index prediction is false, cross-CU index prediction is disabled. In another embodiment, when the enable flag for cross-CU index prediction is false, neighboring indices or values may be inferred to be predetermined values. For example, the neighboring index of each color component may be set to 0, and the neighboring value may be set to 128. In the present invention, the method of predicting an index or value in a block is not limited to using information of neighboring pixels (or blocks).
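The fallback behavior when the enable flag is false can be sketched as follows. The default index of 0 and the default value of 128 (the 8-bit mid-level) come from the text; the function shape is an assumption.

```python
DEFAULT_INDEX = 0    # predetermined neighboring index per color component
DEFAULT_VALUE = 128  # predetermined neighboring sample value (8-bit mid-level)

def neighbor_reference(cross_cu_enabled, real_index, real_value):
    """Return the (index, value) pair used for prediction: the true neighbor
    when cross-CU index prediction is enabled, fixed defaults otherwise."""
    if cross_cu_enabled:
        return real_index, real_value
    return DEFAULT_INDEX, DEFAULT_VALUE
```

Substituting fixed defaults when the tool is disabled keeps the rest of the prediction pipeline unchanged.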
Context selection for the run mode: in SCM 3.0, the syntax element palette_mode for index map coding is context coded. palette_mode has two contexts. The context is selected according to the palette_mode of the above index. However, for an index in the first row, there is no above index.
Several methods for handling this context selection are disclosed herein:
1. The first row uses a fixed context. The fixed context may be the first context (the one used when the above index is coded in INDEX-RUN mode) or the second context (the one used when the above index is coded in COPY-RUN mode).
2. The indices of the first row may use a third context.
3. All indices in the CU use the same context to code palette_mode. The context may be selected according to the CU size, with all indices in the CU sharing that context. In this case there may be two contexts: if the CU size is greater than a fixed or derived threshold, the first context is used; otherwise, the other context is used. In another example, the number of contexts may be reduced to 1, in which case the context is the same for all CU sizes.
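The CU-size-based variant above can be sketched as follows. The threshold of 16 is an arbitrary placeholder for the fixed or derived threshold, and the context indices are illustrative.

```python
CU_SIZE_THRESHOLD = 16  # placeholder for the fixed or derived threshold

def palette_mode_context(cu_size, num_contexts=2):
    """Select the single context shared by every index in the CU."""
    if num_contexts == 1:
        return 0  # reduced variant: one context for all CU sizes
    return 0 if cu_size > CU_SIZE_THRESHOLD else 1
```

Because the context depends only on the CU size, the parsing of palette_mode for one index no longer depends on the mode of the index above it.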
Modified intra prediction scheme
In another embodiment, to achieve the same effect as the prediction schemes disclosed in fig. 19 and fig. 20 based on conventional intra prediction in HEVC, the HEVC range extension, or HEVC SCC, a syntax element rqt_root_cbf is signaled to indicate whether any TU (transform unit) rooted at the current CU has a residual. The signaling of rqt_root_cbf may be the same as the rqt_root_cbf signaling used for inter-CU residual coding in HEVC.
In intra prediction with rqt_root_cbf according to an embodiment of the present invention, rqt_root_cbf may be selectively applied to a subset of intra prediction modes, including luma and chroma intra prediction modes. rqt_root_cbf is signaled only for those intra prediction modes in the subset. In one example, the modification applies only to luma intra prediction modes equal to the horizontal or vertical prediction mode, while chroma intra prediction modes are not modified.
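The mode-subset restriction in this example can be sketched as follows. The subset containing only the horizontal and vertical luma modes follows the example in the text; the mode numbering assumes the HEVC convention (10 = horizontal, 26 = vertical) and the function name is illustrative.

```python
HOR_IDX, VER_IDX = 10, 26  # HEVC angular intra mode indices (assumed convention)

def signals_rqt_root_cbf(luma_intra_mode):
    """rqt_root_cbf is signaled only for intra modes in the chosen subset."""
    return luma_intra_mode in (HOR_IDX, VER_IDX)
```

All other intra modes keep the unmodified residual signaling.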
In another embodiment, a CU level flag is used for an intra CU (intra CU) to indicate whether there is a residual signal for encoding and decoding the intra CU. Similarly, the flag may be selectively applied to a subset of intra prediction modes.
Intra block copy (IntraBC) search
One embodiment of the present invention changes the source pixels of IntraBC. The pixels used for IntraBC prediction and compensation may be unfiltered pixels (i.e., before deblocking) or filtered pixels (i.e., after deblocking and SAO (sample adaptive offset)), depending on the location of the pixels.
For example, as shown in fig. 29, the pixels used for IntraBC prediction and compensation may be the unfiltered pixels in the current CTU (2910) and the left CTU (2920). The other pixels still use filtered pixels. Fig. 29 shows an example of source pixels according to an embodiment of the invention, where the dot-filled pixels are from unfiltered pixels and the transparent pixels are from filtered pixels for IntraBC.
In another example, as shown in fig. 30, the pixels used for IntraBC prediction and compensation are the unfiltered pixels in the current CTU (3010), the left CTU (3020), the bottom four rows of the above CTU (3030), and the bottom four rows of the above-left CTU (3040). Other pixels use filtered pixels. In fig. 30, the dot-filled pixels are from unfiltered pixels and the transparent pixels are from filtered pixels for IntraBC.
In another example, as shown in fig. 31, the pixels used for IntraBC prediction and compensation are the unfiltered pixels in the current CTU, the N left CTUs, the bottom four rows of the above CTU, and the bottom four rows of the N above-left CTUs, where N is a positive integer. Other pixels use filtered pixels. In fig. 31, the dot-filled pixels are from unfiltered pixels, while the transparent pixels are from filtered pixels for IntraBC.
In another example, the pixels used for IntraBC prediction and compensation are the unfiltered pixels in the current CTU, the bottom four rows of the above CTU, and the right four columns of the left CTU. Other pixels use filtered pixels, as shown in fig. 32. In fig. 32, the dot-filled pixels are from unfiltered pixels and the transparent pixels are from filtered pixels for IntraBC.
In another example, the pixels used for IntraBC prediction and compensation are the unfiltered pixels in the current CTU, the N left CTUs, the bottom four rows of the above CTU, the bottom four rows of the N above-left CTUs, and the right four columns of the (N+1)-th left CTU, where N is a positive integer. Other pixels use filtered pixels. In fig. 33, the dot-filled pixels are from unfiltered pixels, while the transparent pixels are from filtered pixels for IntraBC.
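The position-dependent choice between unfiltered and filtered reference pixels can be sketched for the fig. 29 configuration, where only the current CTU and the left CTU use unfiltered samples. Coordinates are in CTU units and the function name is an assumption.

```python
def use_unfiltered_fig29(pix_ctu_x, pix_ctu_y, cur_ctu_x, cur_ctu_y):
    """True if the IntraBC reference pixel comes from unfiltered
    (pre-deblocking) samples: fig. 29 limits this region to the
    current CTU and the CTU immediately to its left."""
    same_row = pix_ctu_y == cur_ctu_y
    return same_row and cur_ctu_x - 1 <= pix_ctu_x <= cur_ctu_x
```

The other configurations (figs. 30-33) would extend the same test to more CTUs and to partial rows/columns of neighboring CTUs.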
Fig. 34 shows an exemplary flowchart of a system in which palette-coded blocks share a transform coefficient buffer according to an embodiment of the present invention. The system determines the current prediction mode of the current block in step 3410 and designates a storage area as a transform coefficient buffer in step 3420. In step 3430, if the current prediction mode is an intra prediction mode or an inter prediction mode, information related to the transform coefficients of the prediction residual generated by intra or inter prediction for the current block is stored in the transform coefficient buffer. In step 3440, if the current prediction mode is the palette coding mode, information related to palette data associated with the current block is stored in the transform coefficient buffer. In step 3450, the current block is encoded or decoded based on the transform-coefficient-related information stored in the transform coefficient buffer if the current block is coded in the intra or inter prediction mode, or based on the palette-data-related information stored in the transform coefficient buffer if the current block is coded in the palette coding mode.
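The buffer-sharing idea of fig. 34 can be sketched as a single storage area that is reinterpreted according to the prediction mode. The class and method names are illustrative, not part of the disclosed system.

```python
class SharedCoeffBuffer:
    """One storage area reused for transform coefficients (intra/inter
    prediction) or palette data (palette coding mode), per fig. 34."""

    def __init__(self):
        self.kind = None
        self.payload = None

    def store(self, prediction_mode, payload):
        # Steps 3430/3440: the same buffer holds either kind of data,
        # depending on the current prediction mode.
        self.kind = "palette" if prediction_mode == "palette" else "coeff"
        self.payload = payload

    def load(self):
        # Step 3450: coding/decoding reads back whichever data was stored.
        return self.kind, self.payload
```

Reusing one buffer this way avoids allocating separate storage for palette data in a palette-coded block.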
The previous description is presented to enable any person skilled in the art to make or use the invention in the context of a particular application and its requirements. Various modifications to the described embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. Thus, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. In the previous detailed description, numerous specific details were set forth in order to provide a thorough understanding of the present invention. However, those skilled in the art will appreciate that the present invention may be practiced without these specific details.
Embodiments of the invention as described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the invention may be one or more electronic circuits integrated into a video compression chip, or program code integrated into video compression software, to perform the processing described herein. An embodiment of the invention may also be program code to be executed on a digital signal processor (DSP) to perform the processing described herein. The invention may also involve a number of functions performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors may be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software code, and other means of configuring code to perform tasks according to the invention, will not depart from the spirit and scope of the invention.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (2)

1. A video encoding and decoding method using a plurality of prediction modes including a palette encoding and decoding mode, the method comprising:
parsing all initial palette predictors of the same color component gathered together in a sequence parameter set, an image parameter set, or a slice header from a video bitstream at a decoder side, or gathering all initial palette predictors of the same color component together at an encoder side; and
at least one palette codec block within a respective sequence, picture, or slice is decoded or encoded using the initial palette predictor.
2. A video encoding and decoding method using a plurality of prediction modes including a palette encoding and decoding mode, the method comprising:
at the decoder side, parsing all palette predictor entries or palette entries of the current block of the same color component that are aggregated together from the video bitstream, or aggregating all palette predictor entries or palette entries of the same color component at the encoder side; and
the current block is decoded or encoded using a palette predictor composed of all palette predictor entries or a palette table composed of all palette entries.
CN202111452015.8A 2014-11-12 2015-11-12 Jumping-out pixel coding and decoding method in index mapping coding and decoding Active CN114630131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452015.8A CN114630131B (en) 2014-11-12 2015-11-12 Jumping-out pixel coding and decoding method in index mapping coding and decoding

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US201461078595P 2014-11-12 2014-11-12
US62/078,595 2014-11-12
US201462087454P 2014-12-04 2014-12-04
US62/087,454 2014-12-04
US201562119950P 2015-02-24 2015-02-24
US62/119,950 2015-02-24
US201562145578P 2015-04-10 2015-04-10
US62/145,578 2015-04-10
US201562162313P 2015-05-15 2015-05-15
US62/162,313 2015-05-15
US201562170828P 2015-06-04 2015-06-04
US62/170,828 2015-06-04
PCT/CN2015/094410 WO2016074627A1 (en) 2014-11-12 2015-11-12 Methods of escape pixel coding in index map coding
CN202111452015.8A CN114630131B (en) 2014-11-12 2015-11-12 Jumping-out pixel coding and decoding method in index mapping coding and decoding
CN201580061695.7A CN107005717B (en) 2014-11-12 2015-11-12 Skip pixel coding and decoding method in index mapping coding and decoding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201580061695.7A Division CN107005717B (en) 2014-11-12 2015-11-12 Skip pixel coding and decoding method in index mapping coding and decoding

Publications (2)

Publication Number Publication Date
CN114630131A CN114630131A (en) 2022-06-14
CN114630131B true CN114630131B (en) 2023-11-07

Family

ID=59422558

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201580061695.7A Active CN107005717B (en) 2014-11-12 2015-11-12 Skip pixel coding and decoding method in index mapping coding and decoding
CN202111452015.8A Active CN114630131B (en) 2014-11-12 2015-11-12 Jumping-out pixel coding and decoding method in index mapping coding and decoding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201580061695.7A Active CN107005717B (en) 2014-11-12 2015-11-12 Skip pixel coding and decoding method in index mapping coding and decoding

Country Status (1)

Country Link
CN (2) CN107005717B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102152239B1 (en) * 2015-03-18 2020-09-07 에이치에프아이 이노베이션 인크. Method and apparatus for index map coding in video and image compression
CN112219399A (en) * 2018-07-04 2021-01-12 阿里巴巴集团控股有限公司 Palette-based residual coding in video compression systems
CN113170192B (en) * 2018-11-15 2023-12-01 北京字节跳动网络技术有限公司 Affine MERGE and MVD
CN113396592B (en) * 2019-02-02 2023-11-14 北京字节跳动网络技术有限公司 Buffer management for intra block copying in video codec
MX2021009943A (en) * 2019-02-24 2021-09-21 Beijing Bytedance Network Tech Co Ltd Independent coding of palette mode usage indication.
JP7405861B2 (en) 2019-03-01 2023-12-26 北京字節跳動網絡技術有限公司 Direction-based prediction for intra block copy in video coding
EP3941048A4 (en) * 2019-03-13 2022-12-28 LG Electronics Inc. Image encoding/decoding method and device, and method for transmitting bitstream
CN113678448A (en) * 2019-04-09 2021-11-19 北京字节跳动网络技术有限公司 Entry structure for palette mode encoding and decoding
CN117221544A (en) * 2019-05-19 2023-12-12 字节跳动有限公司 Transform bypass coding residual block in digital video
CN114072849B (en) * 2019-06-28 2023-12-15 字节跳动有限公司 Chroma intra mode derivation in screen content coding
MX2022000110A (en) 2019-07-10 2022-02-10 Beijing Bytedance Network Tech Co Ltd Sample identification for intra block copy in video coding.
CN117459744A (en) * 2019-07-20 2024-01-26 北京字节跳动网络技术有限公司 Condition dependent codec with palette mode usage indication
FI4002843T3 (en) * 2019-07-21 2024-04-25 Lg Electronics Inc Image encoding/decoding method and device for signaling chroma component prediction information according to whether palette mode is applicable, and method for transmitting bitstream
CN114145013B (en) * 2019-07-23 2023-11-14 北京字节跳动网络技术有限公司 Mode determination for palette mode coding and decoding
CN116684583A (en) * 2019-08-26 2023-09-01 Lg电子株式会社 Decoding device, encoding device, and data transmitting device
KR20220050968A (en) * 2019-10-05 2022-04-25 엘지전자 주식회사 Transform skip and palette coding related information-based image or video coding
WO2021066618A1 (en) * 2019-10-05 2021-04-08 엘지전자 주식회사 Image or video coding based on signaling of transform skip- and palette coding-related information
WO2021066609A1 (en) * 2019-10-05 2021-04-08 엘지전자 주식회사 Image or video coding based on transform skip- and palette coding-related advanced syntax element
EP4088455A4 (en) * 2020-01-11 2023-03-22 Beijing Dajia Internet Information Technology Co., Ltd. Methods and apparatus of video coding using palette mode

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5883633A (en) * 1997-04-15 1999-03-16 Microsoft Corporation Method and system of variable run length image encoding using sub-palette
CN101340587A (en) * 2007-07-05 2009-01-07 联发科技股份有限公司 Method for encoding input image, method and apparatus for displaying an encoded image
CN103703779A (en) * 2011-11-03 2014-04-02 谷歌公司 Image compression using sub-resolution images

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPO648397A0 (en) * 1997-04-30 1997-05-22 Canon Information Systems Research Australia Pty Ltd Improvements in multiprocessor architecture operation
US7302006B2 (en) * 2002-04-30 2007-11-27 Hewlett-Packard Development Company, L.P. Compression of images and image sequences through adaptive partitioning
US9232226B2 (en) * 2008-08-19 2016-01-05 Marvell World Trade Ltd. Systems and methods for perceptually lossless video compression
US9405734B2 (en) * 2012-12-27 2016-08-02 Reflektion, Inc. Image manipulation for web content
US9654777B2 (en) * 2013-04-05 2017-05-16 Qualcomm Incorporated Determining palette indices in palette-based video coding


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Block-adaptive palette-based prediction for depth map coding; Shinya Shimizu; 2011 18th IEEE International Conference on Image Processing; full text *
Xiaoyu Xiu et al.; Removal of parsing dependency in palette-based coding; JCTVC-S0181; 2014; full text *
Research on multiple image formats and three-dimensional image output functions; Hu Jian; Master's thesis; full text *
Research and implementation of video data acquisition and coding technology; Li Shaolong; China Master's Theses Full-text Database (electronic journal); full text *

Also Published As

Publication number Publication date
CN107005717B (en) 2020-04-07
CN114630131A (en) 2022-06-14
CN107005717A (en) 2017-08-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant