WO2015094711A1 - Palette prediction and sharing in video coding - Google Patents

Palette prediction and sharing in video coding

Info

Publication number
WO2015094711A1
Authority
WO
WIPO (PCT)
Prior art keywords
palette
palettes
current
index
color
Prior art date
Application number
PCT/US2014/068725
Other languages
French (fr)
Inventor
Wang-lin LAI
Shan Liu
Tzu-Der Chuang
Xiaozhong Xu
Jing Ye
Original Assignee
Mediatek Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Priority to CA2934246A (granted as CA2934246C)
Priority to US15/104,654 (granted as US10469848B2)
Priority to EP14871918.0A (published as EP3085068A4)
Priority to CN201480069231.6A (granted as CN106031142B)
Publication of WO2015094711A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/10: using adaptive coding
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: the unit being a colour or a chrominance component
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115: Selection of the code volume for a coding unit prior to coding
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167: Position within a video image, e.g. region of interest [ROI]
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: by compressing encoding parameters before transmission
    • H04N19/50: using predictive coding
    • H04N19/593: involving spatial prediction techniques
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock

Definitions

  • the present invention relates to palette coding for video data that may contain color contents with limited colors in some areas.
  • the present invention relates to techniques to improve the performance by developing more efficient palette sharing.
  • High Efficiency Video Coding is a new coding standard that has been developed in recent years.
  • HEVC High Efficiency Video Coding
  • a CU may begin with a largest CU (LCU), which is also referred to as a coding tree unit (CTU) in HEVC.
  • CTU coded tree unit
  • PU prediction unit
  • the HEVC extensions include range extensions (RExt), which target non-4:2:0 color formats, such as 4:2:2 and 4:4:4, and higher bit-depth video such as 12, 14 and 16 bits per sample.
  • RExt range extensions
  • One of the likely applications utilizing RExt is screen sharing over wired or wireless connections.
  • Due to the specific characteristics of screen contents, coding tools have been developed that demonstrate significant gains in coding efficiency.
  • the palette coding (a.k.a. major color based coding) techniques represent a block of pixels using indices to the palette (major colors), and encode the palette and the indices by exploiting spatial redundancy. While the total number of possible color combinations is huge, the number of colors in an area of a picture is usually very limited for typical screen contents. Therefore, palette coding becomes very effective for screen content materials.
  • JCTVC-N0247 (Guo et al., "RCE3: Results of Test 3.1 on Palette Mode for Screen Content Coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July - 2 Aug. 2013, Document: JCTVC-N0247).
  • JCTVC-N0247 the palette of each color component is constructed and transmitted.
  • the palette can be predicted (or shared) from its left neighboring CU to reduce the bitrate.
  • a pseudo code for the method disclosed in JCTVC-N0247 is shown as follows.
  • palette_pred[color_index] when the palette prediction mode is used as indicated by palette_pred[color_index], the palette for the current coding unit having color index (i.e., Current CU palette[color_index]) is shared from the palette of the CU having the same color index at the left side of the current CU (i.e., left CU palette[color_index]). Otherwise, a new palette is parsed from the bitstream at the decoder side or signaled in the bitstream at the encoder side.
  • the method according to JCTVC-N0247 does not use palette prediction (sharing) from the above CU. Furthermore, if the left CU is not coded using palette mode, the palette for the current CU cannot be predicted from the left CU.
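The referenced pseudo code is not reproduced in this text. As an illustration only, the left-CU sharing decision described for JCTVC-N0247 might be sketched in Python as follows (the function name, the `parse_new_palette` callback, and the dictionary layout are assumptions, not taken from the contribution):

```python
def decode_cu_palette(palette_pred, left_cu_palette, parse_new_palette):
    """Per-component palette derivation in the spirit of JCTVC-N0247.

    palette_pred[color_index] == 1 means the current CU shares (copies)
    the palette of the left CU for that color component; otherwise a new
    palette is parsed from the bitstream.
    """
    current_palette = {}
    for color_index in palette_pred:
        if palette_pred[color_index]:
            # Share: copy the left CU's palette for this component.
            current_palette[color_index] = list(left_cu_palette[color_index])
        else:
            # No sharing: parse a new palette from the bitstream.
            current_palette[color_index] = parse_new_palette(color_index)
    return current_palette
```

Note that, as the surrounding text observes, this scheme has no way to predict from the above CU or from a non-palette-coded left CU.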
  • JCTVC-N0249 Another palette coding method is disclosed in JCTVC-N0249 (Guo et al, "non-RCE3 : Modified Palette Mode for Screen Content Coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July - 2 Aug. 2013 Document: JCTVC-N0249).
  • each element in the palette is a triplet, representing a specific combination of the three color components.
  • JCTVC-O0182 Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Geneva, CH, 23 Oct. - 1 Nov. 2013, Document: JCTVC-O0182).
  • In JCTVC-O0182, the palette of each component is constructed and transmitted. However, instead of predicting the entire palette from the left CU, an individual entry in a palette can be predicted from the exact corresponding palette entry in the above CU or the left CU.
  • a pseudo code for the method disclosed in JCTVC-O0182 is shown as follows.
  • the individual entry n of the palette for the current CU may be shared from the corresponding palette entry of the above CU (i.e., Above CU palette[color_index][n]) or the left CU (i.e., Left CU palette[color_index][n]) when palette prediction is selected as indicated by palette_pred[color_index][n] being 1.
  • When palette prediction is not used, as indicated by palette_pred[color_index][n] being 0, the palette entry for the current CU is parsed from the bitstream (i.e., Parse syntax for current CU palette[color_index][n]) at the decoder side or signaled in the bitstream at the encoder side.
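The entry-wise prediction just described can be sketched in Python as follows. This is a hedged approximation: the names, the per-CU `pred_from_above` selector, and the `parse_entry` callback are illustrative assumptions, not the contribution's actual syntax.

```python
def decode_entrywise_palette(palette_pred, pred_from_above,
                             above_palette, left_palette, parse_entry):
    """Entry-wise palette prediction in the spirit of JCTVC-O0182.

    palette_pred[color_index][n] selects, per entry n, whether the entry
    is copied from the exact corresponding entry of the above or left CU
    (pred_from_above chooses which) or parsed from the bitstream.
    """
    current = {}
    for color_index, flags in palette_pred.items():
        current[color_index] = []
        for n, use_pred in enumerate(flags):
            if use_pred:
                src = above_palette if pred_from_above else left_palette
                current[color_index].append(src[color_index][n])
            else:
                current[color_index].append(parse_entry(color_index, n))
    return current
```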
  • a method and apparatus for palette prediction and sharing according to the present invention are disclosed.
  • a method incorporating an embodiment of the present invention determines one or more palette sharing flags for the current block.
  • a set of current palettes corresponding to the set of color components is generated according to said one or more palette sharing flags. If a first palette sharing flag is asserted, one or more current palettes indicated by the first palette sharing flag are copied entirely from one or more reference palettes of a set of reference palettes. If the first palette sharing flag is not asserted, one or more current palettes indicated by the first palette sharing flag are derived from a bitstream associated with the video data. Encoding or decoding is then applied to the current block according to the set of current palettes.
  • the current block corresponds to a coding unit (CU), a prediction unit (PU), a largest coding unit (LCU) or a coding tree block (CTB).
  • CU coding unit
  • PU prediction unit
  • LCU largest coding unit
  • CTB coding tree block
  • the palette sharing flags may correspond to a single sharing flag and if the single sharing flag is asserted, all current palettes of the set of current palettes are copied entirely from all reference palettes of the set of reference palettes.
  • Each of said one or more palette sharing flags may correspond to each of the set of color components, and if one corresponding sharing flag is asserted, one corresponding current palette with one corresponding color component is copied entirely from one corresponding reference palette with said one corresponding color component.
  • the palette sharing flags may also correspond to a set of sharing flags for the color components (YUV, RGB, etc.).
  • the palette sharing flags may correspond to a luma palette sharing flag and a chroma palette sharing flag, and if the luma palette sharing flag or the chroma palette sharing flag is asserted, a current luma palette or at least one current chroma palette is copied entirely from one corresponding luma reference palette or at least one corresponding chroma reference palette.
  • the set of reference palettes corresponds to a most recent set of palettes among one or more recent sets of palettes associated with one or more previous blocks.
  • the one or more recent sets of palettes associated with one or more previous blocks are referred to as a "palette book" in this disclosure.
  • the one or more recent sets of palettes may correspond to N recent sets of palettes, wherein N is an integer greater than zero.
  • the set of reference palettes may also be shared from the above CU and/or the left CU.
  • an additional flag is used to indicate whether the set of palettes is shared from the above CU or the left CU.
  • the set of palettes of the above CU is compared with the set of palettes of the left CU. If they are identical, the additional flag can be skipped.
  • a set of replacement palettes can be used to replace the identical set of palettes.
  • the set of replacement palettes may correspond to the set of palettes of the above-left CU or the above-right CU.
  • the set of replacement palettes may also be determined from a previously coded set of palettes.
  • FIG. 1 illustrates a flowchart of an exemplary system incorporating an embodiment of the present invention to share palette from previously processed palette based on one or more palette sharing flags.
  • the present invention discloses various improvements and simplified palette coding.
  • the palette is predicted or shared on a CU-by-CU basis.
  • the palette prediction is applied to coding units (CUs) in a raster scan fashion (i.e., row-by-row and from top row to bottom row), the above CU and the left CU represent previously processed CUs.
  • the CU-by-CU based palette sharing can be applied to the triplet palette format as disclosed in JCTVC-N0249.
  • the prediction and sharing is accomplished by maintaining a history of previously coded palettes using a "palette book".
  • the palettes to be shared may be associated with CUs other than the above CU or the left CU.
  • a most recently coded palette can be saved within a given LCU (or so called coding tree unit, CTU), or within a given region of multiple LCUs (CTUs) such as CTU lines, or within the current slice.
  • CTU coding tree unit
  • a CU within the current LCU (or within a given region of multiple LCUs (CTUs) such as CTU lines, or within the current slice) can either share this palette or use its own (new) palette.
  • First Embodiment: Component-wise control of palette sharing from the left/above CU.
  • the palette or palettes for the current CU can share with the palette or palettes from the above CU or the left CU, and the palette sharing is performed for each color component.
  • An indication (e.g., palette_pred[color_index]) for each color component can be used to indicate whether palette prediction is used. If palette prediction is used, another indication can be used to indicate whether the prediction is from the above CU or the left CU. For example, an indication pred_from_above can be used. If pred_from_above has a value of 1, palette prediction from the above CU is selected. Otherwise, palette prediction is from the left CU.
  • An exemplary pseudo code according to this embodiment is shown below.
  • palette_pred[color_index] indicates that palette prediction is used for the specified color component (indicated by color index)
  • an entire palette, including the palette size of the specified color component with color index, is copied from the palette of the above block or the left block.
  • palette_pred[color_index] indicates that palette prediction is not used
  • Otherwise, a new palette for the current CU is parsed from the bitstream (i.e., Parse syntax num_major_color[color_index]).
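The exemplary pseudo code referred to above did not survive in this text. A minimal Python sketch of the first embodiment, with per-component sharing and source-selection flags (the function name, flag containers, and `parse_palette` callback are illustrative assumptions), might look like:

```python
def derive_palettes_componentwise(palette_pred, pred_from_above,
                                  above_cu, left_cu, parse_palette):
    """First-embodiment style sharing: one palette_pred flag and one
    pred_from_above flag per color component; a shared palette is copied
    entirely, including its size."""
    current = {}
    for color_index in palette_pred:
        if palette_pred[color_index]:
            # Share: copy the whole palette from the selected neighbor.
            source = above_cu if pred_from_above[color_index] else left_cu
            current[color_index] = list(source[color_index])
        else:
            # Parse num_major_color and the palette entries instead.
            current[color_index] = parse_palette(color_index)
    return current
```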
  • Second Embodiment: CU-wise control of palette sharing from the left/above CU.
  • the control of whether to share the palette for the current CU is the same for all color indices.
  • the control flags palette_pred and pred_from_above are the same for all color indices.
  • An exemplary pseudo code according to this embodiment is shown below.
  • palette_pred indicates that palette prediction is used
  • entire palettes, including the palette sizes for all color indices, are copied from the corresponding palettes of the above block or the left block.
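The second embodiment's CU-wise control can be sketched the same way, now with a single pair of flags for the whole CU (again, names and the `parse_palette` callback are illustrative assumptions, not the patent's syntax):

```python
def derive_palettes_cuwise(palette_pred, pred_from_above,
                           above_cu, left_cu, parse_palette):
    """Second-embodiment style sharing: a single palette_pred flag (and a
    single pred_from_above flag) controls all color components at once."""
    if palette_pred:
        # Copy every component's palette entirely from one neighbor.
        source = above_cu if pred_from_above else left_cu
        return {c: list(p) for c, p in source.items()}
    # Otherwise parse a new palette for every color component.
    return {c: parse_palette(c) for c in above_cu}
```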
  • This embodiment may be considered an example of component-wise palette sharing control, where the color components correspond to a luma component and at least one chroma component.
  • palette sharing from above/left has two prediction control flags, where one flag is for luma and another is for chroma components. This can be particularly useful for contents with different degree of variations in the luma and chroma components.
  • An exemplary pseudo code according to this embodiment is shown below for YUV color components.
  • If palette_pred_Y or palette_pred_UV indicates that palette prediction is used, the entire palette including the palette size for Y or U/V is copied from the corresponding palette of the above block or the left block.
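The third embodiment's split luma/chroma control might be sketched as follows; this is a hedged illustration (a single `pred_from_above` for the CU and the callback names are assumptions):

```python
def derive_palettes_yuv(palette_pred_y, palette_pred_uv, pred_from_above,
                        above_cu, left_cu, parse_palette):
    """Third-embodiment style sharing for YUV: one flag for the luma
    palette and one shared flag for the U/V palettes, so luma and chroma
    can be predicted independently."""
    source = above_cu if pred_from_above else left_cu
    current = {}
    # Luma palette: shared or parsed according to palette_pred_Y.
    current['Y'] = list(source['Y']) if palette_pred_y else parse_palette('Y')
    # Chroma palettes: one flag controls both U and V.
    for c in ('U', 'V'):
        current[c] = list(source[c]) if palette_pred_uv else parse_palette(c)
    return current
```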
  • palette coding and sharing can be performed on a prediction unit (PU) basis, an LCU basis, a coding tree unit (CTU) basis, or a multiple-LCU basis.
  • the signaling bits can be context-coded.
  • the syntax bit in Table 1 for each color component can use a different context for CABAC coding, or use the same context.
  • different context can be used for luma/chroma control scheme in the third embodiment.
  • a palette book is used.
  • Various means can be used to generate the palette book.
  • a history of the recently encoded palette sets can be stored in a "palette book”.
  • the current CU may choose to share one of the palette sets stored in the palette book as indicated by book_index.
  • the current CU may also use its own palette and the current palette will replace one set in the palette book.
  • the new palette is encoded and transmitted to the decoder so that the same palette book updating process can be carried out in the same way at both the encoder and the decoder. There are various ways to update and order the previously coded palette sets.
  • palette sets are simply ordered based on their coding order, i.e. the most recently coded palette is stored at the beginning of the "palette book" (i.e., having a smallest index), while the older ones are stored afterwards (i.e., having larger indices).
  • a palette book with size KK is used to store KK sets of previously coded palettes.
  • entries 1 to (KK-1) in the "palette book" will be moved to entries 2 through KK in order to make the first entry available for the newly coded palette. This is simply a first-in-first-out updating and ordering process.
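The first-in-first-out book update described above can be sketched compactly in Python (the function name and list-of-palette-sets representation are illustrative assumptions):

```python
def update_palette_book(palette_book, new_palette, book_size):
    """FIFO palette book update: the newly coded palette set is stored at
    the beginning (smallest index) and older sets are shifted toward
    larger indices; the oldest set is dropped when the book exceeds its
    size KK (book_size)."""
    palette_book.insert(0, new_palette)   # newest at index 0
    del palette_book[book_size:]          # keep at most book_size sets
    return palette_book
```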
  • the following pseudo-code demonstrates an example of palette sharing using a palette book when the sharing is controlled on a CU-wise basis (i.e., sharing for all color components).
  • the embodiment may also be used for the triplet palette format as disclosed in JCTVC-N0249.
  • palette_book[k][color_index] = palette_book[k-1][color_index]
  • palette_book[0][color_index][n] = current CU palette[color_index][n]
  • a palette book index (i.e., book_index) is determined from the bitstream.
  • if the current CU uses palette prediction, the palette for the current CU (i.e., Current CU palette[color_index]) is derived from the palette book entry having book_index (i.e., palette_book[book_index][color_index]).
  • if the current CU does not use palette prediction, entries 1 to (KK-1) in the "palette book" will be moved to entries 2 through KK in order to make the first entry available for the newly coded palette.
  • the newly parsed current CU palette (i.e., Parse syntax for current CU palette[color_index][n]) is then stored in the first entry (i.e., palette_book[0][color_index][n] = current CU palette[color_index][n]).
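A Python sketch of this fourth-embodiment decoding flow, combining the book lookup with the FIFO update so that encoder and decoder stay synchronized (function names and the `parse_palette` callback are illustrative assumptions):

```python
def decode_with_palette_book(use_book, book_index, palette_book,
                             parse_palette, book_size):
    """Fourth-embodiment style decoding: if the CU shares, book_index
    selects one stored palette set; otherwise a new palette set is parsed
    and pushed into the book FIFO-style."""
    if use_book:
        # Share the indicated previously coded palette set.
        return palette_book[book_index]
    # Parse a new palette set and update the book in place.
    new_set = parse_palette()
    palette_book.insert(0, new_set)   # newest set gets the smallest index
    del palette_book[book_size:]      # keep at most KK sets
    return new_set
```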
  • the fifth embodiment is similar to the fourth embodiment except that the sharing control is component-wise.
  • An exemplary pseudo code according to this embodiment is shown below for each color component.
  • palette[color_index] = palette_book[book_index[color_index]][color_index]
  • palette_book[k][color_index] = palette_book[k-1][color_index]
  • palette_book[0][color_index][n] = current CU palette[color_index][n]
  • the luma component and the chroma components may have separate sharing controls (e.g., one control flag for luma and one control flag for chroma).
  • The luma and chroma components may each have their own palette tables. This may be particularly useful for content with different degrees of variation in the luma and chroma components.
  • An exemplary pseudo code according to this embodiment is shown below for the YUV color format, where a single sharing control flag is used for the U and V components (i.e., palette_pred_UV); separate control flags may also be used.
  • palette_book[k][Y_index] = palette_book[k-1][Y_index]
  • palette_book[0][Y_index][n] = current CU palette[Y_index][n]
  • palette_book[k][U_index] = palette_book[k-1][U_index]
  • palette_book[0][U_index][n] = current CU palette[U_index][n]
  • palette coding and sharing can be performed on prediction unit (PU) basis, largest coding unit (LCU) basis, coding tree unit (CTU) basis, or multiple LCUs basis.
  • PU prediction unit
  • LCU largest coding unit
  • CTU coding tree unit
  • While a first-in-first-out scheme is used in the pseudo codes for the fourth embodiment through the sixth embodiment for the "palette book" updating and ordering, other methods can also be utilized as long as the encoder and the decoder can perform the same process.
  • a counter can be used to keep track of the frequency of each palette set being selected for sharing.
  • the palette book can then be updated according to the frequency, such as ordering them from high selection frequency to low selection frequency.
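The frequency-based ordering alternative can be sketched as follows; this is an assumption-laden illustration (the counter representation and function name are not from the document):

```python
def reorder_book_by_frequency(palette_book, hit_counts):
    """Alternative book ordering: a counter tracks how often each stored
    palette set is selected for sharing, and the book is reordered from
    high selection frequency to low, so frequently shared sets get the
    smallest (cheapest-to-signal) book indices."""
    order = sorted(range(len(palette_book)),
                   key=lambda k: hit_counts[k], reverse=True)
    return ([palette_book[k] for k in order],
            [hit_counts[k] for k in order])
```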
  • the signaling bits of the variable length code can be context-coded.
  • the syntax bit in Table 2 for each color component can use a different context for CABAC (context-adaptive binary arithmetic coding), or the same context.
  • CABAC context adaptive binary arithmetic coding
  • different context can be used for luma/chroma control scheme in the sixth embodiment.
  • Since the "palette book" keeps track of and updates the most recently coded palette sets, there is no line buffer issue.
  • the selection of the palette book size becomes a tradeoff between providing better palette matching (i.e., using a larger book size) and reducing signaling side information (i.e., using a smaller book size).
  • palette_book[color_index][n] = current CU palette[color_index][n]
  • the palette book is reset at the beginning of each LCU, LCU row, or slice as shown in the pseudo code (i.e., "If (begin new LCU, or begin new LCU row, or begin new slice), Clear palette book").
  • palette_book[Y_index][n] = current CU palette[Y_index][n]
  • palette_book[V_index][n] = current CU palette[V_index][n]
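The single-entry book variant described here (most recently coded palette only, cleared at LCU, LCU-row, or slice boundaries) can be sketched as one step of a decode loop. The function shape and return convention are illustrative assumptions:

```python
def single_book_step(palette_book, at_boundary, use_shared, parse_palette):
    """Single-entry palette book: the book holds only the most recently
    coded palette set, is cleared at each new LCU / LCU row / slice, and
    a CU either shares it or parses (and stores) its own palette.

    Returns (current_palette_set, updated_palette_book)."""
    if at_boundary:
        palette_book = None  # Clear palette book
    if use_shared and palette_book is not None:
        return palette_book, palette_book  # share the stored set
    new_set = parse_palette()
    return new_set, new_set  # book now holds the newly coded palette
```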
  • Another aspect of the present invention is related to syntax of palette prediction.
  • a pseudo code for parsing the syntax associated with the signaling of palette sharing in Table 1 is illustrated as follows.
  • Another aspect of the present invention addresses the case of identical palette from the above CU and the left CU when palette sharing is from the above CU and the left CU.
  • the pseudo codes shown above illustrate that the syntax palette_pred is first parsed. If palette_pred has a value of 1, then the syntax pred_from_above is parsed. Depending on the value of pred_from_above, the palette for the current CU is copied from either the above or the left CU.
  • Tenth Embodiment: Omitting the syntax pred_from_above when the above and left palettes are identical. According to this embodiment, in the event that the above palette is the same as the left palette, there is no need to differentiate whether to copy from above or left when palette_pred has a value of 1. Accordingly, the syntax pred_from_above becomes redundant in this case. In order to remove this redundancy, this embodiment checks whether the above palette is the same as the left palette. If so, pred_from_above is not transmitted. An exemplary pseudo-code for incorporating a comparison and indication regarding whether the above palette is the same as the left palette is shown below.
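A hedged Python sketch of this tenth-embodiment parsing rule (the function name and the `parse_pred_from_above` callback standing in for bitstream parsing are assumptions):

```python
def parse_sharing_source(palette_pred, above_palette, left_palette,
                         parse_pred_from_above):
    """Tenth-embodiment style parsing: when the above and left palettes
    are identical, pred_from_above is redundant and is not parsed.

    Returns the shared palette, or None when a new palette is coded."""
    if not palette_pred:
        return None  # no sharing: a new palette will be parsed instead
    if above_palette == left_palette:
        return above_palette  # either source yields the same palette
    # Palettes differ, so the source flag must be parsed.
    return above_palette if parse_pred_from_above() else left_palette
```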
  • Eleventh Embodiment: Using a replacement neighboring palette when the above and left palettes are identical. While the tenth embodiment omits the syntax pred_from_above when the left palette is identical to the above palette, this embodiment uses another causal palette from a neighboring block to replace either the above or the left palette.
  • the possible causal neighboring blocks can be, for example, the above-left block or the above-right block.
  • the palette from the above-left block can be used to replace the left palette.
  • An exemplary pseudo-code to use a replacement palette from another neighboring block when the above palette is identical to the left palette is shown below.
  • the previously coded palette may correspond to the most recent coded palette that is neither the above nor the left palette of the current block.
  • This previously coded palette can be utilized for palette sharing when the above and left palettes of the current block are the same.
  • This approach requires an updating process at both the encoder and the decoder to maintain a previously coded palette.
  • An exemplary pseudo-code to use a previously coded palette when the above palette is identical to the left palette is shown below.
  • the most recent coded palette is denoted as "recent palette”.
  • the most recent palette is used here as one example of a palette that can replace the above or left block's palette.
  • Other palette maintaining approaches can also be used.
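The replacement-palette idea (substituting the duplicate neighbor with another causal palette such as the "recent palette") might be sketched as follows; the function name and argument layout are illustrative assumptions:

```python
def choose_reference_palette(above, left, recent, pred_from_above):
    """Replacement-palette variant: when the above and left palettes are
    identical, the redundant one is replaced by another causal palette
    (here, the most recently coded palette that is neither of them), so
    pred_from_above again selects between two distinct candidates."""
    if above == left and recent is not None:
        left = recent  # replace the duplicate with the "recent palette"
    return above if pred_from_above else left
```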
  • JCT-VC Joint Collaborative Team on Video Coding
  • JCTVC-P0108 Joint Collaborative Team on Video Coding
  • the comparison is performed for various test materials as shown in the first column and for various system configurations (AI-MT, AI-HT and AI-SHT).
  • AI refers to "all Intra"
  • MT refers to "Main Tier”
  • HT refers to "High Tier”
  • SHT refers to "Super High Tier”.
  • the performance measure is based on BD-rate, which is a well-known performance measure in the field of video coding.
  • Table 3 The comparison results are shown in Table 3, where a negative value in Table 3 implies performance improved over the anchor system.
  • the seventh embodiment of the present invention demonstrates noticeable performance improvement for screen content materials (e.g., SC RGB 444, SC YUV 444, SC(444) GBR Opt. and SC(444) YUV Opt.).
  • Fig. 1 illustrates a flowchart of an exemplary system incorporating an embodiment of the present invention to share palette from previously processed palette based on one or more palette sharing flags.
  • the system receives input data associated with a current block comprising a set of color components as shown in step 110, wherein the set of color components consists of one or more colors.
  • the input data corresponds to pixel data to be encoded using palette coding.
  • the input data corresponds to coded pixel data to be decoded using palette coding.
  • the input data may be retrieved from memory (e.g., computer memory, buffer (RAM or DRAM) or other media) or from a processor.
  • One or more palette sharing flags for the current block is determined as shown in step 120.
  • The first palette sharing flag is checked in step 130 to determine whether it is asserted. If the result is "Yes", the processing goes to step 140. If the result is "No", the processing goes to step 150. After processing in step 140 or step 150, the processing goes to step 160.
  • In step 140, the set of current palettes is generated by copying one or more current palettes indicated by the first palette sharing flag entirely from one or more reference palettes of a set of reference palettes.
  • In step 150, the set of current palettes is generated by deriving said one or more current palettes indicated by the first palette sharing flag from a bitstream associated with the video data.
  • encoding or decoding is applied to the current block according to the set of current palettes.
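The flow of Fig. 1 can be condensed into a short sketch; step numbers follow the figure, while the function shape and callbacks are illustrative assumptions:

```python
def palette_sharing_flow(input_block, sharing_flag, reference_palettes,
                         parse_palettes, code_block):
    """Steps 110-160 of Fig. 1: receive the block (110), determine the
    sharing flag (120) and test it (130), copy the reference palettes
    entirely (140) or derive them from the bitstream (150), then apply
    encoding or decoding with the resulting palettes (160)."""
    if sharing_flag:                          # step 130 -> step 140
        current_palettes = dict(reference_palettes)
    else:                                     # step 130 -> step 150
        current_palettes = parse_palettes()
    return code_block(input_block, current_palettes)  # step 160
```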
  • Embodiment of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • DSP Digital Signal Processor
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • the software code or firmware code may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms.
  • different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.

Abstract

A method and apparatus for palette prediction and sharing according to the present invention are disclosed. A method incorporating an embodiment of the present invention determines one or more palette sharing flags for the current block. A set of current palettes corresponding to the set of color components is generated according to the palette sharing flags. If a first palette sharing flag is asserted, one or more current palettes indicated by the first palette sharing flag are copied entirely from one or more reference palettes of a set of reference palettes. If the first palette sharing flag is not asserted, one or more current palettes indicated by the first palette sharing flag are derived from a bitstream associated with the video data. Encoding or decoding is then applied to the current block according to the set of current palettes.

Description

TITLE: PALETTE PREDICTION AND SHARING IN VIDEO CODING
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present invention claims priority to U.S. Provisional Patent Application, Serial No. 61/917,474, filed on December 18, 2013, entitled "Methods and Apparatus for Palette Prediction and Sharing in Major Color Based Coding in Video Compression" and U.S. Provisional Patent Application, Serial No. 61/923,378, filed on January 3, 2014, entitled "Methods and Apparatus of Syntax and Processes for Palette Prediction and Sharing in Video Compression". The U.S.
Provisional Patent Applications are hereby incorporated by reference in their entireties.
FIELD OF THE INVENTION
[0002] The present invention relates to palette coding for video data that may contain color contents with limited colors in some areas. In particular, the present invention relates to techniques to improve the performance by developing more efficient palette sharing.
BACKGROUND AND RELATED ART
[0003] High Efficiency Video Coding (HEVC) is a new coding standard that has been developed in recent years. In the HEVC system, the fixed-size macroblock of H.264/AVC is replaced by a flexible block, named coding unit (CU). Pixels in the CU share the same coding parameters to improve coding efficiency. A CU may begin with a largest CU (LCU), which is also referred to as a coded tree unit (CTU) in HEVC. In addition to the concept of coding unit, the concept of prediction unit (PU) is also introduced in HEVC. Once the splitting of the CU hierarchical tree is done, each leaf CU is further split into one or more prediction units (PUs) according to prediction type and PU partition.
[0004] Along with the High Efficiency Video Coding (HEVC) standard development, the development of extensions of HEVC has also started. The HEVC extensions include range extensions (RExt), which target non-4:2:0 color formats, such as 4:2:2 and 4:4:4, and higher bit-depth video such as 12, 14 and 16 bits per sample. One of the likely applications utilizing RExt is screen sharing, over wired or wireless connection. Due to specific characteristics of screen contents, coding tools have been developed and demonstrate significant gains in coding efficiency. Among them, the palette coding (a.k.a. major color based coding) techniques represent a block of pixels using indices to the palette (major colors), and encode the palette and the indices by exploiting spatial redundancy. While the total number of possible color combinations is huge, the number of colors in an area of a picture is usually very limited for typical screen contents. Therefore, palette coding becomes very effective for screen content materials.
[0005] During the early development of HEVC range extensions (RExt), several proposals have been disclosed to address palette-based coding. For example, a palette prediction and sharing technique is disclosed in JCTVC-N0247 (Guo et al., "RCE3: Results of Test 3.1 on Palette Mode for Screen Content Coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July - 2 Aug. 2013, Document: JCTVC-N0247). In JCTVC-N0247, the palette of each color component is constructed and transmitted. The palette can be predicted (or shared) from its left neighboring CU to reduce the bitrate. A pseudo code for the method disclosed in JCTVC-N0247 is shown as follows.
For (color_index)
    If (palette_pred[color_index])
        Current CU palette[color_index] = Left CU palette[color_index]
    Else
        Parse syntax for the palette[color_index] of the current CU
    End
End
[0006] As shown in the pseudo code above, when the palette prediction mode is used, as indicated by palette_pred[color_index], the palette for the current coding unit having color_index (i.e., Current CU palette[color_index]) is shared from the palette of the CU having the same color_index at the left side of the current CU (i.e., Left CU palette[color_index]). Otherwise, a new palette is parsed from the bitstream at the decoder side or signaled in the bitstream at the encoder side. The method according to JCTVC-N0247 does not use palette prediction (sharing) from the above CU. Furthermore, if the left CU is not coded using palette mode, the palette for the current CU cannot be predicted from the left CU.
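For illustration only, the per-component whole-palette sharing of JCTVC-N0247 can be sketched in Python as follows. The function name, the dictionary layout, and the parse_palette callback (standing in for bitstream parsing) are assumptions of this sketch, not part of the disclosed syntax.

```python
def decode_palettes_n0247(pred_flags, left_palettes, parse_palette):
    """Sketch of JCTVC-N0247 palette reconstruction, one palette per component.

    pred_flags[c] == True  -> the whole palette of component c (including its
                              size) is copied from the left CU;
    otherwise a new palette is obtained via parse_palette(c), which models
    parsing the palette syntax from the bitstream.
    """
    current = {}
    for c, shared in enumerate(pred_flags):
        if shared:
            current[c] = list(left_palettes[c])  # entire palette copied
        else:
            current[c] = parse_palette(c)        # new palette from bitstream
    return current
```

Note that only the left CU can serve as the prediction source in this scheme; if the left CU is not palette-coded, no prediction is possible.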
[0007] Another palette coding method is disclosed in JCTVC-N0249 (Guo et al., "Non-RCE3: Modified Palette Mode for Screen Content Coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July - 2 Aug. 2013, Document: JCTVC-N0249). In JCTVC-N0249, each element in the palette is a triplet, representing a specific combination of the three color components.
[0008] Yet another palette coding method is disclosed in JCTVC-O0182 (Guo et al., "AHG8: Major-color-based screen content coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Geneva, CH, 23 Oct. - 1 Nov. 2013, Document: JCTVC-O0182). In JCTVC-O0182, the palette of each component is constructed and transmitted. However, instead of predicting the entire palette from the left CU, an individual entry in a palette can be predicted from the exact corresponding palette entry in the above CU or left CU. A pseudo code for the method disclosed in JCTVC-O0182 is shown as follows.
For (color_index)
    Parse syntax num_major_color[color_index]
    For (n <= num_major_color[color_index])
        If (palette_pred[color_index][n])
            If (pred_from_above[color_index][n])
                Current CU palette[color_index][n] = Above CU palette[color_index][n]
            Else
                Current CU palette[color_index][n] = Left CU palette[color_index][n]
            End
        Else
            Parse syntax for current CU palette[color_index][n]
        End
    End
End
[0009] As shown in the above pseudo code, the individual entry n of the palette for the current CU (i.e., Current CU palette[color_index][n] for order position n) may be shared from the corresponding palette entry of the above CU (i.e., Above CU palette[color_index][n]) or the left CU (i.e., Left CU palette[color_index][n]) when palette prediction is selected, as indicated by palette_pred[color_index][n] being 1. If palette prediction is not used, as indicated by palette_pred[color_index][n] being 0, the palette entry for the current CU is parsed from the bitstream (i.e., Parse syntax for current CU palette[color_index][n]) at the decoder side or signaled in the bitstream at the encoder side.
[0010] As shown above, the palette coding according to JCTVC-O0182 uses element-by-element (or entry-by-entry) predictive coding. Therefore, the parsing complexity (multiple levels) becomes high. Furthermore, it may not be very efficient, since palette elements (palette entries) in adjacent CUs might not be at the same order position n, even if they have the same value.
[0011] Therefore, it is desirable to develop methods for further improving the coding efficiency and/or reducing the complexity associated with the palette coding.
BRIEF SUMMARY OF THE INVENTION
[0012] A method and apparatus for palette prediction and sharing according to the present invention are disclosed. A method incorporating an embodiment of the present invention determines one or more palette sharing flags for the current block. A set of current palettes corresponding to the set of color components is generated according to said one or more palette sharing flags. If a first palette sharing flag is asserted, one or more current palettes indicated by the first palette sharing flag are copied entirely from one or more reference palettes of a set of reference palettes. If the first palette sharing flag is not asserted, one or more current palettes indicated by the first palette sharing flag are derived from a bitstream associated with the video data. Encoding or decoding is then applied to the current block according to the set of current palettes. The current block corresponds to a coding unit (CU), a prediction unit (PU), a largest coding unit (LCU) or a coding tree block (CTB).
[0013] One aspect of the present invention addresses palette sharing flag design. The palette sharing flags may correspond to a single sharing flag and if the single sharing flag is asserted, all current palettes of the set of current palettes are copied entirely from all reference palettes of the set of reference palettes. Each of said one or more palette sharing flags may correspond to each of the set of color components, and if one corresponding sharing flag is asserted, one corresponding current palette with one corresponding color component is copied entirely from one corresponding reference palette with said one corresponding color component. The palette sharing flags may also correspond to a set of sharing flags for the color components (YUV, RGB, etc.). For example, the palette sharing flags may correspond to a luma palette sharing flag and a chroma palette sharing flag, and if the luma palette sharing flag or the chroma palette sharing flag is asserted, a current luma palette or at least one current chroma palette is copied entirely from one corresponding luma reference palette or at least one corresponding chroma reference palette.
[0014] Another aspect of the present invention addresses reference palette design. The set of reference palettes corresponds to a most recent set of palettes among one or more recent sets of palettes associated with one or more previous blocks. The one or more recent sets of palettes associated with one or more previous blocks are referred to as a "palette book" in this disclosure. The one or more recent sets of palettes may correspond to N recent sets of palettes, wherein N is an integer greater than zero. The palette book can be updated by deleting an oldest set of palettes when a new set of palettes is stored. In one embodiment, there is only one recent set of palettes in the palette book, i.e., N=1.
[0015] The set of reference palettes may also be shared from the above CU and/or the left CU. When the palette sharing flag is asserted and both the above CU and the left CU are allowed, an additional flag is used to indicate whether the set of palettes is shared from the above CU or the left CU. In one embodiment, the set of palettes of the above CU is compared with the set of palettes of the left CU. If they are identical, the additional flag can be skipped. Alternatively, a set of replacement palettes can be used to replace the identical set of palettes. The set of replacement palettes may correspond to the set of palettes of the above-left CU or the above-right CU. The set of replacement palettes may also be determined from a previously coded set of palettes.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Fig. 1 illustrates a flowchart of an exemplary system incorporating an embodiment of the present invention to share palette from previously processed palette based on one or more palette sharing flags.
DETAILED DESCRIPTION OF THE INVENTION
[0017] In order to improve the performance and to reduce the complexity of palette coding, the present invention discloses various improved and simplified palette coding techniques. In the first category of embodiments of the present invention, the palette is predicted or shared on a component-by-component or CU-by-CU basis using the left and/or the above CU. When the palette prediction is applied to coding units (CUs) in a raster scan fashion (i.e., row-by-row and from top row to bottom row), the above CU and the left CU represent previously processed CUs. The CU-by-CU based palette sharing can be applied to the triplet palette format as disclosed in JCTVC-N0249. In the second category of embodiments of the present invention, the prediction and sharing is accomplished by maintaining a history of previously coded palettes using a "palette book". In this case, the palettes to be shared may be associated with CUs other than the above CU or the left CU. For example, a most recently coded palette can be saved within a given LCU (or so-called coding tree unit, CTU), or within a given region of multiple LCUs (CTUs) such as CTU lines, or within the current slice. A CU within the current LCU (or within a given region of multiple LCUs (CTUs) such as CTU lines, or within the current slice) can either share this palette or use its own (new) palette.
[0018] First Embodiment: Component-wise control of palette sharing from left/above CU.
In this embodiment, the palette or palettes for the current CU can be shared from the palette or palettes of the above CU or the left CU, and the palette sharing is performed for each color component. An indication (e.g., palette_pred[color_index]) for each color component can be used to indicate whether palette prediction is used. If palette prediction is used, another indication can be used to indicate whether the prediction is from the above CU or the left CU. For example, an indication pred_from_above can be used. If pred_from_above has a value of 1, palette prediction from the above CU is selected. Otherwise, palette prediction is from the left CU. An exemplary pseudo code according to this embodiment is shown below.
For (color_index)
    If (palette_pred[color_index])
        If (pred_from_above[color_index])
            All element current CU palette[color_index] = Above CU palette[color_index]
        Else
            All element current CU palette[color_index] = Left CU palette[color_index]
        End
    Else
        Parse syntax num_major_color[color_index]
        For (n <= num_major_color[color_index])
            Parse syntax for current CU palette[color_index][n]
        End
    End
End
[0019] In the above exemplary pseudo code, when palette_pred[color_index] indicates that palette prediction is used for the specified color component (indicated by color_index), the entire palette, including the palette size, of the specified color component is copied from the palette of the above block or the left block. When palette_pred[color_index] indicates that palette prediction is not used, a new palette for the current CU (i.e., Parse syntax num_major_color[color_index]) is derived from the bitstream.
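For illustration only, the component-wise sharing of the first embodiment can be sketched in Python as follows; the function signature and the parse_palette callback modeling bitstream parsing are assumptions of this sketch, not the disclosed syntax.

```python
def decode_palettes_componentwise(pred, from_above, above, left, parse_palette,
                                  num_comp=3):
    """Sketch of the first embodiment: per-component whole-palette sharing.

    pred[c] selects sharing for component c; when set, from_above[c] picks
    the above CU or the left CU as the source and the entire palette (size
    and entries) is copied. A non-shared palette is parsed from the
    bitstream via parse_palette(c).
    """
    current = []
    for c in range(num_comp):
        if pred[c]:
            src = above if from_above[c] else left
            current.append(list(src[c]))   # copy the whole palette at once
        else:
            current.append(parse_palette(c))
    return current
```

Compared with the entry-wise scheme of JCTVC-O0182, at most two flags per component are needed and no per-entry parsing loop is required when sharing is selected.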
[0020] Second Embodiment: CU-wise control of palette sharing from left/above CU.
Compared to the first embodiment, the control for whether to share the palette for the current CU is the same for all color indices. In other words, the control flags palette_pred and pred_from_above are the same for all color indices. An exemplary pseudo code according to this embodiment is shown below.
If (palette_pred)
    If (pred_from_above)
        For (color_index)
            All element current CU palette[color_index] = Above CU palette[color_index]
        End
    Else
        For (color_index)
            All element current CU palette[color_index] = Left CU palette[color_index]
        End
    End
Else
    For (color_index)
        Parse syntax num_major_color[color_index]
        For (n <= num_major_color[color_index])
            Parse syntax for current CU palette[color_index][n]
        End
    End
End
[0021] Again, in the above pseudo code, when palette_pred indicates that palette prediction is used, the entire palettes, including the palette sizes, for all color indices are copied from the corresponding palettes of the above block or the left block.
[0022] Third Embodiment: Luma/Chroma-wise control of palette sharing from left/above
CU. This embodiment may be considered as an example of component-wise palette sharing control, where the color components correspond to a luma component and at least one chroma component. In one example, palette sharing from above/left has two prediction control flags, where one flag is for the luma component and another is for the chroma components. This can be particularly useful for contents with different degrees of variation in the luma and chroma components. An exemplary pseudo code according to this embodiment is shown below for YUV color components.
If (palette_pred_Y)
    If (pred_from_above_Y)
        All element current CU palette[Y_index] = Above CU palette[Y_index]
    Else
        All element current CU palette[Y_index] = Left CU palette[Y_index]
    End
Else
    Parse syntax num_major_color[Y_index]
    For (n <= num_major_color[Y_index])
        Parse syntax for current CU palette[Y_index][n]
    End
End
If (palette_pred_UV)
    If (pred_from_above_UV)
        All element current CU palette[U_index] = Above CU palette[U_index]
        All element current CU palette[V_index] = Above CU palette[V_index]
    Else
        All element current CU palette[U_index] = Left CU palette[U_index]
        All element current CU palette[V_index] = Left CU palette[V_index]
    End
Else
    Parse syntax num_major_color[U_index]
    For (n <= num_major_color[U_index])
        Parse syntax for current CU palette[U_index][n]
    End
    Parse syntax num_major_color[V_index]
    For (n <= num_major_color[V_index])
        Parse syntax for current CU palette[V_index][n]
    End
End
[0023] As shown in the pseudo code above, separate sharing control flags pred_from_above_Y and pred_from_above_UV are used. Also, respective palette tables (i.e., palette[Y_index], palette[U_index] and palette[V_index]) are used for the individual color components. While the U and V color components share a same flag (i.e., palette_pred_UV), individual flags may be used for the U and V color components. Again, in the above pseudo code, when palette_pred_Y or palette_pred_UV indicates that palette prediction is used, the entire palette, including the palette size, for Y or U/V is copied from the corresponding palette of the above block or the left block.
[0024] Note that the above pseudo codes correspond to the decoding process. Similar pseudo codes can be developed for the encoder side. For example, the action at the encoder side corresponding to "Parse syntax num_major_color[color_index]" will be "Signal syntax num_major_color[color_index]". To reduce the line buffer associated with the shared palette from the above CU, a variation of the embodiments according to the present invention may allow sharing from above only if the above CU is within the current largest CU (LCU) or coding tree unit (CTU). An encoder compliant to this variation of the embodiments will check whether the above CU and the current CU are in the same LCU or CTU. While the examples shown above always allow the current CU to share the palette from both the above CU and the left CU, sharing may also be restricted to only one neighboring CU (i.e., only the above CU or only the left CU). Furthermore, while the examples demonstrate that the granularity of palette coding and sharing is on a CU basis, other granularities of palette coding and sharing may be used as well. For example, palette coding and sharing can be performed on a prediction unit (PU) basis, LCU basis, coding tree unit (CTU) basis, or multiple-LCU basis.
[0025] One example of the syntax signaling for the above embodiments is illustrated in Table 1. The signaling bits can be context-coded. For the component-wise palette sharing in the first embodiment, the syntax bit in Table 1 for each color component can use a different context for CABAC coding, or use the same context. Similarly, different contexts can be used for the luma/chroma control scheme in the third embodiment.
Table 1 (reproduced as an image in the original publication)
[0026] For the second-category embodiments, a palette book is used. Various means can be used to generate the palette book. For example, a history of the recently coded palette sets can be stored in a "palette book". The current CU may choose to share one of the palette sets stored in the palette book, as indicated by a book index. The current CU may also use its own palette, and the current palette will then replace one set in the palette book. The new palette is encoded and transmitted to the decoder so that the same palette book updating process can be carried out in the same way at both the encoder and the decoder. There are various ways to update and order the previously coded palette sets.
[0027] Fourth Embodiment: CU-wise control of palette sharing using "palette book". In one particular example, the palette sets are simply ordered based on their coding order, i.e., the most recently coded palette is stored at the beginning of the "palette book" (i.e., having the smallest index), while the older ones are stored afterwards (i.e., having larger indices). For example, a palette book with size KK is used to store KK sets of previously coded palettes. When a new palette set is being coded, entries 1 to (KK-1) in the "palette book" will be moved to entries 2 through KK in order to make the first entry available for the newly coded palette. This is simply a first-in-first-out updating and ordering process. The following pseudo code demonstrates an example of palette sharing using a palette book when the sharing is controlled on a CU-wise basis (i.e., sharing for all color components). The embodiment may also be used for the triplet palette format as disclosed in JCTVC-N0249.
If (palette_pred)
    Parse syntax book_index
    For (color_index)
        Current CU palette[color_index] = palette_book[book_index][color_index]
    End
Else
    For (color_index)
        Parse syntax num_major_color[color_index]
        For (k <= KK, k > 1, k--)
            palette_book[k][color_index] = palette_book[k-1][color_index]
        End
        For (n <= num_major_color[color_index])
            Parse syntax for current CU palette[color_index][n]
            palette_book[0][color_index][n] = current CU palette[color_index][n]
        End
    End
End
[0028] In the above pseudo code, when the palette prediction is used as indicated by palette_pred being 1, a palette book index (i.e., book_index) is determined from the bitstream. The palette for the current CU (i.e., Current CU palette[color_index]) is derived from the palette book entry having book_index (i.e., palette_book[book_index][color_index]). If the current CU does not use palette prediction, entries 1 to (KK-1) in the "palette book" will be moved to entries 2 through KK in order to make the first entry available for the newly coded palette (i.e., palette_book[k][color_index] = palette_book[k-1][color_index] for (k<=KK, k>1, k--)). The newly parsed current CU palette (i.e., Parse syntax for current CU palette[color_index][n]) will be placed in the leading palette book entry (i.e., palette_book[0][color_index][n] = current CU palette[color_index][n]).
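The first-in-first-out "palette book" maintenance described above can be sketched in Python as follows. The class name and the storage layout (a list of palette sets, entry 0 being the newest) are assumptions of this sketch; the disclosed pseudo code shifts entries in place, which is behaviorally equivalent.

```python
class PaletteBook:
    """Sketch of a FIFO "palette book" of size KK.

    book[0] always holds the most recently coded palette set; pushing a
    new set shifts older sets toward larger indices and drops the oldest
    set once the book is full.
    """

    def __init__(self, size):
        self.size = size
        self.book = []  # book[k] -> palette set (e.g., dict per component)

    def lookup(self, book_index):
        # Sharing: the current CU copies the set at the signaled index.
        return self.book[book_index]

    def push(self, palette_set):
        # Non-sharing: the newly coded set becomes entry 0.
        self.book.insert(0, palette_set)
        del self.book[self.size:]  # discard the oldest entry if over size
```

The same push/lookup sequence must be executed at both encoder and decoder so that the books stay synchronized.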
[0029] Fifth Embodiment: Component-wise control of palette sharing using "palette book".
The fifth embodiment is similar to the fourth embodiment except that the sharing control is component-wise. An exemplary pseudo code according to this embodiment is shown below for each color component.
For (color_index)
    If (palette_pred[color_index])
        Parse syntax book_index[color_index]
        palette[color_index] = palette_book[book_index[color_index]][color_index]
    Else
        Parse syntax num_major_color[color_index]
        For (k <= KK, k > 1, k--)
            palette_book[k][color_index] = palette_book[k-1][color_index]
        End
        For (n <= num_major_color[color_index])
            Parse syntax for current CU palette[color_index][n]
            palette_book[0][color_index][n] = current CU palette[color_index][n]
        End
    End
End
[0030] Sixth Embodiment: Luma/Chroma-wise control of palette sharing using "palette book". While CU-wise and component-wise palette sharing control using the "palette book" are shown in the fourth and fifth embodiments respectively, the sharing control of the palette book may also be luma/chroma-wise. The luma component and the chroma components may have separate sharing controls (e.g., one control flag for luma and one control flag for chroma). Each of the luma and chroma components may have its own palette table. This may be particularly useful for content with different degrees of variation in the luma and chroma components. An exemplary pseudo code according to this embodiment is shown below for the YUV color format, where a same sharing control flag is used for the U and V components (i.e., palette_pred_UV); separate control flags may also be used.
If (palette_pred_Y)
    Parse syntax book_index_Y
    Current CU palette[Y_index] = palette_book[book_index_Y][Y_index]
Else
    Parse syntax num_major_color[Y_index]
    For (k <= KK, k > 1, k--)
        palette_book[k][Y_index] = palette_book[k-1][Y_index]
    End
    For (n <= num_major_color[Y_index])
        Parse syntax for current CU palette[Y_index][n]
        palette_book[0][Y_index][n] = current CU palette[Y_index][n]
    End
End
If (palette_pred_UV)
    Parse syntax book_index_UV
    Current CU palette[U_index] = palette_book[book_index_UV][U_index]
    Current CU palette[V_index] = palette_book[book_index_UV][V_index]
Else
    Parse syntax num_major_color[U_index]
    For (k <= KK, k > 1, k--)
        palette_book[k][U_index] = palette_book[k-1][U_index]
    End
    For (n <= num_major_color[U_index])
        Parse syntax for current CU palette[U_index][n]
        palette_book[0][U_index][n] = current CU palette[U_index][n]
    End
    Parse syntax num_major_color[V_index]
    For (k <= KK, k > 1, k--)
        palette_book[k][V_index] = palette_book[k-1][V_index]
    End
    For (n <= num_major_color[V_index])
        Parse syntax for current CU palette[V_index][n]
        palette_book[0][V_index][n] = current CU palette[V_index][n]
    End
End
[0031] While the examples shown above always allow the current CU to share the palette from both the above CU and the left CU, sharing may also be restricted to only one neighboring CU (only the above CU or only the left CU). Furthermore, while the examples demonstrate the granularity of palette coding and sharing on a CU basis, other granularities of palette coding and sharing may be used as well. For example, palette coding and sharing can be performed on a prediction unit (PU) basis, largest coding unit (LCU) basis, coding tree unit (CTU) basis, or multiple-LCU basis.
[0032] While the first-in-first-out scheme is used in the pseudo codes for the fourth through sixth embodiments for the "palette book" updating and ordering, other methods can also be utilized as long as the encoder and the decoder perform the same process. For example, a counter can be used to keep track of the frequency with which each palette set is selected for sharing. The palette book can then be updated according to the frequency, such as ordering the sets from high selection frequency to low selection frequency.
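The frequency-based alternative mentioned above can be sketched in Python as follows; the function and the counts array are assumptions of this sketch. A stable, deterministic reordering is essential so that the encoder and decoder apply it identically.

```python
def reorder_by_frequency(book, counts):
    """Sketch of frequency-based palette-book ordering.

    counts[k] records how often book entry k has been selected for
    sharing. The book is reordered so the most frequently shared sets
    come first; Python's stable sort keeps the original (coding-order)
    position as the tie-break, which keeps encoder and decoder in sync.
    """
    order = sorted(range(len(book)), key=lambda k: -counts[k])
    return [book[k] for k in order], [counts[k] for k in order]
```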
[0033] The selection of an entry from the "palette book" has to be signaled in the bitstream. The most straightforward way is to signal the book entry with fixed-length coding. However, with proper ordering of the "palette book", such as first-in-first-out or frequency-based, the palette entries most likely to be used are at the front of the "palette book". On the other hand, entries at the end of the "palette book" are less likely to be used. Thus, a more efficient syntax design can be constructed to exploit this property. For example, the selection of entry 0, 1, 2 or 3 for a "palette book" with size 4 can be signaled using the variable length code as shown in Table 2.
Table 2 (reproduced as an image in the original publication)
[0034] The signaling bits of the variable length code can be context-coded. For the component-wise palette sharing in the fifth embodiment, the syntax bit in Table 2 for each color component can use a different context for CABAC (context adaptive binary arithmetic coding), or they can use the same context. Similarly, different contexts can be used for the luma/chroma control scheme in the sixth embodiment.
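Since Table 2 is only available as an image in this copy, the exact codewords cannot be confirmed; a truncated-unary assignment is one plausible realization of such a variable length code and is sketched below for illustration, giving shorter codewords to the more likely front entries of the book.

```python
def encode_book_index(idx, book_size=4):
    """Sketch of a truncated-unary code for the palette-book entry index.

    For a book of size 4 this yields 0 -> "0", 1 -> "10", 2 -> "110",
    3 -> "111": front entries (most likely to be shared after FIFO or
    frequency ordering) get the shortest codewords. These codewords are
    an assumption, not necessarily those of Table 2.
    """
    if idx < book_size - 1:
        return "1" * idx + "0"
    return "1" * idx  # last index needs no terminating zero
```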
[0035] Since the "palette book" keeps track of and updates the most recently coded palette sets, there is no line buffer issue. The selection of the palette book size becomes a tradeoff between providing better palette matching (i.e., using a larger book size) and signaling side information (i.e., using a smaller book size).
[0036] Another design consideration is the duration for which the palette book is kept valid before it is reset. A longer valid duration, for example the entire slice/picture, makes a longer memory of palettes available for the block. However, the error resilience becomes weaker, as loss of such a palette book will affect the decoding of all blocks in the slice/picture.
[0037] Seventh Embodiment: CU-wise control of palette sharing using "palette book" with palette book size kk=1. This embodiment corresponds to a particular example of palette book sharing, where only the one most recently coded palette is kept in the book (i.e., palette book size kk=1). Since there is only one entry in the palette book, there is no need to signal the book index as mentioned in embodiments 4-6. Also, the updating process for the palette book becomes simply replacing the palette book with the current palette table. An exemplary pseudo code for CU-wise control of palette sharing using a "palette book" with palette book size kk=1 and various valid durations is shown below, where the valid duration defines how often the book will be reset.
If (begin new LCU, or begin new LCU row, or begin new slice)
    Clear palette_book
End
If (palette_pred)
    For (color_index)
        Current CU palette[color_index] = palette_book[color_index]
    End
Else
    For (color_index)
        Parse syntax num_major_color[color_index]
        For (n <= num_major_color[color_index])
            Parse syntax for current CU palette[color_index][n]
            palette_book[color_index][n] = current CU palette[color_index][n]
        End
    End
End
[0038] As shown in the above example, there is no need to shift the old palette book entries to make room for the new one. The palette book is reset at the beginning of each LCU, LCU row, or slice as shown in the pseudo code (i.e., "If (begin new LCU, or begin new LCU row, or begin new slice), Clear palette_book").
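The kk=1 case described above can be sketched in Python as follows; the function name, the dictionary-based book, and the reset argument modeling the LCU/LCU-row/slice boundary are assumptions of this sketch.

```python
def decode_cu_kk1(palette_book, pred, parse_palette, num_comp=3, reset=False):
    """Sketch of the seventh embodiment: single-entry palette book.

    reset=True models the start of a new LCU / LCU row / slice: the book
    is cleared, so the first CU afterwards must carry its own palettes.
    With only one book entry, no book index is signaled and updating is a
    plain overwrite of the book with the current palette table.
    """
    if reset:
        palette_book.clear()
    if pred:
        # Share: copy every component palette from the single book entry.
        return {c: list(palette_book[c]) for c in range(num_comp)}
    # Own palette: parse each component, then overwrite the book.
    current = {c: parse_palette(c) for c in range(num_comp)}
    for c in range(num_comp):
        palette_book[c] = list(current[c])
    return current
```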
[0039] Eighth Embodiment: Component-wise control of palette sharing using "palette book" with palette book size kk=1. An exemplary pseudo code for component-wise control of palette sharing using a "palette book" with palette book size kk=1 and various valid durations is shown below.
If (begin new LCU, or begin new LCU row, or begin new slice)
    Clear palette_book
End
For (color_index)
    If (palette_pred[color_index])
        Current CU palette[color_index] = palette_book[color_index]
    Else
        Parse syntax num_major_color[color_index]
        For (n <= num_major_color[color_index])
            Parse syntax for current CU palette[color_index][n]
            palette_book[color_index][n] = current CU palette[color_index][n]
        End
    End
End
[0040] Ninth Embodiment: Luma/chroma-wise control of palette sharing using "palette book" with palette book size kk=1. An exemplary pseudo code for luma/chroma-wise control of palette sharing using a "palette book" with palette book size kk=1 and various valid durations is shown below.
If (begin new LCU, or begin new LCU row, or begin new slice)
    Clear palette_book
End
If (palette_pred_Y)
    Current CU palette[Y_index] = palette_book[Y_index]
Else
    Parse syntax num_major_color[Y_index]
    For (n <= num_major_color[Y_index])
        Parse syntax for current CU palette[Y_index][n]
        palette_book[Y_index][n] = current CU palette[Y_index][n]
    End
End
If (palette_pred_UV)
    Current CU palette[U_index] = palette_book[U_index]
    Current CU palette[V_index] = palette_book[V_index]
Else
    Parse syntax num_major_color[U_index]
    For (n <= num_major_color[U_index])
        Parse syntax for current CU palette[U_index][n]
        palette_book[U_index][n] = current CU palette[U_index][n]
    End
    Parse syntax num_major_color[V_index]
    For (n <= num_major_color[V_index])
        Parse syntax for current CU palette[V_index][n]
        palette_book[V_index][n] = current CU palette[V_index][n]
    End
End
[0041] Another aspect of the present invention is related to the syntax of palette prediction. A pseudo code for parsing the syntax associated with the signaling of palette sharing in Table 1 is illustrated as follows.
Parse syntax palette_pred
If (palette_pred)
    Parse syntax pred_from_above
    If (pred_from_above)
        Current palette is copied from the above palette
    Else
        Current palette is copied from the left palette
    End
Else
    Parse syntax of the current block's palette
End
[0042] Another aspect of the present invention addresses the case of identical palettes from the above CU and the left CU when palette sharing from both the above CU and the left CU is allowed. The pseudo code shown above illustrates that syntax palette_pred is first parsed. If palette_pred has a value of 1, then syntax pred_from_above is parsed. Depending on the value of pred_from_above, the palette for the current CU is copied from either the above or the left palette.
[0043] Tenth Embodiment: Omitting syntax pred_from_above when the above and left palettes are identical. According to this embodiment, in the event that the above palette is the same as the left palette, there is no need to differentiate whether to copy from above or left when palette_pred has a value of 1. Accordingly, the syntax pred_from_above becomes redundant in this case. In order to remove this redundancy, this embodiment checks whether the above palette is the same as the left palette. If so, pred_from_above is not transmitted. An exemplary pseudo-code incorporating a comparison and indication regarding whether the above palette is the same as the left palette is shown below.
AbLfEq = (above palette == left palette)
Parse syntax palette_pred
If (palette_pred)
If ( !AbLfEq)
Parse syntax pred_from_above
If (pred_from_above)
Current palette is copied from the above palette
Else
Current palette is copied from the left palette
End
Else
Current palette is copied from the above palette
End
Else
Parse syntax of the current block's palette
End
[0044] As shown in the above pseudo code, the above palette and the left palette are compared (i.e., AbLfEq = (above palette == left palette)). If the above palette and the left palette are not the same (i.e., !AbLfEq having a value of 1), regular palette sharing from the above or the left is performed. Otherwise (i.e., !AbLfEq having a value of 0), no syntax for pred_from_above is parsed and the current palette is always copied from the above palette. Alternatively, the current palette can always be copied from the left palette, since the above palette and the left palette are the same.
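A sketch of this redundancy removal, under the same illustrative flag-iterator model as before (the function name is an assumption):

```python
# Tenth-embodiment sketch: pred_from_above is absent from the bitstream
# whenever the above and left palettes are identical.
def decode_palette_no_redundancy(flags, above, left, parse_new):
    ab_lf_eq = (above == left)
    if next(flags):                            # palette_pred
        # pred_from_above is parsed only when the neighbors differ;
        # short-circuit evaluation skips next(flags) when ab_lf_eq is True.
        if not ab_lf_eq and next(flags) == 0:
            return list(left)                  # pred_from_above == 0
        return list(above)                     # pred_from_above == 1, or
                                               # identical-palette case
    return parse_new()
```

Note that when the neighbors are identical, one flag per shared block is saved relative to the basic syntax.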
[0045] Eleventh Embodiment: Using a replacement neighboring palette when the above and left palettes are identical. While the tenth embodiment omits the syntax pred_from_above when the left palette is identical to the above palette, this embodiment uses another causal palette from a neighboring block to replace either the above or the left palette. The possible causal neighboring blocks can be, for example, the above-left block or the above-right block. For example, the palette from the above-left block can be used to replace the left palette. An exemplary pseudo-code that uses a replacement palette from another neighboring block when the above palette is identical to the left palette is shown below.
AbLfEq = (above palette == left palette)
Parse syntax palette_pred
If (palette_pred)
Parse syntax pred_from_above
If (pred_from_above)
Current palette is copied from the above palette
Else
If ( !AbLfEq)
Current palette is copied from the left palette
Else
Current palette is copied from the above-left palette
End
End
Else
Parse syntax of current block's palette
End
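The eleventh embodiment's substitution of the above-left palette can be sketched as follows, again using an illustrative flag iterator and an assumed function name:

```python
# Eleventh-embodiment sketch: when above == left, the above-left palette
# replaces the left palette as the second sharing candidate.
def decode_palette_replacement(flags, above, left, above_left, parse_new):
    if next(flags):                    # palette_pred
        if next(flags):                # pred_from_above == 1
            return list(above)
        # pred_from_above == 0: use left, or its replacement when the
        # two neighbors are identical.
        return list(left) if above != left else list(above_left)
    return parse_new()
```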
[0046] Twelfth Embodiment: Using a previously coded palette when the above and left palettes are identical. This embodiment maintains a previously coded palette. For example, the previously coded palette may correspond to the most recently coded palette that is neither the above nor the left palette of the current block. This previously coded palette can be utilized for palette sharing when the above and left palettes of the current block are the same. This approach requires an updating process at both the encoder and the decoder to maintain the previously coded palette. An exemplary pseudo-code that uses a previously coded palette when the above palette is identical to the left palette is shown below.
AbLfEq = (above palette == left palette)
Parse syntax palette_pred
If (palette_pred)
Parse syntax pred_from_above
If (pred_from_above)
Current palette is copied from the above palette
Else
If ( !AbLfEq)
Current palette is copied from the left palette
Else
Current palette is copied from the "recent palette"
End
End
Else
Parse syntax of current block's palette
End
[0047] In the above exemplary pseudo code, the most recently coded palette is denoted as "recent palette". The most recent palette is merely one example of a palette that can replace the above or left block's palette; other palette maintenance approaches can also be used.
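A hedged sketch of the twelfth embodiment follows. The update rule shown (remember a palette only when it differs from both neighbors) is one possible maintenance policy consistent with the description, not a policy mandated by the text, and the class name is an assumption.

```python
# Twelfth-embodiment sketch: a "recent palette" is substituted for the
# left palette when the above and left palettes are identical.
class RecentPaletteDecoder:
    def __init__(self):
        self.recent = None  # most recently coded palette, maintained in
                            # lockstep at both encoder and decoder

    def decode(self, flags, above, left, parse_new):
        if next(flags):                       # palette_pred
            if next(flags):                   # pred_from_above
                current = list(above)
            elif above != left:
                current = list(left)
            else:
                current = list(self.recent)   # above == left: use recent
        else:
            current = parse_new()
        # Example maintenance: remember this palette if it differs from
        # both neighbors, so it can serve as a future "recent palette".
        if current != above and current != left:
            self.recent = list(current)
        return current
```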
[0048] The performance of an embodiment incorporating the seventh embodiment is compared to an anchor system disclosed in JCTVC-P0108 (Guo et al., "RCE4: Test 1. Major-color-based screen content coding," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 16th Meeting: San Jose, US, 9-17 Jan. 2014, Document: JCTVC-P0108). The comparison is performed for various test materials as shown in the first column and for various system configurations (AI-MT, AI-HT and AI-SHT), where AI refers to "All Intra", MT refers to "Main Tier", HT refers to "High Tier" and SHT refers to "Super High Tier". The performance measure is based on BD-rate, which is a well-known performance measure in the field of video coding. The comparison results are shown in Table 3, where a negative value implies improved performance over the anchor system. As shown in Table 3, the seventh embodiment of the present invention demonstrates noticeable performance improvement for screen content materials (e.g., SC RGB 444, SC YUV 444, SC(444) GBR Opt. and SC(444) YUV Opt.).
Table 3. (The BD-rate comparison results appear as an image in the original publication.)
[0049] Fig. 1 illustrates a flowchart of an exemplary system incorporating an embodiment of the present invention to share palettes from previously processed palettes based on one or more palette sharing flags. The system receives input data associated with a current block comprising a set of color components as shown in step 110, wherein the set of color components consists of one or more colors. For encoding, the input data corresponds to pixel data to be encoded using palette coding. For decoding, the input data corresponds to coded pixel data to be decoded using palette coding. The input data may be retrieved from memory (e.g., computer memory, buffer (RAM or DRAM) or other media) or from a processor. One or more palette sharing flags for the current block are determined as shown in step 120. The first palette sharing flag is checked in step 130 to determine whether it is asserted. If the result is "Yes", the processing goes to step 140. If the result is "No", the processing goes to step 150. After processing in step 140 or step 150, the processing goes to step 160. In step 140, the set of current palettes is generated by copying one or more current palettes indicated by the first palette sharing flag entirely from one or more reference palettes of a set of reference palettes. In step 150, the set of current palettes is generated by deriving said one or more current palettes indicated by the first palette sharing flag from a bitstream associated with the video data. In step 160, encoding or decoding is applied to the current block according to the set of current palettes.
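The steps of Fig. 1 can be mapped to code as below. The helper names, the simple list-of-palettes data model, and the callback interfaces are assumptions made for this sketch; they do not appear in the specification.

```python
# Illustrative mapping of Fig. 1's steps 110-160.
def palette_code_block(input_data, sharing_flags, reference_palettes,
                       derive_from_bitstream, code_block):
    # Steps 110/120: input data and sharing flags for the current block.
    first_flag = sharing_flags[0]
    if first_flag:                                   # step 130 -> step 140
        # Copy the indicated palettes entirely from the reference palettes.
        current_palettes = [list(p) for p in reference_palettes]
    else:                                            # step 130 -> step 150
        # Derive the palettes from the bitstream instead.
        current_palettes = derive_from_bitstream()
    return code_block(input_data, current_palettes)  # step 160
```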
[0050] The flowchart shown above is intended to illustrate an example of palette coding according to the present invention. A person skilled in the art may modify each step, re-arrange the steps, split a step, or combine steps to practice the present invention without departing from the spirit of the present invention. In the disclosure, specific syntax and semantics have been used to illustrate examples to implement embodiments of the present invention. A skilled person may practice the present invention by substituting the syntax and semantics with equivalent syntax and semantics without departing from the spirit of the present invention.
[0051] The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirement. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without these specific details.
[0052] Embodiment of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
[0053] The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims

1. A method of coding a block of video data using palette coding in a video coding system, the method comprising:
receiving input data associated with a current block comprising a set of color components, wherein the set of color components consists of one or more colors;
determining one or more palette sharing flags for the current block;
generating a set of current palettes corresponding to the set of color components according to said one or more palette sharing flags, wherein if a first palette sharing flag is asserted, said generating the set of current palettes copies one or more current palettes indicated by the first palette sharing flag entirely from one or more reference palettes of a set of reference palettes, and if the first palette sharing flag is not asserted, said generating the set of current palettes derives said one or more current palettes indicated by the first palette sharing flag from a bitstream associated with the video data; and
encoding or decoding the current block according to the set of current palettes.
2. The method of Claim 1, wherein the current block corresponds to a coding unit (CU), a prediction unit (PU), a largest coding unit (LCU) or a coding tree block (CTB).
3. The method of Claim 1, wherein said one or more palette sharing flags correspond to a single sharing flag and if the single sharing flag is asserted, said generating the set of current palettes copies all current palettes of the set of current palettes entirely from all reference palettes of the set of reference palettes.
4. The method of Claim 1, wherein each of said one or more palette sharing flags corresponds to each of the set of color components, and if one corresponding sharing flag is asserted, said generating the set of current palettes copies one corresponding current palette with one corresponding color component entirely from one corresponding reference palette with said one corresponding color component.
5. The method of Claim 1, wherein said one or more palette sharing flags correspond to a luma palette sharing flag and a chroma palette sharing flag, and if the luma palette sharing flag or the chroma palette sharing flag is asserted, said generating the set of current palettes copies a current luma palette or at least one current chroma palette entirely from one corresponding luma reference palette or at least one corresponding chroma reference palette.
6. The method of Claim 1, wherein the set of reference palettes corresponds to a most recent set of palettes among one or more recent sets of palettes associated with one or more previous blocks.
7. The method of Claim 6, wherein said one or more recent sets of palettes correspond to N recent sets of palettes, wherein N is an integer greater than zero.
8. The method of Claim 7, wherein an oldest set of palettes is removed from said N recent sets of palettes when a new set of palettes is stored in said N recent sets of palettes.
9. The method of Claim 7, wherein said N recent sets of palettes are reset for each largest coding unit (LCU), or for each region of LCUs, or for each slice, and N is equal to 1.
10. The method of Claim 1, wherein the set of reference palettes corresponds to a set of neighboring palettes associated with a neighboring block at upper side neighboring block or left side neighboring block of the current block.
11. The method of Claim 10, wherein if the first palette sharing flag is asserted, a second palette sharing flag is signaled to select the set of reference palettes from the upper side neighboring block or the left side neighboring block of the current block.
12. The method of Claim 11, wherein the second palette sharing flag is omitted if the upper side neighboring block and the left side neighboring block of the current block have an identical set of palettes.
13. The method of Claim 11, wherein if the upper side neighboring block and the left side neighboring block of the current block have an identical set of palettes, the set of reference palettes is selected from the identical set of palettes and a replacement set of palettes associated with a replacement neighboring block, wherein the replacement neighboring block corresponds to an above-left neighboring block or an above-right neighboring block.
14. The method of Claim 11, wherein if the upper side neighboring block and the left side neighboring block of the current block have an identical set of palettes, the set of reference palettes is selected from the identical set of palettes and a previously coded set of palettes.
15. The method of Claim 1, wherein the first palette sharing flag is context-coded.
16. An apparatus of coding a block of video data using palette coding in a video coding system, the apparatus comprising one or more electronic circuits configured to:
receive input data associated with a current block comprising a set of color components, wherein the set of color components consists of one or more colors;
determine one or more palette sharing flags for the current block;
generate a set of current palettes corresponding to the set of color components according to said one or more palette sharing flags, wherein if a first palette sharing flag is asserted, one or more current palettes of the set of current palettes indicated by the first palette sharing flag are copied entirely from one or more reference palettes of a set of reference palettes, and if the first palette sharing flag is not asserted, said one or more current palettes of the set of current palettes indicated by the first palette sharing flag are derived from a bitstream associated with the video data; and encode or decode the current block according to the set of current palettes.
17. The apparatus of Claim 16, wherein said one or more palette sharing flags correspond to a single sharing flag and if the single sharing flag is asserted, all current palettes of the set of current palettes are entirely copied from all reference palettes of the set of reference palettes.
18. The apparatus of Claim 16, wherein each of said one or more palette sharing flags corresponds to each of the set of color components, and if one corresponding sharing flag is asserted, one corresponding current palette with one corresponding color component is entirely copied from one corresponding reference palette with said one corresponding color component.
19. The apparatus of Claim 16, wherein the set of reference palettes corresponds to a most recent set of palettes among one or more recent sets of palettes associated with one or more previous blocks.
20. The apparatus of Claim 16, wherein the set of reference palettes corresponds to a set of neighboring palettes associated with a neighboring block at upper side neighboring block or left side neighboring block of the current block.
PCT/US2014/068725 2013-12-18 2014-12-05 Palette prediction and sharing in video coding WO2015094711A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA2934246A CA2934246C (en) 2013-12-18 2014-12-05 Palette prediction and sharing in video coding
US15/104,654 US10469848B2 (en) 2013-12-18 2014-12-05 Palette prediction and sharing in video coding
EP14871918.0A EP3085068A4 (en) 2013-12-18 2014-12-05 Palette prediction and sharing in video coding
CN201480069231.6A CN106031142B (en) 2013-12-18 2014-12-05 The method and apparatus predicted and shared for palette in Video coding

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361917474P 2013-12-18 2013-12-18
US61/917,474 2013-12-18
US201461923378P 2014-01-03 2014-01-03
US61/923,378 2014-01-03

Publications (1)

Publication Number Publication Date
WO2015094711A1 true WO2015094711A1 (en) 2015-06-25

Family

ID=53403504

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/068725 WO2015094711A1 (en) 2013-12-18 2014-12-05 Palette prediction and sharing in video coding

Country Status (5)

Country Link
US (1) US10469848B2 (en)
EP (1) EP3085068A4 (en)
CN (1) CN106031142B (en)
CA (1) CA2934246C (en)
WO (1) WO2015094711A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017206805A1 (en) * 2016-05-28 2017-12-07 Mediatek Inc. Method and apparatus of palette mode coding for colour video data
US11039147B2 (en) 2016-05-28 2021-06-15 Mediatek Inc. Method and apparatus of palette mode coding for colour video data
CN113168718A (en) * 2018-09-14 2021-07-23 腾讯美国有限责任公司 Method and apparatus for palette decoding
US11917168B2 (en) 2015-06-08 2024-02-27 Tongji University Image encoding and decoding methods, image processing device, and computer storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10291915B2 (en) * 2014-03-06 2019-05-14 Samsung Electronics Co., Ltd. Video decoding method and apparatus and video encoding method and apparatus
US11259047B2 (en) 2016-04-06 2022-02-22 Kt Corporation Method and apparatus for processing video signal
MX2022002617A (en) 2019-09-12 2022-03-25 Bytedance Inc Using palette predictor in video coding.

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5684716A (en) * 1994-02-16 1997-11-04 Freeman; Mitchael C. Remote video transmission system
US20040179731A1 (en) * 2003-03-11 2004-09-16 Tarik Ono Method and apparatus for compressing images using color palettes and rare colors
US20060204086A1 (en) * 2005-03-10 2006-09-14 Ullas Gargi Compression of palettized images
US20130195199A1 (en) * 2012-01-30 2013-08-01 Qualcomm Incorporated Residual quad tree (rqt) coding for video coding

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5065144A (en) * 1990-04-17 1991-11-12 Analog Devices, Inc. Apparatus for mix-run encoding of image data
KR101712915B1 (en) * 2007-10-16 2017-03-07 엘지전자 주식회사 A method and an apparatus for processing a video signal
US9232226B2 (en) * 2008-08-19 2016-01-05 Marvell World Trade Ltd. Systems and methods for perceptually lossless video compression
CN102523367B (en) * 2011-12-29 2016-06-15 全时云商务服务股份有限公司 Real time imaging based on many palettes compresses and method of reducing
CN103209326B (en) * 2013-03-29 2017-04-12 惠州学院 PNG (Portable Network Graphic) image compression method
US11259020B2 (en) * 2013-04-05 2022-02-22 Qualcomm Incorporated Determining palettes in palette-based video coding
US9558567B2 (en) * 2013-07-12 2017-01-31 Qualcomm Incorporated Palette prediction in palette-based video coding


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUO, L. ET AL.: "RCE3: RESULTS OF TEST 3.1 ON PALETTE MODE FOR SCREEN CONTENT CODING", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 14TH MEETING ,JCTVC-N0247, 25 July 2013 (2013-07-25), VIENNA, AT ;, pages 1 - 7, XP030114764, Retrieved from the Internet <URL:http://phenix.it-sudparis.eu/jct/doc_end_user/documents/14_Vienna/wg11/JCTVC-N0247-v1.zip> [retrieved on 20150123] *
See also references of EP3085068A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11917168B2 (en) 2015-06-08 2024-02-27 Tongji University Image encoding and decoding methods, image processing device, and computer storage medium
WO2017206805A1 (en) * 2016-05-28 2017-12-07 Mediatek Inc. Method and apparatus of palette mode coding for colour video data
TWI672943B (en) * 2016-05-28 2019-09-21 聯發科技股份有限公司 Method and apparatus of palette mode coding for colour video data
US11039147B2 (en) 2016-05-28 2021-06-15 Mediatek Inc. Method and apparatus of palette mode coding for colour video data
CN113727109A (en) * 2016-05-28 2021-11-30 联发科技股份有限公司 Method and apparatus for palette mode encoding and decoding of color video data
CN113727109B (en) * 2016-05-28 2023-12-29 寰发股份有限公司 Method and apparatus for palette mode encoding and decoding of color video data
CN113168718A (en) * 2018-09-14 2021-07-23 腾讯美国有限责任公司 Method and apparatus for palette decoding
CN113168718B (en) * 2018-09-14 2023-08-04 腾讯美国有限责任公司 Video decoding method, device and storage medium

Also Published As

Publication number Publication date
CA2934246C (en) 2019-02-12
CA2934246A1 (en) 2015-06-25
CN106031142B (en) 2019-07-26
EP3085068A1 (en) 2016-10-26
US10469848B2 (en) 2019-11-05
CN106031142A (en) 2016-10-12
EP3085068A4 (en) 2016-12-21
US20160316213A1 (en) 2016-10-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14871918

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15104654

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2934246

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014871918

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014871918

Country of ref document: EP

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112016014088

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 112016014088

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20160616