WO2021194283A1 - Method and device for processing a video signal

Method and device for processing a video signal

Info

Publication number
WO2021194283A1
Authority
WO
WIPO (PCT)
Prior art keywords
palette
block
pixel
current block
value
Prior art date
Application number
PCT/KR2021/003725
Other languages
English (en)
Korean (ko)
Inventor
임성원
Original Assignee
KT Corporation (주식회사 케이티)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KT Corporation (주식회사 케이티)
Publication of WO2021194283A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: ... using predictive coding
    • H04N 19/503: ... using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/57: Motion estimation characterised by a search window with variable size or shape
    • H04N 19/10: ... using adaptive coding
    • H04N 19/102: ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/124: Quantisation
    • H04N 19/134: ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/169: ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: ... the unit being an image region, e.g. an object
    • H04N 19/176: ... the region being a block, e.g. a macroblock
    • H04N 19/70: ... characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • The present disclosure relates to a video signal processing method and apparatus.
  • Demand for high-resolution, high-quality video, such as HD (High Definition) and UHD (Ultra High Definition) video, is increasing across various application fields.
  • Image compression technologies include inter-picture prediction, which predicts pixel values in the current picture from pictures before or after it;
  • intra-picture prediction, which predicts pixel values in the current picture using pixel information within the current picture;
  • and entropy coding, which assigns short codes to frequently occurring values and long codes to rarely occurring values.
  • An object of the present disclosure is to provide a method and apparatus for intra prediction in encoding/decoding a video signal.
  • An object of the present disclosure is to provide a method and apparatus for intra prediction based on a palette mode in encoding/decoding a video signal.
  • An object of the present disclosure is to provide a method and apparatus for applying a palette mode under a lossless coding technique in encoding/decoding a video signal.
  • The video signal decoding method according to the present disclosure includes: determining whether lossless coding is applied; when lossless coding is applied, determining whether palette prediction is applied to the current block; constructing a palette table for the current block; and reconstructing the pixels in the current block based on the palette table.
  • The video signal encoding method according to the present disclosure includes: determining whether lossless encoding is applied; when lossless encoding is applied, determining whether palette prediction is applied to the current block; constructing a palette table for the current block; and encoding the pixels in the current block based on the palette table.
  • The size of the palette table may differ depending on whether lossless coding or lossy coding is applied to the current block.
  • Difference information indicating the difference between the palette table size under lossy coding and the palette table size under lossless coding may be decoded from the bitstream.
  • When lossless coding is applied, the escape mode may be applied to all pixels in the current block.
  • Escape value information for a pixel in the current block may be decoded from the bitstream.
  • The escape value information may indicate the difference between an escape value and an escape prediction value.
  • The escape prediction value may be derived based on an intra prediction mode or a block vector of the current block.
  • According to the present disclosure, by configuring the palette table of the current block based on a previous palette table, the encoding/decoding efficiency of the palette mode can be improved.
  • According to the present disclosure, the encoding/decoding efficiency of the palette mode can also be improved by adaptively selecting the scan order used in the palette mode.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present disclosure.
  • FIGS. 3 and 4 illustrate a lossless encoding method according to the present disclosure.
  • FIGS. 5 to 7 are diagrams explaining the concept of the palette mode according to the present disclosure.
  • FIG. 8 illustrates a method of performing intra prediction based on a palette mode according to the present disclosure.
  • FIGS. 9 and 10 show a method of configuring a palette table according to the present disclosure.
  • FIG. 11 is a diagram illustrating an example in which palette entries are added to a palette entry candidate list.
  • FIG. 12 shows an example in which a palette table predefined in the encoder and decoder is used.
  • FIG. 13 illustrates a method of signaling a palette prediction flag in the form of a binary vector based on run-length encoding, as an embodiment to which the present disclosure is applied.
  • FIG. 14 illustrates a method of encoding/decoding a palette index according to a scan order according to the present disclosure.
  • FIG. 15 is an exemplary diagram describing pixels adjacent to a current pixel.
  • FIG. 16 shows an example of encoding a run merge flag using context information.
  • FIG. 17 is an example showing a range of a context information index.
  • FIG. 18 shows an example in which index-related information is encoded in units of a region having a preset size.
  • FIG. 19 shows an example in which index-related information is encoded using inter-block dependency.
  • FIG. 20 is an exemplary diagram explaining an aspect of encoding an escape value.
  • FIG. 21 shows an example of deriving a difference value for an escape value based on an intra prediction mode.
  • Terms such as first and second may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another. For example, without departing from the scope of the present disclosure, a first component may be termed a second component, and similarly a second component may be termed a first component. The term and/or includes any combination of a plurality of related listed items, or any one of them.
  • FIG. 1 is a block diagram illustrating an image encoding apparatus according to an embodiment of the present disclosure.
  • The image encoding apparatus 100 includes a picture division unit 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a rearrangement unit 160, an entropy encoding unit 165, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155.
  • Each of the constituent units shown in FIG. 1 is illustrated independently to represent its distinct function within the image encoding apparatus; this does not mean that each unit is implemented as separate hardware or as a single software unit. That is, the units are listed separately for convenience of description; at least two units may be combined into one, or one unit may be divided into several units that together perform the same function. Embodiments in which units are integrated and embodiments in which they are separated are both included in the scope of the present disclosure, as long as they do not depart from its essence.
  • Some components may not be essential to the core functions of the present disclosure and may be optional components that merely improve performance.
  • The present disclosure can be implemented including only the components essential to its essence, excluding components used merely for performance improvement, and a structure including only those essential components is also included in the scope of the present disclosure.
  • the picture divider 110 may divide the input picture into at least one processing unit.
  • the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
  • The picture division unit 110 may divide one picture into multiple combinations of coding units, prediction units, and transform units, and select one combination of a coding unit, prediction unit, and transform unit based on a predetermined criterion (e.g., a cost function) to encode the picture.
  • one picture may be divided into a plurality of coding units.
  • a recursive tree structure such as a quad tree, a ternary tree, or a binary tree may be used.
  • A coding unit that is split into further coding units, with the original coding unit as the root, may be viewed as having as many child nodes as there are split coding units.
  • a coding unit that is no longer split according to certain restrictions becomes a leaf node. For example, when it is assumed that quad tree splitting is applied to one coding unit, one coding unit may be split into up to four different coding units.
  • a coding unit may be used as a unit for performing encoding or may be used as a meaning for a unit for performing decoding.
  • A prediction unit may be obtained by splitting one coding unit into at least one square or rectangle of the same size, or by splitting a coding unit so that one of the resulting prediction units has a shape and/or size different from the others.
  • the transformation unit and the prediction unit may be set to be the same. In this case, after dividing the coding unit into a plurality of transform units, intra prediction may be performed for each transform unit.
  • a coding unit may be divided in a horizontal direction or a vertical direction. The number of transformation units generated by dividing the coding unit may be 2 or 4 according to the size of the coding unit.
  • The prediction units 120 and 125 may include an inter prediction unit 120 that performs inter prediction and an intra prediction unit 125 that performs intra prediction. Whether inter prediction or intra prediction is used for a coding unit may be determined, and specific information according to each prediction method (e.g., intra prediction mode, motion vector, reference picture) may be determined. In this case, the processing unit in which prediction is performed may differ from the processing unit in which the prediction method and its details are determined. For example, the prediction method and prediction mode may be determined per coding unit, while prediction itself is performed per prediction unit or transform unit. The residual value (residual block) between the generated prediction block and the original block may be input to the transform unit 130.
  • In addition, the prediction mode information, motion vector information, and other information used for prediction may be encoded by the entropy encoding unit 165 together with the residual value and transmitted to the decoding apparatus.
  • When a specific encoding mode is used, it is also possible to encode the original block as-is, without generating a prediction block through the prediction units 120 and 125, and transmit it to the decoder.
  • The inter prediction unit 120 may predict a prediction unit based on information of at least one of the pictures before or after the current picture; in some cases, it may predict a prediction unit based on information of a partial region within the current picture that has already been encoded.
  • the inter prediction unit 120 may include a reference picture interpolator, a motion prediction unit, and a motion compensator.
  • the reference picture interpolator may receive reference picture information from the memory 155 and generate pixel information of integer pixels or less in the reference picture.
  • For luma pixels, a DCT-based 8-tap interpolation filter with varying filter coefficients may be used to generate pixel information at sub-integer positions in units of 1/4 pixel.
  • For chroma pixels, a DCT-based 4-tap interpolation filter with varying filter coefficients may be used to generate pixel information at sub-integer positions in units of 1/8 pixel.
  • the motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolator.
  • To calculate the motion vector, various methods may be used, such as the full search-based block matching algorithm (FBMA), three-step search (TSS), and the new three-step search algorithm (NTS).
  • the motion vector may have a motion vector value of 1/2 or 1/4 pixel unit based on the interpolated pixel.
  • the motion prediction unit may predict the current prediction unit by using a different motion prediction method.
  • Various methods such as a skip method, a merge method, an AMVP (Advanced Motion Vector Prediction) method, an intra block copy method, etc., may be used as the motion prediction method.
  • the intra prediction unit 125 may generate a prediction block based on reference pixel information that is pixel information in the current picture.
  • Reference pixel information may be derived from a selected one of a plurality of reference pixel lines.
  • An N-th reference pixel line among the plurality of reference pixel lines may include left pixels having an x-axis difference of N from the upper-left pixel in the current block, and upper pixels having a y-axis difference of N from the upper-left pixel.
  • the number of reference pixel lines that the current block can select may be one, two, three, or four.
  • When a reference pixel of the current block belongs to a block on which inter prediction was performed, and is therefore unavailable, it may be replaced with reference pixel information of a block on which intra prediction was performed. That is, when a reference pixel is unavailable, the unavailable reference pixel information may be replaced with information of at least one of the available reference pixels.
  • In intra prediction, the prediction modes may include directional prediction modes, in which reference pixel information is used according to the prediction direction, and non-directional modes, in which directional information is not used when performing prediction.
  • a mode for predicting luminance information and a mode for predicting chrominance information may be different, and intra prediction mode information used for predicting luminance information or predicted luminance signal information may be utilized to predict chrominance information.
  • When intra prediction is performed, if the size of the prediction unit and the size of the transform unit are the same, intra prediction may be performed on the prediction unit based on the pixels on its left side, its upper-left side, and its upper side.
  • the intra prediction method may generate a prediction block after applying a smoothing filter to a reference pixel according to a prediction mode. Whether to apply the smoothing filter may be determined according to the selected reference pixel line.
  • the intra prediction mode of the current prediction unit may be predicted from the intra prediction mode of the prediction unit existing around the current prediction unit.
  • When the prediction mode of the current prediction unit is predicted using mode information derived from a neighboring prediction unit: if the intra prediction modes of the current and neighboring prediction units are the same, predetermined flag information may be used to signal that the two prediction modes are identical; if they differ, entropy encoding may be performed to encode the prediction mode information of the current block.
  • A residual block containing residual information, i.e., the difference between the prediction unit generated by the prediction units 120 and 125 and the original block, may be generated.
  • the generated residual block may be input to the transform unit 130 .
  • The transform unit 130 may transform the residual block, which contains the residual information between the prediction unit generated by the prediction units 120 and 125 and the original block, using a transform method such as DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), or KLT. Whether to apply DCT, DST, or KLT to transform the residual block may be determined based on at least one of the size of the transform unit, the shape of the transform unit, the prediction mode of the prediction unit, or the intra prediction mode information of the prediction unit.
  • the quantizer 135 may quantize the values transformed by the transform unit 130 into the frequency domain.
  • the quantization coefficient may change according to blocks or the importance of an image.
  • the value calculated by the quantization unit 135 may be provided to the inverse quantization unit 140 and the rearrangement unit 160 .
  • The rearrangement unit 160 may rearrange the coefficient values of the quantized residual block.
  • the reordering unit 160 may change the two-dimensional block form coefficient into a one-dimensional vector form through a coefficient scanning method.
  • the rearranging unit 160 may use a Zig-Zag Scan method to scan from DC coefficients to coefficients in a high-frequency region and change them into a one-dimensional vector form.
  • Depending on the size of the transform unit and the intra prediction mode, a vertical scan that scans the two-dimensional block-shaped coefficients in the column direction, a horizontal scan that scans them in the row direction, or a diagonal scan that scans them in the diagonal direction may be used instead of the zig-zag scan. That is, which of the zig-zag, vertical, horizontal, and diagonal scans is used may be determined according to the size of the transform unit and the intra prediction mode.
  • the entropy encoding unit 165 may perform entropy encoding based on the values calculated by the reordering unit 160 .
  • various encoding methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be used.
  • The entropy encoding unit 165 may encode various information received from the rearrangement unit 160 and the prediction units 120 and 125, such as residual coefficient information and block type information of the coding unit, prediction mode information, division unit information, prediction unit information, transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information.
  • the entropy encoder 165 may entropy-encode the coefficient values of the coding units input from the reordering unit 160 .
  • the inverse quantizer 140 and the inverse transform unit 145 inversely quantize the values quantized by the quantizer 135 and inversely transform the values transformed by the transform unit 130 .
  • The residual values generated by the inverse quantization unit 140 and the inverse transform unit 145 may be combined with the prediction unit predicted through the motion estimation unit, motion compensation unit, and intra prediction unit included in the prediction units 120 and 125 to generate a reconstructed block.
  • the filter unit 150 may include at least one of a deblocking filter, an offset correcting unit, and an adaptive loop filter (ALF).
  • the deblocking filter may remove block distortion caused by the boundary between blocks in the reconstructed picture.
  • it may be determined whether to apply the deblocking filter to the current block based on pixels included in several columns or rows included in the block.
  • a strong filter or a weak filter can be applied according to the required deblocking filtering strength.
  • When performing vertical filtering and horizontal filtering, the horizontal filtering and vertical filtering may be processed in parallel.
  • the offset correcting unit may correct an offset from the original image in units of pixels with respect to the image on which the deblocking has been performed.
  • To perform offset correction on a specific picture, a method of dividing the pixels of the image into a certain number of regions, determining a region to which an offset is to be applied, and applying the offset to that region, or a method of applying an offset in consideration of the edge information of each pixel, may be used.
  • Adaptive loop filtering may be performed based on a value obtained by comparing the filtered reconstructed image and the original image. After dividing the pixels included in the image into a predetermined group, one filter to be applied to the corresponding group is determined, and filtering can be performed differentially for each group.
  • Information on whether to apply ALF may be transmitted per coding unit (CU) for the luma signal, and the shape and filter coefficients of the ALF filter to be applied may vary per block.
  • Alternatively, an ALF filter of the same form may be applied regardless of the characteristics of the block to be filtered.
  • the memory 155 may store the reconstructed block or picture calculated through the filter unit 150 , and the stored reconstructed block or picture may be provided to the predictors 120 and 125 when inter prediction is performed.
  • FIG. 2 is a block diagram illustrating an image decoding apparatus according to an embodiment of the present disclosure.
  • The image decoding apparatus 200 may include an entropy decoding unit 210, a rearrangement unit 215, an inverse quantization unit 220, an inverse transform unit 225, prediction units 230 and 235, a filter unit 240, and a memory 245.
  • the input bitstream may be decoded by a procedure opposite to that of the image encoding apparatus.
  • the entropy decoding unit 210 may perform entropy decoding in a procedure opposite to that performed by the entropy encoding unit of the image encoding apparatus. For example, various methods such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be applied corresponding to the method performed by the image encoding apparatus.
  • the entropy decoding unit 210 may decode information related to intra prediction and inter prediction performed by the encoding apparatus.
  • The rearrangement unit 215 may rearrange the bitstream entropy-decoded by the entropy decoding unit 210 based on the rearrangement method used by the encoder. The coefficients expressed in one-dimensional vector form may be restored and rearranged into coefficients in two-dimensional block form.
  • the reordering unit 215 may receive information related to coefficient scanning performed by the encoder and perform the reordering by performing a reverse scanning method based on the scanning order performed by the corresponding encoder.
  • the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoding apparatus and the reordered coefficient values of the blocks.
  • The inverse transform unit 225 may perform, on the quantization result produced by the image encoding apparatus, the inverse transforms (inverse DCT, inverse DST, and inverse KLT) of the transforms (DCT, DST, and KLT) performed by the transform unit.
  • Inverse transform may be performed based on a transmission unit determined by the image encoding apparatus.
  • A transform technique (e.g., DCT, DST, or KLT) may be selectively applied according to multiple pieces of information, such as the prediction method, the size and shape of the current block, the prediction mode, and the intra prediction direction.
  • the prediction units 230 and 235 may generate a prediction block based on the prediction block generation related information provided from the entropy decoding unit 210 and previously decoded block or picture information provided from the memory 245 .
  • When intra prediction is performed in the same manner as in the image encoding apparatus: if the size of the prediction unit and the size of the transform unit are the same, intra prediction is performed on the prediction unit based on the pixels on its left, upper-left, and upper sides; if the sizes differ, intra prediction may be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN splitting may be used only for the smallest coding unit.
  • the prediction units 230 and 235 may include a prediction unit determiner, an inter prediction unit, and an intra prediction unit.
  • The prediction unit determiner may receive various information from the entropy decoding unit 210, such as prediction unit information, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, identify the prediction unit within the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction.
  • The inter prediction unit 230 may perform inter prediction on the current prediction unit by using the information required for its inter prediction, provided from the image encoding apparatus, based on information included in at least one of the pictures before or after the current picture containing the current prediction unit. Alternatively, inter prediction may be performed based on information of a pre-reconstructed partial region within the current picture.
  • To perform inter prediction, it may be determined, on a per-coding-unit basis, which of skip mode, merge mode, AMVP mode, and intra block copy mode is the motion prediction method of the prediction unit included in the coding unit.
  • the intra prediction unit 235 may generate a prediction block based on pixel information in the current picture.
  • intra prediction may be performed based on intra prediction mode information of the prediction unit provided by the image encoding apparatus.
  • the intra prediction unit 235 may include an Adaptive Intra Smoothing (AIS) filter, a reference pixel interpolator, and a DC filter.
  • the AIS filter is a part that performs filtering on the reference pixel of the current block, and may be applied by determining whether to apply the filter according to the prediction mode of the current prediction unit.
  • AIS filtering may be performed on the reference pixel of the current block by using the prediction mode and AIS filter information of the prediction unit provided by the image encoding apparatus.
  • the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.
  • The reference pixel interpolator may interpolate the reference pixels to generate reference pixels at sub-integer positions.
  • When the prediction mode of the current prediction unit is a mode that generates a prediction block without interpolating the reference pixels, the reference pixels may not be interpolated.
  • the DC filter may generate the prediction block through filtering when the prediction mode of the current block is the DC mode.
  • the reconstructed block or picture may be provided to the filter unit 240 .
  • the filter unit 240 may include a deblocking filter, an offset correction unit, and an ALF.
  • the deblocking filter of the image decoding apparatus may receive deblocking filter-related information provided from the image encoding apparatus, and the image decoding apparatus may perform deblocking filtering on the corresponding block.
  • the offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image during encoding, information on the offset value, and the like.
  • ALF may be applied to a coding unit based on information on whether ALF is applied, ALF coefficient information, etc. provided from the encoding apparatus. Such ALF information may be provided by being included in a specific parameter set.
  • the memory 245 may store the reconstructed picture or block to be used as a reference picture or reference block, and may also provide the reconstructed picture to an output unit.
  • Hereinafter, the term coding unit is used for convenience of description, but it may be a unit in which decoding as well as encoding is performed.
  • The current block indicates a block to be encoded/decoded; depending on the encoding/decoding step, it may indicate a coding tree block (or coding tree unit), a coding block (or coding unit), a transform block (or transform unit), a prediction block (or prediction unit), or a block to which an in-loop filter is applied.
  • a 'unit' may indicate a basic unit for performing a specific encoding/decoding process
  • a 'block' may indicate a pixel array of a predetermined size.
  • 'block' and 'unit' may be used interchangeably.
  • In this specification, coding block and coding unit are mutually equivalent and used interchangeably.
  • FIGS. 3 and 4 illustrate a lossless encoding method according to the present disclosure.
  • Image compression can be broadly classified into lossy coding and lossless coding.
  • The biggest difference between the two types of coding is the presence or absence of a quantization process.
  • In lossy coding, greater compression efficiency than lossless coding can be obtained through the quantization process, but data loss occurs.
  • In lossless coding, the original data is maintained as-is, but compression efficiency is lower than that of lossy coding.
  • When quantization is applied, reconstructed data different from the original data may be generated (i.e., loss occurs). Accordingly, in lossless coding the quantization process and the in-loop filtering process may be skipped. In this case, if the quantization process is omitted, the transform process that converts the residual data into frequency-domain components becomes meaningless, so the transform process may also be omitted when lossless coding is applied.
  • information indicating whether lossless encoding is applied must be transmitted to the decoder.
  • Information indicating whether lossless encoding is performed may be encoded in an upper header (e.g., SPS, PPS, or slice header). If the upper header indicates, using 1-bit information (a flag), that lossless encoding was performed, the decoder determines from that flag that lossless encoding was used; in that case, the decoder may omit the inverse quantization and in-loop filtering processes when decoding the image.
  • Alternatively, whether lossless coding is applied may be determined in units of subpictures, tiles, or coding tree units.
  • A flag for determining the variable lossless_coding, which indicates whether lossless encoding is used, may be signaled through the bitstream.
  • A value of true for the variable lossless_coding indicates that lossless encoding is applied, and a value of false indicates that it is not.
  • Variables indicating whether the transform, quantization, deblocking filter, SAO, and ALF are skipped may be defined as t_skip, q_skip, d_skip, s_skip, and a_skip, respectively. A value of true indicates that the corresponding coding process is skipped, and a value of false indicates that it is not.
  • Depending on the value of the variable lossless_coding, it may be determined whether the flags for deriving these variables are signaled through the bitstream. For example, when lossless_coding is true, signaling of the flags for determining t_skip, q_skip, d_skip, s_skip, and a_skip may be omitted, and the variables whose flags are omitted may be inferred to have a pre-defined value; the pre-defined value may be true.
  • When the value of lossless_coding is false, the flags for determining t_skip, q_skip, d_skip, s_skip, and a_skip may be decoded, and based on whether each variable is true or false, it is determined whether to skip the corresponding coding process.
  • When the value of lossless_coding is false, at least one of t_skip, q_skip, d_skip, s_skip, and a_skip may be set to true. Accordingly, when the values of all variables other than one remaining variable are false, signaling of the flag for that remaining variable may be omitted and its value set to true. For example, when t_skip, q_skip, d_skip, and s_skip are all false, encoding of the flag for determining a_skip may be omitted and a_skip may be set to true.
  • Alternatively, lossless_coding may be defined as an internal variable, whose value is determined based on the variables t_skip, q_skip, d_skip, s_skip, and a_skip.
  • Specifically, a flag for determining the value of each of t_skip, q_skip, d_skip, s_skip, and a_skip may be signaled through the bitstream.
  • When all of the variables are true, the variable lossless_coding may be set to true.
  • Otherwise, the variable lossless_coding may be determined to be false.
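  • The two signaling schemes described above can be summarized in code. The following is a minimal sketch, assuming a hypothetical read_flag() helper that reads one flag from the bitstream; variable names follow the description above, and the sketch is illustrative rather than a normative parsing process.

```python
# Sketch of the two lossless-coding signaling schemes described above.
# read_flag is a hypothetical helper returning one decoded flag (True/False).

SKIP_VARS = ["t_skip", "q_skip", "d_skip", "s_skip", "a_skip"]

def parse_explicit_lossless(read_flag):
    """FIG. 3 style: lossless_coding is signaled; skip flags follow only if false."""
    lossless_coding = read_flag()
    if lossless_coding:
        # Flag signaling is omitted; the variables take the pre-defined value (true).
        skips = {name: True for name in SKIP_VARS}
    else:
        skips = {}
        for i, name in enumerate(SKIP_VARS):
            if i == len(SKIP_VARS) - 1 and not any(skips.values()):
                # All other variables are false: the last flag is omitted
                # and the remaining variable is inferred to be true.
                skips[name] = True
            else:
                skips[name] = read_flag()
    return lossless_coding, skips

def parse_derived_lossless(read_flag):
    """FIG. 4 style: lossless_coding is an internal variable derived from the flags."""
    skips = {name: read_flag() for name in SKIP_VARS}
    lossless_coding = all(skips.values())  # true only when every process is skipped
    return lossless_coding, skips
```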
  • The value of a variable blk_lossless_coding may be set to the value of the variable lossless_coding determined as in FIG. 3 or FIG. 4.
  • Alternatively, blk_lossless_coding may be encoded for each block to determine whether to perform lossless encoding in units of blocks.
  • FIGS. 5 to 7 are diagrams explaining the concept of the palette mode according to the present disclosure.
  • In the palette mode, pixels that occur frequently in the block to be encoded (hereinafter referred to as the current block) are represented by specific indices, and the indices, rather than the pixel values themselves, are encoded and transmitted to the decoding apparatus.
  • a flag indicating whether the palette mode is permitted may be encoded and transmitted to the decoding apparatus.
  • the flag may be coded only when the size of the current block is less than or equal to a preset size.
  • the preset size may be determined based on the slice type of the slice to which the current block belongs, the encoding mode or the prediction mode of the current block. For example, when the current block belongs to the I slice, the palette mode may be used only when the size of the current block is 4x4. When the current block belongs to the B or P slice, the palette mode may be used only when the size of the current block is larger than 4x4 and smaller than 64x64.
  • FIG. 5 is an example of the process of generating a palette table.
  • It is assumed that the size of the current block is 8x8, and a histogram of the 64 pixels in the current block is shown in FIG. 5.
  • In the histogram, the horizontal axis indicates the pixel value (for example, a value from 0 to 255 for a pixel quantized to 8 bits), and the vertical axis indicates the frequency of each pixel value.
  • Quantization zones are set around frequently occurring pixel values. The pixels within a quantization zone are replaced with the most frequent pixel value in that zone, and one index is assigned to that value.
  • Information indicating the size of the quantization zone may be encoded and transmitted to a decoding apparatus. Alternatively, the size of the quantization zone may be determined based on at least one of the size, shape, and bit depth of the current block.
  • In FIG. 5, the parts drawn with thick solid lines in the quantization zones denote the pixels a8, a20, a31, and a40 having the highest frequency, and the parts drawn with thin solid lines denote the other pixels.
  • Pixels not included in any quantization zone are expressed as escape values, and these values are quantized and encoded separately in addition to the index coding.
  • FIG. 6 shows an example of the palette table configured in FIG. 5.
  • Each row of the palette table is a palette entry, and a different index is assigned to each entry. That is, the size of the palette table may mean the number of entries.
  • An entry is formed from each of the pixels a8, a20, a31, and a40 having the highest frequency in their respective quantization zones, and an index is assigned to each entry. If an escape value exists, it is assigned an index by placing the escape at the last entry; that is, the last index in the palette can mean the escape value.
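  • As a rough illustration of the procedure of FIGS. 5 and 6, the sketch below selects the most frequent pixel values as palette entries and treats a fixed-width window around each as its quantization zone. The zone width and entry count are illustrative assumptions; as noted above, the zone size may instead be signaled or derived from the block size, shape, or bit depth.

```python
from collections import Counter

def build_palette_table(pixels, num_entries=4, zone_half_width=8):
    """Pick the most frequent pixel values as palette entries; pixels within
    +/- zone_half_width of an entry fall into that entry's quantization zone."""
    entries = []
    for value, _ in Counter(pixels).most_common():
        # Keep a value only if it lies outside every existing quantization zone.
        if all(abs(value - e) > zone_half_width for e in entries):
            entries.append(value)
        if len(entries) == num_entries:
            break
    escape_index = len(entries)  # the last index is reserved for the escape mode
    return entries, escape_index
```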
  • FIG. 7 is an example of the process in which the pixels in a block are mapped to indices using the configured palette table.
  • The assigned indices are called palette indices.
  • The pixels in the block are replaced with indices according to the configured palette table, and the indices are encoded and transmitted to the decoding apparatus.
  • For the escape pixels, the quantized values a50' and a62' are additionally encoded along with the index.
  • The palette table that was used is also encoded and transmitted to the decoding apparatus.
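  • Continuing the FIG. 7 example, the sketch below replaces each pixel with the index of the quantization zone containing it; out-of-zone pixels receive the escape index, and their quantized values are collected for separate coding. The simple division used as the quantizer is an illustrative assumption.

```python
def map_pixels_to_indices(pixels, entries, zone_half_width=8, escape_qstep=4):
    """Replace each pixel with a palette index; pixels outside every zone
    take the escape index and contribute a quantized escape value."""
    escape_index = len(entries)
    indices, escape_values = [], []
    for p in pixels:
        matches = [i for i, e in enumerate(entries) if abs(p - e) <= zone_half_width]
        if matches:
            indices.append(matches[0])
        else:
            indices.append(escape_index)
            escape_values.append(p // escape_qstep)  # quantized escape value
    return indices, escape_values
```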
  • FIG. 8 illustrates a method of performing intra prediction based on a palette mode according to the present disclosure.
  • The palette mode may be applied on a block-by-block basis (e.g., per coding unit or prediction unit); for this, flag information (pred_mode_plt_flag) indicating whether the palette mode is used may be signaled per block. When the value of the flag is 1, the palette mode is applied to the current block; when the value is 0, it is not.
  • the flag may be adaptively encoded/decoded based on at least one of a prediction mode of the current block and a size of the current block. For example, the flag may be encoded/decoded only when the prediction mode of the current block is the intra mode. The flag may be encoded/decoded only when the prediction mode of the current block is not a skip mode. The flag may be encoded/decoded only when at least one of a width or a height of the current block is less than or equal to a predetermined first threshold size.
  • the first threshold size is a pre-defined value in the encoding/decoding apparatus, and may be any one of 16, 32, or 64.
  • the flag may be encoded/decoded only when the product of the width and height of the current block is greater than a predetermined second threshold size.
  • the second threshold size is a value pre-defined in the encoding/decoding apparatus, and may be any one of 16, 32, or 64.
  • the first threshold size and the second threshold size may be different values. If any one of the above conditions is not satisfied, the flag is not encoded/decoded, and in this case, the value of the flag may be set to 0.
  • a palette table for the palette mode of the current block may be configured (S800).
  • the palette table may consist of at least one palette entry and a palette index identifying each palette entry.
  • the palette table of the current block may be determined using the palette table of the previous block (hereinafter referred to as the previous palette table).
  • the previous block may mean a block coded or decoded before the current block.
  • the palette entry of the current block may include at least one of a predicted palette entry or a signaled palette entry.
  • the current block may use all or part of the palette entries used by the previous block, and among the palette entries used in the previous block, the palette entry reused in the current block is called a predicted palette entry.
  • the current block can use all of the palette entries in the previous palette table.
  • the current block may use some of the palette entries of the previous palette table, and for this, a flag (PalettePredictorEntryReuseFlag, hereinafter referred to as a palette prediction flag) for specifying whether to reuse the palette entry may be used.
  • the value of the palette prediction flag is assigned to each palette entry of the previous palette table, and the palette prediction flag (PalettePredictorEntryReuseFlag[i]) determines whether the palette entry corresponding to the palette index i in the previous palette table is reused in the palette table of the current block.
  • The palette table of the current block may be constructed by extracting the palette entries whose palette prediction flag is 1 from the previous palette table and arranging them sequentially.
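  • A minimal sketch of this reuse mechanism follows. The reuse flags stand for PalettePredictorEntryReuseFlag, one per entry of the previous palette table, and new_entries stands for the signaled palette entries described below; the values in the example are hypothetical.

```python
def build_current_palette(prev_palette, reuse_flags, new_entries):
    """Extract the previous entries whose reuse flag is 1, keeping their order,
    then append the newly signaled entries (compare FIG. 9)."""
    predicted = [e for e, f in zip(prev_palette, reuse_flags) if f == 1]
    return predicted + list(new_entries)

# Example: reuse the 1st, 3rd, and 5th entries of the previous palette table.
palette = build_current_palette(
    prev_palette=[10, 40, 85, 120, 200],
    reuse_flags=[1, 0, 1, 0, 1],
    new_entries=[60, 150],
)
assert palette == [10, 85, 200, 60, 150]
```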
  • the palette table of the current block may be initialized in units of a predetermined area.
  • the predetermined region may mean a parallel processing region or a CTU row of the current picture.
  • the palette table of the current block may be initialized with the palette table of the neighboring CTU of the CTU to which the current block belongs.
  • the neighboring CTU may mean a CTU located above the CTU to which the current block belongs. That is, the palette table for the first CTU of the N-th CTU row may be initialized based on the palette table for the first CTU of the (N-1)-th CTU row.
  • the initialized palette table may be updated based on the palette table of the previous block belonging to the same CTU row.
  • The palette prediction flag may be encoded/decoded as an individual flag for each palette entry.
  • Alternatively, the palette prediction flags may be encoded/decoded in the form of a binary vector based on run-length encoding. That is, for the array of palette prediction flags specifying whether each previous palette entry is reused, a syntax element palette_predictor_run, which specifies the number of zero-valued palette prediction flags between non-zero palette prediction flags, may be encoded/decoded. This will be described later.
  • Alternatively, the palette prediction flag values may be encoded directly. This will be described later.
  • The palette table of the current block may further include palette entries signaled through the bitstream; a signaled palette entry means a palette entry that is used by the current block but is not included in the previous palette table.
  • the signaled palette entry may be added after the predicted palette entry of the palette table.
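  • The run-length form of this signaling can be sketched as follows, with palette_predictor_run counting the zero-valued flags between set flags as described above; the actual binarization and termination signaling of the syntax element are not reproduced here.

```python
def encode_predictor_runs(reuse_flags):
    """Turn the 0/1 flag array into palette_predictor_run values: the number
    of zero flags preceding each set flag."""
    runs, zeros = [], 0
    for f in reuse_flags:
        if f == 1:
            runs.append(zeros)
            zeros = 0
        else:
            zeros += 1
    return runs

def decode_predictor_runs(runs, total):
    """Rebuild the flag array of length `total` from the run values."""
    flags, pos = [0] * total, -1
    for r in runs:
        pos += r + 1
        flags[pos] = 1
    return flags

assert decode_predictor_runs(encode_predictor_runs([1, 0, 0, 1, 0, 1]), 6) == [1, 0, 0, 1, 0, 1]
```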
  • a palette index may be determined in units of pixels of the current block ( S810 ).
  • the current block may determine the palette index using at least one of an index mode (INDEX MODE) or a copy mode (COPY MODE).
  • the index mode may mean a method of encoding the palette index information (palette_idx_idc) in the encoding apparatus to specify the palette index used in the current block.
  • the decoding apparatus may derive the palette index of the current pixel based on the encoded palette index information.
  • the palette index information has a value between 0 and (MaxPaletteIndex-1), where MaxPaletteIndex may mean the size of the palette table of the current block or the number of palette entries constituting the palette table.
  • the value of the palette index information signaled through the bitstream may be allocated as the palette index of the current pixel.
  • the copy mode may refer to a method of determining the palette index of the current pixel by using the palette index of the neighboring pixel according to a predetermined scan order.
  • a horizontal scan, a vertical scan, a diagonal scan, etc. may be used, and any one of them may be selectively used.
  • To specify the scan order of the current block, a predetermined flag or index may be encoded/decoded.
  • For example, the encoding apparatus may encode the flag as 0 when horizontal scan is applied as the scan order of the current block, and as 1 when vertical scan is applied.
  • the decoding apparatus may adaptively determine the scan order of the current block according to the coded flag.
  • the present invention is not limited thereto, and a method of encoding/decoding the palette index according to the scan order will be described later.
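  • The scan orders selected by the flag above can be generated as coordinate sequences. The sketch below uses simple raster variants for the horizontal and vertical scans; the disclosure also allows diagonal and other scan types, which are omitted here.

```python
def scan_positions(width, height, scan_flag):
    """Return the (x, y) visiting order: flag 0 = horizontal, flag 1 = vertical."""
    if scan_flag == 0:
        return [(x, y) for y in range(height) for x in range(width)]  # row by row
    return [(x, y) for x in range(width) for y in range(height)]      # column by column
```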
  • the palette index of the current pixel may be predicted based on the palette index of the neighboring pixel, or the palette index of the neighboring pixel may be copied and set as the palette index of the current pixel as it is.
  • the neighboring pixel may mean a pixel adjacent to the top, bottom, left, or right of the current pixel.
  • the neighboring pixel may be located on the same horizontal line or the same vertical line as the current pixel.
  • The copy mode may be at least one of: a first copy mode, in which the palette index of the pixel adjacent above or below the current pixel is used as the palette index of the current pixel; a second copy mode, in which the palette index of the pixel adjacent to the left or right of the current pixel is used as the palette index of the current pixel; or a third copy mode, in which the palette index of a pixel diagonally adjacent to the current pixel is used as the palette index of the current pixel.
  • any one of the above-described first to third copy modes may be selectively used according to the scan order of the current block.
  • the first copy mode may be applied when the scan order of the current block is a vertical scan
  • the second copy mode may be applied when the scan order of the current block is a horizontal scan.
  • the scan start position of the current block is not limited to the upper-left pixel of the current block, and other corner pixels of the current block (eg, lower-left pixel, upper-right pixel, and lower-right pixel) may be used as the scan start position.
  • Depending on the scan start position, the same palette index as the pixel adjacent above or to the left may be used, as described above, or the same palette index as the pixel adjacent below or to the right may be used.
  • the encoding apparatus may encode a flag (run_copy_flag) indicating whether the copy mode is used.
  • When the copy mode is used, the encoding apparatus may encode the flag as 1; otherwise (i.e., when the index mode is used), it may encode the flag as 0.
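  • Putting the copy modes and run_copy_flag together, a decoder-side sketch might look like the following. The neighbor offsets per copy mode follow the description above (using the above/left/diagonal neighbors; the below/right variants are analogous), read_flag and read_index are hypothetical bitstream helpers, and border handling is omitted.

```python
# (dx, dy) offset of the neighboring pixel whose palette index is copied.
COPY_OFFSETS = {
    "first": (0, -1),   # pixel adjacent above the current pixel
    "second": (-1, 0),  # pixel adjacent to the left of the current pixel
    "third": (-1, -1),  # diagonally adjacent pixel
}

def decode_pixel_index(read_flag, read_index, idx_map, x, y, copy_mode):
    """Decode one palette index: copy it from a neighbor, or read it explicitly."""
    if read_flag():  # run_copy_flag == 1: copy mode
        dx, dy = COPY_OFFSETS[copy_mode]
        return idx_map[y + dy][x + dx]
    return read_index()  # run_copy_flag == 0: index mode (palette_idx_idc)
```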
  • the pixel of the current block may be predicted based on the palette table and the palette index ( S820 ).
  • the value of the palette entry extracted from the palette table may be set as the predicted value or the restored value of the pixel of the current block.
  • When the palette index indicates the last palette entry among the entries in the palette table of the current block, the pixel may be inferred to be encoded in the escape mode (ESCAPE MODE).
  • The escape mode may mean a method of predicting/reconstructing a pixel based on an additionally signaled palette escape value, rather than using an entry of the pre-configured palette table.
  • a pixel having a palette index equal to (the number of palette entries - 1) may be predicted/restored using the additionally signaled palette escape value.
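  • The reconstruction step can then be sketched as follows. Indices equal to the palette size select the escape mode and consume one signaled escape value; the multiplication used to de-quantize the escape value is an illustrative stand-in for the actual inverse quantization.

```python
def reconstruct_block(indices, palette, escape_values, escape_qstep=4):
    """Map each palette index back to a pixel value; the last index
    (len(palette)) selects the escape mode."""
    escape_index = len(palette)
    it = iter(escape_values)
    out = []
    for idx in indices:
        if idx == escape_index:
            out.append(next(it) * escape_qstep)  # de-quantized escape value
        else:
            out.append(palette[idx])
    return out
```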
  • FIGS. 9 and 10 show a method of configuring a palette table according to the present disclosure.
  • The same palette table used in the encoding apparatus must also exist in the decoding apparatus, so the palette table must be encoded in the encoding apparatus. To this end, the number of palette entries in the palette table may be encoded, and the pixel value assigned to each entry may be encoded.
  • When the palette mode was used in a previous block, the number of bits required to encode the palette table can be greatly reduced by generating the palette table of the current block based on the palette table used in the previous block.
  • Here, the previous block means a block that was encoded/decoded before the current block. Specifically, at least one of a flag indicating whether to configure the palette table of the current block based on the previous palette table, or a palette prediction flag indicating whether to add an entry included in the palette table of the previous block to the palette table of the current block, may be used.
  • FIG. 9 shows a method of reducing the bit amount of the palette table to be encoded by using the palette prediction flag.
  • the palette table A may mean a palette table existing in a block encoded using a palette mode before the current block.
  • the previous block may be a neighboring block adjacent to the current block.
  • the neighboring block may include at least one of an upper neighboring block, a left neighboring block, an upper left neighboring block, an upper right neighboring block, or a lower left neighboring block.
  • For each entry of palette table A, whether the entry is used as-is in the current palette table may be specified using a palette prediction flag. For example, a palette prediction flag of 1 may mean that the corresponding entry is used as-is in the current palette table, and 0 may mean that it is not used.
  • the index allocated to the entries predicted from the palette table A may be set to be the same as the index allocated to the palette table A. Alternatively, the index of each entry in the ascending/descending order of the indexes allocated to each entry in the palette table A may be reassigned.
  • In the example of FIG. 9, the first, third, and fifth entries of palette table A are used in the current palette table, so they are placed, in order, as the first through third entries of the current palette table, and new entries are configured only for the fourth and fifth entries.
  • The palette prediction flags are encoded first, then the number of remaining entries (two in the example of FIG. 9: the fourth and fifth entries of the current palette table) is encoded, and finally the remaining entries themselves are encoded, as many as that number.
  • Through this process, the decoding apparatus can generate the same palette table as the encoding apparatus and predict/reconstruct the current block.
  • information related to the palette table may be encoded and signaled.
  • the size of the palette table may indicate the maximum number of palette entries that the palette table can contain.
  • the size of the palette table may have a fixed value in the encoder and decoder.
  • Alternatively, the size of the palette table may be determined based on at least one of the bit depth of the current image, the color component of the current block (e.g., whether it is a luma or chroma component), or the size or shape of the current block.
  • the palette table of the current block may consist of palette entries reused in the previous palette table and palette entries newly added. Information about the reused palette entries constituting the palette table and the newly added palette entries may be encoded and signaled.
  • a palette prediction flag indicating whether a palette entry included in a previous palette table is reused may be encoded.
  • the number of palette entries to be newly added to the palette table may be a value obtained by subtracting the number of palette entries reused from the size of the palette table. For example, if the maximum number of palette entries that the palette table can contain is 40, the sum of the number of reused palette entries and the number of new palette entries cannot exceed 40.
  • the number of palette prediction flags may be set so as not to exceed a threshold value. For example, if the threshold value is 10, for up to 10 of the palette entries included in the previous palette table, it can be determined whether to reuse. Accordingly, a maximum of 10 palette prediction flags may be generated.
  • the number of reused palette entries may be set so that the number of reused palette entries does not exceed a threshold value. That is, the number of palette prediction flags having a value of True may be set so as not to exceed a threshold value.
  • the threshold value may be encoded through an upper header such as a slice, a picture, or a sequence. Alternatively, information for specifying a threshold value for each block may be encoded and signaled.
  • an encoder and a decoder may use a fixed value threshold.
  • the threshold value may be determined based on at least one of a size, a shape, a color component (eg, a luminance component or a chrominance component) of the block, or a bit depth.
  • it is possible to limit the number of entries that can be fetched using the palette prediction flag (hereinafter referred to as the maximum number of predictions).
  • information on the maximum number of predictions may be signaled through a bitstream.
  • the maximum number of predictions may be determined based on at least one of the size of the palette table, the size/shape of the current block, the size/shape of the previous block, or the size of the previous palette table.
  • entries from the previous palette table may be fetched using the palette prediction flag only up to a certain percentage of the size of the current palette table, and the remaining ratio may be unconditionally generated as new entries of the current palette table. For example, if the size of the current palette table is 6 and the ratio is set to 50%, up to 3 entries are fetched from the previous palette table using the palette prediction flag, and the remaining 3 entries are created as new entries of the current palette table. Accordingly, when the number of entries whose palette prediction flag has a value of 1 reaches three, encoding of the palette prediction flag may be omitted for subsequent entries.
  • when the size of the previous block is smaller than a preset threshold, the palette entries included in the palette table of the previous block may be set not to be added to the palette table of the current block. That is, when the size of the previous block is smaller than the preset threshold, encoding of the palette entry prediction flag for the palette entries of the previous block may be omitted, and its value may be regarded as 0.
  • the palette entry of the previous block may not be added to the palette table of the current block.
  • the threshold value may be encoded in an upper header and transmitted to a decoder.
  • the encoder and the decoder may use a fixed threshold.
  • the number of palette entries that can be added to the palette table of the current block from the palette table of the previous block may be determined.
  • a method of continuously allocating palette prediction flags using a second previous palette table that is earlier than the first previous palette table is also possible.
  • the encoding order of the palette prediction flag may be determined by considering the indexes of the entries included in the first previous palette table and the entries included in the second previous palette table.
  • the palette prediction flag may be encoded for the entry with index 0 included in the second previous palette table. Then, after encoding the palette prediction flag for the entry with index 1 included in the first previous palette table, it is possible to encode the palette prediction flag for the entry with index 1 included in the second previous palette table.
  • the palette table candidate list may be configured, and at least one of a plurality of previous palette table candidates included in the palette table candidate list may be used when encoding the current palette table.
  • RT denotes a pixel located at the upper right of the block
  • LB denotes a pixel located at the bottom left of the block.
  • the referenced block may be encoded as an index and transmitted to a decoding apparatus.
  • the pre-defined position may be the upper block (B) or the left block (A). In this case, encoding of an index specifying the referenced block may be omitted.
  • the palette table to be currently encoded can be filled by additionally designating a block based on an additional index.
  • the encoding/decoding apparatus may refer to a pre-promised fixed number of blocks, and information specifying the number of referenced blocks may be transmitted through an upper header.
  • a method in which the encoding/decoding apparatus refers to the same fixed number of neighboring blocks according to the size/form of the block and the size of the palette table is also possible.
  • a method of fetching the palette table from the corresponding block by designating M blocks encoded in the palette mode before the current block in the encoding order as indexes is also possible.
  • a method of fetching the palette table from the block by designating the block included in the collocated picture as an index is also possible.
  • a method of constructing a palette table candidate list is also possible. All palette tables used from the block at the first position of the image up to the block immediately before the current block are stored in the candidate list. Alternatively, after setting the number N of tables to be stored in the candidate list, N palette tables are stored in the candidate list. That is, when the encoding of a block is completed, the palette table of the encoded block may be stored in the candidate list. In this case, if a candidate identical to the palette table to be added already exists, the palette table may not be added to the candidate list. Alternatively, the palette table may be added to the candidate list and the identical existing candidate deleted from the candidate list.
  • palette table candidates in the candidate list may be given a higher priority the closer they are to the current block, and a lower priority the farther they are from it.
  • the priority may be set according to the size or reference frequency of the palette table. According to this priority, when the number of stored tables exceeds N, it can be deleted from the candidate list starting from the palette table with a lower priority.
  • a method of configuring a palette table list separately for each area to be processed in parallel is also possible.
  • the number of palette tables stored in the palette table list may be very small in the initial part of the area. Therefore, for each area processed in parallel, it is also possible to start from a preset initial palette table instead of building the palette table from scratch.
  • the initial palette table may be the palette table of the first CTU of the previous CTU row.
  • the preset initial palette table may be a palette table derived from the entire image, not a palette table derived for each block.
  • each entry value of the palette table derived from the entire image may be encoded through a higher header along with the number of entries.
  • a method of configuring the entries included in the palette table as a palette entry candidate list is also possible. Entries included in the palette table of the encoded block can be added to the entry candidate list. In this case, among the entries included in the palette table, only entries having an index smaller than the threshold may be included in the entry candidate list. When the number of entries included in the palette table of the current block is less than the maximum number, the palette table may be configured with reference to the candidate entries included in the palette entry candidate list.
  • Palette entries included in the palette table of the encoded/decoded block may be added to the palette entry candidate list.
  • the smallest index may be assigned to the newly added palette entries.
  • the existing palette entries may be removed from the palette entry candidate list in the order of the highest index.
  • FIG. 11 is a diagram illustrating an example in which palette entries are added to the palette entry candidate list.
  • blocks may be encoded/decoded using the configured palette table.
  • palette entries included in the palette table used for the encoded/decoded block may be added to the palette entry candidate list.
  • the duplicate palette entry may not be added to the palette entry candidate list.
  • when the same palette entry as the one to be added is already stored in the palette entry candidate list, the previously stored palette entry may be removed from the list and the duplicate palette entry may then be added (see the sketch below).
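  • A rough sketch of such a candidate list update, assuming newly added entries take the smallest indexes and eviction starts from the highest index (names and values are illustrative only):

      def update_entry_candidates(candidates, new_entries, max_size):
          # Newly added entries receive the smallest indexes (list front).
          # A duplicate already in the list is removed before re-insertion.
          for entry in reversed(new_entries):
              if entry in candidates:
                  candidates.remove(entry)
              candidates.insert(0, entry)
          # On overflow, entries are dropped from the highest index first.
          del candidates[max_size:]
          return candidates

      print(update_entry_candidates([5, 9, 12], [9, 7], max_size=4))
      # -> [9, 7, 5, 12]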
  • in order to reduce the complexity of constructing the palette entry candidate list, only palette entries whose index is less than or equal to a threshold value may be added to the list.
  • the palette entries included in the palette table may not be added to the palette entry candidate list.
  • the palette entries included in the palette table of the block may not be added to the palette entry candidate list. Accordingly, a palette entry from the palette table of a block whose number of pixels is less than or equal to the threshold value cannot be utilized as a prediction palette entry when constructing the palette table of the next block.
  • the palette entries included in the palette table may be added to the palette entry candidate list.
  • the threshold value may be encoded in an upper header and transmitted to a decoder.
  • the encoder and the decoder may use a fixed threshold.
  • the threshold may be a natural number such as 4, 8, 16, or 32.
  • the number of palette entries that can be added to the palette entry candidate list may be determined. For example, if the size of the block is less than or equal to the threshold, a maximum of n palette entries may be added to the palette entry candidate list, whereas if the size of the block is greater than the threshold, a maximum of m palette entries may be added to the list.
  • n may be a natural number smaller than m.
  • a palette table predefined in the encoder and the decoder may be used.
  • FIG. 12 shows an example in which a palette table predefined in an encoder and a decoder is used.
  • the predefined palette table means that the size of the palette table and/or pixel values allocated to palette entries are predefined in the encoder and the decoder.
  • an index specifying one of the plurality of palette tables may be encoded and transmitted to the decoder.
  • after defining only the pixel values allocated to each palette entry, only information indicating the index allocation order between the palette entries may be encoded.
  • index 0 is assigned to a palette entry having a pixel value of -3
  • index 1 is assigned to a palette entry having a pixel value of +4
  • index 2 can be assigned to a palette entry having a pixel value of -4.
  • the minimum value m in the block may be encoded and transmitted to the decoding apparatus, and an index of each of the palette entries may be determined based on the minimum value m.
  • index 0 may be allocated to the palette entry equal to the minimum value m, and subsequent indexes may be allocated in order of closeness to the minimum value m.
  • an index assigned to a palette entry having a small difference from the minimum value m may have a smaller value than an index assigned to a palette entry having a large difference from the minimum value m.
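  • The index allocation around the minimum value m might look as follows; the tie-breaking rule is an assumption, since the text does not specify one:

      def assign_indexes_by_min(entry_values, m):
          # Entries whose value is closer to the minimum m get smaller
          # indexes; ties broken by the entry value (an assumed rule).
          ordered = sorted(entry_values, key=lambda v: (abs(v - m), v))
          return list(enumerate(ordered))

      print(assign_indexes_by_min([7, 5, 12, 9], m=5))
      # -> [(0, 5), (1, 7), (2, 9), (3, 12)]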
  • Whether to use the predefined palette table may be determined based on whether lossless encoding is applied. For example, when lossless encoding is applied, a predefined palette table is used, and when lossless encoding is not applied, the decoder may configure and use the palette table in the same way as the encoder.
  • the method of configuring the palette table may be set differently depending on whether lossless encoding is applied.
  • the above-described palette table may be used to derive a predicted value, a reconstructed value, or a residual value of a sample.
  • a run length encoding method may be used.
  • a continuous sequence of identical data is called a run, and the continuous length is expressed as a run length.
  • for example, the string aaaaaabbccccccc contains 6 a's, 2 b's, and 7 c's, so it can be expressed as 6a2b7c.
  • Such an encoding method is called a run-length encoding method.
  • when encoding the palette prediction flags using run-length encoding, the number of 0s, the number of 1s, and so on can be expressed. Alternatively, run-length encoding may be performed only on 0s or, conversely, only on 1s; a minimal encoder for the string example above is sketched next.
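  • A minimal run-length encoder for the aaaaaabbccccccc example (illustrative only):

      def run_length_encode(s):
          # Collapse each run of identical symbols into "<count><symbol>".
          runs = []
          for ch in s:
              if runs and runs[-1][1] == ch:
                  runs[-1][0] += 1
              else:
                  runs.append([1, ch])
          return "".join(f"{n}{ch}" for n, ch in runs)

      print(run_length_encode("aaaaaabbccccccc"))  # -> 6a2b7c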
  • FIG. 13 illustrates a method of signaling a palette prediction flag in the form of a binary vector based on run length encoding as an embodiment to which the present disclosure is applied.
  • the palette table of the previous block uses 8 palette entries with palette indexes of 0 to 7.
  • for each of palette entries 0 to 7 of the previous block, the image encoding apparatus determines whether the corresponding palette entry is reused as a palette entry of the current block; if it is reused, the value of the palette prediction flag for that entry can be set to 1, and otherwise to 0. For example, as shown in FIG. 13, among the palette entries of the previous block, palette entries 0, 1, 3, and 7 are reused as palette entries of the current block while the remaining entries are not, so a binary vector represented by 11010001 may be generated.
  • at least one of the number of 1s in the binary vector (that is, the number of palette entries of the previous block reused as palette entries of the current block) or the number of 0s preceding each 1 in the binary vector may be encoded and signaled to the decoding device.
  • since the number of 1s in the binary vector is 4, the value 4 can be encoded as the number of palette entries of the previous block reused as palette entries of the current block.
  • the numbers of 0s preceding each 1 in the binary vector, that is, 0, 0, 1, 3, may be sequentially encoded.
  • the decoding apparatus receives, from the encoding device, at least one of information (palette_entry_run) about the number of 0s preceding each 1 in the binary vector or information about the number of palette entries of the previous block reused as palette entries of the current block, and can use it to compose the palette table of the current block.
  • the decoding apparatus sequentially extracts the information (palette_entry_run) about the number of 0s preceding each 1 in the binary vector, that is, 0, 0, 1, 3, from the bitstream, and uses it to restore the binary vector indicating whether each palette entry of the previous block is reused, that is, 11010001. If a value of 1 occurs while restoring the binary vector, the palette entry of the previous block corresponding to that 1 may be inserted into the palette table of the current block. Through this process, the palette table of the current block can be configured by selectively reusing some palette entries from the palette table of the previous block.
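  • The palette_entry_run signaling described above can be sketched as follows; the function names are hypothetical and only the FIG. 13 values are taken from the text:

      def encode_palette_entry_runs(flags):
          # For each flag equal to 1, signal the number of 0s seen
          # since the previous 1 (palette_entry_run).
          runs, zeros = [], 0
          for f in flags:
              if f == 1:
                  runs.append(zeros)
                  zeros = 0
              else:
                  zeros += 1
          return runs

      def decode_palette_entry_runs(runs, size):
          # Rebuild the binary vector from the zero-run values.
          flags, pos = [0] * size, -1
          for r in runs:
              pos += r + 1
              flags[pos] = 1
          return flags

      vec = [1, 1, 0, 1, 0, 0, 0, 1]             # 11010001 from FIG. 13
      runs = encode_palette_entry_runs(vec)       # [0, 0, 1, 3]
      print(decode_palette_entry_runs(runs, 8) == vec)  # True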
  • FIG. 14 illustrates a method of encoding/decoding a palette index according to a scan order according to the present disclosure.
  • the palette index assigned to each pixel of the current block must also be encoded. FIG. 14 is an example of a scan order performed in a current block.
  • the main purpose of the scan orders shown in FIG. 14 is to perform scanning in consideration of directionality. If the pixels in the current block have similar values in the horizontal or vertical direction, scanning as shown in FIG. 14(a) increases the likelihood that identical indices cluster together. Alternatively, if the pixels in the block have similar values in the z-direction or diagonal direction, scanning as shown in FIG. 14(b) increases that likelihood.
  • the encoding apparatus may indicate which scan method is used as an index, encode it, and transmit it to the decoding apparatus.
  • the scan order may be determined according to the size and shape of the current block.
  • the index for each pixel may be encoded.
  • the index of the current pixel may be derived using run merge encoding.
  • the run merge encoding method represents a method of deriving an index of a current pixel from an index of an adjacent pixel.
  • the adjacent pixel may include at least one of the left/right adjacent pixels, the upper adjacent pixel, the lower adjacent pixel, or the diagonally adjacent pixels of the current pixel, and the location of the adjacent pixel may be adaptively determined according to the scan direction applied to the current block.
  • FIG. 15 is an exemplary diagram for describing a pixel adjacent to a current pixel.
  • A indicates a current pixel to be encoded
  • B indicates a pixel immediately preceding the current pixel in a scan order
  • C indicates a neighboring pixel adjacent to the current pixel among pixels included in an adjacent line (row/column).
  • the adjacent line may be one of a top row, a bottom row, a left column, or a right column, depending on the scan direction.
  • the run type flag may be set.
  • the run type may indicate one of 'ABOVE', meaning that the index of the current pixel A is the same as the index of the neighboring pixel included in the adjacent line (i.e., neighboring pixel C), or 'INDEX', meaning that the index of the current pixel A is encoded as it is.
  • the value of the run merge flag may be set based on the run type of the previous pixel B and whether the index of the current pixel A is the same as that of the adjacent pixel B or C.
  • when the run type of the previous pixel B is 'ABOVE' and the index of the current pixel A is the same as that of the neighboring pixel C, the value of the run merge flag may be set to 1 (true).
  • when the run type of the previous pixel B is 'INDEX' and the indexes of the current pixel A and the previous pixel B are the same, the value of the run merge flag may also be set to true.
  • the decoder may derive the index of the current pixel with reference to the run type of the previous pixel B.
  • when the run type of the previous pixel B is 'ABOVE', the run type 'ABOVE' may be applied to the current pixel A, so that the index of the neighboring pixel C is derived as the index of the current pixel A. When the run type of the previous pixel B is 'INDEX', the index of the previous pixel B may be derived as the index of the current pixel A.
  • otherwise, the value of the run merge flag may be set to 0 (false).
  • run type information indicating the run type of the current pixel A may be encoded/decoded.
  • the run type may indicate either 'ABOVE' indicating that the index of the current pixel A is the same as the index of the neighboring pixel C or 'INDEX' in which the index of the current pixel A is encoded as it is.
  • the run type information may be a 1-bit flag or an index specifying one of a plurality of run types. For example, when the syntax run_type indicating the run type indicates 'ABOVE', it indicates that the index of the current pixel A is the same as the index of the neighboring pixel C. In this case, the index of the neighboring pixel C may be derived as the index of the current pixel A.
  • when the syntax run_type indicating the run type indicates 'INDEX', palette_idx, which specifies one of the plurality of palette entries included in the palette table, may be encoded/decoded.
  • information for deriving an escape value may be additionally encoded.
  • a pixel located at the top row in the current block does not have a pixel adjacent to the top. Accordingly, with respect to the pixel located in the uppermost row, encoding of run type information may be omitted and it may be estimated that the run type is 'INDEX'.
  • the decoder may derive the index of the current pixel A using the run type information. For example, when the run type information indicates 'ABOVE', the index of the neighboring pixel C may be derived as the index of the current pixel A. On the other hand, when the run type information indicates 'INDEX', index information may be additionally parsed, and the index information of the current pixel A may be derived based on the parsed index information.
  • for the first pixel, encoding/decoding of the run merge flag may be omitted, because there is no previous pixel whose index has been coded.
  • when encoding of the run merge flag is omitted, its value can be estimated to be 0 (false).
  • the index of the current pixel may be derived based on at least one of the syntaxes run_merge_flag, run_type, and palette_idx, as sketched below.
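  • A non-normative sketch of this index derivation, where read_flag/read_type/read_index stand in for bitstream parsing of run_merge_flag, run_type, and palette_idx (hypothetical helpers, not normative syntax functions):

      def decode_pixel_index(prev_type, prev_index, above_index,
                             read_flag, read_type, read_index):
          if read_flag():                      # run_merge_flag == 1
              if prev_type == 'ABOVE':         # continue the 'ABOVE' run
                  return 'ABOVE', above_index
              return 'INDEX', prev_index       # repeat previous pixel's index
          run_type = read_type()               # run_merge_flag == 0
          if run_type == 'ABOVE':
              return 'ABOVE', above_index      # copy index of neighboring pixel C
          return 'INDEX', read_index()         # parse palette_idx explicitly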
  • the above syntaxes may be encoded without using context information.
  • a coding method that does not use context information may be defined as bypass coding.
  • At least one of the above syntaxes may be set to be encoded using context information.
  • context information may be referred to.
  • the probability that the value of the run merge flag is 1 or the probability that the value of the run merge flag is 0 may be determined based on the value of the previous run merge flag.
  • FIG. 16 shows an example of encoding a run merge flag using context information.
  • a variable PREV_POS indicating the scan order of a pixel having the highest scan order among pixels in which the value of the run merge flag is set to 0 may be used.
  • a context information index value may be derived by subtracting the variable PREV_POS and 1 from the scan order of the current pixel, and the run merge flag may be encoded using the derived context information index value.
  • the value of the variable PREV_POS may be set to an initial value (e.g., 0). Accordingly, for the first run merge flag, the context information index value may be set to -1.
  • when a run merge flag having a value of 0 is encoded, the variable PREV_POS may be updated; when a run merge flag having a value of 1 is encoded, the variable PREV_POS may be maintained as it is.
  • a context information index for a pixel having a scan order of 6 may be set to 2.
  • the probability of the run merge flag may be determined according to the value of the context information index, and the run merge flag may be encoded based on the determined probability.
  • the variable PREV_POS has been described as indicating the position of the pixel whose run merge flag has a value of 0, but it may instead be set to indicate the position of the pixel whose run merge flag has a value of 1.
  • FIG. 17 is an example showing a range of a context information index.
  • the maximum value of the context information index may be set not to exceed a predefined threshold; when the derived value exceeds the maximum, the context information index may be set to the maximum value. In FIG. 17, the maximum value is illustrated as 4.
  • likewise, the minimum value of the context information index may be set not to fall below a predefined threshold; when the derived value is smaller than the minimum, the context information index may be set to the minimum value. In FIG. 17, the minimum value is exemplified as zero, so the context information index for the first pixel may be changed from -1 to 0.
  • the maximum and/or minimum values of the context information index may be defined in the encoder and the decoder. Alternatively, information indicating the maximum and/or minimum values of the context information index may be signaled through the bitstream. For example, the information may be encoded and signaled at a higher level such as a slice, a picture, or a sequence.
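  • Combining the PREV_POS-based derivation with the clamping described above, a sketch of the context index computation (the maximum of 4 and minimum of 0 follow the FIG. 16/17 examples; names are illustrative):

      def run_merge_context(scan_pos, prev_pos, ctx_min=0, ctx_max=4):
          # Distance to the last pixel whose run merge flag was 0
          # (tracked by PREV_POS), clamped to [ctx_min, ctx_max].
          ctx = scan_pos - prev_pos - 1
          return max(ctx_min, min(ctx, ctx_max))

      print(run_merge_context(0, 0))   # first pixel: -1 clamped to 0
      print(run_merge_context(6, 3))   # -> 2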
  • index-related information may be encoded for each region.
  • index-related information is encoded in units of a region having a preset size.
  • FIG. 18A shows a case where the block size is 16x4, and the example of FIG. 18B shows a case where the block size is 8x8.
  • a horizontal scan is applied to a block.
  • a block may be divided into regions of a predefined size. For example, when the predefined size is 16, the block may be divided into a plurality of regions in units of 16 pixels. For example, in the example of FIG. 18A , the block is divided into 16x1 sized regions, and in the example of FIG. 18B , the block is divided into 8x2 sized regions.
  • Index-related information may be encoded for each region. For example, after encoding of the index-related information in the N-th region (eg, at least one of run_merge_flag, run_type, or palette_idx) is completed, the index-related information in the N+1 th region may be encoded. Alternatively, the encoding of index-related information may be parallel-processed between regions.
  • Index-related information may be coded independently between regions. For example, when encoding index-related information within a given region, other regions may not be referred to. For example, when the first pixel of region 2 is encoded, information on the last pixel of region 1 may not be referenced. Accordingly, encoding of the run merge flag may be omitted for the first pixel in each region. Since index-related information is coded independently between regions, a scan order may be independently assigned to each region. For example, pixels may be distinguished by allocating scan order numbers from 0 to 15 within each region.
  • encoding of run type information may be omitted for pixels located in the uppermost row in each region.
  • index-related information may be encoded by providing inter-region dependency.
  • index-related information of pixels positioned in the uppermost row of region 2 may be encoded with reference to index-related information of pixels positioned in the lowermost row of region 1.
  • the pixels may be distinguished by assigning a continuous scan order across the regions. For example, scan orders 0 to 15 may be allocated to pixels belonging to region 1, while scan orders 16 to 31 may be allocated to pixels belonging to region 2.
  • the value of the run merge flag may be coded for the first pixel in the remaining area except for the first area.
  • the run merge flag of the first pixel of the region 2 may be encoded with reference to the last pixel of the region 1 (eg, a pixel having a scan order of 15).
  • Information indicating whether inter-region index-related information is independently encoded or dependently encoded may be encoded and signaled. Alternatively, based on at least one of the size or shape of the current block, it may be determined whether the inter-region index-related information is independently coded or dependently coded.
  • index-related information can be encoded by dividing one block into a plurality of regions and then providing dependencies between regions. By extending this, inter-block dependency may be given to encode index-related information.
  • FIG. 19 shows an example in which index-related information is encoded using inter-block dependency.
  • index-related information may be coded with reference to the neighboring block.
  • the run merge flag for the first pixel in the current block may be encoded with reference to the neighboring block.
  • the run merge flag for the first pixel of the current block may be encoded by referring to the pixel having the last scan order in the upper or left neighboring block, or to the pixel adjacent to the first pixel of the current block in the upper or left neighboring block.
  • the run merge flag for the first pixel C in the current block may be encoded with reference to the pixel A neighboring to the top or the pixel B neighboring to the left.
  • run type information may be coded with reference to a neighboring block even for pixels located in the uppermost row in the current block.
  • the upper neighboring block may be referred to when the left neighboring block is unavailable. That is, when the run-type flag encoding for the first pixel C in the current block is performed, the pixel B is preferentially referenced, but when the pixel B is unavailable, the pixel A may be referenced.
  • the escape value must be additionally encoded.
  • An escape value may be quantized, and the quantized escape value may be encoded.
  • quantization may be performed using an initial quantization parameter (QP) defined at the slice level. The initial quantization parameter may be signaled through the slice header.
  • an offset value indicating a difference between a quantization parameter defined at the slice level and a quantization parameter applied to the current block may be encoded.
  • the offset may be coded in units of blocks.
  • an offset having a value of 4 may be encoded and signaled.
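  • A toy sketch of escape-value quantization with a slice-level QP and a per-block offset; the step-size formula below is a simple stand-in, not the normative scaling of any particular codec:

      def quantize_escape(value, slice_qp, qp_offset):
          # Block QP = slice-level initial QP + per-block offset.
          step = 2 ** ((slice_qp + qp_offset) // 6)
          return value // step

      print(quantize_escape(200, slice_qp=22, qp_offset=4))  # step 16 -> 12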
  • the size of the palette table (ie, the maximum number of palette entries) and/or the binarization method of the index may be different. Accordingly, information indicating whether an escape exists in the current palette table may be encoded and signaled. The information may be a 1-bit flag.
  • when the flag indicates that an escape exists, the size of the current palette table may be increased by one; when the flag indicates that no escape exists, the size of the current palette table may be maintained.
  • a difference between the escape value and a prediction value derived through intra prediction may be encoded.
  • FIG. 20 is an exemplary diagram for explaining an aspect of encoding an escape value.
  • Fig. 20 (a) shows a current block and reference pixels adjacent to the current block
  • Fig. 20 (b) shows a palette entry for the current block.
  • the pixel to which the index 4 is assigned is encoded as an escape value.
  • a difference value derived by subtracting a prediction value obtained through intra prediction from the pixel value A may be encoded.
  • for example, a difference value (i.e., A - R2) derived by subtracting the upper reference pixel R2 from the pixel value A may be encoded.
  • the quantized difference value may be encoded.
  • the intra prediction mode used to derive the difference value may be at least one of planar, DC, vertical direction, horizontal direction, upper right diagonal direction, upper left diagonal direction, or lower left diagonal direction.
  • An escape value may be derived by using a fixed intra prediction mode in the encoder and decoder.
  • information specifying an intra prediction mode for deriving an escape value among a plurality of intra prediction modes may be encoded and signaled.
  • for example, when there are two available intra prediction modes, the information may be a 1-bit flag.
  • the intra prediction mode used to derive the escape value may be determined based on the flag.
  • the intra prediction mode used to derive the escape value may be determined based on at least one of the size and shape of the current block and the intra prediction mode of the neighboring block.
  • a difference value may be derived using an adjacent pixel in the current block.
  • the adjacent pixel indicates a pixel whose index is determined before the current pixel in the scan order.
  • FIG. 21 shows an example of deriving a difference value with respect to an escape value based on an intra prediction mode.
  • FIG. 21A is an example of deriving a difference value with respect to an escape value using a reference pixel
  • FIG. 21B is an example of deriving a difference value with respect to an escape value using an adjacent pixel.
  • the scan order is in the horizontal direction.
  • a difference value between a value of the current pixel and a reference pixel positioned in a vertical or horizontal direction of the current pixel may be derived.
  • for example, a difference value may be derived by subtracting the reference pixel R2 or the reference pixel R6 from the value A of the pixel having index 4.
  • the difference value can be derived using the reference pixel in the horizontal direction.
  • a difference value between the value of the current pixel and a neighboring pixel adjacent to the current pixel may be derived.
  • for example, a difference value may be derived by subtracting the value of the right neighboring pixel or the value of the upper neighboring pixel from the value A of the pixel having index 4.
  • when the vertical intra prediction mode is applied, a difference value may be derived using the neighboring pixel in the vertical direction, and when the horizontal intra prediction mode is applied, the difference value may be derived using the neighboring pixel in the horizontal direction.
  • the method of deriving a difference value may be determined depending on whether lossless encoding is applied. For example, when lossless encoding is not applied, as in the example shown in FIG. 21A, a difference value may be derived using a reference pixel outside the current block. On the other hand, when lossless encoding is applied, as in the example shown in FIG. 21B, a difference value can be derived using neighboring pixels in the current block.
  • the decoder may derive an escape value for the current pixel by summing the difference value and the prediction value. For example, when the difference value is derived using a reference pixel outside the current block, the escape value may be derived by adding the difference value decoded from the bitstream and the upper or left reference pixel. On the other hand, when the difference value is encoded using a pixel inside the current block, the escape value may be derived by adding the difference value decoded from the bitstream and the value of the reconstructed neighboring pixel (e.g., the upper or right neighboring pixel), as sketched below.
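  • A minimal sketch of the two reconstruction paths, assuming the outside-reference path is used for lossy coding and the in-block neighbor path for lossless coding, as described above (names are hypothetical):

      def reconstruct_escape(decoded_diff, lossless, ref_pixel, neighbor_pixel):
          # Lossy path: prediction from a reference pixel outside the block
          # (e.g., the upper or left reference pixel).
          if not lossless:
              return decoded_diff + ref_pixel
          # Lossless path: prediction from an already reconstructed pixel
          # inside the block (e.g., the upper or right neighboring pixel).
          return decoded_diff + neighbor_pixel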
  • a prediction value for an escape value may be generated using a block vector.
  • a reference block may be specified through the block vector.
  • a value of a pixel at the same position as a current pixel to which an escape value is assigned in the reference block may be set as the predicted value.
  • a difference value may be derived by subtracting A5, the pixel at the same position within the reference block, from the value A of the pixel to which the escape value is assigned.
  • the difference value may be encoded as it is, or the difference value may be quantized and the quantized difference value encoded.
  • Whether to generate a prediction value for an escape value by using the block vector may be determined based on whether lossless coding is applied. For example, only when lossless encoding is applied to the current image, a prediction value for an escape value may be generated using a block vector.
  • information specifying a method of generating a prediction value for an escape value may be signaled through a bitstream.
  • the information may specify at least one of an intra prediction mode use mode and a block vector use mode.
  • the intra prediction mode or block vector of the current block may be derived with reference to a neighboring block adjacent to the current block.
  • the neighboring block may include at least one of an upper neighboring block, a left neighboring block, an upper left neighboring block, an upper right neighboring block, or a lower left neighboring block of the current block.
  • RT indicates a pixel located at the upper right of the block
  • LB indicates a pixel located at the bottom left of the block.
  • a block vector candidate list may be constructed by searching neighboring blocks according to a predefined priority. For example, the neighboring blocks may be searched in the order of: an upper neighboring block including the pixel at position A, a left neighboring block including the pixel at position L, an upper right neighboring block including the pixel at position e, a lower left neighboring block including the pixel at position i, and an upper left neighboring block including the pixel at position a.
  • a block vector candidate list may be constructed based on the block vector of the searched block.
  • index information specifying one of the plurality of block vector candidates may be encoded and signaled.
  • the block vector of the first found block (i.e., the first block encoded using a block vector) may be set as the block vector of the current block.
  • the escape mode is applied to all pixels. That is, when lossless encoding is applied, the palette escape value may be encoded and signaled for each pixel regardless of the size of the palette table.
  • the palette index may be encoded/decoded for each pixel.
  • Information indicating which of the above two palette encoding methods is applied may be encoded and signaled. For example, when lossless encoding is applied, a flag specifying one of the above two palette encoding methods may be further encoded and signaled.
  • one of the palette encoding methods may be adaptively selected based on at least one of a color format, a color component, a size of a current block, or a shape of the current block.
  • information indicating the encoding mode of the escape value may be additionally encoded.
  • the information may indicate at least one of a mode for encoding escape values as they are, a mode for encoding escape values using intra prediction, and a mode for encoding escape values using block vectors.
  • the palette encoding method in which the escape mode is applied to all pixels can be applied only when lossless encoding is applied.
  • a palette encoding method in which an escape mode is applied to all pixels may be applied.
  • the palette encoding method may be determined by information signaled through the bitstream, or may be determined based on at least one of a color component, a color format, the size of the current block, or the shape of the current block.
  • the size of the palette table may be set differently.
  • for example, when lossy coding is applied, the size of the palette table may be set to M, and when lossless coding is applied, the size of the palette table may be set to N.
  • M and N may have different values.
  • N may be a natural number greater than M.
  • the values of M and N may be fixed in the encoder and the decoder.
  • information for determining the values of M and N may be encoded and signaled through the upper header.
  • information indicating the size M of the palette table when lossy encoding is applied and information indicating the size N of the palette table when lossless encoding is applied may be each encoded and signaled.
  • a relationship between the size M of the palette table when lossy coding is applied and the size N of the palette table when lossless coding is applied may be predefined in the encoder and decoder.
  • information indicating a difference value between two variables may be encoded.
  • information indicating a difference between the size N of the palette table when lossless coding is applied and the size M of the palette table when lossy coding is applied may be encoded and signaled.
  • the minimum size of the palette table may be encoded and signaled.
  • the higher level header indicates a parameter set commonly referenced by a plurality of lower levels.
  • the upper level is any one of a coding tree unit, a slice, a tile, a subpicture, a picture, or a sequence, and the lower level is a processing unit (e.g., a block, a coding tree unit, a slice, a tile, a subpicture, or a picture) having a size smaller than that of the upper level.
  • the size of the palette table for the current block may be determined as the value obtained by adding a difference value decoded with respect to the current block to the minimum size determined at the higher level. That is, when the difference value is encoded/decoded at the block level, a palette table having a different size for each block may be used (see the sketch below).
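  • A trivial sketch of this size derivation (names are hypothetical):

      def palette_table_size(minimum_size, block_delta):
          # Higher-level minimum size plus the block-level difference,
          # giving a potentially different table size per block.
          return minimum_size + block_delta

      print(palette_table_size(4, 3))  # -> 7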
  • the configuration of the palette table is determined based on whether lossy coding or lossless coding is applied.
  • that the configuration between the palette tables is different indicates that at least one of the size of each of the palette tables or the method of encoding the escape values of each of the palette tables is different.
  • the configuration of the palette table may be adaptively selected according to a color component or a color format.
  • a palette table composed of M palette entries may be used for a luminance component block
  • a palette table composed of N palette entries may be used for a chrominance component block.
  • depending on the color format, it may be determined whether to configure the palette table individually for each color component. For example, when the color format is 4:2:0, whether to configure an integrated palette table may be determined according to the tree structure between the luminance component and the chrominance component.
  • an integrated palette table applied to both the luminance component and the chrominance component may be used.
  • the integrated palette table indicates that the palette table of the luminance component and the palette table of the chrominance component are identical to each other.
  • the palette table can be configured independently between the luminance component and the chrominance component. At least one of a size of the palette table and an encoding mode of an escape value may be different between the palette table of the luminance component and the palette table of the chrominance component.
  • each of the components (e.g., units, modules, etc.) constituting the block diagram in the above-described embodiments may be implemented as a hardware device or software, or a plurality of components may be combined and implemented as one hardware device or software.
  • the above-described embodiment may be implemented in the form of program instructions that can be executed through various computer components and recorded in a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • the hardware device may be configured to operate as one or more software modules to perform processing according to the present disclosure, and vice versa.
  • the present disclosure may be used to encode/decode a video signal.

Abstract

According to the invention, an image decoding method comprises the steps of: determining whether lossless coding is applied; when lossless coding is applied, determining whether palette prediction is applied to a current block; configuring a palette table for the current block; and reconstructing pixels in the current block on the basis of the palette table.
PCT/KR2021/003725 2020-03-25 2021-03-25 Procédé et dispositif de traitement de signal vidéo WO2021194283A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0036399 2020-03-25
KR20200036399 2020-03-25

Publications (1)

Publication Number Publication Date
WO2021194283A1 true WO2021194283A1 (fr) 2021-09-30

Family

ID=77892052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/003725 WO2021194283A1 (fr) 2020-03-25 2021-03-25 Procédé et dispositif de traitement de signal vidéo

Country Status (2)

Country Link
KR (1) KR102589351B1 (fr)
WO (1) WO2021194283A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160138102A (ko) * 2014-03-26 2016-12-02 퀄컴 인코포레이티드 비디오 코딩에서 팔레트 코딩된 블록들의 팔레트 사이즈, 팔레트 엔트리들 및 필터링의 결정
US20170230667A1 (en) * 2016-02-08 2017-08-10 Canon Kabushiki Kaisha Encoder optimizations for palette lossless encoding of content with subsampled colour component
US20180288415A1 (en) * 2015-06-09 2018-10-04 Microsoft Technology Licensing, Llc Robust encoding/decoding of escape-coded pixels in palette mode
KR20190041549A (ko) * 2014-10-06 2019-04-22 캐논 가부시끼가이샤 팔레트 모드를 사용하는 개선된 인코딩 프로세스

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11259020B2 (en) * 2013-04-05 2022-02-22 Qualcomm Incorporated Determining palettes in palette-based video coding
US9924175B2 (en) * 2014-06-11 2018-03-20 Qualcomm Incorporated Determining application of deblocking filtering to palette coded blocks in video coding

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160138102A (ko) * 2014-03-26 2016-12-02 퀄컴 인코포레이티드 비디오 코딩에서 팔레트 코딩된 블록들의 팔레트 사이즈, 팔레트 엔트리들 및 필터링의 결정
KR20190041549A (ko) * 2014-10-06 2019-04-22 캐논 가부시끼가이샤 팔레트 모드를 사용하는 개선된 인코딩 프로세스
US20180288415A1 (en) * 2015-06-09 2018-10-04 Microsoft Technology Licensing, Llc Robust encoding/decoding of escape-coded pixels in palette mode
US20170230667A1 (en) * 2016-02-08 2017-08-10 Canon Kabushiki Kaisha Encoder optimizations for palette lossless encoding of content with subsampled colour component

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
X. XIU (KWAI), H.-J. JHU (KWAI), Y.-W. CHEN (KWAI), T.-C. MA (KWAI), X. WANG (KWAI): "Non-CE2: On signaling of maximum palette size and maximum palette predictor size", 17. JVET MEETING; 20200107 - 20200117; BRUSSELS; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 1 January 2020 (2020-01-01), XP030223813 *

Also Published As

Publication number Publication date
KR102589351B1 (ko) 2023-10-16
KR20210119916A (ko) 2021-10-06

Similar Documents

Publication Publication Date Title
WO2017222326A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018026219A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018030599A1 (fr) Procédé de traitement d'image fondé sur un mode de prédiction intra et dispositif associé
WO2017176030A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018056703A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2019225993A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018097626A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018070742A1 (fr) Dispositif et procédé de codage et de décodage d'image, et support d'enregistrement dans lequel le flux binaire est stocké
WO2012043989A2 (fr) Procédé destiné à diviser un bloc et dispositif de décodage
WO2018212579A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2020050685A1 (fr) Procédé et dispositif de codage/décodage d'image à l'aide d'une prédiction intra
WO2019190201A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2020096428A1 (fr) Procédé de codage/décodage d'un signal d'image et dispositif pour cette technologie
WO2018124333A1 (fr) Procédé de traitement d'image basé sur un mode de prédiction intra et appareil s'y rapportant
WO2019182295A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018066958A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018056701A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2016190627A1 (fr) Procédé et dispositif pour traiter un signal vidéo
WO2017069505A1 (fr) Procédé de codage/décodage d'image et dispositif correspondant
WO2019221465A1 (fr) Procédé/dispositif de décodage d'image, procédé/dispositif de codage d'image et support d'enregistrement dans lequel un train de bits est stocké
WO2020096427A1 (fr) Procédé de codage/décodage de signal d'image et appareil associé
WO2018062950A1 (fr) Procédé de traitement d'image et appareil associé
WO2020171681A1 (fr) Procédé et dispositif de traitement de signal vidéo sur la base de l'intraprédiction
WO2018174457A1 (fr) Procédé de traitement des images et dispositif associé
WO2020180166A1 (fr) Procédé et appareil de codage/décodage d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21776704

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21776704

Country of ref document: EP

Kind code of ref document: A1