WO2024136481A1 - Image encoding/decoding method and device, and recording medium on which a bitstream is stored - Google Patents
Image encoding/decoding method and device, and recording medium on which a bitstream is stored
- Publication number
- WO2024136481A1 (PCT/KR2023/021164)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- nspt
- transformation
- current block
- block
- kernel
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 120
- 230000002441 reversible effect Effects 0.000 claims description 26
- 238000006243 chemical reaction Methods 0.000 abstract description 43
- 230000009466 transformation Effects 0.000 description 400
- 239000011159 matrix material Substances 0.000 description 97
- 239000013598 vector Substances 0.000 description 78
- 239000000523 sample Substances 0.000 description 65
- 230000008569 process Effects 0.000 description 39
- 238000001914 filtration Methods 0.000 description 25
- 238000013139 quantization Methods 0.000 description 24
- 238000013507 mapping Methods 0.000 description 16
- 230000006835 compression Effects 0.000 description 13
- 238000007906 compression Methods 0.000 description 13
- 230000011664 signaling Effects 0.000 description 10
- 238000005516 engineering process Methods 0.000 description 9
- 230000002123 temporal effect Effects 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 6
- 230000003044 adaptive effect Effects 0.000 description 5
- 238000004891 communication Methods 0.000 description 5
- 230000001965 increasing effect Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 230000014509 gene expression Effects 0.000 description 4
- 230000006872 improvement Effects 0.000 description 3
- 238000000844 transformation Methods 0.000 description 3
- 230000006978 adaptation Effects 0.000 description 2
- 230000002146 bilateral effect Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000009977 dual effect Effects 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 238000003709 image segmentation Methods 0.000 description 2
- 230000001939 inductive effect Effects 0.000 description 2
- 230000001788 irregular Effects 0.000 description 2
- 239000013074 reference sample Substances 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 238000005056 compaction Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 230000008707 rearrangement Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000000638 solvent extraction Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
Definitions
- the present invention relates to a video encoding/decoding method and device, and a recording medium storing a bitstream.
- HD High Definition
- UHD Ultra High Definition
- Image compression technologies include inter prediction technology, which predicts pixel values in the current picture from pictures before or after the current picture,
- intra prediction technology, which predicts pixel values in the current picture using pixel information within the current picture,
- and entropy coding technology, which assigns short codes to values with a high frequency of occurrence and long codes to values with a low frequency of occurrence. Using these video compression technologies, video data can be effectively compressed and transmitted or stored.
- the present disclosure seeks to provide a method and apparatus for performing transformation using non-separable linear transformation.
- the present disclosure seeks to provide a method and apparatus for performing transformation using a reduced-dimensional non-separable primary transform kernel.
- the present disclosure seeks to provide a method and device for determining a non-separable primary transform kernel based on encoding parameters.
- the video decoding method and device according to the present disclosure may derive transform coefficients of the current block from a bitstream, determine a non-separable primary transform (NSPT) set for the current block, derive residual samples of the current block by applying an inverse NSPT to at least one of the transform coefficients of the current block based on the NSPT set, and restore the current block based on the residual samples of the current block.
- NSPT non-separable primary transform
- the inverse NSPT may be applied based on whether the size of the current block belongs to one or more groups of block sizes to which the NSPT can be applied.
- the NSPT set for the inverse NSPT may be determined to be one of 35 pre-defined NSPT sets.
- each of the 35 NSPT sets may include 3 NSPT kernel candidates.
- the group may include at least one of 4x4, 4x8, 8x4, 8x8, 16x8, 8x16, 16x4, or 4x16.
- the number of transform coefficients to which the inverse NSPT is applied may be 24.
- the video encoding method and device according to the present disclosure may derive residual samples of the current block, determine a non-separable primary transform (NSPT) set for the current block, derive transform coefficients of the current block by applying the NSPT to the residual samples of the current block based on the NSPT set, and encode the transform coefficients of the current block to generate a bitstream.
- the NSPT may be applied based on whether the size of the current block belongs to one or more groups of block sizes to which the NSPT can be applied.
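- As a rough illustration of the reduced-dimensional NSPT described above, the sketch below applies a forward NSPT to a 4x8 residual block on the encoder side and an inverse NSPT on the decoder side. The 4x8 block size, the 24 kept coefficients, and the 35 sets of 3 kernel candidates follow the embodiments, but the kernel values are random placeholders rather than the pre-defined NSPT kernels, and using the kernel transpose as the inverse is only an assumption for this sketch.

```python
import numpy as np

# Illustrative sketch only: dimensions follow the embodiment (a 4x8 block,
# 24 kept coefficients, 35 sets x 3 kernel candidates), but the kernel
# values here are random placeholders, not trained/pre-defined NSPT kernels.
BLOCK_W, BLOCK_H = 8, 4          # a 4x8 block -> 32 residual samples
NUM_COEFFS = 24                  # reduced number of NSPT coefficients
NUM_SETS, NUM_CANDIDATES = 35, 3

rng = np.random.default_rng(0)

def make_kernel():
    # 24x32 matrix with orthonormal rows, so its transpose can stand in for
    # the inverse (reverse) NSPT in this sketch.
    q, _ = np.linalg.qr(rng.standard_normal((BLOCK_W * BLOCK_H, NUM_COEFFS)))
    return q.T                                        # shape (24, 32)

kernels = [[make_kernel() for _ in range(NUM_CANDIDATES)] for _ in range(NUM_SETS)]

def forward_nspt(residual_block, set_idx, cand_idx):
    """Encoder side: flatten the residual block and keep 24 coefficients."""
    x = residual_block.reshape(-1)                    # row-first vectorisation
    return kernels[set_idx][cand_idx] @ x             # 24 transform coefficients

def inverse_nspt(coeffs, set_idx, cand_idx):
    """Decoder side: inverse NSPT maps 24 coefficients back to 32 samples."""
    x_hat = kernels[set_idx][cand_idx].T @ coeffs
    return x_hat.reshape(BLOCK_H, BLOCK_W)

residual = rng.standard_normal((BLOCK_H, BLOCK_W))
coeffs = forward_nspt(residual, set_idx=0, cand_idx=0)
reconstructed = inverse_nspt(coeffs, set_idx=0, cand_idx=0)
print(coeffs.shape, reconstructed.shape)              # (24,) (4, 8)
```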
- a computer-readable digital storage medium storing encoded video/image information that causes performing an image decoding method by a decoding device according to the present disclosure is provided.
- a computer-readable digital storage medium storing video/image information generated according to the image encoding method according to the present disclosure is provided.
- a method and device for transmitting video/image information generated according to the video encoding method according to the present disclosure are provided.
- the present disclosure can improve transform performance by using a non-separable primary transform as the primary transform.
- the present disclosure can improve transform performance by performing the transform using a reduced-dimensional non-separable primary transform kernel.
- the present disclosure can improve coding efficiency by effectively determining or signaling a non-separable primary transform kernel based on coding parameters such as intra prediction mode and block size/shape.
- FIG. 1 shows a video/image coding system according to the present disclosure.
- Figure 2 shows a schematic block diagram of an encoding device to which an embodiment of the present disclosure can be applied and encoding of video/image signals is performed.
- Figure 3 shows a schematic block diagram of a decoding device to which an embodiment of the present disclosure can be applied and decoding of video/image signals is performed.
- FIG. 4 illustrates an image decoding method performed by the decoding device 300 as an embodiment according to the present disclosure.
- Figure 5 exemplarily shows an intra prediction mode and its prediction direction according to the present disclosure.
- FIG. 6 shows a schematic configuration of a decoding device 300 that performs the video decoding method according to the present disclosure.
- FIG. 7 illustrates an image encoding method performed by the encoding device 200 as an embodiment according to the present disclosure.
- FIG. 8 shows a schematic configuration of an encoding device 200 that performs the video encoding method according to the present disclosure.
- Figure 9 shows an example of a content streaming system to which embodiments of the present disclosure can be applied.
- first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The above terms are used only for the purpose of distinguishing one component from another. For example, a first component may be referred to as a second component, and similarly, the second component may be referred to as a first component without departing from the scope of the present disclosure.
- the term and/or includes any of a plurality of related stated items or a combination of a plurality of related stated items.
- This disclosure relates to video/image coding.
- the method/embodiment disclosed herein may be applied to the method disclosed in the versatile video coding (VVC) standard.
- VVC versatile video coding
- the methods/embodiments disclosed in this specification may also be applied to methods disclosed in the EVC (essential video coding) standard, the AV1 (AOMedia Video 1) standard, the AVS2 (2nd generation of audio video coding standard), or a next-generation video/image coding standard (e.g., H.267 or H.268).
- video may mean a set of a series of images over time.
- a picture generally refers to a unit representing one image in a specific time period, and a slice/tile is a unit that forms part of a picture in coding.
- a slice/tile may contain one or more coding tree units (CTUs).
- CTUs coding tree units
- One picture may consist of one or more slices/tiles.
- One tile is a rectangular area composed of a plurality of CTUs within a specific tile column and a specific tile row of one picture.
- a tile column is a rectangular area of CTUs with a height equal to the height of the picture and a width specified by the syntax requirements of the picture parameter set.
- a tile row is a rectangular area of CTUs with a height specified by a picture parameter set and a width equal to the width of the picture. While CTUs within one tile are sequentially arranged according to the CTU raster scan, tiles within one picture may be arranged continuously according to the raster scan of the tile.
- One slice may contain an integer number of complete tiles or an integer number of consecutive complete CTU rows within a tile of a picture that may be contained exclusively in a single NAL unit. Meanwhile, one picture may be divided into two or more subpictures.
- a subpicture may be a rectangular area of one or more slices within a picture.
- a pixel or pel may refer to the minimum unit constituting one picture (or image). Additionally, 'sample' may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or a pixel value, and may represent only a pixel/pixel value of a luminance (luma) component, or only a pixel/pixel value of a chroma component.
- a unit may represent the basic unit of image processing.
- a unit may include at least one of a specific area of a picture and information related to the area.
- One unit may include one luma block and two chroma (ex. cb, cr) blocks.
- unit may be used interchangeably with terms such as block or area.
- an MxN block may include a set (or array) of samples (or a sample array) or transform coefficients consisting of M columns and N rows.
- a or B may mean “only A,” “only B,” or “both A and B.” In other words, in this specification, “A or B” may be interpreted as “A and/or B.”
- "A, B or C" may mean "only A," "only B," "only C," or "any combination of A, B and C."
- the slash (/) or comma used in this specification may mean “and/or.”
- A/B can mean “A and/or B.”
- A/B can mean “only A,” “only B,” or “both A and B.”
- A, B, C can mean “A, B, or C.”
- At least one of A and B may mean “only A,” “only B,” or “both A and B.”
- the expression "at least one of A or B" or "at least one of A and/or B" may be interpreted the same as "at least one of A and B."
- "At least one of A, B and C" may mean "only A," "only B," "only C," or "any combination of A, B and C." Also, "at least one of A, B or C" or "at least one of A, B and/or C" may mean "at least one of A, B and C."
- parentheses used in this specification may mean “for example.” Specifically, when “prediction (intra prediction)” is displayed, “intra prediction” may be proposed as an example of “prediction.” In other words, “prediction” in this specification is not limited to “intra prediction,” and “intra prediction” may be proposed as an example of “prediction.” Additionally, even when “prediction (i.e., intra prediction)” is indicated, “intra prediction” may be proposed as an example of “prediction.”
- FIG. 1 shows a video/image coding system according to the present disclosure.
- a video/image coding system may include a first device (source device) and a second device (receiving device).
- the source device can transmit encoded video/image information or data in file or streaming form to a receiving device through a digital storage medium or network.
- the source device may include a video source, an encoding device, and a transmission unit.
- the receiving device may include a receiving unit, a decoding device, and a renderer.
- the encoding device may be called a video/image encoding device, and the decoding device may be called a video/image decoding device.
- a transmitter may be included in the encoding device.
- a receiver may be included in the decoding device.
- the renderer may include a display unit, and the display unit may be composed of a separate device or external component.
- a video source can acquire video/image through the process of capturing, compositing, or creating video/image.
- a video source may include a video/image capture device and/or a video/image generation device.
- a video/image capture device may include one or more cameras, a video/image archive containing previously captured video/image, etc.
- Video/image generating devices may include computers, tablets, and smartphones, and are capable of (electronically) generating video/images. For example, a virtual video/image may be created through a computer, etc., and in this case, the video/image capture process may be replaced by the process of generating related data.
- the encoding device can encode input video/image.
- the encoding device can perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency.
- Encoded data (encoded video/image information) may be output in the form of a bitstream.
- the transmitting unit may transmit the encoded video/image information or data output in the form of a bitstream to the receiving unit of the receiving device through a digital storage medium or network in the form of a file or streaming.
- Digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the transmission unit may include elements for creating a media file through a predetermined file format and may include elements for transmission through a broadcasting/communication network.
- the receiving unit may receive/extract the bitstream and transmit it to the decoding device.
- the decoding device can decode the video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction that correspond to the operation of the encoding device.
- the renderer can render the decoded video/image.
- the rendered video/image may be displayed through the display unit.
- Figure 2 shows a schematic block diagram of an encoding device to which an embodiment of the present disclosure can be applied and encoding of video/image signals is performed.
- the encoding device 200 may be configured to include an image partitioner (210), a predictor (220), a residual processor (230), an entropy encoder (240), an adder (250), a filter (260), and a memory (270).
- the prediction unit 220 may include an inter prediction unit 221 and an intra prediction unit 222.
- the residual processing unit 230 may include a transformer 232, a quantizer 233, a dequantizer 234, and an inverse transformer 235.
- the residual processing unit 230 may further include a subtractor 231.
- the adder 250 may be called a reconstructor or a reconstructed block generator.
- the above-described image segmentation unit 210, prediction unit 220, residual processing unit 230, entropy encoding unit 240, addition unit 250, and filtering unit 260 may be configured by one or more hardware components (e.g., an encoding device chipset or processor) depending on the embodiment. Additionally, the memory 270 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium. The hardware component may further include the memory 270 as an internal/external component.
- DPB decoded picture buffer
- the image segmentation unit 210 may divide an input image (or picture, frame) input to the encoding device 200 into one or more processing units.
- the processing unit may be called a coding unit (CU).
- the coding unit may be split recursively from the coding tree unit (CTU) or the largest coding unit (LCU) according to the QTBTTT (Quad-tree binary-tree ternary-tree) structure.
- QTBTTT Quad-tree binary-tree ternary-tree
- one coding unit may be divided into a plurality of coding units with deeper depth based on a quad tree structure, binary tree structure, and/or ternary structure.
- the quad tree structure may be applied first and the binary tree structure and/or ternary structure may be applied later.
- the binary tree structure may be applied before the quad tree structure.
- the coding procedure according to the present specification can be performed based on the final coding unit that is no longer divided. In this case, based on coding efficiency according to video characteristics, the largest coding unit can be directly used as the final coding unit, or, if necessary, the coding unit can be recursively divided into coding units of lower depth, so that a coding unit of the optimal size can be used as the final coding unit.
- the coding procedure may include procedures such as prediction, transformation, and restoration, which will be described later.
- the processing unit may further include a prediction unit (PU) or a transform unit (TU).
- the prediction unit and the transform unit may each be divided or partitioned from the final coding unit described above.
- the prediction unit may be a unit of sample prediction
- the transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
- an MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows.
- a sample may generally represent a pixel or a pixel value, and may represent only a pixel/pixel value of a luminance (luma) component, or only a pixel/pixel value of a chroma component.
- a sample may be used as a term corresponding to a pixel or pel of one picture (or video).
- the encoding device 200 may subtract the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 221 or the intra prediction unit 222 from the input image signal (original block, original sample array) to generate a residual signal (residual block, residual sample array), and the generated residual signal is transmitted to the transform unit 232.
- the unit that subtracts the prediction signal (prediction block, prediction sample array) from the input image signal (original block, original sample array) within the encoding device 200 may be called the subtraction unit 231.
- the prediction unit 220 may perform prediction on a block to be processed (hereinafter referred to as a current block) and generate a predicted block including prediction samples for the current block.
- the prediction unit 220 may determine whether intra prediction or inter prediction is applied on a current block or CU basis.
- the prediction unit 220 may generate various information related to prediction, such as prediction mode information, and transmit it to the entropy encoding unit 240, as will be described later in the description of each prediction mode.
- Information about prediction may be encoded in the entropy encoding unit 240 and output in the form of a bitstream.
- the intra prediction unit 222 can predict the current block by referring to samples within the current picture.
- the referenced samples may be located in the neighborhood of the current block, or may be located a certain distance away from the current block, depending on the prediction mode.
- prediction modes may include one or more non-directional modes and multiple directional modes.
- the non-directional mode may include at least one of DC mode or planar mode.
- the directional mode may include 33 directional modes or 65 directional modes depending on the level of detail of the predicted direction. However, this is an example and more or less directional modes may be used depending on the setting.
- the intra prediction unit 222 may determine the prediction mode applied to the current block using the prediction mode applied to the neighboring block.
- the inter prediction unit 221 may derive a prediction block for the current block based on a reference block (reference sample array) specified by a motion vector in the reference picture.
- motion information can be predicted in blocks, subblocks, or samples based on the correlation of motion information between neighboring blocks and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction information (L0 prediction, L1 prediction, Bi prediction, etc.).
- neighboring blocks may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- a reference picture including the reference block and a reference picture including the temporal neighboring block may be the same or different.
- the temporal neighboring block may be called a collocated reference block, a collocated reference block (colCU), etc.
- a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
- the inter prediction unit 221 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of skip mode and merge mode, the inter prediction unit 221 may use motion information of neighboring blocks as motion information of the current block.
- in the case of motion vector prediction (MVP) mode, the motion vectors of neighboring blocks are used as motion vector predictors, and the motion vector of the current block can be indicated by signaling a motion vector difference.
- MVP motion vector prediction
- the prediction unit 220 may generate a prediction signal based on various prediction methods described later.
- the prediction unit can not only apply intra prediction or inter prediction for prediction of one block, but also can apply intra prediction and inter prediction simultaneously. This can be called combined inter and intra prediction (CIIP) mode.
- the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block.
- IBC prediction mode or palette mode can be used for video/image coding of content such as games, for example, screen content coding (SCC).
- SCC screen content coding
- IBC basically performs prediction within the current picture, but can be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC can use at least one of the inter prediction techniques described in this specification.
- Palette mode can be viewed as an example of intra coding or intra prediction.
- sample values within a picture can be signaled based on information about the palette table and palette index.
- the prediction signal generated through the prediction unit 220 may be used to generate a restored signal or a residual signal.
- the transform unit 232 may generate transform coefficients by applying a transform technique to the residual signal.
- the transform technique may include at least one of Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve Transform (KLT), Graph-Based Transform (GBT), or Conditionally Non-linear Transform (CNT).
- DCT Discrete Cosine Transform
- DST Discrete Sine Transform
- KLT Karhunen-Loeve Transform
- GBT Graph-Based Transform
- CNT Conditionally Non-linear Transform
- GBT refers to the transform obtained from a graph when the relationship information between pixels is expressed as a graph.
- CNT refers to a transform that is obtained based on a prediction signal generated using all previously reconstructed pixels.
- the transform process may be applied to square pixel blocks of the same size, or may be applied to non-square blocks of variable size.
- the quantization unit 233 quantizes the transform coefficients and transmits them to the entropy encoding unit 240, and the entropy encoding unit 240 encodes the quantized signal (information about the quantized transform coefficients) and outputs it as a bitstream. Information about the quantized transform coefficients may be called residual information.
- the quantization unit 233 may rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on the coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
- the entropy encoding unit 240 may perform various encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
- the entropy encoding unit 240 may encode information necessary for video/image restoration (eg, values of syntax elements, etc.) in addition to the quantized transformation coefficients together or separately.
- Encoded information may be transmitted or stored in bitstream form in units of NAL (network abstraction layer) units.
- the video/image information may further include information about various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). Additionally, the video/image information may further include general constraint information.
- APS adaptation parameter set
- PPS picture parameter set
- SPS sequence parameter set
- VPS video parameter set
- information and/or syntax elements transmitted/signaled from an encoding device to a decoding device may be included in video/image information.
- the video/image information may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream can be transmitted over a network or stored in a digital storage medium.
- the network may include a broadcasting network and/or a communication network
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- a transmission unit (not shown) that transmits the signal output from the entropy encoding unit 240 and/or a storage unit (not shown) that stores the signal may be configured as internal/external elements of the encoding device 200, or the transmission unit may be included in the entropy encoding unit 240.
- Quantized transform coefficients output from the quantization unit 233 can be used to generate a prediction signal. For example, a residual signal (residual block or residual samples) can be reconstructed by applying inverse quantization and inverse transform to the quantized transform coefficients through the inverse quantization unit 234 and the inverse transform unit 235.
- the adder 250 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 221 or the intra prediction unit 222. If there is no residual for the block to be processed, such as when skip mode is applied, the predicted block can be used as a reconstruction block.
- the addition unit 250 may be called a restoration unit or a restoration block generation unit.
- the generated reconstructed signal can be used for intra prediction of the next processing target block in the current picture, and can also be used for inter prediction of the next picture after filtering, as will be described later.
- LMCS luma mapping with chroma scaling
- the filtering unit 260 can improve subjective/objective image quality by applying filtering to the restored signal.
- the filtering unit 260 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and may store the modified reconstructed picture in the memory 270, specifically in the DPB of the memory 270.
- the various filtering methods may include deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, etc.
- the filtering unit 260 may generate various information regarding filtering and transmit it to the entropy encoding unit 240. Information about filtering may be encoded in the entropy encoding unit 240 and output in the form of a bitstream.
- the modified reconstructed picture transmitted to the memory 270 can be used as a reference picture in the inter prediction unit 221.
- when inter prediction is applied, the encoding device can thereby avoid prediction mismatch between the encoding device 200 and the decoding device, and can also improve encoding efficiency.
- the DPB of the memory 270 can store the modified reconstructed picture to use it as a reference picture in the inter prediction unit 221.
- the memory 270 may store motion information of a block from which motion information in the current picture is derived (or encoded) and/or motion information of blocks in an already reconstructed picture.
- the stored motion information can be transmitted to the inter prediction unit 221 to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- the memory 270 can store reconstructed samples of reconstructed blocks in the current picture and transmit them to the intra prediction unit 222.
- Figure 3 shows a schematic block diagram of a decoding device to which an embodiment of the present disclosure can be applied and decoding of video/image signals is performed.
- the decoding device 300 may be configured to include an entropy decoder (310), a residual processor (320), a predictor (330), an adder (340), a filter (350), and a memory (360).
- the prediction unit 330 may include an inter prediction unit 332 and an intra prediction unit 331.
- the residual processing unit 320 may include a dequantizer (321) and an inverse transformer (322).
- the entropy decoding unit 310, residual processing unit 320, prediction unit 330, addition unit 340, and filtering unit 350 may be configured by one hardware component (e.g., a decoding device chipset or processor) depending on the embodiment.
- the memory 360 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium.
- the hardware component may further include a memory 360 as an internal/external component.
- the decoding device 300 may restore the image in response to the process in which the video/image information is processed in the encoding device of FIG. 2.
- the decoding device 300 may derive units/blocks based on block division-related information obtained from the bitstream.
- the decoding device 300 may perform decoding using a processing unit applied in the encoding device.
- the processing unit of decoding may be a coding unit, and the coding unit may be divided from a coding tree unit or a maximum coding unit according to a quad tree structure, binary tree structure, and/or ternary tree structure.
- One or more transformation units can be derived from a coding unit.
- the restored video signal decoded and output through the decoding device 300 can be played through a playback device.
- the decoding device 300 may receive a signal output from the encoding device of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoding unit 310.
- the entropy decoder 310 may parse the bitstream to derive information (e.g. video/picture information) necessary for image restoration (or picture restoration).
- the video/image information may further include information about various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). Additionally, the video/image information may further include general constraint information.
- the decoding device may decode the picture further based on the information about the parameter set and/or the general restriction information.
- Signaled/received information and/or syntax elements described later in this specification may be obtained from the bitstream by being decoded through the decoding procedure.
- the entropy decoding unit 310 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and can output the values of syntax elements required for image reconstruction and the quantized values of transform coefficients for the residual.
- the CABAC entropy decoding method receives a bin corresponding to each syntax element from the bitstream, determines a context model using the syntax element information to be decoded, the decoding information of surrounding blocks and the block to be decoded, or the information of symbols/bins decoded in the previous step, predicts the probability of occurrence of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- the CABAC entropy decoding method can update the context model using information on the decoded symbol/bin for the context model of the next symbol/bin after determining the context model.
- the residual processing unit 320 may derive a residual signal (residual block, residual samples, residual sample array). Additionally, information about filtering among the information decoded by the entropy decoding unit 310 may be provided to the filtering unit 350. Meanwhile, a receiving unit (not shown) that receives the signal output from the encoding device may be further configured as an internal/external element of the decoding device 300, or the receiving unit may be a component of the entropy decoding unit 310.
- the decoding device may be called a video/image/picture decoding device, and the decoding device may also be classified into an information decoding device (video/image/picture information decoding device) and a sample decoding device (video/image/picture sample decoding device).
- the information decoding device may include the entropy decoding unit 310, and the sample decoding device may include at least one of the inverse quantization unit 321, the inverse transform unit 322, the adder 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
- the inverse quantization unit 321 may inversely quantize the quantized transform coefficients and output the transform coefficients.
- the inverse quantization unit 321 may rearrange the quantized transform coefficients into a two-dimensional block form. In this case, the rearrangement may be performed based on the coefficient scan order performed in the encoding device.
- the inverse quantization unit 321 may perform inverse quantization on quantized transform coefficients using quantization parameters (eg, quantization step size information) and obtain transform coefficients.
- the inverse transform unit 322 inversely transforms the transform coefficients to obtain a residual signal (residual block, residual sample array).
- the prediction unit 320 may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
- the prediction unit 320 may determine whether intra prediction or inter prediction is applied to the current block based on the information about the prediction output from the entropy decoding unit 310, and may determine a specific intra/inter prediction mode.
- the prediction unit 320 may generate a prediction signal based on various prediction methods described later. For example, the prediction unit 320 can not only apply intra prediction or inter prediction for prediction of one block, but also can apply intra prediction and inter prediction simultaneously. This can be called combined inter and intra prediction (CIIP) mode. Additionally, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block.
- IBC intra block copy
- IBC prediction mode or palette mode can be used for video/image coding of content such as games, for example, screen content coding (SCC). IBC basically performs prediction within the current picture, but can be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC can use at least one of the inter prediction techniques described in this specification.
- Palette mode can be viewed as an example of intra coding or intra prediction. When the palette mode is applied, information about the palette table and palette index may be included and signaled in the video/image information.
- the intra prediction unit 331 can predict the current block by referring to samples within the current picture.
- the referenced samples may be located in the neighborhood of the current block, or may be located a certain distance away from the current block, depending on the prediction mode.
- prediction modes may include one or more non-directional modes and multiple directional modes.
- the intra prediction unit 331 may determine the prediction mode applied to the current block using the prediction mode applied to the neighboring block.
- the inter prediction unit 332 may derive a prediction block for the current block based on a reference block (reference sample array) specified by a motion vector in the reference picture.
- motion information can be predicted on a block, subblock, or sample basis based on the correlation of motion information between neighboring blocks and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction information (L0 prediction, L1 prediction, Bi prediction, etc.).
- neighboring blocks may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- the inter prediction unit 332 may construct a motion information candidate list based on neighboring blocks and derive a motion vector and/or reference picture index of the current block based on the received candidate selection information. Inter prediction may be performed based on various prediction modes, and the information regarding the prediction may include information indicating the inter prediction mode for the current block.
- the adder 340 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331). If there is no residual for the block to be processed, such as when skip mode is applied, the prediction block can be used as a reconstruction block.
- the addition unit 340 may be called a restoration unit or a restoration block generation unit.
- the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, may be output after filtering as described later, or may be used for inter prediction of the next picture.
- LMCS luma mapping with chroma scaling
- the filtering unit 350 can improve subjective/objective image quality by applying filtering to the restored signal.
- the filtering unit 350 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and may transmit the modified reconstructed picture to the memory 360, specifically to the DPB of the memory 360.
- the various filtering methods may include deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, etc.
- the (corrected) reconstructed picture stored in the DPB of the memory 360 can be used as a reference picture in the inter prediction unit 332.
- the memory 360 may store motion information of a block from which motion information in the current picture is derived (or decoded) and/or motion information of blocks in an already restored picture.
- the stored motion information can be transmitted to the inter prediction unit 332 to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- the memory 360 can store reconstructed samples of reconstructed blocks in the current picture and transmit them to the intra prediction unit 331.
- the embodiments described for the filtering unit 260, the inter prediction unit 221, and the intra prediction unit 222 of the encoding device 200 may be applied in the same or corresponding manner to the filtering unit 350, the inter prediction unit 332, and the intra prediction unit 331 of the decoding device 300, respectively.
- Figure 4 illustrates an image decoding method performed by a decoding device as an embodiment according to the present disclosure.
- transform coefficients of the current block can be derived from the bitstream (S400). That is, the bitstream may include residual information of the current block, and the transform coefficient of the current block can be derived by decoding the residual information.
- residual samples of the current block can be derived by performing at least one of dequantization or inverse-transform on the transform coefficient of the current block (S410).
- inverse transformation may be performed based on at least one of DCT-2, DST-7, or DCT-8.
- DCT-2, DST-7, DCT-8, etc. may be called a transform type, transform kernel, or transform core.
- Inverse transform in the present disclosure may mean a separable transform. However, it is not limited to this; inverse transform may mean a non-separable transform, or may be a concept including both separable and non-separable transforms. Additionally, inverse transform in the present disclosure refers to the primary transform, but is not limited thereto, and may be applied in the same/similar form to the secondary transform.
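- As a minimal sketch of the distinction above, the snippet below contrasts a separable primary transform (1-D kernels applied along rows and columns) with a non-separable one (a single matrix applied to the flattened block). Floating-point DCT-2 matrices and the Kronecker-product kernel are illustrative stand-ins; real codecs use integer approximations and pre-defined/trained non-separable kernels.

```python
import numpy as np

# Sketch under simplifying assumptions: real codecs use integer DCT/DST
# approximations; here float DCT-2 matrices stand in for the separable case.
def dct2_matrix(n):
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def separable_transform(block, t_hor, t_ver):
    # 1-D transform along columns (vertical), then along rows (horizontal)
    return t_ver @ block @ t_hor.T

def non_separable_transform(block, t_full):
    # single matrix applied to the flattened block: horizontal and vertical
    # correlation is handled at once
    return t_full @ block.reshape(-1)

block = np.random.default_rng(1).standard_normal((4, 4))
t4 = dct2_matrix(4)
sep = separable_transform(block, t4, t4)        # 4x4 coefficient block
t16 = np.kron(t4, t4)                           # one example 16x16 kernel
nonsep = non_separable_transform(block, t16)    # 16 coefficients
# The Kronecker product reproduces the separable result; a general
# non-separable kernel need not factor this way.
print(np.allclose(sep.reshape(-1), nonsep))     # True
```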
- A non-separable transform may be used alone, may be used in addition to at least one of DCT-2, DST-7, or DCT-8, or may replace one or more of the DCT-2, DST-7, or DCT-8 transform kernels.
- For example, if the transform kernel candidates for the separable transform are (DCT-2, DCT-2), (DST-7, DST-7), (DCT-8, DST-7), (DST-7, DCT-8), and (DCT-8, DCT-8), a non-separable transform may replace, or be added to, one or more of the five transform kernel candidates.
- Here, a notation such as (transform 1, transform 2) indicates that transform 1 is applied in the horizontal direction and transform 2 is applied in the vertical direction. If the non-separable transform replaces some of the transform kernel candidates, the remaining transform kernel candidates other than (DCT-2, DCT-2) and (DST-7, DST-7) can be replaced with the non-separable transform.
- the transform kernel candidates above are only an example; other types of DCT and/or DST may be included, and transform skip may be included as a transform kernel candidate.
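- A hedged sketch of the candidate-list idea above: the pair notation and the rule of keeping (DCT-2, DCT-2) and (DST-7, DST-7) come from the description, while the helper name and the number of NSPT entries added here are illustrative assumptions.

```python
# Illustrative only: names are placeholders, not normative syntax. The
# separable pairs (horizontal, vertical) partly give way to NSPT kernels
# while (DCT-2, DCT-2) and (DST-7, DST-7) are kept.
SEPARABLE_CANDIDATES = [
    ("DCT-2", "DCT-2"),
    ("DST-7", "DST-7"),
    ("DCT-8", "DST-7"),
    ("DST-7", "DCT-8"),
    ("DCT-8", "DCT-8"),
]

def build_candidate_list(replace_with_nspt: bool):
    if not replace_with_nspt:
        return list(SEPARABLE_CANDIDATES)
    # keep the first two pairs, replace the remaining ones with NSPT kernels
    kept = SEPARABLE_CANDIDATES[:2]
    return kept + [("NSPT", f"kernel_{i}") for i in range(3)]

print(build_candidate_list(True))
```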
- Non-separable transformation may mean transformation or inverse transformation based on a non-separable transform matrix.
- a non-separable transform can perform the horizontal and vertical transforms at once.
- When the input data X is expressed in vector form, the vector X' can be expressed as a 16x1 vector obtained by arranging the 16 samples of X according to a predetermined order.
- In this case, the non-separable transform can be performed as in Equation 3 below.
- [Equation 3] F = T · X'
- In Equation 3, F represents the transform coefficient vector, T represents the 16x16 non-separable transform matrix, and · means the multiplication of the matrix and the vector.
- a 16x1 transform coefficient vector F can be derived through Equation 3, and F can be reconstructed into a 4x4 block according to a predetermined scan order.
- the scan order may be horizontal scan, vertical scan, diagonal scan, z scan, raster scan, or pre-defined scan.
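- The snippet below illustrates Equation 3 for a 4x4 block: the block is vectorized, multiplied by a 16x16 non-separable matrix, and the resulting 16 coefficients are rearranged into a 4x4 block following a scan order. The diagonal scan and the random matrix values are illustrative choices, not the pre-defined kernels.

```python
import numpy as np

# Sketch of Equation 3 above: F = T * X'. T is a 16x16 non-separable kernel
# (random placeholder here), X' is the 4x4 input block flattened to a 16x1
# vector, and F is reordered into a 4x4 block per a scan order (diagonal
# scan shown as one example).
rng = np.random.default_rng(0)
T = rng.standard_normal((16, 16))        # placeholder 16x16 NSPT matrix
X = rng.standard_normal((4, 4))          # 4x4 input (residual) block

X_vec = X.reshape(16)                    # row-first vectorisation of X
F = T @ X_vec                            # Equation 3: 16x1 coefficient vector

def diagonal_scan_order(n=4):
    """(row, col) positions visited anti-diagonal by anti-diagonal."""
    return [(r, d - r) for d in range(2 * n - 1)
            for r in range(n) if 0 <= d - r < n]

coeff_block = np.zeros((4, 4))
for idx, (r, c) in enumerate(diagonal_scan_order()):
    coeff_block[r, c] = F[idx]           # rebuild a 4x4 block per scan order
print(coeff_block)
```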
- the non-separable transform set and/or transform kernel for the non-separable transform can be configured in various ways based on at least one of the prediction mode (ex. intra mode, inter mode, etc.), the width, height, or number of pixels of the current block, the location of the sub-block within the current block, a signaled syntax element, statistical characteristics of surrounding samples, whether a secondary transform is used, or a quantization parameter (QP).
- the prediction mode ex. intra mode, inter mode, etc.
- pre-defined intra prediction modes are grouped to correspond to n non-separable transform sets, and each non-separable transform set may include k transform kernel candidates.
- n and k may be arbitrary constants according to rules (conditions) defined equally for the encoding device and the decoding device.
- the number of non-separable transform sets and/or the number of transform kernel candidates included in a non-separable transform set may be configured differently depending on the width and/or height of the current block. For example, for a 4x4 block, n1 non-separable transform sets and k1 transform kernel candidates may be configured. For a 4x8 block, n2 non-separable transform sets and k2 transform kernel candidates may be configured. Additionally, the number of non-separable transform sets and the number of transform kernel candidates included in each non-separable transform set may be configured differently depending on the product of the width and height of the current block.
- For example, depending on whether the product of the width and height of the current block satisfies a predetermined condition, n3 non-separable transform sets and k3 transform kernel candidates may be configured; otherwise, n4 non-separable transform sets and k4 transform kernel candidates may be configured. That is, since the degree of change in the statistical characteristics of the residual signal varies depending on the block size, the number of non-separable transform sets and transform kernel candidates can be configured differently to reflect this.
- Additionally, when the non-separable transform is applied on a sub-block basis, the statistical characteristics of the residual signal may differ for each sub-block, so the number of non-separable transform sets and transform kernel candidates can be configured differently per sub-block. For example, if a 4x8 or 8x4 block is divided into two 4x4 subblocks and a non-separable transform is applied to each subblock, n5 non-separable transform sets and k5 transform kernel candidates can be configured for the top-left 4x4 subblock, and n6 non-separable transform sets and k6 transform kernel candidates can be configured for the other 4x4 subblock.
- Additionally, the number of non-separable transform sets and transform kernel candidates can be configured differently based on a signaled syntax element.
- In this case, syntax element information indicating one of a plurality of non-separable transform configurations may be used. For example, if three non-separable transform configurations are supported (i.e., n7 non-separable transform sets and k7 transform kernel candidates, n8 non-separable transform sets and k8 transform kernel candidates, and n9 non-separable transform sets and k9 transform kernel candidates), the corresponding syntax element can have the values 0, 1, and 2, and the value of the signaled syntax element can determine the non-separable transform configuration applied to the current block.
- Additionally, the number of non-separable transform sets and transform kernel candidates may be configured differently depending on whether a secondary transform is applied. For example, when the secondary transform is not applied, a non-separable transform configuration including n10 non-separable transform sets and k10 transform kernel candidates can be applied. When a secondary transform is applied, a non-separable transform configuration including n11 non-separable transform sets and k11 transform kernel candidates can be applied.
- Additionally, different non-separable transform configurations can be applied depending on the quantization parameter (QP). For example, when the QP value is small, a non-separable transform configuration including n12 non-separable transform sets and k12 transform kernel candidates can be applied. On the other hand, when the QP value is large, a non-separable transform configuration including n13 non-separable transform sets and k13 transform kernel candidates can be applied. If the QP value is below (or less than) a threshold value (e.g., 32), the case can be classified as having a small QP value; otherwise, the case can be classified as having a large QP value. Alternatively, the range of QP values can be divided into three or more ranges, and a different non-separable transform configuration can be applied to each range.
- the threshold value eg, 32
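- The sketch below shows how a non-separable transform configuration (number of sets, number of kernel candidates) might be selected from the block size, the use of a secondary transform, and the QP, as described above. The concrete n/k values and the way the conditions are combined are placeholders chosen for illustration; only the QP threshold of 32 is taken from the text.

```python
# Illustrative only: the constants and the combination of conditions are
# placeholders; the QP threshold (32) mirrors the description above.
def select_nspt_configuration(width, height, secondary_transform_used, qp,
                              qp_threshold=32):
    """Return (number_of_sets, kernel_candidates_per_set) for the current block."""
    if (width, height) == (4, 4):
        n, k = 35, 3          # e.g. n1 sets / k1 candidates for 4x4 blocks
    elif width * height <= 32:
        n, k = 35, 2          # e.g. n3 / k3 when width*height is small
    else:
        n, k = 35, 1          # e.g. n4 / k4 otherwise
    if secondary_transform_used:
        k = max(1, k - 1)     # e.g. fewer candidates when a secondary
                              # transform is also applied
    if qp >= qp_threshold:
        k = max(1, k - 1)     # e.g. a leaner configuration for large QP
    return n, k

print(select_nspt_configuration(4, 8, secondary_transform_used=False, qp=27))
```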
- For relatively large blocks, rather than using a non-separable transform corresponding to the width and height of the block, the block can be divided into multiple sub-blocks, and a non-separable transform corresponding to the width and height of the sub-blocks can be used.
- For example, when performing a non-separable transform on a 4x8 block, the 4x8 block can be divided into two 4x4 sub-blocks, and a 4x4 block-based non-separable transform can be used for each 4x4 sub-block.
- Likewise, an 8x16 block can be divided into two 8x8 sub-blocks, and an 8x8 block-based non-separable transform can be used.
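- A brief sketch of the sub-block approach above, assuming a placeholder 16x16 kernel for 4x4 blocks: a 4x8 block is split into two 4x4 sub-blocks and the 4x4 non-separable transform is applied to each; an 8x16 block would analogously be split into two 8x8 sub-blocks.

```python
import numpy as np

# Sketch only: the 16x16 kernel is a random placeholder, not a pre-defined
# NSPT kernel.
rng = np.random.default_rng(0)
T4x4 = rng.standard_normal((16, 16))     # placeholder kernel for 4x4 blocks

def nspt_4x4(sub_block):
    return T4x4 @ sub_block.reshape(16)

def nspt_on_4x8(block_4x8):
    assert block_4x8.shape == (4, 8)
    left, right = block_4x8[:, :4], block_4x8[:, 4:]   # two 4x4 sub-blocks
    return nspt_4x4(left), nspt_4x4(right)

coeffs_left, coeffs_right = nspt_on_4x8(rng.standard_normal((4, 8)))
print(coeffs_left.shape, coeffs_right.shape)           # (16,) (16,)
```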
- the non-separated transform set may be determined based on the intra prediction mode and mapping table of the current block.
- the mapping table may define a mapping relationship between pre-defined intra prediction modes and non-separate transform sets.
- Pre-defined intra prediction modes may include 2 non-directional modes and 65 directional modes.
- In general, the size of the transformation kernel for a non-separable transformation is larger than that for a separable transformation. This means that the computational complexity required for the transformation process is higher and the memory required to store the transformation kernel is larger. Meanwhile, a separable transformation can only consider statistical characteristics that exist in the horizontal and/or vertical directions, whereas a non-separable transformation can simultaneously consider statistical characteristics in a two-dimensional space including the horizontal and vertical directions, providing better compression efficiency.
- the non-directional mode may include planar mode number 0 and DC mode number 1, and the directional mode may include intra prediction modes number 2 to 66. However, this is an example, and the present disclosure can be applied even when the number of pre-defined intra prediction modes is different.
- the pre-defined intra prediction mode may further include -14 to -1 intra prediction modes and 67 to 80 intra prediction modes.
- Figure 5 exemplarily shows an intra prediction mode and its prediction direction according to the present disclosure.
- modes -14 to -1 and 2 to 33 and modes 35 to 80 are symmetrical in terms of prediction direction with mode 34 as the center.
- modes 10 and 58 are symmetrical about the direction corresponding to mode 34
- For example, mode -1 is symmetrical with mode 67. Therefore, for a vertical directional mode that is symmetrical to a horizontal directional mode around mode 34, the input data can be transposed before use. Transposing the input data means that the rows of the MxN two-dimensional input block become columns and the columns become rows, forming NxM data.
- the 16 pieces of data that make up the 4x4 block can be appropriately arranged to form a 16x1 one-dimensional vector.
- a one-dimensional vector may be constructed in row-first order, or a one-dimensional vector may be constructed in column-first order.
- a two-dimensional block can be formed by arranging the residual samples that are the result of non-separable transformation in the above order.
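- As a minimal illustration of the two reading orders described above (the helper names here are illustrative and not part of the disclosure), a 4x4 residual block can be flattened into a 16x1 vector in row-first or column-first order, and an output vector can be placed back into a 2D block using the same order, for example as follows:

```python
import numpy as np

def to_1d(block, order="row"):
    # Row-first: read each row left to right, top to bottom ("C" order).
    # Column-first: read each column top to bottom, left to right ("F" order).
    return block.flatten(order="C" if order == "row" else "F")

def to_2d(vec, height, width, order="row"):
    # Place a 1D vector (e.g., the inverse-transform output) back into a 2D
    # block using the same ordering that was used to build the input vector.
    return vec.reshape((height, width), order="C" if order == "row" else "F")

residual = np.arange(16).reshape(4, 4)   # example 4x4 block
v_row = to_1d(residual, "row")           # 16x1 vector, row-first order
v_col = to_1d(residual, "col")           # 16x1 vector, column-first order
assert np.array_equal(to_2d(v_row, 4, 4, "row"), residual)
```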
- Mode 34 can be considered neither a horizontal directional mode nor a vertical directional mode, but in the present disclosure it is classified as belonging to the horizontal directional modes. That is, for modes -14 to -1 and 2 to 33, the input data arrangement method for the horizontal directional modes, i.e., row-first order, is used, and for a vertical directional mode that is symmetrical around mode 34, the input data can be used after being transposed.
- Symmetry between block shapes in a transpose relationship, that is, symmetry between KxL blocks and LxK blocks, can also be utilized.
- a symmetrical relationship exists between the KxL block predicted in P mode and the LxK block predicted in (68-P) mode.
- a symmetrical relationship exists between the KxL block predicted in Q mode and the LxK block predicted in (66-Q) mode.
- the same transformation kernel can be applied to the KxL block and the LxK block.
- That is, based on the (68-P) mode instead of the P mode applied to the LxK block, a non-separable transformation set can be derived through the mapping table corresponding to the KxL block.
- a non-separable transform set can be derived through a mapping table corresponding to the KxL block based on the (66-Q) mode instead of the Q mode applied to the LxK block.
- a non-separable transformation set can be selected based on mode 2 instead of mode 66.
- the input data can be read according to a pre-determined order (e.g., row-first order or column-first order) to construct a one-dimensional vector, and then the corresponding non-separable transformation can be applied.
- For the LxK block, the input data can be read according to the transposed order to construct a one-dimensional vector, and then the corresponding non-separable transformation can be applied.
- For example, if the KxL block is read in row-first order, the LxK block can be read in column-first order; equivalently, the transposed LxK block can be read in row-first order.
- When mode 34 is applied to the KxL block, a non-separable transformation set is determined based on mode 34, and the input data can be read according to a pre-determined order to construct a one-dimensional vector on which the non-separable transformation is performed.
- When mode 34 is applied to the LxK block, a non-separable transformation set is similarly determined based on mode 34, but the non-separable transformation can be performed by reading the input data in the transposed order to construct the one-dimensional vector.
- Above, a method of determining a non-separable transformation set and a method of configuring the input data are described with a KxL block as the reference, but the non-separable transformation can equally be performed with an LxK block as the reference by utilizing the above-described symmetry.
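- The following is a rough sketch of how the block-shape and mode symmetry described above could be combined when preparing the input of a non-separable transformation; it assumes (as in one of the examples above) that the shape whose width is not smaller than its height serves as the reference, and lookup_transform_set is a hypothetical placeholder for the mapping tables of this disclosure:

```python
def symmetric_mode(mode):
    # Directional-mode symmetry around mode 34:
    # normal modes use 68 - x, wide-angle (WAIP) modes use 66 - x.
    return 68 - mode if 2 <= mode <= 66 else 66 - mode

def prepare_nonseparable_input(block, intra_mode, lookup_transform_set):
    """block: 2D residual array (height x width); lookup_transform_set(w, h, mode)
    stands in for the mapping table that returns a non-separable transform set."""
    h, w = block.shape
    if w >= h or intra_mode in (0, 1):
        # Reference shape or non-directional mode: own table, row-first reading.
        return lookup_transform_set(w, h, intra_mode), block.flatten(order="C")
    # Transposed shape: reuse the table of the width/height-swapped block with the
    # symmetric mode, and read the input in column-first (transposed) order.
    return lookup_transform_set(h, w, symmetric_mode(intra_mode)), block.flatten(order="F")
```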
- For example, the block used as the reference may be restricted to a block whose width is greater than its height.
- symmetry may be restricted from being utilized.
- the non-square block may use a different number of non-separate transform sets and/or transform kernel candidates than the square block, and the non-separate transform set may be selected using a different mapping table than that of the square block.
- An example of a mapping table for selecting a non-separable transformation set is as follows.
- Table 1 shows an example of allocating a non-separable transform set for each intra prediction mode when there are five non-separate transform sets.
- the value of predModeIntra means the value of the intra prediction mode considering WAIP, and TrSetIdx is an index indicating a specific non-separated transformation set.
- Referring to Table 1, it can be seen that the same non-separable transformation set is applied to modes located in symmetrical directions of the intra prediction mode.
- Table 1 is only an example of using five non-separable transformation sets, and does not limit the total number of non-separable transformation sets for non-separated transformation.
- non-separable transformation may not be applied to WAIP for compression performance.
- a non-separate transform set corresponding to an adjacent intra prediction mode may be shared without configuring a separate non-separate transform set for WAIP.
- Table 1 maps ranges of predModeIntra to TrSetIdx values over the ranges predModeIntra < 0, 0 to 1, 2 to 12, 13 to 23, 24 to 44, 45 to 55, and 56 to 80; for example, TrSetIdx 0 is assigned to modes 0 to 1, TrSetIdx 3 to the range ending at mode 44, and TrSetIdx 2 to the range ending at mode 55.
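- A Table 1-style mapping can be evaluated by scanning range upper bounds in order; in the sketch below the range boundaries follow the layout above, while the TrSetIdx values are placeholders only and are not the actual entries of Table 1:

```python
# (upper_bound, TrSetIdx) pairs scanned in increasing order of upper_bound.
# The TrSetIdx values below are illustrative placeholders, not Table 1 itself.
TR_SET_TABLE = [(-1, 1), (1, 0), (12, 1), (23, 2), (44, 3), (55, 2), (80, 1)]

def tr_set_idx(pred_mode_intra):
    for upper_bound, set_idx in TR_SET_TABLE:
        if pred_mode_intra <= upper_bound:
            return set_idx
    raise ValueError("predModeIntra outside the supported range")
```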
- the non-separated transformation set may include a plurality of transformation kernel candidates, and any one of the plurality of transformation kernel candidates may be selectively used. For this purpose, an index signaled through the bitstream can be used.
- one of a plurality of conversion kernel candidates may be implicitly determined based on context information of the current block.
- the context information may mean the size of the current block or whether non-separable transformation is applied to neighboring blocks.
- the size of the current block can be defined as width, height, maximum/minimum value of width and height, sum of width and height, or product of width and height.
- inverse transformation can be divided into separate transformation and non-separate transformation.
- Separable transformation means performing the transformation separately in the horizontal and vertical directions for a 2D block,
- whereas non-separable transformation means performing one transformation on the samples that make up the whole or a partial region of the 2D block.
- In the case of separable transformation, it can be expressed as a pair of a horizontal transformation and a vertical transformation, and in the present disclosure it is written as (horizontal transformation, vertical transformation).
- Each transformation set may include one or more transformation kernel candidates.
- Here, a transform kernel may refer to a single transform (e.g., DCT-2, DST-7) or to a pair of two transforms (e.g., (DCT-2, DCT-2)).
- a non-separable transformation applied as a primary transformation can be expressed as Non-Separable Primary Transform (NSPT).
- a plurality of non-separated transform sets may be configured, and each non-separated transform set may include one or more transform kernels as transform kernel candidates.
- one of a plurality of non-separable transformation sets is selected depending on the intra prediction mode, and the plurality of non-separable transformation sets for NSPT can be expressed as an NSPT set list. This is the same as previously discussed, and detailed explanation will be omitted here.
- a group of one or more transform sets for which the current block is available can be formed from a plurality of pre-defined transform sets.
- the group of one or more transformation sets may be composed of units of a predetermined area to which the current block belongs, and will hereinafter be referred to as a collection.
- the predetermined area unit may be at least one of a picture, a slice, a coding tree unit row (CTU row), or a coding tree unit (CTU).
- For example, a transformation set consisting of (DCT-2, DCT-2) can be called S1, and a transformation set consisting of (DST-7, DST-7), (DCT-8, DST-7), (DST-7, DCT-8), and (DCT-8, DCT-8) can be called S2, respectively.
- the above-described NSPT set list may include N non-separable transformation sets, and the N non-separable transformation sets will be called S3,1, S3,2, ..., S3,N, respectively.
- N may be 35, but is not limited thereto.
- For example, the transform kernel applicable to the current block may belong to any of S1, S2, or S3,13.
- In this case, the collection available for the current block can be expressed as {S1, S2, S3,13}.
- Since the collection according to the present disclosure is a group of one or more transformation sets available for the current block, the collection may be configured differently depending on the context of the current block.
- That is, a collection may be constructed based on the context of the current block, and at this time, a process of selecting one of a plurality of transformation sets belonging to the collection and a process of selecting one of a plurality of transformation kernel candidates belonging to the selected transformation set can be performed.
- selection of the transformation set and transformation kernel candidate may be performed implicitly based on the context of the current block, or may be performed based on an explicitly signaled index.
- the process of selecting one of a plurality of transformation sets belonging to the collection and the process of selecting one of a plurality of transformation kernel candidates belonging to the selected transformation set may be performed separately. For example, the index for selecting a transformation set is signaled first, and based on this, one of a plurality of transformation sets belonging to the collection can be selected.
- an index indicating one of a plurality of transformation kernel candidates belonging to the transformation set may be signaled, and any one transformation kernel candidate may be selected from the transformation set based on the signaled index.
- the transformation kernel of the current block may be determined based on the selected transformation kernel candidate.
- selection of any one transformation set from the collection may be performed implicitly based on the context of the current block, and selection of any one transformation kernel candidate from the selected transformation set may be performed based on a signaled index.
- selection of any one transformation set from the collection may be performed based on a signaled index, and selection of any one transformation kernel candidate from the selected transformation set may be implicitly performed based on the context of the current block.
- selection of any one transformation set from the collection may be performed implicitly based on the context of the current block, and selection of any one transformation kernel candidate from the selected transformation set may also be performed implicitly based on the context of the current block. .
- the index for selecting the transformation set may not be signaled.
- the index for indicating the corresponding transformation kernel candidate may not be signaled.
- an index indicating one of all conversion kernel candidates belonging to the current collection may be signaled. In this case, the process of selecting any one transformation set from the collection can be omitted. At this time, all transformation sets belonging to the collection can be rearranged considering priority.
- Since a short binary code is allocated to a small-valued index in codes such as a truncated unary code, it may be advantageous to allocate a small-valued index to a transformation kernel candidate that is more likely to improve coding performance.
- When rearranging all transformation kernel candidates in a collection according to priority, a different rearrangement can be applied to each collection. Additionally, rather than rearranging all transformation kernel candidates in the collection, only some of them may be selectively rearranged.
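- As a sketch of this rearrangement idea (the priority scores are hypothetical, and how they are derived, e.g., from block context, is outside this sketch), candidates expected to perform better can be moved to small indices so that short truncated-unary codewords are spent on them:

```python
def reorder_candidates(candidates, priority):
    """Return the candidates sorted so that the highest-priority one gets index 0."""
    order = sorted(range(len(candidates)), key=lambda i: -priority[i])
    return [candidates[i] for i in order]

# With truncated unary binarization, index 0 -> "0", 1 -> "10", 2 -> "110", ...,
# so placing the statistically most useful candidate at index 0 saves bins.
```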
- the transformation kernel for inverse transformation of the current block may be determined based on Multiple Transform Selection (MTS).
- the MTS according to the present disclosure may use at least one of DST-7, DCT-8, DCT-5, DST-4, DST-1, or IDT (identity transform) as a transformation kernel. Additionally, the MTS according to the present disclosure may further include a DCT-2 conversion kernel.
- Multiple MTS sets may be defined for the MTS. Based on the size and/or intra prediction mode of the current block, one of the plurality of MTS sets may be determined. For example, in determining one MTS set, 16 transform block sizes can be considered, and for the directional modes, the shape of the transform block and the symmetry between intra prediction modes can be considered. For the WAIP (Wide Angle Intra Prediction) modes (i.e., modes -1 to -14 (or -15) and modes 67 to 80 (or 81)), the MTS set corresponding to mode 2 can be applied to modes -1 to -14 (or -15), and the MTS set corresponding to mode 66 can be applied to modes 67 to 80 (or 81). A separate MTS set may be allocated for Matrix-based Intra Prediction (MIP) mode.
- the MTS set according to the transform block size and intra prediction mode can be allocated/defined as shown in Table 4 below.
- Table 4 shows the allocation of MTS sets according to 16 transformation block sizes and intra prediction modes.
- the number of pre-defined MTS sets is 80, and the index indicating any one of the 80 MTS sets can range from 0 to 79 as shown in Table 4.
- Table 5 shows conversion kernel candidates included in each MTS set examined in Table 4.
- Each MTS set may consist of six transformation kernel candidates.
- the conversion kernel candidate index has a value between 0 and 5, and may indicate one of the six conversion kernel candidates.
- each transformation kernel candidate may be a combination of a horizontal transformation kernel and a vertical transformation kernel for separate transformation, and 25 transformation kernel candidates with indices of 0 to 24 may be defined.
- Table 6 is an example of the 25 conversion kernel candidates examined in Table 5.
- the horizontal transformation and vertical transformation of the transformation kernel candidate are expressed as (horizontal transformation, vertical transformation).
- the horizontal/vertical transformations when the intra prediction mode is less than 35 may be swapped relative to the horizontal/vertical transformations when the intra prediction mode is 35 or greater.
- In this case, a mode symmetrical around mode 34 can be derived, and an MTS set can be selected from Table 4 above based on the derived mode. The symmetry of the block shape can additionally be considered: if the original transform block has size WxH, the transform block is symmetrically regarded as having size HxW, and an MTS set can be selected from Table 4.
- the value of the intra prediction mode may be the value of the modified intra prediction mode.
- As the mode values for WAIP, modes -14 (or -15) to -1 are modified to mode 2, and modes 67 to 80 (or 81) are modified to mode 66.
- the remaining modes are modified to mode 2.
- the value of the original intra prediction mode can be set to the value of the modified intra prediction mode.
- the extended modes for WAIP are also configured symmetrically around mode 34, symmetry around mode 34 can be used for all directional modes except planar mode and DC mode.
- the MTS set with an index of 72 may be selected, as defined in Table 4.
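- A rough sketch of the selection logic described above is given below; mts_set_from_table4 is a placeholder for Table 4 (which is not reproduced here), and the handling of MIP mode and of the horizontal/vertical kernel swap for modes of 35 or greater is omitted:

```python
def modified_mode(mode):
    # WAIP modes are mapped to the nearest normal directional mode.
    if mode < 0:
        return 2
    if mode > 66:
        return 66
    return mode

def select_mts_set(width, height, intra_mode, mts_set_from_table4):
    """mts_set_from_table4(width, height, mode) stands in for Table 4."""
    mode = modified_mode(intra_mode)
    if mode >= 35:
        # Use the symmetry around mode 34: symmetric mode and transposed block size.
        mode = 68 - mode
        width, height = height, width
    return mts_set_from_table4(width, height, mode)
```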
- the MTS set assigned to MIP mode may be selected based on the size of the current block without considering the symmetry of the block shape.
- the MTS set allocated to the MIP mode may be selected based on the symmetric block size.
- A flag indicating whether the MIP mode is applied in transpose mode can be used. If the MIP mode is applied to the current MxN block and the flag indicates application of the transpose mode, the intra prediction mode may be regarded as Planar mode, and the current MxN block may be regarded as an NxM block. That is, from Table 4, the MTS set corresponding to the NxM block size and Planar mode can be selected. As seen in Table 6, if the value of the intra prediction mode is 35 or greater, the horizontal and vertical transformations are swapped; however, since the intra prediction mode of the current block is regarded as Planar mode, the horizontal and vertical transformations of the transformation kernel candidate may not be swapped.
- Alternatively, the intra prediction mode is not regarded as Planar mode, and the current MxN block may be regarded as an NxM block. That is, from Table 4, the MTS set corresponding to the NxM block size and the MIP mode may be selected.
- the transform kernel candidate selected by the transform kernel candidate index may be set as the transform kernel of the current block.
- At least one of the horizontal transformation or the vertical transformation of the selected transformation kernel candidate may be changed to another transformation kernel. For example, if the transformation kernel candidate index is 3 and the width and height of the current block are both 16 or less, at least one of the horizontal transformation or the vertical transformation of the transformation kernel candidate corresponding to transformation kernel candidate index 3 may be changed to another transformation kernel. At this time, the horizontal transformation and the vertical transformation can be changed independently of each other.
- For example, if the difference (or the absolute value of the difference) between the intra prediction mode value of the current block and the horizontal mode value is less than or equal to a predetermined threshold, the vertical transformation of the selected transformation kernel candidate may be changed to IDT (identity transformation). If the difference (or the absolute value of the difference) between the intra prediction mode value of the current block and the vertical mode value is less than or equal to the predetermined threshold, the horizontal transformation of the selected transformation kernel candidate may be changed to IDT (identity transformation).
- the threshold value can be determined as shown in Table 7 below based on the width and height of the current block.
- Table 7 is for changing the horizontal transformation and/or vertical transformation of the transformation kernel candidate selected by the transformation kernel candidate index to another transformation kernel, and defines a threshold according to the size of the transformation block.
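- A sketch of the IDT substitution rule described above is shown below; the threshold argument stands in for the block-size dependent values of Table 7 (which are not reproduced here), and mode numbers 18 and 50 are used for the horizontal and vertical modes following FIG. 5:

```python
HOR_MODE, VER_MODE = 18, 50  # horizontal / vertical intra prediction modes (FIG. 5)

def apply_idt_substitution(hor_tr, ver_tr, intra_mode, threshold):
    """Replace one side of the selected (horizontal, vertical) kernel pair with IDT.

    threshold stands in for the value defined per transform-block size in Table 7.
    """
    if abs(intra_mode - HOR_MODE) <= threshold:
        ver_tr = "IDT"   # near-horizontal prediction: vertical transform becomes IDT
    if abs(intra_mode - VER_MODE) <= threshold:
        hor_tr = "IDT"   # near-vertical prediction: horizontal transform becomes IDT
    return hor_tr, ver_tr
```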
- the six transformation kernel candidates constituting one MTS set can be distinguished by a transformation kernel candidate index of 0 to 5 as defined in Table 5.
- the corresponding conversion kernel candidate index may be signaled through a bitstream.
- a flag (MTS enabled flag or MTS flag) indicating availability/application of the MTS set may be signaled, and if the flag indicates availability/application of the MTS set, a conversion kernel candidate index may be signaled.
- the MTS flag may consist of one bin, and one or more contexts for CABAC-based entropy coding (hereinafter referred to as CABAC context) may be assigned to the bin. For example, different CABAC contexts may be allocated for non-MIP mode and MIP mode.
- the number of transformation kernel candidates available for the current block may be set differently.
- the sum of the absolute values of all or some transform coefficients in the current block may be considered.
- the sum of the absolute values of the corresponding transformation coefficients will be referred to as AbsSum. If AbsSum is less than or equal to T1, only one transform kernel candidate corresponding to a transform kernel candidate index of 0 may be available. If AbsSum is greater than T1 and less than or equal to T2, four transform kernel candidates corresponding to transform kernel candidate indices of 0 to 3 may be available. If AbsSum is greater than T2, 6 transform kernel candidates corresponding to transform kernel candidate indices from 0 to 5 may be available.
- T1 may be 6 and T2 may be 32, but this is only an example.
- If AbsSum is less than or equal to T1, the number of transformation kernel candidates available for the current block is 1, so the transformation kernel candidate corresponding to transformation kernel candidate index 0 can be used for the transformation of the current block without signaling the transformation kernel candidate index.
- If AbsSum is greater than T1 and less than or equal to T2, four transformation kernel candidates are available, so any one of the four transformation kernel candidates can be selected based on a transformation kernel candidate index consisting of two bins. That is, transformation kernel candidate indices 0 to 3 may be signaled as 00, 01, 10, and 11, respectively. Of the two bins, the Most Significant Bit (MSB) may be signaled first, and the Least Significant Bit (LSB) may be signaled later.
- In this case, a different CABAC context can be assigned to each bin. For example, for the two bins, a CABAC context different from the CABAC context allocated for the MTS flag can be assigned to each bin. Alternatively, a CABAC context may not be assigned to the two bins, and bypass coding may be applied.
- If AbsSum is greater than T2, the transformation kernel candidate index has a value of 0 to 5, so the transformation kernel candidate index cannot be expressed with only two bins. In this case, the transformation kernel candidate index can be expressed by allocating two or more bins.
- a CABAC context may be allocated, or bypass coding may be applied without allocation of a CABAC context.
- a CABAC context may be allocated to some of the plurality of bins (e.g., the first bin, or the first and second bins), and bypass coding may be applied to the remaining bins.
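- The AbsSum-based restriction described above can be sketched as follows, using the example thresholds T1 = 6 and T2 = 32; the returned count then determines whether the transformation kernel candidate index is inferred, coded with two bins, or coded with a longer binarization:

```python
import numpy as np

def num_available_candidates(coeffs, t1=6, t2=32):
    """coeffs: all (or a subset of) transform coefficients of the current block."""
    abs_sum = int(np.abs(coeffs).sum())
    if abs_sum <= t1:
        return 1   # only candidate 0 is available; no index is signaled
    if abs_sum <= t2:
        return 4   # index coded with two bins: 00, 01, 10, 11
    return 6       # index coded with two or more bins (e.g., truncated binarization)
```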
- the transform kernel of the current block may be determined based on a transform set including one or more transform kernel candidates.
- the transform kernel of the current block may be derived from one of one or more transform kernel candidates belonging to the transform set.
- the process of determining the transform kernel of the current block may include at least one of the following: 1) determining the transform set of the current block or 2) selecting one transform kernel candidate from the transform set of the current block.
- the process of determining the transform set may be a process of selecting one of a plurality of transform sets that are identically predefined for the encoding device and the decoding device.
- Alternatively, the process of determining the transform set may be a process of configuring one or more transform sets available to the current block from among a plurality of transform sets that are pre-defined identically in the encoding device and the decoding device, and selecting one of the configured transform sets.
- Alternatively, the process of determining the transform set may be a process of configuring a transform set based on transform kernel candidates available to the current block from among a plurality of transform kernel candidates pre-defined identically for the encoding device and the decoding device.
- If the transformation set of the current block includes a plurality of transformation kernel candidates, a process of selecting one of the plurality of transformation kernel candidates for the current block may be performed.
- If the transformation set of the current block includes one transformation kernel candidate (that is, when the current block has only one available transformation kernel candidate), the transformation kernel of the current block may be set to the corresponding transformation kernel candidate.
- the transform set according to the present disclosure may refer to the (non-separated) transform set in Embodiment 1 described above, or the MTS set in Embodiment 2.
- Alternatively, the transform set may be defined separately from the (non-separable) transform set of Embodiment 1 or the MTS set of Embodiment 2.
- the transformation set may include one or more specific transformation kernels as transformation kernel candidates.
- One specific transformation kernel may be defined as a pair of a transformation kernel for horizontal transformation and a transformation kernel for vertical transformation, or may be defined as one transformation kernel that applies equally to horizontal and vertical transformation.
- NSPT can be applied to all or part of the transform block.
- residual samples existing in the area where NSPT is applied can be input as a one-dimensional vector of NSPT. That is, residual samples existing in one transform block as a whole or in a partial region (in this disclosure, referred to as Region Of Interest, ROI) can be collected into a one-dimensional vector and configured as an input.
- the first-order transformation coefficient can be obtained.
- a one-dimensional vector output can be obtained.
- a residual sample for the ROI can be obtained by placing each element value constituting the corresponding output vector at a designated position within the 2D transformation block.
- the dimension of the matrix can be determined depending on the size of the ROI.
- a transformation kernel may be referred to as a transformation type or a transformation matrix
- a non-separable transformation kernel for NSPT may be referred to as an NSPT kernel.
- the dimension of the corresponding transform matrix may be MN x MN.
- the dimension of the NSPT kernel may be 64 x 64.
- the NSPT kernel when NSPT is applied to a residual generated by intra prediction, can be adaptively determined according to the intra prediction mode. Since the statistical characteristics of the residual block may vary for each intra prediction mode, compression efficiency can be increased by adaptively determining the NSPT kernel according to the intra prediction mode.
- the NSPT kernel applied for one or more intra prediction modes can be configured to be shared.
- the non-separate transform set may be determined based on the intra prediction mode and mapping table of the current block.
- the mapping table may define a mapping relationship between pre-defined intra prediction modes and non-separate transform sets.
- Pre-defined intra prediction modes may include 2 non-directional modes and 65 directional modes.
- intra prediction modes may be grouped into an intra prediction mode group.
- One NSPT kernel may be assigned to an intra prediction mode group, or multiple NSPT kernels may be assigned.
- a non-separated transform set (NSPT set) including one or more NSPT kernels may be assigned to an intra prediction mode group.
- a non-separate transformation set is mapped to the intra prediction mode, and one of N NSPT kernels included in the non-separate transformation set can be selected.
- an intra prediction group may include adjacent prediction modes (e.g., modes 17, 18, and 19). Additionally, the intra prediction group may include modes with symmetry. For example, the directional modes may be symmetrical around the diagonal mode (i.e., intra prediction mode No. 34) of FIG. 5 described above. At this time, the two symmetrical modes can form one group (or pair). For example, mode 18 and mode 50 are symmetrical around mode 34, so they can be included in the same group. However, for modes with symmetry, a process of transposing the 2D input block and constructing a one-dimensional input vector may be added before applying the forward NSPT kernel.
- For example, if the intra prediction mode is 34 or less, a one-dimensional input vector can be derived from the corresponding input block in row-first order without transposing the 2D input block. If the intra prediction mode is greater than 34, a one-dimensional input vector can be constructed either by first transposing the 2D input block and then reading it in row-first order, or by leaving the 2D input block as is and reading it in column-first order.
- Table 8 below illustrates a mapping table for assigning NSPT sets according to intra prediction mode. Referring to Table 8, a total of 35 NSPT sets from 0 to 34 can be defined. Extended WAIP modes (i.e., modes -14 to -1 and modes 67 to 80 in FIG. 5) may be assigned the NSPT set assigned to the nearest normal directional mode. That is, NSPT set number 2 can be assigned to the extended WAIP mode.
- An NSPT set may include one or more NSPT kernels (or kernel candidates). That is, the NSPT set may include N NSPT kernel candidates. As an example, N may be set to a value equal to or greater than 1, such as 1, 2, 3, or 4. Among the one or more NSPT kernels included in the NSPT set, the kernel applied to the current block can be signaled using an index. In this disclosure, the index may be referred to as the NSPT index. As an example, the NSPT index can have a value of 0, 1, 2, ..., N - 1.
- the NSPT index value may be fixed to 0. In this case, the NSPT index can be inferred without being signaled separately. Additionally, a flag indicating whether NSPT is applied may be signaled separately from the NSPT index. In this disclosure, the corresponding flag may be referred to as the NSPT flag.
- NSPT flag value is 1, NSPT can be applied. If the NSPT flag value is 0, NSPT may not be applied. If the NSPT flag is not signaled, the NSPT flag value can be inferred to be 0. As an example, when the NSPT flag value is 1, the NSPT index may be signaled. Based on the signaled NSPT index, one of N kernel candidates included in the NSPT set selected by the intra prediction mode can be specified.
- The entropy coding method of the NSPT index may be defined in various ways considering the number (N) of NSPT kernels included in the NSPT set. For example, as the method (i.e., binarization method) of mapping values from 0 to N-1 to a bin string, truncated unary binarization, truncated binarization, or fixed-length binarization can be used.
- If the number N of kernel candidates constituting the NSPT set is 2, one of the two candidates can be specified with one bin. For example, 0 may indicate the first candidate, and 1 may indicate the second candidate. Additionally, when N is 3 and truncated unary binarization is applied, a candidate can be specified with up to two bins. For example, the first, second, and third candidates can be binarized to 0, 10, and 11, respectively, for signaling. As an example, the binarized bins may be coded using context coding or bypass coding.
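- A minimal sketch of truncated unary binarization of the NSPT index, reproducing the N = 3 example above, could look as follows:

```python
def tu_binarize(index, max_index):
    """Truncated unary binarization of an index in [0, max_index]."""
    bins = "1" * index
    if index < max_index:
        bins += "0"   # the terminating 0 is omitted only for the largest value
    return bins

# N = 3 kernel candidates (max_index = 2): indices 0, 1, 2 -> "0", "10", "11"
assert [tu_binarize(i, 2) for i in range(3)] == ["0", "10", "11"]
```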
- a reduced primary transform (RPT) method using a transform kernel of reduced dimensions as a primary transform is described.
- samples belonging to a 2D residual block may be arranged (or rearranged) into a 1D vector according to row priority (or column priority).
- the transformation matrix for NSPT can be multiplied by the arrayed vector.
- the corresponding 2D residual block is an M x N block (M is the horizontal length and N is the vertical length)
- the length of the rearranged 1D vector can be M*N. That is, the corresponding 2D residual block can also be represented as a column vector with dimensions M*N x 1.
- M*N may be denoted as MN for convenience.
- the dimension of the corresponding transformation matrix may be MN x MN.
- forward NSPT can operate by multiplying the left side of the MN x 1 vector by the corresponding MN x MN transformation matrix to obtain an MN x 1 transformation coefficient vector.
- r transformation coefficients can be obtained by multiplying an r x MN matrix rather than multiplying the MN x MN matrix as the forward NSPT transformation matrix described above.
- r represents the number of rows of the transformation matrix
- MN represents the number of columns of the transformation matrix.
- the r value may be set to be less than or equal to MN. That is, the existing forward NSPT transformation matrix includes MN rows, and each row is a 1 x MN row vector, which is the transform basis vector of the corresponding NSPT transformation matrix.
- That is, each transform coefficient can be obtained by multiplying the corresponding transform basis vector by the MN x 1 sample column vector.
- Since the existing forward NSPT transformation matrix consists of MN row vectors, MN transformation coefficients (i.e., an MN x 1 transformation coefficient column vector) are obtained.
- the transformation matrix may be composed of r transformation basis vectors instead of MN transformation basis vectors. Accordingly, when forward RPT is applied, r transform coefficients (i.e., r x 1 transform coefficient column vector) can be obtained instead of MN.
- the RPT kernel can be constructed by selecting r transformation basis vectors, which are some of the transformation basis vectors that make up the MN x MN forward NSPT kernel.
- As described above, a transformation kernel may be referred to as a transformation type or a transformation matrix, and in the present disclosure the reduced non-separable transformation kernel used for RPT may be referred to as an RPT kernel. That is, when selecting r 1 x MN row vectors from the MN x MN forward NSPT kernel, it may be advantageous to select the most important transformation basis vectors from a coding performance perspective. Specifically, in terms of energy compaction through transformation, greater energy tends to be concentrated in the transformation coefficients that appear first when the forward NSPT transformation matrix is multiplied.
- Accordingly, an r x MN forward RPT kernel can be constructed (or derived) by taking the top r rows of the forward NSPT kernel.
- The RPT according to the present disclosure takes only a portion (i.e., r) of the transformation coefficients obtained by applying the existing NSPT, which may result in loss of some of the energy of the original signal. In other words, distortion between the original signal and the reconstructed signal may occur through this process. Nevertheless, by applying RPT, only r transform coefficients are generated instead of MN, so the amount of bits required to code the transform coefficients can be reduced. Therefore, in the case of a signal in which much of the energy is concentrated in a small number of transform coefficients (e.g., a video residual signal), the gain obtained by reducing signaling bits can be significantly large, and this can improve coding performance.
- The kernel for reverse NSPT is a transformation matrix that may be the transpose of the forward NSPT kernel described above.
- the input data may be a conversion coefficient signal instead of a sample signal such as a residual signal.
- If the forward NSPT transformation matrix is G and the sample signal rearranged into a 1D vector is x, the transformation coefficient vector y obtained by multiplying x on the left by the transformation matrix G can be expressed as Equation 4 below.
- [Equation 4] y = G x
- x and y may be MN x 1 column vectors.
- G may have the form of an MN x MN matrix.
- The reverse NSPT process can be expressed as Equation 5 below using the same variables.
- [Equation 5] x = G^T y
- Here, G^T means the transpose matrix of G.
- Forward RPT operation and reverse RPT operation according to the present disclosure can also be expressed by the above two equations.
- However, in this case, y is an r x 1 column vector instead of an MN x 1 column vector, and G is an r x MN matrix instead of an MN x MN matrix.
- Since the dimension of the sample signal (e.g., the image residual signal) is MN, this may mean that the MN sample signals can be restored with only r transformation coefficients through the reverse RPT.
- the original MN sample signals can be restored by coding only r transform coefficients less than MN, thereby improving coding performance.
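- The forward and reverse RPT operations described above (Equations 4 and 5 with a reduced matrix) can be sketched numerically as follows; the random orthonormal matrix is only a stand-in for a trained NSPT kernel:

```python
import numpy as np

M, N, r = 4, 4, 8
MN = M * N

# Stand-in for a trained MN x MN forward NSPT kernel: a random orthonormal matrix.
G = np.linalg.qr(np.random.randn(MN, MN))[0]
G_rpt = G[:r, :]          # forward RPT kernel: the top r transform basis vectors (r x MN)

x = np.random.randn(MN)   # residual block rearranged into an MN x 1 vector
y = G_rpt @ x             # forward RPT: r transform coefficients (Equation 4, reduced)
x_hat = G_rpt.T @ y       # reverse RPT: MN restored samples (Equation 5, reduced)

# x_hat approximates x; the component of x outside the span of the r kept
# basis vectors is lost, which is the distortion mentioned above.
```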
- the r value is defined in consideration of the statistical characteristics of the residual block, and a residual block of the existing transform block size is derived from the reduced size residual block determined according to the defined r value.
- In the case of RPT, it is a technique for defining the r value by considering the statistical characteristics of the samples in the residual block, which are very different from the distribution of the primary transformation coefficients, and it is therefore fundamentally different from the reduced secondary transformation.
- When determining the RPT kernel, which is a reduced-dimensional transformation matrix, memory usage and/or the number of multiplications per sample may be considered as a measure of worst-case complexity. For example, if the maximum allowed number of multiplications per sample is set to 16 for a 16x16 block, and the memory usage is set to 8 KB or less per kernel (with kernel coefficients expressed as 1 byte each), the value of r can be set to 16 or less.
- the r value constituting the RPT kernel may be determined based on specific information.
- the r value constituting the RPT kernel can be determined based on predefined encoding parameters.
- the value of r may be determined depending on the size of the block.
- the RPT kernel can be determined variably depending on the size of the block.
- the block may be at least one of a coding block, a transform block, and a prediction block.
- the r value may be determined based on prediction information.
- the prediction information may include information about inter/intra prediction, intra prediction mode information, etc.
- the r value may be determined based on signaled information (value of a syntax element).
- the r value may be variably determined depending on the quantization parameter value.
- a predefined fixed value may be used as the r value, and the predefined fixed value may be determined based on signaled information.
- By multiplying the sample signal by the r x MN RPT kernel, r transformation coefficients can be obtained.
- The obtained r transformation coefficients can be arranged according to a predefined transform coefficient scan order (e.g., forward/reverse zig-zag scan order, forward/reverse horizontal scan order, forward/reverse vertical scan order, forward/reverse diagonal scan order, a scan order specified based on the intra prediction mode, etc.; a scan order in coefficient group (CG) units is also applicable). However, since only r transformation coefficients are obtained by applying the forward RPT, it may not be possible to fill the entire inside of the M x N block according to this scan order, resulting in empty space.
- the above-described empty space can be predicted in the following manner considering the characteristics of the residual signal.
- the value of the empty space can be filled using the values of available surrounding pixels.
- the value of the empty space can be filled based on the values of available surrounding pixels and intra prediction mode.
- the value of an empty space can be predicted by performing intra prediction based on the values of available surrounding pixels and the intra prediction mode.
- the value of the empty space can be filled using a predefined fixed value (e.g., 0).
- Alternatively, the value of the empty space can be filled from available surrounding pixels using a predetermined intra prediction mode (e.g., planar mode).
- filling the empty space with 0 in the above-described examples may be referred to as a zero-out process.
- In relation to this, the following embodiments can be applied. If a non-zero transform coefficient is detected (or parsed) in the corresponding empty space portion during parsing of the transform coefficients on the decoding device side, it may be considered (or inferred) that RPT is not applied. In other words, if a non-zero transformation coefficient exists in the predefined area representing the corresponding empty space, RPT may be considered not applicable. In this case, signaling (or parsing) may not be performed for a flag indicating whether to apply RPT and/or an index designating one of a plurality of RPT kernel candidates. As an example, if a non-zero transformation coefficient exists in the predefined area representing the corresponding empty space, a predefined variable value may be updated, and it may be inferred that RPT is not applied based on the updated variable value.
- whether to apply RPT may be determined depending on the size and/or shape of the block.
- the RPT kernel may be variably determined depending on the size and/or shape of the block.
- the value of r may vary, so the empty space may vary depending on the size and/or shape of the block. Accordingly, the area for checking whether a non-zero transform coefficient is detected for each block size and/or shape may be defined differently. In other words, the zero out area can be determined variably.
- For example, when a 16x64 matrix is applied as the forward RPT matrix for an 8x8 block, the r value may be 16.
- In this case, when the CG is a 4x4 sub-block, only the top-left 4x4 sub-block can be filled with non-zero RPT transform coefficients, and the remaining three empty 4x4 sub-blocks (i.e., the top-right, bottom-left, and bottom-right sub-blocks) can be filled with a value of 0.
- If a non-zero transform coefficient is detected in the remaining three 4x4 sub-block areas during the decoding process, it can be considered that RPT is not applied.
- a flag indicating whether to apply RPT or an index designating one of a plurality of RPT kernel candidates may not be signaled.
- As another example, when the CG is a 4x4 sub-block, only the first two CGs in the scan order can be filled with non-zero RPT transform coefficients.
- For example, the top-left 4x4 sub-block and the 4x4 sub-block adjacent to the lower side of the top-left sub-block may be filled with the corresponding RPT transform coefficients.
- In this case, the empty space to be filled with 0 can be determined as the remaining area excluding the two 4x4 sub-blocks.
- the RPT kernel may be determined variably depending on the size and/or shape of the block, and as seen, the empty space may be determined differently for 8x8 blocks and 16x8 blocks.
- If the r value is a multiple of the CG size and the transform coefficients are scanned in CG units, a flag and/or index related to the RPT may not be signaled. That is, the transform coefficients inside a CG can be scanned according to a specified order for each CG, and the transform coefficients of the next CG can be scanned in the same way by moving to the next CG according to the CG-level scan order.
- Meanwhile, when RPT is applied, if a non-zero transform coefficient is detected in the empty space area that should be filled with 0, RPT may be considered not to be applied, and in this case signaling of information related to the RPT may be omitted. However, if a non-zero transform coefficient is not detected in the empty space area, it cannot be determined from the coefficients alone whether RPT is applied; therefore, after parsing (or signaling) the relevant transform coefficients, the flag indicating whether to apply RPT can be parsed to make the final determination.
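- The decoder-side inference described above can be sketched as follows for the 8x8, r = 16 example (the allowed region is the top-left 4x4 CG; other block sizes would use a different mask):

```python
import numpy as np

def rpt_inferred_off(coeffs):
    """coeffs: parsed transform coefficients of an 8x8 block (r = 16 example).

    Returns True if a non-zero coefficient lies outside the top-left 4x4 CG,
    in which case RPT is inferred not to apply and the RPT flag/index
    need not be parsed.
    """
    outside = coeffs.copy()
    outside[:4, :4] = 0          # exclude the region allowed to hold RPT coefficients
    return bool(np.any(outside != 0))
```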
- Meanwhile, a forward secondary transform may be additionally applied to the transform coefficients generated through the application of RPT.
- forward secondary transform can be additionally applied to the area where the corresponding generated transform coefficients are located in the M x N block.
- The corresponding region or a portion of the region may be referred to as the ROI from the perspective of the forward secondary transform.
- In the decoding process, the reverse secondary transformation can be applied first, and then the reverse RPT can be applied.
- forward secondary transformation can be applied by setting the area where r transformation coefficients generated by applying forward RPT are placed or a part of the area as ROI.
- For example, the 16 generated transformation coefficients can be located in the top-left 4x4 sub-block, and by setting the corresponding sub-block area as the ROI, the forward secondary transformation can be applied to the ROI.
- Meanwhile, the coefficient values of the RPT kernel may be adjusted considering integer operations or fixed-point operations. That is, rather than using the RPT kernel as a theoretical orthogonal or normalized transformation (i.e., a transformation in which the norm of each transformation basis vector is 1), the actual codec system can be configured to perform the transformation through integer arithmetic (or fixed-point arithmetic).
- the scaling factor that is multiplied when applying separate transformation in existing video compression technology can be equally reflected when applying RPT.
- separate transformation or non-separate transformation can be performed while maintaining processes other than transformation (e.g., quantization, dequantization process).
- the integerized coefficients of the RPT kernel can be obtained by multiplying the transformation basis vector by the scaling value described above.
- multiplying by a scaling value may include applying rounding, floor, ceiling, etc. operations to each kernel coefficient. That is, the integerized RPT kernel obtained through the above-described method can be defined and used in the conversion/inverse conversion process.
- The maximum and minimum values can be obtained over all kernel coefficients, and from the maximum and minimum values, the number of bits sufficient to express all kernel coefficients can be obtained. For example, if the maximum value is less than or equal to 127 and the minimum value is greater than or equal to -128, all integer kernel coefficients can be expressed with 8 bits (in particular, through 2's complement representation, etc.).
- More generally, if the maximum value is less than or equal to (2^(N-1) - 1) and the minimum value is greater than or equal to -2^(N-1), all integer kernel coefficients can be expressed with N bits. If the maximum value is greater than (2^(N-1) - 1) or the minimum value is less than -2^(N-1), all integer kernel coefficients cannot be expressed with N bits. In this case, 1) all kernel coefficients can be additionally multiplied by a scaling value to adjust them to fall within the N-bit range, or 2) the number of bits used to express the kernel coefficients can be increased (i.e., to N+1 bits or more).
- For example, all kernel coefficients can be expressed with 8 bits, 9 bits, 10 bits, etc. Of course, the scaling value of the kernel coefficients can be set differently for each block size or kernel, and the number of bits used to express the kernel coefficients can also be set differently.
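- The scaling and bit-width check described above can be sketched as follows; the scale factor of 256 is only an example and would in practice be aligned with the scaling used by the separable transforms:

```python
import numpy as np

def integerize_kernel(kernel, scale=256, bit_depth=8):
    """Scale and round real-valued kernel coefficients to integers.

    Returns (int_kernel, fits): fits indicates whether every coefficient lies
    in the two's-complement range of bit_depth bits; if not, the kernel can be
    rescaled or a larger bit depth can be used, as described above.
    """
    int_kernel = np.round(kernel * scale).astype(np.int64)
    lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    fits = int(int_kernel.min()) >= lo and int(int_kernel.max()) <= hi
    return int_kernel, fits
```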
- the NSPT described above may be applied based on at least one of the size, tree type, or component type of the current block. For example, whether to apply NSPT may be determined based on at least one of the size, tree type, or component type of the current block. Based on at least one of the size, tree type, or component type of the current block, the NSPT index may be signaled. Based on at least one of the size, tree type, or component type of the current block, an NSPT set or NSPT kernel may be derived.
- Allowed transform block sizes pre-defined in the decoding device can be broadly divided into two groups.
- One of the two groups (hereinafter referred to as the first group) may refer to a set of block sizes to which NSPT can be applied.
- the first group may be composed of one of the allowed transform block sizes, or may be composed of two or more block sizes among the allowed transform block sizes.
- the block size to which NSPT can be applied can be defined as a block size in which at least one of the width and height is less than or equal to a predetermined threshold.
- the block size to which NSPT can be applied may be defined as a block size where the product of width and height is less than or equal to a predetermined threshold.
- the block size to which NSPT can be applied may be defined as a block size where the maximum value of width and height is less than or equal to a predetermined threshold.
- the threshold may be an integer of 4, 8, 16, 32, 64, 128 or more.
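- As a sketch of one of the first-group definitions above (the width-times-height criterion; the threshold of 64 is only one example value, and an explicit list of sizes such as those enumerated below can be used instead):

```python
def nspt_allowed(width, height, threshold=64):
    # First-group test: NSPT is applicable when the product of width and height
    # does not exceed a predetermined threshold (64 admits e.g. 4x4, 4x8, 8x4,
    # 8x8, 4x16, and 16x4).
    return width * height <= threshold
```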
- the other of the two groups may refer to a set of block sizes to which NSPT is not applied.
- For example, the above-described separable primary transformation may be applied to block sizes belonging to the second group.
- non-separate quadratic transformation may be applied to all or some of the block sizes belonging to the second group.
- reverse NSPT may be applied to the (dequantized) transform coefficient of the current block.
- Alternatively, a reverse separable primary transform may be applied to the (dequantized) transform coefficients of the current block.
- That is, after the reverse non-separable secondary transform (e.g., the low frequency non-separable transform, LFNST) is applied, a reverse separable primary transform (e.g., DCT-2) can be applied to the obtained transform coefficients.
- the first group which is a set of block sizes to which NSPT can be applied, may be defined as a set of 4x4, 4x8, 8x4, and 8x8.
- the first group may be defined as a set of 4x8, 8x4, and 8x8.
- the first group may be defined as a set of 4x8 and 8x4.
- the first group may be defined as a set of 4x4, 4x8, 4x16, 8x4, 8x8, and 16x4.
- the first group may be defined as a set of 4x8, 4x16, 8x4, 8x8, and 16x4.
- the first group may be defined as a set of 4x8, 4x16, 8x4, and 16x4.
- the first group may be defined as a set of 4x4, 4x8, 8x4, 8x8, 8x16, 16x8, and 16x16.
- the first group may be defined as a set of 4x4, 4x8, 8x4, 8x8, 8x16, and 16x8.
- the first group may be defined as a set of 4x8, 8x4, 8x8, 8x16, and 16x8.
- the first group may be defined as a set of 4x8, 8x4, 8x16, and 16x8.
- the first group may be defined as a set of 4x4, 4x8, 8x4, 8x8, 8x16, 16x32, 32x16, and 32x32.
- the first group may be defined as a set of 4x4, 4x8, 8x4, 8x8, 8x16, 16x8, 16x16, 16x32, and 32x16.
- the first group may be defined as a set of 4x8, 8x4, 8x8, 8x16, 16x8, 16x16, 16x32, and 32x16.
- the first group may be defined as a set of 4x8, 8x4, 8x16, 16x8, 16x16, 16x32, and 32x16.
- the first group may be defined as a set of 4x8, 8x4, 8x16, 16x8, 16x32, and 32x16.
- the first group may be defined as a set of 4x4, 4x8, 4x16, 8x4, and 16x4.
- the first group may be defined as a set of 4x4, 4x8, 4x16, 8x4, 8x8, 8x16, 16x4, and 16x8.
- the first group may be defined as a set of 4x8, 4x16, 8x4, 8x8, 8x16, 16x4, and 16x8.
- the first group may be defined as a set of 4x4, 4x8, 4x16, 8x4, 8x16, 16x4, and 16x8.
- the first group may be defined as a set of 4x8, 4x16, 8x4, 8x16, 16x4, and 16x8.
- the first group may be defined as a set of 4x4, 4x8, 4x16, 8x4, 8x8, 8x16, 16x4, 16x8, and 16x16.
- the first group may be defined as a set of 4x8, 4x16, 8x4, 8x8, 8x16, 16x4, 16x8, and 16x16.
- the first group may be defined as a set of 4x4, 4x8, 4x16, 8x4, 8x16, 16x4, 16x8, and 16x16.
- the first group may be defined as a set of 4x8, 4x16, 8x4, 8x16, 16x4, 16x8, and 16x16.
- NSPT can be applied to MxN blocks and NxM blocks, which are non-square blocks.
- NSPT can be applied to 4x8 blocks and 8x4 blocks.
- NSPT may be applied to 4x16 blocks and 16x4 blocks
- NSPT may be applied to 8x16 blocks and 16x8 blocks
- NSPT may be applied to 16x32 blocks and 32x16 blocks.
- By applying NSPT to the specific block sizes belonging to the first group, a more precise transformation can be performed and coding performance can be improved.
- In the case of LFNST, the primary-transformed transform coefficients of the remaining regions other than the region to which LFNST is applied (i.e., the Region-Of-Interest, ROI) may be zeroed out, and the LFNST may be composed of a small number of transform basis vectors. In this case, if a separable primary transform such as DCT-2 and a non-separable secondary transform such as LFNST are applied instead of NSPT for the corresponding block sizes, performance may be degraded.
- If NSPT is applied instead of LFNST in this case, the zero-out process is omitted, and coding performance can be improved compared to the case of applying LFNST.
- performance improvement can be expected by applying NSPT.
- NSPT or LFNST can be applied using symmetry, which will be described later.
- In the case of LFNST, a transpose operation using symmetry is performed on the corresponding input block only for the ROI area.
- In the case of NSPT, the transpose operation using symmetry is performed on the entire block. Therefore, in the case of NSPT, a performance improvement can be expected because the corresponding NSPT kernel can be trained and applied using a more sophisticated symmetry method.
- a 32x64 transformation matrix can be applied instead of a 16x64 transformation matrix from the perspective of forward transformation.
- the 16x64 transformation matrix can be constructed by sampling the top 16 rows of the 32x64 transformation matrix.
- NSPT may be applied to the luma component of the current block, and NSPT may not be applied to the chroma component of the current block. If the tree type of the current block is a dual tree, NSPT can be applied to the luma component and chroma component of the current block.
- NSPT may be applied to the luma component of the current block and NSPT may not be applied to the chroma component of the current block.
- NSPT may be applied to the luma component and chroma component of the current block, regardless of whether the tree type of the current block is a single tree.
- If the tree type of the current block is a single tree, NSPT may be allowed for both the luma component and the chroma component. In this case, if the size of the current block belongs to the first group, one NSPT index may be signaled, and the luma component and the chroma component of the current block may share the corresponding NSPT index.
- the NSPT index may be an index for selecting one of the transformation kernel candidates for NSPT. If the sizes of the luma block and the chroma block of the current block belong to the first group, the transform kernel candidate selected by the same NSPT index may be applied to the luma component and the chroma component.
- LFNST may not be applied to the chroma component of the current block, and separate transformation may be applied.
- LFNST may be applied to the chroma component of the current block.
- In the case of a single tree, there may be a high correlation between the luma component and the chroma component. In this case, unnecessary signaling can be reduced and compression efficiency can be improved by applying NSPT only to the luma component, or by commonly applying the transformation kernel candidate selected by one NSPT index to both the luma component and the chroma component.
- In the case of a dual tree, the luma component and the chroma component have independent partitioning and encoding structures. In this case, by signaling the NSPT index for each component, the characteristics of each component can be reflected and compression efficiency can be improved.
- the NSPT kernel for the NSPT may be derived based on at least one of symmetry between intra prediction modes or symmetry between block types.
- the NSPT kernel may be derived as an NSPT kernel corresponding to at least one of a mode that is symmetrical to the intra prediction mode of the current block or a block type that is symmetrical to the block type of the current block.
- the NSPT kernel may be derived based on an NSPT set including one or more NSPT kernel candidates, where the NSPT set may be derived as the NSPT set corresponding to at least one of a mode that is symmetric to the intra prediction mode of the current block or a block type that is symmetric to the block shape of the current block.
- Any one of one or more NSPT kernel candidates belonging to the NSPT set may be set as the NSPT kernel of the current block.
- an NSPT index that specifies any one of one or more NSPT kernel candidates belonging to the NSPT set can be used.
- the NSPT index may be signaled through a bitstream or may be derived based on the symmetry described above.
- There may be symmetry between at least two of the intra prediction modes pre-defined in the decoding device.
- For example, there may be symmetry centered on mode 34, the upper-left diagonal mode.
- Modes 2 to 66 may be called normal directional modes (written as [2, 66]), and modes -14 to -1 (written as [-14, -1]) and modes 67 to 80 (written as [67, 80]) may be called wide directional modes.
- the wide directional mode may include at least one of a mode with a value less than -14 or a mode with a value greater than 80.
- Except for mode 0 and mode 1, all modes are symmetrical around mode 34.
- Within the [2, 66] modes, the x mode and the (68 - x) mode are symmetrical, and between the [-14, -1] modes and the [67, 80] modes, the x mode and the (66 - x) mode are symmetrical (see the sketch below).
- the same symmetry relationship can be established between the [N, -1] mode and the [67, 66 - N] mode.
- N may be an integer less than or equal to -14.
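- As a minimal sketch of the mode symmetry just described, the following C function maps a directional mode to its symmetric counterpart; the function name is illustrative, and the mapping simply restates the (68 - x) and (66 - x) relationships above.

```c
/* Return the mode that is symmetric to mode x around mode 34 (upper-left diagonal). */
int symmetric_mode(int x)
{
    if (x >= 2 && x <= 66)
        return 68 - x;      /* normal directional modes, e.g. 18 <-> 50; 34 maps to itself */
    if (x < 0 || x > 66)
        return 66 - x;      /* wide directional modes, e.g. -1 <-> 67, -14 <-> 80 */
    return x;               /* modes 0 and 1 are excluded from the symmetry */
}
```

With this mapping, mode 34 is its own counterpart, which matches the self-symmetry of the upper-left diagonal direction.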
- MxN blocks and NxM blocks can be defined as blocks with symmetry to each other.
- M and N may be the same or different from each other.
- the M1xN1 block and the M2xN2 block can be defined as blocks with symmetry to each other.
- modes that are symmetrical to each other may share at least one of an NSPT set, NSPT index, or NSPT kernel. That is, at least one of the NSPT set, NSPT index, or NSPT kernel for one of the symmetric modes can be applied equally to the other one of the symmetric modes.
- modes that are symmetrical to each other may share one NSPT kernel.
- For one of the two symmetric modes, the corresponding NSPT kernel can be applied directly to the input data, and for the other mode, the corresponding NSPT kernel can be applied after applying a transpose operation to the input data.
- For example, if the x mode belongs to the [2, 33] modes, a 1D vector can be constructed from the input data in column-first order, and the NSPT kernel can be applied to the 1D vector.
- Constructing the 1D vector in column-first order may mean reading the input data column by column from the MxM input block to obtain M columns and arranging them in order to form the 1D vector.
- For the (68 - x) mode that is symmetrical to the x mode, a 1D vector can be constructed in row-first order, and the same NSPT kernel can be applied to the 1D vector.
- Constructing the 1D vector in row-first order may mean reading the input data row by row from the MxM input block to obtain M rows and arranging them in order to form the 1D vector. If the x mode belongs to the [N, -1] modes (N ≤ -14), for the (66 - x) mode that is symmetrical to the x mode, a 1D vector is constructed in row-first order.
- the same NSPT kernel as in the case of mode x can be applied to the corresponding 1D vector.
- Column-first order or row-first order can be applied to modes 0 and 1, and column-first order or row-first order can also be applied to mode 34. Additionally, row-first order may be applied to an intra prediction mode belonging to the [2, 33] modes, and column-first order may be applied to the mode symmetric to the corresponding intra prediction mode. Row-first order can be applied to intra prediction modes belonging to the [N, -1] modes, and column-first order can be applied to the modes that are symmetrical to these. Both read orders are illustrated in the sketch below.
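- In the following sketch, the row-major buffer layout of the input block and the function names are assumptions made for illustration only.

```c
/* Column-first: read the block column by column (width columns of length height). */
void to_1d_column_first(const int *block, int width, int height, int *out)
{
    int k = 0;
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
            out[k++] = block[y * width + x];
}

/* Row-first: read the block row by row (height rows of length width); this is
 * equivalent to transposing the block and then reading it column-first. */
void to_1d_row_first(const int *block, int width, int height, int *out)
{
    int k = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            out[k++] = block[y * width + x];
}
```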
- symmetry between block types may be further considered in addition to symmetry between intra prediction modes.
- A non-square block with width M and height N can be viewed as having a symmetrical relationship with a non-square block with width N and height M.
- symmetry may exist between the x mode of the MxN block and the (68 - x) mode of the NxM block.
- symmetry may exist between the x mode of the MxN block and the (66 - x) mode of the NxM block.
- Here, each column can have a length of N and each row can have a length of M, and the same lengths apply to the symmetric block after the transpose operation.
- In this case, the NSPT kernel may be set as the NSPT kernel for the NxM block rather than the NSPT kernel for the MxN block. That is, when symmetry is used for the current block, the NSPT set and/or NSPT kernel for the block that is symmetric to the current block can be used in the same way.
- a 1D vector can be constructed from input data blocks according to a predetermined priority order, and this can correspond to the input of the NSPT kernel.
- For example, the use of symmetry may be limited to the case where the value of the intra prediction mode of the current block is greater than 34. That is, if the value of the intra prediction mode of the current block is greater than 34, a transpose operation can be applied when constructing a 1D vector from the input data block, and an NSPT set or NSPT kernel corresponding to a mode and/or block shape that is symmetric to that of the current block may be used. Specifically, if the intra prediction mode of the current block belongs to the [N, -1] modes or the [2, 34] modes, symmetry may not be used for the current block. On the other hand, if the intra prediction mode of the current block belongs to the [35, 66] modes or the [67, 66 - N] modes, symmetry can be used for the current block.
- N may be an integer less than or equal to -14.
- Derivation of the NSPT set or NSPT kernel based on the symmetry may be performed adaptively based on the size of the current block. For example, for 4x4 blocks and 8x8 blocks, an NSPT set or NSPT kernel may be derived based on symmetry, and for 4x8 blocks and 8x4 blocks, an NSPT set or NSPT kernel may not be derived based on symmetry.
- the number of NSPT sets available may be different. For example, if symmetry is used, the number of available NSPT sets may be 35, and if symmetry is not used, the number of available NSPT sets may be 67.
- Table 9 relates to an example in which the NSPT set is determined using symmetry, and shows the mapping relationship between the intra prediction mode and the NSPT set when the number of available NSPT sets is 35.
Intra prediction mode | NSPT set index
---|---
X < 0 | 2
0 ≤ X ≤ 34 | X
35 ≤ X ≤ 66 | 68 - X
X > 66 | 2
- If the value (X) of the intra prediction mode of the current block is less than 0, the NSPT set of the current block may be determined to be the NSPT set with an NSPT set index of 2 among the 35 NSPT sets. If the value (X) of the intra prediction mode of the current block is greater than or equal to 0 and less than or equal to 34, the NSPT set of the current block may be determined as the NSPT set with an NSPT set index of X among the 35 NSPT sets. If the value (X) of the intra prediction mode of the current block is greater than or equal to 35 and less than or equal to 66, the NSPT set of the current block may be determined as the NSPT set with an NSPT set index of (68 - X) among the 35 NSPT sets.
- That is, in this case, the NSPT set of the current block may be the same as the NSPT set of the mode (68 - X) that is symmetric to the intra prediction mode of the current block.
- If the value (X) of the intra prediction mode of the current block is greater than 66, the NSPT set of the current block may be determined to be the NSPT set with an NSPT set index of 2 among the 35 NSPT sets. In this case, the NSPT set of the current block may be the same as the NSPT set corresponding to the mode that is symmetrical to the intra prediction mode of the current block.
- the following Table 10 relates to an example in which the NSPT set is determined without using symmetry, and shows the mapping relationship between the intra prediction mode and the NSPT set when the number of available NSPT sets is 67.
Intra prediction mode | NSPT set index
---|---
X < 0 | 2
0 ≤ X ≤ 66 | X
X > 66 | 66
- the NSPT set of the current block may be determined to be an NSPT set with an NSPT set index of 2 among 67 NSPT sets. If the value (X) of the intra prediction mode of the current block is greater than or equal to 0 and less than or equal to 66, the NSPT set of the current block may be determined as the NSPT set with the NSPT set index of X among 67 NSPT sets. If the value (X) of the intra prediction mode of the current block is greater than 66, the NSPT set of the current block may be determined as the NSPT set with an NSPT set index of 66 among 67 NSPT sets.
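- For illustration, the two mappings of Table 9 and Table 10 can be written directly as C functions; X is the intra prediction mode value, and the code merely restates the tables above.

```c
/* Table 9: 35 NSPT sets, symmetry is used. */
int nspt_set_index_with_symmetry(int X)
{
    if (X < 0)   return 2;
    if (X <= 34) return X;
    if (X <= 66) return 68 - X;   /* symmetric mode reuses the same set */
    return 2;                     /* X > 66 */
}

/* Table 10: 67 NSPT sets, symmetry is not used. */
int nspt_set_index_without_symmetry(int X)
{
    if (X < 0)   return 2;
    if (X <= 66) return X;
    return 66;                    /* X > 66 */
}
```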
- According to the above example, the memory size required for storing the transform kernels can be saved while maintaining the performance of the transform. For example, if 35 NSPT sets are used instead of 67 NSPT sets by utilizing symmetry, the memory size required to store the NSPT kernels can be significantly reduced.
- the number of available NSPT sets and/or the number of NSPT kernel candidates belonging to the NSPT set may vary depending on the block size.
- the number of NSPT sets available for a 4x4 block may be 35
- the number of NSPT sets available for a 4x8 block and an 8x4 block may be 19
- the number of NSPT sets available for an 8x8 block may be 10.
- the NSPT set for a 4x4 block may consist of three NSPT kernel candidates
- the NSPT set for a 4x8 block and an 8x4 block may consist of three or two NSPT kernel candidates
- the NSPT set for an 8x8 block may consist of one NSPT kernel candidate.
- As the block size increases, the size of the transform kernel may increase. Accordingly, by reducing the number of available NSPT sets and/or the number of NSPT kernel candidates belonging to an NSPT set, the memory size required to store the transform kernels can be saved. Additionally, as the block size increases, the residual signal characteristics within the block tend to become more general. Therefore, reducing the number of available NSPT sets and/or the number of NSPT kernel candidates belonging to an NSPT set may help maintain compression efficiency while reducing implementation complexity by reflecting these statistical characteristics.
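- A block-size-dependent configuration of this kind could be kept in a small table, as in the sketch below; the structure name and the constant values simply restate the example counts given above and are not normative.

```c
typedef struct {
    int width, height;
    int num_sets;        /* number of available NSPT sets        */
    int num_candidates;  /* NSPT kernel candidates per NSPT set  */
} NsptConfig;

/* Example counts taken from the text above (4x8/8x4 may also use 2 candidates). */
static const NsptConfig kNsptConfigs[] = {
    { 4, 4, 35, 3 },
    { 4, 8, 19, 3 },
    { 8, 4, 19, 3 },
    { 8, 8, 10, 1 },
};
```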
- the NSPT kernel can be configured with 8-bit precision.
- the range of coefficients in the NSPT kernel can be greater than or equal to -128 and less than or equal to 127.
- If the kernel coefficients are configured with a precision higher than 8 bits, the result obtained through matrix multiplication can be shifted to the right by the increased precision. For example, if a value obtained after matrix multiplication is shifted to the right by S bits and stored in a buffer for an NSPT kernel with 8-bit precision, then for kernel coefficients configured with N-bit precision, the value can be shifted to the right by (S + (N - 8)) bits and stored in the buffer.
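- The precision-dependent shift described above can be expressed compactly as in the following sketch; S is the baseline shift defined for the 8-bit kernel, and rounding offsets are omitted from this illustration.

```c
/* Extra down-shift needed when kernel coefficients use N-bit instead of 8-bit precision. */
static inline int nspt_output_shift(int baseline_shift_s, int kernel_bit_depth_n)
{
    return baseline_shift_s + (kernel_bit_depth_n - 8);
}

/* Store one accumulated product sum into the output buffer after the shift. */
static inline int store_after_shift(long long acc, int shift)
{
    return (int)(acc >> shift);
}
```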
- the size of the NSPT kernel (or NSPT matrix) can be expressed as MN x r.
- MN may mean the product of the width and height of the current block. This may mean the output length of NSPT or the number of residual samples generated by NSPT.
- r may mean the input length of NSPT or the number of (dequantized) transformation coefficients to which NSPT is applied. r may be an integer greater than or equal to 0 and less than or equal to MN. The following is an example of the NSPT matrix of MN x r according to block size.
- the NSPT matrix for a 4x4 block may consist of a 16x16 matrix.
- the NSPT matrix for the 4x8 block and 8x4 block may consist of a 32x20 matrix, a 32x16 matrix, a 32x24 matrix, a 32x28 matrix, or a 32x32 matrix.
- the NSPT matrix for an 8x8 block may consist of a 64x16 matrix, a 64x24 matrix, a 64x32 matrix, a 64x40 matrix, a 64x48 matrix, a 64x56 matrix, or a 64x64 matrix.
- the NSPT matrix for the 4x16 block and 16x4 block may consist of a 64x16 matrix, a 64x24 matrix, a 64x32 matrix, a 64x40 matrix, a 64x48 matrix, a 64x56 matrix, or a 64x64 matrix.
- the NSPT matrix for the 8x16 block and 16x8 block may consist of a 128x96 matrix, a 128x64 matrix, a 128x48 matrix, or a 128x32 matrix.
- the NSPT matrix for a 16x16 block may consist of a 256x128 matrix, a 256x96 matrix, or a 256x64 matrix.
- NSPT matrices for 16x32 blocks and 32x16 blocks can be composed of a 512x256 matrix or a 512x128 matrix.
- the NSPT matrix for a 32x32 block may consist of a 1024x512 matrix, a 1024x256 matrix, or a 1024x128 matrix.
- a 16x16 matrix can be applied to 4xN blocks and Nx4 blocks.
- N may be an integer greater than or equal to 4.
- a 64x16 matrix can be applied to an 8x8 block.
- a 64x32 matrix can be applied to 8xN blocks and Nx8 blocks.
- N may be an integer greater than or equal to 16.
- a 96x32 matrix can be applied to 16xN blocks and Nx16 blocks.
- N may be an integer greater than or equal to 16.
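- Independently of the exact dimensions listed above, the inverse NSPT can be viewed as a single MN x r matrix-vector product, as in the following sketch; the row-major kernel layout, the integer types, and the final shift are assumptions of this illustration.

```c
#include <stdint.h>

/* Inverse NSPT sketch: an MN x r kernel maps r (dequantized) transform
 * coefficients to MN residual samples. */
void inverse_nspt(const int8_t *kernel,  /* MN x r, row-major                 */
                  int mn, int r,
                  const int *coeff,      /* r input transform coefficients    */
                  int *residual,         /* mn output residual samples        */
                  int shift)             /* precision-dependent down-shift    */
{
    for (int i = 0; i < mn; i++) {
        long long acc = 0;
        for (int j = 0; j < r; j++)
            acc += (long long)kernel[i * r + j] * coeff[j];
        residual[i] = (int)(acc >> shift);
    }
}
```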
- Table 11 is an example of NSPT kernels that can be applied to a 4x16 block.
- the coefficients that make up the NSPT kernel are expressed with 8-bit precision.
- the coefficients of the NSPT kernel can have values from -128 to 127.
- the total number of pre-defined NSPT sets for the encoding device and the decoding device is 35, and each NSPT set may be composed of three NSPT kernel candidates.
- g_nspt4x16[ 35 ][ 3 ][ 24 ][ 64 ] may represent NSPT kernels that can be applied to a 4x16 block.
- In this case, the NSPT set and/or NSPT kernel may be determined considering the above-described symmetry. For example, if the current block is a 4x16 block and has a specific intra prediction mode (e.g., a mode with vertical directionality), the NSPT set and/or NSPT kernel for the block that is symmetric to the current block, i.e., a 16x4 block, may be applied equally to the current block. At this time, the input data of the current block is transposed and then multiplied by the corresponding NSPT kernel.
- [35] indicates that it consists of 35 NSPT sets
- [3] indicates that each NSPT set consists of 3 NSPT kernel candidates.
- // k may denote the NSPT set with a set index of k, and k may range from 0 to 34.
- the three matrices belonging to the // k entry can represent NSPT kernel candidates with transform indices of 0, 1, and 2, respectively.
- [24][64] can represent a 24x64 matrix, expressed from the perspective of the forward NSPT matrix.
- the first [24] may indicate 24 transformation basis vectors
- the second [64] may indicate that each transformation basis vector is a one-dimensional vector composed of 64 coefficients. This can indicate that, from the perspective of the forward NSPT, the input to the forward NSPT is a one-dimensional vector with a length of 64.
- the reverse NSPT matrix can be derived by transposing the NSPT kernel selected from the g_nspt4x16 array below. In this case, the reverse NSPT matrix becomes a 64x24 matrix.
- const int8_t g_nspt4x16[ 35 ][ 3 ][ 24 ][ 64 ] =
  {
    { // 0
      {
        { 1,1,2,3,1,3,4,5,2,4,6,9,3,6,10,14,4,9,15,20,6,14,21,27,8,19,29,35,10,25,38,44,12,30,45,51,13,33,50,55,14,35,53,57,15,37,54,58,15,37,54,58,15,37,53,57,15,36,51,54,15,34,47,51 },
        { 5,10,12,9,9,17,20,16,13,24,28,23,16,32,38,31,20,40,48,38,24,47,55,42,26,51,58,41,26,51,54,32,25,46,44,20,21,36,29,7,15,22,10,-8,8,5,-11
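- Assuming the array above is fully populated, the 64x24 reverse kernel described earlier could be obtained by a simple transpose, as in the following sketch; the function name and the caller-provided output buffer are illustrative assumptions.

```c
#include <stdint.h>

extern const int8_t g_nspt4x16[35][3][24][64];  /* forward kernels, as declared above */

/* Build the 64x24 reverse NSPT matrix for a 4x16 block by transposing the
 * selected 24x64 forward kernel. */
void build_reverse_kernel_4x16(int set_idx, int kernel_idx, int8_t rev[64][24])
{
    const int8_t (*fwd)[64] = g_nspt4x16[set_idx][kernel_idx];
    for (int r = 0; r < 24; r++)
        for (int c = 0; c < 64; c++)
            rev[c][r] = fwd[r][c];
}
```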
- Transform coefficients can be derived by applying forward NSPT to the residual samples of the MxN block.
- Due to zero-out, the number of derived transform coefficients may be less than or equal to (M*N).
- Here, the forward NSPT matrix can be defined as an r x MN matrix, where r may mean the number of transform coefficients generated by forward NSPT, and MN may mean the product of the width and height of the block. Alternatively, MN may mean the number of residual samples to which NSPT is applied.
- The derived transform coefficients can be arranged inside the MxN block according to a predetermined scan order, and the positions not filled with transform coefficients can be filled with 0 (i.e., zero-out). Therefore, in the process of scanning the transform coefficients in the decoding device, if a non-zero transform coefficient is found in an area that would have been filled with 0 had NSPT been applied (or if the scan position of the last significant coefficient in the MxN block is greater than or equal to r), NSPT is considered not to have been applied to the corresponding MxN block, and the NSPT index may not be signaled.
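- A decoder-side sketch of the zero-out handling described above is given below; the scan table, the parsing condition, and the function names are assumptions used only for illustration.

```c
#include <stdbool.h>

/* Place r transform coefficients into an MxN block along a scan order and fill
 * the remaining (zero-out) positions with 0. scan[i] is the raster position of
 * scan index i. */
void place_coeffs_with_zero_out(const int *coeff, int r, const int *scan,
                                int *block, int mn)
{
    for (int i = 0; i < mn; i++)
        block[i] = 0;
    for (int i = 0; i < r && i < mn; i++)
        block[scan[i]] = coeff[i];
}

/* If the last significant coefficient lies at scan position r or beyond, NSPT is
 * treated as not applied and the NSPT index is not parsed. */
bool nspt_index_may_be_signaled(int last_significant_scan_pos, int r)
{
    return last_significant_scan_pos < r;
}
```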
- the transformation kernel of the current block may be determined based on any one of the above-described embodiments 1 to 3.
- The transform kernel of the current block may also be determined based on a combination of at least two of embodiments 1 to 3, to the extent that the embodiments 1 to 3 described above do not conflict with each other.
- the current block can be restored based on the residual sample of the current block (S420).
- the prediction sample of the current block can be derived based on the intra prediction mode of the current block. Based on the prediction sample and residual sample of the current block, a restored sample of the current block can be generated.
- FIG. 6 shows a schematic configuration of a decoding device 300 that performs the video decoding method according to the present disclosure.
- the decoding device 300 may include a transform coefficient deriving unit 600, a residual sample deriving unit 610, and a restored block generating unit 620.
- the transform coefficient deriving unit 600 may be configured in the entropy decoding unit 310 of FIG. 3
- the residual sample deriving unit 610 may be configured in the residual processing unit 320 of FIG. 3
- the restored block generating unit 620 may be configured in the adder 340 of FIG. 3.
- the transform coefficient deriving unit 600 may obtain residual information of the current block from the bitstream, decode it, and derive the transform coefficient of the current block.
- the residual sample deriving unit 610 may perform at least one of inverse quantization and inverse transformation on the transform coefficient of the current block to derive the residual sample of the current block.
- the residual sample deriving unit 610 may determine a transform kernel for inverse transform of the current block through a predetermined transform kernel determination method and derive the residual sample of the current block based on this. This is the same as discussed with reference to FIG. 4, and detailed description will be omitted here.
- the restored block generator 620 may restore the current block based on the residual sample of the current block.
- FIG. 7 illustrates an image encoding method performed by the encoding device 200 as an embodiment according to the present disclosure.
- The residual sample of the current block can be derived by subtracting the prediction samples from the original samples of the current block.
- the prediction sample may be derived based on a predetermined intra prediction mode.
- transform coefficients of the current block can be derived by performing at least one of transformation or quantization on the residual sample of the current block (S710).
- The transform method according to the present disclosure can be understood as the reverse process of the inverse transform described with reference to FIG. 4.
- The method of determining the transform kernel for the transform is as described with reference to FIG. 4, and a detailed explanation will be omitted here.
- one or more transform sets for transforming the current block may be defined/configured, and each transform set may include one or more transform kernel candidates.
- one of the plurality of transform sets may be selected as the transform set of the current block. Any one of a plurality of transformation kernel candidates belonging to the transformation set of the current block may be selected. The selection may be performed implicitly based on the context of the current block. Alternatively, the optimal transformation set and/or transformation kernel candidate for the current block may be selected, and an index indicating this may be signaled.
- the transformation kernel of the current block may be determined based on the MTS set. Based on at least one of the size of the current block or the intra prediction mode, one of a plurality of MTS sets may be selected.
- the selected MTS set may include one or more transformation kernel candidates. Any one of the one or more transform kernel candidates may be selected, and the transform kernel of the current block may be determined based on the selected transform kernel candidate. Selection of the transform kernel candidate may be performed using a transform kernel candidate index derived based on the context of the current block.
- the optimal transform kernel candidate for the current block may be selected, and a transform kernel candidate index indicating the selected transform kernel candidate may be signaled.
- the transform kernel of the current block may be determined based on a non-separable primary transform (NSPT) kernel.
- If the size of the current block belongs to the first group, which is a set of block sizes to which NSPT can be applied, forward NSPT can be applied to the current block.
- If the size of the current block belongs to the second group, forward NSPT may not be applied to the current block.
- Transform coefficients can be derived by applying a forward separable primary transform (e.g., DCT-2) to the residual sample of the current block.
- Forward LFNST may additionally be applied to all or some of the transform coefficients derived through the separable primary transform.
- the NSPT may be applied based on at least one of the tree type or component type of the current block.
- the NSPT kernel (or NSPT matrix) for the NSPT may be determined using symmetry between intra prediction modes or symmetry between block types.
- the NSPT kernel can be expressed as r x MN.
- r means the output length of NSPT or the number of transformation coefficients generated by NSPT
- MN is the product of the width and height of the current block, and may mean the input length of NSPT or the number of residual samples to which NSPT is applied.
- the method for determining the size of the NSPT kernel is as described with reference to FIG. 4.
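- A forward-direction counterpart of the earlier inverse sketch, matching the r x MN kernel just described, might look as follows; as before, the kernel layout, integer types, and shift are assumptions of this illustration.

```c
#include <stdint.h>

/* Forward NSPT sketch: an r x MN kernel maps MN residual samples (already read
 * into a 1D vector) to r transform coefficients; the remaining positions are
 * zeroed out. */
void forward_nspt(const int8_t *kernel,   /* r x MN, row-major               */
                  int r, int mn,
                  const int *residual,    /* mn input residual samples       */
                  int *coeff,             /* mn output coefficients          */
                  int shift)
{
    for (int i = 0; i < r; i++) {
        long long acc = 0;
        for (int j = 0; j < mn; j++)
            acc += (long long)kernel[i * mn + j] * residual[j];
        coeff[i] = (int)(acc >> shift);
    }
    for (int i = r; i < mn; i++)
        coeff[i] = 0;   /* zero-out region */
}
```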
- a bitstream can be generated by encoding the transform coefficient of the current block (S720).
- residual information regarding the transform coefficient can be generated, and a bitstream can be generated by encoding the residual information.
- FIG. 8 shows a schematic configuration of an encoding device 200 that performs the video encoding method according to the present disclosure.
- the encoding device 200 may include a residual sample deriving unit 800, a transform coefficient deriving unit 810, and a transform coefficient encoding unit 820.
- The residual sample deriving unit 800 and the transform coefficient deriving unit 810 may be configured in the residual processing unit 230 of FIG. 2, and the transform coefficient encoding unit 820 may be configured in the entropy encoding unit 240 of FIG. 2.
- The residual sample deriving unit 800 may derive the residual sample of the current block by subtracting the prediction sample from the original sample of the current block.
- the prediction sample may be derived based on a predetermined intra prediction mode.
- the transform coefficient deriving unit 810 may derive the transform coefficient of the current block by performing at least one of transformation or quantization on the residual sample of the current block.
- The transform coefficient deriving unit 810 may determine the transform kernel of the current block based on at least one of the above-described embodiments 1 to 3, and derive the transform coefficients by applying the transform kernel to the residual sample of the current block.
- the transform coefficient encoding unit 820 may generate a bitstream by encoding the transform coefficient of the current block.
- The methods are described based on a flowchart as a series of steps or blocks, but the embodiments are not limited to the order of the steps, and some steps may occur in a different order from or simultaneously with other steps as described above. Additionally, those skilled in the art will understand that the steps shown in the flowchart are not exclusive and that other steps may be included, or one or more steps in the flowchart may be deleted, without affecting the scope of the embodiments of the present document.
- The method according to the embodiments of the present document described above may be implemented in software form, and the encoding device and/or decoding device according to the present document may be included in a device that performs image processing, such as a TV, computer, smartphone, set-top box, or display device.
- the above-described method may be implemented as a module (process, function, etc.) that performs the above-described function.
- Modules are stored in memory and can be executed by a processor.
- Memory may be internal or external to the processor, and may be connected to the processor by a variety of well-known means.
- a processor may include an application-specific integrated circuit (ASIC), other chipset, logic circuitry, and/or data processing devices.
- Memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and/or other storage devices. That is, the embodiments described in this document may be implemented and performed on a processor, microprocessor, controller, or chip.
- the functional units shown in each drawing may be implemented and performed on a computer, processor, microprocessor, controller, or chip.
- information for implementation (ex. information on instructions) or algorithm may be stored in a digital storage medium.
- The decoding device and the encoding device to which the embodiment(s) of the present specification are applied may be included in multimedia broadcasting transmission and reception devices, mobile communication terminals, home cinema video devices, digital cinema video devices, surveillance cameras, video chat devices, real-time communication devices for video communication, mobile streaming devices, storage media, camcorders, video on demand (VoD) service providing devices, OTT (over-the-top) video devices, Internet streaming service providing devices, three-dimensional (3D) video devices, VR (virtual reality) devices, AR (augmented reality) devices, video phone video devices, transportation terminals, and the like.
- OTT video (Over the top video) devices may include game consoles, Blu-ray players, Internet-connected TVs, home theater systems, smartphones, tablet PCs, and digital video recorders (DVRs).
- the processing method to which the embodiment(s) of the present specification are applied may be produced in the form of a program executed by a computer and stored in a computer-readable recording medium.
- Multimedia data having a data structure according to the embodiment(s) of the present specification may also be stored in a computer-readable recording medium.
- the computer-readable recording medium includes all types of storage devices and distributed storage devices that store computer-readable data.
- The computer-readable recording media include, for example, Blu-ray Disc (BD), Universal Serial Bus (USB), ROM, PROM, EPROM, EEPROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage devices.
- the computer-readable recording medium includes media implemented in the form of a carrier wave (eg, transmitted via the Internet).
- the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
- embodiment(s) of this specification may be implemented as a computer program product by program code, and the program code may be executed on a computer by the embodiment(s) of this specification.
- the program code may be stored on a carrier readable by a computer.
- Figure 9 shows an example of a content streaming system to which embodiments of the present disclosure can be applied.
- the content streaming system may broadly include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
- the encoding server compresses content input from multimedia input devices such as smartphones, cameras, camcorders, etc. into digital data, generates a bitstream, and transmits it to the streaming server.
- When multimedia input devices such as smartphones, cameras, and camcorders directly generate bitstreams, the encoding server may be omitted.
- The bitstream may be generated by an encoding method or a bitstream generation method to which the embodiment(s) of the present specification is applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
- The streaming server transmits multimedia data to the user device based on a user request made through the web server, and the web server serves as a medium informing the user of what services are available.
- When the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits multimedia data to the user.
- the content streaming system may include a separate control server, and in this case, the control server serves to control commands/responses between each device in the content streaming system.
- the streaming server may receive content from a media repository and/or encoding server. For example, when receiving content from the encoding server, the content can be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a certain period of time.
- Examples of the user device include mobile phones, smartphones, laptop computers, digital broadcasting terminals, personal digital assistants (PDAs), portable multimedia players (PMPs), navigation devices, slate PCs, tablet PCs, ultrabooks, wearable devices (e.g., smartwatches, smart glasses, head mounted displays), digital TVs, desktop computers, digital signage, and the like.
- Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributedly processed.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An image decoding method and device according to the present disclosure may derive transform coefficients of a current block from a bitstream, determine a non-separable primary transform (NSPT) set defined for the current block, derive residual samples of the current block by applying an inverse NSPT to at least one of the transform coefficients of the current block on the basis of the NSPT set, and restore the current block on the basis of the residual samples of the current block. In this case, the inverse NSPT may be applied on the basis of whether a size of the current block is included in a group of one or more block sizes to which the NSPT can be applied.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263434072P | 2022-12-20 | 2022-12-20 | |
US63/434,072 | 2022-12-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024136481A1 true WO2024136481A1 (fr) | 2024-06-27 |
Family
ID=91589376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2023/021164 WO2024136481A1 (fr) | 2022-12-20 | 2023-12-20 | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024136481A1 (fr) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220264093A1 (en) * | 2017-07-03 | 2022-08-18 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
KR102467334B1 (ko) * | 2019-09-21 | 2022-11-16 | 엘지전자 주식회사 | 변환에 기반한 영상 코딩 방법 및 그 장치 |
KR20210135320A (ko) * | 2019-11-27 | 2021-11-12 | 텐센트 아메리카 엘엘씨 | 비디오 코딩 방법 및 장치 |
KR20220165273A (ko) * | 2020-04-07 | 2022-12-14 | 엘지전자 주식회사 | 변환에 기반한 영상 코딩 방법 및 그 장치 |
Non-Patent Citations (1)
Title |
---|
P. GARUS (QUALCOMM), M. COBAN (QUALCOMM), B. RAY (QUALCOMM), V. SEREGIN (QUALCOMM), M. KARCZEWICZ (QUALCOMM): "Non-EE2: Non-Separable Primary Transform for Intra Coding", 28. JVET MEETING; 20221021 - 20221028; MAINZ; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 18 October 2022 (2022-10-18), XP030304757 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020218793A1 (fr) | Procédé de codage basé sur une bdpcm et dispositif associé | |
WO2020246849A1 (fr) | Procédé de codage d'image fondé sur une transformée et dispositif associé | |
WO2021096172A1 (fr) | Procédé de codage d'image basé sur une transformée, et dispositif associé | |
WO2020116961A1 (fr) | Procédé de codage d'image basé sur une une transformée secondaire et dispositif associé | |
WO2020242183A1 (fr) | Procédé et dispositif de codage d'image sur la base d'une intraprédiction à grand angle et d'une transformée | |
WO2020060282A1 (fr) | Procédé de codage de niveau facteur de conversion, et dispositif associé | |
WO2020197274A1 (fr) | Procédé de codage d'image basé sur des transformations et dispositif associé | |
WO2020256482A1 (fr) | Procédé de codage d'image basé sur une transformée et dispositif associé | |
WO2020185005A1 (fr) | Procédé de codage d'images basé sur une transformée et dispositif associé | |
WO2021025528A1 (fr) | Procédé de codage d'images sur la base d'une transformée, et dispositif associé | |
WO2024136481A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2024136476A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2024136484A1 (fr) | Procédé et appareil de codage/décodage d'images, et support d'enregistrement sur lequel a été stocké un flux binaire | |
WO2024136486A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est enregistré | |
WO2024136475A1 (fr) | Procédé et dispositif de codage/décodage d'image, support d'enregistrement pour stocker un flux binaire | |
WO2024136473A1 (fr) | Procédé et dispositif d'encodage/de décodage des images, support d'enregistrement sur lequel est stocké un flux binaire | |
WO2024136483A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est enregistré | |
WO2024112171A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2020130577A1 (fr) | Procédé de codage d'image sur la base d'une transformée secondaire et dispositif associé | |
WO2024085565A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2024085566A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2024085567A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2024080798A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement sur lequel un flux binaire est stocké | |
WO2024112156A1 (fr) | Procédé et dispositif de codage/décodage d'image, et support d'enregistrement stockant un flux binaire | |
WO2024080795A1 (fr) | Procédé et dispositif de codage/décodage de vidéo, et support d'enregistrement stockant un flux binaire |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23907764 Country of ref document: EP Kind code of ref document: A1 |