WO2020226424A1 - Image encoding/decoding method and apparatus performing MIP and LFNST, and method of transmitting a bitstream - Google Patents
Image encoding/decoding method and apparatus performing MIP and LFNST, and method of transmitting a bitstream
- Publication number
- WO2020226424A1 (PCT/KR2020/005982)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- transform
- intra prediction
- current block
- prediction
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/61—using transform coding in combination with predictive coding
- H04N19/619—using transform coding in combination with predictive coding, the transform being operated outside the prediction loop
- H04N19/593—using predictive coding involving spatial prediction techniques
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
- H04N19/18—adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
Definitions
- the present disclosure relates to an image encoding/decoding method and apparatus, and more particularly, to an image encoding/decoding method and apparatus that apply a low-frequency non-separable transform (LFNST) to a block to which matrix-based intra prediction (MIP) is applied, and to a method of transmitting a bitstream generated by the image encoding method/apparatus of the present disclosure.
- LFNST low frequency non-separable transform
- MIP matrix based intra prediction
- a high-efficiency image compression technique is required for effectively transmitting, storing, and reproducing information of high-resolution and high-quality images.
- An object of the present disclosure is to provide an image encoding/decoding method and apparatus with improved encoding/decoding efficiency.
- an object of the present disclosure is to provide a method and apparatus for encoding/decoding an image by applying LFNST to a block to which MIP is applied.
- an object of the present disclosure is to provide a method for transmitting a bitstream generated by an image encoding method or apparatus according to the present disclosure.
- an object of the present disclosure is to provide a recording medium storing a bitstream generated by an image encoding method or apparatus according to the present disclosure.
- an object of the present disclosure is to provide a recording medium storing a bitstream that is received and decoded by an image decoding apparatus according to the present disclosure and used for image restoration.
- An image decoding method according to the present disclosure may include generating a prediction block by performing intra prediction on a current block, generating a residual block by performing inverse transform on transform coefficients of the current block, and reconstructing the current block based on the prediction block and the residual block, wherein the inverse transform includes a primary inverse transform and a secondary inverse transform, and the secondary inverse transform may be performed based on whether the intra prediction for the current block is MIP prediction.
- the secondary inverse transform may be performed only when it is determined that the secondary inverse transform is to be performed on the transform coefficients.
- whether to perform the secondary inverse transform on the transform coefficients may be determined based on information signaled through a bitstream.
- the secondary inverse transform may include determining a transform set for the secondary inverse transform based on an intra prediction mode of the current block, selecting one of a plurality of transform kernels included in the transform set for the secondary inverse transform, and performing the secondary inverse transform based on the selected transform kernel.
- the intra prediction mode of the current block used to determine the transform set for the secondary inverse transform may be derived as a predetermined intra prediction mode.
- the predetermined intra prediction mode may be derived from the MIP mode of the current block based on a predefined mapping table.
- when the intra prediction for the current block is MIP prediction, the predetermined intra prediction mode may be derived as the planar mode.
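The derivation above (a MIP-predicted block is treated as the planar mode when selecting the transform set for the secondary inverse transform) can be sketched as follows. The numeric set boundaries in this sketch are illustrative assumptions, not the normative mapping table of any standard:

```python
PLANAR = 0  # non-directional planar mode index

def lfnst_transform_set(intra_mode: int, is_mip: bool) -> int:
    """Select a secondary-transform (LFNST) set index from the intra mode.

    When the block is MIP-predicted, the intra mode used for the
    selection is derived as the planar mode, as described above.
    The set boundaries below are illustrative only.
    """
    if is_mip:
        intra_mode = PLANAR
    if intra_mode <= 1:    # planar / DC
        return 0
    if intra_mode <= 12:
        return 1
    if intra_mode <= 23:
        return 2
    if intra_mode <= 44:
        return 3
    if intra_mode <= 55:
        return 2
    return 1               # remaining modes on the far angular side
```

Under this sketch, a MIP block always selects set 0 regardless of its MIP mode index, while the same directional mode on a non-MIP block may select a different set.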
- when the intra prediction for the current block is MIP prediction, the secondary inverse transform for the transform coefficients may be skipped.
- when the intra prediction for the current block is MIP prediction, information indicating whether to perform the secondary inverse transform on the transform coefficients may not be signaled through a bitstream.
- a transform kernel for the secondary inverse transform of the transform coefficients may not be signaled through a bitstream and may instead be determined as a predetermined transform kernel.
- the number of transform kernels available when the current block is MIP-predicted may be smaller than the number of transform kernels available when the current block is not MIP-predicted.
- first information indicating whether the secondary inverse transform is applied to the current block and second information indicating the transform kernel used for the secondary inverse transform may be signaled as separate pieces of information, and the second information may be signaled when the first information indicates that the secondary inverse transform is applied to the current block.
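One way to read the signaling constraints above is the following decoder-side sketch. `BitReader` and the element names are hypothetical stand-ins, not syntax from any bitstream specification; it models the embodiment in which a MIP-predicted block infers the flag to off and parses nothing:

```python
class BitReader:
    """Toy reader over a list of already-decoded bits (hypothetical)."""
    def __init__(self, bits):
        self.bits = list(bits)

    def read_flag(self) -> int:
        return self.bits.pop(0)

def parse_secondary_transform_info(r: BitReader, is_mip: bool) -> dict:
    # Embodiment: for a MIP-predicted block the flag is not signaled,
    # and the secondary inverse transform is skipped (inferred off).
    if is_mip:
        return {"applied": False, "kernel_idx": None}
    applied = bool(r.read_flag())                      # first information
    kernel_idx = r.read_flag() if applied else None    # second information
    return {"applied": applied, "kernel_idx": kernel_idx}
```

Note that the second information (the kernel index) is only parsed when the first information indicates the secondary inverse transform is applied, mirroring the dependency stated above.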
- An image decoding apparatus according to the present disclosure may include a memory and at least one processor, wherein the at least one processor generates a prediction block by performing intra prediction on a current block, generates a residual block by performing inverse transform on transform coefficients of the current block, and reconstructs the current block based on the prediction block and the residual block, wherein the inverse transform includes a primary inverse transform and a secondary inverse transform, and the secondary inverse transform may be performed based on whether the intra prediction for the current block is MIP prediction.
- An image encoding method according to the present disclosure may include generating a prediction block by performing intra prediction on a current block, generating a residual block of the current block based on the prediction block, and generating transform coefficients by performing transform on the residual block, wherein the transform includes a primary transform and a secondary transform, and the secondary transform may be performed based on whether the intra prediction for the current block is MIP prediction.
- a transmission method may transmit a bitstream generated by the image encoding apparatus or image encoding method of the present disclosure.
- a computer-readable recording medium may store a bitstream generated by the image encoding method or image encoding apparatus of the present disclosure.
- an image encoding/decoding method and apparatus with improved encoding/decoding efficiency may be provided.
- a method and apparatus for encoding/decoding an image by applying LFNST to a block to which MIP is applied may be provided.
- a method for transmitting a bitstream generated by an image encoding method or apparatus according to the present disclosure may be provided.
- a recording medium storing a bitstream generated by an image encoding method or apparatus according to the present disclosure may be provided.
- a recording medium may be provided that stores a bitstream that is received and decoded by the image decoding apparatus according to the present disclosure and used for image restoration.
- FIG. 1 is a diagram schematically illustrating a video coding system to which an embodiment according to the present disclosure can be applied.
- FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure can be applied.
- FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure can be applied.
- FIG. 4 is a diagram showing a block division type according to a multi-type tree structure.
- FIG. 5 is a diagram illustrating a signaling mechanism of partitioning information of a quadtree with nested multi-type tree structure according to the present disclosure.
- FIG. 6 is a flowchart illustrating a video/video encoding method based on intra prediction.
- FIG. 7 is a diagram illustrating an exemplary configuration of an intra prediction unit 185 according to the present disclosure.
- FIG. 8 is a flowchart illustrating a video/video decoding method based on intra prediction.
- FIG. 9 is a diagram illustrating an exemplary configuration of an intra prediction unit 265 according to the present disclosure.
- FIG. 10 is a flowchart illustrating an intra prediction mode signaling procedure in an image encoding apparatus.
- FIG. 11 is a flowchart illustrating a procedure for determining an intra prediction mode in an image decoding apparatus.
- FIG. 12 is a flowchart for describing a procedure for deriving an intra prediction mode in more detail.
- FIG. 13 is a diagram illustrating an intra prediction direction according to an embodiment of the present disclosure.
- FIG. 14 is a diagram illustrating an intra prediction direction according to another embodiment of the present disclosure.
- FIG. 15 is a diagram illustrating an ALWIP process for a 4x4 block.
- FIG. 16 is a diagram illustrating an ALWIP process for an 8x8 block.
- FIG. 17 is a diagram illustrating an ALWIP process for an 8x4 block.
- FIG. 18 is a diagram illustrating an ALWIP process for a 16x16 block.
- FIG. 19 is a diagram illustrating an averaging step in an ALWIP process according to the present disclosure.
- FIG. 20 is a diagram for explaining an interpolation step in an ALWIP process according to the present disclosure.
- FIG. 21 is a diagram for describing a transform method applied to a residual block.
- FIG. 22 is a flowchart illustrating a method of performing a secondary transform/inverse transform according to the present disclosure.
- FIG. 23 is a diagram for describing a method performed in an image decoding apparatus based on whether MIP and LFNST are applied according to another embodiment of the present disclosure.
- FIG. 24 is a diagram for describing a method performed by an image encoding apparatus based on whether MIP and LFNST are applied according to another embodiment of the present disclosure.
- FIG. 25 is a diagram illustrating a content streaming system to which an embodiment of the present disclosure can be applied.
- the terms first and second are used only for the purpose of distinguishing one component from another, and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
- components that are distinguished from each other are intended to clearly describe each feature, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated to be formed in one hardware or software unit, or one component may be distributed in a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
- the components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other elements in addition to the elements described in the various embodiments are included in the scope of the present disclosure.
- the present disclosure relates to encoding and decoding of an image, and terms used in the present disclosure may have a common meaning commonly used in the technical field to which the present disclosure belongs unless newly defined in the present disclosure.
- a “picture” generally refers to a unit representing one image in a specific time period
- a slice/tile is a coding unit constituting a part of a picture
- one picture may be composed of one or more slices/tiles.
- a slice/tile may include one or more coding tree units (CTU).
- “pixel” or “pel” may mean a minimum unit constituting one picture (or image).
- sample may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or a value of a pixel, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
- unit may represent a basic unit of image processing.
- the unit may include at least one of a specific area of a picture and information related to the corresponding area.
- the unit may be used interchangeably with terms such as “sample array”, “block”, or “area” depending on the case.
- the MxN block may include samples (or sample arrays) consisting of M columns and N rows, or a set (or array) of transform coefficients.
- current block may mean one of “current coding block”, “current coding unit”, “coding object block”, “decoding object block”, or “processing object block”.
- current block may mean “current prediction block” or “prediction target block”.
- when transform (inverse transform) or quantization (inverse quantization) is performed, “current block” may mean “current transform block” or “transform target block”.
- when filtering is performed, “current block” may mean “block to be filtered”.
- current block may mean a block including both a luma component block and a chroma component block or "a luma block of the current block” unless explicitly stated as a chroma block.
- the chroma block of the current block may be explicitly expressed by including an explicit description of a chroma block such as a "chroma block” or a "current chroma block”.
- FIG. 1 shows a video coding system according to this disclosure.
- a video coding system may include an encoding device 10 and a decoding device 20.
- the encoding device 10 may transmit the encoded video and/or image information or data in a file or streaming format to the decoding device 20 through a digital storage medium or a network.
- the encoding apparatus 10 may include a video source generator 11, an encoder 12, and a transmission unit 13.
- the decoding apparatus 20 may include a receiving unit 21, a decoding unit 22, and a rendering unit 23.
- the encoder 12 may be referred to as a video/image encoder, and the decoder 22 may be referred to as a video/image decoder.
- the transmission unit 13 may be included in the encoding unit 12.
- the receiving unit 21 may be included in the decoding unit 22.
- the rendering unit 23 may include a display unit, and the display unit may be configured as a separate device or an external component.
- the video source generator 11 may acquire a video/image through a process of capturing, synthesizing, or generating a video/image.
- the video source generator 11 may include a video/image capturing device and/or a video/image generating device.
- the video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured video/images, and the like.
- the video/image generating device may include, for example, a computer, a tablet and a smartphone, and may (electronically) generate a video/image.
- a virtual video/image may be generated through a computer or the like, and in this case, a video/image capturing process may be substituted as a process of generating related data.
- the encoder 12 may encode an input video/image.
- the encoder 12 may perform a series of procedures such as prediction, transformation, and quantization for compression and encoding efficiency.
- the encoder 12 may output encoded data (coded video/image information) in a bitstream format.
- the transmission unit 13 may transmit the encoded video/image information or data output in the form of a bitstream to the receiving unit 21 of the decoding apparatus 20 through a digital storage medium or a network in a file or streaming form.
- Digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the transmission unit 13 may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
- the receiving unit 21 may extract/receive the bitstream from the storage medium or network and transmit it to the decoding unit 22.
- the decoder 22 may decode the video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoder 12.
- the rendering unit 23 may render the decoded video/image.
- the rendered video/image may be displayed through the display unit.
- FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure can be applied.
- the image encoding apparatus 100 may include an image segmentation unit 110, a subtraction unit 115, a transform unit 120, a quantization unit 130, an inverse quantization unit 140, an inverse transform unit 150, an addition unit 155, a filtering unit 160, a memory 170, an inter prediction unit 180, an intra prediction unit 185, and an entropy encoding unit 190.
- the inter prediction unit 180 and the intra prediction unit 185 may be collectively referred to as a “prediction unit”.
- the transform unit 120, the quantization unit 130, the inverse quantization unit 140, and the inverse transform unit 150 may be included in a residual processing unit.
- the residual processing unit may further include a subtraction unit 115.
- All or at least some of the plurality of constituent units constituting the image encoding apparatus 100 may be implemented as one hardware component (eg, an encoder or a processor) according to embodiments.
- the memory 170 may include a decoded picture buffer (DPB), and may be implemented by a digital storage medium.
- DPB decoded picture buffer
- the image dividing unit 110 may divide an input image (or picture, frame) input to the image encoding apparatus 100 into one or more processing units.
- the processing unit may be referred to as a coding unit (CU).
- the coding unit may be obtained by recursively partitioning a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree/binary-tree/ternary-tree (QT/BT/TT) structure.
- one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure, a binary tree structure, and/or a ternary tree structure.
- a quad tree structure may be applied first and a binary tree structure and/or a ternary tree structure may be applied later.
- the coding procedure according to the present disclosure may be performed based on the final coding unit that is no longer divided.
- the largest coding unit may be directly used as the final coding unit, or a coding unit of a lower depth obtained by dividing the largest coding unit may be used as the final coding unit.
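The recursive division from a largest coding unit down to final coding units can be sketched as a plain quad-tree. The `should_split` predicate is an illustrative stand-in for the encoder's (or signaled) split decision, and binary/ternary splits are omitted for brevity:

```python
def quad_tree_leaves(x, y, w, h, should_split):
    """Recursively split a block (x, y, w, h) into final coding units.

    `should_split(x, y, w, h)` stands in for the split decision;
    only quad splits are modeled in this sketch.
    """
    if not should_split(x, y, w, h):
        return [(x, y, w, h)]          # no further division: final coding unit
    hw, hh = w // 2, h // 2
    leaves = []
    for nx, ny in [(x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)]:
        leaves += quad_tree_leaves(nx, ny, hw, hh, should_split)
    return leaves
```

For example, splitting a 16x16 unit whenever a side exceeds 8 yields four 8x8 final coding units; a predicate that always returns false leaves the largest coding unit itself as the final coding unit.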
- the coding procedure may include a procedure such as prediction, transformation, and/or restoration described later.
- the processing unit of the coding procedure may be a prediction unit (PU) or a transform unit (TU).
- Each of the prediction unit and the transform unit may be divided or partitioned from the final coding unit.
- the prediction unit may be a unit of sample prediction
- the transform unit may be a unit for inducing a transform coefficient and/or a unit for inducing a residual signal from the transform coefficient.
- the prediction unit (inter prediction unit 180 or intra prediction unit 185) may perform prediction on a block to be processed (current block) and generate a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied in units of the current block or CU.
- the prediction unit may generate various information on prediction of the current block and transmit it to the entropy encoding unit 190.
- the information on prediction may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
- the intra prediction unit 185 may predict the current block by referring to samples in the current picture.
- the referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the intra prediction mode and/or the intra prediction technique.
- the intra prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the non-directional mode may include, for example, a DC mode and a planar mode (Planar mode).
- the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes, depending on the degree of detail of the prediction direction. However, this is an example, and more or less directional prediction modes may be used depending on the setting.
- the intra prediction unit 185 may determine a prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
- the inter prediction unit 180 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
- motion information may be predicted in units of blocks, subblocks, or samples based on correlation between motion information between neighboring blocks and current blocks.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different from each other.
- the temporal neighboring block may be referred to as a collocated reference block, a collocated CU (colCU), or the like.
- a reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
- the inter prediction unit 180 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive a motion vector and/or a reference picture index of the current block. Inter prediction may be performed based on various prediction modes.
- the inter prediction unit 180 may use motion information of a neighboring block as motion information of a current block.
- in this case, a residual signal may not be transmitted.
- in motion vector prediction (MVP) mode, motion vectors of neighboring blocks are used as motion vector predictors, and the motion vector of the current block may be signaled by encoding a motion vector difference and an indicator for the motion vector predictor.
- the motion vector difference may mean a difference between a motion vector of a current block and a motion vector predictor.
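Since the motion vector difference is defined as the current motion vector minus the predictor, the decoder recovers the motion vector by per-component addition. A minimal sketch (clipping and sub-pel precision handling omitted):

```python
def reconstruct_mv(mvp, mvd):
    """Recover the motion vector: mv = mvp + mvd, per (x, y) component."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```

For example, predictor (4, -2) plus signaled difference (1, 3) yields motion vector (5, 1).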
- the prediction unit may generate a prediction signal based on various prediction methods and/or prediction techniques described later. For example, the prediction unit may apply intra prediction or inter prediction for prediction of the current block, and may also apply intra prediction and inter prediction simultaneously. A prediction method in which intra prediction and inter prediction are applied simultaneously for prediction of a current block may be called combined inter and intra prediction (CIIP). Also, the prediction unit may perform intra block copy (IBC) for prediction of the current block. Intra block copy may be used for coding of game content images/videos and the like, for example, screen content coding (SCC). IBC is a method of predicting a current block using a reference block in the current picture located a predetermined distance away from the current block.
- CIIP combined inter and intra prediction
- IBC intra block copy
- the position of the reference block in the current picture may be encoded as a vector (block vector) corresponding to the predetermined distance.
- IBC basically performs prediction in the current picture, but may be performed similarly to inter prediction in that it derives a reference block in the current picture. That is, the IBC may use at least one of the inter prediction techniques described in this disclosure.
- the prediction signal generated through the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
- the subtraction unit 115 may generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the prediction unit from the input image signal (original block, original sample array).
- the generated residual signal may be transmitted to the transform unit 120.
- the transform unit 120 may generate transform coefficients by applying a transform technique to the residual signal.
- the transform technique may use at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loève Transform), GBT (Graph-Based Transform), or CNT (Conditionally Non-linear Transform).
- DCT Discrete Cosine Transform
- DST Discrete Sine Transform
- KLT Karhunen-Loève Transform
- GBT Graph-Based Transform
- CNT Conditionally Non-linear Transform
- GBT refers to a transform obtained from a graph when the relationship information between pixels is expressed as the graph.
- CNT refers to a transformation obtained based on generating a prediction signal using all previously reconstructed pixels.
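As a rough illustration of the transform step, the following sketch applies a 1-D DCT-II (one member of the transform family listed above) to a residual vector. This is a minimal textbook formulation, not the integer transform kernels used by an actual codec:

```python
import math

def dct2(x):
    # 1-D orthonormal DCT-II: X[k] = c(k) * sum_n x[n] * cos(pi*(2n+1)*k / (2N))
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

# A flat residual compacts all of its energy into the DC coefficient,
# which is why transforming residuals helps compression.
coeffs = dct2([4, 4, 4, 4])
```

Here the energy compaction is the point: only `coeffs[0]` is non-zero, so the block is cheap to entropy-code after quantization.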
- the transform process may be applied to square pixel blocks of the same size, or may be applied to non-square blocks of variable size.
- the quantization unit 130 may quantize the transform coefficients and transmit the quantized transform coefficients to the entropy encoding unit 190.
- the entropy encoding unit 190 may encode a quantized signal (information on quantized transform coefficients) and output it as a bitstream.
- the information on the quantized transform coefficients may be called residual information.
- the quantization unit 130 may rearrange the block-shaped quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
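The rearrangement from a 2-D block into a 1-D vector can be sketched with an up-right diagonal scan, a common coefficient scan order; the exact scan used in practice depends on the codec configuration, so this is an illustrative stand-in:

```python
def diagonal_scan(block):
    # Walk the anti-diagonals of a 2-D coefficient block, emitting each
    # diagonal from bottom-left to top-right, to build the 1-D vector form.
    h, w = len(block), len(block[0])
    order = []
    for d in range(h + w - 1):
        for y in range(h - 1, -1, -1):
            x = d - y
            if 0 <= x < w:
                order.append(block[y][x])
    return order

# Non-zero quantized coefficients cluster near the top-left (low frequencies),
# so the scan front-loads them and leaves a run of trailing zeros.
scanned = diagonal_scan([[9, 3, 0],
                         [5, 1, 0],
                         [0, 0, 0]])
```

Front-loading the significant coefficients is what makes the subsequent residual coding (last-significant-position plus zero runs) efficient.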
- the entropy encoding unit 190 may perform various encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
- the entropy encoding unit 190 may encode together or separately information necessary for video/image restoration (eg, values of syntax elements) in addition to quantized transform coefficients.
- the encoded information (e.g., encoded video/image information) may be transmitted or stored in a bitstream format in units of network abstraction layer (NAL) units.
- the video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/image information may further include general constraint information.
- the signaling information, transmitted information, and/or syntax elements mentioned in the present disclosure may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream may be transmitted through a network or may be stored in a digital storage medium.
- the network may include a broadcasting network and/or a communication network
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- a transmission unit (not shown) for transmitting the signal output from the entropy encoding unit 190 and/or a storage unit (not shown) for storing it may be provided as an internal/external element of the image encoding apparatus 100, or the transmission unit may be provided as a component of the entropy encoding unit 190.
- the quantized transform coefficients output from the quantization unit 130 may be used to generate a residual signal.
- a residual signal residual block or residual samples
- by applying inverse quantization and inverse transform to the quantized transform coefficients, the residual signal (residual block or residual samples) may be reconstructed.
- the addition unit 155 may generate a reconstructed signal (a reconstructed picture, a reconstructed block, a reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 180 or the intra prediction unit 185.
- a reconstructed signal (a reconstructed picture, a reconstructed block, and a reconstructed sample array).
- when there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
- the addition unit 155 may be referred to as a restoration unit or a restoration block generation unit.
- the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
- the filtering unit 160 may apply filtering to the reconstructed signal to improve subjective/objective image quality.
- the filtering unit 160 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and the modified reconstructed picture may be stored in the memory 170, specifically in the DPB of the memory 170.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- the filtering unit 160 may generate a variety of filtering information and transmit it to the entropy encoding unit 190 as described later in the description of each filtering method.
- the filtering information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
- the modified reconstructed picture transmitted to the memory 170 may be used as a reference picture in the inter prediction unit 180.
- through this, the image encoding apparatus 100 may avoid a prediction mismatch between the image encoding apparatus 100 and the image decoding apparatus, and may also improve encoding efficiency.
- the DPB in the memory 170 may store a reconstructed picture modified to be used as a reference picture in the inter prediction unit 180.
- the memory 170 may store motion information of a block from which motion information in a current picture is derived (or encoded) and/or motion information of blocks in a picture that have already been reconstructed.
- the stored motion information may be transmitted to the inter prediction unit 180 to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- the memory 170 may store reconstructed samples of reconstructed blocks in the current picture, and may transmit the reconstructed samples to the intra prediction unit 185.
- FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure can be applied.
- the image decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an addition unit 235, a filtering unit 240, a memory 250, an inter prediction unit 260, and an intra prediction unit 265.
- the inter prediction unit 260 and the intra prediction unit 265 may be collectively referred to as a “prediction unit”.
- the inverse quantization unit 220 and the inverse transform unit 230 may be included in the residual processing unit.
- All or at least some of the plurality of constituent units constituting the image decoding apparatus 200 may be implemented as one hardware component (eg, a decoder or a processor) according to embodiments.
- the memory 250 may include a DPB and may be implemented by a digital storage medium.
- the image decoding apparatus 200 receiving a bitstream including video/image information may reconstruct an image by performing a process corresponding to the process performed by the image encoding apparatus 100 of FIG. 2.
- the image decoding apparatus 200 may perform decoding using a processing unit applied in the image encoding apparatus.
- the processing unit of decoding may be, for example, a coding unit.
- the coding unit may be a coding tree unit or may be obtained by dividing the largest coding unit.
- the reconstructed image signal decoded and output through the image decoding apparatus 200 may be reproduced through a reproduction device (not shown).
- the image decoding apparatus 200 may receive a signal output from the image encoding apparatus of FIG. 2 in the form of a bitstream.
- the received signal may be decoded through the entropy decoding unit 210.
- the entropy decoding unit 210 may parse the bitstream to derive information (e.g., video/image information) necessary for image reconstruction (or picture reconstruction).
- the video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/image information may further include general constraint information.
- the image decoding apparatus may additionally use information on the parameter set and/or the general restriction information to decode an image.
- the signaling information, received information and/or syntax elements mentioned in the present disclosure may be obtained from the bitstream by being decoded through the decoding procedure.
- the entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements required for image reconstruction and quantized values of transform coefficients related to the residual.
- the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using information on the syntax element to be decoded, decoding information of the neighboring block and the block to be decoded, or information on a symbol/bin decoded in a previous step, predicts the occurrence probability of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- the CABAC entropy decoding method may update the context model using information of the decoded symbol/bin for the context model of the next symbol/bin after the context model is determined.
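The per-context adaptation described above can be sketched as a probability estimate that is nudged toward each decoded bin. The fixed-point scale and shift below are assumptions for illustration, not the exact CABAC/VVC state-update rule:

```python
PROB_ONE = 1 << 15  # fixed-point scale for probabilities (assumption for this sketch)

def update_context(p, bin_val, shift=4):
    # After decoding a bin, move the context's probability of '1' toward the
    # observed value: a simplified exponential-decay update in the spirit of
    # CABAC's context adaptation, not the standardized rule.
    target = PROB_ONE if bin_val == 1 else 0
    return p + ((target - p) >> shift)

p = PROB_ONE // 2          # start unbiased: P(1) = 0.5
for b in [1, 1, 1, 1]:     # a run of 1s makes '1' the more probable symbol
    p = update_context(p, b)
```

Because each context keeps its own `p`, syntax elements with skewed statistics are coded in well under one bit per bin on average.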
- among the information decoded by the entropy decoding unit 210, information on prediction is provided to the prediction unit (inter prediction unit 260 and intra prediction unit 265), and the residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients and related parameter information, may be input to the inverse quantization unit 220. In addition, information about filtering among the information decoded by the entropy decoding unit 210 may be provided to the filtering unit 240.
- a receiving unit (not shown) for receiving a signal output from the image encoding apparatus may be additionally provided as an internal/external element of the image decoding apparatus 200, or the receiving unit may be provided as a component of the entropy decoding unit 210.
- the video decoding apparatus may include an information decoder (video/image/picture information decoder) and/or a sample decoder (video/image/picture sample decoder).
- the information decoder may include the entropy decoding unit 210, and the sample decoder may include at least one of the inverse quantization unit 220, the inverse transform unit 230, the addition unit 235, the filtering unit 240, the memory 250, the inter prediction unit 260, and the intra prediction unit 265.
- the inverse quantization unit 220 may inverse quantize the quantized transform coefficients and output transform coefficients.
- the inverse quantization unit 220 may rearrange the quantized transform coefficients into a two-dimensional block shape. In this case, the rearrangement may be performed based on a coefficient scan order performed by the image encoding apparatus.
- the inverse quantization unit 220 may perform inverse quantization on quantized transform coefficients by using a quantization parameter (eg, quantization step size information) and obtain transform coefficients.
- a quantization parameter eg, quantization step size information
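Inverse quantization of the kind described above can be sketched as scaling each quantized level by a step size derived from the quantization parameter. The "step size doubles every 6 QP steps" relation is the qualitative HEVC/VVC-style behavior; the constants here are approximations for the sketch, not the spec's exact scaling tables:

```python
def dequantize(levels, qp):
    # Reconstruct transform coefficients from quantized levels using an
    # approximate step size Qstep ~ 2^((QP-4)/6): +6 in QP doubles the step.
    step = 2.0 ** ((qp - 4) / 6.0)
    return [lv * step for lv in levels]

coeffs = dequantize([10, -3, 0, 1], qp=22)  # qp=22 gives step = 2^3 = 8
```

Because quantization is lossy, the decoder can only recover `level * step`, not the original transform coefficients; this rounding is the main source of compression loss.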
- the inverse transform unit 230 may inversely transform transform coefficients to obtain a residual signal (residual block, residual sample array).
- the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on the prediction information output from the entropy decoding unit 210, and may determine a specific intra/inter prediction mode (prediction technique).
- the prediction unit can generate the prediction signal based on various prediction methods (techniques) described later.
- the intra prediction unit 265 may predict the current block by referring to samples in the current picture.
- the description of the intra prediction unit 185 may be equally applied to the intra prediction unit 265.
- the inter prediction unit 260 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
- motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- the inter prediction unit 260 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on the received candidate selection information.
- Inter prediction may be performed based on various prediction modes (techniques), and the information about the prediction may include information indicating a mode (technique) of inter prediction for the current block.
- the addition unit 235 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 260 and/or the intra prediction unit 265). When there is no residual for a block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
- the description of the addition unit 155 may be equally applied to the addition unit 235.
- the addition unit 235 may be referred to as a restoration unit or a restoration block generation unit.
- the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
- the filtering unit 240 may apply filtering to the reconstructed signal to improve subjective/objective image quality.
- the filtering unit 240 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and the modified reconstructed picture may be stored in the memory 250, specifically in the DPB of the memory 250.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- the (modified) reconstructed picture stored in the DPB of the memory 250 may be used as a reference picture in the inter prediction unit 260.
- the memory 250 may store motion information of a block from which motion information in a current picture is derived (or decoded) and/or motion information of blocks in a picture that have already been reconstructed.
- the stored motion information may be transmitted to the inter prediction unit 260 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
- the memory 250 may store reconstructed samples of reconstructed blocks in the current picture, and may transmit them to the intra prediction unit 265.
- the embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the image encoding apparatus 100 may be applied equally or correspondingly to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the image decoding apparatus 200, respectively.
- the coding unit is obtained by recursively dividing a coding tree unit (CTU) or a maximum coding unit (LCU) according to a QT/BT/TT (Quad-tree/binary-tree/ternary-tree) structure.
- CTU coding tree unit
- LCU maximum coding unit
- QT/BT/TT Quad-tree/binary-tree/ternary-tree
- the CTU may be first divided into a quadtree structure. Thereafter, leaf nodes of a quadtree structure may be further divided by a multitype tree structure.
- the division according to the quadtree means division in which the current CU (or CTU) is divided into four. By partitioning according to the quadtree, the current CU can be divided into four CUs having the same width and the same height.
- the current CU corresponds to a leaf node of the quadtree structure.
- the CU corresponding to the leaf node of the quadtree structure is no longer divided and may be used as the final coding unit described above.
- a CU corresponding to a leaf node of a quadtree structure may be further divided by a multitype tree structure.
- the division according to the multi-type tree structure may include two divisions according to a binary tree structure and two divisions according to a ternary tree structure.
- the two divisions according to the binary tree structure may include vertical binary splitting (SPLIT_BT_VER) and horizontal binary splitting (SPLIT_BT_HOR).
- vertical binary splitting (SPLIT_BT_VER) refers to splitting in which the current CU is divided into two in the vertical direction. As shown in FIG. 4, two CUs having the same height as the current CU and half the width of the current CU may be generated by vertical binary splitting.
- horizontal binary splitting means splitting in which the current CU is divided into two in the horizontal direction. As shown in FIG. 4, two CUs having half the height of the current CU and the same width as the current CU may be generated by horizontal binary splitting.
- the two splits according to the ternary tree structure may include vertical ternary splitting (SPLIT_TT_VER) and horizontal ternary splitting (SPLIT_TT_HOR).
- vertical ternary splitting (SPLIT_TT_VER) divides the current CU in the vertical direction at a ratio of 1:2:1. As shown in FIG. 4, by vertical ternary splitting, two CUs having the same height as the current CU and a width of 1/4 of the width of the current CU, and one CU having the same height as the current CU and a width of half the width of the current CU, may be generated.
- horizontal ternary splitting divides the current CU in the horizontal direction at a ratio of 1:2:1. As shown in FIG. 4, by horizontal ternary splitting, two CUs having a height of 1/4 of the height of the current CU and the same width as the current CU, and one CU having a height of half the height of the current CU and the same width as the current CU, may be generated.
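The four multi-type tree splits above can be summarized as a small helper that returns the child CU sizes (a sketch; real codecs additionally enforce minimum-size and boundary constraints):

```python
def split_sizes(w, h, mode):
    # Child CU sizes, as (width, height) tuples, for each multi-type tree split.
    if mode == "SPLIT_BT_VER":          # two halves side by side
        return [(w // 2, h), (w // 2, h)]
    if mode == "SPLIT_BT_HOR":          # two halves stacked
        return [(w, h // 2), (w, h // 2)]
    if mode == "SPLIT_TT_VER":          # 1:2:1 in the vertical direction
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    if mode == "SPLIT_TT_HOR":          # 1:2:1 in the horizontal direction
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    raise ValueError(mode)

children = split_sizes(32, 16, "SPLIT_TT_VER")  # an 8x16, a 16x16, and an 8x16 CU
```

Note that every split preserves total area, which is why the recursion terminates cleanly at leaf CUs.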
- FIG. 5 is a diagram illustrating a signaling mechanism of partitioning information of a quadtree with nested multi-type tree structure according to the present disclosure.
- the CTU is treated as the root node of a quadtree, and is first partitioned into a quadtree structure.
- information (e.g., qt_split_flag) indicating whether quadtree splitting is performed on the current CU may be signaled.
- when qt_split_flag has a first value (e.g., "1"), the current CU may be quadtree-split.
- when qt_split_flag has a second value (e.g., "0"), the current CU is not quadtree-split, but becomes a leaf node (QT_leaf_node) of the quadtree.
- the leaf nodes of each quadtree can then be further partitioned into a multitype tree structure. That is, a leaf node of a quad tree may be a node (MTT_node) of a multi-type tree.
- a first flag (e.g., mtt_split_cu_flag) may be signaled to indicate whether the current node is further split.
- when the node is split, a second flag (e.g., mtt_split_cu_vertical_flag) may be signaled to indicate the splitting direction.
- when the second flag is 1, the splitting direction may be vertical, and when the second flag is 0, the splitting direction may be horizontal.
- then, a third flag (e.g., mtt_split_cu_binary_flag) may be signaled to indicate whether the split type is a binary split type or a ternary split type.
- when the third flag is 1, the split type may be binary, and when the third flag is 0, the split type may be ternary.
- Nodes of a multitype tree obtained by binary division or ternary division may be further partitioned into a multitype tree structure.
- nodes of a multitype tree cannot be partitioned into a quadtree structure.
- when the first flag is 0, the corresponding node of the multitype tree is no longer split and becomes a leaf node (MTT_leaf_node) of the multitype tree.
- the CU corresponding to the leaf node of the multitype tree may be used as the above-described final coding unit.
- a multi-type tree splitting mode (MttSplitMode) of the CU may be derived as shown in Table 1.
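Table 1 is not reproduced here, but the mapping follows directly from the flag semantics stated above (the second flag selects the direction, the third flag selects binary vs. ternary), so the derivation of MttSplitMode can be sketched as:

```python
def mtt_split_mode(vertical_flag, binary_flag):
    # Derive the multi-type tree split mode of a CU from the two signaled
    # flags: direction (vertical/horizontal) and type (binary/ternary).
    direction = "VER" if vertical_flag == 1 else "HOR"
    kind = "BT" if binary_flag == 1 else "TT"
    return f"SPLIT_{kind}_{direction}"

mode = mtt_split_mode(vertical_flag=0, binary_flag=1)  # a horizontal binary split
```

Two flags are enough because a multi-type tree node has exactly four possible split modes.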
- One CTU may include a coding block of luma samples (hereinafter, referred to as a “luma block”) and two coding blocks of chroma samples corresponding thereto (hereinafter referred to as a “chroma block”).
- the above-described coding tree scheme may be applied equally to the luma block and the chroma block of the current CU, or may be applied separately.
- a luma block and a chroma block in one CTU may be divided into the same block tree structure, and the tree structure in this case may be represented as a single tree (SINGLE_TREE).
- a luma block and a chroma block in one CTU may be divided into individual block tree structures, and the tree structure in this case may be represented as a dual tree (DUAL_TREE). That is, when the CTU is divided into a dual tree, a block tree structure for a luma block and a block tree structure for a chroma block may exist separately.
- the block tree structure for the luma block may be referred to as a dual tree luma (DUAL_TREE_LUMA)
- the block tree structure for the chroma block may be referred to as a dual tree chroma (DUAL_TREE_CHROMA).
- luma blocks and chroma blocks in one CTU may be limited to have the same coding tree structure.
- luma blocks and chroma blocks may have separate block tree structures from each other.
- a luma coding tree block (CTB) may be divided into CUs based on a specific coding tree structure, and a chroma CTB may be divided into chroma CUs based on a different coding tree structure. That is, a CU in an I slice/tile group to which an individual block tree structure is applied may be composed of a coding block of the luma component or coding blocks of the two chroma components.
- a CU in an I slice/tile group to which the same block tree structure is applied and a CU of a P or B slice/tile group may be composed of blocks of three color components (a luma component and two chroma components).
- the structure in which the CU is divided is not limited thereto.
- the BT structure and the TT structure may be interpreted as a concept included in the Multiple Partitioning Tree (MPT) structure, and the CU may be interpreted as being divided through the QT structure and the MPT structure.
- MPT Multiple Partitioning Tree
- a syntax element eg, MPT_split_type
- a syntax element (e.g., MPT_split_mode) including information on which of the vertical and horizontal directions the split is performed in.
- the CU may be divided in a manner different from the QT, BT, or TT structure. That is, unlike the QT structure in which a CU of a lower depth is divided into 1/4 the size of a CU of an upper depth, the BT structure in which a CU of a lower depth is divided into 1/2 the size of a CU of an upper depth, or the TT structure in which a CU of a lower depth is divided into 1/4 or 1/2 the size of a CU of an upper depth, a CU of a lower depth may, depending on the case, be divided into 1/5, 1/3, 3/8, 3/5, 2/3, or 5/8 the size of a CU of an upper depth, and the method of partitioning the CU is not limited thereto.
- Intra prediction may indicate prediction of generating prediction samples for a current block based on reference samples in a picture (hereinafter, referred to as a current picture) to which the current block belongs.
- a current picture a picture to which the current block belongs.
- surrounding reference samples to be used for intra prediction of the current block may be derived.
- the neighboring reference samples of the current block may include samples adjacent to the left boundary of the current block of size nWxnH and a total of 2xnH samples neighboring the bottom-left, samples adjacent to the top boundary of the current block and a total of 2xnW samples neighboring the top-right, and one sample neighboring the top-left of the current block.
- the neighboring reference samples of the current block may include multiple columns of upper neighboring samples and multiple rows of left neighboring samples.
- alternatively, the neighboring reference samples of the current block may include a total of nH samples adjacent to the right boundary of the current block of size nWxnH, a total of nW samples adjacent to the bottom boundary of the current block, and one sample neighboring the bottom-right of the current block.
- the decoder may construct neighboring reference samples to be used for prediction by substituting samples that are not available with available samples.
- surrounding reference samples to be used for prediction may be configured through interpolation of available samples.
- (i) a prediction sample may be derived based on the average or interpolation of neighboring reference samples of the current block, and (ii) a prediction sample may be derived based on a reference sample existing in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block.
- the case of (i) may be called a non-directional mode or a non-angular mode, and the case of (ii) may be called a directional mode or an angular mode.
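The non-directional, average-based case in (i) can be sketched as a DC-style prediction where every sample in the block takes the rounded mean of the neighboring reference samples (the sample values below are made up for illustration):

```python
def dc_prediction(left_refs, top_refs, w, h):
    # Non-directional intra prediction: fill the w x h block with the
    # rounded integer average of the left and top neighboring references.
    refs = left_refs[:h] + top_refs[:w]
    dc = (sum(refs) + len(refs) // 2) // len(refs)
    return [[dc] * w for _ in range(h)]

pred = dc_prediction(left_refs=[100, 102, 98, 100],
                     top_refs=[101, 99, 101, 99],
                     w=4, h=4)
```

A flat prediction like this works well for smooth regions; textured or edged regions are better served by the directional modes of case (ii).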
- LIP linear interpolation intra prediction
- chroma prediction samples may be generated based on luma samples using a linear model. This case may be referred to as LM (Linear Model) mode.
- LM Linear Model
- a temporary prediction sample of the current block may be derived based on filtered neighboring reference samples, and a prediction sample of the current block may be derived by a weighted sum of the temporary prediction sample and at least one reference sample derived according to the intra prediction mode among the existing neighboring reference samples, that is, the unfiltered neighboring reference samples. This case may be called position dependent intra prediction (PDPC).
- a reference sample line having the highest prediction accuracy among the neighboring multi-reference sample lines of the current block may be selected, and a prediction sample may be derived using a reference sample positioned in the prediction direction from the corresponding line.
- information on the used reference sample line (e.g., intra_luma_ref_idx)
- MRL multi-reference line intra prediction
- reference samples may be derived from a reference sample line directly adjacent to the current block, and in this case, information about the reference sample line may not be signaled.
- the current block may be divided into vertical or horizontal subpartitions, and intra prediction may be performed for each subpartition based on the same intra prediction mode.
- in this case, neighboring reference samples for intra prediction may be derived in units of subpartitions. That is, a reconstructed sample of the previous subpartition in the encoding/decoding order may be used as a neighboring reference sample of the current subpartition.
- the intra prediction mode for the current block is equally applied to the subpartitions, but by deriving and using neighboring reference samples in units of the subpartitions, intra prediction performance may be improved in some cases.
- This prediction method may be referred to as intra sub-partitions (ISP) or ISP-based intra prediction.
- intra prediction techniques may be referred to in various terms such as an intra prediction type or an additional intra prediction mode, separated from a directional or non-directional intra prediction mode.
- the intra prediction technique may include at least one of the aforementioned LIP, LM, PDPC, MRL, and ISP.
- the general intra prediction method excluding specific intra prediction types such as LIP, LM, PDPC, MRL, and ISP may be referred to as a normal intra prediction type.
- the normal intra prediction type may be generally applied when the specific intra prediction type as described above is not applied, and prediction may be performed based on the aforementioned intra prediction mode. Meanwhile, post-processing filtering may be performed on the derived prediction samples as necessary.
- the intra prediction procedure may include determining an intra prediction mode/type, deriving a neighboring reference sample, and deriving an intra prediction mode/type based prediction sample. Also, a post-filtering step may be performed on the derived prediction samples as necessary.
- affine linear weighted intra prediction (ALWIP) may be used in addition to the aforementioned intra prediction types.
- the ALWIP may be called linear weighted intra prediction (LWIP), matrix weighted intra prediction (MWIP), or matrix based intra prediction (MIP).
- LWIP linear weighted intra prediction
- MWIP matrix weighted intra prediction
- MIP matrix based intra prediction
- the intra prediction modes used for ALWIP may be configured differently from the intra prediction modes used in LIP, PDPC, MRL, and ISP intra prediction described above, or in normal intra prediction (the intra prediction modes described with reference to FIGS. 13 and/or 14).
- the intra prediction mode for ALWIP may be referred to as an ALWIP mode, LWIP mode, MWIP mode, or MIP mode.
- a matrix and an offset used in the matrix vector multiplication may be set differently according to the intra prediction mode for the ALWIP.
- the matrix may be referred to as a (affine) weight matrix
- the offset may be referred to as an (affine) offset vector or an (affine) bias vector.
- a specific ALWIP method will be described later.
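In outline, the matrix vector multiplication described above maps a (downsampled) boundary vector through a mode-dependent weight matrix and adds an offset (bias) vector. The tiny matrix and values below are made up for illustration; they are not one of the trained MIP matrices from the actual design:

```python
def mip_predict(boundary, matrix, offset):
    # Matrix-based intra prediction sketch: pred = matrix * boundary + offset,
    # where the matrix and offset are selected by the ALWIP/MIP mode.
    return [sum(w * b for w, b in zip(row, boundary)) + o
            for row, o in zip(matrix, offset)]

boundary = [100, 104, 96, 100]              # reduced boundary sample vector (made up)
matrix = [[0.25, 0.25, 0.25, 0.25],         # hypothetical 2x4 weight matrix
          [0.5,  0.5,  0.0,  0.0]]
offset = [0, 2]                             # hypothetical bias vector
pred = mip_predict(boundary, matrix, offset)
```

In the real scheme the output of this multiplication is a reduced prediction block that is then upsampled by interpolation to the full block size.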
- FIG. 6 is a flowchart illustrating a video/video encoding method based on intra prediction.
- the encoding method of FIG. 6 may be performed by the video encoding apparatus of FIG. 2. Specifically, step S610 may be performed by the intra prediction unit 185, and step S620 may be performed by the residual processing unit. Specifically, step S620 may be performed by the subtraction unit 115. Step S630 may be performed by the entropy encoding unit 190.
- the prediction information of step S630 may be derived by the intra prediction unit 185, and the residual information of step S630 may be derived by the residual processing unit.
- the residual information is information on the residual samples.
- the residual information may include information on quantized transform coefficients for the residual samples.
- the residual samples may be derived as transform coefficients through the transform unit 120 of the image encoding apparatus, and the transform coefficients may be derived as quantized transform coefficients through the quantization unit 130.
- Information about the quantized transform coefficients may be encoded by the entropy encoding unit 190 through a residual coding procedure.
- the image encoding apparatus may perform intra prediction on the current block (S610).
- the video encoding apparatus may determine an intra prediction mode/type for the current block, derive neighboring reference samples of the current block, and then generate prediction samples in the current block based on the intra prediction mode/type and the neighboring reference samples.
- the procedure of determining the intra prediction mode/type, deriving neighboring reference samples, and generating prediction samples may be simultaneously performed, or one procedure may be performed before the other procedure.
- FIG. 7 is a diagram illustrating an exemplary configuration of an intra prediction unit 185 according to the present disclosure.
- the intra prediction unit 185 of the video encoding apparatus may include an intra prediction mode/type determination unit 186, a reference sample derivation unit 187 and/or a prediction sample derivation unit 188.
- the intra prediction mode/type determiner 186 may determine an intra prediction mode/type for the current block.
- the reference sample derivation unit 187 may derive neighboring reference samples of the current block.
- the prediction sample derivation unit 188 may derive prediction samples of the current block.
- the intra prediction unit 185 may further include a prediction sample filter unit (not shown).
- the image encoding apparatus may determine a mode/type applied to the current block from among a plurality of intra prediction modes/types.
- the video encoding apparatus may compare RD costs for the intra prediction modes/types and determine an optimal intra prediction mode/type for the current block.
- the image encoding apparatus may perform a prediction sample filtering procedure.
- Predictive sample filtering may be referred to as post filtering. Some or all of the prediction samples may be filtered by the prediction sample filtering procedure. In some cases, the prediction sample filtering procedure may be omitted.
- the apparatus for encoding an image may generate residual samples for the current block based on prediction samples or filtered prediction samples (S620).
- the image encoding apparatus may derive the residual samples by subtracting the prediction samples from original samples of the current block. That is, the image encoding apparatus may derive the residual sample value by subtracting the corresponding predicted sample value from the original sample value.
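The subtraction described above can be sketched as follows; the sample values and 2x2 block shape are illustrative, not part of the disclosed embodiments:

```python
# Sketch of residual derivation (S620): each residual sample value is the
# original sample value minus the corresponding predicted sample value.

def derive_residual(original, prediction):
    """Subtract prediction samples from original samples, element by element."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, prediction)]

original = [[120, 122], [119, 121]]     # illustrative original samples
prediction = [[118, 120], [118, 120]]   # illustrative prediction samples
residual = derive_residual(original, prediction)
# residual == [[2, 2], [1, 1]]
```

The residual samples derived this way are then transformed and quantized as described below.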
- the image encoding apparatus may encode image information including information about the intra prediction (prediction information) and residual information about the residual samples (S630).
- the prediction information may include the intra prediction mode information and/or the intra prediction technique information.
- the image encoding apparatus may output the encoded image information in the form of a bitstream.
- the output bitstream may be delivered to an image decoding apparatus through a storage medium or a network.
- the residual information may include a residual coding syntax to be described later.
- the image encoding apparatus may transform/quantize the residual samples to derive quantized transform coefficients.
- the residual information may include information on the quantized transform coefficients.
- the image encoding apparatus may generate a reconstructed picture (including reconstructed samples and a reconstructed block). To this end, the image encoding apparatus may perform inverse quantization/inverse transformation on the quantized transform coefficients again to derive (modified) residual samples. The residual samples are transformed/quantized and then inverse quantized/inverse transformed again in order to derive residual samples identical to the residual samples derived in the image decoding apparatus.
- the image encoding apparatus may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and the (modified) residual samples. A reconstructed picture for the current picture may be generated based on the reconstructed block. As described above, an in-loop filtering procedure or the like may be further applied to the reconstructed picture.
- FIG. 8 is a flowchart illustrating a video/video decoding method based on intra prediction.
- the image decoding apparatus may perform an operation corresponding to an operation performed by the image encoding apparatus.
- the decoding method of FIG. 8 may be performed by the video decoding apparatus of FIG. 3.
- Steps S810 to S830 may be performed by the intra prediction unit 265, and the prediction information of step S810 and the residual information of step S840 may be obtained from the bitstream by the entropy decoding unit 210.
- the residual processing unit of the image decoding apparatus may derive residual samples for the current block based on the residual information (S840).
- the inverse quantization unit 220 of the residual processing unit may derive transform coefficients by performing inverse quantization based on the quantized transform coefficients derived from the residual information
- the inverse transform unit 230 of the residual processing unit may derive residual samples for the current block by performing inverse transform on the transform coefficients.
- Step S850 may be performed by the addition unit 235 or the restoration unit.
- the image decoding apparatus may derive an intra prediction mode/type for the current block based on the received prediction information (intra prediction mode/type information) (S810).
- the image decoding apparatus may derive neighboring reference samples of the current block (S820).
- the image decoding apparatus may generate prediction samples in the current block based on the intra prediction mode/type and the neighboring reference samples (S830).
- the image decoding apparatus may perform a prediction sample filtering procedure. Predictive sample filtering may be referred to as post filtering. Some or all of the prediction samples may be filtered by the prediction sample filtering procedure. In some cases, the prediction sample filtering procedure may be omitted.
- the image decoding apparatus may generate residual samples for the current block based on the received residual information (S840).
- the image decoding apparatus may generate reconstructed samples for the current block based on the prediction samples and the residual samples, and derive a reconstructed block including the reconstructed samples (S850).
- a reconstructed picture for the current picture may be generated based on the reconstructed block.
- an in-loop filtering procedure or the like may be further applied to the reconstructed picture.
- FIG. 9 is a diagram illustrating an exemplary configuration of an intra prediction unit 265 according to the present disclosure.
- the intra prediction unit 265 of the image decoding apparatus may include an intra prediction mode/type determination unit 266, a reference sample derivation unit 267, and a prediction sample derivation unit 268.
- the intra prediction mode/type determiner 266 determines an intra prediction mode/type for the current block based on intra prediction mode/type information generated and signaled by the intra prediction mode/type determiner 186 of the image encoding apparatus.
- the reference sample derivation unit 267 may derive neighboring reference samples of the current block from the reconstructed reference region in the current picture.
- the prediction sample derivation unit 268 may derive prediction samples of the current block.
- the intra prediction unit 265 may further include a prediction sample filter unit (not shown).
- the intra prediction mode information may include, for example, flag information (ex. intra_luma_mpm_flag) indicating whether a most probable mode (MPM) or a remaining mode is applied to the current block, and when the MPM is applied to the current block, the intra prediction mode information may further include index information (ex. intra_luma_mpm_idx) indicating one of the intra prediction mode candidates (MPM candidates).
- the intra prediction mode candidates (MPM candidates) may be composed of an MPM candidate list or an MPM list.
- the intra prediction mode information may include remaining mode information (ex. intra_luma_mpm_remainder) indicating one of the remaining intra prediction modes excluding the intra prediction mode candidates (MPM candidates).
- the image decoding apparatus may determine an intra prediction mode of the current block based on the intra prediction mode information.
- a separate MPM list may be configured for the above-described ALWIP.
- the MPM candidate modes may include an intra prediction mode and additional candidate modes of a neighboring block (eg, a left neighboring block and an upper neighboring block) of the current block.
- the intra prediction technique information may be implemented in various forms.
- the intra prediction technique information may include intra prediction technique index information indicating one of the intra prediction techniques.
- the intra prediction technique information may include at least one of reference sample line information (ex. intra_luma_ref_idx) indicating whether the MRL is applied to the current block and, if applied, which reference sample line is used, ISP flag information (ex. intra_subpartitions_mode_flag) indicating whether the ISP is applied to the current block, ISP type information (ex. intra_subpartitions_split_flag) indicating the split type of subpartitions when the ISP is applied, flag information indicating whether PDPC is applied, or flag information indicating whether LIP is applied.
- the ISP flag information may be referred to as an ISP application indicator.
- the intra prediction type information may include an ALWIP flag indicating whether ALWIP is applied to the current block.
- the intra prediction mode information and/or the intra prediction technique information may be encoded/decoded through the coding method described in this disclosure.
- the intra prediction mode information and/or the intra prediction method information may be encoded/decoded through entropy coding (ex. CABAC, CAVLC) based on a truncated (rice) binary code.
- an intra prediction mode applied to the current block may be determined using an intra prediction mode of a neighboring block.
- the image decoding apparatus constructs an mpm (most probable mode) list derived based on the intra prediction mode and additional candidate modes of the neighboring block (ex. left and/or upper neighboring block) of the current block, and received One of the mpm candidates in the mpm list can be selected based on the mpm index.
- the video decoding apparatus may select one of the remaining intra prediction modes that are not included in the mpm list based on the remaining intra prediction mode information.
- whether the intra prediction mode applied to the current block is among the mpm candidates (ie, included in the mpm list) or is a remaining mode may be indicated based on an mpm flag (ex. intra_luma_mpm_flag).
- a value of 1 of the mpm flag may indicate that the intra prediction mode for the current block is among the mpm candidates (mpm list), and a value of 0 of the mpm flag may indicate that the intra prediction mode for the current block is not among the mpm candidates (mpm list).
- the mpm index may be signaled in the form of an mpm_idx or intra_luma_mpm_idx syntax element
- the remaining intra prediction mode information may be signaled in the form of a rem_intra_luma_pred_mode or intra_luma_mpm_remainder syntax element.
- the remaining intra prediction mode information may indicate one of all intra prediction modes by indexing the remaining intra prediction modes not included in the mpm candidates (mpm list) in the order of prediction mode numbers.
- the intra prediction mode may be an intra prediction mode for a luma component (sample).
- the intra prediction mode information may include at least one of the mpm flag (ex. intra_luma_mpm_flag), the mpm index (ex. mpm_idx or intra_luma_mpm_idx), and the remaining intra prediction mode information (ex. rem_intra_luma_pred_mode or intra_luma_mpm_remainder).
- the MPM list may be referred to in various terms such as an MPM candidate list and candModeList.
- FIG. 10 is a flowchart illustrating an intra prediction mode signaling procedure in an image encoding apparatus.
- the apparatus for encoding an image may configure an MPM list for a current block (S1010).
- the MPM list may include candidate intra prediction modes (MPM candidates) that are likely to be applied to the current block.
- the MPM list may include intra prediction modes of neighboring blocks, or may further include specific intra prediction modes according to a predetermined method.
- the image encoding apparatus may determine an intra prediction mode of the current block (S1020).
- the video encoding apparatus may perform prediction based on various intra prediction modes, and may determine an optimal intra prediction mode by performing rate-distortion optimization (RDO) based thereon.
- the video encoding apparatus may determine the optimal intra prediction mode using only the MPM candidates included in the MPM list, or may determine the optimal intra prediction mode by further using the remaining intra prediction modes as well as the MPM candidates included in the MPM list. Specifically, for example, if the intra prediction type of the current block is a specific type (eg, LIP, MRL, or ISP) other than the normal intra prediction type, the video encoding apparatus may determine the optimal intra prediction mode using only the MPM candidates. That is, in this case, the intra prediction mode for the current block may be determined only among the MPM candidates, and the mpm flag may not be encoded/signaled. In the case of the specific type, the video decoding apparatus may infer that the mpm flag is 1 without the mpm flag being separately signaled.
- when the intra prediction mode of the current block is in the MPM list, the video encoding apparatus may generate an mpm index (mpm idx) indicating one of the MPM candidates. If the intra prediction mode of the current block is not in the MPM list, the video encoding apparatus may generate remaining intra prediction mode information indicating the mode equal to the intra prediction mode of the current block among the remaining intra prediction modes not included in the MPM list.
- the image encoding apparatus may encode the intra prediction mode information and output it in the form of a bitstream (S1030).
- the intra prediction mode information may include the aforementioned mpm flag, mpm index, and/or remaining intra prediction mode information.
- the mpm index and the remaining intra prediction mode information are in an alternative relationship and are not signaled at the same time when indicating an intra prediction mode for one block. That is, when the mpm flag value is 1, the mpm index may be signaled, and when the mpm flag value is 0, the remaining intra prediction mode information may be signaled.
- the mpm flag is not signaled and its value is inferred as 1, and only the mpm index may be signaled. That is, in this case, the intra prediction mode information may include only the mpm index.
- S1020 is shown to be performed after S1010, but this is an example, and S1020 may be performed before S1010 or at the same time.
- FIG. 11 is a flowchart illustrating a procedure for determining an intra prediction mode in an image decoding apparatus.
- the image decoding apparatus may determine an intra prediction mode of the current block based on intra prediction mode information determined and signaled by the image encoding apparatus.
- the apparatus for decoding an image may acquire intra prediction mode information from a bitstream (S1110).
- the intra prediction mode information may include at least one of an mpm flag, an mpm index, and a remaining intra prediction mode.
- the video decoding apparatus may configure an MPM list (S1120).
- the MPM list is configured in the same way as the MPM list configured in the video encoding apparatus. That is, the MPM list may include intra prediction modes of neighboring blocks, or may further include specific intra prediction modes according to a predetermined method.
- S1120 is shown to be performed after S1110, but this is an example, and S1120 may be performed before S1110 or at the same time.
- the video decoding apparatus determines an intra prediction mode of the current block based on the MPM list and the intra prediction mode information (S1130). Step S1130 will be described in more detail with reference to FIG. 12.
- FIG. 12 is a flowchart for describing a procedure for deriving an intra prediction mode in more detail.
- Steps S1210 and S1220 of FIG. 12 may correspond to steps S1110 and S1120 of FIG. 11, respectively. Therefore, detailed descriptions of steps S1210 and S1220 are omitted.
- the image decoding apparatus may obtain intra prediction mode information from the bitstream, configure an MPM list (S1210 and S1220), and check a predetermined condition (S1230). Specifically, as shown in FIG. 12, when the value of the mpm flag is 1 (Yes in S1230), the video decoding apparatus may derive the candidate indicated by the mpm index among the MPM candidates in the MPM list as the intra prediction mode of the current block (S1240). As another example, when the value of the mpm flag is 0 (No in S1230), the video decoding apparatus may derive the intra prediction mode indicated by the remaining intra prediction mode information among the remaining intra prediction modes not included in the MPM list as the intra prediction mode of the current block (S1250).
- as another example, the video decoding apparatus may derive the candidate indicated by the mpm index in the MPM list as the intra prediction mode of the current block without checking the mpm flag (S1240).
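The decoder-side derivation of FIG. 12 can be sketched as follows; the example 6-entry MPM list contents and the 67-mode total are illustrative assumptions, and the remaining modes are indexed in increasing mode-number order as described above:

```python
# Sketch of the intra prediction mode derivation of FIG. 12.

def derive_intra_mode(mpm_flag, mpm_idx, mpm_remainder, mpm_list, num_modes=67):
    if mpm_flag == 1:
        # S1240: the candidate indicated by the mpm index is the mode.
        return mpm_list[mpm_idx]
    # S1250: index the modes not included in the MPM list in order of mode number.
    remaining = [m for m in range(num_modes) if m not in mpm_list]
    return remaining[mpm_remainder]

mpm_list = [0, 1, 50, 18, 46, 54]             # illustrative MPM candidates
assert derive_intra_mode(1, 2, None, mpm_list) == 50
# remaining modes begin 2, 3, 4, ... so remainder 0 maps to mode 2
assert derive_intra_mode(0, None, 0, mpm_list) == 2
```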
- FIG. 13 is a diagram illustrating an intra prediction direction according to an embodiment of the present disclosure.
- the intra prediction mode may include two non-directional intra prediction modes and 33 directional intra prediction modes.
- the non-directional intra prediction modes may include a planar intra prediction mode and a DC intra prediction mode, and the directional intra prediction modes may include 2 to 34 intra prediction modes.
- the planar intra prediction mode may be referred to as a planar mode, and the DC intra prediction mode may be referred to as a DC mode.
- the intra prediction mode includes two non-directional intra prediction modes and 65 extended directional intra prediction. It can include modes.
- the non-directional intra prediction modes may include a planar mode and a DC mode, and the directional intra prediction modes may include 2 to 66 intra prediction modes.
- the extended intra prediction modes can be applied to blocks of all sizes, and can be applied to both a luma component (a luma block) and a chroma component (a chroma block).
- the intra prediction mode may include two non-directional intra prediction modes and 129 directional intra prediction modes.
- the non-directional intra prediction modes may include a planar mode and a DC mode, and the directional intra prediction modes may include intra prediction modes 2 to 130.
- the intra prediction mode may further include a cross-component linear model (CCLM) mode for chroma samples in addition to the aforementioned intra prediction modes.
- the CCLM mode can be divided into L_CCLM, T_CCLM, and LT_CCLM, depending on whether left samples are considered, upper samples are considered, or both for LM parameter derivation, and can be applied only to a chroma component.
- the intra prediction mode may be indexed, for example, as shown in Table 2 below.
- in order to capture an arbitrary edge direction presented in natural video, the intra prediction modes may include 93 directional intra prediction modes along with two non-directional intra prediction modes. The non-directional intra prediction modes may include the planar mode and the DC mode.
- the directional intra prediction modes may include intra prediction modes numbered 2 to 80 and -1 to -14, as indicated by the arrows in FIG. 14.
- the planar mode may be indicated as INTRA_PLANAR, and the DC mode may be indicated as INTRA_DC.
- the directional intra prediction mode may be expressed as INTRA_ANGULAR-14 to INTRA_ANGULAR-1 and INTRA_ANGULAR2 to INTRA_ANGULAR80.
- an MPM list for the ALWIP may be separately configured. The MPM flag that may be included in the intra prediction mode information for the ALWIP may be referred to as intra_lwip_mpm_flag, the MPM index may be referred to as intra_lwip_mpm_idx, and the remaining intra prediction mode information may be referred to as intra_lwip_mpm_remainder.
- various prediction modes may be used for ALWIP, and a matrix and an offset for ALWIP may be derived according to an intra prediction mode for ALWIP.
- the matrix may be referred to as a (affine) weight matrix
- the offset may be referred to as an (affine) offset vector or an (affine) bias vector.
- the number of intra prediction modes for the ALWIP may be set differently based on the size of the current block. For example, i) when the height and width of the current block are both 4, 35 intra prediction modes (ie, intra prediction modes 0 to 34) may be available, ii) when the height and width of the current block are both 8 or less, 19 intra prediction modes (ie, intra prediction modes 0 to 18) may be available, and iii) in other cases, 11 intra prediction modes (ie, intra prediction modes 0 to 10) may be available.
- when the height and width of the current block are both 4, the block size type is called 0; when the height and width of the current block are both 8 or less, the block size type is called 1; and other cases are called block size type 2.
- the number of intra prediction modes for the ALWIP can be summarized as shown in Table 3. However, this is an example, and the block size type and the number of available intra prediction modes may be changed.
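The block size types and per-type mode counts above can be sketched as follows; as noted, these thresholds and counts are examples and may be changed:

```python
# Sketch of the block size type (sizeId) derivation and the number of ALWIP
# intra prediction modes per type summarized in Table 3.

def alwip_size_id(width, height):
    if width == 4 and height == 4:
        return 0          # both height and width are 4
    if width <= 8 and height <= 8:
        return 1          # both height and width are 8 or less
    return 2              # other cases

NUM_ALWIP_MODES = {0: 35, 1: 19, 2: 11}  # modes 0..34, 0..18, 0..10

assert alwip_size_id(4, 4) == 0 and NUM_ALWIP_MODES[0] == 35
assert alwip_size_id(8, 4) == 1 and NUM_ALWIP_MODES[1] == 19
assert alwip_size_id(16, 16) == 2 and NUM_ALWIP_MODES[2] == 11
```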
- the MPM list may be configured to include N MPMs.
- N may be 5 or 6.
- two neighboring blocks, that is, a left neighboring block A and an upper neighboring block B, may be considered.
- initialized default MPM may be considered to construct the MPM list.
- An MPM list may be configured by performing a pruning process for the two peripheral intra modes.
- the MPM list may include the {A, Planar, DC} modes, and may include three derived intra modes.
- the three derived intra modes can be obtained by adding a predetermined offset value to the peripheral intra mode and/or performing a modulo operation.
- the two peripheral intra modes are different from each other, the two peripheral intra modes are allocated as a first MPM mode and a second MPM mode, and the remaining four MPM modes may be derived from default modes and/or peripheral intra modes.
- a pruning process may be performed to prevent overlapping of the same mode in the MPM list.
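The offset-and-modulo derivation of neighboring angular modes mentioned above can be sketched as follows; the offsets (-1, +1, -2) and the 65-mode angular range 2..66 are assumptions in the style of a 67-mode design, not values stated in this description:

```python
# Hedged sketch of deriving intra modes by adding a predetermined offset to a
# neighboring angular mode and applying a modulo so the result stays angular.

def derived_angular(mode, offset, num_angular=65):
    # map mode into 0..64, add the offset modulo 65, then map back to 2..66
    return 2 + (mode - 2 + offset) % num_angular

a = 50  # example neighboring angular intra mode
derived = [derived_angular(a, off) for off in (-1, 1, -2)]
assert derived == [49, 51, 48]
assert derived_angular(2, -1) == 66  # the modulo wraps around the angular range
```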
- Truncated Binary Code (TBC) may be used for entropy encoding of modes other than the MPM mode.
- the above-described MPM list construction method can be used when ALWIP is not applied to the current block.
- the above-described MPM list construction method may be used for LIP, PDPC, MRL, and ISP intra prediction, or for deriving an intra prediction mode used in normal intra prediction.
- the left neighboring block or the upper neighboring block may be coded based on the above-described ALWIP. That is, ALWIP may be applied when coding the left neighboring block or the upper neighboring block. In this case, it is inappropriate to use the ALWIP intra prediction mode number of the neighboring block to which ALWIP is applied (left neighboring block/upper neighboring block) in the MPM list for the current block to which ALWIP is not applied.
- the intra prediction mode of the neighboring block (left neighboring block/upper neighboring block) to which ALWIP is applied may be regarded as a DC or planar mode. That is, when configuring the MPM list of the current block, the intra prediction mode of the neighboring block encoded with ALWIP may be replaced with a DC or planar mode.
- an intra prediction mode of a neighboring block (left neighboring block/upper neighboring block) to which ALWIP is applied may be mapped to a general intra prediction mode based on a mapping table and used to construct an MPM list of the current block. In this case, the mapping may be performed based on the block size type of the current block.
- the mapping table may be represented as Table 4.
- ALWIP IntraPredMode represents an ALWIP intra prediction mode of a neighboring block (left neighboring block/upper neighboring block)
- block size type (sizeId) represents a block size type of a neighboring block or a current block.
- Numbers under the block size type values of 0, 1, and 2 indicate a general intra prediction mode to which the ALWIP intra prediction mode is mapped in case of each block size type. For example, when the block size type of the current block is 0 and the ALWIP intra prediction mode number of the neighboring block is 10, the mapped general intra prediction mode number may be 18. However, the mapping relationship is an example and may be changed.
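The Table 4 lookup described above can be sketched as follows. Only the (block size type 0, ALWIP mode 10) -> 18 entry is taken from the example in the text; the other entries below are hypothetical placeholders, since the full table contents are not reproduced here:

```python
# Sketch of mapping a neighboring block's ALWIP intra prediction mode to a
# general intra prediction mode by block size type (Table 4 lookup).

ALWIP_TO_GENERAL = {
    0: {0: 0, 1: 18, 10: 18},   # sizeId 0; entry 10 -> 18 per the example above
    1: {0: 0, 1: 1, 10: 50},    # sizeId 1 (hypothetical entries)
    2: {0: 0, 1: 1, 10: 1},     # sizeId 2 (hypothetical entries)
}

def map_alwip_to_general(size_id, alwip_mode):
    return ALWIP_TO_GENERAL[size_id][alwip_mode]

assert map_alwip_to_general(0, 10) == 18
```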
- an MPM list for the current block to which the ALWIP is applied may be separately configured.
- the MPM list may be called by various names such as an ALWIP MPM list (or LWIP MPM list, candLwipModeList) to distinguish it from the MPM list when ALWIP is not applied to the current block.
- the ALWIP MPM list may include n candidates, for example, n may be 3.
- the ALWIP MPM list may be configured based on a left neighboring block and an upper neighboring block of the current block.
- the left neighboring block may represent the uppermost block among neighboring blocks adjacent to the left boundary of the current block.
- the upper neighboring block may represent a leftmost block among neighboring blocks adjacent to the upper boundary of the current block.
- a first candidate intra prediction mode (or candLwipModeA) may be set to be the same as the ALWIP intra prediction mode of the left neighboring block.
- the second candidate intra prediction mode (or candLwipModeB) may be set to be the same as the ALWIP intra prediction mode of the upper neighboring block.
- the left neighboring block or the upper neighboring block may be coded based on intra prediction rather than ALWIP. That is, when coding the left neighboring block or the upper neighboring block, an intra prediction type other than ALWIP may be applied.
- in this case, the ALWIP intra prediction mode of a neighboring block to which ALWIP is not applied may be regarded as an ALWIP intra prediction mode of a specific value (ex. 0, 1, or 2, etc.).
- a general intra prediction mode of a neighboring block (left neighboring block/upper neighboring block) to which ALWIP is not applied may be mapped to an ALWIP intra prediction mode based on a mapping table, and used to construct an ALWIP MPM list.
- the mapping may be performed based on the block size type of the current block.
- the mapping table may be represented as Table 5.
- IntraPredModeY represents an intra prediction mode of a neighboring block (left neighboring block/upper neighboring block).
- the intra prediction mode of the neighboring block may be an intra prediction mode for a luma component (sample), that is, a luma intra prediction mode.
- block size type sizeId represents a block size type of a neighboring block or a current block. Numbers below the block size type values of 0, 1, and 2 indicate the ALWIP intra prediction mode to which the normal intra prediction mode is mapped in case of each block size type. For example, when the block size type of the current block is 0 and the general intra prediction mode of the neighboring block is 10, the mapped ALWIP intra prediction mode number may be 9. However, the mapping relationship is an example and may be changed.
- even if ALWIP is applied to the neighboring block, the ALWIP intra prediction mode of the neighboring block may not be available for the current block depending on the block size type.
- a specific ALWIP intra prediction mode predefined for the first candidate and/or the second candidate may be used as the first candidate intra prediction mode or the second candidate intra prediction mode.
- a specific ALWIP intra prediction mode predefined for the third candidate may be used as the third candidate intra prediction mode.
- the predefined specific ALWIP intra prediction mode may be represented as shown in Table 6.
- the ALWIP MPM list may be configured based on the first candidate intra prediction mode and the second candidate intra prediction mode. For example, when the first candidate intra prediction mode and the second candidate intra prediction mode are different from each other, the first candidate intra prediction mode is put as the 0th candidate (ex. lwipMpmcand[0]) of the ALWIP MPM list, and The second candidate intra prediction mode may be put as a first candidate (ex. lwipMpmcand[1]) of the ALWIP MPM list. As the second candidate (ex. lwipMpmcand[2]) of the ALWIP MPM list, the above-described specific ALWIP intra prediction mode may be used.
- alternatively, one of the first candidate intra prediction mode and the second candidate intra prediction mode may be selected as the 0th candidate (ex. lwipMpmcand[0]) of the ALWIP MPM list, and the aforementioned predefined specific ALWIP intra prediction modes may be used as the first candidate (ex. lwipMpmcand[1]) and the second candidate (ex. lwipMpmcand[2]) of the ALWIP MPM list.
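The construction of the 3-entry ALWIP MPM list from candLwipModeA and candLwipModeB can be sketched as follows; the default-mode values standing in for the predefined modes of Table 6 are hypothetical placeholders:

```python
# Sketch of building the 3-candidate ALWIP MPM list (candLwipModeList) from the
# candidate modes of the left and upper neighboring blocks.

DEFAULT_ALWIP_MODES = [0, 1, 2]  # hypothetical stand-ins for Table 6 entries

def build_alwip_mpm_list(cand_a, cand_b):
    mpm = []
    for cand in (cand_a, cand_b):
        if cand is not None and cand not in mpm:  # pruning: avoid duplicates
            mpm.append(cand)
    for d in DEFAULT_ALWIP_MODES:                 # fill with predefined modes
        if len(mpm) == 3:
            break
        if d not in mpm:
            mpm.append(d)
    return mpm

assert build_alwip_mpm_list(5, 7) == [5, 7, 0]   # A, B, then a default
assert build_alwip_mpm_list(5, 5) == [5, 0, 1]   # duplicate B pruned
```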
- the ALWIP intra prediction mode of the current block may be derived based on the ALWIP MPM list.
- the MPM flag that may be included in the intra prediction mode information for the ALWIP may be referred to as intra_lwip_mpm_flag
- the MPM index may be referred to as intra_lwip_mpm_idx
- the remaining intra prediction mode information may be referred to as intra_lwip_mpm_remainder.
- the procedure for deriving an ALWIP intra prediction mode from the ALWIP MPM list may be performed as described above with reference to FIGS. 10 and 11.
- the ALWIP intra prediction mode of the current block may be signaled directly.
- Affine linear weighted intra prediction (ALWIP)
- ALWIP may also be called Matrix weighted intra prediction (MWIP) or Matrix based intra prediction (MIP).
- One line including reconstructed peripheral boundary samples may be used as an input.
- the reconstructed peripheral boundary samples that are not available can be replaced with available samples according to the method performed in conventional intra prediction.
- the process of generating a prediction signal by applying ALWIP may include the following three steps.
- Step 1 (averaging): averaged sample values may be obtained by averaging the reconstructed neighboring boundary samples of the current block.
- Step 2 (matrix vector multiplication): a reduced prediction signal for a subsampled set of samples in the original block may be generated by performing matrix vector multiplication taking the averaged sample values as input and adding an offset.
- Step 3 ((linear) interpolation): a prediction signal at the remaining positions may be generated by linearly interpolating the prediction signal for the subsampled set.
- the linear interpolation may be a single step linear interpolation in each direction.
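The three steps above can be sketched for a 4x4 block as follows; the helper name alwip_4x4 and the matrix/offset values are illustrative placeholders rather than the trained matrices of set S0, and for a 4x4 block the interpolation step is not necessary since 16 samples are produced directly:

```python
# Toy sketch of the three ALWIP steps for a 4x4 block.

def alwip_4x4(bdry_top, bdry_left, A_k, b_k):
    # Step 1 (averaging): two averages per boundary -> 4 input values (bdry_red).
    bdry_red = [
        (bdry_top[0] + bdry_top[1]) // 2, (bdry_top[2] + bdry_top[3]) // 2,
        (bdry_left[0] + bdry_left[1]) // 2, (bdry_left[2] + bdry_left[3]) // 2,
    ]
    # Step 2 (matrix vector multiplication): 16x4 matrix times 4-vector, plus offset.
    pred = [sum(a * x for a, x in zip(row, bdry_red)) + b
            for row, b in zip(A_k, b_k)]
    # Step 3 (interpolation): skipped for 4x4; the 16 samples cover the block.
    return pred

A_k = [[1, 0, 0, 0]] * 16          # illustrative 16-row, 4-column matrix
b_k = [0] * 16                     # illustrative offset vector of size 16
out = alwip_4x4([10, 12, 14, 16], [8, 8, 8, 8], A_k, b_k)
assert len(out) == 16 and out[0] == 11
```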
- the matrix and offset required to generate the prediction signal may be obtained from three matrix sets S 0 , S 1 , and S 2 .
- the set S 0 may consist of 18 matrices and 18 offset vectors. In this case, each matrix is composed of 16 rows and 4 columns, and the size of each offset vector may be 16.
- the matrix and offset vector of the set S 0 can be used for blocks of size 4x4.
- Set S 1 may consist of 10 matrices and 10 offset vectors.
- each matrix is composed of 16 rows and 8 columns, and the size of each offset vector may be 16.
- the matrix and offset vector of the set S 1 can be used for blocks of size 4x8, 8x4 and 8x8.
- Set S 2 may consist of 6 matrices and 6 offset vectors.
- each matrix is composed of 64 rows and 8 columns, and the size of each offset vector may be 64.
- the matrix and offset vectors of set S 2 can be used for all other types of blocks.
- the total number of multiplications required for matrix vector multiplication is always less than or equal to 4xWxH. That is, a maximum of 4 multiplications per sample is required for the ALWIP mode.
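- As a quick check of the 4xWxH bound, the per-sample multiplication cost can be computed from the matrix shapes of the sets S 0 , S 1 and S 2 described above (a sketch; the shapes are taken from the text):

```python
# Check that the matrix-vector multiplication cost of ALWIP stays within
# 4 multiplications per sample, using the matrix shapes described above.
# Block size -> (matrix rows, matrix columns), taken from sets S0/S1/S2.
cases = {
    (4, 4): (16, 4),    # set S0: 16x4 matrices
    (8, 8): (16, 8),    # set S1: 16x8 matrices
    (16, 16): (64, 8),  # set S2: 64x8 matrices
}

for (w, h), (rows, cols) in cases.items():
    mults = rows * cols            # multiplications for A * bdry_red
    per_sample = mults / (w * h)   # amortized over the full block
    print(f"{w}x{h}: {mults} mults, {per_sample} per sample")
    assert mults <= 4 * w * h      # the 4xWxH bound from the text
```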
- blocks other than the blocks shown in FIGS. 15 to 18 may be processed by one of the methods described with reference to FIGS. 15 to 18.
- 15 is a diagram illustrating an ALWIP process for a 4x4 block.
- two average values may be obtained along each boundary. That is, two average values (bdry top ) may be obtained by selecting and averaging two peripheral boundary samples at the top of the current block. In addition, two average values (bdry left ) may be obtained by selecting and averaging two peripheral border samples on the left side of the current block.
- matrix vector multiplication may be performed by inputting four sample values (bdry red ) generated in the averaging step.
- the matrix A k may be obtained from the set S 0 using the ALWIP mode k.
- 16 final prediction samples may be generated. In this case, linear interpolation is not necessary.
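- The 4x4 flow above can be sketched end to end; the matrix A k and offset b k below are random hypothetical stand-ins for a set S 0 kernel, not actual ALWIP data:

```python
import random

random.seed(0)

# Hypothetical stand-ins for a set-S0 kernel: a real ALWIP matrix A_k
# (16 rows x 4 columns) and offset vector b_k (size 16) would be taken
# from set S0 according to the ALWIP mode k.
A_k = [[random.randint(-8, 8) for _ in range(4)] for _ in range(16)]
b_k = [random.randint(0, 32) for _ in range(16)]

top = [10, 12, 14, 16]    # 4 reconstructed top boundary samples
left = [9, 11, 13, 15]    # 4 reconstructed left boundary samples

# Averaging step: reduce each boundary to two rounded averages.
bdry_top = [(top[0] + top[1] + 1) >> 1, (top[2] + top[3] + 1) >> 1]
bdry_left = [(left[0] + left[1] + 1) >> 1, (left[2] + left[3] + 1) >> 1]
bdry_red = bdry_top + bdry_left   # 4 input samples

# Matrix-vector multiplication plus offset: all 16 prediction samples of
# the 4x4 block are produced directly, so no interpolation is needed.
pred = [sum(a * x for a, x in zip(row, bdry_red)) + b
        for row, b in zip(A_k, b_k)]
assert len(pred) == 16
```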
- 16 is a diagram illustrating an ALWIP process for an 8x8 block.
- along each boundary, four average values may be obtained. That is, four average values (bdry top ) can be obtained by selecting and averaging two peripheral border samples at a time at the top of the current block.
- four average values (bdry left ) may be obtained by selecting and averaging two peripheral border samples on the left side of the current block.
- matrix vector multiplication may be performed by inputting eight sample values (bdry red ) generated in the averaging step.
- the matrix A k may be obtained from the set S 1 using the ALWIP mode k.
- 16 odd-numbered samples (pred red ) in the prediction block may be generated.
- 17 is a diagram illustrating an ALWIP process for an 8x4 block.
- four average values may be obtained along the horizontal boundary. That is, four average values (bdry top ) can be obtained by selecting and averaging two peripheral border samples at a time at the top of the current block. In addition, the four peripheral boundary samples (bdry left ) on the left side of the current block may be used as they are. Thereafter, matrix vector multiplication may be performed by inputting the eight sample values (bdry red ) generated in the averaging step. In this case, the matrix A k may be obtained from the set S 1 using the ALWIP mode k. As a result of adding the offset b k to the result of the matrix vector multiplication, samples (pred red ) at 16 positions in the prediction block may be generated.
- the ALWIP process for the 4x8 block may be a transposed process of the process for the 8x4 block.
- 18 is a diagram illustrating an ALWIP process for a 16x16 block.
- along each boundary, four average values may be obtained. For example, eight average values may be obtained by selecting and averaging two peripheral boundary samples of the current block at a time, and four average values may then be obtained by selecting and averaging two of the eight values at a time. Alternatively, four average values may be obtained by selecting and averaging four peripheral samples of the current block at a time.
- matrix vector multiplication may be performed by inputting eight sample values (bdry red ) generated in the averaging step.
- the matrix A k may be obtained from the set S 2 using the ALWIP mode k.
- as a result, 64 samples (pred red ) located at odd positions in the prediction block may be generated.
- when pred red includes 32 samples (e.g., in the case of a Wx4 block with W greater than 8), only horizontal interpolation may be performed.
- the ALWIP process for the 8xH block or 4xH block may be a transposed process of the process for the Wx8 block or the Wx4 block.
- FIG. 19 is a diagram illustrating an averaging step in an ALWIP process according to the present disclosure.
- Averaging can be applied to each of the left boundary and/or the upper boundary of the current block.
- the boundary represents neighboring reference samples adjacent to the boundary of the current block, like the gray samples shown in FIG. 19.
- the left boundary bdry left indicates left peripheral reference samples adjacent to the left boundary of the current block.
- the upper boundary bdry top represents reference samples around the upper edge adjacent to the upper boundary of the current block.
- if the current block is a 4x4 block, each boundary size may be reduced to two samples based on an averaging process. If the current block is a block other than a 4x4 block, each boundary size may be reduced to 4 samples based on an averaging process.
- the input boundaries bdry^top and bdry^left may be reduced to smaller boundaries bdry^top_red and bdry^left_red.
- bdry^top_red and bdry^left_red may each be composed of 2 samples in the case of a 4x4 block, and of 4 samples in the other cases.
- in the case of a 4x4 block, bdry^top_red may be generated as in Equation 1: bdry^top_red[i] = (bdry^top[2i] + bdry^top[2i+1] + 1) >> 1.
- in Equation 1, i may have a value of 0 or more and less than 2. bdry^left_red may be generated similarly to Equation 1.
- in the other cases, bdry^top_red may be generated as in Equation 2: bdry^top_red[i] = ((sum of bdry^top[i*(W/4) + j] over j = 0 .. W/4 - 1) + W/8) >> log2(W/4).
- in Equation 2, i may have a value of 0 or more and less than 4. bdry^left_red may be generated similarly to Equation 2.
- the reduced boundary vector bdry red may have a size of 4 for 4x4 blocks and 8 for other blocks. Equation 3 shows how bdry red may be created by concatenating bdry^top_red and bdry^left_red according to the mode (ALWIP mode) and the size (W, H) of the block.
- in Equation 3, the order of concatenating bdry^top_red and bdry^left_red may differ according to the size (W, H) of the current block and the ALWIP mode. For example, when the current block is a 4x4 block and the mode is less than 18, bdry red may be created by concatenating bdry^top_red followed by bdry^left_red. Or, for example, when the current block is a 4x4 block and the mode is 18 or greater, bdry red may be created by concatenating bdry^left_red followed by bdry^top_red.
- the order of concatenation may be determined based on information (eg, flag information) signaled through the bitstream.
- for large blocks, a second version of the averaged boundary may be used for the interpolation step. When min(W, H) > 8 and W >= H, bdry^top_redII may be generated as in Equation 4: bdry^top_redII[i] = ((sum of bdry^top[i*(W/8) + j] over j = 0 .. W/8 - 1) + W/16) >> log2(W/8). In Equation 4, i may have a value of 0 or more and less than 8.
- when min(W, H) > 8 and W < H, bdry^left_redII may be generated similarly to Equation 4.
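- The averaging step can be sketched as a single reduction routine, assuming the commonly documented MIP averaging (rounded averages over power-of-two groups of boundary samples):

```python
def reduce_boundary(samples, out_size):
    """Average a full boundary down to out_size samples.

    For a 4x4 block out_size is 2; otherwise it is 4. Each output is the
    rounded average of len(samples) // out_size consecutive inputs. This
    follows the commonly documented MIP averaging; treat it as a sketch.
    """
    n = len(samples) // out_size          # samples averaged per output
    shift = n.bit_length() - 1            # log2(n), n is a power of two
    return [
        (sum(samples[i * n:(i + 1) * n]) + (n >> 1)) >> shift
        for i in range(out_size)
    ]

# 4x4 block: the 4-sample top boundary is reduced to 2 averages.
print(reduce_boundary([10, 12, 14, 16], 2))   # [11, 15]
# 16x16 block: the 16-sample top boundary is reduced to 4 averages.
print(reduce_boundary(list(range(16)), 4))    # [2, 6, 10, 14]
```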
- a reduced prediction signal pred red may be generated using the bdry red generated in the averaging step.
- the reduced prediction signal pred red may be a signal of a downsampled block having a size of W red x H red .
- W red and H red may be defined as in Equation 5: W red = 4 when max(W, H) <= 8 and W red = min(W, 8) otherwise; H red = 4 when max(W, H) <= 8 and H red = min(H, 8) otherwise.
- the reduced prediction signal pred red may be generated by matrix vector multiplication and offset addition as shown in Equation 6: pred red = A * bdry red + b.
- A may be a matrix composed of W red x H red rows and 4 columns (when the current block is a 4x4 block) or 8 columns (otherwise).
- the offset vector b may be a vector of W red xH red size.
- the matrix A and the offset vector b can be obtained from the matrix sets S 0 , S 1 , and S 2 as follows.
- the index (idx) may be set to idx(W, H) according to Equation 7: idx(W, H) = 0 when W = H = 4, idx(W, H) = 1 when max(W, H) = 8, and idx(W, H) = 2 otherwise. That is, idx may be set based on the width (W) and height (H) of the current block.
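- A sketch of the size classification, treating the exact thresholds of Equations 5 and 7 as assumptions reconstructed from the set descriptions above:

```python
def mip_idx(w, h):
    # Equation 7 sketch: select the matrix set index from the block
    # size (treat the exact thresholds as an assumption).
    if w == 4 and h == 4:
        return 0          # set S0
    if max(w, h) == 8:
        return 1          # set S1
    return 2              # set S2

def reduced_size(w, h):
    # Equation 5 sketch: size of the downsampled prediction block.
    if max(w, h) <= 8:
        return 4, 4
    return min(w, 8), min(h, 8)

for w, h in [(4, 4), (8, 4), (8, 8), (16, 16), (32, 8)]:
    print((w, h), mip_idx(w, h), reduced_size(w, h))
```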
- the variable m may be set based on the ALWIP mode and the width (W) and height (H) of the current block.
- the matrix A may be determined as the m-th matrix A_m^idx of the set indicated by idx, and the offset vector b may be determined as the m-th offset vector b_m^idx. When the index idx is 2 and min(W, H) is 4, the matrix A may be generated, when W is 4, by omitting from A_m^idx all rows corresponding to odd x coordinates in the downsampled block, or, when H is 4, by omitting from A_m^idx all rows corresponding to odd y coordinates in the downsampled block.
- the interpolation process may be referred to as a linear interpolation or a bilinear interpolation process.
- the interpolation process may include two steps: vertical interpolation and horizontal interpolation.
- 20 is a diagram for explaining an interpolation step in an ALWIP process according to the present disclosure.
- the prediction signal may be generated by linear interpolation of the reduced prediction signal pred red (W red xH red ).
- linear interpolation can be performed in a vertical direction, a horizontal direction, or in both directions.
- when W < H, interpolation in the horizontal direction may be performed first; otherwise, interpolation in the vertical direction may be performed first.
- for example, as shown in FIG. 20, in the case of an 8x8 block, vertical interpolation is performed first, and then horizontal interpolation is performed to generate the final prediction signal pred.
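- The separable interpolation can be sketched with a simple 1-D linear upsampler applied vertically and then horizontally; the handling of the sample outside the block is a simplification of the extended reduced prediction signal, not the normative rule:

```python
def upsample_1d(known, factor, outside):
    """Linearly interpolate `known` samples, assumed to sit at positions
    factor-1, 2*factor-1, ..., up to len(known)*factor positions. The
    `outside` value stands in for the sample just above/left of the
    block (a simplification of the extended reduced prediction signal).
    """
    ext = [outside] + list(known)
    out = []
    for i in range(len(known)):
        lo, hi = ext[i], ext[i + 1]
        for p in range(1, factor + 1):   # weight toward hi: 1..factor
            out.append((lo * (factor - p) + hi * p + factor // 2) // factor)
    return out

# 2x2 reduced prediction signal upsampled to 4x4: vertical interpolation
# first (as in the 8x8 example above), then horizontal interpolation.
pred_red = [[10, 20], [30, 40]]
cols = [upsample_1d([row[j] for row in pred_red], 2, 0) for j in range(2)]
rows = [upsample_1d([cols[j][i] for j in range(2)], 2, 0) for i in range(4)]
assert len(rows) == 4 and len(rows[0]) == 4
```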
- An extended reduced prediction signal may be generated by Equation 10.
- a vertically interpolated prediction signal may be generated by performing linear interpolation in the vertical direction using Equation 11.
- Linear interpolation in the horizontal direction may be performed similarly to the linear interpolation in the vertical direction.
- in this case, the roles of the rows and columns, and of the x-coordinate and y-coordinate, may be interchanged.
- the expanded reduced prediction signal may be an extension of the reduced prediction signal to the left boundary.
- a prediction signal of the current block may be finally generated.
- the video encoding apparatus may derive a residual block (residual samples) based on a block (prediction samples) predicted through intra/inter/IBC prediction or the like, and may derive quantized transform coefficients by applying transform and quantization to the derived residual samples.
- Information on the quantized transform coefficients may be included in the residual coding syntax and output in a bitstream form after encoding.
- the image decoding apparatus may obtain information on the quantized transform coefficients (residual information) from the bitstream, and may decode the residual information to derive the quantized transform coefficients.
- the image decoding apparatus may derive residual samples through inverse quantization/inverse transformation based on the quantized transform coefficients.
- in some cases, the quantization/inverse quantization and/or the transform/inverse transform may be omitted.
- the transform coefficient may be called a coefficient or a residual coefficient, or may still be called a transform coefficient for uniformity of expression. Whether or not the transform/inverse transform is omitted may be signaled based on transform_skip_flag.
- the transform/inverse transform may be performed based on transform kernel(s).
- a multiple transform selection (MTS) scheme may be applied.
- some of a plurality of transform kernel sets may be selected and applied to the current block.
- the transform kernel may be referred to by various terms, such as a transform matrix or a transform type.
- the transform kernel set may represent a combination of a vertical transform kernel (vertical transform kernel) and a horizontal transform kernel (horizontal transform kernel).
- MTS index information (eg, a tu_mts_idx syntax element) may be signaled to indicate which transform kernel set is applied to the current block.
- the transform kernel set according to the value of the MTS index information may be as shown in Table 7.
- tu_mts_idx represents MTS index information
- trTypeHor and trTypeVer represent a horizontal transformation kernel and a vertical transformation kernel, respectively.
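- Since Table 7 itself is not reproduced here, the following sketch assumes the commonly documented VVC-style mapping (trType 0 = DCT-II, 1 = DST-VII, 2 = DCT-VIII); treat the table contents as an assumption:

```python
# Hypothetical sketch of Table 7, assuming the VVC-style mapping where
# trType 0 = DCT-II, 1 = DST-VII and 2 = DCT-VIII.
MTS_TABLE = {
    0: (0, 0),  # DCT-II  / DCT-II
    1: (1, 1),  # DST-VII / DST-VII
    2: (2, 1),  # DCT-VIII / DST-VII
    3: (1, 2),  # DST-VII / DCT-VIII
    4: (2, 2),  # DCT-VIII / DCT-VIII
}

def kernels_for(tu_mts_idx):
    # Look up (trTypeHor, trTypeVer) for a given tu_mts_idx value.
    tr_type_hor, tr_type_ver = MTS_TABLE[tu_mts_idx]
    return tr_type_hor, tr_type_ver

print(kernels_for(0))  # (0, 0)
```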
- the transform kernel set may be determined based on, for example, cu_sbt_horizontal_flag and cu_sbt_pos_flag.
- when cu_sbt_horizontal_flag has a value of 1, it may indicate that the current block is horizontally divided into two transform blocks, and when it has a value of 0, it may indicate that the current block is vertically divided into two transform blocks.
- when cu_sbt_pos_flag has a value of 1, it indicates that tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr for the first transform block of the current block do not exist in the bitstream, and when it has a value of 0, it indicates that tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr for the second transform block of the current block do not exist in the bitstream.
- tu_cbf_luma, tu_cbf_cb, and tu_cbf_cr may be syntax elements indicating whether the transform block of the corresponding color component (luma, cb, cr) includes at least one non-zero transform coefficient. For example, when tu_cbf_luma has a value of 1, it may indicate that the corresponding luma transform block includes at least one non-zero transform coefficient.
- trTypeHor and trTypeVer may be determined according to Table 8 below based on cu_sbt_horizontal_flag and cu_sbt_pos_flag.
- trTypeHor and trTypeVer may each be determined as 1.
- the transform kernel set may be determined based on, for example, an intra prediction mode for a current block.
- the MTS-based transform is applied to a primary transform, and additionally, a secondary transform may be further applied.
- the second-order transform may be applied only to coefficients in the upper left wxh region of the coefficient block to which the first-order transform is applied, and may be referred to as a reduced secondary transform (RST).
- RST reduced secondary transform
- w and/or h may be 4 or 8.
- the first-order transform and the second-order transform may be sequentially applied to the residual block, and in the inverse transform, the second-order inverse transform and the first-order inverse transform may be sequentially applied to the transform coefficients.
- the second-order transform (RST transform) may be referred to as a low frequency coefficients transform (LFC transform or LFCT).
- the second-order inverse transform may be referred to as an inverse LFC transform or an inverse LFCT.
- 21 is a diagram for describing a transformation method applied to a residual block.
- the transform unit 120 of the image encoding apparatus receives residual samples, performs a primary transform, generates transform coefficients A, and generates a second transform (Secondary Transform). ) To generate transform coefficients B.
- the inverse transform unit 150 of the image encoding apparatus and the inverse transform unit 230 of the image decoding apparatus receive transform coefficients B and perform Inverse Secondary Transform to generate transform coefficients A, Residual samples may be generated by performing Inverse Primary Transform.
- the first-order transformation and the first-order inverse transformation may be performed based on MTS.
- the second-order transform and the second-order inverse transform may be performed only in the low-frequency region (the upper left wxh region of the block).
- the transformation/inverse transformation may be performed in units of CU (coding unit) or TU (transformation unit). That is, the transform/inverse transform may be applied to residual samples in a CU or residual samples in a TU.
- the CU size and the TU size may be the same, or a plurality of TUs may exist in the CU region.
- the CU size may generally indicate the luma component (sample) CB (coding block) size.
- the TU size may generally indicate the luma component (sample) TB (transform block) size. The chroma component (sample) CB or TB size may be derived from the luma CB or TB size according to the color format (chroma format).
- the TU size may be derived based on maxTbSize.
- maxTbSize may mean the maximum size on which the transform can be performed.
- when the CU size is greater than maxTbSize, a plurality of TUs (TBs) having the maxTbSize may be derived from the CU, and transformation/inverse transformation may be performed in units of the TU (TB).
- the maxTbSize may be considered to determine whether to apply various intra prediction types such as ISP.
- the information on the maxTbSize may be determined in advance, or may be generated and encoded by an image encoding apparatus and signaled to an image decoding apparatus.
- the secondary transform of the present disclosure may be a mode-dependent non-separable secondary transform (MDNSST).
- MDNSST can be applied only to coefficients in the low frequency domain after the first-order transform is performed.
- when both the width W and the height H of the current transform coefficient block are 8 or more, an 8x8 non-separable secondary transform may be applied to the upper left 8x8 region of the current transform coefficient block.
- otherwise, when W or H is less than 8, a 4x4 non-separable secondary transform may be applied to the upper left min(8, W) x min(8, H) region of the current transform coefficient block.
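- The region selection described above can be sketched as:

```python
def nsst_region(w, h):
    """Pick the secondary-transform kernel size and the top-left region
    it applies to, following the rule described above."""
    if w >= 8 and h >= 8:
        return "8x8", (8, 8)                  # 8x8 NSST on top-left 8x8
    return "4x4", (min(8, w), min(8, h))      # 4x4 NSST otherwise

print(nsst_region(16, 16))  # ('8x8', (8, 8))
print(nsst_region(4, 8))    # ('4x4', (4, 8))
```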
- a total of 35x3 non-separable secondary transforms may be available for 4x4 blocks and 8x8 blocks.
- 35 denotes the number of transform sets specified by the intra prediction mode
- 3 denotes the number of NSST candidates (candidate kernels) for each intra prediction mode.
- the mapping relationship between the intra prediction mode and the corresponding transform set may be shown in Table 9, for example.
- in some cases, the transform set for the secondary transform (inverse transform) may be set 0.
- the index NSST Idx may be encoded and signaled.
- when MDNSST is not applied to the current block, an NSST Idx having a value of 0 may be signaled.
- MDNSST may not be applied to the transform skipped block.
- when an NSST Idx having a non-zero value for the current CU is signaled, MDNSST may not be applied to a block of a component for which the transformation is skipped in the current intra CU. When transformation is skipped for the blocks of all components in the current CU, or when the number of non-zero coefficients of the block in which the transformation is performed is less than 2, NSST Idx may not be signaled for the current CU. If NSST Idx is not signaled, its value can be inferred as 0.
- NSST is not applied to the entire block (TU in the case of HEVC) to which the first-order transform is applied, but can be applied only to the top-left 8x8 area or 4x4 area. For example, if the size of the block is 8x8 or more, 8x8 NSST may be applied, and if it is less than 8x8, 4x4 NSST may be applied. In addition, when 8x8 NSST is applied, 4x4 NSST may be applied to each after dividing into 4x4 blocks.
- both the 8x8 NSST and the 4x4 NSST follow the configuration of the transform set described above, and as they are non-separable transforms, the 8x8 NSST receives 64 inputs and produces 64 outputs, and the 4x4 NSST has 16 inputs and 16 outputs.
- NSST/RT/RST may be referred to as a low frequency non-separable transform (LFNST).
- LFNST can be applied in a non-separated transform form based on a transform kernel (transform matrix or transform matrix kernel) for low-frequency transform coefficients located in the upper left region of the transform coefficient block.
- the NSST index or (R)ST index may be referred to as an LFNST index.
- an index (NSST idx or st_idx syntax) for LFNST may be transmitted in the same manner as before. That is, an index for specifying one of the transform kernels constituting the LFNST transform set for the current block to which the MIP is applied may be transmitted.
- Table 10 shows the syntax of a CU according to an embodiment of the present disclosure.
- when intra_mip_flag[ x0 ][ y0 ] has a value of 1, it indicates that MIP is applied to the luma samples of the current CU, and when it has a value of 0, it indicates that MIP is not applied. If intra_mip_flag[ x0 ][ y0 ] does not exist in the bitstream, its value can be inferred as 0.
- intra_mip_mpm_flag[ x0 ][ y0 ], intra_mip_mpm_idx[ x0 ][ y0 ] and intra_mip_mpm_remainder[ x0 ][ y0 ] may be used to specify the MIP mode for the luma samples.
- the coordinate (x0, y0) may be the upper left position of the luma samples of the current coding block.
- when intra_mip_mpm_flag[ x0 ][ y0 ] has a value of 1, it may indicate that the MIP mode is derived from an intra-predicted CU surrounding the current CU. If intra_mip_mpm_flag[ x0 ][ y0 ] does not exist in the bitstream, its value can be inferred as 1.
- st_idx[x0][y0] may specify the transform kernels (LFNST kernels) applied to LFNST for the current block. That is, st_idx may indicate one of the transform kernels included in the LFNST transform set. As described above, the LFNST transform set may be determined based on the intra prediction mode and the block size of the current block. In this disclosure, st_idx may be referred to as lfnst_idx.
- MIP technology uses a different number of MIP modes depending on the block size. For example, when cbWidth and cbHeight represent the width and height of the current block, a variable (sizeId) for classifying the block size may be derived as follows.
- when both cbWidth and cbHeight are 4, sizeId may be set to 0. Otherwise, when both cbWidth and cbHeight are 8 or less, sizeId may be set to 1. In all other cases, sizeId may be set to 2. For example, when the current block is 16x16, sizeId may be set to 2.
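- The sizeId derivation can be sketched as:

```python
def mip_size_id(cb_width, cb_height):
    # Derivation of sizeId as described above.
    if cb_width == 4 and cb_height == 4:
        return 0
    if cb_width <= 8 and cb_height <= 8:
        return 1
    return 2

print(mip_size_id(4, 4))    # 0
print(mip_size_id(8, 4))    # 1
print(mip_size_id(16, 16))  # 2, matching the 16x16 example in the text
```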
- Table 11 shows the number of available MIP modes according to the sizeId.
- the LFNST technique may determine a transform set (lfnstSetIdx) with reference to Table 12 based on 67 intra prediction modes (lfnstPredModeIntra).
- lfnstPredModeIntra of Table 12 is a mode derived based on the intra prediction mode of the current block, and includes the wide-angle modes and the CCLM modes described with reference to FIG. 14. Accordingly, lfnstPredModeIntra of Table 12 may have a value of 0 to 83.
- an index of a transform set of LFNST may be determined by transforming the MIP mode into an existing intra prediction mode (a mode described with reference to FIGS. 13 and 14). Specifically, based on the MIP mode and the block size (sizeId) of the current block, an intra prediction mode for determining the index of the transform set may be determined with reference to Table 13.
- MIP mode indicates the MIP mode of the current block
- sizeId indicates the size type of the current block.
- the numbers in the sizeId 0, 1, and 2 columns indicate a general intra prediction mode (eg, one of the 67 general intra prediction modes) mapped to the MIP mode for each block size type.
- the mapping relationship is an example and may be changed.
- for example, the mapped general intra prediction mode number may be 18.
- in this case, lfnstSetIdx has a value of 2 according to Table 12, and the LFNST transform set may be determined based on this. That is, the LFNST transform set having an index value of 2 is selected, and the transform kernel indicated by st_idx (or lfnst_idx) among the transform kernels included in the corresponding transform set may be used for the second transform/inverse transform of the current block.
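- The two-stage mapping (MIP mode to general intra prediction mode, then to lfnstSetIdx) can be sketched as follows; the MIP_TO_INTRA entries are hypothetical stand-ins for Table 13, and the set boundaries follow a VVC-style grouping that should be treated as an assumption:

```python
# Hypothetical fragment of the Table 13 mapping: (MIP mode, sizeId) ->
# a general intra prediction mode. Only illustrative entries are shown.
MIP_TO_INTRA = {
    (0, 0): 0,     # assumed entry
    (5, 2): 18,    # illustrates the mapped mode 18 from the text
}

def lfnst_set_idx(pred_mode_intra):
    # Sketch of Table 12 for modes 0..66 using a VVC-style grouping;
    # treat the exact boundaries as an assumption.
    if pred_mode_intra <= 1:
        return 0
    if pred_mode_intra <= 12:
        return 1
    if pred_mode_intra <= 23:
        return 2
    if pred_mode_intra <= 44:
        return 3
    if pred_mode_intra <= 55:
        return 2
    return 1

mapped = MIP_TO_INTRA[(5, 2)]         # hypothetical MIP mode 5, sizeId 2
print(mapped, lfnst_set_idx(mapped))  # 18 2 -> LFNST transform set 2
```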
- 22 is a flowchart illustrating a method of performing a quadratic transformation/inverse transformation according to the present disclosure.
- the image encoding apparatus may perform second-order transform on transform coefficients generated by performing first-order transform according to the order shown in FIG. 22.
- the image decoding apparatus may perform second-order inverse transformation on transform coefficients reconstructed from a bitstream according to the order shown in FIG. 22.
- first, it may be determined whether LFNST is applied to the current transform block (S2210). The determination of whether LFNST is applied may be performed based on, for example, st_idx or lfnst_idx (NSST idx) restored from the bitstream. If LFNST is not applied, the second-order transform/inverse transform may not be performed for the current transform block.
- when LFNST is applied, it may be determined whether MIP is applied to the current block (S2220). Whether MIP is applied to the current block may be determined using the aforementioned flag information (eg, intra_mip_flag).
- when MIP is applied to the current block, an intra prediction mode for determining the LFNST transform set may be derived (S2230).
- that is, an intra prediction mode for determining the LFNST transform set may be derived based on the MIP mode.
- the MIP mode may be restored based on information signaled through a bitstream.
- the derivation of the intra prediction mode based on the MIP mode may be performed by a method previously set in the image encoding apparatus and the image decoding apparatus.
- step S2230 may be performed using a mapping table between the MIP mode and the intra prediction mode.
- however, the method is not limited to the above method; for example, when MIP is applied, a predefined intra prediction mode (eg, the planar mode) may be used as the intra prediction mode for determining the LFNST transform set.
- an LFNST transform set may be determined based on the derived intra prediction mode (S2240).
- when MIP is not applied to the current block, the intra prediction mode of the current block may be used to determine the LFNST transform set (S2240).
- Step S2240 may correspond to the process of determining lfnstSetIdx described with reference to Table 12.
- a transform kernel to be used for the second transform/inverse transform of the current transform block may be selected (S2250). The selection of a transform kernel may be performed based on, for example, st_idx or lfnst_idx restored from the bitstream.
- a second-order transform/inverse transform may be performed on the current transform block using the selected transform kernel (S2260).
- the video encoding apparatus may determine an optimal mode by comparing rate-distortion costs. Accordingly, the apparatus for encoding an image may use the above-described flag information to perform the determination of step S2210 or step S2220, but is not limited thereto.
- the image decoding apparatus may perform a determination of step S2210 or step S2220 based on information signaled through a bitstream from the image encoding device.
- according to the present embodiment, when LFNST is applied to a block to which MIP is applied, an intra prediction mode for determining the LFNST transform set can be derived, and thus a more efficient LFNST can be performed.
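- The decision flow of FIG. 22 (steps S2210 to S2260) can be sketched with caller-supplied stand-in tables (the tables below are toy values, not the normative ones):

```python
def choose_lfnst_kernel(lfnst_idx, mip_applied, mip_mode, intra_mode,
                        mip_to_intra, set_of_mode, kernels_per_set):
    """Sketch of the S2210-S2260 flow: decide whether LFNST applies,
    derive the intra mode (from the MIP mode when MIP is used), pick
    the transform set, then pick the kernel. All tables are
    caller-supplied stand-ins, not the normative ones."""
    if lfnst_idx == 0:                     # S2210: LFNST not applied
        return None
    if mip_applied:                        # S2220/S2230
        intra_mode = mip_to_intra[mip_mode]
    set_idx = set_of_mode(intra_mode)      # S2240
    return kernels_per_set[set_idx][lfnst_idx - 1]   # S2250

# Toy stand-in tables (hypothetical values for illustration only).
mip_to_intra = {5: 18}
set_of_mode = lambda m: 2 if 13 <= m <= 23 else 0
kernels = {0: ["k0a", "k0b"], 2: ["k2a", "k2b"]}

print(choose_lfnst_kernel(1, True, 5, None, mip_to_intra, set_of_mode, kernels))
# 'k2a'
```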
- FIG. 23 is a diagram for describing a method performed in an image decoding apparatus based on whether MIP and LFNST are applied according to another embodiment of the present disclosure.
- according to the present embodiment, an index (st_idx or lfnst_idx) for LFNST may not be transmitted for a block to which MIP is applied. That is, when MIP is applied to the current block, the LFNST index is inferred as a value of 0, which may mean that the LFNST technique is not applied to the current block.
- first, it may be determined whether MIP is applied to the current block (S2310).
- Whether MIP is applied to the current block may be determined using the above-described flag information (eg, intra_mip_flag).
- when MIP is applied to the current block, MIP prediction is performed (S2320), and it may be determined that LFNST is not applied. Accordingly, the second-order inverse transform may not be performed, and the first-order inverse transform may be performed on the transform coefficients (S2360). Thereafter, the current block may be reconstructed based on the prediction block generated by applying MIP and the residual block generated by the inverse transform (S2370).
- when MIP is not applied to the current block, normal intra prediction may be performed on the current block (S2330).
- thereafter, it may be determined whether LFNST is applied to the current block (S2340).
- the determination of step S2340 may be performed based on st_idx or lfnst_idx (NSST idx) restored from the bitstream. For example, when st_idx is 0, LFNST is not applied, and when st_idx is greater than 0, it may be determined that LFNST is applied.
- when LFNST is not applied, the second-order inverse transform for the current transform block is not performed, and the first-order inverse transform may be performed on the transform coefficients (S2360).
- the current block may be reconstructed based on a prediction block generated by normal intra prediction and a residual block generated by inverse transform (S2370).
- when LFNST is applied, the second-order inverse transform may be performed (S2350), and then the first-order inverse transform may be performed (S2360).
- the current block may be reconstructed based on a prediction block generated by normal intra prediction and a residual block generated by inverse transform (S2370).
- the second-order inverse transform of step S2350 may be performed by determining the LFNST transform set based on the intra prediction mode, selecting a transform kernel to be used for the second-order inverse transform based on st_idx, and performing the inverse transform based on the selected transform kernel.
- Table 14 shows the syntax of a CU according to the embodiment shown in FIG. 23.
- st_idx can be included in the bitstream only when intra_mip_flag is 0. Therefore, when intra_mip_flag is 1, that is, when MIP is applied to the current block, st_idx is not included in the bitstream. If st_idx does not exist in the bitstream, its value is inferred as 0, and thus, it may be determined that LFNST is not applied to the current block.
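- The parsing rule of Table 14 can be sketched as:

```python
def parse_st_idx(intra_mip_flag, bitstream_value=None):
    """Sketch of the Table 14 parsing rule: st_idx is read only when
    intra_mip_flag is 0; when absent it is inferred as 0, meaning
    LFNST is not applied to the current block."""
    if intra_mip_flag == 0 and bitstream_value is not None:
        return bitstream_value
    return 0   # inferred: LFNST not applied

print(parse_st_idx(1))                     # 0 (MIP block: no st_idx coded)
print(parse_st_idx(0, bitstream_value=2))  # 2
```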
- the LFNST index is not transmitted to the block to which the MIP is applied, the amount of bits for encoding the corresponding index may be reduced.
- FIG. 24 is a diagram for describing a method performed by an image encoding apparatus based on whether MIP and LFNST are applied according to another embodiment of the present disclosure.
- the encoding method illustrated in FIG. 24 may correspond to the decoding method illustrated in FIG. 23.
- first, it may be determined whether MIP is applied to the current block (S2410).
- Whether MIP is applied to the current block may be determined using the above-described flag information (eg, intra_mip_flag).
- the present invention is not limited thereto, and the image encoding apparatus may perform step S2410 in various ways.
- when MIP is applied to the current block, MIP prediction is performed (S2420), and it may be determined that LFNST is not applied. Accordingly, a residual block of the current block may be generated based on the prediction block generated by performing MIP, and a first-order transform may be performed on the residual block of the current block without performing a second-order transform (S2430).
- the transform coefficient generated by the transform may be encoded in the bitstream (S2480).
- when MIP is not applied to the current block, normal intra prediction may be performed on the current block (S2440).
- a residual block of the current block may be generated based on a prediction block generated by performing normal intra prediction, and a first-order transform may be performed on the generated residual block (S2450).
- it may be determined whether LFNST is applied to the current block (S2460). The determination of step S2460 may be performed based on st_idx or lfnst_idx (NSST idx).
- for example, when st_idx is 0, LFNST is not applied, and when st_idx is greater than 0, it may be determined that LFNST is applied.
- the present invention is not limited thereto, and the image encoding apparatus may perform step S2460 in various ways.
- when LFNST is not applied, the transform coefficients generated by the first-order transform may be encoded in the bitstream without being second-order transformed (S2480).
- when LFNST is applied, a second-order transform may be performed on the transform coefficients generated by the first-order transform (S2470).
- the transform coefficients generated by the second-order transform may be encoded in the bitstream (S2480).
- the second-order transform in step S2470 may be performed by determining the LFNST transform set based on the intra prediction mode, selecting a transform kernel to be used for the second-order transform, and performing the transform based on the selected transform kernel.
- st_idx may be encoded and signaled.
- the LFNST index may be derived and used according to a predetermined method without signaling the LFNST index for the block to which the MIP is applied.
- the second-order transform/inverse transform process may be performed according to the method described with reference to FIG. 22, and the selection of the transform kernel in step S2250 may be performed based on the LFNST index derived according to the predetermined method.
- a separate optimized transformation kernel for a block to which MIP is applied may be defined and used in advance. According to the present embodiment, while selecting an optimal LFNST kernel for a block to which MIP is applied, it is possible to reduce the amount of bits for encoding it.
- the derivation of the LFNST index may be performed based on at least one of a reference line index for intra prediction, an intra prediction mode, a block size, and whether MIP is applied.
- the MIP mode may be transformed or mapped to a general intra prediction mode as in the embodiment described with reference to FIG. 22.
- the syntax of the CU may be the same as in Table 14.
- a binarization method of the LFNST index may be adaptively performed for a block to which the MIP technology is applied. More specifically, the number of applicable LFNST transform kernels may differ depending on whether MIP is applied to the current block, and the binarization method for the LFNST index may be selectively changed accordingly. For example, one LFNST kernel may be used for a block to which MIP is applied, and this kernel may be one of the LFNST kernels applied to blocks to which MIP is not applied. Or, for blocks to which MIP is applied, a separate kernel optimized for such blocks may be defined and used, and this kernel may not be an LFNST kernel applied to blocks to which MIP is not applied.
- the binarization process for st_idx and the cMax value may be differently determined according to the intra_mip_flag value.
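The dependency of the binarization on intra_mip_flag can be illustrated with truncated-unary binarization, where cMax bounds the codeword length. This is a sketch under assumed values: the concrete cMax per intra_mip_flag (1 vs. 2 below) is an illustrative choice, not a value taken from this disclosure.

```python
def tu_binarize(value: int, c_max: int) -> str:
    """Truncated-unary binarization: 'value' ones, then a terminating zero
    unless value equals c_max (the maximum, which needs no terminator)."""
    bins = "1" * value
    if value < c_max:
        bins += "0"
    return bins

def st_idx_bins(st_idx: int, intra_mip_flag: bool) -> str:
    """cMax chosen per intra_mip_flag (the values 1 and 2 are assumptions):
    fewer candidate kernels for MIP blocks means a shorter codeword."""
    c_max = 1 if intra_mip_flag else 2
    return tu_binarize(st_idx, c_max)
```

With these assumed cMax values, a MIP block signals st_idx with at most one bin, while a non-MIP block may use up to two.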
- another method of transmitting information for LFNST for a block to which the MIP technology is applied may be provided.
- when information for LFNST is transmitted as a single syntax element such as st_idx, an st_idx value of 0 indicates that LFNST is not applied, and an st_idx value greater than 0 indicates the transform kernel to be used for LFNST. That is, both whether LFNST is applied and the type of transform kernel used for LFNST can be indicated using a single syntax element.
- information for LFNST may include st_flag, a syntax element indicating whether LFNST is applied, and st_idx_flag, a syntax element indicating the type of transform kernel used for LFNST when LFNST is applied.
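The two-flag signaling described above can be sketched as a small parsing routine. The reader callback and the combined return convention (0 = off, 1 or 2 = kernel choice) are assumptions for illustration only.

```python
def read_lfnst_info(read_bin) -> int:
    """Parse LFNST information signaled as two flags.

    read_bin: a callable returning the next decoded bin (0 or 1); here a
    plain stand-in for an entropy decoder.
    Returns a combined index: 0 = LFNST off, 1 or 2 = selected kernel.
    """
    st_flag = read_bin()          # whether LFNST is applied
    if st_flag == 0:
        return 0                  # st_idx_flag is not present; inferred as 0
    st_idx_flag = read_bin()      # which of the two kernels in the set
    return 1 + st_idx_flag
```

Note how st_idx_flag is only read when st_flag is 1, matching the conditional presence described for Tables 16 and 17.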
- Table 16 shows the syntax of a CU according to another method of transmitting information for LFNST.
- LFNST transform kernels may be used for blocks to which MIP is applied and blocks to which MIP is not applied.
- the transform kernel may be one of LFNST transform kernels applied to a block to which MIP is not applied, or may be a separate transform kernel optimized for a block to which MIP is applied.
- the transmission method of Table 16 may be changed as shown in Table 17.
- st_idx_flag can be transmitted only when intra_mip_flag is 0. That is, st_idx_flag may not be transmitted when MIP is applied to the current block.
- st_flag in Tables 16 and 17 is information indicating whether LFNST is applied to the current block; when it is not present in the bitstream, its value may be inferred to be 0.
- st_flag may be referred to as lfnst_flag.
- st_idx_flag may indicate one of two candidate kernels included in the selected LFNST transform set. When st_idx_flag does not exist in the bitstream, its value can be inferred as 0.
- st_idx_flag may be referred to as lfnst_idx_flag or lfnst_kernel_flag.
- ctxInc for the context-coded bins of st_flag and st_idx_flag may be as shown in Table 19.
- ctxIdx of st_flag may have a value of 0 or 1 when binIdx is 0.
- ctxInc of st_flag may be derived by Equation 12.
- the value of ctxInc used for coding st_flag may be determined differently based on the treeType and/or the tu_mts_idx value for the current block.
- a context model used for coding the st_flag (based on CABAC) may be derived.
- the context model may be derived by determining the context index (ctxIdx), and ctxIdx may be derived from the sum of the variables ctxIdxOffset and ctxInc.
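The ctxIdx derivation above can be sketched as follows. Equation 12 itself is not reproduced in this text, so the ctxInc rule below (depending on treeType and tu_mts_idx) is an assumed stand-in for it; only the relation ctxIdx = ctxIdxOffset + ctxInc is taken from the description.

```python
SINGLE_TREE = 0   # illustrative constant for the treeType value

def ctx_idx_for_st_flag(tree_type: int, tu_mts_idx: int,
                        ctx_idx_offset: int) -> int:
    """Derive ctxIdx for st_flag as ctxIdxOffset + ctxInc.

    The ctxInc expression below is an assumption standing in for Equation 12:
    it selects context 1 when the block is not in a single tree and no MTS
    kernel is signaled, and context 0 otherwise.
    """
    ctx_inc = 1 if (tree_type != SINGLE_TREE and tu_mts_idx == 0) else 0
    return ctx_idx_offset + ctx_inc
```

The decoder would then use the probability state stored at the returned ctxIdx when arithmetic-decoding the st_flag bin.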
- st_idx_flag may be bypass coded/decoded. Bypass encoding/decoding may mean encoding/decoding an input bin by applying a uniform probability distribution instead of assigning a context model.
- a reduced number of LFNST kernels may be used for blocks to which MIP is applied, compared to blocks to which it is not, thereby reducing the overhead of transmitting the index and also reducing complexity.
- as described above, the value of ctxInc used for coding st_flag may be determined differently based on the treeType and/or the tu_mts_idx value for the current block.
- an LFNST transform kernel may be derived and used.
- when LFNST is applied to the current block to which MIP is applied, information for selecting the LFNST transform kernel is not signaled; instead, one of the transform kernels constituting the LFNST transform set is selected through a derivation process, or a separate transform kernel optimized for blocks to which MIP is applied is selected. In this case, an optimal LFNST transform kernel can be selected for a block to which MIP is applied while reducing the amount of bits required to signal it.
- the selection of the LFNST transform kernel may be performed based on at least one of a reference line index for intra prediction, an intra prediction mode, a block size, and whether MIP is applied.
- the MIP mode may be transformed or mapped to a general intra prediction mode as in the embodiment described with reference to FIG. 22.
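The mapping of a MIP mode to a general intra prediction mode, used when selecting the LFNST transform set, can be sketched as a table lookup with a planar-mode fallback. The table entries below are placeholders: the real mapping tables depend on the MIP size class and are not reproduced in this text.

```python
PLANAR = 0   # mode number of the planar intra prediction mode

# Illustrative fragment of a MIP-to-regular-intra mapping table; the keys and
# values are assumptions, not the predefined table referenced by the claims.
MIP_TO_INTRA = {0: 0, 1: 18, 2: 50}

def mapped_intra_mode(mip_mode: int, use_mapping_table: bool) -> int:
    """Derive the regular intra mode used for LFNST set selection on a MIP
    block: either look it up in a predefined table, or (per the alternative
    embodiment) simply derive it as the planar mode."""
    if use_mapping_table:
        return MIP_TO_INTRA.get(mip_mode, PLANAR)
    return PLANAR
```

Both branches correspond to embodiments described above: a predefined mapping table (claim 6) or a fixed derivation to planar mode (claim 7).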
- exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which steps are performed, and each step may be performed simultaneously or in a different order if necessary.
- the exemplary steps may include additional steps, may include the remaining steps while excluding some steps, or may include additional steps while excluding some steps.
- an image encoding apparatus or an image decoding apparatus performing a predetermined operation may perform an operation (step) of confirming an execution condition or situation of that operation (step). For example, when it is described that a predetermined operation is performed when a predetermined condition is satisfied, the image encoding apparatus or the image decoding apparatus may perform an operation of checking whether the predetermined condition is satisfied and then perform the predetermined operation.
- various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof. When implemented by hardware, they may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), general-purpose processors, controllers, microcontrollers, microprocessors, and the like.
- the image decoding apparatus and the image encoding apparatus to which the embodiments of the present disclosure are applied may be included in a multimedia broadcasting transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service provider, an OTT (over-the-top) video device, an Internet streaming service provider, a three-dimensional (3D) video device, a video telephony device, a medical video device, and the like, and may be used to process a video signal or a data signal. For example, an OTT video device may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smartphone, a tablet PC, a digital video recorder (DVR), and the like.
- 25 is a diagram illustrating a content streaming system to which an embodiment of the present disclosure can be applied.
- the content streaming system to which the embodiment of the present disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage device, a user device, and a multimedia input device.
- the encoding server serves to generate a bitstream by compressing content input from multimedia input devices such as smartphones, cameras, camcorders, etc. into digital data, and transmits it to the streaming server.
- when multimedia input devices such as smartphones, cameras, and camcorders directly generate bitstreams, the encoding server may be omitted.
- the bitstream may be generated by an image encoding method and/or an image encoding apparatus to which an embodiment of the present disclosure is applied, and the streaming server may temporarily store the bitstream in a process of transmitting or receiving the bitstream.
- the streaming server may transmit multimedia data to a user device based on a user request through a web server, and the web server may serve as an intermediary for notifying the user of a service.
- when a user requests a desired service from the web server, the web server transmits the request to the streaming server, and the streaming server may transmit multimedia data to the user.
- the content streaming system may include a separate control server, and in this case, the control server may play a role of controlling a command/response between devices in the content streaming system.
- the streaming server may receive content from a media storage and/or encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
- examples of the user device include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
- Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributedly processed.
- the scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium storing such software or instructions and executable on a device or computer.
- An embodiment according to the present disclosure may be used to encode/decode an image.
Abstract
Description
Claims (15)
- An image decoding method performed by an image decoding apparatus, the method comprising: generating a prediction block by performing intra prediction on a current block; generating a residual block by performing an inverse transform on transform coefficients of the current block; and reconstructing the current block based on the prediction block and the residual block, wherein the inverse transform includes a primary inverse transform and a secondary inverse transform, and the secondary inverse transform is performed based on whether the intra prediction for the current block is MIP prediction.
- The image decoding method of claim 1, wherein the secondary inverse transform is performed only when it is determined that the secondary inverse transform is to be performed on the transform coefficients.
- The image decoding method of claim 2, wherein the determination of whether to perform the secondary inverse transform on the transform coefficients is performed based on information signaled through a bitstream.
- The image decoding method of claim 1, wherein the secondary inverse transform comprises: determining a transform set of the secondary inverse transform based on an intra prediction mode of the current block; selecting one of a plurality of transform kernels included in the transform set of the secondary inverse transform; and performing the secondary inverse transform based on the selected transform kernel.
- The image decoding method of claim 4, wherein, when the intra prediction for the current block is MIP prediction, the intra prediction mode of the current block used to determine the transform set of the secondary inverse transform is derived as a predetermined intra prediction mode.
- The image decoding method of claim 5, wherein, when the intra prediction for the current block is MIP prediction, the predetermined intra prediction mode is derived from a MIP mode of the current block based on a predefined mapping table.
- The image decoding method of claim 5, wherein, when the intra prediction for the current block is MIP prediction, the predetermined intra prediction mode is derived as a planar mode.
- The image decoding method of claim 1, wherein, when the intra prediction for the current block is MIP prediction, the secondary inverse transform on the transform coefficients is skipped.
- The image decoding method of claim 1, wherein, when the intra prediction for the current block is MIP prediction, information indicating whether to perform the secondary inverse transform on the transform coefficients is not signaled through a bitstream.
- The image decoding method of claim 1, wherein, when the intra prediction for the current block is MIP prediction, a transform kernel for the secondary inverse transform of the transform coefficients is not signaled through a bitstream but is determined to be a predetermined transform kernel.
- The image decoding method of claim 1, wherein the number of transform kernels available when the current block is MIP-predicted is smaller than the number of transform kernels available when the current block is not MIP-predicted.
- The image decoding method of claim 1, wherein first information indicating whether the secondary inverse transform is applied to the current block and second information indicating a transform kernel used for the secondary inverse transform are signaled as separate pieces of information, and the second information is signaled when the first information indicates that the secondary inverse transform is applied to the current block.
- An image decoding apparatus comprising a memory and at least one processor, wherein the at least one processor is configured to: generate a prediction block by performing intra prediction on a current block; generate a residual block by performing an inverse transform on transform coefficients of the current block; and reconstruct the current block based on the prediction block and the residual block, wherein the inverse transform includes a primary inverse transform and a secondary inverse transform, and the secondary inverse transform is performed based on whether the intra prediction for the current block is MIP prediction.
- An image encoding method performed by an image encoding apparatus, the method comprising: generating a prediction block by performing intra prediction on a current block; generating a residual block of the current block based on the prediction block; and generating transform coefficients by performing a transform on the residual block, wherein the transform includes a primary transform and a secondary transform, and the secondary transform is performed based on whether the intra prediction for the current block is MIP prediction.
- A method of transmitting a bitstream generated by the image encoding method of claim 14.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021566094A JP7256296B2 (ja) | 2019-05-08 | 2020-05-07 | Mip及びlfnstを行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法 |
KR1020217035756A KR20210136157A (ko) | 2019-05-08 | 2020-05-07 | Mip 및 lfnst를 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 |
US17/521,086 US20220060751A1 (en) | 2019-05-08 | 2021-11-08 | Image encoding/decoding method and device for performing mip and lfnst, and method for transmitting bitstream |
JP2023055436A JP7422917B2 (ja) | 2019-05-08 | 2023-03-30 | Mip及びlfnstを行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法 |
JP2024004481A JP2024026779A (ja) | 2019-05-08 | 2024-01-16 | Mip及びlfnstを行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962844751P | 2019-05-08 | 2019-05-08 | |
US62/844,751 | 2019-05-08 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/521,086 Continuation US20220060751A1 (en) | 2019-05-08 | 2021-11-08 | Image encoding/decoding method and device for performing mip and lfnst, and method for transmitting bitstream |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020226424A1 true WO2020226424A1 (ko) | 2020-11-12 |
Family
ID=73050568
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/005982 WO2020226424A1 (ko) | 2019-05-08 | 2020-05-07 | Mip 및 lfnst를 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20220060751A1 (ko) |
JP (3) | JP7256296B2 (ko) |
KR (1) | KR20210136157A (ko) |
WO (1) | WO2020226424A1 (ko) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220116606A1 (en) * | 2019-06-25 | 2022-04-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Coding using matrix based intra-prediction and secondary transforms |
KR20220053698A (ko) * | 2017-07-04 | 2022-04-29 | 삼성전자주식회사 | 다중 코어 변환에 의한 비디오 복호화 방법 및 장치, 다중 코어 변환에 의한 비디오 부호화 방법 및 장치 |
JP2022543102A (ja) * | 2019-08-03 | 2022-10-07 | 北京字節跳動網絡技術有限公司 | ビデオ・コーディングにおける縮小二次変換のための行列の選択 |
US11622131B2 (en) | 2019-05-10 | 2023-04-04 | Beijing Bytedance Network Technology Co., Ltd. | Luma based secondary transform matrix selection for video processing |
AU2020354500B2 (en) * | 2019-09-24 | 2024-02-22 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Efficient implementation of matrix-based intra-prediction |
US11924469B2 (en) | 2019-06-07 | 2024-03-05 | Beijing Bytedance Network Technology Co., Ltd. | Conditional signaling of reduced secondary transform in video bitstreams |
US11968367B2 (en) | 2019-08-17 | 2024-04-23 | Beijing Bytedance Network Technology Co., Ltd. | Context modeling of side information for reduced secondary transforms in video |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020256466A1 (ko) * | 2019-06-19 | 2020-12-24 | 한국전자통신연구원 | 화면 내 예측 모드 및 엔트로피 부호화/복호화 방법 및 장치 |
EP3973700A4 (en) * | 2019-06-28 | 2023-06-14 | HFI Innovation Inc. | MATRIX METHOD AND APPARATUS BASED ON INTRA PREDICTION IN IMAGE AND VIDEO PROCESSING |
US11902531B2 (en) * | 2021-04-12 | 2024-02-13 | Qualcomm Incorporated | Low frequency non-separable transform for video coding |
WO2024005480A1 (ko) * | 2022-06-29 | 2024-01-04 | 엘지전자 주식회사 | 다중 참조 라인에 기반한 영상 부호화/복호화 방법, 비트스트림을 전송하는 방법 및 비트스트림을 저장한 기록 매체 |
WO2024007120A1 (zh) * | 2022-07-04 | 2024-01-11 | Oppo广东移动通信有限公司 | 编解码方法、编码器、解码器以及存储介质 |
WO2024049024A1 (ko) * | 2022-08-29 | 2024-03-07 | 현대자동차주식회사 | 1차 변환 커널에 적응적인 분리 불가능한 2차 변환 기반 비디오 코딩을 위한 방법 및 장치 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150129715A (ko) * | 2013-03-08 | 2015-11-20 | 삼성전자주식회사 | 향상 레이어 차분들에 대한 세컨더리 변환을 적용하기 위한 방법 및 장치 |
KR20180014655A (ko) * | 2016-08-01 | 2018-02-09 | 한국전자통신연구원 | 영상 부호화/복호화 방법 |
KR20180085526A (ko) * | 2017-01-19 | 2018-07-27 | 가온미디어 주식회사 | 효율적 변환을 처리하는 영상 복호화 및 부호화 방법 |
WO2018174402A1 (ko) * | 2017-03-21 | 2018-09-27 | 엘지전자 주식회사 | 영상 코딩 시스템에서 변환 방법 및 그 장치 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20230130772A (ko) * | 2016-02-12 | 2023-09-12 | 삼성전자주식회사 | 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
US11303912B2 (en) * | 2018-04-18 | 2022-04-12 | Qualcomm Incorporated | Decoded picture buffer management and dynamic range adjustment |
KR20200109276A (ko) * | 2019-03-12 | 2020-09-22 | 주식회사 엑스리스 | 영상 신호 부호화/복호화 방법 및 이를 위한 장치 |
US11616966B2 (en) * | 2019-04-03 | 2023-03-28 | Mediatek Inc. | Interaction between core transform and secondary transform |
CN113678453B (zh) * | 2019-04-12 | 2024-05-14 | 北京字节跳动网络技术有限公司 | 基于矩阵的帧内预测的上下文确定 |
CN117896523A (zh) | 2019-04-16 | 2024-04-16 | 松下电器(美国)知识产权公司 | 编码装置、解码装置、编码方法、解码方法和记录介质 |
MX2021012405A (es) | 2019-04-17 | 2021-11-12 | Huawei Tech Co Ltd | Un codificador, un decodificador y metodos de armonizacion de intraprediccion basada en matriz correspondiente y seleccion de nucleo de transformada secundaria. |
-
2020
- 2020-05-07 WO PCT/KR2020/005982 patent/WO2020226424A1/ko active Application Filing
- 2020-05-07 JP JP2021566094A patent/JP7256296B2/ja active Active
- 2020-05-07 KR KR1020217035756A patent/KR20210136157A/ko unknown
-
2021
- 2021-11-08 US US17/521,086 patent/US20220060751A1/en not_active Abandoned
-
2023
- 2023-03-30 JP JP2023055436A patent/JP7422917B2/ja active Active
-
2024
- 2024-01-16 JP JP2024004481A patent/JP2024026779A/ja active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150129715A (ko) * | 2013-03-08 | 2015-11-20 | 삼성전자주식회사 | 향상 레이어 차분들에 대한 세컨더리 변환을 적용하기 위한 방법 및 장치 |
KR20180014655A (ko) * | 2016-08-01 | 2018-02-09 | 한국전자통신연구원 | 영상 부호화/복호화 방법 |
KR20180085526A (ko) * | 2017-01-19 | 2018-07-27 | 가온미디어 주식회사 | 효율적 변환을 처리하는 영상 복호화 및 부호화 방법 |
WO2018174402A1 (ko) * | 2017-03-21 | 2018-09-27 | 엘지전자 주식회사 | 영상 코딩 시스템에서 변환 방법 및 그 장치 |
Non-Patent Citations (1)
Title |
---|
JONATHAN PFAFF; BJOERN STALLENBERGER; MICHAEL SCHAEFER; PHILIPP HELLE; TOBIAS HINZ; HEIKO SCHWARZ; DETLEV MARPE; THOMAS WIEGAND: "CE3: Affine linear weighted intra prediction (CE3-4.1, CE3-4.2)", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 14TH MEETING, no. JVET-N0217, 25 March 2019 (2019-03-25), Geneva, CH, XP030202699 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20220053698A (ko) * | 2017-07-04 | 2022-04-29 | 삼성전자주식회사 | 다중 코어 변환에 의한 비디오 복호화 방법 및 장치, 다중 코어 변환에 의한 비디오 부호화 방법 및 장치 |
US12003750B2 (en) | 2017-07-04 | 2024-06-04 | Samsung Electronics Co., Ltd. | Video decoding method and apparatus using multi-core transform, and video encoding method and apparatus using multi-core transform |
KR102625144B1 (ko) | 2017-07-04 | 2024-01-15 | 삼성전자주식회사 | 다중 코어 변환에 의한 비디오 복호화 방법 및 장치, 다중 코어 변환에 의한 비디오 부호화 방법 및 장치 |
US11622131B2 (en) | 2019-05-10 | 2023-04-04 | Beijing Bytedance Network Technology Co., Ltd. | Luma based secondary transform matrix selection for video processing |
US11924469B2 (en) | 2019-06-07 | 2024-03-05 | Beijing Bytedance Network Technology Co., Ltd. | Conditional signaling of reduced secondary transform in video bitstreams |
JP2022538853A (ja) * | 2019-06-25 | 2022-09-06 | フラウンホファー ゲセルシャフト ツール フェールデルンク ダー アンゲヴァンテン フォルシュンク エー.ファオ. | 行列ベースのイントラ予測および二次変換を使用したコーディング |
US20220116606A1 (en) * | 2019-06-25 | 2022-04-14 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Coding using matrix based intra-prediction and secondary transforms |
JP7477538B2 (ja) | 2019-06-25 | 2024-05-01 | フラウンホファー ゲセルシャフト ツール フェールデルンク ダー アンゲヴァンテン フォルシュンク エー.ファオ. | 行列ベースのイントラ予測および二次変換を使用したコーディング |
JP2022543102A (ja) * | 2019-08-03 | 2022-10-07 | 北京字節跳動網絡技術有限公司 | ビデオ・コーディングにおける縮小二次変換のための行列の選択 |
JP7422858B2 (ja) | 2019-08-03 | 2024-01-26 | 北京字節跳動網絡技術有限公司 | ビデオ処理方法、装置、記憶媒体及び記憶方法 |
US11882274B2 (en) | 2019-08-03 | 2024-01-23 | Beijing Bytedance Network Technology Co., Ltd | Position based mode derivation in reduced secondary transforms for video |
US11638008B2 (en) | 2019-08-03 | 2023-04-25 | Beijing Bytedance Network Technology Co., Ltd. | Selection of matrices for reduced secondary transform in video coding |
US11968367B2 (en) | 2019-08-17 | 2024-04-23 | Beijing Bytedance Network Technology Co., Ltd. | Context modeling of side information for reduced secondary transforms in video |
AU2020354500B2 (en) * | 2019-09-24 | 2024-02-22 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Efficient implementation of matrix-based intra-prediction |
US12022120B2 (en) | 2019-09-24 | 2024-06-25 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Efficient implementation of matrix-based intra-prediction |
Also Published As
Publication number | Publication date |
---|---|
KR20210136157A (ko) | 2021-11-16 |
JP7256296B2 (ja) | 2023-04-11 |
JP2023073437A (ja) | 2023-05-25 |
US20220060751A1 (en) | 2022-02-24 |
JP2024026779A (ja) | 2024-02-28 |
JP2022532114A (ja) | 2022-07-13 |
JP7422917B2 (ja) | 2024-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020226424A1 (ko) | Mip 및 lfnst를 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2021040481A1 (ko) | 크로스 컴포넌트 필터링 기반 영상 코딩 장치 및 방법 | |
WO2020246806A1 (ko) | 매트릭스 기반 인트라 예측 장치 및 방법 | |
WO2020251330A1 (ko) | 단순화된 mpm 리스트 생성 방법을 활용하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020171592A1 (ko) | 영상 코딩 시스템에서 레지듀얼 정보를 사용하는 영상 디코딩 방법 및 그 장치 | |
WO2020213931A1 (ko) | 레지듀얼 계수의 차분 부호화를 이용한 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2021040484A1 (ko) | 크로스-컴포넌트 적응적 루프 필터링 기반 영상 코딩 장치 및 방법 | |
WO2020050651A1 (ko) | 다중 변환 선택에 기반한 영상 코딩 방법 및 그 장치 | |
WO2020246803A1 (ko) | 매트릭스 기반 인트라 예측 장치 및 방법 | |
WO2019194514A1 (ko) | 인터 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 | |
WO2021060847A1 (ko) | 컬러 포맷에 기반하여 분할 모드를 결정하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020036390A1 (ko) | 영상 신호를 처리하기 위한 방법 및 장치 | |
WO2019216714A1 (ko) | 인터 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 | |
WO2021101205A1 (ko) | 영상 코딩 장치 및 방법 | |
WO2020251329A1 (ko) | Mip 모드 매핑이 단순화된 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020213976A1 (ko) | Bdpcm을 이용한 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2021101201A1 (ko) | 루프 필터링을 제어하는 영상 코딩 장치 및 방법 | |
WO2019194463A1 (ko) | 영상의 처리 방법 및 이를 위한 장치 | |
WO2021040410A1 (ko) | 레지듀얼 코딩에 대한 영상 디코딩 방법 및 그 장치 | |
WO2021040482A1 (ko) | 적응적 루프 필터링 기반 영상 코딩 장치 및 방법 | |
WO2020197223A1 (ko) | 영상 코딩 시스템에서의 인트라 예측 기반 영상 코딩 | |
WO2021006697A1 (ko) | 레지듀얼 코딩에 대한 영상 디코딩 방법 및 그 장치 | |
WO2020262962A1 (ko) | 크로마 변환 블록의 최대 크기 제한을 이용한 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2021006700A1 (ko) | 영상 코딩 시스템에서 레지듀얼 코딩 방법에 대한 플래그를 사용하는 영상 디코딩 방법 및 그 장치 | |
WO2020256492A1 (ko) | 비디오/영상 코딩 시스템에서 중복 시그널링 제거 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20802042 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217035756 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2021566094 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20802042 Country of ref document: EP Kind code of ref document: A1 |