WO2021133100A1 - Image encoding/decoding method and apparatus for performing PDPC, and method of transmitting a bitstream - Google Patents
- Publication number
- WO2021133100A1 (PCT/KR2020/019091)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- current block
- prediction
- pdpc
- intra prediction
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 130
- 239000013074 reference sample Substances 0.000 claims description 88
- 239000000523 sample Substances 0.000 description 67
- 238000001914 filtration Methods 0.000 description 34
- 238000010586 diagram Methods 0.000 description 33
- 241000023320 Luma 0.000 description 32
- 238000013139 quantization Methods 0.000 description 23
- 238000009795 derivation Methods 0.000 description 11
- 238000005192 partition Methods 0.000 description 11
- 238000000638 solvent extraction Methods 0.000 description 11
- 230000008569 process Effects 0.000 description 9
- 230000011664 signaling Effects 0.000 description 8
- 230000002123 temporal effect Effects 0.000 description 7
- 230000009466 transformation Effects 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 6
- 230000002093 peripheral effect Effects 0.000 description 6
- 230000009977 dual effect Effects 0.000 description 5
- 230000003044 adaptive effect Effects 0.000 description 4
- 238000004891 communication Methods 0.000 description 4
- 230000001419 dependent effect Effects 0.000 description 4
- 238000003491 array Methods 0.000 description 3
- 238000009877 rendering Methods 0.000 description 3
- 230000006978 adaptation Effects 0.000 description 2
- 230000002146 bilateral effect Effects 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000003709 image segmentation Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000011218 segmentation Effects 0.000 description 2
- 230000007727 signaling mechanism Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 230000001151 other effect Effects 0.000 description 1
- 230000008707 rearrangement Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
- 238000011426 transformation method Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Definitions
- The present disclosure relates to a method and apparatus for encoding/decoding an image and, more particularly, to an image encoding/decoding method and apparatus for performing position-dependent intra prediction combination (PDPC), and to a method of transmitting a bitstream generated by the image encoding method or apparatus of the present disclosure.
- PDPC: position-dependent intra prediction combination
- HD: high definition
- UHD: ultra high definition
- An object of the present disclosure is to provide an image encoding/decoding method and apparatus with improved encoding/decoding efficiency.
- Another object of the present disclosure is to provide an image encoding/decoding method and apparatus for improving encoding/decoding efficiency by making the PDPC application requirements of a chrominance block and a luminance block the same.
- Another object of the present disclosure is to provide a method of transmitting a bitstream generated by an image encoding method or apparatus according to the present disclosure.
- Another object of the present disclosure is to provide a recording medium storing a bitstream generated by an image encoding method or apparatus according to the present disclosure.
- Another object of the present disclosure is to provide a recording medium storing a bitstream that is received and decoded by an image decoding apparatus according to the present disclosure and used to restore an image.
- An image decoding method performed by an image decoding apparatus includes generating a prediction block by performing intra prediction on a current block, determining whether to apply PDPC to the prediction block, and generating a final prediction block of the current block by applying PDPC to the prediction block based on the determination.
- The determining of whether to apply PDPC to the prediction block may include determining whether the size of the current block satisfies a predetermined condition. When the size of the current block satisfies the predetermined condition, it may be determined that PDPC is applied to the prediction block; when the size of the current block does not satisfy the predetermined condition, the determination of the color component of the current block may be skipped and it may be determined that PDPC is not applied to the prediction block.
- the predetermined condition may be that the size of the current block is greater than or equal to a predetermined threshold.
- The predetermined condition may be satisfied when the width of the current block is greater than or equal to the predetermined threshold and the height of the current block is greater than or equal to the predetermined threshold.
- the predetermined threshold value may be 4.
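The size condition above can be sketched as a small predicate. This is an illustrative sketch only: the function and constant names are assumptions, not taken from the disclosure or any codec specification.

```python
# Hedged sketch of the block-size condition described above: PDPC is
# considered only when both the width and the height of the current block
# are at least a predetermined threshold (4 in this embodiment). The
# color-component check is skipped, so the same predicate serves luma
# and chroma blocks alike under the unified condition.

PDPC_SIZE_THRESHOLD = 4  # the "predetermined threshold" from the description

def satisfies_pdpc_size_condition(width: int, height: int,
                                  threshold: int = PDPC_SIZE_THRESHOLD) -> bool:
    """True when the current block's size permits PDPC."""
    return width >= threshold and height >= threshold
```

For example, a 4x4 block satisfies the condition, while a 2x8 block fails on its width even though its area is the same.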
- determining whether to apply PDPC to the prediction block may further include determining a reference sample line used for intra prediction of the current block.
- When the reference sample line is not a predetermined reference sample line, the determination of the color component of the current block may be skipped and it may be determined that PDPC is not applied to the prediction block.
- the predetermined reference sample line may be a first reference sample line adjacent to the current block.
- The determining of whether to apply PDPC to the prediction block may further include determining whether BDPCM is applied to the current block and determining an intra prediction mode of the current block.
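The checks enumerated in the bullets above (block size, reference sample line, BDPCM, intra prediction mode) can be combined into a single decision routine. This is a hedged sketch: the names, the mode indices, and the assumption that PDPC is allowed only for the non-directional modes are illustrative placeholders, since the disclosure names the checks but not their outcome for every mode.

```python
# Illustrative combination of the PDPC application checks listed above.
# Mode indices and the final mode-dependent rule are assumptions for
# illustration, not the disclosure's exact conditions.

PLANAR, DC = 0, 1  # assumed indices for the two non-directional modes

def should_apply_pdpc(width: int, height: int, ref_line_idx: int,
                      bdpcm_applied: bool, intra_mode: int,
                      threshold: int = 4) -> bool:
    if width < threshold or height < threshold:
        return False          # size condition fails (same rule for luma and chroma)
    if ref_line_idx != 0:
        return False          # not the first reference sample line adjacent to the block
    if bdpcm_applied:
        return False          # BDPCM blocks skip PDPC
    # A mode-dependent check follows; as a placeholder we allow PDPC only
    # for the non-directional modes.
    return intra_mode in (PLANAR, DC)
```

Note that the size check comes first and short-circuits the rest, mirroring the claim language in which the color-component determination is skipped once the size condition fails.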
- An image decoding apparatus may include a memory and at least one processor.
- The at least one processor may generate a prediction block by performing intra prediction on the current block, determine whether to apply PDPC to the prediction block, and generate a final prediction block of the current block by applying PDPC to the prediction block based on the determination.
- The determination of whether to apply PDPC to the prediction block may include determining whether the size of the current block satisfies a predetermined condition. When the size of the current block satisfies the predetermined condition, it may be determined that PDPC is applied to the prediction block; when the size of the current block does not satisfy the predetermined condition, the determination of the color component of the current block may be skipped and it may be determined that PDPC is not applied to the prediction block.
- An image encoding method performed by an image encoding apparatus may include generating a prediction block by performing intra prediction on a current block, determining whether to apply PDPC to the prediction block, and generating a final prediction block of the current block by applying PDPC to the prediction block based on the determination.
- The determining of whether to apply PDPC to the prediction block may include determining whether the size of the current block satisfies a predetermined condition. When the size of the current block satisfies the predetermined condition, it may be determined that PDPC is applied to the prediction block; when the size of the current block does not satisfy the predetermined condition, the determination of the color component of the current block may be skipped and it may be determined that PDPC is not applied to the prediction block.
- the predetermined condition may be that the size of the current block is greater than or equal to a predetermined threshold.
- The predetermined condition may be satisfied when the width of the current block is greater than or equal to the predetermined threshold and the height of the current block is greater than or equal to the predetermined threshold.
- the predetermined threshold value may be 4.
- determining whether to apply PDPC to the prediction block may further include determining a reference sample line used for intra prediction of the current block.
- When the reference sample line is not a predetermined reference sample line, the determination of the color component of the current block may be skipped and it may be determined that PDPC is not applied to the prediction block.
- the predetermined reference sample line may be a first reference sample line adjacent to the current block.
- The determining of whether to apply PDPC to the prediction block may further include determining whether BDPCM is applied to the current block and determining an intra prediction mode of the current block.
- a transmission method may transmit a bitstream generated by the image encoding apparatus or the image encoding method of the present disclosure.
- a computer-readable recording medium may store a bitstream generated by the image encoding method or image encoding apparatus of the present disclosure.
- an image encoding/decoding method and apparatus having improved encoding/decoding efficiency may be provided.
- According to the present disclosure, an image encoding/decoding method and apparatus with improved encoding/decoding efficiency may be provided by using a unified PDPC application condition for the luminance component and the chrominance component in intra prediction encoding/decoding, thereby simplifying the determination of whether to apply PDPC.
- a method for transmitting a bitstream generated by an image encoding method or apparatus according to the present disclosure may be provided.
- a recording medium storing a bitstream generated by the image encoding method or apparatus according to the present disclosure may be provided.
- According to the present disclosure, a recording medium storing a bitstream that is received and decoded by the image decoding apparatus according to the present disclosure and used to restore an image may be provided.
- FIG. 1 illustrates a video coding system according to this disclosure.
- FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure can be applied.
- FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure can be applied.
- FIG. 4 is a diagram illustrating a division structure of an image according to an embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating an embodiment of a block division type according to a multi-type tree structure.
- FIG. 6 is a diagram illustrating a signaling mechanism of block partitioning information in a quadtree with nested multi-type tree structure according to the present disclosure.
- FIG. 7 is a diagram illustrating an embodiment in which a CTU is divided into multiple CUs.
- FIG. 8 is a diagram illustrating an intra prediction-based video/image encoding method.
- FIG. 9 is a diagram illustrating an intra prediction unit in an encoding apparatus.
- FIG. 10 is a diagram illustrating an intra prediction-based video/image decoding method.
- FIG. 11 is a diagram illustrating an intra prediction unit in a decoding apparatus.
- FIG. 12 is a diagram illustrating a directional intra prediction mode among intra prediction modes.
- FIGS. 13A to 13D are diagrams illustrating reference samples defined in PDPC.
- FIG. 14 is a diagram for describing a reference sample line usable in an MRL method.
- FIG. 15 is a diagram illustrating a syntax structure of a coding unit for signaling a multi-reference line index.
- FIG. 16 is a diagram illustrating a PDPC application condition according to an embodiment of the present disclosure.
- FIG. 17 is a diagram illustrating a PDPC application condition according to another embodiment of the present disclosure.
- FIG. 18 is a diagram illustrating a PDPC application condition according to another embodiment of the present disclosure.
- FIG. 19 is a flowchart illustrating a method of generating a prediction block according to an embodiment of the present disclosure.
- FIG. 20 is a diagram illustrating a content streaming system to which an embodiment of the present disclosure can be applied.
- In the present disclosure, when a component is "connected", "coupled", or "linked" to another component, this may include not only a direct connection but also an indirect connection in which another component exists in between.
- In the present disclosure, when a component is said to "include" or "have" another component, this means that still another component may be further included, rather than excluded, unless otherwise stated.
- In the present disclosure, the terms first, second, etc. are used only to distinguish one component from another and, unless otherwise specified, do not limit the order or importance of the components. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
- In the present disclosure, components that are distinguished from each other are intended to clearly describe their respective characteristics, and this does not necessarily mean that the components are separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed into a plurality of hardware or software units. Accordingly, even if not specifically mentioned, such integrated or distributed embodiments are also included in the scope of the present disclosure.
- components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment composed of a subset of components described in one embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in various embodiments are also included in the scope of the present disclosure.
- the present disclosure relates to encoding and decoding of an image, and terms used in the present disclosure may have conventional meanings commonly used in the technical field to which the present disclosure belongs unless they are newly defined in the present disclosure.
- a “picture” generally means a unit representing one image in a specific time period
- A slice/tile is a coding unit constituting a part of a picture, and one picture may be composed of one or more slices/tiles.
- a slice/tile may include one or more coding tree units (CTUs).
- A "pixel" or "pel" may mean a minimum unit constituting one picture (or image).
- sample may be used as a term corresponding to a pixel.
- the sample may generally represent a pixel or a value of a pixel, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
- A "unit" may indicate a basic unit of image processing.
- the unit may include at least one of a specific region of a picture and information related to the region.
- a unit may be used interchangeably with terms such as “sample array”, “block” or “area” in some cases.
- an MxN block may include samples (or sample arrays) or a set (or arrays) of transform coefficients including M columns and N rows.
- current block may mean one of “current coding block”, “current coding unit”, “coding object block”, “decoding object block”, or “processing object block”.
- current block may mean “current prediction block” or “prediction target block”.
- In the case of transform (inverse transform) and quantization (inverse quantization), a "current block" may mean a "current transform block" or a "transform target block".
- In the case of filtering, a "current block" may mean a "filtering target block".
- a “current block” may mean a “luma block of the current block” unless explicitly stated as a chroma block.
- a “chroma block of the current block” may be explicitly expressed including an explicit description of a chroma block, such as a “chroma block” or a “current chroma block”.
- “/” and “,” may be interpreted as “and/or”.
- “A/B” and “A, B” may be interpreted as “A and/or B”.
- “A/B/C” and “A, B, C” may mean “at least one of A, B, and/or C”.
- FIG. 1 illustrates a video coding system according to this disclosure.
- a video coding system may include an encoding apparatus 10 and a decoding apparatus 20 .
- the encoding apparatus 10 may transmit encoded video and/or image information or data in the form of a file or streaming to the decoding apparatus 20 through a digital storage medium or a network.
- the encoding apparatus 10 may include a video source generator 11 , an encoder 12 , and a transmitter 13 .
- the decoding apparatus 20 may include a receiving unit 21 , a decoding unit 22 , and a rendering unit 23 .
- the encoder 12 may be referred to as a video/image encoder, and the decoder 22 may be referred to as a video/image decoder.
- the transmitter 13 may be included in the encoder 12 .
- the receiver 21 may be included in the decoder 22 .
- the rendering unit 23 may include a display unit, and the display unit may be configured as a separate device or external component.
- the video source generator 11 may acquire a video/image through a process of capturing, synthesizing, or generating the video/image.
- the video source generating unit 11 may include a video/image capturing device and/or a video/image generating device.
- a video/image capture device may include, for example, one or more cameras, a video/image archive containing previously captured video/images, and the like.
- a video/image generating device may include, for example, a computer, tablet, and smart phone, and may (electronically) generate a video/image.
- A virtual video/image may be generated through a computer or the like, in which case the video/image capturing process may be replaced by a process of generating related data.
- the encoder 12 may encode an input video/image.
- the encoder 12 may perform a series of procedures such as prediction, transformation, and quantization for compression and encoding efficiency.
- the encoder 12 may output encoded data (encoded video/image information) in the form of a bitstream.
- the transmitter 13 may transmit the encoded video/image information or data output in the form of a bitstream in the form of a file or streaming to the receiver 21 of the decoding apparatus 20 through a digital storage medium or a network.
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the transmission unit 13 may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
- the receiver 21 may extract/receive the bitstream from the storage medium or the network and transmit it to the decoder 22 .
- the decoder 22 may decode the video/image by performing a series of procedures such as inverse quantization, inverse transform, and prediction corresponding to the operation of the encoder 12 .
- the rendering unit 23 may render the decoded video/image.
- the rendered video/image may be displayed through the display unit.
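The source-encode-transmit-decode-render dataflow described above can be illustrated with stand-in components. None of these classes correspond to the actual apparatus of the disclosure, and the "bitstream" here is a trivially lossless placeholder.

```python
# Toy sketch of the video coding system dataflow: source picture -> encoder
# -> bitstream -> transmission -> decoder -> reconstructed picture. The
# prediction/transform/quantization stages are elided; this shows only the
# structure of the pipeline, not real compression.

class Encoder:
    def encode(self, picture):
        return {"payload": picture}       # stand-in "bitstream"

class Decoder:
    def decode(self, bitstream):
        return bitstream["payload"]

def transmit(bitstream):                  # file or streaming over a storage medium/network
    return dict(bitstream)

picture = [[10, 20], [30, 40]]
reconstructed = Decoder().decode(transmit(Encoder().encode(picture)))
```

In the real system the decoder mirrors the encoder's operations (inverse quantization, inverse transform, prediction), which is why the two classes are defined as counterparts here.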
- FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure can be applied.
- The image encoding apparatus 100 includes an image segmentation unit 110, a subtraction unit 115, a transform unit 120, a quantization unit 130, an inverse quantization unit 140, an inverse transform unit 150, an adder 155, a filtering unit 160, a memory 170, an inter prediction unit 180, an intra prediction unit 185, and an entropy encoding unit 190.
- the inter prediction unit 180 and the intra prediction unit 185 may be collectively referred to as a “prediction unit”.
- the transform unit 120 , the quantization unit 130 , the inverse quantization unit 140 , and the inverse transform unit 150 may be included in a residual processing unit.
- the residual processing unit may further include a subtraction unit 115 .
- All or at least some of the plurality of components constituting the image encoding apparatus 100 may be implemented as one hardware component (eg, an encoder or a processor) according to an embodiment.
- the memory 170 may include a decoded picture buffer (DPB), and may be implemented by a digital storage medium.
- DPB decoded picture buffer
- the image dividing unit 110 may divide an input image (or a picture, a frame) input to the image encoding apparatus 100 into one or more processing units.
- the processing unit may be referred to as a coding unit (CU).
- A coding unit may be obtained by recursively partitioning a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree/binary-tree/ternary-tree (QT/BT/TT) structure.
- one coding unit may be divided into a plurality of coding units having a lower depth based on a quad tree structure, a binary tree structure, and/or a ternary tree structure.
- a quad tree structure may be applied first and a binary tree structure and/or a ternary tree structure may be applied later.
- a coding procedure according to the present disclosure may be performed based on the last coding unit that is no longer divided.
- The largest coding unit may be directly used as the final coding unit, or a coding unit of a lower depth obtained by dividing the largest coding unit may be used as the final coding unit.
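The recursive partitioning described above can be illustrated with a quad-tree-only toy. A real QT/BT/TT structure also allows binary and ternary splits driven by signaled flags; this sketch splits purely by size, as an assumption for illustration.

```python
# Toy quad-tree partitioning: a CTU is split into four equal sub-blocks
# until a leaf condition (minimum size) holds; the leaves play the role of
# the final coding units on which the coding procedure is performed.

def quad_split(x, y, w, h, min_size, leaves):
    if w <= min_size and h <= min_size:
        leaves.append((x, y, w, h))       # final coding unit, no further split
        return
    hw, hh = w // 2, h // 2
    for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
        quad_split(x + dx, y + dy, hw, hh, min_size, leaves)

leaves = []
quad_split(0, 0, 16, 16, 8, leaves)       # a 16x16 "CTU" with 8x8 leaves
```

A 16x16 block with an 8x8 minimum yields four leaves after one split level; the same routine handles deeper recursion when the CTU is larger.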
- the coding procedure may include procedures such as prediction, transformation, and/or restoration, which will be described later.
- the processing unit of the coding procedure may be a prediction unit (PU) or a transform unit (TU).
- the prediction unit and the transform unit may be divided or partitioned from the final coding unit, respectively.
- the prediction unit may be a unit of sample prediction
- the transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
- The prediction unit (the inter prediction unit 180 or the intra prediction unit 185) may perform prediction on a processing target block (current block) and generate a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied on a current block or CU basis.
- the prediction unit may generate various information regarding prediction of the current block and transmit it to the entropy encoding unit 190 .
- the prediction information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
- the intra prediction unit 185 may predict the current block with reference to samples in the current picture.
- The referenced samples may be located in the neighborhood of the current block or may be located apart from the current block, depending on the intra prediction mode and/or the intra prediction technique.
- the intra prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the non-directional modes may include, for example, a DC mode and a planar mode.
- the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the granularity of the prediction direction. However, this is an example, and a higher or lower number of directional prediction modes may be used according to a setting.
- the intra prediction unit 185 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
- the inter prediction unit 180 may derive the predicted block for the current block based on the reference block (reference sample array) specified by the motion vector on the reference picture.
- the motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring blocks and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring blocks may include spatial neighboring blocks existing in the current picture and temporal neighboring blocks present in the reference picture.
- the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
- the temporal neighboring block may be called a collocated reference block, a collocated CU (colCU), or the like.
- the reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
- the inter prediction unit 180 may construct a motion information candidate list based on neighboring blocks, and may generate information indicating which candidate is used to derive the motion vector and/or the reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the skip mode and the merge mode, the inter prediction unit 180 may use motion information of a neighboring block as motion information of the current block. In the skip mode, unlike the merge mode, a residual signal may not be transmitted.
- in the case of the motion vector prediction (MVP) mode, the motion vector of a neighboring block is used as a motion vector predictor, and the motion vector of the current block may be signaled by a motion vector difference and an indicator for the motion vector predictor.
- the motion vector difference may mean a difference between the motion vector of the current block and the motion vector predictor.
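The motion vector reconstruction described above can be sketched as follows. This is an illustrative simplification, not the normative process; the candidate values and the `reconstruct_mv` helper are hypothetical:

```python
# Sketch of MVP-style motion vector signaling: the decoder picks a
# predictor from a candidate list using a signaled index, then adds the
# transmitted motion vector difference (MVD) to recover the motion vector.

def reconstruct_mv(mvp_candidates, mvp_idx, mvd):
    """mv = mvp + mvd, with the predictor selected by a signaled index."""
    mvp = mvp_candidates[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Two hypothetical candidates, e.g. from the left and above neighbors:
candidates = [(4, -2), (3, 0)]
mv = reconstruct_mv(candidates, mvp_idx=0, mvd=(1, 2))
# mv == (5, 0)
```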
- the prediction unit may generate a prediction signal based on various prediction methods and/or prediction techniques to be described later. For example, the prediction unit may apply intra prediction or inter prediction for prediction of the current block, and may simultaneously apply intra prediction and inter prediction. A prediction method that simultaneously applies intra prediction and inter prediction for prediction of the current block may be referred to as combined inter and intra prediction (CIIP). Also, the prediction unit may perform intra block copy (IBC) for prediction of the current block. The intra block copy may be used for video/video coding of content such as games, for example, screen content coding (SCC). IBC is a method of predicting a current block using a reconstructed reference block in a current picture located a predetermined distance away from the current block.
- the position of the reference block in the current picture may be encoded as a vector (block vector) corresponding to the predetermined distance.
- IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this disclosure.
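The block-vector addressing used by IBC can be sketched as below. The validity check is a deliberately simplified stand-in (it only requires the reference block to lie in the already-reconstructed area above or to the left); real codecs impose tighter range constraints:

```python
# Sketch of intra block copy (IBC) addressing: the reference block is
# located inside the current picture at an offset (block vector) from
# the current block.

def ibc_reference_origin(cur_x, cur_y, block_vector):
    """Top-left position of the reference block in the current picture."""
    bvx, bvy = block_vector
    return cur_x + bvx, cur_y + bvy

def is_valid_ibc_reference(ref_x, ref_y, w, h, cur_x, cur_y):
    """Simplified check: the reference block must lie fully inside the
    picture and in the already-reconstructed area (strictly above the
    current block, or entirely to its left)."""
    if ref_x < 0 or ref_y < 0:
        return False
    return (ref_y + h <= cur_y) or (ref_x + w <= cur_x)
```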
- the prediction signal generated by the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
- the subtraction unit 115 may generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the prediction unit from the input image signal (original block, original sample array).
- the generated residual signal may be transmitted to the transform unit 120.
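The residual derivation above amounts to an element-wise difference between the original and predicted sample arrays, which can be sketched as:

```python
# Sketch of residual generation: subtract the predicted block from the
# original block, sample by sample (an illustration, not the normative
# definition).

def residual_block(original, predicted):
    """Element-wise difference between original and predicted samples."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]

res = residual_block([[10, 12], [8, 9]], [[9, 12], [10, 9]])
# res == [[1, 0], [-2, 0]]
```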
- the transform unit 120 may generate transform coefficients by applying a transform technique to the residual signal.
- the transformation method may include at least one of Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Karhunen-Loeve Transform (KLT), Graph-Based Transform (GBT), or Conditionally Non-linear Transform (CNT).
- GBT refers to a transform obtained from a graph when relationship information between pixels is expressed as the graph.
- CNT refers to a transform obtained by generating a prediction signal using all previously reconstructed pixels and deriving the transform based on the prediction signal.
- the transform process may be applied to square blocks of pixels having the same size, or may be applied to non-square blocks of variable size.
- the quantization unit 130 may quantize the transform coefficients and transmit them to the entropy encoding unit 190 .
- the entropy encoding unit 190 may encode a quantized signal (information on quantized transform coefficients) and output it as a bitstream. Information about the quantized transform coefficients may be referred to as residual information.
- the quantization unit 130 may rearrange the quantized transform coefficients in block form into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
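The block-to-vector rearrangement above can be sketched with a generic up-right diagonal scan. The scan order below is an illustrative assumption; the codec's actual coefficient scan may differ:

```python
# Sketch of a coefficient scan: flatten a 2D block of quantized
# coefficients into a 1D vector following an up-right diagonal order.

def diagonal_scan_order(w, h):
    """Positions (x, y) of a w x h block in up-right diagonal order."""
    order = []
    for d in range(w + h - 1):
        # walk each anti-diagonal from bottom-left to top-right
        for y in range(min(d, h - 1), -1, -1):
            x = d - y
            if x < w:
                order.append((x, y))
    return order

def scan_coefficients(block, w, h):
    """Flatten block[y][x] into a 1D list following the scan order."""
    return [block[y][x] for (x, y) in diagonal_scan_order(w, h)]

# For a 2x2 block the scan visits (0,0), (0,1), (1,0), (1,1):
flat = scan_coefficients([[5, 1], [2, 0]], 2, 2)
# flat == [5, 2, 1, 0]
```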
- the entropy encoding unit 190 may perform various encoding methods such as, for example, exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
- the entropy encoding unit 190 may encode information necessary for video/image reconstruction (eg, values of syntax elements, etc.) other than the quantized transform coefficients together or separately.
- encoded information (e.g., encoded video/image information) may be transmitted or stored in units of network abstraction layer (NAL) units in the form of a bitstream.
- the video/image information may further include information about various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). Also, the video/image information may further include general constraint information.
- the signaling information, transmitted information, and/or syntax elements mentioned in this disclosure may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream may be transmitted over a network or may be stored in a digital storage medium.
- the network may include a broadcasting network and/or a communication network.
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- a transmission unit (not shown) for transmitting the signal output from the entropy encoding unit 190 and/or a storage unit (not shown) for storing the signal may be provided as internal/external elements of the image encoding apparatus 100, or the transmission unit may be provided as a component of the entropy encoding unit 190.
- the quantized transform coefficients output from the quantization unit 130 may be used to generate a residual signal.
- the adder 155 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 180 or the intra prediction unit 185.
- the adder 155 may be referred to as a restoration unit or a restoration block generator.
- the generated reconstructed signal may be used for intra prediction of the next block to be processed in the current picture, or may be used for inter prediction of the next picture after filtering as described below.
- the filtering unit 160 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
- the filtering unit 160 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 170 , specifically, the DPB of the memory 170 .
- the various filtering methods may include, for example, deblocking filtering, a sample adaptive offset, an adaptive loop filter, a bilateral filter, and the like.
- the filtering unit 160 may generate various information regarding filtering and transmit it to the entropy encoding unit 190 as described later in the description of each filtering method.
- the filtering-related information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
- the modified reconstructed picture transmitted to the memory 170 may be used as a reference picture in the inter prediction unit 180 .
- when inter prediction is applied, the image encoding apparatus 100 can thereby avoid a prediction mismatch between the image encoding apparatus 100 and the image decoding apparatus, and can also improve encoding efficiency.
- the DPB in the memory 170 may store a reconstructed picture corrected for use as a reference picture in the inter prediction unit 180 .
- the memory 170 may store motion information of a block in which motion information in the current picture is derived (or encoded) and/or motion information of blocks in an already reconstructed picture.
- the stored motion information may be transmitted to the inter prediction unit 180 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
- the memory 170 may store reconstructed samples of reconstructed blocks in the current picture, and may transmit the reconstructed samples to the intra prediction unit 185 .
- FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure can be applied.
- the image decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an adder 235, a filtering unit 240, a memory 250, an inter prediction unit 260, and an intra prediction unit 265.
- the inter prediction unit 260 and the intra prediction unit 265 may be collectively referred to as a “prediction unit”.
- the inverse quantization unit 220 and the inverse transform unit 230 may be included in the residual processing unit.
- All or at least some of the plurality of components constituting the image decoding apparatus 200 may be implemented as one hardware component (eg, a decoder or a processor) according to an embodiment.
- the memory 250 may include a DPB, and may be implemented by a digital storage medium.
- the image decoding apparatus 200 may reconstruct the image by performing a process corresponding to the process performed by the image encoding apparatus 100 of FIG. 2 .
- the image decoding apparatus 200 may perform decoding using a processing unit applied in the image encoding apparatus.
- the processing unit of decoding may be, for example, a coding unit.
- a coding unit may be a coding tree unit or may be obtained by dividing the largest coding unit.
- the reconstructed image signal decoded and output through the image decoding apparatus 200 may be reproduced through a reproducing apparatus (not shown).
- the image decoding apparatus 200 may receive the signal output from the image encoding apparatus of FIG. 2 in the form of a bitstream.
- the received signal may be decoded through the entropy decoding unit 210 .
- the entropy decoding unit 210 may parse the bitstream to derive information (eg, video/image information) required for image restoration (or picture restoration).
- the video/image information may further include information about various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/image information may further include general constraint information.
- the image decoding apparatus may additionally use the information about the parameter sets and/or the general constraint information to decode the image.
- the signaling information, received information and/or syntax elements mentioned in this disclosure may be obtained from the bitstream by being decoded through the decoding procedure.
- the entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output a value of a syntax element required for image reconstruction and quantized values of transform coefficients related to the residual.
- the CABAC entropy decoding method may receive a bin corresponding to each syntax element in the bitstream, determine a context model using the information of the syntax element to be decoded, decoding information of neighboring and to-be-decoded blocks, or information of symbols/bins decoded in a previous step, predict the occurrence probability of a bin according to the determined context model, and perform arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- the CABAC entropy decoding method may update the context model by using the decoded symbol/bin information for the context model of the next symbol/bin after determining the context model.
- prediction-related information among the information decoded by the entropy decoding unit 210 may be provided to the prediction unit (the inter prediction unit 260 and the intra prediction unit 265), and the residual values on which entropy decoding was performed by the entropy decoding unit 210, that is, the quantized transform coefficients and related parameter information, may be input to the inverse quantization unit 220.
- information about filtering among the information decoded by the entropy decoding unit 210 may be provided to the filtering unit 240 .
- a receiver (not shown) for receiving a signal output from the image encoding apparatus may be additionally provided as an internal/external element of the image decoding apparatus 200, or the receiver may be provided as a component of the entropy decoding unit 210.
- the image decoding apparatus may be referred to as a video/image/picture decoding apparatus.
- the image decoding apparatus may include an information decoder (video/image/picture information decoder) and/or a sample decoder (video/image/picture sample decoder).
- the information decoder may include the entropy decoding unit 210, and the sample decoder may include at least one of the inverse quantization unit 220, the inverse transform unit 230, the adder 235, the filtering unit 240, the memory 250, the inter prediction unit 260, and the intra prediction unit 265.
- the inverse quantizer 220 may inverse quantize the quantized transform coefficients to output transform coefficients.
- the inverse quantizer 220 may rearrange the quantized transform coefficients in a two-dimensional block form. In this case, the rearrangement may be performed based on a coefficient scan order performed by the image encoding apparatus.
- the inverse quantizer 220 may perform inverse quantization on the quantized transform coefficients using a quantization parameter (eg, quantization step size information) and obtain transform coefficients.
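The scalar inverse quantization step can be sketched as below. The uniform step-size scaling is a simplification labeled as such; real codecs derive the step from a QP and may apply per-frequency scaling lists:

```python
# Sketch of inverse quantization: scale each quantized level by the
# quantization step size to recover approximate transform coefficients.

def dequantize(qcoeffs, step_size):
    """Multiply each quantized level by the step size (simplified
    uniform scalar dequantization)."""
    return [[level * step_size for level in row] for row in qcoeffs]

coeffs = dequantize([[2, -1], [0, 3]], step_size=8)
# coeffs == [[16, -8], [0, 24]]
```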
- the inverse transform unit 230 may inverse transform the transform coefficients to obtain a residual signal (residual block, residual sample array).
- the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on the prediction information output from the entropy decoding unit 210, and determine a specific intra/inter prediction mode (prediction technique).
- the intra prediction unit 265 may predict the current block with reference to samples in the current picture.
- the description of the intra prediction unit 185 may be equally applied to the intra prediction unit 265 .
- the inter prediction unit 260 may derive the predicted block for the current block based on the reference block (reference sample array) specified by the motion vector on the reference picture.
- the motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring blocks and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring blocks may include spatial neighboring blocks existing in the current picture and temporal neighboring blocks present in the reference picture.
- the inter prediction unit 260 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on the received candidate selection information.
- Inter prediction may be performed based on various prediction modes (techniques), and the prediction information may include information indicating a mode (technique) of inter prediction for the current block.
- the adder 235 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 260 and/or the intra prediction unit 265).
- when there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
- the description of the adder 155 may be equally applied to the adder 235 .
- the adder 235 may be referred to as a restoration unit or a restoration block generator.
- the generated reconstructed signal may be used for intra prediction of the next block to be processed in the current picture, or may be used for inter prediction of the next picture after filtering as described below.
- the filtering unit 240 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
- the filtering unit 240 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 250 , specifically, the DPB of the memory 250 .
- the various filtering methods may include, for example, deblocking filtering, a sample adaptive offset, an adaptive loop filter, a bilateral filter, and the like.
- the (modified) reconstructed picture stored in the DPB of the memory 250 may be used as a reference picture in the inter prediction unit 260 .
- the memory 250 may store motion information of a block in which motion information in the current picture is derived (or decoded) and/or motion information of blocks in an already reconstructed picture.
- the stored motion information may be transmitted to the inter prediction unit 260 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
- the memory 250 may store reconstructed samples of blocks reconstructed in the current picture, and may transmit the reconstructed samples to the intra prediction unit 265 .
- the embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the image encoding apparatus 100 may be applied equally or correspondingly to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the image decoding apparatus 200.
- the video/image coding method may be performed based on the following image division structure.
- procedures such as prediction, residual processing ((inverse) transform, (inverse) quantization, etc.), syntax element coding, and filtering, which will be described later, may be performed based on the CTU and CU (and/or TU, PU) derived from the partitioning structure of the image.
- the image may be divided in block units, and the block division procedure may be performed by the image division unit 110 of the above-described encoding apparatus.
- the division-related information may be encoded by the entropy encoding unit 190 and transmitted to the decoding apparatus in the form of a bitstream.
- the entropy decoding unit 210 of the decoding apparatus may derive the block division structure of the current picture based on the division-related information obtained from the bitstream, and based on this, may perform a series of procedures for image decoding (e.g., prediction, residual processing, block/picture restoration, in-loop filtering, etc.).
- pictures may be divided into a sequence of coding tree units (CTUs). FIG. 4 shows an example in which a picture is divided into CTUs.
- a CTU may correspond to a coding tree block (CTB).
- the CTU may include a coding tree block of luma samples and two coding tree blocks of corresponding chroma samples.
- the CTU may include an NxN block of luma samples and two corresponding blocks of chroma samples.
- the coding unit may be obtained by recursively dividing a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree/binary-tree/ternary-tree (QT/BT/TT) structure.
- the CTU may first be divided into a quadtree structure. Thereafter, the leaf nodes of the quadtree structure may be further divided by the multitype tree structure.
- the division according to the quadtree refers to division that divides the current CU (or CTU) into quarters.
- the current CU may be divided into four CUs having the same width and the same height.
- the current CU corresponds to a leaf node of the quadtree structure.
- a CU corresponding to a leaf node of a quadtree structure may be used as the above-described final coding unit without being split any further.
- a CU corresponding to a leaf node of a quadtree structure may be further divided by a multitype tree structure.
- the division according to the multi-type tree structure may include two divisions according to the binary tree structure and two divisions according to the ternary tree structure.
- the two divisions according to the binary tree structure may include vertical binary splitting (SPLIT_BT_VER) and horizontal binary splitting (SPLIT_BT_HOR).
- the vertical binary division (SPLIT_BT_VER) refers to division that divides the current CU into two in the vertical direction. As shown in FIG. 4 , two CUs having the same height as the height of the current CU and half the width of the current CU may be generated by vertical binary partitioning.
- the horizontal binary division (SPLIT_BT_HOR) refers to division that divides the current CU into two in the horizontal direction. As shown in FIG. 5 , two CUs having a height of half the height of the current CU and the same width as the width of the current CU may be generated by horizontal binary partitioning.
- the two divisions according to the ternary tree structure may include a vertical ternary splitting (SPLIT_TT_VER) and a horizontal ternary splitting (SPLIT_TT_HOR).
- the vertical ternary division (SPLIT_TT_VER) divides the current CU in the vertical direction at a ratio of 1:2:1.
- the horizontal ternary division (SPLIT_TT_HOR) divides the current CU in the horizontal direction at a ratio of 1:2:1.
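The child-block geometry of the five split modes described above can be sketched as follows. This is an illustration of the size arithmetic only, assuming the usual power-of-two block dimensions:

```python
# Sketch of the partitioning geometry: child block sizes produced by
# quadtree, binary, and ternary splits of a w x h block.

def split_children(w, h, mode):
    """Return the (width, height) of each child for the given split mode."""
    if mode == "QT":            # quarter split: 4 equal children
        return [(w // 2, h // 2)] * 4
    if mode == "SPLIT_BT_VER":  # vertical binary: half width, same height
        return [(w // 2, h)] * 2
    if mode == "SPLIT_BT_HOR":  # horizontal binary: same width, half height
        return [(w, h // 2)] * 2
    if mode == "SPLIT_TT_VER":  # vertical ternary: widths in 1:2:1 ratio
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    if mode == "SPLIT_TT_HOR":  # horizontal ternary: heights in 1:2:1 ratio
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    raise ValueError(mode)

children = split_children(32, 32, "SPLIT_TT_VER")
# children == [(8, 32), (16, 32), (8, 32)]
```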
- FIG. 6 is a diagram illustrating a signaling mechanism of block partitioning information in a quadtree with nested multi-type tree structure according to the present disclosure.
- the CTU is treated as a root node of the quadtree, and the CTU is first divided into a quadtree structure.
- information (e.g., qt_split_flag) indicating whether quadtree splitting is performed on the current CU (a CTU or a node (QT_node) of the quadtree) may be signaled.
- when qt_split_flag has a first value (e.g., “1”), the current CU may be quadtree-split.
- when qt_split_flag has a second value (e.g., “0”), the current CU is not quadtree-split and becomes a leaf node (QT_leaf_node) of the quadtree.
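The qt_split_flag-driven splitting can be sketched as a recursive parse. The depth-first flag order and the implicit no-split at the minimum size are simplifying assumptions, not the normative parsing process:

```python
# Sketch of recursive quadtree parsing driven by a list of
# qt_split_flag values (1 = split into quarters, 0 = leaf).

def parse_quadtree(flags, x, y, size, min_qt_size=16, leaves=None):
    """Consume flags depth-first in z-scan order; return QT leaf nodes
    as (x, y, size). Blocks at min_qt_size are leaves without a flag."""
    if leaves is None:
        leaves = []
    split = size > min_qt_size and flags.pop(0) == 1
    if not split:
        leaves.append((x, y, size))  # QT leaf node (QT_leaf_node)
        return leaves
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            parse_quadtree(flags, x + dx, y + dy, half, min_qt_size, leaves)
    return leaves

# A 128x128 CTU split once, with all four 64x64 children kept as leaves:
leaves = parse_quadtree([1, 0, 0, 0, 0], 0, 0, 128)
# leaves == [(0, 0, 64), (64, 0, 64), (0, 64, 64), (64, 64, 64)]
```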
- the leaf nodes of each quadtree may then be further divided into a multitype tree structure. That is, the leaf node of the quadtree may be a node (MTT_node) of the multitype tree.
- a first flag (e.g., mtt_split_cu_flag) is signaled to indicate whether the corresponding node is further split.
- when the node is further split, a second flag (e.g., mtt_split_cu_vertical_flag) is signaled to indicate the splitting direction.
- when the second flag is 1, the splitting direction may be the vertical direction; when the second flag is 0, the splitting direction may be the horizontal direction.
- then, a third flag (e.g., mtt_split_cu_binary_flag) is signaled to indicate whether the split type is a binary split type or a ternary split type.
- when the third flag is 1, the split type may be a binary split type; when the third flag is 0, the split type may be a ternary split type.
- Nodes of a multitype tree obtained by binary partitioning or ternary partitioning may be further partitioned into a multitype tree structure.
- the nodes of the multitype tree cannot be partitioned into a quadtree structure.
- the first flag is 0, the corresponding node of the multitype tree is no longer split and becomes a leaf node (MTT_leaf_node) of the multitype tree.
- a CU corresponding to a leaf node of the multitype tree may be used as the above-described final coding unit.
- a multi-type tree splitting mode (MttSplitMode) of a CU may be derived as shown in Table 1.
- the multi-tree split mode may be referred to as a multi-tree split type or a split type for short.
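Since Table 1 itself is not reproduced in this extract, the flag-to-mode mapping below is reconstructed as an assumption following the VVC-style convention for these two flags:

```python
# Sketch of the MttSplitMode derivation from the two signaled flags
# (mtt_split_cu_vertical_flag, mtt_split_cu_binary_flag), reconstructed
# here as an assumption in place of the unshown Table 1.

def mtt_split_mode(vertical_flag, binary_flag):
    """Map the (direction, type) flag pair to a multi-type split mode."""
    table = {
        (0, 0): "SPLIT_TT_HOR",
        (0, 1): "SPLIT_BT_HOR",
        (1, 0): "SPLIT_TT_VER",
        (1, 1): "SPLIT_BT_VER",
    }
    return table[(vertical_flag, binary_flag)]

mode = mtt_split_mode(1, 1)
# mode == "SPLIT_BT_VER"
```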
- a CU may correspond to a coding block CB.
- a CU may include a coding block of luma samples and two coding blocks of chroma samples corresponding to the luma samples.
- the chroma component (sample) CB or TB size may be derived based on the luma component (sample) CB or TB size according to the component ratio of the color format (chroma format, e.g., 4:4:4, 4:2:2, 4:2:0, etc.) of the picture/video.
- when the color format is 4:4:4, the chroma component CB/TB size may be set to be the same as the luma component CB/TB size.
- when the color format is 4:2:2, the width of the chroma component CB/TB may be set to half the width of the luma component CB/TB, and the height of the chroma component CB/TB may be set to the height of the luma component CB/TB.
- when the color format is 4:2:0, the width of the chroma component CB/TB may be set to half the width of the luma component CB/TB, and the height of the chroma component CB/TB may be set to half the height of the luma component CB/TB.
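The chroma sizing rules above can be sketched directly from the component ratios. This is an illustration of the three ratios described in the text, not the normative derivation:

```python
# Sketch of chroma CB/TB size derivation from the luma size and the
# color (chroma) format, per the component ratios above.

def chroma_size(luma_w, luma_h, chroma_format):
    """Return (width, height) of the chroma block for a luma block."""
    if chroma_format == "4:4:4":   # same width, same height
        return luma_w, luma_h
    if chroma_format == "4:2:2":   # half width, same height
        return luma_w // 2, luma_h
    if chroma_format == "4:2:0":   # half width, half height
        return luma_w // 2, luma_h // 2
    raise ValueError(chroma_format)

size = chroma_size(64, 64, "4:2:0")
# size == (32, 32)
```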
- for example, when the size of the CTU is 128 based on the luma sample unit, the size of the CU may range from 128x128, which is the same size as the CTU, down to 4x4. In an embodiment, in the case of a 4:2:0 color format (or chroma format), the chroma CB size may range from 64x64 down to 2x2.
- the CU size and the TU size may be the same.
- a plurality of TUs may exist in the CU region.
- the TU size generally refers to a luma component (sample) TB (Transform Block) size.
- the TU size may be derived based on a preset maximum allowable TB size (maxTbSize). For example, when the CU size is larger than the maxTbSize, a plurality of TUs (TBs) having the maxTbSize may be derived from the CU, and transform/inverse transformation may be performed in units of the TUs (TB). For example, the maximum allowable luma TB size may be 64x64, and the maximum allowable chroma TB size may be 32x32. If the width or height of a CB divided according to the tree structure is greater than the maximum transform width or height, the CB may be automatically (or implicitly) divided until it satisfies the TB size limit in the horizontal and vertical directions.
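The implicit splitting of an oversized CU into maxTbSize-bounded TUs can be sketched as a simple tiling. This is an illustrative simplification; the actual process splits recursively until both dimensions satisfy the TB size limit:

```python
# Sketch of deriving TUs from a CU that exceeds the maximum allowed
# TB size (maxTbSize): tile the CU with TUs clamped to that size.

def tile_into_tus(cu_w, cu_h, max_tb_size=64):
    """Return (x, y, w, h) for each TU covering the CU."""
    tu_w = min(cu_w, max_tb_size)
    tu_h = min(cu_h, max_tb_size)
    return [(x, y, tu_w, tu_h)
            for y in range(0, cu_h, tu_h)
            for x in range(0, cu_w, tu_w)]

# A 128x128 CU with maxTbSize 64 yields four 64x64 TUs:
tus = tile_into_tus(128, 128, max_tb_size=64)
# len(tus) == 4
```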
- the intra prediction mode/type is derived in the CU (or CB) unit, and the peripheral reference sample derivation and prediction sample generation procedure may be performed in a TU (or TB) unit.
- one or a plurality of TUs (or TBs) may exist in one CU (or CB) region, and in this case, the plurality of TUs (or TBs) may share the same intra prediction mode/type.
- the following parameters may be signaled from the encoding device to the decoding device as SPS syntax elements.
- CTU size: a parameter indicating the root node size of the quadtree
- MinQTSize: a parameter indicating the minimum available quadtree leaf node size
- MaxBTSize: a parameter indicating the maximum available binary tree root node size
- MaxTTSize: a parameter indicating the maximum available ternary tree root node size
- MaxMttDepth: a parameter indicating the maximum allowed hierarchy depth of multi-type tree splitting from a quadtree leaf node
- MinBtSize: a parameter indicating the minimum available binary tree leaf node size
- MinTtSize: a parameter indicating the minimum available ternary tree leaf node size
- at least one of the above parameters may be signaled.
- the CTU size may be set to a 128x128 luma block and two 64x64 chroma blocks corresponding to the luma block.
- MinQTSize may be set to 16x16
- MaxBtSize may be set to 128x128
- MaxTtSize may be set to 64x64
- MinBtSize and MinTtSize may be set to 4x4
- MaxMttDepth may be set to 4.
- Quadtree partitioning may be applied to the CTU to create quadtree leaf nodes.
- a quadtree leaf node may be referred to as a leaf QT node.
- quadtree leaf nodes may have a size from 128x128 (e.g., the CTU size) down to 16x16 (e.g., the MinQTSize).
- when the leaf QT node is 128x128, it may not be additionally split into a binary tree/ternary tree, because in this case, even if it were split, it would exceed MaxBtSize and MaxTtSize (i.e., 64x64).
- the leaf QT node may be further divided into a multitype tree. Therefore, a leaf QT node is a root node for a multitype tree, and a leaf QT node may have a multitype tree depth (mttDepth) value of 0. If the multitype tree depth reaches MaxMttdepth (ex. 4), further splitting may not be considered.
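The depth and size constraints above can be sketched as a gate on further multi-type tree splitting. The conditions below use the example parameter values from the text and are a simplification; the normative rules include additional cases not modeled here:

```python
# Sketch of whether a further multi-type tree split is allowed, using
# the example parameters from the text (MaxMttDepth=4, MaxBtSize=128,
# MaxTtSize=64, MinBtSize=MinTtSize=4). Simplified, non-normative.

def mtt_split_allowed(w, h, mtt_depth, mode,
                      max_mtt_depth=4, max_bt_size=128,
                      max_tt_size=64, min_bt_size=4, min_tt_size=4):
    """Return True if the split mode is permitted at this node."""
    if mtt_depth >= max_mtt_depth:      # depth limit reached
        return False
    if mode.startswith("SPLIT_BT"):
        if max(w, h) > max_bt_size:     # node too large for BT
            return False
        side = w if mode == "SPLIT_BT_VER" else h
        return side // 2 >= min_bt_size
    if mode.startswith("SPLIT_TT"):
        if max(w, h) > max_tt_size:     # node too large for TT
            return False
        side = w if mode == "SPLIT_TT_VER" else h
        return side // 4 >= min_tt_size
    raise ValueError(mode)

# A 128x128 leaf QT node cannot be ternary-split (exceeds MaxTtSize):
ok = mtt_split_allowed(128, 128, 0, "SPLIT_TT_VER")
# ok is False
```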
- the encoding apparatus may omit signaling of splitting information.
- the decoding apparatus may derive the division information as a predetermined value.
- one CTU may include a coding block of luma samples (hereinafter, referred to as a “luma block”) and two coding blocks of chroma samples corresponding thereto (hereinafter, referred to as a “chroma block”).
- the aforementioned coding tree scheme may be equally applied to the luma block and the chroma block of the current CU, or may be applied separately.
- a luma block and a chroma block within one CTU may be divided into the same block tree structure, and the tree structure in this case is indicated as a single tree (SINGLE_TREE).
- a luma block and a chroma block in one CTU may be divided into individual block tree structures, and the tree structure in this case is indicated as a dual tree (DUAL_TREE). That is, when the CTU is divided into a dual tree, a block tree structure for a luma block and a block tree structure for a chroma block may exist separately.
- the block tree structure for the luma block may be called a dual tree luma (DUAL_TREE_LUMA)
- the block tree structure for the chroma block may be called a dual tree chroma (DUAL_TREE_CHROMA).
- the luma block and chroma blocks in one CTU may be constrained to have the same coding tree structure.
- the luma block and the chroma block may have separate block tree structures from each other. If an individual block tree structure is applied, a luma coding tree block (CTB) may be divided into CUs based on a specific coding tree structure, and the chroma CTB may be divided into chroma CUs based on another coding tree structure.
- a CU in an I slice/tile group to which an individual block tree structure is applied may consist of a coding block of a luma component or coding blocks of two chroma components, and a CU of a P or B slice/tile group may consist of blocks of three color components (a luma component and two chroma components).
- the structure in which the CU is divided is not limited thereto.
- the BT structure and the TT structure may be interpreted as concepts included in a Multiple Partitioning Tree (MPT) structure, and the CU may be interpreted as being divided through the QT structure and the MPT structure.
- the split structure may be determined by signaling a syntax element (e.g. MPT_split_type) including information on how many blocks a leaf node of the QT structure is divided into, and a syntax element (e.g. MPT_split_mode) including information on whether the leaf node of the QT structure is divided in the vertical or horizontal direction.
- a CU may be partitioned in a way different from the QT structure, the BT structure, or the TT structure. That is, unlike the QT structure in which a CU of a lower depth is divided into 1/4 the size of a CU of a higher depth, the BT structure in which a CU of a lower depth is divided into 1/2 the size of a CU of a higher depth, or the TT structure in which a CU of a lower depth is divided into 1/4 or 1/2 the size of a CU of a higher depth, a CU of a lower depth may in some cases be divided into 1/5, 1/3, 3/8, 3/5, 2/3, or 5/8 the size of a CU of a higher depth, and the method by which the CU is divided is not limited thereto.
- the quadtree coding block structure accompanying the multi-type tree can provide a very flexible block division structure.
- different partitioning patterns may potentially lead to the same coding block structure result in some cases.
- the encoding apparatus and the decoding apparatus may reduce the data amount of the partition information by limiting the generation of such redundant partition patterns.
- Intra prediction may indicate prediction that generates prediction samples for a current block based on reference samples in a picture to which the current block belongs (hereinafter, referred to as a current picture).
- neighboring reference samples to be used for intra prediction of the current block may be derived.
- the neighboring reference samples of the current block may include a total of 2xnH samples adjacent to the left boundary and neighboring the bottom-left of the current block of size nWxnH, a total of 2xnW samples adjacent to the top boundary and neighboring the top-right of the current block, and one sample neighboring the top-left of the current block.
- the neighboring reference samples of the current block may include a plurality of columns of upper neighboring samples and a plurality of rows of left neighboring samples.
- the neighboring reference samples of the current block may include a total of nH samples adjacent to the right boundary of the current block of size nWxnH, a total of nW samples adjacent to the bottom boundary of the current block, and one sample neighboring the bottom-right of the current block.
- the decoder may construct neighboring reference samples to be used for prediction by substituting unavailable samples with available samples.
- neighboring reference samples to be used for prediction may be configured through interpolation of available samples.
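The reference sample construction described above (2xnH left samples, 2xnW top samples, one top-left sample, with unavailable samples substituted by available ones) can be sketched as follows. This is a simplified illustration: `recon` is a hypothetical mapping of (x, y) positions to reconstructed values, and the substitution simply reuses the most recent available sample rather than the exact scan order a standard would specify.

```python
def build_reference_samples(recon, nW, nH, fallback=128):
    # Gather the neighboring reference positions of an nW x nH block:
    # the top-left sample, 2*nH samples on the left (left + bottom-left),
    # and 2*nW samples on top (top + top-right).
    positions = [(-1, -1)]
    positions += [(-1, y) for y in range(2 * nH)]   # left + bottom-left column
    positions += [(x, -1) for x in range(2 * nW)]   # top + top-right row
    ref = {}
    last_available = fallback                        # default when nothing is available yet
    for pos in positions:
        if pos in recon:                 # available: take the reconstructed value
            last_available = recon[pos]
        ref[pos] = last_available        # unavailable: substitute the nearest prior sample
    return ref
```

For example, if only the sample at (-1, 0) and the sample at (0, -1) are available, all other left-column positions inherit the (-1, 0) value and all other top-row positions inherit the (0, -1) value.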
- FIG. 8 is a diagram illustrating an intra prediction-based video/image encoding method.
- FIG. 9 is a diagram illustrating an intra prediction unit in an encoding apparatus.
- S800 may be performed by the intra prediction unit 222 of the encoding apparatus, and S810 may be performed by the residual processing unit 230 of the encoding apparatus. Specifically, S810 may be performed by the subtraction unit 231 of the encoding apparatus.
- the prediction information may be derived by the intra prediction unit 222 and encoded by the entropy encoding unit 240 .
- the residual information may be derived by the residual processing unit 230 and encoded by the entropy encoding unit 240 .
- the residual information is information about the residual samples.
- the residual information may include information about quantized transform coefficients for the residual samples.
- the residual samples may be derived as transform coefficients through the transform unit 232 of the encoding apparatus, and the transform coefficients may be derived as quantized transform coefficients through the quantization unit 233 .
- Information on the quantized transform coefficients may be encoded by the entropy encoding unit 240 through a residual coding procedure.
- the encoding apparatus performs intra prediction on the current block (S800).
- the encoding apparatus may derive an intra prediction mode/type for the current block, derive neighboring reference samples of the current block, and generate prediction samples in the current block based on the intra prediction mode/type and the neighboring reference samples.
- intra prediction mode/type determination, peripheral reference samples derivation, and prediction samples generation procedures may be performed simultaneously, or one procedure may be performed before another procedure.
- the intra prediction unit 222 of the encoding apparatus may include an intra prediction mode/type determiner 222-1, a reference sample derivation unit 222-2, and a prediction sample derivation unit 222-3.
- the intra-prediction mode/type determiner 222-1 determines the intra-prediction mode/type for the current block, and the reference sample derivation unit 222-2 derives neighboring reference samples of the current block,
- the prediction sample derivation unit 222 - 3 may derive prediction samples of the current block.
- the intra prediction unit 222 may further include a prediction sample filter unit (not shown).
- the encoding apparatus may determine a mode/type applied to the current block from among a plurality of intra prediction modes/types. The encoding apparatus may compare RD costs for the intra prediction mode/types and determine an optimal intra prediction mode/type for the current block.
- the encoding apparatus generates residual samples for the current block based on the prediction samples (S810).
- the encoding apparatus may compare the prediction samples with the original samples of the current block based on phase and derive the residual samples.
- the encoding apparatus may encode image information including information on the intra prediction (prediction information) and residual information on the residual samples ( S820 ).
- the prediction information may include the intra prediction mode information and the intra prediction type information.
- the encoding apparatus may output encoded image information in the form of a bitstream.
- the output bitstream may be transmitted to a decoding device through a storage medium or a network.
- the residual information may include residual coding syntax, which will be described later.
- the encoding apparatus may transform/quantize the residual samples to derive quantized transform coefficients.
- the residual information may include information on the quantized transform coefficients.
- the encoding apparatus may generate a reconstructed picture (including reconstructed samples and reconstructed blocks). To this end, the encoding apparatus may inverse quantize/inverse transform the quantized transform coefficients again to derive (modified) residual samples. The reason for performing the inverse quantization/inverse transformation after transforming/quantizing the residual samples in this way is to derive the same residual samples as the residual samples derived from the decoding apparatus as described above.
- the encoding apparatus may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and the (modified) residual samples. A reconstructed picture for the current picture may be generated based on the reconstructed block. As described above, an in-loop filtering procedure may be further applied to the reconstructed picture.
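The encoder-side flow of S800 to S820, including the reconstruction step above, can be sketched as follows. All callables (predict_intra, transform_quantize, and so on) are hypothetical stand-ins for the units described in the text, not functions defined by the disclosure.

```python
def encode_block_intra(original, predict_intra, transform_quantize,
                       inverse_quantize_transform, entropy_encode):
    pred = predict_intra()                               # S800: intra prediction samples
    residual = [o - p for o, p in zip(original, pred)]   # S810: residual samples
    coeffs = transform_quantize(residual)                # quantized transform coefficients
    bitstream = entropy_encode(coeffs)                   # S820: encode residual information
    # Inverse quantize/transform again so the encoder reconstructs the same
    # (modified) residual samples the decoder will derive.
    recon = [p + r for p, r in zip(pred, inverse_quantize_transform(coeffs))]
    return bitstream, recon
```

With identity transform/quantization stand-ins, the reconstructed samples equal the original samples, illustrating why the encoder repeats the inverse steps.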
- Hereinafter, a video/image decoding method based on intra prediction and an intra prediction unit in a decoding apparatus will be described using FIGS. 10 and 11.
- FIG. 10 is a diagram illustrating an intra prediction-based video/image decoding method.
- FIG. 11 is a diagram illustrating an intra prediction unit in a decoding apparatus.
- the decoding apparatus may perform an operation corresponding to the operation performed by the encoding apparatus.
- S1000 to S1020 may be performed by the intra prediction unit 331 of the decoding apparatus, and the prediction information of S1000 and the residual information of S1030 may be obtained from the bitstream by the entropy decoding unit 310 of the decoding apparatus.
- the residual processing unit 320 of the decoding apparatus may derive residual samples for the current block based on the residual information.
- the inverse quantization unit 321 of the residual processing unit 320 derives transform coefficients by performing inverse quantization based on the quantized transform coefficients derived based on the residual information
- the inverse transform unit 322 may derive residual samples for the current block by performing inverse transform on the transform coefficients.
- S1040 may be performed by the adder 340 or the restorer of the decoding apparatus.
- the decoding apparatus may derive the intra prediction mode/type for the current block based on the received prediction information (intra prediction mode/type information) (S1000).
- the decoding apparatus may derive peripheral reference samples of the current block (S1010).
- the decoding apparatus generates prediction samples in the current block based on the intra prediction mode/type and the neighboring reference samples (S1020).
- the decoding apparatus generates residual samples for the current block based on the received residual information (S1030).
- the decoding apparatus may generate reconstructed samples for the current block based on the prediction samples and the residual samples, and derive a reconstructed block including the reconstructed samples (S1040).
- a reconstructed picture for the current picture may be generated based on the reconstructed block.
- an in-loop filtering procedure may be further applied to the reconstructed picture.
- the intra prediction unit 331 of the decoding apparatus may include an intra prediction mode/type determiner 331-1, a reference sample derivation unit 331-2, and a prediction sample derivation unit 331-3,
- the intra prediction mode/type determiner 331-1 determines the intra prediction mode/type for the current block based on the intra prediction mode/type information obtained from the entropy decoding unit 310, and a reference sample derivation unit ( 331-2) may derive peripheral reference samples of the current block, and the prediction sample derivation unit 331-3 may derive prediction samples of the current block.
- the intra prediction unit 331 may further include a prediction sample filter unit (not shown).
- the intra prediction mode information may include, for example, flag information (e.g. intra_luma_mpm_flag) indicating whether a most probable mode (MPM) or a remaining mode is applied to the current block, and when the MPM is applied to the current block, the prediction mode information may further include index information (e.g. intra_luma_mpm_idx) indicating one of the intra prediction mode candidates (MPM candidates).
- the intra prediction mode candidates (MPM candidates) may be composed of an MPM candidate list or an MPM list.
- the intra prediction mode information may further include remaining mode information (e.g. intra_luma_mpm_remainder) indicating one of the remaining intra prediction modes other than the intra prediction mode candidates (MPM candidates).
- the decoding apparatus may determine the intra prediction mode of the current block based on the intra prediction mode information.
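The MPM-based mode derivation described above can be sketched as follows. The syntax element roles follow the text (intra_luma_mpm_flag, intra_luma_mpm_idx, intra_luma_mpm_remainder), but the MPM list contents and the ordering of the non-MPM modes are illustrative assumptions, not the normative construction.

```python
def derive_intra_mode(mpm_flag, mpm_idx, mpm_remainder, mpm_list, num_modes=67):
    # mpm_flag plays the role of intra_luma_mpm_flag.
    if mpm_flag:
        return mpm_list[mpm_idx]         # intra_luma_mpm_idx selects an MPM candidate
    # Otherwise intra_luma_mpm_remainder indexes the remaining (non-MPM)
    # intra prediction modes, taken here in ascending order.
    non_mpm = sorted(m for m in range(num_modes) if m not in mpm_list)
    return non_mpm[mpm_remainder]
```

For example, with an illustrative MPM list [0, 1, 50, 18, 46, 54], an MPM index of 2 selects mode 50, while remainder index 0 selects the smallest mode not in the list.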
- the intra prediction type information may be implemented in various forms.
- the intra prediction type information may include intra prediction type index information indicating one of the intra prediction types.
- the intra prediction type information may include at least one of reference sample line information (e.g. intra_luma_ref_idx) indicating whether the MRL is applied to the current block and, if applied, which reference sample line is used, ISP flag information (e.g. intra_subpartitions_mode_flag) indicating whether the ISP is applied to the current block, ISP type information (e.g. intra_subpartitions_split_flag) indicating the split type of the subpartitions when the ISP is applied, flag information indicating whether the PDPC is applied, or flag information indicating whether the LIP is applied. Also, the intra prediction type information may include a MIP flag indicating whether MIP is applied to the current block.
- the intra prediction mode information and/or the intra prediction type information may be encoded/decoded through the coding method described in this document.
- the intra prediction mode information and/or the intra prediction type information may be encoded/decoded through entropy coding (e.g. CABAC, CAVLC) based on a truncated (rice) binary code.
- the intra prediction modes may include two non-directional intra prediction modes and 65 directional intra prediction modes.
- the non-directional intra prediction modes may include a planar intra prediction mode and a DC intra prediction mode, and the directional intra prediction modes may include No. 2 to No. 66 intra prediction modes.
- An example of a directional intra prediction mode is shown in FIG. 12 .
- the intra prediction mode may further include a cross-component linear model (CCLM) mode for chroma samples in addition to the above-described intra prediction modes.
- the CCLM mode can be divided into LT_CCLM, L_CCLM, and T_CCLM according to whether left samples, upper samples, or both are considered for LM parameter derivation, and can be applied only to the chroma component.
- the above-described intra prediction mode may be indexed, for example, as shown in Table 2 below.
- the prediction unit of the encoding apparatus/decoding apparatus may derive a reference sample according to the intra prediction mode of the current block among neighboring reference samples of the current block, and generate a prediction sample of the current block based on the reference sample .
- (i) a prediction sample may be derived based on the average or interpolation of the neighboring reference samples of the current block, or (ii) the prediction sample may be derived based on a reference sample existing in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block.
- the case of (i) may be called a non-directional mode or a non-angular mode, and the case of (ii) may be called a directional mode or an angular mode.
- Prediction samples may also be generated through linear interpolation of the neighboring reference samples. The above case may be called linear interpolation intra prediction (LIP).
- a temporary prediction sample of the current block may be derived based on the filtered neighboring reference samples, and the prediction sample of the current block may be derived by weighted summing the temporary prediction sample with at least one reference sample derived according to the intra prediction mode from among the existing, that is, unfiltered, neighboring reference samples. The above case may be called position dependent intra prediction (PDPC).
- the reference sample line with the highest prediction accuracy may be selected from among the multiple neighboring reference sample lines of the current block, and the prediction sample may be derived using the reference sample located in the prediction direction on the selected line; at this time, intra prediction encoding may be performed by indicating (signaling) the used reference sample line to the decoding apparatus.
- the above-described case may be referred to as multi-reference line intra prediction (MRL) or MRL-based intra prediction.
- the current block is divided into vertical or horizontal sub-partitions to perform intra prediction based on the same intra prediction mode, but neighboring reference samples may be derived and used in units of the sub-partitions. That is, in this case, the intra prediction mode for the current block is equally applied to the sub-partitions, and the intra prediction performance may be improved in some cases by deriving and using the neighboring reference samples in units of the sub-partitions.
- This prediction method may be called intra sub-partitions (ISP) or ISP-based intra prediction. Specific details will be described later.
- a value of a prediction sample may be derived through interpolation.
- the above-described intra prediction methods may be called an intra prediction type to be distinguished from the intra prediction mode.
- the intra prediction type may be referred to by various terms such as an intra prediction technique or an additional intra prediction mode.
- the intra prediction type (or additional intra prediction mode, etc.) may include at least one of the aforementioned LIP, PDPC, MRL, and ISP.
- the information on the intra prediction type may be encoded by an encoding device and included in a bitstream to be signaled to a decoding device.
- the information on the intra prediction type may be implemented in various forms, such as flag information indicating whether each intra prediction type is applied or index information indicating one of several intra prediction types.
- the PDPC may indicate an intra prediction method in which filtered reference samples are derived by performing filtering based on the filter for the PDPC, a temporary prediction sample of the current block is derived based on the intra prediction mode of the current block and the filtered reference samples, and the prediction sample of the current block is derived by weighted summing the temporary prediction sample with at least one reference sample derived according to the intra prediction mode from among the existing, that is, unfiltered, reference samples.
- the predefined filter may be one of five 7-tap filters.
- the predefined filter may be one of a 3-tap filter, a 5-tap filter, and a 7-tap filter.
- the 3-tap filter, the 5-tap filter, and the 7-tap filter may represent a filter having 3 filter coefficients, a filter having 5 filter coefficients, and a filter having 7 filter coefficients, respectively.
- the prediction result of the intra planar mode may be further modified by the PDPC.
- the PDPC may be applied, without separate signaling, to the intra planar mode, the intra DC mode, the horizontal intra prediction mode, the vertical intra prediction mode, the bottom-left directional intra prediction mode (i.e., intra prediction mode 2) and the eight directional intra prediction modes adjacent to it, and the top-right directional intra prediction mode and the eight directional intra prediction modes adjacent to it.
- a prediction sample at the (x, y) coordinates, predicted based on a linear combination of reference samples according to the intra prediction mode, may be derived as shown in Equation 1 below.
- pred(x,y) on the left side of Equation 1 represents the predicted sample value at the (x,y) coordinates.
- pred(x,y) on the right side of Equation 1 represents the temporary (primary) prediction sample value at the (x,y) coordinates.
- R(x,-1) and R(-1,y) represent the upper reference sample and the left reference sample located above and to the left of the current sample at the (x, y) coordinates.
- R(-1,-1) denotes the upper-left reference sample located at the upper-left corner of the current block.
- wL denotes a weight applied to the left reference sample
- wT denotes a weight applied to the upper reference sample
- wTL denotes a weight applied to the upper left reference sample.
- the temporary (primary) prediction sample may be generated as a result of performing intra prediction based on the intra prediction mode and reference samples of the current block.
- a final prediction sample of the current block may be generated based on Equation 1 above.
- the temporary (primary) prediction sample may be used as a final prediction sample of the current block.
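Equation 1 itself is not reproduced in this excerpt. The sketch below therefore uses the widely known VVC-style PDPC combination as an assumption, consistent with the weights wL, wT, wTL and the reference samples R(x,-1), R(-1,y), R(-1,-1) defined above; it is an illustration, not the claimed equation.

```python
def pdpc_sample(pred_xy, r_top, r_left, r_topleft, wT, wL, wTL):
    # Weighted sum of the temporary prediction sample pred(x,y) with the
    # unfiltered top, left, and top-left reference samples. 64 is the weight
    # scale, so the remaining weight goes to the temporary sample; "+ 32" is
    # the rounding offset for the ">> 6" normalization.
    w_cur = 64 - wL - wT + wTL
    return (wL * r_left + wT * r_top - wTL * r_topleft + w_cur * pred_xy + 32) >> 6
```

When all three weights are zero, the temporary prediction sample passes through unchanged, matching the statement above that it may be used directly as the final prediction sample.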
- FIGS. 13A to 13D are diagrams illustrating the reference samples defined in the PDPC.
- pred(x, y) represents a prediction sample obtained through intra prediction (the above-described temporary prediction sample), and R(x,-1) and R(-1,y) indicate the upper and left reference samples located above and to the left of the current sample at the (x, y) coordinates.
- FIG. 13A illustrates the reference samples (R(x,-1), R(-1,y), R(-1,-1)) when the prediction mode is a diagonal top-right mode.
- FIG. 13B illustrates the reference samples (R(x,-1), R(-1,y), R(-1,-1)) when the prediction mode is a diagonal bottom-left mode.
- FIG. 13C illustrates the reference samples (R(x,-1), R(-1,y), R(-1,-1)) when the prediction mode is an adjacent diagonal top-right mode.
- FIG. 13D illustrates the reference samples (R(x,-1), R(-1,y), R(-1,-1)) when the prediction mode is an adjacent diagonal bottom-left mode.
- the weights of the PDPC may be derived based on prediction modes.
- the weights wT, wL, and wTL of the PDPC may be derived as shown in Table 3 below.
- (Table 3) PDPC weights according to the prediction mode:
- Top-right diagonal mode: wT = 16 >> ( ( y << 1 ) >> shift ), wL = 16 >> ( ( x << 1 ) >> shift ), wTL = 0
- Bottom-left diagonal mode: wT = 16 >> ( ( y << 1 ) >> shift ), wL = 16 >> ( ( x << 1 ) >> shift ), wTL = 0
- Adjacent modes of the top-right diagonal mode: wT = 32 >> ( ( y << 1 ) >> shift ), wL = 0, wTL = 0
- Adjacent modes of the bottom-left diagonal mode: wT = 0, wL = 32 >> ( ( x << 1 ) >> shift ), wTL = 0
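The per-position weight derivation of Table 3 can be sketched as follows. The derivation of `shift` from the block size is an assumption based on common practice (the text does not define it), and the mode label strings are illustrative only.

```python
import math

def pdpc_weights_diagonal(x, y, width, height, mode):
    # Assumed size-dependent shift: (log2(W) + log2(H) - 2) >> 2; this is a
    # hypothetical stand-in since the excerpt does not define "shift".
    shift = (int(math.log2(width)) + int(math.log2(height)) - 2) >> 2
    if mode in ("top_right_diag", "bottom_left_diag"):
        wT = 16 >> ((y << 1) >> shift)
        wL = 16 >> ((x << 1) >> shift)
        wTL = 0
    elif mode == "adj_top_right_diag":
        wT, wL, wTL = 32 >> ((y << 1) >> shift), 0, 0
    elif mode == "adj_bottom_left_diag":
        wT, wL, wTL = 0, 32 >> ((x << 1) >> shift), 0
    else:
        raise ValueError("unsupported mode label")
    return wT, wL, wTL
```

Note how the weights decay with distance: samples farther from the top (larger y) or the left (larger x) receive smaller wT and wL, respectively.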
- a prediction sample is generated using a reference sample according to a prediction mode, and then the prediction sample is improved using a neighboring reference sample.
- the PDPC is not applied to all intra prediction modes; based on the 65 directional intra prediction modes, it may be applied to the Planar mode, the DC mode, mode 2 (the bottom-left diagonal mode), the VDIA mode (the top-right diagonal mode), the Hor (horizontal) mode, the Ver (vertical) mode, the neighboring modes of mode 2 (modes 3 to 10), and the neighboring modes of the VDIA mode (modes 58 to 65). Also, instead of being applied to all prediction samples in the block to be currently encoded, the PDPC may be variably applied in consideration of the size of the block.
- intra prediction may be performed using, as reference samples, neighboring samples located in a reference sample line separated by one to three sample distances from the top and/or left of the current block.
- FIG. 14 is a diagram for describing a reference sample line usable in an MRL method.
- reference line 0 may be referred to as a first reference sample line.
- Reference line 1 to Reference line 3 may be referred to as a second reference sample line to a fourth reference sample line, respectively.
- a multiple reference line index (ex. mrl_idx) for indicating which reference sample line is used for intra prediction with respect to the current block may be signaled.
- FIG. 15 is a diagram illustrating a syntax structure of a coding unit signaling the multi-reference line index.
- the multiple reference line index may be signaled in the form of intra_luma_ref_idx.
- when the value of the multi-reference line index is greater than 0, it can be said that MRL is applied to the target block.
- the intra_luma_ref_idx of FIG. 15 may be used to specify the reference sample line index IntraLumaRefLineIdx[ x0 ][ y0 ] to be used for intra prediction of the current coding unit of the (x0, y0) coordinates.
- when intra_luma_ref_idx[ x0 ][ y0 ] does not exist in the bitstream, the corresponding value may be inferred to be 0.
- intra_luma_ref_idx may be referred to as an (intra) reference sample line index or mrl_idx. Also, intra_luma_ref_idx may be referred to as intra_luma_ref_line_idx.
- Table 4 shows IntraLumaRefLineIdx[ x0 ][ y0 ] specified based on intra_luma_ref_idx[ x0 ][ y0 ].
- the flag indicating whether MPM is applied to the current coding unit is intra_luma_mpm_flag[ x0 ][ y0 ], and when the corresponding flag is not present in the bitstream, its value may be inferred to be 1. That is, it may be determined that MPM is applied to the current coding unit.
- the MRL may not be available for blocks of the first line (row) in the CTU. For example, when the upper boundary of the current coding unit is the upper boundary of the CTU, MRL is not available for the current coding unit. This is to prevent the use of extended reference samples existing outside the current CTU. Also, as will be described later, when a reference sample line other than the first reference sample line is used, the PDPC for the current coding unit may not be applied.
- the second and subsequent reference sample lines may be used to derive the DC value.
- the DC value may be derived based on the reference sample of the second and subsequent reference sample lines instead of the reference sample of the first reference sample line.
- information indicating a reference sample line in which intra prediction of the current block is used may be expressed as refIdx.
- refIdx of 0 may indicate the first reference sample line.
- Embodiments of the present disclosure relate to the above-described PDPC.
- Filtered (modified) prediction samples may be generated when the PDPC procedure is applied to intra prediction samples.
- An embodiment of the present disclosure proposes a method of performing PDPC in a chrominance block under a specific condition when applying PDPC in intra prediction for a chroma component (block).
- Existing PDPC determines whether to apply PDPC by applying different conditions according to the luminance component block and the chrominance component block.
- FIG. 16 is a diagram illustrating a PDPC application condition according to an embodiment of the present disclosure.
- PDPC may be applied to the intra-predicted prediction block of the current block.
- (Condition 1) both the width and the height of the current block are 4 or greater, or the color component of the current block is a chrominance component (i.e., the current block is a chrominance block).
- the above condition 1 relates to the size of the current block.
- when the current block is a chrominance block, the above condition 1 is satisfied regardless of the size of the current block.
- when the current block is a luminance block and the current block has a size of 4x4 or more, the above condition 1 is satisfied.
- the color component of the current block may be represented by cIdx. For example, if cIdx is 0, it may indicate that the current block is a luminance component block, and if cIdx is not 0, it may indicate that the current block is a chrominance component block.
- the above condition 2 relates to a reference sample line used for intra prediction.
- when the current block is a chrominance block, the above condition 2 is satisfied regardless of the reference sample line used.
- when the current block is a luminance block and the first reference sample line is used for intra prediction of the current block, the above condition 2 is satisfied.
- Condition 3 above relates to whether BDPCM is applied to the current block.
- the condition 3 may be determined based on the BdpcmFlag of the current block. For example, when BdpcmFlag of the current block is 0, it may indicate that BDPCM is not applied to the current block.
- the BdpcmFlag of the current block may be determined based on a value signaled from the bitstream.
- when the current block is a luminance block, a value of BdpcmFlag may be derived based on the signaled intra_bdpcm_luma_flag.
- when the current block is a chrominance (chroma component) block, a value of BdpcmFlag may be derived based on the signaled intra_bdpcm_chroma_flag.
- the above condition 4 relates to the intra prediction mode of the current block.
- when the intra prediction mode of the current block is 1) the PLANAR mode, 2) the DC mode, 3) a directional mode whose mode number is equal to or less than 18, or 4) a directional mode whose mode number is equal to or greater than 50 and smaller than LT_CCLM, the PDPC may be applied to the current block.
- Table 5 is a table in which condition 1 among the PDPC application conditions according to the embodiment shown in FIG. 16 is arranged according to the color component of the current block.
- (Table 5) For a luminance component block, condition 1 is: width >= 4 && height >= 4. For a chrominance component block: no condition.
- when the current block is a luminance block, condition 1 is satisfied when both the width and the height of the current block are greater than or equal to a predetermined threshold value of 4.
- the predetermined threshold value 4 may be replaced with MIN_TB_SIZEY.
- MIN_TB_SIZEY may indicate the minimum transform block (TB) size for the luma component, and the value may be predetermined or signaled from the encoding device to the decoding device.
- MIN_TB_SIZEY may be 4.
- when the current block is a chrominance block, the condition 1 of FIG. 16 is always satisfied. That is, the condition regarding the size of the current block does not apply. As such, the condition regarding the size of the current block is applied only when the current block is a luminance block, and not when it is a chrominance block. As a result, among the PDPC application conditions, the condition regarding the size of the current block may be applied differently depending on the color component of the current block.
- when the current block is a chrominance block and its size is 2x2, 2x4, 4x2, or 2xN, intra prediction for the current chrominance block is not performed. Therefore, if the current block is a chrominance block having any of the above sizes, the PDPC is also not performed.
- however, when the current block is an Nx2 chrominance block, since intra prediction may be performed, the PDPC may also be performed. Accordingly, when the current block is an Nx2 block, the PDPC is not performed in intra prediction of a luminance block, whereas the PDPC may be performed in intra prediction of a chrominance block.
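The FIG. 16-style applicability check described above can be sketched as follows. The function names are hypothetical, and the LT_CCLM mode number (81) is an assumption not stated in this excerpt; the point is that the size condition applies only to the luminance block (cIdx equal to 0), so an Nx2 chrominance block still passes condition 1.

```python
def pdpc_condition1(width, height, c_idx):
    # Condition 1: size restriction applies only to the luma component.
    if c_idx != 0:           # chrominance component: no size restriction
        return True
    return width >= 4 and height >= 4

def pdpc_applicable(width, height, c_idx, ref_idx, bdpcm_flag, mode, LT_CCLM=81):
    cond1 = pdpc_condition1(width, height, c_idx)
    cond2 = (ref_idx == 0) or (c_idx != 0)   # first reference sample line, or chroma
    cond3 = (bdpcm_flag == 0)                # BDPCM not applied to the current block
    # Condition 4: PLANAR (0), DC (1), directional mode <= 18, or
    # directional mode >= 50 and smaller than LT_CCLM.
    cond4 = mode in (0, 1) or (2 <= mode <= 18) or (50 <= mode < LT_CCLM)
    return cond1 and cond2 and cond3 and cond4
```

Under this check an 8x2 chrominance block in planar mode passes, while the same-size luminance block fails condition 1, reproducing the asymmetry discussed above.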
- FIG. 17 is a diagram illustrating a PDPC application condition according to another embodiment of the present disclosure.
- the block size condition may be equally applied to the luminance block and the chrominance block.
- when the size of the current chrominance block is Nx2, a method of not performing the PDPC may be provided.
- PDPC may be applied to the intra-predicted prediction block of the current block.
- condition 1-1 relates to the size of the current block.
- condition 1-1 is satisfied if the current block has a size of 4x4 or greater, irrespective of the color component of the current block. That is, when both the width and the height of the current block are equal to or greater than a predetermined threshold value (e.g. 4), the above condition 1-1 is satisfied.
- a predetermined threshold value e.g. 4
- the width or height of the current block is smaller than a predetermined threshold, it may be determined that the condition 1-1 is not satisfied regardless of the color component of the current block. Therefore, according to the embodiment shown in FIG. 17 , in order to determine whether the condition 1-1 is satisfied, the color component of the current block may be skipped without determining whether the color component is a luminance component or a color difference component.
- Among the PDPC application conditions, condition 1-1 regarding the size of the current block is commonly applied to the luminance block and the chrominance block, whereby the problem of applying PDPC to an Nx2 chrominance block can be solved.
- The embodiment shown in FIG. 17 is technically characterized in that condition 1-1 regarding the size of the current block is commonly applied to the luminance component block and the chrominance component block. Accordingly, conditions 2 to 4, other than condition 1-1, may be wholly or partly changed from those illustrated in FIG. 17.
- the PDPC application condition changed in this way is also a modification of the embodiment shown in FIG. 17 and may be included in the scope of the invention according to the present disclosure.
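Condition 1-1 of FIG. 17 reduces to a single size check shared by both color components. A minimal sketch (hypothetical function name; only this one condition is modeled):

```python
def pdpc_condition_1_1(width: int, height: int, threshold: int = 4) -> bool:
    """Condition 1-1 of FIG. 17: one size check for luma and chroma alike.

    The color component is never examined, so an Nx2 chroma block now
    fails the check exactly as an Nx2 luma block does.
    """
    return width >= threshold and height >= threshold
```

This is the unification the embodiment describes: the branch on the color component disappears from the size check entirely.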
- FIG. 18 is a diagram illustrating a PDPC application condition according to another embodiment of the present disclosure.
- the condition regarding the size of the block and the condition regarding the reference sample line may be equally applied to the luminance block and the chrominance block.
- the size of the current chrominance block is Nx2
- a method of not performing PDPC may be provided.
- the reference sample line used for intra prediction is not the first reference sample line
- a method of not performing PDPC may be provided.
- PDPC may be applied to the intra-predicted prediction block of the current block.
- Since condition 1-1 is the same as described with reference to FIG. 17, a redundant description thereof is omitted.
- condition 2-1 relates to a reference sample line used for intra prediction.
- According to condition 2-1, when intra prediction is performed using the first reference sample line adjacent to the current block, condition 2-1 is satisfied regardless of the color component of the current block. That is, condition 2-1 is satisfied when the first reference sample line is used for intra prediction of the current block.
- When the first reference sample line is not used for intra prediction of the current block, it may be determined that condition 2-1 is not satisfied, regardless of the color component of the current block. Accordingly, according to the embodiment illustrated in FIG. 18, the determination of whether the color component of the current block is a luminance component or a chrominance component may be skipped when determining whether condition 2-1 is satisfied.
- Since condition 3 and condition 4 are the same as those described with reference to FIG. 16, a redundant description is omitted.
- Among the PDPC application conditions, condition 1-1 regarding the size of the current block is commonly applied to the luminance block and the chrominance block, whereby the problem of applying PDPC to an Nx2 chrominance block can be solved. Also, among the PDPC application conditions, condition 2-1 regarding the reference sample line used for intra prediction of the current block is commonly applied to the luminance block and the chrominance block, so that the PDPC application conditions can be unified.
- condition 1-1 regarding the size of the current block and condition 2-1 regarding the reference sample line used for intra prediction are commonly applied to the luminance component block and the chrominance component block.
- the changed PDPC application condition is also a modified example of the embodiment shown in FIG. 18 and may be included in the scope of the invention according to the present disclosure.
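The FIG. 18 embodiment combines the two color-component-independent checks. The sketch below is illustrative only (hypothetical names); conditions 3 and 4 of the embodiment are deliberately not modeled, and `ref_line_idx == 0` is taken to denote the first reference sample line adjacent to the current block.

```python
def pdpc_applicable_fig18(width: int, height: int, ref_line_idx: int,
                          threshold: int = 4) -> bool:
    """Conditions 1-1 and 2-1 of FIG. 18, both independent of color component."""
    if width < threshold or height < threshold:   # condition 1-1 (block size)
        return False
    if ref_line_idx != 0:                         # condition 2-1 (reference line)
        return False
    return True  # remaining checks (conditions 3 and 4) not modeled here
```

Either failing check short-circuits to "PDPC not applied" without ever consulting whether the block is luma or chroma.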
- FIG. 19 is a flowchart illustrating a method of generating a prediction block according to an embodiment of the present disclosure.
- Each step of FIG. 19 may be performed as part of step S800 of FIG. 8, which is performed by the image encoding apparatus.
- Each step of FIG. 19 may be performed as part of step S1020 of FIG. 10, which is performed by the image decoding apparatus.
- a prediction block of the current block may be generated based on neighboring reference samples of the current block and the intra prediction mode (S1910). Since the prediction block generated in step S1910 may be modified according to whether or not PDPC is applied, it may be referred to as a temporary prediction block or a primary prediction block. In addition, the prediction block generated as a result of the application of PDPC may be simply referred to as a prediction block or a final prediction block.
- Step S1920 may be performed by checking whether the PDPC application condition is satisfied.
- PDPC application conditions according to the present disclosure are as described with reference to FIGS. 17 and 18 . However, as described above, the PDPC application conditions according to the present disclosure are not limited to the examples of FIGS. 17 and 18 , and various modifications of the PDPC application conditions may be included in the protection scope of the present disclosure.
- If it is determined in step S1920 that the PDPC application condition is not satisfied, PDPC is not performed, and the temporary prediction block (primary prediction block) generated in step S1910 may be used as the final prediction block of the current block.
- If it is determined in step S1920 that the PDPC application condition is satisfied, PDPC may be performed (S1930). In this case, the final prediction block of the current block may be generated by performing PDPC on the temporary prediction block (primary prediction block) generated in step S1910.
- the PDPC of step S1930 may be performed, for example, according to the above-described PDPC method.
- the final prediction block of the current block generated according to the method of FIG. 19 may be used to generate a residual block of the current block (S810) or to reconstruct the current block together with the residual block of the current block (S1040).
- the determination of the color component of the current block can be skipped and the process of determining whether to apply the PDPC can be simplified.
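The S1910/S1920/S1930 flow of FIG. 19 can be summarized in a small sketch. The callables below are placeholders standing in for the intra prediction, condition check, and PDPC stages described in the text; they are not part of any specification.

```python
def generate_final_prediction(intra_predict, pdpc_condition_met, apply_pdpc):
    """Flow of FIG. 19: S1910 -> S1920 -> (S1930).

    intra_predict() returns the temporary (primary) prediction block;
    apply_pdpc() modifies it when the application condition holds.
    """
    temp_block = intra_predict()          # S1910: temporary prediction block
    if pdpc_condition_met():              # S1920: PDPC application condition
        return apply_pdpc(temp_block)     # S1930: final prediction block
    return temp_block                     # temporary block used as final
```

When the condition fails, the temporary block itself becomes the final prediction block, matching the description of step S1920 above.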
- Example methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which the steps are performed, and if necessary, each step may be performed simultaneously or in a different order.
- Other steps may be included in addition to the illustrated steps, some steps may be excluded while the remaining steps are included, or additional steps may be included while some steps are excluded.
- An image encoding apparatus or an image decoding apparatus performing a predetermined operation may perform an operation (step) of confirming a condition or situation for performing the corresponding operation (step). For example, if it is stated that a predetermined operation is performed when a predetermined condition is satisfied, the image encoding apparatus or the image decoding apparatus may perform an operation to check whether the predetermined condition is satisfied and then perform the predetermined operation.
- various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
- In the case of implementation by hardware, the various embodiments may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, and the like.
- The image decoding apparatus and the image encoding apparatus to which the embodiments of the present disclosure are applied may be included in a multimedia broadcasting transceiver, a mobile communication terminal, a home cinema video apparatus, a digital cinema video apparatus, a surveillance camera, a video conversation apparatus, a real-time communication apparatus such as a video communication apparatus, a mobile streaming device, a storage medium, a camcorder, a video on demand (VoD) service providing device, an over-the-top (OTT) video device, an internet streaming service providing device, a three-dimensional (3D) video device, a video telephony video device, a medical video device, and the like, and may be used to process a video signal or a data signal.
- the OTT video (Over the top video) device may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smart phone, a tablet PC, a digital video recorder (DVR), and the like.
- FIG. 20 is a diagram illustrating a content streaming system to which an embodiment of the present disclosure can be applied.
- the content streaming system to which the embodiment of the present disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
- the encoding server generates a bitstream by compressing content input from multimedia input devices such as a smart phone, a camera, a camcorder, etc. into digital data and transmits it to the streaming server.
- multimedia input devices such as a smartphone, a camera, a camcorder, etc. directly generate a bitstream
- the encoding server may be omitted.
- the bitstream may be generated by an image encoding method and/or an image encoding apparatus to which an embodiment of the present disclosure is applied, and the streaming server may temporarily store the bitstream in a process of transmitting or receiving the bitstream.
- The streaming server transmits multimedia data to the user device based on a user request through the web server, and the web server serves as an intermediary informing the user of available services.
- When the user requests a desired service from the web server, the web server transmits the request to the streaming server, and the streaming server may transmit multimedia data to the user.
- the content streaming system may include a separate control server.
- the control server may serve to control commands/responses between devices in the content streaming system.
- the streaming server may receive content from a media repository and/or an encoding server. For example, when receiving content from the encoding server, the content may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
- Examples of the user device include a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a watch-type terminal (smartwatch), a glass-type terminal (smart glass), a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
- Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributed and processed.
- The scope of the present disclosure includes software or machine-executable instructions (e.g., an operating system, an application, firmware, a program, etc.) that cause operation according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium in which such software or instructions are stored and executable on a device or computer.
- An embodiment according to the present disclosure may be used to encode/decode an image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
MttSplitMode | mtt_split_cu_vertical_flag | mtt_split_cu_binary_flag |
---|---|---|
SPLIT_TT_HOR | 0 | 0 |
SPLIT_BT_HOR | 0 | 1 |
SPLIT_TT_VER | 1 | 0 |
SPLIT_BT_VER | 1 | 1 |
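The MttSplitMode table above maps the two signaled flags to a split mode, and can be transcribed directly as a lookup (illustrative constant name):

```python
# Direct transcription of the MttSplitMode table: the pair
# (mtt_split_cu_vertical_flag, mtt_split_cu_binary_flag) selects the split.
MTT_SPLIT_MODE = {
    (0, 0): "SPLIT_TT_HOR",
    (0, 1): "SPLIT_BT_HOR",
    (1, 0): "SPLIT_TT_VER",
    (1, 1): "SPLIT_BT_VER",
}
```

The vertical flag selects the split direction and the binary flag selects binary versus ternary splitting.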
Intra prediction mode | Associated name |
---|---|
0 | INTRA_PLANAR |
1 | INTRA_DC |
2..66 | INTRA_ANGULAR2..INTRA_ANGULAR66 |
81..83 | INTRA_LT_CCLM, INTRA_L_CCLM, INTRA_T_CCLM |
Prediction mode | wT | wL | wTL |
---|---|---|---|
Top-right diagonal mode | 16 >> ( ( y<<1 ) >> shift ) | 16 >> ( ( x<<1 ) >> shift ) | 0 |
Bottom-left diagonal mode | 16 >> ( ( y<<1 ) >> shift ) | 16 >> ( ( x<<1 ) >> shift ) | 0 |
Modes adjacent to the top-right diagonal mode | 32 >> ( ( y<<1 ) >> shift ) | 0 | 0 |
Modes adjacent to the bottom-left diagonal mode | 0 | 32 >> ( ( x<<1 ) >> shift ) | 0 |
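The weight formulas in the table above can be evaluated as a small sketch. The function name is hypothetical; `shift` is simply taken as a parameter here (in practice it is derived from the block dimensions, which this sketch does not model), and only the two diagonal modes, for which wTL is 0, are covered.

```python
def pdpc_diagonal_weights(x: int, y: int, shift: int):
    """wT/wL for the top-right and bottom-left diagonal modes (wTL = 0),
    per the weight table above."""
    wT = 16 >> ((y << 1) >> shift)
    wL = 16 >> ((x << 1) >> shift)
    return wT, wL, 0
```

With shift = 2, the sample at (0, 0) gets the full weights (16, 16), and the weights decay as x and y grow, so reference samples influence mainly the prediction samples near the block boundary.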
intra_luma_ref_idx[ x0 ][ y0 ] | IntraLumaRefLineIdx[ x0 ][ y0 ] |
---|---|
0 | 0 |
1 | 1 |
2 | 2 or 3 |
 | Luminance component block | Chrominance component block |
---|---|---|
Condition 1 | width ≥ 4 && height ≥ 4 | No condition |
Claims (15)
- An image decoding method performed by an image decoding apparatus, the image decoding method comprising: generating a prediction block by performing intra prediction on a current block; determining whether to apply PDPC to the prediction block; and generating a final prediction block of the current block by applying PDPC to the prediction block based on the determination, wherein the determining whether to apply PDPC to the prediction block comprises determining whether a size of the current block satisfies a predetermined condition, wherein it is determined that PDPC is applied to the prediction block based on the size of the current block satisfying the predetermined condition, and wherein, when the size of the current block does not satisfy the predetermined condition, a determination on a color component of the current block is skipped and it is determined that PDPC is not applied to the prediction block.
- The image decoding method of claim 1, wherein the predetermined condition is that the size of the current block is equal to or greater than a predetermined threshold value.
- The image decoding method of claim 2, wherein the predetermined condition is satisfied when a width of the current block is equal to or greater than the predetermined threshold value and a height of the current block is equal to or greater than the predetermined threshold value.
- The image decoding method of claim 3, wherein the predetermined threshold value is 4.
- The image decoding method of claim 1, wherein the determining whether to apply PDPC to the prediction block further comprises a determination on a reference sample line used for intra prediction of the current block.
- The image decoding method of claim 5, wherein it is determined that PDPC is applied to the prediction block based on the reference sample line being a predetermined reference sample line, and wherein, when the reference sample line is not the predetermined reference sample line, a determination on a color component of the current block is skipped and it is determined that PDPC is not applied to the prediction block.
- The image decoding method of claim 6, wherein the predetermined reference sample line is a first reference sample line adjacent to the current block.
- The image decoding method of claim 1, wherein the determining whether to apply PDPC to the prediction block further comprises a determination on whether BDPCM is applied to the current block and a determination on an intra prediction mode of the current block.
- An image decoding apparatus comprising a memory and at least one processor, wherein the at least one processor is configured to: generate a prediction block by performing intra prediction on a current block; determine whether to apply PDPC to the prediction block; and generate a final prediction block of the current block by applying PDPC to the prediction block based on the determination, wherein the determination of whether to apply PDPC to the prediction block comprises a determination of whether a size of the current block satisfies a predetermined condition, wherein it is determined that PDPC is applied to the prediction block based on the size of the current block satisfying the predetermined condition, and wherein, when the size of the current block does not satisfy the predetermined condition, a determination on a color component of the current block is skipped and it is determined that PDPC is not applied to the prediction block.
- An image encoding method performed by an image encoding apparatus, the image encoding method comprising: generating a prediction block by performing intra prediction on a current block; determining whether to apply PDPC to the prediction block; and generating a final prediction block of the current block by applying PDPC to the prediction block based on the determination, wherein the determining whether to apply PDPC to the prediction block comprises determining whether a size of the current block satisfies a predetermined condition, wherein it is determined that PDPC is applied to the prediction block based on the size of the current block satisfying the predetermined condition, and wherein, when the size of the current block does not satisfy the predetermined condition, a determination on a color component of the current block is skipped and it is determined that PDPC is not applied to the prediction block.
- The image encoding method of claim 10, wherein the predetermined condition is that the size of the current block is equal to or greater than a predetermined threshold value.
- The image encoding method of claim 10, wherein the predetermined condition is satisfied when a width of the current block is equal to or greater than the predetermined threshold value and a height of the current block is equal to or greater than the predetermined threshold value.
- The image encoding method of claim 10, wherein the determining whether to apply PDPC to the prediction block further comprises a determination on a reference sample line used for intra prediction of the current block.
- The image encoding method of claim 13, wherein it is determined that PDPC is applied to the prediction block based on the reference sample line being a predetermined reference sample line, and wherein, when the reference sample line is not the predetermined reference sample line, a determination on a color component of the current block is skipped and it is determined that PDPC is not applied to the prediction block.
- A method of transmitting a bitstream generated by the image encoding method of claim 10.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022539196A JP2023508178A (ja) | 2019-12-26 | 2020-12-24 | Pdpcを行う画像符号化/復号化方法、装置、及びビットストリームを伝送する方法 |
BR112022012747A BR112022012747A2 (pt) | 2019-12-26 | 2020-12-24 | Aparelho e método de codificação/decodificação de vídeo para executar pdpc e método para transmissão de fluxo de bits |
EP20908208.0A EP4084477A4 (en) | 2019-12-26 | 2020-12-24 | IMAGE CODING/DECODING METHOD AND DEVICE FOR PERFORMING PDPC AND METHOD FOR TRANSMITTING A BIT STREAM |
CN202080096308.4A CN115088261A (zh) | 2019-12-26 | 2020-12-24 | 执行pdpc的视频编码/解码方法和设备以及发送比特流的方法 |
MX2022007832A MX2022007832A (es) | 2019-12-26 | 2020-12-24 | Metodo y aparato de codificacion/decodificacion de video para realizar pdpc y metodo para transmitir flujo de bits. |
KR1020227016132A KR20220079974A (ko) | 2019-12-26 | 2020-12-24 | Pdpc를 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 |
US17/848,634 US11778192B2 (en) | 2019-12-26 | 2022-06-24 | Video encoding/decoding method and apparatus for performing PDPC and method for transmitting bitstream |
US18/234,710 US20230396772A1 (en) | 2019-12-26 | 2023-08-16 | Video encoding/decoding method and apparatus for performing pdpc and method for transmitting bitstream |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962953886P | 2019-12-26 | 2019-12-26 | |
US62/953,886 | 2019-12-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/848,634 Continuation US11778192B2 (en) | 2019-12-26 | 2022-06-24 | Video encoding/decoding method and apparatus for performing PDPC and method for transmitting bitstream |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021133100A1 true WO2021133100A1 (ko) | 2021-07-01 |
Family
ID=76575654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/019091 WO2021133100A1 (ko) | 2019-12-26 | 2020-12-24 | Pdpc를 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 |
Country Status (8)
Country | Link |
---|---|
US (2) | US11778192B2 (ko) |
EP (1) | EP4084477A4 (ko) |
JP (1) | JP2023508178A (ko) |
KR (1) | KR20220079974A (ko) |
CN (1) | CN115088261A (ko) |
BR (1) | BR112022012747A2 (ko) |
MX (1) | MX2022007832A (ko) |
WO (1) | WO2021133100A1 (ko) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190253706A1 (en) * | 2018-02-12 | 2019-08-15 | Tencent America LLC | Method and apparatus for using an intra prediction coding tool for intra prediction of non-square blocks in video compression |
US20190306513A1 (en) * | 2018-04-02 | 2019-10-03 | Qualcomm Incorporated | Position dependent intra prediction combination extended with angular modes |
WO2019199142A1 (ko) * | 2018-04-13 | 2019-10-17 | 엘지전자 주식회사 | 인트라 예측 방법을 결정하기 위한 영상 코딩 방법 및 장치 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112236996A (zh) * | 2018-12-21 | 2021-01-15 | 株式会社 Xris | 视频信号编码/解码方法及其装置 |
CN113826403A (zh) * | 2019-06-25 | 2021-12-21 | Oppo广东移动通信有限公司 | 信息处理方法及装置、设备、存储介质 |
-
2020
- 2020-12-24 EP EP20908208.0A patent/EP4084477A4/en active Pending
- 2020-12-24 MX MX2022007832A patent/MX2022007832A/es unknown
- 2020-12-24 JP JP2022539196A patent/JP2023508178A/ja active Pending
- 2020-12-24 CN CN202080096308.4A patent/CN115088261A/zh active Pending
- 2020-12-24 WO PCT/KR2020/019091 patent/WO2021133100A1/ko unknown
- 2020-12-24 KR KR1020227016132A patent/KR20220079974A/ko unknown
- 2020-12-24 BR BR112022012747A patent/BR112022012747A2/pt unknown
-
2022
- 2022-06-24 US US17/848,634 patent/US11778192B2/en active Active
-
2023
- 2023-08-16 US US18/234,710 patent/US20230396772A1/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190253706A1 (en) * | 2018-02-12 | 2019-08-15 | Tencent America LLC | Method and apparatus for using an intra prediction coding tool for intra prediction of non-square blocks in video compression |
US20190306513A1 (en) * | 2018-04-02 | 2019-10-03 | Qualcomm Incorporated | Position dependent intra prediction combination extended with angular modes |
WO2019199142A1 (ko) * | 2018-04-13 | 2019-10-17 | 엘지전자 주식회사 | 인트라 예측 방법을 결정하기 위한 영상 코딩 방법 및 장치 |
Non-Patent Citations (3)
Title |
---|
BENJAMIN BROSS , JIANLE CHEN , SHAN LIU , YE-KUI WANG: "Versatile Video Coding (Draft 7)", 16. JVET MEETING; 20191001 - 20191011; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-P2001-vE; m51515, 14 November 2019 (2019-11-14), pages 1 - 489, XP030224330 * |
S. DE-LUXÁN-HERNÁNDEZ (FRAUNHOFER), V. GEORGE, G. VENUGOPAL, J. BRANDENBURG, B. BROSS, T. NGUYEN, H. SCHWARZ, D. MARPE, T. WIEGAND: "Non-CE3: Proposed ISP cleanup", 15. JVET MEETING; 20190703 - 20190712; GOTHENBURG; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-O0502 ; m48627, 5 July 2019 (2019-07-05), XP030219748 * |
See also references of EP4084477A4 * |
Also Published As
Publication number | Publication date |
---|---|
BR112022012747A2 (pt) | 2022-09-06 |
KR20220079974A (ko) | 2022-06-14 |
MX2022007832A (es) | 2022-08-04 |
CN115088261A (zh) | 2022-09-20 |
JP2023508178A (ja) | 2023-03-01 |
EP4084477A1 (en) | 2022-11-02 |
US20230396772A1 (en) | 2023-12-07 |
US20220353507A1 (en) | 2022-11-03 |
EP4084477A4 (en) | 2023-12-20 |
US11778192B2 (en) | 2023-10-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020218793A1 (ko) | Bdpcm에 기반한 영상 코딩 방법 및 그 장치 | |
WO2020009556A1 (ko) | 변환에 기반한 영상 코딩 방법 및 그 장치 | |
WO2020213944A1 (ko) | 영상 코딩에서 매트릭스 기반의 인트라 예측을 위한 변환 | |
WO2020213946A1 (ko) | 변환 인덱스를 이용하는 영상 코딩 | |
WO2020231140A1 (ko) | 적응적 루프 필터 기반 비디오 또는 영상 코딩 | |
WO2020180119A1 (ko) | Cclm 예측에 기반한 영상 디코딩 방법 및 그 장치 | |
WO2019235822A1 (ko) | 어파인 움직임 예측을 이용하여 비디오 신호를 처리하는 방법 및 장치 | |
WO2021015537A1 (ko) | 팔레트 모드의 적용 여부에 따라 크로마 성분 예측 정보를 시그널링 하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020167097A1 (ko) | 영상 코딩 시스템에서 인터 예측을 위한 인터 예측 타입 도출 | |
WO2021029744A1 (ko) | 루마 샘플 위치를 참조하여 크로마 블록의 예측 모드를 결정하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020204419A1 (ko) | 적응적 루프 필터 기반 비디오 또는 영상 코딩 | |
WO2020180143A1 (ko) | 루마 맵핑 및 크로마 스케일링 기반 비디오 또는 영상 코딩 | |
WO2021054807A1 (ko) | 참조 샘플 필터링을 이용하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2021040398A1 (ko) | 팔레트 이스케이프 코딩 기반 영상 또는 비디오 코딩 | |
WO2020256506A1 (ko) | 다중 참조 라인 인트라 예측을 이용한 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020235960A1 (ko) | Bdpcm 에 대한 영상 디코딩 방법 및 그 장치 | |
WO2020197274A1 (ko) | 변환에 기반한 영상 코딩 방법 및 그 장치 | |
WO2020185047A1 (ko) | 인트라 예측을 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020180122A1 (ko) | 조건적으로 파싱되는 alf 모델 및 리셰이핑 모델 기반 비디오 또는 영상 코딩 | |
WO2019199093A1 (ko) | 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 | |
WO2021034116A1 (ko) | 크로마 양자화 파라미터를 사용하는 영상 디코딩 방법 및 그 장치 | |
WO2020242183A1 (ko) | 광각 인트라 예측 및 변환에 기반한 영상 코딩 방법 및 그 장치 | |
WO2020184966A1 (ko) | 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2020184928A1 (ko) | 루마 맵핑 및 크로마 스케일링 기반 비디오 또는 영상 코딩 | |
WO2020130581A1 (ko) | 이차 변환에 기반한 영상 코딩 방법 및 그 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20908208 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20227016132 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2022539196 Country of ref document: JP Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112022012747 Country of ref document: BR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2020908208 Country of ref document: EP Effective date: 20220726 |
|
ENP | Entry into the national phase |
Ref document number: 112022012747 Country of ref document: BR Kind code of ref document: A2 Effective date: 20220624 |