WO2019194485A1 - Method and apparatus for image encoding/decoding
- Publication number
- WO2019194485A1 (PCT/KR2019/003777)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- prediction
- mode
- encoding
- prediction mode
- Prior art date
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including:
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/124—Quantisation
- H04N19/129—Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—The coding unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/61—Transform coding in combination with predictive coding
Definitions
- The present invention relates to a video encoding and decoding technique and, more particularly, to a method and apparatus for encoding/decoding using intra prediction.
- An object of the present invention is to provide an intra prediction method and apparatus.
- the present invention aims to provide an intra prediction method and apparatus in sub-block units.
- an object of the present invention is to provide a method and apparatus for determining the division and coding order of sub-block units.
- An image encoding/decoding method and apparatus may construct a candidate group for the partition type of a current block, determine the partition type dividing the current block into sub-blocks based on the candidate group and a candidate index, derive an intra prediction mode of the current block, and perform intra prediction of the current block based on the intra prediction mode of the current block and the partition type of the sub-blocks.
- According to the present invention, encoding/decoding performance can be improved through intra prediction on a sub-block basis.
- The accuracy of prediction can be improved by efficiently constructing a candidate group for the partition type in sub-block units.
- The encoding/decoding efficiency of intra prediction may be improved by adaptively applying the coding order in units of sub-blocks.
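The selection mechanism summarized above, a candidate group of sub-block partition types plus a signalled candidate index, can be sketched as follows. This is an illustrative sketch only: the partition-type names and the size thresholds used to build the candidate group are assumptions, not the patent's normative rules.

```python
# Illustrative decoder-side selection of a sub-block partition type from a
# candidate group using a parsed candidate index. Names and thresholds are
# hypothetical.

def build_partition_candidates(width, height):
    """Build a candidate group of partition types for a width x height block.

    Horizontal splits are allowed only when the block is tall enough, and
    vertical splits only when it is wide enough (assumed rule).
    """
    candidates = ["NO_SPLIT"]          # keep the block as a single unit
    if height >= 8:
        candidates.append("HOR_2")     # split into 2 horizontal sub-blocks
    if width >= 8:
        candidates.append("VER_2")     # split into 2 vertical sub-blocks
    if height >= 16:
        candidates.append("HOR_4")     # split into 4 horizontal sub-blocks
    if width >= 16:
        candidates.append("VER_4")     # split into 4 vertical sub-blocks
    return candidates

def select_partition(width, height, candidate_index):
    """Pick the partition type signalled by candidate_index."""
    candidates = build_partition_candidates(width, height)
    return candidates[candidate_index]
```

Note how the candidate group, and therefore the meaning of a given index, adapts to the block size, which is what lets a short index cover many partition shapes.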
- FIG. 1 is a conceptual diagram of an image encoding and decoding system according to an embodiment of the present invention.
- FIG. 2 is a block diagram of a video encoding apparatus according to an embodiment of the present invention.
- FIG. 3 is a block diagram of an image decoding apparatus according to an embodiment of the present invention.
- FIG. 4 is an exemplary diagram illustrating various partition types that may be obtained in a block partition unit of the present invention.
- FIG. 5 is an exemplary diagram illustrating an intra prediction mode according to an embodiment of the present invention.
- FIG. 6 is an exemplary diagram for describing a configuration of a reference pixel used for intra prediction according to an embodiment of the present invention.
- FIG. 7 is a conceptual diagram illustrating a block adjacent to a target block of intra prediction according to an embodiment of the present invention.
- FIG. 8 illustrates various types of divisions of subblocks obtainable based on coding blocks.
- FIG. 9 is an exemplary diagram of a reference pixel area used based on an intra prediction mode according to an embodiment of the present invention.
- FIG. 10 illustrates an example of an encoding sequence that may be provided in a prediction mode in a diagonal up right direction according to an embodiment of the present invention.
- FIG. 11 illustrates an example of an encoding sequence that may be included in a horizontal mode according to an embodiment of the present invention.
- FIG. 12 illustrates an example of an encoding sequence that may be provided in a prediction mode in a diagonal down right direction according to an embodiment of the present invention.
- FIG. 13 illustrates an example of an encoding sequence that may be included in a vertical mode according to an embodiment of the present invention.
- FIG. 14 illustrates an example of an encoding sequence that may be provided in a mode of a diagonal down left direction according to an embodiment of the present invention.
- FIG. 15 is an exemplary diagram for an encoding sequence considering an intra prediction mode and a split form according to an embodiment of the present invention.
- Terms such as first, second, A, and B may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another. For example, the first component may be referred to as the second component and, similarly, the second component may be referred to as the first component.
- One or more color spaces may be configured according to the color format of an image; depending on the color format, the image may consist of one or more pictures of a certain size or of different sizes.
- For example, color formats such as 4:4:4, 4:2:2, 4:2:0, and monochrome (Y only) may be supported.
- In YCbCr 4:2:0, one luma component (Y in this example) and two chroma components (Cb/Cr in this example) may be configured, and the width and height of each chroma component may be half those of the luma component (a 1:2 ratio).
- In 4:4:4, all components may have the same width and height.
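As a concrete illustration of the ratios above, the following helper (the function name is ours) computes the chroma plane dimensions from the luma dimensions for each color format:

```python
# Chroma plane size per color format, following the common convention:
# 4:2:0 halves both dimensions, 4:2:2 halves only the width,
# 4:4:4 keeps both, and monochrome has no chroma planes.

def chroma_plane_size(luma_width, luma_height, color_format):
    if color_format == "4:4:4":
        return luma_width, luma_height
    if color_format == "4:2:2":
        return luma_width // 2, luma_height
    if color_format == "4:2:0":
        return luma_width // 2, luma_height // 2
    if color_format == "monochrome":
        return 0, 0                     # Y only, no chroma planes
    raise ValueError(f"unsupported color format: {color_format}")
```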
- Images can be classified into I, P, B, etc. according to the image type (e.g., picture type, slice type, tile type).
- The I image type is an image that is encoded/decoded by itself without using a reference picture.
- The P image type may mean an image that is encoded/decoded using a reference picture but allows only unidirectional prediction.
- The B image type may mean an image that is encoded/decoded using reference pictures and allows bidirectional prediction; some of the types may be combined (P and B combined) or other image types may be supported according to the encoding/decoding setting.
- The explicit processing generates, for a piece of encoding information, selection information indicating one candidate among a plurality of candidates in a unit such as a sequence, slice, tile, block, or sub-block, and records it in a bitstream; the decoder parses the related information in the same unit as the encoder to restore the decoded information.
- The implicit processing handles the information with the same process, rule, and the like in both the encoder and the decoder.
- FIG. 1 is a conceptual diagram of an image encoding and decoding system according to an embodiment of the present invention.
- The image encoding apparatus 105 and the image decoding apparatus 100 may be a user terminal such as a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), a wireless communication terminal, a smart phone, or a TV, or a server terminal such as an application server or a service server.
- Each may be implemented as one of various devices including a communication device such as a communication modem for communicating with a wired/wireless communication network, a memory (120, 125) for storing various programs and data for inter or intra prediction used to encode or decode an image, and a processor (110, 115) for executing the programs to perform computation and control.
- An image encoded into a bitstream by the image encoding apparatus 105 may be transmitted to the image decoding apparatus 100 in real time or non-real time through a wired/wireless communication network such as the Internet, a local area wireless network, a wireless LAN, a WiBro network, or a mobile communication network, or through various communication interfaces such as a cable or a universal serial bus (USB), and may be decoded by the image decoding apparatus 100 to restore and reproduce the image.
- An image encoded into a bitstream by the image encoding apparatus 105 may also be transferred from the image encoding apparatus 105 to the image decoding apparatus 100 through a computer-readable recording medium.
- The above-described image encoding apparatus and image decoding apparatus may be separate apparatuses, or may be made into a single image encoding/decoding apparatus depending on the implementation.
- In that case, some components of the image encoding apparatus are substantially the same technical elements as some components of the image decoding apparatus, and may be implemented to include at least the same structure or to perform at least the same function.
- Since the image decoding apparatus corresponds to a computing device that applies, to decoding, the image encoding method performed by the image encoding apparatus, the following description will focus on the image encoding apparatus.
- the computing device may include a memory for storing a program or software module for implementing an image encoding method and / or an image decoding method, and a processor connected to the memory and executing a program.
- the image encoding apparatus may be referred to as an encoder
- the image decoding apparatus may be referred to as a decoder.
- FIG. 2 is a block diagram of a video encoding apparatus according to an embodiment of the present invention.
- The image encoding apparatus 20 may include a predictor 200, a subtractor 205, a transformer 210, a quantizer 215, an inverse quantizer 220, an inverse transformer 225, an adder 230, a filter 235, an encoded picture buffer 240, and an entropy encoder 245.
- The prediction unit 200 may be implemented using a prediction module, which is a software module, and may generate a prediction block through intra prediction or inter prediction for a block to be encoded.
- The prediction unit 200 may generate a prediction block by predicting the current block to be encoded in the image; in other words, it predicts the pixel value of each pixel of the current block according to intra prediction or inter prediction and generates a prediction block from the predicted pixel values.
- The prediction unit 200 may transmit the information necessary for generating the prediction block, such as information about the prediction mode (an intra prediction mode or an inter prediction mode), to the encoding unit so that the encoding unit encodes the information about the prediction mode.
- The processing unit on which prediction is performed, and the processing unit on which the prediction method and its details are determined, may be set according to the encoding/decoding setting. For example, the prediction method, the prediction mode, etc. may be determined in prediction units, while the prediction itself may be performed in transform units.
- The inter prediction unit may be classified into a translational motion model and a non-translational motion model according to the motion prediction method.
- In the translational motion model, prediction is performed considering only parallel translation, whereas in the non-translational motion model, prediction may consider not only translation but also motion such as rotation, perspective, and zoom in/out. Assuming unidirectional prediction, one motion vector may be required for the translational motion model, while two or more motion vectors may be required for the non-translational motion model.
- In the non-translational case, each motion vector may be information applied to a predetermined position of the current block, such as the top-left vertex and the top-right vertex of the current block, and the position of the area of the current block to be predicted may be obtained through the corresponding motion vectors in pixel units or sub-block units.
- The inter prediction unit may apply some of the processes described below in common regardless of the motion model, and some processes individually per model.
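The role of multiple control-point motion vectors in a non-translational model can be illustrated with the widely used 4-parameter affine model, in which motion vectors at the top-left and top-right corners determine a motion vector for each sub-block position. The equations below are the standard 4-parameter form shown for illustration, not necessarily the patent's exact model.

```python
# Derive a per-sub-block motion vector from two control-point motion
# vectors, as in a common 4-parameter (rotation/zoom) affine motion model.

def affine_subblock_mv(v0, v1, block_width, x, y):
    """v0, v1: (mvx, mvy) at the top-left and top-right corners.
    (x, y): sub-block centre position relative to the top-left corner.
    Returns the motion vector applied at that sub-block."""
    ax = (v1[0] - v0[0]) / block_width   # horizontal gradient of mv_x
    ay = (v1[1] - v0[1]) / block_width   # horizontal gradient of mv_y
    mvx = v0[0] + ax * x - ay * y
    mvy = v0[1] + ay * x + ax * y
    return mvx, mvy
```

When the two control-point vectors are equal, every sub-block receives the same vector and the model degenerates to pure translation, which is why one vector suffices in the translational case.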
- the inter prediction unit may include a reference picture configuration unit, a motion estimation unit, a motion compensator, a motion information determiner, and a motion information encoder.
- The reference picture configuration unit may include, in the reference picture lists L0 and L1, pictures encoded before or after the current picture, and a prediction block may be obtained from a reference picture included in a reference picture list. Depending on the encoding setting, the current picture may also be configured as a reference picture and included in at least one of the reference picture lists.
- The reference picture configuration unit may include a reference picture interpolator and may perform an interpolation process for fractional pixels according to the interpolation precision. For example, an 8-tap DCT-based interpolation filter may be applied to the luma component, and a 4-tap DCT-based interpolation filter to the chroma components.
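The interpolation step can be sketched as follows. The 8-tap coefficients shown are the half-sample-position taps commonly used in HEVC-style DCT-based interpolation filters; the actual taps and arithmetic precision depend on the codec's setting.

```python
# Half-sample luma interpolation with an 8-tap DCT-based filter (sketch).

HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]   # taps sum to 64

def interpolate_half_pel(samples, pos):
    """Interpolate the half-sample value between samples[pos] and
    samples[pos + 1], clamping reads at the array borders."""
    acc = 0
    for i, tap in enumerate(HALF_PEL_TAPS):
        idx = pos + i - 3                      # taps centred between pos, pos+1
        idx = min(max(idx, 0), len(samples) - 1)
        acc += tap * samples[idx]
    return (acc + 32) >> 6                     # round and divide by 64
```

On a constant signal the filter reproduces the input exactly (the taps sum to 64), and on a linear ramp it returns the rounded midpoint, which is the behaviour expected of an interpolation filter.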
- The motion estimation unit searches, through a reference picture, for a block having high correlation with the current block; various methods such as a full search-based block matching algorithm (FBMA) and a three step search (TSS) may be used.
- The motion compensation unit performs the process of obtaining the prediction block from the reference picture using the motion information obtained through motion estimation.
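A minimal version of full-search block matching (FBMA) with a sum-of-absolute-differences (SAD) cost, one of the methods named above, might look like:

```python
import numpy as np

# Full-search block matching: test every displacement in a square search
# window of the reference frame and keep the one minimising the SAD
# against the current block.

def full_search(cur_block, ref_frame, top, left, search_range):
    h, w = cur_block.shape
    best = (0, 0)
    best_sad = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue                      # candidate falls outside the frame
            cand = ref_frame[y:y + h, x:x + w].astype(int)
            sad = np.abs(cur_block.astype(int) - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad
```

Exhaustive search like this is the accuracy baseline; methods such as TSS trade a small accuracy loss for far fewer SAD evaluations.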
- The motion information determination unit may perform a process of selecting the optimal motion information of the current block in the inter prediction unit, and the motion information may be encoded by a motion information encoding mode such as a skip mode, a merge mode, or a competition mode.
- The modes may be configured by combining the supported modes according to the motion model: skip mode (translational), skip mode (non-translational), merge mode (translational), merge mode (non-translational), competition mode (translational), and competition mode (non-translational) may be examples, and some of the modes may be included in the candidate group according to the encoding setting.
- In the motion information encoding modes, a prediction value of the motion information (motion vector, reference picture, prediction direction, etc.) of the current block may be obtained from at least one candidate block; when two or more candidate blocks are supported, optimal candidate selection information may be generated.
- In the skip mode and the merge mode, the prediction value may be used as the motion information of the current block as it is, and in the competition mode, difference information between the motion information of the current block and the prediction value may be generated.
- the candidate group for the motion information prediction value of the current block may be adaptive and have various configurations according to the motion information encoding mode.
- For example, the motion information of blocks spatially adjacent to the current block (e.g., the left, top, top-left, top-right, and bottom-left blocks) may be included in the candidate group, the motion information of temporally adjacent blocks may be included in the candidate group, and mixed motion information of the spatial candidates and temporal candidates may be included in the candidate group.
- The temporally adjacent block may include a block in another image corresponding to (or co-located with) the current block, and may mean a block located at the left, right, top, bottom, each corner, or the center of that block.
- The mixed motion information may mean information obtained as an average, a median, etc. of the motion information of the spatially adjacent blocks and the motion information of the temporally adjacent blocks.
- The order of inclusion in the prediction value candidate group may be determined according to a priority, and the candidate group configuration may be completed when the number of candidates (determined according to the motion information encoding mode) is filled following that priority.
- The priority may be determined in the order of motion information of spatially adjacent blocks, motion information of temporally adjacent blocks, and mixed motion information of the spatial and temporal candidates, but other modifications are also possible.
- For example, among the spatially adjacent blocks, candidates may be included in the candidate group in the order of left, top, top-right, bottom-left, and top-left block, and among the temporally adjacent blocks, candidate blocks may be included in the order of bottom-right, center, etc.
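The priority-ordered candidate-group construction described above can be sketched as follows; the pruning of unavailable or duplicate candidates is an assumption typical of such designs rather than something the text specifies.

```python
# Build a motion-information prediction candidate group in priority order,
# skipping unavailable neighbours (None) and duplicates, and stopping once
# the count required by the motion-information coding mode is reached.

def build_mv_candidates(spatial, temporal, mixed, max_candidates):
    """Each argument is a priority-ordered list of motion vectors (tuples),
    with None marking an unavailable neighbouring block."""
    candidates = []
    for mv in list(spatial) + list(temporal) + list(mixed):
        if mv is None or mv in candidates:
            continue                      # unavailable or already present
        candidates.append(mv)
        if len(candidates) == max_candidates:
            break
    return candidates
```

Because the list is filled in a fixed priority order on both sides, the encoder only needs to signal an index into it, not the motion information itself.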
- The subtraction unit 205 may generate a residual block by subtracting the prediction block from the current block. That is, the subtraction unit 205 calculates the difference between the pixel value of each pixel of the current block to be encoded and the predicted pixel value of each corresponding pixel of the prediction block generated through the prediction unit, and generates a residual block, which is a residual signal in block form.
- The subtraction unit 205 may also generate the residual block in units other than the block unit obtained through the block division unit described below.
- The transform unit 210 may transform a signal belonging to the spatial domain into a signal belonging to the frequency domain; a signal obtained through the transform process is called a transform coefficient.
- A transform block having transform coefficients may be obtained by transforming the residual block having the residual signal received from the subtraction unit; the input signal is determined according to the encoding setting and is not limited to the residual signal.
- The transform unit can transform the residual block using transform techniques such as the Hadamard transform, the discrete sine transform (DST-based transform), or the discrete cosine transform (DCT-based transform).
- At least one of the transform techniques may be supported, and at least one detailed transform technique may be supported within each transform technique; a detailed transform technique is a transform technique in which part of the basis vectors are configured differently.
- For example, in the case of the DCT, one or more detailed transform schemes of DCT-1 to DCT-8 may be supported, and in the case of the DST, one or more detailed transform schemes of DST-1 to DST-8 may be supported, and a subset of the detailed transform schemes may be configured to form a transform technique candidate group.
- For example, DCT-2, DCT-8, and DST-7 may be configured as the transform technique candidate group used to perform the transform.
- The transform can be performed in the horizontal/vertical directions. For example, a one-dimensional transform may be performed in the horizontal direction using DCT-2 and a one-dimensional transform in the vertical direction using DST-7, so that a two-dimensional transform is performed in total and the pixel values of the block are transformed into the frequency domain.
- The transform can be performed using one fixed transform technique, or by adaptively selecting the transform technique according to the encoding/decoding setting. In the adaptive case, the technique may be selected using an explicit or implicit method.
- In the explicit case, transform technique selection information, or transform technique set selection information, applied to each of the horizontal and vertical directions may be generated in a unit such as a block.
- encoding settings may be defined according to an image type (I / P / B), a color component, a size, a shape of a block, an intra prediction mode, and a predetermined transformation scheme may be selected.
- in addition, a partial transform may be omitted depending on the encoding settings. That is, one or both of the horizontal/vertical transforms may be omitted, either explicitly or implicitly.
- the transform unit may transmit information necessary to generate a transform block to the encoding unit so that the information is encoded, stored in the bitstream, and transmitted to the decoder; the decoder parses the information and may use it in the inverse transform process.
- the quantization unit 215 may quantize the input signal.
- the signal obtained through the quantization process is referred to as a quantized coefficient.
- a quantization block having quantized coefficients may be obtained by quantizing a residual block having residual transform coefficients received from the transform unit. The received signal is determined according to the encoding settings and is not limited to residual transform coefficients.
- the quantization unit may quantize the transformed residual block using a quantization technique such as dead zone uniform threshold quantization or a quantization weighting matrix, but is not limited thereto, and various other quantization techniques may be used.
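A minimal sketch of dead zone uniform threshold quantization, assuming a simple scalar quantizer with a rounding offset; the function names and the offset value are illustrative, not taken from the patent:

```python
import math

def deadzone_quantize(coeff, qstep, deadzone=0.5):
    # level = sign(c) * floor(|c| / qstep + offset). An offset below 0.5
    # widens the zero interval (the "dead zone") around zero, so small
    # coefficients are more likely to quantize to level 0.
    sign = 1 if coeff >= 0 else -1
    return sign * math.floor(abs(coeff) / qstep + deadzone)

def dequantize(level, qstep):
    # Uniform reconstruction: the decoder only needs the quantized level
    # and the step size.
    return level * qstep
```

With `deadzone=0.5` this is round-to-nearest; smaller offsets enlarge the dead zone, trading a little distortion for fewer nonzero levels to entropy-code.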
- the quantization process may be omitted according to the encoding setting.
- the quantization process (including its inverse process) may be omitted according to the encoding settings (for example, when the quantization parameter is 0, that is, in a lossless compression environment).
- the quantization process may be omitted when the compression performance through quantization is not exhibited according to the characteristics of the image.
- the region in which the quantization process is omitted within an M x N quantization block may be the entire region or a partial region (M/2 x N/2, M x N/2, M/2 x N, etc.), and the remaining region may be quantized.
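A hedged sketch of omitting quantization for a sub-region of a block; the choice of a top-left skip region and the helper name are assumptions made for illustration only:

```python
import numpy as np

def quantize_with_skip(residual, qstep, skip_region=None):
    # Quantize and dequantize an M x N residual block, optionally leaving a
    # top-left (rows, cols) sub-region untouched, i.e. the quantization
    # process is omitted (lossless) for that region.
    recon = np.round(residual / qstep) * qstep
    if skip_region is not None:
        r, c = skip_region
        recon[:r, :c] = residual[:r, :c]  # pass-through region
    return recon
```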
- the omission selection information may be determined implicitly or explicitly.
- the quantization unit may transmit information necessary for generating a quantization block to the encoding unit so that the information is encoded, stored in the bitstream, and transmitted to the decoder; the decoder parses the information and may use it in the inverse quantization process.
- the residual signal may be transformed to generate a residual block having transform coefficients while the quantization process is not performed; the quantization process may be performed without converting the residual signal into transform coefficients; or neither the transform nor the quantization process may be performed. This may be determined according to the encoder settings.
- the inverse quantization unit 220 inverse quantizes the residual block quantized by the quantization unit 215. That is, the inverse quantizer 220 inversely quantizes the quantized frequency coefficient sequence to generate a residual block having the frequency coefficient.
- the inverse transform unit 225 inversely transforms the residual block inversely quantized by the inverse quantization unit 220. That is, the inverse transformer 225 inversely transforms frequency coefficients of the inversely quantized residual block to generate a residual block having a pixel value, that is, a reconstructed residual block.
- the inverse transform unit 225 may perform the inverse transform by applying, in reverse, the transform method used in the transform unit 210.
- the adder 230 reconstructs the current block by adding the prediction block predicted by the predictor 200 and the residual block reconstructed by the inverse transform unit 225.
- the reconstructed current block may be stored as a reference picture (or a reference block) in the coded picture buffer 240 and may be used as a reference picture when encoding the next block of the current block, another block in the future, or another picture.
- the filter unit 235 may include one or more post-processing filter processes such as a deblocking filter, a sample adaptive offset (SAO), an adaptive loop filter (ALF), and the like.
- the deblocking filter may remove block distortion generated at the boundary between blocks in the reconstructed picture.
- the ALF may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the block is filtered through the deblocking filter.
- the SAO may restore the offset difference from the original image on a pixel basis with respect to the residual block to which the deblocking filter is applied.
- Such a post-processing filter may be applied to the reconstructed picture or block.
- the encoded picture buffer 240 may store a block or a picture reconstructed by the filter unit 235.
- the reconstructed block or picture stored in the encoded picture buffer 240 may be provided to the prediction unit 200 that performs intra prediction or inter prediction.
- the entropy encoder 245 scans the generated quantized frequency coefficients according to various scan methods to generate a quantized coefficient sequence, encodes it using an entropy encoding technique, and outputs the result.
- the scan pattern may be set to one of various patterns such as zigzag, diagonal, and raster.
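As an illustration of one such scan pattern, a zigzag scan order for an n x n block can be generated as follows; this follows a common convention (anti-diagonals traversed alternately, starting from the DC position), and the exact pattern used by a particular codec may differ:

```python
def zigzag_scan(n):
    # Return the (row, col) visiting order for an n x n coefficient block.
    order = []
    for s in range(2 * n - 1):
        # All positions on anti-diagonal s (row + col == s).
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:
            diag.reverse()  # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order
```

The resulting order front-loads low-frequency positions, so runs of trailing zeros after quantization compress well.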
- encoded data including encoded information transmitted from each component may be generated and output as a bitstream.
- FIG. 3 is a block diagram of an image decoding apparatus according to an embodiment of the present invention.
- the image decoding apparatus 30 may be configured to include an entropy decoder 305, a predictor 310, an inverse quantizer 315, an inverse transformer 320, an adder/subtractor 325, a filter 330, and a decoded picture buffer 335.
- the prediction unit 310 may be configured to include an intra prediction module and an inter prediction module.
- the image bitstream may be transferred to the entropy decoder 305.
- the entropy decoder 305 may decode the bitstream to obtain decoded data including quantized coefficients and decoding information to be transmitted to each component.
- the prediction unit 310 may generate a prediction block based on the data transferred from the entropy decoding unit 305.
- the reference picture list using a default construction technique may be constructed based on the reference picture stored in the decoded picture buffer 335.
- the inter prediction unit may include a reference picture construction unit, a motion compensator, and a motion information decoder; some of these processes may be performed in the same manner as in the encoder, and some may be performed in reverse.
- the inverse quantizer 315 may inverse quantize the quantized transform coefficients provided in the bitstream and decoded by the entropy decoder 305.
- the inverse transform unit 320 may generate a residual block by applying inverse transform techniques of inverse DCT, inverse integer transform, or the like to a transform coefficient.
- the inverse quantization unit 315 and the inverse transform unit 320 may be implemented in various ways, inversely performing the processes performed by the transform unit 210 and the quantization unit 215 of the image encoding apparatus 20 described above.
- for example, the same processes and inverse transforms shared with the transform unit 210 and the quantization unit 215 may be used, or information about the transform and quantization processes (for example, transform size, transform shape, quantization type, etc.) received from the image encoding apparatus 20 may be used to inversely perform the transform and quantization processes.
- the residual block that has undergone inverse quantization and inverse transform may be added to the prediction block derived by the prediction unit 310 to generate a reconstructed image block. This addition may be performed by the adder/subtractor 325.
- the filter 330 may apply a deblocking filter to the reconstructed image block to remove blocking artifacts if necessary, and other in-loop filters may additionally be used before or after the decoding process to improve video quality.
- the reconstructed and filtered image block may be stored in the decoded picture buffer 335.
- the picture encoding / decoding apparatus may further include a picture divider and a block divider.
- the picture divider divides a picture into at least one processing unit such as a color space (for example, YCbCr, RGB, XYZ, etc.), a tile, a slice, or a basic coding unit (or maximum coding unit; Coding Tree Unit, CTU).
- the block division unit may divide the basic coding unit into at least one processing unit (eg, encoding, prediction, transform, quantization, entropy, in-loop filter unit, etc.).
- the basic coding unit may be obtained by dividing a picture at regular intervals in a horizontal direction and a vertical direction. Based on this, division of tiles, slices, etc. may be performed, but is not limited thereto.
- the division unit such as the tile and the slice may be configured as an integer multiple of the basic coding block, but an exceptional case may occur in the division unit located at the image boundary. To this end, adjustment of the basic coding block size may occur.
- a picture may be divided into basic coding units and then divided into the above division units, or a picture may be divided into the division units first and then divided into basic coding units.
- a description will be given on the assumption that the division and division order of each unit is the former, but the present invention is not limited thereto.
- the size of the basic coding unit may be set adaptively according to the division unit (tile, etc.). That is, a basic coding block having a different size for each division unit may be supported.
- the default setting may mean that the picture is not divided into tiles or slices or the picture is one tile or one slice.
- in the case where each division unit (tile, slice, etc.) is first partitioned and then divided into basic coding units based on the obtained units (that is, when each division unit is not an integer multiple of the basic coding unit, etc.), it should be understood that the various embodiments may be applied in the same or a modified form.
- a slice may be composed of a bundle of at least one consecutive block according to a scan pattern, and a tile may be composed of a rectangular bundle of spatially adjacent blocks; other additional division units may be supported and constructed according to their definitions.
- the slice and the tile may be divided units supported for the purpose of parallel processing, and for this purpose, references between the divided units may be limited (that is, cannot be referred to).
- a slice may generate, as split information, information about the start position of each unit of consecutive blocks, and a tile may generate split information about horizontal and vertical split lines or tile position information (for example, upper-left, upper-right, lower-left, and lower-right positions).
- the slice and the tile may be divided into a plurality of units according to the encoding / decoding.
- some units (A) may be units containing setting information that affects the encoding/decoding process (that is, including a tile header or a slice header), and some units (B) may be units that do not include setting information.
- alternatively, some units (A) may be units that cannot refer to other units in the encoding/decoding process, and some units (B) may be units that can be referred to.
- in addition, some units (A) may be in a hierarchical relationship including other units (B), or some units (A) may be in an equivalent relationship with other units (B).
- here, A and B may be a slice and a tile (or a tile and a slice).
- alternatively, A and B may each be composed of one of slices or tiles.
- for example, A may be a slice/tile (type 1) and B may be a slice/tile (type 2).
- type 1 and type 2 may each be one slice or tile.
- alternatively, type 1 may be a plurality of slices or tiles (a slice group or a tile group) including type 2, and type 2 may be one slice or tile.
- as described above, A and B are examples of properties that a division unit may have, and examples in which the A and B of each example are combined are also possible.
- the block splitter may obtain information about a basic coding unit from the picture splitter, and the basic coding unit may mean a basic (or start) unit for prediction, transform, and quantization in an image encoding / decoding process.
- the basic coding unit may be composed of one luma basic coding block (or maximum coding block; Coding Tree Block, CTB) and two chroma basic coding blocks according to the color format (YCbCr in this example), and the size of each block may be determined accordingly.
- a coding block (CB) may be obtained according to a division process.
- a coding block may be understood as a unit that is not divided into further coding blocks according to a certain restriction, and may be set as a start unit of division into lower units.
- a block is not limited to a rectangular shape and can be understood as a broad concept including various shapes such as triangles and circles; for convenience of explanation, it is assumed below that a block has a rectangular shape.
- a block may be expressed as M x N, and the maximum and minimum values of each block may be obtained within the supported range. For example, if the maximum block size is 256 x 256 and the minimum is 4 x 4, a block of size 2^m x 2^n (m and n are integers from 2 to 8 in this example), a block of size 2m x 2n (m and n are integers from 2 to 128 in this example), or a block of size m x n (m and n are integers from 4 to 256 in this example) may be obtained.
- m and n may or may not be the same, and one or more ranges in which the blocks such as the maximum value and the minimum value are supported may occur.
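For the power-of-two reading of the example above (minimum 4 x 4, maximum 256 x 256), the set of supported block sizes can be enumerated; this sketch assumes only 2^m x 2^n sizes within the range:

```python
def supported_block_sizes(min_size=4, max_size=256):
    # Enumerate power-of-two side lengths in [min_size, max_size] and
    # pair them into all supported (width, height) combinations.
    sides = []
    s = min_size
    while s <= max_size:
        sides.append(s)
        s *= 2
    return [(w, h) for w in sides for h in sides]
```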
- information about a maximum size and a minimum size of a block may occur, and information about a maximum size and a minimum size of a block, etc. may be generated in some division settings.
- the former may be range information on the maximum and minimum sizes that may occur in the image, and the latter may be information on the maximum and minimum sizes that may occur according to some division settings.
- the division settings may include the image type (I/P/B), color component (YCbCr, etc.), block type (encoding/prediction/transform/quantization, etc.), division type (index or type), and division scheme (QT, BT, TT, etc. in a tree method; SI2, SI3, SI4, etc. in an index method).
- in addition, a range may be set for the width/height ratio (the shape of the block) that a block may have, and a threshold condition may be set for it.
- here, k is the ratio of width and height, defined as A/B (where A is the larger or equal of the horizontal and vertical lengths and B is the remaining value), and may be one or more real numbers such as 1.5, 2, 3, 4, and the like.
- constraints regarding the shape of one block in the image may be supported, or one or more constraints may be supported according to the division setting.
- whether or not block division is supported may be determined by the ranges and conditions as described above and the division setting described later. For example, if a candidate (child block) according to a partition of a block (parent block) satisfies a supported block condition, the partition may be supported. If not, the partition may not be supported.
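The support check described above can be sketched as follows; the split names, minimum size, and maximum width/height ratio are illustrative assumptions, not values fixed by the patent:

```python
def split_supported(parent_w, parent_h, split, min_size=4, max_ratio=4):
    # A split of the parent block is supported only if every child block
    # satisfies the size range and the width/height ratio threshold.
    if split == "QT":
        children = [(parent_w // 2, parent_h // 2)] * 4
    elif split == "BT_HOR":   # horizontal split: full width, half height
        children = [(parent_w, parent_h // 2)] * 2
    elif split == "BT_VER":   # vertical split: half width, full height
        children = [(parent_w // 2, parent_h)] * 2
    else:
        raise ValueError(split)
    for w, h in children:
        if w < min_size or h < min_size:
            return False
        if max(w, h) / min(w, h) > max_ratio:
            return False
    return True
```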
- the block divider may be set in relation to each component of the image encoding apparatus and the decoding apparatus, and the size and shape of the block may be determined through this process.
- the set block may be defined differently according to the configuration unit, and a prediction block in the prediction unit, a transform block in the transform unit, and a quantization block in the quantization unit may correspond to this.
- the present invention is not limited thereto, and a block unit according to another component may be further defined.
- the input and output of each component is mainly described in the case of a rectangular form, but in some components it may be possible to input / output in a different form (for example, a triangle, etc.).
- the size and shape of the initial (or starting) block of the block division may be determined from higher units.
- the initial block may be divided into smaller sized blocks, and when the optimal size and shape according to the division of the block is determined, the block may be determined as the initial block of the lower unit.
- the upper unit may be a coding block and the lower unit may be a prediction block or a transform block, but the present invention is not limited thereto, and various modifications may be possible.
- a partitioning process for searching for a block having an optimal size and shape like the upper unit may be performed.
- the block dividing unit may divide the basic coding block (or the maximum coding block) into at least one coding block, and the coding block may be divided into at least one prediction block/transform block/quantization block.
- the prediction block may be divided into at least one transform block/quantization block, and the transform block may be divided into at least one quantization block.
- some blocks may have a dependent relationship (that is, defined by upper and lower units) with other blocks, or may have an independent relationship.
- the prediction block may be an upper unit of the transform block or may be an independent unit of the transform block, and various relationships may be set according to the type of the block.
- combining units means that the division from the upper unit into lower units is not performed; instead, the encoding/decoding process of the lower unit (for example, the prediction unit, transform unit, inverse transform unit, etc.) is performed on the block (size and shape) of the upper unit.
- the splitting process may be shared in a plurality of units, and the splitting information may be generated in one unit (for example, higher unit).
- a prediction process, a transform, and an inverse transform process may be performed in a coding block (when a coding block is combined with a prediction block and a transform block).
- a prediction process may be performed in a coding block (when a coding block is combined with a prediction block), and a transform and inverse transform process may be performed in a transform block that is the same as or smaller than the coding block.
- a prediction process may be performed on a prediction block that is the same as or smaller than the coding block (when the coding block is combined with a transform block), and a transform and inverse transform process may be performed on the coding block.
- a prediction process may be performed on a prediction block that is the same as or smaller than the coding block (when the prediction block is combined with a transform block), and a transform and inverse transform process may be performed on the prediction block.
- a prediction process may be performed in a prediction block that is the same as or smaller than the coding block (when not combined in any block), and a transform and inverse transform process may be performed in a transform block that is the same as or smaller than the coding block.
- the encoding/decoding elements may include the image type, color component, encoding mode (intra/inter), segmentation settings, block size/shape/position, width/height ratio, prediction-related information (for example, intra prediction mode, inter prediction mode, etc.), transform-related information (for example, transform technique selection information, etc.), quantization-related information (for example, quantization region selection information, quantized transform coefficient encoding information, etc.), and the like.
- mode information (for example, split information, etc.) may be stored in the bitstream together with the information generated by the component to which the block belongs (for example, prediction-related information, transform-related information, etc.), transmitted to the decoder, and parsed in units of the same level by the decoder for use in the image decoding process.
- the following description assumes that the initial block has a square form, but the same or a similar description may apply when the initial block has a rectangular form.
- the block divider may support various kinds of splits. For example, it may support tree-based partitioning or index-based partitioning, and other methods may be supported.
- the tree-based partition may determine the partition type based on various kinds of information (for example, whether to split, the tree type, the split direction, etc.), and the index-based partition may determine the partition type based on predetermined index information.
- FIG. 4 is an exemplary diagram illustrating various partition types that may be obtained in a block partition unit of the present invention.
- the division form as shown in FIG. 4 is obtained through one division (or process).
- the present disclosure is not limited thereto and may also be obtained through a plurality of division operations.
- additional divisional forms not shown in FIG. 4 may be possible.
- tree-based partitioning may support a quad tree (QT), a binary tree (BT), a ternary tree (TT), and the like. When one tree method is supported, it may be referred to as single-tree splitting; when two or more tree methods are supported, it may be referred to as multi-tree splitting.
- in the case of QT, the block is divided into two in each of the horizontal and vertical directions, that is, into four (n); in the case of BT, the block is divided into two in the horizontal or vertical direction (b to g); in the case of TT, the block is divided into three in one direction (h to m).
- the division method (o, p) may be supported by limiting the division direction to one of horizontal and vertical.
- in the case of BT, only the schemes (b, c) having uniform sizes may be supported, only the schemes (d to g) having non-uniform sizes may be supported, or a mixture of the two may be supported.
- in the case of TT, only the divisions (h, j, k, m) in which the sub-blocks are arranged toward a specific direction (with ratios such as 1:1:2 or 2:1:1 in the left-to-right or top-to-bottom direction) may be supported, only the centered divisions (i, l) (with ratios such as 1:2:1) may be supported, or a mixture of the two may be supported.
- in addition, the division schemes dividing into z pieces limited to the horizontal division direction (b, d, e, h, i, j, o) may be supported, the division schemes limited to the vertical division direction (c, f, g, k, l, m, p) may be supported, or a mixture of the two may be supported; here, z may be an integer of 2 or more, such as 2, 3, and 4.
- One or more of the tree divisions may be supported according to the encoding / decoding setting. For example, it may support QT, support QT / BT, or support QT / BT / TT.
- the above example is a case where the base tree split is QT and BT and TT are included in the additional split scheme according to whether other trees are supported, but various modifications may be possible.
- information on whether other trees are supported (bt_enabled_flag, tt_enabled_flag, bt_tt_enabled_flag, etc., which may have a value of 0 or 1, where 0 means not supported and 1 means supported) may be determined implicitly according to the encoding/decoding settings, or determined explicitly in units of a sequence, picture, slice, tile, and the like.
- the partition information may include information on whether to split (tree_part_flag, or qt_part_flag, bt_part_flag, tt_part_flag, bt_tt_part_flag, which may have a value of 0 or 1, where 0 means not split and 1 means split). In addition, depending on the division schemes (BT and TT), split direction information (dir_part_flag, or bt_dir_part_flag, tt_dir_part_flag, bt_tt_dir_part_flag, which may have a value of 0 or 1, where 0 means horizontal and 1 means vertical) may be added; this is information that may occur when splitting is performed.
- various partition information configurations may be possible.
- the following description assumes, for convenience of explanation, an example of how the partition information is configured at one depth level (although recursive partitioning is possible when the supported partition depth is set to one or more).
- in one example, information on whether to split is checked; if splitting is performed, selection information on the split type (for example, tree_idx, where 0 is QT, 1 is BT, and 2 is TT) is checked. At this time, split direction information is additionally checked according to the selected split type; if additional splitting is possible (that is, the split depth has not reached the maximum), the process starts again from the beginning, and otherwise splitting ends.
- in another example, information on whether to split in some tree scheme (QT) is checked, and the process proceeds to the next step.
- if that split is not performed, information on whether to split in some tree scheme (BT) is checked.
- if that split is not performed either, information on whether to split in some tree scheme (TT) is checked; at this time, if no split is performed, splitting ends.
- if splitting in some tree scheme (QT) is performed, the process proceeds to the next step; if splitting in some tree schemes (BT and TT) is performed, the split direction information is checked and the process proceeds to the next step.
- in another example, information on whether to split in some tree scheme (QT) is checked. At this time, if the split is not performed, information on whether to split in some tree schemes (BT and TT) is checked. At this time, if no split is performed, splitting ends.
- if splitting in some tree scheme (QT) is performed, the process proceeds to the next step; if splitting in some tree schemes (BT and TT) is performed, the split direction information is checked and the process proceeds to the next step.
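The signalling order just described (a QT flag first, then a combined BT/TT flag, then a direction) can be sketched as a parsing routine; the flag order and the BT-versus-TT selection flag are assumptions made for illustration:

```python
def parse_split_info(bits):
    # bits: iterable of 0/1 flags in signalling order (flag names assumed).
    it = iter(bits)
    if next(it) == 1:                       # qt_part_flag
        return ("QT", None)
    if next(it) == 0:                       # bt_tt_part_flag
        return ("NO_SPLIT", None)
    tree = "BT" if next(it) == 0 else "TT"  # assumed BT/TT selection flag
    direction = "HOR" if next(it) == 0 else "VER"  # dir_part_flag
    return (tree, direction)
```

Note how a QT split consumes a single bit, while a directional BT/TT split consumes up to four, matching the priority given to QT in this configuration.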
- the above example may be the case where the priority of the tree split exists (examples 2 and 3) or does not exist (example 1), but various modification examples may be possible.
- the above describes the case where the division of the current step does not depend on the division result of the previous step, but it may also be possible to set the division of the current step depending on the division result of the previous step.
- the splitting of the same tree method (QT) may be supported in the current step.
- the partition information configuration described above may also be configured differently (the example described below assumes the third configuration above).
- for example, when splitting in some tree scheme cannot be performed, the partition information may be configured with the splitting information regarding that tree scheme (for example, the split flag, split direction information, etc.) removed.
- the above example relates to the configuration of partition information when block partitioning is allowed (for example, the block size is within the range between the maximum and minimum values, the partitioning depth of each tree method has not reached the maximum (allowed) depth, etc.); when block partitioning is limited (for example, the block size is not within the range between the maximum and minimum values, the partitioning depth of each tree method has reached the maximum depth, etc.), the partition information may be configured adaptively.
- tree-based partitioning in the present invention can be performed in a recursive manner. For example, when the partition flag of a coding block having partition depth k is 0, encoding of the coding block is performed on the coding block having partition depth k; when the partition flag of the coding block having partition depth k is 1, encoding of the coding block is performed on N sub-coding blocks having partition depth k + 1 (where N is an integer of 2 or more, such as 2, 3, or 4) according to the partitioning scheme.
- a sub-coding block may again be set as a coding block (k + 1) and divided into sub-coding blocks (k + 2) through the above process, and this may be determined according to the partitioning settings.
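The recursive partitioning described above can be sketched as follows for the QT case; `split_flag` stands in for the decoded partition flag and `max_depth` for the allowed partitioning depth (both names are illustrative):

```python
def partition(x, y, w, h, depth, split_flag, max_depth, leaves):
    # split_flag(x, y, w, h, depth) plays the role of the decoded flag:
    # 1 -> split into four sub-blocks at depth + 1, 0 -> leaf coding block.
    if depth < max_depth and split_flag(x, y, w, h, depth):
        hw, hh = w // 2, h // 2
        for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
            partition(x + dx, y + dy, hw, hh, depth + 1,
                      split_flag, max_depth, leaves)
    else:
        leaves.append((x, y, w, h))
```

The leaves always tile the original block exactly, so their areas sum to the parent area regardless of the flag pattern.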
- the bitstream structure for expressing the partition information may be selected from one or more scan methods.
- for example, the bitstream of the split information may be configured based on the split depth order, or based on whether splitting is performed.
- that is, there is a method of acquiring the partition information at the next depth level after acquiring the partition information at the current depth level, and a method of preferentially acquiring additional partition information in a block that has been split; other additional scan methods may also be considered.
- index-based partitioning in the present invention may support a constant split index (CSI) scheme and a variable split index (VSI) scheme.
- the CSI scheme may be a scheme in which k subblocks are obtained through division in a predetermined direction, and k may be an integer of 2, 3, 4, or the like. In detail, it may be a partitioning scheme of a configuration in which the size and shape of the sub block are determined based on the k value regardless of the size and shape of the block.
- the predetermined direction may be one of, or a combination of two or more of, the horizontal, vertical, and diagonal (upper-left to lower-right, or lower-left to upper-right) directions.
- the index-based CSI partitioning scheme of the present invention may include candidates divided into z in one of horizontal or vertical directions.
- z may be an integer of 2 or more, such as 2, 3, and 4, and one of the horizontal or vertical lengths of each sub block may be the same and the other may be the same or different.
- in this case, the ratio of the width or height of the sub-blocks may be A1 : A2 : ... : Az, and A1 to Az may be integers of 1 or more, such as 1, 2, and 3.
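Computing sub-block lengths from a ratio such as A1 : A2 : ... : Az can be sketched as follows (assuming, for simplicity, that the block length is divisible by the ratio sum):

```python
def csi_sublengths(length, ratios):
    # Split `length` into parts proportional to ratios A1 : ... : Az,
    # e.g. 16 with (1, 2, 1) -> [4, 8, 4], a centered TT-style arrangement.
    total = sum(ratios)
    assert length % total == 0, "length must be divisible by the ratio sum"
    unit = length // total
    return [a * unit for a in ratios]
```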
- x and y may be an integer of 1 or more, such as 1, 2, 3, 4, but may be limited if x and y are 1 at the same time (since a already exists).
- although FIG. 4 illustrates the case in which the ratio of the width or height of each sub-block is the same, candidates including different cases may also be included.
- in addition, w candidates divided in one of the diagonal directions (upper-left to lower-right, or lower-left to upper-right) may be included, and w may be an integer of 2 or more, such as 2 and 3.
- the sub-blocks may be classified into symmetric partitions (b) and asymmetric partitions (d, e) according to the length ratio of each sub-block, or classified into partition forms arranged toward a specific direction (k).
- the partition form can be defined by various encoding/decoding elements including not only the length ratio of the sub-blocks but also the form of the sub-blocks, and the supported partition forms may be determined implicitly or explicitly according to the encoding/decoding settings.
- the candidate group in the index-based partitioning scheme may be determined based on the supported partition type.
- the VSI scheme may be a scheme in which one or more sub-blocks are obtained by dividing in a predetermined direction while the width (w) or height (h) of the sub-block is fixed, and w and h may be integers of 1 or more, such as 1, 2, 4, 8, and the like. In detail, it may be a partitioning scheme in which the number of sub-blocks is determined based on the size and shape of the block and the w or h value.
- the index-based VSI partitioning scheme of the present invention may include candidates that are partitioned by fixing either the horizontal or vertical length of the subblock. Or, it may include a candidate divided by fixing the horizontal and vertical length of the sub-block. Since the horizontal or vertical length of the sub-block is fixed, it may have a feature that allows equal division in the horizontal or vertical direction, but is not limited thereto.
- when the block before division is M x N and the horizontal length of the sub-block is fixed (w), the vertical length is fixed (h), or both the horizontal and vertical lengths are fixed (w, h), the number of sub-blocks obtained may be (M * N) / w, (M * N) / h, and (M * N) / w / h, respectively.
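- The sub-block counts above can be sketched as follows; a minimal illustration assuming, per the formulas above, that an unfixed dimension is treated as 1 (the function name and interface are hypothetical):

```python
def vsi_subblock_count(M, N, w=1, h=1):
    """Number of sub-blocks of an M x N block under the VSI scheme,
    following the counts (M*N)/w, (M*N)/h and (M*N)/w/h stated above;
    an unfixed dimension is passed as 1."""
    assert (M * N) % (w * h) == 0, "the fixed lengths must divide evenly"
    return (M * N) // (w * h)
```

For example, a 16 x 8 block with the horizontal length fixed to 4 yields (16 * 8) / 4 = 32 sub-blocks under this convention.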
- either the CSI method or the VSI method may be supported, or both methods may be supported, and information on the supported method may be determined implicitly or explicitly.
- the candidate group may be configured by including two or more candidates among the index partitions according to the encoding/decoding setting.
- candidate groups such as {a, b, c}, {a, b, c, n}, and {a to g, n} can be configured, corresponding to block forms divided into two in the horizontal or vertical direction, or into two in both the horizontal and vertical directions.
- this may be an example of configuring the candidate group with partition forms that are predicted, based on general statistical characteristics, to occur frequently, such as block forms divided into two.
- candidate groups such as {a, b}, {a, o}, {a, b, o} or {a, c}, {a, p}, {a, c, p} may be configured.
- a candidate group such as {a, o, p} or {a, n, q} may be configured.
- This may be an example of configuring the candidate group with block forms that are expected to generate a large number of partitions smaller than the block before the splitting is performed.
- a candidate group such as {a, r, s} may be configured.
- this may be an example of configuring as a candidate group partition forms corresponding to the optimal rectangular partitioning result that would otherwise be obtained, in the block before division, through another method (the tree method).
- various candidate groups may be configured, and one or more candidate groups may be supported in consideration of various encoding / decoding elements.
- index selection information may occur in a candidate group including a candidate (a) that is not split and candidates (b to s) that are split.
- alternatively, information indicating whether to split may be generated (i.e., whether the split type is a), and when splitting is performed (when the type is not a), index selection information may be generated within a candidate group consisting of the split candidates (b to s).
- the partitioning information may be configured in various manners other than the above description, and except for the information indicating whether splitting is performed, binary bits may be allocated to the index of each candidate in the candidate group through various methods, such as fixed-length binarization and variable-length binarization. If the number of candidates in the group is two, one bit may be allocated to the index selection information; if three or more, one or more bits may be allocated to the index selection information.
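- The binarization alternatives mentioned above can be sketched as follows; an illustrative comparison of fixed-length and variable-length (truncated unary) binarization of the index selection information, assuming candidates are ordered from most to least probable (the function names are hypothetical):

```python
import math

def fixed_length_bits(index, num_candidates):
    """Fixed-length binarization: every index gets ceil(log2(n)) bits."""
    n_bits = max(1, math.ceil(math.log2(num_candidates)))
    return format(index, '0{}b'.format(n_bits))

def truncated_unary_bits(index, num_candidates):
    """Variable-length (truncated unary) binarization: earlier
    (presumably more probable) indices get shorter codewords."""
    if index < num_candidates - 1:
        return '1' * index + '0'
    return '1' * index  # the last index drops the terminating 0
```

With two candidates both methods use one bit; with more candidates, truncated unary spends fewer bits on the indices placed first.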
- the index-based partitioning method may be a method of selectively configuring a partition type that is expected to occur in a candidate group.
- a single-layer partition (for example, one whose partition depth is limited to 0) may be used instead of tree-based hierarchical (recursive) partitioning. That is, this may be a method that supports one partitioning operation, and a sub-block obtained through index-based partitioning may not be further partitioned.
- this may mean that additional splitting into blocks of the same type with a smaller size is impossible (for example, a coding block obtained through the index-based splitting scheme may not be further divided into coding blocks), and it may also be set so that additional splitting into blocks of a different type is impossible (for example, it may not be possible to split the coding block not only into coding blocks but also into prediction blocks).
- the following describes the case where the block division setting is decided based on the type of block among the encoding / decoding elements.
- a coding block may be obtained through a division process.
- the splitting process may be a tree-based splitting scheme, and according to the tree type, split forms such as a (no split), n (QT), b, c (BT), and i, l (TT) of FIG. 4 may result.
- Various combinations of tree types such as QT / QT + BT / QT + BT + TT may be possible depending on the encoding / decoding settings.
- An example to be described below shows a process of finally partitioning a prediction block and a transform block based on the coding block obtained through the above process, and assumes a case where a prediction, transform, and inverse transform process is performed based on each partition size.
- a prediction block may be set as the size of a coding block to perform a prediction process
- a transform block may be set as the size of a coding block (or a prediction block) to perform a transform and an inverse transform process. Since the prediction block and the transform block are set based on the coding block, there is no split information that occurs separately.
- the prediction block may be set as the size of the coding block to perform the prediction process.
- a transform block may be obtained through a partitioning process based on a coding block (or a prediction block), and a transform and inverse transform process may be performed based on the obtained size.
- the splitting process may be a tree-based splitting scheme, and according to the tree type, split forms such as a (no split), b, c (BT), i, l (TT), and n (QT) of FIG. 4 may result.
- Various combinations of tree types, such as QT / BT / QT + BT / QT + BT + TT, may be possible depending on the encoding/decoding settings.
- the partitioning process may use an index-based partitioning scheme, and a split type result of a (no split), b, c, and d of FIG. 4 may be output according to the index type.
- Various candidate groups such as {a, b, c}, {a, b, c, d}, etc. may be configured according to the encoding/decoding setting.
- in the case of a prediction block, the prediction block may be obtained by performing a partitioning process based on the coding block, and the prediction process may be performed based on the obtained size.
- in the case of a transform block, the size of the coding block may be used as it is to perform the transform and inverse transform processes. This example may correspond to a case in which the prediction block and the transform block have an independent relationship with each other.
- the partitioning process may use an index-based partitioning scheme, and a partition type result of a (no split), b to g, n, r, and s of FIG. 4 may be output according to an index type.
- Various candidate group configurations such as {a, b, c, n}, {a to g, n}, {a, r, s}, etc. may be possible depending on the encoding/decoding setting.
- in the case of a prediction block, the prediction block may be obtained by performing a partitioning process based on the coding block, and the prediction process may be performed based on the obtained size.
- in the case of a transform block, the size of the obtained prediction block may be used as it is to perform the transform and inverse transform processes. This example may correspond to a case in which the transform block is set to the size of the obtained prediction block, or vice versa (the prediction block is set to the size of the transform block).
- the splitting process may use a tree-based splitting scheme, and a splitting scheme such as a (no split), b, c (BT), n (QT), etc. of FIG. 4 may be provided according to the tree type.
- Various combinations of tree types, such as QT / BT / QT + BT, may be possible depending on the encoding / decoding settings.
- the partitioning process may use an index-based partitioning scheme, and a partitioning form of a (no split), b, c, n, o, p, etc. of FIG. 4 may appear according to the index type.
- Depending on the encoding/decoding settings, various candidate groups such as {a, b}, {a, c}, {a, n}, {a, o}, {a, p}, {a, b, c}, {a, o, p}, {a, b, c, n}, {a, b, c, n, p}, and the like can be configured.
- the VSI scheme may be used alone or in combination with the CSI scheme to form a candidate group.
- in the case of a prediction block, the prediction block may be obtained by performing a partitioning process based on the coding block, and the prediction process may be performed based on the obtained size.
- in the case of a transform block, the transform block may be obtained by performing a partitioning process based on the coding block, and the transform and inverse transform processes may be performed based on the obtained size.
- This example may be a case of splitting the prediction block and the transform block based on the coding block.
- the partitioning process may use a tree-based partitioning method and an index-based partitioning method, and candidate groups may be configured in the same or similar manner as in the fourth example.
- the above examples describe some of the cases that may occur depending on whether the division process is shared between the block types, and the like.
- the present disclosure is not limited thereto, and various modifications may be possible.
- the block division setting may be determined by considering various encoding/decoding elements as well as the type of the block.
- the encoding/decoding elements may include image type (I/P/B), color component (Y/Cb/Cr), block size/shape/position, block width-to-height ratio, block type (coding block, prediction block, transform block, quantization block, etc.), partition state, coding mode (Intra/Inter), prediction-related information (intra prediction mode, inter prediction mode, etc.), transform-related information (transform technique selection information, etc.), quantization-related information (quantization region selection information, quantized transform coefficient encoding information, etc.), and the like.
- the intra prediction may be configured as follows.
- the intra prediction of the prediction unit may include a reference pixel construction step, a prediction block generation step, a prediction mode determination step, and a prediction mode encoding step.
- the image encoding apparatus may be configured to include a reference pixel constructing unit, a predictive block generating unit, and a prediction mode encoding unit for implementing a reference pixel constructing step, a predictive block generating step, a prediction mode determining step, and a prediction mode encoding step.
- FIG. 5 is an exemplary diagram illustrating an intra prediction mode according to an embodiment of the present invention.
- In FIG. 5, the prediction mode candidate group for intra prediction is composed of 67 prediction modes, of which 65 are directional modes and 2 are non-directional modes (DC, Planar). However, the configuration is not limited thereto, and various configurations may be possible.
- the directional modes may be distinguished by slope (e.g., dy/dx) or angle information (degrees).
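- As an illustration of describing the directional modes by angle, the following sketch assumes the 65 directional modes (2 to 66) are equally spaced over 180 degrees; actual codecs typically use a non-uniform dy/dx table, so this is only a simplification:

```python
def mode_angle_offset(mode):
    """Angular offset (degrees) of a directional mode relative to mode 2,
    assuming the 65 directional modes (2..66) are equally spaced over
    180 degrees -- an illustrative simplification, not the dy/dx table
    of a real codec."""
    assert 2 <= mode <= 66, "directional modes only"
    return (mode - 2) * 180.0 / 64.0
```

Under this spacing, the diagonal mode 34 lies 90 degrees from mode 2, and the vertical mode 50 lies 135 degrees from it.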
- all or part of the prediction mode may be included in the prediction mode candidate group of the luminance component or the chrominance component, and other additional modes may be included in the prediction mode candidate group.
- the direction of a directional mode may mean a straight line, and a curved directional mode may further be configured as a prediction mode.
- the non-directional modes may include a Planar mode in which the prediction block is obtained through linear interpolation of pixels of the block.
- the reference pixels used to generate the prediction block may be obtained from blocks combined in various ways, such as left, top, left + top, left + bottom-left, top + top-right, left + top + bottom-left + top-right, and the like.
- the block position from which the reference pixels are obtained may be determined according to an encoding/decoding setting defined by the image type, the color component, and the size/shape/position of the block.
- the pixels used for generating the prediction block may be divided into a region composed of reference pixels (e.g., left, top, top-left, top-right, bottom-left, etc.) and a region not composed of reference pixels (e.g., right, bottom, bottom-right, etc.). The region not composed of reference pixels (i.e., not yet encoded) may be obtained implicitly by using one or more pixels of the region composed of reference pixels (e.g., copied as they are, weighted-averaged, etc.), or information on at least one pixel of the region not composed of reference pixels may be generated explicitly.
- a prediction block can be generated using an area composed of reference pixels and an area not composed of reference pixels as described above.
- non-directional modes other than those described above may be included; the present invention will be described based on the linear directional modes and the non-directional DC and Planar modes, but changes to other cases may also be possible.
- the prediction mode supported according to the size of the block may be different from FIG. 5.
- the number of prediction mode candidates may be adaptive (for example, 9, 17, 33, 65, or 129 directional modes, with the angle between prediction modes equally spaced or set differently).
- the number of prediction mode candidates may be fixed but may be of different configuration (eg, directional mode angle, non-directional type, etc.).
- the prediction mode supported according to the shape of the block may be different from FIG. 5.
- the number of prediction mode candidates may be adaptive (e.g., the number of prediction modes derived toward the horizontal or vertical direction may be larger or smaller depending on the width-to-height ratio of the block), or the number of prediction mode candidates may be fixed but with a different configuration (e.g., the prediction modes derived toward the horizontal or vertical direction may be set more finely depending on the width-to-height ratio of the block).
- a larger number of prediction modes may be supported toward the longer side of the block, and a smaller number toward the shorter side.
- modes outside the prediction mode interval represented in FIG. 5 may also be supported. For example, modes located to the right of mode 66 (modes with an angle of +45 degrees or more relative to mode 50, that is, modes numbered 67 to 80) or modes located to the left of mode 2 (modes with an angle of -45 degrees or more relative to mode 18, that is, modes numbered -1 to -14) may be supported. This may be determined by the width-to-height ratio of the block, and vice versa.
- the prediction mode will be described with reference to the case where the prediction modes are fixedly supported (regardless of any encoding/decoding factor) as shown in FIG. 5, but prediction mode settings adaptively supported according to the encoding setting may also be possible.
- some diagonal modes (Diagonal up right <2>, Diagonal down right <34>, Diagonal down left <66>, etc.) may serve as references, and this may be a classification performed based on certain directionalities (or angles, such as 45 degrees or 90 degrees).
- modes 2 and 66, located at both ends of the directional modes, may be the modes used as criteria for classifying the prediction modes. That is, when the prediction mode configuration is adaptive, an example in which the reference modes are changed may also be possible.
- mode 2 may be replaced with a mode with a number less than or greater than 2 (-2, -1, 3, 4, etc.), or mode 66 with a mode with a number less than or greater than 66 (64, 65, 67, 68, etc.).
- an additional prediction mode (color copy mode, color mode) regarding the color component may be included in the prediction mode candidate group.
- the color copy mode may be a prediction mode related to a method of obtaining data for generating the prediction block from an area located in another color space, and the color mode may be a prediction mode related to a method of obtaining a prediction mode from an area located in another color space.
- the size and shape (M x N) of the prediction block may be obtained through the block partitioner.
- Intra prediction may be generally performed in units of prediction blocks, but may be performed in units of coding blocks, transform blocks, or the like according to the setting of the block partitioner.
- the reference pixel configuration unit may configure a reference pixel used for prediction of the current block.
- the reference pixels may be managed through a temporary memory (for example, a one-dimensional or two-dimensional array), generated and removed for each intra prediction process of a block, and the size of the temporary memory may be determined according to the reference pixel configuration.
- with respect to the current block, the left, top, top-left, top-right, and bottom-left blocks may be used for prediction of the current block.
- block candidate groups having other configurations may be used for prediction of the current block.
- the above candidate group of neighboring blocks for the reference pixels may be an example of following a raster or Z scan; some of the candidate group may be removed according to the scan order, or other block candidate groups (e.g., the right, bottom, and bottom-right blocks) may be included.
- in the case of some prediction modes (color copy modes), some regions of other color spaces may be used for prediction of the current block, and these may also be considered reference pixels.
- FIG. 7 is a conceptual diagram illustrating a block adjacent to a target block of intra prediction according to an embodiment of the present invention.
- the left side of FIG. 7 represents a block adjacent to the current block of the current color space
- the right side represents a corresponding block of another color space.
- the following description assumes that a block adjacent to the current block of the current color space has a basic reference pixel configuration.
- the reference pixels used for prediction of the current block may include the adjacent pixels of the left, top, top-left, top-right, and bottom-left blocks (Ref_L, Ref_T, Ref_TL, Ref_TR, and Ref_BL in FIG. 6).
- the reference pixels are generally composed of the pixels of the neighboring blocks closest to the current block (a of FIG. 6, referred to as a reference pixel line), but other pixels (b of FIG. 6 and pixels of other outer lines) may also be included among the reference pixels.
- Pixels adjacent to the current block may be classified into at least one reference pixel line. The pixels closest to the current block (for example, pixels whose distance from the boundary pixels of the current block is 1: p(-1,-1) to p(2m-1,-1) and p(-1,0) to p(-1,2n-1)) may be classified as ref_0; the next adjacent pixels (distance 2: p(-2,-2) to p(2m,-2) and p(-2,-1) to p(-2,2n)) as ref_1; the next adjacent pixels (distance 3: p(-3,-3) to p(2m+1,-3) and p(-3,-2) to p(-3,2n+1)) as ref_2; and so on. That is, the pixels may be classified into reference pixel lines according to their distance from the boundary pixels of the current block.
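- The classification above can be sketched as follows; the coordinate ranges follow the p(x, y) ranges given above for an m x n block (the function name is hypothetical):

```python
def reference_pixel_line(m, n, k):
    """Coordinates of reference pixel line ref_k for an m x n block,
    following the ranges given above: the top row runs from
    p(-1-k, -1-k) to p(2m-1+k, -1-k) and the left column from
    p(-1-k, -k) to p(-1-k, 2n-1+k)."""
    top = [(x, -1 - k) for x in range(-1 - k, 2 * m + k)]   # includes the corner pixel
    left = [(-1 - k, y) for y in range(-k, 2 * n + k)]
    return top + left
```

For a 4 x 4 block, ref_0 contains 2m + 2n + 1 = 17 pixels, and each farther line ref_k adds 4k pixels.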
- the number of supported reference pixel lines may be N, and N may be an integer of 1 or more, such as 1 to 5.
- the reference pixel line candidate group may be configured by sequentially including the reference pixel lines closest to the current block, but is not limited thereto.
- the candidate group may be sequentially configured as {ref_0, ref_1, ref_2}, or it may also be possible for the candidate group to be configured non-sequentially or in a manner that excludes the closest reference pixel line, such as {ref_0, ref_1, ref_3}, {ref_0, ref_2, ref_3}, or {ref_1, ref_2, ref_3}.
- Prediction may be performed using all reference pixel lines in the candidate group, or prediction may be performed using some reference pixel lines (one or more).
- one of the plurality of reference pixel lines may be selected according to the encoding/decoding setting, and intra prediction may be performed using that reference pixel line.
- two or more of the plurality of reference pixel lines may be selected to perform intra prediction by using the reference pixel lines (for example, applying a weighted average to data of each reference pixel line).
- the reference pixel line selection may be determined implicitly or explicitly.
- the implicit case means that the selection is determined according to an encoding/decoding setting defined by one or a combination of two or more elements such as the image type, the color component, and the size/shape/position of the block.
- the explicit case means that reference pixel line selection information may occur in a unit such as a block.
- a setting in which the information is implicitly determined may be supported for intra prediction on a sub-block basis, which will be described later only by considering a case where the nearest reference pixel line is used. That is, intra prediction may be performed in units of sub-blocks using the preset reference pixel line, and the reference pixel line may be selected through implicit processing.
- the reference pixel line may mean the nearest reference pixel line, but is not limited thereto.
- a reference pixel line may be adaptively selected for intra-picture prediction in sub-block units, and various reference pixel lines including the nearest reference pixel may be selected to perform intra-picture prediction in sub-block units.
- intra-picture prediction in units of sub-blocks may be performed using reference pixel lines determined in consideration of various sub / decoding elements, and the reference pixel lines may be selected through implicit or explicit processing.
- the reference pixel component of the intra prediction according to the present invention may include a reference pixel generator, a reference pixel interpolator, a reference pixel filter, and the like, and may include all or part of the above configuration.
- the reference pixel configuration unit may check the availability of the reference pixel to classify the available reference pixel and the unavailable reference pixel.
- the availability of the reference pixel is determined to be unavailable when at least one of the following conditions is satisfied.
- a reference pixel may be determined to be unavailable when any one of the following cases is satisfied: when it does not belong to the same division unit as the current block (for example, a unit that cannot be mutually referenced, such as a slice or a tile; however, a unit such as a slice or a tile having the property of being mutually referenceable is handled as an exception even when it is not in the same division unit), or when its encoding/decoding has not been completed. That is, when none of the above conditions is satisfied, it may be determined to be usable.
- the use of the reference pixels can be restricted by the encoding/decoding settings.
- the use of the reference pixel may be limited according to whether limited intra prediction (eg, constrained_intra_pred_flag) is performed.
- the limited intra prediction may be performed when, in order to perform encoding/decoding that is robust to errors caused by external factors such as the communication environment, the use of blocks reconstructed by referring to another image as reference pixels is prohibited.
- when the limited intra prediction is deactivated, all reference pixel candidate blocks may be available; when it is activated, whether a reference pixel candidate block can be used may be determined according to its encoding mode (Intra or Inter).
- since the reference pixels are composed of one or more blocks, their availability may be classified into three cases: <all usable>, <some usable>, and <none usable>. In all cases other than <all usable>, the reference pixels at the unavailable candidate block positions may be filled or generated.
- when a reference pixel candidate block is usable, the pixel at the corresponding position may be included in the reference pixel memory of the current block. In this case, the pixel data may be copied as it is, or may be included in the reference pixel memory through processes such as reference pixel filtering and reference pixel interpolation.
- when a reference pixel candidate block is unusable, a pixel obtained through the reference pixel generation process may be included in the reference pixel memory of the current block.
- the following shows an example of generating a reference pixel at an unusable block position using various methods.
- the reference pixel may be generated using any pixel value.
- the arbitrary pixel value means one pixel value (for example, the minimum, maximum, or median value of the pixel value range) belonging to a pixel value range (for example, a pixel value range based on the bit depth, or a pixel value range according to the pixel distribution in the corresponding image). In detail, this may be an example applied when all of the reference pixel candidate blocks are unavailable.
- the reference pixel may be generated from an area in which encoding / decoding of the image is completed.
- the reference pixel may be generated from at least one usable block adjacent to the unusable block. In this case, at least one of extrapolation, interpolation, and copying may be used.
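- The generation methods above (an arbitrary mid-range value when nothing is available, or copying from the nearest available pixel otherwise) can be sketched as follows; the one-dimensional layout and scan order are assumptions of this illustration:

```python
def fill_reference_pixels(ref, bit_depth=8):
    """Fill unavailable reference pixels (None entries) as described
    above: if no pixel is available, use a mid-range value derived
    from the bit depth; otherwise copy from the nearest available
    neighbour (a simple form of extrapolation/copying)."""
    if all(p is None for p in ref):
        return [1 << (bit_depth - 1)] * len(ref)
    out = list(ref)
    # forward pass: copy the previous available pixel
    for i in range(len(out)):
        if out[i] is None and i > 0 and out[i - 1] is not None:
            out[i] = out[i - 1]
    # backward pass: handle a leading run of unavailable pixels
    for i in range(len(out) - 2, -1, -1):
        if out[i] is None:
            out[i] = out[i + 1]
    return out
```

For an 8-bit image the all-unavailable case yields the mid-range value 128; otherwise every gap is filled from its nearest decoded neighbour.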
- the reference pixels may be generated in fractional (decimal) units through linear interpolation of the reference pixels.
- the reference pixel interpolation process may be performed after the reference pixel filter process described later.
- the horizontal and vertical modes, some diagonal modes (e.g., modes at a 45-degree difference from the vertical and horizontal directions, such as Diagonal up right, Diagonal down right, and Diagonal down left), the non-directional modes, the color copy mode, and the like may not perform the interpolation process, while the other modes (diagonal modes) may perform the interpolation process.
- the pixel position at which interpolation is performed (i.e., which fractional unit is interpolated) may be determined according to the prediction mode (e.g., the directionality of the prediction mode, dy/dx, etc.) and the positions of the reference pixel and the prediction pixel.
- one filter (for example, a filter assumed to be the same regardless of the fractional precision in terms of the equation used to determine the filter coefficients or the filter tap length, with only the fractional precision, such as <1/2, 7/32, 19/32>, differing) may be applied, or one of a plurality of filters (for example, filters in which the equation used to determine the filter coefficients or the filter tap length differs) may be selected and applied according to the fractional unit.
- in the former case, integer pixels may be used as inputs for interpolating fractional pixels; in the latter case, the input pixels may change in stages (for example, integer pixels for 1/2 units; integer and 1/2-unit pixels for 1/4 units, etc.), but the present invention is not limited thereto.
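- As a minimal illustration of fractional-unit interpolation, the following sketch applies only the simplest linear (2-tap) case; longer filters (DCT-IF, cubic, Gaussian, etc.) would replace the two-tap weighting:

```python
def interpolate_reference(ref, pos):
    """Linear (2-tap) interpolation of a reference pixel at a
    fractional position `pos` along a reference pixel array.
    Integer positions return the stored pixel unchanged."""
    i = int(pos)
    frac = pos - i
    if frac == 0:
        return ref[i]
    # weight the two surrounding integer pixels by their distances
    return (1 - frac) * ref[i] + frac * ref[i + 1]
```

For example, the 1/2-unit position between pixels 10 and 20 interpolates to 15.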
- Fixed or adaptive filtering may be performed for reference pixel interpolation, which may be determined according to the encoding/decoding settings (e.g., one or a combination of the image type, color component, block position/size/shape, block aspect ratio, prediction mode, etc.).
- Fixed filtering may perform reference pixel interpolation using one filter
- adaptive filtering may perform reference pixel interpolation using one of a plurality of filters.
- one of the plurality of filters may be implicitly determined or explicitly determined according to the encoding / decoding setting.
- the filter types may include a 4-tap DCT-IF filter, a 4-tap cubic filter, a 4-tap Gaussian filter, a 6-tap Wiener filter, and an 8-tap Kalman filter, and it may also be possible for the supported filter candidate group to be defined differently (e.g., some filter types the same or different, shorter or longer filter taps, etc.).
- filtering may be performed on the reference pixels for the purpose of improving prediction accuracy by reducing the degradation remaining from the encoding/decoding process.
- the filter used may be a low-pass filter.
- Fixed filtering means that reference pixel filtering is not performed or reference pixel filtering is applied using one filter.
- Adaptive filtering means that filtering is applied according to the encoding / decoding setting, and if more than one filter type is supported, one of them may be selected.
- the filter types may include a plurality of filters distinguished by filter coefficients, filter tap lengths, and the like, such as a 3-tap filter like [1, 2, 1] / 4 and a 5-tap filter like [2, 3, 6, 3, 2] / 16.
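- The 3-tap [1, 2, 1] / 4 filter above can be sketched as follows; leaving the two end pixels unfiltered is an assumption of this illustration, and the actual boundary handling is an encoding/decoding setting:

```python
def smooth_reference_pixels(ref):
    """Apply the 3-tap [1, 2, 1] / 4 low-pass filter to interior
    reference pixels, with integer rounding; the first and last
    pixels are left unfiltered in this sketch."""
    out = list(ref)
    for i in range(1, len(ref) - 1):
        # (left + 2*center + right + rounding offset) / 4
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out
```

An isolated spike such as [0, 8, 0] is attenuated to [0, 4, 0], while a flat run is left unchanged, which is the low-pass behaviour described above.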
- the reference pixel interpolator and the reference pixel filter introduced in the reference pixel configuration step may be necessary to improve the accuracy of prediction.
- the two processes may be independently performed, but a combination of the two processes (ie, one filtering) may be possible.
- the prediction block generator may generate a prediction block according to at least one prediction mode, and use a reference pixel based on the prediction mode.
- the reference pixels may be used in a method such as extrapolation according to the prediction mode (directional modes), or in a method such as interpolation, averaging (DC), or copying (non-directional modes).
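- As an illustration of a non-directional method, the following sketch fills the prediction block with the average (DC) of the left and top reference pixels; the rounding convention is an assumption:

```python
def dc_prediction(left, top, m, n):
    """DC mode sketch: fill the m x n prediction block with the
    rounded average of the adjacent left and top reference pixels
    (one of the non-directional methods described above)."""
    pixels = left[:n] + top[:m]
    dc = (sum(pixels) + len(pixels) // 2) // len(pixels)  # rounded average
    return [[dc] * m for _ in range(n)]
```

With left pixels all 10 and top pixels all 20, every prediction pixel of a 4 x 4 block becomes 15.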
- the prediction mode determiner performs a process for selecting an optimal mode among a plurality of prediction mode candidate groups.
- a mode that is optimal in terms of coding cost may be determined using block distortion (e.g., the distortion between the current block and the reconstructed block, measured by SAD <Sum of Absolute Difference>, SSD <Sum of Square Difference>, etc.) and a rate-distortion technique that also takes into account the number of bits generated by the corresponding mode.
- the prediction block generated based on the prediction mode determined through the above process may be transmitted to the subtractor and the adder.
- all prediction modes present in the prediction mode candidate group may be searched, or the optimal prediction mode may be selected through other decision processes for the purpose of reducing the amount of computation / complexity.
- for example, the first stage may select some modes that perform well in terms of image-quality degradation among all the intra prediction mode candidates, and the second stage may select the optimal prediction mode from the modes selected in the first stage by considering not only the image-quality degradation but also the number of bits generated.
- various methods of reducing the amount of computation / complexity may be applied.
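- The two-stage decision described above can be sketched as follows; `distortion` and `rate` are caller-supplied cost functions (e.g., SAD and an estimated bit count, both assumptions of this sketch), and the weighting follows the usual rate-distortion form D + lambda * R:

```python
def two_stage_mode_decision(candidates, distortion, rate, lam, keep=3):
    """Two-stage search: stage 1 ranks all candidate modes by
    distortion only and keeps a few; stage 2 picks the minimum
    rate-distortion cost D + lam * R among the survivors."""
    stage1 = sorted(candidates, key=distortion)[:keep]
    return min(stage1, key=lambda mode: distortion(mode) + lam * rate(mode))
```

Only the `keep` survivors of stage 1 pay the cost of the bit-count estimate, which is how the full search is reduced.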
- the prediction mode determiner may generally be included only in the encoder, but may also be included in the decoder according to the encoding/decoding setting, for example, when template matching is included as a prediction method, or when a method of deriving the intra prediction mode from a region adjacent to the current block is used. In the latter case, it can be understood that a method of implicitly obtaining the prediction mode in the decoder is used.
- the prediction mode encoder may encode the prediction mode selected by the prediction mode determiner.
- index information corresponding to the prediction mode may be encoded, or the information about the prediction mode may be predicted.
- the former may be a method applied to the luminance component and the latter may be a method applied to the color difference component, but is not limited thereto.
- the prediction value (or prediction information) of the prediction mode may be referred to as Most Probable Mode (MPM).
- MPM may be configured in one prediction mode or in a plurality of prediction modes.
- the number of MPMs (k, where k is an integer of 1 or more, such as 1, 2, 3, 6, etc.) may be determined according to the number of prediction mode candidates.
- when the MPM is composed of a plurality of prediction modes, it may be referred to as an MPM candidate group.
- the MPM candidate group may be supported under a fixed setting, or adaptive settings may be supported depending on various encoding/decoding factors.
- for example, the candidate group configuration may be determined according to which reference pixel layer is used among a plurality of reference pixel layers, or depending on whether intra prediction is performed on a block basis or on a sub-block basis.
- MPM is a concept supported for efficiently encoding a prediction mode.
- the candidate group may be configured as a prediction mode that is more likely to occur in the prediction mode of the current block.
- the MPM candidate group may be composed of preset prediction modes (or statistically frequently occurring prediction modes such as DC, Planar, vertical, horizontal, and some diagonal modes), the prediction modes of adjacent blocks (left, top, top-left, top-right, bottom-left blocks, etc.), and the like.
- the prediction modes of the adjacent blocks may be obtained at positions L0 to L3 (left block), T0 to T3 (top block), TL (top-left block), R0 to R3 (top-right block), and B0 to B3 (bottom-left block) in FIG.
- when an MPM candidate can be constructed from two or more sub-block positions (e.g., L0, L2, etc.) in an adjacent block (e.g., the left block), the prediction mode of the sub-block corresponding to a predefined position (e.g., L0) may be configured in the candidate group according to a predefined priority.
- for example, the prediction modes at positions L3, T3, TL, R0, and B0 among the adjacent blocks may be selected as the prediction modes of the adjacent blocks and included in the MPM candidate group.
- the above description covers some cases of configuring the prediction modes of adjacent blocks in the candidate group, but is not limited thereto. In the examples described below, it is assumed that the candidate group is configured with the prediction mode of a predefined position.
- a mode derived from one or more prediction modes already included in the candidate group may be further configured as an MPM candidate. Specifically, when mode k (a directional mode) is included, modes that can be derived from it (modes spaced at intervals of +a and -b from k, where a and b are integers of 1 or more, such as 1, 2, 3, etc.) may be further included in the MPM candidate group.
- the MPM candidate group may be configured in the order of the prediction mode of the adjacent block, the preset prediction mode, the derived prediction mode, and the like.
- the process of configuring the MPM candidate group may be completed by filling up to the maximum number of MPM candidates according to the priority. A redundancy check may be included such that a prediction mode coinciding with a previously included prediction mode is not configured in the candidate group and the turn passes to the candidate of the next priority.
- for example, candidate groups may be configured in the order of L - T - TL - TR - BL - Planar - DC - Vertical - Horizontal - Diagonal mode. This may be the case where the prediction modes of adjacent blocks are configured in the candidate group preferentially and preset prediction modes are configured additionally.
- alternatively, candidate groups may be configured in the order of L - T - Planar - DC - (L+1) - (L-1) - (T+1) - (T-1) - Vertical - Horizontal - Diagonal mode. This may be the case where the prediction modes of some adjacent blocks and part of the preset prediction modes are configured preferentially and, on the assumption that prediction modes in directions similar to those of the adjacent blocks are likely to occur, the derived modes and the remaining preset prediction modes are configured additionally.
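The candidate list construction with a redundancy check can be sketched as below. The concrete mode numbers, the candidate count of 6, and the fill order (L - T - Planar - DC - derived modes - defaults) are illustrative assumptions based on the examples above, not a definitive specification.

```python
# Sketch of MPM candidate group construction with a redundancy check.
# Mode numbering (Planar=0, DC=1, angular 2..66) and max_mpm are assumptions.

PLANAR, DC, HOR, VER, DIAG = 0, 1, 18, 50, 66  # illustrative mode numbers

def build_mpm_list(left_mode, top_mode, max_mpm=6):
    # Fill order: adjacent modes, preset modes, derived (+/-1) modes, defaults.
    ordered = [left_mode, top_mode, PLANAR, DC,
               left_mode + 1, left_mode - 1, top_mode + 1, top_mode - 1,
               VER, HOR, DIAG]
    mpm = []
    for mode in ordered:
        # Redundancy check: skip duplicates and out-of-range derived modes.
        if 0 <= mode <= 66 and mode not in mpm:
            mpm.append(mode)
        if len(mpm) == max_mpm:
            break
    return mpm
```

For instance, when both the left and top blocks use mode 50 (vertical), the list becomes the vertical mode, the non-directional defaults, the derived modes 51 and 49, and then the horizontal default.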
- binarization such as unary binarization or truncated Rice binarization may be applied to the MPM candidate group based on the index within the candidate group. That is, mode bits can be represented by allocating short codewords to candidates having a small index and long codewords to candidates having a large index.
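As a concrete illustration of index-dependent codeword lengths, a truncated unary binarization (one of the schemes named above) can be sketched as follows; the group size of 6 in the examples is an assumption.

```python
# Sketch of truncated unary binarization for an index within a candidate
# group of known size: small indices get short codewords, and the last
# index drops the terminating "0" bit (truncation).

def truncated_unary(index, group_size):
    if index < group_size - 1:
        return "1" * index + "0"   # e.g. index 2 of 6 -> "110"
    return "1" * index             # last index: no terminating bit needed
```

With a group size of 6, index 0 costs one bit ("0") while index 5 costs five bits ("11111"), which matches the principle of assigning short mode bits to likely candidates.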
- Modes not included in the MPM candidate group may be classified as non-MPM candidate groups.
- the non-MPM candidate group may be classified into two or more candidate groups according to the encoding / decoding setting.
- binarization such as fixed length binarization and truncated unary binarization may be used based on indexes in the non-MPM candidate group.
- for example, the non-MPM candidate group may be classified into non-MPM_A (hereinafter, candidate group A) and non-MPM_B (hereinafter, candidate group B). It is assumed that candidate group A (with p modes, where p is greater than or equal to the number of MPM candidates) is configured with prediction modes that are more likely to occur as the prediction mode of the current block than those of candidate group B (with q modes, where q is greater than or equal to the number of candidates in group A). At this time, a configuration process for candidate group A may be added.
- for example, some prediction modes having equal intervals (e.g., 2, 4, 6, etc.) among the directional modes may be configured in candidate group A, or modes derived from the preset prediction modes (e.g., modes derived from the prediction modes included in the MPM candidate group) may be configured in candidate group A.
- the prediction modes remaining after the MPM candidate group and candidate group A are configured may be configured as candidate group B, and an additional candidate group configuration process is not required.
- binarization such as fixed length binarization and truncated unary binarization may be used based on the indices in candidate groups A and B.
- in this example, the non-MPM candidate group is composed of two groups, but is not limited thereto, and various modifications may be possible.
- the following is a process for the case of predicting and encoding a prediction mode.
- information (mpm_flag) on whether the prediction mode of the current block matches the MPM (or some mode in the MPM candidate group) may be checked.
- if it matches the MPM, the MPM index information may be additionally checked according to the configuration of the MPM (one mode, or two or more modes). After that, the encoding process of the current block is completed.
- if it does not match the MPM (when the non-MPM candidate group is composed of one group), non-MPM index information (remaining_idx) may be checked. After that, the encoding process of the current block is completed.
- when the non-MPM candidate group includes a plurality of groups (two in this example), information (non_mpm_flag) about whether the prediction mode of the current block matches some prediction mode in candidate group A may be checked.
- if it matches candidate group A, the candidate group A index information (non_mpm_A_idx) may be checked; if it does not, the candidate group B index information (remaining_idx) may be checked. After that, the encoding process of the current block is completed.
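The decision flow above can be sketched as follows. The syntax element names mpm_flag, non_mpm_flag, non_mpm_A_idx, and remaining_idx follow the text; the name mpm_idx and the return of a plain dictionary (instead of actual entropy coding) are assumptions of this sketch.

```python
# Sketch of the prediction mode encoding flow for the two-group non-MPM case.
# Entropy coding of each element is omitted; only the decisions are modeled.

def encode_pred_mode(mode, mpm, group_a, group_b):
    syntax = {"mpm_flag": mode in mpm}
    if syntax["mpm_flag"]:
        syntax["mpm_idx"] = mpm.index(mode)          # index within the MPM group
    else:
        syntax["non_mpm_flag"] = mode in group_a     # does it fall in group A?
        if syntax["non_mpm_flag"]:
            syntax["non_mpm_A_idx"] = group_a.index(mode)
        else:
            syntax["remaining_idx"] = group_b.index(mode)
    return syntax
```

The decoder mirrors this flow: it reads mpm_flag first and then exactly one of the three index elements.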
- the prediction mode supported by the current block, the prediction mode supported by an adjacent block, and the preset prediction mode may use the same prediction mode numbering (index) or different numberings. Reference is made to FIG. 5 for the following description.
- a prediction mode candidate group unification (or adjustment) process for configuring an MPM candidate group or the like may be performed.
- for example, the prediction mode of the current block may be one of the modes -5 to 61 in its prediction mode candidate group,
- while the prediction mode of an adjacent block may be one of the modes 2 to 66 in its prediction mode candidate group.
- in this case, a process of unifying the two numberings may be performed in the prediction mode encoding process. That is, the process may not be required when a fixed intra prediction mode candidate group configuration is supported, and may be required when an adaptive intra prediction mode candidate group configuration is supported; a detailed description thereof will be omitted.
- encoding may be performed by assigning an index to a prediction mode belonging to a prediction mode candidate group.
- a method of encoding the corresponding index corresponds to this.
- in this case, the prediction mode candidate group is fixed and a fixed index is assigned to each prediction mode.
- when the prediction mode candidate group is configured adaptively, however, the fixed index allocation method may not be suitable.
- instead, an index may be allocated to each prediction mode according to an adaptive priority, and a method of encoding the index corresponding to the selected prediction mode of the current block may be applied. Owing to the adaptive configuration of the prediction mode candidate group, the prediction mode can be encoded efficiently by changing the indices assigned to the prediction modes. That is, the adaptive priority may assign a candidate having a high probability of being selected as the prediction mode of the current block to an index that produces short mode bits.
- the following assumes that eight prediction modes are supported in the prediction mode candidate group for the color difference component, including preset prediction modes (directional and non-directional modes), color copy modes, and a color mode.
- the preset prediction modes (directional and non-directional modes) and the color copy modes can easily be classified as distinct prediction modes, since their prediction methods differ.
- the color mode, however, may turn out to be a directional or non-directional mode, and may therefore overlap with one of the preset prediction modes.
- for example, when the color mode is the vertical mode, it overlaps with the vertical mode, which is one of the preset prediction modes.
- in the overlapping case, the number of candidates may be adjusted (8 → 7), or an index may be allocated by adding another candidate to replace the duplicate; the description below assumes the latter setting.
- an adaptive prediction mode candidate group configuration may be supported even when a variable mode such as the color mode is included; therefore, when adaptive index allocation is performed, it can be regarded as an example of adaptive prediction mode candidate group configuration.
- the following describes a case where adaptive index allocation is performed according to the color mode.
- the basic index is assumed to be allocated in the order Planar(0) - Vertical(1) - Horizontal(2) - DC(3) - CP1(4) - CP2(5) - CP3(6) - C(7).
- when the color mode does not match any preset prediction mode, index allocation is performed in the above order, and the prediction mode corresponding to the color mode is filled at index 7.
- when the color mode matches one of the preset prediction modes, the original index of the matching prediction mode (one of 0 to 3) is filled with a preset replacement mode (Diagonal down left), while the matching mode occupies index 7 as the color mode.
- for example, when the color mode is the horizontal mode, index allocation such as Planar(0) - Vertical(1) - Diagonal down left(2) - DC(3) - CP1(4) - CP2(5) - CP3(6) - Horizontal(7) may be performed.
- alternatively, the prediction mode corresponding to the color mode may be filled at index 0, the preset replacement mode (Diagonal down left) may be filled at index 7 (the position of the color mode), and the existing index configuration may be adjusted accordingly.
- for example, when the color mode is the DC mode, index allocation such as DC(0) - Planar(1) - Vertical(2) - Horizontal(3) - CP1(4) - CP2(5) - CP3(6) - Diagonal down left(7) may be performed.
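The second adjustment variant above (matching mode to index 0, replacement mode into the vacated color-mode slot) can be sketched as follows. Representing modes as strings and the fixed list of preset/copy modes are simplifications for illustration.

```python
# Sketch of adaptive index allocation for the chroma candidate group.
# If the color mode duplicates a preset mode, that mode moves to index 0
# and "Diagonal down left" fills the color-mode slot (index 7).

PRESET = ["Planar", "Vertical", "Horizontal", "DC"]
COPY = ["CP1", "CP2", "CP3"]

def chroma_candidate_list(color_mode):
    if color_mode in PRESET:
        # Matching mode to index 0; remaining preset modes keep their
        # relative order; the replacement mode takes the last slot.
        rest = [m for m in PRESET if m != color_mode]
        return [color_mode] + rest + COPY + ["Diagonal down left"]
    # No overlap: basic allocation with the color mode at index 7.
    return PRESET + COPY + [color_mode]
```

For a DC color mode this reproduces the DC(0) - Planar(1) - Vertical(2) - Horizontal(3) - CP1(4) - CP2(5) - CP3(6) - Diagonal down left(7) allocation given in the text.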
- binarization such as fixed length binarization, unary binarization, truncated unary binarization, truncated Rice binarization, or the like may be used based on the index within the candidate group.
- a method of dividing the prediction modes into a plurality of prediction mode candidate groups according to the prediction method and the like, and assigning and encoding an index to a prediction mode belonging to the selected candidate group, corresponds to this.
- candidate group selection information encoding may precede the index encoding.
- for example, the directional mode, the non-directional mode, and the color mode, which are prediction modes that perform prediction in the same color space, may belong to one candidate group (candidate group S; there may be more than one such group), and the color copy mode, which is a prediction mode that performs prediction in another color space, may belong to another candidate group (candidate group D; there may likewise be more than one).
- the following assumes that nine prediction modes are supported in the prediction mode candidate groups for the color difference component, including preset prediction modes, color copy modes, and a color mode.
- the S candidate group may have five candidates configured with a preset prediction mode and a color mode
- the D candidate group may have four candidates configured with the color copy mode.
- since the S candidate group is an example of an adaptively configured prediction mode candidate group, and an example of adaptive index allocation has been described above, a detailed description thereof is omitted.
- since the D candidate group is an example of a fixed prediction mode candidate group, a fixed index allocation method may be used. For example, index allocation such as CP1(0) - CP2(1) - CP3(2) - CP4(3) may be performed.
- Binarization such as fixed length binarization, unary binarization, truncated unary binarization, truncated Rice binarization, or the like, may be used based on the index within the candidate group.
- various modifications may be possible without being limited to the above examples.
- Candidate group configuration such as MPM for prediction mode encoding may be performed in units of blocks.
- in this case, the process of constructing the candidate group may be omitted, and a predefined candidate group or a candidate group obtained by various methods may be used instead. This may be a configuration supported for the purpose of reducing complexity.
- as an example, one predefined candidate group may be used, or one of a plurality of predefined candidate groups may be used according to the encoding/decoding setting.
- for example, a predefined candidate group such as {Planar, DC, Vertical, Horizontal, Diagonal down left (mode 66 of FIG. 5), Diagonal down right (mode 34 of FIG. 5)} may be used.
- a candidate group of encoded blocks may be used.
- the block in which encoding is completed may be selected based on an encoding order (a predetermined scan method, for example, z-scan, vertical scan, horizontal scan, etc.), or selected from blocks adjacent to the current block, such as the left, top, top-left, top-right, and bottom-left blocks. However, adjacent blocks fall into partition units that can be referenced from the current block (for example, when the slice or tile to which each block belongs is different but the blocks belong to the same tile group) and partition units that cannot be referenced (for example, when the slice or tile to which each block belongs is different and has a non-referenceable attribute, such as belonging to a different tile group); a block belonging to a non-referenceable partition unit may be excluded from the candidates.
- the adjacent block may be determined according to the state of the current block. For example, when the current block has a square shape, the candidate group of an available block among the blocks located according to a predetermined first priority may be borrowed (or shared). Alternatively, when the current block has a rectangular shape, the candidate group of an available block among the blocks located according to a second priority may be borrowed. In this case, the second or a third priority may be supported according to the width/height ratio of the block.
- the priority for selecting the candidate block to borrow from may have various configurations, such as left - top - top-right - bottom-left - top-left or top - left - top-right - bottom-left - top-left. The first to third priorities may all have the same configuration, all have different configurations, or partially share the same configuration.
- the candidate group of the current block may be borrowed from an adjacent block only when the block size is greater than or equal to (or greater than) a predetermined boundary value.
- alternatively, the borrowing from an adjacent block may be performed only when the block size is less than or equal to (or less than) a predetermined boundary value.
- the boundary value may be defined as the minimum or maximum size of a block for which candidate group borrowing is allowed.
- the boundary value may be expressed as the width (W), height (H), W x H, W*H, etc. of the block, where W and H may be integers of 4, 8, 16, 32 or more.
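The size-bounded borrowing condition can be sketched as below. The specific minimum and maximum area thresholds are illustrative assumptions; the text only requires that some boundary value on W, H, or W*H exist.

```python
# Sketch of the candidate-group borrowing condition: borrowing from an
# adjacent block is allowed only when the current block size lies within
# predetermined boundary values. Thresholds here are hypothetical.

MIN_AREA, MAX_AREA = 4 * 4, 32 * 32  # assumed minimum/maximum W*H

def may_borrow_candidates(width, height):
    area = width * height
    return MIN_AREA <= area <= MAX_AREA
```

Depending on the setting, only one of the two bounds might be active, matching the "minimum size or maximum size" wording above.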
- a common candidate group may be configured in an upper block composed of a bundle of predetermined blocks, and lower blocks belonging to the upper block may use the candidate group.
- the number of lower blocks may be an integer of 1 or more, such as 1, 2, 3, and 4.
- the upper block may be an ancestor block (including a parent block) of the lower block or may be a block composed of any bundle.
- the ancestor block may refer to a block before division at a previous step (with a division depth difference of 1 or more) of the division process for obtaining the lower block.
- for example, the parent block of sub-blocks 0 and 1 of size 4N x 2N may indicate the 4N x 4N block of FIG. 4(a).
- the candidate group of the upper block may be borrowed (or shared) by a lower block only when the size of the upper block is greater than or equal to (or greater than) a predetermined first boundary value.
- alternatively, the borrowing by the lower block may be performed only when the size is less than or equal to (or less than) a predetermined second boundary value.
- the boundary values may be defined as the minimum or maximum size of a block for which candidate group borrowing is allowed. Only one of the boundary values may be supported, or both may be supported, and a boundary value may be expressed as the width (W), height (H), W x H, W*H, etc. of the block, where W and H may be integers of 8, 16, 32, 64 or more.
- alternatively, the lower block may borrow the candidate group of the upper block only when the size of the lower block is greater than or equal to (or greater than) a predetermined third boundary value.
- or the borrowing from the upper block may be performed only when the size is less than or equal to (or less than) a predetermined fourth boundary value.
- the boundary values may be defined as the minimum or maximum size of a block for which candidate group borrowing is allowed. Only one of the boundary values may be supported, or both may be supported, and a boundary value may be expressed as the width (W), height (H), W x H, W*H, etc. of the block, where W and H may be integers of 4, 8, 16, 32 or more.
- the first boundary value (or the second boundary value) may be greater than or equal to the third boundary value (or the fourth boundary value).
- candidate group borrowing may be used selectively based on any one of the above-described embodiments, or based on a combination of at least two of them. Further, the detailed configurations of each embodiment may also be used selectively, either individually or in combination.
- information about whether the candidate group is borrowed may be explicitly processed.
- coding elements such as the image type and the color component may serve as input variables in the candidate group borrowing setting, and the borrowing of candidate groups may be performed based on this information and the encoding/decoding settings.
- the prediction related information generated by the prediction mode encoder may be transmitted to the encoder and may be included in the bitstream.
- the intra prediction may be configured as follows.
- the intra prediction of the prediction unit may include a prediction mode decoding step, a reference pixel construction step, and a prediction block generation step.
- the image decoding apparatus may be configured to include a prediction mode decoding unit, a reference pixel construction unit, and a prediction block generation unit that implement the prediction mode decoding step, the reference pixel construction step, and the prediction block generation step.
- prediction mode decoding may be performed by inversely using the method used by the prediction mode encoder.
- here, the coding block is referred to as a parent block, and a sub-block thereof may be referred to as a child block.
- the subblock may be a unit in which prediction is performed or a unit in which transformation is performed.
- the coding order of sub-blocks may be determined according to various order combinations of a to p of FIG. 8. For example, it may follow one of z-scan (left to right, top to bottom), vertical scan (top to bottom), horizontal scan (left to right), inverse vertical scan (bottom to top), and inverse horizontal scan (right to left).
- the encoding order may be an order pre-committed to the image encoder / decoder.
- the coding order of the subblock may be determined in consideration of the split direction of the parent block. For example, when the parent block is divided in the horizontal direction, the coding order of the sub blocks may be determined by vertical scan. When the parent block is divided in the vertical direction, the coding order of the sub blocks may be determined by horizontal scan.
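The split-direction rule just described can be sketched as follows; the string labels for split directions and scans, and the z-scan fallback for other cases, are assumptions of this sketch.

```python
# Sketch of determining the sub-block coding order from the split direction
# of the parent block: horizontal split -> vertical scan, vertical split ->
# horizontal scan, otherwise a default z-scan.

def subblock_scan(split_direction):
    if split_direction == "horizontal":
        return "vertical_scan"    # sub-blocks stacked top to bottom
    if split_direction == "vertical":
        return "horizontal_scan"  # sub-blocks side by side, left to right
    return "z_scan"               # assumed default for other partitions
```

This keeps each sub-block adjacent to the previously reconstructed one, which is why the reference data can be obtained from closer positions.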
- since the reference data used for prediction can be obtained from closer positions, and only one prediction mode is generated and shared among the sub-blocks, this may be efficient.
- for example, when encoding is performed in units of the parent block, the bottom-right sub-block performs prediction using pixels adjacent to the parent block.
- when encoding is performed in units of sub-blocks, however, the bottom-right sub-block can perform prediction using pixels closer than those of the parent block, because the top-left, top-right, and bottom-left sub-blocks are reconstructed first according to a predetermined encoding order (z-scan in this example).
- a candidate group may be configured based on one or more partition types that are more likely to occur for intra prediction according to an optimal partition considering the image characteristics.
- partition information is generated using an index-based partitioning method.
- Candidate groups can be configured in various division forms as follows.
- candidate groups composed of N divided forms may be configured.
- N may be an integer greater than or equal to 2.
- the candidate group may include a combination of at least two of the seven divided forms illustrated in FIG. 8.
- the parent block may be divided into predetermined sub-blocks by selectively using any one of a plurality of division forms belonging to the candidate group.
- the selection may be performed based on an index signaled by the image encoding apparatus.
- the index may mean information for specifying a partition type of the parent block.
- the selection may be performed in consideration of the attribute of the parent block in the image decoding apparatus.
- the attribute may be the position, size, shape, width, height, width/height ratio, split depth, image type (I/P/B), color component (e.g., luminance, color difference), the value of the intra prediction mode, whether the intra prediction mode is a non-directional mode, the angle of the intra prediction mode, the position of the reference pixels, and the like, of the block.
- the block (coding block) may mean a prediction block and/or a transform block corresponding to the coding block.
- the position of the block may mean whether the block is located at a boundary of a predetermined image (or fragment image) of the parent block.
- the image (or fragment image) may mean at least one of a picture, a slice group, a tile group, a slice, a tile, a CTU row, and a CTU to which a parent block belongs.
- as an example, candidate groups such as {a to d} and {a to g} of FIG. 8 may be configured, which may be candidate group configurations considering various partition forms. Assuming {a to d}, various binary bits can be assigned to each index (assuming that the indices are assigned in alphabetical order).
- bin type 1 may be an example of binarization considering all possible partition types; in bin type 2, a bit (the first bit) indicating whether to split is allocated first, and when the block is split (the first bit is 1), binarization may be performed over the possible partition types excluding only the non-split candidate.
- as another example, candidate groups such as {a, c, d} and {a, f, g} of FIG. 8 may be configured, which may be candidate group configurations considering division in a specific direction (horizontal or vertical). Assuming {a, f, g}, various binary bits can be allocated to each index.
- Table 2 is an example of binarization allocated based on a property of the block, here the shape of the block. Bin type 1 may be an example of allocating 1 bit when the parent block has a square shape and 2 bits to the horizontal and vertical split candidates.
- bin type 2 may be an example in which, when the parent block is a horizontally long rectangle, 1 bit is allocated to the horizontal split and 2 bits to the remaining candidates.
- bin type 3 may be an example in which, when the parent block is a vertically long rectangle, 1 bit is allocated to the vertical split and 2 bits to the remaining candidates. These are examples of allocating shorter bits to the partition judged more likely to occur given the shape of the parent block, but modifications including the opposite case are also possible, without being limited thereto.
- candidate groups such as ⁇ a, c, d, f, g ⁇ of FIG. 8 may be configured, which may be another example of a candidate group configuration considering division of a specific direction.
- in bin type 1, a flag (the first bit) indicating whether to split is allocated first, followed by a flag distinguishing the number of partitions: if that flag (the second bit) is 0, the number of partitions may be two, and if it is 1, four. The subsequent flag may represent the division direction, where 0 may denote horizontal division and 1 vertical division.
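The bin-type-1 scheme just described can be sketched as follows. The mapping of the letters a, c, d, f, g of FIG. 8 to concrete partition forms (no split, 2-way and 4-way splits in each direction) is an assumption inferred from the text.

```python
# Sketch of bin-type-1 binarization for the partition candidate group:
# split flag, then a partition-count flag (0 -> two partitions, 1 -> four),
# then a direction flag (0 -> horizontal, 1 -> vertical).

def binarize_partition(split, count=None, direction=None):
    if not split:
        return "0"                              # no split: single bit
    bits = "1"                                  # split flag
    bits += "0" if count == 2 else "1"          # partition-count flag
    bits += "0" if direction == "horizontal" else "1"  # direction flag
    return bits
```

So the no-split candidate costs one bit while every split candidate costs three, reflecting the prefix structure described above.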
- the partitioning information may be a configuration supported in a general situation, but may be modified to another setting according to an encoding / decoding environment. That is, it may be possible to support an exceptional configuration regarding the partition information or to replace the partition type represented by the partition information with another partition type.
- the block may mean at least one of a parent block or a sub block.
- for example, suppose the supported partition types are {a, f, g} of FIG. 8 and the size of the parent block is 4M x 4N. If some partition forms are not supported according to the minimum-size condition of blocks in the image and the boundary of a predetermined image (or fragment image) of the parent block, various kinds of processing may be possible. For the following example, assume that the minimum width of a block in the image is 2M and the minimum area of a block is 4*M*N.
- the candidate group may be reconfigured except for the unobtainable split form.
- the candidates present in the existing candidate group are 4M x 4N, 4M x N, and M x 4N,
- and the candidate group reconstructed excluding the unobtainable candidate (M x 4N, whose width falls below the minimum) may be 4M x 4N and 4M x N.
- binarization may then be performed again on the candidates in the reconstructed candidate group.
- for example, one flag (1 bit) may select between 4M x 4N and 4M x N.
- alternatively, the candidate group may be reconfigured by substituting another candidate for the unobtainable split form.
- here, the unobtainable split form is a split (into four partitions) in the vertical direction.
- the candidate group may therefore be reconfigured by substituting another split form that maintains the vertical division direction (e.g., 2M x 4N). This may maintain the flag configuration according to the existing partition information.
- in summary, the candidate group may be reconfigured by adjusting the number of candidates or by replacing an existing candidate, and various modifications of the example may be possible.
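The exclusion variant of the 4M x 4N example can be sketched as below. Using concrete numbers M = N = 1 and (width, height) tuples is an illustration choice; the constraints (minimum width 2M, minimum area 4*M*N) follow the example above.

```python
# Sketch of reconfiguring the partition-type candidate group by dropping
# forms that violate the minimum-size constraints of the example above.

def reconfigure(candidates, min_width, min_area):
    return [(w, h) for (w, h) in candidates
            if w >= min_width and w * h >= min_area]

M = N = 1  # symbolic sizes instantiated for illustration
existing = [(4*M, 4*N), (4*M, N), (M, 4*N)]   # 4Mx4N, 4MxN, Mx4N
rebuilt = reconfigure(existing, min_width=2*M, min_area=4*M*N)
# Mx4N is dropped (its width M is below 2M); 4Mx4N and 4MxN remain,
# so one 1-bit flag suffices to select between them.
```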
- the encoding order of various subblocks as shown in FIG. 8 may be set.
- the coding order may be implicitly determined according to the encoding / decoding setting.
- the partition form, the image type, the color component, the size/shape/position of the parent block, the width/height ratio of the block, prediction mode related information (e.g., the intra prediction mode, the reference pixel position used, etc.), the division state, and the like may be included in the encoding/decoding elements.
- the candidate group may be formed of candidates having a high probability according to the division type, and selection information on one of them may be generated.
- the coding order candidates supported according to the partitioning form may be adaptively configured.
- although the sub-blocks may be encoded using one fixed encoding order, a method of applying an adaptive encoding order may also be possible.
- many coding orders may be obtained according to the various order assignments a to p shown in the drawing. Since the position and number of the sub-blocks obtained may vary with each partition type, configuring the encoding order specifically for each partition type may be important. For a partition form such as FIG. 8(a), in which division into sub-blocks is not performed, no processing of the encoding order is necessary. Therefore, when the information about the encoding order is processed explicitly, the information about the encoding order of the sub-blocks may be generated based on the selected partition information after the partition information has first been checked.
- for example, a vertical scan in which 0 and 1 are assigned to a and b, and an inverse scan in which 1 and 0 are assigned, may be supported as candidates, and a 1-bit flag for selecting one of them may be generated.
- when the intra prediction mode is the vertical mode, the horizontal mode, or a mode in the Diagonal down right direction (modes 19 to 49), vertical scan and horizontal scan may be determined.
- when the intra prediction mode is a mode in the Diagonal down left direction (e.g., mode 51 or more), vertical scan and inverse horizontal scan may be determined.
- when the intra prediction mode is a mode in the Diagonal up right direction (mode 17 or less), inverse vertical scan and horizontal scan may be determined.
- the example may be an encoding order according to a predetermined scan order.
- the predetermined scan order may be one of z-scan, vertical scan, and horizontal scan.
- the scanning order may be determined according to the position and distance of the pixel referred to in the intra prediction. Inverse scan can be considered further for this purpose.
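The mode-dependent scan selection can be sketched as follows, using the mode-number ranges of FIG. 5 given above; the tuple return (one scan per split direction) and the string labels are assumptions of this sketch.

```python
# Sketch of choosing the scans from the direction of the intra prediction
# mode: modes >= 51 (diagonal down left) -> vertical scan + inverse
# horizontal scan; modes <= 17 (diagonal up right) -> inverse vertical scan
# + horizontal scan; other directional modes keep the default scans.

def scans_for_mode(intra_mode):
    if intra_mode >= 51:                 # Diagonal down left direction
        return ("vertical_scan", "inverse_horizontal_scan")
    if intra_mode <= 17:                 # Diagonal up right direction
        return ("inverse_vertical_scan", "horizontal_scan")
    return ("vertical_scan", "horizontal_scan")  # default directional case
```

The inverse scans start from the sub-block nearest the referenced region, so each sub-block is predicted from the most recently reconstructed neighbors.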
- FIG. 9 is an exemplary diagram of a reference pixel area used based on an intra prediction mode according to an embodiment of the present invention. Referring to FIG. 9, it can be seen that a region referred to according to the direction of the prediction mode is shaded.
- FIG. 9A illustrates an example in which adjacent areas of the parent block are divided into left, top, top left, top right, and bottom left areas.
- FIG. 9 (b) shows the left and bottom left regions referred to in the mode of the diagonal up right direction.
- FIG. 9 (c) shows the left region referred to in the horizontal mode.
- FIG. 9 (d) shows the top left, top, and left regions referred to in the mode of the diagonal down right direction.
- FIG. 9 (e) shows the top region referred to in the vertical mode.
- FIG. 9 (f) shows the top and top right regions referred to in the mode of the diagonal down left direction.
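As a rough sketch, the region lookup illustrated in FIG. 9 might be expressed as a table. The mode and region names are illustrative labels of ours; the entry for the diagonal down right mode is an assumption based on the shaded regions described above:

```python
def reference_regions(mode: str) -> set:
    """Illustrative mapping from prediction direction to the neighboring
    regions of the parent block (FIG. 9 (a) naming) that are referenced."""
    table = {
        "diagonal_up_right": {"left", "bottom_left"},      # FIG. 9 (b)
        "horizontal": {"left"},                            # FIG. 9 (c)
        "diagonal_down_right": {"top_left", "top", "left"},# FIG. 9 (d), assumed
        "vertical": {"top"},                               # FIG. 9 (e)
        "diagonal_down_left": {"top", "top_right"},        # FIG. 9 (f)
    }
    return table[mode]
```

A coding order could then give priority to sub-blocks adjacent to the returned regions, as the following figures illustrate.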
- the coding order of the sub-blocks may be predefined so that related information need not be separately signaled. Since various division forms are possible, the following examples may be given based on the referenced region (or prediction mode).
- FIG. 10 illustrates an example of an encoding sequence that may be provided in a prediction mode in a diagonal up right direction according to an embodiment of the present invention.
- An example in which priority is assigned to the sub-blocks adjacent in the lower left direction may be confirmed through (a) to (g) of FIG. 10.
- FIG. 11 illustrates an example of an encoding sequence that may be provided in the horizontal mode according to an embodiment of the present invention.
- An example in which priority is assigned to the sub-blocks adjacent in the left direction may be confirmed through (a) to (g) of FIG. 11.
- FIG. 12 illustrates an example of an encoding sequence that may be provided in a prediction mode in a diagonal down right direction according to an embodiment of the present invention.
- An example in which priority is assigned to the sub-blocks adjacent in the upper left direction may be confirmed through (a) to (g) of FIG. 12.
- FIG. 13 illustrates an example of an encoding sequence that may be provided in the vertical mode according to an embodiment of the present invention.
- An example in which priority is assigned to the sub-blocks adjacent in the upper direction may be confirmed through (a) to (g) of FIG. 13.
- FIG. 14 illustrates an example of an encoding sequence that may be provided in a mode of a diagonal down left direction according to an embodiment of the present invention.
- An example in which priority is assigned to the sub-blocks adjacent in the upper right direction may be confirmed through (a) to (g) of FIG. 14.
- the above example is one way of defining the coding order from an adjacent encoded/decoded region, and other modifications are possible.
- various configurations in which the encoding order is defined according to other encoding/decoding elements may also be possible.
- FIG. 15 is an exemplary diagram for an encoding sequence considering an intra prediction mode and a split form according to an embodiment of the present invention.
- Referring to FIG. 15, the coding order of the sub-blocks can be implicitly determined for each division type according to the intra prediction mode.
- an example of reconfiguring the candidate group, in which an unobtainable division form is replaced with another division form, is described. For convenience of explanation, it is assumed that the parent block is 4M x 4N.
- In (a) of FIG. 15, the block may be divided into the 4M x N form for intra prediction on a sub-block basis.
- the inverse vertical scan order may be followed. If the 4M x N division form cannot be obtained, division into the 2M x 2N form according to a predetermined priority may be performed. Since the coding order for this case is shown in the figure, a detailed description is omitted. If the 2M x 2N form cannot be obtained, division into the 4M x 2N form of the next priority may be performed. Division may be supported according to the predetermined priority as described above, and division on a sub-block basis may be performed. If none of the predefined division forms can be obtained, the corresponding parent block cannot be divided into sub-blocks, and thus encoding may be performed on the parent block.
- In (b) of FIG. 15, the block may be divided into the M x 4N form for intra prediction on a sub-block basis. It is assumed that the encoding order is determined in advance based on each division form as shown in the figure. In this example, it can be seen that the form is replaced in the order of 2M x 2N, then 2M x 4N.
- In (e) of FIG. 15, the block may be divided into the M x 4N form for intra prediction on a sub-block basis.
- In this case, the form is replaced in the order of 2M x 2N, then 2M x 4N.
- the above example supports division into other forms in a predetermined order when a division form cannot be obtained, and various modifications may be possible.
- For example, it may be possible that the M x 4N or 4M x N form, in which division is performed, is replaced with the 2M x 4N or 4M x 2N form when it cannot be obtained.
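The fallback behavior described for (a) of FIG. 15 can be sketched as a priority search over division forms. The function name and shape representation are assumptions; the priority list mirrors the 4M x N, then 2M x 2N, then 4M x 2N order given above:

```python
def choose_partition(parent_w: int, parent_h: int, obtainable: set) -> tuple:
    """Try division forms in a fixed priority order; fall back to the
    parent block itself when no listed form is obtainable.

    parent_w, parent_h correspond to 4M and 4N in the text, so the
    candidates below are 4M x N, 2M x 2N, and 4M x 2N.
    """
    priority = [
        (parent_w, parent_h // 4),       # 4M x N
        (parent_w // 2, parent_h // 2),  # 2M x 2N
        (parent_w, parent_h // 2),       # 4M x 2N
    ]
    for shape in priority:
        if shape in obtainable:
            return shape
    # None of the predefined forms is obtainable: encode the parent block.
    return (parent_w, parent_h)
```

With a 16 x 16 parent block (M = N = 4), an obtainable set lacking 16 x 4 would fall through to 8 x 8, and an empty set would return the parent block itself.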
- the division setting of the sub-block may be determined according to various encoding/decoding elements, and the encoding/decoding elements may be derived from the foregoing description of sub-block division.
- prediction and transformation may be performed as they are.
- the intra prediction mode may be determined in units of the parent block, and prediction may be performed accordingly.
- related settings for the transform and inverse transform may be determined based on the parent block.
- alternatively, related settings may be determined based on the sub-block, and the sub-block unit transform and inverse transform may be performed according to those settings.
- the transform and inverse transform may be performed based on one of the above settings.
- the methods according to the present invention may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium.
- the computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
- the program instructions recorded on the computer-readable medium may be specially designed and constructed for the present invention, or may be known and available to those skilled in computer software.
- examples of computer-readable media include hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- examples of program instructions include machine code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter.
- the hardware device described above may be configured to operate as at least one software module to perform the operations of the present invention, and vice versa.
- the above-described methods or apparatuses may be implemented by combining all or part of their configurations or functions, or may be implemented separately.
- the present invention can be used to encode / decode an image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
Description
idx | bin type 1 | bin type 2
---|---|---
0 | 00 | 0
1 | 01 | 10
2 | 10 | 110
3 | 11 | 111

idx | bin type 1 | bin type 2 | bin type 3
---|---|---|---
0 | 0 | 10 | 10
1 | 10 | 0 | 11
2 | 11 | 11 | 0

idx | bin type 1 | bin type 2
---|---|---
0 | 0 | 0
1 | 100 | 100
2 | 101 | 110
3 | 110 | 101
4 | 111 | 111
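For illustration, "bin type 1" of the first table matches a 2-bit fixed-length binarization and "bin type 2" matches a truncated unary binarization. A minimal sketch, with function names of ours rather than from the specification:

```python
def fixed_length(idx: int, bits: int) -> str:
    """Fixed-length binarization: each index maps to a fixed number of
    bins, e.g. bits=2 gives 00, 01, 10, 11 ('bin type 1', first table)."""
    return format(idx, "0{}b".format(bits))

def truncated_unary(idx: int, max_idx: int) -> str:
    """Truncated unary binarization: idx ones followed by a terminating
    zero, with the zero omitted at the maximum index, e.g. max_idx=3
    gives 0, 10, 110, 111 ('bin type 2', first table)."""
    if idx < max_idx:
        return "1" * idx + "0"
    return "1" * idx
```

The remaining columns reorder or lengthen the codewords so that shorter bins can be assigned to the indices expected to occur more often.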
Claims (1)
- An intra prediction method comprising: deriving an intra prediction mode of a current block; checking whether the current block is divided to determine whether to perform intra prediction on a sub-block basis; determining an encoding order of the sub-blocks based on the intra prediction mode; and generating a prediction block based on the sub-block encoding order and the intra prediction mode.
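The claimed steps can be sketched as a minimal flow. All helper functions, the dictionary representation of a block, and the example ordering rule are hypothetical illustrations, not part of the claim:

```python
def derive_intra_mode(block: dict) -> str:
    # Hypothetical: a real decoder derives the mode from the bitstream.
    return block.get("mode", "vertical")

def coding_order(subs: list, mode: str) -> list:
    # Hypothetical ordering rule standing in for the mode-dependent
    # orders of FIGS. 10 to 14: reverse the order for the vertical mode.
    return list(reversed(subs)) if mode == "vertical" else list(subs)

def intra_predict(block: dict) -> list:
    """Sketch of the claimed flow: derive the intra mode, check the
    split, order the sub-blocks by the mode, then predict in order."""
    mode = derive_intra_mode(block)
    subs = block.get("subs")
    if subs:  # intra prediction on a sub-block basis
        return [("pred", s, mode) for s in coding_order(subs, mode)]
    return [("pred", "whole", mode)]
```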
Priority Applications (23)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112020020213-4A BR112020020213A2 (pt) | 2018-04-01 | 2019-04-01 | Método e aparelho para codificar/decodificar imagem |
CA3095769A CA3095769C (en) | 2018-04-01 | 2019-04-01 | Method and apparatus for encoding/decoding image |
NZ769114A NZ769114A (en) | 2018-04-01 | 2019-04-01 | Method and apparatus for encoding/decoding image |
CN202311458822.XA CN117692639A (zh) | 2018-04-01 | 2019-04-01 | 图像编码/解码方法、介质和传送比特流的方法 |
CN201980023756.9A CN111937395B (zh) | 2018-04-01 | 2019-04-01 | 用于编码/解码图像的方法和装置 |
PE2020001495A PE20211404A1 (es) | 2018-04-01 | 2019-04-01 | Metodo y aparato para codificar/decodificar imagen |
US17/040,765 US11297309B2 (en) | 2018-04-01 | 2019-04-01 | Method and apparatus for encoding/decoding image |
KR1020217007603A KR20210031783A (ko) | 2018-04-01 | 2019-04-01 | 영상 부호화/복호화 방법 및 장치 |
CN202311451276.7A CN117692638A (zh) | 2018-04-01 | 2019-04-01 | 图像编码/解码方法、介质和传送比特流的方法 |
AU2019247240A AU2019247240B2 (en) | 2018-04-01 | 2019-04-01 | Method and apparatus for encoding/decoding image |
CN202311448655.0A CN117692637A (zh) | 2018-04-01 | 2019-04-01 | 图像编码/解码方法、介质和传送比特流的方法 |
SG11202009302RA SG11202009302RA (en) | 2018-04-01 | 2019-04-01 | Method and apparatus for encoding/decoding image |
EP19781285.2A EP3780620A4 (en) | 2018-04-01 | 2019-04-01 | METHODS AND APPARATUS FOR IMAGE ENCODING/DECODING |
JP2020553472A JP7152503B2 (ja) | 2018-04-01 | 2019-04-01 | 映像符号化/復号化方法及び装置 |
MX2020010314A MX2020010314A (es) | 2018-04-01 | 2019-04-01 | Metodo y aparato para codificar/decodificar imagen. |
KR1020207000412A KR102378882B1 (ko) | 2018-04-01 | 2019-04-01 | 영상 부호화/복호화 방법 및 장치 |
RU2020134739A RU2752011C1 (ru) | 2018-04-01 | 2019-04-01 | Способ и оборудование для кодирования/декодирования изображения |
PH12020551469A PH12020551469A1 (en) | 2018-04-01 | 2020-09-11 | Method and apparatus for encoding/decoding image |
CONC2020/0011703A CO2020011703A2 (es) | 2018-04-01 | 2020-09-23 | Método y aparato para codificar/decodificar imagen |
US17/680,674 US20220182602A1 (en) | 2018-04-01 | 2022-02-25 | Method and apparatus for encoding/decoding image |
AU2022204573A AU2022204573B2 (en) | 2018-04-01 | 2022-06-28 | Method and apparatus for encoding/decoding image |
ZA2022/10140A ZA202210140B (en) | 2018-04-01 | 2022-09-13 | Method and apparatus for encoding/decoding image |
JP2022155326A JP2022177266A (ja) | 2018-04-01 | 2022-09-28 | 映像符号化/復号化方法及び装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0037812 | 2018-04-01 | ||
KR20180037812 | 2018-04-01 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/040,765 A-371-Of-International US11297309B2 (en) | 2018-04-01 | 2019-04-01 | Method and apparatus for encoding/decoding image |
US17/680,674 Continuation US20220182602A1 (en) | 2018-04-01 | 2022-02-25 | Method and apparatus for encoding/decoding image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019194485A1 true WO2019194485A1 (ko) | 2019-10-10 |
Family
ID=68100865
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/003777 WO2019194485A1 (ko) | 2018-04-01 | 2019-04-01 | 영상 부호화/복호화 방법 및 장치 |
Country Status (18)
Country | Link |
---|---|
US (2) | US11297309B2 (ko) |
EP (1) | EP3780620A4 (ko) |
JP (2) | JP7152503B2 (ko) |
KR (2) | KR20210031783A (ko) |
CN (4) | CN111937395B (ko) |
AU (2) | AU2019247240B2 (ko) |
BR (1) | BR112020020213A2 (ko) |
CA (2) | CA3207701A1 (ko) |
CL (1) | CL2020002508A1 (ko) |
CO (1) | CO2020011703A2 (ko) |
MX (4) | MX2020010314A (ko) |
NZ (1) | NZ769114A (ko) |
PE (1) | PE20211404A1 (ko) |
PH (1) | PH12020551469A1 (ko) |
RU (2) | RU2752011C1 (ko) |
SG (1) | SG11202009302RA (ko) |
WO (1) | WO2019194485A1 (ko) |
ZA (1) | ZA202210140B (ko) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210015963A (ko) | 2018-06-29 | 2021-02-10 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 인트라-예측을 위한 디바이스 및 방법 |
US10284844B1 (en) * | 2018-07-02 | 2019-05-07 | Tencent America LLC | Method and apparatus for video coding |
WO2020258020A1 (zh) * | 2019-06-25 | 2020-12-30 | Oppo广东移动通信有限公司 | 信息处理方法及装置、设备、存储介质 |
CN114097224B (zh) * | 2019-06-25 | 2023-07-28 | 日本放送协会 | 帧内预测装置、图像解码装置及程序 |
WO2022108419A1 (ko) * | 2020-11-23 | 2022-05-27 | 현대자동차주식회사 | 선택적 서브블록 분할정보 전송을 이용하는 영상 부호화 및 복호화 방법과 장치 |
WO2022177317A1 (ko) * | 2021-02-18 | 2022-08-25 | 현대자동차주식회사 | 서브블록 분할 기반 인트라 예측을 이용하는 비디오 코딩방법 및 장치 |
WO2022197135A1 (ko) * | 2021-03-19 | 2022-09-22 | 현대자동차주식회사 | 분할된 서브블록의 적응적 순서를 이용하는 비디오 코딩방법 및 장치 |
US11818395B2 (en) * | 2021-04-22 | 2023-11-14 | Electronics And Telecommunications Research Institute | Immersive video decoding method and immersive video encoding method |
KR20230175203A (ko) * | 2021-04-22 | 2023-12-29 | 엘지전자 주식회사 | 세컨더리 mpm 리스트를 이용하는 인트라 예측 방법및 장치 |
WO2023022389A1 (ko) * | 2021-08-19 | 2023-02-23 | 현대자동차주식회사 | 직사각형이 아닌 블록 분할 구조를 이용하는 비디오 코딩방법 및 장치 |
WO2023038315A1 (ko) * | 2021-09-08 | 2023-03-16 | 현대자동차주식회사 | 서브블록 코딩 순서 변경 및 그에 따른 인트라 예측을 이용하는 비디오 코딩방법 및 장치 |
WO2023049486A1 (en) * | 2021-09-27 | 2023-03-30 | Beijing Dajia Internet Information Technology Co., Ltd. | Adaptive coding order for intra prediction in video coding |
US20230104476A1 (en) * | 2021-10-05 | 2023-04-06 | Tencent America LLC | Grouping based adaptive reordering of merge candidate |
WO2023128615A1 (ko) * | 2021-12-29 | 2023-07-06 | 엘지전자 주식회사 | 영상 인코딩/디코딩 방법 및 장치, 그리고 비트스트림을 저장한 기록 매체 |
WO2023153891A1 (ko) * | 2022-02-13 | 2023-08-17 | 엘지전자 주식회사 | 영상 인코딩/디코딩 방법 및 장치, 그리고 비트스트림을 저장한 기록 매체 |
WO2023224289A1 (ko) * | 2022-05-16 | 2023-11-23 | 현대자동차주식회사 | 가상의 참조라인을 사용하는 비디오 코딩을 위한 방법 및 장치 |
WO2024076134A1 (ko) * | 2022-10-05 | 2024-04-11 | 세종대학교산학협력단 | 동영상 인코딩 및 디코딩 장치와 방법 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014049982A1 (ja) * | 2012-09-28 | 2014-04-03 | 三菱電機株式会社 | 動画像符号化装置、動画像復号装置、動画像符号化方法及び動画像復号方法 |
KR20150074201A (ko) * | 2011-06-23 | 2015-07-01 | 가부시키가이샤 제이브이씨 켄우드 | 화상 인코딩 장치, 화상 인코딩 방법 및 화상 인코딩 프로그램, 및 화상 디코딩 장치, 화상 디코딩 방법 및 화상 디코딩 프로그램 |
JP2017139758A (ja) * | 2016-01-28 | 2017-08-10 | 日本放送協会 | 符号化装置、復号装置及びプログラム |
KR20170122351A (ko) * | 2016-04-26 | 2017-11-06 | 인텔렉추얼디스커버리 주식회사 | 화면 내 예측 방향성에 따른 적응적 부호화 순서를 사용하는 비디오 코딩 방법 및 장치 |
KR20180019008A (ko) * | 2011-01-13 | 2018-02-22 | 삼성전자주식회사 | 선택적 스캔 모드를 이용하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20000014092A (ko) * | 1998-08-17 | 2000-03-06 | 윤종용 | 인터폴레이션 필터 및 데시메이션 필터 |
US8503527B2 (en) * | 2008-10-03 | 2013-08-06 | Qualcomm Incorporated | Video coding with large macroblocks |
US9706204B2 (en) * | 2010-05-19 | 2017-07-11 | Sk Telecom Co., Ltd. | Image encoding/decoding device and method |
ES2891598T3 (es) * | 2010-11-04 | 2022-01-28 | Ge Video Compression Llc | Codificación de instantánea que soporta unión de bloques y modo de salto |
JP6039163B2 (ja) * | 2011-04-15 | 2016-12-07 | キヤノン株式会社 | 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム |
WO2013074964A1 (en) * | 2011-11-16 | 2013-05-23 | Vanguard Software Solutions, Inc. | Video compression for high efficiency video coding |
FR2993084A1 (fr) * | 2012-07-09 | 2014-01-10 | France Telecom | Procede de codage video par prediction du partitionnement d'un bloc courant, procede de decodage, dispositifs de codage et de decodage et programmes d'ordinateur correspondants |
KR20160102073A (ko) * | 2013-12-30 | 2016-08-26 | 퀄컴 인코포레이티드 | 3d 비디오 코딩에서 큰 예측 블록들의 세그먼트-와이즈 dc 코딩의 단순화 |
CN112954352A (zh) | 2015-11-24 | 2021-06-11 | 三星电子株式会社 | 视频解码方法和视频编码方法 |
WO2017131233A1 (ja) * | 2016-01-28 | 2017-08-03 | 日本放送協会 | 符号化装置、復号装置及びプログラム |
KR20170108367A (ko) * | 2016-03-17 | 2017-09-27 | 세종대학교산학협력단 | 인트라 예측 기반의 비디오 신호 처리 방법 및 장치 |
EP3451668A4 (en) * | 2016-04-26 | 2020-04-15 | Intellectual Discovery Co., Ltd. | METHOD AND DEVICE FOR CODING / DECODING AN IMAGE |
WO2017205704A1 (en) * | 2016-05-25 | 2017-11-30 | Arris Enterprises Llc | General block partitioning method |
US10880548B2 (en) * | 2016-06-01 | 2020-12-29 | Samsung Electronics Co., Ltd. | Methods and apparatuses for encoding and decoding video according to coding order |
-
2019
- 2019-04-01 KR KR1020217007603A patent/KR20210031783A/ko not_active Application Discontinuation
- 2019-04-01 EP EP19781285.2A patent/EP3780620A4/en active Pending
- 2019-04-01 CA CA3207701A patent/CA3207701A1/en active Pending
- 2019-04-01 AU AU2019247240A patent/AU2019247240B2/en active Active
- 2019-04-01 RU RU2020134739A patent/RU2752011C1/ru active
- 2019-04-01 US US17/040,765 patent/US11297309B2/en active Active
- 2019-04-01 BR BR112020020213-4A patent/BR112020020213A2/pt unknown
- 2019-04-01 PE PE2020001495A patent/PE20211404A1/es unknown
- 2019-04-01 NZ NZ769114A patent/NZ769114A/en unknown
- 2019-04-01 KR KR1020207000412A patent/KR102378882B1/ko active IP Right Grant
- 2019-04-01 CN CN201980023756.9A patent/CN111937395B/zh active Active
- 2019-04-01 CN CN202311458822.XA patent/CN117692639A/zh active Pending
- 2019-04-01 MX MX2020010314A patent/MX2020010314A/es unknown
- 2019-04-01 CN CN202311448655.0A patent/CN117692637A/zh active Pending
- 2019-04-01 SG SG11202009302RA patent/SG11202009302RA/en unknown
- 2019-04-01 JP JP2020553472A patent/JP7152503B2/ja active Active
- 2019-04-01 WO PCT/KR2019/003777 patent/WO2019194485A1/ko active Application Filing
- 2019-04-01 CN CN202311451276.7A patent/CN117692638A/zh active Pending
- 2019-04-01 RU RU2021120287A patent/RU2021120287A/ru unknown
- 2019-04-01 CA CA3095769A patent/CA3095769C/en active Active
-
2020
- 2020-09-11 PH PH12020551469A patent/PH12020551469A1/en unknown
- 2020-09-23 CO CONC2020/0011703A patent/CO2020011703A2/es unknown
- 2020-09-28 CL CL2020002508A patent/CL2020002508A1/es unknown
- 2020-09-29 MX MX2023007745A patent/MX2023007745A/es unknown
- 2020-09-29 MX MX2023007743A patent/MX2023007743A/es unknown
- 2020-09-29 MX MX2023007746A patent/MX2023007746A/es unknown
-
2022
- 2022-02-25 US US17/680,674 patent/US20220182602A1/en active Pending
- 2022-06-28 AU AU2022204573A patent/AU2022204573B2/en active Active
- 2022-09-13 ZA ZA2022/10140A patent/ZA202210140B/en unknown
- 2022-09-28 JP JP2022155326A patent/JP2022177266A/ja active Pending
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019194485A1 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2020004900A1 (ko) | 화면내 예측 방법 및 장치 | |
WO2018026219A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2018026118A1 (ko) | 영상 부호화/복호화 방법 | |
WO2017222326A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2017171370A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2018030599A1 (ko) | 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 | |
WO2017176030A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2017204532A1 (ko) | 영상 부호화/복호화 방법 및 이를 위한 기록 매체 | |
WO2018212577A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2018124843A1 (ko) | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 | |
WO2017222325A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2016195460A1 (ko) | 화면 내 예측에 대한 부호화/복호화 방법 및 장치 | |
WO2017192011A2 (ko) | 화면 내 예측을 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2016195453A1 (ko) | 영상 부호화 및 복호화 방법과 영상 복호화 장치 | |
WO2018174593A1 (ko) | 적응적인 화소 분류 기준에 따른 인루프 필터링 방법 | |
WO2017146526A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2019235887A1 (ko) | 인트라 예측 모드에 기초하여 변환 인덱스 코딩을 수행하는 방법 및 이를 위한 장치 | |
WO2018097626A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2020004902A1 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2019017651A1 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2019050292A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2018212579A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2018047995A1 (ko) | 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 | |
WO2019190201A1 (ko) | 비디오 신호 처리 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19781285 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20207000412 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3095769 Country of ref document: CA Ref document number: 2020553472 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 122022006115 Country of ref document: BR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112020020213 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2019781285 Country of ref document: EP Effective date: 20201102 |
|
ENP | Entry into the national phase |
Ref document number: 2019247240 Country of ref document: AU Date of ref document: 20190401 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112020020213 Country of ref document: BR Kind code of ref document: A2 Effective date: 20201001 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 523450127 Country of ref document: SA |