WO2017052118A1 - Method and apparatus for intra prediction in an image coding system - Google Patents
Method and apparatus for intra prediction in an image coding system
- Publication number
- WO2017052118A1 (PCT/KR2016/010062)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample
- prediction
- intra
- intra prediction
- mode
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/196—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present invention relates to image coding, and more particularly, to an intra prediction method and apparatus in an image coding system.
- the demand for high resolution and high quality images such as high definition (HD) images and ultra high definition (UHD) images is increasing in various fields.
- as image data become higher in resolution and quality, the amount of information or the bit rate to be transmitted increases relative to existing image data. Therefore, when image data are transmitted over a medium such as a conventional wired/wireless broadband line or stored on a conventional storage medium, the transmission cost and the storage cost increase.
- a high efficiency image compression technique is required to effectively transmit, store, and reproduce high resolution, high quality image information.
- An object of the present invention is to provide a method and apparatus for increasing intra prediction efficiency.
- Another object of the present invention is to provide a method and apparatus for adaptively deriving the partition direction of a block.
- Another technical problem of the present invention is to provide a method and apparatus for adjusting a prediction direction in units of samples in a block.
- Another technical problem of the present invention is to provide a method and apparatus for adjusting a prediction direction on a lower block basis.
- an intra prediction method performed by a decoding apparatus may include: receiving intra prediction mode information through a bitstream; deriving an intra prediction mode of a current block as a first intra directional mode based on the intra prediction mode information; deriving first neighboring samples of the current block; deriving a reference sample adjacent to a target sample in the current block based on a prediction direction of the first intra directional mode; deriving second neighboring samples of a neighboring block to which the reference sample belongs; determining a modified prediction direction based on a comparison between the reference sample and at least one of the second neighboring samples; and deriving a prediction value for the target sample based on the modified prediction direction and the first neighboring samples.
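The decoding-side steps above can be sketched as follows. This is a minimal, hypothetical illustration, not the patent's normative procedure: the function name and the closeness criterion used to choose the modified direction are assumptions made for the example.

```python
def modified_direction(ref_sample, second_neighbors, base_direction):
    """Pick a modified prediction direction for one target sample.

    ref_sample: value of the reference sample adjacent to the target sample.
    second_neighbors: [left, center, right] samples of the neighboring block
        to which the reference sample belongs (center aligns with base_direction).
    base_direction: index of the first intra directional mode.
    Returns base_direction shifted toward the second neighboring sample whose
    value is closest to the reference sample (an illustrative comparison rule).
    """
    best_offset = 0
    best_diff = abs(second_neighbors[1] - ref_sample)
    for offset, neighbor in ((-1, second_neighbors[0]), (1, second_neighbors[2])):
        diff = abs(neighbor - ref_sample)
        if diff < best_diff:
            best_offset, best_diff = offset, diff
    return base_direction + best_offset
```

For example, if the reference sample matches the left second neighboring sample better than the center one, the prediction direction index is decreased by one for that target sample.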
- an intra prediction method performed by an encoding apparatus may include: deriving a first intra directional mode and first neighboring samples of a current block; determining a reference sample adjacent to a target sample in the current block based on a prediction direction of the first intra directional mode; deriving second neighboring samples of a neighboring block to which the reference sample belongs; determining a modified prediction direction based on a comparison between the reference sample and at least one of the second neighboring samples; deriving a prediction value for the target sample based on the modified prediction direction and the first neighboring samples; and encoding and outputting intra prediction mode information indicating the first intra directional mode.
- according to the present invention, a decoding apparatus for performing intra prediction is provided.
- the decoding apparatus may include an entropy decoding unit configured to receive intra prediction mode information through a bitstream, and a prediction unit configured to derive an intra prediction mode of a current block as a first intra directional mode based on the intra prediction mode information, derive first neighboring samples of the current block, derive a reference sample adjacent to a target sample in the current block based on a prediction direction of the first intra directional mode, derive second neighboring samples of a neighboring block to which the reference sample belongs, determine a modified prediction direction based on a comparison between the reference sample and at least one of the second neighboring samples, and derive a prediction value for the target sample based on the modified prediction direction and the first neighboring samples.
- an encoding apparatus for performing intra prediction may include a prediction unit configured to derive a first intra directional mode and first neighboring samples of a current block, determine a reference sample adjacent to a target sample in the current block based on a prediction direction of the first intra directional mode, derive second neighboring samples of a neighboring block to which the reference sample belongs, determine a modified prediction direction based on a comparison between the reference sample and at least one of the second neighboring samples, and derive a prediction value for the target sample based on the modified prediction direction and the first neighboring samples, and an entropy encoding unit configured to encode and output intra prediction mode information indicating the first intra directional mode.
- a reference sample of a target sample of the current block may be selected based on the intra prediction mode, thereby improving intra prediction performance.
- the prediction direction can be adjusted on a per-sample basis in the current block; accordingly, prediction accuracy can be improved by using the optimal intra prediction direction for each sample, and overall coding efficiency can be improved by reducing the data amount of the residual signal.
- FIG. 1 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
- FIG. 2 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
- FIG. 3 exemplarily illustrates how TUs are partitioned based on a quad-tree structure.
- FIG. 4 exemplarily illustrates an intra prediction mode divided according to a division scheme.
- FIG. 5 exemplarily illustrates an intra prediction direction for each sample in the square division scheme and an intra prediction direction for each sample in the non-square division scheme.
- FIG. 6 exemplarily shows how groups are divided according to intra prediction angles.
- FIG. 7 exemplarily shows a TU partitioning method of groups divided according to prediction angles.
- FIG. 8 shows an example of deriving a reference sample for a target sample.
- FIG. 9 shows an example of deriving a reference sample for a target sample.
- FIG. 10 exemplarily illustrates deriving a modified prediction direction when a current block is a block of size N×h or w×N (1 ≤ h < N, 1 ≤ w < N).
- FIG. 11 schematically illustrates an intra prediction method by a decoding apparatus according to the present invention.
- FIG. 12 schematically illustrates an intra prediction method by an encoding apparatus according to the present invention.
- the components in the drawings described in the present invention are shown independently for convenience of description of their different characteristic functions; this does not mean that each component is implemented as separate hardware or separate software.
- two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
- a picture generally refers to a unit representing one image of a specific time period.
- a slice is a unit constituting a part of a picture in coding.
- one picture may be composed of a plurality of slices, and if necessary, the terms picture and slice may be used interchangeably.
- a pixel or a pel may refer to a minimum unit constituting one picture (or image). Also, 'sample' may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or a value of a pixel, and may only represent pixel / pixel values of the luma component, or only pixel / pixel values of the chroma component.
- a unit represents the basic unit of image processing.
- the unit may include at least one of a specific region of the picture and information related to the region.
- the unit may be used interchangeably with terms such as block or area in some cases.
- an M ⁇ N block may represent a set of samples or transform coefficients consisting of M columns and N rows.
- FIG. 1 is a diagram schematically illustrating a configuration of a video encoding apparatus to which the present invention may be applied.
- the video encoding apparatus 100 may include a picture divider 105, a predictor 110, a subtractor 115, a transformer 120, a quantizer 125, a reordering unit 130, An entropy encoding unit 135, an inverse quantization unit 140, an inverse transform unit 145, an adder 150, a filter unit 155, and a memory 160 are included.
- the picture divider 105 may divide the input picture into at least one processing unit.
- the processing unit may be a coding unit block (CU), a prediction unit (PU), or a transform unit (TU).
- a coding unit is a unit block of coding and may be split from a largest coding unit (LCU) into coding units of a deeper depth along a quad-tree structure.
- the largest coding unit may be used directly as the final coding unit based on coding efficiency according to the image characteristics, or if necessary, the coding unit may be recursively split into coding units of lower depths, and a coding unit of the optimal size may be used as the final coding unit.
- the coding unit may not be split into smaller coding units than the minimum coding unit.
- the final coding unit refers to a coding unit that is the basis of partitioning or partitioning into a prediction unit or a transform unit.
- the prediction unit is a block partitioning from the coding unit block and may be a unit block of sample prediction. In this case, the prediction unit may be divided into sub blocks.
- the transform unit may be divided along the quad tree structure from the coding unit block, and may be a unit block for deriving a transform coefficient and / or a unit block for deriving a residual signal from the transform coefficient.
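The quad-tree splitting described above can be sketched as a simple recursion. This is an illustrative sketch only: the `should_split` callback and the minimum size stand in for the encoder's actual split decision (e.g. a rate-distortion choice), which the text does not specify.

```python
def quadtree_split(x, y, size, min_size, should_split):
    """Recursively split a square block along a quad-tree.

    should_split(x, y, size) stands in for the encoder's split decision.
    Returns the list of (x, y, size) leaf blocks, i.e. the final units.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for off_y in (0, half):
        for off_x in (0, half):
            leaves += quadtree_split(x + off_x, y + off_y, half,
                                     min_size, should_split)
    return leaves
```

For example, splitting a 64×64 unit once yields four 32×32 leaves when the decision callback only approves splits above 32.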
- a coding unit may be called a coding block (CB)
- a prediction unit is a prediction block (PB)
- a transform unit may be called a transform block (TB).
- a prediction block or prediction unit may mean a specific area in the form of a block within a picture, and may include an array of prediction samples.
- a transform block or a transform unit may mean a specific area in a block form within a picture, and may include an array of transform coefficients or residual samples.
- the prediction unit 110 may perform a prediction on a block to be processed (hereinafter, referred to as a current block) and generate a prediction block including prediction samples of the current block.
- the unit of prediction performed by the prediction unit 110 may be a coding block, a transform block, or a prediction block.
- the prediction unit 110 may determine whether intra prediction or inter prediction is applied to the current block. As an example, the prediction unit 110 may determine whether intra prediction or inter prediction is applied on a CU basis.
- the prediction unit 110 may derive a prediction sample for the current block based on reference samples outside the current block in the picture to which the current block belongs (hereinafter, referred to as the current picture). In this case, the prediction unit 110 may (i) derive the prediction sample based on the average or interpolation of neighboring reference samples of the current block, or (ii) derive the prediction sample based on a reference sample located in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. Case (i) may be called a non-directional or non-angular mode, and case (ii) may be called a directional or angular mode.
- the prediction mode may have, for example, 33 directional prediction modes and at least two non-directional modes.
- the non-directional modes may include a DC prediction mode and a planar mode.
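As a concrete illustration of the non-directional case, the DC mode can be sketched as averaging the neighboring reference samples. This is a simplified sketch; the exact averaging and rounding details vary between codecs.

```python
def dc_predict(top_refs, left_refs):
    """DC intra prediction: every sample in the block is predicted as the
    rounded integer average of the top and left neighboring reference samples."""
    total = sum(top_refs) + sum(left_refs)
    count = len(top_refs) + len(left_refs)
    return (total + count // 2) // count  # rounded integer average
```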
- the prediction unit 110 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
- the prediction unit 110 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
- the prediction unit 110 may apply one of a skip mode, a merge mode, and a motion vector prediction (MVP) mode to derive a prediction sample for the current block.
- the prediction unit 110 may use the motion information of the neighboring block as the motion information of the current block.
- in the skip mode, unlike the merge mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- in the MVP mode, the motion vector of the current block can be derived using the motion vector of a neighboring block as a motion vector predictor.
- the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
- a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
- the motion information may include a motion vector and a reference picture index.
- information such as prediction mode information and motion information may be entropy-encoded and output in the form of a bitstream.
- when the motion information of the temporal neighboring block is used in the skip mode and the merge mode, the highest picture on the reference picture list may be used as the reference picture.
- Reference pictures included in a reference picture list may be sorted based on a difference in a picture order count (POC) between a current picture and a corresponding reference picture.
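One plausible realization of such POC-based ordering is to sort reference pictures by their absolute POC distance from the current picture. The nearest-first criterion below is an illustrative assumption; actual list construction rules are codec-specific.

```python
def sort_reference_pictures(current_poc, ref_pocs):
    """Sort reference picture POCs by absolute distance from the current
    picture's POC, nearest first (ties keep their original order)."""
    return sorted(ref_pocs, key=lambda poc: abs(current_poc - poc))
```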
- the subtraction unit 115 generates a residual sample which is a difference between the original sample and the prediction sample.
- when the skip mode is applied as described above, residual samples may not be generated.
- the transform unit 120 generates a transform coefficient by transforming the residual sample in units of transform blocks.
- the transform unit 120 may perform the transformation according to the size of the transform block and the prediction mode applied to the coding block or prediction block that spatially overlaps the transform block. For example, if intra prediction is applied to the coding block or prediction block that overlaps the transform block and the transform block is a 4×4 residual array, the residual sample is transformed using a discrete sine transform (DST); in other cases, the residual sample may be transformed using a discrete cosine transform (DCT).
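The transform-selection rule in the example above can be sketched as follows (a simplified sketch; the string labels for modes and transforms are assumptions made for illustration):

```python
def select_transform(pred_mode, tb_width, tb_height):
    """Select DST for 4x4 intra residual blocks, DCT otherwise,
    mirroring the rule described in the text."""
    if pred_mode == "intra" and tb_width == 4 and tb_height == 4:
        return "DST"
    return "DCT"
```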
- the quantization unit 125 may quantize the transform coefficients to generate quantized transform coefficients.
- the reordering unit 130 rearranges the quantized transform coefficients.
- the reordering unit 130 may rearrange the quantized transform coefficients in block form into a one-dimensional vector form through a coefficient scanning method. Although the reordering unit 130 has been described as a separate component, the reordering unit 130 may be a part of the quantization unit 125.
- the entropy encoding unit 135 may perform entropy encoding on the quantized transform coefficients.
- Entropy encoding may include, for example, encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like.
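As a small concrete example of one of these methods, a 0th-order exponential Golomb codeword for a non-negative integer can be produced as follows (the bit string is returned as text purely for illustration):

```python
def exp_golomb_encode(value):
    """0th-order Exp-Golomb: a prefix of (bit_length(value + 1) - 1) zeros,
    followed by the binary representation of value + 1."""
    code_num = value + 1
    prefix_len = code_num.bit_length() - 1
    return "0" * prefix_len + format(code_num, "b")
```

For example, the values 0, 1, and 4 encode to "1", "010", and "00101" respectively.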
- the entropy encoding unit 135 may encode information necessary for video reconstruction other than the quantized transform coefficients (for example, a value of a syntax element) together or separately. Entropy encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of bitstreams.
- the inverse quantization unit 140 inversely quantizes the values quantized by the quantization unit 125 (the quantized transform coefficients), and the inverse transform unit 145 inversely transforms the values inversely quantized by the inverse quantization unit 140 to generate a residual sample.
- the adder 150 reconstructs the picture by combining the residual sample and the predictive sample.
- the residual sample and the predictive sample may be added in units of blocks to generate a reconstructed block.
- although the adder 150 has been described as a separate component, the adder 150 may be a part of the prediction unit 110.
- the filter unit 155 may apply a deblocking filter and / or a sample adaptive offset to the reconstructed picture. Through deblocking filtering and / or sample adaptive offset, the artifacts of the block boundaries in the reconstructed picture or the distortion in the quantization process can be corrected.
- the sample adaptive offset may be applied on a sample basis and may be applied after the process of deblocking filtering is completed.
- the filter unit 155 may apply an adaptive loop filter (ALF) to the reconstructed picture. ALF may be applied to the reconstructed picture after the deblocking filter and / or sample adaptive offset is applied.
- the memory 160 may store a reconstructed picture or information necessary for encoding/decoding.
- the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 155.
- the stored reconstructed picture may be used as a reference picture for (inter) prediction of another picture.
- the memory 160 may store (reference) pictures used for inter prediction.
- pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
- FIG. 2 is a diagram schematically illustrating a configuration of a video decoding apparatus to which the present invention may be applied.
- the video decoding apparatus 200 includes an entropy decoding unit 210, a reordering unit 220, an inverse quantization unit 230, an inverse transform unit 240, a prediction unit 250, an adder 260, a filter unit 270, and a memory 280.
- the video decoding apparatus 200 may reconstruct the video corresponding to the process by which the video information was processed in the video encoding apparatus.
- the video decoding apparatus 200 may perform video decoding using a processing unit applied in the video encoding apparatus.
- the processing unit block of video decoding may be a coding unit block, a PU or a TU.
- the coding unit block may be divided along the quad tree structure from the largest coding unit block as a unit block of decoding.
- the PU is a block partitioned from the coding unit block and may be a unit block of sample prediction. At this time, the PU may be divided into sub-blocks.
- the TU may be divided along the quad tree structure from the coding unit block, and may be a unit block for deriving a transform coefficient or a unit block for deriving a residual signal from the transform coefficient.
- the entropy decoding unit 210 may parse the bitstream and output information necessary for video reconstruction or picture reconstruction. For example, the entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements necessary for video reconstruction and quantized values of transform coefficients for the residual.
- the CABAC entropy decoding method may receive a bin corresponding to each syntax element in the bitstream, determine a context model using decoding target syntax element information, decoding information of neighboring and decoding target blocks, or information of symbols/bins decoded in a previous step, predict the probability of occurrence of a bin according to the determined context model, and perform arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- the CABAC entropy decoding method may, after determining the context model, update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
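The adaptation step can be sketched with a toy context model that tracks an estimate of the probability of a 1-bin and updates it after each decoded bin. Real CABAC uses finite-state probability tables rather than this floating-point update, and the update rate below is an arbitrary illustrative value.

```python
class BinaryContext:
    """Toy adaptive context model for one syntax element's bins."""

    def __init__(self, p_one=0.5, rate=0.05):
        self.p_one = p_one  # current estimate of P(bin == 1)
        self.rate = rate    # adaptation speed (illustrative)

    def update(self, bin_value):
        # Move the probability estimate toward the observed bin value,
        # mirroring the context-update step described above.
        target = 1.0 if bin_value else 0.0
        self.p_one += self.rate * (target - self.p_one)
```

After a run of 1-bins the model assigns 1 a higher probability, which is exactly what lets the arithmetic coder spend fewer bits on likely symbols.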
- information related to prediction among the information decoded by the entropy decoding unit 210 is provided to the prediction unit 250, and the residual value on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients, is input to the reordering unit 220.
- the reordering unit 220 may rearrange the quantized transform coefficients in the form of a two-dimensional block.
- the reordering unit 220 may perform reordering corresponding to the coefficient scanning performed by the encoding apparatus. Although the reordering unit 220 has been described as a separate component, the reordering unit 220 may be a part of the inverse quantization unit 230.
- the inverse quantization unit 230 may output the transform coefficients by inversely quantizing the transform coefficients quantized based on the (inverse) quantization parameter.
- information for deriving a quantization parameter may be signaled from the encoding apparatus.
- the inverse transform unit 240 may derive residual samples by inversely transforming the transform coefficients.
- the prediction unit 250 may perform prediction on the current block and generate a prediction block including prediction samples for the current block.
- the unit of prediction performed by the prediction unit 250 may be a coding block, a transform block, or a prediction block.
- the prediction unit 250 may determine whether to apply intra prediction or inter prediction based on the information about the prediction.
- a unit for determining which of intra prediction and inter prediction is to be applied and a unit for generating a prediction sample may be different.
- the unit for generating a prediction sample in inter prediction and intra prediction may also be different.
- whether to apply inter prediction or intra prediction may be determined in units of CUs.
- in inter prediction, a prediction mode may be determined and a prediction sample may be generated in PU units, and in intra prediction, a prediction mode may be determined in PU units and a prediction sample may be generated in TU units.
- the prediction unit 250 may derive the prediction sample for the current block based on the neighbor reference samples in the current picture.
- the prediction unit 250 may derive the prediction sample for the current block by applying the directional mode or the non-directional mode based on the neighbor reference samples of the current block.
- the prediction mode to be applied to the current block may be determined using the intra prediction mode of the neighboring block.
- the prediction unit 250 may derive the prediction sample for the current block based on the sample specified by the motion vector on the reference picture.
- the prediction unit 250 may derive a prediction sample for the current block by applying any one of a skip mode, a merge mode, and an MVP mode.
- motion information required for inter prediction of the current block provided by the video encoding apparatus, for example, information about a motion vector, a reference picture index, and the like, may be obtained or derived based on the information about the prediction.
- the motion information of the neighboring block may be used as the motion information of the current block.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- the prediction unit 250 may construct a merge candidate list using the motion information of available neighboring blocks, and may use the motion information indicated by the merge index on the merge candidate list as the motion information of the current block.
- the merge index may be signaled from the encoding device.
- the motion information may include a motion vector and a reference picture index. When the motion information of the temporal neighboring block is used in the skip mode and the merge mode, the highest picture on the reference picture list may be used as the reference picture.
- in the skip mode, the difference (residual) between the prediction sample and the original sample is not transmitted.
- the motion vector of the current block may be derived using the motion vector of the neighboring block as a motion vector predictor.
- the neighboring block may include a spatial neighboring block and a temporal neighboring block.
- a merge candidate list may be generated by using a motion vector of a reconstructed spatial neighboring block and / or a motion vector corresponding to a Col block, which is a temporal neighboring block.
- the motion vector of the candidate block selected from the merge candidate list is used as the motion vector of the current block.
- the information about the prediction may include a merge index indicating a candidate block having an optimal motion vector selected from candidate blocks included in the merge candidate list.
- the prediction unit 250 may derive the motion vector of the current block by using the merge index.
- a motion vector predictor candidate list may be generated using the motion vector of a reconstructed spatial neighboring block and/or the motion vector corresponding to a Col block, which is a temporal neighboring block.
- the prediction information may include a prediction motion vector index indicating an optimal motion vector selected from the motion vector candidates included in the list.
- the prediction unit 250 may select the predicted motion vector of the current block from the motion vector candidates included in the motion vector candidate list using the motion vector index.
- the prediction unit of the encoding apparatus may obtain a motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, encode the MVD, and output it in bitstream form. That is, the MVD may be obtained by subtracting the motion vector predictor from the motion vector of the current block.
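The relationship above (MVD = MV - MVP on the encoder side, MV = MVP + MVD on the decoder side) can be sketched as follows; the function names are illustrative, not the codec's normative routines:

```python
# Sketch of the MVP-mode relationship described above: the encoder signals
# MVD = MV - MVP, and the decoder reconstructs MV = MVP + MVD, component-wise.

def encode_mvd(mv, mvp):
    """Encoder side: motion vector difference to be written to the bitstream."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvp, mvd):
    """Decoder side: reconstruct the motion vector from predictor + difference."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mv = (5, -3)                 # current block's motion vector (known to the encoder)
mvp = (4, -1)                # predictor selected from the MVP candidate list
mvd = encode_mvd(mv, mvp)    # (1, -2), signaled in the bitstream
assert decode_mv(mvp, mvd) == mv
```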
- the prediction unit 250 may obtain a motion vector difference included in the information about the prediction, and derive the motion vector of the current block by adding the motion vector difference and the motion vector predictor.
- the prediction unit may also obtain or derive a reference picture index or the like indicating a reference picture from the information about the prediction.
- the adder 260 may reconstruct the current block or the current picture by adding the residual sample and the predictive sample.
- the adder 260 may reconstruct the current picture by adding the residual sample and the predictive sample in block units. Since the residual is not transmitted when the skip mode is applied, the prediction sample may be a reconstruction sample.
- although the adder 260 is described as a separate component, the adder 260 may be part of the predictor 250.
- the filter unit 270 may apply deblocking filtering, sample adaptive offset, and/or ALF to the reconstructed picture.
- the sample adaptive offset may be applied in units of samples and may be applied after deblocking filtering.
- ALF may be applied after deblocking filtering and / or sample adaptive offset.
- the memory 280 may store a reconstructed picture or information necessary for decoding.
- the reconstructed picture may be a reconstructed picture after the filtering process is completed by the filter unit 270.
- the memory 280 may store pictures used for inter prediction.
- pictures used for inter prediction may be designated by a reference picture set or a reference picture list.
- the reconstructed picture can be used as a reference picture for another picture.
- the memory 280 may output the reconstructed picture in an output order.
- inter prediction or intra prediction may be performed to increase coding efficiency.
- an intra prediction mode may be determined in units of a prediction unit, and a prediction sample may be derived using peripheral reference samples in units of a transform unit.
- one or more transform units may be included in one prediction unit, and the one or more transform units may share the same intra prediction mode.
- the maximum and minimum transform unit sizes may be determined according to characteristics (for example, resolution) of the video image or in consideration of coding efficiency, and the bitstream may include information about the maximum and minimum transform unit sizes.
- a transform unit having a tree structure may be divided hierarchically with depth information. Each divided lower transform unit may have depth information. Since the depth information indicates the number and/or degree of divisions of the transform unit, it may include information about the size of the lower transform unit.
- the decoding apparatus may obtain split information indicating whether the current transform unit is split.
- information indicating whether the corresponding transform unit is divided can be specified. For example, if the value of the split flag indicating splitting is 1, the corresponding transform unit is divided into four transform units; if the value of the split flag is 0, the corresponding transform unit is not divided further, and a processing process, such as prediction and transform, may be performed based on that transform unit.
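The split-flag recursion described above can be sketched as follows; this is a minimal illustration, and the depth-first flag ordering and the helper names are assumptions of this sketch, not the codec syntax:

```python
# Sketch of the split-flag-driven quad-tree TU division described above:
# a flag value of 1 splits a square unit into four equal sub-units, a value
# of 0 stops so that prediction and transform are performed on that unit.

def split_tu(x, y, size, read_flag, leaves):
    """Recursively divide a transform unit according to split flags."""
    if size > 1 and read_flag() == 1:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                split_tu(x + dx, y + dy, half, read_flag, leaves)
    else:
        leaves.append((x, y, size))   # leaf TU: no further division

flags = iter([1, 0, 0, 0, 0])         # split the root once; keep its four children
leaves = []
split_tu(0, 0, 8, lambda: next(flags), leaves)
assert leaves == [(0, 0, 4), (4, 0, 4), (0, 4, 4), (4, 4, 4)]
```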
- the partition structure of the above-described transform unit may be represented using a tree structure. For example, splitting may be performed using, as the root, a transform unit that coincides with the prediction unit.
- a transform unit that is currently split becomes a parent node, and each transform unit split from it becomes a child node.
- a split transform unit (parent node) has as many child nodes as the number of transform units resulting from the split.
- the transformation unit that is no longer partitioned becomes a leaf node.
- Leaf nodes are nodes that have no child nodes.
- the partition structure of the transform unit will be a quad tree structure.
- block division according to the present invention is not limited to the square division described above; it also includes various non-square divisions, or a combination of square and non-square divisions. Even when the current block is one of non-square transform units (TUs), it may be split from the coding unit along the quad tree structure.
- FIG. 3 exemplarily illustrates how TUs are partitioned based on a quad tree structure.
- the prediction unit PU shown in FIGS. 3A and 3B is a unit for performing prediction, that is, a unit in which a prediction mode is determined.
- the intra prediction mode may generally include 35 prediction modes.
- intra prediction mode # 0 represents an intra planar mode
- intra prediction mode # 1 represents an intra DC mode
- intra prediction modes #2 to #34 represent the intra angular 2 mode to the intra angular 34 mode, respectively.
- the intra planar mode and the intra DC mode may be referred to as an intra non-directional mode
- the intra angular 2 to intra angular 34 modes may be referred to as an intra directional mode.
- one or more TUs may be included in one PU, and the one or more TUs may share the same intra prediction mode.
- FIG. 3(a) shows a case where a TU having the same size and location as the PU is divided into a plurality of square TUs.
- one TU may be divided into four TUs of equal size by a vertical line at half its width and a horizontal line at half its height.
- FIG. 3(b) shows a case where a TU having the same size and position as the PU is divided into non-square TUs.
- either a vertical split, in which one TU is divided into a plurality of TUs by vertical lines, or a horizontal split, in which one TU is divided into a plurality of TUs by horizontal lines, may be selected as the split scheme.
- the partitioning scheme shown in FIG. 3(b) is partly similar to the short distance intra prediction (SDIP) method of partitioning a coding unit (CU) into PUs. It differs from the conventional TU partitioning method, in which one TU can be divided only into square TUs along a quad tree structure.
- the decoding apparatus may distinguish the division scheme by receiving a flag related to the division scheme, for example, a vertical division flag, from the bitstream. If the value of the vertical division flag is 1, the PU may be partitioned by the vertical division scheme, and if the value of the vertical division flag is 0, the PU may be partitioned by the horizontal division scheme. Conversely, a horizontal division flag may be transmitted instead: when its value is 1 the horizontal division scheme is selected, and when its value is 0 the vertical division scheme is selected.
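The flag convention just described can be sketched as follows; the function name is illustrative, and both conventions mentioned in the text are covered:

```python
# Sketch of the division-flag convention described above: with a vertical
# division flag, 1 selects vertical splitting and 0 selects horizontal
# splitting; with a horizontal division flag, the convention is reversed.

def division_scheme(flag_value, flag_is_vertical=True):
    """Map a received division flag to the selected division scheme."""
    if flag_is_vertical:
        return "vertical" if flag_value == 1 else "horizontal"
    return "horizontal" if flag_value == 1 else "vertical"

assert division_scheme(1) == "vertical"
assert division_scheme(0) == "horizontal"
assert division_scheme(1, flag_is_vertical=False) == "horizontal"
```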
- the decoding apparatus may also determine the division scheme according to the intra prediction mode applied to the PU that includes the area of the current TU.
- FIG. 4 exemplarily illustrates an intra prediction mode divided according to a division scheme.
- referring to FIG. 4, an embodiment that selects the TU division scheme according to the intra directional mode can be seen.
- the decoding apparatus may select the vertical division scheme when intra directional modes 2 to 17, which are close to the horizontal direction, are applied, and may select the horizontal division scheme when intra directional modes 18 to 34, which are close to the vertical direction, are applied.
- the intra prediction direction may be adjusted for each sample of a TU divided into non-square blocks. That is, the intra prediction direction may be refined on a sample basis: according to the present invention, the decoding apparatus may derive a modified intra prediction direction for each sample of a TU of wxN (1≤w<N) or Nxh (1≤h<N) size divided by the above-described partitioning scheme.
- FIG. 5 exemplarily illustrates the intra prediction direction at each sample under the square division scheme and the intra prediction direction at each sample under the non-square division scheme.
- under the square division scheme, each TU may generally be decoded by applying the same intra prediction mode.
- intra prediction mode information is transmitted in PU units as in the square division scheme, but the prediction direction may be modified so that each sample in the TUs has the most suitable intra prediction direction.
- the decoding apparatus may also distinguish the division scheme according to the angle of the intra prediction direction applied in the PU, as described above.
- FIG. 6 exemplarily shows how modes are divided into groups according to intra prediction angles.
- the angles of the modes whose prediction direction lies between H+1 and H+32 are expressed as angles with a positive sign relative to the horizontal reference, and the angles of the modes whose prediction direction lies between H-1 and H-32 are expressed as angles with a negative sign relative to the horizontal reference. That is, taking the horizontal reference angle 0° corresponding to intra prediction mode 10 as a basis, a prediction direction between H+1 and H+32 is expressed as a horizontal reference angle with a positive sign, and a prediction direction between H-1 and H-32 is expressed as a horizontal reference angle with a negative sign.
- the angles of the modes whose prediction direction lies between V-1 and V-32 are expressed as angles with a negative sign relative to the vertical reference, and the angles of the modes whose prediction direction lies between V+1 and V+32 are expressed as angles with a positive sign relative to the vertical reference. That is, taking the vertical reference angle 0° corresponding to intra prediction mode 26 as a basis, a prediction direction between V-1 and V-32 is expressed as a vertical reference angle with a negative sign, and a prediction direction between V+1 and V+32 is expressed as a vertical reference angle with a positive sign.
- when the prediction angle of the intra prediction mode is included in H-31 to H+32, the intra prediction mode may be set to a horizontal group, and when the prediction angle of the intra prediction mode is included in V-32 to V+32, the intra prediction mode may be set to a vertical group. That is, the intra directional modes applied in the PU may be divided into a horizontal group and a vertical group according to their prediction angles.
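The grouping rule above can be sketched as follows. The source classifies by prediction angle (H-31 to H+32 for the horizontal group, V-32 to V+32 for the vertical group); here the equivalent mode-index boundaries from the FIG. 4 discussion (modes 2 to 17 near horizontal, 18 to 34 near vertical) are used as a simplifying assumption:

```python
# Sketch of angle-group classification in the 35-mode scheme described above.

def angle_group(mode):
    """Classify an intra prediction mode index into a direction group."""
    if mode < 2:
        return "non-directional"      # planar (0) and DC (1)
    return "horizontal" if mode <= 17 else "vertical"

# horizontal-group PUs are split into w x N TUs (vertical division scheme);
# vertical-group PUs are split into N x h TUs (horizontal division scheme).
assert angle_group(10) == "horizontal"   # pure horizontal mode (H+0)
assert angle_group(26) == "vertical"     # pure vertical mode (V+0)
```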
- TUs having a size of wxN (1≤w<N) may be derived through the vertical division scheme in the area of the PU.
- TUs having a size of Nxh (1≤h<N) may be derived through the horizontal division scheme in the area of the PU.
- FIG. 7 exemplarily shows a TU partitioning method of groups divided according to prediction angles.
- FIG. 7 shows, in FIG. 7(a), an example of partitioning a TU based on a PU to which an intra directional mode of the horizontal group is applied and, in FIG. 7(b), an example of partitioning a TU based on a PU to which an intra directional mode of the vertical group is applied.
- when the TU depth (d_TU) shown in FIG. 7 is 1, the TU may be divided into (N/4)xN or Nx(N/4) sizes.
- when the TU depth (d_TU) is the maximum value, the TU may be divided into 1xN or Nx1 sizes.
- the prediction direction may be modified for each sample of the divided TU.
- the decoding apparatus may modify the prediction direction for each sample as follows.
- the current block may be a block of size wxN (1≤w<N).
- the PU including the current block is divided by the vertical division scheme so that the TUs are located from left to right, and the TUs are decoded from left to right according to the scan order, so that the TU adjacent to the left side of the current block may already have been decoded. Therefore, the reconstructed values of the samples included in the TU adjacent to the left side of the current block can be used for prediction of the target samples included in the current block.
- the decoding apparatus may derive one of the samples included in the TU adjacent to the left as a reference sample according to the intra directional mode applied to the current block. For example, the reference sample may be derived based on the following equation.
- (x, y) is the sample position of the target sample
- P_ref(x, y) is the derived reference sample
- P_rec(x-1, y-1) is the reconstructed value of the upper-left neighboring sample of the target sample
- P_rec(x-1, y) is the reconstructed value of the left neighboring sample of the target sample
- P_rec(x-1, y+1) is the reconstructed value of the lower-left neighboring sample of the target sample
- n is the horizontal angle H+n (-31≤n≤32) of the intra directional mode, and A, B, and C denote the respective horizontal angle categories.
- FIG. 8 shows an example of deriving a reference sample for a target sample.
- the decoding apparatus may derive P_rec(x-1, y-1) as the reference sample.
- the decoding apparatus may derive P_rec(x-1, y) as the reference sample.
- the decoding apparatus may derive P_rec(x-1, y+1) as the reference sample.
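The three reference-sample positions above can be tied to mode ranges using the mode-to-position mapping stated later in this document (modes 2 to 6, 7 to 13, and 14 to 17); a minimal sketch with an illustrative function name, standing in for Equation 1:

```python
# Sketch of the horizontal-angle reference-sample selection described above:
# for a target sample at (x, y), the refined reference sample is taken from
# the already-decoded TU to the left of the current block.

def horizontal_reference_position(x, y, mode):
    """Return the (x, y) position of P_ref for a horizontal-group mode."""
    if 2 <= mode <= 6:          # category pointing toward the lower-left
        return (x - 1, y + 1)
    if 7 <= mode <= 13:         # category pointing left
        return (x - 1, y)
    if 14 <= mode <= 17:        # category pointing toward the upper-left
        return (x - 1, y - 1)
    raise ValueError("not a horizontal-group directional mode")

assert horizontal_reference_position(0, 4, 10) == (-1, 4)   # pure horizontal
```

The vertical-group counterpart (Equation 3, using the upper neighboring samples) is symmetric, with modes 18 to 22, 23 to 29, and 30 to 34 selecting the upper-left, upper, and upper-right neighbors, respectively.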
- a reference sample (hereinafter, referred to as a refined reference sample) of the target sample is derived as above, and the decoding apparatus derives the neighboring reference samples of the neighboring block to which the refined reference sample belongs; the intra prediction direction may then be modified based on the refined reference sample and the neighboring reference samples of the neighboring block. If the intra directional mode applied to the refined reference sample has horizontal angle H+m (-31≤m≤32), n samples in the neighboring block corresponding to prediction directions within a predetermined range (±r) around H+m are derived from the neighboring reference samples of the neighboring block.
- the decoding apparatus compares the n neighboring reference samples with the refined reference sample and derives, as the modified intra prediction direction, the prediction direction indicating the neighboring reference sample having the smallest difference from the refined reference sample.
- the method for deriving the modified intra prediction direction may be expressed by, for example, the following equation.
- DirRefine(P_ref(x, y)) is the modified prediction direction
- PRED_Pref(x, y)(a) is one of the n neighboring reference samples
- P_ref(x, y) is the refined reference sample
- m is the prediction direction of the intra directional mode applied to the refined reference sample
- r is a positive integer, and n may be 2r-1.
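The argmin selection described above (choose, among the n = 2r-1 candidate directions around m, the one whose predicted value is closest to the refined reference sample) can be sketched as follows; the per-direction predictor is supplied by the caller, as a stand-in for the directional interpolation that Equation 2 assumes:

```python
# Sketch of the direction-refinement rule described above.

def refine_direction(p_ref_value, candidates, predict):
    """Return the candidate direction whose predicted value is closest to P_ref.

    candidates: iterable of candidate directions around m
    predict(d): predicted sample value when direction d is used
    """
    return min(candidates, key=lambda d: abs(predict(d) - p_ref_value))

# Toy usage: predicted values for three candidate directions around m = 10.
predicted = {9: 100, 10: 113, 11: 121}
best = refine_direction(120, predicted.keys(), predicted.get)
assert best == 11    # |121 - 120| = 1 is the smallest difference
```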
- the decoding apparatus may modify the prediction direction for each sample as follows.
- the current block may be a block of size Nxh (1≤h<N).
- the PU including the current block is divided by the horizontal division scheme so that the TUs are located from top to bottom, and the TUs are decoded from top to bottom according to the scan order, so that the TU adjacent to the top of the current block may already have been decoded. Therefore, the reconstructed values of the samples included in the TU adjacent to the top of the current block can be used for prediction of the target samples included in the current block.
- the decoding apparatus may derive one of the samples included in the TU adjacent to the top as a reference sample according to the intra directional mode applied to the current block. For example, the reference sample may be derived based on the following equation.
- (x, y) is the sample position of the target sample
- P_ref(x, y) is the derived reference sample
- P_rec(x-1, y-1) is the reconstructed value of the upper-left neighboring sample of the target sample
- P_rec(x, y-1) is the reconstructed value of the upper neighboring sample of the target sample
- P_rec(x+1, y-1) is the reconstructed value of the upper-right neighboring sample of the target sample
- n is the vertical angle V+n (-31≤n≤32) of the intra directional mode, and D, E, and F denote the respective vertical angle categories.
- FIG. 9 shows an example of deriving a reference sample for a target sample.
- the decoding apparatus may derive P_rec(x-1, y-1) as the reference sample.
- the decoding apparatus may derive P_rec(x, y-1) as the reference sample.
- the decoding apparatus may derive P_rec(x+1, y-1) as the reference sample.
- a reference sample (hereinafter, referred to as a refined reference sample) of the target sample is derived as above, and the decoding apparatus derives the neighboring reference samples of the neighboring block to which the refined reference sample belongs; the intra prediction direction may then be modified based on the refined reference sample and the neighboring reference samples of the neighboring block. If the intra directional mode applied to the refined reference sample has vertical angle V+m (-31≤m≤32), n samples in the neighboring block corresponding to prediction directions within a predetermined range (±r) around V+m are derived from the neighboring reference samples of the neighboring block.
- the decoding apparatus compares the n neighboring reference samples with the refined reference sample and derives, as the modified intra prediction direction, the prediction direction indicating the neighboring reference sample having the smallest difference from the refined reference sample.
- the method for deriving the modified intra prediction direction may be expressed by, for example, the following equation.
- DirRefine(P_ref(x, y)) is the modified prediction direction
- PRED_Pref(x, y)(a) is one of the n neighboring reference samples
- P_ref(x, y) is the refined reference sample
- m is the prediction direction of the intra directional mode applied to the refined reference sample
- r is a positive integer, and n may be 2r-1.
- the modified prediction direction may be derived for each sample as described above.
- when the intra prediction mode applied to the PU belongs to the horizontal group and the vertical division scheme is applied, so that the current block is a block of size wxN (1≤w<N), the modified prediction direction of the samples located in the leftmost column is derived by the method described above, and the modified prediction direction of the remaining samples may be derived to be the same as the modified prediction direction of the leftmost-column sample located in the same row.
- when the intra prediction mode applied to the PU belongs to the vertical group and the horizontal division scheme is applied, the modified prediction direction of the samples located in the uppermost row is derived by the method described above, and the modified prediction direction of the remaining samples may be derived to be the same as the modified prediction direction of the uppermost-row sample located in the same column.
- FIG. 10 exemplarily illustrates deriving a modified prediction direction when the current block is a block of size wxN or Nxh (1≤w<N, 1≤h<N).
- an intra prediction direction for each sample in a wxN size block and an intra prediction direction for each sample in a Nxh size block can be seen.
- P(x, y) represents a sample having the (x, y) sample position, one of the samples located in the leftmost column of the block.
- the modified intra prediction direction of P (x, y) may be derived as described above based on the refined reference sample of P (x, y). In this case, for example, the modified intra prediction direction may be derived based on Equation 2 described above.
- the samples other than those in the leftmost column that are located in the same row as P(x, y), namely P(x+1, y) to P(x+w-1, y), may have the modified intra prediction direction of P(x, y) derived as their intra prediction direction.
- P(x, y) represents a sample having the (x, y) sample position, one of the samples located in the uppermost row of the block.
- the modified intra prediction direction of P (x, y) may be derived as described above based on the refined reference sample of P (x, y). In this case, for example, the modified intra prediction direction may be derived based on Equation 4 described above.
- the samples other than those in the uppermost row that are located in the same column as P(x, y), namely P(x, y+1) to P(x, y+h-1), may have the modified intra prediction direction of P(x, y) derived as their intra prediction direction.
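The row-wise propagation above (for a wxN block; the Nxh case is symmetric, column-wise) can be sketched as follows; `refine_leftmost` is a hypothetical stand-in for the per-sample derivation described earlier:

```python
# Sketch of the propagation rule described above for a w x N block under
# vertical division: the modified direction is computed only for the leftmost
# column, and every other sample in the same row reuses that direction.

def propagate_directions(w, n, refine_leftmost):
    """refine_leftmost(y) -> modified direction for the sample at (0, y)."""
    directions = [[None] * w for _ in range(n)]
    for y in range(n):
        d = refine_leftmost(y)
        for x in range(w):      # the whole row shares the leftmost direction
            directions[y][x] = d
    return directions

dirs = propagate_directions(4, 2, lambda y: 10 + y)
assert dirs == [[10, 10, 10, 10], [11, 11, 11, 11]]
```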
- the encoding apparatus may transmit an angle refine flag (angle_refine_flag) to the decoding apparatus.
- the angle refine flag may indicate whether the modified prediction direction according to the present invention described above is used. That is, the encoding apparatus may inform whether the modified intra prediction direction is used through the angle refine flag. For example, when the value of the angle refine flag is 1, the encoding apparatus and the decoding apparatus may predict the target sample in the current block based on the modified prediction direction.
- the angle refine flag may be transmitted when the index of the intra prediction mode of the current PU is greater than one.
- the angle refine flag may be transmitted through a syntax as shown in Table 2 below.
- the syntax may be, for example, a CU syntax, and may be transmitted in a bitstream.
- the angle_refine_flag syntax element corresponds to the angle refine flag.
- IntraPredModeY information indicates the index of the intra prediction mode of the current PU.
- the angle_refine_flag syntax element may be transmitted and received and parsed when the value of the IntraPredModeY information is greater than one.
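The parsing condition above (angle_refine_flag present only when IntraPredModeY is greater than 1, i.e. a directional mode) can be sketched as follows; the bit reader is a stand-in for the entropy decoder:

```python
# Sketch of the angle_refine_flag signaling condition described above.

def parse_angle_refine_flag(intra_pred_mode_y, read_bit):
    """Parse angle_refine_flag only for directional modes (index > 1)."""
    if intra_pred_mode_y > 1:
        return read_bit()
    return 0                    # flag absent: refinement is not used

assert parse_angle_refine_flag(10, lambda: 1) == 1   # directional: parsed
assert parse_angle_refine_flag(0, lambda: 1) == 0    # planar: not parsed
```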
- FIG. 11 schematically illustrates an intra prediction method by a decoding apparatus according to the present invention.
- the method disclosed in FIG. 11 may be performed by the decoding apparatus disclosed in FIG. 2.
- S1100 of FIG. 11 may be performed by the entropy decoding unit of the decoding apparatus
- S1110 to S1160 may be performed by the prediction unit of the decoding apparatus.
- the decoding apparatus obtains information about an intra prediction mode from a bitstream (S1100).
- the decoding device may decode the bitstream received from the encoding device and obtain information about the intra prediction mode.
- the information about the intra prediction mode may include index information indicating the intra prediction mode of the current PU.
- the bitstream may also include an angle refine flag.
- the angle refine flag may be transmitted when index information indicating the intra prediction mode of the current PU is greater than one.
- the bitstream may also include a vertical division flag or a horizontal division flag. When the value of the vertical division flag or the horizontal division flag is 1, the current PU may be divided into TUs through a vertical division scheme or a horizontal division scheme.
- the bitstream may be received via a network or a storage medium.
- the decoding apparatus derives the intra prediction mode of the current block as the first intra directional mode based on the intra prediction mode information (S1110).
- the decoding apparatus may determine the intra prediction mode according to whether the most probable mode (MPM) or the remaining mode is applied, and derive the intra prediction mode as the first intra directional mode.
- when the MPM is applied, the MPM list is determined based on the intra prediction modes of the left or upper neighboring blocks of the current block, the intra prediction mode is determined based on the MPM list and the index information, and the determined intra prediction mode may be derived as the first intra directional mode.
- when the remaining mode is applied, the intra prediction mode may be determined based on the index information from among the remaining prediction modes not included in the MPM list, and the intra prediction mode may be derived as the first intra directional mode.
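The MPM/remaining-mode derivation above can be sketched as follows. The three-entry list and the default modes used to fill it are simplifying assumptions of this illustration; the source only states that the list is built from the left/upper neighboring modes and that the index information selects the mode:

```python
# Sketch of MPM-based intra mode derivation in the 35-mode scheme.

def build_mpm_list(left_mode, above_mode):
    """Build a 3-entry MPM list from neighbor modes, padded with defaults."""
    mpm = []
    for m in (left_mode, above_mode, 0, 1, 26):   # planar, DC, vertical defaults
        if m not in mpm:
            mpm.append(m)
        if len(mpm) == 3:
            break
    return mpm

def derive_mode(mpm_flag, index, mpm):
    """Select the mode from the MPM list or from the remaining modes."""
    if mpm_flag:
        return mpm[index]                          # index points into the MPM list
    remaining = [m for m in range(35) if m not in mpm]
    return remaining[index]                        # index into the remaining modes

mpm = build_mpm_list(10, 26)
assert mpm == [10, 26, 0]
assert derive_mode(True, 1, mpm) == 26
```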
- the decoding apparatus derives first peripheral samples of the current block (S1120).
- the first peripheral samples may include left peripheral samples, upper left peripheral samples, and upper peripheral samples of the current block.
- the left neighboring samples (p[-1][2H-1], ..., p[-1][0]), the upper-left neighboring sample (p[-1][-1]), and the upper neighboring samples (p[0][-1], ..., p[2W-1][-1]) may be derived as the neighboring reference samples for intra prediction of the current block.
- p[m][n] may represent a sample (or pixel) at sample position (m, n).
- W and H may correspond to the width and height of the current block, respectively.
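The reference-sample layout above (2H left samples, the upper-left corner sample, and 2W upper samples) can be sketched in terms of sample positions; availability checking and sample substitution are omitted from this illustration:

```python
# Sketch of the neighboring reference sample positions for a W x H block
# whose top-left sample is at (0, 0), as enumerated above.

def reference_sample_positions(w, h):
    """Return the positions of the first neighboring samples of the block."""
    left = [(-1, y) for y in range(2 * h)]     # p[-1][0] .. p[-1][2H-1]
    corner = [(-1, -1)]                        # p[-1][-1]
    top = [(x, -1) for x in range(2 * w)]      # p[0][-1] .. p[2W-1][-1]
    return left + corner + top

pos = reference_sample_positions(4, 4)
assert len(pos) == 2 * 4 + 1 + 2 * 4           # 17 positions for a 4x4 block
```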
- the decoding apparatus derives a (fine) reference sample adjacent to the target sample in the current block based on the prediction direction of the first intra directional mode (S1130).
- the angles of the modes whose prediction direction lies between H+1 and H+32 are expressed as angles with a positive sign relative to the horizontal reference, and the angles of the modes whose prediction direction lies between H-1 and H-32 are expressed as angles with a negative sign relative to the horizontal reference. That is, taking the horizontal reference angle 0° corresponding to intra prediction mode 10 as a basis, a prediction direction between H+1 and H+32 is expressed as a horizontal reference angle with a positive sign, and a prediction direction between H-1 and H-32 is expressed as a horizontal reference angle with a negative sign.
- the angles of the modes whose prediction direction lies between V-1 and V-32 are expressed as angles with a negative sign relative to the vertical reference, and the angles of the modes whose prediction direction lies between V+1 and V+32 are expressed as angles with a positive sign relative to the vertical reference. That is, taking the vertical reference angle 0° corresponding to intra prediction mode 26 as a basis, a prediction direction between V-1 and V-32 is expressed as a vertical reference angle with a negative sign, and a prediction direction between V+1 and V+32 is expressed as a vertical reference angle with a positive sign.
- when the prediction angle of the intra prediction mode is included in H-31 to H+32, the intra prediction mode may be set to the horizontal group, and when the prediction angle of the intra prediction mode is included in V-32 to V+32, the intra prediction mode may be set to the vertical group.
- the decoding apparatus may determine whether the first intra directional mode is included in the horizontal group or the vertical group according to the prediction direction of the first intra directional mode.
- when the first intra directional mode is included in the horizontal group, the decoding apparatus may set the target sample as a sample adjacent to the left boundary of the current block and derive the reference sample of the target sample from the left block of the current block.
- when the first intra directional mode is included in the vertical group, the decoding apparatus may set the target sample as a sample adjacent to the upper boundary of the current block and derive the reference sample of the target sample from the upper block of the current block.
- the decoding apparatus may use P (x, y) adjacent to the left boundary of the current block as a target sample as shown in FIG. 8.
- the decoding apparatus may derive an upper left adjacent sample of the target sample as a reference sample of the target sample.
- the decoding apparatus may derive a left adjacent sample of the target sample as a reference sample of the target sample.
- the decoding apparatus may derive a lower left adjacent sample of the target sample as a reference sample of the target sample. In this case, the decoding apparatus may derive the reference sample based on Equation 1 described above.
- the decoding apparatus may use P (x, y) adjacent to an upper boundary of the current block as a target sample as shown in FIG. 9.
- the decoding apparatus may derive the upper left adjacent sample of the target sample as a reference sample.
- the decoding apparatus may derive an upper neighboring sample of the target sample as a reference sample.
- the decoding apparatus may derive the right upper neighboring sample of the target sample as a reference sample. In this case, the decoding apparatus may derive the reference sample based on Equation 3 described above.
- the decoding apparatus may derive a target sample corresponding to the index and a reference sample of the target sample based on the index of the first intra directional mode.
- the decoding apparatus may use P (x, y) adjacent to the left boundary of the current block as a target sample.
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 2 to 6, the sample position of the reference sample may be (x-1, y+1).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 7 to 13, the sample position of the reference sample may be (x-1, y).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 14 to 17, the sample position of the reference sample may be (x-1, y-1).
- the decoding apparatus may use P (x, y) adjacent to an upper boundary of the current block as a target sample.
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 18 to 22, the sample position of the reference sample may be (x-1, y-1).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 23 to 29, the sample position of the reference sample may be (x, y-1).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 30 to 34, the sample position of the reference sample may be (x+1, y-1).
- the decoding apparatus derives second neighboring samples of the neighboring block to which the reference sample belongs (S1140).
- the second peripheral samples may include left peripheral samples, upper left peripheral samples, and upper peripheral samples of the peripheral block to which the reference sample belongs.
- based on the top-left sample position (x0, y0) of the neighboring block, the left neighboring samples (p[x0-1][y0+2H_ref-1], ..., p[x0-1][y0]), the upper-left neighboring sample (p[x0-1][y0-1]), and the upper neighboring samples (p[x0][y0-1], ..., p[x0+2W_ref-1][y0-1]) may be derived as the second neighboring samples.
- p[m][n] may represent a sample (or pixel) at sample position (m, n).
- W_ref and H_ref may correspond to the width and height of the neighboring block, respectively.
- the decoding apparatus determines a modified prediction direction based on the comparison between the reference sample and at least one of the second neighboring samples (S1150).
- the decoding apparatus may derive peripheral sample 1, which is located in the prediction direction of the second intra directional mode with respect to the reference sample.
- the peripheral sample 1 is included in the second peripheral samples.
- the decoding apparatus may derive n neighboring samples based on neighboring sample 1 among the second neighboring samples.
- the n neighboring samples may include neighboring sample 1 and a plurality of neighboring samples positioned within a specific distance from neighboring sample 1.
- when the second intra directional mode corresponds to H+m, the decoding apparatus may derive the n neighboring samples corresponding to the prediction directions of a predetermined range (±r) around H+m.
- when the second intra directional mode corresponds to V+m, the decoding apparatus may derive the n neighboring samples corresponding to the prediction directions of a predetermined range (±r) around V+m.
- the decoding apparatus may compare the n neighboring samples with the reference sample and derive, as the modified intra prediction direction, the prediction direction indicating the neighboring sample having the smallest difference from the reference sample. In this case, the decoding apparatus may derive the modified intra prediction direction based on Equation 2 or Equation 4 described above.
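The selection of the modified direction can be sketched as follows, under the assumption that each candidate direction in the ±r range has already been paired with the second neighboring sample it points to. The function name and the (direction, value) pairing are illustrative assumptions, not the normative Equation 2 or Equation 4.

```python
def refine_direction(reference_value, candidates):
    """candidates: list of (direction, sample_value) pairs covering the
    +-r range around the neighboring block's intra prediction direction.
    Returns the direction whose sample differs least from the reference
    sample value (smallest absolute difference)."""
    best_direction, _ = min(candidates,
                            key=lambda c: abs(c[1] - reference_value))
    return best_direction
```

For example, with a reference sample value of 128 and candidates at offsets -2, 0, and +1, the offset whose sample value is closest to 128 is selected.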
- the decoding apparatus derives a prediction value for the target sample based on the modified prediction direction and the first neighboring samples (S1160).
- the decoding apparatus may derive a prediction value for the target sample based on the first neighboring sample located in the modified prediction direction (that is, the neighboring sample located in the modified prediction direction from the target sample position). For example, when the value of the angle refine flag is 1, the decoding apparatus may derive the prediction value for the target sample based on the modified intra prediction direction.
- the decoding apparatus may generate a reconstructed sample based on the prediction value for the target sample.
- the decoding apparatus may receive a residual signal from the encoding apparatus and generate a residual sample for the current block. In this case, the decoding apparatus may generate the reconstructed sample based on the prediction value and the residual sample.
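A minimal sketch of the reconstruction step, assuming the usual clipping of the sum to the valid sample range for the bit depth (clipping is standard practice in video codecs and is not stated explicitly above):

```python
def reconstruct(pred, resid, bit_depth=8):
    """Reconstructed sample = prediction value + residual sample,
    clipped to [0, 2**bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return max(0, min(pred + resid, max_val))
```

For an 8-bit sample, a prediction of 250 plus a residual of 10 clips to 255.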
- the decoding apparatus may apply an in-loop filtering procedure, such as a deblocking filtering and / or SAO procedure, to the reconstructed picture in order to improve subjective / objective picture quality as necessary.
- FIG. 12 schematically illustrates an intra prediction method by an encoding apparatus according to the present invention.
- the method disclosed in FIG. 12 may be performed by the encoding apparatus disclosed in FIG. 1.
- S1200 to S1240 of FIG. 12 may be performed by the prediction unit of the encoding apparatus
- S1250 may be performed by the entropy encoding unit of the encoding apparatus.
- the encoding apparatus derives the first intra directional mode and the first neighboring samples of the current block (S1200).
- the encoding apparatus may determine an optimal intra prediction mode for the current PU based on the RD cost, and derive that intra prediction mode as the first intra directional mode of the current block. In this case, the encoding apparatus may determine whether an MPM (most probable mode) or a remaining mode is applied when signaling the intra prediction mode information indicating the determined first intra directional mode, and this may be included in the prediction mode information.
- the first neighboring samples may include the left neighboring samples, the upper-left neighboring sample, and the upper neighboring samples of the current block.
- the left neighboring samples p[-1][2H-1], ..., p[-1][0], the upper-left neighboring sample p[-1][-1], and the upper neighboring samples p[0][-1], ..., p[2W-1][-1] may be derived as the neighboring reference samples for intra prediction of the current block.
- p [m] [n] may represent a sample (or pixel) of the sample position (m, n).
- W and H may correspond to the width and height of the current block, respectively.
- the encoding apparatus determines a (refine) reference sample adjacent to the target sample in the current block based on the prediction direction of the first intra directional mode (S1210).
- the angles of modes whose prediction direction lies between H+1 and H+32 are expressed as angles with a positive sign with respect to the horizontal reference, and the angles of modes whose prediction direction lies between H-1 and H-32 are expressed as angles with a negative sign with respect to the horizontal reference. That is, based on the horizontal reference angle 0° corresponding to intra prediction mode 10, the angle of a prediction direction lying between H+1 and H+32 is represented as a horizontal reference angle with a positive sign, and the angle of a prediction direction lying between H-1 and H-32 is represented as a horizontal reference angle with a negative sign.
- the angles of modes whose prediction direction lies between V-1 and V-32 are expressed as angles with a negative sign with respect to the vertical reference, and the angles of modes whose prediction direction lies between V+1 and V+32 are expressed as angles with a positive sign with respect to the vertical reference. That is, based on the vertical reference angle 0° corresponding to intra prediction mode 26, the angle of a prediction direction lying between V-1 and V-32 is represented as a vertical reference angle with a negative sign, and the angle of a prediction direction lying between V+1 and V+32 is represented as a vertical reference angle with a positive sign.
- when the prediction angle of the intra prediction mode is included in H-31 to H+32, the intra prediction mode may be set to the horizontal group, and when the prediction angle of the intra prediction mode is included in V-32 to V+32, the intra prediction mode may be set to the vertical group.
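The horizontal/vertical grouping can be sketched by mode index as follows; the index ranges (2 to 17 horizontal, 18 to 34 vertical) follow the mode ranges used elsewhere in this document, and the function name is hypothetical.

```python
def directional_group(mode):
    """Classify a directional intra prediction mode (HEVC-style
    numbering, 2..34) into the horizontal or vertical group."""
    if 2 <= mode <= 17:
        return "horizontal"   # target samples along the left boundary
    if 18 <= mode <= 34:
        return "vertical"     # target samples along the upper boundary
    raise ValueError("mode %d is not a directional mode" % mode)
```

For instance, mode 10 (horizontal) falls in the horizontal group and mode 26 (vertical) in the vertical group.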
- the encoding apparatus may determine whether the first intra directional mode is included in the horizontal group or the vertical group according to the prediction direction of the first intra directional mode.
- when the first intra directional mode is included in the horizontal group, the encoding apparatus may set the target sample as a sample adjacent to the left boundary of the current block and derive a reference sample of the target sample from the left block of the current block.
- when the first intra directional mode is included in the vertical group, the encoding apparatus may set the target sample as a sample adjacent to the upper boundary of the current block and derive a reference sample of the target sample from the upper block of the current block.
- the encoding apparatus may use P (x, y) adjacent to the left boundary of the current block as a target sample as shown in FIG. 8.
- the encoding apparatus may derive an upper left adjacent sample of the target sample as a reference sample of the target sample.
- the encoding apparatus may derive a left adjacent sample of the target sample as a reference sample of the target sample.
- the encoding apparatus may derive a lower left adjacent sample of the target sample as a reference sample of the target sample.
- the encoding apparatus may derive the reference sample based on Equation 1 described above.
- the encoding apparatus may use P (x, y) adjacent to an upper boundary of the current block as a target sample as shown in FIG. 9.
- the encoding apparatus may derive a reference sample from the upper left adjacent sample of the target sample.
- the encoding apparatus may derive an upper neighboring sample of the target sample as a reference sample.
- the encoding apparatus may derive the upper-right adjacent sample of the target sample as a reference sample. In this case, the encoding apparatus may derive the reference sample based on Equation 3 described above.
- the encoding apparatus may derive a target sample corresponding to the index and a reference sample of the target sample based on the index of the first intra directional mode.
- the encoding apparatus may use P (x, y) adjacent to the left boundary of the current block as a target sample.
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 2 to 6, the sample position of the reference sample may be (x-1, y+1).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 7 to 13, the sample position of the reference sample may be (x-1, y).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 14 to 17, the sample position of the reference sample may be (x-1, y-1).
- the encoding apparatus may use P (x, y) adjacent to an upper boundary of the current block as a target sample.
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 18 to 22, the sample position of the reference sample may be (x-1, y-1).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 23 to 29, the sample position of the reference sample may be (x, y-1).
- when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 30 to 34, the sample position of the reference sample may be (x+1, y-1).
- the encoding apparatus derives second neighboring samples of the neighboring block to which the reference sample belongs (S1220).
- the second neighboring samples may include the left neighboring samples, the upper-left neighboring sample, and the upper neighboring samples of the neighboring block to which the reference sample belongs.
- the second neighboring samples may be derived as the left neighboring samples p[x0-1][y0+2H_ref-1], ..., p[x0-1][y0], the upper-left neighboring sample p[x0-1][y0-1], and the upper neighboring samples p[x0][y0-1], ..., p[x0+2W_ref-1][y0-1], where (x0, y0) is the top-left sample position of the neighboring block.
- p [m] [n] may represent a sample (or pixel) of the sample position (m, n).
- W_ref and H_ref may correspond to the width and height of the neighboring block, respectively.
- the encoding apparatus determines a modified prediction direction based on the comparison of the reference sample and at least one of the second neighboring samples (S1230).
- the encoding apparatus may derive neighboring sample 1, which is located in the prediction direction of the second intra directional mode with respect to the reference sample.
- neighboring sample 1 is included in the second neighboring samples.
- the encoding apparatus may derive n neighboring samples based on neighboring sample 1 among the second neighboring samples.
- the n neighboring samples may include neighboring sample 1 and a plurality of neighboring samples positioned within a specific distance from neighboring sample 1.
- when the second intra directional mode corresponds to H+m, the encoding apparatus may derive the n neighboring samples corresponding to the prediction directions of a predetermined range (±r) around H+m.
- when the second intra directional mode corresponds to V+m, the encoding apparatus may derive the n neighboring samples corresponding to the prediction directions of a predetermined range (±r) around V+m.
- the encoding apparatus may compare the n neighboring samples with the reference sample and derive, as the modified intra prediction direction, the prediction direction indicating the neighboring sample having the smallest difference from the reference sample. In this case, the encoding apparatus may derive the modified intra prediction direction based on Equation 2 or Equation 4 described above.
- the encoding apparatus derives a prediction value for the target sample based on the modified prediction direction and the first neighboring samples (S1240).
- the encoding apparatus may derive a prediction value for the target sample based on the first neighboring sample located in the modified prediction direction (that is, the neighboring sample located in the modified prediction direction from the target sample position). Meanwhile, when the angle refine flag has a value of 1, the encoding apparatus may derive the prediction value for the target sample based on the modified intra prediction direction.
- the encoding apparatus encodes and outputs intra prediction mode information indicating the first intra directional mode (S1250).
- the intra prediction mode information may include index information indicating the intra prediction mode of the current PU.
- the intra prediction mode information may be transmitted in the form of a bitstream.
- the bitstream may be transmitted to a decoding apparatus via a network or a storage medium.
- the bitstream may also include the angle refine flag.
- the angle refine flag may be transmitted when index information indicating the intra prediction mode of the current PU is greater than one.
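The signaling condition for the angle refine flag can be sketched as follows. Here `read_bit` is a hypothetical bitstream-reading callable, and the assumption that indices 0 and 1 denote the non-directional planar and DC modes follows HEVC-style numbering; neither name is from the source.

```python
def parse_angle_refine_flag(read_bit, intra_mode_index):
    """read_bit: callable returning the next bit from the bitstream.
    Returns the angle refine flag value, or 0 when the flag is not
    transmitted (mode index <= 1, i.e. a non-directional mode)."""
    if intra_mode_index > 1:   # flag present only for directional modes
        return read_bit()
    return 0                   # planar (0) / DC (1): no refinement
```

This mirrors the condition above: the flag is read only when the signaled intra prediction mode index is greater than 1.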
- the bitstream may also include a vertical division flag or a horizontal division flag.
- a plurality of non-square transform units can be derived, thereby improving intra prediction performance.
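The non-square TU partitioning that yields these units can be sketched as follows, based on the wxN / Nxh split sizes described in the claims; the function name and the (x, y, width, height) tuple convention are illustrative assumptions, and N is assumed divisible by the split size.

```python
def split_pu(N, mode, size):
    """Split an NxN PU into non-square TUs: size x N columns for the
    horizontal mode group (modes 2-17), N x size rows for the vertical
    group (modes 18-34). Assumes 1 <= size < N and N % size == 0.
    Each TU is an (x, y, width, height) tuple in PU coordinates."""
    if 2 <= mode <= 17:        # horizontal group: wxN column TUs
        return [(x, 0, size, N) for x in range(0, N, size)]
    if 18 <= mode <= 34:       # vertical group: Nxh row TUs
        return [(0, y, N, size) for y in range(0, N, size)]
    raise ValueError("mode %d is not a directional mode" % mode)
```

For an 8x8 PU with a horizontal-group mode, `split_pu(8, 10, 2)` yields four 2x8 column TUs.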
- the intra prediction direction can be adjusted in the sample unit of the current block, thereby improving the intra prediction performance.
- a reference sample of a target sample of the current block may be selected based on the intra prediction mode, thereby improving intra prediction performance.
- the prediction direction can be adjusted on a per-sample basis in the current block, thereby improving the accuracy of prediction by using the optimal intra prediction direction for each sample, and the overall coding efficiency can be improved by reducing the data amount of the residual signal.
- the above-described method according to the present invention may be implemented in software, and the encoding apparatus and/or decoding apparatus according to the present invention may be included in an apparatus that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
- the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
- the module may be stored in memory and executed by a processor.
- the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
- the processor may include application-specific integrated circuits (ASICs), other chipsets, logic circuits, and / or data processing devices.
- the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium and / or other storage device.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (15)
- An intra prediction method performed by a video decoding apparatus, the method comprising: receiving intra prediction mode information through a bitstream; deriving an intra prediction mode of a current block as a first intra directional mode based on the intra prediction mode information; deriving first neighboring samples of the current block; deriving a reference sample adjacent to a target sample in the current block based on a prediction direction of the first intra directional mode; deriving second neighboring samples of a neighboring block to which the reference sample belongs; determining a modified prediction direction based on a comparison between the reference sample and at least one of the second neighboring samples; and deriving a prediction value for the target sample based on the modified prediction direction and the first neighboring samples.
- The method of claim 1, further comprising: deriving, among the second neighboring samples, neighboring sample 1 located in a prediction direction of a second intra directional mode of the neighboring block with respect to the reference sample; and deriving n neighboring samples based on neighboring samples located within a predetermined distance from neighboring sample 1 among the second neighboring samples, wherein, based on a comparison between the n neighboring samples and the reference sample, a direction indicating the sample having the smallest difference from the reference sample among the n neighboring samples is determined as the modified prediction direction.
- The method of claim 1, wherein, when the first intra directional mode is one of intra prediction modes 2 to 17, the neighboring block is a left block of the current block and the target sample is adjacent to a left boundary of the current block.
- The method of claim 4, wherein, when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 2 to 6, the sample position of the reference sample is (x-1, y+1); when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 7 to 13, the sample position of the reference sample is (x-1, y); and when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 14 to 17, the sample position of the reference sample is (x-1, y-1).
- The method of claim 5, further comprising deriving, based on the modified prediction direction, a prediction value for a sample located to the right of the target sample among the samples of the current block.
- The method of claim 1, wherein, when the first intra directional mode is one of intra prediction modes 18 to 34, the neighboring block is an upper block of the current block and the target sample is adjacent to an upper boundary of the current block.
- The method of claim 7, wherein, when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 18 to 22, the sample position of the reference sample is (x-1, y-1); when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 23 to 29, the sample position of the reference sample is (x, y-1); and when the sample position of the target sample is (x, y) and the first intra directional mode is one of intra prediction modes 30 to 34, the sample position of the reference sample is (x+1, y-1).
- The method of claim 8, further comprising deriving, based on the modified prediction direction, a prediction value for a sample located below the target sample among the samples of the current block.
- The method of claim 1, further comprising deriving a plurality of non-square transform units (TUs) overlapped with a current prediction unit (PU), wherein the current block is one of the non-square TUs.
- The method of claim 10, wherein, when the size of the current PU is NxN and the first intra directional mode is one of intra prediction modes 2 to 17, the current PU is partitioned into the non-square TUs of size wxN (1≤w<N), and when the size of the current PU is NxN and the first intra directional mode is one of intra prediction modes 18 to 34, the current PU is partitioned into the non-square TUs of size Nxh (1≤h<N).
- The method of claim 10, further comprising receiving a vertical partition flag through the bitstream, wherein, when the value of the vertical partition flag is 1, the current PU is partitioned into the non-square TUs of size wxN (1≤w<N), and when the value of the vertical partition flag is 0, the current PU is partitioned into the non-square TUs of size Nxh (1≤h<N).
- The method of claim 1, further comprising receiving an angle refine flag through the bitstream, wherein, when the value of the angle refine flag is 1, the prediction value for the target sample is derived based on the modified prediction direction.
- The method of claim 13, wherein the angle refine flag is received when the intra prediction mode index of the first intra directional mode is greater than 1.
- An intra prediction method performed by a video encoding apparatus, the method comprising: deriving a first intra directional mode and first neighboring samples of a current block; determining a reference sample adjacent to a target sample in the current block based on a prediction direction of the first intra directional mode; deriving second neighboring samples of a neighboring block to which the reference sample belongs; determining a modified prediction direction based on a comparison between the reference sample and at least one of the second neighboring samples; deriving a prediction value for the target sample based on the modified prediction direction and the first neighboring samples; and encoding and outputting intra prediction mode information indicating the first intra directional mode.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/762,946 US10721492B2 (en) | 2015-09-23 | 2016-09-08 | Intra prediction method and device in image coding system |
KR1020187007987A KR20180044943A (ko) | 2015-09-23 | 2016-09-08 | 영상 코딩 시스템에서 인트라 예측 방법 및 장치 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562222749P | 2015-09-23 | 2015-09-23 | |
US62/222,749 | 2015-09-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017052118A1 true WO2017052118A1 (ko) | 2017-03-30 |
Family
ID=58386283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/010062 WO2017052118A1 (ko) | 2015-09-23 | 2016-09-08 | 영상 코딩 시스템에서 인트라 예측 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10721492B2 (ko) |
KR (1) | KR20180044943A (ko) |
WO (1) | WO2017052118A1 (ko) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018216862A1 (ko) * | 2017-05-24 | 2018-11-29 | 엘지전자 주식회사 | 영상 코딩 시스템에서 인트라 예측에 따른 영상 디코딩 방법 및 장치 |
CN110786015A (zh) * | 2017-05-04 | 2020-02-11 | 交互数字Vc控股公司 | 用于帧内预测的最可能模式(mpm)重新排序的方法和装置 |
EP3804312A4 (en) * | 2018-06-01 | 2022-03-16 | Tencent America LLC | METHOD AND DEVICE FOR VIDEO CODING |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180086203A (ko) * | 2015-12-17 | 2018-07-30 | 삼성전자주식회사 | 영상을 부호화/복호화 하는 방법 및 그 장치 |
EP3396953A4 (en) | 2016-02-16 | 2019-01-16 | Samsung Electronics Co., Ltd. | METHOD AND APPARATUS FOR ENCODING / DECODING AN IMAGE |
CN116634176A (zh) * | 2017-05-17 | 2023-08-22 | 株式会社Kt | 用于解码图像信号的方法和用于编码图像信号的方法 |
WO2019009620A1 (ko) * | 2017-07-04 | 2019-01-10 | 엘지전자 주식회사 | 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 |
CN118042130A (zh) * | 2017-11-16 | 2024-05-14 | 松下电器(美国)知识产权公司 | 图像编码装置、编码方法、图像解码装置、解码方法和非暂时性存储介质 |
WO2019151284A1 (ja) | 2018-01-30 | 2019-08-08 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化装置、復号装置、符号化方法及び復号方法 |
CN116320411A (zh) * | 2018-03-29 | 2023-06-23 | 日本放送协会 | 图像编码装置、图像解码装置以及程序 |
US11445203B2 (en) * | 2019-01-04 | 2022-09-13 | Qualcomm Incorporated | Sub-partition intra prediction in video coding |
WO2020180151A1 (ko) * | 2019-03-07 | 2020-09-10 | 엘지전자 주식회사 | 비디오 신호를 처리하기 위한 방법 및 장치 |
US20220224891A1 (en) * | 2019-05-10 | 2022-07-14 | Mediatek Inc. | Method and Apparatus of Chroma Direct Mode Generation for Video Coding |
US11190777B2 (en) * | 2019-06-30 | 2021-11-30 | Tencent America LLC | Method and apparatus for video coding |
US11317090B2 (en) * | 2019-08-12 | 2022-04-26 | Tencent America LLC | Method and apparatus for video coding |
WO2021215849A1 (ko) * | 2020-04-22 | 2021-10-28 | 엘지전자 주식회사 | 포인트 클라우드 데이터 처리 장치 및 처리 방법 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110073263A (ko) * | 2009-12-21 | 2011-06-29 | 한국전자통신연구원 | 인트라 예측 부호화 방법 및 부호화 방법, 그리고 상기 방법을 수행하는 인트라 예측 부호화 장치 및 인트라 예측 복호화 장치 |
KR20120065953A (ko) * | 2010-12-13 | 2012-06-21 | 한국전자통신연구원 | 인트라 예측 방법 및 그 장치 |
KR20130029130A (ko) * | 2011-05-20 | 2013-03-22 | 주식회사 케이티 | 단거리 인트라 예측 단위 복호화 방법 및 복호화 장치 |
KR20130126928A (ko) * | 2010-12-08 | 2013-11-21 | 엘지전자 주식회사 | 인트라 예측 방법과 이를 이용한 부호화 장치 및 복호화 장치 |
KR20150081240A (ko) * | 2015-06-22 | 2015-07-13 | 한양대학교 산학협력단 | 무손실 비디오 부호화/복호화 방법 및 장치 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9100646B2 (en) * | 2008-09-03 | 2015-08-04 | Sk Telecom Co., Ltd. | Device and method for image encoding/decoding using prediction direction conversion and selective encoding |
KR101672456B1 (ko) | 2009-02-09 | 2016-11-17 | 삼성전자 주식회사 | 저복잡도 주파수 변환을 이용한 비디오 부호화 방법과 그 장치, 및 비디오 복호화 방법과 그 장치 |
KR101772046B1 (ko) * | 2010-11-04 | 2017-08-29 | 에스케이텔레콤 주식회사 | 예측모드에 따라 필터링된 화소값으로 인트라예측을 수행하는 영상 부호화/복호화 방법 및 장치 |
PL2658263T3 (pl) * | 2010-12-22 | 2023-03-13 | Lg Electronics Inc. | Sposób predykcji wewnątrzramkowej i urządzenie wykorzystujące ten sposób |
US9532058B2 (en) * | 2011-06-03 | 2016-12-27 | Qualcomm Incorporated | Intra prediction mode coding with directional partitions |
US9787982B2 (en) * | 2011-09-12 | 2017-10-10 | Qualcomm Incorporated | Non-square transform units and prediction units in video coding |
AU2013208472B2 (en) * | 2012-01-13 | 2015-10-01 | Sharp Kabushiki Kaisha | Image decoding device, image encoding device, and data structure of encoded data |
EP2806649A1 (en) * | 2012-01-18 | 2014-11-26 | Electronics and Telecommunications Research Institute | Method and device for encoding and decoding image |
CN107005701A (zh) * | 2014-09-23 | 2017-08-01 | 英特尔公司 | 使用减少数量的角模式来降低帧内预测复杂度及其后续改进 |
MX2017005892A (es) * | 2014-11-05 | 2017-06-27 | Samsung Electronics Co Ltd | Aparato y metodo de codificacion de prediccion por muestra. |
-
2016
- 2016-09-08 WO PCT/KR2016/010062 patent/WO2017052118A1/ko active Application Filing
- 2016-09-08 KR KR1020187007987A patent/KR20180044943A/ko unknown
- 2016-09-08 US US15/762,946 patent/US10721492B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110073263A (ko) * | 2009-12-21 | 2011-06-29 | 한국전자통신연구원 | 인트라 예측 부호화 방법 및 부호화 방법, 그리고 상기 방법을 수행하는 인트라 예측 부호화 장치 및 인트라 예측 복호화 장치 |
KR20130126928A (ko) * | 2010-12-08 | 2013-11-21 | 엘지전자 주식회사 | 인트라 예측 방법과 이를 이용한 부호화 장치 및 복호화 장치 |
KR20120065953A (ko) * | 2010-12-13 | 2012-06-21 | 한국전자통신연구원 | 인트라 예측 방법 및 그 장치 |
KR20130029130A (ko) * | 2011-05-20 | 2013-03-22 | 주식회사 케이티 | 단거리 인트라 예측 단위 복호화 방법 및 복호화 장치 |
KR20150081240A (ko) * | 2015-06-22 | 2015-07-13 | 한양대학교 산학협력단 | 무손실 비디오 부호화/복호화 방법 및 장치 |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110786015A (zh) * | 2017-05-04 | 2020-02-11 | 交互数字Vc控股公司 | 用于帧内预测的最可能模式(mpm)重新排序的方法和装置 |
CN110786015B (zh) * | 2017-05-04 | 2023-09-12 | 交互数字Vc控股公司 | 用于帧内预测的最可能模式(mpm)重新排序的方法和装置 |
WO2018216862A1 (ko) * | 2017-05-24 | 2018-11-29 | 엘지전자 주식회사 | 영상 코딩 시스템에서 인트라 예측에 따른 영상 디코딩 방법 및 장치 |
US10951908B2 (en) | 2017-05-24 | 2021-03-16 | Lg Electronics Inc. | Method and device for decoding image according to intra prediction in image coding system |
EP3804312A4 (en) * | 2018-06-01 | 2022-03-16 | Tencent America LLC | METHOD AND DEVICE FOR VIDEO CODING |
US11722661B2 (en) | 2018-06-01 | 2023-08-08 | Tencent America LLC | Method and apparatus for video coding |
Also Published As
Publication number | Publication date |
---|---|
KR20180044943A (ko) | 2018-05-03 |
US10721492B2 (en) | 2020-07-21 |
US20180295384A1 (en) | 2018-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017052118A1 (ko) | 영상 코딩 시스템에서 인트라 예측 방법 및 장치 | |
WO2017057953A1 (ko) | 비디오 코딩 시스템에서 레지듀얼 신호 코딩 방법 및 장치 | |
WO2017043786A1 (ko) | 비디오 코딩 시스템에서 인트라 예측 방법 및 장치 | |
WO2018062702A1 (ko) | 영상 코딩 시스템에서 인트라 예측 방법 및 장치 | |
WO2019194440A1 (ko) | 인트라 예측 모드에 대한 룩업 테이블을 이용한 영상 코딩 방법 및 그 장치 | |
WO2017014412A1 (ko) | 비디오 코딩 시스템에서 인트라 예측 방법 및 장치 | |
WO2018056603A1 (ko) | 영상 코딩 시스템에서 조도 보상 기반 인터 예측 방법 및 장치 | |
WO2018044088A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2017034331A1 (ko) | 영상 코딩 시스템에서 크로마 샘플 인트라 예측 방법 및 장치 | |
WO2017069590A1 (ko) | 영상 코딩 시스템에서 모델링 기반 영상 디코딩 방법 및 장치 | |
WO2017188565A1 (ko) | 영상 코딩 시스템에서 영상 디코딩 방법 및 장치 | |
WO2017057877A1 (ko) | 영상 코딩 시스템에서 영상 필터링 방법 및 장치 | |
WO2017061671A1 (ko) | 영상 코딩 시스템에서 적응적 변환에 기반한 영상 코딩 방법 및 장치 | |
WO2018056702A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2018216862A1 (ko) | 영상 코딩 시스템에서 인트라 예측에 따른 영상 디코딩 방법 및 장치 | |
WO2019194507A1 (ko) | 어파인 움직임 예측에 기반한 영상 코딩 방법 및 그 장치 | |
WO2018021585A1 (ko) | 영상 코딩 시스템에서 인트라 예측 방법 및 장치 | |
WO2017048008A1 (ko) | 영상 코딩 시스템에서 인터 예측 방법 및 장치 | |
WO2019125035A1 (ko) | 선택적 변환에 기반한 영상 코딩 방법 및 그 장치 | |
WO2018174357A1 (ko) | 영상 코딩 시스템에서 영상 디코딩 방법 및 장치 | |
WO2018128222A1 (ko) | 영상 코딩 시스템에서 영상 디코딩 방법 및 장치 | |
WO2018066791A1 (ko) | 영상 코딩 시스템에서 영상 디코딩 방법 및 장치 | |
WO2020013480A1 (ko) | 인트라 예측 모드를 코딩하는 방법 및 그 장치 | |
WO2018128223A1 (ko) | 영상 코딩 시스템에서 인터 예측 방법 및 장치 | |
WO2018070661A1 (ko) | 영상 코딩 시스템에서 인트라 예측에 따른 영상 디코딩 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16848844 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20187007987 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15762946 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16848844 Country of ref document: EP Kind code of ref document: A1 |