WO2017090957A1 - Video encoding method and apparatus, and video decoding method and apparatus - Google Patents

Video encoding method and apparatus, and video decoding method and apparatus

Info

Publication number
WO2017090957A1
Authority
WO
WIPO (PCT)
Prior art keywords
coding unit
block
sample
current
current block
Prior art date
Application number
PCT/KR2016/013485
Other languages
French (fr)
Korean (ko)
Inventor
원광현
김찬열
이선일
이진영
Original Assignee
삼성전자 주식회사
Priority date
Filing date
Publication date
Priority claimed from US provisional application 62/259,287
Application filed by 삼성전자 주식회사 (Samsung Electronics Co., Ltd.)
Publication of WO2017090957A1

Classifications

    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/46: Embedding additional information in the video signal during the compression process

Abstract

Provided are an encoding or decoding method and apparatus for predicting an image. The image decoding method comprises the steps of: acquiring, from a bitstream, the prediction mode of the current block contained in the current image; if the prediction mode of the current block is intra prediction, determining a collocated block corresponding to the location of the current block in a reference image neighboring the current image; acquiring a reference sample on the basis of at least one of a sample neighboring the current block, a boundary sample in the collocated block, and a sample neighboring the collocated block; acquiring a predictor by intra-predicting the current block on the basis of the reference sample; and reconstructing the current block on the basis of the predictor.

Description

Video encoding method and apparatus, video decoding method and apparatus

The present specification relates to an image encoding method and apparatus and an image decoding method and apparatus, and more particularly, to an image encoding or decoding method and apparatus that predict an image.

Image data is encoded by a codec according to a predetermined data compression standard, for example, the Moving Picture Experts Group (MPEG) standard, and is then stored in a recording medium in the form of a bitstream or transmitted through a communication channel.

With the development and dissemination of hardware capable of playing and storing high-resolution or high-definition image content, there is an increasing need for a codec that efficiently encodes or decodes such content. Encoded video content can be reproduced by decoding it. Recently, methods for effectively compressing high-resolution or high-definition image content have been implemented; for example, efficient image compression is achieved by appropriately processing the image to be encoded.

A video codec reduces the amount of data by applying prediction techniques that exploit the fact that the images of a video are highly correlated with one another temporally and spatially. With these prediction techniques, the current image is predicted from neighboring images, and image information is recorded using the temporal or spatial distance between images, the prediction error, and the like.

The present disclosure provides an encoding or decoding method and apparatus for accurate intra prediction that takes the temporal distance between images into account.

An image decoding method according to an embodiment of the present disclosure includes: obtaining a prediction mode of a current block included in a current image from a bitstream; determining a collocated block corresponding to a position of the current block in a reference image adjacent to the current image when the prediction mode of the current block is intra prediction; obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block; obtaining a predictor by performing intra prediction on the current block based on the reference sample; and reconstructing the current block based on the predictor.

An image decoding method according to an embodiment of the present disclosure includes: obtaining a first flag from a bitstream; if the first flag indicates that a reference sample is to be determined in the current block, obtaining the reference sample based on upper and left samples adjacent to the current block; and if the first flag indicates that a reference sample is to be determined in the collocated block, obtaining the reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block.

An image decoding method according to an embodiment of the present disclosure includes obtaining a reference sample based on right and bottom boundary samples in a collocated block.

An image decoding method according to an embodiment of the present disclosure includes: obtaining a second flag from a bitstream; if the second flag indicates that samples located on the right and bottom are to be used, obtaining a reference sample based on at least one of the right and bottom samples adjacent to the current block and the right and bottom boundary samples in the collocated block; and if the second flag indicates that samples located on the left and top are to be used, obtaining a reference sample based on at least one of the left and top samples adjacent to the current block and the left and top samples adjacent to the collocated block.

An image decoding method according to an embodiment of the present disclosure includes, when the current block is located at the upper left of the current image, obtaining a reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block.

An image decoding method according to an embodiment of the present disclosure includes, when the current block is located at the upper left of the current image, obtaining a reference sample based on left and upper boundary samples in the collocated block.

An image decoding apparatus according to an embodiment of the present disclosure includes: a receiver which receives a bitstream; and a decoder which obtains a prediction mode of a current block included in a current image from the bitstream, determines a collocated block corresponding to the position of the current block in a reference image adjacent to the current image when the prediction mode of the current block is intra prediction, obtains a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block, obtains a predictor by performing intra prediction on the current block based on the reference sample, and reconstructs the current block based on the predictor.

A decoder of an image decoding apparatus according to an embodiment of the present disclosure obtains a first flag from a bitstream, obtains a reference sample based on upper and left samples adjacent to the current block when the first flag indicates that a reference sample is to be determined in the current block, and obtains the reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block when the first flag indicates that a reference sample is to be determined in the collocated block.

The decoder of the image decoding apparatus according to an embodiment of the present disclosure obtains a reference sample based on right and lower boundary samples in the collocated block.

A decoder of an image decoding apparatus according to an embodiment of the present disclosure obtains a second flag from a bitstream; when the second flag indicates that samples located on the right and bottom are to be used, it obtains a reference sample based on at least one of the right and bottom samples adjacent to the current block and the right and bottom boundary samples in the collocated block; and when the second flag indicates that samples located on the left and top are to be used, it obtains a reference sample based on at least one of the left and top samples adjacent to the current block and the left and top samples adjacent to the collocated block.

A decoder of an image decoding apparatus according to an embodiment of the present disclosure obtains a reference sample based on at least one of a boundary sample within the collocated block and a sample adjacent to the collocated block when the current block is located at the upper left of the current image.

The decoder of the image decoding apparatus according to an embodiment of the present disclosure may obtain a reference sample based on left and upper boundary samples in the collocated block when the current block is located at the upper left of the current image.

An image encoding method according to an embodiment of the present disclosure includes: determining a collocated block corresponding to a position of a current block in a reference image adjacent to a current image; obtaining a reference sample based on at least one of left and upper samples adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block; obtaining a predictor by performing intra prediction on the current block based on the reference sample; obtaining encoding information of the current block by encoding the current block based on the predictor; and generating a bitstream from the encoding information.

An image encoding apparatus according to an embodiment of the present disclosure includes: an encoder which determines a collocated block corresponding to a position of a current block in a reference image adjacent to a current image, obtains a reference sample based on at least one of left and upper samples adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block, obtains a predictor by performing intra prediction on the current block based on the reference sample, and obtains encoding information of the current block by encoding the current block based on the predictor; and a bitstream generator which generates a bitstream from the encoding information.

1 is a schematic block diagram of an image decoding apparatus according to an embodiment.

2 is a flowchart of an image decoding method, according to an exemplary embodiment.

3 illustrates an intra prediction mode according to an embodiment.

4A and 4B illustrate a process of predicting a current block based on an intra prediction mode, according to an embodiment.

5 illustrates a process of obtaining sample values for intra prediction using a collocated block according to an embodiment.

6 illustrates a process of obtaining pixel values for intra prediction using a collocated block according to an embodiment.

7 illustrates a process of obtaining pixel values for intra prediction using a collocated block according to an embodiment.

8 illustrates a process of obtaining pixel values for intra prediction using a collocated block according to an embodiment.

9 is a schematic block diagram of an image encoding apparatus according to an embodiment.

10 is a flowchart of an image encoding method, according to an embodiment.

11 illustrates a process of determining at least one coding unit by dividing a current coding unit by an image decoding apparatus according to an embodiment.

12 illustrates a process of determining at least one coding unit by dividing a coding unit having a non-square shape by an image decoding apparatus according to an embodiment.

13 illustrates a process of splitting a coding unit based on at least one of block shape information and split shape information, according to an embodiment.

14 is a diagram illustrating a method of determining, by an image decoding apparatus, a predetermined coding unit from among an odd number of coding units, according to an embodiment.

FIG. 15 illustrates an order in which a plurality of coding units are processed when the image decoding apparatus determines a plurality of coding units by dividing a current coding unit.

16 illustrates a process of determining that a current coding unit is divided into an odd number of coding units when the image decoding apparatus cannot process the coding units in a predetermined order, according to an embodiment.

17 illustrates a process of determining, by an image decoding apparatus, at least one coding unit by dividing a first coding unit according to an embodiment.

FIG. 18 illustrates that the shapes into which a second coding unit may be split are restricted when a non-square second coding unit, determined by splitting a first coding unit, satisfies a predetermined condition, according to an embodiment.

FIG. 19 illustrates a process by which the image decoding apparatus splits a square coding unit when the split shape information indicates that the coding unit is not to be divided into four square coding units, according to an embodiment.

FIG. 20 illustrates that a processing order between a plurality of coding units may vary according to a division process of coding units, according to an embodiment.

21 is a diagram illustrating a process of determining the depth of a coding unit as its shape and size change when the coding unit is recursively split to determine a plurality of coding units, according to an embodiment.

FIG. 22 illustrates a depth and a part index (PID) for classifying coding units, which may be determined according to the shape and size of coding units, according to an embodiment.

FIG. 23 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.

24 is a diagram of a processing block serving as a reference for determining a determination order of a reference coding unit included in a picture, according to an embodiment.

Advantages and features of the disclosed embodiments, and methods of achieving them, will become apparent with reference to the embodiments described below in conjunction with the accompanying drawings. However, the present disclosure is not limited to the embodiments disclosed below and may be implemented in various forms; these embodiments are provided only so that this disclosure will be complete and will fully convey the scope of the invention to those of ordinary skill in the art to which the present disclosure belongs.

Terms used herein will be briefly described, and the disclosed embodiments will be described in detail.

The terms used herein have been selected, as far as possible, from general terms currently in wide use, in consideration of the functions of the present disclosure, but they may vary according to the intention of those skilled in the relevant field, legal precedent, the emergence of new technologies, and the like. In certain cases, a term has been arbitrarily selected by the applicant, in which case its meaning is described in detail in the corresponding description. Therefore, the terms used in the present disclosure should be defined based on their meanings and the overall content of the present disclosure, rather than simply by their names.

A singular expression in this specification includes the plural unless the context clearly indicates otherwise.

When a part of the specification is said to "include" a component, this means that it may further include other components, not that it excludes other components, unless stated otherwise. The term "unit" as used herein refers to a software or hardware component such as an FPGA or an ASIC, and a "unit" performs certain roles. However, a "unit" is not limited to software or hardware. A "unit" may be configured to reside in an addressable storage medium and may be configured to run on one or more processors. Thus, as an example, a "unit" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. The functionality provided by the components and "units" may be combined into a smaller number of components and "units" or further separated into additional components and "units".

Hereinafter, the "image" may be a static image such as a still image of a video or may represent a dynamic image such as a video, that is, the video itself.

Hereinafter, "sample" means data to be processed as data allocated to a sampling position of an image. For example, pixel values and transform coefficients on a transform region may be samples in an image of a spatial domain. A unit including the at least one sample may be defined as a block.

Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily implement them. In the drawings, parts irrelevant to the description are omitted in order to clearly describe the present disclosure.

Hereinafter, an image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method will be described in detail with reference to FIGS. 1 to 24. A method and apparatus for encoding or decoding using image prediction according to an embodiment will be described with reference to FIGS. 1 to 10, and a method of determining a data unit of an image according to an embodiment will be described with reference to FIGS. 11 to 24.

Hereinafter, a method and apparatus for encoding or decoding for accurate intra prediction in consideration of a temporal distance between images according to an embodiment of the present disclosure will be described with reference to FIGS. 1 to 10.

1 is a schematic block diagram of an image decoding apparatus 100 according to an embodiment.

The image decoding apparatus 100 may include a receiver 110 and a decoder 120. The receiver 110 may receive a bitstream. The bitstream includes information obtained by encoding the image by the image encoding apparatus 900. In addition, the bitstream may be transmitted from the image encoding apparatus 900. The image encoding apparatus 900 and the image decoding apparatus 100 may be connected by wire or wirelessly, and the receiver 110 may receive a bitstream through wire or wirelessly. The decoder 120 may reconstruct an image by parsing information from the received bitstream. The operation of the decoder 120 will be described in more detail with reference to FIG. 2.

2 is a flowchart of an image decoding method, according to an exemplary embodiment.

According to an embodiment of the present disclosure, the decoder 120 may obtain (210) the prediction mode of the current block included in the current image from the bitstream. When the prediction mode of the current block is intra prediction, the decoder 120 may determine (220) a collocated block corresponding to the position of the current block in the reference image adjacent to the current image. The decoder 120 may then obtain a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block, obtain (240) a predictor by performing intra prediction on the current block based on the reference sample, and reconstruct (250) the current block based on the predictor.

The image may be divided into maximum coding units. The size of the maximum coding unit may be determined based on information parsed from the bitstream. The maximum coding units may be squares of the same size, but are not limited thereto. The maximum coding unit may be hierarchically divided into coding units based on split information parsed from the bitstream, and a coding unit may be smaller than or equal to the maximum coding unit. For example, when the split information indicates no splitting, the coding unit has the same size as the maximum coding unit. When the split information indicates splitting, the maximum coding unit may be split into coding units of a lower depth. When split information about a coding unit of a lower depth indicates splitting, that coding unit may in turn be split into coding units of a smaller size. However, the segmentation of an image is not limited thereto, and the maximum coding unit and the coding unit may not be distinguished. The division of coding units is described in more detail with reference to FIGS. 11 to 24.
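
The recursive splitting described above can be sketched in a few lines of code. The following Python fragment is a minimal model under assumed names (CodingUnit, split_flags, min_size); it is not the syntax element parsing defined by this specification, only an illustration of how split information drives division into lower-depth coding units.

```python
# Minimal sketch of hierarchical coding-unit splitting driven by split flags.
# CodingUnit, split_flags and min_size are illustrative assumptions, not
# structures defined by this specification.
from dataclasses import dataclass
from typing import List

@dataclass
class CodingUnit:
    x: int     # top-left x coordinate within the picture
    y: int     # top-left y coordinate within the picture
    size: int  # width == height (a square unit is assumed for the sketch)

def split_coding_units(cu: CodingUnit, split_flags, min_size: int = 8) -> List[CodingUnit]:
    """Recursively split a coding unit into four lower-depth units while the
    split flag for that unit indicates splitting."""
    if cu.size <= min_size or not split_flags(cu):
        return [cu]                       # leaf: keep the unit as-is
    half = cu.size // 2
    children = [CodingUnit(cu.x,        cu.y,        half),
                CodingUnit(cu.x + half, cu.y,        half),
                CodingUnit(cu.x,        cu.y + half, half),
                CodingUnit(cu.x + half, cu.y + half, half)]
    out: List[CodingUnit] = []
    for child in children:
        out.extend(split_coding_units(child, split_flags, min_size))
    return out

# Example: split the 64x64 maximum coding unit once, then stop.
leaves = split_coding_units(CodingUnit(0, 0, 64), lambda cu: cu.size == 64)
print([(cu.x, cu.y, cu.size) for cu in leaves])   # four 32x32 units
```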

In addition, the coding unit may be divided into prediction units for prediction of an image; a prediction unit may be equal to or smaller than the coding unit. The coding unit may also be divided into transformation units for transformation of an image; a transformation unit may be equal to or smaller than the coding unit. The shapes and sizes of the transformation unit and the prediction unit may be unrelated to each other. The coding unit may be distinguished from the prediction unit and the transformation unit, but the coding unit may also serve as the prediction unit and the transformation unit. The division of coding units is described in more detail with reference to FIGS. 11 to 24. The current block and the neighboring blocks of the present disclosure may each represent one of a maximum coding unit, a coding unit, a prediction unit, and a transformation unit.

The decoder 120 may obtain the prediction mode of the current block included in the current image from the bitstream. The current image is the image currently being decoded by the decoder 120, and the current block is a block included in the current image. The decoder 120 may obtain information about the prediction mode of the current block by parsing the bitstream. The prediction mode may be intra prediction or inter prediction. Intra prediction is a mode that predicts the current block based on the spatial similarity of samples within an image, whereas inter prediction is a mode that predicts the current block based on the temporal similarity of samples between images. The decoder 120 may determine the prediction mode of the current block to be intra prediction based on the obtained information about the prediction mode.

3 illustrates an intra prediction mode according to an embodiment.

The decoder 120 may perform intra prediction according to 35 intra prediction modes. The intra prediction modes are broadly classified into the Intra_Planar mode, the Intra_DC mode, and the Intra_Angular modes. The Intra_Angular modes have 33 directions, including Intra_Vertical and Intra_Horizontal, so there are 35 intra prediction modes in total. The decoder 120 may parse information about the intra prediction mode from the bitstream. For example, when the information about the intra prediction mode is '0', the decoder 120 may determine the intra prediction mode to be the Intra_Planar mode; when it is '1', the Intra_DC mode; and when it is '2' to '34', one of the directions shown in FIG. 3.
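
As a small illustration of this mode numbering, the following hypothetical Python helper maps a parsed mode index to its category (0 = Intra_Planar, 1 = Intra_DC, 2 to 34 = Intra_Angular). Mode 26 as the vertical direction follows the example given below; treating mode 10 as the horizontal direction is an assumption not stated in this description.

```python
# Sketch of mapping a parsed intra prediction mode index to its category,
# following the 35-mode description above: 0 = Planar, 1 = DC, 2..34 = Angular.
def intra_mode_category(mode: int) -> str:
    if mode == 0:
        return "Intra_Planar"
    if mode == 1:
        return "Intra_DC"
    if 2 <= mode <= 34:
        # Mode 26 is used as the vertical direction in the example that follows;
        # treating mode 10 as the horizontal direction is an assumption here.
        if mode == 26:
            return "Intra_Angular (vertical)"
        if mode == 10:
            return "Intra_Angular (horizontal)"
        return "Intra_Angular"
    raise ValueError("intra prediction mode index must be in [0, 34]")

print(intra_mode_category(26))   # Intra_Angular (vertical)
```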

4A and 4B illustrate a process of predicting a current block based on an intra prediction mode, according to an embodiment.

Referring to FIG. 4A, the decoder 120 may predict the current block 440 in the current image 410. The decoder 120 may determine the prediction mode of the current block 440 to be intra prediction based on the information about the prediction mode parsed from the bitstream. The decoder 120 may also parse information about the intra prediction mode from the bitstream and determine the intra prediction mode of the current block 440 based on that information. For example, the parsed information about the intra prediction mode may be '26'. Referring to FIG. 3, when the information about the intra prediction mode is '26', the decoder 120 may determine the intra prediction mode to be the vertical direction.

The decoder 120 may predict the current block 440 by using neighboring samples of the current block 440. According to an embodiment of the present disclosure, the decoder 120 may determine reference samples 430 to be used for intra prediction using samples of the adjacent blocks 421, 422, 423, 424, and 425. The adjacent blocks 421, 422, 423, 424, and 425 may be blocks located on the upper right side, upper side, upper left side, left side, and lower left side of the current block 440. Reference samples 430 to use for intra prediction may be determined based on lower samples of the adjacent block 421 located on the upper right side of the current block 440. In addition, reference samples 430 to be used for intra prediction may be determined based on lower samples of the adjacent block 422 located above the current block 440. In addition, the reference samples 430 to be used for intra prediction may be determined based on the lower left samples of the adjacent block 423 located on the upper left side of the current block 440. In addition, reference samples 430 to be used for intra prediction may be determined based on samples of the right side of the neighboring block 424 located to the left of the current block 440. In addition, the reference samples 430 to be used for intra prediction may be determined based on the right samples of the adjacent block 425 located at the lower left of the current block 440.

Since the reference samples 430 are located at the left and the upper side of the current block 440, the decoder 120 may more accurately predict an image extending from the current block 440 to the left or the upper side.

The decoder 120 may predict the current block 440 based on the reference samples 430 to be used for intra prediction. For example, when the decoder 120 determines the intra prediction mode of the current block 440 to be the vertical direction, the decoder 120 may predict the current block 440 based on the lower samples of the adjacent block 422 among the reference samples 430.

In addition, the decoder 120 may obtain a predictor by performing intra prediction on the current block 440 based on the reference samples 430. For example, when the intra prediction mode is the vertical direction, the decoder 120 may predict the current block 440 by copying the lower pixel values of the adjacent block 422 downward. The predicted sample values of the current block 440 may be referred to as the 'predictor'. The decoder 120 may parse transform coefficients from the bitstream, inversely quantize and inversely transform them to obtain residuals, and reconstruct the current block based on the predictor and the residuals.
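
The vertical-mode prediction just described, copying the row of reconstructed samples above the current block downward, can be sketched as follows. This is a minimal illustration assuming the top reference row is already available; it is not the full reference-sample derivation of the specification, and the function name is chosen for illustration only.

```python
import numpy as np

def predict_vertical(ref_top: np.ndarray, block_size: int) -> np.ndarray:
    """Vertical intra prediction: copy the row of samples lying directly above
    the current block into every row of the predictor."""
    assert ref_top.shape == (block_size,)
    return np.tile(ref_top, (block_size, 1))

# Example: a 4x4 block predicted from the four reconstructed samples above it.
top_row = np.array([100, 102, 104, 106], dtype=np.int16)
predictor = predict_vertical(top_row, 4)
print(predictor)   # every row equals [100 102 104 106]
```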

Referring to FIG. 4B, the decoder 120 may predict a current block 490 in the current image 460. The decoder 120 may determine reference samples 480 to be used for intra prediction using samples of the adjacent blocks 471, 472, 473, 474, and 475. The adjacent blocks 471, 472, 473, 474, and 475 may be blocks located on the upper right side, right side, lower right side, lower side, and lower left side of the current block 490. The reference samples 480 to use for intra prediction may be determined based on the left samples of the adjacent block 471 located on the upper right side of the current block 490. In addition, the reference samples 480 to use for intra prediction may be determined based on the left samples of the adjacent block 472 located to the right of the current block 490. In addition, the reference samples 480 to use for intra prediction may be determined based on the upper left samples of the adjacent block 473 located on the lower right side of the current block 490. In addition, reference samples 480 to use for intra prediction may be determined based on upper samples of the adjacent block 474 located below the current block 490. In addition, reference samples 480 to be used for intra prediction may be determined based on upper samples of the adjacent block 475 located at the lower left of the current block 490.

Since the reference samples 480 are located on the right side and the bottom side of the current block 490, the decoder 120 may more accurately predict an image extending rightward or downward from the current block 490.

The decoder 120 may predict the current block 490 based on the reference samples 480 to be used for intra prediction. For example, when the decoder 120 determines the intra prediction mode of the current block 490 to be the vertical direction, the decoder 120 may predict the current block 490 based on the upper samples of the adjacent block 474 among the reference samples 480.

In addition, the decoder 120 may obtain a predictor by performing intra prediction on the current block 490 based on the reference samples. For example, when the intra prediction mode is the vertical direction, the decoder 120 may predict the current block 490 by copying the upper pixel values of the adjacent block 474 upward. The predicted sample values of the current block 490 may be referred to as the 'predictor'. The decoder 120 may parse transform coefficients from the bitstream, inversely quantize and inversely transform them to obtain residuals, and reconstruct the current block based on the predictor and the residuals.
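
The final reconstruction step, adding the inverse-quantized and inverse-transformed residual to the predictor, can be sketched as follows. The residual is taken here as already computed; the function name and the 8-bit clipping range are illustrative assumptions.

```python
import numpy as np

def reconstruct_block(predictor: np.ndarray, residual: np.ndarray,
                      bit_depth: int = 8) -> np.ndarray:
    """Add the residual (already inverse-quantized and inverse-transformed)
    to the predictor and clip to the valid sample range."""
    recon = predictor.astype(np.int32) + residual.astype(np.int32)
    return np.clip(recon, 0, (1 << bit_depth) - 1).astype(np.uint8)

predictor = np.full((4, 4), 120, dtype=np.int16)
residual  = np.array([[ 3, -2,  0,  1],
                      [ 0,  4,  2, -1],
                      [-3,  1,  0,  0],
                      [ 2,  0, -2,  5]], dtype=np.int16)
print(reconstruct_block(predictor, residual))
```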

Unlike in FIGS. 4A and 4B, the decoder 120 may determine the sample values to be used for intra prediction using adjacent blocks on the left and bottom of the current block, or using adjacent blocks on the right and top of the current block. More generally, the decoder 120 may determine the sample values to be used for intra prediction using an adjacent block located on at least one of the left, top, right, and bottom of the current block.

5 illustrates a process of obtaining sample values for intra prediction using a collocated block according to an embodiment.

The decoder 120 may predict the current block 520 in the current image 510. The decoder 120 may determine the prediction mode of the current block 520 in the current image 510 to be intra prediction. When the prediction mode of the current block 520 is intra prediction, the decoder 120 may determine the collocated block 570 corresponding to the position of the current block in the reference image 560 adjacent to the current image 510. The collocated block is a block corresponding to the position of the current block in the previously reconstructed image 560. For example, the coordinates of the upper-left sample of the current block 520 within the current image 510 may be the same as the coordinates of the upper-left sample of the collocated block 570 within the previously reconstructed image 560.

A picture order count (POC) is a variable associated with each picture; it indicates the display order of the pictures. The POC is also a unique value identifying a picture within a coded video sequence (CVS), and the relative temporal distance between pictures in the same CVS can be determined from their POCs. The display order and the reconstruction order may differ from each other. The adjacent reference image 560 may be an image reconstructed before the current image 510. The POC of the previously reconstructed image 560 may be N (an integer), and the POC of the current image 510 may be N + M (where M is a nonzero integer). The previously reconstructed image 560 may be the image immediately preceding the current image 510, that is, M may be '1'. When M is '1', the previously reconstructed image 560 is most likely to be similar to the current image 510, so the decoder 120 can perform intra prediction more accurately. However, the present disclosure is not limited thereto, and the decoder 120 may select the image most similar to the current image 510 from among a plurality of previously reconstructed images; for example, M may be greater than or equal to '2' or less than or equal to '-1'. The decoder 120 may then obtain the collocated block within the most similar image.
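
A minimal sketch of this reference-picture and collocated-block selection is given below. The helper names and the 'closest POC' rule are illustrative assumptions; the description only requires a previously reconstructed picture with POC N for a current picture with POC N + M, preferably M = 1.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Picture:
    poc: int   # picture order count

def pick_reference_picture(current_poc: int,
                           reconstructed: List[Picture]) -> Optional[Picture]:
    """Pick the previously reconstructed picture whose POC is closest to the
    current POC (M = current_poc - ref.poc, M != 0); M = 1 is preferred in the
    description above because the immediately preceding picture is most likely
    to resemble the current picture."""
    candidates = [p for p in reconstructed if p.poc != current_poc]
    if not candidates:
        return None
    return min(candidates, key=lambda p: abs(current_poc - p.poc))

def collocated_block_position(block_x: int, block_y: int):
    """The collocated block reuses the current block's top-left coordinates
    inside the chosen reference picture."""
    return block_x, block_y

ref = pick_reference_picture(10, [Picture(7), Picture(8), Picture(9)])
print(ref.poc, collocated_block_position(64, 32))   # 9 (64, 32)
```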

According to an embodiment of the present disclosure, when the prediction mode of the current block 520 is intra prediction, the decoder 120 may obtain reference samples based on at least one of the samples of blocks adjacent to the current block 520 in the current image 510 and the samples of the collocated block 570 in the previously reconstructed image 560. Specifically, the decoder 120 may obtain a reference sample based on at least one of left and upper samples adjacent to the current block 520, a boundary sample in the collocated block, and a sample adjacent to the collocated block.

The manner in which the decoder 120 obtains reference samples using samples adjacent to the current block 520 has already been described with reference to FIGS. 4A and 4B, and redundant description thereof is omitted.

The decoder 120 may obtain the reference samples based on the right and lower boundary samples in the collocated block. Referring to FIG. 5, the decoder 120 may predict the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 based on the right and lower boundary samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570. For example, the decoder 120 may predict the pixel values of the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 from the pixel values of the samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570. The decoder 120 may obtain the predicted samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 as the reference samples.

The coordinates of the samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570 may be the same as the coordinates of the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520. The decoder 120 may predict the current block 520 using the predicted samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 as reference samples. Since the method of generating a predictor by predicting the current block 520 has been described with reference to FIGS. 3, 4A, and 4B, redundant description thereof is omitted.
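
The copying of the right and bottom boundary samples of the collocated block into reference samples at the same coordinates can be sketched as follows; the array layout and function name are assumptions made for illustration.

```python
import numpy as np

def copy_right_bottom_boundary(collocated: np.ndarray) -> dict:
    """Take the right-column and bottom-row boundary samples of the collocated
    block; these become the reference samples at the same coordinates inside
    the current block."""
    return {
        "right_column": collocated[:, -1].copy(),   # samples along the right boundary
        "bottom_row":   collocated[-1, :].copy(),   # samples along the bottom boundary
    }

collocated = np.arange(16, dtype=np.uint8).reshape(4, 4)
refs = copy_right_bottom_boundary(collocated)
print(refs["right_column"], refs["bottom_row"])
```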

FIG. 5 illustrates the process of predicting the samples 531, 532, 533, 534, 535, 536, and 537 in the current block 520 using the boundary samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570, but the present disclosure is not limited thereto. As in FIG. 8, described later, the decoder 120 may instead predict reference samples adjacent to the current block 520 based on the boundary samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570, and may then predict the current block 520 based on those adjacent reference samples. Since the process of predicting a current block using reference samples adjacent to the current block has been described with reference to FIG. 4B, redundant description thereof is omitted.

The size of the current block 520 may differ from the size of the collocated block 570. In that case, the decoder 120 may enlarge or reduce the collocated block 570 so that its size matches that of the current block 520. An interpolation method may be used to enlarge the collocated block 570.
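
A sketch of this size adjustment is shown below. Nearest-neighbour sampling is used only as a stand-in, since the description leaves the interpolation method open; the function name is an assumption.

```python
import numpy as np

def resize_nearest(block: np.ndarray, target: int) -> np.ndarray:
    """Resize a square collocated block to the current block size.
    Nearest-neighbour sampling stands in for the interpolation method,
    which the description leaves open."""
    src = block.shape[0]
    idx = (np.arange(target) * src) // target   # map target positions to source positions
    return block[np.ix_(idx, idx)]

collocated = np.arange(16, dtype=np.uint8).reshape(4, 4)   # 4x4 collocated block
print(resize_nearest(collocated, 8).shape)                 # scaled to (8, 8)
```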

Alternatively, the decoder 120 may not determine the collocated block 570 but instead predict the pixel values of the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 directly from the corresponding samples 581, 582, 583, 584, 585, 586, and 587 in the previously reconstructed image 560. The coordinates of the samples 581, 582, 583, 584, 585, 586, and 587 within the previously reconstructed image 560 may be the same as the coordinates of the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 within the current image 510.

6 illustrates a process of obtaining pixel values for intra prediction using a collocated block according to an embodiment.

The decoder 120 may predict the current block 620 in the current image 610. The decoder 120 may determine the prediction mode of the current block 620 in the current image 610 to be intra prediction. When the prediction mode of the current block 620 is intra prediction, the decoder 120 may determine the collocated block 670 corresponding to the position of the current block in the reference image 660 adjacent to the current image 610. The adjacent reference image 660 may be an image reconstructed before the current image 610. The POC of the previously reconstructed image 660 may be N (an integer), and the POC of the current image 610 may be N + M (where M is a nonzero integer). The previously reconstructed image 660 may be the image immediately preceding the current image 610.

The decoder 120 may determine the collocated block 670 in the previously reconstructed image 660. The decoder 120 may predict the samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620 based on the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670. For example, the decoder 120 may predict the pixel values of the samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620 from the pixel values of the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670. The decoder 120 may obtain the predicted samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620 as the reference samples.

The decoder 120 may predict the current block 620 based on the samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620. Since the method of predicting the current block 620 has been described with reference to FIGS. 3, 4A, and 4B, redundant description thereof is omitted.

FIG. 6 illustrates the process of predicting the samples 631, 632, 633, 634, 635, 636, and 637 in the current block 620 using the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670, but the present disclosure is not limited thereto. As in FIG. 8, described later, the decoder 120 may instead predict reference samples adjacent to the current block 620 based on the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670, and may then predict the current block 620 based on those adjacent reference samples. Since the process of predicting a current block using reference samples adjacent to the current block has been described with reference to FIG. 4B, redundant description thereof is omitted.

Even when the size of the current block 620 is the same as the size of the collocated block 670, the number of samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670 may be greater than the number of boundary samples 631, 632, 633, 634, 635, 636, and 637 in the current block 620. In this case, the decoder 120 may predict the pixel values of the samples 631, 632, 633, 634, 635, 636, and 637 of the current block using the average or median of at least two of the pixel values of the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670. For example, the decoder 120 may obtain the pixel value of the sample 634 of the current block 620 based on the pixel values of the samples 683, 684, and 685: it may predict the pixel value of the sample 634 based on their mean, based on their median, or based on the pixel value of any one of the samples 683, 684, and 685.
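
The derivation of a current-block boundary sample from several samples adjacent to the collocated block, by mean, median, or a single sample, can be sketched as follows; the function name and the 'method' argument are illustrative assumptions.

```python
import numpy as np

def predict_boundary_sample(neighbors: np.ndarray, method: str = "mean") -> int:
    """Derive one boundary sample of the current block from several samples
    adjacent to the collocated block, e.g. the three samples 683, 684 and 685
    used for sample 634 in the description above."""
    if method == "mean":
        return int(round(float(np.mean(neighbors))))
    if method == "median":
        return int(np.median(neighbors))
    if method == "any":
        return int(neighbors[0])   # any single neighboring sample
    raise ValueError("unknown derivation method")

print(predict_boundary_sample(np.array([98, 104, 110])))             # mean   -> 104
print(predict_boundary_sample(np.array([98, 104, 110]), "median"))   # median -> 104
```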

7 illustrates a process of obtaining pixel values for intra prediction using a collocated block according to an embodiment.

The decoder 120 may predict the current block 720 in the current image 710. The decoder 120 may determine the prediction mode of the current block 720 in the current image 710 to be intra prediction. When the prediction mode of the current block 720 is intra prediction, the decoder 120 may determine the collocated block 770 corresponding to the position of the current block in the reference image 760 adjacent to the current image 710. The adjacent reference image 760 may be an image reconstructed before the current image 710. The POC of the previously reconstructed image 760 may be N (an integer), and the POC of the current image 710 may be N + M (where M is a nonzero integer). The previously reconstructed image 760 may be the image immediately preceding the current image 710.

The decoder 120 may determine the collocated block 770 in the previously reconstructed image 760. The decoder 120 may predict the reference samples 731, 732, 733, 734, 735, 736, and 737 based on the left and upper boundary samples 781, 782, 783, 784, 785, 786, and 787 in the collocated block 770. The reference samples 731, 732, 733, 734, 735, 736, and 737 correspond to the left and upper boundary samples in the current block 720. The decoder 120 may predict the current block 720 based on the pixel values of the reference samples 731, 732, 733, 734, 735, 736, and 737 and obtain the predicted current block 720 as the predictor. Since the method of predicting the current block 720 has been described with reference to FIGS. 3, 4A, and 4B, redundant description thereof is omitted.

The size of the current block 720 may differ from the size of the collocated block 770. In that case, the decoder 120 may enlarge or reduce the collocated block 770 so that its size matches that of the current block 720.

Alternatively, the decoder 120 may not determine the collocated block 770 but instead predict the samples 731, 732, 733, 734, 735, 736, and 737 of the current block 720 directly from the corresponding samples 781, 782, 783, 784, 785, 786, and 787 in the previously reconstructed image 760. The coordinates of the samples 781, 782, 783, 784, 785, 786, and 787 within the previously reconstructed image 760 may be the same as the coordinates of the samples 731, 732, 733, 734, 735, 736, and 737 of the current block 720 within the current image 710.

FIG. 7 shows the prediction of samples in the current block 720 using the collocated block 770, but the present disclosure is not limited thereto. The decoder 120 may also predict samples adjacent to the left and top of the current block 720 using the collocated block 770; such adjacent samples are not samples in the current block 720. These left and upper adjacent samples of the current block 720 may serve as reference samples, and the decoder 120 may predict the current block 720 using them. Since the method of predicting the current block has been described with reference to FIGS. 3, 4A, and 4B, redundant description thereof is omitted.

The decoder 120 may obtain a first flag from the bitstream. When the first flag indicates that a reference sample is to be determined in the current block, the decoder 120 may obtain the reference sample based on the upper and left samples adjacent to the current block. When the first flag indicates that a reference sample is to be determined in the collocated block, the decoder 120 may obtain the reference sample based on at least one of a boundary sample located in the collocated block and a sample adjacent to the collocated block.

The first flag determines whether intra prediction is performed based on samples adjacent to the current block or based on samples of the collocated block. For example, when the first flag indicates intra prediction based on samples adjacent to the current block, the decoder 120 may obtain reference samples using the pixel values of samples included in the blocks 421, 422, 423, 424, and 425 or 471, 472, 473, 474, and 475 adjacent to the current block 440 or 490 in the current image 410 or 460, as illustrated in FIG. 4A or 4B. When the first flag indicates intra prediction based on samples of the collocated block, the decoder 120 may obtain reference samples using samples of the previously reconstructed image 560, 660, or 760, as illustrated in FIGS. 5 to 7. For example, the decoder 120 may obtain the reference samples based on at least one of the boundary samples 581 to 587 or 781 to 787 in the collocated block 570 or 770 and the samples 681 to 687 adjacent to the collocated block 670.
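
A minimal sketch of the first-flag decision is given below. The flag values (0 for spatial neighbors, 1 for collocated samples) are an assumption made for illustration; the description only states that the flag selects between the two reference-sample sources.

```python
def select_reference_samples(first_flag: int,
                             spatial_neighbors,
                             collocated_samples):
    """Choose the reference-sample source based on the first flag:
    one value selects the samples adjacent to the current block (spatial),
    the other selects boundary/adjacent samples of the collocated block
    (temporal). The mapping 0/1 is an assumption for illustration."""
    if first_flag == 0:
        return spatial_neighbors      # upper and left samples of the current block
    return collocated_samples         # samples taken from the collocated block

print(select_reference_samples(1, ["spatial samples"], ["collocated samples"]))
```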

According to the present disclosure, since the image decoding apparatus and the image encoding apparatus intra-predict the current block using both temporal and spatial information, they can obtain a high-quality image while increasing compression efficiency.

8 illustrates a process of obtaining pixel values for intra prediction using a collocated block according to an embodiment.

The decoder 120 may predict the current block 820 in the current image 810. The decoder 120 may determine the prediction mode of the current block 820 in the current image 810 to be intra prediction. When the prediction mode of the current block 820 is intra prediction, the decoder 120 may determine whether a reference block exists on at least one of the left and the top of the current block 820. For example, it may determine whether the current block 820 is located at the upper left of the current image 810. When the current block 820 is located at the upper left of the current image 810, no reference block exists on the left or the top of the current block 820.

If the coordinates of the upper-left sample of the current block 820 are the same as the coordinates of the upper-left sample of the current image 810, the decoder 120 may determine that no reference block exists on the left or the top of the current block 820. If the x coordinate of the upper-left sample of the current block 820 is the same as the x coordinate of the upper-left sample of the current image 810, the decoder 120 may determine that no reference block exists on the left of the current block 820. Likewise, if the y coordinate of the upper-left sample of the current block 820 is the same as the y coordinate of the upper-left sample of the current image 810, the decoder 120 may determine that no reference block exists above the current block 820.
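
The coordinate comparison described above can be sketched as a simple availability check; the assumption that the picture's upper-left sample is at (0, 0) and the function name are illustrative.

```python
def left_top_availability(block_x: int, block_y: int):
    """Determine whether reference blocks can exist to the left of and above
    the current block, by comparing the top-left sample coordinates of the
    block with those of the picture (assumed to be (0, 0))."""
    left_available = block_x != 0   # a block exists to the left only if x > 0
    top_available  = block_y != 0   # a block exists above only if y > 0
    return left_available, top_available

print(left_top_availability(0, 0))    # (False, False): block at the picture's upper left
print(left_top_availability(64, 0))   # (True, False): block on the top picture boundary
```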

If no reference block exists on at least one of the left and the top of the current block 820, the decoder 120 may determine the collocated block 870 corresponding to the position of the current block in the reference image 860 adjacent to the current image 810. The adjacent reference image 860 may be an image reconstructed before the current image 810. The POC of the previously reconstructed image 860 may be N (an integer), and the POC of the current image 810 may be N + M (where M is a nonzero integer). The previously reconstructed image 860 may be the image immediately preceding the current image 810; that is, M may be '1'.

The decoder 120 may obtain reference samples based on at least one of a boundary sample in the collocated block 870 and a sample adjacent to the collocated block 870. Referring to FIG. 8, the decoder 120 may determine the reference samples 831, 832, 833, 834, 835, 836, and 837 based on the upper and left boundary samples 881, 882, 883, 884, 885, 886, and 887. As shown in FIG. 8, the reference samples 831, 832, 833, 834, 835, 836, and 837 may lie at virtual positions outside the current image 810. The decoder 120 may predict the current block 820 based on the reference samples 831, 832, 833, 834, 835, 836, and 837 and obtain the predicted current block 820 as the predictor. Since the method of predicting the current block 820 using reference samples has been described with reference to FIGS. 3, 4A, and 4B, redundant description thereof is omitted.

According to an embodiment of the present disclosure, the decoder 120 may obtain a second flag from the bitstream. The second flag is information for determining whether to predict the current block using reference samples located on the right and the bottom of the current block or using reference samples located on the left and the top of the current block. When the second flag indicates that samples located on the right and the bottom are to be used, the decoder 120 may predict the current block based on reference samples located on the right and the bottom of the current block. When the second flag indicates that samples located on the left and the top are to be used, the decoder 120 may predict the current block based on reference samples located on the left and the top of the current block.

More specifically, the decoder 120 may obtain a second flag from the bitstream. When the second flag indicates that samples located on the right and the bottom are to be used, the decoder 120 may obtain the reference samples of the current block based on at least one of the right and bottom samples adjacent to the current block and the right and bottom boundary samples in the collocated block. When the second flag indicates that samples located on the left and the top are to be used, the decoder 120 may obtain the reference samples of the current block based on at least one of the left and top samples adjacent to the current block and the left and top samples adjacent to the collocated block.
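
A sketch of the second-flag selection is given below. The flag values, the dictionary-based representation of neighboring samples, and the fallback to collocated-block samples are assumptions made for illustration.

```python
import numpy as np

def gather_reference_samples(second_flag: int,
                             current_neighbors: dict,
                             collocated: np.ndarray) -> np.ndarray:
    """Gather reference samples according to the second flag.
    current_neighbors holds reconstructed samples adjacent to the current
    block keyed by side ('left', 'top', 'right', 'bottom'); a side missing
    from the dictionary is taken from the collocated block instead."""
    if second_flag == 1:                                      # right/bottom variant
        right  = current_neighbors.get("right",  collocated[:, -1])
        bottom = current_neighbors.get("bottom", collocated[-1, :])
        return np.concatenate([right, bottom])
    left = current_neighbors.get("left", collocated[:, 0])    # left/top variant
    top  = current_neighbors.get("top",  collocated[0, :])
    return np.concatenate([left, top])

collocated = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(gather_reference_samples(1, {}, collocated))   # falls back to collocated samples
```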

When the second flag indicates that the samples located on the right and the bottom are to be used, the decoder 120 may obtain the reference samples as shown in FIG. 4B, 5, or 6. The decoder 120 may obtain the right and bottom samples adjacent to the current block 490 as the reference samples 480, or it may obtain the right and bottom samples in the current block 520 or 620 as the reference samples 531 to 537 or 631 to 637. The decoder 120 may predict the current block using these reference samples and obtain the predicted current block as the predictor.

When the second flag indicates that the samples located on the left and the top are to be used, the decoder 120 may obtain the reference samples as shown in FIG. 4A, 7, or 8. For example, the decoder 120 may obtain the left and top samples adjacent to the current block 440 or 820 as the reference samples 430 or 831 to 837, or it may obtain the left and top samples in the current block 720 as the reference samples 731 to 737. The decoder 120 may predict the current block using these reference samples and obtain the predicted current block as the predictor.

According to the present disclosure, even if a reference block does not exist on at least one of the upper side and the left side of the current block 820, the decoder 120 may intra-predict the current block 820 by obtaining pixel values most similar to the pixel values of the left and upper samples of the current block 820. Accordingly, the image decoding apparatus and the image encoding apparatus may obtain high-quality images while increasing compression efficiency.

FIG. 9 is a schematic block diagram of an image encoding apparatus 900 according to an embodiment.

The image encoding apparatus 900 may include an encoder 910 and a bitstream generator 920. The encoder 910 may receive an input image and encode the input image. The bitstream generator 920 may output a bitstream based on the encoded input image. Also, the image encoding apparatus 900 may transmit the bitstream to the image decoding apparatus 100. Detailed operations of the image encoding apparatus 900 will be described with reference to FIG. 10.

FIG. 10 is a flowchart of a video encoding method, according to an embodiment.

In operation 1010, the encoder 910 may determine a collocated block corresponding to the position of the current block in a reference image adjacent to the current image. The encoder 910 may obtain a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block. In operation 1030, the encoder 910 may obtain a predictor by performing intra prediction on the current block based on the reference sample. In operation 1040, the encoder 910 may obtain encoding information of the current block by encoding the current block based on the predictor. In operation 1050, the bitstream generator 920 may generate a bitstream based on the encoding information.

The encoder 910 may determine a collocated block corresponding to the position of the current block in the reference image adjacent to the current image. The reference image may be a previously reconstructed image. The coordinates of the current block within the current image may be the same as the coordinates of the collocated block within the reference image. For example, the coordinates of the upper-left sample of the current block within the current image may be the same as the coordinates of the upper-left sample of the collocated block within the reference image.
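
As a rough illustration of operations 1010 through 1050, the sketch below walks a single block through collocated-block lookup at identical coordinates, a simplified DC-style intra prediction standing in for a real prediction mode, and quantization of the residual. The helper dc_predict, the quantization step qstep, and the choice of the collocated block's right column and bottom row as reference samples are assumptions for illustration, not elements defined by the disclosure.

```python
import numpy as np

def dc_predict(reference_samples, n):
    # Simplified DC-style prediction: fill the block with the mean of the
    # reference samples, used here as a stand-in for an intra prediction mode.
    return np.full((n, n), int(round(reference_samples.mean())), dtype=np.int32)

def encode_block_sketch(current_image, reference_image, x, y, n, qstep=8):
    """Hypothetical walk-through of operations 1010-1050 for one n x n block."""
    current_block = current_image[y:y + n, x:x + n].astype(np.int32)

    # Operation 1010: the collocated block occupies the same coordinates in
    # the reference image as the current block does in the current image.
    collocated_block = reference_image[y:y + n, x:x + n].astype(np.int32)

    # Reference samples taken here from the collocated block's right column
    # and bottom row (boundary samples in the collocated block).
    reference_samples = np.concatenate(
        [collocated_block[:, n - 1], collocated_block[n - 1, :]])

    # Operation 1030: intra prediction based on the reference samples.
    predictor = dc_predict(reference_samples, n)

    # Operation 1040: encode the block - reduced here to quantizing the
    # residual; a real encoder also transforms and entropy-codes it.
    residual = current_block - predictor
    quantized = np.round(residual / qstep).astype(np.int32)
    return predictor, quantized
```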

The encoder 910 may perform intra prediction on the current block. To intra-predict the current block, the encoder 910 may obtain a reference sample based on at least one of left and upper samples adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block. In addition, the current block may be predicted according to 35 intra prediction modes with respect to the obtained reference sample. Since the prediction of the current block according to the intra prediction modes has been described with reference to FIGS. 3 and 4, a redundant description is omitted. The encoder 910 may obtain, as predictors, current blocks predicted based on each reference sample and each intra prediction mode.

The encoder 910 may select the predictor that best predicts the current block by comparing the predictors with the original of the current block. For example, the encoder 910 may sum the absolute differences between the original sample values of the current block and the sample values of a predictor by using a sum of absolute differences (SAD). The encoder 910 may select the predictor having the smallest SAD as the predictor that best predicts the current block.

In addition, the encoder 910 may sum the squares of the differences between the original sample values of the current block and the sample values of a predictor by using a sum of square error (SSE). The encoder 910 may select the predictor having the smallest SSE as the predictor that best predicts the current block.
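
As a minimal sketch of the SAD- and SSE-based selection just described (function and variable names are illustrative, not taken from the disclosure):

```python
import numpy as np

def sad(block, predictor):
    # Sum of absolute differences between original and predicted samples.
    return int(np.abs(block.astype(np.int64) - predictor).sum())

def sse(block, predictor):
    # Sum of squared errors between original and predicted samples.
    diff = block.astype(np.int64) - predictor
    return int((diff * diff).sum())

def choose_best_predictor(current_block, candidate_predictors, cost=sad):
    """Pick the candidate predictor with the smallest distortion cost.

    candidate_predictors: iterable of (mode_index, predictor_array) pairs,
    e.g. one predictor per intra prediction mode and reference-sample choice.
    """
    best_mode, best_pred, best_cost = None, None, None
    for mode, pred in candidate_predictors:
        c = cost(current_block, pred)
        if best_cost is None or c < best_cost:
            best_mode, best_pred, best_cost = mode, pred, c
    return best_mode, best_pred, best_cost
```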

In addition, the encoder 910 may determine, as encoding information, at least one of information about the reference sample used for the selected predictor and the intra prediction mode. For example, the encoding information may include information indicating that a reference sample is to be obtained based on at least one of samples inside the collocated block, samples adjacent to the collocated block, and samples adjacent to the current block. In addition, the encoding information may include information indicating that a reference sample is to be obtained based on at least one of the left, upper, right, and lower samples of the collocated block or the current block. The encoding information may also include information indicating which one of the 35 intra prediction modes is selected.

In addition, the encoder 910 may obtain the difference between the current block and the selected predictor as a residual. The encoder 910 may obtain a transform coefficient by transforming and quantizing the residual.

The bitstream generator 920 may generate a bitstream including at least one of the encoding information and the transform coefficients. The generated bitstream may be transmitted to the image decoding apparatus 100. The image decoding apparatus 100 may reconstruct an image based on the bitstream.

Hereinafter, a method of determining a data unit of an image according to an exemplary embodiment will be described with reference to FIGS. 11 to 24.

FIG. 11 illustrates a process of determining, by the image decoding apparatus 100, at least one coding unit by splitting a current coding unit, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine the shape of a coding unit by using block shape information, and may determine, by using split shape information, the shape into which the coding unit is split. That is, the splitting method of the coding unit indicated by the split shape information may be determined according to the block shape indicated by the block shape information used by the image decoding apparatus 100.

According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit is square. For example, the image decoding apparatus 100 may determine, according to the split shape information, whether not to split the square coding unit, to split it vertically, to split it horizontally, or to split it into four coding units. Referring to FIG. 11, when the block shape information of the current coding unit 1100 indicates a square shape, the decoder 1130 may determine not to split a coding unit 1110a having the same size as the current coding unit 1100, according to split shape information indicating not to split, or may determine split coding units 1110b, 1110c, 1110d, or the like based on split shape information indicating a predetermined splitting method.

Referring to FIG. 11, the image decoding apparatus 100 may determine two coding units 1110b obtained by splitting the current coding unit 1100 in the vertical direction, based on split shape information indicating a split in the vertical direction. The image decoding apparatus 100 may determine two coding units 1110c obtained by splitting the current coding unit 1100 in the horizontal direction, based on split shape information indicating a split in the horizontal direction. The image decoding apparatus 100 may determine four coding units 1110d obtained by splitting the current coding unit 1100 in the vertical and horizontal directions, based on split shape information indicating splits in the vertical and horizontal directions. However, the shapes into which a square coding unit may be split are not limited to the above shapes and may include various shapes indicated by the split shape information. Predetermined shapes into which a square coding unit may be split will be described in detail below through various embodiments.
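
The four outcomes just described (no split 1110a, vertical split 1110b, horizontal split 1110c, four-way split 1110d) can be sketched as follows; the string values used for the split shape are assumptions made for illustration, since the actual split shape information is a coded syntax element.

```python
def split_square_coding_unit(x, y, size, split_shape):
    """Illustrative splitting of a square coding unit at (x, y).

    Assumed split_shape values:
      'none'       -> keep the unit as-is
      'vertical'   -> two units side by side
      'horizontal' -> two units stacked
      'quad'       -> four square units
    Each coding unit is returned as an (x, y, width, height) tuple.
    """
    half = size // 2
    if split_shape == 'none':
        return [(x, y, size, size)]
    if split_shape == 'vertical':
        return [(x, y, half, size), (x + half, y, half, size)]
    if split_shape == 'horizontal':
        return [(x, y, size, half), (x, y + half, size, half)]
    if split_shape == 'quad':
        return [(x, y, half, half), (x + half, y, half, half),
                (x, y + half, half, half), (x + half, y + half, half, half)]
    raise ValueError('unknown split shape: %r' % (split_shape,))

# Example: a 64x64 square coding unit split into four 32x32 coding units.
print(split_square_coding_unit(0, 0, 64, 'quad'))
```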

FIG. 12 illustrates a process of determining, by the image decoding apparatus 100, at least one coding unit by splitting a coding unit having a non-square shape, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit has a non-square shape. The image decoding apparatus 100 may determine, according to the split shape information, whether not to split the non-square current coding unit or to split it by a predetermined method. Referring to FIG. 12, when the block shape information of the current coding unit 1200 or 1250 indicates a non-square shape, the image decoding apparatus 100 may determine not to split a coding unit 1210 or 1260 having the same size as the current coding unit 1200 or 1250, according to split shape information indicating not to split, or may determine coding units 1210a, 1220b, 1230a, 1230b, 1230c, 1270a, 1270b, 1280a, 1280b, and 1280c split based on split shape information indicating a predetermined splitting method. A predetermined splitting method by which a non-square coding unit is split will be described in detail below through various embodiments.

According to an embodiment, the image decoding apparatus 100 may determine, by using the split shape information, a shape into which a coding unit is split. In this case, the split shape information may indicate the number of at least one coding unit generated by splitting the coding unit. Referring to FIG. 12, when the split shape information indicates that the current coding unit 1200 or 1250 is split into two coding units, the image decoding apparatus 100 may determine two coding units 1220a and 1220b, or 1270a and 1270b, included in the current coding unit by splitting the current coding unit 1200 or 1250 based on the split shape information.

According to an embodiment, when the image decoding apparatus 100 splits the non-square current coding unit 1200 or 1250 based on the split shape information, the image decoding apparatus 100 may split the current coding unit in consideration of the position of the long side of the non-square current coding unit 1200 or 1250. For example, the image decoding apparatus 100 may determine a plurality of coding units by splitting the current coding unit 1200 or 1250 in a direction that divides the long side of the current coding unit 1200 or 1250, in consideration of the shape of the current coding unit 1200 or 1250.

According to an embodiment, when the split shape information indicates that a coding unit is split into an odd number of blocks, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1200 or 1250. For example, when the split shape information indicates that the current coding unit 1200 or 1250 is split into three coding units, the image decoding apparatus 100 may split the current coding unit 1200 or 1250 into three coding units 1230a, 1230b, and 1230c, or 1280a, 1280b, and 1280c. According to an embodiment, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1200 or 1250, and the sizes of the determined coding units may not all be the same. For example, a predetermined coding unit 1230b or 1280b among the determined odd number of coding units 1230a, 1230b, 1230c, 1280a, 1280b, and 1280c may have a size different from that of the other coding units 1230a, 1230c, 1280a, and 1280c. That is, coding units determined by splitting the current coding unit 1200 or 1250 may have a plurality of sizes, and in some cases, the odd number of coding units 1230a, 1230b, 1230c, 1280a, 1280b, and 1280c may each have a different size.

According to an embodiment, when the split shape information indicates that a coding unit is split into an odd number of blocks, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1200 or 1250 and may set a predetermined restriction on at least one of the odd number of coding units generated by the splitting. Referring to FIG. 12, the image decoding apparatus 100 may make the decoding process of the coding unit 1230b or 1280b located at the center of the three coding units 1230a, 1230b, and 1230c, or 1280a, 1280b, and 1280c, generated by splitting the current coding unit 1200 or 1250, different from that of the other coding units 1230a and 1230c, or 1280a and 1280c. For example, unlike the other coding units 1230a, 1230c, 1280a, and 1280c, the image decoding apparatus 100 may restrict the centrally located coding unit 1230b or 1280b from being split any further, or may restrict it to being split only a predetermined number of times.
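
As one hedged illustration of an odd split in which the center coding unit differs in size from the outer coding units, the sketch below divides the long side in a 1:2:1 ratio. The ratio itself is an assumption made for illustration, not something fixed by the disclosure, which only requires that the center unit may have a different size.

```python
def tri_split_non_square(x, y, width, height):
    """Hypothetical odd split of a non-square coding unit into three parts.

    Returns (x, y, width, height) tuples; the center unit is twice the size
    of the outer units along the long side (assumed 1:2:1 ratio).
    """
    if width >= height:                      # long side is horizontal
        q = width // 4
        return [(x, y, q, height),
                (x + q, y, 2 * q, height),   # center unit, twice as wide
                (x + 3 * q, y, q, height)]
    else:                                    # long side is vertical
        q = height // 4
        return [(x, y, width, q),
                (x, y + q, width, 2 * q),    # center unit, twice as tall
                (x, y + 3 * q, width, q)]
```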

FIG. 13 illustrates a process of splitting a coding unit, performed by the image decoding apparatus 100, based on at least one of block shape information and split shape information, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine, based on at least one of block shape information and split shape information, whether to split the square first coding unit 1300 into coding units or not to split it. According to an embodiment, when the split shape information indicates splitting the first coding unit 1300 in the horizontal direction, the image decoding apparatus 100 may determine a second coding unit 1310 by splitting the first coding unit 1300 in the horizontal direction. The terms first coding unit, second coding unit, and third coding unit used according to an embodiment express the before-and-after relationship between coding units. For example, when the first coding unit is split, the second coding unit may be determined, and when the second coding unit is split, the third coding unit may be determined. Hereinafter, the relationship among the first coding unit, the second coding unit, and the third coding unit follows this feature.

According to an embodiment, the image decoding apparatus 100 may determine, based on at least one of block shape information and split shape information, whether to split the determined second coding unit 1310 into coding units or not to split it. Referring to FIG. 13, the image decoding apparatus 100 may split the non-square second coding unit 1310, which was determined by splitting the first coding unit 1300, into at least one third coding unit 1320a, 1320b, 1320c, 1320d, or the like, or may not split the second coding unit 1310, based on at least one of block shape information and split shape information. The image decoding apparatus 100 may obtain at least one of block shape information and split shape information, may split the first coding unit 1300 into a plurality of second coding units (e.g., 1310) of various shapes based on the obtained information, and the second coding unit 1310 may be split in the manner in which the first coding unit 1300 was split, based on at least one of block shape information and split shape information. According to an embodiment, when the first coding unit 1300 is split into the second coding unit 1310 based on at least one of the block shape information and split shape information of the first coding unit 1300, the second coding unit 1310 may also be split into third coding units (e.g., 1320a, 1320b, 1320c, 1320d, etc.) based on at least one of the block shape information and split shape information of the second coding unit 1310. That is, a coding unit may be recursively split based on at least one of the split shape information and block shape information associated with each coding unit. Therefore, a square coding unit may be determined from a non-square coding unit, and the square coding unit may be recursively split to determine a non-square coding unit. Referring to FIG. 13, a predetermined coding unit (e.g., a coding unit located at the center, or a square coding unit) among the odd number of third coding units 1320b, 1320c, and 1320d determined by splitting the non-square second coding unit 1310 may be recursively split. According to an embodiment, the square third coding unit 1320c, which is one of the odd number of third coding units 1320b, 1320c, and 1320d, may be split in the horizontal direction into a plurality of fourth coding units. The non-square fourth coding unit 1340, which is one of the plurality of fourth coding units, may again be split into a plurality of coding units. For example, the non-square fourth coding unit 1340 may be split into an odd number of coding units 1350a, 1350b, and 1350c. A method that may be used for the recursive splitting of coding units will be described below through various embodiments.
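
A minimal sketch of the recursive splitting described above follows. The get_split_shape callback stands in for the per-coding-unit block shape and split shape information parsed from the bitstream and is purely an assumption; only binary vertical and horizontal splits are modeled here.

```python
def split_recursively(unit, get_split_shape, depth=0, max_depth=4):
    """Hedged sketch of recursive coding-unit splitting.

    unit: (x, y, width, height). get_split_shape(unit, depth) returns
    'none', 'vertical', or 'horizontal' for that coding unit. Returns the
    leaf coding units in the order they were produced.
    """
    x, y, w, h = unit
    shape = get_split_shape(unit, depth)
    if shape == 'none' or depth >= max_depth:
        return [unit]
    if shape == 'vertical':
        children = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    else:  # 'horizontal'
        children = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    leaves = []
    for child in children:
        # Each child is split again, independently, using its own information.
        leaves.extend(split_recursively(child, get_split_shape, depth + 1, max_depth))
    return leaves
```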

According to an embodiment, the image decoding apparatus 100 may determine whether to split each of the third coding units 1320a, 1320b, 1320c, 1320d, etc. into coding units based on at least one of block shape information and split shape information, or may determine not to split the second coding unit 1310. The image decoding apparatus 100 may split the non-square second coding unit 1310 into the odd number of third coding units 1320b, 1320c, and 1320d. The image decoding apparatus 100 may put a predetermined restriction on a predetermined third coding unit among the odd number of third coding units 1320b, 1320c, and 1320d. For example, the image decoding apparatus 100 may restrict the coding unit 1320c located at the center of the odd number of third coding units 1320b, 1320c, and 1320d from being split any further, or may restrict it to being split a settable number of times. Referring to FIG. 13, the image decoding apparatus 100 may restrict the coding unit 1320c, which is located at the center of the odd number of third coding units 1320b, 1320c, and 1320d included in the non-square second coding unit 1310, from being split any further, to being split into a predetermined split shape (e.g., split into only four coding units or split into a shape corresponding to the shape into which the second coding unit 1310 was split), or to being split only a predetermined number of times (e.g., split only n times, where n > 0). However, since the above restrictions on the coding unit 1320c located at the center are merely embodiments, they should not be construed as being limited thereto, and should be construed as including various restrictions by which the coding unit 1320c located at the center is decoded differently from the other coding units 1320b and 1320d.

According to an embodiment, the image decoding apparatus 100 may obtain at least one of block shape information and split shape information, which are used to split the current coding unit, from a sample at a predetermined position in the current coding unit.

FIG. 14 illustrates a method of determining, by the image decoding apparatus 100, a predetermined coding unit from among an odd number of coding units, according to an embodiment.

Referring to FIG. 14, at least one of the block shape information and the split shape information of the current coding unit 1400 may be obtained from a sample at a predetermined position among a plurality of samples included in the current coding unit 1400 (e.g., a sample 1440 located at the center). However, the predetermined position in the current coding unit 1400 from which at least one of the block shape information and the split shape information may be obtained is not limited to the center position shown in FIG. 14, and should be construed as including various positions that may be included in the current coding unit 1400 (e.g., top, bottom, left, right, upper left, lower left, upper right, lower right, etc.). The image decoding apparatus 100 may obtain at least one of the block shape information and the split shape information from the predetermined position and may determine whether or not the current coding unit is split into coding units of various shapes and sizes.

According to an embodiment, when the current coding unit is divided into a predetermined number of coding units, the image decoding apparatus 100 may select one coding unit from among them. Methods for selecting one of a plurality of coding units may vary, which will be described below through various embodiments.

According to an embodiment, the image decoding apparatus 100 may divide a current coding unit into a plurality of coding units and determine a coding unit of a predetermined position.

According to an embodiment, the image decoding apparatus 100 may use information indicating the position of each of an odd number of coding units to determine a coding unit located at the center of the odd number of coding units. Referring to FIG. 14, the image decoding apparatus 100 may determine an odd number of coding units 1420a, 1420b, and 1420c by splitting the current coding unit 1400. The image decoding apparatus 100 may determine the center coding unit 1420b by using information about the positions of the odd number of coding units 1420a, 1420b, and 1420c. For example, the image decoding apparatus 100 may determine the positions of the coding units 1420a, 1420b, and 1420c based on information indicating the positions of predetermined samples included in the coding units 1420a, 1420b, and 1420c, and thereby determine the coding unit 1420b located at the center. In detail, the image decoding apparatus 100 may determine the positions of the coding units 1420a, 1420b, and 1420c based on information indicating the positions of the upper-left samples 1430a, 1430b, and 1430c of the coding units 1420a, 1420b, and 1420c, and thereby determine the coding unit 1420b located at the center.

According to an embodiment, the information indicating the positions of the upper-left samples 1430a, 1430b, and 1430c included in the coding units 1420a, 1420b, and 1420c, respectively, may include information about the positions or coordinates of the coding units 1420a, 1420b, and 1420c in a picture. According to an embodiment, the information indicating the positions of the upper-left samples 1430a, 1430b, and 1430c included in the coding units 1420a, 1420b, and 1420c, respectively, may include information indicating the widths or heights of the coding units 1420a, 1420b, and 1420c included in the current coding unit 1400, and the widths or heights may correspond to information indicating differences between the coordinates of the coding units 1420a, 1420b, and 1420c in the picture. That is, the image decoding apparatus 100 may determine the coding unit 1420b located at the center by directly using the information about the positions or coordinates of the coding units 1420a, 1420b, and 1420c in the picture, or by using information about the widths or heights of the coding units corresponding to the differences between the coordinates.

According to an embodiment, the information indicating the position of the upper-left sample 1430a of the upper coding unit 1420a may indicate (xa, ya) coordinates, the information indicating the position of the upper-left sample 1430b of the center coding unit 1420b may indicate (xb, yb) coordinates, and the information indicating the position of the upper-left sample 1430c of the lower coding unit 1420c may indicate (xc, yc) coordinates. The image decoding apparatus 100 may determine the center coding unit 1420b by using the coordinates of the upper-left samples 1430a, 1430b, and 1430c included in the coding units 1420a, 1420b, and 1420c, respectively. For example, when the coordinates of the upper-left samples 1430a, 1430b, and 1430c are sorted in ascending or descending order, the coding unit 1420b including (xb, yb), the coordinates of the sample 1430b located at the center, may be determined as the coding unit located at the center of the coding units 1420a, 1420b, and 1420c determined by splitting the current coding unit 1400. However, the coordinates indicating the positions of the upper-left samples 1430a, 1430b, and 1430c may be coordinates indicating absolute positions in the picture; alternatively, (dxb, dyb) coordinates, which are information indicating the relative position of the upper-left sample 1430b of the center coding unit 1420b with respect to the upper-left sample 1430a of the upper coding unit 1420a, and (dxc, dyc) coordinates, which are information indicating the relative position of the upper-left sample 1430c of the lower coding unit 1420c with respect to the same reference, may also be used. In addition, the method of determining a coding unit at a predetermined position by using the coordinates of a sample as information indicating the position of the sample included in a coding unit should not be construed as being limited to the above method, and should be construed as including various arithmetic methods capable of using the coordinates of the sample.

According to an embodiment, the image decoding apparatus 100 may split the current coding unit 1400 into the plurality of coding units 1420a, 1420b, and 1420c and may select one of the coding units 1420a, 1420b, and 1420c according to a predetermined criterion. For example, the image decoding apparatus 100 may select the coding unit 1420b, which has a size different from that of the others, from among the coding units 1420a, 1420b, and 1420c.

According to an embodiment, the image decoding apparatus 100 may determine the width or height of each of the coding units 1420a, 1420b, and 1420c by using the (xa, ya) coordinates, which are information indicating the position of the upper-left sample 1430a of the upper coding unit 1420a, the (xb, yb) coordinates, which are information indicating the position of the upper-left sample 1430b of the center coding unit 1420b, and the (xc, yc) coordinates, which are information indicating the position of the upper-left sample 1430c of the lower coding unit 1420c. The image decoding apparatus 100 may determine the respective sizes of the coding units 1420a, 1420b, and 1420c by using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating the positions of the coding units 1420a, 1420b, and 1420c.

According to an embodiment, the image decoding apparatus 100 may determine the width of the upper coding unit 1420a as xb-xa and its height as yb-ya. According to an embodiment, the image decoding apparatus 100 may determine the width of the center coding unit 1420b as xc-xb and its height as yc-yb. According to an embodiment, the image decoding apparatus 100 may determine the width or height of the lower coding unit by using the width or height of the current coding unit and the widths and heights of the upper coding unit 1420a and the center coding unit 1420b. The image decoding apparatus 100 may determine a coding unit having a size different from that of the other coding units, based on the determined widths and heights of the coding units 1420a, 1420b, and 1420c. Referring to FIG. 14, the image decoding apparatus 100 may determine the coding unit 1420b, whose size differs from that of the upper coding unit 1420a and the lower coding unit 1420c, as the coding unit at the predetermined position. However, the above process in which the image decoding apparatus 100 determines a coding unit having a size different from that of the other coding units merely corresponds to an example of determining a coding unit at a predetermined position by using the sizes of coding units determined based on sample coordinates, and various processes of determining a coding unit at a predetermined position by comparing the sizes of coding units determined based on predetermined sample coordinates may be used.
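
The size-comparison step can be sketched as follows, under the assumption of three vertically stacked coding units whose heights are recovered from the differences between consecutive upper-left y coordinates; this is only one reading of the coordinate arithmetic above, and the tuple layout is illustrative.

```python
def find_differently_sized_unit(top_left_coords, parent_height):
    """Pick the coding unit whose size differs from its neighbours.

    top_left_coords: [(xa, ya), (xb, yb), (xc, yc)] for three coding units
    stacked in the vertical direction (assumed layout). The last unit is
    assumed to extend to the bottom of the parent coding unit.
    Returns (index, height) or None if all units have the same size.
    """
    ys = [y for _, y in top_left_coords] + [top_left_coords[0][1] + parent_height]
    heights = [ys[i + 1] - ys[i] for i in range(len(top_left_coords))]
    for i, h in enumerate(heights):
        others = heights[:i] + heights[i + 1:]
        if all(h != o for o in others):   # size differs from every other unit
            return i, h
    return None

# Example: heights 16, 32, 16 -> index 1 (the center unit) is selected.
print(find_differently_sized_unit([(0, 0), (0, 16), (0, 48)], 64))  # (1, 32)
```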

However, the position of the sample considered for determining the position of a coding unit should not be construed as being limited to the upper-left position described above; information about the position of any sample included in the coding unit may be used.

According to an embodiment, the image decoding apparatus 100 may select a coding unit at a predetermined position from among an odd number of coding units determined by splitting the current coding unit, in consideration of the shape of the current coding unit. For example, when the current coding unit has a non-square shape whose width is greater than its height, the image decoding apparatus 100 may determine the coding unit at a predetermined position along the horizontal direction. That is, the image decoding apparatus 100 may determine one of the coding units at different positions in the horizontal direction and place a restriction on that coding unit. When the current coding unit has a non-square shape whose height is greater than its width, the image decoding apparatus 100 may determine the coding unit at a predetermined position along the vertical direction. That is, the image decoding apparatus 100 may determine one of the coding units at different positions in the vertical direction and place a restriction on that coding unit.

According to an embodiment, the image decoding apparatus 100 may use information indicating the position of each of an even number of coding units to determine a coding unit at a predetermined position among the even number of coding units. The image decoding apparatus 100 may determine an even number of coding units by splitting the current coding unit and may determine the coding unit at the predetermined position by using the information about the positions of the even number of coding units. A detailed process therefor may correspond to the process of determining a coding unit at a predetermined position (e.g., the center position) among an odd number of coding units, described above with reference to FIG. 14.

According to an embodiment, when a non-square current coding unit is split into a plurality of coding units, predetermined information about a coding unit at a predetermined position may be used during the splitting process to determine the coding unit at the predetermined position among the plurality of coding units. For example, the image decoding apparatus 100 may use at least one of block shape information and split shape information stored in a sample included in the center coding unit during the splitting process, in order to determine the coding unit located at the center among the plurality of coding units into which the current coding unit is split.

Referring to FIG. 14, the image decoding apparatus 100 may split the current coding unit 1400 into the plurality of coding units 1420a, 1420b, and 1420c based on at least one of block shape information and split shape information, and may determine the coding unit 1420b located at the center of the plurality of coding units 1420a, 1420b, and 1420c. Furthermore, the image decoding apparatus 100 may determine the coding unit 1420b located at the center in consideration of the position from which at least one of the block shape information and the split shape information is obtained. That is, at least one of the block shape information and the split shape information of the current coding unit 1400 may be obtained from the sample 1440 located at the center of the current coding unit 1400, and when the current coding unit 1400 is split into the plurality of coding units 1420a, 1420b, and 1420c based on at least one of the block shape information and the split shape information, the coding unit 1420b including the sample 1440 may be determined as the coding unit located at the center. However, the information used to determine the coding unit located at the center should not be construed as being limited to at least one of block shape information and split shape information, and various kinds of information may be used in the process of determining the coding unit located at the center.

According to an embodiment, predetermined information for identifying a coding unit at a predetermined position may be obtained from a predetermined sample included in the coding unit to be determined. Referring to FIG. 14, the image decoding apparatus 100 may use at least one of block shape information and split shape information obtained from a sample at a predetermined position in the current coding unit 1400 (e.g., a sample located at the center of the current coding unit 1400) to determine a coding unit at a predetermined position (e.g., the coding unit located at the center) among the plurality of coding units 1420a, 1420b, and 1420c determined by splitting the current coding unit 1400. That is, the image decoding apparatus 100 may determine the sample at the predetermined position in consideration of the block shape of the current coding unit 1400, may determine, from among the plurality of coding units 1420a, 1420b, and 1420c determined by splitting the current coding unit 1400, the coding unit 1420b including the sample from which the predetermined information (e.g., at least one of the block shape information and the split shape information) can be obtained, and may place a predetermined restriction on that coding unit. Referring to FIG. 14, according to an embodiment, the image decoding apparatus 100 may determine the sample 1440 located at the center of the current coding unit 1400 as the sample from which the predetermined information can be obtained, and may place a predetermined restriction on the coding unit 1420b including the sample 1440 in the decoding process. However, the position of the sample from which the predetermined information can be obtained should not be construed as being limited to the above position, and may be construed as a sample at an arbitrary position included in the coding unit 1420b to be determined for the restriction.

According to an embodiment, the position of the sample from which the predetermined information may be obtained may be determined according to the shape of the current coding unit 1400. According to an embodiment, the block shape information may indicate whether the shape of the current coding unit is square or non-square, and the position of the sample from which the predetermined information may be obtained may be determined according to the shape. For example, the image decoding apparatus 100 may determine, as the sample from which the predetermined information may be obtained, a sample located on a boundary that divides at least one of the width and height of the current coding unit in half, by using at least one of the information about the width and the information about the height of the current coding unit. As another example, when the block shape information of the current coding unit indicates a non-square shape, the image decoding apparatus 100 may determine, as the sample from which the predetermined information may be obtained, one of the samples adjacent to the boundary that divides the long side of the current coding unit in half.

According to an embodiment, when the current coding unit is split into a plurality of coding units, the image decoding apparatus 100 may use at least one of block shape information and split shape information to determine a coding unit at a predetermined position among the plurality of coding units. According to an embodiment, the image decoding apparatus 100 may obtain at least one of the block shape information and the split shape information from a sample at a predetermined position included in a coding unit, and may split the plurality of coding units generated by splitting the current coding unit by using at least one of the split shape information and the block shape information obtained from the sample at the predetermined position included in each of the plurality of coding units. That is, coding units may be recursively split by using at least one of the block shape information and the split shape information obtained from the sample at the predetermined position included in each coding unit. Since the recursive splitting process of a coding unit has been described above with reference to FIG. 13, a detailed description thereof is omitted.

According to an embodiment, the image decoding apparatus 100 may determine at least one coding unit by splitting the current coding unit, and may determine the order in which the at least one coding unit is decoded according to a predetermined block (e.g., the current coding unit).

FIG. 15 illustrates an order in which a plurality of coding units are processed when the image decoding apparatus 100 determines a plurality of coding units by dividing a current coding unit.

According to an embodiment, the image decoding apparatus 100 may determine second coding units 1510a and 1510b by splitting the first coding unit 1500 in the vertical direction, determine second coding units 1530a and 1530b by splitting the first coding unit 1500 in the horizontal direction, or determine second coding units 1550a, 1550b, 1550c, and 1550d by splitting the first coding unit 1500 in the vertical and horizontal directions, according to the block shape information and the split shape information.

Referring to FIG. 15, the image decoding apparatus 100 may determine that the second coding units 1510a and 1510b, determined by splitting the first coding unit 1500 in the vertical direction, are processed in the horizontal direction order 1510c. The image decoding apparatus 100 may determine that the second coding units 1530a and 1530b, determined by splitting the first coding unit 1500 in the horizontal direction, are processed in the vertical direction order 1530c. The image decoding apparatus 100 may determine that the second coding units 1550a, 1550b, 1550c, and 1550d, determined by splitting the first coding unit 1500 in the vertical and horizontal directions, are processed according to a predetermined order in which the coding units located in one row are processed and then the coding units located in the next row are processed (e.g., a raster scan order or z-scan order 1550e).
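
The three processing orders just described (1510c, 1530c, and the z-scan 1550e of FIG. 15) can be sketched as a simple lookup; the dictionary keys and split-shape names are illustrative assumptions rather than terms used in the disclosure.

```python
def processing_order(split_shape, units):
    """Illustrative ordering of second coding units by split shape."""
    if split_shape == 'vertical':        # e.g. 1510a, 1510b -> left to right (1510c)
        return [units['left'], units['right']]
    if split_shape == 'horizontal':      # e.g. 1530a, 1530b -> top to bottom (1530c)
        return [units['top'], units['bottom']]
    if split_shape == 'quad':            # e.g. 1550a-1550d -> z-scan order (1550e)
        return [units['top_left'], units['top_right'],
                units['bottom_left'], units['bottom_right']]
    raise ValueError('unknown split shape: %r' % (split_shape,))

# Example: z-scan over a quad split.
order = processing_order('quad', {'top_left': '1550a', 'top_right': '1550b',
                                  'bottom_left': '1550c', 'bottom_right': '1550d'})
print(order)  # ['1550a', '1550b', '1550c', '1550d']
```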

According to an embodiment, the image decoding apparatus 100 may recursively split coding units. Referring to FIG. 15, the image decoding apparatus 100 may determine the plurality of coding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d by splitting the first coding unit 1500, and may recursively split each of the determined coding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d. The method of splitting the plurality of coding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d may correspond to the method of splitting the first coding unit 1500. Accordingly, each of the plurality of coding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d may be independently split into a plurality of coding units. Referring to FIG. 15, the image decoding apparatus 100 may determine the second coding units 1510a and 1510b by splitting the first coding unit 1500 in the vertical direction, and may further determine whether to split or not to split each of the second coding units 1510a and 1510b independently.

According to an embodiment, the image decoding apparatus 100 may determine third coding units 1520a and 1520b by splitting the left second coding unit 1510a in the horizontal direction, and may not split the right second coding unit 1510b.

According to an embodiment, the processing order of coding units may be determined based on the splitting process of the coding units. In other words, the processing order of split coding units may be determined based on the processing order of the coding units immediately before being split. The image decoding apparatus 100 may determine the order in which the third coding units 1520a and 1520b, determined by splitting the left second coding unit 1510a, are processed independently of the right second coding unit 1510b. Since the third coding units 1520a and 1520b are determined by splitting the left second coding unit 1510a in the horizontal direction, the third coding units 1520a and 1520b may be processed in the vertical direction order 1520c. In addition, since the order in which the left second coding unit 1510a and the right second coding unit 1510b are processed corresponds to the horizontal direction order 1510c, the right second coding unit 1510b may be processed after the third coding units 1520a and 1520b included in the left second coding unit 1510a are processed in the vertical direction order 1520c. The above description is intended to explain the process in which a processing order is determined according to the coding units before splitting, and should not be construed as being limited to the above embodiments; it should be construed as including various methods by which coding units determined in various shapes may be independently processed in a predetermined order.

FIG. 16 illustrates a process of determining, by the image decoding apparatus 100, that a current coding unit is split into an odd number of coding units when the coding units cannot be processed in a predetermined order, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine that the current coding unit is split into an odd number of coding units, based on obtained block shape information and split shape information. Referring to FIG. 16, a square first coding unit 1600 may be split into non-square second coding units 1610a and 1610b, and each of the second coding units 1610a and 1610b may be independently split into third coding units 1620a, 1620b, 1620c, 1620d, and 1620e. According to an embodiment, the image decoding apparatus 100 may determine a plurality of third coding units 1620a and 1620b by splitting the left second coding unit 1610a in the horizontal direction, and may split the right second coding unit 1610b into an odd number of third coding units 1620c, 1620d, and 1620e.

According to an embodiment, the image decoding apparatus 100 may determine whether a coding unit split into an odd number exists by determining whether the third coding units 1620a, 1620b, 1620c, 1620d, and 1620e can be processed in a predetermined order. Referring to FIG. 16, the image decoding apparatus 100 may determine the third coding units 1620a, 1620b, 1620c, 1620d, and 1620e by recursively splitting the first coding unit 1600. The image decoding apparatus 100 may determine, based on at least one of block shape information and split shape information, whether the first coding unit 1600, the second coding units 1610a and 1610b, or the third coding units 1620a, 1620b, 1620c, 1620d, and 1620e are split into an odd number of coding units among the split shapes. For example, the coding unit located on the right among the second coding units 1610a and 1610b may be split into the odd number of third coding units 1620c, 1620d, and 1620e. The order in which the plurality of coding units included in the first coding unit 1600 are processed may be a predetermined order (e.g., a z-scan order 1630), and the image decoding apparatus 100 may determine whether the third coding units 1620c, 1620d, and 1620e, determined by splitting the right second coding unit 1610b into an odd number, satisfy a condition under which they can be processed in the predetermined order.

According to an embodiment, the image decoding apparatus 100 may determine whether the third coding units 1620a, 1620b, 1620c, 1620d, and 1620e included in the first coding unit 1600 satisfy a condition under which they can be processed in the predetermined order, and the condition is related to whether at least one of the width and height of the second coding units 1610a and 1610b is divided in half along the boundaries of the third coding units 1620a, 1620b, 1620c, 1620d, and 1620e. For example, the third coding units 1620a and 1620b, determined by dividing the height of the non-square left second coding unit 1610a in half, satisfy the condition, but the boundaries of the third coding units 1620c, 1620d, and 1620e, determined by splitting the right second coding unit 1610b into three coding units, do not divide the width or height of the right second coding unit 1610b in half; therefore, the third coding units 1620c, 1620d, and 1620e may be determined not to satisfy the condition. When the condition is not satisfied, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on the determination result, that the right second coding unit 1610b is split into an odd number of coding units. According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may set a predetermined restriction on a coding unit at a predetermined position among the split coding units; since this has been described above through various embodiments, a detailed description thereof is omitted.
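
A hedged sketch of the half-split condition follows: a child boundary that does not pass through the midpoint of the parent's width or height is treated as breaking the predetermined scan order, which is how the odd split of the right second coding unit would be flagged. The tuple representation and the exact form of the check are assumptions, not the normative decision process.

```python
def split_breaks_scan_order(parent, children):
    """Return True if any internal child boundary fails to halve the parent.

    parent and children are (x, y, width, height) tuples.
    """
    px, py, pw, ph = parent
    half_x = px + pw // 2
    half_y = py + ph // 2
    for cx, cy, cw, ch in children:
        right = cx + cw                    # internal vertical boundary, unless it is
        bottom = cy + ch                   # the parent's own right/bottom edge
        if right != px + pw and right != half_x:
            return True                    # boundary does not halve the width
        if bottom != py + ph and bottom != half_y:
            return True                    # boundary does not halve the height
    return False

# Binary split in half -> order preserved; 1:2:1 triple split -> flagged as odd.
print(split_breaks_scan_order((0, 0, 32, 64), [(0, 0, 32, 32), (0, 32, 32, 32)]))                    # False
print(split_breaks_scan_order((0, 0, 32, 64), [(0, 0, 32, 16), (0, 16, 32, 32), (0, 48, 32, 16)]))   # True
```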

FIG. 17 illustrates a process of determining, by the image decoding apparatus 100, at least one coding unit by splitting the first coding unit 1700, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may split the first coding unit 1700 based on at least one of block shape information and split shape information obtained through the receiver 110. The square first coding unit 1700 may be split into four square coding units or into a plurality of non-square coding units. For example, referring to FIG. 17, when the block shape information indicates that the first coding unit 1700 is square and the split shape information indicates a split into non-square coding units, the image decoding apparatus 100 may split the first coding unit 1700 into a plurality of non-square coding units. In detail, when the split shape information indicates that an odd number of coding units are determined by splitting the first coding unit 1700 in the horizontal or vertical direction, the image decoding apparatus 100 may split the square first coding unit 1700 into an odd number of coding units, that is, second coding units 1710a, 1710b, and 1710c determined by splitting in the vertical direction, or second coding units 1720a, 1720b, and 1720c determined by splitting in the horizontal direction.

According to an embodiment, the image decoding apparatus 100 may determine whether the second coding units 1710a, 1710b, 1710c, 1720a, 1720b, and 1720c included in the first coding unit 1700 satisfy a condition under which they can be processed in a predetermined order, and the condition is related to whether at least one of the width and height of the first coding unit 1700 is divided in half along the boundaries of the second coding units 1710a, 1710b, 1710c, 1720a, 1720b, and 1720c. Referring to FIG. 17, since the boundaries of the second coding units 1710a, 1710b, and 1710c determined by splitting the square first coding unit 1700 in the vertical direction do not divide the width of the first coding unit 1700 in half, the first coding unit 1700 may be determined not to satisfy the condition under which it can be processed in the predetermined order. In addition, since the boundaries of the second coding units 1720a, 1720b, and 1720c determined by splitting the square first coding unit 1700 in the horizontal direction do not divide the height of the first coding unit 1700 in half, the first coding unit 1700 may be determined not to satisfy the condition under which it can be processed in the predetermined order. When the condition is not satisfied, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on the determination result, that the first coding unit 1700 is split into an odd number of coding units. According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may set a predetermined restriction on a coding unit at a predetermined position among the split coding units; since this has been described above through various embodiments, a detailed description thereof is omitted.

According to an embodiment, the image decoding apparatus 100 may determine various coding units by dividing the first coding unit.

Referring to FIG. 17, the image decoding apparatus 100 may split the square first coding unit 1700 and the non-square first coding unit 1730 or 1750 into various coding units.

FIG. 18 illustrates that shapes into which a second coding unit may be split are restricted when a non-square second coding unit, determined by splitting the first coding unit 1800, satisfies a predetermined condition, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine to split the square first coding unit 1800 into non-square second coding units 1810a, 1810b, 1820a, and 1820b, based on at least one of block shape information and split shape information obtained through the receiver 110. The second coding units 1810a, 1810b, 1820a, and 1820b may be split independently. Accordingly, the image decoding apparatus 100 may determine whether to split each of the second coding units 1810a, 1810b, 1820a, and 1820b into a plurality of coding units or not to split it, based on at least one of the block shape information and split shape information associated with each of the second coding units 1810a, 1810b, 1820a, and 1820b. According to an embodiment, the image decoding apparatus 100 may determine third coding units 1812a and 1812b by splitting, in the horizontal direction, the non-square left second coding unit 1810a determined by splitting the first coding unit 1800 in the vertical direction. However, when the left second coding unit 1810a is split in the horizontal direction, the image decoding apparatus 100 may restrict the right second coding unit 1810b from being split in the horizontal direction, that is, in the same direction as the direction in which the left second coding unit 1810a was split. If the right second coding unit 1810b were split in the same direction to determine third coding units 1814a and 1814b, the left second coding unit 1810a and the right second coding unit 1810b would each be split independently in the horizontal direction to determine the third coding units 1812a, 1812b, 1814a, and 1814b. However, this is the same result as the image decoding apparatus 100 splitting the first coding unit 1800 into four square second coding units 1830a, 1830b, 1830c, and 1830d based on at least one of the block shape information and the split shape information, which may be inefficient in terms of image decoding.

According to an embodiment, the image decoding apparatus 100 may determine third coding units 1822a, 1822b, 1824a, and 1824b by splitting, in the vertical direction, the non-square second coding unit 1820a or 1820b determined by splitting the first coding unit 1800 in the horizontal direction. However, when the image decoding apparatus 100 splits one of the second coding units (e.g., the upper second coding unit 1820a) in the vertical direction, it may, for the reason described above, restrict the other second coding unit (e.g., the lower second coding unit 1820b) from being split in the vertical direction, that is, in the same direction as the direction in which the upper second coding unit 1820a was split.

FIG. 19 illustrates a process of splitting a square coding unit, performed by the image decoding apparatus 100, when the split shape information cannot indicate a split into four square coding units, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine second coding units 1910a, 1910b, 1920a, 1920b, etc. by splitting the first coding unit 1900 based on at least one of block shape information and split shape information. The split shape information may include information about various shapes into which a coding unit may be split, but the information about the various shapes may not include information for splitting the coding unit into four square coding units. According to such split shape information, the image decoding apparatus 100 may not split the square first coding unit 1900 into the four square second coding units 1930a, 1930b, 1930c, and 1930d. The image decoding apparatus 100 may determine the non-square second coding units 1910a, 1910b, 1920a, 1920b, etc. based on the split shape information.

According to an embodiment, the image decoding apparatus 100 may independently split each of the non-square second coding units 1910a, 1910b, 1920a, 1920b, etc. Each of the second coding units 1910a, 1910b, 1920a, 1920b, etc. may be split in a predetermined order through a recursive method, and this may be a splitting method corresponding to the method by which the first coding unit 1900 is split based on at least one of the block shape information and the split shape information.

For example, the image decoding apparatus 100 may determine square third coding units 1912a and 1912b by splitting the left second coding unit 1910a in the horizontal direction, and may determine square third coding units 1914a and 1914b by splitting the right second coding unit 1910b in the horizontal direction. Furthermore, the image decoding apparatus 100 may determine square third coding units 1916a, 1916b, 1916c, and 1916d by splitting both the left second coding unit 1910a and the right second coding unit 1910b in the horizontal direction. In this case, coding units may be determined in the same shape as when the first coding unit 1900 is split into the four square second coding units 1930a, 1930b, 1930c, and 1930d.

As another example, the image decoding apparatus 100 may determine square third coding units 1922a and 1922b by splitting the upper second coding unit 1920a in the vertical direction, and may determine square third coding units 1924a and 1924b by splitting the lower second coding unit 1920b in the vertical direction. Furthermore, the image decoding apparatus 100 may determine square third coding units 1926a, 1926b, 1926c, and 1926d by splitting both the upper second coding unit 1920a and the lower second coding unit 1920b in the vertical direction. In this case, coding units may be determined in the same shape as when the first coding unit 1900 is split into the four square second coding units 1930a, 1930b, 1930c, and 1930d.

FIG. 20 illustrates that a processing order between a plurality of coding units may vary according to a division process of coding units, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may split the first coding unit 2000 based on block shape information and split shape information. When the block shape information indicates a square shape and the split shape information indicates that the first coding unit 2000 is split in at least one of the horizontal and vertical directions, the image decoding apparatus 100 may split the first coding unit 2000 to determine second coding units (e.g., 2010a, 2010b, 2020a, 2020b, etc.). Referring to FIG. 20, the non-square second coding units 2010a, 2010b, 2020a, and 2020b, determined by splitting the first coding unit 2000 only in the horizontal or vertical direction, may each be split independently based on their respective block shape information and split shape information. For example, the image decoding apparatus 100 may determine third coding units 2016a, 2016b, 2016c, and 2016d by splitting, in the horizontal direction, each of the second coding units 2010a and 2010b generated by splitting the first coding unit 2000 in the vertical direction, and may determine third coding units 2026a, 2026b, 2026c, and 2026d by splitting, in the vertical direction, each of the second coding units 2020a and 2020b generated by splitting the first coding unit 2000 in the horizontal direction. Since the splitting process of the second coding units 2010a, 2010b, 2020a, and 2020b has been described above with reference to FIG. 19, a detailed description thereof is omitted.

According to an embodiment, the image decoding apparatus 100 may process coding units in a predetermined order. Features of processing coding units according to a predetermined order have been described above with reference to FIG. 15, and thus a detailed description thereof will be omitted. Referring to FIG. 20, the image decoding apparatus 100 may determine four square third coding units (2016a, 2016b, 2016c, and 2016d, or 2026a, 2026b, 2026c, and 2026d) by splitting the square first coding unit 2000. According to an embodiment, the image decoding apparatus 100 may determine the processing order of the third coding units 2016a, 2016b, 2016c, 2016d, 2026a, 2026b, 2026c, and 2026d according to the form in which the first coding unit 2000 is split.

According to an embodiment, the image decoding apparatus 100 may determine the third coding units 2016a, 2016b, 2016c, and 2016d by splitting, in the horizontal direction, the second coding units 2010a and 2010b generated by splitting in the vertical direction, and may process the third coding units 2016a, 2016b, 2016c, and 2016d according to an order 2017 in which the third coding units 2016a and 2016b included in the left second coding unit 2010a are first processed in the vertical direction and then the third coding units 2016c and 2016d included in the right second coding unit 2010b are processed in the vertical direction.

According to an embodiment, the image decoding apparatus 100 may determine the third coding units 2026a, 2026b, 2026c, and 2026d by splitting, in the vertical direction, the second coding units 2020a and 2020b generated by splitting in the horizontal direction, and may process the third coding units 2026a, 2026b, 2026c, and 2026d according to an order 2027 in which the third coding units 2026a and 2026b included in the upper second coding unit 2020a are first processed in the horizontal direction and then the third coding units 2026c and 2026d included in the lower second coding unit 2020b are processed in the horizontal direction.

Referring to FIG. 20, the second coding units 2010a, 2010b, 2020a, and 2020b may each be split to determine the square third coding units 2016a, 2016b, 2016c, 2016d, 2026a, 2026b, 2026c, and 2026d. Although the second coding units 2010a and 2010b determined by splitting in the vertical direction and the second coding units 2020a and 2020b determined by splitting in the horizontal direction are split in different forms, the third coding units 2016a, 2016b, 2016c, 2016d, 2026a, 2026b, 2026c, and 2026d determined afterwards result in the first coding unit 2000 being split into coding units of the same shape. Accordingly, even when coding units of the same shape are finally determined by recursively splitting a coding unit through different processes based on at least one of block shape information and split shape information, the plurality of coding units determined in the same shape may be processed in different orders.
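
As an informal illustration only (not part of the patent disclosure; the function name and block labels are hypothetical), the following Python sketch shows how the same four square sub-blocks of a square block can be processed in two different orders depending on which split was applied first, mirroring the orders 2017 and 2027 of FIG. 20.

def processing_order(first_split):
    """Return the processing order of the four square sub-blocks.

    "vertical":   the block is first split into left/right halves and each
                  half is then split horizontally; the halves are processed
                  left to right, each half top to bottom (order 2017).
    "horizontal": the block is first split into top/bottom halves and each
                  half is then split vertically; the halves are processed
                  top to bottom, each half left to right (order 2027).
    """
    if first_split == "vertical":
        return ["top-left", "bottom-left", "top-right", "bottom-right"]
    if first_split == "horizontal":
        return ["top-left", "top-right", "bottom-left", "bottom-right"]
    raise ValueError("first_split must be 'vertical' or 'horizontal'")

print(processing_order("vertical"))    # ['top-left', 'bottom-left', 'top-right', 'bottom-right']
print(processing_order("horizontal"))  # ['top-left', 'top-right', 'bottom-left', 'bottom-right']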

FIG. 21 illustrates a process of determining the depth of a coding unit as the shape and size of the coding unit change, when the coding unit is recursively split to determine a plurality of coding units, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion. For example, the predetermined criterion may be the length of the long side of the coding unit. When the length of the long side of the current coding unit is 1/2^n (n > 0) times the length of the long side of the coding unit before splitting, the depth of the current coding unit may be determined to be increased by n from the depth of the coding unit before splitting. Hereinafter, a coding unit having an increased depth is referred to as a coding unit of a lower depth.
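
A minimal sketch of this depth criterion, assuming only the relationship stated above (the function name is illustrative and not taken from the patent):

def depth_increase(long_side_before, long_side_after):
    """Return n such that long_side_after == long_side_before / 2**n."""
    n = 0
    side = long_side_before
    while side > long_side_after:
        side //= 2
        n += 1
    if side != long_side_after:
        raise ValueError("long sides are not related by a power-of-two factor")
    return n

# A 64x64 coding unit of depth D split down to 32x32 has depth D + 1,
# and split down to 16x16 has depth D + 2.
print(depth_increase(64, 32))  # 1
print(depth_increase(64, 16))  # 2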

Referring to FIG. 21, according to an embodiment, the image decoding apparatus 100 may determine the second coding unit 2102, the third coding unit 2104, and the like of lower depths by splitting the square first coding unit 2100 based on block shape information indicating a square shape (for example, the block shape information may indicate '0: SQUARE'). If the size of the square first coding unit 2100 is 2Nx2N, the second coding unit 2102 determined by reducing the width and height of the first coding unit 2100 to 1/2 may have a size of NxN. Furthermore, the third coding unit 2104 determined by reducing the width and height of the second coding unit 2102 to 1/2 may have a size of N/2xN/2. In this case, the width and height of the third coding unit 2104 correspond to 1/2^2 times those of the first coding unit 2100. When the depth of the first coding unit 2100 is D, the depth of the second coding unit 2102, whose width and height are 1/2 times those of the first coding unit 2100, may be D+1, and the depth of the third coding unit 2104, whose width and height are 1/2^2 times those of the first coding unit 2100, may be D+2.

According to an embodiment, based on block shape information indicating a non-square shape (for example, the block shape information may indicate '1: NS_VER', a non-square shape whose height is longer than its width, or '2: NS_HOR', a non-square shape whose width is longer than its height), the image decoding apparatus 100 may split the non-square first coding unit 2110 or 2120 to determine the second coding unit 2112 or 2122, the third coding unit 2114 or 2124, and the like of lower depths.

The image decoding apparatus 100 may determine a second coding unit (e.g., 2102, 2112, 2122, etc.) by splitting at least one of the width and height of the Nx2N-sized first coding unit 2110. That is, the image decoding apparatus 100 may determine the second coding unit 2102 of size NxN or the second coding unit 2122 of size NxN/2 by splitting the first coding unit 2110 in the horizontal direction, or may determine the second coding unit 2112 of size N/2xN by splitting it in the horizontal and vertical directions.

According to an embodiment, the image decoding apparatus 100 may determine a second coding unit (e.g., 2102, 2112, 2122, etc.) by splitting at least one of the width and height of the 2NxN-sized first coding unit 2120. That is, the image decoding apparatus 100 may determine the second coding unit 2102 of size NxN or the second coding unit 2112 of size N/2xN by splitting the first coding unit 2120 in the vertical direction, or may determine the second coding unit 2122 of size NxN/2 by splitting it in the horizontal and vertical directions.

According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (e.g., 2104, 2114, 2124, etc.) by splitting at least one of the width and height of the NxN-sized second coding unit 2102. That is, the image decoding apparatus 100 may determine the third coding unit 2104 of size N/2xN/2 by splitting the second coding unit 2102 in the vertical and horizontal directions, or may determine the third coding unit 2114 of size N/2^2xN/2, or may determine the third coding unit 2124 of size N/2xN/2^2.

According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (e.g., 2104, 2114, 2124, etc.) by splitting at least one of the width and height of the N/2xN-sized second coding unit 2112. That is, the image decoding apparatus 100 may determine the third coding unit 2104 of size N/2xN/2 or the third coding unit 2124 of size N/2xN/2^2 by splitting the second coding unit 2112 in the horizontal direction, or may determine the third coding unit 2114 of size N/2^2xN/2 by splitting it in the vertical and horizontal directions.

According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (e.g., 2104, 2114, 2124, etc.) by splitting at least one of the width and height of the NxN/2-sized second coding unit 2122. That is, the image decoding apparatus 100 may determine the third coding unit 2104 of size N/2xN/2 or the third coding unit 2114 of size N/2^2xN/2 by splitting the second coding unit 2122 in the vertical direction, or may determine the third coding unit 2124 of size N/2xN/2^2 by splitting it in the vertical and horizontal directions.

According to an embodiment, the image decoding apparatus 100 may split a square coding unit (e.g., 2100, 2102, 2104) in the horizontal direction or the vertical direction. For example, the first coding unit 2110 of size Nx2N may be determined by splitting the first coding unit 2100 of size 2Nx2N in the vertical direction, or the first coding unit 2120 of size 2NxN may be determined by splitting it in the horizontal direction. According to an embodiment, when the depth is determined based on the length of the longest side of a coding unit, the depth of a coding unit determined by splitting the first coding unit 2100 of size 2Nx2N in the horizontal direction or the vertical direction may be the same as the depth of the first coding unit 2100.

According to an embodiment, the width and height of the third coding unit 2114 or 2124 may correspond to 1/2^2 times those of the first coding unit 2110 or 2120. When the depth of the first coding unit 2110 or 2120 is D, the depth of the second coding unit 2112 or 2122, whose width and height are 1/2 times those of the first coding unit 2110 or 2120, may be D+1, and the depth of the third coding unit 2114 or 2124, whose width and height are 1/2^2 times those of the first coding unit 2110 or 2120, may be D+2.

FIG. 22 illustrates a depth and a part index (PID) for classifying coding units, which may be determined according to the shape and size of coding units, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine second coding units having various shapes by splitting the square first coding unit 2200. Referring to FIG. 22, the image decoding apparatus 100 may determine the second coding units 2202a, 2202b, 2204a, 2204b, 2206a, 2206b, 2206c, and 2206d by splitting the first coding unit 2200 in at least one of the vertical direction and the horizontal direction according to the split shape information. That is, the image decoding apparatus 100 may determine the second coding units 2202a, 2202b, 2204a, 2204b, 2206a, 2206b, 2206c, and 2206d based on the split shape information about the first coding unit 2200.

According to an embodiment, the depths of the second coding units 2202a, 2202b, 2204a, 2204b, 2206a, 2206b, 2206c, and 2206d, which are determined according to the split shape information about the square first coding unit 2200, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first coding unit 2200 is equal to the length of the long side of the non-square second coding units 2202a, 2202b, 2204a, and 2204b, the depths of the first coding unit 2200 and of the non-square second coding units 2202a, 2202b, 2204a, and 2204b may be regarded as D. In contrast, when the image decoding apparatus 100 splits the first coding unit 2200 into the four square second coding units 2206a, 2206b, 2206c, and 2206d based on the split shape information, the length of one side of the square second coding units 2206a, 2206b, 2206c, and 2206d is 1/2 times the length of one side of the first coding unit 2200, so the depths of the second coding units 2206a, 2206b, 2206c, and 2206d may be D+1, a depth one level lower than the depth D of the first coding unit 2200.

According to an embodiment, the image decoding apparatus 100 may determine a plurality of second coding units 2212a, 2212b, 2214a, 2214b, and 2214c by splitting, in the horizontal direction according to the split shape information, the first coding unit 2210 whose height is greater than its width. According to an embodiment, the image decoding apparatus 100 may determine a plurality of second coding units 2222a, 2222b, 2224a, 2224b, and 2224c by splitting, in the vertical direction according to the split shape information, the first coding unit 2220 whose width is greater than its height.

According to an embodiment, the depths of the second coding units 2212a, 2212b, 2214a, 2214b, 2214c, 2222a, 2222b, 2224a, 2224b, and 2224c, which are determined according to the split shape information about the non-square first coding unit 2210 or 2220, may be determined based on the lengths of their long sides. For example, since the length of one side of the square second coding units 2212a and 2212b is 1/2 times the length of the long side of the first coding unit 2210, which has a non-square shape whose height is greater than its width, the depths of the square second coding units 2212a and 2212b are D+1, a depth one level lower than the depth D of the non-square first coding unit 2210.

Furthermore, the image decoding apparatus 100 may split the non-square first coding unit 2210 into an odd number of second coding units 2214a, 2214b, and 2214c based on the split shape information. The odd number of second coding units 2214a, 2214b, and 2214c may include the non-square second coding units 2214a and 2214c and the square second coding unit 2214b. In this case, since the length of the long side of the non-square second coding units 2214a and 2214c and the length of one side of the square second coding unit 2214b are 1/2 times the length of one side of the first coding unit 2210, the depths of the second coding units 2214a, 2214b, and 2214c may be D+1, a depth one level lower than the depth D of the first coding unit 2210. The image decoding apparatus 100 may determine the depths of coding units related to the first coding unit 2220, which has a non-square shape whose width is greater than its height, in a manner corresponding to the above-described method of determining the depths of coding units related to the first coding unit 2210.

According to an embodiment, when determining an index (PID) for distinguishing the split coding units, if the odd number of split coding units do not have the same size, the image decoding apparatus 100 may determine the indices based on the size ratio between the coding units. Referring to FIG. 22, the coding unit 2214b located at the center of the odd number of split coding units 2214a, 2214b, and 2214c has the same width as, but twice the height of, the other coding units 2214a and 2214c. That is, in this case, the coding unit 2214b located at the center may occupy the positions of two of the other coding units 2214a and 2214c. Therefore, if the index (PID) of the coding unit 2214b located at the center is 1 according to the scan order, the index of the coding unit 2214c located next in the order may be 3, increased by 2; that is, there may be a discontinuity in the index values. According to an embodiment, the image decoding apparatus 100 may determine whether the odd number of split coding units do not have the same size based on whether there is such a discontinuity in the indices for distinguishing the split coding units.
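
The discontinuity check described above can be sketched as follows; it assumes, for illustration only, that PIDs are assigned in scan order and that a coding unit occupying the area of two unit-sized coding units consumes two consecutive PID values:

def has_unequal_sizes(pids):
    """True if the PID sequence is discontinuous, i.e. the odd number of
    split coding units do not all have the same size."""
    return any(b - a != 1 for a, b in zip(pids, pids[1:]))

# Three coding units where the middle one is twice as tall as the others
# receive PIDs 0, 1, 3 (the middle unit occupies two slots).
print(has_unequal_sizes([0, 1, 3]))  # True  -> sizes differ
print(has_unequal_sizes([0, 1, 2]))  # False -> equal sizes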

According to an embodiment, the image decoding apparatus 100 may determine whether the current coding unit is split into a specific split shape based on the values of the indices for distinguishing the plurality of coding units determined by splitting the current coding unit. Referring to FIG. 22, the image decoding apparatus 100 may determine an even number of coding units 2212a and 2212b or an odd number of coding units 2214a, 2214b, and 2214c by splitting the first coding unit 2210, which has a rectangular shape whose height is greater than its width. The image decoding apparatus 100 may use an index (PID) indicating each coding unit to distinguish the plurality of coding units from one another. According to an embodiment, the PID may be obtained from a sample at a predetermined position of each coding unit (e.g., an upper-left sample).

According to an embodiment, the image decoding apparatus 100 may determine a coding unit at a predetermined position among the coding units determined by splitting, by using the indices for distinguishing the coding units. According to an embodiment, when the split shape information about the first coding unit 2210, whose height is greater than its width, indicates splitting into three coding units, the image decoding apparatus 100 may split the first coding unit 2210 into the three coding units 2214a, 2214b, and 2214c. The image decoding apparatus 100 may allocate an index to each of the three coding units 2214a, 2214b, and 2214c. The image decoding apparatus 100 may compare the indices of the coding units to determine the middle coding unit among the odd number of split coding units. Based on the indices of the coding units, the image decoding apparatus 100 may determine the coding unit 2214b, whose index corresponds to the center value among the indices, as the coding unit at the center position among the coding units determined by splitting the first coding unit 2210. According to an embodiment, when determining the indices for distinguishing the split coding units, if the coding units do not have the same size, the image decoding apparatus 100 may determine the indices based on the size ratio between the coding units. Referring to FIG. 22, the coding unit 2214b generated by splitting the first coding unit 2210 may have the same width as, but twice the height of, the other coding units 2214a and 2214c. In this case, if the index (PID) of the coding unit 2214b located at the center is 1, the index of the coding unit 2214c located next in the order may be 3, increased by 2. As such, when the indices do not increase uniformly but the increment changes, the image decoding apparatus 100 may determine that the current coding unit is split into a plurality of coding units including a coding unit having a size different from that of the other coding units. According to an embodiment, when the split shape information indicates splitting into an odd number of coding units, the image decoding apparatus 100 may split the current coding unit such that the coding unit at a predetermined position (e.g., the middle coding unit) among the odd number of coding units has a size different from that of the other coding units. In this case, the image decoding apparatus 100 may determine the middle coding unit having the different size by using the indices (PIDs) of the coding units. However, the above-described indices and the size and position of the coding unit at the predetermined position are merely examples for describing an embodiment and should not be construed as limiting; various indices and various positions and sizes of coding units may be used.
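
A small sketch of selecting the coding unit at the center position by its index, under the assumption (made only for illustration) that each coding unit is represented as a plain dictionary carrying a 'pid' field:

def center_coding_unit(coding_units):
    """Return the coding unit whose PID is the center value of all PIDs."""
    pids = sorted(cu["pid"] for cu in coding_units)
    center_pid = pids[len(pids) // 2]
    return next(cu for cu in coding_units if cu["pid"] == center_pid)

units = [{"pid": 0, "name": "2214a"},
         {"pid": 1, "name": "2214b"},
         {"pid": 3, "name": "2214c"}]
print(center_coding_unit(units)["name"])  # 2214b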

According to an embodiment, the image decoding apparatus 100 may use a predetermined data unit at which recursive division of coding units begins.

FIG. 23 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.

According to an embodiment, the predetermined data unit may be defined as a data unit in which a coding unit starts to be recursively divided using at least one of block shape information and split shape information. That is, it may correspond to the coding unit of the highest depth used in the process of determining a plurality of coding units for dividing the current picture. Hereinafter, for convenience of description, such a predetermined data unit will be referred to as a reference data unit.

According to an embodiment, the reference data unit may have a predetermined size and shape. According to an embodiment, the reference coding unit may include MxN samples, where M and N may be equal to each other and may be integers expressed as powers of 2. That is, the reference data unit may have a square or non-square shape, and may later be split into an integer number of coding units.
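
A trivial sketch of the stated constraint that M and N are powers of 2 (the helper name is not from the patent):

def is_valid_reference_unit(m, n):
    """A reference data unit of MxN samples, with M and N powers of 2."""
    def is_pow2(v):
        return v > 0 and (v & (v - 1)) == 0
    return is_pow2(m) and is_pow2(n)

print(is_valid_reference_unit(64, 64))  # True  (square reference unit)
print(is_valid_reference_unit(64, 32))  # True  (non-square reference unit)
print(is_valid_reference_unit(48, 32))  # False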

According to an embodiment, the image decoding apparatus 100 may split the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may split each of the plurality of reference data units obtained by splitting the current picture by using split information about each reference data unit. The splitting process of the reference data unit may correspond to a splitting process using a quad-tree structure.

According to an embodiment, the image decoding apparatus 100 may determine in advance the minimum size that the reference data unit included in the current picture may have. Accordingly, the image decoding apparatus 100 may determine reference data units having various sizes equal to or greater than the minimum size, and may determine at least one coding unit based on the determined reference data unit by using block shape information and split shape information.

Referring to FIG. 23, the image decoding apparatus 100 may use the square reference coding unit 2300 or the non-square reference coding unit 2302. According to an embodiment, the shape and size of the reference coding unit may be determined for each of various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum coding unit, etc.) that may include at least one reference coding unit.

According to an embodiment, the receiver 110 of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information about the shape of the reference coding unit and information about the size of the reference coding unit for each of the various data units. The process of determining at least one coding unit included in the square reference coding unit 2300 has been described above through the process of splitting the current coding unit 1100 of FIG. 11, and the process of determining at least one coding unit included in the non-square reference coding unit 2302 has been described above through the process of splitting the current coding unit 1200 or 1250 of FIG. 12; thus, detailed descriptions thereof will be omitted.

According to an embodiment, in order to determine the size and shape of the reference coding unit for each data unit predetermined based on a predetermined condition, the image decoding apparatus 100 may use an index for identifying the size and shape of the reference coding unit. That is, the receiver 110 may obtain, from the bitstream, only the index for identifying the size and shape of the reference coding unit for each slice, slice segment, maximum coding unit, or the like, as a data unit that satisfies a predetermined condition (e.g., a data unit having a size equal to or smaller than a slice) among the various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum coding unit, etc.). The image decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition by using the index. When the information about the shape of the reference coding unit and the information about the size of the reference coding unit are obtained from the bitstream for each data unit having a relatively small size, the bitstream may be used inefficiently; therefore, instead of directly obtaining the information about the shape of the reference coding unit and the information about the size of the reference coding unit, only the index may be obtained and used. In this case, at least one of the size and shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be predetermined. That is, the image decoding apparatus 100 may determine at least one of the size and shape of the reference coding unit included in the data unit serving as the basis for obtaining the index by selecting at least one of the predetermined sizes and shapes of the reference coding unit according to the index.
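
The index-based signalling described above might be sketched as a lookup into a predetermined table; the table contents below are hypothetical examples, not values defined by the disclosure:

# index -> (shape, (width, height)) predetermined between encoder and decoder
PREDETERMINED_REFERENCE_UNITS = {
    0: ("SQUARE", (64, 64)),
    1: ("SQUARE", (32, 32)),
    2: ("NS_VER", (32, 64)),  # non-square, height longer than width
    3: ("NS_HOR", (64, 32)),  # non-square, width longer than height
}

def reference_unit_from_index(index):
    """Return the predetermined shape and size selected by the signalled index."""
    return PREDETERMINED_REFERENCE_UNITS[index]

# A slice-level index of 2 would select a 32x64 non-square reference unit.
print(reference_unit_from_index(2))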

According to an embodiment, the image decoding apparatus 100 may use at least one reference coding unit included in one maximum coding unit. That is, a maximum coding unit for splitting an image may include at least one reference coding unit, and coding units may be determined through a recursive splitting process of each reference coding unit. According to an embodiment, at least one of the width and height of the maximum coding unit may correspond to an integer multiple of at least one of the width and height of the reference coding unit. According to an embodiment, the size of the reference coding unit may be the size obtained by splitting the maximum coding unit n times according to a quad-tree structure. That is, the image decoding apparatus 100 may determine the reference coding unit by splitting the maximum coding unit n times according to the quad-tree structure, and according to various embodiments, may split the reference coding unit based on at least one of block shape information and split shape information.
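
A one-line sketch of the stated quad-tree relationship between the maximum coding unit and the reference coding unit (square sizes are assumed for simplicity):

def reference_unit_size(max_cu_size, n):
    """Side length of the reference coding unit after n quad-tree splits
    of the maximum coding unit (each split halves width and height)."""
    return max_cu_size >> n  # divide by 2**n

print(reference_unit_size(128, 0))  # 128: reference unit equals the maximum coding unit
print(reference_unit_size(128, 2))  # 32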

FIG. 24 illustrates a processing block serving as a criterion for determining a determination order of reference coding units included in a picture 2400, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine at least one processing block for splitting a picture. A processing block is a data unit including at least one reference coding unit for splitting an image, and the at least one reference coding unit included in the processing block may be determined in a specific order. That is, the determination order of the at least one reference coding unit determined in each processing block may correspond to one of various orders in which reference coding units may be determined, and the reference coding unit determination order determined for each processing block may differ from one processing block to another. The determination order of reference coding units determined for each processing block may be one of various orders such as a raster scan, a Z-scan, an N-scan, an up-right diagonal scan, a horizontal scan, and a vertical scan, but the determinable orders should not be construed as being limited to these scan orders.

According to an embodiment, the image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information about the size of the processing block from the bitstream. The size of the processing block may be a predetermined size of a data unit indicated by the information about the size of the processing block.

According to an embodiment, the receiver 110 of the image decoding apparatus 100 may obtain the information about the size of the processing block from the bitstream for each specific data unit. For example, the information about the size of the processing block may be obtained from the bitstream in data units such as an image, a sequence, a picture, a slice, and a slice segment. That is, the receiver 110 may obtain the information about the size of the processing block from the bitstream for each of the various data units, and the image decoding apparatus 100 may determine the size of at least one processing block for splitting the picture by using the obtained information about the size of the processing block; the size of the processing block may be an integer multiple of the size of the reference coding unit.

According to an embodiment, the image decoding apparatus 100 may determine the sizes of the processing blocks 2402 and 2412 included in the picture 2400. For example, the image decoding apparatus 100 may determine the size of a processing block based on the information about the size of the processing block obtained from the bitstream. Referring to FIG. 24, the image decoding apparatus 100 may determine the horizontal size of the processing blocks 2402 and 2412 to be four times the horizontal size of the reference coding unit, and the vertical size thereof to be four times the vertical size of the reference coding unit. The image decoding apparatus 100 may determine an order in which at least one reference coding unit is determined within at least one processing block.
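
A small sketch of the FIG. 24 example, in which each processing block spans four reference coding units horizontally and four vertically (the concrete reference coding unit size is an assumed example):

ref_cu_w, ref_cu_h = 16, 16                    # assumed reference coding unit size
block_w, block_h = 4 * ref_cu_w, 4 * ref_cu_h  # processing blocks 2402 and 2412

ref_units_per_block = (block_w // ref_cu_w) * (block_h // ref_cu_h)
print(ref_units_per_block)  # 16 reference coding units per processing block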

According to an embodiment, the image decoding apparatus 100 may determine each of the processing blocks 2402 and 2412 included in the picture 2400 based on the size of the processing block, and may determine a determination order of at least one reference coding unit included in the processing blocks 2402 and 2412. According to an embodiment, determining a reference coding unit may include determining the size of the reference coding unit.

According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, information about a determination order of at least one reference coding unit included in at least one processing block, and may determine the order in which the at least one reference coding unit is determined based on the obtained information. The information about the determination order may be defined as an order or direction in which reference coding units are determined within a processing block. That is, the order in which reference coding units are determined may be determined independently for each processing block.

According to an embodiment, the image decoding apparatus 100 may obtain information about a determination order of a reference coding unit from a bitstream for each specific data unit. For example, the receiver 110 may obtain information about a determination order of a reference coding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information about the determination order of the reference coding unit indicates the determination order of the reference coding unit in the processing block, the information about the determination order may be obtained for each specific data unit including an integer number of processing blocks.

The image decoding apparatus 100 may determine at least one reference coding unit based on the order determined according to the embodiment.

According to an embodiment, the receiver 110 may obtain, from the bitstream, the information about the determination order of reference coding units as information related to the processing blocks 2402 and 2412, and the image decoding apparatus 100 may determine an order of determining at least one reference coding unit included in the processing blocks 2402 and 2412 and may determine at least one reference coding unit included in the picture 2400 according to that determination order. Referring to FIG. 24, the image decoding apparatus 100 may determine the determination orders 2404 and 2414 of at least one reference coding unit associated with the processing blocks 2402 and 2412, respectively. For example, when the information about the determination order of reference coding units is obtained for each processing block, the reference coding unit determination order associated with each of the processing blocks 2402 and 2412 may be different for each processing block. When the reference coding unit determination order 2404 associated with the processing block 2402 is a raster scan order, the reference coding units included in the processing block 2402 may be determined according to the raster scan order. In contrast, when the reference coding unit determination order 2414 associated with the other processing block 2412 is the reverse of the raster scan order, the reference coding units included in the processing block 2412 may be determined according to the reverse of the raster scan order.
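
The two determination orders mentioned for FIG. 24 can be sketched as follows; the 4x4 grid of reference coding units per processing block is the assumed example from above, and coordinates are (column, row) positions inside a processing block:

def raster_order(cols, rows):
    """Raster scan: left to right within a row, rows top to bottom."""
    return [(c, r) for r in range(rows) for c in range(cols)]

def reverse_raster_order(cols, rows):
    """Reverse of the raster scan order."""
    return list(reversed(raster_order(cols, rows)))

print(raster_order(4, 4)[:4])          # [(0, 0), (1, 0), (2, 0), (3, 0)]  (block 2402)
print(reverse_raster_order(4, 4)[:4])  # [(3, 3), (2, 3), (1, 3), (0, 3)]  (block 2412)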

The image decoding apparatus 100 may decode at least one determined reference coding unit according to an embodiment. The image decoding apparatus 100 may decode an image based on the reference coding unit determined through the above-described embodiment. The method of decoding the reference coding unit may include various methods of decoding an image.

According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, block shape information indicating the shape of the current coding unit or split shape information indicating a method of splitting the current coding unit, and may use the obtained information. The block shape information or split shape information may be included in bitstreams associated with various data units. For example, the image decoding apparatus 100 may use block shape information or split shape information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header. Furthermore, the image decoding apparatus 100 may obtain, from the bitstream, a syntax element corresponding to the block shape information or the split shape information for each maximum coding unit, reference coding unit, or processing block, and may use the obtained syntax element.
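
As a hedged illustration only, the following sketch shows the general idea of looking up block shape or split shape information signalled at different syntax levels; the level names used as dictionary keys and the precedence order are assumptions for illustration, not normative bitstream syntax:

def find_split_shape_info(signalled):
    """signalled maps a syntax-level name to its split shape info (or None)."""
    for level in ("slice_segment_header", "slice_header",
                  "picture_parameter_set", "sequence_parameter_set",
                  "video_parameter_set"):
        value = signalled.get(level)
        if value is not None:
            return level, value
    return None, None

example = {"sequence_parameter_set": "QUAD_SPLIT", "slice_header": None}
print(find_split_shape_info(example))  # ('sequence_parameter_set', 'QUAD_SPLIT')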

Various embodiments have been described above. Those skilled in the art will appreciate that the present disclosure may be implemented in modified forms without departing from its essential characteristics. Therefore, the disclosed embodiments should be considered in a descriptive sense only and not for purposes of limitation. The scope of the present disclosure is set forth in the claims rather than in the foregoing description, and all differences within the scope equivalent thereto should be construed as being included in the present disclosure.

Meanwhile, the above-described embodiments of the present disclosure may be written as a program executable on a computer, and may be implemented in a general-purpose digital computer that executes the program using a computer-readable recording medium. The computer-readable recording medium may include a storage medium such as a magnetic storage medium (e.g., a ROM, a floppy disk, a hard disk, etc.) or an optically readable medium (e.g., a CD-ROM, a DVD, etc.).

Claims (14)

  1. An image decoding method comprising:
    obtaining a prediction mode of a current block included in a current picture from a bitstream;
    determining a collocated block corresponding to a position of the current block in a reference picture adjacent to the current picture when the prediction mode of the current block is intra prediction;
    obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block;
    performing intra prediction on the current block based on the reference sample to obtain a predictor; and
    reconstructing the current block based on the predictor.
  2. The method of claim 1, wherein the obtaining of the reference sample comprises:
    obtaining a first flag from the bitstream;
    when the first flag indicates that the reference sample is to be determined in the current block, obtaining the reference sample based on upper and left samples adjacent to the current block; and
    when the first flag indicates that the reference sample is to be determined in the collocated block, obtaining the reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block.
  3. The method of claim 2, wherein the obtaining of the reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block comprises:
    obtaining the reference sample based on right and lower boundary samples in the collocated block.
  4. The method of claim 1, wherein the obtaining of the reference sample comprises:
    obtaining a second flag from the bitstream;
    when the second flag indicates that samples located on the right and lower sides are to be used, obtaining the reference sample based on at least one of right and lower samples adjacent to the current block or right and lower boundary samples in the collocated block; and
    when the second flag indicates that samples located on the left and upper sides are to be used, obtaining the reference sample based on at least one of left and upper samples adjacent to the current block or samples located on the left and upper sides adjacent to the collocated block.
  5. The method of claim 1, wherein the obtaining of the reference sample comprises:
    obtaining the reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block when the current block is located at an upper-left side of the current image.
  6. The method of claim 1, wherein the obtaining of the reference sample comprises:
    obtaining the reference sample based on left and upper boundary samples in the collocated block when the current block is located at an upper-left side of the current image.
  7. An image decoding apparatus comprising:
    a receiver configured to receive a bitstream; and
    a decoder configured to obtain a prediction mode of a current block included in a current picture from the bitstream, determine a collocated block corresponding to a position of the current block in a reference picture adjacent to the current picture when the prediction mode of the current block is intra prediction, obtain a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block, perform intra prediction on the current block based on the reference sample to obtain a predictor, and reconstruct the current block based on the predictor.
  8. The apparatus of claim 7, wherein the decoder is configured to obtain a first flag from the bitstream, obtain the reference sample based on upper and left samples adjacent to the current block when the first flag indicates that the reference sample is to be determined in the current block, and obtain the reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block when the first flag indicates that the reference sample is to be determined in the collocated block.
  9. The apparatus of claim 8, wherein the decoder is configured to obtain the reference sample based on right and lower boundary samples in the collocated block.
  10. The apparatus of claim 7, wherein the decoder is configured to obtain a second flag from the bitstream, obtain the reference sample based on at least one of right and lower samples adjacent to the current block or right and lower boundary samples in the collocated block when the second flag indicates that samples located on the right and lower sides are to be used, and obtain the reference sample based on at least one of left and upper samples adjacent to the current block or samples located on the left and upper sides adjacent to the collocated block when the second flag indicates that samples located on the left and upper sides are to be used.
  11. The apparatus of claim 7, wherein the decoder is configured to obtain the reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block when the current block is located at an upper-left side of the current image.
  12. The apparatus of claim 7, wherein the decoder is configured to obtain the reference sample based on left and upper boundary samples in the collocated block when the current block is located at an upper-left side of the current image.
  13. An image encoding method comprising:
    determining a collocated block corresponding to a position of a current block in a reference image adjacent to a current image;
    obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block;
    performing intra prediction on the current block based on the reference sample to obtain a predictor;
    encoding the current block based on the predictor to obtain encoding information of the current block; and
    generating a bitstream including the encoding information.
  14. An image encoding apparatus comprising:
    an encoder configured to determine a collocated block corresponding to a position of a current block in a reference image adjacent to a current image, obtain a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block, perform intra prediction on the current block based on the reference sample to obtain a predictor, and encode the current block based on the predictor to obtain encoding information of the current block; and
    a bitstream generator configured to generate a bitstream including the encoding information.
PCT/KR2016/013485 2015-11-24 2016-11-22 Video encoding method and apparatus, and video decoding method and apparatus WO2017090957A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201562259287P true 2015-11-24 2015-11-24
US62/259,287 2015-11-24

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020187011825A KR20180075517A (en) 2015-11-24 2016-11-22 Video encoding method and apparatus, video decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2017090957A1 true WO2017090957A1 (en) 2017-06-01

Family

ID=58764180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2016/013485 WO2017090957A1 (en) 2015-11-24 2016-11-22 Video encoding method and apparatus, and video decoding method and apparatus

Country Status (2)

Country Link
KR (1) KR20180075517A (en)
WO (1) WO2017090957A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019161798A1 (en) * 2018-02-26 2019-08-29 Mediatek Inc. Intelligent mode assignment in video coding

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130004435A (en) * 2011-07-02 2013-01-10 삼성전자주식회사 Method and apparatus for video encoding with inter prediction using collocated picture, method and apparatus for video decoding with inter prediction using collocated picture
KR20140034053A (en) * 2012-08-21 2014-03-19 삼성전자주식회사 Method and appratus for inter-layer encoding of prediction information in scalable video encoding based on coding units of tree structure, method and appratus for inter-layer decoding of prediction information in scalable video decoding based on coding units of tree structure
JP2014518031A (en) * 2011-04-25 2014-07-24 エルジー エレクトロニクス インコーポレイティド Intra-prediction method and encoder and decoder using the same
KR20140109839A (en) * 2014-07-17 2014-09-16 에스케이텔레콤 주식회사 Video Encoding/Decoding Method and Apparatus by Efficiently Processing Intra Prediction Mode
KR20150059146A (en) * 2011-06-28 2015-05-29 삼성전자주식회사 Method and apparatus for video intra prediction encoding, and method and apparatus for video intra prediction decoding

Also Published As

Publication number Publication date
KR20180075517A (en) 2018-07-04

Similar Documents

Publication Publication Date Title
WO2010002214A2 (en) Image encoding method and device, and decoding method and device therefor
WO2011049396A2 (en) Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
WO2012043989A2 (en) Method for partitioning block and decoding device
WO2011129619A2 (en) Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus, which perform deblocking filtering based on tree-structure encoding units
WO2011019250A4 (en) Method and apparatus for encoding video, and method and apparatus for decoding video
WO2011019249A2 (en) Video encoding method and apparatus and video decoding method and apparatus, based on hierarchical coded block pattern information
WO2011152635A2 (en) Enhanced intra prediction mode signaling
WO2010151049A2 (en) Method and apparatus for automatic transformation of three-dimensional video
WO2013022297A2 (en) Method and device for encoding a depth map of multi viewpoint video data, and method and device for decoding the encoded depth map
WO2011019247A2 (en) Method and apparatus for encoding/decoding motion vector
WO2012023762A2 (en) Method for decoding intra-predictions
WO2012148138A2 (en) Intra-prediction method, and encoder and decoder using same
WO2011034372A2 (en) Methods and apparatuses for encoding and decoding mode information
WO2012023806A2 (en) Method and apparatus for encoding video, and decoding method and apparatus
WO2011019253A4 (en) Method and apparatus for encoding video in consideration of scanning order of coding units having hierarchical structure, and method and apparatus for decoding video in consideration of scanning order of coding units having hierarchical structure
EP2510698A2 (en) Method and apparatus for encoding video by motion prediction using arbitrary partition, and method and apparatus for decoding video by motion prediction using arbitrary partition
WO2012087077A2 (en) Method and device for encoding intra prediction mode for image prediction unit, and method and device for decoding intra prediction mode for image prediction unit
WO2010085064A2 (en) Apparatus and method for motion vector encoding/decoding, and apparatus and method for image encoding/decoding using same
WO2012081879A1 (en) Method for decoding inter predictive encoded motion pictures
WO2011126275A2 (en) Determining intra prediction mode of image coding unit and image decoding unit
WO2011090314A2 (en) Method and apparatus for encoding and decoding motion vector based on reduced motion vector predictor candidates
WO2011126281A2 (en) Method and apparatus for encoding video by performing in-loop filtering based on tree-structured data unit, and method and apparatus for decoding video by performing the same
WO2011087321A2 (en) Method and apparatus for encoding and decoding motion vector
WO2013002586A2 (en) Method and apparatus for image encoding and decoding using intra prediction
WO2010044563A2 (en) Method and apparatus for encoding/decoding the motion vectors of a plurality of reference pictures, and apparatus and method for image encoding/decoding using same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16868853

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase in:

Ref document number: 20187011825

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16868853

Country of ref document: EP

Kind code of ref document: A1