KR20180075517A - Video encoding method and apparatus, video decoding method and apparatus - Google Patents

Video encoding method and apparatus, video decoding method and apparatus Download PDF

Info

Publication number
KR20180075517A
Authority
KR
South Korea
Prior art keywords
block
encoding
unit
sample
image
Prior art date
Application number
KR1020187011825A
Other languages
Korean (ko)
Inventor
원광현
김찬열
이선일
이진영
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사 filed Critical 삼성전자주식회사
Publication of KR20180075517A publication Critical patent/KR20180075517A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure provides a method and apparatus for encoding or decoding an image using prediction. The image decoding method includes obtaining a prediction mode of a current block included in a current image from a bitstream; when the prediction mode of the current block is intra prediction, determining a collocated block corresponding to the position of the current block in a reference image adjacent to the current image; obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block; performing intra prediction on the current block based on the reference sample to obtain a predictor; and restoring the current block based on the predictor.

Description

Video encoding method and apparatus, video decoding method and apparatus

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an image encoding and decoding method and apparatus, and more particularly, to a method and apparatus for encoding or decoding an image using image prediction.

Video data is encoded by a codec according to a predetermined data compression standard, for example a Moving Picture Experts Group (MPEG) standard, and is then stored in a recording medium in the form of a bitstream or transmitted over a communication channel.

With the development and dissemination of hardware capable of reproducing and storing high-resolution or high-definition video content, the need for a codec that effectively encodes or decodes such content is increasing. Encoded image content can be reproduced by decoding it. Recently, methods for efficiently compressing such high-resolution or high-definition image content have been implemented, for example, efficient image compression through processing of the image to be encoded in an arbitrary manner.

A video codec reduces the amount of data by using prediction techniques that exploit the high temporal and spatial correlation among video images. According to such prediction techniques, in order to predict a current image from neighboring images, image information is recorded using the temporal distance, spatial distance, prediction error, and the like between the images.

The present disclosure provides a method and apparatus for encoding or decoding an image using intra prediction that takes the temporal distance between images into consideration.

According to an aspect of the present invention, there is provided an image decoding method including: obtaining a prediction mode of a current block included in a current image from a bitstream; determining a collocated block corresponding to the position of the current block in a reference image adjacent to the current image if the prediction mode of the current block is intra prediction; obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block; performing intra prediction on the current block based on the reference sample to obtain a predictor; and restoring the current block based on the predictor.

According to an aspect of the present invention, the image decoding method may further include: obtaining a first flag from the bitstream; obtaining a reference sample based on the upper and left samples adjacent to the current block if the first flag indicates that the reference sample is to be determined in the current block; and obtaining a reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block if the first flag indicates that the reference sample is to be determined in the collocated block.

An image decoding method in accordance with an embodiment of the present disclosure is characterized by comprising obtaining a reference sample based on right and bottom boundary samples in a collocated block.

According to another aspect of the present invention, the video decoding method may include: obtaining a second flag from the bitstream; obtaining a reference sample based on at least one of the samples located on the right and lower sides adjacent to the current block and the right and lower boundary samples in the collocated block, if the second flag indicates that the samples located on the right and lower sides are to be used; and obtaining a reference sample based on at least one of the left and upper samples adjacent to the current block and the left and upper samples adjacent to the collocated block, if the second flag indicates that the samples located on the left and upper sides are to be used.

The image decoding method according to an embodiment of the present disclosure may include obtaining a reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block when the current block is located at the upper left of the current image.

The image decoding method according to an embodiment of the present disclosure may include obtaining a reference sample based on the left and upper boundary samples in the collocated block when the current block is located at the upper left of the current image.

According to an aspect of the present invention, there is provided an image decoding apparatus comprising: a receiving unit for receiving a bitstream; and a decoding unit for obtaining a prediction mode of a current block included in a current image from the bitstream, determining a collocated block corresponding to the position of the current block in a reference image adjacent to the current image when the prediction mode of the current block is intra prediction, obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block, performing intra prediction on the current block based on the reference sample to obtain a predictor, and restoring the current block based on the predictor.

The decoding unit of the image decoding apparatus according to an embodiment of the present disclosure may obtain a first flag from the bitstream, obtain the reference sample based on the upper and left samples adjacent to the current block when the first flag indicates that the reference sample is to be determined in the current block, and obtain the reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block when the first flag indicates that the reference sample is to be determined in the collocated block.

A decoding unit of an image decoding apparatus according to an embodiment of the present disclosure is characterized in that a reference sample is obtained based on right and lower boundary samples in a collocated block.

The decoding unit of the video decoding apparatus according to an embodiment of the present invention may obtain a second flag from the bitstream, obtain a reference sample based on at least one of the samples located on the right and lower sides adjacent to the current block and the right and lower boundary samples in the collocated block when the second flag indicates that the samples located on the right and lower sides are to be used, and obtain a reference sample based on at least one of the left and upper samples adjacent to the current block and the left and upper samples adjacent to the collocated block when the second flag indicates that the samples located on the left and upper sides are to be used.

The decoding unit of the video decoding apparatus according to an embodiment of the present disclosure may obtain a reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block when the current block is located at the upper left of the current video.

The decoding unit of the video decoding apparatus according to an embodiment of the present disclosure may obtain a reference sample based on the left and upper boundary samples in the collocated block when the current block is located at the upper left of the current video.

According to an aspect of the present invention, there is provided an image encoding method including: determining a collocated block corresponding to the position of a current block in a reference image adjacent to a current image; obtaining a reference sample based on at least one of the left and upper samples adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block; performing intra prediction on the current block based on the reference sample to obtain a predictor; encoding the current block based on the predictor to obtain encoding information of the current block; and generating the encoding information as a bitstream.

An image encoding apparatus according to an embodiment of the present invention includes: an encoding unit for determining a collocated block corresponding to the position of a current block in a reference image adjacent to a current image, obtaining a reference sample based on at least one of the left and upper samples adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block, performing intra prediction on the current block based on the reference sample to obtain a predictor, and encoding the current block based on the predictor to obtain encoding information of the current block; and a bitstream generation unit for generating the encoding information as a bitstream.

FIG. 1 is a schematic block diagram of an image decoding apparatus according to an exemplary embodiment of the present invention.
FIG. 2 shows a flowchart of a video decoding method according to an embodiment.
FIG. 3 illustrates an intra prediction mode according to an embodiment.
FIGS. 4A and 4B illustrate a process of predicting a current block based on an intra prediction mode according to an embodiment.
FIG. 5 illustrates a process of obtaining sample values for use in intra prediction using a collocated block according to an embodiment.
FIG. 6 illustrates a process of obtaining pixel values for use in intra prediction using a collocated block according to an embodiment.
FIG. 7 illustrates a process of obtaining pixel values for use in intra prediction using a collocated block according to an embodiment.
FIG. 8 illustrates a process of obtaining pixel values for use in intra prediction using a collocated block according to an embodiment.
FIG. 9 shows a schematic block diagram of an image encoding apparatus according to an embodiment.
FIG. 10 is a flowchart illustrating a method of encoding an image according to an embodiment of the present invention.
FIG. 11 illustrates a process in which an image decoding apparatus determines at least one encoding unit by dividing a current encoding unit according to an embodiment.
FIG. 12 illustrates a process in which an image decoding apparatus determines at least one encoding unit by dividing a non-square encoding unit according to an embodiment.
FIG. 13 illustrates a process in which an image decoding apparatus divides an encoding unit based on at least one of block type information and division type information according to an embodiment.
FIG. 14 illustrates a method for an image decoding apparatus to determine a predetermined encoding unit among odd number of encoding units according to an embodiment.
FIG. 15 shows a sequence in which a plurality of coding units are processed when an image decoding apparatus determines a plurality of coding units by dividing a current coding unit according to an embodiment.
FIG. 16 illustrates a process of determining that the current encoding unit is divided into an odd number of encoding units when the image decoding apparatus cannot process the encoding units in a predetermined order, according to an embodiment.
FIG. 17 illustrates a process in which an image decoding apparatus determines at least one encoding unit by dividing a first encoding unit according to an embodiment.
FIG. 18 is a diagram illustrating that, when a non-square second encoding unit determined by dividing the first encoding unit by the image decoding apparatus satisfies a predetermined condition, the forms into which the second encoding unit can be divided are limited, according to an embodiment.
FIG. 19 illustrates a process in which an image decoding apparatus divides a square-shaped encoding unit when the division type information cannot indicate division into four square-shaped encoding units, according to an embodiment.
FIG. 20 illustrates that a processing order among a plurality of coding units may be changed according to a division process of coding units according to an embodiment.
FIG. 21 illustrates a process in which the depth of an encoding unit is determined according to a change in type and size of an encoding unit when a plurality of encoding units are determined by recursively dividing an encoding unit according to an exemplary embodiment.
FIG. 22 illustrates a depth index (hereinafter referred to as a PID) for classifying a depth and a coding unit that can be determined according to the type and size of coding units according to an exemplary embodiment.
FIG. 23 shows that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture, according to an embodiment.
FIG. 24 shows a processing block serving as a reference for determining a determination order of reference encoding units included in a picture according to an embodiment.

The advantages and features of the disclosed embodiments, and the manner of achieving them, will become apparent with reference to the embodiments described below in conjunction with the accompanying drawings. The present disclosure is not, however, limited to the embodiments disclosed herein and may be embodied in many different forms; the embodiments are provided only so that the disclosure will be complete and will fully convey the scope of the invention to those skilled in the art.

The terms used in this specification will be briefly described, and the disclosed embodiments will be described in detail.

The terms used in the present specification are, as far as possible, general terms understood by those of ordinary skill in the art and are not intended to limit the scope of the invention. In certain cases, a term may have been arbitrarily selected by the applicant, in which case its meaning is described in detail in the corresponding part of the description. Accordingly, the terms used in this disclosure should be defined based on their meaning and on the overall content of the present disclosure, rather than simply on their names.

The singular expressions herein include plural referents unless the context clearly dictates otherwise.

When an element is said to "include" another element throughout the specification, this means that the element may further include other elements as well, without departing from the spirit or scope of the present invention. Further, the term "part" used in the specification means a software or hardware component such as a microprocessor, an FPGA, or an ASIC, and a "part" performs certain roles. However, "part" is not limited to software or hardware. A "part" may be configured to reside on an addressable storage medium and may be configured to be executed by one or more processors. Thus, by way of example and not limitation, a "part" includes components such as software components, object-oriented software components, class components, and task components, as well as processes, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functions provided by the components and "parts" may be combined into a smaller number of components and "parts" or further separated into additional components and "parts".

Hereinafter, the "image" may be a static image such as a still image of a video or a dynamic image such as a moving image, i.e., the video itself.

Hereinafter, "sample" means data to be processed as data assigned to a sampling position of an image. For example, pixel values in the image of the spatial domain, and transform coefficients on the transform domain may be samples. A unit including at least one of these samples may be defined as a block.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. In the drawings, portions not related to the description are omitted to clearly explain the present disclosure.

An image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method according to an embodiment will be described below with reference to FIGS. 1 to 24. A method and apparatus for encoding or decoding using image prediction according to an embodiment will be described with reference to FIGS. 1 to 10, and a method for determining a data unit of an image according to an embodiment will be described with reference to FIGS. 11 to 24.

A method and apparatus for encoding or decoding an image using intra prediction in consideration of the temporal distance between images, according to an embodiment of the present disclosure, will be described below with reference to FIGS. 1 to 10.

FIG. 1 shows a schematic block diagram of an image decoding apparatus 100 according to an embodiment.

The image decoding apparatus 100 may include a receiving unit 110 and a decoding unit 120. The receiving unit 110 may receive a bitstream. The bitstream includes information obtained by encoding an image in the image encoding apparatus 900, and may be transmitted from the image encoding apparatus 900. The image encoding apparatus 900 and the image decoding apparatus 100 may be connected by wire or wirelessly, and the receiving unit 110 may receive the bitstream by wire or wirelessly. The decoding unit 120 can reconstruct the image by parsing information from the received bitstream. The operation of the decoding unit 120 will be described in detail with reference to FIG. 2.

FIG. 2 shows a flowchart of a video decoding method according to an embodiment.

According to an embodiment of the present disclosure, the decoding unit 120 may acquire (210) a prediction mode of a current block included in a current image from a bitstream. When the prediction mode of the current block is intra prediction, the decoding unit 120 may determine (220) the collocated block corresponding to the position of the current block in the reference image adjacent to the current image. The decoding unit 120 may then acquire (230) a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block, perform intra prediction on the current block based on the reference sample to obtain (240) a predictor, and restore (250) the current block based on the predictor.

An image can be divided into maximum coding units. The size of the maximum encoding unit may be determined based on information parsed from the bitstream. The maximum encoding units may have a square shape of the same size. However, the present invention is not limited thereto. The maximum encoding unit may be hierarchically divided into encoding units based on division information parsed from the bitstream. An encoding unit may be less than or equal to the maximum encoding unit. For example, if the division information indicates that the maximum encoding unit is not divided, the encoding unit has the same size as the maximum encoding unit. If the division information indicates that the maximum encoding unit is divided, it may be divided into lower-depth encoding units. If the division information for a lower-depth encoding unit indicates division, the lower-depth encoding unit can be further divided into smaller encoding units. However, the division of the image is not limited to this, and the maximum encoding unit and the encoding unit may not be distinguished. The division of the encoding unit will be described in more detail with reference to FIG. 11 to FIG. 24.
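As a rough illustration of this hierarchical division, the sketch below models a quadtree split of a maximum encoding unit driven by parsed division information. It is a minimal Python sketch under assumed conventions; the names split_coding_units, split_flags, and min_cu_size are illustrative and do not appear in the disclosure.

```python
def split_coding_units(x, y, size, split_flags, min_cu_size=8):
    """Recursively split a coding unit into four lower-depth units.

    split_flags is assumed to be a callable returning True when the parsed
    division information says the unit at (x, y) of the given size is split.
    Returns a list of (x, y, size) leaf coding units."""
    if size > min_cu_size and split_flags(x, y, size):
        half = size // 2
        units = []
        for dy in (0, half):
            for dx in (0, half):
                units += split_coding_units(x + dx, y + dy, half,
                                            split_flags, min_cu_size)
        return units
    return [(x, y, size)]

# Example: split a 64x64 maximum coding unit once at the top level only.
leaves = split_coding_units(0, 0, 64, lambda x, y, s: s == 64)
print(leaves)  # four 32x32 coding units
```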

In addition, an encoding unit can be divided into prediction units for prediction of the image. A prediction unit may be equal to or smaller than the encoding unit. An encoding unit can also be divided into conversion units for transformation of the image, and a conversion unit may be equal to or smaller than the encoding unit. The shape and size of the conversion unit and the prediction unit may not be related to each other. The encoding unit may be distinguished from the prediction unit and the conversion unit, or the encoding unit may itself serve as the prediction unit and the conversion unit. The division of the encoding unit will be described in more detail with reference to FIG. 11 to FIG. 24. The current block and the adjacent block of the present disclosure may each represent one of a maximum encoding unit, an encoding unit, a prediction unit, and a conversion unit.

The decoding unit 120 may obtain the prediction mode of the current block included in the current image from the bitstream. The current image is the image currently being decoded by the decoding unit 120, and the current block is a block included in the current image. The decoding unit 120 may obtain information on the prediction mode of the current block by parsing the bitstream. The prediction mode may be intra prediction or inter prediction. Intra prediction is a mode for predicting the current block based on the spatial similarity of samples within an image. Inter prediction is a mode for predicting the current block based on the temporal similarity of samples between images. The decoding unit 120 may determine that the prediction mode of the current block is intra prediction based on the obtained prediction information.

FIG. 3 illustrates an intra-prediction mode according to one embodiment.

The decoding unit 120 may perform intra prediction according to thirty-five intra prediction modes. Intra prediction modes can be classified into Intra_Planar mode, Intra_DC mode, and Intra_Angular mode. The Intra_Angular mode can also have 33 directions including Intra_Vertical and Intra_Horizontal. Therefore, there are a total of 35 intra prediction modes. Also, the decoding unit 120 may parse information on the intra prediction mode from the bitstream. For example, when the information on the intra prediction mode is '0', the decoding unit 120 can determine the intra prediction mode as the Intra_Planar mode. Also, when the information on the intra prediction mode is '1', the decoding unit 120 can determine the intra prediction mode as the Intra_DC mode. Also, when the information on the intra prediction mode is '2' to '34', the decoding unit 120 can determine the direction shown in FIG. 3 as the intra prediction mode.
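A minimal sketch of how a decoder might map the parsed intra prediction mode index to the categories described above, assuming the 35-mode layout in which index 0 is Intra_Planar, index 1 is Intra_DC, and indices 2 to 34 are Intra_Angular (with 10 and 26 as the horizontal and vertical directions of FIG. 3); the function name and return strings are illustrative only.

```python
def classify_intra_mode(mode_index):
    """Map a parsed intra prediction mode index (0..34) to its category."""
    if mode_index == 0:
        return "Intra_Planar"
    if mode_index == 1:
        return "Intra_DC"
    if 2 <= mode_index <= 34:
        # Angular modes; 10 and 26 correspond to the horizontal and
        # vertical directions in the 35-mode layout assumed here.
        if mode_index == 10:
            return "Intra_Angular (horizontal)"
        if mode_index == 26:
            return "Intra_Angular (vertical)"
        return "Intra_Angular"
    raise ValueError("intra prediction mode index out of range")

print(classify_intra_mode(26))  # Intra_Angular (vertical)
```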

FIGS. 4A and 4B illustrate a process of predicting a current block based on an intra prediction mode according to an embodiment.

Referring to FIG. 4A, the decoding unit 120 may predict a current block 440 in the current image 410. The decoding unit 120 may determine the prediction mode of the current block 440 as intra prediction based on the information about the prediction mode parsed from the bitstream. The decoding unit 120 may also parse information on the intra prediction mode from the bitstream and determine the intra prediction mode of the current block 440 based on that information. For example, the parsed information about the intra prediction mode may be '26'. Referring to FIG. 3, when the information on the intra prediction mode is '26', the decoding unit 120 can determine the intra prediction mode as the vertical direction.

The decoding unit 120 may predict the current block 440 using neighboring samples of the current block 440. According to one embodiment of the present disclosure, the decoding unit 120 may use the samples of the neighboring blocks 421, 422, 423, 424, and 425 to determine reference samples 430 to be used for intra prediction. The adjacent blocks 421, 422, 423, 424, and 425 may be blocks located on the upper right side, the upper side, the upper left side, the left side, and the lower left side of the current block 440, respectively. The reference samples 430 to be used for intra prediction can be determined based on the lower samples of the adjacent block 421 located on the upper right side of the current block 440, based on the lower samples of the adjacent block 422 located above the current block 440, based on the lower left sample of the adjacent block 423 located on the upper left side of the current block 440, based on the right samples of the adjacent block 424 located to the left of the current block 440, and based on the right samples of the adjacent block 425 located at the lower left of the current block 440.

Since the reference samples 430 are located on the left side and the upper side of the current block 440, the decoding unit 120 can more accurately predict image content in the current block 440 that continues from the left side or the upper side.

The decoding unit 120 may predict the current block 440 based on the reference samples 430 to be used for intra prediction. For example, when the intra prediction mode of the current block 440 is determined to be the vertical direction, the decoding unit 120 may predict the current block 440 based on the lower samples of the adjacent block 422 among the reference samples 430.

In addition, the decoding unit 120 may perform intra prediction on the current block 440 based on the reference samples 430 to obtain a predictor. For example, when the intra prediction mode is determined to be the vertical direction, the decoding unit 120 may predict the current block 440 by copying the lower pixel values of the adjacent block 422 downward. The predicted sample values of the current block 440 may be referred to as a 'predictor'. The decoding unit 120 can parse the transform coefficients from the bitstream, dequantize and inverse-transform the transform coefficients to obtain residuals, and restore the current block based on the predictor and the residuals.
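The following sketch (plain Python with NumPy, assuming the reference row above the block and the decoded residual are already available) illustrates the vertical-mode example above: the reference row is copied downward to form the predictor, and the block is reconstructed by adding the residual obtained from the dequantized and inverse-transformed coefficients.

```python
import numpy as np

def predict_vertical(top_reference, block_size):
    """Vertical intra prediction: copy the row of reference samples above
    the block downward into every row of the predictor."""
    return np.tile(np.asarray(top_reference, dtype=np.int32), (block_size, 1))

def reconstruct(predictor, residual, bit_depth=8):
    """Reconstruct the block by adding the residual to the predictor."""
    return np.clip(predictor + residual, 0, (1 << bit_depth) - 1)

top = [120, 122, 125, 130]               # lower samples of the block above
pred = predict_vertical(top, 4)          # 4x4 predictor
res = np.zeros((4, 4), dtype=np.int32)   # residual assumed already decoded
print(reconstruct(pred, res))
```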

Referring to FIG. 4B, the decoding unit 120 may predict a current block 490 in the current image 460. The decoding unit 120 may use the samples of the adjacent blocks 471, 472, 473, 474, and 475 to determine reference samples 480 to be used for intra prediction. The adjacent blocks 471, 472, 473, 474, and 475 may be blocks located on the upper right side, the right side, the lower right side, the lower side, and the lower left side of the current block 490, respectively. The reference samples 480 to be used for intra prediction can be determined based on the left samples of the adjacent block 471 located on the upper right side of the current block 490, based on the left samples of the adjacent block 472 located to the right of the current block 490, based on the upper left sample of the adjacent block 473 located on the lower right side of the current block 490, based on the upper samples of the adjacent block 474 located below the current block 490, and based on the upper samples of the adjacent block 475 located at the lower left of the current block 490.

Since the reference samples 480 are located on the right side and the lower side of the current block 490, the decoding unit 120 can more accurately predict image content in the current block 490 that continues from the right side or the lower side.

The decoding unit 120 can predict the current block 490 based on the reference samples 480 to be used for intra prediction. For example, when the intra prediction mode of the current block 490 is determined to be the vertical direction, the decoding unit 120 may predict the current block 490 based on the upper samples of the adjacent block 474 among the reference samples 480.

In addition, the decoding unit 120 can perform intra prediction on the current block 490 based on the reference samples to obtain the predictor. For example, when the intra prediction mode is determined to be the vertical direction, the decoding unit 120 may predict the current block 490 by copying the upper pixel values of the adjacent block 474 upward. The predicted sample values of the current block 490 may be referred to as a 'predictor'. The decoding unit 120 can parse the transform coefficients from the bitstream, dequantize and inverse-transform the transform coefficients to obtain residuals, and restore the current block based on the predictor and the residuals.

Referring to FIGS. 4A and 4B, the decoding unit 120 may determine sample values to be used for intra prediction using adjacent blocks on the left and upper sides of the current block, or using adjacent blocks on the right and lower sides of the current block. The decoding unit 120 may also determine sample values to be used for intra prediction using adjacent blocks located on at least one of the left, upper, right, and lower sides of the current block.

FIG. 5 illustrates a process of obtaining sample values for use in intra prediction using a collocated block according to an embodiment.

The decoding unit 120 can predict the current block 520 in the current image 510. The decoding unit 120 may determine the prediction mode of the current block 520 in the current image 510 as intra prediction. If the prediction mode of the current block 520 is intra prediction, the decoding unit 120 may determine a collocated block 570 corresponding to the position of the current block in the reference image 560 adjacent to the current image 510. The collocated block may be the block corresponding to the position of the current block in the previously reconstructed image 560. For example, the coordinate value of the upper left sample of the current block 520 with respect to the current image 510 is the same as the coordinate value of the upper left sample of the collocated block 570 with respect to the previously reconstructed image 560.

The picture order count (POC) is a variable associated with each image. The POC indicates the display order of the images and is a unique value identifying the corresponding image within a coded video sequence (CVS). The relative temporal distance between images existing in the same CVS can be determined from their POCs. The display order and the reconstruction order may differ from each other. The adjacent reference image 560 may be an image reconstructed before the current image 510. The POC of the previously reconstructed image 560 may be N (an integer), and the POC of the current image 510 may be N + M (where M is a non-zero integer). The previously reconstructed image 560 may be the image immediately preceding the current image 510, that is, M may be '1'. If M is '1', the current image 510 and the previously reconstructed image 560 are likely to be the most similar to each other, so that the decoding unit 120 can perform intra prediction more accurately. However, the present disclosure is not limited thereto, and the decoding unit 120 can select the image most similar to the current image 510 among a plurality of previously reconstructed images; for example, M may be '2' or more, or '-1' or less. The decoding unit 120 can then obtain the collocated block in the most similar image.
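As a hedged illustration of choosing the reference image for the collocated block, the sketch below selects the previously reconstructed picture whose POC is closest to that of the current picture, which reduces to M = 1 when the immediately preceding picture is available. The dictionary layout of decoded_pictures is an assumption for illustration.

```python
def select_collocated_reference(current_poc, decoded_pictures):
    """Pick the previously reconstructed picture closest in display order.

    decoded_pictures is assumed to be a dict mapping POC -> picture data.
    When the picture with POC = current_poc - 1 is present it is chosen
    (M = 1); otherwise the smallest non-zero POC difference wins."""
    candidates = [poc for poc in decoded_pictures if poc != current_poc]
    if not candidates:
        return None
    return min(candidates, key=lambda poc: abs(current_poc - poc))

pictures = {7: "picture N", 8: "picture N+1"}
print(select_collocated_reference(9, pictures))  # 8, i.e. M = 1
```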

According to an embodiment of the present disclosure, if the prediction mode of the current block 520 is intra prediction, the decoding unit 120 may obtain reference samples based on at least one of the samples of blocks adjacent to the current block 520 in the current image 510 and the samples of the collocated block 570 in the reference image 560. That is, the decoding unit 120 may obtain a reference sample based on at least one of the left and upper samples adjacent to the current block 520, a boundary sample in the collocated block, and a sample adjacent to the collocated block.

The case in which the decoding unit 120 obtains the reference samples using the samples adjacent to the current block 520 has already been described with reference to FIGS. 4A and 4B.

The decoding unit 120 may obtain the reference samples based on the right and lower boundary samples in the collocated block. Referring to FIG. 5, the decoding unit 120 may predict the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 based on the right and lower boundary samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570. For example, the decoding unit 120 may copy the pixel values of the samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570 to the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520, respectively. The decoding unit 120 may obtain the predicted samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 as reference samples.

The coordinate values of the samples 581, 582, 583, 584, 585, 586, and 587 with respect to the collocated block 570 may be the same as the coordinate values of the samples 531, 532, 533, 534, 535, 536, and 537 with respect to the current block 520. The decoding unit 120 may predict the current block 520 using the predicted samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 as reference samples. Since the method of predicting the current block 520 to generate a predictor has been described with reference to FIGS. 3, 4A, and 4B, a duplicated description will be omitted.
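The sketch below illustrates, under the assumption that the collocated block is available as a NumPy array of the same size as the current block, how its right column and bottom row could be copied out to serve as the predicted right and lower reference samples of the current block; the function name is illustrative.

```python
import numpy as np

def reference_from_collocated_boundary(collocated_block):
    """Take the right column and bottom row of the collocated block as
    predicted right/lower reference samples for the current block.

    collocated_block is assumed to be a 2-D array taken at the same
    coordinates in the previously reconstructed reference picture."""
    right_column = collocated_block[:, -1].copy()
    bottom_row = collocated_block[-1, :].copy()
    return right_column, bottom_row

colloc = np.arange(16, dtype=np.int32).reshape(4, 4)
right, bottom = reference_from_collocated_boundary(colloc)
print(right)   # [ 3  7 11 15]
print(bottom)  # [12 13 14 15]
```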

Although FIG. 5 illustrates that the samples 531, 532, 533, 534, 535, 536, and 537 in the current block 520 are predicted using the boundary samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570, the present disclosure is not limited thereto. Similarly to FIG. 8, which will be described later, the decoding unit 120 may predict reference samples adjacent to the current block 520 based on the boundary samples 581, 582, 583, 584, 585, 586, and 587 in the collocated block 570, and may then predict the current block 520 based on the reference samples adjacent to the current block 520. Since the process of predicting the current block using reference samples adjacent to the current block 520 has been described with reference to FIG. 4B, a duplicated description will be omitted.

The size of the current block 520 may be different from the size of the collocated block 570. If the size of the current block 520 and the size of the collocated block 570 are different from each other, the decoding unit 120 may enlarge or reduce the collocated block 570 so that its size equals the size of the current block 520. An interpolation method may be used to enlarge the collocated block 570.
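A minimal sketch of such a resizing step is shown below. Nearest-neighbour resampling is used purely for illustration; the disclosure only states that an interpolation method may be used, so any other filter could be substituted.

```python
import numpy as np

def resize_collocated(block, target_size):
    """Resize the collocated block to the current block size using
    nearest-neighbour resampling (illustrative choice only)."""
    h, w = block.shape
    rows = (np.arange(target_size) * h) // target_size
    cols = (np.arange(target_size) * w) // target_size
    return block[np.ix_(rows, cols)]

colloc = np.arange(16, dtype=np.int32).reshape(4, 4)
print(resize_collocated(colloc, 8).shape)  # (8, 8)
```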

Alternatively, the decoding unit 120 may, without determining the collocated block 570, predict the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 using the corresponding samples 581, 582, 583, 584, 585, 586, and 587 in the previously reconstructed image 560. The coordinate values of the samples 581, 582, 583, 584, 585, 586, and 587 with respect to the previously reconstructed image 560 may be the same as the coordinate values of the samples 531, 532, 533, 534, 535, 536, and 537 of the current block 520 with respect to the current image 510.

FIG. 6 illustrates a process of obtaining pixel values for use in intra prediction using a collocated block according to an embodiment.

The decoding unit 120 can predict the current block 620 in the current image 610. The decoding unit 120 may determine the prediction mode of the current block 620 in the current image 610 as intra prediction. If the prediction mode of the current block 620 is intra prediction, the decoding unit 120 may determine a collocated block 670 corresponding to the position of the current block in the reference image 660 adjacent to the current image 610. The adjacent reference image 660 may be an image reconstructed before the current image 610. The POC of the previously reconstructed image 660 may be N (an integer), and the POC of the current image 610 may be N + M (where M is a non-zero integer). The previously reconstructed image 660 may also be the image immediately preceding the current image 610.

The decoding unit 120 may determine the collocated block 670 in the previously reconstructed image 660. The decoding unit 120 may also predict the samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620 based on the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670. For example, the decoding unit 120 may copy the pixel values of the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670 to the samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620, respectively. The decoding unit 120 may obtain the predicted samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620 as reference samples.

The decoding unit 120 may predict the current block 620 using the samples 631, 632, 633, 634, 635, 636, and 637 of the current block 620 as reference samples. Since the method of predicting the current block 620 has been described with reference to FIG. 3, FIG. 4A and FIG. 4B, redundant description is omitted.

Although FIG. 6 illustrates that the samples 631, 632, 633, 634, 635, 636, and 637 in the current block 620 are predicted using the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670, the present disclosure is not limited thereto. Similarly to FIG. 8, which will be described later, the decoding unit 120 may predict reference samples adjacent to the current block 620 based on the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670, and may then predict the current block 620 based on the reference samples adjacent to the current block 620. Since the process of predicting the current block using reference samples adjacent to the current block 620 has been described with reference to FIG. 4B, a duplicated description will be omitted.

Even if the size of the current block 620 and the size of the collocated block 670 are the same, the number of samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670 may be greater than the number of boundary samples 631, 632, 633, 634, 635, 636, and 637 in the current block 620. In this case, the decoding unit 120 may predict the pixel values of the samples 631, 632, 633, 634, 635, 636, and 637 using the average or median of at least two of the pixel values of the samples 681, 682, 683, 684, 685, 686, and 687 adjacent to the collocated block 670. For example, the decoding unit 120 may obtain the pixel value of the sample 634 of the current block 620 based on the pixel values of the samples 683, 684, and 685: it may predict the pixel value of the sample 634 based on the mean of the pixel values of the samples 683, 684, and 685, based on their median, or based on the pixel value of any one of the samples 683, 684, and 685.
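The sketch below illustrates one way of mapping a longer run of samples adjacent to the collocated block onto fewer boundary positions of the current block by taking the mean (or median) of each group; the grouping scheme and function name are assumptions for illustration.

```python
import numpy as np

def condense_adjacent_samples(adjacent, target_count, use_median=False):
    """Map a longer run of samples adjacent to the collocated block onto
    target_count positions of the current block by grouping them and
    taking the mean (or median) of each group."""
    adjacent = np.asarray(adjacent, dtype=np.float64)
    groups = np.array_split(adjacent, target_count)
    reduce_fn = np.median if use_median else np.mean
    return np.array([reduce_fn(g) for g in groups])

neighbours = [100, 102, 104, 110, 112, 114, 120]  # 7 adjacent samples
print(condense_adjacent_samples(neighbours, 4))   # 4 predicted samples
```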

FIG. 7 illustrates a process of obtaining pixel values for use in intra prediction using a collocated block according to an embodiment.

The decoding unit 120 can predict the current block 720 in the current image 710. The decoding unit 120 may determine the prediction mode of the current block 720 in the current image 710 as intra prediction. If the prediction mode of the current block 720 is intra prediction, the decoding unit 120 may determine the collocated block 770 corresponding to the position of the current block in the reference image 760 adjacent to the current image 710. The adjacent reference image 760 may be an image reconstructed before the current image 710. The POC of the previously reconstructed image 760 may be N (an integer), and the POC of the current image 710 may be N + M (where M is a non-zero integer). The previously reconstructed image 760 may also be the image immediately preceding the current image 710.

The decoding unit 120 can determine the collocated block 770 in the previously reconstructed image 760. The decoding unit 120 may also obtain reference samples 731, 732, 733, 734, 735, 736, and 737 based on the left and top boundary samples 781, 782, 783, 784, 785, 786, and 787 in the collocated block 770. The pixel values of the reference samples 731, 732, 733, 734, 735, 736, and 737, which are boundary samples in the current block 720, may be set to the pixel values of the boundary samples 781, 782, 783, 784, 785, 786, and 787 in the collocated block 770. The decoding unit 120 can predict the current block 720 based on the pixel values of the reference samples 731, 732, 733, 734, 735, 736, and 737, and may obtain the predicted current block 720 as a predictor. Since the method of predicting the current block 720 has been described with reference to FIGS. 3, 4A, and 4B, a duplicated description will be omitted.

The size of the current block 720 may be different from the size of the collocated block 770. If the size of the current block 720 and the size of the collocated block 770 are different from each other, the decoding unit 120 may enlarge or reduce the collocated block 770 so that its size equals the size of the current block 720.

Alternatively, the decoding unit 120 may, without determining the collocated block 770, predict the samples 731, 732, 733, 734, 735, 736, and 737 of the current block 720 based on the corresponding samples 781, 782, 783, 784, 785, 786, and 787 in the previously reconstructed image 760. The coordinate values of the samples 781, 782, 783, 784, 785, 786, and 787 with respect to the previously reconstructed image 760 may be the same as the coordinate values of the samples 731, 732, 733, 734, 735, 736, and 737 of the current block 720 with respect to the current image 710.

Although FIG. 7 illustrates predicting samples in the current block 720 using the collocated block 770, the present disclosure is not limited thereto. The decoding unit 120 may instead predict adjacent samples on the left and upper sides of the current block 720 using the collocated block 770. Such adjacent samples are not samples in the current block 720. The adjacent samples on the left and top of the current block 720 may serve as reference samples, and the decoding unit 120 may predict the current block 720 using the predicted left and upper adjacent samples of the current block 720. Since the method of predicting the current block has been described with reference to FIGS. 3, 4A, and 4B, a duplicated description will be omitted.

The decoding unit 120 may obtain a first flag from the bitstream. If the first flag indicates that the reference sample is to be determined in the current block, the decoding unit 120 may obtain a reference sample based on the upper and left samples adjacent to the current block. If the first flag indicates that the reference sample is to be determined in the collocated block, the decoding unit 120 may obtain a reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block.

The first flag may be a flag that determines whether to perform intra prediction based on samples adjacent to the current block or based on samples of (or adjacent to) the collocated block. For example, if the first flag indicates intra prediction based on samples adjacent to the current block, the decoding unit 120 may obtain reference samples using the samples of the blocks 421, 422, 423, 424, 425, 471, 472, 473, 474, or 475 adjacent to the current block 440 or 490 in the current image 410 or 460, as shown in FIG. 4A or 4B. If the first flag indicates intra prediction based on samples of the collocated block, the decoding unit 120 may obtain reference samples using samples of the previously reconstructed image 560, 660, or 760, as shown in FIGS. 5 to 7. For example, the decoding unit 120 may obtain the reference sample based on at least one of the boundary samples 581, 582, 583, 584, 585, 586, 587, 781, 782, 783, 784, 785, 786, or 787 in the collocated block 570 or 770 and the samples 681, 682, 683, 684, 685, 686, or 687 adjacent to the collocated block 670.
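A minimal sketch of this first-flag decision is given below. The flag values (0 for the spatially adjacent samples, 1 for the collocated block) and the dictionary layout of the arguments are assumptions for illustration; the disclosure does not fix a particular syntax.

```python
def get_reference_samples(first_flag, current_neighbours, collocated):
    """Choose the reference sample source according to the first flag.

    first_flag == 0 is assumed to mean 'use samples adjacent to the
    current block'; first_flag == 1 means 'use the collocated block'
    (its boundary samples and/or its adjacent samples)."""
    if first_flag == 0:
        # Spatial reference: upper and left samples adjacent to the block.
        return current_neighbours["top"] + current_neighbours["left"]
    # Temporal reference: boundary samples inside the collocated block
    # and/or samples adjacent to it.
    return collocated["boundary"] + collocated["adjacent"]

spatial = {"top": [1, 2], "left": [3, 4]}
temporal = {"boundary": [5, 6], "adjacent": [7, 8]}
print(get_reference_samples(1, spatial, temporal))  # [5, 6, 7, 8]
```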

According to the present disclosure, the image decoding apparatus and the image encoding apparatus can intra-predict the current block using both temporal information and spatial information, and can therefore obtain a high-quality image with high compression efficiency.

FIG. 8 illustrates a process of obtaining pixel values for use in intra prediction using a collocated block according to an embodiment.

The decoding unit 120 can predict the current block 820 in the current image 810. The decoding unit 120 may determine the prediction mode of the current block 820 in the current image 810 as intra prediction. If the prediction mode of the current block 820 is intra prediction, the decoding unit 120 may determine whether a reference block exists on at least one of the left and upper sides of the current block 820, for example, by determining whether the current block 820 is located at the upper left of the current image 810. If the current block 820 is located at the upper left of the current image 810, no reference block exists on the left and upper sides of the current block 820.

If the coordinate value of the upper left sample of the current block 820 is equal to the coordinate value of the upper left sample of the current image 810, the decoding unit 120 may determine that no reference block exists on the left and upper sides of the current block 820. Based on whether the x coordinate of the upper left sample of the current block 820 is equal to the x coordinate of the upper left sample of the current image 810, the decoding unit 120 may determine whether a reference block exists to the left of the current block 820. Based on whether the y coordinate of the upper left sample of the current block 820 is equal to the y coordinate of the upper left sample of the current image 810, the decoding unit 120 may determine whether a reference block exists above the current block 820.
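The coordinate comparison described above can be sketched as follows, assuming the top-left sample of the current image has coordinates (0, 0); the function name is illustrative.

```python
def reference_block_availability(block_x, block_y):
    """Check whether reference blocks exist to the left of and above the
    current block, by comparing the coordinates of its top-left sample
    with those of the picture's top-left sample (assumed to be (0, 0))."""
    left_available = block_x != 0
    top_available = block_y != 0
    return left_available, top_available

print(reference_block_availability(0, 0))   # (False, False): top-left block
print(reference_block_availability(16, 0))  # (True, False): first block row
```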

If no reference block exists on at least one of the left and upper sides of the current block 820, the decoding unit 120 may determine the collocated block 870 corresponding to the position of the current block in the reference image 860 adjacent to the current image 810. The adjacent reference image 860 may be an image reconstructed before the current image 810. The POC of the previously reconstructed image 860 may be N (an integer), and the POC of the current image 810 may be N + M (where M is a non-zero integer). The previously reconstructed image 860 may also be the image immediately preceding the current image 810, that is, M may be '1'.

The decoding unit 120 may obtain a reference sample based on at least one of a boundary sample in the collocated block 870 and a sample adjacent to the collocated block 870. Referring to FIG. 8, the decoding unit 120 may obtain reference samples 831, 832, 833, 834, 835, 836, and 837 based on the upper and left boundary samples 881, 882, 883, 884, 885, 886, and 887 in the collocated block 870. The reference samples 831, 832, 833, 834, 835, 836, and 837 may exist at virtual positions outside the current image 810, as shown in FIG. 8. The decoding unit 120 may predict the current block 820 based on the reference samples 831, 832, 833, 834, 835, 836, and 837, and may obtain the predicted current block 820 as a predictor. The method of predicting the current block 820 using the reference samples 831, 832, 833, 834, 835, 836, and 837 has been described with reference to FIGS. 3, 4A, and 4B.

According to an embodiment of the present disclosure, the decoding unit 120 may obtain a second flag from the bitstream. The second flag is information for determining whether to predict the current block using reference samples located on the right and lower sides of the current block or using reference samples located on the left and upper sides of the current block. When the second flag indicates that the samples located on the right and lower sides are to be used, the decoding unit 120 can predict the current block based on the reference samples located on the right and lower sides of the current block. When the second flag indicates that the samples located on the left and upper sides are to be used, the decoding unit 120 may predict the current block based on the reference samples located on the left and upper sides of the current block.

More specifically, the decoding unit 120 may obtain the second flag from the bitstream. When the second flag indicates that the samples located on the right and lower sides are to be used, the decoding unit 120 may obtain a reference sample of the current block based on at least one of the samples located on the right and lower sides adjacent to the current block and the right and lower boundary samples in the collocated block. When the second flag indicates that the samples located on the left and upper sides are to be used, the decoding unit 120 may obtain a reference sample of the current block based on at least one of the left and upper samples adjacent to the current block and the left and upper samples adjacent to the collocated block.
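A minimal sketch of the second-flag decision, with the flag value 1 assumed to mean 'use the samples located on the right and lower sides' and 0 to mean 'use the samples located on the left and upper sides'; these semantics and the argument layout are assumptions for illustration.

```python
def select_reference_side(second_flag, left_top_samples, right_bottom_samples):
    """Select which side's reference samples drive the intra prediction.

    The samples passed in may come from the current block's neighbours or
    from the collocated block, as described in the text above."""
    return right_bottom_samples if second_flag == 1 else left_top_samples

left_top = {"left": [10, 11], "top": [12, 13]}
right_bottom = {"right": [20, 21], "bottom": [22, 23]}
print(select_reference_side(1, left_top, right_bottom))
```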

When the second flag indicates that the samples located on the right and lower sides are to be used, the decoding unit 120 can obtain reference samples as shown in FIG. 4B, FIG. 5, or FIG. 6. For example, the decoding unit 120 may obtain the right and lower samples adjacent to the current block 490 as the reference samples 480, or may obtain the reference samples 531, 532, 533, 534, 535, 536, 537, 631, 632, 633, 634, 635, 636, or 637 in the current block 520 or 620. The decoding unit 120 can predict the current block using the reference samples and may obtain the predicted current block as a predictor.

Further, when the second flag indicates use of the samples located on the left side and the top side, the decoding unit 120 may obtain a reference sample as shown in FIG. 4A, FIG. 7, or FIG. 8. For example, the decoding unit 120 may obtain the left and upper samples adjacent to the current block 440 or 820 as the reference samples 430 or 831, 832, 833, 834, 835, 836, and 837. The decoding unit 120 may also obtain the right and upper samples related to the current block 720 as the reference samples 731, 732, 733, 734, 735, 736, and 737. The decoding unit 120 may predict the current block using the reference samples. Also, the decoding unit 120 may obtain the predicted current block as a predictor.
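
The branch on the second flag can be summarized by the following sketch (Python; the flag semantics follow the description above, while the argument names and the boolean encoding of the flag are assumptions for illustration).

    def select_reference_samples(use_right_bottom, left_top_refs, right_bottom_refs):
        # use_right_bottom corresponds to the second flag parsed from the bitstream:
        # True selects the reference samples on the right and bottom sides,
        # False selects the conventional left and top reference samples.
        return right_bottom_refs if use_right_bottom else left_top_refs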

According to the present disclosure, even when there is no reference block on at least one of the upper side and the left side of the current block 820, the decoding unit 120 may intra-predict the current block 820 by obtaining pixel values closest to the pixel values of the left and upper samples of the current block 820. Therefore, the image decoding apparatus and the image encoding apparatus can obtain a high-quality image while increasing compression efficiency.

FIG. 9 shows a schematic block diagram of an image encoding apparatus 900 according to an embodiment.

The image encoding apparatus 900 may include an encoding unit 910 and a bitstream generating unit 920. The encoding unit 910 may receive an input image and encode the input image. The bitstream generating unit 920 may output a bitstream based on the encoded input image. Also, the image encoding apparatus 900 may transmit the bitstream to the image decoding apparatus 100. The detailed operation of the image encoding apparatus 900 will be described with reference to FIG. 10.

FIG. 10 shows a flowchart of an image encoding method according to an embodiment.

The encoding unit 910 may determine (1010) a collocated block corresponding to a position of a current block in a reference image adjacent to the current image. The encoding unit 910 may obtain (1020) a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block. The encoding unit 910 may perform intra prediction on the current block based on the reference sample to obtain (1030) a predictor. The encoding unit 910 may encode the current block based on the predictor and obtain (1040) encoding information of the current block. The bitstream generating unit 920 may generate (1050) the encoding information as a bitstream.
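
A minimal, self-contained sketch of steps 1010 to 1050 follows (Python with numpy assumed; a DC-style average and a plain dictionary stand in for the codec's actual intra modes, syntax elements, and entropy coding, none of which are specified here).

    import numpy as np

    def encode_block(current_image, ref_image, x0, y0, block_size):
        # 1010: collocated block at the same coordinates in the adjacent reference image
        collocated = ref_image[y0:y0 + block_size, x0:x0 + block_size].astype(np.int64)
        # 1020: reference samples; here simply the collocated block's upper and left boundaries
        refs = np.concatenate([collocated[0, :], collocated[:, 0]])
        # 1030: predictor; a DC-style average stands in for the 35 intra prediction modes
        predictor = np.full((block_size, block_size), int(refs.mean()), dtype=np.int64)
        # 1040: encoding information and residual for the current block
        current = current_image[y0:y0 + block_size, x0:x0 + block_size].astype(np.int64)
        encoding_info = {'mode': 'DC', 'reference': 'collocated'}
        residual = current - predictor
        # 1050: in a real encoder these would be entropy-coded into the bitstream
        return encoding_info, residual

    cur = np.random.randint(0, 256, (64, 64))
    ref = np.random.randint(0, 256, (64, 64))
    info, res = encode_block(cur, ref, 16, 16, 8)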

The encoding unit 910 may determine the collocated block corresponding to the position of the current block in the reference image adjacent to the current image. The reference image may be a previously reconstructed image. The coordinate value of the current block with respect to the current image may be the same as the coordinate value of the collocated block with respect to the reference image. For example, the coordinate value of the upper-left sample of the current block with respect to the current image may be the same as the coordinate value of the upper-left sample of the collocated block with respect to the reference image.

The encoding unit 910 may perform intra prediction on the current block. To intra-predict the current block, the encoding unit 910 may obtain a reference sample based on at least one of the left and upper samples adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block. Also, the current block may be predicted according to the 35 intra prediction modes using the obtained reference samples. Prediction of the current block according to the intra prediction modes has been described with reference to FIGS. 3 and 4, and a duplicated description is omitted. The encoding unit 910 may obtain predicted current blocks based on the respective reference samples and intra prediction modes.

The encoding unit 910 may compare the original of the current block with the predictors to select the predictor that best predicts the current block. For example, the encoding unit 910 may add up the absolute differences between the original pixel values of the current block and the pixel values of a predictor using the Sum of Absolute Differences (SAD). The encoding unit 910 may select the predictor having the smallest SAD as the predictor that best predicts the current block.

Also, the encoding unit 910 may add up the squares of the differences between the original pixel values of the current block and the pixel values of a predictor using the Sum of Square Error (SSE). The encoding unit 910 may select the predictor having the smallest SSE as the predictor that best predicts the current block.
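
Both cost measures are simple sums over the block; the following sketch (Python with numpy assumed) illustrates the SAD/SSE comparison described above.

    import numpy as np

    def sad(original, predictor):
        # Sum of Absolute Differences between the original block and a predictor
        return int(np.abs(original.astype(np.int64) - predictor.astype(np.int64)).sum())

    def sse(original, predictor):
        # Sum of Square Error between the original block and a predictor
        diff = original.astype(np.int64) - predictor.astype(np.int64)
        return int((diff * diff).sum())

    def best_predictor(original, candidates, cost=sad):
        # Select the candidate predictor with the smallest cost (SAD or SSE)
        return min(candidates, key=lambda p: cost(original, p))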

Also, the encoding unit 910 may determine, as the encoding information, at least one of information on the reference samples used for the selected predictor and the intra prediction mode. For example, the encoding information may include information indicating that a reference sample is obtained based on at least one of samples within the collocated block, samples adjacent to the collocated block, and samples adjacent to the current block. The encoding information may also include information indicating that a reference sample is obtained based on at least one of the left, top, right, and bottom samples of the collocated block or the current block. The encoding information may also include information indicating which one of the 35 intra prediction modes is selected.

Also, the encoding unit 910 may obtain the difference between the current block and the selected predictor as a residual. The encoding unit 910 may transform and quantize the residual to obtain transform coefficients.
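
A minimal sketch of this step follows (Python; the type-II DCT from scipy and a single flat quantization step are stand-ins for the codec's actual transform and quantizer, which the text does not specify).

    import numpy as np
    from scipy.fft import dctn

    def residual_to_coefficients(current, predictor, qstep=16):
        residual = current.astype(np.int64) - predictor.astype(np.int64)  # difference block
        coeffs = dctn(residual, norm='ortho')                             # 2-D transform of the residual
        return np.round(coeffs / qstep).astype(np.int64)                  # uniform quantization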

The bitstream generating unit 920 may generate at least one of the encoding information and the transform coefficients as a bitstream. The generated bitstream may be transmitted to the image decoding apparatus 100. The image decoding apparatus 100 may reconstruct the image based on the bitstream.

Hereinafter, a method of determining data units of an image according to an embodiment will be described with reference to FIGS. 11 to 24.

FIG. 11 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a current encoding unit, according to an embodiment.

According to an exemplary embodiment, the image decoding apparatus 100 may determine the shape of an encoding unit using block type information, and may determine how an encoding unit is divided using division type information. That is, the division method of the encoding unit indicated by the division type information may be determined according to which block shape the block type information used by the image decoding apparatus 100 indicates.

According to one embodiment, the image decoding apparatus 100 may use block type information indicating that the current encoding unit has a square shape. For example, the image decoding apparatus 100 may determine, according to the division type information, whether not to divide a square encoding unit, to divide it vertically, to divide it horizontally, or to divide it into four encoding units. Referring to FIG. 11, when the block type information of the current encoding unit 1100 indicates a square shape, the image decoding apparatus 100 may determine an encoding unit 1110a having the same size as the current encoding unit 1100 according to division type information indicating that the current block is not divided, or may determine divided encoding units 1110b, 1110c, or 1110d based on division type information indicating a predetermined division method.

Referring to FIG. 11, the image decoding apparatus 100 may determine two encoding units 1110b obtained by dividing the current encoding unit 1100 in the vertical direction based on division type information indicating vertical division, according to an embodiment. The image decoding apparatus 100 may determine two encoding units 1110c obtained by dividing the current encoding unit 1100 in the horizontal direction based on division type information indicating horizontal division. The image decoding apparatus 100 may determine four encoding units 1110d obtained by dividing the current encoding unit 1100 in the vertical and horizontal directions based on division type information indicating vertical and horizontal division. However, the division forms into which a square encoding unit can be divided should not be limited to the above-described forms, and may include various forms that the division type information can indicate. Predetermined division forms into which a square encoding unit is divided will be described in detail through various embodiments below.
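
The four options for a square unit (keep, vertical split, horizontal split, four-way split) can be written as the following sketch (Python; units are (x, y, width, height) tuples and the split_info labels are illustrative, not the bitstream syntax).

    def split_square_unit(x0, y0, size, split_info):
        half = size // 2
        if split_info == 'none':
            return [(x0, y0, size, size)]
        if split_info == 'vertical':
            return [(x0, y0, half, size), (x0 + half, y0, half, size)]
        if split_info == 'horizontal':
            return [(x0, y0, size, half), (x0, y0 + half, size, half)]
        # four-way split into four square units
        return [(x0 + dx, y0 + dy, half, half) for dy in (0, half) for dx in (0, half)]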

FIG. 12 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a non-square encoding unit according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may use block type information indicating that the current encoding unit has a non-square shape. The image decoding apparatus 100 may determine, according to the division type information, whether not to divide the current non-square encoding unit or to divide it by a predetermined method. Referring to FIG. 12, when the block type information of the current encoding unit 1200 or 1250 indicates a non-square shape, the image decoding apparatus 100 may determine an encoding unit having the same size as the current encoding unit 1200 or 1250 according to division type information indicating that the current block is not divided, or may determine divided encoding units 1220a and 1220b, 1230a, 1230b, and 1230c, 1270a and 1270b, or 1280a, 1280b, and 1280c based on division type information indicating a predetermined division method. Predetermined division methods by which a non-square encoding unit is divided will be described in detail through various embodiments below.

According to an exemplary embodiment, the image decoding apparatus 100 may determine the form in which the encoding unit is divided using the division type information. In this case, the division type information may indicate the number of at least one encoding unit generated by dividing the encoding unit. Referring to FIG. 12, when the division type information indicates that the current encoding unit 1200 or 1250 is divided into two encoding units, the image decoding apparatus 100 may divide the current encoding unit 1200 or 1250 based on the division type information to determine the two encoding units 1220a and 1220b, or 1270a and 1270b, included in the current encoding unit.

According to an embodiment, when the image decoding apparatus 100 divides the current encoding unit 1200 or 1250 having a non-square shape based on the division type information, the image decoding apparatus 100 may divide the current encoding unit in consideration of the position of the long side of the non-square current encoding unit 1200 or 1250. For example, the image decoding apparatus 100 may determine a plurality of encoding units by dividing the current encoding unit 1200 or 1250 in the direction that divides its long side, in consideration of the shape of the current encoding unit 1200 or 1250.

According to one embodiment, when the division type information indicates that an encoding unit is divided into an odd number of blocks, the image decoding apparatus 100 may determine an odd number of encoding units included in the current encoding unit 1200 or 1250. For example, when the division type information indicates that the current encoding unit 1200 or 1250 is divided into three encoding units, the image decoding apparatus 100 may divide the current encoding unit 1200 or 1250 into three encoding units 1230a, 1230b, and 1230c, or 1280a, 1280b, and 1280c. According to an exemplary embodiment, the image decoding apparatus 100 may determine the odd number of encoding units included in the current encoding unit 1200 or 1250, and the determined encoding units may not all have the same size. For example, the size of a predetermined encoding unit 1230b or 1280b among the determined odd number of encoding units 1230a, 1230b, 1230c, 1280a, 1280b, and 1280c may be different from the sizes of the other encoding units 1230a, 1230c, 1280a, and 1280c. In other words, the encoding units determined by dividing the current encoding unit 1200 or 1250 may have a plurality of sizes, and in some cases the odd number of encoding units 1230a, 1230b, 1230c, 1280a, 1280b, and 1280c may each have a different size.
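
An odd (three-way) split along the long side with a larger middle unit can be sketched as follows (Python; the 1:2:1 ratio is an assumption chosen only to show that the middle unit may differ in size from the outer ones).

    def split_non_square_into_three(x0, y0, width, height):
        if width >= height:          # wide block: split along the width
            q = width // 4
            return [(x0, y0, q, height),
                    (x0 + q, y0, 2 * q, height),
                    (x0 + 3 * q, y0, q, height)]
        q = height // 4              # tall block: split along the height
        return [(x0, y0, width, q),
                (x0, y0 + q, width, 2 * q),
                (x0, y0 + 3 * q, width, q)]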

According to an exemplary embodiment, when the division type information indicates that an encoding unit is divided into an odd number of blocks, the image decoding apparatus 100 may determine the odd number of encoding units included in the current encoding unit 1200 or 1250 and, furthermore, may set a predetermined restriction on at least one of the odd number of encoding units generated by the division. Referring to FIG. 12, the image decoding apparatus 100 may make the decoding process for the encoding unit 1230b or 1280b located at the center among the encoding units 1230a, 1230b, 1230c, 1280a, 1280b, and 1280c generated by dividing the current encoding unit 1200 or 1250 different from that for the other encoding units 1230a, 1230c, 1280a, and 1280c. For example, the image decoding apparatus 100 may restrict the centrally located encoding unit 1230b or 1280b so that, unlike the other encoding units 1230a, 1230c, 1280a, and 1280c, it is not further divided or is divided only a predetermined number of times.

FIG. 13 illustrates a process in which the image decoding apparatus 100 divides an encoding unit based on at least one of block type information and division type information according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine whether the first encoding unit 1300 having a square shape is divided into encoding units or is not divided, based on at least one of the block type information and the division type information. According to one embodiment, when the division type information indicates that the first encoding unit 1300 is divided in the horizontal direction, the image decoding apparatus 100 may divide the first encoding unit 1300 in the horizontal direction to determine a second encoding unit 1310. The terms first encoding unit, second encoding unit, and third encoding unit used according to an embodiment express the relation before and after division between encoding units. For example, if the first encoding unit is divided, a second encoding unit can be determined, and if the second encoding unit is divided, a third encoding unit can be determined. Hereinafter, the relation among the first, second, and third encoding units follows this characteristic.

According to an embodiment, the image decoding apparatus 100 may determine whether the determined second encoding unit 1310 is divided into encoding units or is not divided, based on at least one of the block type information and the division type information. Referring to FIG. 13, the image decoding apparatus 100 may divide the non-square second encoding unit 1310, determined by dividing the first encoding unit 1300, into at least one third encoding unit 1320a, 1320b, 1320c, 1320d, or the like based on at least one of the block type information and the division type information, or may not divide the second encoding unit 1310. The image decoding apparatus 100 may obtain at least one of the block type information and the division type information, divide the first encoding unit 1300 based on the obtained information to determine a plurality of second encoding units (for example, 1310) of various shapes, and divide the second encoding unit 1310 according to the way the first encoding unit 1300 was divided based on at least one of the block type information and the division type information. According to one embodiment, when the first encoding unit 1300 is divided into the second encoding units 1310 based on at least one of the block type information and the division type information for the first encoding unit 1300, the second encoding unit 1310 may also be divided into third encoding units (for example, 1320a, 1320b, 1320c, 1320d, etc.) based on at least one of the block type information and the division type information for the second encoding unit 1310. That is, an encoding unit may be recursively divided based on at least one of the division type information and the block type information associated with each encoding unit. Therefore, a square encoding unit may be determined from a non-square encoding unit, and the square encoding unit may be recursively divided to determine a non-square encoding unit. Referring to FIG. 13, among the odd number of third encoding units 1320b, 1320c, and 1320d determined by dividing the non-square second encoding unit 1310, a predetermined encoding unit (for example, an encoding unit located in the middle or a square-shaped encoding unit) may be recursively divided. According to an embodiment, the square third encoding unit 1320c, which is one of the odd number of third encoding units 1320b, 1320c, and 1320d, may be divided in the horizontal direction into a plurality of fourth encoding units. The non-square fourth encoding unit 1340, which is one of the plurality of fourth encoding units, may be further divided into a plurality of encoding units. For example, the non-square fourth encoding unit 1340 may be further divided into an odd number of encoding units 1350a, 1350b, and 1350c. A method that can be used for recursive division of an encoding unit will be described later through various embodiments.
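
The recursion described above can be sketched as follows (Python; get_split_info is a placeholder for parsing the block type and division type information of a single unit, and the two-way splits are a simplification of the division forms discussed in this disclosure).

    def split_recursively(x0, y0, w, h, get_split_info, min_size=4):
        # Each unit obtains its own split decision and is either kept or divided;
        # every resulting unit is handled the same way (recursive division).
        info = get_split_info(x0, y0, w, h)
        if info == 'none' or max(w, h) <= min_size:
            return [(x0, y0, w, h)]
        if info == 'vertical':
            children = [(x0, y0, w // 2, h), (x0 + w // 2, y0, w // 2, h)]
        else:  # 'horizontal'
            children = [(x0, y0, w, h // 2), (x0, y0 + h // 2, w, h // 2)]
        leaves = []
        for cx, cy, cw, ch in children:
            leaves.extend(split_recursively(cx, cy, cw, ch, get_split_info, min_size))
        return leaves

    # Toy rule: split vertically while the unit is wider than 8 samples.
    units = split_recursively(0, 0, 32, 32,
                              lambda x, y, w, h: 'vertical' if w > 8 else 'none')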

According to one embodiment, the image decoding apparatus 100 may determine whether each of the third encoding units 1320a, 1320b, 1320c, and 1320d is divided into encoding units, or may determine that the second encoding unit 1310 is not divided, based on at least one of the block type information and the division type information. The image decoding apparatus 100 may divide the non-square second encoding unit 1310 into the odd number of third encoding units 1320b, 1320c, and 1320d according to an embodiment. The image decoding apparatus 100 may set a predetermined restriction on a predetermined third encoding unit among the odd number of third encoding units 1320b, 1320c, and 1320d. For example, the image decoding apparatus 100 may restrict the encoding unit 1320c located in the middle among the odd number of third encoding units 1320b, 1320c, and 1320d so that it is no longer divided or is divided only a set number of times. Referring to FIG. 13, the image decoding apparatus 100 may restrict the encoding unit 1320c located in the middle among the odd number of third encoding units 1320b, 1320c, and 1320d included in the non-square second encoding unit 1310 so that it is no longer divided, is divided into a predetermined division form (for example, divided into four encoding units or divided into a form corresponding to the form in which the second encoding unit 1310 was divided), or is divided only a predetermined number of times (for example, only n times, n > 0). However, the above restriction on the encoding unit 1320c located in the middle is merely an example and should not be construed as being limited to the above embodiments; it should be construed as including various restrictions by which the encoding unit 1320c located in the middle can be decoded differently from the other encoding units 1320b and 1320d.

According to an exemplary embodiment, the image decoding apparatus 100 may acquire at least one of block type information and division type information used for dividing a current encoding unit at a predetermined position in the current encoding unit.

FIG. 14 illustrates a method by which the image decoding apparatus 100 determines a predetermined encoding unit among an odd number of encoding units, according to an embodiment.

Referring to FIG. 14, at least one of the block type information and the division type information of the current encoding unit 1400 may be obtained from a sample at a predetermined position (for example, the sample 1440 located in the middle) among a plurality of samples included in the current encoding unit 1400. However, the predetermined position in the current encoding unit 1400 from which at least one of the block type information and the division type information can be obtained should not be limited to the middle position shown in FIG. 14, and may include various positions (for example, top, bottom, left, right, top left, bottom left, top right, bottom right, etc.) that can be included in the current encoding unit 1400. The image decoding apparatus 100 may obtain at least one of the block type information and the division type information from the predetermined position and determine whether to divide or not to divide the current encoding unit into encoding units of various shapes and sizes.

According to an exemplary embodiment, when the current encoding unit is divided into a predetermined number of encoding units, the image decoding apparatus 100 may select one of the encoding units. The method for selecting one of the plurality of encoding units may vary, and such methods will be described later through various embodiments.

According to an exemplary embodiment, the image decoding apparatus 100 may divide the current encoding unit into a plurality of encoding units and determine an encoding unit at a predetermined position.

According to an exemplary embodiment, the image decoding apparatus 100 may use information indicating the positions of the odd number of encoding units in order to determine the encoding unit located in the middle among the odd number of encoding units. Referring to FIG. 14, the image decoding apparatus 100 may divide the current encoding unit 1400 to determine the odd number of encoding units 1420a, 1420b, and 1420c. The image decoding apparatus 100 may determine the middle encoding unit 1420b using information on the positions of the odd number of encoding units 1420a, 1420b, and 1420c. For example, the image decoding apparatus 100 may determine the positions of the encoding units 1420a, 1420b, and 1420c based on information indicating the positions of predetermined samples included in the encoding units 1420a, 1420b, and 1420c, and thereby determine the encoding unit 1420b located in the middle. Specifically, the image decoding apparatus 100 may determine the encoding unit 1420b located in the middle by determining the positions of the encoding units 1420a, 1420b, and 1420c based on information indicating the positions of the upper-left samples 1430a, 1430b, and 1430c of the encoding units 1420a, 1420b, and 1420c.

Information indicating the positions of the upper-left samples 1430a, 1430b, and 1430c included in the encoding units 1420a, 1420b, and 1420c according to one embodiment may include information on the positions or coordinates of the encoding units 1420a, 1420b, and 1420c within the picture. Information indicating the positions of the upper-left samples 1430a, 1430b, and 1430c included in the encoding units 1420a, 1420b, and 1420c according to one embodiment may also include information indicating the widths or heights of the encoding units 1420a, 1420b, and 1420c included in the current encoding unit 1400, and the widths or heights may correspond to information indicating the differences between the coordinates of the encoding units 1420a, 1420b, and 1420c within the picture. That is, the image decoding apparatus 100 may determine the encoding unit 1420b located in the middle by directly using the information on the positions or coordinates of the encoding units 1420a, 1420b, and 1420c within the picture, or by using the information on the widths or heights of the encoding units corresponding to the differences between the coordinates.

The information indicating the position of the upper-left sample 1430a of the upper encoding unit 1420a may indicate coordinates (xa, ya), the information indicating the position of the upper-left sample 1430b of the middle encoding unit 1420b may indicate coordinates (xb, yb), and the information indicating the position of the upper-left sample 1430c of the lower encoding unit 1420c may indicate coordinates (xc, yc). The image decoding apparatus 100 may determine the middle encoding unit 1420b using the coordinates of the upper-left samples 1430a, 1430b, and 1430c included in the encoding units 1420a, 1420b, and 1420c. For example, when the coordinates of the upper-left samples 1430a, 1430b, and 1430c are sorted in ascending or descending order, the encoding unit 1420b including the coordinates (xb, yb) of the sample 1430b positioned in the middle may be determined as the encoding unit located in the middle among the encoding units 1420a, 1420b, and 1420c determined by dividing the current encoding unit 1400. However, the coordinates indicating the positions of the upper-left samples 1430a, 1430b, and 1430c may be coordinates indicating absolute positions in the picture; furthermore, coordinates (dxb, dyb), which are information indicating the relative position of the upper-left sample 1430b of the middle encoding unit 1420b with respect to the position of the upper-left sample 1430a of the upper encoding unit 1420a, and coordinates (dxc, dyc), which are information indicating the relative position of the upper-left sample 1430c of the lower encoding unit 1420c, may also be used. Also, the method of determining the encoding unit at a predetermined position by using the coordinates of a sample as information indicating the position of the sample included in the encoding unit should not be limited to the above-described method, and should be interpreted as including various arithmetic methods that can use the coordinates of the sample.

According to an embodiment, the image decoding apparatus 100 may divide the current encoding unit 1400 into a plurality of encoding units 1420a, 1420b, and 1420c, and may select an encoding unit among the encoding units 1420a, 1420b, and 1420c according to a predetermined criterion. For example, the image decoding apparatus 100 may select the encoding unit 1420b having a size different from that of the others among the encoding units 1420a, 1420b, and 1420c.

According to an embodiment, the image decoding apparatus 100 may determine the widths or heights of the encoding units 1420a, 1420b, and 1420c using the (xa, ya) coordinates, which are information indicating the position of the upper-left sample 1430a of the upper encoding unit 1420a, the (xb, yb) coordinates, which are information indicating the position of the upper-left sample 1430b of the middle encoding unit 1420b, and the (xc, yc) coordinates, which are information indicating the position of the upper-left sample 1430c of the lower encoding unit 1420c. The image decoding apparatus 100 may determine the sizes of the encoding units 1420a, 1420b, and 1420c using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating their positions.

According to an embodiment, the image decoding apparatus 100 may determine the width of the upper encoding unit 1420a as xb-xa and the height as yb-ya. According to an embodiment, the image decoding apparatus 100 may determine the width of the middle encoding unit 1420b as xc-xb and the height as yc-yb. The image decoding apparatus 100 may determine the width or height of the lower encoding unit using the width or height of the current encoding unit and the widths and heights of the upper encoding unit 1420a and the middle encoding unit 1420b. The image decoding apparatus 100 may determine the encoding unit having a size different from the others based on the determined widths and heights of the encoding units 1420a, 1420b, and 1420c. Referring to FIG. 14, the image decoding apparatus 100 may determine the encoding unit 1420b, whose size differs from those of the upper encoding unit 1420a and the lower encoding unit 1420c, as the encoding unit at the predetermined position. However, the above-described process in which the image decoding apparatus 100 determines an encoding unit having a size different from the others is merely an example of determining an encoding unit at a predetermined position using the sizes of encoding units determined from sample coordinates, and various processes of determining an encoding unit at a predetermined position by comparing the sizes of encoding units determined according to predetermined sample coordinates may be used.
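
The coordinate arithmetic above can be condensed into a short sketch (Python; coordinates are (x, y) with y growing downward, the three units are assumed to be stacked vertically, and the selection rule is only an illustration of picking the unit whose size differs from the others).

    def middle_unit_by_size(corners, parent_height):
        # corners: upper-left sample coordinates of three vertically stacked units
        (xa, ya), (xb, yb), (xc, yc) = sorted(corners, key=lambda c: c[1])
        heights = [yb - ya, yc - yb, parent_height - (yc - ya)]
        for i, h in enumerate(heights):
            if heights.count(h) == 1:   # the unit whose height differs from the other two
                return i, h
        return 1, heights[1]            # fall back to the positional middle

    # A 16-sample-high parent split 4 / 8 / 4: the 8-high unit is selected.
    index, height = middle_unit_by_size([(0, 0), (0, 4), (0, 12)], parent_height=16)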

However, the position of the sample considered for determining the position of the encoding unit should not be interpreted as being limited to the upper left, and information about the position of any sample included in the encoding unit may be used.

According to one embodiment, the image decoding apparatus 100 may select the encoding unit at a predetermined position among the odd number of encoding units determined by dividing the current encoding unit, in consideration of the shape of the current encoding unit. For example, if the current encoding unit has a non-square shape whose width is greater than its height, the image decoding apparatus 100 may determine the encoding unit at the predetermined position along the horizontal direction. That is, the image decoding apparatus 100 may determine one of the encoding units located at different positions in the horizontal direction and set a restriction on that encoding unit. If the current encoding unit has a non-square shape whose height is greater than its width, the image decoding apparatus 100 may determine the encoding unit at the predetermined position along the vertical direction. That is, the image decoding apparatus 100 may determine one of the encoding units located at different positions in the vertical direction and set a restriction on that encoding unit.

According to an exemplary embodiment, the image decoding apparatus 100 may use information indicating the positions of an even number of encoding units in order to determine the encoding unit at a predetermined position among the even number of encoding units. The image decoding apparatus 100 may determine an even number of encoding units by dividing the current encoding unit, and may determine the encoding unit at the predetermined position using the information on the positions of the even number of encoding units. A concrete procedure for this corresponds to the procedure for determining an encoding unit at a predetermined position (for example, the middle position) among an odd number of encoding units described above with reference to FIG. 14, and is therefore omitted.

According to one embodiment, when a non-square current encoding unit is divided into a plurality of encoding units, predetermined information about the encoding unit at a predetermined position may be used in the division process in order to determine that encoding unit among the plurality of encoding units. For example, in order to determine the encoding unit located in the middle among the plurality of encoding units into which the current encoding unit is divided, the image decoding apparatus 100 may use at least one of the block type information and the division type information stored in a sample included in the middle encoding unit.

Referring to FIG. 14, the image decoding apparatus 100 may divide the current encoding unit 1400 into the plurality of encoding units 1420a, 1420b, and 1420c based on at least one of the block type information and the division type information, and may determine the encoding unit 1420b located in the middle among the plurality of encoding units 1420a, 1420b, and 1420c. Furthermore, the image decoding apparatus 100 may determine the encoding unit 1420b located in the middle in consideration of the position from which at least one of the block type information and the division type information is obtained. That is, at least one of the block type information and the division type information of the current encoding unit 1400 may be obtained from the sample 1440 located in the middle of the current encoding unit 1400, and when the current encoding unit 1400 is divided into the plurality of encoding units 1420a, 1420b, and 1420c based on at least one of the block type information and the division type information, the encoding unit 1420b including the sample 1440 may be determined as the encoding unit located in the middle. However, the information used for determining the encoding unit located in the middle should not be limited to at least one of the block type information and the division type information, and various kinds of information may be used in the process of determining the encoding unit located in the middle.

According to an embodiment, predetermined information for identifying the encoding unit at a predetermined position may be obtained from a predetermined sample included in the encoding unit to be determined. Referring to FIG. 14, the image decoding apparatus 100 may use at least one of the block type information and the division type information obtained from a sample at a predetermined position in the current encoding unit 1400 (for example, a sample located in the middle of the current encoding unit 1400) in order to determine the encoding unit at a predetermined position (for example, the encoding unit located in the middle) among the plurality of encoding units 1420a, 1420b, and 1420c determined by dividing the current encoding unit 1400. That is, the image decoding apparatus 100 may determine the sample at the predetermined position in consideration of the block shape of the current encoding unit 1400, and may determine, among the plurality of encoding units 1420a, 1420b, and 1420c determined by dividing the current encoding unit 1400, the encoding unit 1420b including the sample from which predetermined information (for example, at least one of the block type information and the division type information) can be obtained, and set a predetermined restriction on it. Referring to FIG. 14, the image decoding apparatus 100 may determine the sample 1440 located in the middle of the current encoding unit 1400 as the sample from which the predetermined information can be obtained, and may set a predetermined restriction on the encoding unit 1420b including the sample 1440 in the decoding process. However, the position of the sample from which the predetermined information can be obtained should not be construed as being limited to the above-described position, and may be interpreted as any sample included in the encoding unit 1420b to be determined for the restriction.

The position of the sample from which the predetermined information can be obtained according to one embodiment may be determined according to the shape of the current encoding unit 1400. According to one embodiment, the block type information may indicate whether the current encoding unit has a square or non-square shape, and the position of the sample from which the predetermined information can be obtained may be determined according to that shape. For example, the image decoding apparatus 100 may use at least one of the information on the width of the current encoding unit and the information on its height to determine the sample located on a boundary that divides at least one of the width and the height of the current encoding unit in half as the sample from which the predetermined information can be obtained. As another example, when the block type information related to the current encoding unit indicates a non-square shape, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary that divides the longer side of the current encoding unit in half as the sample from which the predetermined information can be obtained.

According to an exemplary embodiment, when the current encoding unit is divided into a plurality of encoding units, the image decoding apparatus 100 may use at least one of the block type information and the division type information in order to determine the encoding unit at a predetermined position among the plurality of encoding units. According to an embodiment, the image decoding apparatus 100 may obtain at least one of the block type information and the division type information from a sample at a predetermined position included in an encoding unit, and may divide the plurality of encoding units generated by dividing the current encoding unit using at least one of the division type information and the block type information obtained from the sample at the predetermined position included in each of the plurality of encoding units. That is, an encoding unit may be recursively divided using at least one of the block type information and the division type information obtained from the sample at the predetermined position included in each encoding unit. Since the recursive division process of an encoding unit has been described with reference to FIG. 13, a detailed description thereof is omitted.

According to an embodiment, the image decoding apparatus 100 may determine at least one encoding unit by dividing the current encoding unit, and the order in which the at least one encoding unit is decoded may be determined based on a predetermined block.

FIG. 15 shows a sequence in which a plurality of coding units are processed when the image decoding apparatus 100 determines a plurality of coding units by dividing the current coding unit according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine the second encoding units 1510a and 1510b by dividing the first encoding unit 1500 in the vertical direction according to the block type information and the division type information, determine the second encoding units 1530a and 1530b by dividing the first encoding unit 1500 in the horizontal direction, or determine the second encoding units 1550a, 1550b, 1550c, and 1550d by dividing the first encoding unit 1500 in the vertical and horizontal directions.

Referring to FIG. 15, the image decoding apparatus 100 may determine that the second encoding units 1510a and 1510b, determined by dividing the first encoding unit 1500 in the vertical direction, are processed in the horizontal direction order 1510c. The image decoding apparatus 100 may determine the processing order of the second encoding units 1530a and 1530b, determined by dividing the first encoding unit 1500 in the horizontal direction, as the vertical direction order 1530c. The image decoding apparatus 100 may determine that the second encoding units 1550a, 1550b, 1550c, and 1550d, determined by dividing the first encoding unit 1500 in the vertical and horizontal directions, are processed according to a predetermined order in which the encoding units located in one row are processed and then the encoding units located in the next row are processed (for example, a raster scan order or a z-scan order 1550e).

According to an embodiment, the image decoding apparatus 100 may recursively divide encoding units. Referring to FIG. 15, the image decoding apparatus 100 may determine the plurality of encoding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d by dividing the first encoding unit 1500, and may recursively divide each of the determined encoding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d. The method of dividing the plurality of encoding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d may correspond to the method of dividing the first encoding unit 1500. Accordingly, each of the plurality of encoding units 1510a, 1510b, 1530a, 1530b, 1550a, 1550b, 1550c, and 1550d may be independently divided into a plurality of encoding units. Referring to FIG. 15, the image decoding apparatus 100 may determine the second encoding units 1510a and 1510b by dividing the first encoding unit 1500 in the vertical direction, and furthermore, may determine to divide or not to divide each of the second encoding units 1510a and 1510b independently.

According to an embodiment, the image decoding apparatus 100 may divide the left second encoding unit 1510a in the horizontal direction into the third encoding units 1520a and 1520b, and may not divide the right second encoding unit 1510b.

According to an embodiment, the processing order of encoding units may be determined based on the division process of the encoding units. In other words, the processing order of divided encoding units may be determined based on the processing order of the encoding units immediately before being divided. The image decoding apparatus 100 may determine the order in which the third encoding units 1520a and 1520b, determined by dividing the left second encoding unit 1510a, are processed, independently of the right second encoding unit 1510b. Since the left second encoding unit 1510a is divided in the horizontal direction to determine the third encoding units 1520a and 1520b, the third encoding units 1520a and 1520b may be processed in the vertical direction 1520c. Since the order in which the left second encoding unit 1510a and the right second encoding unit 1510b are processed corresponds to the horizontal direction 1510c, the right second encoding unit 1510b may be processed after the third encoding units 1520a and 1520b included in the left second encoding unit 1510a are processed in the vertical direction 1520c. The above description is intended to explain that the processing order of encoding units is determined according to the encoding units before division; it should not be construed as being limited to the above-described embodiments, and should be construed as including various methods by which encoding units determined in various forms of division can be processed independently in a predetermined order.

FIG. 16 illustrates a process of determining that the current encoding unit is divided into an odd number of encoding units when the image decoding apparatus 100 cannot process the encoding units in a predetermined order, according to an embodiment.

According to one embodiment, the image decoding apparatus 100 may determine that the current encoding unit is divided into an odd number of encoding units based on the obtained block type information and division type information. Referring to FIG. 16, the first encoding unit 1600 having a square shape may be divided into the non-square second encoding units 1610a and 1610b, and the second encoding units 1610a and 1610b may each be independently divided into the third encoding units 1620a, 1620b, 1620c, 1620d, and 1620e. According to an embodiment, the image decoding apparatus 100 may determine the plurality of third encoding units 1620a and 1620b by dividing the left second encoding unit 1610a in the horizontal direction, and may divide the right second encoding unit 1610b into the odd number of third encoding units 1620c, 1620d, and 1620e.

According to an embodiment, the image decoding apparatus 100 may determine whether an encoding unit divided into an odd number exists by determining whether the third encoding units 1620a, 1620b, 1620c, 1620d, and 1620e can be processed in a predetermined order. Referring to FIG. 16, the image decoding apparatus 100 may determine the third encoding units 1620a, 1620b, 1620c, 1620d, and 1620e by recursively dividing the first encoding unit 1600. The image decoding apparatus 100 may determine, based on at least one of the block type information and the division type information, whether the first encoding unit 1600, the second encoding units 1610a and 1610b, or the third encoding units 1620a, 1620b, 1620c, 1620d, and 1620e are divided into an odd number of encoding units among the divided forms. For example, the encoding unit located on the right among the second encoding units 1610a and 1610b may be divided into the odd number of third encoding units 1620c, 1620d, and 1620e. The order in which the plurality of encoding units included in the first encoding unit 1600 are processed may be a predetermined order (for example, a z-scan order 1630), and the image decoding apparatus 100 may determine whether the third encoding units 1620c, 1620d, and 1620e, determined by dividing the right second encoding unit 1610b into an odd number, satisfy a condition for being processed according to the predetermined order.

According to an embodiment, the image decoding apparatus 100 may determine whether the third encoding units 1620a, 1620b, 1620c, 1620d, and 1620e included in the first encoding unit 1600 satisfy the condition for being processed in the predetermined order, and the condition is related to whether at least one of the width and the height of the second encoding units 1610a and 1610b is divided in half along the boundaries of the third encoding units 1620a, 1620b, 1620c, 1620d, and 1620e. For example, the third encoding units 1620a and 1620b, determined by dividing the height of the non-square left second encoding unit 1610a in half, satisfy the condition, but the third encoding units 1620c, 1620d, and 1620e, determined by dividing the right second encoding unit 1610b into three encoding units, do not satisfy the condition because their boundaries do not divide the width or height of the right second encoding unit 1610b in half. When the condition is not satisfied, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on that determination, that the right second encoding unit 1610b is divided into an odd number of encoding units. According to an embodiment, when an encoding unit is divided into an odd number of encoding units, the image decoding apparatus 100 may set a predetermined restriction on the encoding unit at a predetermined position among the divided encoding units; since such restrictions have been described above through various embodiments, a detailed description thereof is omitted.
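
The condition can be expressed as a boundary check (Python; units are (x, y, w, h) tuples and the check is only an illustration of the "boundary divides the width or height in half" condition, not the codec's normative test).

    def boundaries_halve_parent(parent, children):
        px, py, pw, ph = parent
        allowed_x = (px, px + pw // 2, px + pw)   # vertical lines a boundary may lie on
        allowed_y = (py, py + ph // 2, py + ph)   # horizontal lines a boundary may lie on
        for x, y, w, h in children:
            if (x + w) not in allowed_x or (y + h) not in allowed_y:
                return False                      # a boundary misses the halfway line
        return True

    # A 4/8/4 three-way split of a 16-wide parent fails; a clean 8/8 split passes.
    print(boundaries_halve_parent((0, 0, 16, 16),
                                  [(0, 0, 4, 16), (4, 0, 8, 16), (12, 0, 4, 16)]))  # False
    print(boundaries_halve_parent((0, 0, 16, 16),
                                  [(0, 0, 8, 16), (8, 0, 8, 16)]))                  # True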

FIG. 17 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a first encoding unit 1700 according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 1700 based on at least one of the block type information and the division type information obtained through the receiving unit 110. The first encoding unit 1700 having a square shape may be divided into four square encoding units or into a plurality of non-square encoding units. For example, referring to FIG. 17, if the block type information indicates that the first encoding unit 1700 is square and the division type information indicates division into non-square encoding units, the image decoding apparatus 100 may divide the first encoding unit 1700 into a plurality of non-square encoding units. Specifically, when the division type information indicates that an odd number of encoding units are determined by dividing the first encoding unit 1700 in the horizontal or vertical direction, the image decoding apparatus 100 may divide the first encoding unit 1700 into the second encoding units 1710a, 1710b, and 1710c determined by dividing in the vertical direction, or into the second encoding units 1720a, 1720b, and 1720c determined by dividing in the horizontal direction, as the odd number of encoding units.

According to an embodiment, the image decoding apparatus 100 may determine whether the second encoding units 1710a, 1710b, 1710c, 1720a, 1720b, and 1720c included in the first encoding unit 1700 satisfy the condition for being processed in a predetermined order, and the condition is related to whether at least one of the width and the height of the first encoding unit 1700 is divided in half along the boundaries of the second encoding units 1710a, 1710b, 1710c, 1720a, 1720b, and 1720c. Referring to FIG. 17, since the boundaries of the second encoding units 1710a, 1710b, and 1710c, determined by dividing the first encoding unit 1700 in the vertical direction, do not divide the width of the first encoding unit 1700 in half, the first encoding unit 1700 may be determined not to satisfy the condition for being processed in the predetermined order. In addition, since the boundaries of the second encoding units 1720a, 1720b, and 1720c, determined by dividing the first encoding unit 1700 in the horizontal direction, do not divide the height of the first encoding unit 1700 in half, the first encoding unit 1700 may be determined not to satisfy the condition for being processed in the predetermined order. When the condition is not satisfied, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on that determination, that the first encoding unit 1700 is divided into an odd number of encoding units. According to an embodiment, when an encoding unit is divided into an odd number of encoding units, the image decoding apparatus 100 may set a predetermined restriction on the encoding unit at a predetermined position among the divided encoding units; since such restrictions have been described above through various embodiments, a detailed description thereof is omitted.

According to an embodiment, the image decoding apparatus 100 may determine the encoding units of various types by dividing the first encoding unit.

Referring to FIG. 17, the image decoding apparatus 100 may divide the first encoding unit 1700 having a square shape and the first encoding unit 1730 or 1750 having a non-square shape into encoding units of various shapes.

FIG. 18 illustrates that, according to an embodiment, the form in which the second encoding unit can be divided by the image decoding apparatus 100 is restricted when the non-square second encoding unit determined by dividing the first encoding unit 1800 satisfies a predetermined condition.

According to an embodiment, the image decoding apparatus 100 may determine to divide the first encoding unit 1800 having a square shape into the non-square second encoding units 1810a and 1810b, or 1820a and 1820b, based on at least one of the block type information and the division type information obtained through the receiving unit 110. The second encoding units 1810a, 1810b, 1820a, and 1820b may be independently divided. Accordingly, the image decoding apparatus 100 may determine whether to divide or not to divide each of the second encoding units 1810a, 1810b, 1820a, and 1820b into a plurality of encoding units based on at least one of the block type information and the division type information related to each of them. According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 1812a and 1812b by dividing, in the horizontal direction, the non-square left second encoding unit 1810a determined by dividing the first encoding unit 1800 in the vertical direction. However, when the left second encoding unit 1810a is divided in the horizontal direction, the image decoding apparatus 100 may restrict the right second encoding unit 1810b so that it cannot be divided in the horizontal direction, the same direction in which the left second encoding unit 1810a was divided. If the right second encoding unit 1810b were divided in the same direction to determine the third encoding units 1814a and 1814b, the left second encoding unit 1810a and the right second encoding unit 1810b would each be independently divided in the horizontal direction to determine the third encoding units 1812a, 1812b, 1814a, and 1814b. However, this is the same result as the image decoding apparatus 100 dividing the first encoding unit 1800 into the four square second encoding units 1830a, 1830b, 1830c, and 1830d based on at least one of the block type information and the division type information, which may be inefficient in terms of image decoding.

According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 1822a and 1822b, or 1824a and 1824b, by dividing, in the vertical direction, the non-square second encoding unit 1820a or 1820b determined by dividing the first encoding unit 1800 in the horizontal direction. However, when one of the second encoding units (for example, the upper second encoding unit 1820a) is divided in the vertical direction, the image decoding apparatus 100 may restrict the other second encoding unit (for example, the lower second encoding unit 1820b) so that it cannot be divided in the vertical direction, the same direction in which the upper second encoding unit 1820a was divided.

FIG. 19 illustrates a process in which the image decoding apparatus 100 divides a square encoding unit when the division type information cannot indicate division into four square encoding units, according to an embodiment.

According to one embodiment, the image decoding apparatus 100 may determine the second encoding units 1910a and 1910b, or 1920a and 1920b, by dividing the first encoding unit 1900 based on at least one of the block type information and the division type information. The division type information may include information on the various forms into which an encoding unit can be divided, but such information may not include information for dividing an encoding unit into four square encoding units. According to such division type information, the image decoding apparatus 100 cannot divide the square first encoding unit 1900 into the four square second encoding units 1930a, 1930b, 1930c, and 1930d. Based on the division type information, the image decoding apparatus 100 may determine the non-square second encoding units 1910a and 1910b, or 1920a and 1920b.

According to an embodiment, the image decoding apparatus 100 may independently divide each of the non-square second encoding units 1910a, 1910b, 1920a, and 1920b. Each of the second encoding units 1910a, 1910b, 1920a, and 1920b may be divided in a predetermined order through a recursive method, and this division method may correspond to the method by which the first encoding unit 1900 is divided based on at least one of the block type information and the division type information.

For example, the image decoding apparatus 100 may determine the square third encoding units 1912a and 1912b by dividing the left second encoding unit 1910a in the horizontal direction, and may determine the square third encoding units 1914a and 1914b by dividing the right second encoding unit 1910b in the horizontal direction. Furthermore, the image decoding apparatus 100 may determine the square third encoding units 1916a, 1916b, 1916c, and 1916d by dividing both the left second encoding unit 1910a and the right second encoding unit 1910b in the horizontal direction. In this case, encoding units may be determined in the same form as when the first encoding unit 1900 is divided into the four square second encoding units 1930a, 1930b, 1930c, and 1930d.

In another example, the image decoding apparatus 100 may determine the square third encoding units 1922a and 1922b by dividing the upper second encoding unit 1920a in the vertical direction, and may determine the square third encoding units 1924a and 1924b by dividing the lower second encoding unit 1920b in the vertical direction. Furthermore, the image decoding apparatus 100 may determine square third encoding units by dividing both the upper second encoding unit 1920a and the lower second encoding unit 1920b in the vertical direction. In this case, encoding units may be determined in the same form as when the first encoding unit 1900 is divided into the four square second encoding units 1930a, 1930b, 1930c, and 1930d.

FIG. 20 illustrates that a processing order among a plurality of coding units may be changed according to a division process of coding units according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 2000 based on the block type information and the division type information. When the block type information indicates a square shape and the division type information indicates that the first encoding unit 2000 is divided in at least one of the horizontal direction and the vertical direction, the image decoding apparatus 100 may divide the first encoding unit 2000 to determine second encoding units (for example, 2010a, 2010b, 2020a, 2020b, etc.). Referring to FIG. 20, the non-square second encoding units 2010a, 2010b, 2020a, and 2020b, determined by dividing the first encoding unit 2000 only in the horizontal or vertical direction, may each be independently divided based on the block type information and the division type information for each of them. For example, the image decoding apparatus 100 may determine the third encoding units 2016a, 2016b, 2016c, and 2016d by dividing, in the horizontal direction, the second encoding units 2010a and 2010b generated by dividing the first encoding unit 2000 in the vertical direction, and may determine the third encoding units 2026a, 2026b, 2026c, and 2026d by dividing, in the vertical direction, the second encoding units 2020a and 2020b generated by dividing the first encoding unit 2000 in the horizontal direction. Since the process of dividing the second encoding units 2010a, 2010b, 2020a, and 2020b has been described in detail with reference to FIG. 19, a detailed description thereof is omitted.

According to an embodiment, the image decoding apparatus 100 may process encoding units in a predetermined order. Features of processing encoding units in a predetermined order have been described above with reference to FIG. 15, and a detailed description thereof is omitted. Referring to FIG. 20, the image decoding apparatus 100 may determine the four square third encoding units 2016a, 2016b, 2016c, and 2016d, or 2026a, 2026b, 2026c, and 2026d, by dividing the square first encoding unit 2000. According to an embodiment, the image decoding apparatus 100 may determine the processing order of the third encoding units 2016a, 2016b, 2016c, 2016d, 2026a, 2026b, 2026c, and 2026d according to the form in which the first encoding unit 2000 is divided.

According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 2016a, 2016b, 2016c, and 2016d by dividing, in the horizontal direction, each of the second encoding units 2010a and 2010b generated by dividing the first encoding unit 2000 in the vertical direction, and may process the third encoding units 2016a, 2016b, 2016c, and 2016d according to the order 2017 of first processing the third encoding units 2016a and 2016b included in the left second encoding unit 2010a in the vertical direction and then processing the third encoding units 2016c and 2016d included in the right second encoding unit 2010b in the vertical direction.

According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 2026a, 2026b, 2026c, and 2026d by dividing, in the vertical direction, the second encoding units 2020a and 2020b generated by dividing the first encoding unit 2000 in the horizontal direction, and may process the third encoding units 2026a, 2026b, 2026c, and 2026d according to an order 2027 in which the third encoding units 2026a and 2026b included in the upper second encoding unit 2020a are first processed in the horizontal direction and then the third encoding units 2026c and 2026d included in the lower second encoding unit 2020b are processed in the horizontal direction.

Referring to FIG. 20, the third encoding units 2016a, 2016b, 2016c, 2016d, 2026a, 2026b, 2026c, and 2026d may be determined by dividing the second encoding units 2010a, 2010b, 2020a, and 2020b. Although the second encoding units 2010a and 2010b determined by dividing in the vertical direction and the second encoding units 2020a and 2020b determined by dividing in the horizontal direction are divided in different forms, the third encoding units 2016a, 2016b, 2016c, 2016d, 2026a, 2026b, 2026c, and 2026d determined afterwards result in the first encoding unit 2000 being divided into encoding units of the same form. Accordingly, by recursively dividing encoding units through different processes based on at least one of the block type information and the division type information, the image decoding apparatus 100 may eventually determine encoding units of the same form, but may process the plurality of encoding units determined in the same form in different orders.
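
As an informal illustration of the point above (not part of the disclosed embodiments), the following Python sketch splits a square block vertically-then-horizontally and horizontally-then-vertically; the two procedures yield the same four square sub-blocks but in different processing orders, analogous to the orders 2017 and 2027. All names and sizes are illustrative assumptions.

# Illustrative sketch only: splitting a square block vertically then
# horizontally, or horizontally then vertically, yields the same four
# square sub-blocks; only the processing order differs.
def split_vertical(x, y, w, h):
    return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

def split_horizontal(x, y, w, h):
    return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]

block = (0, 0, 64, 64)

# Order analogous to 2017: vertical first, then each half horizontally.
order_a = [sub for half in split_vertical(*block) for sub in split_horizontal(*half)]
# Order analogous to 2027: horizontal first, then each half vertically.
order_b = [sub for half in split_horizontal(*block) for sub in split_vertical(*half)]

assert sorted(order_a) == sorted(order_b)   # same set of 32x32 blocks
assert order_a != order_b                   # but a different processing order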

FIG. 21 illustrates a process in which the depth of an encoding unit is determined according to a change in type and size of an encoding unit when a plurality of encoding units are determined by recursively dividing an encoding unit according to an exemplary embodiment.

According to an exemplary embodiment, the image decoding apparatus 100 may determine the depth of an encoding unit according to a predetermined criterion. For example, the predetermined criterion may be the length of the long side of the encoding unit. When the length of the long side of the current encoding unit is 1/2^n (n > 0) times the length of the long side of the encoding unit before division, it may be determined that the depth of the current encoding unit is increased by n relative to the depth of the encoding unit before division. Hereinafter, an encoding unit with an increased depth is expressed as an encoding unit of a lower depth.
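
The depth rule above can be restated informally: the depth increase equals log2 of the ratio between the long side before division and the long side after division. A minimal Python sketch, with illustrative sizes only:

# Hedged sketch of the depth rule described above; names and sizes are
# illustrative, not taken from the patent.
from math import log2

def depth_increase(parent_w, parent_h, child_w, child_h):
    # Depth grows by n when the long side shrinks by a factor of 2**n.
    return int(log2(max(parent_w, parent_h) / max(child_w, child_h)))

assert depth_increase(64, 64, 32, 32) == 1   # 2Nx2N -> NxN
assert depth_increase(64, 64, 16, 16) == 2   # 2Nx2N -> N/2xN/2
assert depth_increase(32, 64, 32, 32) == 1   # Nx2N  -> NxN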

Referring to FIG. 21, according to an embodiment, the image decoding apparatus 100 may determine the second encoding unit 2102, the third encoding unit 2104, and the like of lower depths by dividing the square first encoding unit 2100 based on block type information indicating a square shape (for example, the block type information may indicate '0: SQUARE'). If the size of the square first encoding unit 2100 is 2Nx2N, the second encoding unit 2102, determined by dividing the width and height of the first encoding unit 2100 by 1/2, may have a size of NxN. Furthermore, the third encoding unit 2104, determined by dividing the width and height of the second encoding unit 2102 by 1/2, may have a size of N/2xN/2. In this case, the width and height of the third encoding unit 2104 correspond to 1/2^2 of those of the first encoding unit 2100. If the depth of the first encoding unit 2100 is D, the depth of the second encoding unit 2102, whose width and height are 1/2 of those of the first encoding unit 2100, may be D+1, and the depth of the third encoding unit 2104, whose width and height are 1/2^2 of those of the first encoding unit 2100, may be D+2.

According to an exemplary embodiment, based on block type information indicating a non-square shape (for example, the block type information may indicate '1: NS_VER', indicating a non-square whose height is longer than its width, or '2: NS_HOR', indicating a non-square whose width is longer than its height), the image decoding apparatus 100 may divide the non-square first encoding unit 2110 or 2120 into the second encoding unit 2112 or 2122, the third encoding unit 2114 or 2124, and the like of lower depths.

The image decoding apparatus 100 may determine a second encoding unit (e.g., 2102, 2112, 2122, etc.) by dividing at least one of the width and the height of the first encoding unit 2110 of Nx2N size. That is, the image decoding apparatus 100 may determine the second encoding unit 2102 of NxN size or the second encoding unit 2122 of NxN/2 size by dividing the first encoding unit 2110 in the horizontal direction, or may determine the second encoding unit 2112 of N/2xN size by dividing it in the horizontal direction and the vertical direction.

According to an embodiment, the image decoding apparatus 100 may determine a second encoding unit (e.g., 2102, 2112, 2122, etc.) by dividing at least one of the width and the height of the first encoding unit 2120 of 2NxN size. That is, the image decoding apparatus 100 may determine the second encoding unit 2102 of NxN size or the second encoding unit 2112 of N/2xN size by dividing the first encoding unit 2120 in the vertical direction, or may determine the second encoding unit 2122 of NxN/2 size by dividing it in the horizontal direction and the vertical direction.

According to an exemplary embodiment, the image decoding apparatus 100 may determine a third encoding unit (e.g., 2104, 2114, 2124, etc.) by dividing at least one of the width and the height of the second encoding unit 2102 of NxN size. That is, the image decoding apparatus 100 may determine the third encoding unit 2104 of N/2xN/2 size by dividing the second encoding unit 2102 in the vertical direction and the horizontal direction, or may determine the third encoding unit 2114 of N/2^2xN/2 size or the third encoding unit 2124 of N/2xN/2^2 size.

The image decoding apparatus 100 may determine a third encoding unit (e.g., 2104, 2114, 2124, etc.) by dividing at least one of the width and the height of the second encoding unit 2112 of N/2xN size. That is, the image decoding apparatus 100 may determine the third encoding unit 2104 of N/2xN/2 size or the third encoding unit 2124 of N/2xN/2^2 size by dividing the second encoding unit 2112 in the horizontal direction, or may determine the third encoding unit 2114 of N/2^2xN/2 size by dividing it in the vertical and horizontal directions.

The image decoding apparatus 100 may determine a third encoding unit (e.g., 2104, 2114, 2124, etc.) by dividing at least one of the width and the height of the second encoding unit 2122 of NxN/2 size. That is, the image decoding apparatus 100 may determine the third encoding unit 2104 of N/2xN/2 size or the third encoding unit 2114 of N/2^2xN/2 size by dividing the second encoding unit 2122 in the vertical direction, or may determine the third encoding unit 2124 of N/2xN/2^2 size by dividing it in the vertical and horizontal directions.

According to an embodiment, the image decoding apparatus 100 may divide a square encoding unit (for example, 2100, 2102, or 2104) in the horizontal direction or the vertical direction. For example, the first encoding unit 2110 of Nx2N size may be determined by dividing the first encoding unit 2100 of 2Nx2N size in the vertical direction, or the first encoding unit 2120 of 2NxN size may be determined by dividing it in the horizontal direction. According to an embodiment, when the depth is determined based on the length of the longest side of an encoding unit, the depth of an encoding unit determined by dividing the first encoding unit 2100 of 2Nx2N size in the horizontal direction or the vertical direction may be the same as the depth of the first encoding unit 2100.

According to an embodiment, the width and height of the third encoding unit 2114 or 2124 may correspond to 1/2^2 of those of the first encoding unit 2110 or 2120. If the depth of the first encoding unit 2110 or 2120 is D, the depth of the second encoding unit 2112 or 2122, whose width and height are 1/2 of those of the first encoding unit 2110 or 2120, may be D+1, and the depth of the third encoding unit 2114 or 2124, whose width and height are 1/2^2 of those of the first encoding unit 2110 or 2120, may be D+2.

FIG. 22 illustrates a depth and a part index (hereinafter referred to as a PID) for distinguishing encoding units, which may be determined according to the shape and size of the encoding units, according to an exemplary embodiment.

According to an exemplary embodiment, the image decoding apparatus 100 may determine second encoding units of various forms by dividing the square first encoding unit 2200. Referring to FIG. 22, the image decoding apparatus 100 may determine the second encoding units 2202a, 2202b, 2204a, 2204b, 2206a, 2206b, 2206c, and 2206d by dividing the first encoding unit 2200 in at least one of the vertical direction and the horizontal direction according to the division type information. That is, the image decoding apparatus 100 may determine the second encoding units 2202a, 2202b, 2204a, 2204b, 2206a, 2206b, 2206c, and 2206d based on the division type information for the first encoding unit 2200.

According to an embodiment, the depths of the second encoding units 2202a, 2202b, 2204a, 2204b, 2206a, 2206b, 2206c, and 2206d, which are determined according to the division type information for the square first encoding unit 2200, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first encoding unit 2200 and the length of the long side of the non-square second encoding units 2202a, 2202b, 2204a, and 2204b are the same, the depth of the first encoding unit 2200 and the depths of the non-square second encoding units 2202a, 2202b, 2204a, and 2204b are equally D. On the other hand, when the image decoding apparatus 100 divides the first encoding unit 2200 into the four square second encoding units 2206a, 2206b, 2206c, and 2206d based on the division type information, since the length of one side of the square second encoding units 2206a, 2206b, 2206c, and 2206d is 1/2 the length of one side of the first encoding unit 2200, the depths of the second encoding units 2206a, 2206b, 2206c, and 2206d may be D+1, one depth lower than the depth D of the first encoding unit 2200.

According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 2210, whose height is longer than its width, in the horizontal direction according to the division type information into a plurality of second encoding units 2212a, 2212b, 2214a, 2214b, and 2214c. According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 2220, whose width is longer than its height, in the vertical direction according to the division type information into a plurality of second encoding units 2222a, 2222b, 2224a, 2224b, and 2224c.

According to an embodiment, the depths of the second encoding units 2212a, 2212b, 2214a, 2214b, 2214c, 2222a, 2222b, 2224a, 2224b, and 2224c, which are determined according to the division type information for the non-square first encoding unit 2210 or 2220, may be determined based on the lengths of their long sides. For example, since the length of one side of the square second encoding units 2212a and 2212b is 1/2 the length of one side of the non-square first encoding unit 2210 whose height is longer than its width, the depth of the square second encoding units 2212a and 2212b is D+1, one depth lower than the depth D of the non-square first encoding unit 2210.

Furthermore, the image decoding apparatus 100 may divide the non-square first encoding unit 2210 into an odd number of second encoding units 2214a, 2214b, and 2214c based on the division type information. The odd number of second encoding units 2214a, 2214b, and 2214c may include the non-square second encoding units 2214a and 2214c and the square second encoding unit 2214b. In this case, since the length of the long side of the non-square second encoding units 2214a and 2214c and the length of one side of the square second encoding unit 2214b are 1/2 the length of one side of the first encoding unit 2210, the depths of the second encoding units 2214a, 2214b, and 2214c may be D+1, one depth lower than the depth D of the first encoding unit 2210. The image decoding apparatus 100 may determine the depths of encoding units associated with the non-square first encoding unit 2220, whose width is longer than its height, in a manner corresponding to the above method of determining the depths of encoding units associated with the first encoding unit 2210.

According to an exemplary embodiment, the image decoding apparatus 100 may determine an index (PID) for distinguishing the divided encoding units. If the odd number of divided encoding units are not the same size, the index may be determined based on the size ratio between the encoding units. Referring to FIG. 22, the encoding unit 2214b positioned at the center among the odd number of encoding units 2214a, 2214b, and 2214c has the same width as the other encoding units 2214a and 2214c, but its height may be twice the height of the other encoding units 2214a and 2214c. That is, in this case, the encoding unit 2214b located at the center may count as two of the other encoding units 2214a and 2214c. Therefore, if the index (PID) of the encoding unit 2214b located at the center is 1 according to the scanning order, the index of the encoding unit 2214c positioned next may be 3, increased by 2. That is, there may be a discontinuity in the index values. According to an exemplary embodiment, the image decoding apparatus 100 may determine whether the odd number of divided encoding units are not the same size based on the presence or absence of such a discontinuity in the index for distinguishing the divided encoding units.
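
The index discontinuity described above can be illustrated with a small, purely informal Python sketch; the helper names and the PID assignment are assumptions for illustration, not the claimed syntax.

def pids_from_heights(heights):
    # Assign PIDs in scan order, weighting each unit by its size ratio
    # relative to the smallest unit (a taller unit counts as more than one).
    unit = min(heights)
    pids, next_pid = [], 0
    for h in heights:
        pids.append(next_pid)
        next_pid += h // unit
    return pids

def has_discontinuity(pids):
    return any(b - a > 1 for a, b in zip(pids, pids[1:]))

# Three units like 2214a/2214b/2214c with the middle one twice as tall: PIDs 0, 1, 3.
print(pids_from_heights([16, 32, 16]))   # [0, 1, 3]
print(has_discontinuity([0, 1, 3]))      # True  -> unequal-size split
print(has_discontinuity([0, 1, 2]))      # False -> equal-size split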

According to an exemplary embodiment, the image decoding apparatus 100 may determine whether the current encoding unit is divided into a specific division form based on the index values for distinguishing the plurality of encoding units divided from the current encoding unit. Referring to FIG. 22, the image decoding apparatus 100 may determine an even number of encoding units 2212a and 2212b or an odd number of encoding units 2214a, 2214b, and 2214c by dividing the rectangular first encoding unit 2210 whose height is longer than its width. The image decoding apparatus 100 may use an index (PID) indicating each encoding unit in order to distinguish each of the plurality of encoding units. According to an embodiment, the PID may be obtained from a sample at a predetermined position of each encoding unit (e.g., the upper left sample).

According to an embodiment, the image decoding apparatus 100 may determine an encoding unit at a predetermined position among the divided encoding units by using the indexes for distinguishing the encoding units. According to an exemplary embodiment, when the division type information for the rectangular first encoding unit 2210, whose height is longer than its width, indicates division into three encoding units, the image decoding apparatus 100 may divide the first encoding unit 2210 into the three encoding units 2214a, 2214b, and 2214c. The image decoding apparatus 100 may assign an index to each of the three encoding units 2214a, 2214b, and 2214c. The image decoding apparatus 100 may compare the indexes of the respective encoding units in order to determine the middle encoding unit among the encoding units divided into an odd number. The image decoding apparatus 100 may determine the encoding unit 2214b, which has the index corresponding to the middle value among the indexes, as the encoding unit at the middle position among the encoding units determined by dividing the first encoding unit 2210. According to an exemplary embodiment, when determining the indexes for distinguishing the divided encoding units, the image decoding apparatus 100 may determine the indexes based on the size ratio between the encoding units if the encoding units are not the same size. Referring to FIG. 22, the encoding unit 2214b generated by dividing the first encoding unit 2210 has the same width as the other encoding units 2214a and 2214c, but its height may be twice the height of the other encoding units 2214a and 2214c. In this case, if the index (PID) of the encoding unit 2214b positioned at the center is 1, the index of the encoding unit 2214c positioned next may be 3, increased by 2. In this case, when the index increases uniformly and then the increment becomes larger, the image decoding apparatus 100 may determine that the current encoding unit is divided into a plurality of encoding units including an encoding unit whose size differs from the other encoding units. According to an embodiment, when the division type information indicates division into an odd number of encoding units, the image decoding apparatus 100 may divide the current encoding unit in a form in which the encoding unit at a predetermined position among the odd number of encoding units (for example, the middle encoding unit) has a size different from the other encoding units. In this case, the image decoding apparatus 100 may determine the differently sized middle encoding unit by using the indexes (PIDs) of the encoding units. However, the indexes and the size or position of the encoding unit at the predetermined position described above are specific examples for explaining an embodiment and should not be construed as being limited thereto; various indexes and various positions and sizes of encoding units may be used.
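
Continuing the informal illustration, selecting the encoding unit at the middle position by its index might look like the following sketch; the names and the (unit, PID) pairing are hypothetical.

def middle_unit(units_with_pids):
    # Pick the unit whose PID equals the middle value of the assigned PIDs.
    pids = sorted(pid for _, pid in units_with_pids)
    mid = pids[len(pids) // 2]
    return next(u for u, pid in units_with_pids if pid == mid)

units = [("2214a", 0), ("2214b", 1), ("2214c", 3)]
print(middle_unit(units))   # '2214b'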

According to an exemplary embodiment, the image decoding apparatus 100 may use a predetermined data unit in which a recursive division of an encoding unit starts.

FIG. 23 illustrates that a plurality of coding units may be determined according to a plurality of predetermined data units included in a picture, according to an embodiment.

According to an exemplary embodiment, a predetermined data unit may be defined as a data unit in which an encoding unit starts to be recursively segmented using at least one of block type information and partition type information. That is, it may correspond to a coding unit of the highest depth used in the process of determining a plurality of coding units for dividing the current picture. Hereinafter, such a predetermined data unit is referred to as a reference data unit for convenience of explanation.

According to an embodiment, the reference data unit may have a predetermined size and shape. According to an embodiment, the reference encoding unit may include MxN samples. Here, M and N may be equal to each other, and may each be an integer expressed as a power of 2. That is, the reference data unit may have a square or non-square shape, and may later be divided into an integer number of encoding units.

According to an embodiment, the image decoding apparatus 100 may divide the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may divide each of the plurality of reference data units obtained by dividing the current picture by using the division information for each reference data unit. The division process of the reference data unit may correspond to a division process using a quad-tree structure.
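
The quad-tree division mentioned above can be sketched informally as follows; the split-decision callback, sizes, and square-unit assumption are illustrative only and do not reflect the actual bitstream syntax.

def quadtree_split(x, y, size, min_size, should_split):
    # Return the unit rectangles produced by recursive quad splits of a square area.
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size, size)]
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            units += quadtree_split(x + dx, y + dy, half, min_size, should_split)
    return units

# Example: split a 64x64 area once into four 32x32 units, then stop.
split_once = lambda x, y, size: (x, y) == (0, 0) and size == 64
print(quadtree_split(0, 0, 64, 16, split_once))
# [(0, 0, 32, 32), (32, 0, 32, 32), (0, 32, 32, 32), (32, 32, 32, 32)]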

According to an exemplary embodiment, the image decoding apparatus 100 may determine in advance the minimum size that the reference data unit included in the current picture can have. Accordingly, the image decoding apparatus 100 may determine reference data units of various sizes equal to or larger than the minimum size, and may determine at least one encoding unit based on the determined reference data unit by using the block type information and the division type information.

Referring to FIG. 23, the image decoding apparatus 100 may use a square reference encoding unit 2300 or a non-square reference encoding unit 2302. According to an exemplary embodiment, the shape and size of the reference encoding unit may be determined for each of various data units that may include at least one reference encoding unit (e.g., a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like).

According to an embodiment, the receiving unit 110 of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information on the shape of the reference encoding unit and information on the size of the reference encoding unit for each of the various data units. Since the process of determining at least one encoding unit included in the square reference encoding unit 2300 has been described above through the process of dividing the current encoding unit 1100 of FIG. 11, and the process of determining at least one encoding unit included in the non-square reference encoding unit 2302 has been described above through the process of dividing the current encoding unit 1200 or 1250 of FIG. 12, a detailed description thereof will be omitted.

In order to determine the size and shape of the reference encoding unit for each data unit satisfying a predetermined condition, the image decoding apparatus 100 may use an index for identifying the size and shape of the reference encoding unit. That is, the receiving unit 110 may obtain, from the bitstream, only the index for identifying the size and shape of the reference encoding unit for each slice, slice segment, maximum encoding unit, and the like, that is, for each data unit satisfying a predetermined condition (for example, a data unit having a size equal to or smaller than a slice) among the various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like). The image decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition by using the index. When the information on the shape of the reference encoding unit and the information on the size of the reference encoding unit are obtained from the bitstream and used for each relatively small data unit, the use efficiency of the bitstream may be poor; therefore, instead of directly obtaining the information on the shape and size of the reference encoding unit, only the index may be obtained and used. In this case, at least one of the size and the shape of the reference encoding unit corresponding to the index may be predetermined. That is, by selecting the predetermined size and shape of the reference encoding unit according to the index, the image decoding apparatus 100 may determine at least one of the size and the shape of the reference encoding unit included in the data unit that serves as the basis for obtaining the index.
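
As a purely illustrative sketch of the index-based signalling described above, a decoder could map a small index to a pre-agreed reference encoding unit configuration; the table contents below are assumptions, not values defined by the disclosure.

# Hypothetical lookup table: index -> (shape, width, height) of the reference
# encoding unit, agreed in advance instead of being sent explicitly per data unit.
REF_UNIT_BY_INDEX = {
    0: ("SQUARE", 64, 64),
    1: ("SQUARE", 32, 32),
    2: ("NS_VER", 32, 64),
    3: ("NS_HOR", 64, 32),
}

def reference_unit_for(index):
    return REF_UNIT_BY_INDEX[index]

print(reference_unit_for(2))   # ('NS_VER', 32, 64)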

According to an exemplary embodiment, the image decoding apparatus 100 may use at least one reference encoding unit included in one maximum encoding unit. That is, the maximum encoding unit into which an image is divided may include at least one reference encoding unit, and encoding units may be determined through a recursive division process of each reference encoding unit. According to an exemplary embodiment, at least one of the width and the height of the maximum encoding unit may be an integer multiple of at least one of the width and the height of the reference encoding unit. According to an exemplary embodiment, the size of the reference encoding unit may be the size obtained by dividing the maximum encoding unit n times according to a quad-tree structure. That is, the image decoding apparatus 100 may determine the reference encoding unit by dividing the maximum encoding unit n times according to the quad-tree structure, and, according to various embodiments, may divide the reference encoding unit based on at least one of the block type information and the division type information.
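
The size relation stated above (the reference encoding unit obtained by dividing the maximum encoding unit n times according to the quad-tree structure) reduces to a power-of-two division, as in this minimal sketch with illustrative sizes:

def reference_unit_side(max_unit_side, n):
    # Quad-tree depth n halves each side n times, i.e. divides by 2**n.
    return max_unit_side >> n

assert reference_unit_side(128, 2) == 32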

FIG. 24 shows a processing block serving as a criterion for determining a determination order of reference encoding units included in the picture 2400, according to an embodiment.

According to an embodiment, the image decoding apparatus 100 may determine at least one processing block for dividing a picture. A processing block is a data unit including at least one reference encoding unit for dividing an image, and the at least one reference encoding unit included in the processing block may be determined in a specific order. That is, the determination order of the at least one reference encoding unit determined in each processing block may correspond to one of various kinds of orders in which reference encoding units can be determined, and the reference encoding unit determination order determined in each processing block may be different for each processing block. The determination order of reference encoding units determined for each processing block may be one of various orders such as a raster scan, a Z scan, an N scan, an up-right diagonal scan, a horizontal scan, and a vertical scan, but the determinable orders should not be construed as being limited to these scan orders.

According to an embodiment, the image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information on the size of the processing block. The image decoding apparatus 100 may obtain the information on the size of the processing block from the bitstream. The size of such a processing block may be a predetermined size of a data unit indicated by the information on the size of the processing block.

According to an exemplary embodiment, the receiving unit 110 of the image decoding apparatus 100 may obtain the information on the size of the processing block from the bitstream for each specific data unit. For example, the information on the size of the processing block may be obtained from the bitstream in data units such as an image, a sequence, a picture, a slice, and a slice segment. That is, the receiving unit 110 may obtain the information on the size of the processing block from the bitstream for each of the plurality of data units, and the image decoding apparatus 100 may determine the size of at least one processing block into which the picture is divided by using the obtained information on the size of the processing block; the size of the processing block may be an integer multiple of the size of the reference encoding unit.

According to an embodiment, the image decoding apparatus 100 may determine the sizes of the processing blocks 2402 and 2412 included in the picture 2400. For example, the image decoding apparatus 100 may determine the size of the processing blocks based on the information on the size of the processing block obtained from the bitstream. Referring to FIG. 24, according to an exemplary embodiment, the image decoding apparatus 100 may determine the horizontal size of the processing blocks 2402 and 2412 to be four times the horizontal size of the reference encoding unit and the vertical size to be four times the vertical size of the reference encoding unit. The image decoding apparatus 100 may determine an order in which at least one reference encoding unit is determined in the at least one processing block.
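
For example, under the stated assumption that each dimension of a processing block is an integer multiple of the corresponding dimension of the reference encoding unit, the number of reference encoding units per processing block follows directly; the sizes in this sketch are illustrative.

def units_per_processing_block(pb_w, pb_h, ref_w, ref_h):
    # Integer-multiple relation, as stated in the description above.
    assert pb_w % ref_w == 0 and pb_h % ref_h == 0
    return (pb_w // ref_w) * (pb_h // ref_h)

print(units_per_processing_block(4 * 16, 4 * 16, 16, 16))   # 16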

According to an embodiment, the image decoding apparatus 100 may determine each of the processing blocks 2402 and 2412 included in the picture 2400 based on the size of the processing block, and may determine the determination order of at least one reference encoding unit included in the processing blocks 2402 and 2412. Determining a reference encoding unit may include determining the size of the reference encoding unit, according to an embodiment.

According to an exemplary embodiment, the image decoding apparatus 100 may obtain, from the bitstream, information on the determination order of at least one reference encoding unit included in at least one processing block, and may determine the order in which the at least one reference encoding unit is determined based on the obtained information on the determination order. The information on the determination order may be defined as an order or direction in which reference encoding units are determined in the processing block. That is, the order in which reference encoding units are determined may be independently determined for each processing block.

According to one embodiment, the image decoding apparatus 100 may obtain information on a determination order of a reference encoding unit from a bitstream for each specific data unit. For example, the receiving unit 110 may acquire information on a determination order of a reference encoding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information on the determination order of the reference encoding unit indicates the reference encoding unit determination order in the processing block, the information on the determination order can be obtained for each specific data unit including an integer number of processing blocks.

The image decoding apparatus 100 may determine at least one reference encoding unit based on the determined order according to an embodiment.

According to an exemplary embodiment, the receiving unit 110 may obtain, from the bitstream, the information on the reference encoding unit determination order as information related to the processing blocks 2402 and 2412, and the image decoding apparatus 100 may determine the determination order of at least one reference encoding unit included in the processing blocks 2402 and 2412 and determine at least one reference encoding unit included in the picture 2400 according to the determination order. Referring to FIG. 24, the image decoding apparatus 100 may determine determination orders 2404 and 2414 of at least one reference encoding unit associated with the processing blocks 2402 and 2412, respectively. For example, when the information on the reference encoding unit determination order is obtained for each processing block, the reference encoding unit determination order associated with each of the processing blocks 2402 and 2412 may be different for each processing block. If the reference encoding unit determination order 2404 related to the processing block 2402 is a raster scan order, the reference encoding units included in the processing block 2402 may be determined according to the raster scan order. On the other hand, if the reference encoding unit determination order 2414 related to the other processing block 2412 is the reverse of the raster scan order, the reference encoding units included in the processing block 2412 may be determined according to the reverse of the raster scan order.
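
The per-processing-block determination orders described above (raster scan for processing block 2402, reverse raster scan for processing block 2412) can be sketched informally as follows; the 4x4 grid size is an illustrative assumption.

def raster_order(cols, rows):
    # Left-to-right, top-to-bottom positions of reference units in a processing block.
    return [(r, c) for r in range(rows) for c in range(cols)]

def reverse_raster_order(cols, rows):
    return list(reversed(raster_order(cols, rows)))

print(raster_order(4, 4)[:3])          # [(0, 0), (0, 1), (0, 2)]
print(reverse_raster_order(4, 4)[:3])  # [(3, 3), (3, 2), (3, 1)]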

The image decoding apparatus 100 may decode the determined at least one reference encoding unit according to an embodiment. The image decoding apparatus 100 can decode an image based on the reference encoding unit determined through the above-described embodiment. The method of decoding the reference encoding unit may include various methods of decoding the image.

According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, block type information indicating the shape of the current encoding unit or division type information indicating a method of dividing the current encoding unit. The block type information or the division type information may be included in the bitstream in relation to various data units. For example, the image decoding apparatus 100 may use block type information or division type information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header. Furthermore, the image decoding apparatus 100 may obtain, from the bitstream, a syntax corresponding to the block type information or the division type information for each maximum encoding unit, reference encoding unit, and processing block.

Various embodiments have been described above. It will be understood by those skilled in the art that the present disclosure may be embodied in various forms without departing from the essential characteristics of the present disclosure. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present disclosure is set forth in the appended claims rather than the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present disclosure.

Meanwhile, the above-described embodiments of the present disclosure can be written as a program executable on a computer and implemented in a general-purpose digital computer that operates the program by using a computer-readable recording medium. The computer-readable recording medium includes storage media such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical reading media (e.g., CD-ROM and the like).

Claims (14)

Obtaining a prediction mode of a current block included in the current image from the bitstream;
Determining a collocated block corresponding to a position of a current block in a reference image adjacent to a current image if the prediction mode of the current block is intra prediction;
Obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block;
Performing intra prediction on a current block based on the reference sample to obtain a predictor; And
And reconstructing the current block based on the predictor.
The method according to claim 1,
Wherein obtaining the reference sample comprises:
Obtaining a first flag from the bitstream; If the first flag indicates to determine a reference sample in the current block, obtaining a reference sample based on the upper and left samples adjacent to the current block; And
Obtaining a reference sample based on at least one of a boundary sample positioned within the collocated block and a sample adjacent to the collocated block if the first flag indicates to determine a reference sample in the collocated block.
3. The method of claim 2,
Wherein obtaining a reference sample based on at least one of a boundary sample located within the collocated block and a sample adjacent to the collocated block comprises:
And obtaining the reference samples based on the right and bottom boundary samples in the collocated block.
The method according to claim 1,
Wherein obtaining the reference sample comprises:
Obtaining a second flag from the bitstream;
Acquiring a reference sample based on at least one of right and lower samples adjacent to the current block or right and lower boundary samples in the collocated block when the second flag indicates use of samples located on the right and lower sides; And
Acquiring a reference sample based on at least one of left and upper samples adjacent to the current block or left and upper samples adjacent to the collocated block when the second flag indicates use of samples located on the left and upper sides.
The method according to claim 1,
Wherein obtaining the reference sample comprises:
Acquiring a reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block when the current block is located on the upper left side of the current image.
The method according to claim 1,
Wherein obtaining the reference sample comprises:
And obtaining a reference sample based on left and upper boundary samples in the collocated block when the current block is located on the left upper side of the current image.
An image decoding apparatus comprising:
A receiver for receiving a bitstream; And
A decoding unit configured to obtain a prediction mode of a current block included in a current image from the bitstream, determine a collocated block corresponding to a position of the current block in a reference image adjacent to the current image if the prediction mode of the current block is intra prediction, obtain a reference sample based on at least one of a sample adjacent to the current block, a boundary sample in the collocated block, and a sample adjacent to the collocated block, perform intra prediction on the current block based on the reference sample to obtain a predictor, and reconstruct the current block based on the predictor.
8. The image decoding apparatus of claim 7,
The decoding unit
Acquires a first flag from the bitstream, acquires a reference sample based on the upper and left samples adjacent to the current block if the first flag indicates to determine a reference sample in the current block, and acquires a reference sample based on at least one of a boundary sample located in the collocated block and a sample adjacent to the collocated block if the first flag indicates to determine a reference sample in the collocated block.
9. The image decoding apparatus of claim 8,
Wherein the decoding unit comprises:
And obtains a reference sample based on the right and lower boundary samples in the collocated block.
10. The image decoding apparatus of claim 7,
Wherein the decoding unit comprises:
Obtains a reference sample based on at least one of right and lower samples adjacent to the current block or right and lower boundary samples in the collocated block when the second flag indicates use of samples located on the right and lower sides, and obtains a reference sample based on at least one of left and upper samples adjacent to the current block or left and upper samples adjacent to the collocated block when the second flag indicates use of samples located on the left and upper sides.
11. The image decoding apparatus of claim 7,
Wherein the decoding unit comprises:
And obtains a reference sample based on at least one of a boundary sample in the collocated block and a sample adjacent to the collocated block when the current block is located on the left upper side of the current image.
12. The image decoding apparatus of claim 7,
Wherein the decoding unit comprises:
And obtains a reference sample based on left and upper boundary samples in the collocated block when the current block is located on the left upper side of the current image.
Determining a collocated block corresponding to a position of a current block in a reference image adjacent to the current image;
Obtaining a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block;
Performing intra prediction on a current block based on the reference sample to obtain a predictor; And
Encoding the current block based on the predictor to obtain encoding information of the current block; And
And generating the encoded information as a bitstream.
A video encoding apparatus comprising:
An encoding unit configured to determine a collocated block corresponding to a position of a current block in a reference image adjacent to the current image, obtain a reference sample based on at least one of a sample adjacent to the current block, a boundary sample within the collocated block, and a sample adjacent to the collocated block, perform intra prediction on the current block based on the reference sample to obtain a predictor, and encode the current block based on the predictor to obtain encoding information of the current block; And
And a bitstream generator for generating the encoded information as a bitstream.
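
As a purely informal illustration of the reference-sample idea recited in the claims above, and not the claimed or normative process, the following sketch combines samples adjacent to a current block with the right and bottom boundary samples of a collocated block using simple DC-style averaging; all sample values and the averaging rule are assumptions.

def dc_predictor(cur_top, cur_left, collocated):
    right_col  = [row[-1] for row in collocated]   # right boundary samples of the collocated block
    bottom_row = list(collocated[-1])              # bottom boundary samples of the collocated block
    refs = cur_top + cur_left + right_col + bottom_row
    return round(sum(refs) / len(refs))

cur_top  = [100, 102, 101, 99]                     # samples above the current block
cur_left = [ 98, 100, 103, 101]                    # samples left of the current block
collocated = [[110] * 4 for _ in range(4)]         # collocated block in the reference image

dc = dc_predictor(cur_top, cur_left, collocated)
predictor_block = [[dc] * 4 for _ in range(4)]
print(dc)   # 105
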
KR1020187011825A 2015-11-24 2016-11-22 Video encoding method and apparatus, video decoding method and apparatus KR20180075517A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201562259287P 2015-11-24 2015-11-24
US62/259,287 2015-11-24
PCT/KR2016/013485 WO2017090957A1 (en) 2015-11-24 2016-11-22 Video encoding method and apparatus, and video decoding method and apparatus

Publications (1)

Publication Number Publication Date
KR20180075517A true KR20180075517A (en) 2018-07-04

Family

ID=58764180

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020187011825A KR20180075517A (en) 2015-11-24 2016-11-22 Video encoding method and apparatus, video decoding method and apparatus

Country Status (2)

Country Link
KR (1) KR20180075517A (en)
WO (1) WO2017090957A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115484458A (en) * 2017-10-20 2022-12-16 韩国电子通信研究院 Image encoding method, image decoding method, and recording medium storing bit stream
US20190268611A1 (en) * 2018-02-26 2019-08-29 Mediatek Inc. Intelligent Mode Assignment In Video Coding
CN111770337B (en) * 2019-03-30 2022-08-19 华为技术有限公司 Video encoding method, video decoding method and related equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3471412B8 (en) * 2011-04-25 2020-12-23 LG Electronics Inc. Intra-prediction method for video decoding and video encoding
CN108282659B (en) * 2011-06-28 2022-02-25 三星电子株式会社 Method and apparatus for image encoding and decoding using intra prediction
MX2014000159A (en) * 2011-07-02 2014-02-19 Samsung Electronics Co Ltd Sas-based semiconductor storage device memory disk unit.
WO2014030920A1 (en) * 2012-08-21 2014-02-27 삼성전자 주식회사 Inter-layer video coding method and device for predictive information based on tree structure coding unit, and inter-layer video decoding method and device for predictive information based on tree structure coding unit
KR101449686B1 (en) * 2014-07-17 2014-10-15 에스케이텔레콤 주식회사 Video Encoding/Decoding Method and Apparatus by Efficiently Processing Intra Prediction Mode

Also Published As

Publication number Publication date
WO2017090957A1 (en) 2017-06-01

Similar Documents

Publication Publication Date Title
US10856006B2 (en) Method and system using overlapped search space for bi-predictive motion vector refinement
CN109845253B (en) Method for decoding and encoding two-dimensional video
KR20180085714A (en) Video decoding method and video decoding apparatus using merge candidate list
US8630351B2 (en) Method and apparatus for encoding and decoding motion vector
KR20180075518A (en) Video encoding method and apparatus, video decoding method and apparatus
KR20190038910A (en) Method and Apparatus for Encoding or Decoding Coding Units of a Picture Outline
CN112385213B (en) Method for processing image based on inter prediction mode and apparatus for the same
KR20180107153A (en) Image coding method and apparatus, image decoding method and apparatus
KR102490854B1 (en) Image decoding method and apparatus based on affine motion prediction in image coding system
KR20190020161A (en) Method and apparatus for encoding or decoding luma blocks and chroma blocks
KR20180086203A (en) METHOD AND APPARATUS FOR ENCODING / DECODING IMAGE
KR20180107082A (en) Video encoding method and apparatus, decoding method and apparatus thereof
KR102283545B1 (en) Method and apparatus for encoding or decoding image by using block map
KR20180075517A (en) Video encoding method and apparatus, video decoding method and apparatus
KR20180067598A (en) METHOD AND APPARATUS FOR ENCODING / DECODING IMAGE
KR20180104603A (en) Video encoding method and apparatus, video decoding method and apparatus
KR102434479B1 (en) Video encoding/decoding method and apparatus therefor
KR20230040295A (en) Method of encoding/decoding a video signal, and recording medium storing a bitstream
KR20230100677A (en) Method of encoding/decoding a video signal, and recording medium stroing a bitstream
KR20230040296A (en) Method of encoding/decoding a video signa, and recording medium stroing a bitstream
KR20180075484A (en) METHOD AND APPARATUS FOR ENCODING / DECODING IMAGE