WO2017131473A1 - Method and device for encoding and decoding a video using prediction - Google Patents

Info

Publication number
WO2017131473A1
WO2017131473A1 (PCT/KR2017/000965)
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
block
reconstructed
unit
current block
Prior art date
Application number
PCT/KR2017/000965
Other languages
English (en)
Korean (ko)
Inventor
임성창
고현석
강정원
이진호
전동산
김휘용
전병우
김남욱
전승수
최진수
Original Assignee
한국전자통신연구원
성균관대학교산학협력단
Priority date
Filing date
Publication date
Priority claimed from KR1020170012160A (KR20170089777A)
Application filed by 한국전자통신연구원, 성균관대학교산학협력단
Priority to CN202211613933.9A
Priority to CN201780009090.2A
Priority to CN202211613078.1A
Priority to CN202211615144.9A
Priority to CN202211613690.9A
Priority to CN202211613836.XA
Priority to CN202211613450.9A
Priority to CN202211613703.2A
Publication of WO2017131473A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/567 Motion estimation based on rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Definitions

  • the following embodiments relate to a video decoding method, a decoding device, an encoding method, and an encoding device, and more particularly, to a method and an apparatus for encoding and decoding a video using prediction on a target block.
  • Demand is increasing for Ultra High Definition (UHD) TVs, which have four times the resolution of Full HD (FHD) TVs, as well as for High Definition TVs (HDTV) and FHD TVs.
  • An image encoding/decoding apparatus and method may use an inter prediction technique, an intra prediction technique, an entropy encoding technique, and the like in order to encode/decode high-resolution, high-quality images.
  • the inter prediction technique may be a technique for predicting a value of a pixel included in a current picture using a temporally previous picture and / or temporally following picture.
  • An intra prediction technique may be a technique of predicting a value of a pixel included in a current picture by using information of a pixel in a current picture.
  • the entropy encoding technique may be a technique of allocating a short code to a symbol having a high appearance frequency and a long code to a symbol having a low appearance frequency.
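To make the short-code/long-code idea concrete, here is a minimal (non-patent) Python sketch using Huffman coding, a classic variable-length code in which frequent symbols receive shorter codes:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Assign short codes to frequent symbols, long codes to rare ones."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate single-symbol case
        return {s: "0" for s in freq}
    # heap entries: (total frequency, tie-breaker, {symbol: partial code})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:                      # repeatedly merge the two rarest subtrees
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code("aaaaabbbcc")
assert len(codes["a"]) < len(codes["c"])      # frequent 'a' gets the shorter code
```

Real codecs use arithmetic/context-adaptive coding (e.g. CABAC) rather than a static Huffman table, but the frequency-versus-length trade-off is the same.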
  • prediction may mean generating a prediction signal similar to the original signal.
  • Predictions can be broadly classified into predictions referring to spatially reconstructed images, predictions referring to temporal reconstructed images, and predictions for other symbols.
  • a temporal reference may refer to a temporally reconstructed image
  • a spatial reference may refer to a spatially reconstructed image.
  • the current block may be a block that is currently subjected to encoding or decoding.
  • the current block may be named a target block or a target unit.
  • the current block may be called an encoding target block or an encoding target unit.
  • the current block may be called a decoding target block or a decoding target unit.
  • Inter prediction may be a technique for predicting the current block using temporal and spatial references.
  • Intra prediction may be a technique for predicting the current block using only spatial references.
  • the image encoding/decoding technique may encode the current block using one prediction mode among a plurality of prediction modes when performing intra prediction, in order to reduce spatial redundancy.
  • HEVC High Efficiency Video Coding
  • the encoding apparatus generates prediction blocks for the current block using all available prediction modes among the plurality of prediction modes, and selects the one prediction mode that produces the best result as the prediction mode for encoding the current block. However, even with the prediction mode that produces the best result, an error remains between the original block and the prediction block. This error is represented as a residual block.
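The mode search described above can be sketched as follows. This is a toy illustration with only two hypothetical modes ("vertical" and "horizontal") and a sum-of-absolute-differences criterion, not the patent's actual procedure; real codecs such as HEVC use many more angular modes and a rate-distortion cost:

```python
import numpy as np

def predict(row_above, col_left, mode, n):
    if mode == "vertical":                          # copy the row above downwards
        return np.tile(row_above, (n, 1))
    return np.tile(col_left.reshape(n, 1), (1, n))  # copy left column rightwards

def best_mode(original, row_above, col_left):
    best = None
    for mode in ("vertical", "horizontal"):
        pred = predict(row_above, col_left, mode, original.shape[0])
        sad = int(np.abs(original - pred).sum())    # sum of absolute differences
        if best is None or sad < best[0]:
            best = (sad, mode, original - pred)     # residual = original - prediction
    return best

row = np.array([10, 20, 30, 40])
col = np.array([10, 11, 12, 13])
original = np.tile(row, (4, 1))                     # vertically uniform content
sad, mode, residual = best_mode(original, row, col)
assert mode == "vertical" and sad == 0 and not residual.any()
```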
  • One embodiment may provide a method and apparatus for reducing a prediction error of a prediction unit generated by the prediction.
  • One embodiment may provide a method and apparatus for reducing a prediction error caused by a prediction direction of a prediction mode.
  • An embodiment may provide a method and apparatus for performing a second prediction selectively in a direction of a first prediction mode with respect to a prediction error according to the first prediction.
  • An embodiment may provide a method and apparatus for performing a selective second prediction on a part of a prediction error according to a first prediction.
  • An embodiment may provide a method and apparatus for performing a selective second prediction on a portion with a severe distortion among prediction errors according to the first prediction.
  • One embodiment may provide a method and apparatus for preventing the additional prediction error that may occur when the second prediction is performed on the entirety of the prediction error, by performing the second prediction selectively on only a part of the prediction error according to the first prediction.
  • One embodiment can provide a method and apparatus for obtaining sufficient spatial redundancy reduction through transform and quantization by reducing large prediction errors generated along the prediction direction.
  • In one aspect, a decoding apparatus is provided, including: a reconstructed residual signal generator for generating a reconstructed residual signal for the current block; and a reconstructed block generator for generating a reconstructed block for the current block based on the reconstructed residual signal, the second prediction, and the first prediction.
  • In another aspect, a decoding method is provided, including: generating a reconstructed residual signal for a current block; and generating a reconstructed block for the current block based on the reconstructed residual signal, the second prediction, and the first prediction.
  • the reconstructed block may correspond to the sum of the reconstructed residual signal, the first prediction signal generated by the first prediction, and the second prediction signal generated by the second prediction.
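A quick numeric sanity check of this summation, assuming no quantization loss (all values and the form of the two stand-in predictions are hypothetical):

```python
import numpy as np

# If the encoder codes residual = original - p1 - p2, the decoder that sums
# residual + p1 + p2 recovers the block exactly (no quantization assumed here).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (4, 4))
p1 = original - rng.integers(-8, 9, (4, 4))    # stand-in first (intra) prediction
p2 = (original - p1) // 2                      # stand-in second prediction of the residual
residual = original - p1 - p2                  # signal that would be transmitted
reconstructed = residual + p1 + p2
assert (reconstructed == original).all()
```

With quantization, `residual` would be lossy and `reconstructed` only approximately equal to `original`.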
  • the first prediction and the second prediction may be intra prediction.
  • the prediction direction of the second prediction may be the same as the first prediction direction.
  • At least some of the plurality of first reference blocks of the first prediction and the plurality of second reference blocks of the second prediction may be different from each other.
  • the type of the first reference block of the first prediction and the type of the second reference block of the second prediction may be different from each other.
  • the first reference block of the first prediction may be a reconstructed neighboring block neighboring the current block.
  • the second reference block of the second prediction may be a reconstructed neighbor residual block neighboring the current block.
  • the reconstructed neighboring residual block may be a difference between the reconstructed neighboring block and the first prediction block of the reconstructed neighboring block.
  • the area of the reference sample of the second prediction may be part of the area of the reference sample of the first prediction.
  • the region to which the second prediction is to be applied may be specified based on a range of reference samples used for the second prediction.
  • An area to which the second prediction is to be applied may be specified based on a prediction direction of the second prediction.
  • the range of reference samples may be specified by information indicating a first reference sample to be used for the second prediction and information indicating the number of reference samples to be used for the second prediction.
  • the region to which the second prediction is to be applied may be a region generated as the reference samples of the range proceed in the prediction direction of the second prediction.
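One possible reading of this sweep, sketched for a vertical prediction direction only; the helper name and parameters below are invented for illustration and are not from the patent:

```python
import numpy as np

def second_pred_region(n, first_ref, count):
    """Mask of the samples of an n x n block covered as reference samples
    [first_ref, first_ref + count) are swept straight down (vertical mode).
    Other prediction directions would sweep diagonally and are omitted."""
    mask = np.zeros((n, n), dtype=bool)
    mask[:, first_ref:first_ref + count] = True   # vertical sweep -> whole columns
    return mask

region = second_pred_region(4, first_ref=1, count=2)
assert region.sum() == 8                          # 2 columns x 4 rows
```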
  • When second prediction usage information, which indicates whether the second prediction is used in encoding the current block, indicates that the second prediction is not used, the second prediction may not be used.
  • When the current block is adjacent to a boundary, the second prediction may not be used. The boundary may be a boundary of a picture, a boundary of a slice, or a boundary of a tile.
  • Whether the second prediction is to be used may be determined based on the number of reconstructed blocks adjacent to the current block.
  • the reconstructed block may be generated based on the reconstructed residual signal and the first prediction signal by the first prediction.
  • a method and apparatus are provided for reducing a prediction error of a prediction unit generated by prediction.
  • a method and apparatus are provided for reducing a prediction error caused by a prediction direction of a prediction mode.
  • a method and apparatus are provided for performing selective second prediction in a direction of a first prediction mode with respect to a prediction error according to the first prediction.
  • a method and apparatus are provided for performing a selective second prediction on some of the prediction errors according to the first prediction.
  • a method and an apparatus for performing selective second prediction on a portion of a distortion that is severe among prediction errors according to a first prediction are provided.
  • a method and apparatus are provided for preventing additional prediction errors that may occur when performing a second prediction on the entirety of the prediction error by performing a second prediction selectively for only a part of the prediction errors according to the first prediction.
  • a method and apparatus are provided for obtaining sufficient spatial redundancy reduction through transform and quantization by reducing large prediction errors generated along the prediction direction.
  • FIG. 1 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a diagram schematically illustrating a division structure of an image when encoding and decoding an image.
  • FIG. 4 is a diagram illustrating forms of a prediction unit (PU) that a coding unit (CU) may include.
  • FIG. 5 is a diagram illustrating a form of a transform unit (TU) that may be included in a coding unit (CU).
  • TU transform unit
  • CU coding unit
  • FIG. 6 is a diagram for explaining an embodiment of an intra prediction process.
  • FIG. 7 is a diagram for describing a position of a reference sample used in an intra prediction process.
  • FIG. 8 is a diagram for explaining an embodiment of an inter prediction process.
  • FIG. 9 illustrates a prediction error when the original image is predicted in the vertical direction, according to an example.
  • FIG. 10 is a structural diagram of an encoding apparatus according to an embodiment.
  • FIG. 11 is a flowchart of an encoding method according to an embodiment.
  • FIG. 12 illustrates first prediction and second prediction using intra prediction, according to an embodiment.
  • FIG. 13 is a flowchart of a method of generating a residual block, according to an exemplary embodiment.
  • FIG. 14 is a flowchart of a method of generating a restored block, according to an example.
  • FIG. 16 is a structural diagram of a decoding apparatus according to an embodiment.
  • FIG. 17 is a flowchart of a decoding method according to an embodiment.
  • FIG. 18 is a flowchart of a method of generating a restored block, according to an exemplary embodiment.
  • FIG. 19 illustrates a decoding process according to an embodiment.
  • The terms "first" and "second" may be used to describe various components, but the components should not be limited by these terms. These terms are only used to distinguish one component from another.
  • For example, a first component may be referred to as a second component, and similarly, the second component may also be referred to as the first component.
  • Each component shown in the embodiments is shown independently in order to represent different characteristic functions; this does not mean that each component consists of only a separate hardware or software unit. That is, the components are listed separately for convenience of description. For example, at least two of the components may be combined into one component, and one component may be divided into a plurality of components. Integrated and separated embodiments of these components are also included in the scope of the present invention without departing from its essence.
  • components may not be essential components for performing essential functions, but may be optional components for improving performance.
  • Embodiments may be implemented including only components necessary to implement the nature of the embodiments, and structures including the optional components, such as, for example, components used only for performance improvement, are also included in the scope of rights.
  • an image may mean one picture constituting a video and may represent a video itself.
  • "encoding and / or decoding of an image” may mean “encoding and / or decoding of a video” and may mean “encoding and / or decoding of one of images constituting the video.” It may be.
  • The terms "video" and "motion picture" may have the same meaning and may be used interchangeably.
  • The terms "image" and "picture" may have the same meaning and may be used interchangeably.
  • each of the specified information, data, flags and elements, attributes, etc. may have a value.
  • the value "0" of information, data, flags and elements, attributes, etc. may represent a logical false or first predefined value. In other words, the value "0", logic false and the first predefined value can be used interchangeably.
  • the value "1" of information, data, flags and elements, attributes, etc. may represent logical true or second predefined values. In other words, the value "1", the logical true and the second predefined value can be used interchangeably.
  • a variable such as i or j may be used to indicate a row, a column, or an index; the value of i may be an integer of 0 or more, or an integer of 1 or more. In other words, in embodiments, rows, columns, indexes, etc. may be counted from zero or counted from one.
  • a “unit” may represent a unit of encoding and decoding of an image.
  • the meanings of the unit and the block may be the same.
  • the terms “unit” and “block” may be used interchangeably.
  • the unit may be an M ⁇ N array of samples.
  • M and N may each be a positive integer.
  • a unit may often mean a two-dimensional array of samples.
  • the sample may be a pixel or pixel value.
  • The terms "pixel" and "sample" may have the same meaning and may be used interchangeably.
  • a unit may be an area generated by division of one image.
  • One image may be divided into a plurality of units.
  • a predefined process for the unit may be performed according to the type of the unit.
  • the type of unit may be classified into a macro unit, a coding unit (CU), a prediction unit (PU), a transform unit (TU), and the like.
  • One unit may be further divided into subunits having a smaller size than the unit.
  • the unit division information may include information about the depth of the unit.
  • the depth information may indicate the number and / or degree of division of the unit.
  • One unit may be divided into a plurality of sub-units hierarchically with depth information based on a tree structure.
  • the unit and the lower unit generated by the division of the unit may correspond to the node and the child node of the node, respectively.
  • Each divided subunit may have depth information. Since the depth information of the unit indicates the number and / or degree of division of the unit, the division information of the lower unit may include information about the size of the lower unit.
  • the highest node may correspond to the first unit that is not split.
  • the highest node may be referred to as a root node.
  • the highest node may have a minimum depth value. At this time, the highest node may have a depth of level 0.
  • a node with a depth of level 1 may represent a unit created as the first unit is divided once.
  • a node with a depth of level 2 may represent a unit created as the first unit is split twice.
  • a node with a depth of level n may represent a unit generated as the first unit is divided n times.
  • the leaf node may be the lowest node or may be a node that cannot be further divided.
  • the depth of the leaf node may be at the maximum level.
  • the predefined value of the maximum level may be three.
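The depth/level bookkeeping above can be sketched as follows. Only the maximum level of 3 comes from the text; the minimum unit size of 8 is an assumed value for illustration:

```python
# Sketch of hierarchical unit division: each split level halves the side length,
# down to a maximum level (3, per the text) or an assumed minimum unit size.
def split_depths(unit_size, min_size=8, max_level=3):
    depths, size, level = [], unit_size, 0
    while size > min_size and level < max_level:
        size //= 2
        level += 1
        depths.append((level, size))
    return depths

# a 64x64 first unit (root node, depth 0) yields 32, 16 and 8 pixel sub-units
assert split_depths(64) == [(1, 32), (2, 16), (3, 8)]
```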
  • a transform unit may be a basic unit in residual signal coding and/or residual signal decoding, such as transform, inverse transform, quantization, inverse quantization, transform coefficient encoding, and transform coefficient decoding.
  • One transform unit may be divided into a plurality of transform units having a smaller size.
  • a prediction unit may be a basic unit in performing prediction or compensation.
  • the prediction unit may be divided into a number of partitions by partitioning. The partitions may also be basic units in performing prediction or compensation.
  • the partition generated by the partitioning of the prediction unit may also be the prediction unit.
  • the reconstructed neighbor unit may be a unit that has already been encoded or decoded around the encoding target unit or the decoding target unit.
  • the reconstructed neighbor unit may be a spatial neighbor unit or a temporal neighbor unit to the target unit.
  • Prediction unit partition may mean a form in which a prediction unit is divided.
  • a parameter set may correspond to header information among structures in the bitstream.
  • the parameter set may include a sequence parameter set, a picture parameter set, an adaptation parameter set, and the like.
  • Rate-distortion optimization: the encoding apparatus may use rate-distortion optimization over combinations of the size of the coding unit, the prediction mode, the size of the prediction unit, the motion information, and the size of the transform unit in order to provide high coding efficiency.
  • the rate-distortion optimization method can calculate the rate-distortion cost of each combination in order to select the optimal combination among the above combinations.
  • the rate-distortion cost may be calculated using Equation 1 below: J = D + λ × R (Equation 1)
  • a combination in which the rate-distortion cost is minimized may be selected as an optimal combination in the rate-distortion optimization scheme.
  • D may represent distortion.
  • D may be the mean of the squares of the difference values (mean square error) between the original transform coefficients and the reconstructed transform coefficients in the transform block.
  • R can represent the rate.
  • R may indicate a bit rate using the associated context information.
  • λ may represent a Lagrangian multiplier.
  • R may include not only encoding parameter information such as a prediction mode, motion information, and a coded block flag, but also bits generated by encoding of transform coefficients.
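Taking the standard Lagrangian form J = D + λ·R, the selection of the minimum-cost combination can be illustrated with hypothetical numbers:

```python
def rd_cost(distortion, rate_bits, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

candidates = [      # hypothetical (distortion, rate) pairs for mode combinations
    (100.0, 40),    # low distortion, many bits
    (180.0, 12),    # high distortion, few bits
    (140.0, 20),    # middle ground
]
lam = 4.0           # lambda trades bits against distortion
best = min(candidates, key=lambda c: rd_cost(c[0], c[1], lam))
assert best == (140.0, 20)    # costs: 260.0, 228.0, 220.0
```

Note how neither the lowest-distortion nor the lowest-rate candidate wins; the minimum of the combined cost does.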
  • the encoding apparatus performs processes such as inter prediction and/or intra prediction, transformation, quantization, entropy encoding, inverse quantization, and inverse transformation in order to calculate accurate D and R, which may greatly increase the complexity of the encoding apparatus.
  • the reference picture may be an image used for inter prediction or motion compensation.
  • the reference picture may be a picture including a reference unit referenced by the target unit for inter prediction or motion compensation.
  • the meanings of the picture and the image may be the same.
  • the terms "picture” and “image” may be used interchangeably.
  • the reference picture list may be a list including reference pictures used for inter prediction or motion compensation.
  • the type of the reference picture list may be List Combined (LC), List 0 (L0), List 1 (L1), and the like.
  • A motion vector (MV) may be a two-dimensional vector used in inter prediction.
  • An MV may be expressed in the form (mvx, mvy).
  • mvx may represent a horizontal component, and mvy may represent a vertical component.
  • the MV may indicate an offset between the target picture and the reference picture.
  • the search range may be a two-dimensional area in which a search for MV is performed during inter prediction.
  • the size of the search region may be M ⁇ N.
  • M and N may each be a positive integer.
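A full search over such an M × N search region can be sketched as follows (toy example; real encoders use faster search patterns and sub-pixel refinement):

```python
import numpy as np

def motion_search(cur_block, ref_frame, y0, x0, search=4):
    """Full search within +/-search pixels of (y0, x0); returns the (dx, dy)
    motion vector minimizing the sum of absolute differences (SAD)."""
    n = cur_block.shape[0]
    best_mv, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue                         # candidate falls outside the frame
            sad = np.abs(cur_block.astype(int)
                         - ref_frame[y:y+n, x:x+n].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv

ref = np.zeros((16, 16), dtype=np.uint8)
ref[6:10, 7:11] = 200                            # a bright patch in the reference
cur = ref[6:10, 7:11].copy()                     # current block, nominally at (4, 4)
assert motion_search(cur, ref, 4, 4) == (3, 2)   # offset pointing at the patch
```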
  • FIG. 1 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment of the present invention.
  • the encoding apparatus 100 may be a video encoding apparatus or an image encoding apparatus.
  • the video may include one or more images.
  • the encoding apparatus 100 may sequentially encode one or more images of the video over time.
  • the encoding apparatus 100 may include an inter predictor 110, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantizer 140, an entropy encoder 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.
  • the encoding apparatus 100 may encode the input image in an intra mode and / or an inter mode.
  • the input image may be referred to as a current image that is a target of current encoding.
  • the encoding apparatus 100 may generate a bitstream including encoding information by encoding the input image, and may output the generated bitstream.
  • When the intra mode is used, the switch 115 may be switched to intra. When the inter mode is used, the switch 115 may be switched to inter.
  • the encoding apparatus 100 may generate a prediction block for the input block of the input image. In addition, after the prediction block is generated, the encoding apparatus 100 may encode a residual between the input block and the prediction block.
  • the input block may be referred to as a current block that is a target of current encoding.
  • the intra prediction unit 120 may use a pixel value of an already encoded block in the vicinity of the current block as a reference pixel.
  • the intra predictor 120 may perform spatial prediction on the current block by using the reference pixel, and generate prediction samples on the current block through spatial prediction.
  • the inter predictor 110 may include a motion predictor and a motion compensator.
  • the motion predictor may search an area that best matches the current block from the reference image in the motion prediction process, and may derive a motion vector for the current block and the searched area.
  • the reference picture may be stored in the reference picture buffer 190 once encoding and/or decoding of the reference picture has been processed.
  • the motion compensator may generate a prediction block by performing motion compensation using a motion vector.
  • the motion vector may be a two-dimensional vector used for inter prediction.
  • the motion vector may indicate an offset between the current picture and the reference picture.
  • the subtractor 125 may generate a residual block that is a difference between the input block and the prediction block.
  • the residual block may be referred to as a residual signal.
  • the transform unit 130 may generate transform coefficients by performing transform on the residual block, and output the generated transform coefficients.
  • the transform coefficient may be a coefficient value generated by performing transform on the residual block.
  • the transform unit 130 may omit the transform on the residual block.
  • Quantized transform coefficient levels may be generated by applying quantization to the transform coefficients.
  • the quantized transform coefficient level may also be referred to as transform coefficient.
  • the quantization unit 140 may generate a quantized transform coefficient level by quantizing the transform coefficients according to the quantization parameter.
  • the quantization unit 140 may output the generated quantized transform coefficient level. In this case, the quantization unit 140 may quantize the transform coefficients using the quantization matrix.
  • the entropy encoder 150 may generate a bitstream by performing entropy encoding according to a probability distribution, based on the values calculated by the quantizer 140 and/or encoding parameter values calculated in the encoding process.
  • the entropy encoder 150 may output the generated bitstream.
  • the entropy encoder 150 may perform entropy encoding on information for decoding an image, in addition to information about the pixels of an image.
  • the information for decoding the image may include a syntax element.
  • the encoding parameter may be information required for encoding and / or decoding.
  • the encoding parameter may include information encoded by the encoding apparatus and transmitted to the decoding apparatus, and may include information that may be inferred in the encoding or decoding process. For example, there is a syntax element as information transmitted to the decoding apparatus.
  • coding parameters may include values or statistics such as a prediction mode, a motion vector, a reference picture index, a coding block pattern, a residual signal, transform coefficients, quantized transform coefficients, a quantization parameter, a block size, and block partition information.
  • the prediction mode may indicate an intra prediction mode or an inter prediction mode.
  • the residual signal may mean a difference between the original signal and the prediction signal.
  • the residual signal may be a signal generated by transforming a difference between the original signal and the prediction signal.
  • the residual signal may be a signal generated by transforming and quantizing the difference between the original signal and the prediction signal.
  • the residual block may be a residual signal in block units.
  • When entropy coding is applied, a small number of bits may be allocated to a symbol having a high occurrence probability, and a large number of bits may be allocated to a symbol having a low occurrence probability. As symbols are represented through this allocation, the size of the bit string for the symbols to be encoded may be reduced. Therefore, the compression performance of image encoding may be improved through entropy encoding.
  • the entropy encoder 150 may perform entropy encoding by using a variable length coding (VLC) table.
  • VLC variable length coding
  • the entropy encoder 150 may derive a binarization method for a target symbol.
  • the entropy encoder 150 may derive a probability model of a target symbol/bin.
  • the entropy encoder 150 may perform entropy encoding using the derived binarization method or probability model.
  • the encoded current image may be used as a reference image with respect to other image (s) to be processed later. Therefore, the encoding apparatus 100 may decode the encoded current image again and store the decoded image as a reference image. Inverse quantization and inverse transform on the encoded current image may be processed for decoding.
  • the quantized coefficients may be inversely quantized in the inverse quantization unit 160 and inversely transformed in the inverse transformer 170.
  • the inverse quantized and inverse transformed coefficients may be summed with the prediction block via the adder 175.
  • a reconstructed block may be generated by adding the inverse quantized and inverse transformed coefficients and the prediction block.
  • the restored block may pass through the filter unit 180.
  • the filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or the reconstructed picture.
  • the filter unit 180 may be referred to as an adaptive in-loop filter.
  • the deblocking filter may remove block distortion generated at boundaries between blocks.
  • the SAO may add an appropriate offset value to the pixel value to compensate for coding errors.
  • the ALF may perform filtering based on a value obtained by comparing the reconstructed image and the original image.
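As an illustration of offset-based in-loop filtering, here is a toy band-offset variant of SAO; the real SAO in HEVC also has an edge-offset mode and signals its offsets in the bitstream rather than computing them at the decoder:

```python
import numpy as np

def sao_band_offset(recon, original, bands=4):
    """Toy band-offset SAO: classify pixels by intensity band, then add the
    mean coding error (original - recon) of each band back to its pixels."""
    recon = recon.astype(int)
    band = recon * bands // 256                 # which intensity band each pixel is in
    out = recon.copy()
    for b in range(bands):
        mask = band == b
        if mask.any():
            out[mask] += int(np.round((original[mask] - recon[mask]).mean()))
    return out

original = np.arange(16).reshape(4, 4) * 10 + 40
recon = original - 3                            # a uniform coding error of 3
assert (sao_band_offset(recon, original) == original).all()
```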
  • the reconstructed block that has passed through the filter unit 180 may be stored in the reference picture buffer 190.
  • FIG. 2 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment of the present invention.
  • the decoding apparatus 200 may be a video decoding apparatus or an image decoding apparatus.
  • the decoding apparatus 200 may include an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, an inter predictor 250, an adder 255, a filter unit 260, and a reference picture buffer 270.
  • the decoding apparatus 200 may receive a bitstream output from the encoding apparatus 100.
  • the decoding apparatus 200 may perform intra mode and / or inter mode decoding on the bitstream.
  • the decoding apparatus 200 may generate a reconstructed image by decoding, and output the generated reconstructed image.
  • switching to the intra mode or the inter mode according to the prediction mode used for decoding may be made by a switch.
  • when the prediction mode used for decoding is an intra mode, the switch may be switched to intra.
  • when the prediction mode used for decoding is an inter mode, the switch may be switched to inter.
  • the decoding apparatus 200 may obtain a reconstructed residual block from the input bitstream, and generate a prediction block. When the reconstructed residual block and the prediction block are obtained, the decoding apparatus 200 may generate the reconstructed block by adding the reconstructed residual block and the prediction block.
  • the entropy decoder 210 may generate symbols by performing entropy decoding on the bitstream based on the probability distribution.
  • the generated symbols may include symbols in the form of quantized coefficients.
  • the entropy decoding method may be similar to the entropy encoding method described above.
  • the entropy decoding method may be an inverse process of the above-described entropy encoding method.
  • the quantized coefficient may be inverse quantized by the inverse quantization unit 220.
  • the inverse quantized coefficient may be inversely transformed by the inverse transformer 230, and as a result, a reconstructed residual block may be generated.
  • the inverse quantization unit 220 may apply a quantization matrix to the quantized coefficients.
  • the intra predictor 240 may generate a predictive block by performing spatial prediction using pixel values of blocks already decoded around the current block.
  • the inter predictor 250 may include a motion compensator.
  • the motion compensator may generate a prediction block by performing motion compensation using a motion vector and a reference image.
  • the reference picture may be stored in the reference picture buffer 270.
  • the reconstructed residual block and the prediction block may be added through the adder 255.
  • the adder 255 may generate the reconstructed block by adding the reconstructed residual block and the predictive block.
  • the reconstructed block may pass through the filter unit 260.
  • the filter unit 260 may apply at least one of the deblocking filter, the SAO, and the ALF to the reconstructed block or the reconstructed picture.
  • the filter unit 260 may output the reconstructed image.
  • the reconstructed picture may be stored in the reference picture buffer 270 and used for inter prediction.
  • FIG. 3 is a diagram schematically illustrating a division structure of an image when encoding and decoding an image.
  • a coding unit may be used in encoding and decoding.
  • a unit may be a term that collectively refers to 1) a block including image samples and 2) a syntax element.
  • “division of a unit” may mean “division of a block corresponding to a unit”.
  • the image 300 may be sequentially divided in units of a largest coding unit (LCU), and the division structure of the image 300 may be determined according to the LCU.
  • LCU may be used as the same meaning as a coding tree unit (CTU).
  • the partition structure may mean a distribution of a coding unit (CU) for efficiently encoding an image in the LCU 310. This distribution may be determined according to whether to divide one CU into four CUs.
  • the horizontal size and the vertical size of the CU generated by the split may be half of the horizontal size and half of the vertical size, respectively, before the split.
  • the partitioned CU may be recursively divided into four CUs whose width and length are reduced by half in the same manner.
  • Depth information may be information indicating the size of a CU. Depth information may be stored for each CU. For example, the depth of the LCU may be zero, and the depth of the smallest coding unit (SCU) may be a predefined maximum depth.
  • the LCU may be a CU having a maximum coding unit size as described above, and the SCU may be a CU having a minimum coding unit size.
  • the division may start from the LCU 310, and the depth of the CU may increase by one whenever the horizontal and vertical sizes of the CU are reduced by half by the division. For each depth, the CU that is not divided may have a size of 2Nx2N.
  • a CU of 2Nx2N size may be divided into four CUs having an NxN size. The size of N can be reduced by half for every increase of 1 in depth.
  • an LCU having a depth of 0 may be 64x64 pixels. 0 may be the minimum depth.
  • An SCU of depth 3 may be 8x8 pixels. 3 may be the maximum depth.
  • a CU of 64x64 pixels, which is an LCU may be represented by a depth of zero.
  • a CU of 32x32 pixels may be represented by depth one.
  • a CU of 16x16 pixels may be represented by depth two.
  • a CU of 8x8 pixels, which is an SCU, may be represented by depth three.
  • information on whether the CU is split may be expressed through split information of the CU.
  • the split information may be 1 bit of information. All CUs except the SCU may include partition information. For example, when the CU is not split, the value of partition information of the CU may be 0, and when the CU is split, the value of partition information of the CU may be 1.
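The depth/size relation described above (each split halves both dimensions) can be sketched as follows; the function name and the use of a bit shift are illustrative.

```python
def cu_size_at_depth(lcu_size, depth):
    """Horizontal/vertical size of a CU at a given quad-tree depth.

    Each split halves both dimensions, so size = lcu_size / 2**depth.
    With a 64x64 LCU at depth 0, depths 0..3 give sizes 64, 32, 16, 8,
    matching the example in the text (8x8 SCU at maximum depth 3)."""
    return lcu_size >> depth
```

For a 64-pixel LCU this yields 64, 32, 16, and 8 at depths 0 through 3.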
  • FIG. 4 is a diagram illustrating a form of a prediction unit PU that a coding unit CU may include.
  • a CU that is no longer split among CUs split from the LCU may be split into one or more prediction units (PUs). This division may also be called partitioning.
  • the PU may be a basic unit for prediction.
  • the PU may be encoded and decoded in any one of a skip mode, an inter mode, and an intra mode.
  • PU may be divided into various types according to each mode.
  • in skip mode, there may be no partition in the CU. In skip mode, a 2Nx2N mode 410, in which the sizes of the PU and the CU are the same without splitting, may be supported.
  • in inter mode, eight partitioned forms in a CU may be supported. For example, in inter mode, the 2Nx2N mode 410, 2NxN mode 415, Nx2N mode 420, NxN mode 425, 2NxnU mode 430, 2NxnD mode 435, nLx2N mode 440, and nRx2N mode 445 may be supported.
  • in intra mode, the 2Nx2N mode 410 and the NxN mode 425 may be supported.
  • in the 2Nx2N mode 410, a PU having a size of 2Nx2N may be encoded.
  • a PU having a size of 2Nx2N may mean a PU having a size equal to the size of the CU.
  • a PU having a size of 2Nx2N may have a size of 64x64, 32x32, 16x16, or 8x8.
  • in the NxN mode 425, a PU having a size of NxN may be encoded.
  • for example, when the size of the PU is 8x8, four partitioned PUs may be encoded.
  • the size of each partitioned PU may be 4x4.
  • when the PU is encoded in intra mode, the PU may be encoded using one intra prediction mode among a plurality of intra prediction modes.
  • for example, in High Efficiency Video Coding (HEVC), the PU can be coded in one of 35 intra prediction modes.
  • Which of the 2Nx2N mode 410 and NxN mode 425 is to be coded may be determined by the rate-distortion cost.
  • the encoding apparatus 100 may perform an encoding operation on a PU having a size of 2Nx2N.
  • the encoding operation may be to encode the PU in each of the plurality of intra prediction modes that the encoding apparatus 100 may use.
  • through the encoding operations, an optimal intra prediction mode for the 2Nx2N size PU may be derived.
  • the optimal intra prediction mode may be an intra prediction mode that generates the minimum rate-distortion cost for encoding the 2Nx2N size PU among the plurality of intra prediction modes that can be used by the encoding apparatus 100.
  • the encoding apparatus 100 may sequentially perform an encoding operation on each of the PUs obtained by the NxN split.
  • the encoding operation may be to encode the PU in each of the plurality of intra prediction modes that the encoding apparatus 100 may use.
  • through the encoding operations, an optimal intra prediction mode for the NxN size PU may be derived.
  • the optimal intra prediction mode may be an intra prediction mode that generates the minimum rate-distortion cost for encoding the NxN size PU among the plurality of intra prediction modes that can be used by the encoding apparatus 100.
  • the encoding apparatus 100 may determine which of the 2Nx2N size PU and the NxN size PUs to encode based on a comparison between the rate-distortion cost of the 2Nx2N size PU and the rate-distortion costs of the NxN size PUs.
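The rate-distortion comparison above can be sketched as follows. Summing the four NxN costs so that both options cover the same CU area is an illustrative assumption; the text only states that the costs are compared.

```python
def choose_intra_partition(cost_2Nx2N, costs_NxN):
    """Choose between one 2Nx2N PU and four NxN PUs by rate-distortion
    cost. `costs_NxN` holds the cost of each of the four NxN PUs; they
    are summed so both alternatives describe the whole CU."""
    if cost_2Nx2N <= sum(costs_NxN):
        return "2Nx2N"
    return "NxN"
```

Each per-PU cost would itself be the minimum over the candidate intra prediction modes, as described above.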
  • FIG. 5 is a diagram illustrating a form of a transform unit (TU) that may be included in a coding unit (CU).
  • a transform unit may be a basic unit used for a process of transform, quantization, inverse transform, inverse quantization, entropy encoding, and entropy decoding in a CU.
  • the TU may have a square shape or a rectangular shape.
  • a CU that is no longer split into CUs may be split into one or more TUs.
  • the partition structure of the TU may be a quad-tree structure.
  • one CU 510 may be divided one or more times according to the quad-tree structure. Through division, one CU 510 may be configured with TUs of various sizes.
  • FIG. 6 is a diagram for explaining an embodiment of an intra prediction process.
  • Arrows outward from the center of the graph of FIG. 6 may indicate prediction directions of intra prediction modes.
  • the number displayed near the arrow may represent an example of a mode value allocated to the intra prediction mode or the prediction direction of the intra prediction mode.
  • Intra encoding and / or decoding may be performed using reference samples of units around the target unit. The neighboring units may be reconstructed neighboring units. For example, intra encoding and / or decoding may be performed using a value or an encoding parameter of a reference sample included in a reconstructed neighboring unit.
  • the encoding apparatus 100 and / or the decoding apparatus 200 may generate the prediction block by performing intra prediction on the target unit based on the information of the sample in the current picture.
  • the encoding apparatus 100 and / or the decoding apparatus 200 may perform directional prediction and / or non-directional prediction based on at least one reconstructed reference sample.
  • the prediction block may mean a block generated as a result of performing intra prediction.
  • the prediction block may correspond to at least one of a CU, a PU, and a TU.
  • the unit of a prediction block may be the size of at least one of a CU, a PU, and a TU.
  • the prediction block may have a square shape, having a size of 2Nx2N or a size of NxN.
  • the size of NxN may include 4x4, 8x8, 16x16, 32x32 and 64x64.
  • the prediction block may be a block in the form of a square having a size of 2x2, 4x4, 16x16, 32x32, or 64x64, or a rectangular block having a size of 2x8, 4x8, 2x16, 4x16, and 8x16.
  • Intra prediction may be performed according to an intra prediction mode for the target unit.
  • the number of intra prediction modes that the target unit may have may be a predefined fixed value or may be a value determined differently according to the attributes of the prediction block.
  • the attributes of the prediction block may include the size of the prediction block and the type of the prediction block.
  • the number of intra prediction modes may be fixed to 35 regardless of the size of the prediction unit.
  • the number of intra prediction modes may be 3, 5, 9, 17, 34, 35, 36, or the like.
  • the intra prediction mode may include two non-directional modes and 33 directional modes as shown in FIG. 6.
  • Two non-directional modes may include a DC mode and a planar mode.
  • in the vertical mode, prediction may be performed in the vertical direction based on the pixel values of the reference samples.
  • in the horizontal mode, prediction may be performed in the horizontal direction based on the pixel values of the reference samples.
  • the encoding apparatus 100 and the decoding apparatus 200 may perform intra prediction on the target unit using the reference sample according to the angle corresponding to the directional mode.
  • the intra prediction mode located on the right side of the vertical mode may be referred to as a vertical right mode.
  • the intra prediction mode located at the bottom of the horizontal mode may be referred to as a horizontal-below mode.
  • intra prediction modes in which the mode value is one of 27, 28, 29, 30, 31, 32, 33, and 34 may be vertical right modes 613.
  • Intra prediction modes with a mode value of one of 2, 3, 4, 5, 6, 7, 8, and 9 may be horizontal bottom modes 616.
  • the non-directional mode may include a DC mode and a planar mode.
  • the mode value of the DC mode may be 1.
  • the mode value of the planar mode may be zero.
  • the directional mode may include an angular mode.
  • a mode other than the DC mode and the planar mode may be a directional mode.
  • a prediction block may be generated based on an average of pixel values of the plurality of reference samples. For example, the value of a pixel of the prediction block may be determined based on an average of pixel values of the plurality of reference samples.
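The DC-mode averaging above can be sketched as follows; the rounded integer mean and the omission of any boundary filtering are illustrative simplifications.

```python
def dc_prediction(left_samples, top_samples, size):
    """DC-mode intra prediction: every pixel of the size x size
    prediction block takes the rounded average of the reconstructed
    left and top reference samples. A minimal sketch; real codecs may
    additionally filter the block boundary."""
    refs = list(left_samples) + list(top_samples)
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # rounded integer mean
    return [[dc] * size for _ in range(size)]
```

For example, with left samples (10, 10) and top samples (30, 30), every predicted pixel becomes 20.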
  • the number of intra prediction modes described above and the mode value of each intra prediction mode may be merely exemplary.
  • the number of intra prediction modes described above and the mode value of each intra prediction mode may be defined differently according to an embodiment, implementation, and / or need.
  • the number of intra prediction modes may differ depending on the type of color component.
  • the number of prediction modes may vary depending on whether the color component is a luma signal or a chroma signal.
  • FIG. 7 is a diagram for describing a position of a reference sample used in an intra prediction process.
  • a reconstructed reference sample used for intra prediction of the current block may include, for example, lower-left reference samples 731, left reference samples 733, an upper-left corner reference sample 735, top reference samples 737, upper-right reference samples 739, and the like.
  • the left reference samples 733 may refer to a reconstructed reference pixel adjacent to the left side of the target unit.
  • the top reference samples 737 may refer to a reconstructed reference pixel adjacent to the top of the target unit.
  • the upper-left corner reference sample 735 may mean a reconstructed reference sample located at the upper-left corner of the target unit.
  • the lower left reference samples 731 may refer to a reference sample located at the bottom of the left sample line among samples positioned on the same line as the left sample line composed of the left reference samples 733.
  • the upper right reference samples 739 may refer to reference samples positioned to the right of the upper pixel line among samples positioned on the same line as the upper sample line formed of the upper reference samples 737.
  • the lower left reference samples 731, the left reference samples 733, the upper reference samples 737, and the upper right reference samples 739 may each be N pieces.
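The layout of the five reference-sample groups described above can be sketched as follows. The coordinate convention (the target unit's top-left pixel at (x0, y0), with reference samples one row above and one column to the left) is an illustrative assumption.

```python
def reference_sample_positions(x0, y0, n):
    """Coordinates of the reference samples of FIG. 7 for an NxN target
    block whose top-left pixel is at (x0, y0): N lower-left, N left,
    one upper-left corner, N top, and N upper-right samples."""
    return {
        "top_left_corner": [(x0 - 1, y0 - 1)],
        "top":        [(x0 + i, y0 - 1) for i in range(n)],
        "top_right":  [(x0 + n + i, y0 - 1) for i in range(n)],
        "left":       [(x0 - 1, y0 + i) for i in range(n)],
        "lower_left": [(x0 - 1, y0 + n + i) for i in range(n)],
    }
```

Each of the four line groups contains N samples, matching the statement above.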
  • the prediction block may be generated through intra prediction on the target unit. Generation of the predictive block may include determining a value of pixels of the predictive block. The size of the target unit and the prediction block may be the same.
  • the reference sample used for intra prediction of the target unit may vary according to the intra prediction mode of the target unit.
  • the direction of the intra prediction mode may indicate a dependency relationship between the reference samples and the pixels of the prediction block.
  • the value of the specified reference sample can be used as the value of the specified one or more pixels of the prediction block.
  • the specified reference sample and the specified one or more pixels of the prediction block may be a sample and pixels designated by a straight line in the direction of the intra prediction mode.
  • the value of the specified reference sample may be copied to the value of the pixel located in the reverse direction of the intra prediction mode.
  • the pixel value of the prediction block may be a value of a reference sample located in the direction of the intra prediction mode based on the position of the pixel.
  • when the intra prediction mode of the target unit is the vertical mode having a mode value of 26, the top reference samples 737 may be used for intra prediction.
  • the value of the pixel of the prediction block may be the value of the reference pixel located vertically above the position of the pixel.
  • the top reference samples 737 adjacent to the top of the target unit can be used for intra prediction.
  • the values of the pixels of one row of the prediction block may be the same as the values of the top reference samples 737.
  • when the intra prediction mode of the target unit is the horizontal mode, the left reference samples 733 may be used for intra prediction.
  • in the horizontal mode, the pixel value of the prediction block may be the value of the reference sample located horizontally to the left of the pixel.
  • left reference samples 733 adjacent to the target unit to the left may be used for intra prediction.
  • the values of the pixels of one column of the prediction block may be the same as the values of the left reference samples 733.
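The pure-copy behavior of the vertical and horizontal modes described above can be sketched as follows; square blocks and string mode names are illustrative simplifications.

```python
def directional_copy_prediction(top_refs, left_refs, mode):
    """Vertical mode copies the top reference row into every row of the
    prediction block; horizontal mode copies the left reference column
    into every column. Sketch of the pure copy case (no interpolation)."""
    n = len(top_refs)
    if mode == "vertical":
        return [list(top_refs) for _ in range(n)]
    if mode == "horizontal":
        return [[left_refs[r]] * n for r in range(n)]
    raise ValueError("unsupported mode")
```

Every row (vertical) or column (horizontal) of the result equals the corresponding reference line, as stated above.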
  • when the mode value of the intra prediction mode of the current block is 18, at least some of the left reference samples 733, the upper-left corner reference sample 735, and at least some of the top reference samples 737 may be used for intra prediction.
  • when the mode value of the intra prediction mode is 18, the value of the pixel of the prediction block may be the value of the reference sample located diagonally to the upper-left of the pixel.
  • At least some of the upper-right reference samples 739 may be used for intra prediction.
  • At least some of the lower-left reference samples 731 may be used for intra prediction.
  • the upper left corner reference samples 735 may be used for intra prediction.
  • the reference sample used to determine the pixel value of one pixel of the prediction block may be one, or may be two or more.
  • the pixel value of the pixel of the prediction block may be determined according to the position of the reference sample indicated by the position of the pixel and the direction of the intra prediction mode. If the position of the reference sample indicated by the position of the pixel and the direction of the intra prediction mode is an integer position, the value of one reference sample indicated by the integer position may be used to determine the pixel value of the pixel of the prediction block.
  • if the position is not an integer position, an interpolated reference sample may be generated based on the two reference samples closest to the position of the reference sample.
  • the value of the interpolated reference sample can be used to determine the pixel value of the pixel of the prediction block. In other words, when the position of the reference sample indicated by the position of the pixel of the prediction block and the direction of the intra prediction mode falls between two reference samples, an interpolated value may be generated based on the values of the two samples.
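The two-sample interpolation described above can be sketched as follows. The 1/32-sample fixed-point precision follows HEVC angular prediction and is an assumption here; the text only states that a value is interpolated from the two closest samples.

```python
def interpolated_reference(ref_line, pos_32nds):
    """Linearly interpolate between the two reference samples
    surrounding a fractional projected position given in 1/32-sample
    units: integer part selects the left sample, fractional part is
    the blend weight, with rounding."""
    idx, frac = pos_32nds >> 5, pos_32nds & 31
    a, b = ref_line[idx], ref_line[idx + 1]
    return ((32 - frac) * a + frac * b + 16) >> 5  # rounded blend
```

At a whole-sample position (frac = 0) the left sample is returned unchanged; halfway between two samples the result is their rounded midpoint.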
  • the prediction block generated by the prediction may not be the same as the original target unit.
  • a prediction error that is a difference between the target unit and the prediction block may exist, and the prediction error may exist between the pixel of the target unit and the pixel of the prediction block.
  • the greater the distance between the pixel of the prediction block and the reference sample, the larger the prediction error that may occur.
  • such a prediction error may cause discontinuity between the generated prediction block and the neighboring block.
  • Filtering on the prediction block may be used to reduce the prediction error.
  • the filtering may be to adaptively apply a filter to a region that is considered to have a large prediction error in the prediction block.
  • an area considered to have a large prediction error may be a boundary of a prediction block.
  • the area considered to have a large prediction error may differ among prediction blocks, and accordingly the characteristics of the filter may differ.
  • FIG. 8 is a diagram for explaining an embodiment of an inter prediction process.
  • the rectangle illustrated in FIG. 8 may represent an image (or picture).
  • arrows in FIG. 8 may indicate prediction directions. That is, the image may be encoded and / or decoded according to the prediction direction.
  • Each picture may be classified into an I picture (Intra Picture), a P picture (Uni-prediction Picture), and a B picture (Bi-prediction Picture) according to an encoding type.
  • Each picture may be encoded according to an encoding type of each picture.
  • when the image to be encoded is an I picture, the image may be encoded with respect to the image itself without inter prediction.
  • when the image to be encoded is a P picture, the image may be encoded through inter prediction using a reference picture only in the forward direction.
  • when the image to be encoded is a B picture, it may be encoded through inter prediction using reference pictures in both the forward and reverse directions, or may be encoded through inter prediction using a reference picture in one of the forward and reverse directions.
  • the P picture and the B picture encoded and / or decoded using the reference picture may be regarded as an image using inter prediction.
  • the encoding apparatus 100 and the decoding apparatus 200 may perform prediction and / or motion compensation on the encoding target unit and the decoding target unit.
  • the encoding apparatus 100 or the decoding apparatus 200 may perform prediction and / or motion compensation by using the reconstructed motion information of the neighboring unit as the motion information of the encoding target unit or the decoding target unit.
  • the encoding target unit or the decoding target unit may mean a prediction unit and / or a prediction unit partition.
  • Inter prediction may be performed using a reference picture and motion information.
  • inter prediction may use the skip mode described above.
  • the reference picture may be at least one of a previous picture of the current picture or a subsequent picture of the current picture.
  • the inter prediction may perform prediction on a block of the current picture based on the reference picture.
  • the reference picture may mean an image used for prediction of a block.
  • an area in the reference picture may be specified by using a reference picture index refIdx indicating a reference picture, a motion vector to be described later, and the like.
  • the inter prediction may select a reference picture and a reference block corresponding to the current block within the reference picture, and generate the prediction block for the current block using the selected reference block.
  • the current block may be a block that is a target of current encoding or decoding among blocks of the current picture.
  • the motion information may be derived during inter prediction by each of the encoding apparatus 100 and the decoding apparatus 200.
  • the derived motion information may be used to perform inter prediction.
  • the encoding apparatus 100 and the decoding apparatus 200 may improve encoding and / or decoding efficiency by using motion information of a reconstructed neighboring block and / or motion information of a collocated block (col block).
  • the col block may be a block corresponding to the current block in a collocated picture (col picture).
  • the reconstructed neighboring block may be a block in the current picture and may be a block already reconstructed through encoding and / or decoding.
  • the reconstructed neighboring block may be a block adjacent to the current block and / or a block located at an outer corner of the current block.
  • the block located at the outer corner of the current block may be a block vertically adjacent to a neighboring block horizontally adjacent to the current block or a block horizontally adjacent to a neighboring block vertically adjacent to the current block.
  • a reconstructed neighboring unit may be a unit located to the left of the target unit, a unit located at the top of the target unit, a unit located at the lower-left corner of the target unit, a unit located at the upper-right corner of the target unit, or a unit located at the upper-left corner of the target unit.
  • Each of the encoding apparatus 100 and the decoding apparatus 200 may determine a block existing at a position spatially corresponding to the current block in the col picture, and may determine a predefined relative position based on the determined block.
  • the predefined relative position may be a position inside and / or outside the block existing at the position spatially corresponding to the current block.
  • each of the encoding apparatus 100 and the decoding apparatus 200 may derive the col block based on the determined predefined relative position.
  • the col picture may be one picture among at least one reference picture included in the reference picture list.
  • the block in the reference picture may exist at a position spatially corresponding to the position of the current block in the reconstructed reference picture.
  • the position of the current block in the current picture and the position of the block in the reference picture may correspond to each other.
  • motion information of a block included in the reference picture may be referred to as temporal motion information.
  • the method of deriving the motion information may vary according to the prediction mode of the current block.
  • as prediction modes applied for inter prediction, there may be an advanced motion vector predictor (AMVP) mode and a merge mode.
  • in the AMVP mode, each of the encoding apparatus 100 and the decoding apparatus 200 may create a prediction motion vector candidate list using the motion vector of the reconstructed neighboring block and / or the motion vector of the col block. The motion vector of the reconstructed neighboring block and / or the motion vector of the collocated block may be used as a prediction motion vector candidate.
  • the bitstream generated by the encoding apparatus 100 may include a predicted motion vector index.
  • the prediction motion vector index may indicate an optimal prediction motion vector selected from the prediction motion vector candidates included in the prediction motion vector candidate list.
  • the predicted motion vector index may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through the bitstream.
  • the decoding apparatus 200 may select the prediction motion vector of the current block from the prediction motion vector candidates included in the prediction motion vector candidate list by using the prediction motion vector index.
  • the encoding apparatus 100 may calculate a motion vector difference (MVD) between the motion vector and the predictive motion vector of the current block, and may encode the MVD.
  • the bitstream may include encoded MVD.
  • the MVD may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream.
  • the decoding apparatus 200 may decode the received MVD.
  • the decoding apparatus 200 may derive the motion vector of the current block through the sum of the decoded MVD and the predictive motion vector.
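The motion-vector derivation above (the sum of the selected prediction motion vector and the decoded MVD) can be sketched as follows; representing motion vectors as (x, y) tuples is an illustrative choice.

```python
def derive_motion_vector(pmv, mvd):
    """AMVP decoding: reconstruct the motion vector of the current
    block as the componentwise sum of the prediction motion vector
    (pmv) selected via the prediction motion vector index and the
    decoded motion vector difference (mvd)."""
    return (pmv[0] + mvd[0], pmv[1] + mvd[1])
```

The encoder computes the MVD as the difference between the true motion vector and the predictor, so this sum inverts that step exactly.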
  • the bitstream may include a reference picture index and the like indicating the reference picture.
  • the reference picture index may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 through a bitstream.
  • the decoding apparatus 200 may predict the motion vector of the current block by using the motion information of the neighboring block, and may derive the motion vector of the current block by using the predicted motion vector and the motion vector difference.
  • the decoding apparatus 200 may generate a prediction block for the current block based on the derived motion vector and the reference picture index information.
  • the encoding apparatus 100 may not separately encode the motion information for the target unit. If the motion information of the target unit is not encoded, the amount of bits transmitted to the decoding apparatus 200 may be reduced, and encoding efficiency may be improved.
  • the inter prediction mode in which the motion information of the target unit is not encoded may include a skip mode and / or a merge mode. In this case, the encoding apparatus 100 and the decoding apparatus 200 may use an identifier and / or an index indicating which unit of the reconstructed neighboring units is used as the movement information of the target unit.
  • Merge may mean merging of motions for a plurality of blocks. Merge may mean applying motion information of one block to other blocks.
  • in the merge mode, each of the encoding apparatus 100 and the decoding apparatus 200 may generate a merge candidate list using the motion information of the reconstructed neighboring block and / or the motion information of the col block.
  • the motion information may include at least one of 1) a motion vector, 2) an index for a reference image, and 3) a prediction direction.
  • the prediction direction may be unidirectional or bidirectional.
  • the merge may be applied in a CU unit or a PU unit.
  • the encoding apparatus 100 may transmit predefined information to the decoding apparatus 200 through a bitstream.
  • the bitstream may include predefined information.
  • the predefined information may include 1) information indicating whether to merge for each block partition, and 2) information about which one of neighboring blocks adjacent to the current block to merge with.
  • the neighboring blocks of the current block may include a left neighboring block of the current block, a top neighboring block of the current block, a temporal neighboring block of the current block, and the like.
  • the merge candidate list may represent a list in which motion information is stored.
  • the merge candidate list may be generated before the merge is performed.
  • the motion information stored in the merge candidate list may be 1) motion information of a neighboring block adjacent to the current block or 2) collocated block motion information corresponding to the current block in the reference image.
  • the motion information stored in the merge candidate list may be new motion information generated by a combination of motion information already present in the merge candidate list.
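The construction of a merge candidate list from spatial and temporal motion information, as described above, can be sketched as follows. The candidate ordering, duplicate pruning, and the limit of five candidates follow HEVC and are assumptions here; combined candidates generated from entries already in the list are omitted for brevity.

```python
def build_merge_candidate_list(spatial_neighbors, temporal_candidate,
                               max_candidates=5):
    """Collect motion information of reconstructed spatial neighbors,
    then the collocated (temporal) candidate, skipping unavailable
    (None) entries and duplicates, up to max_candidates entries."""
    candidates = []
    for mi in spatial_neighbors + [temporal_candidate]:
        if mi is not None and mi not in candidates:
            candidates.append(mi)
        if len(candidates) == max_candidates:
            break
    return candidates
```

Each entry would carry a motion vector, a reference picture index, and a prediction direction, per the description above.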
  • the skip mode may be a mode in which information of neighboring blocks is applied to the current block as it is.
  • the skip mode may be one of modes used for inter prediction.
  • the encoding apparatus 100 may transmit only information on which block motion information to use as the motion information of the current block to the decoding apparatus 200 through the bitstream.
  • the encoding apparatus 100 may not transmit other information to the decoding apparatus 200.
  • the other information may be syntax information.
  • the syntax information may include motion vector difference information.
  • FIG. 9 illustrates a prediction error when an original image is predicted in the vertical direction, according to an example.
  • one PU and a prediction mode may be selected after the calculation for all possible PUs and prediction modes is performed.
  • the prediction on the original image is performed according to the selected PU and the prediction mode, a tendency for distortion of the prediction error remains in the direction of the prediction mode.
  • the secondary intra prediction may be performed after performing the first intra prediction using one prediction mode selected from among the plurality of prediction modes for the PU.
  • the secondary intra prediction may be intra prediction defined in the region of pixels specified in the direction of the prediction mode of the first intra prediction.
  • the prediction mode of the secondary intra prediction may be the same as the prediction mode of the first intra prediction.
  • the first intra prediction is applied to the whole of the PU, but the secondary intra prediction may be applied to only a part of the PU.
  • the encoding apparatus may determine a part of the PU to which secondary intra prediction is to be applied. For example, the encoding apparatus may calculate a rate-distortion cost when secondary intra prediction is applied to the candidate region for each candidate region of the plurality of candidate regions. The encoding apparatus may determine the candidate region of the smallest rate-distortion cost as part of the PU to which secondary intra prediction is to be applied.
  • the PU may include a plurality of pixels.
  • the plurality of candidate regions may be portions of the plurality of pixels of the PU determined in a specified manner.
  • the plurality of candidate regions may be determined by the prediction direction of the first intra prediction.
  • the prediction error for the PU can be further reduced compared to the case where only the first intra prediction is applied.
  • the following embodiments can present a method and apparatus for improving the performance of intra prediction and reducing the prediction error by efficiently using the aforementioned tendency.
  • FIG. 10 is a structural diagram of an encoding apparatus according to an embodiment.
  • the encoding apparatus 1000 may be a general-purpose computer system that performs encoding.
  • the encoding apparatus 1000 may include at least one processor 1010, a memory 1030, a user interface (UI) input device 1050, a UI output device 1060, and a storage 1040, which communicate with each other via a bus 1090.
  • the encoding apparatus 1000 may further include a communication unit 1020 connected to a network 1099.
  • the processor 1010 may be a central processing unit (CPU) or a semiconductor device that executes processing instructions stored in the memory 1030 or the storage 1040.
  • Memory 1030 and storage 1040 may be various types of volatile or nonvolatile storage media.
  • the memory may include at least one of a ROM 1031 and a RAM 1032.
  • the processor 1010 may perform the functions of the inter predictor 110, the intra predictor 120, the switch 115, the subtractor 125, the transformer 130, the quantizer 140, the entropy encoder 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 of the encoding apparatus 100.
  • the recording medium may store at least one module for the operation of the encoding apparatus 1000.
  • the memory 1030 may store at least one module and may be configured to be executed by the at least one processor 1010.
  • at least some of the inter prediction unit 110, the intra prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantizer 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 of the encoding apparatus 100 may be program modules, and may communicate with an external device or system. The program modules may be included in the encoding apparatus 1000 in the form of an operating system, an application module, or other program modules.
  • the program modules may be physically stored on various known storage devices.
  • at least some of these program modules may be stored in a remote storage device that can communicate with the encoding apparatus 1000.
  • According to one embodiment, the program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures that perform functions or operations or that implement abstract data types.
  • the program modules may be composed of instructions or code performed by the processor 1010.
  • Functions related to communication of data or information of the encoding apparatus 1000 may be performed through the communication unit 1020.
  • FIG. 11 is a flowchart of an encoding method according to an embodiment.
  • the encoding method of the embodiment may be performed by the encoding apparatus 100 or the encoding apparatus 1000.
  • the prediction unit may generate a residual signal of the current block based on the current block, the first prediction, and the second prediction.
  • the predictor may include an inter predictor 110, an intra predictor 120, and a subtractor 125.
  • the residual signal may represent a residual block.
  • the residual block may be the difference between the current block and the prediction block.
  • the residual block may be generated based on the difference between the current block and the prediction block.
  • the prediction block may be the sum of the first prediction block and the second prediction block.
  • the residual block may be obtained by subtracting the sum of the first prediction block and the second prediction block from the current block.
  • the second prediction may be a prediction for a prediction error of the first prediction.
  • the first prediction block may be a block generated by the first prediction.
  • the second prediction block may be a block generated by the second prediction. If the second prediction is not used, the second prediction block may be a null block.
  • the null block may be a block in which the values of all pixels of the block are "0".
  • the prediction unit may generate a residual block of the current block by performing inter prediction or intra prediction.
  • the prediction unit may generate a second prediction block based on the current block and / or the first prediction block.
  • the prediction unit may generate a residual signal of the current block by performing a first prediction on the current block and performing a second prediction on the first prediction error that is a result of the first prediction.
  • the first prediction error may be a difference between the current block and the first prediction signal.
  • the residual signal may be a difference between the first prediction error and the second prediction signal.
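The chain of differences above can be illustrated numerically. The following is a minimal NumPy sketch, assuming the first and second prediction blocks are already given; the function name and the values are hypothetical.

```python
import numpy as np

def two_stage_residual(current, first_pred, second_pred):
    """residual = (current - first_pred) - second_pred."""
    first_error = current - first_pred    # first prediction error (first residual)
    residual = first_error - second_pred  # residual signal passed to the transform stage
    return first_error, residual

cur = np.array([[10, 12], [14, 16]])   # hypothetical current block
p1 = np.array([[9, 11], [13, 15]])     # hypothetical first prediction block
p2 = np.array([[1, 1], [0, 0]])        # hypothetical second prediction block
err1, res = two_stage_residual(cur, p1, p2)
```

Here the second prediction reduces the magnitude of the residual that remains after the first prediction.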
  • the first prediction and the second prediction may be different kinds of predictions.
  • the first prediction and the second prediction may be the same kind of predictions.
  • each of the first prediction and the second prediction may be intra prediction. If the first prediction is intra prediction, the second prediction may also be set to intra prediction.
  • the prediction direction of the first prediction and the prediction direction of the second prediction may be the same.
  • the prediction direction of the second prediction may be set to be the same as the prediction direction of the first prediction.
  • each of the first prediction and the second prediction may be inter prediction.
  • a block that is the target of the first prediction and a block that is the target of the second prediction may be different from each other.
  • the "block to be predicted” may mean to which block the "prediction” generates a prediction block.
  • the type of the block targeted for the first prediction and the type of the block targeted for the second prediction may be different from each other.
  • the type of block may be an original block, a luma block, a chroma block, a depth block, a residual block, or the like.
  • the block that is the target of the first prediction may be the current block or the original block.
  • the first prediction error generated by the first prediction may represent the first prediction error block.
  • the first prediction error may be a first residual signal.
  • the first residual signal may represent a first residual block.
  • the block that is the target of the second prediction may be a first prediction error block or a first residual block.
  • the first residual signal may be generated by the first prediction
  • the second prediction error may be generated by the second prediction.
  • the second prediction error may be a second residual signal.
  • the second residual signal may represent a second residual block.
  • Reference blocks may be used for prediction.
  • the first reference block of the first prediction and the second reference block of the second prediction may be different from each other.
  • the reference blocks may differ from each other in terms of type and / or location.
  • the position of the first reference block of the first prediction and the position of the second reference block of the second prediction may be different from each other.
  • the position of the first reference block may be a position relative to the block that is the target of the first prediction.
  • the position of the second reference block may be a position relative to the block that is the target of the second prediction.
  • the first reference block of the first prediction may be plural.
  • the second reference block of the second prediction may be plural. At least some of the plurality of first reference blocks of the first prediction and the plurality of second reference blocks of the second prediction may be different from each other.
  • the position of at least one reference block among the plurality of first reference blocks of the first prediction and the plurality of second reference blocks of the second prediction may be included in only one of the positions of the plurality of first reference blocks and the positions of the plurality of second reference blocks.
  • the type of the first reference block of the first prediction and the type of the second reference block of the second prediction may be different from each other.
  • the type of block may be a reconstructed block, a reconstructed luma block, a reconstructed chroma block, a reconstructed depth block, a reconstructed first residual block, a reconstructed second residual block, or the like.
  • the first reference block of the first prediction may be a reconstructed block.
  • the second reference block of the second prediction may be a reconstructed first residual block.
  • the first reference sample of the first prediction may be a pixel of the reconstructed block.
  • the second reference sample of the second prediction may be a pixel of the reconstructed first residual block.
  • the first reference block of the first prediction may be a reconstructed neighboring block neighboring the current block.
  • the second reference block of the second prediction may be a reconstructed first residual block neighboring the current block.
  • the reconstructed neighbor residual block neighboring the current block may be obtained by adding the reconstructed residual block of the reconstructed neighboring block to the second prediction block of the reconstructed neighboring block.
  • the reconstructed neighbor residual block neighboring the current block may be a difference between the reconstructed neighboring block and the first prediction block of the reconstructed neighboring block.
  • a reconstructed block around the current block may be used as a reference block for prediction of the current block.
  • the target of the second prediction is the first residual block
  • a reconstructed first residual block around the current block or the first residual block may be used as the second reference block for prediction on the first residual block.
  • the region of the first prediction and the region of the second prediction may be different from each other.
  • the "area of prediction" may indicate the area, within the block that is the target of the prediction, for which prediction values are generated.
  • Alternatively, the "area of prediction" may indicate the area, within the prediction block corresponding to the block that is the target of the prediction, to which the prediction values generated by the prediction are allocated.
  • the first prediction may be a prediction for an area of the current block.
  • the second prediction may be a prediction for a portion of the above area. At least some of the area of the current block may be excluded from the second prediction.
  • the block that is the target of the first prediction may be a block within a rectangular area.
  • the second prediction may be performed on a part of the rectangular area. At least a part of the rectangular area may not be the target of the second prediction.
  • the area of the first prediction may be the entire block that is the target of the first prediction.
  • the area of the second prediction may be part of a block that is the target of the second prediction.
  • the second prediction may be excluded for at least a portion of the block that is the target of the second prediction.
  • the first prediction may be performed on the entire block that is the target of the first prediction.
  • the second prediction may be performed on the selected portion of the block that is the target of the second prediction.
  • prediction values determined by the first prediction may be assigned only to pixels in the region of the first prediction among blocks that are the targets of the first prediction.
  • the prediction value according to the first prediction may not be allocated to pixels outside the region of the first prediction among the blocks that are the targets of the first prediction.
  • a predefined value may be assigned to pixels outside the region of the first prediction among blocks that are the target of the first prediction. The predefined value may be "0".
  • prediction values determined by the second prediction may be assigned only to pixels in the region of the second prediction among blocks that are the target of the second prediction. Pixels outside the region of the second prediction among the blocks that are the target of the second prediction may not be assigned a prediction value by the second prediction.
  • a predefined value may be assigned to pixels outside the region of the second prediction among blocks that are the target of the second prediction. For example, the predefined value may be "0".
  • when the size of the block that is the target of the first prediction is N×N, each of the N×N pixels may be assigned a prediction value determined by the first prediction.
  • the prediction value determined by the first prediction may be assigned to all pixels of the block that is the target of the first prediction.
  • when the size of the block that is the target of the second prediction is N×N, one or more pixels among the N×N pixels may not be assigned a prediction value determined by the second prediction.
  • when the size of the block that is the target of the second prediction is N×N, one or more pixels among the N×N pixels may be assigned a predefined value.
  • the predefined value may be "0".
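The region-restricted assignment described above can be sketched as follows. This is an illustrative sketch only; the mask-based representation of the prediction region and the function name are hypothetical.

```python
import numpy as np

def apply_region_prediction(pred_values, region_mask):
    """Keep prediction values inside the region; assign the predefined
    value "0" to pixels outside the region."""
    out = np.zeros_like(pred_values)
    out[region_mask] = pred_values[region_mask]
    return out

pred = np.full((4, 4), 5)              # hypothetical prediction values
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True                     # hypothetical region: left half of the block
restricted = apply_region_prediction(pred, mask)
```

Only pixels inside the region carry prediction values; the rest of the block contributes nothing to the prediction signal.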
  • the region of the first prediction may be determined based on the type of the first prediction. For example, the region of the first prediction may be determined based on whether the first prediction is inter prediction. Alternatively, the region of the first prediction may be determined based on whether the first prediction is intra prediction. Alternatively, the region of the first prediction may be determined based on the prediction direction of the first prediction.
  • the region of the second prediction may be determined based on the type of the second prediction. For example, the region of the second prediction may be determined based on whether the second prediction is inter prediction. Alternatively, the region of the second prediction may be determined based on whether the second prediction is intra prediction. Alternatively, the region of the second prediction may be determined based on the prediction direction of the second prediction.
  • the second prediction may be performed using second reference samples within a selected range.
  • the region of the first reference sample used for the first prediction and the region of the second reference sample used for the second prediction may be different from each other.
  • the region of the first prediction and the region of the second prediction may be different from each other.
  • the area of the first reference sample may be an area of reference samples required for generating prediction values for the area of the first prediction through the first prediction.
  • the area of the second reference sample may be an area of reference samples required for generating prediction values for the area of the second prediction through the second prediction.
  • the region of the first prediction may be the entirety of the block that is the target of the first prediction.
  • the area of the second prediction may be part of a block that is the target of the second prediction.
  • the region of the second reference sample may be part of the region of the first reference sample.
  • the encoder may perform encoding on the residual signal.
  • the encoder may generate information about the encoded residual signal by performing encoding on the residual signal.
  • the encoder may include a transformer 130, a quantizer 140, and an entropy encoder 150.
  • Step 1120 may include steps 1121, 1122, and 1123.
  • the transformer 130 may generate transform coefficients of the residual signal.
  • the quantization unit 140 may generate quantized transform coefficients by performing quantization on the transform coefficients.
  • the entropy encoder 150 may generate information on the encoded residual signal by performing entropy encoding on the quantized transform coefficients.
  • the entropy encoder 150 may generate a bitstream including information about the encoded residual signal.
  • the bitstream may include information about the encoded residual signal, and may include information related to prediction.
  • the entropy encoder 150 may include information related to prediction in the bitstream. Related information of the prediction may be entropy coded.
  • the information related to the prediction may include prediction scheme information indicating how the current block is encoded.
  • the prediction scheme information may indicate whether the current block is encoded by intra prediction or inter prediction.
  • the prediction scheme information may indicate whether the current block is encoded by intra prediction.
  • the prediction scheme information may indicate whether the current block is encoded by inter prediction.
  • the information related to the prediction may include intra prediction mode information indicating a prediction mode of the intra prediction.
  • the information related to the prediction may include second prediction usage information indicating whether the second prediction is used in encoding the current block.
  • the information related to the prediction may include first prediction region information indicating the region of the first prediction and / or second prediction region information indicating the second prediction region of the second prediction.
  • Information related to the prediction may include information used for the prediction described in the embodiments.
  • the entropy encoder 150 may include the information related to the prediction in the bitstream in the order described in the embodiment or the generation order according to the embodiment.
  • the communication unit 1020 may transmit the bitstream to the decoding apparatus 200 or the decoding apparatus 1700.
  • the reconstructed residual signal generator may generate a reconstructed residual signal based on the information about the residual signal.
  • the reconstructed residual signal generator may include an inverse quantizer 160 and an inverse transformer 170.
  • Step 1150 may include steps 1151 and 1152.
  • the inverse quantization unit 160 may generate the reconstructed transform coefficients by performing inverse quantization on the quantized transform coefficients.
  • the inverse transform unit 170 may generate the reconstructed residual signal by performing inverse transform on the reconstructed transform coefficients.
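Steps 1151 and 1152 can be illustrated with a toy example. Here a 2×2 Hadamard transform and a scalar quantizer stand in for the codec's actual transform and quantization, which differ in practice; the function names and values are hypothetical.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]])        # 2x2 Hadamard matrix, H @ H.T = 2I

def encode_residual(residual, qstep):
    coeffs = H @ residual @ H.T        # forward transform (cf. step 1121)
    return np.round(coeffs / qstep)    # quantization (cf. step 1122)

def reconstruct_residual(quantized, qstep):
    coeffs = quantized * qstep         # inverse quantization (step 1151)
    return (H.T @ coeffs @ H) / 4      # inverse transform (step 1152)

res = np.array([[4.0, 4.0], [4.0, 4.0]])
q = encode_residual(res, qstep=2.0)
recon = reconstruct_residual(q, qstep=2.0)
```

In this contrived case the coefficients are exact multiples of the quantization step, so the reconstructed residual equals the original; in general, quantization introduces loss.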
  • the reconstructed block generator may generate a reconstructed block based on the reconstructed residual signal, the first prediction, and the second prediction.
  • the restored block generator may include an adder 175.
  • the sum of the reconstructed residual signal and the prediction signal may represent a reconstructed block.
  • the reconstructed block may be generated based on the sum of the reconstructed residual signal and the prediction signal.
  • the reconstructed block may be the sum of the reconstructed residual block and the prediction block.
  • the prediction block may be the sum of the first prediction block and the second prediction block.
  • the reconstructed block may be generated based on the sum of the reconstructed residual block and the prediction block.
  • the reconstructed block may be a block generated by the reconstructed residual block, the second prediction block, and the first prediction block.
  • the reconstructed block generator may generate the reconstructed first residual signal by adding the reconstructed residual signal and the second prediction signal.
  • the reconstructed block generator may generate the reconstructed block by adding the reconstructed first residual signal and the first prediction signal.
  • the reconstructed block generator may generate the reconstructed first residual block by adding the reconstructed residual block and the second prediction block.
  • the reconstructed block generator may generate the reconstructed block by adding the reconstructed first residual block and the first prediction block.
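The reconstruction order above admits a direct sketch. This is a minimal NumPy illustration with hypothetical values; the function name is not part of the embodiment.

```python
import numpy as np

def reconstruct_block(recon_residual, first_pred, second_pred):
    """Reconstructed first residual = reconstructed residual + second prediction;
    reconstructed block = reconstructed first residual + first prediction."""
    recon_first_residual = recon_residual + second_pred
    recon_block = recon_first_residual + first_pred
    return recon_first_residual, recon_block

recon_res = np.array([[0, 0], [1, 1]])  # hypothetical reconstructed residual
p1 = np.array([[9, 11], [13, 15]])      # hypothetical first prediction block
p2 = np.array([[1, 1], [0, 0]])         # hypothetical second prediction block
r1, block = reconstruct_block(recon_res, p1, p2)
```

Both outputs matter: the reconstructed block and the reconstructed first residual may each serve as reference data for later blocks, as stated above.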
  • the reconstructed block and the reconstructed first residual signal may be used as reference blocks for encoding other blocks.
  • FIG. 12 illustrates the first prediction and the second prediction using intra prediction, according to an embodiment.
  • the first prediction and the second prediction may be intra prediction.
  • FIG. 12 illustrates an example of encoding of the current block when the first prediction and the second prediction are intra prediction.
  • the current block may be a PU.
  • a PU having a size of 2N×2N is illustrated as the current block.
  • the prediction unit may determine which prediction mode among a plurality of prediction modes to use for the first prediction.
  • the prediction unit may perform the first prediction by using each of the plurality of prediction modes.
  • the prediction directions of the plurality of prediction modes may be different.
  • reconstructed blocks around the current block can be used as reference blocks. Also, pixels of reconstructed blocks around the current block can be used as reference samples.
  • the prediction unit may generate first prediction blocks of the plurality of prediction modes through the first prediction.
  • the prediction unit may generate first prediction errors of the plurality of prediction modes through the first prediction. For one prediction mode, the first prediction block and the first prediction error can be calculated.
  • the first prediction error may indicate a difference between the current block and the first prediction block.
  • the first prediction error may represent the first residual block.
  • the prediction unit may select a region to which the second prediction is to be applied in the first residual block by referring to the first prediction error.
  • the selected area may be part of the first residual block.
  • the prediction unit may determine which part of the first residual block to apply the second prediction to.
  • the second prediction may be applied to the portion of the first residual block that is the selected region.
  • Various methods can be used to select the region to which the second prediction is to be applied.
  • the prediction region of the second prediction may be specified by the range of reference samples used for the second prediction.
  • reconstructed residual blocks neighboring the current block may be used as reference blocks. Also, pixels of reconstructed residual blocks neighboring the current block can be used as reference samples.
  • the prediction direction of the first prediction and the prediction direction of the second prediction may be the same.
  • the intra prediction mode of the first prediction and the intra prediction mode of the second prediction may be the same.
  • an intra prediction mode having a mode value of 30 is illustrated as an example.
  • the range of reference samples of the first prediction and the reference samples of the second prediction may be the same.
  • the second prediction may be applied to a portion of the first residual block.
  • the range of reference samples of the second prediction may be part of the range of reference samples of the first prediction.
  • the range of reference samples of the second prediction is shown as the range between α and α+β.
  • α may indicate the first reference sample to be used for the second prediction among the reference samples for the prediction direction of the second prediction.
  • α may indicate the starting position of the second prediction.
  • α+β may indicate the last reference sample to be used for the second prediction among the reference samples for the prediction direction of the second prediction.
  • α+β may indicate the last position of the second prediction.
  • β may represent the number of reference samples to be used for the second prediction.
  • β may represent the range of the second prediction or the length of the second prediction.
  • an area of the second prediction may be specified according to the determined range of reference samples and the prediction direction.
  • regions specified by dotted lines in the residual block are shown.
  • the region of the second prediction may be the region generated as the reference samples in the range between α and α+β advance in the prediction direction.
  • the prediction unit may select a range of reference samples to be used for the second prediction among the reference samples for the prediction direction.
  • the prediction unit may perform the second prediction by using the reference samples in the selected range among the entire reference samples for the prediction direction.
  • the entire reference samples may correspond to the reference samples used in the first prediction.
  • the value of the reference sample outside of the selected range may be considered null or "0".
  • the value of a reference sample before α may be considered zero, and the value of a reference sample after α+β may be considered zero.
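The range-restricted second prediction can be sketched for the simplest case. The sketch below assumes a purely vertical prediction direction, so that each predicted row repeats the row of top reference samples, with samples outside [α, α+β] treated as "0"; the function name and values are hypothetical.

```python
import numpy as np

def vertical_second_prediction(top_refs, alpha, beta, height):
    """Predict vertically using only top reference samples whose index
    lies in [alpha, alpha + beta]; samples outside the range count as 0."""
    refs = np.zeros_like(top_refs)
    refs[alpha:alpha + beta + 1] = top_refs[alpha:alpha + beta + 1]
    # Each predicted row repeats the masked row of top reference samples.
    return np.tile(refs, (height, 1))

top = np.array([4, 7, 2, 9])           # hypothetical top reference samples
pred = vertical_second_prediction(top, alpha=1, beta=1, height=2)
```

Only the columns under the selected reference samples receive nonzero prediction values, which matches the dotted region described for FIG. 12.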
  • the prediction unit may generate second prediction blocks for the plurality of candidate regions through the second prediction.
  • the plurality of candidate regions may be different from each other.
  • the plurality of candidate regions may be different portions of the first residual block.
  • the region of the first residual block can be specified by the reference samples used for the second prediction.
  • a part of the first residual block, or the region of the second prediction for the first residual block, may be represented by an ordered pair (α, β).
  • the ordered pairs (α, β) corresponding to the plurality of candidate regions may be different from each other.
  • the values α and β of an ordered pair may indicate a range of reference samples to be used for the second prediction.
  • the plurality of candidate regions may be all of the different regions that can be selected, in the specified manner, from the first residual block.
  • the specified scheme may be a prediction direction.
  • the region of the first residual block can be specified by the reference samples used for the second prediction.
  • α may be greater than or equal to 0 and less than or equal to n−1.
  • β may be greater than or equal to 0 and less than or equal to n−1−α.
  • the number of ordered pairs (α, β) may be n(n+1)/2.
  • the number of the plurality of candidate regions may be n(n+1)/2.
  • the plurality of candidate regions may respectively correspond to ranges of reference samples to be used for the second prediction.
  • α and α+β may each be an index into the list of reference samples for the specified prediction direction.
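The enumeration of candidate regions above can be checked with a short sketch; `candidate_pairs` is a hypothetical helper, not part of the embodiment.

```python
def candidate_pairs(n):
    """Enumerate all (alpha, beta) ordered pairs for n reference samples,
    with 0 <= alpha <= n-1 and 0 <= beta <= n-1-alpha."""
    return [(alpha, beta)
            for alpha in range(n)
            for beta in range(n - alpha)]

pairs = candidate_pairs(4)
# For n reference samples there are n(n+1)/2 candidate regions.
```

Each pair selects a contiguous run of reference samples starting at α, consistent with the range [α, α+β] described above.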
  • the order of the reference samples may be determined.
  • the order of the reference samples may be given in the order of the lower-left reference samples 731, the left reference samples 733, the upper-left corner reference sample 735, the top reference samples 737, and the upper-right reference samples 739.
  • among the lower-left reference samples 731, a lower reference sample may have an earlier order than an upper reference sample.
  • among the left reference samples 733, a lower reference sample may have an earlier order than an upper reference sample.
  • among the top reference samples 737, a reference sample located on the left side may have an earlier order than a reference sample located on the right side.
  • among the upper-right reference samples 739, a left reference sample may have an earlier order than a right reference sample.
  • the order of the reference samples may be given for the reference samples determined according to the prediction direction.
  • only some of the above-described lower-left reference samples 731, left reference samples 733, upper-left corner reference sample 735, top reference samples 737, and upper-right reference samples 739, determined according to the prediction direction of the second prediction, may be used for the prediction of the entirety of the first residual block. Order may be given to the some of the determined reference samples.
  • the reference samples that may be used for the entirety of the first residual block may be upper reference samples 737.
  • when the size of the current block is 2N×2N, the order of the leftmost reference sample among the top reference samples 737 may be 0, and the order of the rightmost reference sample among the top reference samples 737 may be 2N−1.
  • reference samples that may be used for the entirety of the first residual block may be left reference samples 733.
  • when the size of the current block is 2N×2N, the order of the lowest reference sample among the left reference samples 733 may be 0, and the order of the highest reference sample among the left reference samples 733 may be 2N−1.
  • the prediction unit may generate second prediction errors for the plurality of candidate regions through the second prediction. For one region, the second prediction block and the second prediction error can be calculated.
  • the second prediction error may indicate a residual between the first residual block and the second prediction block.
  • the second prediction error may represent the second residual block.
  • the prediction unit may generate a plurality of first prediction errors for the plurality of prediction modes. Further, for each first prediction error of the plurality of first prediction errors, the prediction unit may generate, through the second prediction, second prediction errors for the plurality of candidate regions of the first residual block indicated by that first prediction error.
  • the prediction unit may generate second prediction errors for all candidate regions of the plurality of prediction modes.
  • the prediction unit may select a second prediction error having a minimum rate-distortion cost among the generated second prediction errors.
  • the candidate region corresponding to the selected second prediction error among the plurality of candidate regions may be the part of the current block to which the second prediction is applied.
  • a prediction mode corresponding to the selected second prediction error among the plurality of prediction modes may be a prediction mode to be used in the first prediction.
  • the prediction unit may determine the prediction mode of the first prediction and the region of the second prediction to be used for encoding the current block according to which region of which prediction mode yields the second prediction error with the minimum rate-distortion cost.
  • the prediction unit may select a prediction mode having a minimum rate-distortion cost among the plurality of prediction modes.
  • the second prediction may be performed on the first prediction error or the first residual block generated by the selected prediction mode.
  • the prediction unit may select a second prediction error having a minimum rate-distortion cost among the second prediction errors.
  • the prediction unit may determine the prediction mode of the first prediction to be used for encoding the current block according to which first prediction error has the minimum rate-distortion cost, and may determine the region of the second prediction to be used for encoding the current block according to which second prediction error has the minimum rate-distortion cost.
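The exhaustive mode-and-region search described in the bullets above can be sketched as follows. This is a schematic illustration only: `rd_cost` is a hypothetical stand-in for the true rate-distortion measurement the encoder would perform for each (first-prediction mode, second-prediction candidate region) pair.

```python
# Schematic exhaustive search over (prediction mode, candidate region) pairs.
# rd_cost is a stand-in: a real encoder would run the first prediction for
# mode m, the second prediction for region r, and measure the true
# rate-distortion cost of the remaining residual.

def select_mode_and_region(modes, regions, rd_cost):
    best = None
    for m in modes:
        for r in regions:
            cost = rd_cost(m, r)
            if best is None or cost < best[0]:
                best = (cost, m, r)
    # (minimum cost, chosen first-prediction mode, second-prediction region)
    return best

# Toy cost table standing in for measured rate-distortion costs.
costs = {(0, "none"): 30, (0, "left"): 25, (1, "none"): 28, (1, "left"): 21}
best = select_mode_and_region([0, 1], ["none", "left"],
                              lambda m, r: costs[(m, r)])
# best == (21, 1, "left")
```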
  • the information related to the prediction described above with reference to FIG. 11 may include information about the selective intra-picture prediction described above.
  • the information related to the prediction may include SSIP_flag, which is a flag indicating whether to use the selective secondary intra-prediction (SSIP).
  • the information related to the prediction may include information representing ⁇ described above and information representing ⁇ described above.
  • the information related to the prediction may include information indicating the first reference sample to be used for the second prediction and information indicating the number of reference samples to be used for the second prediction.
  • the information related to the prediction may include information indicating the first reference sample to be used for the second prediction and information indicating the last reference sample to be used for the second prediction.
  • the information related to the prediction may include information indicating a position at which the second prediction starts and to which position the second prediction proceeds.
  • the information related to the prediction may include information indicating a position where the second prediction starts and a position where the second prediction ends.
  • the information indicating ⁇ or the information indicating the position where the second prediction starts may be SSIP_start_point, and the information indicating ⁇ or the information indicating to which position the second prediction proceeds may be SSIP_length.
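As an illustration of the signalling just described, the SSIP-related syntax elements could be collected as in the sketch below; the element names follow the text, but the bit widths and conditional ordering are assumptions of this example, not the actual bitstream syntax.

```python
# Illustrative packing of the SSIP-related syntax elements mentioned above.
# Bit widths are assumptions for the sketch, not the real bitstream layout.

def pack_ssip(ssip_flag, start_point=0, length=0, index_bits=5):
    """Return the syntax elements as (name, value, bits) tuples.
    start_point/length are only signalled when the flag is set."""
    fields = [("SSIP_flag", int(ssip_flag), 1)]
    if ssip_flag:
        fields.append(("SSIP_start_point", start_point, index_bits))
        fields.append(("SSIP_length", length, index_bits))
    return fields

on = pack_ssip(True, start_point=3, length=4)
off = pack_ssip(False)
# "off" carries only the 1-bit flag; "on" adds the start point and length.
```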
  • the prediction error of the current block can be further reduced compared to performing only the existing first prediction.
  • the second prediction may effectively reduce the prediction error with respect to the tendency of the distortion of the prediction error to remain in the direction of the prediction mode described above with reference to FIG. 9. In other words, by effectively using the tendency of the distortion of the prediction error to remain in the direction of the prediction mode through the second prediction, the performance of the prediction may be improved and the prediction error may be reduced.
  • FIG. 13 is a flowchart of a method of generating a residual block, according to an exemplary embodiment.
  • Step 1110 described above with reference to FIG. 11 may include steps 1310, 1320, 1330, 1340, 1350, 1360, 1370, and 1380.
  • the block that is the target of the first prediction may be the current block.
  • the first prediction may be intra prediction.
  • the prediction unit may generate a first prediction signal by performing a first prediction on the current block.
  • the first prediction signal may represent the first prediction block.
  • the prediction unit may perform the first prediction for each prediction mode of the plurality of prediction modes to find the prediction mode of the first prediction best for the encoding of the current block.
  • steps 1310, 1320, and 1325 may be performed for each prediction mode of the plurality of prediction modes.
  • the plurality of prediction modes may correspond to different prediction directions.
  • the prediction unit may generate a first prediction error based on the current block and the first prediction signal.
  • the first prediction error may represent a first prediction error block or a first residual block.
  • the first residual block may represent a residual of the current block and the first prediction block.
  • the prediction unit may generate the first prediction error by subtracting the first prediction block indicated by the first prediction signal from the current block.
  • the prediction unit may calculate the rate-distortion cost of the first prediction using the first prediction error.
  • the prediction unit may store the calculated first prediction error and / or the rate-distortion cost of the first prediction in the memory 1030, the storage 1040, or a buffer.
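Steps 1310 and 1320 amount to subtracting the first prediction block from the current block and measuring a cost; a minimal sketch, in which `sad` stands in for the rate-distortion cost and the prediction block is a hypothetical toy value:

```python
# Sketch of steps 1310-1320: generate the first prediction error (first
# residual block) and a stand-in cost. first_pred is a hypothetical value;
# a real encoder would run a directional intra predictor here.

def subtract_blocks(block, pred):
    """Element-wise residual: block - pred (both are 2-D lists)."""
    return [[b - p for b, p in zip(brow, prow)]
            for brow, prow in zip(block, pred)]

def sad(residual):
    """Sum of absolute differences, used here as a stand-in for the
    rate-distortion cost of a prediction."""
    return sum(abs(v) for row in residual for v in row)

current_block = [[10, 12], [14, 16]]
first_pred    = [[ 9, 12], [15, 15]]   # hypothetical first prediction block

e1 = subtract_blocks(current_block, first_pred)   # first prediction error
cost = sad(e1)
# e1 == [[1, 0], [-1, 1]] and cost == 3
```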
  • the prediction unit may calculate rate-distortion costs of the plurality of prediction modes using the first prediction errors of the plurality of prediction modes.
  • the prediction unit may store the minimum rate-distortion cost of the calculated rate-distortion costs as the minimum first rate-distortion cost RDcost_1.
  • RDcost_1 may be the rate-distortion cost of the mode with the minimum rate-distortion cost in the first intra picture prediction.
  • the prediction unit may store a first prediction error and RDcost_1 corresponding to RDcost_1.
  • the second prediction can optionally be performed based on the predefined conditions.
  • the prediction unit may determine whether to perform a second prediction.
  • the determination of step 1330 may not ultimately determine that the current block is encoded using the second prediction.
  • the prediction unit may determine whether a second prediction is possible with respect to the current block or the first residual block.
  • the current block may be encoded without using the second prediction.
  • the second prediction signal and the second prediction error may be generated by the second prediction, but whether to use the second prediction in encoding of the current block may be determined through comparison of the rate-distortion costs in step 1360.
  • the prediction unit may determine whether to perform the second prediction after performing the first prediction for each prediction mode of the plurality of prediction modes, in order to find the prediction mode of the first prediction best for the encoding of the current block. In other words, step 1330 may be performed for each prediction mode of the plurality of prediction modes.
  • the residual block of the current block may be a block indicated by the first prediction error.
  • the first prediction error may be provided as a residual block.
  • the residual block of the current block may be a block indicated by the second prediction error signal to be described later.
  • the second prediction error signal may be provided as a residual block.
  • the prediction unit may determine whether to perform the second prediction based on the predefined condition.
  • the prediction unit may determine whether to perform the second prediction based on the prediction mode of the first prediction.
  • the prediction unit may not perform the second prediction.
  • the prediction unit may perform the second prediction when the prediction mode of the first prediction is the directional mode.
  • when the prediction mode of the first prediction is a non-directional mode among the plurality of prediction modes, prediction on a selective region or range may not be possible because the prediction mode has no direction. Therefore, the second prediction may be performed only when the prediction mode of the first prediction is the directional mode.
  • the prediction unit may determine whether to perform the second prediction based on the position of the current block.
  • the prediction unit may not perform the second prediction when the current block is adjacent to the boundary.
  • the prediction unit may not perform the second prediction when the current block is adjacent to the upper boundary and the left boundary.
  • the boundary may be a boundary of a picture, a boundary of a slice, or a boundary of a tile.
  • the first residual block for the second prediction may not be sufficient. If the first residual block is insufficient, a second reference sample for performing the second prediction cannot be generated properly.
  • the prediction unit may determine whether to perform the second prediction based on the number of reconstructed blocks adjacent to the current block.
  • the prediction unit may not perform the second prediction.
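The pre-conditions listed above (directional prediction mode, distance from the picture/slice/tile boundary, availability of reconstructed neighbouring blocks) can be gathered into one predicate; in this sketch the minimum neighbour count is an assumption of the illustration, not a value fixed by the text:

```python
# Sketch of the "may the second prediction be performed?" test assembled from
# the conditions above. The neighbour threshold (2) is an assumption of this
# illustration.

def second_prediction_allowed(mode_is_directional, block_x, block_y,
                              reconstructed_neighbours, min_neighbours=2):
    if not mode_is_directional:          # non-directional mode: no direction
        return False                     # along which to select a range
    if block_x == 0 or block_y == 0:     # adjacent to the upper/left boundary
        return False                     # of the picture, slice, or tile
    if reconstructed_neighbours < min_neighbours:
        return False                     # not enough reconstructed blocks
    return True

allowed = second_prediction_allowed(True, 16, 16, 4)     # True
blocked_mode = second_prediction_allowed(False, 16, 16, 4)   # non-directional
blocked_edge = second_prediction_allowed(True, 0, 16, 4)     # at the boundary
```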
  • step 1340 may be performed.
  • step 1370 or step 1120 may be performed.
  • the residual signal of step 1120 may be a first prediction error.
  • the first prediction error generated by the first prediction may be used as the residual signal of the current block.
  • the prediction unit may generate a second prediction signal by performing a second prediction on the first prediction error.
  • the second prediction signal may represent the second prediction block.
  • the second prediction may be intra prediction, and the prediction direction of the second prediction may be the same as the prediction direction of the first prediction.
  • the predictor may perform the second prediction on a portion of the first residual block. The partial area may be specified by the reference samples to be used in the second prediction.
  • the area of the reference samples of the second prediction may be part of the area of the reference samples of the first prediction.
  • the prediction unit may perform second prediction on each candidate region of the plurality of candidate regions to find an optimal second prediction region for encoding a current block.
  • the prediction unit may perform the second prediction for each candidate region of each prediction mode in order to find the prediction mode of the optimal first prediction and the region of the optimal second prediction for encoding the current block. In other words, step 1340 may be performed for each candidate region of each prediction mode.
  • the prediction mode of the optimal first prediction and the region of the optimal second prediction may be prediction modes and regions corresponding to candidate regions having a minimum rate-distortion cost among the plurality of candidate regions of the plurality of prediction modes.
  • the prediction unit may determine the prediction mode of the first prediction and the region of the second prediction to be used for encoding the current block based on which prediction mode and which region among the plurality of candidate regions of the plurality of prediction modes yields the minimum rate-distortion cost when the first prediction and the second prediction are performed.
  • the plurality of candidate regions may respectively correspond to ranges of reference samples to be used for the second prediction.
  • the prediction unit may select a range of reference samples to be used for the second prediction among the reference samples for the prediction direction.
  • the prediction unit may perform the second prediction by using the reference samples in the selected range among the entire reference samples for the prediction direction. For the second prediction, the value of the reference sample outside the selected range may be considered "0".
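The range-restricted second prediction described above can be sketched as follows; reference samples outside the selected range are treated as "0", and a simple horizontal copy stands in for the directional propagation of a real intra predictor:

```python
# Sketch of a second prediction restricted to a range of reference samples.
# Samples outside [start, start + length) are treated as 0, as described
# above. A purely horizontal copy stands in for directional propagation.

def ranged_second_prediction(ref_samples, start, length, width):
    masked = [v if start <= i < start + length else 0
              for i, v in enumerate(ref_samples)]
    # Each row of the prediction block copies its (masked) left reference.
    return [[masked[row]] * width for row in range(len(ref_samples))]

left_refs = [4, -2, 3, 1]            # residual-domain reference samples
pred2 = ranged_second_prediction(left_refs, start=1, length=2, width=4)
# rows 0 and 3 fall outside the selected range, so they predict 0:
# pred2 == [[0, 0, 0, 0], [-2, -2, -2, -2], [3, 3, 3, 3], [0, 0, 0, 0]]
```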
  • the prediction unit may generate a second prediction error based on the first prediction error and the second prediction signal.
  • the second prediction error may represent a second prediction error block or a second residual block.
  • the second residual block may represent a residual of the first residual block and the second prediction block.
  • the prediction unit may generate the second prediction error signal by subtracting the second prediction block indicated by the second prediction signal from the first residual block indicated by the first prediction error.
  • the prediction unit may calculate the rate-distortion cost of the second prediction using the second prediction error.
  • the prediction unit may store the calculated second prediction error and / or the rate-distortion cost of the second prediction in the memory 1030, the storage 1040, or a buffer.
  • the prediction unit may perform second prediction on each region of the plurality of regions to find an optimal second prediction region for encoding the current block.
  • the prediction unit may calculate rate-distortion costs of the plurality of regions using second prediction errors of the plurality of regions.
  • the prediction unit may store the minimum rate-distortion cost of the calculated rate-distortion costs as the minimum second rate-distortion cost RDcost_2.
  • RDcost_2 may be the rate-distortion cost of the region with the lowest rate-distortion cost in the selective secondary intra prediction.
  • the prediction unit may store a second prediction error and RDcost_2 corresponding to RDcost_2.
  • the second prediction can optionally be used based on the predefined conditions.
  • the prediction unit may determine whether to use the second prediction in encoding of the current block.
  • the prediction unit may determine whether to use the second prediction in encoding of the current block based on the predefined condition.
  • the prediction unit may determine to use the second prediction if the rate-distortion cost is further reduced by using the second prediction.
  • the prediction unit may determine not to use the second prediction if the rate-distortion cost is not further reduced even if the second prediction is used.
  • the prediction unit may determine to use the second prediction when the minimum rate-distortion cost when the second prediction is used is smaller than the minimum rate-distortion cost when the second prediction is not used.
  • the prediction unit may determine whether to use the second prediction in encoding of the current block based on the comparison between RDcost_1 and RDcost_2.
  • that RDcost_2 is smaller than RDcost_1 may indicate that the rate-distortion cost of encoding of the current block can be further reduced by using the second prediction than by using only the first prediction.
  • if RDcost_1 is smaller than or equal to RDcost_2, the prediction unit may determine not to use the second prediction for encoding the current block. If RDcost_1 is greater than RDcost_2, the prediction unit may determine to use the second prediction for encoding the current block.
  • step 1370 may be performed.
  • step 1380 may be performed.
  • the prediction unit may perform a setting indicating that the second prediction is not used.
  • the prediction unit may set the value of the second prediction usage information to indicate that the second prediction is not used. For example, if the value of the second prediction usage information is "0", the second prediction usage information may indicate that the second prediction is not used.
  • the residual signal of step 1120 may be a first prediction error.
  • the first prediction error generated based on the first prediction may be used as the residual signal.
  • the prediction unit may perform setting indicating to use the second prediction.
  • the prediction unit may set the value of the second prediction usage information to indicate that the second prediction is used. For example, if the value of the second prediction usage information is "1", the second prediction usage information may indicate that the second prediction is used.
  • the residual signal of step 1120 may be a second prediction error.
  • the second prediction error generated based on the first prediction and the second prediction may be used as the residual signal.
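Steps 1360 through 1380 reduce to comparing RDcost_1 with RDcost_2, setting the second prediction usage information, and choosing which prediction error becomes the residual signal; a minimal sketch:

```python
# Sketch of steps 1360-1380: compare the two minimum rate-distortion costs,
# set the second-prediction usage flag, and pick the residual signal that
# will actually be coded.

def choose_residual(rdcost_1, e1, rdcost_2, e2):
    """Return (second_prediction_usage_flag, residual_signal)."""
    if rdcost_1 > rdcost_2:      # second prediction reduces the cost
        return 1, e2             # usage flag "1": code the second error
    return 0, e1                 # usage flag "0": code the first error

flag, residual = choose_residual(rdcost_1=30, e1="e1",
                                 rdcost_2=24, e2="e2")
# flag == 1 and residual == "e2" because RDcost_2 is smaller
```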
  • FIG. 14 is a flowchart of a method of generating a restored block, according to an example.
  • Step 1160 described above with reference to FIG. 11 may include steps 1410, 1420, and 1430 below.
  • when the second prediction is used, step 1420 may be performed.
  • when the second prediction is not used, step 1430 may be performed. If the second prediction is not used, the reconstructed residual signal may be the reconstructed first prediction error of the current block.
  • the reconstructed block generator may generate a reconstructed first prediction error based on the reconstructed residual signal and the second prediction signal.
  • the reconstructed block generator may generate a reconstructed first prediction error of the current block based on the residual signal of the current block and the second prediction signal for a portion of the current block.
  • the reconstructed first prediction error may be the sum of the reconstructed residual signal and the second prediction signal.
  • the reconstructed first prediction error may be generated based on the reconstructed residual signal and the second prediction signal.
  • the reconstructed block generator may generate a reconstructed block based on the reconstructed first prediction error and the first prediction signal.
  • the reconstructed block generator may generate a signal of the reconstructed block based on the reconstructed first prediction error of the current block and the first prediction signal of the current block.
  • the signal representing the reconstructed block may be the sum of the reconstructed residual signal, the second prediction signal, and the first prediction signal.
  • a signal representing the reconstructed block may be generated based on the reconstructed residual signal, the second prediction signal, and the first prediction signal.
  • the signal representing the reconstructed block may be the sum of the reconstructed residual signal and the first prediction signal.
  • a signal representing the reconstructed block may be generated based on the reconstructed residual signal and the first prediction signal.
  • the signal of the reconstructed block may be the sum of the reconstructed first prediction error and the first prediction signal.
  • the reconstructed first prediction error may be generated based on the reconstructed residual signal and the second prediction signal.
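Steps 1410 through 1430 can be sketched as follows: when the second prediction was used, the reconstructed residual is first added to the second prediction signal to recover the first prediction error, and the first prediction signal is then added to obtain the reconstructed block.

```python
# Sketch of step 1160 (steps 1410-1430): the reconstructed block is the sum
# of the reconstructed residual, the second prediction signal (skipped when
# the second prediction was not used), and the first prediction signal.

def add_blocks(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def reconstruct(rec_residual, pred2, pred1, second_used):
    if second_used:
        rec_e1 = add_blocks(rec_residual, pred2)  # reconstructed first error
    else:
        rec_e1 = rec_residual                     # residual already is e1
    return add_blocks(rec_e1, pred1)

rec = reconstruct([[1, 0], [-1, 1]],   # reconstructed residual signal
                  [[0, 1], [1, 0]],    # second prediction signal
                  [[9, 12], [15, 15]], # first prediction signal
                  second_used=True)
# rec == [[10, 13], [15, 16]]
```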
  • the plus and minus symbols together may represent generating an error signal from two signals.
  • an error signal may be generated by subtracting the input signal entering with a minus sign from the input signal entering with a plus sign.
  • the plus symbol alone may represent generating a signal that combines two signals.
  • a signal that is the sum of the two signals may be generated by adding the input signals that each enter with a plus sign.
  • the prediction unit described above with reference to FIG. 11 may include a first predictor and a second predictor.
  • the process within the dashed line may represent the SSIP.
  • a rhombus may represent a conditional branch.
  • A may represent a target to which a value is assigned.
  • B may represent the value to be assigned to the target.
  • TQ may represent transform and quantization.
  • (TQ)⁻¹ may represent inverse transformation and inverse quantization.
  • e 1 may represent the signal of the first prediction error, and its reconstructed counterpart may represent the signal of the reconstructed first prediction error.
  • e 2 may represent the signal of the second prediction error, and its reconstructed counterpart may represent the signal of the reconstructed second prediction error. Although not shown in FIG. 15, when the second prediction is used, the reconstructed residual signal may be the same as the reconstructed second prediction error signal; if no second prediction is used, the reconstructed residual signal may be the same as the reconstructed first prediction error signal.
  • e may represent the residual signal, and its reconstructed counterpart may represent the reconstructed residual signal.
  • FIG. 16 is a structural diagram of a decoding apparatus according to an embodiment.
  • the decoding apparatus 1600 may be a general-purpose computer system for performing decoding.
  • the decoding apparatus 1600 may include at least one processor 1610, a memory 1630, a user interface (UI) input device 1650, a UI output device 1660, and storage 1640, which communicate with one another via a bus 1690.
  • the decoding apparatus 1600 may further include a communication unit 1620 connected to the network 1699.
  • the processor 1610 may be a central processing unit (CPU) or a semiconductor device that executes processing instructions stored in the memory 1630 or the storage 1640.
  • Memory 1630 and storage 1640 may be various forms of volatile or nonvolatile storage media.
  • the memory may include at least one of a ROM 1631 and a RAM 1632.
  • the processor 1610 may include the entropy decoder 210, the inverse quantizer 220, the inverse transformer 230, the intra predictor 240, the inter predictor 250, the adder 255, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 200.
  • the recording medium may store at least one module for the operation of the decoding apparatus 1600.
  • the memory 1630 may store at least one module and may be configured to be executed by the at least one processor 1610.
  • at least some of the entropy decoder 210, the inverse quantizer 220, the inverse transformer 230, the intra predictor 240, the inter predictor 250, the adder 255, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 1600 may be program modules, and may communicate with an external device or system.
  • the program modules may be included in the decoding apparatus 1600 in the form of an operating system, an application program module, and other program modules.
  • the program modules may be physically stored on various known storage devices.
  • at least some of these program modules may be stored in a remote storage device that can communicate with the decoding apparatus 1600.
  • according to an embodiment, the program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures that perform functions or operations or implement abstract data types.
  • the program modules may be composed of instructions or code performed by the processor 1610.
  • Functions related to communication of data or information of the decoding apparatus 1600 may be performed through the communication unit 1620.
  • FIG. 17 is a flowchart of a decoding method according to an embodiment.
  • the decoding method of the embodiment may be performed by the decoding apparatus 200 or the decoding apparatus 1600.
  • the communication unit 1620 may receive a bitstream from the encoding apparatus 100 or the encoding apparatus 1100.
  • the bitstream may include information about the encoded residual signal, and may include information related to prediction.
  • the information about the encoded residual signal may include entropy coded quantized transform coefficients.
  • Information related to the prediction may be entropy decoded.
  • the entropy decoder 210 may generate quantized transform coefficients by performing entropy decoding on the bitstream.
  • the entropy decoding unit 210 may generate information related to prediction by performing entropy decoding on the bitstream.
  • the information related to the prediction may include prediction scheme information indicating how the current block is encoded.
  • the prediction scheme information may indicate whether the current block is encoded by intra prediction or inter prediction.
  • the prediction scheme information may indicate whether the current block is encoded by intra prediction.
  • the prediction scheme information may indicate whether the current block is encoded by inter prediction.
  • the information related to the prediction may include intra prediction mode information indicating a prediction mode of the intra prediction.
  • the information related to the prediction may include second prediction usage information indicating whether the second prediction is used in encoding the current block.
  • the information related to the prediction may include current block type information indicating the type of the current block, first prediction block type information indicating the type of the first prediction block, second prediction block type information indicating the type of the second prediction block, first reference block type information indicating the type of the first reference block, second reference block type information indicating the type of the second reference block, first reference sample type information indicating the type of the first reference sample, and second reference sample type information indicating the type of the second reference sample.
  • the information related to the prediction may include first prediction region information indicating the region of the first prediction and / or second prediction region information indicating the region of the second prediction.
  • the information related to the prediction may include information used for the prediction described in the embodiments.
  • Information related to the prediction may be included in the bitstream according to the order described in the embodiment or the order of generation according to the embodiment.
  • the reconstructed residual signal generator may generate a reconstructed residual signal for the current block based on the quantized transform coefficients.
  • the reconstructed residual signal generator may include an inverse quantizer 220 and an inverse transformer 230.
  • Step 1730 may include steps 1731 and 1732.
  • the inverse quantization unit 220 may generate inverse quantized transform coefficients by performing inverse quantization on the quantized transform coefficients.
  • the inverse transformer 230 may generate the reconstructed residual signal by performing an inverse transform on the inverse quantized transform coefficients.
  • a reconstructed residual signal for the current block may be generated.
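Steps 1731 and 1732 can be sketched as below; uniform scaling stands in for inverse quantization and the identity stands in for the inverse transform, which in an actual decoder would be an inverse DCT/DST:

```python
# Sketch of steps 1731-1732: inverse quantization followed by inverse
# transform. Uniform scaling and the identity are stand-ins for the real
# dequantizer and inverse DCT/DST of a video decoder.

def dequantize(coeffs, qstep):
    return [[c * qstep for c in row] for row in coeffs]

def inverse_transform(coeffs):
    return coeffs  # stand-in: a real decoder applies an inverse DCT/DST

quantized = [[2, 0], [-1, 0]]          # entropy-decoded quantized coefficients
rec_residual = inverse_transform(dequantize(quantized, qstep=4))
# rec_residual == [[8, 0], [-4, 0]]
```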
  • the reconstructed block generator may generate a reconstructed block based on the reconstructed residual signal, the second prediction, and the first prediction.
  • the reconstructed block generator may include an adder 255, an intra predictor 240, an inter predictor 250, a filter 260, and a reference picture buffer 270.
  • the recovered residual signal may be a recovered residual block.
  • the reconstructed block may be the sum of the reconstructed residual block and the prediction block.
  • the reconstructed block may be generated based on the sum of the reconstructed residual block and the prediction block.
  • the prediction block may be the sum of the first prediction block and the second prediction block.
  • the prediction signal may be the sum of the first prediction signal and the second prediction signal.
  • the reconstructed block may be the sum of the reconstructed residual block, the first prediction block, and the second prediction block.
  • the reconstructed signal may be the sum of the reconstructed residual signal, the first prediction signal, and the second prediction signal.
  • the reconstructed block may correspond to the sum of the reconstructed residual block, the first prediction block, and the second prediction block.
  • the first prediction block may be a block generated by the first prediction or a block indicated by the prediction signal generated by the first prediction.
  • the second prediction block may be a block generated by the second prediction or a block indicated by the prediction signal generated by the second prediction.
  • the second prediction signal may be a null signal and the second prediction block may be a null block.
  • the null block may be a block in which the values of all pixels of the block are "0".
  • the reconstructed residual signal when the second prediction is used for encoding of the current block, the reconstructed residual signal may be the reconstructed second prediction error described above with reference to FIG. 14.
  • the reconstructed residual signal when the second prediction is used for encoding of the current block, the reconstructed residual signal may be a signal in which the second prediction error described above with reference to FIG. 11 is reconstructed by inverse quantization and inverse transformation.
  • the reconstructed residual signal in which the second prediction is not used for the encoding of the current block may be the reconstructed first prediction error described above with reference to FIG. 14.
  • the reconstructed residual signal may be a signal in which the first prediction error described above with reference to FIG. 11 is reconstructed by inverse quantization and inverse transformation.
  • the reconstructed block generator may generate a reconstructed block by performing inter prediction or intra prediction.
  • the first prediction and the second prediction may be different kinds of predictions.
  • the first prediction and the second prediction may be the same kind of predictions.
  • each of the first prediction and the second prediction may be intra prediction. If the first prediction is intra prediction, the second prediction may also be set to intra prediction.
  • the prediction direction of the first prediction and the prediction direction of the second prediction may be the same.
  • the prediction direction of the second prediction may be set to be the same as the prediction direction of the first prediction.
  • each of the first prediction and the second prediction may be inter prediction.
  • a block that is the target of the first prediction and a block that is the target of the second prediction may be different from each other.
  • the "block to be predicted” may mean to which block the "prediction” generates a prediction block.
  • the type of the block targeted for the first prediction and the type of the block targeted for the second prediction may be different from each other.
  • the type of block may be an original block, a luma block, a chroma block, a depth block, a residual block, or the like.
  • the block targeted for the first prediction may be a reconstructed block.
  • the block that is the target of the second prediction may be a prediction error block.
  • Reference blocks may be used for prediction.
  • the first reference block of the first prediction and the second reference block of the second prediction may be different from each other.
  • the reference blocks may differ from each other in terms of type and / or location.
  • the position of the first reference block of the first prediction and the position of the second reference block of the second prediction may be different from each other.
  • the position of the first reference block may be a position relative to the block that is the target of the first prediction.
  • the position of the second reference block may be a position relative to the block that is the target of the second prediction.
  • the first reference block of the first prediction may be plural.
  • the second reference block of the second prediction may be plural. At least some of the plurality of first reference blocks of the first prediction and the plurality of second reference blocks of the second prediction may be different from each other.
  • the position of at least one block among the plurality of first reference blocks of the first prediction and the plurality of second reference blocks of the second prediction may be included in only one of the positions of the plurality of first reference blocks and the positions of the plurality of second reference blocks.
  • the type of the first reference block of the first prediction and the type of the second reference block of the second prediction may be different from each other.
  • the type of block may be a reconstructed block, a reconstructed luma block, a reconstructed chroma block, a reconstructed depth block, a reconstructed first residual block, a reconstructed second residual block, or the like.
  • the first reference block of the first prediction may be a reconstructed block.
  • the second reference block of the second prediction may be a reconstructed first residual block.
  • the first reference sample of the first prediction may be a pixel of the reconstructed block.
  • the second reference sample of the second prediction may be a pixel of the reconstructed first residual block.
  • the first reference block of the first prediction may be a reconstructed neighboring block neighboring the current block.
  • the second reference block of the second prediction may be a reconstructed first residual block neighboring the current block.
  • the reconstructed neighbor residual block neighboring the current block may be obtained by adding the reconstructed residual block of the reconstructed neighboring block to the second prediction block of the reconstructed neighboring block.
  • the reconstructed neighbor residual block neighboring the current block may be a difference between the reconstructed neighboring block and the first prediction block of the reconstructed neighboring block.
  • a reconstructed block around the current block may be used as a reference block for prediction of the current block.
  • when the target of the second prediction is the first residual block, a reconstructed first residual block around the current block or the first residual block may be used as the second reference block for prediction on the first residual block.
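The two constructions of a reconstructed neighbor residual block given above (adding the reconstructed residual block to the second prediction block, or subtracting the first prediction block from the reconstructed neighboring block) can be sketched as follows. This is a non-normative illustration; every name and value in it is hypothetical:

```python
# Sketch of the two equivalent ways to obtain a reconstructed neighbor
# residual block (all array names and values are illustrative only):
#   (1) reconstructed residual block + second prediction block of the neighbor
#   (2) reconstructed neighboring block - first prediction block of the neighbor

def add_blocks(a, b):
    # element-wise sum of two equally sized 2-D blocks
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def sub_blocks(a, b):
    # element-wise difference of two equally sized 2-D blocks
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Hypothetical 2x2 example values for a neighboring block.
first_pred  = [[10, 12], [11, 13]]  # first prediction block of the neighbor
second_pred = [[1, 0], [0, 1]]      # second prediction block of the neighbor
recon_resid = [[2, 3], [4, 5]]      # reconstructed residual block

# (1) residual path
neighbor_resid_1 = add_blocks(recon_resid, second_pred)

# (2) difference path: reconstruct the neighbor first, then subtract
recon_neighbor = add_blocks(first_pred, neighbor_resid_1)
neighbor_resid_2 = sub_blocks(recon_neighbor, first_pred)

# Both constructions yield the same reconstructed neighbor residual block.
assert neighbor_resid_1 == neighbor_resid_2
```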
  • the region of the first prediction and the region of the second prediction may be different from each other.
  • the "prediction area” may indicate an area in which a prediction value is generated among blocks that are to be predicted.
  • the “area of prediction” may indicate an area to which a prediction value generated by prediction is allocated among prediction blocks corresponding to a block that is a prediction target.
  • the first prediction may be a prediction for an area of the current block.
  • the second prediction may be a prediction for a portion of the above area. At least some of the area of the current block may be excluded from the second prediction.
  • the block that is the target of the first prediction may be a block within a rectangular area.
  • the second prediction may be made for part of the above rectangular area. At least some of the rectangular area may not be the target of the second prediction.
  • the area of the first prediction may be the entire block that is the target of the first prediction.
  • the area of the second prediction may be part of a block that is the target of the second prediction.
  • the second prediction may be excluded for at least a portion of the block that is the target of the second prediction.
  • the first prediction may be performed on the entire block that is the target of the first prediction.
  • the second prediction may be performed on the selected portion of the block that is the target of the second prediction.
  • prediction values determined by the first prediction may be assigned only to pixels in the region of the first prediction among blocks that are the targets of the first prediction.
  • the prediction value according to the first prediction may not be allocated to pixels outside the region of the first prediction among the blocks that are the targets of the first prediction.
  • a predefined value may be assigned to pixels outside the region of the first prediction among blocks that are the target of the first prediction. For example, the predefined value may be "0".
  • prediction values determined by the second prediction may be assigned only to pixels in the region of the second prediction among blocks that are the target of the second prediction. Pixels outside the region of the second prediction among the blocks that are the target of the second prediction may not be assigned a prediction value by the second prediction.
  • a predefined value may be assigned to pixels outside the region of the second prediction among blocks that are the target of the second prediction. For example, the predefined value may be "0".
  • when the size of the block that is the target of the first prediction is N×N, each of the N×N pixels may be assigned a prediction value determined by the first prediction.
  • the prediction value determined by the first prediction may be assigned to all pixels of the block that are the targets of the first prediction.
  • when the size of the block that is the target of the second prediction is N×N, one or more pixels among the N×N pixels may not be assigned a prediction value determined by the second prediction.
  • when the size of the block that is the target of the second prediction is N×N, one or more pixels among the N×N pixels may be assigned a predefined value.
  • the predefined value may be "0".
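The assignment rules above (first prediction values for every pixel; second prediction values only inside its region, with the predefined value "0" elsewhere) can be sketched as follows. The 4×4 block size and the 2×2 region are arbitrary choices for illustration:

```python
# Sketch of region-restricted assignment (block size, region, and values
# are hypothetical; the embodiments leave them open).
N = 4

# First prediction: every pixel of the N x N target block gets a value.
first_pred_block = [[7] * N for _ in range(N)]

# Second prediction: start from the predefined value "0" everywhere ...
second_pred_block = [[0] * N for _ in range(N)]

# ... and assign second-prediction values only inside its region
# (here, a hypothetical top-left 2x2 corner).
region = {(y, x) for y in range(2) for x in range(2)}
for y in range(N):
    for x in range(N):
        if (y, x) in region:
            second_pred_block[y][x] = 3  # value determined by the 2nd prediction

# Pixels outside the region keep the predefined value "0".
assert second_pred_block[0][0] == 3
assert second_pred_block[3][3] == 0
```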
  • the region of the first prediction may be determined based on the type of the first prediction. For example, the region of the first prediction may be determined based on whether the first prediction is inter prediction. Alternatively, the region of the first prediction may be determined based on whether the first prediction is intra prediction. Alternatively, the region of the first prediction may be determined based on the prediction direction of the first prediction.
  • the region of the second prediction may be determined based on the type of the second prediction. For example, the region of the second prediction may be determined based on whether the second prediction is inter prediction. Alternatively, the region of the second prediction may be determined based on whether the second prediction is intra prediction. Alternatively, the region of the second prediction may be determined based on the prediction direction of the second prediction.
  • the second prediction may be performed using a second reference sample within an optional range.
  • the region of the first reference sample used for the first prediction and the region of the second reference sample used for the second prediction may be different from each other.
  • the region of the first prediction and the region of the second prediction may be different from each other.
  • the area of the first reference sample may be an area of reference samples required for generating prediction values for the area of the first prediction through the first prediction.
  • the area of the second reference sample may be an area of reference samples required for generating prediction values for the area of the second prediction through the second prediction.
  • the region of the first prediction may be the entirety of the block that is the target of the first prediction.
  • the area of the second prediction may be part of a block that is the target of the second prediction.
  • the region of the second reference sample may be part of the region of the first reference sample.
  • FIG. 18 is a flowchart of a method of generating a restored block, according to an exemplary embodiment.
  • Step 1740 described above with reference to FIG. 17 may include the following steps 1810, 1820, 1830, 1840, and 1850.
  • the second prediction can optionally be performed based on the predefined conditions.
  • the reconstructed block generator may determine whether to use the second prediction for generation of the reconstructed block.
  • the reconstructed block generator may determine whether to use the second prediction for the generation of the reconstructed block based on a predefined condition.
  • the reconstructed block generator may determine whether to use the second prediction based on the prediction mode of the first prediction.
  • the reconstructed block generator may obtain intra prediction mode information indicating a prediction mode of intra prediction from the bitstream.
  • the reconstructed block generator may not use the second prediction if the prediction mode of the first prediction is a non-directional mode.
  • the reconstructed block generator may determine whether to use the second prediction based on the position of the current block.
  • the reconstructed block generator may not use the second prediction when the current block is adjacent to the boundary.
  • the prediction unit may not use the second prediction when the current block is adjacent to the upper boundary and the left boundary.
  • the boundary may be a boundary of a picture, a boundary of a slice, or a boundary of a tile.
  • the reconstructed block generator may determine whether to use the second prediction based on the number of reconstructed blocks adjacent to the current block.
  • the recovered block may be a PU.
  • the reconstructed block generator may not use the second prediction when the number of reconstructed blocks adjacent to the current block is two or less.
  • after it is determined whether to use the second prediction based on the prediction mode, the position of the current block, and/or the number of adjacent reconstructed blocks, the reconstructed block generator may further determine whether to use the second prediction based on whether the second prediction was used in the encoding of the current block.
  • the reconstructed block generator may obtain second prediction usage information from the bitstream.
  • the reconstructed block generator may use the second prediction when the second prediction usage information indicates that the second prediction is used.
  • the reconstructed block generator may not use the second prediction when the second prediction usage information indicates that the second prediction is not used.
  • when it is determined that the second prediction is used, step 1820 may be performed.
  • when it is determined that the second prediction is not used, step 1840 may be performed.
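The decision chain described above can be sketched as a single predicate. The condition names and the exact way they combine are assumptions; the embodiments describe each check separately and leave their combination open:

```python
# Sketch of deciding whether the second prediction is used (hypothetical
# combination of the conditions listed in the embodiments above).
def use_second_prediction(prediction_mode, at_boundary,
                          num_adjacent_reconstructed, usage_flag):
    if prediction_mode == "non-directional":   # e.g. a DC-like mode
        return False
    if at_boundary:                            # picture/slice/tile boundary
        return False
    if num_adjacent_reconstructed <= 2:        # too few reconstructed neighbors
        return False
    return usage_flag                          # signalled in the bitstream

assert use_second_prediction("angular", False, 3, True) is True
assert use_second_prediction("non-directional", False, 3, True) is False
assert use_second_prediction("angular", True, 3, True) is False
assert use_second_prediction("angular", False, 2, True) is False
assert use_second_prediction("angular", False, 3, False) is False
```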
  • the reconstructed first prediction error of the current block may be a reconstructed residual signal.
  • the reconstructed block generator may generate a second prediction signal by performing a second prediction.
  • the second prediction may correspond to the second prediction in encoding of the current block described above with reference to FIGS. 11 and 13.
  • the second prediction signal may correspond to the second prediction signal in the encoding of the current block described above with reference to FIGS. 11 and 13.
  • the second prediction signal may represent the second prediction block.
  • the second prediction may be intra prediction, and the prediction direction of the second prediction may be the same as the prediction direction of the first prediction.
  • the reference block of the second prediction may be a reconstructed residual block.
  • the reconstructed block generator may generate a second prediction signal by performing a second prediction using a reconstructed residual block of a block neighboring the current block as a reference block.
  • the region of the second prediction may be part of the region of the current block.
  • the reconstructed block generator may obtain second prediction region information indicating a prediction region of the second prediction from the bitstream.
  • the reconstructed block generator may perform second prediction on the second prediction region.
  • the region to which the second prediction is to be applied may be specified based on the range of reference samples used for the second prediction.
  • the region to which the second prediction is to be applied may be specified based on the prediction direction of the second prediction.
  • the region to which the second prediction is to be applied may be specified by the range of reference samples used for the second prediction and the prediction direction of the second prediction.
  • the reconstructed block generator may perform second prediction using reference samples in a selected range of the entire reference samples for the prediction direction. For the second prediction, the value of the reference sample outside the selected range may be considered "0".
  • the reconstructed block generator may obtain information α indicating the first reference sample to be used for the second prediction and information β indicating the number of reference samples to be used for the second prediction from the bitstream.
  • the range of reference samples used for the second prediction may be specified by α and β.
  • the range of reference samples may be specified by information α indicating the first reference sample to be used for the second prediction and information β indicating the number of reference samples to be used for the second prediction.
  • the region of the second prediction may be a region generated as the reference samples in the range between α and α + β advance in the prediction direction.
  • the value of a reference sample outside of the selected range may be considered null or "0".
  • the value of a reference sample before α may be considered zero, and the value of a reference sample at or after α + β may be considered zero.
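A minimal sketch of restricting the second prediction to the signalled reference-sample range, with samples outside the range treated as "0". The names `alpha` and `beta` stand for the two signalled values described above, and the sample values are hypothetical:

```python
# Sketch: keep only reference samples with index in [alpha, alpha + beta);
# samples outside the selected range are treated as "0" per the embodiments.
def select_reference_samples(samples, alpha, beta):
    return [v if alpha <= i < alpha + beta else 0
            for i, v in enumerate(samples)]

# Hypothetical row of reconstructed reference samples.
refs = [5, 6, 7, 8, 9, 10]

# alpha = 2 (first sample used), beta = 3 (number of samples used):
selected = select_reference_samples(refs, 2, 3)
assert selected == [0, 0, 7, 8, 9, 0]
```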
  • the reconstructed block generator may generate a reconstructed first prediction error based on the reconstructed residual signal and the second prediction signal.
  • the reconstructed block generator may generate the reconstructed first prediction error by adding the second prediction signal to the reconstructed residual signal.
  • the reconstructed block generator may generate a reconstructed first prediction error of the current block based on the reconstructed residual signal and the second prediction signal for the partial region.
  • the reconstructed first prediction error may be the sum of the reconstructed residual signal and the second prediction signal. Alternatively, the reconstructed first prediction error may be generated based on the reconstructed residual signal and the second prediction signal. Alternatively, the reconstructed first prediction error may be generated based on the sum of the reconstructed residual signal and the second prediction signal.
  • the reconstructed block generator may generate a first prediction signal by performing a first prediction on the current block.
  • the first prediction signal may represent the first prediction block.
  • reconstructed blocks around the current block can be used as reference blocks. Also, pixels of reconstructed blocks around the current block can be used as reference samples.
  • the reconstructed block generator may generate a signal of the reconstructed block based on the reconstructed first prediction error and the first prediction signal.
  • the reconstructed block generator may generate a signal of the reconstructed block by adding the first prediction signal to the reconstructed first prediction error.
  • when the second prediction is used, the reconstructed block may be the sum of the reconstructed residual signal, the second prediction signal, and the first prediction signal.
  • when the second prediction is used, a signal representing the reconstructed block may be generated based on the reconstructed residual signal, the second prediction signal, and the first prediction signal.
  • when the second prediction is not used, the signal representing the reconstructed block may be the sum of the reconstructed residual signal and the first prediction signal. Alternatively, when the second prediction is not used, a signal representing the reconstructed block may be generated based on the sum of the reconstructed residual signal and the first prediction signal.
  • the reconstructed block and the reconstructed first residual signal may be used as reference blocks for decoding other blocks.
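The two reconstruction paths above can be sketched per pixel. Clipping, weighting, and other codec details are omitted, and the numeric values are hypothetical:

```python
# Sketch of the two reconstruction paths (scalar pixels for brevity):
#   with the second prediction:    residual + 2nd prediction + 1st prediction
#   without the second prediction: residual + 1st prediction
def reconstruct_pixel(recon_residual, first_pred, second_pred=None):
    if second_pred is None:          # second prediction not used
        return recon_residual + first_pred
    return recon_residual + second_pred + first_pred

assert reconstruct_pixel(4, 100) == 104        # without second prediction
assert reconstruct_pixel(4, 100, 2) == 106     # with second prediction
```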
  • FIG. 19 illustrates a decoding process according to an embodiment.
  • the "+" symbol may represent generating a signal that combines two signals.
  • a signal that is the sum of the two signals may be generated by adding the input signals that are input to the "+" symbol.
  • the reconstructed block generator described above with reference to FIG. 19 may include a first predictor and a second predictor.
  • the process within the dashed line may represent the SSIP.
  • a rhombus may represent a conditional branch.
  • TQ may represent transformation and quantization.
  • (TQ)⁻¹ may represent inverse transformation and inverse quantization.
  • the selected part to which the first prediction is applied and the remaining part to which the second prediction is applied may be separated.
  • when the second prediction is used, the first prediction may be applied to the entirety of the current block.
  • the reconstructed block for the selected portion to which the second prediction is applied may be generated based on the sum of the reconstructed residual block, the second prediction signal generated by the second prediction, and the first prediction signal generated by the first prediction.
  • a reconstructed block for the remaining part to which the second prediction is not applied may be generated based on the sum of the reconstructed residual block and the first prediction signal generated by the first prediction.
  • the second prediction may be a prediction for the residual.
  • the second prediction may be a prediction for a first residual signal that is a difference between the current block and the first prediction signal.
  • the reference block of the second prediction may be a reconstructed residual block, and the reference pixel of the second prediction may be a pixel of the reconstructed residual block.
  • a second residual signal, which is a difference between the first residual signal and the second prediction signal, may be generated through the second prediction on the first residual signal, and encoded information of the current block may be generated using the second residual signal.
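The encoder-side cascade (first residual, then second residual) and its inversion at the decoder can be sketched with scalar pixel values. The toy example below is lossless, so the round trip is exact, which a real transform/quantization stage would not guarantee; all values are hypothetical:

```python
# Sketch of the two-stage prediction cascade (scalar pixels, no
# transform/quantization, so reconstruction is exact in this toy example).
current     = 120   # pixel of the current block
first_pred  = 110   # first prediction value
second_pred = 7     # value predicted from neighboring reconstructed residuals

# Encoder: first residual is the target of the second prediction;
# the second residual is what actually gets encoded.
first_residual  = current - first_pred          # 10
second_residual = first_residual - second_pred  # 3

# Decoder: invert the cascade step by step.
recon_first_residual = second_residual + second_pred
recon_current        = recon_first_residual + first_pred

assert recon_current == current
```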
  • the reconstructed block for the selected portion to which the second prediction is applied may be a weighted-sum of the reconstructed residual block, the second prediction signal, and the first prediction signal.
  • the information related to the prediction may include a first weight of the reconstructed residual block, a second weight of the second prediction signal, and a third weight of the first prediction signal.
  • the fact that the second prediction is not used may be regarded as the second weight of the second prediction signal being set to zero.
  • the second weight of the second prediction signal may be considered to be set to 0 with respect to the remaining part to which the second prediction is not applied.
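The weighted-sum reconstruction above, including the reading that not applying the second prediction amounts to setting the second weight to 0, can be sketched as follows. The weight values are hypothetical:

```python
# Sketch of the weighted-sum reconstruction (weights w1/w2/w3 correspond to
# the first, second, and third weights above; their values are illustrative).
def weighted_reconstruct(recon_residual, second_pred, first_pred,
                         w1=1.0, w2=1.0, w3=1.0):
    return w1 * recon_residual + w2 * second_pred + w3 * first_pred

# Inside the region of the second prediction:
inside = weighted_reconstruct(4, 2, 100)
# Outside the region, "second prediction not applied" reads as w2 = 0:
outside = weighted_reconstruct(4, 2, 100, w2=0.0)

assert inside == 106.0
assert outside == 104.0
```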
  • the methods are described based on flowcharts as a series of steps or units, but the present invention is not limited to the order of the steps; certain steps may be performed in a different order from, or simultaneously with, other steps. Also, one of ordinary skill in the art will appreciate that the steps shown in the flowcharts are not exclusive, that other steps may be included, and that one or more steps in the flowcharts may be deleted without affecting the scope of the present invention.
  • Embodiments according to the present invention described above may be implemented in the form of program instructions that may be executed by various computer components, and may be recorded in a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
  • Program instructions recorded on the computer-readable recording medium may be those specially designed and configured for the present invention, or may be known and available to those skilled in the computer software arts.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules to perform the process according to the invention, and vice versa.

Abstract

Disclosed are a method and a device for performing encoding and decoding by using a first prediction and a second prediction. In the encoding of a current block, a residual signal of the current block is generated based on the current block, the first prediction, and the second prediction. In addition, encoded residual signal information is generated by encoding the residual signal. In the decoding of the current block, a reconstructed residual signal of the current block is generated. A reconstructed block of the current block is generated based on the reconstructed residual signal, the second prediction, and the first prediction.
PCT/KR2017/000965 2016-01-27 2017-01-26 Procédé et dispositif pour coder et décoder une vidéo par utilisation d'une prédiction WO2017131473A1 (fr)

Priority Applications (8)

Application Number Priority Date Filing Date Title
CN202211613933.9A CN115941949A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN201780009090.2A CN108605123B (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN202211613078.1A CN115733977A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN202211615144.9A CN116016912A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN202211613690.9A CN115733978A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN202211613836.XA CN115733979A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN202211613450.9A CN116016911A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置
CN202211613703.2A CN115941948A (zh) 2016-01-27 2017-01-26 通过使用预测对视频进行编码和解码的方法和装置

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20160010260 2016-01-27
KR10-2016-0010260 2016-01-27
KR1020170012160A KR20170089777A (ko) 2016-01-27 2017-01-25 예측을 사용하는 비디오의 부호화 및 복호화를 위한 방법 및 장치
KR10-2017-0012160 2017-01-25

Publications (1)

Publication Number Publication Date
WO2017131473A1 true WO2017131473A1 (fr) 2017-08-03

Family

ID=59398440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/000965 WO2017131473A1 (fr) 2016-01-27 2017-01-26 Procédé et dispositif pour coder et décoder une vidéo par utilisation d'une prédiction

Country Status (2)

Country Link
CN (7) CN116016911A (fr)
WO (1) WO2017131473A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090095407A (ko) * 2008-03-05 2009-09-09 경희대학교 산학협력단 정수 dct 변환 영역에서 영상 블록의 업/다운 샘플링방법 및 그 장치
JP2010045853A (ja) * 2009-11-20 2010-02-25 Mitsubishi Electric Corp 動画像符号化装置および動画像符号化方法
KR101411315B1 (ko) * 2007-01-22 2014-06-26 삼성전자주식회사 인트라/인터 예측 방법 및 장치
KR20150027530A (ko) * 2013-09-04 2015-03-12 한국전자통신연구원 Hevc 부호화 장치 및 이를 이용한 부호화 방법
KR20150093633A (ko) * 2010-11-08 2015-08-18 성균관대학교산학협력단 다단계 예측을 이용한 영상 부/복호화 방법 및 이러한 방법을 사용하는 부/복호화 장치

Also Published As

Publication number Publication date
CN115941949A (zh) 2023-04-07
CN115733977A9 (zh) 2023-07-11
CN115941948A (zh) 2023-04-07
CN116016912A (zh) 2023-04-25
CN116016911A (zh) 2023-04-25
CN115733977A (zh) 2023-03-03
CN115733978A (zh) 2023-03-03
CN115733979A (zh) 2023-03-03
CN115941949A9 (zh) 2023-07-11
CN115733979A9 (zh) 2023-06-16

Similar Documents

Publication Publication Date Title
WO2018097693A2 (fr) Procédé et dispositif de codage et de décodage vidéo, et support d'enregistrement à flux binaire mémorisé en son sein
WO2018030773A1 (fr) Procédé et appareil destinés au codage/décodage d'image
WO2019190224A1 (fr) Dispositif et procédé de codage/décodage d'image, et support d'enregistrement mémorisant un flux binaire
WO2018226015A1 (fr) Procédé et dispositif de codage/de décodage vidéo, et support d'enregistrement stockant un flux binaire
WO2018012886A1 (fr) Procédé de codage/décodage d'images et support d'enregistrement correspondant
WO2019112394A1 (fr) Procédé et appareil de codage et décodage utilisant un partage d'informations sélectif entre des canaux
WO2019177354A1 (fr) Dispositif et procédé de codage/décodage d'image et support d'enregistrement ayant un train de bits stocké en son sein
WO2018016823A1 (fr) Dispositif et procédé de codage/décodage d'image, et support d'enregistrement dans lequel le flux binaire est stocké
WO2018026166A1 (fr) Procédé et appareil de codage/décodage d'image, et support d'enregistrement stockant un train de bits
WO2018174617A1 (fr) Procédé de prédiction basé sur une forme de bloc et dispositif associé
WO2017222237A1 (fr) Procédé et dispositif de prédiction intra
WO2019182385A1 (fr) Dispositif et procédé de codage/décodage d'image, et support d'enregistrement contenant un flux binaire
WO2018097692A2 (fr) Procédé et appareil de codage/décodage d'image et support d'enregistrement contenant en mémoire un train de bits
WO2018097607A1 (fr) Procédé et dispositif de codage/décodage d'image et support d'enregistrement stockant un flux binaire
WO2019172705A1 (fr) Procédé et appareil de codage/décodage d'image utilisant un filtrage d'échantillon
WO2020005035A1 (fr) Appareil et procédé de décodage/codage d'image destiné à l'amélioration du débit de traitement, et support d'enregistrement stockant un train de bits
WO2017086748A1 (fr) Procédé et dispositif pour chiffrer/déchiffrer une image en utilisant une image modifiée géométriquement
WO2019107927A1 (fr) Procédé et appareil de prédiction intra bidirectionnelle
WO2017176092A1 (fr) Procédé et dispositif pour induire des informations de prédiction de mouvement
WO2019240493A1 (fr) Procédé et dispositif de codage arithmétique binaire adaptatif au contexte
WO2018174618A1 (fr) Procédé et dispositif de prédiction à l'aide d'un bloc de référence
WO2018101700A1 (fr) Procédé et dispositif de codage/décodage d'images, et support d'enregistrement dans lequel est stocké un flux binaire
WO2021015581A1 (fr) Procédé, appareil et support d'enregistrement pour coder/décoder une image à l'aide d'un partitionnement géométrique
WO2020256422A1 (fr) Procédé et dispositif de codage/décodage d'informations à prédiction inter
WO2020050600A1 (fr) Procédé et dispositif de codage/décodage vidéo, et support d'enregistrement pour stockage de flux binaire

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17744594

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17744594

Country of ref document: EP

Kind code of ref document: A1