WO2020175915A1 - Video signal encoding/decoding method and apparatus therefor - Google Patents

Video signal encoding/decoding method and apparatus therefor

Info

Publication number
WO2020175915A1
WO2020175915A1 (PCT/KR2020/002754)
Authority
WO
WIPO (PCT)
Prior art keywords
merge
motion information
block
current block
prediction
Prior art date
Application number
PCT/KR2020/002754
Other languages
English (en)
French (fr)
Inventor
이배근 (Bae Keun Lee)
Original Assignee
주식회사 엑스리스 (XRIS Corp.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 엑스리스 (XRIS Corp.)
Priority to CN202310512325.7A priority Critical patent/CN116366840A/zh
Priority to CN202080004012.5A priority patent/CN112425160B/zh
Publication of WO2020175915A1 publication Critical patent/WO2020175915A1/ko
Priority to US17/126,803 priority patent/US11025944B2/en
Priority to US17/241,950 priority patent/US11632562B2/en
Priority to ZA2021/04757A priority patent/ZA202104757B/en
Priority to US18/135,106 priority patent/US20230370631A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/109Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates to a video signal encoding/decoding method and an apparatus therefor.
  • Recently, demand for high-resolution, high-quality video services has been increasing.
  • The biggest problem with high-definition video services is that the amount of data increases significantly; to solve this problem, research to improve the video compression rate is being actively conducted.
  • Standardization groups such as MPEG (Moving Picture Experts Group) and the ITU-T (International Telecommunication Union - Telecommunication standardization sector) jointly formed JCT-VC (Joint Collaborative Team on Video Coding) to develop video compression standards.
  • An object of the present invention is to provide a method of deriving merge candidates using a motion information table and an apparatus for performing the method in encoding/decoding a video signal.
  • An object of the present invention is to provide a method for updating motion information of blocks included in a merge processing area to a motion information table and an apparatus for performing the method in encoding/decoding a video signal.
  • An object of the present invention is to provide a method of constructing a merge candidate list based on the last candidate in encoding/decoding a video signal, and an apparatus for performing the method.
  • the present invention aims to provide a method for efficiently determining an inter prediction method to be applied to a current block and an apparatus for performing the method in encoding/decoding a video signal.
  • According to the present invention, the video signal decoding method includes parsing a first flag indicating whether inter prediction based on a merge mode is applied to the current block; if the first flag is true, parsing a second flag indicating whether the regular merge mode or the merge offset encoding mode is applied to the current block; and if the second flag is true, parsing a third flag indicating whether the merge offset encoding mode is applied to the current block. In this case, when the third flag is true, the merge offset encoding mode is applied to the current block, and when the third flag is false, the regular merge mode may be applied to the current block.
  • According to the present invention, the video signal encoding method includes encoding a first flag indicating whether inter prediction based on a merge mode is applied to the current block; when the first flag is true, encoding a second flag indicating whether the regular merge mode or the merge offset encoding mode is applied to the current block; and when the second flag is true, encoding a third flag indicating whether the merge offset encoding mode is applied to the current block. In this case, when the merge offset encoding mode is applied to the current block, the third flag is set to true, and when the regular merge mode is applied to the current block, the third flag may be set to false.
  • The video signal decoding/encoding method according to the present invention may further include parsing/encoding a fourth flag indicating whether the combined prediction mode is applied to the current block when the second flag is false.
  • the prediction unit partitioning-based encoding method can be applied when the fourth flag is false.
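The flag parsing order described in the preceding paragraphs can be sketched as a small decision function. This is an illustrative Python sketch, not the patent's actual syntax; the flag names are placeholders for the first through fourth flags, and a flag is only consulted when the parsing order says it would be present in the bitstream:

```python
def select_merge_mode(flags):
    """Illustrative mode selection following the parsing order above.

    flags: dict of booleans for the first through fourth flags
    (placeholder names, not the patent's syntax elements).
    """
    if not flags["first"]:
        return "no_merge_inter"          # merge-based inter prediction not applied
    if flags["second"]:
        # regular merge vs. merge offset: decided by the third flag
        return "merge_offset" if flags["third"] else "regular_merge"
    # second flag false: combined prediction vs. prediction unit partitioning
    return "combined_prediction" if flags["fourth"] else "prediction_unit_partitioning"
```

For example, `{"first": True, "second": False, "fourth": False}` selects the prediction unit partitioning-based encoding method, matching the rule that it applies when the fourth flag is false.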
  • In the video signal decoding/encoding method according to the present invention, the motion information of the current block is derived from the merge candidate list of the current block, and if the number of merge candidates derived from neighboring blocks of the current block is less than a threshold value, motion information candidates included in the motion information table may be added to the merge candidate list as merge candidates.
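The fallback just described — topping up the merge candidate list from the motion information table when too few candidates come from neighboring blocks — can be sketched as follows. The candidate representation and the newest-first scan order are assumptions for illustration:

```python
def build_merge_list(neighbor_candidates, motion_info_table, max_candidates):
    """Fill the merge candidate list from neighboring blocks first, then
    from the motion information table (scanned newest-first here, as an
    assumption), skipping duplicates, until the list is full."""
    merge_list = list(neighbor_candidates)
    for candidate in reversed(motion_info_table):
        if len(merge_list) >= max_candidates:
            break
        if candidate not in merge_list:   # simple redundancy check
            merge_list.append(candidate)
    return merge_list
```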
  • the motion information table may not be updated while the blocks included in the merge processing area are decoded.
  • In the video signal decoding/encoding method according to the present invention, whether to update the motion information table with the motion information of the current block can be determined.
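Figures 25 and 26 relate to one way of realizing the deferred update above: buffering motion information in a temporary table while the merge processing area is decoded, then merging it into the motion information table afterward. A hedged Python sketch; the table capacity and duplicate handling are assumptions:

```python
def decode_merge_processing_area(block_motions, motion_info_table, temp_capacity=8):
    """Defer updates: motion information of blocks inside the merge
    processing area goes into a temporary table; the motion information
    table itself is only updated after the whole area is decoded."""
    temp_table = []
    for motion in block_motions:              # decoding blocks in the area
        if len(temp_table) < temp_capacity:
            temp_table.append(motion)         # NOT written to the main table yet
    for motion in temp_table:                 # area finished: merge the tables
        if motion not in motion_info_table:
            motion_info_table.append(motion)
    return motion_info_table
```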
  • According to the present invention, inter prediction efficiency can be improved by refining a motion vector derived based on a merge candidate.
  • FIG. 1 is a block diagram of an image encoder (encoder) according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of an image decoder (decoder) according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing a basic coding tree unit according to an embodiment of the present invention.
  • FIG. 4 is a diagram showing various division types of a coding block.
  • FIG. 5 is a view showing an example of the division of the coding tree unit.
  • FIG. 6 is a flow diagram of an inter prediction method according to an embodiment of the present invention.
  • FIG. 7 is a diagram showing a process of deriving motion information of the current block in the merge mode.
  • FIG. 8 is a diagram illustrating candidate blocks used to derive a merge candidate.
  • FIG. 9 is a view showing the positions of reference samples.
  • FIG. 10 is a diagram illustrating candidate blocks used to derive a merge candidate.
  • FIG. 11 is a view showing an example in which the position of the reference sample is changed.
  • FIG. 12 is a view showing an example in which the position of the reference sample is changed.
  • FIG. 13 is a diagram for explaining an update mode of a motion information table.
  • FIG. 14 is a diagram showing an update pattern of a motion information table.
  • FIG. 15 is a diagram showing an example in which the index of a previously stored motion information candidate is updated.
  • FIG. 16 is a diagram showing the location of a representative sub-block.
  • FIG. 17 shows an example in which a motion information table is generated for each inter prediction mode.
  • FIG. 18 shows an example in which a motion information table is generated for each motion vector resolution.
  • FIG. 19 is a diagram showing an example in which motion information of a block to which the merge offset encoding method is applied is stored in the motion information table.
  • FIG. 20 is a diagram showing an example in which motion information candidates included in the long-term motion information table are added to the merge candidate list.
  • FIG. 21 is a diagram showing an example in which a redundancy test is performed only for some of the merge candidates.
  • FIG. 22 is a diagram showing an example in which a redundancy test with a specific merge candidate is omitted.
  • FIG. 23 is a diagram showing a candidate block included in the same merge processing area as the current block.
  • FIG. 24 is a diagram showing an example of inducing a merge candidate for the current block when the current block is included in the merge processing area.
  • FIG. 25 is a diagram showing a temporary motion information table.
  • FIG. 26 is a diagram showing an example of merging the motion information table and the temporary motion information table.
  • FIG. 27 is a diagram showing an example of dividing a coding block into a plurality of prediction units using a diagonal line.
  • FIG. 28 is a diagram showing an example of dividing a coding block into two prediction units.
  • FIG. 29 is a diagram showing examples of dividing a coding block into a plurality of prediction blocks having different sizes.
  • FIG. 30 is a diagram showing neighboring blocks used to derive a split mode merge candidate.
  • FIG. 31 is a diagram for explaining an example of determining the availability of neighboring blocks for each prediction unit.
  • FIGS. 32 and 33 are diagrams showing an example in which a prediction sample is derived based on a weighted sum operation of the first prediction sample and the second prediction sample.
  • FIGS. 34 and 35 are diagrams showing the offset vector according to the values of distance_idx, indicating the magnitude of the offset vector, and direction_idx, indicating its direction.
  • Encoding and decoding of an image is performed in block units. For example, encoding/decoding processing such as transformation, quantization, prediction, in-loop filtering, or restoration is performed for a coding block, a transformation block, or a prediction block.
  • a block to be encoded/decoded will be referred to as a “current block.”
  • The current block may represent a coding block, a transform block, or a prediction block according to the current encoding/decoding processing step.
  • The term 'unit' represents a basic unit on which a specific encoding/decoding process is performed, and 'block' can be understood as representing a sample array of a predetermined size. Unless otherwise stated, 'block' and 'unit' may be used with equivalent meaning. For example, in the embodiments described below, a coding block and a coding unit may be understood to have the same meaning.
  • FIG. 1 is a block diagram of an image encoder (encoder) according to an embodiment of the present invention.
  • Referring to FIG. 1, the image encoding apparatus 100 may include a picture division unit 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a rearrangement unit 160, an entropy encoding unit 165, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155.
  • Each of the components shown in FIG. 1 is shown independently to represent different characteristic functions in the image encoding apparatus, and does not mean that each component is made up of separate hardware or a single software unit. That is, the components are listed separately for convenience of explanation, and at least two components may be combined to form one component, or one component may be divided into a plurality of components to perform functions. Integrated and separated embodiments of these components are also included in the scope of the present invention as long as they do not depart from its essence.
  • In addition, some components may not be essential components that perform the essential functions of the present invention, but merely optional components for improving performance. The present invention can be implemented by including only the components essential for realizing its essence, excluding components used merely to improve performance, and a structure including only the essential components, excluding the optional components used for performance improvement, is also included in the scope of the present invention.
  • The picture division unit 110 can divide an input picture into at least one processing unit. Here, the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The picture division unit 110 divides a picture into a combination of a plurality of coding units, prediction units, and transform units, and can encode the picture by selecting one combination of a coding unit, a prediction unit, and a transform unit based on a predetermined criterion (for example, a cost function).
  • For example, one picture can be divided into a plurality of coding units. To divide a picture into coding units, a recursive tree structure such as a quad-tree structure can be used.
  • A coding unit divided into other coding units using the largest coding unit as a root can be divided with as many child nodes as the number of divided coding units. A coding unit that is no longer divided according to certain restrictions becomes a leaf node. That is, assuming that only square division is possible for one coding unit, one coding unit can be divided into up to four different coding units.
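The quad-tree recursion described above can be illustrated with a toy partitioner. This sketch always splits down to a fixed minimum size, standing in for the encoder's cost-based split decision:

```python
def quad_split(x, y, size, min_size, leaves=None):
    """Recursively split a square coding unit into four equal children
    until min_size is reached; collect the leaf coding units."""
    if leaves is None:
        leaves = []
    if size <= min_size:
        leaves.append((x, y, size))           # leaf node: no further division
        return leaves
    half = size // 2
    for dy in (0, half):                      # four child nodes per split
        for dx in (0, half):
            quad_split(x + dx, y + dy, half, min_size, leaves)
    return leaves
```

A 16x16 unit with an 8x8 minimum yields exactly four leaves, matching the "up to four different coding units" bound for a single split.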
  • the coding unit is the unit that performs the coding.
  • A prediction unit may be obtained by dividing one coding unit into at least one square or rectangle of the same size within the coding unit, or may be divided such that, among the prediction units divided within one coding unit, one prediction unit has a shape and/or size different from another prediction unit. When a prediction unit on which intra prediction is performed is generated based on a coding unit and the coding unit is not the smallest coding unit, intra prediction can be performed without dividing the coding unit into a plurality of NxN prediction units.
  • the prediction units 120 and 125 may include an inter prediction unit 120 that performs inter prediction and an intra prediction unit 125 that performs intra prediction.
  • Whether to use inter prediction or intra prediction for a prediction unit can be determined, and specific information according to each prediction method can be determined. In this case, the processing unit in which prediction is performed may differ from the processing unit in which the prediction method and its specific content are determined. For example, the prediction method and the prediction mode are determined in prediction units, and prediction may be performed in transform units.
  • the residual value (residual block) between the generated prediction block and the original block can be input to the transform unit 130.
  • In addition, prediction mode information, motion vector information, and the like used for prediction can be encoded by the entropy encoding unit 165 together with the residual value and transmitted to the decoder.
  • The inter prediction unit 120 may predict a prediction unit based on information of at least one of the pictures before or after the current picture, and in some cases may predict a prediction unit based on information of a partial region of the current picture in which encoding has been completed.
  • the inter prediction unit 120 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.
  • the reference picture interpolation unit receives reference picture information from the memory 155 and can generate pixel information less than an integer pixel from the reference picture.
  • In the case of luminance pixels, a DCT-based 8-tap interpolation filter (DCT-based Interpolation Filter) with varying filter coefficients can be used to generate pixel information of less than an integer pixel in units of 1/4 pixel. In the case of chrominance signals, a DCT-based 4-tap interpolation filter with varying filter coefficients can be used to generate pixel information of less than an integer pixel in units of 1/8 pixel.
  • The motion prediction unit can perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit. Various methods such as FBMA (Full search-based Block Matching Algorithm), TSS (Three Step Search), and NTS (New Three-Step Search Algorithm) can be used to calculate the motion vector. The motion vector may have a motion vector value in units of 1/2 or 1/4 pixel based on the interpolated pixel.
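A quarter-pel motion vector, as mentioned above, decomposes into an integer-pixel displacement and a fractional phase that selects the interpolation filter. A minimal sketch; storing the vector as an integer count of quarter-pel steps is an assumption about the representation:

```python
def split_quarter_pel_mv(mv):
    """Split a motion vector component stored in 1/4-pixel units into its
    integer-pixel part and its fractional phase (0, 1, 2, or 3)."""
    integer_part = mv >> 2    # arithmetic shift: floor division by 4
    fraction = mv & 3         # quarter-pel phase selecting the filter
    return integer_part, fraction
```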
  • In the motion prediction unit, the current prediction unit can be predicted using different motion prediction methods. As the motion prediction method, various methods such as the skip method, the merge method, the AMVP (Advanced Motion Vector Prediction) method, and the intra block copy method can be used.
  • The intra prediction unit 125 may generate a prediction unit based on reference pixel information around the current block, which is pixel information in the current picture. If a neighboring block of the current prediction unit is a block on which inter prediction has been performed, so that a reference pixel is a pixel subjected to inter prediction, the reference pixel included in the block on which inter prediction was performed can be replaced with reference pixel information of a neighboring block on which intra prediction was performed. That is, when a reference pixel is not available, the unavailable reference pixel information can be replaced with at least one of the available reference pixels.
  • In intra prediction, the prediction modes may include a directional prediction mode that uses reference pixel information according to the prediction direction and a non-directional mode that does not use directional information when performing prediction. The mode for predicting luminance information may differ from the mode for predicting chrominance information, and the intra prediction mode information used to predict the luminance information or the predicted luminance signal information can be utilized to predict the chrominance information.
  • When intra prediction is performed, if the size of the prediction unit and the size of the transform unit are the same, intra prediction can be performed on the prediction unit based on the pixels on the left, the pixel on the top left, and the pixels on the top of the prediction unit. However, if the sizes of the prediction unit and the transform unit are different, intra prediction can be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN division can be used only for the smallest coding unit.
  • In the intra prediction method, a prediction block can be generated after applying an AIS (Adaptive Intra Smoothing) filter to the reference pixels according to the prediction mode. The type of AIS filter applied to the reference pixels may vary.
  • To perform the intra prediction method, the intra prediction mode of the current prediction unit can be predicted from the intra prediction modes of the prediction units existing around the current prediction unit. When the prediction mode of the current prediction unit is predicted using mode information predicted from a neighboring prediction unit, if the intra prediction modes of the current prediction unit and the neighboring prediction unit are the same, information indicating that the prediction modes of the current prediction unit and the neighboring prediction unit are the same can be transmitted using a predetermined flag; if the prediction modes of the current prediction unit and the neighboring prediction unit are different, entropy encoding can be performed to encode the prediction mode information of the current block.
  • In addition, based on the prediction units generated by the prediction units 120 and 125, a residual block containing residual information, which is the difference between the prediction unit and the original block of the prediction unit, can be generated.
  • the generated residual block may be input to the transform unit 130.
  • the residual block containing residual information can be transformed using a transformation method such as DCT (Discrete Cosine Transform) or DST (Discrete Sine Transform).
  • the DCT conversion core includes at least one of DCT2 or DCT8, and the DST conversion core includes DST7.
  • Whether to apply DCT or DST to transform the residual block can be determined based on the intra prediction mode information of the prediction unit used to generate the residual block. The transform for the residual block can also be skipped, and a flag indicating whether to skip the transform for the residual block can be encoded. Transform skip may be allowed for residual blocks whose size is less than or equal to a threshold, for luma components, or for chroma components under the 4:4:4 format.
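The transform-skip condition stated above can be written as a small predicate. The size threshold and the exact combination of the conditions are assumptions for illustration:

```python
def transform_skip_allowed(width, height, component, chroma_format, max_size=4):
    """Illustrative predicate: transform skip only for small residual
    blocks, and then for the luma component or for chroma under 4:4:4.
    The threshold (max_size) is an assumed value, not the patent's."""
    small_enough = width <= max_size and height <= max_size
    if not small_enough:
        return False
    return component == "luma" or chroma_format == "4:4:4"
```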
  • the quantization unit 135 may quantize values converted into the frequency domain by the conversion unit 130.
  • the quantization coefficient may vary depending on the block or the importance of the image.
  • the value calculated by the quantization unit 135 may be provided to the inverse quantization unit 140 and the re-alignment unit 160.
  • the rearrangement unit 160 may rearrange the coefficient values with respect to the quantized residual values.
  • the rearrangement unit 160 may change the two-dimensional block shape coefficient into a one-dimensional vector shape through a coefficient scanning method. For example,
  • The rearrangement unit 160 can scan from the DC coefficient to coefficients in the high-frequency domain using a zig-zag scan method and change them into a one-dimensional vector form. Depending on the size of the transform unit and the intra prediction mode, instead of the zig-zag scan, a vertical scan that scans two-dimensional block-shape coefficients in the column direction or a horizontal scan that scans two-dimensional block-shape coefficients in the row direction can also be used. That is, depending on the size of the transform unit and the intra prediction mode, it can be determined which of the zig-zag scan, the vertical scan, and the horizontal scan is used.
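The three scans can be sketched directly; which one applies would depend on the transform unit size and intra prediction mode, as stated above. The zig-zag direction convention used here is one common choice, not necessarily the patent's:

```python
def scan_order(n, mode):
    """Return the (row, col) visiting order for an n x n coefficient block.

    'zigzag' walks anti-diagonals from the DC coefficient toward high
    frequencies; 'vertical' scans column by column; 'horizontal' row by row.
    """
    if mode == "vertical":
        return [(r, c) for c in range(n) for r in range(n)]
    if mode == "horizontal":
        return [(r, c) for r in range(n) for c in range(n)]
    order = []
    for d in range(2 * n - 1):                 # anti-diagonal: r + c == d
        diagonal = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        order += diagonal if d % 2 else diagonal[::-1]
    return order
```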
  • The entropy encoding unit 165 can perform entropy encoding based on the values calculated by the rearrangement unit 160. For entropy encoding, various encoding methods such as Exponential Golomb, CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding) can be used.
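Of the methods named, Exponential-Golomb coding is simple enough to show in full: a value v is coded as the binary form of v+1 preceded by as many zeros as that form has bits minus one, so smaller values get shorter code words:

```python
def exp_golomb_ue(v):
    """Unsigned Exponential-Golomb code word for v >= 0, as a bit string."""
    suffix = bin(v + 1)[2:]               # binary representation of v + 1
    return "0" * (len(suffix) - 1) + suffix
```

The first few code words are 1, 010, 011, 00100, 00101, and so on.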
  • The entropy encoding unit 165 can encode various information from the rearrangement unit 160 and the prediction units 120 and 125, such as residual coefficient information and block type information of the coding unit, prediction mode information, division unit information, prediction unit information and transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information.
  • the entropy encoding unit 165 may entropy-encode the coefficient value of the encoding unit inputted from the reordering unit 160.
  • the inverse quantization unit 140 and the inverse transform unit 145 inverse quantize values quantized in the quantization unit 135 and inverse transform the values transformed by the transform unit 130.
  • The residual value generated by the inverse quantization unit 140 and the inverse transform unit 145 is combined with the prediction unit predicted through the motion estimation unit, the motion compensation unit, and the intra prediction unit included in the prediction units 120 and 125, so that a reconstructed block can be generated.
  • the filter unit 150 may include at least one of a deblocking filter, an offset correction unit, and an adaptive loop filter (ALF).
  • the deblocking filter can remove block distortion caused by the boundary between blocks in the restored picture.
  • To determine whether to perform deblocking, whether to apply the deblocking filter to the current block can be judged based on the pixels included in several columns or rows of the block. When the deblocking filter is applied to a block, a strong filter or a weak filter can be applied depending on the required deblocking filtering strength. Also, when applying the deblocking filter, horizontal direction filtering and vertical direction filtering can be processed in parallel.
  • The offset correction unit can correct the offset from the original image in pixel units for the image on which deblocking has been performed. To perform offset correction for a specific picture, a method of dividing the pixels included in the image into a certain number of regions, determining a region in which the offset is to be performed, and applying the offset to that region, or a method of applying the offset in consideration of the edge information of each pixel, can be used.
  • ALF (Adaptive Loop Filtering) can be performed based on a value obtained by comparing the filtered reconstructed image with the original image. After dividing the pixels included in the image into predetermined groups, one filter to be applied to each group can be determined, and filtering can be performed differentially for each group. Information on whether to apply ALF may be transmitted for each coding unit (CU) for the luminance signal, and the shape and filter coefficients of the ALF filter to be applied may vary for each block. Alternatively, the same type (fixed type) of ALF filter may be applied regardless of the characteristics of the block to which it is applied.
  • the memory 155 can store the restored block or picture calculated through the filter unit 150.
  • the stored restored block or picture may be provided to the prediction units 120 and 125 when performing inter prediction.
  • the image decoder 200 may include an entropy decoding unit 210, a rearrangement unit 215, an inverse quantization unit 220, an inverse transformation unit 225, prediction units 230 and 235, a filter unit 240, and a memory 245.
  • the input bitstream can be decoded in a procedure opposite to that of the video encoder.
  • the entropy decoding unit 210 can perform entropy decoding in a procedure opposite to that of the entropy encoding performed by the entropy encoding unit of the image encoder. For example, in correspondence with the method performed by the image encoder, methods such as Exponential Golomb coding or CAVLC (Context-Adaptive Variable Length Coding) can be applied.
  • the rearrangement unit 215 can perform rearrangement on the bitstream entropy-decoded by the entropy decoding unit 210, based on the rearrangement method used in the encoder.
  • the coefficients expressed in the form of a one-dimensional vector can be restored and rearranged into coefficients in the form of a two-dimensional block.
  • the rearrangement unit 215 receives information related to the coefficient scanning performed in the encoder, and can perform rearrangement through a reverse scanning method based on the scanning order performed in the encoder.
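The rearrangement step can be sketched as follows: coefficients received as a one-dimensional vector are restored to a two-dimensional block by inverting the scan order. The up-right diagonal scan used here is one common pattern, chosen for illustration only; the actual scan depends on the codec configuration.

```python
# Minimal sketch of the decoder-side rearrangement: invert the scan
# order used by the encoder to place 1-D coefficients back into a
# 2-D block.

def diagonal_scan_order(size):
    """Return (row, col) positions of a size x size block in
    up-right diagonal scan order."""
    order = []
    for d in range(2 * size - 1):
        for row in range(size - 1, -1, -1):
            col = d - row
            if 0 <= col < size:
                order.append((row, col))
    return order

def inverse_scan(coeff_1d, size):
    """Place a 1-D coefficient vector back into a 2-D block."""
    block = [[0] * size for _ in range(size)]
    for value, (row, col) in zip(coeff_1d, diagonal_scan_order(size)):
        block[row][col] = value
    return block

print(inverse_scan([1, 2, 3, 4], 2))  # [[1, 3], [2, 4]]
```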
  • the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient value of the re-aligned block.
  • the inverse transformation unit 225 may perform an inverse transformation, that is, an inverse DCT or an inverse DST, on the quantization result produced by the image encoder. Alternatively, if the transformation was skipped in the image encoder, the inverse transformation may not be performed in the inverse transformation unit 225. The inverse transformation may be performed based on the transmission unit determined in the image encoder.
  • the inverse transformation unit 225 of the image decoder can selectively perform a transformation technique (for example, DCT or DST) according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction.
  • the prediction units 230 and 235 may generate a prediction block based on information related to prediction block generation provided by the entropy decoding unit 210, and on previously decoded block or picture information provided from the memory 245.
  • the prediction units 230 and 235 may include a prediction unit determination unit, an inter prediction unit, and an intra prediction unit.
  • the prediction unit determination unit receives various information, such as prediction unit information input from the entropy decoding unit 210, prediction mode information of the intra prediction method, and motion prediction related information of the inter prediction method, classifies the prediction unit in the current coding unit, and can determine whether the prediction unit performs inter prediction or intra prediction.
  • the inter prediction unit 230 can perform inter prediction on the current prediction unit, using the information necessary for inter prediction of the current prediction unit provided by the image encoder, based on information included in at least one of the pictures before or after the current picture containing the current prediction unit. Alternatively, inter prediction can be performed based on the information of a partial region previously restored within the current picture containing the current prediction unit.
  • to perform inter prediction, it can be determined on a coding unit basis whether the motion prediction method of the prediction unit included in the coding unit is skip mode, merge mode (Merge mode), motion vector prediction mode (AMVP mode), or intra block copy mode.
  • the intra prediction unit 235 may generate a prediction block based on pixel information in the current picture.
  • when the prediction unit is one for which intra prediction has been performed, intra prediction can be performed using the intra prediction mode information of the prediction unit provided by the image encoder.
  • the intra prediction unit 235 may include an AIS (Adaptive Intra Smoothing) filter, a reference pixel interpolation unit, and a DC filter.
  • the AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter can be determined according to the prediction mode of the current prediction unit. AIS filtering can be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information provided by the image encoder. When the prediction mode of the current block is a mode that does not perform AIS filtering, the AIS filter may not be applied.
  • the reference pixel interpolation unit can generate a reference pixel of a pixel unit less than an integer value by interpolating the reference pixel. If the prediction mode of the current prediction unit is a prediction mode that generates a prediction block without interpolating a reference pixel, the reference pixel may not be interpolated.
  • the DC filter can generate a prediction block through filtering when the prediction mode of the current block is DC mode.
  • the reconstructed block or picture may be provided to the filter unit 240.
  • the filter unit 240 can include a deblocking filter, an offset correction unit, and an ALF.
  • the deblocking filter of the image decoder can receive the information related to the deblocking filter provided by the image encoder, and the image decoder can perform deblocking filtering on the corresponding block.
  • the offset correction unit can perform offset correction on the restored image based on the type of offset correction applied to the image at the time of encoding and information on the offset value.
  • the ALF can be applied based on the ALF application information and ALF coefficient information provided from the encoder. This ALF information can be provided by being included in a specific parameter set.
  • the memory 245 can store the restored picture or block so that it can be used as a reference picture or reference block, and can also provide the restored picture to an output unit.
  • FIG. 3 is a diagram showing a basic coding tree unit according to an embodiment of the present invention.
  • a coding block of the largest size can be defined as a coding tree block.
  • One picture is divided into a plurality of coding tree units (CTU).
  • the coding tree unit is a coding unit of the largest size, and may be referred to as an LCU (Largest Coding Unit).
  • FIG. 3 shows an example in which one picture is divided into a plurality of coding tree units.
  • the size of the coding tree unit can be defined at the picture level or the sequence level. To this end, information indicating the size of the coding tree unit can be signaled through a picture parameter set or a sequence parameter set.
  • for example, at the sequence level, the size of the coding tree unit for all pictures in the sequence can be set to 128x128. Alternatively, at the picture level, either 128x128 or 256x256 can be determined as the size of the coding tree unit. For example, the size of the coding tree unit can be set to 128x128 in a first picture and to 256x256 in a second picture.
  • the predictive encoding mode refers to the method of generating the predictive image.
  • the predictive encoding mode may include at least one of intra prediction (Intra Prediction), inter prediction (Inter Prediction), current picture referencing (Current Picture Referencing, CPR, or Intra Block Copy, IBC), or combined prediction (Combined Prediction).
  • for a coding block, at least one of intra prediction, inter prediction, current picture reference, or combined prediction can be used to generate a prediction block for the coding block.
  • the information may be a 1-bit flag indicating whether the predictive encoding mode is intra mode or inter mode. Only when the predictive encoding mode of the current block is determined to be inter mode may the current picture reference or combined prediction be available.
  • current picture referencing sets the current picture as the reference picture. Here, the current picture means the picture including the current block.
  • information indicating whether or not the current picture reference is applied to the current block can be signaled through the bitstream. The information may be a 1-bit flag. If the flag is true, the predictive encoding mode of the current block is determined as the current picture reference.
  • if the flag is false, the prediction mode of the current block may be determined as inter prediction.
  • alternatively, the predictive encoding mode of the current block can be determined based on the reference picture index. For example, when the reference picture index indicates the current picture, the prediction coding mode of the current block can be determined as the current picture reference.
  • when the reference picture index indicates a picture other than the current picture, the prediction encoding mode of the current block can be determined as inter prediction. That is, the current picture reference is a prediction method using information of an area that has already been encoded/decoded within the current picture, and inter prediction is a prediction method using information of another picture that has been encoded/decoded.
  • Combined prediction is a combination of two or more of intra prediction, inter prediction, and current picture reference.
  • when combined prediction is applied, a first prediction block may be generated based on one of intra prediction, inter prediction, or current picture reference, and a second prediction block may be generated based on another.
  • a final prediction block may be generated through an average operation or a weighting operation of the first prediction block and the second prediction block.
  • information indicating whether combined prediction is applied can be signaled through the bitstream. The information can be a 1-bit flag.
  • FIG. 4 is a diagram showing various division types of a coding block.
  • a coding block can be divided into a plurality of coding blocks based on quad tree division, binary tree division, or triple tree division. A divided coding block can again be divided into a plurality of coding blocks based on quad tree division, binary tree division, or triple tree division.
  • quad tree partitioning represents a partitioning technique that divides the current block into four blocks. As a result of quad tree division, the current block can be divided into four square partitions (see 'SPLIT_QT' in FIG. 4).
  • Binary tree division is a division technique that divides the current block into two blocks.
  • dividing the current block into two blocks along the vertical direction (i.e., using a vertical line crossing the current block) can be called a vertical binary tree division, and dividing the current block into two blocks along the horizontal direction (i.e., using a horizontal line crossing the current block) can be referred to as a horizontal binary tree division.
  • as a result of binary tree division, the current block can be divided into two non-square partitions. In FIG. 4, (b) 'SPLIT_BT_VER' shows the result of the vertical binary tree division, and (c) 'SPLIT_BT_HOR' shows the result of the horizontal binary tree division.
  • Triple tree division is a division technique that divides the current block into 3 blocks.
  • dividing the current block into three blocks along the vertical direction can be referred to as a vertical triple tree division, and dividing the current block into three blocks along the horizontal direction can be referred to as a horizontal triple tree division.
  • as a result of triple tree division, the current block can be divided into three non-square partitions. In this case, the width/height of the partition located at the center of the current block can be twice the width/height of the other partitions.
  • in FIG. 4, (d) 'SPLIT_TT_VER' shows the result of the vertical triple tree division, and (e) 'SPLIT_TT_HOR' shows the result of the horizontal triple tree division.
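The division types of FIG. 4 can be summarized with a small helper that returns the sub-block sizes each split produces. The mode names follow the labels in FIG. 4; the helper itself is illustrative, not part of any specification.

```python
# Illustrative helper mirroring the split types of FIG. 4: given a
# block's (width, height), return the sub-block geometries produced by
# quad tree, binary tree, and triple tree division. The triple tree
# gives the center partition twice the size of the outer ones.

def split_block(width, height, mode):
    if mode == "SPLIT_QT":
        return [(width // 2, height // 2)] * 4
    if mode == "SPLIT_BT_VER":
        return [(width // 2, height)] * 2
    if mode == "SPLIT_BT_HOR":
        return [(width, height // 2)] * 2
    if mode == "SPLIT_TT_VER":
        return [(width // 4, height), (width // 2, height), (width // 4, height)]
    if mode == "SPLIT_TT_HOR":
        return [(width, height // 4), (width, height // 2), (width, height // 4)]
    raise ValueError(mode)

print(split_block(32, 32, "SPLIT_TT_VER"))  # [(8, 32), (16, 32), (8, 32)]
```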
  • the number of divisions of the coding tree unit can be defined as the partitioning depth.
  • the maximum division depth of the coding tree unit can be determined at the sequence or picture level. Accordingly, the maximum division depth of the coding tree unit may be different for each sequence or picture.
  • the maximum dividing depth for each of the dividing techniques can be determined individually.
  • the maximum partition depth allowed for a quadtree partition may be different from the maximum partition depth allowed for a binary tree partition and/or a triple tree partition.
  • the encoder can signal information representing at least one of the partition shape or the partition depth of the current block through the bitstream.
  • FIG. 5 is a diagram showing an example of a division of a coding tree unit.
  • a division technique applying at least one of quad tree division, binary tree division, or triple tree division can be referred to as multi-tree division. Coding blocks generated by applying multi-tree division to a coding block can be referred to as sub-coding blocks.
  • when the division depth of a coding block is k, the division depth of its sub-coding blocks is set to k+1. Conversely, for coding blocks with a division depth of k+1, the coding block with a division depth of k can be referred to as the upper coding block.
  • the partition type of the current coding block may be determined based on at least one of the partition type of the upper coding block or the partition type of a neighboring coding block. Here, the partition type may include at least one of whether to divide by quad tree, whether to divide by binary tree, the binary tree division direction, whether to divide by triple tree, or the triple tree division direction.
  • information indicating whether the coding block is divided can be signaled through the bitstream. The information is a 1-bit flag 'split_cu_flag', and the flag being true indicates that the coding block is divided by a multi-tree division technique.
  • split_cu_flag When split_cu_flag is true, information indicating whether the coding block is divided into quadtrees may be signaled through the bitstream.
  • the information is a 1-bit flag 'split_qt_flag', and when the flag is true, the coding block can be divided into four blocks.
  • information indicating whether binary tree division or triple tree division is applied to the coding block can be signaled through the bitstream.
  • the information may be a 1-bit flag mtt_split_cu_binary_flag. Based on the flag, it may be determined whether binary re-tree division or triple-tree division is applied to the coding block.
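The split signaling just described can be sketched as a small decision function. The flag names follow the text; the handling of the split direction is a simplifying assumption here (a real codec parses additional syntax elements and conditions).

```python
# Sketch of the split-signaling logic described above: split_cu_flag
# says whether the coding block is divided at all, split_qt_flag selects
# quad tree division, and otherwise mtt_split_cu_binary_flag chooses
# between binary tree and triple tree division.

def decode_split_mode(split_cu_flag, split_qt_flag=0,
                      mtt_split_cu_binary_flag=1, vertical=True):
    if not split_cu_flag:
        return "NO_SPLIT"
    if split_qt_flag:
        return "SPLIT_QT"
    tree = "BT" if mtt_split_cu_binary_flag else "TT"
    return f"SPLIT_{tree}_{'VER' if vertical else 'HOR'}"

print(decode_split_mode(1, split_qt_flag=1))  # SPLIT_QT
print(decode_split_mode(1, 0, mtt_split_cu_binary_flag=0, vertical=False))  # SPLIT_TT_HOR
```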
  • Inter prediction is a prediction coding mode that predicts the current block using information of the previous picture.
  • hereinafter, a block at the same position as the current block in the previous picture is referred to as a collocated block, and a prediction block generated based on the collocated block will be referred to as a collocated prediction block (Collocated Prediction Block).
  • the current block can be effectively predicted by using the motion of an object. For example, if the moving direction and size of the object can be known by comparing the previous picture with the current picture, a prediction block (or prediction image) of the current block can be generated taking the motion information of the object into account.
  • a prediction block generated using motion information may be referred to as a motion prediction block.
  • a residual block can be generated by subtracting the prediction block from the current block.
  • when the motion prediction block is used, the energy of the residual block can be reduced, and accordingly, the compression performance of the residual block can be improved.
  • as above, generating a prediction block using motion information is referred to as motion compensation prediction.
  • a prediction block can be generated based on motion compensation prediction.
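The idea of motion compensation prediction and the residual block can be sketched as follows. The reference picture is a plain 2-D list and the motion vector is in whole pixels, which is a simplification (real codecs also support fractional-pel motion vectors).

```python
# Minimal sketch: fetch a prediction block from the reference picture
# at a position displaced by the motion vector, then form the residual
# block as current minus prediction.

def motion_compensate(reference, x, y, mv, size):
    """Copy a size x size block from `reference` at (x, y) + mv."""
    dx, dy = mv
    return [[reference[y + dy + r][x + dx + c] for c in range(size)]
            for r in range(size)]

def residual(current, prediction):
    """Element-wise difference between current block and prediction."""
    return [[c - p for c, p in zip(cr, pr)]
            for cr, pr in zip(current, prediction)]

ref = [[r * 10 + c for c in range(8)] for r in range(8)]
pred = motion_compensate(ref, x=2, y=2, mv=(1, -1), size=2)
cur = [[13, 14], [23, 24]]
print(residual(cur, pred))  # [[0, 0], [0, 0]]
```

A perfect prediction, as here, yields an all-zero residual; in general the residual carries the remaining energy that gets transformed and quantized.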
  • the motion information may include at least one of a motion vector, a reference picture index, a predictive direction or a bidirectional weight index.
  • the motion vector indicates the moving direction and size of the object.
  • the reference picture index specifies the reference picture of the current block among the reference pictures included in the reference picture list.
  • the prediction direction indicates one of unidirectional L0 prediction, unidirectional L1 prediction, or bidirectional prediction (L0 prediction and L1 prediction). Depending on the prediction direction of the current block, at least one of the motion information in the L0 direction or the motion information in the L1 direction can be used.
  • the bidirectional weight index specifies the weight applied to the L0 prediction block and the weight applied to the L1 prediction block.
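The motion information fields enumerated above can be modeled as a simple record; the field names below are illustrative, not taken from any specification.

```python
# Sketch of the motion information the text enumerates: motion
# vector(s), reference picture index, prediction direction, and a
# bidirectional weight index.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionInfo:
    mv_l0: Optional[Tuple[int, int]] = None  # motion vector for L0
    mv_l1: Optional[Tuple[int, int]] = None  # motion vector for L1
    ref_idx_l0: int = -1                     # reference picture index (L0)
    ref_idx_l1: int = -1                     # reference picture index (L1)
    bcw_idx: int = 0                         # bidirectional weight index

    @property
    def direction(self):
        """Unidirectional L0/L1 or bidirectional prediction."""
        if self.mv_l0 is not None and self.mv_l1 is not None:
            return "BI"
        return "L0" if self.mv_l0 is not None else "L1"

info = MotionInfo(mv_l0=(4, -2), ref_idx_l0=0)
print(info.direction)  # "L0"
```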
  • FIG. 6 is a flow chart of an inter prediction method according to an embodiment of the present invention.
  • the inter prediction method includes determining the inter prediction mode of the current block (S601), acquiring the motion information of the current block according to the determined inter prediction mode (S602), and performing motion compensation prediction for the current block based on the acquired motion information (S603).
  • the inter prediction mode represents various techniques for determining the motion information of the current block; an inter prediction mode using translational motion information and an inter prediction mode using affine motion information can be defined.
  • the inter prediction mode using translational motion information includes a merge mode and a motion vector prediction mode, and the inter prediction mode using affine motion information includes an affine merge mode and an affine motion vector prediction mode.
  • the motion information of the current block may be determined based on information parsed from a neighboring block or a bitstream adjacent to the current block according to the inter prediction mode.
  • the motion information of the current block may be derived from the motion information of the other block of the current block.
  • the other block may be a block encoded/decoded by inter prediction prior to the current block.
  • setting the motion information of the current block to be the same as the motion information of the other block can be defined as the merge mode. In addition, setting the motion vector of the other block as the predicted value of the motion vector of the current block can be defined as the motion vector prediction mode.
  • FIG. 7 shows a process of deriving the motion information of the current block in the merge mode.
  • the merge candidate of the current block can be derived (S701).
  • the merge candidate of the current block can be derived from the block encoded/decoded by inter prediction before the current block.
  • FIG. 8 is a diagram illustrating candidate blocks used to derive a merge candidate.
  • candidate blocks may include at least one of neighboring blocks containing a sample adjacent to the current block, or non-neighboring blocks containing a sample not adjacent to the current block.
  • the samples that determine candidate blocks are defined as reference samples.
  • a reference sample adjacent to the current block will be referred to as a neighbor reference sample
  • a reference sample not adjacent to the current block will be referred to as a non-neighbor reference sample.
  • the neighboring reference sample may be included in the column adjacent to the leftmost column of the current block or the row adjacent to the topmost row of the current block. For example, when the coordinates of the top-left sample of the current block are (0, 0), at least one of a block containing the reference sample at position (-1, H-1), a block containing the reference sample at position (W-1, -1), a block containing the reference sample at position (W, -1), a block containing the reference sample at position (-1, H), or a block containing the reference sample at position (-1, -1) can be used as a candidate block. Referring to the drawing, the neighboring blocks of index 0 to index 4 can be used as candidate blocks.
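The neighboring candidate positions listed above can be written out explicitly. Coordinates are (x, y) relative to the top-left sample of a W x H current block; the label names are illustrative.

```python
# Sketch of the five spatial neighboring reference sample positions
# listed in the text, relative to a current block whose top-left
# sample is at (0, 0) and whose size is width x height.

def spatial_candidate_positions(width, height):
    return {
        "left":        (-1, height - 1),
        "top":         (width - 1, -1),
        "top_right":   (width, -1),
        "bottom_left": (-1, height),
        "top_left":    (-1, -1),
    }

print(spatial_candidate_positions(16, 8))
```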
  • the non-neighboring reference sample represents a sample for which at least one of the x-axis distance or the y-axis distance from a reference sample adjacent to the current block has a predefined value.
  • for example, at least one of a block containing a reference sample whose x-axis distance from the left reference sample is a predefined value, a block containing a non-neighboring sample whose y-axis distance from the top reference sample is a predefined value, or a block containing a non-neighboring sample whose x-axis distance and y-axis distance from the top-left reference sample are predefined values can be used as a candidate block. The predefined value can be a natural number such as 4, 8, 12, or 16. Referring to the drawing, at least one of the blocks of index 5 to index 26 can be used as a candidate block.
  • FIG. 9 is a diagram showing the positions of reference samples.
  • the X coordinates of the upper non-neighboring reference samples may be set differently from the X coordinates of the upper neighboring reference samples.
  • for example, when the position of the top neighboring reference sample is (W-1, -1), the position of the top non-neighboring reference sample that is a distance N away from the top neighboring reference sample can be set to ((W/2)-1, -1-N), and the position of the top non-neighboring reference sample that is a distance 2N away can be set to (0, -1-2N). That is, the position of a non-adjacent reference sample can be determined based on the position of an adjacent reference sample and the distance from the adjacent reference sample.
  • hereinafter, a candidate block including a neighboring reference sample is referred to as a neighboring block, and a block including a non-neighboring reference sample is referred to as a non-neighboring block.
  • when the distance between the current block and the candidate block is greater than or equal to a threshold value, the candidate block may be set to be unavailable as a merge candidate. The threshold value may be determined based on the size of the coding tree unit. For example, the threshold value can be set to the height of the coding tree unit (ctu_height), or to a value obtained by adding or subtracting an offset from the height of the coding tree unit (e.g., ctu_height ± N). The offset N is a value predefined in the encoder and decoder, and can be 4, 8, 16, 32, or ctu_height.
  • in such a case, the candidate block may be determined to be unavailable as a merge candidate. That is, a candidate block including such a reference sample may be set to be unavailable as a merge candidate.
  • alternatively, candidate blocks can be set so that the number of candidate blocks positioned to the left of the current block is greater than the number of candidate blocks positioned above the current block.
  • FIG. 10 is a diagram illustrating candidate blocks used to derive a merge candidate.
  • the upper blocks belonging to the upper N block rows of the current block and the left blocks belonging to the left M block columns of the current block can be set as candidate blocks. In this case, M can be set larger than N, so that the number of left candidate blocks is greater than the number of upper candidate blocks.
  • as an example, it can be set so that the difference between the y-axis coordinate of the reference sample in the current block and the y-axis coordinate of an upper block usable as a candidate block does not exceed N times the current block height. Also, it can be set so that the difference between the x-axis coordinate of the reference sample in the current block and the x-axis coordinate of a left block usable as a candidate block does not exceed M times the current block width.
  • in the illustrated example, blocks belonging to the block rows above the current block and blocks belonging to the five block columns to the left of the current block are shown set as candidate blocks.
  • when a candidate block does not belong to the same coding tree unit as the current block, the candidate block may be set to be unavailable as a merge candidate. Instead, a block belonging to the same coding tree unit as the current block, or a block including a reference sample adjacent to the boundary of the coding tree unit, may be used to derive a merge candidate.
  • FIG. 11 is a view showing an example in which the position of the reference sample is changed.
  • instead of a reference sample belonging to a coding tree unit different from the current block, a reference sample adjacent to the boundary of the coding tree unit can be used to determine a candidate block. For example, in the examples shown in FIG. 11, when the upper boundary of the current block and the upper boundary of the coding tree unit do not coincide, the reference samples at the top of the current block belong to a coding tree unit different from the current block. Among these, a reference sample that is not adjacent to the upper boundary of the coding tree unit can be replaced with a sample located at the upper boundary of the coding tree unit.
  • for example, the reference sample at position 6 can be replaced with the sample at position 6' located at the upper boundary of the coding tree unit, and the reference sample at position 15 can be replaced with the sample at position 15' located at the upper boundary of the coding tree unit.
  • in this case, the y coordinate of the replacement sample is changed to a position adjacent to the boundary of the coding tree unit, while the x coordinate of the replacement sample can be set the same as that of the reference sample. For example, the sample at position 6' has the same x coordinate as the sample at position 6, and the sample at position 15' has the same x coordinate as the sample at position 15.
  • alternatively, the x coordinate of the replacement sample can be set to a value obtained by adding or subtracting an offset from the x coordinate of the reference sample. For example, when a neighboring reference sample and a non-neighboring reference sample located at the top of the current block have the same x coordinate, a value obtained by adding or subtracting an offset from the x coordinate of the reference sample can be set as the x coordinate of the replacement sample. This is to prevent the replacement sample replacing the non-neighboring reference sample from having the same position as another non-neighboring reference sample or a neighboring reference sample.
  • FIG. 12 is a view showing an example in which the position of the reference sample is changed.
  • referring to FIG. 12, the reference sample at position 6 and the reference sample at position 15 can be replaced, respectively, with the sample at position 6' and the sample at position 15' located in the row adjacent to the upper boundary of the coding tree unit, with their x coordinates changed. For example, the x coordinate of the sample at position 6' can be set to a value obtained by subtracting W/2 from the x coordinate of the reference sample at position 6, and the x coordinate of the sample at position 15' can be set to a value obtained by subtracting a predetermined offset from the x coordinate of the reference sample at position 15.
  • when the reference sample is not included in the same coding tree unit as the current block and is not adjacent to the left boundary of the coding tree unit, the reference sample can be replaced with a sample adjacent to the left boundary of the coding tree unit. The replacement sample has the same y coordinate as the reference sample, or can have a y coordinate obtained by adding or subtracting an offset from the y coordinate of the reference sample.
  • thereafter, a block including the replacement sample is set as a candidate block, and a merge candidate of the current block can be derived based on the candidate block.
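The replacement rule can be sketched as follows; `ctu_top_y` marks the row adjacent to the coding tree unit's upper boundary, and `x_offset` is the optional offset discussed above. All names are illustrative.

```python
# Sketch of the reference-sample replacement at a CTU boundary: a top
# reference sample lying above the CTU is clamped onto the boundary
# row, keeping its x coordinate or shifting it by an offset.

def replace_reference_sample(x, y, ctu_top_y, x_offset=0):
    """Clamp a top reference sample to the CTU's upper boundary row."""
    if y < ctu_top_y:                     # sample lies above the CTU
        return (x - x_offset, ctu_top_y)  # move onto the boundary row
    return (x, y)                         # already usable: unchanged

print(replace_reference_sample(20, -9, ctu_top_y=-1))             # (20, -1)
print(replace_reference_sample(20, -9, ctu_top_y=-1, x_offset=8)) # (12, -1)
```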
  • a merge candidate can also be derived from a temporal neighboring block included in a picture different from the current block. For example, a merge candidate can be derived from a collocated block included in a collocated picture.
  • any one of the reference pictures included in the reference picture list can be set as the collocated picture. Index information for identifying the collocated picture among the reference pictures may be signaled through the bitstream.
  • the motion information of the merge candidate can be set the same as the motion information of the candidate block.
  • At least one of the motion vector of the candidate block, the reference picture index, the prediction direction, or the bidirectional weight index can be set as the motion information of the merge candidate.
  • a merge candidate list containing the merge candidates can be generated (S702).
  • the indexes of the merge candidates in the merge candidate list may be assigned in a predetermined order. For example, indexes may be assigned in the order of the merge candidate derived from the left neighboring block, the merge candidate derived from the upper neighboring block, the merge candidate derived from the upper-right neighboring block, the merge candidate derived from the lower-left neighboring block, the merge candidate derived from the upper-left neighboring block, and the merge candidate derived from the temporal neighboring block.
  • when the merge candidate list includes a plurality of merge candidates, at least one of the plurality of merge candidates may be selected (S703).
  • specifically, information for specifying any one of the plurality of merge candidates can be signaled through the bitstream. For example, information indicating the index of any one of the merge candidates included in the merge candidate list can be signaled through the bitstream.
  • when the number of merge candidates included in the merge candidate list is less than a threshold value, a motion information candidate included in the motion information table can be added to the merge candidate list as a merge candidate. Here, the threshold value may be the maximum number of merge candidates that the merge candidate list can include, or a value obtained by subtracting an offset from the maximum number of merge candidates. The offset may be a natural number such as 1 or 2.
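The fill rule just described can be sketched as below. Candidates are plain tuples, and the most-recent-first traversal of the motion information table is an assumption made for illustration.

```python
# Sketch: when the merge candidate list has fewer entries than the
# threshold (max candidates minus an offset), candidates from the
# motion information table are appended, skipping duplicates.

def fill_merge_list(merge_list, motion_info_table, max_candidates, offset=0):
    threshold = max_candidates - offset
    for candidate in reversed(motion_info_table):  # most recent first
        if len(merge_list) >= threshold:
            break
        if candidate not in merge_list:            # skip duplicates
            merge_list.append(candidate)
    return merge_list

merge_list = [("mv", 1)]
table = [("mv", 2), ("mv", 3), ("mv", 1)]
print(fill_merge_list(merge_list, table, max_candidates=4))
# [('mv', 1), ('mv', 3), ('mv', 2)]
```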
  • the motion information table includes motion information candidates derived from blocks encoded/decoded based on inter prediction in the current picture. For example, the motion information of a motion information candidate included in the motion information table can be set the same as the motion information of a block encoded/decoded based on inter prediction.
  • the motion information may include at least one of a motion vector, a reference picture index, a prediction direction, or a bidirectional weight index.
  • Motion information candidates included in the motion information table may be referred to as inter-area merge candidates or predicted-area merge candidates.
  • the maximum number of motion information candidates that the motion information table can include may be predefined in the encoder and decoder.
  • for example, the maximum number of motion information candidates that the motion information table can contain may be 1, 2, 3, 4, 5, 6, 7, 8 or more (e.g., 16).
  • alternatively, information indicating the maximum number of motion information candidates of the motion information table may be signaled through the bitstream.
  • the information may be signaled at the sequence, picture, or slice level.
  • the information may indicate the maximum number of motion information candidates that the motion information table can contain. Alternatively, the information may represent the difference between the maximum number of motion information candidates that the motion information table can contain and the maximum number of merge candidates that the merge candidate list can contain.
  • the maximum number of motion information candidates that the motion information table can contain may be determined according to the size of the picture, the size of the slice, or the size of the coding tree unit.
  • the motion information table can be initialized in units of picture, slice, tile, brick, coding tree unit, or coding tree unit line (row or column). For example, when a slice is initialized, the motion information table is also initialized. , The motion information table may not contain any motion information candidates.
  • alternatively, information indicating whether to initialize the motion information table can be signaled through the bitstream. The information can be signaled at the slice, tile, brick, or block level. Until the information instructs to initialize the motion information table, the previously constructed motion information table can be used.
  • alternatively, information about an initial motion information candidate may be signaled through a picture parameter set or a slice header. Even if the slice is initialized, the motion information table may contain the initial motion information candidate. Accordingly, the initial motion information candidate can also be used for the first block to be encoded/decoded in the slice.
  • the motion information candidate included in the motion information table of the previous coding tree unit can be set as the initial motion information candidate.
  • among the motion information candidates included in the motion information table of the previous coding tree unit, the motion information candidate with the smallest index or the motion information candidate with the largest index can be set as the initial motion information candidate.
  • Blocks are encoded/decoded according to the encoding/decoding order, but blocks encoded/decoded based on inter prediction can be sequentially set as motion information candidates according to the encoding/decoding order.
  • FIG. 13 is a diagram for explaining an update mode of a motion information table.
  • motion information candidates can be derived based on the current block (S1302).
  • the motion information of the motion information candidate is set the same as the motion information of the current block.
  • the motion information candidate derived based on the current block can be added to the motion information table (S1304).
  • if the motion information table already contains motion information candidates (S1303), a redundancy check can be performed for the motion information of the current block (or the motion information candidate derived based on it) (S1305). The redundancy check is to determine whether the motion information of a motion information candidate stored in the motion information table and the motion information of the current block are the same.
  • the redundancy check can be performed on all motion information candidates previously stored in the motion information table, or only on some of them. For example, a redundancy check may be performed only for motion information candidates whose index is above or below a threshold value. As another example, a redundancy check may be performed only for a predefined number of motion information candidates; for example, the two motion information candidates with the smallest indexes or the two motion information candidates with the largest indexes may be determined as the objects of the redundancy check.
  • if no identical candidate is found, the motion information candidate derived based on the current block can be added to the motion information table (S1308). Whether motion information candidates are the same may be determined based on whether the motion information (e.g., motion vector and/or reference picture index, etc.) of the candidates is the same.
  • if the maximum number of motion information candidates is already stored in the motion information table, the oldest motion information candidate is deleted (S1307), and the motion information candidate derived based on the current block is added to the motion information table (S1308).
  • the oldest motion information candidate may be a motion information candidate with the largest index or a motion information candidate with the smallest index.
  • Motion information candidates can be identified by respective indexes.
  • when the motion information candidate derived from the current block is assigned the lowest index (e.g., 0), the indexes of the previously stored motion information candidates can be increased by 1. At this time, if the maximum number of motion information candidates is already stored in the motion information table, the motion information candidate with the largest index is removed.
  • the motion information candidate derived from the current block is added to the motion information table.
  • the largest index can be assigned to the motion information candidate. For example, if the number of motion information candidates previously stored in the motion information table is less than the maximum value, the motion information candidate is assigned an index equal to the number of previously stored candidates. If the number of candidates stored in the table is equal to the maximum value, the motion information candidate is assigned an index of the maximum value minus 1; in this case, the motion information candidate with the smallest index is removed, and the indexes of the remaining stored candidates are decreased by 1.
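As a rough illustration of the index management described above, the FIFO behavior can be sketched as follows. This is a hypothetical helper, not part of the specification; a Python list stands in for the table, with list position serving as the candidate index.

```python
def update_motion_info_table(table, mv_cand, max_size):
    """Append mv_cand so it receives the largest index; if the table
    already holds max_size candidates, evict the candidate with the
    smallest index (the oldest) first."""
    if len(table) == max_size:
        table.pop(0)          # remove the candidate with index 0;
                              # remaining indexes effectively drop by 1
    table.append(mv_cand)     # the new candidate gets the largest index
    return table
```

With a maximum size of 3, adding a fourth candidate evicts the oldest one.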
  • FIG. 14 is a diagram showing an update pattern of a motion information table.
  • the motion information candidate derived from the current block is added to the motion information table, and the largest index is allocated to the motion information candidate.
  • it is assumed that the motion information table already stores the maximum number of motion information candidates.
  • specifically, when the motion information candidate mvCand derived from the current block is added to the motion information table HmvpCandList, the candidate with the smallest index is removed and the indexes of the previously stored motion information candidates can be decreased by 1. In addition, the index of the motion information candidate mvCand derived from the current block can be set to the maximum value (n in the example shown in Fig. 14).
  • the motion information candidate derived based on the current block may not be added to the motion information table (S1309).
  • the motion information candidate derived based on the current block is added to the motion information table.
  • FIG. 15 is a diagram showing an example in which the index of a previously stored motion information candidate is updated.
  • if a previously stored motion information candidate is identical to the motion information candidate mvCand derived based on the current block, the previously stored candidate is deleted, and the indexes of the motion information candidates greater than its index are decreased by 1. For example, in the example shown in Fig. 15, mvCand is identical to hmvCand[2], so hmvCand[2] is deleted from the motion information table HmvpCandList, and the indexes from hmvCand[3] to hmvCand[n] are shown as decreasing by one.
  • or, when the motion information candidate mvCand derived based on the current block is not added to the motion information table, the index assigned to the previously stored motion information candidate identical to the motion information candidate derived based on the current block can be updated.
  • the index of the previously stored motion information candidate can be changed to a minimum value or a maximum value.
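The duplicate handling above might be sketched as follows, assuming candidates compare equal when their motion information (motion vector and reference picture index) matches. The helper name and list representation are illustrative, not from the specification.

```python
def update_with_redundancy_check(table, mv_cand, max_size):
    """If mv_cand duplicates a stored candidate, delete the stored copy
    (indexes above it decrease by 1) and re-append mv_cand with the
    largest index; otherwise fall back to the plain FIFO update."""
    if mv_cand in table:
        table.remove(mv_cand)      # delete the identical stored candidate
    elif len(table) == max_size:
        table.pop(0)               # table full: evict the oldest candidate
    table.append(mv_cand)          # mv_cand now has the largest index
    return table
```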
  • motion information candidates derived based on the motion information of blocks included in the merge processing area may not be added to the motion information table. Since the encoding/decoding order of the blocks included in the merge processing area is not defined, it is inappropriate to use one of these motion information for inter prediction of other blocks. Accordingly, the block included in the merge processing area is inappropriate. Motion information candidates derived on the basis of these may not be added to the motion information table.
  • motion information of a block smaller than a preset size may be set not to be added to the motion information table. For example, motion information of a coding block whose width or height is smaller than 4 or 8, or motion information of a 4x4 coding block, may not be added to the motion information table.
  • motion information candidates can be derived based on the motion information of the representative sub-block among a plurality of sub-blocks included in the current block. For example, when a sub-block merge candidate is used for the current block, the motion information candidate can be derived based on the motion information of the representative sub-block among the sub-blocks.
  • specifically, the motion vectors of the sub-blocks can be derived in the following order. First, any one of the merge candidates included in the merge candidate list of the current block is selected, and an initial shift vector can be derived based on the motion vector of the selected merge candidate. Then, a shift sub-block whose reference sample is at position (xColSb, yColSb) can be obtained by adding the initial shift vector to the position (xSb, ySb) of the reference sample (e.g., the upper left sample or the center sample) of each sub-block in the coding block. Equation 1 below represents the equation for deriving the shift sub-block.
  • then, the motion vector of the collocated block corresponding to the center position of the sub-block containing (xColSb, yColSb) can be set as the motion vector of the sub-block containing (xSb, ySb).
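The position derivation above can be sketched roughly as follows. The `>> 4` step assumes the shift vector is stored in 1/16-pel units; that precision, like the function name, is an assumption for illustration rather than something stated in this passage.

```python
def derive_collocated_subblock_pos(x_sb, y_sb, shift_vec):
    """Add the initial shift vector to the sub-block's reference-sample
    position (xSb, ySb) to obtain the shift sub-block position
    (xColSb, yColSb). shift_vec is assumed to be in 1/16-pel units."""
    x_col_sb = x_sb + (shift_vec[0] >> 4)   # convert 1/16 pel to full pel
    y_col_sb = y_sb + (shift_vec[1] >> 4)
    return x_col_sb, y_col_sb
```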
  • the representative sub-block can mean a sub-block including the top left sample, the center sample, the bottom right sample, the top right sample, or the bottom left sample of the current block.
  • Fig. 16 is a diagram showing the location of the representative sub-block.
  • Fig. 16 shows examples in which the sub-block located at the top left of the current block or the sub-block located at the center of the current block is set as the representative sub-block.
  • motion information candidates for the current block can be derived based on the motion vector of the sub-block containing the upper left sample of the current block or the sub-block containing the center sample of the current block. have.
  • a block encoded/decoded based on the affine motion model may be set to be unavailable as a motion information candidate. Accordingly, even if the current block is encoded/decoded by inter prediction, when the inter prediction mode of the current block is the affine prediction mode, the motion information table may not be updated based on the current block.
  • the current block may be set as unavailable as motion information candidates.
  • or, a motion information candidate may be derived based on at least one sub-block vector of the sub-blocks included in a block encoded/decoded based on the affine motion model. For example, the motion information candidate can be derived by using the sub-block located at the top left of the current block, the sub-block located at the center, or the sub-block located at the top right. Or, the average value of the sub-block vectors of a plurality of sub-blocks can be set as the motion vector of the motion information candidate.
  • or, at least one of the first affine seed vector, the second affine seed vector, or the third affine seed vector of the current block can be set as the motion vector of the motion information candidate.
  • a motion information table can be configured for each inter prediction mode. For example, at least one of a motion information table for blocks encoded/decoded with intra block copy, a motion information table for blocks encoded/decoded based on the translational motion model, or a motion information table for blocks encoded/decoded based on the affine motion model can be defined. Depending on the inter prediction mode of the current block, any one of the plurality of motion information tables can be selected.
  • FIG. 17 shows an example in which a motion information table is generated for each inter prediction mode.
  • when a block is encoded/decoded based on a non-affine motion model, the motion information candidate mvCand derived from the block can be added to the non-affine motion information table HmvpCandList. On the other hand, when a block is encoded/decoded based on the affine motion model, the motion information candidate mvAfCand derived from the block can be added to the affine motion information table HmvpAfCandList.
  • the motion information candidate may store the affine seed vectors of the block. Accordingly, the motion information candidate can be used as a merge candidate for deriving the affine seed vectors of the current block.
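A per-mode table selection like the one described can be sketched as below; the mode keys and dict layout are illustrative assumptions, not the specification's data layout.

```python
# One table per inter prediction mode, mirroring the names in the text.
tables = {
    "ibc": [],            # intra block copy table
    "translational": [],  # non-affine table (HmvpCandList)
    "affine": [],         # affine table (HmvpAfCandList)
}

def select_table(inter_pred_mode):
    """Pick the motion information table matching the block's mode."""
    return tables[inter_pred_mode]
```

A candidate derived from an affine-coded block would then be appended to `select_table("affine")`.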
  • additionally, a motion information table can be configured for each motion vector resolution.
  • for example, at least one of a motion information table for storing motion information whose motion vector resolution is 1/16 pel or a motion information table for storing motion information whose motion vector resolution is 1/4 pel can be defined.
  • FIG. 18 shows an example in which a motion information table is generated for each motion vector resolution.
  • for example, if the motion vector resolution of a block is a quarter pel, the motion information mvCand of the block can be stored in the quarter-pel motion information table HmvpQPCandList.
  • if the motion vector resolution of the block is an integer pel or 4 integer pel, the motion information mvCand of the block can be stored in the corresponding integer-pel motion information table (e.g., HmvpIPCandList).
  • a motion information table can be selected according to the motion vector resolution of the current block, and the selected table can be used to derive a merge candidate of the current block.
  • for example, if the motion vector resolution of the current block is a quarter pel, the quarter-pel motion information table HmvpQPCandList can be used to derive merge candidates for the current block.
  • if the motion vector resolution of the current block is an integer pel, the integer-pel motion information table HmvpIPCandList can be used to derive merge candidates for the current block.
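The resolution-based selection can be sketched the same way; the resolution keys below are illustrative stand-ins for the tables named in the text (HmvpQPCandList, HmvpIPCandList).

```python
def select_table_by_resolution(resolution, tables):
    """Return the table matching the block's motion vector resolution,
    e.g. 'quarter_pel' -> HmvpQPCandList, 'integer_pel' -> HmvpIPCandList."""
    return tables[resolution]

def store_motion_info(resolution, mv_cand, tables):
    """Store mvCand in the table selected by the block's resolution."""
    select_table_by_resolution(resolution, tables).append(mv_cand)
```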
  • the motion information of the block to which the merge offset encoding method is applied can be stored in a separate motion information table.
  • FIG. 19 shows an example in which motion information of a block to which the merge offset encoding method is applied is stored in a separate motion information table.
  • depending on whether the merge offset encoding method is applied to the current block, a motion information table can be selected.
  • for example, when the merge offset encoding method is applied to the current block, the merge offset motion information table HmvpMMVDCandList can be used to derive merge candidates for the current block.
  • a long term motion information table (hereinafter referred to as the second motion information table) may be defined.
  • the long term motion information table includes long term motion information candidates.
  • both the first motion information table and the second motion information table are empty, first, a motion information candidate can be added to the second motion information table.
  • only after the number of motion information candidates stored in the second motion information table reaches its maximum can motion information candidates be added to the first motion information table.
  • one motion information candidate may be added to both the second motion information table and the first motion information table.
  • or, the second motion information table can be updated whenever a coding tree unit line is completed, or every N coding tree unit lines.
  • the first motion information table can be updated whenever a block encoded/decoded by inter prediction occurs.
  • or, a motion information candidate added to the second motion information table may be set not to be used to update the first motion information table.
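The fill order of the two tables (long-term table first, then the regular table) might look like this; the helper is a hypothetical sketch, not the specification's procedure.

```python
def add_candidate(first_table, second_table, mv_cand, second_max):
    """Add candidates to the second (long-term) table until it reaches
    its maximum; only then do new candidates go to the first table."""
    if len(second_table) < second_max:
        second_table.append(mv_cand)
    else:
        first_table.append(mv_cand)
```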
  • Information for selecting either the first motion information table or the second motion information table may be signaled through a bitstream.
  • for example, when the number of merge candidates included in the merge candidate list is less than the threshold, the motion information candidates included in the motion information table indicated by the above information can be added to the merge candidate list as merge candidates.
  • a motion information table may be selected based on the size, shape, inter prediction mode, bi-directional prediction, motion vector refinement, or triangular partitioning of the current block.
  • motion information candidates included in the second motion information table can be added to the merge candidate list.
  • FIG. 20 is a diagram showing an example in which motion information candidates included in the long term motion information table are added to the merge candidate list.
  • the motion information candidates included in the first motion information table HmvpCandList can be added to the merge candidate list.
  • even though the motion information candidates included in the first motion information table HmvpCandList are added to the merge candidate list, if the number of merge candidates included in the merge candidate list is less than the maximum number, the motion information candidates included in the long-term motion information table HmvpLTCandList can be added to the merge candidate list.
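The two-stage fallback just described can be sketched as follows (hypothetical helper; most-recent-first traversal of HmvpCandList and a membership test as the redundancy check are illustrative assumptions):

```python
def fill_merge_list(merge_list, hmvp, hmvp_lt, max_merge):
    """Append candidates from the first table (largest index first);
    if the list is still short, fall back to the long-term table."""
    for cand in reversed(hmvp):            # HmvpCandList, newest first
        if len(merge_list) >= max_merge:
            return merge_list
        if cand not in merge_list:         # simple redundancy check
            merge_list.append(cand)
    for cand in hmvp_lt:                   # HmvpLTCandList fallback
        if len(merge_list) >= max_merge:
            break
        if cand not in merge_list:
            merge_list.append(cand)
    return merge_list
```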
  • Table 1 shows the motion information candidates included in the long term motion information table.
  • This motion information candidate can be set to include additional information in addition to motion information.
  • at least one of the size, shape, or partition information of the block can be additionally stored for the motion information candidate.
  • when a merge candidate of the current block is derived, among the motion information candidates, only those whose size, shape, or partition information is the same as or similar to that of the current block may be used. Or, a motion information table corresponding to the shape, size, or partition information of the current block can be used to construct the merge candidate list of the current block.
  • the motion information candidates included in the motion information table can be added to the merge candidate list as merge candidates.
  • specifically, motion information candidates can be added to the merge candidate list in ascending or descending order of their indexes. For example, the motion information candidate with the largest index can be added to the merge candidate list of the current block first.
  • in this case, a redundancy check may be performed between the motion information candidates and the merge candidates previously stored in the merge candidate list. As a result, a motion information candidate with the same motion information as a previously stored merge candidate may not be added to the merge candidate list.
  • Table 2 shows the process of adding motion information candidates to the merge candidate list.
  • the redundancy check may be performed only for some of the motion information candidates. For example, it can be performed only for motion information candidates whose index is above or below a threshold value, or only for the N motion information candidates with the largest indexes or the N motion information candidates with the smallest indexes.
  • or, the redundancy check can be performed against the entire merge candidate list, or only against some of the merge candidates previously stored in the merge candidate list.
  • for example, the redundancy check may be performed only for a merge candidate whose index is above or below a threshold value, or only for a merge candidate derived from a block at a specific location. The specific location may include at least one of the left neighboring block, the upper neighboring block, the upper right neighboring block, or the lower left neighboring block.
  • FIG. 21 is a diagram showing an example in which a redundancy check is performed only for some of the merge candidates.
  • when a motion information candidate HmvpCand[j] is added to the merge candidate list, a redundancy check can be performed between the motion information candidate and the two merge candidates with the largest indexes, mergeCandList[NumMerge-2] and mergeCandList[NumMerge-1].
  • or, the redundancy check can be performed only for some of the motion information candidates. For example, the redundancy check can be performed only for motion information candidates whose index differs from the number of motion information candidates included in the motion information table by no more than a threshold value. If the threshold value is 2, the redundancy check can be performed only for the three motion information candidates with the largest index values among the motion information candidates included in the motion information table. For the motion information candidates other than those three, the redundancy check may be omitted. When the redundancy check is omitted, a motion information candidate can be added to the merge candidate list regardless of whether it has the same motion information as a merge candidate.
  • the number of motion information candidates for which the redundancy check is performed may be predefined in the encoder and the decoder.
  • the threshold may be an integer such as 0, 1 or 2.
  • the threshold may be determined based on at least one of the number of merge candidates included in the merge candidate list or the number of motion information candidates included in the motion information table.
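The partial redundancy check (only the newest candidates are checked; older ones are appended unchecked) might be sketched like this, with the threshold interpreted as the count of newest candidates to check; that interpretation, like the helper itself, is an assumption.

```python
def add_with_partial_check(merge_list, table, threshold, max_merge):
    """Traverse candidates from the largest index down; run the
    redundancy check only for the `threshold` newest candidates and
    append the rest without checking."""
    n = len(table)
    for i in range(n - 1, -1, -1):          # largest index first
        if len(merge_list) >= max_merge:
            break
        cand = table[i]
        checked = (n - 1 - i) < threshold   # among the newest `threshold`?
        if checked and cand in merge_list:
            continue                        # duplicate among checked ones
        merge_list.append(cand)             # unchecked ones added as-is
    return merge_list
```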
  • when a merge candidate identical to the first motion information candidate is found, the redundancy check with that merge candidate may be omitted during the redundancy check for the second motion information candidate.
  • FIG. 22 is a diagram showing an example in which the redundancy check with a specific merge candidate is omitted.
  • the merge candidate list may include at least one of a pairwise merge candidate or a zero merge candidate. A pairwise merge candidate means a merge candidate having the average of the motion vectors of two or more merge candidates as its motion vector, and a zero merge candidate means a merge candidate whose motion vector is 0.
  • the merge candidate can be added in the following order.
  • a spatial merge candidate means a merge candidate derived from at least one of a neighboring block or a non-neighboring block, and a temporal merge candidate means a merge candidate derived from a previous reference picture. The affine motion information candidate represents a motion information candidate derived from a block encoded/decoded with the affine motion model.
  • motion information tables can also be used in the motion vector prediction mode. For example, when the number of motion vector prediction candidates included in the motion vector prediction candidate list of the current block is less than a threshold value, a motion information candidate included in the motion information table can be set as a motion vector prediction candidate for the current block.
  • the motion vector of the motion information candidate can be set as the motion vector prediction candidate.
  • the selected candidate can be set as the motion vector predictor of the current block. Thereafter, the motion vector residual value of the current block is decoded. After that, the motion vector of the current block can be obtained by summing the motion vector predictor and the motion vector residual value.
  • the motion vector prediction candidate list of the current block can be constructed in the following order.
  • the spatial motion vector prediction candidate means a motion vector prediction candidate derived from at least one of a neighboring block or a non-neighboring block.
  • the temporal motion vector prediction candidate means the motion vector prediction candidate derived from the previous reference picture.
  • the affine motion information candidate represents a motion vector prediction candidate derived from a block encoded/decoded with the affine motion model.
  • the zero motion vector prediction candidate represents the candidate whose motion vector value is 0.
  • a merge processing area of a larger size than the coding block can be defined.
  • the coding blocks included in the region are not sequentially encoded/decoded, but can be processed in parallel.
  • here, not being sequentially encoded/decoded means that the encoding/decoding order between the blocks is not defined. Accordingly, the encoding/decoding processes of the blocks included in the merge processing region can be handled independently.
  • blocks included in the merge processing area can share merge candidates.
  • the merge candidates can be derived based on the merge processing area.
  • the merge processing region may be referred to as a parallel processing region, a shared merge region (SMR), or a merge estimation region (MER).
  • in general, the merge candidate of the current block can be derived based on the coding block. However, when the current block is included in a merge processing area larger than the current block, a candidate block included in the same merge processing area as the current block may be set as unavailable as a merge candidate.
  • Fig. 23 is a diagram showing an example of determining the availability of a candidate block included in the same merge processing area as the current block. When CU5 is encoded/decoded, blocks containing reference samples adjacent to CU5 may be set as candidate blocks.
  • at this time, candidate blocks X3 and X4 included in the same merge processing area as CU5 may be set to be unavailable as merge candidates for CU5, while candidate blocks X0, X1 and X2 not included in the same merge processing area as CU5 may be set to be available as merge candidates.
  • likewise, when CU8 is encoded/decoded, blocks containing reference samples adjacent to CU8 can be set as candidate blocks. At this time, candidate blocks X6, X7 and X8 included in the same merge processing area as CU8 may be set to be unavailable as merge candidates, while candidate blocks X5 and X9 not included in the same merge processing area as CU8 can be set to be available as merge candidates.
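Availability under a merge processing area can be sketched with a grid test: two blocks share an area when their top-left positions fall in the same MER-aligned cell. The helpers and the grid-alignment assumption are illustrative, not taken from the specification.

```python
def same_merge_region(pos_a, pos_b, mer_w, mer_h):
    """True when both positions lie in the same merge processing area,
    assuming areas are aligned to a (mer_w x mer_h) grid."""
    return (pos_a[0] // mer_w == pos_b[0] // mer_w
            and pos_a[1] // mer_h == pos_b[1] // mer_h)

def candidate_available(cur_pos, cand_pos, mer_w, mer_h):
    """A candidate block inside the current block's merge processing
    area is unavailable as a merge candidate."""
    return not same_merge_region(cur_pos, cand_pos, mer_w, mer_h)
```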
  • a neighboring block adjacent to the current block and a neighboring block adjacent to the merge processing area may be set as candidate blocks.
  • FIG. 24 is a diagram showing an example of inducing a merge candidate for the current block when the current block is included in the merge processing area.
  • neighboring blocks adjacent to the current block may be set as candidate blocks for inducing merge candidates of the current block.
  • at this time, a candidate block included in the merge processing area may be set as unavailable as a merge candidate. For example, when the merge candidate for coding block CU3 is derived, the top neighboring block X3 and the top right neighboring block X4 included in the same merge processing area as CU3 may be set to be unavailable as merge candidates for coding block CU3.
  • by scanning candidate blocks in a predefined order, a merge candidate can be derived. For example, the predefined order may be the order of X1, X3, X4, X0, and X2.
  • in the example shown in Fig. 24 (b), the merge candidate for the current block is derived using neighboring blocks adjacent to the merge processing area. For example, candidate blocks adjacent to the merge processing area including coding block CU3 can be set as candidate blocks for coding block CU3.
  • here, the neighboring blocks adjacent to the merge processing area may include at least one of the left neighboring block, the upper neighboring block, the lower left neighboring block, the upper right neighboring block, or the upper left neighboring block.
  • for example, the predefined order may be the order of X1, X3, X4, X0, and X2.
  • in addition, the merge candidate for coding block CU3 included in the merge processing area can be derived by scanning the candidate blocks according to the predefined scan order.
  • the scanning order of the candidate blocks illustrated above is only an example of the present invention, and it is possible to scan the candidate blocks in a different order from the above example. Or, the current block or the merge processing area. It is also possible to adaptively determine the scan order based on at least one of the size or shape.
  • the merge processing area may be square or amorphous.
  • information for determining the merge processing area can be signaled through a bitstream.
  • the above information includes at least one of information indicating the shape of the merge processing area or information indicating the size of the merge processing area.
  • when the merge processing area is non-square, at least one of information indicating the size of the merge processing area, information indicating the width and/or height of the merge processing area, or information indicating the ratio between the width and the height of the merge processing area can be signaled through the bitstream.
  • the size of the merge processing area may be determined based on at least one of information signaled through the bitstream, picture resolution, slice size, or tile size.
  • motion information candidates derived based on the motion information of the block on which the motion compensation prediction was performed can be added to the motion information table.
  • or, the motion information table may be updated based only on motion information of blocks at predefined positions within the merge processing area.
  • the predefined positions may include at least one of the block located at the upper left corner, the block located at the upper right corner, the block located at the lower left corner, the block located at the lower right corner, the block located at the center, a block adjacent to the right boundary, or a block adjacent to the lower boundary. For example, only the motion information of the block adjacent to the lower right corner of the merge processing area is updated in the motion information table, and the motion information of the other blocks may not be updated in the motion information table.
  • or, after decoding of all blocks included in the merge processing area is completed, motion information candidates derived from those blocks can be added to the motion information table. That is, the motion information table may not be updated while the blocks included in the merge processing area are being encoded/decoded.
  • motion information candidates derived from the blocks can be added to the motion information table in a predefined order.
  • the predefined order may be determined according to the scanning order of the coding blocks in the merge processing area or the coding tree unit.
  • the scanning order may be at least one of raster scan, horizontal scan, vertical scan, or zigzag scan.
  • or, the predefined order may be determined based on the motion information of each block or the number of blocks having the same motion information.
  • for example, a motion information candidate including uni-directional motion information may be added to the motion information table earlier than a motion information candidate including bi-directional motion information. Or, conversely, a motion information candidate including bi-directional motion information may be added to the motion information table earlier than a motion information candidate including uni-directional motion information.
  • motion information candidates can be added to the motion information table in the order of high frequency of use or low frequency of use in the merge processing area or coding tree unit.
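Ordering candidates so that bi-directional ones enter the table first can be sketched as a stable sort; the `(motion_info, is_bidirectional)` tuple layout is an illustrative assumption.

```python
def order_candidates(cands):
    """Stable-sort candidates so bi-directional entries come first;
    each entry is assumed to be (motion_info, is_bidirectional)."""
    return sorted(cands, key=lambda c: not c[1])  # False sorts first
```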
  • the motion information candidate included in the motion information table can be added to the merge candidate list.
  • motion information candidates derived from blocks included in the same merge processing area as the current block may be set not to be added to the merge candidate list of the current block.
  • in this case, the motion information candidates included in the motion information table may be set not to be used. That is, even if the number of merge candidates included in the merge candidate list of the current block is smaller than the maximum number, the motion information candidates included in the motion information table may not be added to the merge candidate list.
  • a motion information table for a merge processing area or a coding tree unit can be constructed.
  • This motion information table serves to temporarily store motion information of blocks included in the merge processing area.
  • in order to distinguish it from the general motion information table, the motion information table for the merge processing area or the coding tree unit will be referred to as a temporary motion information table.
  • in addition, a motion information candidate added to the temporary motion information table will be referred to as a temporary motion information candidate.
  • FIG. 25 is a diagram showing a temporary motion information table.
  • a temporary motion information table for the coding tree unit or the merge processing area can be configured.
  • when a block included in the coding tree unit or the merge processing area is encoded/decoded by inter prediction, the motion information of the block may not be added to the motion information table HmvpCandList; instead, a temporary motion information candidate derived from the block can be added to the temporary motion information table HmvpMERCandList. That is, a temporary motion information candidate added to the temporary motion information table may not be added to the motion information table. Accordingly, the motion information table may not include motion information candidates derived based on the motion information of blocks included in the coding tree unit or merge processing area containing the current block.
• the maximum number of temporary motion information candidates that the temporary motion information table can contain may be set equal to the maximum number of motion information candidates that the motion information table can contain. Alternatively, the maximum number of temporary motion information candidates may be determined according to the size of the coding tree unit or the merge processing area, or may be set smaller than the maximum number of motion information candidates that the motion information table can contain.
• the current block included in the coding tree unit or the merge processing area can be set not to use the temporary motion information table for that coding tree unit or merge processing area. That is, if the number of merge candidates included in the merge candidate list of the current block is less than the threshold value, the motion information candidates included in the motion information table may be added to the merge candidate list, while the temporary motion information candidates included in the temporary motion information table are not added to the merge candidate list. Accordingly, motion information of other blocks included in the same coding tree unit or the same merge processing area as the current block may not be used for the motion compensation prediction of the current block.
• FIG. 26 is a diagram showing an example of merging the motion information table and the temporary motion information table.
• the temporary motion information candidates included in the temporary motion information table can be updated into the motion information table, as in the example shown in FIG. 26.
  • the temporary motion information candidates included in the temporary motion information table may be added to the motion information table in the order they are inserted into the temporary motion information table (ie, ascending or descending index value).
  • the temporary motion information candidates included in the temporary motion information table can be added to the motion information table.
• the predefined order can be determined according to the scanning order of blocks within the merge processing area or the coding tree unit. The scanning order may be at least one of raster scan, horizontal scan, vertical scan, or zigzag scan. Alternatively, the predefined order can be determined based on the motion information of each block or the number of blocks having the same motion information.
• a temporary motion information candidate containing unidirectional motion information may be added to the motion information table earlier than a temporary motion information candidate containing bidirectional motion information.
  • temporary motion information candidates may be added to the motion information table in the order of high frequency of use or low frequency of use in the merge processing area or the coding tree unit.
  • a redundancy test for the temporary motion information candidates can be performed.
• through the redundancy check, if the same motion information candidate as a temporary motion information candidate included in the temporary motion information table is previously stored in the motion information table, the temporary motion information candidate may not be added to the motion information table.
• the redundancy check may be performed on only some of the motion information candidates included in the motion information table. For example, the redundancy check may be performed only on motion information candidates whose index is above or below a threshold. If a temporary motion information candidate is the same as a motion information candidate with an index above the predefined value, the temporary motion information candidate may not be added to the motion information table.
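The merge step with a partial redundancy check can be sketched as follows. This is a hypothetical Python model (the function name, list-based table, and `check_count` window are illustrative assumptions): only the most recent `check_count` entries of the motion information table are compared, so an older duplicate may intentionally survive, as the text allows.

```python
def merge_temp_table(motion_table, temp_table, max_size, check_count=2):
    """Move temporary candidates into the motion information table.

    The redundancy check is run only against the check_count most recent
    entries of the motion table (a partial check). When the table is full,
    the oldest candidate is dropped to make room.
    """
    for cand in temp_table:
        recent = motion_table[-check_count:]
        if cand in recent:
            continue                 # duplicate among checked entries: skip
        if len(motion_table) >= max_size:
            motion_table.pop(0)      # evict the oldest candidate
        motion_table.append(cand)
    temp_table.clear()               # the temporary table is emptied
    return motion_table
```

Note how candidate `2` below is re-added because it is no longer inside the checked window, illustrating why a partial check differs from a full one.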
  • the motion information candidate derived from the block can be restricted from being used as the merge candidate of the current block.
• the address information of the block can additionally be stored for the motion information candidate. The address information of the block may contain at least one of the position of the block, the address of the block, the index of the block, the position of the merge processing area containing the block, the address of the merge processing area containing the block, the index of the merge processing area containing the block, the position of the coding tree area containing the block, the address of the coding tree area containing the block, or the index of the coding tree area containing the block.
• the coding block may be divided into a plurality of prediction units, and prediction may be performed on each of the divided prediction units. Here, the prediction unit represents a basic unit for performing prediction.
• a coding block may be divided using at least one of a vertical line, a horizontal line, a diagonal line, or an oblique line.
  • the prediction units divided by the division line may have a shape such as a triangle, a square, a trapezoid, or a pentagon.
• the coding block may be divided into two triangular prediction units, two trapezoidal prediction units, two square prediction units, or one triangular prediction unit and one pentagonal prediction unit.
  • Information for determining at least one of the number, angle, or position of the lines dividing the coding block may be signaled through the bitstream.
• specifically, information indicating any one of the partition type candidates of the coding block may be signaled through a bitstream, or information specifying any one of a plurality of line candidates dividing the coding block may be signaled through a bitstream.
• for example, index information indicating any one of a plurality of line candidates can be signaled through the bitstream.
• each of the plurality of line candidates may differ from the others in at least one of angle or position.
• the number of line candidates available for the current block may be determined based on the size and shape of the current block, the number of available merge candidates, or whether a neighboring block at a specific location can be used as a merge candidate.
• a 1-bit flag indicating whether a line with an angle larger than that of the diagonal and/or a line with an angle smaller than that of the diagonal can be used as a line candidate may be signaled at the picture or sequence level.
• at least one of the number, angle, or position of the lines dividing the coding block may be determined adaptively based on at least one of the intra prediction mode, the inter prediction mode, the positions of the available merge candidates, or the partition pattern of a neighboring block.
  • intra prediction or inter prediction can be performed on each of the divided prediction units.
  • FIG. 27 is a diagram showing an example of dividing a coding block into a plurality of prediction units by using a diagonal line.
  • the coding block is divided into two prediction units.
• the coding block can be divided into two prediction units by using a diagonal line at least one end of which does not pass through a vertex of the coding block.
• FIG. 28 is a diagram illustrating an example of dividing a coding block into two prediction units.
  • the coding block can be divided into two prediction units by using a diagonal line where both ends are in contact with the upper and lower boundaries of the coding block.
• alternatively, the coding block can be divided into two prediction units by using a diagonal line tangent to the left and right borders of the coding block.
• the coding block can also be divided into two prediction units of different sizes. For example, by setting the diagonal line dividing the coding block so that it touches the two boundaries forming one vertex, the coding block can be divided into two prediction units of different sizes.
  • the coding block can be divided into two prediction units of different sizes.
• here, the first prediction unit may mean a prediction unit including a sample located at the lower left or a sample located at the upper left of the coding block, and the second prediction unit may mean a prediction unit including a sample located at the upper right or a sample located at the lower right of the coding block.
• conversely, a prediction unit including a sample located at the upper right or a sample located at the lower right of the coding block may be defined as the first prediction unit, and a prediction unit including a sample located at the lower left or a sample located at the upper left of the coding block may be defined as the second prediction unit.
• the division of the coding block using a horizontal line, a vertical line, a diagonal line, or an oblique line can be referred to as prediction unit partitioning.
• the prediction units generated by applying prediction unit partitioning can be referred to, according to their shape, as triangular, square, or pentagonal prediction units.
• in the embodiments described below, it is assumed that the coding block is divided using a diagonal line. Dividing the coding block into two prediction units using a diagonal line will be referred to as diagonal partitioning or triangular partitioning.
  • the prediction units can be encoded/decoded according to the embodiments described below. That is, matters related to encoding/decoding of the triangular prediction unit to be described later can also be applied to the encoding/decoding of the square prediction unit or the pentagonal prediction unit.
• whether to apply prediction unit partitioning to the coding block can be determined based on at least one of the slice type, the maximum number of merge candidates that the merge candidate list can contain, the size of the coding block, the shape of the coding block, the prediction coding mode of the coding block, or the division aspect of the parent node.
• for example, whether to apply prediction unit partitioning to the coding block can be determined based on whether the current slice is of type B. Prediction unit partitioning can be allowed only when the current slice is of type B.
• when the width or height of the coding block is larger than 64, a hardware implementation whose data processing unit has a size of 64x64 has the disadvantage of redundant access. Accordingly, if at least one of the width or height of the coding block is larger than a threshold value, dividing the coding block into a plurality of prediction units may not be allowed. For example, if at least one of the height or width of the coding block is greater than 64 (e.g., at least one of the width or height is 128), prediction unit partitioning may not be used.
• prediction unit partitioning may not be allowed for a coding block whose number of samples is greater than a threshold. For example, prediction unit partitioning may not be allowed for a coding block containing more than 4096 samples.
• prediction unit partitioning may not be allowed for a coding block in which the number of samples is smaller than a threshold. For example, if the number of samples included in the coding block is less than 64, the coding block can be set so that prediction unit partitioning is not applied.
• whether to allow prediction unit partitioning may be determined based on the width-to-height ratio of the coding block. The width-to-height ratio whRatio of the coding block can be determined as the ratio of its width CbW to its height CbH, as shown in Equation 2 below.

whRatio = CbW / CbH   (Equation 2)
• the second threshold may be the reciprocal of the first threshold. For example, when the first threshold is 16, the second threshold may be 1/16. Prediction unit partitioning can be applied to the coding block only when whRatio lies between the second threshold and the first threshold.
• as an example, prediction unit partitioning can be used only when the width-to-height ratio of the coding block is smaller than the first threshold or larger than the second threshold. For example, when the first threshold is 16, prediction unit partitioning may not be allowed for coding blocks of size 64x4 or 4x64.
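The size constraints in the preceding bullets can be combined into one check. The sketch below is illustrative Python (the function name and the choice of 16 as the first threshold are assumptions drawn from the example values above, not normative):

```python
def pu_partitioning_allowed(width, height, first_thr=16):
    """Sketch of the size-based gating of prediction unit partitioning.

    - disallowed if the width or height exceeds 64
    - disallowed if the block has fewer than 64 samples
    - disallowed if whRatio falls outside (1/first_thr, first_thr)
    """
    if width > 64 or height > 64:
        return False
    if width * height < 64:
        return False
    wh_ratio = width / height          # Equation 2: whRatio = CbW / CbH
    second_thr = 1.0 / first_thr
    if wh_ratio >= first_thr or wh_ratio <= second_thr:
        return False
    return True
```

Under these assumed values, a 64x64 block passes, while 128x64, 64x4, and 4x4 blocks are all rejected.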
• whether to allow prediction unit partitioning can also be determined based on the division aspect of the parent node. For example, if the parent node coding block is divided based on quad tree division, prediction unit partitioning can be applied to the leaf node coding block. On the other hand, if the parent node coding block is divided based on binary tree or triple tree division, the leaf node coding block can be set so that prediction unit partitioning is not allowed.
• the predefined inter prediction mode may include at least one of a merge mode, a motion vector prediction mode, an affine merge mode, or an affine motion vector prediction mode.
• whether to allow prediction unit partitioning can also be determined based on the size of the parallel processing area. For example, if the size of the coding block is larger than the size of the parallel processing area, prediction unit partitioning may not be used.
  • information indicating whether to apply prediction unit partitioning to a coding block may be signaled through a bitstream.
  • the information may be signaled at the sequence, picture, slice, or block level.
• a flag triangle_partition_flag indicating whether prediction unit partitioning is applied to the coding block can be signaled at the coding block level.
  • Information indicating the number of lines to be divided or the position of the lines may be signaled through the bitstream.
• when a coding block is divided by a diagonal line, information indicating the direction of the diagonal line dividing the coding block may be signaled through the bitstream. For example, a flag triangle_partition_type_flag indicating the direction of the diagonal line may be signaled.
• the flag indicates whether the coding block is divided by a diagonal line connecting the top left and the bottom right, or by a diagonal line connecting the top right and the bottom left. Dividing the coding block by a diagonal line connecting the top left and the bottom right is referred to as the left triangular partition type, and dividing the coding block by a diagonal line connecting the top right and the bottom left is referred to as the right triangular partition type. For example, a flag value of 0 indicates that the partition type of the coding block is the left triangular partition type, and a flag value of 1 indicates that the partition type of the coding block is the right triangular partition type.
• in addition, information indicating the position of the diagonal line dividing the coding block can be signaled through the bitstream. For example, if information indicating the sizes of the prediction units indicates that the sizes of the prediction units are the same, encoding of the information indicating the position of the diagonal line is omitted, and the coding block can be divided into two prediction units using a diagonal line passing through two vertices of the coding block. On the other hand, if the information indicating the sizes of the prediction units indicates that the sizes are not the same, the position of the diagonal line dividing the coding block can be determined based on the information indicating the position of the diagonal line. For example, when the left triangular partition type is applied to the coding block,
• the location information may indicate whether the diagonal line is in contact with the left boundary and the lower boundary of the coding block, or with the upper boundary and the right boundary.
• on the other hand, when the right triangular partition type is applied to the coding block, the location information may indicate whether the diagonal line is in contact with the right boundary and the lower boundary of the coding block, or with the upper boundary and the left boundary.
  • Information indicating the partition type of the coding block can be signaled at the coding block level. Accordingly, the partition type can be determined for each coding block to which prediction unit partitioning is applied.
  • information indicating the partition type for a sequence, picture, slice, tile, or coding tree unit may be signaled.
• for example, the partition types of coding blocks to which diagonal partitioning is applied within a sequence, picture, slice, tile, or coding tree unit can be set identically.
• alternatively, the information for determining the partition type may be encoded and signaled for the first coding unit to which prediction unit partitioning is applied within the coding tree unit, and the second and subsequent coding units to which prediction unit partitioning is applied may be set to use the same partition type as the first coding unit.
• as another example, the partition type of the coding block may be determined based on the partition type of a neighboring block.
  • the neighboring block is a neighboring block adjacent to the upper left corner of the coding block, a neighboring block adjacent to the upper right corner, a neighboring block adjacent to the lower left corner, a neighboring block located at the top, or a neighboring block located at the left. It may contain at least one of the neighboring blocks.
• for example, the partition type of the current block may be set the same as the partition type of a neighboring block. Alternatively, the partition type of the current block can be determined based on whether the left triangular partition type is applied to the upper left neighboring block, or whether the right triangular partition type is applied to the upper right or lower left neighboring block.
• the motion information of each prediction unit can be derived from the merge candidates included in the merge candidate list.
  • the merge candidate list will be referred to as a split mode merge candidate list or a triangular merge candidate list.
• the merge candidate included in the split mode merge candidate list will be referred to as a split mode merge candidate or a triangular merge candidate. Applying the merge candidate derivation method and the merge candidate list construction method described above to the derivation of split mode merge candidates and the construction of the split mode merge candidate list is also included in the idea of the present invention.
• information for determining the maximum number of split mode merge candidates that the split mode merge candidate list can contain can be signaled through the bitstream.
• the above information can indicate the difference between the maximum number of merge candidates that the merge candidate list can contain and the maximum number of split mode merge candidates that the split mode merge candidate list can contain.
• FIG. 30 is a diagram showing neighboring blocks used to derive a split mode merge candidate.
• split mode merge candidates can be derived from a neighboring block located at the top of the coding block, a neighboring block located at the left of the coding block, or a collocated block included in a picture different from that of the coding block.
• the upper neighboring block may include at least one of a block containing the sample (xCb + CbW - 1, yCb - 1) located at the top of the coding block, a block containing the sample (xCb + CbW, yCb - 1) located at the top of the coding block, or a block containing the sample (xCb - 1, yCb - 1) located at the top of the coding block. Here, (xCb, yCb) denotes the position of the top-left sample of the coding block, and CbW and CbH denote its width and height.
• the left neighboring block may include at least one of a block containing the sample (xCb - 1, yCb + CbH - 1) located at the left of the coding block or a block containing the sample (xCb - 1, yCb + CbH) located at the left of the coding block.
• the collocated block can be determined as any one of a block containing a sample adjacent to the upper right corner of the coding block in the collocated picture, or a block containing the sample (xCb + CbW/2, yCb + CbH/2) located at the center of the coding block.
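The candidate sample positions listed above can be tabulated in a small helper. This Python sketch assumes the coordinate reconstruction given in the preceding bullets (the original text is garbled at these points), and the corner position used for the collocated block is likewise an assumption:

```python
def split_mode_candidate_positions(x_cb, y_cb, cb_w, cb_h):
    """Sample positions used to locate neighbouring / collocated candidate
    blocks for split mode merge candidates (assumed reconstruction)."""
    return {
        "above":      [(x_cb + cb_w - 1, y_cb - 1),
                       (x_cb + cb_w,     y_cb - 1),
                       (x_cb - 1,        y_cb - 1)],
        "left":       [(x_cb - 1, y_cb + cb_h - 1),
                       (x_cb - 1, y_cb + cb_h)],
        "collocated": [(x_cb + cb_w,      y_cb + cb_h),   # corner (assumed)
                       (x_cb + cb_w // 2, y_cb + cb_h // 2)],  # center
    }
```

For a block at (16, 16) of size 8x8, for instance, the first above-candidate sample is (23, 15) and the center collocated sample is (20, 20).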
• the motion information of the prediction units is derived based on the split mode merge candidate list. The prediction units can share a single split mode merge candidate list.
• information for specifying at least one of the split mode merge candidates included in the split mode merge candidate list may be signaled through the bitstream.
• for example, index information specifying at least one of the split mode merge candidates may be signaled.
• the index information can specify a combination of the merge candidate of the first prediction unit and the merge candidate of the second prediction unit.
• Table 3 below is an example showing combinations of merge candidates according to the index information merge_triangle_idx.
• for example, when merge_triangle_idx is 1, the index information indicates that the motion information of the first prediction unit is derived from the merge candidate whose index is 1, and the motion information of the second prediction unit is derived from the merge candidate whose index is 0.
• based on the index information merge_triangle_idx, the split mode merge candidates for deriving the motion information of the first prediction unit and the second prediction unit can be determined.
• the partition type of the coding block to which diagonal partitioning is applied can also be determined by the index information. That is, the index information can specify the combination of the merge candidate of the first prediction unit, the merge candidate of the second prediction unit, and the division direction of the coding block.
• when the partition type of the coding block is determined by the index information, the information triangle_partition_type_flag indicating the direction of the diagonal line dividing the coding block may not be coded.
• Table 4 shows the partition type of the coding block according to the index information merge_triangle_idx.
• for example, a value of 1 indicates that the right triangular partition type is applied to the coding block.
• when Table 3 and Table 4 are combined, the index information merge_triangle_idx can specify the combination of the merge candidate of the first prediction unit, the merge candidate of the second prediction unit, and the division direction of the coding block.
• as another example, only the index information for one of the first prediction unit and the second prediction unit may be signaled, and the index of the merge candidate for the other of the first prediction unit and the second prediction unit may be determined based on that index information. For example, index information merge_triangle_idx indicating the index of any one of the split mode merge candidates may be signaled, and the merge candidate of the first prediction unit may be specified based on merge_triangle_idx. Then, based on merge_triangle_idx, the merge candidate of the second prediction unit can be specified.
• for example, the merge candidate of the second prediction unit can be derived by adding or subtracting an offset to or from merge_triangle_idx. The offset may be an integer such as 1 or 2. For example, the merge candidate of the second prediction unit can be determined as the split mode merge candidate whose index is the value of merge_triangle_idx plus 1.
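The offset-based derivation can be sketched as follows. This is an illustrative Python model; the wrap-around when the offset runs past the end of the list is an assumption added here so the sketch is total, not something stated in the text.

```python
def derive_candidate_pair(merge_triangle_idx, num_candidates, offset=1):
    """Derive both prediction units' candidate indices from one signalled
    index, using an offset of 1 or 2 as described above."""
    first = merge_triangle_idx
    second = (merge_triangle_idx + offset) % num_candidates  # wrap (assumed)
    return first, second
```

With five candidates and merge_triangle_idx equal to 2, the first unit uses candidate 2 and the second unit candidate 3.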
• alternatively, the motion information of the second prediction unit can be derived from a split mode merge candidate having the same reference picture as the split mode merge candidate of the first prediction unit.
• here, a split mode merge candidate having the same reference picture as the split mode merge candidate of the first prediction unit may represent a split mode merge candidate in which at least one of the L0 reference picture or the L1 reference picture is the same as that of the split mode merge candidate of the first prediction unit.
• when there are multiple split mode merge candidates having the same reference picture as the split mode merge candidate of the first prediction unit, any one of them can be selected based on at least one of whether the merge candidate contains bidirectional motion information or the difference between its index and the signaled index information.
• as another example, index information may be signaled for each of the first prediction unit and the second prediction unit.
• for example, first index information 1st_merge_idx for determining the split mode merge candidate of the first prediction unit and second index information 2nd_merge_idx for determining the split mode merge candidate of the second prediction unit can be signaled through the bitstream.
• the motion information of the first prediction unit is derived from the split mode merge candidate determined based on the first index information 1st_merge_idx, and the motion information of the second prediction unit can be derived from the split mode merge candidate determined based on the second index information 2nd_merge_idx.
• the first index information 1st_merge_idx can indicate the index of any one of the split mode merge candidates included in the split mode merge candidate list.
• the split mode merge candidate of the first prediction unit can be determined as the split mode merge candidate indicated by 1st_merge_idx.
• the split mode merge candidate indicated by 1st_merge_idx may be set not to be available as the split mode merge candidate of the second prediction unit. Accordingly, the second index information 2nd_merge_idx of the second prediction unit can indicate any one of the remaining split mode merge candidates excluding the split mode merge candidate indicated by the first index information. If the value of 2nd_merge_idx is smaller than that of 1st_merge_idx, the split mode merge candidate of the second prediction unit can be determined as the split mode merge candidate having the index indicated by 2nd_merge_idx.
• on the other hand, if the value of 2nd_merge_idx is equal to or greater than that of 1st_merge_idx, the split mode merge candidate of the second prediction unit can be determined as the split mode merge candidate having, as its index, the value of 2nd_merge_idx plus 1.
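The relative coding of the second index can be sketched as a small decode step. This Python model is illustrative (the function name is hypothetical), assuming the "+1 when equal or greater" rule reconstructed above:

```python
def decode_second_merge_idx(first_idx, second_idx_signalled):
    """Map 2nd_merge_idx, coded against the list with the first unit's
    candidate removed, back to an index into the full candidate list."""
    if second_idx_signalled < first_idx:
        return second_idx_signalled
    return second_idx_signalled + 1   # skip over the first unit's candidate
```

This guarantees the two prediction units never resolve to the same candidate: with 1st_merge_idx = 2, a signalled value of 2 decodes to full-list index 3.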
• whether to signal the second index information may be determined according to the number of split mode merge candidates included in the split mode merge candidate list. For example, if the maximum number of split mode merge candidates that the split mode merge candidate list can include does not exceed 2, signaling of the second index information may be omitted. When signaling of the second index information is omitted, the second split mode merge candidate can be derived by adding or subtracting an offset to or from the first index information.
• for example, if the maximum number of split mode merge candidates that the split mode merge candidate list can include is 2 and the first index information points to index 0, the second split mode merge candidate can be derived by adding 1 to the first index information. Alternatively, if the maximum number of split mode merge candidates is 2 and the first index information points to index 1, the second split mode merge candidate can be derived by subtracting 1 from the first index information.
• alternatively, when signaling of the second index information is omitted, the second index information can be set to a default value. Here, the default value can be 0.
• then, by comparing the first index information and the second index information, the second split mode merge candidate can be derived. For example, if the second index information is smaller than the first index information, the merge candidate of index 0 is set as the second split mode merge candidate, and if the second index information is equal to or greater than the first index information, the merge candidate of index 1 can be set as the second split mode merge candidate.
• when the split mode merge candidate has unidirectional motion information, the unidirectional motion information of the split mode merge candidate can be set as the motion information of the prediction unit. On the other hand, when the split mode merge candidate has bidirectional motion information, only one of the L0 motion information or the L1 motion information can be set as the motion information of the prediction unit. Which of the L0 motion information or the L1 motion information is taken can be determined based on the index of the merge candidate or the motion information of the other prediction unit.
• for example, when the index of the split mode merge candidate is even, the L0 motion information of the prediction unit is set to 0, and the L1 motion information of the split mode merge candidate can be set as the motion information of the prediction unit. On the other hand, when the index of the split mode merge candidate is odd, the L1 motion information of the prediction unit is set to 0, and the L0 motion information of the split mode merge candidate can be set as the motion information of the prediction unit.
• conversely, when the index of the split mode merge candidate is even, the L0 motion information of the split mode merge candidate can be set as the motion information of the prediction unit, and when the index of the split mode merge candidate is odd, the L1 motion information of the split mode merge candidate can be set as the motion information of the prediction unit. Alternatively, while the L0 motion information of the split mode merge candidate is set as the motion information of the first prediction unit, the L1 motion information of the split mode merge candidate can be set as the motion information of the second prediction unit.
• alternatively, the L0 motion information of the second prediction unit can be set to 0, and the L1 motion information of the split mode merge candidate can be set as the motion information of the second prediction unit. Or, the L1 motion information of the second prediction unit can be set to 0, and the L0 motion information of the split mode merge candidate can be set as the motion information of the second prediction unit.
  • the split mode merge candidate list for inducing the motion information of the first prediction unit and the split mode merge candidate list for inducing the motion information of the second prediction unit may be set differently.
• the split mode merge candidate of the first prediction unit is selected based on the index information for the first prediction unit, while the motion information of the second prediction unit can be derived using a split mode merge candidate list that includes the remaining split mode merge candidates excluding the split mode merge candidate indicated by that index information. Specifically, the motion information of the second prediction unit can be derived from any one of the remaining split mode merge candidates.
• accordingly, the maximum number of split mode merge candidates included in the split mode merge candidate list of the first prediction unit and the maximum number included in the split mode merge candidate list of the second prediction unit may be different. For example, if the split mode merge candidate list of the first prediction unit contains M merge candidates, the split mode merge candidate list of the second prediction unit can contain M-1 merge candidates, excluding the split mode merge candidate indicated by the index information of the first prediction unit.
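The reduced list for the second prediction unit can be sketched in one line of Python; the function name is illustrative:

```python
def second_pu_candidate_list(full_list, first_pu_idx):
    """Build the second prediction unit's candidate list by removing the
    candidate chosen for the first unit: M candidates become M-1."""
    return full_list[:first_pu_idx] + full_list[first_pu_idx + 1:]
```

For a three-candidate list with the first unit using index 1, the second unit's list contains the remaining two candidates.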
• as another example, the merge candidate of each prediction unit may be derived based on the neighboring blocks adjacent to the coding block, with the availability of a neighboring block determined by considering the shape or the location of the prediction unit.
• FIG. 31 is a diagram for explaining an example of determining the availability of neighboring blocks for each prediction unit.
  • a neighboring block not adjacent to the first prediction unit may be set as unavailable for the first prediction unit, and a neighboring block not adjacent to the second prediction unit may be set as unavailable for the second prediction unit.
• as a result, the split mode merge candidate list for the first prediction unit includes the split mode merge candidates derived from blocks A1, A0 and A2, while it may not include block B0 and the split mode merge candidate derived from it.
• the split mode merge candidate list for the second prediction unit includes block B0 and the split mode merge candidate derived from it, while it may not include the split mode merge candidates derived from blocks A1, A0 and A2.
  • the number of split mode merge candidates that the prediction unit can use or the range of split mode merge candidates can be determined based on at least one of the location of the prediction unit or the partition type of the coding block.
  • only one of the first prediction unit and the second prediction unit can be used in the merge mode.
  • The motion information of the other of the first prediction unit and the second prediction unit may be set equal to the motion information of the prediction unit to which the merge mode is applied, or may be derived by refining the motion information of the prediction unit to which the merge mode is applied.
  • For example, the motion vector and the reference picture index of the first prediction unit may be derived, and the motion vector of the second prediction unit may be derived by refining the motion vector of the first prediction unit.
  • For example, the motion vector of the second prediction unit may be derived by adding or subtracting the refine motion vector {Rx, Ry} to or from the motion vector {mvDlLXx, mvDlLXy} of the first prediction unit.
  • The reference picture index of the second prediction unit may be set equal to the reference picture index of the first prediction unit.
  • Information for determining a refine motion vector indicating a difference between the motion vector of the first prediction unit and the motion vector of the second prediction unit may be signaled through the bitstream.
  • The information may include at least one of information indicating the size of the refine motion vector or information indicating the sign of the refine motion vector.
  • For one of the first prediction unit and the second prediction unit, a motion vector and a reference picture index may be signaled.
  • The motion vector of the other of the first prediction unit and the second prediction unit may be derived by refining the signaled motion vector.
  • For example, the motion vector and the reference picture index of the first prediction unit may be determined, and the motion vector of the second prediction unit may be derived by refining the motion vector of the first prediction unit.
  • For example, the motion vector of the second prediction unit may be derived by adding or subtracting the refine motion vector {Rx, Ry} to or from the motion vector {mvDlLXx, mvDlLXy} of the first prediction unit.
  • the reference picture index of the second prediction unit may be set equal to the reference picture index of the first prediction unit.
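As an illustrative sketch of the refinement described above (the function names and tuple layout are hypothetical, not from the document), the second prediction unit's motion vector is the first unit's motion vector plus or minus the refine motion vector, while the reference picture index is simply copied:

```python
def derive_second_pu_mv(mv_first, refine, subtract=False):
    # Second PU motion vector = first PU motion vector {mvDlLXx, mvDlLXy}
    # plus or minus the refine motion vector {Rx, Ry}.
    sign = -1 if subtract else 1
    return (mv_first[0] + sign * refine[0], mv_first[1] + sign * refine[1])

def derive_second_pu_ref_idx(ref_idx_first):
    # The reference picture index is copied from the first prediction unit.
    return ref_idx_first
```
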
  • Only one of the first prediction unit and the second prediction unit may be encoded in the merge mode.
  • the motion information of the other one of the first prediction unit and the second prediction unit may be derived based on the motion information of the prediction unit to which the merge mode is applied.
  • The symmetric motion vector of the motion vector of the first prediction unit may be set as the motion vector of the second prediction unit. Here, the symmetric motion vector means a motion vector that has the same magnitude as the motion vector of the first prediction unit, or as a scaled motion vector obtained by scaling the motion vector of the first prediction unit, but whose sign is opposite for at least one of the x-axis or y-axis components.
  • For example, when the motion vector of the first prediction unit is (MVx, MVy), its symmetric motion vector (MVx, -MVy), (-MVx, MVy) or (-MVx, -MVy) may be set as the motion vector of the second prediction unit.
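The symmetric-motion-vector rule above can be sketched directly (the function name is illustrative):

```python
def symmetric_mvs(mv):
    # Candidate symmetric motion vectors of (MVx, MVy): same magnitude,
    # with the sign of at least one of the x/y components flipped.
    x, y = mv
    return [(x, -y), (-x, y), (-x, -y)]
```

Each candidate keeps the magnitude of the original vector, as the text requires.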
  • The reference picture index of the prediction unit to which the merge mode is not applied may be set equal to the reference picture index of the prediction unit to which the merge mode is applied. Or, the reference picture index of the prediction unit to which the merge mode is not applied may be set to a predefined value.
  • the predefined value may be the smallest index or the largest index in the reference picture list.
  • Information specifying the reference picture index of the prediction unit to which the merge mode is not applied may be signaled through the bitstream. Or, the reference picture of the prediction unit to which the merge mode is not applied may be selected from a reference picture list different from the reference picture list containing the reference picture of the prediction unit to which the merge mode is applied. For example, when the reference picture of the prediction unit to which the merge mode is applied is selected from the L0 reference picture list, the reference picture of the prediction unit to which the merge mode is not applied may be selected from the L1 reference picture list. At this time, the reference picture of the prediction unit to which the merge mode is not applied may be derived based on the output order (Picture Order Count, POC) difference between the reference picture of the prediction unit to which the merge mode is applied and the current picture.
  • For example, among the pictures in the L1 reference picture list, a reference picture whose POC difference from the current picture is equal or similar to the POC difference between the reference picture of the prediction unit to which the merge mode is applied and the current picture may be selected as the reference picture of the prediction unit to which the merge mode is not applied.
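A minimal sketch of that POC-based selection (the function name and list layout are assumptions; POC values are plain integers here):

```python
def select_l1_reference(current_poc, l0_ref_poc, l1_ref_pocs):
    # Target POC distance: |current - L0 reference| of the merge-mode PU.
    target = abs(current_poc - l0_ref_poc)
    # Pick the L1 reference whose POC distance to the current picture is
    # equal or closest to the target distance.
    return min(range(len(l1_ref_pocs)),
               key=lambda i: abs(abs(current_poc - l1_ref_pocs[i]) - target))
```
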
  • A refine vector may be added to or subtracted from the motion vector derived from the merge candidate.
  • For example, the motion vector of the first prediction unit may be derived by adding or subtracting the first refine vector to or from the first motion vector derived based on the first merge candidate, and the motion vector of the second prediction unit may be derived by adding or subtracting the second refine vector to or from the second motion vector derived based on the second merge candidate.
  • Information for determining at least one of the first refine vector or the second refine vector may be signaled through the bitstream. The information may include at least one of information for determining the size of the refine vector or information for determining the sign of the refine vector.
  • the second refine vector may be a symmetric motion vector of the first refine vector.
  • In this case, information for determining a refine vector may be signaled for only one of the first refine vector and the second refine vector.
  • For example, when the first refine vector is determined to be (MVDx, MVDy) by information signaled from the bitstream, the symmetric motion vector of the first refine vector, i.e., (-MVDx, MVDy), (MVDx, -MVDy) or (-MVDx, -MVDy), may be set as the second refine vector.
  • Or, the symmetric motion vector of a scaled motion vector, obtained by scaling the first refine vector according to the output order of the reference picture of each prediction unit, may be set as the second refine vector.
  • The motion information of one of the first prediction unit and the second prediction unit may be derived based on a merge candidate, and the motion information of the other may be determined based on information signaled through the bitstream.
  • For example, the merge index for the first prediction unit, and at least one of information for determining the motion vector and information for determining the reference picture for the second prediction unit, may be signaled.
  • the motion information of the first prediction unit can be set equal to the motion information of the merge candidate specified by the merge index.
  • The motion information of the second prediction unit may be specified by at least one of the information for determining the motion vector and the information for determining the reference picture signaled through the bitstream.
  • When motion compensation prediction is performed for each prediction unit, image quality deterioration may occur at the boundary of the first prediction unit and the second prediction unit.
  • the continuity of the image quality may be lost around the edge existing at the boundary of the first prediction unit and the second prediction unit.
  • To alleviate this, prediction samples may be derived through a smoothing filter or weighted prediction.
  • The prediction sample in a coding block to which diagonal partitioning is applied may be derived based on a weighted sum operation of the first prediction sample acquired based on the motion information of the first prediction unit and the second prediction sample acquired based on the motion information of the second prediction unit.
  • In this case, the prediction sample of the first prediction unit may be derived from a first prediction block determined based on the motion information of the first prediction unit, and the prediction sample of the second prediction unit may be derived from a second prediction block determined based on the motion information of the second prediction unit, but the prediction samples located in the boundary region of the first prediction unit and the second prediction unit may be derived based on a weighted sum operation of the first prediction sample included in the first prediction block and the second prediction sample included in the second prediction block. For example, Equation 3 below shows an example of deriving the prediction samples of the first prediction unit and the second prediction unit.
  • In Equation 3, P1 is the first prediction sample and P2 is the second prediction sample. w1 represents the weight applied to the first prediction sample, and (1-w1) represents the weight applied to the second prediction sample.
  • The weight w1 applied to the first prediction sample may be a constant value, or may be derived to differ according to the position of the prediction sample.
  • For example, the boundary region may include prediction samples whose x-axis coordinate and y-axis coordinate are the same. Or, the boundary region may include prediction samples in which the sum of the x-axis coordinate and the y-axis coordinate is greater than or equal to a first threshold and less than or equal to a second threshold.
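The weighted sum of Equation 3 can be sketched as follows. The eighths-based weights follow the figures discussed below; the integer rounding is an assumption of this sketch, not stated in the document:

```python
def blend_prediction(p1, p2, w1_eighths):
    # Equation 3: P = w1*P1 + (1 - w1)*P2, with w1 expressed in eighths
    # (w1 = w1_eighths / 8). Rounding to nearest is assumed.
    return (w1_eighths * p1 + (8 - w1_eighths) * p2 + 4) >> 3
```

With `w1_eighths = 8` the result is the first prediction sample; with `0` it is the second.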
  • The size of the boundary region may be determined based on at least one of the size of the coding block, the shape of the coding block, the motion information of the prediction units, the motion vector difference value of the prediction units, the output order of the reference pictures, or the difference value between the first prediction sample and the second prediction sample at the diagonal boundary.
  • FIG. 32 and 33 are diagrams showing an example in which a predicted sample is derived based on a weighted sum operation of the first predicted sample and the second predicted sample.
  • FIG. 32 is an example of a case in which the left triangular partition type is applied to the coding block.
  • FIG. 33 illustrates a case in which the right triangular partition type is applied to the coding block.
  • FIG. 32(a) and FIG. 33(a) are diagrams showing prediction aspects for the luma component, and FIG. 32(b) and FIG. 33(b) are diagrams showing prediction aspects for the chroma component.
  • a number written in a prediction sample located near the boundary of the first prediction unit and the second prediction unit indicates a weight applied to the first prediction sample.
  • When the number written in a prediction sample is N, the prediction sample can be derived by applying a weight of N/8 to the first prediction sample and applying a weight of (1-(N/8)) to the second prediction sample.
  • In a non-boundary region, the first prediction sample or the second prediction sample may be determined as the prediction sample. Referring to the example of FIG. 32, in the region belonging to the first prediction unit, the first prediction sample derived based on the motion information of the first prediction unit may be determined as the prediction sample. On the other hand, in the region belonging to the second prediction unit, the second prediction sample derived based on the motion information of the second prediction unit may be determined as the prediction sample.
  • the first prediction sample derived based on the motion information of the first prediction unit can be determined as the prediction sample.
  • a second prediction sample derived based on the motion information of the second prediction unit can be determined as the prediction sample.
  • The threshold for determining the non-boundary region may be determined based on at least one of the size of the coding block, the shape of the coding block, or the color component. For example, when the threshold for the luma component is set to N, the threshold for the chroma component may be set to N/2.
  • the predicted samples included in the boundary region can be derived based on a weighted sum operation of the first and second predicted samples.
  • the weight applied to the first and second predicted samples is the predicted sample. It can be determined based on at least one of the location of, the size of the coding block, the shape of the coding block, or the color component.
  • For example, prediction samples whose x-axis coordinate and y-axis coordinate are the same may be derived by applying the same weight to the first prediction sample and the second prediction sample.
  • Prediction samples for which the absolute value of the difference between the x-axis coordinate and the y-axis coordinate is 1 may be derived by setting the weight ratio applied to the first prediction sample and the second prediction sample to (3:1) or (1:3).
  • Prediction samples for which the absolute value of the difference between the x-axis coordinate and the y-axis coordinate is 2 may be derived by setting the weight ratio applied to the first prediction sample and the second prediction sample to (7:1) or (1:7).
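The distance-to-diagonal weighting just described can be sketched for a square block with the left triangular partition. Which side of the diagonal belongs to the first prediction unit is an assumption of this sketch:

```python
def left_triangle_w1(x, y):
    # Weight on the first prediction sample, in eighths, for a square
    # block with the left triangular partition (diagonal x == y).
    d = x - y
    if d == 0:
        return 4                    # same coordinates: equal weights (4:4)
    if abs(d) == 1:
        return 6 if d > 0 else 2    # weight ratio (3:1) or (1:3)
    if abs(d) == 2:
        return 7 if d > 0 else 1    # weight ratio (7:1) or (1:7)
    return 8 if d > 0 else 0        # outside the boundary region
```
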
  • Or, prediction samples in which the sum of the x-axis coordinate and the y-axis coordinate is 1 less than the width or height of the coding block may be derived by applying the same weight to the first prediction sample and the second prediction sample. Prediction samples in which the sum of the x-axis coordinate and the y-axis coordinate is equal to the width or height of the coding block, or is 2 less than it, may be derived by setting the weight ratio applied to the first prediction sample and the second prediction sample to (3:1) or (1:3). Prediction samples in which the sum of the x-axis coordinate and the y-axis coordinate is 1 greater than, or 3 less than, the width or height of the coding block may be derived by setting the weight ratio applied to the first prediction sample and the second prediction sample to (7:1) or (1:7).
  • Or, prediction samples in which the sum of the x-axis coordinate and the y-axis coordinate is 1 less than the width or height of the coding block may be derived by applying the same weight to the first prediction sample and the second prediction sample, and prediction samples in which the sum of the x-axis coordinate and the y-axis coordinate is equal to the width or height of the coding block, or is 2 less than it, may be derived by setting the weight ratio applied to the first prediction sample and the second prediction sample to (7:1) or (1:7).
  • A weight value may be determined in consideration of the position of the prediction sample or the shape of the coding block. Equations 4 to 6 show examples of deriving the weight when the left triangular partition type is applied to the coding block. Equation 4 shows an example of deriving the weight applied to the first prediction sample when the coding block is square.
  • In Equation 4, x and y denote the position of the prediction sample.
  • When the coding block is non-square, the weight applied to the first prediction sample can be derived as in Equation 5 or Equation 6. Equation 5 shows the case where the width of the coding block is greater than the height, and Equation 6 shows the case where the width of the coding block is less than the height.
  • Equation 7 shows an example of deriving the weight applied to the first prediction sample when the coding block is square.
  • In Equation 7, the variable denotes the width of the coding block.
  • When the coding block is non-square, the weight applied to the first prediction sample can be derived as in Equation 8 or Equation 9. Equation 8 indicates a case where the width of the coding block is greater than the height, and Equation 9 indicates a case where the width of the coding block is smaller than the height.
  • In Equation 8, the variable denotes the height of the coding block.
  • Among the prediction samples within the boundary region, those included in the first prediction unit may be derived by giving a greater weight to the first prediction sample than to the second prediction sample, and those included in the second prediction unit may be derived by giving a greater weight to the second prediction sample than to the first prediction sample.
  • a combined prediction mode in which an intra prediction mode and a merge mode are combined may be set not to be applied to the coding block.
  • motion information of the coding/decoding completed coding block can be stored for the coding/decoding of the next coding block.
  • the motion information may be stored in units of subblocks having a preset size.
  • a subblock having a preset size may have a size of 4x4.
  • The size or shape of the subblock may be determined variably.
  • The motion information of the first prediction unit may be stored as the motion information of the subblock, or the motion information of the second prediction unit may be stored as the motion information of the subblock.
  • one of the motion information of the first prediction unit and the motion information of the second prediction unit can be set as the motion information of the sub-block.
  • For example, for a subblock belonging to the first prediction unit, the motion information of the first prediction unit may be set as the motion information of the subblock, and for a subblock belonging to the second prediction unit, the motion information of the second prediction unit may be set as the motion information of the subblock.
  • Or, motion information combining the motion information of the first prediction unit and the motion information of the second prediction unit may be set as the motion information of the subblock, but when the first prediction unit and the second prediction unit have only L0 motion information or only L1 motion information, the motion information of the subblock may be determined by selecting either the first prediction unit or the second prediction unit.
  • Or, the average of the motion vectors of the first prediction unit and the second prediction unit may be set as the motion vector of the subblock.
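A sketch of per-subblock motion storage, assuming 4x4 subblocks; `in_first_pu` is a hypothetical caller-supplied membership test for the diagonal partition, not a function named in the document:

```python
def store_partition_motion(block_w, block_h, mv1, mv2, in_first_pu, sub=4):
    # Store one motion vector per sub x sub subblock: the first PU's
    # motion for subblocks inside the first PU, the second PU's otherwise.
    stored = {}
    for sy in range(0, block_h, sub):
        for sx in range(0, block_w, sub):
            stored[(sx, sy)] = mv1 if in_first_pu(sx, sy) else mv2
    return stored
```
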
  • the motion information of the coding block after encoding/decoding is completed is stored in the motion information table.
  • the motion information of the coding block to which the prediction unit partitioning has been applied can be set not to be added to the motion information table.
  • Or, motion information of only one of the plurality of prediction units generated by dividing the coding block may be added to the motion information table. For example, the motion information of the first prediction unit may be added to the motion information table, while the motion information of the second prediction unit is not added to the motion information table.
  • The prediction unit whose motion information is to be stored may be selected based on at least one of the size of the coding block, the shape of the coding block, the size of the prediction unit, the shape of the prediction unit, or whether bidirectional prediction was performed for the prediction unit.
  • motion information of each of a plurality of prediction units generated by dividing a coding block may be added to the motion information table.
  • In this case, the order of addition to the motion information table may be predefined in the encoder and the decoder. For example, the motion information of the prediction unit including the upper-left sample or the lower-left corner sample may be added to the motion information table prior to the motion information of the other prediction units. Or, the order of addition to the motion information table may be determined based on at least one of the merge index, the reference picture index, or the size of the motion vector of each prediction unit.
  • Or, motion information obtained by combining the motion information of the first prediction unit and the motion information of the second prediction unit may be added to the motion information table. One of the L0 motion information and the L1 motion information of the combined motion information may be derived from the first prediction unit, and the other of the L0 motion information and the L1 motion information may be derived from the second prediction unit.
  • Depending on whether the reference pictures of the first prediction unit and the second prediction unit are the same, the motion information to be added to the motion information table may be determined. For example, when the reference pictures of the first prediction unit and the second prediction unit are different, the motion information of any one of the first prediction unit and the second prediction unit, or motion information obtained by combining the first prediction unit and the second prediction unit, may be added to the motion information table. On the other hand, when the reference pictures of the first prediction unit and the second prediction unit are the same, the average of the motion vector of the first prediction unit and the motion vector of the second prediction unit may be added to the motion information table.
  • Depending on the partition type of the coding block, the motion information to be added to the motion information table may be determined. For example, when right triangular partitioning is applied to the coding block, the motion information of the first prediction unit may be added to the motion information table. On the other hand, when left triangular partitioning is applied to the coding block, the motion information of the second prediction unit may be added to the motion information table, or motion information obtained by combining the motion information of the first prediction unit and the motion information of the second prediction unit may be added to the motion information table.
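One plausible policy combining the update rules above can be sketched as follows. The function name, the string partition labels, and the choice of which rule takes precedence are assumptions of this sketch:

```python
def motion_info_to_add(partition_type, mv1, ref1, mv2, ref2):
    # Same reference picture -> store the motion vector average;
    # otherwise pick the first PU for right-triangle partitioning and
    # the second PU for left-triangle partitioning.
    if ref1 == ref2:
        return ((mv1[0] + mv2[0]) // 2, (mv1[1] + mv2[1]) // 2), ref1
    return (mv1, ref1) if partition_type == "right" else (mv2, ref2)
```
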
  • a motion information table for storing motion information of a coding block to which prediction unit partitioning is applied may be separately defined.
  • motion information of a coding block to which prediction unit partitioning is applied may be stored in a partition mode motion information table.
  • The split mode motion information table may also be referred to as a triangular motion information table. That is, the motion information of a coding block to which prediction unit partitioning is not applied may be stored in the general motion information table, while the motion information of a coding block to which prediction unit partitioning is applied may be stored in the split mode motion information table. The examples described above for adding the motion information of a coding block to which prediction unit partitioning is applied to the motion information table may be applied to updating the split mode motion information table.
  • That is, in the split mode motion information table, the motion information of the first prediction unit, the motion information of the second prediction unit, motion information combining the motion information of the first prediction unit and the motion information of the second prediction unit, and motion information obtained by averaging the motion vector of the first prediction unit and the motion vector of the second prediction unit may be added.
  • For a coding block to which prediction unit partitioning is not applied, a merge candidate may be derived using the general motion information table, while for a coding block to which prediction unit partitioning is applied, a merge candidate may be derived using the split mode motion information table.
  • The motion vector of the selected merge candidate is set as the initial motion vector, and motion compensation prediction for the current block can be performed using a motion vector derived by adding or subtracting an offset vector to or from the initial motion vector.
  • A merge offset vector encoding method, which derives a new motion vector by adding or subtracting an offset vector to or from the motion vector of a merge candidate, may be defined.
  • The information may be a 1-bit flag merge_offset_vector_flag. For example, when the value of merge_offset_vector_flag is 1, it indicates that the merge offset vector encoding method is applied to the current block, and the motion vector of the current block can be derived by adding or subtracting the offset vector to or from the motion vector of the merge candidate. When the value of merge_offset_vector_flag is 0, it indicates that the merge offset vector encoding method is not applied to the current block, and the motion vector of the merge candidate may be set as the motion vector of the current block.
  • The flag may be signaled only when the value of the skip flag indicating whether the skip mode is applied is true, or when the value of the merge flag indicating whether the merge mode is applied is true. For example, when the value of the skip flag is 1, it indicates that the skip mode is applied to the current block, and when the value of the merge flag is 1, it indicates that the merge mode is applied to the current block.
  • Information for determining the maximum number of merge candidates that the merge candidate list can contain can be signaled through the bitstream.
  • the maximum number of merge candidates that the merge candidate list can contain is set to a natural number of 6 or less.
  • For example, when the maximum number of merge candidates that the merge candidate list can contain is set to M, and the value of merge_offset_vector_flag is 1, the maximum number of merge candidates that the current block can use may be set to N. Here, N represents a natural number equal to or less than M.
  • For example, when N is 2, the two merge candidates with the smallest indices among the merge candidates included in the merge candidate list may be set as available for the current block. Accordingly, the motion vector of the merge candidate whose index value is 0 or of the merge candidate whose index value is 1 can be set as the initial motion vector of the current block. If M and N are the same (e.g., M and N are both 2), all merge candidates included in the merge candidate list can be set as available for the current block.
  • Or, when the value of merge_offset_vector_flag is 1, the motion vector of at least one of the neighboring block adjacent to the upper-right corner of the current block, the neighboring block adjacent to the lower-left corner, or the neighboring block adjacent to the upper-left corner may not be set as the initial motion vector, or the temporal neighboring block of the current block may be set as unavailable as a merge candidate.
  • Or, when the merge offset vector encoding method is applied, at least one of the pairwise merge candidate or the zero merge candidate may be set as unused. Accordingly, in this case, even if the number of merge candidates included in the merge candidate list is less than the maximum number, at least one of the pairwise merge candidate or the zero merge candidate may not be added to the merge candidate list.
  • the motion vector of the merge candidate can be set as the initial motion vector of the current block.
  • When the number of merge candidates that the current block can use is plural, information specifying any one of the plurality of merge candidates may be signaled through the bitstream.
  • For example, when the maximum number of merge candidates that the merge candidate list can include is greater than 1, information indicating any one of the plurality of merge candidates can be signaled through the bitstream. That is, under the merge offset vector encoding method, the initial motion vector can be set as the motion vector of the merge candidate specified by the signaled information.
  • On the other hand, when the maximum number of merge candidates that the merge candidate list can include is not greater than 1, signaling of the information for specifying the merge candidate can be omitted. That is, under the merge offset vector encoding method, if one merge candidate is included in the merge candidate list, encoding of the information for specifying the merge candidate is omitted, and the initial motion vector can be determined based on the merge candidate included in the merge candidate list: the motion vector of that merge candidate can be set as the initial motion vector of the current block.
  • Or, after the merge candidate of the current block is determined, information indicating whether the merge offset vector encoding method is applied to the current block may be signaled.
  • FIG. 3 is a diagram showing a syntax table according to the above-described embodiment.
  • It is also possible to decide whether to apply the merge offset vector encoding method to the current block only when the index of the determined merge candidate is less than the maximum number of merge candidates that can be used when the merge offset vector encoding method is applied. For example, merge_offset_vector_flag may be encoded and signaled only when the value of the index information is smaller than the maximum number; when the value of the index information is equal to or greater than the maximum number, encoding of merge_offset_vector_flag may be omitted. When the encoding of merge_offset_vector_flag is omitted, it may be determined that the merge offset vector encoding method is not applied to the current block. Or, after the merge candidate of the current block is determined, whether to apply the merge offset vector encoding method to the current block may be decided by considering whether the determined merge candidate has bidirectional motion information or unidirectional motion information. For example, merge_offset_vector_flag may be encoded and signaled only when the value of the index information is smaller than the maximum number and the merge candidate selected by the index information has unidirectional motion information.
  • Or, whether to apply the merge offset vector encoding method may be determined based on at least one of the size of the current block, the shape of the current block, or whether the current block adjoins the boundary of the coding tree unit. When at least one of these does not satisfy a preset condition, encoding of merge_offset_vector_flag, which indicates whether to apply the merge offset vector encoding method to the current block, may be omitted.
  • When the merge candidate is determined, the motion vector of the merge candidate can be set as the initial motion vector of the current block. Then, the offset vector can be determined by decoding information indicating the size of the offset vector and information indicating the direction of the offset vector.
  • the offset vector can have a horizontal direction component or a vertical direction component.
  • The information indicating the size of the offset vector may be index information indicating any one of vector size candidates. For example, index information distance_idx indicating any one of the vector size candidates may be signaled through the bitstream.
  • Table 6 shows the binarization of the index information distance_idx and the value of the variable DistFromMergeMV for determining the size of the offset vector according to distance_idx.
  • The size of the offset vector can be derived by dividing the variable DistFromMergeMV by a preset value. The following equation shows an example of determining the size of the offset vector.
  • The range of vector size candidates may be set differently. For example, if the motion vector precision for the current block is fractional-pel, the values of the variable DistFromMergeMV corresponding to the values of the index information distance_idx can be set to 1, 2, 4, 8, 16, etc. Here, fractional-pel includes at least one of 1/16-pel, octo-pel, quarter-pel, or half-pel. On the other hand, if the motion vector precision for the current block is integer-pel, the values of the variable DistFromMergeMV corresponding to the values of the index information distance_idx can be set to 4, 8, 16, 32, 64, etc. That is, the table referenced to determine the variable DistFromMergeMV may be set according to the motion vector precision for the current block.
  • DistFromMergeMV more than the referenced table can be set. For example, if the motion vector precision of the current block or merge candidate is a quarterfel, then Table 6 can be used to derive the variable DistFromMergeMV pointed to by distance_idx. If the motion vector precision of the block or merge candidate is an integer pel, the value obtained by taking N times (e.g., 4 times) to the value of the variable DistFromMergeMV indicated by distance_idx in Table 6 can be derived as the value of the variable DistFromMergeMV.
  • Information for determining the motion vector precision may be signaled through a bitstream.
  • the information may be signaled at the sequence, picture, slice or block level.
• the range of vector size candidates can be set differently according to the information related to the motion vector precision signaled via the bitstream.
  • the motion vector precision can be determined based on the merge candidate of the current block. For example, the motion vector precision of the current block may be set equal to the motion vector precision of the merge candidate.
• information for determining the search range of the offset vector can be signaled through a bitstream. Based on the search range, at least one of the number of vector size candidates, or the minimum or maximum of the vector size candidates, can be determined. For example, a flag for determining the search range of the offset vector can be signaled through the bitstream. The flag can be signaled through a sequence header, a picture header, or a slice header.
• When the value of the flag is 0, the maximum value of the variable DistFromMergeMV can be set to 8. On the other hand, when the value of the flag is 1, the size of the offset vector can be set not to exceed a distance of 32 samples. In this case, the maximum value of the variable DistFromMergeMV can be set to 128.
• Alternatively, the size of the offset vector can be determined by using a flag indicating whether the size of the offset vector is greater than a threshold value. Here, the threshold can be 1, 2, 4, 8 or 16. For example, when the threshold is 4, a flag value of 1 indicates that the size of the offset vector is greater than 4, while a value of 0 indicates that the size of the offset vector is less than 4.
• the size of the offset vector can be determined using distance_flag and distance_idx. Table 8 is a syntax table showing the binarization of distance_flag and distance_idx.
• Equation 11 shows an example of deriving the variable DistFromMergeMV for determining the size of the offset vector using distance_flag and distance_idx.
• DistFromMergeMV = N * distance_flag + (1 << distance_idx)
  • the value of distance_flag may be set to 1 or 0.
  • the value of distance_idx can be set to 1, 2, 4, 8, 16, 32, 64, 128, etc.
  • N represents the coefficient determined by the threshold value. For example, if the threshold value is 4, N can be set to 16.
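Assuming Equation 11 reads DistFromMergeMV = N * distance_flag + (1 << distance_idx), with N = 16 when the threshold is 4, the derivation can be exercised as follows (the function name is illustrative):

```python
def dist_from_merge_mv_flagged(distance_flag, distance_idx, n=16):
    # Equation 11: DistFromMergeMV = N * distance_flag + (1 << distance_idx)
    return n * distance_flag + (1 << distance_idx)

# distance_flag = 0: size stays in the small range determined by distance_idx
assert dist_from_merge_mv_flagged(0, 2) == 4
# distance_flag = 1: the coefficient N = 16 is added on top
assert dist_from_merge_mv_flagged(1, 2) == 20
```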
  • the information indicating the direction of the offset vector may be index information indicating any one of the vector direction candidates.
• For example, index information direction_idx indicating any one of the vector direction candidates may be used. Table 9 shows the binarization of the index information direction_idx and the direction of the offset vector according to direction_idx.
• Figure 34 is a diagram showing the offset vector according to the values of distance_idx indicating the size of the offset vector and direction_idx indicating the direction of the offset vector.
• The sign of each offset vector component can be determined according to the value of direction_idx.
  • the maximum size of the offset vector can be set so as not to exceed the threshold value.
  • the threshold value can have a predefined value in the encoder and the decoder.
  • the threshold value may be 32 sample distances.
• Alternatively, the threshold value may be determined based on the initial motion vector. For example, the threshold value for the horizontal direction is set based on the size of the horizontal direction component of the initial motion vector, and the threshold value for the vertical direction may be set based on the size of the vertical direction component of the initial motion vector.
• When the merge candidate is bidirectional, the L0 motion vector of the merge candidate can be set as the L0 initial motion vector of the current block, and the L1 motion vector of the merge candidate can be set as the L1 initial motion vector of the current block.
• Here, the L0 offset vector and the L1 offset vector can be determined by taking into account the output order difference value between the L0 reference picture of the merge candidate and the current picture (hereinafter referred to as the L0 difference value) and the output order difference value between the L1 reference picture of the merge candidate and the current picture (hereinafter referred to as the L1 difference value).
• If the signs of the L0 difference value and the L1 difference value are the same, the L0 offset vector and the L1 offset vector can be set the same, whereas if the signs of the L0 difference value and the L1 difference value are different, the L1 offset vector can be set in the opposite direction to the L0 offset vector.
• The size of the L0 offset vector and the size of the L1 offset vector can be set equal. Alternatively, the size of the L1 offset vector can be determined by scaling the L0 offset vector based on the L0 difference value and the L1 difference value.
• Equation 13 shows the L0 offset vector and the L1 offset vector when the signs of the L0 difference value and the L1 difference value are the same.
• offsetMVL0[0] represents the horizontal direction component of the L0 offset vector, and offsetMVL0[1] represents the vertical direction component of the L0 offset vector.
• offsetMVL1[0] represents the horizontal direction component of the L1 offset vector, and offsetMVL1[1] represents the vertical direction component of the L1 offset vector.
• Equation 14 shows the L0 offset vector and the L1 offset vector when the signs of the L0 difference value and the L1 difference value are different.
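The sign handling described above (Equations 13 and 14) can be sketched as follows, assuming equal magnitudes for the L0 and L1 offset vectors; the function and variable names are illustrative:

```python
def derive_l0_l1_offsets(offset_mv, poc_diff_l0, poc_diff_l1):
    """offset_mv: (x, y) offset derived from distance_idx/direction_idx.
    When the signs of the L0 and L1 POC difference values match, both
    lists share the same offset (Equation 13); otherwise the L1 offset
    is mirrored (Equation 14)."""
    same_sign = (poc_diff_l0 >= 0) == (poc_diff_l1 >= 0)
    offset_l0 = offset_mv
    offset_l1 = offset_mv if same_sign else (-offset_mv[0], -offset_mv[1])
    return offset_l0, offset_l1

# Both reference pictures on the same temporal side: identical offsets.
assert derive_l0_l1_offsets((4, 0), -2, -6) == ((4, 0), (4, 0))
# Opposite temporal sides: the L1 offset points the opposite way.
assert derive_l0_l1_offsets((4, 0), -2, 3) == ((4, 0), (-4, 0))
```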
• Fig. 35 is a diagram showing the offset vector according to the values of distance_idx indicating the size of the offset vector and direction_idx indicating the direction of the offset vector.
• FIG. 35 is an example of a case where Table 9 is applied.
• Information for determining at least one of the number or size of vector direction candidates may be signaled through the bitstream. For example, a flag for determining the vector direction candidates may be signaled through the bitstream. The flag may be signaled at a sequence, picture, or slice level. For example, when the value of the flag is 0, the four vector direction candidates illustrated in Table 9 can be used. On the other hand, if the value of the flag is 1, the eight vector direction candidates illustrated in Table 10 or Table 11 can be used.
• Alternatively, at least one of the number or size of vector direction candidates can be determined based on the size of the offset vector. For example, if the value of the variable DistFromMergeMV for determining the size of the offset vector is equal to or smaller than the threshold value, the eight vector direction candidates illustrated in Table 10 or Table 11 can be used, whereas if the value of the variable DistFromMergeMV is greater than the threshold value, the four vector direction candidates illustrated in Table 9 can be used.
• Alternatively, at least one of the number or size of vector direction candidates can be determined based on the motion vector. For example, if the difference between MVx and MVy, or the absolute value of that difference, is equal to or below the threshold, the eight vector direction candidates illustrated in Table 10 or Table 11 can be used, whereas if the difference between MVx and MVy, or the absolute value of that difference, is greater than the threshold, the four vector direction candidates illustrated in Table 9 can be used.
  • the motion vector of the current block can be derived by adding the offset vector to the initial motion vector. Equation 15 shows an example of determining the motion vector of the current block.
• mvL0 denotes the L0 motion vector of the current block, and mvL1 denotes the L1 motion vector of the current block.
• mergeMVL0 represents the L0 initial motion vector of the current block (that is, the L0 motion vector of the merge candidate), and mergeMVL1 represents the L1 initial motion vector of the current block. [0] represents the horizontal direction component of the motion vector, and [1] represents the vertical direction component of the motion vector.
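Equation 15 amounts to a component-wise addition of the offset vector to the initial motion vector; a minimal sketch (variable names follow the text, the function name is illustrative):

```python
def refine_motion_vector(merge_mv, offset_mv):
    """mvLX = mergeMVLX + offsetMVLX, applied per component
    ([0] = horizontal, [1] = vertical)."""
    return [merge_mv[0] + offset_mv[0], merge_mv[1] + offset_mv[1]]

merge_mv_l0 = [12, -3]   # initial motion vector taken from the merge candidate
offset_mv_l0 = [4, 0]    # offset vector from distance_idx / direction_idx
assert refine_motion_vector(merge_mv_l0, offset_mv_l0) == [16, -3]
```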
• the affine seed vector derived based on the affine merge mode or the affine motion vector prediction mode, or the motion vector of a subblock (subblock motion vector or affine subblock vector), can be updated based on the offset vector. Specifically, an offset vector is added to or subtracted from the affine seed vector or the motion vector of the subblock, and thereby the updated affine seed vector or the updated subblock motion vector can be derived. Under the affine motion model, refining the affine seed vector or the subblock motion vector in this way can be referred to as the affine merge offset encoding method.
• the merge index merge_idx for determining the initial motion vector of the current block, the index information distance_idx for determining the size of the offset vector, and the index information direction_idx for determining the direction of the offset vector may be signaled.
• The index information distance_idx for determining the size points to any one of the plurality of size candidates, and the index information direction_idx for determining the direction indicates any one of the plurality of direction candidates.
• Based on distance_idx and direction_idx, the offset vector (offsetMVL0, offsetMVL1) can be determined.
• the updated affine seed vector can be derived by adding the offset vector to, or subtracting the offset vector from, the affine seed vector.
• the sign of the offset vector applied to each affine seed vector can be determined according to the temporal direction of the reference picture. For example, when bidirectional prediction is applied to the coding block and the temporal direction of the L0 reference picture and the temporal direction of the L1 reference picture are the same, an offset vector can be added to each affine seed vector.
• the temporal direction can be determined based on the output order of the current picture and the reference picture. For example, if the sign of the output order difference between the current picture and the L0 reference picture is the same as the sign of the output order difference between the current picture and the L1 reference picture, the temporal direction of the L0 reference picture and the temporal direction of the L1 reference picture can be determined to be the same.
• On the other hand, if the output order difference between the current picture and the L0 reference picture is negative but the output order difference between the current picture and the L1 reference picture is positive, or if the output order difference between the current picture and the L0 reference picture is positive but the output order difference between the current picture and the L1 reference picture is negative, it can be determined that the temporal direction of the L0 reference picture and the temporal direction of the L1 reference picture are different, and the updated affine seed vector can be derived accordingly.
• the present invention is not limited to this. It is also possible to determine the offset vector of each affine seed vector individually.
  • an offset vector can be set for each subblock.
  • the motion vector of a subblock can be updated using the offset vector of the subblock.
• the range of candidates can be determined differently. That is, depending on the motion vector precision of the current block or a neighboring block, at least one of the number of candidates for the offset vector size, the minimum value, or the maximum value may be different. For example, if the affine motion model is applied to the current block and the motion vector precision of the current block is 1/4-pel, the variable DistFromMergeMV can be determined to be one of 1, 2, 4, 8, 16, 32, 64, and 128, whereas if the motion vector precision of the current block is integer-pel, the variable DistFromMergeMV can be determined to be one of 4, 8, 16, 32, 64, 128, 256, and 512.
  • information for specifying any one of a plurality of offset vector size candidate sets may be signaled through the bitstream.
• Here, at least one of the number of offset vector size candidates that each of the offset vector size candidate sets contains, or the types of those candidates, may be different.
• For example, in the first offset vector size candidate set, the variable DistFromMergeMV may be determined as one of {1, 2, 4, 8, 16, 32, 64, 128}, and in the second offset vector size candidate set, the variable DistFromMergeMV can be determined as one of {4, 8, 16, 32, 64, 128, 256, 512}.
• Index information DistMV_idx specifying any one of a plurality of offset vector size candidate sets may be signaled through a bitstream. For example, a DistMV_idx of 0 indicates that the first offset vector size candidate set is selected, and a DistMV_idx of 1 indicates that the second offset vector size candidate set is selected.
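Selecting a size candidate set via DistMV_idx can be sketched as follows (the set contents are taken from the example above; the lookup helper is illustrative):

```python
# Candidate sets from the example: the second set covers a larger range.
OFFSET_SIZE_SETS = [
    [1, 2, 4, 8, 16, 32, 64, 128],      # first offset vector size candidate set
    [4, 8, 16, 32, 64, 128, 256, 512],  # second offset vector size candidate set
]

def dist_from_merge_mv_with_set(dist_mv_idx, distance_idx):
    """DistMV_idx picks the candidate set; distance_idx picks the value."""
    return OFFSET_SIZE_SETS[dist_mv_idx][distance_idx]

assert dist_from_merge_mv_with_set(0, 3) == 8
assert dist_from_merge_mv_with_set(1, 3) == 32
```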
• An offset vector array (or difference vector array) can also be defined as offset data.
• motion compensation for the subblock can be performed using the derived subblock motion vector.
• When performing the motion compensation, an additional offset vector for each subblock, or for each sample within the subblock, is available.
  • the offset vector for the subblock can be derived using the offset vector candidate. Based on the offset vector, it is possible to update the sub-block motion vector, and perform motion compensation for the sub-block based on the updated sub-block motion vector.
• An offset vector for each prediction sample within a subblock may be derived. Specifically, based on the position of each prediction sample within the subblock, an offset vector for each prediction sample can be derived. Here, the location of the prediction sample can be determined based on the upper-left sample of the subblock.
• the x component of the offset vector for the prediction sample can be derived based on a value obtained by multiplying the difference between the x component of the second affine seed vector and the x component of the first affine seed vector by the x-axis coordinate of the prediction sample, and a value obtained by multiplying the difference between the y component of the second affine seed vector and the y component of the first affine seed vector by the y-axis coordinate of the prediction sample.
• Alternatively, the y component of the offset vector for the prediction sample can be derived based on a value obtained by multiplying the difference between the x component of the third affine seed vector and the x component of the first affine seed vector by the x-axis coordinate of the prediction sample, and a value obtained by multiplying the difference between the y component of the third affine seed vector and the y component of the second affine seed vector by the y-axis coordinate of the prediction sample.
• Alternatively, the y component of the offset vector can be derived based on a value obtained by multiplying the difference between the x component of the first affine seed vector and the x component of the second affine seed vector by the x-axis coordinate of the prediction sample, and a value obtained by multiplying the difference between the y component of the second affine seed vector and the y component of the first affine seed vector by the y-axis coordinate of the prediction sample.
  • the offset vectors of the prediction samples in the sub-block may have different values.
• the offset vector array for the prediction samples can be commonly applied to all subblocks. That is, the offset vector array applied to the first subblock and the offset vector array applied to the second subblock may be the same.
  • an offset vector array for each sample can be derived by taking the position of the subblock into consideration.
  • an offset vector array different between the subblocks can be applied.
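A sketch of building such a per-sample offset vector array from the affine seed vector differences, under a 4-parameter affine model as described above; the normalization by block width and the fixed-point shifts a real codec uses are simplified away, and all names are illustrative:

```python
def offset_vector_array(seed1, seed2, width, height):
    """seed1/seed2: (x, y) affine seed vectors at the top-left and
    top-right corners. Returns offsetMV[y][x] = (dx, dy) per prediction
    sample, relative to the subblock's top-left sample."""
    dmv_x = (seed2[0] - seed1[0]) / width  # change of MVx per x step
    dmv_y = (seed2[1] - seed1[1]) / width  # change of MVy per x step
    arr = []
    for y in range(height):
        row = []
        for x in range(width):
            # 4-parameter model: rotation/zoom couples the x and y terms
            row.append((dmv_x * x - dmv_y * y, dmv_y * x + dmv_x * y))
        arr.append(row)
    return arr

arr = offset_vector_array((0, 0), (4, 0), 4, 4)
assert arr[0][0] == (0, 0)
assert arr[0][2] == (2.0, 0.0)   # offset grows along the x axis
```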
  • each prediction sample can be updated based on the offset vector.
• the update of the prediction sample can be performed based on the offset vector of the prediction sample and the gradient of the prediction sample.
  • the gradient for the predicted sample can be derived based on the difference value of the predicted samples.
• the gradient for the first prediction sample can be derived based on the difference value between the first prediction sample and prediction samples belonging to the same line, or the difference value between prediction samples belonging to a line adjacent to the first prediction sample.
  • the gradient for the first predicted sample may be derived as a difference value between the first predicted sample and another predicted sample belonging to the same line as the first predicted sample.
• the horizontal direction gradient of the first prediction sample is derived as the difference value between the first prediction sample and a second prediction sample belonging to the same row as the first prediction sample, and the vertical direction gradient of the first prediction sample can be derived as the difference value between the first prediction sample and a third prediction sample belonging to the same column as the first prediction sample.
  • the second predicted sample and the third predicted sample may be adjacent to the first predicted sample.
• the second prediction sample may be positioned to the left or right of the first prediction sample, and the third prediction sample may be positioned above or below the first prediction sample. Alternatively, the second prediction sample and the third prediction sample may be spaced a predetermined distance from the first prediction sample in the x-axis and/or y-axis direction.
  • the predetermined distance may be a natural number such as 1, 2 or 3.
  • the difference value of the predicted samples belonging to the line adjacent to the first predicted sample may be set as the gradient with respect to the first predicted sample.
• For example, the horizontal direction gradient with respect to the first prediction sample can be derived as a difference value of prediction samples belonging to the row adjacent to the first prediction sample.
  • the row adjacent to the first predicted sample may mean the upper row or the lower row of the first predicted sample.
• At least one of the prediction samples used to derive the horizontal direction gradient may be adjacent to the first prediction sample, and the other may not be adjacent to the first prediction sample; the horizontal direction gradient can be derived from them.
  • the vertical gradient for the first prediction sample can be derived as a difference value between the prediction samples belonging to the column adjacent to the first prediction sample.
• the column adjacent to the first prediction sample can mean the left column or the right column of the first prediction sample.
• At least one of the prediction samples used to derive the vertical direction gradient of the first prediction sample may be adjacent to the first prediction sample, and the other may not be adjacent to the first prediction sample.
  • the predetermined distance may be a natural number such as 1, 2 or 3.
• Equation 18 shows an example of deriving the horizontal direction gradient gradientH for the first prediction sample.
• predSample represents the prediction sample.
• [x] and [y] represent the x-axis and y-axis coordinates of the sample.
• shift1 represents a shifting parameter, which can have a predefined value in the encoder and decoder, or can be determined adaptively based on at least one of the size, shape, aspect ratio, or affine motion model of the current block.
  • the offset predicted value can be derived based on a multiplication operation of a gradient and an offset vector.
  • Equation 19 shows an example of deriving the offset predicted value OffsetPred.
• OffsetPred[x][y] = gradientH[x][y] * offsetMV[x][y][0] + gradientV[x][y] * offsetMV[x][y][1]
  • Equation 20 shows an example of updating the predicted sample.
• predSample[x][y] = predSample[x][y] + OffsetPred[x][y]
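The gradient and update steps (Equations 18 to 20) can be sketched as follows, using neighboring-sample differences for the gradients and omitting the shift1 fixed-point scaling for readability; the exact neighbor choice and arithmetic are assumptions, not the normative form:

```python
def refine_prediction(pred, offset_mv):
    """pred: 2-D list of prediction samples. offset_mv[y][x] = (dx, dy).
    gradientH/gradientV from neighboring-sample differences (Equation 18),
    OffsetPred = gradH*dx + gradV*dy (Equation 19), then each sample is
    updated by adding OffsetPred (Equation 20)."""
    h, w = len(pred), len(pred[0])
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            # One-sided differences, clamped at the block border.
            grad_h = pred[y][min(x + 1, w - 1)] - pred[y][max(x - 1, 0)]
            grad_v = pred[min(y + 1, h - 1)][x] - pred[max(y - 1, 0)][x]
            dx, dy = offset_mv[y][x]
            offset_pred = grad_h * dx + grad_v * dy   # Equation 19
            out[y][x] = pred[y][x] + offset_pred      # Equation 20
    return out

pred = [[10, 12], [10, 12]]
off = [[(1, 0), (1, 0)], [(1, 0), (1, 0)]]
assert refine_prediction(pred, off)[0][0] == 12  # 10 + gradH(=2) * 1
```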
• Alternatively, the prediction sample can be updated by adding the offset prediction value to a peripheral prediction sample.
• Here, the peripheral prediction sample may include at least one of the sample positioned to the right of the prediction sample, the sample positioned below the prediction sample, or the sample positioned at the bottom right of the prediction sample.
• Equation 21 shows an example of updating the prediction sample using a peripheral prediction sample.
• predSample[x][y] = predSample[x+1][y+1] + OffsetPred[x][y]
  • Information indicating whether to use an offset vector when performing motion compensation for the current block may be signaled through a bitstream.
  • the information may be a 1-bit flag.
• the use of the offset vector may be determined based on the size of the current block, the shape of the current block, or whether the affine seed vectors are the same. For example, when a 4-parameter affine motion model is applied to the current block and the first affine seed vector and the second affine seed vector are the same, motion compensation may be performed using the offset vector. Alternatively, if a 6-parameter affine motion model is applied to the current block, motion compensation can be performed using the offset vector when the first affine seed vector, the second affine seed vector, and the third affine seed vector are all the same, or when two of the first affine seed vector, the second affine seed vector, and the third affine seed vector are the same.
• One prediction mode can be applied to the current block multiple times, or multiple prediction modes can be applied in combination. A prediction method in which homogeneous or heterogeneous prediction modes are combined in this way can be referred to as a combined prediction mode (or multi-hypothesis prediction mode).
• the combined prediction mode may include at least one of a mode in which the merge mode and the merge mode are combined, a mode in which inter prediction and intra prediction are combined, a mode in which the merge mode and the motion vector prediction mode are combined, a mode in which the motion vector prediction mode and the motion vector prediction mode are combined, or a mode in which the merge mode and intra prediction are combined.
• a first prediction block may be generated based on the first prediction mode, and a second prediction block may be generated based on the second prediction mode. Then, a third prediction block may be generated based on a weighted sum operation of the first prediction block and the second prediction block. The third prediction block may be set as the final prediction block of the current block.
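The weighted-sum combination can be sketched as follows (equal weights are assumed here; the actual weights may be signaled or derived, and the rounding scheme is illustrative):

```python
def combine_predictions(pred1, pred2, w1=1, w2=1):
    """Third prediction block = weighted sum of the first and second
    prediction blocks, with simple round-to-nearest integer division."""
    total = w1 + w2
    return [[(w1 * a + w2 * b + total // 2) // total
             for a, b in zip(r1, r2)] for r1, r2 in zip(pred1, pred2)]

p_merge = [[100, 104]]   # e.g. block from merge-mode inter prediction
p_intra = [[120, 108]]   # e.g. block from intra prediction
assert combine_predictions(p_merge, p_intra) == [[110, 106]]
```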
• Subblock motion compensation mode: a method of deriving subblock motion vectors based on a merge candidate and performing motion compensation on a per-subblock basis.
• Prediction unit partitioning-based encoding mode: a method of dividing the current block into a plurality of prediction units and deriving the motion information of each prediction unit from different merge candidates.
• Combined prediction mode: a method of combining intra prediction and inter prediction (e.g., merge mode).
• the merge flag merge_flag indicates whether at least one piece of motion information of the current block is derived from a merge candidate. For example, if the value of the syntax merge_flag is 1, it indicates that one of the inter prediction methods based on the merge mode described above (e.g., the merge offset encoding mode, the subblock motion compensation mode, the prediction unit partitioning-based encoding mode, or the combined prediction mode) is applied to the current block, whereas if the value of the syntax merge_flag is 0, it indicates that the inter prediction methods based on the merge mode described above are not applied to the current block.
• When the value of merge_flag is 1, at least one of the syntax merge_subblock_flag for determining whether to apply the subblock motion compensation mode, the syntax merge_offset_vector_flag or mmvd_flag for determining whether to apply the merge offset encoding mode, the syntax triangle_partition_flag or merge_triangle_flag indicating whether to apply the prediction unit partitioning-based encoding mode, or the syntax ciip_flag indicating whether to apply the combined prediction mode may be additionally signaled.
• Alternatively, whether an inter prediction method based on a specific merge mode is applied to the current block may be determined based on at least one of the size of the current block, the shape of the current block, or the number of merge candidates included in the merge candidate list, instead of the flag.
• For example, the regular merge mode can be set not to be applied to the current block. In this case, however, whether the regular merge mode is applied to the current block is determined only after parsing all of mmvd_flag indicating whether the merge offset encoding mode is applied, merge_subblock_flag indicating whether the subblock motion compensation mode is applied, merge_triangle_flag indicating whether the prediction unit partitioning-based encoding mode is applied, and ciip_flag indicating whether the combined prediction mode is applied. There is thus a problem that many syntax elements must be parsed even though the regular merge mode is used more frequently than the other modes, and determining whether the regular merge mode is applied to the current block involves several levels of parsing dependency. In addition, the index for specifying a merge candidate under the regular merge mode can only be parsed after these flags.
• To address this, information indicating whether the regular merge mode is used can be separately coded. Specifically, a flag indicating whether the regular merge mode is applied can be encoded and signaled.
• Table 12 shows a syntax table including the syntax regular_merge_flag.
• The syntax regular_merge_flag may be coded only when it is determined that an inter prediction method based on the merge mode is applied to the current block, that is, when the value of the syntax merge_flag is 1.
• When the value of the syntax regular_merge_flag is 1, it indicates that the regular merge mode is applied to the current block.
• On the other hand, when the value of the syntax regular_merge_flag is 0, at least one of the flag merge_subblock_flag indicating whether the subblock motion compensation mode is applied, the flag ciip_flag indicating whether the combined prediction mode is applied, or the flag merge_triangle_flag indicating whether the prediction unit partitioning-based encoding mode is applied can be signaled through the bitstream.
• Alternatively, a flag indicating whether at least one of the regular merge mode or the merge offset encoding mode is applied can be encoded and signaled. For example, a flag regular_mmvd_merge_flag indicating whether at least one of the regular merge mode or the merge offset encoding mode is applied can be signaled through the bitstream.
• Table 13 shows a syntax table including the syntax regular_mmvd_merge_flag.
• When the value of the syntax regular_mmvd_merge_flag is 1, it indicates that the regular merge mode or the merge offset encoding mode is applied to the current block.
• In this case, the syntax mmvd_flag indicating whether the merge offset encoding mode is applied to the current block may be signaled. If the value of the syntax mmvd_flag is 1, it indicates that the merge offset encoding mode is applied to the current block, whereas if the value is 0, it indicates that the regular merge mode is applied to the current block.
• When the value of the syntax regular_mmvd_merge_flag is 0, it indicates that the regular merge mode and the merge offset encoding mode are not applied to the current block.
• It is also possible to signal at least one flag, such as the flag merge_triangle_flag indicating whether the prediction unit partitioning-based encoding mode is applied, before the flag regular_mmvd_merge_flag indicating whether the regular merge mode or the merge offset encoding mode is applied.
• For example, when the flag merge_subblock_flag indicating whether the subblock motion compensation mode is applied is 0, the flag indicating whether the regular merge mode or the merge offset encoding mode is applied can be signaled.
• Alternatively, the merge index merge_idx for specifying a merge candidate may be encoded and signaled, and the availability of the merge offset encoding mode may be determined based on the value of merge_idx. For example, if the value of merge_idx is less than a threshold, the regular merge mode or the merge offset encoding mode may be applied to the current block. On the other hand, if the value of merge_idx is equal to or greater than the threshold, the regular merge mode is applied to the current block, and the merge offset encoding mode is not applied. In this case, the encoding of the syntax mmvd_flag indicating whether the merge offset encoding mode is applied is omitted, and the regular merge mode can be applied to the current block. That is, only when the value of merge_idx is less than 2 can the merge offset encoding mode be applied.
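A decoder-side sketch of this parsing dependency (the bitstream-reading helper is a placeholder; only the ordering logic, with an assumed threshold of 2, is shown):

```python
def parse_merge_mode(read_flag, merge_idx, threshold=2):
    """read_flag: callable returning the next 1-bit flag by name.
    mmvd_flag is only parsed when merge_idx is below the threshold;
    otherwise the regular merge mode is inferred without parsing it."""
    if merge_idx < threshold and read_flag("mmvd_flag"):
        return "merge_offset"   # merge offset encoding mode
    return "regular_merge"

assert parse_merge_mode(lambda name: 1, merge_idx=1) == "merge_offset"
# merge_idx >= threshold: mmvd_flag is not parsed at all
assert parse_merge_mode(lambda name: 1, merge_idx=3) == "regular_merge"
```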
• Table 14 shows an example in which the merge index is parsed before the flag indicating whether the regular merge mode is applied.
• a merge index for specifying any one of the merge candidates may be signaled. That is, when the value of the syntax merge_flag is 1, merge_idx can be signaled to specify the merge candidate.
• the flag indicating whether the regular merge mode is applied can be parsed after the merge index merge_idx is parsed. For example, after merge_idx is parsed, the flag regular_merge_flag indicating whether the regular merge mode is applied, or the flag regular_mmvd_merge_flag indicating whether the regular merge mode or the merge offset encoding mode is applied, can be set to be signaled.
• The merge candidate specified by the merge index can be used to derive motion information of the current block.
• For example, the merge candidate specified by the merge index merge_idx can be used to derive the initial motion vector of the current block.
• A flag indicating whether the merge offset encoding mode is applied can be signaled through the bitstream only when the merge index is smaller than the threshold value.
• Alternatively, the merge candidate specified by the merge index can be used to derive the motion vectors of the subblocks.
• Alternatively, the prediction unit partitioning-based encoding mode can be applied to the current block.
• When the prediction unit partitioning-based encoding mode is applied, the merge candidate specified by the merge index can be set as the merge candidate of either the first partition or the second partition.
• In addition, when the prediction unit partitioning-based encoding mode is applied, a merge index for specifying the merge candidate of the other one of the first partition and the second partition may be additionally signaled.
• The flag regular_merge_flag or regular_mmvd_merge_flag indicating whether the above-described regular merge mode is applied can be signaled only when the size of the current block is smaller than a threshold. For example, when the size of the current block is smaller than 128x128, the flag indicating whether the regular merge mode is applied may be signaled. When the flag indicating whether the regular merge mode is applied is not signaled, it indicates that the regular merge mode is not applied to the current block. In this case, the prediction unit partitioning-based encoding mode can be applied to the current block.
• Intra prediction predicts the current block based on reconstructed samples around the current block for which encoding/decoding has been completed.
• At this time, reconstructed samples before the in-loop filter is applied can be used.
  • Information indicating the intra prediction technique of the current block can be signaled through the bitstream.
  • the information may be a 1-bit flag.
• Information for specifying any one of a plurality of previously stored matrices can be signaled through the bitstream.
  • the decoder can determine a matrix for intra prediction of the current block based on the information and the size of the current block.
• The residual image can be derived by subtracting the prediction image from the original image.
• At this time, the residual image for the current block can be transformed to decompose it into two-dimensional frequency components.
• The transform can be performed using a transform technique such as DCT (Discrete Cosine Transform) or DST (Discrete Sine Transform).
  • the transformation method can be determined in units of blocks.
• the transform method can be determined based on at least one of the prediction encoding mode of the current block, the size of the current block, or the shape of the current block. For example, if the current block is encoded in the intra prediction mode and the size of the current block is smaller than NxN, the transform can be performed using the transform technique DST. On the other hand, when the above conditions are not satisfied, the transform can be performed using the transform technique DCT.
• The transformed current block can be transformed again. In this case, the transform based on DCT or DST is defined as the first transform, and transforming again a block to which the first transform has been applied can be defined as the second transform.
  • the first conversion can be performed using any one of a plurality of conversion core candidates.
  • the first transformation may be performed using either DCT2 0018 or 1X17.
  • the units of the first transformation and the second transformation may be different.
  • the first transformation can be performed on an 8x8 block
  • the second transformation can be performed on a 4x4 sub-block among the converted 8x8 blocks.
  • Information indicating whether the second transform is performed may be signaled through the bitstream. Alternatively, whether to perform the second transform may be determined based on whether the horizontal-direction transform core and the vertical-direction transform core are identical. For example, the second transform may be performed only when the horizontal-direction transform core and the vertical-direction transform core are the same. Alternatively, the second transform may be performed only when the horizontal-direction transform core and the vertical-direction transform core differ. Alternatively, the second transform may be allowed only when a predefined transform core is used for the transform in the horizontal direction and the transform in the vertical direction.
  • Alternatively, whether to perform the second transform may be determined based on the number of non-zero transform coefficients of the current block. For example, when the number of non-zero transform coefficients of the current block is smaller than or equal to a threshold, it may be set that the second transform is not used, and when the number of non-zero transform coefficients of the current block is larger than the threshold, it may be set that the second transform is used. The second transform may be set to be used only when the current block is encoded with intra prediction.
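The decision above can be sketched as a small predicate. The threshold value used as the default here is an illustrative assumption, not a value stated in the text.

```python
# Sketch of the second-transform enable decision described above: compare the
# count of non-zero coefficients against a threshold, and optionally restrict
# the second transform to intra-predicted blocks. The default threshold (2)
# is an assumption for illustration.
def second_transform_enabled(num_nonzero_coeffs: int, is_intra: bool,
                             threshold: int = 2) -> bool:
    if not is_intra:                    # restriction to intra prediction
        return False
    return num_nonzero_coeffs > threshold
```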
  • The decoder can perform the inverse of the second transform (the second inverse transform), and perform the inverse of the first transform (the first inverse transform) on its result. After the second inverse transform and the first inverse transform are performed, the residual signals for the current block are acquired. In other words, the decoder can obtain the residual block through inverse quantization and inverse transform.
  • The in-loop filters may include at least one of a deblocking filter, a sample adaptive offset filter (SAO), or an adaptive loop filter (ALF).
  • Each of the components (e.g., units, modules, etc.) constituting the block diagram in the above-described embodiments may be implemented as a hardware device or software, or a plurality of components may be combined and implemented as a single hardware device or software.
  • The above-described embodiments may be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may contain program instructions, data files, data structures, and the like, alone or in combination. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. The hardware device may be configured to operate as one or more software modules to perform processing according to the present invention, and vice versa.

Industrial Applicability
  • The present invention can be applied to an electronic device that encodes/decodes an image.


Abstract

An image decoding method according to the present invention may include: parsing a first flag indicating whether inter prediction based on a merge mode is applied to a current block; when the first flag is true, parsing a second flag indicating whether a regular merge mode or a merge offset encoding mode is applied to the current block; and, when the second flag is true, parsing a third flag indicating whether the merge offset encoding mode is applied to the current block.

Description

Specification
Title of Invention: Image signal encoding/decoding method and apparatus therefor
Technical Field
[1] The present invention relates to an image signal encoding/decoding method and an apparatus therefor.
Background Art
[2] As display panels grow ever larger, video services of increasingly high quality are in demand. The biggest problem of high-definition video services is the large increase in the amount of data, and to solve this problem, research to improve the video compression rate is being actively conducted. As a representative example, in 2009 the Motion Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG) under the ITU-T (International Telecommunication Union-Telecommunication) formed the Joint Collaborative Team on Video Coding (JCT-VC). JCT-VC proposed High Efficiency Video Coding (HEVC), a video compression standard with about twice the compression performance of H.264/AVC, which was approved as a standard on January 25, 2013. With the rapid development of high-definition video services, the performance of HEVC is gradually revealing its limits.
Detailed Description of the Invention
Technical Problem
[3] An object of the present invention is to provide a method of deriving merge candidates using a motion information table in encoding/decoding a video signal, and an apparatus for performing the method.
[4] An object of the present invention is to provide a method of updating the motion information of blocks included in a merge processing region into a motion information table in encoding/decoding a video signal, and an apparatus for performing the method.
[5] An object of the present invention is to provide a method of refining a motion vector derived on the basis of a merge candidate in encoding/decoding a video signal, and an apparatus for performing the method.
[6] An object of the present invention is to provide a method of efficiently determining the inter prediction method to be applied to a current block in encoding/decoding a video signal, and an apparatus for performing the method.
[7] The technical problems to be achieved by the present invention are not limited to the technical problems mentioned above, and other technical problems not mentioned will be clearly understood from the description below by those of ordinary skill in the art to which the present invention belongs.
Technical Solution
[8] A video signal decoding method according to the present invention may include: parsing a first flag indicating whether inter prediction based on a merge mode is applied to a current block; when the first flag is true, parsing a second flag indicating whether a regular merge mode or a merge offset encoding mode is applied to the current block; and, when the second flag is true, parsing a third flag indicating whether the merge offset encoding mode is applied to the current block. Here, when the third flag is true, the merge offset encoding mode is applied to the current block, and when the third flag is false, the regular merge mode may be applied to the current block.
[9] A video signal encoding method according to the present invention may include: encoding a first flag indicating whether inter prediction based on a merge mode is applied to a current block; when the first flag is true, encoding a second flag indicating whether a regular merge mode or a merge offset encoding mode is applied to the current block; and, when the second flag is true, encoding a third flag indicating whether the merge offset encoding mode is applied to the current block. Here, when the merge offset encoding mode is applied to the current block, the third flag is set to true, and when the regular merge mode is applied to the current block, the third flag may be set to false.
[10] The video signal decoding/encoding method according to the present invention may further include, when the second flag is false, parsing/encoding a fourth flag indicating whether a combined prediction mode is applied to the current block.
[11] In the video signal decoding/encoding method according to the present invention, a prediction-unit-partitioning-based encoding method may be applicable when the fourth flag is false.
[12] In the video signal decoding/encoding method according to the present invention, the motion information of the current block is derived from a merge candidate list of the current block, and when the number of merge candidates derived from neighboring blocks of the current block is less than or equal to a threshold, a motion information candidate included in a motion information table may be added to the merge candidate list as a merge candidate.
[13] In the video signal decoding/encoding method according to the present invention, when the current block is included in a merge processing region, the motion information table may not be updated while the blocks included in the merge processing region are being decoded.
[14] In the video signal decoding/encoding method according to the present invention, when the current block is included in a merge processing region, whether to update the motion information of the current block into the motion information table may be determined based on the position of the current block within the merge processing region.
[15] In the video signal decoding/encoding method according to the present invention, when the current block is located at the bottom right within the merge processing region, it may be determined that the motion information of the current block is to be updated into the motion information table.
[16] The features briefly summarized above with respect to the present invention are merely exemplary aspects of the detailed description of the invention that follows, and do not limit the scope of the present invention.
Advantageous Effects
[17] According to the present invention, inter prediction efficiency can be improved by deriving merge candidates using a motion information table.
[18] According to the present invention, inter prediction efficiency can be improved by providing a method of updating the motion information of blocks included in a merge processing region into a motion information table.
[19] According to the present invention, inter prediction efficiency can be improved by refining a motion vector derived on the basis of a merge candidate.
[20] According to the present invention, the inter prediction method to be applied to a current block can be determined efficiently.
[21] The effects obtainable from the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood from the description below by those of ordinary skill in the art to which the present invention belongs.
Brief Description of the Drawings
[22] FIG. 1 is a block diagram of an image encoder according to an embodiment of the present invention.
[23] FIG. 2 is a block diagram of an image decoder according to an embodiment of the present invention.
[24] FIG. 3 illustrates a basic coding tree unit according to an embodiment of the present invention.
[25] FIG. 4 illustrates various partition types of a coding block.
[26] FIG. 5 illustrates partitioning aspects of a coding tree unit.
[27] FIG. 6 is a flowchart of an inter prediction method according to an embodiment of the present invention.
[28] FIG. 7 is a flowchart of a process of deriving the motion information of a current block under a merge mode.
[29] FIG. 8 illustrates candidate blocks used to derive a merge candidate.
[30] FIG. 9 illustrates the positions of reference samples.
[31] FIG. 10 illustrates candidate blocks used to derive a merge candidate.
[32] FIG. 11 illustrates an example in which the position of a reference sample is changed.
[33] FIG. 12 illustrates an example in which the position of a reference sample is changed.
[34] FIG. 13 is a diagram for explaining an update aspect of a motion information table.
[35] FIG. 14 illustrates an update aspect of a motion information table.
[36] FIG. 15 illustrates an example in which the index of a pre-stored motion information candidate is updated.
[37] FIG. 16 illustrates the position of a representative sub-block.
[38] FIG. 17 illustrates an example in which a motion information table is generated per inter prediction mode.
[39] FIG. 18 illustrates an example in which a motion information table is generated per motion vector resolution.
[40] FIG. 19 illustrates an example in which the motion information of a block to which the merge offset encoding method has been applied is stored in a separate motion information table.
[41] FIG. 20 illustrates an example in which a motion information candidate included in a long-term motion information table is added to a merge candidate list.
[42] FIG. 21 illustrates an example in which a redundancy check is performed on only some of the merge candidates.
[43] FIG. 22 illustrates an example in which a redundancy check with a specific merge candidate is omitted.
[44] FIG. 23 illustrates an example in which a candidate block included in the same merge processing region as the current block is set as unavailable as a merge candidate.
[45] FIG. 24 illustrates an example of deriving a merge candidate for the current block when the current block is included in a merge processing region.
[46] FIG. 25 illustrates a temporary motion information table.
[47] FIG. 26 illustrates an example of merging a motion information table and a temporary motion information table.
[48] FIG. 27 illustrates an example of partitioning a coding block into a plurality of prediction units using a diagonal line.
[49] FIG. 28 illustrates an example of partitioning a coding block into two prediction units.
[50] FIG. 29 illustrates examples of partitioning a coding block into a plurality of prediction blocks of different sizes.
[51] FIG. 30 illustrates neighboring blocks used to derive a partition-mode merge candidate.
[52] FIG. 31 is a diagram for explaining an example of determining the availability of a neighboring block per prediction unit.
[53] FIGS. 32 and 33 illustrate examples of deriving a prediction sample based on a weighted-sum operation of a first prediction sample and a second prediction sample.
[54] FIG. 34 illustrates offset vectors according to the value of distance_idx, indicating the magnitude of an offset vector, and the value of direction_idx, indicating its direction.
[55] FIG. 35 illustrates offset vectors according to the value of distance_idx, indicating the magnitude of an offset vector, and the value of direction_idx, indicating its direction.
Mode for Carrying Out the Invention
[56] Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
[57] Encoding and decoding of an image are performed in units of blocks. For example, encoding/decoding processing such as transform, quantization, prediction, in-loop filtering, or reconstruction may be performed on a coding block, a transform block, or a prediction block.
[58] Hereinafter, a block to be encoded/decoded is referred to as the 'current block'. For example, the current block may represent a coding block, a transform block, or a prediction block, depending on the current encoding/decoding processing step.
[59] In addition, the term 'unit' used in this specification denotes a basic unit for performing a specific encoding/decoding process, and 'block' may be understood to denote a sample array of a predetermined size. Unless otherwise stated, 'block' and 'unit' may be used interchangeably. For example, in the embodiments described below, a coding block and a coding unit may be understood to have equivalent meanings.
[60] FIG. 1 is a block diagram of an image encoder according to an embodiment of the present invention.
[61] Referring to FIG. 1, the image encoding apparatus 100 may include a picture partitioning unit 110, prediction units 120 and 125, a transform unit 130, a quantization unit 135, a rearrangement unit 160, an entropy encoding unit 165, an inverse quantization unit 140, an inverse transform unit 145, a filter unit 150, and a memory 155.
[62] The components shown in FIG. 1 are illustrated independently to represent mutually distinct characteristic functions in the image encoding apparatus; this does not mean that each component consists of separate hardware or a single software unit. That is, the components are listed individually for convenience of description; at least two components may be combined into one component, or one component may be divided into a plurality of components to perform its function. Integrated and separated embodiments of each component are also included in the scope of the present invention, as long as they do not depart from its essence.
[63] In addition, some components may not be essential components performing essential functions of the present invention, but merely optional components for improving performance. The present invention may be implemented including only the components essential to realizing its essence, excluding components used merely for performance improvement, and a structure including only the essential components, excluding the optional components used merely for performance improvement, is also included in the scope of the present invention.
[64] The picture partitioning unit 110 may partition an input picture into at least one processing unit. Here, the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU). The picture partitioning unit 110 may partition one picture into combinations of a plurality of coding units, prediction units, and transform units, and encode the picture by selecting one combination of a coding unit, prediction units, and transform units according to a predetermined criterion (e.g., a cost function).
[65] For example, one picture may be partitioned into a plurality of coding units. To partition coding units in a picture, a recursive tree structure such as a quad tree structure may be used: with one image or the largest coding unit as the root, a coding unit that is partitioned into other coding units may be partitioned with as many child nodes as the number of partitioned coding units. A coding unit that is no longer partitioned due to certain constraints becomes a leaf node. That is, assuming that only square partitioning is possible for one coding unit, one coding unit may be partitioned into at most four other coding units.
[66] Hereinafter, in embodiments of the present invention, a coding unit may be used to mean a unit in which encoding is performed, or a unit in which decoding is performed.
[67] A prediction unit may be obtained by partitioning one coding unit into at least one square or rectangle of the same size, or may be partitioned such that one of the prediction units partitioned within one coding unit has a different shape and/or size from another.
[68] When a prediction unit on which intra prediction is to be performed is generated based on a coding unit and the coding unit is not the smallest coding unit, intra prediction may be performed without partitioning into a plurality of NxN prediction units.
[69] The prediction units 120 and 125 may include an inter prediction unit 120 performing inter prediction and an intra prediction unit 125 performing intra prediction. Whether to use inter prediction or to perform intra prediction for a prediction unit may be determined, and specific information according to each prediction method (e.g., intra prediction mode, motion vector, reference picture, etc.) may be determined. Here, the processing unit in which prediction is performed may differ from the processing unit in which the prediction method and its specific content are determined. For example, the prediction method, prediction mode, and the like may be determined per prediction unit, while prediction itself may be performed per transform unit. A residual value (residual block) between the generated prediction block and the original block may be input to the transform unit 130. In addition, prediction mode information, motion vector information, and the like used for prediction may be encoded by the entropy encoding unit 165 together with the residual value and transmitted to the decoder. When a specific encoding mode is used, it is also possible to encode the original block as-is and transmit it to the decoding unit without generating a prediction block through the prediction units 120 and 125.
[70] The inter prediction unit 120 may predict a prediction unit based on information of at least one of the picture preceding or following the current picture, and in some cases may predict a prediction unit based on information of a partial region whose encoding has been completed within the current picture. The inter prediction unit 120 may include a reference picture interpolation unit, a motion prediction unit, and a motion compensation unit.
[71] The reference picture interpolation unit may receive reference picture information from the memory 155 and generate pixel information at sub-integer pixel positions from the reference picture. For luma pixels, a DCT-based 8-tap interpolation filter with varying filter coefficients may be used to generate pixel information at sub-integer positions in units of 1/4 pixel. For chroma signals, a DCT-based 4-tap interpolation filter with varying filter coefficients may be used to generate pixel information at sub-integer positions in units of 1/8 pixel.
[72] The motion prediction unit may perform motion prediction based on the reference picture interpolated by the reference picture interpolation unit. Various methods such as FBMA (Full search-based Block Matching Algorithm), TSS (Three Step Search), and NTS (New Three-Step Search Algorithm) may be used to compute a motion vector. The motion vector may have a motion vector value in units of 1/2 or 1/4 pixel based on the interpolated pixels. The motion prediction unit may predict the current prediction unit using different motion prediction methods. Various methods such as the skip method, the merge method, the AMVP (Advanced Motion Vector Prediction) method, and the intra block copy method may be used as the motion prediction method.
[73] The intra prediction unit 125 may generate a prediction unit based on reference pixel information around the current block, which is pixel information in the current picture. When a neighboring block of the current prediction unit is an inter-predicted block and a reference pixel is therefore a pixel on which inter prediction was performed, the reference pixel included in the inter-predicted block may be replaced with reference pixel information of a neighboring block on which intra prediction was performed. That is, when a reference pixel is unavailable, the unavailable reference pixel information may be replaced with at least one of the available reference pixels.
[74] In intra prediction, prediction modes may include directional prediction modes that use reference pixel information according to a prediction direction, and non-directional modes that do not use directional information when performing prediction. The mode for predicting luma information may differ from the mode for predicting chroma information, and the intra prediction mode information used to predict the luma information, or the predicted luma signal information, may be utilized to predict the chroma information.
[75] When performing intra prediction, if the size of the prediction unit and the size of the transform unit are the same, intra prediction for the prediction unit may be performed based on the pixels to the left of, to the top-left of, and above the prediction unit. However, if the size of the prediction unit and the size of the transform unit differ when performing intra prediction, intra prediction may be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN partitioning may be used only for the smallest coding unit.
[76] The intra prediction method may generate a prediction block after applying an AIS (Adaptive Intra Smoothing) filter to the reference pixels according to the prediction mode. The type of AIS filter applied to the reference pixels may vary. To perform the intra prediction method, the intra prediction mode of the current prediction unit may be predicted from the intra prediction modes of prediction units around the current prediction unit. When predicting the prediction mode of the current prediction unit using mode information predicted from a neighboring prediction unit, if the intra prediction modes of the current prediction unit and the neighboring prediction unit are the same, information indicating that the prediction modes of the current and neighboring prediction units are the same may be transmitted using predetermined flag information; if the prediction modes of the current and neighboring prediction units differ, entropy encoding may be performed to encode the prediction mode information of the current block.
[77] In addition, a residual block including residual information, which is the difference between the prediction unit on which prediction was performed based on the prediction unit generated by the prediction units 120 and 125 and the original block of the prediction unit, may be generated. The generated residual block may be input to the transform unit 130.
[78] The transform unit 130 may transform the residual block, including the residual information between the original block and the prediction unit generated by the prediction units 120 and 125, using a transform method such as DCT (Discrete Cosine Transform) or DST (Discrete Sine Transform). Here, the DCT transform core includes at least one of DCT2 or DCT8, and the DST transform core includes DST7. Whether to apply DCT or DST to transform the residual block may be determined based on the intra prediction mode information of the prediction unit used to generate the residual block. The transform on the residual block may also be skipped, and a flag indicating whether the transform on the residual block is skipped may be encoded. The transform skip may be allowed for a residual block whose size is less than or equal to a threshold, a luma component, or a chroma component under the 4:4:4 format.
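The DCT named above can be illustrated with a minimal orthonormal 1-D DCT-II, the mathematical form behind the "DCT2" core; a 2-D block transform applies it along rows and then columns. This is a didactic sketch, not the normative integer transform of any codec.

```python
import math

# Minimal orthonormal 1-D DCT-II, the mathematical form behind the "DCT2"
# transform core named above. A 2-D block transform applies this along the
# rows and then the columns of the residual block.
def dct2_1d(x):
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

A flat residual concentrates all its energy in the DC coefficient, which is why prediction that removes structure from the residual improves compression.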
[79] The quantization unit 135 may quantize the values transformed into the frequency domain by the transform unit 130. The quantization coefficients may vary depending on the block or the importance of the image. The values computed by the quantization unit 135 may be provided to the inverse quantization unit 140 and the rearrangement unit 160.
[80] The rearrangement unit 160 may rearrange the coefficient values of the quantized residual values. The rearrangement unit 160 may change two-dimensional block-form coefficients into one-dimensional vector form through a coefficient scanning method. For example, the rearrangement unit 160 may scan from the DC coefficient to coefficients of the high-frequency region using a zig-zag scan method and change them into one-dimensional vector form. Depending on the size of the transform unit and the intra prediction mode, a vertical scan that scans the two-dimensional block-form coefficients in the column direction, or a horizontal scan that scans them in the row direction, may be used instead of the zig-zag scan. That is, which of the zig-zag scan, the vertical scan, and the horizontal scan is used may be determined according to the size of the transform unit and the intra prediction mode.
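The three scan orders described above can be sketched as position generators for an n x n coefficient block; the zig-zag convention shown (DC first, alternating diagonal direction) is the classic one and is assumed here for illustration.

```python
# Sketch of the coefficient scan orders described above for an n x n block:
# zig-zag (DC coefficient first, alternating diagonals), vertical
# (column-wise), and horizontal (row-wise) scans, as (row, col) positions.
def zigzag_scan(n):
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def vertical_scan(n):
    return [(r, c) for c in range(n) for r in range(n)]

def horizontal_scan(n):
    return [(r, c) for r in range(n) for c in range(n)]
```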
[81] The entropy encoding unit 165 may perform entropy encoding based on the values computed by the rearrangement unit 160.
[82] Entropy encoding may use various encoding methods such as Exponential Golomb, CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding).
[83] The entropy encoding unit 165 may encode various information from the rearrangement unit 160 and the prediction units 120 and 125, such as residual coefficient information and block type information of a coding unit, prediction mode information, partition unit information, prediction unit information and transmission unit information, motion vector information, reference frame information, block interpolation information, and filtering information.
[84] The entropy encoding unit 165 may entropy-encode the coefficient values of the coding unit input from the rearrangement unit 160.
[85] The inverse quantization unit 140 and the inverse transform unit 145 inverse-quantize the values quantized by the quantization unit 135 and inverse-transform the values transformed by the transform unit 130. The residual generated by the inverse quantization unit 140 and the inverse transform unit 145 may be combined with the prediction unit predicted through the motion estimation unit, the motion compensation unit, and the intra prediction unit included in the prediction units 120 and 125 to generate a reconstructed block.
[86] The filter unit 150 may include at least one of a deblocking filter, an offset correction unit, or an ALF (Adaptive Loop Filter).
[87] The deblocking filter may remove block distortion caused by boundaries between blocks in the reconstructed picture. To determine whether to perform deblocking, whether to apply the deblocking filter to the current block may be decided based on the pixels included in several columns or rows of the block. When applying the deblocking filter to a block, a strong filter or a weak filter may be applied depending on the required deblocking filtering strength. In addition, when applying the deblocking filter, horizontal-direction filtering and vertical-direction filtering may be processed in parallel when performing vertical filtering and horizontal filtering.
[88] The offset correction unit may correct the offset between the deblocked image and the original image in units of pixels. To perform offset correction for a specific picture, a method of dividing the pixels included in the image into a certain number of regions, determining the region in which the offset is to be performed, and applying the offset to that region, or a method of applying the offset in consideration of the edge information of each pixel, may be used.
[89] ALF (Adaptive Loop Filtering) may be performed based on a value obtained by comparing the filtered reconstructed image with the original image. After dividing the pixels included in the image into predetermined groups, one filter to be applied to each group may be determined, and filtering may be performed differentially per group. Information on whether to apply the ALF may be transmitted per coding unit (CU) for the luma signal, and the shape and filter coefficients of the ALF filter to be applied may vary per block. Alternatively, an ALF filter of the same form (fixed form) may be applied regardless of the characteristics of the target block.
[90] The memory 155 may store the reconstructed block or picture computed through the filter unit 150, and the stored reconstructed block or picture may be provided to the prediction units 120 and 125 when inter prediction is performed.
[91] FIG. 2 is a block diagram of an image decoder according to an embodiment of the present invention.
[92] Referring to FIG. 2, the image decoder 200 may include an entropy decoding unit 210, a rearrangement unit 215, an inverse quantization unit 220, an inverse transform unit 225, prediction units 230 and 235, a filter unit 240, and a memory 245.
[93] When an image bitstream is input from the image encoder, the input bitstream may be decoded by a procedure opposite to that of the image encoder.
[94] The entropy decoding unit 210 may perform entropy decoding by a procedure opposite to that in which entropy encoding was performed by the entropy encoding unit of the image encoder. For example, corresponding to the method performed by the image encoder, various methods such as Exponential Golomb, CAVLC (Context-Adaptive Variable Length Coding), and CABAC (Context-Adaptive Binary Arithmetic Coding) may be applied.
[95] The entropy decoding unit 210 may decode information related to the intra prediction and inter prediction performed by the encoder.
[96] The rearrangement unit 215 may rearrange the bitstream entropy-decoded by the entropy decoding unit 210 based on the rearrangement method of the encoding unit. The coefficients expressed in one-dimensional vector form may be reconstructed into coefficients in two-dimensional block form and rearranged. The rearrangement unit 215 may receive information related to the coefficient scanning performed by the encoding unit and perform rearrangement through a method of inverse scanning based on the scanning order performed by that encoding unit.
[97] The inverse quantization unit 220 may perform inverse quantization based on the quantization parameters provided by the encoder and the coefficient values of the rearranged block.
[98] The inverse transform unit 225 may perform, on the quantization result produced by the image encoder, the inverse of the transform performed by the transform unit, i.e., inverse DCT or inverse DST. Here, the DCT transform core may include at least one of DCT2 or DCT8, and the DST transform core may include DST7. Alternatively, when the transform was skipped in the image encoder, the inverse transform unit 225 may also not perform the inverse transform. The inverse transform may be performed based on the transmission unit determined by the image encoder. In the inverse transform unit 225 of the image decoder, a transform technique (e.g., DCT or DST) may be selectively performed according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction.
[99] The prediction units 230 and 235 may generate a prediction block based on the prediction-block generation related information provided by the entropy decoding unit 210 and the previously decoded block or picture information provided by the memory 245.
[100] As described above, in the same way as the operation in the image encoder, when performing intra prediction, if the size of the prediction unit and the size of the transform unit are the same, intra prediction for the prediction unit is performed based on the pixels to the left of, to the top-left of, and above the prediction unit; but if the size of the prediction unit and the size of the transform unit differ when performing intra prediction, intra prediction may be performed using reference pixels based on the transform unit. In addition, intra prediction using NxN partitioning may be used only for the smallest coding unit.
[101] The prediction units 230 and 235 may include a prediction-unit determination unit, an inter prediction unit, and an intra prediction unit. The prediction-unit determination unit may receive various information input from the entropy decoding unit 210, such as prediction unit information, prediction mode information of the intra prediction method, and motion-prediction related information of the inter prediction method, identify the prediction unit in the current coding unit, and determine whether the prediction unit performs inter prediction or intra prediction. The inter prediction unit 230 may perform inter prediction on the current prediction unit based on information included in at least one of the picture preceding or following the current picture containing the current prediction unit, using the information required for inter prediction of the current prediction unit provided by the image encoder. Alternatively, inter prediction may be performed based on information of a partial pre-reconstructed region within the current picture containing the current prediction unit.
[102] To perform inter prediction, it may be determined, on a per-coding-unit basis, which of skip mode, merge mode, motion vector prediction mode (AMVP mode), or intra block copy mode is the motion prediction method of the prediction unit included in the corresponding coding unit.
[103] The intra prediction unit 235 may generate a prediction block based on pixel information in the current picture. When the prediction unit is one on which intra prediction was performed, intra prediction may be performed based on the intra prediction mode information of the prediction unit provided by the image encoder. The intra prediction unit 235 may include an AIS (Adaptive Intra Smoothing) filter, a reference pixel interpolation unit, and a DC filter. The AIS filter performs filtering on the reference pixels of the current block, and whether to apply the filter may be determined according to the prediction mode of the current prediction unit. AIS filtering may be performed on the reference pixels of the current block using the prediction mode of the prediction unit and the AIS filter information provided by the image encoder. When the prediction mode of the current block is a mode in which AIS filtering is not performed, the AIS filter may not be applied.
[104] When the prediction mode of the prediction unit performs intra prediction based on pixel values obtained by interpolating the reference pixels, the reference pixel interpolation unit may interpolate the reference pixels to generate reference pixels in sub-integer pixel units. When the prediction mode of the current prediction unit generates a prediction block without interpolating the reference pixels, the reference pixels may not be interpolated. The DC filter may generate a prediction block through filtering when the prediction mode of the current block is the DC mode.
[105] The reconstructed block or picture may be provided to the filter unit 240. The filter unit 240 may include a deblocking filter, an offset correction unit, and an ALF.
[106] Information on whether a deblocking filter was applied to the corresponding block or picture and, if so, information on whether a strong filter or a weak filter was applied, may be provided by the image encoder. The deblocking filter of the image decoder may receive the deblocking-filter related information provided by the image encoder, and the image decoder may perform deblocking filtering on the corresponding block.
[107] The offset correction unit may perform offset correction on the reconstructed image based on the type of offset correction applied to the image during encoding, offset value information, and the like.
[108] The ALF may be applied to a coding unit based on ALF application information, ALF coefficient information, and the like provided by the encoder. Such ALF information may be provided included in a specific parameter set.
[109] The memory 245 may store the reconstructed picture or block so that it can be used as a reference picture or reference block, and may also provide the reconstructed picture to an output unit.
[110]
[111] FIG. 3 illustrates a basic coding tree unit according to an embodiment of the present invention.
[112] A coding block of the maximum size may be defined as a coding tree block. One picture is partitioned into a plurality of coding tree units (CTUs). A coding tree unit is a coding unit of the maximum size and may also be called an LCU (Largest Coding Unit). FIG. 3 illustrates an example in which one picture is partitioned into a plurality of coding tree units.
[113] The size of the coding tree unit may be defined at the picture level or the sequence level. To this end, information indicating the size of the coding tree unit may be signaled through a picture parameter set or a sequence parameter set.
[114] For example, the size of the coding tree unit for all pictures in a sequence may be set to 128x128. Alternatively, either 128x128 or 256x256 may be chosen as the size of the coding tree unit at the picture level. For example, the size of the coding tree unit may be set to 128x128 in a first picture and to 256x256 in a second picture.
[115] A coding tree unit may be partitioned to generate coding blocks. A coding block represents a basic unit for encoding/decoding processing. For example, prediction or transform may be performed per coding block, or a prediction encoding mode may be determined per coding block. Here, the prediction encoding mode represents a method of generating a prediction image. For example, the prediction encoding mode may include intra prediction, inter prediction, current picture referencing (CPR, or intra block copy (IBC)), or combined prediction. For a coding block, a prediction block for the coding block may be generated using at least one of intra prediction, inter prediction, current picture referencing, or combined prediction.
[116] Information indicating the prediction encoding mode of the current block may be signaled through the bitstream. For example, the information may be a 1-bit flag indicating whether the prediction encoding mode is intra mode or inter mode. Current picture referencing or combined prediction may be available only when the prediction encoding mode of the current block is determined to be inter mode.
[117] Current picture referencing sets the current picture as a reference picture and obtains the prediction block of the current block from a region within the current picture whose encoding/decoding has already been completed. Here, the current picture means the picture containing the current block. Information indicating whether current picture referencing is applied to the current block may be signaled through the bitstream. For example, the information may be a 1-bit flag. When the flag is true, the prediction encoding mode of the current block may be determined as current picture referencing, and when the flag is false, the prediction mode of the current block may be determined as inter prediction.
[118] Alternatively, the prediction encoding mode of the current block may be determined based on a reference picture index. For example, when the reference picture index indicates the current picture, the prediction encoding mode of the current block may be determined as current picture referencing. When the reference picture index indicates a picture other than the current picture, the prediction encoding mode of the current block may be determined as inter prediction. That is, current picture referencing is a prediction method using information of a region whose encoding/decoding has been completed within the current picture, and inter prediction is a prediction method using information of another picture whose encoding/decoding has been completed.
[119] Combined prediction represents an encoding mode combining two or more of intra prediction, inter prediction, and current picture referencing. For example, when combined prediction is applied, a first prediction block may be generated based on one of intra prediction, inter prediction, or current picture referencing, and a second prediction block may be generated based on another. When the first and second prediction blocks are generated, a final prediction block may be generated through an averaging or weighted-sum operation of the first and second prediction blocks. Information indicating whether combined prediction is applied may be signaled through the bitstream. The information may be a 1-bit flag.
[120] FIG. 4 illustrates various partition types of a coding block.
[121] A coding block may be partitioned into a plurality of coding blocks based on quad tree partitioning, binary tree partitioning, or triple tree partitioning. A partitioned coding block may in turn be partitioned again into a plurality of coding blocks based on quad tree partitioning, binary tree partitioning, or triple tree partitioning.
[122] Quad tree partitioning is a partitioning technique that partitions the current block into four blocks. As a result of quad tree partitioning, the current block may be partitioned into four square partitions (see 'SPLIT_QT' in FIG. 4 (a)).
[123] Binary tree partitioning is a partitioning technique that partitions the current block into two blocks. Partitioning the current block into two blocks along the vertical direction (i.e., using a vertical line crossing the current block) may be called vertical-direction binary tree partitioning, and partitioning the current block into two blocks along the horizontal direction (i.e., using a horizontal line crossing the current block) may be called horizontal-direction binary tree partitioning. As a result of binary tree partitioning, the current block may be partitioned into two non-square partitions. 'SPLIT_BT_VER' in FIG. 4 (b) shows the result of vertical-direction binary tree partitioning, and 'SPLIT_BT_HOR' in FIG. 4 (c) shows the result of horizontal-direction binary tree partitioning.
[124] Triple tree partitioning is a partitioning technique that partitions the current block into three blocks. Partitioning the current block into three blocks along the vertical direction (i.e., using two vertical lines crossing the current block) may be called vertical-direction triple tree partitioning, and partitioning the current block into three blocks along the horizontal direction (i.e., using two horizontal lines crossing the current block) may be called horizontal-direction triple tree partitioning. As a result of triple tree partitioning, the current block may be partitioned into three non-square partitions. Here, the width/height of the partition located at the center of the current block may be twice the width/height of the other partitions. 'SPLIT_TT_VER' in FIG. 4 (d) shows the result of vertical-direction triple tree partitioning, and 'SPLIT_TT_HOR' in FIG. 4 (e) shows the result of horizontal-direction triple tree partitioning.
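The partition geometries described above can be sketched as follows; the mode strings are taken from the figure labels, and blocks are (x, y, width, height) tuples. This is an illustrative sketch of the geometry only, not normative partitioning logic.

```python
# Geometry of the split types described above. QT: four squares; BT: two
# halves (vertical or horizontal); TT: three parts where the center partition
# is twice as wide/tall as the outer two.
def split_block(x, y, w, h, mode):
    if mode == "QT":
        return [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                (x, y + h // 2, w // 2, h // 2),
                (x + w // 2, y + h // 2, w // 2, h // 2)]
    if mode == "BT_VER":
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if mode == "BT_HOR":
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if mode == "TT_VER":
        return [(x, y, w // 4, h), (x + w // 4, y, w // 2, h),
                (x + 3 * w // 4, y, w // 4, h)]
    if mode == "TT_HOR":
        return [(x, y, w, h // 4), (x, y + h // 4, w, h // 2),
                (x, y + 3 * h // 4, w, h // 4)]
    raise ValueError(mode)
```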
[125] The number of times a coding tree unit is partitioned may be defined as the partitioning depth. The maximum partitioning depth of a coding tree unit may be determined at the sequence or picture level. Accordingly, the maximum partitioning depth of a coding tree unit may differ per sequence or picture.
[126] Alternatively, the maximum partitioning depth for each of the partitioning techniques may be determined individually. For example, the maximum partitioning depth at which quad tree partitioning is allowed may differ from the maximum partitioning depth at which binary tree partitioning and/or triple tree partitioning is allowed.
[127] The encoder may signal, through the bitstream, information indicating at least one of the partition type or the partitioning depth of the current block. The decoder may determine the partition type and partitioning depth of the coding tree unit based on the information parsed from the bitstream.
[128] FIG. 5 illustrates partitioning aspects of a coding tree unit.
[129] Partitioning a coding block using partitioning techniques such as quad tree partitioning, binary tree partitioning, and/or triple tree partitioning may be called multi tree partitioning.
[130] Coding blocks generated by applying multi tree partitioning to a coding block may be called lower coding blocks. When the partitioning depth of a coding block is k, the partitioning depth of the lower coding blocks is set to k+1.
[131] Conversely, for coding blocks of partitioning depth k+1, the coding block of partitioning depth k may be called the upper coding block.
[132] The partition type of the current coding block may be determined based on at least one of the partition type of the upper coding block or the partition type of a neighboring coding block. Here, the neighboring coding block is adjacent to the current coding block and may include at least one of the top neighboring block, the left neighboring block, or a neighboring block adjacent to the top-left corner of the current coding block. Here, the partition type may include at least one of whether quad tree partitioning is applied, whether binary tree partitioning is applied, the binary tree partitioning direction, whether triple tree partitioning is applied, or the triple tree partitioning direction.
[133] To determine the partition type of a coding block, information indicating whether the coding block is partitioned may be signaled through the bitstream. The information is a 1-bit flag 'split_cu_flag', and the flag being true indicates that the coding block is partitioned by a multi tree partitioning technique.
[134] When split_cu_flag is true, information indicating whether the coding block is quad tree partitioned may be signaled through the bitstream. The information is a 1-bit flag split_qt_flag, and when the flag is true, the coding block may be partitioned into four blocks.
[135] For example, in the example shown in FIG. 5, as the coding tree unit is quad tree partitioned, four coding blocks of partitioning depth 1 are generated. It is also shown that quad tree partitioning is applied again to the first and fourth of the four coding blocks generated as a result of the quad tree partitioning. As a result, four coding blocks of partitioning depth 2 may be generated.
[136] In addition, by applying quad tree partitioning again to a coding block of partitioning depth 2, a coding block of partitioning depth 3 may be generated.
[137] When quad tree partitioning is not applied to a coding block, whether to perform binary tree partitioning or triple tree partitioning on the coding block may be determined in consideration of at least one of the size of the coding block, whether the coding block is located at a picture boundary, the maximum partitioning depth, or the partition type of a neighboring block. When it is determined that binary tree partitioning or triple tree partitioning is to be performed on the coding block, information indicating the partitioning direction may be signaled through the bitstream. The information may be a 1-bit flag mtt_split_cu_vertical_flag. Based on the flag, whether the partitioning direction is vertical or horizontal may be determined. Additionally, information indicating which of binary tree partitioning or triple tree partitioning is applied to the coding block may be signaled through the bitstream. The information may be a 1-bit flag mtt_split_cu_binary_flag. Based on the flag, whether binary tree partitioning or triple tree partitioning is applied to the coding block may be determined.
[138] For example, in the example shown in FIG. 5, vertical-direction binary tree partitioning is applied to a coding block of partitioning depth 1; vertical-direction triple tree partitioning is applied to the left coding block among the coding blocks generated as a result of that partitioning, and vertical-direction binary tree partitioning is applied to the right coding block.
[139]
[140] Inter prediction is a prediction encoding mode that predicts the current block using information of a previous picture. For example, a block at the same position as the current block in the previous picture (hereinafter, the collocated block) may be set as the prediction block of the current block. Hereinafter, a prediction block generated based on the block at the same position as the current block is called the collocated prediction block.
[141] On the other hand, if an object that existed in the previous picture has moved to a different position in the current picture, the current block can be effectively predicted using the motion of the object. For example, if the moving direction and size of the object can be known by comparing the previous picture with the current picture, a prediction block (or prediction image) of the current block can be generated in consideration of the motion information of the object. Hereinafter, a prediction block generated using motion information may be called a motion prediction block.
[142] A residual block may be generated by subtracting the prediction block from the current block. At this time, if motion of the object exists, the energy of the residual block can be reduced by using the motion prediction block instead of the collocated prediction block, and accordingly, the compression performance of the residual block can be improved.
[143] As above, generating a prediction block using motion information may be called motion compensation prediction. In most inter prediction, a prediction block can be generated based on motion compensation prediction.
[144] Motion information may include at least one of a motion vector, a reference picture index, a prediction direction, or a bidirectional weight index. The motion vector indicates the moving direction and size of the object. The reference picture index specifies the reference picture of the current block among the reference pictures included in the reference picture list. The prediction direction indicates one of unidirectional L0 prediction, unidirectional L1 prediction, or bidirectional prediction (L0 prediction and L1 prediction). Depending on the prediction direction of the current block, at least one of the motion information in the L0 direction or the motion information in the L1 direction may be used. The bidirectional weight index specifies the weight applied to the L0 prediction block and the weight applied to the L1 prediction block.
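The fields enumerated above can be grouped into a small record for illustration; the field names and the convention of using a negative reference index to mean "unused" are assumptions, not normative.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MotionInfo:
    """Fields of the motion information enumerated above (names illustrative)."""
    mv_l0: Tuple[int, int] = (0, 0)   # motion vector, L0 direction
    mv_l1: Tuple[int, int] = (0, 0)   # motion vector, L1 direction
    ref_idx_l0: int = -1              # reference picture index in list L0 (-1 = unused)
    ref_idx_l1: int = -1              # reference picture index in list L1 (-1 = unused)
    bcw_idx: int = 0                  # bidirectional weight index

    @property
    def pred_dir(self) -> str:
        # Unidirectional L0, unidirectional L1, or bidirectional prediction.
        if self.ref_idx_l0 >= 0 and self.ref_idx_l1 >= 0:
            return "BI"
        return "L0" if self.ref_idx_l0 >= 0 else "L1"
```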
[145] FIG. 6 is a flowchart of an inter prediction method according to an embodiment of the present invention.
[146] Referring to FIG. 6, the inter prediction method includes determining the inter prediction mode of the current block (S601), obtaining the motion information of the current block according to the determined inter prediction mode (S602), and performing motion compensation prediction for the current block based on the obtained motion information (S603).
[147] Here, the inter prediction mode represents various techniques for determining the motion information of the current block, and may include inter prediction modes using translation motion information and inter prediction modes using affine motion information. For example, the inter prediction modes using translation motion information may include the merge mode and the motion vector prediction mode, and the inter prediction modes using affine motion information may include the affine merge mode and the affine motion vector prediction mode. The motion information of the current block may be determined, according to the inter prediction mode, based on a block neighboring the current block or information parsed from the bitstream.
[148] The motion information of the current block may be derived from the motion information of a block other than the current block. Here, the other block may be a block encoded/decoded by inter prediction before the current block. Setting the motion information of the current block to be the same as the motion information of the other block may be defined as the merge mode. Also, setting the motion vector of the other block as the prediction value of the motion vector of the current block may be defined as the motion vector prediction mode.
[149] FIG. 7 is a flowchart of a process of deriving the motion information of the current block under the merge mode.
[150] A merge candidate of the current block may be derived (S701). The merge candidate of the current block may be derived from a block encoded/decoded by inter prediction before the current block.
[151] FIG. 8 illustrates candidate blocks used to derive a merge candidate.
[152] The candidate blocks may include at least one of neighboring blocks containing a sample adjacent to the current block or non-neighboring blocks containing a sample not adjacent to the current block. Hereinafter, the samples that determine the candidate blocks are defined as reference samples. Also, a reference sample adjacent to the current block is called a neighboring reference sample, and a reference sample not adjacent to the current block is called a non-neighboring reference sample.
[153] A neighboring reference sample may be included in the column adjacent to the leftmost column of the current block or the row adjacent to the topmost row of the current block. For example, when the coordinates of the top-left sample of the current block are (0, 0), at least one of a block containing the reference sample at position (-1, H-1), a block containing the reference sample at position (W-1, -1), a block containing the reference sample at position (W, -1), a block containing the reference sample at position (-1, H), or a block containing the reference sample at position (-1, -1) may be used as a candidate block. Referring to the figure, the neighboring blocks of index 0 to index 4 may be used as candidate blocks.
[154] A non-neighboring reference sample is a sample for which at least one of the x-axis distance or the y-axis distance from a reference sample adjacent to the current block has a predefined value. For example, at least one of a block containing a reference sample whose x-axis distance from the left reference sample is a predefined value, a block containing a non-neighboring sample whose y-axis distance from the top reference sample is a predefined value, or a block containing a non-neighboring sample whose x-axis and y-axis distances from the top-left reference sample are predefined values may be used as a candidate block. The predefined value may be a natural number such as 4, 8, 12, or 16. Referring to the figure, at least one of the blocks of index 5 to 26 may be used as a candidate block.
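The reference-sample positions above can be sketched as position generators. The adjacent positions follow the coordinates in the text; the non-adjacent offsets are a hedged illustration of "a predefined x/y distance (e.g., a multiple of 4)" and do not reproduce the exact 22 non-adjacent blocks of the figure.

```python
# Reference-sample positions for the five adjacent candidate blocks
# (indices 0..4 in the figure), with the block's top-left sample at (0, 0).
def adjacent_candidate_positions(w, h):
    return [(-1, h - 1), (w - 1, -1), (w, -1), (-1, h), (-1, -1)]

# Non-adjacent reference samples lie a predefined x/y distance (a natural
# number such as 4, 8, 12, 16) from the adjacent ones. A hedged sketch; the
# exact pattern of indices 5..26 in the figure is not reproduced.
def non_adjacent_offsets(w, h, steps=(4, 8, 12, 16)):
    out = []
    for n in steps:
        out += [(-1 - n, h - 1),      # further left of the left sample
                (w - 1, -1 - n),      # further above the top sample
                (-1 - n, -1 - n)]     # diagonal from the top-left sample
    return out
```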
[155] A sample not located on the same vertical, horizontal, or diagonal line as a neighboring reference sample may also be set as a non-neighboring reference sample.
[156] FIG. 9 illustrates the positions of reference samples.
[157] As in the example shown in FIG. 9, the x-coordinates of the top non-neighboring reference samples may be set differently from the x-coordinates of the top neighboring reference samples. For example, when the position of the top neighboring reference sample is (W-1, -1), the position of the top non-neighboring reference sample separated by N along the y-axis from the top neighboring reference sample may be set to ((W/2)-1, -1-N), and the position of the top non-neighboring reference sample separated by 2N along the y-axis from the top neighboring reference sample may be set to (0, -1-2N). That is, the position of a non-adjacent reference sample may be determined based on the position of an adjacent reference sample and the distance from the adjacent reference sample.
[158] Hereinafter, among the candidate blocks, a candidate block containing a neighboring reference sample is called a neighboring block, and a block containing a non-neighboring reference sample is called a non-neighboring block.
[159] When the distance between the current block and a candidate block is greater than or equal to a threshold, the candidate block may be set as unavailable as a merge candidate. The threshold may be determined based on the size of the coding tree unit. For example, the threshold may be set to the height of the coding tree unit, or to a value obtained by adding or subtracting an offset to/from the height of the coding tree unit. The offset N is a value predefined in the encoder and the decoder, and may be a natural number such as 4, 8, or 16.
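The distance-based availability rule above can be sketched as a predicate. The exact form of the threshold was garbled in the source, so the "CTU height plus offset" default here is an assumption for illustration.

```python
# Sketch of the availability check described above: a candidate block whose
# distance from the current block exceeds a CTU-size-derived threshold is
# treated as unavailable as a merge candidate. The threshold form
# (ctu_size + offset) is an assumption drawn from the text.
def candidate_available(cur_coord: int, cand_coord: int,
                        ctu_size: int, offset: int = 4) -> bool:
    threshold = ctu_size + offset
    return abs(cur_coord - cand_coord) <= threshold
```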
[160] When the difference between the x-axis coordinate of the current block and the x-axis coordinate of a sample included in a candidate block is greater than the threshold, the candidate block may be determined to be unavailable as a merge candidate.
[161] Alternatively, a candidate block that does not belong to the same coding tree unit as the current block may be set as unavailable as a merge candidate. For example, when a reference sample lies beyond the top boundary of the coding tree unit to which the current block belongs, the candidate block containing that reference sample may be set as unavailable as a merge candidate.
[162] If the top boundary of the current block is adjacent to the top boundary of the coding tree unit, many candidate blocks would be determined to be unavailable as merge candidates, which may reduce the encoding/decoding efficiency of the current block. To solve this problem, the candidate blocks may be set such that the number of candidate blocks located to the left of the current block is greater than the number of candidate blocks located above the current block.
[163] FIG. 10 illustrates candidate blocks used to derive a merge candidate.
[164] As in the example shown in FIG. 10, the top blocks belonging to the top N block rows of the current block and the left blocks belonging to the left M block columns of the current block may be set as candidate blocks. At this time, by setting M greater than N, the number of left candidate blocks may be set greater than the number of top candidate blocks.
[165] For example, the difference between the y-axis coordinate of the reference sample in the current block and the y-axis coordinate of a top block usable as a candidate block may be set not to exceed N times the height of the current block. Also, the difference between the x-axis coordinate of the reference sample in the current block and the x-axis coordinate of a left block usable as a candidate block may be set not to exceed M times the width of the current block.
[166] For example, in the example shown in FIG. 10, blocks belonging to the top two block rows of the current block and blocks belonging to the left five block columns of the current block are set as candidate blocks.
[167] As another example, when a candidate block does not belong to the same coding tree unit as the current block, a merge candidate may be derived using, instead of that candidate block, a block belonging to the same coding tree unit as the current block or a block containing a reference sample adjacent to the boundary of that coding tree unit.
[168] FIG. 11 illustrates an example in which the position of a reference sample is changed.
[169] When a reference sample is included in a coding tree unit different from the current block and the reference sample is not adjacent to the boundary of that coding tree unit, a candidate block may be determined using a reference sample adjacent to the boundary of the coding tree unit instead of that reference sample.
[170] For example, in the examples shown in FIGS. 11 (a) and (b), when the top boundary of the current block and the top boundary of the coding tree unit touch, the reference samples above the current block belong to a coding tree unit different from the current block. Among the reference samples belonging to the coding tree unit different from the current block, a reference sample not adjacent to the top boundary of the coding tree unit may be replaced with a sample adjacent to the top boundary of the coding tree unit.
[171] For example, as in the example shown in FIG. 11 (a), the reference sample at position 6 may be replaced with the sample at position 6' located at the top boundary of the coding tree unit, and as in the example shown in FIG. 11 (b), the reference sample at position 15 may be replaced with the sample at position 15' located at the top boundary of the coding tree unit. At this time, the y-coordinate of the replacement sample is changed to a position adjacent to the coding tree unit, and the x-coordinate of the replacement sample may be set the same as that of the reference sample. For example, the sample at position 6' may have the same x-coordinate as the sample at position 6, and the sample at position 15' may have the same x-coordinate as the sample at position 15.
[172] Alternatively, a value obtained by adding or subtracting an offset to/from the x-coordinate of the reference sample may be set as the x-coordinate of the replacement sample. For example, when the x-coordinates of a neighboring reference sample and a non-neighboring reference sample located above the current block are the same, a value obtained by adding or subtracting an offset to/from the x-coordinate of the reference sample may be set as the x-coordinate of the replacement sample. This is to prevent the replacement sample that replaces the non-neighboring reference sample from taking the same position as another non-neighboring reference sample or a neighboring reference sample.
[173] FIG. 12 illustrates an example in which the position of a reference sample is changed.
[174] In replacing a reference sample that is included in a coding tree unit different from the current block and is not adjacent to the boundary of the coding tree unit with a sample located at the boundary of the coding tree unit, a value obtained by adding or subtracting an offset to/from the x-coordinate of the reference sample may be set as the x-coordinate of the replacement sample.
[175] For example, in the example shown in FIG. 12, the reference sample at position 6 and the reference sample at position 15 may each be replaced with the sample at position 6' and the sample at position 15', whose y-coordinates are the same as the row adjacent to the top boundary of the coding tree unit. At this time, the x-coordinate of the sample at position 6' may be set to a value obtained by subtracting W/2 from the x-coordinate of the reference sample at position 6, and the x-coordinate of the sample at position 15' may be set to a value obtained by subtracting an offset from the x-coordinate of the reference sample at position 15.
[176] Unlike the examples shown in FIGS. 11 and 12, the y-coordinate of the row located above the topmost row of the current block, or the y-coordinate of the top boundary of the coding tree unit, may be set as the y-coordinate of the replacement sample.
[177] Although not shown, a sample replacing a reference sample may also be determined based on the left boundary of the coding tree unit. For example, when a reference sample is not included in the same coding tree unit as the current block and is also not adjacent to the left boundary of the coding tree unit, the reference sample may be replaced with a sample adjacent to the left boundary of the coding tree unit. At this time, the replacement sample may have the same y-coordinate as the reference sample, or may have a y-coordinate obtained by adding or subtracting an offset to/from the y-coordinate of the reference sample.
[178] Thereafter, a block containing the replacement sample may be set as a candidate block, and a merge candidate of the current block may be derived based on the candidate block.
[179] A merge candidate may also be derived from a temporal neighboring block included in a picture different from the current block. For example, a merge candidate may be derived from a collocated block included in a collocated picture. One of the reference pictures included in the reference picture list may be set as the collocated picture. Index information identifying the collocated picture among the reference pictures may be signaled through the bitstream. Alternatively, a reference picture with a predefined index among the reference pictures may be determined as the collocated picture.
[180] The motion information of a merge candidate may be set the same as the motion information of the candidate block. For example, at least one of the motion vector, the reference picture index, the prediction direction, or the bidirectional weight index of the candidate block may be set as the motion information of the merge candidate.
[181] A merge candidate list including the merge candidates may be generated (S702).
[182] Indices may be assigned to the merge candidates in the merge candidate list according to a predetermined order. For example, indices may be assigned in the order of the merge candidate derived from the left neighboring block, the merge candidate derived from the top neighboring block, the merge candidate derived from the top-right neighboring block, the merge candidate derived from the bottom-left neighboring block, the merge candidate derived from the top-left neighboring block, and the merge candidate derived from the temporal neighboring block.
[183] When a plurality of merge candidates are included in the merge candidate list, at least one of the plurality of merge candidates may be selected (S703). Specifically, information for specifying one of the plurality of merge candidates may be signaled through the bitstream. For example, information merge_idx, indicating the index of one of the merge candidates included in the merge candidate list, may be signaled through the bitstream.
[184]
[185] When the number of merge candidates included in the merge candidate list is smaller than a threshold, a motion information candidate included in the motion information table may be added to the merge candidate list as a merge candidate. Here, the threshold may be the maximum number of merge candidates that the merge candidate list can include, or a value obtained by subtracting an offset from the maximum number of merge candidates. The offset may be a natural number such as 1 or 2.
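The list-filling step above can be sketched as follows. The most-recent-first traversal order and the value-based duplicate comparison are illustrative assumptions; candidates here are opaque hashable values standing in for motion information.

```python
# Sketch of adding motion-information candidates from the table to the merge
# candidate list while the list holds fewer than (max_merge - offset)
# candidates, skipping duplicates. Traversal order and duplicate comparison
# are assumptions for illustration.
def fill_merge_list(merge_list, table, max_merge, offset=1):
    threshold = max_merge - offset
    for cand in reversed(table):          # most recent table candidate first
        if len(merge_list) >= threshold:
            break
        if cand not in merge_list:        # simple value-based redundancy check
            merge_list.append(cand)
    return merge_list
```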
[186] The motion information table includes motion information candidates derived from blocks encoded/decoded based on inter prediction in the current picture. For example, the motion information of a motion information candidate included in the motion information table may be set the same as the motion information of the block encoded/decoded based on inter prediction. Here, the motion information may include at least one of a motion vector, a reference picture index, a prediction direction, or a bidirectional weight index.
[187] A motion information candidate included in the motion information table may also be called an inter region merge candidate or a prediction region merge candidate.
[188] The maximum number of motion information candidates that the motion information table can include may be predefined in the encoder and the decoder. For example, the maximum number of motion information candidates that the motion information table can include may be 1, 2, 3, 4, 5, 6, 7, 8, or more (e.g., 16).
[189] Alternatively, information indicating the maximum number of motion information candidates that the motion information table can include may be signaled through the bitstream. The information may be signaled at the sequence, picture, or slice level. The information may indicate the maximum number of motion information candidates that the motion information table can include. Alternatively, the information may indicate the difference between the maximum number of motion information candidates that the motion information table can include and the maximum number of merge candidates that the merge candidate list can include.
[190] Alternatively, the maximum number of motion information candidates that the motion information table can include may be determined according to the size of the picture, the size of the slice, or the size of the coding tree unit.
[191] The motion information table may be initialized per picture, slice, tile, brick, coding tree unit, or coding tree unit line (row or column). For example, when a slice is initialized, the motion information table is also initialized, so that the motion information table may contain no motion information candidate.
[192] Alternatively, information indicating whether to initialize the motion information table may be signaled through the bitstream. The information may be signaled at the slice, tile, brick, or block level. Until the information indicates that the motion information table is to be initialized, the previously constructed motion information table may be used.
[193] Alternatively, information on an initial motion information candidate may be signaled through a picture parameter set or a slice header. Even if a slice is initialized, the motion information table may include the initial motion information candidate. Accordingly, the initial motion information candidate may be used even for the first block to be encoded/decoded in the slice.
[194] Alternatively, a motion information candidate included in the motion information table of the previous coding tree unit may be set as the initial motion information candidate. For example, among the motion information candidates included in the motion information table of the previous coding tree unit, the motion information candidate with the smallest index or the motion information candidate with the largest index may be set as the initial motion information candidate.
[195] Blocks are encoded/decoded according to the encoding/decoding order, and blocks encoded/decoded based on inter prediction may be sequentially set as motion information candidates according to the encoding/decoding order.
[196] 도 13은모션정보테이블의 업데이트양상을설명하기 위한도면이다.
[197] When inter prediction has been performed on the current block (S1301), a motion information candidate may be derived based on the current block (S1302). The motion information of the motion information candidate may be set identically to the motion information of the current block.
[198] When the motion information table is empty (S1303), the motion information candidate derived based on the current block may be added to the motion information table (S1304).
[199] When the motion information table already includes a motion information candidate (S1303), a redundancy check may be performed on the motion information of the current block (or the motion information candidate derived based on it) (S1305). The redundancy check is for determining whether the motion information of a motion information candidate previously stored in the motion information table is identical to the motion information of the current block. The redundancy check may be performed on all motion information candidates previously stored in the motion information table. Alternatively, the redundancy check may be performed on motion information candidates whose indices are above or below a threshold among the previously stored motion information candidates. Alternatively, the redundancy check may be performed on a predefined number of motion information candidates. For example, the two motion information candidates with the smallest indices or the two motion information candidates with the largest indices may be determined as redundancy check targets.
[200] When no motion information candidate having the same motion information as the current block is included, the motion information candidate derived based on the current block may be added to the motion information table (S1308). Whether motion information candidates are identical may be determined based on whether their motion information (e.g., motion vectors and/or reference picture indices) is identical.
[201] At this time, when the maximum number of motion information candidates is already stored in the motion information table (S1306), the oldest motion information candidate is deleted (S1307), and the motion information candidate derived based on the current block may be added to the motion information table (S1308). Here, the oldest motion information candidate may be the candidate with the largest index or the candidate with the smallest index.
[202] The motion information candidates may each be identified by an index. When a motion information candidate derived from the current block is added to the motion information table, the lowest index (e.g., 0) may be assigned to that candidate, and the indices of the previously stored motion information candidates may each be increased by 1. At this time, when the maximum number of motion information candidates was already stored in the motion information table, the candidate with the largest index is removed.
[203] Alternatively, when a motion information candidate derived from the current block is added to the motion information table, the largest index may be assigned to that candidate. For example, when the number of motion information candidates previously stored in the motion information table is smaller than the maximum value, an index equal to the number of previously stored candidates may be assigned to the candidate. Alternatively, when the number of previously stored motion information candidates equals the maximum value, an index of the maximum value minus 1 may be assigned to the candidate. In addition, the candidate with the smallest index is removed, and the indices of the remaining previously stored candidates are each decreased by 1.
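The update rules described above (redundancy check against stored candidates, index refresh of a duplicate, and first-in-first-out eviction when the table is full) can be sketched as follows. This is an illustrative sketch, not the specification's pseudocode; the names `MotionInfo` and `update_table` are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionInfo:
    mv: tuple       # motion vector (x, y)
    ref_idx: int    # reference picture index

def update_table(table, cand, max_size):
    """Append cand to the table; the newest candidate gets the largest index."""
    if cand in table:              # redundancy check against stored candidates
        table.remove(cand)         # same effect as re-assigning its index
    elif len(table) >= max_size:
        table.pop(0)               # FIFO: discard the oldest (smallest-index) entry
    table.append(cand)
    return table
```

For example, with `max_size=2`, adding a third distinct candidate evicts the oldest one, while re-adding a stored candidate moves it to the newest position.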
[204] FIG. 14 is a diagram showing an update aspect of the motion information table.
[205] It is assumed that, as the motion information candidate derived from the current block is added to the motion information table, the largest index is assigned to that candidate. It is also assumed that the maximum number of motion information candidates is already stored in the motion information table.
[206] When the motion information candidate mvCand derived from the current block is added to the motion information table HmvpCandList, the candidate HmvpCand[0] with the smallest index among the previously stored candidates may be deleted, and the indices of the remaining candidates may each be decreased by 1. In addition, the index of the motion information candidate mvCand derived from the current block may be set to the maximum value (n in the example shown in FIG. 14).
[207] When a motion information candidate identical to the one derived based on the current block has already been stored (S1305), the motion information candidate derived based on the current block may not be added to the motion information table (S1309).
[208] Alternatively, while adding the motion information candidate derived based on the current block to the motion information table, the previously stored candidate identical to it may be removed. In this case, the same effect is obtained as if the index of the previously stored candidate were newly updated.
[209] FIG. 15 is a diagram showing an example in which the index of a previously stored motion information candidate is updated.
[210] When the index of the previously stored candidate identical to the motion information candidate mvCand derived based on the current block is hIdx, the previously stored candidate may be deleted, and the indices of the candidates with indices larger than hIdx may each be decreased by 1. For example, in the example shown in FIG. 15, HmvpCand[2], which is identical to mvCand, is deleted from the motion information table HmvpCandList, and the indices from HmvpCand[3] to HmvpCand[n] are each decreased by 1.
[211] Then, the motion information candidate mvCand derived based on the current block may be added to the end of the motion information table.
[212] Alternatively, the index assigned to the previously stored candidate identical to the motion information candidate derived based on the current block may be updated. For example, the index of the previously stored candidate may be changed to the minimum value or the maximum value.
[213] The motion information of blocks included in a predetermined region may be set not to be added to the motion information table. For example, a motion information candidate derived based on the motion information of a block included in a merge processing region may not be added to the motion information table. Since no encoding/decoding order is defined for blocks included in a merge processing region, it is inappropriate to use the motion information of any one of them for inter prediction of another of those blocks. Accordingly, motion information candidates derived based on blocks included in a merge processing region may not be added to the motion information table.
[214] Alternatively, the motion information of a block smaller than a preset size may be set not to be added to the motion information table. For example, a motion information candidate derived based on the motion information of a coding block whose width or height is smaller than 4 or 8, or based on the motion information of a 4x4 coding block, may not be added to the motion information table.
[215] When motion compensation prediction has been performed per sub-block, a motion information candidate may be derived based on the motion information of a representative sub-block among the plurality of sub-blocks included in the current block. For example, when a sub-block merge candidate is used for the current block, a motion information candidate may be derived based on the motion information of the representative sub-block among the sub-blocks.
[216] The motion vectors of the sub-blocks may be derived in the following order. First, one of the merge candidates included in the merge candidate list of the current block is selected, and an initial shift vector shVector may be derived based on the motion vector of the selected merge candidate. Then, a shifted sub-block whose reference sample is located at (xColSb, yColSb) may be derived by adding the initial shift vector to the position (xSb, ySb) of the reference sample of each sub-block in the coding block (e.g., the top-left sample or the center sample). Equation 1 below is the formula for deriving the shifted sub-block.
[217] [Equation 1]
(xColSb, yColSb) = (xSb + shVector[0] >> 4, ySb + shVector[1] >> 4)
[218] Then, the motion vector of the collocated block corresponding to the center position of the sub-block including (xColSb, yColSb) may be set as the motion vector of the sub-block including (xSb, ySb).
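Equation 1 can be illustrated with the following sketch, under the assumption that shVector is stored in 1/16-pel units (which is what the `>> 4` shift down to integer sample units suggests); the function name is illustrative.

```python
def shifted_subblock_pos(x_sb, y_sb, sh_vector):
    """Derive the collocated (shifted) sub-block position per Equation 1.

    (x_sb, y_sb): reference sample position of the sub-block, in samples.
    sh_vector:    initial shift vector, assumed to be in 1/16-pel units.
    """
    x_col_sb = x_sb + (sh_vector[0] >> 4)  # >> 4 converts 1/16-pel to samples
    y_col_sb = y_sb + (sh_vector[1] >> 4)
    return x_col_sb, y_col_sb
```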
[219] The representative sub-block may mean the sub-block including the top-left sample, the center sample, the bottom-right sample, the top-right sample, or the bottom-left sample of the current block.
[220] FIG. 16 is a diagram showing positions of representative sub-blocks.
[221] (a) of FIG. 16 shows an example in which the sub-block located at the top-left of the current block is set as the representative sub-block, and (b) of FIG. 16 shows an example in which the sub-block located at the center of the current block is set as the representative sub-block. When motion compensation prediction has been performed per sub-block, the motion information candidate of the current block may be derived based on the motion vector of the sub-block including the top-left sample of the current block or the sub-block including the center sample of the current block.
[222] Whether to use the current block as a motion information candidate may also be determined based on the inter prediction mode of the current block. For example, a block encoded/decoded based on an affine motion model may be set as unavailable as a motion information candidate. Accordingly, even if the current block has been encoded/decoded by inter prediction, when the inter prediction mode of the current block is an affine prediction mode, the motion information table may not be updated based on the current block.
[223] Alternatively, whether to use the current block as a motion information candidate may be determined based on at least one of the motion vector resolution of the current block, whether the merge offset coding method is applied, whether combined prediction is applied, or whether triangular partitioning is applied. For example, in at least one of the case where the motion information resolution of the current block is 2 integer-pel or more, the case where combined prediction is applied to the current block, the case where triangular partitioning is applied to the current block, or the case where the merge offset coding method is applied to the current block, the current block may be set as unavailable as a motion information candidate.
[224] Alternatively, a motion information candidate may be derived based on the sub-block vector of at least one of the sub-blocks included in a block encoded/decoded based on an affine motion model. For example, the motion information candidate may be derived using the sub-block located at the top-left of the current block, the sub-block located at the center, or the sub-block located at the top-right. Alternatively, the average of the sub-block vectors of a plurality of sub-blocks may be set as the motion vector of the motion information candidate.
[225] Alternatively, a motion information candidate may be derived based on the average of the affine seed vectors of a block encoded/decoded based on an affine motion model. For example, the average of at least one of the first affine seed vector, the second affine seed vector, or the third affine seed vector of the current block may be set as the motion vector of the motion information candidate.
[226] Alternatively, a motion information table may be constructed per inter prediction mode. For example, at least one of a motion information table for blocks encoded/decoded by intra block copy, a motion information table for blocks encoded/decoded based on a translational motion model, or a motion information table for blocks encoded/decoded based on an affine motion model may be defined. According to the inter prediction mode of the current block, one of the plurality of motion information tables may be selected.
[227] FIG. 17 shows an example in which motion information tables are generated per inter prediction mode.
[228] When a block has been encoded/decoded based on a non-affine motion model, the motion information candidate mvCand derived based on the block may be added to the non-affine motion information table HmvpCandList. In contrast, when a block has been encoded/decoded based on an affine motion model, the motion information candidate mvAfCand derived based on the block may be added to the affine motion information table HmvpAfCandList.
[229] The affine seed vectors of a block encoded/decoded based on an affine motion model may be stored in the motion information candidate derived from that block. Accordingly, the motion information candidate may be used as a merge candidate for deriving the affine seed vectors of the current block.
[230] Alternatively, a motion information table may be constructed per motion vector resolution. For example, at least one of a motion information table for storing motion information with 1/16-pel motion vector resolution, a motion information table for storing motion information with 1/4-pel motion vector resolution, a motion information table for storing motion information with 1/2-pel motion vector resolution, a motion information table for storing motion information with integer-pel motion vector resolution, or a motion information table for storing motion information with 4-integer-pel motion vector resolution may be defined.
[231] FIG. 18 shows an example in which motion information tables are generated per motion vector resolution.
[232] When the motion vector resolution of a block is 1/4-pel, the motion information mvCand of the block may be stored in the quarter-pel motion information table HmvpQPCandList. In contrast, when the motion vector resolution of a block is integer-pel, the motion information mvCand of the block may be stored in the integer-pel motion information table HmvpIPCandList. When the motion vector resolution of a block is 4 integer-pel, the motion information mvCand of the block may be stored in the 4-integer-pel motion information table HmvpIPCandList.
[233] According to the motion vector resolution of the current block, a motion information table may be selected to derive the merge candidate of the current block. For example, when the motion vector resolution of the current block is 1/4-pel, the merge candidate of the current block may be derived using the quarter-pel motion information table HmvpQPCandList. In contrast, when the motion vector resolution of the current block is integer-pel, the merge candidate of the current block may be derived using the integer-pel motion information table HmvpIPCandList.
[234] Alternatively, the motion information of a block to which the merge offset coding method is applied may be stored in a separate motion information table.
[235] FIG. 19 shows an example in which the motion information of a block to which the merge offset coding method is applied is stored in a separate motion information table.
[236] When the merge offset vector coding method is not applied to a block, the motion information mvCand of the block may be stored in the motion information table HmvpCandList. In contrast, when the merge offset vector coding method is applied to a block, the motion information mvCand of the block may not be stored in HmvpCandList, but may instead be stored in the merge offset motion information table HmvpMMVDCandList.
[237] According to whether the merge offset vector coding method is applied to the current block, a motion information table may be selected. For example, when the merge offset coding method is not applied to the current block, the merge candidate of the current block may be derived using the motion information table HmvpCandList. In contrast, when the merge offset coding method is applied to the current block, the merge candidate of the current block may be derived using the merge offset motion information table HmvpMMVDCandList.
[238] An additional motion information table may also be defined besides the motion information table described above. In addition to the above-described motion information table (hereinafter, the first motion information table), a long-term motion information table (hereinafter, the second motion information table) may be defined. Here, the long-term motion information table includes long-term motion information candidates.
[239] When the first motion information table and the second motion information table are both empty, motion information candidates may first be added to the second motion information table. Only after the number of available motion information candidates in the second motion information table reaches the maximum number may motion information candidates be added to the first motion information table.
[240] Alternatively, one motion information candidate may be added to both the second motion information table and the first motion information table.
[241] At this time, the second motion information table whose construction has been completed may no longer be updated. Alternatively, the second motion information table may be updated when the decoded region reaches a predetermined ratio or more of the slice. Alternatively, the second motion information table may be updated every N coding tree unit lines.
[242] In contrast, the first motion information table may be updated whenever a block encoded/decoded by inter prediction occurs. However, motion information candidates added to the second motion information table may be set not to be used to update the first motion information table.
[243] Information for selecting either the first motion information table or the second motion information table may be signaled through a bitstream. When the number of merge candidates included in the merge candidate list is smaller than a threshold, the motion information candidates included in the motion information table indicated by the information may be added to the merge candidate list as merge candidates.
[244] Alternatively, a motion information table may be selected based on the size or shape of the current block, the inter prediction mode, whether bi-directional prediction is used, whether the motion vector is refined, or whether triangular partitioning is used.
[245] Alternatively, when the number of merge candidates included in the merge candidate list is still smaller than the maximum number even after adding the motion information candidates included in the first motion information table, the motion information candidates included in the second motion information table may be added to the merge candidate list.
[246] FIG. 20 is a diagram showing an example in which motion information candidates included in the long-term motion information table are added to the merge candidate list.
[247] When the number of merge candidates included in the merge candidate list is smaller than the maximum number, the motion information candidates included in the first motion information table HmvpCandList may be added to the merge candidate list. When the number of merge candidates in the merge candidate list is still smaller than the maximum number even after the candidates of the first motion information table have been added, the motion information candidates included in the long-term motion information table HmvpLTCandList may be added to the merge candidate list.
[248] Table 1 shows the process of adding the motion information candidates included in the long-term motion information table to the merge candidate list.
[249] [Table 1] (reproduced as an image in the original; it specifies the process of appending motion information candidates of the long-term motion information table HmvpLTCandList to the merge candidate list.)
[250] A motion information candidate may be set to include additional information besides motion information. For example, at least one of the size, shape, or partition information of a block may be additionally stored for a motion information candidate. When constructing the merge candidate list of the current block, only motion information candidates whose size, shape, or partition information is identical or similar to that of the current block may be used, or such candidates may be added to the merge candidate list first. Alternatively, a motion information table may be generated per block size, shape, or partition information. The merge candidate list of the current block may then be constructed using, among the plurality of motion information tables, the table matching the shape, size, or partition information of the current block.
[251] When the number of merge candidates included in the merge candidate list of the current block is smaller than a threshold, the motion information candidates included in the motion information table may be added to the merge candidate list as merge candidates. The addition process follows the order of the motion information candidates sorted by index in ascending or descending order. For example, the candidate with the largest index may be added to the merge candidate list of the current block first.
[252] When a motion information candidate included in the motion information table is to be added to the merge candidate list, a redundancy check may be performed between the motion information candidate and the merge candidates already stored in the merge candidate list. As a result of the redundancy check, a motion information candidate having the same motion information as an already stored merge candidate may not be added to the merge candidate list.
[253] For example, Table 2 shows the process by which a motion information candidate is added to the merge candidate list.
[254] [Table 2] (reproduced as an image in the original; it specifies the process of appending motion information candidates of HmvpCandList to the merge candidate list with a redundancy check.)
[255] The redundancy check may be performed on only some of the motion information candidates included in the motion information table. For example, the redundancy check may be performed only on candidates whose indices are above or below a threshold. Alternatively, the redundancy check may be performed only on the N candidates with the largest indices or the N candidates with the smallest indices. Alternatively, the redundancy check may be performed on only some of the merge candidates already stored in the merge candidate list. For example, the redundancy check may be performed only on merge candidates whose indices are above or below a threshold, or on merge candidates derived from blocks at specific positions. Here, the specific positions may include at least one of the left neighboring block, the top neighboring block, the top-right neighboring block, or the bottom-left neighboring block of the current block.
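The partial redundancy check described above can be sketched as follows: a motion information candidate is compared only against the most recently added merge candidates (here, the two with the largest indices) before being appended. The function name and list representation are illustrative assumptions.

```python
def append_with_partial_check(merge_list, cand, max_merge, check_count=2):
    """Append cand to merge_list unless it duplicates one of the last
    check_count merge candidates or the list is already full."""
    if len(merge_list) >= max_merge:
        return False                       # merge list already at capacity
    recent = merge_list[-check_count:]     # e.g. the two largest-index entries
    if cand in recent:                     # duplicate found among checked subset
        return False
    merge_list.append(cand)
    return True
```

Note that a duplicate of an *unchecked* (older) merge candidate would still be appended, which is exactly the trade-off of restricting the redundancy check to a subset.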
[256] FIG. 21 is a diagram showing an example in which the redundancy check is performed on only some of the merge candidates.
[257] When the motion information candidate HmvpCand[j] is to be added to the merge candidate list, the redundancy check may be performed between the motion information candidate and the two merge candidates with the largest indices, mergeCandList[NumMerge-2] and mergeCandList[NumMerge-1]. Here, NumMerge may represent the number of available spatial merge candidates and temporal merge candidates.
[258] Unlike the illustrated example, when the motion information candidate HmvpCand[j] is to be added to the merge candidate list, the redundancy check may be performed between the motion information candidate and at most two merge candidates with the smallest indices. For example, it may be checked whether mergeCandList[0] and mergeCandList[1] are identical to HmvpCand[j].
[259] Alternatively, the redundancy check may be performed only on merge candidates derived from specific positions. For example, the redundancy check may be performed on at least one of a merge candidate derived from a neighboring block located to the left of the current block or a merge candidate derived from a neighboring block located above the current block. When no merge candidate derived from a specific position exists in the merge candidate list, the motion information candidate may be added to the merge candidate list without a redundancy check.
[260] When the motion information candidate HmvpCand[j] is to be added to the merge candidate list, the redundancy check may be performed between the motion information candidate and the two merge candidates with the largest indices, mergeCandList[NumMerge-2] and mergeCandList[NumMerge-1]. Here, NumMerge may represent the number of available spatial merge candidates and temporal merge candidates.
[261] The redundancy check with the merge candidates may be performed on only some of the motion information candidates. For example, the redundancy check may be performed only on the N candidates with the largest indices or the N candidates with the smallest indices among the motion information candidates included in the motion information table. For example, the redundancy check may be performed only on motion information candidates whose index differs from the number of motion information candidates included in the motion information table by no more than a threshold. When the threshold is 2, the redundancy check may be performed only on the three motion information candidates with the largest index values among the candidates included in the motion information table. For motion information candidates other than those three, the redundancy check may be omitted. When the redundancy check is omitted, the motion information candidate may be added to the merge candidate list regardless of whether its motion information is identical to that of a merge candidate.
[262] Conversely, the redundancy check may be set to be performed only on motion information candidates whose index differs from the number of motion information candidates included in the motion information table by a threshold or more.
[263] The number of motion information candidates on which the redundancy check is performed may be predefined in the encoder and the decoder. For example, the threshold may be an integer such as 0, 1, or 2.
[264] Alternatively, the threshold may be determined based on at least one of the number of merge candidates included in the merge candidate list or the number of motion information candidates included in the motion information table.
[265] When a merge candidate identical to the first motion information candidate is found, the redundancy check against that merge candidate may be omitted when performing the redundancy check for the second motion information candidate.
[266] FIG. 22 is a diagram showing an example in which the redundancy check with a specific merge candidate is omitted.
[267] When the motion information candidate HmvpCand[i] with index i is to be added to the merge candidate list, a redundancy check is performed between the motion information candidate and the merge candidates already stored in the merge candidate list. At this time, when a merge candidate mergeCandList[j] identical to the motion information candidate HmvpCand[i] is found, HmvpCand[i] is not added to the merge candidate list, and a redundancy check may be performed between the motion information candidate HmvpCand[i-1] with index i-1 and the merge candidates. At this time, the redundancy check between the motion information candidate HmvpCand[i-1] and the merge candidate mergeCandList[j] may be omitted.
[268] For example, in the example shown in FIG. 22, HmvpCand[i] was determined to be identical to mergeCandList[2]. Accordingly, HmvpCand[i] is not added to the merge candidate list, and the redundancy check for HmvpCand[i-1] may be performed. At this time, the redundancy check between HmvpCand[i-1] and mergeCandList[2] may be omitted.
[269] When the number of merge candidates included in the merge candidate list of the current block is smaller than a threshold, at least one of a pairwise merge candidate or a zero merge candidate may further be included besides the motion information candidate. A pairwise merge candidate means a merge candidate whose motion vector is the average of the motion vectors of two or more merge candidates, and a zero merge candidate means a merge candidate whose motion vector is 0.
[270] Merge candidates may be added to the merge candidate list of the current block in the following order.
[271] Spatial merge candidate - temporal merge candidate - motion information candidate - (affine motion information candidate) - pairwise merge candidate - zero merge candidate
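The two fallback candidates named in the ordering above can be sketched as follows; the function and constant names are illustrative, not taken from the specification.

```python
def pairwise_candidate(mv_a, mv_b):
    """Pairwise merge candidate: per-component average of two motion vectors."""
    return ((mv_a[0] + mv_b[0]) / 2, (mv_a[1] + mv_b[1]) / 2)

# Zero merge candidate: a motion vector of (0, 0).
ZERO_CANDIDATE = (0, 0)
```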
[272] A spatial merge candidate means a merge candidate derived from at least one of a neighboring block or a non-neighboring block, and a temporal merge candidate means a merge candidate derived from a previous reference picture. An affine motion information candidate represents a motion information candidate derived from a block encoded/decoded with an affine motion model.
[273] The motion information table may also be used in the motion vector prediction mode. For example, when the number of motion vector prediction candidates included in the motion vector prediction candidate list of the current block is smaller than a threshold, a motion information candidate included in the motion information table may be set as a motion vector prediction candidate for the current block. Specifically, the motion vector of the motion information candidate may be set as a motion vector prediction candidate.
[274] When one of the motion vector prediction candidates included in the motion vector prediction candidate list of the current block is selected, the selected candidate may be set as the motion vector predictor of the current block. Then, after the motion vector residual value of the current block is decoded, the motion vector of the current block may be obtained by adding the motion vector predictor and the motion vector residual value.
[275] The motion vector prediction candidate list of the current block may be constructed in the following order.
[276] Spatial motion vector prediction candidate - temporal motion vector prediction candidate - motion information candidate - (affine motion information candidate) - zero motion vector prediction candidate
[277] A spatial motion vector prediction candidate means a motion vector prediction candidate derived from at least one of a neighboring block or a non-neighboring block, and a temporal motion vector prediction candidate means a motion vector prediction candidate derived from a previous reference picture. An affine motion information candidate represents a motion information candidate derived from a block encoded/decoded with an affine motion model. A zero motion vector prediction candidate represents a candidate whose motion vector value is 0.
[278]
[279] A merge processing region larger than a coding block may be defined. Coding blocks included in the merge processing region may be processed in parallel without being sequentially encoded/decoded. Here, not being sequentially encoded/decoded means that no encoding/decoding order is defined. Accordingly, the encoding/decoding processes of the blocks included in the merge processing region may be handled independently. Alternatively, the blocks included in the merge processing region may share merge candidates. Here, the merge candidates may be derived based on the merge processing region.
[280] According to the above-described characteristics, the merge processing region may also be called a parallel processing region, a shared merge region (SMR), or a merge estimation region (MER).
[281] The merge candidate of the current block may be derived based on the coding block. However, when the current block is included in a merge processing region larger than the current block, a candidate block included in the same merge processing region as the current block may be set as unavailable as a merge candidate.
[282] FIG. 23 is a diagram showing an example in which a candidate block included in the same merge processing region as the current block is set as unavailable as a merge candidate.
[283] In the example shown in (a) of FIG. 23, when encoding/decoding CU5, blocks including reference samples adjacent to CU5 may be set as candidate blocks. At this time, candidate blocks X3 and X4 included in the same merge processing region as CU5 may be set as unavailable as merge candidates of CU5. In contrast, candidate blocks X0, X1, and X2 not included in the same merge processing region as CU5 may be set as available as merge candidates.
[284] In the example shown in (b) of FIG. 23, when encoding/decoding CU8, blocks including reference samples adjacent to CU8 may be set as candidate blocks. At this time, candidate blocks X6, X7, and X8 included in the same merge processing region as CU8 may be set as unavailable as merge candidates. In contrast, candidate blocks X5 and X9 not included in the same merge region as CU8 may be set as available as merge candidates.
[285] Alternatively, when the current block is included in a merge processing region, neighboring blocks adjacent to the current block and neighboring blocks adjacent to the merge processing region may be set as candidate blocks.
[286] FIG. 24 is a diagram showing an example of deriving merge candidates for the current block when the current block is included in a merge processing region.
[287] As in the example shown in (a) of FIG. 24, the neighboring blocks adjacent to the current block may be set as candidate blocks for deriving the merge candidates of the current block. At this time, a candidate block included in the same merge processing region as the current block may be set as unavailable as a merge candidate. For example, when deriving merge candidates for coding block CU3, the top neighboring block y3 and the top-right neighboring block y4, which are included in the same merge processing region as coding block CU3, may be set as unavailable as merge candidates of coding block CU3.
[288] Merge candidates may be derived by scanning the neighboring blocks adjacent to the current block in a predefined order. For example, the predefined order may be y1, y3, y4, y0, and y2.
[289] When the number of merge candidates that can be derived from the neighboring blocks adjacent to the current block is smaller than the maximum number of merge candidates, or smaller than that maximum number minus an offset, merge candidates for the current block may be derived using the neighboring blocks adjacent to the merge processing region, as in the example shown in (b) of FIG. 24. For example, the neighboring blocks adjacent to the merge processing region including coding block CU3 may be set as candidate blocks for coding block CU3. Here, the neighboring blocks adjacent to the merge processing region may include at least one of the left neighboring block x1, the top neighboring block x3, the bottom-left neighboring block x0, the top-right neighboring block x4, or the top-left neighboring block x2.
[290] Merge candidates may be derived by scanning the neighboring blocks adjacent to the merge processing region in a predefined order. For example, the predefined order may be x1, x3, x4, x0, and x2.
[291] In summary, the merge candidates for coding block CU3 included in the merge processing region may be derived by scanning the candidate blocks in the following scan order.
[292] (y1, y3, y4, y0, y2, x1, x3, x4, x0, x2)
[293] However, the candidate block scan order illustrated above merely shows one example of the present invention, and the candidate blocks may also be scanned in an order different from the above example. Alternatively, the scan order may be adaptively determined based on at least one of the size or shape of the current block or of the merge processing region.
[294] The merge processing region may be square or non-square. Information for determining the merge processing region may be signaled through a bitstream. The information may include at least one of information indicating the shape of the merge processing region or information indicating the size of the merge processing region. When the merge processing region is non-square, at least one of information indicating the size of the merge processing region, information indicating the width and/or height of the merge processing region, or information indicating the ratio between the width and height of the merge processing region may be signaled through a bitstream.
[295] The size of the merge processing region may be determined based on at least one of information signaled through a bitstream, the picture resolution, the slice size, or the tile size.
[296] When motion compensation prediction has been performed on a block included in a merge processing region, a motion information candidate derived based on the motion information of the block on which motion compensation prediction has been performed may be added to the motion information table.
[297] However, if a motion information candidate derived from a block included in a merge processing region is added to the motion information table, a case may occur in which that motion information candidate is used when encoding/decoding another block in the merge processing region whose encoding/decoding actually comes later than that block. That is, although dependency between blocks should be excluded when encoding/decoding the blocks included in a merge processing region, motion prediction compensation might be performed using the motion information of another block included in the merge processing region. To solve this problem, even if the encoding/decoding of a block included in a merge processing region is completed, the motion information of that encoded/decoded block may not be added to the motion information table.
[298] Alternatively, the motion information table may be updated using only blocks at predefined positions within the merge processing region. The predefined positions may include at least one of the block located at the top-left of the merge processing region, the block located at the top-right, the block located at the bottom-left, the block located at the bottom-right, the block located at the center, a block adjacent to the right boundary, or a block adjacent to the bottom boundary. For example, only the motion information of the block adjacent to the bottom-right corner of the merge processing region may be updated in the motion information table, and the motion information of the other blocks may not be updated in the motion information table.
[299] Alternatively, after the decoding of all blocks included in the merge processing region is completed, the motion information candidates derived from those blocks may be added to the motion information table. That is, while the blocks included in the merge processing region are being encoded/decoded, the motion information table may not be updated.
[300] For example, when motion compensation prediction has been performed on the blocks included in the merge processing region, the motion information candidates derived from those blocks may be added to the motion information table in a predefined order. Here, the predefined order may be determined according to the scan order of the coding blocks within the merge processing region or the coding tree unit. The scan order may be at least one of raster scan, horizontal scan, vertical scan, or zigzag scan. Alternatively, the predefined order may be determined based on the motion information of each block or the number of blocks having the same motion information.
[301] Alternatively, motion information candidates including uni-directional motion information may be added to the motion information table before motion information candidates including bi-directional motion information. Conversely, motion information candidates including bi-directional motion information may be added to the motion information table before motion information candidates including uni-directional motion information.
[302] Alternatively, motion information candidates may be added to the motion information table in order of high or low frequency of use within the merge processing region or the coding tree unit.
[303] When the current block is included in a merge processing region and the number of merge candidates included in the merge candidate list of the current block is smaller than the maximum number, the motion information candidates included in the motion information table may be added to the merge candidate list. At this time, a motion information candidate derived from a block included in the same merge processing region as the current block may be set not to be added to the merge candidate list of the current block.
[304] Alternatively, when the current block is included in a merge processing region, the motion information candidates included in the motion information table may be set not to be used. That is, even when the number of merge candidates included in the merge candidate list of the current block is smaller than the maximum number, the motion information candidates included in the motion information table may not be added to the merge candidate list.
[305] As another example, a motion information table for the merge processing region or the coding tree unit may be constructed. This motion information table serves to temporarily store the motion information of the blocks included in the merge processing region. To distinguish the general motion information table from the motion information table for the merge processing region or coding tree unit, the latter will be referred to as a temporary motion information table. In addition, a motion information candidate stored in the temporary motion information table will be referred to as a temporary motion information candidate.
[306] FIG. 25 is a diagram showing a temporary motion information table.
[307] A temporary motion information table for a coding tree unit or a merge processing region may be constructed. When motion compensation prediction has been performed on the current block included in the coding tree unit or merge processing region, the motion information of the block may not be added to the motion information table HmvpCandList. Instead, a temporary motion information candidate derived from the block may be added to the temporary motion information table HmvpMERCandList. That is, a temporary motion information candidate added to the temporary motion information table may not be added to the motion information table. Accordingly, the motion information table may not include motion information candidates derived based on the motion information of the blocks included in the coding tree unit or merge processing region containing the current block.
[308] Alternatively, only the motion information of some blocks among the blocks included in the merge processing region may be added to the temporary motion information table. For example, only blocks at predefined positions within the merge processing region may be used to update the motion information table. The predefined positions may include at least one of the block located at the top-left of the merge processing region, the block located at the top-right, the block located at the bottom-left, the block located at the bottom-right, the block located at the center, a block adjacent to the right boundary, or a block adjacent to the bottom boundary. For example, only the motion information of the block adjacent to the bottom-right corner of the merge processing region may be added to the temporary motion information table, and the motion information of the other blocks may not be added to the temporary motion information table.
[309] The maximum number of temporary motion information candidates that the temporary motion information table can include may be set identically to the maximum number of motion information candidates that the motion information table can include. Alternatively, the maximum number of temporary motion information candidates that the temporary motion information table can include may be determined according to the size of the coding tree unit or the merge processing region. Alternatively, the maximum number of temporary motion information candidates that the temporary motion information table can include may be set smaller than the maximum number of motion information candidates that the motion information table can include.
[310] The current block included in a coding tree unit or merge processing region may be set not to use the temporary motion information table for that coding tree unit or merge processing region. That is, when the number of merge candidates included in the merge candidate list of the current block is smaller than a threshold, the motion information candidates included in the motion information table may be added to the merge candidate list, and the temporary motion information candidates included in the temporary motion information table may not be added to the merge candidate list. Accordingly, the motion information of another block included in the same coding tree unit or the same merge processing region as the current block may not be used for motion compensation prediction of the current block.
[311] When the encoding/decoding of all blocks included in the coding tree unit or merge processing region is completed, the motion information table and the temporary motion information table may be merged.
[312] FIG. 26 is a diagram showing an example of merging the motion information table and the temporary motion information table.
[313] When the encoding/decoding of all blocks included in the coding tree unit or merge processing region is completed, the temporary motion information candidates included in the temporary motion information table may be updated to the motion information table, as in the example shown in FIG. 26.
[314] At this time, the temporary motion information candidates included in the temporary motion information table may be added to the motion information table in the order in which they were inserted into the temporary motion information table (i.e., in ascending or descending order of index value).
[315] As another example, the temporary motion information candidates included in the temporary motion information table may be added to the motion information table in a predefined order. Here, the predefined order may be determined according to the scan order of the coding blocks within the merge processing region or the coding tree unit. The scan order may be at least one of raster scan, horizontal scan, vertical scan, or zigzag scan. Alternatively, the predefined order may be determined based on the motion information of each block or the number of blocks having the same motion information.
[316] Alternatively, temporary motion information candidates including uni-directional motion information may be added to the motion information table before temporary motion information candidates including bi-directional motion information. Conversely, temporary motion information candidates including bi-directional motion information may be added to the motion information table before temporary motion information candidates including uni-directional motion information.
[317] Alternatively, temporary motion information candidates may be added to the motion information table in order of high or low frequency of use within the merge processing region or the coding tree unit.
[318] When adding a temporary motion information candidate included in the temporary motion information table to the motion information table, a redundancy check may be performed on the temporary motion information candidate. For example, when a motion information candidate identical to a temporary motion information candidate included in the temporary motion information table is already stored in the motion information table, the temporary motion information candidate may not be added to the motion information table. At this time, the redundancy check may be performed on some of the motion information candidates included in the motion information table. For example, the redundancy check may be performed on motion information candidates whose indices are above or below a threshold. For example, when a temporary motion information candidate is identical to a motion information candidate whose index is equal to or greater than a predefined value, the temporary motion information candidate may not be added to the motion information table.
[319] The use, as a merge candidate of the current block, of a motion information candidate derived from a block included in the same coding tree unit or the same merge processing region as the current block may be restricted. For this, address information of the block may be additionally stored for the motion information candidate. The address information of the block may include at least one of the position of the block, the address of the block, the index of the block, the position of the merge processing region including the block, the address of the merge processing region including the block, the index of the merge processing region including the block, the position of the coding tree region including the block, the address of the coding tree region including the block, or the index of the coding tree region including the block.
[320]
[321] A coding block may be partitioned into a plurality of prediction units, and prediction may be performed on each of the partitioned prediction units. Here, a prediction unit represents a basic unit for performing prediction.
[322] A coding block may be partitioned using at least one of a vertical line, a horizontal line, an oblique line, or a diagonal line. The prediction units partitioned by the partitioning line may have shapes such as triangles, rectangles, trapezoids, or pentagons. For example, a coding block may be partitioned into two triangular prediction units, two trapezoidal prediction units, two rectangular prediction units, or one triangular prediction unit and one pentagonal prediction unit.
[323] Information for determining at least one of the number, angle, or position of the lines partitioning the coding block may be signaled through a bitstream. For example, information indicating one of the partition type candidates of the coding block may be signaled through a bitstream, or information specifying one of a plurality of line candidates partitioning the coding block may be signaled through a bitstream. For example, index information indicating one of the plurality of line candidates may be signaled through a bitstream.
[324] Each of the plurality of line candidates may differ in at least one of angle or position. The number of line candidates available to the current block may be determined based on the size or shape of the current block, the number of available merge candidates, or whether a neighboring block at a specific position can be used as a merge candidate.
[325] Alternatively, information for determining the number or type of line candidates may be signaled through a bitstream. For example, using a 1-bit flag, it may be determined whether oblique lines with an angle greater than the diagonal and/or oblique lines with an angle smaller than the diagonal can be used as line candidates. The information may be signaled at the sequence, picture, or slice level.
[326] Alternatively, at least one of the number, angle, or position of the lines partitioning the coding block may be adaptively determined based on at least one of the intra prediction mode of the coding block, the inter prediction mode, the positions of the available merge candidates, or the partitioning aspect of neighboring blocks.
[327] When a coding block is partitioned into a plurality of prediction units, intra prediction or inter prediction may be performed on each of the partitioned prediction units.
[328] FIG. 27 is a diagram showing an example of partitioning a coding block into a plurality of prediction units using a diagonal line.
[329] As in the examples shown in (a) and (b) of FIG. 27, a coding block may be partitioned into two triangular prediction units using a diagonal line.
[330] In (a) and (b) of FIG. 27, the coding block is shown partitioned into two prediction units using a diagonal line connecting two vertices of the coding block. However, the coding block may also be partitioned into two prediction units using an oblique line at least one end of which does not pass through a vertex of the coding block.
[331] FIG. 28 is a diagram showing examples of partitioning a coding block into two prediction units.
[332] As in the examples shown in (a) and (b) of FIG. 28, a coding block may be partitioned into two prediction units using an oblique line whose two ends respectively touch the top boundary and the bottom boundary of the coding block.
[333] Alternatively, as in the examples shown in (c) and (d) of FIG. 28, a coding block may be partitioned into two prediction units using an oblique line whose two ends respectively touch the left boundary and the right boundary of the coding block.
[334] Alternatively, a coding block may be partitioned into two prediction units of different sizes. For example, by setting the oblique line partitioning the coding block so as to touch the two boundary surfaces forming one vertex, the coding block may be partitioned into two prediction units of different sizes.
[335] FIG. 29 shows examples of partitioning a coding block into a plurality of prediction units of different sizes.
[336] As in the examples shown in (a) and (b) of FIG. 29, by setting the diagonal line connecting the top-left and the bottom-right of the coding block so as to pass through the left boundary, right boundary, top boundary, or bottom boundary instead of passing through the top-left corner or the bottom-right corner of the coding block, the coding block may be partitioned into two prediction units of different sizes.
[337] Alternatively, as in the examples shown in (c) and (d) of FIG. 29, by setting the diagonal line connecting the top-right and the bottom-left of the coding block so as to pass through the left boundary, right boundary, top boundary, or bottom boundary instead of passing through the top-right corner or the bottom-left corner of the coding block, the coding block may be partitioned into two prediction units of different sizes.
[338] Each of the prediction units generated by partitioning the coding block will be referred to as the 'first prediction unit' and the 'second prediction unit'. For example, in the examples shown in FIGs. 27 to 29, PU1 may be defined as the first prediction unit and PU2 may be defined as the second prediction unit. The first prediction unit may mean the prediction unit including the sample located at the bottom-left or the sample located at the top-left of the coding block, and the second prediction unit may mean the prediction unit including the sample located at the top-right or the sample located at the bottom-right of the coding block.
[339] Conversely, the prediction unit including the sample located at the top-right or the sample located at the bottom-right of the coding block may be defined as the first prediction unit, and the prediction unit including the sample located at the bottom-left or the sample located at the top-left of the coding block may be defined as the second prediction unit.
[340] Partitioning a coding block using a horizontal line, a vertical line, a diagonal line, or an oblique line may be called prediction unit partitioning. A prediction unit generated by applying prediction unit partitioning may be called, according to its shape, a triangular prediction unit, a rectangular prediction unit, or a pentagonal prediction unit.
[341] The embodiments described below assume that the coding block is partitioned using a diagonal line. In particular, partitioning a coding block into two prediction units using a diagonal line will be referred to as diagonal partitioning or triangular partitioning. However, even when a coding block is partitioned using a vertical line, a horizontal line, or an oblique line with an angle different from the diagonal, the prediction units may be encoded/decoded according to the embodiments described below. That is, the matters described below regarding encoding/decoding of triangular prediction units may also be applied to encoding/decoding of rectangular or pentagonal prediction units.
[342] Whether to apply prediction unit partitioning to a coding block may be determined based on at least one of the slice type, the maximum number of merge candidates that the merge candidate list can include, the size of the coding block, the shape of the coding block, the prediction coding mode of the coding block, or the partitioning aspect of the parent node.
[343] For example, whether to apply prediction unit partitioning to a coding block may be determined based on whether the current slice is of type B. Prediction unit partitioning may be allowed only when the current slice is of type B.
[344] Alternatively, whether to apply prediction unit partitioning to a coding block may be determined based on whether the maximum number of merge candidates included in the merge candidate list is 2 or more. Prediction unit partitioning may be allowed only when the maximum number of merge candidates included in the merge candidate list is 2 or more.
[345] Alternatively, in a hardware implementation, when at least one of the width or height is larger than 64, the disadvantage arises that a data processing unit of size 64x64 is accessed redundantly. Accordingly, when at least one of the width or height of the coding block is larger than a threshold, partitioning the coding block into a plurality of prediction units may not be allowed. For example, when at least one of the height or width of the coding block is larger than 64 (e.g., when at least one of the width or height is 128), prediction unit partitioning may not be used.
[346] Alternatively, considering the maximum number of samples that can be processed simultaneously in a hardware implementation, prediction unit partitioning may not be allowed for coding blocks whose number of samples exceeds a threshold. For example, prediction unit partitioning may not be allowed for coding tree blocks with more than 4096 samples.
[347] Alternatively, prediction unit partitioning may not be allowed for coding blocks whose number of samples is smaller than a threshold. For example, when the number of samples included in a coding block is smaller than 64, prediction unit partitioning may be set not to be applied to the coding block.
[348] Alternatively, whether to apply prediction unit partitioning to a coding block may be determined based on whether the width-to-height ratio of the coding block is smaller than a first threshold, or whether the width-to-height ratio of the coding block is larger than a second threshold. Here, the width-to-height ratio whRatio of the coding block may be determined as the ratio of the width CbW of the coding block to its height CbH, as in Equation 2 below.
[349] [Equation 2]
whRatio = CbW / CbH
[350] The second threshold may be the reciprocal of the first threshold. For example, when the first threshold is k, the second threshold may be 1/k.
[351] Prediction unit partitioning may be applied to a coding block only when the width-to-height ratio of the coding block lies between the first threshold and the second threshold.
[352] Alternatively, prediction unit partitioning may be used only when the width-to-height ratio of the coding block is smaller than the first threshold or larger than the second threshold. For example, when the first threshold is 16, prediction unit partitioning may not be allowed for coding blocks of size 64x4 or 4x64.
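The width-to-height-ratio test of Equation 2 can be sketched as follows, here for the variant in which partitioning is allowed only when whRatio lies between the two thresholds (the second threshold being the reciprocal of the first); the function name and default threshold are illustrative.

```python
def partition_allowed(cb_w, cb_h, first_threshold=16):
    """Allow prediction unit partitioning only when
    1/first_threshold < whRatio < first_threshold."""
    wh_ratio = cb_w / cb_h                    # Equation 2: whRatio = CbW / CbH
    second_threshold = 1 / first_threshold    # reciprocal of the first threshold
    return second_threshold < wh_ratio < first_threshold
```

With the default threshold of 16, a 32x32 block passes, while 64x4 and 4x64 blocks (whRatio of exactly 16 and 1/16) are rejected.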
[353] Alternatively, whether prediction unit partitioning is allowed may be determined based on the partitioning aspect of the parent node. For example, when the parent-node coding block is partitioned based on quad-tree partitioning, prediction unit partitioning may be applied to the leaf-node coding block. In contrast, when the parent-node coding block is partitioned based on binary-tree or triple-tree partitioning, prediction unit partitioning may be set not to be allowed for the leaf-node coding block.
[354] Alternatively, whether prediction unit partitioning is allowed may be determined based on the prediction coding mode of the coding block. For example, prediction unit partitioning may be allowed only when the coding block is encoded by intra prediction, when the coding block is encoded by inter prediction, or when the coding block is encoded in a predefined inter prediction mode. Here, the predefined inter prediction mode may include at least one of a merge mode, a motion vector prediction mode, an affine merge mode, or an affine motion vector prediction mode.
[355] Alternatively, whether prediction unit partitioning is allowed may be determined based on the size of the parallel processing region. For example, when the size of the coding block is larger than the size of the parallel processing region, prediction unit partitioning may not be used.
[356] Whether to apply prediction unit partitioning to a coding block may also be determined by considering two or more of the conditions listed above.
[357] As another example, information indicating whether to apply prediction unit partitioning to a coding block may be signaled through a bitstream. The information may be signaled at the sequence, picture, slice, or block level. For example, a flag triangle_partition_flag indicating whether prediction unit partitioning is applied to a coding block may be signaled at the coding block level.
[358] When it is determined that prediction unit partitioning is to be applied to a coding block, information indicating the number of lines partitioning the coding block or the positions of those lines may be signaled through a bitstream.
[359] For example, when a coding block is partitioned by a diagonal line, information indicating the direction of the diagonal line partitioning the coding block may be signaled through a bitstream. For example, a flag triangle_partition_type_flag indicating the direction of the diagonal line may be signaled through a bitstream. The flag indicates whether the coding block is partitioned by the diagonal line connecting the top-left and the bottom-right, or by the diagonal line connecting the top-right and the bottom-left. Partitioning a coding block by the diagonal line connecting the top-left and the bottom-right may be called a left-triangular partition type, and partitioning a coding block by the diagonal line connecting the top-right and the bottom-left may be called a right-triangular partition type. For example, a value of 0 for the flag may indicate that the partition type of the coding block is the left-triangular partition type, and a value of 1 may indicate that the partition type of the coding block is the right-triangular partition type.
[360] Additionally, information indicating whether the prediction units are of the same size, or information indicating the position of the diagonal line partitioning the coding block, may be signaled through a bitstream. For example, when the information indicating the sizes of the prediction units indicates that the prediction units are of the same size, encoding of the information indicating the position of the diagonal line is omitted, and the coding block may be partitioned into two prediction units using the diagonal line passing through two vertices of the coding block. In contrast, when the information indicating the sizes of the prediction units indicates that the prediction units are not of the same size, the position of the diagonal line partitioning the coding block may be determined based on the information indicating the position of the diagonal line. For example, when the left-triangular partition type is applied to a coding block, the position information may indicate whether the diagonal line touches the left and bottom boundaries of the coding block or the top and right boundaries. Alternatively, when the right-triangular partition type is applied to a coding block, the position information may indicate whether the diagonal line touches the right and bottom boundaries of the coding block or the top and left boundaries.
[361] Information indicating the partition type of a coding block may be signaled at the coding block level. Accordingly, the partition type may be determined per coding block to which prediction unit partitioning is applied.
[362] As another example, information indicating the partition type may be signaled for a sequence, picture, slice, tile, or coding tree unit. In this case, the partition types of the coding blocks to which diagonal partitioning is applied within the sequence, picture, slice, tile, or coding tree unit may be set identically.
[363] Alternatively, the information for determining the partition type may be encoded and signaled for the first coding unit to which prediction unit partitioning is applied within the coding tree unit, and the second and subsequent coding units to which prediction unit partitioning is applied may be set to use the same partition type as the first coding unit.
[364] As another example, the partition type of a coding block may be determined based on the partition type of a neighboring block. Here, the neighboring blocks may include at least one of a neighboring block adjacent to the top-left corner of the coding block, a neighboring block adjacent to the top-right corner, a neighboring block adjacent to the bottom-left corner, a neighboring block located above, or a neighboring block located to the left. For example, the partition type of the current block may be set identically to the partition type of a neighboring block. Alternatively, the partition type of the current block may be determined based on whether the left-triangular partition type has been applied to the top-left neighboring block, or whether the right-triangular partition type has been applied to the top-right neighboring block or the bottom-left neighboring block.
[365] In order to perform motion prediction compensation for the first prediction unit and the second prediction unit, the motion information of each of the first and second prediction units may be derived. At this time, the motion information of the first and second prediction units may be derived from the merge candidates included in a merge candidate list. To distinguish the general merge candidate list from the merge candidate list used to derive the motion information of the prediction units, the latter will be referred to as a partition mode merge candidate list or a triangular merge candidate list. In addition, a merge candidate included in the partition mode merge candidate list will be referred to as a partition mode merge candidate or a triangular merge candidate. However, using the merge candidate derivation method and merge candidate list construction method described above for deriving partition mode merge candidates and constructing the partition mode merge candidate list is also included in the spirit of the present invention.
[366] Information for determining the maximum number of partition mode merge candidates that the partition mode merge candidate list can include may be signaled through a bitstream. The information may indicate the difference between the maximum number of merge candidates that the merge candidate list can include and the maximum number of partition mode merge candidates that the partition mode merge candidate list can include.
[367] Partition mode merge candidates may be derived from the spatial neighboring blocks and the temporal neighboring block of the coding block.
[368] FIG. 30 is a diagram showing the neighboring blocks used to derive partition mode merge candidates.
[369] A partition mode merge candidate may be derived using at least one of a neighboring block located above the coding block, a neighboring block located to the left of the coding block, or a collocated block included in a picture different from that of the coding block. The top neighboring block may include at least one of a block including the sample (xCb + CbW - 1, yCb - 1) located above the coding block, a block including the sample (xCb + CbW, yCb - 1) located above the coding block, or a block including the sample (xCb - 1, yCb - 1) located above the coding block. The left neighboring block may include at least one of a block including the sample (xCb - 1, yCb + CbH - 1) located to the left of the coding block or a block including the sample (xCb - 1, yCb + CbH) located to the left of the coding block. The collocated block may be determined as either a block including the sample (xCb + CbW, yCb + CbH) adjacent to the top-right corner of the coding block in the collocated picture, or a block including the sample (CbW/2, CbH/2) located at the center of the coding block.
[370] The neighboring blocks may be searched in a predefined order, and the partition mode merge candidate list may be constructed from the partition mode merge candidates according to the predefined order. For example, the partition mode merge candidate list may be constructed by searching for partition mode merge candidates in the order of B1, A1, B0, A0, C0, B2, and C1.
[371] The motion information of the prediction units may be derived based on the partition mode merge candidate list. That is, the prediction units may share one partition mode merge candidate list.
[372] To derive the motion information of a prediction unit, information for specifying at least one of the partition mode merge candidates included in the partition mode merge candidate list may be signaled through a bitstream. For example, index information merge_triangle_idx for specifying at least one of the partition mode merge candidates may be signaled through a bitstream.
[373] The index information may specify a combination of the merge candidate of the first prediction unit and the merge candidate of the second prediction unit. For example, Table 3 below shows an example of combinations of merge candidates according to the index information merge_triangle_idx.
[374] [Table 3] (reproduced as an image in the original; it maps each value of merge_triangle_idx to a pair of merge candidate indices for the first and second prediction units.)
[375] A value of 1 for the index information merge_triangle_idx indicates that the motion information of the first prediction unit is derived from the merge candidate with index 1, and that the motion information of the second prediction unit is derived from the merge candidate with index 0. Through the index information merge_triangle_idx, the partition mode merge candidate for deriving the motion information of the first prediction unit and the partition mode merge candidate for deriving the motion information of the second prediction unit may be determined. The partition type of the coding block to which diagonal partitioning is applied may also be determined by the index information. That is, the index information may specify a combination of the merge candidate of the first prediction unit, the merge candidate of the second prediction unit, and the partitioning direction of the coding block. When the partition type of the coding block is determined by the index information, the information triangle_partition_type_flag indicating the direction of the diagonal line partitioning the coding block may not be encoded. Table 4 shows the partition type of the coding block according to the index information merge_triangle_idx.
[376] [Table 4] (reproduced as an image in the original; it shows the partition type of the coding block for each value of merge_triangle_idx.)
[377] A value of 0 for the variable TriangleDir indicates that the left-triangular partition type is applied to the coding block, and a value of 1 for the variable TriangleDir indicates that the right-triangular partition type is applied to the coding block. By combining Table 3 and Table 4, the index information merge_triangle_idx may be set to specify a combination of the merge candidate of the first prediction unit, the merge candidate of the second prediction unit, and the partitioning direction of the coding block. As another example, index information may be signaled for only one of the first prediction unit and the second prediction unit, and the merge candidate index for the other of the first and second prediction units may be determined based on that index information. For example, the merge candidate of the first prediction unit may be determined based on the index information merge_triangle_idx indicating the index of one of the partition mode merge candidates. Then, the merge candidate of the second prediction unit may be specified based on merge_triangle_idx. For example, the merge candidate of the second prediction unit may be derived by adding or subtracting an offset to/from the index information merge_triangle_idx. The offset may be an integer such as 1 or 2. For example, the merge candidate of the second prediction unit may be determined as the partition mode merge candidate whose index equals merge_triangle_idx plus 1. When merge_triangle_idx indicates the partition mode merge candidate with the largest index among the partition mode merge candidates, the motion information of the second prediction unit may be derived from the partition mode merge candidate with index 0, or from the partition mode merge candidate whose index equals merge_triangle_idx minus 1.
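The offset rule described above can be sketched as follows, for the case of an offset of 1 with fallback to index 0 when the signaled index already points at the largest-index candidate; the function name is illustrative.

```python
def second_candidate_index(merge_triangle_idx, num_candidates, offset=1):
    """Derive the second prediction unit's candidate index from the first's."""
    if merge_triangle_idx == num_candidates - 1:
        return 0                          # fallback when idx + offset would overflow
    return merge_triangle_idx + offset
```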
[378] Alternatively, the motion information of the second prediction unit may be derived from a partition mode merge candidate having the same reference picture as the partition mode merge candidate of the first prediction unit specified by the index information. Here, a partition mode merge candidate having the same reference picture as the partition mode merge candidate of the first prediction unit may represent a partition mode merge candidate for which at least one of the L0 reference picture or the L1 reference picture is identical to that of the first prediction unit's partition mode merge candidate. When a plurality of partition mode merge candidates have the same reference picture as the first prediction unit's partition mode merge candidate, one may be selected based on at least one of whether the merge candidate includes bi-directional motion information or the difference between the index of the merge candidate and the index information.
[379] As another example, index information may be signaled for each of the first prediction unit and the second prediction unit. For example, first index information 1st_merge_idx for determining the partition mode merge candidate of the first prediction unit and second index information 2nd_merge_idx for determining the partition mode merge candidate of the second prediction unit may be signaled through a bitstream. The motion information of the first prediction unit may be derived from the partition mode merge candidate determined based on the first index information 1st_merge_idx, and the motion information of the second prediction unit may be derived from the partition mode merge candidate determined based on the second index information 2nd_merge_idx.
[380] The first index information 1st_merge_idx may indicate the index of one of the partition mode merge candidates included in the partition mode merge candidate list. The partition mode merge candidate of the first prediction unit may be determined as the partition mode merge candidate indicated by 1st_merge_idx.
[381] The partition mode merge candidate indicated by the first index information 1st_merge_idx may be set as unavailable as the partition mode merge candidate of the second prediction unit. Accordingly, the second index information 2nd_merge_idx of the second prediction unit may indicate the index of one of the remaining partition mode merge candidates, excluding the candidate indicated by the first index information. When the value of the second index information 2nd_merge_idx is smaller than the value of the first index information 1st_merge_idx, the partition mode merge candidate of the second prediction unit may be determined as the partition mode merge candidate whose index is indicated by the second index information 2nd_merge_idx. In contrast, when the value of the second index information 2nd_merge_idx is equal to or larger than the value of the first index information 1st_merge_idx, the partition mode merge candidate of the second prediction unit may be determined as the partition mode merge candidate whose index equals the value of the second index information 2nd_merge_idx plus 1.
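The two-index decoding rule described above can be sketched as follows: because 2nd_merge_idx indexes the list with the first-chosen candidate removed, signaled values greater than or equal to 1st_merge_idx are shifted up by one. The function name is illustrative.

```python
def decode_pair(first_idx, second_idx):
    """Map (1st_merge_idx, 2nd_merge_idx) to actual candidate list indices."""
    # second_idx counts over the remaining candidates with first_idx excluded
    second = second_idx if second_idx < first_idx else second_idx + 1
    return first_idx, second
```

For example, with `first_idx = 2`, a signaled `second_idx = 2` selects the candidate at actual index 3, guaranteeing the two prediction units never share the same candidate.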
[382] Alternatively, whether to signal the second index information may be determined according to the number of partition mode merge candidates included in the partition mode merge candidate list. For example, when the maximum number of partition mode merge candidates that the partition mode merge candidate list can include does not exceed 2, signaling of the second index information may be omitted. When signaling of the second index information is omitted, the second partition mode merge candidate may be derived by adding or subtracting an offset to/from the first index information. For example, when the maximum number of partition mode merge candidates that the partition mode merge candidate list can include is 2 and the first index information indicates index 0, the second partition mode merge candidate may be derived by adding 1 to the first index information. Alternatively, when the maximum number of partition mode merge candidates is 2 and the first index information indicates 1, the second partition mode merge candidate may be derived by subtracting 1 from the first index information.
[383] Alternatively, when signaling of the second index information is omitted, the second index information may be set to a default value. Here, the default value may be 0. The second partition mode merge candidate may be derived by comparing the first index information and the second index information. For example, when the second index information is smaller than the first index information, the merge candidate with index 0 may be set as the second partition mode merge candidate, and when the second index information is equal to or larger than the first index information, the merge candidate with index 1 may be set as the second partition mode merge candidate.
[384] When a partition mode merge candidate has uni-directional motion information, the uni-directional motion information of the partition mode merge candidate may be set as the motion information of the prediction unit. In contrast, when a partition mode merge candidate has bi-directional motion information, only one of the L0 motion information or the L1 motion information may be set as the motion information of the prediction unit. Which of the L0 motion information or the L1 motion information is taken may be determined based on the index of the partition mode merge candidate or the motion information of the other prediction unit.
[385] For example, when the index of the partition mode merge candidate is even, the L0 motion information of the prediction unit may be set to 0, and the L1 motion information of the partition mode merge candidate may be set as the L1 motion information of the prediction unit. In contrast, when the index of the partition mode merge candidate is odd, the L1 motion information of the prediction unit may be set to 0, and the L0 motion information of the partition mode merge candidate may be set as the L0 motion information of the prediction unit. Conversely, when the index of the partition mode merge candidate is even, the L0 motion information of the partition mode merge candidate may be set as the L0 motion information of the prediction unit, and when the index of the partition mode merge candidate is odd, the L1 motion information of the partition mode merge candidate may be set as the L1 motion information of the prediction unit. Alternatively, for the first prediction unit, when the partition mode merge candidate has an even index, the L0 motion information of the partition mode merge candidate may be set as the L0 motion information of the first prediction unit, while for the second prediction unit, when the partition mode merge candidate has an odd index, the L1 motion information of the partition mode merge candidate may be set as the L1 motion information of the second prediction unit.
[386] Alternatively, when the first prediction unit has L0 motion information, the L0 motion information of the second prediction unit may be set to 0, and the L1 motion information of the partition mode merge candidate may be set as the L1 information of the second prediction unit. In contrast, when the first prediction unit has L1 motion information, the L1 motion information of the second prediction unit may be set to 0, and the L0 motion information of the partition mode merge candidate may be set as the L0 motion information of the second prediction unit.
[387] The partition mode merge candidate list for deriving the motion information of the first prediction unit and the partition mode merge candidate list for deriving the motion information of the second prediction unit may also be set differently.
[388] For example, when the partition mode merge candidate for deriving the motion information of the first prediction unit is specified in the partition mode merge candidate list based on the index information for the first prediction unit, the motion information of the second prediction unit may be derived using a partition mode merge list including the remaining partition mode merge candidates, excluding the candidate indicated by the index information. Specifically, the motion information of the second prediction unit may be derived from one of the remaining partition mode merge candidates.
[389] Accordingly, the maximum number of partition mode merge candidates included in the partition mode merge candidate list of the first prediction unit and that of the second prediction unit may differ. For example, when the partition mode merge candidate list of the first prediction unit includes M merge candidates, the partition mode merge candidate list of the second prediction unit may include M-1 merge candidates, excluding the partition mode merge candidate indicated by the index information of the first prediction unit.
[390] As another example, the merge candidates of each prediction unit may be derived based on the neighboring blocks adjacent to the coding block, while the availability of a neighboring block may be determined considering the shape or position of the prediction unit.
[391] FIG. 31 is a diagram for describing an example of determining the availability of neighboring blocks per prediction unit.
[392] A neighboring block not adjacent to the first prediction unit may be set as unavailable for the first prediction unit, and a neighboring block not adjacent to the second prediction unit may be set as unavailable for the second prediction unit.
[393] For example, as in the example shown in (a) of FIG. 31, when the left-triangular partition type is applied to a coding block, among the neighboring blocks adjacent to the coding block, blocks A1, A0, and A2 adjacent to the first prediction unit may be determined as available for the first prediction unit, while blocks B0 and B1 may be determined as unavailable for the first prediction unit. Accordingly, the partition mode merge candidate list for the first prediction unit may include the partition mode merge candidates derived from blocks A1, A0, and A2, while not including the partition mode merge candidates derived from blocks B0 and B1.
[394] As in the example shown in (b) of FIG. 31, when the left-triangular partition type is applied to a coding block, blocks B0 and B1 adjacent to the second prediction unit may be determined as available for the second prediction unit, while blocks A1, A0, and A2 may be determined as unavailable for the second prediction unit. Accordingly, the partition mode merge candidate list for the second prediction unit may include the partition mode merge candidates derived from blocks B0 and B1, while not including the partition mode merge candidates derived from blocks A1, A0, and A2.
[395] Accordingly, the number of partition mode merge candidates that a prediction unit can use, or the range of partition mode merge candidates, may be determined based on at least one of the position of the prediction unit or the partition type of the coding block.
[396] As another example, the merge mode may be applied to only one of the first prediction unit and the second prediction unit. The motion information of the other of the first and second prediction units may then be set identically to the motion information of the prediction unit to which the merge mode is applied, or may be derived by refining the motion information of the prediction unit to which the merge mode is applied.
[397] For example, based on a partition mode merge candidate, the motion vector and reference picture index of the first prediction unit may be derived, and the motion vector of the second prediction unit may be derived by refining the motion vector of the first prediction unit. For example, the motion vector of the second prediction unit may be derived by adding or subtracting a refine motion vector {Rx, Ry} to/from the motion vector {mvD1LXx, mvD1LXy} of the first prediction unit. The reference picture index of the second prediction unit may be set identically to that of the first prediction unit.
[398] Information for determining the refine motion vector, which represents the difference between the motion vector of the first prediction unit and the motion vector of the second prediction unit, may be signaled through a bitstream. The information may include at least one of information indicating the magnitude of the refine motion vector or information indicating the sign of the refine motion vector.
[399] Alternatively, the sign of the refine motion vector may be derived based on at least one of the position or index of the prediction unit or the partition type applied to the coding block.
[400] As another example, the motion vector and reference picture index of one of the first prediction unit and the second prediction unit may be signaled. The motion vector of the other of the first and second prediction units may be derived by refining the signaled motion vector.
[401] For example, based on information signaled from a bitstream, the motion vector and reference picture index of the first prediction unit may be determined. Then, the motion vector of the second prediction unit may be derived by refining the motion vector of the first prediction unit. For example, the motion vector of the second prediction unit may be derived by adding or subtracting a refine motion vector {Rx, Ry} to/from the motion vector {mvD1LXx, mvD1LXy} of the first prediction unit. The reference picture index of the second prediction unit may be set identically to that of the first prediction unit.
[402] As another example, the merge mode may be applied to only one of the first prediction unit and the second prediction unit. The motion information of the other of the first and second prediction units may then be derived based on the motion information of the prediction unit to which the merge mode is applied. For example, the symmetric motion vector of the motion vector of the first prediction unit may be set as the motion vector of the second prediction unit. Here, the symmetric motion vector may mean a motion vector having the same magnitude as the motion vector of the first prediction unit but with the sign of at least one of the x-axis or y-axis component reversed, or a motion vector having the same magnitude as the scaled vector obtained by scaling the motion vector of the first prediction unit but with the sign of at least one of the x-axis or y-axis component reversed. For example, when the motion vector of the first prediction unit is (MVx, MVy), the motion vector of the second prediction unit may be set to (MVx, -MVy), (-MVx, MVy), or (-MVx, -MVy), which are symmetric motion vectors of that motion vector.
[403] 제 1예측유닛및제 2예측유닛중머지모드가적용되지 않은예측유닛의 참조픽처 인덱스는머지모드가적용된예측유닛의참조픽처 인덱스와 동일하게설정될수있다.또는,머지모드가적용되지 않은예측유닛의 참조 픽처 인덱스는기 정의된값으로설정될수있다.여기서,기정의된값은,참조 픽처 리스트내가장작은인덱스또는가장큰인덱스일수있다.또는,머지 2020/175915 1»(:1^1{2020/002754 모드가적용되지 않은예측유닛의 참조픽처 인덱스를특정하는정보가 비트스트림을통해시그날링될수있다.또는,머지모드가적용된예측유닛의 참조픽처가속한참조픽처 리스트와상이한참조픽처 리스트로부터머지 모드가적용되지 않은예측유닛의 참조픽처를선택할수있다.일예로,머지 모드가적용된예측유닛의참조픽처가 L0참조픽처 리스트로부터선택된 경우,머지모드가적용되지 않은예측유닛의참조픽처는 L1참조픽처 리스트로부터선택될수있다.이때,머지모드가적용되지 않은예측유닛의 참조픽처는머지모드가적용된예측유닛의 참조픽처와현재픽처사이의 출력순서 (Picture Order Count, POC)차분을기초로유도될수있다.일예로,머지 모드가적용된예측유닛의참조픽처가 L0참조픽처 리스트로부터선택된 경우, L1참조픽처 리스트내현재픽처와의차분값이 머지모드가적용된예측 유닛의 참조픽처와현재픽처사이의차분값과동일또는유사한참조픽처를 머지모드가적용되지 않은예측유닛의참조픽처로선택할수있다.
[404] 제 1예측유닛의참조픽처와현재픽처사이의출력순서차분값및제 2예측 유닛의 참조픽처와현재픽처사이의출력순서차분값의크기가상이한경우, 머지모드가적용된예측유닛의스케일링된움직임 벡터의 대칭움직임 벡터가 머지모드가적용되지 않은예측유닛의움직임 벡터로설정될수있다.이때, 스케일링은각참조픽처와현재픽처사이의출력순서차분값을기초로수행될 수있다.
[405] 다른예로,제 1예측유닛및제 2예측유닛각각의움직임 벡터를유도한뒤 , 유도된움직임 벡터에 리파인벡터를가산또는감산할수있다.일예로,제 1 예측유닛의움직임 벡터는제 1머지후보를기초로유도된제 1움직임 벡터에 제 1리파인벡터를가산또는감산하여유도되고,제 2예측유닛의움직임 벡터는제 2머지후보를기초로유도된제 2움직임 벡터에 제 2리파인벡터를 가산또는감산하여유도될수있다.제 1리파인벡터또는제 2리파인벡터중 적어도하나를결정하기 위한정보가비트스트림을통해시그날링될수있다. 상기 정보는리파인벡터의크기를결정하기위한정보또는리파인벡터의 부호를결정하기위한정보중적어도하나를포함할수있다.
[406] 제 2리파인벡터는제 1리파인벡터의 대칭움직임 벡터일수도있다.이경우, 제 1리파인벡터 및제 2리파인벡터중어느하나에 대해서만리파인벡터를 결정하기 위한정보가시그날링될수있다.일예로,비트스트림으로부터 시그날링되는정보에의해제 1리파인벡터가 (MVDx, MVDy)로결정된경우, 제 1리파인벡터의 대칭움직임 벡터인 (-MVDx, MVDy), (MVDx, -MVDy)또는 (-MVDx, -MVDy)를제 2리파인벡터로설정할수있다.예측유닛들각각의참조 픽처의출력순서에 따라,제 1리파인벡터를스케일링하여 획득된스케일링된 움직임 벡터의 대칭움직임 벡터를제 2리파인벡터로설정할수도있다.
[407] 다른예로,제 1예측유닛및제 2예측유닛중어느하나의정보는머지후보를 기초로유도하고,다른하나의움직임정보는비트스트림을통해시그날링되는 2020/175915 1»(:1^1{2020/002754 정보에기초하여결정할수있다.일예로,제 1예측유닛에대해머지인덱스를 시그날링하고,제 2예측유닛에대해움직임벡터를결정하기위한정보및참조 픽처를결정하기위한정보중적어도하나를시그날링할수있다.제 1예측 유닛의움직임정보는머지인덱스에의해특정되는머지후보의움직임정보와 동일하게설정될수있다.제 2예측유닛의움직임정보는비트스트림을통해 시그날링되는움직임벡터를결정하기위한정보및참조픽처를결정하기위한 정보중적어도하나에의해특정될수있다.
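The symmetric-motion-vector rule of paragraph [402] can be illustrated with a short sketch; the function name and the tuple representation of a motion vector are illustrative, not part of the specification.

```python
def symmetric_mv(mv, flip_x=True, flip_y=True):
    """Return a motion vector with the same per-component magnitude but with
    the sign of the x and/or y component reversed (paragraph [402])."""
    mvx, mvy = mv
    return (-mvx if flip_x else mvx, -mvy if flip_y else mvy)

# If the first prediction unit's motion vector is (MVx, MVy) = (3, -2), the
# second unit's motion vector may be set to any symmetric variant:
print(symmetric_mv((3, -2), True, True))    # (-MVx, -MVy) = (-3, 2)
print(symmetric_mv((3, -2), False, True))   # (MVx, -MVy)  = (3, 2)
print(symmetric_mv((3, -2), True, False))   # (-MVx, MVy)  = (-3, -2)
```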
[408] Motion-compensated prediction for the coding block may be performed based on the motion information of the first prediction unit and the motion information of the second prediction unit. In this case, picture-quality degradation may occur at the boundary between the first and second prediction units. As an example, the continuity of picture quality may deteriorate around the edge lying on the boundary between the first and second prediction units. To reduce the degradation at the boundary, the prediction samples may be derived through a smoothing filter or weighted prediction.
[409] A prediction sample in a coding block to which diagonal partitioning is applied may be derived based on a weighted-sum operation of a first prediction sample obtained based on the motion information of the first prediction unit and a second prediction sample obtained based on the motion information of the second prediction unit. Alternatively, the prediction samples of the first prediction unit may be derived from a first prediction block determined based on the motion information of the first prediction unit, and the prediction samples of the second prediction unit may be derived from a second prediction block determined based on the motion information of the second prediction unit, while a prediction sample located in the boundary region between the first and second prediction units is derived based on a weighted-sum operation of the first prediction sample included in the first prediction block and the second prediction sample included in the second prediction block. As an example, Equation 3 below shows an example of deriving the prediction samples of the first and second prediction units.
[410] [Equation 3]
P(x, y) = w1 * P1(x, y) + (1 - w1) * P2(x, y)
[411] In Equation 3, P1 denotes the first prediction sample and P2 denotes the second prediction sample. w1 denotes the weight applied to the first prediction sample, and (1 - w1) denotes the weight applied to the second prediction sample. As in the example shown in Equation 3, the weight applied to the second prediction sample may be derived by subtracting the weight applied to the first prediction sample from a constant value.
[412] When the left-triangular partition type is applied to the coding block, the boundary region may include the prediction samples whose x-axis and y-axis coordinates are the same. On the other hand, when the right-triangular partition type is applied to the coding block, the boundary region may include the prediction samples for which the sum of the x-axis and y-axis coordinates is greater than or equal to a first threshold and less than or equal to a second threshold.
[413] The size of the boundary region may be determined based on at least one of the size of the coding block, the shape of the coding block, the motion information of the prediction units, the motion vector difference of the prediction units, the output order of the reference pictures, or the difference between the first and second prediction samples at the diagonal boundary.
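As a minimal sketch of the weighted-sum rule of Equation 3, assuming the weights are expressed in eighths (as the N/8 notation of FIGS. 32 and 33 suggests) and assuming an integer rounding offset that the text does not spell out:

```python
def blend_sample(p1, p2, w1_eighths):
    """Weighted sum of Equation 3 in integer form: P = w1*P1 + (1-w1)*P2,
    with w1 given in eighths (0..8). The +4 rounding offset and >>3 are
    an assumed integer realization of the /8 normalization."""
    return (w1_eighths * p1 + (8 - w1_eighths) * p2 + 4) >> 3

# A boundary sample weighted 4/8 toward each prediction:
print(blend_sample(80, 40, 4))  # 60
```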
[414] FIGS. 32 and 33 are diagrams showing examples of deriving a prediction sample based on a weighted-sum operation of the first and second prediction samples. FIG. 32 illustrates the case where the left-triangular partition type is applied to the coding block, and FIG. 33 illustrates the case where the right-triangular partition type is applied to the coding block. In addition, (a) of FIG. 32 and (a) of FIG. 33 show the prediction aspect for the luma component, and (b) of FIG. 32 and (b) of FIG. 33 show the prediction aspect for the chroma component.
[415] In the illustrated figures, the number written on a prediction sample located near the boundary between the first and second prediction units indicates the weight applied to the first prediction sample. As an example, when the number written on a prediction sample is N, that prediction sample may be derived by applying a weight of N/8 to the first prediction sample and a weight of (1 - (N/8)) to the second prediction sample.
[416] In a non-boundary region, the first prediction sample or the second prediction sample may be determined as the prediction sample. Looking at the example of FIG. 32, in the region belonging to the first prediction unit, the first prediction sample derived based on the motion information of the first prediction unit may be determined as the prediction sample. On the other hand, in the region belonging to the second prediction unit, the second prediction sample derived based on the motion information of the second prediction unit may be determined as the prediction sample.
[417] Looking at the example of FIG. 33, in the region where the sum of the x-axis and y-axis coordinates is smaller than the first threshold, the first prediction sample derived based on the motion information of the first prediction unit may be determined as the prediction sample. On the other hand, in the region where the sum of the x-axis and y-axis coordinates is greater than the second threshold, the second prediction sample derived based on the motion information of the second prediction unit may be determined as the prediction sample.
[418] The threshold determining the non-boundary region may be determined based on at least one of the size of the coding block, the shape of the coding block, or the color component. As an example, when the threshold for the luma component is set to N, the threshold for the chroma component may be set to N/2.
[419] The prediction samples included in the boundary region may be derived based on a weighted-sum operation of the first and second prediction samples. In this case, the weights applied to the first and second prediction samples may be determined based on at least one of the position of the prediction sample, the size of the coding block, the shape of the coding block, or the color component.
[420] As an example, as in the example shown in (a) of FIG. 32, the prediction samples whose x-axis and y-axis coordinates are the same may be derived by applying the same weight to the first and second prediction samples. The prediction samples for which the absolute value of the difference between the x-axis and y-axis coordinates is 1 may be derived by setting the weight ratio applied to the first and second prediction samples to (3:1) or (1:3). Also, the prediction samples for which the absolute value of the difference between the x-axis and y-axis coordinates is 2 may be derived by setting the weight ratio applied to the first and second prediction samples to (7:1) or (1:7).
[421] Alternatively, as in the example shown in (b) of FIG. 32, the prediction samples whose x-axis and y-axis coordinates are the same may be derived by applying the same weight to the first and second prediction samples, and the prediction samples for which the absolute value of the difference between the x-axis and y-axis coordinates is 1 may be derived by setting the weight ratio applied to the first and second prediction samples to (7:1) or (1:7).
[422] As an example, as in the example shown in (a) of FIG. 33, the prediction samples for which the sum of the x-axis and y-axis coordinates is 1 less than the width or height of the coding block may be derived by applying the same weight to the first and second prediction samples. The prediction samples for which the sum of the x-axis and y-axis coordinates is equal to, or 2 less than, the width or height of the coding block may be derived by setting the weight ratio applied to the first and second prediction samples to (3:1) or (1:3). The prediction samples for which the sum of the x-axis and y-axis coordinates is 1 greater than, or 3 less than, the width or height of the coding block may be derived by setting the weight ratio applied to the first and second prediction samples to (7:1) or (1:7).
[423] Alternatively, as in the example shown in (b) of FIG. 33, the prediction samples for which the sum of the x-axis and y-axis coordinates is 1 less than the width or height of the coding block may be derived by applying the same weight to the first and second prediction samples. The prediction samples for which the sum of the x-axis and y-axis coordinates is equal to, or 2 less than, the width or height of the coding block may be derived by setting the weight ratio applied to the first and second prediction samples to (7:1) or (1:7).
[424] As another example, the weight may be determined in consideration of the position of the prediction sample or the shape of the coding block. Equations 4 to 6 show examples of deriving the weight when the left-triangular partition type is applied to the coding block. Equation 4 shows an example of deriving the weight applied to the first prediction sample when the coding block is square.
[425] [Equation 4]
w1 = ((x - y) + 4) / 8
[426] In Equation 4, x and y denote the position of the prediction sample. When the coding block is non-square, the weight applied to the first prediction sample may be derived as in Equation 5 or Equation 6 below. Equation 5 represents the case where the width of the coding block is greater than its height, and Equation 6 represents the case where the width of the coding block is smaller than its height.
[427] [Equation 5]
w1 = (((x / whRatio) - y) + 4) / 8
[428] [Equation 6]
w1 = ((x - (y / whRatio)) + 4) / 8
[429] When the right-triangular partition type is applied to the coding block, the weight applied to the first prediction sample may be determined as in Equations 7 to 9. Equation 7 shows an example of deriving the weight applied to the first prediction sample when the coding block is square.
[430] [Equation 7]
w1 = (((CbW - 1) - x - y) + 4) / 8
[431] In Equation 7, CbW denotes the width of the coding block. When the coding block is non-square, the weight applied to the first prediction sample may be derived as in Equation 8 or Equation 9 below. Equation 8 represents the case where the width of the coding block is greater than its height, and Equation 9 represents the case where the width of the coding block is smaller than its height.
[432] [Equation 8]
w1 = (((CbH - 1) - (x / whRatio) - y) + 4) / 8
[433] [Equation 9]
w1 = (((CbW - 1) - x - (y / whRatio)) + 4) / 8
[434] In Equation 8, CbH denotes the height of the coding block.
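The per-position weight derivation of Equations 4 to 6 can be sketched as follows; the clipping of the weight to the range [0, 8] (so that samples far from the diagonal take the full weight of one prediction) and the integer aspect ratio `whRatio` are assumptions, since the source equations are only partially legible.

```python
def w1_left_triangle(x, y, width, height):
    """Weight (in eighths) applied to the first prediction sample for a
    left-triangular partition; sketch of Equations 4-6 with an assumed
    clip to [0, 8]."""
    if width == height:                      # Equation 4
        v = (x - y) + 4
    elif width > height:                     # Equation 5
        wh_ratio = width // height
        v = (x // wh_ratio - y) + 4
    else:                                    # Equation 6
        wh_ratio = height // width
        v = (x - y // wh_ratio) + 4
    return max(0, min(8, v))

# Samples on the diagonal of a square block get equal weights (4/8):
print(w1_left_triangle(3, 3, 8, 8))  # 4
```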
[435] As in the illustrated examples, among the prediction samples in the boundary region, those included in the first prediction unit may be derived by giving a larger weight to the first prediction sample than to the second prediction sample, and those included in the second prediction unit may be derived by giving a larger weight to the second prediction sample than to the first prediction sample.
[436] When diagonal partitioning is applied to a coding block, the coding block may be set so that the combined prediction mode, which combines an intra prediction mode with the merge mode, is not applied.
[437] When the encoding/decoding of a coding block is completed, the motion information of the completed coding block may be stored for the encoding/decoding of the next coding block. The motion information may be stored in units of sub-blocks having a preset size. As an example, a sub-block having the preset size may have a 4x4 size. Alternatively, the size or shape of the sub-block may be determined differently according to the size or shape of the coding block.
[438] When a sub-block belongs to the first prediction unit, the motion information of the first prediction unit may be stored as the motion information of the sub-block. On the other hand, when a sub-block belongs to the second prediction unit, the motion information of the second prediction unit may be stored as the motion information of the sub-block.
[439] When a sub-block straddles the boundary between the first and second prediction units, one of the motion information of the first prediction unit and the motion information of the second prediction unit may be set as the motion information of the sub-block. As an example, the motion information of the first prediction unit may be set as the motion information of the sub-block, or the motion information of the second prediction unit may be set as the motion information of the sub-block.
[440] As another example, when a sub-block straddles the boundary between the first and second prediction units, one of the L0 motion information and the L1 motion information of the sub-block may be derived from the first prediction unit, and the other of the L0 motion information and the L1 motion information of the sub-block may be derived from the second prediction unit. As an example, the L0 motion information of the first prediction unit may be set as the L0 motion information of the sub-block, and the L1 motion information of the second prediction unit may be set as the L1 motion information of the sub-block. However, when the first and second prediction units have only L0 motion information, or only L1 motion information, one of the first and second prediction units may be selected to determine the motion information of the sub-block. Alternatively, the average of the motion vectors of the first and second prediction units may be set as the motion vector of the sub-block.
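The sub-block storage rule just described can be sketched as follows; the dict-based representation of motion information and the fallback choice when combined bidirectional information cannot be formed are illustrative assumptions, not part of the specification.

```python
def store_subblock_motion(region, pu1, pu2):
    """Select the motion information stored for a 4x4 sub-block.
    region: 'pu1', 'pu2' or 'boundary'. pu1/pu2 are dicts that may hold
    'L0' and/or 'L1' motion info. A boundary sub-block takes its L0 info
    from the first prediction unit and its L1 info from the second when
    both exist (one of the options in paragraph [440])."""
    if region == 'pu1':
        return dict(pu1)
    if region == 'pu2':
        return dict(pu2)
    if 'L0' in pu1 and 'L1' in pu2:
        return {'L0': pu1['L0'], 'L1': pu2['L1']}
    return dict(pu1)  # assumed fallback: pick one prediction unit
```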
[441] The motion information of an encoded/decoded coding block may be updated into the motion information table. In this case, the motion information of a coding block to which prediction unit partitioning is applied may be set not to be added to the motion information table.
[442] Alternatively, only the motion information of one of the plurality of prediction units generated by partitioning the coding block may be added to the motion information table. As an example, the motion information of the first prediction unit may be added to the motion information table, while the motion information of the second prediction unit is not. In this case, the prediction unit to be added to the motion information table may be selected based on at least one of the size of the coding block, the shape of the coding block, the size of the prediction unit, the shape of the prediction unit, or whether bidirectional prediction was performed for the prediction unit.
[443] Alternatively, the motion information of each of the plurality of prediction units generated by partitioning the coding block may be added to the motion information table. In this case, the order of addition to the motion information table may be predefined in the encoder and the decoder. As an example, the motion information of the prediction unit containing the top-left sample or the bottom-left corner sample may be added to the motion information table before the motion information of the prediction unit that does not. Alternatively, the order of addition to the motion information table may be determined based on at least one of the merge index, the reference picture index, or the motion vector magnitude of each prediction unit.
[444] Alternatively, motion information combining the motion information of the first prediction unit and the motion information of the second prediction unit may be added to the motion information table. One of the L0 motion information and the L1 motion information of the combined motion information may be derived from the first prediction unit, and the other of the L0 motion information and the L1 motion information may be derived from the second prediction unit.
[445] Alternatively, the motion information to be added to the motion information table may be determined based on whether the reference pictures of the first and second prediction units are the same. As an example, when the reference pictures of the first and second prediction units are different, the motion information of one of the first and second prediction units, or motion information combining the first and second prediction units, may be added to the motion information table. On the other hand, when the reference pictures of the first and second prediction units are the same, the average of the motion vector of the first prediction unit and the motion vector of the second prediction unit may be added to the motion information table.
[446] Alternatively, the motion vector to be added to the motion information table may be determined based on the size of the coding block, the shape of the coding block, or the partition shape of the coding block. As an example, when right-triangular partitioning is applied to the coding block, the motion information of the first prediction unit may be added to the motion information table. On the other hand, when left-triangular partitioning is applied to the coding block, the motion information of the second prediction unit may be added to the motion information table, or motion information combining the motion information of the first prediction unit and the motion information of the second prediction unit may be added to the motion information table.
[447] A separate motion information table for storing the motion information of coding blocks to which prediction unit partitioning is applied may be defined. As an example, the motion information of a coding block to which prediction unit partitioning is applied may be stored in a partition-mode motion information table. The partition-mode motion information table may also be called a triangular motion information table. That is, the motion information of coding blocks to which prediction unit partitioning is not applied may be stored in the general motion information table, and the motion information of coding blocks to which prediction unit partitioning is applied may be stored in the partition-mode motion information table. The above-described embodiments of adding the motion information of a coding block to which prediction unit partitioning is applied to the motion information table may be applied to updating the partition-mode motion information table. As an example, the motion information of the first prediction unit, the motion information of the second prediction unit, motion information combining the motion information of the first and second prediction units, and motion information averaging the motion vectors of the first and second prediction units may be added to the partition-mode motion information table.
[448] When prediction mode partitioning is not applied to a coding block, a merge candidate may be derived using the general motion information table. On the other hand, when prediction mode partitioning is applied to the coding block, a merge candidate may be derived using the prediction-mode motion information table.
[449]
[450] When the merge candidate of the current block is selected, the motion vector of the selected merge candidate may be set as an initial motion vector, and motion-compensated prediction for the current block may be performed using a motion vector derived by adding or subtracting an offset vector to or from the initial motion vector. Deriving a new motion vector by adding or subtracting an offset vector to or from the motion vector of a merge candidate may be defined as the merge offset vector coding method.
[451] Information indicating whether the merge offset coding method is used may be signaled through the bitstream. The information may be a 1-bit flag merge_offset_vector_flag. As an example, a merge_offset_vector_flag value of 1 indicates that the merge offset vector coding method is applied to the current block. When the merge offset vector coding method is applied to the current block, the motion vector of the current block may be derived by adding or subtracting an offset vector to or from the motion vector of a merge candidate. A merge_offset_vector_flag value of 0 indicates that the merge offset vector coding method is not applied to the current block. When the merge offset coding method is not applied, the motion vector of the merge candidate may be set as the motion vector of the current block.
[452] The flag may be signaled only when the value of the skip flag indicating whether the skip mode is applied is true, or when the value of the merge flag indicating whether the merge mode is applied is true. As an example, when the value of skip_flag indicating that the skip mode is applied to the current block is 1, or when the value of merge_flag indicating that the merge mode is applied to the current block is 1, merge_offset_vector_flag may be coded and signaled.
[453] When it is determined that the merge offset coding method is applied to the current block, at least one of information specifying one of the merge candidates included in the merge candidate list, information indicating the magnitude of the offset vector, or information indicating the direction of the offset vector may be additionally signaled.
[454] Information for determining the maximum number of merge candidates that the merge candidate list can include may be signaled through the bitstream. As an example, the maximum number of merge candidates that the merge candidate list can include may be set to a natural number of 6 or less.
[455] When it is determined that the merge offset coding method is applied to the current block, only a preset maximum number of merge candidates may be set as initial motion vector candidates of the current block. That is, the number of merge candidates available to the current block may be adaptively determined depending on whether the merge offset coding method is applied. As an example, when the value of merge_offset_vector_flag is set to 0, the maximum number of merge candidates available to the current block may be set to M, whereas when the value of merge_offset_vector_flag is set to 1, the maximum number of merge candidates available to the current block may be set to N. Here, M denotes the maximum number of merge candidates that the merge candidate list can include, and N denotes a natural number equal to or smaller than M.
[456] As an example, when M is 6 and N is 2, the two merge candidates with the smallest indices among the merge candidates included in the merge candidate list may be set as available for the current block. Accordingly, the motion vector of the merge candidate with index value 0, or the motion vector of the merge candidate with index value 1, may be set as the initial motion vector of the current block. If M and N are equal (e.g., M and N are both 2), all merge candidates included in the merge candidate list may be set as available for the current block.
[457] Alternatively, whether a neighboring block can be used as a merge candidate may be determined based on whether the merge offset vector coding method is applied to the current block. As an example, when the value of merge_offset_vector_flag is 1, at least one of the neighboring block adjacent to the top-right corner of the current block, the neighboring block adjacent to the bottom-left corner, or the neighboring block adjacent to the top-left corner may be set as unavailable as a merge candidate. Accordingly, when the merge offset vector coding method is applied to the current block, the motion vector of at least one of the neighboring block adjacent to the top-right corner of the current block, the neighboring block adjacent to the bottom-left corner, or the neighboring block adjacent to the top-left corner cannot be set as the initial motion vector. Alternatively, when the value of merge_offset_vector_flag is 1, the temporal neighboring block of the current block may be set as unavailable as a merge candidate.
[458] When the merge offset vector coding method is applied to the current block, at least one of the pairwise merge candidate or the zero merge candidate may be set not to be used. Accordingly, when the value of merge_offset_vector_flag is 1, at least one of the pairwise merge candidate or the zero merge candidate may not be added to the merge candidate list, even when the number of merge candidates included in the merge candidate list is smaller than the maximum number.
[459] The motion vector of a merge candidate may be set as the initial motion vector of the current block. In this case, when several merge candidates are available to the current block, information specifying one of the several merge candidates may be signaled through the bitstream. As an example, when the maximum number of merge candidates that the merge candidate list can include is greater than 1, information indicating one of the several merge candidates may be signaled through the bitstream. That is, under the merge offset coding method, a merge candidate may be specified by the information specifying one of the several merge candidates. The initial motion vector of the current block may be set as the motion vector of the merge candidate specified by that information.
[460] On the other hand, when only one merge candidate is available to the current block, the signaling of the information for specifying a merge candidate may be omitted. As an example, when the maximum number of merge candidates that the merge candidate list can include is not greater than 1, the signaling of the information for specifying a merge candidate may be omitted. That is, under the merge offset coding method, when one merge candidate is included in the merge candidate list, the coding of the information for specifying a merge candidate may be omitted, and the initial motion vector may be determined based on the merge candidate included in the merge candidate list. The motion vector of that merge candidate may be set as the initial motion vector of the current block.
[461] As another example, after determining the merge candidate of the current block, whether to apply the merge offset vector coding method to the current block may be determined. As an example, when the maximum number of merge candidates that the merge candidate list can include is greater than 1, information merge_idx for specifying one of the merge candidates may be signaled. After selecting a merge candidate based on merge_idx, merge_offset_vector_flag indicating whether the merge offset vector coding method is applied to the current block may be decoded. Table 3 shows a syntax table according to the above-described embodiment.
[462] [Table 3] (syntax table; content not recoverable from the source scan)
[463] As another example, after determining the merge candidate of the current block, whether to apply the merge offset vector coding method to the current block may be determined only when the index of the determined merge candidate is smaller than the maximum number of merge candidates available when the merge offset vector coding method is applied. As an example, only when the value of the index information merge_idx is smaller than N, the flag merge_offset_vector_flag indicating whether to apply the merge offset vector coding method to the current block may be coded and signaled. When the value of merge_idx is equal to or greater than N, the coding of merge_offset_vector_flag may be omitted. When the coding of merge_offset_vector_flag is omitted, it may be determined that the merge offset vector coding method is not applied to the current block. Alternatively, after determining the merge candidate of the current block, whether to apply the merge offset vector coding method to the current block may be determined in consideration of whether the determined merge candidate has bidirectional motion information or unidirectional motion information. As an example, merge_offset_vector_flag indicating whether to apply the merge offset vector coding method to the current block may be coded and signaled only when the value of the index information merge_idx is smaller than N and the merge candidate selected by the index information has bidirectional motion information. Alternatively, merge_offset_vector_flag indicating whether to apply the merge offset vector coding method to the current block may be coded and signaled only when the value of the index information merge_idx is smaller than N and the merge candidate selected by the index information has unidirectional motion information.
[464] Alternatively, whether to apply the merge offset vector coding method may be determined based on at least one of the size of the current block, its shape, or whether the current block touches the boundary of a coding tree unit. When at least one of the size of the current block, its shape, or whether it touches the boundary of a coding tree unit does not satisfy a preset condition, the coding of merge_offset_vector_flag indicating whether to apply the merge offset vector coding method to the current block may be omitted.
[465] When a merge candidate is selected, the motion vector of the merge candidate may be set as the initial motion vector of the current block. Then, the offset vector may be determined by decoding information indicating the magnitude of the offset vector and information indicating the direction of the offset vector. The offset vector may have a horizontal component or a vertical component.
[466] The information indicating the magnitude of the offset vector may be index information indicating one of vector magnitude candidates. As an example, index information distance_idx indicating one of the vector magnitude candidates may be signaled through the bitstream. Table 6 shows the binarization of the index information distance_idx and the values of the variable DistFromMergeMV for determining the magnitude of the offset vector according to distance_idx.
[467] [Table 6] (binarization of distance_idx and the corresponding DistFromMergeMV values; table content not recoverable from the source scan)
[468] The magnitude of the offset vector may be derived by dividing the variable DistFromMergeMV by a preset value. Equation 10 shows an example of determining the magnitude of the offset vector.
[469] [Equation 10]
abs(offsetMV) = DistFromMergeMV >> 2
[470] According to Equation 10, the value obtained by dividing the variable DistFromMergeMV by 4, that is, the value obtained by bit-shifting the variable DistFromMergeMV to the right by 2, may be set as the magnitude of the offset vector.
[471] It is also possible to use a larger or smaller number of vector magnitude candidates than in the example shown in Table 6, or to set the range of the motion vector offset magnitude candidates differently from the example shown in Table 6. As an example, the magnitude of the horizontal or vertical component of the offset vector may be set not to exceed a distance of 2 samples. Table 7 shows the binarization of the index information distance_idx and the values of the variable DistFromMergeMV for determining the magnitude of the offset vector.
[472] [Table 7] (binarization of distance_idx and the corresponding DistFromMergeMV values; table content not recoverable from the source scan)
[473] Alternatively, the range of the motion vector offset magnitude candidates may be set differently based on the motion vector precision. As an example, when the motion vector precision for the current block is fractional-pel, the values of the variable DistFromMergeMV corresponding to the values of the index information distance_idx may be set to 1, 2, 4, 8, 16 and so on. Here, fractional-pel includes at least one of 1/16-pel, octo-pel, quarter-pel or half-pel. On the other hand, when the motion vector precision for the current block is integer-pel, the values of the variable DistFromMergeMV corresponding to the values of the index information distance_idx may be set to 4, 8, 16, 32, 64 and so on. That is, the table referenced to determine the variable DistFromMergeMV may be set differently according to the motion vector precision for the current block. As an example, when the motion vector precision of the current block or the merge candidate is quarter-pel, the variable DistFromMergeMV indicated by distance_idx may be derived using Table 6. On the other hand, when the motion vector precision of the current block or the merge candidate is integer-pel, a value obtained by multiplying by N (e.g., by 4) the value of the variable DistFromMergeMV indicated by distance_idx in Table 6 may be derived as the value of the variable DistFromMergeMV.
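The derivation of the offset magnitude can be sketched as below. The fractional-pel table values follow the enumeration in paragraph [473]; the exact table contents and the unit of the resulting magnitude are assumptions, since Table 6 is not legible in the source.

```python
# Assumed fractional-pel table (paragraph [473] lists 1, 2, 4, 8, 16, ...).
DIST_TABLE_FRACTIONAL = [1, 2, 4, 8, 16, 32, 64, 128]

def dist_from_merge_mv(distance_idx, integer_pel=False):
    """DistFromMergeMV from distance_idx; integer-pel precision scales the
    fractional-pel table value by N = 4 (paragraph [473])."""
    d = DIST_TABLE_FRACTIONAL[distance_idx]
    return d * 4 if integer_pel else d

def offset_magnitude(dist_from_merge_mv_value):
    # Equation 10: abs(offsetMV) = DistFromMergeMV >> 2
    return dist_from_merge_mv_value >> 2
```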
[474] Information for determining the motion vector precision may be signaled through the bitstream. As an example, the information may be signaled at the sequence, picture, slice or block level. Accordingly, the range of the vector magnitude candidates may be set differently by the motion-vector-precision-related information signaled through the bitstream. Alternatively, the motion vector precision may be determined based on the merge candidate of the current block. As an example, the motion vector precision of the current block may be set equal to the motion vector precision of its merge candidate.
[475] Alternatively, information for determining the search range of the offset vector may be signaled through the bitstream. At least one of the number of vector magnitude candidates, or the minimum or maximum value among the vector magnitude candidates, may be determined based on the search range. As an example, a flag merge_offset_extend_range_flag for determining the search range of the offset vector may be signaled through the bitstream. The information may be signaled through the sequence header, the picture header or the slice header.
[476] As an example, when the value of merge_offset_extend_range_flag is 0, the magnitude of the offset vector may be set not to exceed 2. Accordingly, the maximum value of DistFromMergeMV may be set to 8. On the other hand, when the value of merge_offset_extend_range_flag is 1, the magnitude of the offset vector may be set not to exceed a distance of 32 samples. Accordingly, the maximum value of DistFromMergeMV may be set to 128.
[477] The magnitude of the offset vector may be determined using a flag indicating whether the magnitude of the offset vector is greater than a threshold. As an example, a flag distance_flag indicating whether the magnitude of the offset vector is greater than a threshold may be signaled through the bitstream. The threshold may be 1, 2, 4, 8 or 16. As an example, a distance_flag of 1 indicates that the magnitude of the offset vector is greater than 4. On the other hand, a distance_flag of 0 indicates that the magnitude of the offset vector is 4 or less.
[478] When the magnitude of the offset vector is greater than the threshold, the difference between the magnitude of the offset vector and the threshold may be derived using the index information distance_idx. Alternatively, when the magnitude of the offset vector is equal to or less than the threshold, the magnitude of the offset vector may be determined using distance_idx. Table 8 is a syntax table showing the coding aspect of distance_flag and distance_idx.
[479] [Table 8] (syntax table; content not recoverable from the source scan)
[480] Equation 11 shows an example of deriving the variable DistFromMergeMV for determining the magnitude of the offset vector using distance_flag and distance_idx.
[481] [Equation 11]
DistFromMergeMV = N * distance_flag + (1 << distance_idx)
[482] In Equation 11, the value of distance_flag may be set to 1 or 0. The value of distance_idx may be set to 1, 2, 4, 8, 16, 32, 64, 128 and so on. N denotes a coefficient determined by the threshold. As an example, when the threshold is 4, N may be set to 16.
[483] The information indicating the direction of the offset vector may be index information indicating one of vector direction candidates. As an example, index information direction_idx indicating one of the vector direction candidates may be signaled through the bitstream. Table 9 shows the binarization of the index information direction_idx and the direction of the offset vector according to direction_idx.
[484] [Table 9] (binarization of direction_idx and the offset vector direction according to direction_idx; table content not recoverable from the source scan)
[485] In Table 9, sign[0] indicates the horizontal direction and sign[1] indicates the vertical direction. A value of +1 indicates that the x-component or y-component of the offset vector is positive, and a value of -1 indicates that the x-component or y-component of the offset vector is negative. Equation 12 shows an example of determining the offset vector based on the magnitude and direction of the offset vector.
[486] [Equation 12]
offsetMV[0] = abs(offsetMV) * sign[0]
offsetMV[1] = abs(offsetMV) * sign[1]
[487] In Equation 12, offsetMV[0] denotes the horizontal component of the offset vector, and offsetMV[1] denotes the vertical component of the offset vector.
[488] FIG. 34 is a diagram showing the offset vectors according to the value of distance_idx, which indicates the magnitude of the offset vector, and the value of direction_idx, which indicates the direction of the offset vector.
[489] As in the example shown in FIG. 34, the magnitude and direction of the offset vector may be determined according to the values of distance_idx and direction_idx. The maximum magnitude of the offset vector may be set not to exceed a threshold. Here, the threshold may have a value predefined in the encoder and the decoder. As an example, the threshold may be a distance of 32 samples. Alternatively, the threshold may be determined according to the magnitude of the initial motion vector. As an example, the threshold for the horizontal direction may be set based on the magnitude of the horizontal component of the initial motion vector, and the threshold for the vertical direction may be set based on the magnitude of the vertical component of the initial motion vector.
[490] When the merge candidate has bidirectional motion information, the L0 motion vector of the merge candidate may be set as the L0 initial motion vector of the current block, and the L1 motion vector of the merge candidate may be set as the L1 initial motion vector of the current block. In this case, the L0 offset vector and the L1 offset vector may be determined in consideration of the output order difference between the L0 reference picture of the merge candidate and the current picture (hereinafter, the L0 difference) and the output order difference between the L1 reference picture of the merge candidate and the current picture (hereinafter, the L1 difference).
[491] When the signs of the L0 difference and the L1 difference are the same, the L0 offset vector and the L1 offset vector may be set equal. On the other hand, when the signs of the L0 difference and the L1 difference are different, the L1 offset vector may be set in the direction opposite to the L0 offset vector.
[492] The magnitude of the L0 offset vector and the magnitude of the L1 offset vector may be set equal. Alternatively, the magnitude of the L1 offset vector may be determined by scaling the L0 offset vector based on the L0 difference and the L1 difference.
[493] As an example, Equation 13 shows the L0 offset vector and the L1 offset vector when the signs of the L0 difference and the L1 difference are the same.
[494] [Equation 13]
offsetMVL1[0] = offsetMVL0[0]
offsetMVL1[1] = offsetMVL0[1]
[495] In Equation 13, offsetMVL0[0] denotes the horizontal component of the L0 offset vector and offsetMVL0[1] denotes the vertical component of the L0 offset vector. offsetMVL1[0] denotes the horizontal component of the L1 offset vector and offsetMVL1[1] denotes the vertical component of the L1 offset vector.
[496] Equation 14 shows the L0 offset vector and the L1 offset vector when the signs of the L0 difference and the L1 difference are different.
[497] [Equation 14]
offsetMVL1[0] = -offsetMVL0[0]
offsetMVL1[1] = -offsetMVL0[1]
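The L0/L1 mirroring rule of Equations 13 and 14 can be sketched as follows; the function name and tuple representation are illustrative.

```python
def l1_offset(offset_l0, poc_diff_l0, poc_diff_l1):
    """Equations 13/14: when the POC differences toward the L0 and L1
    reference pictures have the same sign, the L1 offset equals the L0
    offset; otherwise it points in the opposite direction."""
    same_sign = (poc_diff_l0 >= 0) == (poc_diff_l1 >= 0)
    if same_sign:
        return offset_l0
    return (-offset_l0[0], -offset_l0[1])
```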
[498] More than four vector direction candidates may also be defined. Tables 10 and 11 show examples in which eight vector direction candidates are defined.
[499] [Table 10] (eight vector direction candidates; table content not recoverable from the source scan)
[500] [Table 11] (eight vector direction candidates; table content not recoverable from the source scan)
[501] In Tables 10 and 11, an absolute value of sign[0] and sign[1] greater than 0 indicates that the offset vector is diagonal. When Table 10 is used, the magnitudes of the x-axis and y-axis components of a diagonal offset vector are set to abs(offsetMV), whereas when Table 11 is used, the magnitudes of the x-axis and y-axis components of a diagonal offset vector may be set to abs(offsetMV/2).
[502] FIG. 35 is a diagram showing the offset vectors according to the value of distance_idx, which indicates the magnitude of the offset vector, and the value of direction_idx, which indicates the direction of the offset vector.
[503] (a) of FIG. 35 is an example of the case where Table 10 is applied, and (b) of FIG. 35 is an example of the case where Table 11 is applied.
[504] Information for determining at least one of the number or the magnitudes of the vector direction candidates may be signaled through the bitstream. As an example, a flag for determining the vector direction candidates may be signaled through the bitstream. The flag may be signaled at the sequence, picture or slice level. As an example, when the value of the flag is 0, the four vector direction candidates illustrated in Table 9 may be used. On the other hand, when the value of the flag is 1, the eight vector direction candidates illustrated in Table 10 or Table 11 may be used.
[505] Alternatively, at least one of the number or the magnitudes of the vector direction candidates may be determined based on the magnitude of the offset vector. As an example, when the value of the variable DistFromMergeMV for determining the magnitude of the offset vector is equal to or smaller than a threshold, the eight vector direction candidates illustrated in Table 10 or Table 11 may be used. On the other hand, when the value of the variable DistFromMergeMV is greater than the threshold, the four vector direction candidates illustrated in Table 9 may be used.
[506] Alternatively, at least one of the number or the magnitudes of the vector direction candidates may be determined based on the value MVx of the x-component and the value MVy of the y-component of the initial motion vector. As an example, when the difference between MVx and MVy, or the absolute value of that difference, is equal to or smaller than a threshold, the eight vector direction candidates illustrated in Table 10 or Table 11 may be used. On the other hand, when the difference between MVx and MVy, or the absolute value of that difference, is greater than the threshold, the four vector direction candidates illustrated in Table 9 may be used.
[507] The motion vector of the current block may be derived by adding the offset vector to the initial motion vector. Equation 15 shows an example of determining the motion vector of the current block.
[508] [Equation 15]
mvL0[0] = mergeMVL0[0] + offsetMVL0[0]
mvL0[1] = mergeMVL0[1] + offsetMVL0[1]
mvL1[0] = mergeMVL1[0] + offsetMVL1[0]
mvL1[1] = mergeMVL1[1] + offsetMVL1[1]
[509] In Equation 15, mvL0 denotes the L0 motion vector of the current block and mvL1 denotes the L1 motion vector of the current block. mergeMVL0 denotes the L0 initial motion vector of the current block (that is, the L0 motion vector of the merge candidate), and mergeMVL1 denotes the L1 initial motion vector of the current block. [0] denotes the horizontal component of a motion vector, and [1] denotes the vertical component of a motion vector.
[510]
[511] An affine seed vector or a sub-block motion vector (a sub-block motion vector or affine sub-block vector) derived based on the affine merge mode or the affine motion vector prediction mode may be updated based on an offset vector. Specifically, an updated affine seed vector or an updated sub-block motion vector may be derived by adding or subtracting an offset to or from the affine seed vector or the sub-block motion vector. Refining an affine seed vector or a sub-block motion vector under the affine motion model may be called the affine merge offset coding method.
[512] When the affine motion model is applied to a coding block and the value of the flag merge_offset_vector_flag indicating whether the merge offset coding method is used is 1, the affine merge offset coding method may be applied to the coding block.
[513] When it is determined that the affine merge offset coding method is applied, a merge index (merge_idx) for determining the initial motion vector of the current block, index information distance_idx for determining the magnitude of the offset vector, and index information direction_idx for determining the direction of the offset vector may be signaled. The index information distance_idx for determining the magnitude indicates one of a plurality of magnitude candidates, and the index information direction_idx for determining the direction indicates one of a plurality of direction candidates. Based on distance_idx and direction_idx, the offset vector may be determined.
[514] An updated affine seed vector may be derived by adding or subtracting the offset vector to or from an affine seed vector. In this case, the sign of the offset vector applied to each affine seed vector may be determined according to the direction of the reference picture. As an example, when bidirectional prediction is applied to the coding block and the temporal direction of the L0 reference picture and the temporal direction of the L1 reference picture are the same, the offset vector may be added to each affine seed vector as in Equation 16 below. Here, the temporal direction may be determined based on the output order (POC) difference between the current picture and the reference picture. As an example, when the output order difference between the current picture and the L0 reference picture and the output order difference between the current picture and the L1 reference picture are both negative, or both positive, it may be determined that the temporal direction of the L0 reference picture and the temporal direction of the L1 reference picture are the same.
[515] [Equation 16]
seedVectorL0[i][0] = seedVectorL0[i][0] + offsetMV[0]
seedVectorL0[i][1] = seedVectorL0[i][1] + offsetMV[1]
seedVectorL1[i][0] = seedVectorL1[i][0] + offsetMV[0]
seedVectorL1[i][1] = seedVectorL1[i][1] + offsetMV[1]
[516] On the other hand, when the temporal directions of the L0 reference picture and the L1 reference picture are different, an updated affine seed vector may be derived by adding or subtracting the offset vector to or from each affine seed vector as in Equation 17 below. As an example, when the output order difference between the current picture and the L0 reference picture is negative but the output order difference between the current picture and the L1 reference picture is positive, or when the output order difference between the current picture and the L0 reference picture is positive but the output order difference between the current picture and the L1 reference picture is negative, it may be determined that the temporal direction of the L0 reference picture and the temporal direction of the L1 reference picture are different.
[517] [Equation 17]
seedVectorL0[i][0] = seedVectorL0[i][0] + offsetMV[0]
seedVectorL0[i][1] = seedVectorL0[i][1] + offsetMV[1]
seedVectorL1[i][0] = seedVectorL1[i][0] - offsetMV[0]
seedVectorL1[i][1] = seedVectorL1[i][1] - offsetMV[1]
[518] Equations 16 and 17 illustrate the case where the same offset vector is applied to all affine seed vectors, but the present invention is not limited thereto. It is also possible to determine the offset vector of each affine seed vector individually.
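The seed-vector update of Equations 16 and 17 can be sketched as below; the representation of seed vectors as (x, y) tuples and the function name are illustrative, not part of the specification.

```python
def update_affine_seed_vectors(seeds_l0, seeds_l1, offset, same_temporal_direction):
    """Equations 16/17: add the offset vector to every L0 seed vector; add it
    to the L1 seed vectors when both reference pictures lie in the same
    temporal direction, subtract it otherwise."""
    sign = 1 if same_temporal_direction else -1
    upd_l0 = [(vx + offset[0], vy + offset[1]) for vx, vy in seeds_l0]
    upd_l1 = [(vx + sign * offset[0], vy + sign * offset[1]) for vx, vy in seeds_l1]
    return upd_l0, upd_l1
```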
[519] Alternatively, an offset vector may be set for each sub-block. The motion vector of a sub-block may be updated using the offset vector of that sub-block.
[520] The range of the offset vector magnitude candidates may be determined differently according to the motion vector precision of the current block or a neighboring block. That is, at least one of the number, the minimum value or the maximum value of the offset vector magnitude candidates may differ according to the motion vector precision of the current block or a neighboring block. As an example, when the affine motion model is applied to the current block and the motion vector precision of the current block is 1/4-pel, the variable DistFromMergeMV may be determined by distance_idx as one of 1, 2, 4, 8, 16, 32, 64 and 128. On the other hand, when the motion vector precision of the current block is integer-pel, the variable DistFromMergeMV may be determined by distance_idx as one of 4, 8, 16, 32, 64, 128, 256 and 512.
[521] As another example, information for specifying one of a plurality of offset vector magnitude candidate sets may be signaled through the bitstream. At least one of the number or the kinds of offset vector magnitude candidates included in each offset vector magnitude candidate set may differ. As an example, when the first offset vector magnitude candidate set is selected, the variable DistFromMergeMV may be determined as one of {1, 2, 4, 8, 16, 32, 64, 128}, and when the second offset vector magnitude candidate set is selected, the variable DistFromMergeMV may be determined as one of {4, 8, 16, 32, 64, 128, 256, 512}.
[522] Index information DistMV_idx specifying one of the plurality of offset vector magnitude candidate sets may be signaled through the bitstream. As an example, a DistMV_idx of 0 indicates that the first offset vector magnitude candidate set is selected, and a DistMV_idx of 1 indicates that the second offset vector magnitude candidate set is selected.
[523]
[524] An offset vector may also be set for each sub-block or each sample. That is, an offset vector (or difference vector), or an offset vector array (or difference vector array), for sub-blocks or samples may be defined as offset data.
[525] For example, when a sub-block motion vector is derived based on the affine seed vectors, motion compensation for the sub-block may be performed using the derived sub-block motion vector. In this case, a per-sub-block or per-sample offset vector may additionally be used when performing the motion compensation.
[526] The offset vector for a sub-block may be derived using an offset vector candidate. The sub-block motion vector may be updated based on the offset vector, and motion compensation for the sub-block may be performed based on the updated sub-block motion vector.
[527] An offset vector may also be derived for each prediction sample within a sub-block. Specifically, the offset vector for each prediction sample may be derived based on the position of that prediction sample within the sub-block. Here, the position of a prediction sample may be determined relative to the top-left sample of the sub-block.
[528] The x-component of the offset vector for a prediction sample may be derived based on a value obtained by multiplying the x-axis coordinate of the prediction sample by the difference between the x-components of the second and first affine seed vectors, and a value obtained by multiplying the y-axis coordinate of the prediction sample by the difference between the x-components of the third and first affine seed vectors. Also, the y-component of the offset vector for a prediction sample may be derived based on a value obtained by multiplying the x-axis coordinate of the prediction sample by the difference between the y-components of the second and first affine seed vectors, and a value obtained by multiplying the y-axis coordinate of the prediction sample by the difference between the y-components of the third and first affine seed vectors.
[529] When the 4-parameter motion model is applied to the current block, the x-component of the offset vector may be derived based on a value obtained by multiplying the x-axis coordinate of the prediction sample by the difference between the x-components of the second and first affine seed vectors, and a value obtained by multiplying the y-axis coordinate of the prediction sample by the difference between the y-components of the first and second affine seed vectors.
[530] As described above, the offset vectors of the prediction samples within a sub-block may each have different values. However, the offset vector array for the prediction samples may be applied in common to all sub-blocks. That is, the offset vector array applied to a first sub-block and the offset vector array applied to a second sub-block may be the same.
[531] Alternatively, the per-sample offset vector array may be derived with further consideration of the position of the sub-block. In this case, different offset vector arrays may be applied between sub-blocks.
[532] After performing motion compensation for a sub-block based on the sub-block motion vector, each prediction sample may be updated based on its offset vector. The update of a prediction sample may be performed based on the offset vector of the prediction sample and the gradient for the prediction sample.
[533] The gradient for a prediction sample may be derived based on difference values of prediction samples. The gradient for a first prediction sample may be derived based on a difference between prediction samples belonging to the same line as the first prediction sample, or a difference between prediction samples belonging to a line neighboring the first prediction sample.
[534] As an example, the gradient for a first prediction sample may be derived as the difference between the first prediction sample and another prediction sample belonging to the same line as the first prediction sample. Specifically, the horizontal gradient of the first prediction sample may be derived as the difference between the first prediction sample and a second prediction sample belonging to the same row, and the vertical gradient of the first prediction sample may be derived as the difference between the first prediction sample and a third prediction sample belonging to the same column. Here, the second and third prediction samples may neighbor the first prediction sample. As an example, the second prediction sample may be located to the left or right of the first prediction sample, and the third prediction sample may be located above or below the first prediction sample. Alternatively, the second and third prediction samples may be spaced apart from the first prediction sample by a predetermined distance in the x-axis and/or y-axis direction. Here, the predetermined distance may be a natural number such as 1, 2 or 3.
[535] Alternatively, the difference between prediction samples belonging to a line adjacent to the first prediction sample may be set as the gradient for the first prediction sample. As an example, the horizontal gradient for the first prediction sample may be derived as the difference between prediction samples belonging to a row adjacent to the first prediction sample. Here, the row adjacent to the first prediction sample may mean the row above or the row below the first prediction sample. At least one of the prediction samples used to derive the horizontal gradient of the first prediction sample may be adjacent to the first prediction sample, while another may not be. As an example, the horizontal gradient for the first prediction sample may be derived based on the difference between a second prediction sample located above or below the first prediction sample and a third prediction sample spaced apart from the second prediction sample by a predetermined distance in the x-axis direction. The vertical gradient for the first prediction sample may be derived as the difference between prediction samples belonging to a column adjacent to the first prediction sample. Here, the column adjacent to the first prediction sample may mean the column to its left or to its right. At least one of the prediction samples used to derive the vertical gradient of the first prediction sample may be adjacent to the first prediction sample, while another may not be. As an example, the vertical gradient for the first prediction sample may be derived based on the difference between a fourth prediction sample located to the left or right of the first prediction sample and a fifth prediction sample spaced apart from the fourth prediction sample by a predetermined distance in the y-axis direction. Here, the predetermined distance may be a natural number such as 1, 2 or 3.
[536] Equation 18 shows an example of deriving the horizontal gradient gradientH and the vertical gradient gradientV for the first prediction sample.
[537] [Equation 18]
gradientH[x][y] = (predSample[x+2][y+1] - predSample[x][y+1]) >> shift1
gradientV[x][y] = (predSample[x+1][y+2] - predSample[x+1][y]) >> shift1
[538] In Equation 18, predSample denotes a prediction sample and [x][y] denote the x-axis and y-axis coordinates. shift1 denotes a shifting parameter. The shifting parameter may have a value predefined in the encoder and the decoder. Alternatively, the shifting parameter may be determined adaptively based on at least one of the size, shape or aspect ratio of the current block, or the affine motion model.
[539] When the gradient for a prediction sample is derived, the offset prediction value for the prediction sample may be derived using the gradient and the offset vector. The offset prediction value may be derived based on a multiplication operation of the gradient and the offset vector. As an example, Equation 19 shows an example of deriving the offset prediction value OffsetPred.
[540] [Equation 19]
OffsetPred[x][y] = gradientH[x][y] * offsetMV[x][y][0] + gradientV[x][y] * offsetMV[x][y][1]
[541] When the offset prediction value is derived, the prediction sample may be updated by adding the offset prediction value to the prediction sample. Equation 20 shows an example of updating the prediction sample.
[542] [Equation 20]
predSample[x][y] = predSample[x][y] + OffsetPred[x][y]
[543] As another example, the prediction sample may be updated by adding the offset prediction value to a neighboring prediction sample. Here, the neighboring prediction sample may include at least one of the sample located to the right of the prediction sample, the sample located below it, or the sample located to its bottom-right. As an example, Equation 21 shows an example of updating the prediction sample using a neighboring prediction sample.
[544] [Equation 21]
predSample[x][y] = predSample[x+1][y+1] + OffsetPred[x][y]
[545] Information indicating whether an offset vector is to be used when performing motion compensation for the current block may be signaled through the bitstream. The information may be a 1-bit flag.
[546] Alternatively, whether to use the offset vector may be determined based on the size of the current block, the shape of the current block, or whether the affine seed vectors are identical. As an example, when the 4-parameter affine motion model is applied to the current block, motion compensation may be performed using the offset vector when the first and second affine seed vectors are identical to each other. Alternatively, when the 6-parameter affine motion model is applied to the current block, motion compensation may be performed using the offset vector when the first, second and third affine seed vectors are all identical, or when two of the first, second and third affine seed vectors are identical.
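The gradient derivation and per-sample refinement of Equations 18 to 20 can be sketched as below. The indexing convention `pred[x][y]` mirrors predSample in the text; the assumption that Equations 19 and 20 refine the sample at the centre of the gradient window (index [x+1][y+1]) is an interpretation, since the equations use slightly shifted indices.

```python
def gradients(pred, x, y, shift1=1):
    """Equation 18: horizontal and vertical gradients from sample differences."""
    gh = (pred[x + 2][y + 1] - pred[x][y + 1]) >> shift1
    gv = (pred[x + 1][y + 2] - pred[x + 1][y]) >> shift1
    return gh, gv

def refine_sample(pred, x, y, offset_mv, shift1=1):
    """Equations 19/20: offset prediction value from gradient * offset vector,
    added to the sample at the centre of the gradient window (assumption)."""
    gh, gv = gradients(pred, x, y, shift1)
    offset_pred = gh * offset_mv[0] + gv * offset_mv[1]
    return pred[x + 1][y + 1] + offset_pred

# Tiny example: pred[x][y] = 10*x + y on a 4x4 grid.
pred = [[10 * x + y for y in range(4)] for x in range(4)]
print(gradients(pred, 0, 0))          # (10, 1)
print(refine_sample(pred, 0, 0, (2, 3)))  # 34
```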
[547]
[548] A single prediction mode may be applied to the current block multiple times, or a plurality of prediction modes may be applied in combination. Such a prediction method using homogeneous or heterogeneous prediction modes may be called a combined prediction mode (or multi-hypothesis prediction mode).
[549] The combined prediction mode may include at least one of a mode combining a merge mode with a merge mode, a mode combining inter prediction with intra prediction, a mode combining a merge mode with a motion vector prediction mode, a mode combining a motion vector prediction mode with a motion vector prediction mode, or a mode combining a merge mode with intra prediction.
[550] In the combined prediction mode, a first prediction block may be generated based on a first prediction mode, and a second prediction block may be generated based on a second prediction mode. Then, a third prediction block may be generated based on a weighted-sum operation of the first and second prediction blocks. The third prediction block may be set as the final prediction block of the current block.
[551]
[552] In the above-described embodiments, various inter prediction techniques utilizing motion information derived from a merge candidate under the merge mode have been described. Specifically, the following merge-mode-based inter prediction techniques have been introduced through this specification.
[553] i) Regular merge mode: a method of performing motion compensation based on a motion vector derived from a merge candidate.
[554] ii) Merge offset coding mode: a method of modifying a motion vector derived from a merge candidate based on an offset vector and performing motion compensation based on the modified motion vector.
[555] iii) Sub-block motion compensation mode: a method of deriving sub-block motion vectors based on a merge candidate and performing motion compensation in units of sub-blocks.
[556] iv) Prediction-unit-partitioning-based coding mode: a method of partitioning the current block into a plurality of prediction units and deriving the motion information of each prediction unit from different merge candidates.
[557] v) Combined prediction mode: a method of combining intra prediction and inter prediction (e.g., merge mode).
[558] Information indicating that a merge-mode-based inter prediction method is allowed may be signaled through the bitstream. As an example, the merge flag merge_flag indicates that at least one piece of motion information of the current block is derived from a merge candidate. As an example, a syntax merge_flag value of 1 indicates that one of the above-described merge-mode-based inter prediction methods is applied to the current block. That is, when the value of the syntax merge_flag is 1, one of the regular merge mode, the merge offset coding mode, the sub-block motion compensation mode, the prediction-unit-partitioning-based coding mode or the combined prediction mode may be applied to the current block. On the other hand, a syntax merge_flag value of 0 indicates that the above-described merge-mode-based inter prediction methods are not applied to the current block.
[559] Even when the value of the syntax merge_flag is 1, since various merge-mode-based inter prediction methods exist, additional information is required to determine the inter prediction method applied to the current block. As an example, at least one of the syntax merge_subblock_flag for determining whether the sub-block motion compensation mode is applied, the syntax merge_offset_vector_flag or mmvd_flag for determining whether the merge offset coding mode is applied, triangle_partition_flag or merge_triangle_flag indicating whether the prediction-unit-partitioning-based coding mode is applied, or ciip_flag indicating whether the combined prediction mode is applied, may be additionally signaled. Alternatively, whether a particular merge-mode-based inter prediction method is applied to the current block may be determined based on at least one of the size of the current block, its shape, or the number of merge candidates included in the merge candidate list, instead of a flag.
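The parsing dependency the following paragraph describes can be illustrated with a decision-flow sketch. The flag names are the syntax elements named in the text; the exact evaluation order is an illustrative assumption.

```python
def select_merge_tool(flags):
    """Resolve which merge-based tool applies when the regular merge mode
    has no flag of its own: 'regular' is reached only after parsing and
    rejecting every other mode's flag (the parsing-dependency problem)."""
    if flags.get('merge_subblock_flag'):
        return 'subblock'
    if flags.get('mmvd_flag'):
        return 'mmvd'
    if flags.get('ciip_flag'):
        return 'combined'
    if flags.get('merge_triangle_flag'):
        return 'partition'
    return 'regular'
```

A dedicated regular_merge_flag, as introduced next, lets the decoder reach the 'regular' outcome after a single flag instead of four.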
[560] In this case, the regular merge mode may be set to be applied to the current block only when the other prediction methods are not applied. However, this creates the problem that many syntax elements must be parsed to determine whether the regular merge mode is applied to the current block. For example, if whether the regular mode is applied is determined only after parsing all of mmvd_flag indicating whether the merge offset coding mode is applied, merge_subblock_flag indicating whether sub-block motion compensation is applied, merge_triangle_flag indicating whether the prediction-unit-partitioning-based coding mode is applied, and/or ciip_flag indicating whether the combined prediction mode is applied, many syntax elements must be parsed even though the regular merge mode is used more frequently than the other modes. That is, determining whether the regular merge mode is applied to the current block comes to have several levels of parsing dependency. In addition, the index merge_idx for specifying a merge candidate under the regular mode also comes to have a high parsing dependency.
[561] To resolve the above problem, information indicating whether the regular merge mode is used may be coded separately. Specifically, a flag regular_merge_flag indicating whether the regular merge mode is applied may be coded and signaled.
[562] Table 12 shows a syntax table including the syntax regular_merge_flag.
[563] [Table 12] (syntax table; content not recoverable from the source scan)
[564] The syntax regular_merge_flag may be coded only when it is determined that a merge-mode-based inter prediction mode is applied to the current block. That is, the syntax regular_merge_flag may be signaled through the bitstream only when merge_flag is 1.
[565] A regular_merge_flag of 1 indicates that the regular merge mode is applied to the current block. When it is determined that the regular merge mode is applied to the current block, merge_idx specifying one of the merge candidates may be signaled through the bitstream.
[566] A regular_merge_flag of 0 indicates that the regular merge mode is not applied to the current block. When the syntax regular_merge_flag is 0, at least one of the flag mmvd_flag indicating whether the merge offset coding mode is applied, the flag merge_subblock_flag indicating whether the sub-block motion compensation mode is applied, the flag ciip_flag indicating whether the combined prediction mode is applied, or the flag merge_triangle_flag indicating whether the prediction-unit-partitioning-based coding mode is applied may be signaled through the bitstream.
[567] Alternatively, the signaling of the flag merge_triangle_flag may be omitted, and whether the prediction-unit-partitioning-based coding mode is applied may be determined based on ciip_flag. As an example, a true ciip_flag indicates that the combined prediction mode is applied to the current block and the prediction-unit-partitioning-based coding mode is not applied. A false ciip_flag indicates that the combined prediction mode is not applied to the current block and the prediction-unit-partitioning-based coding mode may be applied.
[568] Alternatively, at least one of the flag mmvd_flag indicating whether the merge offset coding mode is applied, the flag merge_subblock_flag indicating whether the sub-block motion compensation mode is applied, the flag ciip_flag indicating whether the combined prediction mode is applied, or the flag merge_triangle_flag indicating whether the prediction-unit-partitioning-based coding mode is applied may be signaled before the flag indicating whether the regular merge mode is applied. As an example, when the value of the flag merge_subblock_flag indicating whether the sub-block motion compensation mode is applied is 0, the flag indicating whether the regular merge mode is applied may be signaled.
[569] Instead of the flag indicating whether the regular merge mode is applied, a flag indicating whether at least one of the regular merge mode or the merge offset coding mode is applied may be coded and signaled. As an example, a flag regular_mmvd_merge_flag indicating whether at least one of the regular merge mode or the merge offset coding mode is applied may be signaled through the bitstream.
[570] Table 13 shows a syntax table including the syntax regular_mmvd_merge_flag.
[571] [Table 13] (syntax table; content not recoverable from the source scan)
[572] The syntax regular_mmvd_merge_flag may be coded only when it is determined that a merge-mode-based inter prediction method is applied to the current block. That is, the syntax regular_mmvd_merge_flag may be signaled through the bitstream only when merge_flag is 1.
[573] A regular_mmvd_merge_flag value of 1 indicates that the regular merge mode or the merge offset coding mode is applied to the current block. When regular_mmvd_merge_flag is 1, the syntax mmvd_flag indicating whether the merge offset coding mode is applied to the current block may be additionally signaled. An mmvd_flag value of 1 indicates that the merge offset coding mode is applied to the current block, and an mmvd_flag value of 0 indicates that the regular merge mode is applied to the current block.
[574] A regular_mmvd_merge_flag value of 0 indicates that neither the regular merge mode nor the merge offset coding mode is applied to the current block. When the syntax regular_mmvd_merge_flag is 0, at least one of merge_subblock_flag indicating whether the sub-block motion compensation mode is applied, the flag ciip_flag indicating whether the combined prediction mode is applied, or merge_triangle_flag indicating whether the prediction-unit-partitioning-based coding mode is applied may be signaled through the bitstream.
[575] Alternatively, the signaling of the flag merge_triangle_flag may be omitted, and whether the prediction-unit-partitioning-based coding mode is applied may be determined based on ciip_flag. As an example, a true ciip_flag indicates that the combined prediction mode is applied to the current block and the prediction-unit-partitioning-based coding mode is not applied. A false ciip_flag indicates that the combined prediction mode is not applied to the current block and the prediction-unit-partitioning-based coding mode may be applied.
[576] Alternatively, it is also possible to signal at least one of the flag merge_subblock_flag indicating whether the sub-block motion compensation mode is applied, the flag ciip_flag indicating whether the combined prediction mode is applied, or the flag merge_triangle_flag indicating whether the prediction-unit-partitioning-based coding mode is applied before the flag regular_mmvd_merge_flag indicating whether the regular merge mode or the merge offset coding mode is applied. As an example, when the value of the flag merge_subblock_flag indicating whether the sub-block motion compensation mode is applied is 0, the flag regular_mmvd_merge_flag indicating whether the regular merge mode or the merge offset coding mode is applied may be signaled.
[577] When the value of the syntax regular_mmvd_merge_flag is 1, the merge index merge_idx for specifying one of the merge candidates may be coded and signaled, and the availability of the merge offset coding mode may be determined based on the value of merge_idx. As an example, when the value of merge_idx is smaller than a threshold, the regular merge mode or the merge offset coding mode may be applied to the current block. On the other hand, when the value of merge_idx is equal to or greater than the threshold, the regular merge mode is applied to the current block and the merge offset coding mode is not.
[578] As an example, the syntax mmvd_flag indicating whether the merge offset coding mode is applied may be coded and signaled only when the value of merge_idx is smaller than 2. On the other hand, when the value of merge_idx is 2 or greater, the coding of the syntax mmvd_flag indicating whether the merge offset coding mode is applied may be omitted, and the regular merge mode may be applied to the current block. That is, the merge offset coding mode may be applied only when the value of merge_idx is smaller than 2.
[579] An mmvd_flag of 1 indicates that the merge offset coding mode is applied. When the merge offset coding mode is applied, the motion vector of the merge candidate indicated by the merge index merge_idx may be set as the initial motion vector. That is, the merge candidate used to derive the initial motion vector may be specified among the merge candidates based on merge_idx, which is parsed before mmvd_flag.
[580] In addition, when mmvd_flag is 1, distance_idx and direction_idx for determining the offset vector may be signaled through the bitstream.
[581] As another example, the syntax table may be configured so that the merge index merge_idx is parsed before the flag indicating whether the regular merge mode is applied (e.g., regular_merge_flag or regular_mmvd_merge_flag).
[582] Table 14 shows an example in which the merge index is parsed before the flag indicating whether the regular merge mode is applied.
[583] [Table 14] (syntax table; content not recoverable from the source scan)
[584] 현재블록에 머지모드기반의움직임 예측이수행되는것으로결정되면,머지 후보들중어느하나를특정하기 위한머지 인덱스가시그날링될수있다.즉, 신택스 111 6_:(¾은의 값이 1인경우,머지후보를특정하기위한 111 6_:1(1 가 시그날링될수있다.
[585] 레귤러머지모드의 적용여부를나타내는플래그는,머지 인덱스 111 (뇨가 파싱된이후파싱될수있다.일 예로,
Figure imgf000085_0001
이후, 레귤러 머지모드의 적용여부를나타내
것으로예시되었다.표 14의 예시에서와
Figure imgf000085_0002
시그날링 이후,레귤러머지모드또는머지오프셋부호화모드의 적용여부를 나타내는플래그 은111표1'_1116¾6_:(¾은가시그날링되도록설정할수도있다.
[586] 레귤러머지모드가적용되는경우,머지 인덱스
Figure imgf000085_0003
특정되는 머지후보가현재블록의움직임 정보를유도하는데 이용될수있다.
[587] When the merge offset coding mode is applied, the merge candidate specified by the merge index merge_idx may be used to derive the initial motion vector of the current block. The flag mmvd_flag indicating whether the merge offset coding mode is applied may be signaled through the bitstream only when the merge index is smaller than a threshold.
[588] When it is determined that the regular merge mode is not applied (e.g., when regular_merge_flag or regular_mmvd_merge_flag is 0), the flag merge_subblock_flag indicating whether the sub-block motion compensation mode is applied may be signaled. When the sub-block motion compensation mode is applied, the merge candidate specified by merge_subblock_idx may be used to derive the motion vectors of the sub-blocks.
[589] When it is determined that the regular merge mode is not applied to the current block (e.g., when regular_merge_flag or regular_mmvd_merge_flag is 0), the prediction-unit-partitioning-based coding mode may be applied to the current block. When the prediction-unit-partitioning-based coding mode is applied, the merge candidate specified by the merge index may be set as the merge candidate of one of the first partition and the second partition. When the prediction-unit-partitioning-based coding mode is applied, a merge index for specifying the merge candidate of the other one of the first partition and the second partition may be additionally signaled.
[590] The above-described flag regular_merge_flag or regular_mmvd_merge_flag indicating whether the regular merge mode is applied may be signaled only when the size of the current block is smaller than a threshold. For example, the flag indicating whether the regular merge mode is applied may be signaled when the size of the current block is smaller than 128x128. When the flag indicating whether the regular merge mode is applied is not signaled, it indicates that the regular merge mode is not applied to the current block. In this case, the prediction-unit-partitioning-based coding mode may be applied to the current block.
[591]
[592] Intra prediction predicts the current block using reconstructed samples around the current block for which encoding/decoding has been completed. In this case, reconstructed samples before an in-loop filter is applied may be used for the intra prediction of the current block.
[593] Intra prediction techniques include matrix-based intra prediction and general intra prediction that considers directionality with respect to neighboring reconstructed samples. Information indicating the intra prediction technique of the current block may be signaled through the bitstream. The information may be a 1-bit flag. Alternatively, the intra prediction technique of the current block may be determined based on at least one of the position, size, or shape of the current block, or the intra prediction technique of a neighboring block. For example, when the current block crosses a picture boundary, it may be set such that matrix-based intra prediction is not applied to the current block.
[594] Matrix-based intra prediction is a method of obtaining a prediction block of the current block based on a matrix product between a matrix pre-stored in the encoder and decoder and the reconstructed samples around the current block. Information for specifying one of a plurality of pre-stored matrices may be signaled through the bitstream. The decoder may determine the matrix for intra prediction of the current block based on the information and the size of the current block.
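As a rough illustration of the matrix product just described, a prediction block may be formed by multiplying a stored weight matrix with the vector of neighboring reconstructed samples. The matrix and boundary values in the sketch are placeholders, not the matrices actually stored by an encoder or decoder.

```python
# Illustrative matrix-based intra prediction: each prediction sample is
# a weighted sum (one matrix row) of the neighboring reconstructed
# samples. Real codecs store several size-dependent integer matrices;
# here the matrix is supplied by the caller.

def matrix_intra_predict(matrix, boundary):
    """matrix: list of weight rows; boundary: neighboring reconstructed
    samples. Returns flattened prediction samples, one per matrix row."""
    return [sum(w * s for w, s in zip(row, boundary)) for row in matrix]
```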
[595] General intra prediction is a method of obtaining a prediction block for the current block based on a non-directional intra prediction mode or a directional intra prediction mode.
[596]
[597] A residual image may be derived by subtracting the prediction image from the original image. In this case, when the residual image is converted to the frequency domain, the subjective quality of the image does not degrade significantly even if high-frequency components are removed from among the frequency components. Accordingly, if the values of high-frequency components are reduced or set to 0, compression efficiency can be increased without causing significant visual distortion. Reflecting this property, the current block may be transformed to decompose the residual image into two-dimensional frequency components. The transform may be performed using a transform technique such as DCT (Discrete Cosine Transform) or DST (Discrete Sine Transform).
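The energy-compaction property exploited above can be illustrated with a textbook 1-D DCT-II: for a smooth signal, almost all energy lands in the low-frequency coefficients, so zeroing high-frequency ones loses little. This is a generic, unnormalized DCT for illustration only, not the exact transform of any particular standard.

```python
import math

# Unnormalized 1-D DCT-II: X[k] = sum_i x[i] * cos(pi*(2i+1)*k / (2n)).
# For a constant (maximally smooth) input, all energy collapses into
# the DC coefficient X[0] and every higher-frequency coefficient is 0.

def dct2(x):
    n = len(x)
    return [sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x)) for k in range(n)]
```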
[598] The transform technique may be determined on a per-block basis. The transform technique may be determined based on at least one of the prediction encoding mode of the current block, the size of the current block, or the shape of the current block. For example, when the current block is encoded in intra prediction mode and the size of the current block is smaller than NxN, the transform may be performed using the transform technique DST. On the other hand, when the above condition is not satisfied, the transform may be performed using the transform technique DCT.
[599] A two-dimensional image transform may not be performed on some blocks of the residual image. Not performing the two-dimensional image transform may be referred to as transform skip. When transform skip is applied, quantization may be applied to the residual values on which the transform has not been performed.
[600] After transforming the current block using DCT or DST, the transformed current block may be transformed again. In this case, the DCT- or DST-based transform may be defined as the first transform, and transforming again a block to which the first transform has been applied may be defined as the second transform.
[601] The first transform may be performed using one of a plurality of transform core candidates. For example, the first transform may be performed using one of DCT2, DCT8, or DST7.
[602] Different transform cores may be used for the horizontal direction and the vertical direction. Information indicating the combination of the transform core of the horizontal direction and the transform core of the vertical direction may be signaled through the bitstream.
[603] The units in which the first transform and the second transform are performed may differ. For example, the first transform may be performed on an 8x8 block, and the second transform may be performed on a 4x4-sized sub-block of the transformed 8x8 block. In this case, the transform coefficients of the remaining regions on which the second transform is not performed may be set to 0.
[604] Alternatively, the first transform may be performed on a 4x4 block, and the second transform may be performed on an 8x8-sized region including the transformed 4x4 block.
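The region handling of paragraph [603], where the second transform covers only the low-frequency sub-block and the coefficients of the remaining region are set to 0, can be sketched as follows. The secondary kernel here is an identity placeholder; only the region and zeroing logic is illustrated, not a real secondary-transform kernel.

```python
# Apply a second transform to the top-left sub x sub region of the
# primary-transformed coefficient block; coefficients outside that
# region are set to 0, as described in [603]. `second` is a stand-in
# for an actual secondary transform.

def apply_second_transform(coeffs, sub=4, second=lambda v: v):
    h, w = len(coeffs), len(coeffs[0])
    out = [[0] * w for _ in range(h)]  # region outside sub-block -> 0
    for y in range(sub):
        for x in range(sub):
            out[y][x] = second(coeffs[y][x])
    return out
```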
[605] Information indicating whether the second transform is performed may be signaled through the bitstream.
[606] Alternatively, whether to perform the second transform may be determined based on whether the horizontal-direction transform core and the vertical-direction transform core are identical. For example, the second transform may be performed only when the horizontal-direction transform core and the vertical-direction transform core are identical. Or, the second transform may be performed only when the horizontal-direction transform core and the vertical-direction transform core differ.
[607] Alternatively, the second transform may be allowed only when a predefined transform core is used for the horizontal-direction transform and the vertical-direction transform. For example, the second transform may be allowed when the DCT2 transform core is used for both the horizontal-direction and vertical-direction transforms.
[608] Alternatively, whether to perform the second transform may be determined based on the number of non-zero transform coefficients of the current block. For example, it may be set such that the second transform is not used when the number of non-zero transform coefficients of the current block is smaller than or equal to a threshold, and the second transform is used when the number of non-zero transform coefficients of the current block is greater than the threshold. It may also be set such that the second transform is used only when the current block is encoded with intra prediction.
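The coefficient-count condition just described can be sketched as a small predicate. The threshold value is illustrative, and the intra-only restriction reflects the optional condition mentioned above.

```python
# Use the second transform only when the count of non-zero transform
# coefficients exceeds a threshold, and (optionally, per the text)
# only for intra-predicted blocks. Threshold value is an assumption.

def use_second_transform(coeffs, is_intra, threshold=2):
    nonzero = sum(1 for row in coeffs for c in row if c != 0)
    return is_intra and nonzero > threshold
```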
[609] The decoder may perform the inverse transform of the second transform (the second inverse transform) and then perform the inverse transform of the first transform (the first inverse transform) on the result. As a result of performing the second inverse transform and the first inverse transform, residual signals for the current block may be obtained.
[610] When the encoder performs transform and quantization, the decoder may obtain the residual block through inverse quantization and inverse transform. The decoder may obtain the reconstructed block for the current block by adding the prediction block and the residual block.
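The decoder-side reconstruction path just described (inverse quantization, inverse transform, then adding the prediction block to the residual block) can be sketched as follows. The scalar quantization step and the identity inverse transform are illustrative placeholders.

```python
# Reconstruction sketch: dequantize the coefficients, inverse-transform
# them into a residual block (identity stand-in here), then add the
# prediction block sample-by-sample. qstep is an illustrative scalar.

def reconstruct(pred, quantized, qstep=2, inv_transform=lambda r: r):
    residual = inv_transform([[c * qstep for c in row] for row in quantized])
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]
```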
[611]
[612] When the reconstructed block of the current block is obtained, information loss occurring in the quantization and encoding process may be reduced through in-loop filtering. The in-loop filter may include at least one of a deblocking filter, a sample adaptive offset filter (SAO), or an adaptive loop filter (ALF).
[613]
[614] Applying the embodiments described with a focus on the decoding process or the encoding process to the encoding process or the decoding process, respectively, is included in the scope of the present invention. Changing the embodiments described in a predetermined order into an order different from that described is also included in the scope of the present invention.
[615] Although the above embodiments are described based on a series of steps or a flowchart, this does not limit the time-series order of the invention, and the steps may be performed simultaneously or in a different order as needed. In addition, in the above embodiments, each of the components (e.g., units, modules, etc.) constituting the block diagram may be implemented as a hardware device or as software, and a plurality of components may be combined and implemented as a single hardware device or software. The above embodiments may be implemented in the form of program instructions executable through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROM and DVD; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. The hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.

Industrial Applicability
[616] The present invention can be applied to electronic devices that encode/decode images.

Claims

Claims

[Claim 1] An image decoding method comprising:
parsing a first flag indicating whether inter prediction based on a merge mode is applied to a current block;
when the first flag is true, parsing a second flag indicating whether a regular merge mode or a merge offset coding mode is applied to the current block; and
when the second flag is true, parsing a third flag indicating whether the merge offset coding mode is applied to the current block,
wherein when the third flag is true, the merge offset coding mode is applied to the current block, and when the third flag is false, the regular merge mode is applied to the current block.
[Claim 2] The method of claim 1, further comprising, when the second flag is false, parsing a fourth flag indicating whether a combined prediction mode is applied to the current block.
[Claim 3] The method of claim 2, wherein a prediction-unit-partitioning-based coding method is applicable when the fourth flag is false.
[Claim 4] The method of claim 1, wherein motion information of the current block is derived from a merge candidate list of the current block, and
when the number of merge candidates derived from neighboring blocks of the current block is smaller than or equal to a threshold, a motion information candidate included in a motion information table is added to the merge candidate list as a merge candidate.
[Claim 5] The method of claim 4, wherein when the current block is included in a merge processing region, the motion information table is not updated while blocks included in the merge processing region are decoded.
[Claim 6] The method of claim 4, wherein when the current block is included in a merge processing region, whether to update the motion information of the current block into the motion information table is determined based on the position of the current block within the merge processing region.
[Claim 7] The method of claim 6, wherein when the current block is located at the bottom-right within the merge processing region, it is determined that the motion information of the current block is to be updated into the motion information table.
[Claim 8] An image encoding method comprising:
encoding a first flag indicating whether inter prediction based on a merge mode is applied to a current block;
when the first flag is true, encoding a second flag indicating whether a regular merge mode or a merge offset coding mode is applied to the current block; and
when the second flag is true, encoding a third flag indicating whether the merge offset coding mode is applied to the current block,
wherein when the merge offset coding mode is applied to the current block, the third flag is set to true, and when the regular merge mode is applied to the current block, the third flag is set to false.
[Claim 9] The method of claim 8, further comprising, when the second flag is false, encoding a fourth flag indicating whether a combined prediction mode is applied to the current block.
[Claim 10] The method of claim 9, wherein when the combined prediction mode is applied to the current block, the fourth flag is set to true, and when a prediction-unit-partitioning-based coding method is applied to the current block, the fourth flag is set to false.
[Claim 11] The method of claim 8, wherein motion information of the current block is derived from a merge candidate list of the current block, and
when the number of merge candidates derived from neighboring blocks of the current block is smaller than or equal to a threshold, a motion information candidate included in a motion information table is added to the merge candidate list as a merge candidate.
[Claim 12] The method of claim 11, wherein when the current block is included in a merge processing region, the motion information table is not updated while blocks included in the merge processing region are decoded.
[Claim 13] The method of claim 11, wherein when the current block is included in a merge processing region, whether to update the motion information of the current block into the motion information table is determined based on the position of the current block within the merge processing region.
[Claim 14] The method of claim 13, wherein when the current block is located at the bottom-right within the merge processing region, it is determined that the motion information of the current block is to be updated into the motion information table.
PCT/KR2020/002754 2019-02-26 2020-02-26 Method for encoding/decoding an image signal and apparatus therefor WO2020175915A1 (ko)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN202310512325.7A CN116366840A (zh) 2019-02-26 2020-02-26 用于对视频信号进行编码/解码的方法及其设备
CN202080004012.5A CN112425160B (zh) 2019-02-26 2020-02-26 用于对视频信号进行编码/解码的方法及其设备
US17/126,803 US11025944B2 (en) 2019-02-26 2020-12-18 Method for encoding/decoding video signal, and apparatus therefor
US17/241,950 US11632562B2 (en) 2019-02-26 2021-04-27 Method for encoding/decoding video signal, and apparatus therefor
ZA2021/04757A ZA202104757B (en) 2019-02-26 2021-07-07 Method for encoding/decoding video signal, and apparatus therefor
US18/135,106 US20230370631A1 (en) 2019-02-26 2023-04-14 Method for encoding/decoding video signal, and apparatus therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0022767 2019-02-26
KR20190022767 2019-02-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/126,803 Continuation US11025944B2 (en) 2019-02-26 2020-12-18 Method for encoding/decoding video signal, and apparatus therefor

Publications (1)

Publication Number Publication Date
WO2020175915A1 true WO2020175915A1 (ko) 2020-09-03

Family

ID=72239766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002754 WO2020175915A1 (ko) 2019-02-26 2020-02-26 영상 신호 부호화/복호화 방법 및 이를 위한 장치

Country Status (5)

Country Link
US (3) US11025944B2 (ko)
KR (2) KR102597617B1 (ko)
CN (2) CN112425160B (ko)
WO (1) WO2020175915A1 (ko)
ZA (1) ZA202104757B (ko)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020175915A1 (ko) * 2019-02-26 2020-09-03 주식회사 엑스리스 Method for encoding/decoding an image signal and apparatus therefor
AU2020256658A1 (en) 2019-04-12 2021-10-28 Beijing Bytedance Network Technology Co., Ltd. Most probable mode list construction for matrix-based intra prediction
CN117499656A (zh) 2019-04-16 2024-02-02 北京字节跳动网络技术有限公司 帧内编解码模式下的矩阵推导
KR20220002318A (ko) 2019-05-01 2022-01-06 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 필터링을 이용한 행렬 기반 인트라 예측
CN117097912A (zh) 2019-05-01 2023-11-21 北京字节跳动网络技术有限公司 基于矩阵的帧内预测的上下文编码
JP2022533190A (ja) 2019-05-22 2022-07-21 北京字節跳動網絡技術有限公司 アップサンプリングを使用した行列ベースのイントラ予測
CN114051735A (zh) 2019-05-31 2022-02-15 北京字节跳动网络技术有限公司 基于矩阵的帧内预测中的一步下采样过程
CN117768652A (zh) 2019-06-05 2024-03-26 北京字节跳动网络技术有限公司 视频处理方法、装置、介质、以及存储比特流的方法
MX2021016161A (es) * 2019-06-19 2022-03-11 Lg Electronics Inc Metodo de decodificacion de imagen que comprende generar muestras de prediccion aplicando modo de prediccion determinada, y dispositivo para el mismo.
WO2021052491A1 (en) 2019-09-19 2021-03-25 Beijing Bytedance Network Technology Co., Ltd. Deriving reference sample positions in video coding
JP2022548351A (ja) * 2019-09-19 2022-11-18 アリババ グループ ホウルディング リミテッド マージ候補リストを構築するための方法
JP7391199B2 (ja) 2019-10-05 2023-12-04 北京字節跳動網絡技術有限公司 映像コーディングツールのレベルベースシグナリング
KR102637881B1 (ko) 2019-10-12 2024-02-19 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 정제 비디오 코딩 툴의 사용 및 시그널링
KR20220082847A (ko) 2019-10-28 2022-06-17 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 색상 성분에 기초한 신택스 시그널링 및 파싱
KR20220113379A (ko) * 2019-12-27 2022-08-12 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 비디오 픽처 헤더의 슬라이스 유형의 시그널링

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140049527A (ko) * 2011-09-09 2014-04-25 주식회사 케이티 Method for determining candidate blocks when performing inter prediction, and apparatus using such a method
US20150103897A1 (en) * 2012-10-12 2015-04-16 Electronics And Telecommunications Research Institute Image encoding/decoding method and device using same
WO2017176092A1 (ko) * 2016-04-08 2017-10-12 한국전자통신연구원 Method and apparatus for deriving motion prediction information
KR20180037583A (ko) * 2016-10-04 2018-04-12 한국전자통신연구원 Image encoding/decoding method and apparatus, and recording medium storing a bitstream

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546205A (zh) * 2016-04-08 2023-08-04 韩国电子通信研究院 Method and apparatus for deriving motion prediction information
US11019357B2 (en) * 2018-08-07 2021-05-25 Qualcomm Incorporated Motion vector predictor list generation
CN116708821A (zh) * 2018-10-02 2023-09-05 Lg电子株式会社 Image encoding/decoding method, storage medium, and data transmission method
WO2020073920A1 (en) * 2018-10-10 2020-04-16 Mediatek Inc. Methods and apparatuses of combining multiple predictors for block prediction in video coding systems
US20200169757A1 (en) * 2018-11-23 2020-05-28 Mediatek Inc. Signaling For Multi-Reference Line Prediction And Multi-Hypothesis Prediction
US11146810B2 (en) * 2018-11-27 2021-10-12 Qualcomm Incorporated Decoder-side motion vector refinement
WO2020175915A1 (ko) 2019-02-26 2020-09-03 주식회사 엑스리스 Method for encoding/decoding an image signal and apparatus therefor


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BROSS, CHEN J., LIU: "Versatile Video Coding (Draft 4)", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JVET-M1001-V7, 13TH MEETING, no. JVET-M1001, 17 March 2019 (2019-03-17), Marrakech, MA, pages 1 - 290, XP030203323 *

Also Published As

Publication number Publication date
US20230370631A1 (en) 2023-11-16
US20210258599A1 (en) 2021-08-19
ZA202104757B (en) 2022-06-29
KR20230155381A (ko) 2023-11-10
KR20200104253A (ko) 2020-09-03
US11632562B2 (en) 2023-04-18
CN116366840A (zh) 2023-06-30
CN112425160B (zh) 2023-05-12
US11025944B2 (en) 2021-06-01
KR102597617B1 (ko) 2023-11-03
US20210105499A1 (en) 2021-04-08
CN112425160A (zh) 2021-02-26

Similar Documents

Publication Publication Date Title
WO2020175915A1 (ko) Method for encoding/decoding an image signal and apparatus therefor
KR102608847B1 (ko) Method for encoding/decoding an image signal and apparatus therefor
KR20200109276A (ko) Method for encoding/decoding an image signal and apparatus therefor
KR102617439B1 (ko) Method for encoding/decoding an image signal and apparatus therefor
KR20240023059A (ko) Method for encoding/decoding an image signal and apparatus therefor
CN116866582A (zh) Method for decoding and encoding images, and device for storing compressed video data
CN113382234B (zh) Video signal encoding/decoding method and device for the method
US20220303531A1 Method for encoding/decoding image signal and device therefor
JP7459069B2 (ja) Video signal encoding/decoding method and device therefor
CN113507603B (zh) Image signal encoding/decoding method and device therefor
KR102619997B1 (ko) Method for encoding/decoding an image signal and apparatus therefor
KR102597461B1 (ko) Method for encoding/decoding an image signal and apparatus therefor
JP7305879B2 (ja) Video signal encoding/decoding method and device therefor
CN113574878A (zh) Method for encoding/decoding a video signal and device therefor
WO2020175914A1 (ko) Method for encoding/decoding an image signal and apparatus therefor
CN112236996A (zh) Video signal encoding/decoding method and device therefor
KR20230114250A (ko) Method for encoding/decoding an image signal and apparatus therefor
KR20230063314A (ko) Method for encoding/decoding an image signal, and recording medium storing a bitstream generated based thereon
KR20230063322A (ko) Method for encoding/decoding an image signal, and recording medium storing a bitstream generated based thereon

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762115

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20762115

Country of ref document: EP

Kind code of ref document: A1