WO2020197264A1 - Method and apparatus for processing a video signal (Procédé et dispositif de traitement de signal vidéo)

Info

Publication number: WO2020197264A1
Authority: WO (WIPO, PCT)
Prior art keywords: prediction mode, prediction, current block, IBC, block
Application number: PCT/KR2020/004067
Other languages: English (en), Korean (ko)
Inventor: 장형문
Original Assignee: LG Electronics Inc. (엘지전자 주식회사)
Application filed by LG Electronics Inc.
Publication of WO2020197264A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/107: Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N 19/122: Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/42: Methods or arrangements characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/70: Methods or arrangements characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • Embodiments of the present specification relate to a video/image compression coding system, and more particularly, to a method and apparatus for determining a prediction mode of a video signal and performing prediction based on the prediction mode.
  • Compression coding refers to a series of signal processing techniques for transmitting digitized information through a communication line or storing it in a format suitable for a storage medium.
  • Media such as video, image, and audio may be subject to compression encoding.
  • A technique for performing compression encoding on an image is referred to as video/image compression.
  • Next-generation video content will be characterized by high spatial resolution, high frame rates, and high dimensionality of scene representation. Processing such content will bring a tremendous increase in memory storage, memory access rate, and processing power.
  • In particular, prediction is a technique for predicting a current picture by referring to already reconstructed samples.
  • Various prediction techniques are being discussed, including inter prediction, which refers to another picture, and intra prediction, which uses reconstructed samples of the current picture.
  • An embodiment of the present specification provides a method and apparatus for performing prediction in consideration of a block size in a process of encoding/decoding information for prediction.
  • Embodiments of the present specification provide a method and apparatus for processing a video signal.
  • According to an embodiment, a video signal decoding method includes determining a prediction mode of a current block and generating a prediction sample of the current block based on the prediction mode. Determining the prediction mode includes determining a prediction mode of the current block from among an intra prediction mode and an inter prediction mode based on a prediction mode flag, and determining whether to set the prediction mode of the current block to an intra block copy (IBC) prediction mode based on an IBC flag. If the width or height of the current block is equal to a predefined value, it is determined that the IBC prediction mode is not applied, without parsing the IBC flag.
  • the predefined size value may be 128.
  • the predefined size value may be a coding tree unit (CTU) size value.
  • In an embodiment, determining whether to set the prediction mode of the current block to the IBC prediction mode includes parsing the IBC flag based on neither the width nor the height of the current block being equal to 128, and determining whether to set the prediction mode of the current block to the IBC prediction mode based on the IBC flag.
  • In an embodiment, if the IBC flag is 1, the IBC prediction mode is set as the prediction mode of the current block; if the IBC flag is 0, one of the intra prediction mode and the inter prediction mode, determined based on the prediction mode flag, is applied as the prediction mode of the current block. A decoder-side sketch of this derivation follows.
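  • As a rough illustration of the decoding-side behavior described above, the following C++ sketch derives the prediction mode under the stated block-size condition. The names (derive_pred_mode, parse_bit, PredMode, kMaxIbcDim) are assumptions for illustration, not syntax from any standard or reference software.

```cpp
#include <functional>

enum class PredMode { Intra, Inter, Ibc };

constexpr int kMaxIbcDim = 128;  // the predefined size value of the embodiment

// parse_bit stands in for an entropy-decoder read of a one-bit syntax element.
PredMode derive_pred_mode(int width, int height, bool inter_allowed,
                          const std::function<int()>& parse_bit) {
  // When either dimension equals 128, the IBC flag is not parsed and IBC is
  // inferred to be disabled (the core condition of the embodiment).
  bool ibc_flag = false;
  if (width != kMaxIbcDim && height != kMaxIbcDim) {
    ibc_flag = parse_bit() != 0;  // IBC flag
  }
  if (ibc_flag) return PredMode::Ibc;

  // Otherwise fall back to the prediction mode flag: 1 selects intra,
  // 0 selects inter, per the description above.
  if (!inter_allowed) return PredMode::Intra;
  return parse_bit() != 0 ? PredMode::Intra : PredMode::Inter;
}
```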
  • According to an embodiment, a video signal encoding method includes determining a prediction mode of a current block and encoding prediction information of the current block based on the prediction mode.
  • Determining the prediction mode includes determining whether to apply an intra block copy (IBC) prediction mode to the current block based on the width and height of the current block. If the width or height of the current block is equal to a predefined value, it is determined that the IBC prediction mode is not applied.
  • the predefined value may be 128.
  • the predefined value may be a coding tree unit (CTU) size value.
  • In an embodiment, the prediction information of the current block includes an IBC flag indicating whether the IBC prediction mode is applied.
  • Encoding the prediction information of the current block may include encoding the IBC flag based on neither the width nor the height of the current block being equal to 128.
  • In an embodiment, encoding the prediction information of the current block includes encoding the IBC flag as 1 when the IBC prediction mode is applied, and, when the IBC prediction mode is not applied, encoding the IBC flag as 0 and encoding a prediction mode flag indicating one of the intra prediction mode and the inter prediction mode. An encoder-side sketch follows.
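  • The encoder-side condition can be sketched in the same illustrative style: the IBC flag is written only when both dimensions differ from 128, so for 128-wide or 128-tall blocks the encoder simply never selects IBC, matching the decoder's inference. The names are again assumptions, and write_bit stands in for entropy encoding of a one-bit syntax element.

```cpp
#include <functional>

enum class Mode { Intra, Inter, Ibc };

void encode_pred_info(int width, int height, Mode mode,
                      const std::function<void(int)>& write_bit) {
  if (width != 128 && height != 128) {
    write_bit(mode == Mode::Ibc ? 1 : 0);  // IBC flag
  }
  if (mode != Mode::Ibc) {
    write_bit(mode == Mode::Intra ? 1 : 0);  // prediction mode flag
  }
}
```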
  • a decoding apparatus includes a memory storing the video signal, and a processor coupled to the memory and processing the video signal.
  • the processor is configured to determine a prediction mode of the current block and generate a prediction sample of the current block based on the prediction mode.
  • Specifically, the processor determines a prediction mode of the current block from among an intra prediction mode and an inter prediction mode based on a prediction mode flag, and determines whether to set the prediction mode of the current block to an intra block copy (IBC) prediction mode based on an IBC flag. If the width or height of the current block is equal to a specific size value, it is determined that the IBC prediction mode is not applied, without parsing the IBC flag.
  • An encoding apparatus includes a memory for storing the video signal, and a processor coupled to the memory and processing the video signal.
  • the processor is configured to determine a prediction mode of the current block and encode prediction information of the current block based on the prediction mode.
  • Specifically, the processor is configured to determine whether to apply an intra block copy (IBC) prediction mode to the current block based on the width and height of the current block. If the width or height of the current block is equal to a specific size value, it is determined that the IBC prediction mode is not applied.
  • an embodiment of the present specification provides a non-transitory computer-readable medium storing one or more instructions.
  • The one or more instructions control a video signal processing apparatus to determine a prediction mode of a current block and generate a prediction sample of the current block based on the prediction mode. Determining the prediction mode includes determining a prediction mode of the current block from among an intra prediction mode and an inter prediction mode based on a prediction mode flag, and determining whether to set the prediction mode of the current block to an intra block copy (IBC) prediction mode based on an IBC flag. If the width or height of the current block is equal to a predefined value, it is determined that the IBC prediction mode is not applied, without parsing the IBC flag.
  • One or more instructions according to an embodiment of the present specification control a video signal processing apparatus to determine a prediction mode of a current block and encode prediction information of the current block based on the prediction mode. Determining the prediction mode includes determining whether to apply an intra block copy (IBC) prediction mode to the current block based on the width and height of the current block. If the width or height of the current block is equal to a specific size value, it is determined that the IBC prediction mode is not applied.
  • According to an embodiment of the present specification, by determining whether to apply an intra block copy (IBC) prediction mode in consideration of the block size, it is possible to prevent a sharp increase in coding complexity and a sudden increase in memory bandwidth when performing IBC prediction.
  • FIG. 1 shows an example of an image coding system according to an embodiment of the present specification.
  • FIG. 2 shows an example of a schematic block diagram of an encoding apparatus in which encoding of a video signal is performed according to an embodiment of the present specification.
  • FIG. 3 shows an example of a schematic block diagram of a decoding apparatus for decoding an image signal according to an embodiment of the present specification.
  • FIG. 4 shows an example of a content streaming system according to an embodiment of the present specification.
  • FIG. 5 shows an example of a video signal processing apparatus according to an embodiment of the present specification.
  • FIG. 6 illustrates an example of a picture division structure according to an embodiment of the present specification.
  • FIGS. 7A to 7D illustrate an example of a block division structure according to an embodiment of the present specification.
  • FIG. 8 shows examples of ternary tree (TT) and binary tree (BT) divisions according to an embodiment of the present specification.
  • FIG. 9 is an example of a flowchart for encoding a picture constituting a video signal according to an embodiment of the present specification.
  • FIG. 10 is an example of a flowchart for decoding a picture constituting a video signal according to an embodiment of the present specification.
  • FIG. 11 illustrates an example of a hierarchical structure for a coded image according to an embodiment of the present specification.
  • FIG. 12 is an example of a flowchart for inter prediction in an encoding process of a video signal according to an embodiment of the present specification.
  • FIG. 13 illustrates an example of an inter prediction unit in an encoding device according to an embodiment of the present specification.
  • FIG. 14 is an example of a flowchart for inter prediction in a process of decoding a video signal according to an embodiment of the present specification.
  • FIG. 15 illustrates an example of an inter prediction unit in a decoding apparatus according to an embodiment of the present specification.
  • FIG. 16 illustrates examples of spatial neighboring blocks used as spatial merge candidates according to an embodiment of the present specification.
  • FIG. 17 is an example of a flowchart for configuring a merge candidate list according to an embodiment of the present specification.
  • FIG. 19 illustrates an example in which a symmetric motion vector difference (MVD) mode according to an embodiment of the present specification is applied.
  • FIGS. 21A and 21B illustrate examples of motion vectors for each control point according to an embodiment of the present specification.
  • FIG. 22 shows an example of a motion vector for each subblock according to an embodiment of the present specification.
  • FIG. 23 is an example of a flowchart for configuring an affine merge candidate list according to an embodiment of the present specification.
  • FIG. 24 shows examples of blocks for deriving an inherited affine motion predictor according to an embodiment of the present specification.
  • FIG. 25 illustrates an example of control point motion vectors for deriving an inherited affine motion predictor according to an embodiment of the present specification.
  • FIG. 26 shows an example of blocks for deriving a constructed affine merge candidate according to an embodiment of the present specification.
  • FIG. 27 is an example of a flowchart for configuring an affine MVP candidate list according to an embodiment of the present specification.
  • FIGS. 28A and 28B illustrate examples of spatial neighboring blocks used in adaptive temporal motion vector prediction (ATMVP) according to an embodiment of the present specification and a sub-coding unit (sub-CU) motion field derived from the spatial neighboring blocks.
  • FIGS. 29A and 29B illustrate examples of a video/image encoding method based on an intra block copy (IBC) mode and a prediction unit in an encoding apparatus according to an embodiment of the present specification.
  • FIGS. 30A and 30B illustrate an example of a video/image decoding method based on an IBC mode and a prediction unit in a decoding apparatus according to an embodiment of the present specification.
  • FIG. 31 illustrates an example of a decoding procedure of prediction information according to an embodiment of the present specification.
  • FIG. 32 illustrates another example of a decoding procedure of prediction information according to an embodiment of the present specification.
  • FIG. 33 illustrates an example of a decoding procedure of prediction information considering a block size according to an embodiment of the present specification.
  • FIG. 35 illustrates an example of an encoding procedure in consideration of a maximum IBC block size according to an embodiment of the present specification.
  • FIG. 36 is an example of a flowchart of encoding a video signal according to an embodiment of the present specification.
  • A 'processing unit' means a unit in which an encoding/decoding process such as prediction, transform, and/or quantization is performed.
  • the processing unit may be interpreted as including a unit for a luma component and a unit for a chroma component.
  • the processing unit may correspond to a block, a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
  • Alternatively, the processing unit may be interpreted as a unit for the luma component or a unit for the chroma component.
  • For example, the processing unit may correspond to a coding tree block (CTB), a coding block (CB), a PU, or a transform block (TB) for the luma component.
  • Alternatively, the processing unit may correspond to a CTB, CB, PU, or TB for the chroma component.
  • However, the present specification is not limited thereto, and the processing unit may be interpreted as including a unit for the luma component and a unit for the chroma component.
  • In addition, the processing unit is not necessarily limited to a square block, and may be configured in a polygonal shape having three or more vertices.
  • In the present specification, a pixel or a pel is collectively referred to as a sample.
  • In addition, using a sample may mean using a pixel value.
  • the image coding system may include a source device 10 and a reception device 20.
  • The source device 10 may transmit the encoded video/image information or data in a file or streaming format to the receiving device 20 through a digital storage medium or a network.
  • the source device 10 may include a video source 11, an encoding device 12, and a transmitter 13.
  • the receiving device 20 may include a receiver 21, a decoding device 22 and a renderer 23.
  • the encoding device 12 may be referred to as a video/image encoding device, and the decoding device 22 may be referred to as a video/image decoding device.
  • the transmitter 13 may be included in the encoding device 12.
  • the receiver 21 may be included in the decoding device 22.
  • the renderer 23 may include a display unit, and the display unit may be configured as a separate device or an external component.
  • the video source 11 may acquire a video/image through a process of capturing, synthesizing, or generating a video/image.
  • the video source 11 may include a video/image capturing device and/or a video/image generating device.
  • the video/image capturing device may include, for example, one or more cameras, and a video/image archive including previously captured video/images.
  • Video/image generating devices may include, for example, computers, tablets, and smartphones, and may (electronically) generate video/images.
  • a virtual video/image may be generated through a computer, and in this case, a video/image capturing process may be substituted as a process of generating related data.
  • The encoding device 12 may encode an input video/image.
  • the encoding apparatus 12 may perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency.
  • The encoded data (encoded video/image information) may be output in the form of a bitstream.
  • The transmitter 13 may transmit the encoded video/image information or data output in the form of a bitstream to the receiver 21 of the receiving device 20 in a file or streaming form through a digital storage medium or a network.
  • Digital storage media may include various storage media such as USB (universal serial bus), SD (secure digital) card, CD (compact disc), DVD (digital versatile disc), Blu-ray disc, HDD (hard disk drive), and SSD (solid state drive).
  • the transmitter 13 may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
  • the receiver 21 may extract the bitstream and transmit it to the decoding device 22.
  • The decoding device 22 may decode the video/image by performing a series of procedures such as inverse quantization, inverse transform, and prediction corresponding to the operations of the encoding device 12.
  • the renderer 23 may render the decoded video/image.
  • the rendered video/image may be displayed through the display unit.
  • FIG. 2 shows an example of a schematic block diagram of an encoding apparatus in which encoding of a video signal is performed according to an embodiment of the present specification.
  • the encoding device 100 of FIG. 2 may correspond to the encoding device 12 of FIG. 1.
  • The encoding apparatus 100 may include an image partitioning module 110, a subtraction module 115, a transform module 120, a quantization module 130, a de-quantization module 140, an inverse-transform module 150, an addition module 155, a filtering module 160, a memory 170, an inter prediction module 180, an intra prediction module 185, and an entropy encoding module 190.
  • the inter prediction unit 180 and the intra prediction unit 185 may be collectively referred to as a prediction unit. That is, the prediction unit may include an inter prediction unit 180 and an intra prediction unit 185.
  • the transform unit 120, the quantization unit 130, the inverse quantization unit 140, and the inverse transform unit 150 may be included in a residual processing unit.
  • the residual processing unit may further include a subtraction unit 115.
  • The above-described image partitioning unit 110, subtraction unit 115, transform unit 120, quantization unit 130, inverse quantization unit 140, inverse transform unit 150, addition unit 155, filtering unit 160, inter prediction unit 180, intra prediction unit 185, and entropy encoding unit 190 may be configured by one hardware component (e.g., an encoder or a processor) according to an embodiment.
  • the memory 170 may include a decoded picture buffer (DPB) 175 and may be configured by a digital storage medium.
  • the image segmentation unit 110 may divide an input image (or picture, frame) input to the encoding apparatus 100 into one or more processing units.
  • the processing unit may be referred to as a coding unit (CU).
  • the coding unit may be recursively partitioned from a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree binary-tree (QTBT) structure.
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure and/or a binary tree structure.
  • a quad tree structure may be applied first and a binary tree structure may be applied later.
  • the binary tree structure may be applied first.
  • a coding procedure according to an embodiment of the present specification may be performed based on a final coding unit that is no longer divided.
  • the maximum coding unit may be directly used as the final coding unit based on coding efficiency according to image characteristics.
  • the coding unit is recursively divided into coding units of a lower depth, so that a coding unit having an optimal size may be used as a final coding unit.
  • the coding procedure may include procedures such as prediction, transformation, and restoration described below.
  • the processing unit may further include a prediction unit (PU) or a transform unit (TU).
  • the prediction unit and the transform unit may be divided from the above-described coding units, respectively.
  • the prediction unit may be a unit of sample prediction
  • The transform unit may be a unit for deriving a transform coefficient or a unit for deriving a residual signal from the transform coefficient.
  • The term "unit" used in this document may be used interchangeably with terms such as "block" or "area" in some cases.
  • the MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows.
  • A sample may generally represent a pixel or a pixel value, and may represent a pixel/pixel value of a luma component or a pixel/pixel value of a chroma component.
  • A sample may be used as a term corresponding to a pixel or a pel of one picture (or image).
  • The encoding apparatus 100 may generate a residual signal (residual block, residual sample array) by subtracting a prediction signal (predicted block, prediction sample array) output from the inter prediction unit 180 or the intra prediction unit 185 from the input video signal (original block, original sample array), and the generated residual signal is transmitted to the transform unit 120.
  • a unit that subtracts the prediction signal (prediction block, prediction sample array) from the input image signal (original block, original sample array) in the encoding apparatus 100 may be referred to as a subtraction unit 115.
  • the prediction unit may perform prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples for the current block.
  • the prediction module may determine whether intra prediction or inter prediction is applied on a per CU basis.
  • the prediction unit may generate information about prediction, such as prediction mode information, as described later in the description of each prediction mode, and may transmit information about prediction to the entropy encoding unit 190.
  • Information about prediction is encoded by the entropy encoding unit 190 and may be output in the form of a bitstream.
  • the intra prediction unit 185 may predict the current block by referring to samples in the current picture.
  • The referenced samples may be located in the vicinity of the current block or may be located apart from it, depending on the prediction mode.
  • prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • The non-directional modes may include, for example, a DC mode and a planar mode.
  • the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes according to a detailed degree of the prediction direction. However, this is an example, and more or less directional prediction modes may be used depending on the setting.
  • the intra prediction unit 185 may determine a prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the inter prediction unit 180 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • the inter prediction unit 180 may predict motion information in units of blocks, subblocks, or samples based on the correlation between motion information between neighboring blocks and the current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • The temporal neighboring block may be referred to as a collocated reference block or a collocated CU (colCU), and a reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
  • The inter prediction unit 180 may construct a motion information candidate list based on motion information of neighboring blocks, and may generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, when a skip mode or a merge mode is used, the inter prediction unit 180 may use motion information of a neighboring block as motion information of the current block.
  • In the case of the skip mode, unlike the merge mode, a residual signal is not transmitted. In the case of the motion vector prediction (MVP) mode, a motion vector of a neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling a motion vector difference (MVD). A simplified sketch of these mechanisms follows.
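  • As a simplified illustration of the merge and MVP mechanisms just described, the sketch below shows how a signaled candidate index selects a neighbor's motion information wholesale (merge) or selects a motion vector predictor to which the signaled MVD is added (MVP). The structures and candidate lists are hypothetical simplifications.

```cpp
#include <vector>

struct MotionVector { int x, y; };
struct MotionInfo { MotionVector mv; int ref_idx; };

// Merge mode: the candidate index selects a neighbor's motion info as-is.
MotionInfo derive_merge_motion(const std::vector<MotionInfo>& merge_list,
                               int merge_idx) {
  return merge_list[merge_idx];
}

// MVP mode: a neighbor's motion vector serves as the predictor, and the
// decoder adds the signaled motion vector difference (MVD).
MotionVector derive_mvp_motion(const std::vector<MotionVector>& mvp_list,
                               int mvp_idx, MotionVector mvd) {
  const MotionVector& pred = mvp_list[mvp_idx];
  return {pred.x + mvd.x, pred.y + mvd.y};
}
```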
  • The prediction unit may generate a prediction signal (prediction sample) based on various prediction methods described later. For example, the prediction unit may not only apply intra prediction or inter prediction to predict one block, but may also apply intra prediction and inter prediction together (simultaneously); this may be referred to as combined inter and intra prediction (CIIP). Also, the prediction unit may perform intra block copy (IBC) to predict a block. IBC may be used for coding of content such as game video, for example, in screen content coding (SCC). IBC may also be referred to as current picture referencing (CPR). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document. A minimal sketch of the IBC copy step is shown below.
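  • The following sketch illustrates the basic IBC prediction step described above: the predictor is copied from an already-reconstructed region of the current picture at an offset given by a block vector. The buffer layout and parameter names are assumptions for illustration, and range/validity checks on the block vector are omitted.

```cpp
#include <cstdint>

// recon: reconstructed samples of the current picture (stride samples per row)
// (x0, y0), w, h: position and size of the current block
// (bvx, bvy): block vector pointing into an already-decoded area
void ibc_predict(const uint8_t* recon, int stride,
                 int x0, int y0, int w, int h,
                 int bvx, int bvy,
                 uint8_t* pred, int pred_stride) {
  for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
      pred[y * pred_stride + x] =
          recon[(y0 + bvy + y) * stride + (x0 + bvx + x)];
}
```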
  • the prediction signal generated by the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
  • the transform unit 120 may generate transform coefficients by applying a transform technique to the residual signal.
  • The transform technique may use at least one of DCT (discrete cosine transform), DST (discrete sine transform), KLT (Karhunen-Loeve transform), GBT (graph-based transform), or CNT (conditionally non-linear transform).
  • GBT refers to transformation obtained from a graph representing relationship information between pixels.
  • CNT refers to a transform obtained based on a prediction signal generated using all previously reconstructed pixels.
  • In addition, the transform process may be applied to square pixel blocks having the same size, or may be applied to blocks of variable size other than square.
  • the quantization unit 130 quantizes the transform coefficients and transmits the quantized transform coefficients to the entropy encoding unit 190.
  • the entropy encoding unit 190 may encode a quantized signal (information on quantized transform coefficients) and output it as a bitstream. Information about the quantized transform coefficients may be referred to as residual information.
  • The quantization unit 130 may rearrange the quantized transform coefficients in block form into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the coefficients in the one-dimensional vector form. A sketch of this reordering follows.
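  • As an illustration of this reordering, the sketch below flattens an NxN coefficient block into a one-dimensional vector along an up-right diagonal scan. The actual scan order is defined by the codec, so the diagonal order here is only an assumption for demonstration.

```cpp
#include <algorithm>
#include <vector>

std::vector<int> scan_to_vector(const std::vector<std::vector<int>>& block) {
  const int n = static_cast<int>(block.size());  // assume an n x n block
  std::vector<int> out;
  out.reserve(n * n);
  for (int d = 0; d < 2 * n - 1; ++d)            // walk each anti-diagonal
    for (int y = std::min(d, n - 1); y >= 0 && d - y < n; --y)
      out.push_back(block[y][d - y]);            // (row y, column d - y)
  return out;
}
```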
  • the entropy encoding unit 190 may perform various encoding techniques such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • the entropy encoding unit 190 may encode information necessary for video/image restoration (eg, values of syntax elements) in addition to quantized transform coefficients together or separately.
  • The encoded information (e.g., video/image information) may be transmitted or stored in the form of a bitstream in units of network abstraction layer (NAL) units.
  • The video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • Signaled/transmitted information and/or syntax elements described later in this document may be encoded through the above-described encoding procedure and included in the bitstream.
  • the bitstream may be transmitted over a network or may be stored in a digital storage medium.
  • the network may include a broadcasting network and/or a communication network
  • the digital storage medium may include a storage medium such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • A transmission unit (not shown) for transmitting the signal output from the entropy encoding unit 190 and/or a storage unit (not shown) for storing it may be configured as internal/external elements of the encoding apparatus 100, or the transmission unit may be a component of the entropy encoding unit 190.
  • the quantized transform coefficients output from the quantization unit 130 may be used to generate a reconstructed signal.
  • a residual signal may be restored by applying inverse quantization and inverse transform through the inverse quantization unit 140 and the inverse transform unit 150 in the loop for the quantized transform coefficients.
  • The addition unit 155 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 180 or the intra prediction unit 185.
  • When there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
  • the addition unit 155 may be referred to as a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
  • the filtering unit 160 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
  • the filtering unit 160 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and may transmit the modified reconstructed picture to the DPB 175 of the memory 170.
  • Various filtering methods may include, for example, deblocking filtering, sample adaptive offset (SAO), adaptive loop filter (ALF), and bilateral filter.
  • the filtering unit 160 may generate filtering information and transmit the filtering information to the entropy encoding unit 190 as described later in the description of each filtering method.
  • the filtering information may be output in the form of a bitstream through entropy encoding in the entropy encoding unit 190.
  • the modified reconstructed picture transmitted to the DPB 175 may be used as a reference picture in the inter prediction unit 180.
  • the encoding apparatus 100 may avoid prediction mismatch between the encoding apparatus 100 and the decoding apparatus 200 by using the modified reconstructed picture, and may improve encoding efficiency.
  • the DPB 175 may store the modified reconstructed picture for use as a reference picture in the inter prediction unit 180.
  • The memory 170 may store motion information of a block from which motion information in the current picture is derived (or encoded) and/or motion information of blocks in an already reconstructed picture. The stored motion information may be transmitted to the inter prediction unit 180 for use as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • the memory 170 may store reconstructed samples of reconstructed blocks in the current picture, and transfer information on the reconstructed samples to the intra prediction unit 185.
  • FIG. 3 shows an example of a schematic block diagram of a decoding apparatus for decoding an image signal according to an embodiment of the present specification.
  • the decoding device 200 of FIG. 3 may correspond to the decoding device 22 of FIG. 1.
  • The decoding apparatus 200 may include an entropy decoding module 210, a de-quantization module 220, an inverse transform module 230, an addition module 235, a filtering module 240, a memory 250, an inter prediction module 260, and an intra prediction module 265.
  • The inter prediction unit 260 and the intra prediction unit 265 may be collectively referred to as a prediction module. That is, the prediction unit may include the inter prediction unit 260 and the intra prediction unit 265.
  • the inverse quantization unit 220 and the inverse transform unit 230 may be collectively referred to as a residual processing module. That is, the residual processing unit may include an inverse quantization unit 220 and an inverse transform unit 230.
  • The entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the addition unit 235, the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 may be configured by one hardware component (e.g., a decoder or a processor) according to an embodiment. Also, the memory 250 may include the DPB 255, and may be configured by one hardware component (e.g., a memory or a digital storage medium) according to an embodiment.
  • the decoding apparatus 200 may reconstruct an image in response to a process in which the video/image information is processed by the encoding apparatus 100 of FIG. 2.
  • the decoding apparatus 200 may perform decoding using a processing unit applied by the encoding apparatus 100.
  • the processing unit may be a coding unit, for example, and the coding unit may be divided from a coding tree unit or a maximum coding unit along a quad tree structure and/or a binary tree structure.
  • the reconstructed image signal decoded and output through the decoding device 200 may be reproduced through the playback device.
  • the decoding apparatus 200 may receive a signal output from the encoding apparatus 100 of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoding unit 210.
  • The entropy decoding unit 210 may parse the bitstream to derive information (e.g., video/image information) necessary for image reconstruction (or picture reconstruction).
  • The video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • the decoding apparatus may decode a picture based on information on a parameter set.
  • Signaled/received information and/or syntax elements described later in this document may be decoded through a decoding procedure and obtained from a bitstream.
  • The entropy decoding unit 210 may decode information in the bitstream using a coding technique such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements required for image reconstruction and quantized values of transform coefficients for the residual.
  • More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element from the bitstream, determines a context model using information on the syntax element to be decoded, decoding information of the current block and neighboring blocks, or information on a symbol/bin decoded in a previous step, predicts the probability of occurrence of the bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • After the context model is determined, the CABAC entropy decoding method may update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin. A toy sketch of this adaptation follows.
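  • The adaptive idea behind the context-model update can be sketched as follows. This is a toy exponential-decay approximation of probability adaptation, not the exact CABAC state machine or its table-driven transition rules.

```cpp
struct Context {
  double p1 = 0.5;  // estimated probability that the next bin is 1
};

// After each decoded bin, move the estimate toward the observed value, so the
// context gradually tracks the local bin statistics.
inline void update_context(Context& ctx, int bin) {
  constexpr double kRate = 1.0 / 32.0;  // adaptation speed (assumed)
  ctx.p1 += kRate * (bin - ctx.p1);
}
```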
  • Among the information decoded by the entropy decoding unit 210, information about prediction is provided to the prediction unit (the inter prediction unit 260 and the intra prediction unit 265), and the residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, the quantized transform coefficients and related parameter information, may be input to the inverse quantization unit 220. In addition, information about filtering among the information decoded by the entropy decoding unit 210 may be provided to the filtering unit 240. Meanwhile, a receiving unit (not shown) for receiving a signal output from the encoding apparatus 100 may be further configured as an internal/external element of the decoding apparatus 200, or the receiving unit may be a component of the entropy decoding unit 210.
  • The decoding apparatus 200 may be referred to as a video/image/picture decoding apparatus.
  • The decoding apparatus 200 may be divided into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder).
  • the information decoder may include an entropy decoding unit 210, and the sample decoder includes an inverse quantization unit 220, an inverse transform unit 230, an addition unit 235, a filtering unit 240, a memory 250, and inter prediction. It may include at least one of the unit 260 and the intra prediction unit 265.
  • the inverse quantization unit 220 may output transform coefficients through inverse quantization of the quantized transform coefficients.
  • the inverse quantization unit 220 may rearrange the quantized transform coefficients into a two-dimensional block shape. In this case, the reordering may be performed based on the coefficient scan order performed by the encoding apparatus 100.
  • the inverse quantization unit 220 may perform inverse quantization on quantized transform coefficients by using a quantization parameter (eg, quantization step size information) and obtain transform coefficients.
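  • As a rough illustration of this step, the sketch below scales quantized levels back by a quantization step derived from the quantization parameter. The formula Qstep = 2^((QP - 4) / 6), under which the step roughly doubles every 6 QP, is a common approximation rather than the exact derivation of any particular standard.

```cpp
#include <cmath>
#include <vector>

std::vector<double> dequantize(const std::vector<int>& levels, int qp) {
  const double step = std::pow(2.0, (qp - 4) / 6.0);  // approximate step size
  std::vector<double> coeffs;
  coeffs.reserve(levels.size());
  for (int level : levels) coeffs.push_back(level * step);  // scale each level
  return coeffs;
}
```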
  • the inverse transform unit 230 inversely transforms the transform coefficients to obtain a residual signal (residual block, residual sample array).
  • the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
  • the prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on information on prediction output from the entropy decoding unit 210, and may determine a specific intra/inter prediction mode.
  • The prediction unit may generate a prediction signal (prediction sample) based on various prediction methods described later. For example, the prediction unit may not only apply intra prediction or inter prediction to predict one block, but may also apply intra prediction and inter prediction together (simultaneously); this may be referred to as combined inter and intra prediction (CIIP). Also, the prediction unit may perform intra block copy (IBC) to predict a block. IBC may be used for coding of content such as game video, for example, in screen content coding (SCC). IBC may also be referred to as current picture referencing (CPR). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document.
  • the intra prediction unit 265 may predict the current block by referring to samples in the current picture.
  • the referenced samples may be located near the current block or may be spaced apart according to the prediction mode.
  • prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • the intra prediction unit 265 may determine a prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the inter prediction unit 260 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • motion information may be predicted in a block, subblock, or sample unit based on a correlation between motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the inter prediction unit 260 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on the received candidate selection information.
  • Inter prediction may be performed based on various prediction modes, and information on prediction may include information indicating a mode of inter prediction for a current block.
  • The addition unit 235 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 260 and/or the intra prediction unit 265). When there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
  • the addition unit 235 may be referred to as a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
  • the filtering unit 240 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filtering unit 240 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and may transmit the modified reconstructed picture to the DPB 255 of the memory 250 .
  • Various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
  • the modified reconstructed picture delivered to the DPB 255 of the memory 250 may be used as a reference picture by the inter prediction unit 260.
  • the memory 250 may store motion information of a block from which motion information in a current picture is derived (or decoded) and/or motion information of blocks in a picture that have already been reconstructed.
  • the stored motion information may be transmitted to the inter prediction unit 260 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • The memory 250 may store reconstructed samples of reconstructed blocks in the current picture, and may transfer the reconstructed samples to the intra prediction unit 265.
  • In the present specification, embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the encoding apparatus 100 may be applied equally or correspondingly to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the decoding apparatus 200, respectively.
  • FIG. 4 shows an example of a content streaming system according to an embodiment of the present specification.
  • A content streaming system to which embodiments of the present specification are applied may largely include an encoding server 410, a streaming server 420, a web server 430, a media storage 440, a user equipment 450, and a multimedia input device 460.
  • the encoding server 410 generates a bitstream by compressing content input from a multimedia input device 460 such as a smartphone, a camera, or a camcorder into digital data, and transmits the generated bitstream to the streaming server 420.
  • When a multimedia input device 460 such as a smartphone, a camera, or a camcorder directly generates a bitstream, the encoding server 410 may be omitted.
  • the bitstream may be generated by an encoding method or a bitstream generation method to which an embodiment of the present specification is applied, and the streaming server 420 may temporarily store the bitstream while transmitting or receiving the bitstream.
  • the streaming server 420 transmits multimedia data to the user device 450 based on a user request through the web server 430, and the web server 430 serves as an intermediary that informs the user of what kind of service exists.
  • the web server 430 transmits information on the requested service to the streaming server 420, and the streaming server 420 transmits multimedia data to the user.
  • the content streaming system may include a separate control server, and in this case, the control server serves to control commands/responses between devices in the content streaming system.
  • the streaming server 420 may receive content from the media storage 440 and/or the encoding server 410. For example, when content is received from the encoding server 410, the content may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server 420 may store the bitstream for a predetermined time.
  • The user device 450 may include, for example, a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, or a head mounted display (HMD)), a digital TV, a desktop computer, and digital signage.
  • Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributedly processed.
  • FIG. 5 shows an example of a video signal processing apparatus according to an embodiment of the present specification.
  • The video signal processing apparatus of FIG. 5 may correspond to the encoding apparatus 100 of FIG. 2 or the decoding apparatus 200 of FIG. 3.
  • The video signal processing apparatus 500 for processing a video signal includes a memory 520 for storing the video signal and a processor 510 coupled to the memory 520 to process the video signal.
  • the processor 510 may be configured with at least one processing circuit for processing a video signal, and may process a video signal by executing instructions for encoding/decoding a video signal. That is, the processor 510 may encode original video data or decode an encoded video signal by executing encoding/decoding methods described below.
  • the processor 510 may be composed of one or more processors corresponding to each of the modules of FIG. 2 or 3.
  • the memory 520 may correspond to the memory 170 of FIG. 2 or the memory 250 of FIG. 3.
  • the video/image coding method according to the present specification may be performed based on a split structure described later.
  • Procedures such as prediction, residual processing (e.g., (inverse) transform, (inverse) quantization), syntax element coding, and filtering, which will be described later, may be performed based on a coding tree unit (CTU) and a CU (and/or TU, PU) derived from the split structure.
  • The block division procedure may be performed by the image partitioning unit 110 of the encoding apparatus 100 described above, and division-related information may be (encoding-)processed by the entropy encoding unit 190 and transferred to the decoding apparatus 200 in the form of a bitstream.
  • The entropy decoding unit 210 of the decoding apparatus 200 may derive the block division structure of the current block based on the division-related information obtained from the bitstream and, based on this, perform a series of procedures for image decoding (e.g., prediction, residual processing, block/picture reconstruction, in-loop filtering).
  • an image processing unit may have a hierarchical structure.
  • One picture may be divided into one or more tiles or tile groups.
  • One tile group may include one or more tiles.
  • One tile may include one or more CTUs.
  • the CTU can be divided into one or more CUs.
  • a tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture.
  • the tile group may include an integer number of tiles according to a tile raster scan in a picture.
  • the tile group header may convey information/parameters applicable to the corresponding tile group.
  • A tile group may have one of tile group types including an intra (I) tile group, a predictive (P) tile group, and a bi-predictive (B) tile group.
  • For blocks in an I tile group, inter prediction is not used and only intra prediction may be used.
  • Of course, even in this case, a coded original sample value may be signaled without prediction.
  • Intra prediction or inter prediction may be used for blocks in a P tile group, and when inter prediction is used, only uni prediction may be used.
  • intra prediction or inter prediction may be used for blocks in the B tile group, and when inter prediction is used, not only unidirectional prediction but also bi prediction may be used.
  • FIG. 6 illustrates an example of a picture division structure according to an embodiment of the present specification.
  • a picture having 216 (18 by 12) luminance CTUs is divided into 12 tiles and 3 tile groups.
  • The encoder determines the tile/tile group size and the maximum and minimum coding unit sizes according to characteristics (e.g., resolution) of the video image or in consideration of coding efficiency or parallel processing, and information about this, or information for deriving it, may be included in the bitstream.
  • The decoder may obtain information indicating the tile/tile group of the current picture and whether a CTU within a tile is divided into a plurality of coding units. Coding efficiency can be increased if such information is acquired (decoded) only under specific conditions rather than always.
  • the tile group header may include information/parameters commonly applicable to the tile group.
  • The APS (APS syntax) may include information/parameters commonly applicable to one or more pictures.
  • The PPS (PPS syntax) may include information/parameters commonly applicable to one or more pictures.
  • The SPS (SPS syntax) may include information/parameters commonly applicable to one or more sequences.
  • The VPS (VPS syntax) may include information/parameters commonly applicable to the entire video.
  • the high-level syntax in the present specification may include at least one of APS syntax, PPS syntax, SPS syntax, and VPS syntax.
  • information on the division and configuration of a tile/tile group may be configured in an encoder through a higher level syntax and then transmitted to a decoder in a bitstream form.
  • FIG. 7A to 7D illustrate examples of a block division structure according to an embodiment of the present specification.
  • FIG. 7A shows an example of block division by a quadtree (QT), FIG. 7B by a binary tree (BT), FIG. 7C by a ternary tree (TT), and FIG. 7D by an asymmetric tree (AT).
  • one block may be divided based on a QT division scheme.
  • one subblock divided by the QT division method may be further divided recursively according to the QT division method.
  • a leaf block that is no longer divided by the QT division method may be divided by at least one of BT, TT, or AT.
  • BT can have two types of division, such as horizontal BT (2NxN, 2NxN) and vertical BT (Nx2N, Nx2N).
  • TT may have two types of division, such as horizontal TT (2Nx1/2N, 2NxN, 2Nx1/2N) and vertical TT (1/2Nx2N, Nx2N, 1/2Nx2N).
  • AT can have four types of division: horizontal-up AT (2Nx1/2N, 2Nx3/2N), horizontal-down AT (2Nx3/2N, 2Nx1/2N), vertical-left AT (1/2Nx2N, 3/2Nx2N), and vertical-right AT (3/2Nx2N, 1/2Nx2N).
  • Each BT, TT, AT can be further divided recursively using BT, TT, AT.
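  • As an illustration of the division geometries listed above, the following sketch (not part of the patent text; names are hypothetical) computes the sub-block sizes produced by each of the QT, BT, TT, and AT division types for a parent block of width w and height h (a 2Nx2N block in the text's notation).

```python
# Illustrative sketch (names hypothetical): sub-block sizes produced by each
# division scheme described above.

def split_sizes(w, h, mode):
    """Return the (width, height) of each sub-block for a given split mode."""
    if mode == "QT":            # quadtree: four equal quadrants
        return [(w // 2, h // 2)] * 4
    if mode == "BT_HOR":        # horizontal BT: 2NxN + 2NxN
        return [(w, h // 2)] * 2
    if mode == "BT_VER":        # vertical BT: Nx2N + Nx2N
        return [(w // 2, h)] * 2
    if mode == "TT_HOR":        # horizontal TT: 2Nx1/2N + 2NxN + 2Nx1/2N
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    if mode == "TT_VER":        # vertical TT: 1/2Nx2N + Nx2N + 1/2Nx2N
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    if mode == "AT_HOR_UP":     # horizontal-up AT: 2Nx1/2N + 2Nx3/2N
        return [(w, h // 4), (w, 3 * h // 4)]
    if mode == "AT_HOR_DOWN":   # horizontal-down AT: 2Nx3/2N + 2Nx1/2N
        return [(w, 3 * h // 4), (w, h // 4)]
    if mode == "AT_VER_LEFT":   # vertical-left AT: 1/2Nx2N + 3/2Nx2N
        return [(w // 4, h), (3 * w // 4, h)]
    if mode == "AT_VER_RIGHT":  # vertical-right AT: 3/2Nx2N + 1/2Nx2N
        return [(3 * w // 4, h), (w // 4, h)]
    raise ValueError(mode)

assert split_sizes(128, 128, "TT_VER") == [(32, 128), (64, 128), (32, 128)]
```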
  • Block A may be divided into four sub-blocks (A0, A1, A2, A3) by QT.
  • Sub-block A1 may be divided into four sub-blocks (B0, B1, B2, B3) by QT again.
  • Block B3 that is no longer divided by QT may be divided by vertical BT (C0, C1) or horizontal BT (D0, D1). Like block C0, each sub-block may be further divided recursively in the form of horizontal BT (E0, E1) or vertical BT (F0, F1).
  • Block B3 which is no longer divided by QT may be divided into vertical TT (C0, C1, C2) or horizontal TT (D0, D1, D2). Like block C1, each sub-block may be further divided recursively in the form of horizontal TT (E0, E1, E2) or vertical TT (F0, F1, F2).
  • Block B3, which is no longer divided by QT, can be divided into vertical ATs (C0, C1) or horizontal ATs (D0, D1). Like block C1, each sub-block can be further divided recursively in the form of a horizontal AT (E0, E1) or a vertical AT (F0, F1).
  • BT, TT, and AT division can be applied together in one block.
  • a sub-block divided by BT may be divided by TT or AT.
  • sub-blocks divided by TT may be divided by BT or AT.
  • Sub-blocks divided by AT may be divided by BT or TT.
  • each sub-block may be divided by vertical BT.
  • each sub-block may be divided by horizontal BT. In this case, the order of division is different, but the shape of the final division is the same.
  • the order of searching for the block may be variously defined.
  • For example, when a search is performed from left to right and from top to bottom, searching for a block may mean the order of determining whether to further divide each divided sub-block, the encoding order of each sub-block when the block is no longer divided, or the search order when a sub-block refers to information of another neighboring block.
  • Virtual pipeline data units (VPDUs) may be defined as non-overlapping units within one picture.
  • successive VPDUs can be processed simultaneously by multiple pipeline stages.
  • the VPDU size is roughly proportional to the buffer size in most pipeline stages, so keeping the VPDU size small is important from a hardware buffer-size perspective.
  • the VPDU size can be set equal to the maximum TB size.
  • the VPDU size may be 64x64 (64x64 luminance samples) size.
  • the VPDU size may be changed (increased or decreased) in consideration of the TT and/or BT partition described above.
  • FIG. 8 shows an example of a case in which TT and BT division are restricted according to an embodiment of the present specification.
  • at least one of the following restrictions may be applied as illustrated in FIG. 8.
  • TT split is not allowed for a CU with either width or height, or both width and height, equal to 128.
  • For a 128xN CU with N ≤ 64 (i.e., width 128 and height smaller than 128), horizontal BT is not allowed; for an Nx128 CU with N ≤ 64 (i.e., width smaller than 128 and height 128), vertical BT is not allowed.
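  • A minimal sketch of the restrictions above, assuming a 64x64 VPDU; the helper name and interface are illustrative, not the codec's actual partitioner.

```python
# Hedged sketch of the FIG. 8 restrictions, assuming a 64x64 VPDU.

def split_allowed(w, h, mode, vpdu=64):
    if mode in ("TT_HOR", "TT_VER") and (w > vpdu or h > vpdu):
        return False          # no TT when width or height (or both) is 128
    if mode == "BT_VER" and w <= vpdu < h:
        return False          # no vertical BT for an Nx128 CU with N <= 64
    if mode == "BT_HOR" and h <= vpdu < w:
        return False          # no horizontal BT for a 128xN CU with N <= 64
    return True

assert not split_allowed(128, 128, "TT_HOR")   # TT of a 128x128 CU is restricted
assert not split_allowed(64, 128, "BT_VER")    # vertical BT of a 64x128 CU
assert split_allowed(128, 128, "BT_VER")       # 128x128 BT yields two 64x128 CUs
```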
  • pictures constituting the video/video may be encoded/decoded according to a series of decoding orders.
  • a picture order corresponding to an output order of a decoded picture may be set differently from a decoding order, and based on this, not only forward prediction but also backward prediction may be performed during inter prediction.
  • step S910 may be performed by the prediction units 180 and 185 of the encoding apparatus 100 described in FIG. 2, and step S920 may be performed by the residual processing units 115, 120, and 130.
  • Step S930 may be performed by the entropy encoding unit 190.
  • Step S910 may include the inter/intra prediction procedure described in this document, step S920 may include the residual processing procedure described in this document, and step S930 may include the information encoding procedure described in this document.
  • the picture encoding procedure may include not only a procedure of encoding information for picture restoration (e.g., prediction information, residual information, partitioning information) and outputting it in bitstream form, as schematically described in FIG. 2, but also a procedure of generating a reconstructed picture for the current picture and a procedure (optional) of applying in-loop filtering to the reconstructed picture.
  • the encoding apparatus 100 may derive (modified) residual samples from the quantized transform coefficients through the inverse quantization unit 140 and the inverse transform unit 150, and may generate a reconstructed picture based on the prediction samples, which are the output of step S910, and the (modified) residual samples.
  • the reconstructed picture generated in this way may be the same as the reconstructed picture generated by the decoding apparatus 200 described above.
  • A modified reconstructed picture can be generated through an in-loop filtering procedure for the reconstructed picture; it may be stored in the memory 170 (DPB 175) and, as in the case of the decoding device 200, used as a reference picture in the inter prediction procedure when encoding a subsequent picture. As described above, in some cases some or all of the in-loop filtering procedure may be omitted.
  • When in-loop filtering is applied, (in-loop) filtering-related information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream, and based on this the decoding apparatus 200 may perform the in-loop filtering procedure in the same manner as the encoding apparatus 100.
  • Through this, the encoding device 100 and the decoding device 200 can derive the same prediction result, increase the reliability of picture coding, and reduce the amount of data transmitted for picture coding.
  • Step S1010 may be performed by the entropy decoding unit 210 of the decoding apparatus 200 of FIG. 3, step S1020 by the prediction units 260 and 265, step S1030 by the residual processing units 220 and 230, step S1040 by the addition unit 235, and step S1050 by the filtering unit 240.
  • Step S1010 may include the information decoding procedure described in this document, step S1020 may include the inter/intra prediction procedure described in this document, and step S1030 may include the residual processing procedure described in this document.
  • step S1040 may include the block/picture restoration procedure described in this document, and step S1050 may include the in-loop filtering procedure described in this document.
  • As described above, the picture decoding procedure may schematically include a procedure for obtaining image/video information from a bitstream (through decoding) (S1010), a picture restoration procedure (S1020 to S1040), and an in-loop filtering procedure for the reconstructed picture (S1050).
  • the picture restoration procedure may be performed based on the prediction samples and residual samples obtained through the inter/intra prediction (S1020) and residual processing (S1030; inverse quantization and inverse transformation of quantized transform coefficients) described in this document.
  • A modified reconstructed picture may be generated through the in-loop filtering procedure for the reconstructed picture generated through the picture restoration procedure; the modified reconstructed picture may be output as a decoded picture, stored in the DPB 255 of the decoding apparatus 200, and used as a reference picture in the inter prediction procedure when decoding a subsequent picture.
  • In some cases, the in-loop filtering procedure may be omitted; in this case, the reconstructed picture may be output as a decoded picture, stored in the DPB 255 of the decoding device 200, and used as a reference picture in the inter prediction procedure when decoding a subsequent picture.
  • As described above, the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, an adaptive loop filter (ALF) procedure, and/or a bi-lateral filter procedure, and some or all of these may be omitted.
  • one or some of the deblocking filtering procedure, the SAO procedure, the ALF procedure, and the bilateral filter procedure may be applied sequentially, or all of them may be applied sequentially. For example, after the deblocking filtering procedure is applied to the reconstructed picture, the SAO procedure may be performed. Or, for example, after the deblocking filtering procedure is applied, the ALF procedure may be performed. This may be performed similarly in the encoding device 100.
  • a reconstructed block may be generated based on intra prediction/inter prediction for each block, and a reconstructed picture including the reconstructed blocks may be generated.
  • When the current picture/slice/tile group is an I picture/slice/tile group, blocks included in the current picture/slice/tile group may be reconstructed based only on intra prediction.
  • When the current picture/slice/tile group is a P or B picture/slice/tile group, inter prediction may be applied to some blocks in the current picture/slice/tile group, and intra prediction may be applied to the remaining blocks.
  • the color component of a picture may include a luminance component and a chrominance component, and the methods and embodiments proposed in this document may be applied to the luminance component and the chrominance component unless explicitly limited in this document.
  • FIG. 11 illustrates an example of a hierarchical structure for a coded image according to an embodiment of the present specification.
  • the coded image may be divided into a video coding layer (VCL) that deals with the image decoding process itself, a subsystem that transmits and stores the encoded information, and a network abstraction layer (NAL) that exists between the VCL and the subsystem and is responsible for network adaptation functions.
  • In the VCL, VCL data including compressed image data (tile group data) may be generated, or a parameter set including information such as a PPS (picture parameter set), SPS (sequence parameter set), or VPS (video parameter set), or an SEI (supplemental enhancement information) message additionally required for the image decoding process may be generated.
  • In the NAL, a NAL unit may be generated by adding header information (NAL unit header) to the raw byte sequence payload (RBSP) generated in the VCL.
  • Here, the RBSP refers to the tile group data, parameter set, or SEI message generated in the VCL.
  • The NAL unit header may include NAL unit type information specified according to the RBSP data included in the corresponding NAL unit.
  • the NAL unit may be divided into a VCL NAL unit and a Non-VCL NAL unit according to an RBSP generated from VCL.
  • the VCL NAL unit may mean a NAL unit that includes information about an image (tile group data), and the Non-VCL NAL unit may mean a NAL unit that includes information (a parameter set or an SEI message) necessary for decoding an image.
  • VCL NAL unit and Non-VCL NAL unit may be transmitted through a network with header information added according to the data standard of the sub-system.
  • For example, the NAL unit may be converted into a data format of a predetermined standard, such as the H.266/VVC file format, real-time transport protocol (RTP), or transport stream (TS), and then transmitted through various networks.
  • the NAL unit type may be specified according to the RBSP data structure included in the corresponding NAL unit, and information on the NAL unit type may be stored in the NAL unit header and signaled.
  • the NAL unit may be largely classified into a VCL NAL unit type and a Non-VCL NAL unit type according to whether or not information on an image (tile group data) is included.
  • the VCL NAL unit type may be classified according to the nature and type of a picture included in the VCL NAL unit, and the non-VCL NAL unit type may be classified according to the type of the parameter set.
  • The following are examples of NAL unit types specified according to the type of parameter set included in a Non-VCL NAL unit: an APS NAL unit, a type for a NAL unit including an APS; a VPS NAL unit, a type for a NAL unit including a VPS; an SPS NAL unit, a type for a NAL unit including an SPS; and a PPS NAL unit, a type for a NAL unit including a PPS.
  • NAL unit types have syntax information for the NAL unit type, and the syntax information may be stored in the NAL unit header and signaled.
  • syntax information may be nal_unit_type, and NAL unit types may be specified by nal_unit_type values.
  • the higher-level syntax may include at least one of APS syntax, PPS syntax, SPS syntax, and VPS syntax.
  • the image/video information that is encoded by the encoding device 100 and signaled to the decoding device 200 in the form of a bitstream may include not only intra-picture partitioning-related information, intra/inter prediction information, residual information, and in-loop filtering information, but also information included in the APS, information included in the PPS, information included in the SPS, and/or information included in the VPS.
  • inter prediction described below may be performed by the inter prediction unit 180 of the encoding apparatus 100 of FIG. 2 or the inter prediction unit 260 of the decoding apparatus 200 of FIG. 3.
  • data encoded according to an embodiment of the present specification may be stored in the form of a bitstream.
  • the prediction unit of the encoding device 100/decoding device 200 may derive a prediction sample by performing inter prediction in block units.
  • Inter prediction may represent a prediction derived in a manner dependent on data elements (e.g., sample values or motion information) of picture(s) other than the current picture.
  • When inter prediction is applied to the current block, a predicted block (prediction sample array) for the current block may be derived based on a reference block (reference sample array) specified by a motion vector on the reference picture indicated by a reference picture index.
  • motion information of the current block may be predicted in units of blocks, subblocks, or samples based on correlation between motion information between neighboring blocks and the current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction type (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • the temporal neighboring block may be called a collocated reference block, a co-located CU (colCU), or the like, and a reference picture including a temporal neighboring block may be referred to as a collocated picture (colPic).
  • When inter prediction is applied, a motion information candidate list may be constructed based on neighboring blocks of the current block, and flag or index information indicating which candidate is selected (used) to derive the motion vector and/or reference picture index of the current block may be signaled.
  • Inter prediction may be performed based on various prediction modes. For example, in the case of a skip mode and a merge mode, motion information of a current block may be the same as motion information of a selected neighboring block.
  • In the case of the skip mode, unlike the merge mode, a residual signal for the corresponding block may not be transmitted.
  • In the case of the motion vector prediction (MVP) mode, the motion vector of a selected neighboring block is used as a motion vector predictor, and a motion vector difference may be signaled.
  • the motion vector of the current block may be derived using the sum of the motion vector predictor and the motion vector difference.
  • FIG. 12 is an example of a flowchart for inter prediction in a process of encoding a video signal according to an embodiment of the present specification
  • FIG. 13 illustrates an example of an inter prediction unit in an encoding apparatus according to an embodiment of the present specification.
  • the encoding apparatus 100 performs inter prediction on the current block (S1210).
  • the encoding apparatus 100 may derive inter prediction mode and motion information of the current block and generate prediction samples of the current block.
  • the procedure of determining the inter prediction mode, deriving motion information, and generating prediction samples may be performed simultaneously, or one procedure may be performed before the other procedure.
  • the inter prediction unit 180 of the encoding apparatus 100 may include a prediction mode determining unit 181, a motion information deriving unit 182, and a prediction sample deriving unit 183; the prediction mode determining unit 181 may determine the prediction mode for the current block, the motion information deriving unit 182 may derive the motion information of the current block, and the prediction sample deriving unit 183 may derive the prediction samples of the current block.
  • For example, the inter prediction unit 180 of the encoding apparatus 100 may search for a block similar to the current block within a certain area (search area) of reference pictures through motion estimation, and may derive a reference block whose difference from the current block is a minimum or below a certain criterion.
  • a reference picture index indicating a reference picture in which the reference block is located may be derived, and a motion vector may be derived based on a position difference between the reference block and the current block.
  • the encoding apparatus 100 may determine a mode applied to the current block among various prediction modes.
  • the encoding apparatus 100 may compare rate-distortion (RD) costs for various prediction modes and determine an optimal prediction mode for the current block.
  • For example, when the skip mode or the merge mode is applied to the current block, the encoding apparatus 100 configures a merge candidate list to be described later, and may derive, among the reference blocks indicated by the merge candidates included in the merge candidate list, a reference block whose difference from the current block is a minimum or below a certain criterion. In this case, a merge candidate associated with the derived reference block is selected, and merge index information indicating the selected merge candidate may be generated and signaled to the decoding apparatus 200. Motion information of the current block may be derived using the motion information of the selected merge candidate.
  • As another example, when the (A)MVP mode is applied to the current block, the encoding apparatus 100 configures an (A)MVP candidate list to be described later, and the motion vector of an MVP candidate selected from among the motion vector predictor (MVP) candidates included in the (A)MVP candidate list may be used as the MVP of the current block. In this case, a motion vector indicating the reference block derived by the motion estimation described above may be used as the motion vector of the current block, and the MVP candidate having the motion vector with the smallest difference from the motion vector of the current block may become the selected MVP candidate.
  • a motion vector difference (MVD) which is a difference obtained by subtracting MVP from the motion vector of the current block, may be derived.
  • information on the MVD may be signaled to the decoding apparatus 200.
  • the value of the reference picture index may be separately signaled to the decoding apparatus 200 by configuring reference picture index information.
  • the encoding apparatus 100 may derive residual samples based on the prediction samples (S1220). The encoding apparatus 100 may derive residual samples by comparing the original samples of the current block with the prediction samples.
  • the encoding apparatus 100 encodes video information including prediction information and residual information (S1230).
  • the encoding apparatus 100 may output the encoded image information in the form of a bitstream.
  • the prediction information is information related to a prediction procedure and may include prediction mode information (eg, skip flag, merge flag, or mode index) and motion information.
  • the motion information may include candidate selection information (eg, merge index, mvp flag, or mvp index) that is information for deriving a motion vector. Further, the motion information may include information on the MVD and/or reference picture index information described above. Further, the motion information may include information indicating whether L0 prediction, L1 prediction, or bi prediction is applied.
  • the residual information is information about residual samples.
  • the residual information may include information about quantized transform coefficients for residual samples.
  • the prediction mode information and motion information may be collectively referred to as inter prediction information.
  • the output bitstream may be stored in a (digital) storage medium and transmitted to a decoding device, or may be transmitted to a decoding device through a network.
  • the encoding apparatus may generate a reconstructed picture (including reconstructed samples and a reconstructed block) based on the prediction samples and the residual samples. This is for the encoding device 100 to derive the same prediction result as that derived by the decoding device 200, and coding efficiency can be improved through this. Accordingly, the encoding apparatus 100 may store the reconstructed picture (or reconstructed samples, reconstructed block) in memory and use it as a reference picture for inter prediction. As described above, an in-loop filtering procedure or the like may be further applied to the reconstructed picture.
  • FIG. 14 is an example of a flowchart for inter prediction in a process of decoding a video signal according to an embodiment of the present specification
  • FIG. 15 shows an example of an inter prediction unit in a decoding apparatus according to an embodiment of the present specification.
  • the decoding apparatus 200 may perform an operation corresponding to the operation performed by the encoding apparatus 100.
  • the decoding apparatus 200 may perform prediction on the current block and derive prediction samples based on the received prediction information.
  • the decoding apparatus 200 may determine a prediction mode for the current block based on the received prediction information (S1410).
  • the decoding apparatus 200 may determine which inter prediction mode is applied to the current block based on prediction mode information in the prediction information.
  • For example, the decoding apparatus 200 may determine whether the merge mode or the (A)MVP mode is applied to the current block based on the merge flag. Alternatively, the decoding apparatus 200 may select one of various inter prediction mode candidates based on a mode index. Inter prediction mode candidates may include a skip mode, a merge mode, and/or an (A)MVP mode, or may include various inter prediction modes described below.
  • the decoding apparatus 200 derives motion information of the current block based on the determined inter prediction mode (S1420). For example, when the skip mode or the merge mode is applied to the current block, the decoding apparatus 200 may configure a merge candidate list to be described later and select one merge candidate from among the merge candidates included in the merge candidate list. The selection may be performed based on a merge index, and the motion information of the selected merge candidate may be used as the motion information of the current block.
  • As another example, when the (A)MVP mode is applied to the current block, the decoding apparatus 200 constructs an (A)MVP candidate list to be described later, and may use the motion vector of an MVP candidate selected from among the MVP candidates included in the (A)MVP candidate list as the MVP of the current block.
  • the selection of MVP may be performed based on the above-described selection information (MVP flag or MVP index).
  • the decoding apparatus 200 may derive the MVD of the current block based on the information on the MVD, and may derive a motion vector of the current block based on the MVP and the MVD of the current block.
  • the decoding apparatus 200 may derive the reference picture index of the current block based on the reference picture index information.
  • the picture indicated by the reference picture index in the reference picture list for the current block may be derived as a reference picture referenced for inter prediction of the current block.
  • motion information of the current block may be derived without configuring a candidate list.
  • motion information of the current block may be derived according to a procedure disclosed in a prediction mode to be described later.
  • the configuration of the candidate list as described above may be omitted.
  • the decoding apparatus 200 may generate prediction samples for the current block based on the motion information of the current block (S1430). In this case, the decoding apparatus 200 may derive the reference picture based on the reference picture index of the current block, and may derive the prediction samples of the current block using the samples of the reference block indicated on the reference picture by the motion vector of the current block. In this case, as will be described later, a prediction sample filtering procedure may be further performed on all or some of the prediction samples of the current block.
  • the inter prediction unit 260 of the decoding apparatus 200 may include a prediction mode determination unit 261, a motion information derivation unit 262, and a prediction sample derivation unit 263; the prediction mode determination unit 261 may determine the prediction mode for the current block based on the received prediction mode information, the motion information derivation unit 262 may derive the motion information (motion vector and/or reference picture index) of the current block based on the received information on the motion information, and the prediction sample derivation unit 263 may derive the prediction samples of the current block.
  • the decoding apparatus 200 generates residual samples for the current block based on the received residual information (S1440).
  • the decoding apparatus 200 may generate reconstructed samples for the current block based on the prediction samples and the residual samples, and generate a reconstructed picture based on this (S1450). Thereafter, as described above, an in-loop filtering procedure or the like may be further applied to the reconstructed picture.
  • the inter prediction procedure may include determining an inter prediction mode, deriving motion information according to the determined prediction mode, and performing prediction based on the derived motion information (generating a prediction sample).
  • Various inter prediction modes may be used for prediction of the current block in a picture. For example, various modes such as a merge mode, a skip mode, an MVP mode, and an affine mode may be used.
  • a decoder side motion vector refinement (DMVR) mode, an adaptive motion vector resolution (AMVR) mode, or the like may be further used as an auxiliary mode.
  • the MVP mode may also be called an advanced motion vector prediction (AMVP) mode.
  • Prediction mode information indicating the inter prediction mode of the current block may be signaled from the encoding device to the decoding device 200.
  • the prediction mode information may be included in the bitstream and received by the decoding apparatus 200.
  • the prediction mode information may include index information indicating one of a plurality of candidate modes.
  • the inter prediction mode may be indicated through hierarchical signaling of flag information.
  • the prediction mode information may include one or more flags.
  • For example, the encoding apparatus 100 signals a skip flag to indicate whether the skip mode is applied; when the skip mode is not applied, signals a merge flag to indicate whether the merge mode is applied; and when the merge mode is not applied, indicates that the MVP mode is applied or may further signal a flag for additional classification.
  • The affine mode may be signaled as an independent mode, or may be signaled as a mode dependent on the merge mode or the MVP mode.
  • the affine mode may be composed of one candidate of a merge candidate list or an MVP candidate list, as described later.
  • the encoding device 100 or the decoding device 200 may perform inter prediction using motion information of the current block.
  • the encoding apparatus 100 may derive optimal motion information for the current block through a motion estimation procedure. For example, the encoding apparatus 100 may search for a similar reference block with high correlation, using the original block in the original picture for the current block, in units of fractional pixels within a predetermined search range in the reference picture, and may derive motion information through this.
  • the similarity of the block may be derived based on the difference between the phase-based sample values.
  • the similarity of blocks may be calculated based on a sum of absolute difference (SAD) between a current block (or a template of a current block) and a reference block (or a template of a reference block).
  • SAD sum of absolute difference
  • motion information may be derived based on the reference block having the smallest SAD in the search area.
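  • The following is a minimal sketch of SAD-based block matching as described above; the full-search strategy, search range, and helper names are illustrative assumptions, not the encoder's actual motion estimation.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized sample blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def motion_estimate(cur, ref, x, y, bw, bh, search=8):
    """Full search over integer positions; returns the MV with the smallest SAD."""
    block = cur[y:y + bh, x:x + bw]
    best_mv, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = x + dx, y + dy
            if rx < 0 or ry < 0 or rx + bw > ref.shape[1] or ry + bh > ref.shape[0]:
                continue                     # keep the candidate inside the picture
            cost = sad(block, ref[ry:ry + bh, rx:rx + bw])
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost
```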
  • the derived motion information may be signaled to the decoding apparatus according to various methods based on the inter prediction mode.
  • the encoding apparatus 100 may indicate motion information of the current prediction block by transmitting flag information indicating that the merge mode has been used and a merge index indicating which prediction block is used.
  • In order to perform the merge mode, the encoding apparatus 100 must search for merge candidate blocks used to derive the motion information of the current prediction block. For example, up to five merge candidate blocks may be used, but the present specification is not limited thereto. The maximum number of merge candidate blocks may be transmitted in a slice header or a tile group header, and the present specification is not limited thereto. After finding the merge candidate blocks, the encoding apparatus 100 may generate a merge candidate list and select the merge candidate block having the lowest cost among them as the final merge candidate block.
  • the merge candidate list may use, for example, 5 merge candidate blocks. For example, four spatial merge candidates and one temporal merge candidate can be used.
  • FIG. 16 illustrates examples of spatial neighboring blocks used as spatial merge candidates according to an embodiment of the present specification.
  • As spatial merge candidates for prediction of the current block, a left neighboring block A1, a bottom-left neighboring block A0, a top-right neighboring block B0, an upper neighboring block B1, and a top-left neighboring block B2 may be used.
  • the merge candidate list for the current block may be configured based on the procedure shown in FIG. 17.
  • 17 is an example of a flowchart for configuring a merge candidate list according to an embodiment of the present specification.
  • the coding apparatus inserts spatial merge candidates derived by searching for spatial neighboring blocks of the current block into the merge candidate list (S1710).
  • the spatial neighboring blocks may include a block around a lower left corner of a current block, a block around a left, a block around an upper right corner, a block around an upper side, and blocks around an upper left corner.
  • In addition, additional neighboring blocks, such as a right neighboring block, a bottom neighboring block, and a bottom-right neighboring block, may be further used as the spatial neighboring blocks.
  • the coding apparatus may detect available blocks by searching spatial neighboring blocks based on priority, and derive motion information of the detected blocks as spatial merge candidates.
  • For example, the encoding device 100 or the decoding device 200 may search the five blocks shown in FIG. 16 in the order of A1, B1, B0, A0, B2, and sequentially index the available candidates to construct the merge candidate list.
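  • A hedged sketch of this scan order follows; get_motion is a hypothetical accessor standing in for the codec's neighbor-availability logic.

```python
# Hedged sketch of the A1, B1, B0, A0, B2 spatial scan described above;
# get_motion(pos) returns a neighbour's motion information, or None when
# the neighbour is unavailable (e.g., outside the picture or intra-coded).

def spatial_merge_candidates(get_motion, max_spatial=4):
    candidates = []
    for pos in ("A1", "B1", "B0", "A0", "B2"):
        info = get_motion(pos)
        if info is None or info in candidates:
            continue                        # skip unavailable or duplicate motion
        candidates.append(info)
        if len(candidates) == max_spatial:  # e.g., at most four spatial candidates
            break
    return candidates

# Usage sketch: neighbours = {"A1": ((1, 0), 0), "B1": None, "B0": ((2, 1), 0),
#                             "A0": ((1, 0), 0), "B2": ((0, 3), 1)}
# spatial_merge_candidates(neighbours.get)  -> deduplicated candidate motion list
```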
  • the coding apparatus inserts a temporal merge candidate derived by searching for a temporal neighboring block of the current block into the merge candidate list (S1720).
  • the temporal neighboring block may be located on a reference picture that is a picture different from the current picture in which the current block is located.
  • a reference picture in which a temporal neighboring block is located may be referred to as a collocated picture or a col picture.
  • the temporal neighboring block may be searched in the order of the bottom-right corner neighboring block of the co-located block for the current block on the collocated picture, followed by the center bottom-right block. Meanwhile, when motion data compression is applied, specific motion information may be stored as representative motion information for each predetermined storage unit in the collocated picture.
  • the predetermined storage unit may be predetermined, for example, as a 16x16 sample unit or an 8x8 sample unit, or size information on the predetermined storage unit may be signaled from the encoding device 100 to the decoding device 200.
  • motion information of a temporal neighboring block may be replaced with representative motion information of a predetermined storage unit in which a temporal neighboring block is located.
  • a temporal merge candidate may be derived based on motion information of the prediction block.
  • Specifically, for example, when the predetermined storage unit is a 2^n x 2^n sample unit and the coordinates of the temporal neighboring block are (xTnb, yTnb), the motion information of the prediction block located at the modified position ((xTnb >> n) << n, (yTnb >> n) << n) may be used for the temporal merge candidate.
  • For example, when the predetermined storage unit is a 16x16 sample unit, the motion information of the prediction block located at the modified position ((xTnb >> 4) << 4, (yTnb >> 4) << 4) may be used for the temporal merge candidate.
  • For example, when the predetermined storage unit is an 8x8 sample unit, the motion information of the prediction block located at the modified position ((xTnb >> 3) << 3, (yTnb >> 3) << 3) may be used for the temporal merge candidate.
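  • The rounding of the temporal neighboring block's coordinates to the representative position of its storage unit can be sketched as follows (helper name is illustrative):

```python
# Sketch of the representative-position rounding described above: for a
# 2^n x 2^n storage unit, the coordinates are rounded down to the top-left
# corner of the unit that contains them.

def representative_pos(xTnb, yTnb, n):
    return ((xTnb >> n) << n, (yTnb >> n) << n)

assert representative_pos(37, 27, 4) == (32, 16)   # 16x16 storage unit (n = 4)
assert representative_pos(37, 27, 3) == (32, 24)   # 8x8 storage unit (n = 3)
```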
  • the coding apparatus may check whether the number of current merge candidates is smaller than the number of maximum merge candidates (S1730).
  • the maximum number of merge candidates may be predefined or signaled from the encoding device 100 to the decoding device 200.
  • the encoding apparatus 100 may generate information on the number of maximum merge candidates, encode, and transmit the information to the decoding apparatus 200 in the form of a bitstream.
  • When the number of current merge candidates is not smaller than the maximum number of merge candidates, the subsequent additional-candidate process may not proceed; when it is smaller, additional merge candidates may be inserted into the merge candidate list.
  • Additional merge candidates may include, for example, an adaptive temporal motion vector prediction (ATMVP) candidate, a combined bi-predictive merge candidate (when the slice type of the current slice is B type), and/or a zero-vector merge candidate.
  • The MVP (motion vector predictor) mode may be referred to as the AMVP (advanced MVP or adaptive MVP) mode.
  • When the MVP mode is applied, a motion vector predictor (MVP) candidate list may be generated using the motion vector of a reconstructed spatial neighboring block (e.g., a neighboring block in FIG. 16) and/or a motion vector corresponding to a temporal neighboring block (or Col block). That is, the motion vector of the reconstructed spatial neighboring block and/or the motion vector corresponding to the temporal neighboring block may be used as motion vector predictor candidates.
  • the information on prediction may include selection information (eg, an MVP flag or an MVP index) indicating an optimal motion vector predictor candidate selected from among motion vector predictor candidates included in the list.
  • the prediction unit may select a motion vector predictor of the current block from among motion vector predictor candidates included in the motion vector candidate list using the selection information.
  • the prediction unit of the encoding apparatus 100 may obtain a motion vector difference (MVD) between the motion vector of the current block and the motion vector predictor, encode the MVD, and output it in the form of a bitstream. That is, the MVD may be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the prediction unit of the decoding apparatus may obtain a motion vector difference included in the prediction information, and derive the motion vector of the current block by adding the motion vector difference and the motion vector predictor.
  • the prediction unit of the decoding apparatus may obtain or derive a reference picture index indicating a reference picture from the prediction information.
  • the motion vector predictor candidate list may be configured as shown in FIG. 18.
  • the coding apparatus searches for a spatial candidate block for motion vector prediction and inserts it into a prediction candidate list (S1810).
  • In this case, the coding apparatus may search for neighboring blocks according to a predetermined search order, and add information on neighboring blocks that satisfy the condition for a spatial candidate block to the prediction candidate list (MVP candidate list).
  • After constructing the spatial candidate block list, the coding apparatus compares the number of spatial candidates included in the prediction candidate list with a preset reference number (e.g., 2) (S1820). When the number of spatial candidates included in the prediction candidate list is greater than or equal to the reference number, the coding apparatus may terminate the construction of the prediction candidate list.
  • However, when the number of spatial candidates included in the prediction candidate list is smaller than the reference number (e.g., 2), the coding apparatus searches for a temporal candidate block and inserts it into the prediction candidate list (S1830); when the temporal candidate block is unavailable, a zero motion vector is added to the prediction candidate list (S1840).
  • a predicted block for the current block may be derived based on motion information derived according to the prediction mode.
  • the predicted block may include predicted samples (prediction sample array) of the current block.
  • an interpolation procedure may be performed, through which prediction samples of the current block may be derived based on reference samples of the fractional sample unit within a reference picture.
  • prediction samples may be generated based on a motion vector per sample/subblock.
  • When bi-prediction is applied, the final prediction samples may be derived through a (phase-wise) weighted sum of the prediction samples derived based on first-direction prediction (e.g., L0 prediction) and the prediction samples derived based on second-direction prediction (e.g., L1 prediction).
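  • A minimal sketch of this weighted summation, assuming 8-bit samples and default equal weights:

```python
import numpy as np

# Sketch of deriving final prediction samples from the L0 and L1 predictions
# by a sample-wise weighted sum; equal weights and 8-bit depth are assumptions.

def bi_predict(pred_l0, pred_l1, w0=0.5, w1=0.5):
    p = w0 * pred_l0.astype(np.float64) + w1 * pred_l1.astype(np.float64)
    return np.clip(np.rint(p), 0, 255).astype(np.uint8)  # round and clip to 8 bits
```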
  • reconstructed samples and reconstructed pictures may be generated based on the derived prediction samples, and then a procedure such as in-loop filtering may be performed.
  • a reference picture index may be explicitly signaled.
  • a reference picture index (refidxL0) for L0 prediction and a reference picture index (refidxL1) for L1 prediction may be signaled separately.
  • both information about refidxL0 and information about refidxL1 may be signaled.
  • information on the MVD derived from the encoding device 100 may be signaled to the decoding device 200 as described above.
  • the information on the MVD may include, for example, information indicating the x and y components of the MVD absolute value and the sign. In this case, information indicating whether the MVD absolute value is greater than 0 (abs_mvd_greater0_flag), information indicating whether it is greater than 1 (abs_mvd_greater1_flag), and information indicating the remainder of the MVD (abs_mvd_minus2) may be signaled in stages.
  • For example, the information indicating whether the MVD absolute value is greater than 1 (abs_mvd_greater1_flag) may be signaled only when the value of the flag information indicating whether the MVD absolute value is greater than 0 (abs_mvd_greater0_flag) is 1.
  • information on MVD may be configured with syntax as shown in Table 1 below, encoded in the encoding device 100, and signaled to the decoding device 200.
  • For example, MVD[compIdx] may be derived based on abs_mvd_greater0_flag[compIdx] * (abs_mvd_minus2[compIdx] + 2) * (1 - 2 * mvd_sign_flag[compIdx]).
  • compIdx (or cpIdx) represents the index of each component, and may have a value of 0 or 1.
  • compIdx 0 may indicate the x component
  • compIdx 1 may indicate the y component.
  • values for each component may be expressed using a coordinate system other than the x and y coordinate systems.
  • MVD (MVDL0) for L0 prediction and MVD (MVDL1) for L1 prediction may be differentiated and signaled, and the information on MVD may include information on MVDL0 and/or information on MVDL1.
  • When the MVP mode is applied to the current block and bidirectional prediction is applied, both information on MVDL0 and information on MVDL1 may be signaled.
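  • A hedged sketch of reconstructing one MVD component from the staged syntax elements described above; the parsing interface (read_flag/read_value) is hypothetical, while the combining logic follows the text.

```python
# Sketch of staged MVD decoding: abs_mvd_greater0_flag gates everything,
# abs_mvd_greater1_flag gates abs_mvd_minus2, and mvd_sign_flag sets the sign,
# matching MVD = greater0 * (abs_mvd_minus2 + 2) * (1 - 2 * sign) when |MVD| > 1.

def decode_mvd_component(read_flag, read_value):
    if not read_flag("abs_mvd_greater0_flag"):
        return 0                                   # |MVD| == 0, nothing more coded
    greater1 = read_flag("abs_mvd_greater1_flag")
    abs_mvd = read_value("abs_mvd_minus2") + 2 if greater1 else 1
    sign = read_flag("mvd_sign_flag")              # 0: positive, 1: negative
    return -abs_mvd if sign else abs_mvd

def decode_mvd(read_flag, read_value):
    # compIdx 0 is the x component, compIdx 1 is the y component
    return tuple(decode_mvd_component(read_flag, read_value) for _ in range(2))
```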
  • FIG. 19 illustrates an example in which a symmetric motion vector difference (SMVD) mode according to an embodiment of the present specification is applied.
  • symmetric MVD may be used in consideration of coding efficiency.
  • signaling of some of the motion information may be omitted.
  • For example, information on refidxL0, information on refidxL1, and information on MVDL1 may not be signaled from the encoding device 100 to the decoding device 200 but may be derived internally.
  • Specifically, flag information indicating whether SMVD is applied (e.g., symmetric MVD flag information or a sym_mvd_flag syntax element) may be signaled, and when the value of the flag information is 1, the decoding apparatus 200 may determine that SMVD is applied to the current block.
  • When the SMVD mode is applied (i.e., when the value of the symmetric MVD flag information is 1), information on mvp_l0_flag, mvp_l1_flag, and MVDL0 may be explicitly signaled, while, as described above, signaling of the information on refidxL0, the information on refidxL1, and the information on MVDL1 may be omitted, these values being derived inside the decoder.
  • For example, refidxL0 may be derived as an index indicating the previous reference picture closest to the current picture in picture order count (POC) order within reference picture list 0 (which may be referred to as List 0, L0, or the first reference picture list).
  • refidxL1 may be derived as an index indicating a subsequent reference picture closest to the current picture in the POC order in reference picture list 1 (which may be referred to as List 1, L1, or a second reference picture list). Also, for example, both refidxL0 and refidxL1 may be derived as 0, respectively. Also, for example, refidxL0 and refidxL1 may be derived as minimum indexes having the same POC difference in relation to the current picture.
  • Specifically, for example, when [POC of the current picture] - [POC of the first reference picture indicated by refidxL0] is referred to as the first POC difference and [POC of the second reference picture indicated by refidxL1] - [POC of the current picture] is referred to as the second POC difference, the value of refidxL0 indicating the first reference picture may be derived as the refidxL0 value of the current block, and the value of refidxL1 indicating the second reference picture may be derived as the refidxL1 value of the current block, only when the first POC difference and the second POC difference are equal.
  • Also, for example, when there are multiple sets in which the first POC difference and the second POC difference are equal, refidxL0 and refidxL1 of the set with the minimum difference may be derived as refidxL0 and refidxL1 of the current block.
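  • A sketch of this refidxL0/refidxL1 derivation under the stated POC rules; the reference list contents in the usage line are illustrative assumptions.

```python
# Hedged sketch of the SMVD reference-index derivation described above:
# refidxL0 points to the closest preceding picture (by POC) in list 0 and
# refidxL1 to the closest following picture in list 1.

def derive_smvd_refidx(cur_poc, list0_pocs, list1_pocs):
    refidx_l0 = refidx_l1 = -1
    best0 = best1 = None
    for i, poc in enumerate(list0_pocs):
        if poc < cur_poc and (best0 is None or cur_poc - poc < best0):
            best0, refidx_l0 = cur_poc - poc, i    # closest previous picture
    for i, poc in enumerate(list1_pocs):
        if poc > cur_poc and (best1 is None or poc - cur_poc < best1):
            best1, refidx_l1 = poc - cur_poc, i    # closest subsequent picture
    return refidx_l0, refidx_l1

assert derive_smvd_refidx(8, [4, 0], [16, 12]) == (0, 1)
```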
  • MVDL1 can be derived as -MVDL0.
  • In this case, the final MV for the current block may be derived as in Equation 1 below.
  • [Equation 1] (mvx0, mvy0) = (mvpx0 + mvdx0, mvpy0 + mvdy0), (mvx1, mvy1) = (mvpx1 - mvdx0, mvpy1 - mvdy0)
  • In Equation 1, mvx0 and mvy0 represent the x and y components of the motion vector for L0-direction prediction of the current block, and mvx1 and mvy1 represent the x and y components of the motion vector for L1-direction prediction. mvpx0 and mvpy0 represent the x and y components of the MVP motion vector for L0-direction prediction (L0 base motion vector), and mvpx1 and mvpy1 represent the x and y components of the MVP motion vector for L1-direction prediction (L1 base motion vector). mvdx0 and mvdy0 represent the x and y components of the MVD for L0-direction prediction.
  • The MVD for L1-direction prediction has the same magnitude as the L0 MVD but the opposite sign (MVDL1 = -MVDL0).
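  • A minimal sketch of Equation 1: only the L0 MVD is signaled, and the L1 motion vector mirrors it.

```python
# Sketch of the SMVD final-MV derivation (Equation 1): the L0 MV adds the
# signaled MVD to the L0 predictor, and the L1 MV subtracts the same MVD
# from the L1 predictor.

def smvd_final_mvs(mvp0, mvp1, mvd0):
    (mvpx0, mvpy0), (mvpx1, mvpy1) = mvp0, mvp1
    mvdx0, mvdy0 = mvd0
    mv0 = (mvpx0 + mvdx0, mvpy0 + mvdy0)   # L0: predictor plus signaled MVD
    mv1 = (mvpx1 - mvdx0, mvpy1 - mvdy0)   # L1: predictor minus the same MVD
    return mv0, mv1
```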
  • the present embodiment describes an affine motion prediction method for encoding/decoding using an affine motion model.
  • a motion vector may be expressed in units of each pixel of a block using two, three, or four motion vectors.
  • the affine motion model can represent the four types of motion shown in FIG. 20.
  • Among the motions that can be expressed by the affine motion model, an affine motion model that expresses three motions (translation, scale, and rotation) is referred to as a similarity (or simplified) affine motion model.
  • For convenience of description, the proposed methods are described based on the similarity (or simplified) affine motion model; however, the embodiments of the present specification are not limited to the similarity (or simplified) affine motion model.
  • 21A and 21B illustrate examples of motion vectors for each control point according to an embodiment of the present specification.
  • the affine motion prediction may determine a motion vector for each pixel position included in a block using two or more control point motion vectors (CPMVs).
  • In the case of the 4-parameter affine motion model, the motion vector at sample position (x, y) can be derived as in Equation 2 below:
  • [Equation 2] v_x = ((v_1x - v_0x) / W) * x - ((v_1y - v_0y) / W) * y + v_0x, v_y = ((v_1y - v_0y) / W) * x + ((v_1x - v_0x) / W) * y + v_0y
  • In the case of the 6-parameter affine motion model, the motion vector at sample position (x, y) can be derived as in Equation 3 below:
  • [Equation 3] v_x = ((v_1x - v_0x) / W) * x + ((v_2x - v_0x) / H) * y + v_0x, v_y = ((v_1y - v_0y) / W) * x + ((v_2y - v_0y) / H) * y + v_0y
  • ⁇ v 0x , v 0y ⁇ is the CPMV of the CP at the top-left corner of the coding block
  • ⁇ v 1x , v 1y ⁇ is the CPMV of the CP at the top-right corner
  • ⁇ v 2x , v 2y ⁇ is the CPMV of the CP at the bottom-left corner
  • W corresponds to the width of the current block
  • H corresponds to the height of the current block
  • ⁇ v x , v y ⁇ is a motion vector at the position ⁇ x, y ⁇ .
  • FIG. 22 shows an example of a motion vector for each subblock according to an embodiment of the present specification.
  • An affine motion vector field (MVF) may be determined in a pixel unit or a predefined subblock unit.
  • When the MVF is determined in units of subblocks, the motion vector of each subblock may be obtained based on the center pixel position of the subblock (the lower-right of the center, that is, the lower-right sample among the four center samples).
  • In this document, it is assumed that the affine MVF is determined in units of 4x4 subblocks. However, this is only for convenience of description, and the size of the subblock may be variously changed.
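  • A hedged sketch of deriving the subblock-wise MVF from the CPMVs using Equations 2 and 3, sampled at each subblock center as described above; 4x4 subblocks and floating-point arithmetic are simplifying assumptions.

```python
# Sketch of the affine motion vector field: Equation 2 (two CPMVs, 4-parameter)
# or Equation 3 (three CPMVs, 6-parameter), evaluated at subblock centers.

def affine_mv(x, y, cpmvs, w, h):
    (v0x, v0y) = cpmvs[0]                       # top-left CPMV
    (v1x, v1y) = cpmvs[1]                       # top-right CPMV
    if len(cpmvs) == 2:                         # 4-parameter model (Equation 2)
        vx = (v1x - v0x) / w * x - (v1y - v0y) / w * y + v0x
        vy = (v1y - v0y) / w * x + (v1x - v0x) / w * y + v0y
    else:                                       # 6-parameter model (Equation 3)
        (v2x, v2y) = cpmvs[2]                   # bottom-left CPMV
        vx = (v1x - v0x) / w * x + (v2x - v0x) / h * y + v0x
        vy = (v1y - v0y) / w * x + (v2y - v0y) / h * y + v0y
    return vx, vy

def subblock_mvf(cpmvs, w, h, sb=4):
    return [[affine_mv(x + sb / 2, y + sb / 2, cpmvs, w, h)   # subblock center
             for x in range(0, w, sb)]
            for y in range(0, h, sb)]
```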
  • motion models applicable to the current block may include the following three types.
  • A translational motion model may represent a model in which an existing block-based motion vector is used, a 4-parameter affine motion model may represent a model in which two CPMVs are used, and a 6-parameter affine motion model may represent a model in which three CPMVs are used.
  • the affine motion prediction may include an affine MVP (or affine inter) mode and an affine merge mode.
  • Through affine motion prediction, motion vectors of the current block may be derived in units of samples or subblocks.
  • In the affine merge mode, the CPMVs of the current block may be determined according to the affine motion model of a neighboring block coded by affine motion prediction. Affine-coded neighboring blocks in the search order may be used for the affine merge mode.
  • the current block may be coded as AF_MERGE. That is, when the affine merge mode is applied, CPMVs of the current block may be derived using CPMVs of neighboring blocks.
  • CPMVs of the neighboring block may be used as CPMVs of the current block, or CPMVs of the neighboring block may be used as CPMVs of the current block by being modified based on the size of the neighboring block and the size of the current block.
  • an affine merge candidate list may be constructed to derive CPMVs for the current block.
  • the affine merge candidate list may include at least one of the following candidates, for example.
  • the inherited affine candidates are candidates derived based on the CPMVs of a neighboring block when the neighboring block is coded in affine mode, the constructed affine candidates are candidates derived by constructing CPMVs based on the MVs of the blocks neighboring the corresponding CP in each CPMV unit, and a zero MV candidate may represent a candidate composed of CPMVs having a value of 0.
  • FIG. 23 is an example of a flowchart for configuring an affine merge candidate list according to an embodiment of the present specification.
  • Referring to FIG. 23, a coding device inserts inherited affine candidates into the affine candidate list (S2310), inserts constructed affine candidates into the affine candidate list (S2320), and then may insert a zero MV candidate into the affine candidate list (S2330). In an embodiment, when the number of candidates included in the candidate list is smaller than a reference number (e.g., 2), the coding apparatus may insert the constructed affine candidates or the zero MV candidate.
  • FIG. 24 shows examples of blocks for deriving an inherited affine motion predictor according to an embodiment of the present specification
  • FIG. 25 is a diagram for deriving an inherited affine motion predictor according to an embodiment of the present specification. An example of control point motion vectors is shown.
  • There may be up to two inherited affine candidates (one from the left neighboring CUs and one from the above neighboring CUs), which may be derived from the affine motion models of neighboring blocks.
  • Candidate blocks are shown in FIG. 24.
  • the scan order for the left predictor is A0-A1, and the scan order for the above predictor is B0-B1-B2. Only the first inherited candidate from each side is selected, and a pruning check may not be performed between the two inherited candidates.
  • When a neighboring affine CU is identified, its control point motion vectors may be used to derive a control point motion vector predictor (CPMVP) candidate for the affine merge list of the current CU.
  • As shown in FIG. 25, when the neighboring lower-left block A is coded in affine mode, the motion vectors v2, v3, and v4 of the upper-left corner, the upper-right corner, and the lower-left corner of the CU including block A are obtained.
  • When block A is coded with the 4-parameter affine model, the two CPMVs of the current CU are calculated according to v2 and v3. When block A is coded with the 6-parameter affine model, the three CPMVs of the current CU are calculated according to v2, v3, and v4.
  • FIG. 26 shows an example of blocks for deriving a constructed affine merge candidate according to an embodiment of the present specification.
  • the constructed affine merge means a candidate formed by combining neighboring translational motion information for each control point.
  • motion information for control points is derived from specified spatial and temporal neighbors.
  • For CPMV1 (CP0) in the upper-left corner, blocks are checked in the order of B2-B3-A2, and the MV of the first available block is used. For CPMV2 (CP1) in the upper-right corner, blocks are checked in the order of B1-B0, and for CPMV3 (CP2) in the lower-left corner, blocks are checked in the order of A1-A0. For CPMV4 (CP3) in the lower-right corner, TMVP is used if available.
  • affine merge candidates are configured based on this motion information.
  • After the MVs of the four control points are obtained, the following combinations of control point MVs are used in order: {CPMV1, CPMV2, CPMV3}, {CPMV1, CPMV2, CPMV4}, {CPMV1, CPMV3, CPMV4}, {CPMV2, CPMV3, CPMV4}, {CPMV1, CPMV2}, {CPMV1, CPMV3}.
  • Combinations of three CPMVs constitute a 6-parameter affine merge candidate, and combinations of two CPMVs constitute a 4-parameter affine merge candidate.
  • If the reference indices of the control points are different, the corresponding combination of control point MVs is discarded.
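  • A sketch of forming constructed candidates in the combination order above while discarding mixed-reference combinations; the cp mapping is a hypothetical stand-in for the control-point derivation.

```python
# Hedged sketch: cp maps a control-point name to (mv, refidx) or None.
# Combinations are tried in the order listed above; a combination is skipped
# if a control point is missing or the control points' reference indices differ.

COMBOS = [("CP0", "CP1", "CP2"), ("CP0", "CP1", "CP3"), ("CP0", "CP2", "CP3"),
          ("CP1", "CP2", "CP3"), ("CP0", "CP1"), ("CP0", "CP2")]

def constructed_affine_candidates(cp):
    out = []
    for combo in COMBOS:
        infos = [cp[name] for name in combo]
        if any(i is None for i in infos):
            continue                                  # a control point is missing
        if len({refidx for _, refidx in infos}) != 1:
            continue                                  # mixed reference indices: discard
        out.append([mv for mv, _ in infos])           # 6- or 4-parameter candidate
    return out
```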
  • FIG. 27 is an example of a flowchart for configuring an affine MVP candidate list according to an embodiment of the present specification.
  • In the affine MVP (or affine inter) mode, a control point motion vector difference (CPMVD), corresponding to the difference between the CPMV and the CPMVP, may be signaled from the encoding device 100 to the decoding device 200.
  • an affine MVP candidate list may be configured to derive CPMVs for the current block.
  • the affine MVP candidate list may include at least one of the following candidates.
  • the affine MVP candidate list may include a maximum of n (eg, 2) candidates.
  • Here, the inherited affine candidate is a candidate derived based on the CPMVs of a neighboring block when the neighboring block is coded in affine mode, the constructed affine candidate is a candidate derived by constructing CPMVs based on the MVs of the blocks adjacent to the corresponding CP in each CPMV unit, and the zero MV candidate represents a candidate composed of CPMVs whose value is 0.
  • Since the maximum number of candidates in the affine MVP candidate list is two, candidates in the above order may be considered and added while the number of current candidates is less than two.
  • In an embodiment, additional candidates based on translational MVs from neighboring CUs may be derived in the following order.
  • 1) When CPMV0 of the constructed candidate is available, CPMV0 is used as an affine MVP candidate; that is, the MVs of CP0, CP1, and CP2 are all set equal to CPMV0 of the constructed candidate.
  • 2) When CPMV1 of the constructed candidate is available, CPMV1 is used as an affine MVP candidate; that is, the MVs of CP0, CP1, and CP2 are all set equal to CPMV1 of the constructed candidate.
  • 3) When CPMV2 of the constructed candidate is available, CPMV2 is used as an affine MVP candidate; that is, the MVs of CP0, CP1, and CP2 are all set equal to CPMV2 of the constructed candidate.
  • 4) TMVP (temporal motion vector predictor, or mvCol) is used as an affine MVP candidate when it is available.
  • the affine MVP candidate list may be derived by the procedure shown in FIG. 27.
  • the order of checking inherited MVP candidates is the same as that of the inherited affine merge candidates. The difference is that, for the MVP candidate, only affine CUs having the same reference picture as the current block are considered.
  • the pruning process is not applied.
  • the constructed MVP candidate is derived from the neighboring blocks shown in FIG. 26, and the same checking order as in the construction of the affine merge candidates is used.
  • reference picture indexes of neighboring blocks are also checked.
  • the first block that is inter-coded in the check order and has the same reference picture as the current CU is used.
  • FIGS. 28A and 28B illustrate examples of spatial neighboring blocks used in adaptive temporal motion vector prediction (ATMVP) according to an embodiment of the present specification and a sub-coding unit (sub-CU) motion field derived from the spatial neighboring blocks.
  • Subblock-based temporal motion vector prediction (SbTMVP) method may be used. Similar to temporal motion vector prediction (TMVP), SbTMVP may use a motion field in the co-located picture to improve the motion vector predictor and merge mode for CUs in the current picture. The same co-located picture used by TMVP is used for SbTMVP. SbTMVP differs from TMVP in the following two aspects.
  • TMVP predicts motion at the CU level
  • SbTMVP predicts motion at the sub-CU level
  • whereas TMVP fetches the temporal motion vectors from the co-located picture directly, SbTMVP applies a motion shift before fetching the temporal motion information from the co-located picture, where the motion shift is obtained from the motion vector of one of the spatial neighboring blocks of the current CU.
  • the SbTMVP process is shown in FIGS. 28A and 28B.
  • SbTMVP predicts motion vectors of sub-CUs in the current CU in two steps.
  • in the first step, the spatial neighboring blocks in FIG. 28A are examined in the order of A1, B1, B0, and A0.
  • as soon as the first spatial neighboring block that has a motion vector using the co-located picture as its reference picture is identified, this motion vector is selected as the motion shift to be applied. If no such motion vector is found among the spatial neighbors, the motion shift is set to (0, 0).
  • in the second step, the motion shift identified in the first step is applied to obtain sub-CU-level motion information (motion vectors and reference indices) from the co-located picture, as shown in FIG. 28B (that is, the motion shift is added to the coordinates of the current block).
  • FIG. 28B assumes that the motion shift is set to the motion of block A1.
  • the motion information of the corresponding block (the smallest motion grid covering the center sample) in the co-located picture is used to derive the motion information for the sub-CU.
  • once the motion information of the co-located sub-CU is identified, it is converted into the reference indices and motion vectors of the current sub-CU in a manner similar to the TMVP process, where temporal motion scaling is applied to align the reference pictures of the temporal motion vectors with those of the current CU. A sketch of the two-step derivation follows.
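The two-step derivation just described can be sketched as follows in Python; the motion-field representation, the 8x8 sub-CU grid, and the center-sample addressing are simplifying assumptions.

    def sbtmvp(cu_x, cu_y, cu_w, cu_h, a1_mv, a1_ref_is_col, col_motion_field):
        # Step 1: motion shift from spatial neighbor A1 (else (0, 0)).
        shift = a1_mv if a1_ref_is_col else (0, 0)
        sub_mvs = {}
        # Step 2: per 8x8 sub-CU, read the co-located motion at the shifted
        # center-sample position.
        for sy in range(cu_y, cu_y + cu_h, 8):
            for sx in range(cu_x, cu_x + cu_w, 8):
                cx, cy = sx + 4 + shift[0], sy + 4 + shift[1]
                # col_motion_field maps an 8x8 grid position to (mv, ref_idx).
                sub_mvs[(sx, sy)] = col_motion_field.get(
                    (cx // 8, cy // 8), ((0, 0), 0))
        return sub_mvs

    field = {(1, 1): ((2, -1), 0)}
    print(sbtmvp(0, 0, 16, 16, (8, 8), True, field))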
  • the combined subblock-based merge list including both the SbTMVP candidate and the affine merge candidates may be used for signaling of the affine merge mode (which may be referred to as a subblock-based merge mode).
  • the SbTMVP mode may be activated/deactivated by a sequence parameter set (SPS).
  • the SbTMVP predictor is added as the first entry in the list of subblock-based merge candidates, and affine merge candidates are added later.
  • the maximum allowed size of the affine merge candidate list may be 5.
  • the size of the sub-CU used in SbTMVP is fixed at 8x8 (the same applies to the affine merge mode), and the SbTMVP mode can only be applied to CUs whose width and height are both greater than or equal to 8.
  • the encoding logic of the additional SbTMVP merge candidate is the same as the other merge candidates, that is, for each CU in the P or B slice, an additional RD (rate-distortion) check is performed to determine whether to use the SbTMVP candidate.
  • AMVR (adaptive motion vector resolution)
  • a motion vector difference (between a predicted motion vector and a motion vector of a CU) may be signaled in units of quarter-luma-samples.
  • the CU-level AMVR scheme is introduced.
  • the AMVR may cause the MVD of the CU to be coded in units of 1/4 luminance samples, integer luminance samples, or 4 luminance samples. If the current CU has at least one non-zero MVD component, a CU-level MVD resolution indicator is conditionally signaled. If all MVD components (i.e., horizontal and vertical MVDs for reference list L0 and reference list L1) are 0, then the 1/4 luminance sample MVD resolution is inferred.
  • a first flag is signaled to determine whether 1/4 luminance sample MVD accuracy is applied for that CU. If the first flag is 0, no additional signaling is required and 1/4 luminance sample MVD accuracy is used for the current CU. Otherwise, a second flag is signaled to indicate whether integer luminance samples or 4 luminance samples MVD accuracy is used.
  • the motion vector predictors for the CU are rounded to the same accuracy as the MVD before being added to the MVD. The motion vector predictors are rounded toward zero.
  • the encoder determines the motion vector resolution for the current CU using the RD check.
  • the RD check of the 4-luminance-sample MVD resolution is invoked only conditionally, as sketched below.
  • the RD cost of 1/4 luminance sample MVD accuracy is calculated first. Then, the RD cost of integer luminance sample MVD accuracy is compared with the RD cost of 1/4 luminance sample MVD accuracy to determine whether it is necessary to check the RD cost of 4 luminance sample MVD accuracy. When the RD cost for 1/4 luminance sample MVD accuracy is less than the RD cost for integer luminance sample MVD accuracy, the RD check of 4 luminance sample MVD accuracy is omitted.
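The conditional RD-check ordering above can be summarized by the following Python sketch; the rd_cost callable and the resolution labels are illustrative assumptions.

    def choose_mvd_resolution(rd_cost):
        """rd_cost(res): RD cost for 'quarter', 'integer', or '4pel' MVD."""
        cost_q = rd_cost("quarter")    # always evaluated first
        cost_i = rd_cost("integer")
        best_res, best = (("quarter", cost_q) if cost_q <= cost_i
                          else ("integer", cost_i))
        # The 4-luminance-sample check runs only when quarter-pel accuracy
        # did not already beat integer-pel accuracy.
        if cost_q >= cost_i:
            cost_4 = rd_cost("4pel")
            if cost_4 < best:
                best_res, best = "4pel", cost_4
        return best_res

    print(choose_mvd_resolution({"quarter": 10.0, "integer": 9.0,
                                 "4pel": 9.5}.get))  # -> "integer"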
  • motion information of a reference picture previously decoded may be stored in units of a predetermined area. This may be referred to as temporal motion field storage, motion field compression, or motion data compression.
  • the storage unit of motion information may be set differently depending on whether the affine mode is applied. Here, the highest accuracy among explicitly signaled motion vectors is quarter-luma-sample.
  • motion vectors are derived at 1/16th-luma-sample precision and motion compensated prediction is performed at 1/16th-sample accuracy.
  • all motion vectors are stored with 1/16 luminance sample accuracy.
  • motion field compression is performed with 8x8 granularity.
  • HMVP (history-based MVP)
  • motion information of a previously coded block is stored in a table and used as an MVP for a current CU.
  • a table composed of multiple HMVP candidates is maintained during the encoding/decoding process. When a new CTU row is encountered, the table is reset (emptied). Whenever there is a CU coded by inter prediction other than subblock-based prediction, the related motion information is added to the last entry of the table as a new HMVP candidate.
  • the HMVP table size (S) is set to 6, which means that a maximum of 6 HVMP candidates can be added to the table.
  • a constrained first-in-first-out (FIFO) rule is used.
  • before a new HMVP candidate is added to the table, a redundancy check is first performed to determine whether an HMVP candidate identical to the one being added already exists in the table. If such a candidate exists, the identical existing HMVP candidate is removed from the table and all subsequent HMVP candidates are moved forward, as in the sketch below.
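The constrained-FIFO update can be sketched as follows; the table size of 6 matches the description above, while the motion-info representation and the equality test are simplified assumptions.

    HMVP_TABLE_SIZE = 6

    def update_hmvp_table(table, new_cand):
        """table: list of motion-info entries, oldest first."""
        if new_cand in table:
            table.remove(new_cand)  # drop the duplicate; later entries move forward
        elif len(table) == HMVP_TABLE_SIZE:
            table.pop(0)            # plain FIFO: evict the oldest entry
        table.append(new_cand)      # the new candidate becomes the most recent
        return table

    t = [("mv", i) for i in range(6)]
    print(update_hmvp_table(t, ("mv", 2)))  # duplicate moved to the end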
  • HMVP candidates can be used in the merge candidate list construction process.
  • the most recent HMVP candidates in the table are checked in order and inserted into the merge candidate list after the TMVP candidate. A redundancy check against the spatial or temporal merge candidates is applied to the HMVP candidates.
  • the following simplification may be used: the number of HMVP candidates used for merge list generation is set to (N <= 4) ? M : (8 - N), where N denotes the number of candidates existing in the merge list and M denotes the number of HMVP candidates available in the table.
  • pairwise average candidates are generated by averaging predefined pairs of candidates existing in the merge candidate list.
  • the predefined pairs are defined as {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)}, where the numbers 0, 1, 2, and 3 are merge indexes in the merge candidate list.
  • the average of the motion vectors is calculated separately for each reference list. If both motion vectors are available in one list, their average value is used even if the two motion vectors point to different reference pictures; if only one motion vector is available, the available motion vector is used directly; if no motion vector is available, the list is kept invalid. A sketch follows.
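A simplified Python sketch of this pairwise averaging follows; the per-list averaging and the (mv_l0, mv_l1) candidate representation are illustrative, and the fixed-point rounding of the real design is ignored.

    PAIRS = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]

    def avg(mv_a, mv_b):
        if mv_a and mv_b:                  # both available: average them
            return ((mv_a[0] + mv_b[0]) // 2, (mv_a[1] + mv_b[1]) // 2)
        return mv_a or mv_b                # one available: use it; else None

    def pairwise_average(merge_list):
        """merge_list: list of (mv_l0, mv_l1); None marks an unavailable list."""
        out = []
        for i, j in PAIRS:
            if i < len(merge_list) and j < len(merge_list):
                a, b = merge_list[i], merge_list[j]
                out.append((avg(a[0], b[0]), avg(a[1], b[1])))
        return out

    print(pairwise_average([((4, 0), None), ((0, 8), (2, 2))]))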
  • a predicted block for the current block may be derived based on motion information derived according to the prediction mode.
  • the predicted block may include prediction samples (prediction sample array) of the current block.
  • an interpolation procedure may be performed.
  • Prediction samples of the current block may be derived from reference samples in units of fractional samples in a reference picture through an interpolation procedure.
  • prediction samples may be generated based on a motion vector in units of samples/subblocks.
  • prediction samples derived based on L0 prediction (i.e., prediction using a reference picture in the L0 reference picture list and an L0 motion vector) and/or prediction samples derived based on L1 prediction (i.e., prediction using a reference picture in the L1 reference picture list and an L1 motion vector) may be used as the prediction samples of the current block, combined through a weighted sum (according to phase) or a weighted average.
  • bi-prediction (pair prediction) in which the reference picture used for L0 prediction and the reference picture used for L1 prediction are located in different temporal directions with respect to the current picture (i.e., one precedes and the other follows the current picture) may be referred to as true bi-prediction.
  • reconstructed samples and reconstructed pictures may be generated based on the derived prediction samples, and then procedures such as in-loop filtering may be performed.
  • when bi-prediction (pair prediction) is applied to the current block, prediction samples may be derived based on a weighted average.
  • the bi-prediction signal (i.e., the bi-prediction samples) may be derived through a simple average or a weighted average of the L0 prediction signal (L0 prediction samples) and the L1 prediction signal (L1 prediction samples).
  • when derivation by simple average is applied, the bi-prediction samples are derived as the average of the L0 prediction samples (based on the L0 reference picture and the L0 motion vector) and the L1 prediction samples (based on the L1 reference picture and the L1 motion vector).
  • when bi-prediction is applied, the bi-prediction signal (bi-prediction samples) may be derived through a weighted average of the L0 prediction signal and the L1 prediction signal as in Equation 4 below.
  • Equation 4: P_bi-pred = ((8 - w) * P_0 + w * P_1 + 4) >> 3, where P_bi-pred represents a bi-prediction sample value, P_0 represents an L0 prediction sample value, P_1 represents an L1 prediction sample value, and w represents a weight value.
  • five weight values (w) may be allowed, and the weight values may be -2, 3, 4, 5, and 10.
  • the weight w may be determined by one of two methods.
  • for a CU coded in a non-merge mode, the weight index is signaled after the MVD.
  • for a merge CU, the weight index is inferred from neighboring blocks based on the merge candidate index.
  • the weighted-average bi-prediction can only be applied to CUs with 256 or more luminance samples (CUs whose product of width and height is greater than or equal to 256). For low-delay pictures, all five weights can be used; for non-low-delay pictures, only three weights (3, 4, 5) can be used. A sketch of the weighted average follows.
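A minimal sketch of the Equation 4 weighted average, assuming the fixed-point form with denominator 8 and rounding offset 4 given above:

    BCW_WEIGHTS = [-2, 3, 4, 5, 10]   # the five allowed weight values

    def weighted_bi_pred(p0, p1, w):
        """p0, p1: L0/L1 prediction sample values; w: weight applied to P_1."""
        assert w in BCW_WEIGHTS
        return ((8 - w) * p0 + w * p1 + 4) >> 3

    print(weighted_bi_pred(100, 120, 4))   # w = 4: simple average -> 110
    print(weighted_bi_pred(100, 120, 10))  # w = 10: strongly favors P_1 -> 125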
  • combined inter/intra prediction (CIIP) can be applied to the current CU when the following conditions are satisfied:
  • the CU is coded in merge mode,
  • the CU contains at least 64 luminance samples (the product of the CU width and the CU height is greater than or equal to 64).
  • in this case, an additional flag may be signaled to indicate whether the CIIP mode is applied to the current CU.
  • the CIIP mode may also be referred to as a multi-hypothesis mode or an inter/intra multiple hypothesis mode.
  • up to four intra prediction modes, including the DC, PLANAR, HORIZONTAL, and VERTICAL modes, can be used to predict the luminance component in the CIIP mode. If the CU shape is very wide (i.e., the width is more than twice the height), the HORIZONTAL mode is not allowed. If the CU shape is very narrow (i.e., the height is more than twice the width), the VERTICAL mode is not allowed. In these cases, only three intra prediction modes are allowed.
  • the CIIP mode uses three most probable modes (MPMs) for intra prediction.
  • the CIIP MPM candidate list is formed as follows.
  • the prediction modes of neighboring block A and neighboring block B are named intraModeA and intraModeB, respectively, and are derived as follows (with X being A or B):
  • intraModeX is set to DC if block X is not available or is not coded in an intra prediction mode;
  • otherwise, intraModeX is set to DC or PLANAR if the intra prediction mode of block X is DC or PLANAR;
  • intraModeX is set to VERTICAL if the intra prediction mode of block X is a "vertical-like" directional mode (a mode greater than 34);
  • intraModeX is set to HORIZONTAL if the intra prediction mode of block X is a "horizontal-like" directional mode (a mode less than or equal to 34).
  • if intraModeA and intraModeB are the same, the 3 MPMs are set in the order of {intraModeA, PLANAR, DC};
  • otherwise, the first two MPMs are set in the order of {intraModeA, intraModeB}, and the third MPM is set to the first of PLANAR, DC, and VERTICAL that is not identical to the first two MPMs.
  • if the CU shape is very wide or very narrow as defined above, the MPM flag is inferred to be 1 without signaling. Otherwise, an MPM flag indicating whether the CIIP intra prediction mode is one of the CIIP MPM candidate modes is signaled.
  • if the MPM flag is 1, an MPM index indicating which of the MPM candidate modes is used for the CIIP intra prediction is additionally signaled. Otherwise, if the MPM flag is 0, the intra prediction mode is set to the mode that is "missing" from the MPM candidate list. For example, if the PLANAR mode is not in the MPM candidate list, PLANAR is the missing mode, and the intra prediction mode is set to PLANAR. Since 4 possible intra prediction modes are allowed in CIIP while the MPM candidate list contains only 3 intra prediction candidates, one of the 4 modes is always the missing mode. For the chroma (color difference) components, the DM mode is always applied without additional signaling; that is, the same prediction mode as the luminance component is used for the chroma components. The intra prediction mode of a CU coded with CIIP is stored and used for intra mode coding of subsequent neighboring CUs. A sketch of the MPM list construction follows.
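The MPM derivation above can be condensed into the following Python sketch; the 67-mode numbering (0 PLANAR, 1 DC, 2..66 angular), the simplified availability handling, and the tie-break when both neighbors map to PLANAR/DC (a VTM-style convention) are assumptions.

    PLANAR, DC, HORIZONTAL, VERTICAL = 0, 1, 18, 50

    def map_mode(mode):
        """Map a neighbor's intra mode onto the four CIIP modes."""
        if mode is None:            # neighbor unavailable or not intra coded
            return DC
        if mode in (PLANAR, DC):
            return mode
        return VERTICAL if mode > 34 else HORIZONTAL  # vertical-/horizontal-like

    def ciip_mpm_list(mode_a, mode_b):
        a, b = map_mode(mode_a), map_mode(mode_b)
        if a == b:
            # assumed tie-break when the shared mode is PLANAR or DC
            return [a, PLANAR, DC] if a not in (PLANAR, DC) else [PLANAR, DC, VERTICAL]
        mpm = [a, b]
        for m in (PLANAR, DC, VERTICAL):  # third MPM: first mode not yet used
            if m not in mpm:
                mpm.append(m)
                break
        return mpm

    print(ciip_mpm_list(40, 12))  # vertical-like A, horizontal-like B -> [50, 18, 0]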
  • the inter prediction signal P_inter in the CIIP mode is derived using the same inter prediction process applied to the regular merge mode, and the intra prediction signal P_intra is derived using the CIIP intra prediction mode according to the regular intra prediction process. Then, the intra and inter prediction signals are combined using a weighted average, where the weight value depends on the intra prediction mode and on where the sample is located in the coding block, as follows.
  • if the intra prediction mode is the DC or PLANAR mode, or if the block width or height is smaller than 4, the same weight is applied to the intra prediction and inter prediction signals.
  • the weights are determined based on the intra prediction mode (horizontal mode or vertical mode in this case) and the sample position in the block.
  • the horizontal prediction mode will be described as an example (weights for the vertical mode are similar, but can be derived in an orthogonal direction).
  • Set the width of the block to W and the height of the block to H.
  • the coding block is first divided into four equal-area parts, each of dimension (W/4)xH. Starting from the part closest to the intra prediction reference samples and ending at the part farthest from them, the weight wt for each of the four regions is set to 6, 5, 3, and 2, respectively.
  • the final CIIP prediction signal may be derived as in Equation 5 below.
  • Equation 5: P_CIIP = ((8 - wt) * P_inter + wt * P_intra + 4) >> 3, where P_CIIP is a CIIP prediction sample value, P_inter is an inter prediction sample value, P_intra is an intra prediction sample value, and wt is a weight.
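A sketch of Equation 5 with the position-dependent weights for the horizontal intra mode (four (W/4)xH vertical strips with intra weights 6, 5, 3, 2 moving away from the intra reference samples); the 2-D sample-array representation is illustrative.

    def ciip_predict_horizontal(p_inter, p_intra, width):
        """p_inter/p_intra: 2-D lists [y][x] of sample values for a WxH block."""
        wts = [6, 5, 3, 2]                  # nearest -> farthest strip
        out = []
        for row_inter, row_intra in zip(p_inter, p_intra):
            out_row = []
            for x, (pi, pa) in enumerate(zip(row_inter, row_intra)):
                wt = wts[min(x // max(width // 4, 1), 3)]
                out_row.append(((8 - wt) * pi + wt * pa + 4) >> 3)
            out.append(out_row)
        return out

    inter = [[100] * 8]                     # one row of an 8-wide block
    intra = [[120] * 8]
    print(ciip_predict_horizontal(inter, intra, 8))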
  • IBC may be used for coding screen-content images/video such as games, for example, in screen content coding (SCC).
  • IBC basically performs prediction in the current picture, but can be performed similarly to inter prediction in that it derives a reference block in the current picture. That is, the IBC may use at least one of the inter prediction techniques described in this document.
  • for the IBC, at least one of the above-described motion information (motion vector) derivation methods may be used.
  • the IBC may refer to the current picture, and thus may be referred to as current picture referencing (CPR).
  • the encoding apparatus 100 may derive an optimal block vector (or motion vector) for a current block (eg, CU) by performing block matching (BM).
  • the derived block vector (or motion vector) may be signaled to the decoding apparatus 200 through a bitstream using a method similar to the signaling of motion information (motion vector) in the above-described inter prediction.
  • the decoding apparatus 200 may derive a reference block for the current block in the current picture through the signaled block vector (motion vector), and may derive a prediction signal (predicted block or prediction samples) for the current block through this.
  • the block vector (or motion vector) may represent a displacement from a current block to a reference block located in an already reconstructed area in the current picture.
  • the block vector (or motion vector) may be called a displacement vector.
  • a motion vector may correspond to a block vector or a displacement vector.
  • the motion vector of the current block may include a motion vector for a luma component (a luma motion vector) or a motion vector for a chroma component (a chroma motion vector).
  • the luma motion vector for the IBC coded CU may be in integer sample units (ie, integer precision).
  • the chroma motion vector can also be clipped in units of integer samples.
  • IBC may use at least one of inter prediction techniques. For example, when IBC is applied together with AMVR, 1-pel and 4-pel motion vector precision may be switched.
  • hash-based motion estimation is performed for the IBC.
  • the encoder performs an RD check on blocks whose width or height is not greater than 16 luminance samples.
  • the block vector search is performed using the hash-based search first; if the hash search does not return a valid candidate, a block-matching-based local search is performed.
  • in the hash-based search, hash key matching (32-bit CRC) between the current block and a reference block is extended to all allowed block sizes.
  • the hash key calculation for every position in the current picture is based on 4x4 subblocks. For a current block of larger size, the hash key is determined to match that of a reference block when all hash keys of all 4x4 subblocks match the hash keys in the corresponding reference positions. If the hash keys of multiple reference blocks match the hash key of the current block, the block vector cost of each matched reference block is calculated and the one with the minimum cost is selected.
  • the search range is set to N samples from the left and above of the current block in the current CTU.
  • N is initialized to 128, and if there is at least one temporal reference picture, it is initialized to 64.
  • the hash hit ratio is defined as the percentage of samples in the CTU for which a match was found using the hash-based search. While encoding the current CTU, if the hash hit ratio is less than 5%, N is reduced by half. A sketch of the subblock hash matching follows.
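The 4x4-subblock hash matching can be sketched as follows; zlib.crc32 stands in for the 32-bit CRC, and the sample-array and position representations are illustrative.

    import zlib

    def hash_4x4(picture, x, y):
        """32-bit CRC over a 4x4 subblock of an 8-bit sample array [y][x]."""
        data = bytes(picture[y + dy][x + dx]
                     for dy in range(4) for dx in range(4))
        return zlib.crc32(data)

    def blocks_hash_match(picture, cur_xy, ref_xy, w, h):
        """All 4x4 subblock hashes must match for the block hash to match."""
        (cx, cy), (rx, ry) = cur_xy, ref_xy
        return all(hash_4x4(picture, cx + dx, cy + dy) ==
                   hash_4x4(picture, rx + dx, ry + dy)
                   for dy in range(0, h, 4) for dx in range(0, w, 4))

    pic = [[(x * 7 + y * 13) % 256 for x in range(32)] for y in range(32)]
    print(blocks_hash_match(pic, (0, 0), (0, 0), 8, 8))  # trivially True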
  • the IBC mode is signaled using a flag, and can be signaled as the IBC AMVP mode or the IBC skip/merge mode as follows.
  • FIGS. 29A and 29B illustrate an example of a video/image encoding method based on the intra block copy (IBC) mode and of the prediction unit in the encoding apparatus 100 according to an embodiment of the present specification.
  • the encoding apparatus 100 performs IBC prediction (IBC-based prediction) for the current block (S2910).
  • the encoding apparatus 100 may derive a prediction mode and a motion vector of a current block and generate prediction samples of the current block.
  • the prediction mode may include at least one of the above-described inter prediction modes as prediction modes for IBC.
  • a procedure for determining a prediction mode, deriving a motion vector, and generating prediction samples may be performed simultaneously, or one procedure may be performed prior to another procedure.
  • the prediction unit of the encoding apparatus 100 may include a prediction mode determination unit, a motion vector derivation unit, and a prediction sample derivation unit.
  • the prediction mode determination unit may determine the prediction mode for the current block, the motion vector derivation unit may derive the motion vector of the current block, and the prediction sample derivation unit may derive the prediction samples of the current block.
  • the prediction unit of the encoding apparatus 100 may search for a block similar to the current block in the reconstructed area of the current picture (or in a certain area (search area) of the reconstructed area) through block matching (BM), and may derive a reference block whose difference from the current block is a minimum or is less than a certain threshold.
  • a motion vector can be derived based on the displacement between the reference block and the current block.
  • the encoding apparatus 100 may determine a mode applied to the current block among various prediction modes.
  • the encoding apparatus 100 may compare RD costs based on various prediction modes and determine an optimal prediction mode for the current block.
  • when the skip mode or the merge mode is applied, the encoding apparatus 100 configures the above-described merge candidate list and may derive, from among the reference blocks indicated by the merge candidates included in the merge candidate list, a reference block whose difference from the current block is a minimum or is less than a certain threshold. In this case, the merge candidate associated with the derived reference block is selected, and merge index information indicating the selected merge candidate may be generated and signaled to the decoding apparatus 200. The motion vector of the current block may be derived using the motion vector of the selected merge candidate.
  • when the (A)MVP mode is applied, the encoding apparatus 100 configures the above-described (A)MVP candidate list and selects one of the motion vector predictor (MVP) candidates included in the MVP candidate list; the motion vector of the selected MVP candidate may be used as the MVP of the current block.
  • a motion vector indicating the reference block derived by the above-described motion estimation may be used as the motion vector of the current block, and among the MVP candidates, the MVP candidate whose motion vector has the smallest difference from the motion vector of the current block is selected.
  • a motion vector difference (MVD) which is a difference obtained by subtracting MVP from the motion vector of the current block, may be derived. In this case, information on the MVD may be signaled to the decoding apparatus 200.
  • the encoding apparatus 100 may derive residual samples based on the prediction samples (S2920). The encoding apparatus 100 may derive residual samples by comparing the original samples of the current block with the prediction samples.
  • the encoding apparatus 100 encodes video information including prediction information and residual information (S2930).
  • the encoding apparatus 100 may output the encoded image information in the form of a bitstream.
  • the prediction information is information related to a prediction procedure, and may include prediction mode information (eg, skip flag, merge flag, or mode index) and information about a motion vector.
  • the information on the motion vector may include candidate selection information (eg, merge index, mvp flag, or mvp index) that is information for deriving the motion vector.
  • the information on the motion vector may include information on the above-described MVD.
  • the information on the motion vector may include information indicating whether L0 prediction, L1 prediction, or bi prediction is applied.
  • the residual information is information about residual samples.
  • the residual information may include information about quantized transform coefficients for residual samples.
  • the output bitstream may be stored in a (digital) storage medium and transmitted to the decoding device, or may be transmitted to the decoding device 200 through a network.
  • the encoding apparatus 100 may generate a reconstructed picture (including reconstructed samples and a reconstructed block) based on the prediction samples and the residual samples. This is so that the encoding apparatus 100 derives the same prediction result as that performed by the decoding apparatus 200, and coding efficiency can be improved through this. Accordingly, the encoding apparatus 100 may store the reconstructed area (or reconstructed samples, reconstructed block) of the current picture in a memory and use it as a reference picture for IBC prediction.
  • the video/video decoding procedure based on the IBC and the prediction unit in the decoding apparatus 200 may schematically include, for example, the following.
  • FIGS. 30A and 30B illustrate an example of a video/image decoding method based on the IBC mode and of the prediction unit in the decoding apparatus 200 according to an embodiment of the present specification.
  • the decoding apparatus 200 may perform an operation corresponding to the operation performed by the encoding apparatus 100.
  • the decoding apparatus 200 may perform IBC prediction on the current block and derive prediction samples based on the received prediction information.
  • the decoding apparatus 200 may determine a prediction mode for the current block based on the received prediction information (S3010). The decoding apparatus 200 may determine which prediction mode is applied to the current block based on prediction mode information in the prediction information.
  • whether the merge mode or the (A)MVP mode is applied to the current block may be determined based on the merge flag.
  • one of various prediction mode candidates may be selected based on a mode index.
  • the prediction mode candidates may include a skip mode, a merge mode, and/or (A)MVP mode, or may include various prediction modes described above.
  • the decoding apparatus 200 derives the motion vector of the current block based on the determined prediction mode (S3020). For example, when the skip mode or the merge mode is applied to the current block, the decoding apparatus 200 may configure the above-described merge candidate list and select one merge candidate from among the merge candidates included in the merge candidate list. The selection of the merge candidate may be performed based on the above-described selection information (merge index), and the motion vector of the selected merge candidate may be used as the motion vector of the current block.
  • when the (A)MVP mode is applied to the current block, the decoding apparatus 200 configures the above-described (A)MVP candidate list and selects one of the MVP candidates included in the MVP candidate list; the motion vector of the selected MVP candidate may be used as the MVP of the current block.
  • the selection of the MVP candidate may be performed based on the above-described selection information (mvp flag or mvp index).
  • the MVD of the current block may be derived based on the information on the MVD
  • the motion vector of the current block may be derived based on the MVP and the MVD of the current block.
  • motion information of the current block may be derived without configuring a candidate list.
  • a motion vector of the current block may be derived according to a procedure disclosed in a corresponding prediction mode.
  • the configuration of the candidate list as described above may be omitted.
  • the decoding apparatus 200 may generate prediction samples for the current block based on the motion vector of the current block (S3030). Prediction samples of the current block may be derived using samples of the reference block indicated by the motion vector of the current block on the current picture. In this case, a prediction sample filtering procedure may be further performed on all or part of the prediction samples of the current block.
  • the prediction unit of the decoding apparatus 200 may include a prediction mode determination unit, a motion vector derivation unit, and a prediction sample derivation unit, and the prediction mode determination unit determines a prediction mode for the current block based on the received prediction mode information. After determining, the motion vector deriving unit may derive a motion vector of the current block based on information on the received motion vector, and the prediction sample deriving unit may derive the predicted samples of the current block.
  • the decoding apparatus 200 generates residual samples for the current block based on the received residual information (S3040).
  • the decoding apparatus 200 may generate reconstructed samples for the current block based on the prediction samples and the residual samples, and generate a reconstructed picture based on the prediction samples (S3050). Thereafter, as described above, an in-loop filtering procedure may be further applied to the reconstructed picture.
  • a syntax element related to IBC prediction is not signaled for a coding unit having a block size for which IBC prediction is not available, and a method of efficiently reconfiguring a coding procedure of the syntax element is provided.
  • This document may include the following embodiments. The embodiments to be described later may be performed in an alternative manner to each other, or may be performed by combining some of them.
  • a tile group may include one or more CTUs as a partial region divided from a picture. It is natural that other terms (eg, slice) may be used in place of the tile group.
  • a method of restricting IBC, and of restricting the related syntax signaling, for block sizes whose motion is limited in consideration of a predefined decoder fetch area.
  • Table 2 below shows a syntax signaling method for a coding unit in an I-tile group.
  • when the current tree type is not the chroma (color difference) tree, a flag (cu_skip_flag) indicating whether the skip mode is applied is obtained.
  • the skip mode refers to a mode in which residual coding is omitted and prediction samples of a block are used as reconstructed samples.
  • a flag (pred_mode_flag) indicating the prediction mode is obtained.
  • FIG. 31 illustrates an example of a decoding procedure of prediction information according to an embodiment of the present specification.
  • FIG. 31 shows a syntax signaling procedure in an I-tile group.
  • whether the IBC mode is applied to a block (coding unit) belonging to an I-tile group is determined by cu_skip_flag and the IBC flag. More specifically, the IBC mode is applied when the skip mode is not applied to the current block (cu_skip_flag is 0) and the IBC flag is 1.
  • FIG. 32 illustrates another example of a decoding procedure of prediction information according to an embodiment of the present specification.
  • FIG. 32 shows a syntax signaling procedure in a non-I-tile group (i.e., a P-tile group or a B-tile group).
  • whether an IBC mode is applied to a block (coding unit) belonging to a Non I-tile group may be determined by an IBC flag.
  • An embodiment of the present specification provides a method of restricting IBC for block sizes that cannot occur in consideration of a predefined decoder fetch region, and of performing syntax signaling in consideration of the IBC restriction.
  • IBC is a technique that uses an already decoded region of the current picture as a prediction block. Unlike conventional intra prediction, the decoder needs to store a large amount of pixel data; that is, since more pixels are referenced than in conventional intra prediction, the latency and memory bandwidth of a hardware decoder depend on how the corresponding pixel data is stored.
  • the block size to which IBC prediction can be applied may be limited.
  • the reference region may be limited to the current CTU, or may be limited so that, in consideration of the VPDU (virtual pipeline data unit), reference to the previous three VPDUs relative to the current VPDU is possible.
  • the VPDU may have a size of 64x64.
  • under these restrictions, a 128x128 block cannot satisfy the condition for applying IBC. Therefore, syntax signaling related to IBC may be restricted for a 128x128 block.
  • FIG. 33 illustrates an example of a decoding procedure of prediction information considering a block size according to an embodiment of the present specification.
  • in this case, the flag indicating whether to apply the skip mode (cu_skip_flag) and the flag indicating whether to apply the IBC mode (the IBC flag) are not parsed, and information on the intra mode can be parsed.
  • the coding unit syntax structure in which the IBC restriction for a 128x128 block is reflected may be shown in Table 3 below.
  • according to Table 3, a flag (pred_mode_ibc_flag) indicating whether to apply IBC is parsed only when the width (cbWidth) and the height (cbHeight) of the current block are not both 128. When the size of the current block (coding unit) is 128x128, the flag (pred_mode_ibc_flag) indicating whether to apply the IBC mode is not parsed, and the IBC mode is not applied, as sketched below.
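A minimal sketch of the Table 3 parsing condition; read_flag stands in for the entropy-decoding call and is an assumption.

    def parse_pred_mode_ibc_flag(cb_width, cb_height, read_flag):
        if cb_width == 128 and cb_height == 128:
            return 0              # not parsed: the IBC mode is inferred off
        return read_flag()        # otherwise parsed from the bitstream

    print(parse_pred_mode_ibc_flag(128, 128, lambda: 1))  # -> 0, flag not read
    print(parse_pred_mode_ibc_flag(64, 128, lambda: 1))   # -> 1, flag read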
  • An embodiment of the present specification restricts IBC for block sizes whose motion is limited in consideration of a predetermined decoder fetch region, and provides a syntax signaling method that reflects this restriction.
  • under this restriction, the motion fetch area of the IBC is limited; that is, IBC can refer only to a specific region for Nx128 and 128xN blocks.
  • when the current block 3410 is a 64x128 block, only one location 3420 that can be referred to in a specific area (e.g., the upper 64x64 area) is specified. In this case, signaling a motion vector is unnecessary, and motion prediction is also unnecessary; it is therefore efficient to restrict IBC for such blocks.
  • the IBC prediction mode may be set not to be applied to a 128xN or Nx128 block (a block having a width or height of 128).
  • an embodiment of the present specification provides a method of limiting IBC to be applied only to blocks smaller than or equal to 64x64.
  • the syntax structure according to this embodiment is shown in Table 4 below.
  • according to Table 4, a flag (pred_mode_ibc_flag) indicating whether to apply IBC is parsed only when the width (cbWidth) and the height (cbHeight) of the current block are both 64 or less; when the width or height of the current block is greater than 64, the decoder does not parse the flag (pred_mode_ibc_flag) indicating whether to apply IBC and does not apply the IBC mode.
  • This embodiment limits the IBC block size in consideration of the trade-off between compression efficiency and complexity, and provides a syntax signaling method in consideration of the limited IBC block size.
  • in experiments, cases in which blocks of size 64x64 or less, 32x32 or less, and 16x16 or less were encoded using IBC were compared.
  • when IBC was applied only to blocks having a size of 16x16 or less, the compression efficiency relative to the complexity was highest.
  • for the 32x32 and 64x64 blocks using IBC, it was confirmed that the compression efficiency increased but the encoding speed decreased as the complexity also increased.
  • accordingly, in this embodiment, IBC is applied only to blocks having a size of 16x16 or less.
  • the coding unit syntax structure according to this embodiment is shown in Table 5 below.
  • according to Table 5, a flag (pred_mode_ibc_flag) indicating whether to apply IBC is parsed only when both the width (cbWidth) and the height (cbHeight) of the current block are less than or equal to 16. If the width or height of the current block is greater than 16, the flag (pred_mode_ibc_flag) is not parsed, and the decoder does not apply the IBC mode.
  • a method of signaling IBC-related syntax in consideration of a maximum IBC block size signaled by high-level syntax is provided.
  • while the above-described methods use a fixed block size, this embodiment provides flexibility in that the maximum IBC block size can be adjusted variably in consideration of encoder and decoder trade-offs depending on the case. That is, according to the present embodiment, maximum IBC block size information (MAXIMUM_IBC_BLOCK_SIZE) is signaled in high-level syntax (e.g., the sequence parameter set (SPS), the picture parameter set (PPS), or the tile group header), and IBC-related syntax is signaled based on the maximum IBC block size information.
  • the SPS syntax structure according to this embodiment is shown in Table 6 below.
  • the syntax structure of the tile group header according to another embodiment is shown in Table 7 below.
  • sps_log2_max_ibc_blkSz_minus2 and tile_group_log2_max_ibc_blkSz_minus2 are information for deriving the maximum block size (maximum IBC block size) to which IBC can be applied.
  • the maximum IBC block size (MaxIbcBlkSz) may be determined as follows: MaxIbcBlkSz = 1 << (tile_group_log2_max_ibc_blkSz_minus2 + 2)
  • tile_group_log2_max_ibc_blkSz_minus2 may be replaced with sps_log2_max_ibc_blkSz_minus2.
  • the coding unit syntax structure in which the maximum IBC block size (MaxIbcBlkSz) is reflected may be as shown in Table 8 below.
  • a flag (pred_mode_ibc_flag) indicating whether to apply IBC is parsed only when both the width (cbWidth) and the height (cbHeight) of the current block are less than or equal to the maximum IBC block size (MaxIbcBlkSz).
  • when the width or height of the current block is larger than the maximum IBC block size (MaxIbcBlkSz), the flag indicating whether to apply IBC is not parsed, and the decoder does not apply the IBC mode. A sketch follows.
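A sketch combining the MaxIbcBlkSz derivation of Tables 6 and 7 with the parsing condition of Table 8; read_flag again stands in for entropy decoding.

    def max_ibc_blk_size(log2_max_ibc_blk_sz_minus2):
        # MaxIbcBlkSz = 1 << (tile_group_log2_max_ibc_blkSz_minus2 + 2)
        return 1 << (log2_max_ibc_blk_sz_minus2 + 2)

    def parse_pred_mode_ibc_flag(cb_width, cb_height, max_sz, read_flag):
        if cb_width <= max_sz and cb_height <= max_sz:
            return read_flag()    # within the limit: flag parsed
        return 0                  # otherwise not parsed: IBC mode not applied

    max_sz = max_ibc_blk_size(2)  # signaled value 2 -> 16x16 limit
    print(max_sz)                                                # 16
    print(parse_pred_mode_ibc_flag(16, 16, max_sz, lambda: 1))   # parsed -> 1
    print(parse_pred_mode_ibc_flag(32, 16, max_sz, lambda: 1))   # inferred -> 0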
  • This embodiment provides an encoding method in consideration of a block size to which IBC can be applied. As described above, when the maximum block size for which IBC is allowed is previously set or variably set, encoding may be performed in consideration of the maximum block size for which IBC is allowed.
  • the encoder may perform rate-distortion optimization for the IBC mode and derive an optimal prediction mode.
  • the encoder checks whether the current slice (or tile group) is an I-slice (or I-tile group) (S3505). If the current slice is an I-slice, the encoder performs RD optimization for intra prediction (S3510). If the current slice is not an I-slice (ie, B-slice or P-slice), the encoder omits RD optimization for intra prediction. Even in the case of an I-slice, RD optimization for intra prediction may be performed.
  • step S3525 may be omitted. Thereafter, the encoder performs inter-AMVP RD optimization (S3530), inter-merge RD optimization (S3535), affine RD optimization (S3540), and RD optimization for other inter prediction modes, and may additionally perform inter-skip RD optimization (S3545).
  • when the width and height of the current block are equal to a predefined value, the IBC prediction mode is not allowed.
  • the predefined value may be 128. That is, when both the width and the height are 128, the IBC flag is inferred as 0.
  • the coding unit syntax structure according to the present embodiment is shown in Table 9 below.
  • in another embodiment, when the width and height are equal to the predefined value, the IBC prediction mode is not allowed, and the predefined value may be the (maximum) CTU size. That is, if both the width and the height are equal to the (maximum) CTU size, the IBC flag is inferred as 0.
  • the coding unit syntax structure according to the present embodiment is shown in Table 10 below.
  • in another embodiment, when the width and height are equal to the predefined value, the IBC prediction mode is not allowed, and the predefined value may be 128. That is, when both the width and the height are 128, the IBC flag is inferred as 0.
  • the coding unit syntax structure according to this embodiment is shown in Table 11 below.
  • the encoded information (eg, encoded video/video information) derived by the encoding apparatus 100 based on the above-described embodiments of the present specification may be output in a bitstream form.
  • the encoded information may be transmitted or stored in a bitstream form in units of network abstraction layer (NAL) units.
  • the bitstream may be transmitted over a network or may be stored in a non-transitory digital storage medium.
  • the bitstream may not be transmitted directly from the encoding apparatus 100 to the decoding apparatus 200, but may instead be provided through a streaming/download service via an external server (e.g., a content streaming server).
  • the network may include a broadcasting network and/or a communication network
  • the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • FIG. 36 is an example of a flowchart for encoding a video signal according to an embodiment of the present specification.
  • the operations of FIG. 36 may be performed by the prediction unit of the encoding apparatus 100 or the processor 510 of the video signal processing apparatus 500.
  • Steps S3610 and S3620 of FIG. 36 may correspond to an example of step S2910 of FIG. 29A
  • step S3630 of FIG. 36 may correspond to an example of step S2930 of FIG. 29A.
  • the encoder determines a prediction mode of the current block. More specifically, the encoder determines whether to apply an intra block copy (IBC) prediction mode to the current block based on the width and height of the current block. If the width or height of the current block is equal to a predefined value, it is determined that the IBC prediction mode is not applied. For example, the predefined value may be 128. If the width or height of the current block is 128, it may be determined that the parsing of the IBC flag is omitted and the IBC prediction mode is not applied.
  • the predefined value may be a CTU size value. If the width or height of the current block is the same as the CTU size value, it may be determined that parsing of the IBC flag is omitted and the IBC prediction mode is not applied. In this case, the encoder may determine the prediction mode of the current block from among the intra prediction mode and the inter prediction mode while excluding the IBC prediction mode.
  • the encoder encodes prediction information of the current block based on the prediction mode.
  • the prediction information may include information indicating a prediction mode of the current block.
  • the prediction information of the current block may include an IBC flag indicating whether the IBC prediction mode is applied.
  • the encoder can encode the IBC flag based on the current block's width and height not being both 128. For example, if the width or height of the current block is 128, it is implied that the IBC prediction mode is not applied to the current block, so the encoder does not encode the IBC flag. When the width or height of the current block is 128, the decoder may determine that the IBC prediction mode is not applied even without signaling of the IBC flag.
  • when the IBC prediction mode is applied to the current block, the encoder encodes the IBC flag as 1; if the IBC prediction mode is not applied, the encoder encodes the IBC flag as 0, and a prediction mode flag indicating one of the intra prediction mode and the inter prediction mode may be encoded. For example, the encoder may encode the prediction mode flag as 1 when the intra prediction mode is applied as the prediction mode of the current block, and may encode the prediction mode flag as 0 when the inter prediction mode is applied as the prediction mode of the current block.
  • FIG. 37 is an example of a flowchart for decoding a video signal according to an embodiment of the present specification. The operations of FIG. 37 may be performed by the prediction unit of the decoding apparatus 200 or the processor 510 of the video signal processing apparatus 500.
  • the decoder determines a prediction mode of the current block. More specifically, the decoder determines a prediction mode of the current block from among the intra prediction mode and the inter prediction mode based on the prediction mode flag. For example, the decoder may parse the prediction mode flag, and if the prediction mode flag is 0, determine the inter prediction mode as the prediction mode, and when the prediction mode flag is 1, determine the intra prediction mode as the prediction mode. Thereafter, the decoder determines whether to reset the prediction mode of the current block to the IBC prediction mode based on the IBC flag.
  • if the IBC flag is 0, the decoder determines that the IBC prediction mode is not applied, and applies the prediction mode (intra prediction mode or inter prediction mode) determined by the prediction mode flag as the prediction mode of the current block. If the IBC flag is 1, the decoder can reset the prediction mode of the current block to the IBC prediction mode.
  • if the width or height of the current block is equal to a predefined value, the decoder may determine that the IBC prediction mode is not applied without parsing the IBC flag.
  • in this case, it is determined immediately, without parsing the IBC flag, that the IBC prediction mode is not applied, and the prediction mode (intra prediction mode or inter prediction mode) determined by the prediction mode flag is applied as the prediction mode of the current block.
  • the predefined value may be 128.
  • for example, if the width or height of the current block is 128, the decoder may determine that the IBC prediction mode is not applied without parsing the IBC flag. That is, if the width or height of the current block is 128, it is determined immediately, without parsing the IBC flag, that the IBC prediction mode is not applied, and the prediction mode (intra prediction mode or inter prediction mode) determined by the prediction mode flag can be applied as the prediction mode of the current block.
  • the predefined value may be the size of the CTU to which the current block belongs (the CTU size value). For example, if the width or height of the current block is the same as the CTU size value, the decoder may determine that the IBC prediction mode is not applied without parsing the IBC flag. That is, if the width or height of the current block is the same as the CTU size value, it is determined immediately, without parsing the IBC flag, that the IBC prediction mode is not applied, and the prediction mode (intra prediction mode or inter prediction mode) determined by the prediction mode flag can be applied as the prediction mode of the current block.
  • the decoder parses the IBC flag based on the width or height of the current block not being 128; that is, if neither the width nor the height is 128, the decoder parses the IBC flag. Thereafter, the decoder determines whether to set the prediction mode of the current block to the IBC prediction mode based on the IBC flag, as sketched below.
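The decoder-side flow of FIG. 37 can be sketched as follows; the parsing interface and the mode labels are illustrative assumptions.

    def determine_prediction_mode(cb_width, cb_height, read_flag):
        pred_mode_flag = read_flag("pred_mode_flag")
        mode = "intra" if pred_mode_flag == 1 else "inter"
        if cb_width != 128 and cb_height != 128:
            if read_flag("pred_mode_ibc_flag") == 1:
                mode = "ibc"      # reset the prediction mode to IBC
        # else: the IBC flag is not parsed and IBC is inferred not to apply
        return mode

    bits = {"pred_mode_flag": 0, "pred_mode_ibc_flag": 1}
    print(determine_prediction_mode(64, 64, bits.get))   # -> "ibc"
    print(determine_prediction_mode(128, 64, bits.get))  # -> "inter"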
  • in step S3720, the decoder generates a prediction sample of the current block based on the prediction mode. For example, when the IBC prediction mode is applied, the decoder may generate the prediction sample of the current block by using a reconstructed block in the current picture as a reference block.
  • the embodiments described in the present invention may be implemented and performed on a processor, microprocessor, controller, or chip.
  • the functional units illustrated in each drawing may be implemented and executed on a computer, processor, microprocessor, controller, or chip.
  • the video signal processing apparatus 500 may include a memory 520 for storing a video signal, and a processor 510 coupled to the memory 520.
  • the processor 510 is configured to determine a prediction mode of a current block and encode prediction information of the current block based on the prediction mode. In order to determine the prediction mode of the current block, the processor is configured to determine whether to apply an intra block copy (IBC) prediction mode to the current block based on the width and height of the current block. If the width or height of the current block is equal to a predefined value, it is determined that the IBC prediction mode is not applied.
  • the predefined size value may be 128.
  • the predefined value may be a coding tree unit (CTU) size value.
  • the prediction information of the current block includes an IBC flag indicating whether the IBC prediction mode is applied, and the processor 510 may be set to encode the IBC flag based on the width or height of the current block not being 128.
  • the processor 510 is set to encode the IBC flag as 1 when the IBC prediction mode is applied, to encode the IBC flag as 0 when the IBC prediction mode is not applied, and to encode a prediction mode flag indicating one of the intra prediction mode and the inter prediction mode.
  • in a decoding apparatus, the processor 510 is configured to determine a prediction mode of the current block and to generate a prediction sample of the current block based on the prediction mode. To determine the prediction mode, the processor 510 is set to determine the prediction mode of the current block from among an intra prediction mode and an inter prediction mode based on a prediction mode flag, and to determine, based on an intra block copy (IBC) flag, whether to set the prediction mode of the current block to the IBC prediction mode. If the width or height of the current block is equal to a specific size value, it is determined that the IBC prediction mode is not applied without parsing the IBC flag.
  • the predefined size value may be 128.
  • the predefined size value may be a coding tree unit (CTU) size value.
  • the processor 510 may parse the IBC flag based on the width or height of the current block not being 128, and may determine whether to set the prediction mode of the current block to the IBC prediction mode based on the IBC flag.
  • when the IBC flag is 1, the processor 510 sets the IBC prediction mode as the prediction mode of the current block, and if the IBC flag is 0, one of the intra prediction mode and the inter prediction mode determined based on the prediction mode flag may be applied as the prediction mode of the current block.
  • the processing method to which the present invention is applied may be produced in the form of a program executed by a computer, and may be stored in a computer-readable recording medium.
  • Multimedia data having a data structure according to the present invention can also be stored in a computer-readable recording medium.
  • the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
  • the computer-readable recording medium includes, for example, Blu-ray disc (BD), universal serial bus (USB), ROM, PROM, EPROM, EEPROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage devices.
  • the computer-readable recording medium includes media implemented in the form of a carrier wave (for example, transmission through the Internet).
  • the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
  • an embodiment of the present invention may be implemented as a computer program product using a program code, and the program code may be executed in a computer according to the embodiment of the present invention.
  • the program code may be stored on a carrier readable by a computer.
  • a non-transitory computer-readable medium stores one or more instructions executed by one or more processors.
  • the one or more instructions are set to determine a prediction mode of a current block and to encode prediction information of the current block based on the prediction mode. To determine the prediction mode of the current block, the one or more instructions control the video signal processing apparatus 500 (encoding apparatus 100) to determine whether to apply the intra block copy (IBC) prediction mode to the current block based on the width and height of the current block. If the width or height of the current block is the same as a specific size value, it is determined that the IBC prediction mode is not applied.
  • the predefined size value may be 128.
  • the predefined value may be a coding tree unit (CTU) size value.
  • the prediction information of the current block includes an IBC flag indicating whether the IBC prediction mode is applied, and the one or more instructions can control the video signal processing apparatus 500 (encoding apparatus 100) to encode the IBC flag based on the width or height of the current block not being 128.
  • the one or more instructions may control the video signal processing apparatus 500 (encoding apparatus 100) to encode the IBC flag as 1 when the IBC prediction mode is applied, to encode the IBC flag as 0 when the IBC prediction mode is not applied, and to encode a prediction mode flag indicating one of the intra prediction mode and the inter prediction mode.
  • the one or more instructions control the video signal processing apparatus 500 (decoding apparatus 200) to determine a prediction mode of a current block and to generate a prediction sample of the current block based on the prediction mode. To determine the prediction mode of the current block, the one or more instructions determine the prediction mode of the current block from among an intra prediction mode and an inter prediction mode based on a prediction mode flag, and determine whether to set the prediction mode of the current block to the IBC prediction mode based on an intra block copy (IBC) flag. If the width or height of the current block is equal to a predefined value, it is determined that the IBC prediction mode is not applied without parsing the IBC flag.
  • the predefined size value may be 128.
  • the predefined size value may be a coding tree unit (CTU) size value.
  • the one or more instructions may control the video signal processing apparatus 500 (decoding apparatus 200) to parse the IBC flag based on the width or height of the current block not being 128, and to determine whether to set the prediction mode of the current block to the IBC prediction mode based on the IBC flag.
  • the one or more instructions may control the video signal processing apparatus 500 to set the IBC prediction mode as the prediction mode of the current block when the IBC flag is 1, and to apply one of the intra prediction mode and the inter prediction mode determined based on the prediction mode flag as the prediction mode of the current block when the IBC flag is 0.
  • the decoder and encoder to which the present invention is applied may be included in multimedia broadcasting transmission/reception devices, mobile communication terminals, home cinema video devices, digital cinema video devices, surveillance cameras, video chat devices, real-time communication devices such as video communication devices, mobile streaming devices, storage media, camcorders, video-on-demand (VoD) service providers, over-the-top (OTT) video devices, Internet streaming service providers, three-dimensional (3D) video devices, video telephony video devices, and medical video devices, and may be used to process video signals or data signals.
  • for example, OTT (over-the-top) video devices may include game consoles, Blu-ray players, Internet-connected TVs, home theater systems, smartphones, tablet PCs, and digital video recorders (DVRs).
  • an embodiment of the present invention may be implemented by various means, for example, hardware, firmware, software, or a combination thereof.
  • an embodiment of the present invention may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
  • an embodiment of the present invention may be implemented in the form of a module, procedure, or function that performs the functions or operations described above.
  • the software code may be stored in a memory and executed by a processor.
  • the memory may be located inside or outside the processor, and may exchange data with the processor through various known means.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to embodiments, the present invention relates to a method and a device for processing a video signal. According to an embodiment of the present invention, a method for decoding a video signal comprises the steps of: determining a prediction mode of a current block; and generating a prediction sample for the current block based on the prediction mode, wherein the step of determining a prediction mode comprises the steps of: selecting a prediction mode for the current block from among an intra prediction mode and an inter prediction mode based on a prediction mode flag; and determining, based on an intra block copy (IBC) flag, whether to set the prediction mode for the current block to an IBC prediction mode, wherein, if the width or height of the current block is equal to the predetermined value, the IBC prediction mode is determined not to be applied, without the IBC flag being parsed. According to an embodiment of the present invention, taking the block size into account and applying the IBC prediction mode accordingly makes it possible to prevent coding complexity and memory bandwidth usage from increasing sharply during IBC prediction.
PCT/KR2020/004067 2019-03-25 2020-03-25 Method and device for processing a video signal WO2020197264A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962823572P 2019-03-25 2019-03-25
US62/823,572 2019-03-25

Publications (1)

Publication Number Publication Date
WO2020197264A1 true WO2020197264A1 (fr) 2020-10-01

Family

ID=72612050

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/004067 WO2020197264A1 (fr) 2019-03-25 2020-03-25 Method and device for processing a video signal

Country Status (1)

Country Link
WO (1) WO2020197264A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9554141B2 (en) * 2013-11-18 2017-01-24 Arris Enterprises, Inc. Intra block copy for intra slices in high efficiency video coding (HEVC)
KR20160135226 * 2014-03-21 2016-11-25 Qualcomm Incorporated Search region determination for intra block copy in video coding
US20170134724A1 (en) * 2014-07-07 2017-05-11 Hfi Innovation Inc. Method of Intra Block Copy Search and Compensation Range
KR20180013918 * 2015-05-29 2018-02-07 Qualcomm Incorporated Slice-level intra block copy and other video coding improvements

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
M. ZHOU (BROADCOM), J. AN (ALIBABA-INC), E. CHAI (UBLNX), K. CHOI (SAMSUNG), S. SETHURAMAN (ITTIAM), T. HSIEH (QUALCOMM), X. XIU (: "JVET AHG report: Implementation studies (AHG16)", 14. JVET MEETING; 20190319 - 20190327; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 17 March 2019 (2019-03-17), XP030255172 *

Similar Documents

Publication Publication Date Title
WO2020180155A1 Method and apparatus for processing video signal
WO2020171444A1 DMVR-based inter prediction method and device
WO2020004990A1 Method for processing image on the basis of inter prediction mode and device therefor
WO2020141911A1 Device and method for processing video signal by using inter prediction
WO2020180129A1 Method and device for processing video signal for inter prediction
WO2020184952A1 Video signal processing method and device for processing motion vector difference information for inter prediction in a video signal
WO2019194514A1 Image processing method based on inter prediction mode and device therefor
WO2019216714A1 Image processing method based on inter prediction mode and apparatus therefor
WO2020184964A1 Method and apparatus for processing video signal for inter prediction
WO2021096290A1 Transform-based image coding method and device therefor
WO2020262931A1 Signaling method and device for merge data syntax in video/image coding system
WO2021172914A1 Image decoding method for residual coding and device therefor
WO2021040482A1 Adaptive loop filtering-based image coding device and method
WO2021172881A1 Image encoding/decoding method and apparatus using inter prediction, and recording medium storing a bitstream
WO2021040410A1 Video decoding method for residual coding and device therefor
WO2020262930A1 Method and device for removing redundant syntax from merge data syntax
WO2020009447A1 Method for processing images on the basis of inter prediction mode and device therefor
WO2020184958A1 Method and device for processing video signal for inter prediction
WO2020256485A1 Image encoding/decoding method and device using adaptive chroma block size limiting method, and bitstream transmission method
WO2020262963A1 Image encoding/decoding method and apparatus using maximum for a chroma block, and bitstream transmission method
WO2020262929A1 Syntax signaling method and device in image/video coding system
WO2021006697A1 Image decoding method for residual coding and apparatus therefor
WO2020262962A1 Method and apparatus for encoding/decoding video using maximum chroma transform block size limiting, and bitstream transmission method
WO2021006698A1 Image encoding method and device in image coding system
WO2020251268A1 Image decoding method for chroma component and device therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20778260
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20778260
    Country of ref document: EP
    Kind code of ref document: A1