WO2020141831A2 - Image coding method and apparatus using intra block copy prediction - Google Patents

Image coding method and apparatus using intra block copy prediction

Info

Publication number
WO2020141831A2
WO2020141831A2 (PCT/KR2019/018713)
Authority
WO
WIPO (PCT)
Prior art keywords
syntax element
absolute value
bvd
information
block
Prior art date
Application number
PCT/KR2019/018713
Other languages
English (en)
Korean (ko)
Other versions
WO2020141831A3 (fr)
Inventor
남정학
임재현
장형문
최정아
김승환
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사
Publication of WO2020141831A2
Publication of WO2020141831A3


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • This document relates to image coding technology, and more particularly, to an image coding method and apparatus using intra block copy prediction in an image coding system.
  • a high-efficiency image compression technique is required to effectively transmit, store, and reproduce high-resolution, high-quality image information.
  • the technical problem of this document is to provide a method and apparatus for improving image coding efficiency.
  • Another technical task of this document is to provide an efficient intra block copy (IBC) prediction method and apparatus.
  • Another technical task of this document is to provide a prediction method and apparatus using a current picture including a current block as a reference picture, rather than a different picture in time.
  • According to an embodiment of this document, an image decoding method performed by a decoding apparatus includes obtaining prediction mode information and block vector difference information for a current block from a bitstream, deriving the prediction mode of the current block as an intra block copy (IBC) prediction mode based on the prediction mode information, deriving a block vector difference of the current block based on the information on the block vector difference, deriving a block vector of the current block based on the block vector difference, and generating prediction samples of the current block based on reconstructed samples in the current picture indicated by the block vector, wherein the information on the block vector difference includes a syntax element indicating whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference.
  • According to another embodiment of this document, a decoding apparatus for performing image decoding is provided.
  • The decoding apparatus includes an entropy decoding unit for obtaining prediction mode information and block vector difference information for a current block from a bitstream, and a prediction unit for deriving the prediction mode of the current block as an intra block copy (IBC) prediction mode based on the prediction mode information, deriving a block vector difference of the current block based on the information on the block vector difference, deriving a block vector of the current block based on the block vector difference, and generating prediction samples of the current block based on reconstructed samples in the current picture indicated by the block vector, wherein the information on the block vector difference includes a syntax element indicating whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference.
  • According to another embodiment of this document, a video encoding method performed by an encoding device includes deriving the prediction mode of a current block as an intra block copy (IBC) prediction mode, deriving a block vector for the current block, deriving a block vector difference based on the block vector, generating prediction samples of the current block based on reconstructed samples in the current picture indicated by the block vector, deriving residual samples based on the prediction samples, and encoding image information including prediction mode information on the IBC prediction mode, information on the block vector difference, and information on the residual samples, wherein the information on the block vector difference includes a syntax element indicating whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference.
  • According to another embodiment of this document, a video encoding apparatus includes a prediction unit which derives the prediction mode of the current block as an intra block copy (IBC) prediction mode, derives a block vector for the current block, derives a block vector difference based on the block vector, and generates prediction samples of the current block based on reconstructed samples in the current picture indicated by the block vector; a residual processing unit which derives residual samples based on the prediction samples; and an entropy encoding unit which encodes image information including prediction mode information on the IBC prediction mode, information on the block vector difference, and information on the residual samples, wherein the information on the block vector difference includes a syntax element indicating whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference.
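To illustrate the block vector difference (BVD) signaling summarized above, the following is a minimal parsing sketch. Only the greater-than-0 flag and the sign flag are named in the description above; the greater-than-1 flag and the abs-minus-2 remainder are assumptions borrowed from the familiar VVC-style mvd_coding() structure, and the reader callbacks are hypothetical.

```python
# Illustrative only: a simplified, VVC-style parsing flow for one BVD component.
# read_flag() returns a 0/1 bin; read_ue() returns a non-negative integer
# (e.g., an Exp-Golomb coded remainder). Both are hypothetical reader callbacks.

def parse_bvd_component(read_flag, read_ue):
    if not read_flag():                 # syntax element: abs(BVD) > 0 ?
        return 0                        # nothing more is signaled for this component
    abs_value = 1
    if read_flag():                     # abs(BVD) > 1 ?  (assumed, VVC-style)
        abs_value = read_ue() + 2       # remainder abs(BVD) - 2 (assumed)
    sign_negative = read_flag()         # syntax element: sign of the BVD component
    return -abs_value if sign_negative else abs_value

# Toy demonstration with a fixed bin/value stream for the x and y components.
bins = iter([1, 1, 3, 0, 1, 0, 1])
read_flag = lambda: next(bins)
read_ue = lambda: next(bins)
bvd_x = parse_bvd_component(read_flag, read_ue)
bvd_y = parse_bvd_component(read_flag, read_ue)
print(bvd_x, bvd_y)                     # -> 5 -1
```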
  • According to another embodiment of this document, a computer-readable digital storage medium stores a bitstream that causes the above-described image decoding method to be performed.
  • FIG. 1 schematically shows an example of a video/image coding system to which the present document can be applied.
  • FIG. 2 is a diagram schematically illustrating a configuration of a video/image encoding apparatus to which the present document can be applied.
  • FIG. 3 is a diagram schematically illustrating a configuration of a video/image decoding apparatus to which the present document can be applied.
  • FIG. 5 shows an example of a block vector of the current block.
  • FIG. 6 schematically shows a block vector difference decoding method.
  • FIGS. 7 and 8 schematically show an example of a video/image encoding method and related components according to embodiment(s) of the present document.
  • FIGS. 9 and 10 schematically show an example of a video/image decoding method and related components according to embodiment(s) of the present document.
  • FIG. 11 schematically shows a structure of a content streaming system.
  • each component in the drawings described in this document is independently shown for convenience of description of different characteristic functions, and does not mean that each component is implemented with separate hardware or separate software.
  • Two or more of the components may be combined to form a single component, or one component may be divided into a plurality of components.
  • Embodiments in which each component is integrated and/or separated are also included in the scope of this document as long as they do not depart from the nature of this document.
  • FIG. 1 schematically shows an example of a video/image coding system to which the present document can be applied.
  • a video/image coding system may include a first device (source device) and a second device (receiving device).
  • the source device may transmit the encoded video/image information or data to a receiving device through a digital storage medium or network in the form of a file or streaming.
  • the source device may include a video source, an encoding device, and a transmission unit.
  • the receiving device may include a receiving unit, a decoding apparatus, and a renderer.
  • The encoding device may be referred to as a video/image encoding device, and the decoding device may be referred to as a video/image decoding device.
  • the transmitter can be included in the encoding device.
  • the receiver may be included in the decoding device.
  • the renderer may include a display unit, and the display unit may be configured as a separate device or an external component.
  • the video source may acquire a video/image through a capture, synthesis, or generation process of the video/image.
  • the video source may include a video/image capture device and/or a video/image generation device.
  • the video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured video/images, and the like.
  • the video/image generating device may include, for example, a computer, a tablet and a smartphone, and may (electronically) generate a video/image.
  • a virtual video/image may be generated through a computer or the like, and in this case, a video/image capture process may be replaced by a process in which related data is generated.
  • The encoding device can encode the input video/image.
  • the encoding apparatus may perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency.
  • the encoded data (encoded video/image information) may be output in the form of a bitstream.
  • The transmitting unit may transmit the encoded video/image information or data output in the form of a bitstream to the receiving unit of the receiving device through a digital storage medium or a network in a file or streaming format.
  • the digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD.
  • the transmission unit may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
  • the receiver may receive/extract the bitstream and deliver it to a decoding device.
  • the decoding apparatus may decode a video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoding apparatus.
  • the renderer can render the decoded video/image.
  • the rendered video/image may be displayed through the display unit.
  • The methods disclosed in this document may be applied to the methods disclosed in the versatile video coding (VVC) standard, the essential video coding (EVC) standard, the AOMedia Video 1 (AV1) standard, the 2nd generation of audio video coding standard (AVS2), or a next-generation video/image coding standard (e.g., H.267 or H.268, etc.).
  • video may mean a set of images over time.
  • a picture generally refers to a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding.
  • the slice/tile may include one or more coding tree units (CTUs).
  • One picture may be composed of one or more slices/tiles.
  • One picture may be composed of one or more tile groups.
  • One tile group may include one or more tiles.
  • A brick may represent a rectangular region of CTU rows within a tile in a picture. A tile may be partitioned into multiple bricks, each of which consists of one or more CTU rows within the tile.
  • A tile that is not partitioned into multiple bricks may also be referred to as a brick.
  • A brick scan is a specific sequential ordering of CTUs partitioning a picture, in which the CTUs are ordered consecutively in CTU raster scan within a brick, bricks within a tile are ordered consecutively in a raster scan of the bricks of the tile, and tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture.
  • a tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture.
  • The tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set.
  • The tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture.
  • A tile scan is a specific sequential ordering of CTUs partitioning a picture, in which the CTUs are ordered consecutively in CTU raster scan within a tile, and tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture.
  • A slice includes an integer number of bricks of a picture that may be exclusively contained in a single NAL unit. A slice may consist either of a number of complete tiles or of a consecutive sequence of complete bricks of one tile.
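As a minimal sketch of the raster-scan ordering referred to above, the following visits the CTUs of a picture row by row, left to right. It deliberately ignores tiles and bricks, and the picture and CTU sizes are hypothetical parameters.

```python
# Illustrative only: a plain CTU raster scan over a whole picture (no tiles/bricks).

def ctu_raster_scan(pic_width, pic_height, ctu_size):
    ctus_per_row = -(-pic_width // ctu_size)    # ceiling division
    ctu_rows = -(-pic_height // ctu_size)
    for ctu_addr in range(ctus_per_row * ctu_rows):
        x = (ctu_addr % ctus_per_row) * ctu_size
        y = (ctu_addr // ctus_per_row) * ctu_size
        yield ctu_addr, (x, y)                  # scan address and top-left luma position

# Example: a 256x128 picture with 64x64 CTUs is visited as addresses 0..7.
for addr, pos in ctu_raster_scan(256, 128, 64):
    print(addr, pos)
```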
  • Tile groups and slices are used interchangeably in this document. For example, the tile group/tile group header in this document may be referred to as a slice/slice header.
  • A pixel or a pel may mean a minimum unit constituting one picture (or image). Also, 'sample' may be used as a term corresponding to a pixel.
  • the sample may generally represent a pixel or a pixel value, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
  • the unit may represent a basic unit of image processing.
  • the unit may include at least one of a specific region of a picture and information related to the region.
  • One unit may include one luma block and two chroma (ex. cb, cr) blocks.
  • the unit may be used interchangeably with terms such as a block or area in some cases.
  • the MxN block may include samples (or sample arrays) of M columns and N rows or a set (or array) of transform coefficients.
  • The video encoding apparatus may include an image encoding apparatus.
  • The encoding apparatus 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270.
  • the prediction unit 220 may include an inter prediction unit 221 and an intra prediction unit 222.
  • the residual processing unit 230 may include a transform unit 232, a quantizer 233, a dequantizer 234, and an inverse transformer 235.
  • the residual processing unit 230 may further include a subtractor 231.
  • The adder 250 may be referred to as a reconstructor or a reconstructed block generator.
  • The above-described image partitioner 210, predictor 220, residual processor 230, entropy encoder 240, adder 250, and filter 260 may be configured by one or more hardware components (for example, an encoder chipset or processor).
  • the memory 270 may include a decoded picture buffer (DPB), or may be configured by a digital storage medium.
  • the hardware component may further include a memory 270 as an internal/external component.
  • the image division unit 210 may divide an input image (or picture, frame) input to the encoding apparatus 200 into one or more processing units.
  • the processing unit may be called a coding unit (CU).
  • the coding unit is recursively divided according to a quad-tree binary-tree ternary-tree (QTBTTT) structure from a coding tree unit (CTU) or a largest coding unit (LCU).
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure, a binary tree structure, and/or a ternary structure.
  • a quad tree structure may be applied first, and a binary tree structure and/or a ternary structure may be applied later.
  • a binary tree structure may be applied first.
  • a coding procedure according to an embodiment may be performed based on a final coding unit that is no longer split.
  • The maximum coding unit may be used directly as the final coding unit based on coding efficiency according to image characteristics, or, if necessary, the coding unit may be recursively divided into coding units of deeper depth, and a coding unit of an optimal size may be used as the final coding unit.
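The recursive QT/BT/TT partitioning described above can be pictured with the schematic sketch below. The choose_split callback is hypothetical; in a real codec the split decisions come from signaled syntax (decoder side) or rate-distortion search (encoder side).

```python
# Illustrative only: schematic recursive QT/BT/TT partitioning into leaf coding units.

def split_block(x, y, w, h, choose_split, leaves):
    mode = choose_split(x, y, w, h)             # one of: None, "QT", "BT_H", "BT_V", "TT_H", "TT_V"
    if mode is None:
        leaves.append((x, y, w, h))             # leaf CU: no further split
        return
    if mode == "QT":                            # quad split into four quadrants
        parts = [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                 (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)]
    elif mode == "BT_H":                        # horizontal binary split: two w x h/2 parts
        parts = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    elif mode == "BT_V":                        # vertical binary split: two w/2 x h parts
        parts = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif mode == "TT_H":                        # horizontal ternary split: h/4, h/2, h/4
        parts = [(x, y, w, h // 4), (x, y + h // 4, w, h // 2), (x, y + 3 * h // 4, w, h // 4)]
    else:                                       # "TT_V": vertical ternary split: w/4, w/2, w/4
        parts = [(x, y, w // 4, h), (x + w // 4, y, w // 2, h), (x + 3 * w // 4, y, w // 4, h)]
    for px, py, pw, ph in parts:
        split_block(px, py, pw, ph, choose_split, leaves)

# Example: split a 128x128 CTU once by quad-tree, then leave everything unsplit.
leaves = []
split_block(0, 0, 128, 128, lambda x, y, w, h: "QT" if w == 128 else None, leaves)
print(leaves)                                   # four 64x64 leaf CUs
```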
  • the coding procedure may include procedures such as prediction, transformation, and reconstruction, which will be described later.
  • the processing unit may further include a prediction unit (PU) or a transform unit (TU).
  • The prediction unit and the transform unit may each be split or partitioned from the above-described final coding unit.
  • the prediction unit may be a unit of sample prediction
  • the transformation unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
  • the unit may be used interchangeably with terms such as a block or area in some cases.
  • the MxN block may represent samples of M columns and N rows or a set of transform coefficients.
  • the sample may generally represent a pixel or a pixel value, and may indicate only a pixel/pixel value of a luma component or only a pixel/pixel value of a saturation component.
  • the sample may be used as a term for one picture (or image) corresponding to a pixel or pel.
  • The encoding device 200 may subtract the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 221 or the intra prediction unit 222 from the input image signal (original block, original sample array) to generate a residual signal (residual block, residual sample array).
  • the prediction unit may perform prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples for the current block.
  • the prediction unit may determine whether intra prediction or inter prediction is applied in units of the current block or CU. As described later in the description of each prediction mode, the prediction unit may generate various information about prediction, such as prediction mode information, and transmit it to the entropy encoding unit 240.
  • the prediction information may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
  • the intra prediction unit 222 may predict the current block by referring to samples in the current picture.
  • the referenced samples may be located in the neighborhood of the current block or may be located apart depending on a prediction mode.
  • prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • the non-directional mode may include, for example, a DC mode and a planar mode (Planar mode).
  • the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes depending on the degree of detail of the prediction direction. However, this is an example, and more or less directional prediction modes may be used depending on the setting.
  • the intra prediction unit 222 may determine a prediction mode applied to the current block by using a prediction mode applied to neighboring blocks.
  • the inter prediction unit 221 may derive the predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • the temporal neighboring block may be referred to by a name such as a collocated reference block or a colCU, and a reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • The inter prediction unit 221 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the inter prediction unit 221 may use motion information of neighboring blocks as motion information of the current block.
  • the residual signal may not be transmitted.
  • In the case of the motion vector prediction (MVP) mode, the motion vector of the current block may be indicated by using the motion vector of a neighboring block as a motion vector predictor and signaling a motion vector difference.
  • the prediction unit 220 may generate a prediction signal based on various prediction methods described below.
  • The prediction unit may apply intra prediction or inter prediction for prediction of one block, and may also apply intra prediction and inter prediction at the same time. This can be called combined inter and intra prediction (CIIP).
  • The prediction unit may perform prediction of a block based on an intra block copy (IBC) prediction mode or a palette mode.
  • The IBC prediction mode or palette mode may be used for coding of content such as game content, for example, screen content coding (SCC).
  • IBC basically performs prediction in the current picture, but may be performed similarly to inter prediction in that a reference block is derived in the current picture. That is, the IBC can use at least one of the inter prediction techniques described in this document.
  • the palette mode can be regarded as an example of intra coding or intra prediction. When the palette mode is applied, a sample value in a picture may be signaled based on information on the palette table and palette index.
  • the prediction signal generated through the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
  • The transform unit 232 may generate transform coefficients by applying a transform technique to the residual signal. For example, the transform technique may include at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loeve Transform), GBT (Graph-Based Transform), or CNT (Conditionally Non-linear Transform).
  • Here, GBT means a transform obtained from a graph when the relationship information between pixels is represented as a graph.
  • CNT means a transform obtained by generating a prediction signal using all previously reconstructed pixels and deriving the transform based on it.
  • The transform process may be applied to square pixel blocks of the same size, or may be applied to blocks of variable size other than square.
  • The quantization unit 233 quantizes the transform coefficients and transmits them to the entropy encoding unit 240, and the entropy encoding unit 240 may encode the quantized signal (information about the quantized transform coefficients) and output it as a bitstream. The information about the quantized transform coefficients may be called residual information.
  • The quantization unit 233 may rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
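The quantize-then-scan step above can be sketched as follows. A plain raster scan stands in for the codec-defined coefficient scan order, and the quantization step is a hypothetical constant rather than a QP-derived value.

```python
# Illustrative only: uniform scalar quantization of a transform-coefficient block,
# followed by rearrangement of the levels into a 1-D list for entropy coding.

def quantize_and_scan(coeff_block, qstep):
    height = len(coeff_block)
    width = len(coeff_block[0])
    levels = [[round(coeff_block[y][x] / qstep) for x in range(width)]
              for y in range(height)]                    # quantized levels (2-D block form)
    scanned = [levels[y][x] for y in range(height) for x in range(width)]
    return scanned                                       # 1-D vector handed to entropy coding

# Example: a 4x4 block of (already transformed) residual coefficients.
block = [[52, -10, 3, 0],
         [-8,   4, 0, 0],
         [ 2,   0, 0, 0],
         [ 0,   0, 0, 0]]
print(quantize_and_scan(block, qstep=8))
```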
  • The entropy encoding unit 240 may perform various encoding methods, such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • The entropy encoding unit 240 may encode information necessary for video/image reconstruction (e.g., values of syntax elements) together with or separately from the quantized transform coefficients.
  • The encoded information (e.g., encoded video/image information) may be transmitted or stored in units of network abstraction layer (NAL) units in the form of a bitstream.
  • The video/image information may further include information regarding various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • The video/image information may further include general constraint information.
  • Information and/or syntax elements transmitted/signaled from the encoding device to the decoding device may be included in the video/image information.
  • The video/image information may be encoded through the above-described encoding procedure and included in the bitstream.
  • the bitstream can be transmitted over a network or stored on a digital storage medium.
  • the network may include a broadcasting network and/or a communication network
  • the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD.
  • A transmitting unit (not shown) that transmits the signal output from the entropy encoding unit 240 and/or a storage unit (not shown) that stores it may be configured as internal/external elements of the encoding device 200, or the transmitting unit may be included in the entropy encoding unit 240.
  • The quantized transform coefficients output from the quantization unit 233 may be used to generate a prediction signal. For example, a residual signal (residual block or residual samples) may be reconstructed by applying inverse quantization and inverse transform to the quantized transform coefficients through the dequantizer 234 and the inverse transformer 235.
  • The adder 250 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 221 or the intra prediction unit 222. If there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
  • the adder 250 may be called a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of a next processing target block in a current picture, or may be used for inter prediction of a next picture through filtering as described below.
  • Meanwhile, luma mapping with chroma scaling (LMCS) may be applied in a picture encoding and/or reconstruction process.
  • the filtering unit 260 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
  • The filtering unit 260 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and may store the modified reconstructed picture in the memory 270, specifically, in the DPB of the memory 270.
  • the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
  • the filtering unit 260 may generate various pieces of information regarding filtering as described later in the description of each filtering method, and transmit them to the entropy encoding unit 240.
  • the filtering information may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
  • the modified reconstructed picture transmitted to the memory 270 may be used as a reference picture in the inter prediction unit 221.
  • When inter prediction is applied through this, the encoding apparatus can avoid a prediction mismatch between the encoding apparatus 200 and the decoding apparatus, and can also improve encoding efficiency.
  • the memory 270 DPB may store the modified reconstructed picture for use as a reference picture in the inter prediction unit 221.
  • the memory 270 may store motion information of a block from which motion information in a current picture is derived (or encoded) and/or motion information of blocks in a picture that has already been reconstructed.
  • the stored motion information may be transmitted to the inter prediction unit 221 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • the memory 270 may store reconstructed samples of blocks reconstructed in the current picture, and may transmit the reconstructed samples to the intra prediction unit 222.
  • FIG. 3 is a diagram schematically illustrating a configuration of a video/image decoding apparatus to which the present document can be applied.
  • The decoding apparatus 300 may be configured to include an entropy decoder 310, a residual processor 320, a predictor 330, an adder 340, a filter 350, and a memory 360.
  • The prediction unit 330 may include an inter prediction unit 332 and an intra prediction unit 331.
  • The residual processing unit 320 may include a dequantizer 321 and an inverse transformer 322.
  • The entropy decoding unit 310, the residual processing unit 320, the prediction unit 330, the adding unit 340, and the filtering unit 350 described above may be configured by one hardware component (e.g., a decoder chipset or processor) according to an embodiment.
  • the memory 360 may include a decoded picture buffer (DPB), or may be configured by a digital storage medium.
  • the hardware component may further include a memory 360 as an internal/external component.
  • the decoding apparatus 300 may restore an image corresponding to a process in which the video/image information is processed in the encoding apparatus of FIG. 2.
  • the decoding apparatus 300 may derive units/blocks based on block partitioning related information obtained from the bitstream.
  • the decoding apparatus 300 may perform decoding using a processing unit applied in the encoding apparatus.
  • the processing unit of decoding may be, for example, a coding unit, and the coding unit may be divided along a quad tree structure, a binary tree structure and/or a ternary tree structure from a coding tree unit or a largest coding unit.
  • One or more transform units can be derived from the coding unit. The reconstructed image signal decoded and output through the decoding device 300 may be reproduced through a reproduction device.
  • the decoding apparatus 300 may receive the signal output from the encoding apparatus of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoding unit 310.
  • the entropy decoding unit 310 may parse the bitstream to derive information (eg, video/image information) necessary for image reconstruction (or picture reconstruction).
  • The video/image information may further include information regarding various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • The video/image information may further include general constraint information.
  • the decoding apparatus may decode a picture further based on the information on the parameter set and/or the general restriction information.
  • Information and/or syntax elements signaled/received as described later in this document may be decoded through the decoding procedure and obtained from the bitstream.
  • The entropy decoding unit 310 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements required for image reconstruction and quantized values of transform coefficients for the residual.
  • More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using information on the syntax element to be decoded, decoding information of neighboring blocks and the decoding target block, or information of a symbol/bin decoded in a previous step, predicts the probability of occurrence of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • the CABAC entropy decoding method may update the context model using the decoded symbol/bin information for the next symbol/bin context model after determining the context model.
  • Among the information decoded by the entropy decoding unit 310, information about prediction is provided to the prediction unit (inter prediction unit 332 and intra prediction unit 331), and the residual value on which entropy decoding has been performed by the entropy decoding unit 310, that is, the quantized transform coefficients and related parameter information, may be input to the residual processing unit 320.
  • the residual processor 320 may derive a residual signal (residual block, residual samples, residual sample array). Also, information related to filtering among information decoded by the entropy decoding unit 310 may be provided to the filtering unit 350. Meanwhile, a receiving unit (not shown) receiving a signal output from the encoding device may be further configured as an internal/external element of the decoding device 300, or the receiving unit may be a component of the entropy decoding unit 310.
  • Meanwhile, the decoding device may be called a video/image/picture decoding device, and the decoding device may be classified into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder).
  • The information decoder may include the entropy decoding unit 310, and the sample decoder may include at least one of the inverse quantization unit 321, the inverse transformation unit 322, the addition unit 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
  • the inverse quantization unit 321 may inverse quantize the quantized transform coefficients to output transform coefficients.
  • the inverse quantization unit 321 may rearrange the quantized transform coefficients in a two-dimensional block form. In this case, the reordering may be performed based on the coefficient scan order performed by the encoding device.
  • the inverse quantization unit 321 may perform inverse quantization on the quantized transform coefficients by using a quantization parameter (for example, quantization step size information), and obtain transform coefficients.
  • the inverse transform unit 322 inversely transforms the transform coefficients to obtain a residual signal (residual block, residual sample array).
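The inverse quantization step referred to above can be sketched as rescaling the received levels back into transform coefficients. The QP-to-step mapping below is the common "step doubles every 6 QP" approximation, not the exact scaling rule of any particular standard; the inverse transform would then be applied to the resulting block.

```python
# Illustrative only: inverse (de)quantization of received levels into transform coefficients.

def dequantize(levels_1d, qp, width, height):
    qstep = 2 ** ((qp - 4) / 6.0)                        # approximate quantization step size
    coeffs = [level * qstep for level in levels_1d]      # rescaled transform coefficients
    # reshape back into block form (inverse of the 1-D coefficient scan)
    return [coeffs[y * width:(y + 1) * width] for y in range(height)]

# Example: dequantize the 16 levels of a 4x4 block at QP 22; an inverse transform
# would then turn the coefficient block back into residual samples.
print(dequantize([7, -1, 0, 0] + [0] * 12, qp=22, width=4, height=4))
```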
  • the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
  • the prediction unit may determine whether intra prediction is applied to the current block or inter prediction is applied based on the information on the prediction output from the entropy decoding unit 310, and may determine a specific intra/inter prediction mode.
  • The prediction unit 330 may generate a prediction signal based on various prediction methods described below.
  • The prediction unit may apply intra prediction or inter prediction for prediction of one block, and may also apply intra prediction and inter prediction at the same time. This can be called combined inter and intra prediction (CIIP).
  • The prediction unit may perform prediction of a block based on an intra block copy (IBC) prediction mode or a palette mode.
  • The IBC prediction mode or palette mode may be used for coding of content such as game content, for example, screen content coding (SCC).
  • IBC basically performs prediction in the current picture, but may be performed similarly to inter prediction in that a reference block is derived in the current picture. That is, the IBC can use at least one of the inter prediction techniques described in this document.
  • the palette mode can be regarded as an example of intra coding or intra prediction. When the palette mode is applied, information on the palette table and palette index may be signaled by being included in the video/image information.
  • the intra prediction unit 331 may predict the current block by referring to samples in the current picture.
  • the referenced samples may be located in the neighborhood of the current block or may be located apart depending on a prediction mode.
  • prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • the intra prediction unit 331 may determine a prediction mode applied to the current block using a prediction mode applied to neighboring blocks.
  • the inter prediction unit 332 may derive the predicted block for the current block based on the reference block (reference sample array) specified by the motion vector on the reference picture.
  • motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture.
  • the inter prediction unit 332 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or reference picture index of the current block based on the received candidate selection information. Inter-prediction may be performed based on various prediction modes, and information on the prediction may include information indicating a mode of inter-prediction for the current block.
  • The adder 340 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331). If there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
  • the adding unit 340 may be called a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of a next processing target block in a current picture, may be output through filtering as described below, or may be used for inter prediction of a next picture.
  • Meanwhile, luma mapping with chroma scaling (LMCS) may be applied in a picture decoding process.
  • The filtering unit 350 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
  • The filtering unit 350 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and may transmit the modified reconstructed picture to the memory 360, specifically, to the DPB of the memory 360.
  • the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
  • the (corrected) reconstructed picture stored in the DPB of the memory 360 may be used as a reference picture in the inter prediction unit 332.
  • the memory 360 may store motion information of a block from which motion information in a current picture is derived (or decoded) and/or motion information of blocks in a picture that has already been reconstructed.
  • The stored motion information may be transmitted to the inter prediction unit 332 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • the memory 360 may store reconstructed samples of blocks reconstructed in the current picture, and may transmit the reconstructed samples to the intra prediction unit 331.
  • In this document, the embodiments described in the filtering unit 260, the inter prediction unit 221, and the intra prediction unit 222 of the encoding device 200 may be applied identically or correspondingly to the filtering unit 350, the inter prediction unit 332, and the intra prediction unit 331 of the decoding device 300, respectively.
  • In performing image coding, a predicted block including prediction samples for a current block, which is a block to be coded, may be generated through prediction. The predicted block includes prediction samples in a spatial domain (or pixel domain).
  • The predicted block is derived identically in the encoding device and the decoding device, and the encoding device can improve image coding efficiency by signaling to the decoding device information about the residual between the original block and the predicted block (residual information), rather than the original sample values of the original block themselves.
  • The decoding apparatus may derive a residual block including residual samples based on the residual information, generate a reconstructed block including reconstructed samples by combining the residual block and the predicted block, and generate a reconstructed picture including the reconstructed blocks.
  • the residual information may be generated through a transform and quantization procedure.
  • More specifically, the encoding apparatus may derive a residual block between the original block and the predicted block, derive transform coefficients by performing a transform procedure on the residual samples (residual sample array) included in the residual block, derive quantized transform coefficients by performing a quantization procedure on the transform coefficients, and signal the related residual information to the decoding apparatus (through a bitstream).
  • the residual information may include information such as value information of the quantized transform coefficients, position information, a transform technique, a transform kernel, and quantization parameters.
  • the decoding apparatus may perform an inverse quantization/inverse transformation procedure based on the residual information and derive residual samples (or residual blocks).
  • the decoding apparatus may generate a reconstructed picture based on the predicted block and the residual block.
  • The encoding apparatus can also derive a residual block by inverse quantizing/inverse transforming the quantized transform coefficients, for reference for inter prediction of a picture, and generate a reconstructed picture based on it.
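The reconstruction step described above (predicted block plus residual block) can be sketched per sample as follows, with clipping to the valid sample range. An 8-bit range is assumed here purely for illustration; the actual bit depth is a coded parameter.

```python
# Illustrative only: sample reconstruction as prediction plus residual, clipped to range.

def reconstruct(pred_samples, residual_samples, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val)
             for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(pred_samples, residual_samples)]

# Example: a 2x2 predicted block combined with its decoded residual.
print(reconstruct([[100, 102], [98, 99]], [[5, -3], [0, 7]]))
```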
  • the prediction unit of the encoding device/decoding device may derive a prediction sample by performing inter prediction on a block basis.
  • the inter prediction may represent a prediction derived in a manner dependent on data elements (e.g. sample values, motion information, etc.) of the picture(s) other than the current picture.
  • When inter prediction is applied, a predicted block (prediction sample array) for the current block may be derived based on a reference block (reference sample array) specified by a motion vector on a reference picture.
  • motion information of the current block may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction type (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block present in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • The temporal neighboring block may be referred to by a name such as a collocated reference block or a colCU, and a reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
  • For example, a motion information candidate list may be constructed based on neighboring blocks of the current block, and a flag or index information indicating which candidate is selected (used) to derive the motion vector and/or reference picture index of the current block may be signaled.
  • Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, motion information of a current block may be the same as motion information of a selected neighboring block.
  • the residual signal may not be transmitted.
  • In the case of a motion vector prediction (MVP) mode, the motion vector of a selected neighboring block is used as a motion vector predictor (MVP), and a motion vector difference (MVD) may be signaled.
  • a motion vector of the current block may be derived using the sum of the motion vector predictor and the motion vector difference.
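The MVP-mode relationship just described is additive per component, as the minimal sketch below shows; the IBC description earlier derives a block vector from a block vector difference in an analogous way. The vectors here are hypothetical example values.

```python
# Illustrative only: MVD = MV - MVP on the encoder side, MV = MVP + MVD on the decoder side.

def encode_mvd(mv, mvp):
    return (mv[0] - mvp[0], mv[1] - mvp[1])      # encoder side: signaled difference

def decode_mv(mvp, mvd):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])    # decoder side: reconstructed motion vector

# Example round trip.
mv, mvp = (14, -6), (12, -4)
mvd = encode_mvd(mv, mvp)
assert decode_mv(mvp, mvd) == mv
print(mvd)                                       # (2, -2)
```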
  • Various inter prediction modes may be used for prediction of the current block in a picture.
  • various modes such as a merge mode, a skip mode, an MVP mode, and an affine mode may be used.
  • Decoder side motion vector refinement (DMVR) mode, adaptive motion vector resolution (AMVR) mode, and the like may be further used as ancillary modes.
  • The affine mode may also be called an affine motion prediction mode.
  • the MVP mode may also be called AMVP (advanced motion vector prediction) mode.
  • the prediction mode information indicating the inter prediction mode of the current block may be signaled from the encoding device to the decoding device.
  • the prediction mode information may be included in a bitstream and received by a decoding device.
  • the prediction mode information may include index information indicating one of a plurality of candidate modes.
  • the inter prediction mode may be indicated through hierarchical signaling of flag information.
  • the prediction mode information may include one or more flags.
  • For example, a skip flag may be signaled to indicate whether the skip mode is applied; when the skip mode is not applied, a merge flag may be signaled to indicate whether the merge mode is applied; and when the merge mode is not applied, the MVP mode may be indicated to be applied, or a flag for further classification may be further signaled.
  • the affine mode may be signaled as an independent mode, or may be signaled as a mode dependent on a merge mode or an MVP mode.
  • the affine mode may be configured as one candidate of the merge candidate list or the MVP candidate list, as described later.
  • Inter prediction may be performed using motion information of a current block.
  • The encoding apparatus may derive optimal motion information for the current block through a motion estimation procedure. For example, the encoding apparatus may search for similar reference blocks having high correlation with the original block in the original picture for the current block, in fractional pixel units within a predetermined search range in the reference picture, and derive motion information through this.
  • the similarity of the block can be derived based on the difference between phase-based sample values. For example, the similarity of a block may be calculated based on a sum of absolute difference (SAD) between a current block (or a template of a current block) and a reference block (or a template of a reference block). In this case, motion information may be derived based on a reference block having the smallest SAD in the search area.
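A minimal sketch of the SAD-based matching mentioned above is given below, limited to integer-pel positions in a small search window; a real motion search also covers fractional positions using interpolated reference samples. The blocks and window size are hypothetical, and the caller is assumed to keep the window inside the reference array.

```python
# Illustrative only: integer-pel block matching by sum of absolute differences (SAD).

def sad(cur, ref, ref_x, ref_y):
    return sum(abs(cur[y][x] - ref[ref_y + y][ref_x + x])
               for y in range(len(cur)) for x in range(len(cur[0])))

def best_match(cur, ref, center_x, center_y, search_range):
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cost = sad(cur, ref, center_x + dx, center_y + dy)
            if best is None or cost < best[0]:
                best = (cost, (dx, dy))          # keep the lowest-SAD displacement
    return best                                  # (minimum SAD, motion vector)

# Example: find the best displacement of a 2x2 block within +/-1 samples.
ref = [[10, 10, 10, 10],
       [10, 50, 60, 10],
       [10, 70, 80, 10],
       [10, 10, 10, 10]]
cur = [[50, 60],
       [70, 80]]
print(best_match(cur, ref, center_x=1, center_y=1, search_range=1))  # SAD 0 at (0, 0)
```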
  • the derived motion information may be signaled to the decoding apparatus according to various methods based on the inter prediction mode.
  • When the MVP mode is applied, a motion vector predictor (MVP) candidate list may be generated using the motion vectors of reconstructed spatial neighboring blocks and/or a motion vector corresponding to a temporal neighboring block (or Col block). That is, the motion vector of a reconstructed spatial neighboring block and/or the motion vector corresponding to the temporal neighboring block may be used as a motion vector predictor candidate.
  • the prediction information may include selection information (eg, an MVP flag or an MVP index) indicating an optimal motion vector predictor candidate selected from among motion vector predictor candidates included in the list.
  • the prediction unit may select a motion vector predictor of the current block from among motion vector predictor candidates included in the motion vector candidate list, using the selection information.
  • the prediction unit of the encoding device may obtain a motion vector difference (MVD) between a motion vector of a current block and a motion vector predictor, encode it, and output it in a bitstream format. That is, the MVD can be obtained by subtracting the motion vector predictor from the motion vector of the current block.
  • the prediction unit of the decoding apparatus may obtain a motion vector difference included in the information about the prediction, and derive the motion vector of the current block through addition of the motion vector difference and the motion vector predictor.
  • the prediction unit of the decoding apparatus may obtain or derive a reference picture index indicating the reference picture from the information on the prediction.
  • a predicted block for a current block may be derived based on motion information derived according to a prediction mode.
  • the predicted block may include predicted samples (predicted sample array) of the current block.
  • An interpolation procedure may be performed, and through this, prediction samples of the current block may be derived based on reference samples in fractional sample units in the reference picture.
  • When affine inter prediction is applied, prediction samples may be generated based on a sample/subblock unit MV.
  • When bi-prediction is applied, final prediction samples may be derived through weighting (according to the phase) of prediction samples derived based on L0 prediction and prediction samples derived based on L1 prediction.
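The bi-prediction combination just mentioned can be sketched as a per-sample weighted average, shown here with equal weights; actual codecs may use signaled or derived unequal weights and their own rounding rules.

```python
# Illustrative only: combining L0 and L1 prediction samples into final bi-prediction samples.

def bi_predict(pred_l0, pred_l1, w0=1, w1=1):
    total = w0 + w1
    return [[(w0 * a + w1 * b + total // 2) // total      # weighted average with rounding
             for a, b in zip(row0, row1)]
            for row0, row1 in zip(pred_l0, pred_l1)]

# Example: average two 2x2 prediction blocks with equal weights.
print(bi_predict([[100, 104], [96, 90]], [[102, 100], [100, 94]]))
```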
  • reconstruction samples and reconstruction pictures may be generated based on the derived prediction samples, and then procedures such as in-loop filtering may be performed.
  • Some or all of the video/image information may be entropy-encoded by the entropy encoding unit 240, and, as described above with reference to FIG. 3, some or all of the video/image information may be entropy-decoded by the entropy decoding unit 310.
  • The video/image information may be encoded/decoded in units of syntax elements. Encoding/decoding of information in this document may include encoding/decoding by the method described in this paragraph.
  • For example, the entropy coding described above may be performed based on context-adaptive binary arithmetic coding (CABAC), in which a syntax element is converted into a string of binary bins through binarization.
  • Binary bins can be input into a regular coding engine or a bypass coding engine.
  • The regular coding engine can allocate a context model that reflects a probability value for the bin, and can code the bin based on the assigned context model.
  • the probability model for the bin can be updated.
  • the bins thus coded may be referred to as context-coded bins.
  • the bypass coding engine may omit the procedure of estimating the probability for the input bin and updating the probability model applied to the bin after coding. Instead of assigning a context model, coding speed can be improved by coding the input bin by applying a uniform probability distribution (for example, 50:50).
  • the bins thus coded may be referred to as bypass bins.
  • The context model may be allocated and updated for each bin that is context coded (regular coded), and the context model may be indicated based on ctxIdx or ctxInc.
  • ctxIdx may be derived based on ctxInc.
  • a context index (ctxIdx) indicating a context model for each of the regular coded bins may be derived as a sum of context index increment (ctxInc) and context index offset (ctxIdxOffset).
  • the ctxInc may be derived differently for each bin.
  • the ctxIdxOffset may be represented by the lowest value of the ctxIdx.
  • the minimum value of ctxIdx may be referred to as an initial value (initValue) of ctxIdx.
  • the ctxIdxOffset may be a value generally used to distinguish context models for other syntax elements, and the context model for one syntax element may be classified or derived based on ctxInc.
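The context index derivation described above is a simple sum, as sketched below. The offset and increment values in the example are hypothetical and are not taken from any standard; the increment is shown depending on neighbor information, as is typical for context-coded flags.

```python
# Illustrative only: ctxIdx = ctxIdxOffset + ctxInc for a regular-coded bin.

def derive_ctx_idx(ctx_idx_offset, ctx_inc):
    return ctx_idx_offset + ctx_inc              # context model index for the current bin

# Example: a syntax element whose context models start at offset 36, where the
# increment for the first bin depends on how many neighboring blocks used the mode.
left_used, above_used = 1, 0                     # hypothetical neighbor availability info
ctx_inc = left_used + above_used                 # ctxInc in {0, 1, 2}
print(derive_ctx_idx(36, ctx_inc))               # -> 37
```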
  • Entropy decoding may be performed in the same order as entropy encoding.
  • The encoding device may perform entropy encoding for a target syntax element. That is, the encoding device may encode the bin string of the target syntax element based on regular coding (context-based) or bypass coding, using an entropy coding technique such as context-adaptive arithmetic coding (CABAC) or context-adaptive variable length coding (CAVLC), and its output may be included in the bitstream.
  • the bitstream can be delivered to a decoding device via a (digital) storage medium or network.
  • the decoding apparatus may decode each bin in the bin string from a bitstream based on context or bypass based on an entropy coding technique such as CABAC or CAVLC.
  • The bitstream may include various information for video/image decoding as described above.
  • the bitstream can be obtained from an encoding device via a (digital) storage medium or network.
  • The IBC may be used for coding of content such as game content, for example, screen content coding (SCC).
  • the IBC basically performs prediction in the current picture, but may be performed similarly to inter prediction in that a reference block is derived in the current picture. That is, the IBC can use at least one of the inter prediction techniques described in this document. For example, at IBC, at least one of the above-described methods for deriving motion information (motion vector) may be used. At least one of the inter prediction techniques may be partially modified and used as described below in consideration of the IBC prediction.
  • the IBC may refer to the current picture, and thus the IBC may be referred to as Current Picture Referencing (CPR).
  • Whether the IBC is applied to the current block may be indicated through an IBC flag (e.g., pred_mode_ibc_flag), which may be included in a coding unit syntax, encoded into a bitstream by the encoding device, and signaled to the decoding device.
  • FIG. 5 shows an example of a block vector of the current block.
  • Motion compensation using a general temporal reference picture uses a picture to which all in-loop filtering has been applied.
  • On the other hand, a delay may occur because in-loop filtering of an already decoded partial region of the current picture would have to be waited for.
  • an image to which in-loop filtering is not applied may be used in motion compensation using the current image.
  • an image buffer corresponding thereto may be required according to an area used for motion compensation. Accordingly, the motion compensation possible region may be limited to the largest coding unit (or current CTU) including the current coding unit.
  • the largest coding unit on the right by +1 or +2 based on the current largest coding unit may be used as a motion compensation area.
  • the image buffer may be limited to a specific area including the current CTU.
  • the displacement value used for motion compensation in the current image is called a block vector in order to distinguish it from a motion vector used for motion compensation between different pictures in existing inter prediction.
  • One embodiment relates to a method of using a part of the current video that has been coded so far as a reference video during encoding of the current coding unit.
  • the prediction unit of the encoder adds the current image to the reference image list and finds the block most similar to the current block in a predetermined region among the already coded regions.
  • the optimal motion vector uses motion information prediction in the same way as the existing inter mode, and only the motion vector difference value is transmitted to the decoder.
  • Two methods can be used to indicate whether motion compensation is used in the current image.
  • the DiffPicOrderCnt function represents a difference in picture order count (POC) between two video inputs.
  • the input value currPic means the current image including the current coding target block
  • RefPicList0[ref_idx_l0[x0][y0]] means the reference video of the current coding target block.
  • x0 and y0 indicate the position of the current block
  • ref_idx_l0 indicates the reference image index in the 0 direction of the reference image list. Therefore, the reference image of the current block can be calculated using the reference image index received from the reference image list.
  • the output value 0 by the DiffPicOrderCnt function means that the current image and the reference image are the same image.
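  • For illustration, the first method can be sketched as follows (a simplified stand-in for the DiffPicOrderCnt function, assuming it simply subtracts the two POC values as described above):

```python
def diff_pic_order_cnt(poc_a: int, poc_b: int) -> int:
    """Simplified POC difference between two pictures."""
    return poc_a - poc_b

def reference_is_current_picture(curr_poc: int, ref_poc: int) -> bool:
    """Output value 0 means the reference picture equals the current picture,
    i.e. motion compensation is performed within the current picture (CPR/IBC)."""
    return diff_pic_order_cnt(curr_poc, ref_poc) == 0
```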
  • an intra block copy flag (intra_bc_flag) may be defined; when the flag value is 1, it may indicate that the corresponding block uses motion compensation in the current image, and when the flag value is 0, it may indicate that the corresponding block does not use motion compensation in the current image.
  • the intra block copy flag may be represented by an IBC flag (pred_mode_ibc_flag) syntax element, and may be included in a coding unit syntax.
  • the encoded information (ex. encoded video/video information) derived by the encoding device may be output in the form of a bitstream.
  • the encoding information may include information or flags indicating whether the current video is used as a reference video.
  • the encoded information may be transmitted or stored in units of network abstraction layer (NAL) units in the form of a bitstream.
  • the bitstream may be transmitted over a network, or may be stored in a non-transitory digital storage medium.
  • the bitstream may not be directly transmitted from the encoding device to the decoding device, but may be streamed/downloaded through an external server (ex. content streaming server).
  • the network may include a broadcasting network and/or a communication network
  • the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD.
  • FIG. 6 schematically shows a block vector differential decoding method.
  • a block vector (BV) generated when motion compensation is performed on the current image (CPR) may have a distribution or range of values different from that of an existing motion vector. That is, since the motion compensation region of a block vector may be limited to the current maximum coding unit or to a maximum coding unit +1 or +2 to the right of the current maximum coding unit, the value of a block vector may be limited by the motion compensation region, unlike a motion vector. Accordingly, in one embodiment, when the coded current image is used as a reference image for motion compensation, whether CPR (or IBC) is applied may be checked on a block basis, and motion vector difference coding and block vector difference coding may be selectively used based on the result.
  • whether CPR is applied to the current block can be determined by checking whether the picture order count (POC) of the reference picture of the current block is the same as the POC of the current picture, as described above.
  • alternatively, whether CPR is applied to the current block may be determined based on a flag indicating whether CPR (or IBC) is applied to the current block.
  • the block vector difference value is decoded using the Block Vector Difference (BVD) decoding method in this document (S610).
  • otherwise, the motion vector difference value may be decoded using an MVD (motion vector difference) decoding method.
  • hereinafter, the BVD coding syntax and the binarization methods for the BVD coding syntax will be described, and the BVD decoding method may be performed based on them.
  • various embodiments described below will be mainly described with respect to operations or processes performed in the decoding apparatus, but may be performed in the same manner in the encoding apparatus.
  • the prediction unit of the encoding apparatus may add the current image to the reference image list, find the block most similar to the current block in a predetermined region among the already coded regions, and, as in the existing inter mode, use a syntax structure for predicting motion information for the optimal block vector, and a block vector difference value may be transmitted to the decoding device.
  • the difference value of the block vector may be transmitted using the syntax of the block vector difference according to various embodiments.
  • the BVD coding syntax may include Table 1, for example.
  • the (x0, y0) syntax element may indicate information about the location of the current block.
  • the information about the position of the current block may include information indicating the position of the current block based on the upper left block in the current picture.
  • the refList syntax element may indicate information about a reference list of the current image.
  • the reference list may represent a list of reference images, and information about a reference list of the current image may represent list 0 or list 1.
  • the cpIdx syntax element may indicate information about a component index, or information about a control point index.
  • for example, the component index indicated by the compIdx syntax element may indicate a horizontal component when the compIdx syntax element is 0, and a vertical component when it is 1.
  • for example, when CPR is applied to the current block, the current image may be included in reference picture list 0, and in this case the BVD coding syntax can be transmitted only for reference picture list 0. If the current picture is included in reference picture list 1, the BVD coding syntax can be transmitted only for reference picture list 1.
  • the abs_bvd_greater0_flag syntax element may indicate information about whether the absolute value of the block vector difference of the input component is greater than 0. For example, when the abs_bvd_greater0_flag syntax element is 1, it may indicate that the absolute value of the block vector difference of the input component is greater than 0, and when it is 0, it may indicate that it is not greater than 0, that is, that it is 0.
  • an abs_bvd_greater1_flag syntax element may be additionally transmitted, or the abs_bvd_greater1_flag syntax element may be further included in the BVD coding syntax.
  • the abs_bvd_greater1_flag syntax element may indicate information about whether the magnitude of the absolute value of the block vector difference of the input component is greater than 1. For example, when the abs_bvd_greater1_flag syntax element is 1, it may indicate that the magnitude of the absolute value of the block vector difference of the input component is greater than 1, and if it is 0, it may indicate that it is not greater than 1.
  • when the abs_bvd_greater1_flag syntax element is 1, an abs_bvd_minus2 syntax element may be additionally transmitted, or the abs_bvd_minus2 syntax element may be further included in the BVD coding syntax. Alternatively, when the abs_bvd_greater0_flag syntax element is 1 and the abs_bvd_greater1_flag syntax element is 1, an abs_bvd_minus2 syntax element may be additionally transmitted, or the abs_bvd_minus2 syntax element may be further included in the BVD coding syntax. Here, the abs_bvd_minus2 syntax element may indicate information about a -2 value of the absolute value of the block vector difference of the input component.
  • the value of the abs_bvd_minus2 syntax element plus 2 may represent the absolute value of the block vector difference.
  • the abs_bvd_minus2 syntax element may indicate information about the remaining values or remaining information.
  • when the abs_bvd_minus2 syntax element does not exist, it may be inferred that the abs_bvd_minus2 syntax element value is -1.
  • the bvd_sign_flag syntax element may indicate information about the sign of the block vector difference of the input component. For example, when the bvd_sign_flag syntax element is 0, the sign of the block vector difference of the input component may indicate information that is positive, and when it is 1, it may indicate information that is negative.
  • the abs_bvd_greater0_flag syntax element, the abs_bvd_greater1_flag syntax element, the abs_bvd_minus2 syntax element, and the bvd_sign_flag syntax element may be represented as abs_bvd_greater0_flag[compIdx], abs_bvd_greater1_flag[compIdx], abs_bvd_minus2[compIdx], and bvd_sign_flag[compIdx] syntax elements, respectively, and may indicate information about the component input by compIdx.
  • the block vector difference (BVD) value of the component finally input by the above-described syntax elements may be derived, for example, as in Equation 1.
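  • Equation 1 itself is not reproduced in this excerpt; the following is a minimal sketch assuming it takes the same form as conventional MVD reconstruction, where the absolute value equals abs_bvd_minus2 + 2 (with abs_bvd_minus2 inferred as -1 when not present) and the sign is taken from bvd_sign_flag:

```python
def derive_bvd_component(abs_bvd_greater0_flag: int, abs_bvd_greater1_flag: int,
                         abs_bvd_minus2: int, bvd_sign_flag: int) -> int:
    """Hedged sketch of an Equation-1 style reconstruction for one BVD component.
    abs_bvd_greater1_flag only gates whether abs_bvd_minus2 is signaled; when it
    is 0, abs_bvd_minus2 is inferred as -1 so that abs_bvd_minus2 + 2 equals 1."""
    if abs_bvd_greater0_flag == 0:
        return 0
    abs_bvd = abs_bvd_minus2 + 2
    return abs_bvd * (1 - 2 * bvd_sign_flag)  # bvd_sign_flag: 0 positive, 1 negative
```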
  • a binarization method such as Table 2 may be used.
  • the abs_bvd_greater0_flag syntax element, abs_bvd_greater1_flag syntax element, and bvd_sign_flag syntax element can be binarized by the Fixed-Length (FL) method, and the abs_bvd_minus2 syntax element can be binarized by Exp-Golomb 3rd order (EG3).
  • the abs_bvd_minus2 syntax element may be binarized by Exp-Golomb 2nd order (EG2) or Exp-Golomb 4th order (EG4).
  • abs_bvd_minus2 syntax element may be shown in Table 3 through binarization by Exp-Golomb 3rd order.
  • x may represent the abs_bvd_minus2 syntax element
  • binary may represent the value of the abs_bvd_minus2 syntax element binarized by Exp-Golomb 3rd order.
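  • Table 3 is not reproduced in this excerpt; as an illustration, one common formulation of k-th order Exp-Golomb (EGk) binarization, with a prefix of 1s terminated by a 0 and a suffix whose length grows with the prefix, can be sketched as follows (the exact bin strings of Table 3 may differ):

```python
def eg_k_bins(value: int, k: int) -> list:
    """One common formulation of EGk binarization of a non-negative value."""
    bins = []
    while value >= (1 << k):
        bins.append(1)                     # prefix bin
        value -= (1 << k)
        k += 1
    bins.append(0)                         # prefix terminator
    for i in range(k - 1, -1, -1):
        bins.append((value >> i) & 1)      # suffix bins of the remaining value
    return bins

# Example: eg_k_bins(0, 3) -> [0, 0, 0, 0], eg_k_bins(8, 3) -> [1, 0, 0, 0, 0, 0]
```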
  • the maximum value of the block vector difference may be less than twice the size of the largest coding unit. Or, it may be smaller than twice the size of the largest coding unit minus the minimum coding unit.
  • the finally obtained block vector (BV) value may be at least larger than the block size and smaller than the maximum coding unit.
  • for example, the x value of the BV can be greater than the width of the block, and the y value can be greater than the height of the block.
  • the BVD coding syntax may include Table 4, for example.
  • the BVD coding syntax according to the first embodiment primarily includes syntax elements for the horizontal component, that is, for the case where the component index (compIdx) is 0, for convenience of description, whereas the BVD coding syntax according to the second embodiment may include syntax elements for both the horizontal component and the vertical component, that is, for component indexes 0 and 1.
  • accordingly, each syntax element differs from the syntax elements described in the first embodiment only in its component, and may represent the same meaning or the same information.
  • block vector difference (BVD) value of the component finally input by the above-described syntax elements may be derived, for example, as in Equation 1 above.
  • a binarization method such as Table 2 described above may be used.
  • different binarization methods may be used for the x component and the y component of the block vector.
  • a binarization method such as Table 5 may be used.
  • the abs_bvd_greater0_flag syntax element, abs_bvd_greater1_flag syntax element, and bvd_sign_flag syntax element can be binarized by the Fixed-Length (FL) method, the abs_bvd_minus2[0] syntax element can be binarized by Exp-Golomb 3rd order (EG3), and the abs_bvd_minus2[1] syntax element can be binarized by Exp-Golomb 2nd order (EG2).
  • the abs_bvd_minus2 syntax element may be represented as Table 3 described above through binarization by Exp-Golomb 3rd order.
  • the abs_bvd_minus2 syntax element may be binarized in a form combining Truncated Unary (TU) and Fixed Length Coding (FLC).
  • the fixed length size of the FLC may have a predefined integer value such as 3, 4, 5 or 6, and the like.
  • the search area of the IBC (or CPR) may include up to two coding tree units (CTUs), and if the maximum CTU size is 128, the block vector (BV) can have a value within a range of 256. In this case, the maximum value of the TU part may be 16, and the FLC part can be binarized using a fixed length size of 4.
  • An FLC having a fixed length size of 4 may be referred to as FLC4.
  • the maximum value of the TU and the length of the FLC may be determined according to the size of the CTU, the search area of the IBC, the x or y component, and the like.
  • abs_bvd_minus2 syntax element can be shown in Table 6 through binarization by TU+FLC4.
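  • Table 6 is not reproduced in this excerpt; the sketch below shows one plausible reading of the TU + FLC4 combination, assuming the quotient of the value by 16 is coded with TU (maximum 16) and the remainder with 4 fixed-length bins — an assumption, not necessarily the exact mapping of Table 6:

```python
def tu_flc_bins(value: int, flc_bits: int = 4, tu_max: int = 16) -> list:
    """Sketch of a combined TU + FLC binarization (an assumed split)."""
    quotient, remainder = divmod(value, 1 << flc_bits)
    bins = [1] * quotient                     # truncated unary prefix
    if quotient < tu_max:
        bins.append(0)                        # terminating zero, omitted at the maximum
    bins += [(remainder >> i) & 1 for i in range(flc_bits - 1, -1, -1)]
    return bins
```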
  • the abs_bvd_greater0_flag syntax element and abs_bvd_greater1_flag syntax element may be different from the abs_mvd_greater0_flag syntax element and abs_mvd_greater1_flag syntax element related to the motion vector difference, and thus may have separate context models.
  • the allocation of ctxInc of context coded bins for the abs_bvd_greater0_flag syntax element and the abs_bvd_greater1_flag syntax element may be as shown in Table 7.
  • the context model of the abs_bvd_greater0_flag syntax element of the x component and the context model of abs_bvd_greater0_flag of the y component can be separated.
  • the context model of abs_bvd_greater1_flag of the x component and the context model of abs_bvd_greater1_flag of the y component can be separated.
  • the allocation of ctxInc of the context coded bins for the abs_bvd_greater0_flag syntax element and the abs_bvd_greater1_flag syntax element may be as shown in Table 8.
  • the context model of a bin may be classified or derived based on the ctxInc of the context coded bin described above. That is, the ctxInc of the above-described context coded bin can be used in entropy coding.
  • the detailed description related to entropy coding has been described above with reference to FIG. 4.
  • the BVD coding syntax may include Table 9, for example.
  • the abs_bvd_greater0_flag syntax element may indicate information on whether the absolute value of the block vector difference of the input component is greater than 0. For example, when the abs_bvd_greater0_flag syntax element is 1, it may indicate that the absolute value of the block vector difference of the input component is greater than 0, and when it is 0, it may indicate that it is not greater than 0, that is, that it is 0.
  • the parity_flag syntax element may indicate information on the remainder when the absolute value of the block vector difference of the input component is divided by 2. For example, when the parity_flag syntax element is 0, it may indicate that the remainder is 0, or that the absolute value of the block vector difference is even, and when it is 1, it may indicate that the remainder is 1, or that the absolute value of the block vector difference is odd.
  • the parity_flag syntax element may indicate information indicating odd or even number of absolute values of block vector differences. If the parity_flag syntax element does not exist, it may be implied that it is 0. Also, the abs_bvd_greater2_flag syntax element may indicate information about whether the magnitude of the absolute value of the block vector difference of the input component is greater than 2. For example, when the abs_bvd_greater2_flag syntax element is 1, it may indicate that the magnitude of the absolute value of the block vector difference of the input component is greater than 2, and if it is 0, it may indicate that it is not greater than 2. If the abs_bvd_greater2_flag syntax element does not exist, it may be implied that it is zero.
  • an abs_bvd_minus3 syntax element may be additionally transmitted, or the abs_bvd_minus3 syntax element may be further included in the BVD coding syntax.
  • alternatively, when the abs_bvd_greater0_flag syntax element is 1 and the abs_bvd_greater2_flag syntax element is 1, an abs_bvd_minus3 syntax element may be additionally transmitted, or the abs_bvd_minus3 syntax element may be further included in the BVD coding syntax.
  • the abs_bvd_minus3 syntax element may indicate information about a quotient of a value obtained by dividing a -3 value of an absolute value of a block vector difference of an input component by 2.
  • that is, the value of the abs_bvd_minus3 syntax element, combined with the parity_flag syntax element and an additional +3, may represent the absolute value of the block vector difference.
  • the abs_bvd_minus3 syntax element may indicate information about the remaining values or remaining information.
  • when the abs_bvd_minus3 syntax element does not exist, it may be inferred that the abs_bvd_minus3 syntax element value is -1.
  • the bvd_sign_flag syntax element may indicate information about the sign of the block vector difference of the input component. For example, when the bvd_sign_flag syntax element is 0, the sign of the block vector difference of the input component may indicate information that is positive, and when it is 1, it may indicate information that is negative.
  • the abs_bvd_greater0_flag syntax element, parity_flag syntax element, abs_bvd_greater2_flag syntax element, abs_bvd_minus3 syntax element, and bvd_sign_flag syntax element may be represented as abs_bvd_greater0_flag[compIdx], parity_flag[compIdx], abs_bvd_greater2_flag[compIdx], abs_bvd_minus3[compIdx], and bvd_sign_flag[compIdx] syntax elements, respectively, and may indicate information about the component input by compIdx.
  • the block vector difference (BVD) value of the component finally input by the above syntax elements may be derived, for example, as in Equation 2.
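  • Equation 2 is likewise not reproduced in this excerpt; the sketch below follows the semantics stated above (parity_flag = |BVD| % 2, abs_bvd_minus3 = the quotient of (|BVD| - 3) / 2) and shows one self-consistent forward mapping and its inverse, as an assumption rather than the exact Equation 2 (the sign is applied afterwards via bvd_sign_flag, as in Equation 1):

```python
def abs_bvd_to_syntax(abs_bvd: int):
    """Encoder-side mapping of |BVD| to the level-related syntax elements."""
    gt0 = 1 if abs_bvd > 0 else 0
    parity = abs_bvd % 2 if gt0 else 0
    gt2 = 1 if abs_bvd > 2 else 0
    minus3 = (abs_bvd - 3) // 2 if gt2 else -1   # inferred as -1 when not present
    return gt0, parity, gt2, minus3

def syntax_to_abs_bvd(gt0: int, parity: int, gt2: int, minus3: int) -> int:
    """Decoder-side inverse of the mapping above."""
    if gt0 == 0:
        return 0
    if gt2 == 0:
        return 2 - parity          # |BVD| is 1 (odd) or 2 (even)
    base = 2 * minus3 + 3          # odd candidate value
    return base if parity == 1 else base + 1
```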
  • a binarization method such as Table 10 may be used.
  • the abs_bvd_greater0_flag syntax element, parity_flag syntax element, abs_bvd_greater2_flag syntax element, and bvd_sign_flag syntax element can be binarized by the Fixed-Length (FL) method, and the abs_bvd_minus3 syntax element can be binarized by Exp-Golomb 3rd order (EG3).
  • the abs_bvd_minus3 syntax element may be binarized by Exp-Golomb k-order (EGk), and k may have an integer value such as 0, 1, 2, 3 or 4.
  • the BVD coding syntax may include Table 11, for example.
  • the abs_bvd_greater0_flag syntax element may indicate information on whether the absolute value of the block vector difference of the input component is greater than 0. For example, when the abs_bvd_greater0_flag syntax element is 1, it may indicate that the absolute value of the block vector difference of the input component is greater than 0, and when it is 0, it may indicate that it is not greater than 0, that is, that it is 0.
  • the parity_flag syntax element may indicate information on the remainder when the absolute value of the block vector difference of the input component is divided by 2. For example, when the parity_flag syntax element is 0, it may indicate that the remainder is 0, or that the absolute value of the block vector difference is even, and when it is 1, it may indicate that the remainder is 1, or that the absolute value of the block vector difference is odd.
  • the parity_flag syntax element may indicate information indicating odd or even number of absolute values of block vector differences. If the parity_flag syntax element does not exist, it may be implied that it is 0. Also, the abs_bvd_greater2_flag syntax element may indicate information about whether the magnitude of the absolute value of the block vector difference of the input component is greater than 2. For example, when the abs_bvd_greater2_flag syntax element is 1, it may indicate that the magnitude of the absolute value of the block vector difference of the input component is greater than 2, and if it is 0, it may indicate that it is not greater than 2. If the abs_bvd_greater2_flag syntax element does not exist, it may be implied that it is zero.
  • an abs_bvd_unary syntax element and an abs_bvd_flc syntax element may be additionally transmitted for the level value, or the abs_bvd_unary syntax element and the abs_bvd_flc syntax element may be further included in the BVD coding syntax.
  • alternatively, when the abs_bvd_greater0_flag syntax element is 1 and the abs_bvd_greater2_flag syntax element is 1, an abs_bvd_unary syntax element and an abs_bvd_flc syntax element may be additionally transmitted, or they may be further included in the BVD coding syntax.
  • the abs_bvd_unary syntax element and the abs_bvd_flc syntax element may indicate information about a quotient of a value obtained by dividing the -3 value of the absolute value of the block vector difference of the input component by 2.
  • the abs_bvd_unary syntax element may indicate information about a value obtained by dividing the quotient by the number of symbols that can be expressed by bits used in FLC, in unary.
  • for example, a 3-bit FLC (FLC-3) can represent 8 symbols, and one unary value can be assigned per 8 symbols. That is, the abs_bvd_unary syntax element may represent information about the value obtained by dividing the quotient of (the absolute value of the block vector difference of the input component minus 3) divided by 2, again by 8.
  • the abs_bvd_flc syntax element may represent information on a value derived by a modular operation 8 (mod 8) of a quotient of a value obtained by dividing a ⁇ 3 value of an absolute value of a block vector difference of an input component by 2.
  • the modular operation (mod) is a remainder operation
  • the k modular operation or the modular operation k (mod k) may represent an operation for deriving the remainder of a value divided by k.
  • the abs_bvd_flc syntax element may indicate information about the remainder obtained when the quotient of (the absolute value of the block vector difference of the input component minus 3) divided by 2 is again divided by 8, as in the sketch below.
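  • As a short sketch of the unary/FLC split described above (assuming a 3-bit FLC, i.e. 8 symbols; the corresponding tables are not reproduced in this excerpt):

```python
def split_remaining_level(abs_bvd: int, flc_bits: int = 3):
    """Split the remaining level V = (|BVD| - 3) // 2 into a unary-coded quotient
    (abs_bvd_unary) and a fixed-length remainder (abs_bvd_flc), assuming |BVD| > 2."""
    v = (abs_bvd - 3) // 2
    symbols = 1 << flc_bits              # 8 symbols for a 3-bit FLC
    return v // symbols, v % symbols     # (abs_bvd_unary, abs_bvd_flc)
```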
  • the bvd_sign_flag syntax element may indicate information about the sign of the block vector difference of the input component. For example, when the bvd_sign_flag syntax element is 0, the sign of the block vector difference of the input component may indicate information that is positive, and when it is 1, it may indicate information that is negative.
  • the abs_bvd_greater0_flag syntax element, parity_flag syntax element, abs_bvd_greater2_flag syntax element, abs_bvd_unary syntax element, abs_bvd_flc syntax element, and bvd_sign_flag syntax element may be represented as abs_bvd_greater0_flag[compIdx], parity_flag[compIdx], abs_bvd_greater2_flag[compIdx], abs_bvd_unary[compIdx], abs_bvd_flc[compIdx], and bvd_sign_flag[compIdx] syntax elements, respectively, and may indicate information about the component input by compIdx.
  • the block vector difference (BVD) value of the component finally input by the above-described syntax elements may be derived, for example, as shown in Equation (3).
  • a binarization method such as Table 12 can be used.
  • the abs_bvd_greater0_flag syntax element, parity_flag syntax element, abs_bvd_greater2_flag syntax element, abs_bvd_flc syntax element, and bvd_sign_flag syntax element can be binarized by the Fixed-Length (FL) method, and the abs_bvd_unary syntax element can be binarized by Truncated Unary (TU).
  • 3 bits may be used as fixed bits for the abs_bvd_flc syntax element, but fixed bits such as 1, 2, 3, 5, or 6 may be used.
  • fixed bits of the x component and the y component may be used differently.
  • the x component may use 3 bits
  • the y component may use 4 bits.
  • the BVD coding syntax may include Table 13, for example.
  • the abs_bvd_greater0_flag syntax element may indicate information on whether the absolute value of the block vector difference of the input component is greater than 0. For example, when the abs_bvd_greater0_flag syntax element is 1, it may indicate that the absolute value of the block vector difference of the input component is greater than 0, and when it is 0, it may indicate that it is not greater than 0, that is, that it is 0.
  • the parity_flag syntax element and abs_bvd_greater10_flag syntax element may be transmitted.
  • the parity_flag syntax element and the abs_bvd_greater10_flag syntax element may be further included in the BVD coding syntax.
  • the parity_flag syntax element may indicate information on the remainder when the absolute value of the block vector difference of the input component is divided by 2. For example, when the parity_flag syntax element is 0, it may indicate that the remainder is 0, or that the absolute value of the block vector difference is even, and when it is 1, it may indicate that the remainder is 1, or that the absolute value of the block vector difference is odd.
  • the parity_flag syntax element may indicate information indicating odd or even number of absolute values of block vector differences. If the parity_flag syntax element does not exist, it may be implied that it is 0. Also, the abs_bvd_greater10_flag syntax element may indicate information on whether the magnitude of the absolute value of the block vector difference of the input component is greater than 10. For example, when the abs_bvd_greater10_flag syntax element is 1, it may indicate that the magnitude of the absolute value of the block vector difference of the input component is greater than 10, and if it is 0, it may indicate that it is not greater than 10. If the abs_bvd_greater10_flag syntax element does not exist, it may be implied that it is zero.
  • an abs_bvd_unary0 syntax element may be additionally transmitted for a level value.
  • the abs_bvd_unary0 syntax element may be further included in the BVD coding syntax.
  • when the abs_bvd_greater0_flag syntax element is 1 and the abs_bvd_greater10_flag syntax element is 0, the abs_bvd_unary0 syntax element may be transmitted.
  • the abs_bvd_unary0 syntax element may be further included in the BVD coding syntax.
  • the abs_bvd_unary0 syntax element may be transmitted to represent one of values between 1 and 10 when the absolute value of the block vector difference of the input component is less than 11. That is, information on one of values between 1 and 10 may be indicated.
  • the abs_bvd_unary0 syntax element may indicate information about the quotient of a value obtained by subtracting 1 from the absolute value of the block vector difference of the input component and dividing by 2, and may be binarized with TU with a maximum value of 4, as in the sketch below.
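  • A small illustration of this mapping, under the semantics stated above (a hypothetical helper, not part of any table of this document):

```python
def abs_bvd_unary0_value(abs_bvd: int) -> int:
    """For 1 <= |BVD| <= 10, abs_bvd_unary0 carries the quotient of (|BVD| - 1) / 2;
    together with parity_flag this identifies the level. The resulting values 0..4
    fit a TU binarization with a maximum value of 4."""
    assert 1 <= abs_bvd <= 10
    return (abs_bvd - 1) // 2
```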
  • an abs_bvd_flc syntax element or abs_bvd_exceed syntax element may be transmitted together with the abs_bvd_unary1 syntax element for a level value greater than 10.
  • the abs_bvd_flc syntax element or abs_bvd_exceed syntax element may be further included in the BVD coding syntax together with the abs_bvd_unary1 syntax element.
  • abs_bvd_unary1 syntax element and the abs_bvd_flc syntax element may be included in the BVD coding syntax, or the abs_bvd_unary1 syntax element and the abs_bvd_exceed syntax element may be included in the BVD coding syntax.
  • the abs_bvd_unary1 syntax element may represent information about a value obtained by dividing the quotient of (the absolute value of the block vector difference of the input component minus 11) divided by 2, again by the number of symbols that can be represented by the bits used in the FLC.
  • for example, a 3-bit FLC (FLC-3) can represent 8 symbols, and one unary value can be assigned per 8 symbols. That is, the abs_bvd_unary1 syntax element may represent information about the value obtained by dividing the quotient of (the absolute value of the block vector difference of the input component minus 11) divided by 2, again by 8.
  • the abs_bvd_flc syntax element may represent information on a value derived by a modular operation 8 (mod 8) of a quotient of a value obtained by dividing a -11 value of an absolute value of a block vector difference of an input component by 2.
  • the modular operation (mod) is a remainder operation, and may be the same as the modular operation described in the fourth embodiment.
  • the abs_bvd_flc syntax element may represent information about the remainder obtained when the quotient of (the absolute value of the block vector difference of the input component minus 11) divided by 2 is again divided by 8.
  • as an exception, when the abs_bvd_unary1 syntax element is 32, the abs_bvd_flc syntax element is not transmitted and the abs_bvd_exceed syntax element can be transmitted.
  • that is, instead of the abs_bvd_flc syntax element, an abs_bvd_exceed syntax element may be included in the BVD coding syntax.
  • the abs_bvd_exceed syntax element may indicate information about the exceeded value when the absolute value of the block vector difference exceeds a range that can be expressed based on the above-described syntax elements or abs_bvd_unary1 syntax element.
  • the abs_bvd_exceed syntax element may be binarized by Exp_Golomb k order, and k may have an integer value of 1 or more and 10 or less.
  • the bvd_sign_flag syntax element may indicate information about the sign of the block vector difference of the input component. For example, when the bvd_sign_flag syntax element is 0, the sign of the block vector difference of the input component may indicate information that is positive, and when it is 1, it may indicate information that is negative.
  • the abs_bvd_greater0_flag syntax element, parity_flag syntax element, abs_bvd_greater10_flag syntax element, abs_bvd_unary0 syntax element, abs_bvd_unary1 syntax element, abs_bvd_flc syntax element, abs_bvd_exceed syntax element, and bvd_sign_flag syntax element may be represented as abs_bvd_greater0_flag[compIdx], parity_flag[compIdx], abs_bvd_greater10_flag[compIdx], abs_bvd_unary0[compIdx], abs_bvd_unary1[compIdx], abs_bvd_flc[compIdx], abs_bvd_exceed[compIdx], and bvd_sign_flag[compIdx] syntax elements, respectively, and may indicate information about the component input by compIdx.
  • the block vector difference (BVD) value of the component finally input by the above-described syntax elements may be derived.
  • a binarization method such as Table 14 may be used.
  • the abs_bvd_greater0_flag syntax element, parity_flag syntax element, abs_bvd_greater10_flag syntax element, abs_bvd_flc syntax element, and bvd_sign_flag syntax element can be binarized by the Fixed-Length (FL) method, and the abs_bvd_unary0 syntax element and the abs_bvd_unary1 syntax element can be binarized by Truncated Unary (TU).
  • abs_bvd_exceed syntax element can be binarized by Exp-Golomb primary (EG1).
  • 3 bits may be used as fixed bits for the abs_bvd_flc syntax element, but fixed bits such as 1, 2, 3, 5, or 6 may be used. Also, fixed bits of the x component and the y component may be used differently.
  • FL (Fixed-Length) binarization may indicate a method of binarizing to a fixed length, such as a specific number of bits, and the specific number of bits may be predefined or may be expressed based on cMax.
  • TU (Truncated Unary) binarization may indicate a method of binarizing to a variable length using as many 1s as the value of the symbol to be expressed followed by a single 0, where the 0 is not added when the symbol value is equal to the maximum length, and the maximum length can be expressed based on cMax.
  • TR (Truncated Rice) binarization may represent a method of binarizing a prefix and a suffix as TU + FL, using maximum length and shift information, and may be the same as TU when the shift information has a value of 0.
  • the maximum length may be indicated based on cMax
  • the shift information may be indicated based on cRiceParam.
  • EGk (k-th order Exp-Golomb) binarization may indicate a method of binarizing to a variable length in which the bin string consists of a prefix part and a suffix part, and the length of the suffix part is determined by the number of bins in the prefix part and the order k.
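  • The FL and TU methods described above can be sketched as follows (cMax handling is simplified for illustration):

```python
def fl_bins(value: int, c_max: int) -> list:
    """Fixed-Length binarization: a fixed number of bins determined by cMax."""
    n = max(1, c_max.bit_length())
    return [(value >> i) & 1 for i in range(n - 1, -1, -1)]

def tu_bins(value: int, c_max: int) -> list:
    """Truncated Unary binarization: 'value' ones followed by a terminating zero,
    which is omitted when value equals cMax."""
    bins = [1] * value
    if value < c_max:
        bins.append(0)
    return bins
```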
  • the x component may represent a horizontal component
  • the y component may represent a vertical component
  • FIGS. 7 and 8 schematically show an example of a video/image encoding method and related components according to embodiment(s) of the present document.
  • the method disclosed in FIG. 7 may be performed by the encoding apparatus disclosed in FIG. 2.
  • S700 to S730 of FIG. 7 may be performed by the prediction unit 220 of the encoding apparatus in FIG. 8
  • S740 of FIG. 7 may be performed by the residual processing unit 230 of the encoding apparatus in FIG. 8
  • S750 of FIG. 7 may be performed by the entropy encoding unit 240 of the encoding apparatus in FIG. 8.
  • the method disclosed in FIG. 7 may include the embodiments described above in this document.
  • the encoding device may derive a prediction mode of a current block as an intra block copy (IBC) prediction mode (S700 ).
  • the IBC prediction mode may indicate a prediction mode that performs Current Picture Referencing (CPR). That is, the IBC prediction mode may represent a prediction mode that performs motion compensation within a current picture including a current block.
  • when the prediction mode of the current block is an inter prediction mode, motion vectors and motion vector differences may be derived based on a reference picture different from the current picture in order to perform prediction.
  • when the prediction mode of the current block is the IBC prediction mode, block vectors and block vector differences may be derived based on the current picture in order to perform prediction.
  • the encoding device may derive a block vector for the current block (S710).
  • the encoding apparatus may find a block most similar to the current block in a predetermined region among regions in the current picture coded so far, and may derive a block vector based on the most similar block and the current block.
  • the block vector may represent information from the current block to the most similar block.
  • the most similar block can be used as a reference block.
  • the predetermined region may include a current largest coding unit including a current coding unit or a current coding tree unit (CTU).
  • the predetermined region may include a maximum coding unit to the right by +1 or +2 based on the current maximum coding unit.
  • the encoding device may derive a block vector difference based on the block vector (S720).
  • the block vector difference can be derived based on the current block and block vector.
  • the block vector difference may be derived based on the block vector predictor and the block vector of the current block.
  • the block vector predictor may be derived from neighboring blocks of the current block.
  • the block vector difference may be derived based on the pre-coded block (or the most similar block) and the current block located in the current picture including the current block.
  • the encoding apparatus may generate prediction samples of the current block based on reconstructed samples in the current picture indicated by the block vector (S730). For example, the encoding apparatus may generate the prediction samples (or the predicted block) using the reconstructed samples (or reconstructed blocks) in the current picture as reference samples (or reference blocks). For example, reconstructed samples may represent pre-decoded or reconstructed samples in the current picture, and a relative position from the current block may be determined by the block vector.
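  • As an illustration, a minimal sketch of generating the prediction samples by copying already-reconstructed samples of the current picture at the position displaced by the block vector (array layout and bounds handling are assumptions):

```python
import numpy as np

def ibc_prediction(recon: np.ndarray, x0: int, y0: int,
                   bv_x: int, bv_y: int, width: int, height: int) -> np.ndarray:
    """Copy a width x height block of reconstructed samples of the current picture,
    displaced from (x0, y0) by the block vector (bv_x, bv_y). Assumes the block
    vector has been restricted so the referenced region is already reconstructed."""
    ref_x, ref_y = x0 + bv_x, y0 + bv_y
    return recon[ref_y:ref_y + height, ref_x:ref_x + width].copy()
```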
  • the encoding apparatus may derive residual samples based on the prediction samples (S740). For example, the encoding apparatus may derive residual samples (or a residual block) for the current block based on the original samples and the prediction samples (or predicted block) for the current block. In addition, although not illustrated, the encoding apparatus may generate reconstructed samples (or a reconstructed block) by adding the residual samples (or residual block) to the prediction samples (or predicted block).
  • the encoding apparatus may encode image information including prediction mode information on IBC prediction mode, information on block vector difference, and information on residual samples (S750).
  • the encoding device may generate information about the residual including information about the residual samples (or residual sample array), and the image information may include information about the residual.
  • the information on the residual samples or the information on the residual may include information on the transform coefficients for the residual samples.
  • the encoding device may generate prediction mode information regarding the IBC prediction mode of the current block, and the image information may include the prediction mode information.
  • prediction mode information may include an IBC flag.
  • the prediction mode information may include information on a picture order count (POC) of a current picture for comparison and information on a POC of a reference (target) picture.
  • the prediction mode information may include information for performing motion compensation based on the current picture.
  • the prediction mode information may include various information related to prediction of the current block.
  • the encoding device may generate information about the block vector difference of the current block, and the image information may include information about the block vector difference.
  • information regarding the block vector difference may be included in a BVD coding syntax, and the image information may include the BVD coding syntax.
  • the information on the block vector difference may be binarized through at least one binarization method, and information on the binary block vector difference may be obtained from the bitstream.
  • a bin string for the information about the binarized block vector difference may be derived from the bitstream, and the information regarding the block vector difference may be obtained from the bin string.
  • context-coded bins may be obtained from a bitstream, and information regarding a binary block vector difference may be derived from context-coded bins.
  • information about binarized block vector differences can be derived from coded bins based on a context model.
  • the context model may be classified or derived based on ctxInc (context index increment).
  • the context model may be classified or derived based on ctxInc and ctxIdxOffset (context index offset).
  • the context model may be indicated based on ctxInc or ctxIdx (context index).
  • ctxIdx may be derived based on ctxInc and ctxIdxOffset.
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference.
  • a syntax element for whether the absolute value of the block vector difference is greater than 0 may be represented by an abs_bvd_greater0_flag syntax element, and a syntax element for a sign of the block vector difference may be represented by a bvd_sign_flag syntax element.
  • a syntax element for whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference can be derived through FL (Fixed-Length) based binarization.
  • the information about the block vector difference may further include a syntax element for whether the absolute value of the block vector difference is greater than 1 and a syntax element for a -2 value of the absolute value of the block vector difference.
  • a syntax element for whether the absolute value of the block vector difference is greater than 1 may represent an abs_bvd_greater1_flag syntax element
  • a syntax element for a -2 value of the absolute value of the block vector difference may represent an abs_bvd_minus2 syntax element.
  • a syntax element for whether the absolute value of the block vector difference is greater than 1 can be derived through FL-based binarization, and the syntax element for a -2 value of the absolute value of the block vector difference is Exp-Golomb It can be derived through base binarization.
  • the syntax element for the -2 value of the absolute value of the block vector difference can be derived through EG3 (3rd order Exp-Golomb) based binarization.
  • information regarding the block vector difference may be included in the BVD coding syntax as shown in Table 1 or Table 4, and the block vector difference may be derived through Equation 1.
  • a syntax element for whether the absolute value of the block vector difference is greater than 0 and a syntax element for whether the absolute value of the block vector difference is greater than 1 can be derived based on a first context model, and
  • the first context model can be distinguished from the second context model related to motion vector difference. That is, the first context model for IBC prediction may be distinguished from the second context model for inter prediction.
  • a context model may be classified or derived based on context index increment (ctxInc), and the first context model and the second context model may be classified or derived based on ctxInc separated from each other.
  • for example, ctxInc may be 0 for the first bin of the syntax element for whether the absolute value of the block vector difference is greater than 0, and ctxInc may also be 0 for the first bin of the syntax element for whether the absolute value of the block vector difference is greater than 1.
  • since the syntax element for whether the absolute value of the block vector difference is greater than 0 and the syntax element for whether the absolute value of the block vector difference is greater than 1 can each be represented by only 1 bit, ctxInc may not be used from the second bin onward.
  • the syntax element for whether the absolute value of the block vector difference is greater than 0 may include a syntax element for whether the x component of the absolute value of the block vector difference is greater than 0 and a syntax element for whether the y component of the absolute value of the block vector difference is greater than 0, and the syntax element for whether the absolute value of the block vector difference is greater than 1 may include a syntax element for whether the x component of the absolute value of the block vector difference is greater than 1 and a syntax element for whether the y component of the absolute value of the block vector difference is greater than 1.
  • a syntax element for whether the x component of the absolute value of the block vector difference is greater than 0 can be derived based on a first context model, a syntax element for whether the y component of the absolute value of the block vector difference is greater than 0 can be derived based on a second context model, a syntax element for whether the x component of the absolute value of the block vector difference is greater than 1 can be derived based on a third context model, and a syntax element for whether the y component of the absolute value of the block vector difference is greater than 1 can be derived based on a fourth context model, and the first to fourth context models may be different from or distinguished from each other.
  • the first to fourth context models may be classified or derived based on ctxInc separated from each other.
  • ctxInc for a horizontal component (or x component) and ctxInc for a vertical component (or y component) may be classified, and thus different context models may be classified or derived. .
  • alternatively, the information regarding the block vector difference may further include a syntax element for the remainder of the absolute value of the block vector difference divided by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, and a syntax element for a -3 value of the absolute value of the block vector difference.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may indicate a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 2 indicates an abs_bvd_greater2_flag syntax element.
  • the syntax element for the -3 value of the absolute value of the block vector difference may represent the abs_bvd_minus3 syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 and a syntax element for whether the absolute value of the block vector difference is greater than 2 can be derived through FL-based binarization, and
  • the syntax element for the -3 value of the absolute value of the block vector difference can be derived through Exp-Golomb based binarization.
  • the syntax element for the -3 value of the absolute value of the block vector difference may be derived through EG3 (3rd order Exp-Golomb) based binarization.
  • information about the block vector difference may be included in the BVD coding syntax as shown in Table 9, and the block vector difference may be derived through Equation (2).
  • alternatively, the information about the block vector difference may further include a syntax element for the remainder of the absolute value of the block vector difference divided by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, an abs_bvd_unary syntax element, and an abs_bvd_flc syntax element.
  • the abs_bvd_unary syntax element may indicate information about the value obtained by dividing the quotient of (the absolute value of the block vector difference minus 3) divided by 2, again by the number of representable symbols, and the abs_bvd_flc syntax element may represent information about the value derived by a modular operation of that quotient by the number of symbols.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may indicate a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 2 indicates an abs_bvd_greater2_flag syntax element.
  • the syntax element for the remainder of the absolute value of the block vector difference divided by 2, the syntax element for whether the absolute value of the block vector difference is greater than 2, and the abs_bvd_flc syntax element can be derived through FL-based binarization, and the abs_bvd_unary syntax element may be derived through TU (Truncated Unary) based binarization.
  • information about the block vector difference may be included in the BVD coding syntax as shown in Table 11, and the block vector difference may be derived through Equation (3).
  • the information about the block vector difference further includes a syntax element for the remainder when the absolute value of the block vector difference is divided by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and an abs_bvd_unary0 syntax element.
  • the abs_bvd_unary0 syntax element may indicate information about a quotient of a value obtained by dividing a -1 value of an absolute value of the block vector difference by 2.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may indicate a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 10 indicates an abs_bvd_greater10_flag syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 and a syntax element for whether the absolute value of the block vector difference is greater than 10 can be derived through FL-based binarization
  • the abs_bvd_unary0 syntax element may be derived through TU (Truncated Unary) based binarization.
  • information about the block vector difference can be included in the BVD coding syntax as shown in Table 13, and the block vector difference can be derived through these syntax elements.
  • the information about the block vector difference further includes a syntax element for the remainder when the absolute value of the block vector difference is divided by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and an abs_bvd_unary1 syntax element. It may include, the information about the block vector difference, if the value of the abs_bvd_unary1 syntax element is less than 32, may further include the abs_bvd_flc syntax element, if the abs_bvd_unary1 syntax element value is 32, the abs_bvd_exceed syntax element It may further include.
  • the abs_bvd_unary1 syntax element may indicate information about the value obtained by dividing the quotient of (the absolute value minus 11) divided by 2, again by the number of representable symbols, the abs_bvd_flc syntax element may indicate information about the value derived by a modular operation of that quotient by the number of symbols, and the abs_bvd_exceed syntax element may represent information about a value exceeding the range that can be expressed based on the abs_bvd_unary1 syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may indicate a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 10 indicates an abs_bvd_greater10_flag syntax element.
  • the syntax element for the remainder of the absolute value of the block vector difference divided by 2, the syntax element for whether the absolute value of the block vector difference is greater than 10, and the abs_bvd_flc syntax element can be derived through FL-based binarization.
  • the abs_bvd_unary1 syntax element may be derived through TU (Truncated Unary) based binarization, and the abs_bvd_exceed syntax element may be derived through Exp-Golomb based binarization.
  • information about the block vector difference can be included in the BVD coding syntax as shown in Table 13, and the block vector difference can be derived through these syntax elements.
  • the encoding apparatus may generate a bitstream by encoding video information including all or part of the above-described information (or syntax elements). Or, it can be output in the form of a bitstream.
  • the bitstream may be transmitted to a decoding device through a network or storage medium. Alternatively, the bitstream can be stored on a computer-readable storage medium.
  • FIGS. 9 and 10 schematically show an example of a video/image decoding method and related components according to embodiment(s) of the present document.
  • the method disclosed in FIG. 9 may be performed by the decoding apparatus disclosed in FIG. 3. Specifically, for example, S900 of FIG. 9 may be performed by the entropy decoding unit 310 of the decoding apparatus in FIG. 10, and S910 to S940 of FIG. 9 may be performed by the prediction unit 330 of the decoding apparatus in FIG. 10. In addition, although not shown in FIG. 9, residual information may be obtained from the bitstream by the entropy decoding unit 310 of the decoding apparatus in FIG. 10, residual samples may be derived based on the residual information by the residual processing unit 320, and the adder 340 may generate reconstructed samples (or reconstructed blocks) based on the prediction samples and the residual samples.
  • the method disclosed in FIG. 9 may include the embodiments described above in this document.
  • the decoding apparatus may derive prediction mode information and block vector difference information for a current block from a bitstream (S900). Or, the decoding apparatus may (entropy) decode the bitstream to derive information regarding prediction mode information and block vector difference.
  • prediction mode information may include an IBC flag.
  • the prediction mode information may include information about a picture order count (POC) of a current picture and information about a POC of a reference (target) picture.
  • the prediction mode information may include information for performing motion compensation based on the current picture.
  • the prediction mode information may include various information related to prediction of the current block.
  • the decoding apparatus may obtain BVD coding syntax from the bitstream, and information regarding the block vector difference may be included in the BVD coding syntax.
  • the information on the block vector difference may be binarized through at least one binarization method, and information on the binary block vector difference may be obtained from the bitstream.
  • a bin string for the information about the binarized block vector difference may be derived from the bitstream, and the information regarding the block vector difference may be obtained from the bin string.
  • context-coded bins may be obtained from a bitstream, and information regarding a binary block vector difference may be derived from context-coded bins.
  • information about binarized block vector differences can be derived from coded bins based on a context model. The context model may be classified or derived based on ctxInc (context index increment).
  • the context model may be classified or derived based on ctxInc and ctxIdxOffset (context index offset).
  • the context model may be indicated based on ctxInc or ctxIdx (context index).
  • ctxIdx may be derived based on ctxInc and ctxIdxOffset.
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference.
  • a syntax element for whether the absolute value of the block vector difference is greater than 0 may be represented by an abs_bvd_greater0_flag syntax element, and a syntax element for a sign of the block vector difference may be represented by a bvd_sign_flag syntax element.
  • a syntax element for whether the absolute value of the block vector difference is greater than 0 and a syntax element for the sign of the block vector difference can be derived through FL (Fixed-Length) based binarization.
  • the information about the block vector difference may further include a syntax element for whether the absolute value of the block vector difference is greater than 1 and a syntax element for a -2 value of the absolute value of the block vector difference.
  • a syntax element for whether the absolute value of the block vector difference is greater than 1 may represent an abs_bvd_greater1_flag syntax element
  • a syntax element for a -2 value of the absolute value of the block vector difference may represent an abs_bvd_minus2 syntax element.
  • a syntax element for whether the absolute value of the block vector difference is greater than 1 can be derived through FL-based binarization, and the syntax element for a -2 value of the absolute value of the block vector difference is Exp-Golomb It can be derived through base binarization.
  • the syntax element for the -2 value of the absolute value of the block vector difference can be derived through EG3 (3rd order Exp-Golomb) based binarization.
  • information regarding the block vector difference may be included in the BVD coding syntax as shown in Table 1 or Table 4, and the block vector difference may be derived through Equation 1.
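For illustration only, the sketch below reconstructs one BVD component from the four syntax elements named above, assuming that Equation 1 follows the conventional MVD-style composition (an absolute value of 1 when only the greater-than-0 flag is set, and abs_bvd_minus2 + 2 otherwise); the normative definition remains the one given by Equation 1 and the referenced tables.

```python
# Hedged sketch: reconstructing one BVD component from the parsed syntax
# elements, assuming an MVD-like composition rule. Equation 1 is assumed to
# have this form; the normative definition is in the referenced table.

def reconstruct_bvd_component(abs_bvd_greater0_flag: int,
                              abs_bvd_greater1_flag: int,
                              abs_bvd_minus2: int,
                              bvd_sign_flag: int) -> int:
    if abs_bvd_greater0_flag == 0:
        return 0                      # absolute value is 0
    if abs_bvd_greater1_flag == 0:
        abs_bvd = 1                   # absolute value is exactly 1
    else:
        abs_bvd = abs_bvd_minus2 + 2  # absolute value is 2 or larger
    return abs_bvd * (1 - 2 * bvd_sign_flag)  # apply the sign

# Example: greater0=1, greater1=1, minus2=3, sign=1  ->  -(3 + 2) = -5
print(reconstruct_bvd_component(1, 1, 3, 1))  # -> -5
```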
  • a syntax element for whether the absolute value of the block vector difference is greater than 0 and a syntax element for whether the absolute value of the block vector difference is greater than 1 may be derived based on a first context model, and the first context model may be distinguished from a second context model related to the motion vector difference. That is, the first context model for IBC prediction may be distinguished from the second context model for inter prediction.
  • a context model may be classified or derived based on a context index increment (ctxInc), and the first context model and the second context model may be classified or derived based on separate ctxInc values.
  • ctxInc for the first bin of the syntax element indicating whether the absolute value of the block vector difference is greater than 0 may be 0, and ctxInc for the first bin of the syntax element indicating whether the absolute value of the block vector difference is greater than 1 may also be 0.
  • since the syntax element for whether the absolute value of the block vector difference is greater than 0 and the syntax element for whether the absolute value of the block vector difference is greater than 1 may each be represented by only 1 bit, ctxInc may not be used from the second bin onward.
  • the syntax element for whether the absolute value of the block vector difference is greater than 0 may include a syntax element for whether the x component of the absolute value of the block vector difference is greater than 0 and a syntax element for whether the y component of the absolute value of the block vector difference is greater than 0, and the syntax element for whether the absolute value of the block vector difference is greater than 1 may include a syntax element for whether the x component of the absolute value of the block vector difference is greater than 1 and a syntax element for whether the y component of the absolute value of the block vector difference is greater than 1.
  • a syntax element for whether the x component of the absolute value of the block vector difference is greater than 0 may be derived based on a first context model, a syntax element for whether the y component of the absolute value of the block vector difference is greater than 0 may be derived based on a second context model, a syntax element for whether the x component of the absolute value of the block vector difference is greater than 1 may be derived based on a third context model, and a syntax element for whether the y component of the absolute value of the block vector difference is greater than 1 may be derived based on a fourth context model; the first to fourth context models may be different from each other or distinguished from each other.
  • the first to fourth context models may be classified or derived based on separate ctxInc values.
  • ctxInc for the horizontal component (or x component) and ctxInc for the vertical component (or y component) may be separated, and thus different context models may be classified or derived.
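Purely as an illustration of keeping the IBC and inter context models apart, and the x and y components apart, the sketch below assigns each flag its own ctxInc; the numeric values and table layout are assumptions of the sketch, not the ctxInc assignments of the disclosed embodiments.

```python
# Illustrative sketch only: assigning separate ctxInc values so that the IBC
# (BVD) flags and the inter (MVD) flags, and the x/y components, never share
# a context model. All numeric values here are assumptions.

BVD_CTX_INC = {
    ("greater0", "x"): 0,
    ("greater0", "y"): 1,
    ("greater1", "x"): 2,
    ("greater1", "y"): 3,
}
MVD_CTX_INC = {  # kept disjoint from the BVD table
    ("greater0", "x"): 4,
    ("greater0", "y"): 5,
    ("greater1", "x"): 6,
    ("greater1", "y"): 7,
}

def select_ctx_inc(is_ibc: bool, flag: str, comp: str) -> int:
    table = BVD_CTX_INC if is_ibc else MVD_CTX_INC
    return table[(flag, comp)]

print(select_ctx_inc(True, "greater1", "y"))   # -> 3 (the "fourth" BVD model)
print(select_ctx_inc(False, "greater1", "y"))  # -> 7 (inter uses its own model)
```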
  • the information regarding the block vector difference may further include a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, and a syntax element for the -3 value of the absolute value of the block vector difference.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may be represented by a parity_flag syntax element, a syntax element for whether the absolute value of the block vector difference is greater than 2 may be represented by an abs_bvd_greater2_flag syntax element, and a syntax element for the -3 value of the absolute value of the block vector difference may be represented by an abs_bvd_minus3 syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 and a syntax element for whether the absolute value of the block vector difference is greater than 2 may be derived through FL-based binarization, and the syntax element for the -3 value of the absolute value of the block vector difference may be derived through Exp-Golomb based binarization.
  • the syntax element for the -3 value of the absolute value of the block vector difference may be derived through EG3 (3rd order Exp-Golomb) based binarization.
  • information about the block vector difference may be included in the BVD coding syntax as shown in Table 9, and the block vector difference may be derived through Equation (2).
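Because several syntax elements above are described as EG3 (3rd order Exp-Golomb) binarized, the following generic order-k Exp-Golomb encoder/decoder sketch is included to make that binarization concrete; it is a standard construction shown for illustration and is not code taken from this document.

```python
# Generic order-k Exp-Golomb binarization (k = 3 gives EG3).
# Standard construction, shown only to illustrate the binarization named above.

def egk_encode(value: int, k: int = 3) -> str:
    """Return the order-k Exp-Golomb bin string for a non-negative value."""
    bits = ""
    while value >= (1 << k):
        bits += "1"          # unary prefix bit
        value -= (1 << k)
        k += 1
    bits += "0"              # prefix terminator
    if k > 0:
        bits += format(value, f"0{k}b")  # k-bit suffix
    return bits

def egk_decode(bits: str, k: int = 3) -> int:
    """Decode an order-k Exp-Golomb bin string back to its value."""
    pos = 0
    value = 0
    while bits[pos] == "1":
        value += (1 << k)
        k += 1
        pos += 1
    pos += 1                 # consume the terminating '0'
    if k > 0:
        value += int(bits[pos:pos + k], 2)
    return value

for v in (0, 1, 7, 8, 25):
    s = egk_encode(v, 3)
    assert egk_decode(s, 3) == v
    print(v, "->", s)        # e.g. 8 -> 100000, 25 -> 11000001
```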
  • the information about the block vector difference may further include a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, an abs_bvd_unary syntax element, and an abs_bvd_flc syntax element.
  • the abs_bvd_unary syntax element may indicate information about the quotient obtained by dividing the -3 value of the absolute value of the block vector difference by 2 and then dividing that quotient by the number of representable symbols, and the abs_bvd_flc syntax element may indicate information about the value derived through a modulo operation between the quotient of the -3 value of the absolute value of the block vector difference divided by 2 and the number of representable symbols.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may be represented by a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 2 may be represented by an abs_bvd_greater2_flag syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, and the abs_bvd_flc syntax element may be derived through FL-based binarization, and the abs_bvd_unary syntax element may be derived through TU (Truncated Unary) based binarization.
  • information about the block vector difference may be included in the BVD coding syntax as shown in Table 11, and the block vector difference may be derived through Equation (3).
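As a concrete illustration of the TU (Truncated Unary) binarization mentioned for the abs_bvd_unary syntax element, a minimal encoder/decoder sketch follows; cMax is an assumed parameter of the sketch and its value is not fixed by the text above.

```python
# Minimal Truncated Unary (TU) binarization sketch. cMax is an assumed
# upper bound; the document does not fix its value here.

def tu_encode(value: int, c_max: int) -> str:
    """value in [0, c_max]: 'value' ones, then a terminating zero
    unless value == c_max (the terminating zero is truncated)."""
    bits = "1" * value
    if value < c_max:
        bits += "0"
    return bits

def tu_decode(bits: str, c_max: int) -> int:
    value = 0
    while value < c_max and bits[value] == "1":
        value += 1
    return value

for v in range(0, 6):
    s = tu_encode(v, 5)
    assert tu_decode(s, 5) == v
    print(v, "->", s)  # e.g. 3 -> 1110, 5 -> 11111
```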
  • the information about the block vector difference may further include a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and an abs_bvd_unary0 syntax element.
  • the abs_bvd_unary0 syntax element may indicate information about the quotient obtained by dividing the -1 value of the absolute value of the block vector difference by 2.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may be represented by a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 10 may be represented by an abs_bvd_greater10_flag syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 and a syntax element for whether the absolute value of the block vector difference is greater than 10 may be derived through FL-based binarization, and the abs_bvd_unary0 syntax element may be derived through TU (Truncated Unary) based binarization.
  • information about the block vector difference can be included in the BVD coding syntax as shown in Table 13, and the block vector difference can be derived through these syntax elements.
  • the information about the block vector difference may further include a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and an abs_bvd_unary1 syntax element. If the value of the abs_bvd_unary1 syntax element is less than 32, the information about the block vector difference may further include the abs_bvd_flc syntax element; if the value of the abs_bvd_unary1 syntax element is 32, it may further include the abs_bvd_exceed syntax element.
  • the abs_bvd_unary1 syntax element may indicate information about the quotient obtained by dividing the -11 value of the absolute value of the block vector difference by 2 and then dividing that quotient by the number of representable symbols, the abs_bvd_flc syntax element may indicate information about the value derived through a modulo operation between the quotient of the -11 value of the absolute value of the block vector difference divided by 2 and the number of representable symbols, and the abs_bvd_exceed syntax element may indicate information about a value exceeding the range that can be expressed based on the abs_bvd_unary1 syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2 may be represented by a parity_flag syntax element, and a syntax element for whether the absolute value of the block vector difference is greater than 10 may be represented by an abs_bvd_greater10_flag syntax element.
  • a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and the abs_bvd_flc syntax element may be derived through FL-based binarization, the abs_bvd_unary1 syntax element may be derived through TU (Truncated Unary) based binarization, and the abs_bvd_exceed syntax element may be derived through Exp-Golomb based binarization.
  • information about the block vector difference can be included in the BVD coding syntax as shown in Table 13, and the block vector difference can be derived through these syntax elements.
  • the decoding apparatus may derive a prediction mode of the current block as an intra block copy (IBC) prediction mode based on the prediction mode information (S910).
  • the IBC prediction mode may indicate a prediction mode that performs Current Picture Referencing (CPR). That is, the IBC prediction mode may represent a prediction mode that performs motion compensation within a current picture including a current block.
  • when the prediction mode of the current block is an inter prediction mode, a motion vector and a motion vector difference may be derived based on a reference picture different from the current picture to perform prediction, whereas when the prediction mode of the current block is the IBC prediction mode, a block vector and a block vector difference may be derived based on the current picture to perform prediction.
  • the decoding apparatus may derive the prediction mode of the current block as the IBC prediction mode based on the IBC flag.
  • when the prediction mode information includes information about the picture order count (POC) of the current picture and information about the POC of the reference (target) picture, the decoding apparatus may derive the prediction mode of the current block as the IBC prediction mode based on the difference between the POC of the current picture and the POC of the reference picture.
  • the prediction mode information may include information for performing motion compensation based on the current picture, and based on this, an IBC prediction mode may be derived.
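As a rough illustration only, one way a decoder might infer the IBC prediction mode from the signaled information is sketched below; which of the three signals (the IBC flag, the POC comparison, or the current-picture motion-compensation information) is actually used depends on the embodiment, and the structure and field names here are hypothetical.

```python
# Hedged sketch: inferring the IBC prediction mode from the prediction mode
# information. The fields of PredModeInfo are hypothetical stand-ins for the
# signals described above (IBC flag, POCs, current-picture indication).

from dataclasses import dataclass
from typing import Optional

@dataclass
class PredModeInfo:
    ibc_flag: Optional[bool] = None        # explicit IBC flag, if signaled
    current_poc: Optional[int] = None      # POC of the current picture
    reference_poc: Optional[int] = None    # POC of the reference picture
    mc_on_current_picture: bool = False    # motion compensation on current picture

def is_ibc_mode(info: PredModeInfo) -> bool:
    if info.ibc_flag is not None:
        return info.ibc_flag
    if info.current_poc is not None and info.reference_poc is not None:
        # A zero POC difference means the reference picture is the current picture.
        return (info.current_poc - info.reference_poc) == 0
    return info.mc_on_current_picture

print(is_ibc_mode(PredModeInfo(ibc_flag=True)))                     # True
print(is_ibc_mode(PredModeInfo(current_poc=16, reference_poc=16)))  # True
print(is_ibc_mode(PredModeInfo(current_poc=16, reference_poc=12)))  # False
```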
  • the decoding apparatus may derive the block vector difference of the current block based on information about the block vector difference (S920).
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0, a syntax element for the sign of the block vector difference, a syntax element for whether the absolute value of the block vector difference is greater than 1, and a syntax element for the -2 value of the absolute value of the block vector difference, and the block vector difference may be derived based on these syntax elements through Equation (1).
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0, a syntax element for the sign of the block vector difference, a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, and a syntax element for the -3 value of the absolute value of the block vector difference, and the block vector difference may be derived based on these syntax elements through Equation (2).
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0, a syntax element for the sign of the block vector difference, a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 2, an abs_bvd_unary syntax element, and an abs_bvd_flc syntax element, and the block vector difference may be derived based on these syntax elements through Equation (3).
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0, a syntax element for the sign of the block vector difference, a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and an abs_bvd_unary0 syntax element, and the block vector difference may be derived based on these syntax elements.
  • the information about the block vector difference may include a syntax element for whether the absolute value of the block vector difference is greater than 0, a syntax element for the sign of the block vector difference, a syntax element for the remainder obtained by dividing the absolute value of the block vector difference by 2, a syntax element for whether the absolute value of the block vector difference is greater than 10, and an abs_bvd_unary1 syntax element, and may further include an abs_bvd_flc syntax element or an abs_bvd_exceed syntax element; the block vector difference may be derived based on these syntax elements.
  • the decoding apparatus may derive the block vector of the current block based on the block vector difference (S930).
  • the block vector may represent displacement information from the current block to the most similar block.
  • the most similar block can be used as a reference block.
  • the decoding apparatus may derive the block vector based on a block vector difference and a block vector predictor of the current block.
  • the block vector predictor may be derived from neighboring blocks of the current block.
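A minimal sketch of this step is given below, assuming the block vector is simply the per-component sum of the block vector predictor and the block vector difference; any rounding or range clipping used by a particular embodiment is omitted.

```python
# Minimal sketch: block vector = block vector predictor + block vector
# difference, per component. Clipping/rounding details are omitted.

def derive_block_vector(bvp: tuple, bvd: tuple) -> tuple:
    return (bvp[0] + bvd[0], bvp[1] + bvd[1])

print(derive_block_vector((-16, 0), (-4, 2)))  # -> (-20, 2)
```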
  • the decoding apparatus may generate prediction samples of the current block based on reconstructed samples in the current picture indicated by the block vector (S940). For example, the decoding apparatus may generate the predicted samples (or the predicted block) using reconstructed samples (or reconstructed blocks) in the current picture as reference samples (or reference blocks). For example, reconstructed samples may represent pre-decoded or reconstructed samples in the current picture, and a relative position from the current block may be determined by the block vector.
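To make the copying step concrete, the sketch below forms the prediction block by copying already-reconstructed samples of the current picture at the position offset by the block vector; reference-area restrictions and boundary checks, which a real decoder would enforce, are intentionally left out of this illustration.

```python
# Hedged sketch of IBC prediction: copy a width x height block of already-
# reconstructed samples of the current picture at the location offset by the
# block vector. Reference-area restrictions and boundary checks are omitted.

def ibc_predict(reconstructed, x0, y0, width, height, bv):
    """reconstructed: 2-D list of already-decoded samples of the current picture.
    (x0, y0): top-left position of the current block; bv: (bvx, bvy)."""
    bvx, bvy = bv
    return [[reconstructed[y0 + bvy + j][x0 + bvx + i] for i in range(width)]
            for j in range(height)]

# Toy example: an 8x8 "picture" with a 2x2 current block at (4, 4) predicted
# from the block located 4 samples to the left (bv = (-4, 0)).
picture = [[10 * r + c for c in range(8)] for r in range(8)]
print(ibc_predict(picture, 4, 4, 2, 2, (-4, 0)))  # -> [[40, 41], [50, 51]]
```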
  • the decoding apparatus may obtain information about the residual, including information about the residual samples (or residual sample array), from the bitstream, and the information about the residual samples, that is, the residual information, may include information about transform coefficients for the residual samples.
  • the decoding apparatus may obtain various information related to prediction of the current block from a bitstream.
  • the decoding apparatus may derive residual samples based on the information about the residual samples, generate reconstructed samples based on the prediction samples and the residual samples, and derive a reconstructed block or a reconstructed picture based on the reconstructed samples. As described above, the decoding apparatus may apply deblocking filtering and/or in-loop filtering procedures, such as an SAO procedure, to the reconstructed picture to improve subjective/objective image quality, if necessary.
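As a minimal illustration of combining the prediction samples and the residual samples into reconstructed samples, the sketch below clips to an assumed 8-bit sample range; the in-loop filtering mentioned above is not sketched.

```python
# Minimal sketch: reconstructed sample = clip(prediction + residual).
# An 8-bit sample range is assumed here purely for illustration.

def reconstruct(pred_block, resid_block, max_val: int = 255):
    return [[min(max(p + r, 0), max_val) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred_block, resid_block)]

print(reconstruct([[100, 200]], [[-10, 80]]))  # -> [[90, 255]]
```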
  • the decoding device may decode the bitstream to obtain image information including all or part of the above-described information (or syntax elements).
  • the bitstream may be stored in a computer-readable digital storage medium, and the above-described decoding method may be performed based on the stored bitstream.
  • the above-described method according to this document may be implemented in software form, and the encoding device and/or the decoding device according to this document may be included in a device that performs image processing, for example, a TV, a computer, a smartphone, a set-top box, or a display device.
  • the above-described method may be implemented as a module (process, function, etc.) performing the above-described function.
  • Modules are stored in memory and can be executed by a processor.
  • the memory may be internal or external to the processor, and may be connected to the processor by various well-known means.
  • the processor may include an application-specific integrated circuit (ASIC), other chipsets, logic circuits, and/or data processing devices.
  • the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media and/or other storage devices.
  • FIG. 11 schematically shows a structure of a content streaming system.
  • the embodiments described in this document may be implemented and performed on a processor, a microprocessor, a controller, or a chip.
  • the functional units shown in each figure may be implemented and performed on a computer, a processor, a microprocessor, a controller, or a chip.
  • the decoding device and the encoding device to which this document is applied may be included in a multimedia broadcast transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video communication device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video on demand (VoD) service providing device, an over-the-top video (OTT video) device, an Internet streaming service providing device, a three-dimensional (3D) video device, a video telephony video device, or a medical video device, and may be used to process video signals or data signals.
  • the OTT video (Over the top video) device may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smartphone, a tablet PC, and a digital video recorder (DVR).
  • the processing method to which the present document is applied may be produced in the form of a program executed by a computer, and may be stored in a computer-readable recording medium.
  • Multimedia data having a data structure according to this document can also be stored in a computer-readable recording medium.
  • the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
  • the computer-readable recording medium may include, for example, a Blu-ray Disc (BD), a Universal Serial Bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
  • the computer-readable recording medium includes media implemented in the form of a carrier wave (for example, transmission via the Internet).
  • a bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
  • embodiments of the present document may be implemented as computer program products using program codes, and the program codes may be executed on a computer according to embodiments of the present document.
  • the program code can be stored on a computer readable carrier.
  • the content streaming system to which this document is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
  • the encoding server serves to compress content input from multimedia input devices such as a smartphone, a camera, or a camcorder into digital data, generate a bitstream, and transmit it to the streaming server.
  • when multimedia input devices such as a smartphone, a camera, or a camcorder directly generate a bitstream, the encoding server may be omitted.
  • the bitstream may be generated by an encoding method or a bitstream generation method to which the present document is applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
  • the streaming server transmits multimedia data to a user device based on a user request made through the web server, and the web server serves as an intermediary informing the user of available services.
  • when the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits the multimedia data to the user.
  • the content streaming system may include a separate control server, in which case the control server serves to control commands/responses between devices in the content streaming system.
  • the streaming server may receive content from a media storage and/or encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
  • Examples of the user device include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, or a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
  • Each server in the content streaming system can be operated as a distributed server, and in this case, data received from each server can be distributed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method by which a decoding apparatus performs image decoding, comprising the steps of: acquiring, from a bitstream, prediction mode information for a current block and information relating to a block vector difference; deriving a prediction mode of the current block as an intra block copy (IBC) prediction mode on the basis of the prediction mode information; deriving a block vector difference of the current block on the basis of the information relating to the block vector difference; deriving a block vector of the current block on the basis of the block vector difference; and generating prediction samples of the current block on the basis of reconstructed samples of a current picture indicated by the block vector, wherein the information relating to the block vector difference includes a syntax element indicating whether the absolute value of the block vector difference is greater than 0 and a syntax element relating to the sign of the block vector difference.
PCT/KR2019/018713 2018-12-31 2019-12-30 Procédé et appareil de codage d'image faisant appel à une prédiction de copie intra-bloc WO2020141831A2 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US201862786552P 2018-12-31 2018-12-31
US62/786,552 2018-12-31
US201962814845P 2019-03-06 2019-03-06
US62/814,845 2019-03-06
US201962850549P 2019-05-21 2019-05-21
US62/850,549 2019-05-21
US201962860787P 2019-06-13 2019-06-13
US62/860,787 2019-06-13

Publications (2)

Publication Number Publication Date
WO2020141831A2 true WO2020141831A2 (fr) 2020-07-09
WO2020141831A3 WO2020141831A3 (fr) 2020-12-17

Family

ID=71406553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/018713 WO2020141831A2 (fr) 2018-12-31 2019-12-30 Procédé et appareil de codage d'image faisant appel à une prédiction de copie intra-bloc

Country Status (1)

Country Link
WO (1) WO2020141831A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023193724A1 (fr) * 2022-04-05 2023-10-12 Beijing Bytedance Network Technology Co., Ltd. Procédé, appareil et support de traitement vidéo
WO2023198131A1 (fr) * 2022-04-12 2023-10-19 Beijing Bytedance Network Technology Co., Ltd. Procédé, appareil et support de traitement vidéo
EP4246975A4 (fr) * 2020-12-04 2024-01-24 Tencent Tech Shenzhen Co Ltd Procédé et appareil de décodage vidéo, procédé et appareil de codage vidéo, et dispositif

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10536701B2 (en) * 2011-07-01 2020-01-14 Qualcomm Incorporated Video coding using adaptive motion vector resolution
EP3158734A4 (fr) * 2014-06-19 2017-04-26 Microsoft Technology Licensing, LLC Modes de copie intra-bloc et de prédiction inter unifiés
US9930341B2 (en) * 2014-06-20 2018-03-27 Qualcomm Incorporated Block vector coding for intra block copying
US9948949B2 (en) * 2014-06-20 2018-04-17 Qualcomm Incorporated Intra block copy block vector signaling for video coding
KR101573334B1 (ko) * 2014-08-07 2015-12-01 삼성전자주식회사 영상 데이터의 엔트로피 부호화, 복호화 방법 및 장치

Also Published As

Publication number Publication date
WO2020141831A3 (fr) 2020-12-17

Similar Documents

Publication Publication Date Title
WO2020071829A1 (fr) Procédé de codage d'image basé sur l'historique, et appareil associé
WO2020071830A1 (fr) Procédé de codage d'images utilisant des informations de mouvement basées sur l'historique, et dispositif associé
WO2020091213A1 (fr) Procédé et appareil de prédiction intra dans un système de codage d'image
WO2020171632A1 (fr) Procédé et dispositif de prédiction intra fondée sur une liste mpm
WO2021137597A1 (fr) Procédé et dispositif de décodage d'image utilisant un paramètre de dpb pour un ols
WO2020251319A1 (fr) Codage d'image ou de vidéo basé sur une prédiction inter à l'aide de sbtmvp
WO2020141879A1 (fr) Procédé et dispositif de décodage de vidéo basé sur une prédiction de mouvement affine au moyen d'un candidat de fusion temporelle basé sur un sous-bloc dans un système de codage de vidéo
WO2020167097A1 (fr) Obtention du type de prédiction inter pour prédiction inter dans un système de codage d'images
WO2020204419A1 (fr) Codage vidéo ou d'image basé sur un filtre à boucle adaptatif
WO2020141886A1 (fr) Procédé et appareil d'inter-prédiction basée sur un sbtmvp
WO2020071879A1 (fr) Procédé de codage de coefficient de transformée et dispositif associé
WO2020071832A1 (fr) Procédé de codage de coefficient de transformation et dispositif associé
WO2020141831A2 (fr) Procédé et appareil de codage d'image faisant appel à une prédiction de copie intra-bloc
WO2021040400A1 (fr) Codage d'image ou de vidéo fondé sur un mode à palette
WO2020141932A1 (fr) Procédé et appareil de prédiction inter utilisant des mmvd de cpr
WO2021125700A1 (fr) Appareil et procédé de codage d'image/vidéo basé sur une table pondérée par prédiction
WO2021091256A1 (fr) Procédé et dispositif de codade d'image/vidéo
WO2021118293A1 (fr) Procédé et dispositif de codage d'image basé sur un filtrage
WO2021133060A1 (fr) Appareil et procédé de codage d'image basés sur une sous-image
WO2020251270A1 (fr) Codage d'image ou de vidéo basé sur des informations de mouvement temporel dans des unités de sous-blocs
WO2021040398A1 (fr) Codage d'image ou de vidéo s'appuyant sur un codage d'échappement de palette
WO2020251340A1 (fr) Procédé et dispositif de codage d'image/vidéo basés sur une prédiction de vecteurs de mouvement
WO2020180043A1 (fr) Procédé de codage d'image basé sur le lmcs et dispositif associé
WO2020145620A1 (fr) Procédé et dispositif de codage d'image basé sur une prédiction intra utilisant une liste mpm
WO2020197031A1 (fr) Procédé et appareil de prédiction intra basée sur une ligne à références multiples dans un système de codage d'image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19907833

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19907833

Country of ref document: EP

Kind code of ref document: A2