WO2020076034A1 - Prediction method using current picture referencing mode and image decoding apparatus therefor - Google Patents

Prediction method using current picture referencing mode and image decoding apparatus therefor

Info

Publication number
WO2020076034A1
Authority
WO
WIPO (PCT)
Prior art keywords
mode
ibc
block
prediction
flag
Application number
PCT/KR2019/013129
Other languages
English (en)
Korean (ko)
Inventor
김재일
이선영
손세훈
신재섭
Original Assignee
에스케이텔레콤 주식회사
Priority claimed from KR1020190067724A (published as KR20200040179A)
Application filed by 에스케이텔레콤 주식회사
Priority to CN201980081334.7A (published as CN113170196A)
Publication of WO2020076034A1
Priority to US17/225,397 (published as US11405639B2)
Priority to US17/847,783 (published as US11838545B2)
Priority to US17/847,727 (published as US11838543B2)
Priority to US17/847,706 (published as US11838542B2)
Priority to US17/847,748 (published as US11838544B2)

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/105 — Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11 — Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/593 — Predictive coding involving spatial prediction techniques
    • H04N19/70 — Characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates to encoding and decoding of an image, and to a prediction method for improving the efficiency of encoding and decoding and an image decoding apparatus using the same.
  • Since video data is large in volume compared to audio data or still-image data, it requires considerable hardware resources, including memory, to store or transmit without compression processing.
  • For this reason, video data is generally compressed before being stored or transmitted, using video compression techniques such as HEVC (High Efficiency Video Coding).
  • the present invention aims to provide an improved image encoding and decoding technology.
  • One aspect of the present invention is to improve encoding and decoding efficiency through various methods of determining the prediction mode of a current block.
  • An aspect of the present invention is a method of predicting a current block to be decoded in a current picture referencing (intra block copy, ibc) mode, the method including: decoding, from a bitstream, an enable flag indicating whether application of the ibc mode is permitted and type information indicating whether the type of a slice is inter; decoding, from the bitstream, an ibc flag indicating whether the prediction mode of the current block is the ibc mode, based on the enable flag and the type information; when the ibc flag indicates the ibc mode, decoding, from the bitstream, motion information that does not include a reference picture index of the current block; and predicting the current block using a block indicated by the motion information within the current picture in which the current block is located.
  • Another aspect of the present invention is an image decoding apparatus including: a decoding unit that decodes, from a bitstream, an enable flag indicating whether application of a current picture referencing (intra block copy, ibc) mode is permitted and type information indicating whether the type of a slice is inter, decodes an ibc flag indicating whether the prediction mode of the current block is the ibc mode depending on the enable flag and the type information, and, when the ibc flag indicates the ibc mode, decodes motion information that does not include a reference picture index of the current block; and a prediction unit that predicts the current block using a block indicated by the motion information within the current picture in which the current block is located.
  • According to the present invention, since a reference picture index does not need to be signaled for the ibc mode, bit efficiency can be improved.
  • FIG. 1 is an exemplary block diagram of an image encoding apparatus capable of implementing the techniques of the present disclosure.
  • FIG. 2 is a diagram for explaining a method of dividing a block using a QTBTTT structure.
  • FIG. 3 is a diagram for describing a plurality of intra prediction modes.
  • FIG. 4 is an exemplary block diagram of an image decoding apparatus capable of implementing the techniques of the present disclosure.
  • FIG. 5 is a diagram for describing a current picture reference technique.
  • FIG. 6 is a diagram for explaining a conventional method of classifying prediction modes.
  • FIG. 7 is a flowchart illustrating an example of a prediction mode determination proposed by the present invention.
  • FIG. 8 is a flowchart illustrating an example of predicting a current block in a current picture reference mode.
  • FIGS. 9 to 14 are diagrams for explaining various methods proposed in the present invention.
  • FIG. 15 is a flowchart illustrating an embodiment of the present invention for predicting a current block in the ibc_BVP mode.
  • FIGS. 16 to 18 are diagrams for describing BVP candidates included in the BVP candidate list for the ibc mode.
  • FIG. 1 is an exemplary block diagram of an image encoding apparatus capable of implementing the techniques of the present disclosure.
  • an image encoding apparatus and sub-components of the apparatus will be described with reference to FIG. 1.
  • The image encoding apparatus includes a block division unit 110, a prediction unit 120, a subtractor 130, a transformation unit 140, a quantization unit 145, an encoding unit 150, an inverse quantization unit 160, an inverse transformation unit 165, an adder 170, a filter unit 180, and a memory 190.
  • Each component of the video encoding apparatus may be implemented in hardware or software, or a combination of hardware and software. Further, the function of each component may be implemented in software, and the microprocessor may be implemented to execute the function of software corresponding to each component.
  • One image is composed of a plurality of pictures. Each picture is divided into a plurality of regions, and encoding is performed for each region. For example, one picture is divided into one or more tiles and/or slices. Here, one or more tiles may be defined as a tile group. Each tile or slice is divided into one or more coding tree units (CTUs), and each CTU is divided into one or more coding units (CUs) by a tree structure. Information applied to each CU is encoded as the syntax of the CU, and information commonly applied to the CUs included in one CTU is encoded as the syntax of the CTU.
  • Information commonly applied to all blocks in one tile is encoded as the syntax of the tile or as the syntax of a tile group in which a plurality of tiles are collected, and information applied to all blocks constituting one picture is encoded in a picture parameter set (PPS) or a picture header.
  • information commonly referred to by a plurality of pictures is encoded in a sequence parameter set (SPS).
  • In addition, information commonly applied to one video composed of a plurality of pictures may be encoded in a video parameter set (VPS).
  • The block division unit 110 determines the size of the coding tree unit (CTU). Information about the CTU size is encoded as a syntax of the SPS or the PPS and transmitted to the image decoding apparatus. The block division unit 110 divides each picture constituting the image into a plurality of CTUs of the determined size and then recursively splits each CTU using a tree structure.
  • a leaf node in a tree structure becomes a coding unit (CU), which is a basic unit of encoding.
  • The tree structure may be a quadtree (QuadTree, QT), in which a parent node is divided into four child nodes of equal size; a binary tree (BinaryTree, BT), in which a parent node is divided into two child nodes; a ternary tree (TernaryTree, TT), in which a parent node is divided into three child nodes at a 1:2:1 ratio; or a structure in which two or more of the QT, BT, and TT structures are mixed.
  • For example, a QTBT (QuadTree plus BinaryTree) structure combining QT and BT may be used, or a QTBTTT (QuadTree plus BinaryTree TernaryTree) structure combining QT, BT, and TT may be used; the QTBTTT structure may also be referred to as an MTT (Multiple-Type Tree) structure.
  • the CTU can be first divided into a QT structure.
  • The quadtree splitting may be repeated until the size of the block being split reaches the minimum block size (MinQTSize) of the leaf node allowed in QT.
  • A first flag (QT_split_flag) indicating whether each node of the QT structure is split into four nodes of the lower layer is encoded by the encoder 150 and signaled to the image decoding apparatus. If the leaf node of QT is not larger than the maximum block size (MaxBTSize) of the root node allowed in BT, it may be further split into one or more of the BT structure and the TT structure.
  • In the BT structure and/or the TT structure, a plurality of split directions may exist; for example, the block of the corresponding node may be split horizontally or vertically. A second flag (MTT_split_flag) indicating whether a node is split, a flag indicating the split direction (vertical or horizontal), and/or a flag indicating the split type (Binary or Ternary) are encoded by the encoder 150 and signaled to the image decoding apparatus.
  • Alternatively, a CU split flag (split_cu_flag) indicating whether the block is split may be encoded first, and when the block is split, a QT split flag (split_qt_flag) indicating whether the split type is QT splitting may be encoded by the encoder 150 and signaled to the image decoding apparatus. When the CU split flag (split_cu_flag) value indicates that the block is not split, the block of the corresponding node becomes a leaf node of the split tree structure and becomes a coding unit (CU), which is the basic unit of encoding.
  • When the CU split flag (split_cu_flag) value indicates that the block is split, whether the split type is QT or MTT is distinguished through the QT split flag (split_qt_flag) value. When the split type is QT, there is no additional information; when the split type is MTT, a flag indicating the MTT split direction (vertical or horizontal) (mtt_split_cu_vertical_flag) and/or a flag indicating the MTT split type (Binary or Ternary) (mtt_split_cu_binary_flag) is additionally encoded by the encoding unit 150 and signaled to the image decoding apparatus.
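  • To make the above signaling concrete, the following Python sketch shows one way a decoder could recursively parse the CU split flags (split_cu_flag, split_qt_flag, mtt_split_cu_vertical_flag, mtt_split_cu_binary_flag) to recover the QTBTTT partitioning; the bitstream reader read_flag and the omission of size constraints such as MinQTSize and MaxBTSize are illustrative assumptions, not the normative syntax.

```python
def parse_coding_tree(read_flag, x, y, w, h, leaves):
    """Recursively parse QTBTTT split flags and collect leaf CUs.

    read_flag(name) is an assumed helper returning the next 1-bit syntax element.
    (x, y, w, h) is the position and size of the current node; size constraints
    such as MinQTSize and MaxBTSize are omitted for brevity.
    """
    if read_flag("split_cu_flag"):                    # is this node split at all?
        if read_flag("split_qt_flag"):                # QT split vs. MTT split
            hw, hh = w // 2, h // 2
            for dx, dy in ((0, 0), (hw, 0), (0, hh), (hw, hh)):
                parse_coding_tree(read_flag, x + dx, y + dy, hw, hh, leaves)
        else:
            vertical = read_flag("mtt_split_cu_vertical_flag")
            binary = read_flag("mtt_split_cu_binary_flag")
            # binary: two halves; ternary: 1:2:1 partition (offsets/sizes in quarters)
            parts = [(0, 2), (2, 2)] if binary else [(0, 1), (1, 2), (3, 1)]
            for off, frac in parts:
                if vertical:
                    parse_coding_tree(read_flag, x + off * w // 4, y,
                                      frac * w // 4, h, leaves)
                else:
                    parse_coding_tree(read_flag, x, y + off * h // 4,
                                      w, frac * h // 4, leaves)
    else:
        leaves.append((x, y, w, h))                   # leaf node: a CU
```

  • For example, calling parse_coding_tree(reader, 0, 0, 128, 128, leaves) on a 128x128 CTU would fill leaves with the (x, y, w, h) of each resulting CU.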
  • As another example of the tree structure, when BT is used, a split flag (split_flag) indicating whether each node of the BT structure is split into blocks of a lower layer and split type information indicating the split type are encoded by the encoder 150 and transmitted to the image decoding apparatus. There may additionally be a split type in which the block of the corresponding node is divided into two blocks of asymmetric shape; the asymmetric form may include dividing the block of the corresponding node into two rectangular blocks having a size ratio of 1:3, or dividing the block of the corresponding node in a diagonal direction.
  • CU may have various sizes according to QTBT or QTBTTT division from CTU.
  • Hereinafter, a block corresponding to a CU (i.e., a leaf node of QTBTTT) to be encoded or decoded is referred to as a 'current block'.
  • the prediction unit 120 predicts the current block to generate a prediction block.
  • the prediction unit 120 includes an intra prediction unit 122 and an inter prediction unit 124.
  • each of the current blocks in a picture can be predictively coded.
  • Prediction of the current block may be performed using an intra prediction technique (using data from the picture containing the current block) or an inter prediction technique (using data from a picture coded before the picture containing the current block).
  • Inter prediction includes both one-way prediction and two-way prediction.
  • the intra prediction unit 122 predicts pixels in the current block using pixels (reference pixels) located around the current block in the current picture including the current block.
  • The plurality of intra prediction modes may include two non-directional modes, namely a planar mode and a DC mode, and 65 directional modes. The neighboring pixels to be used and the arithmetic expression are defined differently for each prediction mode.
  • the intra prediction unit 122 may determine an intra prediction mode to be used to encode the current block.
  • The intra prediction unit 122 may encode the current block using several intra prediction modes and select an appropriate intra prediction mode from the tested modes. For example, the intra prediction unit 122 may calculate rate-distortion values using rate-distortion analysis for the various tested intra prediction modes and select the intra prediction mode with the best rate-distortion characteristics among the tested modes.
  • the intra prediction unit 122 selects one intra prediction mode from among a plurality of intra prediction modes, and predicts a current block using a neighboring pixel (reference pixel) and an arithmetic expression determined according to the selected intra prediction mode.
  • Information on the selected intra prediction mode is encoded by the encoding unit 150 and transmitted to the image decoding apparatus.
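  • As a rough illustration of the rate-distortion selection described above, the sketch below picks the intra prediction mode with the lowest Lagrangian cost D + λ·R; the helpers predict_intra, distortion, and estimate_bits are placeholders assumed for illustration rather than part of any particular encoder.

```python
def select_intra_mode(block, candidate_modes, lam, predict_intra, distortion, estimate_bits):
    """Pick the intra prediction mode minimizing the rate-distortion cost D + lambda * R."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        pred = predict_intra(block, mode)                 # prediction from neighboring pixels
        cost = distortion(block, pred) + lam * estimate_bits(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```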
  • the inter prediction unit 124 generates a prediction block for the current block through a motion compensation process.
  • the block most similar to the current block is searched in the reference picture that is encoded and decoded before the current picture, and a predicted block for the current block is generated using the searched block. Then, a motion vector corresponding to displacement between the current block in the current picture and the prediction block in the reference picture is generated.
  • motion estimation is performed on luma components, and motion vectors calculated based on luma components are used for both luma components and chroma components.
  • the motion information including information about the reference picture and motion vector used to predict the current block is encoded by the encoder 150 and transmitted to the video decoding apparatus.
  • the subtractor 130 subtracts the prediction block generated by the intra prediction unit 122 or the inter prediction unit 124 from the current block to generate a residual block.
  • the transform unit 140 converts the residual signal in the residual block having pixel values in the spatial domain into a transform coefficient in the frequency domain.
  • The transform unit 140 may transform the residual signals in the residual block using the entire size of the residual block as a transform unit, or may divide the residual block into two sub-blocks, a transform region and a non-transform region, and transform the residual signals using only the transform-region sub-block as a transform unit.
  • the transform region sub-block may be one of two rectangular blocks having a size ratio of 1: 1 on the horizontal axis (or vertical axis).
  • a flag (cu_sbt_flag), directional (vertical / horizontal) information (cu_sbt_horizontal_flag), and / or location information (cu_sbt_pos_flag) indicating that only the sub-block is converted is encoded by the encoder 150 and signaled to the video decoding apparatus.
  • Alternatively, the transform-region sub-block may have a size ratio of 1:3 with respect to the horizontal axis (or vertical axis); in this case, a flag (cu_sbt_quad_flag) distinguishing this split is additionally encoded by the encoder 150 and signaled to the image decoding apparatus.
  • the quantization unit 145 quantizes transform coefficients output from the transform unit 140 and outputs the quantized transform coefficients to the encoder 150.
  • the encoding unit 150 generates a bitstream by encoding quantized transform coefficients using an encoding method such as CABAC (Context-based Adaptive Binary Arithmetic Code).
  • The encoder 150 encodes information related to block splitting, such as the CTU size, CU split flag, QT split flag, MTT split direction, and MTT split type, so that the image decoding apparatus can split the block in the same way as the image encoding apparatus.
  • In addition, the encoder 150 encodes information about the prediction type indicating whether the current block is encoded by intra prediction or inter prediction, and encodes intra prediction information (i.e., information on the intra prediction mode) or inter prediction information (information on the reference picture and motion vector) according to the prediction type.
  • the inverse quantization unit 160 inverse quantizes the quantized transform coefficients output from the quantization unit 145 to generate transform coefficients.
  • the inverse transform unit 165 reconstructs the residual block by transforming transform coefficients output from the inverse quantization unit 160 from the frequency domain to the spatial domain.
  • the adder 170 restores the current block by adding the reconstructed residual block and the predicted block generated by the predictor 120.
  • The pixels in the reconstructed current block are used as reference pixels for intra prediction of the next block in coding order.
  • The filter unit 180 performs filtering on the reconstructed pixels to reduce blocking artifacts, ringing artifacts, and blurring artifacts caused by block-based prediction and transform/quantization.
  • the filter unit 180 may include a deblocking filter 182 and a sample adaptive offset (SAO) filter 184.
  • The deblocking filter 182 filters the boundaries between reconstructed blocks to remove blocking artifacts caused by block-wise encoding/decoding, and the SAO filter 184 performs additional filtering on the deblocking-filtered image.
  • the SAO filter 184 is a filter used to compensate for a difference between a reconstructed pixel and an original pixel caused by lossy coding.
  • the reconstructed blocks filtered through the deblocking filter 182 and the SAO filter 184 are stored in the memory 190.
  • the reconstructed picture is used as a reference picture for inter prediction of a block in a picture to be encoded.
  • FIG. 4 is an exemplary block diagram of an image decoding apparatus capable of implementing the techniques of the present disclosure.
  • an image decoding apparatus and sub-components of the apparatus will be described with reference to FIG. 4.
  • the image decoding apparatus may include a decoding unit 410, an inverse quantization unit 420, an inverse transform unit 430, a prediction unit 440, an adder 450, a filter unit 460, and a memory 470. have.
  • each component of the video decoding apparatus may be implemented in hardware or software, or a combination of hardware and software. Further, the function of each component may be implemented in software, and the microprocessor may be implemented to execute the function of software corresponding to each component.
  • The decoder 410 determines the current block to be decoded by decoding the bitstream received from the image encoding apparatus and extracting information related to block splitting, and extracts the prediction information and the residual signal information required to reconstruct the current block.
  • the decoder 410 extracts information on the CTU size from a Sequence Parameter Set (SPS) or a Picture Parameter Set (PPS) to determine the size of the CTU, and divides the picture into CTUs of the determined size. Then, the CTU is determined as the top layer of the tree structure, that is, the root node, and the CTU is split using the tree structure by extracting the segmentation information for the CTU.
  • the first flag (QT_split_flag) related to the splitting of the QT is extracted, and each node is divided into four nodes of a lower layer.
  • Then, for the node corresponding to the leaf node of QT, the second flag (MTT_split_flag) and the split direction (vertical/horizontal) and/or split type (Binary/Ternary) information related to MTT splitting are extracted, and the leaf node is split according to the MTT structure. Through this, each node below the leaf node of QT is recursively split into a BT or TT structure.
  • As another example, a CU split flag (split_cu_flag) indicating whether the CU is split may be extracted first, and when the corresponding block is split, a QT split flag (split_qt_flag) is extracted. When the split type is not QT but MTT, a flag indicating the MTT split direction (vertical or horizontal) (mtt_split_cu_vertical_flag) and/or a flag indicating the MTT split type (Binary or Ternary) (mtt_split_cu_binary_flag) is additionally extracted.
  • each node may have 0 or more repetitive MTT splits after 0 or more repetitive QT splits. For example, in the CTU, MTT splitting may occur immediately, or conversely, only multiple QT splitting may occur.
  • As another example, when splitting using the QTBT structure, the first flag (QT_split_flag) related to QT splitting is extracted and each node is split into four nodes of the lower layer. Then, for the node corresponding to the leaf node of QT, a split flag (split_flag) indicating whether it is further split by BT and the split direction information are extracted.
  • the decoder 410 when determining a current block to be decoded through partitioning of a tree structure, extracts information on a prediction type indicating whether the current block is intra predicted or inter predicted. When the prediction type information indicates intra prediction, the decoder 410 extracts a syntax element for intra prediction information (intra prediction mode) of the current block. When the prediction type information indicates inter prediction, the decoder 410 extracts syntax elements for the inter prediction information, that is, information indicating a motion vector and a reference picture referenced by the motion vector.
  • the decoder 410 extracts information about quantized transform coefficients of the current block as information about the residual signal.
  • The inverse quantization unit 420 inverse-quantizes the quantized transform coefficients, and the inverse transform unit 430 inverse-transforms the inverse-quantized transform coefficients from the frequency domain to the spatial domain to reconstruct the residual signals, thereby generating a residual block for the current block.
  • In addition, when the inverse transform unit 430 inverse-transforms only a partial region (sub-block) of the transform block, it extracts a flag (cu_sbt_flag) indicating that only a sub-block of the transform block has been transformed, the direction (vertical/horizontal) information of the sub-block (cu_sbt_horizontal_flag), and/or the position information of the sub-block (cu_sbt_pos_flag), and reconstructs the residual signals by inverse-transforming the transform coefficients of the corresponding sub-block from the frequency domain to the spatial domain.
  • the prediction unit 440 may include an intra prediction unit 442 and an inter prediction unit 444.
  • the intra prediction unit 442 is activated when the prediction type of the current block is intra prediction
  • the inter prediction unit 444 is activated when the prediction type of the current block is inter prediction.
  • The intra prediction unit 442 determines the intra prediction mode of the current block among the plurality of intra prediction modes from the syntax elements for the intra prediction mode extracted by the decoder 410, and predicts the current block using the reference pixels around the current block according to the determined intra prediction mode.
  • The inter prediction unit 444 determines the motion vector of the current block and the reference picture referred to by the motion vector using the syntax elements for the inter prediction information extracted by the decoder 410, and predicts the current block using the motion vector and the reference picture.
  • the adder 450 restores the current block by adding the residual block output from the inverse transform unit and the prediction block output from the inter prediction unit or the intra prediction unit.
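  • A minimal sketch of this addition step, assuming the prediction and residual blocks are NumPy arrays of samples; the clipping to the valid sample range is included for completeness.

```python
import numpy as np

def reconstruct_block(prediction, residual, bit_depth=8):
    """Add the residual block to the prediction block and clip to the sample range."""
    max_val = (1 << bit_depth) - 1
    out = prediction.astype(np.int32) + residual.astype(np.int32)
    dtype = np.uint8 if bit_depth <= 8 else np.uint16
    return np.clip(out, 0, max_val).astype(dtype)
```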
  • the pixels in the reconstructed current block are used as a reference pixel in intra prediction of a block to be decoded later.
  • the filter unit 460 may include a deblocking filter 462 and a SAO filter 464.
  • the deblocking filter 462 deblocks the boundary between the reconstructed blocks in order to remove blocking artifacts caused by block-by-block decoding.
  • the SAO filter 464 performs additional filtering on the reconstructed block after deblocking filtering to compensate for the difference between the reconstructed pixel and the original pixel caused by lossy coding.
  • The reconstructed blocks filtered through the deblocking filter 462 and the SAO filter 464 are stored in the memory 470. When all blocks in one picture are reconstructed, the reconstructed picture is used as a reference picture for inter prediction of blocks in a picture to be decoded later.
  • the present invention proposes new methods for determining a prediction mode of a block to be coded and / or decoded (current block) and performing prediction on the current block based on the prediction mode.
  • the prediction mode determined through the present invention can be largely divided into an inter mode, an intra mode, and a current picture referencing (cpr) mode.
  • the cpr mode may be referred to as an ibc (intra block copy) mode.
  • the inter mode may include skip mode, merge mode, and AMVP mode, and cpr mode, i.e., ibc mode, may include ibc_skip mode, ibc_merge mode, and ibc_BVP mode.
  • ibc_skip mode is ibc mode applied to skip mode
  • ibc_merge mode is ibc mode applied to merge mode
  • ibc_BVP mode is ibc mode applied to AMVP mode.
  • the ibc mode is one of intra prediction methods, and an example of the ibc mode is represented in FIG. 5. As illustrated in FIG. 5, in ibc mode, prediction information of a current block is obtained from another block (reference block) located in the same picture (current picture).
  • a block including a pattern corresponds to a block or region (Coded region) in which decoding is already completed, and a block not including a pattern is a block or region in which decoding is not completed (Not coded yet). Therefore, the reference block from which the prediction information of the current block is obtained corresponds to a block that has already been decoded.
  • the reference block is indicated by a motion vector (MV), and in ibc mode, this motion vector may be referred to as a block vector (BV).
  • prediction information of the current block is obtained from a reference block indicated by BV, while in intra mode, prediction information is obtained from pixels adjacent to the periphery of the current block. Further, in ibc mode, prediction information is obtained from a reference block located in the same picture, while in inter mode, prediction information is obtained from a reference block located in another picture.
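  • The following sketch illustrates the basic ibc operation described above: the prediction block is copied from an already-reconstructed (coded) region of the current picture at the position indicated by the block vector (BV). The current picture is assumed to be a 2-D NumPy array of reconstructed samples, and checking that the referenced area has actually been decoded is only hinted at by the assertion.

```python
def predict_ibc(current_picture, x, y, w, h, bv):
    """Copy the reference block that the block vector (BV) points to inside the
    current picture. (x, y) is the top-left position of the current block and
    bv = (bv_x, bv_y) must point into the already-reconstructed (coded) region."""
    bv_x, bv_y = bv
    ref_x, ref_y = x + bv_x, y + bv_y
    assert ref_x >= 0 and ref_y >= 0, "BV must point inside the reconstructed area"
    return current_picture[ref_y:ref_y + h, ref_x:ref_x + w].copy()
```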
  • the slice type may include I-slice (intra slice), P-slice (predictive slice) and B-slice (bi-predictive slice).
  • For an I-slice, only intra prediction is available. Therefore, when the current block is included in an I-slice, the process of parsing and decoding the information required for intra prediction (S692) is performed. In contrast, P-slices and B-slices allow both inter prediction and intra prediction; therefore, when the current block is not included in an I-slice, additional determination processes for the current block are performed.
  • a process of parsing and decoding a flag (skip_flag) indicating whether the current block is predicted as a skip mode (S620) and a process of determining skip_flag (S630) are performed.
  • When skip_flag is 1, the prediction mode of the current block corresponds to the skip mode, and the merge index (merge_idx) is parsed and decoded. When skip_flag is 0, the prediction mode of the current block may correspond to any one of the modes other than the skip mode (merge mode, AMVP mode, and intra mode).
  • In this case, a flag (pred_mode_flag) indicating whether the current block is predicted in the inter mode or the intra mode is parsed and decoded (S640), and pred_mode_flag is determined (S650).
  • When pred_mode_flag indicates the inter mode, the prediction mode of the current block may correspond to either the merge mode or the AMVP mode; a flag (merge_flag) indicating whether the current block is predicted in the merge mode is parsed and decoded (S660), and merge_flag is determined (S670).
  • When pred_mode_flag indicates the intra mode, the process of parsing and decoding the information required for intra prediction is performed (S692).
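  • For readability, the conventional flow of FIG. 6 can be summarized by the following sketch; the reader helpers (read_flag, read_index), the flag polarities, and the omission of the mode-specific syntax are illustrative assumptions.

```python
def conventional_mode_decision(slice_type, read_flag, read_index):
    """Simplified sketch of the conventional prediction-mode parsing of FIG. 6."""
    if slice_type == "I":                        # I-slice: only intra prediction (S692)
        return "intra"
    if read_flag("skip_flag"):                   # S620 / S630
        read_index("merge_idx")                  # skip mode carries only a merge index
        return "skip"
    if read_flag("pred_mode_flag"):              # S640 / S650 (assumed: 1 means intra)
        return "intra"                           # S692
    if read_flag("merge_flag"):                  # S660 / S670
        read_index("merge_idx")
        return "merge"
    return "AMVP"                                # ref_idx, mvd, mvp_idx follow
```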
  • In the conventional method, the ibc mode may be applied based on whether the reference block for the current block is located in the same picture as the current picture. For example, when the prediction mode of the current block is determined to be the skip mode or the merge mode and the reference picture of the merge candidate indicated by merge_idx is the same as the current picture, the current block may be determined to be in the ibc_skip mode or the ibc_merge mode and predicted accordingly.
  • Similarly, when the prediction mode of the current block is determined to be the AMVP mode and the reference picture index (ref_idx) signaled from the video encoding apparatus indicates the same picture as the current picture, the current block may be determined to be in the ibc_BVP mode and predicted accordingly.
  • Whether the ibc mode is on or off may be defined through separate flags (sps_curr_pic_ref_enabled_flag and pps_curr_pic_ref_enabled_flag), and Table 1 and Table 2 below show an example of defining whether the ibc mode is on or off through each of the above flags.
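  • Since Tables 1 and 2 are not reproduced here, the snippet below only sketches how the two on/off flags could be read at the SPS and PPS levels; the reader interface and the conditioning of the PPS-level flag on the SPS-level flag are assumptions made for illustration.

```python
def parse_ibc_enable_flags(read_flag):
    """Read the sequence- and picture-level on/off flags for the ibc mode."""
    sps = {"sps_curr_pic_ref_enabled_flag": read_flag("sps_curr_pic_ref_enabled_flag")}
    pps = {
        # Read the PPS-level flag only when the SPS-level flag allows it
        # (this conditioning is an assumption, not taken from Tables 1 and 2).
        "pps_curr_pic_ref_enabled_flag": read_flag("pps_curr_pic_ref_enabled_flag")
        if sps["sps_curr_pic_ref_enabled_flag"] else 0
    }
    return sps, pps
```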
  • Equation 1 shows an example in which the current picture is added to the reference picture list.
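  • Equation 1 itself is likewise not reproduced here; the sketch below only illustrates the general idea it describes, namely that in the conventional approach the current picture is appended to the reference picture list so that it can be addressed through a reference picture index. The list-construction details are assumptions.

```python
def build_reference_list(decoded_pictures, current_picture, curr_pic_ref_enabled):
    """Conventional approach: append the current picture to the reference list so
    that a reference picture index can select it."""
    ref_list = list(decoded_pictures)            # temporal reference pictures
    if curr_pic_ref_enabled:
        ref_list.append(current_picture)         # ref_idx == len(ref_list) - 1 -> current picture
    return ref_list
```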
  • the present invention proposes new syntax and semantics for distinguishing prediction modes of the current block.
  • the present invention proposes syntax and semantics for a current block (predicted) encoded in ibc mode in an image encoding apparatus.
  • the present invention proposes new BVP candidates included in the BVP candidate list for the current block predicted in ibc_BVP mode.
  • the video encoding apparatus determines the prediction mode of the current block based on whether the preset conditions are satisfied, and signals the video decoding apparatus by including syntax elements indicating whether the preset conditions are satisfied in the bitstream.
  • the video decoding apparatus determines whether the preset conditions are satisfied (S710), and determines the prediction mode of the current block based on the determination result (S720). In addition, the video decoding apparatus predicts the current block based on the determined prediction mode (S730).
  • the 'preset conditions' refer to a criterion for determining the prediction mode of the current block.
  • The preset conditions may include whether the type of the tile group (hereinafter 'tile group') containing the current block is intra (or whether the tile group type is inter), whether the ibc mode is active (on), that is, whether application of the ibc mode is allowed, and whether the prediction mode of the current block is the merge mode. The preset conditions may also include whether the current block is encoded in the intra mode (or the inter mode) and whether the prediction mode of the current block is the ibc mode.
  • Whether the type of the tile group is intra may be indicated through type information, which may be implemented as a predefined syntax element (e.g., tile_group_type).
  • the tile group may be referred to differently as a tile or a slice. Accordingly, 'whether the type of the tile group is intra' may be differently understood as 'whether the tile type is intra' or 'whether the slice type is intra'.
  • Whether application of the ibc mode is allowed may be indicated through an enable flag, which may be implemented as a predefined syntax element (e.g., ibc_enabled_flag).
  • ibc_enabled_flag may be defined in one or more locations among SPS, PPS, tile group header, tile header, and CTU header.
  • Whether the prediction mode of the current block is a merge mode may be indicated through a syntax element (merge_flag) indicating this.
  • Whether the current block is coded in inter mode (or intra mode) may be indicated through a syntax element (pred_mode_flag) indicating this.
  • Whether the prediction mode of the current block is ibc mode may be indicated through a syntax element (pred_mode_ibc_flag) indicating this.
  • the image decoding apparatus decodes the enable flag and type information from the bitstream (S810).
  • The image encoding apparatus includes the enable flag and the type information in the bitstream, and selectively transmits the ibc flag according to the enable flag, the type information, and/or the prediction mode of the current block.
  • the decoder 410 decodes the ibc flag depending on the decoded enable flag, type information, and / or the prediction mode of the current block (S820). For example, the image decoding apparatus may decode the ibc flag when the enable flag is on and the type information indicates intra.
  • Alternatively, the video decoding apparatus may decode the ibc flag when the enable flag is on, the type information indicates inter (the type information does not indicate intra), and the prediction mode is inter (the prediction mode is not intra).
  • the video encoding apparatus transmits the motion information of the current block by including it in the bitstream, and the video decoding apparatus decodes the motion information included in the bitstream (S830).
  • the reference picture index is not included in the motion information of the current block to be decoded.
  • the video decoding apparatus (that is, the prediction unit) predicts the current block using a block (reference block) indicated by the decoded motion information (S840).
  • the reference block corresponds to a block located in the same picture (current picture) as the current block.
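  • The decoding flow of FIG. 8 (S810 to S840) can be sketched as follows. The bitstream reader, the injected helpers derive_bv and predict_ibc, and the exact condition under which the ibc flag is decoded are illustrative assumptions (the text above describes two variants of that condition), and only the ibc path is shown.

```python
def decode_block_ibc(read_flag, read_data, derive_bv, predict_ibc,
                     current_picture, block_pos, block_size):
    """Sketch of S810-S840 for the ibc path only."""
    enable_flag = read_flag("ibc_enabled_flag")             # S810: enable flag
    type_is_inter = read_flag("tile_group_type_is_inter")   # S810: type information

    pred_mode_ibc_flag = 0
    if enable_flag and type_is_inter:                        # S820: one variant of the condition
        pred_mode_ibc_flag = read_flag("pred_mode_ibc_flag")

    if pred_mode_ibc_flag:
        bvp_idx = read_data("bvp_idx")                       # S830: motion information without
        bvd = read_data("bvd")                               #       a reference picture index
        bv = derive_bv(bvp_idx, bvd)                         # assumed helper: BV = BVP + BVD
        (x, y), (w, h) = block_pos, block_size
        return predict_ibc(current_picture, x, y, w, h, bv)  # S840: predict from the current picture
    return None                                              # other prediction modes not shown
```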
  • In this respect, the present invention differs from the conventional method, which determines that the current block is predicted in the ibc_BVP mode using the reference picture index signaled from the video encoding apparatus.
  • Concretely, the conventional method parses the BVP index (BVP_idx), the BVD, and the reference picture index, whereas the present invention parses only the BVP index and the BVD. That is, since the present invention can predict the current block in the ibc_BVP mode based only on the BVP index and the BVD, without signaling of the reference picture index, bit efficiency can be improved.
  • the embodiments described below may be performed in the same manner in both the image encoding apparatus and the image decoding apparatus, but for convenience of explanation and understanding, the image decoding apparatus will be mainly described.
  • In addition, the process in which the decoder 410 or the video decoding apparatus parses and decodes a specific syntax element should be understood to imply that the video encoding apparatus has encoded that syntax element and included it in the bitstream.
  • the first embodiment corresponds to an example in which a prediction mode of a current block is determined based on whether the predetermined conditions are satisfied, and a current block is predicted using the determined prediction mode.
  • In Example 1-1, all or some of the following preset conditions may be used: whether application of the ibc mode is allowed, whether the type of the tile group is intra, whether the prediction mode of the current block is the merge mode, whether the prediction mode of the current block is the ibc mode, and whether the current block is encoded by inter prediction.
  • the image decoding apparatus may determine whether the application of the ibc mode is allowed or the tile group type is intra using the decoded information (S910).
  • When, in step S910, the enable flag indicates that application of the ibc mode is permitted or the type information does not indicate intra (i.e., the tile group type is inter), the image decoding apparatus parses and decodes merge_flag (S920) and determines merge_flag (S930).
  • the prediction mode of the current block may correspond to any one of a skip mode, a merge mode, an ibc_skip mode, and an ibc_merge mode.
  • the video decoding apparatus may distinguish between the skip / merge mode and the ibc_skip / ibc_merge mode using merge_idx that is parsed and decoded through S940.
  • Specifically, when the reference picture of the merge candidate indicated by the decoded merge_idx is the same as the current picture, the prediction mode of the current block corresponds to the ibc_skip/ibc_merge mode; otherwise, it corresponds to the skip/merge mode. In addition, the distinction between the skip mode and the merge mode, and between the ibc_skip mode and the ibc_merge mode, may be determined according to whether the information (e.g., cbf) parsed and decoded in step S990 indicates 1 or 0.
  • In other words, the video decoding apparatus determines whether application of the ibc mode is permitted or the tile group type is intra using the enable flag and the type information (S910), and parses and decodes the information indicating whether the prediction mode of the current block is the merge mode (S920) and determines it (S930).
  • the prediction mode of the current block may correspond to any one of AMVP mode, ibc_BVP mode, and intra mode.
  • the video decoding apparatus may parse and decode mode information (pred_mode_flag) (S950) and determine pred_mode_flag (S960).
  • When pred_mode_flag indicates inter prediction in step S960 (i.e., does not indicate intra prediction), the video decoding apparatus may determine, using the enable flag and the type information, whether to parse a separate syntax element (pred_mode_ibc_flag) indicating that the current block is predicted in the ibc mode (S970). That is, when the prediction mode is not intra prediction in step S960, the image decoding apparatus may determine whether to parse pred_mode_ibc_flag using the enable flag and the type information (S970).
  • the video decoding apparatus may determine the type information again (S980).
  • When the type information indicates inter in step S980, the prediction mode of the current block corresponds to the AMVP mode; accordingly, the image decoding apparatus may parse and decode the motion information (ref_idx, mvd, mvp_idx) for predicting the current block in the AMVP mode (S982). That is, when the tile group type is inter and the ibc mode is not applied, the video decoding apparatus parses and decodes the motion information for predicting the current block in the AMVP mode. Meanwhile, when the video decoding apparatus parses and decodes pred_mode_ibc_flag (S984) and determines it (S986), and pred_mode_ibc_flag does not indicate the ibc mode in step S986, the prediction mode of the current block also corresponds to the AMVP mode; accordingly, the image decoding apparatus may parse and decode the motion information for predicting the current block in the AMVP mode (S982).
  • In addition, the image decoding apparatus may determine, using the enable flag and the type information, whether application of the ibc mode is permitted and whether the tile group type is intra (S970), and may then parse and decode pred_mode_ibc_flag (S984) and determine it (S986). When pred_mode_ibc_flag indicates the ibc mode in step S986, the prediction mode of the current block corresponds to the ibc_BVP mode.
  • the apparatus for decoding an image may parse and decode motion information (bvd, bvp_idx) for predicting the current block in ibc_BVP mode (S988).
  • In addition, the video decoding apparatus may determine the type information again (S980). When the type information does not indicate inter (i.e., indicates intra) in step S980, the prediction mode of the current block corresponds to the ibc_BVP mode. Accordingly, the image decoding apparatus may parse and decode the motion information (bvd, bvp_idx) for predicting the current block in the ibc_BVP mode (S988).
  • the image decoding apparatus may further perform the processes of parsing and decoding the mode information (pred_mode_flag) before S970 (S950), and determining (S960).
  • When the mode information (pred_mode_flag) indicates inter prediction in step S960, the video decoding apparatus may perform the above-described step S970 and the processes following step S970.
  • The image decoding apparatus may also be configured to perform the above-described steps S910, S920, and S930 before step S950; when merge_flag does not indicate the merge mode in step S930, step S950 may be performed.
  • Meanwhile, when the enable flag indicates that application of the ibc mode is not allowed and the type information indicates intra in step S910, the image decoding apparatus may parse and decode the information for predicting the current block in the intra mode (S992). Also, when the mode information does not indicate inter prediction in step S960, the prediction mode of the current block corresponds to the intra mode; accordingly, the image decoding apparatus may parse and decode the information for predicting the current block in the intra mode (S992). In the intra mode, cbf is not signaled and is derived as 1 (S994).
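  • One possible reading of the Example 1-1 flow of FIG. 9 is sketched below. The flag polarities, the reader helpers, and the inference applied when pred_mode_ibc_flag is not parsed are assumptions made for illustration and are not the normative parsing process.

```python
def example_1_1_mode_decision(read_flag, read_index, ibc_enabled, tile_group_is_intra):
    """One possible reading of the Example 1-1 decision flow (FIG. 9, S910-S994)."""
    if ibc_enabled or not tile_group_is_intra:            # S910
        if read_flag("merge_flag"):                        # S920 / S930
            read_index("merge_idx")                        # S940
            # skip/merge vs. ibc_skip/ibc_merge is decided by whether the merge
            # candidate's reference picture equals the current picture; skip vs.
            # merge is distinguished via cbf (S990).
            return "skip/merge or ibc_skip/ibc_merge"
        if read_flag("pred_mode_flag_is_inter"):           # S950 / S960
            if ibc_enabled and not tile_group_is_intra:    # S970: is pred_mode_ibc_flag present?
                if read_flag("pred_mode_ibc_flag"):        # S984 / S986
                    return "ibc_BVP"                       # bvd, bvp_idx follow (S988)
                return "AMVP"                              # ref_idx, mvd, mvp_idx follow (S982)
            # S980: the flag is not parsed; the tile group type decides
            return "AMVP" if not tile_group_is_intra else "ibc_BVP"
        return "intra"                                     # S992, cbf derived as 1 (S994)
    return "intra"                                         # S910 not satisfied (S992)
```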
  • The number of bits consumed or allocated to determine the prediction mode of the current block based on Example 1-1 is represented in FIG. 10.
  • CU type corresponds to an item indicating whether the current block is predicted in inter mode or intra mode, and CU mode indicates whether the current block is in skip mode, merge mode, AMVP mode, ibc_skip mode, ibc_merge mode or ibc_BVP mode. Corresponds to the item indicating whether the mode is predicted.
  • Total bits correspond to an item indicating the number of bits consumed or allocated to determine the prediction mode of the current block for each prediction mode
  • In FIG. 10, the cases are distinguished according to whether the tile group type is I-type (the CU type is intra) and whether ibc_enabled_flag is 1 or 0.
  • Example 1-2 corresponds to another example of determining a prediction mode of a current block using new syntax and semantics, and predicting a current block based on the determined prediction mode. As shown in FIG. 11, in Embodiment 1-2, some or all of the preset conditions may be applied to determine a prediction mode of the current block.
  • the video decoding apparatus may determine whether the application of the ibc mode is permitted or the tile group type is intra using the enable flag and type information (S1110).
  • the image decoding apparatus parses and decodes the merge_flag (S1120) and may determine this (S1130).
  • the video decoding apparatus may distinguish between skip / merge mode and ibc_skip / ibc_merge mode using merge_idx parsed and decoded through step S1140. Specifically, when the reference picture of the merge candidate indicated by the decoded merge_idx is the same as the current picture, the prediction mode of the current block corresponds to the ibc_skip / ibc_merge mode. When the reference picture of the merge candidate indicated by the decoded merge_idx is not the same as the current picture, the prediction mode of the current block corresponds to skip / merge mode.
  • the distinction between the skip mode and the merge mode and the distinction between the ibc_skip mode and the ibc_merge mode may be determined according to which value of 1 or 0 is indicated by cbf parsed and decoded through the process S1190.
  • the image decoding apparatus may parse and decode mode information (pred_mode_flag) (S1150) and determine pred_mode_flag (S1160).
  • the video decoding apparatus may determine the type information (S1170).
  • When the type information indicates inter in step S1170, the prediction mode of the current block corresponds to the AMVP mode. Accordingly, the image decoding apparatus may parse and decode the motion information (ref_idx, mvd, mvp_idx) for predicting the current block in the AMVP mode (S1180).
  • the image decoding apparatus can determine whether the ibc mode is applied using the enable flag and type information and whether the tile group type is intra (S1182).
  • In step S1182, when the enable flag indicates that application of the ibc mode is allowed and the type information indicates inter, the video decoding apparatus may parse and decode pred_mode_ibc_flag (S1184) and determine it (S1186). When pred_mode_ibc_flag indicates the ibc mode in step S1186, the prediction mode of the current block corresponds to the ibc_BVP mode.
  • the image decoding apparatus may parse and decode motion information (bvd, bvp_idx) for predicting the current block in ibc_BVP mode (S1188).
  • the image decoding apparatus may further perform the processes of parsing and decoding the mode information (pred_mode_flag) before the process S1182 (S1150) and determining it (S1160).
  • Step S1182 may be performed when pred_mode_flag does not indicate inter prediction in step S1160, and step S1170 may be performed when pred_mode_flag indicates inter prediction in step S1160.
  • The image decoding apparatus may also be configured to perform the above-described steps S1110, S1120, and S1130 before step S1150; when merge_flag does not indicate the merge mode in step S1130, step S1150 may be performed.
  • In the remaining cases, the image decoding apparatus may parse and decode the information for predicting the current block in the intra mode (S1192). For example, even when the enable flag indicates that application of the ibc mode is not permitted or the type information indicates intra in step S1182, the prediction mode of the current block corresponds to the intra mode; accordingly, the image decoding apparatus parses and decodes the information for predicting the current block in the intra mode (S1192). In the intra mode, cbf is not signaled and is derived as 1 (S1194).
  • The number of bits consumed or allocated to determine the prediction mode of the current block based on Example 1-2 is represented in FIG. 12.
  • In FIG. 12, the cases are distinguished according to whether the tile group type is I-type (the CU type is intra) and whether ibc_enabled_flag is 1 or 0.
  • Examples 1-3 correspond to another example of determining a prediction mode of a current block using new syntax and semantics, and predicting a current block based on the determined prediction mode. As illustrated in FIG. 13, in Examples 1-3, some or all of the preset conditions may be performed to determine the prediction mode of the current block.
  • the image decoding apparatus may determine whether the application of the ibc mode is allowed or the type of the tile group is intra using the enable flag and type information (S1310). When the application of the ibc mode is permitted or the tile group type is inter, the image decoding apparatus parses and decodes the merge_flag (S1320) and can determine this (S1330).
  • the prediction mode of the current block may correspond to any one of skip mode, merge mode, ibc_skip mode, and ibc_merge mode.
  • the video decoding apparatus may distinguish between skip / merge mode and ibc_skip / ibc_merge mode by using merge_idx parsed and decoded through S1340.
  • the distinction between the skip mode and the merge mode and the distinction between the ibc_skip mode and the ibc_merge mode may be determined according to which of 1 and 0 the cbf parsed and decoded through S1390 indicates.
  • Also, the image decoding apparatus may determine the enable flag and the type information (S1350). When the enable flag indicates that application of the ibc mode is allowed and the type information indicates inter in step S1350, the video decoding apparatus may parse and decode pred_mode_ibc_flag (S1384) and determine it (S1386). When pred_mode_ibc_flag does not indicate the ibc mode in step S1386, or when the enable flag indicates that application of the ibc mode is not permitted or the type information indicates intra in step S1350, the video decoding apparatus parses and decodes the mode information (pred_mode_flag) (S1360) and determines it (S1370).
  • When pred_mode_flag indicates inter prediction in step S1370, the video decoding apparatus may determine the type information (S1380). When the type information indicates inter in step S1380, the prediction mode of the current block corresponds to the AMVP mode; accordingly, the image decoding apparatus may parse and decode the information (ref_idx, mvd, mvp_idx) for predicting the current block in the AMVP mode (S1382).
  • the image decoding apparatus may determine whether the ibc mode is permitted using the enable flag and type information and whether the tile group type is intra (S1350).
  • the video decoding apparatus may parse and decode pred_mode_ibc_flag (S1384) and determine pred_mode_ibc_flag (S1386).
  • When pred_mode_ibc_flag indicates the ibc mode in step S1386, the prediction mode of the current block corresponds to the ibc_BVP mode.
  • the apparatus for decoding an image may parse and decode motion information (bvd, bvp_idx) for predicting the current block in ibc_BVP mode (S1388).
  • Otherwise, the video decoding apparatus parses and decodes the mode information (pred_mode_flag) (S1360) and determines it (S1370). When pred_mode_flag indicates inter prediction in step S1370, the video decoding apparatus may determine the type information (S1380); when the type information does not indicate inter (i.e., indicates intra) in step S1380, the prediction mode of the current block corresponds to the ibc_BVP mode, and accordingly the image decoding apparatus may parse and decode the motion information (bvd, bvp_idx) for predicting the current block in the ibc_BVP mode (S1388).
  • The image decoding apparatus may also be configured to perform the above-described steps S1310, S1320, and S1330 before step S1350; when merge_flag does not indicate the merge mode in step S1330, step S1350 may be performed.
  • In step S1310, when the enable flag indicates that application of the ibc mode is not allowed and the type information indicates intra, the prediction mode of the current block corresponds to the intra mode. Therefore, the image decoding apparatus parses and decodes the information for predicting the current block in the intra mode (S1392).
  • Also, when pred_mode_flag does not indicate inter prediction in step S1370, the prediction mode of the current block corresponds to the intra mode. Therefore, the image decoding apparatus parses and decodes the information for predicting the current block in the intra mode (S1392). In the intra mode, cbf is not signaled and is derived as 1 (S1394).
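  • Under the same illustrative assumptions as for Example 1-1, the Example 1-3 ordering, in which pred_mode_ibc_flag is checked before pred_mode_flag, can be sketched as follows.

```python
def example_1_3_mode_decision(read_flag, read_index, ibc_enabled, tile_group_is_intra):
    """One possible reading of the Example 1-3 decision flow (FIG. 13):
    pred_mode_ibc_flag is tested before pred_mode_flag."""
    if not ibc_enabled and tile_group_is_intra:            # S1310 not satisfied
        return "intra"                                      # S1392
    if read_flag("merge_flag"):                             # S1320 / S1330
        read_index("merge_idx")                             # S1340
        return "skip/merge or ibc_skip/ibc_merge"
    if ibc_enabled and not tile_group_is_intra:             # S1350
        if read_flag("pred_mode_ibc_flag"):                 # S1384 / S1386
            return "ibc_BVP"                                # bvd, bvp_idx follow (S1388)
    if read_flag("pred_mode_flag_is_inter"):                # S1360 / S1370
        # S1380: the tile group type decides between AMVP and ibc_BVP
        return "AMVP" if not tile_group_is_intra else "ibc_BVP"
    return "intra"                                          # S1392, cbf derived as 1 (S1394)
```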
  • The number of bits consumed or allocated to determine the prediction mode of the current block based on Example 1-3 is illustrated in FIG. 14.
  • For example, for the combination merge_flag = 0, pred_mode_ibc_flag = 0, and pred_mode_flag = 1, a total of 5 bits are allocated to determine the AMVP mode.
  • In FIG. 14, the cases are distinguished according to whether the tile group type is I-type (the CU type is intra) and whether ibc_enabled_flag is 1 or 0.
  • the present invention may be configured to indicate that the prediction mode of the current block corresponds to the ibc mode (any of ibc_skip mode, ibc_merge mode, and ibc_BVP mode) by explicitly signaling pred_mode_ibc_flag.
  • The image decoding apparatus configures a BVP candidate list consisting of one or more block vector predictor (BVP) candidates (S1510).
  • the video decoding apparatus selects a BVP candidate corresponding to the BVP index (included in the motion information) signaled from the video encoding apparatus from the BVP candidate list (S1520).
  • The image decoding apparatus derives a block vector (BV) for the current block by summing the selected BVP candidate (selected BVP) and the BVD (included in the motion information) signaled from the image encoding apparatus (S1530), and may predict the current block by obtaining prediction information from the reference block in the current picture indicated by the derived BV (S1540).
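  • The BVP/BVD arithmetic of steps S1510 to S1540 can be sketched as follows; the candidate list is assumed to have been constructed already (see Example 2-1 below), vectors are plain (x, y) tuples, and the current picture is assumed to be a 2-D array of reconstructed samples.

```python
def derive_and_predict(bvp_candidates, bvp_idx, bvd, current_picture, x, y, w, h):
    """S1520-S1540: select the BVP indicated by bvp_idx, add the BVD to obtain the
    block vector (BV), and copy the reference block from the current picture."""
    bvp = bvp_candidates[bvp_idx]                 # S1520: selected BVP
    bv = (bvp[0] + bvd[0], bvp[1] + bvd[1])       # S1530: BV = BVP + BVD
    ref_x, ref_y = x + bv[0], y + bv[1]           # S1540: position of the reference block
    return current_picture[ref_y:ref_y + h, ref_x:ref_x + w].copy()
```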
  • The BVP candidate list constructed through Example 2-1 may include, as BVP candidates: 1) the BV of a block predicted in the ibc mode among one or more blocks spatially adjacent to the current block (spatial neighboring blocks); 2) the BV of a block predicted in the ibc mode among one or more blocks temporally adjacent to the current block (temporal neighboring blocks); and 3) a preset BV. That is, the BVP candidates included in the BVP candidate list may include BVs of spatial neighboring blocks predicted in the ibc mode, BVs of temporal neighboring blocks predicted in the ibc mode, and preset BVs.
  • The spatial neighboring blocks may include one or more of the blocks A0, A1, and A2 located on the left side of the current block and/or the blocks B0, B1, and B2 located above the current block, and may further include a block AB located at the upper left of the current block.
  • the block AB located in the upper left of the current block may be treated as a block located to the left of the current block, or may be treated as a block located above the current block.
  • The blocks located on the left side of the current block may include the block A1 and/or the block A0 located at the bottom with respect to the height direction (H) of the current block, and may further include the block A2 located at the center.
  • The blocks located above the current block may include the block B1 and/or the block B0 located at the right end with respect to the width direction (W) of the current block, and may further include the block B2 located at the center.
  • A1 may be a block that includes the pixel located at the bottom left of the current block as its rightmost pixel, and A2 may be a block that includes the pixel located at the leftmost center of the current block as its rightmost pixel.
  • B1 may be a block that includes the rightmost pixel of the current block as its rightmost pixel, and B2 may be a block that includes the pixel located at the uppermost center of the current block as its rightmost pixel.
  • AB may be a block that includes the leftmost pixel of the current block as its rightmost pixel, and A0 may be a block that includes the leftmost pixel of the current block as its rightmost pixel.
  • B0 may be a block that includes the rightmost pixel of the current block as its rightmost pixel.
  • The video decoding apparatus may derive one or more BVP candidates by searching, in a preset order, one or more of the blocks located on the left side of the current block and/or one or more of the blocks located above the current block and/or the block located at the upper left of the current block, and may construct the BVP candidate list by including the derived BVP candidates.
  • For example, the image decoding apparatus may derive BVP candidates by searching, in a predetermined order, the block A1 located on the left side of the current block and the block B1 located above it, and may construct the BVP candidate list by including the derived BVP candidates.
  • Alternatively, the image decoding apparatus may derive one or more BVP candidates by searching the blocks A0, A1, and/or A2 and/or AB located on the left side of the current block in a preset order. Furthermore, the image decoding apparatus may derive one or more BVP candidates by searching the blocks B0, B1, and/or B2 and/or AB located above the current block in a preset order.
  • The temporal neighboring blocks may refer to one or more blocks adjacent to a collocated block (col_block) located in a collocated picture (col_picture).
  • the col_picture may be specified in advance, such as a picture located at the first position (index of 0) of the reference picture list (L0 or L1).
  • The col_block is located in the col_picture and may be specified in advance, for example as the block located at the same position in the col_picture as the current block occupies in the current picture.
  • The temporal neighboring blocks may include a block BR located at the lower right of the col_block, a block CT located at the center of the col_block, a block TR located at the upper right of the col_block, and a block BL located at the lower left of the col_block.
  • BR may be a block that includes the pixel located at the bottom right of the current block as its leftmost pixel, CT may be a block that includes the pixel located at the center of the current block as its leftmost pixel, TR may be a block that includes the pixel located at the top right of the current block as its leftmost pixel, and BL may be a block that includes the pixel located at the bottom left of the current block as its leftmost pixel.
  • The video decoding apparatus may derive one or more BVP candidates by searching the temporal neighboring blocks in a preset order, and may construct the BVP candidate list by including the derived BVP candidates.
  • the preset BV may correspond to a BV indicating a position moved to the upper left by the height and width of the current block.
  • the preset BV may be (-W, -H).
  • the preset BV may be (-W * k, -H * k), in which the magnitude of the BV is scaled by an arbitrary constant k.
  • the preset BV may be referred to as a default BV in the sense that it is preset rather than obtained through a search process.
  • The BVs described through Example 2-1 may indicate a relative position based on the CTU including the current block, or a relative position based on the current block. As shown in FIG. 18, the BVs described through Example 2-1 may be vectors indicating a relative position with respect to the upper leftmost pixel B of the current block, with this pixel B set as the zero vector. Alternatively, the BVs described through Example 2-1 may be vectors indicating a relative position with respect to the upper leftmost pixel A of the CTU including the current block, with the pixel A set as the zero vector (a sketch of this relationship appears after this list).
  • the video decoding apparatus may (optionally) express BVs based on either the CTU including the current block or the current block.
  • By expressing BVs based on the current block or by expressing BVs based on the CTU including the current block (optionally applying a criterion for how BVs are expressed), it is possible to efficiently set the search range used for searching BVP candidates. Accordingly, the present invention can effectively reduce the amount of memory consumed for the BVP candidate search, as well as the number of bits consumed to represent a BV.
  • In Example 2-2, a history-based BV that can replace the BVs of temporal neighboring blocks is proposed.
  • The BVP candidate list constructed through Example 2-2 may include BVs of blocks whose prediction in ibc mode has been completed. That is, the BVP candidate list of Example 2-2 may include BVs of blocks predicted in ibc mode among the blocks already decoded (predicted) before the current block is decoded.
  • These BVs (BVs of blocks whose prediction is completed in ibc mode) may be stored in a history-based block vector predictor (HBV) buffer, and the HBV may have a FIFO (first in, first out) structure capable of storing one or more BVs.
  • the BV stored first in the HBV may correspond to a zero vector.
  • The BVs of blocks whose prediction in ibc mode is completed are sequentially stored (FIFO) in the HBV according to the prediction order (decoding order). When a specific block (the current block) is to be predicted in ibc mode, one or more BVs may be selected by searching sequentially from the last BV stored in the HBV (that is, in the reverse of the stored order, or the reverse of the decoding order), and the selected BVs may then be included in the BVP candidate list (an HBV sketch appears after this list).
  • The BVP candidate list construction process implemented through Example 2-2 may be implemented in various orders, such as 'spatial BV construction process → construction of BVs selected from the HBV', 'spatial BV construction process → construction of BVs selected from the HBV → preset BV construction process', and 'spatial BV construction process → temporal BV construction process → construction of BVs selected from the HBV'.
  • The video decoding apparatus may select an appropriate number of BVs from the HBV according to the type of BV being replaced (temporal BV and/or preset BV).
  • The 'construction of BVs selected from the HBV' may be performed when, after the previously performed BV construction process(es), the number of BVP candidates included in the BVP candidate list is still less than the maximum number that can be included in the list.
  • In Example 2-3, a zero BV that can replace the preset BV is proposed.
  • The BVP candidate list construction process implemented through Example 2-3 may be implemented in various orders, such as 'spatial BV construction process → temporal BV construction process → zero BV construction process' and 'spatial BV construction process → construction of BVs selected from the HBV → zero BV construction process'.
  • The 'zero BV construction process' may be performed when, after the previously performed BV construction process(es), the number of BVP candidates included in the BVP candidate list is still less than the maximum number that can be included in the list (a zero-BV sketch appears below).
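
The flag-based derivation of the prediction mode summarized above (steps S1310, S1370, S1392 and S1394) can be illustrated with a minimal Python sketch. The bitstream reader `bs`, its `read_flag` helper, and the exact condition under which pred_mode_ibc_flag is parsed are assumptions made for illustration only; they are not the normative syntax of the present invention.

```python
# Illustrative sketch only: the gating of pred_mode_ibc_flag and the helper
# bs.read_flag(...) are assumptions, not normative syntax.

def derive_prediction_mode(bs, ibc_enabled_flag, type_is_inter):
    """Return 'IBC', 'INTER' or 'INTRA' for the current block."""
    if not ibc_enabled_flag and not type_is_inter:
        # Step S1310: ibc not allowed and the type information indicates intra,
        # so the prediction mode corresponds to the intra mode.
        return 'INTRA'

    if ibc_enabled_flag:
        # pred_mode_ibc_flag explicitly signals whether the current block uses
        # the ibc mode (ibc_skip, ibc_merge or ibc_BVP).
        if bs.read_flag('pred_mode_ibc_flag'):
            return 'IBC'

    # Step S1370: pred_mode_flag distinguishes inter from intra prediction.
    if bs.read_flag('pred_mode_flag'):
        return 'INTER'

    # pred_mode_flag does not indicate inter prediction: intra mode.
    # (In intra mode, cbf is not signaled and is derived as 1, step S1394.)
    return 'INTRA'
```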
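
The block vector derivation of the ibc_BVP mode (steps S1510 to S1540) can be sketched as follows. This is a minimal illustration: the parameter names (bvp_candidates, bvp_index, bvd, current_picture) are not syntax elements of the invention, and a real decoder would additionally restrict the reference block to already-reconstructed samples of the current picture.

```python
# Sketch of steps S1510-S1540 for the ibc_BVP mode; names are illustrative.

def predict_block_ibc_bvp(current_picture, x0, y0, width, height,
                          bvp_candidates, bvp_index, bvd):
    # S1520: select the BVP candidate indicated by the signaled BVP index.
    bvp_x, bvp_y = bvp_candidates[bvp_index]

    # S1530: derive the block vector by summing the selected BVP and the BVD.
    bv_x, bv_y = bvp_x + bvd[0], bvp_y + bvd[1]

    # S1540: obtain the prediction from the reference block in the current
    # picture indicated by the derived BV.
    ref_x, ref_y = x0 + bv_x, y0 + bv_y
    prediction = [[current_picture[ref_y + j][ref_x + i] for i in range(width)]
                  for j in range(height)]
    return prediction, (bv_x, bv_y)
```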
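
One possible realization of the Example 2-1 candidate construction (spatial BVs, then temporal BVs, then the preset/default BV) is sketched below. The scan orders, the maximum list size and the per-block data model (a dict with 'is_ibc' and 'bv' entries) are assumptions for illustration; the embodiments above deliberately leave the exact order and count open.

```python
# Sketch of the Example 2-1 list construction under assumed data structures.

def build_bvp_candidate_list(spatial_neighbours, temporal_neighbours,
                             width, height, k=1, max_candidates=2):
    candidates = []

    def try_add(block):
        # Only blocks predicted in ibc mode contribute their BV as a candidate.
        if block and block.get('is_ibc') and block['bv'] not in candidates:
            candidates.append(block['bv'])

    # Spatial candidates, e.g. A1 and B1 (or A0/A1/A2/AB and B0/B1/B2/AB),
    # visited in a preset order.
    for block in spatial_neighbours:
        if len(candidates) >= max_candidates:
            return candidates
        try_add(block)

    # Temporal candidates, e.g. BR, CT, TR, BL around the collocated block.
    for block in temporal_neighbours:
        if len(candidates) >= max_candidates:
            return candidates
        try_add(block)

    # Preset (default) BV pointing to the upper left by the block size,
    # optionally scaled by an arbitrary constant k: (-W*k, -H*k).
    if len(candidates) < max_candidates:
        candidates.append((-width * k, -height * k))
    return candidates
```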
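
The two reference points for expressing a BV discussed with FIG. 18 (the upper leftmost pixel B of the current block, or the upper leftmost pixel A of the containing CTU) differ only by the position of the block inside the CTU. The conversion below is a simple illustration of that relationship and is not taken from the specification.

```python
# Illustration of the two BV reference points of FIG. 18: a BV expressed
# relative to the block's top-left pixel B can be re-expressed relative to the
# CTU's top-left pixel A by adding the block's offset inside the CTU.

def block_relative_to_ctu_relative(bv, block_pos, ctu_pos):
    (bv_x, bv_y), (bx, by), (cx, cy) = bv, block_pos, ctu_pos
    return (bx - cx) + bv_x, (by - cy) + bv_y
```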
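
The history-based buffer (HBV) of Example 2-2 can be sketched as a small FIFO that starts with a zero vector, records the BV of every block whose prediction in ibc mode is completed, and is searched from the most recently stored BV backwards. The buffer size and the use of collections.deque are assumptions for illustration.

```python
from collections import deque

class HistoryBasedBVBuffer:
    """Minimal sketch of the HBV of Example 2-2; sizes are assumptions."""

    def __init__(self, max_size=6):
        self.buffer = deque(maxlen=max_size)   # oldest entries fall out first (FIFO)
        self.buffer.append((0, 0))             # the first stored BV may be a zero vector

    def push(self, bv):
        """Store the BV of a block whose ibc prediction has just completed."""
        self.buffer.append(bv)

    def select(self, num_needed, exclude=()):
        """Search from the last stored BV backwards and return up to num_needed BVs."""
        selected = []
        for bv in reversed(self.buffer):       # reverse of the stored/decoding order
            if bv not in exclude and bv not in selected:
                selected.append(bv)
            if len(selected) == num_needed:
                break
        return selected
```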
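
Finally, the zero-BV step of Example 2-3 simply pads the candidate list when the earlier construction steps did not fill it. A minimal sketch follows; the maximum list size is an assumption.

```python
def fill_with_zero_bv(candidates, max_candidates=2):
    # Append zero BVs only while the list holds fewer candidates than allowed.
    while len(candidates) < max_candidates:
        candidates.append((0, 0))
    return candidates
```

Combined with the earlier sketches, fill_with_zero_bv(build_bvp_candidate_list(...)) would, for example, realize the 'spatial BV construction process → temporal BV construction process → zero BV construction process' order mentioned above.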

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A prediction method using a current picture referencing mode and a video decoding device therefor are disclosed. One embodiment of the present invention relates to a method for predicting a current block to be decoded by means of current picture referencing (intra block copy (ibc)), the method comprising: decoding, from a bitstream, an enable flag indicating whether application of the ibc mode is allowed and type information indicating whether the slice type is an inter type; decoding, from the bitstream, an ibc flag indicating whether the prediction mode of the current block is the ibc mode, according to the enable flag and the type information; when the ibc flag indicates the ibc mode, decoding, from the bitstream, motion information that does not include an index of the pictures referenced by the current block; and predicting the current block using a block indicated by the motion information within the current picture in which the current block is located. Representative drawing: FIG. 4
PCT/KR2019/013129 2018-10-08 2019-10-07 Procédé de prédiction utilisant un mode de référencement d'image en cours et dispositif de décodage d'image associé WO2020076034A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201980081334.7A CN113170196A (zh) 2018-10-08 2019-10-07 使用当前画面参考模式的预测方法及其图像解码装置
US17/225,397 US11405639B2 (en) 2018-10-08 2021-04-08 Prediction method using current picture referencing mode, and video decoding device therefor
US17/847,783 US11838545B2 (en) 2018-10-08 2022-06-23 Prediction method using current picture referencing mode, and video decoding device therefor
US17/847,727 US11838543B2 (en) 2018-10-08 2022-06-23 Prediction method using current picture referencing mode, and video decoding device therefor
US17/847,706 US11838542B2 (en) 2018-10-08 2022-06-23 Prediction method using current picture referencing mode, and video decoding device therefor
US17/847,748 US11838544B2 (en) 2018-10-08 2022-06-23 Prediction method using current picture referencing mode, and video decoding device therefor

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20180119881 2018-10-08
KR10-2018-0119881 2018-10-08
KR1020190067724A KR20200040179A (ko) 2018-10-08 2019-06-10 현재 픽처 참조 모드를 이용한 예측 방법 및 영상 복호화 장치
KR10-2019-0067724 2019-06-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/225,397 Continuation US11405639B2 (en) 2018-10-08 2021-04-08 Prediction method using current picture referencing mode, and video decoding device therefor

Publications (1)

Publication Number Publication Date
WO2020076034A1 true WO2020076034A1 (fr) 2020-04-16

Family

ID=70163985

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/013129 WO2020076034A1 (fr) 2018-10-08 2019-10-07 Procédé de prédiction utilisant un mode de référencement d'image en cours et dispositif de décodage d'image associé

Country Status (1)

Country Link
WO (1) WO2020076034A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015194913A1 (fr) * 2014-06-20 2015-12-23 성균관대학교 산학협력단 Procédé pour encoder et décoder une image et dispositif l'utilisant
KR20180013918A (ko) * 2015-05-29 2018-02-07 퀄컴 인코포레이티드 슬라이스 레벨 인트라 블록 카피 및 기타 비디오 코딩 개선
KR20180063094A (ko) * 2015-10-02 2018-06-11 퀄컴 인코포레이티드 인트라 블록 카피 병합 모드 및 이용가능하지 않는 ibc 참조 영역의 패딩

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MA, JONGHYUN ET AL.: "Intra Block Copy Analysis to Improve Coding Efficiency for HEVC Screen Content Coding", JOURNAL OF BROADCAST ENGINEERING, vol. 20, no. 1, 31 January 2015 (2015-01-31), pages 57 - 67, XP055704054, Retrieved from the Internet <URL:http://dx.doi.org/10.5909/JBE.2015.20.1.57> [retrieved on 20191230] *
XU, XIAOZHONG ET AL.: "Intra block copy improvement on top of Tencent's CfP response", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11. JVET-J0050_ - R2, 20 April 2018 (2018-04-20), San Diego, US, pages 1 - 3, XP030151230, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jvet> [retrieved on 20191227] *

Similar Documents

Publication Publication Date Title
WO2018212578A1 (fr) Procédé et dispositif de traitement de signal vidéo
WO2018066959A1 (fr) Procédé et appareil de traitement de signal vidéo
WO2018056763A1 (fr) Procédé et appareil permettant la réalisation d&#39;une prédiction à l&#39;aide d&#39;une pondération fondée sur un modèle
WO2018062950A1 (fr) Procédé de traitement d&#39;image et appareil associé
WO2020204419A1 (fr) Codage vidéo ou d&#39;image basé sur un filtre à boucle adaptatif
WO2020180143A1 (fr) Codage vidéo ou d&#39;image basé sur un mappage de luminance avec mise à l&#39;échelle de chrominance
WO2020180122A1 (fr) Codage de vidéo ou d&#39;images sur la base d&#39;un modèle à alf analysé conditionnellement et d&#39;un modèle de remodelage
WO2021125700A1 (fr) Appareil et procédé de codage d&#39;image/vidéo basé sur une table pondérée par prédiction
WO2020171673A1 (fr) Procédé et appareil de traitement de signal vidéo pour prédiction intra
WO2021141226A1 (fr) Procédé de décodage d&#39;image basé sur bdpcm pour composante de luminance et composante de chrominance, et dispositif pour celui-ci
WO2021040398A1 (fr) Codage d&#39;image ou de vidéo s&#39;appuyant sur un codage d&#39;échappement de palette
WO2021125702A1 (fr) Procédé et dispositif de codage d&#39;image/vidéo basés sur une prédiction pondérée
WO2021091256A1 (fr) Procédé et dispositif de codade d&#39;image/vidéo
WO2021091252A1 (fr) Procédé et dispositif de traitement d&#39;informations d&#39;image pour codage d&#39;image/vidéo
WO2020197207A1 (fr) Codage d&#39;image ou vidéo sur la base d&#39;un filtrage comprenant un mappage
WO2020138958A1 (fr) Procédé de prédiction bidirectionnelle et dispositif de décodage d&#39;image
WO2019066175A1 (fr) Procédé et dispositif de décodage d&#39;image conformes à une structure divisée de blocs dans un système de codage d&#39;image
WO2014098374A1 (fr) Procédé de décodage vidéo échelonnable utilisant un mpm, et appareil utilisant un tel procédé
WO2021091255A1 (fr) Procédé et dispositif de signalisation de syntaxe de haut niveau pour codage image/vidéo
WO2021125699A1 (fr) Procédé de codage et de décodage d&#39;image/de vidéo et appareil l&#39;utilisant
WO2021125701A1 (fr) Appareil et procédé de codage d&#39;image/vidéo basés sur la prédiction inter
WO2017078450A1 (fr) Procédé et appareil de décodage d&#39;images dans un système de codage d&#39;images
WO2020190085A1 (fr) Codage de vidéo ou d&#39;image reposant sur un filtrage en boucle
WO2020204412A1 (fr) Codage vidéo ou d&#39;image accompagné d&#39;une procédure de filtrage de boucle adaptative
WO2020076034A1 (fr) Procédé de prédiction utilisant un mode de référencement d&#39;image en cours et dispositif de décodage d&#39;image associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870129

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19870129

Country of ref document: EP

Kind code of ref document: A1