WO2020256506A1 - Video encoding/decoding method and apparatus using multiple reference line intra prediction, and method for transmitting a bitstream - Google Patents

Info

Publication number
WO2020256506A1
Authority
WO
WIPO (PCT)
Prior art keywords
current block
reference sample
intra prediction
prediction mode
unit
Prior art date
Application number
PCT/KR2020/008032
Other languages
English (en)
Korean (ko)
Inventor
허진
장형문
이령
최장원
Original Assignee
엘지전자 주식회사
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사
Publication of WO2020256506A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: the unit being an image region, e.g. an object
    • H04N 19/176: the region being a block, e.g. a macroblock
    • H04N 19/50: using predictive coding
    • H04N 19/593: involving spatial prediction techniques

Definitions

  • The present disclosure relates to an image encoding/decoding method and apparatus, and more particularly, to a method and apparatus for encoding/decoding an image using multiple reference line intra prediction, and to a method of transmitting a bitstream generated by the disclosed image encoding method/apparatus.
  • A high-efficiency image compression technique is required to effectively transmit, store, and reproduce information of high-resolution and high-quality images.
  • An object of the present disclosure is to provide an image encoding/decoding method and apparatus with improved encoding/decoding efficiency.
  • An object of the present disclosure is to provide a method and apparatus for encoding/decoding an image using multiple reference line intra prediction.
  • An object of the present disclosure is to provide a method of transmitting a bitstream generated by an image encoding method or apparatus according to the present disclosure.
  • An object of the present disclosure is to provide a recording medium storing a bitstream generated by an image encoding method or apparatus according to the present disclosure.
  • An object of the present disclosure is to provide a recording medium storing a bitstream that is received and decoded by an image decoding apparatus according to the present disclosure and used for image restoration.
  • multiple reference line intra prediction may be performed even when a current block is adjacent to a boundary of a coding tree unit (CTU), and thus encoding efficiency may be increased.
  • An image decoding method according to an aspect of the present disclosure may include: determining whether a prediction mode of a current block is an intra prediction mode; when the prediction mode of the current block is the intra prediction mode, determining whether the current block is adjacent to a boundary of a preset area; deriving multiple reference sample lines for the current block based on whether the current block is adjacent to the boundary of the preset area; and generating a prediction block for the current block using the derived multiple reference sample lines.
  • a boundary of the preset region may be an upper boundary of a coding tree unit (CTU) including the current block.
  • When the current block is adjacent to the boundary of the preset area, a sample value of a first upper reference sample line not adjacent to the current block may be derived using a sample value of a second upper reference sample line adjacent to the current block.
  • The sample value of a first reference sample included in the first upper reference sample line may be derived using the sample value of a second reference sample that is included in the second upper reference sample line and that has the same x-coordinate as the first reference sample.
  • a sample value of an upper left reference sample line not adjacent to the current block may be derived using a sample value of a left reference sample line corresponding to the upper left reference sample line.
  • the sample value of the upper left reference sample line may be derived using a sample value of the uppermost reference sample of the corresponding left reference sample line.
  • a sample value of all reference samples included in the upper left reference sample line may be determined as a sample value of the uppermost reference sample of the corresponding left reference sample line.
  • Sample values of an upper reference sample line and an upper left reference sample line that are not adjacent to the current block may be derived using sample values of the left reference sample line corresponding to those non-adjacent upper and upper left reference sample lines.
  • the sample values of the upper reference sample line and the upper left reference sample line may be derived using sample values of the uppermost reference sample of the corresponding left reference sample line.
  • sample values of all reference samples included in the upper reference sample line and the upper left reference sample line may be determined as a sample value of the uppermost reference sample of the corresponding left reference sample line.
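  • The derivation rules above can be illustrated with a minimal sketch (Python, for illustration only; the names derive_upper_lines, top_adjacent, and left_lines are hypothetical, and the normative process may differ). When the current block touches the boundary, each sample of a non-adjacent upper line takes the value of the sample with the same x-coordinate in the adjacent upper line, and each non-adjacent upper-left sample takes the value of the uppermost sample of the corresponding left reference sample line:

      def derive_upper_lines(top_adjacent, left_lines, num_lines):
          """top_adjacent: samples of the upper reference line adjacent to the block.
          left_lines: left_lines[k][y] is sample y of the k-th left reference line.
          Returns (upper_lines, top_left) for reference lines 0..num_lines-1."""
          upper_lines = [list(top_adjacent)]  # line 0 is actually reconstructed
          top_left = [left_lines[0][0]]
          for k in range(1, num_lines):
              # non-adjacent upper line k: copy the sample with the same
              # x-coordinate from the adjacent upper line (line 0)
              upper_lines.append(list(top_adjacent))
              # non-adjacent upper-left sample of line k: uppermost sample of
              # the corresponding (k-th) left reference line
              top_left.append(left_lines[k][0])
          return upper_lines, top_left

  • With this derivation, only the single reconstructed row directly above the block needs to be kept, which is what allows MRL to be used at a CTU boundary without extra line memory.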
  • The deriving of the multiple reference sample lines for the current block based on whether the current block is adjacent to the boundary of the preset area may include: parsing an MRL (Multiple Reference Line) index for the current block; and deriving the multiple reference sample lines based on the MRL index and on whether the current block is adjacent to the boundary of the preset area.
  • An image decoding apparatus according to an aspect of the present disclosure may include a memory and at least one processor, wherein the at least one processor may determine, based on information on a prediction mode of a current block, whether the prediction mode of the current block is an intra prediction mode, determine, when the prediction mode of the current block is the intra prediction mode, whether the current block is adjacent to a boundary of a preset area, derive multiple reference sample lines for the current block based on whether the current block is adjacent to the boundary of the preset area, and generate a prediction block for the current block using the derived multiple reference sample lines.
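  • Put together, the decoder-side steps above amount to the following sketch (hypothetical Python; Block, CTU_SIZE, and the two callables are illustrative stand-ins, not names from this disclosure):

      from dataclasses import dataclass

      CTU_SIZE = 128  # assumed CTU size, for illustration

      @dataclass
      class Block:
          x: int
          y: int
          pred_mode: str  # "INTRA" or "INTER"

      def decode_intra_block(block, mrl_idx, derive_reference_lines, predict_intra):
          """Sketch of the decoding flow: check the prediction mode, check the
          boundary condition, derive the reference lines, then predict."""
          if block.pred_mode != "INTRA":
              raise ValueError("only the intra path is sketched here")
          # is the current block adjacent to the upper boundary of its CTU?
          at_boundary = (block.y % CTU_SIZE == 0)
          ref_lines = derive_reference_lines(block, mrl_idx, at_boundary)
          return predict_intra(block, ref_lines)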
  • a boundary of the preset region may be an upper boundary of a coding tree unit (CTU) including the current block.
  • An image encoding method according to another aspect of the present disclosure may include: determining whether a prediction mode of a current block is an intra prediction mode; when the prediction mode of the current block is the intra prediction mode, determining whether the current block is adjacent to an upper boundary of a Coding Tree Unit (CTU) including the current block; configuring reference samples for the intra prediction based on whether the current block is adjacent to the upper boundary of the CTU; generating a prediction block for the current block by performing intra prediction using the configured reference samples; and encoding information on the reference sample line used to configure the reference samples through an MRL (Multiple Reference Line) index of the current block.
  • a computer-readable recording medium may store a bitstream generated by the image encoding method or image encoding apparatus of the present disclosure.
  • an image encoding/decoding method and apparatus with improved encoding/decoding efficiency may be provided.
  • a method and apparatus for encoding/decoding an intra-predicted image using multi-reference line intra prediction may be provided.
  • a method for transmitting a bitstream generated by an image encoding method or apparatus according to the present disclosure may be provided.
  • a recording medium storing a bitstream generated by an image encoding method or apparatus according to the present disclosure may be provided.
  • a recording medium may be provided that stores a bitstream that is received and decoded by the image decoding apparatus according to the present disclosure and used for image restoration.
  • FIG. 1 is a diagram schematically illustrating a video coding system to which an embodiment according to the present disclosure can be applied.
  • FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure can be applied.
  • FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure can be applied.
  • FIG. 4 is a diagram showing a block division type according to a multi-type tree structure.
  • FIG. 5 is a diagram illustrating a signaling mechanism of partitioning information of a quadtree with nested multi-type tree structure according to the present disclosure.
  • FIG. 6 is a flowchart illustrating a video/video encoding method based on intra prediction.
  • FIG. 7 is a diagram illustrating an exemplary configuration of an intra prediction unit 185 according to the present disclosure.
  • FIG. 8 is a flowchart illustrating a video/video decoding method based on intra prediction.
  • FIG. 9 is a diagram illustrating an exemplary configuration of an intra prediction unit 265 according to the present disclosure.
  • FIG. 10 is a flowchart illustrating an intra prediction mode signaling procedure in an image encoding apparatus.
  • FIG. 11 is a flowchart illustrating a procedure for determining an intra prediction mode in an image decoding apparatus.
  • FIG. 12 is a flowchart for describing a procedure for deriving an intra prediction mode in more detail.
  • FIG. 13 is a diagram illustrating an intra prediction direction according to an embodiment of the present disclosure.
  • FIG. 14 is a diagram illustrating an intra prediction direction according to another embodiment of the present disclosure.
  • FIG. 15 is a diagram for describing a current block adjacent to an upper boundary of a CTU.
  • FIG. 16 is a diagram for describing an image decoding method according to an embodiment of the present disclosure.
  • FIG. 17 is a diagram for describing an image encoding method according to an embodiment of the present disclosure.
  • FIG. 18 is a diagram for describing a method of deriving a reference sample line according to another embodiment of the present disclosure.
  • FIG. 19 is a diagram illustrating a method of deriving a reference sample line according to another embodiment of the present disclosure.
  • FIG. 20 is a diagram illustrating a content streaming system to which an embodiment of the present disclosure can be applied.
  • In the present disclosure, when a component is said to be “connected”, “coupled”, or “linked” with another component, this may include not only a direct connection relationship but also an indirect connection relationship in which another component exists in between.
  • In the present disclosure, when a component “includes” or “has” another component, this means that still other components may be further included rather than excluded, unless otherwise stated.
  • In the present disclosure, the terms “first”, “second”, and the like are used only for the purpose of distinguishing one component from another, and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • components that are distinguished from each other are intended to clearly describe each feature, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated to be formed in one hardware or software unit, or one component may be distributed in a plurality of hardware or software units. Therefore, even if not stated otherwise, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • the components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other elements in addition to the elements described in the various embodiments are included in the scope of the present disclosure.
  • the present disclosure relates to encoding and decoding of an image, and terms used in the present disclosure may have a common meaning commonly used in the technical field to which the present disclosure belongs unless newly defined in the present disclosure.
  • In the present disclosure, a “picture” generally refers to a unit representing one image in a specific time period, and a slice/tile is a coding unit constituting a part of a picture. One picture may be composed of one or more slices/tiles.
  • a slice/tile may include one or more coding tree units (CTU).
  • pixel or "pel” may mean a minimum unit constituting one picture (or image).
  • sample may be used as a term corresponding to a pixel.
  • a sample may generally represent a pixel or a value of a pixel, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
  • unit may represent a basic unit of image processing.
  • the unit may include at least one of a specific area of a picture and information related to the corresponding area.
  • the unit may be used interchangeably with terms such as “sample array”, “block”, or “area” depending on the case.
  • the MxN block may include samples (or sample arrays) consisting of M columns and N rows, or a set (or array) of transform coefficients.
  • In the present disclosure, a “current block” may mean one of a “current coding block”, a “current coding unit”, a “coding target block”, a “decoding target block”, or a “processing target block”. When prediction is performed, the “current block” may mean a “current prediction block” or a “prediction target block”. When transform/inverse transform or quantization/inverse quantization is performed, the “current block” may mean a “current transform block” or a “transform target block”. When filtering is performed, the “current block” may mean a “filtering target block”.
  • current block may mean a block including both a luma component block and a chroma component block or "a luma block of the current block” unless explicitly stated as a chroma block.
  • the chroma block of the current block may be explicitly expressed by including an explicit description of a chroma block such as a "chroma block” or a "current chroma block”.
  • FIG. 1 shows a video coding system according to this disclosure.
  • a video coding system may include an encoding device 10 and a decoding device 20.
  • the encoding device 10 may transmit the encoded video and/or image information or data in a file or streaming format to the decoding device 20 through a digital storage medium or a network.
  • the encoding apparatus 10 may include a video source generator 11, an encoder 12, and a transmission unit 13.
  • the decoding apparatus 20 may include a receiving unit 21, a decoding unit 22, and a rendering unit 23.
  • the encoder 12 may be referred to as a video/image encoder, and the decoder 22 may be referred to as a video/image decoder.
  • the transmission unit 13 may be included in the encoding unit 12.
  • the receiving unit 21 may be included in the decoding unit 22.
  • the rendering unit 23 may include a display unit, and the display unit may be configured as a separate device or an external component.
  • the video source generator 11 may acquire a video/image through a process of capturing, synthesizing, or generating a video/image.
  • the video source generator 11 may include a video/image capturing device and/or a video/image generating device.
  • the video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured video/images, and the like.
  • the video/image generating device may include, for example, a computer, a tablet and a smartphone, and may (electronically) generate a video/image.
  • For example, a virtual video/image may be generated through a computer or the like, and in this case, the video/image capturing process may be replaced by a process of generating related data.
  • the encoder 12 may encode an input video/image.
  • the encoder 12 may perform a series of procedures such as prediction, transformation, and quantization for compression and encoding efficiency.
  • the encoder 12 may output encoded data (coded video/image information) in a bitstream format.
  • the transmission unit 13 may transmit the encoded video/image information or data output in the form of a bitstream to the receiving unit 21 of the decoding apparatus 20 through a digital storage medium or a network in a file or streaming form.
  • Digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • the transmission unit 13 may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
  • the receiving unit 21 may extract/receive the bitstream from the storage medium or network and transmit it to the decoding unit 22.
  • the decoder 22 may decode the video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoder 12.
  • the rendering unit 23 may render the decoded video/image.
  • the rendered video/image may be displayed through the display unit.
  • FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure can be applied.
  • The image encoding apparatus 100 may include an image partitioning unit 110, a subtraction unit 115, a transform unit 120, a quantization unit 130, an inverse quantization unit 140, an inverse transform unit 150, an addition unit 155, a filtering unit 160, a memory 170, an inter prediction unit 180, an intra prediction unit 185, and an entropy encoding unit 190.
  • the inter prediction unit 180 and the intra prediction unit 185 may be collectively referred to as a “prediction unit”.
  • the transform unit 120, the quantization unit 130, the inverse quantization unit 140, and the inverse transform unit 150 may be included in a residual processing unit.
  • the residual processing unit may further include a subtraction unit 115.
  • All or at least some of the plurality of constituent units constituting the image encoding apparatus 100 may be implemented as one hardware component (eg, an encoder or a processor) according to embodiments.
  • the memory 170 may include a decoded picture buffer (DPB), and may be implemented by a digital storage medium.
  • The image partitioning unit 110 may partition an input image (or picture, frame) input to the image encoding apparatus 100 into one or more processing units.
  • the processing unit may be referred to as a coding unit (CU).
  • The coding unit may be obtained by recursively partitioning a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree/binary-tree/ternary-tree (QT/BT/TT) structure.
  • one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure, a binary tree structure, and/or a ternary tree structure.
  • a quad tree structure may be applied first, and a binary tree structure and/or a ternary tree structure may be applied later.
  • the coding procedure according to the present disclosure may be performed based on the final coding unit that is no longer divided.
  • The largest coding unit may be directly used as the final coding unit, or a coding unit of a lower depth obtained by dividing the largest coding unit may be used as the final coding unit.
  • the coding procedure may include a procedure such as prediction, transformation, and/or restoration described later.
  • the processing unit of the coding procedure may be a prediction unit (PU) or a transform unit (TU).
  • Each of the prediction unit and the transform unit may be divided or partitioned from the final coding unit.
  • the prediction unit may be a unit of sample prediction
  • The transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
  • The prediction unit (the inter prediction unit 180 or the intra prediction unit 185) may perform prediction on a block to be processed (current block) and generate a predicted block including prediction samples for the current block.
  • the prediction unit may determine whether intra prediction or inter prediction is applied in units of the current block or CU.
  • the prediction unit may generate various information on prediction of the current block and transmit it to the entropy encoding unit 190.
  • the information on prediction may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
  • the intra prediction unit 185 may predict the current block by referring to samples in the current picture.
  • The referenced samples may be located in the neighborhood of the current block or may be located apart from the current block, depending on the intra prediction mode and/or the intra prediction technique.
  • the intra prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
  • The non-directional modes may include, for example, a DC mode and a planar mode.
  • the directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes, depending on the degree of detail of the prediction direction. However, this is an example, and more or less directional prediction modes may be used depending on the setting.
  • the intra prediction unit 185 may determine a prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the inter prediction unit 180 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • motion information may be predicted in units of blocks, subblocks, or samples based on a correlation between motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different from each other.
  • the temporal neighboring block may be referred to as a collocated reference block, a collocated CU (colCU), or the like.
  • a reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
  • The inter prediction unit 180 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or the reference picture index of the current block. Inter prediction may be performed based on various prediction modes.
  • For example, in the case of a skip mode or a merge mode, the inter prediction unit 180 may use motion information of a neighboring block as motion information of the current block. In the case of the skip mode, unlike the merge mode, a residual signal may not be transmitted.
  • In the case of a motion vector prediction (MVP) mode, motion vectors of neighboring blocks are used as motion vector predictors, and the motion vector of the current block may be signaled by encoding a motion vector difference and an indicator for the motion vector predictor.
  • the motion vector difference may mean a difference between a motion vector of a current block and a motion vector predictor.
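  • As a hedged illustration of this relationship (a sketch, not text from this disclosure), the encoder signals mvd = mv - mvp and the decoder reconstructs mv = mvp + mvd:

      def encode_mvd(mv, mvp):
          """Motion vector difference signalled by the encoder."""
          return (mv[0] - mvp[0], mv[1] - mvp[1])

      def reconstruct_mv(mvp, mvd):
          """Motion vector reconstructed by the decoder."""
          return (mvp[0] + mvd[0], mvp[1] + mvd[1])

      # e.g. mv = (5, -3), mvp = (4, -1)  ->  mvd = (1, -2)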
  • the prediction unit may generate a prediction signal based on various prediction methods and/or prediction techniques to be described later. For example, the prediction unit may apply intra prediction or inter prediction for prediction of the current block, and may simultaneously apply intra prediction and inter prediction. A prediction method in which intra prediction and inter prediction are applied simultaneously for prediction of a current block may be called combined inter and intra prediction (CIIP). Also, the prediction unit may perform intra block copy (IBC) for prediction of the current block. The intra block copy may be used for content image/movie coding such as games, such as, for example, screen content coding (SCC). IBC is a method of predicting a current block by using a reference block in a current picture at a distance from the current block by a predetermined distance.
  • the position of the reference block in the current picture may be encoded as a vector (block vector) corresponding to the predetermined distance.
  • IBC basically performs prediction in the current picture, but may be performed similarly to inter prediction in that it derives a reference block in the current picture. That is, the IBC may use at least one of the inter prediction techniques described in this disclosure.
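  • A minimal sketch of the block-vector copy just described (illustrative Python; a real implementation must also restrict the vector to the valid reconstructed area):

      def ibc_predict(recon, x, y, w, h, bv):
          """Predict a w x h block at (x, y) by copying the already-reconstructed
          block displaced by the block vector bv = (bvx, bvy) in the same picture."""
          bvx, bvy = bv
          return [[recon[y + bvy + j][x + bvx + i] for i in range(w)]
                  for j in range(h)]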
  • the prediction signal generated through the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
  • The subtraction unit 115 may subtract the prediction signal (predicted block, prediction sample array) output from the prediction unit from the input image signal (original block, original sample array) to generate a residual signal (residual block, residual sample array).
  • The generated residual signal may be transmitted to the transform unit 120.
  • the transform unit 120 may generate transform coefficients by applying a transform technique to the residual signal.
  • The transform technique may include at least one of a discrete cosine transform (DCT), a discrete sine transform (DST), a Karhunen-Loève transform (KLT), a graph-based transform (GBT), or a conditionally non-linear transform (CNT).
  • Here, GBT refers to a transform obtained from a graph when relationship information between pixels is represented by the graph.
  • CNT refers to a transformation obtained based on generating a prediction signal using all previously reconstructed pixels.
  • The transform process may be applied to square pixel blocks of the same size, or may be applied to blocks of variable size other than square.
  • The quantization unit 130 may quantize the transform coefficients and transmit the quantized transform coefficients to the entropy encoding unit 190.
  • the entropy encoding unit 190 may encode a quantized signal (information on quantized transform coefficients) and output it as a bitstream.
  • the information on the quantized transform coefficients may be called residual information.
  • The quantization unit 130 may rearrange the block-shaped quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
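  • The rearrangement can be pictured with a simple scan sketch (illustrative; an up-right diagonal scan is used here as an example of a coefficient scan order, and the actual order is codec-dependent):

      def scan_coefficients(block):
          """Flatten a 2D coefficient block into a 1D list along up-right
          anti-diagonals (y + x = d), from bottom-left to top-right."""
          h, w = len(block), len(block[0])
          out = []
          for d in range(h + w - 1):
              for y in range(min(d, h - 1), max(-1, d - w), -1):
                  out.append(block[y][d - y])
          return out

      # e.g. scan_coefficients([[1, 2], [3, 4]]) -> [1, 3, 2, 4]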
  • the entropy encoding unit 190 may perform various encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • the entropy encoding unit 190 may encode together or separately information necessary for video/image restoration (eg, values of syntax elements) in addition to quantized transform coefficients.
  • the encoded information (eg, encoded video/video information) may be transmitted or stored in a bitstream format in units of network abstraction layer (NAL) units.
  • the video/video information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • the video/video information may further include general constraint information.
  • the signaling information, transmitted information, and/or syntax elements mentioned in the present disclosure may be encoded through the above-described encoding procedure and included in the bitstream.
  • the bitstream may be transmitted through a network or may be stored in a digital storage medium.
  • the network may include a broadcasting network and/or a communication network
  • the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • A transmission unit (not shown) for transmitting the signal output from the entropy encoding unit 190 and/or a storage unit (not shown) for storing the signal may be provided as internal/external elements of the image encoding apparatus 100, or the transmission unit may be provided as a component of the entropy encoding unit 190.
  • The quantized transform coefficients output from the quantization unit 130 may be used to generate a residual signal. For example, a residual signal (residual block or residual samples) may be reconstructed by applying inverse quantization and inverse transform to the quantized transform coefficients through the inverse quantization unit 140 and the inverse transform unit 150.
  • The addition unit 155 may add the reconstructed residual signal to the prediction signal output from the inter prediction unit 180 or the intra prediction unit 185 to generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array).
  • When there is no residual for a block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
  • the addition unit 155 may be referred to as a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
  • the filtering unit 160 may apply filtering to the reconstructed signal to improve subjective/objective image quality.
  • The filtering unit 160 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and the modified reconstructed picture may be stored in the memory 170, specifically, in the DPB of the memory 170.
  • the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
  • the filtering unit 160 may generate a variety of filtering information and transmit it to the entropy encoding unit 190 as described later in the description of each filtering method.
  • the filtering information may be encoded by the entropy encoding unit 190 and output in the form of a bitstream.
  • the modified reconstructed picture transmitted to the memory 170 may be used as a reference picture in the inter prediction unit 180.
  • Through this, when inter prediction is applied, the image encoding apparatus 100 may avoid a prediction mismatch between the image encoding apparatus 100 and the image decoding apparatus, and may also improve encoding efficiency.
  • The DPB in the memory 170 may store the modified reconstructed picture in order to use it as a reference picture in the inter prediction unit 180.
  • the memory 170 may store motion information of a block from which motion information in a current picture is derived (or encoded) and/or motion information of blocks in a picture that have already been reconstructed.
  • the stored motion information may be transmitted to the inter prediction unit 180 to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
  • the memory 170 may store reconstructed samples of reconstructed blocks in the current picture, and may transmit the reconstructed samples to the intra prediction unit 185.
  • FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure can be applied.
  • The image decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an addition unit 235, a filtering unit 240, a memory 250, an inter prediction unit 260, and an intra prediction unit 265.
  • the inter prediction unit 260 and the intra prediction unit 265 may be collectively referred to as a “prediction unit”.
  • the inverse quantization unit 220 and the inverse transform unit 230 may be included in the residual processing unit.
  • All or at least some of the plurality of constituent units constituting the image decoding apparatus 200 may be implemented as one hardware component (eg, a decoder or a processor) according to embodiments.
  • The memory 250 may include a DPB and may be implemented by a digital storage medium.
  • the image decoding apparatus 200 receiving a bitstream including video/image information may reconstruct an image by performing a process corresponding to the process performed by the image encoding apparatus 100 of FIG. 2.
  • the image decoding apparatus 200 may perform decoding using a processing unit applied in the image encoding apparatus.
  • the processing unit of decoding may be, for example, a coding unit.
  • the coding unit may be a coding tree unit or may be obtained by dividing the largest coding unit.
  • the reconstructed image signal decoded and output through the image decoding apparatus 200 may be reproduced through a reproduction device (not shown).
  • the image decoding apparatus 200 may receive a signal output from the image encoding apparatus of FIG. 2 in the form of a bitstream.
  • the received signal may be decoded through the entropy decoding unit 210.
  • the entropy decoding unit 210 may parse the bitstream to derive information (eg, video/video information) necessary for image restoration (or picture restoration).
  • the video/video information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • the video/video information may further include general constraint information.
  • the image decoding apparatus may additionally use information on the parameter set and/or the general restriction information to decode an image.
  • the signaling information, received information and/or syntax elements mentioned in the present disclosure may be obtained from the bitstream by being decoded through the decoding procedure.
  • The entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements required for image reconstruction and quantized values of transform coefficients related to residuals.
  • More specifically, the CABAC entropy decoding method may receive a bin corresponding to each syntax element in the bitstream, determine a context model using information on the syntax element to be decoded, decoding information of neighboring blocks and of the block to be decoded, or information on a symbol/bin decoded in a previous step, predict the occurrence probability of a bin according to the determined context model, and perform arithmetic decoding of the bins to generate a symbol corresponding to the value of each syntax element.
  • the CABAC entropy decoding method may update the context model using information of the decoded symbol/bin for the context model of the next symbol/bin after the context model is determined.
  • Among the information decoded by the entropy decoding unit 210, information on prediction may be provided to the prediction unit (the inter prediction unit 260 and the intra prediction unit 265), and residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, quantized transform coefficients and related parameter information, may be input to the inverse quantization unit 220. In addition, information on filtering among the information decoded by the entropy decoding unit 210 may be provided to the filtering unit 240.
  • Meanwhile, a receiving unit (not shown) for receiving a signal output from the image encoding apparatus may be additionally provided as an internal/external element of the image decoding apparatus 200, or the receiving unit may be provided as a component of the entropy decoding unit 210.
  • the video decoding apparatus may include an information decoder (video/video/picture information decoder) and/or a sample decoder (video/video/picture sample decoder).
  • the information decoder may include an entropy decoding unit 210, and the sample decoder includes an inverse quantization unit 220, an inverse transform unit 230, an addition unit 235, a filtering unit 240, a memory 250, It may include at least one of the inter prediction unit 260 and the intra prediction unit 265.
  • the inverse quantization unit 220 may inverse quantize the quantized transform coefficients and output transform coefficients.
  • the inverse quantization unit 220 may rearrange the quantized transform coefficients into a two-dimensional block shape. In this case, the rearrangement may be performed based on a coefficient scan order performed by the image encoding apparatus.
  • the inverse quantization unit 220 may perform inverse quantization on quantized transform coefficients by using a quantization parameter (eg, quantization step size information) and obtain transform coefficients.
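  • A toy illustration of this step (a sketch only; real codecs use integer level-scale tables and scaling lists rather than this floating-point model) relies on the rule of thumb that the quantization step size roughly doubles every 6 QP values:

      def dequantize(levels, qp):
          """Toy inverse quantization: scale quantized levels back to transform
          coefficients with a step size that doubles every 6 QP values."""
          step = 2.0 ** ((qp - 4) / 6.0)  # illustrative step-size model
          return [lvl * step for lvl in levels]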
  • the inverse transform unit 230 may inversely transform transform coefficients to obtain a residual signal (residual block, residual sample array).
  • the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
  • The prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on the information on prediction output from the entropy decoding unit 210, and may determine a specific intra/inter prediction mode (prediction technique).
  • the prediction unit can generate the prediction signal based on various prediction methods (techniques) described later.
  • the intra prediction unit 265 may predict the current block by referring to samples in the current picture.
  • the description of the intra prediction unit 185 may be equally applied to the intra prediction unit 265.
  • the inter prediction unit 260 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • motion information may be predicted in units of blocks, subblocks, or samples based on a correlation between motion information between a neighboring block and a current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the inter prediction unit 260 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on the received candidate selection information.
  • Inter prediction may be performed based on various prediction modes (techniques), and the information about the prediction may include information indicating a mode (technique) of inter prediction for the current block.
  • The addition unit 235 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 260 and/or the intra prediction unit 265). When there is no residual for a block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
  • the description of the addition unit 155 may be equally applied to the addition unit 235.
  • the addition unit 235 may be referred to as a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
  • the filtering unit 240 may apply filtering to the reconstructed signal to improve subjective/objective image quality.
  • The filtering unit 240 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and the modified reconstructed picture may be stored in the memory 250, specifically, in the DPB of the memory 250.
  • the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
  • the (modified) reconstructed picture stored in the DPB of the memory 250 may be used as a reference picture in the inter prediction unit 260.
  • the memory 250 may store motion information of a block from which motion information in a current picture is derived (or decoded) and/or motion information of blocks in a picture that have already been reconstructed.
  • the stored motion information may be transmitted to the inter prediction unit 260 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • The memory 250 may store reconstructed samples of reconstructed blocks in the current picture, and may transmit them to the intra prediction unit 265.
  • The embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the image encoding apparatus 100 may be applied identically or correspondingly to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the image decoding apparatus 200, respectively.
  • the coding unit is obtained by recursively dividing a coding tree unit (CTU) or a maximum coding unit (LCU) according to a QT/BT/TT (Quad-tree/binary-tree/ternary-tree) structure.
  • the CTU may be first divided into a quadtree structure. Thereafter, leaf nodes of a quadtree structure may be further divided by a multitype tree structure.
  • the division according to the quadtree means division in which the current CU (or CTU) is divided into four. By partitioning according to the quadtree, the current CU can be divided into four CUs having the same width and the same height.
  • When the current CU is no longer divided according to the quadtree, the current CU corresponds to a leaf node of the quadtree structure.
  • the CU corresponding to the leaf node of the quadtree structure is no longer divided and may be used as the above-described final coding unit.
  • a CU corresponding to a leaf node of a quadtree structure may be further divided by a multitype tree structure.
  • the division according to the multi-type tree structure may include two divisions according to a binary tree structure and two divisions according to a ternary tree structure.
  • the two divisions according to the binary tree structure may include vertical binary splitting (SPLIT_BT_VER) and horizontal binary splitting (SPLIT_BT_HOR).
  • the vertical binary division (SPLIT_BT_VER) means division in which the current CU is divided into two in the vertical direction. As shown in FIG. 4, two CUs having a height equal to the height of the current CU and a width of half the width of the current CU may be generated by vertical binary division.
  • the horizontal binary division means division in which the current CU is divided into two in the horizontal direction. As shown in FIG. 4, two CUs having a height of half the height of the current CU and a width equal to the width of the current CU may be generated by horizontal binary division.
  • The two divisions according to the ternary tree structure may include vertical ternary splitting (SPLIT_TT_VER) and horizontal ternary splitting (SPLIT_TT_HOR).
  • Vertical ternary splitting (SPLIT_TT_VER) divides the current CU in the vertical direction at a ratio of 1:2:1. As shown in FIG. 4, by vertical ternary splitting, two CUs having a height equal to the height of the current CU and a width of 1/4 of the width of the current CU, and one CU having a height equal to the height of the current CU and a width of half the width of the current CU, may be generated.
  • Horizontal ternary splitting (SPLIT_TT_HOR) divides the current CU in the horizontal direction at a ratio of 1:2:1. As shown in FIG. 4, by horizontal ternary splitting, two CUs having a height of 1/4 of the height of the current CU and a width equal to the width of the current CU, and one CU having a height of half the height of the current CU and a width equal to the width of the current CU, may be generated.
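  • The child-CU dimensions for the four multitype-tree split modes illustrated in FIG. 4 can be summarized in a small sketch (illustrative Python; mode names follow the SPLIT_* labels above):

      def child_sizes(w, h, mode):
          """Return the (width, height) of the child CUs produced by each
          multitype-tree split of a w x h CU."""
          if mode == "SPLIT_BT_VER":   # vertical binary: two w/2 x h CUs
              return [(w // 2, h), (w // 2, h)]
          if mode == "SPLIT_BT_HOR":   # horizontal binary: two w x h/2 CUs
              return [(w, h // 2), (w, h // 2)]
          if mode == "SPLIT_TT_VER":   # vertical ternary: 1:2:1 in width
              return [(w // 4, h), (w // 2, h), (w // 4, h)]
          if mode == "SPLIT_TT_HOR":   # horizontal ternary: 1:2:1 in height
              return [(w, h // 4), (w, h // 2), (w, h // 4)]
          raise ValueError(mode)

      # e.g. child_sizes(32, 16, "SPLIT_TT_VER") -> [(8, 16), (16, 16), (8, 16)]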
  • FIG. 5 is a diagram illustrating a signaling mechanism of partitioning information of a quadtree with nested multi-type tree structure according to the present disclosure.
  • the CTU is treated as the root node of a quadtree, and is first partitioned into a quadtree structure.
  • Information (e.g., qt_split_flag) indicating whether quadtree splitting is performed on the current CU (CTU or a node (QT_node) of the quadtree) may be signaled. When qt_split_flag has a first value (e.g., “1”), the current CU may be quadtree-split. When qt_split_flag has a second value (e.g., “0”), the current CU is not quadtree-split, but becomes a leaf node (QT_leaf_node) of the quadtree.
  • the leaf nodes of each quadtree can then be further partitioned into a multitype tree structure. That is, a leaf node of a quad tree may be a node (MTT_node) of a multi-type tree.
  • For the partitioning of the multitype tree, a first flag (e.g., mtt_split_cu_flag) may be signaled to indicate whether the corresponding node is additionally partitioned. When the node is additionally partitioned, a second flag (e.g., mtt_split_cu_vertical_flag) may be signaled to indicate the splitting direction. When the second flag is 1, the splitting direction may be the vertical direction, and when the second flag is 0, the splitting direction may be the horizontal direction. Then, a third flag (e.g., mtt_split_cu_binary_flag) may be signaled to indicate whether the split type is a binary split type or a ternary split type. When the third flag is 1, the split type may be the binary split type, and when the third flag is 0, the split type may be the ternary split type.
  • Nodes of a multitype tree obtained by binary division or ternary division may be further partitioned into a multitype tree structure.
  • nodes of a multitype tree cannot be partitioned into a quadtree structure.
  • When the first flag is 0, the corresponding node of the multitype tree is no longer split and becomes a leaf node (MTT_leaf_node) of the multitype tree.
  • the CU corresponding to the leaf node of the multi-type tree may be used as the above-described final coding unit.
  • Based on the second flag (mtt_split_cu_vertical_flag) and the third flag (mtt_split_cu_binary_flag), a multitype tree splitting mode (MttSplitMode) of the CU may be derived as shown in Table 1.
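  • That derivation can be expressed compactly (a sketch consistent with the flag semantics described above):

      def mtt_split_mode(vertical_flag, binary_flag):
          """Derive MttSplitMode from mtt_split_cu_vertical_flag and
          mtt_split_cu_binary_flag."""
          if binary_flag:
              return "SPLIT_BT_VER" if vertical_flag else "SPLIT_BT_HOR"
          return "SPLIT_TT_VER" if vertical_flag else "SPLIT_TT_HOR"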
  • One CTU may include a coding block of luma samples (hereinafter, referred to as a “luma block”) and two coding blocks of chroma samples corresponding thereto (hereinafter referred to as a “chroma block”).
  • the above-described coding tree scheme may be applied equally to the luma block and the chroma block of the current CU, or may be applied separately.
  • a luma block and a chroma block in one CTU may be divided into the same block tree structure, and the tree structure in this case may be represented as a single tree (SINGLE_TREE).
  • a luma block and a chroma block in one CTU may be divided into individual block tree structures, and the tree structure in this case may be represented as a dual tree (DUAL_TREE). That is, when the CTU is divided into a dual tree, a block tree structure for a luma block and a block tree structure for a chroma block may exist separately.
  • the block tree structure for the luma block may be referred to as a dual tree luma (DUAL_TREE_LUMA)
  • the block tree structure for the chroma block may be referred to as a dual tree chroma (DUAL_TREE_CHROMA).
  • For example, for P and B slice/tile groups, luma blocks and chroma blocks in one CTU may be limited to have the same coding tree structure. However, for I slice/tile groups, luma blocks and chroma blocks may have separate block tree structures from each other.
  • When separate block tree structures are applied, a luma coding tree block (CTB) may be divided into CUs based on a specific coding tree structure, and a chroma CTB may be divided into chroma CUs based on a different coding tree structure. That is, a CU in an I slice/tile group to which separate block tree structures are applied may be composed of a coding block of a luma component or coding blocks of two chroma components.
  • a CU in an I slice/tile group to which the same block tree structure is applied and a CU of a P or B slice/tile group may be composed of blocks of three color components (a luma component and two chroma components).
  • the structure in which the CU is divided is not limited thereto.
  • the BT structure and the TT structure may be interpreted as a concept included in the Multiple Partitioning Tree (MPT) structure, and the CU may be interpreted as being divided through the QT structure and the MPT structure.
  • When the CU is split through the MPT structure, a syntax element (e.g., MPT_split_type) including information on how many blocks a leaf node is split into, and a syntax element (e.g., MPT_split_mode) including information on which of the vertical and horizontal directions the leaf node is split in, may be signaled.
  • In some cases, the CU may be divided in a manner different from the QT, BT, or TT structure. That is, unlike the QT structure in which a CU of a lower depth is divided into 1/4 the size of a CU of a higher depth, the BT structure in which a CU of a lower depth is divided into 1/2 the size of a CU of a higher depth, or the TT structure in which a CU of a lower depth is divided into 1/4 or 1/2 the size of a CU of a higher depth, a CU of a lower depth may in some cases be divided into 1/5, 1/3, 3/8, 3/5, 2/3, or 5/8 the size of a CU of a higher depth, and the method of partitioning the CU is not limited thereto.
  • Intra prediction may indicate prediction of generating prediction samples for a current block based on reference samples in a picture (hereinafter, referred to as a current picture) to which the current block belongs.
  • When intra prediction is applied to the current block, neighboring reference samples to be used for the intra prediction of the current block may be derived.
  • The neighboring reference samples of the current block may include a total of 2xnH samples adjacent to the left boundary of the current block of size nWxnH and to its bottom-left, a total of 2xnW samples adjacent to the top boundary of the current block and to its top-right, and one sample adjacent to the top-left of the current block.
  • the peripheral reference samples of the current block may include a plurality of columns of upper peripheral samples and a plurality of rows of left peripheral samples.
  • In addition, the neighboring reference samples of the current block may include a total of nH samples adjacent to the right boundary of the current block of size nWxnH, a total of nW samples adjacent to the bottom boundary of the current block, and one sample adjacent to the bottom-right of the current block.
  • the decoder may construct neighboring reference samples to be used for prediction by substituting samples that are not available with available samples.
  • surrounding reference samples to be used for prediction may be configured through interpolation of available samples.
  • a prediction sample may be derived (i) based on an average or interpolation of neighboring reference samples of the current block, or (ii) based on a reference sample existing in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block.
  • the case of (i) may be called a non-directional mode or a non-angular mode.
  • the case of (ii) may be called a directional mode or an angular mode.
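  • as a minimal sketch of the two cases, assuming integer sample arrays (the rounding used for the average is illustrative):

```python
import numpy as np

def dc_predict(top, left):
    # Case (i): non-directional (DC) prediction - every prediction sample is
    # the rounded average of the neighboring reference samples.
    refs = np.concatenate([top, left]).astype(np.int64)
    avg = (refs.sum() + refs.size // 2) // refs.size
    return np.full((left.size, top.size), avg, dtype=np.int64)

def horizontal_predict(left, width):
    # Case (ii): a purely horizontal directional mode - each row copies the
    # reference sample lying in the prediction direction.
    return np.tile(np.asarray(left)[:, None], (1, width))

top = np.array([100, 102, 104, 106])
left = np.array([98, 99, 101, 103])
print(dc_predict(top, left))  # 4x4 block filled with the average value 102
```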
  • linear interpolation intra prediction (LIP) may also be used as one of the intra prediction techniques.
  • chroma prediction samples may be generated based on luma samples using a linear model. This case may be referred to as LM (Linear Model) mode.
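  • a minimal sketch of applying such a linear model is shown below; the derivation of the parameters a and b from neighboring reconstructed luma/chroma sample pairs is codec-specific and omitted, so a, b, and the shift are treated as given:

```python
import numpy as np

def lm_predict_chroma(rec_luma, a, b, shift=0, bit_depth=10):
    # Chroma prediction from reconstructed (downsampled) luma samples via a
    # linear model: pred_C = (a * rec_L >> shift) + b, clipped to the sample
    # range. Parameters a, b, shift are assumed to be derived elsewhere.
    pred = (a * rec_luma.astype(np.int64) >> shift) + b
    return np.clip(pred, 0, (1 << bit_depth) - 1)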
  • a temporary prediction sample of the current block may be derived based on filtered neighboring reference samples, and a prediction sample of the current block may then be derived by a weighted sum of the temporary prediction sample and at least one reference sample derived according to the intra prediction mode among the existing neighboring reference samples, that is, the unfiltered neighboring reference samples. This case may be called PDPC (Position dependent intra prediction).
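  • the weighted sum can be sketched in one line; the 6-bit weight precision and the rounding offset below are assumptions for illustration, not the normative PDPC weights:

```python
def pdpc_sample(temp_pred, unfiltered_ref, weight):
    # Position dependent weighted sum of a temporary prediction sample and an
    # unfiltered reference sample; `weight` would typically decay with the
    # distance of the sample from the block boundary.
    return (weight * unfiltered_ref + (64 - weight) * temp_pred + 32) >> 6

print(pdpc_sample(temp_pred=120, unfiltered_ref=100, weight=32))  # 110
```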
  • a reference sample line having the highest prediction accuracy among the neighboring multi-reference sample lines of the current block may be selected, and a prediction sample may be derived using a reference sample positioned in the prediction direction from the corresponding line.
  • information on the used reference sample line (e.g., intra_luma_ref_idx or an MRL index) may be encoded and signaled. This technique may be referred to as MRL (multi-reference line) intra prediction.
  • reference samples may be derived from a reference sample line directly adjacent to the current block, and in this case, information about the reference sample line may not be signaled.
  • the syntax element intra_luma_ref_idx indicating the MRL index may indicate a reference sample line for intra prediction of the current block according to the binarization of Table 1 below.

Table 1
binary value | indicated reference sample line
0            | 0th reference sample line
10           | 1st reference sample line
11           | 3rd reference sample line

  • the binary strings in Table 1 are examples of the present disclosure, and the scope of the present disclosure is not limited to the binary values above.
  • intra prediction for a current block may be performed using a 0-th reference sample line of the current block. That is, when the current block is intra-predicted by using the 0-th reference sample line, the video encoding apparatus or the video decoding apparatus may encode/decode 0 as a binary value of intra_luma_ref_idx. Similarly, when the current block is encoded/decoded using the first or third reference sample line, intra_luma_ref_idx may be encoded/decoded with a binary value of 10 or 11, respectively.
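  • the parsing of this binarization can be sketched as follows, with the {0th, 1st, 3rd} line mapping taken from the description above:

```python
def parse_intra_luma_ref_idx(bits):
    # bits: iterable of "0"/"1" bins as produced by the entropy decoder.
    # Truncated-unary reading: "0" -> line 0, "10" -> line 1, "11" -> line 3.
    it = iter(bits)
    if next(it) == "0":
        return 0
    return 1 if next(it) == "0" else 3

assert parse_intra_luma_ref_idx("0") == 0
assert parse_intra_luma_ref_idx("10") == 1
assert parse_intra_luma_ref_idx("11") == 3
```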
  • the current block may be divided into vertical or horizontal subpartitions, and intra prediction may be performed for each subpartition based on the same intra prediction mode.
  • neighboring reference samples of intra prediction may be derived for each subpartition. That is, the reconstructed sample of the previous sub-partition in the encoding/decoding order may be used as a neighboring reference sample of the current sub-partition.
  • the intra prediction mode for the current block is equally applied to the subpartitions, but by deriving and using neighboring reference samples in units of the subpartitions, intra prediction performance may be improved in some cases.
  • This prediction method may be referred to as intra sub-partitions (ISP) or ISP-based intra prediction.
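  • a minimal sketch of the ISP partitioning described above; in actual codecs the number of subpartitions (2 or 4) depends on the block size, while here it is a plain parameter:

```python
def isp_subpartitions(w, h, vertical, num_parts):
    # Split a WxH block into equal vertical or horizontal subpartitions; each
    # is intra-predicted with the same mode, but its reference samples come
    # from previously reconstructed subpartitions.
    if vertical:
        return [(w // num_parts, h)] * num_parts
    return [(w, h // num_parts)] * num_parts

print(isp_subpartitions(16, 16, vertical=True, num_parts=4))   # 4 x (4, 16)
print(isp_subpartitions(16, 16, vertical=False, num_parts=4))  # 4 x (16, 4)
```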
  • intra prediction techniques may be referred to by various terms, such as intra prediction type or additional intra prediction mode, to distinguish them from the directional or non-directional intra prediction modes.
  • the intra prediction technique may include at least one of the aforementioned LIP, LM, PDPC, MRL, and ISP.
  • the general intra prediction method excluding specific intra prediction types such as LIP, LM, PDPC, MRL, and ISP may be referred to as a normal intra prediction type.
  • the normal intra prediction type may be generally applied when the specific intra prediction type as described above is not applied, and prediction may be performed based on the aforementioned intra prediction mode. Meanwhile, post-processing filtering may be performed on the derived prediction samples as necessary.
  • the intra prediction procedure may include determining an intra prediction mode/type, deriving a neighboring reference sample, and deriving an intra prediction mode/type based prediction sample. Also, a post-filtering step may be performed on the derived prediction samples as necessary.
  • FIG. 6 is a flowchart illustrating a video/video encoding method based on intra prediction.
  • the encoding method of FIG. 6 may be performed by the video encoding apparatus of FIG. 2. Specifically, step S610 may be performed by the intra prediction unit 185, and step S620 may be performed by the residual processing unit. Specifically, step S620 may be performed by the subtraction unit 115. Step S630 may be performed by the entropy encoding unit 190.
  • the prediction information of step S630 may be derived by the intra prediction unit 185, and the residual information of step S630 may be derived by the residual processing unit.
  • the residual information is information on the residual samples.
  • the residual information may include information on quantized transform coefficients for the residual samples.
  • the residual samples may be derived as transform coefficients through the transform unit 120 of the image encoding apparatus, and the transform coefficients may be derived as quantized transform coefficients through the quantization unit 130.
  • Information about the quantized transform coefficients may be encoded by the entropy encoding unit 190 through a residual coding procedure.
  • the image encoding apparatus may perform intra prediction on the current block (S610).
  • the video encoding apparatus may determine an intra prediction mode/type for the current block, derive neighboring reference samples of the current block, and then generate prediction samples in the current block based on the intra prediction mode/type and the neighboring reference samples.
  • the procedure of determining the intra prediction mode/type, deriving neighboring reference samples, and generating prediction samples may be simultaneously performed, or one procedure may be performed before the other procedure.
  • FIG. 7 is a diagram illustrating an exemplary configuration of an intra prediction unit 185 according to the present disclosure.
  • the intra prediction unit 185 of the video encoding apparatus may include an intra prediction mode/type determination unit 186, a reference sample derivation unit 187 and/or a prediction sample derivation unit 188.
  • the intra prediction mode/type determiner 186 may determine an intra prediction mode/type for the current block.
  • the reference sample derivation unit 187 may derive neighboring reference samples of the current block.
  • the prediction sample derivation unit 188 may derive prediction samples of the current block.
  • the intra prediction unit 185 may further include a prediction sample filter unit (not shown).
  • the image encoding apparatus may determine a mode/type applied to the current block from among a plurality of intra prediction modes/types.
  • the video encoding apparatus may compare RD costs for the intra prediction modes/types and determine an optimal intra prediction mode/type for the current block.
  • the image encoding apparatus may perform a prediction sample filtering procedure.
  • Predictive sample filtering may be referred to as post filtering. Some or all of the prediction samples may be filtered by the prediction sample filtering procedure. In some cases, the prediction sample filtering procedure may be omitted.
  • the apparatus for encoding an image may generate residual samples for the current block based on prediction samples or filtered prediction samples (S620).
  • the image encoding apparatus may derive the residual samples by subtracting the prediction samples from original samples of the current block. That is, the image encoding apparatus may derive the residual sample value by subtracting the corresponding predicted sample value from the original sample value.
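  • in sketch form (a minimal numpy illustration of S620; sample arrays and bit depth are assumptions):

```python
import numpy as np

def residual_block(original, prediction):
    # S620: residual sample = original sample - co-located prediction sample.
    # Widen to a signed type first so unsigned sample arrays cannot wrap.
    return original.astype(np.int32) - prediction.astype(np.int32)
```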
  • the image encoding apparatus may encode image information including information about the intra prediction (prediction information) and residual information about the residual samples (S630).
  • the prediction information may include the intra prediction mode information and/or the intra prediction technique information.
  • the image encoding apparatus may output the encoded image information in the form of a bitstream.
  • the output bitstream may be delivered to an image decoding apparatus through a storage medium or a network.
  • the residual information may include a residual coding syntax to be described later.
  • the image encoding apparatus may transform/quantize the residual samples to derive quantized transform coefficients.
  • the residual information may include information on the quantized transform coefficients.
  • the image encoding apparatus may generate a reconstructed picture (including reconstructed samples and a reconstructed block). To this end, the image encoding apparatus may perform inverse quantization/inverse transformation on the quantized transform coefficients again to derive (modified) residual samples. The reason the residual samples are transformed/quantized and then inverse quantized/inverse transformed again is to derive residual samples identical to the residual samples derived by the image decoding apparatus.
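  • the following toy illustration shows why the encoder re-runs the inverse path: a scalar quantizer with step qstep stands in for the transform/quantization pair, and the "modified" residual is exactly what a decoder would reconstruct:

```python
import numpy as np

def encoder_reconstruction_path(residual, qstep):
    # Forward quantization is the lossy step; inverse quantization recovers
    # the same modified residual that the decoder will derive.
    quantized = np.round(residual / qstep).astype(np.int32)
    modified_residual = quantized * qstep
    return quantized, modified_residual

q, r = encoder_reconstruction_path(np.array([7, -3, 12]), qstep=4)
print(q, r)  # [ 2 -1  3] [ 8 -4 12]
```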
  • the image encoding apparatus may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and the (modified) residual samples. A reconstructed picture for the current picture may be generated based on the reconstructed block. As described above, an in-loop filtering procedure or the like may be further applied to the reconstructed picture.
  • FIG. 8 is a flowchart illustrating a video/video decoding method based on intra prediction.
  • the image decoding apparatus may perform an operation corresponding to an operation performed by the image encoding apparatus.
  • the decoding method of FIG. 8 may be performed by the video decoding apparatus of FIG. 3.
  • Steps S810 to S830 may be performed by the intra prediction unit 265, and the prediction information of step S810 and the residual information of step S840 may be obtained from the bitstream by the entropy decoding unit 210.
  • the residual processing unit of the image decoding apparatus may derive residual samples for the current block based on the residual information (S840).
  • the inverse quantization unit 220 of the residual processing unit may derive transform coefficients by performing inverse quantization based on the quantized transform coefficients derived from the residual information, and the inverse transform unit 230 of the residual processing unit may derive residual samples for the current block by performing inverse transform on the transform coefficients.
  • Step S850 may be performed by the addition unit 235 or the restoration unit.
  • the image decoding apparatus may derive an intra prediction mode/type for the current block based on the received prediction information (intra prediction mode/type information) (S810).
  • the image decoding apparatus may derive neighboring reference samples of the current block (S820).
  • the image decoding apparatus may generate prediction samples in the current block based on the intra prediction mode/type and the neighboring reference samples (S830).
  • the image decoding apparatus may perform a prediction sample filtering procedure. Predictive sample filtering may be referred to as post filtering. Some or all of the prediction samples may be filtered by the prediction sample filtering procedure. In some cases, the prediction sample filtering procedure may be omitted.
  • the image decoding apparatus may generate residual samples for the current block based on the received residual information (S840).
  • the image decoding apparatus may generate reconstructed samples for the current block based on the prediction samples and the residual samples, and derive a reconstructed block including the reconstructed samples (S850).
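  • in sketch form, S850 amounts to a clipped addition (the 10-bit default is an assumption):

```python
import numpy as np

def reconstruct(prediction, residual, bit_depth=10):
    # S850: reconstructed sample = prediction + residual, clipped to the
    # valid range for the assumed bit depth.
    recon = prediction.astype(np.int32) + residual
    return np.clip(recon, 0, (1 << bit_depth) - 1)
```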
  • a reconstructed picture for the current picture may be generated based on the reconstructed block.
  • an in-loop filtering procedure or the like may be further applied to the reconstructed picture.
  • FIG. 9 is a diagram illustrating an exemplary configuration of an intra prediction unit 265 according to the present disclosure.
  • the intra prediction unit 265 of the image decoding apparatus may include an intra prediction mode/type determination unit 266, a reference sample derivation unit 267, and a prediction sample derivation unit 268. .
  • the intra prediction mode/type determiner 266 determines an intra prediction mode/type for the current block based on intra prediction mode/type information generated and signaled by the intra prediction mode/type determiner 186 of the image encoding apparatus.
  • the reference sample derivation unit 267 may derive neighboring reference samples of the current block from the reconstructed reference region in the current picture.
  • the prediction sample derivation unit 268 may derive prediction samples of the current block.
  • the intra prediction unit 265 may further include a prediction sample filter unit (not shown).
  • the intra prediction mode information may include, for example, flag information (e.g., intra_luma_mpm_flag) indicating whether a most probable mode (MPM) or a remaining mode is applied to the current block, and, when the MPM is applied to the current block, the intra prediction mode information may further include index information (e.g., intra_luma_mpm_idx) indicating one of the intra prediction mode candidates (MPM candidates).
  • the intra prediction mode candidates (MPM candidates) may constitute an MPM candidate list or an MPM list.
  • when the MPM is not applied to the current block, the intra prediction mode information may further include remaining mode information (e.g., intra_luma_mpm_remainder) indicating one of the remaining intra prediction modes excluding the intra prediction mode candidates (MPM candidates).
  • the image decoding apparatus may determine an intra prediction mode of the current block based on the intra prediction mode information.
  • the intra prediction technique information may be implemented in various forms.
  • the intra prediction technique information may include intra prediction technique index information indicating one of the intra prediction techniques.
  • the intra prediction technique information may include at least one of reference sample line information (e.g., intra_luma_ref_idx) indicating whether MRL is applied to the current block and, if applied, which reference sample line is used, ISP flag information (e.g., intra_subpartitions_mode_flag) indicating whether ISP is applied to the current block, ISP type information (e.g., intra_subpartitions_split_flag) indicating the split type of the subpartitions when ISP is applied, flag information indicating whether PDPC is applied, or flag information indicating whether LIP is applied.
  • the ISP flag information may be referred to as an ISP application indicator.
  • the intra prediction mode information and/or the intra prediction technique information may be encoded/decoded through the coding method described in this disclosure.
  • the intra prediction mode information and/or the intra prediction method information may be encoded/decoded through entropy coding (ex. CABAC, CAVLC) based on a truncated (rice) binary code.
  • an intra prediction mode applied to the current block may be determined using an intra prediction mode of a neighboring block.
  • the image decoding apparatus may construct a most probable mode (MPM) list derived based on the intra prediction modes of the neighboring blocks (e.g., the left and/or upper neighboring blocks) of the current block and additional candidate modes, and may select one of the MPM candidates in the MPM list based on the received MPM index.
  • the video decoding apparatus may select one of the remaining intra prediction modes that are not included in the MPM list based on the remaining intra prediction mode information.
  • whether the intra prediction mode applied to the current block is among MPM candidates (i.e., is included in the MPM list) or is in the remaining mode may be indicated based on the MPM flag (ex. intra_luma_mpm_flag).
  • a value of 1 of the MPM flag may indicate that the intra prediction mode for the current block is among the MPM candidates (MPM list), and a value of 0 of the MPM flag may indicate that the intra prediction mode for the current block is not among the MPM candidates (MPM list).
  • the MPM index may be signaled in the form of an mpm_idx or intra_luma_mpm_idx syntax element, and the remaining intra prediction mode information may be signaled in the form of rem_intra_luma_pred_mode or intra_luma_mpm_remainder syntax element.
  • the remaining intra prediction mode information may indicate one of all intra prediction modes by indexing the remaining intra prediction modes not included in the MPM candidates (MPM list) in order of prediction mode number.
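  • the remainder indexing described above can be sketched as follows; the MPM list size and contents are codec-specific and treated as given:

```python
def derive_intra_mode(mpm_flag, mpm_idx, mpm_remainder, mpm_list):
    # MPM-based luma mode derivation. When the flag is 0, the remainder
    # indexes the modes *not* in the MPM list in increasing mode-number
    # order, which the loop below undoes.
    if mpm_flag:
        return mpm_list[mpm_idx]
    mode = mpm_remainder
    for mpm in sorted(mpm_list):
        if mode >= mpm:
            mode += 1
    return mode

# With MPM list {0, 1, 50}, remainder 0 maps to the smallest non-MPM mode.
print(derive_intra_mode(0, None, 0, [0, 1, 50]))  # 2
```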
  • the intra prediction mode may be an intra prediction mode for a luma component (sample).
  • the intra prediction mode information may include at least one of the MPM flag (e.g., intra_luma_mpm_flag), the MPM index (e.g., mpm_idx or intra_luma_mpm_idx), and the remaining intra prediction mode information (e.g., rem_intra_luma_pred_mode or intra_luma_mpm_remainder).
  • the MPM list may be referred to by various terms, such as MPM candidate list or candModeList.
  • FIG. 10 is a flowchart illustrating an intra prediction mode signaling procedure in an image encoding apparatus.
  • the apparatus for encoding an image may configure an MPM list for a current block (S1010).
  • the MPM list may include candidate intra prediction modes (MPM candidates) that are likely to be applied to the current block.
  • the MPM list may include intra prediction modes of neighboring blocks, or may further include specific intra prediction modes according to a predetermined method.
  • the image encoding apparatus may determine an intra prediction mode of the current block (S1020).
  • the video encoding apparatus may perform prediction based on various intra prediction modes, and may determine an optimal intra prediction mode by performing rate-distortion optimization (RDO) based thereon.
  • the video encoding apparatus may determine the optimal intra prediction mode using only the MPM candidates included in the MPM list, or may determine the optimal intra prediction mode by further using the remaining intra prediction modes as well as the MPM candidates included in the MPM list. Specifically, for example, if the intra prediction type of the current block is a specific type (e.g., LIP, MRL, or ISP) other than the normal intra prediction type, the video encoding apparatus may determine the optimal intra prediction mode using only the MPM candidates. That is, in this case, the intra prediction mode for the current block may be determined only from among the MPM candidates, and the MPM flag may not be encoded/signaled. In the case of the specific type, the video decoding apparatus may infer that the MPM flag is 1 without the MPM flag being separately signaled.
  • when the intra prediction mode of the current block is in the MPM list, the video encoding apparatus may generate an MPM index (mpm_idx) indicating one of the MPM candidates. If the intra prediction mode of the current block is not in the MPM list, the video encoding apparatus may generate remaining intra prediction mode information indicating the mode identical to the intra prediction mode of the current block among the remaining intra prediction modes not included in the MPM list.
  • the image encoding apparatus may encode the intra prediction mode information and output it in the form of a bitstream (S1030).
  • the intra prediction mode information may include the above-described MPM flag, MPM index, and/or remaining intra prediction mode information.
  • the MPM index and the remaining intra prediction mode information have an alternative relationship and are not signaled at the same time when indicating the intra prediction mode for one block. That is, when the MPM flag value is 1, the MPM index may be signaled, and when the MPM flag value is 0, the remaining intra prediction mode information may be signaled.
  • when a specific intra prediction type is applied to the current block, the MPM flag is not signaled and its value is inferred as 1, and only the MPM index may be signaled. That is, in this case, the intra prediction mode information may include only the MPM index.
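  • in sketch form (the reader object is a hypothetical entropy-decoder wrapper, not a real API):

```python
def parse_mpm_syntax(reader, intra_type):
    # For the specific intra types above the MPM flag is never coded and is
    # inferred to be 1, so only the MPM index is parsed.
    if intra_type in ("LIP", "MRL", "ISP"):
        mpm_flag = 1                   # inferred, not read from the bitstream
    else:
        mpm_flag = reader.read_flag()  # hypothetical call
    mpm_idx = reader.read_index() if mpm_flag else None
    return mpm_flag, mpm_idx
```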
  • in FIG. 10, S1020 is shown to be performed after S1010, but this is only an example, and S1020 may be performed before S1010 or at the same time.
  • FIG. 11 is a flowchart illustrating a procedure for determining an intra prediction mode in an image decoding apparatus.
  • the image decoding apparatus may determine an intra prediction mode of the current block based on intra prediction mode information determined and signaled by the image encoding apparatus.
  • the apparatus for decoding an image may acquire intra prediction mode information from a bitstream (S1110).
  • the intra prediction mode information may include at least one of an MPM flag, an MPM index, and remaining intra prediction mode information.
  • the video decoding apparatus may configure an MPM list (S1120).
  • the MPM list is configured in the same way as the MPM list configured in the video encoding apparatus. That is, the MPM list may include intra prediction modes of neighboring blocks, or may further include specific intra prediction modes according to a predetermined method.
  • in FIG. 11, S1120 is shown to be performed after S1110, but this is only an example, and S1120 may be performed before S1110 or at the same time.
  • the video decoding apparatus may determine the intra prediction mode of the current block based on the MPM list and the intra prediction mode information (S1130). Step S1130 will be described in more detail with reference to FIG. 12.
  • FIG. 12 is a flowchart for describing a procedure for deriving an intra prediction mode in more detail.
  • Steps S1210 and S1220 of FIG. 12 may correspond to steps S1110 and S1120 of FIG. 11, respectively. Therefore, detailed descriptions of steps S1210 and S1220 are omitted.
  • the image decoding apparatus may obtain intra prediction mode information from the bitstream, configure an MPM list (S1210 and S1220), and check a predetermined condition (S1230). Specifically, as shown in FIG. 12, when the value of the MPM flag is 1 (Yes in S1230), the image decoding apparatus may derive the candidate indicated by the MPM index among the MPM candidates in the MPM list as the intra prediction mode of the current block (S1240). As another example, when the value of the MPM flag is 0 (No in S1230), the image decoding apparatus may derive the intra prediction mode indicated by the remaining intra prediction mode information among the remaining intra prediction modes not included in the MPM list as the intra prediction mode of the current block (S1250).
  • as another example, when a specific intra prediction type is applied to the current block, the video decoding apparatus may derive the candidate indicated by the MPM index within the MPM list as the intra prediction mode of the current block without checking the MPM flag (S1240).
  • FIG. 13 is a diagram illustrating an intra prediction direction according to an embodiment of the present disclosure.
  • the intra prediction mode may include two non-directional intra prediction modes and 33 directional intra prediction modes.
  • the non-directional intra prediction modes may include a planar intra prediction mode and a DC intra prediction mode, and the directional intra prediction modes may include 2 to 34 intra prediction modes.
  • the planar intra prediction mode may be referred to as a planar mode, and the DC intra prediction mode may be referred to as a DC mode.
  • the intra prediction mode may include two non-directional intra prediction modes and 65 extended directional intra prediction modes.
  • the non-directional intra prediction modes may include a planar mode and a DC mode, and the directional intra prediction modes may include 2 to 66 intra prediction modes.
  • the extended intra prediction modes can be applied to blocks of all sizes, and can be applied to both a luma component (a luma block) and a chroma component (a chroma block).
  • the intra prediction mode may include two non-directional intra prediction modes and 129 directional intra prediction modes.
  • the non-directional intra prediction modes may include a planar mode and a DC mode, and the directional intra prediction modes may include 2 to 130 intra prediction modes.
  • the intra prediction mode may further include a cross-component linear model (CCLM) mode for chroma samples in addition to the aforementioned intra prediction modes.
  • the CCLM mode can be divided into L_CCLM, T_CCLM, and LT_CCLM, depending on whether left samples are considered, upper samples are considered, or both for LM parameter derivation, and can be applied only to a chroma component.
  • the intra prediction mode may be indexed, for example, as shown in Table 3 below.
  • in order to capture an arbitrary edge direction presented in natural video, the intra prediction mode may include 93 directional intra prediction modes along with two non-directional intra prediction modes. The non-directional intra prediction modes may include a planar mode and a DC mode.
  • the directional intra prediction modes may include intra prediction modes numbered 2 to 80 and -1 to -14, as indicated by the arrows in FIG. 14.
  • the planar mode may be indicated as INTRA_PLANAR, and the DC mode may be indicated as INTRA_DC.
  • the directional intra prediction mode may be expressed as INTRA_ANGULAR-14 to INTRA_ANGULAR-1 and INTRA_ANGULAR2 to INTRA_ANGULAR80.
  • the apparatus for encoding/decoding an image may derive multiple reference sample lines using a method proposed by the present disclosure.
  • the video encoding/decoding apparatus to which the MRL is applied may not apply the MRL to the current block due to a line buffer problem.
  • FIG. 15 is a diagram for describing a current block located at an upper boundary of a CTU.
  • the CTU shown in FIG. 15 includes 16 CU blocks.
  • CU1 to CU4 may be blocks adjacent to the upper boundary of the CTU to which each CU (current block) belongs.
  • MRL may not be applied to CU1 to CU4. That is, in FIG. 15, MRL is not applied to CU1 to CU4, and MRL may be applied to CU5 to CU16 based on signaled MRL configuration.
  • the apparatus for encoding/decoding an image may perform intra prediction on the current block using only one reference sample line without applying MRL.
  • the apparatus for encoding/decoding an image may perform intra prediction by applying MRL to the current block without occurrence of a line buffer problem.
  • the coordinates (x, y) of the upper reference sample lines may mean coordinates, among the coordinates of the reference sample lines, satisfying x ≥ x0 and y < y0.
  • the coordinates (x, y) of the upper left reference sample lines of the current block may mean coordinates, among the coordinates of the reference sample lines, satisfying x < x0 and y < y0.
  • the coordinates (x, y) of the upper left reference sample line 0 of the current block may include (x0-1, y0-1).
  • the coordinates (x, y) of the 1st upper left reference sample line of the current block may include at least one of (x0-2, y0-1), (x0-2, y0-2) and (x0-1, y0-2).
  • the coordinates (x, y) of the 2nd upper left reference sample line of the current block may include at least one of (x0-3, y0-1), (x0-3, y0-2), (x0-3, y0-3), (x0-2, y0-3) and (x0-1, y0-3).
  • the coordinates (x, y) of the 3rd upper left reference sample line of the current block may include at least one of (x0-4, y0-1), (x0-4, y0-2), (x0-4, y0-3), (x0-4, y0-4), (x0-3, y0-4), (x0-2, y0-4) and (x0-1, y0-4).
  • the coordinates (x, y) of the left reference sample lines may mean coordinates, among the coordinates of the reference sample lines, satisfying x < x0 and y ≥ y0.
  • the reference sample line may be configured to include at least one of the above-described upper reference sample line, upper left reference sample line, and left reference sample line.
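  • these coordinate ranges can be summarized in a few lines of Python, with (x0, y0) denoting the top-left sample of the current block as above:

```python
def classify_reference_sample(x, y, x0, y0):
    # Region of a reference sample at (x, y) relative to a current block
    # whose top-left sample is (x0, y0), per the coordinate ranges above.
    if y < y0:
        return "upper-left" if x < x0 else "upper"
    if x < x0:
        return "left"
    return "inside current block"

assert classify_reference_sample(5, 3, 4, 4) == "upper"
assert classify_reference_sample(3, 3, 4, 4) == "upper-left"
assert classify_reference_sample(3, 5, 4, 4) == "left"
```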
  • FIG. 16 is a diagram for describing an image decoding method according to an embodiment of the present disclosure.
  • the image decoding method according to an embodiment of the present disclosure may include determining whether a prediction mode of a current block is an intra prediction mode based on information about the prediction mode of the current block (S1610), determining, when the prediction mode of the current block is the intra prediction mode, whether the current block is adjacent to a boundary of a preset area (S1620), deriving multiple reference sample lines for the current block based on whether the current block is adjacent to the boundary of the preset area (S1630), and generating a prediction block for the current block by using the derived multiple reference sample lines (S1640).
  • the boundary of the preset region may be the upper boundary of the coding tree unit (CTU) including the current block.
  • FIG. 17 is a diagram for describing an image encoding method according to an embodiment of the present disclosure.
  • the image encoding method according to an embodiment of the present disclosure may include determining whether a prediction mode of a current block is an intra prediction mode (S1710), determining, when the prediction mode of the current block is the intra prediction mode, whether the current block is adjacent to the upper boundary of the CTU (Coding Tree Unit) including the current block (S1720), constructing reference samples for intra prediction based on whether the current block is adjacent to the upper boundary of the CTU (S1730), generating a prediction block for the current block by performing intra prediction using the constructed reference samples (S1740), and encoding information on the reference sample line used to construct the reference samples through an MRL (Multiple Reference Line) index (S1750).
  • when the current block is adjacent to the upper boundary of the CTU to which the current block belongs, the image encoding/decoding apparatus may derive the remaining upper reference sample lines using the 0th upper reference sample line of the current block. Also, in this case, the image encoding/decoding apparatus may derive the upper left reference sample lines by using the left reference sample lines of the current block.
  • FIG. 18 is a diagram for describing a method of deriving a reference sample line according to another embodiment of the present disclosure.
  • CU1 to CU4 of FIG. 18 may be blocks adjacent to the upper boundary of the CTU.
  • FIG. 18 shows a method in which, when the current block is adjacent to the upper boundary of the CTU to which the current block belongs, the image encoding/decoding apparatus derives the remaining upper reference sample lines of the current block by using the reference sample line adjacent to the current block (reference sample line 0).
  • the remaining upper reference sample lines may mean reference sample lines (no. 1 to 3 upper reference sample lines) excluding adjacent reference sample lines.
  • the first upper reference sample line and the third upper reference sample line of the current block may be derived using the 0th upper reference sample line of the current block.
  • the first and/or third upper reference sample lines are derived using the 0th upper reference sample line adjacent to the current block.
  • the numbering of reference sample lines is only an example, and in the present embodiment, the scope of rights may also extend to a configuration in which a reference sample line is derived by using a reference sample line adjacent to a current block or a CTU to which the current block belongs.
  • the 1st, 2nd, and/or 3rd upper reference sample lines may be derived using the sample values of the 0th upper reference sample line.
  • the first, second, and/or third upper reference sample lines may be derived by copying the sample value of the 0th upper reference sample line.
  • the sample value of each of the 1st, 2nd and/or 3rd upper reference sample lines may be derived using the sample value of the 0th upper reference sample line having the same x coordinate as the coordinates of each reference sample.
  • for example, the sample value of the 2nd upper reference sample line having the coordinates (x2, y0-3) may be derived using, or determined to be, the sample value of the 0th upper reference sample line having the coordinates (x2, y0-1). Likewise, the sample value of the 3rd upper reference sample line having the coordinates (x3, y0-4) may be derived using, or determined to be, the sample value of the 0th reference sample line having the coordinates (x3, y0-1).
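  • a minimal numpy sketch of this vertical copy, assuming four candidate upper lines and that line 0 is the only line kept in the line buffer:

```python
import numpy as np

def derive_upper_lines_from_line0(line0, num_lines=4):
    # At a CTU top boundary only upper line 0 is kept in the line buffer, so
    # lines 1..3 are derived by copying the line-0 sample with the same x
    # coordinate. Row n of the result is the n-th upper reference line.
    return np.tile(np.asarray(line0), (num_lines, 1))

lines = derive_upper_lines_from_line0([10, 20, 30, 40])
print(lines[3])  # [10 20 30 40] -- same values as line 0
```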
  • the reference sample line to the left of the current block may be derived in the same manner as in the related art, regardless of whether the current block is adjacent to the upper boundary of the CTU to which the current block belongs. That is, the 0th left reference sample line and the remaining left reference sample lines (1st to 3rd left reference sample lines) adjacent to the left boundary of the current block may be derived according to a conventional method.
  • the upper left reference sample line of the current block may be derived using the uppermost sample value of the left reference sample line corresponding to each upper left reference sample line.
  • all sample values of the 1st upper left reference sample line may be derived using the sample value of the 1st left reference sample line having the coordinates (x0-2, y0), or may be determined as the corresponding sample value.
  • all sample values of the 2nd upper left reference sample line may be derived using the sample value of the 2nd left reference sample line having the coordinates (x0-3, y0), or may be determined as the corresponding sample value.
  • all sample values of the 3rd upper left reference sample line may be derived using the sample value of the 3rd left reference sample line having the coordinates (x0-4, y0), or may be determined as the corresponding sample value.
  • that is, all sample values included in the 1st upper left reference sample line may be derived to the same value using the L1 sample value of FIG. 18, and all sample values included in the 3rd upper left reference sample line may be derived to the same value using the L3 sample value.
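  • in sketch form, with left_lines[n][0] standing for the topmost sample of left reference line n (the L1/L3 samples of FIG. 18); returning one fill value per line is a simplification for illustration:

```python
def pad_upper_left_lines(left_lines):
    # left_lines[n] holds left reference line n from top (y == y0) downward;
    # every sample of upper-left line n takes the topmost value of left
    # line n.
    return [line[0] for line in left_lines]

print(pad_upper_left_lines([[98, 97], [96, 95], [94, 93], [92, 91]]))
# [98, 96, 94, 92] -- fill values for upper-left lines 0..3
```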
  • when the corresponding reference sample is not available, the image encoding/decoding apparatus may additionally scan for an available reference sample according to a preset direction.
  • here, the availability of a reference sample may mean a case in which the upper left reference sample can be derived using that reference sample.
  • when an available reference sample is found, the image encoding/decoding apparatus may derive the upper left reference sample line by using the corresponding sample value.
  • for example, when the uppermost sample of a left reference sample line is not available, the image encoding/decoding apparatus may scan for an available reference sample from the top to the bottom.
  • in this case, the image encoding/decoding apparatus may derive the 1st upper left reference sample line using a reference sample value having the coordinates (x0-2, y0+1).
  • as another embodiment, when the current block is adjacent to the upper boundary of the CTU to which the current block belongs, the image encoding/decoding apparatus may derive the upper left and upper reference sample lines using the left reference sample lines of the current block.
  • FIG. 19 is a diagram illustrating a method of deriving a reference sample line according to another embodiment of the present disclosure.
  • FIG. 19 shows a method in which, when the current block is adjacent to the upper boundary of the CTU to which the current block belongs, the image encoding/decoding apparatus derives the upper left reference sample lines and the upper reference sample lines of the current block by using the left reference sample lines of the current block.
  • the 1st upper reference sample line and the 3rd upper reference sample line of the current block may be derived using the 1st left reference sample line and the 3rd left reference sample line, respectively.
  • the numbering of reference sample lines is only an example, and in the present embodiment, the scope of rights may extend to a configuration in which arbitrary upper left and upper reference sample lines are derived using the left reference sample line of the current block.
  • the reference sample line to the left of the current block can be derived in the same manner as in the related art, regardless of whether the current block is adjacent to the upper boundary of the CTU to which the current block belongs. That is, the 0th left reference sample line and the remaining left reference sample lines (1st to 3rd left reference sample lines) adjacent to the left boundary of the current block may be derived according to a conventional method.
  • the upper left reference sample line of the current block may be derived using the uppermost sample value of the left reference sample line corresponding to each upper left reference sample line.
  • all sample values of the 1st upper left reference sample line may be derived using the sample value of the 1st left reference sample line having the coordinates (x0-2, y0), or may be determined as the corresponding sample value.
  • all sample values of the 2nd upper left reference sample line may be derived using the sample value of the 2nd left reference sample line having the coordinates (x0-3, y0), or may be determined as the corresponding sample value.
  • all sample values of the 3rd upper left reference sample line may be derived using the sample value of the 3rd left reference sample line having the coordinates (x0-4, y0), or may be determined as the corresponding sample value.
  • the upper reference sample line of the current block may be derived using the uppermost sample value of the left reference sample line corresponding to each upper reference sample line.
  • all sample values of the 1st upper reference sample line may be derived using the sample value of the 1st left reference sample line having the coordinates (x0-2, y0), or may be determined as the corresponding sample value.
  • all sample values of the 2nd upper reference sample line may be derived using the sample value of the 2nd left reference sample line having the coordinates (x0-3, y0), or may be determined as the corresponding sample value.
  • all sample values of the 3rd upper reference sample line may be derived using the sample value of the 3rd left reference sample line having the coordinates (x0-4, y0), or may be determined as the corresponding sample value.
  • that is, sample values of all reference samples included in the upper and upper left reference sample lines may be substituted or padded with, or derived by copying, the uppermost sample value of the left reference sample line corresponding to each reference sample line.
  • all sample values included in the 1st upper left and upper reference sample lines may be derived to the same value using the L1 sample value of FIG. 19, and all sample values included in the 3rd upper left and upper reference sample lines may be derived to the same value using the L3 sample value.
  • when the corresponding reference sample is not available, the image encoding/decoding apparatus may additionally scan for an available reference sample according to a preset direction.
  • here, the availability of a reference sample may mean a case in which the upper left and/or upper reference samples can be derived using that reference sample.
  • when an available reference sample is found, the image encoding/decoding apparatus may derive the upper left and upper reference sample lines by using the corresponding sample value. As an example, if the uppermost sample of the left reference sample line having the coordinates (x0-n, y0) is not available, the image encoding/decoding apparatus may scan for an available reference sample from the top to the bottom.
  • in this case, the image encoding/decoding apparatus may derive the 1st upper left and upper reference sample lines using a reference sample value having the coordinates (x0-2, y0+1).
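  • a minimal sketch of this padding, including the top-to-bottom availability scan; representing unavailable samples as None and falling back to 0 when nothing is available are assumptions:

```python
import numpy as np

def pad_lines_from_left(left_lines, line_width):
    # Every sample of upper-left/upper line n is padded with the topmost
    # *available* sample of left reference line n, scanning from top to
    # bottom when the topmost sample is unavailable (None here).
    padded = []
    for line in left_lines:
        fill = next((s for s in line if s is not None), 0)
        padded.append(np.full(line_width, fill))
    return padded

lines = pad_lines_from_left([[None, 77, 75], [80, 79, 78]], line_width=8)
print(lines[0])  # eight samples of 77 (first available below the top)
```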
  • exemplary methods of the present disclosure are expressed as a series of operations for clarity of description, but this is not intended to limit the order in which steps are performed, and each step may be performed simultaneously or in a different order if necessary.
  • in order to implement a method according to the present disclosure, the illustrated steps may additionally include other steps, may include the remaining steps excluding some steps, or may include additional other steps excluding some steps.
  • an image encoding apparatus or an image decoding apparatus performing a predetermined operation may perform an operation (step) of confirming an execution condition or situation of the operation (step). For example, when it is described that a predetermined operation is performed when a predetermined condition is satisfied, the image encoding apparatus or the image decoding apparatus may perform an operation of checking whether the predetermined condition is satisfied and then perform the predetermined operation.
  • various embodiments of the present disclosure may be implemented by hardware, firmware, software, or a combination thereof.
  • for example, they may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), general purpose processors, controllers, microcontrollers, microprocessors, or the like.
  • the image decoding device and the image encoding device to which the embodiments of the present disclosure are applied may be included in a multimedia broadcasting transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service provider, an OTT video (Over the top video) device, an Internet streaming service provider, a three-dimensional (3D) video device, a video telephony video device, a medical video device, and the like, and may be used to process a video signal or a data signal.
  • an OTT video (Over the top video) device may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smartphone, a tablet PC, and a digital video recorder (DVR).
  • FIG. 20 is a diagram illustrating a content streaming system to which an embodiment of the present disclosure can be applied.
  • the content streaming system to which the embodiment of the present disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage device, a user device, and a multimedia input device.
  • the encoding server serves to generate a bitstream by compressing content input from multimedia input devices such as smartphones, cameras, camcorders, etc. into digital data, and transmits it to the streaming server.
  • as another example, when multimedia input devices such as smartphones, cameras, and camcorders directly generate bitstreams, the encoding server may be omitted.
  • the bitstream may be generated by an image encoding method and/or an image encoding apparatus to which an embodiment of the present disclosure is applied, and the streaming server may temporarily store the bitstream in a process of transmitting or receiving the bitstream.
  • the streaming server may transmit multimedia data to a user device based on a user request through a web server, and the web server may serve as an intermediary informing the user of a service.
  • when a user requests a desired service from the web server, the web server transmits the request to the streaming server, and the streaming server may transmit multimedia data to the user.
  • the content streaming system may include a separate control server, and in this case, the control server may play a role of controlling a command/response between devices in the content streaming system.
  • the streaming server may receive content from a media storage and/or encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
  • examples of the user device may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
  • Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributedly processed.
  • the scope of the present disclosure includes software or machine-executable instructions (e.g., operating systems, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium storing such software or instructions and executable on a device or computer.
  • An embodiment according to the present disclosure may be used to encode/decode an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video encoding/decoding method and apparatus are provided. The video decoding method according to the present disclosure is a video decoding method performed by a video decoding apparatus, and may comprise the steps of: determining, based on information about a prediction mode of a current block, whether the prediction mode of the current block is an intra prediction mode; when the prediction mode of the current block is the intra prediction mode, determining whether the current block is adjacent to a boundary of a preset area; when the current block is adjacent to the boundary of the preset area, deriving multiple reference sample lines for the current block; and generating a prediction block for the current block by using the derived multiple reference sample lines.
PCT/KR2020/008032 2019-06-20 2020-06-22 Video encoding/decoding method and apparatus using multi-reference line intra prediction, and method for transmitting a bitstream WO2020256506A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962864438P 2019-06-20 2019-06-20
US62/864,438 2019-06-20

Publications (1)

Publication Number Publication Date
WO2020256506A1 true WO2020256506A1 (fr) 2020-12-24

Family

ID=74040324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/008032 WO2020256506A1 (fr) 2019-06-20 2020-06-22 Video encoding/decoding method and apparatus using multi-reference line intra prediction, and method for transmitting a bitstream

Country Status (1)

Country Link
WO (1) WO2020256506A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180001479A * 2016-06-24 2018-01-04 주식회사 케이티 Method and apparatus for processing a video signal
KR20180015598A * 2016-08-03 2018-02-13 주식회사 케이티 Method and apparatus for processing a video signal
KR20180029905A * 2016-09-13 2018-03-21 한국전자통신연구원 Image encoding/decoding method and apparatus, and recording medium storing a bitstream
KR20180041575A * 2016-10-14 2018-04-24 세종대학교산학협력단 Method and apparatus for encoding/decoding an image
KR20190005730A * 2017-07-06 2019-01-16 한국전자통신연구원 Image encoding/decoding method and apparatus, and recording medium storing a bitstream

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023121000A1 * 2021-12-23 2023-06-29 현대자동차주식회사 Video coding method and device using adaptive multi-reference lines
WO2024007157A1 * 2022-07-05 2024-01-11 Oppo广东移动通信有限公司 Method and device for sorting a multiple reference line index list, video encoding method and device, video decoding method and device, and system
WO2024022144A1 * 2022-07-29 2024-02-01 Mediatek Inc. Intra prediction based on multiple reference lines

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20825615

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20825615

Country of ref document: EP

Kind code of ref document: A1