WO2020175905A1 - Signaled information-based picture partitioning method and apparatus - Google Patents

Signaled information-based picture partitioning method and apparatus Download PDF

Info

Publication number
WO2020175905A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
tile
tiles
picture
prediction
Prior art date
Application number
PCT/KR2020/002730
Other languages
French (fr)
Korean (ko)
Inventor
파루리시탈
김승환
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Publication of WO2020175905A1 publication Critical patent/WO2020175905A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/129 Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • This disclosure relates to video coding technology, and more specifically, to a picture partitioning method and apparatus based on information signaled in a video coding system.
  • Recently, the demand for high-resolution, high-quality video/images is increasing in various fields.
  • As the resolution and quality of video/image data increase, the amount of information or bits to be transmitted increases relative to existing video/image data.
  • Therefore, when video data is transmitted using a medium such as a wired/wireless broadband line, or stored using an existing storage medium, the transmission cost and the storage cost increase.
  • Accordingly, a flexible picture partitioning method that can be applied to efficiently compress and play back images/videos is required.
  • the technical task of this disclosure is to provide a method and apparatus to increase the image coding efficiency.
  • Another technical task of this disclosure is to provide a method and apparatus for signaling partitioning information.
  • Another technical task of this disclosure is to provide a method and apparatus for partitioning a picture based on signaled information.
  • Another technical task of this disclosure is to provide a method and apparatus for partitioning a current picture based on partition information for the current picture.
  • Another technical task of this disclosure is to provide a method and apparatus for partitioning the current picture based on at least one of: flag information on whether the current picture is divided into motion constrained tile sets (MCTSs), information on the number of MCTSs in the current picture, position information of the tile located at the top-left of each MCTS, and position information of the tile located at the bottom-right of each MCTS.
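The MCTS-related fields listed above can be illustrated with a small sketch. This is not the actual bitstream syntax of the disclosure; all field and function names here are hypothetical, and only the idea (turning per-MCTS top-left/bottom-right tile indices into tile rectangles) is shown.

```python
# Hypothetical sketch: deriving MCTS regions of a picture from the kinds
# of signaled fields described above. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class MctsInfo:
    mcts_enabled_flag: int       # whether the picture is divided into MCTSs
    num_mcts: int                # number of MCTSs in the current picture
    top_left_tile_idx: list      # top-left tile index of each MCTS
    bottom_right_tile_idx: list  # bottom-right tile index of each MCTS

def mcts_tile_rects(info: MctsInfo, num_tile_cols: int):
    """Convert per-MCTS top-left/bottom-right tile indices into
    (col0, row0, col1, row1) rectangles in tile units."""
    if not info.mcts_enabled_flag:
        return []
    rects = []
    for i in range(info.num_mcts):
        tl = info.top_left_tile_idx[i]
        br = info.bottom_right_tile_idx[i]
        rects.append((tl % num_tile_cols, tl // num_tile_cols,
                      br % num_tile_cols, br // num_tile_cols))
    return rects
```

For example, with 4 tile columns, an MCTS whose top-left tile index is 1 and bottom-right tile index is 6 covers tile columns 1..2 of tile rows 0..1.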
  • According to an embodiment of this disclosure, an image decoding method performed by a decoding apparatus includes: obtaining, from a bitstream, image information including partitioning information for a current picture and prediction information for a current block included in the current picture; deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partitioning information for the current picture; deriving prediction samples for the current block based on the prediction information for the current block included in one of the plurality of tiles; and reconstructing the current picture based on the prediction samples, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
  • According to another embodiment of this disclosure, a decoding apparatus for performing image decoding includes: an entropy decoder that obtains, from a bitstream, image information including partitioning information for a current picture and prediction information for a current block included in the current picture, and derives a partitioning structure of the current picture based on a plurality of tiles, based on the partitioning information for the current picture; a predictor that derives prediction samples for the current block based on the prediction information for the current block included in one of the plurality of tiles; and an adder that reconstructs the current picture based on the prediction samples, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
  • According to another embodiment of this disclosure, an image encoding method performed by an encoding apparatus includes: dividing a current picture into a plurality of tiles; generating partitioning information for the current picture based on the plurality of tiles; deriving prediction samples for a current block included in one of the plurality of tiles; generating prediction information for the current block based on the prediction samples; and encoding image information including the partitioning information for the current picture and the prediction information for the current block, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
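The grouping described above, where a tile group may hold tiles that do not follow the picture's raster-scan order, can be sketched as follows. This is an illustrative model only (a rectangular tile group addressed by top-left/bottom-right tile indices), not the disclosure's actual syntax.

```python
# Hypothetical sketch: a tile grid in picture raster-scan order, and a
# rectangular tile group whose member tiles are NOT a contiguous run of
# that raster order (i.e. a "non-raster" group such as an ROI region).

def tile_grid(num_cols, num_rows):
    """Tile indices of the picture in raster-scan order."""
    return list(range(num_cols * num_rows))

def rect_tile_group(top_left, bottom_right, num_cols):
    """Tiles covered by a rectangular tile group given the top-left and
    bottom-right tile indices; when the rectangle spans several tile
    rows, the member tiles skip over tiles outside the rectangle."""
    c0, r0 = top_left % num_cols, top_left // num_cols
    c1, r1 = bottom_right % num_cols, bottom_right // num_cols
    return [r * num_cols + c for r in range(r0, r1 + 1)
                             for c in range(c0, c1 + 1)]
```

With a 4x2 tile grid, the group with top-left tile 1 and bottom-right tile 6 contains tiles [1, 2, 5, 6]: tiles 3 and 4 are skipped, so the group does not follow the picture's raster scan.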
  • According to another embodiment of this disclosure, an encoding apparatus that performs image encoding is provided.
  • The encoding apparatus includes: an image partitioner that divides the current picture into a plurality of tiles and generates partitioning information for the current picture based on the plurality of tiles; a predictor that derives prediction samples for a current block included in one of the plurality of tiles and generates prediction information for the current block based on the prediction samples; and an entropy encoder that encodes image information including the partitioning information for the current picture and the prediction information for the current block, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
  • According to another embodiment of this disclosure, a computer-readable digital storage medium storing encoded image information that causes an image decoding method to be performed is provided, the image decoding method including: obtaining, from a bitstream, image information including partitioning information for a current picture and prediction information for a current block included in the current picture; deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partitioning information for the current picture; deriving prediction samples for the current block based on the prediction information for the current block included in one of the plurality of tiles; and reconstructing the current picture based on the prediction samples, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
  • FIG. 1 schematically shows an example of a video/video coding system to which this disclosure can be applied.
  • FIG. 2 shows the configuration of a video/image encoding apparatus to which this disclosure can be applied.
  • FIG. 3 shows the configuration of a video/image decoding apparatus to which this disclosure can be applied.
  • FIG. 5 is a diagram showing an example of partitioning a picture.
  • FIG. 6 is a flowchart illustrating a procedure for encoding a picture based on a tile and/or a tile group according to an embodiment.
  • FIG. 7 is a flowchart illustrating a tile and/or tile group-based picture decoding procedure according to an embodiment.
  • FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
  • FIG. 9 is a block diagram showing the configuration of an encoding apparatus according to an embodiment.
  • FIG. 10 is a block diagram showing the configuration of a decoding apparatus according to an embodiment.
  • FIG. 11 is a diagram showing an example of tile and tile group units constituting a current picture.
  • FIG. 12 is a diagram schematically showing an example of a signaling structure of tile group information.
  • FIG. 13 is a diagram illustrating an example of a picture in a video conferencing program.
  • FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video conferencing program.
  • FIG. 15 is a diagram illustrating an example of partitioning a picture into tiles or tile groups based on an MCTS (Motion Constrained Tile Set).
  • FIG. 16 is a diagram illustrating an example of dividing a picture based on an ROI area.
  • FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
  • FIG. 18 is a diagram illustrating an example of partitioning a picture into a plurality of tiles and tile groups.
  • FIG. 19 illustrates an example of partitioning a picture into a plurality of tiles and tile groups.
  • FIG. 20 is a flowchart showing the operation of the decoding apparatus according to an embodiment.
  • FIG. 21 is a block diagram showing the configuration of a decoding apparatus according to an embodiment.
  • FIG. 22 is a flowchart showing the operation of the encoding apparatus according to an embodiment.
  • FIG. 23 is a block diagram showing the configuration of an encoding apparatus according to an embodiment.
  • FIG. 24 shows an example of a content streaming system to which the disclosure of this document can be applied.
  • The components in the drawings of this disclosure are shown independently for convenience of description, and this does not mean that each component is implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form a single configuration.
  • one configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is incorporated and/or separated are also included within the scope of the rights of this disclosure, unless departing from the essence of this disclosure.
  • In this document, "A or B" may mean "only A", "only B", or "both A and B". In other words, "A or B" in this document may be interpreted as "A and/or B".
  • For example, in this document, "A, B or C" may mean "only A", "only B", "only C", or "any combination of A, B and C".
  • A slash (/) or a comma used in this document may mean "and/or". For example, "A/B" may mean "A and/or B". Accordingly, "A/B" may mean "only A", "only B", or "both A and B". For example, "A, B, C" may mean "A, B or C".
  • In this document, parentheses may mean "for example". Specifically, when indicated as "prediction (intra prediction)", "intra prediction" may be proposed as an example of "prediction". In other words, "prediction" in this document is not limited to "intra prediction", and "intra prediction" may be proposed as an example of "prediction". Also, even when indicated as "prediction (i.e., intra prediction)", "intra prediction" may be proposed as an example of "prediction".
  • Technical features that are individually described within one drawing in this document may be implemented individually or may be implemented simultaneously.
  • FIG. 1 schematically shows an example of a video/video coding system to which this disclosure can be applied.
  • a video/video coding system may include a first device (source device) and a second device (receive device).
  • The source device may transmit encoded video/image information or data in the form of a file or streaming to the receiving device through a digital storage medium or a network.
  • the source device may include a video source, an encoding device, and a transmission unit.
  • the receiving device may include a receiver, a decoding device, and a renderer.
  • the encoding device may be referred to as a video/image encoding device, and the decoding device may be referred to as a video/image decoding device.
  • the transmitter may be included in the encoding device.
  • the receiver may be included in the decoding device.
  • the renderer may include a display unit, and the display unit may be composed of separate devices or external components.
  • A video source may acquire a video/image through a capture, synthesis, or generation process. The video source may include a video/image capture device and/or a video/image generating device. The video/image capture device may include, for example, one or more cameras and video/image archives containing previously captured video/images. The video/image generating device may include, for example, computers, tablets, and smartphones, and may (electronically) generate a video/image. For example, a virtual video/image may be generated through a computer or the like, and in this case the video/image capture process may be replaced by a process of generating related data.
  • the encoding device can encode the input video/video.
  • the encoding device can perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency.
  • The encoded data (encoded video/image information) may be output in the form of a bitstream.
  • The transmitter may transmit the encoded video/image information or data output in the form of a bitstream to the receiver of the receiving device in the form of a file or streaming through a digital storage medium or a network.
  • the digital storage medium can include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, etc.
  • the transmission unit may include an element for generating a media file through a predetermined file format and may include an element for transmission through a broadcasting/communication network.
  • The receiver may receive/extract the bitstream and transmit it to the decoding apparatus.
  • The decoding apparatus may decode the video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operations of the encoding apparatus.
  • the renderer can render decoded video/video.
  • the rendered video/video can be displayed through the display unit.
  • This document relates to video/image coding.
  • For example, the methods/embodiments disclosed in this document may be applied to methods disclosed in the versatile video coding (VVC) standard, the essential video coding (EVC) standard, the AOMedia Video 1 (AV1) standard, the 2nd generation of audio video coding standard (AVS2), or a next-generation video/image coding standard (e.g., H.267 or H.268).
  • a picture generally refers to a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding.
  • A tile may include one or more CTUs (coding tree units), and a picture may consist of one or more slices/tiles.
  • A tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture.
  • The tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set.
  • The tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture.
  • a tile scan can represent a specific sequential ordering of CTUs partitioning the picture.
  • the CTUs may be sequentially aligned with a CTU raster scan in a tile, and tiles in a picture may be sequentially aligned with a raster scan of the tiles of the picture (A tile scan is a specific sequential ordering of CTUs partitioning a picture in which the CTUs are ordered consecutively in CTU raster scan in a tile whereas tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture).
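The tile scan defined above (CTU raster scan inside each tile, tiles in raster scan across the picture) can be sketched concretely. The grid sizes below are illustrative only.

```python
# Sketch of the tile scan described above: for each tile (taken in raster
# order across the picture), emit that tile's CTUs in raster order within
# the tile. CTU addresses are raster addresses in the whole picture.

def tile_scan_order(pic_w_ctus, pic_h_ctus, tile_col_widths, tile_row_heights):
    """Return picture-raster CTU addresses in tile scan order."""
    order = []
    y0 = 0
    for th in tile_row_heights:            # tile rows, top to bottom
        x0 = 0
        for tw in tile_col_widths:         # tile columns, left to right
            for y in range(y0, y0 + th):   # CTU raster scan inside the tile
                for x in range(x0, x0 + tw):
                    order.append(y * pic_w_ctus + x)
            x0 += tw
        y0 += th
    return order
```

For a 4x2-CTU picture split into two 2x2-CTU tiles, the tile scan visits CTUs 0, 1, 4, 5 (first tile) and then 2, 3, 6, 7 (second tile), which differs from the picture raster scan 0..7.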
  • In this document, tile group and slice may be used interchangeably.
  • For example, a tile group/tile group header may be called a slice/slice header.
  • a picture can be divided into two or more subpictures.
  • a subpicture can be a rectangular region of one or more slices within a picture.
  • A pixel or a pel may mean the smallest unit constituting one picture (or image).
  • Also, "sample" may be used as a term corresponding to a pixel.
  • a unit can represent a basic unit of image processing.
  • a unit can contain at least one of a specific region of a picture and information related to that region.
  • One unit may include one luma block and two chroma (e.g., cb, cr) blocks.
  • a unit may be used interchangeably with terms such as block or area in some cases.
  • the MxN block may include a set (or array) of samples (or sample array) or transform coefficients consisting of M columns and N rows.
  • FIG. 2 shows the configuration of a video/video encoding apparatus to which this disclosure can be applied.
  • the video encoding device may include an image encoding device.
  • The encoding apparatus 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270.
  • The predictor 220 may include an inter predictor 221 and an intra predictor 222.
  • The residual processor 230 may include a transformer 232, a quantizer 233, an inverse quantizer 234, and an inverse transformer 235.
  • the residual processing unit 230 may further include a subtractor 231.
  • The adder 250 may include a reconstructor or a reconstructed block generator.
  • The image partitioner 210, the predictor 220, the residual processor 230, the entropy encoder 240, the adder 250, and the filter 260 described above may be configured by at least one hardware component (e.g., an encoder chipset or processor) according to an embodiment.
  • the memory 270 may include a decoded picture buffer (DPB), and may be configured by a digital storage medium.
  • The hardware component may further include the memory 270 as an internal/external component.
  • The image partitioner 210 may divide an input image (or picture, or frame) input to the encoding apparatus 200 into one or more processing units.
  • For example, the processing unit may be called a coding unit (CU). In this case, the coding unit may be recursively divided from a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree binary-tree ternary-tree (QTBTTT) structure. For example, one coding unit may be divided into a plurality of coding units of deeper depth based on a quad-tree structure, a binary-tree structure, and/or a ternary-tree structure. In this case, for example, the quad-tree structure may be applied first, and the binary-tree structure and/or the ternary-tree structure may be applied later; alternatively, the binary-tree structure may be applied first.
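The recursive QTBTTT splitting described above can be sketched as a small recursion. This is a simplified illustration: the split decision function is supplied by the caller, whereas a real encoder would choose splits by rate-distortion search and enforce the standard's size and depth constraints.

```python
# Illustrative QTBTTT recursion: a CU at (x, y) of size (w, h) is either a
# leaf, quad-split (QT), binary-split (BT_H/BT_V), or ternary-split
# (TT_H/TT_V, with a 1:2:1 partition). Returns the leaf CUs.

def split_cu(x, y, w, h, decide):
    """decide(x, y, w, h) returns None (leaf), 'QT', 'BT_H', 'BT_V',
    'TT_H', or 'TT_V'."""
    mode = decide(x, y, w, h)
    if mode is None:
        return [(x, y, w, h)]
    if mode == 'QT':
        parts = [(x, y, w//2, h//2), (x + w//2, y, w//2, h//2),
                 (x, y + h//2, w//2, h//2), (x + w//2, y + h//2, w//2, h//2)]
    elif mode == 'BT_H':
        parts = [(x, y, w, h//2), (x, y + h//2, w, h//2)]
    elif mode == 'BT_V':
        parts = [(x, y, w//2, h), (x + w//2, y, w//2, h)]
    elif mode == 'TT_H':
        parts = [(x, y, w, h//4), (x, y + h//4, w, h//2),
                 (x, y + 3*h//4, w, h//4)]
    else:  # 'TT_V'
        parts = [(x, y, w//4, h), (x + w//4, y, w//2, h),
                 (x + 3*w//4, y, w//4, h)]
    leaves = []
    for px, py, pw, ph in parts:
        leaves.extend(split_cu(px, py, pw, ph, decide))
    return leaves
```

For example, quad-splitting a 64x64 CTU once yields four 32x32 leaf CUs, and a single vertical ternary split yields leaves of widths 16, 32, and 16.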
  • The coding procedure according to this disclosure may be performed based on the final coding unit that is no longer divided. In this case, the largest coding unit may be used directly as the final coding unit based on coding efficiency according to the image characteristics, or, if necessary, the coding unit may be recursively divided into coding units of deeper depth so that a coding unit of an optimal size may be used as the final coding unit.
  • In some cases, the processing unit may further include a prediction unit (PU) or a transform unit (TU). In this case, the prediction unit and the transform unit may each be split or partitioned from the above-described final coding unit.
  • the prediction unit may be a unit of sample prediction
  • The transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
  • an MxN block can represent a set of samples or transform coefficients consisting of M columns and N rows.
  • A sample may generally represent a pixel or a pixel value, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component. A sample may be used as a term corresponding to one picture (or image) for a pixel or a pel.
  • The encoding apparatus 200 may generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the inter predictor 221 or the intra predictor 222 from the input video signal (original block, original sample array), and the generated residual signal is transmitted to the transformer 232. In this case, as shown, the unit that subtracts the prediction signal (predicted block, prediction sample array) from the input video signal (original block, original sample array) within the encoding apparatus 200 may be called a subtractor 231.
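The subtraction performed by the subtractor can be shown in a few lines. This is only a per-sample sketch with NumPy, not the actual implementation.

```python
import numpy as np

# Sketch of the subtractor described above: the residual sample array is
# the original block minus the predicted block, computed per sample.

def residual_block(original, predicted):
    """Residual signal passed on to the transform stage."""
    return (np.asarray(original, dtype=np.int32)
            - np.asarray(predicted, dtype=np.int32))
```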
  • The predictor performs prediction on a block to be processed (hereinafter referred to as the current block), and may generate a predicted block including prediction samples for the current block.
  • the prediction unit can determine whether intra prediction or inter prediction is applied in the current block or CU unit.
  • the prediction unit may generate various types of information related to prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit it to the entropy encoding unit 240.
  • The information on prediction may be encoded by the entropy encoder 240 and output in the form of a bitstream.
  • The intra predictor 222 may predict the current block by referring to samples in the current picture. Depending on the prediction mode, the referenced samples may be located in the neighborhood of the current block or may be located apart from it.
  • prediction modes include a plurality of non-directional modes and a plurality of directional modes.
  • The non-directional modes may include, for example, a DC mode and a planar mode. The directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the degree of detail of the prediction direction. However, this is an example, and more or fewer directional prediction modes may be used depending on the setting.
  • the intra prediction unit 222 may determine a prediction mode to be applied to the current block by using the prediction mode applied to the surrounding block.
  • The inter predictor 221 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture. In this case, motion information may be predicted in units of blocks, subblocks, or samples.
  • the motion information may include a motion vector and a reference picture index.
  • The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • In the case of inter prediction, the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture. The reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different. The temporal neighboring block may be called a collocated reference block, a co-located CU (colCU), or the like, and the reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • For example, the inter predictor 221 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the inter predictor 221 may use motion information of a neighboring block as motion information of the current block. In the skip mode, unlike the merge mode, the residual signal may not be transmitted. In the case of the motion vector prediction (MVP) mode, the motion vector of a neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling a motion vector difference.
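The MVP-mode derivation described above can be sketched minimally: pick a predictor from a candidate list built from neighboring blocks, then add the signaled motion vector difference. The function and argument names are illustrative, not standard syntax.

```python
# Hypothetical sketch of MVP-mode motion vector recovery: the decoder
# selects a candidate predictor by index and adds the signaled MVD.

def derive_mv(candidates, mvp_idx, mvd):
    """candidates: list of (mvx, mvy) taken from spatial/temporal
    neighboring blocks; mvd: signaled motion vector difference."""
    mvp = candidates[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```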
  • the prediction unit 220 may generate a prediction signal based on various prediction methods to be described later.
  • The predictor may apply intra prediction or inter prediction for prediction of one block, and may also apply intra prediction and inter prediction at the same time. This may be called combined inter and intra prediction (CIIP).
  • In addition, the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block. The IBC prediction mode or the palette mode may be used for content image/video coding such as games, for example, screen content coding (SCC). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document.
  • the palette mode can be seen as an example of intracoding or intra prediction. When the palette mode is applied, the sample values in the picture can be signaled based on the information about the palette table and palette index.
  • the prediction signal generated through the prediction unit may be used to generate a restoration signal or may be used to generate a residual signal.
  • the transform unit 232 may generate transform coefficients by applying a transform method to the residual signal.
  • The transform method may include at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loeve Transform), GBT (Graph-Based Transform), and CNT (Conditionally Non-linear Transform).
  • Here, CNT refers to a transform obtained based on a prediction signal generated using all previously reconstructed pixels. In addition, the transform process may be applied to square pixel blocks of the same size, or may be applied to non-square blocks of variable size.
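To illustrate the kind of transform named above, here is a floating-point DCT-II applied to a one-dimensional residual row. Real codecs use integer approximations of such transforms on two-dimensional blocks; this sketch only shows the idea.

```python
import math

# Illustrative DCT-II (the "DCT" family named above) on a 1-D signal.
# A constant input concentrates all energy in the DC (k = 0) coefficient,
# which is why transforms compact typical residual energy well.

def dct_ii(samples):
    n = len(samples)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(samples))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out
```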
  • The quantizer 233 quantizes the transform coefficients and transmits them to the entropy encoder 240.
  • the entropy encoding unit 240 encodes the quantized signal (information on quantized transformation coefficients) and outputs it as a bitstream.
  • the information on the quantized transformation coefficients may be referred to as residual information.
  • The quantizer 233 may rearrange the quantized transform coefficients in block form into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
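The coefficient-scan step described above can be sketched with a simple diagonal scan. The actual scan order depends on the codec configuration; this is only an illustration of rearranging a 2-D coefficient block into a 1-D vector.

```python
# Sketch of a coefficient scan: walk the block's anti-diagonals from the
# top-left, flattening the 2-D quantized coefficients into a 1-D vector.

def diagonal_scan(block):
    h, w = len(block), len(block[0])
    out = []
    for s in range(h + w - 1):   # anti-diagonal index x + y = s
        for y in range(h):
            x = s - y
            if 0 <= x < w:
                out.append(block[y][x])
    return out
```

For a 2x3 block [[1, 2, 3], [4, 5, 6]], the diagonal scan yields [1, 2, 4, 3, 5, 6], which differs from the raster order.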
  • The entropy encoder 240 may perform various encoding methods such as, for example, exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
  • The entropy encoder 240 may encode information necessary for video/image reconstruction (e.g., values of syntax elements) together with or separately from the quantized transform coefficients.
  • The encoded information (e.g., encoded video/image information) may be transmitted or stored in units of network abstraction layer (NAL) units in the form of a bitstream.
  • The video/image information may further include information about various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). The video/image information may further include general constraint information.
  • In this document, information and/or syntax elements transmitted/signaled from the encoding apparatus to the decoding apparatus may be included in video/image information.
  • the video/image information may be encoded through the above-described encoding procedure and included in the bitstream.
  • The bitstream may be transmitted through a network or stored on a digital storage medium. Here, the network may include a broadcasting network and/or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • The signal output from the entropy encoding unit 240 may be handled by a transmitting unit (not shown) for transmission and/or a storage unit (not shown) for storage, which may be configured as internal/external elements of the encoding device 200, or the transmitting unit may be included in the entropy encoding unit 240.
  • the quantized transformation coefficients output from the quantization unit 233 can be used to generate a prediction signal.
  • The inverse quantization unit 234 and the inverse transformation unit 235 can restore the residual signal (residual block or residual samples) by applying inverse quantization and inverse transformation to the quantized transform coefficients.
  • The addition unit 250 adds the restored residual signal to the prediction signal output from the inter prediction unit 221 or the intra prediction unit 222 to generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array). If there is no residual for the block to be processed, such as when skip mode is applied, the predicted block can be used as a reconstructed block.
  • The addition unit 250 may be referred to as a reconstruction unit or a reconstructed block generation unit.
  • The generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may also be used for inter prediction of the next picture through filtering as described below.
  • LMCS luma mapping with chroma scaling
  • the filtering unit 260 applies filtering to the restored signal to improve subjective/objective image quality.
  • The filtering unit 260 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and store the modified reconstructed picture in the memory 270, specifically in the DPB of the memory 270.
  • the various filtering methods include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
  • the filtering unit 260 may generate a variety of filtering information and transmit it to the entropy encoding unit 240, as described later in the description of each filtering method.
  • The filtering information is encoded by the entropy encoding unit 240 and can be output as a bitstream.
  • The modified reconstructed picture transmitted to the memory 270 can be used as a reference picture in the inter prediction unit 221.
  • Through this, when inter prediction is applied, a prediction mismatch between the encoding device 200 and the decoding device can be avoided, and encoding efficiency can also be improved.
  • The DPB of the memory 270 can store the modified reconstructed picture for use as a reference picture in the inter prediction unit 221.
  • The memory 270 can store motion information of a block from which motion information in the current picture is derived (or encoded) and/or motion information of blocks in a picture that has already been reconstructed. The stored motion information can be transmitted to the inter prediction unit 221 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • the memory 270 may store restoration samples of the restored blocks in the current picture, and may transmit the restoration samples to the intra prediction unit 222.
  • Figure 3 shows the configuration of a video/video decoding apparatus to which this disclosure can be applied.
  • The decoding apparatus 300 may include an entropy decoder 310, a residual processor 320, a predictor 330, an adder 340, a filter 350, and a memory 360.
  • the prediction unit 330 may include an intra prediction unit 331 and an inter prediction unit 332.
  • The residual processing unit 320 may include a dequantizer 321 and an inverse transformer 322.
  • The entropy decoding unit 310, the residual processing unit 320, the prediction unit 330, the addition unit 340, and the filtering unit 350 may be configured by one hardware component (for example, a decoder chipset or processor) according to an embodiment.
  • the memory 360 may include a decoded picture buffer (DPB). In addition, it may be configured by a digital storage medium.
  • The hardware component may further include the memory 360 as an internal/external component.
  • The decoding device 300 can restore an image in response to the process in which the video/image information was processed in the encoding device of FIG. 2.
  • The decoding device 300 may derive units/blocks based on block division related information acquired from the bitstream.
  • The decoding device 300 may perform decoding using a processing unit applied in the encoding device. Thus, the processing unit of decoding may be, for example, a coding unit, and the coding unit may be divided from a coding tree unit or a largest coding unit according to a quadtree structure, a binary tree structure, and/or a ternary tree structure. One or more transform units may be derived from the coding unit. The reconstructed video signal decoded and output through the decoding device 300 can be reproduced through a playback device.
  • The decoding device 300 may receive the signal output from the encoding device of FIG. 2 in the form of a bitstream, and the received signal can be decoded through the entropy decoding unit 310.
  • The entropy decoding unit 310 can parse the bitstream and derive information (e.g., video/image information) required for image restoration (or picture restoration).
  • The video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
  • the video/video information may further include general constraint information.
  • The decoding device may further decode the picture based on the information on the parameter sets and/or the general constraint information.
  • The signaled/received information and/or syntax elements described later in this document may be decoded through the decoding procedure and obtained from the bitstream.
  • The entropy decoding unit 310 decodes the information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and can output the values of the syntax elements required for image restoration and the quantized values of the transform coefficients for the residual.
  • More specifically, the CABAC entropy decoding method receives the bin corresponding to each syntax element in the bitstream, determines a context model using the decoding target syntax element information, the decoding information of neighboring blocks and the decoding target block, and the symbol/bin information decoded in the previous step, predicts the probability of occurrence of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
  • After determining the context model, the CABAC entropy decoding method can update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
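The per-context probability adaptation described above can be illustrated with a toy sketch; the update rule, the `rate` constant, and the class name below are assumptions for illustration only, not the actual CABAC state machine:

```python
class ContextModel:
    """Toy adaptive context model: tracks the estimated probability of a
    bin being 1 and updates it after every observed bin, in the spirit of
    CABAC's per-context adaptation (not the real multi-state machine)."""
    def __init__(self, p_one=0.5, rate=0.05):
        self.p_one = p_one  # current probability estimate for bin == 1
        self.rate = rate    # adaptation speed (hypothetical constant)

    def update(self, bin_value):
        # Move the estimate toward the observed bin value.
        self.p_one += self.rate * (bin_value - self.p_one)

ctx = ContextModel()
for b in [1, 1, 1, 0, 1]:   # a run of mostly-1 bins
    ctx.update(b)
print(round(ctx.p_one, 3))  # estimate has drifted above 0.5
```

Because the model adapts to the observed bin statistics, frequent symbols end up predicted with high probability, which is what lets the arithmetic coder spend less than one bit on them.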
  • Among the information decoded by the entropy decoding unit 310, information about prediction is provided to the prediction unit (the inter prediction unit 332 and the intra prediction unit 331), and the residual value on which entropy decoding has been performed by the entropy decoding unit 310, that is, the quantized transform coefficients and related parameter information, may be input to the residual processing unit 320.
  • the residual processing unit 320 may derive a residual signal (residual block, residual samples, and residual sample array).
  • Information about filtering, among the information decoded by the entropy decoding unit 310, may be provided to the filtering unit 350.
  • Meanwhile, a receiving unit (not shown) that receives the signal output from the encoding device may be further configured as an internal/external element of the decoding device 300, or the receiving unit may be a component of the entropy decoding unit 310.
  • the decoding device can be called a video/video/picture decoding device, and the decoding device can be divided into an information decoder (video/video/picture information decoder) and a sample decoder (video/video/picture sample decoder).
  • The information decoder may include the entropy decoding unit 310, and the sample decoder may include the inverse quantization unit 321, the inverse transform unit 322, the addition unit 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
  • the inverse quantization unit 321 may inverse quantize the quantized transformation coefficients to output the transformation coefficients.
  • The inverse quantization unit 321 may rearrange the quantized transform coefficients into a two-dimensional block form. In this case, the rearrangement may be performed based on the coefficient scan order performed by the encoding device. The inverse quantization unit 321 can perform inverse quantization on the quantized transform coefficients using a quantization parameter (for example, quantization step size information) and obtain the transform coefficients.
  • The inverse transform unit 322 obtains the residual signal (residual block, residual sample array) by inverse transforming the transform coefficients.
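The inverse quantization step can be sketched minimally as below, assuming a simple uniform-reconstruction model; real codecs derive the step size from the quantization parameter (QP) and apply per-coefficient scaling lists:

```python
def dequantize(qcoeffs, qstep):
    """Inverse quantization: scale quantized levels back by the
    quantization step size (a simplified uniform-reconstruction model;
    in real codecs qstep is derived from the QP, roughly doubling
    every 6 QP steps)."""
    return [level * qstep for level in qcoeffs]

levels = [9, 4, 5, 1, 0, 0]          # quantized levels from the bitstream
print(dequantize(levels, qstep=8))   # → [72, 32, 40, 8, 0, 0]
```

The dequantized values are then fed to the inverse transform to obtain the residual samples.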
  • The prediction unit performs prediction on the current block and may generate a predicted block including prediction samples for the current block.
  • The prediction unit can determine whether intra prediction or inter prediction is applied to the current block based on the information about prediction output from the entropy decoding unit 310, and can determine a specific intra/inter prediction mode.
  • the prediction unit 330 may generate a prediction signal based on various prediction methods to be described later.
  • The prediction unit may apply intra prediction or inter prediction for the prediction of one block, and may also apply intra prediction and inter prediction at the same time. This can be called combined inter and intra prediction (CIIP).
  • For example, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block.
  • The IBC prediction mode or palette mode can be used, for example, for coding of content images/videos such as games, e.g., SCC (screen content coding).
  • IBC basically performs prediction within the current picture, but can be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC can use at least one of the inter prediction techniques described in this document.
  • Palette mode can be seen as an example of intra coding or intra prediction. When the palette mode is applied, information about the palette table and palette index can be included in the video/video information and signaled.
  • the intra prediction unit 331 may predict the current block by referring to samples in the current picture.
  • The referenced samples may be located in the neighborhood of the current block or located apart from it, depending on the prediction mode.
  • the prediction modes may include a plurality of non-directional modes and a plurality of directional modes in intra prediction.
  • The intra prediction unit 331 can also determine the prediction mode applied to the current block using the prediction mode applied to a neighboring block.
  • The inter prediction unit 332 can derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture. In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information can be predicted in units of blocks, sub-blocks, or samples based on the correlation of motion information between a neighboring block and the current block.
  • the motion information may include a motion vector and a reference picture index.
  • The motion information may further include inter prediction direction information (L0 prediction, L1 prediction, Bi prediction, etc.).
  • In the case of inter prediction, the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • The inter prediction unit 332 can construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on received candidate selection information.
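The candidate-list construction and candidate selection can be sketched as follows; the neighbor order, pruning rule, and list size here are illustrative, not the normative derivation:

```python
def build_merge_candidates(spatial, temporal, max_candidates=5):
    """Build a motion-information candidate list from spatial neighbors
    first, then temporal neighbors, dropping unavailable entries and
    duplicates (a simplified merge-list construction sketch)."""
    candidates = []
    for mv in spatial + temporal:
        if mv is not None and mv not in candidates:
            candidates.append(mv)
        if len(candidates) == max_candidates:
            break
    return candidates

# Motion information as (dx, dy, ref_idx); None = neighbor unavailable.
spatial = [(2, 0, 0), None, (2, 0, 0), (1, -1, 0)]
temporal = [(0, 1, 1)]
cands = build_merge_candidates(spatial, temporal)
print(cands)             # pruned candidate list
merge_idx = 1            # signaled candidate selection information
print(cands[merge_idx])  # motion information applied to the current block
```

The decoder only needs the signaled index because it reconstructs the same list from the same (already decoded) neighboring blocks as the encoder.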
  • Inter prediction can be performed based on various prediction modes, and the information on prediction may include information indicating the inter prediction mode for the current block.
  • The addition unit 340 adds the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331) to generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array). If there is no residual for the block to be processed, such as when skip mode is applied, the predicted block can be used as a reconstructed block.
  • the addition unit 340 may be referred to as a restoration unit or a restoration block generation unit.
  • The generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, may be output through filtering as described later, or may be used for inter prediction of the next picture.
  • LMCS luma mapping with chroma scaling
  • the filtering unit 350 applies filtering to the restored signal to improve subjective/objective image quality.
  • The filtering unit 350 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and store the modified reconstructed picture in the memory 360, specifically in the DPB of the memory 360.
  • The various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
  • the (modified) restored picture stored in the DPB of the memory 360 can be used as a reference picture in the inter prediction unit 332.
  • The memory 360 can store motion information of a block from which motion information in the current picture is derived (or decoded) and/or motion information of blocks in a picture that has already been reconstructed. The stored motion information can be transmitted to the inter prediction unit 332 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
  • the memory 360 can store reconstructed samples of the restored blocks in the current picture, and can transfer them to the intra prediction unit 331.
  • In this document, the embodiments described for the filtering unit 260, the inter prediction unit 221, and the intra prediction unit 222 of the encoding apparatus 200 may be applied to be the same as, or respectively correspond to, the filtering unit 350, the inter prediction unit 332, and the intra prediction unit 331 of the decoding apparatus 300.
  • Through prediction, a predicted block including prediction samples for the current block, which is a block to be coded, can be generated. Here, the predicted block includes prediction samples in the spatial domain (or pixel domain).
  • The predicted block is derived identically in the encoding device and the decoding device, and the encoding device can improve image coding efficiency by signaling information about the residual between the original block and the predicted block (residual information) to the decoding device, rather than the original sample values of the original block themselves.
  • Based on the residual information, the decoding apparatus may derive a residual block including residual samples, combine the residual block and the predicted block to generate a reconstructed block including reconstructed samples, and generate a reconstructed picture including the reconstructed blocks.
  • The encoding apparatus derives a residual block between the original block and the predicted block, performs a transform procedure on the residual samples (residual sample array) included in the residual block to derive transform coefficients, performs a quantization procedure on the transform coefficients to derive quantized transform coefficients, and can signal the related residual information to the decoding device (via a bitstream).
  • The residual information can include information such as the value information and position information of the quantized transform coefficients, the transform technique, the transform kernel, and the quantization parameter.
  • The decoding device can perform inverse quantization/inverse transform procedures based on the residual information and derive the residual samples (or residual block).
  • the decoding device can generate a reconstructed picture based on the predicted block and the residual block.
  • The encoding device can also inverse quantize/inverse transform the quantized transform coefficients to derive a residual block for reference for inter prediction of a subsequent picture, and can create a reconstructed picture based on it.
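The residual round-trip shared by the encoder and decoder can be sketched as below (the transform stage is omitted, and the sample values and step size are illustrative):

```python
def encode_decode_residual(orig, pred, qstep):
    """Sketch of the residual path: residual = orig - pred, quantize,
    then inverse quantize and add back to the prediction to form the
    reconstruction used as reference for later pictures."""
    residual = [o - p for o, p in zip(orig, pred)]
    levels = [round(r / qstep) for r in residual]     # quantization
    recon_res = [l * qstep for l in levels]           # inverse quantization
    recon = [p + r for p, r in zip(pred, recon_res)]  # reconstructed samples
    return levels, recon

orig = [101, 107, 95, 90]   # original samples
pred = [98, 100, 98, 93]    # prediction samples
levels, recon = encode_decode_residual(orig, pred, qstep=4)
print(levels)  # quantized residual levels carried in the bitstream
print(recon)   # reconstruction, identical on encoder and decoder
```

Because both sides run the same inverse quantization on the same levels, the encoder's reference picture matches the decoder's, avoiding the prediction mismatch mentioned earlier.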
  • FIG. 4 exemplarily shows a hierarchical structure for coded data.
  • The coded data can be divided into a video coding layer (VCL) that handles the video/image coding process itself, and a network abstraction layer (NAL) that exists between the VCL and the sub-system that stores and transmits the coded video/image data.
  • VCL video coding layer
  • NAL network abstraction layer
  • In the VCL, VCL data including compressed image data (slice data) can be generated, or parameter sets corresponding to headers such as pictures and sequences (picture parameter set (PPS), sequence parameter set (SPS), video parameter set (VPS), etc.) or an SEI (Supplemental Enhancement Information) message additionally required for the video/image coding process can be generated.
  • PPS picture parameter set
  • SPS sequence parameter set
  • VPS video parameter set
  • SEI Supplemental Enhancement Information
  • the SEI message is separated from the video/image information (slice data).
  • the VCL containing the video/image information consists of the slice data and the slice header.
  • the slice header is a tile group header. It may be referred to as, and the slice data may be referred to as tile group data.
  • In the NAL, a NAL unit can be created by adding header information (NAL unit header) to the RBSP (raw byte sequence payload) generated in the VCL. Here, the RBSP refers to the slice data, parameter set, SEI message, etc. generated in the VCL.
  • the NAL unit header may include NAL unit type information specified according to RBSP data included in the corresponding NAL unit.
  • The NAL unit, which is the basic unit of the NAL, plays the role of mapping the coded image onto the bit string of sub-systems such as a file format, RTP (Real-time Transport Protocol), or TS (Transport Stream) according to a predetermined standard.
  • The NAL unit can be divided into a VCL NAL unit and a Non-VCL NAL unit according to the RBSP generated in the VCL. The VCL NAL unit can mean a NAL unit that contains information about the image (slice data), and the Non-VCL NAL unit can mean a NAL unit that contains the information (parameter set or SEI message) necessary for decoding the image.
  • VCL NAL unit and Non-VCL NAL unit can be transmitted through a network by attaching header information according to the data standard of the sub-system.
  • The NAL unit can be transformed into data formats of predetermined standards, such as the H.266/VVC file format, RTP (Real-time Transport Protocol), and TS (Transport Stream), and transmitted through various networks.
  • As described above, the NAL unit type may be specified according to the RBSP data structure included in the NAL unit, and information on the NAL unit type may be stored in the NAL unit header and signaled.
  • VCL NAL unit type can be classified according to the properties and types of pictures included in the VCL NAL unit
  • non-VCL NAL unit type can be classified according to the type of parameter set.
  • The following is an example of NAL unit types specified according to the type of parameter set included in the Non-VCL NAL unit type.
  • For example, the NAL unit types may include an APS (Adaptation Parameter Set) NAL unit, which is a type for NAL units including an APS; a DPS (Decoding Parameter Set) NAL unit, which is a type for NAL units including a DPS; a VPS (Video Parameter Set) NAL unit; an SPS (Sequence Parameter Set) NAL unit; and a PPS (Picture Parameter Set) NAL unit.
  • APS Adaptation Parameter Set
  • VPS Video Parameter Set
  • SPS Sequence Parameter Set
  • PPS Picture Parameter Set
  • The above-described NAL unit types have syntax information for the NAL unit type, and the syntax information may be stored in the NAL unit header and signaled. For example, the syntax information may be nal_unit_type, and NAL unit types may be specified by nal_unit_type values.
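As an illustration of how nal_unit_type is carried in the NAL unit header, the two-byte header can be parsed as below; the bit layout follows the published H.266/VVC header, and draft versions at the time of this application may have differed:

```python
def parse_nal_unit_header(byte0, byte1):
    """Parse a two-byte VVC-style NAL unit header (published H.266/VVC
    layout): forbidden_zero_bit(1) | nuh_reserved_zero_bit(1) |
    nuh_layer_id(6) | nal_unit_type(5) | nuh_temporal_id_plus1(3)."""
    return {
        "forbidden_zero_bit": (byte0 >> 7) & 0x1,
        "nuh_layer_id": byte0 & 0x3F,
        "nal_unit_type": (byte1 >> 3) & 0x1F,
        "nuh_temporal_id_plus1": byte1 & 0x07,
    }

hdr = parse_nal_unit_header(0x00, 0x81)
print(hdr["nal_unit_type"])  # → 16; identifies the RBSP type carried in the unit
```

A decoder inspects nal_unit_type first to know whether the payload is slice data (a VCL NAL unit) or a parameter set/SEI message (a Non-VCL NAL unit).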
  • one picture can contain a plurality of slices, and one slice can contain a slice header and slice data.
  • One picture header can be added for the multiple slices (slice headers and slice data) within one picture.
  • the picture header may include information/parameters commonly applicable to the picture.
  • the slice header may include information/parameters commonly applicable to the slice.
  • The APS (APS syntax) or the PPS (PPS syntax) may include information/parameters commonly applicable to one or more slices or pictures.
  • The SPS (SPS syntax) may include information/parameters commonly applicable to one or more sequences.
  • The VPS (VPS syntax) may include information/parameters commonly applicable to multiple layers.
  • the DPS may include information/parameters commonly applicable to overall video.
  • The DPS may include information/parameters related to the concatenation of coded video sequences (CVS).
  • In this document, the high level syntax may include at least one of the APS syntax, PPS syntax, SPS syntax, VPS syntax, DPS syntax, picture header syntax, and slice header syntax.
  • In this document, the image/video information encoded by the encoding device and signaled to the decoding device in the form of a bitstream includes not only intra-picture partitioning information, intra/inter prediction information, residual information, and in-loop filtering information, but may also include information included in the slice header, information included in the picture header, information included in the APS, information included in the PPS, information included in the SPS, information included in the VPS, and/or information included in the DPS. In addition, the image/video information may further include information of the NAL unit header.
  • FIG. 5 is a diagram showing an example of partitioning a picture.
  • CTUs coding tree units
  • the CTU can include a coding tree block of luma samples and two coding tree blocks of chroma samples corresponding thereto.
  • The maximum allowable size of the CTU for coding and prediction may be different from the maximum allowable size of the CTU for transform.
  • a tile can correspond to a series of CTUs that cover a rectangular area of a picture, and a picture can be divided into one or more tile rows and one or more tile columns.
  • A slice may consist of an integer number of complete tiles of a picture or an integer number of consecutive complete CTU rows within a tile.
  • two slice modes including a raster-scan slice mode and a rectangular slice mode can be supported.
  • In the raster-scan slice mode, a slice can contain a series of complete tiles in a tile raster scan of a picture.
  • In the rectangular slice mode, a slice can contain either a number of complete tiles that collectively form a rectangular region of the picture, or a number of consecutive complete CTU rows within one tile that collectively form a rectangular region of the picture. Tiles within a rectangular slice can be scanned in tile raster scan order within the rectangular region corresponding to that slice.
  • Figure 5 (a) shows an example of dividing a picture into tiles and raster scan slices. For example, a picture can be divided into 12 tiles and 3 raster scan slices.
  • Figure 5 (b) shows an example of dividing a picture into tiles and rectangular slices.
  • For example, a picture can be divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.
  • Figure 5 (c) shows an example of dividing the picture into tiles and rectangular slices. For example, a picture can be divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices.
  • FIG. 6 is a flowchart illustrating a procedure for encoding a picture based on a tile and/or a tile group according to an embodiment.
  • In FIG. 6, the step of determining a tile/tile group and generating information on the tile/tile group (S610) may be performed by the image dividing unit 210 of the encoding device, and the step of encoding video/image information including the information on the tile/tile group (S620) may be performed by the entropy encoding unit 240 of the encoding device.
  • The encoding apparatus may perform picture partitioning for encoding an input picture (S600).
  • the picture may include one or more tiles/tile groups.
  • Considering the image characteristics and coding efficiency of the picture, the encoding apparatus can partition the picture into various types, and information indicating the partitioning type with the optimal coding efficiency can be generated and signaled to the decoding device.
  • The encoding apparatus can determine a tile/tile group applied to the picture and generate information about the tile/tile group (S610).
  • the information on the tile/tile group may include information indicating the structure of the tile/tile group for the picture.
  • The information on the tile/tile group may be signaled through various parameter sets and/or a tile group header, as described later. A specific example is described below.
  • The encoding apparatus may encode video/image information including the information on the tile/tile group and output it in the form of a bitstream (S620).
  • The bitstream can be delivered to the decoding device through a digital storage medium or a network.
  • The video/image information may include the tile and/or tile group header syntax described in this document.
  • The video/image information may further include prediction information, residual information, and (in-loop) filtering information.
  • the encoding apparatus may restore the current picture, apply in-loop filtering, and encode the parameters related to the in-loop filtering, and output in a bitstream format.
  • FIG. 7 is a flowchart illustrating a tile and/or tile group-based picture decoding procedure according to an embodiment.
  • In FIG. 7, the step of obtaining information on a tile/tile group from a received bitstream (S700) may be performed by the entropy decoding unit 310 of the decoding device, and the step of performing tile/tile group-based picture decoding (S720) may be performed by a sample decoder of the decoding apparatus.
  • A decoding apparatus can obtain information on a tile/tile group from a received bitstream (S700).
  • the information on the tile/tile group can be obtained through various parameter sets and/or tile group headers as described later. A specific example will be described later.
  • The decoding apparatus may derive the tile/tile group in the current picture based on the information on the tile/tile group (S710).
  • The decoding apparatus may decode the current picture based on the tile/tile group (S720). For example, the decoding apparatus may derive a CTU/CU located in a tile and, based on it, perform inter/intra prediction, residual processing, reconstructed block (picture) generation, and/or in-loop filtering procedures. In this case, for example, the decoding device may initialize/update the context model/information in tile/tile group units. In addition, if a neighboring block or neighboring sample referenced during inter/intra prediction is located in a tile different from the current tile where the current block is located, the decoding device may treat the neighboring block or neighboring sample as not available.
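The availability rule for neighbors across tile boundaries can be sketched as follows, assuming a simple map from CTU address to tile id (the function and map names are illustrative):

```python
def is_neighbor_available(tile_of, current_ctu, neighbor_ctu):
    """Treat a neighboring block as unavailable for intra/inter reference
    when it lies outside the picture or in a different tile than the
    current block (tile_of maps a CTU address to its tile id)."""
    if neighbor_ctu is None:  # neighbor falls outside the picture
        return False
    return tile_of[neighbor_ctu] == tile_of[current_ctu]

# 4x2 CTU picture split into two side-by-side tiles (illustrative map).
tile_of = {0: 0, 1: 0, 4: 0, 5: 0, 2: 1, 3: 1, 6: 1, 7: 1}
print(is_neighbor_available(tile_of, current_ctu=2, neighbor_ctu=1))  # → False
```

Breaking the reference dependency at tile boundaries is precisely what allows tiles (and tile groups) to be decoded independently.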
  • FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
  • tiles may refer to regions within a picture defined by a set of vertical and/or horizontal boundaries that divide the picture into a plurality of rectangles.
  • FIG. 8 shows an example in which one picture 800 is divided into a plurality of tiles based on a plurality of column boundaries 810 and row boundaries 820. In FIG. 8, 32 maximum coding units, i.e., Coding Tree Units (CTUs), are numbered and shown.
  • each tile may include an integer number of CTUs processed in a raster scan order within each tile.
  • A plurality of tiles within a picture, like the CTUs included in each tile, can also be processed in raster scan order within the picture.
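The two-level raster order (tiles in raster order within the picture, CTUs in raster order within each tile) can be sketched as below, with illustrative tile boundaries given in CTU units:

```python
def ctu_order(pic_w_ctus, pic_h_ctus, col_bounds, row_bounds):
    """List CTU addresses (picture raster addresses) in decoding order:
    tiles are visited in raster order within the picture, and CTUs in
    raster order within each tile (a simplified tile-scan sketch)."""
    cols = [0] + col_bounds + [pic_w_ctus]  # tile column boundaries in CTUs
    rows = [0] + row_bounds + [pic_h_ctus]  # tile row boundaries in CTUs
    order = []
    for ty in range(len(rows) - 1):
        for tx in range(len(cols) - 1):     # tiles in picture raster order
            for y in range(rows[ty], rows[ty + 1]):
                for x in range(cols[tx], cols[tx + 1]):
                    order.append(y * pic_w_ctus + x)  # CTUs within the tile
    return order

# A 4x2 CTU picture split into 2 tile columns (boundary after column 2).
print(ctu_order(4, 2, col_bounds=[2], row_bounds=[]))  # → [0, 1, 4, 5, 2, 3, 6, 7]
```

Note how the decoding order differs from the plain picture raster order [0..7]: all CTUs of the left tile are processed before the right tile is entered.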
  • The tiles can be grouped to form tile groups, and tiles within a single tile group can be raster scanned. Dividing a picture into tiles can be defined based on the syntax and semantics of the Picture Parameter Set (PPS).
  • PPS Picture Parameter Set
  • The information about tiles derived from the PPS may be used to check (or read) the following items. First, it is checked whether one tile exists in the picture or whether more than one tile exists. If more than one tile is present, it can be checked whether the tiles are uniformly distributed, the dimensions of the tiles can be checked, and whether the loop filter is enabled can be checked.
  • the PPS may first signal the syntax element single_tile_in_pic_flag.
  • the single_tile_in_pic_flag may indicate whether only one tile in a picture exists or whether a plurality of tiles in a picture exist.
  • The decoding device can parse information about the number of tile rows and tile columns using the syntax elements num_tile_columns_minus1 and num_tile_rows_minus1.
  • When present, the syntax elements num_tile_columns_minus1 and num_tile_rows_minus1 specify the partitioning of the picture into tile columns and tile rows.
  • An additional flag can be parsed to check whether the heights of tile rows and the widths of tile columns are uniform in terms of CTBs. If the tiles in the picture are not uniformly spaced, the number of CTBs per tile can be explicitly signaled for each tile row and tile column boundary (i.e., the number of CTBs in each tile column and the number of CTBs in each tile row can be signaled). If the tiles are uniformly spaced, the tiles can have the same width and height.
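For the uniformly spaced case, tile column widths can be derived with HEVC-style integer division, as sketched below; widths then differ by at most one CTB when the picture width is not evenly divisible (the exact derivation belongs to the PPS semantics):

```python
def uniform_tile_col_widths(pic_width_in_ctbs, num_tile_columns):
    """Derive tile column widths in CTBs for uniformly spaced tiles
    (HEVC-style derivation via integer division; an analogous formula
    applies to tile row heights)."""
    return [(i + 1) * pic_width_in_ctbs // num_tile_columns
            - i * pic_width_in_ctbs // num_tile_columns
            for i in range(num_tile_columns)]

print(uniform_tile_col_widths(10, 3))  # → [3, 3, 4]
```

Because the widths are derived rather than signaled, the uniform case needs no per-column syntax elements in the PPS.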
  • Another flag (for example, the syntax element loop_filter_across_tiles_enabled_flag) can be parsed to determine whether a loop filter is enabled across tile boundaries.
  • Table 1 summarizes examples of main information about tiles that can be derived by parsing the PPS.
  • Table 1 can represent the PPS RBSP syntax.
  • FIG. 9 is a block diagram showing a configuration of an encoding apparatus according to an embodiment
  • FIG. 10 is a block diagram showing a configuration of a decoding apparatus according to an embodiment.
  • the encoding device 900 shown in FIG. 9 includes a partitioning module 910 and an encoding module 920.
  • The partitioning module 910 may perform the same and/or similar operations as the image division unit 210 of the encoding device shown in FIG. 2, and the encoding module 920 may perform the same and/or similar operations as the entropy encoding unit 240 of the encoding apparatus shown in FIG. 2.
  • After the input video is divided in the partitioning module 910, it can be encoded in the encoding module 920, and the encoded video can be output from the encoding device 900.
  • An example of a block diagram of a decoding apparatus is shown in FIG. 10. The decoding apparatus 1000 shown in FIG. 10 includes a decoding module 1010 and a deblocking filter 1020.
  • The decoding module 1010 can perform the same and/or similar operations as the entropy decoding unit 310 of the decoding apparatus shown in FIG. 3, and the deblocking filter 1020 can perform the same and/or similar operations as the filtering unit 350 of the decoding apparatus shown in FIG. 3.
  • The decoding module 1010 can decode the input bitstream received from the encoding device 900 to derive information about tiles, and can determine a processing unit based on the decoded information. The deblocking filter 1020 may apply an in-loop deblocking filter to the processing unit.
  • In-loop filtering may be applied to remove coding artifacts generated during the partitioning process.
  • the in-loop filtering operation may include an adaptive loop filter (ALF), a deblocking filter (DF), a sample adaptive offset (SAO), etc. After the in-loop filtering, the decoded picture can be output.
  • FIG. 11 is a diagram showing an example of a tile and a tile group unit constituting the current picture.
  • tiles can be grouped to form tile groups.
  • FIG. 11 shows an example in which one picture is divided into tiles and tile groups.
  • the picture includes 9 tiles and 3 tile groups.
  • Each tile group can be independently coded.
  • each tile group has a tile group header.
  • Tile groups can have a similar meaning to slice groups.
  • a tile group can contain one or more tiles.
  • a tile group header can refer to a PPS, and the PPS can sequentially refer to an SPS (Sequence Parameter Set).
  • the following information can be determined from the tile group header. First, if more than one tile exists per picture, the tile group address and the number of tiles in the tile group are determined. Next, the tile group type, such as intra/predictive/bi-directional, can be determined. Next, the Least Significant Bits (LSB) of the picture order count (POC) can be determined. Next, if there is more than one tile in a picture, the offset length and the entry points to the tiles can be determined.
  • Table 4 shows an example of the syntax of the tile group header.
  • the tile group header (tile_group_header) can be replaced by a slice header.
  • Table 5 below shows an example of English semantics for the syntax of the tile group header.
  • The tile group header syntax elements tile_group_pic_parameter_set_id and tile_group_pic_order_cnt_lsb shall be the same in all tile group headers of a coded picture.
  • tile_group_pic_parameter_set_id specifies the value of pps_pic_parameter_set_id for the PPS in use.
  • the value of tile_group_pic_parameter_set_id shall be in the range of 0 to 63, inclusive.
  • the TemporalId of the current picture shall be greater than or equal to the value of TemporalId of the PPS that has pps_pic_parameter_set_id equal to tile_group_pic_parameter_set_id.
  • tile_group_address specifies the tile address of the first tile in the tile group, where the tile address is the tile ID as specified by Equation c-7.
  • the length of tile_group_address is Ceil( Log2( NumTilesInPic ) ) bits.
  • the value of tile_group_address shall be in the range of 0 to NumTilesInPic - 1, inclusive. When tile_group_address is not present, it is inferred to be equal to 0.
  • num_tiles_in_tile_group_minus1 plus 1 specifies the number of tiles in the tile group. The value of num_tiles_in_tile_group_minus1 shall be in the range of 0 to NumTilesInPic - 1, inclusive.
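The fixed-length coding of tile_group_address described above uses Ceil( Log2( NumTilesInPic ) ) bits. As an illustrative sketch (the function name is hypothetical), this bit length can be computed as:

```python
import math

def tile_group_address_bits(num_tiles_in_pic):
    """Number of bits used to code tile_group_address:
    Ceil( Log2( NumTilesInPic ) )."""
    return math.ceil(math.log2(num_tiles_in_pic))

print([tile_group_address_bits(n) for n in (2, 9, 16)])  # [1, 4, 4]
```

For example, a picture with 9 tiles needs 4 bits, since the address can take values 0 to 8.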
  • tile_group_type specifies the coding type of the tile group according to Table 6.
  • When nal_unit_type is equal to IRAP_NUT, i.e., the picture is an IRAP picture, tile_group_type shall be equal to 2.
  • tile_group_pic_order_cnt_lsb specifies the picture order count modulo MaxPicOrderCntLsb for the current picture.
  • the length of the tile_group_pic_order_cnt_lsb syntax element is log2_max_pic_order_cnt_lsb_minus4 + 4 bits.
  • the value of tile_group_pic_order_cnt_lsb shall be in the range of 0 to MaxPicOrderCntLsb - 1, inclusive.
  • offset_len_minus1 plus 1 specifies the length, in bits, of the entry_point_offset_minus1[ i ] syntax elements.
  • the value of offset_len_minus1 shall be in the range of 0 to 31, inclusive.
  • entry_point_offset_minus1[ i ] plus 1 specifies the i-th entry point offset in bytes, and is represented by offset_len_minus1 plus 1 bits.
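As an illustrative sketch of the entry-point mechanism above (the function name is hypothetical), the decoded entry_point_offset_minus1[ i ] values partition the tile group data into byte ranges, one per subset, so each tile's coded data can be located directly:

```python
def subset_byte_ranges(total_len, entry_point_offset_minus1):
    """Map entry_point_offset_minus1[] values to (start, end) byte ranges
    of the num_tiles_in_tile_group_minus1 + 1 subsets of tile group data."""
    starts = [0]
    for off in entry_point_offset_minus1:
        # each offset value plus 1 gives the size of the preceding subset
        starts.append(starts[-1] + off + 1)
    ends = starts[1:] + [total_len]
    return list(zip(starts, ends))

print(subset_byte_ranges(100, [39, 24]))  # [(0, 40), (40, 65), (65, 100)]
```

This direct addressing is what enables the tiles of one tile group to be handed to different decoding threads.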
  • the tile group data that follow the tile group header consists of num_tiles_in_tile_group_minus1 + 1 subsets, with subset index values ranging from 0 to num_tiles_in_tile_group_minus1, inclusive.
  • the tile group may include a tile group header and tile group data.
  • each CTU in the tile group can be mapped to its location and decoded.
  • Table 7 below shows an example of the syntax of tile group data. In Table 7, tile group data can be replaced with slice data.
  • Table 8 below shows an example of English semantics for the syntax of the tile group data.
  • tbY = ctbAddrRs / PicWidthInCtbsY
  • NumCtusInTile[ tileIdx ] = ColWidth[ i ] * RowHeight[ j ]
  • tileStartFlag = 0
  • tileStartFlag = 1
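The derivation fragments above relate a CTB raster-scan address to its tile. A minimal sketch of that mapping (function name and the cumulative-boundary representation are assumptions for illustration) is:

```python
def ctb_to_tile(ctb_addr_rs, pic_width_in_ctbs, col_bounds, row_bounds):
    """Map a CTB raster-scan address to its tile index.

    col_bounds/row_bounds are cumulative tile boundary positions in CTBs,
    e.g. [0, 3, 6, 10] for tile-column widths [3, 3, 4]."""
    tb_x = ctb_addr_rs % pic_width_in_ctbs   # CTB column in the picture
    tb_y = ctb_addr_rs // pic_width_in_ctbs  # CTB row in the picture
    col = max(i for i in range(len(col_bounds) - 1) if tb_x >= col_bounds[i])
    row = max(j for j in range(len(row_bounds) - 1) if tb_y >= row_bounds[j])
    return row * (len(col_bounds) - 1) + col

# picture 10 CTBs wide, tile columns [3, 3, 4], tile rows [2, 2]
print(ctb_to_tile(34, 10, [0, 3, 6, 10], [0, 2, 4]))  # 4
```

NumCtusInTile for a tile is then simply the product of its column width and row height in CTBs.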
  • Some implementations running on CPUs require dividing the source picture into tiles and tile groups, where each tile group can be processed in parallel on a separate core.
  • such parallel processing is useful for high-resolution real-time encoding of videos.
  • the above parallel processing can reduce the sharing of information between tile groups, thereby relaxing the memory constraint. Tiles can be distributed to different threads while being processed in parallel. Therefore, a parallel architecture can benefit from this partitioning mechanism.
  • Next, matching of the maximum transmission unit (MTU) size is reviewed.
  • coded pictures transmitted through a network are subject to fragmentation when the coded pictures are larger than the MTU size. Similarly, if the coded segments are small, the overhead of the IP (Internet Protocol) header can become significant. Packet fragmentation can lead to loss of error resiliency. To mitigate the effects of packet fragmentation, the picture can be divided into tiles and each tile/tile group can be packed as a separate packet, so that each packet is smaller than the MTU size.
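A minimal sketch of the packing idea above (the 1500-byte value is the typical Ethernet payload limit, used here only as an example; names are hypothetical):

```python
MTU = 1500  # bytes, illustrative Ethernet payload limit

def packetize(tile_group_payloads, mtu=MTU):
    """Pack each coded tile group as its own packet; flag any payload
    that would still exceed the MTU and require IP fragmentation."""
    packets = []
    for idx, payload in enumerate(tile_group_payloads):
        packets.append({"tile_group": idx,
                        "size": len(payload),
                        "fits_mtu": len(payload) <= mtu})
    return packets

print(packetize([b"\x00" * 1200, b"\x00" * 2000]))
```

An encoder aware of the MTU can choose the tile partitioning so every tile group payload satisfies the fits_mtu condition.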
  • FIG. 13 is a diagram showing an example of a picture in a video conference video program.
  • flexible tiling can be achieved by using a predefined rectangular area.
  • Fig. 13 shows an example of a picture in a video program for video conferencing when a video conference with multiple participants is held.
  • the participants are Speaker 1, Speaker 2, Speaker 3, and Speaker 4.
  • the area corresponding to each participant in the picture can correspond to each of the preset areas, and each of the preset areas can be coded as a single tile or a group of tiles.
  • the single tile or group of tiles corresponding to the participant may also change.
  • FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video conference video program.
  • an area assigned to Speaker 1 participating in a video conference may be coded as a single tile.
  • the areas assigned to each of Speaker 2, Speaker 3, and Speaker 4 can be coded as a single tile.
  • FIG. 15 is a diagram illustrating an example of partitioning a picture into tiles or tile groups based on MCTS (Motion Constrained Tile Set).
  • a picture can be acquired from 360 degree video data.
  • 360 video can mean video or image content that is simultaneously captured or played back in all directions (360 degrees), as required to provide VR (Virtual Reality).
  • 360 video can refer to a video or image that appears in various types of 3D space depending on the 3D model.
  • a 360 video can be displayed on a spherical surface.
  • a two-dimensional space (2D) picture obtained from 360-degree video data can be encoded with at least one spatial resolution.
  • a picture can be encoded with a first resolution and a second resolution, and the first resolution may be higher than the second resolution.
  • a picture can be encoded in two spatial resolutions, each having a size of 1536x1536 and 768x768, but the spatial resolution is not limited thereto and may correspond to various sizes.
  • a 6x4 size tile grid may be used for the bitstreams encoded at each of the two spatial resolutions.
  • a motion constraint tile set (MCTS) for each position of the tiles may be coded and used.
  • each of the MCTSs may include tiles positioned in respective areas set for a picture.
  • an MCTS may contain at least one tile to form a rectangular tile set.
  • a tile can represent a rectangular area composed of coding tree blocks (CTBs) of a two-dimensional picture.
  • a tile can be classified based on a specific tile row and tile column within a picture.
  • when inter prediction is performed on the blocks within a specific MCTS in the encoding/decoding process, the blocks within the specific MCTS may be restricted to refer only to the corresponding MCTS of the reference picture for motion estimation/motion compensation.
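One simple way an encoder can honor this restriction is to clamp candidate motion vectors so the referenced block stays inside the MCTS region of the reference picture. The sketch below is an illustrative integer-pel model (names and the rectangle representation are assumptions; real codecs also account for sub-pel interpolation margins):

```python
def clamp_mv_to_mcts(bx, by, bw, bh, mv, mcts):
    """Clamp an integer motion vector so the reference block for a
    (bw x bh) block at (bx, by) stays inside the MCTS rectangle
    (x0, y0, x1, y1) of the reference picture."""
    x0, y0, x1, y1 = mcts
    mvx, mvy = mv
    mvx = max(x0 - bx, min(mvx, x1 - (bx + bw)))
    mvy = max(y0 - by, min(mvy, y1 - (by + bh)))
    return mvx, mvy

# 16x16 block at (40, 8) inside an MCTS covering (32, 0)-(96, 64)
print(clamp_mv_to_mcts(40, 8, 16, 16, (50, -20), (32, 0, 96, 64)))  # (40, -8)
```

Because no sample outside the MCTS is ever referenced, each MCTS can later be extracted or merged independently, which is what the viewport-dependent example below relies on.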
  • the 12 first MCTSs 1510 may be selected from the bitstream encoded at the 1536x1536 resolution.
  • the first MCTSs 1510 may correspond to a region having a first resolution in the same picture.
  • the second MCTSs 1520 may correspond to a region having a second resolution in the same picture.
  • the first MCTSs may correspond to the viewport area in the picture.
  • the viewport area may refer to the area that the user is viewing in the 360-degree video.
  • the first MCTSs may correspond to the ROI (Region of Interest) in the picture.
  • the ROI area can refer to an area of interest to users, as suggested by the 360 content provider.
  • the MCTSs can be merged into a 1920x4708-sized merged picture 1530, and the merged picture 1530 can have four tile groups.
  • tile_addr_val[ i ][ j ] specifies the tile_group_address value of the tile of the i-th tile row and the j-th tile column.
  • the length of tile_addr_val[ i ][ j ] is tile_addr_len_minus1 + 1 bits.
  • tile_addr_val[ i ][ j ] shall not be equal to tile_addr_val[ m ][ n ] when i is not equal to m or j is not equal to n.
  • num_mcts_in_pic_minus1 plus 1 specifies the number of MCTSs in the picture.
  • a syntax element uniform_tile_spacing_flag indicating whether tiles having the same width and height are to be derived by dividing the picture uniformly may be parsed.
  • the uniform_tile_spacing_flag can be used to indicate whether the tiles in the picture are divided in a uniform manner.
  • when the syntax element uniform_tile_spacing_flag is not enabled, syntax elements indicating the width of each tile column and the height of each tile row can be parsed.
  • a syntax element indicating whether the tiles in the picture form an MCTS (for example, an MCTS enabled flag) can be parsed. The syntax element may indicate whether the tiles or groups of tiles in the picture form a rectangular tile set, and whether the use of sample values or variables outside the rectangular tile set is restricted or unrestricted. When the syntax element is 1, it can indicate that the picture is divided into MCTSs.
  • the syntax element num_mcts_in_pic_minus1 may represent the number of MCTSs in the picture. In one embodiment, when the corresponding flag is 1, i.e., when the picture is divided into MCTSs, the syntax element num_mcts_in_pic_minus1 can be parsed.
  • for the i-th tile group, the tile_group_address value, which is the position of the tile located at the top-left, can be indicated.
  • the syntax element bottom_right_tile_addr[ i ] can indicate the tile_group_address value, which is the position of the tile located at the bottom-right of the i-th tile group.
  • Table 11 shows an example of the tile group data syntax.
  • tile group data can be replaced with slice data.
  • Table 12 below shows English semantics for the tile group data syntax.
  • FIG. 16 is a diagram showing an example of dividing a picture based on an ROI region.
  • for tiling that partitions a picture into a plurality of tiles, flexible tiling based on a region of interest (ROI) can be achieved.
  • in FIG. 16, a picture can be divided into multiple tile groups based on the ROI region.
  • Table 15 below shows an example of English semantics for the above syntax.
  • a flag tile_group_info_in_pps_flag indicating whether tile group information related to the tiles included in a tile group exists in the PPS or in a tile group header referring to the PPS may be parsed.
  • when tile_group_info_in_pps_flag is 1, it can indicate that the tile group information exists in the PPS; when tile_group_info_in_pps_flag is 0, it can indicate that the tile group information does not exist in the PPS and exists in the tile group header referring to the PPS.
  • the syntax element num_tile_groups_in_pic_minus1 plus 1 may indicate the number of tile groups in the picture referring to the PPS.
  • the syntax element pps_first_tile_id can represent the tile ID of the first tile of each tile group, and the syntax element pps_last_tile_id can represent the tile ID of the last tile of each tile group.
  • FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
  • for tiling that divides a picture into a plurality of tiles, flexible tiling can be achieved by allowing tiles smaller than the size of the coding tree unit (CTU).
  • the tiling structure according to this method can be usefully applied to modern video applications such as video conferencing programs.
  • a picture may be partitioned into a plurality of tiles, and the size of at least one of the plurality of tiles may be smaller than the size of the coding tree unit (CTU). For example, a picture can be partitioned into Tile 1, Tile 2, Tile 3, and Tile 4, among which the sizes of Tile 1, Tile 2, and Tile 4 are smaller than the size of the CTU.
  • the syntax element tile_size_unit_idc may represent the unit size of the tile. For example, when tile_size_unit_idc is 0, 1, 2, ..., the unit for the height and width of a tile can be defined as 4, 8, 16, ... in terms of a coding tree block (CTB).
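Under the reading above, each increment of tile_size_unit_idc doubles the tile size unit. The following one-line sketch is an assumption for illustration only (the exact mapping is defined by the syntax tables, not by this formula):

```python
def tile_size_unit(tile_size_unit_idc):
    """Illustrative mapping (an assumption based on the text): the tile
    width/height unit doubles with each idc value: 4, 8, 16, ..."""
    return 4 << tile_size_unit_idc

print([tile_size_unit(i) for i in range(4)])  # [4, 8, 16, 32]
```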
  • FIG. 18 shows an example of partitioning a picture into a plurality of tiles and tile groups.
  • a plurality of tiles within a picture can be grouped into a plurality of tile groups, and flexible tiling can be achieved by applying a tile group index to the plurality of tile groups.
  • each tile group can contain tiles arranged in a non-raster scan order.
  • a picture can be partitioned into a plurality of tiles, and the plurality of tiles can be grouped into Tile Group 1, Tile Group 2, and Tile Group 3, where each of Tile Group 1, Tile Group 2, and Tile Group 3 can contain tiles arranged in a non-raster scan order.
  • Table 18 below shows an example of the syntax of the tile group header (tile_group_header).
  • tile group headers can be replaced with slice headers.
  • a syntax element (e.g., tile_group_index) that designates an index of each of a plurality of tile groups within a picture may be signaled/parsed.
  • the value of tile_group_index for a tile group NAL unit is not the same as the value of tile_group_index of another tile group NAL unit in the same picture.
  • tile group headers can be replaced with slice headers.
  • when single_tile_per_tile_group_flag is equal to 1, the value of single_tile_in_tile_group_flag is inferred to be equal to 1.
  • first_tile_id specifies the tile ID of the first tile of the tile group.
  • the length of first_tile_id is Ceil( Log2( NumTilesInPic ) ) bits.
  • the value of first_tile_id of a tile group shall not be equal to the value of first_tile_id of any other tile group of the same picture.
  • when first_tile_id is not present, the value of first_tile_id is inferred to be equal to the tile ID of the first tile of the current picture.
  • last_tile_id specifies the tile ID of the last tile of the tile group.
  • the length of last_tile_id is Ceil( Log2( NumTilesInPic ) ) bits. When NumTilesInPic is equal to 1 or single_tile_in_tile_group_flag is equal to 1, the value of last_tile_id is inferred to be equal to first_tile_id. When tile_group_info_in_pps_flag is equal to 1, the value of last_tile_id is inferred to be equal to the value of pps_last_tile_id.
  • a syntax element first_tile_id that designates the tile ID of the first tile for each of the plurality of tile groups in the picture may be signaled/parsed.
  • the first_tile_id may correspond to the tile ID of the tile located at the top-left of the tile group. In this case, the tile ID of the first tile of a tile group is not the same as the tile ID of the first tile of another tile group in the same picture.
  • a syntax element last_tile_id specifying the tile ID of the last tile for each of the plurality of tile groups in the picture can be signaled/parsed.
  • the last_tile_id may correspond to the tile ID of the tile located at the bottom-right of the tile group.
  • when NumTilesInPic is 1 or single_tile_in_tile_group_flag is 1, the value of last_tile_id can be the same as first_tile_id.
  • when tile_group_info_in_pps_flag is 1, the value of last_tile_id can be the same as the value of pps_last_tile_id.
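Since first_tile_id and last_tile_id identify the top-left and bottom-right tiles of a rectangular tile group, the full set of tiles in the group can be derived from them. The sketch below assumes (for illustration) that tile IDs are assigned in raster order over the tile grid:

```python
def tiles_in_group(first_tile_id, last_tile_id, num_tile_cols):
    """Derive the tile IDs covered by a rectangular tile group given its
    top-left (first_tile_id) and bottom-right (last_tile_id) tiles,
    assuming raster-order tile IDs over a grid with num_tile_cols columns."""
    r0, c0 = divmod(first_tile_id, num_tile_cols)
    r1, c1 = divmod(last_tile_id, num_tile_cols)
    return [r * num_tile_cols + c
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)]

# 3x3 tile grid: group from tile 1 (top-left) to tile 8 (bottom-right)
print(tiles_in_group(1, 8, 3))  # [1, 2, 4, 5, 7, 8]
```

This is why only two tile IDs per group need to be signaled for rectangular tile groups.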
  • FIG. 19 shows an example of partitioning a picture into a plurality of tiles and tile groups.
  • tiles can be secondarily grouped within the tile groups of a picture. Accordingly, the size of the tiles can be more effectively controlled, and thus flexible tiling can be achieved.
  • a picture can be first partitioned into three tile groups, and Tile group #2, which is a second tile group, can be additionally partitioned into secondary tile groups.
  • a syntax element num_tile_groups_in_pic_minus1 can be signaled/parsed.
  • the value of the syntax element num_tile_groups_in_pic_minus1 plus 1 can represent the number of tile groups in the picture.
  • syntax elements (e.g., tile_group_start_address[ i ] and tile_group_end_address[ i ]) can be signaled/parsed. The values of tile_group_start_address[ i ] and tile_group_end_address[ i ] are not the same as the values of tile_group_start_address[ i ] and tile_group_end_address[ i ] of other tile group units in the same picture.
  • the tile ID of each of a plurality of tiles can be explicitly signaled.
  • a syntax element tile_id_val[i] designating the tile ID of the i-th tile in the picture referencing the PPS may be signaled/parsed.
  • Table 27 shows an example of the syntax of the tile group header.
  • the tile group header can be replaced by a slice header.
  • Table 28 below shows an example of English semantics for the syntax of the tile group header.
  • tile_group_address that designates the tile ID of the first tile of the tile group in the picture may be signaled/parsed.
  • the value of tile_group_address is not the same as the value of tile_group_address of other tile group NAL units in the same picture.
  • a MANE (Media-Aware Network Element) or a video editor can identify the tile group carried by NAL units, and can remove the corresponding NAL units or provide a sub-bitstream including the NAL units belonging to a target tile group.
  • a syntax element nuh_tile_group_id may be proposed in the NAL unit header.
  • a network element or video editor can easily identify the tile group carried by NAL units only by parsing and interpreting the NAL unit header. In addition, the network element or video editor can remove the corresponding NAL units. Accordingly, a sub-bitstream including NAL units belonging to the target tile group can be extracted.
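A minimal sketch of such sub-bitstream extraction, modeling each NAL unit as a record whose header carries the proposed nuh_tile_group_id field (the record layout is a hypothetical representation, not a real bitstream format):

```python
def extract_sub_bitstream(nal_units, target_tile_group_id):
    """Keep only the NAL units whose header field nuh_tile_group_id
    matches the target tile group; no payload parsing is needed."""
    return [nal for nal in nal_units
            if nal["nuh_tile_group_id"] == target_tile_group_id]

stream = [{"nuh_tile_group_id": 0, "data": b"a"},
          {"nuh_tile_group_id": 1, "data": b"b"},
          {"nuh_tile_group_id": 0, "data": b"c"}]
print(len(extract_sub_bitstream(stream, 0)))  # 2
```

Because only the fixed-position header field is inspected, a MANE can perform this filtering without decoding any tile group data.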
  • Table 29 below shows an example of the syntax of the NAL unit header.
  • Table 30 below shows an example of English semantics for the syntax of the NAL unit header.
  • Table 31 below shows an example of the syntax of the tile group header (tile_group_header).
  • a tile group header can be replaced by a slice header.
  • tile_group_id specifying a tile group ID of a tile group in a picture may be signaled/parsed. At this time, the value of tile_group_id is not the same as the value of tile_group_id of another tile group NAL unit in the same picture.
  • FIG. 20 is a flow chart showing the operation of the decoding apparatus according to an embodiment
  • FIG. 21 is a block diagram showing the configuration of the decoding apparatus according to the embodiment.
  • Each step disclosed in FIG. 20 may be performed by the decoding device 300 disclosed in FIG. 3. More specifically, S2000 and S2010 may be performed by the entropy decoding unit 310 disclosed in FIG. 3.
  • S2020 may be performed by the prediction unit 330 disclosed in FIG. 3
  • S2030 may be performed by the addition unit 340 disclosed in FIG. 3.
  • operations according to S2000 to S2030 are based on some of the contents described above in FIGS. 1 to 19. Therefore, specific contents overlapping with the contents described above in FIGS. 1 to 19 will be omitted or simplified.
  • As shown in FIG. 21, the decoding apparatus according to an embodiment may include an entropy decoding unit 310, a prediction unit 330, and an addition unit 340.
  • However, in some cases, all of the components shown in FIG. 21 may not be essential components of the decoding apparatus, and the decoding apparatus may be implemented by more or fewer components than those shown in FIG. 21.
  • the entropy decoding unit 310, the prediction unit 330, and the addition unit 340 may each be implemented as a separate chip, or at least two or more components may be implemented through a single chip.
  • the decoding apparatus according to an embodiment may obtain, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture (S2000).
  • More specifically, the entropy decoding unit 310 of the decoding apparatus can obtain the image information including the partition information for the current picture and the prediction information for the current block included in the current picture from the bitstream.
  • the decoding apparatus may derive a partitioning structure of the current picture based on a plurality of tiles, based on the partition information for the current picture (S2010). More specifically, the entropy decoding unit 310 of the decoding apparatus can derive the partitioning structure of the current picture based on the plurality of tiles, based on the partition information for the current picture. In one example, the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups may contain tiles arranged in a non-raster scan order.
  • the decoding apparatus may derive predicted samples for the current block based on the prediction information for the current block included in one of the plurality of tiles (S2020). More specifically, the prediction unit 330 of the decoding apparatus may derive prediction samples for the current block based on the prediction information for the current block included in one of the plurality of tiles.
  • the decoding apparatus may restore the current picture based on the predicted samples (S2030). More specifically, the adding unit 340 of the decoding apparatus can restore the current picture based on the predicted samples.
  • the partition information for the current picture may include at least one of index information for each of the plurality of tile groups, ID information of the tile located at the top-left of each of the plurality of tile groups, and ID information of the tile located at the bottom-right of each of the plurality of tile groups.
  • the partition information on the current picture may include at least one of flag information on whether ID information of each of the plurality of tiles is explicitly signaled, and the ID information of each of the plurality of tiles.
  • at least one of the flag information and ID information of each of the plurality of tiles may be included in a PPS (Picture Parameter Set) of the image information.
  • the partition information for the current picture may include at least one of information on the number of the plurality of tile groups, location information of the coding tree block (CTB) positioned at the top-left of each of the plurality of tile groups, and location information of the CTB located at the bottom-right of each of the plurality of tile groups.
  • information on the number of tile groups, location information of the CTB located at the upper left of each of the plurality of tile groups, and location of the CTB located at the lower right of each of the plurality of tile groups At least one of the information may be included in the PPS (Picture Parameter Set) of the image information.
  • the division information for the current picture may further include ID information of each of the plurality of tile groups.
  • the ID information of each of the plurality of tile groups may be included in the NAL (Network Abstraction Layer) unit header of the image information.
  • according to the above-described embodiments, a picture can be partitioned into a plurality of tiles, and the plurality of tiles can be grouped into a plurality of tile groups.
  • FIG. 22 is a flow chart showing an operation of an encoding device according to an embodiment
  • FIG. 23 is a block diagram showing a configuration of an encoding device according to an embodiment.
  • the encoding apparatus according to FIGS. 22 and 23 can perform operations corresponding to those of the decoding apparatus according to FIGS. 20 and 21. Accordingly, operations of the encoding apparatus to be described in FIGS. 22 and 23 can be equally applied to the decoding apparatus according to FIGS. 20 and 21.
  • Each step disclosed in FIG. 22 may be performed by the encoding apparatus 200 disclosed in FIG. 2. More specifically, S2200 and S2210 may be performed by the image dividing unit 210 disclosed in FIG. 2, S2220 and S2230 may be performed by the prediction unit 220 disclosed in FIG. 2, and S2240 may be performed by the entropy encoding unit 240 disclosed in FIG. 2. In addition, operations according to S2200 to S2240 are based on some of the contents described above in FIGS. 1 to 19. Therefore, detailed contents overlapping with the contents described above in FIGS. 1 to 19 will be omitted or simplified.
  • the encoding apparatus may include an image division unit 210, a prediction unit 220, and an entropy encoding unit 240.
  • All of the components shown in FIG. 23 may not be essential components of the encoding device, and the encoding device may be implemented by more or fewer components than those shown in FIG. 23.
  • the image segmentation unit 210, the prediction unit 220, and the entropy encoding unit 240 are each implemented as a separate chip, or at least two or more components are It can also be implemented through a chip.
  • the encoding apparatus can divide the current picture into a plurality of tiles.
  • the image dividing unit 210 of the encoding apparatus may divide the current picture into a plurality of tiles.
  • the encoding apparatus may generate partition information for the current picture based on the plurality of tiles (S2210). More specifically, the image division unit 210 of the encoding apparatus can generate the partition information for the current picture based on the plurality of tiles. In one example, the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups can include tiles arranged in a non-raster scan order.
  • the encoding apparatus according to an embodiment may derive prediction samples for the current block included in one of the plurality of tiles (S2220). More specifically, the prediction unit 220 of the encoding apparatus Can derive prediction samples for the current block included in one of the plurality of tiles.
  • the prediction unit 220 of the encoding device may generate prediction information for the current block based on the prediction samples.
  • the encoding apparatus may encode image information including the partition information for the current picture and the prediction information for the current block (S2240). More specifically, the entropy encoding unit 240 of the encoding apparatus can encode the image information including at least one of the partition information for the current picture and the prediction information for the current block.
  • the partition information for the current picture may include at least one of index information for each of the plurality of tile groups, ID information of the tile located at the top-left of each of the plurality of tile groups, and ID information of the tile located at the bottom-right of each of the plurality of tile groups.
  • the partition information on the current picture may include at least one of flag information on whether ID information of each of the plurality of tiles is explicitly signaled, and the ID information of each of the plurality of tiles.
  • at least one of the flag information and ID information of each of the plurality of tiles may be included in a PPS (Picture Parameter Set) of the image information.
  • the partition information for the current picture may include at least one of information on the number of tile groups, location information of the coding tree block (CTB) positioned at the top-left of each of the plurality of tile groups, and location information of the CTB located at the bottom-right of each of the plurality of tile groups.
  • information on the number of tile groups, location information of the CTB located at the upper left of each of the plurality of tile groups, and location of the CTB located at the lower right of each of the plurality of tile groups At least one of the information may be included in the PPS (Picture Parameter Set) of the image information.
  • the partition information for the current picture may further include ID information of each of the plurality of tile groups. Further, the ID information of each of the plurality of tile groups can be included in the NAL (Network Abstraction Layer) unit header of the image information.
  • the above-described method according to this disclosure can be implemented in the form of software, and the encoding device and/or the decoding device according to this disclosure can be included in a device that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
  • Modules are stored in memory and can be executed by the processor.
  • the memory can be inside or outside the processor and can be connected to the processor by various well-known means.
  • Processors may include application-specific integrated circuits (ASICs), other chipsets, logic circuits and/or data processing devices.
  • Memory can include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and/or other storage devices.
  • the embodiments described in this disclosure may be implemented and implemented on a processor, microprocessor, controller, or chip.
  • the functional units shown in the respective figures may be included. It can be implemented and performed on a computer, processor, microprocessor, controller or chip, in which case information on instructions or algorithms can be stored on a digital storage medium.
  • the decoding device and the encoding device to which this disclosure is applied may be included in a multimedia broadcasting transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video conversation device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service providing device, an OTT video (over-the-top video) device, an Internet streaming service providing device,
  • a three-dimensional (3D) video device, a VR (virtual reality) device, an AR (augmented reality) device, a video phone video device, a transportation terminal (e.g., a vehicle (including self-driving vehicle) terminal, an airplane terminal, a ship terminal, etc.), a medical video device, and the like, and can be used to process video signals or data signals.
  • OTT video (over-the-top video) devices can include game consoles, Blu-ray players, Internet-access TVs, home theater systems, smartphones, tablet PCs, and DVRs (Digital Video Recorders).
  • the processing method to which this disclosure is applied may be produced in the form of a program executed by a computer, and may be stored in a computer-readable recording medium.
  • Multimedia data having a data structure according to the present disclosure may also be stored in a computer-readable recording medium.
  • the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
  • The computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
  • The computer-readable recording medium also includes media implemented in the form of a carrier wave (e.g., transmission over the Internet).
  • bitstreams generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
  • In addition, an embodiment of the present disclosure may be implemented as a computer program product using program code, and the program code may be executed on a computer according to an embodiment of the present disclosure.
  • The program code may be stored on a computer-readable carrier.
  • Figure 24 shows an example of a content streaming system to which the disclosure of this document can be applied.
  • the content streaming system to which this disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
  • The encoding server compresses content input from multimedia input devices such as smartphones, cameras, and camcorders into digital data to generate a bitstream, and transmits it to the streaming server.
  • As another example, when multimedia input devices such as smartphones, cameras, and camcorders directly generate a bitstream, the encoding server may be omitted.
  • the bitstream may be generated by an encoding method or a bitstream generation method to which the present disclosure is applied, and the streaming server may temporarily store the bitstream while transmitting or receiving the bitstream.
  • The streaming server transmits multimedia data to the user device, based on a user request, through the web server, and the web server serves as an intermediary informing the user of what services are available.
  • When the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits multimedia data to the user.
  • The content streaming system may include a separate control server; in this case, the control server controls commands/responses between the devices in the content streaming system.
  • The streaming server may receive content from the media storage and/or the encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, in order to provide a seamless streaming service, the streaming server may store the bitstream for a predetermined period of time.
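The request flow described in the bullets above (user device, web server, streaming server, media storage) can be sketched as follows. All class and method names are illustrative assumptions for exposition; the disclosure does not define any such API.

```python
# Illustrative sketch of the content streaming flow described above.
# All names here are hypothetical, not part of the disclosure.

class StreamingServer:
    def __init__(self, media_storage):
        self.media_storage = media_storage
        self.buffer = {}  # bitstreams kept for a while to allow seamless streaming

    def handle_request(self, content_id):
        # Fetch the bitstream from media storage and buffer it for a
        # predetermined time so that playback is seamless.
        if content_id not in self.buffer:
            self.buffer[content_id] = self.media_storage[content_id]
        return self.buffer[content_id]


class WebServer:
    """Mediates between the user device and the streaming server."""

    def __init__(self, streaming_server):
        self.streaming_server = streaming_server

    def request_service(self, content_id):
        # The web server relays the user's request to the streaming server,
        # which then transmits the multimedia data back to the user device.
        return self.streaming_server.handle_request(content_id)


media_storage = {"clip1": b"\x00\x00\x01..."}  # placeholder encoded bitstream
web = WebServer(StreamingServer(media_storage))
data = web.request_service("clip1")
```

In this sketch the streaming server caches the bitstream on first request, which mirrors the buffering-for-seamless-service behavior described above.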
  • The user device may include a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a watch-type terminal (smartwatch), a glass-type terminal (smart glasses), or a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
  • Each server in the content streaming system can be operated as a distributed server, and in this case, the data received from each server can be distributed and processed.

Abstract

A video decoding method performed by a decoding apparatus according to the present disclosure comprises the steps of: acquiring, from a bitstream, video information including partition information for a current picture and prediction information for a current block included in the current picture; deriving, on the basis of the partition information for the current picture, a partitioning structure of the current picture on the basis of a plurality of tiles; deriving prediction samples for the current block on the basis of the prediction information on the current block included in one of the plurality of tiles; and restoring the current picture on the basis of the prediction samples.

Description

Specification
Title of the Invention: Signaled information-based picture partitioning method and apparatus
Technical Field
[1] The present disclosure relates to video coding technology, and more particularly, to a picture partitioning method and apparatus based on signaled information in a video coding system.
Background Art
[2] Recently, demand for high-resolution, high-quality images/videos, such as 4K or 8K or higher UHD (Ultra High Definition) images/videos, is increasing in various fields. As image/video data becomes higher resolution and higher quality, the amount of information or bits to be transmitted increases relative to existing image/video data. Therefore, when image data is transmitted using a medium such as an existing wired/wireless broadband line, or image/video data is stored using an existing storage medium, transmission and storage costs increase.
[3] In addition, interest in and demand for immersive media such as virtual reality (VR) and augmented reality (AR) content and holograms are increasing recently, and broadcasting of images/videos having image characteristics different from those of real images, such as game images, is increasing.
[4] Accordingly, a flexible picture partitioning method that can be applied to efficiently compress and reproduce images/videos in image/video application programs having various characteristics is required.
Detailed Description of the Invention
Technical Problem
[5] A technical object of the present disclosure is to provide a method and an apparatus for increasing image coding efficiency.
[6] Another technical object of the present disclosure is to provide a method and an apparatus for signaling partitioning information.
[7] Still another technical object of the present disclosure is to provide a method and an apparatus for flexibly partitioning a picture based on signaled information.
[8] Still another technical object of the present disclosure is to provide a method and an apparatus for partitioning a current picture based on partition information for the current picture.
[9] Still another technical object of the present disclosure is to provide a method and an apparatus for partitioning a current picture based on at least one of flag information on whether the current picture is partitioned into motion constrained tile sets (MCTSs), count information on the number of MCTSs in the current picture, position information of a tile located at the top-left of each of the MCTSs, or position information of a tile located at the bottom-right of each of the MCTSs.
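The MCTS-related signaling enumerated above (a flag, a count, and per-MCTS top-left/bottom-right tile positions) can be illustrated with a minimal sketch. The field names below are hypothetical stand-ins for syntax elements, not the actual syntax of any standard; entropy decoding of the raw bits is assumed to have already happened.

```python
# Hypothetical sketch of the MCTS signaling structure described above.
# Field names (mcts_flag, num_mcts, *_tile_idx) are illustrative assumptions.

def parse_mcts_info(fields):
    """fields: dict of already-entropy-decoded syntax values."""
    info = {"mcts_flag": fields["mcts_flag"]}
    if info["mcts_flag"]:
        n = fields["num_mcts"]  # count information: number of MCTSs in the picture
        # Each MCTS is described by the tile at its top-left corner and the
        # tile at its bottom-right corner.
        info["mcts"] = [
            {
                "top_left_tile_idx": fields["top_left_tile_idx"][i],
                "bottom_right_tile_idx": fields["bottom_right_tile_idx"][i],
            }
            for i in range(n)
        ]
    return info


example = parse_mcts_info({
    "mcts_flag": 1,
    "num_mcts": 2,
    "top_left_tile_idx": [0, 5],
    "bottom_right_tile_idx": [4, 11],
})
```

Signaling only two corner tile indices per set is what makes this description compact: the rectangle of tiles in between is implied rather than listed.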
Technical Solution
[10] According to an embodiment of the present disclosure, an image decoding method performed by a decoding apparatus is provided. The method includes: obtaining, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture; deriving, based on the partition information for the current picture, a partitioning structure of the current picture based on a plurality of tiles; deriving prediction samples for the current block based on the prediction information for the current block included in one tile of the plurality of tiles; and reconstructing the current picture based on the prediction samples, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
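The four claimed steps (obtain partition and prediction information, derive the tile-based partitioning structure, derive prediction samples, reconstruct the picture) can be sketched as follows. Every structure and value below is a hypothetical placeholder for the corresponding codec stage, not actual codec syntax or an actual decoder API.

```python
# Minimal sketch of the claimed decoding flow; all data structures here are
# placeholders standing in for the corresponding codec stages.

def decode_picture(bitstream):
    image_info = bitstream  # step 1: obtain partition info and prediction info
    partition_info = image_info["partition_info"]
    # step 2: derive the tile-based partitioning structure of the current picture
    tiles = partition_info["tiles"]
    tile_groups = partition_info["tile_groups"]  # tiles grouped into tile groups;
    # at least one group may list its tiles in a non-raster scan order.
    reconstructed = {}
    for group in tile_groups:
        for tile_idx in group:  # group order need not be raster order
            for block in tiles[tile_idx]["blocks"]:
                # step 3: derive prediction samples from the prediction info
                pred = block["prediction_info"]["pred_samples"]
                # step 4: reconstruct (prediction samples + residual)
                reconstructed[block["id"]] = [
                    p + r for p, r in zip(pred, block["residual"])
                ]
    return reconstructed


bs = {"partition_info": {
    "tiles": {
        0: {"blocks": [{"id": "b0",
                        "prediction_info": {"pred_samples": [10, 20]},
                        "residual": [1, -2]}]},
        1: {"blocks": [{"id": "b1",
                        "prediction_info": {"pred_samples": [5]},
                        "residual": [3]}]},
    },
    "tile_groups": [[1, 0]],  # one group whose tiles are in non-raster order
}}
out = decode_picture(bs)
```

Note how the outer loop walks tile groups in their signaled order, so tiles inside a group are processed exactly as listed, whether or not that matches raster order.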
[11] According to another embodiment of the present disclosure, a decoding apparatus for performing image decoding is provided. The decoding apparatus includes: an entropy decoder configured to obtain, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture, and to derive, based on the partition information for the current picture, a partitioning structure of the current picture based on a plurality of tiles; a predictor configured to derive prediction samples for the current block based on the prediction information for the current block included in one tile of the plurality of tiles; and an adder configured to reconstruct the current picture based on the prediction samples, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
[12] According to still another embodiment of the present disclosure, an image encoding method performed by an encoding apparatus is provided. The method includes: partitioning a current picture into a plurality of tiles; generating partition information for the current picture based on the plurality of tiles; deriving prediction samples for a current block included in one tile of the plurality of tiles; generating prediction information for the current block based on the prediction samples; and encoding image information including the partition information for the current picture and the prediction information for the current block, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
[13] According to still another embodiment of the present disclosure, an encoding apparatus for performing image encoding is provided. The encoding apparatus includes: an image partitioner configured to partition a current picture into a plurality of tiles and to generate partition information for the current picture based on the plurality of tiles; a predictor configured to derive prediction samples for a current block included in one tile of the plurality of tiles and to generate prediction information for the current block based on the prediction samples; and an entropy encoder configured to encode image information including the partition information for the current picture and the prediction information for the current block, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
[14] According to still another embodiment of the present disclosure, a computer-readable digital storage medium storing encoded image information causing a decoding apparatus to perform an image decoding method is provided. The decoding method according to the embodiment includes: obtaining, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture; deriving, based on the partition information for the current picture, a partitioning structure of the current picture based on a plurality of tiles; deriving prediction samples for the current block based on the prediction information for the current block included in one tile of the plurality of tiles; and reconstructing the current picture based on the prediction samples, wherein the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
Effects of the Invention
[15] According to the present specification, overall image/video compression efficiency can be increased.
[16] According to the present specification, the efficiency of picture partitioning can be increased.
[17] According to the present specification, the flexibility of picture partitioning can be increased based on partition information for the current picture.
[18] According to the present specification, by partitioning the current picture based on a tile group including tiles arranged in a non-raster scan order, the efficiency of signaling for picture partitioning can be increased.
Brief Description of the Drawings
[19] Fig. 1 schematically shows an example of a video/image coding system to which the present disclosure can be applied.
[20] Fig. 2 is a diagram schematically illustrating a configuration of a video/image encoding apparatus to which the present disclosure can be applied.
[21] Fig. 3 is a diagram schematically illustrating a configuration of a video/image decoding apparatus to which the present disclosure can be applied.
[22] Fig. 4 exemplarily shows a hierarchical structure for coded data.
[23] Fig. 5 is a diagram illustrating an example of partitioning a picture.
[24] Fig. 6 is a flowchart illustrating a tile and/or tile group-based picture encoding procedure according to an embodiment.
[25] Fig. 7 is a flowchart illustrating a tile and/or tile group-based picture decoding procedure according to an embodiment.
[26] Fig. 8 is a diagram illustrating an example of partitioning a picture into a plurality of tiles.
[27] Fig. 9 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment.
[28] Fig. 10 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment.
[29] Fig. 11 is a diagram illustrating an example of tile and tile group units constituting a current picture.
[30] Fig. 12 is a diagram schematically illustrating an example of a signaling structure of tile group information.
[31] Fig. 13 is a diagram illustrating an example of a picture in a video conferencing program.
[32] Fig. 14 is a diagram illustrating an example of partitioning a picture into tiles or tile groups in a video conferencing program.
[33] Fig. 15 is a diagram illustrating an example of partitioning a picture into tiles or tile groups based on a motion constrained tile set (MCTS).
[34] Fig. 16 is a diagram illustrating an example of partitioning a picture based on an ROI region.
[35] Fig. 17 is a diagram illustrating an example of partitioning a picture into a plurality of tiles.
[36] Fig. 18 is a diagram illustrating an example of partitioning a picture into a plurality of tiles and tile groups.
[37] Fig. 19 is a diagram illustrating an example of partitioning a picture into a plurality of tiles and tile groups.
[38] Fig. 20 is a flowchart illustrating an operation of a decoding apparatus according to an embodiment.
[39] Fig. 21 is a block diagram illustrating a configuration of a decoding apparatus according to an embodiment.
[40] Fig. 22 is a flowchart illustrating an operation of an encoding apparatus according to an embodiment.
[41] Fig. 23 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment.
[42] Fig. 24 shows an example of a content streaming system to which the disclosure of this document can be applied.
Mode for Carrying Out the Invention
[43] Since the present disclosure can be variously modified and can have various embodiments, specific embodiments will be illustrated in the drawings and described in detail. However, this is not intended to limit the present disclosure to the specific embodiments. The terms used in this specification are used only to describe specific embodiments, and are not intended to limit the technical idea of the present disclosure. Singular expressions include plural expressions unless the context clearly indicates otherwise. Terms such as "include" or "have" in this specification are intended to designate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood as not precluding in advance the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
[44] Meanwhile, each configuration in the drawings described in the present disclosure is illustrated independently for convenience of description of the different characteristic functions, and does not mean that each configuration is implemented as separate hardware or separate software. For example, two or more of the configurations may be combined to form one configuration, or one configuration may be divided into a plurality of configurations. Embodiments in which the configurations are integrated and/or separated are also included within the scope of the present disclosure, unless they depart from the essence of the present disclosure.
[45] In this specification, "A or B" may mean "only A", "only B", or "both A and B". In other words, "A or B" in this specification may be interpreted as "A and/or B". For example, in this specification, "A, B or C" may mean "only A", "only B", "only C", or "any combination of A, B and C".
[46] A slash (/) or a comma used in this specification may mean "and/or". For example, "A/B" may mean "A and/or B". Accordingly, "A/B" may mean "only A", "only B", or "both A and B". For example, "A, B, C" may mean "A, B, or C".
[47] In this specification, "at least one of A and B" may mean "only A", "only B", or "both A and B". In addition, expressions such as "at least one of A or B" and "at least one of A and/or B" may be interpreted the same as "at least one of A and B".
[48] In addition, in this specification, "at least one of A, B and C" may mean "only A", "only B", "only C", or "any combination of A, B and C". In addition, "at least one of A, B or C" or "at least one of A, B and/or C" may mean "at least one of A, B and C".
[49] In addition, parentheses used in this specification may mean "for example". Specifically, when indicated as "prediction (intra prediction)", "intra prediction" may be proposed as an example of "prediction". In other words, "prediction" in this specification is not limited to "intra prediction", and "intra prediction" may be proposed as an example of "prediction". In addition, even when indicated as "prediction (i.e., intra prediction)", "intra prediction" may be proposed as an example of "prediction".
[50] Technical features individually described within one drawing in this specification may be implemented individually or may be implemented simultaneously.
[51] Hereinafter, preferred embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings. Hereinafter, the same reference numerals are used for the same components in the drawings, and redundant descriptions of the same components may be omitted.
[52] Fig. 1 schematically shows an example of a video/image coding system to which the present disclosure can be applied.
[53] Referring to Fig. 1, a video/image coding system may include a first device (source device) and a second device (receiving device). The source device may transmit encoded video/image information or data to the receiving device through a digital storage medium or a network in the form of a file or streaming.
[54] The source device may include a video source, an encoding apparatus, and a transmitter. The receiving device may include a receiver, a decoding apparatus, and a renderer. The encoding apparatus may be called a video/image encoding apparatus, and the decoding apparatus may be called a video/image decoding apparatus. The transmitter may be included in the encoding apparatus. The receiver may be included in the decoding apparatus. The renderer may include a display unit, and the display unit may be configured as a separate device or an external component.
[55] The video source may acquire a video/image through a process of capturing, synthesizing, or generating the video/image. The video source may include a video/image capture device and/or a video/image generation device. The video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured videos/images, and the like. The video/image generation device may include, for example, a computer, a tablet, a smartphone, and the like, and may (electronically) generate a video/image. For example, a virtual video/image may be generated through a computer or the like, in which case the video/image capture process may be replaced by a process of generating related data.
[56] The encoding apparatus may encode the input video/image. The encoding apparatus may perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency. The encoded data (encoded video/image information) may be output in the form of a bitstream.
[57] The transmitter may transmit the encoded video/image information or data output in the form of a bitstream to the receiver of the receiving device through a digital storage medium or a network in the form of a file or streaming. The digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD. The transmitter may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcasting/communication network. The receiver may receive/extract the bitstream and transmit it to the decoding apparatus.
[58] The decoding apparatus may decode the video/image by performing a series of procedures such as dequantization, inverse transformation, and prediction corresponding to the operations of the encoding apparatus.
[59] The renderer may render the decoded video/image. The rendered video/image may be displayed through the display unit.
[60] This document relates to video/image coding. For example, the methods/embodiments disclosed in this document may be applied to methods disclosed in the versatile video coding (VVC) standard, the essential video coding (EVC) standard, the AOMedia Video 1 (AV1) standard, the 2nd generation of audio video coding standard (AVS2), or a next-generation video/image coding standard (e.g., H.267 or H.268).
[61] This document presents various embodiments of video/image coding, and the embodiments may be performed in combination with each other unless otherwise specified.
[62] In this document, a video may mean a set of a series of images over time. A picture generally means a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding. A slice/tile may include one or more coding tree units (CTUs). One picture may consist of one or more slices/tiles.
[63] A tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture. The tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set. The tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture. A tile scan is a specific sequential ordering of CTUs partitioning a picture, in which the CTUs are ordered consecutively in a CTU raster scan within a tile, whereas the tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture. A slice may include a number of complete tiles, or a number of consecutive CTU rows within one tile of a picture, that may be contained in one NAL unit. In this document, tile group and slice may be used interchangeably; for example, in this document a tile group/tile group header may be called a slice/slice header.
[64] Meanwhile, one picture may be divided into two or more subpictures. A subpicture may be a rectangular region of one or more slices within a picture.
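The tile-scan ordering described in paragraph [63] can be sketched as follows. This is an illustrative, non-normative sketch: the tile widths and heights passed in are toy values standing in for quantities that a decoder would derive from picture parameter set syntax elements.

```python
def tile_scan_order(pic_w_ctus, pic_h_ctus, tile_cols, tile_rows):
    """Return CTU raster-scan addresses in tile-scan order.

    tile_cols / tile_rows are lists of tile widths/heights in CTUs
    (stand-ins for values derived from picture-parameter-set syntax).
    """
    assert sum(tile_cols) == pic_w_ctus and sum(tile_rows) == pic_h_ctus
    order = []
    y0 = 0
    for th in tile_rows:                    # tiles in raster order over the picture
        x0 = 0
        for tw in tile_cols:
            for y in range(y0, y0 + th):    # CTU raster scan inside a tile
                for x in range(x0, x0 + tw):
                    order.append(y * pic_w_ctus + x)
            x0 += tw
        y0 += th
    return order

# 4x2-CTU picture split into two 2-CTU-wide tile columns, one tile row:
print(tile_scan_order(4, 2, [2, 2], [2]))
# -> [0, 1, 4, 5, 2, 3, 6, 7]
```

The CTUs of the left tile (raster addresses 0, 1, 4, 5) are visited before those of the right tile (2, 3, 6, 7), matching the "CTU raster scan within a tile, tiles in raster scan over the picture" ordering above.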
[65] A pixel or a pel may mean the smallest unit constituting one picture (or image). In addition, 'sample' may be used as a term corresponding to a pixel. A sample may generally represent a pixel or a value of a pixel, may represent only the pixel/pixel value of the luma component, or may represent only the pixel/pixel value of the chroma component.
[66] A unit may represent a basic unit of image processing. A unit may include at least one of a specific region of a picture and information related to the region. One unit may include one luma block and two chroma (e.g., cb, cr) blocks. A unit may in some cases be used interchangeably with terms such as block or area. In the general case, an MxN block may include a set (or array) of samples (or a sample array) or transform coefficients consisting of M columns and N rows.
[67] FIG. 2 schematically illustrates the configuration of a video/image encoding apparatus to which the present disclosure may be applied. Hereinafter, the video encoding apparatus may include an image encoding apparatus.
[68] Referring to FIG. 2, the encoding apparatus 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270. The predictor 220 may include an inter predictor 221 and an intra predictor 222. The residual processor 230 may include a transformer 232, a quantizer 233, a dequantizer 234, and an inverse transformer 235. The residual processor 230 may further include a subtractor 231. The adder 250 may be called a reconstructor or a reconstructed block generator. The above-described image partitioner 210, predictor 220, residual processor 230, entropy encoder 240, adder 250, and filter 260 may be configured by one or more hardware components (e.g., an encoder chipset or processor) according to an embodiment. In addition, the memory 270 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium. The hardware component may further include the memory 270 as an internal/external component.
[69] The image partitioner 210 may partition an input image (or picture, or frame) input to the encoding apparatus 200 into one or more processing units. As one example, the processing unit may be called a coding unit (CU). In this case, the coding unit may be recursively partitioned from a coding tree unit (CTU) or a largest coding unit (LCU) according to a QTBTTT (quad-tree binary-tree ternary-tree) structure. For example, one coding unit may be partitioned into a plurality of coding units of deeper depth based on a quad-tree structure, a binary-tree structure, and/or a ternary structure. In this case, for example, the quad-tree structure may be applied first, and the binary-tree structure and/or the ternary structure may be applied later. Alternatively, the binary-tree structure may be applied first. The coding procedure according to the present disclosure may be performed based on a final coding unit that is no longer partitioned. In this case, based on coding efficiency according to image characteristics and the like, the largest coding unit may be used directly as the final coding unit, or, if necessary, the coding unit may be recursively partitioned into coding units of deeper depth so that a coding unit of an optimal size may be used as the final coding unit. Here, the coding procedure may include procedures such as prediction, transform, and reconstruction described later. As another example, the processing unit may further include a prediction unit (PU) or a transform unit (TU). In this case, the prediction unit and the transform unit may each be split or partitioned from the above-described final coding unit. The prediction unit may be a unit of sample prediction, and the transform unit may be a unit for deriving transform coefficients and/or a unit for deriving a residual signal from transform coefficients.
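The recursive partitioning above can be illustrated with a toy sketch covering only the quad-tree portion of the QTBTTT structure. The split decision here is a fixed size threshold; a real encoder would choose splits (including binary and ternary splits, omitted here) by rate-distortion cost.

```python
def split_quadtree(x, y, w, h, min_size=8):
    """Recursive quad-tree split down to min_size; returns leaf CUs as (x, y, w, h)."""
    if w <= min_size or h <= min_size:
        return [(x, y, w, h)]           # final coding unit: no further split
    hw, hh = w // 2, h // 2
    leaves = []
    for (cx, cy) in [(x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)]:
        leaves.extend(split_quadtree(cx, cy, hw, hh, min_size))
    return leaves

cus = split_quadtree(0, 0, 16, 16, min_size=8)
print(cus)
# -> [(0, 0, 8, 8), (8, 0, 8, 8), (0, 8, 8, 8), (8, 8, 8, 8)]
```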
[70] A unit may in some cases be used interchangeably with terms such as block or area. In the general case, an MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows. A sample may generally represent a pixel or a value of a pixel, may represent only the pixel/pixel value of the luma component, or may represent only the pixel/pixel value of the chroma component. A sample may be used as a term corresponding one picture (or image) to a pixel or a pel.
[71] The encoding apparatus 200 may generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the inter predictor 221 or the intra predictor 222 from the input image signal (original block, original sample array), and the generated residual signal is transmitted to the transformer 232. In this case, as shown, the unit that subtracts the prediction signal (prediction block, prediction sample array) from the input image signal (original block, original sample array) within the encoding apparatus 200 may be called a subtractor 231. The predictor may perform prediction on a block to be processed (hereinafter referred to as a current block) and generate a predicted block including prediction samples for the current block. The predictor may determine whether intra prediction or inter prediction is applied on a current block or CU basis. The predictor may generate various pieces of information about prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit them to the entropy encoder 240. The information about prediction may be encoded in the entropy encoder 240 and output in the form of a bitstream.
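The operation of the subtractor described above can be sketched minimally: the residual block is the element-wise difference between the original block and the predicted block. The sample values below are made-up toy data.

```python
def residual_block(original, predicted):
    """Residual = original block - predicted block, sample by sample."""
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]

orig = [[120, 124], [118, 122]]   # original sample array (toy values)
pred = [[119, 123], [119, 121]]   # prediction sample array (toy values)
print(residual_block(orig, pred))
# -> [[1, 1], [-1, 1]]
```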
[72] The intra predictor 222 may predict the current block with reference to samples in the current picture. The referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode. In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes. The non-directional modes may include, for example, a DC mode and a planar mode. The directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes, depending on the degree of fineness of the prediction direction. However, this is an example, and a larger or smaller number of directional prediction modes may be used depending on the setting. The intra predictor 222 may determine the prediction mode applied to the current block by using the prediction mode applied to a neighboring block.
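As a hedged sketch of the non-directional DC mode mentioned above: every sample of the current block is predicted as the (rounded) average of reconstructed reference samples along its top and left borders. Directional and planar modes interpolate the same reference samples differently; this sketch is not the normative derivation.

```python
def dc_prediction(top_refs, left_refs, w, h):
    """Fill a w x h block with the rounded average of the reference samples."""
    refs = top_refs + left_refs
    dc = (sum(refs) + len(refs) // 2) // len(refs)   # rounded integer average
    return [[dc] * w for _ in range(h)]

pred = dc_prediction([100, 102, 104, 106], [98, 100, 102, 104], 4, 4)
print(pred[0])
# -> [102, 102, 102, 102]
```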
[73] The inter predictor 221 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information may be predicted on a block, subblock, or sample basis based on the correlation of motion information between a neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter prediction, a neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture. The reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different. The temporal neighboring block may be called by names such as a collocated reference block or a collocated CU (colCU), and the reference picture including the temporal neighboring block may also be called a collocated picture (colPic). For example, the inter predictor 221 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or the reference picture index of the current block. Inter prediction may be performed based on various prediction modes; for example, in the case of the skip mode and the merge mode, the inter predictor 221 may use the motion information of a neighboring block as the motion information of the current block. In the case of the skip mode, unlike the merge mode, the residual signal may not be transmitted. In the case of the motion vector prediction (MVP) mode, the motion vector of a neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling a motion vector difference.
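The MVP-mode reconstruction described above can be sketched as follows: a motion vector predictor (MVP) is selected from a candidate list built from neighboring blocks by a signalled index, and the signalled motion vector difference (MVD) is added to it to recover the current block's motion vector. The candidate values below are illustrative, not from any real bitstream.

```python
def reconstruct_mv(candidates, mvp_idx, mvd):
    """MV = MVP + MVD, with the MVP chosen by a signalled candidate index."""
    mvp = candidates[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

neighbour_mvs = [(4, -2), (3, 0)]   # toy spatial/temporal candidate list
print(reconstruct_mv(neighbour_mvs, 0, (1, 1)))
# -> (5, -1)
```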
[74] The predictor 220 may generate a prediction signal based on various prediction methods described later. For example, the predictor may not only apply intra prediction or inter prediction for the prediction of one block, but may also apply intra prediction and inter prediction at the same time. This may be called combined inter and intra prediction (CIIP). In addition, the predictor may be based on an intra block copy (IBC) prediction mode or on a palette mode for the prediction of a block. The IBC prediction mode or the palette mode may be used for content image/video coding of games and the like, such as screen content coding (SCC). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document. The palette mode may be regarded as an example of intra coding or intra prediction. When the palette mode is applied, the sample values within the picture may be signaled based on information about the palette table and the palette index.
[75] The prediction signal generated through the predictor (including the inter predictor 221 and/or the intra predictor 222) may be used to generate a reconstructed signal or used to generate a residual signal. The transformer 232 may generate transform coefficients by applying a transform technique to the residual signal. For example, the transform technique may include at least one of DCT (discrete cosine transform), DST (discrete sine transform), KLT (Karhunen-Loeve transform), GBT (graph-based transform), or CNT (conditionally non-linear transform). Here, GBT means the transform obtained from a graph when the relationship information between pixels is expressed as a graph. CNT means the transform obtained based on a prediction signal generated using all previously reconstructed pixels. In addition, the transform process may be applied to square pixel blocks of the same size, or may also be applied to non-square blocks of variable size.
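As a hedged illustration of applying a DCT-type transform to a residual, the sketch below uses a textbook orthonormal 1-D DCT-II, not the integer transform of any actual codec. It shows the key property the transform is used for: a flat residual compacts its energy into the DC coefficient.

```python
import math

def dct2_1d(v):
    """Orthonormal 1-D DCT-II of a list of residual samples."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

coeffs = dct2_1d([1.0, 1.0, 1.0, 1.0])   # flat residual
print([round(c, 6) for c in coeffs])
# approximately [2.0, 0.0, 0.0, 0.0]: all energy in the DC term
```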
[76] The quantizer 233 quantizes the transform coefficients and transmits them to the entropy encoder 240, and the entropy encoder 240 may encode the quantized signal (information on the quantized transform coefficients) and output it as a bitstream. The information on the quantized transform coefficients may be called residual information. The quantizer 233 may rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate the information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form. The entropy encoder 240 may perform various encoding methods such as, for example, exponential Golomb, CAVLC (context-adaptive variable length coding), and CABAC (context-adaptive binary arithmetic coding). The entropy encoder 240 may also encode, together or separately, information necessary for video/image reconstruction (e.g., values of syntax elements) in addition to the quantized transform coefficients. The encoded information (e.g., encoded video/image information) may be transmitted or stored in units of NAL (network abstraction layer) units in the form of a bitstream. The video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). In addition, the video/image information may further include general constraint information. In this document, information and/or syntax elements transmitted/signaled from the encoding apparatus to the decoding apparatus may be included in the video/image information. The video/image information may be encoded through the above-described encoding procedure and included in the bitstream. The bitstream may be transmitted over a network or stored in a digital storage medium. Here, the network may include a broadcasting network and/or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD. A transmitter (not shown) that transmits the signal output from the entropy encoder 240 and/or a storage (not shown) that stores it may be configured as internal/external elements of the encoding apparatus 200, or the transmitter may be included in the entropy encoder 240.
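Of the entropy-coding methods named above, order-0 exponential Golomb coding is the simplest to sketch: an unsigned value n is written as M zero bits, a one bit, and then the M low-order bits of n+1, where M = floor(log2(n+1)).

```python
def exp_golomb_encode(n):
    """Unsigned order-0 exp-Golomb code of n as a bit string."""
    code = bin(n + 1)[2:]                  # binary representation of n + 1
    return "0" * (len(code) - 1) + code    # leading-zero prefix + code

for n in range(5):
    print(n, exp_golomb_encode(n))
# 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100, 4 -> 00101
```

Small values get short codewords, which suits syntax elements whose distribution is concentrated near zero.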
[77] The quantized transform coefficients output from the quantizer 233 may be used to generate a prediction signal. For example, the residual signal (residual block or residual samples) may be reconstructed by applying dequantization and inverse transform to the quantized transform coefficients through the dequantizer 234 and the inverse transformer 235. The adder 250 adds the reconstructed residual signal to the prediction signal output from the inter predictor 221 or the intra predictor 222, so that a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) may be generated. When there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block. The adder 250 may be called a reconstructor or a reconstructed block generator. The generated reconstructed signal may be used for intra prediction of the next block to be processed in the current picture, and may also be used for inter prediction of the next picture after filtering as described later.
[78] Meanwhile, LMCS (luma mapping with chroma scaling) may be applied during the picture encoding and/or reconstruction process.
[79] The filter 260 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filter 260 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and store the modified reconstructed picture in the memory 270, specifically in the DPB of the memory 270. The various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter. The filter 260 may generate various pieces of information about filtering, as described later in the description of each filtering method, and transmit them to the entropy encoder 240. The information about filtering may be encoded in the entropy encoder 240 and output in the form of a bitstream.
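As an illustrative sketch of the sample adaptive offset (SAO) idea mentioned above, in its band-offset form: reconstructed samples are classified into intensity bands, and a signalled per-band offset is added to each sample. The band count and offset values here are made-up toy parameters, not normative ones.

```python
def sao_band_offset(samples, offsets, num_bands=4, max_val=255):
    """Add a per-band offset to each sample, clipping to the valid range."""
    band_size = (max_val + 1) // num_bands
    return [min(max(s + offsets[min(s // band_size, num_bands - 1)], 0), max_val)
            for s in samples]

recon = [10, 70, 130, 250]                       # toy reconstructed samples
print(sao_band_offset(recon, offsets=[2, -1, 0, -3]))
# -> [12, 69, 130, 247]
```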
[80] The modified reconstructed picture transmitted to the memory 270 may be used as a reference picture in the inter predictor 221. Through this, when inter prediction is applied, a prediction mismatch between the encoding apparatus 200 and the decoding apparatus may be avoided, and encoding efficiency may also be improved.
[81] The DPB of the memory 270 may store the modified reconstructed picture for use as a reference picture in the inter predictor 221. The memory 270 may store the motion information of a block in the current picture from which motion information has been derived (or encoded) and/or the motion information of blocks in an already reconstructed picture. The stored motion information may be transmitted to the inter predictor 221 in order to be utilized as the motion information of a spatial neighboring block or the motion information of a temporal neighboring block. The memory 270 may store reconstructed samples of the reconstructed blocks in the current picture and may transmit them to the intra predictor 222.
[82] FIG. 3 schematically illustrates the configuration of a video/image decoding apparatus to which the present disclosure may be applied.
[83] Referring to FIG. 3, the decoding apparatus 300 may be configured to include an entropy decoder 310, a residual processor 320, a predictor 330, an adder 340, a filter 350, and a memory 360. The predictor 330 may include an intra predictor 331 and an inter predictor 332. The residual processor 320 may include a dequantizer 321 and an inverse transformer 322. The above-described entropy decoder 310, residual processor 320, predictor 330, adder 340, and filter 350 may be configured by one hardware component (e.g., a decoder chipset or processor) according to an embodiment. In addition, the memory 360 may include a decoded picture buffer (DPB) and may be configured by a digital storage medium. The hardware component may further include the memory 360 as an internal/external component.
[84] When a bitstream including video/image information is input, the decoding apparatus 300 may reconstruct an image in response to the process in which the video/image information has been processed in the encoding apparatus of FIG. 2. For example, the decoding apparatus 300 may derive units/blocks based on block-partitioning-related information obtained from the bitstream. The decoding apparatus 300 may perform decoding using a processing unit applied in the encoding apparatus. Thus, the processing unit of decoding may be, for example, a coding unit, and the coding unit may be partitioned from a coding tree unit or a largest coding unit along a quad-tree structure, a binary-tree structure, and/or a ternary-tree structure. One or more transform units may be derived from the coding unit. Then, the reconstructed image signal decoded and output through the decoding apparatus 300 may be reproduced through a reproducing apparatus.
[85] 디코딩장치 (300)는도 3의인코딩장치로부터출력된신호를비트스트림 [85] The decoding device 300 converts the signal output from the encoding device of FIG. 3 into a bitstream.
형태로수신할수있고,수신된신호는엔트로피디코딩부 (310)를통해디코딩될 수있다.예를들어,엔트로피디코딩부 (3 W)는상기비트스트림을파싱하여영상 복원 (또는픽처복원)에필요한정보 (ex.비디오/영상정보)를도출할수있다. 상기비디오/영상정보는어맵테이션파라미터세트 (APS),픽처파라미터 세트 (PPS),시퀀스파라미터세트 (SPS)또는비디오파라미터세트 (VPS)등 다양한파라미터세트에관한정보를더포함할수있다.또한상기비디오/영상 정보는일반제한정보 (general constraint information)을더포함할수있다. 디코딩장치는상기파라미터세트에관한정보및/또는상기일반제한정보를 더기반으로픽처를디코딩할수있다.본문서에서후술되는시그널링/수신되는 정보및/또는신택스요소들은상기디코딩절차를통하여디코딩되어상기 비트스트림으로부터획득될수있다.예컨대,엔트로피디코딩부 (3W)는지수 골롬부호화, CAVLC또는 CABAC등의코딩방법을기초로비트스트림내 정보를디코딩하고,영상복원에필요한신택스엘리먼트의값,레지듀얼에관한 변환계수의양자화된값들을출력할수있다.보다상세하게, CABAC엔트로피 디코딩방법은,비트스트림에서각신택스요소에해당하는빈을수신하고, 디코딩대상신택스요소정보와주변및디코딩대상블록의디코딩정보혹은 이전단계에서디코딩된심볼/빈의정보를이용하여문맥 (context)모델을 결정하고,결정된문맥모델에따라빈 (bin)의발생확률을예측하여빈의산술 디코딩 (arithmetic decoding)를수행하여각신택스요소의값에해당하는심볼을 생성할수있다.이때, CABAC엔트로피디코딩방법은문맥모델결정후다음 심볼/빈의문맥모델을위해디코딩된심볼/빈의정보를이용하여문맥모델을 업데이트할수있다.엔트로피디코딩부 (3 W)에서디코딩된정보중예측에관한 정보는예측부 (인터예측부 (332)및인트라예측부 (331))로제공되고,엔트로피 디코딩부 (3W)에서엔트로피디코딩이수행된레지듀얼값,즉양자화된변환 계수들및관련파라미터정보는레지듀얼처리부 (320)로입력될수있다. It can be received in a form, and the received signal can be decoded through the entropy decoding unit 310. For example, the entropy decoding unit 3W parses the bitstream and is required for image restoration (or picture restoration). Information (ex. video/video information) can be derived. The above video/video information includes an appointment parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). Further information on various parameter sets may be included. In addition, the video/video information may further include general constraint information. The decoding device may further decode the picture based on the information on the parameter set and/or the general limit information. The signaling/received information and/or syntax elements described later in this document are decoded through the decoding procedure, It can be obtained from the bitstream. 
For example, the entropy decoding unit (3W) decodes the information in the bitstream based on a coding method such as exponential Golomb encoding, CAVLC or CABAC, and determines the value of the syntax element required for image restoration, and the residual. In more detail, the CABAC entropy decoding method receives the bin corresponding to each syntax element in the bitstream, and receives the decoding target syntax element information and the surrounding and decoding information of the decoding target block. Alternatively, the context model is determined using the symbol/bin information decoded in the previous step, and the probability of occurrence of bins is predicted according to the determined context model, and arithmetic decoding of bins is performed. A symbol corresponding to the value of the syntax element can be generated. In this case, the CABAC entropy decoding method can update the context model using information of the decoded symbol/bin for the context model of the next symbol/bin after determining the context model. Among the information decoded by the entropy decoding unit (3W), information about prediction is provided to the prediction unit (inter prediction unit 332 and intra prediction unit 331), and entropy decoding is performed by the entropy decoding unit 3W. The residual value, that is, quantized transform coefficients and related parameter information may be input to the residual processing unit 320.
The residual processing unit 320 may derive a residual signal (a residual block, residual samples, or a residual sample array). In addition, among the information decoded by the entropy decoding unit 310, information about filtering may be provided to the filtering unit 350. Meanwhile, a receiver (not shown) that receives the signal output from the encoding apparatus may be further configured as an internal/external element of the decoding apparatus 300, or the receiver may be a component of the entropy decoding unit 310. Meanwhile, the decoding apparatus according to this document may be referred to as a video/image/picture decoding apparatus, and the decoding apparatus may be divided into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder). The information decoder may include the entropy decoding unit 310, and the sample decoder may include at least one of the inverse quantization unit 321, the inverse transform unit 322, the adder 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
[86] The inverse quantization unit 321 may inverse-quantize the quantized transform coefficients and output transform coefficients. The inverse quantization unit 321 may rearrange the quantized transform coefficients in the form of a two-dimensional block. In this case, the rearrangement may be performed based on the coefficient scan order performed by the encoding apparatus. The inverse quantization unit 321 may perform inverse quantization on the quantized transform coefficients using a quantization parameter (for example, quantization step size information) and obtain the transform coefficients.
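The two operations in [86] — rearranging a scanned coefficient list into a 2-D block, then scaling by the quantization step — can be sketched as follows. The flat per-coefficient scaling and the assumed raster scan are simplifications; real codecs use per-position scaling lists and more elaborate scan orders:

```python
def rearrange_to_block(levels, width):
    """Rearrange a 1-D list of quantized levels (raster scan assumed)
    into a 2-D block of the given width."""
    return [levels[i:i + width] for i in range(0, len(levels), width)]


def dequantize(block, qstep):
    """Inverse quantization: scale each quantized level by the
    quantization step to recover transform coefficients."""
    return [[lv * qstep for lv in row] for row in block]


block = rearrange_to_block([3, -1, 0, 0], 2)   # 2x2 block of levels
coeffs = dequantize(block, 8)                  # reconstructed coefficients
```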
[87] The inverse transform unit 322 inverse-transforms the transform coefficients to obtain a residual signal (a residual block or a residual sample array).
[88] The prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block. The prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on the information about prediction output from the entropy decoding unit 310, and may determine a specific intra/inter prediction mode.
[89] The prediction unit 330 may generate a prediction signal based on various prediction methods described later. For example, the prediction unit may not only apply intra prediction or inter prediction for prediction of one block, but may also apply intra prediction and inter prediction at the same time. This may be referred to as combined inter and intra prediction (CIIP). In addition, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block. The IBC prediction mode or the palette mode may be used for content image/video coding of a game or the like, for example, screen content coding (SCC). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document. The palette mode may be regarded as an example of intra coding or intra prediction. When the palette mode is applied, information on a palette table and a palette index may be included in the video/image information and signaled.
[90] The intra prediction unit 331 may predict the current block by referring to samples in the current picture. The referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode. In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes. The intra prediction unit 331 may also determine the prediction mode applied to the current block using the prediction mode applied to a neighboring block.
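As one concrete example of a non-directional mode mentioned in [90], DC prediction fills the entire block with the average of the neighboring reference samples. This is a simplified sketch; the rounding rule and the choice of reference samples are illustrative assumptions:

```python
def intra_dc_predict(top, left):
    """DC intra prediction: every prediction sample is the (rounded)
    integer mean of the neighboring reference samples."""
    refs = top + left
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # mean with rounding
    # Block width = number of top references, height = number of left references.
    return [[dc] * len(top) for _ in range(len(left))]


pred = intra_dc_predict(top=[10, 10, 10, 10], left=[20, 20, 20, 20])
```

Directional modes instead propagate the reference samples along a signaled angle; DC and planar are the typical non-directional modes.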
[91] The inter prediction unit 332 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between a neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction information (L0 prediction, L1 prediction, Bi prediction, etc.). In the case of inter prediction, the neighboring blocks may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture. For example, the inter prediction unit 332 may construct a motion information candidate list based on the neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on received candidate selection information. Inter prediction may be performed based on various prediction modes, and the information about prediction may include information indicating the inter prediction mode for the current block.
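The candidate-list construction in [91] can be sketched as a simplified merge-style list: spatial candidates are considered first, then the temporal candidate, with duplicates pruned, and the signaled candidate selection information then simply indexes into the list. The ordering and pruning rule here are illustrative assumptions, not the normative derivation:

```python
def build_motion_candidate_list(spatial_mvs, temporal_mvs, max_cands):
    """Collect unique motion candidates from spatial neighbors,
    then the temporal neighbor, up to max_cands entries."""
    cands = []
    for mv in list(spatial_mvs) + list(temporal_mvs):
        if mv is not None and mv not in cands:   # prune duplicates
            cands.append(mv)
        if len(cands) == max_cands:
            break
    return cands


cands = build_motion_candidate_list([(1, 0), (1, 0), (0, 2)], [(3, 3)], max_cands=3)
selected_mv = cands[1]   # candidate selection info picks an index into the list
```

Signaling only the candidate index (instead of a full motion vector) is what reduces the amount of transmitted motion information.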
[92] The adder 340 may generate a reconstructed signal (a reconstructed picture, a reconstructed block, or a reconstructed sample array) by adding the obtained residual signal to the prediction signal (a predicted block or a prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331). When there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block may be used as the reconstructed block.
[93] The adder 340 may be referred to as a reconstruction unit or a reconstructed block generation unit. The generated reconstructed signal may be used for intra prediction of the next block to be processed in the current picture, may be output after filtering as described later, or may be used for inter prediction of the next picture.
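The adder's behavior described in [92], including the skip-mode case, reduces to a per-sample addition. A minimal sketch (clipping the result to the valid sample range is omitted for brevity):

```python
def reconstruct_block(pred_samples, resid_samples=None):
    """Reconstruction = prediction + residual. With no residual
    (e.g. skip mode), the predicted block is used as the
    reconstructed block directly."""
    if resid_samples is None:          # skip mode: no residual signaled
        return list(pred_samples)
    return [p + r for p, r in zip(pred_samples, resid_samples)]


rec = reconstruct_block([100, 102], [3, -2])   # prediction plus residual
skip = reconstruct_block([100, 102])           # skip mode: copy of prediction
```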
[94] Meanwhile, luma mapping with chroma scaling (LMCS) may be applied in the picture decoding process.
[95] The filtering unit 350 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filtering unit 350 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and may transmit the modified reconstructed picture to the memory 360, specifically to the DPB of the memory 360. The various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
[96] The (modified) reconstructed picture stored in the DPB of the memory 360 may be used as a reference picture in the inter prediction unit 332. The memory 360 may store motion information of a block from which the motion information in the current picture was derived (or decoded) and/or motion information of blocks in an already reconstructed picture. The stored motion information may be transferred to the inter prediction unit 332 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block. The memory 360 may store reconstructed samples of the reconstructed blocks in the current picture and transfer them to the intra prediction unit 331.
[97] In this specification, the embodiments described for the filtering unit 260, the inter prediction unit 221, and the intra prediction unit 222 of the encoding apparatus 100 may also be applied, identically or correspondingly, to the filtering unit 350, the inter prediction unit 332, and the intra prediction unit 331 of the decoding apparatus 300, respectively.
[98] As described above, prediction is performed to increase compression efficiency in performing video coding. Through this, a predicted block including prediction samples for the current block, which is the block to be coded, can be generated. Here, the predicted block includes prediction samples in the spatial domain (or pixel domain). The predicted block is derived identically in the encoding apparatus and the decoding apparatus, and the encoding apparatus can increase image coding efficiency by signaling, to the decoding apparatus, information about the residual between the original block and the predicted block (residual information), rather than the original sample values of the original block themselves. The decoding apparatus may derive a residual block including residual samples based on the residual information, combine the residual block and the predicted block to generate a reconstructed block including reconstructed samples, and generate a reconstructed picture including the reconstructed blocks.
[99] The residual information may be generated through transform and quantization procedures. For example, the encoding apparatus may derive a residual block between the original block and the predicted block, perform a transform procedure on the residual samples (residual sample array) included in the residual block to derive transform coefficients, perform a quantization procedure on the transform coefficients to derive quantized transform coefficients, and signal the related residual information to the decoding apparatus (through a bitstream). Here, the residual information may include information such as value information of the quantized transform coefficients, position information, a transform technique, a transform kernel, and a quantization parameter. The decoding apparatus may perform an inverse quantization/inverse transform procedure based on the residual information and derive residual samples (or a residual block). The decoding apparatus may generate a reconstructed picture based on the predicted block and the residual block. The encoding apparatus may also inverse-quantize/inverse-transform the quantized transform coefficients, for reference for inter prediction of a subsequent picture, to derive a residual block, and may generate a reconstructed picture based thereon.
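The transform/quantization round trip described in [99] can be sketched with a tiny 4-point Hadamard transform. This is illustrative only: real codecs use DCT/DST-family kernels, 2-D blocks, and rate-distortion-optimized quantization, but the loss mechanism is the same — quantization discards precision that the inverse path cannot recover:

```python
def hadamard4(x):
    """4-point Hadamard transform; the matrix is symmetric and
    self-inverse up to a factor of 4."""
    a, b, c, d = x
    return [a + b + c + d, a - b + c - d, a + b - c - d, a - b - c + d]


resid = [10, 12, 9, 11]
coeffs = hadamard4(resid)                         # transform
qstep = 8
levels = [int(round(c / qstep)) for c in coeffs]  # quantization (lossy step)
recoeffs = [lv * qstep for lv in levels]          # inverse quantization
recon = [v // 4 for v in hadamard4(recoeffs)]     # inverse transform
# recon approximates resid; the quantization error is the price of compression
```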
[100] FIG. 4 exemplarily shows a hierarchical structure for coded data.
[101] Referring to FIG. 4, the coded data may be divided into a video coding layer (VCL), which handles the video/image coding process and the coded video/image data itself, and a network abstraction layer (NAL), which lies between the VCL and the sub-system that stores and transmits the coded video/image data.
[102] The VCL may generate parameter sets corresponding to headers of sequences, pictures, and the like (a picture parameter set (PPS), a sequence parameter set (SPS), a video parameter set (VPS), etc.) and supplemental enhancement information (SEI) messages additionally required in the video/image coding process. The SEI messages are separated from the information about the video/image (slice data). The VCL containing the information about the video/image consists of slice data and a slice header. Meanwhile, the slice header may be referred to as a tile group header, and the slice data may be referred to as tile group data.
[103] In the NAL, a NAL unit may be generated by adding header information (a NAL unit header) to a raw byte sequence payload (RBSP) generated in the VCL. Here, the RBSP refers to the slice data, parameter sets, SEI messages, and the like generated in the VCL. The NAL unit header may include NAL unit type information specified according to the RBSP data included in the corresponding NAL unit.
[104] The NAL unit, which is the basic unit of the NAL, serves to map the coded image to the bit sequence of a sub-system, such as a file format according to a predetermined standard, the Real-time Transport Protocol (RTP), or the Transport Stream (TS).
[105] As shown, NAL units may be divided into VCL NAL units and non-VCL NAL units according to the RBSP generated in the VCL. A VCL NAL unit may mean a NAL unit containing information about the image (slice data), and a non-VCL NAL unit may mean a NAL unit containing the information (a parameter set or an SEI message) necessary for decoding the image.
[106] The above-described VCL NAL units and non-VCL NAL units may be transmitted through a network with header information attached according to the data standard of the sub-system. For example, a NAL unit may be transformed into a data form of a predetermined standard, such as the H.266/VVC file format, the Real-time Transport Protocol (RTP), or the Transport Stream (TS), and transmitted through various networks.
[107] As described above, the NAL unit type may be specified according to the RBSP data structure included in the corresponding NAL unit, and information on this NAL unit type may be stored in the NAL unit header and signaled.
[108] For example, NAL units may be roughly classified into a VCL NAL unit type and a non-VCL NAL unit type depending on whether the NAL unit contains information about the image (slice data). VCL NAL unit types may be classified according to the nature and type of the picture included in the VCL NAL unit, and non-VCL NAL unit types may be classified according to the type of the parameter set, and so on.
[109] The following is an example of NAL unit types specified according to the type of parameter set included in the non-VCL NAL unit type. The NAL unit type may be specified according to the type of the parameter set. For example, the NAL unit type may be specified as any one of: an adaptation parameter set (APS) NAL unit, which is the type for a NAL unit including an APS; a decoding parameter set (DPS) NAL unit, which is the type for a NAL unit including a DPS; a video parameter set (VPS) NAL unit, which is the type for a NAL unit including a VPS; a sequence parameter set (SPS) NAL unit, which is the type for a NAL unit including an SPS; and a picture parameter set (PPS) NAL unit, which is the type for a NAL unit including a PPS.
[110] The above-described NAL unit types have syntax information for the NAL unit type, and the syntax information may be stored in the NAL unit header and signaled. For example, the syntax information may be nal_unit_type, and NAL unit types may be specified by nal_unit_type values.
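A sketch of reading nal_unit_type out of a NAL unit header, assuming the two-byte header layout used in VVC-style codecs; the exact field positions here are an assumption for illustration, not quoted from this document:

```python
def parse_nal_unit_header(b0, b1):
    """Parse a two-byte NAL unit header (VVC-style layout assumed):
    byte 0: forbidden_zero_bit (1), reserved bit (1), nuh_layer_id (6);
    byte 1: nal_unit_type (5), nuh_temporal_id_plus1 (3)."""
    nuh_layer_id = b0 & 0x3F
    nal_unit_type = (b1 >> 3) & 0x1F
    nuh_temporal_id_plus1 = b1 & 0x07
    return nal_unit_type, nuh_layer_id, nuh_temporal_id_plus1


# 0x81 == 0b10000_001: nal_unit_type 16, nuh_temporal_id_plus1 1
nut, layer, tid1 = parse_nal_unit_header(0x00, 0x81)
```

A decoder would then dispatch on the returned nal_unit_type value, e.g. routing parameter-set NAL units (APS/DPS/VPS/SPS/PPS) to the parameter-set parser and VCL NAL units to slice decoding.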
[111] Meanwhile, as described above, one picture may include a plurality of slices, and one slice may include a slice header and slice data. In this case, one picture header may be further added for the plurality of slices (the set of slice headers and slice data) in one picture. The picture header (picture header syntax) may include information/parameters commonly applicable to the picture. The slice header (slice header syntax) may include information/parameters commonly applicable to the slice. The APS (APS syntax) or the PPS (PPS syntax) may include information/parameters commonly applicable to one or more slices or pictures. The SPS (SPS syntax) may include information/parameters commonly applicable to one or more sequences. The VPS (VPS syntax) may include information/parameters commonly applicable to multiple layers. The DPS (DPS syntax) may include information/parameters commonly applicable to the overall video. The DPS may include information/parameters related to the concatenation of coded video sequences (CVS). In this document, high level syntax (HLS) may include at least one of the APS syntax, the PPS syntax, the SPS syntax, the VPS syntax, the DPS syntax, the picture header syntax, and the slice header syntax.
[112] In this document, the image/video information encoded by the encoding apparatus and signaled to the decoding apparatus in the form of a bitstream not only includes intra-picture partitioning related information, intra/inter prediction information, residual information, in-loop filtering information, and the like, but may also include information included in the slice header, information included in the picture header, information included in the APS, information included in the PPS, information included in the SPS, information included in the VPS, and/or information included in the DPS. In addition, the image/video information may further include information of the NAL unit header.
[113] FIG. 5 is a diagram showing an example of partitioning a picture.
[114] Pictures may be divided into coding tree units (CTUs), and a CTU may correspond to a coding tree block (CTB). A CTU may include a coding tree block of luma samples and two corresponding coding tree blocks of chroma samples. Meanwhile, the maximum allowable size of a CTU for coding and prediction may be different from the maximum allowable size of a CTU for transform.
[115] A tile may correspond to a series of CTUs covering a rectangular region of a picture, and a picture may be divided into one or more tile rows and one or more tile columns.
[116] Meanwhile, a slice may consist of an integer number of complete tiles or an integer number of consecutive complete CTU rows. In this case, two slice modes may be supported, including a raster-scan slice mode and a rectangular slice mode.
[117] In the raster-scan slice mode, a slice may contain a sequence of complete tiles in a tile raster scan of the picture. In the rectangular slice mode, a slice may contain either a number of complete tiles that collectively form a rectangular region of the picture, or a number of consecutive CTU rows within one tile that collectively form a rectangular region of the picture. Tiles within a rectangular slice may be scanned in tile raster scan order within the rectangular region corresponding to that slice.
[118] FIG. 5 (a) is a diagram showing an example of dividing a picture into tiles and raster-scan slices; for example, the picture may be divided into 12 tiles and 3 raster-scan slices.
[119] In addition, FIG. 5 (b) is a diagram showing an example of dividing a picture into tiles and rectangular slices; for example, the picture may be divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.
[120] In addition, FIG. 5 (c) is a diagram showing an example of dividing a picture into tiles and rectangular slices; for example, the picture may be divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices.
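The uniform tile partitionings shown in FIG. 5 can be reproduced with a simple grid computation over CTU columns and rows. This is a sketch; the rounding rule used for picture sizes that are not evenly divisible is an assumption here:

```python
def uniform_bounds(size_in_ctus, num_parts):
    """Split size_in_ctus CTUs into num_parts nearly-equal parts;
    returns the num_parts + 1 part boundaries."""
    return [(i * size_in_ctus) // num_parts for i in range(num_parts + 1)]


def tile_sizes(pic_w_ctus, pic_h_ctus, num_cols, num_rows):
    """Return (width, height) in CTUs for each tile, in tile
    raster scan order over the picture."""
    cols = uniform_bounds(pic_w_ctus, num_cols)
    rows = uniform_bounds(pic_h_ctus, num_rows)
    return [(cols[c + 1] - cols[c], rows[r + 1] - rows[r])
            for r in range(num_rows) for c in range(num_cols)]


# e.g. a picture of 18x12 CTUs split into 6 tile columns and 4 tile rows,
# matching the 24-tile layout of FIG. 5 (b); the 18x12 size is an assumption
tiles = tile_sizes(18, 12, 6, 4)
```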
[121] FIG. 6 is a flowchart illustrating a picture encoding procedure based on tiles and/or tile groups according to an embodiment.
[122] In an embodiment, picture partitioning (S600) and generation of information on tiles/tile groups (S610) may be performed by the image partitioner 210 of the encoding apparatus, and encoding of video/image information including the information on the tiles/tile groups (S620) may be performed by the entropy encoding unit 240 of the encoding apparatus.
[123] The encoding apparatus according to an embodiment may perform picture partitioning for encoding of an input picture (S600). The picture may include one or more tiles/tile groups. The encoding apparatus may partition the picture into various forms in consideration of the image characteristics and coding efficiency of the picture, and may generate information indicating the partitioning form with the optimum coding efficiency and signal it to the decoding apparatus.
[124] The encoding apparatus according to an embodiment may determine the tiles/tile groups applied to the picture and generate the information on the tiles/tile groups (S610). The information on the tiles/tile groups may include information indicating the structure of the tiles/tile groups for the picture. The information on the tiles/tile groups may be signaled through various parameter sets and/or a tile group header, as described later. Specific examples are described below.
[125] The encoding apparatus according to an embodiment may encode video/image information including the information on the tiles/tile groups and output it in the form of a bitstream (S620). The bitstream may be delivered to the decoding apparatus through a digital storage medium or a network. The video/image information may include the tile and/or tile group header syntax described in this document. In addition, the video/image information may further include the above-described prediction information, residual information, (in-loop) filtering information, and the like. For example, the encoding apparatus may reconstruct the current picture, apply in-loop filtering, encode parameters related to the in-loop filtering, and output them in the form of a bitstream.
[126] FIG. 7 is a flowchart illustrating a picture decoding procedure based on tiles and/or tile groups according to an embodiment.
[127] In an embodiment, the step of obtaining information on tiles/tile groups from a bitstream (S700) and the step of deriving the tiles/tile groups in a picture (S710) may be performed by the entropy decoding unit 310 of the decoding apparatus, and the step of performing picture decoding based on the tiles/tile groups (S720) may be performed by the sample decoder of the decoding apparatus.
[128] The decoding apparatus according to an embodiment may obtain information on tiles/tile groups from a received bitstream (S700). The information on the tiles/tile groups may be obtained through various parameter sets and/or a tile group header, as described later. Specific examples are described below.
[129] The decoding apparatus according to an embodiment may derive the tiles/tile groups in the current picture based on the information on the tiles/tile groups (S710).
[130] The decoding apparatus according to an embodiment may decode the current picture based on the tiles/tile groups (S720). For example, the decoding apparatus may derive the CTUs/CUs located within a tile and, based on them, perform inter/intra prediction, residual processing, reconstructed block (picture) generation, and/or in-loop filtering procedures. Also in this case, for example, the decoding apparatus may initialize the context model/information in units of tiles/tile groups. In addition, when a neighboring block or neighboring sample referenced in inter/intra prediction is located in a tile different from the current tile in which the current block is located, the decoding apparatus may treat the neighboring block or neighboring sample as unavailable.
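The availability rule at the end of [130] can be sketched as: a neighboring CTU position is usable for prediction only if it lies inside the picture and falls in the same tile as the current block. The tile-grid representation (boundary lists in CTU units) is illustrative:

```python
import bisect


def tile_index(ctu_x, ctu_y, col_bounds, row_bounds):
    """Map a CTU coordinate to its tile index given tile
    column/row boundaries (in CTU units)."""
    c = bisect.bisect_right(col_bounds, ctu_x) - 1
    r = bisect.bisect_right(row_bounds, ctu_y) - 1
    return r * (len(col_bounds) - 1) + c


def neighbor_available(cur, nbr, col_bounds, row_bounds):
    """A neighbor outside the picture, or in a different tile
    than the current block, is treated as unavailable."""
    x, y = nbr
    if x < 0 or y < 0 or x >= col_bounds[-1] or y >= row_bounds[-1]:
        return False
    return (tile_index(*cur, col_bounds, row_bounds)
            == tile_index(x, y, col_bounds, row_bounds))


# Two tile columns split at CTU x=2: (1, 0) and (2, 0) lie in different tiles
ok = neighbor_available((2, 0), (1, 0), [0, 2, 4], [0, 2])
```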
[131] FIG. 8 is a diagram showing an example of partitioning a picture into a plurality of tiles.
[132] In an embodiment, tiles may refer to regions within a picture defined by a set of vertical and/or horizontal boundaries that divide the picture into a plurality of rectangles. FIG. 8 shows an example in which one picture 700 is divided into a plurality of tiles based on a plurality of column boundaries 810 and row boundaries 820. In FIG. 8, the first 32 largest coding units (or coding tree units (CTUs)) are numbered and shown.
[133] In one embodiment, each tile may include an integer number of CTUs processed in raster scan order within that tile. In this case, the plurality of tiles in the picture, including each such tile, may also be processed in raster scan order within the picture. The tiles may be grouped to form tile groups, and the tiles within a single tile group may be raster scanned. Partitioning a picture into tiles may be defined based on the syntax and semantics of the PPS (Picture Parameter Set).
[134] In one embodiment, the information on tiles derived from the PPS may be used to check (or read) the following items. First, it may be checked whether one tile or more than one tile exists in the picture; when more than one tile exists, it may be checked whether the one or more tiles are uniformly distributed, the dimensions of the tiles may be checked, and it may be checked whether the loop filter is enabled.
[135] In one embodiment, the PPS may first signal the syntax element single_tile_in_pic_flag. The single_tile_in_pic_flag may indicate whether only one tile exists in the picture or a plurality of tiles exist in the picture. When a plurality of tiles exist in the picture, the decoding apparatus may parse information on the number of tile rows and tile columns using the syntax elements num_tile_columns_minus1 and num_tile_rows_minus1. The syntax elements num_tile_columns_minus1 and num_tile_rows_minus1 may specify the process of dividing the picture into tile rows and columns. The heights of the tile rows and the widths of the tile columns may be expressed in terms of CTBs (i.e., in units of CTBs).
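The parsing order just described can be sketched as follows. This is a hedged, non-normative illustration: the `ListReader` stub and the `ue()`/`flag()` interface are invented for the example, while the syntax element names follow the PPS syntax discussed in this document.

```python
class ListReader:
    """Toy stand-in for an entropy-decoder bit reader (illustrative only)."""
    def __init__(self, values):
        self.values = list(values)
    def ue(self):    # a ue(v)-coded syntax element (already decoded here)
        return self.values.pop(0)
    def flag(self):  # a u(1) flag
        return bool(self.values.pop(0))

def parse_pps_tile_info(r):
    """Sketch of the tile-related PPS parsing flow described above."""
    info = {"single_tile_in_pic_flag": r.flag()}
    if not info["single_tile_in_pic_flag"]:
        info["num_tile_columns_minus1"] = r.ue()
        info["num_tile_rows_minus1"] = r.ue()
        info["uniform_tile_spacing_flag"] = r.flag()
        if not info["uniform_tile_spacing_flag"]:
            # explicit per-column widths / per-row heights, in CTBs
            info["tile_column_width_minus1"] = [r.ue() for _ in range(info["num_tile_columns_minus1"])]
            info["tile_row_height_minus1"] = [r.ue() for _ in range(info["num_tile_rows_minus1"])]
        info["loop_filter_across_tiles_enabled_flag"] = r.flag()
    return info

# 3x2 tile grid, uniform spacing, loop filter enabled across tile boundaries:
pps = parse_pps_tile_info(ListReader([0, 2, 1, 1, 1]))
print(pps["num_tile_columns_minus1"])  # 2
```

Note that when spacing is non-uniform, only the first num_tile_columns_minus1 widths (and num_tile_rows_minus1 heights) are signaled; the last one is derived from the picture size.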
[136] In one embodiment, an additional flag may be parsed to check whether the tiles in the picture are uniformly spaced. When the tiles in the picture are not uniformly spaced, the number of CTBs per tile may be explicitly signaled for the boundaries of each tile row and column (i.e., the number of CTBs in each tile row and the number of CTBs in each tile column may be signaled). If the tiles are uniformly spaced, the tiles may have the same width and height as one another.
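For the uniformly spaced case, the tile sizes can be derived without any explicit per-tile signaling by an even integer split; the sketch below (non-normative, names invented for the example) uses the consecutive-difference formula that also appears in the ColWidth/RowHeight derivation later in this document.

```python
def uniform_sizes(pic_size_in_ctbs, num_minus1):
    """Width of each tile column (or height of each tile row) in CTBs when
    uniform_tile_spacing_flag is set: consecutive differences of an even
    integer split of the picture dimension."""
    n = num_minus1 + 1
    return [((i + 1) * pic_size_in_ctbs) // n - (i * pic_size_in_ctbs) // n
            for i in range(n)]

# A picture 10 CTBs wide split into 3 tile columns:
print(uniform_sizes(10, 2))       # [3, 3, 4]
print(sum(uniform_sizes(10, 2)))  # 10: the columns always cover the picture exactly
```

The sizes differ by at most one CTB, which is why uniformly spaced tiles can be described as having (nearly) the same width and height.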
[137] In one embodiment, another flag (for example, the syntax element loop_filter_across_tiles_enabled_flag) may be parsed to determine whether the loop filter is enabled across tile boundaries.
[138] Table 1 below summarizes examples of the main information on tiles that can be derived by parsing the PPS. Table 1 may represent the PPS RBSP syntax.
[139] [Table 1]
(Table 1 is provided as an image in the original publication: Figure imgf000023_0001)
[140] Table 2 below shows an example of the semantics for the syntax elements described in Table 1 above.
[141] [Table 2]
WO 2020/175905 PCT/KR2020/002730
[142]
(Table 2 is provided as an image in the original publication: Figure imgf000025_0001)
[143] FIG. 9 is a block diagram showing a configuration of an encoding apparatus according to an embodiment, and FIG. 10 is a block diagram showing a configuration of a decoding apparatus according to an embodiment.
[144] FIG. 9 shows an example of a block diagram of the encoding apparatus. The encoding apparatus 900 shown in FIG. 9 includes a partitioning module 910 and an encoding module 920. The partitioning module 910 may perform the same and/or similar operations as the image partitioner of the encoding apparatus shown in FIG. 2, and the encoding module 920 may perform the same and/or similar operations as the entropy encoder 240 of the encoding apparatus shown in FIG. 2. The input video may be partitioned by the partitioning module 910 and then encoded by the encoding module 920. After being encoded, the encoded video may be output from the encoding apparatus 900.
[145] FIG. 10 shows an example of a block diagram of the decoding apparatus. The decoding apparatus 1000 shown in FIG. 10 includes a decoding module 1010 and a deblocking filter 1020. The decoding module 1010 may perform the same and/or similar operations as the entropy decoder of the decoding apparatus shown in FIG. 3, and the deblocking filter 1020 may perform the same and/or similar operations as the filter 350 of the decoding apparatus shown in FIG. 3. The decoding module 1010 may decode the input received from the encoding apparatus 900 to derive information on tiles. A processing unit may be determined based on the decoded information, and the deblocking filter 1020 may process the processing unit by applying an in-loop deblocking filter. In-loop filtering may be applied to remove coding artifacts generated in the partitioning process. The in-loop filtering operation may include an ALF (Adaptive Loop Filter), a deblocking filter (DF), an SAO (Sample Adaptive Offset), and the like. Thereafter, the decoded picture may be output.
[146] An example of the descriptors that specify the parsing process of each syntax element is shown in Table 3 below.
[147] [Table 3]
[149] FIG. 11 is a diagram showing an example of tile and tile group units constituting a current picture.
[150] As described above, tiles may be grouped to form tile groups. FIG. 11 shows an example in which one picture is partitioned into tiles and tile groups. In FIG. 11, the picture includes 9 tiles and 3 tile groups. Each tile group may be coded independently.
[151] FIG. 12 is a diagram schematically showing an example of a signaling structure of tile group information.
[152] Within a CVS (Coded Video Sequence), each tile group may include a tile group header. Tile groups may have a meaning similar to slice groups. Each tile group may be coded independently. A tile group may include one or more tiles. The tile group header may refer to a PPS, and the PPS may subsequently refer to an SPS (Sequence Parameter Set).
[153] In FIG. 12, a tile group header may have the PPS index of the PPS referred to by the tile group header. The PPS may in turn refer to the SPS.
[154] In addition to the PPS index, the tile group header according to an embodiment may determine the following information. First, when more than one tile exists per picture, the tile group address and the number of tiles in the tile group may be determined. Next, the tile group type, such as intra/predictive/bi-directional, may be determined. Next, the LSB (Least Significant Bits) of the POC (Picture Order Count) may be determined. Next, when more than one tile exists in the picture, the offset length and the entry points into the tiles may be determined.
[155] Table 4 below shows an example of the syntax of the tile group header. In Table 4, the tile group header (tile_group_header) may be replaced with a slice header.
[156] [Table 4]
(Table 4 is provided as an image in the original publication: Figure imgf000030_0001)
[157] Table 5 below shows an example of English semantics for the syntax of the tile group header.
[158] [Table 5]
When present, the value of the tile group header syntax elements tile_group_pic_parameter_set_id and tile_group_pic_order_cnt_lsb shall be the same in all tile group headers of a coded picture.

tile_group_pic_parameter_set_id specifies the value of pps_pic_parameter_set_id for the PPS in use. The value of tile_group_pic_parameter_set_id shall be in the range of 0 to 63, inclusive.

It is a requirement of bitstream conformance that the value of TemporalId of the current picture shall be greater than or equal to the value of TemporalId of the PPS that has pps_pic_parameter_set_id equal to tile_group_pic_parameter_set_id.

tile_group_address specifies the tile address of the first tile in the tile group, where the tile address is the tile ID as specified by Equation c-7. The length of tile_group_address is Ceil( Log2( NumTilesInPic ) ) bits. The value of tile_group_address shall be in the range of 0 to NumTilesInPic - 1, inclusive, and the value of tile_group_address shall not be equal to the value of tile_group_address of any other coded tile group NAL unit of the same coded picture. When tile_group_address is not present, it is inferred to be equal to 0.

[159] num_tiles_in_tile_group_minus1 plus 1 specifies the number of tiles in the tile group. The value of num_tiles_in_tile_group_minus1 shall be in the range of 0 to NumTilesInPic - 1, inclusive. When not present, the value of num_tiles_in_tile_group_minus1 is inferred to be equal to 0.

tile_group_type specifies the coding type of the tile group according to Table 6.

When nal_unit_type is equal to IRAP_NUT, i.e., the picture is an IRAP picture, tile_group_type shall be equal to 2.

tile_group_pic_order_cnt_lsb specifies the picture order count modulo MaxPicOrderCntLsb for the current picture. The length of the tile_group_pic_order_cnt_lsb syntax element is log2_max_pic_order_cnt_lsb_minus4 + 4 bits. The value of tile_group_pic_order_cnt_lsb shall be in the range of 0 to MaxPicOrderCntLsb - 1, inclusive.

offset_len_minus1 plus 1 specifies the length, in bits, of the entry_point_offset_minus1[ i ] syntax elements. The value of offset_len_minus1 shall be in the range of 0 to 31, inclusive.

entry_point_offset_minus1[ i ] plus 1 specifies the i-th entry point offset in bytes, and is represented by offset_len_minus1 plus 1 bits. The tile group data that follow the tile group header consist of num_tiles_in_tile_group_minus1 + 1 subsets, with subset index values ranging from 0 to num_tiles_in_tile_group_minus1, inclusive.
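As a quick arithmetic check of the tile_group_address length rule above (Ceil( Log2( NumTilesInPic ) ) bits), a small non-normative helper:

```python
import math

def tile_group_address_bits(num_tiles_in_pic):
    """Number of bits used to code tile_group_address, per the semantics above
    (only meaningful when more than one tile exists in the picture)."""
    return math.ceil(math.log2(num_tiles_in_pic))

print(tile_group_address_bits(9))   # 4 bits for the 9-tile picture of FIG. 11
print(tile_group_address_bits(24))  # 5 bits for a 6x4 tile grid
```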
[160] (This portion of Table 5 is provided as an image in the original publication: Figure imgf000033_0002)
[161] [Table 6]
(Table 6 is provided as an image in the original publication: Figure imgf000033_0001)
[162] In one embodiment, a tile group may include a tile group header and tile group data. When the tile group address is known, the individual positions of each CTU in the tile group may be mapped and decoded. Table 7 below shows an example of the syntax of the tile group data. In Table 7, the tile group data may be replaced with slice data.
[163] [표刀 [163] [Table
(Table 7 is provided as an image in the original publication: Figure imgf000034_0001)
[164] Table 8 below shows an example of English semantics for the syntax of the tile group data.
[165] [Table 8]
[166] (The first part of this derivation, specifying the list ColWidth[ i ] of tile column widths, is provided as an image in the original publication: Figure imgf000035_0001)

The list RowHeight[ j ] for j ranging from 0 to num_tile_rows_minus1, inclusive, specifying the height of the j-th tile row in units of CTBs, is derived as follows:

if( uniform_tile_spacing_flag )
    for( j = 0; j <= num_tile_rows_minus1; j++ )
        RowHeight[ j ] = ( ( j + 1 ) * PicHeightInCtbsY ) / ( num_tile_rows_minus1 + 1 ) -
                         ( j * PicHeightInCtbsY ) / ( num_tile_rows_minus1 + 1 )
else {
    RowHeight[ num_tile_rows_minus1 ] = PicHeightInCtbsY
    for( j = 0; j < num_tile_rows_minus1; j++ ) {
        RowHeight[ j ] = tile_row_height_minus1[ j ] + 1
        RowHeight[ num_tile_rows_minus1 ] -= RowHeight[ j ]
    }
}

The list ColBd[ i ] for i ranging from 0 to num_tile_columns_minus1 + 1, inclusive, specifying the location of the i-th tile column boundary in units of CTBs, is derived as follows:

for( ColBd[ 0 ] = 0, i = 0; i <= num_tile_columns_minus1; i++ )
    ColBd[ i + 1 ] = ColBd[ i ] + ColWidth[ i ]

[167] The list RowBd[ j ] for j ranging from 0 to num_tile_rows_minus1 + 1, inclusive, specifying the location of the j-th tile row boundary in units of CTBs, is derived as follows:

for( RowBd[ 0 ] = 0, j = 0; j <= num_tile_rows_minus1; j++ )
    RowBd[ j + 1 ] = RowBd[ j ] + RowHeight[ j ]

The list CtbAddrRsToTs[ ctbAddrRs ] for ctbAddrRs ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a CTB address in CTB raster scan of a picture to a CTB address in tile scan, is derived as follows:

for( ctbAddrRs = 0; ctbAddrRs < PicSizeInCtbsY; ctbAddrRs++ ) {
    tbX = ctbAddrRs % PicWidthInCtbsY
    tbY = ctbAddrRs / PicWidthInCtbsY
    for( i = 0; i <= num_tile_columns_minus1; i++ )
        if( tbX >= ColBd[ i ] )
            tileX = i
    for( j = 0; j <= num_tile_rows_minus1; j++ )
        if( tbY >= RowBd[ j ] )
            tileY = j
    CtbAddrRsToTs[ ctbAddrRs ] = 0
    for( i = 0; i < tileX; i++ )
        CtbAddrRsToTs[ ctbAddrRs ] += RowHeight[ tileY ] * ColWidth[ i ]
    for( j = 0; j < tileY; j++ )
        CtbAddrRsToTs[ ctbAddrRs ] += PicWidthInCtbsY * RowHeight[ j ]
    CtbAddrRsToTs[ ctbAddrRs ] += ( tbY - RowBd[ tileY ] ) * ColWidth[ tileX ] + tbX - ColBd[ tileX ]
}

[168] The list CtbAddrTsToRs[ ctbAddrTs ] for ctbAddrTs ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a CTB address in tile scan to a CTB address in CTB raster scan of a picture, is derived as follows:

for( ctbAddrRs = 0; ctbAddrRs < PicSizeInCtbsY; ctbAddrRs++ )
    CtbAddrTsToRs[ CtbAddrRsToTs[ ctbAddrRs ] ] = ctbAddrRs

The list TileId[ ctbAddrTs ] for ctbAddrTs ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a CTB address in tile scan to a tile ID, is derived as follows:

for( j = 0, tileIdx = 0; j <= num_tile_rows_minus1; j++ )
    for( i = 0; i <= num_tile_columns_minus1; i++, tileIdx++ )
        for( y = RowBd[ j ]; y < RowBd[ j + 1 ]; y++ )
            for( x = ColBd[ i ]; x < ColBd[ i + 1 ]; x++ )
                TileId[ CtbAddrRsToTs[ y * PicWidthInCtbsY + x ] ] = tileIdx

[169] The list NumCtusInTile[ tileIdx ] for tileIdx ranging from 0 to PicSizeInCtbsY - 1, inclusive, specifying the conversion from a tile index to the number of CTUs in the tile, is derived as follows:

for( j = 0, tileIdx = 0; j <= num_tile_rows_minus1; j++ )
    for( i = 0; i <= num_tile_columns_minus1; i++, tileIdx++ )
        NumCtusInTile[ tileIdx ] = ColWidth[ i ] * RowHeight[ j ]

The list FirstCtbAddrTs[ tileIdx ] for tileIdx ranging from 0 to NumTilesInPic - 1, inclusive, specifying the conversion from a tile ID to the CTB address in tile scan of the first CTB in the tile, is derived as follows:

for( ctbAddrTs = 0, tileIdx = 0, tileStartFlag = 1; ctbAddrTs < PicSizeInCtbsY; ctbAddrTs++ ) {
    if( tileStartFlag ) {
        FirstCtbAddrTs[ tileIdx ] = ctbAddrTs
        tileStartFlag = 0
    }
    tileEndFlag = ctbAddrTs == PicSizeInCtbsY - 1 || TileId[ ctbAddrTs + 1 ] != TileId[ ctbAddrTs ]
    if( tileEndFlag ) {
        tileIdx++
        tileStartFlag = 1
    }
}

[170] (The remainder of Table 8 is provided as an image in the original publication: Figure imgf000040_0001)
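The raster-scan-to-tile-scan derivation above can be exercised with a direct, non-normative Python transcription; variable names mirror the pseudocode, and the small 4x2-CTB picture used below is an assumed example, not one from the patent.

```python
def tile_scan_maps(pic_w, pic_h, col_width, row_height):
    """Transcription of the CtbAddrRsToTs / CtbAddrTsToRs derivation above.
    pic_w/pic_h: picture width/height in CTBs; col_width/row_height: tile
    column widths and tile row heights in CTBs."""
    col_bd, row_bd = [0], [0]
    for w in col_width:
        col_bd.append(col_bd[-1] + w)
    for h in row_height:
        row_bd.append(row_bd[-1] + h)

    pic_size = pic_w * pic_h
    rs_to_ts = [0] * pic_size
    for rs in range(pic_size):
        tb_x, tb_y = rs % pic_w, rs // pic_w
        tile_x = max(i for i in range(len(col_width)) if tb_x >= col_bd[i])
        tile_y = max(j for j in range(len(row_height)) if tb_y >= row_bd[j])
        ts = 0
        for i in range(tile_x):            # full tiles to the left in this tile row
            ts += row_height[tile_y] * col_width[i]
        for j in range(tile_y):            # full tile rows above
            ts += pic_w * row_height[j]
        ts += (tb_y - row_bd[tile_y]) * col_width[tile_x] + tb_x - col_bd[tile_x]
        rs_to_ts[rs] = ts

    ts_to_rs = [0] * pic_size              # inverse map, as in the derivation
    for rs in range(pic_size):
        ts_to_rs[rs_to_ts[rs]] = rs
    return rs_to_ts, ts_to_rs

# 4x2-CTB picture, two 2-CTB-wide tile columns, one tile row:
rs_to_ts, ts_to_rs = tile_scan_maps(4, 2, [2, 2], [2])
print(rs_to_ts)  # [0, 1, 4, 5, 2, 3, 6, 7]
```

The output shows that the four CTBs of the left tile receive tile-scan addresses 0-3 before any CTB of the right tile, which is exactly the per-tile raster order described in paragraph [133].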
[171] There may be various application examples that require tile-based partitioning of a picture, and the present embodiments may be related to those application examples.
[172] In one example, parallel processing is considered. Some implementations running on multi-core CPUs need to partition the source picture into tiles and tile groups. In this case, each tile group may be processed in parallel on a separate core. Such parallel processing may be useful for high-resolution real-time encoding of videos. Additionally, the parallel processing may reduce information sharing between tile groups, and accordingly may relax memory constraints. Since tiles may be distributed to different threads while being processed in parallel, a parallel architecture can benefit from this partitioning mechanism.
[173] In another example, MTU (Maximum Transmission Unit) size matching is considered. Coded pictures transmitted over a network may be subject to fragmentation when the coded pictures are larger than the MTU size. Similarly, when the coded segments are small, the IP (Internet Protocol) header overhead may become significant. Packet fragmentation may result in a loss of error resiliency. When the picture is partitioned into tiles and each tile/tile group is packed into a separate packet in order to mitigate the effects of packet fragmentation, each packet may be smaller than the MTU size.
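The MTU-matching idea above can be sketched with a simple check that each coded tile group, packed as its own packet, avoids fragmentation. This is a hedged illustration only: the MTU value, the header overhead, and the byte sizes are assumptions for the example, not values from the patent.

```python
MTU_SIZE = 1500  # bytes: a typical Ethernet MTU (assumption, not from the patent)

def fits_mtu(tile_group_sizes, header_overhead=40):
    """True if every coded tile group, plus assumed IP/RTP header overhead,
    fits in a single packet without fragmentation."""
    return all(size + header_overhead <= MTU_SIZE for size in tile_group_sizes)

print(fits_mtu([900, 1200, 1400]))  # True: every group packs into one packet
print(fits_mtu([900, 1600]))        # False: the 1600-byte group would fragment
```

In practice an encoder would use such a bound as a rate-control target per tile group rather than a post-hoc check.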
[174] In another example, error resilience is considered. Error resilience may be motivated by the requirements of some applications that apply UEP (Unequal Error Protection) to coded tile groups.
[175] As described above, a method for efficiently signaling the structure of the tiles that partition a picture is needed, and this is described in detail with reference to FIGS. 13 to 21.
[176] FIG. 13 is a diagram showing an example of a picture in a video program for a video conference.
[177] According to the present specification, in tiling that partitions a picture into a plurality of tiles, flexible tiling may be achieved by using predefined rectangular areas.
[178] Conventional tiling has been performed according to a raster scan order, but a tiling structure based on this scheme has aspects that are not suitable for recent practical applications such as video programs for video conferencing.
[179] FIG. 13 shows an example of a picture in a video program for a video conference in which a video conference with multiple participants is in progress. In this case, the participants may be denoted Speaker 1, Speaker 2, Speaker 3, and Speaker 4. The area corresponding to each participant in the picture may correspond to one of preset areas, and each of the preset areas may be coded as a single tile or a tile group. When a participant changes in the video conference, the single tile or tile group corresponding to the participant may also change.
[180] FIG. 14 is a diagram showing an example of partitioning a picture into tiles or tile groups in a video program for a video conference.
[181] Referring to FIG. 14, the area assigned to Speaker 1 participating in the video conference may be coded as a single tile. Likewise, the areas assigned to each of Speaker 2, Speaker 3, and Speaker 4 may also each be coded as a single tile.
[182] When the area assigned to each participant is coded using an individual tile as shown in FIG. 14, efficient coding may be enabled as spatial dependency is improved. In addition, this partitioning scheme may be applied to 360 video data, which is described below with reference to FIG. 15.
[183] FIG. 15 is a diagram showing an example of partitioning a picture into tiles or tile groups based on an MCTS (Motion Constrained Tile Set).
[184] In FIG. 15, the picture may be obtained from 360-degree video data. 360 video may refer to video or image content that is captured or played back in all directions (360 degrees) at the same time, which is needed to provide VR (Virtual Reality). 360 video may refer to a video or image represented on various types of 3D space according to a 3D model; for example, 360 video may be represented on a spherical surface.
[185] A 2D (two-dimensional space) picture obtained from 360-degree video data may be encoded at at least one spatial resolution. For example, the picture may be encoded at a first resolution and a second resolution, where the first resolution may be higher than the second resolution. Referring to FIG. 15, the picture may be encoded at two spatial resolutions with sizes of 1536x1536 and 768x768, respectively, but the spatial resolutions are not limited thereto and may correspond to various sizes.
[186] In this case, a 6x4 tile grid may be used for the bitstreams encoded at each of the two spatial resolutions. In addition, an MCTS (motion constraint tile set) for each position of the tiles may be coded and used. As described above with reference to FIGS. 13 and 14, each of the MCTSs may include the tiles located in a respective preset area within the picture.
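For the 6x4 tile grid mentioned above, the per-tile dimensions at the two example resolutions work out as follows. This is a simple arithmetic sketch assuming a uniform grid; the patent does not state these derived tile sizes explicitly.

```python
def tile_dims(pic_w, pic_h, cols=6, rows=4):
    """Luma-sample size of each tile in a uniform cols x rows tile grid."""
    return pic_w // cols, pic_h // rows

print(tile_dims(1536, 1536))  # (256, 384): each tile of the high-resolution bitstream
print(tile_dims(768, 768))    # (128, 192): each tile of the low-resolution bitstream
```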
[187] An MCTS may include at least one tile forming a rectangular tile set, and a tile may represent a rectangular region composed of coding tree blocks (CTBs) of a two-dimensional picture. A tile may be distinguished based on a specific tile row and tile column within the picture. When inter prediction is performed on blocks in a specific MCTS in the encoding/decoding process, the blocks in that specific MCTS may be restricted to refer only to the corresponding MCTS of a reference picture for motion estimation/motion compensation.
[188] For example, referring to FIG. 15, twelve first MCTSs 1510 may be derived from the bitstream encoded at the spatial resolution with a size of 1536x1536, and twelve second MCTSs 1520 may be derived from the bitstream encoded at the spatial resolution with a size of 768x768. That is, the first MCTSs 1510 may correspond to an area having the first resolution in the same picture, and the second MCTSs 1520 may correspond to an area having the second resolution in the same picture.
[189] The first MCTSs may correspond to a viewport area within the picture. The viewport area may refer to the area that the user is viewing in the 360-degree video. Alternatively, the first MCTSs may correspond to an ROI (Region of Interest) area within the picture. The ROI area may refer to an area of interest to users, suggested by a 360 content provider.
[190] In this case, the MCTSs received at a single time may be merged to construct one merged picture. For example, the first MCTSs 1510 and the second MCTSs 1520 may be merged into a merged picture 1530 of size 1920x4708, and the merged picture 1530 may have four tile groups.
[191] Table 9 below shows an example of the PPS syntax.
WO 2020/175905 PCT/KR2020/002730
[192] [Table 9]
(Table 9 is provided as an image in the original publication.)
[193] Table 10 below shows an example of English semantics for the above syntax.
[194] [Table 10]
(Table 10 is provided as an image in the original publication.)
[195] tile_addr_val[ i ][ j ] specifies the tile_group_address value of the tile of the i-th tile row and the j-th tile column. The length of tile_addr_val[ i ][ j ] is tile_addr_len_minus1 + 1 bits.

For any integer m in the range of 0 to num_tile_columns_minus1, inclusive, and any integer n in the range of 0 to num_tile_rows_minus1, inclusive, tile_addr_val[ i ][ j ] shall not be equal to tile_addr_val[ m ][ n ] when i is not equal to m or j is not equal to n.

num_mcts_in_pic_minus1 plus 1 specifies the number of MCTSs in the picture.
[196] (The remainder of the semantics of Table 10 is provided as an image in the original publication.)
[197] In an embodiment, when a plurality of tiles exist in a picture, a syntax element uniform_tile_spacing_flag indicating whether the picture is to be divided uniformly to derive tiles of equal width and height may be parsed. The syntax element uniform_tile_spacing_flag may be used to indicate whether the tiles in the picture are uniformly divided. When the syntax element uniform_tile_spacing_flag is enabled, the width of the tile columns and the height of the tile rows may be parsed. That is, a syntax element tile_column_width_minus1 indicating the width of a tile column and a syntax element tile_row_height_minus1 indicating the height of a tile row may be signaled and/or parsed.
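When uniform spacing is used, the per-column widths do not need to be signaled; they can be derived by dividing the picture width (in CTBs) evenly. The sketch below follows the derivation commonly used in video coding standards such as HEVC (an assumption here, since the normative table in this publication is an image):

```python
def uniform_tile_widths(pic_width_in_ctbs, num_tile_columns):
    """Evenly split the picture width (in CTBs) into tile columns.
    Each boundary is placed at (i * width) // num_columns, so the
    resulting column widths differ by at most one CTB."""
    return [
        (i + 1) * pic_width_in_ctbs // num_tile_columns
        - i * pic_width_in_ctbs // num_tile_columns
        for i in range(num_tile_columns)
    ]

print(uniform_tile_widths(10, 3))  # [3, 3, 4]
```

The same derivation applies to tile row heights with the picture height in CTBs; when uniform_tile_spacing_flag is not used, the widths/heights come from tile_column_width_minus1 / tile_row_height_minus1 instead.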
[198] In an embodiment, a syntax element mcts_flag indicating whether the tiles in the picture form an MCTS may be parsed. Depending on the value of mcts_flag, the tiles or tile groups in the picture may or may not form a rectangular tile set, and the use of sample values or variables outside the rectangular tile set may or may not be restricted. When mcts_flag is 1, it may indicate that the picture is divided into MCTSs.
[199] In addition, the syntax element num_mcts_in_pic_minus1 plus 1 may indicate the number of MCTSs. In an embodiment, when mcts_flag is 1, that is, when the picture is divided into MCTSs, the syntax element num_mcts_in_pic_minus1 may be parsed.
[200] In addition, the syntax element top_left_tile_addr[ i ] may indicate the tile_group_address value, which is the position of the tile located at the top-left of the i-th MCTS. Similarly, the syntax element bottom_right_tile_addr[ i ] may indicate the tile_group_address value, which is the position of the tile located at the bottom-right of the i-th MCTS.
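Given the top-left and bottom-right tile addresses of an MCTS, the member tiles can be enumerated from the tile grid. A sketch, assuming tile addresses are raster-scan indices over the grid (e.g., the 6x4 grid of [186]); this addressing convention is an assumption for illustration:

```python
def tiles_in_mcts(top_left_addr, bottom_right_addr, num_tile_columns):
    """List the raster-scan tile addresses inside the rectangle spanned by
    the top-left and bottom-right tiles of an MCTS."""
    top, left = divmod(top_left_addr, num_tile_columns)
    bottom, right = divmod(bottom_right_addr, num_tile_columns)
    return [
        row * num_tile_columns + col
        for row in range(top, bottom + 1)
        for col in range(left, right + 1)
    ]

# On a 6-column grid, an MCTS spanning tile 8 (row 1, col 2) through
# tile 21 (row 3, col 3):
print(tiles_in_mcts(8, 21, 6))  # [8, 9, 14, 15, 20, 21]
```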
[201] Table 11 below shows an example of the tile group data syntax. In Table 11, the tile group data may be replaced with slice data.
[202] [Table 11]
(Table 11 is provided as an image in the original publication.)
[203] Table 12 below shows an example of English semantics for the tile group data syntax.
[204] [Table 12]
(Table 12 is provided as an image in the original publication.)
[205] Meanwhile, Table 13 below shows an example of the scanning process, which is the order in which the tiles in the picture are decoded.
[206] [Table 13]
(Table 13 is provided as an image in the original publication.)
[207] FIG. 16 is a diagram showing an example of dividing a picture based on an ROI region.
[208] According to the present specification, in tiling that partitions a picture into a plurality of tiles, flexible tiling based on a region of interest (ROI) can be achieved. Referring to FIG. 16, a picture may be divided into a plurality of tile groups based on the ROI region.
[209] Table 14 below shows an example of the PPS syntax.
[210] [Table 14]
(Table 14 is provided as an image in the original publication.)
[211] Table 15 below shows an example of English semantics for the above syntax.
[212] [Table 15]
[213] (Table 15 is provided as an image in the original publication.)
[214] In an embodiment, a syntax element tile_group_info_in_pps_flag indicating whether the tile group information related to the tiles included in a tile group is present in the PPS or in the tile group header referring to the PPS may be parsed. When tile_group_info_in_pps_flag is 1, it may indicate that the tile group information is present in the PPS and is not present in the tile group header referring to the PPS. When tile_group_info_in_pps_flag is 0, it may indicate that the tile group information is not present in the PPS and is present in the tile group header referring to the PPS.
[215] In addition, the syntax element num_tile_groups_in_pic_minus1 may indicate the number of tile groups in the picture referring to the PPS.
[216] In addition, the syntax element pps_first_tile_id[ i ] may indicate the tile ID of the first tile of the i-th tile group, and the syntax element pps_last_tile_id[ i ] may indicate the tile ID of the last tile of the i-th tile group.
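With pps_first_tile_id[ i ] and pps_last_tile_id[ i ], the tile group containing a given tile can be located by a rectangle test against each group. A sketch, assuming tile IDs are raster-scan indices over the tile grid (the specification allows IDs to differ from indices, so this equality is a simplifying assumption):

```python
def tile_group_of(tile_id, pps_first_tile_id, pps_last_tile_id, num_tile_columns):
    """Return the index of the tile group whose rectangle, spanned by its
    first (top-left) and last (bottom-right) tile, contains tile_id."""
    row, col = divmod(tile_id, num_tile_columns)
    for i, (first, last) in enumerate(zip(pps_first_tile_id, pps_last_tile_id)):
        top, left = divmod(first, num_tile_columns)
        bottom, right = divmod(last, num_tile_columns)
        if top <= row <= bottom and left <= col <= right:
            return i
    return None  # not covered by any signaled group

# Two rectangular groups on a 4-column grid: group 0 spans tiles 0..5
# (rows 0-1, cols 0-1), group 1 spans tiles 2..7 (rows 0-1, cols 2-3):
print(tile_group_of(6, [0, 2], [5, 7], 4))  # 1
```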
[217] FIG. 17 is a diagram showing an example of partitioning a picture into a plurality of tiles.
[218] According to the present specification, in tiling that divides a picture into a plurality of tiles, flexible tiling can be achieved by considering tiles smaller than the size of the coding tree unit (CTU). A tiling structure according to this method can be usefully applied to recent video applications such as video conferencing programs.
[219] Referring to FIG. 17, a picture may be partitioned into a plurality of tiles, and the size of at least one of the plurality of tiles may be smaller than the size of the coding tree unit (CTU). For example, the picture may be partitioned into Tile 1, Tile 2, Tile 3, and Tile 4, among which the sizes of Tile 1, Tile 2, and Tile 4 are smaller than the size of the CTU.
[220] Table 16 below shows an example of the PPS syntax.
[221] [Table 16]
(Table 16 is provided as an image in the original publication.)
[222] Table 17 below shows an example of English semantics for the PPS syntax.
[223] [Table 17]
[224] In an embodiment, the syntax element tile_size_unit_idc may indicate the unit size of a tile. For example, when tile_size_unit_idc is 0, 1, 2, ..., the height and width of a tile may be defined as 4, 8, 16, ... in units based on the coding tree block (CTB).
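One reading of the 0, 1, 2, ... → 4, 8, 16, ... progression in [224] is a power-of-two scaling of the unit size. The mapping formula below is an interpretation of that progression, not quoted from the table (which is an image in the original publication):

```python
def tile_size_unit(tile_size_unit_idc):
    """Map tile_size_unit_idc = 0, 1, 2, ... to a unit size of 4, 8, 16, ...
    (interpreted as unit = 4 << idc)."""
    return 4 << tile_size_unit_idc

print([tile_size_unit(idc) for idc in range(4)])  # [4, 8, 16, 32]
```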
[225] FIG. 18 is a diagram showing an example of partitioning a picture into a plurality of tiles and tile groups.
[226] According to the present specification, a plurality of tiles in a picture may be grouped into a plurality of tile groups, and flexible tiling can be achieved by applying a tile group index to the plurality of tile groups.
[227] Meanwhile, in conventional tiling, tiles arranged in raster scan order are grouped into a plurality of tile groups. According to the present specification, however, in order to achieve flexible tiling, at least one of the plurality of tile groups may include tiles arranged in a non-raster scan order.
[228] For example, referring to FIG. 18, a picture may be partitioned into a plurality of tiles, and the plurality of tiles may be grouped into Tile Group 1, Tile Group 2, and Tile Group 3. In this case, each of Tile Group 1, Tile Group 2, and Tile Group 3 may include tiles arranged in a non-raster scan order.
[229] Table 18 below shows an example of the syntax of the tile group header (tile_group_header). In Table 18, the tile group header may be replaced with a slice header.
[230] [Table 18]
(Table 18 is provided as an image in the original publication.)
[231] Table 19 below shows an example of English semantics for the syntax of the tile group header.
[232] [Table 19]
(Table 19 is provided as an image in the original publication.)
[233] In an embodiment, a syntax element tile_group_index that designates the index of each of a plurality of tile groups in a picture may be signaled/parsed. In this case, the value of tile_group_index is not the same as the value of tile_group_index of any other tile group NAL unit in the same picture.
[234] Table 20 below shows another example of the syntax of the tile group header. In Table 20, the tile group header may be replaced with a slice header.
[235] [Table 20]
(Table 20 is provided as an image in the original publication.)
[236] Table 21 below shows an example of English semantics for the syntax of the tile group header.

[237] [Table 21]
[238] When single_tile_per_tile_group_flag is equal to 1, the value of single_tile_in_tile_group_flag is inferred to be equal to 1.

first_tile_id specifies the tile ID of the first tile of the tile group. The length of first_tile_id is Ceil( Log2( NumTilesInPic ) ) bits. The value of first_tile_id of a tile group shall not be equal to the value of first_tile_id of any other tile group of the same picture. When not present, the value of first_tile_id is inferred to be equal to the tile ID of the first tile of the current picture.

last_tile_id specifies the tile ID of the last tile of the tile group. The length of last_tile_id is Ceil( Log2( NumTilesInPic ) ) bits. When NumTilesInPic is equal to 1 or single_tile_in_tile_group_flag is equal to 1, the value of last_tile_id is inferred to be equal to first_tile_id. When tile_group_info_in_pps_flag is equal to 1, the value of last_tile_id is inferred to be equal to the value of

[239] pps_last_tile_id[ i ], where i is the value such that first_tile_id is equal to pps_first_tile_id[ i ].

NOTE - The first_tile_id is the tile ID of the tile located at the top-left corner of the tile group, and the last_tile_id is the tile ID of the tile located at the bottom-right corner of the tile group.

The variable NumTilesInTileGroup, which specifies the number of tiles in the tile group, and TgTileIdx[ i ], which specifies the tile index of the i-th tile in the tile group, are derived as follows:

deltaTileIdx = last_tile_idx - first_tile_idx
numTileRows = ( deltaTileIdx / ( num_tile_columns_minus1 + 1 ) ) + 1
numTileColumns = ( deltaTileIdx % ( num_tile_columns_minus1 + 1 ) ) + 1
NumTilesInTileGroup = numTileRows * numTileColumns
tileIdx = first_tile_id

[240] (The remainder of the semantics is provided as an image in the original publication.)
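The NumTilesInTileGroup derivation above can be written out directly: the group is the rectangle of tiles spanned by its first and last tile index on a grid with num_tile_columns_minus1 + 1 columns. A minimal sketch of that derivation:

```python
def tile_group_size(first_tile_idx, last_tile_idx, num_tile_columns_minus1):
    """Derive NumTilesInTileGroup from the first/last tile indices,
    following the integer division/modulo derivation in the semantics."""
    num_cols_in_pic = num_tile_columns_minus1 + 1
    delta_tile_idx = last_tile_idx - first_tile_idx
    num_tile_rows = delta_tile_idx // num_cols_in_pic + 1
    num_tile_columns = delta_tile_idx % num_cols_in_pic + 1
    return num_tile_rows * num_tile_columns

# A group spanning tiles 8..21 on a 6-column grid (num_tile_columns_minus1 = 5):
print(tile_group_size(8, 21, 5))  # 6 (3 rows x 2 columns)
```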
[241] In an embodiment, for each of the plurality of tile groups in a picture, a syntax element first_tile_id that designates the tile ID of the first tile may be signaled/parsed. first_tile_id may correspond to the tile ID of the tile located at the top-left of the tile group. In this case, the tile ID of the first tile of a tile group is not the same as the tile ID of the first tile of any other tile group in the same picture.
[242] In an embodiment, for each of the plurality of tile groups in a picture, a syntax element last_tile_id that designates the tile ID of the last tile may be signaled/parsed. last_tile_id may correspond to the tile ID of the tile located at the bottom-right of the tile group. When NumTilesInPic is 1 or single_tile_in_tile_group_flag is 1, the value of last_tile_id may be equal to first_tile_id. In addition, when tile_group_info_in_pps_flag is 1, the value of last_tile_id may be equal to the value of pps_last_tile_id[ i ].
[243] FIG. 19 is a diagram showing an example of partitioning a picture into a plurality of tiles and tile groups.
[244] According to the present specification, tiles can be secondarily grouped within a tile group of a picture. Accordingly, since the size of the tiles can be controlled more effectively, flexible tiling can be achieved.
[245] For example, referring to FIG. 19, a picture may first be partitioned into three tile groups, and Tile group #2, which corresponds to the second tile group, may be additionally partitioned into secondary tile groups.
[246] Table 22 below shows an example of the PPS syntax.
[247] [Table 22]
(Table 22 is provided as an image in the original publication.)
[248] Table 23 below shows an example of English semantics for the PPS syntax.
[249] [Table 23]
(Table 23 is provided as an image in the original publication.)
[250] In an embodiment, a syntax element num_tile_groups_minus1 related to the number of tile groups in a picture may be signaled/parsed. For example, the value of the syntax element num_tile_groups_minus1 plus 1 may indicate the number of tile groups in the picture.
[251] In an embodiment, a syntax element tile_group_start_address[ i ] specifying the position of the first CTB located at the top-left of the i-th tile group in the picture may be signaled/parsed. In addition, a syntax element tile_group_end_address[ i ] specifying the position of the last CTB located at the bottom-right of the i-th tile group in the picture may be signaled/parsed. The values of tile_group_start_address[ i ] and tile_group_end_address[ i ] are not the same as the values of tile_group_start_address[ j ] and tile_group_end_address[ j ] of any other tile group NAL unit in the same picture.
[252] In addition, according to the present specification, the ID of a tile in a picture can be explicitly signaled, and the ID of a tile may be different from the index of the tile. Accordingly, an MCTS can be derived without the need to change the VCL (video coding layer) NAL (network abstraction layer) unit. In addition, there is an advantage that the tile group header does not need to be changed.
[253] Table 24 below shows an example of the PPS syntax.
[254] [Table 24]
(Table 24 is provided as an image in the original publication.)
[255] Table 25 below shows an example of English semantics for the PPS syntax.
[256] [Table 25]
(Table 25 is provided as an image in the original publication.)
[257] In an embodiment, a syntax element explicit_tile_id_flag indicating whether the tile ID of each of the plurality of tiles is explicitly signaled may be signaled/parsed. For example, when explicit_tile_id_flag is 0, it may indicate that the tile ID is not explicitly signaled.
[258] In an embodiment, a syntax element tile_id_val[ i ] designating the tile ID of the i-th tile in the picture referring to the PPS may be signaled/parsed.
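The combined effect of explicit_tile_id_flag and tile_id_val[ i ] can be sketched as building a tile-index-to-tile-ID mapping. The fallback of tile ID = tile index when the flag is 0 is an illustrative assumption (the normative inference rule is in the semantics table, which is an image in the original publication):

```python
def derive_tile_ids(num_tiles_in_pic, explicit_tile_id_flag, tile_id_val=None):
    """Return the tile ID for each tile index in the picture.
    When explicit_tile_id_flag is 1, IDs come from the signaled tile_id_val
    list; otherwise the ID is assumed to default to the tile index."""
    if explicit_tile_id_flag:
        return list(tile_id_val)
    return list(range(num_tiles_in_pic))

print(derive_tile_ids(4, 0))                    # [0, 1, 2, 3]
print(derive_tile_ids(4, 1, [10, 11, 20, 21]))  # [10, 11, 20, 21]
```

Decoupling IDs from indices in this way is what lets an extracted MCTS keep its original tile IDs even after the tile grid changes, so the tile group headers need not be rewritten.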
[259] Meanwhile, the variables in Table 26 below may be derived by invoking the CTB raster and tile scanning conversion process.
[260] [Table 26]
[261] (Table 26 is provided as an image in the original publication.)
[262] Table 27 below shows an example of the syntax of the tile group header. In Table 27, the tile group header may be replaced with a slice header.
[263] [Table 27]
(Table 27 is provided as an image in the original publication.)
[264] Table 28 below shows an example of English semantics for the syntax of the tile group header.
[265] [Table 28]
(Table 28 is provided as an image in the original publication.)
[266] In an embodiment, a syntax element tile_group_address designating the tile ID of the first tile of a tile group in a picture may be signaled/parsed. The value of tile_group_address is not the same as the value of tile_group_address of any other tile group NAL unit in the same picture.
[267] Meanwhile, in a specific system, it may be necessary to identify a tile group. This may be essential at the system level in order to interpret and distinguish which VCL NAL units belong to a specific tile group.
[268] For example, a MANE (Media-Aware Network Element) or a video editor can identify the tile group carried by NAL units, and can remove the corresponding NAL units or derive a sub-bitstream including the NAL units belonging to a target tile group.
[269] To this end, a syntax element nuh_tile_group_id having the same value as the value of tile_group_id may be proposed in the NAL unit header.
[270] A network element or a video editor can easily identify the tile group carried by NAL units by parsing and interpreting only the NAL units. In addition, the network element or video editor can remove the corresponding NAL units, and can accordingly extract a sub-bitstream including the NAL units belonging to the target tile group.
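The sub-bitstream extraction described above amounts to keeping only the VCL NAL units whose nuh_tile_group_id matches the target, while retaining non-VCL units such as parameter sets. A minimal sketch, with NAL units modeled as (is_vcl, nuh_tile_group_id, payload) tuples (the tuple model is an assumption for illustration, not the actual NAL unit header layout):

```python
def extract_sub_bitstream(nal_units, target_tile_group_id):
    """Keep non-VCL NAL units (e.g. parameter sets) and the VCL NAL units
    whose nuh_tile_group_id equals the target tile group ID."""
    return [
        (is_vcl, tg_id, payload)
        for is_vcl, tg_id, payload in nal_units
        if not is_vcl or tg_id == target_tile_group_id
    ]

nal_units = [
    (False, None, "PPS"),
    (True, 0, "tile group 0 data"),
    (True, 1, "tile group 1 data"),
]
print(extract_sub_bitstream(nal_units, 1))
# [(False, None, 'PPS'), (True, 1, 'tile group 1 data')]
```

The point of carrying the ID in the NAL unit header is visible here: the filter never touches the payload, so no VCL data needs to be parsed or rewritten.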
[271] Table 29 below shows an example of the syntax of the NAL unit header.
[272] [Table 29]
(Table 29 is provided as an image in the original publication.)
[273] Table 30 below shows an example of English semantics for the syntax of the NAL unit header.
[274] [Table 30]
(Table 30 is provided as an image in the original publication.)
[275] In an embodiment, a syntax element nuh_tile_group_id designating the tile group ID of the NAL unit may be signaled/parsed. The value of nuh_tile_group_id is the same as the value of tile_group_id of the tile group header.
[276] Table 31 below shows an example of the syntax of the tile group header (tile_group_header). In Table 31, the tile group header may be replaced with a slice header.
[277] [Table 31]
(Table 31 is provided as an image in the original publication.)
[278] Table 32 below shows an example of English semantics for the syntax of the tile group header.
[279] [Table 32]
(Table 32 is provided as an image in the original publication.)
[280] In an embodiment, a syntax element tile_group_id designating the tile group ID of a tile group in a picture may be signaled/parsed. In this case, the value of tile_group_id is not the same as the value of tile_group_id of any other tile group NAL unit in the same picture.
[281] FIG. 20 is a flowchart showing the operation of a decoding apparatus according to an embodiment, and FIG. 21 is a block diagram showing the configuration of the decoding apparatus according to the embodiment.
[282] Each step disclosed in FIG. 20 may be performed by the decoding apparatus 300 disclosed in FIG. 3. More specifically, S2000 and S2010 may be performed by the entropy decoding unit 310 disclosed in FIG. 3, S2020 may be performed by the prediction unit 330 disclosed in FIG. 3, and S2030 may be performed by the adder 340 disclosed in FIG. 3. In addition, the operations according to S2000 to S2030 are based on some of the contents described above with reference to FIGS. 1 to 19. Therefore, a detailed description overlapping with the contents described above with reference to FIGS. 1 to 19 will be omitted or simplified.
[283] As shown in FIG. 21, the decoding apparatus according to an embodiment may include an entropy decoding unit 310, a prediction unit 330, and an adder 340. However, in some cases, not all of the components shown in FIG. 21 may be essential components of the decoding apparatus, and the decoding apparatus may be implemented by more or fewer components than those shown in FIG. 21.
[284] In the decoding apparatus according to an embodiment, the entropy decoding unit 310, the prediction unit 330, and the adder 340 may each be implemented as a separate chip, or at least two or more of the components may be implemented through a single chip.
[285] The decoding apparatus according to an embodiment may obtain, from a bitstream, image information including partition information for a current picture and prediction information for a current block included in the current picture (S2000). More specifically, the entropy decoding unit 310 of the decoding apparatus may obtain, from the bitstream, the image information including the partition information for the current picture and the prediction information for the current block included in the current picture.

[286] The decoding apparatus according to an embodiment may derive a partitioning structure of the current picture based on a plurality of tiles, based on the partition information for the current picture (S2010). More specifically, the entropy decoding unit 310 of the decoding apparatus may derive the partitioning structure of the current picture based on the plurality of tiles, based on the partition information for the current picture. In one example, the plurality of tiles may be grouped into a plurality of tile groups, and at least one of the plurality of tile groups may include tiles arranged in a non-raster scan order.
[287] The decoding apparatus according to an embodiment may derive prediction samples for the current block based on the prediction information for the current block included in one tile among the plurality of tiles (S2020). More specifically, the prediction unit 330 of the decoding apparatus may derive the prediction samples for the current block based on the prediction information for the current block included in one tile among the plurality of tiles.
[288] The decoding apparatus according to an embodiment may reconstruct the current picture based on the prediction samples (S2030). More specifically, the adder 340 of the decoding apparatus may reconstruct the current picture based on the prediction samples.
[289] In an embodiment, the partition information on the current picture may include at least one of index information of each of the plurality of tile groups, ID information of a tile located at the top-left of each of the plurality of tile groups, and ID information of a tile located at the bottom-right of each of the plurality of tile groups.
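For illustration only, the rectangular tile-group derivation implied by this kind of signaling can be sketched as follows. The helper below is a hypothetical sketch, not the normative syntax; it assumes tile IDs are assigned in raster-scan order over a tile grid with a known number of tile columns.

```python
def tiles_in_group(top_left_id, bottom_right_id, num_tile_cols):
    """Return the tile IDs of a rectangular tile group, given the IDs of its
    top-left and bottom-right tiles. Tile IDs are assumed to be assigned in
    raster-scan order over a grid with num_tile_cols columns."""
    top, left = divmod(top_left_id, num_tile_cols)
    bottom, right = divmod(bottom_right_id, num_tile_cols)
    return [row * num_tile_cols + col
            for row in range(top, bottom + 1)
            for col in range(left, right + 1)]

# A 4x3 tile grid (tile IDs 0..11); the group spanning tile 1 (top-left)
# to tile 6 (bottom-right) covers a 2x2 rectangle of tiles.
print(tiles_in_group(1, 6, 4))  # -> [1, 2, 5, 6]
```

Note that the group's tiles (1, 2, 5, 6) are not consecutive in the picture-level raster scan, so processing tile groups one after another visits tiles in a non-raster scan order, as described above.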
[290] In an embodiment, the partition information on the current picture may further include at least one of flag information on whether ID information of each of the plurality of tiles is explicitly signaled and the ID information of each of the plurality of tiles. In addition, at least one of the flag information and the ID information of each of the plurality of tiles may be included in a Picture Parameter Set (PPS) of the image information.
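A hedged sketch of how a decoder could consume such a flag is given below. The reader interface and field names are illustrative assumptions, not the actual PPS syntax: when the flag indicates explicit signaling, one ID is read per tile; otherwise the IDs default to raster-scan order.

```python
def read_tile_ids(reader, num_tiles):
    """Read per-tile IDs from a PPS-like structure.

    `reader` is assumed to expose read_flag() and read_uint(); these are
    illustrative stand-ins for a real bitstream reader."""
    explicit_tile_id_flag = reader.read_flag()
    if explicit_tile_id_flag:
        return [reader.read_uint() for _ in range(num_tiles)]
    # Implicit case: tile IDs follow raster-scan order, 0..num_tiles-1.
    return list(range(num_tiles))

class ListReader:
    """Toy reader over a pre-parsed list of syntax element values."""
    def __init__(self, values):
        self.values = iter(values)
    def read_flag(self):
        return bool(next(self.values))
    def read_uint(self):
        return next(self.values)

print(read_tile_ids(ListReader([1, 7, 3, 5]), 3))  # -> [7, 3, 5]
print(read_tile_ids(ListReader([0]), 3))           # -> [0, 1, 2]
```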
[291] In an embodiment, the partition information on the current picture may further include at least one of information on the number of the plurality of tile groups, position information of a Coding Tree Block (CTB) located at the top-left of each of the plurality of tile groups, and position information of a CTB located at the bottom-right of each of the plurality of tile groups.
[292] In addition, at least one of the information on the number of the plurality of tile groups, the position information of the CTB located at the top-left of each of the plurality of tile groups, and the position information of the CTB located at the bottom-right of each of the plurality of tile groups may be included in the Picture Parameter Set (PPS) of the image information.
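To make the CTB-based positions concrete, the following sketch (hypothetical helper name; CTB addresses are assumed to follow picture raster-scan order over a picture of known width in CTBs) tests whether a CTB lies inside the rectangle spanned by a group's top-left and bottom-right CTBs:

```python
def ctb_in_group(ctb_addr, top_left_ctb, bottom_right_ctb, pic_width_in_ctbs):
    """Check whether a CTB (addressed in picture raster scan) lies inside the
    rectangle spanned by a group's top-left and bottom-right CTBs."""
    y, x = divmod(ctb_addr, pic_width_in_ctbs)
    top, left = divmod(top_left_ctb, pic_width_in_ctbs)
    bottom, right = divmod(bottom_right_ctb, pic_width_in_ctbs)
    return top <= y <= bottom and left <= x <= right

# Picture that is 8 CTBs wide; group rectangle from CTB 10 (x=2, y=1)
# to CTB 28 (x=4, y=3).
print(ctb_in_group(19, 10, 28, 8))  # CTB at (x=3, y=2) -> True
print(ctb_in_group(8, 10, 28, 8))   # CTB at (x=0, y=1) -> False
```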
[293] In an embodiment, the partition information on the current picture may further include ID information of each of the plurality of tile groups. In addition, the ID information of each of the plurality of tile groups may be included in a Network Abstraction Layer (NAL) unit header of the image information.
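One practical consequence of carrying a tile-group ID at the NAL unit header level is that a system component can keep or discard the NAL units of one tile group without parsing their payloads. A minimal sketch, where the dict-based headers are stand-ins for real NAL unit headers (the field name `tile_group_id` is an illustrative assumption):

```python
def extract_tile_group(nal_units, wanted_id):
    """Keep only the NAL units whose (illustrative) header carries the
    wanted tile-group ID; payloads are never inspected."""
    return [nal for nal in nal_units
            if nal["header"].get("tile_group_id") == wanted_id]

stream = [
    {"header": {"tile_group_id": 0}, "payload": b"..."},
    {"header": {"tile_group_id": 1}, "payload": b"..."},
    {"header": {"tile_group_id": 0}, "payload": b"..."},
]
print(len(extract_tile_group(stream, 0)))  # -> 2
```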
[294] According to the present disclosure described above, a picture can be flexibly partitioned into a plurality of tiles and into a plurality of tile groups that group the plurality of tiles. In addition, according to the present disclosure, the efficiency of picture partitioning can be improved based on the partition information on the current picture.
[295] FIG. 22 is a flowchart illustrating an operation of an encoding apparatus according to an embodiment, and FIG. 23 is a block diagram illustrating a configuration of an encoding apparatus according to an embodiment.
[296] The encoding apparatus according to FIGS. 22 and 23 may perform operations corresponding to those of the decoding apparatus according to FIGS. 20 and 21. Accordingly, the operations of the encoding apparatus described below with reference to FIGS. 22 and 23 may likewise be applied to the decoding apparatus according to FIGS. 20 and 21.
[297] Each step disclosed in FIG. 22 may be performed by the encoding apparatus 200 disclosed in FIG. 2. More specifically, S2200 and S2210 may be performed by the image partitioner 210 disclosed in FIG. 2, S2220 and S2230 may be performed by the predictor 220 disclosed in FIG. 2, and S2240 may be performed by the entropy encoder 240 disclosed in FIG. 2. In addition, the operations according to S2200 to S2240 are based on some of the contents described above with reference to FIGS. 1 to 19. Accordingly, detailed descriptions overlapping with the contents described above with reference to FIGS. 1 to 19 will be omitted or simplified.
[298] As shown in FIG. 23, the encoding apparatus according to an embodiment may include the image partitioner 210, the predictor 220, and the entropy encoder 240. However, in some cases, not all of the components shown in FIG. 23 may be essential components of the encoding apparatus, and the encoding apparatus may be implemented with more or fewer components than those shown in FIG. 23.
[299] In the encoding apparatus according to an embodiment, the image partitioner 210, the predictor 220, and the entropy encoder 240 may each be implemented as a separate chip, or at least two or more of the components may be implemented through a single chip.
[300] The encoding apparatus according to an embodiment may partition the current picture into a plurality of tiles (S2200). More specifically, the image partitioner 210 of the encoding apparatus may partition the current picture into the plurality of tiles.
[301] The encoding apparatus according to an embodiment may generate partition information on the current picture based on the plurality of tiles (S2210). More specifically, the image partitioner 210 of the encoding apparatus may generate the partition information on the current picture based on the plurality of tiles. In one example, the plurality of tiles are grouped into a plurality of tile groups, and at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
[302] The encoding apparatus according to an embodiment may derive prediction samples for a current block included in one tile among the plurality of tiles (S2220). More specifically, the predictor 220 of the encoding apparatus may derive the prediction samples for the current block included in the one tile among the plurality of tiles.
[303] The encoding apparatus according to an embodiment may generate prediction information on the current block based on the prediction samples (S2230). More specifically, the predictor 220 of the encoding apparatus may generate the prediction information on the current block based on the prediction samples.
[304] The encoding apparatus according to an embodiment may encode image information including the partition information on the current picture and the prediction information on the current block (S2240). More specifically, the encoding apparatus may encode image information including at least one of the partition information on the current picture and the prediction information on the current block.
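As an encoder-side illustration, the per-group partition information generated in S2210 and encoded in S2240 could be assembled as follows. The record layout is an assumption made for illustration only; the normative layout is given by the parameter-set syntax, and tile IDs are again assumed to be raster-ordered over the tile grid.

```python
def make_partition_info(tile_groups, num_tile_cols):
    """Build per-group partition information of the kind described in this
    disclosure: for each group, its index and the IDs of its top-left and
    bottom-right tiles. `tile_groups` is a list of rectangles given as
    (top_row, left_col, bottom_row, right_col) in tile units."""
    info = []
    for idx, (top, left, bottom, right) in enumerate(tile_groups):
        info.append({
            "group_index": idx,
            "top_left_tile_id": top * num_tile_cols + left,
            "bottom_right_tile_id": bottom * num_tile_cols + right,
        })
    return info

# Two groups on a 4-column tile grid: the left 2x2 block of tiles and the
# 2x2 block to its right, both in the first two tile rows.
print(make_partition_info([(0, 0, 1, 1), (0, 2, 1, 3)], 4))
```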
[305] In an embodiment, the partition information on the current picture may include at least one of index information of each of the plurality of tile groups, ID information of a tile located at the top-left of each of the plurality of tile groups, and ID information of a tile located at the bottom-right of each of the plurality of tile groups.
[306] In an embodiment, the partition information on the current picture may further include at least one of flag information on whether ID information of each of the plurality of tiles is explicitly signaled and the ID information of each of the plurality of tiles. In addition, at least one of the flag information and the ID information of each of the plurality of tiles may be included in a Picture Parameter Set (PPS) of the image information.
[307] In an embodiment, the partition information on the current picture may further include at least one of information on the number of the plurality of tile groups, position information of a Coding Tree Block (CTB) located at the top-left of each of the plurality of tile groups, and position information of a CTB located at the bottom-right of each of the plurality of tile groups.
[308] In addition, at least one of the information on the number of the plurality of tile groups, the position information of the CTB located at the top-left of each of the plurality of tile groups, and the position information of the CTB located at the bottom-right of each of the plurality of tile groups may be included in the Picture Parameter Set (PPS) of the image information.
[309] In an embodiment, the partition information on the current picture may further include ID information of each of the plurality of tile groups. In addition, the ID information of each of the plurality of tile groups may be included in a Network Abstraction Layer (NAL) unit header of the image information.
[310] In the above-described embodiments, the methods are described based on flowcharts as a series of steps or blocks, but the present disclosure is not limited to the order of the steps, and a certain step may occur in a different order from, or simultaneously with, another step as described above. In addition, those skilled in the art will understand that the steps shown in a flowchart are not exclusive, that other steps may be included, or that one or more steps of the flowchart may be deleted without affecting the scope of the present disclosure.
[311] The method according to the present disclosure described above may be implemented in the form of software, and the encoding apparatus and/or the decoding apparatus according to the present disclosure may be included in an apparatus that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
[312] When the embodiments of the present disclosure are implemented as software, the above-described methods may be implemented as modules (processes, functions, and so on) that perform the functions described above. The modules may be stored in a memory and executed by a processor. The memory may be internal or external to the processor, and may be connected to the processor by various well-known means. The processor may include an application-specific integrated circuit (ASIC), other chipsets, logic circuits, and/or data processing devices. The memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory cards, storage media, and/or other storage devices. That is, the embodiments described in the present disclosure may be implemented and performed on a processor, a microprocessor, a controller, or a chip. For example, the functional units shown in each figure may be implemented and performed on a computer, a processor, a microprocessor, a controller, or a chip. In this case, information for implementation (e.g., information on instructions) or algorithms may be stored in a digital storage medium.
[313] In addition, the decoding apparatus and the encoding apparatus to which the present disclosure is applied may be included in a multimedia broadcast transmitting/receiving device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service providing device, an over-the-top (OTT) video device, an Internet streaming service providing device, a three-dimensional (3D) video device, a virtual reality (VR) device, an augmented reality (AR) device, a video telephony device, a transportation terminal (e.g., a vehicle (including autonomous vehicle) terminal, an airplane terminal, a ship terminal, etc.), a medical video device, and the like, and may be used to process video signals or data signals. For example, OTT video devices may include game consoles, Blu-ray players, Internet-connected TVs, home theater systems, smartphones, tablet PCs, digital video recorders (DVRs), and the like.
[314] In addition, the processing method to which the present disclosure is applied may be produced in the form of a program executed by a computer and may be stored in a computer-readable recording medium. Multimedia data having a data structure according to the present disclosure may also be stored in a computer-readable recording medium. The computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data are stored. The computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device. The computer-readable recording medium also includes media implemented in the form of carrier waves (e.g., transmission over the Internet). In addition, a bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted over a wired/wireless communication network.
[315] In addition, an embodiment of the present disclosure may be implemented as a computer program product using program code, and the program code may be executed on a computer according to an embodiment of the present disclosure. The program code may be stored on a computer-readable carrier.
[316] FIG. 24 shows an example of a content streaming system to which the disclosure of this document may be applied.
[317] Referring to FIG. 24, the content streaming system to which the present disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
[318] The encoding server serves to compress content input from multimedia input devices such as smartphones, cameras, and camcorders into digital data, generate a bitstream, and transmit it to the streaming server. As another example, when the multimedia input devices such as smartphones, cameras, and camcorders directly generate a bitstream, the encoding server may be omitted.
[319] The bitstream may be generated by an encoding method or a bitstream generation method to which the present disclosure is applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
[320] The streaming server transmits multimedia data to a user device based on a user request made through the web server, and the web server serves as an intermediary informing the user of the available services. When the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits multimedia data to the user. In this case, the content streaming system may include a separate control server, in which case the control server serves to control commands/responses between the devices in the content streaming system.
[321] The streaming server may receive content from the media storage and/or the encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, in order to provide a seamless streaming service, the streaming server may store the bitstream for a predetermined period of time.
[322] Examples of the user device may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, a smart glass, or a head-mounted display (HMD)), a digital TV, a desktop computer, a digital signage, and the like.
[323] Each server in the content streaming system may be operated as a distributed server, in which case the data received by each server may be processed in a distributed manner.
[324] The claims described in this specification may be combined in various ways. For example, the technical features of the method claims of this specification may be combined and implemented as an apparatus, and the technical features of the apparatus claims of this specification may be combined and implemented as a method. In addition, the technical features of the method claims and the technical features of the apparatus claims of this specification may be combined and implemented as an apparatus, and the technical features of the method claims and the technical features of the apparatus claims of this specification may be combined and implemented as a method.

Claims

WO 2020/175905 PCT/KR2020/002730
[Claim 1] An image decoding method performed by a decoding apparatus, the method comprising:
obtaining, from a bitstream, image information including partition information on a current picture and prediction information on a current block included in the current picture;
deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partition information on the current picture;
deriving prediction samples for the current block based on the prediction information on the current block included in one tile among the plurality of tiles; and
reconstructing the current picture based on the prediction samples,
wherein the plurality of tiles are grouped into a plurality of tile groups, and
wherein at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
[Claim 2] The method of claim 1, wherein the partition information on the current picture includes at least one of index information of each of the plurality of tile groups, ID information of a tile located at the top-left of each of the plurality of tile groups, and ID information of a tile located at the bottom-right of each of the plurality of tile groups.
[Claim 3] The method of claim 1, wherein the partition information on the current picture further includes at least one of flag information on whether ID information of each of the plurality of tiles is explicitly signaled and the ID information of each of the plurality of tiles, and
wherein at least one of the flag information and the ID information of each of the plurality of tiles is included in a Picture Parameter Set (PPS) of the image information.
[Claim 4] The method of claim 1, wherein the partition information on the current picture further includes at least one of information on the number of the plurality of tile groups, position information of a Coding Tree Block (CTB) located at the top-left of each of the plurality of tile groups, and position information of a CTB located at the bottom-right of each of the plurality of tile groups.
[Claim 5] The method of claim 4, wherein at least one of the information on the number of the plurality of tile groups, the position information of the CTB located at the top-left of each of the plurality of tile groups, and the position information of the CTB located at the bottom-right of each of the plurality of tile groups is included in a Picture Parameter Set (PPS) of the image information.
[Claim 6] The method of claim 1, wherein the partition information on the current picture further includes ID information of each of the plurality of tile groups, and
wherein the ID information of each of the plurality of tile groups is included in a Network Abstraction Layer (NAL) unit header of the image information.
[Claim 7] An image encoding method performed by an encoding apparatus, the method comprising:
partitioning a current picture into a plurality of tiles;
generating partition information on the current picture based on the plurality of tiles;
deriving prediction samples for a current block included in one tile among the plurality of tiles;
generating prediction information on the current block based on the prediction samples; and
encoding image information including the partition information on the current picture and the prediction information on the current block,
wherein the plurality of tiles are grouped into a plurality of tile groups, and
wherein at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
[Claim 8] The method of claim 7, wherein the partition information on the current picture includes at least one of index information of each of the plurality of tile groups, ID information of a tile located at the top-left of each of the plurality of tile groups, and ID information of a tile located at the bottom-right of each of the plurality of tile groups.
[Claim 9] The method of claim 7, wherein the partition information on the current picture further includes at least one of flag information on whether ID information of each of the plurality of tiles is explicitly signaled and the ID information of each of the plurality of tiles, and
wherein at least one of the flag information and the ID information of each of the plurality of tiles is included in a Picture Parameter Set (PPS) of the image information.
[Claim 10] The method of claim 7, wherein the partition information on the current picture further includes at least one of information on the number of the plurality of tile groups, position information of a Coding Tree Block (CTB) located at the top-left of each of the plurality of tile groups, and position information of a CTB located at the bottom-right of each of the plurality of tile groups.
[Claim 11] The method of claim 10, wherein at least one of the information on the number of the plurality of tile groups, the position information of the CTB located at the top-left of each of the plurality of tile groups, and the position information of the CTB located at the bottom-right of each of the plurality of tile groups is included in a Picture Parameter Set (PPS) of the image information.
[Claim 12] The method of claim 7, wherein the partition information on the current picture further includes ID information of each of the plurality of tile groups, and
wherein the ID information of each of the plurality of tile groups is included in a Network Abstraction Layer (NAL) unit header of the image information.
[Claim 13] A computer-readable digital storage medium storing encoded image information that causes a decoding apparatus to perform an image decoding method, the image decoding method comprising:
obtaining, from a bitstream, image information including partition information on a current picture and prediction information on a current block included in the current picture;
deriving a partitioning structure of the current picture based on a plurality of tiles, based on the partition information on the current picture;
deriving prediction samples for the current block based on the prediction information on the current block included in one tile among the plurality of tiles; and
reconstructing the current picture based on the prediction samples,
wherein the plurality of tiles are grouped into a plurality of tile groups, and
wherein at least one tile group among the plurality of tile groups includes tiles arranged in a non-raster scan order.
PCT/KR2020/002730 2019-02-26 2020-02-26 Signaled information-based picture partitioning method and apparatus WO2020175905A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962810942P 2019-02-26 2019-02-26
US62/810,942 2019-02-26

Publications (1)

Publication Number Publication Date
WO2020175905A1 true WO2020175905A1 (en) 2020-09-03

Family

ID=72238924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002730 WO2020175905A1 (en) 2019-02-26 2020-02-26 Signaled information-based picture partitioning method and apparatus

Country Status (1)

Country Link
WO (1) WO2020175905A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150140360A * 2013-04-08 2015-12-15 Microsoft Technology Licensing, LLC Motion-constrained tile set for region of interest coding

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
COBAN, MUHAMMED: "AHG12: On signalling of tiles", JVET-M0530, JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 13TH MEETING, 4 January 2019 (2019-01-04), Marrakech, XP030198370, Retrieved from the Internet <URL:http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=5341> *
DESHPANDE, SACHIN: "AHG12: On Tile Information Signalling", JVET-M0416, JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 13TH MEETING, 4 January 2019 (2019-01-04), Marrakech, XP030200653, Retrieved from the Internet <URL:http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=5225> *
HENDRY: "AHG12: On explicit signalling of tile IDs", JVET-M0134-V2, JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 13TH MEETING, 2 January 2019 (2019-01-02), Marrakech, XP030197808, Retrieved from the Internet <URL:http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=4939> *
SYCHEV, MAXIM: "AHG12: On tile configuration signalling", JVET-M0137-V1, JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 13TH MEETING, 2 January 2019 (2019-01-02), Marrakech, XP030197812, Retrieved from the Internet <URL:http://phenix.it-sudparis.eu/jvet/doc_end_user/current_document.php?id=4942> *

Similar Documents

Publication Publication Date Title
US20200260072A1 (en) Image coding method using history-based motion information and apparatus for the same
US20220182681A1 (en) Image or video coding based on sub-picture handling structure
US11575942B2 (en) Syntax design method and apparatus for performing coding by using syntax
US11758172B2 (en) Image encoding/decoding method and device for signaling information related to sub picture and picture header, and method for transmitting bitstream
US11825080B2 (en) Image decoding method and apparatus therefor
US20230038928A1 (en) Picture partitioning-based coding method and device
US11882280B2 (en) Method for decoding image by using block partitioning in image coding system, and device therefor
US20230308674A1 (en) Method and apparatus for encoding/decoding image on basis of cpi sei message, and recording medium having bitstream stored therein
WO2020175908A1 (en) Method and device for partitioning picture on basis of signaled information
US20230144371A1 (en) Image decoding method and apparatus
US20230016307A1 (en) Method for decoding image on basis of image information including ols dpb parameter index, and apparatus therefor
US20220408115A1 (en) Image decoding method and device
KR20230023708A (en) Method and apparatus for processing high-level syntax in image/video coding system
WO2020175905A1 (en) Signaled information-based picture partitioning method and apparatus
WO2020175904A1 (en) Method and apparatus for picture partitioning on basis of signaled information
US11902528B2 (en) Method and device for signaling information related to slice in image/video encoding/decoding system
US11956450B2 (en) Slice and tile configuration for image/video coding
US20240146920A1 (en) Method for decoding image by using block partitioning in image coding system, and device therefor
US20240056591A1 (en) Method for image coding based on signaling of information related to decoder initialization
JP7375198B2 (en) Method and apparatus for signaling picture segmentation information
US20230028326A1 (en) Image coding method based on partial entry point-associated information in video or image coding system
US20230156228A1 (en) Image/video encoding/decoding method and device
KR20220082082A (en) Method and apparatus for signaling image information
KR20220083818A (en) Method and apparatus for signaling slice-related information
KR20220137954A (en) An image coding method based on tile-related information and slice-related information in a video or image coding system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762226

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20762226

Country of ref document: EP

Kind code of ref document: A1