WO2020175893A1 - APS signaling-based video or image coding - Google Patents

APS signaling-based video or image coding

Info

Publication number
WO2020175893A1
WO2020175893A1 (PCT application PCT/KR2020/002702)
Authority
WO
WIPO (PCT)
Prior art keywords
information
alf
reshaping
flag
aps
Prior art date
Application number
PCT/KR2020/002702
Other languages
English (en)
French (fr)
Inventor
Seethal Paluri
Seunghwan Kim
Jie Zhao
Original Assignee
LG Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to KR1020237040136A priority Critical patent/KR20230163584A/ko
Priority to KR1020227014284A priority patent/KR102606330B1/ko
Priority to KR1020217027581A priority patent/KR102393325B1/ko
Priority to AU2020229608A priority patent/AU2020229608B2/en
Publication of WO2020175893A1 publication Critical patent/WO2020175893A1/ko
Priority to US17/400,883 priority patent/US11758141B2/en
Priority to US18/227,134 priority patent/US12069270B2/en
Priority to AU2023282249A priority patent/AU2023282249A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/188Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a video data packet, e.g. a network abstraction layer [NAL] unit
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Definitions

  • This technology relates to APS signaling-based video or image coding.
  • the demand for high-resolution, high-quality video/images is increasing in various fields.
  • the higher the resolution and quality of video/image data, the greater the amount of information or bits to be transmitted compared to existing video/image data.
  • when video data is transmitted using a medium such as a wired/wireless broadband line or stored using an existing storage medium, the transmission cost and the storage cost increase.
  • LMCS: luma mapping with chroma scaling
  • ALF: adaptive loop filtering
  • ALF data and/or LMCS data may be signaled conditionally through header information (picture header or slice header).
  • APS ID information indicating the ID of the referenced APS may be signaled through header information (picture header or slice header).
  • APS ID information referenced for ALF data and APS ID information referenced for LMCS data may be signaled separately.
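The conditional signaling described above can be sketched as follows. This is an illustrative model only, not the actual VVC syntax: the `Header` structure and its field names are hypothetical stand-ins for the enabled flags and APS ID syntax elements carried in a picture or slice header.

```python
# Illustrative sketch (not real VVC syntax): ALF data and LMCS data are
# referenced through separately signaled APS IDs in header information.
class Header:
    def __init__(self, alf_enabled, alf_aps_id=None,
                 lmcs_enabled=False, lmcs_aps_id=None):
        self.alf_enabled = alf_enabled    # hypothetical "ALF enabled" flag
        self.alf_aps_id = alf_aps_id      # ID of the APS carrying ALF data
        self.lmcs_enabled = lmcs_enabled  # hypothetical "LMCS enabled" flag
        self.lmcs_aps_id = lmcs_aps_id    # ID of the APS carrying LMCS data

def resolve_aps(header, aps_table):
    """Resolve the APS payloads a header refers to. ALF and LMCS use
    separately signaled APS IDs, so they are looked up independently."""
    alf_data = aps_table[header.alf_aps_id] if header.alf_enabled else None
    lmcs_data = aps_table[header.lmcs_aps_id] if header.lmcs_enabled else None
    return alf_data, lmcs_data

aps_table = {0: "ALF filter coefficients", 1: "LMCS mapping model"}
hdr = Header(alf_enabled=True, alf_aps_id=0, lmcs_enabled=True, lmcs_aps_id=1)
print(resolve_aps(hdr, aps_table))
# ('ALF filter coefficients', 'LMCS mapping model')
```

When a tool's flag is off, the corresponding APS ID is simply not parsed/used, which is the point of signaling it conditionally.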
  • a video/image decoding method performed by a decoding apparatus is provided.
  • a decoding apparatus for performing video/image decoding is provided.
  • a video/image encoding method performed by an encoding device is provided.
  • an encoding apparatus for performing video/image encoding is provided.
  • a computer-readable digital storage medium in which encoded video/image information generated according to the video/image encoding method disclosed in at least one of the embodiments of this document is stored is provided.
  • a computer-readable digital storage medium storing encoded information or encoded video/image information that causes a decoding device to perform the video/image decoding method disclosed in at least one of the embodiments of this document is provided.
  • ALF/LMCS related information can be efficiently signaled.
  • FIG. 1 schematically shows an example of a video/image coding system to which the embodiments of this document can be applied.
  • FIG. 2 is a diagram of a video/image encoding apparatus to which embodiments of this document can be applied.
  • FIG. 3 is a diagram of a video/image decoding apparatus to which the embodiments of this document can be applied.
  • FIG. 4 exemplarily shows a hierarchical structure for a coded image/video.
  • FIG. 5 is a flowchart schematically showing an example of a coding procedure.
  • FIG. 8 shows another example of the hierarchical structure of coded data.
  • FIG. 9 exemplarily shows a hierarchical structure of an APS according to an embodiment of this document.
  • FIG. 12 is a graph showing exemplary forward mapping.
  • FIGS. 13 and 14 schematically show an example of a video/image encoding method and related components according to the embodiment(s) of this document.
  • FIGS. 15 and 16 schematically show an example of an image/video decoding method and related components according to an embodiment of this document.
  • FIG. 17 shows an example of a content streaming system to which the embodiments disclosed in this document can be applied.
  • each configuration may be implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form a single configuration.
  • One configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is integrated and/or separated are also included in the scope of the disclosure.
  • FIG. 1 schematically shows an example of a video/image coding system to which the embodiments of this document can be applied.
  • a video/image coding system may include a first device (source device) and a second device (receiving device).
  • the source device may transfer encoded video/image information or data to the receiving device in the form of a file or streaming via a digital storage medium or network.
  • the source device may include a video source, an encoding device, and a transmission unit.
  • the receiving device may include a receiver, a decoding device, and a renderer.
  • the encoding device may be referred to as a video/image encoding device, and the decoding device may be referred to as a video/image decoding device.
  • the transmitter may be included in the encoding device.
  • the receiver may be included in the decoding device.
  • the renderer may include a display unit, and the display unit may be composed of separate devices or external components.
  • a video source may acquire a video/image through a process of capturing, synthesizing, or generating the video/image.
  • the video source may include a video/image capture device and/or a video/image generation device. The video/image capture device may include, for example, one or more cameras and a video/image archive containing previously captured videos/images. The video/image generation device may include, for example, a computer, tablet, or smartphone, and may (electronically) generate a video/image. For example, a virtual video/image may be generated through a computer or the like, in which case the video/image capture process may be replaced by a process of generating related data.
  • the encoding device can encode the input video/image.
  • the encoding device can perform a series of procedures such as prediction, transform, and quantization for compression and coding efficiency.
  • the encoded data (encoded video/image information) can be output in the form of a bitstream.
  • the transmission unit can transfer the encoded video/image information or data, output in the form of a bitstream, to the receiver of the receiving device in the form of a file or streaming via a digital storage medium or network.
  • the digital storage medium can include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, etc.
  • the transmission unit may include an element for generating a media file through a predetermined file format and may include an element for transmission through a broadcasting/communication network.
  • the receiving unit may receive/extract the bitstream and transmit it to the decoding device.
  • the decoding device can decode the video/image by performing a series of procedures such as inverse quantization, inverse transform, and prediction corresponding to the operations of the encoding device.
  • the renderer can render the decoded video/image.
  • the rendered video/image can be displayed through the display unit.
  • this document relates to video/image coding.
  • for example, the methods/embodiments disclosed in this document may be applied to a method disclosed in the VVC (versatile video coding) standard.
  • in addition, the methods/embodiments disclosed in this document may be applied to a method disclosed in the EVC (essential video coding) standard, the AV1 (AOMedia Video 1) standard, the AVS2 (2nd generation of audio video coding standard), or a next-generation video/image coding standard (e.g., H.267 or H.268).
  • a picture generally refers to a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding.
  • a tile can contain one or more CTUs (coding tree units); a picture can consist of one or more slices/tiles.
  • a tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture.
  • the tile column is a rectangular region of CTUs having a height equal to the height of the picture, and the width can be specified by syntax elements in the picture parameter set.
  • the tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture.
  • a tile scan can represent a specific sequential ordering of the CTUs partitioning the picture.
  • the CTUs may be ordered consecutively by a CTU raster scan within a tile, and tiles within a picture may be ordered consecutively by a raster scan of the tiles of the picture.
  • a slice includes an integer number of complete tiles, or an integer number of consecutive complete CTU rows within a tile of a picture, that may be exclusively contained in a single NAL unit.
  • a picture can be divided into two or more sub-pictures.
  • a sub-picture can be a rectangular region of one or more slices within a picture.
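The tile geometry described above (column widths and row heights signaled in the picture parameter set, in CTU units) can be sketched as follows. This is a simplified illustration; the variable names are not VVC syntax element names.

```python
# Sketch (simplified): deriving the tile grid from tile column widths and
# tile row heights, both expressed in CTUs, as signaled in the PPS.
def tile_boundaries(col_widths_in_ctus, row_heights_in_ctus):
    """Return the (x, y) CTU offsets where each tile column/row starts."""
    col_starts, x = [], 0
    for w in col_widths_in_ctus:
        col_starts.append(x)
        x += w                    # next column starts after this one's width
    row_starts, y = [], 0
    for h in row_heights_in_ctus:
        row_starts.append(y)
        y += h                    # next row starts after this one's height
    return col_starts, row_starts

# A picture 6 CTUs wide and 4 CTUs tall, split into a 2x2 tile grid:
cols, rows = tile_boundaries([4, 2], [3, 1])
print(cols, rows)  # [0, 4] [0, 3]
```

Note how each tile column spans the full picture height and each tile row spans the full picture width, matching the definitions above.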
  • a pixel or a pel can mean the smallest unit constituting a picture (or image). Also, "sample" can be used as a term corresponding to a pixel. A sample can generally represent a pixel or a pixel value, and may represent only the pixel/pixel value of the luma component, or only the pixel/pixel value of the chroma component.
  • a unit can represent a basic unit of image processing.
  • a unit can contain at least one of a specific area of a picture and information related to that area.
  • a unit can contain one luma block and two chroma (e.g., cb, cr) blocks.
  • a unit may be used interchangeably with terms such as block or area in some cases.
  • the MxN block may include a set (or array) of samples (or sample array) or transform coefficients consisting of M columns and N rows.
  • "A or B" in this document may be interpreted as "A and/or B".
  • "A, B or C" means "only A", "only B", "only C", or "any combination of A, B and C".
  • "at least one of A, B or C" or "at least one of A, B and/or C" can mean "at least one of A, B and C".
  • parentheses used in this document may mean "for example". Specifically, when indicated as "prediction (intra prediction)", "intra prediction" may have been proposed as an example of "prediction". In other words, "prediction" in this document is not limited to "intra prediction", and "intra prediction" may be proposed as an example of "prediction". In addition, even when indicated as "prediction (i.e., intra prediction)", "intra prediction" may have been proposed as an example of "prediction".
  • FIG. 2 is a diagram schematically illustrating a video/image encoding apparatus to which the embodiments of this document can be applied.
  • hereinafter, the encoding device may include a video encoding device and/or an image encoding device.
  • the encoding device 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270.
  • the predictor 220 may include an inter predictor 221 and an intra predictor 222.
  • the residual processing unit 230 includes a transform unit 232, a quantizer 233, an inverse quantizer 234, and an inverse transform unit ( An inverse transformer 235 may be included.
  • the residual processing unit 230 may further include a subtractor 231.
  • the addition unit 250 may include a reconstructor or a reconstructed block generator.
  • the image segmentation unit 210, the prediction unit 220, the residual processing unit 230, the entropy encoding unit 240, the addition unit 250, and the filtering unit 260 described above may be configured by at least one hardware component (e.g., an encoder chipset or processor) according to an embodiment.
  • the memory 270 may include a decoded picture buffer (DPB), and may be configured by a digital storage medium.
  • DPB: decoded picture buffer
  • the hardware component may further include the memory 270 as an internal/external component.
  • the image segmentation unit 210 can divide an input image (or picture, or frame) input to the encoding device 200 into one or more processing units.
  • the processing unit may be referred to as a coding unit (CU). In this case, the coding unit can be divided recursively from a coding tree unit (CTU) or a largest coding unit (LCU) according to the QTBTTT (quad-tree binary-tree ternary-tree) structure.
  • LCU: largest coding unit
  • QTBTTT: quad-tree binary-tree ternary-tree
  • for example, one coding unit may be divided into a plurality of coding units of deeper depth based on a quad tree structure, a binary tree structure, and/or a ternary tree structure. In this case, for example, the quad tree structure may be applied first, and the binary tree structure and/or the ternary tree structure may be applied later. Alternatively, the binary tree structure may be applied first.
  • the coding procedure according to this document may be performed based on the final coding unit, which is no longer divided. In this case, based on the coding efficiency according to the image characteristics, the largest coding unit can be used directly as the final coding unit, or, if necessary, the coding unit can be recursively divided into coding units of deeper depth, so that a coding unit of an optimal size can be used as the final coding unit.
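The recursive QTBTTT partitioning described above can be sketched as follows. The split-decision function here is a made-up stand-in for the encoder's rate-distortion search, and only a subset of split types is shown.

```python
# Rough sketch of recursive QTBTTT partitioning: each block is either kept
# as a final coding unit or split by a quad, binary, or ternary split.
def partition(x, y, w, h, decide_split):
    """Return the list of final CU rectangles (x, y, w, h)."""
    mode = decide_split(x, y, w, h)
    if mode == "none":                       # no further split: final CU
        return [(x, y, w, h)]
    if mode == "quad":                       # four equal sub-blocks
        hw, hh = w // 2, h // 2
        subs = [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    elif mode == "binary_h":                 # two halves, horizontal split
        subs = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:                                    # ternary_h: 1/4, 1/2, 1/4
        q = h // 4
        subs = [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
    out = []
    for sx, sy, sw, sh in subs:              # recurse into each sub-block
        out += partition(sx, sy, sw, sh, decide_split)
    return out

# Toy policy: quad-split the 64x64 root once, keep everything else.
cus = partition(0, 0, 64, 64, lambda x, y, w, h: "quad" if w == 64 else "none")
print(len(cus))  # 4
```

In a real encoder the decision function would compare the rate-distortion cost of each allowed split against keeping the block whole.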
  • the processing unit may further include a prediction unit (PU: Prediction Unit) or a transform unit (TU: Transform Unit).
  • PU: prediction unit
  • TU: transform unit
  • in this case, the prediction unit and the transform unit may each be split or partitioned from the final coding unit described above.
  • the prediction unit may be a unit of sample prediction
  • the transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
  • an MxN block can represent a set of samples or transform coefficients consisting of M columns and N rows.
  • a sample can typically represent a pixel or a pixel value, and may represent only the pixel/pixel value of the luma component, or only the pixel/pixel value of the chroma component.
  • a sample can be used as a term corresponding to a pixel or pel of one picture (or image).
  • the encoding device 200 can generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 221 or the intra prediction unit 222 from the input video signal (original block, original sample array), and the generated residual signal is transmitted to the transform unit 232.
  • in this case, as shown, the unit that subtracts the prediction signal (prediction block, prediction sample array) from the input video signal (original block, original sample array) in the encoder 200 may be referred to as a subtraction unit 231.
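The residual generation performed by the subtraction unit can be sketched as a sample-wise difference between the original block and the predicted block:

```python
# Minimal sketch of residual generation at the encoder: the predicted block
# is subtracted, sample by sample, from the original block.
def residual_block(original, predicted):
    return [[o - p for o, p in zip(orow, prow)]
            for orow, prow in zip(original, predicted)]

orig = [[120, 121], [119, 122]]   # original samples
pred = [[118, 120], [119, 121]]   # prediction samples
res = residual_block(orig, pred)
print(res)  # [[2, 1], [0, 1]]
```

The better the prediction, the closer the residual is to zero, which is what makes the subsequent transform and quantization effective.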
  • the prediction unit performs prediction on a block to be processed (hereinafter, referred to as a current block), and generates a predicted block including prediction samples for the current block.
  • the prediction unit can determine whether intra prediction or inter prediction is applied on a current block or CU basis.
  • the prediction unit may generate various types of information related to prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit it to the entropy encoding unit 240.
  • the information on prediction may be encoded in the entropy encoding unit 240 and output in the form of a bitstream.
  • the intra prediction unit 222 may predict the current block by referring to samples in the current picture.
  • the referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode.
  • the prediction modes can include a plurality of non-directional modes and a plurality of directional modes.
  • the non-directional modes may include, for example, a DC mode and a planar mode.
  • the directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes. However, this is an example, and more or fewer directional prediction modes may be used depending on the setting.
  • the intra prediction unit 222 may determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
  • the inter prediction unit 221 can derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
  • in this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, motion information can be predicted in units of blocks, sub-blocks, or samples based on the correlation of motion information between the neighboring block and the current block.
  • the motion information may include a motion vector and a reference picture index.
  • the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • in the case of inter prediction, the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
  • the temporal neighboring block may be called a collocated reference block, a co-located CU (colCU), etc., and the reference picture including the temporal neighboring block may be called a collocated picture (colPic).
  • colPic: collocated picture
  • for example, the inter prediction unit 221 may construct a motion information candidate list based on neighboring blocks, and generate information indicating which candidate is used to derive the motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of skip mode and merge mode, the inter prediction unit 221 may use the motion information of the neighboring block as the motion information of the current block. In the case of skip mode, unlike merge mode, the residual signal may not be transmitted. In the case of motion vector prediction (MVP) mode, the motion vector of the neighboring block is used as a motion vector predictor, and the motion vector of the current block can be indicated by signaling a motion vector difference.
  • MVP: motion vector prediction
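The MVP mode described above reduces the bits spent on motion information: only the difference between the actual motion vector and a predictor taken from a neighboring block is signaled, and the decoder reconstructs the motion vector by adding them back together.

```python
# Sketch of MVP-mode motion vector reconstruction: mv = mvp + mvd, where
# mvp comes from a neighboring block and mvd is the signaled difference.
def reconstruct_mv(mvp, mvd):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mvp = (12, -4)   # predictor taken from a neighboring block
mvd = (-2, 1)    # signaled motion vector difference
print(reconstruct_mv(mvp, mvd))  # (10, -3)
```

Because neighboring blocks tend to move similarly, the difference is usually small and cheap to entropy-code.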
  • the prediction unit 220 may generate a prediction signal based on various prediction methods to be described later.
  • the prediction unit may apply intra prediction or inter prediction for the prediction of one block, and may also apply intra prediction and inter prediction at the same time. This can be referred to as combined inter and intra prediction (CIIP).
  • in addition, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for the prediction of a block.
  • the IBC prediction mode or the palette mode can be used for video/image coding of content such as games, for example, SCC (screen content coding).
  • IBC basically performs prediction within the current picture, but it can be performed similarly to inter prediction in that it derives a reference block within the current picture. That is, IBC can use at least one of the inter prediction techniques described in this document.
  • the palette mode can be seen as an example of intra coding or intra prediction. When the palette mode is applied, the sample values in the picture can be signaled based on information about the palette table and the palette index.
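The palette idea above can be sketched in a few lines: each sample position carries only a small index into a palette table of representative sample values.

```python
# Sketch of palette-mode reconstruction: samples are signaled as indices
# into a palette table of sample values.
def palette_reconstruct(indices, palette):
    return [[palette[i] for i in row] for row in indices]

palette = [0, 128, 255]        # palette table of representative values
idx = [[0, 2], [1, 1]]         # signaled palette indices per sample
print(palette_reconstruct(idx, palette))  # [[0, 255], [128, 128]]
```

This works well for screen content, where blocks often contain only a handful of distinct colors.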
  • the prediction signal generated through the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
  • the transform unit 232 may generate transform coefficients by applying a transform method to the residual signal.
  • the transform method may include at least one of DCT (discrete cosine transform), DST (discrete sine transform), GBT (graph-based transform), or CNT (conditionally non-linear transform).
  • GBT: graph-based transform
  • CNT: conditionally non-linear transform
  • here, GBT means the transform obtained from a graph when the relationship information between pixels is represented by the graph.
  • CNT means the transform obtained based on a prediction signal generated using all previously reconstructed pixels.
  • the transform process may be applied to square pixel blocks of the same size, or may be applied to non-square blocks of variable size.
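As a concrete illustration of the transform step, here is a separable 2D DCT-II applied to a small residual block. DCT is one of the transform kernels named above; note that real codecs use integer approximations of the DCT rather than this floating-point form.

```python
# Sketch of a 2D DCT-II (orthonormal form), applied row-wise then
# column-wise, as used conceptually in the transform stage.
import math

def dct_1d(v):
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    rows = [dct_1d(r) for r in block]        # transform rows first
    cols = [dct_1d(c) for c in zip(*rows)]   # then columns
    return [list(r) for r in zip(*cols)]

# A flat 2x2 residual block: all energy goes into the DC coefficient.
coeffs = dct_2d([[4, 4], [4, 4]])
print(round(coeffs[0][0]))  # 8
```

A flat block compacts all its energy into the single DC coefficient, which is exactly the energy-compaction property quantization and entropy coding exploit.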
  • the quantization unit 233 quantizes the transform coefficients and transmits them to the entropy encoding unit 240.
  • the entropy encoding unit 240 encodes the quantized signal (information on the quantized transform coefficients) and outputs it as a bitstream.
  • the information on the quantized transformation coefficients may be referred to as residual information.
  • the quantization unit 233 can rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and can also generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
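The coefficient scan mentioned above can be sketched as follows; an up-right diagonal scan order is used here for illustration (other scan orders exist).

```python
# Sketch of a coefficient scan: rearranging a 2D block of quantized
# transform coefficients into a 1D vector along anti-diagonals.
def diagonal_scan(block):
    n = len(block)
    out = []
    for d in range(2 * n - 1):                       # each anti-diagonal
        for y in range(min(d, n - 1), max(-1, d - n), -1):
            out.append(block[y][d - y])              # bottom-up along it
    return out

block = [[9, 3, 0],
         [5, 0, 0],
         [1, 0, 0]]
print(diagonal_scan(block))  # [9, 5, 3, 1, 0, 0, 0, 0, 0]
```

Scanning pushes the significant low-frequency coefficients to the front and groups the trailing zeros together, which entropy coding can represent very compactly.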
  • the entropy encoding unit 240 can perform various encoding methods such as, for example, exponential Golomb, CAVLC (context-adaptive variable length coding), and CABAC (context-adaptive binary arithmetic coding).
  • the entropy encoding unit 240 can encode information necessary for video/image restoration (e.g., values of syntax elements) together with or separately from the quantized transform coefficients. Encoded information (e.g., encoded image/video information) can be transmitted or stored in the form of a bitstream in units of NAL (network abstraction layer) units. The video/image information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). In addition, the image/video information may further include general constraint information.
  • APS: adaptation parameter set
  • PPS: picture parameter set
  • SPS: sequence parameter set
  • VPS: video parameter set
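Of the entropy coding tools named above, 0th-order exponential-Golomb coding is simple enough to sketch directly: a non-negative integer n is coded as a unary prefix of leading zeros followed by the binary representation of n + 1.

```python
# Sketch of 0th-order exponential-Golomb coding (the ue(v) descriptor
# used for many syntax elements): prefix zeros + binary of (n + 1).
def exp_golomb_encode(n):
    code = bin(n + 1)[2:]                    # binary of n+1, no '0b' prefix
    return "0" * (len(code) - 1) + code      # as many leading zeros as bits-1

for n in range(5):
    print(n, exp_golomb_encode(n))
# 0 '1', 1 '010', 2 '011', 3 '00100', 4 '00101'
```

Small values get short codes, which suits syntax elements whose values are usually near zero.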
  • in this document, information and/or syntax elements transmitted/signaled from the encoding device to the decoding device may be included in the image/video information.
  • the image/video information may be encoded through the above-described encoding procedure and included in the bitstream.
  • the bitstream may be transmitted through a network or may be stored in a digital storage medium.
  • the network may include a broadcasting network and/or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
  • a transmission unit (not shown) for transmitting the signal output from the entropy encoding unit 240 and/or a storage unit (not shown) for storing it may be configured as internal/external elements of the encoding device 200, or the transmission unit may be included in the entropy encoding unit 240.
  • the quantized transform coefficients output from the quantization unit 233 can be used to generate a prediction signal.
  • for example, the residual signal (residual block or residual samples) can be restored by applying inverse quantization and inverse transform to the quantized transform coefficients through the inverse quantization unit 234 and the inverse transform unit 235.
  • the addition unit 250 can generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the restored residual signal to the prediction signal output from the inter prediction unit 221 or the intra prediction unit 222.
  • when there is no residual for the block to be processed, such as when the skip mode is applied, the predicted block can be used as a reconstructed block.
  • the addition unit 250 may be referred to as a restoration unit or a restoration block generation unit.
  • the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
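The reconstruction step at the adder can be sketched as the inverse of residual generation: predicted samples plus the restored residual, with a skip-mode shortcut when no residual exists. The clipping range is an illustrative assumption (8-bit samples).

```python
# Sketch of reconstruction at the adder: prediction + restored residual,
# clipped to the sample range; skip mode uses the prediction directly.
def reconstruct(pred, residual=None, max_val=255):
    if residual is None:                     # e.g. skip mode: no residual
        return [row[:] for row in pred]
    return [[min(max_val, max(0, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

pred = [[118, 120], [119, 121]]
res = [[2, 1], [0, 1]]
print(reconstruct(pred, res))   # [[120, 121], [119, 122]]
print(reconstruct(pred))        # [[118, 120], [119, 121]]
```

The reconstructed block then serves as reference data for intra prediction of following blocks and, after in-loop filtering, for inter prediction of following pictures.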
  • LMCS: luma mapping with chroma scaling
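The forward luma mapping at the heart of LMCS (cf. FIG. 12) can be sketched as a piecewise-linear curve. The pivot points below are invented for illustration; the real mapping model is signaled through LMCS data carried in an APS.

```python
# Simplified sketch of LMCS-style forward luma mapping: a piecewise-linear
# curve maps an input luma sample to a reshaped luma sample.
def forward_map(y, pivots):
    """pivots: list of (input_value, mapped_value) pairs, ascending."""
    for (x0, m0), (x1, m1) in zip(pivots, pivots[1:]):
        if x0 <= y <= x1:                                  # find the segment
            return m0 + (m1 - m0) * (y - x0) // (x1 - x0)  # linear interp.
    raise ValueError("luma sample outside the mapped range")

pivots = [(0, 0), (128, 96), (255, 255)]   # toy model: dark range compressed
print(forward_map(64, pivots))   # 48
print(forward_map(128, pivots))  # 96
```

Reshaping redistributes luma codewords toward the ranges that matter most for the content, improving coding efficiency before the signal is coded.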
  • the filtering unit 260 applies filtering to the restored signal to improve subjective/objective image quality.
  • the filtering unit 260 may apply various filtering methods to the restored picture to generate a modified restored picture, and store the modified restored picture in the memory 270, specifically in the DPB of the memory 270.
  • the various filtering methods include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
  • the filtering unit 260 may generate a variety of filtering information and transmit it to the entropy encoding unit 240, as described later in the description of each filtering method.
  • the filtering information may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
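As an illustration of one such in-loop filtering step, here is an ALF-like symmetric FIR filter applied to a reconstructed sample over its cross-shaped neighborhood. The integer coefficients and normalization shift are invented for this sketch; real ALF coefficients would be derived by the encoder and carried as ALF data in an APS.

```python
# Illustrative ALF-like step: a small symmetric integer filter over a
# cross-shaped neighborhood, normalized by a right shift (coeffs sum to 8).
def filter_sample(img, x, y, c_center=4, c_side=1):
    acc = (c_center * img[y][x]
           + c_side * (img[y][x - 1] + img[y][x + 1]
                       + img[y - 1][x] + img[y + 1][x]))
    return (acc + 4) >> 3          # +4 for rounding, >>3 divides by 8

img = [[10, 10, 10],
       [10, 18, 10],
       [10, 10, 10]]
print(filter_sample(img, 1, 1))  # 14: the outlier is pulled toward its neighbors
```

The effect is to suppress coding artifacts in the reconstructed picture before it is stored as a reference, improving both subjective quality and the prediction of later pictures.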
  • the modified restored picture transmitted to the memory 270 can be used as a reference picture in the inter prediction unit 221.
  • when inter prediction is applied through this, the encoding device can avoid a prediction mismatch between the encoding device 200 and the decoding device, and can also improve coding efficiency.
  • the DPB may store the modified restored picture for use as a reference picture in the inter prediction unit 221.
  • the memory 270 can store the motion information of a block from which motion information in the current picture is derived (or encoded) and/or the motion information of blocks in an already restored picture.
  • the stored motion information can be transmitted to the inter prediction unit 221 for use as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
  • the memory 270 may store reconstructed samples of the restored blocks in the current picture, and may transmit them to the intra prediction unit 222.
  • FIG. 3 is a diagram of a video/image decoding apparatus to which embodiments of this document can be applied.
• the decoding apparatus may include an image decoding apparatus and/or a video decoding apparatus.
• the decoding apparatus 300 may be configured with an entropy decoder (entropy decoding unit, 310), a residual processor (residual processing unit, 320), a predictor (prediction unit, 330), an adder (addition unit, 340), a filter (filtering unit, 350), and a memory 360.
  • the prediction unit 330 may include an inter prediction unit 331 and an intra prediction unit 332.
• the residual processing unit 320 may include a dequantizer (inverse quantization unit, 321) and an inverse transformer (inverse transform unit, 322).
• the entropy decoding unit 310, the residual processing unit 320, the prediction unit 330, the addition unit 340, and the filtering unit 350 described above may be configured by one hardware component (for example, a decoder chipset or processor) according to an embodiment.
• the memory 360 may include a DPB (decoded picture buffer) and may be configured by a digital storage medium.
• the hardware component may further include the memory 360 as an internal/external component.
• when a bitstream including video/image information is input, the decoding apparatus 300 may reconstruct the image in response to the process in which the video/image information is processed in the encoding apparatus of FIG. 2.
  • the decoding apparatus 300 may reconstruct units/blocks based on the block division-related information obtained from the bitstream.
• the decoding apparatus 300 can perform decoding using a processing unit applied in the encoding apparatus. Therefore, the processing unit of decoding may be, for example, a coding unit, and the coding unit can be divided from a coding tree unit or a maximum coding unit according to a quad tree structure, a binary tree structure, and/or a ternary tree structure.
• one or more transform units can be derived from the coding unit, and the reconstructed image signal decoded and output through the decoding apparatus 300 can be reproduced through a playback device.
• the decoding apparatus 300 may receive the signal output from the encoding apparatus of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoding unit 310.
• for example, the entropy decoding unit 310 may parse the bitstream to derive information (ex. video/image information) required for image reconstruction (or picture reconstruction).
• the image/video information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
• in addition, the image/video information may further include general constraint information.
• the decoding apparatus may further decode the picture based on the information on the parameter set and/or the general constraint information.
• the signaled/received information and/or syntax elements described later in this document may be decoded through the decoding procedure and obtained from the bitstream.
• for example, the entropy decoding unit 310 decodes the information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output the values of the syntax elements required for image reconstruction and the quantized values of the transform coefficients for the residual.
• more specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using the syntax element information to be decoded, the decoding information of the neighboring and decoding target blocks, or the symbol/bin information decoded in the previous step, predicts the occurrence probability of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
• at this time, after determining the context model, the CABAC entropy decoding method may update the context model by using the information of the decoded symbol/bin for the context model of the next symbol/bin.
• among the information decoded by the entropy decoding unit 310, information about prediction may be provided to the prediction unit (the inter prediction unit 332 and the intra prediction unit 331), and the residual values on which entropy decoding was performed in the entropy decoding unit 310, that is, the quantized transform coefficients and related parameter information, may be input to the residual processing unit 320.
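• the adaptive context-model update described above can be illustrated with a small sketch. This is a hypothetical stand-in, not the table-driven CABAC state machine of the standard: it simply keeps a running probability estimate for a binary symbol and nudges it toward each decoded bin.

```python
class ContextModel:
    """Toy adaptive context model: tracks the probability that the next
    bin is 1. Illustrative only; real CABAC uses table-based state updates."""

    def __init__(self, p_one=0.5, window=16):
        self.p_one = p_one          # estimated probability of bin == 1
        self.alpha = 1.0 / window   # adaptation rate (smaller = slower)

    def update(self, bin_value):
        # Move the estimate toward the observed bin (exponential moving
        # average), mirroring how CABAC adapts its state after each bin.
        self.p_one += self.alpha * (bin_value - self.p_one)

ctx = ContextModel()
for b in [1, 1, 1, 1, 0, 1, 1, 1]:   # a mostly-1 bin stream
    ctx.update(b)
print(round(ctx.p_one, 3))
```

• after observing mostly-1 bins, the estimate drifts above 0.5, which is the behavior the context update is meant to achieve: frequent symbols become cheap to code.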
  • the residual processing unit 320 may derive a residual signal (residual block, residual samples, and residual sample array).
• in addition, information about filtering among the information decoded by the entropy decoding unit 310 may be provided to the filtering unit 350.
• meanwhile, a receiving unit (not shown) that receives the signal output from the encoding apparatus may be further configured as an internal/external element of the decoding apparatus 300, or the receiving unit may be a component of the entropy decoding unit 310.
• the decoding apparatus according to this document can be called a video/image/picture decoding apparatus, and the decoding apparatus can be divided into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder).
• the information decoder may include the entropy decoding unit 310, and the sample decoder may include at least one of the inverse quantization unit 321, the inverse transform unit 322, the addition unit 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
  • the inverse quantization unit 321 may inverse quantize the quantized transformation coefficients and output the transformation coefficients.
• the inverse quantization unit 321 may rearrange the quantized transform coefficients into a two-dimensional block form. In this case, the reordering may be performed based on the coefficient scan order performed in the encoding apparatus.
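• the decoder-side reordering of a one-dimensional coefficient list into a two-dimensional block can be sketched as follows. The anti-diagonal scan below illustrates the general idea of a coefficient scan order; the exact order used by a real codec may differ.

```python
def diagonal_scan_order(w, h):
    """Return (x, y) positions of a w x h block visited anti-diagonal by
    anti-diagonal (all positions with x + y == d, for d = 0, 1, ...)."""
    order = []
    for d in range(w + h - 1):
        for y in range(d, -1, -1):
            x = d - y
            if x < w and y < h:
                order.append((x, y))
    return order

def coeffs_to_block(coeffs, w, h):
    """Place a 1-D coefficient list back into a 2-D block, i.e. the
    decoder-side rearrangement described above."""
    block = [[0] * w for _ in range(h)]
    for (x, y), c in zip(diagonal_scan_order(w, h), coeffs):
        block[y][x] = c
    return block

block = coeffs_to_block(list(range(16)), 4, 4)
print(block[0])  # first row of the rebuilt 4x4 block
```

• the same scan-order function, used in reverse, would serialize a 2-D block into the 1-D order the encoder writes, which is why both sides must agree on the scan.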
• the inverse quantization unit 321 may perform inverse quantization on the quantized transform coefficients using a quantization parameter (for example, quantization step size information) and obtain transform coefficients.
• the inverse transform unit 322 inversely transforms the transform coefficients to obtain a residual signal (residual block, residual sample array).
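• the role of the quantization parameter in inverse quantization can be sketched as below. The step-size relation (step doubling every 6 QP) follows the general HEVC/VVC-style design; it is illustrative and not bit-exact.

```python
def quant_step(qp):
    # Approximate HEVC/VVC-style relation: the quantization step size
    # doubles every 6 QP. Illustrative only, not the normative tables.
    return 2 ** ((qp - 4) / 6)

def dequantize(levels, qp):
    """Scale quantized levels back to transform coefficients
    (the inverse quantization step described above)."""
    step = quant_step(qp)
    return [round(l * step) for l in levels]

print(dequantize([3, -1, 0, 2], 22))
```

• the resulting transform coefficients would then go through the inverse transform to yield the residual samples.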
• the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
• the prediction unit can determine whether intra prediction or inter prediction is applied to the current block based on the information about prediction output from the entropy decoding unit 310, and can determine a specific intra/inter prediction mode.
• the prediction unit 330 may generate a prediction signal based on various prediction methods to be described later.
• for example, the prediction unit may apply intra prediction or inter prediction for the prediction of one block, and may also apply intra prediction and inter prediction at the same time. This can be called combined inter and intra prediction (CIIP).
• in addition, the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode for the prediction of a block.
• the IBC prediction mode or the palette mode can be used for content image/video coding such as games, for example, SCC (screen content coding).
• IBC basically performs prediction within the current picture, but can be performed similarly to inter prediction in that a reference block is derived within the current picture.
• the palette mode can be viewed as an example of intra coding or intra prediction.
  • information about the palette table and palette index may be included in the image/video information and signaled.
  • the intra prediction unit 331 may predict the current block by referring to samples in the current picture.
• the referenced samples may be located in the neighborhood of the current block or may be located apart from it, depending on the prediction mode.
  • the prediction modes may include a plurality of non-directional modes and a plurality of directional modes in intra prediction.
• the intra prediction unit 331 may also determine the prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
• the inter prediction unit 332 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture.
• at this time, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information can be predicted in units of blocks, sub-blocks, or samples based on the correlation of the motion information between the neighboring block and the current block.
  • the motion information may include a motion vector and a reference picture index.
• the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
  • the peripheral block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
• for example, the inter prediction unit 332 may construct a motion information candidate list based on the neighboring blocks, and derive the motion vector and/or the reference picture index of the current block based on the received candidate selection information. Inter prediction may be performed based on various prediction modes, and the information about the prediction may include information indicating the inter prediction mode for the current block.
• the addition unit 340 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331). When there is no residual for the processing target block, as in the case where the skip mode is applied, the predicted block can be used as a reconstructed block.
  • the addition unit 340 may be referred to as a restoration unit or a restoration block generation unit.
• the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, may be output through filtering as described later, or may be used for inter prediction of the next picture.
• meanwhile, luma mapping with chroma scaling (LMCS) may also be applied in the picture decoding process.
• the filtering unit 350 can improve subjective/objective image quality by applying filtering to the reconstructed signal.
• for example, the filtering unit 350 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and may transmit the modified reconstructed picture to the memory 360, specifically the DPB of the memory 360.
• the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, and bilateral filter.
  • the (modified) reconstructed picture stored in the DPB of the memory 360 may be used as a reference picture in the inter prediction unit 332.
• the memory 360 may store the motion information of the block from which the motion information in the current picture is derived (or decoded) and/or the motion information of the blocks in an already reconstructed picture.
• the stored motion information may be transmitted to the inter prediction unit 332 so as to be used as the motion information of a spatial neighboring block or the motion information of a temporal neighboring block.
  • the memory 360 can store reconstructed samples of the reconstructed blocks in the current picture, and can transfer them to the intra prediction unit 331.
• as described above, prediction is performed to increase compression efficiency in performing video coding.
• through this, a predicted block including prediction samples for the current block, which is a block to be coded, can be generated. Here, the predicted block includes prediction samples in the spatial domain (or pixel domain).
• the predicted block is derived identically in the encoding apparatus and the decoding apparatus, and the encoding apparatus can increase image coding efficiency by signaling to the decoding apparatus not the original sample value of the original block itself, but information about the residual between the original block and the predicted block (residual information).
• the decoding apparatus may derive a residual block including residual samples based on the residual information, generate a reconstructed block including reconstructed samples by summing the residual block and the predicted block, and generate a reconstructed picture including the reconstructed blocks.
  • the residual information may be generated through transformation and quantization procedures.
• the encoding apparatus may derive a residual block between the original block and the predicted block, derive transform coefficients by performing a transform procedure on the residual samples (residual sample array) included in the residual block, derive quantized transform coefficients by performing a quantization procedure on the transform coefficients, and signal the related residual information to the decoding apparatus (via a bitstream).
• the residual information may include information such as value information of the quantized transform coefficients, position information, a transform technique, a transform kernel, and a quantization parameter.
• the decoding apparatus may perform an inverse quantization/inverse transform procedure based on the residual information and derive residual samples (or a residual block).
  • the decoding device can generate a reconstructed picture based on the predicted block and the residual block.
• the encoding apparatus may also inverse quantize/inverse transform the quantized transform coefficients, for reference for inter prediction of a subsequent picture, to derive a residual block, and generate a reconstructed picture based thereon.
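• the residual round trip described above (encoder: residual then quantization; decoder: dequantization then addition to the prediction) can be sketched minimally. For simplicity the transform is omitted (treated as identity) and a uniform quantization step is assumed, so only the prediction/residual/quantization relationship is shown.

```python
def encode_residual(orig, pred, qstep):
    # Encoder side: residual = original - prediction, then quantize.
    return [round((o - p) / qstep) for o, p in zip(orig, pred)]

def reconstruct(pred, levels, qstep):
    # Decoder side (and the encoder's own reconstruction loop):
    # dequantize the levels and add them back to the prediction.
    return [p + l * qstep for p, l in zip(pred, levels)]

orig = [100, 104, 98, 101]
pred = [101, 101, 101, 101]
levels = encode_residual(orig, pred, qstep=2)
recon = reconstruct(pred, levels, qstep=2)
print(levels, recon)
```

• the reconstruction differs from the original by at most the quantization step, which is the compression loss the residual information trades for fewer bits.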
  • Intra prediction may represent a prediction of generating prediction samples for the current block based on reference samples within a picture to which the current block belongs (hereinafter, the current picture).
• when intra prediction is applied to the current block, neighboring reference samples to be used for the intra prediction of the current block can be derived.
• the neighboring reference samples of the current block may include samples adjacent to the left boundary of the current block of size nWxnH and a total of 2xnH samples neighboring the bottom-left, samples adjacent to the top boundary of the current block and a total of 2xnW samples neighboring the top-right, and one sample neighboring the top-left of the current block.
• alternatively, the neighboring reference samples of the current block may include a plurality of columns of upper neighboring samples and a plurality of rows of left neighboring samples.
• in addition, the neighboring reference samples may include a total of nH samples adjacent to the right boundary of the current block of size nWxnH, a total of nW samples adjacent to the bottom boundary of the current block, and one sample neighboring the bottom-right of the current block.
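• the basic reference-sample layout described above can be sketched by enumerating sample coordinates. The block position (x0, y0) and the 2xnH/2xnW counts follow the description above; extended-line variants (e.g. MRL) are not covered.

```python
def neighboring_reference_positions(x0, y0, nW, nH):
    """Coordinates of the basic intra reference samples of an nW x nH
    block at (x0, y0): 2*nH left-column samples (left + bottom-left),
    2*nW top-row samples (top + top-right), and the top-left sample."""
    left = [(x0 - 1, y0 + i) for i in range(2 * nH)]
    top = [(x0 + i, y0 - 1) for i in range(2 * nW)]
    top_left = [(x0 - 1, y0 - 1)]
    return left + top + top_left

refs = neighboring_reference_positions(8, 8, 4, 4)
print(len(refs))  # 2*nH + 2*nW + 1 positions
```

• for a 4x4 block this yields 17 candidate positions; samples at unavailable positions would be substituted or interpolated as described next.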
• however, some of the neighboring reference samples of the current block may not yet have been decoded or may not be available. In this case, the decoder may construct the neighboring reference samples to be used for prediction by substituting the unavailable samples with available samples, or may construct the neighboring reference samples to be used for prediction through interpolation of available samples.
• when the neighboring reference samples are derived, (i) a prediction sample can be derived based on the average or interpolation of the neighboring reference samples of the current block, and (ii) a prediction sample can also be derived based on a reference sample existing in a specific (prediction) direction with respect to the prediction sample among the neighboring reference samples of the current block. The case of (i) may be called a non-directional mode or a non-angular mode, and the case of (ii) may be called a directional mode or an angular mode.
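• case (i) above, the average-based non-directional prediction, can be sketched as a simplified DC mode. This is illustrative: a real codec may average only one side depending on the block shape and uses normative rounding.

```python
def dc_predict(top, left, nW, nH):
    """Fill an nW x nH block with the average of the top and left
    reference samples (simplified non-directional DC mode)."""
    refs = top[:nW] + left[:nH]
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # integer rounding
    return [[dc] * nW for _ in range(nH)]

pred = dc_predict(top=[100, 102, 104, 106],
                  left=[98, 98, 100, 100], nW=4, nH=4)
print(pred[0][0])
```

• a directional (angular) mode would instead copy or interpolate along the signaled direction rather than averaging.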
• in addition, the prediction sample may be generated through interpolation between the first neighboring sample located in the prediction direction of the intra prediction mode of the current block with respect to the prediction sample of the current block, and the second neighboring sample located in the opposite direction. The above-described case may be called linear interpolation intra prediction (LIP).
• in addition, chroma prediction samples may be generated based on luma samples using a linear model. This case may be called LM mode.
• in addition, a temporary prediction sample of the current block may be derived based on filtered neighboring reference samples, and the prediction sample of the current block may be derived by weighted-summing the temporary prediction sample and at least one reference sample derived according to the intra prediction mode among the existing neighboring reference samples, that is, the unfiltered neighboring reference samples.
• the above-described case may be referred to as PDPC (Position dependent intra prediction).
• in addition, intra prediction encoding can be performed by selecting the reference sample line with the highest prediction accuracy among the neighboring multi-reference sample lines of the current block, using the reference samples located in the prediction direction on that line to derive the prediction sample, and at this time instructing (signaling) the used reference sample line to the decoding apparatus.
• the above-described case may be called multi-reference line (MRL) intra prediction or MRL-based intra prediction.
• in addition, the current block may be divided into vertical or horizontal sub-partitions to perform intra prediction based on the same intra prediction mode, while the neighboring reference samples are derived and used in units of the sub-partition. That is, in this case, the intra prediction mode for the current block is applied equally to the sub-partitions, but by deriving and using the neighboring reference samples in units of the sub-partition, the intra prediction performance can be increased in some cases.
• the intra prediction type can be referred to by various terms such as intra prediction technique or supplementary intra prediction mode.
• for example, the intra prediction type (or supplementary intra prediction mode, etc.) may include at least one of the above-described LIP, PDPC, MRL, and ISP.
• a general intra prediction method excluding specific intra prediction types such as LIP, PDPC, MRL, and ISP may be called a normal intra prediction type.
• the normal intra prediction type can be applied in general when a specific intra prediction type as described above is not applied, and prediction can be performed based on the above-described intra prediction mode. Meanwhile, post-processing filtering for the derived prediction sample may be performed as necessary.
• specifically, the intra prediction procedure may include an intra prediction mode/type determination step, a neighboring reference sample derivation step, and an intra prediction mode/type-based prediction sample derivation step.
• in addition, a post-filtering step for the derived prediction sample may be performed as necessary.
• when intra prediction is applied, the intra prediction mode to be applied to the current block can be determined using the intra prediction mode of a neighboring block.
• for example, the decoding apparatus may select one of the MPM candidates in the MPM (most probable mode) list derived based on the intra prediction mode of the neighboring block (ex. left and/or upper neighboring block) of the current block and additional candidate modes, based on the received MPM index, or may select one of the remaining intra prediction modes that are not included in the MPM candidates (and the planar mode), based on the remaining intra prediction mode information.
• the MPM list can be configured to include or not include the planar mode as a candidate.
• for example, if the MPM list includes the planar mode as a candidate, the MPM list can have 6 candidates, and if the MPM list does not include the planar mode as a candidate, the MPM list can have 5 candidates.
• if the MPM list does not include the planar mode as a candidate, a not planar flag (ex. intra_luma_not_planar_flag) indicating whether the intra prediction mode of the current block is not the planar mode can be signaled.
• for example, the MPM flag may be signaled first, and the MPM index and the not planar flag may be signaled when the value of the MPM flag is 1.
• in addition, the MPM index may be signaled when the value of the not planar flag is 1.
• here, the fact that the MPM list is configured not to include the planar mode as a candidate does not mean that the planar mode is not an MPM; rather, since the planar mode is always considered as an MPM, the not planar flag is signaled first to check whether the mode is the planar mode.
• for example, whether the intra prediction mode applied to the current block is among the MPM candidates (and the planar mode) or is a remaining mode may be indicated based on the MPM flag (ex. intra_luma_mpm_flag).
• a value of 1 of the MPM flag may indicate that the intra prediction mode for the current block is within the MPM candidates (and the planar mode), and a value of 0 of the MPM flag may indicate that the intra prediction mode for the current block is not within the MPM candidates (and the planar mode).
• a value of 0 of the not planar flag (ex. intra_luma_not_planar_flag) may indicate that the intra prediction mode for the current block is the planar mode, and a value of 1 of the not planar flag may indicate that the intra prediction mode for the current block is not the planar mode.
• the MPM index may be signaled in the form of an mpm_idx or intra_luma_mpm_idx syntax element, and the remaining intra prediction mode information may be signaled in the form of a rem_intra_luma_pred_mode or intra_luma_mpm_remainder syntax element.
• for example, the remaining intra prediction mode information may indicate one of the remaining intra prediction modes that are not included in the MPM candidates (and the planar mode) by indexing them in the order of prediction mode number.
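• the decoding logic implied by the flags above can be sketched as follows. The syntax element values are assumed to be already parsed, the 5-entry MPM list excludes planar as described, and 67 intra modes are assumed (mode 0 = planar).

```python
PLANAR = 0

def decode_luma_intra_mode(mpm_flag, not_planar_flag, mpm_idx,
                           mpm_remainder, mpm_list):
    """Follow the signaling order described above: MPM flag first,
    then the not planar flag, then the MPM index or the remainder."""
    if mpm_flag:
        if not not_planar_flag:        # not_planar_flag == 0 -> planar
            return PLANAR
        return mpm_list[mpm_idx]       # one of the 5 MPM candidates
    # Remaining modes: increasing mode number, skipping planar and MPMs.
    excluded = sorted([PLANAR] + mpm_list)
    remaining = [m for m in range(67) if m not in excluded]
    return remaining[mpm_remainder]

mpms = [50, 18, 1, 66, 34]             # hypothetical 5-entry MPM list
print(decode_luma_intra_mode(1, 1, 2, None, mpms))   # picks an MPM
print(decode_luma_intra_mode(0, None, None, 0, mpms))  # remainder path
```

• note that the remainder indexes only the modes outside the MPM candidates (and planar), which is exactly why it can be coded with fewer bits than a full mode number.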
  • the intra prediction mode is an intra prediction mode for the luma component (sample).
• the MPM list in this document may be referred to by various terms such as MPM candidate list and candModeList.
• when MIP is applied to the current block, a separate MPM flag, a separate MPM index, and separate remaining intra prediction mode information for the MIP may be signaled.
  • the encoder can use the intra prediction mode of the neighboring block to encode the intra prediction mode of the current block.
  • an encoder/decoder can construct a list of most probable modes (MPM) for the current block.
  • the MPM list may be referred to as an MPM candidate list.
• MPM refers to a mode used to improve coding efficiency in intra prediction mode coding, considering the similarity between the current block and the neighboring block.
  • the MPM list can be configured including the planner mode, or can be configured excluding the planner mode. For example, if the MPM list includes a planner mode, the number of candidates in the MPM list may be 6, and if the MPM list does not include a planner mode, the number of candidates in the MPM list may be 5.
  • the encoder/decoder can construct an MPM list containing 5 or 6 MPMs.
• for the construction of the MPM list, the intra modes of two neighboring blocks, that is, a left neighboring block and an upper neighboring block, can be considered as neighboring intra modes.
• in one example, the planar mode is excluded from the list, and the number of MPM list candidates may be set to five.
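• a much-simplified 5-entry MPM construction from the left and above neighboring modes can be sketched as below. The real VVC derivation has more cases (DC handling, equal-neighbor cases, etc.); this only shows the idea of seeding with the neighbors and filling with adjacent angular modes and defaults. The default-mode choices here are assumptions for illustration.

```python
def build_mpm_list(left_mode, above_mode, num_dir_modes=65):
    """Simplified 5-entry MPM list (planar excluded, as described above):
    seed with the angular neighbor modes, add adjacent angular modes
    with wrap-around, then pad with default modes."""
    mpms = []
    for m in (left_mode, above_mode):
        if m is not None and m >= 2 and m not in mpms:  # angular only
            mpms.append(m)
    seeds = list(mpms) or [50]          # fall back to a vertical default
    for s in seeds:
        for cand in (s - 1, s + 1):
            wrapped = (cand - 2) % num_dir_modes + 2   # stay in [2, 66]
            if wrapped not in mpms:
                mpms.append(wrapped)
    for default in (50, 18, 2, 34, 66):
        if default not in mpms:
            mpms.append(default)
    return mpms[:5]

print(build_mpm_list(left_mode=50, above_mode=18))
```

• the resulting short list is what the MPM index of the previous section points into.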
• the non-directional modes (or non-angular modes) among the intra prediction modes may include a DC mode based on the average of the neighboring reference samples of the current block and/or an interpolation-based planar mode.
  • the prediction unit of the encoding device/decoding device may perform inter prediction in block units to derive a prediction sample.
• inter prediction can be a prediction derived in a manner that is dependent on data elements (ex. sample values or motion information) of picture(s) other than the current picture.
  • the motion information of the current block can be predicted in units of blocks, sub-blocks or samples.
  • the motion information may include a motion vector and a reference picture index.
• in the case of inter prediction, the motion information may further include inter prediction type (L0 prediction, L1 prediction, Bi prediction, etc.) information.
• the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
  • the reference picture including the reference block and the reference picture including the temporal peripheral block may be the same or different.
• the temporal neighboring block may be called a collocated reference block or a collocated CU (colCU), and the reference picture including the temporal neighboring block may be called a collocated picture (colPic).
• for example, a motion information candidate list can be constructed based on the neighboring blocks of the current block, and a flag or index information indicating which candidate is selected (used) to derive the motion vector and/or the reference picture index of the current block can be signaled.
• inter prediction can be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the motion information of the current block may be the same as the motion information of a selected neighboring block. In the case of the skip mode, unlike the merge mode, the residual signal may not be transmitted.
• in the case of the motion vector prediction (MVP) mode, the motion vector of a selected neighboring block is used as a motion vector predictor, and a motion vector difference may be signaled. In this case, the motion vector of the current block can be derived using the sum of the motion vector predictor and the motion vector difference.
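• the MVP-mode derivation above reduces to a single vector addition, sketched here (the example vectors are hypothetical; real codecs carry them in fractional-sample precision):

```python
def derive_motion_vector(mvp, mvd):
    """MVP mode: the decoder reconstructs the motion vector as the sum
    of the motion vector predictor (taken from a selected neighboring
    block) and the signaled motion vector difference."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mvp = (12, -3)   # predictor from a selected neighboring block
mvd = (-2, 1)    # difference parsed from the bitstream
print(derive_motion_vector(mvp, mvd))
```

• signaling only the small difference instead of the full vector is what makes MVP mode cheaper than coding the motion vector directly.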
  • the motion information may include L0 motion information and/or L1 motion information according to an inter prediction type (L0 prediction, L1 prediction, Bi prediction, etc.).
  • the motion vector in the L0 direction can be called an L0 motion vector or MVL0
  • the motion vector in the L1 direction can be called an L1 motion vector or MVL1.
  • the prediction based on the L0 motion vector can be called a L0 prediction
  • the prediction based on the L1 motion vector can be called the L1 prediction
  • the prediction based on both the L0 motion vector and the L1 motion vector can be called a pair (Bi) prediction.
• the motion vector L0 can represent a motion vector associated with the reference picture list L0, and the motion vector L1 can represent a motion vector associated with the reference picture list L1.
• the reference picture list L0 may include, as reference pictures, pictures prior to the current picture in output order, and the reference picture list L1 may include pictures after the current picture in output order.
• in this case, the prior pictures may be referred to as forward (reference) pictures, and the subsequent pictures may be referred to as backward (reference) pictures.
• the reference picture list L0 may further include, as reference pictures, pictures after the current picture in output order.
• in this case, within the reference picture list L0, the prior pictures may be indexed first, and the subsequent pictures may be indexed next.
• the reference picture list L1 may further include, as reference pictures, pictures prior to the current picture in output order.
• in this case, within the reference picture list L1, the subsequent pictures may be indexed first, and the prior pictures may be indexed next.
• here, the output order may correspond to the POC (picture order count) order.
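• the L0/L1 ordering described above can be sketched by partitioning available reference pictures by POC. One common ordering (assumed here) puts the pictures closest to the current POC first within each partition.

```python
def build_reference_lists(current_poc, available_pocs):
    """L0: earlier pictures (in POC/output order) first, then later ones.
    L1: later pictures first, then earlier ones. Within each group the
    closest picture is listed first (an assumed, typical ordering)."""
    earlier = sorted([p for p in available_pocs if p < current_poc],
                     reverse=True)
    later = sorted([p for p in available_pocs if p > current_poc])
    l0 = earlier + later
    l1 = later + earlier
    return l0, l1

l0, l1 = build_reference_lists(current_poc=8,
                               available_pocs=[0, 4, 12, 16])
print(l0, l1)
```

• the reference picture index signaled per block then simply selects a position in the relevant list.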
• the coded image/video is divided into a VCL (video coding layer) that handles the image/video decoding process and itself, a subsystem that transmits and stores coded information, and a NAL (network abstraction layer) that exists between the VCL and the subsystem and is responsible for network adaptation functions.
• in the VCL, VCL data including compressed image data (slice data) can be generated, or a parameter set including information such as a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS), or an SEI (Supplemental Enhancement Information) message additionally required in the image decoding process can be generated.
• in the NAL, a NAL unit can be generated by adding header information (NAL unit header) to an RBSP (raw byte sequence payload) generated in the VCL.
  • RBSP refers to slice data, parameter set, SEI message, etc. generated from VCL.
  • the NAL unit header may include NAL unit type information specified according to RBSP data included in the corresponding NAL unit.
  • the NAL unit can be divided into a VCL NAL unit and a Non-VCL NAL unit according to the RBSP generated from VCL.
• the VCL NAL unit can mean a NAL unit that includes information about the image (slice data), and the Non-VCL NAL unit can mean a NAL unit that includes the information (a parameter set or an SEI message) required to decode the image.
  • VCL NAL unit and Non-VCL NAL unit can be transmitted through a network by attaching header information according to the data standard of the sub-system.
• for example, the NAL unit can be transformed into a data format of a predetermined standard, such as the H.266/VVC file format, RTP (Real-time Transport Protocol), or TS (Transport Stream), and transmitted through various networks.
• as described above, the NAL unit type can be specified according to the RBSP data structure included in the NAL unit, and information about the NAL unit type can be stored in the NAL unit header and signaled.
• for example, the VCL NAL unit type can be classified according to the nature and type of the pictures included in the VCL NAL unit, and the Non-VCL NAL unit type can be classified according to the type of the parameter set, etc.
• APS (Adaptation Parameter Set) NAL unit: a type for a NAL unit including an APS
• DPS (Decoding Parameter Set) NAL unit: a type for a NAL unit including a DPS
• VPS (Video Parameter Set) NAL unit: a type for a NAL unit including a VPS
• SPS (Sequence Parameter Set) NAL unit: a type for a NAL unit including an SPS
• PPS (Picture Parameter Set) NAL unit: a type for a NAL unit including a PPS
• PH (Picture Header) NAL unit: a type for a NAL unit including a PH
• the above-described NAL unit types have syntax information for the NAL unit type, and the syntax information can be stored in the NAL unit header and signaled.
• for example, the syntax information may be nal_unit_type, and the NAL unit types can be specified by nal_unit_type values.
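• extracting nal_unit_type from a NAL unit header can be sketched as below. The two-byte bit layout is assumed from the VVC-style header design (forbidden_zero_bit, reserved bit, nuh_layer_id in the first byte; nal_unit_type and nuh_temporal_id_plus1 in the second); consult the standard for the normative definition.

```python
def parse_nal_unit_header(b0, b1):
    """Parse a two-byte VVC-style NAL unit header into its fields.
    Assumed layout: forbidden_zero_bit(1) | reserved(1) | nuh_layer_id(6)
    in byte 0, nal_unit_type(5) | nuh_temporal_id_plus1(3) in byte 1."""
    return {
        "forbidden_zero_bit": (b0 >> 7) & 0x1,
        "nuh_reserved_zero_bit": (b0 >> 6) & 0x1,
        "nuh_layer_id": b0 & 0x3F,
        "nal_unit_type": (b1 >> 3) & 0x1F,
        "nuh_temporal_id_plus1": b1 & 0x07,
    }

hdr = parse_nal_unit_header(0x00, 0x89)   # example header bytes
print(hdr["nal_unit_type"], hdr["nuh_temporal_id_plus1"])
```

• the decoded nal_unit_type value is what distinguishes, for example, an APS NAL unit from a PPS or slice NAL unit.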
  • one picture can contain a plurality of slices, and one slice can contain a slice header and slice data.
• for multiple slices (a set of slice header and slice data) within one picture, one picture header can be further added.
  • the picture header (picture header syntax) can include information/parameters commonly applicable to the picture.
  • tile groups can be mixed or replaced with slices.
  • tile group headers can be mixed or replaced with slice headers.
  • the slice header may include information/parameters commonly applicable to the slice.
• the APS (APS syntax) or the PPS (PPS syntax) may include information/parameters commonly applicable to one or more slices or pictures.
• the SPS (SPS syntax) may include information/parameters commonly applicable to one or more sequences.
• the VPS (VPS syntax) may include information/parameters commonly applicable to multiple layers.
  • the DPS may include information/parameters commonly applicable to the entire video.
  • the DPS may include information/parameters related to concatenation of a coded video sequence (CVS).
  • High level syntax (HLS) in this document may include at least one of the above APS syntax, PPS syntax, SPS syntax, VPS syntax, DPS syntax, picture header syntax, and slice header syntax.
• in this document, the image/video information encoded from the encoding apparatus to the decoding apparatus and signaled in the form of a bitstream includes not only intra-picture partitioning-related information, intra/inter prediction information, residual information, and in-loop filtering information, but also information included in the slice header, information included in the picture header, information included in the APS, information included in the PPS, information included in the SPS, information included in the VPS, and/or information included in the DPS.
• in addition, the image/video information may further include information of the NAL unit header.
  • the in-loop filtering procedure may be performed on the restored samples or restored pictures.
  • in-loop filtering can be performed in the filter section of the encoding device and in the filter section of the decoding device, and a deblocking filter, SAO and/or adaptive loop filter (ALF) can be applied.
  • the ALF procedure may be performed after the deblocking filtering procedure and/or the SAO procedure is completed.
  • the deblocking filtering procedure and/or the SAO procedure may be omitted.
  • Fig. 5 is a flow chart schematically showing an example of an ALF procedure.
• the ALF procedure disclosed in FIG. 5 can be performed in an encoding apparatus and a decoding apparatus.
• in this document, the coding apparatus may include the encoding apparatus and/or the decoding apparatus.
• the coding apparatus derives a filter for the ALF (S500).
  • the filter may include filter coefficients.
• the coding apparatus may determine whether to apply the ALF, and when it is determined to apply the ALF, can derive a filter including filter coefficients for the ALF.
• information for deriving the filter (coefficients) for the ALF can be called an ALF parameter.
• information on whether the ALF is applied (ex. ALF available flag) and ALF data for deriving the filter can be signaled from the encoding apparatus to the decoding apparatus.
  • the ALF data may include information for deriving the filter for the ALF.
  • the ALF available flags are signaled at the SPS, picture header, slice header and/or CTB level, respectively.
  • the activity and/or directionality of the current block (or ALF target block) may be derived, and the filter may be derived based on the activity and/or the directionality.
  • the ALF procedure can be applied in units of 4x4 blocks (based on luma components).
  • the current block or ALF target block can be, for example, a CU, or a 4x4 block within a CU.
  • filters for the ALF may be derived based on first filters derived from information included in the ALF data and predefined second filters, and the coding device may select one of the filters based on the activity and/or the directionality.
  • the coding device can use the filter coefficients included in the selected filter for the ALF.
  • the coding apparatus performs filtering based on the filter (S510).
  • Modified restored samples may be derived based on the filtering.
  • the filter coefficients in the filter may be assigned or applied according to a filter shape, and the filtering may be performed on the reconstructed samples in the current block.
  • the restoration samples may be restoration samples after the deblocking filter procedure and the SAO procedure are completed.
  • one filter shape may be used, or one filter shape may be selected from a predetermined plurality of filter shapes.
  • the filter shape applied to the luma component and the filter shape applied to the chroma component may be different; for example, a 7x7 diamond filter shape may be used for the luma component, and a 5x5 diamond filter shape may be used for the chroma component.
  • Figs. 6A and 6B show examples of the shape of an ALF filter.
  • Fig. 6A shows a shape of a 7x7 diamond filter
  • Fig. 6B shows a shape of a 5x5 diamond filter.
  • Cn in the shape of the filter shows the filter coefficient. If n is the same in Cn, this indicates that the same filter coefficient can be assigned.
  • the position and/or unit to which a filter coefficient is assigned according to the filter shape of the ALF may be called a filter tap.
  • one filter coefficient can be assigned to each filter tap, and the arrangement of the filter taps can correspond to the filter shape.
  • the filter tap positioned at the center of the filter shape may be referred to as the center filter tap.
  • the same filter coefficient can be assigned to two filter taps with the same n value located at positions corresponding to each other with respect to the center filter tap.
  • for the 7x7 diamond filter shape, 25 filter taps are included, and since the filter coefficients C0 to C11 are allocated in a centrally symmetrical form, filter coefficients can be assigned to the 25 filter taps with only 13 filter coefficients.
  • for the 5x5 diamond filter shape, 13 filter taps are included, and filter coefficients can be assigned to the 13 filter taps with only 7 filter coefficients.
  • to reduce the amount of data in the signaled filter coefficient information, 12 of the 13 filter coefficients for the 7x7 diamond filter shape can be signaled (explicitly), and 1 filter coefficient can be derived (implicitly).
  • likewise, 6 of the 7 filter coefficients for the 5x5 diamond filter shape can be signaled (explicitly), and 1 filter coefficient can be derived (implicitly).
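  • the centro-symmetric coefficient reuse described above can be sketched as follows (a non-normative Python illustration; the tap scan order here is an assumption for illustration, not the exact scan order of the standard):

```python
# Non-normative sketch: a 7x7 diamond ALF shape has 25 taps but only 13
# unique coefficients C0..C12, because tap p and tap (24 - p) share the
# same coefficient (centro-symmetric layout). The scan order is illustrative.
TOP_HALF = list(range(13))  # coefficient indices for taps 0..12 (incl. center)

def expand_symmetric(top_half):
    """Mirror the first half so 25 taps reuse only 13 coefficients."""
    return top_half + top_half[-2::-1]  # taps 13..24 mirror taps 11..0

taps = expand_symmetric(TOP_HALF)
```

This makes the symmetry concrete: every non-center coefficient is used by exactly two taps, while the center coefficient C12 is used once.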
  • the ALF parameter may be signaled through an adaptation parameter set (APS).
  • the ALF parameter may be derived from filter information or ALF data for the ALF.
  • the ALF can be performed using a Wiener-based adaptive filter, which can minimize the mean square error (MSE) between original samples and decoded samples (or reconstructed samples).
  • the high level design for the ALF tool can incorporate syntax elements that can be accessed from the SPS and/or the slice header (or tile group header).
  • a coded video sequence may include an SPS, one or more PPSs, and one or more coded pictures that follow. Each coded picture can be divided into rectangular regions. The rectangular regions can be called tiles. One or more tiles can be gathered to form a tile group or slice. In this case, the tile group header may be linked to the PPS, and the PPS may be linked to the SPS. According to the existing method, the ALF data (ALF parameter) was included in the tile group header. Considering that one video is composed of multiple pictures and one picture contains multiple tiles, there was a problem that coding efficiency was deteriorated because ALF data (ALF parameter) signaling was performed frequently in units of tile groups.
  • the ALF parameter may be included in the APS and signaled as follows.
  • FIG 8 shows another example of the hierarchical structure of ALF data.
  • when the APS is defined, the APS can carry the necessary ALF data (ALF parameter).
  • the APS can have a self-identification parameter and ALF data.
  • the self-identification parameter of the APS can contain an APS ID. That is, the APS may include, in addition to the ALF data field, information indicating the APS ID.
  • the tile group header or slice header may refer to the APS using APS index information. In other words, the tile group header or slice header may include APS index information.
  • the ALF procedure for the target block may be performed based on the ALF data (ALF parameter) included in the APS having the APS ID indicated by the index information.
  • the APS index information may be referred to as APS ID information.
  • the SPS may include a flag that allows the use of ALF. For example, when the CVS starts, the SPS is checked, and the flag in the SPS may be checked.
  • the SPS may contain the syntax in Table 1 below, and the syntax in Table 1 may be part of the SPS.
  • an ALF available flag (which can be called the first ALF available flag) can be included in the SPS; that is, the ALF available flag can be signaled at the SPS (or SPS level).
  • if the value of the ALF available flag is 1, it can be determined that the ALF is basically available for pictures in the CVS referencing the SPS.
  • an additional available flag can be signaled at a lower level than the SPS, so that the ALF can be individually turned on/off at that level.
  • in this case, the additional available flag (which may be referred to as the second ALF available flag) can be signaled.
  • the second ALF available flag can be parsed/signaled, for example, when the ALF is available at the SPS level.
  • if the value of the second ALF available flag is 1, the ALF data can be parsed through the tile group header or the slice header.
  • the second ALF available flag can specify the ALF availability condition for the luma and chroma components.
  • the above ALF data can be accessed through APS ID information.
  • the second ALF available flag may be a tile_group_alf_enabled_flag syntax element or a slice_alf_enabled_flag syntax element.
  • based on the APS ID information, the APS referenced by the corresponding tile group or the corresponding slice may be identified.
  • the APS may include ALF data.
  • the structure of APS including ALF data can be described based on the following syntax and semantics, for example.
  • the syntax in Table 7 can be part of the APS.
  • the adaptation_parameter_set_id syntax element may indicate the identifier of the corresponding APS. That is, the APS may be identified based on the adaptation_parameter_set_id syntax element.
  • the adaptation_parameter_set_id syntax element may be called APS ID information.
  • the APS may include an ALF data field. The ALF data field may be parsed/signaled after the adaptation_parameter_set_id syntax element.
  • the APS extension flag (aps_extension_flag) can indicate whether APS extension data flag (aps_extension_data_flag) syntax elements exist.
  • the APS extension flag can be used, for example, to provide extension points for later versions of the VVC standard.
  • the ALF data field described above may contain information about the processing of the ALF filter.
  • the information that can be extracted from the ALF data field may include information indicating the number of filters used, information indicating whether the ALF is applied only to the luma component, information about color components, exponential Golomb (EG) parameters, and/or information about delta values of filter coefficients.
  • the ALF data field may contain the ALF data syntax, for example:
  • once the availability parameters for each component have been determined, information about the number of luma (component) filters can be parsed.
  • as an example, the maximum number of filters that can be used may be set to 25.
  • if the number of signaled luma filters is at least one, then for each class ranging from 0 to the maximum number of filters (ex. 25; a filter may alternatively be referred to as a class), index information for the filter can be parsed/signaled. This means that every class (i.e., from 0 to the maximum number of filters) is associated with a filter index.
  • a flag (ex. alf_luma_coeff_delta_flag) can be parsed/signaled; the flag can be used to determine whether flag information (ex. alf_luma_coeff_delta_prediction_flag) about the prediction of the ALF luma filter coefficient delta values exists in the slice header or tile group header.
  • the state of the alf_luma_coeff_delta_prediction_flag syntax element indicates 1, this can mean that the luma filter coefficients are predicted from the previous luma (filter) coefficients. If the state of the alf_luma_coeff_delta_prediction_flag syntax element indicates 0, this may mean that the luma filter coefficients are not predicted from the deltas of the previous luma (filter) coefficients.
  • the order k (order-k) of the EG code may have to be determined; this information may be needed to decode the filter coefficients.
  • the order of the exponential Golomb code may be expressed as EG(k).
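  • as background, a textbook k-th order exponential Golomb decoder can be sketched as below; this is the generic EG(k) scheme and is not necessarily bit-exact to the binarization used in the standard:

```python
def decode_eg_k(bits, k):
    """Decode one k-th order Exp-Golomb codeword from a sequence of bits
    (generic textbook EG(k); the bitstream framing is an assumption)."""
    it = iter(bits)
    leading_zeros = 0
    while next(it) == 0:            # unary prefix: count zeros before the first 1
        leading_zeros += 1
    value = (1 << leading_zeros) - 1
    suffix = 0
    for _ in range(leading_zeros + k):  # read the info bits
        suffix = (suffix << 1) | next(it)
    return (value << k) + suffix
```

Larger k spends more prefix-free "budget" on large values, which is why the order is adapted to the expected coefficient magnitudes.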
  • to determine EG(k), one syntax element, alf_luma_min_eg_order_minus1, may be parsed/signaled.
  • the alf_luma_min_eg_order_minus1 syntax element may be an entropy-coded syntax element.
  • the alf_luma_min_eg_order_minus1 syntax element may represent the smallest order of the EG used for decoding the delta luma filter coefficients.
  • the value of the alf_luma_min_eg_order_minus1 syntax element may be a value within the range of 0 to 6.
  • next, an alf_luma_eg_order_increase_flag syntax element can be parsed/signaled. If the value of the alf_luma_eg_order_increase_flag syntax element is 1, this indicates that the order of the EG indicated by the alf_luma_min_eg_order_minus1 syntax element increases by 1.
  • if the value is 0, it indicates that the order of the EG indicated by the alf_luma_min_eg_order_minus1 syntax element does not increase.
  • the order of the EG can be indicated by the index of the EG.
  • the EG order (or EG index) relating to the luma component can be determined based on the alf_luma_min_eg_order_minus1 syntax element and the alf_luma_eg_order_increase_flag syntax element, for example, as follows.
  • here, expGoOrderY is the EG order (or EG index).
  • there may be a pre-defined Golomb order index (golombOrderIdxY).
  • the pre-defined Golomb order can be used to determine the final Golomb order for coding the coefficients.
  • the pre-defined Golomb order can be constructed, for example, as the following equation.
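  • the order derivation described above can be sketched as follows (non-normative; the concrete golombOrderIdxY table below is an illustrative assumption, not the normative table):

```python
# Sketch: the minimum order comes from alf_luma_min_eg_order_minus1, and each
# alf_luma_eg_order_increase_flag set to 1 bumps the order for the next index.
def derive_eg_orders(min_eg_order_minus1, increase_flags):
    k = min_eg_order_minus1 + 1
    orders = []
    for flag in increase_flags:
        k += flag
        orders.append(k)
    return orders  # e.g. expGoOrderY

# A pre-defined Golomb order index (golombOrderIdxY) then selects which of
# these orders applies to each coefficient position (placeholder values).
GOLOMB_ORDER_IDX_Y = [0, 0, 1, 0, 0, 1, 2, 1, 0, 0, 1, 2]

def order_for_coeff(orders, coeff_pos):
    return orders[GOLOMB_ORDER_IDX_Y[coeff_pos]]
```
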
  • for all the (every) signaled filters, a per-filter flag (ex. the alf_luma_coeff_flag syntax element) can be signaled, indicating whether the filter coefficients of that filter are (explicitly) signaled.
  • if the filter coefficients are signaled (i.e., if alf_luma_coeff_flag represents true), difference (delta) information and sign information of the luma filter coefficients can be parsed/signaled.
  • delta absolute value information (the alf_luma_coeff_delta_abs syntax element) can be parsed/signaled, and when the delta absolute value is greater than 0, the sign information (the alf_luma_coeff_delta_sign syntax element) can be parsed/signaled. This information may be referred to as information about the luma filter coefficients.
  • the deltas of the filter coefficients, together with the sign, can be determined and stored.
  • in this case, the signed deltas of the filter coefficients can be stored in an array form.
  • the deltas of the filter coefficients can be called delta luma coefficients, and the deltas of the filter coefficients with the sign can be called delta luma coefficients with the sign.
  • the (luma) filter coefficients can be updated as shown in the following equation.
  • the signed delta luma coefficients for a given filter index can be used to determine the first 12 filter coefficients.
  • the 13th filter coefficient of the 7x7 filter can be determined based on, for example, the following equation.
  • the 13th filter coefficient can represent the above-described filter coefficient of the center tap.
  • for reference, the filter coefficient index 12 (i.e., a k value of 12) may represent the 13th filter coefficient.
  • the range of the final filter coefficients AlfCoeffL[filtIdx][k] can be from -2^7 to 2^7-1 when k is 0, ..., 11, and from 0 to 2^8-1 when k is 12, where the k can be replaced by j.
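  • reading the center-tap equation above as AlfCoeffL[filtIdx][12] = 128 - Σk (AlfCoeffL[filtIdx][k] << 1) over the 12 signaled coefficients (an interpretation consistent with the chroma case later in this document), it can be checked with a one-liner:

```python
# Sketch of the implicit center-tap derivation: the 13th (center) luma
# coefficient is chosen so the coefficients sum to 128 (fixed-point 1.0 with
# 7 fractional bits), each off-center coefficient counting twice because of
# the centro-symmetric layout.
def derive_center_coeff(coeffs12):
    assert len(coeffs12) == 12
    return 128 - sum(c << 1 for c in coeffs12)
```

With all twelve signaled coefficients equal to 0, the center coefficient becomes 128, i.e. the filter degenerates to the identity.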
  • the chroma component can be processed based on the alf_chroma_idc syntax element. If the value of the alf_chroma_idc syntax element is greater than 0, the minimum EG order information for the chroma component (the alf_chroma_min_eg_order_minus1 syntax element) can be parsed/signaled.
  • a 5x5 diamond filter shape may be used for the chroma component, so in this case the maximum Golomb index may be 2.
  • in this case, the EG order (or EG index) for the chroma component can be determined, for example, as follows.
  • an array containing EG orders can be derived, which can be used by a decoding device.
  • there may be a pre-defined Golomb order index (golombOrderIdxC).
  • the pre-defined Golomb order can be used to determine the final Golomb order for coding the coefficients.
  • the pre-defined Golomb order can be constructed, for example, as follows:
  • the information including absolute value information and sign information of the chroma filter coefficients may be referred to as information about the chroma filter coefficients.
  • for example, a 5x5 diamond filter shape can be applied for the chroma component; in this case, delta absolute value information (the alf_chroma_coeff_abs syntax element) for each of the six (chroma component) filter coefficients can be parsed/signaled.
  • if the value of the alf_chroma_coeff_abs syntax element is greater than 0, the sign information (the alf_chroma_coeff_sign syntax element) can be parsed/signaled.
  • the six chroma filter coefficients can be derived based on information about the chroma filter coefficients.
  • the seventh chroma filter coefficient may be determined, for example, based on the following equation.
  • the seventh filter coefficient may represent the filter coefficient of the center tap described above.
  • AlfCoeffC[6] = 128 - Σk (AlfCoeffC[filtIdx][k] << 1)
  • for reference, the filter coefficient index 6 (i.e., a k value of 6) may represent the 7th filter coefficient.
  • the range of the final filter coefficients AlfCoeffC[filtIdx][k] can be from -2^7 to 2^7-1 when k is 0, ..., 5, and from 0 to 2^8-1 when k is 6, where the k can be replaced by j.
  • ALF-based filtering can be performed based on a filter containing the derived filter coefficients, and modified reconstructed samples can be derived through this, as described above.
  • a number of filters can be derived, and one of the multiple filters can be selected; the filter coefficients of the selected filter may be used for the ALF procedure.
  • one of the plurality of filters may be specified based on the signaled filter selection information.
  • one filter among the plurality of filters may be selected based on the directionality, and filter coefficients of the selected filter may be used for the ALF procedure.
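  • as an illustration of the filtering step described above, a non-normative sketch of applying a 5x5 diamond filter (7 unique coefficients, centro-symmetric as in Fig. 6B) at the center sample of a 5x5 patch follows; the fixed-point convention (coefficients summing to 128, round-and-shift by 7) is an assumption consistent with the coefficient ranges above:

```python
# Coefficient index per position of the 5x5 diamond; -1 = outside the diamond.
LAYOUT_5X5 = [
    [-1, -1,  0, -1, -1],
    [-1,  1,  2,  3, -1],
    [ 4,  5,  6,  5,  4],
    [-1,  3,  2,  1, -1],
    [-1, -1,  0, -1, -1],
]

def alf_filter_sample(patch5x5, coeff7):
    """Filter the center sample of a 5x5 patch with 7 unique coefficients."""
    acc = 0
    for r in range(5):
        for c in range(5):
            idx = LAYOUT_5X5[r][c]
            if idx >= 0:
                acc += coeff7[idx] * patch5x5[r][c]
    return (acc + 64) >> 7  # round, then shift by 7 (coefficients sum to 128)
```

With the identity coefficient set [0, 0, 0, 0, 0, 0, 128], the output equals the (rounded) center sample; any coefficient set summing to 128 preserves flat regions.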
  • LMCS stands for luma mapping with chroma scaling.
  • the LMCS can be referred to as an (in-)loop reshaper (reshaping).
  • the control of the LMCS and/or the signaling of LMCS-related information can be performed hierarchically.
  • for example, the hierarchy may involve the CVS (coded video sequence), the SPS (sequence parameter set), the PPS (picture parameter set), the tile group header, and the tile group data; the tile group header and the tile group data can also be referred to as the slice header and slice data, respectively.
  • the SPS can set flags to enable the tools to be used in the CVS.
  • each coded picture can be divided into one or more coded rectangular regions called tiles, and the tiles can be grouped to form tile groups. Each tile group is encapsulated with header information called a tile group header. Each tile consists of CTUs containing coded data. The data may include original sample values, predicted sample values, and their luma and chroma components (luma prediction sample values and chroma prediction sample values).
  • the LMCS structure 1000 of Fig. 10 may include an in-loop mapping portion 1010 for luma components based on adaptive piecewise linear (adaptive PWL) models, and a luma-dependent chroma residual scaling portion 1020 for chroma components.
  • the inverse quantization and inverse transformation block 1011, the restoration block 1012, and the intra prediction block 1013 of the in-loop mapping portion 1010 represent the processes applied in the mapped (reshaped) domain.
  • the loop filters 1015 and the motion compensation or inter prediction block 1017 of the in-loop mapping portion 1010, and the restoration block 1022, the intra prediction block 1023, the motion compensation or inter prediction block 1024, and the loop filters 1025 of the chroma residual scaling portion 1020, represent the processes applied in the original (non-mapped, non-reshaped) domain.
  • the inverse reshaping process can be applied to the (restored) luma sample (or luma samples or array of luma samples) of the restored picture.
  • the inverse reshaping process can be performed based on a piecewise function (inverse) index of the luma sample.
  • the piecewise function (inverse) index can identify the piece (or part) to which the luma sample belongs.
  • the output of the inverse reshaping process is a modified (restored) luma sample (or modified luma samples or modified luma sample array).
  • LMCS can be enabled or disabled at a tile group (or slice), picture, or higher level.
  • a picture can contain luma samples and chroma samples.
  • a restored picture with luma samples can be referred to as a restored luma picture, and a restored picture with chroma samples can be referred to as a restored chroma picture.
  • the combination of the restored luma picture and the restored chroma picture can be referred to as a restored picture.
  • the restored luma picture can be created based on the forward reshaping process.
  • when the inter prediction is applied to the current block, forward reshaping is applied to the luma prediction sample derived based on the (restored) luma sample of the reference picture.
  • this is because the (restored) luma sample of the reference picture was generated based on the inverse reshaping process.
  • forward reshaping is applied to the luma prediction sample, so that a reshaped (mapped) luma prediction sample can be derived.
  • the forward reshaping process can be performed based on the piecewise function index of the luma prediction sample.
  • the piecewise function index can be derived based on the value of the luma prediction sample or the value of the luma sample of the reference picture used for the inter prediction.
  • a restoration sample can be generated based on the (reshaped/mapped) luma prediction sample.
  • the inverse reshaping (mapping) process may be applied to the restoration sample.
  • the restoration sample to which the inverse reshaping (mapping) process was applied may be called an inverse reshaped (mapped) restoration sample.
  • the inverse reshaped (mapped) restoration sample can be simply referred to as a reshaped (mapped) restoration sample.
  • when intra prediction (or intra block copy (IBC)) is applied to the current block, restoration samples of the current picture are referenced.
  • in this case, forward mapping may not be necessary for the prediction sample(s) of the current block, because the inverse reshaping process has not yet been applied to the referenced restoration samples.
  • the (restored) luma sample in the restored luma picture can be generated based on the (reshaped) luma prediction sample and the corresponding luma residual sample.
  • the reconstructed chroma picture may be generated based on a chroma-scaling process.
  • a (restored) chroma sample in the restored chroma picture can be derived based on a chroma prediction sample and a chroma residual sample (cres) in the current block.
  • the chroma residual sample (cres) is derived based on the (scaled) chroma residual sample (cresScale) for the current block and a chroma residual scaling factor (cScaleInv, which may be referred to as varScale).
  • the chroma residual scaling factor can be calculated based on the reshaped luma prediction sample values in the current block.
  • for example, the chroma residual scaling factor can be calculated based on the average luma value (ave(Y'pred)) of the reshaped luma prediction sample values (Y'pred).
  • for reference, the (scaled) chroma residual sample derived based on the inverse transform/inverse quantization in Fig. 10 may be referred to as cresScale, and the chroma residual sample derived by performing the (inverse) scaling procedure on the (scaled) chroma residual sample may be referred to as cres.
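  • the luma-dependent chroma residual scaling described above can be sketched as follows (hedged, non-normative; the 11-bit fixed-point precision, the bin_size parameter, and the per-bin lookup table are illustrative assumptions, with the names cScaleInv and cresScale following the text):

```python
FP_PREC = 11  # fixed-point precision of the inverse scale (assumption)

def chroma_residual_descale(c_res_scale, luma_pred_samples, inv_scale_lut, bin_size):
    """Derive cres from cresScale using a luma-bin-indexed inverse scale."""
    avg_y = sum(luma_pred_samples) // len(luma_pred_samples)  # ave(Y'pred)
    c_scale_inv = inv_scale_lut[avg_y // bin_size]            # cScaleInv
    # cres = cresScale * cScaleInv with fixed-point rounding
    return (c_res_scale * c_scale_inv + (1 << (FP_PREC - 1))) >> FP_PREC
```

With a unity table entry (1 << 11) the residual passes through unchanged, which is the expected behavior when no scaling is needed for that luma bin.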
  • Fig. 11 shows an LMCS structure according to another embodiment of this document. Fig. 11 will be described with reference to Fig. 10.
  • the in-loop mapping portion 1110 and the luma-dependent chroma residual scaling portion 1120 of Fig. 11 may operate in the same or a similar manner as the in-loop mapping portion 1010 and the luma-dependent chroma residual scaling portion 1020 of Fig. 10.
  • however, in Fig. 11, the average luma value (avgY) can be obtained based on the peripheral luma restoration samples outside the reconstruction block, not the internal luma restoration samples of the reconstruction block, and the chroma residual scaling factor can be derived based on that average luma value (avgY).
  • the peripheral luma restoration samples can be peripheral luma restoration samples of the current block, or peripheral luma restoration samples of virtual pipeline data units (VPDUs) including the current block.
  • video/image information signaled through the bitstream may include LMCS parameters.
  • the LMCS parameters can be configured with HLS (high level syntax, including slice header syntax). A detailed description of the LMCS parameters and configuration will be given later. The syntax tables described in this document (and in the following embodiments) may be constructed/encoded at the encoder end and signaled to the decoder through the bitstream, and the decoder can parse/decode the information about the LMCS (in the form of syntax elements) from the syntax tables. One or more embodiments described below can be combined.
  • the encoder can encode the current picture based on the information about the LMCS, and the decoder can decode the current picture based on the information about the LMCS.
  • In-loop mapping of luma components can adjust the dynamic range of the input signal by redistributing codewords across the dynamic range to improve compression efficiency.
  • a forward mapping (reshaping) function (FwdMap), and an inverse mapping (reshaping) function (InvMap) corresponding to the forward mapping function (FwdMap), can be used.
  • the forward mapping function (FwdMap) can be signaled using piecewise linear models; for example, the piecewise linear model can have 16 pieces or bins, and the pieces can have the same length.
  • the inverse mapping function (InvMap) may not be signaled separately, but may instead be derived from the forward mapping function (FwdMap); that is, the inverse mapping can be a function of the forward mapping.
  • in-loop (luma) reshaping can map the input luma values to changed values in the reshaped domain.
  • chroma residual scaling can be applied to compensate for the difference between the luma signal and the chroma signal.
  • in-loop reshaping can be performed by specifying high-level syntax for the reshaper model.
  • the reshaper model syntax can signal a piecewise linear model (PWL model).
  • based on the signaled PWL model, a forward lookup table (FwdLUT) and/or an inverse lookup table (InvLUT) can be derived; for example, the inverse lookup table (InvLUT) can be derived based on the forward lookup table (FwdLUT).
  • the forward lookup table (FwdLUT) maps the input luma values Yi to the changed values.
  • the inverse lookup table (InvLUT) can map the changed values to restored values, and the restored values can correspond to the input luma values.
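  • one way to derive the InvLUT from the FwdLUT can be sketched as follows, under the assumption that the forward table is monotonically non-decreasing (a non-normative illustration):

```python
# Map each changed (reshaped) value back to the smallest input value whose
# forward mapping reaches it; assumes fwd_lut is monotonically non-decreasing.
def build_inv_lut(fwd_lut):
    max_out = fwd_lut[-1]
    inv_lut = [0] * (max_out + 1)
    i = 0
    for y in range(max_out + 1):
        while i < len(fwd_lut) - 1 and fwd_lut[i] < y:
            i += 1
        inv_lut[y] = i
    return inv_lut
```

For a strictly increasing forward table the round trip inv_lut[fwd_lut[x]] recovers x exactly; for flat runs the smallest preimage is chosen.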
  • the SPS may include the syntax in Table 13 below.
  • the syntax in Table 16 may include sps_reshaper_enabled_flag as a tool enabling flag.
  • sps_reshaper_enabled_flag indicates whether the reshaper is used in a coded video sequence (CVS).
  • sps_reshaper_enabled_flag may be a flag that enables reshaping in the SPS.
  • the syntax in Table 16 may be part of the SPS.
  • sps_seq_parameter_set_id and sps_reshaper_enabled_flag may be as shown in Table 17 below.
  • tile group header or slice header may include the syntax of Table 18 or Table 19 below.
  • when sps_reshaper_enabled_flag is parsed as enabled, a flag, tile_group_reshaper_model_present_flag (or slice_reshaper_model_present_flag), can be parsed.
  • the purpose of tile_group_reshaper_model_present_flag (or slice_reshaper_model_present_flag) could be to indicate the existence of a reshaper model.
  • for example, if tile_group_reshaper_model_present_flag (or slice_reshaper_model_present_flag) is false (or 0), it can be indicated that no reshaper model exists for the current tile group (or current slice).
  • if the reshaper model (e.g. tile_group_reshaper_model() or slice_reshaper_model()) is processed, an additional flag, tile_group_reshaper_enable_flag (or slice_reshaper_enable_flag), can also be parsed.
  • tile_group_reshaper_enable_flag (or slice_reshaper_enable_flag) can indicate whether the reshaper model is used for the current tile group (or slice).
  • for example, if tile_group_reshaper_enable_flag (or slice_reshaper_enable_flag) is 0 (or false), the reshaper model can be indicated as not being used for the current tile group (or current slice).
  • if tile_group_reshaper_enable_flag (or slice_reshaper_enable_flag) is 1 (or true), the reshaper model can be indicated as being used for the current tile group (or slice).
  • as an example, tile_group_reshaper_model_present_flag (or slice_reshaper_model_present_flag) can be true (or 1) while tile_group_reshaper_enable_flag (or slice_reshaper_enable_flag) is false (or 0). This means that a reshaper model exists but is not used in the current tile group (or slice); in this case, the reshaper model can be used in the next tile groups (or slices).
  • as another example, tile_group_reshaper_enable_flag may be true (or 1) while tile_group_reshaper_model_present_flag may be false (or 0).
  • when tile_group_reshaper_enable_flag (or slice_reshaper_enable_flag) is parsed as enabled, tile_group_reshaper_chroma_residual_scale_flag (or slice_reshaper_chroma_residual_scale_flag) can be parsed.
  • if tile_group_reshaper_chroma_residual_scale_flag (or slice_reshaper_chroma_residual_scale_flag) is enabled (1 or true), chroma residual scaling is enabled for the current tile group (or slice).
  • if tile_group_reshaper_chroma_residual_scale_flag (or slice_reshaper_chroma_residual_scale_flag) is disabled (0 or false), chroma residual scaling is disabled for the current tile group (or slice).
  • the lookup tables constructed based on the reshaper data can divide the distribution of the allowable luma value range into multiple bins (for example, 16); thus, luma values within given bins can be mapped to changed luma values.
  • FIG. 12 shows a graph representing exemplary forward mapping.
  • the x-axis represents input luma values, and the y-axis represents changed output luma values.
  • the x-axis is divided into 5 bins or pieces, each bin having the same length; that is, the 5 bins mapped to the changed luma values have the same length.
  • the forward lookup table (FwdLUT) can be configured using data available in the tile group header (e.g. reshaper data), and from this, mapping can be facilitated.
  • the output pivot points can be set (marked) to the minimum and maximum boundaries of the output range of the luma codeword reshaping.
  • the process of calculating the output pivot points can be performed based on a piecewise cumulative distribution function of the number of codewords.
  • the output pivot range can be divided based on the maximum number of bins to be used and the size of the lookup table (FwdLUT or InvLUT). As an example, the output pivot range may be divided based on the product of the maximum number of bins and the size of the lookup table. For example, if the product of the maximum number of bins and the size of the lookup table is 1024, the output pivot range can be divided into 1024 entries. The division of the output pivot range can be performed based on (using) the scaling factor.
  • the scaling factor can be derived based on Equation 6 below.
  • in Equation 6, SF denotes the scaling factor, and y1 and y2 denote the output pivot points corresponding to each bin.
  • FP_PREC and c may be predetermined constants.
  • the scaling factor determined based on Equation 6 may be referred to as the scaling factor for forward reshaping.
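  • a non-normative sketch of piecewise linear forward mapping with such a per-bin scaling factor follows; reading Equation 6 as SF = ((y2 - y1) << FP_PREC) // (x2 - x1) and treating c as a rounding offset is an assumption, as is FP_PREC = 11:

```python
FP_PREC = 11  # fixed-point precision (assumption)

def fwd_map(x, in_pivots, out_pivots):
    """Map x through a piecewise linear model given input/output pivot lists."""
    for i in range(len(in_pivots) - 1):
        x1, x2 = in_pivots[i], in_pivots[i + 1]
        if x1 <= x < x2:
            y1, y2 = out_pivots[i], out_pivots[i + 1]
            sf = ((y2 - y1) << FP_PREC) // (x2 - x1)  # scaling factor SF
            return y1 + ((sf * (x - x1) + (1 << (FP_PREC - 1))) >> FP_PREC)
    return out_pivots[-1]
```

Each bin contributes a linear segment anchored at its pivot pair, so codewords are redistributed exactly as the per-bin slopes dictate.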
  • for inverse reshaping (InvLUT), the input reshaped pivot points corresponding to the mapped pivot points of the forward lookup table (FwdLUT), and the mapped inverse output pivot points (given by bin index * the number of initial codewords), are fetched.
  • the scaling factor (SF) can be derived based on Equation 7 below.
  • in Equation 7, SF represents the scaling factor, x1 and x2 represent the input pivot points, and y1 and y2 represent the output pivot points corresponding to each piece (bin).
  • here, the input pivot points can be pivot points mapped based on the forward lookup table (FwdLUT), and the output pivot points can be pivot points inverse-mapped based on the inverse lookup table (InvLUT).
  • FP_PREC may be a predetermined constant; FP_PREC in Equation 7 may be the same as or different from FP_PREC in Equation 6.
  • the scaling factor determined based on Equation 7 may be referred to as the scaling factor for inverse reshaping.
  • during inverse reshaping, the division of the input pivot points can be performed based on the scaling factor of Equation 7.
  • based on the divided input pivot points, pivot values corresponding to the minimum and maximum bin values are specified for bin indices in the range from the least bin index (reshaper_model_min_bin_idx) to the maximum bin index (reshaper_model_max_bin_idx).
  • Table 22 below shows the syntax of the reshaper model according to an embodiment.
  • in Table 22, the reshaper model has been illustratively described as a tile group reshaper model, but this specification is not necessarily limited by this embodiment.
  • for example, the reshaper model may be included in the APS, or the tile group reshaper model may be referred to as a slice reshaper model.
  • the reshaper model can include reshape_model_min_bin_idx.
  • reshape_model_min_bin_idx represents the least bin (or piece) index used in the reshaper construction process. The value of reshape_model_min_bin_idx can range from 0 to MaxBinIdx; the maximum allowed bin index, MaxBinIdx, may be, for example, 15.
  • the tile group reshaper model can preferentially parse two indices (or parameters), reshaper_model_min_bin_idx and reshaper_model_delta_max_bin_idx.
  • based on these two indices, the maximum bin index (reshaper_model_max_bin_idx) can be determined.
  • reshape_model_delta_max_bin_idx can be equal to the maximum allowed bin index MaxBinIdx minus the actual maximum bin index used in the reshaper construction process.
  • the value of the maximum bin index (reshaper_model_max_bin_idx) can be from 0 to MaxBinIdx.
  • MaxBinIdx can be, for example, 15.
  • the value of reshape_model_max_bin_idx can be derived based on Equation 8 below.
  • the maximum bin index (reshaper_model_max_bin_idx) can be derived based on the least bin index and/or reshape_model_delta_max_bin_idx.
  • the least bin index can be referred to as the minimum allowed bin index or the lowest allowed bin index, and the maximum bin index can be referred to as the maximum allowed bin index or the highest allowed bin index.
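  • a worked example of Equation 8 follows, under the reading (an assumption) that the maximum bin index is MaxBinIdx minus the signaled delta, with MaxBinIdx = 15:

```python
MAX_BIN_IDX = 15  # maximum allowed bin index (assumption: 15)

def reshaper_model_max_bin_idx(delta_max_bin_idx):
    """Derive reshape_model_max_bin_idx from the signaled delta."""
    return MAX_BIN_IDX - delta_max_bin_idx
```

Signaling the delta instead of the index itself keeps the codeword small when most bins are used.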
• Based on reshaper_model_bin_delta_abs_cw_prec_minus1, the number of bits used to represent reshape_model_bin_delta_abs_CW[i] can be determined.
• For example, the number of bits used to represent reshape_model_bin_delta_abs_CW[i] can be equal to reshaper_model_bin_delta_abs_cw_prec_minus1 plus 1.
• reshape_model_bin_delta_abs_CW[i] can represent information related to the absolute delta codeword value (the absolute value of the delta codeword) of the i-th bin. In one example, if the absolute delta codeword value of the i-th bin is greater than 0, reshaper_model_bin_delta_sign_CW_flag[i] can be parsed.
• Based on reshaper_model_bin_delta_sign_CW_flag[i], the sign of reshape_model_bin_delta_abs_CW[i] may be determined. In one example, if reshaper_model_bin_delta_sign_CW_flag[i] is 0 (or false), the corresponding variable may have a positive sign.
• If reshaper_model_bin_delta_sign_CW_flag[i] is 1 (or true), the corresponding variable may have a negative sign.
• If reshaper_model_bin_delta_sign_CW_flag[i] does not exist, it can be inferred to be 0 (or false).
• reshape_model_bin_delta_sign_CW[i] may be information related to the sign of RspDeltaCW[i].
• For example, reshape_model_bin_delta_sign_CW[i] can be the same as the reshaper_model_bin_delta_sign_CW_flag[i] described above.
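The absolute-value/sign split described above can be sketched as follows (illustrative only; the function name is an assumption, and an absent sign flag is treated as 0 per the inference rule above):

```python
def rsp_delta_cw(bin_delta_abs_cw: int, bin_delta_sign_cw_flag: int = 0) -> int:
    # RspDeltaCW[i]: magnitude from reshape_model_bin_delta_abs_CW[i],
    # sign from reshaper_model_bin_delta_sign_CW_flag[i]
    # (flag == 0 or absent -> positive, flag == 1 -> negative).
    return -bin_delta_abs_cw if bin_delta_sign_cw_flag else bin_delta_abs_cw
```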
• RspCW[i] may represent the number of codewords allocated (distributed) to the i-th bin, for bin indices from the minimum bin index (reshaper_model_min_bin_idx) to the maximum bin index (reshaper_model_max_bin_idx), and the number of codewords per bin can be stored in the form of an array. In one example, if i is less than the aforementioned reshaper_model_min_bin_idx or greater than reshaper_model_max_bin_idx, RspCW[i] may be 0.
• OrgCW may be a predetermined value; for example, it may be determined based on Equation 11 below.
• Variables can be derived based on the OrgCW described above. For example, ReshapePivot[i] can also be derived; for example, ReshapePivot[i] and the related scaling variables can be derived based on Table 24 below.
• Whether the scaling coefficient is derived based on RspCW[i] can be determined by a conditional clause depending on whether RspCW[i] is 0.
• Here, a predetermined constant for bit shifting may be used.
• Whether ChromaScaleCoef[i] is derived based on ChromaResidualScaleLut can be determined by a conditional clause depending on whether RspCW[i] is 0, where ChromaResidualScaleLut is a predetermined array (look-up table).
• The array ChromaResidualScaleLut is only an example, and this example is not necessarily limited by Table 25.
• The variable ReshapePivot[i+1] can be derived based on ReshapePivot[i]; for example, ReshapePivot[i+1] can be derived based on Equation 13 below.
• RspCW[i] in the equation above can be derived based on the above-described Equations 10 and/or 11.
• Luma mapping can be performed based on the above-described embodiments and examples; the above-described syntax and the components included therein are merely exemplary expressions, and embodiments are not limited by the tables or equations described above.
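As a non-normative sketch of the codeword and pivot derivation described above (RspCW from OrgCW and RspDeltaCW per Equations 10/11, ReshapePivot per Equation 13; all names and the 16-bin layout are assumptions for illustration):

```python
MAX_BIN_IDX = 15  # maximum allowed bin index, i.e. 16 bins

def derive_bins_and_pivots(rsp_delta_cw, min_bin_idx, max_bin_idx, org_cw):
    # RspCW[i] = OrgCW + RspDeltaCW[i] for min_bin_idx <= i <= max_bin_idx,
    # and 0 outside that range (as described for the reshaper model above).
    rsp_cw = [0] * (MAX_BIN_IDX + 1)
    for i in range(min_bin_idx, max_bin_idx + 1):
        rsp_cw[i] = org_cw + rsp_delta_cw[i]
    # Equation 13: ReshapePivot[i+1] = ReshapePivot[i] + RspCW[i]
    pivot = [0] * (MAX_BIN_IDX + 2)
    for i in range(MAX_BIN_IDX + 1):
        pivot[i + 1] = pivot[i] + rsp_cw[i]
    return rsp_cw, pivot
```

For instance, with an assumed OrgCW of 64 (a 10-bit range split over 16 bins) and all deltas zero, the pivots rise uniformly from 0 to 1024.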
• In one embodiment, the model can be adaptively derived.
• For example, the model can be derived based on parameters; for example, the parameters can be signaled through the header information.
• For example, tile group headers or slice headers can be used, and it can be advantageous for the header to carry the parameters.
• In one embodiment, a tile group header (syntax) or a slice header (syntax) may contain a new syntax element.
• For example, the new syntax element may also be referred to as a flag used for the ALF.
  • the tile group header or slice header may contain the syntax of Table 26 or Table 27 below.
  • Table 26 or Table 27 may be part of a tile group header or a slice header.
• For example, APS ID information can be included in the tile group header or the slice header, and based on this, a filter for the ALF can be derived.
• Or, for example, APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element) can be included in the tile group header or the slice header, and a filter for the ALF can be derived based on the ALF data in the APS pointed to by the APS index information. Here, the APS index information may be called APS ID information.
• In the APS, reshaper parameters may also be included. Existing reshaper parameters were included in the tile group header or slice header, but it may be advantageous to encapsulate the reshaper parameters within the APS so that they are parsed together with the ALF data.
• Therefore, in other embodiments, the APS can contain a reshaper parameter and an ALF parameter together. Here, the reshaper data can contain reshaper parameters, and the reshaper parameters may also be called LMCS parameters.
  • the ALF and reshaper tool flags can be evaluated first in the SPS; for example, the decision to enable/disable the use of the ALF and/or reshaper can be specified by the SPS. Alternatively, the determination may be made by information or syntax elements included in the SPS.
• The two tool flags (the ALF enabled flag and the reshaper enabled flag) can be operated independently of each other; that is, the two tool functions can be enabled independently, and they can be contained independently within the SPS. For example, before a VCL NAL unit is decoded, the ALF and the reshaper may require tables to be constructed, so encapsulating the ALF data and the reshaper data within the APS can be appropriate; grouping tools with similar mechanisms of operation together can be helpful.
• For example, the availability of the ALF tool can be determined through an ALF enabled flag in the SPS (ex. sps_alf_enabled_flag), and after that, whether the ALF is available in the current picture or slice can be indicated through an ALF enabled flag in the header information (ex. slice_alf_enabled_flag). If the value of the ALF enabled flag in the header information is 1, an ALF-related APS ID count syntax element can be parsed/signaled, and as many ALF-related APS ID syntax elements as the number derived based on the ALF-related APS ID count can be parsed/signaled. That is, multiple APSs can be parsed or referenced through one header information.
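The parsing order described in this paragraph can be sketched as follows (the BitReader stub and all names are hypothetical stand-ins for the actual bitstream parsing):

```python
# Hypothetical reader stub; the real parsing follows the bitstream syntax,
# this only illustrates the order described above.
class BitReader:
    def __init__(self, values):
        self.values = list(values)

    def read(self):
        return self.values.pop(0)

def parse_slice_alf_aps_ids(reader, sps_alf_enabled_flag):
    # slice_alf_enabled_flag is parsed only when ALF is enabled in the SPS;
    # an APS ID count then tells how many ALF-related APS IDs follow.
    aps_ids = []
    if sps_alf_enabled_flag:
        slice_alf_enabled_flag = reader.read()
        if slice_alf_enabled_flag:
            num_aps_ids = reader.read()  # ALF-related APS ID count
            aps_ids = [reader.read() for _ in range(num_aps_ids)]
    return aps_ids
```

This mirrors how one header can reference multiple APSs through a count followed by a list of IDs.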
• For example, the availability of the LMCS (or reshaper) tool can be determined through an LMCS enabled flag in the SPS (ex. sps_reshaper_enabled_flag).
• For example, sps_reshaper_enabled_flag may be referred to as sps_lmcs_enabled_flag.
• If the value of the LMCS enabled flag in the header information (ex. slice_lmcs_enabled_flag) is 1, an LMCS-related APS ID syntax element can be parsed/signaled, and the LMCS model can be derived from the APS pointed to by the LMCS-related APS ID syntax element.
• For example, the APS may further include an LMCS data field, and the LMCS data field may include the above-described LMCS model (reshaper model) information.
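The APS-ID-based lookup can be sketched as follows (the aps_buffer dictionary is a hypothetical stand-in for the decoder's buffer of received APSs; keys and field names are assumptions):

```python
# Stand-in for the decoder's buffer of received APSs, keyed by APS ID.
aps_buffer = {
    0: {"alf_data": "alf params", "lmcs_data": "reshaper model"},
}

def lmcs_model_from_aps(lmcs_aps_id):
    # The LMCS model is taken from the LMCS data field of the APS
    # pointed to by the LMCS-related APS ID in the header.
    return aps_buffer[lmcs_aps_id]["lmcs_data"]
```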
• In one example, the SPS can contain the syntax in Table 30 below.
• For example, the syntax in Table 30 relates to tool enabling.
• For example, the sps_alf_enabled_flag syntax element can be used to specify whether the ALF is used; that is, the sps_alf_enabled_flag syntax element can be a tool flag that enables the ALF in the SPS.
• Similarly, the sps_reshaper_enabled_flag syntax element can be a tool flag that enables reshaping in the SPS.
• For example, the syntax in Table 30 may be part of the SPS.
• The semantics of sps_alf_enabled_flag and sps_reshaper_enabled_flag may be as shown in Table 31 below.
  • the APS may include the syntax of Table 32 or Table 33 below.
  • Table 32 or Table 33 may be part of the APS.
  • the aps_extension_data_flag syntax element may represent the same information as described by the various embodiments described above.
  • the tile group header or slice header may include the syntax of Table 34 or Table 35 below.
  • Table 34 or Table 35 may be part of a tile group header or a slice header.
• The semantics of the tile_group_aps_id_alf syntax element, the tile_group_aps_id_reshaper syntax element, the slice_aps_id_alf syntax element, and/or the slice_aps_id_reshaper syntax element may include the matters disclosed in the following tables, and other syntax elements may represent the same information as described above in the various examples.
  • tile group header or slice header may contain the syntax in Table 38 or Table 39 below.
  • Table 38 or Table 39 may be part of a tile group header or a slice header.
• For example, APS index information for the ALF data (e.g., a tile_group_aps_id_alf syntax element or a slice_aps_id_alf syntax element) and APS index information for the reshaper data (e.g., a tile_group_aps_id_reshaper syntax element or a slice_aps_id_reshaper syntax element) can be included separately.
• Alternatively, a tile group header or slice header according to Table 38 or Table 39 above may include one piece of APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element), and the one piece of APS index information can be used both as APS index information for the ALF data and as APS index information for the reshaper data.
• Or, for example, based on the APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element), the ALF and the reshaper can reference different APSs.
• For example, the APS index information for the ALF data (for example, the tile_group_aps_id_alf syntax element or the slice_aps_id_alf syntax element) and the APS index information for the reshaper data can represent index information for different APSs.
• Or, the ALF and the reshaper can refer to the same APS.
• For example, the APS index information for the ALF data in Table 34 or Table 35 (for example, the tile_group_aps_id_alf syntax element or the slice_aps_id_alf syntax element) and the APS index information for the reshaper data can indicate index information for the same APS. Or, if the ALF and the reshaper refer to the same APS, one piece of APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element) may be included, as in Table 38 or Table 39.
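The two signaling variants above (separate IDs vs. one shared ID) can be sketched as follows (the dictionary-based header and its key names are assumptions for illustration):

```python
def referenced_aps_ids(header):
    # Shared-ID variant (Tables 38/39): one aps_id serves both tools,
    # so the ALF and the reshaper refer to the same APS.
    if "aps_id" in header:
        return header["aps_id"], header["aps_id"]
    # Separate-ID variant (Tables 34/35): the ALF and the reshaper may
    # point at the same APS or at different APSs.
    return header["aps_id_alf"], header["aps_id_reshaper"]
```

The function returns the (ALF, reshaper) APS ID pair actually referenced by the header.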
• Alternatively, reshaper parameters can be included conditionally. For this purpose, a tile group header or slice header can refer to one or more APSs, which can be operated at the request of an application.
• For example, a use case for a sub-bitstream extraction process or bitstream splicing can be considered. Previously, not specifying such a bitstream attribute could imply system constraints; in particular, a system may use one SPS for the whole bitstream (and hence splice complex CVSs from different encoding devices), or all SPSs may be sent at the beginning of the session.
• In this case, it can be advantageous for the required ALF data and/or reshaper model data to be parsed based on the tile group header or slice header.
• In this case, the system can have the flexibility of extracting only the VCL NAL units required for its own processing.
• In addition to the flexibility provided by signaling the APS ID, it can be advantageous to signal, in the tile group header or the slice header, information on whether the NAL unit carries ALF data (eg, alf_data()) and/or a reshaper model (eg, tile_group_reshaper_model()).
• For example, a tile_group_alf_reshaper_usage_flag syntax element or a slice_alf_reshaper_usage_flag syntax element can be included in the tile group header or the slice header.
• In one embodiment, the tile group header or slice header may include the syntax of Table 40 or Table 41 below.
• For example, Table 40 or Table 41 may be part of a tile group header or a slice header.
  • the semantics of the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element may include the items disclosed in the following tables, and other syntax elements may represent the same information as described by the various embodiments described above.
• For example, the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element can indicate information on whether the APS ID is used. That is, if the value of the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element is 1, the APS ID is not used, and therefore the APS may not even be referenced.
• Or, for example, if the value of the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element is 0, the APS ID can be used.
• For example, it can be specified by the tile_group_aps_pic_parameter_set syntax element or the slice_aps_pic_parameter_set syntax element.
• For example, the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element may also be referred to as an APS usage flag or an ALF & reshaper usage flag.
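The conditional referencing controlled by the usage flag can be sketched as follows (function and parameter names are assumptions; the semantics follow the flag values described above):

```python
def referenced_aps_id(alf_reshaper_usage_flag, aps_id_if_used):
    # usage flag == 1: the APS ID is not used, so no APS is referenced;
    # usage flag == 0: the signaled APS ID identifies the referenced APS.
    return None if alf_reshaper_usage_flag == 1 else aps_id_if_used
```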
  • tile group header or slice header can contain the syntax in Table 44 or Table 45 below.
  • Table 44 or Table 45 may be part of a tile group header or a slice header.
• For example, APS index information for the ALF data (e.g., a tile_group_aps_id_alf syntax element or a slice_aps_id_alf syntax element) and APS index information for the reshaper data (e.g., a tile_group_aps_id_reshaper syntax element or a slice_aps_id_reshaper syntax element) can be included separately.
• Alternatively, a tile group header or slice header according to Table 44 or Table 45 above may include one piece of APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element), and the one piece of APS index information can be used both as APS index information for the ALF data and as APS index information for the reshaper data.
• Or, based on the APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element), the ALF and the reshaper can refer to different APSs.
• For example, the APS index information for the ALF data in Table 40 or Table 41 (for example, the tile_group_aps_id_alf syntax element or the slice_aps_id_alf syntax element) and the APS index information for the reshaper data (for example, the tile_group_aps_id_reshaper syntax element or the slice_aps_id_reshaper syntax element) can represent index information for different APSs.
• Or, if the ALF and the reshaper refer to the same APS, one piece of APS index information (for example, a tile_group_aps_id syntax element or a slice_aps_id syntax element) may be included, as in Table 44 or Table 45.
• FIGS. 13 and 14 schematically show an example of a video/image encoding method and related components according to the embodiment(s) of this document.
• The method disclosed in FIG. 13 can be performed by the encoding apparatus disclosed in FIG. 2. Specifically, for example, S1300 in FIG. 13 may be performed by the adding unit 250 of the encoding device, S1310 in FIG. 13 may be performed by the filtering unit 260 of the encoding device, and S1320 may be performed by the entropy encoding unit 240 of the encoding device.
  • the method disclosed in FIG. 13 may include the embodiments described above in this document.
• The encoding apparatus generates restoration samples of the current block in the current picture (S1300).
  • the encoding device can derive the prediction samples of the current block based on the prediction mode.
  • various prediction methods disclosed in the text such as inter prediction or intra prediction, can be applied.
• Residual samples can be derived based on the prediction samples and the original samples.
  • residual information can be derived based on the residual samples.
  • Residual samples (modified) can be derived based on the residual information.
  • Restore samples may be generated based on the (modified) residual samples and the prediction samples.
  • a restoration block and a restored picture may be derived based on the restoration samples.
• The encoding device generates reshaping-related information for the restored samples (S1310).
• For example, the encoding device may generate reshaping-related information and/or ALF-related information for the restored samples.
• For example, the encoding device may derive reshaping-related parameters that can be applied for reshaping the restored samples and generate the reshaping-related information.
• Or, for example, the encoding device may derive ALF-related parameters that can be applied for filtering the restored samples and generate the ALF-related information.
• The reshaping-related information may include at least some of the reshaping-related information described above in this document.
• Also, for example, the ALF-related information may include at least some of the ALF-related information described above in this document.
• The encoding device encodes image information including the information for generating the restored samples and the reshaping-related information (S1320).
• For example, image information including at least a portion of the information for generating the restoration samples, the reshaping-related information, and/or the ALF-related information may be encoded.
• For example, the reshaping-related information may be referred to as LMCS-related information.
  • the image information may be referred to as video information.
• For example, the information for generating the restoration samples may include prediction-related information and/or residual information.
• For example, the prediction-related information may include information on various prediction modes (e.g., merge mode, MVP mode), motion information, and the like.
  • the image information may include a variety of information according to an embodiment of this document.
• For example, the image information may include at least one piece of information or at least one syntax element disclosed in at least one of Tables 1 to 45 described above.
• For example, the reshaping data may contain information for deriving a mapping index indicating the mapping relationship of the values of the (luma component) restoration samples of the current block.
• That is, the reshaping data may contain information for performing a reshaping procedure.
• For example, the reshaping procedure can represent an inverse reshaping procedure, and the mapping index can contain the index used for the above inverse reshaping procedure.
• However, the reshaping procedure according to this document is not limited to the inverse reshaping procedure; a forward reshaping procedure or a chroma reshaping procedure can be used. Provided that, if a forward reshaping procedure or a chroma reshaping procedure is used, the order in which the reshaping-related information is generated within the encoding procedure can be different, and the index can also differ depending on the forward reshaping procedure or the chroma reshaping procedure.
• For example, the reshaping procedure can be described as an LMCS procedure, and the reshaping data may be referred to as LMCS data.
• Also, for example, the APS described above may contain the ALF data.
• For example, the ALF data may contain information for deriving the ALF filter coefficients.
• The ID information for the reshaping data and the ID information for the ALF data may be the same, so they may be signaled separately but may also be signaled as one. In this case, for convenience of explanation, an APS including both reshaping data and ALF data may be referred to as a first APS.
• Or, the ID information for the reshaping data may be different from the ID information for the ALF data.
• In this case, when the reshaping data or the ALF data is included in one APS, it may be referred to as a first APS with reshaping data or a first APS with ALF data, respectively.
• For example, when the ID information for the reshaping data and the ID information for the ALF data are identical to each other, that is, when the reshaping data and the ALF data are included in one APS, or when either the reshaping data or the ALF data is included in the APS, the ID information for the APS can be represented in the same form as a tile_group_aps_id syntax element or a slice_aps_id syntax element.
• In this case, reshaping and the ALF can be performed using ID information of the same type.
• For example, the APS ID information for reshaping and/or for the ALF may be included in the header information.
  • the image information may include the header information.
• For example, the header information may include a picture header or a slice header (or tile group header), and the same applies in the following.
• For example, the image information may include an SPS, where the SPS may include a first reshaping enabled flag indicating whether the reshaping is available and/or a first ALF enabled flag indicating whether the ALF is available.
• For example, the SPS may include the first reshaping enabled flag.
• For example, the first reshaping enabled flag may represent the sps_reshaper_enabled_flag syntax element, and the first ALF enabled flag may represent the sps_alf_enabled_flag syntax element.
• For example, based on the first reshaping enabled flag with a value of 1, the header information may include a second reshaping enabled flag indicating whether the reshaping is available in a picture or slice. That is, when the value of the first reshaping enabled flag is 1, the header information may include the second reshaping enabled flag indicating whether the reshaping is available in the picture or slice. Or, based on the first reshaping enabled flag with a value of 0, the header information may not include the second reshaping enabled flag. That is, when the value of the first reshaping enabled flag is 0, the header information may not include the second reshaping enabled flag.
• Also, for example, when the value of the first reshaping enabled flag is 1, the header information may include a reshaping model presence flag indicating whether a reshaping model exists in a picture or slice. That is, when the value of the first reshaping enabled flag is 1, the header information may include the reshaping model presence flag.
• For example, the reshaping model presence flag may represent a tile_group_reshaper_model_present_flag syntax element or a slice_reshaper_model_present_flag syntax element.
• Also, for example, based on the reshaping model presence flag with a value of 1, the header information may include the second reshaping enabled flag. That is, when the value of the reshaping model presence flag is 1, the header information may include the second reshaping enabled flag.
• For example, the performance of the reshaping procedure may be indicated based on the first reshaping enabled flag, the reshaping model presence flag, and/or the second reshaping enabled flag.
• For example, the performance of the reshaping procedure may be indicated based on the first reshaping enabled flag with a value of 1, the reshaping model presence flag with a value of 1, and/or the second reshaping enabled flag with a value of 1.
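The flag cascade described above can be sketched as follows (a hedged illustration; the function name is an assumption):

```python
def reshaping_procedure_indicated(first_enabled_flag,
                                  model_present_flag,
                                  second_enabled_flag):
    # The reshaping procedure is indicated only when the SPS-level flag,
    # the reshaping model presence flag, and the header-level enabled
    # flag all have the value 1.
    return bool(first_enabled_flag and model_present_flag
                and second_enabled_flag)
```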
• Also, for example, based on the first ALF enabled flag with a value of 1, the header information may include a second ALF enabled flag indicating whether the ALF is available in a picture or slice.
• That is, when the value of the first ALF enabled flag is 1, the header information may include the second ALF enabled flag indicating whether the ALF is available in the picture or slice.
• Or, based on the first ALF enabled flag with a value of 0, the header information may not include the second ALF enabled flag. That is, when the value of the first ALF enabled flag is 0, the header information may not include the second ALF enabled flag.
• For example, the second ALF enabled flag may represent a tile_group_alf_enabled_flag syntax element or a slice_alf_enabled_flag syntax element.
• Also, for example, based on the second ALF enabled flag with a value of 1, the header information may include a usage flag indicating whether the ALF is used in a picture or slice. That is, if the value of the second ALF enabled flag is 1, the header information may include the usage flag indicating whether the ALF is used in the picture or slice.
• Or, based on the second ALF enabled flag with a value of 0, the header information may not include the usage flag. That is, when the value of the second ALF enabled flag is 0, the header information may not include the usage flag.
• For example, the usage flag may represent a tile_group_alf_usage_flag syntax element or a slice_alf_usage_flag syntax element.
• Also, for example, based on the usage flag with a value of 1, the header information may include the ALF data. That is, if the value of the usage flag is 1, the header information may include the ALF data, and in this case the ALF data can be derived without using APS ID information. Or, for example, based on the usage flag with a value of 0, the header information may include the APS ID information for the ALF data.
• Also, for example, the header information may include the second ALF enabled flag indicating whether the ALF is available in the picture or slice, and, based on the second ALF enabled flag with a value of 1 (that is, if the value of the second ALF enabled flag is 1), the header information may include an ALF and reshaper usage flag indicating whether the ALF and the reshaper are used in the picture or slice.
• For example, the ALF and reshaper usage flag may represent the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element.
• Also, for example, the header information may include APS ID information; the APS ID information can represent the ID information for the reshaping data and/or the ID information for the ALF data.
• The ID information for the reshaping data and the ID information for the ALF data may be the same, but they may also be different.
• The encoding device may generate a bitstream or encoded information by encoding image information including all or part of the above-described information (or syntax elements), or may output it in the form of a bitstream.
• The bitstream or the encoded information can be transmitted to the decoding device through a network or a storage medium.
• Also, the bitstream or the encoded information may be stored in a computer-readable storage medium.
  • the bitstream or the encoded information may be generated by the image encoding method described above.
• FIGS. 15 and 16 schematically show an example of an image/video decoding method and related components according to an embodiment of this document.
• The method disclosed in FIG. 15 can be performed by the decoding device disclosed in FIG. 3. Specifically, for example, S1500 in FIG. 15 may be performed by the entropy decoding unit 310 of the decoding device, S1510 may be performed by the adding unit 340 of the decoding device, and S1520 may be performed by the filtering unit 350 of the decoding device.
• The method disclosed in FIG. 15 may include the embodiments described above in this document.
• The decoding apparatus obtains image information through a bitstream (S1500).
  • the image information may include various types of information according to the embodiments of this document.
• For example, the image information may include at least one piece of information or at least one syntax element disclosed in at least one of Tables 1 to 45 described above.
  • the image information may be referred to as video information.
  • the image information may include at least some of the information related to the creation of the restoration samples, information related to the reshaping, or information related to the company.
• For example, the information for generating the restoration samples may include prediction-related information and/or residual information.
• For example, the prediction-related information may include information on various prediction modes (e.g., merge mode, MVP mode), motion information, and the like.
• The decoding apparatus generates the restored samples of the current block based on the image information (S1510).
• For example, the decoding apparatus may derive the prediction samples of the current block based on the prediction-related information included in the image information. Or, for example, the decoding apparatus may derive residual samples based on the residual information included in the image information. Or, for example, the decoding apparatus may generate restoration samples based on the prediction samples and the residual samples. A restoration block and a restoration picture can be derived based on the restoration samples.
• The decoding apparatus performs a reshaping procedure on the restored samples (S1520).
• For example, the decoding device may perform a reshaping procedure and/or an ALF procedure on the restored samples.
• For example, the decoding device may obtain reshaping-related information and/or ALF-related information from the image information; based on this, reshaping-related parameters and/or ALF-related parameters can be derived, and based on these, the reshaping procedure or the ALF procedure can be performed. For example, the reshaping-related parameters can be derived from the reshaping-related information.
• For example, the reshaping-related information may include at least some of the reshaping-related information described above in this document, and, for example, the ALF-related information may include at least some of the ALF-related information described above in this document.
• The reshaping-related parameters may be referred to as reshaping-data-related parameters, reshaping model information, or information contained in the reshaping model.
• For example, a reshaping procedure can be performed based on the reshaping data; here, the reshaping procedure may include a procedure for mapping the value of the (luma component) restored sample of the current block to the value of a mapped restored sample based on the mapping index.
• For example, the mapping index can be derived based on the reshaping data.
• For example, the reshaping procedure can represent an inverse reshaping procedure, and the mapping index may contain the index used for the inverse reshaping procedure. However, the reshaping procedure according to this document is not limited to the inverse reshaping procedure; a forward reshaping procedure or a chroma reshaping procedure may be used.
• If a forward reshaping procedure or a chroma reshaping procedure is used, the order in which the reshaping procedure is performed within the decoding procedure may be different, and the index may also differ depending on the forward reshaping procedure or the chroma reshaping procedure.
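The bin-index lookup underlying the mapping described above can be sketched as follows (illustrative only; a real decoder uses fixed-point arithmetic and the normative derivation rather than a library search):

```python
import bisect

def mapping_index(sample_value, reshape_pivot):
    # Find the bin index i such that
    # ReshapePivot[i] <= sample_value < ReshapePivot[i+1];
    # the (inverse) reshaping step uses this index to select the
    # per-bin mapping of the restored sample value.
    return bisect.bisect_right(reshape_pivot, sample_value) - 1
```

For example, with pivots [0, 64, 128, 192] a sample value of 70 falls in bin 1.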
• For example, the reshaping procedure can be expressed as an LMCS procedure, and the reshaping-related information can be referred to as LMCS-related information; the same may apply in the following.
• Also, for example, the APS described above may include the ALF data, and the ALF procedure can be performed based on the ALF data; for example, this can include a procedure for deriving the ALF filter coefficients based on the ALF data.
• For example, the reshaping data can be derived based on the ID information for the reshaping data, and the ALF data can be derived based on the ID information for the ALF data. That is, the reshaping data and the ALF data may be included in the APS indicated by the respective ID information.
• The ID information for the reshaping data and the ID information for the ALF data may be the same, so they may be signaled separately but may also be signaled as one. In this case, for convenience of explanation, an APS containing both reshaping data and ALF data can be referred to as a first APS.
• Or, the first APS with the reshaping data may be different from the first APS with the ALF data, and the ID information for the reshaping data may be different from the ID information for the ALF data.
• For example, the ID information for the APS can be represented in the same form as a tile_group_aps_id syntax element or a slice_aps_id syntax element.
• Or, the ID information for the reshaping data and the ID information for the ALF data may be represented separately.
• For example, the APS ID information may be included in the header information, and the video information may include the header information.
  • the header information may include a picture header or a slice header (or tile group header), and the same applies to the following.
• For example, the image information may include an SPS, and the SPS may include a first reshaping enabled flag indicating whether the reshaping is available and/or a first ALF enabled flag indicating whether the ALF is available.
• For example, the SPS may include the first reshaping enabled flag.
• For example, the first reshaping enabled flag may represent the sps_reshaper_enabled_flag syntax element, and the first ALF enabled flag may represent the sps_alf_enabled_flag syntax element.
• For example, based on the first reshaping enabled flag with a value of 1, the header information may include a second reshaping enabled flag indicating whether the reshaping is available in a picture or slice.
• Or, based on the first reshaping enabled flag with a value of 0, the header information may not include the second reshaping enabled flag. That is, when the value of the first reshaping enabled flag is 0, the header information may not include the second reshaping enabled flag.
• Also, for example, when the value of the first reshaping enabled flag is 1, the header information may include a reshaping model presence flag indicating whether a reshaping model exists in a picture or slice. That is, the header information may include the reshaping model presence flag.
• For example, the reshaping model presence flag may represent a tile_group_reshaper_model_present_flag syntax element or a slice_reshaper_model_present_flag syntax element.
• Also, for example, based on the reshaping model presence flag with a value of 1, the header information may include the second reshaping enabled flag. That is, when the value of the reshaping model presence flag is 1, the header information may include the second reshaping enabled flag.
• For example, the performance of the reshaping procedure may be indicated based on the first reshaping enabled flag, the reshaping model presence flag, and/or the second reshaping enabled flag.
• For example, the performance of the reshaping procedure may be indicated based on the first reshaping enabled flag with a value of 1, the reshaping model presence flag with a value of 1, and/or the second reshaping enabled flag with a value of 1.
• Also, for example, based on the first ALF enabled flag with a value of 1, the header information may include a second ALF enabled flag indicating whether the ALF is available in a picture or slice.
• Or, based on the first ALF enabled flag with a value of 0, the header information may not include the second ALF enabled flag. That is, when the value of the first ALF enabled flag is 0, the header information may not include the second ALF enabled flag.
  • the second company available flag is bar 6_ is ]' 011 ]3_ _611 16 ( 1 1 & Eun Syntax Can represent elements.
  • based on the second ALF enabled flag with a value of 1, the header information may include a usage flag indicating whether ALF is used in the picture or slice.
  • based on the second ALF enabled flag with a value of 0, the header information may not include the usage flag; that is, when the value of the second ALF enabled flag is 0, the header information does not include the usage flag.
  • the usage flag may represent, for example, the tile_group_alf_usage_flag syntax element or the slice_alf_usage_flag syntax element.
  • based on the usage flag with a value of 1 (that is, when the value of the usage flag is 1), the header information may include the ALF data. In this case, the ALF data can be derived, and the ALF procedure can be performed, without APS ID information for the ALF. Alternatively, based on the usage flag with a value of 0, the header information may include the APS ID information.
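The branch described above can be illustrated with a small sketch. All names here are hypothetical stand-ins for the signaled syntax: when the usage flag is 1 the ALF data is carried in the header itself, so no APS ID is needed; when it is 0 the header instead carries an APS ID that refers to ALF data stored in a previously received APS.

```python
# Minimal sketch of deriving ALF data either inline from the header or by
# reference through an APS ID (hypothetical field names, not normative syntax).

def resolve_alf_data(header, aps_map):
    if header["alf_usage_flag"] == 1:
        return header["alf_data"]          # derived without APS ID information
    return aps_map[header["alf_aps_id"]]   # referenced through the APS ID

aps_map = {3: "alf-data-from-aps-3"}
inline = resolve_alf_data({"alf_usage_flag": 1, "alf_data": "inline-alf"}, aps_map)
referred = resolve_alf_data({"alf_usage_flag": 0, "alf_aps_id": 3}, aps_map)
```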
  • the header information may include the second ALF enabled flag indicating whether ALF is enabled in the picture or slice, and based on the second ALF enabled flag with a value of 1 (that is, when the value of the second ALF enabled flag is 1), the header information may include an ALF and reshaper usage flag for the picture or slice.
  • the ALF and reshaper usage flag may represent the tile_group_alf_reshaper_usage_flag syntax element or the slice_alf_reshaper_usage_flag syntax element.
  • the header information may include the ALF data and the reshaping data.
  • the header information may include first APS ID information and second APS ID information. One of the first and second APS ID information may indicate the ID of the APS referenced for the ALF data, and the other may indicate the ID of the APS referenced for the reshaping data.
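The separate APS ID signaling can be sketched as follows: the header carries one APS ID for the ALF data and another for the reshaping (LMCS) data, so each tool can refer to a different APS. All field names are illustrative, not normative syntax.

```python
# Sketch of resolving separately signaled APS IDs for ALF and reshaping data
# against a table of previously received APSs (illustrative names only).

def resolve_tool_data(header, aps_map):
    return {
        "alf": aps_map[header["alf_aps_id"]],
        "reshaper": aps_map[header["reshaper_aps_id"]],
    }

aps_map = {0: {"type": "ALF", "coeffs": [1, 2, 3]},
           1: {"type": "LMCS", "model": [4, 5]}}
header = {"alf_aps_id": 0, "reshaper_aps_id": 1}
resolved = resolve_tool_data(header, aps_map)
```

Keeping the two IDs separate is what lets a picture reuse an old ALF APS while pointing at a newer LMCS APS, or vice versa.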
  • the decoding device may obtain image information including all or part of the above-described information (or syntax elements) by decoding the bitstream or the encoded information.
  • the bitstream or the encoded information may be stored in a computer-readable storage medium, and may cause the above-described decoding method to be performed.
  • the encoding device and/or the decoding device according to this document may be included in a device that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
  • a module may be stored in memory and executed by a processor.
  • the memory may be internal or external to the processor, and may be connected to the processor by various well-known means.
  • the processor may include application-specific integrated circuits (ASICs), other chipsets, logic circuits, and/or data processing devices.
  • the memory may include read-only memory (ROM), random access memory (RAM), flash memory, and memory cards.
  • the embodiments described in this document may be implemented and performed on a processor, microprocessor, controller, or chip. For example, the functional units shown in each figure may be implemented and performed on a computer, processor, microprocessor, controller, or chip; in this case, information on instructions or algorithms may be stored in a digital storage medium.
  • in addition, the decoding device and the encoding device to which the embodiments of this document are applied may be included in a multimedia broadcasting transmission/reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video conversation device, a real-time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video-on-demand (VoD) service providing device, an over-the-top (OTT) video device, an Internet streaming service providing device, a three-dimensional (3D) video device, a virtual reality (VR) device, an augmented reality (AR) device, a video telephony device, or a transportation terminal.
  • OTT video devices may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smartphone, a tablet PC, and a digital video recorder (DVR).
  • multimedia data having a data structure according to the embodiment(s) of this document may also be stored in a computer-readable recording medium.
  • computer-readable recording media include all types of storage devices and distributed storage devices in which computer-readable data are stored.
  • the computer-readable recording media may include, for example, a Blu-ray Disc (BD), a universal serial bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
  • the computer-readable recording media also include media implemented in the form of carrier waves (e.g., transmission via the Internet).
  • the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted over a wired or wireless communication network.
  • the embodiment(s) of this document may be implemented as a computer program product by program code, and the program code may be executed on a computer according to the embodiment(s) of this document. The program code may be stored on a computer-readable carrier.
  • FIG. 17 shows an example of a content streaming system to which the embodiments disclosed in this document can be applied.
  • the content streaming system to which the embodiments of this document are applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
  • the encoding server compresses content input from multimedia input devices such as a smartphone, a camera, or a camcorder into digital data, generates a bitstream, and transmits it to the streaming server. As another example, when multimedia input devices such as a smartphone, a camera, or a camcorder directly generate the bitstream, the encoding server may be omitted.
  • the bitstream may be generated by an encoding method or a bitstream generating method to which the embodiments of this document are applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
  • the streaming server transmits multimedia data to the user device based on a user request through the web server, and the web server serves as a medium informing the user of which services are available. When the user requests a desired service from the web server, the web server delivers the request to the streaming server, and the streaming server transmits multimedia data to the user.
  • the content streaming system may include a separate control server, in which case the control server controls commands/responses between devices in the content streaming system.
  • the streaming server may receive content from a media storage and/or an encoding server. For example, when receiving content from the encoding server, the content may be received in real time. In this case, in order to provide a seamless streaming service, the streaming server may store the bitstream for a predetermined period of time.
  • examples of the user device include a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (for example, a smartwatch, smart glasses, or a head mounted display), a digital TV, a desktop computer, digital signage, and the like.
  • Each server in the content streaming system can be operated as a distributed server, and in this case, data received from each server can be distributed and processed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to the disclosure of this document, ALF parameters and/or LMCS parameters can be signaled hierarchically, thereby reducing the amount of data that must be signaled for video/image coding and increasing coding efficiency.

Description

WO 2020/175893 PCT/KR2020/002702 SPECIFICATION
Title of Invention: APS signaling-based video or image coding
Technical Field
[1] The present technology relates to APS signaling-based video or image coding.
Background Art
[2] Recently, demand for high-resolution, high-quality images/videos such as 4K or 8K or higher UHD (Ultra High Definition) images/videos is increasing in various fields. As image/video data becomes higher in resolution and quality, the amount of information or bits transmitted increases relative to existing image/video data; therefore, when image data is transmitted using a medium such as an existing wired/wireless broadband line, or image/video data is stored using an existing storage medium, transmission and storage costs increase.
[3] In addition, interest in and demand for immersive media such as VR (virtual reality) and AR (artificial reality) content or holograms are increasing, and broadcasting of images/videos having image characteristics different from those of real images, such as game images, is increasing.
[4] Accordingly, highly efficient image/video compression techniques are required to effectively compress, transmit, store, and reproduce the information of such high-resolution, high-quality images/videos having various characteristics.
[5] In addition, in order to improve compression efficiency and increase subjective/objective visual quality, there are discussions on techniques such as LMCS (luma mapping with chroma scaling) and ALF (adaptive loop filtering). In order to apply these techniques efficiently, a method for efficiently signaling the related information is needed.
Detailed Description of the Invention
Technical Solution
[6] According to an embodiment of this document, a method and an apparatus for increasing image/video coding efficiency are provided.
[7] According to an embodiment of this document, a method and an apparatus for hierarchically signaling ALF-related information and/or LMCS-related information are provided.
[8] According to an embodiment of this document, ALF data and/or LMCS data may be conditionally signaled through header information (a picture header or a slice header).
[9] According to an embodiment of this document, ALF data and/or LMCS data may be signaled through an APS, and APS ID information indicating the ID of the referenced APS may be signaled through header information (a picture header or a slice header).
[10] According to an embodiment of this document, the APS ID information referenced for ALF data and the APS ID information referenced for LMCS data may be signaled separately.
[11] According to an embodiment of this document, a video/image decoding method performed by a decoding device is provided.
[12] According to an embodiment of this document, a decoding device that performs video/image decoding is provided.
[13] According to an embodiment of this document, a video/image encoding method performed by an encoding device is provided.
[14] According to an embodiment of this document, an encoding device that performs video/image encoding is provided.
[15] According to an embodiment of this document, a computer-readable digital storage medium storing encoded video/image information generated according to the video/image encoding method disclosed in at least one of the embodiments of this document is provided.
[16] According to an embodiment of this document, a computer-readable digital storage medium storing encoded information or encoded video/image information that causes a decoding device to perform the video/image decoding method disclosed in at least one of the embodiments of this document is provided.
Advantageous Effects
[17] According to an embodiment of this document, overall image/video compression efficiency can be increased.
[18] According to an embodiment of this document, subjective/objective visual quality can be increased through efficient filtering.
[19] According to an embodiment of this document, ALF and/or LMCS can be applied adaptively on a picture, slice, and/or coding block basis.
[20] According to an embodiment of this document, ALF-related information can be signaled efficiently.
[21] According to an embodiment of this document, LMCS-related information can be signaled efficiently.
Brief Description of Drawings
[22] FIG. 1 schematically shows an example of a video/image coding system to which embodiments of this document can be applied.
[23] FIG. 2 is a diagram schematically describing the configuration of a video/image encoding device to which embodiments of this document can be applied.
[24] FIG. 3 is a diagram schematically describing the configuration of a video/image decoding device to which embodiments of this document can be applied.
[25] FIG. 4 exemplarily shows a hierarchical structure for a coded image/video.
[26] FIG. 5 is a flowchart schematically showing an example of an ALF procedure.
[27] FIG. 6 shows examples of filter shapes.
[28] FIG. 7 shows an example of a hierarchical structure of ALF data.
[29] FIG. 8 shows another example of a hierarchical structure of ALF data.
[30] FIG. 9 exemplarily illustrates a hierarchical structure according to an embodiment of this document.
[31] FIG. 10 illustrates a structure.
[32] FIG. 11 illustrates a structure.
[33] FIG. 12 shows a graph representing an exemplary forward mapping.
[34] FIGS. 13 and 14 schematically show an example of a video/image encoding method and related components according to embodiment(s) of this document.
[35] FIGS. 15 and 16 schematically show an example of an image/video decoding method and related components according to an embodiment of this document.
[36] FIG. 17 shows an example of a content streaming system to which the embodiments disclosed in this document can be applied.
발명의실시를위한형태
[37] 본문서의 개시는다양한변경을가할수있고여러가지실시예를가질수있는 바,특정실시예들을도면에 예시하고상세하게설명하고자한다.그러나,이는 본개시를특정실시예에 한정하려고하는것이 아니다.본문서에서사용하는 용어는단지특정한실시예를설명하기 위해사용된것으로,본문서의
실시예들의 기술적사상을한정하려는의도로사용되는것은아니다.단수의 표현은문맥상명백하게다르게뜻하지 않는한,복수의표현을포함한다.본 문서에서 "포함하다”또는 "가지다”등의용어는문서상에기재된특징,숫자, 단계,동작,구성요소,부품또는이들을조합한것이존재함을지정하려는 것이지,하나또는그이상의다른특징들이나숫자,단계,동작,구성요소,부품 또는이들을조합한것들의존재또는부가가능성을미리 배제하지 않는것으로 이해되어야한다.
[38] 한편,본문서에서 설명되는도면상의 각구성들은서로다른특징적인
기능들에 관한설명의 편의를위해독립적으로도시된것으로서,각구성들이 서로별개의하드웨어나별개의소프트웨어로구현된다는것을의미하지는 않는다.예컨대,각구성중두개 이상의구성이 합쳐져하나의구성을이룰수도 있고,하나의구성이복수의구성으로나뉘어질수도있다.각구성이통합 및/또는분리된실시예도본문서의 개시범위에포함된다.
[39] 이하,첨부한도면들을참조하여,본문서의실시예들을설명하고자한다.이하, 도면상의동일한구성요소에 대해서는동일한참조부호를사용할수있고 동일한구성요소에 대해서중복된설명은생략될수있다.
[4이 도 1은본문서의실시예들이 적용될수있는비디오/영상코딩시스템의 예를 개략적으로나타낸다.
[41] 도 1을참조하면,비디오/영상코딩시스템은제 1장치(소스디바이스)및제 2 장치(수신디바이스)를포함할수있다.소스디바이스는인코딩된 비디오 (video)/영상 (image)정보또는데이터를파일또는스트리밍형태로 디지털저장매체또는네트워크를통하여수신디바이스로전달할수있다.
[42] 상기소스디바이스는비디오소스,인코딩장치,전송부를포함할수있다. 상기수신디바이스는수신부,디코딩장치및렌더러를포함할수있다.상기 인코딩장치는비디오/영상인코딩장치라고불릴수있고,상기디코딩장치는 비디오/영상디코딩장치라고불릴수있다.송신기는인코딩장치에포함될수 있다.수신기는디코딩장치에포함될수있다.렌더러는디스플레이부를포함할 수도있고,디스플레이부는별개의디바이스또는외부컴포넌트로구성될수도 있다.
[43] 비디오소스는비디오/영상의캡쳐 ,합성또는생성과정등을통하여
비디오/영상을획득할수있다.비디오소스는비디오/영상캡쳐디바이스 및/또는비디오/영상생성디바이스를포함할수있다.비디오/영상캡쳐 디바이스는예를들어,하나이상의카메라,이전에캡쳐된비디오/영상을 포함하는비디오/영상아카이브등을포함할수있다.비디오/영상생성 디바이스는예를들어컴퓨터,타블렛및스마트폰등을포함할수있으며 (전자적으로)비디오/영상을생성할수있다.예를들어,컴퓨터등을통하여 가상의비디오/영상이생성될수있으며,이경우관련데이터가생성되는 과정으로비디오/영상캡쳐과정이갈음될수있다.
[44] 인코딩장치는입력비디오/영상을인코딩할수있다.인코딩장치는압축및 코딩효율을위하여 예측,변환,양자화등일련의절차를수행할수있다.
인코딩된데이터 (인코딩된영상/비디오정보)는비트스트림 (bitstream)형태로 줄력될수있다.
[45] 전송부는비트스트림형태로출력된인코딩된영상/비디오정보또는
데이터를파일또는스트리밍형태로디지털저장매체또는네트워크를통하여 수신디바이스의수신부로전달할수있다.디지털저장매체는 USB, SD, CD, DVD,블루레이 , HDD, SSD등다양한저장매체를포함할수있다.전송부는 미리정해진파일포멧을통하여미디어파일을생성하기위한엘리먼트를 포함할수있고,방송/통신네트워크를통한전송을위한엘리먼트를포함할수 있다.수신부는상기비트스트림을수신/추출하여디코딩장치로전달할수 있다.
[46] 디코딩장치는인코딩장치의동작에대응하는역양자화,역변환,예측등
일련의절차를수행하여비디오/영상을디코딩할수있다.
[47] 렌더러는디코딩된비디오/영상을렌더링할수있다.렌더링된비디오/영상은 디스플레이부를통하여디스플레이될수있다.
[48] 본문서는비디오/영상코딩에관한것이다.예를들어본문서에서개시된
방법/실시예는 VVC (versatile video coding)표준에개시되는방법에적용될수 있다.또한,본문서에서개시된방법/실시예는 EVC (essential video coding)표준, AVI (AOMedia Video 1)표준, AVS2 (2nd generation of audio video coding standard)또는차세대비디오/영상코딩표준 (ex. H.267 or H.268등)에개시되는 방법에적용될수있다.
[49] 본문서에서는비디오/영상코딩에관한다양한실시예들을제시하며 ,다른 언급이없는한상기실시예들은서로조합되어수행될수도있다.
[5이 본문서에서비디오 (video)는시간의흐름에따른일련의영상 (image)들의
집합을의미할수있다.픽처 (picture)는일반적으로특정시간대의하나의영상을 나타내는단위를의미하며,슬라이스 (slice)/타일 (tile)은코딩에 있어서픽처의 일부를구성하는단위이다.슬라이스/타일은하나이상의 CTU(coding tree unit)을 포함할수있다.하나의픽처는하나이상의슬라이스/타일로구성될수있다. 타일은픽너내특정타일열및특정타일열이내의 CTU들의사각영역이다 (A tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture).상기타일열은 CTU들의사각영역이고,상기사각영역은상기 픽처의높이와동일한높이를갖고,너비는픽처파라미터세트내의신택스 요소들에의하여명시될수있다 (The tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set).상기타일행은 CTU들의사각영역이고, 상기사각영역은픽처파라미터세트내의신택스요소들에의하여명시되는 너비를갖고,높이는상기픽처의높이와동일할수있다 (The tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture).타일스캔은픽처를 파티셔닝하는 CTU들의특정순차적오더링을나타낼수있고,상기 CTU들은 타일내 CTU래스터스캔으로연속적으로정렬될수있고,픽처내타일들은 상기픽처의상기타일들의래스터스캔으로연속적으로정렬될수있다 (A tile scan is a specific sequential ordering of CTUs partitioning a picture in which the CTUs are ordered consecutively in CTU raster scan in a tile whereas tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture).슬라이스는단일 NAL유닛에배타적으로담겨질수있는,정수개의완전한타일들또는픽처의 타일내의정수개의연속적인완전한 CTU행들을포함할수있다 (A slice includes an integer number of complete tiles or an integer number of consecutive complete CTU rows within a tile of a picture that may be exclusively contained in a single NAL unit)
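Paragraph [50] above defines the tile scan as a raster scan of CTUs inside each tile, with the tiles themselves raster-scanned over the picture. The ordering can be illustrated with a small sketch on a toy picture of 4x2 CTUs split into two 2x2-CTU tiles (the function and its parameters are illustrative, not normative).

```python
# Illustrative tile-scan ordering: CTUs are ordered in raster scan inside each
# tile, and tiles are ordered in raster scan over the picture.

def tile_scan_order(pic_w_ctus, pic_h_ctus, tile_w, tile_h):
    order = []
    for tile_row in range(0, pic_h_ctus, tile_h):         # tiles in raster scan
        for tile_col in range(0, pic_w_ctus, tile_w):
            for y in range(tile_row, tile_row + tile_h):  # CTU raster scan in tile
                for x in range(tile_col, tile_col + tile_w):
                    order.append((x, y))
    return order

order = tile_scan_order(4, 2, 2, 2)
```

Note how all four CTUs of the left tile come before any CTU of the right tile, which differs from a plain picture-wide raster scan.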
[51] 한편,하나의픽처는둘이상의서브픽처로구분될수있다.서브픽처는픽처내 하나이상의슬라이스들의사각리전일수있다 (an mctangular mgion of one or more slices within a picture).
[52] 픽셀 (pixel)또는펠 (pel)은하나의픽처 (또는영상)을구성하는최소의단위를 의미할수있다.또한,픽셀에대응하는용어로서샘플 (sample)이사용될수 있다.샘플은일반적으로픽셀또는픽셀의값을나타낼수있으며,루마 (luma) 성분의픽셀/픽셀값만을나타낼수도있고,크로마 (chroma)성분의픽셀/픽셀 값만을나타낼수도있다.
[53] 유닛 (unit)은영상처리의기본단위를나타낼수있다.유닛은픽처의특정영역 및해당영역에관련된정보중적어도하나를포함할수있다.하나의유닛은 하나의루마블록및두개의크로마 (ex. cb, cr)블록을포함할수있다.유닛은 경우에따라서블록 (block)또는영역 (area)등의용어와혼용하여사용될수있다. 일반적인경우, MxN블록은 M개의열과 N개의행으로이루어진샘플들 (또는 샘늘어레이 )또는변환계수 (transform coefficient)들의집합 (또는어레이 )을 포함할수있다.
[54] 본문서에서’’A또는 B(A or B)’’는’’오직 A",’’오직 B"또는’’쇼와 B모두’’를
의미할수있다.달리표현하면,본문서에서” A또는 B(A or B)”는” A및/또는 B(A and/or B)’’으로해석될수있다.예를들어 ,본문서에서 "A, B또는 C(A, B or C)”는 "오직 A", "오직 B", "오직 C",또는” A, B및 C의임의의모든조합 (any combination of A, B and C)”를의미할수있다.
[55] 본문서에서사용되는슬래쉬 (/)나쉼표 (comma)는 "및/또는 (and/or)”을의미할 수있다.예를들어,’’A/B”는 "A및/또는 B”를의미할수있다.이에따라’’A/B”는 "오직 A", "오직 B",또는” A와 B모두”를의미할수있다.예를들어 , "A, B, 는 "A, B또는 C”를의미할수있다.
[56] 본문서에서 "적어도하나의 A및 B(at least one of A and B)”는, "오직 A", "오직 B"또는”A와 B모두’’를의미할수있다.또한,본문서에서’’적어도하나의 A 또는 B(at least one of A or B)”나 "적어도하나의 A및/또는 B(at least one of A and/or B)”라는표현은 "적어도하나의 A및 B(at least one of A and B)”와
동일하게해석될수있다.
[57] 또한,본문서에서 "적어도하나의 A, B및 C(at least one of A, B and C)”는,
"오직 A", "오직 B", "오직 C",또는” A, B및 C의임의의모든조합 (any
combination of A, B and C)’’를의미할수있다.또한,’’적어도하나의 A, B또는 C(at least one of A, B or C)”나 "적어도하나의 A, B및/또는 C(at least one of A, B and/or C)”는 "적어도하나의 A, B및 C(at least one of A, B and C)”를의미할수 있다.
[58] 또한,본문서에서사용되는괄호는”예를들어 (for example)”를의미할수있다. 구체적으로,”예측 (인트라예측)”로표시된경우,”예측”의일례로”인트라 예측’’이제안된것일수있다.달리표현하면본문서의’’예측’’은’’인트라 예측’’으로제한 (limit)되지않고,’’인트라예측’’이’’예측’’의일례로제안될것일 수있다.또한,’’예측 (즉,인트라예측)’’으로표시된경우에도,’’예측’’의일례로 ’’인트라예측’’이제안된것일수있다.
[59] 본문서에서하나의도면내에서개별적으로설명되는기술적특징은,
개별적으로구현될수도있고,동시에구현될수도있다.
[6이 도 2는본문서의실시예들이적용될수있는비디오/영상인코딩장치의
구성을개략적으로설명하는도면이다.이하인코딩장치라함은영상인코딩 장치및/또는비디오인코딩장치를포함할수있다.
[61] 도 2를참조하면,인코딩장치 (200)는영상분할부 (image partitioner, 210),
예즉부 (predictor, 220),레지듀얼처리부 (residual processor, 230),엔트로피 인코딩부 (entropy encoder, 240),가산부 (adder, 250),필터링부 (filter, 260)및 메모리 (memory, 270)를포함하여구성될수있다.예즉부 (220)는인터
예측부 (221)및인트라예측부 (222)를포함할수있다.레지듀얼처리부 (230)는 변환부 (transformer, 232),양자화부 (quantizer 233),역양자화부 (dequantizer 234), 역변환부 (inverse transformer, 235)를포함할수있다.레지듀얼처리부 (230)은 감산부 (subtractor, 231)를더포함할수있다.가산부 (250)는복원부 (reconstructor) 또는복원블록생성부 (recontructged block generator)로불릴수있다.상술한영상 분할부 (210),예측부 (220),레지듀얼처리부 (230),엔트로피인코딩부 (240), 가산부 (250)및필터링부 (260)는실시예에따라하나이상의하드웨어
컴포넌트 (예를들어인코더칩셋또는프로세서)에의하여구성될수있다.또한 메모리 (270)는 DPB(decoded picture buffer)를포함할수있고,디지털저장매체에 의하여구성될수도있다.상기하드웨어컴포넌트는메모리 (270)을내/외부 컴포넌트로더포함할수도있다.
영상분할부 (210)는인코딩장치 (200)에입력된입력영상 (또는,픽쳐 ,
프레임)를하나이상의처리유닛 (processing unit)으로분할할수있다.일예로, 상기처리유닛은코딩유닛 (coding unit, CU)이라고불릴수있다.이경우코딩 유닛은코딩트리유닛 (coding tree unit, CTU)또는최대코딩유닛 (largest coding unit, LCU)으로부터 QTBTTT (Quad-tree binary-tree ternary-tree)구조에따라 재귀적으로 (recursively)분할될수있다.예를들어,하나의코딩유닛은쿼드 트리구조,바이너리트리구조,및/또는터너리구조를기반으로하위 (deeper) 뎁스의복수의코딩유닛들로분할될수있다.이경우예를들어쿼드트리 구조가먼저적용되고바이너리트리구조및/또는터너리구조가나중에적용될 수있다.또는바이너리트리구조가먼저적용될수도있다.더이상분할되지 않는최종코딩유닛을기반으로본문서에따른코딩절차가수행될수있다.이 경우영상특성에따른코딩효율등을기반으로,최대코딩유닛이바로최종 코딩유닛으로사용될수있고,또는필요에따라코딩유닛은
재귀적으로 (recursively)보다하위 뎁스의코딩유닛들로분할되어최적의 사이즈의코딩유닛이최종코딩유닛으로사용될수있다.여기서코딩절차라 함은후술하는예측,변환,및복원등의절차를포함할수있다.다른예로,상기 처리유닛은예즉유닛 (PU: Prediction Unit)또는변환유닛 (TU: Transform Unit)을 더포함할수있다.이경우상기예측유닛및상기변환유닛은각각상술한 최종코딩유닛으로부터분할또는파티셔닝될수있다.상기예측유닛은샘플 예측의단위일수있고,상기변환유닛은변환계수를유도하는단위및/또는 변환계수로부터레지듀얼신호 (residual signal)를유도하는단위일수있다.
[63] 유닛은경우에따라서블록 (block)또는영역 (area)등의용어와혼용하여 사용될수있다.일반적인경우, MxN블록은 M개의열과 N개의행으로 이루어진샘늘들또는변환계수 (transform coefficient)들의집합을나타낼수 있다.샘플은일반적으로픽셀또는픽셀의값을나타낼수있으며,휘도 (luma) 성분의픽셀/픽셀값만을나타낼수도있고,채도 (chroma)성분의픽셀/픽셀 값만을나타낼수도있다.샘플은하나의픽처 (또는영상)을픽셀 (pixel)또는 펠 (pel)에대응하는용어로서사용될수있다.
[64] 인코딩장치 (200)는입력영상신호 (원본블록,원본샘플어레이)에서인터
예측부 (221)또는인트라예측부 (222)로부터출력된예측신호 (예측된블록,예측 샘플어레이 )를감산하여레지듀얼신호 (residual signal,잔여블록,잔여샘플 어레이)를생성할수있고,생성된레지듀얼신호는변환부 (232)로전송된다.이 경우도시된바와같이인코더 (200)내에서입력영상신호 (원본블록,원본샘플 어레이)에서 예측신호 (예측블록,예측샘플어레이)를감산하는유닛은 감산부 (231)라고불릴수있다.예측부는처리대상블록 (이하,현재블록이라 함)에대한예측을수행하고,상기현재블록에대한예측샘플들을포함하는 예측된블록 (predicted block)을생성할수있다.예측부는현재블록또는 CU 단위로인트라예측이적용되는지또는인터예측이적용되는지결정할수있다. 예측부는각예측모드에대한설명에서후술하는바와같이예측모드정보등 예측에관한다양한정보를생성하여엔트로피인코딩부 (240)로전달할수있다. 예측에관한정보는엔트로피인코딩부 (240)에서인코딩되어비트스트림형태로 줄력될수있다.
[65] 인트라예측부 (222)는현재픽처내의샘플들을참조하여현재블록을예측할 수있다.상기참조되는샘플들은예측모드에따라상기현재블록의
주변 (neighbor)에위치할수있고,또는떨어져서위치할수도있다.인트라 예측에서 예측모드들은복수의비방향성모드와복수의방향성모드를포함할 수있다.비방향성모드는예를들어 DC모드및플래너모드 (Planar모드)를 포함할수있다.방향성모드는예측방향의세밀한정도에따라예를들어
33개의방향성예측모드또는 65개의방향성 예측모드를포함할수있다.다만, 이는예시로서설정에따라그이상또는그이하의개수의방향성 예측
모드들이사용될수있다.인트라예측부 (222)는주변블록에적용된예측모드를 이용하여,현재블록에적용되는예측모드를결정할수도있다.
[66] 인터예측부 (221)는참조픽처상에서움직임벡터에의해특정되는참조
블록 (참조샘플어레이)을기반으로,현재블록에대한예측된블록을유도할수 있다.이때,인터 예측모드에서전송되는움직임정보의양을줄이기위해주변 블록과현재블록간의움직임정보의상관성에기초하여움직임정보를블록, 서브블록또는샘플단위로예측할수있다.상기움직임정보는움직임벡터및 참조픽처인덱스를포함할수있다.상기움직임정보는인터 예측방향 (L0예측, L1예측, Bi예측등)정보를더포함할수있다.인터예측의경우에,주변블록은 현재픽처내에존재하는공간적주변블록 (spatial neighboring block)과참조 픽처에존재하는시간적주변블록 (temporal neighboring block)을포함할수있다. 상기참조블록을포함하는참조픽처와상기시간적주변블록을포함하는참조 픽처는동일할수도있고,다를수도있다.상기시간적주변블록은동일위치 참조블록 (collocated reference block),동일위치 CU(colCU)등의이름으로불릴 수있으며 ,상기시간적주변블록을포함하는참조픽처는동일위치
픽처 (collocated picture, colPic)라고불릴수도있다.예를들어 ,인터
예측부 (221)는주변블록들을기반으로움직임정보후보리스트를구성하고, 상기현재블록의움직임벡터및/또는참조픽처인덱스를도출하기위하여 어떤후보가사용되는지를지시하는정보를생성할수있다.다양한예측모드를 기반으로인터예측이수행될수있으며,예를들어스킵모드와머지모드의 경우에,인터 예측부 (221)는주변블록의움직임정보를현재블록의움직임 정보로이용할수있다.스킵모드의경우,머지모드와달리레지듀얼신호가 전송되지않을수있다.움직임정보예즉 (motion vector prediction, MVP)모드의 경우,주변블록의움직임벡터를움직임벡터예즉자 (motion vector predictor)로 이용하고,움직임벡터차분 (motion vector difference)을시그널링함으로써현재 블록의움직임벡터를지시할수있다.
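The MVP mode described in paragraph [66] can be illustrated with a minimal sketch: the motion vector of a neighboring block serves as the motion vector predictor (MVP), only the motion vector difference (MVD) is signaled, and the decoder reconstructs the motion vector as mv = mvp + mvd (toy integer vectors, not the normative process).

```python
# Minimal MVP-mode illustration: reconstruct a motion vector from a predictor
# taken from a neighboring block plus a signaled difference.

def reconstruct_mv(mvp, mvd):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mv = reconstruct_mv(mvp=(12, -4), mvd=(-2, 3))  # → (10, -1)
```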
[67] 예측부 (220)는후술하는다양한예측방법을기반으로예측신호를생성할수 있다.예를들어,예측부는하나의블록에대한예측을위하여인트라예측또는 인터 예측을적용할수있을뿐아니라,인트라예측과인터 예측을동시에 적용할수있다.이는 combined inter and intra prediction (〔고 라고불릴수있다. 또한,예즉부는블록에대한예즉을위하여인트라블록카피 (intra block copy, IBC)예측모드에기반할수도있고또는팔레트모드 (palette mode)에기반할 수도있다.상기 IBC예측모드또는팔레트모드는예를들어 SCC(screen content coding)등과같이게임등의컨텐츠영상/동영상코딩을위하여사용될수있다. IBC는기본적으로현재픽처내에서예측을수행하나현재픽처내에서참조 블록을도출하는점에서인터예측과유사하게수행될수있다.즉, IBC는본 문서에서설명되는인터 예측기법들중적어도하나를이용할수있다.팔레트 모드는인트라코딩또는인트라예측의일예로볼수있다.팔레트모드가 적용되는경우팔레트테이블및팔레트인덱스에관한정보를기반으로픽처내 샘플값을시그널링할수있다.
[68] 상기예측부 (인터 예측부 (221)및/또는상기인트라예측부 (222)포함)를통해 생성된예측신호는복원신호를생성하기위해이용되거나레지듀얼신호를 생성하기위해이용될수있다.변환부 (232)는레지듀얼신호에변환기법을 적용하여변환계수들 (transform coefficients)를생성할수있다.예를들어,변환 기법은 DCT (Discrete Cosine Transform), DST(Discrete Sine Transform),
GBT(Graph-Based Transform),또는 CNT (Conditionally Non-linear Transform)중 적어도하나를포함할수있다.여기서, GBT는픽셀간의관계정보를그래프로 표현한다고할때이그래프로부터얻어진변환을의미한다. CNT는이전에 복원된모든픽셀 (all previously reconstructed pixel)를이용하여 예즉신호를 생성하고그에기초하여획득되는변환을의미한다.또한,변환과정은 정사각형의동일한크기를갖는픽셀블록에적용될수도있고,정사각형이아닌 가변크기의블록에도적용될수있다.
[69] 양자화부 (233)는변환계수들을양자화하여엔트로피인코딩부 (240)로
전송되고,엔트로피인코딩부 (240)는양자화된신호 (양자화된변환계수들에 관한정보)를인코딩하여비트스트림으로출력할수있다.상기양자화된변환 계수들에관한정보는레지듀얼정보라고불릴수있다.양자화부 (233)는계수 스캔순서 (scan order)를기반으로블록형태의양자화된변환계수들을 1차원 벡터형태로재정렬할수있고,상기 1차원벡터형태의양자화된변환계수들을 기반으로상기양자화된변환계수들에관한정보를생성할수도있다.엔트로피 인코딩부 (240)는예를들어지수골롬 (exponential Golomb),
CAVLC(context-adaptive variable length coding), CABAC(context-adaptive binary arithmetic coding)등과같은다양한인코딩방법을수행할수있다.엔트로피 인코딩부 (240)는양자화된변환계수들외비디오/이미지복원에필요한 정보들 (예컨대신택스요소들 (syntax elements)의값등)을함께또는별도로 인코딩할수도있다.인코딩된정보 (ex.인코딩된영상/비디오정보)는 비트스트림형태로 NAL(network abstraction layer)유닛단위로전송또는저장될 수있다.상기영상/비디오정보는어맵테이션파라미터세트 (APS),픽처 파라미터세트 (PPS),시퀀스파라미터세트 (SPS)또는비디오파라미터 세트 (VPS)등다양한파라미터세트에관한정보를더포함할수있다.또한상기 영상/비디오정보는일반제한정보 (general constraint information)을더포함할수 있다.본문서에서인코딩장치에서디코딩장치로전달/시그널링되는정보 및/또는신택스요소들은영상/비디오정보에포함될수있다.상기영상/비디오 정보는상술한인코딩절차를통하여인코딩되어상기비트스트림에포함될수 있다.상기비트스트림은네트워크를통하여전송될수있고,또는디지털 저장매체에저장될수있다.여기서네트워크는방송망및/또는통신망등을 포함할수있고,디지털저장매체는 USB, SD, CD, DVD,블루레이 , HDD, SSD등 다양한저장매체를포함할수있다.엔트로피인코딩부 (240)로부터출력된 신호는전송하는전송부 (미도시)및/또는저장하는저장부 (미도시)가인코딩 장치 (200)의내/외부엘리먼트로서구성될수있고,또는전송부는엔트로피 인코딩부 (240)에포함될수도있다.
P이 양자화부 (233)로부터출력된양자화된변환계수들은예측신호를생성하기 위해이용될수있다.예를들어 ,양자화된변환계수들에역양자화부 (234)및 역변환부 (235)를통해역양자화및역변환을적용함으로써레지듀얼
신호 (레지듀얼블록 or레지듀얼샘플들)를복원할수있다.가산부 (250)는 복원된레지듀얼신호를인터예측부 (221)또는인트라예측부 (222)로부터 출력된예측신호에더함으로써복원 (reconstructed)신호 (복원픽처,복원블록, 복원샘플어레이)가생성될수있다.스킵모드가적용된경우와같이처리대상 블록에대한레지듀얼이없는경우,예측된블록이복원블록으로사용될수 있다.가산부 (250)는복원부또는복원블록생성부라고불릴수있다.생성된 복원신호는현재픽처내다음처리대상블록의인트라예측을위하여사용될 수있고,후술하는바와같이필터링을거쳐서다음픽처의인터 예측을위하여 사용될수도있다.
PI] 한편픽처인코딩및/또는복원과정에서 LMCS (luma mapping with chroma scaling)가적용될수도있다.
2] 필터링부 (260)는복원신호에필터링을적용하여주관적/객관적화질을
향상시킬수있다.예를들어필터링부 (260)은복원픽처에다양한필터링방법을 적용하여수정된 (modified)복원픽처를생성할수있고,상기수정된복원 픽처를메모리 (270),구체적으로메모리 (270)의 DPB에저장할수있다.상기 다양한필터링방법은예를들어,디블록킹필터링,샘플적응적오프셋 (sample adaptive offset),적응적루프필터 (adaptive loop filter),양방향필터 (bilateral filter) 등을포함할수있다.필터링부 (260)은각필터링방법에대한설명에서후술하는 바와같이필터링에관한다양한정보를생성하여엔트로피인코딩부 (240)로 전달할수있다.필터링관한정보는엔트로피인코딩부 (240)에서인코딩되어 비트스트림형태로출력될수있다.
3] 메모리 (270)에전송된수정된복원픽처는인터예측부 (221)에서참조픽처로 사용될수있다.인코딩장치는이를통하여인터 예측이적용되는경우,인코딩 장치 (200)와디코딩장치에서의예측미스매치를피할수있고,부호화효율도 향상시킬수있다.
4] 메모리 (270) DPB는수정된복원픽처를인터예측부 (221)에서의참조픽처로 사용하기위해저장할수있다.메모리 (270)는현재픽처내움직임정보가 도출된 (또는인코딩된)블록의움직임정보및/또는이미복원된픽처내 블록들의움직임정보를저장할수있다.상기저장된움직임정보는공간적 주변블록의움직임정보또는시간적주변블록의움직임정보로활용하기 위하여인터예측부 (221)에전달할수있다.메모리 (270)는현재픽처내복원된 블록들의복원샘플들을저장할수있고,인트라예측부 (222)에전달할수있다. 5] 도 3은본문서의실시예들이적용될수있는비디오/영상디코딩장치의
구성을개략적으로설명하는도면이다.이하디코딩장치라함은영상디코딩 장치및/또는비디오디코딩장치를포함할수있다.
6] 도 3을참조하면,디코딩장치 (300)는엔트로피디코딩부 (entropy decoder, 310), 레지듀얼처리부 (residual processor, 320),예즉부 (predictor, 330),가산부 (adder, 340),필터링부 (filter, 350)및메모리 (memoery, 360)를포함하여구성될수있다. 예측부 (330)는인터 예측부 (331)및인트라예측부 (332)를포함할수있다.
레지듀얼처리부 (320)는역양자화부 (dequantizer, 321)및역변환부 (inverse transformer, 321)를포함할수있다.상술한엔트로피디코딩부 (310),레지듀얼 처리부 (320),예측부 (330),가산부 (340)및필터링부 (350)는실시예에따라하나의 하드웨어컴포넌트 (예를들어디코더칩셋또는프로세서)에의하여구성될수 있다.또한메모리 (360)는 DPB(decoded picture buffer)를포함할수있고,디지털 저장매체에의하여구성될수도있다.상기하드웨어컴포넌트는메모리 (360)을 내/외부컴포넌트로더포함할수도있다.
7] 영상/비디오정보를포함하는비트스트림이입력되면,디코딩장치 (300)는도
2의인코딩장치에서영상/비디오정보가처리된프로세스에대응하여영상을 복원할수있다.예를들어,디코딩장치 (300)는상기비트스트림으로부터획득한 블록분할관련정보를기반으로유닛들/블록들을도출할수있다.디코딩 장치 (300)는인코딩장치에서적용된처리유닛을이용하여디코딩을수행할수 있다.따라서디코딩의처리유닛은예를들어코딩유닛일수있고,코딩유닛은 코딩트리유닛또는최대코딩유닛으로부터쿼드트리구조,바이너리트리 구조및/또는터너리트리구조를따라서분할될수있다.코딩유닛으로부터 하나이상의변환유닛이도출될수있다.그리고,디코딩장치 (300)를통해 디코딩및출력된복원영상신호는재생장치를통해재생될수있다.
[78] 디코딩장치 (300)는도 2의인코딩장치로부터출력된신호를비트스트림
형태로수신할수있고,수신된신호는엔트로피디코딩부 (310)를통해디코딩될 수있다.예를들어,엔트로피디코딩부 (3 W)는상기비트스트림을파싱하여영상 복원 (또는픽처복원)에필요한정보 (ex.영상/비디오정보)를도출할수있다. 상기영상/비디오정보는어맵테이션파라미터세트 (APS),픽처파라미터 세트 (PPS),시퀀스파라미터세트 (SPS)또는비디오파라미터세트 (VPS)등 다양한파라미터세트에관한정보를더포함할수있다.또한상기영상/비디오 정보는일반제한정보 (general constraint information)을더포함할수있다.
디코딩장치는상기파라미터세트에관한정보및/또는상기일반제한정보를 더기반으로픽처를디코딩할수있다.본문서에서후술되는시그널링/수신되는 정보및/또는신택스요소들은상기디코딩절차를통하여디코딩되어상기 비트스트림으로부터획득될수있다.예컨대,엔트로피디코딩부 (3W)는지수 골롬부호화, CAVLC또는 CABAC등의코딩방법을기초로비트스트림내 정보를디코딩하고,영상복원에필요한신택스엘리먼트의값,레지듀얼에관한 변환계수의양자화된값들을출력할수있다.보다상세하게, CABAC엔트로피 디코딩방법은,비트스트림에서각구문요소에해당하는빈을수신하고,디코딩 대상구문요소정보와주변및디코딩대상블록의디코딩정보혹은이전 단계에서디코딩된심볼/빈의정보를이용하여문맥 (context)모델을결정하고, 결정된문맥모델에따라빈 (bin)의발생확률을예측하여빈의산술
디코딩 (arithmetic decoding)를수행하여각구문요소의값에해당하는심볼을 생성할수있다.이때, CABAC엔트로피디코딩방법은문맥모델결정후다음 심볼/빈의문맥모델을위해디코딩된심볼/빈의정보를이용하여문맥모델을 업데이트할수있다.엔트로피디코딩부 (3 W)에서디코딩된정보중예측에관한 정보는예측부 (인터예측부 (332)및인트라예측부 (331))로제공되고,엔트로피 디코딩부 (3W)에서엔트로피디코딩이수행된레지듀얼값,즉양자화된변환 계수들및관련파라미터정보는레지듀얼처리부 (320)로입력될수있다.
레지듀얼처리부 (320)는레지듀얼신호 (레지듀얼블록,레지듀얼샘플들, 레지듀얼샘플어레이)를도출할수있다.또한,엔트로피디코딩부 (310)에서 디코딩된정보중필터링에관한정보는필터링부 (350)으로제공될수있다. 한편,인코딩장치로부터출력된신호를수신하는수신부 (미도시)가디코딩 장치 (300)의내/외부엘리먼트로서더구성될수있고,또는수신부는엔트로피 디코딩부 (3 W)의구성요소일수도있다.한편,본문서에따른디코딩장치는 비디오/영상/픽처디코딩장치라고불릴수있고,상기디코딩장치는정보 디코더 (비디오/영상/픽처정보디코더)및샘플디코더 (비디오/영상/픽처샘플 디코더)로구분할수도있다.상기정보디코더는상기엔트로피
디코딩부 (3 W)를포함할수있고,상기샘플디코더는상기역양자화부 (321), 역변환부 (322),가산부 (340),필터링부 (350),메모리 (360),인터 예측부 (332)및 인트라예측부 (331)중적어도하나를포함할수있다.
9] 역양자화부 (321)에서는양자화된변환계수들을역양자화하여변환계수들을 출력할수있다.역양자화부 (321)는양자화된변환계수들을 2차원의블록 형태로재정렬할수있다.이경우상기재정렬은인코딩장치에서수행된계수 스캔순서를기반하여재정렬을수행할수있다.역양자화부 (321)는양자화 파라미터 (예를들어양자화스텝사이즈정보)를이용하여양자화된변환 계수들에대한역양자화를수행하고,변환계수들 (transform coefficient)를획득할 수있다.
[8이 역변환부 (322)에서는변환계수들를역변환하여레지듀얼신호 (레지듀얼블록, 레지듀얼샘플어레이)를획득하게된다.
[81] 예측부는현재블록에대한예측을수행하고,상기현재블록에대한예측
샘플들을포함하는예측된블록 (predicted block)을생성할수있다.예측부는 엔트로피디코딩부 (310)로부터출력된상기 예측에관한정보를기반으로상기 현재블록에인트라예측이적용되는지또는인터 예측이적용되는지결정할수 있고,구체적인인트라/인터예측모드를결정할수있다.
[82] 예측부 (320)는후술하는다양한예측방법을기반으로예측신호를생성할수 있다.예를들어,예측부는하나의블록에대한예측을위하여인트라예측또는 인터 예측을적용할수있을뿐아니라,인트라예측과인터 예측을동시에 적용할수있다.이는 combined inter and intra prediction (〔고 라고불릴수있다. 또한,예즉부는블록에대한예즉을위하여인트라블록카피 (intra block copy, IBC)예측모드에기반할수도있고또는팔레트모드 (palette mode)에기반할 수도있다.상기 IBC예측모드또는팔레트모드는예를들어 SCC(screen content coding)등과같이게임등의컨텐츠영상/동영상코딩을위하여사용될수있다. IBC는기본적으로현재픽처내에서예측을수행하나현재픽처내에서참조 블록을도출하는점에서인터예측과유사하게수행될수있다.즉, IBC는본 문서에서설명되는인터 예측기법들중적어도하나를이용할수있다.팔레트 모드는인트라코딩또는인트라예측의일예로볼수있다.팔레트모드가 적용되는경우팔레트테이블및팔레트인덱스에관한정보가상기영상/비디오 정보에포함되어시그널링될수있다.
[83] 인트라예측부 (331)는현재픽처내의샘플들을참조하여현재블록을예측할 수있다.상기참조되는샘플들은예측모드에따라상기현재블록의
주변 (neighbor)에위치할수있고,또는떨어져서위치할수도있다.인트라 예측에서 예측모드들은복수의비방향성모드와복수의방향성모드를포함할 수있다.인트라예측부 (331)는주변블록에적용된예측모드를이용하여,현재 블록에적용되는예측모드를결정할수도있다.
[84] 인터예측부 (332)는참조픽처상에서움직임벡터에의해특정되는참조
블록 (참조샘플어레이)을기반으로,현재블록에대한예측된블록을유도할수 있다.이때,인터 예측모드에서전송되는움직임정보의양을줄이기위해주변 블록과현재블록간의움직임정보의상관성에기초하여움직임정보를블록, 서브블록또는샘플단위로예측할수있다.상기움직임정보는움직임벡터및 참조픽처인덱스를포함할수있다.상기움직임정보는인터 예측방향 (L0예측, L1예측, Bi예측등)정보를더포함할수있다.인터예측의경우에,주변블록은 현재픽처내에존재하는공간적주변블록 (spatial neighboring block)과참조 픽처에존재하는시간적주변블록 (temporal neighboring block)을포함할수있다. 예를들어,인터예측부 (332)는주변블록들을기반으로움직임정보후보 리스트를구성하고,수신한후보선택정보를기반으로상기현재블록의움직임 벡터및/또는참조픽처인덱스를도출할수있다.다양한예측모드를기반으로 인터 예측이수행될수있으며,상기 예측에관한정보는상기현재블록에대한 인터 예측의모드를지시하는정보를포함할수있다.
[85] 가산부 (340)는획득된레지듀얼신호를예측부 (인터예측부 (332)및/또는
인트라예측부 (331)포함)로부터출력된예측신호 (예측된블록,예측샘플 어레이)에더함으로써복원신호 (복원픽처,복원블록,복원샘플어레이)를 생성할수있다.스킵모드가적용된경우와같이처리대상블록에대한 레지듀얼이없는경우,예측된블록이복원블록으로사용될수있다.
[86] 가산부 (340)는복원부또는복원블록생성부라고불릴수있다.생성된복원 신호는현재픽처내다음처리대상블록의인트라예측을위하여사용될수 있고,후술하는바와같이필터링을거쳐서출력될수도있고또는다음픽처의 인터 예측을위하여사용될수도있다.
[87] 한편,픽처디코딩과정에서 LMCS (luma mapping with chroma scaling)가적용될 수도있다.
[88] 필터링부 (350)는복원신호에필터링을적용하여주관적/객관적화질을
향상시킬수있다.예를들어필터링부 (350)는복원픽처에다양한필터링방법을 적용하여수정된 (modified)복원픽처를생성할수있고,상기수정된복원 픽처를메모리 (360),구체적으로메모리 (360)의 DPB에전송할수있다.상기 다양한필터링방법은예를들어,디블록킹필터링,샘플적응적오프셋 (sample adaptive offset),적응적루프필터 (adaptive loop filter),양방향필터 (bilateral filter) 등을포함할수있다.
[89] 메모리 (360)의 DPB에저장된 (수정된)복원픽처는인터 예측부 (332)에서참조 픽쳐로사용될수있다.메모리 (360)는현재픽처내움직임정보가도출된 (또는 디코딩된)블록의움직임정보및/또는이미복원된픽처내블록들의움직임 정보를저장할수있다.상기저장된움직임정보는공간적주변블록의움직임 정보또는시간적주변블록의움직임정보로활용하기위하여인터
예측부 (260)에전달할수있다.메모리 (360)는현재픽처내복원된블록들의 복원샘플들을저장할수있고,인트라예측부 (331)에전달할수있다.
[9이 본문서에서,인코딩장치 (200)의필터링부 (260),인터예측부 (221)및인트라 예측부 (222)에서설명된실시예들은각각디코딩장치 (300)의필터링부 (350), 인터 예측부 (332)및인트라예측부 (331)에도동일또는대응되도록적용될수 있다
[91] 상술한바와같이비디오코딩을수행함에 있어압축효율을높이기위하여 예측을수행한다.이를통하여코딩대상블록인현재블록에대한예측 샘플들을포함하는예측된블록을생성할수있다.여기서상기예측된블록은 공간도메인 (또는픽셀도메인)에서의 예측샘플들을포함한다.상기예측된 블록은인코딩장치및디코딩장치에서동일하게도출되며,상기인코딩장치는 원본블록의원본샘플값자체가아닌상기원본블록과상기 예측된블록간의 레지듀얼에대한정보 (레지듀얼정보)를디코딩장치로시그널링함으로써영상 코딩효율을높일수있다.디코딩장치는상기레지듀얼정보를기반으로 레지듀얼샘플들을포함하는레지듀얼블록을도출하고,상기레지듀얼블록과 상기 예측된블록을합하여복원샘플들을포함하는복원블록을생성할수 있고,복원블록들을포함하는복원픽처를생성할수있다.
[92] 상기레지듀얼정보는변환및양자화절차를통하여생성될수있다.예를 들어 ,인코딩장치는상기원본블록과상기예측된블록간의레지듀얼블록을 도출하고,상기레지듀얼블록에포함된레지듀얼샘플들 (레지듀얼샘플 어레이)에변환절차를수행하여변환계수들을도출하고,상기변환계수들에 양자화절차를수행하여양자화된변환계수들을도출하여관련된레지듀얼 정보를 (비트스트림을통하여)디코딩장치로시그널링할수있다.여기서상기 레지듀얼정보는상기양자화된변환계수들의값정보,위치정보,변환기법, 변환커널,양자화파라미터등의정보를포함할수있다.디코딩장치는상기 레지듀얼정보를기반으로역양자화/역변환절차를수행하고레지듀얼 샘플들 (또는레지듀얼블록)을도출할수있다.디코딩장치는예측된블록과 상기레지듀얼블록을기반으로복원픽처를생성할수있다.인코딩장치는 또한이후픽처의인터예측을위한참조를위하여양자화된변환계수들을 역양자화/역변환하여레지듀얼블록을도출하고,이를기반으로복원픽처를 생성할수있다.
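The residual pipeline of paragraphs [91]-[92] can be sketched as a toy roundtrip: the encoder signals only the quantized residual between the original and predicted blocks, and the decoder adds the reconstructed residual back to the prediction. The transform is omitted here and a simple uniform quantizer with step size `qstep` stands in for the real transform/quantization design; all names are illustrative.

```python
# Toy residual coding roundtrip: quantized residual levels are "signaled",
# then dequantized and added back to the prediction at the decoder.

def encode_residual(orig, pred, qstep):
    return [round((o - p) / qstep) for o, p in zip(orig, pred)]   # quantize

def decode_block(pred, levels, qstep):
    return [p + l * qstep for p, l in zip(pred, levels)]          # dequantize + add

orig, pred, qstep = [100, 104, 98], [96, 100, 100], 2
levels = encode_residual(orig, pred, qstep)
recon = decode_block(pred, levels, qstep)
```

With residuals that are exact multiples of the step size, the reconstruction equals the original; in general quantization introduces a bounded error, which is the compression trade-off the paragraph describes.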
[93] 인트라예측은현재블록이속하는픽처 (이하,현재픽처)내의참조샘플들을 기반으로현재블록에대한예측샘플들을생성하는예측을나타낼수있다. 현재블록에인트라예측이적용되는경우,현재블록의인트라예측에사용할 주변참조샘플들이도출될수있다.상기현재블록의주변참조샘플들은 nWxnH크기의현재블록의좌측 (left)경계에인접한샘플및
좌하즉 (bottom-left)에이웃하는종 2xnH개의샘플들,현재블록의상즉 (top) 경계에인접한샘플및우상측 (top-right)에이웃하는총 2xnW개의샘플들및 현재블록의좌상측 (top-left)에이웃하는 1개의샘플을포함할수있다.또는, 상기현재블록의주변참조샘플들은복수열의상측주변샘플들및복수행의 좌측주변샘플들을포함할수도있다.또한,상기현재블록의주변참조 샘플들은 nWxnH크기의현재블록의우측 (right)경계에인접한총 nH개의 샘플들,현재블록의하측 (bottom)경계에인접한총 nW개의샘플들및현재 블록의우하측 (bottom-right)에이웃하는 1개의샘플을포함할수도있다.
[94] 다만,현재블록의주변참조샘플들중일부는아직디코딩되지않았거나,이용 가능하지않을수있다.이경우,디코더는이용가능한샘플들로이용가능하지 않은샘플들을대체 (substitution)하여예즉에사용할주변참조샘플들을구성할 수있다.또는,이용가능한샘플들의보간 (interpolation)을통하여 예즉에사용할 주변참조샘플들을구성할수있다.
[95] 주변참조샘플들이도출된경우, (i)현재블록의주변 (neighboring)참조
샘플들의평균 (average)혹은인터폴레이션 (interpolation)을기반으로예즉 샘플을유도할수있고, (ii)현재블록의주변참조샘플들중예측샘플에대하여 특정 (예측)방향에존재하는참조샘플을기반으로상기 예측샘플을유도할 수도있다. (i)의경우는비방향성 (non-directional)모드또는비각도 (non-angular) 모드, (ii)의경우는방향성 (directional)모드또는각도 (angular)모드라고불릴수 있다.
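Case (i) of paragraph [95], a non-directional prediction from the average of neighboring reference samples, can be sketched as a DC-like predictor. This is a simplification: real codecs define exactly which neighbors enter the average and how rounding is performed.

```python
# DC-like non-directional intra prediction sketch: every prediction sample is
# the average of the neighboring reference samples.

def dc_predict(ref_samples, width, height):
    dc = round(sum(ref_samples) / len(ref_samples))
    return [[dc] * width for _ in range(height)]

pred = dc_predict([100, 102, 98, 100], width=2, height=2)
```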
[96] 또한,상기주변참조샘플들중상기현재블록의 예측샘플을기준으로상기 현재블록의인트라예측모드의 예측방향에위치하는제 1주변샘플과상기 예측방향의반대방향에위치하는제 2주변샘플과의보간을통하여상기 예측 샘플이생성될수도있다.상술한경우는선형보간인트라예측 (Linear interpolation intra prediction, LIP)이라고불릴수있다.또한,선형모델 (linear model)을이용하여루마샘플들을기반으로크로마예측샘플들이생성될수도 있다.이경우는 LM모드라고불릴수있다.
[97] 또한,필터링된주변참조샘플들을기반으로상기현재블록의임시예측
샘플을도출하고,상기기존의주변참조샘플들,즉,필터링되지않은주변참조 샘플들중상기인트라예측모드에따라도출된적어도하나의참조샘플과 상기임시예측샘플을가중합 (weighted sum)하여상기현재블록의예측샘플을 도줄할수도있다.상술한경우는 PDPC(Position dependent intra prediction)라고 불릴수있다.
[98] 또한,현재블록의주변다중참조샘플라인중가장예측정확도가높은참조 샘플라인을선택하여해당라인에서 예측방향에위치하는참조샘플을 이용하여 예측샘플을도출하고이때,사용된참조샘플라인을디코딩장치에 지시 (시그널링)하는방법으로인트라예측부호화를수행할수있다.상술한 경우는다중참조라인 (multi-reference line)인트라예즉또는 MRL기반인트라 예즉이라고불릴수있다.
[99] 또한,현재블록을수직또는수평의서브파티션들로나누어동일한인트라 예측모드를기반으로인트라예측을수행하되,상기서브파티션단위로주변 참조샘플들을도출하여이용할수있다.즉,이경우현재블록에대한인트라 예측모드가상기서브파티션들에동일하게적용되되,상기서브파티션단위로 주변참조샘플을도출하여이용함으로써경우에따라인트라예측성능을높일 수있다.이러한예즉방법은 ISP (intra sub-partitions)기반인트라예즉이라고 불릴수있다.
[100] 상술한인트라예측방법들은인트라예측모드와구분하여인트라예측
타입이라고불릴수있다.상기인트라예측타입은인트라예측기법또는부가 인트라예측모드등다양한용어로불릴수있다.예를들어상기인트라예측 타입 (또는부가인트라예측모드등)은상술한 LIP, PDPC, MRL, ISP중적어도 하나를포함할수있다.상기 LIP, PDPC, MRL, ISP등의특정인트라예측타입을 제외한일반인트라예측방법은노멀인트라예측타입이라고불릴수있다. 노멀인트라예측타입은상기와같은특정인트라예측타입이적용되지않는 경우일반적으로적용될수있으며,상술한인트라예측모드를기반으로예측이 수행될수있다.한편,필요에따라서도출된예측샘플에대한후처리필터링이 수행될수도있다.
[101] 구체적으로,인트라예측절차는인트라예측모드/타입결정단계,주변참조 샘플도출단계,인트라예측모드/타입기반예측샘플도출단계를포함할수 있다.또한,필요에따라서도출된예측샘플에대한후처리필터링 (post-filtering) 단계가수행될수도있다.
[102] 인트라 예측이 적용되는 경우, 주변 블록의 인트라 예측 모드를 이용하여 현재 블록에 적용되는 인트라 예측 모드가 결정될 수 있다. 예를 들어, 디코딩 장치는 현재 블록의 주변 블록(ex. 좌측 및/또는 상측 주변 블록)의 인트라 예측 모드 및 추가적인 후보 모드들을 기반으로 도출된 MPM(most probable mode) 리스트 내 MPM 후보들 중 하나를 수신된 MPM 인덱스를 기반으로 선택할 수 있으며, 또는 상기 MPM 후보들(및 플래너 모드)에 포함되지 않은 나머지 인트라 예측 모드들 중 하나를 리메이닝 인트라 예측 모드 정보를 기반으로 선택할 수 있다. 상기 MPM 리스트는 플래너 모드를 후보로 포함하거나 포함하지 않도록 구성될 수 있다. 예를 들어, 상기 MPM 리스트가 플래너 모드를 후보로 포함하는 경우 상기 MPM 리스트는 6개의 후보를 가질 수 있고, 상기 MPM 리스트가 플래너 모드를 후보로 포함하지 않는 경우 상기 MPM 리스트는 5개의 후보를 가질 수 있다. 상기 MPM 리스트가 플래너 모드를 후보로 포함하지 않는 경우 현재 블록의 인트라 예측 모드가 플래너 모드가 아닌지 나타내는 낫 플래너 플래그(ex. intra_luma_not_planar_flag)가 시그널링될 수 있다. 예를 들어, MPM 플래그가 먼저 시그널링되고, MPM 인덱스 및 낫 플래너 플래그는 MPM 플래그의 값이 1인 경우 시그널링될 수 있다. 또한, 상기 MPM 인덱스는 상기 낫 플래너 플래그의 값이 1인 경우 시그널링될 수 있다. 여기서, 상기 MPM 리스트가 플래너 모드를 후보로 포함하지 않도록 구성되는 것은, 상기 플래너 모드가 MPM이 아니라는 것이라기보다는, MPM으로 항상 플래너 모드가 고려되기에 먼저 플래그(not planar flag)를 시그널링하여 플래너 모드인지 여부를 먼저 확인하기 위함이다.
[103] 예를 들어, 현재 블록에 적용되는 인트라 예측 모드가 MPM 후보들(및 플래너 모드) 중에 있는지, 아니면 리메이닝 모드 중에 있는지는 MPM 플래그(ex. intra_luma_mpm_flag)를 기반으로 지시될 수 있다. MPM 플래그의 값 1은 상기 현재 블록에 대한 인트라 예측 모드가 MPM 후보들(및 플래너 모드) 내에 있음을 나타낼 수 있으며, MPM flag의 값 0은 상기 현재 블록에 대한 인트라 예측 모드가 MPM 후보들(및 플래너 모드) 내에 없음을 나타낼 수 있다. 상기 낫 플래너 플래그(ex. intra_luma_not_planar_flag) 값 0은 상기 현재 블록에 대한 인트라 예측 모드가 플래너 모드임을 나타낼 수 있고, 상기 낫 플래너 플래그 값 1은 상기 현재 블록에 대한 인트라 예측 모드가 플래너 모드가 아님을 나타낼 수 있다. 상기 MPM 인덱스는 mpm_idx 또는 intra_luma_mpm_idx 신택스 요소의 형태로 시그널링될 수 있고, 상기 리메이닝 인트라 예측 모드 정보는 rem_intra_luma_pred_mode 또는 intra_luma_mpm_remainder 신택스 요소의 형태로 시그널링될 수 있다. 예를 들어, 상기 리메이닝 인트라 예측 모드 정보는 전체 인트라 예측 모드들 중 상기 MPM 후보들(및 플래너 모드)에 포함되지 않는 나머지 인트라 예측 모드들을 예측 모드 번호 순으로 인덱싱하여 그 중 하나를 가리킬 수 있다. 상기 인트라 예측 모드는 루마 성분(샘플)에 대한 인트라 예측 모드일 수 있다. 인트라 예측 모드 정보는 상기 MPM 플래그, 상기 낫 플래너 플래그, 상기 MPM 인덱스, 상기 리메이닝 인트라 예측 모드 정보 중 적어도 하나를 포함할 수 있다. 본 문서에서 MPM 리스트는 MPM 후보 리스트, candModeList 등 다양한 용어로 불릴 수 있다. MIP가 현재 블록에 적용되는 경우, MIP를 위한 별도의 MPM 플래그(ex. intra_mip_mpm_flag), MPM 인덱스(ex. intra_mip_mpm_idx), 리메이닝 인트라 예측 모드 정보(ex. intra_mip_mpm_remainder)가 시그널링될 수 있으며, 상기 낫 플래너 플래그는 시그널링되지 않는다.
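상술한 MPM 플래그, 낫 플래너 플래그, MPM 인덱스 및 리메이닝 모드 정보의 파싱 순서는 예를 들어 아래와 같이 개략적으로 표현될 수 있다. 아래 파이썬 코드는 설명을 위한 스케치일 뿐이며, decode_intra_mode 등의 이름과 인자 구성은 본 문서에 없는 가정이다.

```python
# 본 문서의 시그널링 순서를 단순화한 설명용 스케치 (이름/인자 구성은 가정)
PLANAR = 0  # 플래너 모드의 모드 번호

def decode_intra_mode(mpm_flag, not_planar_flag=None, mpm_idx=None,
                      remainder=None, mpm_list=None, num_modes=67):
    """MPM 플래그 -> 낫 플래너 플래그 -> MPM 인덱스/리메이닝 정보 순으로
    파싱되는 인트라 예측 모드 결정 과정을 모사한다."""
    if mpm_flag == 1:
        if not_planar_flag == 0:      # 낫 플래너 플래그 값 0: 플래너 모드
            return PLANAR
        return mpm_list[mpm_idx]      # MPM 리스트(플래너 제외 5개 후보)에서 선택
    # MPM 밖의 나머지 모드들: 플래너/MPM 후보를 제외하고 모드 번호순으로 인덱싱
    rem_modes = [m for m in range(num_modes)
                 if m != PLANAR and m not in mpm_list]
    return rem_modes[remainder]

# 예: MPM 리스트가 [50, 18, 2, 34, 66]일 때
# decode_intra_mode(1, 0, mpm_list=[50, 18, 2, 34, 66]) -> 0 (플래너 모드)
```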
[104] 다시말해,일반적으로영상에대한블록분할이되면,코딩하려는현재블록과 주변 (neighboring)블록은비슷한영상특성을갖게된다.따라서 ,현재블록과 주변블록은서로동일하거나비슷한인트라예측모드를가질확률이높다. 따라서,인코더는현재블록의인트라예측모드를인코딩하기위해주변블록의 인트라예측모드를이용할수있다.
[105] 예를들어 ,인코더/디코더는현재블록에대한 MPM(most probable modes) 리스트를구성할수있다.상기 MPM리스트는 MPM후보리스트라고나타낼 수도있다.여기서, MPM이라함은인트라예측모드코딩시현재블록과주변 블록의유사성을고려하여코딩효율을향상시키기위해이용되는모드를 의미할수있다.상술한바와같이 MPM리스트는플래너모드를포함하여 구성될수있고,또는플래너모드를제외하여구성될수있다.예를들어, MPM 리스트가플래너모드를포함하는경우 MPM리스트의후보들의개수는 6개일 수있다.그리고, MPM리스트가플래너모드를포함하지않는경우, MPM 리스트의후보들의개수는 5개일수있다.
[106] 인코더/디코더는 5개또는 6개의 MPM을포함하는 MPM리스트를구성할수 있다.
[107] MPM 리스트를 구성하기 위하여 디폴트 인트라 모드들(Default intra modes), 주변 인트라 모드들(Neighbour intra modes) 및 도출된 인트라 모드들(Derived intra modes)의 3가지 종류의 모드들이 고려될 수 있다.
[108] 상기 주변 인트라 모드들을 위하여, 두 개의 주변 블록들, 즉, 좌측 주변 블록 및 상측 주변 블록이 고려될 수 있다.
[109] 상술한바와같이만약 MPM리스트가플래너모드를포함하지않도록
구성하는경우,상기리스트에서플래너 (planar)모드가제외되며 ,상기 MPM 리스트후보들의개수는 5개로설정될수있다.
[110] 또한,인트라예측모드중비방향성모드 (또는비각도모드)는현재블록의 주변 (neighboring)참조샘플들의평균 (average)기반의 DC모드또는
보간 (interpolation)기반의플래너 (planar)모드를포함할수있다.
[111] 인터 예측이 적용되는 경우, 인코딩 장치/디코딩 장치의 예측부는 블록 단위로 인터 예측을 수행하여 예측 샘플을 도출할 수 있다. 인터 예측은 현재 픽처 이외의 픽처(들)의 데이터 요소들(ex. 샘플값들, 또는 움직임 정보)에 의존적인 방법으로 도출되는 예측을 나타낼 수 있다(Inter prediction can be a prediction derived in a manner that is dependent on data elements (ex. sample values or motion information) of picture(s) other than the current picture). 현재 블록에 인터 예측이 적용되는 경우, 참조 픽처 인덱스가 가리키는 참조 픽처 상에서 움직임 벡터에 의해 특정되는 참조 블록(참조 샘플 어레이)을 기반으로, 현재 블록에 대한 예측된 블록(예측 샘플 어레이)을 유도할 수 있다. 이때, 인터 예측 모드에서 전송되는 움직임 정보의 양을 줄이기 위해 주변 블록과 현재 블록 간의 움직임 정보의 상관성에 기초하여 현재 블록의 움직임 정보를 블록, 서브블록 또는 샘플 단위로 예측할 수 있다. 상기 움직임 정보는 움직임 벡터 및 참조 픽처 인덱스를 포함할 수 있다. 상기 움직임 정보는 인터 예측 타입(L0 예측, L1 예측, Bi 예측 등) 정보를 더 포함할 수 있다. 인터 예측이 적용되는 경우, 주변 블록은 현재 픽처 내에 존재하는 공간적 주변 블록(spatial neighboring block)과 참조 픽처에 존재하는 시간적 주변 블록(temporal neighboring block)을 포함할 수 있다. 상기 참조 블록을 포함하는 참조 픽처와 상기 시간적 주변 블록을 포함하는 참조 픽처는 동일할 수도 있고, 다를 수도 있다. 상기 시간적 주변 블록은 동일 위치 참조 블록(collocated reference block), 동일 위치 CU(colCU) 등의 이름으로 불릴 수 있으며, 상기 시간적 주변 블록을 포함하는 참조 픽처는 동일 위치
픽처(collocated picture, colPic)라고 불릴 수도 있다. 예를 들어, 현재 블록의 주변 블록들을 기반으로 움직임 정보 후보 리스트가 구성될 수 있고, 상기 현재 블록의 움직임 벡터 및/또는 참조 픽처 인덱스를 도출하기 위하여 어떤 후보가 선택(사용)되는지를 지시하는 플래그 또는 인덱스 정보가 시그널링될 수 있다. 다양한 예측 모드를 기반으로 인터 예측이 수행될 수 있으며, 예를 들어 스킵 모드와 머지 모드의 경우에, 현재 블록의 움직임 정보는 선택된 주변 블록의 움직임 정보와 같을 수 있다. 스킵 모드의 경우, 머지 모드와 달리 레지듀얼 신호가 전송되지 않을 수 있다. 움직임 정보 예측(motion vector prediction, MVP) 모드의 경우, 선택된 주변 블록의 움직임 벡터를 움직임 벡터 예측자(motion vector predictor)로 이용하고, 움직임 벡터 차분(motion vector difference)은 시그널링될 수 있다. 이 경우 상기 움직임 벡터 예측자 및 움직임 벡터 차분의 합을 이용하여 상기 현재 블록의 움직임 벡터를 도출할 수 있다.
[112] 상기 움직임 정보는 인터 예측 타입(L0 예측, L1 예측, Bi 예측 등)에 따라 L0 움직임 정보 및/또는 L1 움직임 정보를 포함할 수 있다. L0 방향의 움직임 벡터는 L0 움직임 벡터 또는 MVL0라고 불릴 수 있고, L1 방향의 움직임 벡터는 L1 움직임 벡터 또는 MVL1이라고 불릴 수 있다. L0 움직임 벡터에 기반한 예측은 L0 예측이라고 불릴 수 있고, L1 움직임 벡터에 기반한 예측은 L1 예측이라고 불릴 수 있고, 상기 L0 움직임 벡터 및 상기 L1 움직임 벡터 둘 다에 기반한 예측은 쌍(Bi) 예측이라고 불릴 수 있다. 여기서 L0 움직임 벡터는 참조 픽처 리스트 L0(L0)에 연관된 움직임 벡터를 나타낼 수 있고, L1 움직임 벡터는 참조 픽처 리스트 L1(L1)에 연관된 움직임 벡터를 나타낼 수 있다. 참조 픽처 리스트 L0는 상기 현재 픽처보다 출력 순서상 이전 픽처들을 참조 픽처들로 포함할 수 있고, 참조 픽처 리스트 L1은 상기 현재 픽처보다 출력 순서상 이후 픽처들을 포함할 수 있다. 상기 이전 픽처들은 순방향 (참조) 픽처라고 불릴 수 있고, 상기 이후 픽처들은 역방향 (참조) 픽처라고 불릴 수 있다. 상기 참조 픽처 리스트 L0은 상기 현재 픽처보다 출력 순서상 이후 픽처들을 참조 픽처들로 더 포함할 수 있다. 이 경우 상기 참조 픽처 리스트 L0 내에서 상기 이전 픽처들이 먼저 인덱싱되고 상기 이후 픽처들은 그 다음에 인덱싱될 수 있다. 상기 참조 픽처 리스트 L1은 상기 현재 픽처보다 출력 순서상 이전 픽처들을 참조 픽처들로 더 포함할 수 있다. 이 경우 상기 참조 픽처 리스트 L1 내에서 상기 이후 픽처들이 먼저 인덱싱되고 상기 이전 픽처들은 그 다음에 인덱싱될 수 있다. 여기서 출력 순서는 POC(picture order count) 순서(order)에 대응될 수 있다.
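상술한 참조 픽처 리스트 L0/L1의 인덱싱 순서는 예를 들어 아래와 같이 개략적으로 표현될 수 있다. 아래 코드는 출력 순서(POC)만을 고려한 설명용 스케치이며, build_ref_lists라는 이름과 인자 구성은 본 문서에 없는 가정이다.

```python
def build_ref_lists(cur_poc, ref_pocs):
    """출력 순서(POC)를 기준으로 참조 픽처 리스트 L0/L1의 인덱싱 순서를
    구성하는 스케치. 실제 표준의 참조 픽처 리스트 시그널링을 단순화한 것이다."""
    before = sorted([p for p in ref_pocs if p < cur_poc], reverse=True)  # 이전(순방향) 픽처들
    after = sorted([p for p in ref_pocs if p > cur_poc])                 # 이후(역방향) 픽처들
    l0 = before + after  # L0: 이전 픽처들이 먼저 인덱싱되고 이후 픽처들이 그 다음
    l1 = after + before  # L1: 이후 픽처들이 먼저 인덱싱되고 이전 픽처들이 그 다음
    return l0, l1

# 예: 현재 POC가 8이고 참조 POC들이 [4, 6, 10, 12]이면
# L0 = [6, 4, 10, 12], L1 = [10, 12, 6, 4]
```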
[113] 도 4는코딩된영상/비디오에대한계층구조를예시적으로나타낸다.
[114] 도 4를참조하면,코딩된영상/비디오는영상/비디오의디코딩처리및그
자체를 다루는 VCL(video coding layer, 비디오 코딩 계층), 부호화된 정보를 전송하고 저장하는 하위 시스템, 그리고 VCL과 하위 시스템 사이에 존재하며 네트워크 적응 기능을 담당하는 NAL(network abstraction layer, 네트워크 추상 계층)로 구분되어 있다.
[115] VCL에서는압축된영상데이터 (슬라이스데이터 )를포함하는 VCL데이터를 생성하거나,혹은픽처파라미터세트 (Picture Parameter Set: PPS),시퀀스 파라미터세트 (Sequence Parameter Set: SPS),비디오파라미터세트 (Video Parameter Set: VPS)등의정보를포함하는파라미터세트또는영상의디코딩 과정에부가적으로필요한 SEI(Supplemental Enhancement Information)메시지를 생성할수있다.
[116] NAL에서는 VCL에서생성된 RBSP(Raw Byte Sequence Payload)에헤더
정보 (NAL유닛헤더 )를부가하여 NAL유닛을생성할수있다.이때, RBSP는 VCL에서생성된슬라이스데이터 ,파라미터세트, SEI메시지등을말한다. NAL 유닛헤더에는해당 NAL유닛에포함되는 RBSP데이터에따라특정되는 NAL 유닛타입정보를포함할수있다.
[117] 도 4에서 도시된 바와 같이, NAL 유닛은 VCL에서 생성된 RBSP에 따라 VCL NAL 유닛과 Non-VCL NAL 유닛으로 구분될 수 있다. VCL NAL 유닛은 영상에 대한 정보(슬라이스 데이터)를 포함하고 있는 NAL 유닛을 의미할 수 있고, Non-VCL NAL 유닛은 영상을 디코딩하기 위하여 필요한 정보(파라미터 세트 또는 SEI 메시지)를 포함하고 있는 NAL 유닛을 의미할 수 있다.
[118] 상술한 VCL NAL유닛, Non-VCL NAL유닛은하위시스템의데이터규격에 따라헤더정보를붙여서네트워크를통해전송될수있다.예컨대, NAL유닛은 H.266/VVC파일포맷, RTP(Real-time Transport Protocol), TS(Transport Stream) 등과같은소정규격의데이터형태로변형되어다양한네트워크를통해전송될 수있다.
[119] 상술한바와같이, NAL유닛은해당 NAL유닛에포함되는 RBSP데이터
구조 (structure)에따라 NAL유닛타입이특정될수있으며,이러한 NAL유닛 타입에대한정보는 NAL유닛헤더에저장되어시그널링될수있다.
[120] 예를 들어, NAL 유닛이 영상에 대한 정보(슬라이스 데이터)를 포함하는지
여부에따라크게 VCL NAL유닛타입과 Non-VCL NAL유닛타입으로분류될 수있다. VCL NAL유닛타입은 VCL NAL유닛이포함하는픽처의성질및종류 등에따라분류될수있으며, Non-VCL NAL유닛타입은파라미터세트의종류 등에따라분류될수있다.
[121] 아래는 Non-VCL NAL유닛타입이포함하는파라미터세트의종류등에따라 특정된 NAL유닛타입의일예이다.
[122] - APS (Adaptation Parameter Set) NAL unit: APS를포함하는 NAL유닛에대한 타입
[123] - DPS (Decoding Parameter Set) NAL unit: DPS를포함하는 NAL유닛에대한 타입
[124] - VPS(Video Parameter Set) NAL unit: VPS를포함하는 NAL유닛에대한타입
[125] - SPS(Sequence Parameter Set) NAL unit: SPS를포함하는 NAL유닛에대한타입
[126] - PPS(Picture Parameter Set) NAL unit: PPS를포함하는 NAL유닛에대한타입
[127] - PH(Picture header) NAL unit: PH를포함하는 NAL유닛에대한타입
[128] 상술한 NAL유닛타입들은 NAL유닛타입을위한신택스정보를가지며,상기 신택스정보는 NAL유닛헤더에저장되어시그널링될수있다.예컨대,상기 신택스정보는 nal_unit_type일수있으며 , NAL유닛타입들은 nal_unit_type 값으로특정될수있다.
[129] 한편,상술한바와같이하나의픽처는복수의슬라이스를포함할수있으며, 하나의슬라이스는슬라이스헤더및슬라이스데이터를포함할수있다.이 경우,하나의픽처내복수의슬라이스 (슬라이스헤더및슬라이스데이터 집합)에대하여하나의픽처헤더가더부가될수있다.상기픽처헤더 (픽처헤더 신택스)는상기픽처에공통적으로적용할수있는정보/파라미터를포함할수 있다.본문서에서타일그룹은슬라이스로혼용또는대체될수있다.또한,본 문서에서타일그룹헤더는슬라이스헤더로혼용또는대체될수있다.
[130] 상기슬라이스헤더 (슬라이스헤더신택스)는상기슬라이스에공통적으로 적용할수있는정보/파라미터를포함할수있다.상기 APS(APS신택스)또는 PPS(PPS신택스)는하나이상의슬라이스또는픽처에공통적으로적용할수 있는정보/파라미터를포함할수있다.상기 SPS(SPS신택스)는하나이상의 시퀀스에공통적으로적용할수있는정보/파라미터를포함할수있다.상기 VPS(VPS신택스)는다중레이어에공통적으로적용할수있는정보/파라미터를 포함할수있다.상기 DPS(DPS신택스)는비디오전반에공통적으로적용할수 있는정보/파라미터를포함할수있다.상기 DPS는 CVS(coded video sequence)의 접합 (concatenation)에관련된정보/파라미터를포함할수있다.본문서에서상위 레벨신택스 (High level syntax, HLS)라함은상기 APS신택스, PPS신택스, SPS 신택스, VPS신택스, DPS신택스,픽처헤더신택스,슬라이스헤더신택스중 적어도하나를포함할수있다.
[131] 본문서에서인코딩장치에서디코딩장치로인코딩되어비트스트림형태로 시그널링되는영상/비디오정보는픽처내파티셔닝관련정보,인트라/인터 예측정보,레지듀얼정보,인루프필터링정보등을포함할뿐아니라,상기 슬라이스헤더에포함된정보,상기픽처헤더에포함된정보,상기 APS에 포함된정보,상기 PPS에포함된정보, SPS에포함된정보, VPS에포함된정보 및/또는 DPS에포함된정보를포함할수있다.또한상기영상/비디오정보는 NAL유닛헤더의정보를더포함할수있다.
[132] 한편, 양자화 등 압축 부호화 과정에서 발생하는 에러에 의한 원본(original) 영상과 복원 영상의 차이를 보상하기 위하여, 상술한 바와 같이 복원 샘플들 또는 복원 픽처에 인루프 필터링 절차가 수행될 수 있다. 상술한 바와 같이 인루프 필터링은 인코딩 장치의 필터부 및 디코딩 장치의 필터부에서 수행될 수 있으며, 디블록킹 필터, SAO 및/또는 적응적 루프 필터(ALF)가 적용될 수 있다. 예를 들어, ALF 절차는 디블록킹 필터링 절차 및/또는 SAO 절차가 완료된 후 수행될 수 있다. 다만, 이 경우에도 디블록킹 필터링 절차 및/또는 SAO 절차가 생략될 수도 있다.
[133] 도 5는 ALF절차의일예를개략적으로나타내는흐름도이다.도 5에개시된 ALF절차는인코딩장치및디코딩장치에서수행될수있다.본문서에서코딩 장치는상기인코딩장치및/또는디코딩장치를포함할수있다.
[134] 도 5를 참조하면, 코딩 장치는 ALF를 위한 필터를 도출한다(S500). 상기 필터는 필터 계수들을 포함할 수 있다. 코딩 장치는 ALF 적용 여부를 결정할 수 있고, 상기 ALF를 적용하기로 판단한 경우, 상기 ALF를 위한 필터 계수들을 포함하는 필터를 도출할 수 있다. ALF를 위한 필터(계수들) 또는 ALF를 위한 필터(계수들)를 도출하기 위한 정보는 ALF 파라미터로 불릴 수 있다. ALF 적용 여부에 관한 정보(ex. ALF 가용 플래그) 및 상기 필터를 도출하기 위한 ALF 데이터가 인코딩 장치에서 디코딩 장치로 시그널링될 수 있다. ALF 데이터는 상기 ALF를 위한 필터를 도출하기 위한 정보를 포함할 수 있다. 또한, 일 예로, ALF의 계층적 제어를 위하여, ALF 가용 플래그가 SPS, 픽처 헤더, 슬라이스 헤더 및/또는 CTB 레벨에서 각각 시그널링될 수 있다.
[135] 상기 ALF를 위한 필터를 도출하기 위하여, 현재 블록(또는 ALF 대상 블록)의 활동성(activity) 및/또는 방향성(directivity)이 도출되고, 상기 활동성 및/또는 상기 방향성을 기반으로 상기 필터가 도출될 수 있다. 예를 들어, ALF 절차는 4x4 블록(루마 성분 기준) 단위로 적용될 수 있다. 상기 현재 블록 또는 ALF 대상 블록은 예를 들어 CU일 수 있고, 또는 CU 내 4x4 블록일 수 있다. 구체적으로 예를 들어, 상기 ALF 데이터에 포함된 정보로부터 도출되는 제1 필터들과, 미리 정의된 제2 필터들을 기반으로 ALF를 위한 필터들이 도출될 수 있고, 코딩 장치는 상기 활동성 및/또는 상기 방향성을 기반으로 상기 필터들 중 하나를 선택할 수 있다. 코딩 장치는 상기 선택된 필터에 포함된 필터 계수들을 상기 ALF를 위하여 이용할 수 있다.
[136] 코딩장치는상기필터를기반으로필터링을수행한다 (S510).상기필터링을 기반으로수정된 (modified)복원샘플들이도출될수있다.예를들어,상기필터 내상기필터계수들은필터모양에따라배치또는할당될수있고,현재블록내 복원샘플들에대하여상기필터링이수행될수있다.여기서상기현재블록내 복원샘플들은디블록킹필터절차및 SAO절차가완료된후의복원샘플들일 수있다.일예로,하나의필터모양이사용되거나,소정의복수의필터모양 중에서하나의필터모양이선택되어사용될수있다.예를들어,루마성분에 대하여적용되는필터모양과크로마성분에대하여적용되는필터모양이다를 수있다.예를들어,루마성분에대하여 7x7다이아몬드필터모양이사용될수 있고,크로마성분에대하여 5x5다이아몬드필터모양이사용될수있다.
[137] 도 6a및도 6b는 ALF필터모양의예를나타낸다.
[138] 도 6a는 7x7다이아몬드필터모양을나타내고,도 6b는 5x5다이아몬드필터 모양을나타낸다.도 6a및도 6b에서필터모양내 Cn은필터계수를나타낸다. 상기 Cn에서 n이동일한경우,이는동일한필터계수가할당될수있음을 나타낸다.본문서에서 ALF의필터모양에따라필터계수가할당되는위치 및/또는단위는필터탭이라불릴수있다.이때각각의필터탭에는하나의필터 계수가할당될수있고,필터탭이배열된형태는필터모양에해당될수있다. 필터모양의센터에위치한필터탭은센터필터탭이라불릴수있다.센터필터 탭을기준으로서로대응되는위치에존재하는동일한 n값의두개의필터 탭에는동일한필터계수가할당될수있다.예를들어, 7x7다이아몬드필터 모양의경우, 25개의필터탭을포함하며, C0내지 C11의필터계수들이중앙 대칭형태로할당되므로, 13개의필터계수들만으로상기 25개의필터탭에필터 계수들을할당할수있다.또한,예를들어, 5x5다이아몬드필터모양의경우,
13개의필터탭을포함하며, C0내지 C5의필터계수들이중앙대칭형태로 할당되므로, 7개의필터계수들만으로상기 13개의필터탭에필터계수들을 할당할수있다.예를들어,시그널링되는필터계수에관한정보의데이터량을 줄이기위하여, 7x7다이아몬드필터모양에대한 13개의필터계수들중 12개의 필터계수들은 (명시적으로)시그널링되고, 1개의필터계수는 (묵시적으로) 도출될수있다.또한,예를들어, 5x5다이아몬드필터모양에대한 7개의필터 계수들중 6개의필터계수들은 (명시적으로)시그널링되고, 1개의필터계수는 (묵시적으로)도출될수있다.
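상술한 중앙 대칭 형태의 계수 할당은 예를 들어 아래와 같이 표현될 수 있다. expand_symmetric_taps라는 함수 이름과 1차원 배열 표현은 설명을 위한 가정이다.

```python
def expand_symmetric_taps(unique_coeffs):
    """중앙 대칭 필터 모양에서 고유 필터 계수들로 전체 필터 탭을 채우는 스케치.
    7x7 다이아몬드: 고유 계수 13개(C0..C12) -> 필터 탭 25개,
    5x5 다이아몬드: 고유 계수 7개(C0..C6)  -> 필터 탭 13개."""
    first = list(unique_coeffs[:-1])   # 센터 탭 이전의 계수들
    center = unique_coeffs[-1]         # 센터 필터 탭의 계수
    # 센터 필터 탭을 기준으로 서로 대응되는 위치의 두 탭에 동일한 계수를 할당
    return first + [center] + first[::-1]
```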
[139] 본문서의일실시예에따르면,상기 ALF절차를위하여사용되는 ALF
파라미터가 APS(adaptation parameter set)를통하여시그널링될수있다.상기 ALF파라미터는상기 ALF를위한필터정보또는 ALF데이터로부터도출될수 있다.
[140] ALF는 상술한 바와 같이 비디오/영상 코딩에서 적용될 수 있는 인루프 필터링 기술(technique)의 타입이다. ALF는 위너 기반(Wiener-based) 적응적 필터를 사용하여 수행될 수 있다. 이는 원본 샘플들과 디코딩된 샘플들(또는 복원 샘플들) 간 MSE(mean square error)를 최소화하기 위함일 수 있다. ALF 툴(tool)을 위한 상위 레벨 디자인(high level design)은 SPS 및/또는 슬라이스 헤더(또는 타일 그룹 헤더)에서 접근할 수 있는 신택스 요소들을 함유(incorporate)할 수 있다.
[141] 도 7은 ALF데이터의계층구조의일예를나타낸다. [142] 도 7을참조하면, CVS(coded video sequence)는 SPS,하나이상의 PPS,그리고 뒤따르는하나이상의코딩된픽처를포함할수있다.각코딩된픽처는사각 리전 (region)들로나눠질수있다.상기사각리전들은타일들이라고불릴수 있다.하나이상의타일들은모여서타일그룹또는슬라이스를형성할수있다. 이경우,타일그룹헤더가 PPS에링크되고,상기 PPS가 SPS에링크될수있다. 기존방법에따르면상기 ALF데이터 (ALF파라미터 )는타일그룹헤더에 포함되었다.하나의비디오가다수의픽처들로구성되고,하나의픽처가다수의 타일들을포함하는것을고려할때, ALF데이터 (ALF파라미터)시그널링이타일 그룹단위로빈번하게이루어지는것은코딩효율을저하시키는문제가있었다.
[143] 본문서에서제안된일실시예에따르면상기 ALF파라미터는다음과같이 APS에포함되어시그널링될수있다.
[144] 도 8은 ALF데이터의계층구조의다른예를나타낸다.
[145] 도 8을참조하면, APS가정의되고,상기 APS는필요한 ALF데이터 (ALF
파라미터)를나를수있다.게다가, APS는자기식별파라미터 (self-identification parameter)및 ALF데이터를가질수있다.상기 APS의자기식별파라미터는 APS ID를포함할수있다.즉,상기 APS는 ALF데이터필드외에도상기 APS ID를나타내는정보를포함할수있다.타일그룹헤더또는슬라이스헤더는 APS인덱스정보를이용하여 APS를참조할수있다.다시말하면,타일그룹 헤더또는슬라이스헤더는 APS인덱스정보를포함할수있으며 ,상기 APS 인덱스정보가가리키는 APS ID를갖는 APS에포함된 ALF데이터 (ALF 파라미터 )를기반으로대상블록에대한 ALF절차를수행할수있다.여기서, 상기 APS인덱스정보는 APS ID정보라고불릴수도있다.
[146] 또한,상기 SPS는 ALF의사용을허용하는플래그를포함할수있다.예를들어 , CVS가시작 (begin)할때 , SPS가체크되고,상기 SPS내에상기플래그가체크될 수있다.예를들어, SPS는아래표 1의신택스를포함할수있다.표 1의신택스는 SPS의일부분일수있다.
[147] [표 1]
[148] 상기표 1의신택스에포함된신택스요소의시맨틱스는예를들어,다음표와 같이나타낼수있다.
[149] [표 2]
[150] 즉, 상기 sps_alf_enabled_flag 신택스 요소는 그 값이 0인지 1인지 여부를 기반으로 ALF가 가용한지 여부를 나타낼 수 있다. sps_alf_enabled_flag 신택스 요소는 ALF 가용 플래그(제1 ALF 가용 플래그라고 불릴 수 있다)라고 불릴 수 있고, SPS에 포함될 수 있다. 즉, 상기 ALF 가용 플래그는 SPS(또는 SPS 레벨)에서 시그널링될 수 있다. 상기 SPS에서 시그널링되는 상기 ALF 가용 플래그의 값이 1인 경우, 상기 SPS를 참조하는 CVS 내의 픽처들에 대하여 기본적으로 ALF가 가용하도록 결정될 수 있다. 한편, 상술한 바와 같이 상기 SPS보다 하위 레벨에서 추가적인 가용 플래그를 시그널링하여 개별적으로 ALF 사용을 on/off 처리할 수도 있다.
[151] 예를 들어, ALF 툴이 CVS에 대하여 가용한 경우, 타일 그룹 헤더 또는 슬라이스 헤더에서 추가적인 가용 플래그(제2 ALF 가용 플래그라고 불릴 수 있다)가 시그널링될 수 있다. 상기 제2 ALF 가용 플래그는 예를 들어, SPS 레벨에서 ALF가 가용한 경우에 파싱/시그널링될 수 있다. 만약, 제2 ALF 가용 플래그의 값이 1인 경우, 상기 타일 그룹 헤더 또는 상기 슬라이스 헤더를 통하여 ALF 데이터를 파싱할 수 있다. 예를 들어, 상기 제2 ALF 가용 플래그는 루마 및 크로마 성분들에 관한 ALF 가용 조건(condition)을 명시(specify)할 수 있다. 상기 ALF 데이터는 APS ID 정보를 통하여 접근할 수 있다.
[152] [표 3]
[153] [표 4]
[154] 상기 표 3 또는 표 4의 신택스에 포함된 신택스 요소들의 시맨틱스는 예를 들어, 다음 표들과 같이 나타낼 수 있다.
[155] [표 5]
[156] [표 6]
[157] 상기 제2 ALF 가용 플래그는 tile_group_alf_enabled_flag 신택스 요소 또는
slice_alf_enabled_flag신택스요소를포함할수있다.
[158] 상기 APS ID정보 (ex. tile_group_aps_id신택스요소또는 slice_aps_id신택스 요소)를기반으로해당타일그룹또는해당슬라이스에서참조하는 APS가 식별될수있다.상기 APS는 ALF데이터를포함할수있다.
[159] 한편, ALF데이터를포함하는 APS의구조는예를들어,다음과같은신택스및 시맨틱스를기반으로설명될수있다.표 7의신택스는 APS의일부분일수있다.
[160] [표 7]
[161] [표 8]
[162] 상기와같이 , adaptation_parameter_set_id신택스요소는해당 APS의식별자를 나타낼수있다.즉, APS는상기 adaptation_parameter_set_id신택스요소를 기반으로식별될수있다.상기 adaptation_parameter_set_id신택스요소는 APS ID 정보라고불릴수있다.또한,상기 APS는 ALF데이터필드를포함할수있다. 상기 ALF데이터필드는상기 adaptation_parameter_set_id신택스요소이후에 파싱 /시그널링될수있다.
[163] 또한, 예를 들어, APS에서 APS 확장 플래그(ex. aps_extension_flag 신택스 요소)가 파싱/시그널링될 수 있다. 상기 APS 확장 플래그는 APS 확장 데이터 플래그(aps_extension_data_flag) 신택스 요소들이 존재하는지 여부를 지시할 수 있다. 상기 APS 확장 플래그는 예를 들어 VVC 표준의 이후 버전을 위한 확장 포인트들을 제공하기 위하여 사용될 수 있다.
[164] ALF정보의핵심처리/핸들링은슬라이스헤더또는타일그룹헤더에서
수행될수있다.상술한 ALF데이터필드는 ALF필터의처리에관한정보를 포함할수있다.예를들어,상기 ALF데이터필드로부터추출될수있는정보는, 사용되는필터의개수정보, ALF가루마성분에만적용되는지여부를나타내는 정보,컬러성분에관한정보,지수골룸 (exponential golomb, EG)파라미터들 및/또는필터계수들의 델타값에관한정보등을포함할수있다.
[165] 한편, ALF데이터필드는예를들어다음과같이 ALF데이터신택스를포함할 수있다.
[166] [표 9]
[167] 상기표 9의신택스에포함된신택스요소들의시맨틱스는예를들어,다음 표와같이나타낼수있다.
WO 2020/175893 PCT/KR2020/002702
[168] [표 10]
[169] [표 11]
[170] [표 12]
성분들의 조합에 적용되는지를 나타낼 수 있다. 일단, 각 성분에 대한 가용 여부(가용 파라미터들)가 결정되면, 루마 (성분) 필터들의 개수에 관한 정보가 파싱될 수 있다. 일 예로, 사용될 수 있는 필터들의 최대 개수는 25로 설정될 수 있다. 만약 시그널링되는 루마 필터들의 개수가 적어도 하나이면, 0부터 최대 필터 개수(ex. 25, which may alternatively be known as class)까지 범위의 각 필터에 대하여, 상기 필터에 대한 인덱스 정보가 파싱/시그널링될 수 있다. 이는 매 클래스(즉, 0부터 최대 필터 개수까지)가 필터 인덱스와 연계됨을
의미(implies)할 수 있다. 상기 필터 인덱스를 기반으로 각 클래스에 대하여 사용될 필터가 라벨링되면, 플래그(ex. alf_luma_coeff_delta_flag)가 파싱/시그널링될 수 있다. 상기 플래그는 ALF 루마 필터 계수 델타 값의 예측에 관한 플래그 정보(ex. alf_luma_coeff_delta_prediction_flag)가 슬라이스 헤더 또는 타일 그룹 헤더에 존재하는지 여부를 해석하기 위하여 사용될 수 있다.
[173] 만약 alf_luma_num_filters_signalled_minus1 신택스 요소에 의하여 시그널링되는 루마 필터의 개수가 0보다 크고, alf_luma_coeff_delta_flag 신택스 요소의 값이 0이면, 이는 alf_luma_coeff_delta_prediction_flag 신택스 요소가 슬라이스 헤더 또는 타일 그룹 헤더에 존재하고 그 상태(status)가 평가(evaluate)될 수 있음을 의미할 수 있다. 만약, alf_luma_coeff_delta_prediction_flag 신택스 요소의 상태가 1을 나타내면, 이는 루마 필터 계수들이 이전 루마 (필터) 계수들로부터 예측됨을 의미할 수 있다. 만약 alf_luma_coeff_delta_prediction_flag 신택스 요소의 상태가 0을 나타내면, 이는 루마 필터 계수들이 이전 루마 (필터) 계수들의 델타들로부터 예측되지 않음을 의미할 수 있다.
[174] 델타 필터 계수(ex. alf_luma_coeff_delta_abs)가 지수 골룸 코드를 기반으로 코딩된 경우, 상기 델타 루마 필터 계수(ex. alf_luma_coeff_delta_abs)를 디코딩하기 위하여, 지수 골룸(EG) 코드의 차수 k(order-k)가 결정되어야 할 수 있다. 이 정보는 필터 계수들을 디코딩하기 위하여 필요할 수 있다. 상기 지수 골룸 코드의 차수는 EG(k)라고 표현될 수 있다. EG(k)를 정하기 위하여, alf_luma_min_eg_order_minus1 신택스 요소가 파싱/시그널링될 수 있다. 상기 alf_luma_min_eg_order_minus1 신택스 요소는 엔트로피 코딩된 신택스 요소일 수 있다. alf_luma_min_eg_order_minus1 신택스 요소는 상기 델타 루마 필터 계수의 디코딩을 위하여 사용되는 EG의 최소 차수(smallest order)를 나타낼 수 있다. 예를 들어, 상기 alf_luma_min_eg_order_minus1 신택스 요소의 값은 0부터 6 범위 내의 값일 수 있다. 상기 alf_luma_min_eg_order_minus1 신택스 요소가 파싱/시그널링된 후, alf_luma_eg_order_increase_flag 신택스 요소가 파싱/시그널링될 수 있다. 만약, 상기 alf_luma_eg_order_increase_flag 신택스 요소의 값이 1이면, 이는 상기 alf_luma_min_eg_order_minus1 신택스 요소가 나타내는 EG의 차수가 1만큼 증가함을 나타낸다. 만약, 상기 alf_luma_eg_order_increase_flag 신택스 요소의 값이 0이면, 이는 상기 alf_luma_min_eg_order_minus1 신택스 요소가 나타내는 EG의 차수가 증가하지 않음을 나타낸다. 상기 EG의 차수는 상기 EG의 인덱스로 나타내어질 수 있다. 상기 alf_luma_min_eg_order_minus1 신택스 요소 및 상기 alf_luma_eg_order_increase_flag 신택스 요소에 기반한 (루마 성분에 관한) EG 차수(또는 EG 인덱스)는 예를 들어 다음과 같이 판단될 수 있다.
[175] [표 14]
[176] 상기 판단 절차를 기반으로 expGoOrderY는 expGoOrderY=KminTab로 도출될 수 있다. 이를 통하여 EG 차수들을 포함하는 어레이가 도출될 수 있으며, 이는 디코딩 장치에 의하여 사용될 수 있다. 상기 expGoOrderY는 상기 EG 차수(또는 EG 인덱스)를 나타낼 수 있다.
[177] 미리정의된 (pre-defined)골룸차수인덱스 (즉, golombOrderldxY)가있을수 있다.상기미리정의된골룸차수는상기계수들을코딩하기위한마지막골룸 차수 (final golomb order)를결정하기위하여사용될수있다.
[178] 예를들어,상기미리정의된골룸차수는예를들어다음수학식과같이구성될 수있다.
[179] [수식 1]
golombOrderIdxY[ ] = { 0, 0, 1, 0, 1, 2, 1, 0, 0, 1, 2 }
[180] 여기서, 차수 k = expGoOrderY[golombOrderIdxY[j]]이고, j는 j번째
시그널링되는 필터 계수를 나타낼 수 있다. 예를 들어, j=2이면, 즉 3번째 필터 계수이면, golombOrderIdxY[2] = 1이고, 따라서 k = expGoOrderY[1]일 수 있다.
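상술한 EG 차수 도출과 골룸 차수 인덱스의 사용은 예를 들어 아래와 같이 개략적으로 표현될 수 있다. 함수 이름들은 설명용 가정이며, golombOrderIdxY 어레이는 수식 1에 기재된 값을 그대로 사용한다.

```python
# 미리 정의된 골룸 차수 인덱스 (수식 1에 기재된 값 그대로)
golombOrderIdxY = [0, 0, 1, 0, 1, 2, 1, 0, 0, 1, 2]

def derive_eg_orders(min_eg_order_minus1, increase_flags):
    """alf_luma_min_eg_order_minus1과 alf_luma_eg_order_increase_flag들로부터
    EG 차수 어레이(expGoOrderY에 대응)를 도출하는 설명용 스케치."""
    k = min_eg_order_minus1 + 1          # EG의 최소 차수
    orders = []
    for inc in increase_flags:           # 플래그 값이 1이면 차수가 1만큼 증가
        k += inc
        orders.append(k)
    return orders

def golomb_order_for_coeff(exp_go_order, j):
    """j번째 시그널링되는 필터 계수의 차수 k = expGoOrderY[golombOrderIdxY[j]]"""
    return exp_go_order[golombOrderIdxY[j]]

# 예: 최소 차수 3(minus1=2), 증가 플래그 [1, 0, 1] -> 차수들 [4, 4, 5]
```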
[181] 이경우,예를들어,만약 alf_luma_coeff_delta_flag신택스요소의값이참 (true), 즉 1을나타내면,시그널링되는모든 (every)필터에대하여, alf_luma_coeff_flag 신택스요소가시그널링될수있다.상기 alf_luma_coeff_flag신택스요소는루마 필터계수가 (명시적으로)시그널링되는지여부를나타낸다.
[182] 상기 EG 차수와 상술한 관련 플래그들(ex. alf_luma_coeff_delta_flag, alf_luma_coeff_flag 등)의 상태가 결정되면, 루마 필터 계수들의 차분 정보 및 부호(sign) 정보가 파싱/시그널링될 수 있다(즉, alf_luma_coeff_flag가 참(true)을 나타내는 경우). 12개의 필터 계수들 각각에 대한 델타 절대값 정보(alf_luma_coeff_delta_abs 신택스 요소)가 파싱/시그널링될 수 있다. 게다가, 만약 상기 alf_luma_coeff_delta_abs 신택스 요소가 값을 갖는 경우, 부호 정보(alf_luma_coeff_delta_sign 신택스 요소)가 파싱/시그널링될 수 있다. 상기 루마 필터 계수들의 차분 정보 및 부호 정보를 포함하는 정보는 상기 루마 필터 계수들에 관한 정보라고 불릴 수 있다.
[183] 상기 필터 계수들의 델타들은 상기 부호와 함께(along with) 결정되고, 저장될 수 있다. 이 경우 상기 부호를 지닌 필터 계수들의 델타들은 어레이 형태로 저장될 수 있다. 상기 필터 계수들의 델타들은 델타 루마 계수들이라고 불릴 수 있고, 상기 부호를 지닌 필터 계수들의 델타들은 부호를 지닌 델타 루마 계수들이라고 불릴 수 있다.
[184] 상기 부호를 지닌 델타 루마 계수들로부터 최종 필터 계수들을 결정하기 위하여, (루마) 필터 계수들은 다음의 수학식과 같이 업데이트될 수 있다.
[185] [수식 2]
filtCoeff[ sigFiltIdx ][ j ] = filtCoeff[ sigFiltIdx ][ j ] + filtCoeff[ sigFiltIdx - 1 ][ j ]
[186] 여기서, j는 필터 계수 인덱스를 나타낼 수 있고, sigFiltIdx는 시그널링되는 필터 인덱스를 나타낼 수 있다. j = {0,...,11} 그리고 sigFiltIdx = {0,..., alf_luma_num_filters_signalled_minus1}일 수 있다.
[187] 상기 계수들은 최종 AlfCoeffL로 복사(copy)될 수 있다. 예를 들어, AlfCoeffL[ filtIdx ][ j ] = filtCoeff[ filtCoeffDeltaIdx[ filtIdx ] ][ j ]일 수 있으며, 여기서 filtIdx = 0,...,24 및 j = 0,...,11일 수 있다.
[188] 주어진 필터 인덱스에 대한 상기 부호를 지닌 델타 루마 계수들은 처음 12개의 필터 계수들을 결정하기 위하여 사용될 수 있다. 7x7 필터의 13번째 필터 계수는 예를 들어 다음 수학식을 기반으로 결정될 수 있다. 상기 13번째 필터 계수는 상술한 센터 탭의 필터 계수를 나타낼 수 있다.
[189] [수식 3]
AlfCoeffL[ filtIdx ][ 12 ] = 128 - Σk ( AlfCoeffL[ filtIdx ][ k ] << 1 ), k = 0,...,11
[190] 여기서, 상기 필터 계수 인덱스 12는 13번째 필터 계수를 나타낼 수 있다. 참고로, 상기 필터 계수 인덱스는 0부터 시작하므로 값 12는 13번째 필터 계수를 나타낼 수 있다.
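수식 3에 따른 센터 탭 계수의 도출은 예를 들어 아래와 같이 표현될 수 있다. derive_luma_center_tap이라는 이름은 설명용 가정이다. 대칭 위치에 두 번씩 할당되는 12개 계수를 고려하면, 전체 탭 계수의 합이 128(= 1 << 7)이 되도록 센터 탭이 정해진다.

```python
def derive_luma_center_tap(coeffs12):
    """수식 3에 따라 12개의 시그널링된 루마 필터 계수로부터 13번째(센터 탭)
    계수를 도출하는 설명용 스케치. 각 계수가 대칭 위치에 두 번씩 쓰이므로
    (c << 1) 항이 사용되며, 전체 계수 합이 128이 되도록 센터 탭이 결정된다."""
    assert len(coeffs12) == 12
    return 128 - sum(c << 1 for c in coeffs12)

# 예: 12개 계수가 모두 0이면 센터 탭 계수는 128
```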
[191] 예를 들어, 비트스트림 부합(conformance)을 보장하기 위하여, 상기 최종 필터 계수들 AlfCoeffL[ filtIdx ][ k ] 값의 범위는 k가 0,...,11인 경우 -2^7부터 2^7-1까지일 수 있고, k가 12인 경우 0부터 2^8-1까지일 수 있다. 여기서, 상기 k는 j로 대체될 수 있다.
[192] 한편, 크로마 성분에 대하여는 alf_chroma_idc 신택스 요소를 기반으로 크로마 성분에 대한 처리가 수행될 수 있다. 만약 alf_chroma_idc 신택스 요소의 값이 0보다 큰 경우, 상기 크로마 성분에 대한 최소 EG 차수 정보(ex. alf_chroma_min_eg_order_minus1 신택스 요소)가 파싱/시그널링될 수 있다. 상술한 본 문서의 실시예에 따르면 크로마 성분에 대하여 5x5 다이아몬드 필터 모양이 사용될 수 있으므로, 이 경우 최대 골룸 인덱스는 2일 수 있다. 이 경우 alf_chroma_min_eg_order_minus1 신택스 요소 및 alf_chroma_eg_order_increase_flag 신택스 요소를 기반으로 (크로마 성분에 관한) EG 차수(또는 EG 인덱스)는 예를 들어 다음과 같이 판단될 수 있다.
[193] [표 15]
[194] 상기 판단 절차를 기반으로 expGoOrderC는 expGoOrderC=KminTab로 도출될 수 있다. 이를 통하여 EG 차수들을 포함하는 어레이가 도출될 수 있으며, 이는 디코딩 장치에 의하여 사용될 수 있다. 상기 expGoOrderC는 크로마 성분에 관한 상기 EG 차수(또는 EG 인덱스)를 나타낼 수 있다.
[195] 미리정의된 (pre-defined)골룸차수인덱스 (golombOrderldxC)가있을수있다. 상기미리정의된골룸차수는상기계수들을코딩하기위한마지막골룸 차수 (final golomb order)를결정하기위하여사용될수있다.
[196] 예를들어,상기미리정의된골룸차수는예를들어다음의수학식과같이
구성될수있다.
[197] [수식 4]
golombOrderIdxC[ ] = { 0, 0, 1, 0, 0, 1 }
[198] 여기서, 차수 k = expGoOrderC[golombOrderIdxC[j]]이고, j는 j번째 시그널링되는 필터 계수를 나타낼 수 있다. 예를 들어, j=2이면, 즉 3번째 필터 계수이면, golombOrderIdxC[2] = 1이고, 따라서 k = expGoOrderC[1]일 수 있다.
[199] 이를 기반으로, 크로마 필터 계수들의 절대값 정보 및 부호(sign) 정보가 파싱/시그널링될 수 있다. 상기 크로마 필터 계수들의 절대값 정보 및 부호 정보를 포함하는 정보는 크로마 필터 계수들에 관한 정보라고 불릴 수 있다. 예를 들어, 크로마 성분에 대하여 5x5 다이아몬드 필터 모양이 적용될 수 있으며, 이 경우 6개의 (크로마 성분) 필터 계수들 각각에 대한 델타 절대값 정보(alf_chroma_coeff_abs 신택스 요소)가 파싱/시그널링될 수 있다. 게다가, 만약 상기 alf_chroma_coeff_abs 신택스 요소의 값이 0보다 큰 경우, 부호 정보(alf_chroma_coeff_sign 신택스 요소)가 파싱/시그널링될 수 있다. 예를 들어, 상기 6개 크로마 필터 계수들은 상기 크로마 필터 계수들에 관한 정보를 기반으로 도출될 수 있다. 이 경우, 7번째 크로마 필터 계수는 예를 들어 다음 수학식을 기반으로 결정될 수 있다. 상기 7번째 필터 계수는 상술한 센터 탭의 필터 계수를 나타낼 수 있다.
[200] [수식 5]
AlfCoeffC[ filtIdx ][ 6 ] = 128 - Σk ( AlfCoeffC[ filtIdx ][ k ] << 1 ), k = 0,...,5
[201] 여기서,상기필터계수인덱스 6는 7번째필터계수를나타낼수있다.참고로, 상기필터계수인덱스는 0부터시작하므로값 6은 7번째필터계수를나타낼수 있다.
[202] 예를 들어, 비트스트림 부합(conformance)을 보장하기 위하여, 상기 최종 필터 계수들 AlfCoeffC[ filtIdx ][ k ] 값의 범위는 k가 0,...,5인 경우 -2^7부터 2^7-1까지일 수 있고, k가 6인 경우 0부터 2^8-1까지일 수 있다. 여기서, 상기 k는 j로 대체될 수 있다.
[203] (루마/크로마)필터계수들이도출되면,상기필터계수들또는상기필터
계수들을포함하는필터를기반으로 ALF기반필터링을수행할수있다.이를 통하여수정된복원샘플들이도출될수있음은상술한바와같다.또한,다수의 필터들이도출될수있고,상기다수의필터들중하나의필터의필터계수들이 상기 ALF절차를위하여사용될수도있다.일예로,시그널링된필터선택 정보를기반으로상기다수의필터들중하나가지시될수있다.또는예를들어, 현재블록또는 ALF대상블록의활동성및/또는방향성을기반으로상기다수의 필터들중하나의필터가선택되고,상기선택된필터의필터계수들이상기 ALF 절차를위하여사용될수도있다.
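상술한 필터 계수 기반 필터링은 예를 들어 아래와 같이 개략적으로 표현될 수 있다. 아래 코드는 클리핑 등 세부 처리를 생략한 설명용 스케치이며, alf_filter_sample이라는 이름과 인자 구성은 본 문서에 없는 가정이다.

```python
def alf_filter_sample(window, taps, shift=7):
    """도출된 필터 계수(탭)들로 복원 샘플 하나를 필터링하는 설명용 스케치.
    window: 필터 모양에 따라 배치된 복원 샘플들, taps: 같은 순서의 필터 계수들.
    전체 계수 합이 128(= 1 << 7)이므로 7비트 우측 시프트로 정규화한다(반올림 포함)."""
    acc = sum(s * c for s, c in zip(window, taps))
    return (acc + (1 << (shift - 1))) >> shift
```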
[204] 한편, 코딩 효율을 높이기 위하여 상술한 바와 같이 LMCS(luma mapping with chroma scaling)가 적용될 수 있다. LMCS는 루프 리셰이퍼(리셰이핑)로 지칭될 수 있다. 코딩 효율을 높이기 위하여 LMCS의 제어 및/또는 LMCS 관련 정보의 시그널링은 계층적으로 수행될 수 있다.
[205] 도 9는본문서의일실시예에따른 CVS의계층적인구조를예시적으로
도시한다. CVS(coded video sequence)는 SPS(sequence parameter set), PPS(picture parameter set), 타일 그룹 헤더(tile group header), 타일 데이터(tile data), 및/또는 CTU(들)을 포함할 수 있다. 여기서, 타일 그룹 헤더 및 타일 데이터는 각각 슬라이스 헤더 및 슬라이스 데이터로 지칭될 수도 있다.
[206] SPS는 CVS에서사용되도록툴들을인에이블시키기위한플래그들을
원시적으로포함할수있다.또한, SPS는픽처마다바뀌는파라미터들에대한 정보를포함하는 PPS에의하여참조될수있다.부호화된픽처각각은하나 이상의부호화된직사각형도메인의타일들을포함할수있다.상기타일들은 타일그룹들을형성하는래스터스캔으로그룹화될수있다.각타일그룹은 타일그룹헤더라는헤더정보로캡슐화된다.각타일들은부호화된데이터를 포함하는 CTU로구성된다.여기서데이터는원본샘플값들,예측샘플값들,및 그것의루마및크로마성분들 (루마예측샘플값들및크로마예측샘플값들)을 포함할수있다.
[207] 도 10은본문서의일실시예에따른예시적인 LMCS구조를도시한다.도 10의 LMCS구조 (1000)는,적응적부분선형 (adaptive piecewise linear, adaptive PWL) 모델들에기반한루마성분들의인-루프맵핑 (in-loop mapping)부분 (1010)과 크로마성분들에대해루마-의존적인크로마레지듀얼스케일링 (luma-dependent chroma residual scaling)부분 (1020)을포함할수있다.인-루프맵핑부분 (1010)의 역양자화및역변환 (1011),복원 (1012),및인트라예측 (1013)블록들은
맵핑된 (리세이프된 (reshaped))도메인에서적용되는프로세스들을나타낸다. 인-루프맵핑부분 (1010)의루프필터들 (1015),움직임보상또는인터예측 (1017) 블록들,및크로마레지듀얼스케일링부분(1020)의복원(1022),인트라 예측(1023),움직임보상또는인터 예측(1024),루프필터들(1025)블록들은 본래의(맵핑되지않은(non-mapped),리셰이프되지않은)도메인에서적용되는 프로세스들을나타낸다.
[208] 도 10에서 설명되는 바와 같이, LMCS가 인에이블되면, 인버스 리셰이핑(맵핑) 프로세스(1014), 포워드 리셰이핑(맵핑) 프로세스(1018), 및 크로마 스케일링 프로세스(1021) 중 적어도 하나가 적용될 수 있다. 예를 들면, 인버스 리셰이핑 프로세스는 복원된 픽처의 (복원된) 루마 샘플(또는 루마 샘플들 또는 루마 샘플 어레이)에 적용될 수 있다. 인버스 리셰이핑 프로세스는 루마 샘플의 부분 함수 (인버스) 인덱스(piecewise function (inverse) index)를 기반으로 수행될 수 있다. 부분 함수 (인버스) 인덱스는 루마 샘플이 속하는 조각(또는 부분)을 식별할 수 있다. 인버스 리셰이핑 프로세스의 출력은 수정된 (복원) 루마 샘플(또는 수정된 루마 샘플들 또는 수정된 루마 샘플 어레이)이다. LMCS는 타일 그룹(또는 슬라이스), 픽처 또는 더 높은 레벨에서 인에이블되거나 또는 디스에이블될 수 있다.
[209] 포워드리셰이핑프로세스및/또는크로마스케일링프로세스는복원된
픽처를생성하기위해적용될수있다.픽처는루마샘플들과크로마샘플들을 포함할수있다.루마샘플들을갖는복원된픽처는복원된루마픽처라고지칭 될수있고,크로마샘플들을갖는복원된픽처는복원된크로마픽처라고지칭 될수있다.복원된루마픽처와복원된크로마픽처의조합은복원된픽처라고 지칭될수있다.복원된루마픽처는포워드리셰이핑프로세스를기반으로 생성될수있다.예를들어,인터 예측이현재블록에적용되면,포워드 리셰이핑은참조픽처의(복원된)루마샘플을기반으로도출된루마예측 샘플에적용된다.참조픽처의(복원된)루마샘플은인버스리셰이핑
프로세스를기반으로생성되므로,포워드리셰이핑이루마예측샘플에 적용되어리세이프된(매핑된)루마예측샘플이도출될수있다.포워드 리셰이핑프로세스는루마예측샘플의부분함수인덱스를기반으로수행될수 있다.부분함수인덱스는인터예측에사용된참조픽처의루마예측샘플의값 또는루마샘플의값을기반으로도출될수있다.상기(리세이프된/매핑된)루마 예측샘플을기반으로복원샘플이생성될수있다.상기복원샘플에인버스 리셰이핑(매핑)프로세스가적용될수있다.상기인버스리셰이핑(매핑) 프로세스가적용된복원샘플은인버스리셰이핑된(매핑된)복원샘플이라고 불릴수있다.또한,상기인버스리셰이핑된(매핑된)복원샘플은간단히 리셰이핑된(매핑된)복원샘플이라고불릴수있다.인트라예측(또는 IBC(intra block copy))이현재블록에적용되는경우,참조되는현재픽처의복원샘플들에 대하여는인버스리셰이핑프로세스가아직적용되지않았기때문에현재 블록의 예측샘플(들)에포워드매핑은필요하지않을수있다.복원된루마 픽처에서(복원된)루마샘플은(리세이프된)루마예측샘플및대응하는루마 레지듀얼샘플을기반으로생성될수있다.
[210] 복원된 크로마 픽처는 크로마 스케일링 프로세스를 기반으로 생성될 수 있다. 예를 들어, 복원된 크로마 픽처에서의 (복원된) 크로마 샘플은 현재 블록에서의 크로마 예측 샘플 및 크로마 레지듀얼 샘플(cres)를 기반으로 도출될 수 있다. 크로마 레지듀얼 샘플(cres)은 현재 블록에 대한 (스케일링된) 크로마 레지듀얼 샘플(cresScale) 및 크로마 레지듀얼 스케일링 팩터(cScaleInv는 varScale로 지칭될 수 있음)를 기반으로 도출된다. 크로마 레지듀얼 스케일링 팩터는 현재 블록에서 리셰이프된 루마 예측 샘플 값들을 기반으로 계산될 수 있다. 예를 들어, 스케일링 팩터는 리셰이프된 루마 예측 샘플 값들(Y'pred)의 평균 루마 값(ave(Y'pred))에 기초하여 계산될 수 있다. 참고로, 도 10에서 역변환/역양자화를 기반으로 도출된 (스케일링된) 크로마 레지듀얼 샘플은 cresScale, 상기 (스케일링된) 크로마 레지듀얼 샘플에 (인버스) 스케일링 절차를 수행하여 도출되는 크로마 레지듀얼 샘플은 cres로 지칭될 수 있다.
[211] 도 11은 본 문서의 다른 일 실시예에 따른 LMCS 구조를 도시한다. 도 11은 도 10을 참조하여 설명될 것이다. 여기서는, 도 11의 LMCS 구조(1100)와 도 10의 LMCS 구조(1000) 간의 차이가 주로 설명될 것이다. 도 11의 인-루프 맵핑 부분(1110)과 루마-의존적인 크로마 레지듀얼 스케일링 부분(1120)은 도 10의 인-루프 맵핑 부분(1010)과 루마-의존적인 크로마 레지듀얼 스케일링
부분(1020)과동일/유사하게동작할수있다.
[212] 도 11을 참조하면, 루마 복원 샘플들을 기반으로 크로마 레지듀얼 스케일링 팩터를 도출할 수 있다. 이 경우, 복원 블록의 내부 루마 복원 샘플들이 아닌 복원 블록 외부의 주변 루마 복원 샘플들을 기반으로 평균 루마 값(avgY')을 획득할 수 있고 상기 평균 루마 값(avgY')을 기반으로 크로마 레지듀얼 스케일링 팩터를 도출할 수 있다. 여기서 상기 주변 루마 복원 샘플들은 현재 블록의 주변 루마 복원 샘플들일 수 있고, 또는 상기 현재 블록을 포함하는 VPDU(virtual pipeline data units)의 주변 루마 복원 샘플들일 수도 있다. 예를 들어, 대상 블록에 인트라 예측이 적용되는 경우, 상기 인트라 예측을 기반으로 도출된 예측 샘플들을 기반으로 복원 샘플들이 도출될 수 있다. 또한 예를 들어, 상기 대상 블록에 인터 예측이 적용되는 경우, 상기 인터 예측을 기반으로 도출된 예측 샘플들에 포워드 맵핑을 적용하고, 리셰이프된(혹은 포워드 맵핑된) 루마 예측 샘플들을 기반으로 복원 샘플들이 생성될 수 있다.
[213] 비트스트림을통해시그널링되는동영상/영상정보는 LMCS
파라미터들(LMCS에 대한 정보)를 포함할 수 있다. LMCS 파라미터들은 HLS(high level syntax, 슬라이스 헤더 신택스를 포함) 등으로 구성될 수 있다. LMCS 파라미터들 및 구성의 상세한 설명은 후술될 것이다. 전술한 바와 같이, 본 문서(및 이하의 실시예들)에서 설명된 신택스 표들은 인코더 단에서
구성/인코딩될수있고,비트스트림을통해디코더단으로시그널링될수있다. 디코더는신택스표들에서 LMCS에대한정보(신택스구성요소의형태들로)를 파싱 /디코딩할수있다.이하에서설명될하나이상의실시예는조합될수있다. 인코더는 LMCS에관한정보를기반으로현재픽처를인코딩할수있고그리고 디코더는 LMCS에관한정보를기반으로현재픽처를디코딩할수있다.
[214] 루마성분들의인-루프맵핑은압축효율을향상시키기위해동적범위에걸쳐 코드워드들을재분배함으로써입력신호의동적범위를조절할수있다.루마 맵핑을위해 ,포워드맵핑 (리셰이핑 )함수 (FwdMap)와,상기포워드맵핑 함수 (FwdMap)에대응하는인버스맵핑 (리셰이핑)함수 (InvMap)가사용될수 있다.포워드맵핑함수 (FwdMap)는부분선형모델들을이용하여시그널링될수 있고,예를들면부분선형모델들은 16개의조각들 (pieces)또는빈들 (bins)을 가질수있다.상기조각들은동일한길이를가질수있다.일예에서,인버스 맵핑함수 (InvMap)는별도로시그널링되지않을수있고,대신포워드맵핑 함수 (FwdMap)로부터도출될수있다.즉,인버스맵핑은포워드맵핑의함수일 수있다.예를들어,인버스맵핑함수는 y=x를기준으로포워드맵핑함수를 대칭시킨함수일수있다.
[215] 인-루프 (루마)리셰이핑 (reshaping)은리셰이프된도메인에서입력루마
값들 (샘플들)을변경된값들로맵핑하는데사용될수있다.리세이프된값들은 부호화되고,그리고복원후에본래의 (맵핑되지않은,리셰이프되지않은) 도메인으로다시맵핑될수있다.크로마레지듀얼스케일링은루마신호와 크로마신호간의차이를보상하기위해적용될수있다.인-루프리셰이핑은 리셰이퍼모델을위한하이레벨신택스를지정하여수행될수있다.리셰이퍼 모델신택스는부분선형모델 (PWL모델)을시그널링할수있다.부분선형 모델을기반으로포워드룩업테이블 (FwdLUT)및/또는인버스
룩업테이블 (InvLUT)이도출될수있다.일예로서,포워드
룩업테이블 (FwdLUT)이도출된경우,포워드룩업테이블 (FwdLUT)을기반으로 인버스룩업테이블 (InvLUT)이도출될수있다.포워드룩업테이블 (FwdLUT)은 입력루마값들 Yi을변경된값들兄로맵핑하고,인버스룩업테이블 (InvLUT)은 변경된값들에기반한복원값들兄을복원된값들 로맵핑할수있다.복원된 값들 는입력루마값들兄를기반으로도출될수있다.
[216] 일 예에서, SPS는 아래 표 16의 신택스를 포함할 수 있다. 표 16의 신택스는 툴 인에이블링 플래그로서 sps_reshaper_enabled_flag를 포함할 수 있다. 여기서, sps_reshaper_enabled_flag는 리셰이퍼가 CVS(coded video sequence)에서 사용되는지를 지정하는 데 이용될 수 있다. 즉, sps_reshaper_enabled_flag는 SPS에서 리셰이핑을 인에이블링하는 플래그일 수 있다. 일 예에서, 표 16의 신택스는 SPS의 일부분일 수 있다.
[217] [표 16]
[218] 일 예에서, sps_seq_parameter_set_id 및 sps_reshaper_enabled_flag가 나타낼 수 있는 시맨틱스는 아래 표 17과 같을 수 있다.
[219] [표 17]
[220] 일 예에서, 타일 그룹 헤더 또는 슬라이스 헤더는 아래 표 18 또는 표 19의 신택스를 포함할 수 있다.
[221] [표 18]
[222] [표 19]
[223] 상기표 18또는표 19의신택스에포함된신택스요소들의시맨틱스는예를 들어,다음표들에개시된사항을포함할수있다.
[224] [표 20]
[225] [표 21]
[226] 일 예로서, sps_reshaper_enabled_flag가 파싱되면, 타일 그룹 헤더는 룩업 테이블들(FwdLUT 및/또는 InvLUT)을 구성하는 데 사용되는 추가적인 데이터(예컨대, 상기 표 18 또는 19에 포함된 정보)를 파싱할 수 있다. 이를 위해, 리셰이퍼 플래그의 상태가 슬라이스 헤더 또는 타일 그룹 헤더에서 확인될 수 있다. sps_reshaper_enabled_flag가 참(또는 1)인 경우, 추가적인 플래그, tile_group_reshaper_model_present_flag(또는 slice_reshaper_model_present_flag)가 파싱될 수 있다. tile_group_reshaper_model_present_flag(또는 slice_reshaper_model_present_flag)의 목적은 리셰이퍼 모델의 존재를 지시하는 데 있을 수 있다. 예를 들어, tile_group_reshaper_model_present_flag(또는 slice_reshaper_model_present_flag)가 참(또는 1)인 경우, 현재 타일 그룹(또는 현재 슬라이스)에 대해 리셰이퍼가 존재한다고 지시될 수 있다. tile_group_reshaper_model_present_flag(또는 slice_reshaper_model_present_flag)가 거짓(또는 0)인 경우, 현재 타일 그룹(또는 현재 슬라이스)에 대해 리셰이퍼가 존재하지 않는다고 지시될 수 있다.
[227] 리셰이퍼가 존재하고 그리고 리셰이퍼가 현재 타일 그룹(또는 현재 슬라이스)에서 인에이블되었다면, 리셰이퍼 모델(예컨대, tile_group_reshaper_model() 또는 slice_reshaper_model())은 프로세싱될 수 있고, 이에 더하여 추가적인 플래그, tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)도 파싱될 수 있다. tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)는 리셰이퍼 모델이 현재 타일 그룹(또는 슬라이스)에 사용되었는지를 지시할 수 있다. 예를 들어, tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)가 0(또는 거짓)이면, 리셰이퍼 모델은 현재 타일 그룹(또는 현재 슬라이스)에 사용되지 않은 것으로 지시될 수 있다. tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)가 1(또는 참)이면, 리셰이퍼 모델은 현재 타일 그룹(또는 슬라이스)에 사용된 것으로 지시될 수 있다.
[228] 일 예로서, 예를 들어, tile_group_reshaper_model_present_flag(또는 slice_reshaper_model_present_flag)가 참(또는 1)이고 tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)가 거짓(또는 0)일 수 있다. 이는, 리셰이퍼 모델이 존재하지만 현재 타일 그룹(또는 슬라이스)에서 사용되지 않았음을 의미한다. 이러한 경우 리셰이퍼 모델은 다음 타일 그룹들(또는 슬라이스들)에서 사용될 수 있다. 다른 예로서, tile_group_reshaper_enable_flag가 참(또는 1)이고 tile_group_reshaper_model_present_flag가 거짓(또는 0)일 수도 있다.
[229] 리셰이퍼 모델(예컨대, tile_group_reshaper_model() 또는 slice_reshaper_model()) 및 tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)가 파싱되면, 크로마 스케일링을 위해 필요한 조건들이 존재하는지 여부가 판단(평가)될 수 있다. 상기 조건들은 조건 1(현재 타일 그룹/슬라이스가 인트라 부호화되지 않았을 것) 및/또는 조건 2(현재 타일 그룹/슬라이스가 루마 및 크로마에 대한 두 개의 구분된 코딩 쿼드 트리 구조로 분할되지 않았을 것, 즉 현재 타일 그룹/슬라이스가 듀얼 트리 구조가 아닐 것)를 포함할 수 있다. 조건 1 및/또는 조건 2가 참이고 및/또는 tile_group_reshaper_enable_flag(또는 slice_reshaper_enable_flag)가 참(또는 1)이라면, tile_group_reshaper_chroma_residual_scale_flag(또는 slice_reshaper_chroma_residual_scale_flag)가 파싱될 수 있다. tile_group_reshaper_chroma_residual_scale_flag(또는 slice_reshaper_chroma_residual_scale_flag)가 인에이블되면(1 또는 참이라면), 현재 타일 그룹(또는 슬라이스)에 대해 크로마 레지듀얼 스케일링이 인에이블됨이 지시될 수 있다. tile_group_reshaper_chroma_residual_scale_flag(또는 slice_reshaper_chroma_residual_scale_flag)가 디스에이블되면(0 또는 거짓이라면), 현재 타일 그룹(또는 슬라이스)에 대해 크로마 레지듀얼 스케일링이 디스에이블됨이 지시될 수 있다.
[230] 상술된 리셰이핑의 목적은 룩업 테이블들(FwdLUT 및/또는 InvLUT)을 구성하기 위해 필요한 데이터를 시그널링하는 데 있을 수 있다. 시그널링된 데이터를 기반으로 구성된 룩업 테이블들은 허용 가능한 루마 값 범위의 분포를 복수 개의 빈들(예컨대, 16개)로 나눌 수 있다. 따라서, 주어진 빈들 내에 있는 루마 값들은 변경된 루마 값들에 맵핑될 수 있다.
[231] 도 12는예시적인포워드맵핑을나타내는그래프를보여준다.도 12에서는
예시적으로 5개의빈들만이도시된다.
[232] 도 12를 참조하면, x축은 입력 루마 값들을 나타내고, y축은 변경된 출력 루마 값들을 나타낸다. x축은 5개의 빈들 또는 조각들로 나뉘어지고, 각 빈은 동일한 길이를 가진다. 즉, 변경된 루마 값들에 맵핑된 5개의 빈들은 서로 동일한 길이를 가진다. 포워드 룩업테이블(FwdLUT)은 타일 그룹 헤더에서 이용 가능한 데이터(예컨대, 리셰이퍼 데이터)를 사용하여 구성될 수 있고, 이로부터 맵핑이 용이해질 수 있다.
[233] 일실시예에서 ,상기빈인덱스들과관련된출력피벗지점 (output pivot
points)들이계산될수있다.출력피벗지점들은루마코드워드리셰이핑의출력 범위의최소및최대경계들을설정 (마킹)할수있다.출력피벗지점들을 계산하는과정은코드워드들의수의부분누적 (piecewise cumulative)분포 함수를기반으로수행될수있다.상기출력피벗범위는사용될빈들의최대 개수및룩업테이블 (FwdLUT또는 InvLUT)의크기를기반으로분할될수있다. 일예로서,상기출력피벗범위는빈들의최대개수와룩업테이블의크기간의 곱을기반으로분할될수있다.예를들어,빈들의최대개수와룩업테이블의 크기간의곱이 1024인경우,상기출력피벗범위는 1024개의엔트리들로 분할될수있다.상기출력피벗범위의분할은스케일링팩터를
기반으로 (이용하여)수행 (적용또는달성)될수있다.일예에서,스케일링 팩터는아래수학식 6을기반으로도출될수있다.
[234] [수식 6]
SF = ( y2 - y1 ) * ( 1 << FP_PREC ) + c
[235] 상기 수학식 6에서, SF는 스케일링 팩터를 나타내고, y1 및 y2는 각각의 빈에 대응하는 출력 피벗 지점들을 나타낸다. 또한, FP_PREC 및 c는 사전에 결정된 상수들일 수 있다. 상기 수학식 6을 기반으로 결정되는 스케일링 팩터는 포워드 리셰이핑을 위한 스케일링 팩터로 지칭될 수 있다.
[236] 다른실시예에서,인버스리셰이핑 (인버스맵핑)과관련하여,빈들의정의된 범위 (예컨대 , reshaper_model_min_bin_idx에서
reshape_model_max_bin_idx까지)에대해,포워드룩업테이블 (FwdLUT)의맵핑된 피벗지점들에대응하는입력리세이프된피벗지점들및맵핑된인버스출력 피벗지점들 (빈인덱스*초기코드워드들의수로주어짐)이패치된다.다른 예에서,스케일링팩터 (SF)는아래수학식 7를기반으로도출될수있다.
[237] [수식7]
SF = ( y2 - y1 ) * ( 1 << FP_PREC ) / ( x2 - x1 )
[238] 상기 수학식 7에서, SF는 스케일링 팩터를 나타내고, x1 및 x2는 입력 피벗 지점들을 나타내고, y1 및 y2는 각각의 조각(빈)에 대응하는 출력 피벗 지점들을 나타낸다. 여기서, 입력 피벗 지점들은 포워드 룩업테이블(FwdLUT)를 기반으로 맵핑된 피벗 지점들일 수 있고, 그리고 출력 피벗 지점들은 인버스 룩업테이블(InvLUT)를 기반으로 인버스 맵핑된 피벗 지점들일 수 있다. 또한, FP_PREC는 사전에 결정된 상수일 수 있다. 수학식 7의 FP_PREC은 수학식 6의 FP_PREC과 동일하거나 상이할 수 있다. 상기 수학식 7을 기반으로 결정되는 스케일링 팩터는 인버스 리셰이핑을 위한 스케일링 팩터로 지칭될 수 있다. 인버스 리셰이핑 도중에, 수학식 7의 스케일링 팩터를 기반으로 입력 피벗 지점들의 분할이 수행될 수 있다. 분할된 입력 피벗 지점들을 기반으로, 0에서 최소 빈 인덱스(reshaper_model_min_bin_idx)까지 및/또는 최소 빈 인덱스(reshaper_model_min_bin_idx)에서 최대 빈 인덱스(reshaper_model_max_bin_idx)까지의 범위에 속하는 빈 인덱스들을 위해 최소 및 최대 빈 값들에 대응하는 피벗 값들이 지정된다.
[239] 아래 표 22는 일 실시예에 따른 리셰이퍼 모델의 신택스를 나타낸다. 상기 리셰이퍼 모델은 타일 그룹 리셰이퍼 모델로 불릴 수 있다. 여기서, 리셰이퍼 모델은 예시적으로 타일 그룹 리셰이퍼로 설명되었으나, 반드시 본 실시예에 의하여 본 명세서가 제한되는 것은 아니다. 예를 들어, 상기 리셰이퍼 모델은 APS에 포함될 수도 있고, 또는 타일 그룹 리셰이퍼 모델은 슬라이스 리셰이퍼 모델로 지칭될 수도 있다.
[240] [표 22]
[241] 상기표 22의신택스에포함된신택스요소들의시맨틱스는예를들어,다음 표에개시된사항을포함할수있다.
[242] [표 23]
[243] 상기 리셰이퍼 모델은 reshape_model_min_bin_idx, reshape_model_delta_max_bin_idx, reshaper_model_bin_delta_abs_cw_prec_minus1, reshape_model_bin_delta_abs_CW[i], 및
reshaper_model_bin_delta_sign_CW_flag[i]를구성요소들로서포함한다.
이하에서는각각의구성요소들이상세하게설명될것이다.
[244] reshape_model_min_bin_idx는 리셰이퍼 구성 과정에서 사용되는 최소 빈(또는 조각) 인덱스를 나타낸다. reshape_model_min_bin_idx의 값은 0부터 MaxBinIdx까지일 수 있다. 예를 들어, MaxBinIdx는 15일 수 있다.
[245] 일 실시예에서, 타일 그룹 리셰이퍼 모델은 두 개의 인덱스들(또는 파라미터들), reshaper_model_min_bin_idx 및 reshaper_model_delta_max_bin_idx를 우선적으로 파싱할 수 있다. 이들 두 개의 인덱스들을 기반으로 최대 빈 인덱스(reshaper_model_max_bin_idx)가 도출(결정)될 수 있다. reshape_model_delta_max_bin_idx는 허용된 최대 빈 인덱스 MaxBinIdx에서 리셰이퍼 구성 과정에서 사용되는 실질적인 최대 빈 인덱스(reshape_model_max_bin_idx)를 뺀 것을 나타낼 수 있다. 최대 빈 인덱스(reshape_model_max_bin_idx)의 값은 0부터 MaxBinIdx까지일 수 있다. 예를 들어, MaxBinIdx는 15일 수 있다. 일 예로서, reshape_model_max_bin_idx의 값은 아래 수학식 8을 기반으로 도출될 수 있다.
[246] [수식 8]
reshape_model_max_bin_idx = MaxBinIdx - reshape_model_delta_max_bin_idx
[247] 최대 빈 인덱스(reshaper_model_max_bin_idx)는 최소 빈 인덱스(reshaper_model_min_bin_idx)보다 크거나 또는 같을 수 있다. 최소 빈 인덱스는 최소 허용된 빈 인덱스 또는 허용된 최소 빈 인덱스로 지칭될 수 있고, 또한 최대 빈 인덱스는 최대 허용된 빈 인덱스 또는 허용된 최대 빈 인덱스로 지칭될 수 있다.
[248] 최대 빈 인덱스(reshape_model_max_bin_idx)가 도출되었다면, 신택스 구성요소 reshaper_model_bin_delta_abs_cw_prec_minus1이 파싱될 수 있다. reshaper_model_bin_delta_abs_cw_prec_minus1을 기반으로 신택스 reshape_model_bin_delta_abs_CW[i]를 나타내는 데 사용되는 비트들의 개수가 결정될 수 있다. 예를 들어, reshape_model_bin_delta_abs_CW[i]를 나타내는 데 사용되는 비트들의 개수는 reshaper_model_bin_delta_abs_cw_prec_minus1에 1을 더한 것과 동일할 수 있다.
[249] reshape_model_bin_delta_abs_CW[i]는 i번째 빈의 절대 델타 코드워드 값(델타 코드워드의 절대값)과 관련된 정보를 나타낼 수 있다. 일 예에서, i번째 빈의 절대 델타 코드워드 값이 0보다 크면, reshaper_model_bin_delta_sign_CW_flag[i]가 파싱될 수 있다. reshaper_model_bin_delta_sign_CW_flag[i]를 기반으로 reshape_model_bin_delta_abs_CW[i]의 부호가 결정될 수 있다. 일 예에서, reshaper_model_bin_delta_sign_CW_flag[i]가 0(또는 거짓)이면, 대응하는 변수 RspDeltaCW[i]는 양의 부호일 수 있다. 이외의 경우(즉, reshaper_model_bin_delta_sign_CW_flag[i]가 1(또는 참)이면), 대응하는 변수 RspDeltaCW[i]는 음의 부호일 수 있다. reshaper_model_bin_delta_sign_CW_flag[i]가 존재하지 않는 경우 0(또는 거짓)으로 간주될 수 있다.
[250] 일 실시예에서, 상술된 reshape_model_bin_delta_abs_CW[i] 및 reshape_model_bin_delta_sign_CW_flag[i]를 기반으로 변수 RspDeltaCW[i]가 도출될 수 있다. RspDeltaCW[i]는 i번째 빈의 델타 코드워드의 값으로 지칭될 수 있다. 예를 들어, RspDeltaCW[i]는 아래 수학식 9를 기반으로 도출될 수 있다.
[251] [수식 9]
RspDeltaCW[i] = ( 1 - 2 * reshape_model_bin_delta_sign_CW[i] ) * reshape_model_bin_delta_abs_CW[i]
[252] 상기 수학식 9에서, reshape_model_bin_delta_sign_CW[i]는 RspDeltaCW[i]의 부호와 관련된 정보일 수 있다. 예를 들어, reshape_model_bin_delta_sign_CW[i]는 전술된 reshaper_model_bin_delta_sign_CW_flag[i]와 동일할 수 있다. 여기서 i는 최소 빈 인덱스(reshaper_model_min_bin_idx)에서 최대 빈 인덱스(reshaper_model_max_bin_idx)까지의 범위에 있을 수 있다.
[253] RspDeltaCW[i]를 기반으로 변수(또는 어레이) RspCW[i]가 도출될 수 있다. RspCW[i]는 i번째 빈에 할당(분배)되는 코드워드들의 수를 나타낼 수 있다. 즉, 각 빈에 할당(분배)되는 코드워드들의 수는 어레이 형태로 저장될 수 있다. 일 예에서, i가 전술된 reshaper_model_min_bin_idx보다 작거나 또는 reshaper_model_max_bin_idx보다 크면(i < reshaper_model_min_bin_idx 또는 reshaper_model_max_bin_idx < i), RspCW[i]는 0일 수 있다. 이외의 경우(즉, i가 전술된 reshaper_model_min_bin_idx보다 크거나 같고 그리고 reshaper_model_max_bin_idx보다 작거나 같으면(reshaper_model_min_bin_idx <= i <= reshaper_model_max_bin_idx)), RspCW[i]는 상술된 RspDeltaCW[i], 루마 비트 뎁스(BitDepthY) 및 MaxBinIdx를 기반으로 도출될 수 있다. 이 경우, 예를 들면, RspCW[i]는 아래 수학식 10을 기반으로 도출될 수 있다.
[254] [수식 10]
RspCW[ i ] = OrgCW + RspDeltaCW[ i ]
[255] 상기 수학식 10에서, OrgCW는 사전에 결정된 값일 수 있으며, 예를 들어 아래 수학식 11을 기반으로 결정될 수 있다.
[256] [수식 11]
OrgCW = ( 1 << BitDepthY ) / ( MaxBinIdx + 1 )
[257] 상기 수학식 11에서, BitDepthY는 루마 비트 뎁스이고, 그리고 MaxBinIdx는 15일 수 있다. 예를 들어, BitDepthY가 10이라면 OrgCW는 64일 수 있다.
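수학식 11의 산술을 확인하기 위한 스케치입니다. 함수 이름은 설명을 위한 가정이며, 본문의 예시(BitDepthY = 10, MaxBinIdx = 15)를 그대로 대입해 봅니다.

```python
def org_cw(bit_depth_y: int, max_bin_idx: int) -> int:
    """수학식 11: OrgCW = (1 << BitDepthY) / (MaxBinIdx + 1).

    전체 루마 코드워드 범위 (1 << BitDepthY)를 빈 개수 (MaxBinIdx + 1)로
    균등 분할한 빈당 기본 코드워드 수를 돌려줍니다.
    """
    return (1 << bit_depth_y) // (max_bin_idx + 1)


print(org_cw(10, 15))  # 64: 1024개 코드워드를 16개 빈에 균등 분배
```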
[258] 상술된 OrgCW를 기반으로 변수 InputPivot[i]가 도출될 수 있다. 예를 들어, InputPivot[i]는 아래 수학식 12를 기반으로 도출될 수 있다.
[259] [수식 12]
InputPivot[ i ] = i * OrgCW
[260] 상술된 RspCW[i], InputPivot[i] 및 OrgCW를 기반으로 변수들 ReshapePivot[i], ScaleCoef[i] 및 InvScaleCoeff[i]가 도출될 수 있으며, 예를 들어 ReshapePivot[i], ScaleCoef[i] 및 InvScaleCoeff[i]는 아래 표 24를 기반으로 도출될 수 있다.
[261] [표 24]
for( i = 0; i <= MaxBinIdx; i++ ) {
  ReshapePivot[ i + 1 ] = ReshapePivot[ i ] + RspCW[ i ]
  ScaleCoef[ i ] = ( RspCW[ i ] * (1 << shiftY) + (1 << (Log2(OrgCW) - 1)) ) >> Log2(OrgCW)
  if( RspCW[ i ] == 0 )
    InvScaleCoeff[ i ] = 0
  else
    InvScaleCoeff[ i ] = OrgCW * (1 << shiftY) / RspCW[ i ]
}
[262] 상기 표 24에서, i가 0부터 MaxBinIdx까지 증가하는 for 루프 구문이 이용될 수 있으며, shiftY는 비트 시프팅을 위해 사전에 결정된 상수일 수 있다. 상기 표 24를 참조하면, InvScaleCoeff[i]가 RspCW[i]를 기반으로 도출되는지 여부는, RspCW[i]가 0인지 여부에 따른 조건절을 기반으로 결정될 수 있다.
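표 24의 for 루프를 파이썬으로 옮긴 스케치입니다. `shiftY = 14`와 함수/변수 이름은 설명을 위한 가정이며, 본 실시예를 제한하지 않습니다.

```python
from math import log2


def derive_pivots_and_scales(rsp_cw, org_cw, shift_y=14):
    """표 24의 루프 스케치: ReshapePivot, ScaleCoef, InvScaleCoeff 도출.

    rsp_cw: 빈별 코드워드 수 RspCW[i] (길이 MaxBinIdx + 1의 리스트).
    org_cw: 수학식 11의 OrgCW (2의 거듭제곱이라고 가정).
    """
    n = len(rsp_cw)                       # MaxBinIdx + 1개의 빈
    log2_org = int(log2(org_cw))
    reshape_pivot = [0] * (n + 1)
    scale_coef = [0] * n
    inv_scale_coeff = [0] * n
    for i in range(n):
        # 수학식 13: 다음 피벗 = 현재 피벗 + 현재 빈의 코드워드 수
        reshape_pivot[i + 1] = reshape_pivot[i] + rsp_cw[i]
        # 반올림 포함 고정소수점 스케일 (shiftY만큼 확대)
        scale_coef[i] = (rsp_cw[i] * (1 << shift_y)
                         + (1 << (log2_org - 1))) >> log2_org
        # [262]의 조건절: RspCW[i]가 0이면 역스케일도 0
        inv_scale_coeff[i] = (0 if rsp_cw[i] == 0
                              else (org_cw * (1 << shift_y)) // rsp_cw[i])
    return reshape_pivot, scale_coef, inv_scale_coeff
```

모든 빈이 OrgCW개의 코드워드를 받는 균등 분배의 경우 스케일이 항등(1.0에 해당하는 `1 << shiftY`)이 되는 것을 확인할 수 있습니다.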
[263] 크로마 레지듀얼 스케일링 팩터를 도출하기 위한 변수 ChromaScaleCoef[i]는 아래 표 25를 기반으로 도출될 수 있다.
[264] [표 25]
if( RspCW[ i ] == 0 )
  ChromaScaleCoef[ i ] = ( 1 << shiftC )
else
  ChromaScaleCoef[ i ] = ChromaResidualScaleLut[ RspCW[ i ] >> 1 ]
[265] 상기 표 25에서, shiftC는 비트 시프팅을 위해 사전에 결정된 상수일 수 있다. 상기 표 25를 참조하면, ChromaScaleCoef[i]가 어레이 ChromaResidualScaleLut를 기반으로 도출되는지 여부는, RspCW[i]가 0인지 여부에 따른 조건절을 기반으로 결정될 수 있다. 여기서, ChromaResidualScaleLut는 사전에 결정된 어레이일 수 있다. 다만, 어레이 ChromaResidualScaleLut는 예시적인 것일 뿐이며, 본 실시예가 표 25에 의해 반드시 제한되는 것은 아니다.
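표 25의 조건절 구조를 보여주는 스케치입니다. `shift_c = 11`과 LUT 인덱싱 방식(`rsp_cw_i >> 1`)은 설명을 위한 가정이며, 실제 어레이 값과 인덱싱은 구현에 따라 다를 수 있습니다.

```python
def chroma_scale_coef(rsp_cw_i, lut, shift_c=11):
    """표 25의 조건절 스케치.

    RspCW[i]가 0이면 항등 스케일 (1 << shiftC)을 사용하고,
    그렇지 않으면 사전에 결정된 LUT에서 크로마 스케일 팩터를 참조합니다.
    (LUT 인덱싱 방식은 가정입니다.)
    """
    if rsp_cw_i == 0:
        return 1 << shift_c
    return lut[rsp_cw_i >> 1]


print(chroma_scale_coef(0, []))  # 2048: RspCW[i] == 0이면 LUT를 참조하지 않음
```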
[266] 이상 i번째 변수들을 도출하기 위한 방법이 설명되었다. i+1번째 변수들은 ReshapePivot[i+1]을 기반으로 할 수 있고, 예를 들어 ReshapePivot[i+1]은 수학식 13을 기반으로 도출될 수 있다.
[267] [수식 13]
ReshapePivot[ i + 1 ] = ReshapePivot[ i ] + RspCW[ i ]
[268] 상기 수학식 13에서, RspCW[i]는 전술된 수학식 10 및/또는 11을 기반으로 도출될 수 있다. 상술된 실시예들 및 예시들을 기반으로 루마 맵핑이 수행될 수 있으며, 상술된 신택스 및 그것에 포함된 구성요소들은 단지 예시적인 표현일 수 있고 실시예들이 상술된 표들이나 수학식들에 의해 제한되는 것은 아니다.
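위에서 도출된 피벗과 스케일로 루마 샘플을 조각별 선형(piecewise linear) 매핑하는 과정의 가정적 스케치입니다. 함수 이름과 반올림 방식(`shift_y = 14`)은 본문에 명시되지 않은 가정이며, 루마 맵핑의 동작 원리를 보여주기 위한 예시일 뿐입니다.

```python
def forward_map_luma(x, reshape_pivot, input_pivot, scale_coef,
                     log2_org_cw, shift_y=14):
    """루마 샘플 x를 매핑하는 조각별 선형 매핑 스케치 (세부는 가정).

    x가 속한 빈(매핑 인덱스)을 찾고, 그 빈의 피벗에서 출발하여
    고정소수점 스케일 계수로 빈 내부 오프셋을 변환합니다.
    """
    idx = x >> log2_org_cw  # 빈 폭이 OrgCW(2의 거듭제곱)이라는 가정하의 빈 인덱스
    offset = x - input_pivot[idx]
    return reshape_pivot[idx] + ((scale_coef[idx] * offset
                                  + (1 << (shift_y - 1))) >> shift_y)
```

균등 분배(모든 빈이 OrgCW개의 코드워드, 스케일이 항등)에서는 입력이 그대로 출력되는 항등 매핑이 됩니다.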
[269] 예를 들어, 인코딩 장치/디코딩 장치는 APS를 기반으로 ALF 파라미터 및/또는 리셰이핑 모델을 적응적으로 도출할 수 있다. 상기 ALF 파라미터 및/또는 리셰이핑 모델은 APS에 포함된 파라미터를 기반으로 도출될 수 있다. 또한, 예를 들어 상기 헤더 정보를 통하여 복수의 APS에 대한 ID 정보가 시그널링될 수 있고, 이를 통하여 동일 픽처/슬라이스 내 블록 단위로 서로 다른 ALF 및/또는 리셰이핑 모델을 적용할 수 있다.
[270] 예를 들어, 본 문서의 일 실시예에서는 타일 그룹 헤더 또는 슬라이스 헤더 내에 ALF 파라미터가 조건적으로 존재할 수 있다. 이는 VCL(Video Coding Layer) NAL(Network Abstraction Layer) 유닛의 비트스트림 추출이 필요한 애플리케이션에서 유용할 수 있다. 이러한 애플리케이션을 용이하게 하기 위해 타일 그룹 헤더 또는 슬라이스 헤더도 ALF 파라미터를 가지는 것이 유리할 수 있다.
[271] 이를 위해, 타일 그룹 헤더(신택스) 또는 슬라이스 헤더(신택스)는 새로운 신택스 요소를 포함할 수 있다. 예를 들어, 새로운 신택스 요소(ALF의 사용 여부를 나타내는 플래그 신택스 요소)는 ALF 사용 플래그라 불릴 수도 있다.
[272] 일예에서 ,타일그룹헤더또는슬라이스헤더는아래표 26또는표 27의 신택스를포함할수있다.일예에서,표 26또는표 27은타일그룹헤더또는 슬라이스헤더의일부분일수있다.
[273] [표 26]
[274] [표 27]
[275] 상기 표 26 또는 상기 표 27의 신택스에 포함된 신택스 요소들의 시맨틱스는 예를 들어, 다음 표들에 개시된 사항을 포함할 수 있다.
[276] [표 28]
[278] 예를 들어, 상기 ALF 사용 플래그가 인에이블을 나타내는 경우(예를 들어, 해당 신택스 요소의 값이 1), ALF 데이터(alf_data())는 타일 그룹 헤더 또는 슬라이스 헤더 내에 포함될 수 있으며, 이를 기반으로 ALF를 위한 필터를 도출할 수 있다. 또는 ALF 사용 플래그가 디스에이블을 나타내는 경우(예를 들어, 해당 신택스 요소의 값이 0), APS 인덱스 정보(예를 들어, tile_group_aps_id 신택스 요소 또는 slice_aps_id 신택스 요소)가 타일 그룹 헤더 또는 슬라이스 헤더에 포함될 수 있고, APS 인덱스 정보가 가리키는 APS 내의 ALF 데이터를 기반으로 ALF를 위한 필터를 도출할 수 있다. 여기서, APS 인덱스 정보는 APS ID 정보라 불릴 수도 있다.
[279] 예를들어,본문서의다른실시예에서는 APS에 ALF파라미터와함께
리셰이퍼파라미터가포함될수있다.기존의리셰이퍼파라미터는타일그룹 헤더또는슬라이스헤더에포함되었으나, APS내에서리셰이퍼파라미터가 ALF데이터와함께파싱되도록인캡슐레이션하는것이유리할수있다.따라서, 다른실시예에서는 APS에리셰이퍼파라미터및 ALF파라미터가함께포함될 수있다.여기서 ,리셰이퍼데이터는리셰이퍼파라미터를포함할수있고, 리셰이퍼파라미터는 LMCS파라미터라불릴수도있다.
[280] 이를 위해, SPS에서 ALF 및 리셰이퍼 툴 플래그가 우선 평가(evaluate)될 수 있다. 예를 들어, ALF 및/또는 리셰이퍼의 사용의 인에이블/디스에이블에 대한 결정은 SPS에 의해 지정될 수 있다. 또는 상기 결정은 SPS에 포함된 정보 또는 신택스 요소에 의해 결정될 수 있다. 상기 2개의 툴 함수들(ALF 가용 플래그 및 리셰이퍼 가용 플래그)은 서로 독립적으로 동작될 수 있다. 즉, 상기 2개의 툴 함수들은 SPS 내에 서로 독립적으로 포함될 수 있다. 예를 들어, VCL NAL 유닛이 디코딩되기 전에 ALF 및 리셰이퍼는 테이블을 구성하는 것이 요구될 수 있으므로, APS 내에 ALF 데이터 및 리셰이퍼 데이터를 인캡슐레이팅하는 것은 기능의 유사한 메커니즘들의 툴들(tools of similar mechanisms of functionality)을 함께 그룹화하여 기능을 제공하는 데 도움이 될 수 있다.
[281] 예를 들어, 상술한 바와 같이 SPS에서의 ALF 가용 플래그(ex. sps_alf_enabled_flag)를 통하여 ALF 툴의 가용 여부가 판단될 수 있고, 이후 ALF가 현재 픽처 또는 슬라이스에서 가용한지 여부가 상기 헤더 정보에서의 ALF 가용 플래그(ex. slice_alf_enabled_flag)를 통하여 지시될 수 있다. 상기 헤더 정보에서의 ALF 가용 플래그의 값이 1인 경우, ALF 관련 APS ID 개수 신택스 요소가 파싱/시그널링될 수 있다. 그리고 상기 ALF 관련 APS ID 개수 신택스 요소를 기반으로 도출된 ALF 관련 APS ID 개수만큼의 ALF 관련 APS ID 신택스 요소가 파싱/시그널링될 수 있다. 즉, 이는 다수의 APS가 하나의 헤더 정보를 통하여 파싱 또는 참조될 수 있음을 나타낼 수 있다.
[282] 또한, 예를 들어 SPS에서의 LMCS 가용 플래그(ex. sps_reshaper_enabled_flag)를 통하여 LMCS(또는 리셰이핑) 툴의 가용 여부가 판단될 수 있다. 상기 sps_reshaper_enabled_flag는 sps_lmcs_enabled_flag로 지칭될 수 있다. LMCS가 현재 픽처 또는 슬라이스에서 가용한지 여부가 상기 헤더 정보에서의 LMCS 가용 플래그(ex. slice_lmcs_enabled_flag)를 통하여 지시될 수 있다. 상기 헤더 정보에서의 LMCS 가용 플래그의 값이 1인 경우, LMCS 관련 APS ID 신택스 요소가 파싱/시그널링될 수 있다. LMCS 관련 APS ID 신택스 요소가 가리키는 APS에서 LMCS 모델(리셰이퍼 모델)이 도출될 수 있다. 예를 들어, 상기 APS는 LMCS 데이터 필드를 더 포함할 수 있고, 상기 LMCS 데이터 필드는 상술한 LMCS 모델(리셰이퍼 모델) 정보를 포함할 수 있다.
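[281]-[282]에서 설명된 시그널링 흐름을 의사코드 수준으로 요약한 스케치입니다. 딕셔너리 키로 쓴 신택스 요소 이름과 구조(특히 `num_alf_aps_ids`, `alf_aps_id` 리스트)는 본문 설명에 기반한 가정이며, 실제 비트스트림 파서의 구현을 의미하지 않습니다.

```python
def parse_header_tool_info(sps, header):
    """SPS 가용 플래그 -> 헤더 가용 플래그 -> APS ID 순의 조건적 파싱 스케치."""
    info = {}
    # ALF: SPS에서 툴이 가용하고 헤더에서 인에이블된 경우에만
    # ALF 관련 APS ID 개수 및 그 개수만큼의 APS ID가 파싱됨 ([281] 참조).
    if sps["sps_alf_enabled_flag"] and header["slice_alf_enabled_flag"]:
        info["alf_aps_ids"] = [header["alf_aps_id"][i]
                               for i in range(header["num_alf_aps_ids"])]
    # LMCS: 헤더의 LMCS 가용 플래그가 1이면 LMCS 관련 APS ID가 파싱되고,
    # 그 APS의 LMCS 데이터 필드에서 리셰이퍼 모델이 도출됨 ([282] 참조).
    if sps["sps_lmcs_enabled_flag"] and header["slice_lmcs_enabled_flag"]:
        info["lmcs_aps_id"] = header["lmcs_aps_id"]
    return info
```

하나의 헤더가 다수의 ALF 관련 APS ID를 나를 수 있다는 점([281]의 마지막 문장)이 리스트 구성으로 드러납니다.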
[283] 일 예에서, SPS는 아래 표 30의 신택스를 포함할 수 있다. 표 30의 신택스는 툴 가용 플래그들을 포함할 수 있다. 예를 들어, sps_reshaper_enabled_flag 신택스 요소는 리셰이퍼가 CVS에서 사용되는지를 지정하는 데 이용될 수 있다. 예를 들어, sps_alf_enabled_flag 신택스 요소는 CVS에서 ALF를 인에이블링하는 플래그일 수 있다. 또한, sps_reshaper_enabled_flag 신택스 요소는 CVS에서 리셰이핑을 인에이블링하는 플래그일 수 있다. 일 예에서, 표 30의 신택스는 SPS의 일부분일 수 있다.
[284] [표 30]
[285] 일예에서, sps_alf_enabled_flag및 sps_reshaper_enabled_flag가나타낼수있는 시맨틱스는아래표 31과같을수있다.
[286] [표 31]
[287] 또한일예에서, APS는아래표 32또는표 33의신택스를포함할수있다.일 예에서 ,표 32또는표 33은 APS의일부분일수있다.
[288] [표 32]
[289] [표 33]
[290] 상기 표 32 또는 표 33의 신택스에 포함된 aps_extension_flag 신택스 요소 및 aps_extension_data_flag 신택스 요소는 상술한 여러 실시예들에 의해 설명된 바와 동일한 정보를 나타낼 수 있다.
[291] 또한일예에서,타일그룹헤더또는슬라이스헤더는아래표 34또는표 35의 신택스를포함할수있다.일예에서,표 34또는표 35는타일그룹헤더또는 슬라이스헤더의일부분일수있다.
[292] [표 34]
[293] [표 35]
[294] 상기표 34또는표 35의신택스에포함된신택스요소들중
tile_group_aps_id_alf신택스요소, tile_group_aps_id_reshaper신택스요소, slice_aps_id_alf신택스요소및/또는 slice_aps_id_reshaper신택스요소의 시맨틱스는다음표들에개시된사항을포함할수있으며 ,그외에다른신택스 요소들은상술한여러실시예들에의해설명된바와동일한정보를나타낼수 있다.
[295] [표 36]
[296] [표 37]
[297] 또는예를들어 , ALF데이터 (예를들어 , alf_data())뿐만아니라리셰이퍼모델에 대한정보 (예를들어, tile_group_reshaper_model())를파싱하기위해동일한
APS가사용될수도있다.이경우,타일그룹헤더또는슬라이스헤더는아래표 38또는표 39의신택스를포함할수있다.일예에서,표 38또는표 39는타일 그룹헤더또는슬라이스헤더의일부분일수있다.
[298] [표 38]
[299] [표 39]
[300] 상기 표 38 또는 표 39의 신택스에 포함된 신택스 요소들은 상술한 여러 실시예들에 의해 설명된 바와 동일한 정보를 나타낼 수 있다. 여기서, 하나의 APS에서 ALF 및 리셰이퍼 모델에 접근하기 위해 공통 tile_group_aps_id 신택스 요소 또는 slice_aps_id 신택스 요소가 파싱될 수 있다.
[301] 상기 표 34 또는 상기 표 35에 따른 타일 그룹 헤더 또는 슬라이스 헤더는 ALF 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_alf 신택스 요소 또는 slice_aps_id_alf 신택스 요소) 및 리셰이퍼 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_reshaper 신택스 요소 또는 slice_aps_id_reshaper 신택스 요소)를 구분하여 포함할 수 있다.
[302] 상기 표 38 또는 상기 표 39에 따른 타일 그룹 헤더 또는 슬라이스 헤더는 하나의 APS 인덱스 정보(예를 들어, tile_group_aps_id 신택스 요소 또는 slice_aps_id 신택스 요소)를 포함하며, 상기 하나의 APS 인덱스 정보가 ALF 데이터를 위한 APS 인덱스 정보 및 리셰이퍼 데이터를 위한 APS 인덱스 정보로 모두 이용될 수 있다.
[303] 예를 들어, ALF 및 리셰이퍼는 서로 다른 APS를 참조할 수 있다. 다시 말해, 표 34 또는 표 35에서 ALF 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_alf 신택스 요소 또는 slice_aps_id_alf 신택스 요소) 및 리셰이퍼 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_reshaper 신택스 요소 또는 slice_aps_id_reshaper 신택스 요소)는 서로 다른 APS에 대한 인덱스 정보를 나타낼 수 있다.
[304] 또는 예를 들어, ALF 및 리셰이퍼는 동일한 APS를 참조할 수 있다. 다시 말해, 표 34 또는 표 35에서 ALF 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_alf 신택스 요소 또는 slice_aps_id_alf 신택스 요소) 및 리셰이퍼 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_reshaper 신택스 요소 또는 slice_aps_id_reshaper 신택스 요소)는 동일한 APS에 대한 인덱스 정보를 나타낼 수 있다. 또는 ALF 및 리셰이퍼가 동일한 APS를 참조하는 경우, 표 38 또는 표 39와 같이 하나의 APS 인덱스 정보(예를 들어, tile_group_aps_id 신택스 요소 또는 slice_aps_id 신택스 요소)가 포함될 수도 있다.
[305] 예를 들어, 본 문서의 또 다른 실시예에서는 APS에 ALF 파라미터와 함께 리셰이퍼 파라미터가 조건적으로 포함될 수 있다. 이를 위해, 타일 그룹 헤더 또는 슬라이스 헤더는 애플리케이션의 요구에 의해 동작될 수 있는 하나 이상의 APS를 참조할 수 있다. 예를 들어, 서브-비트스트림(sub-bitstream) 추출 프로세스/비트스트림 스플라이싱(splicing)에 관한 유즈 케이스(use case)가 고려될 수 있다. 기존에는 비트스트림 속성이 지시되지 않는 것은 시스템의 제약들(constraints)을 암시할 수 있었다. 특히, 시스템은 전체에 대해 하나의 SPS를 사용하고(따라서, 다른 인코딩 장치들로부터의 CVS들의 스플라이싱이 복잡해짐), 또는 세션 시작 시에 모든 SPS를 어나운스(announce)하였다(따라서, 런-타임 조정(run-time adjustments)에서 인코딩 장치들의 유연성을 감소시킴). 따라서, 용도에 따라 타일 그룹 헤더 또는 슬라이스 헤더 내에 필요한 ALF 데이터 및/또는 리셰이퍼 모델 데이터가 파싱되는 것이 유리할 수 있다. 이 경우, 시스템은 처리를 위해 자체 포함된 VCL NAL 유닛들에 대한 추출의 유연성을 가질 수 있다. 또한, NAL 유닛은 APS ID를 시그널링하는 것에 의해 제공되는 유연성에 더불어, 타일 그룹 헤더 또는 슬라이스 헤더 내에 정보를 시그널링함으로써 유리할 수 있다.
[306] 이를 위해, 일 예에서는 NAL 유닛이 ALF 데이터(예를 들어, alf_data()) 및/또는 리셰이퍼 모델(예를 들어, tile_group_reshaper_model() 또는 slice_reshaper_model())을 파싱하거나 대신에 APS를 참조해야 하는 경우, tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소를 사용할 수 있다. 즉, tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소는 타일 그룹 헤더 또는 슬라이스 헤더에 포함될 수 있다.
[307] 일 예에서, 타일 그룹 헤더 또는 슬라이스 헤더는 아래 표 40 또는 표 41의 신택스를 포함할 수 있다. 일 예에서, 표 40 또는 표 41은 타일 그룹 헤더 또는 슬라이스 헤더의 일부분일 수 있다.
[308] [표 40]
[309] [표 41]
[310] 상기표 40또는표 41의신택스에포함된신택스요소들중
tile_group_alf_reshaper_usage_flag신택스요소또는 slice_alf_reshaper_usage_flag 신택스요소의시맨틱스는다음표들에개시된사항을포함할수있으며,그 외에다른신택스요소들은상술한여러실시예들에의해설명된바와동일한 정보를나타낼수있다.
[311] [표 42]
[312] [표 43]
[313] 예를 들어, tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소는 APS ID를 사용하는지에 대한 정보를 나타낼 수 있다. 즉, tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소의 값이 1인 경우, APS ID는 사용되지 않으며, 이에 따라 APS도 참조하지 않을 수 있다. 또는 tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소의 값이 0인 경우, APS ID가 사용될 수 있으며, tile_group_aps_pic_parameter_set 신택스 요소 또는 slice_aps_pic_parameter_set 신택스 요소에 의해 APS ID가 지정될 수 있다. 여기서, tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소는 APS 사용 플래그 또는 ALF & 리셰이퍼 사용 플래그라 나타낼 수도 있다.
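[313]의 usage 플래그 시맨틱스를 가정적으로 구현한 스케치입니다. 플래그가 1이면 헤더 자체에 포함된 데이터를 쓰고 APS를 참조하지 않으며, 0이면 헤더가 지정한 APS ID가 가리키는 APS에서 데이터를 가져옵니다. 딕셔너리 키 이름은 설명을 위한 가정입니다.

```python
def resolve_alf_reshaper_params(header, aps_pool):
    """ALF & 리셰이퍼 사용 플래그에 따른 파라미터 소스 선택 스케치."""
    if header["alf_reshaper_usage_flag"] == 1:
        # 플래그 1: APS ID 미사용, 헤더 내 데이터를 직접 사용 ([313] 참조)
        return header["alf_data"], header["reshaper_model"]
    # 플래그 0: 헤더의 APS ID가 가리키는 APS에서 두 파라미터 모두 도출
    aps = aps_pool[header["aps_id"]]
    return aps["alf_data"], aps["reshaper_model"]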
[314] 또는 예를 들어, 동일한 타일에서 타일 그룹 헤더 또는 슬라이스 헤더 내에 ALF 파라미터들 및 리셰이퍼 모델 파라미터들을 모두 가지는 유연성을 제공하면서, 두 파라미터들 모두에 접근하기 위해 하나의 APS가 참조될 수도 있다. 이 경우, 타일 그룹 헤더 또는 슬라이스 헤더는 아래 표 44 또는 표 45의 신택스를 포함할 수 있다. 일 예에서, 표 44 또는 표 45는 타일 그룹 헤더 또는 슬라이스 헤더의 일부분일 수 있다.
[315] [표 44]
[316] [표 45]
[317] 상기표 40또는표 41에따른타일그룹헤더또는슬라이스헤더는쇼내
데이터를위한쇼 인덱스정보(예를들어 , tile_group_aps_id_alf신택스요소 또는 slice_aps_id_alf신택스요소)및리셰이퍼데이터를위한쇼 인덱스 정보(예를들어 , tile_group_aps_id_reshaper신택스요소또는
slice_aps_id_reshaper신택스요소)를구분하여포함할수있다.
[318] 상기 표 44 또는 상기 표 45에 따른 타일 그룹 헤더 또는 슬라이스 헤더는 하나의 APS 인덱스 정보(예를 들어, tile_group_aps_id 신택스 요소 또는 slice_aps_id 신택스 요소)를 포함하며, 상기 하나의 APS 인덱스 정보가 ALF 데이터를 위한 APS 인덱스 정보 및 리셰이퍼 데이터를 위한 APS 인덱스 정보로 모두 이용될 수 있다.
[319] 예를 들어, ALF 및 리셰이퍼는 서로 다른 APS를 참조할 수 있다. 다시 말해, 표 40 또는 표 41에서 ALF 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_alf 신택스 요소 또는 slice_aps_id_alf 신택스 요소) 및 리셰이퍼 데이터를 위한 APS 인덱스 정보(예를 들어, tile_group_aps_id_reshaper 신택스 요소 또는 slice_aps_id_reshaper 신택스 요소)는 서로 다른 APS에 대한 인덱스 정보를 나타낼 수 있다.
[320] 또는 예를 들어, ALF 및 리셰이퍼는 동일한 APS를 참조할 수 있다. 다시 말해, 표 40 또는 표 41에서 ALF 데이터를 위한 APS 인덱스 정보 및 리셰이퍼 데이터를 위한 APS 인덱스 정보는 동일한 APS에 대한 인덱스 정보를 나타낼 수 있다. 또는 ALF 및 리셰이퍼가 동일한 APS를 참조하는 경우, 표 44 또는 표 45와 같이 하나의 APS 인덱스 정보(예를 들어, tile_group_aps_id 신택스 요소 또는 slice_aps_id 신택스 요소)가 포함될 수도 있다.
[321] 도 13 및 14는 본 문서의 실시예(들)에 따른 비디오/영상 인코딩 방법 및 관련 컴포넌트의 일 예를 개략적으로 나타낸다. 도 13에서 개시된 방법은 도 2에서 개시된 인코딩 장치에 의하여 수행될 수 있다. 구체적으로 예를 들어, 도 13의 S1300은 상기 인코딩 장치의 가산부(250)에 의하여 수행될 수 있고, 도 13의 S1310은 상기 인코딩 장치의 필터링부(260)에 의하여 수행될 수 있고, 도 13의 S1320은 상기 인코딩 장치의 엔트로피 인코딩부(240)에 의하여 수행될 수 있다. 도 13에서 개시된 방법은 본 문서에서 상술한 실시예들을 포함할 수 있다.
[322] 도 13을 참조하면, 인코딩 장치는 현재 픽처 내 현재 블록의 복원 샘플들을 생성한다(S1300). 예를 들어, 인코딩 장치는 예측 모드를 기반으로 상기 현재 블록의 예측 샘플들을 도출할 수 있다. 이 경우, 인터 예측 또는 인트라 예측 등 본 문서에서 개시된 다양한 예측 방법이 적용될 수 있다. 상기 예측 샘플들과 원본 샘플들을 기반으로 레지듀얼 샘플들을 도출할 수 있다. 이 경우 상기 레지듀얼 샘플들을 기반으로 레지듀얼 정보가 도출될 수 있다. 상기 레지듀얼 정보를 기반으로 (수정된) 레지듀얼 샘플들을 도출할 수 있다. 상기 (수정된) 레지듀얼 샘플들 및 상기 예측 샘플들을 기반으로 복원 샘플들이 생성될 수 있다. 상기 복원 샘플들을 기반으로 복원 블록 및 복원 픽처가 도출될 수 있다.
[323] 인코딩 장치는 복원 샘플들에 대한 리셰이핑 관련 정보를 생성한다(S1310). 예를 들어, 인코딩 장치는 복원 샘플들에 대한 리셰이핑 관련 정보 및/또는 ALF 관련 정보를 생성할 수 있다. 예를 들어, 인코딩 장치는 상기 복원 샘플들에 대한 리셰이핑을 위하여 적용될 수 있는, 리셰이핑에 관련된 파라미터를 도출하고, 리셰이핑 관련 정보를 생성할 수 있다. 또는 예를 들어, 인코딩 장치는 상기 복원 샘플들에 대한 필터링을 위하여 적용될 수 있는, ALF에 관련된 파라미터를 도출하고, ALF 관련 정보를 생성할 수 있다. 예를 들어, 리셰이핑 관련 정보는 본 문서에서 상술한 리셰이핑에 관련된 정보들 중 적어도 일부를 포함할 수 있다. 또는 예를 들어, ALF 관련 정보는 본 문서에서 상술한 ALF에 관련된 정보들 중 적어도 일부를 포함할 수 있다. 여기서, 리셰이핑 관련 정보는 LMCS 관련 정보라 나타낼 수 있고, 리셰이핑에 관련된 파라미터는 LMCS에 관련된 파라미터라 나타낼 수 있다. 예를 들어, 리셰이핑 데이터는 리셰이핑에 관련된 파라미터, 리셰이핑 모델 정보 또는 리셰이핑 모델에 포함된 정보라고 불릴 수 있다.
[324] 인코딩 장치는 복원 샘플들의 생성에 관한 정보 및 리셰이핑 관련 정보를 포함하는 영상 정보를 인코딩한다(S1320). 또는 예를 들어, 인코딩 장치는 복원 샘플들의 생성에 관한 정보, 리셰이핑 관련 정보 또는 ALF 관련 정보 중 적어도 일부를 포함하는 영상 정보를 인코딩할 수 있다. 여기서, 리셰이핑 관련 정보는 LMCS 관련 정보라 나타낼 수 있다. 상기 영상 정보는 비디오 정보라 나타낼 수도 있다. 예를 들어, 상기 복원 샘플들 생성을 위한 정보는 예를 들어 예측 관련 정보 및/또는 레지듀얼 정보를 포함할 수 있다. 상기 예측 관련 정보는 다양한 예측 모드(ex. 머지 모드, MVP 모드 등)에 대한 정보, MVD 정보 등을 포함할 수 있다.
[325] 상기 영상정보는본문서의실시예에 따른다양한정보를포함할수있다.예를 들어,상기 영상정보는상술한표 1내지표 45중적어도하나에 개시된적어도 하나의 정보또는적어도하나의신택스요소를포함할수있다.
[326]
Figure imgf000070_0007
데이터는현재블록의(루마성분)복원샘플의값의 매핑관계를나타내는매핑 인덱스를도출하기위한정보를포함할수있다.또는예를들어 ,상기 리셰이핑 데이터는리셰이핑절차를수행하기위한정보를포함할수있다.예를들어, 리셰이핑 절차는인버스리셰이핑절차를나타낼수있으며,매핑 인덱스는상기 인버스리셰이핑절차를위해사용되는인덱스를포함할수있다.다만,본 문서에 따른리셰이핑 절차는인버스리셰이핑절차에 한정되는것이 아니며, 포워드리셰이핑절차또는크로마리셰이핑절차가이용될수있다.다만, 포워드리셰이핑절차또는크로마리셰이핑절차가이용되는경우,인코딩 절차 내에서 리셰이핑관련정보가생성되는순서가달라질수있으며,인덱스도 포워드리셰이핑절차또는크로마리셰이핑절차에 따라달라질수있다.예를 들어,리셰이핑 절차는
Figure imgf000070_0004
절차라고나타낼수있고,리셰이핑 데이터는
데이터라고나타낼수있으며,리셰이
Figure imgf000070_0005
정보라고나타낼수있고,이하에서도마찬가지일수있다.또는예를들어,상기
Figure imgf000070_0006
포함할수있다. 예를들어 ,상기ᅀ끈8는사 데이터를포함할수있다.예를들어 ,상기사 데이터는쇼[玉필터 계수를도출하기위한정보를포함할수있다.또는예를
Figure imgf000070_0008
2020/175893 1»(:1^1{2020/002702
Figure imgf000071_0015
데이터를
Figure imgf000071_0001
에 대한 II)정보는동일할수있으므로,별도로구분되어 나타내어질수있으나,하나로나타내어질수도있다.이경우,설명의 편의를 위해 리셰이핑 데이터 및쇼1止데이터가포함된하나의
Figure imgf000071_0002
나타낼수있다.
Figure imgf000071_0003
,
Figure imgf000071_0004
나타낼 수있다.이경우,
Figure imgf000071_0005
다르며
Figure imgf000071_0006
©정보도상기 제 2 에 대한正)정보와다를수있다.
[33이 또는예를들어 ,상기 리셰이핑 데이터또는상기쇼1止데이터는하나의
Figure imgf000071_0007
포함될수도있다.이 경우,설명의편의를위해리셰이핑 데이터또는쇼[玉 데이터가포함된 를제 1 라고나타낼수있다.
[331] 예를들어 ,상기 리셰이핑 데이터 및상기사 데이터는하나의
Figure imgf000071_0008
포함되며 , II)정보가별도로구분되
Figure imgf000071_0009
포함되는경우
Figure imgf000071_0010
_ _ _ _
slice_aps_id_alf신택스요소로나타낼수있다.또는예를들어,상기 리셰이핑 데이터 및상기쇼[玉데이터는하나의
Figure imgf000071_0011
포함되는경우또는리셰이핑 데이터또는사 데이터중하나가
Figure imgf000071_0012
포함되는경우,
Figure imgf000071_0013
대한正)정보는 tile_group_aps_id신택스요소또는 slice_aps_id신택스요소와같이동일한 형태로나타낼수있다.다만,예를들어,동일한형태의 에 대한 정보를 이용하여 리셰이핑 및사 를
Figure imgf000071_0014
대한正)정보가각각나타내어질수도 있다.예를들어 ,리셰이핑 및/또는사 를위한 에 대한正)정보는헤더 정보에포함될수있다.예를들어,상기 영상정보는상기헤더 정보를포함할수 있고,상기 헤더정보는픽처헤더또는슬라이스헤더(또는타일그룹헤더)를 포함할수있으며,이하에서도마찬가지일수있다.
[332] 예를 들어, 상기 영상 정보는 SPS(Sequence Parameter Set)를 포함할 수 있고, 상기 SPS는 상기 리셰이핑의 가용 여부를 나타내는 제 1 리셰이핑 가용 플래그 및/또는 상기 ALF의 가용 여부를 나타내는 제 1 ALF 가용 플래그를 포함할 수 있다. 예를 들어, 상기 제 1 리셰이핑 가용 플래그는 sps_reshaper_enabled_flag 신택스 요소를 나타낼 수 있고, 상기 제 1 ALF 가용 플래그는 sps_alf_enabled_flag 신택스 요소를 나타낼 수 있다.
[333] 예를 들어, 값이 1인 상기 제 1 리셰이핑 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 리셰이핑의 가용 여부를 나타내는 제 2 리셰이핑 가용 플래그를 포함할 수 있다. 즉, 상기 제 1 리셰이핑 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 리셰이핑의 가용 여부를 나타내는 제 2 리셰이핑 가용 플래그를 포함할 수 있다. 또는 값이 0인 상기 제 1 리셰이핑 가용 플래그를 기반으로, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함하지 않을 수 있다. 즉, 상기 제 1 리셰이핑 가용 플래그의 값이 0인 경우, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함하지 않을 수 있다. 예를 들어, 상기 제 2 리셰이핑 가용 플래그는 tile_group_reshaper_enable_flag 신택스 요소 또는 slice_reshaper_enable_flag 신택스 요소를 나타낼 수 있다.
[334] 또는 예를 들어, 값이 1인 상기 제 1 리셰이핑 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 리셰이핑 모델이 존재하는지 여부를 나타내는 리셰이핑 모델 존재 플래그를 포함할 수 있다. 즉, 상기 제 1 리셰이핑 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 상기 리셰이핑 모델 존재 플래그를 포함할 수 있다. 예를 들어, 상기 리셰이핑 모델 존재 플래그는 tile_group_reshaper_model_present_flag 신택스 요소 또는 slice_reshaper_model_present_flag 신택스 요소를 나타낼 수 있다. 또한 예를 들어, 값이 1인 상기 리셰이핑 모델 존재 플래그를 기반으로, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함할 수 있다. 즉, 상기 리셰이핑 모델 존재 플래그의 값이 1인 경우, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함할 수 있다.
[335] 예를들어,상기 제 1리셰이핑 가용플래그,리셰이핑모델존재플래그및/또는 상기 제 2리셰이핑 가용플래그를기반으로상기 리셰이핑절차의수행을 나타낼수있다.또는예를들어,값이 1인상기 제 1리셰이핑 가용플래그,값이
1인상기 리셰이핑모델존재플래그및/또는값이 1인상기제 2리셰이핑가용 플래그를기반으로상기 리셰이핑절차의수행을나타낼수있다.
[336] 예를 들어, 값이 1인 상기 제 1 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 가용 여부를 나타내는 제 2 ALF 가용 플래그를 포함할 수 있다. 즉, 상기 제 1 ALF 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 가용 여부를 나타내는 제 2 ALF 가용 플래그를 포함할 수 있다. 또는 값이 0인 상기 제 1 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 상기 제 2 ALF 가용 플래그를 포함하지 않을 수 있다. 즉, 상기 제 1 ALF 가용 플래그의 값이 0인 경우, 상기 헤더 정보는 상기 제 2 ALF 가용 플래그를 포함하지 않을 수 있다. 예를 들어, 상기 제 2 ALF 가용 플래그는 tile_group_alf_enabled_flag 신택스 요소 또는 slice_alf_enabled_flag 신택스 요소를 나타낼 수 있다.
[337] 예를 들어, 값이 1인 상기 제 2 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 사용 여부를 나타내는 ALF 사용 플래그를 포함할 수 있다. 즉, 상기 제 2 ALF 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 사용 여부를 나타내는 ALF 사용 플래그를 포함할 수 있다. 또는 값이 0인 상기 제 2 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 상기 ALF 사용 플래그를 포함하지 않을 수 있다. 즉, 상기 제 2 ALF 가용 플래그의 값이 0인 경우, 상기 헤더 정보는 상기 ALF 사용 플래그를 포함하지 않을 수 있다. 예를 들어, 상기 ALF 사용 플래그는 tile_group_alf_usage_flag 신택스 요소 또는 slice_alf_usage_flag 신택스 요소를 나타낼 수 있다.
[338] 예를 들어, 값이 1인 상기 ALF 사용 플래그를 기반으로, 상기 헤더 정보는 ALF 데이터를 포함할 수 있다. 즉, 상기 ALF 사용 플래그의 값이 1인 경우, 상기 헤더 정보는 ALF 데이터를 포함할 수 있다. 이 경우, APS에 대한 ID 정보가 나타내는 APS를 이용하지 않고도 ALF 데이터가 나타내어질 수 있다. 또는 예를 들어, 값이 0인 상기 ALF 사용 플래그를 기반으로, 상기 헤더 정보는 APS에 대한 ID 정보를 포함할 수 있다.
[339] 예를 들어, 값이 1인 상기 제 1 리셰이핑 가용 플래그 및 값이 1인 상기 제 1 ALF 가용 플래그를 기반으로(즉, 상기 제 1 리셰이핑 가용 플래그의 값이 1이고, 상기 제 1 ALF 가용 플래그의 값이 1인 경우), 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 가용 여부를 나타내는 제 2 ALF 가용 플래그를 포함할 수 있고, 값이 1인 상기 제 2 ALF 가용 플래그를 기반으로(즉, 제 2 ALF 가용 플래그의 값이 1인 경우), 상기 헤더 정보는 상기 픽처 또는 상기 슬라이스에서의 상기 ALF 및 상기 리셰이핑의 사용 여부를 나타내는 ALF 및 리셰이핑 사용 플래그를 포함할 수 있다. 예를 들어, ALF 및 리셰이핑 사용 플래그는 tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소를 나타낼 수 있다.
[340] 예를 들어, 값이 1인 상기 ALF 및 리셰이핑 사용 플래그를 기반으로(즉, 상기 ALF 및 리셰이핑 사용 플래그의 값이 1인 경우), 상기 헤더 정보는 상기 ALF 데이터 및 상기 리셰이핑 데이터를 포함할 수 있다. 또는 예를 들어, 값이 0인 상기 ALF 및 리셰이핑 사용 플래그를 기반으로(즉, 상기 ALF 및 리셰이핑 사용 플래그의 값이 0인 경우), 상기 헤더 정보는 제 1 APS에 대한 ID 정보 및 제 2 APS에 대한 ID 정보를 포함할 수 있다. 여기서, 제 1 APS에 대한 ID 정보는 리셰이핑 데이터를 위한 APS에 대한 ID 정보를 나타낼 수 있고, 제 2 APS에 대한 ID 정보는 ALF 데이터를 위한 APS에 대한 ID 정보를 나타낼 수 있다. 예를 들어, 상기 제 1 APS에 대한 ID 정보와 상기 제 2 APS에 대한 ID 정보는 동일할 수도 있으나, 서로 다를 수도 있다.
[341] 예를 들어, 인코딩 장치는 상술한 정보들(또는 신택스 요소들) 모두 또는 일부를 포함하는 영상 정보를 인코딩하여 비트스트림 또는 인코딩된 정보를 생성할 수 있다. 또는 비트스트림 형태로 출력할 수 있다. 또한, 상기 비트스트림 또는 인코딩된 정보는 네트워크 또는 저장 매체를 통하여 디코딩 장치로 전송될 수 있다. 또는, 상기 비트스트림 또는 인코딩된 정보는 컴퓨터 판독 가능한 저장 매체에 저장될 수 있으며, 상기 비트스트림 또는 상기 인코딩된 정보는 상술한 영상 인코딩 방법에 의해 생성될 수 있다.
[342] 도 15 및 16은 본 문서의 실시예에 따른 영상/비디오 디코딩 방법 및 관련 컴포넌트의 일 예를 개략적으로 나타낸다. 도 15에서 개시된 방법은 도 3에서 개시된 디코딩 장치에 의하여 수행될 수 있다. 구체적으로 예를 들어, 도 15의 S1500은 상기 디코딩 장치의 엔트로피 디코딩부(310)에 의하여 수행될 수 있고, S1510은 상기 디코딩 장치의 가산부(340)에 의하여 수행될 수 있고, S1520은 상기 디코딩 장치의 필터링부(350)에 의하여 수행될 수 있다. 도 15에서 개시된 방법은 본 문서에서 상술한 실시예들을 포함할 수 있다.
[343] 도 15를 참조하면, 디코딩 장치는 비트스트림을 통하여 영상 정보를 수신/획득한다(S1500). 상기 영상 정보는 본 문서의 실시예에 따른 다양한 정보를 포함할 수 있다. 예를 들어, 상기 영상 정보는 상술한 표 1 내지 표 45 중 적어도 하나에 개시된 적어도 하나의 정보 또는 적어도 하나의 신택스 요소를 포함할 수 있다. 예를 들어, 상기 영상 정보는 비디오 정보라 나타낼 수도 있다. 예를 들어, 영상 정보는 복원 샘플들의 생성에 관한 정보, 리셰이핑 관련 정보 또는 ALF 관련 정보 중 적어도 일부를 포함할 수 있다. 예를 들어, 상기 복원 샘플들 생성을 위한 정보는 예를 들어 예측 관련 정보 및/또는 레지듀얼 정보를 포함할 수 있다. 상기 예측 관련 정보는 다양한 예측 모드(ex. 머지 모드, MVP 모드 등)에 대한 정보, MVD 정보 등을 포함할 수 있다.
[344] 디코딩 장치는 상기 영상 정보를 기반으로 현재 블록의 복원 샘플들을 생성한다(S1510). 예를 들어, 디코딩 장치는 영상 정보에 포함된 예측 관련 정보를 기반으로 현재 블록의 예측 샘플들을 도출할 수 있다. 또는 예를 들어, 디코딩 장치는 상기 영상 정보에 포함된 레지듀얼 정보를 기반으로 레지듀얼 샘플들을 도출할 수 있다. 또는 예를 들어, 디코딩 장치는 상기 예측 샘플들 및 상기 레지듀얼 샘플들을 기반으로 복원 샘플들을 생성할 수 있다. 상기 복원 샘플들을 기반으로 복원 블록 및 복원 픽처가 도출될 수 있다.
[345] 디코딩 장치는 상기 복원 샘플들에 대한 리셰이핑 절차를 수행한다(S1520). 또는 예를 들어, 디코딩 장치는 상기 복원 샘플들에 대한 리셰이핑 절차 및/또는 ALF 절차를 수행할 수 있다. 예를 들어, 디코딩 장치는 상기 영상 정보로부터 리셰이핑 관련 정보 및/또는 ALF 관련 정보를 획득할 수 있고, 이를 기반으로 리셰이핑에 관련된 파라미터 및/또는 ALF에 관련된 파라미터를 도출할 수 있고, 이를 기반으로 리셰이핑 절차 또는 ALF 절차를 수행할 수 있다. 예를 들어, 리셰이핑 관련 정보는 본 문서에서 상술한 리셰이핑에 관련된 정보들 중 적어도 일부를 포함할 수 있다. 또는 예를 들어, ALF 관련 정보는 본 문서에서 상술한 ALF에 관련된 정보들 중 적어도 일부를 포함할 수 있다. 여기서, 리셰이핑 관련 정보는 LMCS 관련 정보라 나타낼 수 있고, 리셰이핑에 관련된 파라미터는 LMCS에 관련된 파라미터라 나타낼 수 있다. 예를 들어, 리셰이핑 데이터는 리셰이핑에 관련된 파라미터, 리셰이핑 모델 정보 또는 리셰이핑 모델에 포함된 정보라고 불릴 수 있다.
[346]
Figure imgf000075_0002
parameter요 )및/또는상기
Figure imgf000075_0003
데이터를포함할수있고,상기 리셰이핑 데이터를기반으로리셰이핑절차가 수행될수있다.여기서,리셰이핑 절차는현재블록의(루마성분)복원샘플의 값및매핑 인덱스를기반으로매핑된복원샘플의값을도출하는절차를포함할 수있다.예를들어,상기 매핑 인덱스는상기 리셰이핑 데이터를기반으로 도출될수있다.예를들어,리셰이핑절차는인버스리셰이핑 절차를나타낼수 있으며,매핑 인덱스는상기 인버스리셰이핑 절차를위해사용되는인덱스를 포함할수있다.다만,본문서에따른리셰이핑절차는인버스리셰이핑 절차에 한정되는것이아니며 ,포워드리셰이핑절차또는크로마리셰이핑절차가 이용될수있다.다만,포워드리셰이핑절차또는크로마리셰이핑절차가 이용되는경우,리셰이핑절차가디코딩절차내에서수행되는순서가달라질수 있으며,인덱스도포워드리셰이핑 절차또는크로마리셰이핑 절차에따라 달라질수있다.예를들어,리셰이핑절차는 절차라고나타낼수있고, 나타낼수있으며,리셰이핑 관련정보는
Figure imgf000075_0004
며,이하에서도마찬가지일수있다.또는 예를들어 ,상기ᅀ끈8는쇼1玉데이터를포함할수있고,상기쇼1止데이터를 기반으로사 절차가수행될수있다.여기서 ,사 절차는상기사 데이터를 기반으로쇼[玉필터 계수를도출하는절차를포함할수있다.
[347] 예를들어 ,상기 리셰이핑 데이터는상기 에 대한 II)정보를기반으로
도출될수있다.또는상기쇼1止데이터는상기
Figure imgf000075_0005
대한正)정보를기반으로 도출될수있다.즉,상기 리셰이핑 데이터는상기
Figure imgf000075_0006
대한 II)정보가 가리키는상기 에포함될수있다.또는상기사 데이터는
Figure imgf000075_0007
[348] 터는하나의
Figure imgf000075_0008
한 II)정보
Figure imgf000075_0009
데이터를위한 에 대한 ©정보는동일할수있으므로,별도로구분될수 있으나,하나로이용될수도있다.이경우,설명의 편의를위해 리셰이핑 데이터 및사 데이터가포함된하나의 를제 1 라고나타낼수있다.
Figure imgf000075_0010
Figure imgf000075_0011
나타낼 수있다.이경우,상기 제 1 는상기 제 2 와다르며
Figure imgf000075_0012
正)정보와다를수있다.
[35이
Figure imgf000075_0013
데이터또는상기쇼1止데이터는하나의 2020/175893 1»(:1^1{2020/002702 포함될수도있다.이경우,설명의편의를위해리셰이핑데이터또는쇼[玉 데이터가포함된 나타낼수있다.
[351] 예를들어 ,상기
Figure imgf000076_0001
상기사 데이터는하나의
Figure imgf000076_0002
포함되며 , II)정보가별도로구분되는경우,서로다른 에포함되는경우
Figure imgf000076_0003
_ _ _ _
slice_aps_id_alf신택스요소로나타낼수있다.또는예를들어,상기리셰이핑 데이터및상기쇼[玉데이터는하나의 에포함되는경우또는리셰이핑 데이터또는사 데이터중하나가
Figure imgf000076_0004
포함되는경우,
Figure imgf000076_0005
대한正)정보는 tile_group_aps_id신택스요소또는 slice_aps_id신택스요소와같이동일한 형태로나타낼수있다.다만,예를들어,동일한형태의 에대한 정보를 이용하여리셰이핑및사 를
Figure imgf000076_0006
대한正)정보가각각나타내어질수도 있다.예를들어 ,리셰이
Figure imgf000076_0007
正)정보는헤더 정보에포함될수있다.예를들어,상기영상정보는상기헤더정보를포함할수 있고,상기헤더정보는픽처헤더또는슬라이스헤더(또는타일그룹헤더)를 포함할수있으며,이하에서도마찬가지일수있다.
[352] 예를 들어, 상기 영상 정보는 SPS(Sequence Parameter Set)를 포함할 수 있고, 상기 SPS는 상기 리셰이핑의 가용 여부를 나타내는 제 1 리셰이핑 가용 플래그 및/또는 상기 ALF의 가용 여부를 나타내는 제 1 ALF 가용 플래그를 포함할 수 있다. 예를 들어, 상기 제 1 리셰이핑 가용 플래그는 sps_reshaper_enabled_flag 신택스 요소를 나타낼 수 있고, 상기 제 1 ALF 가용 플래그는 sps_alf_enabled_flag 신택스 요소를 나타낼 수 있다.
[353] 예를 들어, 값이 1인 상기 제 1 리셰이핑 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 리셰이핑의 가용 여부를 나타내는 제 2 리셰이핑 가용 플래그를 포함할 수 있다. 즉, 상기 제 1 리셰이핑 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 리셰이핑의 가용 여부를 나타내는 제 2 리셰이핑 가용 플래그를 포함할 수 있다. 또는 값이 0인 상기 제 1 리셰이핑 가용 플래그를 기반으로, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함하지 않을 수 있다. 즉, 상기 제 1 리셰이핑 가용 플래그의 값이 0인 경우, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함하지 않을 수 있다. 예를 들어, 상기 제 2 리셰이핑 가용 플래그는 tile_group_reshaper_enable_flag 신택스 요소 또는 slice_reshaper_enable_flag 신택스 요소를 나타낼 수 있다.
[354] 또는 예를 들어, 값이 1인 상기 제 1 리셰이핑 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 리셰이핑 모델이 존재하는지 여부를 나타내는 리셰이핑 모델 존재 플래그를 포함할 수 있다. 즉, 상기 제 1 리셰이핑 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 상기 리셰이핑 모델 존재 플래그를 포함할 수 있다. 예를 들어, 상기 리셰이핑 모델 존재 플래그는 tile_group_reshaper_model_present_flag 신택스 요소 또는 slice_reshaper_model_present_flag 신택스 요소를 나타낼 수 있다. 또한 예를 들어, 값이 1인 상기 리셰이핑 모델 존재 플래그를 기반으로, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함할 수 있다. 즉, 상기 리셰이핑 모델 존재 플래그의 값이 1인 경우, 상기 헤더 정보는 상기 제 2 리셰이핑 가용 플래그를 포함할 수 있다.
[355] 예를들어,상기 제 1리셰이핑 가용플래그,리셰이핑모델존재플래그및/또는 상기 제 2리셰이핑 가용플래그를기반으로상기 리셰이핑절차의수행을 나타낼수있다.또는예를들어,값이 1인상기 제 1리셰이핑 가용플래그,값이
1인상기 리셰이핑모델존재플래그및/또는값이 1인상기제 2리셰이핑가용 플래그를기반으로상기 리셰이핑절차의수행을나타낼수있다.
[356] 예를 들어, 값이 1인 상기 제 1 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 가용 여부를 나타내는 제 2 ALF 가용 플래그를 포함할 수 있다. 즉, 상기 제 1 ALF 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 가용 여부를 나타내는 제 2 ALF 가용 플래그를 포함할 수 있다. 또는 값이 0인 상기 제 1 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 상기 제 2 ALF 가용 플래그를 포함하지 않을 수 있다. 즉, 상기 제 1 ALF 가용 플래그의 값이 0인 경우, 상기 헤더 정보는 상기 제 2 ALF 가용 플래그를 포함하지 않을 수 있다. 예를 들어, 상기 제 2 ALF 가용 플래그는 tile_group_alf_enabled_flag 신택스 요소 또는 slice_alf_enabled_flag 신택스 요소를 나타낼 수 있다.
[357] 예를 들어, 값이 1인 상기 제 2 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 사용 여부를 나타내는 ALF 사용 플래그를 포함할 수 있다. 즉, 상기 제 2 ALF 가용 플래그의 값이 1인 경우, 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 사용 여부를 나타내는 ALF 사용 플래그를 포함할 수 있다. 또는 값이 0인 상기 제 2 ALF 가용 플래그를 기반으로, 상기 헤더 정보는 상기 ALF 사용 플래그를 포함하지 않을 수 있다. 즉, 상기 제 2 ALF 가용 플래그의 값이 0인 경우, 상기 헤더 정보는 상기 ALF 사용 플래그를 포함하지 않을 수 있다. 예를 들어, 상기 ALF 사용 플래그는 tile_group_alf_usage_flag 신택스 요소 또는 slice_alf_usage_flag 신택스 요소를 나타낼 수 있다.
[358] 예를 들어, 값이 1인 상기 ALF 사용 플래그를 기반으로, 상기 헤더 정보는 ALF 데이터를 포함할 수 있다. 즉, 상기 ALF 사용 플래그의 값이 1인 경우, 상기 헤더 정보는 ALF 데이터를 포함할 수 있다. 이 경우, APS에 대한 ID 정보가 나타내는 APS를 이용하지 않고도 ALF 데이터를 도출할 수 있고, 이를 기반으로 ALF 절차가 수행될 수 있다. 또는 예를 들어, 값이 0인 상기 ALF 사용 플래그를 기반으로, 상기 헤더 정보는 APS에 대한 ID 정보를 포함할 수 있다.
[359] 예를 들어, 값이 1인 상기 제 1 리셰이핑 가용 플래그 및 값이 1인 상기 제 1 ALF 가용 플래그를 기반으로(즉, 상기 제 1 리셰이핑 가용 플래그의 값이 1이고, 상기 제 1 ALF 가용 플래그의 값이 1인 경우), 상기 헤더 정보는 픽처 또는 슬라이스에서의 상기 ALF의 가용 여부를 나타내는 제 2 ALF 가용 플래그를 포함할 수 있고, 값이 1인 상기 제 2 ALF 가용 플래그를 기반으로(즉, 제 2 ALF 가용 플래그의 값이 1인 경우), 상기 헤더 정보는 상기 픽처 또는 상기 슬라이스에서의 상기 ALF 및 상기 리셰이핑의 사용 여부를 나타내는 ALF 및 리셰이핑 사용 플래그를 포함할 수 있다. 예를 들어, ALF 및 리셰이핑 사용 플래그는 tile_group_alf_reshaper_usage_flag 신택스 요소 또는 slice_alf_reshaper_usage_flag 신택스 요소를 나타낼 수 있다.
[360] 예를 들어, 값이 1인 상기 ALF 및 리셰이핑 사용 플래그를 기반으로(즉, 상기 ALF 및 리셰이핑 사용 플래그의 값이 1인 경우), 상기 헤더 정보는 상기 ALF 데이터 및 상기 리셰이핑 데이터를 포함할 수 있다. 또는 예를 들어, 값이 0인 상기 ALF 및 리셰이핑 사용 플래그를 기반으로(즉, 상기 ALF 및 리셰이핑 사용 플래그의 값이 0인 경우), 상기 헤더 정보는 상기 제 1 APS에 대한 ID 정보 및 상기 제 2 APS에 대한 ID 정보를 포함할 수 있다. 여기서, 제 1 APS에 대한 ID 정보는 리셰이핑 데이터를 위한 APS에 대한 ID 정보를 나타낼 수 있고, 제 2 APS에 대한 ID 정보는 ALF 데이터를 위한 APS에 대한 ID 정보를 나타낼 수 있다. 예를 들어, 상기 제 1 APS에 대한 ID 정보와 상기 제 2 APS에 대한 ID 정보는 동일할 수도 있으나, 서로 다를 수도 있다.
[361] 예를들어,디코딩장치는비트스트림또는인코딩된정보를디코딩하여
상술한정보들(또는신택스요소들)모두또는일부를포함하는영상정보를 획득할수있다.또한,상기 비트스트림또는인코딩된정보는컴퓨터판독 가능한저장매체에 저장될수있으며 ,상술한디코딩 방법이수행되도록야기할 수있다.
[362] 상술한실시예에서 ,방법들은일련의단계또는블록으로써순서도를기초로 설명되고있지만,해당실시예는단계들의순서에 한정되는것은아니며,어떤 단계는상술한바와다른단계와다른순서로또는동시에 발생할수있다.또한, 당업자라면순서도에나타내어진단계들이 배타적이지 않고,다른단계가 포함되거나순서도의하나또는그이상의단계가본문서의실시예들의범위에 영향을미치지 않고삭제될수있음을이해할수있을것이다.
[363] 상술한본문서의실시예들에 따른방법은소프트웨어 형태로구현될수
있으며,본문서에따른인코딩장치 및/또는디코딩장치는예를들어 IV, 컴퓨터,스마트폰,셋톱박스,디스플레이장치등의 영상처리를수행하는 장치에포함될수있다.
[364] 본문서에서실시예들이소프트웨어로구현될때,상술한방법은상술한
기능을수행하는모듈(과정,기능등)로구현될수있다.모듈은메모리에 저장되고,프로세서에의해실행될수있다.메모리는프로세서내부또는 외부에 있을수있고,잘알려진다양한수단으로프로세서와연결될수있다. 프로세서는 ASIC(application- specific integrated circuit),다른칩셋,논리회로 및/또는데이터처리장치를포함할수있다.메모리는 ROM(read-only memory), RAM(random access memory),늘래쉬메모리,메모리카드,저장매체및/또는 다른저장장치를포함할수있다.즉,본문서에서설명한실시예들은프로세서, 마이크로프로세서,컨트롤러또는칩상에서구현되어수행될수있다.예를 들어,각도면에서도시한기능유닛들은컴퓨터,프로세서,마이크로프로세서, 컨트롤러또는칩상에서구현되어수행될수있다.이경우구현을위한정보 (ex. information on instructions)또는알고리즘이디지털저장매체에저장될수있다.
[365] 또한,본문서의실시예 (들)이적용되는디코딩장치및인코딩장치는
멀티미디어방송송수신장치 ,모바일통신단말,홈시네마비디오장치,디지털 시네마비디오장치 ,감시용카메라,비디오대화장치,비디오통신과같은 실시간통신장치,모바일스트리밍장치,저장매체,캠코더,주문형
비디오(VoD) 서비스 제공 장치, OTT 비디오(Over the top video) 장치, 인터넷 스트리밍 서비스 제공 장치, 3차원(3D) 비디오 장치, VR(virtual reality) 장치, AR(augmented reality) 장치, 화상 전화 비디오 장치, 운송수단 단말(ex.
차량 (자율주행차량포함)단말,비행기단말,선박단말등)및의료용비디오 장치등에포함될수있으며,비디오신호또는데이터신호를처리하기위해 사용될수있다.예를들어 , OTT비디오 (Over the top video)장치로는게임콘솔, 블루레이플레이어,인터넷접속 TV,홈시어터시스템,스마트폰,태블릿 PC, DVR(Digital Video Recorder)등을포함할수있다.
[366] 또한,본문서의실시예 (들)이적용되는처리방법은컴퓨터로실행되는
프로그램의형태로생산될수있으며,컴퓨터가판독할수있는기록매체에 저장될수있다.본문서의실시예 (들)에따른데이터구조를가지는멀티미디어 데이터도또한컴퓨터가판독할수있는기록매체에저장될수있다.상기 컴퓨터가판독할수있는기록매체는컴퓨터로읽을수있는데이터가저장되는 모든종류의저장장치및분산저장장치를포함한다.상기컴퓨터가판독할수 있는기록매체는,예를들어,블루레이디스크 (BD),범용직렬버스 (USB), ROM, PROM, EPROM, EEPROM, RAM, CD-ROM,자기테이프,플로피디스크및 광학적데이터저장장치를포함할수있다.또한,상기컴퓨터가판독할수있는 기록매체는반송파 (예를들어,인터넷을통한전송)의형태로구현된미디어를 포함한다.또한,인코딩방법으로생성된비트스트림이컴퓨터가판독할수있는 기록매체에저장되거나유무선통신네트워크를통해전송될수있다.
[367] 또한,본문서의실시예 (들)는프로그램코드에의한컴퓨터프로그램제품으로 구현될수있고,상기프로그램코드는본문서의실시예 (들)에의해컴퓨터에서 수행될수있다.상기프로그램코드는컴퓨터에의해판독가능한캐리어상에 저장될수있다. [368] 도 17은본문서에서개시된실시예들이적용될수있는컨텐츠스트리밍 시스템의 예를나타낸다.
[369] 도 17을참조하면,본문서의실시예들이적용되는컨텐츠스트리밍시스템은 크게인코딩서버 ,스트리밍서버 ,웹서버 ,미디어저장소,사용자장치및 멀티미디어입력장치를포함할수있다.
[37이 상기인코딩서버는스마트폰,카메라,캠코더등과같은멀티미디어입력
장치들로부터입력된컨텐츠를디지털데이터로압축하여비트스트림을 생성하고이를상기스트리밍서버로전송하는역할을한다.다른예로, 스마트폰,카메라,캠코더등과같은멀티미디어입력장치들이비트스트림을 직접생성하는경우,상기인코딩서버는생략될수있다.
[371] 상기비트스트림은본문서의실시예들이적용되는인코딩방법또는
비트스트림생성방법에의해생성될수있고,상기스트리밍서버는상기 비트스트림을전송또는수신하는과정에서일시적으로상기비트스트림을 저장할수있다.
[372] 상기스트리밍서버는웹서버를통한사용자요청에기초하여멀티미디어 데이터를사용자장치에전송하고,상기웹서버는사용자에게어떠한서비스가 있는지를알려주는매개체역할을한다.사용자가상기웹서버에원하는 서비스를요청하면,상기웹서버는이를스트리밍서버에전달하고,상기 스트리밍서버는사용자에게멀티미디어데이터를전송한다.이때 ,상기컨텐츠 스트리밍시스템은별도의제어서버를포함할수있고,이경우상기제어 서버는상기컨텐츠스트리밍시스템내각장치간명령/응답을제어하는 역할을한다.
[373] 상기스트리밍서버는미디어저장소및/또는인코딩서버로부터컨텐츠를 수신할수있다.예를들어,상기인코딩서버로부터컨텐츠를수신하게되는 경우,상기컨텐츠를실시간으로수신할수있다.이경우,원활한스트리밍 서비스를제공하기위하여상기스트리밍서버는상기비트스트림을일정 시간동안저장할수있다.
[374] 상기사용자장치의예로는,휴대폰,스마트폰 (smart phone),노트북
컴퓨터 (laptop computer),디지털방송용단말기 , PDA(personal digital assistants), PMP(portable multimedia player),네비게이션,슬레이트 PC(slate PC),태블릿 PC(tablet PC),울트라북 (ul仕 abook),웨어러블디바이스 (wearable device,예를 들어,워치형단말기 (smartwatch),글래스형단말기 (smart glass), HMD(head mounted display)),디지털 TV,데스크탑컴퓨터,디지털사이니지등이있을수 있다.
[375] 상기컨텐츠스트리밍시스템내각서버들은분산서버로운영될수있으며 ,이 경우각서버에서수신하는데이터는분산처리될수있다.
[376] 본명세서에기재된청구항들은다양한방식으로조합될수있다.예를들어,본 명세서의방법청구항의기술적특징이조합되어장치로구현될수있고,본 2020/175893 1»(:1^1{2020/002702 명세서의장치 청구항의 기술적특징이조합되어방법으로구현될수있다. 또한,본명세서의방법 청구항의기술적특징과장치 청구항의기술적특징이 조합되어장치로구현될수있고,본명세서의방법 청구항의기술적특징과 장치 청구항의기술적특징이조합되어 방법으로구현될수있다.

Claims

청구범위
[청구항 1] 디코딩장치에의하여수행되는영상디코딩방법에 있어서,
비트스트림을통하여영상정보를수신하는단계;
상기영상정보를기반으로현재블록의복원샘플들을생성하는단계 ;및 상기복원샘플들에대한리셰이핑 (reshaping)절차를수행하는단계를 포함하고,
상기리셰이핑절차는상기현재블록의복원샘플의값및매핑인덱스를 기반으로매핑된복원샘플의값을도출하는절차를포함하고, 상기영상정보는리셰이핑데이터를포함하는제 1 APS(Adaptation Parameter Set)및상기제 1 APS에대한 ID정보를포함하고, 상기리셰이핑데이터는상기제 1 APS에대한 ID정보를기반으로 도출되고,
상기매핑인덱스는상기리셰이핑데이터를기반으로도출되는것을 특징으로하는,영상디코딩방법.
[청구항 2] 제 1항에 있어서,
상기영상정보는 SPS(Sequence Parameter Set)를포함하고, 상기 SPS는상기리셰이핑의가용여부를나타내는제 1리셰이핑가용 플래그를포함하는것을특징으로하는,영상디코딩방법.
[청구항 3] 제 2항에 있어서,
상기영상정보는헤더정보를포함하고,
값이 1인상기제 1리셰이핑가용플래그를기반으로,상기헤더정보는 픽처또는슬라이스에서의상기리셰이핑의가용여부를나타내는제 2 리셰이핑가용플래그를포함하는것을특징으로하는,영상디코딩방법. [청구항 4] 제 1항에 있어서,
상기 복원 샘플들에 대한 ALF(Adaptive Loop Filtering) 절차를 수행하는
상기영상정보는 ALF데이터를포함하는제 2 APS및상기제 2 APS에 대한 ID정보를포함하고,
상기 ALF데이터는상기제 2 APS에대한 ID정보를기반으로도출되고, 상기 ALF절차는상기 ALF데이터를기반으로 ALF필터계수를 도출하는절차를포함하고,
상기제 2 APS에대한 ID정보는상기제 1 APS에대한 ID정보와다른 것을특징으로하는,영상디코딩방법.
[청구항 5] 제 1항에 있어서,
상기 복원 샘플들에 대한 ALF(Adaptive Loop Filtering) 절차를 수행하는
상기제 1 APS는 ALF데이터를포함하고, 상기 ALF데이터는상기제 1 APS에대한 ID정보를기반으로도출되고, 상기 ALF절차는상기 ALF데이터를기반으로 ALF필터계수를 도출하는절차를포함하는것을특징으로하는,영상디코딩방법.
[청구항 6] 제 4항에 있어서,
상기영상정보는 SPS(Sequence Parameter Set)및헤더정보를포함하고, 상기 SPS는상기 ALF의가용여부를나타내는제 1 ALF가용플래그를 포함하고,
값이 1인상기제 1 ALF가용플래그를기반으로,상기헤더정보는픽처 또는슬라이스에서의상기 ALF의가용여부를나타내는제 2 ALF가용 플래그를포함하고,
값이 1인상기제 2 ALF가용플래그를기반으로,상기헤더정보는상기 픽처또는상기슬라이스에서의상기 ALF의사용여부를나타내는 ALF 사용플래그를포함하는것을특징으로하는,영상디코딩방법.
[청구항 7] 제 6항에 있어서,
값이 1인상기 ALF사용플래그를기반으로,상기헤더정보는상기 ALF 데이터를포함하고,
값이 0인상기 ALF사용플래그를기반으로,상기헤더정보는상기제 2 APS에대한 ID정보를포함하는것을특징으로하는,영상디코딩방법 . [청구항 8] 제 4항에 있어서,
상기영상정보는 SPS(Sequence Parameter Set)및헤더정보를포함하고, 상기 SPS는상기리셰이핑의가용여부를나타내는리셰이핑가용 플래그및상기 ALF의가용여부를나타내는제 1 ALF가용플래그를 포함하고,
값이 1인상기제 1리셰이핑가용플래그및값이 1인상기제 1 ALF가용 플래그를기반으로,상기헤더정보는픽처또는슬라이스에서의상기 ALF의가용여부를나타내는제 2 ALF가용플래그를포함하고, 값이 1인상기제 2 ALF가용플래그를기반으로,상기헤더정보는상기 픽처또는상기슬라이스에서의상기 ALF및상기리셰이핑의사용 여부를나타내는 ALF및리셰이핑사용플래그를포함하고, 값이 1인상기 ALF및리셰이핑사용플래그를기반으로,상기헤더 정보는상기 ALF데이터및상기리셰이핑데이터를포함하고, 값이 0인상기 ALF및리셰이핑사용플래그를기반으로,상기헤더 정보는상기제 1 APS에대한 ID정보및상기제 2 APS에대한 ID정보를 포함하는것을특징으로하는,영상디코딩방법.
[청구항 9] 인코딩장치에의하여수행되는영상인코딩방법에 있어서,
현재픽처내현재블록의복원샘플들을생성하는단계;
상기 복원 샘플들에 대한 리셰이핑(reshaping) 관련 정보를 생성하는 단계; 및 상기 복원 샘플들의 생성에 관한 정보 및 상기 리셰이핑 관련 정보를
상기리셰이핑관련정보는리셰이핑데이터를포함하는제 1 APS (Adaptation Parameter Set)및상기제 1 APS에대한 ID정보를 포함하고,
상기리셰이핑데이터는상기현재블록의복원샘플의값의매핑관계를 나타내는매핑인덱스를도출하기위한정보를포함하는것을특징으로 하는,영상인코딩방법 .
[청구항 10] 제 9항에 있어서,
상기영상정보는 SPS(Sequence Parameter Set)를포함하고, 상기 SPS는상기리셰이핑의가용여부를나타내는제 1리셰이핑가용 플래그를포함하는것을특징으로하는,영상인코딩방법 .
[청구항 11] 제 10항에 있어서,
상기영상정보는헤더정보를포함하고,
값이 1인상기제 1리셰이핑가용플래그를기반으로,상기헤더정보는 픽처또는슬라이스에서의상기리셰이핑의가용여부를나타내는제 2 리셰이핑가용플래그를포함하는것을특징으로하는,영상인코딩방법.
[청구항 12] 제 9항에 있어서,
상기 복원 샘플들에 대한 ALF(Adaptive Loop Filtering) 관련 정보를
상기 ALF관련정보는 ALF데이터를포함하는제 2 APS및상기제 2 APS에대한 ID정보를포함하고,
상기 ALF데이터는 ALF필터계수를도출하기위한정보를포함하고, 상기제 2 APS에대한 ID정보는상기제 1 APS에대한 ID정보와다른 것을특징으로하는,영상인코딩방법 .
[청구항 13] 제 9항에 있어서,
상기 복원 샘플들에 대한 ALF(Adaptive Loop Filtering) 관련 정보를
상기 ALF관련정보는 ALF필터계수를도출하기위한정보를포함하는 ALF데이터를포함하고,
상기제 1 APS는상기 ALF데이터를포함하는것을특징으로하는,영상 인코딩방법.
[청구항 14] 제 12항에 있어서,
상기영상정보는 SPS(Sequence Parameter Set)및헤더정보를포함하고, 상기 SPS는상기 ALF의가용여부를나타내는제 1 ALF가용플래그를 포함하고,
값이 1인상기제 1 ALF가용플래그를기반으로,상기헤더정보는픽처 또는슬라이스에서의상기 ALF의가용여부를나타내는제 2 ALF가용 2020/175893 1»(:1^1{2020/002702 플래그를포함하고,
값이 1인상기 제 2사 가용플래그를기반으로,상기 헤더정보는상기 픽처또는상기슬라이스에서의상기사 의사용여부를나타내는
Figure imgf000085_0001
사용플래그를포함하는것을특징으로하는,영상인코딩 방법 .
[청구항 15] 영상 디코딩 장치가 영상 디코딩 방법을 수행하도록 야기하는 인코딩된 정보를 저장하는 컴퓨터 판독 가능한 저장 매체에 있어서, 상기 영상 디코딩 방법은:
비트스트림을 통하여 영상 정보를 수신하는 단계;
상기 영상 정보를 기반으로 현재 블록의 복원 샘플들을 생성하는 단계; 및
상기 복원 샘플들에 대한 리셰이핑(reshaping) 절차를 수행하는 단계를 포함하고,
상기 리셰이핑 절차는 상기 현재 블록의 복원 샘플의 값 및 매핑 인덱스를 기반으로 매핑된 복원 샘플의 값을 도출하는 절차를 포함하고,
상기 영상 정보는 리셰이핑 데이터를 포함하는 제 1 APS(Adaptation Parameter Set) 및 상기 제 1 APS에 대한 ID 정보를 포함하고,
상기 리셰이핑 데이터는 상기 제 1 APS에 대한 ID 정보를 기반으로 도출되고,
상기 매핑 인덱스는 상기 리셰이핑 데이터를 기반으로 도출되는 것을 특징으로 하는, 컴퓨터 판독 가능한 저장 매체.
PCT/KR2020/002702 2019-02-28 2020-02-25 Aps 시그널링 기반 비디오 또는 영상 코딩 WO2020175893A1 (ko)

Priority Applications (7)

Application Number Priority Date Filing Date Title
KR1020237040136A KR20230163584A (ko) 2019-02-28 2020-02-25 Aps 시그널링 기반 비디오 또는 영상 코딩
KR1020227014284A KR102606330B1 (ko) 2019-02-28 2020-02-25 Aps 시그널링 기반 비디오 또는 영상 코딩
KR1020217027581A KR102393325B1 (ko) 2019-02-28 2020-02-25 Aps 시그널링 기반 비디오 또는 영상 코딩
AU2020229608A AU2020229608B2 (en) 2019-02-28 2020-02-25 APS signaling-based video or image coding
US17/400,883 US11758141B2 (en) 2019-02-28 2021-08-12 APS signaling-based video or image coding
US18/227,134 US12069270B2 (en) 2019-02-28 2023-07-27 APS signaling-based video or image coding
AU2023282249A AU2023282249A1 (en) 2019-02-28 2023-12-13 Aps signaling-based video or image coding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962812170P 2019-02-28 2019-02-28
US62/812,170 2019-02-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/400,883 Continuation US11758141B2 (en) 2019-02-28 2021-08-12 APS signaling-based video or image coding

Publications (1)

Publication Number Publication Date
WO2020175893A1 true WO2020175893A1 (ko) 2020-09-03

Family

ID=72240038

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002702 WO2020175893A1 (ko) 2019-02-28 2020-02-25 Aps 시그널링 기반 비디오 또는 영상 코딩

Country Status (4)

Country Link
US (2) US11758141B2 (ko)
KR (3) KR102606330B1 (ko)
AU (2) AU2020229608B2 (ko)
WO (1) WO2020175893A1 (ko)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022125151A1 (en) * 2020-12-08 2022-06-16 Tencent America LLC Method and apparatus for video filtering

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113519157B (zh) * 2019-03-04 2023-12-26 北京字节跳动网络技术有限公司 视频处理中滤波信息的两级信令
CN113785571A (zh) * 2019-04-03 2021-12-10 Lg 电子株式会社 基于自适应环路滤波器的视频或图像编译
CN113875237A (zh) * 2019-04-10 2021-12-31 韩国电子通信研究院 用于在帧内预测中用信号传送预测模式相关信号的方法和装置
CN117395397A (zh) 2019-06-04 2024-01-12 北京字节跳动网络技术有限公司 使用临近块信息的运动候选列表构建
KR20220016839A (ko) 2019-06-04 2022-02-10 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 기하학적 분할 모드 코딩을 갖는 모션 후보 리스트
WO2021008511A1 (en) * 2019-07-14 2021-01-21 Beijing Bytedance Network Technology Co., Ltd. Geometric partition mode candidate list construction in video coding
US20220303582A1 (en) * 2019-08-19 2022-09-22 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Usage of access unit delimiters and adaptation parameter sets
AU2020352453A1 (en) * 2019-09-24 2022-04-21 Huawei Technologies Co., Ltd. SEI message dependency simplification in video coding
CN114450959B (zh) 2019-09-28 2024-08-02 Beijing Bytedance Network Technology Co., Ltd. Geometric partition mode in video coding
WO2021110045A1 (en) * 2019-12-03 2021-06-10 Huawei Technologies Co., Ltd. Coding method, device, system with merge mode
WO2024011074A1 (en) * 2022-07-04 2024-01-11 Bytedance Inc. Method, apparatus, and medium for video processing

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
US9001883B2 (en) * 2011-02-16 2015-04-07 Mediatek Inc Method and apparatus for slice common information sharing
US9277228B2 (en) * 2011-07-18 2016-03-01 Qualcomm Incorporated Adaptation parameter sets for video coding
JPWO2013031315A1 (ja) * 2011-08-30 2015-03-23 Sony Corporation Image processing apparatus and image processing method
CN103096047B (zh) * 2011-11-01 2018-06-19 ZTE Corporation Slice-layer parameter set decoding and encoding method and apparatus
CN103200400B (zh) * 2012-01-09 2018-03-16 ZTE Corporation Picture-layer and slice-layer encoding/decoding method, codec, and electronic device
WO2013144144A1 (en) * 2012-03-30 2013-10-03 Panasonic Corporation Syntax and semantics for adaptive loop filter and sample adaptive offset
GB2516424A (en) * 2013-07-15 2015-01-28 Nokia Corp A method, an apparatus and a computer program product for video coding and decoding
JP2017501599A (ja) * 2013-10-07 2017-01-12 Vid Scale, Inc. Combined scalability processing for multi-layer video coding
US20150264404A1 (en) * 2014-03-17 2015-09-17 Nokia Technologies Oy Method and apparatus for video coding and decoding
US10334260B2 (en) * 2014-03-17 2019-06-25 Nokia Technologies Oy Apparatus, a method and a computer program for video coding and decoding
US10432951B2 (en) * 2014-06-24 2019-10-01 Qualcomm Incorporated Conformance and inoperability improvements in multi-layer video coding
EP3386198A1 (en) * 2017-04-07 2018-10-10 Thomson Licensing Method and device for predictive picture encoding and decoding
CN116634175A (zh) * 2017-05-17 2023-08-22 KT Corporation Method for decoding an image signal and method for encoding an image signal
EP3425911A1 (en) * 2017-07-06 2019-01-09 Thomson Licensing A method and a device for picture encoding and decoding
KR20190024212A (ko) * 2017-08-31 2019-03-08 Sejong University Industry-Academia Cooperation Foundation Method for configuring a tile structure and apparatus therefor
KR102435014B1 (ko) * 2018-02-14 2022-08-23 Dolby Laboratories Licensing Corporation Image reshaping in video coding using rate-distortion optimization

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
KR20180100368A (ko) * 2015-12-31 2018-09-10 ZTE Corporation Image decoding and encoding methods, decoding and encoding apparatuses, decoder and encoder
US20170332098A1 (en) * 2016-05-16 2017-11-16 Qualcomm Incorporated Loop sample processing for high dynamic range and wide color gamut video coding
WO2019006300A1 (en) * 2017-06-29 2019-01-03 Dolby Laboratories Licensing Corporation INTEGRATED IMAGE REMODELING AND VIDEO CODING

Non-Patent Citations (2)

Title
JILL BOYCE: "BoG report on high level syntax", JVET-M0816-V3, JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 13TH MEETING, 15 January 2019 (2019-01-15), Marrakech, MA, XP030202250 *
TAORAN LU: "CE12: Mapping functions (test CE12-1 and CE12-2)", JVET-M0427-V2, JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 13TH MEETING, 15 January 2019 (2019-01-15), Marrakech, MA, XP030254089 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
WO2022125151A1 (en) * 2020-12-08 2022-06-16 Tencent America LLC Method and apparatus for video filtering
US11546638B2 (en) 2020-12-08 2023-01-03 Tencent America LLC Method and apparatus for video filtering

Also Published As

Publication number Publication date
US20230421770A1 (en) 2023-12-28
KR20210115042A (ko) 2021-09-24
KR20220057661A (ko) 2022-05-09
KR102393325B1 (ko) 2022-05-02
KR20230163584A (ko) 2023-11-30
AU2020229608B2 (en) 2023-09-14
AU2023282249A1 (en) 2024-01-18
KR102606330B1 (ko) 2023-11-24
US12069270B2 (en) 2024-08-20
US20210392333A1 (en) 2021-12-16
AU2020229608A1 (en) 2021-10-07
US11758141B2 (en) 2023-09-12

Similar Documents

Publication Publication Date Title
KR102393325B1 (ko) APS signaling-based video or image coding
KR20220044766A (ko) Cross-component filtering-based image coding apparatus and method
KR102479050B1 (ko) Luma mapping and chroma scaling-based video or image coding
KR102491959B1 (ko) Luma mapping and chroma scaling-based video or image coding
US11917143B2 (en) Adaptive loop filter-based video or image coding
KR20220019241A (ko) Adaptive loop filter-based video or image coding
KR20210135337A (ko) Adaptive loop filter-based video or image coding
KR102616829B1 (ko) Image coding method based on intra prediction using an MPM list, and apparatus therefor
US20240121434A1 (en) Video or image coding on basis of conditionally parsed alf model and reshaping model
KR20210036413A (ko) Syntax design method and apparatus for performing coding by using the syntax
KR20210118951A (ko) Method and apparatus for signaling information on a chroma format
KR20210136988A (ko) Video or image coding method and apparatus therefor
WO2020175918A1 (ko) Intra prediction-based image coding method and apparatus using a unified MPM list
JP2024038444A (ja) Luma mapping-based video or image coding
KR20220017425A (ko) Video or image coding based on mapping of luma samples and scaling of chroma samples
JP7471495B2 (ja) Luma mapping-based video or image coding
US20220132110A1 (en) Filtering-based video or image coding comprising mapping
KR20210158392A (ko) Luma mapping-based video or image coding
KR20210158390A (ko) Luma mapping and chroma scaling-based video or image coding
KR20220003114A (ko) Video or image coding based on mapping of luma samples and scaling of chroma samples
RU2781172C1 (ru) Video or image coding based on luma mapping
KR20240090169A (ko) Method and apparatus for coding an intra prediction mode
KR20240131341A (ko) Intra prediction method and apparatus based on intra prediction mode derivation
KR20220003115A (ko) Luma mapping and chroma scaling-based video or image coding
KR20220004767A (ko) Luma mapping-based video or image coding

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762109

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217027581

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020229608

Country of ref document: AU

Date of ref document: 20200225

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 20762109

Country of ref document: EP

Kind code of ref document: A1