WO2020256436A1 - Image or video coding using deblocking filtering - Google Patents
Image or video coding using deblocking filtering
- Publication number
- WO2020256436A1 (PCT/KR2020/007908)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- boundary
- target boundary
- filter length
- deblocking filtering
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- The present technology relates to image or video coding, for example, to image or video coding techniques using deblocking filtering.
- VR Virtual Reality
- AR Artificial Reality
- High-efficiency video/image compression technology is required in order to effectively compress, transmit, store, and reproduce information of high-resolution, high-quality video/images having the various characteristics described above.
- the technical problem of this document is to provide a method and apparatus for increasing video/image coding efficiency.
- Another technical problem of this document is to provide a method and apparatus for improving video/image quality.
- Another technical problem of this document is to provide a method and apparatus for determining a filter length based on a distance between peripheral edges in a process of performing a deblocking filter.
- a filter length may be determined based on a distance between block boundaries, and deblocking filtering may be performed based on the filter length.
- a filter length for a luma component block may be derived as 0, 3, 5, or 7 based on whether a distance between block boundaries is less than or equal to 4, 8, or 16.
- a filter length for a chroma component block may be derived as 0, 1, or 3 based on whether a distance between block boundaries is less than or equal to 2 or 4.
- deblocking filtering for a block boundary may be performed based on whether a boundary strength for a block boundary is greater than 0 for a chroma component block.
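- As an illustration only, the sketch below shows one way the filter-length selection summarized above could be expressed in code; the exact pairing of the listed lengths (0, 3, 5, 7 for luma; 0, 1, 3 for chroma) with the listed distance thresholds, and the boundary-strength check for chroma, are assumptions of this sketch rather than a statement of the claimed method.

```python
# Hypothetical sketch of the filter-length selection described above.
# The pairing of lengths and thresholds is assumed for illustration only.
def luma_filter_length(distance: int) -> int:
    """Filter length for a luma component block from the distance between block boundaries."""
    if distance <= 4:
        return 0
    if distance <= 8:
        return 3
    if distance <= 16:
        return 5
    return 7

def chroma_filter_length(distance: int, boundary_strength: int) -> int:
    """Filter length for a chroma component block; filtering applies only when bS > 0."""
    if boundary_strength <= 0:
        return 0  # chroma deblocking is skipped when the boundary strength is 0
    if distance <= 2:
        return 0
    if distance <= 4:
        return 1
    return 3

print(luma_filter_length(12), chroma_filter_length(8, boundary_strength=1))  # 5 3
```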
- a video/video decoding method performed by a decoding apparatus is provided.
- the video/video decoding method may include the method disclosed in the embodiments of this document.
- a decoding apparatus for performing video/video decoding.
- the decoding apparatus may perform the method disclosed in the embodiments of this document.
- a video/video encoding method performed by an encoding device is provided.
- the video/video encoding method may include the method disclosed in the embodiments of this document.
- an encoding device that performs video/video encoding.
- the encoding device may perform the method disclosed in the embodiments of this document.
- a computer-readable digital storage medium in which encoded video/image information generated according to the video/image encoding method disclosed in at least one of the embodiments of the present document is stored is provided.
- A computer-readable digital storage medium storing encoded information or encoded video/image information that causes a decoding device to perform the video/image decoding method disclosed in at least one of the embodiments of the present document is provided.
- image/video quality can be improved.
- The filter length is effectively determined based on the distance between block boundaries, thereby improving the trade-off between subjective image quality and complexity and simplifying the hardware (H/W) design process.
- an aligned filtering boundary between a luma component and a chroma component can be provided, a uniform filter length can be provided for each block boundary, and parallel processing can be performed.
- FIG. 1 schematically shows an example of a video/image coding system that can be applied to embodiments of this document.
- FIG. 2 is a diagram schematically illustrating a configuration of a video/video encoding apparatus to which embodiments of the present document can be applied.
- FIG. 3 is a diagram schematically illustrating a configuration of a video/image decoding apparatus to which embodiments of the present document can be applied.
- FIG. 4 schematically shows an in-loop filtering-based video/video encoding method.
- FIG. 5 schematically shows a filtering unit in an encoding apparatus.
- FIG. 6 schematically shows an in-loop filtering-based video/video decoding method
- FIG. 7 schematically shows a filtering unit in a decoding apparatus.
- FIG. 8 exemplarily shows an embodiment of a method of performing deblocking filtering.
- FIG. 9 shows an example of a method of determining a filter length based on a condition for a peripheral edge.
- FIG. 10 shows an example of a method of performing deblocking filtering based on a filtering condition for a boundary of a chroma component.
- FIG. 11 shows an example of a method of determining a filter length according to an embodiment of the present document.
- FIG. 12 illustrates an example of a method of performing deblocking filtering based on a filtering condition for a boundary of a chroma component according to an embodiment of the present document.
- FIGS. 13 and 14 are exemplary diagrams for explaining an aligned filtering boundary between a luma component and a chroma component.
- FIGS. 15 and 16 are exemplary diagrams for explaining a uniform filter length.
- FIG. 17 is an exemplary diagram for explaining a parallel processing function in a deblocking filtering process.
- FIG. 22 shows an example of a content streaming system to which embodiments disclosed in this document can be applied.
- each of the components in the drawings described in this document is independently illustrated for convenience of description of different characteristic functions, and does not mean that each component is implemented as separate hardware or separate software.
- two or more of the configurations may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which each configuration is integrated and/or separated are also included in the scope of the rights of this document, unless departing from the essence of this document.
- In this document, "A or B" may mean "only A", "only B", or "both A and B".
- In other words, "A or B" in this document may be interpreted as "A and/or B".
- For example, in this document, "A, B or C" means "only A", "only B", "only C", or "any combination of A, B and C".
- A forward slash (/) or a comma used in this document may mean "and/or".
- For example, "A/B" may mean "A and/or B". Accordingly, "A/B" may mean "only A", "only B", or "both A and B".
- For example, "A, B, C" may mean "A, B or C".
- In this document, "at least one of A and B" may mean "only A", "only B", or "both A and B".
- In addition, the expression "at least one of A or B" or "at least one of A and/or B" may be interpreted the same as "at least one of A and B".
- In addition, in this document, "at least one of A, B and C" means "only A", "only B", "only C", or "any combination of A, B and C".
- In addition, "at least one of A, B or C" or "at least one of A, B and/or C" may mean "at least one of A, B and C".
- In addition, parentheses used in this document may mean "for example". Specifically, when indicated as "prediction (intra prediction)", "intra prediction" may be proposed as an example of "prediction". In other words, "prediction" in this document is not limited to "intra prediction", and "intra prediction" may be proposed as an example of "prediction". In addition, even when indicated as "prediction (i.e., intra prediction)", "intra prediction" may be proposed as an example of "prediction".
- This document is about video/image coding.
- the method/embodiment disclosed in this document may be applied to a method disclosed in the VVC (versatile video coding) standard.
- In addition, the method/embodiment disclosed in this document may be applied to methods disclosed in the EVC (essential video coding) standard, the AV1 (AOMedia Video 1) standard, the AVS2 (2nd generation of audio video coding standard) standard, or a next-generation video/image coding standard (e.g., H.267 or H.268, etc.).
- a video may mean a set of a series of images over time.
- a picture generally refers to a unit representing one image in a specific time period, and a slice/tile is a unit constituting a part of a picture in coding.
- a slice/tile may include one or more coding tree units (CTU).
- CTU coding tree units
- One picture may be composed of one or more slices/tiles.
- a tile is a rectangular region of CTUs within a particular tile column and a particular tile row in a picture.
- The tile column is a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements in the picture parameter set.
- The tile row is a rectangular region of CTUs having a height specified by syntax elements in the picture parameter set and a width equal to the width of the picture.
- A tile scan is a specific sequential ordering of CTUs partitioning a picture, in which the CTUs are ordered consecutively in a CTU raster scan within a tile, whereas tiles in a picture are ordered consecutively in a raster scan of the tiles of the picture.
- A slice may include an integer number of complete tiles of a picture, or an integer number of consecutive complete CTU rows within a tile of a picture, that may be exclusively contained in a single NAL unit.
- one picture may be divided into two or more subpictures.
- The subpicture may be a rectangular region of one or more slices within a picture.
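- As a simple illustration of the tile partitioning terms above (not the actual picture parameter set syntax), the sketch below enumerates a tile grid from hypothetical per-column widths and per-row heights given in CTUs.

```python
# Illustrative sketch: enumerate tiles from tile column widths and tile row
# heights expressed in CTUs (values a picture parameter set could convey).
def tile_grid(col_widths_ctu, row_heights_ctu):
    tiles = []
    y = 0
    for h in row_heights_ctu:
        x = 0
        for w in col_widths_ctu:
            # each tile is the intersection of one tile column and one tile row
            tiles.append({"x_ctu": x, "y_ctu": y, "w_ctu": w, "h_ctu": h})
            x += w
        y += h
    return tiles

# Example: a picture of 8x6 CTUs split into 2 tile columns and 3 tile rows.
print(len(tile_grid([4, 4], [2, 2, 2])))  # 6 tiles
```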
- a pixel or pel may mean a minimum unit constituting one picture (or image).
- 'Sample' may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or a value of a pixel, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
- the sample may mean a pixel value in the spatial domain, and when such a pixel value is converted to the frequency domain, it may mean a transform coefficient in the frequency domain.
- a unit may represent a basic unit of image processing.
- the unit may include at least one of a specific area of a picture and information related to the corresponding area.
- One unit may include one luma block and two chroma (ex. cb, cr) blocks.
- the unit may be used interchangeably with terms such as a block or an area depending on the case.
- the MxN block may include samples (or sample arrays) consisting of M columns and N rows, or a set (or array) of transform coefficients.
- quantization/inverse quantization and/or transform/inverse transform may be omitted in this document.
- the quantized transform coefficient may be referred to as a transform coefficient.
- the transform coefficient may be called a coefficient or a residual coefficient, or may still be called a transform coefficient for uniformity of expression.
- the quantized transform coefficient and the transform coefficient may be referred to as a transform coefficient and a scaled transform coefficient, respectively.
- the residual information may include information about the transform coefficient(s), and the information about the transform coefficient(s) may be signaled through the residual coding syntax.
- Transform coefficients may be derived based on residual information (or information about the transform coefficient(s)), and scaled transform coefficients may be derived through an inverse transform (scaling) of the transform coefficients. Residual samples may be derived based on the inverse transform (transform) of the scaled transform coefficients. This may be applied/expressed in other parts of this document as well.
- FIG. 1 schematically shows an example of a video/image coding system that can be applied to embodiments of this document.
- a video/image coding system may include a first device (a source device) and a second device (a receiving device).
- the source device may transmit the encoded video/image information or data in a file or streaming form to the receiving device through a digital storage medium or a network.
- the source device may include a video source, an encoding device, and a transmission unit.
- the receiving device may include a receiving unit, a decoding device, and a renderer.
- the encoding device may be referred to as a video/image encoding device, and the decoding device may be referred to as a video/image decoding device.
- the transmitter may be included in the encoding device.
- the receiver may be included in the decoding device.
- the renderer may include a display unit, and the display unit may be configured as a separate device or an external component.
- the video source may acquire a video/image through a process of capturing, synthesizing, or generating a video/image.
- the video source may include a video/image capturing device and/or a video/image generating device.
- the video/image capture device may include, for example, one or more cameras, a video/image archive including previously captured video/images, and the like.
- the video/image generating device may include, for example, a computer, a tablet and a smartphone, and may (electronically) generate a video/image.
- A virtual video/image may be generated through a computer or the like, and in this case, the video/image capturing process may be replaced by a process of generating related data.
- the encoding device may encode the input video/video.
- the encoding apparatus may perform a series of procedures such as prediction, transformation, and quantization for compression and coding efficiency.
- the encoded data (encoded video/video information) may be output in the form of a bitstream.
- the transmission unit may transmit the encoded video/video information or data output in the form of a bitstream to the reception unit of the receiving device through a digital storage medium or a network in a file or streaming form.
- Digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the transmission unit may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcast/communication network.
- the receiver may receive/extract the bitstream and transmit it to the decoding device.
- the decoding device may decode the video/image by performing a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoding device.
- the renderer can render the decoded video/video.
- the rendered video/image may be displayed through the display unit.
- the encoding device may include an image encoding device and/or a video encoding device.
- The encoding device 200 may be configured to include an image partitioner 210, a predictor 220, a residual processor 230, an entropy encoder 240, an adder 250, a filter 260, and a memory 270.
- the prediction unit 220 may include an inter prediction unit 221 and an intra prediction unit 222.
- the residual processing unit 230 may include a transform unit 232, a quantizer 233, an inverse quantizer 234, and an inverse transformer 235.
- the residual processing unit 230 may further include a subtractor 231.
- The addition unit 250 may be referred to as a reconstructor or a reconstructed block generator.
- The image segmentation unit 210, the prediction unit 220, the residual processing unit 230, the entropy encoding unit 240, the addition unit 250, and the filtering unit 260 described above may be configured by at least one hardware component (for example, an encoder chipset or a processor).
- the memory 270 may include a decoded picture buffer (DPB), and may be configured by a digital storage medium.
- the hardware component may further include the memory 270 as an internal/external component.
- the image segmentation unit 210 may divide an input image (or picture, frame) input to the encoding apparatus 200 into one or more processing units.
- the processing unit may be referred to as a coding unit (CU).
- The coding unit may be recursively divided from a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree binary-tree ternary-tree (QTBTTT) structure.
- QTBTTT Quad-tree binary-tree ternary-tree
- CTU coding tree unit
- LCU largest coding unit
- one coding unit may be divided into a plurality of coding units of a deeper depth based on a quad tree structure, a binary tree structure, and/or a ternary structure.
- a quad tree structure may be applied first, and a binary tree structure and/or a ternary structure may be applied later.
- Alternatively, the binary tree structure may be applied first.
- The coding procedure according to this document may be performed based on a final coding unit that is no longer divided. In this case, based on coding efficiency according to image characteristics, the largest coding unit may be used directly as the final coding unit, or if necessary, the coding unit may be recursively divided into coding units of deeper depth so that a coding unit of an optimal size may be used as the final coding unit.
- the coding procedure may include a procedure such as prediction, transformation, and restoration described later.
- the processing unit may further include a prediction unit (PU) or a transform unit (TU).
- the prediction unit and the transform unit may be divided or partitioned from the above-described final coding unit, respectively.
- the prediction unit may be a unit of sample prediction
- The transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
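- The recursive splitting described above can be pictured with the toy sketch below; it performs only quad splits down to a fixed minimum size and is not the QTBTTT decision process itself.

```python
# Toy sketch of recursive partitioning: quad-split a CTU into leaf coding units.
# Split decisions in a real encoder depend on rate-distortion cost, not a fixed size.
def quad_split(x, y, size, min_size=16):
    if size <= min_size:
        return [(x, y, size)]  # final coding unit, no further division
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quad_split(x + dx, y + dy, half, min_size)
    return leaves

print(len(quad_split(0, 0, 128)))  # a 128x128 CTU yields 64 leaf CUs of 16x16
```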
- the unit may be used interchangeably with terms such as a block or an area depending on the case.
- the MxN block may represent a set of samples or transform coefficients consisting of M columns and N rows.
- A sample may generally represent a pixel or a value of a pixel, may represent only a pixel/pixel value of a luma component, or may represent only a pixel/pixel value of a chroma component.
- A sample may be used as a term corresponding to a pixel or a pel of one picture (or image).
- The encoding apparatus 200 subtracts the prediction signal (predicted block, prediction sample array) output from the inter prediction unit 221 or the intra prediction unit 222 from the input image signal (original block, original sample array) to generate a residual signal (residual block, residual sample array).
- A unit that subtracts the prediction signal (predicted block, prediction sample array) from the input image signal (original block, original sample array) within the encoder 200 may be referred to as the subtraction unit 231.
- the prediction unit may perform prediction on a block to be processed (hereinafter, referred to as a current block) and generate a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied in units of the current block or CU.
- the prediction unit may generate various information related to prediction, such as prediction mode information, as described later in the description of each prediction mode, and transmit it to the entropy encoding unit 240.
- the information on prediction may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
- the intra prediction unit 222 may predict the current block by referring to samples in the current picture.
- the referenced samples may be located in the vicinity of the current block or may be located apart according to the prediction mode.
- prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the non-directional mode may include, for example, a DC mode and a planar mode (Planar mode).
- The directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the degree of detail of the prediction direction. However, this is an example, and more or fewer directional prediction modes may be used depending on the setting.
- the intra prediction unit 222 may determine a prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
- the inter prediction unit 221 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
- Motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- the reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different.
- the temporal neighboring block may be called a collocated reference block, a co-located CU (colCU), and the like, and a reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
- The inter prediction unit 221 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive a motion vector and/or a reference picture index of the current block. Inter prediction may be performed based on various prediction modes.
- For example, in the case of the skip mode and the merge mode, the inter prediction unit 221 may use motion information of a neighboring block as motion information of the current block.
- In the case of the skip mode, unlike the merge mode, a residual signal may not be transmitted.
- MVP motion vector prediction
- In the case of the motion vector prediction (MVP) mode, the motion vector of a neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling a motion vector difference.
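- A minimal sketch of the MVP idea above is given below: the motion vector of the current block is rebuilt from a predictor taken from a neighboring block plus a signaled motion vector difference (all names are illustrative).

```python
# Minimal sketch of motion vector reconstruction in an MVP-style mode:
# mv = motion vector predictor (from a neighboring block) + signaled difference.
def reconstruct_mv(mvp, mvd):
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

print(reconstruct_mv(mvp=(12, -4), mvd=(1, 2)))  # (13, -2)
```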
- the prediction unit 220 may generate a prediction signal based on various prediction methods to be described later.
- the prediction unit may apply intra prediction or inter prediction for prediction of one block, as well as simultaneously apply intra prediction and inter prediction. This can be called combined inter and intra prediction (CIIP).
- the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode to predict a block.
- IBC intra block copy
- the IBC prediction mode or the palette mode may be used for content image/video coding such as a game, for example, screen content coding (SCC).
- SCC screen content coding
- IBC basically performs prediction in the current picture, but can be performed similarly to inter prediction in that it derives a reference block in the current picture. That is, the IBC may use at least one of the inter prediction techniques described in this document.
- The palette mode can be viewed as an example of intra coding or intra prediction. When the palette mode is applied, a sample value in the picture may be signaled based on information about a palette table and a palette index.
- the prediction signal generated through the prediction unit may be used to generate a reconstructed signal or may be used to generate a residual signal.
- the transform unit 232 may generate transform coefficients by applying a transform technique to the residual signal.
- The transform technique may include at least one of DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), KLT (Karhunen-Loeve Transform), GBT (Graph-Based Transform), or CNT (Conditionally Non-linear Transform).
- DCT Discrete Cosine Transform
- DST Discrete Sine Transform
- KLT Karhunen-Loeve Transform
- GBT Graph-Based Transform
- CNT Conditionally Non-linear Transform
- CNT refers to a transformation obtained based on generating a prediction signal using all previously reconstructed pixels.
- The transform process may be applied to square pixel blocks of the same size, or may be applied to blocks of variable size other than square.
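- For illustration, the sketch below applies a floating-point 2D DCT-II (one of the transform families listed above) to a small residual block and checks that the inverse transform recovers it; it is not the integer transform of any particular standard.

```python
import numpy as np

# Orthonormal DCT-II basis matrix of size n (illustration only).
def dct2_matrix(n):
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

residual = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 residual block
T = dct2_matrix(4)
coeffs = T @ residual @ T.T      # forward 2D transform
restored = T.T @ coeffs @ T      # inverse 2D transform
assert np.allclose(restored, residual)
```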
- The quantization unit 233 quantizes the transform coefficients and transmits them to the entropy encoding unit 240, and the entropy encoding unit 240 encodes the quantized signal (information about the quantized transform coefficients) and outputs it as a bitstream.
- the information on the quantized transform coefficients may be called residual information.
- The quantization unit 233 may rearrange the block-form quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
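- The rearrangement into a one-dimensional vector can be illustrated with the hypothetical diagonal scan below; the scan order actually used is defined by the codec and is not reproduced here.

```python
# Illustrative diagonal scan: read a square block of quantized coefficients
# anti-diagonal by anti-diagonal into a one-dimensional vector.
def diagonal_scan(block):
    n = len(block)
    out = []
    for s in range(2 * n - 1):          # s = row index + column index of the anti-diagonal
        for row in range(n):
            col = s - row
            if 0 <= col < n:
                out.append(block[row][col])
    return out

print(diagonal_scan([[9, 5, 1, 0],
                     [4, 2, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 0, 0]]))
```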
- the entropy encoding unit 240 may perform various encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
- the entropy encoding unit 240 may encode together or separately information necessary for video/image reconstruction (eg, values of syntax elements) in addition to quantized transform coefficients.
- the encoded information (eg, encoded video/video information) may be transmitted or stored in a bitstream format in units of network abstraction layer (NAL) units.
- the video/video information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/video information may further include general constraint information.
- information and/or syntax elements transmitted/signaled from the encoding device to the decoding device may be included in the video/video information.
- the video/video information may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream may be transmitted through a network or may be stored in a digital storage medium.
- the network may include a broadcasting network and/or a communication network
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- A transmission unit for transmitting and/or a storage unit (not shown) for storing may be configured as internal/external elements of the encoding apparatus 200, or the transmission unit may be included in the entropy encoding unit 240.
- The quantized transform coefficients output from the quantization unit 233 may be used to generate a prediction signal. For example, a residual signal (residual block or residual samples) may be reconstructed by applying inverse quantization and inverse transform to the quantized transform coefficients through the inverse quantization unit 234 and the inverse transform unit 235.
- The addition unit 250 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 221 or the intra prediction unit 222.
- the predicted block may be used as a reconstructed block.
- the addition unit 250 may be referred to as a restoration unit or a restoration block generation unit.
- the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, and may be used for inter prediction of the next picture through filtering as described later.
- LMCS luma mapping with chroma scaling
- the filtering unit 260 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
- The filtering unit 260 may apply various filtering methods to the reconstructed picture to generate a modified reconstructed picture, and the modified reconstructed picture may be stored in the memory 270, specifically, in the DPB of the memory 270.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- the filtering unit 260 may generate a variety of filtering information and transmit it to the entropy encoding unit 240 as described later in the description of each filtering method.
- the filtering information may be encoded by the entropy encoding unit 240 and output in the form of a bitstream.
- the modified reconstructed picture transmitted to the memory 270 may be used as a reference picture in the inter prediction unit 221.
- Through this, the encoding apparatus can avoid a prediction mismatch between the encoding apparatus 200 and the decoding apparatus, and can also improve encoding efficiency.
- The DPB of the memory 270 may store the modified reconstructed picture for use as a reference picture in the inter prediction unit 221.
- the memory 270 may store motion information of a block from which motion information in a current picture is derived (or encoded) and/or motion information of blocks in a picture that have already been reconstructed.
- the stored motion information may be transferred to the inter prediction unit 221 in order to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- The memory 270 may store reconstructed samples of reconstructed blocks in the current picture, and may transfer them to the intra prediction unit 222.
- the decoding device may include an image decoding device and/or a video decoding device.
- The decoding apparatus 300 may be configured to include an entropy decoder 310, a residual processor 320, a predictor 330, an adder 340, a filter 350, and a memory 360.
- the prediction unit 330 may include an inter prediction unit 331 and an intra prediction unit 332.
- The residual processing unit 320 may include a dequantizer 321 and an inverse transformer 322.
- The entropy decoding unit 310, the residual processing unit 320, the prediction unit 330, the addition unit 340, and the filtering unit 350 described above may be configured by one hardware component (for example, a decoder chipset or a processor).
- the memory 360 may include a decoded picture buffer (DPB), and may be configured by a digital storage medium.
- the hardware component may further include the memory 360 as an internal/external component.
- the decoding apparatus 300 may reconstruct an image in response to a process in which the video/image information is processed by the encoding apparatus of FIG. 2. For example, the decoding apparatus 300 may derive units/blocks based on block division related information obtained from the bitstream.
- the decoding device 300 may perform decoding using a processing unit applied in the encoding device.
- the processing unit of decoding may be, for example, a coding unit, and the coding unit may be divided from a coding tree unit or a maximum coding unit along a quad tree structure, a binary tree structure and/or a ternary tree structure.
- One or more transform units may be derived from the coding unit.
- the reconstructed image signal decoded and output through the decoding device 300 may be reproduced through the playback device.
- the decoding apparatus 300 may receive a signal output from the encoding apparatus of FIG. 2 in the form of a bitstream, and the received signal may be decoded through the entropy decoding unit 310.
- the entropy decoding unit 310 may parse the bitstream to derive information (eg, video/video information) necessary for image restoration (or picture restoration).
- the video/video information may further include information on various parameter sets, such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/video information may further include general constraint information.
- the decoding apparatus may further decode the picture based on the information on the parameter set and/or the general restriction information.
- Signaled/received information and/or syntax elements described later in this document may be decoded through the decoding procedure and obtained from the bitstream.
- The entropy decoding unit 310 decodes information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and may output values of syntax elements required for image restoration and quantized values of transform coefficients related to residuals.
- More specifically, the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using information about the syntax element to be decoded, decoding information of neighboring and decoding target blocks, or information about symbols/bins decoded in a previous step, predicts the probability of occurrence of a bin according to the determined context model, and performs arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- After determining the context model, the CABAC entropy decoding method may update the context model using information of the decoded symbol/bin for the context model of the next symbol/bin.
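- The context-adaptive behavior described above can be pictured with the toy model below: a context holds a probability estimate for a bin and is updated after every decoded bin. This is only a conceptual sketch, not the CABAC state machine or its arithmetic-decoding engine.

```python
# Toy sketch of a context model: keep an estimate of P(bin == 1) and nudge it
# toward each observed bin value (conceptual only; not the CABAC update rule).
class ToyContext:
    def __init__(self, p_one=0.5, rate=0.05):
        self.p_one = p_one
        self.rate = rate

    def update(self, bin_value: int) -> None:
        target = 1.0 if bin_value else 0.0
        self.p_one += self.rate * (target - self.p_one)

ctx = ToyContext()
for b in (1, 1, 0, 1):   # bins decoded for syntax elements sharing this context
    ctx.update(b)
print(round(ctx.p_one, 4))  # the estimate drifts toward the observed bin statistics
```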
- Among the information decoded by the entropy decoding unit 310, information about prediction is provided to the prediction unit (inter prediction unit 332 and intra prediction unit 331), and the residual value on which entropy decoding was performed by the entropy decoding unit 310, that is, the quantized transform coefficients and related parameter information, may be input to the residual processing unit 320.
- the residual processing unit 320 may derive a residual signal (a residual block, residual samples, and a residual sample array).
- information about filtering among information decoded by the entropy decoding unit 310 may be provided to the filtering unit 350.
- a receiver (not shown) for receiving a signal output from the encoding device may be further configured as an inner/outer element of the decoding device 300, or the receiver may be a component of the entropy decoding unit 310.
- The decoding apparatus may be referred to as a video/image/picture decoding apparatus, and the decoding apparatus may be divided into an information decoder (video/image/picture information decoder) and a sample decoder (video/image/picture sample decoder).
- The information decoder may include the entropy decoding unit 310, and the sample decoder may include the inverse quantization unit 321, the inverse transform unit 322, the addition unit 340, the filtering unit 350, the memory 360, the inter prediction unit 332, and the intra prediction unit 331.
- the inverse quantization unit 321 may inverse quantize the quantized transform coefficients and output transform coefficients.
- the inverse quantization unit 321 may rearrange the quantized transform coefficients in a two-dimensional block shape. In this case, the rearrangement may be performed based on the coefficient scan order performed by the encoding device.
- the inverse quantization unit 321 may perform inverse quantization on quantized transform coefficients by using a quantization parameter (for example, quantization step size information) and obtain transform coefficients.
- a quantization parameter for example, quantization step size information
- the inverse transform unit 322 obtains a residual signal (residual block, residual sample array) by inverse transforming the transform coefficients.
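- As a rough sketch of the inverse quantization step above (with an assumed, simplified step-size relation rather than any standard's scaling process), each quantized level is multiplied back by a step derived from the quantization parameter.

```python
# Sketch of inverse quantization: levels are scaled by a step size derived from
# the quantization parameter. The step formula below is a simplified assumption.
def dequantize(levels, qp):
    step = 2.0 ** (qp / 6.0)  # assumed relation: step roughly doubles every 6 QP
    return [level * step for level in levels]

print(dequantize([3, 0, -1], qp=24))  # [48.0, 0.0, -16.0]
```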
- the prediction unit may perform prediction on the current block and generate a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on the information about the prediction output from the entropy decoding unit 310, and may determine a specific intra/inter prediction mode.
- The prediction unit 330 may generate a prediction signal based on various prediction methods to be described later.
- the prediction unit may apply intra prediction or inter prediction for prediction of one block, as well as simultaneously apply intra prediction and inter prediction. This can be called combined inter and intra prediction (CIIP).
- the prediction unit may be based on an intra block copy (IBC) prediction mode or a palette mode to predict a block.
- IBC intra block copy
- the IBC prediction mode or the palette mode may be used for content image/video coding such as a game, for example, screen content coding (SCC).
- SCC screen content coding
- IBC basically performs prediction in the current picture, but can be performed similarly to inter prediction in that it derives a reference block in the current picture. That is, the IBC may use at least one of the inter prediction techniques described in this document.
- The palette mode can be viewed as an example of intra coding or intra prediction. When the palette mode is applied, information about a palette table and a palette index may be included in the video/image information and signaled.
- the intra prediction unit 331 may predict the current block by referring to samples in the current picture.
- the referenced samples may be located in the vicinity of the current block or may be located apart according to the prediction mode.
- prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the intra prediction unit 331 may determine a prediction mode applied to the current block by using the prediction mode applied to the neighboring block.
- the inter prediction unit 332 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on the reference picture.
- Motion information may be predicted in units of blocks, subblocks, or samples based on the correlation of motion information between the neighboring block and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- the neighboring block may include a spatial neighboring block existing in the current picture and a temporal neighboring block existing in the reference picture.
- the inter prediction unit 332 may construct a motion information candidate list based on neighboring blocks, and derive a motion vector and/or a reference picture index of the current block based on the received candidate selection information.
- Inter prediction may be performed based on various prediction modes, and the information about the prediction may include information indicating a mode of inter prediction for the current block.
- The addition unit 340 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 332 and/or the intra prediction unit 331). When there is no residual for a block to be processed, such as when the skip mode is applied, the predicted block may be used as a reconstructed block.
- the addition unit 340 may be referred to as a restoration unit or a restoration block generation unit.
- the generated reconstructed signal may be used for intra prediction of the next processing target block in the current picture, may be output through filtering as described later, or may be used for inter prediction of the next picture.
- LMCS luma mapping with chroma scaling
- the filtering unit 350 may improve subjective/objective image quality by applying filtering to the reconstructed signal.
- The filtering unit 350 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and the modified reconstructed picture may be transferred to the memory 360, specifically, to the DPB of the memory 360.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- the (modified) reconstructed picture stored in the DPB of the memory 360 may be used as a reference picture in the inter prediction unit 332.
- the memory 360 may store motion information of a block from which motion information in a current picture is derived (or decoded) and/or motion information of blocks in a picture that have already been reconstructed.
- the stored motion information may be transmitted to the inter prediction unit 332 in order to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- The memory 360 may store reconstructed samples of reconstructed blocks in the current picture, and may transfer them to the intra prediction unit 331.
- The embodiments described in the filtering unit 260, the inter prediction unit 221, and the intra prediction unit 222 of the encoding apparatus 200 may be applied in the same way or correspondingly to the filtering unit 350, the inter prediction unit 332, and the intra prediction unit 331 of the decoding apparatus 300, respectively.
- a predicted block including prediction samples for a current block as a coding target block may be generated.
- the predicted block includes prediction samples in the spatial domain (or pixel domain).
- The predicted block is derived identically in the encoding apparatus and the decoding apparatus, and the encoding apparatus can increase image coding efficiency by signaling to the decoding apparatus information about the residual between the original block and the predicted block (residual information), rather than the original sample value of the original block itself.
- the decoding apparatus may derive a residual block including residual samples based on the residual information, generate a reconstructed block including reconstructed samples by adding the residual block and the predicted block, and reconstruct including the reconstructed blocks You can create a picture.
- the residual information may be generated through transformation and quantization procedures.
- The encoding apparatus derives a residual block between the original block and the predicted block, performs a transform procedure on the residual samples (residual sample array) included in the residual block to derive transform coefficients, performs a quantization procedure on the transform coefficients to derive quantized transform coefficients, and may signal the related residual information to the decoding apparatus (through a bitstream).
- the residual information may include information such as value information of quantized transform coefficients, position information, a transform technique, a transform kernel, and a quantization parameter.
- the decoding apparatus may perform an inverse quantization/inverse transform procedure based on the residual information and derive residual samples (or residual blocks).
- the decoding apparatus may generate a reconstructed picture based on the predicted block and the residual block.
- the encoding apparatus may also inverse quantize/inverse transform quantized transform coefficients for reference for inter prediction of a picture to derive a residual block, and generate a reconstructed picture based on this.
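- The reconstruction step that both the encoding apparatus and the decoding apparatus perform can be summarized by the sketch below: predicted samples plus residual samples, clipped to the valid sample range (an 8-bit range is assumed here for illustration).

```python
# Sketch of sample reconstruction: reconstructed = clip(prediction + residual).
def reconstruct(pred_samples, residual_samples, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(pred_samples, residual_samples)]

print(reconstruct([120, 130, 250], [5, -7, 10]))  # [125, 123, 255]
```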
- the encoding device/decoding device may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture in order to improve subjective/objective quality.
- the modified reconstructed picture may be stored in a memory of the encoding/decoding device, specifically, in the DPB of the memories 270 and 360.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- pictures constituting the video/video may be encoded/decoded according to a series of decoding orders.
- a picture order corresponding to an output order of a decoded picture may be set differently from the decoding order, and based on this, not only forward prediction but also backward prediction may be performed during inter prediction.
- the picture decoding procedure may roughly include a picture restoration procedure and an in-loop filtering procedure for a reconstructed picture.
- a modified reconstructed picture can be generated through an in-loop filtering procedure, and the modified reconstructed picture can be output as a decoded picture, and is also stored in the decoded picture buffer 360 or memory of the decoding device, When decoding a picture, it can be used as a reference picture in an inter prediction procedure.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, an adaptive loop filter (ALF) procedure, and/or a bi-lateral filter procedure, as described above.
- SAO sample adaptive offset
- ALF adaptive loop filter
- One or some of the deblocking filtering procedure, the SAO procedure, the ALF procedure, and the bi-lateral filter procedure may be applied sequentially, or all of them may be applied sequentially.
- For example, the SAO procedure may be performed after the deblocking filtering procedure is applied to the reconstructed picture.
- Alternatively, for example, the ALF procedure may be performed after the deblocking filtering procedure is applied to the reconstructed picture. This may likewise be performed in the encoding apparatus.
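- The ordering just described can be expressed with the small sketch below; the individual filter functions are placeholders and any stage may be skipped depending on the tools that are enabled.

```python
# Sketch of in-loop filtering order on a reconstructed picture:
# deblocking first, then SAO, then ALF; a stage set to None is skipped.
def in_loop_filter(recon_picture, deblocking=None, sao=None, alf=None):
    for stage in (deblocking, sao, alf):
        if stage is not None:
            recon_picture = stage(recon_picture)
    return recon_picture

# Example with trivial placeholder stages operating on a list of samples.
filtered = in_loop_filter([100, 102, 140], deblocking=lambda pic: pic, sao=lambda pic: pic)
print(filtered)
```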
- The picture encoding procedure may include not only a procedure of encoding information for picture restoration (e.g., partitioning information, prediction information, residual information, etc.) and outputting it in the form of a bitstream, but also a procedure of generating a reconstructed picture for the current picture and applying in-loop filtering to it.
- A modified reconstructed picture may be generated through the in-loop filtering procedure, and it may be stored in the decoded picture buffer 270 or a memory and, as in the decoding apparatus, may be used as a reference picture in the inter prediction procedure when encoding a subsequent picture.
- (In-loop) filtering-related information (parameters) may be encoded by the entropy encoding unit 240 and output in the form of a bitstream, and the decoding apparatus may perform the in-loop filtering procedure in the same way as the encoding apparatus based on the filtering-related information.
- noise generated during video/video coding such as blocking artifacts and ringing artifacts can be reduced, and subjective/objective visual quality can be improved.
- The encoding device and the decoding device can derive the same prediction result, increase the reliability of picture coding, and reduce the amount of data to be transmitted for picture coding.
- FIG. 4 schematically shows an in-loop filtering-based video/video encoding method.
- FIG. 5 schematically shows a filtering unit in an encoding apparatus.
- The filtering unit in the encoding apparatus of FIG. 5 may be applied in the same way as or correspondingly to the filtering unit 260 of the encoding apparatus 200 of FIG. 2 described above.
- the encoding apparatus generates a reconstructed picture for the current picture (S400).
- the encoding apparatus may generate a reconstructed picture through a procedure such as partitioning, intra/inter prediction, and residual processing for an input original picture.
- The encoding apparatus generates prediction samples for the current block through intra or inter prediction, generates residual samples based on the prediction samples, transforms/quantizes the residual samples, and then performs inverse quantization/inverse transform processing again to derive (modified) residual samples.
- The reason for performing inverse quantization/inverse transform again after transform/quantization in this way is to derive the same residual samples as the residual samples derived by the decoding apparatus as described above.
- the encoding apparatus may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and (modified) residual samples.
- the reconstructed picture may be generated based on the reconstructed block.
- the encoding apparatus performs an in-loop filtering procedure on the reconstructed picture (S410).
- a modified reconstructed picture may be generated through an in-loop filtering procedure.
- the modified reconstructed picture may be stored in the decoded picture buffer 270 or a memory as a decoded picture, and may be used as a reference picture in an inter prediction procedure when encoding the picture afterwards.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, an adaptive loop filter (ALF) procedure, and/or a bi-lateral filter procedure.
- S410 may be performed by the filtering unit 260 of the encoding device.
- Specifically, the deblocking filtering procedure may be performed by the deblocking filtering processing unit 261, the SAO procedure by the SAO processing unit 262, the ALF procedure by the ALF processing unit 263, and the bilateral filter procedure by the bilateral filter processing unit 264.
- Some of the various filtering procedures may be omitted in consideration of image characteristics, complexity, and efficiency, and in this case, related components in FIG. 5 may also be omitted.
- the encoding apparatus may encode image information including information for picture reconstruction and information related to (in-loop) filtering, and output the encoded image information in the form of a bitstream (S420).
- the output bitstream may be delivered to a decoding device through a storage medium or a network.
- S420 may be performed by the entropy encoding unit 240 of the encoding device.
- Information for picture restoration may include partitioning information, prediction information, residual information, and the like described above/after.
- The filtering-related information may include, for example, flag information indicating whether to apply all in-loop filtering, flag information indicating whether to apply each filtering procedure, information about the SAO type, information about the SAO offset value, information about the SAO band position, information about the ALF filtering shape, information about the ALF filtering coefficient, information about the bilateral filter shape, and/or information about the bilateral filter weight. Detailed filtering-related information will be described later. Meanwhile, as described above, when some filtering methods are omitted, information (parameters) related to the omitted filtering may be omitted.
- FIG. 6 schematically shows an in-loop filtering-based video/video decoding method
- FIG. 7 schematically shows a filtering unit in a decoding apparatus.
- The filtering unit in the decoding apparatus of FIG. 7 may be applied in the same way as or correspondingly to the filtering unit 350 of the decoding apparatus 300 of FIG. 3 described above.
- the decoding device may perform an operation corresponding to an operation performed by the encoding device.
- the decoding apparatus may obtain image information including information for picture restoration and information related to (in-loop) filtering from a received bitstream (S600).
- S600 may be performed by the entropy decoding unit 310 of the decoding device.
- Information for picture restoration may include the partitioning information, prediction information, residual information, and the like described above or to be described later.
- Filtering-related information may include, for example, flag information indicating whether in-loop filtering is applied as a whole, flag information indicating whether each filtering procedure is applied, information on the SAO type, information on the SAO offset value, information on the SAO band position, information on the ALF filtering shape, information on the ALF filtering coefficients, information on the bi-lateral filter shape, and/or information on the bi-lateral filter weight. Detailed filtering-related information will be described later. Meanwhile, as described above, when some filtering methods are omitted, information (parameters) related to the omitted filtering may also be omitted.
- the decoding apparatus generates a reconstructed picture for the current picture based on the information for picture restoration (S610). As described above with reference to FIG. 3, the decoding apparatus may generate a reconstructed picture through procedures such as intra/inter prediction and residual processing for the current picture. Specifically, the decoding apparatus generates prediction samples for the current block through intra or inter prediction based on the prediction information included in the information for picture restoration, and derives residual samples for the current block (through inverse quantization/inverse transformation) based on the residual information included in the information for picture restoration. The decoding apparatus may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and the residual samples. A reconstructed picture may be generated based on the reconstructed block.
- the decoding apparatus performs an in-loop filtering procedure on the reconstructed picture (S620).
- a modified reconstructed picture may be generated through an in-loop filtering procedure.
- the modified reconstructed picture may be output and/or stored in the decoded picture buffer 360 or a memory as a decoded picture, and may be used as a reference picture in an inter prediction procedure when decoding a subsequent picture.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, an adaptive loop filter (ALF) procedure, and/or a bi-lateral filter procedure.
- S620 may be performed by the filtering unit 350 of the decoding device.
- the deblocking filtering procedure may be performed by the deblocking filtering processing unit 351,
- the SAO procedure by the SAO processing unit 352,
- the ALF procedure by the ALF processing unit 353, and
- the bi-lateral filter procedure by the bi-lateral filter processing unit 354.
- the encoding device/decoding device may reconstruct a picture in block units.
- block distortion may occur at the boundary between blocks in the reconstructed picture.
- the encoding device and the decoding device may use a deblocking filter to remove block distortion occurring at a boundary between blocks in the reconstructed picture.
- the deblocking filtering procedure may, for example, derive a target boundary from a reconstructed picture, determine a boundary strength (bS) for the target boundary, and perform deblocking filtering on the target boundary based on the bS.
- the bS may be determined based on a prediction mode of two blocks adjacent to a target boundary, a motion vector difference, whether a reference picture is the same, and whether a non-zero significant coefficient exists.
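- As an informal illustration (not part of the specification), the following sketch shows how a boundary strength value of this kind could be derived from the two blocks adjacent to the target boundary; the block fields (is_intra, has_nonzero_coeffs, ref_pic, mv) and the 1/4-sample motion vector threshold are assumptions for the example.

```python
def boundary_strength(block_p, block_q):
    """Hypothetical sketch of a boundary strength (bS) decision for the
    boundary between block P and block Q; the block fields are assumed."""
    # Strongest filtering when either adjacent block is intra coded.
    if block_p.is_intra or block_q.is_intra:
        return 2
    # Filter when either block has non-zero significant transform coefficients.
    if block_p.has_nonzero_coeffs or block_q.has_nonzero_coeffs:
        return 1
    # Filter when the two blocks use different reference pictures.
    if block_p.ref_pic != block_q.ref_pic:
        return 1
    # Filter when the motion vectors differ by one integer sample or more
    # (4 in 1/4-sample units, an assumed threshold for this sketch).
    if any(abs(a - b) >= 4 for a, b in zip(block_p.mv, block_q.mv)):
        return 1
    # Otherwise no strong discontinuity is expected at the boundary.
    return 0
```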
- FIG. 8 exemplarily shows an embodiment of a method of performing deblocking filtering.
- the method of FIG. 8 may be performed by the filtering unit 260 in the encoding apparatus of FIG. 2 and the filtering unit 350 in the decoding apparatus of FIG. 3 described above.
- the encoding device/decoding device may derive a boundary between blocks on which deblocking filtering is performed in a reconstructed picture (S800). Meanwhile, a boundary on which deblocking filtering is performed may be referred to as an edge.
- the boundary on which deblocking filtering is performed may include two types, and the two types may be a vertical boundary and a horizontal boundary.
- the vertical boundary may be referred to as a vertical edge
- the horizontal boundary may be referred to as a horizontal edge.
- the encoding device/decoding device may perform deblocking filtering on a vertical edge and deblocking filtering on a horizontal edge.
- the encoding device/decoding device may derive a transform block boundary.
- the encoding device/decoding device may derive a coding subblock boundary.
- the encoding device/decoding device may derive a block boundary on which deblocking filtering is performed based on an NxN size grid. For example, the encoding device/decoding device may derive a block boundary on which deblocking filtering is performed based on whether the boundary of a block (a transform block or a coding subblock) corresponds to the NxN size grid. In other words, the block boundary on which deblocking filtering is performed may be derived based on whether the boundary of the block (transform block or coding subblock) is a block boundary located on the NxN size grid.
- the encoding device/decoding device may derive a boundary of a block corresponding to the NxN size grid as a block boundary on which deblocking filtering is performed.
- the NxN size grid may mean a boundary derived by dividing the reconstructed picture into NxN size squares.
- the NxN size grid may be, for example, a 4x4 or 8x8 size grid.
- the encoding device/decoding device may determine a boundary strength (bS) for a boundary on which deblocking filtering is performed (S810).
- the bS may also be referred to as a boundary filtering strength.
- the encoding device/decoding device may determine bS based on blocks adjacent to a boundary on which deblocking filtering is performed. For example, it may be assumed that the bS value for the boundary (block edge) between the block P and the block Q is obtained. In this case, the encoding device/decoding device may determine the bS value for the boundary based on the positions of the blocks P and Q and/or information on whether the blocks P and Q are coded in the intra mode.
- block P may indicate a block including p0 samples adjacent to the boundary on which deblocking filtering is performed
- block Q may indicate a block including q0 samples adjacent to the boundary on which deblocking filtering is performed.
- p0 may represent a sample of a block adjacent to the left or upper side of a boundary on which deblocking filtering is performed
- q0 may represent a sample of a block adjacent to the right or lower side of a boundary on which deblocking filtering is performed.
- when the direction of the filtering boundary is vertical (that is, when the filtering boundary is a vertical boundary), p0 may represent a sample of the block adjacent to the left side of the boundary on which deblocking filtering is performed, and q0 may represent a sample of the block adjacent to the right side of that boundary.
- when the direction of the filtering boundary is horizontal (that is, when the filtering boundary is a horizontal boundary), p0 may represent a sample of the block adjacent to the upper side of the boundary on which deblocking filtering is performed, and q0 may represent a sample of the block adjacent to the lower side of that boundary.
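- As a hedged illustration of the p0/q0 convention above, the following sketch returns the two samples immediately adjacent to a boundary position; the 2-D reconstructed sample array and the coordinate convention are assumptions for the example.

```python
def boundary_samples(recon, x, y, is_vertical):
    """Return (p0, q0) around the boundary located at (x, y).
    recon is an assumed 2-D array of reconstructed samples, indexed [y][x]."""
    if is_vertical:
        p0 = recon[y][x - 1]  # block P: left of the vertical boundary
        q0 = recon[y][x]      # block Q: right of the vertical boundary
    else:
        p0 = recon[y - 1][x]  # block P: above the horizontal boundary
        q0 = recon[y][x]      # block Q: below the horizontal boundary
    return p0, q0
```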
- the encoding device/decoding device may perform deblocking filtering based on the bS (S820).
- the encoding device/decoding device may determine whether the filtering process has been performed for all block boundaries in the reconstructed picture, and when it has not, may determine whether the position of a subblock boundary corresponds to an NxN size grid (eg, an 8x8 grid). For example, it may be determined whether the remainder obtained by dividing the x component and the y component of the subblock boundary position by N is 0. If both remainders are 0, the position of the subblock boundary corresponds to the NxN size grid. When the position of the subblock boundary corresponds to the NxN size grid, the encoding device/decoding device may perform deblocking filtering on the boundary based on the bS for the boundary.
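- The grid check described above can be sketched as a simple modulo test; this is an illustrative assumption of one possible implementation, not normative text.

```python
def on_filtering_grid(boundary_x, boundary_y, n=8):
    """True if the boundary position (in samples) lies on the NxN deblocking
    grid, e.g. an 8x8 grid; both coordinate remainders must be zero."""
    return boundary_x % n == 0 and boundary_y % n == 0
```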
- the encoding device/decoding device may determine a filter applied to the boundary between blocks based on the determined bS value. Filters can be divided into strong filters and weak filters.
- the encoding/decoding apparatus may improve encoding efficiency by applying different filters to a boundary at a position where block distortion is highly likely to occur and a boundary at a position where block distortion is unlikely to occur in the reconstructed picture.
- the encoding device/decoding device may perform deblocking filtering on the boundary between blocks by using the determined filter (eg, a strong filter or a weak filter).
- the deblocking filtering process may be terminated.
- this document proposes a method of determining the filter length based on the distance between edges in the process of performing deblocking filtering. That is, subjective image quality can be improved and the filter-length determination method can be simplified at comparable complexity, which in turn simplifies the hardware (H/W) design process.
- the filter length is determined based on a condition for the boundary of a transform block (a transform block edge or transform edge), and may be modified for the boundary of a prediction block (a prediction block edge or prediction edge). In this process, the filter length may be determined according to conditions on the neighboring edges.
- the filter length may indicate the number of samples applied to blocks P and Q based on a block boundary (ie, a target boundary).
- for a vertical target boundary, the filter length P may represent the number of samples (the number of luma/chroma samples) applied to the block P adjacent to the left side of the target boundary, and the filter length Q may represent the number of samples (the number of luma/chroma samples) applied to the block Q adjacent to the right side of the target boundary.
- for a horizontal target boundary, the filter length P may represent the number of samples (the number of luma/chroma samples) applied to the block P adjacent to the upper side of the target boundary, and the filter length Q may represent the number of samples (the number of luma/chroma samples) applied to the block Q adjacent to the lower side of the target boundary.
- FIG. 9 shows an example of a method of determining a filter length based on a condition for a peripheral edge.
- the encoding device/decoding device may perform deblocking filtering for one direction (ie, deblocking filtering for a vertical boundary or deblocking filtering for a horizontal boundary) (S900).
- the encoding device/decoding device may derive a transform block (or sub-transform block; sub-TU) boundary (S910) and determine a filter length based on a condition for the transform block boundary (S920).
- the encoding device/decoding device may determine the filter length based on blocks P and Q adjacent to a transform block boundary (ie, a transform block boundary on which deblocking filtering is performed).
- the encoding device/decoding device may determine whether the transform block is a luma component or a chroma component (S921).
- for a luma component, the encoding device/decoding device may determine the filter length P as 7 if the size on the transform block P side is 32 or more, and as 3 if the size on the transform block P side is less than 32 (S922).
- likewise, the encoding device/decoding device may determine the filter length Q as 7 if the size on the transform block Q side is 32 or more, and as 3 if the size on the transform block Q side is less than 32 (S923).
- for a chroma component, the encoding device/decoding device may determine the filter length P and the filter length Q as 3 if the size of the transform block P side is 8 or more and the size of the transform block Q side is 8 or more; otherwise, the filter length P and the filter length Q may be determined as 1 (S924 and S925).
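- A minimal sketch of the transform-boundary filter-length rule described in S921 to S925 is shown below; the function name and arguments (the side sizes of transform blocks P and Q in samples) are assumptions for the example.

```python
def tu_filter_lengths(size_p, size_q, is_luma):
    """Filter lengths (P, Q) at a transform block boundary, sketched from
    the S921-S925 description; not a normative implementation."""
    if is_luma:
        # Luma: long (7-sample) filtering only for large transform sides.
        length_p = 7 if size_p >= 32 else 3
        length_q = 7 if size_q >= 32 else 3
    else:
        # Chroma: longer (3-sample) filtering only when both sides are >= 8.
        if size_p >= 8 and size_q >= 8:
            length_p = length_q = 3
        else:
            length_p = length_q = 1
    return length_p, length_q
```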
- the encoding device/decoding device may derive a prediction block (or sub-prediction block; sub-PU) boundary (S930) and determine whether the prediction block is a luma component or a chroma component (S940).
- a filter length may be determined based on a condition for a prediction block boundary (S950).
- the encoding device/decoding device may determine whether the current target boundary (ie, the prediction block boundary) is a transform block boundary (S951).
- the encoding device/decoding device may determine the filter length P and the filter length Q for the prediction block boundary based on the filter lengths P and Q derived based on the transform block boundary.
- for example, the filter length P may be determined as the smaller of the filter length P derived based on the transform block boundary and 5 (S952),
- and the filter length Q may be determined as the smaller of the filter length Q derived based on the transform block boundary and 5 (S953).
- otherwise, the encoding device/decoding device may determine the filter length P and the filter length Q as 3 or 2 based on the condition for the prediction block boundary. For example, the filter length may be determined based on whether the boundary is the first sub-PU boundary or the last sub-PU boundary, or whether a neighboring boundary (located 8 samples before or after) is a transform boundary (S954). If at least one of these conditions is satisfied, the encoding device/decoding device may determine the filter length P and the filter length Q as 3 (S955). Otherwise, the filter length P and the filter length Q may be determined as 2 (S956).
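- As an informal sketch of the prediction-boundary adjustment in S951 to S956, the following function caps the transform-boundary lengths at 5 when the prediction boundary coincides with a transform boundary, and otherwise selects 3 or 2; the boolean arguments summarize the listed conditions and are assumptions for the example.

```python
def pu_filter_lengths(is_tu_boundary, tu_length_p, tu_length_q,
                      near_transform_edge):
    """Filter lengths (P, Q) at a prediction (sub-PU) boundary, sketched
    from the S951-S956 description; not a normative implementation."""
    if is_tu_boundary:
        # Reuse the transform-boundary lengths, limited to 5 samples.
        return min(tu_length_p, 5), min(tu_length_q, 5)
    # First/last sub-PU boundary, or a transform edge 8 samples before/after.
    if near_transform_edge:
        return 3, 3
    return 2, 2
```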
- the encoding device/decoding device may determine a boundary strength (bS) for the target boundary (S960).
- FIG. 10 shows an example of a method of performing deblocking filtering based on a filtering condition for a boundary of a chroma component.
- the encoding device/decoding device may perform deblocking filtering on a chroma component (S1000).
- the target boundary may be a boundary of chroma components aligned on an NxN size grid.
- the NxN size grid may be an 8x8 size grid.
- the encoding device/decoding device may derive a target boundary based on whether it corresponds to an NxN size grid, and perform the following process on the target boundary.
- the encoding device/decoding device may determine whether 1) bS is 2 or 2) bS is 1, filter length P is 3, and filter length Q is 3 (S1010). In addition, when the above 1) or 2) is satisfied, the encoding device/decoding device may determine whether the filter length P is 3 and the filter length Q is 3 (S1020).
- the encoding device/decoding device may determine whether to use a strong filter or a weak filter (S1030). For example, the encoding device/decoding device may determine whether to use a strong filter or a weak filter based on the bS value. When it is determined to use a strong filter, the encoding device/decoding device may perform deblocking filtering on the boundary between blocks using the strong filter (S1040). When it is determined to use a weak filter, the encoding device/decoding device may perform deblocking filtering on the boundary between blocks using the weak filter (S1050).
- deblocking filtering on the boundary of the chroma component is performed on an edge that satisfies at least one of the above two conditions, and a strong filter is applied at the edge when both filter lengths are 3.
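- The conventional chroma edge condition described in S1010 to S1050 can be summarized by the following sketch; it is an illustrative reading of the flow above (the actual strong/weak decision may additionally involve sample-based checks), and the return labels are assumptions.

```python
def chroma_edge_decision(bs, length_p, length_q):
    """Conventional chroma boundary decision sketched from S1010-S1050:
    filter only when bS == 2, or bS == 1 with both filter lengths equal to 3;
    a strong filter is considered when both filter lengths equal 3."""
    long_filter = (length_p == 3 and length_q == 3)
    if not (bs == 2 or (bs == 1 and long_filter)):
        return "skip"
    return "strong" if long_filter else "weak"
```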
- the filter length may be determined only by distances between neighboring edges.
- the distance may refer to the number of samples (ie, pixels).
- the filter length for the luma component may be determined as follows.
- when the distance between neighboring edges is less than or equal to 4, the filter length may be set to 0.
- when the distance between neighboring edges is less than or equal to 8, the filter length may be set to 3.
- when the distance between neighboring edges is less than or equal to 16, the filter length may be set to 5.
- otherwise, the filter length may be set to 7.
- the filter length for the chroma component may be determined as follows.
- when the distance between neighboring edges is less than or equal to 2, the filter length may be set to 0.
- when the distance between neighboring edges is less than or equal to 4, the filter length may be set to 1.
- otherwise, the filter length may be set to 3.
- a distance condition for determining a filter length may be adaptively applied to support various color formats.
- a chroma filter length for a 4:4:4 chroma format can be adaptively applied as follows.
- the filter length may be set to 0.
- the filter length may be set to 1.
- the filter length can be set to 3.
- FIG. 11 shows an example of a method of determining a filter length according to an embodiment of the present document.
- the encoding device/decoding device may perform deblocking filtering for one direction (ie, deblocking filtering for a vertical boundary or deblocking filtering for a horizontal boundary) (S1100).
- the encoding device/decoding device may derive a transform block (or sub-transform block; sub-TU) boundary (S1110) and a prediction block (or sub-prediction block; sub-PU) boundary (S1120). Further, the encoding device/decoding device may determine whether the component of the block is luma or chroma (S1130).
- for a luma component, the encoding device/decoding device may determine the filter length (eg, 0, 3, 5, or 7) based on the distance between neighboring edges (eg, whether it is less than or equal to 4, 8, or 16) (S1140).
- the encoding device/decoding device may determine whether the size of the block Q side is 4 or less. In other words, the encoding device/decoding device may determine whether the distance (number of samples) of the block Q is 4 or less based on the target edge. If this condition is satisfied, the filter length P and the filter length Q can be determined as zero. In this case, when the filter length is set to 0, it may indicate that filtering may be skipped (omitted).
- the encoding device/decoding device may determine the filter length based on the sizes of each of the blocks P and Q. For example, it may be determined whether the size of the block P (or block Q) side is 8 or less. That is, it is determined whether the distance (number of samples) of the block P (or block Q) is 8 or less based on the target edge, and if this condition is satisfied, the filter length P (or filter length Q) may be determined as 3.
- the encoding device/decoding device may determine whether the size of the block P (or block Q) side is 16 or less. That is, it is determined whether the distance (the number of samples) of the block P (or block Q) is 16 or less based on the target edge, and when this condition is satisfied, the filter length P (or filter length Q) may be determined as 5. However, if this condition is not satisfied, the filter length P (or filter length Q) can be determined as 7.
- for a chroma component, the encoding device/decoding device may determine the filter length (eg, 0, 1, or 3) based on the distance between neighboring edges (eg, whether it is less than or equal to 2 or 4) (S1150).
- the encoding device/decoding device may determine whether the size of the block Q side is 2 or less. In other words, the encoding device/decoding device may determine whether the distance (number of samples) of the block Q is 2 or less based on the target edge. If this condition is satisfied, the filter length P and the filter length Q can be determined as zero. In this case, when the filter length is set to 0, it may indicate that filtering may be skipped (omitted).
- the encoding device/decoding device may determine the filter length based on the sizes of each of the blocks P and Q. For example, it may be determined whether the size of the block P (or block Q) side is 4 or less. That is, it is determined whether the distance (number of samples) of the block P (or block Q) is 4 or less based on the target edge, and if this condition is satisfied, the filter length P (or filter length Q) may be determined as 1. However, if this condition is not satisfied, the filter length P (or filter length Q) can be determined as 3.
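- The distance-based rule of S1140 and S1150 can be sketched as follows; the distance arguments (the number of samples from the target edge to the nearest neighboring edge on the P and Q sides) are assumed inputs, and the code is an illustration rather than a normative implementation.

```python
def filter_lengths_by_distance(dist_p, dist_q, is_luma):
    """Filter lengths (P, Q) derived only from edge distances, sketched
    from the S1140/S1150 description."""
    def luma_length(dist):
        if dist <= 8:
            return 3
        return 5 if dist <= 16 else 7

    def chroma_length(dist):
        return 1 if dist <= 4 else 3

    if is_luma:
        if dist_q <= 4:          # narrow Q side: filtering may be skipped
            return 0, 0
        return luma_length(dist_p), luma_length(dist_q)
    if dist_q <= 2:              # narrow Q side (chroma): filtering skipped
        return 0, 0
    return chroma_length(dist_p), chroma_length(dist_q)
```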
- the encoding device/decoding device may determine a boundary strength (bS) for the target boundary (S1160). Also, the encoding device/decoding device may perform deblocking filtering by determining whether to apply a strong filter or a weak filter based on bS.
- FIG. 12 illustrates an example of a method of performing deblocking filtering based on a filtering condition for a boundary of a chroma component according to an embodiment of the present document.
- the encoding device/decoding device may perform deblocking filtering on a chroma component (S1200).
- the target boundary may be a boundary of chroma components aligned on an NxN size grid.
- the NxN size grid may be, for example, a 4x4 size grid.
- the encoding device/decoding device may derive a target boundary based on whether it corresponds to an NxN size grid, and perform the following process on the target boundary.
- the encoding device/decoding device may determine whether bS is greater than 0 (S1210). When bS is greater than 0, the encoding device/decoding device may determine whether the filter length P is 3 and the filter length Q is 3 (S1220).
- the encoding device/decoding device may determine whether to use a strong filter or a weak filter (S1230). For example, the encoding device/decoding device may determine whether to use a strong filter or a weak filter based on the bS value.
- the encoding device/decoding device may perform deblocking filtering on the boundary between blocks using the strong filter (S1240).
- the encoding device/decoding device may perform deblocking filtering on the boundary between blocks by using the weak filter (S1250).
- edge filtering on a chroma component may be performed only when the boundary strength bS is not zero. Therefore, compared to the existing method, the process of checking the condition can be simplified, so that the complexity can be reduced and the performance can be improved.
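- For comparison with the conventional condition above, the simplified chroma decision of S1210 to S1250 can be sketched as follows; as before, the strong/weak split is an illustrative reading of the described flow and the return labels are assumptions.

```python
def chroma_edge_decision_simplified(bs, length_p, length_q):
    """Simplified chroma boundary decision sketched from S1210-S1250:
    filtering is considered only when bS > 0."""
    if bs == 0:
        return "skip"
    # A strong filter is considered when both filter lengths equal 3.
    if length_p == 3 and length_q == 3:
        return "strong"
    return "weak"
```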
- the deblocking filtering process proposed in this document may support the following functions.
- FIGS. 13 and 14 are exemplary diagrams for explaining an aligned filtering boundary between a luma component and a chroma component.
- filtering is performed on an 8x8 sample grid for both luma and chroma components at a block boundary in consideration of the following conditions.
- (a) of FIG. 13 shows coding tree units partitioned according to a QTBTTT (quad-tree binary-tree ternary-tree) structure, for a luma component of size 32x32 and a chroma component of size 16x16, respectively.
- the edges that have been filtered (parts indicated by thick solid lines) are displayed for the 32x32 luma component and the 16x16 chroma component, and it can be seen that, when the above-described conventional deblocking filtering process is applied, the filtered edges of the 32x32 luma component and the 16x16 chroma component differ from each other.
- (b) of FIG. 13 shows a block coded with a 32x32 subblock-based temporal motion vector predictor (SbTMVP), for a luma component of size 32x32 and a chroma component of size 16x16, respectively.
- the edges that have been filtered (parts indicated by thick solid lines) are displayed for the 32x32 luma component and the 16x16 chroma component, and it can be seen that, when the above-described conventional deblocking filtering process is applied, the filtered edges of the 32x32 luma component and the 16x16 chroma component differ from each other.
- filtering is performed using a grid (eg, a 4x4 sample grid) that is dense enough to cover the edges of both the luma component and the chroma component.
- filtered edges between the luma component and the chroma component may be aligned as shown in FIG. 14.
- (a) of FIG. 14 shows coding tree units partitioned according to a QTBTTT (quad-tree binary-tree ternary-tree) structure, for a luma component of size 32x32 and a chroma component of size 16x16, respectively.
- the edges that have been filtered (parts indicated by thick solid lines) are displayed for the 32x32 luma component and the 16x16 chroma component, and it can be seen that, when the deblocking filtering process proposed in this document is applied, the filtered edges of the 32x32 luma component and the 16x16 chroma component coincide with each other.
- (b) of FIG. 14 shows a block coded with a 32x32 subblock-based temporal motion vector predictor (SbTMVP), for a luma component of size 32x32 and a chroma component of size 16x16, respectively.
- FIGS. 15 and 16 are exemplary diagrams for explaining a uniform filter length.
- the filter length is determined by the peripheral TU edge distance and the peripheral PU edge distance. Therefore, even if each edge has the same property, different filter lengths can be derived.
- FIG. 15 shows an example in which different filter lengths are applied in conventional deblocking filtering as described above.
- a 32xN block partitioned into subblocks is shown, and it can be seen that different filter lengths are applied to vertical boundaries.
- a filter length such as 2, 3, 5 taps (ie, 2 samples, 3 samples, 5 samples, etc.) may be applied to each of the subblock edges.
- FIG. 16 shows an example in which the same filter length is applied in deblocking filtering according to an embodiment of the present document.
- a 32xN block partitioned into subblocks is shown, and it can be seen that the same filter lengths are applied to each of the vertical boundaries.
- a filter length of 3 taps (ie, 3 samples) may be applied to each of the subblock edges.
- FIG. 17 is an exemplary diagram for explaining a parallel processing function in a deblocking filtering process.
- a filtering operation for block edges with a narrow distance between neighboring edges may be skipped (omitted). For example, as shown in (a) and (b) of FIG. 17, when narrow edges occur (that is, when the distance between neighboring edges is small), the filtering process for these edges can be skipped (omitted). For example, filtering may be performed only when the size of the block Q side is greater than 4.
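- A minimal sketch of this skip rule, under the assumption that only the Q-side distance is tested and that the luma/chroma thresholds follow the distances described earlier (4 and 2 samples), is shown below.

```python
def should_filter_edge(dist_q_side, is_luma=True):
    """Skip filtering of edges whose Q-side distance to the next edge is
    narrow, as illustrated in FIG. 17; thresholds are assumptions."""
    min_distance = 4 if is_luma else 2
    return dist_q_side > min_distance
```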
- the method disclosed in FIG. 18 may be performed by the encoding apparatus 200 disclosed in FIG. 2. Specifically, step S1800 of FIG. 18 may be performed by the adder 250 of the encoding apparatus 200 disclosed in FIG. 2, steps S1810 to S1830 of FIG. 18 may be performed by the filtering unit 260 of the encoding apparatus 200 disclosed in FIG. 2, and step S1840 of FIG. 18 may be performed by the entropy encoding unit 240 of the encoding apparatus 200 disclosed in FIG. 2. In addition, the method disclosed in FIG. 18 may include the embodiments described above in this document. Accordingly, in FIG. 18, detailed descriptions of contents overlapping with the above-described embodiments will be omitted or simplified.
- the encoding apparatus may generate a reconstructed picture based on prediction samples of a current block (S1800).
- the encoding apparatus may determine whether to perform inter prediction or intra prediction on the current block, and may determine a specific inter prediction mode or a specific intra prediction mode based on RD cost. According to the determined mode, the encoding apparatus may derive prediction samples for the current block.
- the encoding device may generate a reconstructed picture based on the prediction samples of the current block. That is, the encoding apparatus may derive residual samples through subtraction of original samples and prediction samples for the current block, and may generate reconstructed samples based on the residual samples and prediction samples. The encoding apparatus may generate a reconstructed block based on reconstructed samples for a current block in a picture, and generate a reconstructed picture including the reconstructed blocks.
- the encoding apparatus may derive the boundary of the current block in the reconstructed picture as a target boundary for deblocking filtering (S1810).
- the encoding apparatus may apply deblocking filtering to remove block distortion occurring at the boundary between blocks in the reconstructed picture, and in this case, determine the filtering strength according to the degree of block distortion.
- the encoding apparatus may perform deblocking filtering on a vertical boundary or deblocking filtering on a horizontal boundary, and may derive a target boundary for each of a vertical boundary and a horizontal boundary.
- the encoding apparatus may derive a block boundary (ie, a target boundary) on which deblocking filtering is performed based on an NxN size grid.
- the encoding apparatus may derive a block boundary on which deblocking filtering is performed based on whether the boundary of the current block (transform block or prediction block) corresponds to an NxN size grid.
- the encoding apparatus may derive a target boundary on which deblocking filtering is performed based on whether the boundary of the current block (transform block or prediction block) is a block boundary positioned on the NxN size grid.
- the encoding apparatus may derive a boundary of a block corresponding to the NxN size grid as a target boundary on which deblocking filtering is performed.
- the NxN size grid may mean a boundary derived by dividing the reconstructed picture into NxN size squares.
- the NxN size grid may be, for example, a 4x4 or 8x8 size grid.
- a target boundary may be derived based on a 4x4 size grid for a chroma component and an 8x8 size grid for a luma component.
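- As a hedged illustration of the component-specific grid, the following sketch tests whether a boundary coordinate lies on the 8x8 luma grid or the 4x4 chroma grid; the single-coordinate form (along the filtering direction) is an assumption for the example.

```python
def is_target_boundary(pos, is_luma):
    """True if the boundary coordinate (in samples along the filtering
    direction) lies on the deblocking grid: 8x8 for luma, 4x4 for chroma."""
    grid = 8 if is_luma else 4
    return pos % grid == 0
```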
- the encoding apparatus may perform deblocking filtering based on the filter length for the target boundary (S1820).
- the encoding apparatus may derive a filter length based on a distance between a target boundary and a surrounding target boundary of the target boundary, and may perform deblocking filtering based on the filter length.
- the filter length may represent the number of samples to which deblocking filtering is applied to blocks P and Q based on a block boundary (ie, a target boundary) as described above.
- for a vertical target boundary, the filter length P may represent the number of samples (the number of luma/chroma samples) applied to the block P adjacent to the left side of the target boundary, and the filter length Q may represent the number of samples (the number of luma/chroma samples) applied to the block Q adjacent to the right side of the target boundary.
- for a horizontal target boundary, the filter length P may represent the number of samples (the number of luma/chroma samples) applied to the block P adjacent to the upper side of the target boundary, and the filter length Q may represent the number of samples (the number of luma/chroma samples) applied to the block Q adjacent to the lower side of the target boundary.
- the encoding apparatus may first determine whether the current block is a luma component or a chroma component, and derive a filter length for the luma component and a filter length for the chroma component. Since the above-described embodiments can be applied to the process of deriving such a filter length, the description is given only briefly in this embodiment.
- the filter length may be derived as 0 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 4.
- the encoding device may determine whether a distance between a target boundary and a target boundary around the right (or lower) of the target boundary is less than or equal to 4, and if this condition is satisfied, the filter length P and the filter length Q are set to 0.
- a filter length of 0 may be derived based on whether the size of the block Q is less than or equal to 4 based on the target boundary.
- the filter length may be derived as 3, 5 or 7 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 8 or 16.
- the encoding device may determine whether the distance between the target boundary and a target boundary around the right (or lower)/left (or upper) of the target boundary is less than or equal to 8, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 3.
- a filter length of 3 may be derived based on whether the size of the block Q or the block P is less than or equal to 8 based on the target boundary.
- the encoding device may determine whether the distance between the target boundary and the target boundary around the right (or lower)/left (or upper) of the target boundary is less than or equal to 16, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 5. In other words, a filter length of 5 may be derived based on whether the size of the block Q or the block P is less than or equal to 16 based on the target boundary. In addition, as an example, the encoding apparatus may determine whether the distance between the target boundary and the target boundary around the right (or lower)/left (or upper) of the target boundary is greater than 16, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 7.
- the filter length may be derived as 0 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 2.
- the encoding device may determine whether a distance between a target boundary and a target boundary around the right (or lower) of the target boundary is less than or equal to 2, and if this condition is satisfied, the filter length P and the filter length Q are set to 0.
- a filter length of 0 may be derived based on whether the size of the block Q is less than or equal to 2 based on the target boundary.
- the filter length may be derived as 1 or 3 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 4.
- the encoding device may determine whether the distance between the target boundary and a target boundary around the right (or lower)/left (or upper) of the target boundary is less than or equal to 4, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 1.
- the filter length may be derived as 1 based on whether the size of the block Q or the block P is less than or equal to 4 based on the target boundary.
- the encoding apparatus may determine whether the distance between the target boundary and the target boundary around the right (or lower)/left (or upper) of the target boundary is greater than 4, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 3.
- the encoding apparatus may perform deblocking filtering by determining a boundary strength (bS) for a target boundary, and determining whether to apply a strong filter or a weak filter based on the bS and the filter length.
- the encoding apparatus may perform deblocking filtering on a target boundary based on whether the boundary strength is greater than 0 with respect to the current block as a chroma component. For example, when the boundary strength is greater than 0, the encoding apparatus may perform deblocking filtering on the target boundary for the chroma component. Alternatively, when the boundary strength is 0, the encoding apparatus may skip (omit) the deblocking filtering at the target boundary for the chroma component.
- the deblocking filtering may include deblocking filtering on a vertical boundary and deblocking filtering on a horizontal boundary.
- the encoding apparatus may derive a modified reconstructed picture for the reconstructed picture based on the deblocking filtering (S1830).
- the encoding device may derive reconstructed samples from which blocking artifacts have been removed by performing deblocking filtering on the boundary of the current block in the reconstructed picture, and may generate a modified reconstructed picture based on the reconstructed samples. Through this, it is possible to remove blocking artifacts at a block boundary caused by prediction performed in block units (coding block or coding subblock units), and to improve the visual quality of the reconstructed picture.
- the encoding apparatus may further apply an in-loop filtering procedure such as an SAO procedure to the modified reconstructed picture in order to improve subjective/objective image quality as needed.
- the encoding device may encode image information including information on the current block (S1840).
- the information on the current block may include information related to prediction of the current block.
- the prediction-related information may include prediction mode information of the current block (eg, intra prediction mode, inter prediction mode, affine prediction mode, subblock-based merge mode, IBC mode referring to the current picture, etc.).
- information on the current block may include information on residual samples derived based on prediction samples of the current block.
- information on residual samples may include information on the values of quantized transform coefficients derived by performing transform and quantization on the residual samples, position information, a transform technique, a transform kernel, a quantization parameter, and the like.
- the encoding device may encode the image information including information on the current block as described above, output it as a bitstream, and transmit it to the decoding device through a network or a storage medium.
- the encoding apparatus may generate a bitstream by encoding information (eg, information related to deblocking filtering) derived in the above-described process.
- the method disclosed in FIG. 20 may be performed by the decoding apparatus 300 disclosed in FIG. 3. Specifically, step S2000 of FIG. 20 may be performed by the adder 340 of the decoding apparatus 300 disclosed in FIG. 3, and steps S2010 to S2030 of FIG. 20 may be performed by the filtering unit 350 of the decoding apparatus 300 disclosed in FIG. 3.
- the method disclosed in FIG. 20 may include the embodiments described above in this document. Accordingly, in FIG. 20, detailed descriptions of contents overlapping with the above-described embodiments will be omitted or simplified.
- the decoding apparatus may generate a reconstructed picture based on prediction samples of a current block (S2000).
- the decoding apparatus may receive image information on a current block through a bitstream.
- the decoding apparatus may receive image information including prediction related information for a current block through a bitstream.
- the image information may include prediction related information for the current block.
- the prediction related information may include information on an inter prediction mode or an intra prediction mode performed on the current block. That is, the decoding apparatus may perform inter prediction or intra prediction on the current block based on the prediction related information received through the bitstream, and may derive prediction samples of the current block.
- the decoding apparatus may receive image information including residual information for a current block through a bitstream.
- the image information may include residual information on the current block.
- the residual information may include a transform coefficient for a residual sample.
- the decoding apparatus may derive residual samples (or residual sample array) of the current block based on the residual information.
- the decoding apparatus may generate reconstructed samples based on prediction samples and residual samples, and may generate a reconstructed block based on reconstructed samples for a current block in a picture. In addition, the decoding apparatus may generate a reconstructed picture including reconstructed blocks.
- the decoding apparatus may derive the boundary of the current block in the reconstructed picture as a target boundary for deblocking filtering (S2010).
- the decoding apparatus may apply deblocking filtering to remove block distortion occurring at the boundary between blocks in the reconstructed picture, and in this case, determine the filtering strength according to the degree of block distortion.
- the decoding apparatus may perform deblocking filtering on a vertical boundary or deblocking filtering on a horizontal boundary, and may derive a target boundary for each of a vertical boundary and a horizontal boundary.
- the decoding apparatus may derive a block boundary (ie, a target boundary) on which deblocking filtering is performed based on an NxN size grid.
- the decoding apparatus may derive a block boundary on which deblocking filtering is performed based on whether the boundary of the current block (transform block or prediction block) corresponds to an NxN size grid.
- the decoding apparatus may derive a target boundary on which deblocking filtering is performed based on whether the boundary of the current block (transform block or prediction block) is a block boundary positioned on the NxN size grid.
- the decoding apparatus may derive a boundary of a block corresponding to the NxN size grid as a target boundary on which deblocking filtering is performed.
- the NxN size grid may mean a boundary derived by dividing the reconstructed picture into NxN size squares.
- the NxN size grid may be, for example, a 4x4 or 8x8 size grid.
- a target boundary may be derived based on a 4x4 size grid for a chroma component and an 8x8 size grid for a luma component.
- the decoding apparatus may perform deblocking filtering based on the filter length for the target boundary (S2020).
- the decoding apparatus may derive a filter length based on a distance between a target boundary and a target boundary surrounding the target boundary, and perform deblocking filtering based on the filter length.
- the filter length may represent the number of samples to which deblocking filtering is applied to blocks P and Q based on a block boundary (ie, a target boundary) as described above.
- for a vertical target boundary, the filter length P may represent the number of samples (the number of luma/chroma samples) applied to the block P adjacent to the left side of the target boundary, and the filter length Q may represent the number of samples (the number of luma/chroma samples) applied to the block Q adjacent to the right side of the target boundary.
- for a horizontal target boundary, the filter length P may represent the number of samples (the number of luma/chroma samples) applied to the block P adjacent to the upper side of the target boundary, and the filter length Q may represent the number of samples (the number of luma/chroma samples) applied to the block Q adjacent to the lower side of the target boundary.
- the decoding apparatus may first determine whether the current block is a luma component or a chroma component, and derive a filter length for the luma component and a filter length for the chroma component. Since the above-described embodiments can be applied to the process of deriving such a filter length, the description is given only briefly in this embodiment.
- the filter length may be derived as 0 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 4.
- the decoding apparatus may determine whether the distance between the target boundary and a target boundary around the right (or lower) of the target boundary is less than or equal to 4, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 0.
- a filter length of 0 may be derived based on whether the size of the block Q is less than or equal to 4 based on the target boundary.
- the filter length may be derived as 3, 5 or 7 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 8 or 16.
- the decoding apparatus may determine whether the distance between the target boundary and a target boundary around the right (or lower)/left (or upper) of the target boundary is less than or equal to 8, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 3.
- in other words, a filter length of 3 may be derived based on whether the size of the block Q or the block P is less than or equal to 8 based on the target boundary.
- the decoding apparatus may determine whether the distance between the target boundary and the target boundary around the right (or lower)/left (or upper) of the target boundary is less than or equal to 16, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 5. In other words, a filter length of 5 may be derived based on whether the size of the block Q or the block P is less than or equal to 16 based on the target boundary. In addition, as an example, the decoding apparatus may determine whether the distance between the target boundary and the target boundary around the right (or lower)/left (or upper) of the target boundary is greater than 16, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 7.
- the filter length may be derived as 0 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 2.
- the decoding apparatus may determine whether a distance between a target boundary and a target boundary around the right (or lower) of the target boundary is less than or equal to 2, and if this condition is satisfied, the filter length P and the filter length Q are set to 0.
- a filter length of 0 may be derived based on whether the size of the block Q is less than or equal to 2 based on the target boundary.
- the filter length may be derived as 1 or 3 based on whether the distance between the target boundary and the surrounding target boundary is less than or equal to 4.
- the decoding apparatus may determine whether the distance between the target boundary and a target boundary around the right (or lower)/left (or upper) of the target boundary is less than or equal to 4, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 1.
- the filter length may be derived as 1 based on whether the size of the block Q or the block P is less than or equal to 4 based on the target boundary.
- the decoding apparatus may determine whether the distance between the target boundary and the target boundary around the right (or lower)/left (or upper) of the target boundary is greater than 4, and if this condition is satisfied, the filter length P and the filter length Q can be derived as 3.
- the decoding apparatus may perform deblocking filtering by determining a boundary strength (bS) for the target boundary, and determining whether to apply a strong filter or a weak filter based on the bS and the filter length.
- the decoding apparatus may perform deblocking filtering on a target boundary based on whether the boundary strength is greater than 0 with respect to the current block as a chroma component. For example, when the boundary strength is greater than 0, the decoding apparatus may perform deblocking filtering on the target boundary for the chroma component. Alternatively, when the boundary strength is 0, the decoding apparatus may skip (omit) the deblocking filtering at the target boundary for the chroma component.
- the deblocking filtering may include deblocking filtering on a vertical boundary and deblocking filtering on a horizontal boundary.
- the decoding apparatus may derive a modified reconstructed picture for the reconstructed picture based on the deblocking filtering (S2030).
- the decoding apparatus may derive reconstructed samples from which blocking artifacts have been removed by performing deblocking filtering on the boundary of the current block in the reconstructed picture, and may generate a modified reconstructed picture based on the reconstructed samples. Through this, it is possible to remove blocking artifacts at a block boundary caused by prediction performed in block units (coding block or coding subblock units), and to improve the visual quality of the reconstructed picture.
- the decoding apparatus may further apply an in-loop filtering procedure such as the SAO procedure to the modified reconstructed picture in order to improve subjective/objective image quality as needed.
- the above-described method according to this document may be implemented in software form, and the encoding device and/or the decoding device according to this document may be included in a device that performs image processing, such as a TV, a computer, a smartphone, a set-top box, or a display device.
- the above-described method may be implemented as a module (process, function, etc.) performing the above-described functions.
- the modules are stored in memory and can be executed by the processor.
- the memory may be inside or outside the processor, and may be connected to the processor by various well-known means.
- the processor may include an application-specific integrated circuit (ASIC), another chipset, a logic circuit, and/or a data processing device.
- the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium, and/or other storage device. That is, the embodiments described in this document may be implemented and performed on a processor, microprocessor, controller, or chip.
- the functional units illustrated in each drawing may be implemented and executed on a computer, processor, microprocessor, controller, or chip. In this case, information for implementation (ex. information on instructions) or an algorithm may be stored in a digital storage medium.
- decoding devices and encoding devices to which this document is applied may be included in multimedia broadcasting transmission/reception devices, mobile communication terminals, home cinema video devices, digital cinema video devices, surveillance cameras, video chat devices, real-time communication devices such as video communication, mobile streaming devices, storage media, camcorders, video-on-demand (VoD) service providers, over-the-top (OTT) video devices, Internet streaming service providers, three-dimensional (3D) video devices, virtual reality (VR) devices, augmented reality (AR) devices, video telephony devices, transportation terminals (eg, vehicle (including autonomous vehicle) terminals, airplane terminals, ship terminals, etc.), and medical video devices, and may be used to process video signals or data signals.
- an OTT video (Over the top video) device may include a game console, a Blu-ray player, an Internet-connected TV, a home theater system, a smartphone, a tablet PC, and a digital video recorder (DVR).
- the processing method to which the embodiment(s) of this document is applied may be produced in the form of a program executed by a computer, and may be stored in a computer-readable recording medium.
- Multimedia data having a data structure according to the embodiment(s) of this document may also be stored in a computer-readable recording medium.
- the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
- the computer-readable recording medium includes, for example, a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
- the computer-readable recording medium includes media implemented in the form of a carrier wave (for example, transmission through the Internet).
- the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
- embodiment(s) of this document may be implemented as a computer program product by program code, and the program code may be executed in a computer according to the embodiment(s) of this document.
- the program code may be stored on a carrier readable by a computer.
- FIG. 22 shows an example of a content streaming system to which embodiments disclosed in this document can be applied.
- a content streaming system applied to embodiments of the present document may largely include an encoding server, a streaming server, a web server, a media storage device, a user device, and a multimedia input device.
- the encoding server serves to generate a bitstream by compressing content input from multimedia input devices such as smartphones, cameras, camcorders, etc. into digital data, and transmits it to the streaming server.
- when multimedia input devices such as smartphones, cameras, and camcorders directly generate a bitstream, the encoding server may be omitted.
- the bitstream may be generated by an encoding method or a bitstream generation method to which the embodiments of this document are applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
- the streaming server transmits multimedia data to a user device based on a user request through a web server, and the web server serves as an intermediary for notifying the user of a service.
- when the user requests a desired service from the web server, the web server transmits the request to the streaming server, and the streaming server transmits multimedia data to the user.
- the content streaming system may include a separate control server, and in this case, the control server serves to control commands/responses between devices in the content streaming system.
- the streaming server may receive content from a media storage and/or encoding server. For example, when content is received from the encoding server, the content may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
- Examples of the user device include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a slate PC, a tablet PC, an ultrabook, a wearable device (for example, a smartwatch, smart glasses, or a head mounted display (HMD)), a digital TV, a desktop computer, digital signage, and the like.
- Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributedly processed.
Claims (15)
- An image decoding method performed by a decoding apparatus, the method comprising: generating a reconstructed picture based on prediction samples of a current block; deriving a boundary of the current block in the reconstructed picture as a target boundary for deblocking filtering; performing deblocking filtering based on a filter length for the target boundary; and deriving a modified reconstructed picture for the reconstructed picture based on the deblocking filtering, wherein the filter length is derived based on a distance between the target boundary and a neighboring target boundary of the target boundary.
- The image decoding method of claim 1, wherein, for a luma component block, the filter length is derived as 0 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 4.
- The image decoding method of claim 1, wherein, for a luma component block, the filter length is derived as 3, 5, or 7 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 8 or 16.
- The image decoding method of claim 1, wherein, for a chroma component block, the filter length is derived as 0 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 2.
- The image decoding method of claim 1, wherein, for a chroma component block, the filter length is derived as 1 or 3 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 4.
- The image decoding method of claim 1, wherein, for a chroma component block, the deblocking filtering for the target boundary is performed based on whether a boundary strength (bs) for the target boundary is greater than 0.
- The image decoding method of claim 1, wherein the filter length represents the number of samples to which the deblocking filtering is applied in a block P and a block Q with respect to the target boundary; for the target boundary that is a vertical boundary, the filter length represents the number of samples applied to the block P adjacent to a left side of the target boundary and the number of samples applied to the block Q adjacent to a right side of the target boundary; and for the target boundary that is a horizontal boundary, the filter length represents the number of samples applied to the block P adjacent to an upper side of the target boundary and the number of samples applied to the block Q adjacent to a lower side of the target boundary.
- An image encoding method performed by an encoding apparatus, the method comprising: generating a reconstructed picture based on prediction samples of a current block; deriving a boundary of the current block in the reconstructed picture as a target boundary for deblocking filtering; performing deblocking filtering based on a filter length for the target boundary; deriving a modified reconstructed picture for the reconstructed picture based on the deblocking filtering; and encoding image information including information on the current block, wherein the filter length is derived based on a distance between the target boundary and a neighboring target boundary of the target boundary.
- The image encoding method of claim 7, wherein, for a luma component block, the filter length is derived as 0 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 4.
- The image encoding method of claim 8, wherein, for a luma component block, the filter length is derived as 3, 5, or 7 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 8 or 16.
- The image encoding method of claim 8, wherein, for a chroma component block, the filter length is derived as 0 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 2.
- The image encoding method of claim 8, wherein, for a chroma component block, the filter length is derived as 1 or 3 based on whether the distance between the target boundary and the neighboring target boundary is less than or equal to 4.
- The image encoding method of claim 8, wherein, for a chroma component block, the deblocking filtering for the target boundary is performed based on whether a boundary strength (bs) for the target boundary is greater than 0.
- The image encoding method of claim 8, wherein the filter length represents the number of samples to which the deblocking filtering is applied in a block P and a block Q with respect to the target boundary; for the target boundary that is a vertical boundary, the filter length represents the number of samples applied to the block P adjacent to a left side of the target boundary and the number of samples applied to the block Q adjacent to a right side of the target boundary; and for the target boundary that is a horizontal boundary, the filter length represents the number of samples applied to the block P adjacent to an upper side of the target boundary and the number of samples applied to the block Q adjacent to a lower side of the target boundary.
- A computer-readable storage medium storing encoded information that causes an image decoding apparatus to perform an image decoding method, the image decoding method comprising: generating a reconstructed picture based on prediction samples of a current block; deriving a boundary of the current block in the reconstructed picture as a target boundary for deblocking filtering; performing deblocking filtering based on a filter length for the target boundary; and deriving a modified reconstructed picture for the reconstructed picture based on the deblocking filtering, wherein the filter length is derived based on a distance between the target boundary and a neighboring target boundary of the target boundary.
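To make the filter-length rules recited in the claims above easier to follow, the following is a minimal, non-normative sketch in Python. The threshold values (2 and 4 for chroma; 4, 8, and 16 for luma) are taken from the claims, but the exact mapping of a given distance to 3, 5, or 7 (luma) or to 1 or 3 (chroma), as well as the function names, are illustrative assumptions not stated in this document.

```python
def derive_filter_length(distance: int, is_chroma: bool) -> int:
    """Illustrative derivation of the filter length from the distance between
    the target boundary and its neighboring target boundary."""
    if is_chroma:
        if distance <= 2:
            return 0            # chroma: no samples filtered across this boundary
        return 1 if distance <= 4 else 3
    # luma component block
    if distance <= 4:
        return 0                # luma: boundary too close to its neighboring boundary
    if distance <= 8:
        return 3
    return 5 if distance <= 16 else 7


def deblock_target_boundary(p_block, q_block, distance, bs, is_chroma, vertical):
    """Sketch of how the derived filter length would be used.

    The filter length is the number of samples filtered in block P and in
    block Q on either side of the target boundary: for a vertical boundary,
    P is to the left and Q to the right; for a horizontal boundary, P is
    above and Q below. For chroma, filtering is applied only when the
    boundary strength bs is greater than 0. The actual sample filtering is
    omitted from this sketch.
    """
    if is_chroma and bs <= 0:
        return                  # chroma deblocking requires bs > 0
    length = derive_filter_length(distance, is_chroma)
    if length == 0:
        return                  # nothing to filter across this boundary
    # ... filter `length` samples of p_block and of q_block adjacent to the
    # boundary, along the direction implied by `vertical` (hypothetical).
```

Under the assumed mapping, for example, a luma boundary whose neighboring boundary is 16 samples away would receive a filter length of 5, while a chroma boundary only 2 samples away from its neighbor would not be filtered at all.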
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020297232A AU2020297232B2 (en) | 2019-06-18 | 2020-06-18 | Image or video coding using deblocking filtering |
US17/620,936 US11997265B2 (en) | 2019-06-18 | 2020-06-18 | Image or video coding using deblocking filtering |
AU2024202392A AU2024202392A1 (en) | 2019-06-18 | 2024-04-12 | Image or video coding using deblocking filtering |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962863252P | 2019-06-18 | 2019-06-18 | |
US62/863,252 | 2019-06-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020256436A1 true WO2020256436A1 (ko) | 2020-12-24 |
Family
ID=74040249
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/007908 WO2020256436A1 (ko) | 2019-06-18 | 2020-06-18 | 디블록킹 필터링을 사용하는 영상 또는 비디오 코딩 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11997265B2 (ko) |
AU (2) | AU2020297232B2 (ko) |
WO (1) | WO2020256436A1 (ko) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9906790B2 (en) * | 2014-03-14 | 2018-02-27 | Qualcomm Incorporated | Deblock filtering using pixel distance |
- 2020
- 2020-06-18 US US17/620,936 patent/US11997265B2/en active Active
- 2020-06-18 AU AU2020297232A patent/AU2020297232B2/en active Active
- 2020-06-18 WO PCT/KR2020/007908 patent/WO2020256436A1/ko active Application Filing
- 2024
- 2024-04-12 AU AU2024202392A patent/AU2024202392A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7778480B2 (en) * | 2004-11-23 | 2010-08-17 | Stmicroelectronics Asia Pacific Pte. Ltd. | Block filtering system for reducing artifacts and method |
US20130294525A1 (en) * | 2011-01-14 | 2013-11-07 | Telefonaktiebolaget L M Ericsson (Publ) | Method for Filter Control and a Filtering Control Device |
KR101574447B1 (ko) * | 2011-07-19 | 2015-12-03 | Qualcomm Incorporated | Deblocking of non-square blocks for video coding |
KR20190052097A (ko) * | 2016-09-30 | 2019-05-15 | LG Electronics Inc. | Image processing method and apparatus therefor |
Non-Patent Citations (1)
Title |
---|
BENJAMIN BROSS; JIANLE CHEN; SHAN LIU: "Versatile Video Coding (Draft 5).", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11. 14TH MEETING, no. JVET-N1001-v8, 11 June 2019 (2019-06-11), Geneva, CH, pages 1 - 385, XP030205561 * |
Also Published As
Publication number | Publication date |
---|---|
AU2024202392A1 (en) | 2024-05-02 |
AU2020297232A1 (en) | 2022-02-17 |
US20220360773A1 (en) | 2022-11-10 |
US11997265B2 (en) | 2024-05-28 |
AU2020297232B2 (en) | 2024-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020246849A1 (ko) | | Transform-based image coding method and apparatus therefor |
WO2020218793A1 (ko) | | BDPCM-based image coding method and apparatus therefor |
WO2020171632A1 (ko) | | MPM list-based intra prediction method and apparatus |
WO2020060282A1 (ko) | | Transform coefficient level coding method and apparatus therefor |
WO2020256344A1 (ko) | | Signaling of information indicating a transform kernel set in image coding |
WO2021040400A1 (ko) | | Palette mode-based image or video coding |
WO2020116961A1 (ko) | | Secondary transform-based image coding method and apparatus therefor |
WO2021096057A1 (ko) | | Image coding method based on entry point-related information in a video or image coding system |
WO2020167097A1 (ko) | | Derivation of inter prediction type for inter prediction in an image coding system |
WO2021040398A1 (ko) | | Palette escape coding-based image or video coding |
WO2020071832A1 (ko) | | Transform coefficient coding method and apparatus therefor |
WO2020235960A1 (ko) | | Image decoding method for BDPCM and apparatus therefor |
WO2020197274A1 (ko) | | Transform-based image coding method and apparatus therefor |
WO2020256346A1 (ko) | | Coding of information on a transform kernel set |
WO2021034161A1 (ko) | | Intra prediction apparatus and method |
WO2020251270A1 (ko) | | Image or video coding based on subblock-level temporal motion information |
WO2020185039A1 (ko) | | Residual coding method and apparatus |
WO2020149616A1 (ko) | | CCLM prediction-based image decoding method and apparatus in an image coding system |
WO2021040402A1 (ko) | | Palette coding-based image or video coding |
WO2021201463A1 (ko) | | In-loop filtering-based image coding apparatus and method |
WO2020256345A1 (ko) | | Context coding for information on a transform kernel set in an image coding system |
WO2021040488A1 (ko) | | Image or video coding based on escape binarization in palette mode |
WO2021034160A1 (ko) | | Matrix intra prediction-based image coding apparatus and method |
WO2021091255A1 (ko) | | High-level syntax signaling method and apparatus for image/video coding |
WO2021091253A1 (ko) | | Slice type-based image/video coding method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20825693; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 2020297232; Country of ref document: AU; Date of ref document: 20200618; Kind code of ref document: A |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 20825693; Country of ref document: EP; Kind code of ref document: A1 |