WO2021006692A1 - Video decoding method and apparatus, and video encoding method and apparatus
- Publication number
- WO2021006692A1 (PCT/KR2020/009085)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- current
- coding unit
- value
- neighboring
Classifications
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a slice, e.g. a line of blocks or a group of blocks
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- The present disclosure relates to a video decoding method and a video decoding apparatus. More specifically, when a neighboring pixel located at the upper left or lower right of a current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, belongs to a slice different from the slice including the current pixel, the value of that neighboring pixel is determined as the value of the pixel that is horizontally closest to it among the pixels included in the slice including the current pixel. An adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels is determined based on the values of the current pixel and the neighboring pixels, and the adaptive loop filter is applied to the current pixel so that the value of the current pixel is corrected using the values of the neighboring pixels.
- The present disclosure thus relates to a method and apparatus for correcting the value of the current pixel and encoding or decoding a current block including the current pixel.
- Image data is encoded by a codec according to a predetermined data compression standard, for example, a Moving Picture Experts Group (MPEG) standard, and then stored in a recording medium in the form of a bitstream or transmitted through a communication channel.
- Provided are a method and apparatus that, when the slice including a neighboring pixel located at the upper left or lower right of the current pixel, among neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel, determine the value of the neighboring pixel located at the upper left or lower right as the value of the pixel that is horizontally closest to it among the pixels included in the slice including the current pixel, determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels, apply the adaptive loop filter to the current pixel to correct the value of the current pixel using the values of the neighboring pixels, and encode/decode a current block including the current pixel.
- A video decoding method proposed in the present disclosure may include: determining whether a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determining the value of the neighboring pixel located at the upper left or lower right as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determining an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on values of the current pixel and the neighboring pixels; correcting a value of the current pixel using values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decoding a current block including the current pixel.
- A video decoding apparatus proposed in the present disclosure includes a memory and at least one processor connected to the memory. The at least one processor may be configured to: when a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel, determine the value of the neighboring pixel as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; correct the value of the current pixel using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decode a current block including the current pixel.
- A video encoding method proposed in the present disclosure may include: determining whether a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determining the value of the neighboring pixel located at the upper left or lower right as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determining an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on values of the current pixel and the neighboring pixels; correcting a value of the current pixel using values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and encoding a current block including the current pixel.
- A video encoding apparatus proposed in the present disclosure includes a memory and at least one processor connected to the memory. The at least one processor may be configured to: when a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel, determine the value of the neighboring pixel as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; correct the value of the current pixel using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and encode a current block including the current pixel.
- In other words, when the slice including the neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel, the value of the neighboring pixel located at the upper left or lower right is replaced with the value of a pixel included in the slice including the current pixel, an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels is determined, the adaptive loop filter is applied to the current pixel to correct the value of the current pixel using the values of the neighboring pixels, and a current block including the current pixel is encoded or decoded.
- Coding efficiency and filtering performance may be improved by applying ALF filtering after performing pixel padding even when a pixel at the upper left or lower right of the current pixel lies outside the slice boundary.
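The following is a minimal sketch of the corner padding described above: when the upper-left or lower-right neighbor of the ALF window falls in a different slice, its value is taken from the horizontally closest pixel, in the same row, that belongs to the slice of the current pixel. The frame/slice-map layout, the function name, and the scan strategy are assumptions for illustration, not the normative process.

```python
def pad_corner_neighbor(frame, slice_id, nx, ny, cx, cy):
    """Value used for the ALF neighbor at (nx, ny) of the current pixel (cx, cy).

    frame    -- 2-D list of reconstructed sample values
    slice_id -- 2-D list giving the slice index of every sample
    If the neighbor lies in a different slice than the current pixel, the value
    of the horizontally closest pixel (same row) that belongs to the current
    pixel's slice is used instead.
    """
    h, w = len(frame), len(frame[0])
    nx = min(max(nx, 0), w - 1)
    ny = min(max(ny, 0), h - 1)
    if slice_id[ny][nx] == slice_id[cy][cx]:
        return frame[ny][nx]              # same slice: use the neighbor directly
    step = 1 if cx > nx else -1           # scan toward the current pixel's column
    x = nx
    while 0 <= x < w:
        if slice_id[ny][x] == slice_id[cy][cx]:
            return frame[ny][x]           # horizontally closest in-slice pixel
        x += step
    return frame[cy][cx]                  # fallback if the row has no such pixel
```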
- FIG. 1 is a schematic block diagram of an image decoding apparatus according to an embodiment.
- FIG. 2 is a flowchart illustrating a method of decoding an image according to an embodiment.
- FIG. 3 is a diagram illustrating a process of determining at least one coding unit by dividing a current coding unit by an image decoding apparatus, according to an embodiment.
- FIG. 4 illustrates a process in which an image decoding apparatus determines at least one coding unit by dividing coding units having a non-square shape, according to an exemplary embodiment.
- FIG. 5 is a diagram illustrating a process in which an image decoding apparatus divides a coding unit based on at least one of block type information and split type mode information, according to an embodiment.
- FIG. 6 is a diagram illustrating a method for an image decoding apparatus to determine a predetermined coding unit among odd coding units, according to an embodiment.
- FIG. 7 illustrates an order in which a plurality of coding units are processed when a plurality of coding units are determined by dividing a current coding unit by an image decoding apparatus according to an embodiment.
- FIG. 8 illustrates a process of determining that a current coding unit is divided into odd number of coding units when coding units cannot be processed in a predetermined order, according to an embodiment.
- FIG. 9 is a diagram illustrating a process of determining at least one coding unit by dividing a first coding unit by an image decoding apparatus according to an embodiment.
- FIG. 10 illustrates that, according to an embodiment, when a non-square second coding unit determined by splitting a first coding unit satisfies a predetermined condition, the forms into which the second coding unit can be split are limited.
- FIG. 11 illustrates a process in which an image decoding apparatus splits a square coding unit when the split type mode information cannot indicate splitting into four square coding units, according to an embodiment.
- FIG. 12 illustrates that a processing order between a plurality of coding units may vary according to a splitting process of a coding unit according to an embodiment.
- FIG. 13 illustrates a process in which a depth of a coding unit is determined according to a change in a shape and size of a coding unit when a coding unit is recursively split to determine a plurality of coding units according to an embodiment.
- FIG. 14 illustrates a depth that may be determined according to shapes and sizes of coding units, and a part index (hereinafter referred to as a PID) for distinguishing the coding units, according to an embodiment.
- FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture, according to an embodiment.
- FIG. 16 illustrates a processing block that serves as a reference for determining an order of determining reference coding units included in a picture, according to an embodiment.
- FIG. 17 is a block diagram of a video encoding apparatus according to an embodiment.
- FIG. 18 is a flowchart of a video encoding method according to an embodiment.
- FIG. 19 is a block diagram of a video decoding apparatus according to an embodiment.
- FIG. 20 is a flowchart of a video decoding method according to an embodiment.
- FIG. 21 is a diagram for describing filtering at a slice boundary in raster scan order, according to an exemplary embodiment.
- FIG. 22 is a diagram for describing pixel padding for an upper left region of a filtering region at a slice boundary in a raster scan order, according to an exemplary embodiment.
- FIG. 23 is a diagram for describing pixel padding for a lower right area of a filtering area at a slice boundary in a raster scan order, according to an exemplary embodiment.
- FIG. 24 shows a filter including filter coefficients of an adaptive loop filter for a current pixel of a luma block.
- FIG. 25 is a diagram for describing a method of padding an upper left peripheral pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a luma block.
- FIG. 26 is a diagram for explaining a method of padding a lower right neighboring pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a luma block.
- FIG. 27 shows a filter including filter coefficients of an adaptive loop filter for a current pixel of a chroma block.
- FIG. 28 is a diagram for describing a method of padding an upper left neighboring pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a chroma block.
- FIG. 29 is a diagram for describing a method of padding a lower right neighboring pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a chroma block.
- A video decoding method according to an embodiment of the present disclosure may include: determining whether a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determining the value of the neighboring pixel located at the upper left or lower right as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determining an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on values of the current pixel and the neighboring pixels; correcting a value of the current pixel using values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decoding a current block including the current pixel.
- The adaptive loop filter may be a 7x7 diamond (rhombus) shaped filter.
- The adaptive loop filter may be a 5x5 diamond (rhombus) shaped filter.
- The current pixel may be a pixel to which deblocking filtering for removing blocking artifacts and sample offset filtering for correcting a pixel value using at least one of an edge offset and a band offset have been applied.
- The correcting of the value of the current pixel using the values of the neighboring pixels may further include adding, to the value of the current pixel, values obtained by multiplying the difference between the value of the current pixel and the value of each of the neighboring pixels by the corresponding filter coefficient.
- The filter coefficients may be determined based on a direction and an amount of change of the current pixel and the neighboring pixels.
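A minimal sketch of the correction described above: the coefficient-weighted differences between the (padded) neighbor values and the current pixel are accumulated and added back to the current pixel. The integer normalization shift and the function name are assumptions for illustration; VVC-style ALF additionally clips the differences before weighting, which is omitted here.

```python
def apply_alf(cur, neighbors, coeffs, shift=7):
    """Correct a reconstructed pixel with a diamond-shaped adaptive loop filter.

    cur       -- reconstructed value of the current pixel
    neighbors -- padded values of the surrounding pixels covered by the filter
                 (e.g., a 7x7 diamond for luma or a 5x5 diamond for chroma)
    coeffs    -- filter coefficients for those neighbors, in the same order
    shift     -- normalization shift for integer arithmetic (assumed value)
    """
    acc = 0
    for value, coeff in zip(neighbors, coeffs):
        acc += coeff * (value - cur)          # coefficient-weighted difference
    rounding = 1 << (shift - 1)
    return cur + ((acc + rounding) >> shift)  # add the normalized correction
```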
- According to an embodiment, when a neighboring pixel located above the current pixel lies outside the upper boundary of the slice including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel that is vertically closest to that neighboring pixel among the pixels included in the current block; when a neighboring pixel located below the current pixel lies outside the lower boundary of the slice including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel that is vertically closest to that neighboring pixel among the pixels included in the current block; when a neighboring pixel located to the left of the current pixel lies outside the left boundary of the slice including the current block, its value is determined as the value of the pixel that is horizontally closest to that neighboring pixel among the pixels included in the current block; and when a neighboring pixel located to the right of the current pixel lies outside the right boundary of the slice including the current block, its value is determined as the value of the pixel that is horizontally closest to that neighboring pixel among the pixels included in the current block.
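The side padding described in the preceding paragraph amounts to clamping the neighbor coordinates to the usable region, so that a neighbor above or below the boundary reuses the vertically closest available row and a neighbor to the left or right reuses the horizontally closest available column. The boundary parameters below are illustrative assumptions.

```python
def clamp_neighbor(nx, ny, left, right, top, bottom):
    """Clamp ALF neighbor coordinates (nx, ny) to the usable region.

    left/right/top/bottom are the first and last usable columns/rows (e.g., the
    boundaries of the slice containing the current block). A neighbor outside
    the region is replaced by the vertically or horizontally closest pixel.
    """
    nx = min(max(nx, left), right)   # horizontal padding for left/right neighbors
    ny = min(max(ny, top), bottom)   # vertical padding for above/below neighbors
    return nx, ny
```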
- A video encoding method according to an embodiment of the present disclosure may include: determining whether a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determining the value of the neighboring pixel located at the upper left or lower right as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determining an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on values of the current pixel and the neighboring pixels; correcting a value of the current pixel using values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and encoding a current block including the current pixel.
- The adaptive loop filter may be a 7x7 diamond (rhombus) shaped filter.
- The adaptive loop filter may be a 5x5 diamond (rhombus) shaped filter.
- The current pixel may be a pixel to which deblocking filtering for removing blocking artifacts and sample offset filtering for correcting a pixel value using at least one of an edge offset and a band offset have been applied.
- The correcting of the value of the current pixel using the values of the neighboring pixels may further include adding, to the value of the current pixel, values obtained by multiplying the difference between the value of the current pixel and the value of each of the neighboring pixels by the corresponding filter coefficient.
- The filter coefficients may be determined based on a direction and an amount of change of the current pixel and the neighboring pixels.
- According to an embodiment, when a neighboring pixel located above the current pixel lies outside the upper boundary of the slice including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel that is vertically closest to that neighboring pixel among the pixels included in the current block; when a neighboring pixel located below the current pixel lies outside the lower boundary of the slice including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel that is vertically closest to that neighboring pixel among the pixels included in the current block; when a neighboring pixel located to the left of the current pixel lies outside the left boundary of the slice including the current block, its value is determined as the value of the pixel that is horizontally closest to that neighboring pixel among the pixels included in the current block; and when a neighboring pixel located to the right of the current pixel lies outside the right boundary of the slice including the current block, its value is determined as the value of the pixel that is horizontally closest to that neighboring pixel among the pixels included in the current block.
- A video decoding apparatus according to an embodiment includes a memory and at least one processor connected to the memory. The at least one processor may be configured to: when a slice including a neighboring pixel located at the upper left or lower right of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel, determine the value of the neighboring pixel as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel; determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; correct the value of the current pixel using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decode a current block including the current pixel.
- The term “unit” used in the specification refers to a software or hardware component, and a “unit” performs certain roles. However, a “unit” is not limited to software or hardware.
- A “unit” may be configured to reside in an addressable storage medium or may be configured to execute on one or more processors.
- Thus, as an example, a “unit” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
- the functions provided within the components and “units” may be combined into a smaller number of components and “units” or may be further separated into additional components and “units”.
- the "unit” may be implemented with a processor and a memory.
- processor is to be interpreted broadly to include general purpose processors, central processing units (CPUs), microprocessors, digital signal processors (DSPs), controllers, microcontrollers, state machines, and the like.
- A "processor" may refer to an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable gate array (FPGA), and the like.
- The term "processor" may also refer to a combination of processing devices, for example, a combination of a DSP and a microprocessor, a combination of a plurality of microprocessors, one or more microprocessors in combination with a DSP core, or any other such configuration.
- memory should be interpreted broadly to include any electronic component capable of storing electronic information.
- The term "memory" may refer to various types of processor-readable media such as random-access memory (RAM), read-only memory (ROM), non-volatile random-access memory (NVRAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable PROM (EEPROM), flash memory, magnetic or optical data storage, registers, and the like.
- The memory is said to be in electronic communication with the processor if the processor can read information from and/or write information to the memory.
- Hereinafter, an "image" may represent a static image such as a still image of a video, or a dynamic image such as a moving picture, that is, the video itself.
- Hereinafter, a "sample" refers to data that is allocated to a sampling position of an image and is to be processed.
- a pixel value in an image in a spatial domain and transform coefficients in a transform domain may be samples.
- a unit including at least one such sample may be defined as a block.
- The 'current block' may mean a block of a largest coding unit, a coding unit, a prediction unit, or a transformation unit of a current image to be encoded or decoded.
- A method of determining a data unit of an image according to an exemplary embodiment will be described with reference to FIGS. 3 to 16, and a video encoding/decoding method according to an exemplary embodiment will be described with reference to FIGS. 17 to 29, in which, when the slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel, the value of the neighboring pixel located at the upper left or lower right is determined as the value of the pixel that is horizontally closest to the neighboring pixel located at the upper left or lower right among the pixels included in the slice including the current pixel.
- Hereinafter, a method and apparatus for adaptively selecting a context model based on coding units of various shapes according to an embodiment of the present disclosure will be described with reference to FIGS. 1 and 2.
- FIG. 1 is a schematic block diagram of an image decoding apparatus according to an embodiment.
- the image decoding apparatus 100 may include a receiving unit 110 and a decoding unit 120.
- the receiving unit 110 and the decoding unit 120 may include at least one processor.
- the receiving unit 110 and the decoding unit 120 may include a memory storing instructions to be executed by at least one processor.
- the receiver 110 may receive a bitstream.
- the bitstream includes information obtained by encoding an image by the image encoding apparatus 2200, which will be described later. Also, the bitstream may be transmitted from the image encoding apparatus 2200.
- The image encoding apparatus 2200 and the image decoding apparatus 100 may be connected by wire or wirelessly, and the receiver 110 may receive the bitstream through a wired or wireless connection.
- The receiver 110 may receive the bitstream from a storage medium such as optical media or a hard disk.
- the decoder 120 may reconstruct an image based on information obtained from the received bitstream.
- the decoder 120 may obtain a syntax element for reconstructing an image from the bitstream.
- the decoder 120 may reconstruct an image based on the syntax element.
- FIG. 2 is a flowchart illustrating a method of decoding an image according to an embodiment.
- the receiver 110 receives a bitstream.
- The image decoding apparatus 100 performs an operation 210 of obtaining a binstring corresponding to a split type mode of a coding unit from the bitstream.
- the image decoding apparatus 100 performs an operation 220 of determining a partitioning rule of a coding unit.
- The image decoding apparatus 100 performs an operation 230 of splitting a coding unit into a plurality of coding units based on at least one of the binstring corresponding to the split type mode and the splitting rule.
- the image decoding apparatus 100 may determine an allowable first range of the size of the coding unit according to a ratio of the width and height of the coding unit to determine a splitting rule.
- the image decoding apparatus 100 may determine an allowable second range of a size of a coding unit according to a split type mode of a coding unit in order to determine a splitting rule.
- one picture may be divided into one or more slices or one or more tiles.
- One slice or one tile may be a sequence of one or more largest coding units (CTU).
- the largest coding block CTB refers to an NxN block including NxN samples (N is an integer). Each color component may be divided into one or more maximum coding blocks.
- The largest coding unit is a unit including a largest coding block of luma samples, two corresponding largest coding blocks of chroma samples, and syntax structures used to encode the luma samples and the chroma samples.
- When the picture is a monochrome picture, the largest coding unit is a unit including a largest coding block of monochrome samples and syntax structures used to encode the monochrome samples.
- When the picture is coded with color planes separated for the respective color components, the largest coding unit is a unit including the picture and syntax structures used to encode the samples of the picture.
- One maximum coding block CTB may be divided into MxN coding blocks including MxN samples (M and N are integers).
- When a picture has a sample array for each of the Y, Cr, and Cb components, a coding unit (CU) is a unit including a coding block of luma samples, two corresponding coding blocks of chroma samples, and syntax structures used to encode the luma samples and the chroma samples.
- When the picture is a monochrome picture, a coding unit is a unit including a coding block of monochrome samples and syntax structures used to encode the monochrome samples.
- When a picture is coded with color planes separated for the respective color components, a coding unit is a unit including the picture and syntax structures used to encode the samples of the picture.
- As described above, a largest coding block and a largest coding unit are concepts distinguished from each other, and a coding block and a coding unit are concepts distinguished from each other. That is, a (largest) coding unit refers to a data structure including a (largest) coding block containing the corresponding samples and the syntax structures corresponding thereto.
- However, since a (largest) coding unit or a (largest) coding block refers to a block of a predetermined size including a predetermined number of samples, the following specification mentions the largest coding block and the largest coding unit, or the coding block and the coding unit, without distinction unless otherwise specified.
- An image may be divided into a largest coding unit (CTU).
- the size of the largest coding unit may be determined based on information obtained from the bitstream.
- the shape of the largest coding unit may have a square of the same size. However, it is not limited thereto.
- information on the maximum size of a luma coding block may be obtained from the bitstream.
- the maximum size of the luma coded block indicated by information on the maximum size of the luma coded block may be one of 4x4, 8x8, 16x16, 32x32, 64x64, 128x128, and 256x256.
- According to an embodiment, information on the maximum size of a luma coding block that can be binary split, and information on the luma block size difference, may be obtained from the bitstream.
- The information on the luma block size difference may indicate the size difference between the largest luma coding unit and the largest luma coding block that can be binary split. Accordingly, the size of the largest luma coding unit may be determined by combining the information on the maximum size of the luma coding block that can be binary split and the information on the luma block size difference, both obtained from the bitstream. If the size of the largest luma coding unit is known, the size of the largest chroma coding unit may also be determined.
- For example, the size of a chroma block may be half the size of the corresponding luma block, and similarly, the size of the largest chroma coding unit may be half the size of the largest luma coding unit.
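As a worked sketch of the size derivation just described (the variable names, the interpretation of the size difference as a power of two, and the 4:2:0 chroma format are assumptions for illustration):

```python
# Hypothetical values parsed from the bitstream (names are illustrative).
max_binary_split_luma_size = 64    # max size of a luma coding block that can be binary split
luma_size_difference_log2 = 1      # signalled luma block size difference (assumed as log2)

# Combining the two pieces of information yields the largest luma coding unit.
max_luma_ctu_size = max_binary_split_luma_size << luma_size_difference_log2   # 128

# Assuming a 4:2:0 format, the largest chroma coding unit is half the luma size.
max_chroma_ctu_size = max_luma_ctu_size // 2                                  # 64
```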
- the maximum size of a luma coded block capable of binary splitting may be variably determined.
- a maximum size of a luma coding block capable of ternary splitting may be fixed.
- the maximum size of a luma coded block capable of ternary division in an I picture may be 32x32
- a maximum size of a luma coded block capable of ternary division in a P picture or B picture may be 64x64.
- the largest coding unit may be hierarchically split into coding units based on split type mode information obtained from a bitstream.
- As the split type mode information, at least one of information indicating whether quad splitting is performed, information indicating whether multi-splitting is performed, split direction information, and split type information may be obtained from the bitstream.
- information indicating whether the current coding unit is quad split may indicate whether the current coding unit is to be quad split (QUAD_SPLIT) or not quad split.
- The information indicating whether multi-splitting is performed may indicate whether the current coding unit is no longer split (NO_SPLIT) or is binary/ternary split.
- the split direction information indicates that the current coding unit is split in either a horizontal direction or a vertical direction.
- The split type information indicates whether the current coding unit is binary split or ternary split.
- a split mode of the current coding unit may be determined according to split direction information and split type information.
- According to an embodiment, when the current coding unit is binary split in the horizontal direction, the split mode is binary horizontal splitting (SPLIT_BT_HOR); when it is ternary split in the horizontal direction, the split mode is ternary horizontal splitting (SPLIT_TT_HOR); when it is binary split in the vertical direction, the split mode is binary vertical splitting (SPLIT_BT_VER); and when it is ternary split in the vertical direction, the split mode may be determined as ternary vertical splitting (SPLIT_TT_VER).
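A sketch of how the split direction and split type could be combined into one of the split modes named above; the enum and the helper function are illustrative, not part of the disclosure.

```python
from enum import Enum

class SplitMode(Enum):
    NO_SPLIT = 0
    QUAD_SPLIT = 1
    SPLIT_BT_HOR = 2
    SPLIT_TT_HOR = 3
    SPLIT_BT_VER = 4
    SPLIT_TT_VER = 5

def determine_split_mode(direction, split_type):
    """Map split direction ('HOR'/'VER') and split type ('BT'/'TT') to a split mode."""
    table = {
        ("HOR", "BT"): SplitMode.SPLIT_BT_HOR,
        ("HOR", "TT"): SplitMode.SPLIT_TT_HOR,
        ("VER", "BT"): SplitMode.SPLIT_BT_VER,
        ("VER", "TT"): SplitMode.SPLIT_TT_VER,
    }
    return table[(direction, split_type)]
```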
- The image decoding apparatus 100 may obtain the split type mode information from the bitstream in the form of one binstring.
- The form of the binstring received by the image decoding apparatus 100 may include a fixed-length binary code, a unary code, a truncated unary code, a predetermined binary code, and the like.
- A binstring is a binary sequence representing information.
- the binstring may consist of at least one bit.
- the image decoding apparatus 100 may obtain information on a division type mode corresponding to a binstring based on a division rule.
- the image decoding apparatus 100 may determine whether to divide the coding unit into quads or not, or determine a division direction and a division type based on one binstring.
- the coding unit may be less than or equal to the largest coding unit.
- Since the largest coding unit is also a coding unit having the maximum size, it is one of the coding units.
- When the split type mode information for the largest coding unit indicates no splitting, the coding unit determined in the largest coding unit has the same size as the largest coding unit.
- When the split type mode information for the largest coding unit indicates splitting, the largest coding unit may be split into coding units.
- Also, when the split type mode information for a coding unit indicates splitting, the coding unit may be split into coding units of smaller sizes.
- the division of the image is not limited thereto, and the largest coding unit and the coding unit may not be distinguished. Splitting of the coding unit will be described in more detail with reference to FIGS. 3 to 16.
- one or more prediction blocks for prediction may be determined from the coding unit.
- the prediction block may be equal to or smaller than the coding unit.
- one or more transform blocks for transformation may be determined from the coding unit.
- the transform block may be equal to or smaller than the coding unit.
- the shape and size of the transform block and the prediction block may not be related to each other.
- the coding unit may be a prediction block, and prediction may be performed using the coding unit.
- the coding unit may be a transform block and transformation may be performed using the coding unit.
- the current block and the neighboring block of the present disclosure may represent one of a largest coding unit, a coding unit, a prediction block, and a transform block.
- the current block or the current coding unit is a block currently undergoing decoding or encoding or a block currently undergoing splitting.
- the neighboring block may be a block restored before the current block.
- the neighboring blocks may be spatially or temporally adjacent to the current block.
- the neighboring block may be located in one of the lower left, left, upper left, upper, upper right, right and lower right of the current block.
- FIG. 3 is a diagram illustrating a process of determining at least one coding unit by dividing a current coding unit by an image decoding apparatus, according to an embodiment.
- the block shape may include 4Nx4N, 4Nx2N, 2Nx4N, 4NxN, Nx4N, 32NxN, Nx32N, 16NxN, Nx16N, 8NxN, or Nx8N.
- N may be a positive integer.
- The block shape information is information indicating at least one of a shape, a direction, a ratio of width and height, or a size of a coding unit.
- the shape of the coding unit may include a square and a non-square.
- When the lengths of the width and the height of the coding unit are the same, the image decoding apparatus 100 may determine the block shape information of the coding unit as square.
- When the lengths of the width and the height of the coding unit are different, the image decoding apparatus 100 may determine the shape of the coding unit as non-square and may determine the block shape information of the coding unit as non-square.
- The image decoding apparatus 100 may determine the ratio of width and height in the block shape information of the coding unit as 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 1:32, or 32:1.
- Also, the image decoding apparatus 100 may determine whether the coding unit is in a horizontal direction or a vertical direction. In addition, the image decoding apparatus 100 may determine the size of the coding unit based on at least one of the length of the width, the length of the height, or the area of the coding unit.
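A small sketch of how block shape information (square/non-square, direction, and ratio) could be derived from the width and height of a coding unit; the return format is an assumption for illustration.

```python
def block_shape_info(width, height):
    """Derive block shape information from the dimensions of a coding unit."""
    if width == height:
        return {"shape": "SQUARE", "direction": None, "ratio": (1, 1)}
    direction = "HORIZONTAL" if width > height else "VERTICAL"
    if width > height:
        ratio = (width // height, 1)     # e.g. 2:1, 4:1, ..., 32:1
    else:
        ratio = (1, height // width)     # e.g. 1:2, 1:4, ..., 1:32
    return {"shape": "NON_SQUARE", "direction": direction, "ratio": ratio}

# block_shape_info(32, 8) -> {'shape': 'NON_SQUARE', 'direction': 'HORIZONTAL', 'ratio': (4, 1)}
```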
- the image decoding apparatus 100 may determine a type of a coding unit using block type information, and may determine in what type a coding unit is divided using the split type mode information. That is, a method of dividing the coding unit indicated by the division type mode information may be determined according to which block type the block type information used by the image decoding apparatus 100 represents.
- the image decoding apparatus 100 may obtain split type mode information from the bitstream. However, the present invention is not limited thereto, and the image decoding apparatus 100 and the image encoding apparatus 2200 may determine predetermined split type mode information based on the block type information.
- the image decoding apparatus 100 may determine split type mode information predetermined for the largest coding unit or the smallest coding unit. For example, the image decoding apparatus 100 may determine the split type mode information for the largest coding unit as a quad split. In addition, the image decoding apparatus 100 may determine the split type mode information as "not split" for the minimum coding unit. In more detail, the image decoding apparatus 100 may determine the size of the largest coding unit to be 256x256.
- The image decoding apparatus 100 may determine the pre-agreed split type mode information as quad splitting.
- Quad splitting is a split mode in which both the width and height of a coding unit are bisected.
- the image decoding apparatus 100 may obtain a coding unit having a size of 128x128 from the largest coding unit having a size of 256x256 based on the split type mode information.
- the image decoding apparatus 100 may determine the size of the minimum coding unit to be 4x4.
- the image decoding apparatus 100 may obtain split type mode information indicating "no splitting" with respect to the minimum coding unit.
- According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit is square. For example, the image decoding apparatus 100 may determine whether not to split the square coding unit, whether to split it vertically, split it horizontally, or split it into four coding units, according to the split type mode information.
- Referring to FIG. 3, when the block shape information of the current coding unit 300 indicates a square shape, the decoder 120 may not split the coding unit 310a having the same size as the current coding unit 300 according to split type mode information indicating no splitting, or may determine split coding units 310b, 310c, 310d, 310e, 310f, etc. based on split type mode information indicating a predetermined splitting method.
- Referring to FIG. 3, the image decoding apparatus 100 may determine two coding units 310b obtained by splitting the current coding unit 300 in the vertical direction, based on split type mode information indicating splitting in the vertical direction.
- the image decoding apparatus 100 may determine two coding units 310c obtained by splitting the current coding unit 300 in the horizontal direction based on split mode information indicating that the image is split in the horizontal direction.
- the image decoding apparatus 100 may determine four coding units 310d obtained by splitting the current coding unit 300 vertically and horizontally based on split mode information indicating splitting in the vertical and horizontal directions.
- The image decoding apparatus 100 may determine three coding units 310e obtained by splitting the current coding unit 300 in the vertical direction, based on split type mode information indicating ternary splitting in the vertical direction.
- the image decoding apparatus 100 may determine three coding units 310f obtained by splitting the current coding unit 300 in the horizontal direction based on split mode information indicating that ternary splitting is performed in the horizontal direction.
- However, the split forms into which a square coding unit can be split should not be interpreted as being limited to the above-described forms, and may include various forms that the split type mode information can represent. Predetermined split forms into which a square coding unit is split will be described in detail through various embodiments below.
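The following sketch enumerates the sub-block sizes produced when a square coding unit is split by each of the modes described above; the 1:2:1 partition for ternary splits and the helper function itself are assumptions for illustration.

```python
def split_square_cu(w, h, mode):
    """Return the (width, height) of the sub-coding-units for a given split mode."""
    if mode == "NO_SPLIT":
        return [(w, h)]
    if mode == "QUAD_SPLIT":
        return [(w // 2, h // 2)] * 4
    if mode == "SPLIT_BT_VER":
        return [(w // 2, h)] * 2
    if mode == "SPLIT_BT_HOR":
        return [(w, h // 2)] * 2
    if mode == "SPLIT_TT_VER":                    # assumed 1:2:1 partition of the width
        return [(w // 4, h), (w // 2, h), (w // 4, h)]
    if mode == "SPLIT_TT_HOR":                    # assumed 1:2:1 partition of the height
        return [(w, h // 4), (w, h // 2), (w, h // 4)]
    raise ValueError(mode)

# split_square_cu(64, 64, "SPLIT_TT_VER") -> [(16, 64), (32, 64), (16, 64)]
```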
- FIG. 4 illustrates a process in which an image decoding apparatus determines at least one coding unit by dividing coding units having a non-square shape, according to an exemplary embodiment.
- the image decoding apparatus 100 may use block type information indicating that the current coding unit is a non-square type.
- The image decoding apparatus 100 may determine whether to split the non-square current coding unit or to split it by a predetermined method, according to the split type mode information. Referring to FIG. 4, based on split type mode information indicating no splitting, the image decoding apparatus 100 may determine a coding unit 410 or 460 having the same size as the current coding unit 400 or 450, or, based on split type mode information indicating a predetermined splitting method, may determine split coding units 420a, 420b, 430a, 430b, 430c, 470a, 470b, 480a, 480b, or 480c.
- a predetermined splitting method in which a non-square coding unit is split will be described in detail through various embodiments below.
- According to an embodiment, the image decoding apparatus 100 may determine the form in which a coding unit is split by using the split type mode information, and in this case, the split type mode information may represent the number of at least one coding unit generated by splitting the coding unit.
- When the split type mode information indicates that the current coding unit 400 or 450 is split into two coding units, the image decoding apparatus 100 may split the current coding unit 400 or 450 based on the split type mode information to determine the two coding units 420a and 420b, or 470a and 470b, included in the current coding unit.
- According to an embodiment, when the image decoding apparatus 100 splits the non-square current coding unit 400 or 450 based on the split type mode information, the image decoding apparatus 100 may split the current coding unit in consideration of the position of the long side of the non-square current coding unit 400 or 450. For example, the image decoding apparatus 100 may determine a plurality of coding units by splitting the current coding unit 400 or 450 in the direction that divides its long side, in consideration of the shape of the current coding unit 400 or 450.
- According to an embodiment, when the split type mode information indicates splitting a coding unit into an odd number of blocks (ternary splitting), the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450. For example, when the split type mode information indicates that the current coding unit 400 or 450 is split into three coding units, the image decoding apparatus 100 may split the current coding unit 400 or 450 into the three coding units 430a, 430b, and 430c, or 480a, 480b, and 480c.
- According to an embodiment, the ratio of the width and height of the current coding unit 400 or 450 may be 4:1 or 1:4. When the ratio is 4:1, the width is longer than the height, so the block shape information may indicate the horizontal direction. When the ratio is 1:4, the width is shorter than the height, so the block shape information may indicate the vertical direction.
- the image decoding apparatus 100 may determine to divide the current coding unit into odd-numbered blocks based on the split mode information. Also, the image decoding apparatus 100 may determine a splitting direction of the current coding unit 400 or 450 based on block type information of the current coding unit 400 or 450.
- When the current coding unit 400 is in the vertical direction, the image decoding apparatus 100 may determine the coding units 430a, 430b, and 430c by splitting the current coding unit 400 in the horizontal direction. Also, when the current coding unit 450 is in the horizontal direction, the image decoding apparatus 100 may determine the coding units 480a, 480b, and 480c by splitting the current coding unit 450 in the vertical direction.
- the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450, and all sizes of the determined coding units may not be the same.
- For example, the size of a predetermined coding unit 430b or 480b among the determined odd number of coding units 430a, 430b, 430c, 480a, 480b, and 480c may be different from the sizes of the other coding units 430a, 430c, 480a, and 480c.
- That is, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450, and furthermore, may place a predetermined restriction on at least one coding unit among the odd number of coding units generated by the splitting. Referring to FIG. 4, the image decoding apparatus 100 may make the decoding process for the coding unit 430b or 480b positioned at the center of the three coding units 430a, 430b, and 430c, or 480a, 480b, and 480c, generated by splitting the current coding unit 400 or 450, different from that for the other coding units 430a, 430c, 480a, and 480c.
- For example, the image decoding apparatus 100 may restrict the coding unit 430b or 480b located at the center so that it is not further split, or so that it is split only a predetermined number of times.
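A sketch of the ternary splitting of a non-square coding unit described above: the split direction is chosen so that the long side is divided, a 1:2:1 partition is assumed, and the middle coding unit carries a hypothetical restriction flag (e.g., indicating that it may not be split further).

```python
def ternary_split_nonsquare(w, h):
    """Ternary-split a non-square coding unit across its long side (illustrative)."""
    if h > w:                               # vertical block: split in the horizontal direction
        parts = [h // 4, h // 2, h // 4]    # assumed 1:2:1 partition
        return [{"w": w, "h": ph, "restricted": i == 1} for i, ph in enumerate(parts)]
    parts = [w // 4, w // 2, w // 4]        # horizontal block: split in the vertical direction
    return [{"w": pw, "h": h, "restricted": i == 1} for i, pw in enumerate(parts)]

# ternary_split_nonsquare(16, 64) -> the middle coding unit is 16x32 and marked restricted
```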
- FIG. 5 is a diagram illustrating a process in which an image decoding apparatus divides a coding unit based on at least one of block type information and split type mode information, according to an embodiment.
- According to an embodiment, the image decoding apparatus 100 may determine whether to split the square first coding unit 500 into coding units or not to split it, based on at least one of the block shape information and the split type mode information. According to an embodiment, when the split type mode information indicates splitting the first coding unit 500 in the horizontal direction, the image decoding apparatus 100 may determine the second coding unit 510 by splitting the first coding unit 500 in the horizontal direction.
- A first coding unit, a second coding unit, and a third coding unit used according to an embodiment are terms used to express the relationship between coding units before and after splitting. For example, when the first coding unit is split, a second coding unit may be determined, and when the second coding unit is split, a third coding unit may be determined.
- According to an embodiment, the image decoding apparatus 100 may determine whether the determined second coding unit 510 is split into coding units or is not split, based on the split type mode information. Referring to FIG. 5, the image decoding apparatus 100 may split the non-square second coding unit 510, which was determined by splitting the first coding unit 500, into at least one third coding unit (520a, 520b, 520c, 520d, etc.) based on the split type mode information, or may not split the second coding unit 510. The image decoding apparatus 100 may obtain the split type mode information, and may split the first coding unit 500 based on the obtained split type mode information into a plurality of second coding units (e.g., 510) of various shapes; the second coding unit 510 may then be split according to the manner in which the first coding unit 500 was split based on the split type mode information.
- According to an embodiment, when the first coding unit 500 is split into the second coding unit 510 based on the split type mode information for the first coding unit 500, the second coding unit 510 may also be split into third coding units (e.g., 520a, 520b, 520c, 520d, etc.) based on the split type mode information for the second coding unit 510. That is, a coding unit may be recursively split based on the split type mode information related to each coding unit. Accordingly, a square coding unit may be determined from a non-square coding unit, and a non-square coding unit may be determined by recursively splitting the square coding unit.
- Referring to FIG. 5, a predetermined coding unit (for example, the coding unit located at the center or a square coding unit) among the odd number of third coding units 520b, 520c, and 520d determined by splitting the non-square second coding unit 510 may be recursively split.
- The square third coding unit 520b, which is one of the odd number of third coding units 520b, 520c, and 520d, may be split in the horizontal direction into a plurality of fourth coding units.
- A non-square fourth coding unit 530b or 530d, which is one of the plurality of fourth coding units 530a, 530b, 530c, and 530d, may be further split into a plurality of coding units.
- the fourth coding unit 530b or 530d having a non-square shape may be split again into odd coding units.
- a method that can be used for recursive partitioning of coding units will be described later through various embodiments.
- the image decoding apparatus 100 may divide each of the third coding units 520a, 520b, 520c, 520d, etc. into coding units based on split mode information. Also, the image decoding apparatus 100 may determine not to split the second coding unit 510 based on the split type mode information. The image decoding apparatus 100 may divide the second coding unit 510 in a non-square shape into odd number of third coding units 520b, 520c, and 520d according to an embodiment. The image decoding apparatus 100 may place a predetermined limit on a predetermined third coding unit among the odd number of third coding units 520b, 520c, and 520d.
- For example, the image decoding apparatus 100 may restrict the coding unit 520c positioned in the middle of the odd number of third coding units 520b, 520c, and 520d so that it is no longer split, or so that it is split only a settable number of times.
- Referring to FIG. 5, the image decoding apparatus 100 may restrict the coding unit 520c positioned in the middle of the odd number of third coding units 520b, 520c, and 520d included in the non-square second coding unit 510 so that it is no longer split, is split only into a predetermined split form (e.g., split only into four coding units or split into a form corresponding to the form in which the second coding unit 510 was split), or is split only a predetermined number of times (e.g., split only n times, where n > 0).
- However, the above restrictions on the coding unit 520c positioned in the middle are merely exemplary embodiments and should not be interpreted as being limited thereto; they should be interpreted as including various restrictions under which the central coding unit 520c may be decoded differently from the other coding units 520b and 520d.
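The recursive splitting and the restriction on the middle coding unit described above can be sketched as a depth-limited recursion in which every (sub-)coding unit parses its own split type mode information; the parsing callback, the depth limit, and the choice of "no further split" as the restriction are assumptions for illustration.

```python
def decode_cu_tree(x, y, w, h, read_split_mode, depth=0, max_depth=6, no_split=False):
    """Recursively split a coding unit into leaf coding units (illustrative sketch).

    read_split_mode(x, y, w, h) is a hypothetical callback returning one of
    "NO_SPLIT", "BT_VER", "BT_HOR", "TT_VER", "TT_HOR" for the current unit.
    """
    mode = "NO_SPLIT" if (no_split or depth >= max_depth) else read_split_mode(x, y, w, h)
    if mode == "NO_SPLIT":
        return [(x, y, w, h)]
    if mode == "BT_VER":
        subs = [(x, y, w // 2, h, False), (x + w // 2, y, w // 2, h, False)]
    elif mode == "BT_HOR":
        subs = [(x, y, w, h // 2, False), (x, y + h // 2, w, h // 2, False)]
    elif mode == "TT_VER":   # 1:2:1 split of the width; the middle unit is restricted
        subs = [(x, y, w // 4, h, False),
                (x + w // 4, y, w // 2, h, True),
                (x + 3 * w // 4, y, w // 4, h, False)]
    else:                    # "TT_HOR": 1:2:1 split of the height; middle unit restricted
        subs = [(x, y, w, h // 4, False),
                (x, y + h // 4, w, h // 2, True),
                (x, y + 3 * h // 4, w, h // 4, False)]
    leaves = []
    for sx, sy, sw, sh, restricted in subs:
        leaves += decode_cu_tree(sx, sy, sw, sh, read_split_mode,
                                 depth + 1, max_depth, restricted)
    return leaves
```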
- the image decoding apparatus 100 may obtain split type mode information used to split a current coding unit at a predetermined position within the current coding unit.
- FIG. 6 is a diagram illustrating a method for an image decoding apparatus to determine a predetermined coding unit among odd coding units, according to an embodiment.
- split type mode information of the current coding units 600 and 650 may be obtained from a sample at a predetermined position among a plurality of samples included in the current coding units 600 and 650 (for example, a sample 640 or 690 located in the center).
- however, the predetermined position in the current coding unit 600 from which the split type mode information can be obtained should not be interpreted as being limited to the center position shown in FIG. 6; it should be interpreted as including various positions that may be included in the current coding unit 600 (e.g., top, bottom, left, right, top left, bottom left, top right, bottom right, etc.).
- the image decoding apparatus 100 may obtain the split type mode information from the predetermined position and determine whether the current coding unit is split into coding units of various shapes and sizes, or is not split.
- when the current coding unit is split into a plurality of coding units, the image decoding apparatus 100 may select one of those coding units.
- various methods may be used to select one of the plurality of coding units, and these methods are described below through various embodiments.
- the image decoding apparatus 100 may divide a current coding unit into a plurality of coding units and determine a coding unit at a predetermined location.
- the image decoding apparatus 100 may use information indicating the position of each of the odd number of coding units to determine the coding unit located in the middle of the odd number of coding units. Referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 or the current coding unit 650 to determine an odd number of coding units 620a, 620b, and 620c or an odd number of coding units 660a, 660b, and 660c.
- the image decoding apparatus 100 may use the information on the positions of the odd number of coding units 620a, 620b, and 620c or the odd number of coding units 660a, 660b, and 660c to determine the middle coding unit 620b or the middle coding unit 660b. For example, the image decoding apparatus 100 may determine the positions of the coding units 620a, 620b, and 620c based on information indicating the position of a predetermined sample included in each of the coding units 620a, 620b, and 620c, and thereby determine the coding unit 620b positioned in the middle.
- specifically, the image decoding apparatus 100 may determine the positions of the coding units 620a, 620b, and 620c based on information indicating the positions of the upper left samples 630a, 630b, and 630c of the coding units 620a, 620b, and 620c, and thereby determine the coding unit 620b positioned in the center.
- the information indicating the positions of the upper left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively, may include information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture. According to an embodiment, this information may also include information indicating the width or height of each of the coding units 620a, 620b, and 620c included in the current coding unit 600.
- the width or height may correspond to information indicating a difference between the coordinates of the coding units 620a, 620b, and 620c within the picture. That is, the image decoding apparatus 100 may determine the coding unit 620b positioned in the center by directly using the information on the positions or coordinates of the coding units 620a, 620b, and 620c within the picture, or by using the information on the width or height of each coding unit corresponding to the difference between coordinates.
- information indicating the position of the upper left sample 630a of the upper coding unit 620a may represent the (xa, ya) coordinates, information indicating the position of the upper left sample 630b of the middle coding unit 620b may represent the (xb, yb) coordinates, and information indicating the position of the upper left sample 630c of the lower coding unit 620c may represent the (xc, yc) coordinates.
- the image decoding apparatus 100 may determine the center coding unit 620b by using coordinates of the upper left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively.
- for example, the coding unit 620b containing (xb, yb), which is the coordinates of the sample 630b located in the center, may be determined as the coding unit positioned in the middle of the coding units 620a, 620b, and 620c determined by splitting the current coding unit 600.
- however, the coordinates indicating the positions of the upper left samples 630a, 630b, and 630c may be coordinates indicating absolute positions within the picture; furthermore, (dxb, dyb) coordinates, which indicate the relative position of the upper left sample 630b of the center coding unit 620b with respect to the position of the upper left sample 630a of the upper coding unit 620a, and (dxc, dyc) coordinates, which indicate the relative position of the upper left sample 630c of the lower coding unit 620c with respect to that same position, may also be used.
- in addition, the method of determining a coding unit at a predetermined position by using the coordinates of a sample should not be interpreted as limited to the method described above, and should be interpreted as including various arithmetic methods capable of using the coordinates of the sample.
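- As an informal illustration of the coordinate-based selection above, the sketch below picks the middle coding unit of an odd split from the upper-left sample coordinates; the helper name and data layout are assumptions:

```python
# A small sketch of picking the middle coding unit of an odd split from the
# coordinates of the upper-left samples. Names and layout are assumptions.
def middle_coding_unit(upper_left_coords):
    """upper_left_coords: list of (x, y) tuples, one per coding unit."""
    ordered = sorted(range(len(upper_left_coords)),
                     key=lambda i: (upper_left_coords[i][1], upper_left_coords[i][0]))
    return ordered[len(ordered) // 2]   # index of the unit whose sample sorts in the middle

# Three units stacked vertically with upper-left samples (xa, ya), (xb, yb), (xc, yc):
coords = [(0, 0), (0, 16), (0, 48)]
print(middle_coding_unit(coords))       # -> 1 (the unit containing (xb, yb))
```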
- the image decoding apparatus 100 may split the current coding unit 600 into a plurality of coding units 620a, 620b, and 620c, and may select a coding unit from among the coding units 620a, 620b, and 620c according to a predetermined criterion. For example, the image decoding apparatus 100 may select the coding unit 620b whose size differs from that of the others among the coding units 620a, 620b, and 620c.
- the image decoding apparatus 100 may determine the width or height of each of the coding units 620a, 620b, and 620c by using the (xa, ya) coordinates indicating the position of the upper left sample 630a of the upper coding unit 620a, the (xb, yb) coordinates indicating the position of the upper left sample 630b of the center coding unit 620b, and the (xc, yc) coordinates indicating the position of the upper left sample 630c of the lower coding unit 620c.
- the image decoding apparatus 100 may determine the size of each of the coding units 620a, 620b, and 620c using (xa, ya), (xb, yb), and (xc, yc), the coordinates representing the positions of the coding units 620a, 620b, and 620c.
- the image decoding apparatus 100 may determine the width of the upper coding unit 620a as the width of the current coding unit 600.
- the image decoding apparatus 100 may determine the height of the upper coding unit 620a as yb-ya.
- the image decoding apparatus 100 may determine the width of the center coding unit 620b as the width of the current coding unit 600.
- the image decoding apparatus 100 may determine the height of the central coding unit 620b as yc-yb. According to an embodiment, the image decoding apparatus 100 may determine the width or height of the lower coding unit using the width or height of the current coding unit and the widths and heights of the upper coding unit 620a and the center coding unit 620b. The image decoding apparatus 100 may determine a coding unit having a size different from the other coding units based on the determined widths and heights of the coding units 620a, 620b, and 620c.
- referring to FIG. 6, the image decoding apparatus 100 may determine the coding unit 620b, whose size differs from that of the upper coding unit 620a and the lower coding unit 620c, as the coding unit at the predetermined position.
- however, the above process of determining a coding unit whose size differs from that of the other coding units is only an example of determining a coding unit at a predetermined position using coding unit sizes derived from sample coordinates; various processes of determining a coding unit at a predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
- the image decoding apparatus 100 may determine the width or height of each of the coding units 660a, 660b, and 660c by using the (xd, yd) coordinates indicating the position of the upper left sample 670a of the left coding unit 660a, the (xe, ye) coordinates indicating the position of the upper left sample 670b of the center coding unit 660b, and the (xf, yf) coordinates indicating the position of the upper left sample 670c of the right coding unit 660c.
- the image decoding apparatus 100 may determine the size of each of the coding units 660a, 660b, and 660c using (xd, yd), (xe, ye), and (xf, yf), the coordinates representing the positions of the coding units 660a, 660b, and 660c.
- the image decoding apparatus 100 may determine the width of the left coding unit 660a as xe-xd.
- the image decoding apparatus 100 may determine the height of the left coding unit 660a as the height of the current coding unit 650.
- the image decoding apparatus 100 may determine the width of the center coding unit 660b as xf-xe.
- the image decoding apparatus 100 may determine the height of the center coding unit 660b as the height of the current coding unit 650.
- the width or height of the right coding unit 660c may be determined using the width or height of the current coding unit 650 and the widths and heights of the left coding unit 660a and the center coding unit 660b.
- the image decoding apparatus 100 may determine a coding unit having a size different from other coding units based on the determined width and height of the coding units 660a, 660b, and 660c. Referring to FIG. 6, the image decoding apparatus 100 may determine a coding unit 660b having a size different from the size of the left coding unit 660a and the right coding unit 660c as the coding unit at a predetermined position.
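- The size comparison above can be illustrated informally as follows; the widths are derived from the upper-left x-coordinates and the current coding unit's width, and the helper name is an assumption:

```python
# A sketch of picking the unit whose size differs from the others, with widths
# derived from the upper-left x-coordinates (xd, xe, xf). Names are assumptions.
def pick_differently_sized(xd, xe, xf, current_width):
    widths = [xe - xd, xf - xe, current_width - (xf - xd)]
    for i, w in enumerate(widths):
        if widths.count(w) == 1:        # the one width that appears only once
            return i, w
    return None                          # all sizes equal: nothing to single out

# Left, centre and right units of a 64-wide current coding unit:
print(pick_differently_sized(0, 16, 48, 64))   # -> (1, 32): the centre unit 660b
```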
- however, the position of the sample considered in order to determine the position of a coding unit should not be interpreted as limited to the upper left sample described above; information on the position of an arbitrary sample included in the coding unit may be used.
- the image decoding apparatus 100 may select a coding unit at a predetermined position from among the odd number of coding units determined by splitting the current coding unit, in consideration of the shape of the current coding unit. For example, if the current coding unit has a non-square shape whose width is longer than its height, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the horizontal direction. That is, the image decoding apparatus 100 may determine one of the coding units whose positions differ in the horizontal direction and place a restriction on that coding unit. If the current coding unit has a non-square shape whose height is longer than its width, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the vertical direction. That is, the image decoding apparatus 100 may determine one of the coding units whose positions differ in the vertical direction and place a restriction on that coding unit.
- the image decoding apparatus 100 may use information indicating a location of each of the even number of coding units to determine a coding unit of a predetermined position among even number of coding units.
- the image decoding apparatus 100 may determine the even number of coding units by dividing the current coding unit (binary splitting), and may determine the coding unit at a predetermined position by using information on the positions of the even number of coding units.
- a detailed process for this may be a process corresponding to a process of determining a coding unit at a predetermined location (eg, a center location) among the odd numbered coding units described above in FIG. 6, and thus will be omitted.
- when the current coding unit is split into a plurality of coding units, predetermined information about a coding unit at a predetermined position may be used during the splitting process in order to determine the coding unit at the predetermined position among the plurality of coding units. For example, in order to determine the coding unit located in the middle among the coding units into which the current coding unit is split, the image decoding apparatus 100 may use at least one of the block shape information and the split type mode information stored in a sample included in the middle coding unit during the splitting process.
- referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 into a plurality of coding units 620a, 620b, and 620c based on the split type mode information, and may determine the coding unit 620b positioned in the middle of the plurality of coding units 620a, 620b, and 620c. Furthermore, the image decoding apparatus 100 may determine the coding unit 620b positioned in the center in consideration of the position from which the split type mode information is obtained. That is, the split type mode information of the current coding unit 600 may be obtained from the sample 640 positioned in the center of the current coding unit 600, and when the current coding unit 600 is split into the plurality of coding units 620a, 620b, and 620c based on the split type mode information, the coding unit 620b including the sample 640 may be determined as the coding unit positioned at the center.
- information used to determine the centrally located coding unit should not be interpreted as being limited to the split mode information, and various types of information may be used in the process of determining the centrally located coding unit.
- predetermined information for identifying a coding unit at a predetermined location may be obtained from a predetermined sample included in a coding unit to be determined.
- referring to FIG. 6, the image decoding apparatus 100 may determine a sample at a predetermined position in consideration of the block shape of the current coding unit 600, and among the plurality of coding units (e.g., the plurality of coding units 620a, 620b, and 620c) determined by splitting the current coding unit 600, may determine the coding unit 620b that includes a sample from which predetermined information (e.g., split type mode information) can be obtained, and place a predetermined restriction on it.
- the image decoding apparatus 100 may determine a sample 640 located in the center of the current coding unit 600 as a sample from which predetermined information may be obtained, and the image decoding apparatus 100 may place a predetermined limit in the decoding process of the coding unit 620b including the sample 640.
- however, the position of the sample from which the predetermined information can be obtained should not be interpreted as limited to the position described above; it may be interpreted as any sample at an arbitrary position included in the coding unit 620b to be determined for the purpose of imposing the restriction.
- the location of a sample from which predetermined information can be obtained may be determined according to the shape of the current coding unit 600.
- for example, whether the shape of the current coding unit is square or non-square may be determined from the block shape information, and the position of the sample from which the predetermined information can be obtained may be determined according to that shape.
- for example, the image decoding apparatus 100 may use at least one of the information about the width and the height of the current coding unit to determine a sample positioned on a boundary that halves at least one of the width and the height of the current coding unit as the sample from which the predetermined information can be obtained.
- as another example, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary that halves the long side of the current coding unit as the sample from which the predetermined information can be obtained.
- the image decoding apparatus 100 may use split type mode information to determine a coding unit at a predetermined position among the plurality of coding units.
- the image decoding apparatus 100 may obtain the split type mode information from a sample at a predetermined position included in a coding unit, and may split the plurality of coding units generated by splitting the current coding unit by using the split type mode information obtained from the sample at the predetermined position included in each of the plurality of coding units. That is, the coding units may be recursively split by using the split type mode information obtained from the sample at the predetermined position included in each coding unit. Since the recursive splitting process of a coding unit has been described above with reference to FIG. 5, a detailed description is omitted.
- the image decoding apparatus 100 may determine at least one coding unit by splitting the current coding unit, and may determine the order in which the at least one coding unit is decoded according to a predetermined block (e.g., the current coding unit).
- FIG. 7 illustrates an order in which a plurality of coding units are processed when a plurality of coding units are determined by dividing a current coding unit by an image decoding apparatus according to an embodiment.
- referring to FIG. 7, the image decoding apparatus 100 may determine the second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction according to the split type mode information, determine the second coding units 730a and 730b by splitting the first coding unit 700 in the horizontal direction, or determine the second coding units 750a, 750b, 750c, and 750d by splitting the first coding unit 700 in the vertical and horizontal directions.
- the image decoding apparatus 100 may determine that the second coding units 710a and 710b, determined by splitting the first coding unit 700 in the vertical direction, are processed in the horizontal direction 710c.
- the image decoding apparatus 100 may determine a processing order of the second coding units 730a and 730b determined by dividing the first coding unit 700 in the horizontal direction as the vertical direction 730c.
- the image decoding apparatus 100 may determine that the second coding units 750a, 750b, 750c, and 750d, determined by splitting the first coding unit 700 in the vertical and horizontal directions, are processed according to a predetermined order in which the coding units located in one row are processed and then the coding units located in the next row are processed (e.g., a raster scan order or a Z-scan order 750e).
- the image decoding apparatus 100 may recursively split coding units.
- referring to FIG. 7, the image decoding apparatus 100 may determine a plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d by splitting the first coding unit 700, and may recursively split each of the determined coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d.
- a method of dividing the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may correspond to a method of dividing the first coding unit 700.
- the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be independently divided into a plurality of coding units.
- for example, the image decoding apparatus 100 may determine the second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction, and may further determine, independently for each of the second coding units 710a and 710b, whether to split it or not.
- the image decoding apparatus 100 may split the left second coding unit 710a in the horizontal direction into the third coding units 720a and 720b, and may not split the right second coding unit 710b.
- the processing order of coding units may be determined based on a splitting process of coding units.
- the processing order of the split coding units may be determined based on the processing order of the coding units immediately before being split.
- the image decoding apparatus 100 may determine the order in which the third coding units 720a and 720b, determined by splitting the left second coding unit 710a, are processed, independently of the right second coding unit 710b. Since the left second coding unit 710a is split in the horizontal direction to determine the third coding units 720a and 720b, the third coding units 720a and 720b may be processed in the vertical direction 720c.
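- The order rules described for FIG. 7 can be sketched informally as follows; the split-kind labels and dictionary layout are assumptions:

```python
# A sketch of the order rule: sub-units of a vertical split are visited
# left-to-right, sub-units of a horizontal split top-to-bottom, and a quad
# split follows a Z-scan (row by row). Names are illustrative assumptions.
def processing_order(sub_units, split_kind):
    if split_kind == 'VER':                       # side-by-side -> horizontal order
        return sorted(sub_units, key=lambda u: u['x'])
    if split_kind == 'HOR':                       # stacked -> vertical order
        return sorted(sub_units, key=lambda u: u['y'])
    return sorted(sub_units, key=lambda u: (u['y'], u['x']))   # quad -> Z-scan

quad = [{'x': 32, 'y': 32}, {'x': 0, 'y': 0}, {'x': 32, 'y': 0}, {'x': 0, 'y': 32}]
print(processing_order(quad, 'QUAD'))
# -> [{'x': 0, 'y': 0}, {'x': 32, 'y': 0}, {'x': 0, 'y': 32}, {'x': 32, 'y': 32}]
```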
- FIG. 8 illustrates a process of determining that a current coding unit is divided into odd number of coding units when coding units cannot be processed in a predetermined order, according to an embodiment.
- the image decoding apparatus 100 may determine that the current coding unit is divided into odd number of coding units based on the obtained split type mode information.
- referring to FIG. 8, a square first coding unit 800 may be split into non-square second coding units 810a and 810b, and the second coding units 810a and 810b may each be independently split into third coding units 820a, 820b, 820c, 820d, and 820e.
- the image decoding apparatus 100 may determine the plurality of third coding units 820a and 820b by splitting the left second coding unit 810a in the horizontal direction, and may split the right second coding unit 810b into the odd number of third coding units 820c, 820d, and 820e.
- the image decoding apparatus 100 may determine whether a coding unit split into an odd number exists by determining whether the third coding units 820a, 820b, 820c, 820d, and 820e can be processed in a predetermined order. Referring to FIG. 8, the image decoding apparatus 100 may determine the third coding units 820a, 820b, 820c, 820d, and 820e by recursively splitting the first coding unit 800. Based on at least one of the block shape information and the split type mode information, the image decoding apparatus 100 may determine whether the first coding unit 800, the second coding units 810a and 810b, or the third coding units 820a, 820b, 820c, 820d, and 820e are split into an odd number of coding units. For example, the coding unit positioned on the right among the second coding units 810a and 810b may be split into the odd number of third coding units 820c, 820d, and 820e.
- the order in which the plurality of coding units included in the first coding unit 800 are processed may be a predetermined order (for example, a Z-scan order 830), and the image decoding apparatus 100 may determine whether the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into an odd number, satisfy a condition under which they can be processed according to the predetermined order.
- the image decoding apparatus 100 may determine whether the third coding units 820a, 820b, 820c, 820d, and 820e included in the first coding unit 800 satisfy the condition under which they can be processed in the predetermined order. The condition is related to whether at least one of the width and the height of each of the second coding units 810a and 810b is halved along the boundaries of the third coding units 820a, 820b, 820c, 820d, and 820e. For example, the third coding units 820a and 820b, determined by halving the height of the non-square left second coding unit 810a, may satisfy the condition.
- however, because the boundaries of the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into three coding units, do not halve the width or height of the right second coding unit 810b, it may be determined that the third coding units 820c, 820d, and 820e do not satisfy the condition. When the condition is not satisfied, the image decoding apparatus 100 may determine that the scan order is disconnected, and based on this determination, determine that the right second coding unit 810b is split into an odd number of coding units.
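- An informal sketch of the processability condition above: the sub-units are considered processable in the predetermined order only if every internal boundary halves the parent's width or height (the data layout is an assumption):

```python
# A sketch of the condition check. Dictionary layout and names are assumptions.
def boundaries_halve_parent(parent, subs):
    for a, b in zip(subs, subs[1:]):
        if a['y'] != b['y']:                       # horizontal internal boundary
            if (b['y'] - parent['y']) * 2 != parent['h']:
                return False
        else:                                      # vertical internal boundary
            if (b['x'] - parent['x']) * 2 != parent['w']:
                return False
    return True

parent = {'x': 0, 'y': 0, 'w': 32, 'h': 64}
binary = [{'x': 0, 'y': 0}, {'x': 0, 'y': 32}]
odd    = [{'x': 0, 'y': 0}, {'x': 0, 'y': 16}, {'x': 0, 'y': 48}]
print(boundaries_halve_parent(parent, binary))   # -> True:  condition satisfied
print(boundaries_halve_parent(parent, odd))      # -> False: odd split, order disconnected
```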
- when a coding unit is split into an odd number of coding units, a predetermined restriction may be placed on the coding unit at a predetermined position among the split coding units; since this has been described above through embodiments, a detailed description is omitted.
- FIG. 9 illustrates a process in which an image decoding apparatus determines at least one coding unit by dividing a first coding unit, according to an embodiment.
- the image decoding apparatus 100 may split the first coding unit 900 on the basis of the split mode information obtained through the receiver 110.
- the first coding unit 900 having a square shape may be divided into four coding units having a square shape or may be divided into a plurality of coding units having a non-square shape.
- referring to FIG. 9, the image decoding apparatus 100 may split the first coding unit 900 into a plurality of non-square coding units.
- the image decoding apparatus 100 may split the square first coding unit 900 into an odd number of coding units, namely the second coding units 910a, 910b, and 910c determined by splitting in the vertical direction or the second coding units 920a, 920b, and 920c determined by splitting in the horizontal direction.
- the image decoding apparatus 100 may determine whether the second coding units 910a, 910b, 910c, 920a, 920b, and 920c included in the first coding unit 900 satisfy the condition under which they can be processed in a predetermined order; the condition is related to whether at least one of the width and the height of the first coding unit 900 is halved along the boundaries of the second coding units 910a, 910b, 910c, 920a, 920b, and 920c. Referring to FIG. 9, since the boundaries of the second coding units 910a, 910b, and 910c, determined by splitting the square first coding unit 900 in the vertical direction, cannot halve the width of the first coding unit 900, it may be determined that the condition is not satisfied.
- the image decoding apparatus 100 may determine that the scan order is disconnected, and determine that the first coding unit 900 is divided into odd number of coding units based on the determination result.
- when a coding unit is split into an odd number of coding units, a predetermined restriction may be placed on the coding unit at a predetermined position among the split coding units; since this has been described above through embodiments, a detailed description is omitted.
- the image decoding apparatus 100 may determine various types of coding units by dividing the first coding unit.
- the image decoding apparatus 100 may split the square first coding unit 900 and the non-square first coding unit 930 or 950 into coding units of various shapes.
- FIG. 10 illustrates that, according to an embodiment, when a non-square second coding unit determined by splitting a first coding unit satisfies a predetermined condition, the shapes into which the second coding unit can be split are limited.
- the image decoding apparatus 100 may determine, based on the split type mode information obtained through the receiver 110, that the square first coding unit 1000 is split into the non-square second coding units 1010a, 1010b, 1020a, and 1020b.
- the second coding units 1010a, 1010b, 1020a, and 1020b may be independently split. Accordingly, the image decoding apparatus 100 may determine that the second coding units 1010a, 1010b, 1020a, and 1020b are split into a plurality of coding units or not split based on split mode information related to each of the second coding units 1010a, 1010b, 1020a, and 1020b.
- the image decoding apparatus 100 may split the non-square left second coding unit 1010a, determined by splitting the first coding unit 1000 in the vertical direction, in the horizontal direction to determine the third coding units 1012a and 1012b.
- in this case, when the left second coding unit 1010a has been split in the horizontal direction, the image decoding apparatus 100 may restrict the right second coding unit 1010b so that it cannot be split in the horizontal direction, i.e., in the same direction as the left second coding unit 1010a.
- if the right second coding unit 1010b were split in the same direction to determine the third coding units 1014a and 1014b, the left second coding unit 1010a and the right second coding unit 1010b would each be split independently and the third coding units 1012a, 1012b, 1014a, and 1014b would be determined.
- however, this is the same result as the image decoding apparatus 100 splitting the first coding unit 1000 into four square second coding units 1030a, 1030b, 1030c, and 1030d based on the split type mode information, and may be inefficient in terms of image decoding.
- the image decoding apparatus 100 may split the non-square second coding unit 1020a or 1020b, determined by splitting the first coding unit 1000 in the horizontal direction, in the vertical direction to determine the third coding units 1022a, 1022b, 1024a, and 1024b.
- however, when the image decoding apparatus 100 splits one of the second coding units (for example, the upper second coding unit 1020a) in the vertical direction, the other second coding unit (for example, the lower second coding unit 1020b) may, for the reason described above, be restricted so that it cannot be split in the vertical direction, i.e., in the same direction in which the upper second coding unit 1020a was split.
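- The split-direction restriction of FIG. 10 can be sketched informally as follows; the helper and its labels are assumptions:

```python
# A small sketch of the sibling restriction: once one sibling second coding
# unit has been split in a given direction, the other sibling may not be split
# in that same direction. Labels and the helper are illustrative assumptions.
def allowed_directions_for_sibling(sibling_direction):
    return {'HOR', 'VER'} - {sibling_direction}

print(allowed_directions_for_sibling('HOR'))   # left split 'HOR' -> right may only use {'VER'}
print(allowed_directions_for_sibling('VER'))   # upper split 'VER' -> lower may only use {'HOR'}
```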
- FIG. 11 illustrates a process in which an image decoding apparatus splits a square coding unit when it is not possible to indicate that split mode information is split into four square coding units, according to an embodiment.
- the image decoding apparatus 100 may determine the second coding units 1110a, 1110b, 1120a, 1120b, etc. by dividing the first coding unit 1100 based on the split mode information.
- the split type mode information may include information on various types in which a coding unit can be split, but information on various types may not include information for splitting into four coding units having a square shape.
- the image decoding apparatus 100 cannot split the square-shaped first coding unit 1100 into four square-shaped second coding units 1130a, 1130b, 1130c, and 1130d.
- the image decoding apparatus 100 may determine the second coding units 1110a, 1110b, 1120a, 1120b, etc. of a non-square shape based on the split mode information.
- the image decoding apparatus 100 may independently divide the second coding units 1110a, 1110b, 1120a, 1120b, etc. of a non-square shape.
- each of the second coding units 1110a, 1110b, 1120a, 1120b, etc. may be split in a predetermined order through a recursive method, and this splitting method may correspond to the method in which the first coding unit 1100 is split based on the split type mode information.
- for example, the image decoding apparatus 100 may determine the square third coding units 1112a and 1112b by splitting the left second coding unit 1110a in the horizontal direction, and may determine the square third coding units 1114a and 1114b by splitting the right second coding unit 1110b in the horizontal direction.
- furthermore, the image decoding apparatus 100 may determine the square third coding units 1116a, 1116b, 1116c, and 1116d by splitting both the left second coding unit 1110a and the right second coding unit 1110b in the horizontal direction.
- in this case, coding units may be determined in the same form as when the first coding unit 1100 is split into four square second coding units 1130a, 1130b, 1130c, and 1130d.
- as another example, the image decoding apparatus 100 may determine the square third coding units 1122a and 1122b by splitting the upper second coding unit 1120a in the vertical direction, and may determine the square third coding units 1124a and 1124b by splitting the lower second coding unit 1120b in the vertical direction. Furthermore, the image decoding apparatus 100 may determine the square third coding units 1126a, 1126b, 1126c, and 1126d by splitting both the upper second coding unit 1120a and the lower second coding unit 1120b in the vertical direction. In this case, coding units may be determined in the same form as when the first coding unit 1100 is split into four square second coding units 1130a, 1130b, 1130c, and 1130d.
- FIG. 12 illustrates that a processing order between a plurality of coding units may vary according to a splitting process of a coding unit according to an embodiment.
- the image decoding apparatus 100 may split the first coding unit 1200 based on split type mode information.
- when the block shape is square and the split type mode information indicates that the first coding unit 1200 is split in at least one of the horizontal and vertical directions, the image decoding apparatus 100 may split the first coding unit 1200 to determine second coding units (e.g., 1210a, 1210b, 1220a, 1220b, etc.). Referring to FIG. 12, the non-square second coding units 1210a, 1210b, 1220a, and 1220b, determined by splitting the first coding unit 1200 only in the horizontal direction or only in the vertical direction, may each be split independently based on the split type mode information for each of them.
- the image decoding apparatus 100 may split each of the second coding units 1210a and 1210b, generated by splitting the first coding unit 1200 in the vertical direction, in the horizontal direction to determine the third coding units 1216a, 1216b, 1216c, and 1216d, and may split each of the second coding units 1220a and 1220b, generated by splitting the first coding unit 1200 in the horizontal direction, in the vertical direction to determine the third coding units 1226a, 1226b, 1226c, and 1226d. Since the splitting process of the second coding units 1210a, 1210b, 1220a, and 1220b has been described above with reference to FIG. 11, a detailed description is omitted.
- the image decoding apparatus 100 may process coding units in a predetermined order. Since the features of processing coding units according to a predetermined order have been described above with reference to FIG. 7, a detailed description is omitted. Referring to FIG. 12, the image decoding apparatus 100 may determine four square third coding units 1216a, 1216b, 1216c, and 1216d, or 1226a, 1226b, 1226c, and 1226d, by splitting the square first coding unit 1200.
- the image decoding apparatus 100 may determine the processing order of the third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d according to the form in which the first coding unit 1200 is split.
- the image decoding apparatus 100 determines the third coding units 1216a, 1216b, 1216c, and 1216d by dividing the second coding units 1210a and 1210b generated by being split in the vertical direction, respectively, in the horizontal direction.
- the image decoding apparatus 100 may process the third coding units 1216a, 1216b, 1216c, and 1216d according to an order 1217 in which the third coding units 1216a and 1216c included in the left second coding unit 1210a are first processed in the vertical direction, and then the third coding units 1216b and 1216d included in the right second coding unit 1210b are processed in the vertical direction.
- the image decoding apparatus 100 determines the third coding units 1226a, 1226b, 1226c, and 1226d by dividing the second coding units 1220a and 1220b generated by being split in a horizontal direction in a vertical direction, respectively.
- the image decoding apparatus 100 may process the third coding units 1226a, 1226b, 1226c, and 1226d according to an order 1227 in which the third coding units 1226a and 1226b included in the upper second coding unit 1220a are first processed in the horizontal direction, and then the third coding units 1226c and 1226d included in the lower second coding unit 1220b are processed in the horizontal direction.
- referring to FIG. 12, the second coding units 1210a, 1210b, 1220a, and 1220b may each be split to determine the square third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d.
- the second coding units 1210a and 1210b, determined by splitting in the vertical direction, and the second coding units 1220a and 1220b, determined by splitting in the horizontal direction, are split in different forms, but according to the third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d determined afterwards, the first coding unit 1200 is eventually split into coding units of the same shape.
- accordingly, the image decoding apparatus 100 may recursively split coding units through different processes based on the split type mode information and consequently determine coding units of the same shape, yet process the plurality of coding units determined in the same shape in different orders.
- FIG. 13 illustrates a process in which a depth of a coding unit is determined according to a change in a shape and size of a coding unit when a coding unit is recursively split to determine a plurality of coding units according to an embodiment.
- the image decoding apparatus 100 may determine a depth of a coding unit according to a predetermined criterion.
- the predetermined criterion may be the length of the long side of the coding unit.
- for example, when the length of the long side of the current coding unit is 1/2^n (n > 0) times the length of the long side of the coding unit before splitting, the depth of the current coding unit may be determined to be increased by n relative to the depth of the coding unit before splitting.
- a coding unit having an increased depth is expressed as a coding unit having a lower depth.
- referring to FIG. 13, the image decoding apparatus 100 may split the square first coding unit 1300 to determine a second coding unit 1302 and a third coding unit 1304 of lower depths. If the size of the square first coding unit 1300 is 2Nx2N, the second coding unit 1302, determined by halving the width and height of the first coding unit 1300, may have a size of NxN. Furthermore, the third coding unit 1304, determined by halving the width and height of the second coding unit 1302, may have a size of N/2xN/2.
- in this case, the width and height of the third coding unit 1304 are 1/4 times those of the first coding unit 1300. If the depth of the first coding unit 1300 is D, the depth of the second coding unit 1302, whose width and height are 1/2 times those of the first coding unit 1300, may be D+1, and the depth of the third coding unit 1304, whose width and height are 1/4 times those of the first coding unit 1300, may be D+2.
- based on block shape information indicating a non-square shape (for example, block shape information of '1: NS_VER' indicating a non-square shape whose height is longer than its width, or '2: NS_HOR' indicating a non-square shape whose width is longer than its height), the image decoding apparatus 100 may split the non-square first coding unit 1310 or 1320 to determine a second coding unit 1312 or 1322 and a third coding unit 1314 or 1324 of lower depths.
- the image decoding apparatus 100 may determine a second coding unit (e.g., 1302, 1312, 1322, etc.) by splitting at least one of the width and the height of the Nx2N first coding unit 1310. That is, the image decoding apparatus 100 may split the first coding unit 1310 in the horizontal direction to determine the NxN second coding unit 1302 or the NxN/2 second coding unit 1322, or may split it in the horizontal and vertical directions to determine the N/2xN second coding unit 1312.
- the image decoding apparatus 100 may determine a second coding unit (e.g., 1302, 1312, 1322, etc.) by splitting at least one of the width and the height of the 2NxN first coding unit 1320. That is, the image decoding apparatus 100 may split the first coding unit 1320 in the vertical direction to determine the NxN second coding unit 1302 or the N/2xN second coding unit 1312, or may split it in the horizontal and vertical directions to determine the NxN/2 second coding unit 1322.
- the image decoding apparatus 100 may determine a third coding unit (e.g., 1304, 1314, 1324, etc.) by splitting at least one of the width and the height of the NxN second coding unit 1302. That is, the image decoding apparatus 100 may split the second coding unit 1302 in the vertical and horizontal directions to determine the N/2xN/2 third coding unit 1304, the N/4xN/2 third coding unit 1314, or the N/2xN/4 third coding unit 1324.
- the image decoding apparatus 100 may also determine a third coding unit (e.g., 1304, 1314, 1324, etc.) by splitting at least one of the width and the height of the N/2xN second coding unit 1312. That is, the image decoding apparatus 100 may split the second coding unit 1312 in the horizontal direction to determine the N/2xN/2 third coding unit 1304 or the N/2xN/4 third coding unit 1324, or may split it in the vertical and horizontal directions to determine the N/4xN/2 third coding unit 1314.
- the image decoding apparatus 100 may also determine a third coding unit (e.g., 1304, 1314, 1324, etc.) by splitting at least one of the width and the height of the NxN/2 second coding unit 1322. That is, the image decoding apparatus 100 may split the second coding unit 1322 in the vertical direction to determine the N/2xN/2 third coding unit 1304 or the N/4xN/2 third coding unit 1314, or may split it in the vertical and horizontal directions to determine the N/2xN/4 third coding unit 1324.
- the image decoding apparatus 100 may divide a square coding unit (eg, 1300, 1302, 1304) in a horizontal direction or a vertical direction.
- for example, the 2Nx2N first coding unit 1300 may be split in the vertical direction to determine the Nx2N first coding unit 1310, or split in the horizontal direction to determine the 2NxN first coding unit 1320.
- since the length of the long side does not change, the depth of the coding unit determined by splitting the 2Nx2N first coding unit 1300 in the horizontal or vertical direction may be the same as the depth of the first coding unit 1300.
- the width and height of the third coding unit 1314 or 1324 may be 1/4 times that of the first coding unit 1310 or 1320.
- the depth of the second coding unit 1312 or 1322 that is 1/2 times the width and height of the first coding unit 1310 or 1320 may be D+1
- the depth of the third coding unit 1314 or 1324 that is 1/4 times the width and height of the first coding unit 1310 or 1320 may be D+2.
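- The depth convention above (depth D for 2Nx2N, D+1 when the long side is halved, D+2 when it is quartered) can be illustrated with a short sketch; the function is an assumption for illustration only:

```python
# A sketch of depth derived from the long-side length, under the convention
# stated in the text. Helper name and values are illustrative assumptions.
from math import log2

def depth_of(width, height, base_long_side, base_depth=0):
    return base_depth + int(log2(base_long_side / max(width, height)))

N = 16
print(depth_of(2 * N, 2 * N, 2 * N))      # 2Nx2N   -> 0 (D)
print(depth_of(N, 2 * N, 2 * N))          # Nx2N    -> 0 (same long side, same depth)
print(depth_of(N, N, 2 * N))              # NxN     -> 1 (D+1)
print(depth_of(N // 2, N // 2, 2 * N))    # N/2xN/2 -> 2 (D+2)
```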
- FIG. 14 illustrates depths that may be determined according to the shapes and sizes of coding units, and part indices (hereinafter referred to as PIDs) for classifying the coding units, according to an embodiment.
- the image decoding apparatus 100 may determine second coding units of various shapes by splitting the square first coding unit 1400. Referring to FIG. 14, the image decoding apparatus 100 may split the first coding unit 1400 in at least one of the vertical and horizontal directions according to the split type mode information to determine the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d. That is, the image decoding apparatus 100 may determine the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d based on the split type mode information for the first coding unit 1400.
- the depths of the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d, determined according to the split type mode information for the square first coding unit 1400, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first coding unit 1400 and the length of the long side of the non-square second coding units 1402a, 1402b, 1404a, and 1404b are the same, the first coding unit 1400 and the non-square second coding units 1402a, 1402b, 1404a, and 1404b may have the same depth D.
- in contrast, when the image decoding apparatus 100 splits the first coding unit 1400 into the four square second coding units 1406a, 1406b, 1406c, and 1406d based on the split type mode information, since the length of one side of the square second coding units 1406a, 1406b, 1406c, and 1406d is 1/2 times the length of one side of the first coding unit 1400, the depth of the second coding units 1406a, 1406b, 1406c, and 1406d may be D+1, one depth lower than the depth D of the first coding unit 1400.
- the image decoding apparatus 100 may split the first coding unit 1410, whose height is longer than its width, in the horizontal direction according to the split type mode information into a plurality of second coding units 1412a, 1412b, 1414a, 1414b, and 1414c. According to an embodiment, the image decoding apparatus 100 may split the first coding unit 1420, whose width is longer than its height, in the vertical direction according to the split type mode information into a plurality of second coding units 1422a, 1422b, 1424a, 1424b, and 1424c.
- the depths of the second coding units 1412a, 1412b, 1414a, 1414b, 1414c, 1422a, 1422b, 1424a, 1424b, and 1424c, determined according to the split type mode information for the non-square first coding unit 1410 or 1420, may be determined based on the lengths of their long sides.
- for example, since the length of one side of the square second coding units 1412a and 1412b is 1/2 times the length of one side of the non-square first coding unit 1410, whose height is longer than its width, the depth of the square second coding units 1412a and 1412b is D+1, one depth lower than the depth D of the non-square first coding unit 1410.
- the image decoding apparatus 100 may divide the first coding unit 1410 of the non-square shape into odd number of second coding units 1414a, 1414b, and 1414c based on the split mode information.
- the odd number of second coding units 1414a, 1414b, and 1414c may include second coding units 1414a and 1414c having a non-square shape and a second coding unit 1414b having a square shape.
- in this case, since the length of the long side of the non-square second coding units 1414a and 1414c and the length of one side of the square second coding unit 1414b are 1/2 times the length of one side of the first coding unit 1410, the depth of the second coding units 1414a, 1414b, and 1414c may be D+1, one depth lower than the depth D of the first coding unit 1410.
- the image decoding apparatus 100 may determine the depths of the coding units related to the non-square first coding unit 1420, whose width is longer than its height, in a manner corresponding to the method of determining the depths of the coding units related to the first coding unit 1410.
- the coding unit 1414b located in the middle of the coding units 1414a, 1414b, and 1414c split into an odd number has the same width as the other coding units 1414a and 1414c but a different height, which may be twice the height of the coding units 1414a and 1414c. That is, in this case, the middle coding unit 1414b may include two of the other coding units 1414a or 1414c.
- the image decoding apparatus 100 may determine whether or not the odd-numbered coding units are of the same size based on whether there is a discontinuity in an index for distinguishing between the divided coding units.
- the image decoding apparatus 100 may determine whether the current coding unit is split into a specific split type based on the values of the indices for classifying the plurality of coding units determined by splitting it. Referring to FIG. 14, the image decoding apparatus 100 may determine the even number of coding units 1412a and 1412b, or the odd number of coding units 1414a, 1414b, and 1414c, by splitting the rectangular first coding unit 1410, whose height is longer than its width. The image decoding apparatus 100 may use the index (PID) representing each coding unit to classify the plurality of coding units. According to an embodiment, the PID may be obtained from a sample (e.g., the upper left sample) at a predetermined position in each coding unit.
- the image decoding apparatus 100 may determine the coding unit at a predetermined position among the split coding units using the indices for classifying the coding units. According to an embodiment, when the split type mode information for the rectangular first coding unit 1410, whose height is longer than its width, indicates splitting into three coding units, the image decoding apparatus 100 may split the first coding unit 1410 into the three coding units 1414a, 1414b, and 1414c. The image decoding apparatus 100 may allocate an index to each of the three coding units 1414a, 1414b, and 1414c, and may compare the indices of the coding units in order to determine the middle coding unit among the coding units split into an odd number.
- based on the indices of the coding units, the image decoding apparatus 100 may determine the coding unit 1414b, whose index corresponds to the middle value among the indices, as the coding unit at the center position among the coding units determined by splitting the first coding unit 1410.
- the image decoding apparatus 100 may determine the indices based on a size ratio between the coding units.
- referring to FIG. 14, the coding unit 1414b generated by splitting the first coding unit 1410 may have the same width as the other coding units 1414a and 1414c but twice their height. In this case, if the index (PID) of the middle coding unit 1414b is 1, the coding unit 1414c positioned next in order may have an index of 3, increased by 2.
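- The index adjustment above can be illustrated informally: a unit whose height is a multiple of the base height advances the PID by that ratio, so heights N, 2N, N receive PIDs 0, 1, 3 (helper names are assumptions):

```python
# A sketch of the PID adjustment for an odd split with a double-height middle
# unit. The helper and its arguments are illustrative assumptions.
def assign_pids(heights, base_height):
    pids, pid = [], 0
    for h in heights:
        pids.append(pid)
        pid += h // base_height          # a double-height middle unit consumes two indices
    return pids

print(assign_pids([16, 32, 16], 16))     # -> [0, 1, 3]
```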
- in this case, when the index increases discontinuously as described above, the image decoding apparatus 100 may determine that the current coding unit is split into a plurality of coding units including a coding unit whose size differs from that of the other coding units.
- when the split type mode information indicates splitting into an odd number of coding units, the image decoding apparatus 100 may split the current coding unit into a form in which the coding unit at a predetermined position (for example, the middle coding unit) among the odd number of coding units differs in size from the other coding units.
- in this case, the image decoding apparatus 100 may determine the coding unit having the different size using the indices (PIDs) of the coding units.
- however, the above-described indices and the size or position of the coding unit at the predetermined position to be determined are specific examples for explaining an embodiment and should not be interpreted as limiting; they should be interpreted as allowing various indices and various positions and sizes of coding units to be used.
- the image decoding apparatus 100 may use a predetermined data unit in which recursive division of coding units is started.
- FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture, according to an embodiment.
- a predetermined data unit may be defined as a data unit in which a coding unit starts to be recursively split using split type mode information. That is, it may correspond to the coding unit of the highest depth used in the process of determining a plurality of coding units that split the current picture.
- a predetermined data unit will be referred to as a reference data unit.
- the reference data unit may represent a predetermined size and shape.
- the reference coding unit may include MxN samples.
- M and N may be equal to each other and may be integers expressed as powers of 2. That is, the reference data unit may have a square or non-square shape, and may later be split into an integer number of coding units.
- the image decoding apparatus 100 may divide a current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may split each of the plurality of reference data units into which the current picture is divided by using the split type mode information for that reference data unit. The splitting process of a reference data unit may correspond to a splitting process using a quad-tree structure.
- the image decoding apparatus 100 may determine in advance the minimum size that a reference data unit included in the current picture can have. Accordingly, the image decoding apparatus 100 may determine reference data units of various sizes equal to or greater than the minimum size, and may determine at least one coding unit using the split type mode information based on each determined reference data unit.
- the image decoding apparatus 100 may use a reference coding unit 1500 in a square shape or a reference coding unit 1502 in a non-square shape.
- the shape and size of a reference coding unit may be determined according to various data units that may include at least one reference coding unit (e.g., a sequence, a picture, a slice, a slice segment, a tile, a tile group, a maximum coding unit, etc.).
- the receiver 110 of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information about the shape of the reference coding unit and information about the size of the reference coding unit for each of the various data units.
- the process of determining at least one coding unit included in the square reference coding unit 1500 has been described above through the process in which the current coding unit 300 of FIG. 3 is split, and the process of determining at least one coding unit included in the non-square reference coding unit 1502 has been described above through the process in which the current coding unit 400 or 450 of FIG. 4 is split; therefore, a detailed description is omitted.
- in order to determine the size and shape of the reference coding unit according to certain data units that are predetermined based on a predetermined condition, the image decoding apparatus 100 may use an index for identifying the size and shape of the reference coding unit. That is, the receiver 110 may obtain from the bitstream, for each data unit that satisfies the predetermined condition (e.g., a data unit having a size less than or equal to a slice) among the various data units (e.g., sequence, picture, slice, slice segment, tile, tile group, maximum coding unit, etc.), only the index for identifying the size and shape of the reference coding unit, i.e., for each slice, slice segment, tile, tile group, and maximum coding unit.
- the image decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit that satisfies the predetermined condition by using the index.
- when the information on the shape of the reference coding unit and the information on the size of the reference coding unit are obtained and used from the bitstream for every small data unit, the bitstream utilization efficiency may be poor; therefore, instead of directly obtaining the information on the shape and the size of the reference coding unit, only the index may be obtained and used. In this case, at least one of the size and the shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be predetermined.
- the image decoding apparatus 100 may select at least one of the size and the shape of the predetermined reference coding unit according to the index, thereby determining at least one of the size and the shape of the reference coding unit included in the data unit used as the basis for obtaining the index.
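- An informal sketch of signalling only an index: the decoder maps the obtained index to a predetermined reference-coding-unit size and shape; the table contents are illustrative assumptions, not values from the disclosure:

```python
# A sketch of mapping a signalled index to a predetermined reference coding
# unit size/shape. Table contents are illustrative assumptions only.
REF_CU_BY_INDEX = {
    0: (16, 16),
    1: (32, 32),
    2: (64, 64),
    3: (64, 32),   # a non-square reference coding unit
}

def reference_cu_from_index(idx):
    return REF_CU_BY_INDEX[idx]

print(reference_cu_from_index(2))        # -> (64, 64)
```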
- the image decoding apparatus 100 may use at least one reference coding unit included in one largest coding unit. That is, at least one reference coding unit may be included in the largest coding unit for dividing an image, and a coding unit may be determined through a recursive splitting process of each reference coding unit. According to an embodiment, at least one of the width and height of the largest coding unit may correspond to an integer multiple of at least one of the width and height of the reference coding unit. According to an embodiment, the size of a reference coding unit may be a size obtained by dividing a maximum coding unit n times according to a quad tree structure.
- the image decoding apparatus 100 may determine the reference coding unit by splitting the maximum coding unit n times according to the quad-tree structure, and, according to various embodiments, may split the reference coding unit based on at least one of the block type information and the split type mode information.
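- as a rough illustration of the quad-tree relationship described above (a minimal sketch; the function name and the example sizes are assumptions for illustration, not values taken from this disclosure), halving the width and height of the maximum coding unit at every split gives the reference coding unit size:

```python
# Minimal sketch: the reference coding unit size obtained by splitting the
# maximum coding unit n times along a quad tree (width and height are halved
# at each split). Names and example values are illustrative assumptions.
def reference_coding_unit_size(max_cu_width, max_cu_height, n):
    return max_cu_width >> n, max_cu_height >> n

# e.g. a 128x128 maximum coding unit split twice yields a 32x32 reference coding unit
assert reference_coding_unit_size(128, 128, 2) == (32, 32)
```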
- FIG. 16 illustrates a processing block that serves as a reference for determining an order of determining reference coding units included in a picture, according to an embodiment.
- the image decoding apparatus 100 may determine at least one processing block for dividing a picture.
- a processing block is a data unit including at least one reference coding unit that splits an image, and at least one reference coding unit included in a processing block may be determined in a specific order. That is, the order of determining at least one reference coding unit in each processing block may correspond to one of various orders in which the reference coding units may be determined, and the order of determining the reference coding units may be different for each processing block.
- the order of determining the reference coding units determined for each processing block may be one of various orders such as raster scan, Z-scan, N-scan, up-right diagonal scan, horizontal scan, and vertical scan, but the determinable order should not be interpreted as being limited to these scan orders.
- the image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information about the size of the processing block from the bitstream.
- the size of the processing block may be a predetermined size of a data unit indicated by information about the size of the processing block.
- the receiving unit 110 of the image decoding apparatus 100 may obtain information on the size of a processing block from a bitstream for each specific data unit.
- information on the size of a processing block may be obtained from the bitstream in units of data such as an image, a sequence, a picture, a slice, a slice segment, a tile, and a tile group. That is, the receiving unit 110 may obtain the information on the size of the processing block from the bitstream for each of the plurality of data units, and the image decoding apparatus 100 may determine the size of at least one processing block that splits the picture by using the obtained information on the size of the processing block; the size of such a processing block may be an integer multiple of the size of the reference coding unit.
- the image decoding apparatus 100 may determine the size of the processing blocks 1602 and 1612 included in the picture 1600. For example, the image decoding apparatus 100 may determine the size of the processing block based on information on the size of the processing block obtained from the bitstream. Referring to FIG. 16, according to an embodiment, the image decoding apparatus 100 may determine the horizontal size of the processing blocks 1602 and 1612 to be four times the horizontal size of the reference coding unit and the vertical size to be four times the vertical size of the reference coding unit. The image decoding apparatus 100 may determine an order in which at least one reference coding unit is determined in at least one processing block.
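- a minimal sketch of this sizing (the function name, the 4x default ratio, and the ceiling division are assumptions for illustration):

```python
import math

# Minimal sketch: a processing block sized as an integer multiple (here 4x, as in
# the FIG. 16 example) of the reference coding unit, and the number of processing
# blocks needed to cover a picture. Names are illustrative assumptions.
def processing_block_grid(pic_width, pic_height, ref_cu_width, ref_cu_height, ratio=4):
    pb_width, pb_height = ref_cu_width * ratio, ref_cu_height * ratio
    return math.ceil(pic_width / pb_width), math.ceil(pic_height / pb_height)
```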
- the image decoding apparatus 100 may determine each of the processing blocks 1602 and 1612 included in the picture 1600 based on the size of the processing block, and may determine an order of determining at least one reference coding unit included in the processing blocks 1602 and 1612. According to an embodiment, determining the reference coding unit may include determining the size of the reference coding unit.
- the image decoding apparatus 100 may obtain, from the bitstream, information about the determination order of at least one reference coding unit included in at least one processing block, and may determine the order in which at least one reference coding unit is determined based on the obtained information about the determination order.
- the information on the order of determination may be defined as an order or direction in which reference coding units are determined in a processing block. That is, the order in which the reference coding units are determined may be independently determined for each processing block.
- the image decoding apparatus 100 may obtain information on an order of determining a reference coding unit for each specific data unit from a bitstream.
- the receiver 110 may obtain information about the determination order of the reference coding unit from the bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, a tile, a tile group, and a processing block.
- since the information on the determination order of the reference coding units indicates the determination order of the reference coding units within a processing block, the information on the determination order may be obtained for each specific data unit including an integer number of processing blocks.
- the image decoding apparatus 100 may determine at least one reference coding unit based on an order determined according to an embodiment.
- the receiving unit 110 may obtain information about the order of determining the reference coding units, as information related to the processing blocks 1602 and 1612, from the bitstream, and the image decoding apparatus 100 may determine the order of determining at least one reference coding unit included in the processing blocks 1602 and 1612 and may determine at least one reference coding unit included in the picture 1600 according to the determined order. Referring to FIG. 16, the image decoding apparatus 100 may determine determination orders 1604 and 1614 of at least one reference coding unit related to the processing blocks 1602 and 1612, respectively.
- the order of determining the reference coding unit related to each of the processing blocks 1602 and 1612 may be different for each processing block.
- for example, when the reference coding unit determination order 1604 related to the processing block 1602 is a raster scan order, the reference coding units included in the processing block 1602 may be determined according to the raster scan order.
- in contrast, when the reference coding unit determination order 1614 related to the other processing block 1612 is the reverse of the raster scan order, the reference coding units included in the processing block 1612 may be determined according to the reverse of the raster scan order.
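- a minimal sketch of these two orders (the names, the grid enumeration, and the use of top-left coordinates are assumptions for illustration):

```python
# Minimal sketch: enumerating the reference coding units of one processing block
# in raster-scan order, or in the reverse of raster-scan order as for processing
# block 1612. Names and the coordinate convention are illustrative assumptions.
def reference_cu_positions(pb_x, pb_y, pb_width, pb_height, cu_width, cu_height,
                           reverse=False):
    positions = [(pb_x + x, pb_y + y)
                 for y in range(0, pb_height, cu_height)   # rows, top to bottom
                 for x in range(0, pb_width, cu_width)]     # columns, left to right
    return list(reversed(positions)) if reverse else positions
```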
- the image decoding apparatus 100 may decode at least one determined reference coding unit according to an embodiment.
- the image decoding apparatus 100 may decode an image based on the reference coding unit determined through the above-described embodiment.
- a method of decoding the reference coding unit may include various methods of decoding an image.
- the image decoding apparatus 100 may obtain and use block type information indicating a type of a current coding unit or split type mode information indicating a method of dividing a current coding unit from a bitstream.
- the split type mode information may be included in a bitstream related to various data units.
- the image decoding apparatus 100 may use split type mode information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, a slice segment header, a tile header, and a tile group header.
- the image decoding apparatus 100 may obtain and use a syntax element corresponding to block type information or split type mode information from a bitstream for each maximum coding unit, a reference coding unit, and processing block.
- the image decoding apparatus 100 may determine an image segmentation rule.
- the segmentation rule may be predetermined between the image decoding apparatus 100 and the image encoding apparatus 2200.
- the image decoding apparatus 100 may determine an image segmentation rule based on information obtained from a bitstream.
- the image decoding apparatus 100 may determine the splitting rule based on information obtained from at least one of a sequence parameter set, a picture parameter set, a video parameter set, a slice header, a slice segment header, a tile header, and a tile group header.
- the image decoding apparatus 100 may determine a split rule differently according to a frame, a slice, a tile, a temporal layer, a maximum coding unit, or a coding unit.
- the image decoding apparatus 100 may determine a splitting rule based on a block shape of a coding unit.
- the block shape may include the size, shape, width and height ratio and direction of the coding unit.
- the image encoding apparatus 2200 and the image decoding apparatus 100 may determine in advance that the splitting rule is to be determined based on the block shape of the coding unit. However, the present disclosure is not limited thereto.
- the image decoding apparatus 100 may determine a segmentation rule based on information obtained from a bitstream received from the image encoding apparatus 2200.
- the shape of the coding unit may include a square and a non-square.
- when the width and height of the coding unit are the same, the image decoding apparatus 100 may determine the shape of the coding unit as a square; if the width and height of the coding unit are not the same, the image decoding apparatus 100 may determine the shape of the coding unit as a non-square.
- the size of the coding unit may include various sizes of 4x4, 8x4, 4x8, 8x8, 16x4, 16x8, ..., 256x256.
- the size of the coding unit may be classified according to the length of the long side of the coding unit, the length of the short side, or the area.
- the image decoding apparatus 100 may apply the same splitting rule to coding units classified into the same group. For example, the image decoding apparatus 100 may classify coding units having the same long side length into the same size. In addition, the image decoding apparatus 100 may apply the same splitting rule to coding units having the same long side length.
- the ratio of the width and height of the coding unit may include 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 32:1, 1:32, and the like.
- the direction of the coding unit may include a horizontal direction and a vertical direction.
- the horizontal direction may indicate a case where the length of the width of the coding unit is longer than the length of the height.
- the vertical direction may indicate a case where the length of the width of the coding unit is shorter than the length of the height.
- the image decoding apparatus 100 may adaptively determine a splitting rule based on a size of a coding unit.
- the image decoding apparatus 100 may differently determine an allowable split type mode based on the size of the coding unit. For example, the image decoding apparatus 100 may determine whether splitting is allowed based on the size of the coding unit.
- the image decoding apparatus 100 may determine a splitting direction according to the size of the coding unit.
- the image decoding apparatus 100 may determine an allowable split type according to the size of the coding unit.
- Determining the splitting rule based on the size of the coding unit may be a splitting rule predetermined between the image encoding apparatus 2200 and the image decoding apparatus 100. Also, the image decoding apparatus 100 may determine a segmentation rule based on information obtained from the bitstream.
- the image decoding apparatus 100 may adaptively determine a splitting rule based on the position of the coding unit.
- the image decoding apparatus 100 may adaptively determine a segmentation rule based on a position occupied by the coding unit in the image.
- the image decoding apparatus 100 may determine a splitting rule so that coding units generated by different split paths do not have the same block shape.
- the present invention is not limited thereto, and coding units generated by different split paths may have the same block shape. Coding units generated by different split paths may have different decoding processing orders. Since the decoding processing sequence has been described with reference to FIG. 12, detailed descriptions are omitted.
- according to an embodiment, when a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel, the value of the neighboring pixel located at the upper left or lower right is determined as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right; an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels is determined based on the values of the current pixel and the neighboring pixels; and, by applying the adaptive loop filter to the current pixel, the value of the current pixel may be corrected using the values of the neighboring pixels.
- FIG. 17 is a block diagram of a video encoding apparatus according to an embodiment.
- the video encoding apparatus 1700 may include a memory 1710 and at least one processor 1720 connected to the memory 1710.
- the operations of the video encoding apparatus 1700 according to an embodiment may be performed by individual processors or under the control of a central processor.
- the memory 1710 of the video encoding apparatus 1700 may store data received from the outside and data generated by the processor, for example, the values of the current pixel and the neighboring pixels, the filter coefficients for the current pixel and the neighboring pixels, and the like.
- the processor 1720 of the video encoding apparatus 1700 may determine whether a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determine the value of the neighboring pixel located at the upper left or lower right as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right; determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; correct the value of the current pixel using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and encode a current block including the current pixel.
- more specifically, the video encoding apparatus 1700 according to an embodiment determines whether a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determines the value of the neighboring pixel located at the upper left or lower right as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right; determines an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; and, by applying the adaptive loop filter to the current pixel, corrects the value of the current pixel using the values of the neighboring pixels.
- FIG. 18 is a flowchart of a video encoding method according to an embodiment.
- in step S1810, the video encoding apparatus 1700 may determine whether a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel.
- the current pixel may be a pixel to which deblocking filtering to remove a block effect and sample offset filtering to correct a pixel value using at least one of an edge offset and a band offset are applied.
- in step S1830, when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, the video encoding apparatus 1700 may determine the value of the neighboring pixel located at the upper left or lower right as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right.
- when a neighboring pixel located above the current pixel is located outside the upper boundary of the slice including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located above; when a neighboring pixel located below the current pixel is located outside the lower boundary of the slice including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below; when a neighboring pixel located to the left of the current pixel is located outside the left boundary of the slice including the current block, the value of the neighboring pixel located to the left of the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and when a neighboring pixel located to the right of the current pixel is located outside the right boundary of the slice including the current block, the value of the neighboring pixel located to the right of the current pixel may be determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right. A sketch of this padding is shown below.
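- a minimal sketch of such boundary padding (the in_same_slice predicate, the names, and the walk-back strategy are assumptions for illustration; it assumes the directly adjacent in-slice row or column exists, as in the raster-scan slice layouts discussed later):

```python
# Minimal sketch: replacing an ALF neighbor that falls outside the slice of the
# current pixel. Left/right and upper-left/lower-right neighbors take the closest
# available pixel in the horizontal direction; purely vertical neighbors take the
# closest available pixel in the vertical direction. Names are illustrative assumptions.
def padded_neighbor(recon, in_same_slice, x, y, dx, dy):
    nx, ny = x + dx, y + dy
    if in_same_slice(nx, ny):
        return recon[ny][nx]
    if dx != 0:                                   # horizontal or diagonal neighbor
        step = -1 if dx > 0 else 1
        while not in_same_slice(nx, ny) and nx != x:
            nx += step                            # walk back horizontally into the slice
    else:                                         # purely vertical neighbor
        step = -1 if dy > 0 else 1
        while not in_same_slice(nx, ny) and ny != y:
            ny += step                            # walk back vertically into the slice
    return recon[ny][nx]
```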
- whether in-loop filtering using neighboring pixels outside the current slice boundary is applicable may be determined through a sum of absolute transformed differences (SATD) or rate-distortion optimization (RDO) calculation, and information indicating whether in-loop filtering using neighboring pixels outside the current slice boundary is applicable may be encoded and signaled. If in-loop filtering is applicable across the current slice boundary, the in-loop filtering may be performed using neighboring pixels outside the current slice boundary. A sketch of an SATD cost is given below.
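- as a rough illustration of one such cost (a minimal sketch; the 4x4 block size, the Hadamard matrix, the halving normalization, and all names are assumptions, not values taken from this disclosure), an encoder could compare filtered candidates against the original block with:

```python
# Minimal sketch: a 4x4 Hadamard-based SATD between an original block and a
# candidate (e.g. filtered) reconstruction. Block size, normalization and names
# are illustrative assumptions.
H4 = [[1,  1,  1,  1],
      [1, -1,  1, -1],
      [1,  1, -1, -1],
      [1, -1, -1,  1]]

def _matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def satd4x4(orig, cand):
    diff = [[orig[i][j] - cand[i][j] for j in range(4)] for i in range(4)]
    t = _matmul(_matmul(H4, diff), H4)            # H * D * H (H4 is symmetric)
    return sum(abs(v) for row in t for v in row) // 2
```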
- when the information indicates that in-loop filtering is not applicable and a neighboring pixel located on the left or right is located outside the current slice boundary, the value of the left or right neighboring pixel outside the current slice boundary may be determined as the pixel value of the pixel within the current slice boundary at the position closest in the horizontal direction to the left or right neighboring pixel.
- likewise, the value of an upper or lower neighboring pixel outside the current slice boundary may be determined as the pixel value of the pixel within the current slice boundary at the position closest in the vertical direction to the upper or lower neighboring pixel.
- when in-loop filtering using neighboring pixels outside the current slice boundary is not applicable and the current slice including the current pixel is different from the slice including the neighboring pixel located at the upper left or lower right of the current pixel, the value of the neighboring pixel located at the upper left or lower right may be determined as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right.
- when a neighboring pixel located above the current pixel is located outside the upper boundary of the tile including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located above; when a neighboring pixel located below the current pixel is located outside the lower boundary of the tile including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below; when a neighboring pixel located to the left of the current pixel is located outside the left boundary of the tile including the current block, the value of the neighboring pixel located to the left of the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and when a neighboring pixel located to the right of the current pixel is located outside the right boundary of the tile including the current block, the value of the neighboring pixel located to the right of the current pixel may be determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- whether in-loop filtering using neighboring pixels outside the current tile boundary is applicable may be determined through a sum of absolute transformed differences (SATD) or rate-distortion optimization (RDO) calculation, and information indicating whether in-loop filtering using neighboring pixels outside the current tile boundary is applicable may be encoded and signaled. If in-loop filtering is applicable across the current tile boundary, the in-loop filtering may be performed using neighboring pixels outside the current tile boundary.
- when the information indicates that in-loop filtering is not applicable and a neighboring pixel located on the left or right is located outside the current tile boundary, the value of the left or right neighboring pixel may be determined as the pixel value of the pixel within the current tile boundary at the position closest in the horizontal direction to the left or right neighboring pixel.
- likewise, the value of an upper or lower neighboring pixel outside the current tile boundary may be determined as the pixel value of the pixel within the current tile boundary at the position closest in the vertical direction to the upper or lower neighboring pixel.
- when a neighboring pixel located above the current pixel is located outside the upper boundary of the subpicture including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located above; and when a neighboring pixel located below the current pixel is located outside the lower boundary of the subpicture including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below.
- when a neighboring pixel located to the left of the current pixel is located outside the left boundary of the subpicture including the current block, the value of the neighboring pixel located to the left of the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and if the neighboring pixel located to the right of the current pixel is located outside the right boundary of the subpicture including the current block, the value of the neighboring pixel located to the right of the current pixel may be determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- whether in-loop filtering using neighboring pixels outside the current subpicture boundary is applicable may be determined through a sum of absolute transformed differences (SATD) or rate-distortion optimization (RDO) calculation, and information indicating whether in-loop filtering using neighboring pixels outside the current subpicture boundary is applicable may be encoded and signaled. If in-loop filtering is applicable across the current subpicture boundary, the in-loop filtering may be performed using neighboring pixels outside the current subpicture boundary.
- when the information indicates that in-loop filtering is not applicable and a neighboring pixel located on the left or right is located outside the current subpicture boundary, the value of the left or right neighboring pixel may be determined as the pixel value of the pixel within the current subpicture boundary at the position closest in the horizontal direction to the left or right neighboring pixel.
- likewise, the value of an upper or lower neighboring pixel outside the current subpicture boundary may be determined as the pixel value of the pixel within the current subpicture boundary at the position closest in the vertical direction to the upper or lower neighboring pixel.
- the video encoding apparatus 1700 may determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on values of the current pixel and the neighboring pixels.
- when the current block is a luma block, the adaptive loop filter may be a 7x7 rhombus-shaped filter.
- when the current block is a chroma block, the adaptive loop filter may be a 5x5 rhombus-shaped filter.
- the filter coefficients may be determined based on the directionality and the amount of change of the current pixel and the neighboring pixels.
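- a minimal sketch of one way such a classification could look (this loosely follows VVC-style ALF classification and is an assumption for illustration, not the rule of this disclosure):

```python
# Minimal sketch: 1-D Laplacian gradients around the current pixel give a dominant
# direction and an activity (amount of change), which could then select a set of
# filter coefficients. This is an illustrative assumption, not the disclosed rule.
def direction_and_activity(recon, x, y):
    c = recon[y][x]
    g_h  = abs(2 * c - recon[y][x - 1] - recon[y][x + 1])          # horizontal gradient
    g_v  = abs(2 * c - recon[y - 1][x] - recon[y + 1][x])          # vertical gradient
    g_d0 = abs(2 * c - recon[y - 1][x - 1] - recon[y + 1][x + 1])  # one diagonal
    g_d1 = abs(2 * c - recon[y - 1][x + 1] - recon[y + 1][x - 1])  # the other diagonal
    activity = g_h + g_v
    if max(g_h, g_v) > 2 * min(g_h, g_v):
        direction = 'h' if g_h > g_v else 'v'        # dominant axis-aligned gradient
    elif max(g_d0, g_d1) > 2 * min(g_d0, g_d1):
        direction = 'd0' if g_d0 > g_d1 else 'd1'    # dominant diagonal gradient
    else:
        direction = 'none'
    return direction, activity
```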
- the video encoding apparatus 1700 may correct the value of the current pixel by using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel.
- the step of correcting the value of the current pixel using the values of the neighboring pixels may further include adding, to the value of the current pixel, values obtained by multiplying the filter coefficients by the differences between the value of the current pixel and the values of each of the neighboring pixels.
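- written as a formula (the notation here is an assumption for illustration: R is the reconstructed value before the adaptive loop filter, c_i the filter coefficients, and (dx_i, dy_i) the offsets of the neighboring pixels covered by the filter):

$$\tilde{R}(x,y) = R(x,y) + \sum_{i} c_i \left( R(x+dx_i,\, y+dy_i) - R(x,y) \right)$$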
- in step S1890, the video encoding apparatus 1700 may encode a current block including the current pixel.
- FIGS. 19 and 20 are, respectively, a block diagram of a video decoding apparatus according to an embodiment corresponding to the video encoding apparatus and the video encoding method described above, and a flowchart of a video decoding method according to an embodiment.
- FIG. 19 is a block diagram of a video decoding apparatus according to an embodiment.
- the video decoding apparatus 1900 may include a memory 1910 and at least one processor 1920 connected to the memory 1910.
- the operations of the video decoding apparatus 1900 according to an embodiment may be performed by individual processors or under the control of a central processor.
- the memory 1910 of the video decoding apparatus 1900 may store data received from the outside and data generated by the processor, for example, the values of the current pixel and the neighboring pixels, the filter coefficients for the current pixel and the neighboring pixels, and the like.
- the processor 1920 of the video decoding apparatus 1900 determines whether a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determines the value of the neighboring pixel located at the upper left or lower right as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right; determines an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; corrects the value of the current pixel using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decodes a current block including the current pixel.
- more specifically, the video decoding apparatus 1900 determines whether a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel; when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, determines the value of the neighboring pixel located at the upper left or lower right as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right; determines an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on the values of the current pixel and the neighboring pixels; and, by applying the adaptive loop filter to the current pixel, corrects the value of the current pixel using the values of the neighboring pixels.
- FIG. 20 is a flowchart of a video decoding method according to an embodiment.
- in step S2010, the video decoding apparatus 1900 may determine whether a slice including a neighboring pixel located at the upper left or lower right of the current pixel, among the neighboring pixels used for the adaptive loop filter of the current pixel, is different from the slice including the current pixel.
- the current pixel may be a pixel to which deblocking filtering to remove a block effect and sample offset filtering to correct a pixel value using at least one of an edge offset and a band offset are applied.
- in step S2030, when the slice including the neighboring pixel located at the upper left or lower right is different from the slice including the current pixel, the video decoding apparatus 1900 may determine the value of the neighboring pixel located at the upper left or lower right as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right.
- when a neighboring pixel located above the current pixel is located outside the upper boundary of the slice including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located above; when a neighboring pixel located below the current pixel is located outside the lower boundary of the slice including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below; when a neighboring pixel located to the left of the current pixel is located outside the left boundary of the slice including the current block, the value of the neighboring pixel located to the left of the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and when a neighboring pixel located to the right of the current pixel is located outside the right boundary of the slice including the current block, the value of the neighboring pixel located to the right of the current pixel may be determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- when information indicating whether in-loop filtering using neighboring pixels outside the current slice boundary is applicable is obtained from the bitstream and the information indicates that in-loop filtering is applicable, in-loop filtering may be performed using neighboring pixels outside the current slice boundary.
- when the information indicating whether in-loop filtering using neighboring pixels outside the current slice boundary is applicable, obtained from the bitstream, indicates that in-loop filtering is not applicable and a neighboring pixel located on the left or right is located outside the current slice boundary, the value of the left or right neighboring pixel outside the current slice boundary may be determined as the pixel value of the pixel within the current slice boundary at the position closest in the horizontal direction to the left or right neighboring pixel.
- likewise, the value of an upper or lower neighboring pixel outside the current slice boundary may be determined as the pixel value of the pixel within the current slice boundary at the position closest in the vertical direction to the upper or lower neighboring pixel.
- when the information indicating whether in-loop filtering using neighboring pixels outside the current slice boundary is applicable indicates that in-loop filtering is not applicable and the current slice including the current pixel is different from the slice including the neighboring pixel located at the upper left or lower right of the current pixel, the value of the neighboring pixel located at the upper left or lower right may be determined as the value of the pixel, among the pixels included in the slice including the current pixel, that is closest in the horizontal direction to the neighboring pixel located at the upper left or lower right.
- when a neighboring pixel located above the current pixel is located outside the upper boundary of the tile including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located above; when a neighboring pixel located below the current pixel is located outside the lower boundary of the tile including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below; when a neighboring pixel located to the left of the current pixel is located outside the left boundary of the tile including the current block, the value of the neighboring pixel located to the left of the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and when a neighboring pixel located to the right of the current pixel is located outside the right boundary of the tile including the current block, the value of the neighboring pixel located to the right of the current pixel may be determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- when information indicating whether in-loop filtering using neighboring pixels outside the current tile boundary is applicable is obtained from the bitstream and the information indicates that in-loop filtering is applicable, in-loop filtering may be performed using neighboring pixels outside the current tile boundary.
- when the information indicating whether in-loop filtering using neighboring pixels outside the current tile boundary is applicable, obtained from the bitstream, indicates that in-loop filtering is not applicable and a neighboring pixel located on the left or right is located outside the current tile boundary, the value of the left or right neighboring pixel outside the current tile boundary may be determined as the pixel value of the pixel within the current tile boundary at the position closest in the horizontal direction to the left or right neighboring pixel.
- a value of an upper or lower peripheral pixel outside the current tile boundary may be determined as a pixel value of a pixel within the current tile boundary at a position closest to the vertical direction of the upper or lower peripheral pixel.
- when a neighboring pixel located above the current pixel is located outside the upper boundary of the subpicture including the current block, the value of the neighboring pixel located above the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located above; and when a neighboring pixel located below the current pixel is located outside the lower boundary of the subpicture including the current block, the value of the neighboring pixel located below the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below.
- when a neighboring pixel located to the left of the current pixel is located outside the left boundary of the subpicture including the current block, the value of the neighboring pixel located to the left of the current pixel is determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and if the neighboring pixel located to the right of the current pixel is located outside the right boundary of the subpicture including the current block, the value of the neighboring pixel located to the right of the current pixel may be determined as the value of the pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- when information indicating whether in-loop filtering using neighboring pixels outside the current sub-picture boundary is applicable is obtained from the bitstream and the information indicates that in-loop filtering is applicable, in-loop filtering may be performed using neighboring pixels outside the current sub-picture boundary.
- when the information indicating whether in-loop filtering using neighboring pixels outside the current sub-picture boundary is applicable, obtained from the bitstream, indicates that in-loop filtering is not applicable and a neighboring pixel located on the left or right is located outside the current sub-picture boundary, the value of the left or right neighboring pixel outside the current sub-picture boundary may be determined as the pixel value of the pixel within the current sub-picture boundary at the position closest in the horizontal direction to the left or right neighboring pixel.
- a value of an upper or lower peripheral pixel outside the current sub-picture boundary may be determined as a pixel value of a pixel within the sub-picture boundary at a position closest to the vertical direction of the upper or lower peripheral pixel.
- the video decoding apparatus 1900 may determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels based on values of the current pixel and the neighboring pixels.
- when the current block is a luma block, the adaptive loop filter may be a 7x7 rhombus-shaped filter.
- when the current block is a chroma block, the adaptive loop filter may be a 5x5 rhombus-shaped filter.
- the filter coefficients may be determined based on the directionality and the amount of change of the current pixel and the neighboring pixels.
- the video decoding apparatus 1900 may correct the value of the current pixel by using the values of the surrounding pixels by applying the adaptive loop filter to the current pixel.
- the step of correcting the value of the current pixel using the values of the neighboring pixels may further include adding, to the value of the current pixel, values obtained by multiplying the filter coefficients by the differences between the value of the current pixel and the values of each of the neighboring pixels.
- in step S2090, the video decoding apparatus 1900 may decode a current block including the current pixel.
- FIG. 21 is a diagram for describing filtering at a slice boundary in a raster scan order, according to an exemplary embodiment.
- the filtering regions 2115 and 2125 may deviate from the boundary of the slice.
- the upper-left area of the filtering region 2115 for the pixel of the first block 2110 may deviate from the boundary of the slice including the first block 2110, and the lower-right area of the filtering region 2125 for the pixel of the second block 2120 may deviate from the boundary of the slice including the second block 2120. It is necessary to perform filtering by padding the area of the filtering region outside the slice boundary with pixels adjacent to the slice boundary instead of pixels outside the slice boundary. A pixel padding method for this case will be described later with reference to FIG. 22.
- FIG. 22 is a diagram for describing pixel padding for an upper left region of a filtering region at a slice boundary in a raster scan order, according to an exemplary embodiment.
- when block A is outside the slice boundary and blocks B, C, and D are within the slice boundary, pixel padding is performed as follows.
- the pixel A0 and the pixel A1 may be padded with the value of the pixel B0 closest to the slice boundary in the horizontal direction
- the pixel A2 and the pixel A3 may be padded with the value of the pixel B3 closest to the slice boundary in the horizontal direction.
- the pixel A0 and the pixel A2 may be padded with the value of the pixel C0 closest to the slice boundary in the vertical direction
- the pixel A1 and the pixel A3 may be padded with the value of the pixel C1 closest to the slice boundary in the vertical direction.
- padding may be performed on adjacent pixels of blocks B, C, and D based on the pixel distance.
- pixel A0 is padded with one of the pixel value of pixel B0, pixel value of pixel C0, and pixel value of (B0 + C0 + 1)/2
- pixel A1 is padded with the value of pixel B0
- pixel A2 is padded with the pixel value of pixel C0
- the pixel A3 may be padded with one of the pixel value of pixel B3, the pixel value of pixel C1, the pixel value of pixel D0, and the pixel value of (B3 + C1 + 1)/2.
- the block A region may be padded with an average of the representative value of block B and the representative value of block C.
- pixel values of all the pixels of block A may be padded with (B3 + C1 + 1)/2 using pixel B3 of block B and pixel C1 of block C.
- the block A region may be padded with an average of the representative values of block B, block C, and block D.
- pixel values of all the pixels of block A may be padded with (B3 + C1 + 2*D0 + 2)/4 using the pixel B3 of block B, the pixel C1 of block C, and the pixel D0 of block D.
- padding may be performed on the block A region using a planar mode algorithm in intra prediction.
- the block A region may be padded using the pixel of D0. That is, all pixels of block A may be padded with a pixel value of D0.
- the block A region may be padded with an average of pixel values of adjacent pixels in the x-coordinate and y-coordinate directions of the pixel of the block A region.
- pixel A0 is padded with a value of (C0 + B0)/2
- pixel A1 is padded with a value of (C1 + B0)/2
- pixel A2 is padded with a value of (C0 + B3)/2
- Pixel A3 may be padded with a value of (C1 + B3)/2.
- the block A region may be padded with a specific value.
- pixel values of the block A region may be padded with an intermediate value of a bit-depth (eg, 512 in the case of 10 bits).
- pixels in the block A area may be padded with an average value of all pixels in blocks B, C, and D.
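- a minimal sketch of a few of the padding options above (the function names are assumptions for illustration; B0, B3, C0, C1, and D0 denote the pixels named in FIG. 22, and integer arithmetic is shown):

```python
# Minimal sketch: some of the block-A padding options described for FIG. 22,
# where block A lies outside the slice and B0/B3, C0/C1 and D0 are the nearest
# available pixels of blocks B, C and D. Names are illustrative assumptions.
def pad_horizontal(b0, b3):
    # A0 and A1 copy B0, A2 and A3 copy B3 (nearest pixel in the horizontal direction)
    return {'A0': b0, 'A1': b0, 'A2': b3, 'A3': b3}

def pad_average_bc(b3, c1):
    # every pixel of block A takes the rounded average of B3 and C1
    value = (b3 + c1 + 1) // 2
    return {key: value for key in ('A0', 'A1', 'A2', 'A3')}

def pad_axis_average(b0, b3, c0, c1):
    # each pixel of block A averages its horizontally and vertically adjacent pixels
    return {'A0': (c0 + b0) // 2, 'A1': (c1 + b0) // 2,
            'A2': (c0 + b3) // 2, 'A3': (c1 + b3) // 2}
```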
- Padding may be performed on pixels of an area outside the slice boundary (block A) using pixels within the slice boundary.
- pixels outside the slice boundary may be clipped and not used for filtering.
- for example, when the pixel A3 is clipped, filtering may be performed using the other pixels B3, C1, and D0.
- the padding methods described above in FIG. 22 can be applied not only to adaptive loop filtering, but also to all loop filtering methods that perform diagonal filtering or 2-dimensional filtering.
- FIG. 23 is a diagram for describing pixel padding for a lower right area of a filtering area at a slice boundary in a raster scan order, according to an exemplary embodiment.
- when block D is outside the slice boundary and blocks A, B, and C are within the slice boundary, pixel padding is performed as follows.
- the pixel D0 and the pixel D1 may be padded with the value of the pixel C2 closest to the slice boundary in the horizontal direction
- the pixel D2 and the pixel D3 may be padded with the value of the pixel C5 closest to the slice boundary in the horizontal direction.
- the pixel D0 and the pixel D2 may be padded with the value of the pixel B4 closest to the slice boundary in the vertical direction
- the pixel D1 and the pixel D3 may be padded with the value of the pixel B5 that is closest to the vertical direction from the slice boundary.
- FIG. 24 shows a filter including filter coefficients of an adaptive loop filter for a current pixel of a luma block.
- indices from C0 to C12 represent respective filter coefficients, and each filter coefficient corresponds to each pixel according to an arrangement position of the filter coefficients.
- the filter of FIG. 24 corrects a pixel value by using the current pixel corresponding to the C12 index and surrounding pixels corresponding to the filter coefficients.
- a pixel correction value is generated by adding together the value calculated using the filter coefficient corresponding to the upper C0 index and the difference between the current pixel and the neighboring pixel corresponding to the upper C0 index, the value calculated using the filter coefficient corresponding to the upper-left C1 index and the difference between the current pixel and the neighboring pixel corresponding to the upper-left C1 index, ..., and the value calculated using the filter coefficient corresponding to the lower C0 index and the difference between the current pixel and the neighboring pixel corresponding to the lower C0 index.
- the pixel value of the current pixel corresponding to the C12 index may be corrected by adding the generated pixel correction value to the value of the current pixel corresponding to the C12 index. A sketch of this filtering is shown below.
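- a minimal sketch of this luma filtering (the offset table, coefficient precision, and names are assumptions for illustration; a real codec additionally works in fixed-point arithmetic and clips the result):

```python
# Minimal sketch: applying a 7x7 rhombus-shaped adaptive loop filter around the
# current pixel. coeffs[i] stands for the coefficient of index Ci in FIG. 24
# (C0..C11 for the surrounding positions; C12, the centre coefficient, is not
# needed in this difference form). Each coefficient weights the difference
# between the neighboring pixel and the current pixel, and the weighted
# differences are added to the current pixel value.
DIAMOND_7X7 = [          # (dy, dx) offsets of the upper half; the lower half mirrors them
    (-3, 0),
    (-2, -1), (-2, 0), (-2, 1),
    (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
    (0, -3), (0, -2), (0, -1),
]

def alf_luma_7x7(recon, x, y, coeffs):
    center = recon[y][x]
    correction = 0
    for i, (dy, dx) in enumerate(DIAMOND_7X7):
        # the filter is point-symmetric: the mirrored position reuses coefficient Ci
        for sy, sx in ((dy, dx), (-dy, -dx)):
            correction += coeffs[i] * (recon[y + sy][x + sx] - center)
    return center + correction
```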
- ALF filter coefficients may be determined so that the difference between the pixel values of a specific block of the reconstructed image and the pixel values of the original image is minimized.
- FIG. 25 is a diagram for describing a method of padding an upper left peripheral pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a luma block.
- the upper region 2520 and the left region 2530 of the region 2540 including the current luma pixel are in the same slice as the region 2540, and the upper-left region 2510 may be included in a slice different from the region 2540 including the current luma pixel. In this case, the neighboring pixels corresponding to the C1, C4, and C5 indexes included in the upper-left region 2510 need to be padded.
- the neighboring pixels corresponding to the C1 index included in the upper left area 2510 are padded with the pixel values of the pixels corresponding to the C2 index included in the upper area 2520, which is the closest pixel in the horizontal direction, and
- the neighboring pixels corresponding to the C4 and C5 indexes included in the upper-left region 2510 may be padded with the pixel value of the pixel corresponding to the C6 index included in the upper region 2520, which is the closest pixel in the horizontal direction.
- thereafter, an adaptive loop filter including filter coefficients is determined based on the current pixel and the neighboring pixels, and by applying the adaptive loop filter to the current pixel, the pixel value of the current pixel can be corrected.
- FIG. 26 is a diagram for explaining a method of padding a lower-right neighboring pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a luma block.
- the right region 2620 and the lower region 2630 of the region 2610 including the current luma pixel are in the same slice as the region 2610, and the lower-right region 2640 may be included in a slice different from the region 2610 including the current luma pixel. In this case, the neighboring pixels corresponding to the C5, C4, and C1 indexes included in the lower-right region 2640 need to be padded.
- the surrounding pixels corresponding to the C5 and C4 indexes included in the lower right area 2640 are padded with the pixel values of the pixels corresponding to the C6 index included in the lower area 2630, which is the closest pixel in the horizontal direction
- the surrounding pixels corresponding to the C1 index included in the lower right area 2640 may be padded with a pixel value of a pixel corresponding to the C2 index included in the lower area 2630, which is the closest pixel in the horizontal direction.
- thereafter, an adaptive loop filter including filter coefficients is determined based on the current pixel and the neighboring pixels, and by applying the adaptive loop filter to the current pixel, the pixel value of the current pixel can be corrected.
- FIG. 27 shows a filter including filter coefficients of an adaptive loop filter for a current pixel of a chroma block.
- indices from C0 to C6 represent respective filter coefficients, and each filter coefficient corresponds to each pixel according to an arrangement position of the filter coefficients.
- the filter of FIG. 27 corrects a pixel value by using the current pixel corresponding to the C6 index and surrounding pixels corresponding to the filter coefficients.
- a pixel correction value is generated by adding together the value calculated using the filter coefficient corresponding to the upper C0 index and the difference between the current pixel and the neighboring pixel corresponding to the upper C0 index, the value calculated using the filter coefficient corresponding to the upper-left C1 index and the difference between the current pixel and the neighboring pixel corresponding to the upper-left C1 index, ..., and the value calculated using the filter coefficient corresponding to the lower C0 index and the difference between the current pixel and the neighboring pixel corresponding to the lower C0 index.
- by adding the generated pixel correction value to the value of the current pixel corresponding to the C6 index, the pixel value of the current pixel corresponding to the C6 index may be corrected.
- FIG. 28 is a diagram for describing a method of padding an upper left neighboring pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a chroma block.
- the upper region 2820 and the left region 2830 of the region 2840 including the current chroma pixel are in the same slice as the region 2840, and the upper-left region 2810 may be included in a slice different from the region 2840 including the current chroma pixel. In this case, the neighboring pixels corresponding to the C1 index included in the upper-left region 2810 need to be padded. Specifically, the neighboring pixels corresponding to the C1 index included in the upper-left region 2810 may be padded with the pixel value of the pixel corresponding to the C2 index included in the upper region 2820, which is the closest pixel in the horizontal direction.
- thereafter, an adaptive loop filter including filter coefficients is determined based on the current pixel and the neighboring pixels, and by applying the adaptive loop filter to the current pixel, the pixel value of the current pixel can be corrected.
- FIG. 29 is a diagram for explaining a method of padding a lower-right neighboring pixel located outside a slice boundary when an adaptive loop filter is applied to a current pixel of a chroma block.
- the right region 2920 and the lower region 2930 of the region 2910 including the current chroma pixel are in the same slice as the region 2910, and the lower-right region 2940 may be included in a slice different from the region 2910 including the current chroma pixel.
- the surrounding pixels corresponding to the C1 index included in the lower right area 2940 need to be padded.
- the surrounding pixels corresponding to the C1 index included in the lower right area 2940 may be padded with a pixel value of a pixel corresponding to the C2 index included in the lower area 2930 which is the closest pixel in the horizontal direction.
- thereafter, an adaptive loop filter including filter coefficients is determined based on the current pixel and the neighboring pixels, and by applying the adaptive loop filter to the current pixel, the pixel value of the current pixel can be corrected.
- Computer-readable recording media include storage media such as magnetic storage media (eg, ROM, floppy disk, hard disk, etc.) and optical reading media (eg, CD-ROM, DVD, etc.).
Claims (15)
- A video decoding method comprising: determining whether a slice including a neighboring pixel located at an upper-left or lower-right side of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper-left or lower-right side is different from the slice including the current pixel, determining a value of the neighboring pixel located at the upper-left or lower-right side to be a value of a pixel, among pixels included in the slice including the current pixel, that is closest in a horizontal direction to the neighboring pixel located at the upper-left or lower-right side; determining an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels, based on values of the current pixel and the neighboring pixels; correcting the value of the current pixel by using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decoding a current block including the current pixel.
- The video decoding method of claim 1, wherein, when the current block is a luma block, the adaptive loop filter is a 7x7 rhombus-shaped filter.
- The video decoding method of claim 1, wherein, when the current block is a chroma block, the adaptive loop filter is a 5x5 rhombus-shaped filter.
- The video decoding method of claim 1, wherein the current pixel is a pixel to which deblocking filtering for removing a block effect and sample offset filtering for correcting a pixel value by using at least one of an edge offset and a band offset have been applied.
- The video decoding method of claim 1, wherein the correcting of the value of the current pixel by using the values of the neighboring pixels further comprises adding, to the value of the current pixel, values obtained by multiplying the filter coefficients by differences between the value of the current pixel and the values of each of the neighboring pixels.
- The video decoding method of claim 1, wherein the filter coefficients are determined based on a directionality and an amount of change of the current pixel and the neighboring pixels.
- The video decoding method of claim 1, wherein, when a neighboring pixel located above the current pixel is located outside an upper boundary of the slice including the current block, a value of the neighboring pixel located above the current pixel is determined to be a value of a pixel, among pixels included in the current block, that is closest in a vertical direction to the neighboring pixel located above; when a neighboring pixel located below the current pixel is located outside a lower boundary of the slice including the current block, a value of the neighboring pixel located below the current pixel is determined to be a value of a pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below; when a neighboring pixel located to the left of the current pixel is located outside a left boundary of the slice including the current block, a value of the neighboring pixel located to the left of the current pixel is determined to be a value of a pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and when a neighboring pixel located to the right of the current pixel is located outside a right boundary of the slice including the current block, a value of the neighboring pixel located to the right of the current pixel is determined to be a value of a pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- A video encoding method comprising: determining whether a slice including a neighboring pixel located at an upper-left or lower-right side of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper-left or lower-right side is different from the slice including the current pixel, determining a value of the neighboring pixel located at the upper-left or lower-right side to be a value of a pixel, among pixels included in the slice including the current pixel, that is closest in a horizontal direction to the neighboring pixel located at the upper-left or lower-right side; determining an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels, based on values of the current pixel and the neighboring pixels; correcting the value of the current pixel by using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and encoding a current block including the current pixel.
- The video encoding method of claim 8, wherein, when the current block is a luma block, the adaptive loop filter is a 7x7 rhombus-shaped filter.
- The video encoding method of claim 8, wherein, when the current block is a chroma block, the adaptive loop filter is a 5x5 rhombus-shaped filter.
- The video encoding method of claim 8, wherein the current pixel is a pixel to which deblocking filtering for removing a block effect and sample offset filtering for correcting a pixel value by using at least one of an edge offset and a band offset have been applied.
- The video encoding method of claim 8, wherein the correcting of the value of the current pixel by using the values of the neighboring pixels further comprises adding, to the value of the current pixel, values obtained by multiplying the filter coefficients by differences between the value of the current pixel and the values of each of the neighboring pixels.
- The video encoding method of claim 8, wherein the filter coefficients are determined based on a directionality and an amount of change of the current pixel and the neighboring pixels.
- The video encoding method of claim 8, wherein, when a neighboring pixel located above the current pixel is located outside an upper boundary of the slice including the current block, a value of the neighboring pixel located above the current pixel is determined to be a value of a pixel, among pixels included in the current block, that is closest in a vertical direction to the neighboring pixel located above; when a neighboring pixel located below the current pixel is located outside a lower boundary of the slice including the current block, a value of the neighboring pixel located below the current pixel is determined to be a value of a pixel, among the pixels included in the current block, that is closest in the vertical direction to the neighboring pixel located below; when a neighboring pixel located to the left of the current pixel is located outside a left boundary of the slice including the current block, a value of the neighboring pixel located to the left of the current pixel is determined to be a value of a pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the left; and when a neighboring pixel located to the right of the current pixel is located outside a right boundary of the slice including the current block, a value of the neighboring pixel located to the right of the current pixel is determined to be a value of a pixel, among the pixels included in the current block, that is closest in the horizontal direction to the neighboring pixel located to the right.
- A video decoding apparatus comprising: a memory; and at least one processor connected to the memory, wherein the at least one processor is configured to: determine whether a slice including a neighboring pixel located at an upper-left or lower-right side of a current pixel, among neighboring pixels used for an adaptive loop filter of the current pixel, is different from a slice including the current pixel; when the slice including the neighboring pixel located at the upper-left or lower-right side is different from the slice including the current pixel, determine a value of the neighboring pixel located at the upper-left or lower-right side to be a value of a pixel, among pixels included in the slice including the current pixel, that is closest in a horizontal direction to the neighboring pixel located at the upper-left or lower-right side; determine an adaptive loop filter including filter coefficients for the current pixel and the neighboring pixels, based on values of the current pixel and the neighboring pixels; correct the value of the current pixel by using the values of the neighboring pixels by applying the adaptive loop filter to the current pixel; and decode a current block including the current pixel.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20837156.7A EP3998772A4 (en) | 2019-07-11 | 2020-07-10 | VIDEO DECODING METHOD AND APPARATUS, AND VIDEO CODING METHOD AND APPARATUS |
BR112022000214A BR112022000214A2 (pt) | 2019-07-11 | 2020-07-10 | Método de decodificação de vídeo em |
CN202080050565.4A CN114097227A (zh) | 2019-07-11 | 2020-07-10 | 视频解码方法和设备以及视频编码方法和设备 |
AU2020309443A AU2020309443A1 (en) | 2019-07-11 | 2020-07-10 | Video decoding method and apparatus, and video encoding method and apparatus |
MX2022000354A MX2022000354A (es) | 2019-07-11 | 2020-07-10 | Metodo y aparato de decodificacion de video, y metodo y aparato de codificacion de video. |
KR1020227000730A KR20220031002A (ko) | 2019-07-11 | 2020-07-10 | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 |
US17/573,181 US12081772B2 (en) | 2019-07-11 | 2022-01-11 | Video decoding method and apparatus, and video encoding method and apparatus |
AU2023270341A AU2023270341A1 (en) | 2019-07-11 | 2023-11-24 | Video decoding method and apparatus, and video encoding method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962872811P | 2019-07-11 | 2019-07-11 | |
US62/872,811 | 2019-07-11 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/573,181 Continuation US12081772B2 (en) | 2019-07-11 | 2022-01-11 | Video decoding method and apparatus, and video encoding method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021006692A1 true WO2021006692A1 (ko) | 2021-01-14 |
Family
ID=74114655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/009085 WO2021006692A1 (ko) | 2019-07-11 | 2020-07-10 | 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치 |
Country Status (8)
Country | Link |
---|---|
US (1) | US12081772B2 (ko) |
EP (1) | EP3998772A4 (ko) |
KR (1) | KR20220031002A (ko) |
CN (1) | CN114097227A (ko) |
AU (2) | AU2020309443A1 (ko) |
BR (1) | BR112022000214A2 (ko) |
MX (1) | MX2022000354A (ko) |
WO (1) | WO2021006692A1 (ko) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110494863B (zh) | 2018-03-15 | 2024-02-09 | Nvidia Corporation | Determining drivable free space for autonomous vehicles |
CN113811886B (zh) | 2019-03-11 | 2024-03-19 | Nvidia Corporation | Intersection detection and classification in autonomous machine applications |
KR20220020268A (ko) | 2019-06-14 | 2022-02-18 | Beijing Bytedance Network Technology Co., Ltd. | Handling of video unit boundaries and virtual boundaries |
CN113994671B (zh) | 2019-06-14 | 2024-05-10 | Beijing Bytedance Network Technology Co., Ltd. | Processing of video unit boundaries and virtual boundaries based on color format |
US11436837B2 (en) * | 2019-06-25 | 2022-09-06 | Nvidia Corporation | Intersection region detection and classification for autonomous machine applications |
JP7291846B2 (ja) | 2019-07-09 | 2023-06-15 | Beijing Bytedance Network Technology Co., Ltd. | Sample determination for adaptive loop filtering |
WO2021004542A1 (en) | 2019-07-11 | 2021-01-14 | Beijing Bytedance Network Technology Co., Ltd. | Sample padding in adaptive loop filtering |
EP3984219A4 (en) | 2019-07-15 | 2022-08-17 | Beijing Bytedance Network Technology Co., Ltd. | CLASSIFICATION IN AN ADAPTIVE LOOP FILTERING |
US11698272B2 (en) | 2019-08-31 | 2023-07-11 | Nvidia Corporation | Map creation and localization for autonomous driving applications |
CN114430902B (zh) | 2019-09-22 | 2023-11-10 | Beijing Bytedance Network Technology Co., Ltd. | Padding process in adaptive loop filtering |
JP7326600B2 (ja) * | 2019-09-27 | 2023-08-15 | Beijing Bytedance Network Technology Co., Ltd. | Adaptive loop filtering between different video units |
JP7454042B2 (ja) | 2019-10-10 | 2024-03-21 | Beijing Bytedance Network Technology Co., Ltd. | Padding process at unavailable sample positions in adaptive loop filtering |
US11978266B2 (en) | 2020-10-21 | 2024-05-07 | Nvidia Corporation | Occupant attentiveness and cognitive load monitoring for autonomous and semi-autonomous driving applications |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9525884B2 (en) * | 2010-11-02 | 2016-12-20 | Hfi Innovation Inc. | Method and apparatus of slice boundary filtering for high efficiency video coding |
US9148663B2 (en) | 2011-09-28 | 2015-09-29 | Electronics And Telecommunications Research Institute | Method for encoding and decoding images based on constrained offset compensation and loop filter, and apparatus therefor |
US9462298B2 (en) * | 2011-10-21 | 2016-10-04 | Qualcomm Incorporated | Loop filtering around slice boundaries or tile boundaries in video coding |
CN103891292B (zh) * | 2011-10-24 | 2018-02-02 | HFI Innovation Inc. | Loop filter processing method for video data and apparatus thereof |
US20130128986A1 (en) * | 2011-11-23 | 2013-05-23 | Mediatek Inc. | Method and Apparatus of Slice Boundary Padding for Loop Filtering |
US8983218B2 (en) * | 2012-04-11 | 2015-03-17 | Texas Instruments Incorporated | Virtual boundary processing simplification for adaptive loop filtering (ALF) in video coding |
US20130343447A1 (en) * | 2012-06-25 | 2013-12-26 | Broadcom Corporation | Adaptive loop filter (ALF) padding in accordance with video coding |
KR102276854B1 (ko) * | 2014-07-31 | 2021-07-13 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus using in-loop filter parameter prediction, and video decoding method and apparatus therefor |
CN104702963B (zh) * | 2015-02-13 | 2017-11-24 | Peking University | Boundary processing method and apparatus for adaptive loop filtering |
US10455249B2 (en) * | 2015-03-20 | 2019-10-22 | Qualcomm Incorporated | Downsampling process for linear model prediction mode |
US10448015B2 (en) * | 2015-06-16 | 2019-10-15 | Lg Electronics Inc. | Method and device for performing adaptive filtering according to block boundary |
US10484712B2 (en) * | 2016-06-08 | 2019-11-19 | Qualcomm Incorporated | Implicit coding of reference line index used in intra prediction |
CN109792541A (zh) * | 2016-10-05 | 2019-05-21 | Telefonaktiebolaget LM Ericsson (publ) | De-ringing filter for video coding |
CA3059870A1 (en) * | 2017-04-11 | 2018-10-18 | Vid Scale, Inc. | 360-degree video coding using face continuities |
CN109600611B (zh) * | 2018-11-09 | 2021-07-13 | Beijing Dajia Internet Information Technology Co., Ltd. | Loop filtering method, loop filtering apparatus, electronic device, and readable medium |
JP7291846B2 (ja) * | 2019-07-09 | 2023-06-15 | 北京字節跳動網絡技術有限公司 | 適応ループフィルタリングのためのサンプル決定 |
US11432015B2 (en) | 2019-07-11 | 2022-08-30 | Qualcomm Incorporated | Adaptive loop filtering across raster-scan slices |
- 2020
  - 2020-07-10 WO PCT/KR2020/009085 patent/WO2021006692A1/ko unknown
  - 2020-07-10 BR BR112022000214A patent/BR112022000214A2/pt unknown
  - 2020-07-10 EP EP20837156.7A patent/EP3998772A4/en active Pending
  - 2020-07-10 MX MX2022000354A patent/MX2022000354A/es unknown
  - 2020-07-10 AU AU2020309443A patent/AU2020309443A1/en not_active Abandoned
  - 2020-07-10 KR KR1020227000730A patent/KR20220031002A/ko unknown
  - 2020-07-10 CN CN202080050565.4A patent/CN114097227A/zh active Pending
- 2022
  - 2022-01-11 US US17/573,181 patent/US12081772B2/en active Active
- 2023
  - 2023-11-24 AU AU2023270341A patent/AU2023270341A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012213128A (ja) * | 2011-03-24 | 2012-11-01 | Sony Corp | Image processing apparatus and method |
KR20130034614A (ko) * | 2011-09-28 | 2013-04-05 | Electronics and Telecommunications Research Institute | Method and apparatus for image encoding and decoding based on constrained offset compensation and loop filter |
KR20190003497A (ko) * | 2016-05-02 | 2019-01-09 | Sony Corporation | Image processing apparatus and image processing method |
WO2019089695A1 (en) * | 2017-11-01 | 2019-05-09 | Vid Scale, Inc. | Methods for simplifying adaptive loop filter in video coding |
KR20190057910A (ko) * | 2017-11-21 | 2019-05-29 | Digital Insight Co., Ltd. | Video coding method and apparatus using an adaptive loop filter |
Non-Patent Citations (1)
Title |
---|
See also references of EP3998772A4 * |
Also Published As
Publication number | Publication date |
---|---|
AU2023270341A1 (en) | 2023-12-14 |
MX2022000354A (es) | 2022-02-03 |
US20220132145A1 (en) | 2022-04-28 |
KR20220031002A (ko) | 2022-03-11 |
AU2020309443A1 (en) | 2022-02-10 |
US12081772B2 (en) | 2024-09-03 |
BR112022000214A2 (pt) | 2022-02-22 |
EP3998772A1 (en) | 2022-05-18 |
EP3998772A4 (en) | 2023-06-07 |
CN114097227A (zh) | 2022-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021006692A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2019172676A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2020040619A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2020027551A1 (ko) | | Image encoding method and apparatus, and image decoding method and apparatus |
WO2019066384A1 (ko) | | Video decoding method and apparatus using cross-component prediction, and video encoding method and apparatus using cross-component prediction |
WO2019143093A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2020256521A1 (ko) | | Video encoding method and apparatus, and video decoding method and apparatus, for performing post-reconstruction filtering in a constrained prediction mode |
WO2019009502A1 (ko) | | Video encoding method and apparatus, and video decoding method and apparatus |
WO2020235951A1 (ko) | | Image encoding method and apparatus, and image decoding method and apparatus |
WO2017090968A1 (ko) | | Method and apparatus for encoding/decoding an image |
WO2019088700A1 (ko) | | Image encoding method and apparatus, and image decoding method and apparatus |
WO2019066472A1 (ko) | | Image encoding method and apparatus, and image decoding method and apparatus |
WO2019135558A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2021141451A1 (ko) | | Video decoding method and apparatus for obtaining a quantization parameter, and video encoding method and apparatus for transmitting a quantization parameter |
WO2020076130A1 (ko) | | Video encoding and decoding method using tiles and tile groups, and video encoding and decoding apparatus using tiles and tile groups |
WO2020013627A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2019216712A1 (ko) | | Video encoding method and apparatus, and video decoding method and apparatus |
WO2019066514A1 (ko) | | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
WO2019209028A1 (ko) | | Video encoding method and apparatus, and video decoding method and apparatus |
WO2019066574A1 (ko) | | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
WO2020117010A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2017195945A1 (ko) | | Method and apparatus for encoding/decoding an image |
WO2020189980A1 (ko) | | Image encoding method and apparatus, and image decoding method and apparatus |
WO2020189978A1 (ko) | | Video decoding method and apparatus, and video encoding method and apparatus |
WO2019216710A1 (ko) | | Method and apparatus for partitioning an image for image encoding and decoding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20837156 Country of ref document: EP Kind code of ref document: A1 |
 | REG | Reference to national code | Ref country code: BR Ref legal event code: B01A Ref document number: 112022000214 Country of ref document: BR |
 | ENP | Entry into the national phase | Ref document number: 2020837156 Country of ref document: EP Effective date: 20220211 |
 | ENP | Entry into the national phase | Ref document number: 112022000214 Country of ref document: BR Kind code of ref document: A2 Effective date: 20220106 |