WO2020175967A1 - Image encoding and decoding apparatus, and image encoding and decoding method thereby - Google Patents

Image encoding and decoding apparatus, and image encoding and decoding method thereby

Info

Publication number
WO2020175967A1
Authority
WO
WIPO (PCT)
Prior art keywords
coding unit
image
list
unit
block
Prior art date
Application number
PCT/KR2020/002924
Other languages
English (en)
French (fr)
Inventor
최웅일
박민수
박민우
정승수
최기호
최나래
템즈아니쉬
표인지
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사 filed Critical 삼성전자주식회사
Priority to KR1020217027776A priority Critical patent/KR20210122818A/ko
Priority to US17/434,657 priority patent/US20230103665A1/en
Priority to EP20763453.6A priority patent/EP3934252A4/en
Priority to BR112021016926A priority patent/BR112021016926A2/pt
Priority to MX2021010368A priority patent/MX2021010368A/es
Priority to CN202080017337.7A priority patent/CN113508590A/zh
Publication of WO2020175967A1 publication Critical patent/WO2020175967A1/ko


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/463 Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/58 Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • This disclosure relates to the field of encoding and decoding of images. More specifically, this disclosure relates to a method and apparatus for encoding an image, and a method and apparatus for decoding an image using the hierarchical structure of the image.
  • Each block can be encoded and decoded through, for example, inter prediction or intra prediction.
  • Inter prediction is a method of compressing an image by removing temporal redundancy between images, and motion estimation encoding is a typical example. Motion estimation encoding predicts the blocks of the current image using at least one reference image. A reference block that is most similar to the current block can be searched for within a predetermined search range by using a predetermined evaluation function.
  • The current block is predicted based on the reference block, and the prediction block generated as a result of the prediction is subtracted from the current block to create a residual block.
  • In this case, to perform the prediction more accurately, interpolation is performed on the reference image to generate pixels in sub-pel units smaller than the integer-pel unit, and inter prediction can be performed based on the pixels in sub-pel units.
  • A differential motion vector can be signaled to the decoder through a predefined method.
  • An apparatus for encoding and decoding an image according to an embodiment, and a method for encoding and decoding an image thereby, address the technical problem of encoding and decoding an image at a low bit rate using the hierarchical structure of the image.
  • An image decoding method according to an embodiment includes the steps of: obtaining, from a sequence parameter set of a bitstream, information representing a plurality of first reference picture lists for a picture sequence including the current picture; obtaining, from a group header of the bitstream, an indicator for the current block group including the current block in the current picture; obtaining a second reference picture list based on the first reference picture list indicated by the indicator among the plurality of first reference picture lists; and prediction-decoding a lower block of the current block based on a reference picture included in the second reference picture list.
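The following is a minimal, hypothetical sketch of the list selection described in the preceding summary: several candidate reference picture lists are signalled once in the sequence parameter set, and a per-block-group indicator from the group header picks the candidate used as the basis of the second reference picture list. All names and data structures here are illustrative assumptions, not the patent's syntax.

```python
# Hypothetical sketch: pick the candidate list signalled for the block group.
def select_second_ref_list(first_ref_lists, group_indicator):
    """Return a copy of the candidate list chosen by the group-header indicator."""
    if not 0 <= group_indicator < len(first_ref_lists):
        raise ValueError("indicator out of range of signalled candidate lists")
    # The selected candidate may still be modified (see the embodiments below);
    # here it is simply copied as the second reference picture list.
    return list(first_ref_lists[group_indicator])

# Example: three candidate lists signalled in the sequence parameter set,
# identified here by picture order count (POC) values.
sps_candidate_lists = [[8, 4, 2], [8, 6], [16, 8]]
second_list = select_second_ref_list(sps_candidate_lists, group_indicator=1)
print(second_list)  # [8, 6]
```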
  • An image encoding and decoding apparatus can encode and decode an image at a low bit rate using a hierarchical structure of the image.
  • FIG. 1 is a block diagram of an image decoding apparatus according to an embodiment.
  • FIG. 2 is a block diagram of an image encoding apparatus according to an embodiment.
  • FIG. 3 is a diagram illustrating a process of determining at least one coding unit by dividing a current coding unit by an image decoding apparatus according to an embodiment.
  • FIG. 4 is a diagram illustrating a process of determining at least one coding unit by dividing a coding unit in a non-square shape by an image decoding apparatus according to an embodiment.
  • FIG. 5 is a diagram illustrating a process of dividing a coding unit based on at least one of block type information and division type mode information by an image decoding apparatus according to an embodiment.
  • FIG. 6 illustrates a method by which the image decoding apparatus determines a predetermined coding unit among an odd number of coding units, according to an embodiment.
  • FIG. 7 is a diagram illustrating a sequence in which a plurality of coding units are processed when a video decoding apparatus determines a plurality of coding units by dividing a current coding unit according to an embodiment.
  • FIG. 8 shows a process in which the video decoding apparatus determines that the current coding unit is divided into odd number of coding units when the coding units cannot be processed in a predetermined order according to an embodiment.
  • FIG. 9 illustrates a process in which the image decoding apparatus determines at least one coding unit by dividing a first coding unit, according to an embodiment.
  • FIG. 11 illustrates a case in which the image decoding apparatus divides a coding unit into four square-shaped coding units when the division type mode information so indicates, according to an embodiment.
  • FIG. 12 illustrates that the processing order between a plurality of coding units may vary according to a process of dividing the coding units according to an embodiment.
  • FIG. 13 illustrates that a plurality of coding units are recursively divided, according to an embodiment.
  • FIG. 14 illustrates a depth and an index (hereinafter referred to as PID) for classifying coding units, which may be determined according to the shape and size of coding units, according to an embodiment.
  • FIG. 15 illustrates a plurality of predetermined data units included in a picture according to an embodiment.
  • Fig. 16 shows a combination of a type in which coding units can be divided according to an embodiment.
  • FIG. 17 shows various types of coding units that can be determined based on split mode information that can be expressed as a binary code, according to an embodiment.
  • FIG. 18 shows other forms of coding units that can be determined based on split mode information that can be expressed as a binary code, according to an embodiment.
  • FIG. 19 is a block diagram of an image encoding and decoding system performing loop filtering.
  • FIG. 20 is a diagram showing a configuration of an image decoding apparatus according to an embodiment.
  • FIG. 21 is an exemplary diagram showing the structure of a bitstream generated according to the hierarchical structure of an image.
  • Fig. 22 is a diagram showing a slice, tile and CTU determined in the current image.
  • FIG. 23 is a diagram for explaining a method of setting slices in the current image.
  • FIG. 24 is a diagram for explaining another method of setting slices in the current image.
  • FIG. 25 is an exemplary diagram showing a list of a plurality of first reference images acquired through a sequence parameter set.
  • FIG. 26 is a diagram for explaining a method of obtaining a second reference image list.
  • FIG. 27 is a diagram for explaining a method of obtaining a second reference image list.
  • FIG. 28 is a diagram for explaining another method of obtaining a second reference image list.
  • FIG. 29 is a diagram for explaining another method of obtaining a second reference image list.
  • FIG. 30 is a diagram for explaining another method of obtaining a second reference image list.
  • FIG. 31 shows a plurality of post-processing parameter sets used for luma mapping or adaptive loop filtering.
  • FIG. 32 is a diagram for explaining an image decoding method according to an embodiment.
  • FIG. 33 is a diagram showing a configuration of an image encoding apparatus according to an embodiment.
  • FIG. 34 is a diagram for explaining an image encoding method according to an embodiment.
  • In an embodiment, based on the first reference image list indicated by the indicator among the plurality of first reference image lists, lower blocks included in the next block group in the current picture may also be predictively decoded.
  • In an embodiment, the step of obtaining the second reference image list may include determining the first reference image list indicated by the indicator as the second reference image list.
  • In an embodiment, the first reference image list indicated by the indicator includes a first type of reference image and a second type of reference image, and the step of obtaining the second reference image list may include obtaining the second reference image list by excluding the second type of reference image from the first reference image list indicated by the indicator.
  • In an embodiment, the first reference image list indicated by the indicator includes a first type of reference image and a second type of reference image, and the step of obtaining the second reference image list may include excluding the second type of reference image from the first reference image list indicated by the indicator, adding the second type of reference image indicated by the POC (picture order count)-related value obtained from the group header to the first reference image list indicated by the indicator, and thereby obtaining the second reference image list.
  • In an embodiment, the first reference image list indicated by the indicator includes only the first type of reference image, and the step of obtaining the second reference image list may include obtaining the second reference image list by adding the second type of reference image indicated by the POC-related value obtained from the group header to the first reference image list indicated by the indicator.
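A minimal sketch of the list modification in the embodiments above, assuming the first type corresponds to short-term reference pictures and the second type to long-term reference pictures identified by a POC-related value: the second-type pictures can be dropped from the indicated candidate list and, optionally, a second-type picture signalled in the group header can be appended. The RefPic type and field names are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class RefPic:
    poc: int               # picture order count
    long_term: bool = False  # "second type" in this sketch

def build_second_ref_list(candidate, header_poc=None, all_pics=None):
    # Keep only first-type (short-term) pictures from the indicated list.
    second = [p for p in candidate if not p.long_term]
    # Optionally add the second-type picture signalled in the group header.
    if header_poc is not None and all_pics is not None:
        second.append(next(p for p in all_pics
                           if p.poc == header_poc and p.long_term))
    return second

dpb = [RefPic(8), RefPic(6), RefPic(0, long_term=True)]
candidate = [dpb[0], dpb[2], dpb[1]]
print([p.poc for p in build_second_ref_list(candidate, header_poc=0, all_pics=dpb)])
# [8, 6, 0]
```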
  • In an embodiment, reference images of one of the first type and the second type may be assigned an index larger than the index assigned to reference images of the other type.
  • In an embodiment, the image decoding method further includes obtaining, from the group header, order information of the first type reference images and the second type reference images, and indices according to the order information may be assigned to the first type reference images and the second type reference images.
  • In an embodiment, the image decoding method further includes obtaining, from the group header, difference values between the POC-related values of at least some of the reference images included in the first reference image list indicated by the indicator and the POC-related values of at least some of the reference images to be included in the second reference image list, and the step of obtaining the second reference image list may include obtaining the second reference image list by replacing, based on the obtained difference values, at least some of the reference images included in the first reference image list indicated by the indicator.
  • the address information includes identification information of a lower right block among blocks included in each of the block groups
  • In an embodiment, the step of setting the block groups includes: setting a first block group including an upper-left block located at the upper left among the plurality of blocks and the lower-right block indicated by the identification information of the lower-right block; identifying the upper-left block of a second block group based on the identification information of the blocks included in the first block group; and setting the second block group including the identified upper-left block and the lower-right block indicated by the identification information of the lower-right block.
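A hypothetical sketch of the rectangular block-group derivation described above: each group is given by the index of its lower-right block, the first group starts at the top-left block of the picture, and the top-left block of the next group is inferred from the blocks already assigned. Raster-scan block indexing on a small grid and the "first unassigned block" heuristic are illustrative assumptions.

```python
def derive_block_groups(cols, rows, bottom_right_ids):
    assigned = set()
    groups = []
    for br in bottom_right_ids:
        # Top-left of this group: first unassigned block in raster order.
        tl = min(i for i in range(cols * rows) if i not in assigned)
        tl_x, tl_y = tl % cols, tl // cols
        br_x, br_y = br % cols, br // cols
        group = [y * cols + x
                 for y in range(tl_y, br_y + 1)
                 for x in range(tl_x, br_x + 1)]
        assigned.update(group)
        groups.append(group)
    return groups

# 4x3 grid of blocks, two groups whose lower-right blocks are 7 and 11.
for g in derive_block_groups(4, 3, [7, 11]):
    print(g)
# [0, 1, 2, 3, 4, 5, 6, 7]
# [8, 9, 10, 11]
```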
  • In an embodiment, the image decoding method further includes obtaining at least one post-processing parameter set for luma mapping from the bitstream, and obtaining a post-processing parameter set applied to luma mapping for the prediction samples of the lower block obtained as a result of the prediction decoding.
  • An apparatus for decoding an image according to an embodiment obtains, from a sequence parameter set of a bitstream, information representing a plurality of first reference picture lists for a picture sequence including the current picture, obtains, from a group header of the bitstream, an indicator for the current block group including the current block, obtains a second reference picture list based on the first reference picture list indicated by the indicator, and prediction-decodes a lower block of the current block based on a reference picture included in the second reference picture list.
  • An image encoding method according to an embodiment includes the steps of: constructing a plurality of first reference image lists for an image sequence including the current image; selecting, from among the plurality of first reference image lists, a first reference image list for the current block group including the current block in the current image; obtaining a second reference image list based on the selected first reference image list; and prediction-encoding a lower block of the current block based on a reference image included in the second reference image list.
  • When one component is referred to as being "connected" or "coupled" to another component, the one component may be directly connected or coupled to the other component, but unless there is a description to the contrary, it should be understood that the component may also be connected or coupled through another component in between.
  • Two or more components expressed as "~ unit", "module", etc. may be merged into one component, or one component may be subdivided into two or more components for each subdivided function.
  • Each component may additionally perform some or all of the functions of other components in addition to its own main function, and some of the main functions of each component may of course be performed exclusively by another component.
  • Hereinafter, a "sample" or "signal" is data assigned to a sampling position of an image, that is, data to be processed. For example, pixel values of an image in the spatial domain and transform coefficients in the transform domain may be samples. A unit including at least one such sample can be defined as a block.
  • Hereinafter, an image encoding method and apparatus and an image decoding method and apparatus based on coding units and transform units of a tree structure according to an embodiment are disclosed.
  • FIG. 1 is a block diagram of an image decoding apparatus 100 according to an embodiment.
  • the image decoding apparatus 100 may include a bitstream acquisition unit 110 and a decoding unit 120.
  • the bitstream acquisition unit 110 and the decoding unit 120 may include at least one processor.
  • the bitstream acquisition unit 110 and the decoding unit 120 may include a memory for storing instructions to be executed by at least one processor.
  • the bitstream acquisition unit 110 may receive a bitstream.
  • The bitstream includes information obtained by encoding the image by the image encoding apparatus 200 to be described later. The bitstream may be transmitted from the image encoding apparatus 200.
  • The image encoding apparatus 200 and the image decoding apparatus 100 may be connected by wire or wirelessly.
  • the bitstream acquisition unit 110 may receive a bitstream through wired or wireless communication.
  • The bitstream acquisition unit 110 may also receive a bitstream from a storage medium such as an optical medium or a hard disk.
  • the decoder 120 may restore an image based on information obtained from the received bitstream.
  • the decoding unit 120 may obtain a syntax element for restoring an image from the bitstream.
  • the decoding unit 120 may restore an image based on the syntax element.
  • the acquisition unit 110 may receive a bitstream.
  • The image decoding apparatus 100 may perform an operation of obtaining, from the bitstream, a bin string corresponding to the split mode of a coding unit.
  • the video decoding apparatus 100 performs an operation of determining a division rule for a coding unit.
  • The image decoding apparatus 100 may perform an operation of dividing the coding unit into a plurality of coding units based on at least one of the bin string corresponding to the split mode and the division rule.
  • The image decoding apparatus 100 may determine an allowable first range of the size of the coding unit according to the ratio of the width and height of the coding unit in order to determine the division rule.
  • the image decoding apparatus 100 may determine an allowable second range of the size of the coding unit according to the division mode mode of the coding unit in order to determine the division rule.
  • a picture can be divided into one or more slices or one or more tiles.
  • One slice or a tile can be a sequence of one or more coding tree units (CTU). Accordingly, one slice may include one or more tiles, and one slice may include one or more maximum coding units.
  • a slice including one or a plurality of tiles may be determined within a picture.
  • the maximum coding block refers to an NxN block containing NxN samples.
  • the color component can be divided into one or more maximum coding blocks.
  • The maximum coding unit is a unit including the maximum coding block of luma samples, two corresponding maximum coding blocks of chroma samples, and syntax structures used to code the luma samples and the chroma samples.
  • the maximum coding unit includes the maximum coding block of the monochrome sample and the syntax structures used to code the monochrome samples.
  • the maximum coding unit is a unit including syntax structures used to code the picture and its samples.
  • One maximum coding block can be divided into MxN coding blocks including MxN samples (M and N are integers).
  • A coding unit (CU) is a unit including a coding block of luma samples, two corresponding coding blocks of chroma samples, and syntax structures used to encode the luma samples and the chroma samples. For a monochrome picture, the coding unit is a unit including a coding block of monochrome samples and syntax structures used to code the monochrome samples. For a picture coded with color planes separated by color components, the coding unit is a unit including the picture and the syntax structures used to code the samples of the picture.
  • As described above, the maximum coding block and the maximum coding unit are concepts distinguished from each other, and likewise the coding block and the coding unit are distinguished from each other. That is, the (maximum) coding unit refers to the data structure including the (maximum) coding block containing the corresponding samples and the syntax structures corresponding thereto. However, since a person skilled in the art can understand that a (maximum) coding unit or a (maximum) coding block refers to a block of a predetermined size including a predetermined number of samples, in the following specification the maximum coding block and the maximum coding unit, or the coding block and the coding unit, are mentioned without distinction unless otherwise specified.
  • An image can be divided into maximum coding units (CTUs).
  • the size of the coding unit can be determined based on information obtained from the bitstream.
  • The shape of the maximum coding unit may be a square of the same size, but is not limited thereto.
  • information about the maximum size of a luma coded block can be obtained from a bitstream.
  • The maximum size of the luma coding block indicated by the information about the maximum size of the luma coding block may be, for example, 4x4, 8x8, 16x16, or the like.
  • In an embodiment, information about the maximum size of a luma coding block that can be split into two and information about the luma block size difference can be obtained from the bitstream. The information about the luma block size difference may indicate the size difference between the luma maximum coding unit and the maximum luma coding block that can be split into two. Accordingly, the size of the luma maximum coding unit can be determined by combining the information about the maximum size of the luma coding block that can be split into two and the information about the luma block size difference obtained from the bitstream.
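The combination just described can be sketched as follows, under the assumption that both values are signalled as log2 sizes, so that the maximum luma coding unit size is recovered by adding the size difference to the maximum size of the luma coding block that can be split into two. This is an illustrative interpretation, not the exact bitstream syntax.

```python
def max_luma_ctu_size(log2_max_binary_split_luma, log2_size_difference):
    # Assumed derivation: shift by the sum of the two signalled log2 values.
    return 1 << (log2_max_binary_split_luma + log2_size_difference)

# e.g. a 64x64 maximum binary-splittable block and a difference of 1
# would give a 128x128 maximum luma coding unit under this assumption.
print(max_luma_ctu_size(6, 1))  # 128
```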
  • Using the size of the luma maximum coding unit, the size of the chroma maximum coding unit can also be determined. For example, according to a 4:2:0 color format, the size of a chroma block can be half the size of the luma block, and likewise, the size of the chroma maximum coding unit can be half the size of the luma maximum coding unit.
  • In an embodiment, the maximum size of a luma coding block that can be binary split may be determined variably. In contrast, the maximum size of a luma coding block that can be ternary split can be fixed. For example, the maximum size of a luma coding block that can be ternary split in an I picture may be 32x32, and the maximum size of a luma coding block that can be ternary split in a P picture or B picture may be 64x64.
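A minimal sketch of the constraint mentioned above: the maximum luma block size eligible for ternary splitting depends on the picture type (32x32 for I pictures, 64x64 for P/B pictures in the example). The function name and the use of the block's long side are assumptions made for illustration.

```python
def ternary_split_allowed(width, height, picture_type):
    max_size = 32 if picture_type == "I" else 64
    return max(width, height) <= max_size

print(ternary_split_allowed(64, 32, "I"))  # False
print(ternary_split_allowed(64, 32, "B"))  # True
```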
  • the maximum coding unit can be hierarchically divided into coding units based on the split mode information obtained from the bitstream.
  • As the split mode information, at least one of information indicating whether quad splitting is performed, information indicating whether splitting is performed, split direction information, and split type information may be obtained from the bitstream.
  • information indicating whether or not the current coding unit is quad split can indicate whether the current coding unit is to be quad split (QUAD_SPLIT) or not quad split.
  • The division direction information indicates that the current coding unit is divided in either the horizontal direction or the vertical direction.
  • The division type information indicates whether the current coding unit is divided by binary splitting or ternary splitting.
  • The division mode of the current coding unit may be determined according to the division direction information and the division type information. The division mode when the current coding unit is binary split in the horizontal direction is binary horizontal division (SPLIT_BT_HOR), the division mode when it is ternary split in the horizontal direction is ternary horizontal division (SPLIT_TT_HOR), the division mode when it is binary split in the vertical direction is binary vertical division (SPLIT_BT_VER), and the division mode when it is ternary split in the vertical direction is ternary vertical division (SPLIT_TT_VER).
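An illustrative mapping from the split direction and split type described above to a named split mode; the enum member names mirror those in the text, while the mapping function itself is only a sketch, not the codec's API.

```python
from enum import Enum

class SplitMode(Enum):
    SPLIT_BT_HOR = "binary horizontal"
    SPLIT_TT_HOR = "ternary horizontal"
    SPLIT_BT_VER = "binary vertical"
    SPLIT_TT_VER = "ternary vertical"

def split_mode(direction, split_type):
    # direction: "horizontal" or "vertical"; split_type: "binary" or "ternary"
    table = {
        ("horizontal", "binary"): SplitMode.SPLIT_BT_HOR,
        ("horizontal", "ternary"): SplitMode.SPLIT_TT_HOR,
        ("vertical", "binary"): SplitMode.SPLIT_BT_VER,
        ("vertical", "ternary"): SplitMode.SPLIT_TT_VER,
    }
    return table[(direction, split_type)]

print(split_mode("vertical", "ternary"))  # SplitMode.SPLIT_TT_VER
```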
  • In an embodiment, the image decoding apparatus 100 may obtain the split mode information from the bitstream as one bin string. The form of the bitstream received by the image decoding apparatus 100 may include a fixed length binary code, a unary code, a truncated unary code, a predetermined binary code, and the like.
  • A bin string is a binary sequence of information. A bin string can consist of at least one bit. The image decoding apparatus 100 may obtain the split mode information corresponding to the bin string based on the division rule. Based on one bin string, the image decoding apparatus 100 may determine whether to quad split the coding unit, whether not to split it, or the split direction and split type.
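A sketch of reading one bin string and turning it into a split decision. The binarization below (first bin: quad split flag, second bin: split flag, third bin: direction, fourth bin: binary/ternary) is only a plausible illustration; the actual bin order is governed by the division rule signalled in the bitstream, not by this example.

```python
def parse_split(bins):
    it = iter(bins)
    if next(it) == 1:
        return "QUAD_SPLIT"
    if next(it) == 0:
        return "NO_SPLIT"
    direction = "VER" if next(it) == 1 else "HOR"
    kind = "TT" if next(it) == 1 else "BT"
    return f"SPLIT_{kind}_{direction}"

print(parse_split([0, 1, 1, 0]))  # SPLIT_BT_VER
print(parse_split([1]))           # QUAD_SPLIT
```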
  • The coding unit may be smaller than or equal to the maximum coding unit. Since the maximum coding unit is a coding unit having the maximum size, it is also one of the coding units. When the split mode information for the maximum coding unit indicates that no splitting is performed, the coding unit determined in the maximum coding unit has the same size as the maximum coding unit. When the split mode information for the maximum coding unit indicates splitting, the maximum coding unit can be divided into coding units. Also, when the split mode information for a coding unit indicates splitting, the coding units can be divided into coding units of smaller size.
  • However, the division of an image is not limited thereto, and the maximum coding unit and the coding unit may not be distinguished. The division of the coding unit will be described in more detail with reference to FIGS. 3 to 16.
  • More than one prediction block for prediction can also be determined from the coding unit.
  • the prediction block may be less than or equal to the coding unit.
  • one or more transform blocks for transformation may be determined from the coding unit.
  • The transform block may be equal to or less than the coding unit.
  • The shapes and sizes of the transform block and the prediction block may not be related to each other.
  • the prediction may be performed using the coding unit as the prediction block in the coding unit.
  • the transformation may be performed using the coding unit as the transformation block in the coding unit.
  • the current block and the neighboring block of the present disclosure may represent one of a maximum coding unit, a coding unit, a prediction block, and a transform block.
  • the current block or the current coding unit is a block that is currently being decoded or encoded or a block that is currently being divided.
  • a neighboring block may be a block restored before the current block.
  • A neighboring block can be spatially or temporally adjacent to the current block. The neighboring block can be located at one of the lower left, left, upper left, upper, upper right, and lower right of the current block.
  • FIG 3 illustrates a process in which the image decoding apparatus 100 determines at least one coding unit by dividing a current coding unit according to an embodiment.
  • The block type may include 4Nx4N, 4Nx2N, 2Nx4N, 4NxN, Nx4N, 32NxN, Nx32N, 16NxN, Nx16N, 8NxN, or Nx8N, where N may be a positive integer.
  • The block type information is information indicating at least one of the shape of the coding unit and the ratio or size of its width and height.
  • the shape of the coding unit may include a square and a non-square.
  • When the lengths of the width and height of the coding unit are the same (i.e., when the block type of the coding unit is 4Nx4N), the image decoding apparatus 100 may determine the block type information of the coding unit as a square shape.
  • When the lengths of the width and height of the coding unit are different (i.e., when the block type of the coding unit is 4Nx2N, 2Nx4N, 4NxN, Nx4N, etc.), the image decoding apparatus 100 may determine the block type information of the coding unit as a non-square shape.
  • When the shape of the coding unit is non-square, the image decoding apparatus 100 may determine the ratio of the width and height in the block type information of the coding unit to be at least one of 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 1:32, and 32:1.
  • In addition, the image decoding apparatus 100 may determine whether the coding unit is in the horizontal direction or the vertical direction. Further, the image decoding apparatus 100 may determine the size of the coding unit based on at least one of the length of the width, the length of the height, or the area of the coding unit.
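A small sketch of deriving block shape information from the width and height as described above: square versus non-square, the longer direction, and the width:height ratio. The return format is illustrative only.

```python
def block_shape_info(width, height):
    if width == height:
        return ("square", None, "1:1")
    direction = "horizontal" if width > height else "vertical"
    ratio = f"{width // height}:1" if width > height else f"1:{height // width}"
    return ("non-square", direction, ratio)

print(block_shape_info(64, 64))   # ('square', None, '1:1')
print(block_shape_info(32, 128))  # ('non-square', 'vertical', '1:4')
```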
  • the video decoding apparatus 100 can determine the shape of a coding unit using block shape information, and can determine in what shape the coding unit is divided by using the division mode information.
  • The method of dividing the coding unit indicated by the division type mode information may be determined according to which block type the block type information used by the image decoding apparatus 100 indicates.
  • In an embodiment, the image decoding apparatus 100 may obtain the split mode information from the bitstream. However, the present disclosure is not limited thereto, and the image decoding apparatus 100 and the image encoding apparatus 200 may determine pre-agreed split mode information based on the block shape information. The image decoding apparatus 100 may determine pre-agreed split mode information for the maximum coding unit or the minimum coding unit. For example, the image decoding apparatus 100 may determine the split mode information for the maximum coding unit to be quad split, and may determine the split mode information for the minimum coding unit to be "no split". Specifically, the image decoding apparatus 100 may determine the size of the maximum coding unit to be 256x256. The image decoding apparatus 100 may determine the pre-agreed split mode information to be quad split. Quad split is a split mode in which both the width and the height of a coding unit are bisected. Based on this split mode information, the image decoding apparatus 100 can obtain 128x128 coding units from the 256x256 maximum coding unit. Also, the image decoding apparatus 100 can determine the size of the minimum coding unit to be 4x4, and can obtain split mode information indicating "no split" for the minimum coding unit.
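A minimal sketch of the pre-agreed behaviour described above: the 256x256 maximum coding unit is quad split (both width and height are bisected), giving 128x128 coding units, and a 4x4 minimum coding unit is never split further. The stop rule below is illustrative, not the codec's actual recursion.

```python
def quad_split(width, height):
    # Quad split bisects both dimensions, producing four equal coding units.
    return [(width // 2, height // 2)] * 4

def split_once(width, height, min_size=4):
    if width <= min_size and height <= min_size:
        return [(width, height)]          # minimum coding unit: "no split"
    return quad_split(width, height)

print(split_once(256, 256))  # four 128x128 coding units
print(split_once(4, 4))      # [(4, 4)] - not split further
```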
  • In an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit is a square shape. For example, the image decoding apparatus 100 may determine, according to the split mode information, whether to not split the square coding unit, to split it vertically, to split it horizontally, or to split it into four coding units. Referring to FIG. 3, when the block shape information of the current coding unit 300 indicates a square shape, the decoding unit 120 may determine a coding unit 310a that is not split and has the same size as the current coding unit 300 according to split mode information indicating that no splitting is performed, or may determine split coding units (310b, 310c, 310d, etc.) based on split mode information indicating a predetermined splitting method.
  • According to an embodiment, the image decoding apparatus 100 may determine two coding units 310b obtained by splitting the current coding unit 300 in the vertical direction, based on split mode information indicating splitting in the vertical direction. The image decoding apparatus 100 may determine two coding units 310c obtained by splitting the current coding unit 300 in the horizontal direction, based on split mode information indicating splitting in the horizontal direction. The image decoding apparatus 100 may determine four coding units 310d obtained by splitting the current coding unit 300 in the vertical and horizontal directions, based on split mode information indicating splitting in the vertical and horizontal directions. In an embodiment, the image decoding apparatus 100 may determine three coding units 310e obtained by splitting the current coding unit 300 in the vertical direction, based on split mode information indicating ternary splitting in the vertical direction, and may determine three coding units 310f obtained by splitting the current coding unit 300 in the horizontal direction, based on split mode information indicating ternary splitting in the horizontal direction.
  • the division type in which the square coding unit can be divided should not be interpreted limited to the above-described shape, but may include various types in which the division type mode information can appear.
  • FIG. 4 illustrates a process in which the image decoding apparatus 100 determines at least one coding unit by dividing a non-square coding unit, according to an embodiment.
  • In an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit is a non-square shape.
  • The image decoding apparatus 100 may determine, according to the split mode information, whether to not split the non-square current coding unit or to split it by a predetermined method. Referring to FIG. 4, when the block shape information of the current coding unit 400 or 450 indicates a non-square shape, the image decoding apparatus 100 may determine a coding unit 410 or 460 having the same size as the current coding unit 400 or 450 according to split mode information indicating that no splitting is performed, or may determine split coding units (420a, 420b, 430a, 430b, 430c, 470a, 470b, 480a, 480b, 480c) based on split mode information indicating a predetermined splitting method.
  • A predetermined splitting method by which a non-square coding unit is split will be described in detail below through various embodiments.
  • In an embodiment, the image decoding apparatus 100 may determine the form in which a coding unit is split using the split mode information, and in this case, the split mode information may indicate the number of at least one coding unit generated by splitting the coding unit. Referring to FIG. 4, when the split mode information indicates that the current coding unit 400 or 450 is split into two coding units, the image decoding apparatus 100 may split the current coding unit 400 or 450 based on the split mode information to determine the two coding units 420a and 420b, or 470a and 470b, included in the current coding unit.
  • In an embodiment, when the image decoding apparatus 100 splits the non-square current coding unit 400 or 450 based on the split mode information, the image decoding apparatus 100 may split the current coding unit in consideration of the position of the long side of the non-square current coding unit 400 or 450. For example, the image decoding apparatus 100 may determine a plurality of coding units by splitting the current coding unit 400 or 450 in a direction that divides the long side of the current coding unit 400 or 450, in consideration of the shape of the current coding unit 400 or 450.
  • In an embodiment, when the split mode information indicates that the coding unit is split into an odd number of blocks (ternary split), the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450. For example, when the split mode information indicates that the current coding unit 400 or 450 is split into three coding units, the image decoding apparatus 100 may split the current coding unit 400 or 450 into three coding units 430a, 430b, and 430c, or 480a, 480b, and 480c.
  • the ratio of the width and height of the current coding unit (400 or 450) may be 4:1 or 1:4.
  • When the ratio of the width and height is 4:1, since the length of the width is longer than the length of the height, the block shape information may indicate the horizontal direction. When the ratio of the width and height is 1:4, since the length of the width is shorter than the length of the height, the block shape information may indicate the vertical direction. The image decoding apparatus 100 may determine to split the current coding unit into an odd number of blocks based on the split mode information. In addition, the image decoding apparatus 100 may determine the splitting direction of the current coding unit 400 or 450 based on the block shape information of the current coding unit 400 or 450.
  • For example, when the current coding unit 400 is in the vertical direction, the image decoding apparatus 100 may determine the coding units 430a, 430b, and 430c by splitting the current coding unit 400 in the horizontal direction. Also, when the current coding unit 450 is in the horizontal direction, the image decoding apparatus 100 may determine the coding units 480a, 480b, and 480c by splitting the current coding unit 450 in the vertical direction.
  • In an embodiment, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450, and not all of the determined coding units may have the same size. For example, among the determined odd number of coding units 430a, 430b, 430c, 480a, 480b, and 480c, the size of a predetermined coding unit 430b or 480b may differ from that of the other coding units 430a, 430c, 480a, and 480c. That is, the coding units that can be determined by splitting the current coding unit 400 or 450 may have a plurality of types of sizes, and in some cases, the odd number of coding units 430a, 430b, 430c, 480a, 480b, and 480c may each have different sizes.
  • In an embodiment, when the split mode information indicates that the coding unit is split into an odd number of blocks, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450, and furthermore, the image decoding apparatus 100 may place a predetermined restriction on at least one coding unit among the odd number of coding units generated by the splitting. Referring to FIG. 4, when the current coding unit 400 or 450 is split into three coding units 430a, 430b, 430c, 480a, 480b, and 480c, the image decoding apparatus 100 may make the decoding process for the coding unit 430b or 480b located at the center different from that of the other coding units 430a, 430c, 480a, and 480c. For example, the image decoding apparatus 100 may restrict the coding unit 430b or 480b located at the center so that, unlike the other coding units, it is not further split, or is split only a predetermined number of times.
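The following sketch illustrates a ternary split along the long side in which the middle partition is twice as large as the two outer partitions (a 1:2:1 split). The exact partition ratios are an assumption made for illustration; the text above only states that the centre unit may differ in size and may be subject to restrictions.

```python
def ternary_split(width, height):
    if width >= height:   # split along the long (horizontal) side
        return [(width // 4, height), (width // 2, height), (width // 4, height)]
    return [(width, height // 4), (width, height // 2), (width, height // 4)]

print(ternary_split(32, 128))  # [(32, 32), (32, 64), (32, 32)]
```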
  • FIG 5 illustrates a process in which the video decoding apparatus 100 divides a coding unit based on at least one of block type information and split type mode information according to an embodiment.
  • In an embodiment, the image decoding apparatus 100 may determine whether the square first coding unit 500 is split into coding units or not split, based on at least one of the block shape information and the split mode information. In an embodiment, when the split mode information indicates that the first coding unit 500 is split in the horizontal direction, the image decoding apparatus 100 may determine the second coding unit 510 by splitting the first coding unit 500 in the horizontal direction.
  • the first coding unit, second coding unit, and third coding unit used according to an embodiment are terms used to understand the relationship before and after division between coding units. For example, if the first coding unit is divided, the second coding unit can be determined, and if the second coding unit is divided, the third coding unit can be determined.
  • Hereinafter, it can be understood that the relationship among the first coding unit, the second coding unit, and the third coding unit used follows the characteristics described above.
  • In an embodiment, the image decoding apparatus 100 may determine whether the determined second coding unit 510 is split into coding units or not split, based on the split mode information. Referring to FIG. 5, the image decoding apparatus 100 may split the non-square second coding unit 510, which was determined by splitting the first coding unit 500 based on the split mode information, into at least one third coding unit (520a, 520b, 520c, 520d, etc.), or may not split the second coding unit 510.
  • In an embodiment, the image decoding apparatus 100 may obtain the split mode information, may split the first coding unit 500 based on the obtained split mode information to obtain a plurality of second coding units (e.g., 510) of various shapes, and the second coding unit 510 may be split according to the method by which the first coding unit 500 was split based on the split mode information. In an embodiment, when the first coding unit 500 is split into the second coding units 510 based on the split mode information for the first coding unit 500, the second coding unit 510 may also be split into the third coding units (e.g., 520a, 520b, 520c, 520d, etc.) based on the split mode information for the second coding unit 510.
  • That is, a coding unit may be split recursively based on the split mode information related to each coding unit. Therefore, a square coding unit may be determined from a non-square coding unit, and a non-square coding unit may be determined by recursively splitting the square coding unit.
  • In an embodiment, a predetermined coding unit (e.g., the coding unit located at the center or a square coding unit) among the determined odd number of third coding units may be recursively split.
  • In an embodiment, the square third coding unit 520c, which is one of the odd number of third coding units 520b, 520c, and 520d, may be split in the horizontal direction into a plurality of fourth coding units. A non-square fourth coding unit 530b or 530d, which is one of the plurality of fourth coding units 530a, 530b, 530c, and 530d, may again be split into a plurality of coding units. For example, the non-square fourth coding unit 530b or 530d may again be split into an odd number of coding units.
  • a method that can be used for recursive division of coding units will be described later through various examples.
  • In an embodiment, the image decoding apparatus 100 may split each of the third coding units 520a, 520b, 520c, 520d, etc. into coding units based on the split mode information, or may determine not to split the second coding unit 510 based on the split mode information. In an embodiment, the image decoding apparatus 100 may split the non-square second coding unit 510 into an odd number of third coding units 520b, 520c, and 520d. The image decoding apparatus 100 may place a predetermined restriction on a predetermined third coding unit among the odd number of third coding units 520b, 520c, and 520d. For example, the image decoding apparatus 100 may restrict the coding unit 520c located at the center among the odd number of third coding units 520b, 520c, and 520d so that it is no longer split, or so that it is split only a settable number of times. Referring to FIG. 5, the image decoding apparatus 100 may restrict the coding unit 520c located at the center among the odd number of third coding units 520b, 520c, and 520d included in the non-square second coding unit 510 so that it is no longer split, is split in a predetermined splitting form (e.g., split into only four coding units or split into a form corresponding to the form in which the second coding unit 510 was split), or is split only a predetermined number of times (e.g., split only n times, where n > 0). However, since these restrictions on the coding unit 520c located at the center are merely examples, they should not be interpreted as being limited thereto, and should be interpreted as including various restrictions by which the coding unit 520c located at the center can be decoded differently from the other coding units 520b and 520d.
  • In an embodiment, the image decoding apparatus 100 may obtain, from a predetermined position in the current coding unit, the split mode information used to split the current coding unit.
  • FIG. 6 shows a method for the image decoding apparatus 100 to determine a predetermined coding unit among odd coding units according to an embodiment.
  • In an embodiment, the split mode information of the current coding unit 600 or 650 may be obtained from a sample at a predetermined position (e.g., a sample located at the center) among a plurality of samples included in the current coding unit 600 or 650. However, the predetermined position in the current coding unit 600 from which at least one piece of split mode information can be obtained should not be interpreted as being limited to the center position shown in FIG. 6, and should be interpreted as including various positions that can be included in the current coding unit 600 (e.g., top, bottom, left, right, top-left, bottom-left, top-right, bottom-right, etc.).
  • The image decoding apparatus 100 may obtain the split mode information from the predetermined position and determine whether the current coding unit is split into coding units of various shapes and sizes or is not split. In an embodiment, when the current coding unit is split into a predetermined number of coding units, the image decoding apparatus 100 may select one of the coding units. The method for selecting one of the plurality of coding units may vary, and a description of these methods will be given later through various embodiments. In an embodiment, the image decoding apparatus 100 may split the current coding unit into a plurality of coding units and determine the coding unit at a predetermined position.
  • In an embodiment, the image decoding apparatus 100 may use information indicating the position of each of an odd number of coding units in order to determine the coding unit located at the center among the odd number of coding units. Referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 or the current coding unit 650 to determine an odd number of coding units 620a, 620b, and 620c or an odd number of coding units 660a, 660b, and 660c. The image decoding apparatus 100 may determine the center coding unit 620b or the center coding unit 660b by using information about the positions of the odd number of coding units 620a, 620b, and 620c or the odd number of coding units 660a, 660b, and 660c. For example,
  • the image decoding apparatus 100 may determine the coding unit 620b located at the center by determining the positions of the coding units 620a, 620b, and 620c based on information indicating the position of a predetermined sample included in the coding units 620a, 620b, and 620c. Specifically, the image decoding apparatus 100 may determine the coding unit 620b located at the center by determining the positions of the coding units 620a, 620b, and 620c based on information indicating the positions of the upper-left samples 630a, 630b, and 630c of the coding units 620a, 620b, and 620c.
  • In an embodiment, the information indicating the positions of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively, may include information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture. In an embodiment, the information indicating the positions of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively, may include information indicating the widths or heights of the coding units 620a, 620b, and 620c included in the current coding unit 600, and these widths or heights may correspond to information indicating the differences between the coordinates of the coding units 620a, 620b, and 620c within the picture. That is, the image decoding apparatus 100 may determine the coding unit 620b located at the center directly by using the information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture, or by using the information about the widths or heights of the coding units corresponding to the differences between the coordinates.
  • In an embodiment, the information indicating the position of the upper-left sample 630a of the upper coding unit 620a may indicate coordinates (xa, ya), the information indicating the position of the upper-left sample 630b of the center coding unit 620b may indicate coordinates (xb, yb), and the information indicating the position of the upper-left sample 630c of the lower coding unit 620c may indicate coordinates (xc, yc). The image decoding apparatus 100 may determine the center coding unit 620b by using the coordinates of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively. The coordinates indicating the positions of the upper-left samples 630a, 630b, and 630c may indicate absolute positions within the picture, and furthermore, coordinates (dxb, dyb), which is information indicating the relative position of the upper-left sample 630b of the center coding unit 620b, and coordinates (dxc, dyc), which is information indicating the relative position of the upper-left sample 630c of the lower coding unit 620c, may also be used. In addition, the method of determining the coding unit at a predetermined position by using the coordinates of a sample included in the coding unit should not be interpreted as being limited to the above-described method, and should be interpreted as encompassing various arithmetic methods capable of using the coordinates of the sample.
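A sketch of locating the centre coding unit from the upper-left sample coordinates (xa, ya), (xb, yb), (xc, yc) of the three coding units discussed above: for a horizontal split, the unit whose y-coordinate is the median one is the centre unit. The variable names follow the text, while the median-based selection is an illustrative interpretation.

```python
def centre_unit_index(top_left_coords):
    ys = sorted((y, idx) for idx, (_, y) in enumerate(top_left_coords))
    return ys[len(ys) // 2][1]          # index of the unit with the median y

coords = [(0, 0), (0, 32), (0, 96)]     # units 620a, 620b, 620c (illustrative)
print(centre_unit_index(coords))        # 1 -> the middle unit (620b)
```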
  • In an embodiment, the image decoding apparatus 100 may split the current coding unit 600 into a plurality of coding units 620a, 620b, and 620c, and may select a coding unit among the coding units 620a, 620b, and 620c according to a predetermined criterion. For example, the image decoding apparatus 100 may select the coding unit 620b whose size differs from that of the others among the coding units 620a, 620b, and 620c.
  • In an embodiment, the image decoding apparatus 100 may determine the width or height of each of the coding units 620a, 620b, and 620c by using the coordinates (xa, ya), which is information indicating the position of the upper-left sample 630a of the upper coding unit 620a, the coordinates (xb, yb), which is information indicating the position of the upper-left sample 630b of the center coding unit 620b, and the coordinates (xc, yc), which is information indicating the position of the upper-left sample 630c of the lower coding unit 620c. The image decoding apparatus 100 may determine the sizes of the coding units 620a, 620b, and 620c by using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating the positions of the coding units 620a, 620b, and 620c. In an embodiment, the image decoding apparatus 100 may determine the width of the upper coding unit 620a as the width of the current coding unit 600, and may determine the height of the upper coding unit 620a as yb - ya. In an embodiment, the image decoding apparatus 100 may determine the width of the center coding unit 620b as the width of the current coding unit 600, and may determine the height of the center coding unit 620b as yc - yb. In an embodiment, the image decoding apparatus 100 may determine the width or height of the lower coding unit 620c by using the width or height of the current coding unit 600 and the widths and heights of the upper coding unit 620a and the center coding unit 620b. The image decoding apparatus 100 may determine a coding unit having a size different from the other coding units based on the determined widths and heights of the coding units 620a, 620b, and 620c.
  • Referring to FIG. 6, the image decoding apparatus 100 may determine the center coding unit 620b, which has a size different from the sizes of the upper coding unit 620a and the lower coding unit 620c, as the coding unit at the predetermined position. However, the above process in which the image decoding apparatus 100 determines a coding unit having a size different from the other coding units is merely one embodiment of determining the coding unit at a predetermined position using the sizes of coding units determined based on sample coordinates, and various processes of determining the coding unit at a predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
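A sketch of the size-based selection above, assuming a horizontal split of a current coding unit of height H: the heights of the upper and centre units follow from the y-coordinate differences of their upper-left samples, and the unit whose size differs from the others is selected. All values are illustrative.

```python
def unit_heights(ya, yb, yc, total_height):
    # Heights of the upper, centre and lower units derived from y-coordinates.
    return [yb - ya, yc - yb, total_height - (yc - ya)]

def differently_sized_index(heights):
    # Index of the unit whose height appears only once in the list.
    return next(i for i, h in enumerate(heights) if heights.count(h) == 1)

heights = unit_heights(0, 32, 96, 128)   # -> [32, 64, 32]
print(differently_sized_index(heights))  # 1 -> the centre coding unit
```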
  • In an embodiment, the image decoding apparatus 100 may determine the width or height of each of the coding units 660a, 660b, and 660c by using the coordinates (xd, yd), which is information indicating the position of the upper-left sample 670a of the left coding unit 660a, the coordinates (xe, ye), which is information indicating the position of the upper-left sample 670b of the center coding unit 660b, and the coordinates (xf, yf), which is information indicating the position of the upper-left sample 670c of the right coding unit 660c. The image decoding apparatus 100 may determine the sizes of the coding units 660a, 660b, and 660c by using the coordinates (xd, yd), (xe, ye), and (xf, yf) indicating the positions of the coding units 660a, 660b, and 660c. In an embodiment, the image decoding apparatus 100 may determine the width of the left coding unit 660a as xe - xd, and may determine the height of the left coding unit 660a as the height of the current coding unit 650. In an embodiment, the image decoding apparatus 100 may determine the width of the center coding unit 660b as xf - xe, and may determine the height of the center coding unit 660b as the height of the current coding unit 650. In an embodiment, the image decoding apparatus 100 may determine the width or height of the right coding unit 660c by using the width or height of the current coding unit 650 and the widths and heights of the left coding unit 660a and the center coding unit 660b.
  • the video decoding apparatus 100 is determined.
  • a coding unit having a size different from other coding units may be determined based on the coding units 660 66 (3/4, 660).
  • the image decoding apparatus 100 includes a left coding unit 660.
  • the right coding unit (the middle coding unit having a size different from the size of 660 (the ratio of 660 can be determined as the coding unit at a predetermined location.
  • the process of determining is only in one embodiment in which the coding unit at a predetermined location is determined using the size of the coding unit determined based on the sample coordinates. 2020/175967 1»(:1 ⁇ 1 ⁇ 2020/002924 Various processes can be used to determine the coding unit at a given location by comparing the size of the coding unit determined according to the sample coordinates.
  • the position of the sample to be considered for determining the position of the coding unit should not be interpreted limited to the upper left corner described above, but it can be interpreted that information on the position of any sample included in the coding unit can be used. .
•  according to an embodiment, the image decoding apparatus 100 may select the coding unit at a predetermined position from among an odd number of coding units determined by splitting the current coding unit, in consideration of the shape of the current coding unit. For example, if the current coding unit has a non-square shape whose width is longer than its height, the image decoding apparatus 100
•  may determine the coding unit at the predetermined position along the horizontal direction; that is, the image decoding apparatus 100 may determine one of the coding units whose positions differ in the horizontal direction and place a restriction on that coding unit. If the current coding unit has a non-square shape whose height is longer than its width, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the vertical direction; that is, the image decoding apparatus 100 may determine one of the coding units whose positions differ in the vertical direction and place a restriction on that coding unit.
•  according to an embodiment, the image decoding apparatus 100 may use information indicating the position of each of an even number of coding units in order to determine the coding unit at a predetermined position among the even number of coding units.
•  the image decoding apparatus 100 may determine an even number of coding units by splitting the current coding unit (for example, by binary splitting) and may determine the coding unit at the predetermined position by using the information about the positions of the even number of coding units. This may correspond to the process of determining the coding unit at a predetermined position (for example, the center position) among an odd number of coding units described above, so a detailed description is omitted.
•  according to an embodiment, when a non-square current coding unit is split into a plurality of coding units, predetermined information about the coding unit at a predetermined position may be used during the splitting process in order to determine the coding unit at the predetermined position among the plurality of coding units.
•  for example, the image decoding apparatus 100 may use at least one of block shape information and split shape mode information stored in a sample included in the center coding unit during the splitting process, in order to determine the coding unit located at the center among the coding units into which the current coding unit is split.
•  referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 into the plurality of coding units 620a, 620b, and 620c based on the split shape mode information, and may determine the coding unit 620b located at the center among the plurality of coding units 620a, 620b, and 620c.
•  furthermore, the image decoding apparatus 100 may determine the coding unit 620b located at the center in consideration of the position from which the split shape mode information is obtained.
•  that is, the split shape mode information of the current coding unit 600 may be obtained from the sample 640 located at the center of the current coding unit 600, and when the current coding unit 600 is split into the plurality of coding units 620a, 620b, and 620c based on that split shape mode information, the coding unit 620b including the sample 640
•  may be determined as the coding unit located at the center.
•  however, the information used to determine the coding unit located at the center should not be construed as being limited to the split shape mode information; various kinds of information may be used in the process of determining the coding unit located at the center.
•  according to an embodiment, predetermined information for identifying the coding unit at a predetermined position may be obtained from a predetermined sample included in the coding unit to be determined.
•  referring to FIG. 6, the image decoding apparatus 100 may use the split shape mode information obtained from a sample at a predetermined position within the current coding unit 600 (for example, a sample located at the center of the current coding unit 600) in order to determine the coding unit at a predetermined position among the plurality of coding units 620a, 620b, and 620c determined by splitting the current coding unit 600 (for example, the coding unit located at the center among the plurality of split coding units).
•  that is, the image decoding apparatus 100 may determine the sample at the predetermined position in consideration of the block shape of the current coding unit 600, and may determine, among the plurality of coding units 620a, 620b, and 620c determined by splitting the current coding unit 600, the coding unit 620b including the sample from which predetermined information (for example, the split shape mode information) can be obtained, and may place a predetermined restriction on that coding unit.
•  referring to FIG. 6, the image decoding apparatus 100 may determine the sample 640 located at the center of the current coding unit 600 as the sample from which the predetermined information can be obtained, and the image decoding apparatus 100 may place a predetermined restriction on the coding unit 620b including the sample 640 in the decoding process.
•  however, the position of the sample from which the predetermined information can be obtained should not be construed as being limited to the above-described position; it may be construed as a sample at an arbitrary position included in the coding unit 620b to be determined for the purpose of the restriction.
•  according to an embodiment, the position of the sample from which the predetermined information can be obtained may be determined according to the shape of the current coding unit 600.
•  according to an embodiment, the block shape information may indicate whether the shape of the current coding unit is square or non-square, and the position of the sample from which the predetermined information can be obtained may be determined according to that shape.
•  for example, the image decoding apparatus 100 may determine a sample located on the boundary that halves at least one of the width and the height of the current coding unit as the sample from which the predetermined information can be obtained, by using at least one of the information about the width and the information about the height of the current coding unit.
•  as another example, when the block shape information related to the current coding unit indicates a non-square shape, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary that halves the long side of the current coding unit as the sample from which the predetermined information can be obtained.
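A minimal illustrative sketch of the idea above, under assumed names: the sample from which the predetermined information would be read is taken near the boundary halving the long side of the current coding unit, and the split coding unit containing that sample is treated as the coding unit at the predetermined (center) position. The helper names (Block, info_sample, center_unit) are hypothetical.

```python
# Sketch: pick the "information sample" near the midpoint of the long side, split the
# current block, and select the split unit containing that sample as the center unit.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Block:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def info_sample(block: Block) -> Tuple[int, int]:
    """Sample adjacent to the boundary that halves the long side of the block."""
    return (block.x + block.w // 2, block.y + block.h // 2)

def center_unit(split_units: List[Block], current: Block) -> Block:
    """The split unit containing the current block's info sample is treated as the center unit."""
    sx, sy = info_sample(current)
    return next(u for u in split_units if u.contains(sx, sy))

# A 16x32 tall block split horizontally into heights 8, 16, 8: the middle unit contains (8, 16).
cur = Block(0, 0, 16, 32)
parts = [Block(0, 0, 16, 8), Block(0, 8, 16, 16), Block(0, 24, 16, 8)]
print(center_unit(parts, cur))  # Block(x=0, y=8, w=16, h=16)
```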
•  according to an embodiment, when the current coding unit is split into a plurality of coding units, the image decoding apparatus 100 may use the split shape mode information in order to determine the coding unit at a predetermined position among the plurality of coding units.
•  according to an embodiment, the image decoding apparatus 100 may obtain the split shape mode information from a sample at a predetermined position included in a coding unit, and the image decoding apparatus 100 may split the plurality of coding units generated by splitting the current coding unit by using the split shape mode information obtained from the sample at the predetermined position included in each of the plurality of coding units.
•  that is, the coding units may be recursively split by using the split shape mode information obtained from the sample at the predetermined position included in each coding unit.
•  since the recursive splitting process of the coding unit has been described above with reference to FIG. 5, a detailed description is omitted.
•  according to an embodiment, the image decoding apparatus 100 may determine at least one coding unit by splitting the current coding unit, and may determine the order in which the at least one coding unit is decoded according to a predetermined block (for example, the current coding unit).
•  FIG. 7 illustrates an order in which a plurality of coding units are processed when the image decoding apparatus 100 determines the plurality of coding units by splitting the current coding unit, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may determine second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction, determine second coding units 730a and 730b by splitting the first coding unit 700 in the horizontal direction, or determine second coding units 750a, 750b, 750c, and 750d by splitting the first coding unit 700 in the vertical and horizontal directions, according to the split shape mode information.
•  referring to FIG. 7, the image decoding apparatus 100 may determine that the second coding units 710a and 710b, determined by splitting the first coding unit 700 in the vertical direction, are processed in the horizontal direction order 710c.
•  the image decoding apparatus 100 may determine that the processing order of the second coding units 730a and 730b, determined by splitting the first coding unit 700 in the horizontal direction, is the vertical direction order 730c.
•  the image decoding apparatus 100 may determine that the second coding units 750a, 750b, 750c, and 750d, determined by splitting the first coding unit 700 in the vertical and horizontal directions, are processed according to a predetermined order in which the coding units located in one row are processed and then the coding units located in the next row are processed (for example, a raster scan order or a z scan order 750e).
•  according to an embodiment, the image decoding apparatus 100 may recursively split the coding units.
•  referring to FIG. 7, the image decoding apparatus 100 may determine the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d by splitting the first coding unit 700, and may recursively split each of the determined plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d.
•  the method of splitting the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may correspond to the method of splitting the first coding unit 700.
•  accordingly, each of the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be independently split into a plurality of coding units.
•  referring to FIG. 7, the image decoding apparatus 100 may determine the second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction, and furthermore may decide to split or not to split each of the second coding units 710a and 710b independently.
•  according to an embodiment, the image decoding apparatus 100 may split the left second coding unit 710a in the horizontal direction into third coding units 720a and 720b, and may not split the right second coding unit 710b.
•  according to an embodiment, the processing order of coding units may be determined based on the splitting process of the coding units. In other words, the processing order of split coding units may be determined based on the processing order of the coding units immediately before being split.
•  the image decoding apparatus 100 may determine the order in which the third coding units 720a and 720b, determined by splitting the left second coding unit 710a, are processed, independently of the right second coding unit 710b. Since the third coding units 720a and 720b are determined by splitting the left second coding unit 710a in the horizontal direction, the third coding units 720a and 720b may be processed in the vertical direction order 720c.
•  also, since the order in which the left second coding unit 710a and the right second coding unit 710b are processed corresponds to the horizontal direction order 710c, the right second coding unit 710b may be processed after the third coding units 720a and 720b included in the left second coding unit 710a are processed in the vertical direction order 720c.
•  since the above description is intended to explain the process in which the processing order of coding units is determined according to the coding unit before being split, it should not be construed as being limited to the above-described embodiment; it should be construed that coding units determined by being split into various shapes can be independently processed in a predetermined order in various manners.
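The following sketch illustrates, with hypothetical structures, the ordering rule just described: the decoding order of recursively split coding units follows the processing order of the units immediately before splitting, so a vertically split parent is traversed left to right, a horizontally split parent top to bottom, and unsplit units are emitted as leaves.

```python
# Sketch: depth-first traversal reproducing orders such as 710c/730c/720c in FIG. 7.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CU:
    name: str
    split: Optional[str] = None           # None, "VER" (split vertically) or "HOR"
    children: List["CU"] = field(default_factory=list)

def decoding_order(cu: CU) -> List[str]:
    if not cu.children:
        return [cu.name]
    order: List[str] = []
    # Children are stored left-to-right for "VER" splits and top-to-bottom for "HOR"
    # splits, so traversing them in stored order follows the parent's processing order.
    for child in cu.children:
        order.extend(decoding_order(child))
    return order

# First unit split vertically into (left, right); only the left half is split again horizontally.
left = CU("left", "HOR", [CU("720a"), CU("720b")])
root = CU("700", "VER", [left, CU("710b")])
print(decoding_order(root))  # ['720a', '720b', '710b']
```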
•  FIG. 8 illustrates a process, performed by the image decoding apparatus 100, of determining that the current coding unit is to be split into an odd number of coding units when the coding units cannot be processed in a predetermined order, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may determine that the current coding unit is split into an odd number of coding units, based on the obtained split shape mode information.
•  referring to FIG. 8, the square first coding unit 800 may be split into non-square second coding units 810a and 810b, and the second coding units 810a and 810b may each be independently split into third coding units 820a, 820b, 820c, 820d, and 820e.
•  according to an embodiment, the image decoding apparatus 100 may determine whether the third coding units 820a, 820b, 820c, 820d, and 820e can be processed in a predetermined order, and may determine whether there is a coding unit split into an odd number.
•  referring to FIG. 8, the third coding units 820a, 820b, 820c, 820d, and 820e may be determined by recursively splitting the first coding unit 800.
•  the image decoding apparatus 100 may determine, based on at least one of the block shape information and the split shape mode information, whether any of the first coding unit 800, the second coding units 810a and 810b, and the third coding units 820a, 820b, 820c, 820d, and 820e is split into an odd number of coding units.
•  for example, the coding unit located on the right among the second coding units 810a and 810b may be split into an odd number of third coding units 820c, 820d, and 820e.
•  the order in which the plurality of coding units included in the first coding unit 800 are processed may be a predetermined order (for example, a z scan order 830), and the image decoding apparatus 100 may determine whether the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into an odd number, satisfy the condition for being processed according to the predetermined order.
•  according to an embodiment, the image decoding apparatus 100 may determine whether the third coding units 820a, 820b, 820c, 820d, and 820e included in the first coding unit 800 satisfy the condition for being processed in the predetermined order, and the condition is related to whether at least one of the width and the height of each of the second coding units 810a and 810b is split in half along the boundaries of the third coding units 820a, 820b, 820c, 820d, and 820e. For example,
•  the third coding units 820a and 820b, determined by splitting the height of the non-square left second coding unit 810a in half, may satisfy the condition. However, because the boundaries of the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into three coding units, do not split the width or height of the right second coding unit 810b in half, the third coding units 820c, 820d, and 820e may be determined as not satisfying the condition. When the condition is not satisfied in this way, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on the result of the determination, that the right second coding unit 810b is split into an odd number of coding units.
•  according to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on the coding unit at a predetermined position among the split coding units; since such restrictions and predetermined positions have been described above through various embodiments, a detailed description is omitted.
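As a small illustration of the condition described above (not a normative procedure), the check below tests whether the internal boundaries of the split units halve the parent unit's size; if not, the split is treated as an odd split with a disconnected scan order. Function and variable names are hypothetical.

```python
# Sketch: do the inner boundaries of the sub-units halve the parent dimension?
from typing import List

def boundaries_halve_parent(parent_size: int, child_sizes: List[int]) -> bool:
    """True if every internal boundary lies at a multiple of half the parent size
    (parent_size is assumed to be even)."""
    half = parent_size // 2
    pos = 0
    for size in child_sizes[:-1]:          # the last boundary is the parent edge itself
        pos += size
        if pos % half != 0:
            return False
    return True

# Left second coding unit of height 16 split into 8 + 8: the boundary at 8 halves it.
print(boundaries_halve_parent(16, [8, 8]))      # True
# Right second coding unit of height 16 split into 4 + 8 + 4: boundaries at 4 and 12 -> odd split.
print(boundaries_halve_parent(16, [4, 8, 4]))   # False
```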
•  FIG. 9 illustrates a process, performed by the image decoding apparatus 100, of determining at least one coding unit by splitting the first coding unit 900, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may split the first coding unit 900 based on split shape mode information obtained through the bitstream obtainer 110.
•  the square first coding unit 900 may be split into four square coding units, or may be split into a plurality of non-square coding units.
•  for example, referring to FIG. 9, when the first coding unit 900 is square and the split shape mode information indicates splitting into non-square coding units, the image
•  decoding apparatus 100 may split the first coding unit 900 into a plurality of non-square coding units. Specifically, when the split shape mode information
•  indicates that an odd number of coding units are determined by splitting the first coding unit 900 in the horizontal or vertical direction, the image decoding apparatus 100 may split the square first coding unit 900 into an odd number of coding units, for example the second coding units 910a, 910b, and 910c determined by splitting in the vertical direction, or the second coding units 920a, 920b, and 920c determined by splitting in the horizontal direction.
•  according to an embodiment, the image decoding apparatus 100 may determine whether the second coding units 910a, 910b, 910c, 920a, 920b, and 920c included in the first coding unit 900 satisfy the condition for being processed in a predetermined order, and the condition is related to whether at least one of the width and the height of the first coding unit 900 is split in half along the boundaries of the second coding units 910a, 910b, 910c, 920a, 920b, and 920c. Referring to FIG. 9, because the boundaries of the second coding units 910a, 910b, and 910c, determined by splitting the square first coding unit 900 in the vertical direction,
•  do not split the width of the first coding unit 900 in half, it may be determined that the first coding unit 900 does not satisfy the condition for being processed in the predetermined order.
•  likewise, because the boundaries of the second coding units 920a, 920b, and 920c,
•  determined by splitting the square first coding unit 900 in the horizontal direction, do not split the height of the first coding unit 900 in half, it may be determined that the first coding unit 900 does not satisfy the condition for being processed in the predetermined order.
•  when the condition is not satisfied in this way, the image decoding apparatus 100 may determine that the scan order is disconnected (disconnection), and may determine, based on the result of the determination, that the first coding unit 900 is split into an odd number of coding units. According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on the coding unit at a predetermined position among the split coding units; since such restrictions and predetermined positions have been described above through various embodiments, a detailed description is omitted.
•  according to an embodiment, the image decoding apparatus 100 may determine coding units of various shapes by splitting the first coding unit.
•  referring to FIG. 9, the image decoding apparatus 100 may split the square first coding unit 900 and the non-square first coding units 930 and 950 into coding units of various shapes.
•  FIG. 10 illustrates that, when a non-square second coding unit determined by splitting the first coding unit 1000 satisfies a predetermined condition, the shapes into which the second coding unit can be split are restricted, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may determine, based on split shape mode information obtained through the bitstream obtainer 110, that the square first coding unit 1000 is split into the non-square second coding units 1010a, 1010b, 1020a, and 1020b.
•  the second coding units 1010a, 1010b, 1020a, and 1020b may be split independently. Accordingly, the image decoding apparatus 100 may determine whether each of the second coding units 1010a, 1010b, 1020a, and 1020b is split into a plurality of coding units or is not split, based on the split shape mode information related to each of the second coding units 1010a, 1010b, 1020a, and 1020b.
•  according to an embodiment, the image decoding apparatus 100 may determine third coding units 1012a and 1012b by splitting, in the horizontal direction, the non-square left second coding unit 1010a determined by splitting the first coding unit 1000 in the vertical direction.
•  however, when the left second coding unit 1010a is split in the horizontal direction, the image decoding apparatus 100 may restrict the right second coding unit 1010b so that it cannot be split in the horizontal direction, that is, in the same direction as the direction in which the left second coding unit 1010a is split.
•  if the right second coding unit 1010b were split in the same direction and third coding units 1014a and 1014b were determined, the third coding units 1012a, 1012b, 1014a, and 1014b would be determined by independently splitting the left second coding unit 1010a and the right second coding unit 1010b in the horizontal direction. However, this is the same result as the image decoding apparatus 100 splitting the first coding unit 1000 into the four square second coding units 1030a, 1030b, 1030c, and 1030d based on the split shape mode information, and may be inefficient in terms of image decoding.
•  according to an embodiment, the image decoding apparatus 100 may determine third coding units 1022a, 1022b, 1024a, and 1024b by splitting, in the vertical direction, the non-square second coding unit 1020a or 1020b determined by splitting the first coding unit 1000 in the horizontal direction.
•  however, when one of the second coding units (for example, the upper second coding unit 1020a) is split in the vertical direction, the image decoding apparatus 100 may, for the reason described above, restrict the other second coding unit (for example, the lower second coding unit 1020b) so that it cannot be split in the vertical direction, that is, in the same direction as the direction in which the upper second coding unit 1020a is split.
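The restriction described above can be sketched as follows, with hypothetical names: once one non-square second coding unit has been split in a given direction, the sibling is restricted from splitting in that same direction, since that would merely reproduce the four-square split of the first coding unit.

```python
# Sketch: directions still allowed for the remaining second coding unit.
from typing import Optional, Set

def allowed_split_directions(sibling_split_dir: Optional[str]) -> Set[str]:
    all_dirs = {"HOR", "VER"}
    if sibling_split_dir is None:
        return all_dirs                      # the sibling was not split: no restriction
    return all_dirs - {sibling_split_dir}    # forbid the direction already used by the sibling

print(allowed_split_directions(None))    # {'HOR', 'VER'}
print(allowed_split_directions("HOR"))   # {'VER'} -> the left unit was split horizontally
```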
•  FIG. 11 illustrates a process, performed by the image decoding apparatus 100, of splitting a square coding unit when the split shape mode information cannot indicate splitting into four square coding units, according to an embodiment.
•  the image decoding apparatus 100 may determine second coding units 1110a, 1110b, 1120a, 1120b, and so on by splitting the first coding unit 1100 based on the split shape mode information.
•  the split shape mode information may include information about various shapes into which a coding unit can be split, but the information about the various shapes may not include information for splitting into four square coding units.
•  according to such split shape mode information, the image decoding apparatus 100 cannot split the square first coding unit 1100 into the four square second coding units 1130a, 1130b, 1130c, and 1130d. Based on the split shape mode information, the image decoding apparatus 100 may determine the non-square second coding units 1110a, 1110b, 1120a, 1120b, and so on.
•  according to an embodiment, the image decoding apparatus 100 may independently split
•  each of the non-square second coding units 1110a, 1110b, 1120a, 1120b, and so on.
•  each of the second coding units 1110a, 1110b, 1120a, 1120b, and so on may be split in a predetermined order through a recursive method, and this may be a splitting method corresponding to the method in which the first coding unit 1100 is split based on the split shape mode information.
•  for example, the image decoding apparatus 100 may determine square third coding units 1112a and 1112b by splitting the left second coding unit 1110a in the horizontal direction, and may determine square third coding units 1114a and 1114b by splitting the right second coding unit 1110b in the horizontal direction.
•  furthermore, the image decoding apparatus 100 may determine square third coding units 1116a, 1116b, 1116c, and 1116d by splitting both the left second coding unit 1110a and the right second coding unit 1110b in the horizontal direction. In this case,
•  coding units may be determined in the same form as when the first coding unit 1100 is split into the four square second coding units 1130a, 1130b, 1130c, and 1130d.
•  as another example, the image decoding apparatus 100 may determine square third coding units 1122a and 1122b by splitting the upper second coding unit 1120a in the vertical direction, and may determine square third coding units 1124a and 1124b by splitting the lower second coding unit 1120b in the vertical direction.
•  furthermore, the image decoding apparatus 100 may determine square third coding units 1126a, 1126b, 1126c, and 1126d by splitting both the upper second coding unit 1120a and the lower second coding unit 1120b in the vertical direction. In this case,
•  coding units may be determined in the same form as when the first coding unit 1100 is split into the four square second coding units 1130a, 1130b, 1130c, and 1130d.
•  FIG. 12 illustrates that the processing order among a plurality of coding units may vary according to the splitting process of a coding unit, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may split the first coding unit 1200 based on the split shape mode information.
•  when the block shape is square and
•  the split shape mode information indicates that the first coding unit 1200 is split in at least one of the horizontal and vertical directions,
•  the image decoding apparatus 100 may split the first coding unit 1200 to determine second coding units (for example, 1210a, 1210b, 1220a, 1220b, and so on). Referring to FIG. 12,
•  the non-square second coding units 1210a, 1210b, 1220a, and 1220b, determined by splitting the first coding unit 1200 only in the horizontal or vertical direction, may each be independently split based on the split shape mode information for each of them.
•  the image decoding apparatus 100 may determine third coding units 1216a, 1216b, 1216c, and 1216d by splitting, in the horizontal direction, each of the second coding units 1210a and 1210b generated by splitting the first coding unit 1200 in the vertical direction, and
•  may determine third coding units 1226a, 1226b, 1226c, and 1226d by splitting, in the vertical direction, each of the second coding units 1220a and 1220b generated by splitting the first coding unit 1200 in the horizontal direction.
•  since the splitting process of the second coding units 1210a, 1210b, 1220a, and 1220b has been described above with reference to FIG. 11, a detailed description is omitted.
•  according to an embodiment, the image decoding apparatus 100 may process the coding units in a predetermined order.
•  since the features of processing coding units in a predetermined order have been described above with reference to FIG. 7, a detailed description is omitted. Referring to FIG. 12, the image decoding apparatus 100 may determine four square third coding units 1216a, 1216b, 1216c, and 1216d or 1226a, 1226b, 1226c, and 1226d by splitting the square first coding unit 1200.
•  according to an embodiment, the image decoding apparatus 100 may determine the processing order of the third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d according to the form in which the first coding unit 1200 is split.
•  according to an embodiment, the image decoding apparatus 100 may determine the third coding units 1216a, 1216b, 1216c, and 1216d by splitting, in the horizontal direction, each of the second coding units 1210a and 1210b generated by splitting in the vertical direction,
•  and the image decoding apparatus 100 may process the third coding units 1216a, 1216b, 1216c, and 1216d according to the order 1217, in which the third coding units 1216a and 1216c included in the left second coding unit 1210a are processed first in the vertical direction and then the third coding units 1216b and 1216d included in the right second coding unit 1210b are processed in the vertical direction.
•  according to an embodiment, the image decoding apparatus 100 may determine the third coding units 1226a, 1226b, 1226c, and 1226d by splitting, in the vertical direction, each of the second coding units 1220a and 1220b generated by splitting in the horizontal direction,
•  and the image decoding apparatus 100 may process the third coding units 1226a, 1226b, 1226c, and 1226d according to the order 1227, in which the third coding units 1226a and 1226b included in the upper second coding unit 1220a are processed first in the horizontal direction and then the third coding units 1226c and 1226d included in the lower second coding unit 1220b are processed in the horizontal direction.
•  referring to FIG. 12, the square third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d may be determined by splitting the second coding units 1210a, 1210b, 1220a, and 1220b, respectively. The second coding units 1210a and 1210b determined by splitting in the vertical direction and the second
•  coding units 1220a and 1220b determined by splitting in the horizontal direction are split in different forms, but according to the third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d determined thereafter,
•  the first coding unit 1200 is ultimately split into coding units of the same form. Accordingly, the image decoding apparatus 100 may recursively split coding units through different processes based on the split shape mode information and thereby determine coding units of the same form as a result, but may process the plurality of coding units determined in the same form in different orders.
•  FIG. 13 illustrates a process in which the depth of a coding unit is determined as the shape and size of the coding unit change, when the coding unit is recursively split so that a plurality of coding units are determined, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion.
•  for example, the predetermined criterion may be the length of the long side of the coding unit.
•  when the length of the long side of the current coding unit is 1/2^n (n>0) of the length of the long side of the coding unit before being split,
•  the image decoding apparatus 100 may determine that the depth of the current coding unit is increased by n from the depth of the coding unit before being split.
•  hereinafter, a coding unit with an increased depth is expressed as a coding unit of a lower depth.
•  referring to FIG. 13, according to an embodiment, the image decoding apparatus 100 may determine the second coding unit 1302 and the third coding unit 1304 of lower depths by splitting the square first coding unit 1300, based on block shape information indicating a square shape (for example, the block shape information may indicate '0: SQUARE').
•  if the size of the square first coding unit 1300 is 2Nx2N, the second coding unit 1302, determined by splitting the width and height of the first coding unit 1300 by 1/2, may have a size of NxN. Furthermore, the third coding unit 1304, determined by splitting the width and height of the second coding unit 1302 by 1/2, may have a size of N/2xN/2.
•  in this case, the width and height of the third coding unit 1304 correspond to 1/4 of the width and height of the first coding unit 1300. If the depth of the first coding unit 1300 is D, the depth of the second coding unit 1302, whose width and height are 1/2 of those of the first coding unit 1300,
•  may be D+1, and the depth of the third coding unit 1304, whose width and height are 1/4 of those of the first coding unit 1300, may be D+2.
•  according to an embodiment, based on block shape information indicating a non-square shape (for example, the block shape information may indicate '1: NS_VER', indicating a non-square shape whose height is longer than its width, or 'NS_HOR', indicating a non-square shape whose width is longer than its height),
•  the image decoding apparatus 100 may determine the second coding unit 1312 or 1322 and the third coding unit 1314 or 1324 of lower depths by splitting the non-square first coding unit 1310 or 1320.
•  the image decoding apparatus 100 may determine a second coding unit (for example, 1302, 1312, 1322, etc.) by splitting at least one of the width and the height of the first coding unit 1310 of size Nx2N. That is, the image decoding apparatus 100 may determine the second coding unit 1302 of size NxN or the second coding unit 1322 of size NxN/2 by splitting the first coding unit 1310 in the horizontal direction, and may determine the second coding unit 1312 of size N/2xN by splitting it in the horizontal and vertical directions.
•  according to an embodiment, the image decoding apparatus 100 may determine a second coding unit (for example, 1302, 1312, 1322, etc.)
•  by splitting at least one of the width and the height of the first coding unit 1320 of size 2NxN.
•  that is, the image decoding apparatus 100 may determine the second coding unit 1302 of size NxN or the second coding unit 1312 of size N/2xN by splitting the first coding unit 1320 in the vertical direction, and may determine the second coding unit 1322 of size NxN/2 by splitting it in the horizontal and vertical directions.
•  according to an embodiment, the image decoding apparatus 100 may determine a third coding unit (for example, 1304, 1314, 1324, etc.) by splitting at least one of the width and the height of the second coding unit 1302 of size NxN.
•  that is, the image decoding apparatus 100 may split the second coding unit 1302 in the vertical and horizontal directions to determine the third coding unit 1304 of size N/2xN/2, the third coding unit 1314 of size N/4xN/2, or the third coding unit 1324 of size N/2xN/4.
•  according to an embodiment, the image decoding apparatus 100 may determine a third coding unit (for example, 1304, 1314, 1324, etc.) by splitting at least one of the width and the height of the second coding unit 1312 of size N/2xN.
•  that is, the image decoding apparatus 100 may split the second coding unit 1312 in the horizontal direction to determine the third coding unit 1304 of size N/2xN/2 or the third coding unit 1324 of size N/2xN/4, or may split it in the vertical and horizontal directions to determine
•  the third coding unit 1314 of size N/4xN/2.
•  according to an embodiment, the image decoding apparatus 100 may determine a third coding unit (for example, 1304, 1314, 1324, etc.) by splitting at least one of the width and the height of the second coding unit 1322 of size NxN/2.
•  that is, the image decoding apparatus 100 may split the second coding unit 1322 in the vertical direction to determine the third coding unit 1304 of size N/2xN/2 or the third coding unit 1314 of size N/4xN/2, or may split it in the vertical and horizontal directions to determine
•  the third coding unit 1324 of size N/2xN/4.
•  according to an embodiment, the image decoding apparatus 100 may split a square coding unit (for example, 1300, 1302, 1304) in the horizontal or vertical direction.
•  for example, the first coding unit 1310 of size Nx2N may be determined by splitting the first coding unit 1300 of size 2Nx2N in the vertical direction, or the first coding unit 1320 of size 2NxN may be determined by splitting
•  the first coding unit 1300 in the horizontal direction.
•  according to an embodiment, when the depth is determined based on the length of the longest side of a coding unit, the depth of a coding unit determined by splitting the first coding unit 1300 of size 2Nx2N in the horizontal or vertical direction may be the same as the depth of the first coding unit 1300.
•  according to an embodiment, the width and height of the third coding unit 1314 or 1324 may correspond to 1/4 of the width and height of the first coding unit 1310 or 1320.
•  if the depth of the first coding unit 1310 or 1320 is D, the depth of the second coding unit 1312 or 1322, whose width and height are 1/2 of those of the first coding unit 1310 or 1320,
•  may be D+1,
•  and the depth of the third coding unit 1314 or 1324, whose width and height are 1/4 of those of the first coding unit 1310 or 1320, may be D+2.
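A minimal sketch of the depth rule described above: when the long side of a coding unit is 1/2^n of the long side of the unit it was split from, its depth increases by n. The function name is hypothetical, and log2 is used only for illustration since the sizes involved are powers of two.

```python
from math import log2

def depth_after_split(parent_depth: int, parent_w: int, parent_h: int,
                      child_w: int, child_h: int) -> int:
    """Depth of a child coding unit derived from the long-side ratio to its parent."""
    parent_long = max(parent_w, parent_h)
    child_long = max(child_w, child_h)
    return parent_depth + int(log2(parent_long / child_long))

# 2Nx2N -> NxN -> N/2xN/2 with N = 16: depths D, D+1, D+2.
print(depth_after_split(0, 32, 32, 16, 16))   # 1
print(depth_after_split(1, 16, 16, 8, 8))     # 2
# 2Nx2N -> Nx2N (binary vertical split): the long side is unchanged, so the depth stays D.
print(depth_after_split(0, 32, 32, 16, 32))   # 0
```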
•  FIG. 14 illustrates a depth and a part index (PID) for distinguishing coding units, which may be determined according to the shapes and sizes of the coding units, according to an embodiment.
•  according to an embodiment, the image decoding apparatus 100 may determine second coding units of various shapes by splitting the square first coding unit 1400.
•  referring to FIG. 14, the image decoding apparatus 100 may determine second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d by splitting the first coding unit 1400 in at least one of the vertical and horizontal directions according to the split shape mode information. That is, the image decoding apparatus 100 may determine the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d based on the split shape mode information for the first coding unit 1400.
•  according to an embodiment, the depth of the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d, determined according to the split shape mode information for the square first coding unit 1400,
•  may be determined based on the length of the long side. For example, since the length of one side of the square first coding unit 1400 is the same as the length of the long side of the non-square second coding units 1402a, 1402b, 1404a, and 1404b, the first coding unit 1400 and the non-square second coding units 1402a, 1402b, 1404a, and 1404b may be regarded as having the same depth, D. On the other hand, when the image decoding apparatus 100 splits the first coding unit 1400 into the four square second coding units 1406a, 1406b, 1406c, and 1406d based on the split shape mode information,
•  the length of one side of the square second coding units 1406a, 1406b, 1406c, and 1406d is 1/2 of the length of one side of the first coding unit 1400, so the depth of the second coding units 1406a, 1406b, 1406c, and 1406d may be D+1, which is one depth lower than the depth D of the first coding unit 1400.
•  according to an embodiment, the image decoding apparatus 100 may split the first coding unit 1410, whose height is longer than its width, in the horizontal direction according to the split shape mode information into a plurality of second coding units 1412a and 1412b, or 1414a, 1414b, and 1414c. According to an embodiment, the image decoding apparatus 100 may split the first coding unit 1420, whose width is longer than its height, in the vertical direction according to the split shape mode information into a plurality of second coding units 1422a and 1422b, or 1424a, 1424b, and 1424c.
•  according to an embodiment, the depth of the second coding units 1412a, 1412b, 1414a, 1414b, 1414c, 1422a, 1422b, 1424a, 1424b, and 1424c, determined according to the split shape mode information for the non-square first coding unit 1410 or 1420,
•  may be determined based on the length of the long side. For example, since the length of one side of the square second coding units 1412a and 1412b is 1/2 of the length of one side of the non-square first coding unit 1410, whose height is longer than its width, the depth of the square second coding units 1412a and 1412b is D+1, which is one depth lower than the depth D of the non-square first coding unit 1410.
•  furthermore, the image decoding apparatus 100 may split the non-square first coding unit 1410 into an odd number of second coding units 1414a, 1414b, and 1414c based on the split shape mode information.
•  the odd number of second coding units 1414a, 1414b, and 1414c may include the non-square second coding units 1414a and 1414c and the square second coding unit 1414b.
•  in this case, since the length of the long side of the non-square second coding units 1414a and 1414c and the length of one side of the square second coding unit 1414b are 1/2 of the length of one side of the first coding unit 1410, the depth of the second coding units 1414a, 1414b, and 1414c may be D+1, which is one depth lower than the depth D of the first coding unit 1410.
•  the image decoding apparatus 100 may determine the depths of the coding units related to the non-square first coding unit 1420, whose width is longer than its height, in a manner corresponding to the above method of determining the depths of the coding units related to the first coding unit 1410.
•  according to an embodiment, in determining indexes (PIDs) for distinguishing the split coding units, when the coding units split into an odd number are not all the same size, the image decoding apparatus 100 may determine the indexes based on the size ratio between the coding units.
•  referring to FIG. 14, the coding unit 1414b located at the center among the coding units 1414a, 1414b, and 1414c split into an odd number has the same width as the other coding units 1414a and 1414c but a height that is twice the height of the other coding units 1414a and 1414c. That is, in this case, the center coding unit 1414b may include two of the other coding units 1414a and 1414c.
•  according to an embodiment, the image decoding apparatus 100 may determine whether the coding units split into an odd number are not all the same size, based on whether there is a discontinuity in the indexes for distinguishing the split coding units.
•  the image decoding apparatus 100 may determine whether a coding unit is split into a specific split form, based on the values of the indexes for distinguishing the plurality of coding units determined by splitting the current coding unit. Referring to FIG. 14, the image decoding apparatus 100 may determine an even number of coding units 1412a and 1412b or an odd number of coding units 1414a, 1414b, and 1414c by splitting the first coding unit 1410, whose height is longer than its width.
•  the image decoding apparatus 100 may use an index (PID) indicating each coding unit in order to distinguish each of the plurality of coding units.
•  the PID may be obtained from a sample at a predetermined position of each coding unit (for example, the upper-left sample).
•  according to an embodiment, the image decoding apparatus 100 may determine the coding unit at a predetermined position among the split coding units by using the indexes for distinguishing the coding units.
•  according to an embodiment, when the split shape mode information for the first coding unit 1410, whose height is longer than its width, indicates splitting into three coding units, the image decoding apparatus 100 may split the first coding unit 1410 into three coding units 1414a, 1414b, and 1414c.
•  the image decoding apparatus 100 may assign an index to each of the three coding units 1414a, 1414b, and 1414c.
•  the image decoding apparatus 100 may compare the indexes of the coding units
•  in order to determine the center coding unit among the coding units split into an odd number.
•  the image decoding apparatus 100 may determine the coding unit 1414b, which has the index corresponding to the middle value among the indexes, as the coding unit located at the center among the coding units determined by splitting the first coding unit 1410, based on the indexes of the coding units.
•  according to an embodiment, in determining the indexes for distinguishing the split coding units, when the coding units are not all the same size, the image decoding apparatus 100 may determine
•  the indexes based on the size ratio between the coding units.
•  referring to FIG. 14, the coding unit 1414b generated by splitting the first coding unit 1410 has the same width as the other coding units 1414a and 1414c but a height that is twice the height of the other coding units 1414a and 1414c.
•  in this case, when the index (PID) of the center coding unit 1414b is 1, the index of the coding unit 1414c located in the next order may be increased by 2 to be 3. In this way, when the index does not increase uniformly, that is, when the increment varies, the image decoding
•  apparatus 100 may determine that the coding unit is split into a plurality of coding units including a coding unit having a size different from the other coding units. According to an embodiment, when the split shape mode information indicates splitting into an odd number of coding units,
•  the image decoding apparatus 100 may split the current coding unit into a form in which the coding unit at a predetermined position (for example, the center coding unit) among the odd number of coding units has a size different from the other coding units. In this case, the image decoding apparatus 100 may determine the center coding unit having the different size by using the indexes (PIDs) of the coding units.
•  however, the index, and the size or position of the coding unit at the predetermined position to be determined, are specified only to explain an embodiment and should not be construed as being limited thereto; it should be construed that various indexes and various positions and sizes of coding units may be used.
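The sketch below illustrates, with hypothetical helper names, the PID behavior just described: indexes assigned per split coding unit increase by more than 1 across a unit that spans the size of two ordinary units, so a jump in the PID sequence signals an odd split containing a larger center unit, and the unit holding the middle PID value is taken as the center coding unit.

```python
from typing import List

def pids_from_heights(heights: List[int]) -> List[int]:
    """Assign PIDs in scan order, weighting each unit by its size ratio to the smallest unit."""
    unit = min(heights)
    pids, next_pid = [], 0
    for h in heights:
        pids.append(next_pid)
        next_pid += h // unit
    return pids

def has_unequal_sizes(pids: List[int]) -> bool:
    """A non-uniform PID increment means the split units are not all the same size."""
    steps = {b - a for a, b in zip(pids, pids[1:])}
    return len(steps) > 1

heights = [4, 8, 4]                      # e.g. units like 1414a, 1414b, 1414c
pids = pids_from_heights(heights)
print(pids)                              # [0, 1, 3] -> the index jumps from 1 to 3
print(has_unequal_sizes(pids))           # True
print(heights[pids.index(sorted(pids)[len(pids) // 2])])  # 8 -> the larger center unit
```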
•  according to an embodiment, the image decoding apparatus 100 may use a predetermined data unit at which the recursive splitting of a coding unit starts.
•  FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture, according to an embodiment.
•  according to an embodiment, the predetermined data unit may be defined as a data unit at which a coding unit starts to be recursively split by using the split shape mode information. That is, it may correspond to a coding unit of the uppermost depth used in the process of determining a plurality of coding units that split the current picture.
•  hereinafter, for convenience of description, such a predetermined data unit is referred to as a reference data unit.
•  according to an embodiment, the reference data unit may have a predetermined size and shape. According to an embodiment, the reference data unit may include MxN samples.
•  here, M and N may be equal to each other, and may be integers expressed as powers of two. That is, the reference data unit may have a square or non-square shape, and may subsequently be split into an integer number of coding units.
•  according to an embodiment, the image decoding apparatus 100 may split the current picture into a plurality of reference data units.
•  according to an embodiment, the image decoding apparatus 100 may split each of the plurality of reference data units that split the current picture by using the split shape mode information for each reference data unit. The splitting process of such reference data units may correspond to a splitting process using a quad-tree structure.
•  according to an embodiment, the image decoding apparatus 100 may determine in advance the minimum size that a reference data unit included in the current picture can have. Accordingly, the image decoding apparatus 100 may determine reference data units of various sizes greater than or equal to the minimum size,
•  and may determine at least one coding unit by using the split shape mode information based on the determined reference data unit.
•  referring to FIG. 15, the image decoding apparatus 100 may use a square reference coding unit 1500,
•  or may use a non-square reference coding unit 1502.
•  according to an embodiment, the shape and size of the reference coding unit may be determined for each of various data units (for example, a sequence, a picture, a slice, a slice segment, a tile, a tile group, a largest coding unit, and the like) that can include at least one reference coding unit.
•  according to an embodiment, the bitstream obtainer 110 of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information about the shape of the reference coding unit and information about the size of the reference coding unit, for each of the various data units.
•  the process of determining at least one coding unit included in the square reference coding unit 1500 has been described above through the process of splitting the current coding unit 300 of FIG. 3, and the process of determining at least one coding unit included in the non-square
•  reference coding unit 1502 has been described above through the process of splitting the current coding unit 400 or 450 of FIG. 4, so a detailed description is omitted.
•  according to an embodiment, in order to determine the size and shape of the reference coding unit according to some data units determined in advance based on a predetermined condition, the image decoding apparatus 100
•  may use an index for identifying the size and shape of the reference coding unit. That is, the bitstream obtainer 110 may obtain, from the bitstream, only an index for identifying the size and shape of the reference coding unit for each slice, slice segment, tile, tile group, largest coding unit, and the like, as a data unit that satisfies a predetermined condition (for example, a data unit having a size smaller than or equal to a slice) among the various data units (for example, a sequence, a picture, a slice, a slice segment, a tile, a tile group, a largest coding unit, and the like). The image decoding apparatus 100 may use the index to
•  determine the size and shape of the reference data unit for each data unit that satisfies the predetermined condition.
•  when the information about the shape of the reference coding unit and the information about the size of the reference coding unit are obtained from the bitstream for each data unit of a relatively small size, the use efficiency of the bitstream may not be good; therefore, instead of directly obtaining the information about the shape of the reference coding unit and the information about the size of the reference coding unit, only the index may be obtained and used.
•  in this case, at least one of the size and shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be determined in advance. That is, the image decoding apparatus 100 may determine at least one of the size and shape of the reference coding unit included in the data unit serving as the basis for obtaining the index, by selecting at least one of the predetermined size and shape of the reference coding unit according to the index.
•  according to an embodiment, the image decoding apparatus 100 may use at least one reference coding unit included in one largest coding unit.
•  that is, the largest coding unit that splits an image may include at least one reference coding unit, and a coding unit may be determined through a recursive splitting process of each reference coding unit.
•  according to an embodiment, at least one of the width and height of the largest coding unit may correspond to an integer multiple of at least one of the width and height of the reference coding unit.
•  according to an embodiment, the size of the reference coding unit may be a size obtained by splitting the largest coding unit n times according to a quad-tree structure. That is, the image decoding apparatus 100 may determine the reference coding unit by splitting the largest coding unit n times according to the quad-tree structure, and, according to various embodiments,
•  may split the reference coding unit based on at least one of the block shape information and the split shape mode information.
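A small sketch of the relationship stated above, with a hypothetical function name: the reference coding unit size is obtained by splitting the largest coding unit n times along a quad tree, i.e. each split halves both dimensions.

```python
def reference_cu_size(max_cu_size: int, n_splits: int) -> int:
    """Side length of the reference coding unit after n quad-tree splits of the largest CU."""
    return max_cu_size >> n_splits          # each quad-tree split halves the side length

print(reference_cu_size(128, 0))  # 128 -> the reference CU equals the largest CU
print(reference_cu_size(128, 2))  # 32
```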
•  according to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, and use block shape information indicating the shape of the current coding unit
•  or split shape mode information indicating the method of splitting the current coding unit.
•  the split shape mode information may be included in the bitstream related to various data units.
•  for example, the image decoding apparatus 100 may use the split shape mode information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, a slice segment header, a tile
•  header, and a tile group header.
•  furthermore, the image decoding apparatus 100 may obtain, from the bitstream, and use a syntax element corresponding to the block shape information
•  or the split shape mode information for each largest coding unit, reference coding unit, and processing block.
•  the image decoding apparatus 100 may determine an image splitting rule. The splitting rule may be determined in advance between the image decoding apparatus 100 and the image encoding apparatus 200.
•  the image decoding apparatus 100 may determine the image splitting rule based on information obtained from the bitstream.
•  the image decoding apparatus 100 may determine the splitting rule based on information obtained from at least one of a sequence parameter set, a picture parameter set, a video parameter set, a slice header, a slice segment header, a tile header, and a tile group header.
•  the image decoding apparatus 100 may determine the splitting rule differently according to a frame, a slice, a tile, a temporal layer, a largest coding unit, or a coding unit.
•  the image decoding apparatus 100 may determine the splitting rule based on the block shape of the coding unit.
•  the block shape may include the size, shape, ratio of width to height, and direction of the coding unit. The image encoding apparatus 200 and the image decoding apparatus 100 may decide in advance to determine the splitting rule based on the block shape of the coding unit. However, it is not limited thereto; the image decoding apparatus 100 may determine the splitting rule based on information obtained from the bitstream received from the image encoding apparatus 200.
•  the shape of the coding unit may include square and non-square. When the width and height of the coding unit are the same, the image decoding
•  apparatus 100 may determine the shape of the coding unit to be square. Also, when the width and height of the coding unit are not the same, the image decoding apparatus 100 may determine the shape of the coding unit to be non-square.
•  the size of the coding unit may include various sizes such as 4x4, 8x4, 4x8, 8x8, 16x4, 16x8, ..., 256x256.
•  the size of the coding unit may be classified according to the length of the long side, the length of the short side, or the area of the coding unit.
•  the image decoding apparatus 100 may apply the same splitting rule to coding units classified into the same group. For example, the image decoding apparatus 100 may classify coding units having the same long side length as having the same size. Also, the image decoding apparatus 100 may apply the same splitting rule to coding units having the same long side length.
•  the ratio of the width to the height of the coding unit may include 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 32:1, 1:32, and the like.
•  the direction of the coding unit may include a horizontal direction and a vertical direction.
•  the horizontal direction may indicate the case where the length of the width of the coding unit is longer than the length of its height.
•  the vertical direction may indicate the case where the length of the width of the coding unit is shorter than the length of its height.
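The block-shape attributes listed above (shape, ratio, direction, long side) can be summarized in a small sketch; the classification function and its return values are hypothetical illustrations, not syntax defined by the specification.

```python
from math import gcd

def block_shape(width: int, height: int) -> dict:
    """Classify a coding unit by the attributes used when deciding the splitting rule."""
    g = gcd(width, height)
    return {
        "shape": "SQUARE" if width == height else "NON_SQUARE",
        "ratio": f"{width // g}:{height // g}",
        # horizontal if wider than tall, vertical if taller than wide
        "direction": "HOR" if width > height else ("VER" if height > width else "NONE"),
        "long_side": max(width, height),
    }

print(block_shape(16, 16))  # {'shape': 'SQUARE', 'ratio': '1:1', 'direction': 'NONE', 'long_side': 16}
print(block_shape(32, 8))   # {'shape': 'NON_SQUARE', 'ratio': '4:1', 'direction': 'HOR', 'long_side': 32}
```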
•  the image decoding apparatus 100 may adaptively determine the splitting rule based on the size of the coding unit.
•  the image decoding apparatus 100 may differently determine the allowable split shape modes based on the size of the coding unit. For example, the image decoding apparatus 100 may determine whether splitting is allowed based on the size of the coding unit. The image decoding apparatus 100 may determine the split direction according to the size of the coding unit. The image decoding apparatus 100 may determine the allowable split types according to the size of the coding unit.
•  the determination of the splitting rule based on the size of the coding unit may be a splitting rule determined in advance between the image encoding apparatus 200 and the image decoding apparatus 100.
•  also, the image decoding apparatus 100 may determine the splitting rule based on information obtained from the bitstream.
•  the image decoding apparatus 100 may determine the splitting rule based on the position of the coding unit.
•  that is, the image decoding apparatus 100 may adaptively determine the splitting rule based on the position that the coding unit occupies within the image.
•  also, the image decoding apparatus 100 may determine the splitting rule so that coding units generated through different splitting paths do not have the same block shape. However, it is not limited thereto; coding units generated through different splitting paths may have the same block shape. Coding units generated through different splitting paths may have different decoding processing orders. Since the decoding processing order has been described with reference to FIG. 12, a detailed description is omitted.
  • 16 shows a combination of a type in which coding units can be divided according to an embodiment.
  • the image decoding apparatus 100 may differently determine a combination of division types in which a coding unit can be divided for each picture.
  • the image decoding apparatus 100 may be included in an image.
  • a picture that can be divided into two, three or four coding units ( 1620) can be used to decode an image.
  • the image decoding apparatus 100 may use only the segmentation type information indicating that the picture 1600 is divided into four square coding units in order to divide the picture 1600 into a plurality of coding units.
  • the video decoding apparatus 100 may use only the division type information indicating that the picture 1610 is divided into two or four coding units.
  • in order to split the picture 1620, the image decoding apparatus 100 may use only the split type information indicating splitting into two, three, or four coding units.
  • the combinations of split types described above are merely embodiments for describing the operation of the image decoding apparatus 100, and therefore the combinations of split types should not be construed as being limited to the above embodiments; it should be interpreted that various combinations of split types can be used for each predetermined data unit.
  • the bitstream acquisition unit 110 of the image decoding apparatus 100 can acquire a bitstream including an index indicating a combination of the split type information, in units of predetermined data units (for example, a sequence, a picture, a slice, a slice segment, a tile, or a tile group).
  • for example, the bitstream acquisition unit 110 can obtain the index indicating the combination of split type information from a sequence parameter set, a picture parameter set, a slice header, a tile header, or a tile group header.
  • the image decoding apparatus 100 can determine, by using the acquired index, a combination of split types into which coding units can be split for each predetermined data unit, and accordingly, a different combination of split types can be used for each predetermined data unit.
  • FIG. 17 shows various types of coding units that can be determined based on split type mode information that can be expressed as a binary code, according to an embodiment.
  • the image decoding apparatus 100 can split a coding unit into various types by using the split type mode information acquired through the bitstream acquisition unit 110.
  • the types into which the coding unit can be split may correspond to various types including the types described through the above-described embodiments.
  • the image decoding apparatus 100 can split a square coding unit in at least one of the horizontal direction and the vertical direction based on the split type mode information, and can split a non-square coding unit in the horizontal direction or the vertical direction.
  • when the image decoding apparatus 100 can split a square coding unit in the horizontal direction and the vertical direction and can split it into four square coding units, there may be four split types that the split type mode information for the square coding unit can indicate. When the coding unit is not split, the split type mode information may be expressed as (00)b; when the coding unit is split in the horizontal direction and the vertical direction, the split type mode information may be expressed as (01)b; when the coding unit is split in the horizontal direction, the split type mode information may be expressed as (10)b; and when the coding unit is split in the vertical direction, the split type mode information may be expressed as (11)b.
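  • As a simple illustration of the fixed-length codes just described, the sketch below maps each 2-bit binary code to a split type for a square coding unit. The mode names (NO_SPLIT, QUAD_SPLIT, and so on) are placeholders chosen for this sketch, not normative syntax names.

```python
# A minimal sketch, assuming the fixed-length binary codes described above for a
# square coding unit; the mode names are illustrative placeholders.
SQUARE_SPLIT_CODES = {
    "00": "NO_SPLIT",           # (00)b: the coding unit is not split
    "01": "QUAD_SPLIT",         # (01)b: split in both the horizontal and vertical directions
    "10": "HORIZONTAL_BINARY",  # (10)b: split in the horizontal direction
    "11": "VERTICAL_BINARY",    # (11)b: split in the vertical direction
}

def parse_square_split_mode(bits: str) -> str:
    return SQUARE_SPLIT_CODES[bits]

assert parse_square_split_mode("01") == "QUAD_SPLIT"
```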
  • when the image decoding apparatus 100 splits a non-square coding unit in the horizontal direction or the vertical direction, the split types that the split type mode information can indicate may be distinguished by how many coding units the coding unit is split into.
  • the image decoding apparatus 100 may split a non-square coding unit into up to three coding units according to an embodiment.
  • the image decoding apparatus 100 may split the coding unit into two coding units, and in this case, the split type mode information may be expressed as (10)b.
  • the image decoding apparatus 100 can split the coding unit into three coding units, and in this case, the split type mode information can be expressed as (11)b.
  • the image decoding apparatus 100 can determine not to split the coding unit, and in this case, the split type mode information can be expressed as (0)b. That is, in order to use the binary codes indicating the split type mode information, the image decoding apparatus 100 may use variable length coding (VLC) instead of fixed length coding (FLC).
  • according to an embodiment, the binary code of the split type mode information indicating that the coding unit is not split may be expressed as (0)b. If the binary code of the split type mode information indicating that the coding unit is not split were set to (00)b, all 2-bit binary codes of the split type mode information would have to be used even though there is no split type mode information set to (01)b. However, when three split types are used for a non-square coding unit as shown in FIG. 17, the image decoding apparatus 100 can use the 1-bit binary code (0)b as the split type mode information.
  • however, the split types of non-square coding units indicated by the split type mode information should not be interpreted as being limited to only the three types shown in FIG. 17, and should be interpreted as various types including the above-described embodiments.
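  • The variable-length binarization described above can be sketched as follows. This is a minimal, hypothetical reader for the non-square case, assuming the codes (0)b, (10)b, and (11)b discussed above; the mode names and the function name are placeholders.

```python
# A minimal sketch of the variable-length codes for a non-square coding unit:
# '0' means no split, and a first bin of '1' is followed by a second bin choosing
# between a split into two or three coding units. Names are assumptions.
def read_nonsquare_split_mode(bits: "list[int]") -> str:
    first = bits.pop(0)
    if first == 0:                 # 1-bit code (0)b
        return "NO_SPLIT"
    second = bits.pop(0)
    return "BINARY_SPLIT" if second == 0 else "TERNARY_SPLIT"   # (10)b / (11)b

stream = [1, 1, 0]
print(read_nonsquare_split_mode(stream))  # TERNARY_SPLIT, with one bit left over
```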
  • FIG. 18 shows other types of coding units that can be determined based on split type mode information that can be expressed as a binary code, according to an embodiment.
  • the image decoding apparatus 100 can split a square coding unit in the horizontal direction or the vertical direction based on the split type mode information, and can split a non-square coding unit in the horizontal direction or the vertical direction.
  • the division type mode information can indicate that the square type coding unit is divided in one direction.
  • in this case, the binary code of the split type mode information indicating that the square coding unit is not split can be expressed as (0)b. If the binary code of the split type mode information indicating that the coding unit is not split were set to (00)b, all 2-bit binary codes of the split type mode information would have to be used even though there is no split type mode information set to (01)b.
  • however, since the image decoding apparatus 100 can determine that the coding unit is not split even by using the 1-bit binary code (0)b as the split type mode information, the bitstream can be used efficiently.
  • however, the split types of square coding units indicated by the split type mode information should not be interpreted as being limited to only the three types shown in FIG. 18, and should be interpreted as various types including the above-described embodiments.
  • block type information or split type mode information may be expressed using a binary code, and such information may be directly generated as a bitstream.
  • also, block type information or split type mode information that can be expressed as a binary code may not be directly generated as a bitstream, but may be used as a binary code input to context adaptive binary arithmetic coding (CABAC).
  • a process in which the image decoding apparatus 100 obtains syntax for block type information or split type mode information through CABAC is described below. A bitstream including a binary code for the syntax can be obtained through the bitstream acquisition unit 110.
  • the image decoding apparatus 100 can detect a syntax element indicating the block type information or the split type mode information by inverse-binarizing a bin string included in the acquired bitstream.
  • the image decoding apparatus 100 obtains a set of binary bin strings corresponding to the syntax element to be decoded and decodes each bin by using probability information, and the image decoding apparatus 100 can repeat this until the bin string composed of the decoded bins becomes equal to one of the previously obtained bin strings.
  • the image decoding apparatus 100 may determine the syntax element by performing inverse binarization of the bin string.
  • that is, the image decoding apparatus 100 may determine the syntax for the bin string by performing a decoding process of adaptive binary arithmetic coding, and the image decoding apparatus 100 can update the probability model for the bins acquired through the bitstream acquisition unit 110.
  • the bitstream acquisition unit 110 of the image decoding apparatus 100 may, according to an embodiment, acquire a bitstream indicating a binary code representing the split type mode information.
  • the image decoding apparatus 100 can determine the syntax for the split type mode information, and may update the probability for each bit of the 2-bit binary code. That is, depending on whether the value of the first bin of the 2-bit binary code is 0 or 1, the image decoding apparatus 100 can update the probability of having a value of 0 or 1 when decoding the next bin.
  • the probability of the bins used in the process of decoding the bins of the bin string for the syntax can be updated, and the image decoding apparatus 100 can determine that a specific bin of the bin string has the same probability without updating the probability.
  • the image decoding apparatus 100 uses one bin having a value of 0 when the non-square coding unit is not split, and thus the syntax can be determined. That is, when the block type information indicates that the current coding unit is of a non-square type, the first bin of the bin string for the split type mode information may be 0 when the non-square coding unit is not split, and may be 1 when the coding unit is split into two or three coding units.
  • the probability that the first bin of the bin string of the split mode information for the coding unit is 0 may be 1/3, and the probability that it is 1 may be 2/3.
  • as described above, since the split type mode information indicating that the non-square coding unit is not split can be expressed only as a 1-bit bin string having a value of 0, the image decoding apparatus 100 may decode the second bin only when the first bin of the split type mode information is 1, and may determine the syntax for the split type mode information by determining whether the second bin is 0 or 1. According to an embodiment, when the first bin for the split type mode information is 1, the image decoding apparatus 100 may decode the second bin assuming that the probabilities of the second bin being 0 and being 1 are the same.
  • as such, the image decoding apparatus 100 may use various probabilities for each bin in the process of determining the bins of the bin string for the split type mode information. According to an embodiment, the image decoding apparatus 100 can determine the probabilities of the bins for the split type mode information differently according to the direction of the non-square block. According to an embodiment, the image decoding apparatus 100 can determine the probabilities of the bins for the split type mode information differently according to the width or the length of the long side of the current coding unit. According to an embodiment, the image decoding apparatus 100 can determine the probabilities of the bins for the split type mode information differently according to at least one of the shape of the current coding unit and the length of its long side.
  • the image decoding apparatus 100 may determine that the probabilities of the bins for the split type mode information are the same for coding units of a predetermined size or larger. For example, for coding units whose long side has a length of 64 samples or more, it can be determined that the probabilities of the bins for the split type mode information are the same.
  • the image decoding apparatus 100 may determine the initial probability of the bins constituting the bin string of the split type mode information based on a slice type (e.g., I slice, P slice, or B slice).
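  • The probability-update behavior described above can be illustrated with a small sketch. The class below is a deliberately simplified stand-in, assuming a basic exponential update rule and the 1/3 vs. 2/3 starting point mentioned above; real CABAC uses an arithmetic decoder with table-driven probability states, so the names and the update rate here are illustrative assumptions only.

```python
# A minimal sketch of per-bin probability tracking with update, in the spirit of
# the CABAC description above. Simplified assumptions throughout.
class BinContext:
    def __init__(self, p_one: float):
        self.p_one = p_one                  # current estimate of P(bin == 1)

    def update(self, bin_value: int, rate: float = 0.05):
        # Move the estimate toward the value that was actually decoded.
        self.p_one += rate * (bin_value - self.p_one)

# First bin of the split-type bin string for a non-square block:
# as described above, P(0) may start at 1/3 and P(1) at 2/3.
first_bin_ctx = BinContext(p_one=2.0 / 3.0)
for decoded in (1, 1, 0, 1):                # values produced by the arithmetic decoder
    first_bin_ctx.update(decoded)
print(round(first_bin_ctx.p_one, 3))

# The second bin may be decoded with equal probabilities and no update.
second_bin_ctx = BinContext(p_one=0.5)
```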
  • FIG. 19 is a block diagram of an image encoding and decoding system that performs loop filtering.
  • the encoding end 1910 of the image encoding and decoding system 1900 transmits an encoded bitstream of an image, and the decoding end 1950 receives the bitstream, decodes it, and outputs a reconstructed image.
  • the encoding end 1910 may have a configuration similar to the image encoding apparatus 200 to be described later, and the decoding end 1950 may have a configuration similar to the image decoding apparatus 100.
  • in the encoding end 1910, the prediction encoding unit 1915 outputs prediction data through inter prediction and intra prediction, and the transform and quantization unit 1920 transforms and quantizes residual data between the prediction data and the current input image and outputs the quantized transform coefficients.
  • the entropy encoding unit 1925 encodes the quantized transform coefficients and outputs them as a bitstream.
  • the quantized transform coefficients are reconstructed into data in the spatial domain through the inverse quantization and inverse transform unit 1930, and the reconstructed data in the spatial domain is output as a reconstructed image through the deblocking filtering unit 1935 and the loop filtering unit 1940.
  • the restored image may be used as a reference image of the next input image through the prediction encoding unit 1915.
  • the encoded image data in the bitstream received by the decoding stage 1950 is restored to residual data in the spatial domain through an entropy decoding unit 1955 and an inverse quantization and inverse transformation unit 1960.
  • the prediction data output from the prediction decoding unit 1975 and the residual data are combined to form image data in the spatial domain, and the deblocking filtering unit 1965 and the loop filtering unit 1970 may perform filtering on the image data in the spatial domain and output a reconstructed image for the current original image.
  • the reconstructed image may then be used by the prediction decoding unit 1975 as a reference image for the next image.
  • the loop filtering unit 1940 of the encoding end 1910 performs loop filtering by using filter information input according to a user input or a system setting.
  • the filter information used by the filtering unit 1940 is output to the entropy encoding unit 1925 and transmitted to the decoding stage 1950 together with the coded image data.
  • the loop filtering unit 1970 of the decoding end 1950 may perform loop filtering based on the filter information received from the encoding end 1910.
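  • The decoding-end data flow described above can be sketched as follows. Every stage in this sketch is a trivial stand-in rather than a real codec component, and all of the function names are placeholders; the example only shows the order of operations (entropy decoding, inverse quantization and inverse transform, prediction, deblocking filtering, and loop filtering using signaled filter information).

```python
# A minimal sketch of the decoding-end pipeline; every stage is a stand-in.
def entropy_decode(bitstream):                 # 1955: residual symbols + filter info
    return {"residual": bitstream, "filter_info": {"strength": 1}}

def inverse_quantize_and_transform(symbols):   # 1960: back to spatial-domain residuals
    return [s * 2 for s in symbols["residual"]]

def predict(reference_picture):                # 1975: inter/intra prediction (copy here)
    return list(reference_picture)

def deblocking_filter(samples):                # 1965: smoothing stand-in
    return samples

def loop_filter(samples, filter_info):         # 1970: uses filter info from the bitstream
    return [s + filter_info["strength"] for s in samples]

def decode_picture(bitstream, reference_picture):
    symbols = entropy_decode(bitstream)
    residual = inverse_quantize_and_transform(symbols)
    prediction = predict(reference_picture)
    reconstructed = [p + r for p, r in zip(prediction, residual)]
    reconstructed = deblocking_filter(reconstructed)
    return loop_filter(reconstructed, symbols["filter_info"])

print(decode_picture([1, -1, 0], [10, 10, 10]))  # [13, 9, 11]
```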
  • FIG. 2 is a block diagram of an image encoding apparatus 200 capable of encoding an image based on at least one of block type information and split type mode information according to an embodiment.
  • the image encoding apparatus 200 may include an encoding unit 220 and a bitstream generation unit 210.
  • the encoding unit 220 may receive an input image and encode an input image.
  • the encoding unit 220 may obtain at least one syntax element by encoding the input image.
  • the syntax element includes a skip flag, prediction mode, motion vector difference, motion vector prediction method (or index), transform quantized coefficient, and coded block pattern. , coded block flag, intra prediction mode, direct flag, merge flag, delta QP, reference index, prediction direction, and transform index.
  • the encoding unit 220 can determine a context model based on block shape information including at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • the bitstream generation unit 210 can generate a bitstream based on the encoded input image.
  • the bitstream generation unit 210 can generate a bitstream by entropy-encoding a syntax element based on the context model.
  • the image encoding apparatus 200 may transmit a bitstream to the image decoding apparatus 100.
  • the encoding unit 220 of the image encoding apparatus 200 may determine the shape of the coding unit. For example, the coding unit may have a square or non-square shape, and information representing this shape may be included in the block shape information.
  • the encoding unit 220 may determine in what form the encoding unit is to be divided.
  • the encoding unit 220 may determine the shape of at least one coding unit included in the coding unit, and the bitstream generation unit 210 may generate a bitstream including split type mode information including information on the shape of the coding unit.
  • the encoding unit 220 may determine whether the encoding unit is divided or not.
  • when the encoding unit 220 determines that only one coding unit is included in the coding unit or that the coding unit is not split, the bitstream generation unit 210 may generate a bitstream including split type mode information indicating that the coding unit is not split.
  • also, the encoding unit 220 may split the coding unit into a plurality of coding units included in the coding unit, and the bitstream generation unit 210 may generate a bitstream including split type mode information indicating that the coding unit is split into a plurality of coding units.
  • information indicating into how many coding units the coding unit is split or in which direction it is split may be included in the split type mode information. For example, the split type mode information may indicate splitting in at least one of the vertical direction and the horizontal direction, or may indicate no splitting.
  • the image encoding apparatus 200 determines the split type mode information based on the split type mode of the coding unit.
  • the image encoding apparatus 200 determines a context model based on at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • the image encoding apparatus 200 generates information on a split mode for splitting a coding unit as a bitstream based on the context model.
  • the image encoding apparatus 200 can acquire an array for matching an index for the context model with at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • the image encoding apparatus 200 may acquire the index for the context model in the array, based on at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • the image encoding apparatus 200 may acquire the index for the context model and determine the context model based on the index.
  • the image encoding apparatus 200 can determine the context model further based on block shape information including at least one of the shape, direction, ratio of width and height, or size of a neighboring coding unit adjacent to the coding unit.
  • the neighboring coding unit may include at least one of coding units located at the lower left, left, upper left, upper, upper right, right, or lower right of the coding unit.
  • also, in order to determine the context model, the image encoding apparatus 200 may compare the length of the width of the upper neighboring coding unit with the length of the width of the coding unit.
  • also, the image encoding apparatus 200 may compare the lengths of the heights of the left and right neighboring coding units with the length of the height of the coding unit.
  • also, the image encoding apparatus 200 may determine the context model based on the comparison results.
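  • A context selection of this kind can be sketched briefly. The function below is a hypothetical illustration, assuming that the context index simply counts how many of the available neighbours are smaller than the current block; the index values and the exact comparison are assumptions for this sketch, not the rule defined in the disclosure.

```python
# A minimal sketch of choosing a context index from neighbour comparisons.
def split_flag_context(cur_w, cur_h, above_w=None, left_h=None):
    ctx = 0
    # If the above neighbour is narrower than the current block, splitting is more likely.
    if above_w is not None and above_w < cur_w:
        ctx += 1
    # If the left neighbour is shorter than the current block, splitting is more likely.
    if left_h is not None and left_h < cur_h:
        ctx += 1
    return ctx     # 0, 1 or 2: selects one of several probability models

print(split_flag_context(64, 64, above_w=32, left_h=64))  # 1
```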
  • FIG. 20 is a diagram illustrating a configuration of an image decoding apparatus 2000 according to an embodiment.
  • the image decoding apparatus 2000 includes an acquisition unit 2010, a block determination unit 2030, a prediction decoding unit 2050, and a restoration unit 2070.
  • the acquisition unit 2010 corresponds to the bitstream acquisition unit 110 shown in FIG. 1, and the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070 correspond to the decoding unit 120 shown in FIG. 1.
  • the acquisition unit 2010, the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070 may be implemented with at least one processor.
  • the image decoding apparatus 2000 may include one or more data storage units (not shown) that store input/output data of the acquisition unit 2010, the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070. The image decoding apparatus 2000 may also include a memory control unit (not shown) that controls data input/output of the data storage unit (not shown).
  • the acquisition unit 2010 receives a bitstream generated as a result of encoding an image.
  • the acquisition unit 2010 acquires syntax elements for decoding an image from the bitstream.
  • Binary values corresponding to the syntax elements may be included in the bitstream according to the hierarchical structure of the image.
  • the acquisition unit 2010 may acquire the syntax elements by entropy-decoding the binary values included in the bitstream.
  • FIG. 21 is an exemplary diagram illustrating a structure of a bitstream 2100 generated according to a hierarchical structure of an image.
  • the bitstream 2100 may include a sequence parameter set 2110, a picture parameter set 2120, a group header 2130, and a block parameter set 2140. Each of the sequence parameter set 2110, the picture parameter set 2120, the group header 2130, and the block parameter set 2140 includes information used in each layer according to the hierarchical structure of the image.
  • the sequence parameter set 2110 includes information used in an image sequence composed of one or more images.
  • the picture parameter set 2120 includes information used in one image, and may refer to the sequence parameter set 2110.
  • the group header 2130 contains information used in the block group determined in the image.
  • the group header 2130 may be a slice header.
  • the block parameter set 2140 includes information used in a block determined in the image, and may refer to a group header 2130, a picture parameter set 2120, and a sequence parameter set 2110.
  • the block parameter set 2140 may be divided into at least one of a parameter set of a maximum coding unit (CTU), a parameter set of a coding unit (CU), a parameter set of a prediction unit (PU), and a parameter set of a transform unit (TU), according to the hierarchical structure of a block determined in the image.
  • the acquisition unit 2010 acquires information used for decoding an image from the bitstream 2100 according to the hierarchical structure of the image, and the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070, which are described later, may perform necessary operations by using the information acquired by the acquisition unit 2010.
  • the bitstream 2100 shown in FIG. 21 is only an example; some of the parameter sets shown in FIG. 21 may not be included in the bitstream 2100, or a parameter set not shown, for example a video parameter set, may be included in the bitstream 2100.
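  • The referencing between these parameter sets can be sketched as a simple chain of objects. The sketch below is a hypothetical illustration only; the class and field names (such as max_cu_size) are placeholders and do not mirror the actual bitstream syntax, but they show how block-level decoding can reach sequence-level information by following references from the block parameter set up to the sequence parameter set.

```python
# A minimal sketch of the parameter-set hierarchy; names are placeholders.
from dataclasses import dataclass, field

@dataclass
class SequenceParameterSet:
    sps_id: int
    data: dict = field(default_factory=dict)

@dataclass
class PictureParameterSet:
    pps_id: int
    sps: SequenceParameterSet

@dataclass
class GroupHeader:          # e.g. a slice header
    pps: PictureParameterSet

@dataclass
class BlockParameterSet:    # e.g. CTU/CU/PU/TU level parameters
    group_header: GroupHeader

sps = SequenceParameterSet(sps_id=0, data={"max_cu_size": 128})
pps = PictureParameterSet(pps_id=0, sps=sps)
gh = GroupHeader(pps=pps)
bps = BlockParameterSet(group_header=gh)

# A block-level decoder can reach sequence-level information by following the chain.
print(bps.group_header.pps.sps.data["max_cu_size"])   # 128
```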
  • the block determiner 2030 divides the current image into blocks and sets block groups including at least one block in the current image.
  • a block may correspond to a tile, and a block group may correspond to a slice; a slice may also be referred to as a tile group.
  • the prediction decoder 2050 obtains prediction samples corresponding to the subblocks by inter prediction or intra prediction of subblocks of blocks divided from the current image.
  • the subblocks may be at least one of a maximum coding unit, a coding unit, and a transform unit.
  • hereinafter, a block is described as a tile and a block group is described as a slice, but this is only an example; when a B block consists of a set of A blocks, the A block may correspond to a block and the B block may correspond to a block group.
  • for example, since a tile corresponds to a set of CTUs, the CTU may correspond to a block and the tile may correspond to a block group.
  • the CTUs may have square shapes of the same size. A tile includes one or more CTUs, and a tile has a square or rectangular shape.
  • a slice contains one or more tiles.
  • a slice may have a rectangular shape or a non-rectangular shape.
  • the block determination unit 2030 may divide the current image 2200 into a plurality of CTUs according to information obtained from the bitstream, and may set, in the current image 2200, a tile including at least one CTU and a slice including at least one tile.
  • the block determination unit 2030 may divide the current image 2200 into a plurality of tiles according to information obtained from the bitstream, and divide each tile into one or more CTUs.
  • the block determination unit 2030 may set a slice including at least one tile in the current image 2200.
  • the block determination unit 2030 may divide the current image 2200 into one or more slices and divide each slice into one or more tiles according to information obtained from the bitstream. Then, the block determination unit 2030 may divide each tile into one or more CTUs.
  • the block determiner 2030 may use address information of the slices obtained from the bitstream to set the slices in the current image 2200.
  • that is, the block determination unit 2030 can set slices including one or more tiles in the current image 2200 according to the address information of the slices obtained from the bitstream. The address information of the slices can be obtained from the video parameter set, the sequence parameter set, the picture parameter set, or the group header of the bitstream.
  • Slices including at least one tile can be set in the current image 2200 according to the address information of the slice obtained from the bitstream.
  • the slices 2310, 2320, 2330, 2340, and 2350 can be determined along the raster scan direction 2300 in the current image 2200, and the slices 2310, 2320, 2330, 2340, and 2350 can be sequentially decoded according to the raster scan direction 2300.
  • the address information may include an identification value of the lower right tile located at the lower right of the tiles included in each of the slices 2310, 2320, 2330, 2340, and 2350.
  • the address information of the slices 2310, 2320, 2330, 2340, and 2350 may include 9, which is the identification value of the lower-right tile of the first slice 2310, 7, which is the identification value of the lower-right tile of the second slice 2320, and 15, which is the identification value of the lower-right tile of the fifth slice 2350.
  • the address information of the fifth slice 2350 may not be included in the bitstream.
  • the block determination unit 2030 may identify the upper left tile among tiles in the current image 2200, that is, a tile having an identification value of 0, to set the first slice 2310. Further, the block determination unit 2030 may determine a region including the tile 0 and the tile 9 identified from the address information as the first slice 2310.
  • the block determination unit 2030 may determine the tile having the smallest identification value among the tiles not included in the previous slice, that is, the first slice 2310, namely tile 2, as the upper left tile of the second slice 2320, and the block determination unit 2030 may determine a region including the tile 2 and the tile 7 identified from the address information as the second slice 2320.
  • for specifying the third slice 2330, the block determination unit 2030 may determine the tile having the smallest identification value among the tiles not included in the previous slices, that is, the first slice 2310 and the second slice 2320, namely tile 10, as the upper left tile of the third slice 2330.
  • the block determination unit 2030 may determine a region including the tile 10 and the tile 11 identified from the address information as the third slice 2330.
  • slices may be set within the current image 2200 only by identification information of the lower right tile included in the bitstream.
  • the acquisition unit 2010 may acquire, as address information for determining the slices, the identification values of the upper left tile and the lower right tile included in each of the slices, and the block determination unit 2030 can set the slices in the current image 2200 according to the information acquired by the acquisition unit 2010. Since the upper left tile and the lower right tile included in each slice can be identified from the address information, the block determination unit 2030 can set the region including the upper left tile and the lower right tile identified from the address information as a slice.
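  • As a brief illustration of this addressing scheme, the sketch below derives the tiles belonging to a slice from the identification values of its upper-left and lower-right tiles, assuming the tiles are numbered in raster-scan order in a grid with a known number of tile columns (the example of FIG. 23 would correspond to a 4x4 tile grid). The function name and the grid assumption are made only for this sketch.

```python
# A minimal sketch: tiles of a slice from its upper-left and lower-right tile IDs.
def tiles_in_slice(top_left_id: int, bottom_right_id: int, tile_cols: int) -> list:
    top, left = divmod(top_left_id, tile_cols)
    bottom, right = divmod(bottom_right_id, tile_cols)
    return [row * tile_cols + col
            for row in range(top, bottom + 1)
            for col in range(left, right + 1)]

# First slice of the example above: upper-left tile 0, lower-right tile 9.
print(tiles_in_slice(0, 9, tile_cols=4))   # [0, 1, 4, 5, 8, 9]
```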
  • the acquisition unit 2010 may acquire, as address information for setting the slices, the identification value of the upper left tile included in each slice, the size of the width of each slice, and the size of the height of each slice, and the block determination unit 2030 can set the slices in the current image 2200 according to the information acquired by the acquisition unit 2010.
  • the address information of the second slice 2320 in FIG. 23 may include 2 which is the identification value of the upper left tile, 2 which is the size of the width of the slice, and 2 which is the size of the height of the slice.
  • the width and height sizes of 2 mean that two tile columns exist along the width direction of the second slice 2320 and two tile rows exist along the height direction.
  • the upper left tile of the first slice 2310 is fixed to tile 0, so the identification value of the upper left tile of the first slice 2310 may not be included in the bitstream.
  • the size of the width and the size of the height of the slice obtained from the bitstream may be values obtained by dividing the number of tile columns and the number of tile rows arranged along the width and height directions of the slice by a predetermined scaling factor. For example, if the predetermined scaling factor is 2, then from a slice width size of 1 and a slice height size of 1, it can be confirmed that two tile columns and two tile rows exist along the width direction and the height direction of the slice, respectively.
  • the block determination unit 2030 can determine the first slice 2310 to the fifth slice 2350 in the current image 2200 according to the address information of the first slice 2310 to the fourth slice 2340.
  • the address information of the last slice may not be included in the bitstream.
  • the address information of a slice including a tile located in the first row or a tile located in the first column of the current image 2200 may further include, in addition to the identification value of the upper left tile of the corresponding slice, the size of the width of the slice, and the size of the height of the slice, a value indicating how many subsequent slices exist along the right or lower direction of the slice. The value indicating how many subsequent slices exist along the right or lower direction of the slice may also be replaced with a value indicating how many slices are arranged along the width or height direction of the slice.
  • since the first slice 2310 includes both a tile located in the first row and a tile located in the first column of the current image 2200, the address information of the first slice 2310 may contain a value indicating how many slices follow along the right direction of the slice and a value indicating how many slices follow along the lower direction of the slice.
  • the address information of the second slice 2320 may include a value indicating how many slices follow along the lower direction of the slice.
  • when the address information includes a value indicating how many slices follow along the right direction and/or the lower direction, the size of the width of the slice may be omitted from the address information of the last slice arranged along the width direction of the current image 2200 (the second slice 2320 and/or the fifth slice 2350 in FIG. 23), and the size of the height of the slice may be omitted from the address information of the last slice arranged along the height direction of the current image 2200 (the fourth slice 2340 and/or the fifth slice 2350 in FIG. 23).
  • this is because the block determination unit 2030 already knows that one slice follows the first slice 2310 along the width direction of the current image 2200, so the size of the width of the slice following the first slice 2310 can be derived in consideration of the width of the current image 2200 even if a value indicating the width of the subsequent slice is not included in the bitstream. In FIG. 23, four tiles exist along the width direction of the current image 2200 and two tiles exist along the width direction of the first slice 2310, so it can be seen that two tiles exist along the width direction of the second slice 2320 that follows the first slice 2310.
  • similarly, since the block determination unit 2030 already knows that one slice follows the first slice 2310 along the height direction of the current image 2200, it is possible to derive the height of the slice following the first slice 2310 even if a value indicating the size of the height of the subsequent slice is not included in the bitstream.
  • the acquisition unit 2010 may obtain, from the bitstream, split information for dividing the current image 2200 into slices, and the block determination unit 2030 may divide the current image 2200 into slices according to the split information.
  • the split information may indicate, for example, splitting into four, splitting the height into two, or splitting the width into two.
  • the block determination unit 2030 may first split the current image 2200 according to the acquired split information, and may then split each of the resulting regions into smaller slices according to the split information acquired for each region.
  • referring to FIG. 24, the block determination unit 2030 determines two regions 2410 and 2420 by splitting the width of the current image 2200 into two, and splits the height of the left region 2410 into two according to the split information of the left region 2410 to determine two regions 2412 and 2414. If the split information of the right region 2420 indicates no splitting and the regions 2412 and 2414 split from the left region 2410 are not further split, the block determination unit 2030 may set the upper left region 2412 as the first slice, the right region 2420 as the second slice, and the lower left region 2414 as the third slice.
  • the block determination unit 2030 may set slices in the current image 2200 according to preset map information, and at least one slice may be additionally set in the current image 2200 according to correction information obtained from the bitstream.
  • the map information may include address information of slices located in the image.
  • the block determination unit 2030 may initially set slices in the current image 2200 according to the map information acquired from the video parameter set or the sequence parameter set of the bitstream, and may set the final slices in the current image 2200 according to the correction information acquired from the picture parameter set.
  • when tiles and slices are determined in the current image, at least one of the coding units included in the tiles can be inter-predicted. Hereinafter, a method of constructing the reference picture list used for inter prediction is described.
  • the prediction decoding unit 2050 predictively decodes the coding units included in tiles determined in the current image.
  • the prediction decoding unit 2050 may predictively decode the coding units through inter prediction or intra prediction.
  • in inter prediction, a prediction sample of a coding unit is obtained based on a reference block in a reference image indicated by a motion vector, and a reconstruction sample of the coding unit is obtained based on the prediction sample and residual data obtained from the bitstream.
  • residual data may not be included in the bitstream, and in this case, the prediction sample may be determined as a restoration sample.
  • for inter prediction of coding units, a reference picture list including reference pictures must be constructed.
  • the acquisition unit 2010 may acquire information representing a plurality of first reference picture lists from the sequence parameter set of the bitstream.
  • the information indicating the plurality of first reference picture lists may include a POC (picture order count) related value of the reference picture.
  • the plurality of first reference picture lists are used in a picture sequence including the current picture.
  • the information indicating the plurality of first reference picture lists may include the number of first reference picture lists.
  • the prediction decoding unit 2050 can construct first reference picture lists corresponding in number to the number identified from the bitstream.
  • the predictive decoding unit 2050 can construct the first reference picture lists according to the same method as the picture encoding apparatus 3300.
  • the acquisition unit 2010 acquires an indicator indicating at least one of the plurality of first reference image lists used in the image sequence from the group header of the bitstream.
  • the prediction decoding unit 2050 acquires a second reference picture list updated from the first reference picture list pointed to by the indicator.
  • the second reference picture list may be acquired by replacing at least some of the reference pictures included in the first reference picture list pointed to by the indicator with other reference pictures, by changing the order of at least some of the reference pictures, or by adding a new reference picture to the first reference picture list.
  • the acquisition unit 2010 may obtain update information from the group header of the bitstream.
  • the update information may include a POC-related value of the reference picture to be removed from the first reference picture list pointed to by the indicator, a POC-related value of the reference picture to be added to the second reference picture list, a difference between the POC-related value of the reference picture to be removed from the first reference picture list and the POC-related value of the reference picture to be added to the second reference picture list, information for changing the order of the reference pictures, and the like.
  • the update information may be obtained from a parameter set other than the group header of the bitstream, for example, a picture parameter set.
  • the coding units included in the slice may be predictively decoded according to the second reference picture list, and prediction samples of the coding units may be obtained.
  • the prediction decoding unit 2050 may predictively decode the coding units of the next slice by using a first reference picture list other than the first reference picture list pointed to by the indicator among the plurality of first reference picture lists used in the picture sequence, or by using the second reference picture list. That is, the second reference picture list obtained for the current slice can also be used for the next slice.
  • alternatively, an indicator indicating the reference picture list used for the next slice may be newly acquired, and the coding units included in the next slice may be predictively decoded according to the reference picture list pointed to by the indicator or a reference picture list updated therefrom.
  • a reference picture list suitable for predictive decoding of the coding units of the slices can be constructed only by updating the existing reference picture list.
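  • The update from a first reference picture list to a second one can be sketched briefly. The example below is a hypothetical illustration assuming the list entries are POC-related values and that the update consists of replacing some entries and appending new ones; the argument names do not mirror the actual bitstream syntax.

```python
# A minimal sketch of deriving a second reference picture list from a first one.
def update_reference_list(first_list, replacements=None, additions=None):
    second = list(first_list)
    if replacements:                      # {index in the first list: new POC-related value}
        for idx, poc_value in replacements.items():
            second[idx] = poc_value
    if additions:                         # new POC-related values appended to the list
        second.extend(additions)
    return second

first = [-1, 10, -3]                      # e.g. short-term deltas and a long-term LSB
second = update_reference_list(first, replacements={0: -2, 1: 8, 2: -5})
print(second)                             # [-2, 8, -5]
```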
  • FIG. 25 illustrates a plurality of first reference picture lists acquired through a sequence parameter set. FIG. 25 shows three first reference picture lists 2510, 2520, and 2530, but this is only an example, and the number of first reference picture lists acquired through the sequence parameter set can vary.
  • the first reference image lists 2510, 2520, and 2530 may include short-term type or long-term type reference images.
  • the short-term type reference images indicate images designated as the short-term type among the reconstructed images stored in the DPB, and the long-term type reference images indicate images designated as the long-term type among the reconstructed images stored in the DPB.
  • Reference images included in the first reference image lists (2510, 2520, 2530) may be specified as POC-related values.
  • the short-term type reference image is specified by the difference between the POC of the current image and the POC of the short-term reference image, that is, a delta value, and the long-term type reference image may be specified by the LSB (least significant bit) of the POC of the long-term reference image. Depending on an embodiment, the long-term reference image may be specified by the MSB (most significant bit) of the POC of the long-term reference image.
  • depending on an embodiment, the first reference image lists 2510, 2520, and 2530 may include only short-term type reference images or only long-term type reference images. That is, all of the reference images shown in FIG. 25 may be short-term type reference images or long-term type reference images. In addition, depending on an embodiment, some of the first reference image lists 2510, 2520, and 2530 may include only short-term type reference images, and the others may include only long-term type reference images.
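  • The POC-related values mentioned above can be illustrated as follows. This is a minimal sketch under stated assumptions: the sign convention of the delta and the 4-bit width of the LSB field are choices made for the example, not values taken from the disclosure.

```python
# A minimal sketch of POC-related values: a delta for a short-term reference picture
# and the POC LSBs for a long-term reference picture. Assumptions only.
def short_term_delta(current_poc: int, ref_poc: int) -> int:
    return ref_poc - current_poc          # e.g. -1 for the immediately preceding picture

def long_term_lsb(ref_poc: int, lsb_bits: int = 4) -> int:
    return ref_poc & ((1 << lsb_bits) - 1)

print(short_term_delta(current_poc=16, ref_poc=15))   # -1
print(long_term_lsb(ref_poc=42))                      # 10
```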
  • FIG. 26 is a diagram for explaining a method of acquiring a second reference image list.
  • the prediction decoding unit 2050 may acquire a second reference image list 2600 by changing at least some of the reference images included in the first reference image list 2510 pointed to by the indicator to other reference images. Referring to FIG. 26, it can be seen that the short-term reference image with a delta value of -1, the long-term reference image with an LSB of 10, and the short-term reference image with a delta value of -3 in the first reference image list 2510 have each been replaced, in the second reference image list 2600, with a short-term reference image with a delta value of -2, a long-term reference image with an LSB of 8, and a short-term reference image with a delta value of -5.
  • although FIG. 26 shows that all the reference images in the first reference image list 2510 are replaced with other reference images, this is only an example, and only some of the reference images in the first reference image list 2510 may be replaced with other reference images.
  • the prediction decoding unit 2050 may also replace only reference images of a specific type among the reference images included in the first reference image list 2510, for example, only the long-term type reference images, with other long-term reference images. That is, among the reference images included in the first reference image list 2510, the short-term reference images are maintained in the second reference image list 2600, and only the long-term reference images may be replaced with other long-term reference images according to information obtained from the bitstream. Conversely, the long-term reference images among the reference images included in the first reference image list 2510 may be maintained as they are in the second reference image list 2600, and only the short-term type reference images in the first reference image list 2510 may be replaced with other short-term reference images.
  • the acquisition unit 2010 may acquire the POC-related value of a new reference image from the group header of the bitstream, and the reference image indicated by the POC-related value acquired by the acquisition unit 2010 may be included in the second reference image list 2600.
  • the acquisition unit 2010 may further obtain, from the bitstream, the index of the reference image to be removed from the first reference image list 2510.
  • depending on an embodiment, the index of the reference image to be removed from the first reference image list 2510 may not be included in the bitstream.
  • when the bitstream does not include the index of the reference image to be removed, the prediction decoding unit 2050 can remove a predetermined reference image from among the reference images included in the first reference image list 2510 and include the reference image indicated by the POC-related value obtained from the bitstream in the second reference image list 2600.
  • depending on an embodiment, the information indicating the new reference image may be the difference between the POC-related value of the new reference image and the POC-related value of the reference image to be removed from the first reference image list. In FIG. 26, the reference image with an LSB of 10 included in the first reference image list 2510 is replaced, in the second reference image list 2600, with a reference image with an LSB of 8, so the information indicating the new reference image may include 2 (10-8).
  • the prediction decoding unit 2050 can derive the POC-related value of the reference image that should be newly included in the second reference image list 2600, based on the difference between the POC-related values and the POC-related value of the reference image to be removed from the first reference image list 2510.
  • the new reference image may be added to the second reference image list 2600 according to the order of the reference image to be removed from the first reference image list 2510 pointed to by the indicator.
  • for example, when the long-term reference image to which index 1 is assigned is removed from the first reference image list 2510, index 1 may also be assigned to the new reference image.
  • FIG. 27 is a diagram for explaining another method of obtaining a second reference image list.
  • the prediction decoding unit 2050 may also obtain a second reference image list 2700 by excluding reference images of a specific type from among the reference images in the first reference image list 2510 pointed to by the indicator among the plurality of first reference image lists for the image sequence. Referring to FIG. 27, it can be seen that the long-term type reference image among the reference images in the first reference image list 2510 pointed to by the indicator is not included in the second reference image list 2700.
  • the prediction decoding unit 2050 may acquire a second reference image list 2700 excluding a short-term type reference image among the reference images in the first reference image list 2510. .
  • FIG. 28 is a diagram for explaining another method of obtaining a second reference image list.
  • the prediction decoding unit 2050 may obtain the second reference image list 2800 by changing the order of the reference images in the first reference image list 2510 pointed to by the indicator according to the update information obtained from the group header of the bitstream. At this time, according to the update information, the order of all the reference images in the first reference image list 2510 may be changed, or the order of only some of the reference images in the first reference image list 2510 may be changed.
  • the update information obtained from the group header of the bitstream may include indexes of reference images in the first reference image list 2510 arranged in the order to be changed.
  • in FIG. 28, the reference image of index 0, the reference image of index 1, and the reference image of index 2 in the first reference picture list 2510 are changed to the reference image of index 1, the reference image of index 2, and the reference image of index 0, respectively, in the second reference picture list 2800. In this case, the group header of the bitstream may include (2, 0, 1) as update information.
  • the prediction decoding unit 2050 can construct the second reference picture list 2800 by assigning index 0 to the reference image to which index 2 is assigned in the first reference picture list 2510, index 1 to the reference image to which index 0 is assigned, and index 2 to the reference image to which index 1 is assigned.
  • the update information obtained from the group header of the bitstream may include the indexes of the reference images whose order needs to be changed among the reference images in the first reference image list 2510.
  • the group header of the bitstream may include (1, 2) as update information.
  • the prediction decoding unit 2050 can construct the second reference picture list 2800 by assigning index 2 to the reference image to which index 1 is assigned in the first reference picture list 2510 and index 1 to the reference image to which index 2 is assigned.
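  • The two reordering variants described above can be sketched as follows. The sketch assumes the interpretation given above: a full permutation such as (2, 0, 1) lists, for each new index, the old index whose picture it receives, while a pair such as (1, 2) simply swaps two entries. The function names are placeholders.

```python
# A minimal sketch of reordering a reference picture list per the update information.
def reorder_full(first_list, new_order):
    # new_order[i] is the old index whose picture receives new index i.
    return [first_list[old] for old in new_order]

def reorder_swap(first_list, pair):
    second = list(first_list)
    i, j = pair
    second[i], second[j] = second[j], second[i]
    return second

first = ["pic0", "pic1", "pic2"]
print(reorder_full(first, (2, 0, 1)))   # ['pic2', 'pic0', 'pic1']
print(reorder_swap(first, (1, 2)))      # ['pic0', 'pic2', 'pic1']
```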
  • FIG. 29 is a diagram for explaining another method of obtaining a second reference image list.
  • the number of first reference image lists pointed to by the indicator among the plurality of first reference image lists used in the image sequence may be plural. That is, as shown in FIG. 29, the indicator may point to a first reference image list 2910 including only short-term reference images and a first reference image list 2920 including only long-term reference images.
  • the prediction decoding unit 2050 may obtain a second reference picture list 2930 including the short-term reference images and the long-term reference images included in the first reference image lists 2910 and 2920 pointed to by the indicator. At this time, in the second reference image list 2930, indexes larger than the indexes allocated to the short-term reference images may be allocated to the long-term reference images. Conversely, in the second reference image list 2930, indexes larger than the indexes allocated to the long-term reference images may be allocated to the short-term reference images.
  • the acquisition unit 2010 may acquire order information of the short-term reference images and the long-term reference images from the bitstream, and the prediction decoding unit 2050 may allocate indexes to the short-term reference images and the long-term reference images included in the second reference image list 2930 according to the acquired order information.
  • depending on an embodiment, the first reference image list 2910 and the first reference image list 2920 may each include at least one reference image irrespective of the type.
  • when a short-term reference image exists in the first reference image list 2910 pointed to by the indicator and a long-term reference image exists in the first reference image list 2920, the prediction decoding unit 2050 may obtain the second reference picture list 2930 including the short-term reference image included in the first reference image list 2910 and the long-term reference image included in the first reference image list 2920. Conversely, when a long-term reference image exists in the first reference picture list 2910 pointed to by the indicator and a short-term reference image exists in the first reference image list 2920, the prediction decoding unit 2050 may obtain the second reference picture list 2930 including the long-term reference image included in the first reference image list 2910 and the short-term reference image included in the first reference image list 2920.
  • FIG. 30 is a diagram for explaining another method of obtaining a second reference image list.
  • the first reference image list 3010 indicated by the indicator may include only a short-term reference image. According to an embodiment, the first reference image list 3010 indicated by the indicator may include only a long-term reference image.
  • the acquisition unit 2010 may acquire, from the bitstream, the POC-related value of a long-term reference image to be included in the second reference image list 3030, and a second reference image list 3030 including the long-term reference image pointed to by the POC-related value and the short-term reference images included in the first reference image list 3010 can be constructed. That is, the first reference image list 3010 including only short-term reference images is signaled through the sequence parameter set, and the POC-related value of the long-term reference image is signaled in the group header.
  • accordingly, since the reference picture list does not need to be transmitted for each block group, the overhead is reduced and the compression rate is improved. For example, the reference picture list can be repeatedly transmitted for each GOP (group of pictures); the more frequently a reference picture list would otherwise have to be transmitted, the greater the bit-rate reduction effect of transmitting it as a sequence parameter set.
  • the short-term reference images are correlated with the repeating pattern of the prediction structure as in the example above, while the long-term reference image is correlated with the relationship between the current picture and the corresponding long-term reference image.
  • when the prediction structure is repeated in units of a GOP but the content of the image changes completely, such as at a scene change, the long-term reference image is no longer valid; in this case, it is possible to avoid sending the entire reference list in the group header by obtaining the reference list for the short-term reference images from the sequence parameter set and separately signaling the long-term reference image in the group header.
  • conversely, when only long-term reference images are included in the first reference picture list, the acquisition unit 2010 may acquire, from the bitstream, the POC-related value of a short-term reference image to be included in the second reference picture list, and a second reference image list including the short-term reference image indicated by the POC-related value and the long-term reference images included in the first reference image list may be obtained.
  • to the reference image newly included in the second reference image list 3030, an index larger or smaller than the indexes allocated to the reference images included in the first reference image list 3010 may be allocated.
  • the prediction decoding unit 2050 may inter-predict coding units based on the reference images included in the second reference image list. As a result of the inter prediction, prediction samples corresponding to the coding units may be obtained.
  • the restoration unit 2070 uses the predicted samples to obtain restoration samples of the coding units.
  • the restoration unit 2070 may acquire restoration samples of coding units by adding the residual data obtained from the bitstream to the predicted sample.
  • the restoration unit 2070 may apply luma mapping to the prediction samples of the coding units before obtaining the restoration samples.
  • luma mapping processing means changing the luma values of the prediction samples according to parameters obtained from the bitstream.
  • the acquisition unit 2010 may acquire parameters for luma mapping processing from at least one post-processing parameter set of the bitstream.
  • each of the at least one post-processing parameter set may contain parameters used for luma mapping or for adaptive loop filtering, which is described later.
  • the parameters used for luma mapping may include, for example, a range of a luma value to be changed, a delta value to be applied to the luma values of predicted samples.
  • FIG. 31 shows a plurality of post-processing parameter sets used for luma mapping or adaptive loop filtering. The bitstream 3100 may include a plurality of post-processing parameter sets 3150a, 3150b, and 3150c, in addition to the above-described sequence parameter set (SPS) 3110, picture parameter set (PPS) 3120, group header (GH) 3130, and block parameter set (BPS) 3140.
  • the post-processing parameter sets 3150a, 3150b, and 3150c may be included in the bitstream regardless of the hierarchical structure of the image, unlike the sequence parameter set 3110, the picture parameter set 3120, the group header 3130, and the block parameter set 3140.
  • each of the post-processing parameter sets 3150a, 3150b, and 3150c may be assigned an identifier to distinguish them.
  • some of the post-processing parameter sets 3150a, 3150b, and 3150c contain parameters used for luma mapping, while others contain parameters used for adaptive loop filtering.
  • the acquisition unit 2010 can obtain, from the picture parameter set 3120, the group header 3130, or the block parameter set 3140, an identifier indicating which of the post-processing parameter sets 3150a, 3150b, and 3150c is used for luma mapping.
  • the restoration unit 2070 can change the luma value of the predicted samples by using parameters obtained from the post-processing parameter set pointed to by the identifier.
  • when the acquisition unit 2010 obtains the identifier from the picture parameter set 3120, the post-processing parameter set pointed to by the identifier is used for the prediction samples derived in the current image; when the identifier is obtained from the group header 3130, the post-processing parameter set pointed to by the identifier is used for the prediction samples derived in the current slice; and when the acquisition unit 2010 obtains the identifier from the block parameter set 3140, the post-processing parameter set pointed to by the identifier is used for the prediction samples derived in the current block.
  • The acquisition unit 2010 may obtain, from the bitstream, an identifier indicating any one of the plurality of post-processing parameter sets 3150a, 3150b, and 3150c together with modification information. The modification information may include information for changing parameters included in the post-processing parameter set indicated by the identifier.
  • the modification information may include the value of the difference between the value of the parameter contained in the set of post-processing parameters pointed to by the identifier and the value of the parameter to be changed.
  • The restoration unit 2070 may modify the parameters of the post-processing parameter set indicated by the identifier according to the modification information, and may change the luma values of the prediction samples using the modified parameters.
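
A minimal sketch, assuming the modification information is a per-parameter difference value as described above: the selected parameter set is copied and each difference is added to the corresponding parameter. The structures and names are hypothetical.

```python
def apply_modification(param_sets, identifier, diffs):
    """Sketch: modify the parameter set indicated by `identifier`
    by adding signaled difference values to its parameters."""
    params = dict(param_sets[identifier])       # copy so the stored set stays intact
    for name, diff in diffs.items():
        params[name] = params.get(name, 0) + diff
    return params

param_sets = {"aps_c": {"range_low": 400, "range_high": 700, "delta": 25}}
print(apply_modification(param_sets, "aps_c", {"delta": -5}))
```
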
  • When the identifier obtained from the bitstream indicates two or more of the plurality of post-processing parameter sets, the restoration unit 2070 may construct a new parameter set by partially combining the parameters contained in the post-processing parameter sets indicated by the identifier, and may perform luma mapping on the prediction samples using the newly constructed parameter set.
  • the restoration unit 2070 acquires restoration samples corresponding to the current coding unit by using the prediction samples generated as a result of the prediction decoding or the prediction samples subjected to luma mapping. When the restoration samples are obtained, the restoration unit 2070 may apply adaptive loop filtering to the restoration samples.
  • Adaptive loop filtering refers to filtering of the restoration samples based on filter coefficients signaled through the bitstream.
  • Adaptive loop filtering can be performed separately for the luma and chroma values.
  • The filter coefficients may include filter coefficients for one-dimensional filters, and for each one-dimensional filter, the difference values between successive filter coefficients can be signaled through the bitstream.
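
Since the text above says that, for each one-dimensional filter, the differences between successive filter coefficients may be signaled, the following sketch reconstructs the coefficients from such difference values by cumulative summation. That the first coefficient is signaled directly is an assumption made only for illustration.

```python
def rebuild_coeffs(first_coeff, diffs):
    """Sketch: reconstruct 1-D ALF filter coefficients when only the first
    coefficient and the successive difference values are available."""
    coeffs = [first_coeff]
    for d in diffs:
        coeffs.append(coeffs[-1] + d)   # c[i] = c[i-1] + diff[i]
    return coeffs

print(rebuild_coeffs(first_coeff=12, diffs=[-3, 1, 0, 4]))  # -> [12, 9, 10, 10, 14]
```
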
  • As described above, some of the post-processing parameter sets include parameters used for luma mapping, and others include parameters used for adaptive loop filtering.
  • For example, the post-processing parameter sets 3150a and 3150b may contain parameters used for adaptive loop filtering, and the post-processing parameter set 3150c can contain parameters used for luma mapping.
  • The acquisition unit 2010 may obtain, from the picture parameter set 3120, the group header 3130, or the block parameter set 3140, an identifier indicating which post-processing parameter set is used for adaptive loop filtering.
  • the restoration unit 2070 may filter the restoration samples using parameters obtained from the post-processing parameter set pointed to by the identifier.
  • When the identifier is obtained from the picture parameter set, the post-processing parameter set indicated by the identifier is used for the restoration samples derived within the current picture, and if the identifier is obtained from the group header, the post-processing parameter set indicated by the identifier is used for the restoration samples derived within the current slice.
  • If the acquisition unit 2010 obtains the identifier from the block parameter set, the post-processing parameter set indicated by the identifier is used for the restoration samples derived within the current block.
  • the acquisition unit 2010 may acquire an identifier indicating any one of a plurality of post-processing parameter sets 3150a, 3150b, 3150c, and correction information from the bitstream.
  • The modification information may include information for changing the filter coefficients contained in the post-processing parameter set indicated by the identifier; for example, the modification information may contain the difference between the value of a filter coefficient contained in the post-processing parameter set indicated by the identifier and the value of the filter coefficient to be changed.
  • the restoration unit 2070 may modify the filter coefficients of the post-processing parameter set indicated by the identifier according to the correction information, and filter the restoration samples using the corrected filter coefficients.
  • When the identifier obtained from the bitstream indicates two or more of the plurality of post-processing parameter sets, the restoration unit 2070 may construct a new filter coefficient set by partially combining the filter coefficients included in the post-processing parameter sets indicated by the identifier, and may filter the restoration samples with the newly constructed filter coefficient set.
  • Alternatively, the restoration unit 2070 may filter the luma values of the restoration samples using filter coefficients included in one post-processing parameter set indicated by the identifier, and filter the chroma values of the restoration samples using filter coefficients included in another post-processing parameter set indicated by the identifier.
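
The following sketch illustrates the two possibilities mentioned above: combining coefficients from two parameter sets into a new set, and keeping one set for luma and another for chroma. The split point used for the partial combination is an assumption made only for illustration.

```python
def combine_coeffs(coeffs_a, coeffs_b, take_from_a):
    """Sketch: build a new coefficient set by taking the first `take_from_a`
    coefficients from set A and the remaining coefficients from set B."""
    return coeffs_a[:take_from_a] + coeffs_b[take_from_a:]

aps_a = [12, 9, 10, 10, 14]   # e.g. coefficients used for luma samples
aps_b = [8, 8, 11, 13, 9]     # e.g. coefficients used for chroma samples
print(combine_coeffs(aps_a, aps_b, take_from_a=2))
```
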
  • The acquisition unit 2010 may also obtain, from the bitstream, an identifier indicating a post-processing parameter set together with filter coefficient information.
  • In that case, the restoration unit 2070 may combine some of the filter coefficients included in the post-processing parameter set indicated by the identifier with the filter coefficients signaled through the bitstream, and filter the restoration samples with the combined filter coefficient set.
  • the restoration unit 2070 may additionally perform deblocking filtering on the adaptive loop-filtered restoration sample.
  • When the prediction decoding unit 2050 decodes the coding units included in the current slice according to inter prediction, the boundary of the current slice may be regarded as a picture boundary.
  • For example, in the DMVR (Decoder-side Motion Vector Refinement) mode, in which the decoder directly derives a motion vector of the current coding unit, the search range may be limited to the boundary of the area of the reference picture co-located with the current slice.
  • Alternatively, the area co-located with the current slice may be padded to obtain prediction samples.
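
As a simple illustration of limiting the DMVR refinement search to the area co-located with the current slice, the sketch below clamps a search window to the slice boundary in the reference picture; the coordinate convention and the window size are assumptions, not the disclosure's actual derivation.

```python
def clamp_search_range(center_x, center_y, search_radius, slice_rect):
    """Sketch: restrict the DMVR search window so it stays inside the area
    of the reference picture co-located with the current slice.
    slice_rect = (left, top, right, bottom), inclusive bounds."""
    left, top, right, bottom = slice_rect
    x0 = max(center_x - search_radius, left)
    y0 = max(center_y - search_radius, top)
    x1 = min(center_x + search_radius, right)
    y1 = min(center_y + search_radius, bottom)
    return (x0, y0, x1, y1)

print(clamp_search_range(center_x=2, center_y=130, search_radius=2,
                         slice_rect=(0, 128, 255, 191)))
```
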
  • the prediction decoding unit 2050 may regard the boundary of a slice as a boundary of a picture in a Bi-Optical Flow (BIO) processing mode, and predictively decode a current coding unit.
  • The BIO (Bi-Optical Flow) processing mode is a processing mode applied to blocks to which bidirectional prediction is applied.
  • The acquisition unit 2010 may obtain syntax elements by decoding the binary values included in the bitstream through CABAC (context-adaptive binary arithmetic coding), and may selectively apply WPP (Wavefront Parallel Processing) technology considering how many tiles are included in a slice.
  • If only one tile is included in the slice, the acquisition unit 2010 can set the probability model for the CTUs included in the tile based on WPP; if the slice contains multiple tiles, the WPP technology may not be applied to the CTUs included in the tiles.
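
A schematic sketch of the rule described above: WPP-style probability-model handling for the CTUs of a tile is used only when the slice contains exactly one tile; otherwise WPP is not applied. The context bookkeeping shown here is simplified and hypothetical.

```python
def use_wpp_for_slice(num_tiles_in_slice):
    """Sketch: WPP-style row synchronization is used only when
    the slice contains exactly one tile."""
    return num_tiles_in_slice == 1

def init_ctu_row_contexts(slice_tiles):
    """Report whether CTU-row contexts follow a WPP policy (inherited from
    the row above) or are not synchronized per row (no WPP)."""
    wpp = use_wpp_for_slice(len(slice_tiles))
    policy = "inherit-from-above-row" if wpp else "no-wpp"
    return {"wpp_enabled": wpp, "row_context_policy": policy}

print(init_ctu_row_contexts(slice_tiles=["tile0"]))            # WPP applied
print(init_ctu_row_contexts(slice_tiles=["tile0", "tile1"]))   # WPP not applied
```
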
  • FIG. 32 is a view for explaining an image decoding method according to an embodiment.
  • In step S3210, the image decoding apparatus 2000 acquires, from the sequence parameter set of the bitstream, information indicating a plurality of first reference picture lists for the image sequence including the current image.
  • Each of the plurality of first reference picture lists may consist of at least one of a short-term reference picture and a long-term reference picture.
  • In step S3220, the image decoding apparatus 2000 sets blocks in the current image and block groups each including at least one block.
  • the block may be a tile, and the block group may be a slice.
  • For example, the image decoding apparatus 2000 may divide the current image into a plurality of CTUs according to the information acquired from the bitstream, and a tile including at least one CTU and a slice including at least one tile can be set within the current image.
  • Alternatively, the image decoding apparatus 2000 may divide the current image into a plurality of tiles according to the information acquired from the bitstream and divide each tile into one or more CTUs, and the block determination unit 2030 can set a slice containing at least one tile in the current image.
  • As another example, the image decoding apparatus 2000 may divide the current image into one or more slices according to information obtained from the bitstream and divide each slice into one or more tiles, and the block determination unit 2030 may divide each tile into one or more CTUs.
  • In addition, the image decoding apparatus 2000 may set the slices in the current image according to the address information obtained from the bitstream.
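
To make the address-based slice setting concrete, the sketch below forms a rectangular slice from a top-left tile index and a signaled bottom-right tile index on a tile grid. This mirrors the bottom-right-address scheme described elsewhere in this disclosure, but the helper name, raster-order indexing, and the rectangular assumption are illustrative only.

```python
def tiles_in_rect_slice(top_left_idx, bottom_right_idx, tiles_per_row):
    """Sketch: collect the tile indices of a rectangular slice given the
    indices of its top-left and bottom-right tiles in raster order."""
    tl_row, tl_col = divmod(top_left_idx, tiles_per_row)
    br_row, br_col = divmod(bottom_right_idx, tiles_per_row)
    return [r * tiles_per_row + c
            for r in range(tl_row, br_row + 1)
            for c in range(tl_col, br_col + 1)]

# 4-wide tile grid; rectangular slice from tile 1 (top-left) to tile 6
# (bottom-right) -> tiles [1, 2, 5, 6]
print(tiles_in_rect_slice(top_left_idx=1, bottom_right_idx=6, tiles_per_row=4))
```
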
  • In step S3230, the image decoding apparatus 2000 obtains, from the group header of the bitstream, an indicator for the current block group including the current block in the current image, and obtains the second reference picture list based on the first reference picture list indicated by the indicator.
  • the image decoding apparatus 2000 may further acquire update information for obtaining the second reference image list together with the indicator from the bitstream.
  • The update information may include at least one of: a POC-related value of a reference picture to be removed from the first reference picture list indicated by the indicator, a POC-related value of a reference picture to be added to the second reference picture list, a difference between the POC-related value of the reference picture to be removed from the first reference picture list and the POC-related value of the reference picture to be added to the second reference picture list, and information for changing the order of the reference pictures.
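
The sketch below walks through the kinds of update information listed above, applied to a first reference picture list: removing a picture, adding a picture signaled as a POC difference relative to the removed one, and reordering. The field names and the exact order of these operations are assumptions for illustration only.

```python
def update_ref_list(first_list, remove_poc=None, add_poc_diff=None,
                    new_order=None):
    """Sketch: derive the second reference picture list from the first by
    optionally removing a picture, adding one signaled as a POC difference
    relative to the removed picture, and reordering."""
    second = list(first_list)
    if remove_poc is not None:
        second = [p for p in second if p != remove_poc]
        if add_poc_diff is not None:
            second.append(remove_poc + add_poc_diff)   # POC of the added picture
    if new_order is not None:                          # reorder by given indices
        second = [second[i] for i in new_order]
    return second

print(update_ref_list([8, 16, 32], remove_poc=32, add_poc_diff=-8,
                      new_order=[1, 0, 2]))            # -> [16, 8, 24]
```
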
  • In step S3240, the image decoding apparatus 2000 predictively decodes the sub-blocks of the current block based on the reference pictures included in the second reference picture list.
  • The image decoding apparatus 2000 may specify, according to an identifier indicating at least one of the plurality of post-processing parameter sets, a post-processing parameter set for luma mapping of the prediction samples, and can change the luma values of the prediction samples with the parameters included in the post-processing parameter set indicated by the identifier.
  • Also, the image decoding apparatus 2000 may obtain restoration samples using the prediction samples obtained as a result of the prediction decoding, and the restoration samples can be adaptively loop-filtered.
  • The image decoding apparatus 2000 may specify, according to an identifier indicating at least one of the plurality of post-processing parameter sets, a post-processing parameter set for adaptive loop filtering, and can filter the restoration samples with the parameters included in the post-processing parameter set indicated by the identifier.
  • FIG. 33 shows the configuration of an image encoding apparatus 3300 according to an embodiment.
  • The image encoding apparatus 3300 includes a block determination unit 3310, a prediction encoding unit 3330, a restoration unit 3350, and a generation unit 3370.
  • The generation unit 3370 shown in FIG. 33 corresponds to the bitstream generation unit shown in FIG. 2, and the block determination unit 3310, the prediction encoding unit 3330, and the restoration unit 3350 correspond to the encoding unit shown in FIG. 2.
  • the block determination unit 3310, the predictive encoding unit 3330, the restoration unit 3350, and the generation unit 3370 may be implemented with at least one processor.
  • The image encoding apparatus 3300 may include one or more data storage units (not shown) for storing the input/output data of the block determination unit 3310, the prediction encoding unit 3330, the restoration unit 3350, and the generation unit 3370.
  • The image encoding apparatus 3300 may also include a memory control unit (not shown) that controls the data input/output of the data storage unit (not shown).
  • the block determiner 3310 divides the current image into blocks, and sets block groups including at least one block in the current image.
  • The block may correspond to a tile, and the block group may correspond to a slice; a slice can also be referred to as a tile group.
  • For example, the block determination unit 3310 may divide the current image into a plurality of CTUs, and set a tile including at least one CTU and a slice including at least one tile in the current image.
  • Alternatively, the block determination unit 3310 may divide the current image into a plurality of tiles and divide each tile into one or more CTUs, and may set a slice containing at least one tile within the current image.
  • As another example, the block determination unit 3310 may divide the current image into one or more slices and divide each slice into one or more tiles, and may divide each tile into one or more CTUs.
  • the prediction encoder 3330 obtains prediction samples corresponding to the sub-blocks by inter-prediction or intra-prediction of sub-blocks of blocks divided from the current image.
  • The sub-block may be at least one of a largest coding unit, a coding unit, and a transform unit.
  • the prediction encoding unit 3330 can predict and encode the coding units through inter prediction or intra prediction.
  • a prediction sample of the current coding unit is obtained based on a reference block in a reference image indicated by a motion vector.
  • the residual data corresponding to the difference between the predicted sample and the current coding unit may be transmitted to the video decoding apparatus 2000 through the bitstream. Depending on the prediction mode, residual data may not be included in the bitstream.
  • the prediction encoding unit 3330 may construct a plurality of first reference image lists for an image sequence including a current image.
  • The prediction encoding unit 3330 may select at least one of the plurality of first reference picture lists configured for use in the image sequence.
  • the predictive encoding unit 3330 may select a first reference image list used in the current slice from among the plurality of first reference image lists.
  • the predictive encoding unit 3330 Acquires an updated second reference image list from the selected first reference image list.
  • The second reference picture list is obtained from the reference pictures included in the first reference picture list, and at least one of the reference pictures included in the second reference picture list can be used to encode the coding units included in the slice according to inter prediction.
  • The prediction encoding unit 3330 may predictively encode the coding units included in the next slice using the first reference picture lists other than the first reference picture list selected for the current slice among the plurality of first reference picture lists used in the image sequence, as well as the second reference picture list.
  • That is, the second reference picture list obtained for the current slice can also be used in the next slice.
  • the prediction encoding unit 3330 may obtain a second reference image list by changing at least some of the reference images included in the first reference image list to another reference image.
  • The prediction encoding unit 3330 may replace only a reference picture of a specific type among the reference pictures included in the first reference picture list, for example, only a long-term reference picture with another long-term reference picture. That is, among the reference pictures included in the first reference picture list, the short-term reference pictures remain intact in the second reference picture list, and only the long-term reference picture can be replaced with another long-term reference picture.
  • Alternatively, regardless of the types of the reference pictures included in the first reference picture list, the reference pictures included in the first reference picture list may be replaced with other reference pictures.
  • A new reference picture can be added to the second reference picture list in the place of the reference picture removed from the first reference picture list; that is, if the long-term reference picture to which index 1 is assigned is removed from the first reference picture list, index 1 can also be assigned to the new reference picture.
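
The sketch below illustrates the index-preserving replacement described above: when the long-term picture at index 1 of the first list is removed, the newly added picture takes the same index in the second list. The data layout is hypothetical.

```python
def replace_ref_at_index(first_list, index, new_pic):
    """Sketch: build a second list in which the picture at `index`
    (e.g. a long-term picture) is replaced by `new_pic`, keeping its index."""
    second = list(first_list)
    second[index] = new_pic
    return second

first_list = [{"poc": 0, "long_term": False}, {"poc": 64, "long_term": True}]
print(replace_ref_at_index(first_list, index=1,
                           new_pic={"poc": 128, "long_term": True}))
```
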
  • The prediction encoding unit 3330 may also obtain the second reference picture list by excluding reference pictures of a specific type from the first reference picture list selected for the current slice among the plurality of first reference picture lists for the image sequence.
  • The prediction encoding unit 3330 may also obtain the second reference picture list by changing the order of at least some of the reference pictures in the first reference picture list selected for the current slice among the plurality of first reference picture lists for the image sequence.
  • The prediction encoding unit 3330 may also obtain the second reference picture list from a first reference picture list including only short-term reference pictures and a first reference picture list including only long-term reference pictures. For example, the prediction encoding unit 3330 may include, in the second reference picture list, the short-term reference pictures included in the one first reference picture list and the long-term reference pictures included in the other first reference picture list.
  • Alternatively, when the first reference picture list includes only short-term reference pictures, the prediction encoding unit 3330 may obtain a second reference picture list including the short-term reference pictures included in the first reference picture list and a new long-term reference picture. Conversely, when the first reference picture list includes only long-term reference pictures, the prediction encoding unit 3330 may obtain a second reference picture list including the long-term reference pictures included in the first reference picture list and a new short-term reference picture.
  • The prediction encoding unit 3330 may inter-predict the coding units based on the reference pictures included in the second reference picture list. As a result of the inter prediction, prediction samples corresponding to the coding units can be obtained.
  • the restoration unit 3350 uses the predicted samples to obtain restoration samples of the coding units.
  • The restored image containing the restoration samples can be stored as a reference picture for a subsequent image.
  • the restoration unit 3350 may perform luma-mapping processing of predicted samples of the coding units before acquiring the restoration samples.
  • The restoration unit 3350 may obtain parameters for luma mapping from a plurality of post-processing parameter sets.
  • Each of the plurality of post-processing parameter sets may include parameters used for luma mapping or adaptive loop filtering described below.
  • Some of the post-processing parameter sets include parameters used for luma mapping, and some others contain parameters used for adaptive loop filtering; for example, at least one post-processing parameter set contains parameters used for luma mapping, and another post-processing parameter set contains parameters used for adaptive loop filtering.
  • The restoration unit 3350 may generate a plurality of post-processing parameter sets consisting of parameters used for luma mapping or parameters used for adaptive loop filtering. As described above, the plurality of generated post-processing parameter sets can be included in the bitstream.
  • the restoration unit 3350 may acquire parameters from a post-processing parameter set selected from a plurality of post-processing parameter sets, and change the luma values of the predicted samples with the acquired parameters.
  • the restoration unit 3350 may modify parameters of a post-processing parameter set selected from among a plurality of post-processing parameter sets, and change the luma values of the predicted samples with the modified parameters.
  • Alternatively, the restoration unit 3350 may partially combine the parameters included in two or more of the plurality of post-processing parameter sets to configure a new parameter set, and change the luma values of the prediction samples with the parameters of the newly configured parameter set.
  • the restoration unit 3350 acquires restoration samples corresponding to the current coding unit by using the prediction samples generated as a result of the prediction decoding or the prediction samples subjected to luma mapping. When the reconstructed samples are obtained, the restoration unit 3350 may apply adaptive loop filtering to the restored samples.
  • some of the post-processing parameter sets include parameters used for luma mapping, and others are used for adaptive loop filtering.
  • the restoration unit 3350 may filter the restoration samples using parameters obtained from at least one of the plurality of post-processing parameter sets.
  • For example, the restoration unit 3350 may filter the restoration samples using parameters obtained from one of the plurality of post-processing parameter sets.
  • Alternatively, the restoration unit 3350 may partially combine the parameters included in two or more of the plurality of post-processing parameter sets to form a new parameter set, and filter the restoration samples with the parameters of the newly constructed parameter set.
  • Alternatively, the restoration unit 3350 may filter the luma values of the restoration samples using one post-processing parameter set among the plurality of post-processing parameter sets, and filter the chroma values of the restoration samples using another post-processing parameter set.
  • When the prediction encoding unit 3330 inter-predicts the coding units included in the current slice, the boundary of the current slice can be regarded as the picture boundary.
  • For example, when the prediction encoding unit 3330 derives the motion vector of the current coding unit, the search range can be limited to the boundary of the area of the reference picture co-located with the current slice.
  • The prediction encoding unit 3330 may regard the boundary of a slice as a boundary of a picture in the BIO (Bi-Optical Flow) processing mode, and predictively encode the current coding unit.
  • the generation unit 3370 generates a bitstream including information used for encoding an image.
  • The bitstream may contain a sequence parameter set, a picture parameter set, a group header, a block parameter set, and at least one post-processing parameter set.
  • The generation unit 3370 may generate binary values corresponding to the syntax elements and encode them through CABAC (context-adaptive binary arithmetic coding).
  • Also, the generation unit 3370 selectively applies WPP (Wavefront Parallel Processing) technology considering how many tiles are included in the slice.
  • The generation unit 3370 can set the probability model for the CTUs included in the tile based on WPP when only one tile is included in the slice; when a slice contains multiple tiles, the WPP technology may not be applied to the CTUs included in the tiles.
  • FIG. 34 is a view for explaining an image encoding method according to an embodiment.
  • In step S3410, the image encoding apparatus 3300 constructs a plurality of first reference picture lists for the image sequence including the current image.
  • Each of the plurality of first reference picture lists may consist of at least one of a short-term reference picture and a long-term reference picture.
  • In step S3420, the image encoding apparatus 3300 sets blocks in the current image and block groups each including at least one block.
  • the block may be a tile, and the block group may be a slice.
  • For example, the image encoding apparatus 3300 may divide the current image into a plurality of CTUs, and set a tile including at least one CTU and a slice including at least one tile in the current image.
  • Alternatively, the image encoding apparatus 3300 may divide the current image into a plurality of tiles and divide each tile into one or more CTUs, and can set a slice including at least one tile in the current image.
  • As another example, the image encoding apparatus 3300 may divide the current image into one or more slices and divide each slice into one or more tiles, and may divide each tile into one or more CTUs.
  • In step S3430, the image encoding apparatus 3300 selects, from among the plurality of first reference picture lists, the first reference picture list for the current block group including the current block in the current image, and obtains a second reference picture list based on the selected first reference picture list.
  • In step S3440, the image encoding apparatus 3300 predictively encodes the sub-blocks included in the current block based on the reference pictures included in the second reference picture list.
  • The image encoding apparatus 3300 may change the luma values of the prediction samples with parameters included in at least one of the plurality of post-processing parameter sets.
  • Also, the image encoding apparatus 3300 may acquire restoration samples using the prediction samples, and the restoration samples can be adaptively loop-filtered.
  • The image encoding apparatus 3300 can filter the restoration samples with parameters included in at least one of the plurality of post-processing parameter sets.
  • Meanwhile, the medium may continuously store a program executable by a computer, or may temporarily store it for execution or download.
  • The medium may be a variety of recording means or storage means in the form of a single piece of hardware or a combination of several pieces of hardware, and is not limited to a medium directly connected to a computer system but may be distributed over a network.
  • Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices configured to store program instructions, such as ROM, RAM, and flash memory.
  • In addition, examples of other media include recording media and storage media managed by app stores that distribute applications, by sites that supply or distribute various other software, and by servers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed is an image decoding method according to an embodiment, including: obtaining, from a sequence parameter set of a bitstream, information indicating a plurality of first reference picture lists for an image sequence including a current image; obtaining, from a group header of the bitstream, an indicator for a current block group including a current block in the current image; obtaining a second reference picture list based on the first reference picture list indicated by the indicator; and prediction-decoding a sub-block of the current block based on a reference picture included in the second reference picture list.

Description

Specification
Title of Invention: Apparatus for encoding and decoding an image, and method for encoding and decoding an image by the same
Technical Field
[1] The present disclosure relates to the field of image encoding and decoding. More specifically, the present disclosure relates to a method and apparatus for encoding an image, and a method and apparatus for decoding an image, using the hierarchical structure of the image.
Background Art
[2] In image encoding and decoding, an image may be split into blocks, and each block may be prediction-encoded and prediction-decoded through inter prediction or intra prediction.
[3] Inter prediction is a method of compressing an image by removing temporal redundancy between images, and motion-estimation encoding is a representative example thereof. Motion-estimation encoding predicts the blocks of the current image using at least one reference image. A reference block most similar to the current block may be searched for within a predetermined search range using a predetermined evaluation function. The current block is predicted based on the reference block, and a residual block is generated and encoded by subtracting the prediction block, generated as the prediction result, from the current block. At this time, in order to perform prediction more accurately, interpolation may be performed on the reference image to generate pixels of a sub-pel unit smaller than an integer pel unit, and inter prediction may be performed based on the sub-pel-unit pixels.
[4] In codecs such as H.264 AVC (Advanced Video Coding) and HEVC (High Efficiency Video Coding), in order to predict the motion vector of the current block, the motion vectors of previously encoded blocks adjacent to the current block or of blocks included in a previously encoded image are used as the prediction motion vector of the current block. The differential motion vector, which is the difference between the motion vector of the current block and the prediction motion vector, is signaled to the decoder side through a predetermined scheme.
Detailed Description of the Invention
Technical Problem
[5] An image encoding and decoding apparatus according to an embodiment, and an image encoding and decoding method by the same, have the technical objective of encoding and decoding an image at a low bitrate using the hierarchical structure of the image.
Solution to Problem
[6] An image decoding method according to an embodiment may include: obtaining, from a sequence parameter set of a bitstream, information indicating a plurality of first reference picture lists for an image sequence including a current image; obtaining, from a group header of the bitstream, an indicator for a current block group including a current block in the current image; obtaining a second reference picture list based on the first reference picture list indicated by the indicator among the plurality of first reference picture lists; and prediction-decoding a sub-block of the current block based on a reference picture included in the second reference picture list.
Advantageous Effects of the Invention
[7] An image encoding and decoding apparatus according to an embodiment, and an image encoding and decoding method by the same, can encode and decode an image at a low bitrate using the hierarchical structure of the image.
[8] However, the effects achievable by the image encoding and decoding apparatus according to an embodiment, and by the image encoding and decoding method by the same, are not limited to those mentioned above, and other effects not mentioned will be clearly understood, from the description below, by those of ordinary skill in the art to which the present disclosure belongs.
Brief Description of the Drawings
[9] 본명세서에서 인용되는도면을보다충분히 이해하기위하여 각도면의
간단한설명이제공된다.
[1이 도 1은일실시예에 따른영상복호화장치의블록도이다.
[11] 도 2는일실시예에 따른영상부호화장치의블록도이다.
[12] 도 3은일실시예에 따라영상복호화장치가현재부호화단위를분할하여 적어도하나의부호화단위를결정하는과정을도시한다.
[13] 도 4는일실시예에 따라영상복호화장치가비 -정사각형의 형태인부호화 단위를분할하여 적어도하나의부호화단위를결정하는과정을도시한다.
[14] 도 5는일실시예에 따라영상복호화장치가블록형태정보및분할형태모드 정보중적어도하나에기초하여부호화단위를분할하는과정을도시한다.
[15] 도 6은일실시예에 따라영상복호화장치가홀수개의부호화단위들중
소정의부호화단위를결정하기위한방법을도시한다.
[16] 도 7은일실시예에 따라영상복호화장치가현재부호화단위를분할하여 복수개의부호화단위들을결정하는경우,복수개의부호화단위들이 처리되는 순서를도시한다.
[17] 도 8은일실시예에 따라소정의순서로부호화단위가처리될수없는경우, 영상복호화장치가현재부호화단위가홀수개의부호화단위로분할되는 것임을결정하는과정을도시한다.
[18] 도 9는일실시예에 따라영상복호화장치가제 1부호화단위를분할하여
적어도하나의부호화단위를결정하는과정을도시한다.
[19] 도 은일실시예에 따라영상복호화장치가제 1부호화단위가분할되어 결정된비-정사각형 형태의 제 2부호화단위가소정의조건을만족하는경우제 2 2020/175967 1»(:1/10公020/002924 부호화단위가분할될수있는형태가제한되는것을도시한다.
도 11은일실시예에 따라분할형태모드정보가 4개의 정사각형 형태의 부호화단위로분할하는것을나타낼수없는경우,영상복호화장치가
정사각형 형태의부호화단위를분할하는과정을도시한다.
[21] 도 12는일실시예에 따라복수개의부호화단위들간의처리순서가부호화 단위의분할과정에 따라달라질수있음을도시한것이다.
[22] 도 13은일실시예에 따라부호화단위가재귀적으로분할되어복수개의
부호화단위가결정되는경우,부호화단위의 형태및크기가변함에 따라 부호화단위의심도가결정되는과정을도시한다.
[23] 도 14는일실시예에 따라부호화단위들의 형태 및크기에 따라결정될수있는 심도및부호화단위구분을위한인덱스 (part index,이하 PID)를도시한다.
[24] 도 15는일실시예에 따라픽쳐에포함되는복수개의소정의 데이터 단위에
따라복수개의부호화단위들이결정된것을도시한다.
[25] 도 16은일실시예에 따라부호화단위가분할될수있는형태의조합이
픽쳐마다서로다른경우,각각의픽쳐마다결정될수있는부호화단위들을 도시한다.
[26] 도 17은일실시예에 따라바이너리 (binary)코드로표현될수있는분할형태 모드정보에기초하여 결정될수있는부호화단위의다양한형태를도시한다.
[27] 도 18은일실시예에 따라바이너리코드로표현될수있는분할형태모드
정보에 기초하여결정될수있는부호화단위의또다른형태를도시한다.
[28] 도 19는루프필터링을수행하는영상부호화및복호화시스템의블록도를
나타낸도면이다.
[29] 도 20은일실시예에 따른영상복호화장치의구성을도시하는도면이다.
[30] 도 21은영상의 계층구조에따라생성된비트스트림의구조를도시하는 예시적인도면이다.
[31] 도 22는현재 영상내에서결정된슬라이스,타일및 CTU를나타내는도면이다.
[32] 도 23은현재 영상내에서슬라이스들을설정하는방법을설명하기위한
도면이다.
[33] 도 24는현재 영상내에서슬라이스들을설정하는다른방법을설명하기 위한 도면이다.
[34] 도 25는시퀀스파라미터 세트를통해 획득된복수의 제 1참조영상리스트를 나타내는예시적인도면이다.
[35] 도 26은제 2참조영상리스트를획득하는방법을설명하기 위한도면이다.
[36] 도 27은제 2참조영상리스트를획득하는방법을설명하기 위한도면이다.
[37] 도 28은제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다.
[38] 도 29는제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다. 2020/175967 1»(:1^1{2020/002924
[39] 도 30은제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다.
[4이 도 31은루마매핑또는적응적루프필터링에 이용되는복수의후처리
파라미터 세트를포함하는비트스트림을도시하는도면이다.
[41] 도 32는일실시예에 따른영상복호화방법을설명하기위한도면이다.
[42] 도 33은일실시예에 따른영상부호화장치의구성을도시하는도면이다.
[43] 도 34는일실시예에 따른영상부호화방법을설명하기위한도면이다.
발명의실시를위한최선의형태
[44] 일실시예에 따른영상의복호화방법은,비트스트림의시퀀스파라미터
세트로부터 현재 영상을포함하는영상시퀀스를위한복수의제 1참조영상 리스트를나타내는정보를획득하는단계 ;상기비트스트림의그룹헤더로부터 상기 현재 영상내현재블록을포함하는현재블록그룹을위한인디케이터를 획득하는단계 ;상기복수의제 1참조영상리스트중상기 인디케이터가 가리키는제 1참조영상리스트에기반하여 제 2참조영상리스트를획득하는 단계;및상기 제 2참조영상리스트에포함된참조영상에 기초하여 현재 블록의하위블록을예측복호화하는단계를포함할수있다.
[45] 일실시예에서 ,상기복수의제 1참조영상리스트중상기 인디케이터가
가리키는제 1참조영상리스트이외의 제 1참조영상리스트,및상기 제 2참조 영상리스트에기초하여,상기 현재 영상내다음블록그룹에포함된하위 블록들이 예측복호화될수있다.
[46] 일실시예에서 ,상기 제 2참조영상리스트를획득하는단계는,상기
인디케이터가가리키는제 1참조영상리스트에포함된참조영상들의 적어도 일부의순서를변경하여상기제 2참조영상리스트를획득하는단계를포함할 수있다.
[47] 일실시예에서 ,상기 인디케이터가가리키는제 1참조영상리스트는제 1 타입의 참조영상및제 2타입의 참조영상을포함하되,상기제 2참조영상 리스트를획득하는단계는,상기 인디케이터가가리키는제 1참조영상 리스트로부터상기제 2타입의 참조영상을배제하여상기 제 2참조영상 리스트를획득하는단계를포함할수있다.
[48] 일실시예에서 ,상기 인디케이터가가리키는제 1참조영상리스트는제 1 타입의 참조영상및제 2타입의 참조영상을포함하되,상기제 2참조영상 리스트를획득하는단계는,상기 인디케이터가가리키는제 1참조영상 리스트로부터상기제 2타입의 참조영상을배제하고,상기그룹헤더로부터 획득된 ?00관련값이 가리키는제 2타입의 참조영상을상기 인디케이터가 가리키는제 1참조영상리스트에추가하여상기제 2참조영상리스트를 획득하는단계를포함할수있다.
[49] 일실시예에서 ,상기 인디케이터가가리키는제 1참조영상리스트는제 1 2020/175967 1»(:1^1{2020/002924 타입의 참조영상만을포함하되,상기제 2참조영상리스트를획득하는단계는, 상기그룹헤더로부터 획득된 ?00관련값이가리키는제 2타입의참조영상을 상기 인디케이터가가리키는제 1참조영상리스트에추가하여상기제 2참조 영상리스트를획득하는단계를포함할수있다.
[5이 일실시예에서 ,상기 제 2참조영상리스트를획득하는단계는,상기
인디케이터가가리키는어느하나의참조영상리스트에포함된제 1타입의 참조영상과상기 인디케이터가가리키는다른하나의 참조영상리스트에 포함된제 2타입의참조영상을포함하는상기제 2참조영상리스트를 획득하는단계를포함할수있다.
[51] 일실시예에서,상기 제 1타입 및상기제 2타입중어느하나의타입의 참조 영상들에는,다른하나의타입의 참조영상들에 할당된인덱스보다큰인덱스가 할당될수있다.
[52] 일실시예에서 ,상기 영상의복호화방법은,상기그룹헤더로부터상기 제 1 타입의 참조영상들과상기제 2타입의 참조영상들의순서정보를획득하는 단계를더포함하고,상기 제 1타입의참조영상들과상기 제 2타입의참조 영상들에는상기순서 정보에따른인덱스가할당될수있다.
[53] 일실시예에서 ,상기 영상의복호화방법은,상기 인디케이터가가리키는제 1 참조영상리스트에포함된참조영상들중적어도일부의 ?00관련값과상기 제 2참조영상리스트에포함될참조영상들중적어도일부의 ?00관련값 사이의차분값을상기그룹헤더로부터 획득하는단계를더포함하고,상기제 2 참조영상리스트를획득하는단계는,상기 획득한차분값을기초로상기 인디케이터가가리키는제 1참조영상리스트에포함된참조영상들중적어도 일부를교체하여상기 제 2참조영상리스트를획득하는단계를포함할수있다.
[54] 일실시예에서 ,상기 영상의복호화방법은,상기 현재 영상내에서복수의
블록들을결정하는단계;상기비트스트림으로부터블록그룹들에 대한주소 정보를획득하는단계;및상기 획득한주소정보에따라상기 현재 영상내에서 상기복수의블록중적어도하나의블록을포함하는블록그룹들을설정하는 단계를포함하되,상기 현재블록은상기복수의블록들중어느하나이고,상기 현재블록그룹은상기블록그룹들중어느하나일수있다.
[55] 일실시예에서,상기주소정보는,상기블록그룹들각각에포함된블록들중 우하단블록의식별정보를포함하되,상기블록그룹들을설정하는단계는, 상기복수의블록중좌상단에 위치하는좌상단블록과상기우하단블록의식별 정보가가리키는우하단블록을포함하는첫번째블록그룹을설정하는단계; 상기 첫번째블록그룹에포함된블록들의식별정보에 기초하여두번째블록 그룹의좌상단블록을식별하는단계 ;및상기우하단블록의식별정보가 가리키는우하단블록과상기식별된좌상단블록을포함하는두번째블록 그룹을설정하는단계를포함할수있다.
[56] 일실시예에서 ,상기 영상의복호화방법은,루마매핑을위한적어도하나의 2020/175967 1»(:1/10公020/002924 후처리파라미터세트를상기비트스트림으로부터획득하는단계 ;상기예측 복호화결과획득된하위블록의 예측샘플에대한루마매핑에적용되는후처리 파라미터세트를가리키는식별정보를상기비트스트림의그룹헤더또는픽처 파라미터세트로부터획득하는단계 ;및상기식별정보가가리키는후처리 파라미터세트에따라상기예측샘플을루마매핑하는단계를더포함할수 있다.
[57] 일실시예에따른영상의복호화장치는,비트스트림의시퀀스파라미터
세트로부터현재영상을포함하는영상시퀀스를위한복수의제 1참조영상 리스트를나타내는정보를획득하고,상기비트스트림의그룹헤더로부터상기 현재영상내현재블록을포함하는현재블록그룹을위한인디케이터를 획득하는획득부;및상기복수의제 1참조영상리스트중상기인디케이터가 가리키는제 1참조영상리스트에기반하여,제 2참조영상리스트를획득하고, 상기제 2참조영상리스트에포함된참조영상에기초하여현재블록의하위 블록을예측복호화하는예측복호화부를포함할수있다.
[58] 일실시예에따른영상의부호화방법은,현재영상을포함하는영상시퀀스를 위한복수의제 1참조영상리스트를구성하는단계;상기복수의제 1참조영상 리스트중상기현재영상내현재블록을포함하는현재블록그룹을위한제 1 참조영상리스트를선택하는단계;상기선택된제 1참조영상리스트에 기반하여제 2참조영상리스트를획득하는단계 ;및상기제 2참조영상 리스트에포함된참조영상에기초하여현재블록의하위블록을예측
부호화하는단계를포함할수있다.
발명의실시를위한형태
[59] 본개시는다양한변경을가할수있고여러가지실시예를가질수있는바, 특정실시예들을도면에 예시하고,이를상세한설명을통해상세히설명하고자 한다.그러나,이는본개시의실시형태에대해한정하려는것이아니며,본 개시는여러실시예들의사상및기술범위에포함되는모든변경,균등물내지 대체물을포함하는것으로이해되어야한다.
[6이 실시예를설명함에 있어서,관련된공지기술에대한구체적인설명이본
개시의요지를불필요하게흐릴수있다고판단되는경우그상세한설명을 생략한다.또한,명세서의설명과정에서이용되는숫자(예를들어,제 1,제 2 등)는하나의구성요소를다른구성요소와구분하기위한식별기호에불과하다.
[61] 또한,본명세서에서,일구성요소가다른구성요소와 "연결된다”거나
"접속된다”등으로언급된때에는,상기일구성요소가상기다른구성요소와 직접연결되거나또는직접접속될수도있지만,특별히반대되는기재가 존재하지않는이상,중간에또다른구성요소를매개하여연결되거나또는 접속될수도있다고이해되어야할것이다.
[62] 또한,본명세서에서’〜부(유닛)’,’모듈’등으로표현되는구성요소는 2개이상의 2020/175967 1»(:1^1{2020/002924 구성요소가하나의구성요소로합쳐지거나또는하나의구성요소가보다 세분화된기능별로 2개 이상으로분화될수도있다.또한,이하에서 설명할 구성요소각각은자신이 담당하는주기능이외에도다른구성요소가담당하는 기능중일부또는전부의 기능을추가적으로수행할수도있으며,구성요소 각각이 담당하는주기능중일부기능이다른구성요소에의해 전담되어수행될 수도있음은물론이
[63] 또한,본명세서에
비디오의정지영상이거나 동영상,즉비디오그자체를나타낼수있다.
[64] 또한,본명세서에서’샘플’또는’신호’는,영상의 샘플링 위치에할당된
데이터로서프로세싱 대상이 되는데이터를의미한다.예를들어 ,공간영역의 영상에서 화소값,변환영역상의 변환계수들이 샘플들일수있다.이러한 적어도하나의 샘플들을포함하는단위를블록이라고정의할수있다.
[65] 이하에서는,도 1내지도 19를참조하여,일실시예에따른트리구조의부호화 단위 및변환단위에기초한영상부호화방법 및그장치 ,영상복호화방법 및 그장치가개시된다.
[66] 도 1은일실시예에 따른영상복호화장치 (100)의블록도를도시한다.
[67] 영상복호화장치 (100)는비트스트림 획득부 (110)및복호화부 (120)를포함할 수있다.비트스트림 획득부 (110)및복호화부 (120)는적어도하나의프로세서를 포함할수있다.또한비트스트림 획득부 (110)및복호화부 (120)는적어도하나의 프로세서가수행할명령어들을저장하는메모리를포함할수있다.
[68] 비트스트림 획득부 (110)는비트스트림을수신할수있다.비트스트림은
후술되는영상부호화장치 (200)가영상을부호화한정보를포함한다.또한 비트스트림은영상부호화장치 (200)로부터송신될수있다.영상부호화 장치 (200)및 영상복호화장치 (100)는유선또는무선으로연결될수있으며, 비트스트림 획득부 (110)는유선또는무선을통하여 비트스트림을수신할수 있다.비트스트림 획득부 ( 0)는광학미디어,하드디스크등과같은
저장매체로부터 비트스트림을수신할수있다.복호화부 (120)는수신된 비트스트림으로부터 획득된정보에 기초하여 영상을복원할수있다.
복호화부 (120)는영상을복원하기위한신택스엘리먼트를비트스트림으로부터 획득할수있다.복호화부 (120)는신택스엘리먼트에기초하여 영상을복원할수 있다.
[69] 영상복호화장치 ( 0)의동작에 대해상세히설명하면,비트스트림
획득부 (110)는비트스트림을수신할수있다.
이 영상복호화장치 (100)는비트스트림으로부터부호화단위의분할형태모드에 대응하는빈스트링을획득하는동작을수행할수있다.그리고,영상복호화 장치 (100)는부호화단위의분할규칙을결정하는동작을수행할수있다.또한 영상복호화장치 (100)는분할형태모드에 대응하는빈스트링 및상기분할규칙 중적어도하나에 기초하여 ,부호화단위를복수의부호화단위들로분할하는 2020/175967 1»(:1^1{2020/002924 동작을수행할수있다.영상복호화장치 (100)는분할규칙을결정하기위하여 , 부호화단위의너비 및높이의 비율에따른,상기부호화단위의크기의 허용가능한제 1범위를결정할수있다.영상복호화장치 (100)는분할규칙을 결정하기 위하여,부호화단위의분할형태모드에따른,부호화단위의크기의 허용가능한제 2범위를결정할수있다.
P 1] 이하에서는본개시의 일실시예에따라부호화단위의분할에 대하여자세히 설명한다.
[72] 먼저하나의픽처 (Picture)는하나이상의슬라이스혹은하나이상의타일로 분할될수있다.하나의슬라이스혹은하나의타일는하나이상의 최대부호화 단위 (Coding Tree Unit; CTU)의시퀀스일수있다.구현예에따라,하나의 슬라이스는하나이상의 타일을포함하고,하나의슬라이스는하나이상의최대 부호화단위를포함할수도있다.하나또는복수의 타일을포함하는슬라이스가 픽처 내에서결정될수있다.
[73] 최대부호화단위 (CTU)와대비되는개념으로최대부호화블록 (Coding Tree Block; CTB)이 있다.최대부호화블록 (CTB)은 NxN개의 샘플들을포함하는 NxN 블록을의미한다어은정수).각컬러성분은하나이상의최대부호화블록으로 분할될수있다.
[74] 픽처가 3개의 샘플어레이 (Y, Cr, Cb성분별샘플어레이 )를가지는경우에 최대부호화단위 (CTU)란,루마샘플의최대부호화블록및그에 대응되는 크로마샘플들의 2개의 최대부호화블록과,루마샘플,크로마샘플들을 부호화하는데 이용되는신택스구조들을포함하는단위이다.픽처가모노크롬 픽처인경우에최대부호화단위란,모노크롬샘플의최대부호화블록과 모노크롬샘플들을부호화하는데 이용되는신택스구조들을포함하는 단위이다.픽처가컬러성분별로분리되는컬러플레인으로부호화되는픽처인 경우에 최대부호화단위란,해당픽처와픽처의 샘플들을부호화하는데 이용되는신택스구조들을포함하는단위이다.
5] 하나의최대부호화블록 (CTB)은 MxN개의 샘플들을포함하는 MxN부호화 블록 (coding block)으로분할될수있다 (M, N은정수).
R6] 픽처가 Y, Cr, Cb성분별샘플어레이를가지는경우에부호화단위 (Coding
Unit; CU)란,루마샘플의부호화블록및그에 대응되는크로마샘플들의 2개의 부호화블록과,루마샘플,크로마샘플들을부호화하는데 이용되는신택스 구조들을포함하는단위이다.픽처가모노크롬픽처인경우에부호화단위란, 모노크롬샘플의부호화블록과모노크롬샘플들을부호화하는데 이용되는 신택스구조들을포함하는단위이다.픽처가컬러성분별로분리되는컬러 플레인으로부호화되는픽처인경우에부호화단위란,해당픽처와픽처의 샘플들을부호화하는데이용되는신택스구조들을포함하는단위이다.
7] 위에서설명한바와같이,최대부호화블록과최대부호화단위는서로
구별되는개념이며 ,부호화블록과부호화단위는서로구별되는개념이다.즉, 2020/175967 1»(:1^1{2020/002924
(최대)부호화단위는해당샘플을포함하는 (최대)부호화블록과그에 대응하는 신택스구조를포함하는데이터구조를의미한다.하지만당업자가 (최대 ) 부호화단위또는 (최대)부호화블록가소정 개수의 샘플들을포함하는소정 크기의블록을지칭한다는것을이해할수있으므로,이하명세서에서는최대 부호화블록과최대부호화단위,또는부호화블록과부호화단위를특별한 사정이 없는한구별하지 않고언급한다.
[78] 영상은최대부호화단위 (Coding Tree Unit; CTU)로분할될수있다.최대
부호화단위의크기는비트스트림으로부터 획득된정보에기초하여 결정될수 있다.최대부호화단위의모양은동일크기의정사각형을가질수있다.하지만 이에 한정되는것은아니다.
9] 예를들어,비트스트림으로부터루마부호화블록의최대크기에 대한정보가 획득될수있다.예를들어,루마부호화블록의최대크기에 대한정보가 나타내는루마부호화블록의최대크기는 4x4, 8x8, 16x16, 32x32, 64x64, 128x128, 256x256중하나일수있다.
[8이 예를들어,비트스트림으로부터 2분할이가능한루마부호화블록의최대
크기와루마블록크기차이에 대한정보가획득될수있다.루마블록크기 차이에 대한정보는루마최대부호화단위와 2분할이가능한최대루마부호화 블록간의크기차이를나타낼수있다.따라서,비트스트림으로부터 획득된 2분할이 가능한루마부호화블록의 최대크기에 대한정보와루마블록크기 차이에 대한정보를결합하면,루마최대부호화단위의크기가결정될수있다. 루마최대부호화단위의크기를이용하면크로마최대부호화단위의크기도 결정될수있다.예를들어,컬러포맷에 따라 Y: Cb : Cr비율이 4:2:0이라면, 크로마블록의크기는루마블록의크기의절반일수있고,마찬가지로크로마 최대부호화단위의크기는루마최대부호화단위의크기의 절반일수있다.
[81] 일실시예에 따르면,바이너리분할 (binary split)이가능한루마부호화블록의 최대크기에 대한정보는비트스트림으로부터 획득하므로,바이너리분할이 가능한루마부호화블록의최대크기는가변적으로결정될수있다.이와달리, 터너리분할 (ternary split)이가능한루마부호화블록의최대크기는고정될수 있다.예를들어, I픽처에서 터너리분할이 가능한루마부호화블록의 최대 크기는 32x32이고, P픽처또는 B픽처에서터너리분할이가능한루마부호화 블록의 최대크기는 64x64일수있다.
[82] 또한최대부호화단위는비트스트림으로부터 획득된분할형태모드정보에 기초하여부호화단위로계층적으로분할될수있다.분할형태모드정보로서, 쿼드분할 (quad split)여부를나타내는정보,다분할여부를나타내는정보,분할 방향정보및분할타입 정보중적어도하나가비트스트림으로부터 획득될수 있다.
[83] 예를들어 ,쿼드분할 (quad split)여부를나타내는정보는현재부호화단위가 쿼드분할 (QUAD_SPLIT)될지또는쿼드분할되지 않을지를나타낼수있다. 2020/175967 1»(:1^1{2020/002924
[84] 현재부호화단위가쿼드분할지되 않으면,다분할여부를나타내는정보는
현재부호화단위가더 이상분할되지 않을지 (NO_SPLIT)아니면
바이너리/터너리분할될지 여부를나타낼수있다.
[85] 현재부호화단위가바이너리분할되거나터너리분할되면,분할방향정보는 현재부호화단위가수평방향또는수직 방향중하나로분할됨을나타낸다.
[86] 현재부호화단위가수평또는수직 방향으로분할되면분할타입 정보는현재 부호화단위를바이너리분할또는터너리분할로분할함을나타낸다.
[87] 분할방향정보및분할타입 정보에따라,현재부호화단위의분할모드가
결정될수있다.현재부호화단위가수평방향으로바이너리분할되는경우의 분할모드는바이너리수평분할 (SPLIT_BT_HOR),수평방향으로터너리 분할되는경우의터너리수평분할 (SPLIT_TT_HOR),수직 방향으로바이너리 분할되는경우의분할모드는바이너리수직분할 (SPLIT_BT_VER)및수직 방향으로터너리분할되는경우의분할모드는터너리수직분할
(SPLIT_TT_VER)로결정될수있다.
[88] 영상복호화장치 (100)는비트스트림으로부터분할형태모드정보를하나의 빈스트링으로부터 획득할수있다.영상복호화장치 (100)가수신한
비트스트림의 형태는 Fixed length binary code, Unary code, Truncated unary code, 미리 결정된바이너리코드등을포함할수있다.빈스트링은정보를 2진수의 나열로나타낸것이다.빈스트링은적어도하나의 비트로구성될수있다.영상 복호화장치 (100)는분할규칙에 기초하여빈스트링에 대응하는분할형태모드 정보를획득할수있다.영상복호화장치 ( 100)는하나의빈스트링에 기초하여 , 부호화단위를쿼드분할할지 여부,분할하지 않을지또는분할방향및분할 타입을결정할수있다.
[89] 부호화단위는최대부호화단위보다작거나같을수있다.예를들어 최대
부호화단위도최대크기를가지는부호화단위이므로부호화단위의하나이다. 최대부호화단위에 대한분할형태모드정보가분할되지 않음을나타내는 경우,최대부호화단위에서 결정되는부호화단위는최대부호화단위와같은 크기를가진다.최대부호화단위에 대한분할형태모드정보가분할됨을 나타내는경우최대부호화단위는부호화단위들로분할될수있다.또한 부호화단위에 대한분할형태모드정보가분할을나타내는경우부호화 단위들은더작은크기의부호화단위들로분할될수있다.다만,영상의분할은 이에 한정되는것은아니며최대부호화단위 및부호화단위는구별되지 않을 수있다.부호화단위의분할에 대해서는도 3내지도 16에서보다자세히 설명한다.
[9이 또한부호화단위로부터 예측을위한하나이상의 예측블록이결정될수있다. 예측블록은부호화단위와같거나작을수있다.또한부호화단위로부터 변환을위한하나이상의 변환블록이결정될수있다.변환블록은부호화 단위와같거나작을수있다. 2020/175967 1»(:1/10公020/002924 변환블록과예즉블록의모양및크기는서로관련없을수있다.
다른실시예로,부호화단위가예측블록으로서부호화단위를이용하여 예측이수행될수있다.또한부호화단위가변환블록으로서부호화단위를 이용하여 변환이수행될수있다.
[93] 부호화단위의분할에 대해서는도 3내지도 16에서보다자세히설명한다.본 개시의 현재블록및주변블록은최대부호화단위,부호화단위,예측블록및 변환블록중하나를나타낼수있다.또한,현재블록또는현재부호화단위는 현재복호화또는부호화가진행되는블록또는현재분할이 진행되고있는 블록이다.주변블록은현재블록이전에복원된블록일수있다.주변블록은 현재블록으로부터공간적또는시간적으로인접할수있다.주변블록은현재 블록의좌하측,좌측,좌상측,상측,우상측,우측,우하측중하나에위치할수 있다.
도 3은일실시예에 따라영상복호화장치 (100)가현재부호화단위를 분할하여 적어도하나의부호화단위를결정하는과정을도시한다.
[95] 블록형태는 4Nx4N, 4Nx2N, 2Nx4N, 4NxN, Nx4N, 32NxN, Nx32N, 16NxN, Nxl6N, 8NxN또는 Nx8N을포함할수있다.여기서 N은양의 정수일수있다. 블록형태정보는부호화단위의모양,방 ¾너비 및높이의비율또는크기중 적어도하나를나타내는정보이다.
부호화단위의모양은정사각형 (square)및비-정사각형 (non-square)을포함할 수있다.부호화단위의 너비 및높이의길이가같은경우 (즉,부호화단위의블록 형태가 4NX4N인경우),영상복호화장치 ( W0)는부호화단위의블록형태 정보를정사각형으로결정할수있다.영상복호화장치 (100)는부호화단위의 모양을비-정사각형으로결정할수있다.
[97] 부호화단위의 너비 및높이의길이가다른경우 (즉,부호화단위의블록
형태가 4NX2N, 2NX4N, 4NXN, NX4N, 32NXN, NX32N, 16NXN, NX16N, 8NXN또는 Nx8N인경우),영상복호화장치 (100)는부호화단위의블록형태정보를 비-정사각형으로결정할수있다.부호화단위의모양이비-정사각형인경우, 영상복호화장치 (100)는부호화단위의블록형태정보중너비 및높이의 비율을 1:2, 2: 1, 1:4, 4: 1, 1:8, 8: 1, 1: 16, 16: 1, 1:32, 32:1중적어도하나로결정할수 있다.또한,부호화단위의너비의 길이 및높이의길이에 기초하여,영상복호화 장치 (100)는부호화단위가수평 방향인지수직 방향인지결정할수있다.또한, 부호화단위의너비의 길이,높이의길이또는넓이중적어도하나에기초하여, 영상복호화장치 (100)는부호화단위의크기를결정할수있다.
일실시예에 따라영상복호화장치 (100)는블록형태정보를이용하여부호화 단위의 형태를결정할수있고,분할형태모드정보를이용하여부호화단위가 어떤형태로분할되는지를결정할수있다.즉,영상복호화장치 (100)가 이용하는블록형태정보가어떤블록형태를나타내는지에따라분할형태모드 정보가나타내는부호화단위의분할방법이결정될수있다. 2020/175967 1»(:1^1{2020/002924
[99] 영상복호화장치 (100)는비트스트림으로부터분할형태모드정보를획득할 수있다.하지만이에한정되는것은아니며 ,영상복호화장치 ( ^0)및영상 부호화장치 (200)는블록형태정보에기초하여미리약속된분할형태모드 정보를결정할수있다.영상복호화장치 (100)는최대부호화단위또는최소 부호화단위에대하여미리약속된분할형태모드정보를결정할수있다.예를 들어영상복호화장치 (100)는최대부호화단위에대하여분할형태모드정보를 쿼드분할여1파(1 8마¾로결정할수있다.또한,영상복호화장치 (100)는최소 부호화단위에대하여분할형태모드정보를 "분할하지않음”으로결정할수 있다.구체적으로영상복호화장치 (100)는최대부호화단위의크기를
256x256으로결정할수있다.영상복호화장치 (100)는미리약속된분할형태 모드정보를쿼드분할로결정할수있다.쿼드분할은부호화단위의너비및 높이를모두이등분하는분할형태모드이다.영상복호화장치 (100)는분할형태 모드정보에기초하여 256x256크기의최대부호화단위로부터 128x128크기의 부호화단위를획득할수있다.또한영상복호화장치 (100)는최소부호화 단위의크기를 4x4로결정할수있다.영상복호화장치 (100)는최소부호화 단위에대하여 "분할하지않음”을나타내는분할형태모드정보를획득할수 있다.
[10이 일실시예에따라,영상복호화장치 00)는현재부호화단위가정사각형
형태임을나타내는블록형태정보를이용할수있다.예를들어영상복호화 장치 (100)는분할형태모드정보에따라정사각형의부호화단위를분할하지 않을지 ,수직으로분할할지 ,수평으로분할할지 , 4개의부호화단위로분할할지 등을결정할수있다.도 3을참조하면,현재부호화단위 (300)의블록형태 정보가정사각형의형태를나타내는경우,복호화부 (120)는분할되지않음을 나타내는분할형태모드정보에따라현재부호화단위 (300)와동일한크기를 가지는부호화단위 (자0 를분할하지않거나,소정의분할방법을나타내는분할 형태모드정보에기초하여분할된부호화단위 (31아5, 310 310(1, 310 31아 등)를결정할수있다.
[101] 도 3을참조하면영상복호화장치 (100)는일실시예에따라수직방향으로 분할됨을나타내는분할형태모드정보에기초하여현재부호화단위 (300)를 수직방향으로분할한두개의부호화단위 (자아5)를결정할수있다.영상복호화 장치 (100)는수평방향으로분할됨을나타내는분할형태모드정보에기초하여 현재부호화단위 (300)를수평방향으로분할한두개의부호화단위 (자0이를 결정할수있다.영상복호화장치 (100)는수직방향및수평방향으로분할됨을 나타내는분할형태모드정보에기초하여현재부호화단위 (300)를수직방향및 수평방향으로분할한네개의부호화단위 (자0(1)를결정할수있다.영상복호화 장치 (100)는일실시예에따라수직방향으로터너리如1113 )분할됨을나타내는 분할형태모드정보에기초하여현재부호화단위 (300)를수직방향으로분할한 세개의부호화단위 (자0句를결정할수있다.영상복호화장치 (100)는 2020/175967 1»(:1^1{2020/002924 수평방향으로터너리분할됨을나타내는분할형태모드정보에기초하여현재 부호화단위(300)를수평방향으로분할한세개의부호화단위(자¾)를결정할수 있다.다만정사각형의부호화단위가분할될수있는분할형태는상술한 형태로한정하여해석되어서는안되고,분할형태모드정보가나타낼수있는 다양한형태가포함될수있다.정사각형의부호화단위가분할되는소정의분할 형태들은이하에서다양한실시예를통해구체적으로설명하도록한다.
[102] 도 4는일실시예에따라영상복호화장치( ^0)가비 -정사각형의형태인
부호화단위를분할하여적어도하나의부호화단위를결정하는과정을 도시한다.
[103] 일실시예에따라영상복호화장치 00)는현재부호화단위가비-정사각형 형태임을나타내는블록형태정보를이용할수있다.영상복호화장치(100)는 분할형태모드정보에따라비-정사각형의현재부호화단위를분할하지않을지 소정의방법으로분할할지여부를결정할수있다.도 4를참조하면,현재부호화 단위(400또는 450)의블록형태정보가비 -정사각형의형태를나타내는경우, 영상복호화장치(100)는분할되지않음을나타내는분할형태모드정보에따라 현재부호화단위(400또는 450)와동일한크기를가지는부호화단위(410또는 460)를결정하거나,소정의분할방법을나타내는분할형태모드정보에따라 기초하여분할된부호화단위(420 42(¾, 430 43(¾, 4300, 470 47(¾, 480山 48아5, 480이를결정할수있다.비-정사각형의부호화단위가분할되는소정의 분할방법은이하에서다양한실시예를통해구체적으로설명하도록한다.
[104] 일실시예에따라영상복호화장치(100)는분할형태모드정보를이용하여 부호화단위가분할되는형태를결정할수있고,이경우분할형태모드정보는 부호화단위가분할되어생성되는적어도하나의부호화단위의개수를나타낼 수있다.도 4를참조하면분할형태모드정보가두개의부호화단위로현재 부호화단위(400또는 450)가분할되는것을나타내는경우,영상복호화 장치(100)는분할형태모드정보에기초하여현재부호화단위(400또는 450)를 분할하여현재부호화단위에포함되는두개의부호화단위(420 42(¾,또는 470 470비를결정할수있다.
[105] 일실시예에따라영상복호화장치( ^0)가분할형태모드정보에기초하여 비-정사각형의형태의현재부호화단위(400또는 450)를분할하는경우,영상 복호화장치(100)는비-정사각형의현재부호화단위(400또는 450)의긴변의 위치를고려하여현재부호화단위를분할할수있다.예를들면,영상복호화 장치(100)는현재부호화단위(400또는 450)의형태를고려하여현재부호화 단위(400또는 450)의긴변을분할하는방향으로현재부호화단위(400또는 450)를분할하여복수개의부호화단위를결정할수있다.
[106] 일실시예에따라,분할형태모드정보가홀수개의블록으로부호화단위를 분할(터너리분할)하는것을나타내는경우,영상복호화장치(100)는현재 부호화단위(400또는 450)에포함되는홀수개의부호화단위를결정할수있다. 2020/175967 1»(:1^1{2020/002924 예를들면,분할형태모드정보가 3개의부호화단위로현재부호화단위 (400 또는 450)를분할하는것을나타내는경우,영상복호화장치 (100)는현재부호화 단위 (400또는 450)를 3개의부호화단위 (430 43(¾, 4300, 480 48(¾, 480 로 분할할수있다.
[107] 일실시예에따라,현재부호화단위 (400또는 450)의너비및높이의비율이 4:1 또는 1:4일수있다.너비및높이의비율이 4:1인경우,너비의길이가높이의 길이보다길므로블록형태정보는수평방향일수있다.너비및높이의비율이 1:4인경우,너비의길이가높이의길이보다짧으므로블록형태정보는수직 방향일수있다.영상복호화장치 (100)는분할형태모드정보에기초하여현재 부호화단위를홀수개의블록으로분할할것을결정할수있다.또한영상 복호화장치 (100)는현재부호화단위 (400또는 450)의블록형태정보에 기초하여현재부호화단위 (400또는 450)의분할방향을결정할수있다.예를 들어현재부호화단위 (400)가수직방향인경우,영상복호화장치 (100)는현재 부호화단위 (400)를수평방향으로분할하여부호화단위 (430 43(¾, 430 를 결정할수있다.또한현재부호화단위 (450)가수평방향인경우,영상복호화 장치 (100)는현재부호화단위 (450)를수직방향으로분할하여부호화
단위 (480 48(¾, 480 를결정할수있다.
[108] 일실시예에따라영상복호화장치 (100)는현재부호화단위 (400또는 450)에 포함되는홀수개의부호화단위를결정할수있으며,결정된부호화단위들의 크기모두가동일하지는않을수있다.예를들면,결정된홀수개의부호화 단위 (430 43(¾, 4300, 480 48(¾, 480이중소정의부호화단위 (43(¾또는
48(¾)의크기는다른부호화단위 (430 4300, 480 480이들과는다른크기를 가질수도있다.즉,현재부호화단위 (400또는 450)가분할되어결정될수있는 부호화단위는복수의종류의크기를가질수있고,경우에따라서는홀수개의 부호화단위 (430 43(¾, 4300, 480 48(¾, 480이가각각서로다른크기를가질 수도있다.
[109] 일실시예에따라분할형태모드정보가홀수개의블록으로부호화단위가 분할되는것을나타내는경우,영상복호화장치 (100)는현재부호화단위 (400 또는 450)에포함되는홀수개의부호화단위를결정할수있고,나아가영상 복호화장치 (100)는분할하여생성되는홀수개의부호화단위들중적어도 하나의부호화단위에대하여소정의제한을둘수있다.도 4을참조하면영상 복호화장치 (100)는현재부호화단위 (400또는 450)가분할되어생성된 3개의 부호화단위 (430 43(¾, 4300, 480 48(¾, 480 들중중앙에위치하는부호화 단위 (43(¾, 480비에대한복호화과정을다른부호화단위 (430 43(、 480
480이와다르게할수있다.예를들면,영상복호화장치 (100)는중앙에위치하는 부호화단위 (43(¾, 480비에대하여는다른부호화단위 (430 4300, 480 480 와 달리더이상분할되지않도록제한하거나,소정의횟수만큼만분할되도록 제한할수있다. 2020/175967 1»(:1^1{2020/002924
[110] 도 5는일실시예에따라영상복호화장치 (100)가블록형태정보및분할형태 모드정보중적어도하나에기초하여부호화단위를분할하는과정을도시한다.
[111] 일실시예에따라영상복호화장치 (100)는블록형태정보및분할형태모드 정보중적어도하나에기초하여정사각형형태의제 1부호화단위 (500)를 부호화단위들로분할하거나분할하지않는것으로결정할수있다.일실시예에 따라분할형태모드정보가수평방향으로제 1부호화단위 (500)를분할하는 것을나타내는경우,영상복호화장치 (100)는제 1부호화단위 (500)를수평 방향으로분할하여제 2부호화단위 ( 0)를결정할수있다.일실시예에따라 이용되는제 1부호화단위 ,제 2부호화단위 ,제 3부호화단위는부호화단위 간의분할전후관계를이해하기위해이용된용어이다.예를들면,제 1부호화 단위를분할하면제 2부호화단위가결정될수있고,제 2부호화단위가 분할되면제 3부호화단위가결정될수있다.이하에서는이용되는제 1부호화 단위,제 2부호화단위및제 3부호화단위의관계는상술한특징에따르는 것으로이해될수있다.
[112] 일실시예에따라영상복호화장치 (100)는결정된제 2부호화단위 (510)를
분할형태모드정보에기초하여부호화단위들로분할하거나분할하지않는 것으로결정할수있다.도 5를참조하면영상복호화장치 (100)는분할형태모드 정보에기초하여제 1부호화단위 (500)를분할하여결정된비-정사각형의 형태의제 2부호화단위 (510)를적어도하나의제 3부호화단위 (520 52(¾, 52(、 520(1등)로분할하거나제 2부호화단위 ( 0)를분할하지않을수있다.영상 복호화장치 (100)는분할형태모드정보를획득할수있고영상복호화
장치 (100)는획득한분할형태모드정보에기초하여제 1부호화단위 (500)를 분할하여다양한형태의복수개의제 2부호화단위 (예를들면, 510)를분할할수 있으며 ,제 2부호화단위 ( 0)는분할형태모드정보에기초하여제 1부호화 단위 (500)가분할된방식에따라분할될수있다.일실시예에따라,제 1부호화 단위 (500)가제 1부호화단위 (500)에대한분할형태모드정보에기초하여제 2 부호화단위 (510)로분할된경우,제 2부호화단위 (510)역시제 2부호화 단위 ( 0)에대한분할형태모드정보에기초하여제 3부호화단위 (예를들면, 520&, 52(¾, 5200, 520(1등)으로분할될수있다.즉,부호화단위는부호화단위 각각에관련된분할형태모드정보에기초하여재귀적으로분할될수있다. 따라서비-정사각형형태의부호화단위에서정사각형의부호화단위가결정될 수있고,이러한정사각형형태의부호화단위가재귀적으로분할되어
비-정사각형형태의부호화단위가결정될수도있다.
[113] 도 5를참조하면,비-정사각형형태의제 2부호화단위 (5 )가분할되어
결정되는홀수개의제 3부호화단위 (52(¾, 5200, 520(1)중소정의부호화 단위 (예를들면,가운데에위치하는부호화단위또는정사각형형태의부호화 단위)는재귀적으로분할될수있다.일실시예에따라홀수개의제 3부호화 단위 (52(¾, 5200, 520(1)중하나인정사각형형태의제 3부호화단위 (520 는수평 2020/175967 1»(:1^1{2020/002924 방향으로분할되어복수개의 제 4부호화단위로분할될수있다.복수개의 제 4 부호화단위 (530 53(¾, 5300, 530(1)중하나인비-정사각형 형태의 제 4부호화 단위 (53(¾또는 530(1)는다시복수개의부호화단위들로분할될수있다.예를 들면,비-정사각형 형태의제 4부호화단위 (53(¾또는 530(1)는홀수개의부호화 단위로다시분할될수도있다.부호화단위의 재귀적분할에 이용될수있는 방법에 대하여는다양한실시예를통해후술하도록한다.
[114] 일실시예에 따라영상복호화장치 (100)는분할형태모드정보에기초하여 제 3부호화단위 (520 52(¾, 5200, 520(1등)각각을부호화단위들로분할할수 있다.또한영상복호화장치 (100)는분할형태모드정보에 기초하여제 2부호화 단위 ( 0)를분할하지 않는것으로결정할수있다.영상복호화장치 (100)는일 실시예에 따라비-정사각형 형태의제 2부호화단위 (5 )를홀수개의제 3부호화 단위 (52(¾, 5200, 520(1)로분할할수있다.영상복호화장치 (100)는홀수개의제 3 부호화단위 (52(¾, 5200, 520(1)중소정의 제 3부호화단위에 대하여소정의 제한을둘수있다.예를들면영상복호화장치 (100)는홀수개의제 3부호화 단위 (52(¾, 5200, 520(1)중가운데에위치하는부호화단위 (520 에 대하여는더 이상분할되지 않는것으로제한하거나또는설정 가능한횟수로분할되어야 하는것으로제한할수있다.
[115] 도 5를참조하면,영상복호화장치 (100)는비-정사각형 형태의제 2부호화 단위 (510)에포함되는홀수개의제 3부호화단위 (52(¾, 5200, 520(1)들중 가운데에 위치하는부호화단위 (520 는더 이상분할되지 않거나,소정의분할 형태로분할 (예를들면 4개의부호화단위로만분할하거나제 2부호화 단위 (5 )가분할된형태에 대응하는형태로분할)되는것으로제한하거나, 소정의 횟수로만분할 (예를들면 II회만분할, 11>0)하는것으로제한할수있다. 다만가운데에위치한부호화단위 (520이에 대한상기제한은단순한실시예들에 불과하므로상술한실시예들로제한되어해석되어서는안되고,가운데에 위치한부호화단위 (520 가다른부호화단위 (52(¾, 520(1)와다르게복호화될 수있는다양한제한들을포함하는것으로해석되어야한다.
[116] 일실시예에 따라영상복호화장치 (100)는현재부호화단위를분할하기위해 이용되는분할형태모드정보를현재부호화단위내의소정의위치에서 획득할 수있다.
[117] 도 6은일실시예에 따라영상복호화장치 (100)가홀수개의부호화단위들중 소정의부호화단위를결정하기위한방법을도시한다.
[118] 도 6을참조하면,현재부호화단위 (600, 650)의분할형태모드정보는현재 부호화단위 (600, 650)에포함되는복수개의 샘플중소정위치의 샘플 (예를 들면,가운데에 위치하는샘플 (640, 690))에서 획득될수있다.다만이러한분할 형태모드정보중적어도하나가획득될수있는현재부호화단위 (600)내의 소정 위치가도 6에서도시하는가운데위치로한정하여해석되어서는안되고, 소정 위치에는현재부호화단위 (600)내에포함될수있는다양한위치 (예를 2020/175967 1»(:1^1{2020/002924 들면,최상단,최하단,좌측,우측,좌측상단,좌측하단,우측상단또는우측하단 등)가포함될수있는것으로해석되어야한다.영상복호화장치 (100)는소정 위치로부터획득되는분할형태모드정보를획득하여현재부호화단위를 다양한형태및크기의부호화단위들로분할하거나분할하지않는것으로 결정할수있다.
[119] 일실시예에따라영상복호화장치 00)는현재부호화단위가소정의개수의 부호화단위들로분할된경우그중하나의부호화단위를선택할수있다. 복수개의부호화단위들중하나를선택하기위한방법은다양할수있으며, 이러한방법들에대한설명은이하의다양한실시예를통해후술하도록한다.
[12이 일실시예에따라영상복호화장치 00)는현재부호화단위를복수개의 부호화단위들로분할하고,소정위치의부호화단위를결정할수있다.
[121] 일실시예에따라영상복호화장치 (100)는홀수개의부호화단위들중
가운데에위치하는부호화단위를결정하기위하여홀수개의부호화단위들 각각의위치를나타내는정보를이용할수있다.도 6을참조하면,영상복호화 장치 (100)는현재부호화단위 (600)또는현재부호화단위 (650)를분할하여 홀수개의부호화단위들 (620 62(¾, 6200)또는홀수개의부호화단위들 (660 66(¾, 660이을결정할수있다.영상복호화장치 (100)는홀수개의부호화 단위들 (620 62(¾, 6200)또는홀수개의부호화단위들 (660 66(¾, 660 의 위치에대한정보를이용하여가운데부호화단위 (620비또는가운데부호화 단위 (660비를결정할수있다.예를들면영상복호화장치 (100)는부호화 단위들 (62(切, 62(¾, 620 에포함되는소정의샘플의위치를나타내는정보에 기초하여부호화단위들 (620 62(¾, 620 의위치를결정함으로써가운데에 위치하는부호화단위 (620비를결정할수있다.구체적으로,영상복호화 장치 (100)는부호화단위들 (620 62(¾, 620이의좌측상단의샘플 (630 63(¾, 630이의위치를나타내는정보에기초하여부호화단위들 (62(切, 62(¾, 620이의 위치를결정함으로써가운데에위치하는부호화단위 (620비를결정할수있다.
[122] 일실시예에따라부호화단위들 (620 62(¾, 620 에각각포함되는좌측
상단의샘플 (630 63(¾, 63(切의위치를나타내는정보는부호화단위들 (620 62(¾, 620이의픽쳐내에서의위치또는좌표에대한정보를포함할수있다.일 실시예에따라부호화단위들 (62(切, 62(¾, 620 에각각포함되는좌측상단의 샘플 (630 63(¾, 63(切의위치를나타내는정보는현재부호화단위 (600)에 포함되는부호화단위들 (620 62(¾, 620 의너비또는높이를나타내는정보를 포함할수있고,이러한너비또는높이는부호화단위들 (620 62(¾, 620 의 픽쳐내에서의좌표간의차이를나타내는정보에해당할수있다.즉,영상 복호화장치 (100)는부호화단위들 (620 62아5, 620 의픽쳐내에서의위치또는 좌표에대한정보를직접이용하거나좌표간의차이값에대응하는부호화 단위의너비또는높이에대한정보를이용함으로써가운데에위치하는부호화 단위 (620비를결정할수있다. 2020/175967 1»(:1^1{2020/002924
[123] 일실시예에따라,상단부호화단위(620幻의좌측상단의샘플(630幻의위치를 나타내는정보는江 좌표를나타낼수있고,가운데부호화단위(620비의 좌측상단의샘플(530비의위치를나타내는정보는(此,外)좌표를나타낼수 있고,하단부호화단위(620 의좌측상단의샘플(63(切의위치를나타내는 정보는江 )좌표를나타낼수있다.영상복호화장치(100)는부호화 단위들(620 62(¾, 620 에각각포함되는좌측상단의샘플(630 63(¾, 63(切의 좌표를이용하여가운데부호화단위(620비를결정할수있다.예를들면,좌측 상단의샘플(630 63(¾, 630이의좌표를오름차순또는내림차순으로
정렬하였을때,가운데에위치하는샘플(630비의좌표인(此,外)를포함하는 부호화단위(620비를현재부호화단위(600)가분할되어결정된부호화 단위들(620 62(¾, 6200)중가운데에위치하는부호화단위로결정할수있다. 다만좌측상단의샘플(63(切, 63(¾, 630이의위치를나타내는좌표는픽쳐 내에서의절대적인위치를나타내는좌표를나타낼수있고,나아가상단부호화 단위(620 의좌측상단의샘플(630 의위치를기준으로,가운데부호화 단위(620비의좌측상단의샘플(630비의상대적위치를나타내는정보인付 , (1外)좌표,하단부호화단위(620 의좌측상단의샘플(630이의상대적위치를 나타내는정보인((1x(:,(1)0좌표를이용할수도있다.또한부호화단위에 포함되는샘플의위치를나타내는정보로서해당샘플의좌표를이용함으로써 소정위치의부호화단위를결정하는방법이상술한방법으로한정하여 해석되어서는안되고,샘플의좌표를이용할수있는다양한산술적방법으로 해석되어야한다.
[124] 일실시예에따라영상복호화장치(100)는현재부호화단위(600)를복수개의 부호화단위들(620 62(¾, 620 로분할할수있고,부호화단위들(620 62(¾, 6200)중소정의기준에따라부호화단위를선택할수있다.예를들면,영상 복호화장치(100)는부호화단위들(620 62015, 62(切중크기가다른부호화 단위(620비를선택할수있다.
[125] 일실시예에따라영상복호화장치(100)는상단부호화단위(620幻의좌측 상단의샘플(630 의위치를나타내는정보인( 좌표,가운데부호화 단위(620비의좌측상단의샘플(630비의위치를나타내는정보인(此,外)좌표, 하단부호화단위(620이의좌측상단의샘플(63(切의위치를나타내는정보인 (^, )좌표를이용하여부호화단위들(620 62(¾, 6200)각각의너비또는 높이를결정할수있다.영상복호화장치(100)는부호화단위들(620 62아5,
620(:)의위치를나타내는좌표인( ) ),( ,) ),(X*:,)0를이용하여부호화 단위들(620 62(¾, 6200)각각의크기를결정할수있다.일실시예에따라,영상 복호화장치(100)는상단부호화단위(620 의너비를현재부호화단위(600)의 너비로결정할수있다.영상복호화장치(100)는상단부호화단위(620幻의 높이를
있다.일실시예에따라영상복호화장치(100)는 가운데부호화단위(620비의너비를현재부호화단위(600)의너비로결정할수 2020/175967 1»(:1^1{2020/002924 있다.영상복호화장치 (100)는가운데부호화단위 (620비의높이를 -外로 결정할수있다.일실시예에따라영상복호화장치 ( 0)는하단부호화단위의 너비또는높이는현재부호화단위의너비또는높이와상단부호화단위 (620幻 및가운데부호화단위 (620비의너비및높이를이용하여결정할수있다.영상 복호화장치 (100)는결정된부호화단위들 (620 62(¾, 620이의너비및높이에 기초하여다른부호화단위와다른크기를갖는부호화단위를결정할수있다. 도 6을참조하면,영상복호화장치 (100)는상단부호화단위 (620 및하단 부호화단위 (620이의크기와다른크기를가지는가운데부호화단위 (620비를 소정위치의부호화단위로결정할수있다.다만상술한영상복호화
장치 (100)가다른부호화단위와다른크기를갖는부호화단위를결정하는 과정은샘플좌표에기초하여결정되는부호화단위의크기를이용하여소정 위치의부호화단위를결정하는일실시예에불과하므로,소정의샘플좌표에 따라결정되는부호화단위의크기를비교하여소정위치의부호화단위를 결정하는다양한과정이이용될수있다.
[126] 영상복호화장치 (100)는좌측부호화단위 (660幻의좌측상단의샘플 (670幻의 위치를나타내는정보인江(1, 幻좌표,가운데부호화단위 (660비의좌측상단의 샘플 (670비의위치를나타내는정보인江 이좌표,우측부호화단위 (660 의 좌측상단의샘플 (67(切의위치를나타내는정보인江, )좌표를이용하여 부호화단위들 (660 66(¾, 6600)각각의너비또는높이를결정할수있다.영상 복호화장치 (100)는부호화단위들 (660 66(¾, 660이의위치를나타내는좌표인 江山 (1), , £), , 幻를이용하여부호화단위들 (660 660江 6600)각각의 크기를결정할수있다.
[127] 예에따라,영상복호화장치 (100)는좌측부호화단위 (660幻의너비를
정할수있다.영상복호화장치 (100)는좌측부호화단위 (660幻의 높이를현재부호화단위 (650)의높이로결정할수있다.일실시예에따라영상 복호화장치 (100)는가운데부호화단위 (660비의너비를 로결정할수있다. 영상복호화장치 (100)는가운데부호화단위 (660비의높이를현재부호화 단위 (600)의높이로결정할수있다.일실시예에따라영상복호화장치 (100)는 우측부호화단위 (660 의너비또는높이는현재부호화단위 (650)의너비또는 높이와좌측부호화단위 (660 및가운데부호화단위 (660비의너비및높이를 이용하여결정할수있다.영상복호화장치 (100)는결정된부호화단위들 (660 66(¾, 660이의너비및높이에기초하여다른부호화단위와다른크기를갖는 부호화단위를결정할수있다.도 6을참조하면,영상복호화장치 (100)는좌측 부호화단위 (660 및우측부호화단위 (660이의크기와다른크기를가지는 가운데부호화단위 (660비를소정위치의부호화단위로결정할수있다.다만 상술한영상복호화장치 (100)가다른부호화단위와다른크기를갖는부호화 단위를결정하는과정은샘플좌표에기초하여결정되는부호화단위의크기를 이용하여소정위치의부호화단위를결정하는일실시예에불과하므로,소정의 2020/175967 1»(:1^1{2020/002924 샘플좌표에따라결정되는부호화단위의크기를비교하여소정위치의부호화 단위를결정하는다양한과정이이용될수있다.
[128] 다만부호화단위의위치를결정하기위하여고려하는샘플의위치는상술한 좌측상단으로한정하여해석되어서는안되고부호화단위에포함되는임의의 샘플의위치에대한정보가이용될수있는것으로해석될수있다.
[129] 일실시예에따라영상복호화장치 00)는현재부호화단위의형태를
고려하여,현재부호화단위가분할되어결정되는홀수개의부호화단위들중 소정위치의부호화단위를선택할수있다.예를들면,현재부호화단위가 너비가높이보다긴비 -정사각형형태라면영상복호화장치 00)는수평방향에 따라소정위치의부호화단위를결정할수있다.즉,영상복호화장치 (100)는 수평방향으로위치를달리하는부호화단위들중하나를결정하여해당부호화 단위에대한제한을둘수있다.현재부호화단위가높이가너비보다긴 비-정사각형형태라면영상복호화장치 (100)는수직방향에따라소정위치의 부호화단위를결정할수있다.즉,영상복호화장치 (100)는수직방향으로 위치를달리하는부호화단위들중하나를결정하여해당부호화단위에대한 제한을둘수있다.
[130] 일실시예에따라영상복호화장치 (100)는짝수개의부호화단위들중소정 위치의부호화단위를결정하기위하여짝수개의부호화단위들각각의위치를 나타내는정보를이용할수있다.영상복호화장치 (100)는현재부호화단위를 분할 (바이너리분할)하여짝수개의부호화단위들을결정할수있고짝수개의 부호화단위들의위치에대한정보를이용하여소정위치의부호화단위를 결정할수있다.이에대한구체적인과정은도 6에서상술한홀수개의부호화 단위들중소정위치 (예를들면,가운데위치)의부호화단위를결정하는과정에 대응하는과정일수있으므로생략하도록한다.
[131] 일실시예에따라,비-정사각형형태의현재부호화단위를복수개의부호화 단위로분할한경우,복수개의부호화단위들중소정위치의부호화단위를 결정하기위하여분할과정에서소정위치의부호화단위에대한소정의정보를 이용할수있다.예를들면영상복호화장치 (100)는현재부호화단위가 복수개로분할된부호화단위들중가운데에위치하는부호화단위를결정하기 위하여분할과정에서가운데부호화단위에포함된샘플에저장된블록형태 정보및분할형태모드정보중적어도하나를이용할수있다.
[132] 도 6을참조하면영상복호화장치 (100)는분할형태모드정보에기초하여 현재부호화단위 (600)를복수개의부호화단위들 (620 62(¾, 620 로분할할수 있으며,복수개의부호화단위들 (620 62(¾, 6200)중가운데에위치하는부호화 단위 (620비를결정할수있다.나아가영상복호화장치 (100)는분할형태모드 정보가획득되는위치를고려하여 ,가운데에위치하는부호화단위 (620비를 결정할수있다.즉,현재부호화단위 (600)의분할형태모드정보는현재부호화 단위 (600)의가운데에위치하는샘플 (640)에서획득될수있으며,상기분할형태 2020/175967 1»(:1^1{2020/002924 모드정보에기초하여현재부호화단위 (600)가복수개의부호화단위들 (62(切, 620江 620이로분할된경우상기샘플 (640)을포함하는부호화단위 (620비를 가운데에위치하는부호화단위로결정할수있다.다만가운데에위치하는 부호화단위로결정하기위해이용되는정보가분할형태모드정보로한정하여 해석되어서는안되고,다양한종류의정보가가운데에위치하는부호화단위를 결정하는과정에서이용될수있다.
[133] 일실시예에따라소정위치의부호화단위를식별하기위한소정의정보는, 결정하려는부호화단위에포함되는소정의샘플에서획득될수있다.도 6을 참조하면,영상복호화장치 (100)는현재부호화단위 (600)가분할되어결정된 복수개의부호화단위들 (620 62(¾, 6200)중소정위치의부호화단위 (예를 들면,복수개로분할된부호화단위중가운데에위치하는부호화단위)를 결정하기위하여현재부호화단위 (600)내의소정위치의샘플 (예를들면,현재 부호화단위 (600)의가운데에위치하는샘플)에서획득되는분할형태모드 정보를이용할수있다.즉,영상복호화장치 (100)는현재부호화단위 (600)의 블록형태를고려하여상기소정위치의샘플을결정할수있고,영상복호화 장치 (100)는현재부호화단위 (600)가분할되어결정되는복수개의부호화 단위들 (620 62(¾, 6200)중,소정의정보 (예를들면,분할형태모드정보)가 획득될수있는샘플이포함된부호화단위 (620비를결정하여소정의제한을둘 수있다.도 6을참조하면일실시예에따라영상복호화장치 (100)는소정의 정보가획득될수있는샘늘로서현재부호화단위 (600)의가운데에위치하는 샘플 (640)을결정할수있고,영상복호화장치 (100)는이러한샘플 (640)이 포함되는부호화단위 (620비를복호화과정에서의소정의제한을둘수있다. 다만소정의정보가획득될수있는샘플의위치는상술한위치로한정하여 해석되어서는안되고,제한을두기위해결정하려는부호화단위 (620비에 포함되는임의의위치의샘플들로해석될수있다.
[134] 일실시예에따라소정의정보가획득될수있는샘플의위치는현재부호화 단위 (600)의형태에따라결정될수있다.일실시예에따라블록형태정보는 현재부호화단위의형태가정사각형인지또는비-정사각형인지여부를결정할 수있고,형태에따라소정의정보가획득될수있는샘플의위치를결정할수 있다.예를들면,영상복호화장치 (100)는현재부호화단위의너비에대한정보 및높이에대한정보중적어도하나를이용하여현재부호화단위의너비및 높이중적어도하나를반으로분할하는경계상에위치하는샘플을소정의 정보가획득될수있는샘플로결정할수있다.또다른예를들면,영상복호화 장치 (100)는현재부호화단위에관련된블록형태정보가비-정사각형형태임을 나타내는경우,현재부호화단위의긴변을반으로분할하는경계에인접하는 샘플중하나를소정의정보가획득될수있는샘플로결정할수있다.
[135] 일실시예에따라영상복호화장치 00)는현재부호화단위를복수개의
부호화단위로분할한경우,복수개의부호화단위들중소정위치의부호화 2020/175967 1»(:1^1{2020/002924 단위를결정하기위하여,분할형태모드정보를이용할수있다.일실시예에 따라영상복호화장치 (100)는분할형태모드정보를부호화단위에포함된소정 위치의샘플에서획득할수있고,영상복호화장치 (100)는현재부호화단위가 분할되어생성된복수개의부호화단위들을복수개의부호화단위각각에 포함된소정위치의샘플로부터획득되는분할형태모드정보를이용하여 분할할수있다.즉,부호화단위는부호화단위각각에포함된소정위치의 샘플에서획득되는분할형태모드정보를이용하여재귀적으로분할될수있다. 부호화단위의재귀적분할과정에대하여는도 5를통해상술하였으므로 자세한설명은생략하도록한다.
[136] 일실시예에따라영상복호화장치 (100)는현재부호화단위를분할하여
적어도하나의부호화단위를결정할수있고,이러한적어도하나의부호화 단위가복호화되는순서를소정의블록 (예를들면,현재부호화단위)에따라 결정할수있다.
[137] 도 7는일실시예에따라영상복호화장치 (100)가현재부호화단위를
분할하여복수개의부호화단위들을결정하는경우,복수개의부호화단위들이 처리되는순서를도시한다.
[138] 일실시예에따라영상복호화장치 ( W0)는분할형태모드정보에따라제 1 부호화단위 (700)를수직방향으로분할하여제 2부호화단위 (기 0a, TLOb)를 결정하거나제 1부호화단위 (700)를수평방향으로분할하여제 2부호화 단위 (730a, 730b)를결정하거나제 1부호화단위 (700)를수직방향및수평 방향으로분할하여제 2부호화단위 (750a, 750b, 750c, 750d)를결정할수있다.
[139] 도 7를참조하면,영상복호화장치 (100)는제 1부호화단위 (700)를수직
방향으로분할하여결정된제 2부호화단위 (기 0a, TL0b)를수평방향 (기 Oc)으로 처리되도록순서를결정할수있다.영상복호화장치 (100)는제 1부호화 단위 (700)를수평방향으로분할하여결정된제 2부호화단위 (730a, 730b)의처리 순서를수직방향 (730c)으로결정할수있다.영상복호화장치 (100)는제 1 부호화단위 (700)를수직방향및수평방향으로분할하여결정된제 2부호화 단위 (750a, 750b, 750c, 750d)를하나의행에위치하는부호화단위들이처리된후 다음행에위치하는부호화단위들이처리되는소정의순서 (예를들면,래스터 스캔순서 ((raster scan order)또는 z스캔순서 (z scan order)(750e)등)에따라 결정할수있다.
[140] 일실시예에따라영상복호화장치 (100)는부호화단위들을재귀적으로
분할할수있다.도 7를참조하면,영상복호화장치 (100)는제 1부호화
단위 (700)를분할하여복수개의부호화단위들 (기 0a, 710b, 730a, 730b, 750a,
750b, 750c, 750d)을결정할수있고,결정된복수개의부호화단위들 (기 0a, 710b, 730a, 730b, 750a, 750b, 750c, 750d)각각을재귀적으로분할할수있다.복수개의 부호화단위들 C710a, 710b, 730a, 730b, 750a, 750b, 750c, 750d)을분할하는방법은 제 1부호화단위 (700)를분할하는방법에대응하는방법이될수있다.이에따라 2020/175967 1»(:1^1{2020/002924 복수개의부호화단위들(기 0 기아5, 730 73(¾, 750 75(¾, 75(、 750(1)은각각 독립적으로복수개의부호화단위들로분할될수있다.도 7를참조하면영상 복호화장치(100)는제 1부호화단위(700)를수직방향으로분할하여제 2부호화 단위(기 0 기 0비를결정할수있고,나아가제 2부호화단위(기 0 기 0비각각을 독립적으로분할하거나분할하지않는것으로결정할수있다.
[141] 일실시예에따라영상복호화장치(100)는좌측의제 2부호화단위(기 0幻를 수평방향으로분할하여제 3부호화단위(720 720비로분할할수있고,우측의 제 2부호화단위(기 0비는분할하지않을수있다.
[142] 일실시예에따라부호화단위들의처리순서는부호화단위의분할과정에 기초하여결정될수있다.다시말해,분할된부호화단위들의처리순서는 분할되기직전의부호화단위들의처리순서에기초하여결정될수있다.영상 복호화장치(100)는좌측의제 2부호화단위(기 0 가분할되어결정된제 3 부호화단위(720 72015)가처리되는순서를우측의제 2부호화단위(기 0비와 독립적으로결정할수있다.좌측의제 2부호화단위(기 0 가수평방향으로 분할되어제 3부호화단위(720 72아5)가결정되었으므로제 3부호화단위(720 72아5)는수직방향(720 으로처리될수있다.또한좌측의제 2부호화단위(기 0幻 및우측의제 2부호화단위(기 0비가처리되는순서는수평방향(기 0이에 해당하므로,좌측의제 2부호화단위(기 0幻에포함되는제 3부호화단위(720 72아5)가수직방향(720 으로처리된후에우측부호화단위(기 0비가처리될수 있다.상술한내용은부호화단위들이각각분할전의부호화단위에따라처리 순서가결정되는과정을설명하기위한것이므로,상술한실시예에한정하여 해석되어서는안되고,다양한형태로분할되어결정되는부호화단위들이 소정의순서에따라독립적으로처리될수있는다양한방법으로이용되는 것으로해석되어야한다.
[143] 도 8는일실시예에따라영상복호화장치(100)가소정의순서로부호화
단위가처리될수없는경우,현재부호화단위가홀수개의부호화단위로 분할되는것임을결정하는과정을도시한다.
[144] 일실시예에따라영상복호화장치(100)는획득된분할형태모드정보에
기초하여현재부호화단위가홀수개의부호화단위들로분할되는것을결정할 수있다.도 8를참조하면정사각형형태의제 1부호화단위(800)가비-정사각형 형태의제 2부호화단위(810 81아5)로분할될수있고,제 2부호화단위(810 81(¾)는각각독립적으로제 3부호화단위(820 82(¾, 8200, 820(1, 820이로분할될 수있다.일실시예에따라영상복호화장치 00)는제 2부호화단위중좌측 부호화단위( 0 는수평방향으로분할하여복수개의제 3부호화단위(820 82아5)를결정할수있고,우측부호화단위( 아5)는홀수개의제 3부호화 단위(820 820(1, 820句로분할할수있다.
[145] 일실시예에따라영상복호화장치(100)는제 3부호화단위들(820 82(¾, 8200, 820(1, 820이이소정의순서로처리될수있는지여부를판단하여홀수개로 2020/175967 1»(:1^1{2020/002924 분할된부호화단위가존재하는지를결정할수있다.도 8를참조하면,영상 복호화장치 (100)는제 1부호화단위 (800)를재귀적으로분할하여제 3부호화 단위 (820a, 820b, 820c, 820d, 820e)를결정할수있다.영상복호화장치 (100)는 블록형태정보및분할형태모드정보중적어도하나에기초하여,제 1부호화 단위 (800),제 2부호화단위 (810a, 810b)또는제 3부호화단위 (820a, 820b, 820c, 820d, 820e)가분할되는형태중홀수개의부호화단위로분할되는지여부를 결정할수있다.예를들면,제 2부호화단위 (8Wa, 810b)중우측에위치하는 부호화단위가홀수개의제 3부호화단위 (820c, 820d, 820e)로분할될수있다. 제 1부호화단위 (800)에포함되는복수개의부호화단위들이처리되는순서는 소정의순서 (예를들면, Z-스캔순서 (z-scan order)(830))가될수있고,영상복호화 장치 (100)는우측제 2부호화단위 (810b)가홀수개로분할되어결정된제 3 부호화단위 (820c, 820d, 820e)가상기소정의순서에따라처리될수있는조건을 만족하는지를판단할수있다.
[146] 일실시예에따라영상복호화장치 (100)는제 1부호화단위 (800)에포함되는 제 3부호화단위 (820a, 820b, 820c, 820d, 820e)가소정의순서에따라처리될수 있는조건을만족하는지를결정할수있으며,상기조건은제 3부호화단위 (820a, 820b, 820c, 820d, 820e)의경계에따라제 2부호화단위 (810a, 810b)의너비및 높이중적어도하나를반으로분할되는지여부와관련된다.예를들면
비-정사각형형태의좌측제 2부호화단위 (8 Wa)의높이를반으로분할하여 결정되는제 3부호화단위 (820a, 820b)는조건을만족할수있다.우측제 2부호화 단위 (810b)를 3개의부호화단위로분할하여결정되는제 3부호화단위 (820c, 820d, 820e)들의경계가우측제 2부호화단위 (810b)의너비또는높이를반으로 분할하지못하므로제 3부호화단위 (820c, 820d, 820e)는조건을만족하지못하는 것으로결정될수있다.영상복호화장치 (100)는이러한조건불만족의경우 스캔순서의단절 (disconnection)로판단하고,판단결과에기초하여우즉제 2 부호화단위 (8Wb)는홀수개의부호화단위로분할되는것으로결정할수있다. 일실시예에따라영상복호화장치 (100)는홀수개의부호화단위로분할되는 경우분할된부호화단위들중소정위치의부호화단위에대하여소정의제한을 둘수있으며,이러한제한내용또는소정위치등에대하여는다양한실시예를 통해상술하였으므로자세한설명은생략하도록한다.
[147] 도 9은일실시예에따라영상복호화장치 (100)가제 1부호화단위 (900)를
분할하여적어도하나의부호화단위를결정하는과정을도시한다.
[148] 일실시예에따라영상복호화장치 (100)는비트스트림획득부 (no)를통해
획득한분할형태모드정보에기초하여제 1부호화단위 (900)를분할할수있다. 정사각형형태의제 1부호화단위 (900)는 4개의정사각형형태를가지는부호화 단위로분할되거나또는비 -정사각형형태의복수개의부호화단위로분할할수 있다.예를들면도 9을참조하면,제 1부호화단위 (900)는정사각형이고분할 형태모드정보가비 -정사각형의부호화단위로분할됨을나타내는경우영상 2020/175967 1»(:1^1{2020/002924 복호화장치 (100)는제 1부호화단위 (900)를복수개의 비-정사각형의부호화 단위들로분할할수있다.구체적으로,분할형태모드정보가제 1부호화 단위 (900)를수평 방향또는수직방향으로분할하여홀수개의부호화단위를 결정하는것을나타내는경우,영상복호화장치 00)는정사각형 형태의 제 1 부호화단위 (900)를홀수개의부호화단위들로서수직 방향으로분할되어 결정된제 2부호화단위 (910 91아5, 910 또는수평 방향으로분할되어 결정된 제 2부호화단위 (920 92(¾, 920 로분할할수있다.
[149] 일실시예에 따라영상복호화장치 (100)는제 1부호화단위 (900)에포함되는 제 2부호화단위 (910 91(¾, 9100, 920 92(¾, 920이가소정의순서에 따라처리될 수있는조건을만족하는지를결정할수있으며,상기조건은제 2부호화 단위 (910 91(¾, 9100, 920 92(¾, 920이의 경계에따라제 1부호화단위 (900)의 너비 및높이중적어도하나를반으로분할되는지 여부와관련된다.도 9를 참조하면정사각형 형태의 제 1부호화단위 (900)를수직방향으로분할하여 결정되는제 2부호화단위 (910 91(¾, 910 들의 경계가제 1부호화단위 (900)의 너비를반으로분할하지못하므로제 1부호화단위 (900)는소정의순서에따라 처리될수있는조건을만족하지못하는것으로결정될수있다.또한정사각형 형태의 제 1부호화단위 (900)를수평방향으로분할하여결정되는제 2부호화 단위 (920 92(¾, 920 들의경계가제 1부호화단위 (900)의 너비를반으로 분할하지못하므로제 1부호화단위 (900)는소정의순서에따라처리될수있는 조건을만족하지못하는것으로결정될수있다.영상복호화장치 (100)는이러한 조건불만족의경우스캔순서의 단절 ((1 0111½(선011)로판단하고,판단결과에 기초하여 제 1부호화단위 (900)는홀수개의부호화단위로분할되는것으로 결정할수있다.일실시예에따라영상복호화장치 (100)는홀수개의부호화 단위로분할되는경우분할된부호화단위들중소정위치의부호화단위에 대하여소정의제한을둘수있으며,이러한제한내용또는소정위치등에 대하여는다양한실시예를통해상술하였으므로자세한설명은생략하도록 한다.
[150] 일실시예에 따라,영상복호화장치 00)는제 1부호화단위를분할하여
다양한형태의부호화단위들을결정할수있다.
[151] 도 9을참조하면,영상복호화장치 (100)는정사각형 형태의 제 1부호화
단위 (900),비-정사각형 형태의제 1부호화단위 (930또는 950)를다양한형태의 부호화단위들로분할할수있다.
[152] 도 은일실시예에 따라영상복호화장치 (100)가제 1부호화단위 (1000)가 분할되어 결정된비-정사각형 형태의제 2부호화단위가소정의조건을 만족하는경우제 2부호화단위가분할될수있는형태가제한되는것을 도시한다.
[153] 일실시예에 따라영상복호화장치 (100)는비트스트림 획득부 ( 0)를통해 획득한분할형태모드정보에 기초하여정사각형 형태의제 1부호화 2020/175967 1»(:1^1{2020/002924 단위 (1000)를비-정사각형형태의제 2부호화단위 (1010 101015, 1020 102(¾)로 분할하는것으로결정할수있다.제 2부호화단위 (1010 101(¾, 1020 102(¾)는 독립적으로분할될수있다.이에따라영상복호화장치 (100)는제 2부호화 단위 (1010 101015, 1020 102015)각각에관련된분할형태모드정보에기초하여 복수개의부호화단위로분할하거나분할하지않는것을결정할수있다.일 실시예에따라영상복호화장치 (100)는수직방향으로제 1부호화단위 (1000)가 분할되어결정된비-정사각형형태의좌측제 2부호화단위 (1이0幻를수평 방향으로분할하여제 3부호화단위 (1012 1012비를결정할수있다.다만영상 복호화장치 (100)는좌측제 2부호화단위 (1010 를수평방향으로분할한경우, 우측제 2부호화단위 (101아5)는좌측제 2부호화단위 (1010 가분할된방향과 동일하게수평방향으로분할될수없도록제한할수있다.만일우측제 2부호화 단위 (1010비가동일한방향으로분할되어제 3부호화단위 (1014 1014비가 결정된경우,좌측제 2부호화단위 (1010幻및우측제 2부호화단위 (101아5)가 수평방향으로각각독립적으로분할됨으로써제 3부호화단위 (1012 1215, 1014&, ^1415)가결정될수있다.하지만이는영상복호화장치 (100)가분할형태 모드정보에기초하여제 1부호화단위 (1000)를 4개의정사각형형태의제 2 부호화단위 (1030 103(¾, 10300, 1030(1)로분할한것과동일한결과이며이는 영상복호화측면에서비효율적일수있다.
[154] 일실시예에따라영상복호화장치 (100)는수평방향으로제 1부호화
단위 (1000)가분할되어결정된비-정사각형형태의제 2부호화단위 (102 또는 1020비를수직방향으로분할하여제 3부호화단위 (1022 1022江 \024a, 1024비를 결정할수있다.다만영상복호화장치 (100)는제 2부호화단위중하나 (예를 들면상단제 2부호화단위 (1020幻)를수직방향으로분할한경우,상술한이유에 따라다른제 2부호화단위 (예를들면하단부호화단위 (102015))는상단제 2 부호화단위 (1020幻가분할된방향과동일하게수직방향으로분할될수없도록 제한할수있다.
[155] 도 11은일실시예에따라분할형태모드정보가 4개의정사각형형태의
부호화단위로분할하는것을나타낼수없는경우,영상복호화장치 (100)가 정사각형형태의부호화단위를분할하는과정을도시한다.
[156] 일실시예에따라영상복호화장치 ( ^0)는분할형태모드정보에기초하여 제 1부호화단위 (1100)를분할하여제 2부호화단위 (1110 111아5, 1120 112아5 등)를결정할수있다.분할형태모드정보에는부호화단위가분할될수있는 다양한형태에대한정보가포함될수있으나,다양한형태에대한정보에는 정사각형형태의 4개의부호화단위로분할하기위한정보가포함될수없는 경우가있다.이러한분할형태모드정보에따르면,영상복호화장치 (100)는 정사각형형태의제 1부호화단위 ( 00)를 4개의정사각형형태의제 2부호화 단위 (1130 113(¾, 11300, 1130(1)로분할하지못한다.분할형태모드정보에 기초하여영상복호화장치 (100)는비-정사각형형태의제 2부호화단위 (111(切, 2020/175967 1»(:1^1{2020/002924
111015, 1120 112아5등)를결정할수있다.
[157] 일실시예에따라영상복호화장치 00)는비-정사각형형태의제 2부호화
단위 (1110 111015, n20a, 112아5등)를각각독립적으로분할할수있다.
재귀적인방법을통해제 2부호화단위 (1110 111아5, 1120 11201?등)각각이 소정의순서대로분할될수있으며 ,이는분할형태모드정보에기초하여제 1 부호화단위 ( 00)가분할되는방법에대응하는분할방법일수있다.
[158] 예를들면영상복호화장치 (100)는좌측제 2부호화단위 (1110幻가수평
방향으로분할되어정사각형형태의제 3부호화단위 (1112 1112비를결정할수 있고,우측제 2부호화단위 (1110비가수평방향으로분할되어정사각형형태의 제 3부호화단위 (1114 1114비를결정할수있다.나아가영상복호화
장치 (100)는좌측제 2부호화단위 (1110 및우측제 2부호화단위 (1110비모두 수평방향으로분할되어정사각형형태의제 3부호화단위 (1116 111해, 11160, 1116(1)를결정할수도있다.이러한경우제 1부호화단위 ( 00)가 4개의 정사각형형태의제 2부호화단위 (1130 113(¾, 11300, 1130(1)로분할된것과 동일한형태로부호화단위가결정될수있다.
[159] 또다른예를들면영상복호화장치 (100)는상단제 2부호화단위 (1120幻가
수직방향으로분할되어정사각형형태의제 3부호화단위 (1122 1122비를 결정할수있고,하단제 2부호화단위 (1120비가수직방향으로분할되어 정사각형형태의제 3부호화단위 (1124 1124비를결정할수있다.나아가영상 복호화장치 (100)는상단제 2부호화단위 (1120 및하단제 2부호화단위 (112015) 모두수직방향으로분할되어정사각형형태의제 3부호화단위 (1126 112해, 1126&, 1126비를결정할수도있다.이러한경우제 1부호화단위 (1100)가 4개의 정사각형형태의제 2부호화단위 (1130 113(¾, 11300, 1130(1)로분할된것과 동일한형태로부호화단위가결정될수있다.
[160] 도 12는일실시예에따라복수개의부호화단위들간의처리순서가부호화 단위의분할과정에따라달라질수있음을도시한것이다.
[161] 일실시예에따라영상복호화장치 00)는분할형태모드정보에기초하여 제 1부호화단위 (1200)를분할할수있다.블록형태가정사각형이고,분할형태 모드정보가제 1부호화단위 (1200)가수평방향및수직방향중적어도하나의 방향으로분할됨을나타내는경우,영상복호화장치 (100)는제 1부호화 단위 (1200)를분할하여제 2부호화단위 (예를들면, 1210 121(¾, 1220 122아5 등)를결정할수있다.도 12를참조하면제 1부호화단위 1200)가수평방향또는 수직방향만으로분할되어결정된비-정사각형형태의제 2부호화단위 (1 (切, 1 아5, 122(切, 122(¾)는각각에대한분할형태모드정보에기초하여독립적으로 분할될수있다.예를들면영상복호화장치 (100)는제 1부호화단위 (1200)가 수직방향으로분할되어생성된제 2부호화단위 (1 0 1 아5)를수평방향으로 각각분할하여제 3부호화단위 (1216 121해, 12160, 1216(1)를결정할수있고, 제 1부호화단위 (1200)가수평방향으로분할되어생성된제 2부호화단위 (1220 2020/175967 1»(:1^1{2020/002924
1220비를수평방향으로각각분할하여 제 3부호화단위 (1226 1226江 12260, 1226(1)를결정할수있다.이러한제 2부호화단위 (1210 121(¾, 1220 122(¾)의 분할과정은도 11과관련하여상술하였으므로자세한설명은생략하도록한다.
[162] 일실시예에 따라영상복호화장치 ( ^0)는소정의순서에따라부호화단위를 처리할수있다.소정의순서에따른부호화단위의 처리에 대한특징은도 7와 관련하여상술하였으므로자세한설명은생략하도록한다.도 12를참조하면 영상복호화장치 (100)는정사각형 형태의제 1부호화단위 (1200)를분할하여
4개의 정사각형 형태의 제 3부호화단위 (1216 1216江 1216。, 1216(1, 1226 122해, 12260, 1226(1)를결정할수있다.일실시예에 따라영상복호화
장치 (100)는제 1부호화단위 (1200)가분할되는형태에 따라제 3부호화 단위 (1216 12161), 12160, 1216(1, 1226 12261), 12260, 1226(¾의 처리순서를 결정할수있다.
[163] 일실시예에 따라영상복호화장치 ( ^0)는수직 방향으로분할되어 생성된제 2 부호화단위 (1 0 1 아5)를수평방향으로각각분할하여 제 3부호화
단위 (1216 121해, 12160, 1216(1)를결정할수있고,영상복호화장치 (100)는 좌측제 2부호화단위 (1210 에포함되는제 3부호화단위 (1216 1216 를수직 방향으로먼저처리한후,우측제 2부호화단위 (1 아5)에포함되는제 3부호화 단위 (121해, 1216(1)를수직 방향으로처리하는순서 (1217)에따라제 3부호화 단위 (1216 1216江 12160, 1216(1)를처리할수있다.
[164] 일실시예에 따라영상복호화장치 ( ^0)는수평 방향으로분할되어 생성된제 2 부호화단위 (1220 122아5)를수직방향으로각각분할하여 제 3부호화
단위 (1226 122해, 12260, 1226(1)를결정할수있고,영상복호화장치 (100)는 상단제 2부호화단위 (1220幻에포함되는제 3부호화단위 (1226 1226비를수평 방향으로먼저처리한후,하단제 2부호화단위 (122015)에포함되는제 3부호화 단위 (122此, 1226(1)를수평 방향으로처리하는순서 (1227)에따라제 3부호화 단위 (1226 1226江 12260, 1226(1)를처리할수있다.
[165] 도 12를참조하면,제 2부호화단위 (1210 121(¾, 1220 122(¾)가각각
분할되어 정사각형 형태의 제 3부호화단위 (1216 1216江 12160, 1216(1, 1226 122해, 12260, 1226(1)가결정될수있다.수직방향으로분할되어결정된제 2 부호화단위 (1210 121015)및수평 방향으로분할되어 결정된제 2부호화 단위 (1220 122015)는서로다른형태로분할된것이지만,이후에 결정되는제 3 부호화단위 (1216 12161), 12160, 1216(1, 1226 12261), 12260, 1226(¾에따르면 결국동일한형태의부호화단위들로제 1부호화단위 (1200)가분할된결과가 된다.이에 따라영상복호화장치 (100)는분할형태모드정보에기초하여 상이한과정을통해 재귀적으로부호화단위를분할함으로써 결과적으로 동일한형태의부호화단위들을결정하더라도,동일한형태로결정된복수개의 부호화단위들을서로다른순서로처리할수있다.
[166] 도 13은일실시예에 따라부호화단위가재귀적으로분할되어복수개의 2020/175967 1»(:1^1{2020/002924 부호화단위가결정되는경우,부호화단위의형태및크기가변함에따라 부호화단위의심도가결정되는과정을도시한다.
[167] 일실시예에따라영상복호화장치 (100)는부호화단위의심도를소정의
기준에 따라 결정할 수 있다. 예를 들면 소정의 기준은 부호화 단위의 긴 변의 길이가 될 수 있다. 영상 복호화 장치(100)는 현재 부호화 단위의 긴 변의 길이가 분할되기 전의 부호화 단위의 긴 변의 길이보다 2^n(n>0)배로 분할된 경우, 현재 부호화 단위의 심도는 분할되기 전의 부호화 단위의 심도보다 n만큼 심도가 증가된 것으로 결정할 수 있다. 이하에서는 심도가 증가된 부호화 단위를 하위 심도의 부호화 단위로 표현하도록 한다.
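The depth rule of paragraph [167] can be illustrated with a short, non-normative sketch: when the longer side of a coding unit becomes 1/2^n of the longer side of the unit it was split from, the depth increases by n. The helper name below is hypothetical and not part of this disclosure.

```python
import math

def depth_increase(parent_long_side: int, child_long_side: int) -> int:
    """Return n such that child_long_side == parent_long_side / 2**n."""
    return int(math.log2(parent_long_side // child_long_side))

# 2Nx2N -> NxN halves the long side (depth D -> D+1);
# 2Nx2N -> N/2xN/2 quarters it (depth D -> D+2), as in Fig. 13.
assert depth_increase(64, 32) == 1
assert depth_increase(64, 16) == 2
```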
[168] 도 13을참조하면,일실시예에따라정사각형형태임을나타내는블록형태 정보 (예를들면블록형태정보는 '0: ’를나타낼수있음)에기초하여 영상복호화장치 (100)는정사각형형태인제 1부호화단위 (1300)를분할하여 하위심도의제 2부호화단위 (1302),제 3부호화단위 (1304)등을결정할수있다. 정사각형형태의제 1부호화단위 (1300)의크기를 2Nx2N이라고한다면,제 1 부호화단위 (1300)의너비및높이를 1/2배로분할하여결정된제 2부호화 단위 (1302)는 NxN의크기를가질수있다.나아가제 2부호화단위 (1302)의너비 및높이를 1/2크기로분할하여결정된제 3부호화단위 (1304)는 N/2xN/2의 크기를가질수있다.이경우제 3부호화단위 (1304)의너비및높이는제 1 부호화단위 (1300)의 배에해당한다.제 1부호화단위 (1300)의심도가: 0인 경우제 1부호화단위 (1300)의너비및높이의 1/2배인제 2부호화단위 (1302)의 심도는 0+1일수있고,제 1부호화단위 (1300)의너비및높이의 1/4배인제 3 부호화단위 (1304)의심도는 0+2일수있다.
[169] 일실시예에따라비-정사각형형태를나타내는블록형태정보 (예를들면블록 형태정보는,높이가너비보다긴비-정사각형임을나타내는 1: NS_VER '또는 너비가높이보다긴비-정사각형임을나타내는 '2: NS_HOR’를나타낼수있음)에 기초하여,영상복호화장치 (100)는비-정사각형형태인제 1부호화단위 (1310 또는 1320)를분할하여하위심도의제 2부호화단위 (1312또는 1322),제 3 부호화단위 (1314또는 1324)등을결정할수있다.
[17이 영상복호화장치 (100)는 Nx2N크기의제 1부호화단위 (1310)의너비및높이 중적어도하나를분할하여제 2부호화단위 (예를들면, 1302, 1312, 1322등)를 결정할수있다.즉,영상복호화장치 (100)는제 1부호화단위 (1310)를수평 방향으로분할하여 NxN크기의제 2부호화단위 (1302)또는 NxN/2크기의제 2 부호화단위 (1322)를결정할수있고,수평방향및수직방향으로분할하여 N/2xN크기의제 2부호화단위 (1312)를결정할수도있다.
[171] 일실시예에따라영상복호화장치 (100)는 2NxN크기의제 1부호화
단위 (1320)의너비및높이중적어도하나를분할하여제 2부호화단위 (예를 들면, 1302, 1312, 1322등)를결정할수도있다.즉,영상복호화장치 (100)는제 1 부호화단위 (1320)를수직방향으로분할하여 NxN크기의제 2부호화 2020/175967 1»(:1^1{2020/002924 단위 (1302)또는 N/2xN크기의제 2부호화단위 (1312)를결정할수있고,수평 방향및수직방향으로분할하여 NxN/2크기의제 2부호화단위 (1322)를결정할 수도있다.
[172] 일실시예에따라영상복호화장치 (100)는 NxN크기의제 2부호화단위 (1302) 의너비및높이중적어도하나를분할하여제 3부호화단위 (예를들면, 1304, 1314, 1324등)를결정할수도있다.즉,영상복호화장치 (100)는제 2부호화 단위 (1302)를수직방향및수평방향으로분할하여 N/2xN/2크기의제 3부호화 단위 (1304)를결정하거나 4x^2크기의제 3부호화단위 (1314)를결정하거나 ^2x^4크기의제 3부호화단위 (1324)를결정할수있다.
[173] 일실시예에따라영상복호화장치 (100)는 N/2xN크기의제 2부호화
단위 (1312)의너비및높이중적어도하나를분할하여제 3부호화단위 (예를 들면, 1304, 1314, 1324등)를결정할수도있다.즉,영상복호화장치 (100)는제 2 부호화단위 (1312)를수평방향으로분할하여 N/2xN/2크기의제 3부호화 단위 (1304)또는 ^2x^4크기의제 3부호화단위 (1324)를결정하거나수직방향 및수평방향으로분할하여 ^4x^2크기의제 3부호화단위 (1314)를결정할수 있다.
[174] 일실시예에따라영상복호화장치 (100)는 NxN/2크기의제 2부호화
단위 (1322)의너비및높이중적어도하나를분할하여제 3부호화단위 (예를 들면, 1304, 1314, 1324등)를결정할수도있다.즉,영상복호화장치 (100)는제 2 부호화단위 (1322)를수직방향으로분할하여 N/2xN/2크기의제 3부호화 단위 (1304)또는 ^4x^2크기의제 3부호화단위 (1314)를결정하거나수직방향 및수평방향으로분할하여 N/2xN/4크기의제 3부호화단위 (1324)를결정할수 있다.
[175] 일실시예에따라영상복호화장치 00)는정사각형형태의부호화단위 (예를 들면, 1300, 1302, 1304)를수평방향또는수직방향으로분할할수있다.예를 들면, 2Nx2N크기의제 1부호화단위 (1300)를수직방향으로분할하여 Nx2N 크기의제 1부호화단위 (1310)를결정하거나수평방향으로분할하여 2NxN 크기의제 1부호화단위 (1320)를결정할수있다.일실시예에따라심도가 부호화단위의가장긴변의길이에기초하여결정되는경우, 2Nx2N크기의제 1 부호화단위 (1300)가수평방향또는수직방향으로분할되어결정되는부호화 단위의심도는제 1부호화단위 (1300)의심도와동일할수있다.
[176] 일실시예에따라제 3부호화단위 (1314또는 1324)의너비및높이는제 1
부호화단위 (1310또는 1320)의 1/4배에해당할수있다.제 1부호화단위 (1310 또는 1320)의심도가 : 0인경우제 1부호화단위 (1310또는 1320)의너비및 높이의 1/2배인제 2부호화단위 (1312또는 1322)의심도는아1일수있고,제 1 부호화단위 (1310또는 1320)의너비및높이의 1/4배인제 3부호화단위 (1314 또는 1324)의심도는 0+2일수있다.
[177] 도 14은 일 실시예에 따라 부호화 단위들의 형태 및 크기에 따라 결정될 수 있는 심도 및 부호화 단위 구분을 위한 인덱스(PID: Part Index, 이하 PID)를 도시한다.
[178] 일실시예에따라영상복호화장치 ( 0)는정사각형형태의제 1부호화
단위 (1400)를분할하여다양한형태의제 2부호화단위를결정할수있다.도
14를참조하면,영상복호화장치 (100)는분할형태모드정보에따라제 1부호화 단위 (1400)를수직방향및수평방향중적어도하나의방향으로분할하여제 2 부호화단위 (1402 14021), 1404 14041), 1406 14061), 14060, 1406(¾를결정할수 있다.즉,영상복호화장치 (100)는제 1부호화단위 (1400)에대한분할형태모드 정보에기초하여제 2부호화단위 (1402 14021), 1404 14041), 1406 14061), 14060, 1406(1)를결정할수있다.
[179] 일실시예에따라정사각형형태의제 1부호화단위 (1400)에대한분할형태 모드정보에따라결정되는제 2부호화단위 (1402 1402江 1404 1404江 1406山 140해, 14060, 1406(1)는긴변의길이에기초하여심도가결정될수있다.예를 들면,정사각형형태의제 1부호화단위 (1400)의한변의길이와비-정사각형 형태의제 2부호화단위 (1402 1402江 1404 1404비의긴변의길이가
동일하므로,제 1부호화단위 (1400)와비-정사각형형태의제 2부호화
단위 (1402 1402江 1404 1404비의심도는 I)로동일하다고볼수있다.이에 반해영상복호화장치 (100)가분할형태모드정보에기초하여제 1부호화 단위 (1400)를 4개의정사각형형태의제 2부호화단위 (1406 1406江 14060, 1406(1)로분할한경우,정사각형형태의제 2부호화단위 (1406 1406江 14060, 1406(1)의한변의길이는제 1부호화단위 (1400)의한변의길이의 1/2배이므로, 제 2부호화단위 (1406 1406江 14060, 1406(1)의심도는제 1부호화단위 (1400)의 심도인 I)보다한심도하위인 0+1의심도일수있다.
[18이 일실시예에따라영상복호화장치 ( ^0)는높이가너비보다긴형태의제 1 부호화단위 (1410)를분할형태모드정보에따라수평방향으로분할하여 복수개의제 2부호화단위 (1412 141¾, 1414 141此, 1414이로분할할수있다. 일실시예에따라영상복호화장치 ( ^0)는너비가높이보다긴형태의제 1 부호화단위 (1420)를분할형태모드정보에따라수직방향으로분할하여 복수개의제 2부호화단위 (1422 1422江 1424 1424江 1424 로분할할수있다.
[181] 일실시예에따라비-정사각형형태의제 1부호화단위 (1410또는 1420)에대한 분할형태모드정보에따라결정되는제 2부호화단위 (1412 141¾, 1414 14141), 14140. 1422 14221), 1424 14241), 1424 는긴변의길이에기초하여 심도가결정될수있다.예를들면,정사각형형태의제 2부호화단위 (1412 141¾)의한변의길이는높이가너비보다긴비-정사각형형태의제 1부호화 단위 (1410)의한변의길이의 1/2배이므로,정사각형형태의제 2부호화 단위 (1412 1412비의심도는비-정사각형형태의제 1부호화단위 (1410)의심도 I)보다한심도하위의심도인 0+1이다.
[182] 나아가영상복호화장치 00)가분할형태모드정보에기초하여비 -정사각형 형태의제 1부호화단위 (1410)를홀수개의제 2부호화단위 (1414 1414江 2020/175967 1»(:1^1{2020/002924
1414이로분할할수있다.홀수개의제 2부호화단위 (1414 141415, 1414 는 비-정사각형형태의제 2부호화단위 (1414 14140)및정사각형형태의제 2 부호화단위 (1414비를포함할수있다.이경우비-정사각형형태의제 2부호화 단위 (1414 1414 의긴변의길이및정사각형형태의제 2부호화
단위 (1414비의한변의길이는제 1부호화단위 (1410)의한변의길이의 1/2배 이므로,제 2부호화단위 (1414 1414江 1414 의심도는제 1부호화단위 (1410)의 심도인 I)보다한심도하위인 0+1의심도일수있다.영상복호화장치 (100)는 제 1부호화단위 (1410)와관련된부호화단위들의심도를결정하는상기방식에 대응하는방식으로,너비가높이보다긴비-정사각형형태의제 1부호화 단위 (1420)와관련된부호화단위들의심도를결정할수있다.
[183] 일실시예에따라영상복호화장치 (100)는분할된부호화단위들의구분을 위한인덱스에1))를결정함에 있어서,홀수개로분할된부호화단위들이서로 동일한크기가아닌경우,부호화단위들간의크기비율에기초하여인덱스를 결정할수있다.도 14를참조하면,홀수개로분할된부호화단위들 (1414 1414江 14140)중가운데에위치하는부호화단위 (1414비는다른부호화단위들 (1414 1414이와너비는동일하지만높이가다른부호화단위들 (1414 1414이의높이의 두배일수있다.즉,이경우가운데에위치하는부호화단위 (1414비는다른 부호화단위들 (1414 1414이의두개를포함할수있다.따라서,스캔순서에 따라가운데에위치하는부호화단위 (1414비의인덱스 )가 1이라면그다음 순서에위치하는부호화단위 (1414이는인덱스가 2가증가한 3일수있다.즉 인덱스의값의불연속성이존재할수있다.일실시예에따라영상복호화 장치 (100)는이러한분할된부호화단위들간의구분을위한인덱스의 불연속성의존재여부에기초하여홀수개로분할된부호화단위들이서로 동일한크기가아닌지여부를결정할수있다.
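The PID discontinuity described in paragraph [183] can be sketched as follows, assuming an odd (ternary) split whose middle sub-unit is twice the height of the outer ones; the function is an illustrative placeholder, not the normative index derivation.

```python
def assign_pids(heights):
    """Assign a PID to each sub-unit scanned in order; a unit that is twice as
    tall as the smallest one consumes two PID slots, producing the index jump
    described above (illustrative only)."""
    unit = min(heights)
    pids, next_pid = [], 0
    for h in heights:
        pids.append(next_pid)
        next_pid += h // unit
    return pids

# Ternary split of a tall block: the middle unit is twice the height of the
# outer ones, so the PIDs are 0, 1, 3 and the value 2 is skipped.
print(assign_pids([16, 32, 16]))   # -> [0, 1, 3]
```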
[184] 일실시예에따라영상복호화장치 (100)는현재부호화단위로부터분할되어 결정된복수개의부호화단위들을구분하기위한인덱스의값에기초하여특정 분할형태로분할된것인지를결정할수있다.도 14를참조하면영상복호화 장치 00)는높이가너비보다긴직사각형형태의제 1부호화단위 (1410)를 분할하여짝수개의부호화단위 (1412 1412비를결정하거나홀수개의부호화 단위 (1414 1414江 1414 를결정할수있다.영상복호화장치 (100)는복수개의 부호화단위각각을구분하기위하여각부호화단위를나타내는인덱스 (1^10)를 이용할수있다.일실시예에따라 。는각각의부호화단위의소정위치의 샘플 (예를들면,좌측상단샘플)에서획득될수있다.
[185] 일실시예에따라영상복호화장치 (100)는부호화단위의구분을위한
인덱스를이용하여분할되어결정된부호화단위들중소정위치의부호화 단위를결정할수있다.일실시예에따라높이가너비보다긴직사각형형태의 제 1부호화단위 (1410)에대한분할형태모드정보가 3개의부호화단위로 분할됨을나타내는경우영상복호화장치 (100)는제 1부호화단위 (1410)를 2020/175967 1»(:1^1{2020/002924
3개의부호화단위 (1414 1414江 1414 로분할할수있다.영상복호화 장치 (100)는 3개의부호화단위 (1414 1414江 14140)각각에대한인덱스를 할당할수있다.영상복호화장치 (100)는홀수개로분할된부호화단위중 가운데부호화단위를결정하기위하여각부호화단위에대한인덱스를비교할 수있다.영상복호화장치 (100)는부호화단위들의인덱스에기초하여인덱스들 중가운데값에해당하는인덱스를갖는부호화단위 (1414비를,제 1부호화 단위 (1410)가분할되어결정된부호화단위중가운데위치의부호화단위로서 결정할수있다.일실시예에따라영상복호화장치 (100)는분할된부호화 단위들의구분을위한인덱스를결정함에 있어서,부호화단위들이서로동일한 크기가아닌경우,부호화단위들간의크기비율에기초하여인덱스를결정할 수있다.도 14를참조하면,제 1부호화단위 (1410)가분할되어생성된부호화 단위 (1414비는다른부호화단위들 (1414 1414이와너비는동일하지만높이가 다른부호화단위들 (1414 1414이의높이의두배일수있다.이경우가운데에 위치하는부호화단위 (1414비의인덱스 (1^10)가 1이라면그다음순서에위치하는 부호화단위 (1414이는인덱스가 2가증가한 3일수있다.이러한경우처럼 균일하게인덱스가증가하다가증가폭이달라지는경우,영상복호화
장치 (100)는다른부호화단위들과다른크기를가지는부호화단위를포함하는 복수개의부호화단위로분할된것으로결정할수있다,일실시예에따라분할 형태모드정보가홀수개의부호화단위로분할됨을나타내는경우,영상복호화 장치 (100)는홀수개의부호화단위중소정위치의부호화단위 (예를들면 가운데부호화단위)가다른부호화단위와크기가다른형태로현재부호화 단위를분할할수있다.이경우영상복호화장치 (100)는부호화단위에대한 인덱스에이를이용하여다른크기를가지는가운데부호화단위를결정할수 있다.다만상술한인덱스,결정하고자하는소정위치의부호화단위의크기 또는위치는일실시예를설명하기위해특정한것이므로이에한정하여 해석되어서는안되며,다양한인덱스,부호화단위의위치및크기가이용될수 있는것으로해석되어야한다.
[186] 일실시예에따라영상복호화장치 (100)는부호화단위의재귀적인분할이 시작되는소정의데이터단위를이용할수있다.
[187] 도 15는일실시예에따라픽쳐에포함되는복수개의소정의데이터단위에 따라복수개의부호화단위들이결정된것을도시한다.
[188] 일실시예에따라소정의데이터단위는부호화단위가분할형태모드정보를 이용하여재귀적으로분할되기시작하는데이터단위로정의될수있다.즉, 현재픽쳐를분할하는복수개의부호화단위들이결정되는과정에서이용되는 최상위심도의부호화단위에해당할수있다.이하에서는설명상편의를위해 이러한소정의데이터단위를기준데이터단위라고지칭하도록한다.
[189] 일실시예에따라기준데이터단위는소정의크기및형태를나타낼수있다. 일실시예에따라,기준데이터단위는 MxN의샘플들을포함할수있다.여기서 2020/175967 1»(:1^1{2020/002924
M및 N은서로동일할수도있으며, 2의승수로표현되는정수일수있다.즉, 기준데이터단위는정사각형또는비-정사각형의형태를나타낼수있으며, 이후에정수개의부호화단위로분할될수있다.
[19이 일실시예에따라영상복호화장치 (W0)는현재픽쳐를복수개의기준데이터 단위로분할할수있다.일실시예에따라영상복호화장치 (100)는현재픽쳐를 분할하는복수개의기준데이터단위를각각의기준데이터단위에대한분할 형태모드정보를이용하여분할할수있다.이러한기준데이터단위의분할 과정은쿼드트리 (quad-tree)구조를이용한분할과정에대응될수있다.
[191] 일실시예에따라영상복호화장치 (W0)는현재픽쳐에포함되는기준데이터 단위가가질수있는최소크기를미리결정할수있다.이에따라,영상복호화 장치 (W0)는최소크기이상의크기를갖는다양한크기의기준데이터단위를 결정할수있고,결정된기준데이터단위를기준으로분할형태모드정보를 이용하여적어도하나의부호화단위를결정할수있다.
[192] 도 15를참조하면,영상복호화장치 (100)는정사각형형태의기준부호화
단위 (1500)를이용할수있고,또는비-정사각형형태의기준부호화
단위 (1502)를이용할수도있다.일실시예에따라기준부호화단위의형태및 크기는적어도하나의기준부호화단위를포함할수있는다양한데이터 단위 (예를들면,시퀀스 (sequence),픽쳐 (picture),슬라이스 (slice),슬라이스 세그먼트 (slice segment),타일 (tile),타일그룹 (tile group),최대부호화단위등)에 따라결정될수있다.
[193] 일실시예에따라영상복호화장치 (100)의비트스트림획득부 (110)는기준 부호화단위의형태에대한정보및기준부호화단위의크기에대한정보중 적어도하나를상기다양한데이터단위마다비트스트림으로부터획득할수 있다.정사각형형태의기준부호화단위 (1500)에포함되는적어도하나의 부호화단위가결정되는과정은도 3의현재부호화단위 (300)가분할되는 과정을통해상술하였고,비-정사각형형태의기준부호화단위 (1502)에 포함되는적어도하나의부호화단위가결정되는과정은도 4의현재부호화 단위 (400또는 450)가분할되는과정을통해상술하였으므로자세한설명은 생략하도록한다.
[194] 일실시예에따라영상복호화장치 (100)는소정의조건에기초하여미리
결정되는일부데이터단위에따라기준부호화단위의크기및형태를결정하기 위하여,기준부호화단위의크기및형태를식별하기위한인덱스를이용할수 있다.즉,비트스트림획득부 (no)는비트스트림으로부터상기다양한데이터 단위 (예를들면,시퀀스,픽쳐,슬라이스,슬라이스세그먼트,타일,타일그룹, 최대부호화단위등)중소정의조건 (예를들면슬라이스이하의크기를갖는 데이터단위)을만족하는데이터단위로서슬라이스,슬라이스세그먼트,타일, 타일그룹,최대부호화단위등마다,기준부호화단위의크기및형태의식별을 위한인덱스만을획득할수있다.영상복호화장치 (100)는인덱스를 2020/175967 1»(:1^1{2020/002924 이용함으로써상기소정의조건을만족하는데이터단위마다기준데이터 단위의크기 및형태를결정할수있다.기준부호화단위의 형태에 대한정보및 기준부호화단위의크기에 대한정보를상대적으로작은크기의 데이터 단위마다비트스트림으로부터 획득하여 이용하는경우,비트스트림의 이용 효율이좋지 않을수있으므로,기준부호화단위의 형태에 대한정보및기준 부호화단위의크기에 대한정보를직접 획득하는대신상기 인덱스만을 획득하여 이용할수있다.이경우기준부호화단위의크기 및형태를나타내는 인덱스에 대응하는기준부호화단위의크기 및형태중적어도하나는미리 결정되어 있을수있다.즉,영상복호화장치 (100)는미리결정된기준부호화 단위의크기 및형태중적어도하나를인덱스에따라선택함으로써,인덱스 획득의 기준이되는데이터단위에포함되는기준부호화단위의크기 및형태 중적어도하나를결정할수있다.
[195] 일실시예에 따라영상복호화장치 (W0)는하나의 최대부호화단위에
포함하는적어도하나의 기준부호화단위를이용할수있다.즉,영상을 분할하는최대부호화단위에는적어도하나의기준부호화단위가포함될수 있고,각각의 기준부호화단위의 재귀적인분할과정을통해부호화단위가 결정될수있다.일실시예에따라최대부호화단위의너비 및높이중적어도 하나는기준부호화단위의너비 및높이중적어도하나의정수배에해당할수 있다.일실시예에따라기준부호화단위의크기는최대부호화단위를쿼드 트리구조에따라 n번분할한크기일수있다.즉,영상복호화장치 (100)는최대 부호화단위를쿼드트리구조에 따라 n번분할하여 기준부호화단위를결정할 수있고,다양한실시예들에 따라기준부호화단위를블록형태정보및분할 형태모드정보중적어도하나에기초하여분할할수있다.
[196] 일실시예에 따라영상복호화장치 (W0)는현재부호화단위의 형태를
나타내는블록형태정보또는현재부호화단위를분할하는방법을나타내는 분할형태모드정보를비트스트림으로부터 획득하여 이용할수있다.분할형태 모드정보는다양한데이터단위와관련된비트스트림에포함될수있다.예를 들면,영상복호화장치 (100)는시퀀스파라미터세트 (sequence parameter set), 픽쳐 파라미터세트 (picture parameter set),비디오파라미터세트 (video parameter set),슬라이스헤더 (slice header),슬라이스세그먼트헤더 (slice segment header), 타일헤더 (tile header),타일그룹헤더 (tile group header)에포함된분할형태모드 정보를이용할수있다.나아가,영상복호화장치 (100)는최대부호화단위,기준 부호화단위,프로세싱블록마다비트스트림으로부터블록형태정보또는분할 형태모드정보에 대응하는신택스엘리먼트를비트스트림으로부터 획득하여 이용할수있다.
[197] 이하본개시의 일실시예에따른분할규칙을결정하는방법에 대하여자세히 설명한다.
[198] 영상복호화장치 (W0)는영상의분할규칙을결정할수있다.분할규칙은영상 2020/175967 1»(:1^1{2020/002924 복호화장치 (W0)및영상부호화장치 (200)사이에미리 결정되어 있을수있다. 영상복호화장치 (100)는비트스트림으로부터 획득된정보에 기초하여 영상의 분할규칙을결정할수있다.영상복호화장치 (100)는시퀀스파라미터 세트 (sequence parameter set),픽쳐파라미터 세트 (picture parameter set),비디오 파라미터 세트 (video parameter set),슬라이스헤더 (slice header),슬라이스 세그먼트헤더 (slice segment header),타일헤더 (tile header),타일그룹헤더 (tile group header)중적어도하나로부터 획득된정보에기초하여분할규칙을결정할 수있다.영상복호화장치 (100)는분할규칙을프레임,슬라이스,타일,템포럴 레이어 (Temporal layer),최대부호화단위또는부호화단위에 따라다르게 결정할수있다.
[199] 영상복호화장치 (100)는부호화단위의블록형태에 기초하여분할규칙을 결정할수있다.블록형태는부호화단위의크기,모양,너비 및높이의 비율, 방향을포함할수있다.영상부호화장치 (200)및영상복호화장치 (100)는 부호화단위의블록형태에기초하여분할규칙을결정할것을미리결정할수 있다.하지만이에 한정되는것은아니다.영상복호화장치 (100)는영상부호화 장치 (200)로부터수신된비트스트림으로부터 획득된정보에기초하여,분할 규직을결정할수있다.
[200] 부호화단위의모양은정사각형 (square)및비 -정사각형 (non-square)을포함할 수있다.부호화단위의 너비 및높이의길이가같은경우,영상복호화
장치 (100)는부호화단위의모양을정사각형으로결정할수있다.또한, .부호화 단위의 너비 및높이의길이가같지 않은경우,영상복호화장치 (100)는부호화 단위의모양을비-정사각형으로결정할수있다.
[201] 부호화단위의크기는 4x4, 8x4, 4x8, 8x8, 16x4, 16x8, ... , 256x256의다양한 크기를포함할수있다.부호화단위의크기는부호화단위의 긴변의 길이,짧은 변의 길이또는넓이에따라분류될수있다.영상복호화장치 (100)는동일한 그룹으로분류된부호화단위에동일한분할규칙을적용할수있다.예를들어 영상복호화장치 (100)는동일한긴변의길이를가지는부호화단위를동일한 크기로분류할수있다.또한영상복호화장치 (100)는동일한긴변의길이를 가지는부호화단위에 대하여동일한분할규칙을적용할수있다.
[202] 부호화단위의 너비 및높이의비율은 1 :2, 2: 1, 1 :4, 4: 1, 1 :8, 8: 1, 1 : 16, 16: 1, 32: 1 또는 1 :32등을포함할수있다.또한,부호화단위의 방향은수평 방향및수직 방향을포함할수있다.수평방향은부호화단위의 너비의길이가높이의 길이보다긴경우를나타낼수있다.수직방향은부호화단위의 너비의길이가 높이의 길이보다짧은경우를나타낼수있다.
[203] 영상복호화장치 (100)는부호화단위의크기에기초하여분할규칙을
적응적으로결정할수있다.영상복호화장치 (100)는부호화단위의크기에 기초하여 허용가능한분할형태모드를다르게결정할수있다.예를들어,영상 복호화장치 (100)는부호화단위의크기에기초하여분할이허용되는지 여부를 2020/175967 1»(:1^1{2020/002924 결정할수있다.영상복호화장치 (100)는부호화단위의크기에 따라분할 방향을결정할수있다.영상복호화장치 (100)는부호화단위의크기에따라 허용가능한분할타입을결정할수있다.
[204] 부호화단위의크기에기초하여분할규칙을결정하는것은영상부호화
장치 (200)및 영상복호화장치 (100)사이에 미리결정된분할규칙일수있다. 또한,영상복호화장치 (100)는비트스트림으로부터 획득된정보에 기초하여 , 분할규칙을결정할수있다.
[205] 영상복호화장치 ( 0)는부호화단위의 위치에기초하여분할규칙을
적응적으로결정할수있다.영상복호화장치 (100)는부호화단위가영상에서 차지하는위치에기초하여분할규칙을적응적으로결정할수있다.
[206] 또한,영상복호화장치 (100)는서로다른분할경로로생성된부호화단위가 동일한블록형태를가지지 않도록분할규칙을결정할수있다.다만이에 한정되는것은아니며서로다른분할경로로생성된부호화단위는동일한블록 형태를가질수있다.서로다른분할경로로생성된부호화단위들은서로다른 복호화처리순서를가질수있다.복호화처리순서에 대해서는도 12와함께 설명하였으므로자세한설명은생략한다.
[207] 도 16은일실시예에 따라부호화단위가분할될수있는형태의조합이
픽쳐마다서로다른경우,각각의픽쳐마다결정될수있는부호화단위들을 도시한다.
[208] 도 16을참조하면,영상복호화장치 (100)는픽쳐마다부호화단위가분할될수 있는분할형태들의조합을다르게결정할수있다.예를들면,영상복호화 장치 (100)는영상에포함되는적어도하나의픽쳐들중 4개의부호화단위로 분할될수있는픽쳐 (1600), 2개또는 4개의부호화단위로분할될수있는 픽쳐 (1610)및 2개, 3개또는 4개의부호화단위로분할될수있는픽쳐 (1620)를 이용하여 영상을복호화할수있다.영상복호화장치 (100)는픽쳐 (1600)를 복수개의부호화단위로분할하기위하여, 4개의 정사각형의부호화단위로 분할됨을나타내는분할형태정보만을이용할수있다.영상복호화장치 (100)는 픽쳐 (1610)를분할하기 위하여, 2개또는 4개의부호화단위로분할됨을 나타내는분할형태정보만을이용할수있다.영상복호화장치 (100)는 픽쳐 (1620)를분할하기 위하여, 2개, 3개또는 4개의부호화단위로분할됨을 나타내는분할형태정보만을이용할수있다.상술한분할형태의조합은영상 복호화장치 (100)의동작을설명하기 위한실시예에불과하므로상술한분할 형태의조합은상기실시예에 한정하여 해석되어서는안되며소정의 데이터 단위마다다양한형태의분할형태의조합이 이용될수있는것으로해석되어야 한다.
[209] 일실시예에 따라영상복호화장치 (100)의비트스트림 획득부 (110)는분할 형태정보의조합을나타내는인덱스를포함하는비트스트림을소정의 데이터 단위 단위 (예를들면,시퀀스,픽쳐,슬라이스,슬라이스세그먼트,타일또는 2020/175967 1»(:1^1{2020/002924 타일그룹등)마다획득할수있다.예를들면,비트스트림획득부 (H0)는시퀀스 파라미터세트 (Sequence Parameter Set),픽쳐파라미터세트 (Picture Parameter Set),슬라이스헤더 (Slice Header),타일헤더 (tile header)또는타일그룹헤더 (tile group header)에서분할형태정보의조합을나타내는인덱스를획득할수있다. 영상복호화장치 (100)의영상복호화장치 (100)는획득한인덱스를이용하여 소정의데이터단위마다부호화단위가분할될수있는분할형태의조합을 결정할수있으며 ,이에따라소정의데이터단위마다서로다른분할형태의 조합을이용할수있다.
[210] 도 17은일실시예에따라바이너리 (binary)코드로표현될수있는분할형태 모드정보에기초하여결정될수있는부호화단위의다양한형태를도시한다.
[211] 일실시예에따라영상복호화장치 (100)는비트스트림획득부 (110)를통해
획득한블록형태정보및분할형태모드정보를이용하여부호화단위를 다양한형태로분할할수있다.분할될수있는부호화단위의형태는상술한 실시예들을통해설명한형태들을포함하는다양한형태에해당할수있다.
[212] 도 17을참조하면,영상복호화장치 (100)는분할형태모드정보에기초하여 정사각형형태의부호화단위를수평방향및수직방향중적어도하나의 방향으로분할할수있고,비-정사각형형태의부호화단위를수평방향또는 수직방향으로분할할수있다.
[213] 일실시예에따라영상복호화장치 (W0)가정사각형형태의부호화단위를
수평방향및수직방향으로분할하여 4개의정사각형의부호화단위로분할할 수있는경우,정사각형의부호화단위에대한분할형태모드정보가나타낼수 있는분할형태는 4가지일수있다.일실시예에따라분할형태모드정보는
2자리의바이너리코드로써표현될수있으며,각각의분할형태마다바이너리 코드가할당될수있다.예를들면부호화단위가분할되지않는경우분할형태 모드정보는 (00)b로표현될수있고,부호화단위가수평방향및수직방향으로 분할되는경우분할형태모드정보는 (01)b로표현될수있고,부호화단위가 수평방향으로분할되는경우분할형태모드정보는 (10)b로표현될수있고 부호화단위가수직방향으로분할되는경우분할형태모드정보는 (l l)b로 표현될수있다.
[214] 일실시예에따라영상복호화장치 (W0)는비-정사각형형태의부호화단위를 수평방향또는수직방향으로분할하는경우분할형태모드정보가나타낼수 있는분할형태의종류는몇개의부호화단위로분할하는지에따라결정될수 있다.도 17을참조하면,영상복호화장치 (100)는일실시예에따라비-정사각형 형태의부호화단위를 3개까지분할할수있다.영상복호화장치 (100)는부호화 단위를두개의부호화단위로분할할수있으며,이경우분할형태모드정보는 (W)b로표현될수있다.영상복호화장치 (100)는부호화단위를세개의부호화 단위로분할할수있으며,이경우분할형태모드정보는 (l l)b로표현될수있다. 영상복호화장치 (100)는부호화단위를분할하지않는것으로결정할수있으며, 2020/175967 1»(:1^1{2020/002924 이 경우분할형태모드정보는 (0)b로표현될수있다.즉,영상복호화
장치(100)는 분할 형태 모드 정보를 나타내는 바이너리 코드를 이용하기 위하여 고정 길이 코딩(FLC: Fixed Length Coding)이 아니라 가변 길이 코딩(VLC: Variable Length Coding)을 이용할 수 있다.
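The fixed-length versus variable-length signalling of paragraphs [213]-[214] can be summarised with the small lookup sketch below. Only the bin strings quoted in the text are used; the mode names are placeholders chosen for this illustration.

```python
SQUARE_SPLIT_BINS = {        # square coding unit: four modes, 2-bit fixed-length codes
    "no_split":   "00",
    "quad_split": "01",      # split in both horizontal and vertical directions
    "horizontal": "10",
    "vertical":   "11",
}
NON_SQUARE_SPLIT_BINS = {    # non-square coding unit: variable-length codes
    "no_split":      "0",    # a single bin is enough to say "not split"
    "binary_split":  "10",
    "ternary_split": "11",
}

def split_mode_bins(is_square: bool, mode: str) -> str:
    table = SQUARE_SPLIT_BINS if is_square else NON_SQUARE_SPLIT_BINS
    return table[mode]

print(split_mode_bins(False, "no_split"))   # -> "0", saving one bin over FLC
```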
[215] 일실시예에 따라도 17을참조하면,부호화단위가분할되지 않는것을
나타내는분할형태모드정보의 바이너리코드는 (0)b로표현될수있다.만일 부호화단위가분할되지 않음을나타내는분할형태모드정보의 바이너리 코드가 (00)b로설정된경우라면, (01)b로설정된분할형태모드정보가 없음에도불구하고 2비트의분할형태모드정보의 바이너리코드를모두 이용하여야한다.하지만도 17에서도시하는바와같이,비-정사각형 형태의 부호화단위에 대한 3가지의분할형태를이용하는경우라면,영상복호화 장치 (100)는분할형태모드정보로서 1비트의 바이너리코드 (0)b를
이용하더라도부호화단위가분할되지 않는것을결정할수있으므로, 비트스트림을효율적으로이용할수있다.다만분할형태모드정보가나타내는 비-정사각형 형태의부호화단위의분할형태는단지도 17에서도시하는 3가지 형태만으로국한되어해석되어서는안되고,상술한실시예들을포함하는 다양한형태로해석되어야한다.
[216] 도 18은일실시예에 따라바이너리코드로표현될수있는분할형태모드
정보에 기초하여결정될수있는부호화단위의또다른형태를도시한다.
[217] 도 18을참조하면영상복호화장치 (100)는분할형태모드정보에기초하여 정사각형 형태의부호화단위를수평 방향또는수직방향으로분할할수있고, 비-정사각형 형태의부호화단위를수평 방향또는수직방향으로분할할수 있다.즉,분할형태모드정보는정사각형 형태의부호화단위를한쪽방향으로 분할되는것을나타낼수있다.이러한경우정사각형 형태의부호화단위가 분할되지 않는것을나타내는분할형태모드정보의바이너리코드는 (0)b로 표현될수있다.만일부호화단위가분할되지 않음을나타내는분할형태모드 정보의 바이너리코드가 (00)b로설정된경우라면, (01)b로설정된분할형태 모드정보가없음에도불구하고 2비트의분할형태모드정보의 바이너리 코드를모두이용하여야한다.하지만도 18에서도시하는바와같이,정사각형 형태의부호화단위에 대한 3가지의분할형태를이용하는경우라면,영상 복호화장치 (100)는분할형태모드정보로서 1비트의바이너리코드 (0)b를 이용하더라도부호화단위가분할되지 않는것을결정할수있으므로, 비트스트림을효율적으로이용할수있다.다만분할형태모드정보가나타내는 정사각형 형태의부호화단위의분할형태는단지도 18에서도시하는 3가지 형태만으로국한되어해석되어서는안되고,상술한실시예들을포함하는 다양한형태로해석되어야한다.
[218] 일실시예에 따라블록형태정보또는분할형태모드정보는바이너리코드를 이용하여표현될수있고,이러한정보가곧바로비트스트림으로생성될수 2020/175967 1»(:1^1{2020/002924 있다.또한바이너리코드로표현될수있는블록형태정보또는분할형태모드 정보는바로비트스트림으로생성되지않고 CAB AC(context adaptive binary arithmetic coding)에서입력되는바이너리코드로서이용될수도있다.
[219] 일실시예에따라영상복호화장치 (100)는 CABAC을통해블록형태정보
또는분할형태모드정보에대한신택스를획득하는과정을설명한다.
비트스트림획득부 (no)를통해상기신택스에대한바이너리코드를포함하는 비트스트림을획득할수있다.영상복호화장치 (100)는획득한비트스트림에 포함되는빈스트링 (bin string)을역이진화하여블록형태정보또는분할형태 모드정보를나타내는신택스요소 (syntax element)를검출할수있다.일 실시예에따라영상복호화장치 (100)는복호화할신택스요소에해당하는 바이너리빈스트링의집합을구하고,확률정보를이용하여각각의빈을 복호화할수있고,영상복호화장치 (100)는이러한복호화된빈으로구성되는빈 스트링이이전에구한빈스트링들중하나와같아질때까지반복할수있다. 영상복호화장치 (100)는빈스트링의역이진화를수행하여신택스요소를 결정할수있다.
[22이 일실시예에따라영상복호화장치 (100)는적응적이진산술코딩 (adaptive binary arithmetic coding)의복호화과정을수행하여빈스트링에대한신택스를 결정할수있고,영상복호화장치 (100)는비트스트림획득부 (110)를통해획득한 빈들에대한확률모델을갱신할수있다.도 17을참조하면,영상복호화 장치 (100)의비트스트림획득부 (110)는일실시예에따라분할형태모드정보를 나타내는바이너리코드를나타내는비트스트림을획득할수있다.획득한 1비트또는 2비트의크기를가지는바이너리코드를이용하여영상복호화 장치 (100)는분할형태모드정보에대한신택스를결정할수있다.영상복호화 장치 (100)는분할형태모드정보에대한신택스를결정하기위하여, 2비트의 바이너리코드중각각의비트에대한확률을갱신할수있다.즉,영상복호화 장치 (100)는 2비트의바이너리코드중첫번째빈의값이 0또는 1중어떤 값이냐에따라,다음빈을복호화할때 0또는 1의값을가질확률을갱신할수 있다.
[221] 일실시예에따라영상복호화장치 (100)는신택스를결정하는과정에서,
신택스에대한빈스트링의빈들을복호화하는과정에서이용되는빈들에대한 확률을갱신할수있으며,영상복호화장치 (100)는상기빈스트링중특정 비트에서는확률을갱신하지않고동일한확률을가지는것으로결정할수있다.
[222] 도 17을참조하면,비-정사각형형태의부호화단위에대한분할형태모드
정보를나타내는빈스트링을이용하여신택스를결정하는과정에서,영상 복호화장치 (100)는비-정사각형형태의부호화단위를분할하지않는경우에는 0의값을가지는하나의빈을이용하여분할형태모드정보에대한신택스를 결정할수있다.즉,블록형태정보가현재부호화단위는비-정사각형형태임을 나타내는경우,분할형태모드정보에대한빈스트링의첫번째빈은, 2020/175967 1»(:1^1{2020/002924 비-정사각형 형태의부호화단위가분할되지 않는경우 0이고, 2개또는 3개의 부호화단위로분할되는경우 1일수있다.이에 따라비-정사각형의부호화 단위에 대한분할형태모드정보의빈스트링의 첫번째빈이 0일확률은 1/3, 1일 확률은 2/3일수있다.상술하였듯이 영상복호화장치 00)는비-정사각형 형태의부호화단위가분할되지 않는것을나타내는분할형태모드정보는 0의 값을가지는 1비트의 빈스트링만을표현될수있으므로,영상복호화
장치 (100)는분할형태모드정보의 첫번째빈이 1인경우에만두번째빈이 0인지 1인지판단하여분할형태모드정보에 대한신택스를결정할수있다.일 실시예에 따라영상복호화장치 (100)는분할형태모드정보에 대한첫번째빈이 1인경우,두번째빈이 0또는 1일확률은서로동일한확률인것으로보고빈을 복호화할수있다.
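The bin probabilities quoted in paragraph [222] can be illustrated with a toy sampler, which is not the normative CABAC engine: the first bin of the non-square split-mode bin string is 0 with probability 1/3, and the second bin, read only when the first bin is 1, is treated as equiprobable.

```python
import random

def sample_nonsquare_split_bins(rng: random.Random):
    """Draw a bin string according to the probabilities quoted above
    (illustrative toy model only)."""
    if rng.random() < 1.0 / 3.0:
        return (0,)                     # "not split": a single bin
    second = int(rng.random() < 0.5)    # second bin treated as equiprobable
    return (1, second)

rng = random.Random(0)
print([sample_nonsquare_split_bins(rng) for _ in range(5)])
```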
[223] 일실시예에 따라영상복호화장치 (100)는분할형태모드정보에 대한빈 스트링의 빈을결정하는과정에서각각의 빈에 대한다양한확률을이용할수 있다.일실시예에따라영상복호화장치 ( 0)는비-정사각형블록의방향에 따라분할형태모드정보에 대한빈의 확률을다르게결정할수있다.일 실시예에 따라영상복호화장치 00)는현재부호화단위의 넓이또는긴변의 길이에 따라분할형태모드정보에 대한빈의확률을다르게결정할수있다.일 실시예에 따라영상복호화장치 00)는현재부호화단위의 형태및 긴변의 길이중적어도하나에따라분할형태모드정보에 대한빈의 확률을다르게 결정할수있다.
[224] 일실시예에 따라영상복호화장치 ( ^0)는소정크기 이상의부호화단위들에 대하여는분할형태모드정보에 대한빈의 확률을동일한것으로결정할수 있다.예를들면,부호화단위의 긴변의 길이를기준으로 64샘플이상의크기의 부호화단위들에 대하여는분할형태모드정보에 대한빈의 확률이동일한 것으로결정할수있다.
[225] 일실시예에 따라영상복호화장치 (100)는분할형태모드정보의빈스트링을 구성하는빈들에 대한초기확률은슬라이스타입 (예를들면, I슬라이스,모 슬라이스또는 6슬라이스)에기초하여 결정될수있다.
[226] 도 19는루프필터링을수행하는영상부호화및복호화시스템의블록도를 나타낸도면이다.
[227] 영상부호화및복호화시스템 (1900)의부호화단 (1910)은영상의부호화된 비트스트림을전송하고,복호화단 (1950)은비트스트림을수신하여
복호화함으로써복원영상을출력한다.여기서부호화단 (1910)은후술할영상 부호화장치 (200)에유사한구성일수있고,복호화단 (1950)은영상복호화 장치 (100)에유사한구성일수있다.
[228] 부호화단 (1910)에서,예측부호화부 (1915)는인터 예측및 인트라예측을통해 예측데이터를출력하고,변환및양자화부 (1920)는예측데이터와현재 입력 영상간의 레지듀얼데이터의 양자화된변환계수를출력한다.엔트로피 2020/175967 1»(:1^1{2020/002924 부호화부 (1925)는양자화된변환계수를부호화하여변환하고비트스트림으로 출력한다.양자화된변환계수는역양자화및역변환부 (1930)을거쳐공간 영역의데이터로복원되고,복원된공간영역의데이터는디블로킹
필터링부 (1935)및루프필터링부 (1940)를거쳐복원영상으로출력된다.복원 영상은예측부호화부 (1915)를거쳐다음입력영상의참조영상으로사용될수 있다.
[229] 복호화단 (1950)으로수신된비트스트림중부호화된영상데이터는,엔트로피 복호화부 (1955)및역양자화및역변환부 (1960)를거쳐공간영역의레지듀얼 데이터로복원된다.예측복호화부 (1975)로부터출력된예측데이터및레지듀얼 데이터가조합되어공간영역의영상데이터가구성되고,디블로킹
필터링부 (1965)및루프필터링부 (1970)는공간영역의영상데이터에대해 필터링을수행하여현재원본영상에대한복원영상을출력할수있다.복원 영상은예측복호화부 (1975)에의해다음원본영상에대한참조영상으로서 이용될수있다.
[23이 부호화단 (1910)의루프필터링부 (1940)는사용자입력또는시스템설정에 따라입력된필터정보를이용하여루프필터링을수행한다.루프
필터링부 (1940)에의해사용된필터정보는엔트로피부호화부 (1925)로 출력되어,부호화된영상데이터와함께복호화단 (1950)으로전송된다.
복호화단 (1950)의루프필터링부 (1970)는복호화단 (1950)으로부터입력된필터 정보에기초하여루프필터링을수행할수있다.
[231] 상술한다양한실시예들은영상복호화장치 (100)이수행하는영상복호화 방법과관련된동작을설명한것이다.이하에서는이러한영상복호화방법에 역순의과정에해당하는영상부호화방법을수행하는영상부호화장치 (200)의 동작을다양한실시예를통해설명하도록한다.
[232] 도 2는일실시예에따라블록형태정보및분할형태모드정보중적어도 하나에기초하여영상을부호화할수있는영상부호화장치 (200)의블록도를 도시한다.
[233] 영상부호화장치 (200)는부호화부 (220)및비트스트림생성부 (2 W)를포함할 수있다.부호화부 (220)는입력영상을수신하여입력영상을부호화할수있다. 부호화부 (220)는입력영상을부호화하여적어도하나의신택스엘리먼트를 획득할수있다.신택스엘리먼트는 skip flag, prediction mode, motion vector difference, motion vector prediction method (or index), transform quantized coefficient, coded block pattern, coded block flag, intra prediction mode, direct flag, merge flag, delta QP, reference index, prediction direction, transform index중적어도 하나를포함할수있다.부호화부 (220)는부호화단위의모양,방향,너비및 높이의비율또는크기중적어도하나를포함하는블록형태정보에기초하여 컨텍스트모델을결정할수있다.
[234] 비트스트림생성부 (2 W)는부호화된입력영상에기초하여비트스트림을 2020/175967 1»(:1^1{2020/002924 생성할수있다.예를들어비트스트림 생성부 ( 0)는컨텍스트모델에 기초하여 신택스엘리먼트를엔트로피부호화함으로써비트스트림을생성할수있다. 또한영상부호화장치 (200)는비트스트림을영상복호화장치 (100)로전송할수 있다.
[235] 일실시예에 따라영상부호화장치 (200)의부호화부 (220)는부호화단위의 형태를결정할수있다.예를들면부호화단위가정사각형인지또는
비 -정사각형의 형태를가질수있고,이러한형태를나타내는정보는블록형태 정보에포함될수있다.
[236] 일실시예에 따라부호화부 (220)는부호화단위가어떤형태로분할될지를 결정할수있다.부호화부 (220)는부호화단위에포함되는적어도하나의부호화 단위의 형태를결정할수있고비트스트림 생성부 ( 0)는이러한부호화단위의 형태에 대한정보를포함하는분할형태모드정보를포함하는비트스트림을 생성할수있다.
[237] 일실시예에 따라부호화부 (220)는부호화단위가분할되는지분할되지 않는지 여부를결정할수있다.부호화부 (220)가부호화단위에하나의부호화단위만이 포함되거나또는부호화단위가분할되지 않는것으로결정하는경우
비트스트림 생성부 ( 0)는부호화단위가분할되지 않음을나타내는분할형태 모드정보를포함하는비트스트림을생성할수있다.또한부호화부 (220)는 부호화단위에포함되는복수개의부호화단위로분할할수있고,비트스트림 생성부 ( 0)는부호화단위는복수개의부호화단위로분할됨을나타내는분할 형태모드정보를포함하는비트스트림을생성할수있다.
[238] 일실시예에 따라부호화단위를몇 개의부호화단위로분할할지를
나타내거나어느방향으로분할할지를나타내는정보가분할형태모드정보에 포함될수있다.예를들면분할형태모드정보는수직 방향및수평방향중 적어도하나의방향으로분할하는것을나타내거나또는분할하지 않는것을 나타낼수있다.
[239] 영상부호화장치 (200)는부호화단위의분할형태모드에기초하여분할형태 모드에 대한정보를결정한다.영상부호화장치 (200)는부호화단위의모양, 방향,너비 및높이의 비율또는크기중적어도하나에 기초하여 컨텍스트 모델을결정한다.그리고,영상부호화장치 (200)는컨텍스트모델에 기초하여 부호화단위를분할하기 위한분할형태모드에 대한정보를비트스트림으로 생성한다.
[24이 영상부호화장치 (200)는컨텍스트모델을결정하기위하여 ,부호화단위의 모양,방 ¾너비 및높이의 비율또는크기중적어도하나와컨텍스트모델에 대한인덱스를대응시키기 위한배열을획득할수있다.영상부호화장치 (200)는 배열에서부호화단위의모양,방 ¾너비 및높이의비율또는크기중적어도 하나에 기초하여 컨텍스트모델에 대한인덱스를획득할수있다.영상부호화 장치 (200)는컨텍스트모델에 대한인덱스에기초하여 컨텍스트모델을결정할 2020/175967 1»(:1^1{2020/002924 수있다.
[241] 영상부호화장치 (200)는,컨텍스트모델을결정하기 위하여,부호화단위에 인접한주변부호화단위의모양,방향,너비 및높이의 비율또는크기중적어도 하나를포함하는블록형태정보에 더기초하여 컨텍스트모델을결정할수 있다.또한주변부호화단위는부호화단위의좌하측,좌측,좌상측,상측, 우상측,우측또는우하측에 위치한부호화단위중적어도하나를포함할수 있다.
[242] 또한,영상부호화장치 (200)는,컨텍스트모델을결정하기 위하여 ,상측주변 부호화단위의너비의 길이와부호화단위의너비의 길이를비교할수있다. 또한,영상부호화장치 (200)는좌측및우측의주변부호화단위의높이의 길이와부호화단위의높이의길이를비교할수있다.또한,영상부호화 장치 (200)는비교결과들에기초하여 컨텍스트모델을결정할수있다.
[243] 영상부호화장치 (200)의동작은도 3내지도 19에서 설명한비디오복호화 장치 (100)의동작과유사한내용을포함하고있으므로,상세한설명은생략한다.
[244] 이하,본개시의 기술적사상에 의한실시예들을차례로상세히 설명한다.
[245] 도 20은일실시예에 따른영상복호화장치 (2000)의구성을도시하는
블록도이다.
[246] 도 20을참조하면,영상복호화장치 (2000)는획득부 (2010),블록결정부 (2030), 예측복호화부 (2050)및복원부 (2070)를포함한다.도 20에도시된
획득부 (2010)는도 1에도시된비트스트림 획득부 (110)에 대응하고,블록 결정부 (2030),예측복호화부 (2050)및복원부 (2070)는도 1에도시된
복호화부 (120)에 대응할수있다.
[247] 일실시예에 따른획득부 (2010),블록결정부 (2030),예측복호화부 (2050)및 복원부 (2070)는적어도하나의프로세서로구현될수있다.영상복호화 장치 (2000)는획득부 (2010),블록결정부 (2030),예측복호화부 (2050)및 복원부 (2070)의 입출력 데이터를저장하는하나이상의 데이터
저장부 (미도시)를포함할수있다.또한,영상복호화장치 (2000)는,데이터 저장부 (미도시 )의 데이터 입출력을제어하는메모리제어부 (미도시 )를포함할 수도있다.
[248] 획득부 (2010)는영상의부호화결과생성된비트스트림을수신한다.
획득부 (2010)는비트스트림으로부터 영상의복호화를위한신택스
엘리먼트들을획득한다.신택스엘리먼트들에 해당하는이진값들은영상의 계층구조에따라비트스트림에포함될수있다.획득부 (2010)는비트스트림에 포함된이진값들을엔트로피코딩하여신택스엘리먼트들을획득할수있다.
[249] 도 21은영상의 계층구조에따라생성된비트스트림 ( 00)의구조를도시하는 예시적인도면이다.
[25이 도 21을참조하면,비트스트림 (2100)은시퀀스파라미터 세트 (2110),픽처
파라미터 세트 (2120),그룹헤더 (2130)및블록파라미터 세트 (2140)를포함할수 2020/175967 1»(:1^1{2020/002924 있다.
[251] 시퀀스파라미터 세트 (2110),픽처 파라미터세트 (2120),그룹헤더 (2130)및 블록파라미터세트 (2140)각각은영상의 계층구조에따른각계층에서 이용되는정보들을포함한다.
[252] 구체적으로,시퀀스파라미터 세트 (2110)는하나이상의 영상으로이루어진 영상시퀀스에서 이용되는정보들을포함한다.
[253] 픽처파라미터 세트 (2120)는하나의 영상에서 이용되는정보들을포함하며 , 시퀀스파라미터세트 (2110)를참조할수있다.
[254] 그룹헤더 (2130)는영상내에서결정된블록그룹에서 이용되는정보들을
포함하며,픽처 파라미터세트 (2120)및시퀀스파라미터 세트 (2110)를참조할수 있다.그룹헤더 (2130)는슬라이스헤더 (slice header)일수있다.
[255] 또한,블록파라미터 세트 (2140)는영상내에서 결정된블록에서 이용되는 정보들을포함하며,그룹헤더 (2130),픽처파라미터 세트 (2120)및시퀀스 파라미터 세트 (2110)를참조할수있다.
[256] 일실시예에서 ,블록파라미터세트 (2140)는영상내에서결정된블록의 계층적 구조에 따라최대부호화단위 (CTU)의 파라미터세트,부호화단위 (CU)의 파라미터 세트,예측단위 (PU)의 파라미터세트및변환단위 (TU)의파라미터 세트중적어도하나로구분될수있다.
[257] 획득부 (2010)는영상의 계층구조에따라비트스트림 (2 W0)으로부터 영상의 복호화에 이용되는정보들을획득하고,후술하는블록결정부 (2030),예즉 복호화부 (2050)및복원부 (2070)는획득부 (2010)가획득한정보들을이용하여 필요한동작을수행할수있다.
[258] 도 21에도시된비트스트림 (2 W0)의구조는하나의 예시일뿐이며,도 21에 도시된파라미터세트들중일부는비트스트림 (2 W0)에포함되지 않을수있고, 또는도시되지 않은파라미터세트,예를들어,비디오파라미터 세트가 비트스트림 (2 W0)에포함될수있다.
[259] 블록결정부 (2030)는현재 영상을블록들로분할하고,현재 영상내에서 적어도 하나의블록을포함하는블록그룹들을설정한다.여기서,블록은타일에 해당할 수있고,블록그룹은슬라이스에해당할수있다.슬라이스는타일그룹으로 참조될수도있다.
[26이 예측복호화부 (2050)는현재 영상으로부터분할된블록들의하위블록들을 인터 예측또는인트라예측하여하위블록들에 대응하는예측샘플들을 획득한다.여기서 ,하위블록은최대부호화단위 ,부호화단위 및변환단위중 적어도하나일수있다.
[261] 이하에서는,블록을타일로,블록그룹을슬라이스로한정하여설명하지만, 이는하나의 예시일뿐이며, A블록들의 집합으로이루어진 B블록이
존재한다면, A블록은블록에 해당하고, B블록은블록그룹에해당할수있다. 예를들어, CTU의 집합이 타일에해당할때, CTU는블록이고,타일이블록 2020/175967 1»(:1^1{2020/002924 그룹일수있다.
[262] 도 3내지도 16를참조하여 설명한바와같이,블록결정부 (2030)는현재
영상을분할하여 변환단위,부호화단위,최대부호화단위,타일,슬라이스등을 결정할수있다.
[263] 도 22는 현재 영상(2200) 내에서 결정된 슬라이스, 타일 및 CTU를 도시하고 있다.
[264] 현재 영상(2200)은 복수의 CTU로 분할된다. CTU의 크기는 비트스트림으로부터 획득된 정보에 기초하여 결정될 수 있다. CTU는 동일 크기의 정사각형의 형태를 가질 수 있다.
[265] 타일은 하나 이상의 CTU를 포함한다. 타일은 정사각형 또는 직사각형의
형태를가질수있다.
[266] 슬라이스는하나이상의타일을포함한다.슬라이스는사각형의 형태또는 비-사각형의 형태를가질수있다.
[267] 일실시예에서,블록결정부 (2030)는비트스트림으로부터 획득된정보에따라 현재 영상 (2200)을복수의 0X1로분할하고,적어도하나의 0X1를포함하는 타일및적어도하나의타일을포함하는슬라이스를현재 영상 (2200)내에서 설정할수있다.
[268] 일실시예에서,블록결정부 (2030)는비트스트림으로부터 획득된정보에따라 현재 영상 (2200)을복수의타일로분할하고,각타일을하나이상의 0X1로 분할할수있다.또한,블록결정부 (2030)는현재 영상 (2200)내에서 적어도 하나의 타일을포함하는슬라이스를설정할수있다.
[269] 일실시예에서,블록결정부 (2030)는비트스트림으로부터 획득된정보에따라 현재 영상 (2200)을하나이상의슬라이스로분할하고,각슬라이스를하나 이상의 타일로분할할수있다.그리고,블록결정부 (2030)는각각의타일을하나 이상의 0X1로분할할수있다.
[27이 블록결정부 (2030)는현재 영상 (2200)내에서슬라이스들을설정하기 위해 , 비트스트림으로부터 획득된슬라이스들의주소정보를이용할수있다.블록 결정부 (2030)는비트스트림으로부터 획득된슬라이스들의주소정보에 따라 현재 영상 (2200)내에서하나이상의타일을포함하는슬라이스들을설정할수 있다.슬라이스의주소정보는비트스트림의비디오파라미터세트,시퀀스 파라미터 세트,픽처파라미터 세트또는그룹헤더로부터 획득될수있다.
[271] 블록결정부 (2030)가현재 영상 (2200)내에서슬라이스들을설정하는방법에 대해도 23및도 24를참조하여 설명한다.
[272] 도 23및도 24는현재 영상 (2200)내에서슬라이스들을설정하는방법을
설명하기 위한도면이다.
[273] 현재 영상 (2200)내에서 타일들이설정되면,블록결정부 (2030)는
비트스트림으로부터 획득된슬라이스의주소정보에 따라적어도하나의 타일을포함하는슬라이스들을현재 영상 (2200)내에서 설정할수있다. 2020/175967 1»(:1^1{2020/002924
[274] 도 23을참조하여설명하면,슬라이스들 (2310, 2320, 2330, 2340, 2350)이현재 영상 (2200)내에서 래스터스캔 (raster scan)방향 (2300)을따라결정될수있고, 슬라이스들 (2310, 2320, 2330, 2340, 2350)이래스터스캔방향 (2300)에따라 순차적으로복호화될수있다.
[275] 일실시예에서,주소정보는슬라이스들 (2310, 2320, 2330, 2340, 2350)각각에 포함된타일들중우하단에위치하는우하단타일의식별값을포함할수있다.
[276] 구체적으로,슬라이스들 (2310, 2320, 2330, 2340, 2350)의주소정보는첫번째 슬라이스 (2310)의우하단타일의식별값인 9,두번째슬라이스 (2320)의우하단 타일의식별값인 7,세번째슬라이스 (2330)의우하단타일의식별값인 11,네 번째슬라이스 (2340)의우하단타일의식별값인 12및다섯번째
슬라이스 (2350)의우하단타일의식별값인 15를포함할수있다.일실시예에서, 네번째슬라이스 (2340)가현재영상 (2200)내에서설정되면,마지막슬라이스인 다섯번째슬라이스 (2350)는자동적으로확인이가능하므로,다섯번째 슬라이스 (2350)의주소정보는비트스트림에포함되지않을수있다.
[277] 블록결정부 (2030)는첫번째슬라이스 (2310)의설정을위해,현재영상 (2200) 내의타일들중좌상단타일,즉, 0의식별값을갖는타일을식별할수있다. 그리고,블록결정부 (2030)는타일 0과,주소정보로부터확인되는타일 9를 포함하는영역을첫번째슬라이스 (2310)로결정할수있다.
[278] 다음으로,블록결정부 (2030)는두번째슬라이스 (2320)의설정을위해,이전 슬라이스,즉,첫번째슬라이스 (2310)에포함되지않은타일들중가장작은식별 값을갖는타일,즉타일 2를두번째슬라이스 (2320)의좌상단타일로결정할수 있다.그리고,블록결정부 (2030)는타일 2와,주소정보로부터확인되는타일 7을 포함하는영역을두번째슬라이스 (2320)로결정할수있다.
[279] 마찬가지로,블록결정부 (2030)는세번째슬라이스 (2330)의특정을위해,이전 슬라이스,즉,첫번째슬라이스 (2310)와두번째슬라이스 (2320)에포함되지 않은타일들중가장작은식별값을갖는타일,즉타일 W을세번째
슬라이스 (2330)의좌상단타일로결정할수있다.그리고,블록결정부 (2030)는 타일 W과,주소정보로부터확인되는타일 11을포함하는영역을세번째 슬라이스 (2330)로결정할수있다.
[280] 즉, 일 실시예에 따르면, 비트스트림에 포함된 우하단 타일의 식별 정보만으로 현재 영상(2200) 내에서 슬라이스들이 설정될 수 있다.
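The derivation in paragraphs [275]-[280] can be sketched as follows, assuming raster-scan tile indices in a grid of cols x rows tiles; derive_slices is a hypothetical helper written only to illustrate the rule that each slice runs from the smallest uncovered tile index to the signalled bottom-right tile, with the last slice implied by whatever remains.

```python
def derive_slices(cols, rows, bottom_right_ids):
    """Derive rectangular slices from only the bottom-right tile id of each
    slice (illustrative sketch, not the normative process)."""
    covered, slices = set(), []
    for br in bottom_right_ids:
        tl = min(t for t in range(cols * rows) if t not in covered)
        tl_r, tl_c = divmod(tl, cols)
        br_r, br_c = divmod(br, cols)
        tiles = [r * cols + c
                 for r in range(tl_r, br_r + 1)
                 for c in range(tl_c, br_c + 1)]
        covered.update(tiles)
        slices.append(tiles)
    remaining = [t for t in range(cols * rows) if t not in covered]
    if remaining:
        slices.append(remaining)     # the final slice is implied
    return slices

# 4x4 tile grid with the bottom-right ids 9, 7, 11, 12 quoted for Fig. 23;
# the fifth slice (tiles 13, 14, 15) is derived automatically.
print(derive_slices(4, 4, [9, 7, 11, 12]))
```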
[281] 다른실시예에서,획득부 (2010)는슬라이스들을결정하기위한주소정보로서, 슬라이스들각각에포함된좌상단타일의식별값과우하단타일의식별값을 획득하고,블록결정부 (2030)는획득부 (2010)가획득한정보에따라현재 영상 (2200)내에서슬라이스들을설정할수있다.주소정보로부터각각의 슬라이스에포함된좌상단타일과우하단타일의식별이가능하므로,블록 결정부 (2030)는주소정보로부터식별되는좌상단타일과우하단타일을 포함하는영역을슬라이스로설정할수있다. 2020/175967 1»(:1^1{2020/002924
[282] 또다른실시예에서,획득부 (2010)는슬라이스들을설정하기위한주소
정보로서,슬라이스에포함된좌상단타일의식별값,슬라이스의폭의크기및 슬라이스의높이의크기를획득하고,블록결정부 (2030)는획득부 (2010)가 획득한정보에따라현재영상 (2200)내에서슬라이스들을설정할수있다.
[283] 예를들어,도 23에서두번째슬라이스 (2320)의주소정보는,좌상단타일의 식별값인 2,슬라이스의폭의크기인 2,슬라이스의높이의크기인 2를포함할 수있다.여기서,폭및높이의크기가 2라는것은두번째슬라이스 (2320)의폭 방향및높이방향을따라 2개의타일의행과 2개의타일의열이존재한다는 것을의미한다.
[284] 구현예에따라,첫번째슬라이스 (2310)의좌상단타일은타일 0으로고정되어 있으므로,첫번째슬라이스 (2310)의좌상단타일의식별값은비트스트림에 포함되어 있지않을수있다.
[285] 구현예에따라,비트스트림으로부터획득된슬라이스의폭의크기및높이의 크기는슬라이스의폭방향및높이방향을따라배치된타일행의개수및타일 열의개수를소정의스케일링팩터로나눈값일수있다.다시말하면,도 23에서 두번째슬라이스 (2320)의주소정보가,좌상단타일의식별값 2,슬라이스의 폭의크기 1,슬라이스의높이의크기 1을나타내는경우,블록결정부 (2030)는 슬라이스의폭의크기인 1및슬라이스의높이의크기인 1에미리결정된 스케일링팩터,예를들어 2를곱함으로써슬라이스의폭방향및높이방향을 따라각각 2개의타일행과타일열이존재하는것으로확인할수있다.
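The alternative addressing of paragraphs [282]-[285] (top-left tile id plus a width and height in tiles, optionally signalled after division by a scaling factor) can be sketched with the hypothetical helper below.

```python
def slice_tiles_from_wh(cols, top_left_id, width, height, scale=1):
    """Tiles of a slice given its top-left tile id and its size in tile
    columns/rows; the signalled size is multiplied back by a pre-determined
    scaling factor (illustrative sketch)."""
    w, h = width * scale, height * scale
    r0, c0 = divmod(top_left_id, cols)
    return [r * cols + c for r in range(r0, r0 + h) for c in range(c0, c0 + w)]

# Second slice of Fig. 23: top-left tile 2, signalled size 1x1, scaling factor 2.
print(slice_tiles_from_wh(4, 2, 1, 1, scale=2))   # -> [2, 3, 6, 7]
```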
[286] 블록결정부 (2030)는첫번째슬라이스 (2310)의주소정보내지다섯번째
슬라이스 (2350)의주소정보에따라첫번째슬라이스 (2310)내지다섯번째 슬라이스 (2350)를현재영상 (2200)내에서결정할수있다.주소정보에따라 현재영상 (2200)내에서네번째슬라이스 (2340)까지결정되면,다섯번째 슬라이스 (2350)는자동적으로결정되므로,마지막슬라이스의주소정보는 비트스트림에포함되지않을수도있다.
[287] 또다른실시예에서,현재영상 (2200)내에서결정될슬라이스들중첫번째
행에위치하는타일또는첫번째열에위치하는타일을포함하는슬라이스의 주소정보는,해당슬라이스의좌상단타일의식별값,슬라이스의폭의크기및 슬라이스의높이의크기뿐만아니라,해당슬라이스의우측방향또는하부 방향을따라후속하는슬라이스가몇개존재하는지를나타내는값을더포함할 수도있다.슬라이스의우측방향또는하부방향을따라후속하는슬라이스가 몇개존재하는지를나타내는값은,슬라이스의폭방향또는높이방향을따라 배열된슬라이스가몇개존재하는지를나타내는값으로대체될수도있다.
[288] 첫번째슬라이스 (2310)의주소정보는,우측방향을따라한개의슬라이스 (즉, 두번째슬라이스 (2320))가존재하고,하부방향을따라한개의슬라이스 (즉,네 번째슬라이스 (2340))가존재한다는정보를포함할수있다.첫번째
슬라이스 (2310)는영상 (2200)내첫번째행에위치하는타일과첫번째열에 2020/175967 1»(:1^1{2020/002924 위치하는타일을모두포함하므로,첫번째슬라이스 (2310)의주소정보는, 슬라이스의우측방향을따라후속하는슬라이스가몇개존재하는지를 나타내는값과슬라이스의하부방향을따라후속하는슬라이스가몇개 존재하는지를나타내는값을포함할수있다.
[289] 두번째슬라이스 (2320)는첫번째행에위치하는타일만을포함하므로,두 번째슬라이스 (2320)의주소정보는,슬라이스의하부방향을따라후속하는 슬라이스가몇개존재하는지를나타내는값을포함할수있다.
[29이 우측방향및/또는하부방향을따라후속하는슬라이스가몇개존재하는지를 나타내는값이주소정보에포함됨으로써,현재영상 (2200)의폭방향을따라 마지막에배치된슬라이스 (도 23에서는두번째슬라이스 (2320)및/또는다섯 번째슬라이스 (2350))의주소정보에서슬라이스의폭의크기가생략될수있고, 현재영상 (2200)의높이방향을따라마지막에배치된슬라이스 (도 23에서는네 번째슬라이스 (2340)및/또는다섯번째슬라이스 (2350))의주소정보에서 슬라이스의높이의크기가생략될수있다.예를들어,블록결정부 (2030)가이미 현재영상 (2200)의폭방향을따라첫번째슬라이스 (2310)에후속하여하나의 슬라이스가존재하는것을알수있으므로,후속슬라이스의폭의크기를 나타내는값이비트스트림에포함되지않더라도현재영상 (2200)의폭크기를 고려하여첫번째슬라이스 (2310)에후속하는슬라이스의폭의크기를도출할수 있기때문이다.도 23에서,현재영상 (2200)의폭방향을따라 4개의타일이 존재하고,첫번째슬라이스 (2310)의폭방향을따라 2개의타일이존재하므로, 첫번째슬라이스 (2310)에후속하는두번째슬라이스 (2320)의폭방향을따라 2개의타일이존재하는것을알수있다.마찬가지로,블록결정부 (2030)는이미 현재영상 (2200)의높이방향을따라첫번째슬라이스 (2310)에후속하여하나의 슬라이스가존재하는것을알수있으므로,후속슬라이스의높이의크기를 나타내는값이비트스트림에포함되지않더라도첫번째슬라이스 (2310)에 후속하는슬라이스의높이를도출할수있다.
[291] 또다른실시예에서,획득부 (2010)는현재영상 (2200)을슬라이스들로
분할하기위한분할정보를비트스트림으로부터획득하고,블록결정부 (2030)는 분할정보에따라현재영상 (2200)을슬라이스들로분할할수도있다.여기서, 분할정보는예를들어, 4분할,높이의 2분할,폭의 2분할등을나타낼수있다.
[292] 블록결정부 (2030)는현재영상 (2200)이최초분할됨에따라획득된
슬라이스들각각을분할정보에따라분할하여더작은슬라이스들을
계층적으로획득할수도있다.
[293] 도 24에도시된바와같이,블록결정부 (2030)는분할정보에따라현재
영상 (2200)의폭을 2분할하여 2개의영역 (2410, 2420)을결정하고,좌측 영역 (2410)의분할정보에따라좌측영역 (2410)의높이를 2분할하여 2개의 영역 (2412, 2414)을결정할수있다.우측영역 (2420)의분할정보가비분할을 나타내고,좌측영역 (2410)으로부터분할된영역들 (2412, 2414)이추가분할되지 2020/175967 1»(:1^1{2020/002924 않는경우,블록결정부 (2030)는좌측상부영역 (2412)을첫번째슬라이스,우측 영역 (2420)을두번째슬라이스,좌측하부영역 (2414)을세번째슬라이스로 설정할수있다.
[294] 또다른실시예에서 ,블록결정부 (2030)는미리 설정된맵정보에따라
슬라이스들을현재 영상 (2200)내에서 설정하되,비트스트림으로부터 획득되는 수정 정보에따라현재 영상 (2200)내 적어도하나의슬라이스를추가
분할하거나,두개 이상의슬라이스들을병합하여 최종적인슬라이스들을 설정할수도있다.상기 맵정보는영상내위치하는슬라이스들의주소정보를 포함할수있다.예를들어,블록결정부 (2030)는비트스트림의 비디오파라미터 세트또는시퀀스파라미터세트에서 획득되는맵정보에따라영상 (2200)내에 슬라이스들을최초설정하고,픽처 파라미터세트에서 획득되는수정 정보에 따라영상 (2200)내최종의슬라이스들을설정할수있다.
[295] 한편,현재 영상내에서타일들과슬라이스들이결정되면,블록결정부 (2030)는 타일들에포함된부호화단위들중적어도하나를인터 예측할수있는데,인터 예측에 이용되는참조영상리스트를구성하는방법에 대해설명한다.
[296] 도 20을참조하면,예측복호화부 (2050)는현재 영상내에서 결정된타일들에 포함된부호화단위들을예측복호화한다.예측복호화부 (2050)는인터 예측 또는인트라예측을통해부호화단위들을예측복호화할수있는데,인터 예측에 의하면,움직임 벡터가가리키는참조영상내참조블록에 기반하여 부호화단위의 예측샘플이 획득되고,예측샘플과비트스트림으로부터 획득되는잔차데이터에 기반하여부호화단위의복원샘플이 획득된다.예측 모드에따라비트스트림에잔차데이터가포함되어 있지 않을수있고,이경우, 예측샘플이복원샘플로결정될수도있다.
[297] 인터 예측을위해서는참조영상들을포함하는참조영상리스트를구성하여야 하는데,일실시예에서,획득부 (2010)는비트스트림의시퀀스파라미터 세트로부터복수의제 1참조영상리스트를나타내는정보를획득할수있다. 복수의 제 1참조영상리스트를나타내는정보는참조영상의 POC(picture order count)관련값을포함할수있다.복수의제 1참조영상리스트는현재 영상을 포함하는영상시퀀스에서 이용된다.
[298] 일실시예에서 ,복수의 제 1참조영상리스트를나타내는정보는제 1참조 영상리스트의 개수를포함할수도있다.이경우,예측복호화부 (2050)은 비트스트림으로부터 확인된개수에 대응하는제 1참조영상리스트들을구성할 수있다.이경우,예측복호화부 (2050)는영상부호화장치 (3300)와동일방법에 따라제 1참조영상리스트들을구성할수있다.
[299] 특정의슬라이스에포함된부호화단위들을부호화하는데 있어,영상의특성에 따라영상시퀀스를위한복수의 제 1참조영상리스트를이용하기부적합할수 있다.이에 따라복수의 제 1참조영상리스트들중에 현재슬라이스내부호화 단위들을인터 예측하는데 이용가능한참조영상리스트가존재하지 않으면, 2020/175967 1»(:1^1{2020/002924 새로운참조영상리스트를그룹헤더로부터 획득할수있다.다만이경우, 새로운참조영상리스트가그룹헤더에포함됨에 따라비트레이트가증가할수 있으므로,시퀀스파라미터세트를통해시그널링되는복수의제 1참조영상 리스트를이용하여 현재슬라이스에 이용될최적의참조영상리스트를 구성하는방안이요구된다.
[300] 일실시예에서,획득부 (2010)는영상시퀀스에서 이용되는복수의제 1참조 영상리스트중적어도하나를가리키는인디케이터를비트스트림의그룹 헤더로부터 획득한다.그리고,예측복호화부 (2050)는인디케이터가가리키는제 1참조영상리스트로부터갱신된제 2참조영상리스트를획득한다.
[301] 제 2참조영상리스트는,인디케이터가가리키는제 1참조영상리스트에
포함된참조영상들중적어도일부가다른참조영상으로대체되거나,참조 영상들의 적어도일부의순서가변경되거나,새로운참조영상이 제 1참조영상 리스트에추가됨에따라획득될수있다.
[302] 제 2참조영상리스트의구성을위해,획득부 (2010)는비트스트림의그룹
헤더로부터 갱신 정보를 획득할 수 있다. 갱신 정보는, 인디케이터가 가리키는 제 1 참조영상리스트에서 제거될 참조영상의 POC 관련 값, 제 2 참조영상리스트에 추가될 참조영상의 POC 관련 값, 제 1 참조영상리스트에서 제거될 참조영상의 POC 관련 값과 제 2 참조영상리스트에 추가될 참조영상의 POC 관련 값 사이의 차분 값, 영상들의 순서 변경을 위한 정보 등을 포함할 수 있다. 구현예에 따라, 갱신 정보는 비트스트림의 그룹 헤더 이외의 파라미터 세트, 예를 들어, 픽처 파라미터 세트로부터 획득될 수도 있다.
[303] 예측복호화부 (2050)는제 2참조영상리스트가획득되면,제 2참조영상
리스트에포함된참조영상들중적어도하나를기초로슬라이스에포함된 부호화단위들을예측복호화하여부호화단위들의 예측샘플들을획득할수 있다.
[304] 예측복호화부 (2050)는영상시퀀스에 이용되는복수의제 1참조영상리스트 중인디케이터가가리키는제 1참조영상리스트이외의 제 1참조영상리스트, 및제 2참조영상리스트를이용하여다음슬라이스에포함된부호화단위들을 예측복호화할수있다.다시 말하면,현재슬라이스에서 획득된제 2참조영상 리스트가다음슬라이스에서도이용될수있다.구체적으로,현재슬라이스에 대해 획득된인디케이터가가리키는제 1참조영상리스트이외의제 1참조 영상리스트와제 2참조영상리스트중에서 ,다음슬라이스에서 이용되는참조 영상리스트를가리키는인디케이터가새롭게 획득되고,인디케이터가 가리키는참조영상리스트또는그로부터갱신된참조영상리스트에따라다음 슬라이스에포함된부호화단위들이 예측복호화될수있다.이에따라,시퀀스 파라미터 세트나그룹헤더에서새로운참조영상리스트를시그널링하지 않더라도,기존참조영상리스트의갱신과정만으로슬라이스들의부호화 단위들을예측복호화하는데적합한참조영상리스트를구성할수있다. 2020/175967 1»(:1^1{2020/002924
[305] 이하에서는,도 25내지도 30을참조하여 제 1참조영상리스트로부터갱신된 제 2참조영상리스트를획득하는방법에 대해설명한다.
[306] 도 25는시퀀스파라미터 세트를통해 획득된복수의 제 1참조영상
리스트 (2510, 2520, 2530)를나타내는예시적인도면이다.
[307] 도 25는 3개의제 1참조영상리스트 (2510, 2520, 2530)를도시하고있는데, 이는하나의 예시일뿐이며,시퀀스파라미터 세트를통해 획득된제 1참조영상 리스트들의 개수는다양하게 변경될수있다.
[308] 도 25를참조하면,제 1참조영상리스트들 (2510, 2520, 2530)은숏-텀 타입또는 롱-텀 타입의참조영상들을포함할수있다.숏-텀타입의 참조영상들은
DPB (decoded picture buffer)에 저장된복원영상들중숏-텀타입으로지정된 영상을나타내고,롱-텀타입의 참조영상들은 DPB에 저장된복원영상들중 롱-텀 타입으로지정된영상을나타낸다.
[309] 제 1참조영상리스트들 (2510, 2520, 2530)에포함된참조영상들은 POC관련 값으로특정될수있다.구체적으로,숏-텀타입의 참조영상은,현재 영상의 POC와숏-텀참조영상의 POC사이의차분값,즉델타값으로특정되고,롱-텀 타입의 참조영상은,롱-텀 참조영상의 POC의 LSB(least significant bit)로특정될 수있다.롱-텀타입의 참조영상은,롱-텀참조영상의 POC의 MSB(most significant bit)로특정될수도있다.
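Paragraph [309] can be illustrated with a minimal sketch of the two identification mechanisms: a short-term entry is specified by a POC delta against the current picture, a long-term entry by the LSBs of its POC. The LSB bit depth used here is an assumed value for illustration only.

```python
LSB_BITS = 4   # assumed bit depth of the signalled LSB, for illustration only

def short_term_poc(current_poc: int, delta: int) -> int:
    """A short-term entry is specified by a delta against the current POC."""
    return current_poc + delta

def long_term_lsb_matches(candidate_poc: int, signalled_lsb: int) -> bool:
    """A long-term entry is specified by the LSBs of its POC."""
    return (candidate_poc & ((1 << LSB_BITS) - 1)) == signalled_lsb

print(short_term_poc(20, -1))          # -> 19
print(long_term_lsb_matches(26, 10))   # -> True (26 mod 16 == 10)
```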
[310] 구현예에따라,제 1참조영상리스트들 (2510, 2520, 2530)은숏-텀타입의 참조 영상만을포함하거나롱-텀 타입의참조영상만을포함할수도있다.즉,도 25에 도시된참조영상들모두가숏-텀 타입의참조영상이거나,롱-텀 타입의참조 영상일수있다.또한,구현예에 따라,제 1참조영상리스트들 (2510, 2520, 2530) 중일부는숏-텀 타입의참조영상만을포함하고,다른일부는롱-텀타입의 참조 영상만을포함할수도있다.
[311] 도 26은제 2참조영상리스트를획득하는방법을설명하기 위한도면이다.
[312] 예측복호화부 (2050)는인디케이터가가리키는제 1참조영상리스트 (2510)에 포함된참조영상들중적어도일부를다른참조영상으로변경하여제 2참조 영상리스트 (2600)를획득할수있다.도 26을참조하면,제 1참조영상
리스트 (2510)내 델타값이 -1인숏-텀 참조영상, LSB가 W인롱-텀 참조영상및 델타값이 -3인숏-텀 참조영상이 각각제 2참조영상리스트 (2600)내에서 델타 값이 -2인숏-텀참조영상, LSB가 8인롱-텀참조영상및 델타값이 -5인숏-텀 참조영상으로교체된것을확인할수있다.도 26은제 1참조영상리스트 (2510) 내모든참조영상들이다른참조영상으로교체된것으로도시하고있지만, 이는예시일뿐이며,제 1참조영상리스트 (2510)내모든참조영상들의 일부만이다른참조영상으로교체될수도있다.
[313] 일실시예에서,예측복호화부 (2050)는제 1참조영상리스트 (2510)에포함된 참조영상들중특정타입의 참조영상,예를들어,롱-텀타입의 참조영상만을 다른롱-텀참조영상으로교체할수도있다.즉,제 1참조영상리스트 (2510)에 2020/175967 1»(:1^1{2020/002924 포함된참조영상들중숏-텀참조영상은제 2참조영상리스트 (2600)에서도 그대로유지되고,롱-텀참조영상만이비트스트림으로부터획득된정보에따라 다른롱-텀참조영상으로교체될수있다.도 26을참조하면,제 1참조영상 리스트 (2510)에포함된참조영상들중특정 텀타입의 가 10인 참조영상만이 ,제 2참조영상리스트 (2600)
에서 LSB가 8
인롱-텀참조영상으로 교체될수있다.구현예에따라,제 1참조영상리스트 (2510)에포함된참조 영상들중롱-텀참조영상은제 2참조영상리스트 (2600)에서도그대로 유지되고,제 1참조영상리스트 (2510)내숏-텀타입의참조영상만이,다른 숏-텀참조영상으로교체될수도있다.
[314] 참조영상의교체를위해,획득부 (2010)는비트스트림의그룹헤더로부터
새로운참조영상의 ?00관련값을획득하고,예측복호화부 (2050)는
획득부 (2010)가획득한 ?00관련값이가리키는참조영상을제 2참조영상 리스트 (2600)에포함시킬수있다.
[315] 제 1참조영상리스트 (2510)에포함된참조영상들중새로운참조영상으로 대체될참조영상 (즉,제거될참조영상)을특정하기위해,획득부 (2010)는 비트스트림으로부터제 1참조영상리스트 (2510)에서제거되어야하는참조 영상의인덱스를더획득할수있다.제 1참조영상리스트 (2510)에포함된참조 영상들전부가제거되어야하는경우에는,제 1참조영상리스트 (2510)에서 제거되어야하는참조영상의인덱스는비트스트림에포함되어 있지않을수 있다.
[316] 전술한바와같이,제 1참조영상리스트 (2510)에서특정타입의참조영상이 제거되는것으로미리결정된경우,비트스트림에는제거되어야하는참조 영상의인덱스가포함되지않을수있고,예측복호화부 (2050)는제 1참조영상 리스트 (2510)에포함된참조영상들중미리결정된참조영상을제거하고, 비트스트림으로부터획득된 ?00관련값이가리키는참조영상을제 2참조 영상리스트 (2600)에포함시킬수있다.
[317] 일실시예에서 ,제 2참조영상리스트 (2600)에포함될새로운참조영상을
나타내는정보는,새로운참조영상의 ?00관련값과제 1참조영상
리스트 (2510)에서제거되어야하는참조영상의 ?00관련값사이의차분값일 수있다.예를들어,도 26에서,제 1참조영상리스트 (2510)에포함된
LSB
가 10인참조영상이제 2참조영상리스트 (2600)에서는 1名6가 8인참조영상으로 교체되었으므로,새로운참조영상을나타내는정보는 2(10-8)를포함할수있다. 예측복호화부 (2050)는 ?00관련값들의차분값과,제 1참조영상
리스트 (2510)에서제거되어야하는참조영상의 ?00관련값에기초하여제 2 참조영상리스트 (2600)에새롭게포함되어야하는참조영상의 ?00관련값을 도줄할수있다.
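The difference-based signalling of paragraph [317] can be sketched as follows; the sign convention (removed value minus new value equals the signalled difference, matching the 10 - 8 = 2 example above) is an assumption made for this illustration.

```python
def new_reference_value(removed_value: int, signalled_diff: int) -> int:
    """Recover the POC-related value of the picture added to the second list
    from the removed entry's value and the signalled difference
    (illustrative sketch, sign convention assumed)."""
    return removed_value - signalled_diff

print(new_reference_value(10, 2))   # -> 8, matching the 10 -> 8 example above
```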
[318] 구현예에따라,새로운참조영상은,인디케이터가가리키는제 1참조영상 리스트 (2510)로부터제거될참조영상의순서에맞게제 2참조영상 2020/175967 1»(:1^1{2020/002924 리스트 (2600)에추가될수있다.도 26에도시된바와같이,인덱스 1이할당된 롱-텀 참조영상이 제 1참조영상리스트 (2510)로부터제거되는경우,새로운 참조영상에도인덱스 1이할당될수있다.
[319] 도 27은제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다.
[32이 예측복호화부 (2050)는영상시퀀스를위한복수의제 1참조영상리스트중 인디케이터가가리키는제 1참조영상리스트 (2510)내참조영상들중특정 타입의 참조영상들을제외시켜제 2참조영상리스트 (2700)를획득할수도 있다.도 27을참조하면,인디케이터가가리키는제 1참조영상리스트 (2510)내 참조영상들중롱-텀타입의 참조영상은제 2참조영상리스트 (2700)에 포함되어 있지 않는것을알수있다.
[321] 구현예에따라,예측복호화부 (2050)는제 1참조영상리스트 (2510)내참조 영상들중숏-텀 타입의참조영상이제외된제 2참조영상리스트 (2700)를 획득할수도있다.
[322] 도 28은제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다.
[323] 예측복호화부 (2050)는비트스트림의그룹헤더로부터 획득된갱신정보에 따라인디케이터가가리키는제 1참조영상리스트 (2510)내참조영상들의 순서를변경하여제 2참조영상리스트 (2800)를획득할수도있다.이때,갱신 정보에 따라,제 1참조영상리스트 (2510)내모든참조영상들의순서가변경될 수있고,또는,제 1참조영상리스트 (2510)내참조영상들중일부의순서가 변경될수있다.
[324] 일예로,비트스트림의그룹헤더로부터 획득되는갱신정보는변경될순서에 따라배열된제 1참조영상리스트 (2510)내참조영상들의 인덱스를포함할수 있다.구체적으로,도 28에서제 1참조영상리스트 (2510)내 인덱스 0의 참조 픽처,인덱스 1의참조픽처 및 인덱스 2의 참조픽처들이 각각제 2참조영상 리스트 (2800)에서 인덱스 1의참조픽처,인덱스 2의 참조픽처 및인덱스 0의 참조픽처의순서로변경되어야하는경우,비트스트림의그룹헤더는갱신 정보로서, (2, 0, 1)을포함할수있다.예측복호화부 (2050)는제 1참조영상 리스트 (2510)내에서 2의 인덱스가할당된참조영상에는인덱스 0을, 0의 인덱스가할당된참조영상에는인덱스 1을, 1의 인덱스가할당된참조영상에는 인덱스 2를할당하여 제 2참조영상리스트 (2800)를구성할수있다.
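The first reordering variant of paragraph [324] can be sketched with a hypothetical helper: the update information lists, in their new order, the old indices of the entries of the first reference picture list.

```python
def reorder_by_permutation(ref_list, new_order):
    """new_order[i] is the index, in the first list, of the picture that is
    placed at index i of the second list (illustrative helper)."""
    return [ref_list[old_idx] for old_idx in new_order]

first_list = ["ref_idx0", "ref_idx1", "ref_idx2"]
print(reorder_by_permutation(first_list, (2, 0, 1)))
# -> ['ref_idx2', 'ref_idx0', 'ref_idx1']:
#    old index 0 -> new index 1, old 1 -> new 2, old 2 -> new 0, as in the text
```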
[325] 다른예로,비트스트림의그룹헤더로부터 획득되는갱신정보는제 1참조 영상리스트 (2510)내참조영상들중순서 변경이 필요한참조영상의 인덱스를 포함할수있다.구체적으로,도 28에서제 1참조영상리스트 (2510)에포함된 인덱스 1의참조픽처와인덱스 2의참조픽처의순서가변경되어야하는경우, 비트스트림의그룹헤더는갱신정보로서, (1, 2)을포함할수있다.예측 복호화부 (2050)는제 1참조영상리스트 (2510)내에서 1의 인덱스가할당된참조 2020/175967 1»(:1^1{2020/002924 영상에는인덱스 2을, 2의 인덱스가할당된참조영상에는인덱스 1을할당하여 제 2참조영상리스트 (2800)를구성할수있다.
[326] 도 29는제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다.
[327] 영상시퀀스에서 이용되는복수의 제 1참조영상리스트중인디케이터가 가리키는제 1참조영상리스트의 개수는복수개일수있다.즉,도 29에도시된 바와같이,인디케이터는숏-텀 참조영상들만을포함하는제 1참조영상 리스트 (2910)와롱-텀참조영상들만을포함하는제 1참조영상리스트 (2920)를 가리킬수있다.
[328] 예측복호화부 (2050)는인디케이터가가리키는제 1참조영상들 (2910, 2920)에 포함된숏-텀참조영상들과롱-텀참조영상들을포함하는제 2참조영상 리스트 (2930)를획득할수있다.이때,제 2참조영상리스트 (2930)에서롱-텀 참조영상들에는,숏-텀참조영상들에할당된인덱스보다큰인덱스가할당될 수있다.반대로,제 2참조영상리스트 (2930)에서숏-텀 참조영상들에는,롱-텀 참조영상들에할당된인덱스보다큰인덱스가할당될수있다.
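The merge described in paragraph [328] can be sketched as a simple concatenation, under the default stated above that long-term entries receive the larger indices; the opposite ordering, or an explicitly signalled order, is equally possible.

```python
def merge_reference_lists(short_term, long_term, long_term_last=True):
    """Build the merged second list; by default the long-term references are
    appended after the short-term ones (illustrative sketch)."""
    return short_term + long_term if long_term_last else long_term + short_term

print(merge_reference_lists(["st0", "st1"], ["lt0", "lt1"]))
# -> ['st0', 'st1', 'lt0', 'lt1']
```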
[329] 구현예에따라,획득부 (2010)는숏-텀참조영상들과롱-텀참조영상들의순서 정보를비트스트림으로부터 획득하고,예측복호화부 (2050)는획득한순서 정보에 따라제 2참조영상리스트 (2930)내에포함되는숏-텀참조영상들과 롱-텀 참조영상들에 인덱스를할당할수있다.
[33이 다른실시예에서 ,제 1참조영상리스트 (2910)와제 2참조영상
리스트 (2920)는타입과무관하게적어도하나의참조영상을포함할수있다.이 경우,예측복호화부 (2050)는인디케이터가가리키는제 1참조영상
리스트 (2910)에숏-텀참조영상이존재하고,제 2참조영상리스트 (2920)에 롱-텀 참조영상이존재하는경우,제 1참조영상리스트 (2910)에포함된숏-텀 참조영상과제 2참조영상리스트 (2920)에포함된롱-텀 참조영상을포함하는 제 2참조영상리스트 (2930)를획득할수있다.또는,예측복호화부 (2050)는 인디케이터가가리키는제 1참조영상리스트 (2910)에롱-텀참조영상이 존재하고,제 2참조영상리스트 (2920)에숏-텀참조영상이존재하는경우,제 1 참조영상리스트 (2910)에포함된롱-텀 참조영상과제 2참조영상
리스트 (2920)에포함된숏-텀 참조영상을포함하는제 2참조영상
리스트 (2930)를획득할수도있다.
[331] 도 30는제 2참조영상리스트를획득하는다른방법을설명하기위한
도면이다.
[332] 인디케이터가가리키는제 1참조영상리스트 (3010)는숏-텀 참조영상만을 포함할수있다.구현예에 따라,인디케이터가가리키는제 1참조영상 리스트 (3010)는롱-텀참조영상만을포함할수있다.
[333] 제 1참조영상리스트 (3010)에숏-텀 참조영상만이포함되어 있는경우, 획득부 (2010)는비트스트림으로부터 제 2참조영상리스트 (3030)에포함될 2020/175967 1»(:1^1{2020/002924 롱-텀참조영상의 POC관련값을획득하고, POC관련값이가리키는롱-텀참조 영상과제 1참조영상리스트 (3010)에포함된숏-텀참조영상을포함하는제 2 참조영상리스트 (3030)를구성할수있다.즉,시퀀스파라미터세트를통해서는 숏-텀참조영상만을포함하는제 1참조영상리스트 (3010)를시그널링하고, 그룹헤더에서롱-텀참조영상의 POC관련값을시그널링하는것이다.
[334] 그룹헤더대신시퀀스파리미터세트로참조영상리스트들을전송하게되면, 블록그룹마다매번참조영상리스트를전송하지않아도되므로오버헤드 감소에의한압축률향상의효과가있다.일례로 GOP(Group of Picture)단위로 예측구조가반복되는경우,각 G0P마다참조리스트가반복적으로전송될수 있다.이와같이빈번히전송될수있는참조영상리스트들을시퀀스파라미터 세트로전송할수록비트율감소효과가증가할수있다.
[335] 그런데여기서참조영상의타입이롱-텀인지,숏-텀인지에따라시퀀스
파라미터세트에대한활용도에차이가있을수있다.숏-텀참조영상은상기 예처럼 예측구조가반복되는패턴과연관성이있는반면,롱-텀참조영상은 현재픽춰와해당롱-텀참조영상과의상관도와관련성이높다.일례로 G0P 단위로예측구조가반복되고있으나화면전환등영상의내용이완전히바뀌게 되어더이상롱-텀참조영상이유효하지않게되는경우,숏-텀참조영상들에 대한참조리스트는시퀀스파라미터세트로부터얻고롱-텀참조영상은그룹 헤더로별도로전송함으로써전체참조리스트를그룹헤더로전송하는것을 피할수있다.
[336] 구현예에따라,제 1참조영상리스트에롱-텀참조영상만이포함되어있는 경우,획득부 (2010)는비트스트림으로부터제 2참조영상리스트에포함될 숏-텀참조영상의 P0C관련값을획득하고, P0C관련값이가리키는숏-텀참조 영상과제 1참조영상리스트에포함된롱-텀참조영상을포함하는제 2참조 영상리스트를구성할수도있다.
[337] 제 2참조영상리스트 (3030)를구성하는데있어,비트스트림의그룹
헤더로부터획득된 P0C관련값이가리키는참조영상들에대해서는,제 1참조 영상리스트 (3010)에포함된참조영상들에할당된인덱스보다큰인덱스또는 작은인덱스가할당될수있다.
[338] 전술한바와같이,제 2참조영상리스트의구성이완료되면,예측
복호화부 (2050)는제 2참조영상리스트에포함된참조영상에기초하여부호화 단위들을인터예측할수있다.인터 예측결과,부호화단위들에대응하는예측 샘늘들이획득될수있다.
[339] 복원부 (2070)는예측샘플들을이용하여부호화단위들의복원샘플들을
획득한다.일실시예에서,복원부 (2070)는예측샘플에비트스트림으로부터 획득되는잔차데이터를합하여부호화단위들의복원샘플들을획득할수있다.
[34이 복원부 (2070)는복원샘플들의획득전에부호화단위들의 예측샘플들을루마 매핑처리할수있다. 2020/175967 1»(:1^1{2020/002924
[341] 루마매핑 처리란,예측샘플들의루마값을비트스트림으로부터 획득된
파라미터에 따라변경하는것으로서,일종의톤매핑에해당할수있다.
[342] 일실시예에서,획득부 (2010)는비트스트림의 적어도하나의후처리파라미터 세트로부터루마매핑 처리를위한파라미터들을획득할수있다.적어도하나의 후처리 파라미터세트각각은루마매핑또는후술하는적응적루프필터링에 이용되는파라미터들을포함할수있다.
[343] 루마매핑에 이용되는파라미터들은,예를들어,변경 대상이 되는루마값의 범위,예측샘플들의루마값에 적용될델타값등을포함할수있다.
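A simplified illustration of the luma mapping of paragraphs [341]-[343] follows, using only the parameter kinds named there (a luma range to which the mapping applies and a delta added to prediction-sample luma values inside that range); the mapping function itself is an assumption for this sketch, not the normative process.

```python
def luma_map(pred_samples, low, high, delta):
    """Shift the luma value of every prediction sample inside the signalled
    range [low, high] by delta (illustrative tone-mapping-like sketch)."""
    return [s + delta if low <= s <= high else s for s in pred_samples]

print(luma_map([10, 60, 130, 200], low=50, high=150, delta=-8))
# -> [10, 52, 122, 200]: only values inside the signalled range are shifted
```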
[344] 도 31은루마매핑또는적응적루프필터링에 이용되는복수의후처리
파라미터 세트를포함하는비트스트림을도시하는도면이다.
[345] 비트스트림(3100)은 전술한 시퀀스 파라미터 세트(SPS)(3110), 픽처 파라미터 세트(PPS)(3120), 그룹 헤더(GH)(3130) 및 블록 파라미터 세트(BPS)(3140) 외에 복수의 후처리 파라미터 세트(3150a, 3150b, 3150c)를 포함할 수 있다. 후처리 파라미터 세트(3150a, 3150b, 3150c)는, 시퀀스 파라미터 세트(3110), 픽처 파라미터 세트(3120), 그룹 헤더(3130) 및 블록 파라미터 세트(3140)와 달리 영상의 계층 구조와 무관하게 비트스트림에 포함될 수 있다.
[346] 후처리 파라미터 세트들(3150a, 3150b, 3150c) 각각에는 이들을 구분하기 위한 식별자가 할당될 수 있다. 일 실시예에서, 후처리 파라미터 세트 A(3150a), 후처리 파라미터 세트 B(3150b) 및 후처리 파라미터 세트 C(3150c) 각각에는 0, 1, 2의 식별자가 할당될 수 있다.
[347] 후처리 파라미터 세트들(3150a, 3150b, 3150c) 중 일부는 루마 매핑에 이용되는 파라미터들을 포함하고, 다른 일부는 적응적 루프 필터링에 이용되는 파라미터들을 포함한다. 예를 들어, 후처리 파라미터 세트 A와 후처리 파라미터 세트 C는 루마 매핑에 이용되는 파라미터들을 포함하고, 후처리 파라미터 세트 B는 적응적 루프 필터링에 이용되는 파라미터들을 포함할 수 있다.
[348] 획득부 (2010)는픽처파라미터 세트 (3120),그룹헤더 (3130)또는블록파라미터 세트 (3140)로부터복수의후처리파라미터 세트 (3150 3150江 3150이중어느 후처리 파라미터세트가예측샘플들의루마매핑에 이용되는지를나타내는 식별자를획득할수있다.복원부 (2070)는식별자가가리키는후처리파라미터 세트로부터 획득되는파라미터들을이용하여 예측샘플들의루마값을변경할 수있다.
[349] 획득부 (2010)가식별자를픽처 파라미터세트 (3120)에서 획득한경우,
식별자가가리키는후처리 파라미터세트는현재 영상내에서도출된예측 샘플들에 이용되고,식별자를그룹헤더 (3130)에서 획득한경우에는,식별자가 가리키는후처리파라미터 세트는현재슬라이스내에서도출된예측샘플들에 이용된다.또한,획득부 (2010)가식별자를블록파라미터 세트 (3140)에서 획득한 경우,식별자가가리키는후처리 파라미터세트는현재블록내에서도출된예측 샘플들에 이용된다. 2020/175967 1»(:1^1{2020/002924
[35이 일실시예에서,획득부 (2010)는복수의후처리파라미터세트 (3150 315(¾,
31500)중어느하나를가라키는식별자와,수정정보를비트스트림으로부터 획득할수도있다.여기서 ,수정정보는식별자가가리키는후처리파라미터 세트에포함된파라미터들을변경하기위한정보를포함할수있다.예를들어, 수정정보는식별자가가리키는후처리파라미터세트에포함된파라미터의 값과변경될파라미터의값사이의차분값을포함할수있다.
[351] 복원부 (2070)는식별자가가리키는후처리파라미터세트의파라미터들을 수정정보에따라수정하고,수정된파라미터들을이용하여예측샘플들의루마 값을변경할수있다.
[352] 다른실시예에서,비트스트림으로부터획득되는식별자는복수의후처리
파라미터세트를가리킬수도있다.이경우,복원부 (2070)는식별자가가리키는 후처리파라미터세트들에포함된파라미터들을일부씩조합하여새로운 파라미터세트를구성하고,새롭게구성된파라미터세트로예즉샘플들에대해 루마매핑처리를할수있다.
[353] 복원부 (2070)는예측복호화결과생성된예측샘플들또는루마매핑처리된 예측샘플들을이용하여현재부호화단위에대응하는복원샘플들을획득한다. 복원샘플들이획득되면,복원부 (2070)는복원샘플들에대해적응적루프 필터링을적용할수있다.
[354] 적응적루프필터링이란,비트스트림을통해시그널링된필터계수들을
이용하여복원샘플들의샘플값들을 1차원필터링하는처리를의미한다.
적응적루프필터링은루마값및크로마값에대해별개로수행될수있다.필터 계수는 1차원필터에대한필터계수를포함할수있다.각 1차원필터의필터 계수는연속적인필터계수들간의차이값으로표현되고,해당차이값이 비트스트림을통해시그널링될수있다.
[355] 전술한바와같이,후처리파라미터세트들중일부는루마매핑에이용되는 파라미터들을포함하고,다른일부는적응적루프필터링에이용되는
파라미터들 (예를들어,필터계수들)을포함한다.예를들어,후처리파라미터 세트사3150幻와후처리파라미터세트: 8(31501?)는적응적루프필터링에 이용되는파라미터들을포함하고,후처리파라미터세트 (:(3150 는루마 매핑에이용되는파라미터들을포함할수있다.
[356] 획득부 (2010)는픽처파라미터세트 (3120),그룹헤더 (3130)또는블록파라미터 세트 (3140)로부터복수의후처리파라미터세트 (3150 3150江 3150이중어느 후처리파라미터세트가복원샘플들의적응적루프필터링에이용되는지를 나타내는식별자를획득할수있다.복원부 (2070)는식별자가가리키는후처리 파라미터세트로부터획득되는파라미터들을이용하여복원샘플들을필터링할 수있다.획득부 (2010)가식별자를픽처파라미터세트에서획득한경우, 식별자가가리키는후처리파라미터세트는현재영상내에서도출된복원 샘플들에이용되고,식별자를그룹헤더에서획득한경우에는,식별자가 2020/175967 1»(:1^1{2020/002924 가리키는후처리파라미터세트는현재슬라이스내에서도출된복원샘플들에 이용된다.또한,획득부 (2010)가식별자를블록파라미터세트에서획득한경우, 식별자가가리키는후처리파라미터세트는현재블록내에서도출된복원 샘플들에이용된다.
[357] 일실시예에서,획득부 (2010)는복수의후처리파라미터세트 (3150a, 3150b, 3150c)중어느하나를가라키는식별자와,수정정보를비트스트림으로부터 획득할수도있다.여기서 ,수정정보는식별자가가리키는후처리파라미터 세트에포함된필터계수들을변경하기위한정보를포함할수있다.예를들어, 수정정보는식별자가가리키는후처리파라미터세트에포함된필터계수의 값과변경될필터계수의값사이의차분값을포함할수있다.
[358] 복원부 (2070)는식별자가가리키는후처리파라미터세트의필터계수들을 수정정보에따라수정하고,수정된필터계수들을이용하여복원샘플들을 필터링할수있다.
[359] 다른실시예에서,비트스트림으로부터획득되는식별자는복수의후처리
파라미터세트를가리킬수도있다.이경우,복원부 (2070)는식별자가가리키는 후처리파라미터세트들에포함된필터계수들을일부씩조합하여새로운필터 계수세트를구성하고,새롭게구성된필터계수세트로복원샘플들을필터링할 수있다.
[36이 또다른실시예에서,비트스트림으로부터획득되는식별자가복수의후처리 파라미터세트를가리키는경우,복원부 (2070)는식별자가가리키는어느하나의 후처리파라미터세트에포함된필터계수들을이용하여복원샘플들의루마 값을필터링하고,식별자가가리키는다른하나의후처리파라미터세트에 포함된필터계수들을이용하여복원샘플들의크로마값을필터링할수도있다.
[361] 또다른실시예에서,획득부 (2010)는비트스트림으로부터어느하나의후처리 파라미터세트를가리키는식별자와,필터계수정보를획득할수도있다.이 경우,복원부 (2070)는식별자가가리키는후처리파라미터세트들에포함된 일부의필터계수와비트스트림을통해시그널링된필터계수를조합하고, 조합된필터계수세트로복원샘플들을필터링할수도있다.
[362] 일실시예에서,복원부 (2070)은적응적루프필터링된복원샘플을추가적으로 디블로킹필터링할수도있다.
[363] 한편,전술한바와같이,예측복호화부 (2050)는현재슬라이스에포함된
부호화단위를인터 예측에따라복호화할수있는데,일실시예에서,부호화 단위를복호화할때,현재슬라이스의경계를픽처경계로간주할수도있다.
[364] 일실시예에서 ,디코더가직접부호화단위의움직임벡터를도출하는 DMVR (Decoder- side Motion Vector Refinement)모드에서 ,예즉복호화부 (2050)는현재 부호화단위의움직임벡터를도출할때 ,써치 (search)범위를참조영상중현재 슬라이스와동일한위치의영역의경계로제한할수있다.
[365] 일실시예에서,비트스트림을통해시그널링된현재부호화단위의움직임 2020/175967 1»(:1^1{2020/002924 벡터가참조영상중현재슬라이스와동일한위치의영역의경계바깥의블록을 가리키는경우,현재슬라이스와동일한위치의영역을패딩처리하여예측 샘플들을획득할수도있다.
[366] 일실시예에서 ,예측복호화부 (2050)는 BIO(Bi-Optical Flow)처리모드에서 슬라이스의경계를픽처의경계로간주하고,현재부호화단위를예측복호화할 수있다. BIO(Bi-Optical Flow)처리모드는양방향예측을위한블록
기반 (block-wise)움직임보상에대해수행되는샘플기반 (sample-wise)의움직임 벡터개선처리를나타낸다.
[367] 한편,획득부 (2010)는비트스트림에포함된이진값들을
CABAC(Context-adaptive binary arithmetic coding)기반으로엔트로피코딩을할 때,슬라이스에포함된타일의개수가몇개인지를고려하여 WPP (Wave front Parallel Processing)기술을선택적으로적용할수있다. WPP란,병렬적인 부호화/복호화를위해서우상측의 CTU의처리가완료된이후에현재 CTU를 처리하는것이다.구체적으로, 모모는각행의첫번째 CTU의확률모델을상측 행의두번째 CTU의처리에의해획득된확률정보를이용하여설정한다.
[368] 획득부 (2010)는슬라이스에하나의타일만이포함되어 있는경우에는,타일에 포함된 CTU들에대한확률모델을 WPP에기반하여설정할수있고,슬라이스에 복수의타일이포함되어 있는경우에는,타일들에포함된 CTU들에대해서는 WPP기술을적용하지않을수있다.
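The WPP dependency of paragraphs [367]-[368] can be sketched with a sequential toy model: the entropy-coder state for the first CTU of each CTU row is seeded from the state saved after the second CTU of the row above, and WPP is skipped when the slice holds more than one tile. The callback-based state update is an assumption of this sketch, not the normative CABAC process.

```python
def wpp_process(ctu_rows, initial_state, process_ctu, tiles_in_slice=1):
    """Process CTU rows with the WPP seeding rule described above
    (toy model; process_ctu is a caller-supplied state update)."""
    if tiles_in_slice != 1:
        raise ValueError("WPP is not applied to a multi-tile slice")
    seed = initial_state
    for row in ctu_rows:
        state = seed
        for i, ctu in enumerate(row):
            state = process_ctu(ctu, state)
            if i == 1:
                seed = state       # saved for the first CTU of the next row
    return state

print(wpp_process([[1, 2, 3], [4, 5, 6]], 0, lambda ctu, s: s + ctu))   # -> 18
```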
[369] 도 32는일실시예에따른영상복호화방법을설명하기위한도면이다.
[37이 S3210단계에서 ,영상복호화장치 (2000)는비트스트림의시퀀스파라미터 세트로부터현재영상을포함하는영상시퀀스를위한복수의제 1참조영상 리스트를나타내는정보를획득한다.복수의제 1참조영상리스트는숏-텀참조 영상과롱-텀참조영상중적어도하나로이루어질수있다.
[371] S3220단계에서 ,영상복호화장치 (2000)는현재영상에서블록들과,적어도 하나의블록을포함하는블록그룹을설정한다.블록은타일일수있고,블록 그룹은슬라이스일수있다.
[372] 일실시예에서 ,영상복호화장치 (2000)는비트스트림으로부터획득된정보에 따라현재영상을복수의 CTU로분할하고,적어도하나의 CTU를포함하는타일 및적어도하나의타일을포함하는슬라이스를현재영상내에서설정할수 있다.
[373] 일실시예에서 ,영상복호화장치 (2000)는비트스트림으로부터획득된정보에 따라현재영상을복수의타일로분할하고,각타일을하나이상의 CTU로 분할할수있다.또한,블록결정부 (2030)는현재영상내에서적어도하나의 타일을포함하는슬라이스를설정할수있다.
[374] 일실시예에서 ,영상복호화장치 (2000)는비트스트림으로부터획득된정보에 따라현재영상을하나이상의슬라이스로분할하고,각슬라이스를하나이상의 타일로분할할수있다.그리고,블록결정부 (2030)는각각의타일을하나이상의 2020/175967 1»(:1^1{2020/002924
(〕111로분할할수있다.
[375] 전술한바와같이 ,영상복호화장치 (2000)는비트스트림으로부터 획득된주소 정보에 따라현재 영상내에서슬라이스들을설정할수도있다.
[376] 83230단계에서 ,영상복호화장치 (2000)는비트스트림의그룹헤더로부터 현재 영상내 현재의블록을포함하는현재블록그룹을위한인디케이터를 획득하고,상기 인디케이터가가리키는제 1참조영상리스트에기반하여 제 2 참조영상리스트를획득한다.영상복호화장치 (2000)는비트스트림으로부터 인디케이터와함께제 2참조영상리스트의 획득을위한갱신정보를더 획득할 수있다.갱신정보는,인디케이터가가리키는제 1참조영상리스트에서 제거될 참조영상의汉犯관련값,제 2참조영상리스트에추가될참조영상의汉犯 관련값,제 1참조영상리스트에서제거될참조영상의汉犯관련값과제 2참조 영상리스트에추가될참조영상의 ?00관련값사이의차분값및 영상들의 순서 변경을위한정보중적어도하나를포함할수있다.
[377] 83240단계에서 ,영상복호화장치 (2000)는제 2참조영상리스트에포함된 참조영상에기초하여 현재블록의하위블록을예측복호화한다.
[378] 예측복호화결과,하위블록에 대응하는예측샘플들이 획득되면,영상복호화 장치 (2000)는복수의후처리파라미터 세트중적어도하나를가리키는식별자에 따라예측샘플들을루마매핑하기 위한후처리 파라미터세트를특정할수 있다.그리고,영상복호화장치 (2000)는식별자가가리키는후처리 파라미터 세트에포함된파라미터들로예측샘플들의루마값을변경할수있다.
[379] 일실시예에서 ,영상복호화장치 (2000)는예측복호화결과획득된예측
샘플들또는루마매핑된예측샘플들에기초하여복원샘플들을획득하고,복원 샘플들을적응적루프필터링할수있다.이를위해,영상복호화장치 (2000)는 복수의후처리파라미터 세트중적어도하나를가리키는식별자에 따라적응적 루프필터링을위한후처리파라미터 세트를특정할수있다.그리고,영상 복호화장치 (2000)는식별자가가리키는후처리파라미터 세트에포함된 파라미터들로복원샘플들을필터링할수있다.
[38이 도 33은일실시예에 따른영상부호화장치 (3300)의구성을도시하는
도면이다.
[381] 도 33을참조하면,영상부호화장치 (3300)는블록결정부 (3310),예측
부호화부 (3330),복원부 (3350)및생성부 (3370)를포함한다.도 33에도시된 생성부 (3370)는도 2에도시된비트스트림 생성부 ( 0)에 대응하고,블록 결정부 (3310),예측부호화부 (3330),복원부 (3350)는도 2에도시된
부호화부 (220)에 대응할수있다.
[382] 일실시예에 따른블록결정부 (3310),예측부호화부 (3330),복원부 (3350)및 생성부 (3370)는적어도하나의프로세서로구현될수있다.영상부호화 장치 (3300)는블록결정부 (3310),예측부호화부 (3330),복원부 (3350)및 생성부 (3370)의 입출력 데이터를저장하는하나이상의 데이터 2020/175967 1»(:1^1{2020/002924 저장부 (미도시)를포함할수있다.또한,영상부호화장치 (3300)는,데이터 저장부 (미도시 )의 데이터 입출력을제어하는메모리제어부 (미도시 )를포함할 수도있다.
[383] 블록결정부 (3310)는현재 영상을블록들로분할하고,현재 영상내에서 적어도 하나의블록을포함하는블록그룹들을설정한다.여기서,블록은타일에 해당할 수있고,블록그룹은슬라이스에해당할수있다.슬라이스는타일그룹으로 참조될수도있다.
[384] 도 3내지도 16을참조하여 설명한바와같이,블록결정부 (3310)는현재
영상을분할하여 변환단위,부호화단위,최대부호화단위,타일,슬라이스등을 결정할수있다.
[385] 일실시예에서,블록결정부 (3310)는현재 영상을복수의 0X1로분할하고, 적어도하나의 0X1를포함하는타일및적어도하나의타일을포함하는 슬라이스를현재 영상내에서설정할수있다.
[386] 일실시예에서,블록결정부 (3310)는현재 영상을복수의타일로분할하고,각 타일을하나이상의 0X1로분할할수있다.또한,블록결정부 (3310)는현재 영상 내에서 적어도하나의 타일을포함하는슬라이스를설정할수있다.
[387] 일실시예에서,블록결정부 (3310)는현재 영상을하나이상의슬라이스로 분할하고,각슬라이스를하나이상의타일로분할할수있다.그리고,블록 결정부 (3310)는각각의 타일을하나이상의 0X1로분할할수있다.
[388] 예측부호화부 (3330)는현재 영상으로부터분할된블록들의하위블록들을 인터 예측또는인트라예측하여하위블록들에 대응하는예측샘플들을 획득한다.여기서 ,하위블록은최대부호화단위 ,부호화단위 및변환단위중 적어도하나일수있다.
[389] 예측부호화부 (3330)는인터 예측또는인트라예측을통해부호화단위들을 예측부호화할수있는데,인터 예측에 의하면,움직임 벡터가가리키는참조 영상내참조블록에기반하여 현재부호화단위의 예측샘플이 획득되고,예측 샘플과현재부호화단위의차이에 해당하는잔차데이터가비트스트림을통해 영상복호화장치 (2000)로전송될수있다.예측모드에따라비트스트림에잔차 데이터가포함되지 않을수있다.
[390] 이하에서는,인터 예측에 이용되는참조영상리스트를구성하는방법에 대해 설명한다.
[391] 일실시예에서,예측부호화부 (3330)는현재 영상을포함하는영상시퀀스를 위한복수의제 1참조영상리스트를구성할수있다.예측부호화부 (3330)는 영상시퀀스에서 이용되는복수의제 1참조영상리스트중적어도하나를 선택한다.예측부호화부 (3330)는복수의제 1참조영상리스트중현재 슬라이스에서 이용되는제 1참조영상리스트를선택할수있다.그리고,예측 부호화부 (3330)는선택한제 1참조영상리스트로부터갱신된제 2참조영상 리스트를획득한다. 2020/175967 1»(:1^1{2020/002924
[392] 제 2참조영상리스트는,제 1참조영상리스트에포함된참조영상들중
적어도일부가다른참조영상으로대체되거나,참조영상들의적어도일부의 순서가변경되거나,새로운참조영상이제 1참조영상리스트에추가됨에따라 획득될수있다.
[393] 예측부호화부 (3330)는제 2참조영상리스트가획득되면,제 2참조영상
리스트에포함된참조영상들중적어도하나를이용하여슬라이스에포함된 부호화단위들을인터 예측에따라부호화할수있다.
[394] 예측부호화부 (3330)는영상시퀀스에이용되는복수의제 1참조영상리스트 중현재슬라이스를위해선택된제 1참조영상리스트이외의제 1참조영상 리스트,및제 2참조영상리스트를이용하여다음슬라이스에포함된부호화 단위들을예측부호화할수있다.다시말하면,현재슬라이스에서획득된제 2 참조영상리스트가다음슬라이스에서도이용될수있다.
[395] 이하에서는,제 1참조영상리스트로부터갱신된제 2참조영상리스트를
획득하는방법에대해설명한다.
[396] 일실시예에서,예측부호화부 (3330)는제 1참조영상리스트에포함된참조 영상들중적어도일부를다른참조영상으로변경하여제 2참조영상리스트를 획득할수있다.
[397] 일실시예에서,예측부호화부 (3330)는제 1참조영상리스트에포함된참조 영상들중특정타입의참조영상,예를들어,롱-텀타입의참조영상만을다른 롱-텀참조영상으로교체할수있다.즉,제 1참조영상리스트에포함된참조 영상들중숏-텀참조영상은제 2참조영상리스트에서도그대로유지되고, 롱-텀참조영상만이다른롱-텀참조영상으로교체될수있다.
[398] 일실시예에서 ,제 1참조영상리스트에포함된참조영상들의타입과
무관하게,제 1참조영상리스트에포함된참조영상들중적어도일부가다른 참조영상으로교체될수도있다.구현예에따라,새로운참조영상은,제 1참조 영상리스트로부터제거될참조영상의순서에맞게제 2참조영상리스트에 추가될수있다.즉,인덱스 1이할당된롱-텀참조영상이제 1참조영상 리스트로부터제거되는경우,새로운참조영상에도인덱스 1이할당될수있다.
[399] 일실시예에서,예측부호화부 (3330)는영상시퀀스를위한복수의제 1참조 영상리스트중현재슬라이스를위해선택된제 1참조영상리스트내참조 영상들중특정타입의참조영상들을제외시켜제 2참조영상리스트를획득할 수도있다.
[400] 일실시예에서,예측부호화부 (3330)는영상시퀀스를위한복수의제 1참조 영상리스트중현재슬라이스를위해선택된제 1참조영상리스트내참조 영상들중적어도일부의순서를변경하여제 2참조영상리스트를획득할수도 있다.
[401] In an embodiment, the prediction encoder 3330 may obtain the second reference image list by using a first reference image list including only short-term reference images and a first reference image list including only long-term reference images. For example, the prediction encoder 3330 may include, in the second reference image list, the short-term reference images included in the one first reference image list and the long-term reference images included in the other first reference image list.
[402] Also, in an embodiment, when the first reference image list includes only short-term reference images, the prediction encoder 3330 may obtain a second reference image list including the short-term reference images of the first reference image list and a new long-term reference image. Conversely, when the first reference image list includes only long-term reference images, the prediction encoder 3330 may obtain a second reference image list including the long-term reference images of the first reference image list and a new short-term reference image.
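Purely as an illustration of the two cases in paragraphs [401] and [402], the sketch below forms a second list either by concatenating the short-term references of one first list with the long-term references of another, or by appending a single new long-term reference to a short-term-only first list. Representing references by POC values, and the function names, are assumptions.

# Illustrative sketch only: two ways of forming a second reference image list
# described in paragraphs [401]-[402]. POC values stand in for reference images.
from typing import List

def combine_short_and_long(short_term_list: List[int], long_term_list: List[int]) -> List[int]:
    """Short-term references first, long-term references appended after them."""
    return list(short_term_list) + list(long_term_list)

def add_new_long_term(short_term_only_list: List[int], new_long_term_poc: int) -> List[int]:
    """A first list holding only short-term references gains one new long-term reference."""
    return list(short_term_only_list) + [new_long_term_poc]

if __name__ == "__main__":
    print(combine_short_and_long([16, 12, 8], [0]))  # -> [16, 12, 8, 0]
    print(add_new_long_term([16, 12, 8], 32))        # -> [16, 12, 8, 32]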
[403] When construction of the second reference image list is completed, the prediction encoder 3330 may inter-predict the coding units based on a reference image included in the second reference image list. As a result of the inter prediction, prediction samples corresponding to the coding units may be obtained.
[404] The reconstructor 3350 obtains reconstructed samples of the coding units by using the prediction samples. A reconstructed image including the reconstructed samples may be stored for use as a reference image of a subsequent image.
[405] In an embodiment, before obtaining the reconstructed samples, the reconstructor 3350 may apply luma mapping to the prediction samples of the coding units. The reconstructor 3350 may obtain parameters for the luma mapping from a plurality of post-processing parameter sets.
[406] Each of the plurality of post-processing parameter sets may include parameters used for luma mapping or for adaptive loop filtering, which is described below. In other words, some of the post-processing parameter sets include parameters used for luma mapping, and others include parameters used for adaptive loop filtering. For example, at least one parameter set may include parameters used for luma mapping, and another post-processing parameter set may include parameters used for adaptive loop filtering. The reconstructor 3350 may generate the plurality of post-processing parameter sets consisting of parameters used for luma mapping or parameters used for adaptive loop filtering. As described above, the plurality of post-processing parameter sets may be signalled to the image decoding apparatus 2000 via the bitstream.
[407] The reconstructor 3350 may obtain parameters from a post-processing parameter set selected from among the plurality of post-processing parameter sets, and may change the luma values of the prediction samples by using the obtained parameters.
[408] In an embodiment, the reconstructor 3350 may modify the parameters of the post-processing parameter set selected from among the plurality of post-processing parameter sets, and may change the luma values of the prediction samples by using the modified parameters.
[409] Also, in an embodiment, the reconstructor 3350 may construct a new parameter set by combining some of the parameters included in two or more of the plurality of post-processing parameter sets, and may change the luma values of the prediction samples by using the parameters of the newly constructed parameter set.
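For illustration only, the Python sketch below builds a new parameter set by taking part of its parameters from each of two signalled post-processing parameter sets, and optionally modifying one parameter, in the spirit of paragraphs [407] to [409]. The dictionary representation, key names, and the scaling used as a "modification" are assumptions.

# Illustrative sketch only: combining and modifying post-processing parameters.
# Representation and names are assumptions, not the apparatus's actual syntax.
from typing import Dict, List, Set

def combine_param_sets(set_a: Dict[str, List[float]], set_b: Dict[str, List[float]],
                       keys_from_a: Set[str]) -> Dict[str, List[float]]:
    """Take the keys listed in keys_from_a from set_a and the remaining keys from set_b."""
    return {k: (set_a[k] if k in keys_from_a else set_b[k])
            for k in set_a.keys() | set_b.keys()}

def scale_params(param_set: Dict[str, List[float]], key: str, factor: float) -> Dict[str, List[float]]:
    """Return a copy of the set with one parameter array scaled (a simple modification)."""
    modified = dict(param_set)
    modified[key] = [v * factor for v in param_set[key]]
    return modified

if __name__ == "__main__":
    a = {"pivots": [0, 128, 255], "mapped": [0, 96, 255]}
    b = {"pivots": [0, 64, 255], "mapped": [0, 80, 255]}
    combined = combine_param_sets(a, b, keys_from_a={"pivots"})
    print(scale_params(combined, "mapped", 1.05))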
[410] The reconstructor 3350 obtains reconstructed samples corresponding to the current coding unit by using the prediction samples generated as a result of the prediction encoding or the luma-mapped prediction samples. When the reconstructed samples are obtained, the reconstructor 3350 may apply adaptive loop filtering to the reconstructed samples.
[411] As described above, some of the post-processing parameter sets may include parameters used for luma mapping, and others may include parameters used for adaptive loop filtering (for example, filter coefficients). The reconstructor 3350 may filter the reconstructed samples by using parameters obtained from at least one of the plurality of post-processing parameter sets.
[412] In an embodiment, the reconstructor 3350 may modify the parameters obtained from one of the plurality of post-processing parameter sets, and may filter the reconstructed samples by using the modified parameters.
[413] Also, in an embodiment, the reconstructor 3350 may construct a new parameter set by combining some of the parameters included in two or more of the plurality of post-processing parameter sets, and may filter the reconstructed samples by using the parameters of the newly constructed parameter set.
[414] Also, in an embodiment, the reconstructor 3350 may filter the luma values of the reconstructed samples by using one of the plurality of post-processing parameter sets, and may filter the chroma values of the reconstructed samples by using another post-processing parameter set.
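As a minimal sketch of paragraph [414], the Python fragment below filters reconstructed luma samples with coefficients from one parameter set and reconstructed chroma samples with coefficients from another. A tiny one-dimensional symmetric filter is used purely for brevity; the real adaptive loop filter shape and all names here are assumptions.

# Illustrative sketch only: separate filter parameters for luma and chroma.
# The 1-D, 3-tap symmetric filter is a simplification made for this example.
from typing import List, Tuple

def filter_1d(samples: List[float], coeffs: List[float]) -> List[float]:
    """Apply a small symmetric FIR filter (coeffs = [centre weight, neighbour weight])."""
    c0, c1 = coeffs
    out = []
    for i, s in enumerate(samples):
        left = samples[i - 1] if i > 0 else s
        right = samples[i + 1] if i < len(samples) - 1 else s
        out.append(c1 * left + c0 * s + c1 * right)
    return out

def adaptive_loop_filter(luma: List[float], chroma: List[float],
                         luma_params: List[float],
                         chroma_params: List[float]) -> Tuple[List[float], List[float]]:
    """Different parameter sets may be used for the luma and chroma samples."""
    return filter_1d(luma, luma_params), filter_1d(chroma, chroma_params)

if __name__ == "__main__":
    y, c = adaptive_loop_filter([100, 102, 98, 101], [50, 52, 49, 51],
                                luma_params=[0.8, 0.1], chroma_params=[0.9, 0.05])
    print(y, c)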
[415] Meanwhile, when inter-predicting a coding unit included in the current slice, the prediction encoder 3330 may regard the boundary of the current slice as a picture boundary.
[416] In an embodiment, when deriving a motion vector of the current coding unit, the prediction encoder 3330 may restrict the search range to the boundary of the region of the reference image that is co-located with the current slice.
[417] In an embodiment, in a bi-directional optical flow (BIO) processing mode, the prediction encoder 3330 may regard the boundary of the slice as the boundary of the picture and prediction-encode the current coding unit.
[418] The generator 3370 generates a bitstream including information used for encoding the image. As described above, the bitstream may include a sequence parameter set, a picture parameter set, a group header, a block parameter set, and at least one post-processing parameter set.
[419] The information included in the bitstream generated by the generator 3370 has been described above in relation to the image decoding apparatus 2000, and thus a detailed description thereof is omitted.
[420] Meanwhile, the generator 3370 may entropy-code binary values corresponding to syntax elements based on context-adaptive binary arithmetic coding (CABAC). In doing so, the generator 3370 may selectively apply wavefront parallel processing (WPP) in consideration of how many tiles are included in the slice. When the slice includes only one tile, the generator 3370 may set the probability models for the CTUs included in the tile based on WPP; when the slice includes a plurality of tiles, the generator 3370 may not apply WPP to the CTUs included in the tiles.
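As a minimal sketch of the decision just described, the fragment below switches WPP-style probability-model initialisation on only when the slice contains a single tile. No actual CABAC engine is shown; the function name and representation are assumptions.

# Illustrative sketch only: use WPP-based probability-model initialisation only
# when the slice holds exactly one tile, as described in paragraph [420].
from typing import List

def use_wpp_for_slice(tiles_in_slice: List[int]) -> bool:
    """Return True only when the slice contains a single tile."""
    return len(tiles_in_slice) == 1

if __name__ == "__main__":
    print(use_wpp_for_slice([3]))        # single tile   -> True
    print(use_wpp_for_slice([0, 1, 2]))  # several tiles -> False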
[421] FIG. 34 is a diagram for describing an image encoding method according to an embodiment.
[422] In operation S3410, the image encoding apparatus 3300 constructs a plurality of first reference image lists for an image sequence including the current image. Each of the plurality of first reference image lists may consist of at least one of short-term reference images and long-term reference images.
[423] In operation S3420, the image encoding apparatus 3300 sets, in the current image, blocks and block groups each including at least one block. A block may be a tile, and a block group may be a slice.
[424] In an embodiment, the image encoding apparatus 3300 may split the current image into a plurality of CTUs, and may set, within the current image, tiles each including at least one CTU and slices each including at least one tile.
[425] In an embodiment, the image encoding apparatus 3300 may split the current image into a plurality of tiles and split each tile into one or more CTUs. The image encoding apparatus 3300 may also set, within the current image, slices each including at least one tile.
[426] In an embodiment, the image encoding apparatus 3300 may split the current image into one or more slices and split each slice into one or more tiles. The image encoding apparatus 3300 may then split each tile into one or more CTUs.
[427] In operation S3430, the image encoding apparatus 3300 selects, from among the plurality of first reference image lists, a first reference image list for the current block group including the current block in the current image, and obtains a second reference image list based on the selected first reference image list.
[428] In operation S3440, the image encoding apparatus 3300 prediction-encodes a lower block included in the current block based on a reference image included in the second reference image list.
[429] When prediction samples corresponding to the lower block are obtained as a result of the prediction encoding, the image encoding apparatus 3300 may change luma values of the prediction samples by using parameters included in at least one of a plurality of post-processing parameter sets.
[430] In an embodiment, the image encoding apparatus 3300 may obtain reconstructed samples based on the prediction samples obtained as a result of the prediction encoding, or based on the luma-mapped prediction samples, and may apply adaptive loop filtering to the reconstructed samples. To this end, the image encoding apparatus 3300 may filter the reconstructed samples by using parameters included in at least one of the plurality of post-processing parameter sets.
[431] Meanwhile, the above-described embodiments of the present disclosure can be written as a computer-executable program, and the written program can be stored in a medium.
[432] The medium may continuously store the computer-executable program, or may temporarily store it for execution or download. The medium may be any of various recording means or storage means in the form of single or combined hardware, and is not limited to a medium directly connected to a certain computer system; it may be distributed over a network. Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and media configured to store program instructions, including ROM, RAM, and flash memory. Other examples of the medium include recording media or storage media managed by app stores that distribute applications, or by sites or servers that supply or distribute various other kinds of software.
[433] Although the technical idea of the present disclosure has been described in detail with reference to preferred embodiments, the technical idea of the present disclosure is not limited to the above embodiments, and various modifications and changes may be made thereto by those of ordinary skill in the art within the scope of the technical idea of the present disclosure.

Claims

[Claim 1] An image decoding method comprising:
obtaining, from a sequence parameter set of a bitstream, information indicating a plurality of first reference image lists for an image sequence including a current image;
obtaining, from a group header of the bitstream, an indicator for a current block group including a current block in the current image;
obtaining a second reference image list based on a first reference image list indicated by the indicator from among the plurality of first reference image lists; and
prediction-decoding a lower block of the current block based on a reference image included in the second reference image list.
[Claim 2] The image decoding method of claim 1, wherein lower blocks included in a next block group in the current image are prediction-decoded based on the second reference image list and a first reference image list other than the first reference image list indicated by the indicator from among the plurality of first reference image lists.
[Claim 3] The image decoding method of claim 1, wherein the first reference image list indicated by the indicator includes only reference images of a first type, and
the obtaining of the second reference image list comprises obtaining the second reference image list by adding, to the first reference image list indicated by the indicator, a reference image of a second type indicated by a POC-related value obtained from the group header.
[Claim 4] The image decoding method of claim 1, wherein the obtaining of the second reference image list comprises obtaining the second reference image list by changing an order of at least some of the reference images included in the first reference image list indicated by the indicator.
[Claim 5] The image decoding method of claim 1, wherein the first reference image list indicated by the indicator includes a reference image of a first type and a reference image of a second type, and
the obtaining of the second reference image list comprises obtaining the second reference image list by excluding the reference image of the second type from the first reference image list indicated by the indicator.
[Claim 6] The image decoding method of claim 1, wherein the first reference image list indicated by the indicator includes a reference image of a first type and a reference image of a second type, and
the obtaining of the second reference image list comprises obtaining the second reference image list by excluding the reference image of the second type from the first reference image list indicated by the indicator and adding, to the first reference image list indicated by the indicator, a reference image of the second type indicated by a POC-related value obtained from the group header.
[Claim 7] The image decoding method of claim 1, wherein the obtaining of the second reference image list comprises obtaining the second reference image list including a reference image of a first type included in one reference image list indicated by the indicator and a reference image of a second type included in another reference image list indicated by the indicator.
[Claim 8] The image decoding method of claim 7, wherein reference images of one of the first type and the second type are assigned indices larger than indices assigned to reference images of the other type.
[Claim 9] The image decoding method of claim 7, further comprising obtaining, from the group header, order information of the reference images of the first type and the reference images of the second type,
wherein the reference images of the first type and the reference images of the second type are assigned indices according to the order information.
[Claim 10] The image decoding method of claim 1, further comprising obtaining, from the group header, a difference value between a POC-related value of at least some of the reference images included in the first reference image list indicated by the indicator and a POC-related value of at least some of the reference images to be included in the second reference image list,
wherein the obtaining of the second reference image list comprises obtaining the second reference image list by replacing at least some of the reference images included in the first reference image list indicated by the indicator based on the obtained difference value.
[Claim 11] The image decoding method of claim 1, further comprising:
determining a plurality of blocks in the current image;
obtaining address information about block groups from the bitstream; and
setting, in the current image according to the obtained address information, block groups each including at least one block of the plurality of blocks,
wherein the current block is one of the plurality of blocks, and the current block group is one of the block groups.
[Claim 12] The image decoding method of claim 11, wherein the address information includes identification information of a lower-right block among the blocks included in each of the block groups, and
the setting of the block groups comprises:
setting a first block group including an upper-left block located at an upper-left position among the plurality of blocks and a lower-right block indicated by the identification information of the lower-right block;
identifying an upper-left block of a second block group based on identification information of the blocks included in the first block group; and
setting the second block group including a lower-right block indicated by the identification information of the lower-right block and the identified upper-left block.
[Claim 13] The image decoding method of claim 1, further comprising:
obtaining, from the bitstream, at least one post-processing parameter set for luma mapping;
obtaining, from a group header or a picture parameter set of the bitstream, identification information indicating a post-processing parameter set to be applied to luma mapping of a prediction sample of the lower block obtained as a result of the prediction decoding; and
luma-mapping the prediction sample according to the post-processing parameter set indicated by the identification information.
[Claim 14] An image decoding apparatus comprising:
an obtainer configured to obtain, from a sequence parameter set of a bitstream, information indicating a plurality of first reference image lists for an image sequence including a current image, and to obtain, from a group header of the bitstream, an indicator for a current block group including a current block in the current image; and
a prediction decoder configured to obtain a second reference image list based on a first reference image list indicated by the indicator from among the plurality of first reference image lists, and to prediction-decode a lower block of the current block based on a reference image included in the second reference image list.
[Claim 15] An image encoding method comprising:
constructing a plurality of first reference image lists for an image sequence including a current image;
selecting, from among the plurality of first reference image lists, a first reference image list for a current block group including a current block in the current image;
obtaining a second reference image list based on the selected first reference image list; and
prediction-encoding a lower block of the current block based on a reference image included in the second reference image list.
PCT/KR2020/002924 2019-02-28 2020-02-28 영상의 부호화 및 복호화 장치, 및 이에 의한 영상의 부호화 및 복호화 방법 WO2020175967A1 (ko)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020217027776A KR20210122818A (ko) 2019-02-28 2020-02-28 영상의 부호화 및 복호화 장치, 및 이에 의한 영상의 부호화 및 복호화 방법
US17/434,657 US20230103665A1 (en) 2019-02-28 2020-02-28 Apparatuses for encoding and decoding image, and methods for encoding and decoding image thereby
EP20763453.6A EP3934252A4 (en) 2019-02-28 2020-02-28 APPARATUS FOR ENCODING AND DECODED AN IMAGE, AND METHODS FOR ENCODING AND DECODED AN IMAGE THEREOF
BR112021016926A BR112021016926A2 (pt) 2019-02-28 2020-02-28 Método de decodificação de imagem, aparelho de decodificação de imagem, e método de codificação de imagem
MX2021010368A MX2021010368A (es) 2019-02-28 2020-02-28 Aparatos para codificar y decodificar imagenes, y metodos para codificar y decodificar imagenes mediante los mismos.
CN202080017337.7A CN113508590A (zh) 2019-02-28 2020-02-28 用于对图像进行编码和解码的设备及其用于对图像进行编码和解码的方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962811764P 2019-02-28 2019-02-28
US62/811,764 2019-02-28

Publications (1)

Publication Number Publication Date
WO2020175967A1 true WO2020175967A1 (ko) 2020-09-03

Family

ID=72238536

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002924 WO2020175967A1 (ko) 2019-02-28 2020-02-28 영상의 부호화 및 복호화 장치, 및 이에 의한 영상의 부호화 및 복호화 방법

Country Status (7)

Country Link
US (1) US20230103665A1 (ko)
EP (1) EP3934252A4 (ko)
KR (1) KR20210122818A (ko)
CN (1) CN113508590A (ko)
BR (1) BR112021016926A2 (ko)
MX (1) MX2021010368A (ko)
WO (1) WO2020175967A1 (ko)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220182623A1 (en) * 2019-03-13 2022-06-09 Lg Electronics Inc. Video encoding/decoding method and device using segmentation limitation for chroma block, and method for transmitting bitstream

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140016699A1 (en) * 2012-07-13 2014-01-16 Qualcomm Incorporated Reference picture list modification for video coding
US20170034508A1 (en) * 2012-09-07 2017-02-02 Vid Scale, Inc. Reference picture lists modification
KR20180032549A (ko) * 2012-07-02 2018-03-30 삼성전자주식회사 블록크기에 따라 인터 예측의 참조픽처리스트를 결정하는 비디오 부호화 방법과 그 장치, 비디오 복호화 방법과 그 장치
KR20180063033A (ko) * 2015-06-05 2018-06-11 인텔렉추얼디스커버리 주식회사 화면내 예측에서의 참조 화소 구성에 관한 부호화/복호화 방법 및 장치
KR20180080166A (ko) * 2015-06-05 2018-07-11 인텔렉추얼디스커버리 주식회사 화면 내 예측 모드에 대한 부호화/복호화 방법 및 장치

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130050863A (ko) * 2011-11-08 2013-05-16 삼성전자주식회사 참조리스트를 이용하는 예측을 수반하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치
US9973749B2 (en) * 2012-01-20 2018-05-15 Nokia Technologies Oy Method for video coding and an apparatus, a computer-program product, a system, and a module for the same
US9319679B2 (en) * 2012-06-07 2016-04-19 Qualcomm Incorporated Signaling data for long term reference pictures for video coding
US9894357B2 (en) * 2013-07-30 2018-02-13 Kt Corporation Image encoding and decoding method supporting plurality of layers and apparatus using same
US10104362B2 (en) * 2013-10-08 2018-10-16 Sharp Kabushiki Kaisha Image decoding device, image coding device, and coded data
CN115086652A (zh) * 2015-06-05 2022-09-20 杜比实验室特许公司 图像编码和解码方法和图像解码设备
KR20170058838A (ko) * 2015-11-19 2017-05-29 한국전자통신연구원 화면간 예측 향상을 위한 부호화/복호화 방법 및 장치
US10555002B2 (en) * 2016-01-21 2020-02-04 Intel Corporation Long term reference picture coding
KR102324844B1 (ko) * 2016-06-17 2021-11-11 세종대학교산학협력단 비디오 신호의 복호화 방법 및 이의 장치
WO2019089864A1 (en) * 2017-11-01 2019-05-09 Vid Scale, Inc. Overlapped block motion compensation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180032549A (ko) * 2012-07-02 2018-03-30 삼성전자주식회사 블록크기에 따라 인터 예측의 참조픽처리스트를 결정하는 비디오 부호화 방법과 그 장치, 비디오 복호화 방법과 그 장치
US20140016699A1 (en) * 2012-07-13 2014-01-16 Qualcomm Incorporated Reference picture list modification for video coding
US20170034508A1 (en) * 2012-09-07 2017-02-02 Vid Scale, Inc. Reference picture lists modification
KR20180063033A (ko) * 2015-06-05 2018-06-11 인텔렉추얼디스커버리 주식회사 화면내 예측에서의 참조 화소 구성에 관한 부호화/복호화 방법 및 장치
KR20180080166A (ko) * 2015-06-05 2018-07-11 인텔렉추얼디스커버리 주식회사 화면 내 예측 모드에 대한 부호화/복호화 방법 및 장치

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3934252A4 *

Also Published As

Publication number Publication date
EP3934252A4 (en) 2022-11-30
US20230103665A1 (en) 2023-04-06
CN113508590A (zh) 2021-10-15
MX2021010368A (es) 2021-10-01
EP3934252A1 (en) 2022-01-05
KR20210122818A (ko) 2021-10-12
BR112021016926A2 (pt) 2021-11-03

Similar Documents

Publication Publication Date Title
US11343498B2 (en) Method and apparatus for processing video signal
KR102613966B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
KR102441568B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
KR102410424B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
US11445185B2 (en) Image encoding/decoding method and device, and recording medium in which bitstream is stored
KR20190043482A (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
JP2020533822A (ja) 画像復号方法、画像符号化方法、及び、記録媒体
KR20230156294A (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
KR20180123674A (ko) 비디오 신호 처리 방법 및 장치
KR20180015598A (ko) 비디오 신호 처리 방법 및 장치
KR20180001478A (ko) 비디오 신호 처리 방법 및 장치
KR20180051424A (ko) 비디오 신호 처리 방법 및 장치
WO2020175913A1 (ko) 영상 신호 부호화/복호화 방법 및 이를 위한 장치
CN116866563A (zh) 图像编码/解码方法、存储介质以及图像数据的传输方法
KR20180059367A (ko) 비디오 신호 처리 방법 및 장치
KR102654647B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
CN112771862A (zh) 通过使用边界处理对图像进行编码/解码的方法和设备以及用于存储比特流的记录介质
KR20200010113A (ko) 지역 조명 보상을 통한 효과적인 비디오 부호화/복호화 방법 및 장치
WO2020175970A1 (ko) 크로마 성분을 예측하는 비디오 부호화 및 복호화 방법, 및 크로마 성분을 예측하는 비디오 부호화 및 복호화 장치
CN114342372A (zh) 帧内预测模式、以及熵编解码方法和装置
WO2020175914A1 (ko) 영상 신호 부호화/복호화 방법 및 이를 위한 장치
WO2020175967A1 (ko) 영상의 부호화 및 복호화 장치, 및 이에 의한 영상의 부호화 및 복호화 방법
CN113924773A (zh) 图像编码/解码方法和装置以及用于存储比特流的记录介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20763453

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217027776

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112021016926

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2020763453

Country of ref document: EP

Effective date: 20210928

ENP Entry into the national phase

Ref document number: 112021016926

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20210826