WO2020175967A1 - Apparatuses for encoding and decoding an image, and methods for encoding and decoding an image thereby - Google Patents

Apparatuses for encoding and decoding an image, and methods for encoding and decoding an image thereby

Info

Publication number
WO2020175967A1
Authority
WO
WIPO (PCT)
Prior art keywords
coding unit
image
list
unit
block
Prior art date
Application number
PCT/KR2020/002924
Other languages
English (en)
Korean (ko)
Inventor
최웅일
박민수
박민우
정승수
최기호
최나래
템즈아니쉬
표인지
Original Assignee
삼성전자주식회사
Priority date
Filing date
Publication date
Application filed by 삼성전자주식회사
Priority to KR1020217027776A (KR20210122818A)
Priority to US17/434,657 (US20230103665A1)
Priority to EP20763453.6A (EP3934252A4)
Priority to BR112021016926A (BR112021016926A2)
Priority to MX2021010368A (MX2021010368A)
Priority to CN202080017337.7A (CN113508590A)
Publication of WO2020175967A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N19/58 Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/463 Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • This disclosure relates to the field of encoding and decoding of images. More specifically, this disclosure relates to a method and apparatus for encoding an image, and a method and apparatus for decoding an image using the hierarchical structure of the image.
  • Each block of an image can be encoded and decoded, for example, through inter prediction or intra prediction.
  • Inter prediction is a method of compressing images by removing temporal redundancy between images, and motion estimation encoding is a typical example.
  • Motion estimation encoding predicts the blocks of the current image using at least one reference image.
  • The reference block that is most similar to the current block can be searched for within a predetermined search range by using a predetermined evaluation function.
  • The current block is predicted based on the reference block, and the prediction block generated as a result of the prediction is subtracted from the current block to create a residual block.
  • Interpolation may be performed on the reference image to generate pixels in sub-pel units smaller than the integer-pel unit, and inter prediction can be performed based on the pixels in the sub-pel unit.
  • A motion vector may be signaled to the decoder as a prediction motion vector and a differential motion vector derived through a predefined method.
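  • The motion-estimation steps just described can be pictured with a small sketch. The following Python code is an illustrative, simplified example rather than the disclosure's method: it searches a reference frame for the block with the smallest sum of absolute differences (SAD) within a search range and forms the residual block; the function name, the SAD evaluation function, and the fixed search range are assumptions made for illustration.

```python
import numpy as np

def motion_estimate(current, reference, bx, by, block=8, search=4):
    """Find the reference block most similar to the current block by SAD."""
    cur = current[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    h, w = reference.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + block > w or y + block > h:
                continue
            ref = reference[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(cur - ref).sum())   # evaluation function (here: SAD)
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    dx, dy = best_mv
    pred = reference[by + dy:by + dy + block, bx + dx:bx + dx + block].astype(np.int32)
    residual = cur - pred                         # residual block to be transformed and coded
    return best_mv, residual

# toy usage with a synthetic pair of frames
cur_frame = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
ref_frame = np.roll(cur_frame, shift=(1, 2), axis=(0, 1))
mv, res = motion_estimate(cur_frame, ref_frame, bx=8, by=8)
print("motion vector:", mv, "residual energy:", int(np.abs(res).sum()))
```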
  • An apparatus for encoding and decoding an image according to an embodiment, and a method for encoding and decoding an image thereby, address the technical task of encoding and decoding an image at a low bit rate using a hierarchical structure of the image.
  • An image decoding method according to an embodiment includes: acquiring, from a sequence parameter set of a bitstream, information representing a plurality of first reference picture lists for a picture sequence including a current picture; acquiring, from a group header of the bitstream, an indicator for a current block group including a current block in the current picture;
  • obtaining a second reference picture list based on the first reference picture list indicated by the indicator among the plurality of first reference picture lists; and predictively decoding a lower block of the current block based on a reference picture included in the second reference picture list.
  • An image encoding and decoding apparatus can encode and decode an image at a low bit rate using a hierarchical structure of the image.
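  • As a minimal sketch of the signaling just summarized (not the actual syntax of the disclosure), a decoder might hold the first reference picture lists parsed from the sequence parameter set and pick one of them per block group using the indicator parsed from the group header; the RefPic dataclass and the field names below are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RefPic:
    poc: int            # picture order count of the reference picture
    long_term: bool     # illustrative "second type" flag

def second_ref_list(first_lists: List[List[RefPic]], indicator: int) -> List[RefPic]:
    """Derive the second reference picture list for a block group.

    The indicator, parsed from the group (slice) header, selects one of the
    first reference picture lists signaled in the sequence parameter set.
    In the simplest case the second list is the selected first list itself.
    """
    return list(first_lists[indicator])

# usage: two SPS-level lists; this block group selects the second one
sps_lists = [
    [RefPic(poc=8, long_term=False), RefPic(poc=4, long_term=False)],
    [RefPic(poc=8, long_term=False), RefPic(poc=0, long_term=True)],
]
print(second_ref_list(sps_lists, indicator=1))
```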
  • FIG. 1 is a block diagram of an image decoding apparatus according to an embodiment.
  • FIG. 2 is a block diagram of an image encoding apparatus according to an embodiment.
  • FIG. 3 is a diagram illustrating a process of determining at least one coding unit by dividing a current coding unit by an image decoding apparatus according to an embodiment.
  • FIG. 4 is a diagram illustrating a process of determining at least one coding unit by dividing a coding unit in a non-square shape by an image decoding apparatus according to an embodiment.
  • FIG. 5 is a diagram illustrating a process of dividing a coding unit based on at least one of block type information and division type mode information by an image decoding apparatus according to an embodiment.
  • FIG. 6 illustrates a method by which the image decoding apparatus determines a predetermined coding unit from among an odd number of coding units, according to an embodiment.
  • FIG. 7 illustrates an order in which a plurality of coding units are processed when the image decoding apparatus determines the plurality of coding units by dividing a current coding unit, according to an embodiment.
  • FIG. 8 illustrates a process by which the image decoding apparatus determines that a current coding unit is divided into an odd number of coding units when the coding units cannot be processed in a predetermined order, according to an embodiment.
  • FIG. 9 illustrates a process by which the image decoding apparatus determines at least one coding unit by dividing a first coding unit, according to an embodiment.
  • FIG. 11 illustrates a process by which the image decoding apparatus divides a square coding unit when the division type mode information indicates division into four square-shaped coding units, according to an embodiment.
  • FIG. 12 illustrates that the processing order between a plurality of coding units may vary according to the process of dividing the coding units, according to an embodiment.
  • FIG. 13 illustrates a case in which a plurality of coding units are recursively divided, according to an embodiment.
  • FIG. 14 illustrates a depth and an index (hereinafter referred to as PID) for classifying coding units, which may be determined according to the shape and size of the coding units, according to an embodiment.
  • FIG. 15 illustrates a plurality of predetermined data units included in a picture according to an embodiment.
  • FIG. 16 shows combinations of shapes into which a coding unit can be divided, according to an embodiment.
  • FIG. 17 shows various shapes of coding units that can be determined based on division type mode information that can be expressed as a binary code, according to an embodiment.
  • FIG. 18 shows other shapes of coding units that can be determined based on division type mode information that can be expressed as a binary code, according to an embodiment.
  • FIG. 19 is a block diagram of an image encoding and decoding system that performs loop filtering.
  • FIG. 20 is a diagram showing a configuration of an image decoding apparatus according to an embodiment.
  • FIG. 21 is an exemplary diagram showing the structure of a bitstream generated according to the hierarchical structure of an image.
  • FIG. 22 is a diagram showing slices, tiles, and CTUs determined in the current image.
  • FIG. 23 is a diagram for explaining a method of setting slices in the current image.
  • FIG. 24 is a diagram for explaining another method of setting slices in the current image.
  • FIG. 25 is an exemplary diagram showing a plurality of first reference image lists acquired through a sequence parameter set.
  • FIG. 26 is a diagram for explaining a method of obtaining a second reference image list.
  • FIG. 27 is a diagram for explaining a method of acquiring a second reference image list.
  • FIG. 28 is a diagram for explaining another method of obtaining a second reference image list.
  • FIG. 29 is a diagram for explaining another method of obtaining a second reference image list.
  • FIG. 30 is a diagram for explaining another method of obtaining a second reference image list.
  • FIG. 31 shows a plurality of post-processing parameter sets used for luma mapping or adaptive loop filtering.
  • FIG. 32 is a diagram for explaining an image decoding method according to an embodiment.
  • FIG. 33 is a diagram showing a configuration of an image encoding apparatus according to an embodiment.
  • FIG. 34 is a diagram for explaining an image encoding method according to an embodiment.
  • Based on the first reference image list indicated by an indicator among the plurality of first reference image lists, lower blocks included in the next block group in the current picture may be predictively decoded.
  • In an embodiment, the first reference image list indicated by the indicator includes a first type of reference image and a second type of reference image, and the step of obtaining the second reference image list may include obtaining the second reference image list by excluding the second type of reference image from the first reference image list indicated by the indicator.
  • In an embodiment, the first reference image list indicated by the indicator includes a first type of reference image and a second type of reference image, and the step of obtaining the second reference image list may include excluding the second type of reference image from the first reference image list indicated by the indicator and obtaining the second reference image list by adding, to the first reference image list indicated by the indicator, a second type of reference image indicated by a POC-related value obtained from the group header.
  • In an embodiment, the first reference image list indicated by the indicator includes only the first type of reference image, and the step of acquiring the second reference image list may include acquiring the second reference image list by adding, to the first reference image list indicated by the indicator, a second type of reference image indicated by a POC-related value obtained from the group header.
  • In an embodiment, reference images of one type among the first type and the second type may be assigned an index larger than the index assigned to reference images of the other type.
  • In an embodiment, the image decoding method further comprises obtaining, from the group header, order information of the first-type reference images and the second-type reference images, and an index according to the order information may be assigned to the first-type reference images and the second-type reference images.
  • In an embodiment, the image decoding method further comprises obtaining, from the group header, difference values between POC-related values of at least some of the reference images included in the first reference image list indicated by the indicator and POC-related values of at least some of the reference images to be included in the second reference image list, and the step of obtaining the second reference image list may include obtaining the second reference image list by replacing at least some of the reference images included in the first reference image list indicated by the indicator, based on the obtained difference values.
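  • The list-modification variants described in the embodiments above might look like the illustrative sketch below. It redefines a small RefPic type like the one in the earlier sketch so the block is self-contained, and it treats long_term as the "second type" of reference image; the helper name and the exact interpretation of the POC-related values are assumptions, not the disclosure's syntax.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RefPic:
    poc: int
    long_term: bool   # treated here as the "second type" of reference picture

def modify_ref_list(first_list: List[RefPic],
                    poc_values: List[int],
                    poc_diffs: List[int]) -> List[RefPic]:
    """Build a second reference picture list from a selected first list.

    1) Drop the second-type (long-term) pictures carried in the first list.
    2) Append second-type pictures identified by POC-related values parsed
       from the group header.
    3) Replace the POC of some first-type entries by applying signaled
       difference values (one difference per leading entry, illustratively).
    """
    second = [p for p in first_list if not p.long_term]             # step 1
    second += [RefPic(poc=v, long_term=True) for v in poc_values]   # step 2
    for i, d in enumerate(poc_diffs):                                # step 3
        if i < len(second) and not second[i].long_term:
            second[i] = RefPic(poc=second[i].poc + d, long_term=False)
    return second

first = [RefPic(16, False), RefPic(12, False), RefPic(0, True)]
print(modify_ref_list(first, poc_values=[4], poc_diffs=[-8]))
```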
  • In an embodiment, the address information includes identification information of the lower right block among the blocks included in each of the block groups.
  • The step of setting the block groups comprises: setting a first block group including an upper left block located at the upper left of the plurality of blocks and the lower right block indicated by the identification information of the lower right block; identifying the upper left block of a second block group based on the identification information of the blocks included in the first block group; and setting the second block group including the identified upper left block and the lower right block indicated by the identification information of the lower right block.
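  • One simple way to picture this rectangular block-group derivation is the sketch below: given the block grid size and the identification (modeled here as a raster-scan index) of each group's lower-right block, it derives each group's upper-left block from the blocks already assigned to previous groups. This is an illustrative reading of the embodiment with assumed helper names, not the disclosure's exact procedure.

```python
from typing import List, Set

def block_groups(grid_w: int, grid_h: int, bottom_right_ids: List[int]) -> List[Set[int]]:
    """Derive rectangular block groups (e.g., slices of CTUs) from the raster
    index of each group's lower-right block. The upper-left block of the first
    group is block 0; each later group's upper-left block is the first block,
    in raster order, not covered by previously set groups."""
    assigned: Set[int] = set()
    groups: List[Set[int]] = []
    for br in bottom_right_ids:
        ul = next(i for i in range(grid_w * grid_h) if i not in assigned)
        ul_x, ul_y = ul % grid_w, ul // grid_w
        br_x, br_y = br % grid_w, br // grid_w
        group = {y * grid_w + x
                 for y in range(ul_y, br_y + 1)
                 for x in range(ul_x, br_x + 1)}
        assigned |= group
        groups.append(group)
    return groups

# usage: a 4x2 grid of blocks split into two rectangular groups
print(block_groups(4, 2, bottom_right_ids=[5, 7]))
```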
  • In an embodiment, the image decoding method further includes acquiring, from the bitstream, at least one post-processing parameter set for luma mapping, and a post-processing parameter set applied to luma mapping for the predicted samples of the lower block obtained as a result of the predictive decoding may be obtained.
  • An apparatus for decoding an image according to an embodiment acquires, from a sequence parameter set of a bitstream, information representing a plurality of first reference picture lists for a picture sequence including a current picture.
  • An image encoding method according to an embodiment includes: constructing a plurality of first reference image lists for an image sequence including a current image; selecting, from among the plurality of first reference image lists, a first reference image list for a current block group including a current block in the current image; acquiring a second reference image list based on the selected first reference image list; and predictively encoding a lower block of the current block based on reference images included in the second reference image list.
  • When one component is referred to as being "connected" to another component, the one component may be directly connected to the other component, but unless stated otherwise, it should be understood that it may also be connected or linked through another component in between.
  • Two or more components expressed as '~unit', 'module', and the like may be merged into one component, or one component may be subdivided into two or more components for each more subdivided function.
  • Each component may additionally perform some or all of the functions that other components are responsible for, in addition to its own main function, and some of the main functions that each component is responsible for may be performed exclusively by another component.
  • A "sample" or "signal" is data assigned to a sampling position of an image and means data to be processed.
  • For example, pixel values in an image in the spatial domain and transform coefficients in the transform domain may be samples.
  • A unit including at least one such sample can be defined as a block.
  • Hereinafter, an image encoding method and apparatus, and an image decoding method and apparatus based on coding units and transform units of a tree structure according to an embodiment are disclosed.
  • FIG. 1 is a block diagram of an image decoding apparatus 100 according to an embodiment.
  • the image decoding apparatus 100 may include a bitstream acquisition unit 110 and a decoding unit 120.
  • the bitstream acquisition unit 110 and the decoding unit 120 may include at least one processor.
  • the bitstream acquisition unit 110 and the decoding unit 120 may include a memory for storing instructions to be executed by at least one processor.
  • the bitstream acquisition unit 110 may receive a bitstream.
  • The bitstream includes information encoded by the image encoding apparatus 200 to be described later.
  • the bitstream may be transmitted from the image encoding apparatus 200.
  • The image encoding apparatus 200 and the image decoding apparatus 100 may be connected by wire or wirelessly.
  • the bitstream acquisition unit 110 may receive a bitstream through wired or wireless communication.
  • The bitstream acquisition unit 110 may receive the bitstream from a storage medium such as optical media or a hard disk.
  • the decoder 120 may restore an image based on information obtained from the received bitstream.
  • the decoding unit 120 may obtain a syntax element for restoring an image from the bitstream.
  • the decoding unit 120 may restore an image based on the syntax element.
  • the acquisition unit 110 may receive a bitstream.
  • The image decoding apparatus 100 may perform an operation of acquiring a bin string corresponding to the division mode of a coding unit from the bitstream.
  • The image decoding apparatus 100 may perform an operation of determining a division rule for the coding unit.
  • The image decoding apparatus 100 may perform an operation of dividing the coding unit into a plurality of coding units based on at least one of the bin string corresponding to the division mode and the division rule.
  • The image decoding apparatus 100 may determine an allowable first range of the size of the coding unit according to the ratio of the width and height of the coding unit, in order to determine the division rule.
  • The image decoding apparatus 100 may determine an allowable second range of the size of the coding unit according to the division mode of the coding unit, in order to determine the division rule.
  • a picture can be divided into one or more slices or one or more tiles.
  • One slice or a tile can be a sequence of one or more coding tree units (CTU). Accordingly, one slice may include one or more tiles, and one slice may include one or more maximum coding units.
  • a slice including one or a plurality of tiles may be determined within a picture.
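  • To make the picture/slice/tile/CTU relationship concrete, the short sketch below computes how many CTUs a picture occupies and which CTU a given sample falls into. The 128x128 CTU size is only an example value used for illustration, not one mandated by the disclosure.

```python
import math

def ctu_grid(pic_w: int, pic_h: int, ctu_size: int = 128):
    """Number of CTU columns and rows needed to cover the picture."""
    return math.ceil(pic_w / ctu_size), math.ceil(pic_h / ctu_size)

def ctu_index(x: int, y: int, pic_w: int, ctu_size: int = 128) -> int:
    """Raster-scan index of the CTU containing luma sample (x, y)."""
    cols = math.ceil(pic_w / ctu_size)
    return (y // ctu_size) * cols + (x // ctu_size)

cols, rows = ctu_grid(1920, 1080)            # 15 x 9 CTUs for a 1080p picture
print(cols, rows, ctu_index(200, 300, 1920))
```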
  • the maximum coding block refers to an NxN block containing NxN samples.
  • the color component can be divided into one or more maximum coding blocks.
  • The maximum coding unit is a unit including the maximum coding block of luma samples, the two corresponding maximum coding blocks of chroma samples, and the syntax structures used to code the luma samples and the chroma samples.
  • For a monochrome picture, the maximum coding unit is a unit including the maximum coding block of monochrome samples and the syntax structures used to code the monochrome samples.
  • For a picture coded in color planes separated by color component, the maximum coding unit is a unit including the picture and the syntax structures used to code the samples of the picture.
  • One maximum coding block can be divided into MxN coding blocks including MxN samples (M and N are integers).
  • A coding unit (CU) is a unit including a coding block of luma samples, two corresponding coding blocks of chroma samples, and the syntax structures used to code the luma samples and the chroma samples.
  • For a monochrome picture, the coding unit is a unit including a coding block of monochrome samples and the syntax structures used to code the monochrome samples.
  • For a picture coded in color planes separated by color component, the coding unit is a unit including the picture and the syntax structures used to code the samples of the picture.
  • As described above, the maximum coding block and the maximum coding unit are concepts distinguished from each other, and the coding block and the coding unit are concepts distinguished from each other. That is, the (maximum) coding unit refers to the data structure including the (maximum) coding block containing the corresponding samples and the syntax structures corresponding thereto.
  • However, since a person skilled in the art can understand that the (maximum) coding unit or the (maximum) coding block refers to a block of a predetermined size including a predetermined number of samples, in the following specification the maximum coding block and the maximum coding unit, or the coding block and the coding unit, are mentioned without distinction unless otherwise specified.
  • An image can be divided into maximum coding units (CTUs).
  • The size of the maximum coding unit can be determined based on information obtained from the bitstream.
  • The shape of the maximum coding unit can be a square of the same size, but is not limited thereto.
  • information about the maximum size of a luma coded block can be obtained from a bitstream.
  • For example, the maximum size of a luma coding block indicated by the information about the maximum size of a luma coding block may be one of 4x4, 8x8, 16x16, 32x32, 64x64, 128x128, and 256x256.
  • Information on the maximum size of a luma coding block that can be divided into two and information on the luma block size difference can be obtained from the bitstream.
  • The information on the luma block size difference can indicate the size difference between the maximum luma coding unit and the maximum luma coding block that can be divided into two.
  • Therefore, the size of the maximum luma coding unit can be determined by combining the information on the maximum size of the luma coding block that can be divided into two and the information on the luma block size difference obtained from the bitstream.
  • If the size of the maximum luma coding unit is determined, the size of the maximum chroma coding unit can also be determined.
  • For example, according to a 4:2:0 color format, the size of a chroma block can be half the size of the luma block, and likewise the size of the maximum chroma coding unit can be half the size of the maximum luma coding unit.
  • The maximum size of a luma coding block capable of binary splitting may be determined variably. In contrast, the maximum size of a luma coding block capable of ternary splitting can be fixed. For example, the maximum size of a luma coding block that can be ternary split in an I picture may be 32x32, and the maximum size of a luma coding block that can be ternary split in a P picture or B picture may be 64x64.
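  • The size derivation described above can be illustrated as follows. The two signaled quantities are modeled as log2 values, which is an assumption made for the sketch rather than the actual bitstream syntax; the 4:2:0 half-size relationship for chroma follows the example given in the text.

```python
def max_luma_ctu_size(log2_max_binary_split_luma: int, log2_size_diff: int) -> int:
    """Maximum luma coding unit (CTU) size, derived from the maximum size of a
    luma coding block that can be split into two plus the signaled size difference."""
    return 1 << (log2_max_binary_split_luma + log2_size_diff)

def max_chroma_ctu_size(luma_ctu: int) -> int:
    """For a 4:2:0 format, the chroma CTU is half the luma CTU in each dimension."""
    return luma_ctu // 2

luma = max_luma_ctu_size(log2_max_binary_split_luma=6, log2_size_diff=1)  # 64 -> 128
print(luma, max_chroma_ctu_size(luma))   # 128 64
```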
  • the maximum coding unit can be hierarchically divided into coding units based on the split mode information obtained from the bitstream.
  • As the division type mode information, at least one of information indicating whether quad splitting is performed, information indicating whether division is performed, division direction information, and division type information may be obtained from the bitstream.
  • information indicating whether or not the current coding unit is quad split can indicate whether the current coding unit is to be quad split (QUAD_SPLIT) or not quad split.
  • The division direction information indicates that the current coding unit is divided in either the horizontal direction or the vertical direction.
  • The division type information indicates that the current coding unit is divided by binary division or ternary division.
  • According to the division direction information and the division type information, the division mode of the current coding unit is determined.
  • The division mode in the case of binary division in the horizontal direction is binary horizontal division (SPLIT_BT_HOR), the division mode in the case of ternary division in the horizontal direction is ternary horizontal division (SPLIT_TT_HOR), the division mode in the case of binary division in the vertical direction is binary vertical division (SPLIT_BT_VER), and the division mode in the case of ternary division in the vertical direction is ternary vertical division (SPLIT_TT_VER).
  • The image decoding apparatus 100 may obtain the division type mode information from one bin string in the bitstream.
  • The bin string received by the image decoding apparatus 100 can be in a format including a fixed length binary code, a unary code, a truncated unary code, a predetermined binary code, and the like.
  • A bin string is a binary sequence representing information.
  • A bin string can consist of at least one bit.
  • The image decoding apparatus 100 may obtain the division type mode information corresponding to the bin string based on the division rule.
  • Based on one bin string, the image decoding apparatus 100 can determine whether to quad split the coding unit, whether not to divide it, or the division direction and division type.
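  • Putting the split-mode signaling together, a decoder conceptually maps the parsed decisions to one of the division modes named above. The sketch below is a plain re-statement of that mapping; only the mode names come from the text, while the flag names and the order of the checks are assumptions made for illustration.

```python
from enum import Enum

class SplitMode(Enum):
    NO_SPLIT = 0
    QUAD_SPLIT = 1
    SPLIT_BT_HOR = 2   # binary division, horizontal direction
    SPLIT_BT_VER = 3   # binary division, vertical direction
    SPLIT_TT_HOR = 4   # ternary division, horizontal direction
    SPLIT_TT_VER = 5   # ternary division, vertical direction

def split_mode(quad_flag: bool, split_flag: bool,
               vertical: bool, ternary: bool) -> SplitMode:
    """Map (quad?, split at all?, direction, binary/ternary) to a division mode."""
    if quad_flag:
        return SplitMode.QUAD_SPLIT
    if not split_flag:
        return SplitMode.NO_SPLIT
    if ternary:
        return SplitMode.SPLIT_TT_VER if vertical else SplitMode.SPLIT_TT_HOR
    return SplitMode.SPLIT_BT_VER if vertical else SplitMode.SPLIT_BT_HOR

print(split_mode(quad_flag=False, split_flag=True, vertical=False, ternary=True))
# SplitMode.SPLIT_TT_HOR
```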
  • the coding unit may be less than or equal to the largest coding unit.
  • The maximum coding unit is also one of the coding units, namely the coding unit having the largest size.
  • When the division type mode information for the maximum coding unit indicates that it is not divided, the coding unit determined in the maximum coding unit has the same size as the maximum coding unit.
  • When the division type mode information indicates division, the maximum coding unit can be divided into coding units.
  • When the division type mode information for a coding unit indicates division, the coding units can in turn be divided into coding units of smaller size.
  • However, the division of an image is not limited to this, and the maximum coding unit and the coding unit may not be distinguished. The division of the coding unit will be described in more detail with reference to FIGS. 3 to 16.
  • More than one prediction block for prediction can also be determined from the coding unit.
  • the prediction block may be less than or equal to the coding unit.
  • one or more transform blocks for transformation may be determined from the coding unit.
  • The transform block may be equal to or smaller than the coding unit.
  • The shapes and sizes of the transform block and the prediction block may not be related to each other.
  • In an embodiment, prediction may be performed using the coding unit itself as the prediction block.
  • In an embodiment, transformation may be performed using the coding unit itself as the transform block.
  • the current block and the neighboring block of the present disclosure may represent one of a maximum coding unit, a coding unit, a prediction block, and a transform block.
  • the current block or the current coding unit is a block that is currently being decoded or encoded or a block that is currently being divided.
  • a neighboring block may be a block restored before the current block.
  • a neighboring block can be spatially or temporally adjacent from the current block.
  • The neighboring block can be located at one of the lower left, left, upper left, upper, upper right, right, or lower right of the current block.
  • FIG 3 illustrates a process in which the image decoding apparatus 100 determines at least one coding unit by dividing a current coding unit according to an embodiment.
  • The block type may include 4Nx4N, 4Nx2N, 2Nx4N, 4NxN, Nx4N, 32NxN, Nx32N, 16NxN, Nx16N, 8NxN, or Nx8N, where N may be a positive integer.
  • The block shape information is information indicating at least one of the shape of the coding unit, the ratio of its width and height, or its size.
  • the shape of the coding unit may include a square and a non-square.
  • When the width and height of the coding unit are the same (i.e., when the block type of the coding unit is 4Nx4N), the image decoding apparatus 100 may determine the block shape information of the coding unit as a square.
  • When the width and height of the coding unit are different, the image decoding apparatus 100 may determine the shape of the coding unit as a non-square, that is, determine the block shape information of the coding unit as a non-square shape.
  • When the shape of the coding unit is non-square, the image decoding apparatus 100 may determine the ratio of the width and height in the block shape information of the coding unit to be at least one of 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 1:32, and 32:1.
  • Based on the length of the width and the length of the height of the coding unit, the image decoding apparatus 100 may determine whether the coding unit is in the horizontal direction or the vertical direction. Also, the image decoding apparatus 100 may determine the size of the coding unit based on at least one of the length of the width, the length of the height, or the area of the coding unit.
  • the video decoding apparatus 100 can determine the shape of a coding unit using block shape information, and can determine in what shape the coding unit is divided by using the division mode information.
  • That is, the method of dividing the coding unit indicated by the division type mode information may be determined according to which block type the block shape information used by the image decoding apparatus 100 indicates.
  • According to an embodiment, the image decoding apparatus 100 can obtain the division type mode information from the bitstream. However, it is not limited thereto, and the image decoding apparatus 100 and the image encoding apparatus 200 may determine pre-agreed division type mode information based on the block shape information.
  • The image decoding apparatus 100 can determine pre-agreed division type mode information for the maximum coding unit or the minimum coding unit. For example, the image decoding apparatus 100 can determine the division type mode information for the maximum coding unit to be quad split.
  • The image decoding apparatus 100 can determine the division type mode information for the minimum coding unit to be "not divided". Specifically, the image decoding apparatus 100 can determine the size of the maximum coding unit to be 256x256.
  • The image decoding apparatus 100 can determine the pre-agreed division type mode information to be quad split. Quad split is a division mode in which both the width and height of the coding unit are divided in two. The image decoding apparatus 100 can obtain 128x128 coding units from the 256x256 maximum coding unit based on the division type mode information. In addition, the image decoding apparatus 100 can determine the size of the minimum coding unit to be 4x4, and can obtain division type mode information indicating "not divided" for the minimum coding unit.
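  • As a toy illustration of the quad-split example above (256x256 maximum coding unit, 4x4 minimum coding unit), the recursion below halves both the width and the height at every split until a target size is reached. It sketches quad splitting only, not the full division-rule logic of the disclosure.

```python
def quad_split(x: int, y: int, size: int, target: int, min_size: int = 4):
    """Recursively quad split a square coding unit until it reaches `target`.
    Both the width and the height are divided in two at every split."""
    if size == target or size <= min_size:
        return [(x, y, size)]
    half = size // 2
    units = []
    for dy in (0, half):
        for dx in (0, half):
            units += quad_split(x + dx, y + dy, half, target, min_size)
    return units

# one quad split of a 256x256 maximum coding unit yields four 128x128 units
print(quad_split(0, 0, 256, target=128))
```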
  • According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit is a square shape.
  • For example, the image decoding apparatus 100 may determine, according to the division type mode information, whether to not divide a square coding unit, to divide it vertically, to divide it horizontally, or to divide it into four coding units.
  • Referring to FIG. 3, when the block shape information of the current coding unit 300 indicates a square shape, the decoding unit 120 may determine a coding unit 310a having the same size as the current coding unit 300 according to division type mode information indicating that it is not divided, or may determine coding units 310b, 310c, 310d, 310e, 310f, etc. divided based on division type mode information indicating a predetermined division method.
  • According to an embodiment, the image decoding apparatus 100 can determine two coding units 310b by dividing the current coding unit 300 in the vertical direction based on division type mode information indicating division in the vertical direction.
  • The image decoding apparatus 100 can determine two coding units 310c by dividing the current coding unit 300 in the horizontal direction based on division type mode information indicating division in the horizontal direction.
  • The image decoding apparatus 100 can determine four coding units 310d by dividing the current coding unit 300 in the vertical and horizontal directions based on division type mode information indicating division in the vertical and horizontal directions.
  • According to an embodiment, the image decoding apparatus 100 can determine three coding units 310e by dividing the current coding unit 300 in the vertical direction based on division type mode information indicating ternary division in the vertical direction.
  • The image decoding apparatus 100 can determine three coding units 310f by dividing the current coding unit 300 in the horizontal direction based on division type mode information indicating ternary division in the horizontal direction.
  • The division types into which a square coding unit can be divided should not be interpreted as limited to the above-described shapes, and may include various shapes that the division type mode information can indicate.
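  • The shapes just listed follow directly from how each division mode splits the width and height. The sketch below returns the sub-unit rectangles for each division of a square coding unit; the 1:2:1 ratio used for the ternary case is a common convention adopted here as an assumption, not a statement of the disclosure.

```python
def split_square(w: int, h: int, mode: str):
    """Return (x, y, width, height) sub-units of a w x h square coding unit."""
    if mode == "NO_SPLIT":
        return [(0, 0, w, h)]
    if mode == "QUAD_SPLIT":
        return [(x, y, w // 2, h // 2) for y in (0, h // 2) for x in (0, w // 2)]
    if mode == "SPLIT_BT_VER":      # two units side by side
        return [(0, 0, w // 2, h), (w // 2, 0, w // 2, h)]
    if mode == "SPLIT_BT_HOR":      # two units stacked vertically
        return [(0, 0, w, h // 2), (0, h // 2, w, h // 2)]
    if mode == "SPLIT_TT_VER":      # three units, widths in a 1:2:1 ratio
        return [(0, 0, w // 4, h), (w // 4, 0, w // 2, h), (3 * w // 4, 0, w // 4, h)]
    if mode == "SPLIT_TT_HOR":      # three units, heights in a 1:2:1 ratio
        return [(0, 0, w, h // 4), (0, h // 4, w, h // 2), (0, 3 * h // 4, w, h // 4)]
    raise ValueError(mode)

print(split_square(64, 64, "SPLIT_TT_VER"))
# [(0, 0, 16, 64), (16, 0, 32, 64), (48, 0, 16, 64)]
```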
  • FIG. 4 illustrates a process by which the image decoding apparatus 100 determines at least one coding unit by dividing a coding unit of a non-square shape, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit is a non-square shape.
  • The image decoding apparatus 100 may determine, according to the division type mode information, whether to not divide the non-square current coding unit or to divide it by a predetermined method. Referring to FIG. 4, when the block shape information of the current coding unit 400 or 450 indicates a non-square shape, the image decoding apparatus 100 determines a coding unit 410 or 460 having the same size as the current coding unit 400 or 450 according to division type mode information indicating that it is not divided, or determines divided coding units 420a, 420b, 430a, 430b, 430c, 470a, 470b, 480a, 480b, 480c based on division type mode information indicating a predetermined division method.
  • The predetermined division method by which a non-square coding unit is divided will be described in detail below through various embodiments.
  • According to an embodiment, the image decoding apparatus 100 may determine the form in which a coding unit is divided using the division type mode information, and in this case the division type mode information can indicate the number of at least one coding unit generated by dividing the coding unit. Referring to FIG. 4, when the division type mode information indicates that the current coding unit 400 or 450 is divided into two coding units, the image decoding apparatus 100 can determine two coding units 420a and 420b, or 470a and 470b, included in the current coding unit by dividing the current coding unit 400 or 450 based on the division type mode information.
  • According to an embodiment, when the image decoding apparatus 100 divides the non-square current coding unit 400 or 450 based on the division type mode information, the image decoding apparatus 100 can divide the current coding unit taking into account the position of the long side of the non-square current coding unit 400 or 450.
  • For example, the image decoding apparatus 100 can determine a plurality of coding units by dividing the current coding unit 400 or 450 in the direction that divides the long side of the current coding unit 400 or 450, in consideration of the shape of the current coding unit 400 or 450.
  • According to an embodiment, when the division type mode information indicates that the coding unit is divided into an odd number of blocks (ternary division), the image decoding apparatus 100 can determine an odd number of coding units included in the current coding unit 400 or 450.
  • For example, when the division type mode information indicates that the current coding unit 400 or 450 is divided into three coding units, the image decoding apparatus 100 can divide the current coding unit 400 or 450 into three coding units 430a, 430b, and 430c, or 480a, 480b, and 480c.
  • According to an embodiment, the ratio of the width and height of the current coding unit 400 or 450 may be 4:1 or 1:4.
  • When the ratio of the width and height is 4:1, the length of the width is longer than the length of the height, so the block shape information may indicate the horizontal direction.
  • When the ratio of the width and height is 1:4, the length of the width is shorter than the length of the height, so the block shape information may indicate the vertical direction.
  • The image decoding apparatus 100 may determine to divide the current coding unit into an odd number of blocks based on the division type mode information.
  • In addition, the image decoding apparatus 100 can determine the division direction of the current coding unit 400 or 450 based on the block shape information of the current coding unit 400 or 450.
  • For example, the image decoding apparatus 100 can determine the coding units 430a, 430b, and 430c by dividing the current coding unit 400 in the horizontal direction.
  • Also, the image decoding apparatus 100 can determine the coding units 480a, 480b, and 480c by dividing the current coding unit 450 in the vertical direction.
  • According to an embodiment, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450, and not all of the determined coding units may have the same size. For example, among the determined odd number of coding units 430a, 430b, 430c, 480a, 480b, and 480c, the size of a predetermined coding unit 430b or 480b may differ from the sizes of the other coding units 430a, 430c, 480a, and 480c.
  • That is, the coding units that can be determined by dividing the current coding unit 400 or 450 may have a plurality of types of sizes, and in some cases the odd number of coding units 430a, 430b, 430c, 480a, 480b, and 480c may each have different sizes.
  • According to an embodiment, when the division type mode information indicates that the coding unit is divided into an odd number of blocks, the image decoding apparatus 100 determines an odd number of coding units included in the current coding unit 400 or 450, and furthermore the image decoding apparatus 100 may place a predetermined restriction on at least one coding unit among the odd number of coding units generated by the division. Referring to FIG. 4, when the current coding unit 400 or 450 is divided into three coding units 430a, 430b, 430c or 480a, 480b, 480c, the image decoding apparatus 100 may make the decoding process for the coding unit 430b or 480b located at the center different from that of the other coding units 430a, 430c, 480a, and 480c.
  • For example, the image decoding apparatus 100 may restrict the coding unit 430b or 480b located at the center, unlike the other coding units 430a, 430c, 480a, and 480c, so that it is no longer divided, or so that it can be divided only a predetermined number of times.
  • FIG 5 illustrates a process in which the video decoding apparatus 100 divides a coding unit based on at least one of block type information and split type mode information according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine whether to divide the square first coding unit 500 into coding units or not to divide it, based on at least one of the block shape information and the division type mode information. According to an embodiment, when the division type mode information indicates that the first coding unit 500 is divided in the horizontal direction, the image decoding apparatus 100 can determine the second coding unit 510 by dividing the first coding unit 500 in the horizontal direction.
  • The first coding unit, second coding unit, and third coding unit used according to an embodiment are terms used to understand the relationship before and after division between coding units. For example, if the first coding unit is divided, the second coding unit can be determined, and if the second coding unit is divided, the third coding unit can be determined.
  • Hereinafter, the relationship between the first coding unit, the second coding unit, and the third coding unit can be understood as following the characteristics described above.
  • According to an embodiment, the image decoding apparatus 100 may determine whether to divide the determined second coding unit 510 into coding units or not to divide it, based on the division type mode information.
  • Referring to FIG. 5, the image decoding apparatus 100 may divide the non-square second coding unit 510, which was determined by dividing the first coding unit 500, into at least one third coding unit 520a, 520b, 520c, 520d based on the division type mode information, or may not divide the second coding unit 510.
  • The image decoding apparatus 100 can acquire the division type mode information, and the image decoding apparatus 100 can divide the first coding unit 500 based on the acquired division type mode information to obtain a plurality of second coding units (for example, 510) of various shapes, and the second coding unit 510 may be divided according to the manner in which the first coding unit 500 was divided based on the division type mode information.
  • According to an embodiment, when the first coding unit 500 is divided into the second coding unit 510 based on the division type mode information for the first coding unit 500, the second coding unit 510 can also be divided into third coding units (for example, 520a, 520b, 520c, 520d, etc.) based on the division type mode information for the second coding unit 510.
  • That is, a coding unit can be divided recursively based on the division type mode information related to each coding unit. Therefore, a square coding unit can be determined from a non-square coding unit, and a non-square coding unit can be determined by recursively dividing that square coding unit.
  • Referring to FIG. 5, among the odd number of third coding units 520b, 520c, and 520d determined by dividing the non-square second coding unit 510, a predetermined coding unit (for example, a coding unit located at the center or a square-shaped coding unit) may be recursively divided.
  • According to an embodiment, the square third coding unit 520c, which is one of the odd number of third coding units 520b, 520c, and 520d, may be divided in the horizontal direction into a plurality of fourth coding units.
  • A non-square fourth coding unit 530b or 530d, which is one of the plurality of fourth coding units 530a, 530b, 530c, and 530d, may again be divided into a plurality of coding units.
  • For example, the non-square fourth coding unit 530b or 530d may again be divided into an odd number of coding units.
  • a method that can be used for recursive division of coding units will be described later through various examples.
  • According to an embodiment, the image decoding apparatus 100 may divide each of the third coding units 520a, 520b, 520c, 520d, etc. into coding units based on the division type mode information.
  • Also, the image decoding apparatus 100 may determine not to divide the second coding unit 510 based on the division type mode information.
  • According to an embodiment, the image decoding apparatus 100 may divide the non-square second coding unit 510 into an odd number of third coding units 520b, 520c, and 520d.
  • The image decoding apparatus 100 may place a predetermined restriction on a predetermined third coding unit among the odd number of third coding units 520b, 520c, and 520d.
  • For example, the image decoding apparatus 100 may restrict the third coding unit 520c located at the center among the odd number of third coding units 520b, 520c, and 520d so that it is no longer divided, or so that it is divided only a settable number of times.
  • Referring to FIG. 5, the image decoding apparatus 100 may restrict the coding unit 520c located at the center among the odd number of third coding units 520b, 520c, and 520d included in the non-square second coding unit 510 so that it is no longer divided, so that it is divided in a predetermined manner (for example, divided into only four coding units, or divided into a form corresponding to the form in which the second coding unit 510 was divided), or so that it is divided only a predetermined number of times (for example, divided only n times, n>0).
  • However, since the above restrictions on the coding unit 520c located at the center are merely examples, they should not be interpreted as limited thereto, and should be interpreted as including various restrictions under which the coding unit 520c located at the center can be decoded differently from the other coding units 520b and 520d.
  • According to an embodiment, the image decoding apparatus 100 may obtain, from a predetermined position within the current coding unit, the division type mode information used to divide the current coding unit.
  • FIG. 6 shows a method for the image decoding apparatus 100 to determine a predetermined coding unit among odd coding units according to an embodiment.
  • The division type mode information of the current coding unit 600 or 650 can be obtained from a sample at a predetermined position (for example, a sample located at the center) among a plurality of samples included in the current coding unit 600 or 650.
  • However, the predetermined position within the current coding unit 600 from which at least one piece of division type mode information can be obtained should not be interpreted as limited to the center position shown in FIG. 6, and should be interpreted as being able to include various positions within the current coding unit 600 (for example, top, bottom, left, right, top left, bottom left, top right, bottom right, etc.).
  • The image decoding apparatus 100 can obtain the division type mode information from the predetermined position and determine whether or not the current coding unit is divided into coding units of various shapes and sizes.
  • According to an embodiment, when the current coding unit is divided into a predetermined number of coding units, the image decoding apparatus 100 may select one of the coding units.
  • The method for selecting one of the plurality of coding units may vary, and these methods will be described later through various embodiments.
  • According to an embodiment, the image decoding apparatus 100 may divide the current coding unit into a plurality of coding units and determine the coding unit at a predetermined position.
  • According to an embodiment, the image decoding apparatus 100 may use information indicating the position of each of an odd number of coding units in order to determine a coding unit located at the center among the odd number of coding units.
  • Referring to FIG. 6, the image decoding apparatus 100 can determine an odd number of coding units 620a, 620b, and 620c or an odd number of coding units 660a, 660b, and 660c by dividing the current coding unit 600 or the current coding unit 650.
  • The image decoding apparatus 100 can determine the center coding unit 620b or the center coding unit 660b by using information on the positions of the odd number of coding units 620a, 620b, and 620c or the odd number of coding units 660a, 660b, and 660c.
  • For example, the image decoding apparatus 100 can determine the coding unit 620b located at the center by determining the positions of the coding units 620a, 620b, and 620c based on information indicating the position of a predetermined sample included in the coding units 620a, 620b, and 620c.
  • Specifically, the image decoding apparatus 100 can determine the coding unit 620b located at the center by determining the positions of the coding units 620a, 620b, and 620c based on information indicating the positions of the upper-left samples 630a, 630b, and 630c of the coding units 620a, 620b, and 620c.
  • According to an embodiment, the information indicating the positions of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively, may include information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture.
  • According to an embodiment, the information indicating the positions of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively, may include information indicating the widths or heights of the coding units 620a, 620b, and 620c included in the current coding unit 600, and these widths or heights may correspond to information indicating the differences between the coordinates of the coding units 620a, 620b, and 620c within the picture. That is, the image decoding apparatus 100 can determine the coding unit 620b located at the center by directly using the information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture, or by using the information about the widths or heights of the coding units corresponding to the differences between the coordinates.
  • According to an embodiment, the information indicating the position of the upper-left sample 630a of the upper coding unit 620a may indicate coordinates (xa, ya), the information indicating the position of the upper-left sample 630b of the center coding unit 620b may indicate coordinates (xb, yb), and the information indicating the position of the upper-left sample 630c of the lower coding unit 620c may indicate coordinates (xc, yc).
  • The image decoding apparatus 100 can determine the center coding unit 620b by using the coordinates of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively.
  • The coordinates indicating the positions of the upper-left samples 630a, 630b, and 630c may be coordinates indicating absolute positions within the picture; furthermore, coordinates (dxb, dyb), which is information indicating the relative position of the upper-left sample 630b of the center coding unit 620b with respect to the position of the upper-left sample 630a of the upper coding unit 620a, and coordinates (dxc, dyc), which is information indicating the relative position of the upper-left sample 630c of the lower coding unit 620c, may also be used.
  • In addition, the method of determining the coding unit at a predetermined position by using the coordinates of a sample included in the coding unit should not be interpreted as limited to the method described above, and should be interpreted as encompassing various arithmetic methods that can use the coordinates of the sample.
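  • The coordinate-based selection described above can be sketched as follows: given the upper-left sample coordinates of the coding units produced by a ternary split, the unit whose upper-left sample is neither the first nor the last along the split direction is taken as the center one. This is a simplified illustration with assumed names, not the exact decision logic of the disclosure.

```python
def center_unit(upper_left_coords, horizontal_split: bool = True) -> int:
    """Pick the index of the center coding unit among an odd number of units.

    upper_left_coords: list of (x, y) upper-left sample coordinates, e.g.
    [(xa, ya), (xb, yb), (xc, yc)] for coding units 620a, 620b, 620c.
    For a horizontal split the units are ordered by y; otherwise by x.
    """
    key = (lambda c: c[1]) if horizontal_split else (lambda c: c[0])
    ordered = sorted(range(len(upper_left_coords)),
                     key=lambda i: key(upper_left_coords[i]))
    return ordered[len(ordered) // 2]   # index of the middle coding unit

# 620a, 620b, 620c stacked vertically: the middle one starts at y = 16
coords = [(0, 0), (0, 16), (0, 48)]
print(center_unit(coords, horizontal_split=True))   # 1 -> coding unit 620b
```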
  • According to an embodiment, the image decoding apparatus 100 can divide the current coding unit 600 into a plurality of coding units 620a, 620b, and 620c, and can select a coding unit among the coding units 620a, 620b, and 620c according to a predetermined criterion.
  • For example, the image decoding apparatus 100 can select the coding unit 620b, whose size is different from the others, among the coding units 620a, 620b, and 620c.
  • According to an embodiment, the image decoding apparatus 100 can determine the width or height of each of the coding units 620a, 620b, and 620c by using the coordinates (xa, ya), which is information indicating the position of the upper-left sample 630a of the upper coding unit 620a, the coordinates (xb, yb), which is information indicating the position of the upper-left sample 630b of the center coding unit 620b, and the coordinates (xc, yc), which is information indicating the position of the upper-left sample 630c of the lower coding unit 620c.
  • The image decoding apparatus 100 can determine the sizes of the coding units 620a, 620b, and 620c by using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating the positions of the coding units 620a, 620b, and 620c.
  • According to an embodiment, the image decoding apparatus 100 can determine the width of the upper coding unit 620a as the width of the current coding unit 600, and can determine the height of the upper coding unit 620a as yb-ya.
  • According to an embodiment, the image decoding apparatus 100 can determine the width of the center coding unit 620b as the width of the current coding unit 600, and can determine the height of the center coding unit 620b as yc-yb.
  • According to an embodiment, the image decoding apparatus 100 can determine the width or height of the lower coding unit 620c by using the width or height of the current coding unit 600 and the widths and heights of the upper coding unit 620a and the center coding unit 620b.
  • The image decoding apparatus 100 can determine a coding unit having a size different from the other coding units based on the determined widths and heights of the coding units 620a, 620b, and 620c. Referring to FIG. 6, the image decoding apparatus 100 can determine the center coding unit 620b, whose size differs from the sizes of the upper coding unit 620a and the lower coding unit 620c, as the coding unit at the predetermined position.
  • However, the above-described process by which the image decoding apparatus 100 determines a coding unit having a size different from the other coding units is merely one embodiment in which the coding unit at a predetermined position is determined using coding unit sizes determined from sample coordinates, so various processes of determining the coding unit at a predetermined position by comparing coding unit sizes determined according to predetermined sample coordinates may be used.
  • the image decoding apparatus 100 includes a left encoding unit (a sample at the top left of 660 ⁇ ), the information indicating the position of 670 ⁇ , ⁇ (1, ⁇ coordinates, a center coding unit (a sample at the top left of 660 ratio) (670 Coding units (660 66(3 ⁇ 4, 660 0) using the ⁇ coordinates, which is information indicating the location of the rain, the right coding unit (the sample at the top left of 660 (67( ⁇ , information indicating the location of the ⁇ )) coordinates Each width or height can be determined.
  • the video decoding apparatus 100 uses the coding units (660 66 (3 ⁇ 4, ⁇ (1),, £ ), which are coordinates representing the position of 660 teeth), and ⁇ . (660 660 ⁇ 660 0) Each size can be determined.
  • For example, the image decoding apparatus 100 may determine the width of the left coding unit 660a to be xe-xd, and may determine the height of the left coding unit 660a to be the height of the current coding unit 650. According to an embodiment, the image decoding apparatus 100 may determine the width of the middle coding unit 660b to be xf-xe, and may determine the height of the middle coding unit 660b to be the height of the current coding unit 650.
  • According to an embodiment, the image decoding apparatus 100 may determine the width or height of the right coding unit 660c by using the width or height of the current coding unit 650 and the widths and heights of the left coding unit 660a and the middle coding unit 660b. The image decoding apparatus 100 may determine a coding unit having a size different from that of the other coding units, based on the determined widths and heights of the coding units 660a, 660b, and 660c. Referring to FIG. 6, the image decoding apparatus 100 may determine the middle coding unit 660b, which has a size different from the sizes of the left coding unit 660a and the right coding unit 660c, as the coding unit at the predetermined position. However, the above process in which the image decoding apparatus 100 determines a coding unit having a size different from that of the other coding units is merely an embodiment of determining a coding unit at a predetermined position by using the sizes of coding units determined based on sample coordinates; various processes of determining a coding unit at a predetermined position by comparing the sizes of coding units determined based on predetermined sample coordinates may be used.
  • However, the position of the sample considered in order to determine the position of a coding unit should not be construed as being limited to the upper-left position described above; it may be construed that information about the position of an arbitrary sample included in the coding unit may be used.
  • According to an embodiment, the image decoding apparatus 100 may select a coding unit at a predetermined position from among an odd number of coding units determined by splitting the current coding unit, in consideration of the shape of the current coding unit. For example, when the current coding unit has a non-square shape whose width is longer than its height, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the horizontal direction; that is, the image decoding apparatus 100 may determine one of the coding units whose positions differ in the horizontal direction and may place a restriction on that coding unit. When the current coding unit has a non-square shape whose height is longer than its width, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the vertical direction; that is, the image decoding apparatus 100 may determine one of the coding units whose positions differ in the vertical direction and may place a restriction on that coding unit.
  • According to an embodiment, the image decoding apparatus 100 may use information indicating the position of each of an even number of coding units in order to determine a coding unit at a predetermined position from among the even number of coding units. The image decoding apparatus 100 may determine the even number of coding units by splitting (binary splitting) the current coding unit, and may determine the coding unit at the predetermined position by using the information about the positions of the even number of coding units. Since this corresponds to the process of determining a coding unit at a predetermined position (e.g., the center position) from among an odd number of coding units described above, a detailed description thereof is omitted.
  • According to an embodiment, when a non-square current coding unit is split into a plurality of coding units, predetermined information about the coding unit at a predetermined position may be used during the splitting process in order to determine the coding unit at the predetermined position from among the plurality of coding units. For example, the image decoding apparatus 100 may use at least one of block shape information and split shape mode information stored in a sample included in the middle coding unit during the splitting process, in order to determine the coding unit located at the center from among the coding units obtained by splitting the current coding unit.
  • Referring to FIG. 6, the image decoding apparatus 100 may split the current coding unit 600 into the plurality of coding units 620a, 620b, and 620c based on the split shape mode information, and may determine the coding unit 620b located at the center from among the plurality of coding units 620a, 620b, and 620c. Furthermore, the image decoding apparatus 100 may determine the coding unit 620b located at the center in consideration of the position from which the split shape mode information is obtained. That is, the split shape mode information of the current coding unit 600 may be obtained from the sample 640 located at the center of the current coding unit 600, and when the current coding unit 600 is split into the plurality of coding units 620a, 620b, and 620c based on the split shape mode information, the coding unit 620b including the sample 640 may be determined as the coding unit located at the center.
  • However, the information used to determine the coding unit located at the center should not be construed as being limited to the split shape mode information; various kinds of information may be used in the process of determining the coding unit located at the center. According to an embodiment, predetermined information for identifying the coding unit at the predetermined position may be obtained from a predetermined sample included in the coding unit to be determined.
  • Referring to FIG. 6, the image decoding apparatus 100 may use the split shape mode information obtained from a sample at a predetermined position within the current coding unit 600 (e.g., a sample located at the center of the current coding unit 600) in order to determine a coding unit at a predetermined position (e.g., the coding unit located at the center among the plurality of split coding units) from among the plurality of coding units 620a, 620b, and 620c determined by splitting the current coding unit 600.
  • That is, the image decoding apparatus 100 may determine the sample at the predetermined position in consideration of the block shape of the current coding unit 600, and may determine, from among the plurality of coding units 620a, 620b, and 620c determined by splitting the current coding unit 600, the coding unit 620b including the sample from which predetermined information (e.g., the split shape mode information) can be obtained, and may place a predetermined restriction on that coding unit. Referring to FIG. 6, the image decoding apparatus 100 may determine the sample 640 located at the center of the current coding unit 600 as the sample from which the predetermined information can be obtained, and may place a predetermined restriction on the coding unit 620b including the sample 640 in the decoding process.
  • However, the position of the sample from which the predetermined information can be obtained should not be construed as being limited to the above-described position; it may be construed as a sample at an arbitrary position included in the coding unit 620b to be determined.
  • According to an embodiment, the position of the sample from which the predetermined information can be obtained may be determined according to the shape of the current coding unit 600. According to an embodiment, the block shape information may indicate whether the shape of the current coding unit is square or non-square, and the position of the sample from which the predetermined information can be obtained may be determined according to that shape.
  • For example, the image decoding apparatus 100 may determine a sample located on the boundary that divides at least one of the width and the height of the current coding unit in half as the sample from which the predetermined information can be obtained, by using at least one of the information about the width and the information about the height of the current coding unit. As another example, when the block shape information about the current coding unit indicates a non-square shape, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary that divides the long side of the current coding unit in half as the sample from which the predetermined information can be obtained.
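  • As a rough sketch of how the sample carrying the predetermined information might be located (the exact sample chosen for a non-square block is an assumption made here for illustration; the function name is invented):

```python
# Hypothetical sketch (all names invented): choose the sample from which the
# split shape mode information is read, depending on the block shape of the
# current coding unit.

def info_sample_position(x0, y0, width, height):
    """Centre sample for a square block; for a non-square block, one possible
    sample adjacent to the boundary that divides the long side in half."""
    cx, cy = x0 + width // 2, y0 + height // 2
    if width == height:
        return (cx, cy)        # square block: centre sample
    if width > height:
        return (cx, y0)        # wide block: a sample on the vertical halving boundary
    return (x0, cy)            # tall block: a sample on the horizontal halving boundary

print(info_sample_position(0, 0, 16, 16))  # (8, 8)
print(info_sample_position(0, 0, 16, 8))   # (8, 0)
print(info_sample_position(0, 0, 8, 32))   # (0, 16)
```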
  • According to an embodiment, when the current coding unit is split into a plurality of coding units, the image decoding apparatus 100 may use the split shape mode information in order to determine a coding unit at a predetermined position from among the plurality of coding units. According to an embodiment, the image decoding apparatus 100 may obtain the split shape mode information from a sample at a predetermined position included in a coding unit, and may split the plurality of coding units generated by splitting the current coding unit by using the split shape mode information obtained from the sample at the predetermined position included in each of the plurality of coding units. That is, the coding units may be recursively split by using the split shape mode information obtained from the sample at the predetermined position included in each coding unit.
  • the recursive division process of the coding unit has been described above with reference to FIG. 5, so a detailed description will be omitted.
  • According to an embodiment, the image decoding apparatus 100 may determine at least one coding unit by splitting the current coding unit, and may determine the order in which the at least one coding unit is decoded according to a predetermined block (e.g., the current coding unit).
  • FIG. 7 illustrates an order in which a plurality of coding units are processed when the image decoding apparatus 100 determines the plurality of coding units by splitting a current coding unit, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine second coding units 710a and 710b by splitting a first coding unit 700 in the vertical direction, may determine second coding units 730a and 730b by splitting the first coding unit 700 in the horizontal direction, or may determine second coding units 750a, 750b, 750c, and 750d by splitting the first coding unit 700 in the vertical and horizontal directions, according to the split shape mode information.
  • Referring to FIG. 7, the image decoding apparatus 100 may determine that the second coding units 710a and 710b, determined by splitting the first coding unit 700 in the vertical direction, are processed in the horizontal direction order 710c. The image decoding apparatus 100 may determine that the second coding units 730a and 730b, determined by splitting the first coding unit 700 in the horizontal direction, are processed in the vertical direction order 730c. The image decoding apparatus 100 may determine that the second coding units 750a, 750b, 750c, and 750d, determined by splitting the first coding unit 700 in the vertical and horizontal directions, are processed according to a predetermined order in which the coding units located in one row are processed and then the coding units located in the next row are processed (e.g., a raster scan order or a z-scan order 750e).
  • According to an embodiment, the image decoding apparatus 100 may recursively split the coding units. Referring to FIG. 7, the image decoding apparatus 100 may determine the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d by splitting the first coding unit 700, and may recursively split each of the determined plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d. The method of splitting the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may correspond to the method of splitting the first coding unit 700.
  • Accordingly, each of the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be independently split into a plurality of coding units or may not be split. Referring to FIG. 7, the image decoding apparatus 100 may determine the second coding units 710a and 710b by splitting the first coding unit 700 in the vertical direction, and furthermore may determine whether to split each of the second coding units 710a and 710b independently or not to split them. According to an embodiment, the image decoding apparatus 100 may determine third coding units 720a and 720b by splitting the left second coding unit 710a in the horizontal direction, and may not split the right second coding unit 710b.
  • According to an embodiment, the processing order of coding units may be determined based on the process of splitting the coding units. In other words, the processing order of split coding units may be determined based on the processing order of the coding units immediately before being split. The image decoding apparatus 100 may determine the order in which the third coding units 720a and 720b, determined by splitting the left second coding unit 710a, are processed, independently of the right second coding unit 710b. Since the third coding units 720a and 720b are determined by splitting the left second coding unit 710a in the horizontal direction, the third coding units 720a and 720b may be processed in the vertical direction order 720c. Since the order in which the left second coding unit 710a and the right second coding unit 710b are processed corresponds to the horizontal direction order 710c, the right second coding unit 710b may be processed after the third coding units 720a and 720b included in the left second coding unit 710a are processed in the vertical direction order 720c.
  • Since the above description is intended to explain the process in which the processing order of coding units is determined according to the coding units before being split, it should not be construed as being limited to the above-described embodiment; it should be construed that coding units determined by being split into various shapes may be independently processed in a predetermined order by various methods.
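  • The recursive determination of the processing order can be pictured with the following sketch (an illustration only; the block and split representation is invented and is not the syntax of the bitstream): each split node lists its children left-to-right or top-to-bottom, and a child that is itself split is fully traversed before the next sibling is visited.

```python
# Hypothetical sketch: recursively derive the processing (decoding) order of
# leaf coding units. A node is either a leaf (x, y, w, h) or a dict with a
# split direction and a list of children.

def processing_order(node, out=None):
    if out is None:
        out = []
    if "children" not in node:          # leaf coding unit
        out.append((node["x"], node["y"], node["w"], node["h"]))
        return out
    # children are stored left-to-right for a vertical split and
    # top-to-bottom for a horizontal split, so a plain in-order walk
    # yields the order described with reference to FIG. 7
    for child in node["children"]:
        processing_order(child, out)
    return out

# first coding unit split vertically; the left half is further split horizontally
left = {"split": "HOR", "children": [
    {"x": 0, "y": 0, "w": 8, "h": 8},
    {"x": 0, "y": 8, "w": 8, "h": 8},
]}
right = {"x": 8, "y": 0, "w": 8, "h": 16}
root = {"split": "VER", "children": [left, right]}

print(processing_order(root))
# [(0, 0, 8, 8), (0, 8, 8, 8), (8, 0, 8, 16)] -> left half top-to-bottom, then the right half
```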
  • FIG. 8 illustrates a process in which the image decoding apparatus 100 determines that a current coding unit is to be split into an odd number of coding units when the coding units cannot be processed in a predetermined order, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine that the current coding unit is split into an odd number of coding units, based on the obtained split shape mode information. Referring to FIG. 8, a square first coding unit 800 may be split into non-square second coding units 810a and 810b, and the second coding units 810a and 810b may be independently split into third coding units 820a, 820b, 820c, 820d, and 820e.
  • According to an embodiment, the image decoding apparatus 100 may determine whether the third coding units 820a, 820b, 820c, 820d, and 820e can be processed in a predetermined order, and may determine whether a coding unit split into an odd number of coding units exists.
  • Referring to FIG. 8, the image decoding apparatus 100 may determine the third coding units 820a, 820b, 820c, 820d, and 820e by recursively splitting the first coding unit 800. The image decoding apparatus 100 may determine, based on at least one of the block shape information and the split shape mode information, whether the first coding unit 800, the second coding units 810a and 810b, or the third coding units 820a, 820b, 820c, 820d, and 820e are split into an odd number of coding units. For example, the coding unit located at the right among the second coding units 810a and 810b may be split into an odd number of third coding units 820c, 820d, and 820e.
  • The order in which the plurality of coding units included in the first coding unit 800 are processed may be a predetermined order (e.g., a z-scan order 830), and the image decoding apparatus 100 may determine whether the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into an odd number of coding units, satisfy the condition for being processed according to the predetermined order.
  • According to an embodiment, the image decoding apparatus 100 may determine whether the third coding units 820a, 820b, 820c, 820d, and 820e included in the first coding unit 800 satisfy the condition for being processed in the predetermined order, and the condition relates to whether at least one of the width and height of the second coding units 810a and 810b is split in half along the boundaries of the third coding units 820a, 820b, 820c, 820d, and 820e. For example, the third coding units 820a and 820b, determined by splitting the height of the non-square left second coding unit 810a in half, may satisfy the condition. However, since the boundaries of the third coding units 820c, 820d, and 820e, determined by splitting the right second coding unit 810b into three coding units, do not split the width or height of the right second coding unit 810b in half, it may be determined that the third coding units 820c, 820d, and 820e do not satisfy the condition. When the condition is not satisfied as described above, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on the result of the determination, that the right second coding unit 810b is split into an odd number of coding units.
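  • The condition discussed above, namely that the boundaries of the deeper coding units must divide at least one of the width and height of the coding unit they were split from in half, can be sketched as follows (an illustrative check only; the block representation is invented):

```python
# Hypothetical sketch: check whether the boundaries of sub-blocks split the
# width or height of their parent block in half. If not, the scan order is
# treated as disconnected (odd split such as 810b -> 820c, 820d, 820e).

def halves_parent(parent, children):
    px, py, pw, ph = parent
    half_x = px + pw // 2
    half_y = py + ph // 2
    # internal vertical / horizontal boundaries introduced by the children
    xs = {x for (x, y, w, h) in children if x != px}
    ys = {y for (x, y, w, h) in children if y != py}
    return (half_x in xs) or (half_y in ys)

parent = (8, 0, 8, 16)
even_split = [(8, 0, 8, 8), (8, 8, 8, 8)]                 # binary horizontal split
odd_split = [(8, 0, 8, 4), (8, 4, 8, 8), (8, 12, 8, 4)]   # 1:2:1 split

print(halves_parent(parent, even_split))  # True  -> condition satisfied
print(halves_parent(parent, odd_split))   # False -> scan order disconnected
```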
  • According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on the coding unit at a predetermined position among the split coding units; since such restrictions or predetermined positions have been described above through various embodiments, a detailed description thereof is omitted.
  • FIG. 9 illustrates a process in which the image decoding apparatus 100 determines at least one coding unit by splitting a first coding unit 900, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may split the first coding unit 900 based on the split shape mode information obtained through the bitstream acquisition unit 110. The square first coding unit 900 may be split into four square coding units or into a plurality of non-square coding units.
  • For example, referring to FIG. 9, when the first coding unit 900 has a square shape and the split shape mode information indicates splitting into non-square coding units, the image decoding apparatus 100 may split the first coding unit 900 into a plurality of non-square coding units. Specifically, when the split shape mode information indicates that an odd number of coding units are determined by splitting the first coding unit 900 in the horizontal or vertical direction, the image decoding apparatus 100 may split the square first coding unit 900 into an odd number of coding units, namely second coding units 910a, 910b, and 910c determined by splitting in the vertical direction, or second coding units 920a, 920b, and 920c determined by splitting in the horizontal direction.
  • According to an embodiment, the image decoding apparatus 100 may determine whether the second coding units 910a, 910b, 910c, 920a, 920b, and 920c included in the first coding unit 900 satisfy the condition for being processed in a predetermined order, and the condition relates to whether at least one of the width and height of the first coding unit 900 is split in half along the boundaries of the second coding units 910a, 910b, 910c, 920a, 920b, and 920c. Referring to FIG. 9, since the boundaries of the second coding units 910a, 910b, and 910c, determined by splitting the square first coding unit 900 in the vertical direction, do not split the width of the first coding unit 900 in half, it may be determined that the first coding unit 900 does not satisfy the condition for being processed in the predetermined order. Also, since the boundaries of the second coding units 920a, 920b, and 920c, determined by splitting the square first coding unit 900 in the horizontal direction, do not split the height of the first coding unit 900 in half, it may be determined that the first coding unit 900 does not satisfy the condition for being processed in the predetermined order.
  • When the condition is not satisfied as described above, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on the result of the determination, that the first coding unit 900 is split into an odd number of coding units. According to an embodiment, when a coding unit is split into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on the coding unit at a predetermined position among the split coding units; since such restrictions or predetermined positions have been described above through various embodiments, a detailed description thereof is omitted.
  • According to an embodiment, the image decoding apparatus 100 may determine coding units of various shapes by splitting a first coding unit. Referring to FIG. 9, the image decoding apparatus 100 may split the square first coding unit 900 and the non-square first coding unit 930 or 950 into coding units of various shapes.
  • FIG. 10 illustrates that, when a non-square second coding unit determined by splitting a first coding unit 1000 satisfies a predetermined condition, the shapes into which the second coding unit can be split are restricted, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine to split the square first coding unit 1000 into the non-square second coding units 1010a, 1010b, 1020a, and 1020b, based on the split shape mode information obtained through the bitstream acquisition unit 110. The second coding units 1010a, 1010b, 1020a, and 1020b may be split independently. Accordingly, the image decoding apparatus 100 may determine whether to split each of the second coding units 1010a, 1010b, 1020a, and 1020b into a plurality of coding units or not to split it, based on the split shape mode information related to each of the second coding units 1010a, 1010b, 1020a, and 1020b.
  • According to an embodiment, the image decoding apparatus 100 may determine third coding units 1012a and 1012b by splitting, in the horizontal direction, the non-square left second coding unit 1010a determined by splitting the first coding unit 1000 in the vertical direction. However, when the left second coding unit 1010a is split in the horizontal direction, the image decoding apparatus 100 may restrict the right second coding unit 1010b such that it cannot be split in the horizontal direction, that is, in the same direction in which the left second coding unit 1010a has been split. If third coding units 1014a and 1014b are determined by splitting the right second coding unit 1010b in the same direction, the third coding units 1012a, 1012b, 1014a, and 1014b are determined by independently splitting the left second coding unit 1010a and the right second coding unit 1010b in the horizontal direction; however, this is the same result as the image decoding apparatus 100 splitting the first coding unit 1000 into the four square second coding units 1030a, 1030b, 1030c, and 1030d based on the split shape mode information, and may be inefficient in terms of image decoding.
  • According to an embodiment, the image decoding apparatus 100 may determine third coding units 1022a and 1022b, or 1024a and 1024b, by splitting, in the vertical direction, the non-square second coding unit 1020a or 1020b determined by splitting the first coding unit 1000 in the horizontal direction. However, when one of the second coding units (e.g., the upper second coding unit 1020a) is split in the vertical direction, the image decoding apparatus 100 may restrict the other second coding unit (e.g., the lower second coding unit 1020b) such that it cannot be split in the vertical direction, that is, in the same direction in which the upper second coding unit 1020a has been split.
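  • The restriction described above, under which a sibling second coding unit may be prevented from splitting in the same direction because the result would merely reproduce the four-way square split of the first coding unit, might be expressed as a rule of the following form (a simplified sketch; the mode names are invented):

```python
# Hypothetical sketch of the redundancy restriction: after a vertical binary
# split, if the first (left) half is split horizontally, forbid splitting the
# second (right) half horizontally as well, because the result would equal a
# four-way square split of the parent (and vice versa for a horizontal split).

def allowed_child_splits(parent_split, sibling_split):
    """parent_split: 'VER' or 'HOR' binary split of the parent.
    sibling_split: split already chosen for the other half ('VER', 'HOR', None).
    Returns the set of splits allowed for the current half."""
    candidates = {"NONE", "VER", "HOR"}
    # forbidden case: both halves split perpendicular to the parent split,
    # which reproduces the parent's quad split
    if parent_split == "VER" and sibling_split == "HOR":
        candidates.discard("HOR")
    if parent_split == "HOR" and sibling_split == "VER":
        candidates.discard("VER")
    return candidates

print(allowed_child_splits("VER", "HOR"))  # horizontal split of the right half restricted
print(allowed_child_splits("HOR", "VER"))  # vertical split of the lower half restricted
```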
  • FIG. 11 illustrates a process in which the image decoding apparatus 100 splits a square coding unit when the split shape mode information cannot indicate splitting into four square coding units, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine second coding units 1110a, 1110b, 1120a, 1120b, etc. by splitting a first coding unit 1100 based on the split shape mode information. The split shape mode information may include information about various shapes into which a coding unit can be split, but the information about the various shapes may not include information for splitting the coding unit into four square coding units. According to such split shape mode information, the image decoding apparatus 100 cannot split the square first coding unit 1100 into the four square second coding units 1130a, 1130b, 1130c, and 1130d; instead, based on the split shape mode information, the image decoding apparatus 100 may determine the non-square second coding units 1110a, 1110b, 1120a, 1120b, etc.
  • According to an embodiment, the image decoding apparatus 100 may independently split each of the non-square second coding units 1110a, 1110b, 1120a, 1120b, etc. Each of the second coding units 1110a, 1110b, 1120a, 1120b, etc. may be split in a predetermined order based on the split shape mode information, and this may be a splitting method corresponding to the method in which the first coding unit 1100 is split.
  • For example, the image decoding apparatus 100 may determine square third coding units 1112a and 1112b by splitting the left second coding unit 1110a in the horizontal direction, and may determine square third coding units 1114a and 1114b by splitting the right second coding unit 1110b in the horizontal direction. Furthermore, the image decoding apparatus 100 may determine square third coding units 1116a, 1116b, 1116c, and 1116d by splitting both the left second coding unit 1110a and the right second coding unit 1110b in the horizontal direction. In this case, coding units may be determined in the same shape as when the first coding unit 1100 is split into the four square second coding units 1130a, 1130b, 1130c, and 1130d.
  • As another example, the image decoding apparatus 100 may determine square third coding units 1122a and 1122b by splitting the upper second coding unit 1120a in the vertical direction, and may determine square third coding units 1124a and 1124b by splitting the lower second coding unit 1120b in the vertical direction. Furthermore, the image decoding apparatus 100 may determine square third coding units 1126a, 1126b, 1126c, and 1126d by splitting both the upper second coding unit 1120a and the lower second coding unit 1120b in the vertical direction. In this case, coding units may be determined in the same shape as when the first coding unit 1100 is split into the four square second coding units 1130a, 1130b, 1130c, and 1130d.
  • FIG. 12 is a diagram illustrating that the processing order between a plurality of coding units may vary according to a process of dividing a coding unit according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may split a first coding unit 1200 based on the split shape mode information. When the block shape is square and the split shape mode information indicates that the first coding unit 1200 is split in at least one of the horizontal and vertical directions, the image decoding apparatus 100 may determine second coding units (e.g., 1210a, 1210b, 1220a, 1220b, etc.) by splitting the first coding unit 1200. Referring to FIG. 12, the non-square second coding units 1210a, 1210b, 1220a, and 1220b, determined by splitting the first coding unit 1200 only in the horizontal or vertical direction, may each be independently split based on their respective split shape mode information.
  • For example, the image decoding apparatus 100 may determine third coding units 1216a, 1216b, 1216c, and 1216d by splitting, in the horizontal direction, the second coding units 1210a and 1210b generated by splitting the first coding unit 1200 in the vertical direction, and may determine third coding units 1226a, 1226b, 1226c, and 1226d by splitting, in the vertical direction, the second coding units 1220a and 1220b generated by splitting the first coding unit 1200 in the horizontal direction. Since the process of splitting the second coding units 1210a, 1210b, 1220a, and 1220b has been described above with reference to FIG. 11, a detailed description thereof is omitted.
  • According to an embodiment, the image decoding apparatus 100 may process coding units in a predetermined order. Since the characteristics of processing coding units according to a predetermined order have been described above with reference to FIG. 7, a detailed description thereof is omitted. Referring to FIG. 12, the image decoding apparatus 100 may determine four square third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d by splitting the square first coding unit 1200. According to an embodiment, the image decoding apparatus 100 may determine the processing order of the third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d according to the shape in which the first coding unit 1200 is split.
  • According to an embodiment, the image decoding apparatus 100 may determine the third coding units 1216a, 1216b, 1216c, and 1216d by splitting, in the horizontal direction, the second coding units 1210a and 1210b generated by splitting the first coding unit 1200 in the vertical direction, and may process the third coding units 1216a, 1216b, 1216c, and 1216d according to an order 1217 in which the third coding units 1216a and 1216c included in the left second coding unit 1210a are processed first in the vertical direction and then the third coding units 1216b and 1216d included in the right second coding unit 1210b are processed in the vertical direction.
  • According to an embodiment, the image decoding apparatus 100 may determine the third coding units 1226a, 1226b, 1226c, and 1226d by splitting, in the vertical direction, the second coding units 1220a and 1220b generated by splitting the first coding unit 1200 in the horizontal direction, and may process the third coding units 1226a, 1226b, 1226c, and 1226d according to an order 1227 in which the third coding units 1226a and 1226b included in the upper second coding unit 1220a are processed first in the horizontal direction and then the third coding units 1226c and 1226d included in the lower second coding unit 1220b are processed in the horizontal direction.
  • Referring to FIG. 12, the square third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d may be determined by splitting the second coding units 1210a, 1210b, 1220a, and 1220b. Although the second coding units 1210a and 1210b determined by splitting in the vertical direction and the second coding units 1220a and 1220b determined by splitting in the horizontal direction are split in different shapes, the third coding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d determined afterwards result in the first coding unit 1200 being split into coding units of the same shape. Accordingly, even when coding units of the same shape are consequently determined by recursively splitting a coding unit through different processes based on the split shape mode information, the image decoding apparatus 100 may process the plurality of coding units determined in the same shape in different orders.
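  • As a toy illustration of this point, the same four square sub-blocks can be reached by a vertical-then-horizontal path or a horizontal-then-vertical path, and the two paths yield different processing orders (compare the orders 1217 and 1227 above); the representation below is invented for illustration.

```python
# Hypothetical sketch: the same four square sub-blocks of a 2Nx2N block can be
# reached by (vertical then horizontal) or (horizontal then vertical) splits,
# and the two paths yield different processing orders.

N = 8
blocks = {"TL": (0, 0), "TR": (N, 0), "BL": (0, N), "BR": (N, N)}

# vertical first: left column top-to-bottom, then right column top-to-bottom
order_ver_then_hor = ["TL", "BL", "TR", "BR"]
# horizontal first: top row left-to-right, then bottom row left-to-right
order_hor_then_ver = ["TL", "TR", "BL", "BR"]

print([blocks[k] for k in order_ver_then_hor])  # [(0, 0), (0, 8), (8, 0), (8, 8)]
print([blocks[k] for k in order_hor_then_ver])  # [(0, 0), (8, 0), (0, 8), (8, 8)]
```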
  • FIG. 13 illustrates a process in which the depth of a coding unit is determined as the shape and size of the coding unit change, when the coding unit is recursively split so that a plurality of coding units are determined, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion. For example, the predetermined criterion may be the length of the long side of the coding unit. When the length of the long side of the current coding unit is 1/2n times (n>0) the length of the long side of the coding unit before being split, the image decoding apparatus 100 may determine that the depth of the current coding unit is increased by n from the depth of the coding unit before being split. Hereinafter, a coding unit with an increased depth is expressed as a coding unit of a deeper depth.
  • Referring to FIG. 13, according to an embodiment, the image decoding apparatus 100 may determine a second coding unit 1302 and a third coding unit 1304 of deeper depths by splitting a square first coding unit 1300, based on block shape information indicating a square shape (e.g., the block shape information may indicate '0: SQUARE'). If the size of the square first coding unit 1300 is 2Nx2N, the second coding unit 1302, determined by splitting the width and height of the first coding unit 1300 to 1/2, may have a size of NxN. Furthermore, the third coding unit 1304, determined by splitting the width and height of the second coding unit 1302 to 1/2, may have a size of N/2xN/2.
  • In this case, the width and height of the third coding unit 1304 are 1/4 times those of the first coding unit 1300. If the depth of the first coding unit 1300 is D, the depth of the second coding unit 1302, the width and height of which are 1/2 times those of the first coding unit 1300, may be D+1, and the depth of the third coding unit 1304, the width and height of which are 1/4 times those of the first coding unit 1300, may be D+2.
  • According to an embodiment, the image decoding apparatus 100 may determine a second coding unit 1312 or 1322 and a third coding unit 1314 or 1324 of deeper depths by splitting a non-square first coding unit 1310 or 1320, based on block shape information indicating a non-square shape (e.g., the block shape information may indicate '1: NS_VER', indicating a non-square shape whose height is longer than its width, or '2: NS_HOR', indicating a non-square shape whose width is longer than its height).
  • The image decoding apparatus 100 may determine a second coding unit (e.g., 1302, 1312, 1322, etc.) by splitting at least one of the width and height of the first coding unit 1310 of size Nx2N. That is, the image decoding apparatus 100 may determine the second coding unit 1302 of size NxN or the second coding unit 1322 of size NxN/2 by splitting the first coding unit 1310 in the horizontal direction, or may determine the second coding unit 1312 of size N/2xN by splitting the first coding unit 1310 in the horizontal and vertical directions.
  • According to an embodiment, the image decoding apparatus 100 may determine a second coding unit (e.g., 1302, 1312, 1322, etc.) by splitting at least one of the width and height of the first coding unit 1320 of size 2NxN. That is, the image decoding apparatus 100 may determine the second coding unit 1302 of size NxN or the second coding unit 1312 of size N/2xN by splitting the first coding unit 1320 in the vertical direction, or may determine the second coding unit 1322 of size NxN/2 by splitting the first coding unit 1320 in the horizontal and vertical directions.
  • According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (e.g., 1304, 1314, 1324, etc.) by splitting at least one of the width and height of the second coding unit 1302 of size NxN. That is, the image decoding apparatus 100 may determine the third coding unit 1304 of size N/2xN/2, the third coding unit 1314 of size N/4xN/2, or the third coding unit 1324 of size N/2xN/4 by splitting the second coding unit 1302 in the vertical and horizontal directions.
  • According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (e.g., 1304, 1314, 1324, etc.) by splitting at least one of the width and height of the second coding unit 1312 of size N/2xN. That is, the image decoding apparatus 100 may determine the third coding unit 1304 of size N/2xN/2 or the third coding unit 1324 of size N/2xN/4 by splitting the second coding unit 1312 in the horizontal direction, or may determine the third coding unit 1314 of size N/4xN/2 by splitting the second coding unit 1312 in the vertical and horizontal directions.
  • According to an embodiment, the image decoding apparatus 100 may determine a third coding unit (e.g., 1304, 1314, 1324, etc.) by splitting at least one of the width and height of the second coding unit 1322 of size NxN/2. That is, the image decoding apparatus 100 may determine the third coding unit 1304 of size N/2xN/2 or the third coding unit 1314 of size N/4xN/2 by splitting the second coding unit 1322 in the vertical direction, or may determine the third coding unit 1324 of size N/2xN/4 by splitting the second coding unit 1322 in the vertical and horizontal directions.
  • According to an embodiment, the image decoding apparatus 100 may split a square coding unit (e.g., 1300, 1302, 1304) in the horizontal or vertical direction. For example, the first coding unit 1310 of size Nx2N may be determined by splitting the first coding unit 1300 of size 2Nx2N in the vertical direction, or the first coding unit 1320 of size 2NxN may be determined by splitting the first coding unit 1300 in the horizontal direction.
  • According to an embodiment, when the depth is determined based on the length of the longest side of a coding unit, the depth of a coding unit determined by splitting the first coding unit 1300 of size 2Nx2N in the horizontal or vertical direction may be the same as the depth of the first coding unit 1300.
  • According to an embodiment, the width and height of the third coding unit 1314 or 1324 may be 1/4 times those of the first coding unit 1310 or 1320. If the depth of the first coding unit 1310 or 1320 is D, the depth of the second coding unit 1312 or 1322, the width and height of which are 1/2 times those of the first coding unit 1310 or 1320, may be D+1, and the depth of the third coding unit 1314 or 1324, the width and height of which are 1/4 times those of the first coding unit 1310 or 1320, may be D+2.
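  • The depth rule used with FIG. 13 (the depth increases by n each time the long side becomes 1/2n of its previous length) can be written compactly as the sketch below, assuming the depth of the starting coding unit is 0; the function name is invented.

```python
import math

# Hypothetical sketch: the depth increases by n when the long side of a coding
# unit becomes 1/2**n of the long side of the coding unit it was split from.

def depth(long_side_before, long_side_after, depth_before=0):
    n = int(math.log2(long_side_before // long_side_after))
    return depth_before + n

print(depth(2 * 16, 16))       # 2Nx2N -> NxN          : depth 0 -> 1
print(depth(2 * 16, 16 // 2))  # 2Nx2N -> N/2xN/2      : depth 0 -> 2
print(depth(2 * 16, 2 * 16))   # 2Nx2N -> Nx2N or 2NxN : long side unchanged -> depth 0
```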
  • FIG. 14 illustrates a depth that can be determined according to the shape and size of coding units, and a part index (hereinafter, PID) for distinguishing the coding units, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine second coding units of various shapes by splitting a square first coding unit 1400. Referring to FIG. 14, the image decoding apparatus 100 may determine second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d by splitting the first coding unit 1400 in at least one of the vertical and horizontal directions according to the split shape mode information. That is, the image decoding apparatus 100 may determine the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d based on the split shape mode information for the first coding unit 1400.
  • According to an embodiment, the depths of the second coding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d, determined according to the split shape mode information for the square first coding unit 1400, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first coding unit 1400 is equal to the lengths of the long sides of the non-square second coding units 1402a, 1402b, 1404a, and 1404b, the first coding unit 1400 and the non-square second coding units 1402a, 1402b, 1404a, and 1404b may have the same depth, namely D. In contrast, when the image decoding apparatus 100 splits the first coding unit 1400 into the four square second coding units 1406a, 1406b, 1406c, and 1406d based on the split shape mode information, since the length of one side of the square second coding units 1406a, 1406b, 1406c, and 1406d is 1/2 times the length of one side of the first coding unit 1400, the depth of the second coding units 1406a, 1406b, 1406c, and 1406d may be D+1, which is one depth deeper than the depth D of the first coding unit 1400.
  • According to an embodiment, the image decoding apparatus 100 may split a first coding unit 1410, the height of which is longer than its width, in the horizontal direction into a plurality of second coding units 1412a and 1412b, or 1414a, 1414b, and 1414c, according to the split shape mode information. According to an embodiment, the image decoding apparatus 100 may split a first coding unit 1420, the width of which is longer than its height, in the vertical direction into a plurality of second coding units 1422a and 1422b, or 1424a, 1424b, and 1424c, according to the split shape mode information.
  • According to an embodiment, the depths of the second coding units 1412a, 1412b, 1414a, 1414b, 1414c, 1422a, 1422b, 1424a, 1424b, and 1424c, determined according to the split shape mode information for the non-square first coding unit 1410 or 1420, may be determined based on the lengths of their long sides. For example, since the length of one side of the square second coding units 1412a and 1412b is 1/2 times the length of one side of the non-square first coding unit 1410, the height of which is longer than its width, the depth of the square second coding units 1412a and 1412b is D+1, which is one depth deeper than the depth D of the non-square first coding unit 1410.
  • Furthermore, the image decoding apparatus 100 may split the non-square first coding unit 1410 into an odd number of second coding units 1414a, 1414b, and 1414c based on the split shape mode information. The odd number of second coding units 1414a, 1414b, and 1414c may include the non-square second coding units 1414a and 1414c and the square second coding unit 1414b. In this case, since the lengths of the long sides of the non-square second coding units 1414a and 1414c and the length of one side of the square second coding unit 1414b are 1/2 times the length of one side of the first coding unit 1410, the depth of the second coding units 1414a, 1414b, and 1414c may be D+1, which is one depth deeper than the depth D of the first coding unit 1410.
  • The image decoding apparatus 100 may determine the depths of coding units related to the first coding unit 1420, the width of which is longer than its height, by using a method corresponding to the above method of determining the depths of the coding units related to the first coding unit 1410.
  • According to an embodiment, in determining the indexes (PIDs) for distinguishing the coding units, when the coding units split into an odd number are not of the same size, the image decoding apparatus 100 may determine the indexes based on the size ratio between the coding units. Referring to FIG. 14, the coding unit 1414b located at the center among the coding units 1414a, 1414b, and 1414c split into an odd number has the same width as the other coding units 1414a and 1414c, but has a height twice the height of the other coding units 1414a and 1414c. That is, in this case, the center coding unit 1414b may include two of the other coding units 1414a and 1414c.
  • Accordingly, when there is a discontinuity in the indexes for distinguishing the split coding units, the image decoding apparatus 100 may determine that the coding units split into an odd number are not of the same size. According to an embodiment, the image decoding apparatus 100 may determine whether the current coding unit is split into a particular split shape based on the values of the indexes for distinguishing the plurality of coding units determined by splitting the current coding unit. Referring to FIG. 14, the image decoding apparatus 100 may determine the even number of coding units 1412a and 1412b or the odd number of coding units 1414a, 1414b, and 1414c by splitting the first coding unit 1410, the height of which is longer than its width.
  • The image decoding apparatus 100 may use the index (PID) indicating each coding unit in order to distinguish each of the plurality of coding units. According to an embodiment, the PID may be obtained from a sample at a predetermined position of each coding unit (e.g., the upper-left sample).
  • According to an embodiment, the image decoding apparatus 100 may determine a coding unit at a predetermined position among the split coding units by using the indexes used for distinguishing the coding units. According to an embodiment, when the split shape mode information for the first coding unit 1410, the height of which is longer than its width, indicates splitting into three coding units, the image decoding apparatus 100 may split the first coding unit 1410 into the three coding units 1414a, 1414b, and 1414c, and may allocate an index to each of the three coding units 1414a, 1414b, and 1414c.
  • The image decoding apparatus 100 may compare the indexes of the coding units in order to determine the center coding unit among the coding units split into an odd number. Based on the indexes of the coding units, the image decoding apparatus 100 may determine the coding unit 1414b, which has the index corresponding to the middle value among the indexes, as the coding unit at the center position among the coding units determined by splitting the first coding unit 1410.
  • According to an embodiment, in determining the indexes for distinguishing the split coding units, when the coding units are not of the same size, the image decoding apparatus 100 may determine the indexes based on the size ratio between the coding units. Referring to FIG. 14, the coding unit 1414b generated by splitting the first coding unit 1410 may have the same width as the other coding units 1414a and 1414c, but may have a height twice the height of the other coding units 1414a and 1414c. In this case, if the index (PID) of the center coding unit 1414b is 1, the index of the coding unit 1414c located in the next order is increased by 2 and may thus be 3. When the index does not increase uniformly in this way, that is, when the increment varies, the image decoding apparatus 100 may determine that the coding unit is split into a plurality of coding units including a coding unit having a size different from that of the other coding units. According to an embodiment, when the split shape mode information indicates splitting into an odd number of coding units, the image decoding apparatus 100 may split the current coding unit into a shape in which the coding unit at a predetermined position (e.g., the center coding unit) among the odd number of coding units has a size different from that of the other coding units. In this case, the image decoding apparatus 100 may determine the center coding unit having the different size by using the indexes (PIDs) of the coding units. However, the size or position of the coding unit at the predetermined position to be determined is specified merely to describe an embodiment and should not be construed as being limited thereto; it should be construed that various indexes, and various positions and sizes of coding units, may be used.
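  • One way to picture the index (PID) discussion above is the sketch below; the specific assignment shown, in which a part spanning two unit heights advances the index by 2, is an assumption made for illustration, and the names are invented.

```python
# Hypothetical sketch: assign PIDs to a 1:2:1 split of a tall block in scan
# order, letting a part that is twice the unit height advance the index by 2,
# then detect the odd split from the resulting discontinuity.

def assign_pids(heights, unit):
    pids, pid = [], 0
    for h in heights:
        pids.append(pid)
        pid += h // unit          # a double-height part consumes two index steps
    return pids

heights = [4, 8, 4]               # coding units 1414a, 1414b, 1414c of FIG. 14
pids = assign_pids(heights, unit=4)
print(pids)                       # [0, 1, 3] -> jump from 1 to 3

uniform = all(b - a == 1 for a, b in zip(pids, pids[1:]))
print(uniform)                    # False -> the parts are not all the same size

# the centre coding unit is the one whose PID equals the middle of the range
centre_pid = (pids[0] + pids[-1]) // 2
print(heights[pids.index(centre_pid)])   # 8 -> the larger, centre coding unit
```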
  • the image decoding apparatus 100 may use a predetermined data unit in which the recursive division of the coding unit starts.
  • FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
  • According to an embodiment, the predetermined data unit may be defined as a data unit from which a coding unit starts to be recursively split by using the split shape mode information. That is, the predetermined data unit may correspond to the coding unit of the uppermost depth used in the process of determining the plurality of coding units that split the current picture. Hereinafter, for convenience, the predetermined data unit is referred to as a reference data unit.
  • According to an embodiment, the reference data unit may have a predetermined size and shape. According to an embodiment, the reference data unit may include MxN samples. Here, M and N may be equal to each other and may be integers expressed as powers of 2. That is, the reference data unit may have a square or non-square shape, and may afterwards be split into an integer number of coding units.
  • According to an embodiment, the image decoding apparatus 100 may split the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may split the plurality of reference data units, which split the current picture, by using the split shape mode information for each reference data unit. The splitting process of such reference data units may correspond to a splitting process using a quad-tree structure.
  • According to an embodiment, the image decoding apparatus 100 may determine in advance the minimum size that a reference data unit included in the current picture can have. Accordingly, the image decoding apparatus 100 may determine reference data units of various sizes equal to or greater than the minimum size, and may determine at least one coding unit by using the split shape mode information based on the determined reference data unit.
  • Referring to FIG. 15, the image decoding apparatus 100 may use a square reference coding unit 1500 or a non-square reference coding unit 1502. According to an embodiment, the shape and size of the reference coding unit may be determined for each of various data units (e.g., a sequence, a picture, a slice, a slice segment, a tile, a tile group, a maximum coding unit, etc.) that can include at least one reference coding unit.
  • According to an embodiment, the bitstream acquisition unit 110 of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information about the shape of the reference coding unit and information about the size of the reference coding unit, for each of the various data units.
  • Since the process of determining at least one coding unit included in the square reference coding unit 1500 has been described above through the process of splitting the current coding unit 300 of FIG. 3, and the process of determining at least one coding unit included in the non-square reference coding unit 1502 has been described above through the process of splitting the current coding unit 400 or 450 of FIG. 4, a detailed description thereof is omitted.
  • According to an embodiment, the image decoding apparatus 100 may use an index for identifying the size and shape of the reference coding unit, in order to determine the size and shape of the reference coding unit according to certain data units determined in advance based on a predetermined condition. That is, the bitstream acquisition unit 110 may obtain, from the bitstream, only the index for identifying the size and shape of the reference coding unit for each slice, slice segment, tile, tile group, maximum coding unit, etc., as a data unit that satisfies the predetermined condition (e.g., a data unit having a size equal to or smaller than a slice) among the various data units (e.g., a sequence, a picture, a slice, a slice segment, a tile, a tile group, a maximum coding unit, etc.). The image decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit that satisfies the predetermined condition by using the index.
  • When the information about the shape of the reference coding unit and the information about the size of the reference coding unit are obtained from the bitstream for each data unit of a relatively small size, the use efficiency of the bitstream may not be good; therefore, instead of directly obtaining the information about the shape of the reference coding unit and the information about the size of the reference coding unit, only the index may be obtained and used. In this case, at least one of the size and shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be determined in advance. That is, the image decoding apparatus 100 may determine at least one of the size and shape of the reference coding unit included in the data unit serving as the unit for obtaining the index, by selecting at least one of the predetermined sizes and shapes of reference coding units according to the index.
  • According to an embodiment, the image decoding apparatus 100 may use at least one reference coding unit included in one maximum coding unit. That is, the maximum coding unit for splitting an image may include at least one reference coding unit, and a coding unit may be determined through a recursive splitting process of each reference coding unit. According to an embodiment, at least one of the width and height of the maximum coding unit may correspond to an integer multiple of at least one of the width and height of the reference coding unit.
  • According to an embodiment, the size of the reference coding unit may be a size obtained by splitting the maximum coding unit n times according to a quad-tree structure. That is, the image decoding apparatus 100 may determine the reference coding unit by splitting the maximum coding unit n times according to the quad-tree structure, and according to various embodiments, may split the reference coding unit based on at least one of block shape information and split shape mode information.
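  • A minimal sketch of the relationship stated above, assuming the reference coding unit is obtained by splitting the maximum coding unit n times along a quad-tree (the function name is invented):

```python
# Hypothetical sketch: the size of the reference coding unit obtained by
# splitting the maximum coding unit n times according to a quad-tree structure.

def reference_cu_size(max_cu_size, n):
    return max_cu_size >> n          # each quad-tree split halves width and height

print(reference_cu_size(128, 0))     # 128 -> the maximum coding unit itself
print(reference_cu_size(128, 2))     # 32  -> after two quad-tree splits
```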
  • According to an embodiment, the image decoding apparatus 100 may obtain, from the bitstream, and use block shape information indicating the shape of the current coding unit or split shape mode information indicating the method of splitting the current coding unit.
  • The split shape mode information may be included in the bitstream related to various data units. For example, the image decoding apparatus 100 may use the split shape mode information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, a slice segment header, a tile header, or a tile group header.
  • Furthermore, the image decoding apparatus 100 may obtain, from the bitstream, and use a syntax element corresponding to the block shape information or the split shape mode information, for each maximum coding unit, each reference coding unit, and each processing block.
  • According to an embodiment, the image decoding apparatus 100 may determine a splitting rule for an image. The splitting rule may be determined in advance between the image decoding apparatus 100 and the image encoding apparatus 200.
  • The image decoding apparatus 100 may determine the splitting rule for the image based on information obtained from the bitstream. The image decoding apparatus 100 may determine the splitting rule based on information obtained from at least one of a sequence parameter set, a picture parameter set, a video parameter set, a slice header, a slice segment header, a tile header, and a tile group header.
  • the image decoding apparatus 100 may determine the segmentation rule differently according to a frame, slice, tile, temporal layer, maximum coding unit, or coding unit.
  • The image decoding apparatus 100 may determine the splitting rule based on the block shape of the coding unit. The block shape may include the size, shape, ratio of width and height, and direction of the coding unit. The image encoding apparatus 200 and the image decoding apparatus 100 may determine in advance that the splitting rule is to be determined based on the block shape of the coding unit; however, it is not limited thereto, and the image decoding apparatus 100 may determine the splitting rule based on information obtained from the bitstream received from the image encoding apparatus 200.
  • The shape of a coding unit may include square and non-square shapes. When the width and height of the coding unit are the same, the image decoding apparatus 100 may determine the shape of the coding unit to be square. Also, when the width and height of the coding unit are not the same, the image decoding apparatus 100 may determine the shape of the coding unit to be non-square.
  • The size of the coding unit may include various sizes such as 4x4, 8x4, 4x8, 8x8, 16x4, 16x8, ..., 256x256. The size of the coding unit may be classified according to the length of the long side, the length of the short side, or the area of the coding unit. The image decoding apparatus 100 may apply the same splitting rule to coding units classified into the same group. For example, the image decoding apparatus 100 may classify coding units having the same long-side length as coding units of the same size, and may apply the same splitting rule to coding units having the same long-side length.
  • The ratio of the width and height of the coding unit may include 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 32:1, 1:32, and the like.
  • The direction of the coding unit may include the horizontal direction and the vertical direction. The horizontal direction may indicate a case in which the width of the coding unit is longer than its height. The vertical direction may indicate a case in which the width of the coding unit is shorter than its height.
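  • A small sketch of the block classification described above (shape, width:height ratio and direction derived from the width and height; the label strings are invented for illustration):

```python
import math

# Hypothetical sketch: derive the shape, width:height ratio and direction of a
# coding unit from its width and height, as used when grouping blocks for the
# splitting rule.

def block_shape(width, height):
    g = math.gcd(width, height)
    shape = "SQUARE" if width == height else "NON_SQUARE"
    if width == height:
        direction = "NONE"
    elif width > height:
        direction = "HORIZONTAL"   # width longer than height
    else:
        direction = "VERTICAL"     # height longer than width
    return shape, f"{width // g}:{height // g}", direction

print(block_shape(16, 16))   # ('SQUARE', '1:1', 'NONE')
print(block_shape(32, 8))    # ('NON_SQUARE', '4:1', 'HORIZONTAL')
print(block_shape(4, 64))    # ('NON_SQUARE', '1:16', 'VERTICAL')
```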
  • The image decoding apparatus 100 may adaptively determine the splitting rule based on the size of the coding unit. The image decoding apparatus 100 may determine the allowable split shape modes differently based on the size of the coding unit. For example, the image decoding apparatus 100 may determine whether splitting is allowed based on the size of the coding unit. The image decoding apparatus 100 may determine the split direction according to the size of the coding unit, and may determine the allowable split type according to the size of the coding unit.
  • The determination of the splitting rule based on the size of the coding unit may be a splitting rule determined in advance between the image encoding apparatus 200 and the image decoding apparatus 100. Also, the image decoding apparatus 100 may determine the splitting rule based on information obtained from the bitstream.
  • The image decoding apparatus 100 may adaptively determine the splitting rule based on the position of the coding unit. The image decoding apparatus 100 may adaptively determine the splitting rule based on the position occupied by the coding unit within the image.
  • Also, the image decoding apparatus 100 may determine the splitting rule such that coding units generated by different splitting paths do not have the same block shape. However, it is not limited thereto, and coding units generated by different splitting paths may have the same block shape. Coding units generated by different splitting paths may have different decoding processing orders. Since the decoding processing order has been described above with reference to FIG. 12, a detailed description thereof is omitted.
  • FIG. 16 illustrates combinations of shapes into which a coding unit can be split, according to an embodiment.
  • According to an embodiment, the image decoding apparatus 100 may determine differently, for each picture, the combination of split shapes into which a coding unit can be split. For example, the image decoding apparatus 100 may decode an image by using, from among at least one picture included in the image, a picture 1600 that can be split into 4 coding units, a picture 1610 that can be split into 2 or 4 coding units, and a picture 1620 that can be split into 2, 3, or 4 coding units.
  • the image decoding apparatus 100 may use only the segmentation type information indicating that the picture 1600 is divided into four square coding units in order to divide the picture 1600 into a plurality of coding units.
  • the video decoding apparatus 100 may use only the division type information indicating that the picture 1610 is divided into two or four coding units.
  • In order to split the picture 1620, the image decoding apparatus 100 may use only split shape information indicating splitting into 2, 3, or 4 coding units.
  • The combinations of split shapes described above are merely an embodiment for describing the operation of the image decoding apparatus 100; therefore, the combinations of split shapes should not be construed as being limited to the above embodiments, and it should be interpreted that various combinations of split shapes may be used for each predetermined data unit.
  • The bitstream acquisition unit 110 of the image decoding apparatus 100 may acquire a bitstream including an index indicating a combination of split shape information, for each predetermined data unit (for example, a sequence, a picture, a slice, a slice segment, a tile, or a tile group).
  • For example, the bitstream acquisition unit 110 may acquire the index indicating the combination of split shape information from a sequence parameter set, a picture parameter set, a slice header, a tile header, or a tile group header.
  • The image decoding apparatus 100 may determine, for each predetermined data unit, the combination of split shapes into which coding units can be split by using the acquired index, and accordingly a different combination of split shapes may be used for each predetermined data unit.
  • FIG. 17 shows various shapes of coding units that can be determined based on split shape mode information that can be expressed as a binary code, according to an embodiment.
  • The image decoding apparatus 100 may split a coding unit into various shapes by using the split shape mode information acquired through the bitstream acquisition unit 110.
  • the type of the coding unit that can be divided may correspond to various types including the types described through the above-described embodiments.
  • The image decoding apparatus 100 may split a square coding unit in at least one of the horizontal direction and the vertical direction based on the split shape mode information, and may split a non-square coding unit in the horizontal direction or the vertical direction.
  • When the image decoding apparatus 100 can split a square coding unit in the horizontal direction and the vertical direction so as to split it into four square coding units, there may be four split shapes that the split shape mode information for the square coding unit can indicate. When the coding unit is not split, the split shape mode information may be expressed as (00)b; when the coding unit is split in the horizontal direction and the vertical direction, the split shape mode information may be expressed as (01)b; when the coding unit is split in the horizontal direction, the split shape mode information may be expressed as (10)b; and when the coding unit is split in the vertical direction, the split shape mode information may be expressed as (11)b.
  • When the image decoding apparatus 100 splits a non-square coding unit in the horizontal direction or the vertical direction, the split shapes that the split shape mode information can indicate may represent into how many coding units the coding unit is split.
  • According to an embodiment, the image decoding apparatus 100 may split a non-square coding unit into up to three coding units. The image decoding apparatus 100 may split the coding unit into two coding units, and in this case the split shape mode information may be expressed as (10)b. The image decoding apparatus 100 may split the coding unit into three coding units, and in this case the split shape mode information may be expressed as (11)b. The image decoding apparatus 100 may determine not to split the coding unit, and in this case the split shape mode information may be expressed as (0)b. That is, the image decoding apparatus 100 may use variable length coding (VLC) instead of fixed length coding (FLC) as the binary code indicating the split shape mode information.
  • Referring to FIG. 17, the binary code of the split shape mode information indicating that the non-square coding unit is not split may be expressed as (0)b. If the binary code of the split shape mode information indicating that the coding unit is not split were set to (00)b, all 2-bit binary codes of the split shape mode information would have to be used even though there is no split shape mode information set to (01)b. However, as shown in FIG. 17, when three split shapes are used for a non-square coding unit, the image decoding apparatus 100 may use the 1-bit binary code (0)b as the split shape mode information, so the bitstream can be used efficiently.
  • However, the split shapes of the non-square coding unit indicated by the split shape mode information should not be construed as being limited to only the three shapes shown in FIG. 17, and should be interpreted as various shapes including the above-described embodiments.
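  • The binarization described above can be summarized by the following non-normative Python sketch; the table and function names are illustrative and are not part of the disclosed syntax.

        # Illustrative sketch (not the normative syntax): binarization of split
        # shape mode information. Square coding units use a 2-bit fixed length
        # code (FLC); non-square coding units use a variable length code (VLC)
        # so that "no split" costs only one bit.

        SQUARE_FLC = {            # 2-bit FLC for square coding units (FIG. 17)
            "NO_SPLIT":   "00",
            "QUAD_SPLIT": "01",   # split in horizontal and vertical directions
            "HOR_SPLIT":  "10",
            "VER_SPLIT":  "11",
        }

        NON_SQUARE_VLC = {        # VLC for non-square coding units
            "NO_SPLIT":    "0",
            "SPLIT_TWO":   "10",  # split into two coding units
            "SPLIT_THREE": "11",  # split into three coding units
        }

        def binarize_split_mode(is_square: bool, mode: str) -> str:
            """Return the bin string for the given split shape mode."""
            table = SQUARE_FLC if is_square else NON_SQUARE_VLC
            return table[mode]

        # Signalling "no split" for a non-square coding unit costs only 1 bit.
        assert binarize_split_mode(False, "NO_SPLIT") == "0"
        assert binarize_split_mode(True, "QUAD_SPLIT") == "01"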
  • FIG. 18 shows other shapes of coding units that can be determined based on split shape mode information that can be expressed as a binary code, according to an embodiment.
  • The image decoding apparatus 100 may split a square coding unit in the horizontal direction or the vertical direction based on the split shape mode information, and may split a non-square coding unit in the horizontal direction or the vertical direction.
  • the division type mode information can indicate that the square type coding unit is divided in one direction.
  • The binary code of the split shape mode information indicating that the square coding unit is not split may be expressed as (0)b. If the binary code of the split shape mode information indicating that the coding unit is not split were set to (00)b, all 2-bit binary codes of the split shape mode information would have to be used even though there is no split shape mode information set to (01)b.
  • However, as shown in FIG. 18, the image decoding apparatus 100 can determine that the coding unit is not split even by using the 1-bit binary code (0)b as the split shape mode information, so the bitstream can be used efficiently.
  • However, the split shapes of the square coding unit indicated by the split shape mode information should not be construed as being limited to only the three shapes shown in FIG. 18, and should be interpreted as various shapes including the above-described embodiments.
  • According to an embodiment, the block shape information or the split shape mode information may be expressed using a binary code, and such information may be directly generated as a bitstream.
  • Also, the block shape information or the split shape mode information that can be expressed as a binary code may not be directly generated as a bitstream, but may instead be used as a binary code input to context-adaptive binary arithmetic coding (CABAC).
  • A process in which the image decoding apparatus 100 obtains a syntax for the block shape information or the split shape mode information through CABAC is described. A bitstream including a binary code for the syntax may be obtained through the bitstream acquisition unit 110.
  • The image decoding apparatus 100 may detect a syntax element indicating the block shape information or the split shape mode information by inverse-binarizing a bin string included in the acquired bitstream.
  • The image decoding apparatus 100 may obtain a set of binary bin strings corresponding to the syntax element to be decoded, and may decode each bin by using probability information; the image decoding apparatus 100 may repeat this until the bin string composed of the decoded bins becomes equal to one of the previously obtained bin strings.
  • The image decoding apparatus 100 may determine the syntax element by performing inverse binarization of the bin string.
  • The image decoding apparatus 100 may determine the syntax for the bin string by performing a decoding process of adaptive binary arithmetic coding, and the image decoding apparatus 100 may update the probability model for the bins acquired through the bitstream acquisition unit 110.
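  • A simplified, non-normative sketch of the matching loop described above is given below; the arithmetic decoder is abstracted as a decode_bin() callback, and the candidate bin strings and syntax values shown are illustrative.

        # Simplified sketch of the inverse binarization loop described above.
        # decode_bin() stands in for the binary arithmetic decoder and returns
        # the next decoded bin (0 or 1) using the current probability model.

        def parse_syntax_element(candidates: dict, decode_bin) -> str:
            """Decode bins until the decoded bin string equals one of the
            previously obtained candidate bin strings, then inverse-binarize.

            candidates: mapping from bin string (e.g. "0", "10") to syntax value.
            decode_bin: callable returning the next decoded bin as 0 or 1.
            """
            bin_string = ""
            while bin_string not in candidates:
                bin_string += str(decode_bin())
            return candidates[bin_string]       # inverse binarization

        # Example with the non-square split shape codes used above.
        bins = iter([1, 1])                     # pretend the decoder yields 1, 1
        syntax = parse_syntax_element(
            {"0": "NO_SPLIT", "10": "SPLIT_TWO", "11": "SPLIT_THREE"},
            lambda: next(bins))
        assert syntax == "SPLIT_THREE"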
  • According to an embodiment, the bitstream acquisition unit 110 of the image decoding apparatus 100 may obtain a bitstream indicating a binary code representing the split shape mode information. The image decoding apparatus 100 may determine the syntax for the split shape mode information, and may update the probability for each bin of the 2-bit binary code. That is, depending on whether the value of the first bin of the 2-bit binary code is 0 or 1, the image decoding apparatus 100 may update the probability that the next bin has a value of 0 or 1 when decoding the next bin.
  • The probabilities of the bins used in the process of decoding the bins of the bin string for the syntax may be updated, and the image decoding apparatus 100 may determine that a specific bin of the bin string has an equal probability, without updating its probability.
  • Referring to FIG. 17, the image decoding apparatus 100 may determine the syntax for the split shape mode information by using one bin having a value of 0 when the non-square coding unit is not split. That is, when the block shape information indicates that the current coding unit is non-square, the first bin of the bin string for the split shape mode information may be 0 when the non-square coding unit is not split, and may be 1 when the coding unit is split into two or three coding units.
  • the probability that the first bin of the bin string of the split mode information for the coding unit is 0 may be 1/3, and the probability that it is 1 may be 2/3.
  • Since the split shape mode information indicating that the non-square coding unit is not split can be expressed only with a 1-bit bin string having a value of 0, the image decoding apparatus 100 may determine the syntax for the split shape mode information by determining whether the second bin is 0 or 1 only when the first bin of the split shape mode information is 1. According to an embodiment, when the first bin for the split shape mode information is 1, the image decoding apparatus 100 may decode the second bin by regarding the probabilities that the second bin is 0 or 1 as equal.
  • As such, the image decoding apparatus 100 may use various probabilities for each bin in the process of determining the bins of the bin string for the split shape mode information. According to an embodiment, the image decoding apparatus 100 may determine the probabilities of the bins for the split shape mode information differently according to the direction of the non-square block. According to an embodiment, the image decoding apparatus 100 may determine the probabilities of the bins for the split shape mode information differently according to the width or the length of the long side of the current coding unit. According to an embodiment, the image decoding apparatus 100 may determine the probabilities of the bins for the split shape mode information differently according to at least one of the shape of the current coding unit and the length of its long side.
  • However, the image decoding apparatus 100 may determine that the probabilities of the bins for the split shape mode information are the same for coding units of a predetermined size or larger. For example, for coding units whose long-side length is 64 samples or more, it may be determined that the probabilities of the bins for the split shape mode information are the same.
  • The image decoding apparatus 100 may determine the initial probabilities of the bins constituting the bin string of the split shape mode information based on the slice type (e.g., I slice, P slice, or B slice).
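  • The following non-normative sketch illustrates how a bin probability could be selected according to the block shape, the long-side length, and the slice type as described above; the 1/3 value and the 64-sample threshold come from the text, while the per-slice-type values are illustrative assumptions.

        # Illustrative sketch: selecting the probability used for the first bin
        # of the split shape mode information. The specific numbers, except
        # those stated in the text, are examples rather than normative values.

        def first_bin_prob_of_zero(is_square: bool, long_side: int,
                                   slice_type: str) -> float:
            """Probability that the first bin is 0 ("no split")."""
            if long_side >= 64:
                return 0.5                  # large coding units: equal probability
            if not is_square:
                return 1.0 / 3.0            # example from the text: P(0)=1/3, P(1)=2/3
            # Initial probabilities may also be chosen per slice type (I, P or B);
            # the values below are hypothetical.
            return {"I": 0.5, "P": 0.4, "B": 0.4}.get(slice_type, 0.5)

        print(first_bin_prob_of_zero(False, 32, "B"))   # 0.333...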
  • FIG. 19 is a block diagram of an image encoding and decoding system that performs loop filtering.
  • The encoding end 1910 of the image encoding and decoding system 1900 transmits an encoded bitstream of an image, and the decoding end 1950 receives and decodes the bitstream and outputs a reconstructed image.
  • the encoding end 1910 may have a configuration similar to the image encoding apparatus 200 to be described later, and the decoding end 1950 may have a configuration similar to the image decoding apparatus 100.
  • In the encoding end 1910, the prediction encoding unit 1915 outputs prediction data through inter prediction and intra prediction, and the transform and quantization unit 1920 transforms and quantizes residual data between the prediction data and the current input image and outputs quantized transform coefficients.
  • The entropy encoding unit 1925 encodes the quantized transform coefficients and outputs them as a bitstream.
  • The quantized transform coefficients are reconstructed into data in the spatial domain through the inverse quantization and inverse transform unit 1930, and the reconstructed data in the spatial domain is output as a reconstructed image through the deblocking filtering unit 1935 and the loop filtering unit 1940.
  • the restored image may be used as a reference image of the next input image through the prediction encoding unit 1915.
  • the encoded image data in the bitstream received by the decoding stage 1950 is restored to residual data in the spatial domain through an entropy decoding unit 1955 and an inverse quantization and inverse transformation unit 1960.
  • The prediction data output by the prediction decoding unit 1975 and the residual data are combined to form image data in the spatial domain, and the deblocking filtering unit 1965 and the loop filtering unit 1970 may perform filtering on the image data in the spatial domain and output a reconstructed image for the current original image.
  • The reconstructed image may then be used by the prediction decoding unit 1975 as a reference image for a next image.
  • the loop filtering unit 1940 of this encoding end 1910 performs loop filtering by using the filter information input according to a user input or a system setting.
  • The filter information used by the loop filtering unit 1940 is output to the entropy encoding unit 1925 and transmitted to the decoding end 1950 together with the encoded image data.
  • The loop filtering unit 1970 of the decoding end 1950 may perform loop filtering based on the filter information input to the decoding end 1950.
  • FIG. 2 is a block diagram of an image encoding apparatus 200 capable of encoding an image based on at least one of block type information and split type mode information according to an embodiment.
  • The image encoding apparatus 200 may include an encoding unit 220 and a bitstream generation unit 210.
  • the encoding unit 220 may receive an input image and encode an input image.
  • the encoding unit 220 may obtain at least one syntax element by encoding the input image.
  • the syntax element includes a skip flag, prediction mode, motion vector difference, motion vector prediction method (or index), transform quantized coefficient, and coded block pattern. , coded block flag, intra prediction mode, direct flag, merge flag, delta QP, reference index, prediction direction, and transform index.
  • The encoding unit 220 may determine a context model based on block shape information including at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • The bitstream generation unit 210 may generate a bitstream based on the encoded input image.
  • The bitstream generation unit 210 may generate the bitstream by entropy-encoding the syntax elements based on the context model.
  • the image encoding apparatus 200 may transmit a bitstream to the image decoding apparatus 100.
  • According to an embodiment, the encoding unit 220 of the image encoding apparatus 200 may determine the shape of the coding unit. For example, the coding unit may have a square or non-square shape, and information representing this shape may be included in the block shape information.
  • the encoding unit 220 may determine in what form the encoding unit is to be divided.
  • The encoding unit 220 may determine the shape of at least one coding unit included in the coding unit, and the bitstream generation unit 210 may generate a bitstream including split shape mode information containing information on the shape of the coding unit.
  • the encoding unit 220 may determine whether the encoding unit is divided or not.
  • When the encoding unit 220 determines that only one coding unit is included in the coding unit or that the coding unit is not split, the bitstream generation unit 210 may generate a bitstream including split shape mode information indicating that the coding unit is not split.
  • The encoding unit 220 may also split the coding unit into a plurality of coding units included in the coding unit, and the bitstream generation unit 210 may generate a bitstream including split shape mode information indicating that the coding unit is split into the plurality of coding units.
  • According to an embodiment, information indicating into how many coding units the coding unit is split or in which direction it is split may be included in the split shape mode information. For example, the split shape mode information may indicate splitting in at least one of the vertical direction and the horizontal direction, or may indicate no splitting.
  • the image encoding apparatus 200 determines information on the division mode based on the division mode mode of the coding unit.
  • The image encoding apparatus 200 determines a context model based on at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • the image encoding apparatus 200 generates information on a split mode for splitting a coding unit as a bitstream based on the context model.
  • The image encoding apparatus 200 may obtain an arrangement for matching an index for the context model with at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • The image encoding apparatus 200 may obtain, from the arrangement, the index for the context model based on at least one of the shape, direction, ratio of width and height, or size of the coding unit.
  • The image encoding apparatus 200 may obtain the index for the context model and may determine the context model based on the index.
  • The image encoding apparatus 200 may determine the context model further based on block shape information including at least one of the shape, direction, ratio of width and height, or size of a neighboring coding unit adjacent to the coding unit.
  • the peripheral coding unit may include at least one of the coding units located at the lower left, left, upper left, upper, upper right, right or lower right of the coding unit.
  • In addition, in order to determine the context model, the image encoding apparatus 200 may compare the width of the upper neighboring coding unit with the width of the coding unit.
  • In addition, the image encoding apparatus 200 may compare the heights of the left and right neighboring coding units with the height of the coding unit.
  • In addition, the image encoding apparatus 200 may determine the context model based on the comparison results.
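  • A non-normative sketch of such a context-model selection is shown below; the mapping from the comparison results to a context index is an assumption made only for illustration.

        # Hedged sketch: deriving a context index for the split shape mode
        # information by comparing the current coding unit with its neighbours,
        # as described above. The exact mapping is a codec design choice.

        from typing import Optional

        def split_mode_context(cur_w: int, cur_h: int,
                               above_w: Optional[int],
                               left_h: Optional[int]) -> int:
            """Return a small context index in {0, 1, 2}."""
            ctx = 0
            if above_w is not None and above_w < cur_w:
                ctx += 1        # upper neighbour narrower than current coding unit
            if left_h is not None and left_h < cur_h:
                ctx += 1        # left neighbour shorter than current coding unit
            return ctx

        print(split_mode_context(64, 64, above_w=32, left_h=64))   # 1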
  • FIG. 20 is a diagram illustrating a configuration of an image decoding apparatus 2000 according to an embodiment.
  • the image decoding apparatus 2000 includes an acquisition unit 2010, a block determination unit 2030, a prediction decoding unit 2050, and a restoration unit 2070.
  • The acquisition unit 2010 may correspond to the bitstream acquisition unit 110 shown in FIG. 1, and the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070 may correspond to the decoding unit of the image decoding apparatus 100 shown in FIG. 1.
  • the acquisition unit 2010, the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070 may be implemented with at least one processor.
  • The image decoding apparatus 2000 may include one or more data storage units (not shown) that store input/output data of the acquisition unit 2010, the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070, and may also include a memory control unit (not shown) that controls data input/output of the data storage unit (not shown).
  • the acquisition unit 2010 receives a bitstream generated as a result of encoding an image.
  • The acquisition unit 2010 acquires syntax elements for decoding the image from the bitstream.
  • Binary values corresponding to the syntax elements may be included in the bitstream according to the hierarchical structure of the image.
  • The acquisition unit 2010 may acquire the syntax elements by entropy-decoding the binary values included in the bitstream.
  • FIG. 21 is an exemplary diagram illustrating the structure of a bitstream 2100 generated according to the hierarchical structure of an image.
  • The bitstream 2100 may include a sequence parameter set 2110, a picture parameter set 2120, a group header 2130, and a block parameter set 2140.
  • Each of the sequence parameter set 2110, the picture parameter set 2120, the group header 2130, and the block parameter set 2140 includes information used in each layer according to the layer structure of the image.
  • the sequence parameter set 2110 includes information used in an image sequence composed of one or more images.
  • the picture parameter set 2120 includes information used in one image, and may refer to the sequence parameter set 2110.
  • the group header 2130 contains information used in the block group determined in the image.
  • the group header 2130 may be a slice header.
  • the block parameter set 2140 includes information used in a block determined in the image, and may refer to a group header 2130, a picture parameter set 2120, and a sequence parameter set 2110.
  • The block parameter set 2140 may be divided into at least one of a parameter set of a largest coding unit (CTU), a parameter set of a coding unit (CU), a parameter set of a prediction unit (PU), and a parameter set of a transform unit (TU), according to the hierarchical structure of the block determined in the image.
  • The acquisition unit 2010 acquires information used for decoding the image from the bitstream 2100 according to the hierarchical structure of the image, and the block determination unit 2030, the prediction decoding unit 2050, and the restoration unit 2070, which are described later, may perform necessary operations by using the information acquired by the acquisition unit 2010.
  • The bitstream 2100 shown in FIG. 21 is only an example; some of the parameter sets shown in FIG. 21 may not be included in the bitstream 2100, or a parameter set not shown, for example a video parameter set, may be included in the bitstream 2100.
  • the block determiner 2030 divides the current image into blocks and sets block groups including at least one block in the current image.
  • According to an embodiment, a block may correspond to a tile, and a block group may correspond to a slice; a slice may also be referred to as a tile group.
  • the prediction decoder 2050 obtains prediction samples corresponding to the subblocks by inter prediction or intra prediction of subblocks of blocks divided from the current image.
  • The subblocks may be at least one of a largest coding unit, a coding unit, and a transform unit.
  • In the following description, a block is limited to a tile and a block group is limited to a slice, but this is only an example; when a B block is composed of a set of A blocks, the A block may correspond to a block and the B block may correspond to a block group. For example, when a set of CTUs corresponds to a tile, the CTU may be a block and the tile may be a block group.
  • CTUs may have the same-size square shape.
  • A tile includes one or more CTUs. A tile has a square or rectangular shape.
  • a slice contains one or more tiles.
  • a slice may have a rectangular shape or a non-rectangular shape.
  • The block determination unit 2030 may divide the current image 2200 into a plurality of CTUs according to information obtained from the bitstream, and may set, within the current image 2200, a tile including at least one CTU and a slice including at least one tile.
  • The block determination unit 2030 may divide the current image 2200 into a plurality of tiles according to information obtained from the bitstream, and may divide each tile into one or more CTUs.
  • The block determination unit 2030 may set a slice including at least one tile within the current image 2200.
  • Alternatively, the block determination unit 2030 may divide the current image 2200 into one or more slices and divide each slice into one or more tiles according to information obtained from the bitstream, and the block determination unit 2030 may then divide each tile into one or more CTUs.
  • the block determiner 2030 may use address information of the slices obtained from the bitstream to set the slices in the current image 2200.
  • The block determination unit 2030 may set slices including one or more tiles within the current image 2200 according to the address information of the slices obtained from the bitstream.
  • the address information of the slice can be obtained from the video parameter set, sequence parameter set, picture parameter set, or group header of the bitstream. .
  • Slices including at least one tile can be set in the current image 2200 according to the address information of the slice obtained from the bitstream.
  • The slices 2310, 2320, 2330, 2340, and 2350 may be determined along the raster scan direction 2300 in the current image 2200, and the slices 2310, 2320, 2330, 2340, and 2350 may be sequentially decoded according to the raster scan direction 2300.
  • the address information may include an identification value of the lower right tile located at the lower right of the tiles included in each of the slices 2310, 2320, 2330, 2340, and 2350.
  • For example, the address information of the slices 2310, 2320, 2330, 2340, and 2350 may include 9, which is the identification value of the lower-right tile of the first slice 2310, 7, which is the identification value of the lower-right tile of the second slice 2320, and so on, up to 15, which is the identification value of the lower-right tile of the fifth slice 2350.
  • the address information of the fifth slice 2350 may not be included in the bitstream.
  • the block determination unit 2030 may identify the upper left tile among tiles in the current image 2200, that is, a tile having an identification value of 0, to set the first slice 2310. Further, the block determination unit 2030 may determine a region including the tile 0 and the tile 9 identified from the address information as the first slice 2310.
  • The block determination unit 2030 may determine the tile having the smallest identification value among the tiles not included in the previous slice, that is, the first slice 2310, namely tile 2, as the upper-left tile of the second slice 2320, and the block determination unit 2030 may determine a region including tile 2 and tile 7 identified from the address information as the second slice 2320.
  • In addition, in order to set the third slice 2330, the block determination unit 2030 may determine the tile having the smallest identification value among the tiles not included in the previous slices, that is, the first slice 2310 and the second slice 2320, namely tile 10, as the upper-left tile of the third slice 2330.
  • The block determination unit 2030 may determine a region including tile 10 and tile 11 identified from the address information as the third slice 2330.
  • slices may be set within the current image 2200 only by identification information of the lower right tile included in the bitstream.
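  • The procedure described above can be sketched as follows (non-normative); a 4x4 tile grid is assumed to match the example, and tiles are numbered in raster order.

        # Sketch of the slice-setting procedure described above. Only the
        # lower-right tile id of each slice is signalled; the upper-left tile of
        # each slice is the smallest tile id not yet assigned to a previous slice.

        def set_slices(lower_right_ids, tiles_per_row, num_tiles):
            assigned = set()
            slices = []
            for br in lower_right_ids:
                tl = min(t for t in range(num_tiles) if t not in assigned)
                tl_row, tl_col = divmod(tl, tiles_per_row)
                br_row, br_col = divmod(br, tiles_per_row)
                tiles = [r * tiles_per_row + c
                         for r in range(tl_row, br_row + 1)
                         for c in range(tl_col, br_col + 1)]
                assigned.update(tiles)
                slices.append(tiles)
            return slices

        # 4x4 tile grid (tile ids 0..15): lower-right ids 9 and 7 reproduce the
        # first two slices of the example (tiles 0,1,4,5,8,9 and tiles 2,3,6,7).
        print(set_slices([9, 7], tiles_per_row=4, num_tiles=16))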
  • According to an embodiment, the acquisition unit 2010 may acquire, as the address information for determining the slices, the identification values of the upper-left tile and the lower-right tile included in each of the slices, and the block determination unit 2030 may set the slices within the current image 2200 according to the information acquired by the acquisition unit 2010. Since the upper-left tile and the lower-right tile included in each slice can be identified from the address information, the block determination unit 2030 may set the region including the upper-left tile and the lower-right tile identified from the address information as a slice.
  • According to an embodiment, the acquisition unit 2010 may acquire, as the address information for setting the slices, the identification value of the upper-left tile included in each of the slices and the sizes of the width and height of each slice, and the block determination unit 2030 may set the slices within the current image 2200 according to the information acquired by the acquisition unit 2010.
  • the address information of the second slice 2320 in FIG. 23 may include 2 which is the identification value of the upper left tile, 2 which is the size of the width of the slice, and 2 which is the size of the height of the slice.
  • The width size of 2 and the height size of 2 mean that there are two tile columns along the width direction and two tile rows along the height direction of the second slice 2320.
  • the upper left tile of the first slice 2310 is fixed to tile 0, so the identification value of the upper left tile of the first slice 2310 may not be included in the bitstream.
  • According to an embodiment, the sizes of the width and height of a slice obtained from the bitstream may be values obtained by dividing the number of tile columns and the number of tile rows arranged along the width and height directions of the slice by a predetermined scaling factor.
  • For example, when the predetermined scaling factor is 2, and the width size of the slice is 1 and the height size of the slice is 1, it can be confirmed that there are two tile columns and two tile rows, respectively, along the width direction and the height direction of the slice.
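  • A non-normative sketch of this address form is shown below; the scaling factor of 2 and the 4x4 tile grid are taken from the example above, and the helper name is illustrative.

        # Sketch of the alternative address form described above: each slice is
        # given the id of its upper-left tile plus scaled width/height values.

        SCALE = 2  # predetermined scaling factor (assumed, per the example above)

        def slice_tiles(top_left_id, scaled_w, scaled_h, tiles_per_row):
            cols = scaled_w * SCALE     # number of tile columns in the slice
            rows = scaled_h * SCALE     # number of tile rows in the slice
            tl_row, tl_col = divmod(top_left_id, tiles_per_row)
            return [(tl_row + r) * tiles_per_row + (tl_col + c)
                    for r in range(rows) for c in range(cols)]

        # Second slice 2320 of the example: upper-left tile 2, scaled width 1,
        # scaled height 1 (i.e. 2x2 tiles) -> tiles 2, 3, 6, 7.
        print(slice_tiles(2, 1, 1, tiles_per_row=4))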
  • The block determination unit 2030 may determine the first slice 2310 to the fifth slice 2350 within the current image 2200 according to the address information of the first slice 2310 to the fifth slice 2350. According to an embodiment, the address information may be signalled only for the first slice 2310 to the fourth slice 2340; that is,
  • the address information of the last slice may not be included in the bitstream.
  • According to an embodiment, among the slices to be determined in the current image 2200, the address information of a slice containing a tile located in the first row or a tile located in the first column of the current image 2200 may include not only the identification value of the upper-left tile of the corresponding slice, the size of the width of the slice, and the size of the height of the slice, but also a value indicating how many subsequent slices exist along the right or lower direction of the slice. The value indicating how many subsequent slices exist along the right or lower direction of the slice may also be replaced with a value indicating how many slices are arranged along the width or height direction of the slice.
  • Since the first slice 2310 includes both a tile located in the first row and a tile located in the first column of the current image 2200, the address information of the first slice 2310 may include a value indicating how many slices follow along the right direction of the slice and a value indicating how many slices follow along the lower direction of the slice.
  • the address information of the second slice 2320 may include a value indicating how many slices follow along the lower direction of the slice.
  • When the address information includes a value indicating how many slices follow along the right direction and/or the lower direction, the size of the width of the slice may be omitted from the address information of the last slice placed along the width direction of the current image 2200 (the second slice 2320 and/or the fifth slice 2350 in FIG. 23), and the size of the height of the slice may be omitted from the address information of the last slice placed along the height direction of the current image 2200 (the fourth slice 2340 and/or the fifth slice 2350 in FIG. 23).
  • Since the block determination unit 2030 already knows that one slice follows the first slice 2310 along the width direction of the current image 2200, the width of the slice following the first slice 2310 can be derived in consideration of the width size of the current image 2200, even if a value indicating the width of the subsequent slice is not included in the bitstream.
  • Referring to FIG. 23, there are 4 tiles along the width direction of the current image 2200 and 2 tiles along the width direction of the first slice 2310, so it can be seen that two tiles exist along the width direction of the second slice 2320 that follows the first slice 2310.
  • Likewise, since the block determination unit 2030 already knows that one slice follows the first slice 2310 along the height direction of the current image 2200, it is possible to derive the height of the slice following the first slice 2310 even if a value indicating the size of the height of the subsequent slice is not included in the bitstream.
  • According to an embodiment, the acquisition unit 2010 obtains split information for dividing the current image 2200 into slices from the bitstream, and the block determination unit 2030 may divide the current image 2200 into slices according to the split information.
  • The split information may indicate, for example, splitting into 4, splitting the height into 2, or splitting the width into 2.
  • The block determination unit 2030 may split each of the slices acquired as the current image 2200 is split for the first time, according to split information, into smaller slices.
  • For example, the block determination unit 2030 determines two regions 2410 and 2420 by splitting the width of the current image 2200 into two, and splits the height of the left region 2410 into two according to the split information of the left region 2410 to determine two regions 2412 and 2414.
  • When the split information of the right region 2420 indicates non-splitting, and the regions 2412 and 2414 split from the left region 2410 are not further split, the block determination unit 2030 may set the upper-left region 2412 as the first slice, the right region 2420 as the second slice, and the lower-left region 2414 as the third slice.
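  • A non-normative sketch of such hierarchical splitting is shown below; the representation of the regions and of the split information is an assumption made for illustration, and the order of the resulting regions is not meant to reproduce the slice numbering of the figure.

        # Sketch of the hierarchical slice splitting described above. A region
        # is (x, y, w, h) in tile units; split information for each region says
        # whether to split its width in two, its height in two, or not at all.

        def split_region(region, split_info, path=()):
            """Recursively split a region; regions that are not split further
            become slices."""
            x, y, w, h = region
            mode = split_info.get(path, "NONE")
            if mode == "SPLIT_W":
                halves = [(x, y, w // 2, h), (x + w // 2, y, w - w // 2, h)]
            elif mode == "SPLIT_H":
                halves = [(x, y, w, h // 2), (x, y + h // 2, w, h - h // 2)]
            else:
                return [region]              # no further split: this is a slice
            slices = []
            for i, sub in enumerate(halves):
                slices.extend(split_region(sub, split_info, path + (i,)))
            return slices

        # Example matching the description above: split the picture width in
        # two, then split only the left half's height in two -> three slices
        # (upper-left, lower-left and right regions).
        info = {(): "SPLIT_W", (0,): "SPLIT_H", (1,): "NONE"}
        print(split_region((0, 0, 4, 4), info))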
  • According to an embodiment, the block determination unit 2030 sets slices in the current image 2200 according to preset map information, and may add at least one slice in the current image 2200 according to correction information obtained from the bitstream.
  • the map information may include address information of slices located in the image.
  • For example, the block determination unit 2030 may initially set slices in the current image 2200 according to the map information acquired from the video parameter set or the sequence parameter set of the bitstream, and may set the final slices in the current image 2200 according to the correction information acquired from the picture parameter set.
  • At least one of the coding units included in the tiles determined by the block determination unit 2030 may be inter-predicted.
  • Hereinafter, a method of constructing the reference image list used for inter prediction is described.
  • the prediction decoding unit 2050 predictively decodes the coding units included in tiles determined in the current image.
  • The prediction decoding unit 2050 predictively decodes the coding units through inter prediction or intra prediction.
  • In inter prediction, a prediction sample of a coding unit is obtained based on a reference block in a reference image indicated by a motion vector, and a restoration sample of the coding unit is obtained based on the prediction sample and residual data obtained from the bitstream.
  • residual data may not be included in the bitstream, and in this case, the prediction sample may be determined as a restoration sample.
  • For inter prediction, a reference image list including reference images must be constructed.
  • The acquisition unit 2010 may acquire information representing a plurality of first reference image lists from the sequence parameter set of the bitstream.
  • the information indicating the plurality of first reference picture lists may include a POC (picture order count) related value of the reference picture.
  • the plurality of first reference picture lists are used in a picture sequence including the current picture.
  • the information indicating the plurality of first reference picture lists may include the number of first reference picture lists.
  • The prediction decoding unit 2050 may construct the first reference image lists corresponding to the number identified from the bitstream.
  • the predictive decoding unit 2050 can construct the first reference picture lists according to the same method as the picture encoding apparatus 3300.
  • the acquisition unit 2010 acquires an indicator indicating at least one of the plurality of first reference image lists used in the image sequence from the group header of the bitstream.
  • The prediction decoding unit 2050 acquires a second reference image list updated from the first reference image list pointed to by the indicator.
  • The second reference image list may be acquired by replacing at least some of the reference images included in the first reference image list pointed to by the indicator with other reference images, by changing the order of at least some of the reference images, or by adding a new reference image to the first reference image list.
  • For this update, the acquisition unit 2010 may obtain update information from the group header of the bitstream.
  • The update information may include a POC-related value of the reference image to be removed from the first reference image list pointed to by the indicator, a POC-related value of the reference image to be added to the second reference image list, the difference between the POC-related value of the reference image to be removed from the first reference image list and the POC-related value of the reference image to be added to the second reference image list, information for changing the order of the reference images, and the like.
  • the update information may be obtained from a parameter set other than the group header of the bitstream, for example, a picture parameter set.
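  • The following non-normative sketch illustrates how such update information could be applied to a first reference image list to obtain a second reference image list; the representation of the list entries and of the update information is an assumption made for illustration.

        # Sketch of deriving a second reference image list from the first list
        # pointed to by the indicator. Entries are POC-related values (a delta
        # for short-term references or an LSB for long-term references).

        def update_reference_list(first_list, replacements=None,
                                  additions=None, removals=None):
            second = list(first_list)
            for idx, new_value in (replacements or {}).items():
                second[idx] = new_value          # replace the entry at index idx
            for value in (removals or []):
                second.remove(value)             # drop a reference by its value
            second.extend(additions or [])       # append new references
            return second

        # Example: three entries replaced in place.
        first = [-1, 10, -3]                     # delta, LSB, delta (illustrative)
        second = update_reference_list(first, replacements={0: -2, 1: 8, 2: -5})
        print(second)                            # [-2, 8, -5]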
  • the coding units included in the slice may be predictively decoded to obtain prediction samples of the coding units.
  • For the next slice, the prediction decoding unit 2050 may use a first reference image list other than the first reference image list pointed to by the indicator among the plurality of first reference image lists used in the image sequence, or may use the second reference image list.
  • That is, the second reference image list obtained for the current slice may also be used in the next slice.
  • For the next slice, an indicator indicating the reference image list used in the next slice is newly acquired, and the coding units included in the next slice may be predictively decoded according to the reference image list pointed to by that indicator or a reference image list updated therefrom.
  • a reference picture list suitable for predictive decoding of the coding units of the slices can be constructed only by updating the existing reference picture list.
  • FIG. 25 illustrates a plurality of first reference image lists acquired through a sequence parameter set.
  • FIG. 25 shows three first reference image lists 2510, 2520, and 2530, but this is only an example, and the number of first reference image lists acquired through the sequence parameter set may be variously changed.
  • the first reference image lists 2510, 2520, and 2530 may include short-term type or long-term type reference images.
  • The short-term type reference images indicate images designated as the short-term type among the restored images stored in the DPB, and the long-term type reference images indicate images designated as the long-term type among the restored images stored in the DPB.
  • Reference images included in the first reference image lists (2510, 2520, 2530) may be specified as POC-related values.
  • The short-term type reference image may be specified by the difference between the POC of the current image and the POC of the short-term reference image, that is, a delta value, and the long-term type reference image may be specified by the LSB (least significant bit) of the POC of the long-term reference image.
  • Depending on an embodiment, the long-term reference image may also be specified by the MSB (most significant bit) of the POC of the long-term reference image.
  • According to an embodiment, the first reference image lists 2510, 2520, and 2530 may include only short-term type reference images or only long-term type reference images. That is, all of the reference images shown in FIG. 25 may be short-term type reference images or long-term type reference images. In addition, depending on an embodiment, some of the first reference image lists 2510, 2520, and 2530 may include only short-term type reference images, and others may include only long-term type reference images.
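  • The following non-normative sketch illustrates how the signalled POC-related values identify the reference images; the LSB wrap-around value and the DPB representation are assumptions made for illustration.

        # Sketch of resolving the POC-related values described above into
        # reference pictures. max_poc_lsb (the LSB wrap-around) is assumed.

        def short_term_poc(current_poc: int, delta: int) -> int:
            # A short-term entry signals the POC difference to the current picture.
            return current_poc + delta

        def find_long_term(dpb_pocs, poc_lsb: int, max_poc_lsb: int = 16):
            # A long-term entry signals only the LSBs of the reference POC; the
            # matching picture is looked up among the restored pictures in the DPB.
            for poc in dpb_pocs:
                if poc % max_poc_lsb == poc_lsb:
                    return poc
            return None

        print(short_term_poc(current_poc=20, delta=-1))   # 19
        print(find_long_term([3, 10, 19], poc_lsb=10))    # 10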
  • FIG. 26 is a diagram for explaining a method of acquiring a second reference image list.
  • The prediction decoding unit 2050 may acquire a second reference image list 2600 by changing at least some of the reference images included in the first reference image list 2510 pointed to by the indicator to other reference images.
  • Referring to FIG. 26, it can be seen that the short-term reference image with a delta value of -1, the long-term reference image with an LSB of 10, and the short-term reference image with a delta value of -3 in the first reference image list 2510 have been replaced, in the second reference image list 2600, with a short-term reference image with a delta value of -2, a long-term reference image with an LSB of 8, and a short-term reference image with a delta value of -5, respectively.
  • Although FIG. 26 shows all of the reference images in the first reference image list 2510 being replaced with other reference images, this is only an example, and only a part of the reference images in the first reference image list 2510 may be replaced with other reference images.
  • According to an embodiment, the prediction decoding unit 2050 may replace only a specific type of reference image among the reference images included in the first reference image list 2510, for example, only the long-term type reference image, with another long-term reference image. That is, among the reference images included in the first reference image list 2510, the short-term reference images may be maintained in the second reference image list 2600, and only the long-term reference image may be replaced with another long-term reference image according to information acquired from the bitstream.
  • Conversely, the long-term reference image among the reference images included in the first reference image list 2510 may be maintained as it is in the second reference image list 2600, and only the short-term type reference images in the first reference image list 2510 may be replaced with other short-term reference images.
  • The acquisition unit 2010 may acquire, from the group header of the bitstream, a POC-related value of the reference image to be used, and the reference image indicated by the POC-related value acquired by the acquisition unit 2010 may be included in the second reference image list 2600.
  • According to an embodiment, the acquisition unit 2010 may further obtain, from the bitstream, the index of the reference image to be removed from the first reference image list 2510.
  • Depending on an embodiment, the index of the reference image to be removed from the first reference image list 2510 may not be included in the bitstream.
  • That is, the bitstream may not include the index of the reference image to be removed, and the prediction decoding unit 2050 may remove a predetermined reference image from among the reference images included in the first reference image list 2510 and include the reference image indicated by the POC-related value obtained from the bitstream in the second reference image list 2600.
  • According to an embodiment, the information indicating the new reference image may be the difference between the POC-related value of the new reference image and the POC-related value of the reference image to be removed from the first reference image list.
  • For example, the long-term reference image with an LSB of 10 included in the first reference image list 2510 has been replaced, in the second reference image list 2600, with a long-term reference image with an LSB of 8, so the information indicating the new reference image may include 2 (10-8).
  • The prediction decoding unit 2050 may derive the POC-related value of the reference image to be newly included in the second reference image list 2600, based on the difference of the POC-related values and the POC-related value of the reference image to be removed from the first reference image list 2510.
  • According to an embodiment, the new reference image may be added to the second reference image list 2600 according to the order of the reference image to be removed from the first reference image list 2510 pointed to by the indicator.
  • For example, if the long-term reference image to which index 1 is assigned is removed from the first reference image list 2510, index 1 may also be assigned to the new reference image.
  • FIG. 27 is a diagram for explaining another method of acquiring a second reference image list.
  • The prediction decoding unit 2050 may also acquire a second reference image list 2700 by excluding a specific type of reference image from among the reference images in the first reference image list 2510 indicated by the indicator among the plurality of first reference image lists for the image sequence. Referring to FIG. 27, it can be seen that the long-term type reference image among the reference images in the first reference image list 2510 indicated by the indicator is not included in the second reference image list 2700.
  • the prediction decoding unit 2050 may acquire a second reference image list 2700 excluding a short-term type reference image among the reference images in the first reference image list 2510. .
  • FIG. 28 is a diagram for explaining another method of acquiring a second reference image list.
  • The prediction decoding unit 2050 may acquire the second reference image list 2800 by changing the order of the reference images in the first reference image list 2510 pointed to by the indicator according to the update information obtained from the group header of the bitstream. At this time, according to the update information, the order of all the reference images in the first reference image list 2510 may be changed, or the order of only some of the reference images in the first reference image list 2510 may be changed.
  • the update information obtained from the group header of the bitstream may include indexes of reference images in the first reference image list 2510 arranged in the order to be changed.
  • In FIG. 28, the reference image of index 0, the reference image of index 1, and the reference image of index 2 in the first reference image list 2510 correspond, respectively, to the reference image of index 1, the reference image of index 2, and the reference image of index 0 in the second reference image list 2800. In this case,
  • the group header of the bitstream may include (2, 0, 1) as update information.
  • The prediction decoding unit 2050 may configure the second reference image list 2800 by assigning index 0 to the reference image to which index 2 is assigned in the first reference image list 2510, index 1 to the reference image to which index 0 is assigned, and index 2 to the reference image to which index 1 is assigned.
  • the update information obtained from the group header of the bitstream may include the index of the reference image that needs to be changed in order among the reference images in the first reference image list 2510.
  • the group header of the bitstream may include (1, 2) as update information.
  • The prediction decoding unit 2050 may configure the second reference image list 2800 by assigning index 2 to the reference image to which index 1 is assigned in the first reference image list 2510, and assigning index 1 to the reference image to which index 2 is assigned.
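  • Both reordering variants described above are sketched below (non-normative); the tuple encodings follow the (2, 0, 1) and (1, 2) examples given in the text.

        # Sketch of the two reordering variants described above. "order" lists
        # the old indexes of the first list in the order they should appear in
        # the second list; "swap" exchanges the positions of the listed indexes.

        def reorder_full(first_list, order):
            # e.g. order == (2, 0, 1): old index 2 gets new index 0, and so on.
            return [first_list[old] for old in order]

        def reorder_partial(first_list, swap):
            # e.g. swap == (1, 2): exchange the references at indexes 1 and 2.
            second = list(first_list)
            i, j = swap
            second[i], second[j] = second[j], second[i]
            return second

        refs = ["ref0", "ref1", "ref2"]
        print(reorder_full(refs, (2, 0, 1)))     # ['ref2', 'ref0', 'ref1']
        print(reorder_partial(refs, (1, 2)))     # ['ref0', 'ref2', 'ref1']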
  • FIG. 29 is a diagram for explaining another method of acquiring a second reference image list.
  • The number of first reference image lists indicated by the indicator among the plurality of first reference image lists used in the image sequence may be plural. That is, as shown in FIG. 29, the indicator may point to a first reference image list 2910 including only short-term reference images and a first reference image list 2920 including only long-term reference images.
  • The prediction decoding unit 2050 may acquire a second reference image list 2930 including the short-term reference images and the long-term reference images included in the first reference image lists 2910 and 2920 pointed to by the indicator. At this time, in the second reference image list 2930, indexes larger than the indexes allocated to the short-term reference images may be allocated to the long-term reference images. Conversely, in the second reference image list 2930, indexes larger than the indexes allocated to the long-term reference images may be allocated to the short-term reference images.
  • According to an embodiment, the acquisition unit 2010 may acquire order information of the short-term reference images and the long-term reference images from the bitstream, and the prediction decoding unit 2050 may allocate indexes to the short-term reference images and the long-term reference images included in the second reference image list 2930 according to the acquired order information.
  • According to an embodiment, the first reference image list 2910 and the first reference image list 2920 may each include at least one reference image irrespective of its type.
  • When a short-term reference image exists in the first reference image list 2910 indicated by the indicator and a long-term reference image exists in the first reference image list 2920, the prediction decoding unit 2050 may acquire a second reference image list 2930 including the short-term reference image included in the first reference image list 2910 and the long-term reference image included in the first reference image list 2920.
  • Conversely, when a long-term reference image exists in the first reference image list 2910 indicated by the indicator and a short-term reference image exists in the first reference image list 2920, the prediction decoding unit 2050 may acquire a second reference image list 2930 including the long-term reference image included in the first reference image list 2910 and the short-term reference image included in the first reference image list 2920.
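  • A non-normative sketch of merging the two first reference image lists pointed to by the indicator is shown below; the flag choosing which type receives the larger indexes is an assumption made for illustration.

        # Sketch of merging two first reference image lists into one second
        # reference image list, as described above. Indexes 0..N-1 are assigned
        # in list order; the flag chooses whether the long-term references
        # receive the larger indexes or the smaller ones.

        def merge_reference_lists(short_term, long_term, long_term_last=True):
            return short_term + long_term if long_term_last else long_term + short_term

        second = merge_reference_lists(short_term=[-1, -2], long_term=[10, 8])
        print(list(enumerate(second)))   # [(0, -1), (1, -2), (2, 10), (3, 8)]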
  • FIG. 30 is a diagram for explaining another method of acquiring a second reference image list.
  • the first reference image list 3010 indicated by the indicator may include only a short-term reference image. According to an embodiment, the first reference image list 3010 indicated by the indicator may include only a long-term reference image.
  • When the first reference image list 3010 includes only short-term reference images, the acquisition unit 2010 may acquire, from the bitstream, the POC-related value of the long-term reference image to be included in the second reference image list 3030, and a second reference image list 3030 including the long-term reference image indicated by the POC-related value and the short-term reference images included in the first reference image list 3010 may be configured. That is, the first reference image list 3010 including only the short-term reference images is signalled through the sequence parameter set, and the POC-related value of the long-term reference image is signalled in the group header.
  • In this way, since the reference image list does not need to be transmitted for each block group, overhead is reduced and the compression rate is improved. For example, when the prediction structure is repeated in units of a GOP (group of pictures), the same reference image list would otherwise be repeatedly transmitted for each GOP. As such, the more frequently used reference image lists are transmitted as a sequence parameter set, the greater the bit rate reduction effect.
  • The short-term reference image is correlated with the repeating pattern of the prediction structure, as in the above example, whereas the long-term reference image is correlated with the relationship between the current image and the corresponding long-term reference image.
  • Accordingly, when the prediction structure is repeated in units of a GOP but the content of the image is completely changed, such as in a scene change, and the long-term reference image is no longer valid, it is possible to avoid sending the entire reference list in the group header by obtaining the reference list for the short-term reference images from the sequence parameter set and separately signalling the long-term reference image in the group header.
  • Conversely, when only long-term reference images are included in the first reference image list, the acquisition unit 2010 may acquire, from the bitstream, the POC-related value of the short-term reference image to be included in the second reference image list, and a second reference image list including the short-term reference image indicated by the POC-related value and the long-term reference images included in the first reference image list may be obtained.
  • To the reference image newly included in the second reference image list 3030, an index larger or smaller than the indexes allocated to the reference images included in the first reference image list 3010 may be allocated.
  • The prediction decoding unit 2050 may inter-predict coding units based on the reference images included in the second reference image list. As a result of the inter prediction, prediction samples corresponding to the coding units may be obtained.
  • the restoration unit 2070 uses the predicted samples to obtain restoration samples of the coding units.
  • the restoration unit 2070 may acquire restoration samples of coding units by adding the residual data obtained from the bitstream to the predicted sample.
  • According to an embodiment, the restoration unit 2070 may perform luma mapping on the prediction samples of the coding units before obtaining the restoration samples.
  • Luma mapping processing means changing the luma values of the prediction samples according to parameters obtained from the bitstream.
  • the acquisition unit 2010 may acquire parameters for luma mapping processing from at least one post-processing parameter set of the bitstream.
  • Each of the at least one post-processing parameter set may contain parameters used for luma mapping or for adaptive loop filtering, which is described later.
  • The parameters used for luma mapping may include, for example, a range of luma values to be changed and a delta value to be applied to the luma values of the prediction samples.
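  • A non-normative sketch of such luma mapping is shown below; the parameter names and the clipping to 8-bit sample values are assumptions made for illustration.

        # Sketch of the luma mapping described above: prediction samples whose
        # luma value falls within the signalled range are shifted by the
        # signalled delta value.

        def luma_map(pred_samples, lo, hi, delta, max_val=255):
            mapped = []
            for y in pred_samples:
                if lo <= y <= hi:                        # sample falls in the range
                    y = min(max(y + delta, 0), max_val)  # apply delta and clip
                mapped.append(y)
            return mapped

        print(luma_map([16, 100, 200], lo=64, hi=180, delta=12))   # [16, 112, 200]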
  • FIG. 31 shows a plurality of post-processing parameter sets used for luma mapping or adaptive loop filtering.
  • The bitstream 3100 may include the above-described sequence parameter set 3110, picture parameter set 3120, group header 3130, and block parameter set 3140, as well as a plurality of post-processing parameter sets 3150a, 3150b, and 3150c.
  • Unlike the sequence parameter set 3110, the picture parameter set 3120, the group header 3130, and the block parameter set 3140, the post-processing parameter sets 3150a, 3150b, and 3150c may be included in the bitstream regardless of the hierarchical structure of the image.
  • Each of the post-processing parameter sets 3150a, 3150b, and 3150c may be assigned an identifier for distinguishing them.
  • Some of the post-processing parameter sets 3150a, 3150b, and 3150c contain parameters used for luma mapping, while others contain parameters used for adaptive loop filtering.
  • The acquisition unit 2010 may obtain, from the picture parameter set 3120, the group header 3130, or the block parameter set 3140, an identifier indicating which of the post-processing parameter sets 3150a, 3150b, and 3150c is used for luma mapping.
  • the restoration unit 2070 can change the luma value of the predicted samples by using parameters obtained from the post-processing parameter set pointed to by the identifier.
  • When the acquisition unit 2010 obtains the identifier from the picture parameter set 3120, the post-processing parameter set indicated by the identifier is used for the prediction samples derived in the current image; when the identifier is obtained from the group header 3130, the post-processing parameter set indicated by the identifier is used for the prediction samples derived in the current slice; and when the acquisition unit 2010 obtains the identifier from the block parameter set 3140, the post-processing parameter set pointed to by the identifier is used for the prediction samples derived in the current block.
  • The acquisition unit 2010 may obtain, from the bitstream, an identifier indicating any one of the plurality of post-processing parameter sets 3150a, 3150b, and 3150c together with modification information, where the modification information may include information for changing parameters included in the post-processing parameter set indicated by the identifier.
  • the modification information may include the value of the difference between the value of the parameter contained in the set of post-processing parameters pointed to by the identifier and the value of the parameter to be changed.
  • the restoration unit 2070 corrects the parameters of the post-processing parameter set indicated by the identifier according to the correction information, and may change the luma value of the predicted samples using the modified parameters.
  • The identifier obtained from the bitstream may indicate two or more of the plurality of post-processing parameter sets. In that case, the restoration unit 2070 may construct a new parameter set by partially combining the parameters contained in the post-processing parameter sets indicated by the identifier, and may luma-map the predicted samples by using the newly constructed parameter set.
  • the restoration unit 2070 acquires restoration samples corresponding to the current coding unit by using the prediction samples generated as a result of the prediction decoding or the prediction samples subjected to luma mapping. When the restoration samples are obtained, the restoration unit 2070 may apply adaptive loop filtering to the restoration samples.
  • Adaptive loop filtering refers to filtering of the restoration samples by using filter coefficients signaled through the bitstream.
  • Adaptive loop filtering can be performed separately for the luma and chroma values.
  • The filter coefficients may include filter coefficients for a one-dimensional filter.
  • The filter coefficients of each one-dimensional filter may be signaled through the bitstream as difference values between successive filter coefficients.
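  • The difference-based signaling of filter coefficients can be sketched as follows; it is assumed, for illustration only, that the first coefficient is sent directly and every later coefficient is sent as the difference from its predecessor.

```python
# Hypothetical sketch: rebuild the coefficients of a one-dimensional filter
# from a first coefficient plus the signaled differences between successive
# coefficients (assumed signaling layout, for illustration only).

def reconstruct_coeffs(first_coeff, diffs):
    coeffs = [first_coeff]
    for d in diffs:
        coeffs.append(coeffs[-1] + d)     # each coefficient = previous + difference
    return coeffs

if __name__ == "__main__":
    print(reconstruct_coeffs(5, [-2, 1, 3]))   # [5, 3, 4, 7]
```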
  • Some of the post-processing parameter sets include parameters used for luma mapping, and others include parameters used for adaptive loop filtering.
  • For example, post-processing parameter set A (3150a) and post-processing parameter set B (3150b) may contain parameters used for adaptive loop filtering, and post-processing parameter set C (3150c) may contain parameters used for luma mapping.
  • The acquisition unit 2010 may obtain, from the picture parameter set 3120, the group header 3130, or the block parameter set 3140, an identifier indicating which of the post-processing parameter sets is to be used for adaptive loop filtering.
  • the restoration unit 2070 may filter the restoration samples using parameters obtained from the post-processing parameter set pointed to by the identifier.
  • When the identifier is obtained from the picture parameter set, the post-processing parameter set indicated by the identifier is used for the restoration samples derived from the current image; when the identifier is obtained from the group header, the post-processing parameter set indicated by the identifier is used for the restoration samples derived from the current slice.
  • When the acquisition unit 2010 obtains the identifier from the block parameter set, the post-processing parameter set indicated by the identifier is used for the restoration samples derived from the current block.
  • the acquisition unit 2010 may acquire an identifier indicating any one of a plurality of post-processing parameter sets 3150a, 3150b, 3150c, and correction information from the bitstream.
  • The correction information may include information for changing the filter coefficients contained in the post-processing parameter set indicated by the identifier; for example, the correction information may contain the difference between the value of a filter coefficient contained in the post-processing parameter set indicated by the identifier and the value to which that filter coefficient is to be changed.
  • the restoration unit 2070 may modify the filter coefficients of the post-processing parameter set indicated by the identifier according to the correction information, and filter the restoration samples using the corrected filter coefficients.
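  • The correction step described above can be sketched as follows; the layout of the stored coefficient sets, the per-coefficient difference values, and the simple clamped one-dimensional convolution are assumptions made for illustration.

```python
# Hypothetical sketch: select a post-processing parameter set by identifier,
# apply the correction information as per-coefficient difference values, and
# filter the restored samples with the corrected one-dimensional filter.

def correct_and_filter(param_sets, identifier, corrections, samples):
    coeffs = [c + d for c, d in zip(param_sets[identifier], corrections)]
    half = len(coeffs) // 2
    out = []
    for i in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):            # simple 1-D convolution
            j = min(max(i + k - half, 0), len(samples) - 1)   # clamp at borders
            acc += c * samples[j]
        out.append(acc)
    return out

if __name__ == "__main__":
    sets = {0: [1, 2, 1], 1: [0, 4, 0]}
    print(correct_and_filter(sets, 0, [0, 1, 0], [10, 20, 30]))  # uses filter [1, 3, 1]
```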
  • The identifier obtained from the bitstream may indicate two or more of the plurality of post-processing parameter sets. In that case, the restoration unit 2070 may construct a new filter coefficient set by partially combining the filter coefficients included in the post-processing parameter sets indicated by the identifier, and may filter the restored samples with the newly constructed filter coefficient set.
  • The restoration unit 2070 may also filter the luma values of the restored samples by using filter coefficients included in one post-processing parameter set indicated by the identifier, and filter the chroma values of the restored samples by using filter coefficients included in another post-processing parameter set indicated by the identifier.
  • the acquisition unit 2010 may acquire an identifier indicating a set of post-processing parameters and filter coefficient information from the bitstream.
  • The restoration unit 2070 may also combine some of the filter coefficients included in the post-processing parameter set indicated by the identifier with the filter coefficients signaled through the bitstream, and filter the restored samples with the combined filter coefficient set.
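  • One possible reading of the combination described above is sketched below; the rule used (a signaled coefficient replaces the corresponding coefficient of the indicated parameter set) is an assumption, since the exact combination rule is not specified here.

```python
# Hypothetical sketch: combine filter coefficients from the parameter set
# indicated by the identifier with coefficients signaled in the bitstream.
# The rule used here (signaled value wins when present) is an assumption.

def combine_coeffs(set_coeffs, signaled):
    """signaled: dict mapping coefficient position -> signaled value."""
    return [signaled.get(i, c) for i, c in enumerate(set_coeffs)]

if __name__ == "__main__":
    print(combine_coeffs([1, 2, 1], {1: 6}))   # [1, 6, 1]
```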
  • the restoration unit 2070 may additionally perform deblocking filtering on the adaptive loop-filtered restoration sample.
  • The predictive decoding unit 2050 may decode the coding units included in the current slice according to inter prediction.
  • the boundary of the current slice may be regarded as a picture boundary.
  • For example, in the DMVR (Decoder-side Motion Vector Refinement) mode, in which the decoder directly derives a motion vector of a coding unit, that is, when the predictive decoding unit 2050 derives the motion vector of the current coding unit, the search range may be limited to the boundary of the area located at the same position as the current slice in the reference image.
  • Alternatively, the area of the reference image located at the same position as the current slice may be padded to obtain prediction samples.
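  • The boundary restriction described above for DMVR can be sketched as follows, modeling the area co-located with the current slice as a simple rectangle; this representation is an assumption for illustration.

```python
# Hypothetical sketch: limit the DMVR search range so that the searched block
# stays inside the reference-image area co-located with the current slice.
# The slice area is modeled as a rectangle (x0, y0, x1, y1) in samples.

def clamp_search_position(x, y, block_w, block_h, slice_area):
    x0, y0, x1, y1 = slice_area
    cx = min(max(x, x0), x1 - block_w)    # keep the whole block inside horizontally
    cy = min(max(y, y0), y1 - block_h)    # and vertically
    return cx, cy

if __name__ == "__main__":
    # A 16x16 block whose refined position would cross the slice boundary.
    print(clamp_search_position(250, -3, 16, 16, (0, 0, 256, 128)))  # (240, 0)
```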
  • the prediction decoding unit 2050 may regard the boundary of a slice as a boundary of a picture in a Bi-Optical Flow (BIO) processing mode, and predictively decode a current coding unit.
  • The BIO (Bi-Optical Flow) processing mode is a processing mode that refines the prediction samples of a bidirectionally predicted block on the basis of optical flow.
  • The acquisition unit 2010 may obtain syntax elements by entropy-decoding the binary values included in the bitstream according to CABAC (context-adaptive binary arithmetic coding).
  • In doing so, the acquisition unit 2010 may selectively apply WPP (wavefront parallel processing) depending on how many tiles are included in a slice: if only one tile is included in the slice, the acquisition unit 2010 can set the probability model for the CTUs included in the tile based on WPP, and if the slice contains multiple tiles, the WPP technology may not be applied to the CTUs included in the tiles.
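  • The tile-count condition described above reduces to a simple decision, sketched below; the helper name is illustrative and the probability-model handling itself is omitted.

```python
# Hypothetical sketch of the condition described above: CABAC probability
# models for the CTUs of a tile are initialized based on WPP only when the
# slice containing that tile consists of a single tile.

def use_wpp_for_tile(num_tiles_in_slice):
    """Return True when WPP-based probability-model setup applies."""
    return num_tiles_in_slice == 1

if __name__ == "__main__":
    for n in (1, 3):
        print(n, "tile(s) in slice ->", "WPP" if use_wpp_for_tile(n) else "no WPP")
```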
  • FIG. 32 is a view for explaining an image decoding method according to an embodiment.
  • In step S3210, the image decoding apparatus 2000 acquires, from the sequence parameter set of the bitstream, information indicating a plurality of first reference image lists for the image sequence including the current image.
  • Each of the plurality of first reference image lists may consist of at least one of a short-term reference image and a long-term reference image.
  • In step S3220, the image decoding apparatus 2000 sets blocks and a block group including at least one block in the current image.
  • the block may be a tile, and the block group may be a slice.
  • The image decoding apparatus 2000 divides the current image into a plurality of CTUs according to the information acquired from the bitstream, and a tile including at least one CTU and a slice including at least one tile can be set within the current image.
  • the image decoding apparatus 2000 may divide the current image into a plurality of tiles according to the information acquired from the bitstream, and divide each tile into one or more CTUs.
  • the block determination unit ( 2030) can set a slice containing at least one tile in the current video.
  • The image decoding apparatus 2000 may divide the current image into one or more slices according to information obtained from the bitstream, and divide each slice into one or more tiles. And the block determination unit 2030 may divide each tile into one or more CTUs.
  • the image decoding apparatus 2000 may set the slices in the current image according to the address information obtained from the bitstream.
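  • A minimal sketch of the CTU/tile/slice partitioning described above is given below, under the assumptions that tiles form a uniform grid of CTUs and that each slice is signaled by the index of its first tile and a tile count; these are illustrative assumptions, not the actual address signaling.

```python
# Hypothetical sketch of the CTU / tile / slice partitioning described above.
# Assumptions (not taken from the bitstream syntax itself): the picture is an
# integer number of CTUs wide and high, tiles form a uniform grid, and each
# slice is given by the index of its first tile plus the number of tiles.

def partition_picture(pic_w_ctus, pic_h_ctus, tile_cols, tile_rows, slice_specs):
    """Return {slice_index: {tile_index: [CTU addresses in raster order]}}."""
    tiles_w = pic_w_ctus // tile_cols           # CTUs per tile, horizontally
    tiles_h = pic_h_ctus // tile_rows           # CTUs per tile, vertically

    def ctus_of_tile(t):
        tx, ty = t % tile_cols, t // tile_cols  # tile position in the tile grid
        return [(ty * tiles_h + y) * pic_w_ctus + (tx * tiles_w + x)
                for y in range(tiles_h) for x in range(tiles_w)]

    slices = {}
    for s, (first_tile, num_tiles) in enumerate(slice_specs):
        slices[s] = {t: ctus_of_tile(t) for t in range(first_tile, first_tile + num_tiles)}
    return slices

if __name__ == "__main__":
    # 8x4 CTU picture, 4 tiles (2x2 grid), two slices of two tiles each.
    print(partition_picture(8, 4, 2, 2, [(0, 2), (2, 2)]))
```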
  • In step S3230, the image decoding apparatus 2000 obtains, from the group header of the bitstream, an indicator for the current block group including the current block in the current image, and obtains the second reference image list based on the first reference image list indicated by the indicator.
  • the image decoding apparatus 2000 may further acquire update information for obtaining the second reference image list together with the indicator from the bitstream.
  • The update information is used to update the first reference image list indicated by the indicator.
  • The update information may include at least one of: a POC-related value of a reference image to be removed from the first reference image list, a POC-related value of a reference image to be added to the second reference image list, a difference between the POC-related value of a reference image to be removed from the first reference image list and the POC-related value of a reference image to be added to the second reference image list, and information for changing the order of the reference images.
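  • A minimal sketch of applying such update information to the first reference image list indicated by the indicator is given below; the representation of the update information as lists of POC values to remove and add plus an optional reordering is an assumption for illustration, not the signaled syntax.

```python
# Hypothetical sketch: derive a second reference image list from the first
# reference image list indicated by the indicator, using update information.
# Reference images are represented by their POC values; the fields of
# `update` (remove / add / reorder) are illustrative assumptions.

def build_second_ref_list(first_list, update):
    """first_list: list of POC values; update: dict with optional keys
    'remove' (POCs to drop), 'add' (POCs to append), 'reorder' (index permutation)."""
    second = [poc for poc in first_list if poc not in set(update.get("remove", []))]
    second.extend(update.get("add", []))
    if "reorder" in update:                       # change the order of the images
        second = [second[i] for i in update["reorder"]]
    return second

if __name__ == "__main__":
    first = [16, 12, 8, 0]
    upd = {"remove": [12], "add": [20], "reorder": [3, 0, 1, 2]}
    print(build_second_ref_list(first, upd))      # [20, 16, 8, 0]
```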
  • In step S3240, the image decoding apparatus 2000 predictively decodes the sub-block of the current block based on the reference image included in the second reference image list.
  • The image decoding apparatus 2000 may specify a post-processing parameter set for luma mapping of the predicted samples according to an identifier indicating at least one of the plurality of post-processing parameter sets, and may change the luma values of the predicted samples with parameters included in the post-processing parameter set indicated by the identifier.
  • The image decoding apparatus 2000 may obtain restoration samples by using the prediction samples obtained as a result of the predictive decoding, and may adaptively loop-filter the restoration samples.
  • The image decoding apparatus 2000 may specify a post-processing parameter set for adaptive loop filtering according to an identifier indicating at least one of the plurality of post-processing parameter sets, and may filter the restored samples with parameters included in the post-processing parameter set indicated by the identifier.
  • FIG. 33 shows the configuration of an image encoding apparatus 3300 according to an embodiment.
  • The image encoding apparatus 3300 includes a block determination unit 3310, a predictive encoding unit 3330, a restoration unit 3350, and a generation unit 3370.
  • The generation unit 3370 shown in FIG. 33 corresponds to the bitstream generation unit shown in FIG. 2, and the block determination unit 3310, the predictive encoding unit 3330, and the restoration unit 3350 correspond to the encoding unit shown in FIG. 2.
  • the block determination unit 3310, the predictive encoding unit 3330, the restoration unit 3350, and the generation unit 3370 may be implemented with at least one processor.
  • The image encoding apparatus 3300 may include one or more data storage units (not shown) for storing input/output data of the block determination unit 3310, the predictive encoding unit 3330, the restoration unit 3350, and the generation unit 3370.
  • The image encoding apparatus 3300 may also include a memory control unit (not shown) that controls data input/output of the data storage unit (not shown).
  • the block determiner 3310 divides the current image into blocks, and sets block groups including at least one block in the current image.
  • The block may correspond to a tile, and the block group may correspond to a slice; a slice can also be referred to as a tile group.
  • The block determination unit 3310 may divide the current image into a plurality of CTUs, and set, in the current image, a tile including at least one CTU and a slice including at least one tile.
  • The block determination unit 3310 may divide the current image into a plurality of tiles and divide each tile into one or more CTUs. Further, the block determination unit 3310 may set, within the current image, a slice containing at least one tile.
  • The block determination unit 3310 may divide the current image into one or more slices, divide each slice into one or more tiles, and divide each tile into one or more CTUs.
  • the prediction encoder 3330 obtains prediction samples corresponding to the sub-blocks by inter-prediction or intra-prediction of sub-blocks of blocks divided from the current image.
  • The sub-block may be at least one of a maximum coding unit, a coding unit, and a transform unit split from a coding unit.
  • the prediction encoding unit 3330 can predict and encode the coding units through inter prediction or intra prediction.
  • a prediction sample of the current coding unit is obtained based on a reference block in a reference image indicated by a motion vector.
  • the residual data corresponding to the difference between the predicted sample and the current coding unit may be transmitted to the video decoding apparatus 2000 through the bitstream. Depending on the prediction mode, residual data may not be included in the bitstream.
  • the prediction encoding unit 3330 may construct a plurality of first reference image lists for an image sequence including a current image.
  • The predictive encoding unit 3330 selects at least one of the plurality of first reference image lists configured for use in the image sequence.
  • the predictive encoding unit 3330 may select a first reference image list used in the current slice from among the plurality of first reference image lists.
  • The predictive encoding unit 3330 acquires a second reference image list updated from the selected first reference image list.
  • The second reference image list may be obtained by changing at least some of the reference images included in the first reference image list, and at least one of the reference images included in the second reference image list can be used to encode the coding units included in the slice according to inter prediction.
  • The predictive encoding unit 3330 may predictively encode the coding units included in the next slice by using a first reference image list other than the first reference image list selected for the current slice among the plurality of first reference image lists used in the image sequence, or by using the second reference image list. That is, the second reference image list obtained for the current slice may also be used in the next slice.
  • the prediction encoding unit 3330 may obtain a second reference image list by changing at least some of the reference images included in the first reference image list to another reference image.
  • The predictive encoding unit 3330 may replace only a specific type of reference image among the reference images included in the first reference image list, for example, only long-term type reference images, with other reference images. That is, among the reference images included in the first reference image list, the short-term reference images remain intact in the second reference image list, and only a long-term reference image may be replaced with another long-term reference image.
  • Alternatively, regardless of the types of reference images included in the first reference image list, the reference images included in the first reference image list may be replaced with other reference images.
  • A new reference image may be added to the second reference picture list at the position of the reference image removed from the first reference picture list; that is, if the long-term reference picture to which index 1 is assigned is removed from the first reference picture list, index 1 can also be assigned to the new reference picture.
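  • The index-preserving replacement described above can be sketched as follows, assuming each list entry is a (type, POC) pair and that the mapping from removed to replacement long-term pictures is given; both assumptions are for illustration only.

```python
# Hypothetical sketch: build a second reference image list from the first by
# replacing only long-term entries, keeping each replacement at the index of
# the entry it replaces. Entries are (type, POC) tuples; 'LT' = long-term,
# 'ST' = short-term. The replacement mapping is an illustrative assumption.

def replace_long_term(first_list, replacements):
    """replacements: dict mapping old long-term POC -> new long-term POC."""
    second = []
    for kind, poc in first_list:
        if kind == "LT" and poc in replacements:
            second.append(("LT", replacements[poc]))   # same index, new picture
        else:
            second.append((kind, poc))                 # short-term entries untouched
    return second

if __name__ == "__main__":
    first = [("ST", 16), ("LT", 0), ("ST", 12)]
    print(replace_long_term(first, {0: 4}))   # [('ST', 16), ('LT', 4), ('ST', 12)]
```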
  • The predictive encoding unit 3330 may also obtain a second reference image list by excluding reference images of a specific type from among the reference images in the first reference image list selected for the current slice among the plurality of first reference image lists for the image sequence.
  • The predictive encoding unit 3330 may also obtain a second reference image list by changing the order of at least some of the reference images in the first reference image list selected for the current slice among the plurality of first reference image lists for the image sequence.
  • The predictive encoding unit 3330 may also obtain the second reference image list from a first reference image list including only short-term reference images and a first reference image list including only long-term reference images. For example, the predictive encoding unit 3330 may include, in the second reference picture list, the short-term reference pictures included in one first reference picture list and the long-term reference pictures included in the other first reference picture list.
  • When the first reference image list includes only short-term reference images, the predictive encoding unit 3330 may obtain a second reference image list including the short-term reference images included in the first reference image list and a new long-term reference image. Conversely, when the first reference image list includes only long-term reference images, the predictive encoding unit 3330 may obtain a second reference image list including the long-term reference images included in the first reference image list and a new short-term reference image.
  • The predictive encoding unit 3330 may inter-predict the coding units based on the reference image included in the second reference image list. As a result of the inter prediction, prediction samples corresponding to the coding units may be obtained.
  • the restoration unit 3350 uses the predicted samples to obtain restoration samples of the coding units.
  • The restored image containing the restoration samples may be stored for use as a reference image for a subsequent image.
  • the restoration unit 3350 may perform luma-mapping processing of predicted samples of the coding units before acquiring the restoration samples.
  • The restoration unit 3350 may obtain parameters for luma mapping processing from a plurality of post-processing parameter sets.
  • Each of the plurality of post-processing parameter sets may include parameters used for luma mapping or adaptive loop filtering described below.
  • Some of the post-processing parameter sets include parameters used for luma mapping, and some others contain parameters used for adaptive loop filtering; for example, at least one parameter set contains parameters used for luma mapping, and another post-processing parameter set contains parameters used for adaptive loop filtering.
  • The restoration unit 3350 may generate a plurality of post-processing parameter sets consisting of parameters used for luma mapping or parameters used for adaptive loop filtering. As described above, the plurality of post-processing parameter sets generated in this way may be included in the bitstream.
  • the restoration unit 3350 may acquire parameters from a post-processing parameter set selected from a plurality of post-processing parameter sets, and change the luma values of the predicted samples with the acquired parameters.
  • the restoration unit 3350 may modify parameters of a post-processing parameter set selected from among a plurality of post-processing parameter sets, and change the luma values of the predicted samples with the modified parameters.
  • The restoration unit 3350 may partially combine the parameters included in two or more post-processing parameter sets among the plurality of post-processing parameter sets to construct a new parameter set, and change the luma values of the predicted samples with the parameters of the newly constructed parameter set.
  • the restoration unit 3350 acquires restoration samples corresponding to the current coding unit by using the prediction samples generated as a result of the prediction decoding or the prediction samples subjected to luma mapping. When the reconstructed samples are obtained, the restoration unit 3350 may apply adaptive loop filtering to the restored samples.
  • Some of the post-processing parameter sets include parameters used for luma mapping, and others include parameters used for adaptive loop filtering.
  • the restoration unit 3350 may filter the restoration samples using parameters obtained from at least one of the plurality of post-processing parameter sets.
  • The restoration unit 3350 may filter the restored samples by using the parameters of one post-processing parameter set selected from among the plurality of post-processing parameter sets.
  • Alternatively, the restoration unit 3350 may partially combine the parameters included in two or more of the plurality of post-processing parameter sets to form a new parameter set, and filter the restored samples with the parameters of the newly constructed parameter set.
  • The restoration unit 3350 may filter the luma values of the restoration samples by using one post-processing parameter set among the plurality of post-processing parameter sets, and filter the chroma values of the restoration samples by using another post-processing parameter set.
  • The predictive encoding unit 3330 may inter-predict the coding units included in the current slice while regarding the boundary of the current slice as a picture boundary.
  • For example, when the predictive encoding unit 3330 derives the motion vector of the current coding unit, the search range may be limited to the boundary of the area of the reference image located at the same position as the current slice.
  • the predictive encoding unit 3330 may regard a boundary of a slice as a boundary of a picture in a Bi-Optical Flow (BIO) processing mode, and predictively encode a current coding unit.
  • the generation unit 3370 generates a bitstream including information used for encoding an image.
  • The bitstream may contain a sequence parameter set, a picture parameter set, a group header, a block parameter set, and at least one post-processing parameter set.
  • The generation unit 3370 may generate binary values corresponding to the syntax elements and arithmetically encode them according to CABAC (context-adaptive binary arithmetic coding).
  • The generation unit 3370 selectively applies WPP (wavefront parallel processing) technology considering how many tiles are included in the slice.
  • The generation unit 3370 can set the probability model for the CTUs included in the tile based on WPP when only one tile is included in the slice, and when a slice contains multiple tiles, the WPP technology may not be applied to the CTUs included in the tiles.
  • FIG. 34 is a view for explaining an image encoding method according to an embodiment.
  • In step S3410, the image encoding apparatus 3300 constructs a plurality of first reference picture lists for the image sequence including the current image.
  • Each of the plurality of first reference picture lists may consist of at least one of a short-term reference picture and a long-term reference picture.
  • In step S3420, the image encoding apparatus 3300 sets blocks and a block group including at least one block in the current image.
  • the block may be a tile, and the block group may be a slice.
  • The image encoding apparatus 3300 may divide the current image into a plurality of CTUs.
  • Alternatively, the image encoding apparatus 3300 may divide the current image into a plurality of tiles, and each tile can be divided into one or more CTUs.
  • the image encoding device 3300 can set a slice including at least one tile in the current image.
  • The image encoding apparatus 3300 may divide the current image into one or more slices and divide each slice into one or more tiles. And the image encoding apparatus 3300 may divide each tile into one or more CTUs.
  • In step S3430, the image encoding apparatus 3300 selects, from among the plurality of first reference image lists, the first reference image list for the current block group including the current block in the current image, and obtains a second reference image list based on the selected first reference image list.
  • In step S3440, the image encoding apparatus 3300 predictively encodes the sub-block included in the current block based on the reference image included in the second reference image list.
  • The image encoding apparatus 3300 may change the luma values of the prediction samples with parameters included in at least one of the plurality of post-processing parameter sets.
  • The image encoding apparatus 3300 may obtain restoration samples by using the prediction samples, and the restored samples can be adaptively loop filtered.
  • The image encoding apparatus 3300 may filter the restored samples with parameters included in at least one of the plurality of post-processing parameter sets.
  • The medium may continuously store the computer-executable program, or may temporarily store it for execution or download.
  • The medium may be a variety of recording means or storage means in the form of a single piece of hardware or a combination of several pieces of hardware; it is not limited to media directly connected to a computer system, but may be distributed over a network.
  • Examples of the medium include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and media configured to store program instructions, including ROM, RAM, flash memory, and the like.
  • Other examples of the medium include recording media and storage media managed by app stores that distribute applications, by sites that supply or distribute various other software, and by servers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method for decoding an image according to an embodiment comprises the steps of: obtaining, from a sequence parameter set of a bitstream, information indicating a plurality of first reference image lists for an image sequence including a current image; obtaining, from a group header of the bitstream, an indicator for a current block group including a current block in the current image; obtaining a second reference image list based on a first reference image list indicated by the indicator; and predictively decoding a sub-block of the current block based on a reference image included in the second reference image list.
PCT/KR2020/002924 2019-02-28 2020-02-28 Appareils permettant de coder et de décoder une image, et procédés permettant de coder et de décoder une image par ceux-ci WO2020175967A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020217027776A KR20210122818A (ko) 2019-02-28 2020-02-28 영상의 부호화 및 복호화 장치, 및 이에 의한 영상의 부호화 및 복호화 방법
US17/434,657 US20230103665A1 (en) 2019-02-28 2020-02-28 Apparatuses for encoding and decoding image, and methods for encoding and decoding image thereby
EP20763453.6A EP3934252A4 (fr) 2019-02-28 2020-02-28 Appareils permettant de coder et de décoder une image, et procédés permettant de coder et de décoder une image par ceux-ci
BR112021016926A BR112021016926A2 (pt) 2019-02-28 2020-02-28 Método de decodificação de imagem, aparelho de decodificação de imagem, e método de codificação de imagem
MX2021010368A MX2021010368A (es) 2019-02-28 2020-02-28 Aparatos para codificar y decodificar imagenes, y metodos para codificar y decodificar imagenes mediante los mismos.
CN202080017337.7A CN113508590A (zh) 2019-02-28 2020-02-28 用于对图像进行编码和解码的设备及其用于对图像进行编码和解码的方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962811764P 2019-02-28 2019-02-28
US62/811,764 2019-02-28

Publications (1)

Publication Number Publication Date
WO2020175967A1 true WO2020175967A1 (fr) 2020-09-03

Family

ID=72238536

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/002924 WO2020175967A1 (fr) 2019-02-28 2020-02-28 Appareils permettant de coder et de décoder une image, et procédés permettant de coder et de décoder une image par ceux-ci

Country Status (7)

Country Link
US (1) US20230103665A1 (fr)
EP (1) EP3934252A4 (fr)
KR (1) KR20210122818A (fr)
CN (1) CN113508590A (fr)
BR (1) BR112021016926A2 (fr)
MX (1) MX2021010368A (fr)
WO (1) WO2020175967A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220182623A1 (en) * 2019-03-13 2022-06-09 Lg Electronics Inc. Video encoding/decoding method and device using segmentation limitation for chroma block, and method for transmitting bitstream

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140016699A1 (en) * 2012-07-13 2014-01-16 Qualcomm Incorporated Reference picture list modification for video coding
US20170034508A1 (en) * 2012-09-07 2017-02-02 Vid Scale, Inc. Reference picture lists modification
KR20180032549A (ko) * 2012-07-02 2018-03-30 삼성전자주식회사 블록크기에 따라 인터 예측의 참조픽처리스트를 결정하는 비디오 부호화 방법과 그 장치, 비디오 복호화 방법과 그 장치
KR20180063033A (ko) * 2015-06-05 2018-06-11 인텔렉추얼디스커버리 주식회사 화면내 예측에서의 참조 화소 구성에 관한 부호화/복호화 방법 및 장치
KR20180080166A (ko) * 2015-06-05 2018-07-11 인텔렉추얼디스커버리 주식회사 화면 내 예측 모드에 대한 부호화/복호화 방법 및 장치

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130050863A (ko) * 2011-11-08 2013-05-16 삼성전자주식회사 참조리스트를 이용하는 예측을 수반하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치
US9973749B2 (en) * 2012-01-20 2018-05-15 Nokia Technologies Oy Method for video coding and an apparatus, a computer-program product, a system, and a module for the same
US9319679B2 (en) * 2012-06-07 2016-04-19 Qualcomm Incorporated Signaling data for long term reference pictures for video coding
US9894357B2 (en) * 2013-07-30 2018-02-13 Kt Corporation Image encoding and decoding method supporting plurality of layers and apparatus using same
US10104362B2 (en) * 2013-10-08 2018-10-16 Sharp Kabushiki Kaisha Image decoding device, image coding device, and coded data
CN115086652A (zh) * 2015-06-05 2022-09-20 杜比实验室特许公司 图像编码和解码方法和图像解码设备
KR20170058838A (ko) * 2015-11-19 2017-05-29 한국전자통신연구원 화면간 예측 향상을 위한 부호화/복호화 방법 및 장치
US10555002B2 (en) * 2016-01-21 2020-02-04 Intel Corporation Long term reference picture coding
KR102324844B1 (ko) * 2016-06-17 2021-11-11 세종대학교산학협력단 비디오 신호의 복호화 방법 및 이의 장치
WO2019089864A1 (fr) * 2017-11-01 2019-05-09 Vid Scale, Inc. Compensation de mouvement de blocs superposés

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180032549A (ko) * 2012-07-02 2018-03-30 삼성전자주식회사 블록크기에 따라 인터 예측의 참조픽처리스트를 결정하는 비디오 부호화 방법과 그 장치, 비디오 복호화 방법과 그 장치
US20140016699A1 (en) * 2012-07-13 2014-01-16 Qualcomm Incorporated Reference picture list modification for video coding
US20170034508A1 (en) * 2012-09-07 2017-02-02 Vid Scale, Inc. Reference picture lists modification
KR20180063033A (ko) * 2015-06-05 2018-06-11 인텔렉추얼디스커버리 주식회사 화면내 예측에서의 참조 화소 구성에 관한 부호화/복호화 방법 및 장치
KR20180080166A (ko) * 2015-06-05 2018-07-11 인텔렉추얼디스커버리 주식회사 화면 내 예측 모드에 대한 부호화/복호화 방법 및 장치

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3934252A4 *

Also Published As

Publication number Publication date
EP3934252A4 (fr) 2022-11-30
US20230103665A1 (en) 2023-04-06
CN113508590A (zh) 2021-10-15
MX2021010368A (es) 2021-10-01
EP3934252A1 (fr) 2022-01-05
KR20210122818A (ko) 2021-10-12
BR112021016926A2 (pt) 2021-11-03

Similar Documents

Publication Publication Date Title
US11343498B2 (en) Method and apparatus for processing video signal
KR102613966B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
KR102441568B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
KR102410424B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
US11445185B2 (en) Image encoding/decoding method and device, and recording medium in which bitstream is stored
KR20190043482A (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
JP2020533822A (ja) 画像復号方法、画像符号化方法、及び、記録媒体
KR20230156294A (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
KR20180123674A (ko) 비디오 신호 처리 방법 및 장치
KR20180015598A (ko) 비디오 신호 처리 방법 및 장치
KR20180001478A (ko) 비디오 신호 처리 방법 및 장치
KR20180051424A (ko) 비디오 신호 처리 방법 및 장치
WO2020175913A1 (fr) Procédé permettant de coder/décoder un signal vidéo, et appareil associé
CN116866563A (zh) 图像编码/解码方法、存储介质以及图像数据的传输方法
KR20180059367A (ko) 비디오 신호 처리 방법 및 장치
KR102654647B1 (ko) 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
CN112771862A (zh) 通过使用边界处理对图像进行编码/解码的方法和设备以及用于存储比特流的记录介质
KR20200010113A (ko) 지역 조명 보상을 통한 효과적인 비디오 부호화/복호화 방법 및 장치
WO2020175970A1 (fr) Procédé de codage et de décodage vidéo pour prédire une composante de chrominance, et dispositif de codage et de décodage vidéo pour prédire une composante de chrominance
CN114342372A (zh) 帧内预测模式、以及熵编解码方法和装置
WO2020175914A1 (fr) Procédé de codage/décodage de signal d'image et dispositif associé
WO2020175967A1 (fr) Appareils permettant de coder et de décoder une image, et procédés permettant de coder et de décoder une image par ceux-ci
CN113924773A (zh) 图像编码/解码方法和装置以及用于存储比特流的记录介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20763453

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217027776

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112021016926

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2020763453

Country of ref document: EP

Effective date: 20210928

ENP Entry into the national phase

Ref document number: 112021016926

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20210826