WO2019117613A1 - Video decoding method and device using local luminance compensation, and video encoding method and device using local luminance compensation - Google Patents

Video decoding method and device using local luminance compensation, and video encoding method and device using local luminance compensation

Info

Publication number
WO2019117613A1
Authority
WO
WIPO (PCT)
Prior art keywords
encoding unit
block
luminance compensation
local luminance
encoding
Prior art date
Application number
PCT/KR2018/015751
Other languages
English (en)
Korean (ko)
Inventor
템즈아니쉬
진보라
Original Assignee
삼성전자주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사
Priority to KR1020207010874A (published as KR20200088295A)
Publication of WO2019117613A1

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N 19/10 … using adaptive coding
                        • H04N 19/102 … characterised by the element, parameter or selection affected or controlled by the adaptive coding
                            • H04N 19/103 Selection of coding mode or of prediction mode
                            • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
                            • H04N 19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
                        • H04N 19/134 … characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                            • H04N 19/136 Incoming video signal characteristics or properties
                        • H04N 19/169 … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                            • H04N 19/17 … the unit being an image region, e.g. an object
                                • H04N 19/176 … the region being a block, e.g. a macroblock
                    • H04N 19/50 … using predictive coding
                        • H04N 19/503 … involving temporal prediction
                            • H04N 19/51 Motion estimation or motion compensation

Definitions

  • a video decoding apparatus comprising: a memory; and at least one processor coupled to the memory, wherein the at least one processor is configured to: determine an available block among neighboring blocks adjacent to the current block in the same frame as the current block; derive local luminance compensation information from the available block; determine a local luminance compensation model based on the local luminance compensation information; and apply local luminance compensation to the current block using the determined local luminance compensation model.
  • the local luminance compensation parameters may be distinguished according to the bidirectional reference list number of the neighboring block and the three color components.
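As an illustration of the bullets above, a local luminance compensation model is commonly a linear scale/offset pair (a, b) fitted so that the reconstructed neighboring samples of the current block match the corresponding neighboring samples of the reference block. The sketch below uses a plain least-squares fit; the function names and the fitting method are illustrative assumptions, not the specific derivation claimed in this application.

```python
def derive_lic_params(neigh_cur, neigh_ref):
    """Fit scale a and offset b by least squares so that
    neigh_cur[i] ~= a * neigh_ref[i] + b."""
    n = len(neigh_cur)
    sx, sy = sum(neigh_ref), sum(neigh_cur)
    sxx = sum(x * x for x in neigh_ref)
    sxy = sum(x * y for x, y in zip(neigh_ref, neigh_cur))
    denom = n * sxx - sx * sx
    if denom == 0:
        return 1.0, 0.0                      # fall back to identity model
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def apply_lic(pred_block, a, b):
    """Apply the compensation model to every predicted sample."""
    return [[a * p + b for p in row] for row in pred_block]
```

Separate (a, b) pairs would be kept per reference list and per color component, matching the distinction noted above.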
  • an "image" may be a static image such as a still image of a video, or a dynamic image such as a moving picture, that is, the video itself.
  • One maximum coding block may be divided into MxN coding blocks (where M and N are integers) including MxN samples.
  • information on the maximum size of a luma coding block that can be binary split, and information on the luma block size difference, can be obtained from the bitstream.
  • the information on the luma block size difference may indicate the size difference between the maximum luma encoding unit and the largest luma encoding block that can be binary split. Therefore, by combining the information on the maximum size of the binary-splittable luma coding block obtained from the bitstream with the information on the luma block size difference, the size of the maximum luma encoding unit can be determined. Using the size of the maximum luma encoding unit, the size of the maximum chroma encoding unit can also be determined.
  • the maximum size of the luma coding block capable of binary division can be variably determined.
  • the maximum size of a luma coding block capable of ternary splitting can be fixed.
  • the maximum size of a luma coding block capable of ternary splitting in an I slice may be 32x32, and the maximum size of a luma coding block capable of ternary splitting in a P slice or B slice may be 64x64.
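A minimal sketch of how these size limits might be combined; the log2-coded inputs and the function names are illustrative assumptions rather than syntax elements defined by this application.

```python
def max_luma_cu_size(max_binary_split_log2, luma_size_diff_log2):
    """Combine the signalled maximum binary-splittable block size with the
    luma block size difference to recover the maximum luma coding unit size."""
    return 1 << (max_binary_split_log2 + luma_size_diff_log2)

def max_ternary_split_size(slice_type):
    """Fixed per-slice-type cap on ternary-splittable luma blocks."""
    return 32 if slice_type == "I" else 64   # I: 32x32, P/B: 64x64
```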
  • the image decoding apparatus 100 can obtain the split shape mode information from the bitstream as one bin string.
  • the form of the bitstream received by the video decoding apparatus 100 may include a fixed-length binary code, a unary code, a truncated unary code, and a predetermined binary code.
  • A bin string is a binary sequence representing information. The bin string may consist of at least one bit.
  • the image decoding apparatus 100 can obtain the split shape mode information corresponding to the bin string based on the split rule.
  • the video decoding apparatus 100 can determine, based on one bin string, whether to quad split the encoding unit, whether to split it at all, the split direction, and the split type.
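To illustrate how one bin string can jointly signal whether to quad split, whether to split at all, the split direction, and the split type, here is a hypothetical binarization; the application does not fix this particular code, so the mapping below is purely an assumption for illustration.

```python
def parse_split_mode(bins):
    """Decode a hypothetical bin string into a split shape mode."""
    if bins[0] == 0:
        return "QUAD_SPLIT"                  # first bin: quad split or not
    if bins[1] == 0:
        return "NO_SPLIT"                    # second bin: split at all?
    direction = "VER" if bins[2] == 0 else "HOR"
    kind = "BINARY" if bins[3] == 0 else "TERNARY"
    return f"{kind}_{direction}"
```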
  • one or more prediction blocks for prediction from the encoding unit can be determined.
  • the prediction block may be equal to or smaller than the encoding unit.
  • one or more transform blocks for transformation may be determined from the encoding unit.
  • the transform block may be equal to or smaller than the encoding unit.
  • the shapes and sizes of the transform block and the prediction block may not be related to each other.
  • the shape of the encoding unit may include a square and a non-square. If the width and height of the encoding unit are the same (i.e., the block type of the encoding unit is 4Nx4N), the image decoding apparatus 100 can determine the block type information of the encoding unit as a square. If the width and height of the encoding unit are not the same, the image decoding apparatus 100 can determine the shape of the encoding unit as a non-square.
  • the video decoding apparatus 100 can determine whether the encoding unit is oriented in the horizontal direction or the vertical direction. Further, the image decoding apparatus 100 can determine the size of the encoding unit based on at least one of the length of the width, the length of the height, and the area of the encoding unit.
  • the image decoding apparatus 100 can obtain the split shape mode information from the bitstream. However, the present invention is not limited thereto, and the image decoding apparatus 100 and the image encoding apparatus 2200 can determine pre-agreed split shape mode information based on the block type information.
  • the video decoding apparatus 100 can determine the pre-agreed split shape mode information for the maximum encoding unit or the minimum encoding unit. For example, the image decoding apparatus 100 may determine the split shape mode information for the maximum encoding unit to be a quad split. Also, the video decoding apparatus 100 can determine the split shape mode information for the minimum encoding unit to be "not split". Specifically, the image decoding apparatus 100 can determine the size of the maximum encoding unit to be 256x256.
  • the video decoding apparatus 100 can determine the pre-agreed split shape mode information to be a quad split in advance.
  • A quad split is a split shape mode that bisects both the width and the height of the encoding unit.
  • the image decoding apparatus 100 can obtain a 128x128 encoding unit from the 256x256 maximum encoding unit based on the split shape mode information. Also, the image decoding apparatus 100 can determine the size of the minimum encoding unit to be 4x4.
  • the image decoding apparatus 100 can obtain split shape mode information indicating "not split" for the minimum encoding unit.
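The quad-split recursion from the 256x256 maximum encoding unit down to the 4x4 minimum encoding unit can be sketched as follows; the traversal order and helper are illustrative, not decoder code.

```python
def quad_split(x, y, size, min_size, out):
    """Recursively quad split a square unit until min_size is reached,
    bisecting both the width and the height at each level."""
    if size == min_size:
        out.append((x, y, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quad_split(x + dx, y + dy, half, min_size, out)
```

Calling `quad_split(0, 0, 256, 4, leaves)` would enumerate all 4x4 leaves of a fully quad-split maximum encoding unit.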
  • according to the split shape mode information indicating that the current encoding unit is not divided, the decoding unit 120 may determine an encoding unit 310a having the same size as the current encoding unit 300, or may determine encoding units 310b, 310c, 310d, 310e, 310f, etc. divided based on the split shape mode information indicating a predetermined splitting method.
  • the image decoding apparatus 100 may determine two encoding units 310b obtained by dividing the current encoding unit 300 in the vertical direction, based on split shape mode information indicating a vertical split.
  • the image decoding apparatus 100 can determine two encoding units 310c obtained by dividing the current encoding unit 300 in the horizontal direction, based on split shape mode information indicating a horizontal split.
  • the image decoding apparatus 100 can determine four encoding units 310d obtained by dividing the current encoding unit 300 in the vertical and horizontal directions, based on split shape mode information indicating vertical and horizontal splitting.
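The three splits above (two units 310b from a vertical split, two units 310c from a horizontal split, four units 310d from both) reduce to simple rectangle arithmetic; this sketch is illustrative, not decoder code.

```python
def split_cu(x, y, w, h, mode):
    """Return the sub-unit rectangles (x, y, w, h) for one split mode."""
    if mode == "NO_SPLIT":
        return [(x, y, w, h)]
    if mode == "VER":                        # two units side by side
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if mode == "HOR":                        # two units stacked
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if mode == "QUAD":                       # four units
        return [(x + dx, y + dy, w // 2, h // 2)
                for dy in (0, h // 2) for dx in (0, w // 2)]
    raise ValueError(mode)
```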
  • the ratio of the width and height of the current encoding unit 400 or 450 may be 4:1 or 1:4. If the ratio of width to height is 4:1, the width is longer than the height, so the block type information may be horizontal. If the ratio of width to height is 1:4, the width is shorter than the height, so the block type information may be vertical.
  • the image decoding apparatus 100 may determine to divide the current encoding unit into odd number blocks based on the division type mode information. The image decoding apparatus 100 can determine the division direction of the current encoding unit 400 or 450 based on the block type information of the current encoding unit 400 or 450.
  • when the current encoding unit 400 is in the vertical direction, the image decoding apparatus 100 can determine the encoding units 430a, 430b, and 430c by dividing the current encoding unit 400 in the horizontal direction. Also, when the current encoding unit 450 is in the horizontal direction, the image decoding apparatus 100 can determine the encoding units 480a, 480b, and 480c by dividing the current encoding unit 450 in the vertical direction.
  • the coding units 430b and 480b positioned at the center may be restricted so as not to be further divided, unlike the other coding units 430a, 430c, 480a, and 480c, or may be limited to being divided only a certain number of times.
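An odd split into three units such as 430a, 430b, 430c (or 480a, 480b, 480c) can be sketched as below; the 1:2:1 size ratio for the center unit is an assumption made for illustration.

```python
def ternary_split(x, y, w, h, direction):
    """Split a unit into three sub-units with a 1:2:1 ratio."""
    if direction == "HOR":                   # tall unit split horizontally
        q = h // 4
        return [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
    q = w // 4                               # wide unit split vertically
    return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
```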
  • the image decoding apparatus 100 may determine whether or not to divide the square first encoding unit 500 into encoding units, based on at least one of the block type information and the split shape mode information.
  • when the split shape mode information indicates a horizontal split, the image decoding apparatus 100 can divide the first encoding unit 500 in the horizontal direction to determine the second encoding unit 510.
  • the first encoding unit, the second encoding unit, and the third encoding unit used according to an embodiment are terms used to understand the relation before and after the division between encoding units.
  • if the first encoding unit is divided, the second encoding unit can be determined, and if the second encoding unit is divided, the third encoding unit can be determined.
  • the relationship between the first coding unit, the second coding unit and the third coding unit used can be understood to be in accordance with the above-mentioned characteristic.
  • predetermined encoding units (for example, an encoding unit located at a predetermined position, or a square-shaped encoding unit)
  • the square-shaped third coding unit 520b, which is one of the odd-numbered third coding units 520b, 520c, and 520d, may be divided in the horizontal direction into a plurality of fourth coding units.
  • the non-square fourth encoding unit 530b or 530d which is one of the plurality of fourth encoding units 530a, 530b, 530c, and 530d, may be further divided into a plurality of encoding units.
  • the fourth encoding unit 530b or 530d in the non-square form may be divided again into odd number of encoding units.
  • a method which can be used for recursive division of an encoding unit will be described later in various embodiments.
  • the split shape mode information of the current encoding units 600 and 650 may be obtained from a sample at a predetermined position (for example, samples 640 and 690) among the plurality of samples included in the current encoding units 600 and 650.
  • the predetermined position in the current coding unit 600 from which at least one piece of the split shape mode information can be obtained should not be limited to the middle position shown in FIG. 6; the predetermined position may be any of various positions included in the current coding unit 600 (e.g., top, bottom, left, right, top left, bottom left, top right, or bottom right).
  • the image decoding apparatus 100 may obtain the split shape mode information from the predetermined position and divide the current encoding unit into encoding units of various shapes and sizes.
  • the image decoding apparatus 100 may divide the current encoding unit into a plurality of encoding units and determine a predetermined encoding unit.
  • the information indicating the position of the upper left sample 630a of the upper coding unit 620a may indicate the coordinates (xa, ya), the information indicating the position of the upper left sample 630b of the middle coding unit 620b may indicate the coordinates (xb, yb), and the information indicating the position of the upper left sample 630c of the lower coding unit 620c may indicate the coordinates (xc, yc).
  • the video decoding apparatus 100 can determine the center encoding unit 620b using the coordinates of the upper left samples 630a, 630b, and 630c included in the encoding units 620a, 620b, and 620c.
  • the coordinates indicating the positions of the upper left samples 630a, 630b, and 630c may be coordinates indicating absolute positions in the picture.
  • with the position of the upper left sample 630a of the upper coding unit 620a as a reference, the coordinates (dxb, dyb), which indicate the relative position of the upper left sample 630b of the middle encoding unit 620b, and the coordinates (dxc, dyc), which indicate the relative position of the upper left sample 630c of the lower encoding unit 620c, may also be used.
  • the image decoding apparatus 100 may determine the middle encoding unit directly or indirectly using the coordinates (xa, ya) indicating the position of the upper left sample 630a of the upper encoding unit 620a, the coordinates (xb, yb) indicating the position of the upper left sample 630b of the middle encoding unit 620b, and the coordinates (xc, yc) indicating the position of the upper left sample 630c of the lower encoding unit 620c.
  • the image decoding apparatus 100 can determine the respective widths or heights of the encoding units 620a, 620b, and 620c using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating their positions.
  • the image decoding apparatus 100 may determine the width of the upper encoding unit 620a as the width of the current encoding unit 600.
  • the image decoding apparatus 100 can determine the height of the upper encoding unit 620a as yb-ya.
  • the image decoding apparatus 100 may determine the width of the middle encoding unit 620b as the width of the current encoding unit 600 according to an embodiment.
  • the video decoding apparatus 100 can determine the respective widths or heights of the encoding units 660a, 660b, and 660c using the coordinates (xd, yd), which indicate the position of the upper left sample 670a of the left encoding unit 660a, the coordinates (xe, ye), which indicate the position of the upper left sample 670b of the middle encoding unit 660b, and the coordinates (xf, yf), which indicate the position of the upper left sample 670c of the right encoding unit 660c.
  • the image decoding apparatus 100 can determine the respective sizes of the encoding units 660a, 660b, and 660c using the coordinates (xd, yd), (xe, ye), and (xf, yf) indicating their positions.
  • the image decoding apparatus 100 may determine the width of the left encoding unit 660a as xe-xd. The image decoding apparatus 100 can determine the height of the left encoding unit 660a as the height of the current encoding unit 650. According to an embodiment, the image decoding apparatus 100 may determine the width of the middle encoding unit 660b as xf-xe. The image decoding apparatus 100 can determine the height of the middle encoding unit 660b as the height of the current encoding unit 650.
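The width arithmetic above (xe-xd, xf-xe, and the remainder of the current unit) and the selection of the differently sized middle unit can be sketched as follows; the helper names and the fallback rule are illustrative assumptions.

```python
def widths_from_lefts(lefts, total_width):
    """Recover sub-unit widths from upper-left x coordinates (cf. xd, xe, xf)."""
    widths = [lefts[i + 1] - lefts[i] for i in range(len(lefts) - 1)]
    widths.append(total_width - (lefts[-1] - lefts[0]))  # last unit: remainder
    return widths

def pick_center_unit(widths):
    """Pick the unit whose size differs from the others; fall back to the
    positional middle when all sizes match (an illustrative convention)."""
    for i, w in enumerate(widths):
        if widths.count(w) == 1:
            return i
    return len(widths) // 2
```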
  • the image decoding apparatus 100 may determine the encoding unit 660b, which has a size different from that of the left encoding unit 660a and the right encoding unit 660c, as the encoding unit at the predetermined position.
  • however, the above process in which the video decoding apparatus 100 determines an encoding unit having a size different from the other encoding units is merely one embodiment of determining an encoding unit at a predetermined position using the sizes of encoding units determined based on sample coordinates.
  • various processes of determining an encoding unit at a predetermined position by comparing the sizes of encoding units determined according to predetermined sample coordinates may be used.
  • the position of the sample considered for determining the position of the coding unit should not be interpreted as being limited to the upper left; information about the position of any sample included in the coding unit can be interpreted as being available.
  • the image decoding apparatus 100 can select a coding unit at a predetermined position among the odd number of coding units determined by dividing the current coding unit, considering the shape of the current coding unit. For example, if the current coding unit has a non-square shape whose width is greater than its height, the image decoding apparatus 100 can determine the coding unit at the predetermined position along the horizontal direction. That is, the image decoding apparatus 100 may determine one of the encoding units located at different positions in the horizontal direction and place a restriction on that encoding unit. If the current coding unit has a non-square shape whose height is greater than its width, the image decoding apparatus 100 can determine the coding unit at the predetermined position along the vertical direction. That is, the image decoding apparatus 100 may determine one of the encoding units located at different positions in the vertical direction and place a restriction on that encoding unit.
  • the image decoding apparatus 100 may use information indicating positions of even-numbered encoding units in order to determine an encoding unit at a predetermined position among the even-numbered encoding units.
  • the image decoding apparatus 100 can determine an even number of encoding units by dividing the current encoding unit (binary division) and determine a predetermined encoding unit using information on the positions of the even number of encoding units. A concrete procedure for this is omitted because it may be a process corresponding to a process of determining a coding unit of a predetermined position (e.g., the middle position) among the odd number of coding units described with reference to FIG.
  • the split shape mode information of the current encoding unit 600 can be obtained from the sample 640 positioned in the middle of the current encoding unit 600, and when the current encoding unit 600 is divided into a plurality of encoding units based on the split shape mode information, the encoding unit 620b including the sample 640 may be determined as the middle encoding unit.
  • the information used for determining the coding unit located in the middle should not be limited to the division type mode information, and various kinds of information can be used in the process of determining the coding unit located in the middle.
  • the image decoding apparatus 100 can determine the sample at the predetermined position in consideration of the block shape of the current encoding unit 600, and can determine, among the plurality of encoding units 620a, 620b, and 620c determined by dividing the current encoding unit 600, the encoding unit 620b including the sample from which predetermined information (for example, split shape mode information) can be obtained.
  • the image decoding apparatus 100 may determine the sample 640 located in the center of the current encoding unit 600 as the sample from which predetermined information can be obtained, and may place a predetermined restriction on the encoding unit 620b including the sample 640 in the decoding process.
  • the position of the sample from which the predetermined information can be obtained should not be construed to be limited to the above-mentioned position, but may be interpreted as samples at arbitrary positions included in the encoding unit 620b to be determined for limiting.
  • the position of a sample from which predetermined information can be obtained may be determined according to the type of the current encoding unit 600.
  • based on the block type information, it can be determined whether the current encoding unit has a square or non-square shape, and the position of the sample from which predetermined information can be obtained can be determined according to that shape.
  • the video decoding apparatus 100 may use at least one of the information on the width and the information on the height of the current coding unit to determine, as the sample from which predetermined information can be obtained, a sample positioned on a boundary dividing at least one of the width and the height of the current coding unit in half.
  • when the block shape information of the current encoding unit indicates a non-square shape, the image decoding apparatus 100 can determine one of the samples adjacent to the boundary dividing the longer side of the current encoding unit in half as the sample from which predetermined information can be obtained.
  • the image decoding apparatus 100 may use the split shape mode information to determine an encoding unit at a predetermined position among the plurality of encoding units.
  • the image decoding apparatus 100 may obtain the split shape mode information from a sample at a predetermined position included in an encoding unit, and may divide each of the plurality of encoding units, generated by dividing the current encoding unit, using the split shape mode information obtained from the sample at the predetermined position included in that encoding unit. That is, each encoding unit can be recursively divided using the split shape mode information obtained from the sample at the predetermined position included in it. Since the recursive division process of the encoding unit has been described with reference to FIG. 5, a detailed description thereof will be omitted.
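The recursive division driven by split information read from a sample at a predetermined position inside each resulting unit can be sketched as follows; the centre-sample convention and the `read_split` callback are illustrative assumptions.

```python
def decode_tree(x, y, w, h, read_split):
    """Read the split mode from the unit's own centre sample and recurse."""
    mode = read_split(x + w // 2, y + h // 2)
    if mode == "NO_SPLIT":
        return [(x, y, w, h)]
    if mode == "VER":                        # split into two side-by-side halves
        halves = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    else:                                    # "HOR": split into two stacked halves
        halves = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    leaves = []
    for r in halves:
        leaves += decode_tree(*r, read_split)
    return leaves
```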
  • FIG. 7 illustrates a sequence in which a plurality of coding units are processed when the image decoding apparatus 100 determines a plurality of coding units by dividing the current coding unit according to an embodiment.
  • the image decoding apparatus 100 may determine that the second encoding units 710a and 710b, determined by dividing the first encoding unit 700 in the vertical direction, are processed in the horizontal direction 710c.
  • the image decoding apparatus 100 may determine the processing order of the second encoding units 730a and 730b determined by dividing the first encoding unit 700 in the horizontal direction as the vertical direction 730c.
  • the image decoding apparatus 100 may determine a processing order in which the encoding units located in one row among the second encoding units 750a, 750b, 750c, and 750d, determined by dividing the first encoding unit 700 in the vertical and horizontal directions, are processed and then the encoding units located in the next row are processed (for example, a raster scan order or a z-scan order 750e).
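The order 750e, in which the units of one row are processed before the units of the next row, amounts to sorting the units by (y, x); a minimal sketch:

```python
def raster_order(units):
    """Process units row by row, left to right (raster order 750e)."""
    return sorted(units, key=lambda u: (u[1], u[0]))  # sort by (y, x)
```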
  • the method of dividing the plurality of encoding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be a method corresponding to the method of dividing the first encoding unit 700.
  • the plurality of encoding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may be independently divided into a plurality of encoding units.
  • the image decoding apparatus 100 may determine the second encoding units 710a and 710b by dividing the first encoding unit 700 in the vertical direction, and may further determine whether to divide or not to divide each of the second encoding units 710a and 710b independently.
  • the image decoding apparatus 100 may divide the second encoding unit 710a on the left side in the horizontal direction into the third encoding units 720a and 720b, and may not divide the second encoding unit 710b on the right side.
  • the processing order of the encoding units may be determined based on the division process of the encoding units.
  • the processing order of the divided coding units can be determined based on the processing order of the coding units immediately before being divided.
  • the image decoding apparatus 100 can determine the order in which the third encoding units 720a and 720b determined by dividing the second encoding unit 710a on the left side are processed independently of the second encoding unit 710b on the right side.
  • because the second encoding unit 710a on the left side is divided in the horizontal direction to determine the third encoding units 720a and 720b, the third encoding units 720a and 720b may be processed in the vertical direction 720c.
  • the image decoding apparatus 100 may determine that the current encoding unit is divided into an odd number of encoding units based on the obtained split shape mode information.
  • the square first encoding unit 800 may be divided into non-square second encoding units 810a and 810b, and each of the second encoding units 810a and 810b may be independently divided into third encoding units 820a, 820b, 820c, 820d, and 820e.
  • the image decoding apparatus 100 can determine the plurality of third encoding units 820a and 820b by dividing the left second encoding unit 810a in the horizontal direction, and can divide the right second encoding unit 810b into an odd number of third encoding units 820c, 820d, and 820e.
  • the image decoding apparatus 100 determines whether the third encoding units 820a, 820b, 820c, 820d, and 820e can be processed in a predetermined order, and thereby determines whether an encoding unit divided into an odd number exists. Referring to FIG. 8, the image decoding apparatus 100 may recursively divide the first encoding unit 800 to determine the third encoding units 820a, 820b, 820c, 820d, and 820e.
  • the image decoding apparatus 100 may determine whether any of the first encoding unit 800, the second encoding units 810a and 810b, or the third encoding units 820a, 820b, 820c, 820d, and 820e is divided into an odd number of encoding units. For example, the encoding unit located on the right among the second encoding units 810a and 810b may be divided into the odd number of third encoding units 820c, 820d, and 820e.
  • the order in which the plurality of encoding units included in the first encoding unit 800 are processed may be a predetermined order (for example, a z-scan order 830), and the image decoding apparatus 100 can determine whether the third encoding units 820c, 820d, and 820e, determined by dividing the right second encoding unit 810b into an odd number, satisfy the condition that they can be processed according to the predetermined order.
  • the image decoding apparatus 100 determines whether the third encoding units 820a, 820b, 820c, 820d, and 820e included in the first encoding unit 800 satisfy the condition that they can be processed in the predetermined order, and the condition relates to whether at least one of the width and the height of the second encoding units 810a and 810b is divided in half along the boundaries of the third encoding units 820a, 820b, 820c, 820d, and 820e.
  • the third encoding units 820a and 820b, determined by dividing the height of the non-square left second encoding unit 810a in half, can satisfy the condition.
  • since the boundaries of the third encoding units 820c, 820d, and 820e, determined by dividing the right second encoding unit 810b into three encoding units, do not divide the width or height of the right second encoding unit 810b in half, the third encoding units 820c, 820d, and 820e may be determined as not satisfying the condition.
  • in the case of such dissatisfaction of the condition, the image decoding apparatus 100 may determine that the scan order is disconnected, and may determine, based on the determination result, that the right second encoding unit 810b is divided into an odd number of encoding units.
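The processability condition described above, that the sub-unit boundaries must divide the parent's width or height exactly in half, can be sketched as an illustrative check (function name and representation are assumptions):

```python
def halves_parent(parent, children):
    """Return True when every child edge lies on the parent's border or on
    the line halving the parent's width or height."""
    px, py, pw, ph = parent
    x_ok = {px, px + pw // 2, px + pw}       # allowed vertical edge positions
    y_ok = {py, py + ph // 2, py + ph}       # allowed horizontal edge positions
    for cx, cy, cw, ch in children:
        if cx not in x_ok or cx + cw not in x_ok:
            return False
        if cy not in y_ok or cy + ch not in y_ok:
            return False
    return True
```

For a second encoding unit split in half (like 810a) this returns True; for an odd three-way split (like 820c, 820d, 820e) it returns False, mirroring the determination above.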
• when a coding unit is divided into an odd number of coding units, the image decoding apparatus 100 may place a predetermined restriction on the coding unit at a predetermined position among the divided coding units. Since this embodiment has been described above, a detailed description thereof will be omitted.
  • FIG. 9 illustrates a process in which the image decoding apparatus 100 determines at least one encoding unit by dividing a first encoding unit 900 according to an embodiment.
  • the image decoding apparatus 100 may divide the first encoding unit 900 based on the division type mode information acquired through the receiving unit 110.
  • the first coding unit 900 in the form of a square may be divided into four coding units having a square form, or may be divided into a plurality of non-square coding units.
• the image decoding apparatus 100 may divide the first encoding unit 900 into a plurality of non-square encoding units.
• the video decoding apparatus 100 may divide the first coding unit 900 into an odd number of encoding units: the second encoding units 910a, 910b, and 910c determined by dividing in the vertical direction, or the second encoding units 920a, 920b, and 920c determined by dividing in the horizontal direction.
• since the boundaries of the second encoding units 910a, 910b, and 910c, which are determined by vertically dividing the square first encoding unit 900, do not divide the width of the first encoding unit 900 in half, the first encoding unit 900 can be determined as not satisfying the condition that it can be processed in a predetermined order.
• similarly, since the boundaries of the second encoding units 920a, 920b, and 920c, which are determined by dividing the first encoding unit 900 in the horizontal direction, do not divide the height of the first encoding unit 900 in half, the first encoding unit 900 may be determined as not satisfying the condition that it can be processed in a predetermined order.
  • the image decoding apparatus 100 may determine the encoding units of various types by dividing the first encoding unit.
• the image decoding apparatus 100 may divide the square first coding unit 900 and the non-square first coding unit 930 or 950 into various types of coding units.
• the image decoding apparatus 100 may divide the square first encoding unit 1000 into the non-square second encoding units 1010a and 1010b, or 1020a and 1020b, based on the division type mode information acquired through the receiving unit 110.
• the second encoding units 1010a, 1010b, 1020a, and 1020b may be divided independently. Accordingly, the image decoding apparatus 100 may determine whether or not to divide each of the second encoding units 1010a, 1010b, 1020a, and 1020b into a plurality of encoding units, based on the division type mode information associated with each of the second encoding units 1010a, 1010b, 1020a, and 1020b.
• the image decoding apparatus 100 may divide, in the horizontal direction, the non-square left second encoding unit 1010a determined by dividing the first encoding unit 1000 in the vertical direction, to determine the third encoding units 1012a and 1012b.
• if the right second encoding unit 1010b is divided in the horizontal direction, that is, in the same direction in which the left second encoding unit 1010a is divided, the third encoding units 1014a and 1014b may be determined. In that case, the left second encoding unit 1010a and the right second encoding unit 1010b are each divided in the horizontal direction, and the third encoding units 1012a, 1012b, 1014a, and 1014b are determined by being independently divided. However, this is the same result as the image decoding apparatus 100 dividing the first encoding unit 1000 into the four square second encoding units 1030a, 1030b, 1030c, and 1030d based on the division type mode information, and it may be inefficient in terms of image decoding.
• the image decoding apparatus 100 may divide, in the vertical direction, the non-square second encoding unit 1020a or 1020b determined by dividing the first encoding unit 1000 in the horizontal direction, to determine the third encoding units 1022a, 1022b, 1024a, and 1024b.
• however, when one second encoding unit (for example, the upper second encoding unit 1020a) is divided in the vertical direction, the image decoding apparatus 100 may restrict the other second encoding unit (for example, the lower second encoding unit 1020b) from being divided in the vertical direction, i.e., the same direction in which the upper second encoding unit 1020a was divided.
  • the image decoding apparatus 100 may determine the second encoding units 1110a, 1110b, 1120a, and 1120b by dividing the first encoding unit 1100 based on the division type mode information.
• the division type mode information may include information on various forms in which an encoding unit can be divided, but the information on various forms may not include information for dividing into four square encoding units. According to such division type mode information, the image decoding apparatus 100 cannot divide the square first encoding unit 1100 into the four square second encoding units 1130a, 1130b, 1130c, and 1130d.
  • the image decoding apparatus 100 may determine the non-square second encoding units 1110a, 1110b, 1120a, and 1120b based on the split mode information.
  • the image decoding apparatus 100 may independently divide the non-square second encoding units 1110a, 1110b, 1120a, and 1120b, respectively.
• Each of the second encoding units 1110a, 1110b, 1120a, and 1120b may be independently divided in a predetermined order through a recursive method, which may be a dividing method corresponding to the method of dividing the first encoding unit 1100 based on the division type mode information.
• the image decoding apparatus 100 can determine the square third encoding units 1112a and 1112b by dividing the left second encoding unit 1110a in the horizontal direction, and can determine the square third encoding units 1114a and 1114b by dividing the right second encoding unit 1110b in the horizontal direction. Furthermore, the image decoding apparatus 100 may determine the square third encoding units 1116a, 1116b, 1116c, and 1116d by dividing both the left second encoding unit 1110a and the right second encoding unit 1110b in the horizontal direction. In this case, encoding units may be determined in the same form as when the first encoding unit 1100 is divided into the four square second encoding units 1130a, 1130b, 1130c, and 1130d.
• likewise, the image decoding apparatus 100 can determine the square third encoding units 1122a and 1122b by dividing the upper second encoding unit 1120a in the vertical direction, and can determine the square third encoding units 1124a and 1124b by dividing the lower second encoding unit 1120b in the vertical direction. Further, the image decoding apparatus 100 may determine the square third encoding units 1126a, 1126b, 1126c, and 1126d by dividing both the upper second encoding unit 1120a and the lower second encoding unit 1120b in the vertical direction. In this case, encoding units may be determined in the same form as when the first encoding unit 1100 is divided into the four square second encoding units 1130a, 1130b, 1130c, and 1130d.
  • FIG. 12 illustrates that the processing order among a plurality of coding units may be changed according to a division process of a coding unit according to an exemplary embodiment.
• the image decoding apparatus 100 may determine the third encoding units 1216a, 1216b, 1216c, and 1216d by dividing, in the horizontal direction, the second encoding units 1210a and 1210b generated by dividing the first encoding unit 1200 in the vertical direction, and may determine the third encoding units 1226a, 1226b, 1226c, and 1226d by dividing, in the vertical direction, the second encoding units 1220a and 1220b generated by dividing the first encoding unit 1200 in the horizontal direction. Since the process of dividing the second encoding units 1210a, 1210b, 1220a, and 1220b has been described above with reference to FIG. 11, a detailed description thereof will be omitted.
• the image decoding apparatus 100 determines the third encoding units 1216a, 1216b, 1216c, and 1216d by dividing in the horizontal direction the second encoding units 1210a and 1210b generated by dividing the first encoding unit 1200 in the vertical direction, and the image decoding apparatus 100 may process the third encoding units 1216a, 1216b, 1216c, and 1216d according to an order 1217 of first processing, in the vertical direction, the third encoding units 1216a and 1216c included in the left second encoding unit 1210a, and then processing, in the vertical direction, the third encoding units 1216b and 1216d included in the right second encoding unit 1210b.
• the second encoding units 1210a, 1210b, 1220a, and 1220b are respectively divided, and the third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d are thereby determined.
• the second encoding units 1210a and 1210b determined by dividing in the vertical direction and the second encoding units 1220a and 1220b determined by dividing in the horizontal direction are divided in different forms, but according to the third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d determined afterwards, the first encoding unit 1200 is, as a result, divided into encoding units of the same form. Accordingly, the image decoding apparatus 100 may recursively divide an encoding unit through different processes based on the division type mode information, thereby eventually determining encoding units of the same form, and may process the plurality of encoding units determined in the same form in different orders.
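The recursive division and the resulting processing orders can be sketched as follows. The split-map representation and the function name are hypothetical, chosen only to illustrate that different split paths yield the same leaf units in different processing orders:

```python
def process_order(block, splits):
    """Recursively collect leaf coding units in processing order.
    `block` is (x, y, w, h); `splits` maps a block to 'V' (vertical
    binary split) or 'H' (horizontal binary split); unmapped blocks
    are leaves. Illustrative sketch, not the patent's exact procedure."""
    x, y, w, h = block
    s = splits.get(block)
    if s == 'V':
        halves = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif s == 'H':
        halves = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:
        return [block]
    out = []
    for half in halves:
        out += process_order(half, splits)
    return out

root = (0, 0, 2, 2)
# Vertical split first, then horizontal splits (order 1217 in FIG. 12):
a = process_order(root, {root: 'V', (0, 0, 1, 2): 'H', (1, 0, 1, 2): 'H'})
# Horizontal split first, then vertical splits:
b = process_order(root, {root: 'H', (0, 0, 2, 1): 'V', (0, 1, 2, 1): 'V'})
print(sorted(a) == sorted(b))  # same four square leaf units...
print(a == b)                  # ...but a different processing order
```

Both paths end in the same four square units, yet the leaves are visited in different orders, which is the point illustrated by FIG. 12.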
  • the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion.
  • a predetermined criterion may be a length of a long side of a coding unit.
• when the length of the long side of the current encoding unit is 1/2^n (n>0) of the length of the long side of the encoding unit before division, it can be determined that the depth of the current encoding unit is increased by n relative to the depth of the encoding unit before division.
  • an encoding unit with an increased depth is expressed as a lower-depth encoding unit.
• the image decoding apparatus 100 may divide the square first encoding unit 1300 to determine the second encoding unit 1302, the third encoding unit 1304, and the like, of lower depths. If the size of the square first encoding unit 1300 is 2Nx2N, the second encoding unit 1302, determined by dividing the width and height of the first encoding unit 1300 by 1/2, may have a size of NxN.
• based on block type information indicating a non-square shape (for example, block type information may be '1: NS_VER', indicating a non-square whose height is longer than its width, or '2: NS_HOR', indicating a non-square whose width is longer than its height),
• the image decoding apparatus 100 may divide the non-square first coding unit 1310 or 1320 to determine the second coding unit 1312 or 1322 of a lower depth, the third encoding unit 1314 or 1324 of a further lower depth, and the like.
• the image decoding apparatus 100 may determine a second coding unit (for example, 1302, 1312, 1322, etc.) by dividing at least one of the width and the height of the first coding unit 1310 of Nx2N size. That is, the image decoding apparatus 100 may determine the second encoding unit 1302 of NxN size or the second encoding unit 1322 of NxN/2 size by dividing the first encoding unit 1310 in the horizontal direction, or may determine the second encoding unit 1312 of N/2xN size by dividing it in the horizontal and vertical directions.
• the image decoding apparatus 100 may determine a second encoding unit (e.g., 1302, 1312, 1322, etc.) by dividing at least one of the width and the height of the 2NxN first encoding unit 1320. That is, the image decoding apparatus 100 may determine the second encoding unit 1302 of NxN size or the second encoding unit 1312 of N/2xN size by dividing the first encoding unit 1320 in the vertical direction, or may determine the second encoding unit 1322 of NxN/2 size by dividing it in the horizontal and vertical directions.
• the image decoding apparatus 100 may determine a third encoding unit (for example, 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the second encoding unit 1302 of NxN size. That is, the image decoding apparatus 100 may determine the third encoding unit 1304 of N/2xN/2 size by dividing the second encoding unit 1302 in the vertical and horizontal directions, or may determine the third encoding unit 1314 of N/4xN/2 size or the third encoding unit 1324 of N/2xN/4 size.
• the image decoding apparatus 100 may determine a third encoding unit (e.g., 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the second encoding unit 1312 of N/2xN size. That is, the image decoding apparatus 100 may determine the third encoding unit 1304 of N/2xN/2 size or the third encoding unit 1324 of N/2xN/4 size by dividing the second encoding unit 1312 in the horizontal direction, or may determine the third encoding unit 1314 of N/4xN/2 size by dividing it in the vertical and horizontal directions.
• the image decoding apparatus 100 may determine a third encoding unit (e.g., 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the second encoding unit 1322 of NxN/2 size. That is, the image decoding apparatus 100 may determine the third encoding unit 1304 of N/2xN/2 size or the third encoding unit 1314 of N/4xN/2 size by dividing the second encoding unit 1322 in the vertical direction, or may determine the third encoding unit 1324 of N/2xN/4 size by dividing it in the vertical and horizontal directions.
• when the depth is determined based on the length of the longest side of an encoding unit, the depth of an encoding unit determined by dividing the first encoding unit 1300 of 2Nx2N size in only the horizontal direction or only the vertical direction may be the same as the depth of the first encoding unit 1300, since the length of its longest side remains 2N.
  • the width and height of the third encoding unit 1314 or 1324 may correspond to one fourth of the first encoding unit 1310 or 1320.
• when the depth of the first encoding unit 1310 or 1320 is D, the depth of the second encoding unit 1312 or 1322, whose width and height are half those of the first encoding unit 1310 or 1320, may be D+1, and the depth of the third encoding unit 1314 or 1324, whose width and height are one fourth those of the first encoding unit 1310 or 1320, may be D+2.
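The long-side depth rule above can be summarized numerically in a small sketch (the function name is illustrative):

```python
from math import log2

def depth_increase(parent_size, child_size):
    """Depth rises by n when the long side shrinks to 1/2**n of the
    parent's long side (sketch of the long-side criterion)."""
    parent_long = max(parent_size)
    child_long = max(child_size)
    return int(log2(parent_long / child_long))

N = 8
D = 0
# Nx2N first unit -> N/2xN second unit -> N/4xN/2 third unit
print(D + depth_increase((N, 2 * N), (N // 2, N)))       # D+1
print(D + depth_increase((N, 2 * N), (N // 4, N // 2)))  # D+2
# Dividing a 2Nx2N unit in only one direction keeps the long side,
# so the depth stays D, as noted above.
print(D + depth_increase((2 * N, 2 * N), (N, 2 * N)))    # D
```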
• FIG. 14 illustrates a depth, and an index (hereinafter referred to as a PID) for distinguishing encoding units, that can be determined according to the shapes and sizes of encoding units, according to an exemplary embodiment.
• the image decoding apparatus 100 may divide the square first encoding unit 1400 to determine various forms of second encoding units. Referring to FIG. 14, the image decoding apparatus 100 may divide the first encoding unit 1400 in at least one of the vertical direction and the horizontal direction according to the division type mode information to determine the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d. That is, the image decoding apparatus 100 may determine the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d based on the division type mode information for the first encoding unit 1400.
• the depths of the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d, which are determined according to the division type mode information for the square first encoding unit 1400, may be determined based on the lengths of their long sides. For example, since the length of one side of the square first encoding unit 1400 is equal to the length of the long side of the non-square second encoding units 1402a, 1402b, 1404a, and 1404b, the depths of the first encoding unit 1400 and the non-square second encoding units 1402a, 1402b, 1404a, and 1404b are equally D.
• the image decoding apparatus 100 may divide the first encoding unit 1410, whose height is longer than its width, in the horizontal direction according to the division type mode information to determine a plurality of second encoding units 1412a and 1412b, or 1414a, 1414b, and 1414c.
• the image decoding apparatus 100 may divide the first encoding unit 1420, whose width is longer than its height, in the vertical direction according to the division type mode information to determine a plurality of second encoding units 1422a and 1422b, or 1424a, 1424b, and 1424c.
• the depths of the second encoding units 1412a, 1412b, 1414a, 1414b, 1414c, 1422a, 1422b, 1424a, 1424b, and 1424c may be determined based on the lengths of their long sides. For example, since the length of one side of the square second encoding units 1412a and 1412b is 1/2 of the length of one side of the non-square first encoding unit 1410 whose height is longer than its width, the depth of the square second encoding units 1412a and 1412b is D+1, one depth lower than the depth D of the non-square first encoding unit 1410.
• the image decoding apparatus 100 may divide the non-square first encoding unit 1410 into an odd number of second encoding units 1414a, 1414b, and 1414c based on the division type mode information.
  • the odd number of second encoding units 1414a, 1414b and 1414c may include non-square second encoding units 1414a and 1414c and a square second encoding unit 1414b.
• since the length of the long side of the non-square second encoding units 1414a and 1414c and the length of one side of the square second encoding unit 1414b are 1/2 of the length of one side of the first encoding unit 1410, the depth of the second encoding units 1414a, 1414b, and 1414c may be D+1, one depth lower than the depth D of the first encoding unit 1410.
• the image decoding apparatus 100 may determine the depths of the encoding units associated with the non-square first encoding unit 1420, whose width is longer than its height, in a manner corresponding to the scheme for determining the depths of the encoding units associated with the first encoding unit 1410.
• the image decoding apparatus 100 may determine whether a unit is divided in a specific division form based on index values for distinguishing the plurality of encoding units divided from the current encoding unit. Referring to FIG. 14, the image decoding apparatus 100 may divide the rectangular first encoding unit 1410, whose height is longer than its width, to determine an even number of encoding units 1412a and 1412b or an odd number of encoding units 1414a, 1414b, and 1414c.
  • the image decoding apparatus 100 may use an index (PID) indicating each coding unit in order to distinguish each of the plurality of coding units.
  • the PID may be obtained at a sample of a predetermined position of each coding unit (e.g., the upper left sample).
  • the image decoding apparatus 100 may determine a coding unit of a predetermined position among the coding units determined by using the index for classifying the coding unit.
• for example, the image decoding apparatus 100 may divide the first encoding unit 1410 into three encoding units 1414a, 1414b, and 1414c.
  • the image decoding apparatus 100 can assign an index to each of the three encoding units 1414a, 1414b, and 1414c.
  • the image decoding apparatus 100 may compare the indexes of the respective encoding units in order to determine the middle encoding unit among the encoding units divided into odd numbers.
• based on the indices of the encoding units, the image decoding apparatus 100 may determine the encoding unit 1414b, having the index corresponding to the middle value among the indices, as the encoding unit at the middle position among the encoding units determined by dividing the first encoding unit 1410.
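Selecting the middle unit by the median index can be sketched as follows (illustrative helper, not the patent's exact procedure):

```python
def middle_unit_by_pid(pids):
    """Pick the PID corresponding to the middle value of an odd-sized
    list of indices, i.e. the middle unit of an odd split (sketch)."""
    assert len(pids) % 2 == 1, "an odd number of coding units is assumed"
    return sorted(pids)[len(pids) // 2]

# Three units 1414a, 1414b, 1414c with PIDs 0, 1, 2:
print(middle_unit_by_pid([0, 1, 2]))  # 1 -> middle unit 1414b
```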
• when determining the indices for distinguishing the divided encoding units, if the encoding units are not all of the same size, the image decoding apparatus 100 may determine the indices based on the size ratio between the encoding units.
• referring to FIG. 14, the encoding unit 1414b generated by dividing the first encoding unit 1410 may have the same width as the other encoding units 1414a and 1414c but a different height, which may be double the height of the encoding units 1414a and 1414c.
• if the index (PID) increases discontinuously, the image decoding apparatus 100 may determine that the current encoding unit is divided into a plurality of encoding units including an encoding unit having a size different from those of the other encoding units.
• when the division type mode information indicates division into an odd number of encoding units, the image decoding apparatus 100 may divide the current encoding unit in such a form that the encoding unit at a predetermined position among the odd number of encoding units (for example, the middle encoding unit) has a size different from that of the other encoding units.
• in this case, the image decoding apparatus 100 may determine the encoding unit having the different size by using the indices (PIDs) of the encoding units.
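A size-ratio-based index assignment might look like the following sketch; the exact rule in the patent may differ, so this only illustrates how a double-sized middle unit makes the PID sequence discontinuous:

```python
def assign_pids(heights):
    """Assign PIDs proportionally to unit size: a unit twice as tall
    as the others advances the index by 2 (illustrative sketch of
    size-ratio-based indexing)."""
    base = min(heights)
    pids, pid = [], 0
    for h in heights:
        pids.append(pid)
        pid += h // base  # a larger unit consumes more index steps
    return pids

# Units 1414a, 1414b (double height), 1414c:
print(assign_pids([4, 8, 4]))  # [0, 1, 3]
```

The jump from 1 to 3 is the discontinuity that reveals the larger middle unit.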
• however, the above-described index, and the size and position of the encoding unit at the predetermined position to be determined, are specific examples for explaining an embodiment and should not be construed as limiting; various indices and various positions and sizes of encoding units may be used.
  • the image decoding apparatus 100 may use a predetermined data unit in which a recursive division of an encoding unit starts.
  • FIG. 15 illustrates that a plurality of coding units are determined according to a plurality of predetermined data units included in a picture according to an embodiment.
• a predetermined data unit may be defined as a data unit in which an encoding unit begins to be recursively divided using the division type mode information. That is, it may correspond to an encoding unit of the highest depth used in the process of determining a plurality of encoding units that divide the current picture.
  • a predetermined data unit is referred to as a reference data unit for convenience of explanation.
  • the reference data unit may represent a predetermined size and shape.
  • the reference encoding unit may comprise samples of MxN.
• M and N may be equal to each other, and may each be an integer expressed as a power of 2. That is, the reference data unit may have a square or non-square shape, and may subsequently be divided into an integer number of encoding units.
  • the image decoding apparatus 100 may use a square-shaped reference encoding unit 1500 or a non-square-shaped reference encoding unit 1502.
• the shape and size of the reference encoding unit may be determined for each of various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like).
• the image decoding apparatus 100 may use an index for identifying the size and shape of the reference encoding unit. That is, for each data unit satisfying a predetermined condition (for example, a data unit having a size equal to or smaller than a slice) among the various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum encoding unit, and the like), the receiving unit 110 may obtain only the index for identifying the size and shape of the reference encoding unit.
• the image decoding apparatus 100 may use at least one reference encoding unit included in one maximum encoding unit. That is, the maximum encoding unit for dividing an image may include at least one reference encoding unit, and encoding units may be determined through a recursive division process of each reference encoding unit. According to an exemplary embodiment, at least one of the width and the height of the maximum encoding unit may be an integer multiple of the width and the height of the reference encoding unit. According to an exemplary embodiment, the size of the reference encoding unit may be a size obtained by dividing the maximum encoding unit n times according to a quadtree structure.
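The quadtree relationship between the maximum encoding unit and the reference encoding unit can be sketched as follows, under the stated assumption that each quadtree division halves both dimensions:

```python
def reference_unit_size(max_unit_size, n):
    """Size of the reference encoding unit after dividing the maximum
    encoding unit n times along a quadtree (each division halves both
    width and height). Illustrative sketch."""
    w, h = max_unit_size
    return (w >> n, h >> n)

print(reference_unit_size((128, 128), 0))  # (128, 128)
print(reference_unit_size((128, 128), 2))  # (32, 32)
```

Conversely, the maximum unit's width and height are then integer multiples (here 2**n) of the reference unit's width and height.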
  • the receiving unit 110 of the image decoding apparatus 100 may obtain information on the size of a processing block from a bitstream for each specific data unit.
• information on the size of a processing block can be obtained from the bitstream for each data unit such as an image, a sequence, a picture, a slice, or a slice segment. That is, the receiving unit 110 may obtain the information on the size of the processing block from the bitstream for each of the plurality of data units, and the image decoding apparatus 100 may determine the size of at least one processing block by using the obtained information; the size of the processing block may be an integer multiple of the size of the reference encoding unit.
  • the image decoding apparatus 100 may obtain information on a determination order of a reference encoding unit from a bitstream for each specific data unit.
  • the receiving unit 110 may acquire information on a determination order of a reference encoding unit from a bitstream for each data unit such as an image, a sequence, a picture, a slice, a slice segment, and a processing block. Since the information on the determination order of the reference encoding unit indicates the reference encoding unit determination order in the processing block, the information on the determination order can be obtained for each specific data unit including an integer number of processing blocks.
• for example, when the reference encoding unit determination order 1604 related to the processing block 1602 is a raster scan order, the reference encoding units included in the processing block 1602 may be determined according to the raster scan order.
• in contrast, when the reference encoding unit determination order 1614 related to the other processing block 1612 is the reverse of the raster scan order, the reference encoding units included in the processing block 1612 may be determined according to the reverse raster scan order.
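The two determination orders can be sketched as follows; the (x, y) coordinates of reference units within a processing block are illustrative:

```python
def raster_order(cols, rows, reverse=False):
    """Raster-scan order of reference units inside a processing block
    (left-to-right, top-to-bottom), or its reverse (sketch)."""
    order = [(x, y) for y in range(rows) for x in range(cols)]
    return order[::-1] if reverse else order

# Processing block 1602: raster order; processing block 1612: reverse.
print(raster_order(2, 2))               # [(0, 0), (1, 0), (0, 1), (1, 1)]
print(raster_order(2, 2, reverse=True)) # [(1, 1), (0, 1), (1, 0), (0, 0)]
```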
  • the image decoding apparatus 100 may decode the determined at least one reference encoding unit according to an embodiment.
  • the image decoding apparatus 100 can decode an image based on the reference encoding unit determined through the above-described embodiment.
  • the method of decoding the reference encoding unit may include various methods of decoding the image.
  • the image decoding apparatus 100 may obtain block type information indicating a type of a current encoding unit or divided mode type information indicating a method of dividing a current encoding unit from a bitstream.
  • the split mode information may be included in a bitstream associated with various data units.
• the video decoding apparatus 100 may use the division type mode information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header.
• furthermore, the image decoding apparatus 100 may obtain, from the bitstream, a syntax element corresponding to the block type information or the division type mode information for each maximum encoding unit, reference encoding unit, or processing block, and may use the obtained syntax element.
  • the image decoding apparatus 100 can determine the division rule of the image.
  • the segmentation rule may be predetermined between the video decoding apparatus 100 and the video encoding apparatus 2200.
  • the image decoding apparatus 100 can determine the division rule of the image based on the information obtained from the bit stream.
• the video decoding apparatus 100 may determine the division rule based on information obtained from at least one of a sequence parameter set, a picture parameter set, a video parameter set, a slice header, and a slice segment header.
  • the video decoding apparatus 100 may determine the division rule differently according to a frame, a slice, a temporal layer, a maximum encoding unit, or an encoding unit.
• the shape of an encoding unit may include square and non-square. If the lengths of the width and height of an encoding unit are the same, the image decoding apparatus 100 may determine the shape of the encoding unit to be square; if the lengths of the width and height are not the same, the image decoding apparatus 100 may determine the shape of the encoding unit to be non-square.
  • the size of the encoding unit may include various sizes of 4x4, 8x4, 4x8, 8x8, 16x4, 16x8, ..., 256x256.
• the size of an encoding unit can be classified according to the length of its long side, the length of its short side, or its area.
• the video decoding apparatus 100 may apply the same division rule to encoding units classified into the same group. For example, the image decoding apparatus 100 may classify encoding units having the same long-side length as having the same size, and may apply the same division rule to encoding units having the same long-side length.
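Grouping by long-side length can be sketched with a hypothetical grouping key (the function name and rule-table shape are illustrative, not from the patent):

```python
def size_class(width, height):
    """Group coding units by long-side length so one division rule can
    be looked up per group (hypothetical grouping key)."""
    return max(width, height)

# 16x4 and 8x16 units share a group (long side 16); 8x4 does not.
print(size_class(16, 4), size_class(8, 16), size_class(8, 4))  # 16 16 8

# One division rule per group, e.g. a table keyed by the group:
division_rules = {16: "allow_binary_and_ternary", 8: "allow_binary_only"}
print(division_rules[size_class(16, 4)] == division_rules[size_class(8, 16)])  # True
```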
  • Determination of the division rule based on the size of the encoding unit may be a predetermined division rule between the image encoding device 2200 and the image decoding device 100.
  • the video decoding apparatus 100 can determine the division rule based on the information obtained from the bit stream.
  • the image decoding apparatus 100 can determine the division rule so that the encoding units generated by different division paths do not have the same block form.
• however, the present invention is not limited thereto, and encoding units generated by different division paths may have the same block shape.
  • the coding units generated by different division paths may have different decoding processing orders. Since the decoding procedure has been described with reference to FIG. 12, a detailed description thereof will be omitted.
• according to an embodiment disclosed herein, a method and an apparatus for encoding or decoding video are described, in which an available block among neighboring blocks neighboring the current block in the same frame as the current block is determined, local luminance compensation information is derived from the available block, a local luminance compensation model is determined based on the local luminance compensation information, and local luminance compensation is applied to the current block using the determined local luminance compensation model.
• the video decoding apparatus 1700 may include a memory 1710 and at least one processor 1720 connected to the memory 1710. The operations of the video decoding apparatus 1700 according to an embodiment may be performed by separate processors or under the control of a central processor. In addition, the memory 1710 of the video decoding apparatus 1700 may store data received from the outside and data generated by the processor, for example, the local luminance compensation information of a neighboring block.
• the processor 1720 of the video decoding apparatus 1700 may determine an available block among neighboring blocks neighboring the current block in the same frame as the current block, derive local luminance compensation information from the available block, determine the local luminance compensation model based on the local luminance compensation information, and apply local luminance compensation to the current block using the determined local luminance compensation model.
• according to an embodiment, the video decoding apparatus 1700 determines an available block among neighboring blocks neighboring the current block in the same frame as the current block, derives local luminance compensation information from the available block, determines the local luminance compensation model based on the local luminance compensation information, and applies local luminance compensation to the current block using the determined local luminance compensation model.
  • FIG. 18 shows a flow diagram of a video decoding method according to one embodiment.
  • the video decoding apparatus 1700 can determine an available block among neighboring blocks neighboring the current block in the same frame as the current block.
  • the neighboring blocks may be adjacent to at least one of the left, upper-left, upper, upper-right, and right sides of the current block.
  • the neighboring blocks may be adjacent to at least one of the left side, the upper left side, and the upper side of the current block.
  • the neighboring blocks may be adjacent to at least one of the upper, upper right, and right side of the current block.
  • the video decoding apparatus 1700 may derive local luminance compensation information from the available blocks.
  • a local luminance compensation model may be determined based on the local luminance compensation information.
  • in step S1870, local luminance compensation may be applied using the local luminance compensation model determined for the current block.
  • a final prediction pixel to which local luminance compensation is applied can be obtained by applying a local luminance compensation model to a sample obtained from motion compensation of the current block.
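The application step above can be sketched as a minimal example. This is not the patent's normative procedure; the function name, the linear-model form y = a·x + b, and the 10-bit clipping range of 0 to 1023 mentioned later in this description are assumptions of this sketch:

```python
def apply_local_luminance_compensation(pred_samples, scale, offset, bit_depth=10):
    """Apply a linear local luminance compensation model y = scale * x + offset
    to motion-compensated prediction samples, clipping to the valid range
    (0..1023 for 10-bit content)."""
    max_val = (1 << bit_depth) - 1
    return [min(max(int(scale * x + offset), 0), max_val) for x in pred_samples]
```

For example, with scale 1.1 and offset 5, a prediction sample of 100 becomes 115; samples that would exceed the 10-bit range are clipped to 1023.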
  • FIGS. 21A and 21B show an example of the modeling process for applying local luminance compensation using predicted pixels and reconstructed pixels.
  • the x-axis represents the pixel values of the predicted pixels of the current block of the current frame, and the y-axis represents the pixel values of the reconstructed pixels of the current block.
  • the predicted pixel and the reconstructed pixel of the current block may be pixels positioned at the edge of the current block (e.g., the left, lower, and upper edges of the current block).
  • the full range of pixel values may be from 0 to 1023.
  • the points shown in the graph for the model generation shown in FIG. 21A represent the relationship between the predicted pixel of the current block of the current frame and the restored pixel of the current block. By performing linear regression on these points, a local luminance compensation model for the current encoding unit can be generated.
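The linear regression over the (predicted, reconstructed) pixel pairs can be illustrated with an ordinary least-squares fit. This is a sketch only: the patent does not specify the exact regression procedure, and the function and variable names here are hypothetical:

```python
def derive_lic_model(pred_pixels, recon_pixels):
    """Fit a linear model y = a*x + b by ordinary least squares over pairs of
    (predicted, reconstructed) pixel values taken from the block edges, as in
    the modeling step of FIG. 21A. Returns (scale a, offset b)."""
    n = len(pred_pixels)
    mean_x = sum(pred_pixels) / n
    mean_y = sum(recon_pixels) / n
    sxx = sum((x - mean_x) ** 2 for x in pred_pixels)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(pred_pixels, recon_pixels))
    a = sxy / sxx if sxx else 1.0  # fall back to identity scale for constant input
    b = mean_y - a * mean_x
    return a, b
```

For points lying exactly on y = 2x + 3, the fit recovers a = 2 and b = 3.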
  • the generated local luminance compensation model can be used for the local luminance compensation of subsequently coded encoding units neighboring the current encoding unit.
  • as a result of the regression of FIG. 21A, the straight-line graph representing the applied local luminance compensation model can be obtained.
  • the stored local luminance compensation information of neighboring coding units can be derived, and the local luminance compensation model for the current coding unit can be determined based on that information.
  • the final predicted pixel of the current coding unit to which the local luminance compensation is applied can be obtained by applying the determined local luminance compensation model to the prediction samples obtained from the motion compensation of the current coding unit.
  • the local luminance compensation information may comprise a local luminance compensation parameter.
  • the local luminance compensation parameter may be calculated and stored in memory using the predicted samples of the current block of the current frame and the reconstructed samples of the current block.
  • because the local luminance compensation parameter is calculated and stored in the memory using the predicted samples and reconstructed samples of the current block of the current frame, the dependency on the reconstructed pixels of blocks neighboring the current block in the current frame and on the reconstructed pixels of the reference frame can be removed.
  • the predicted sample of the current block used for local luminance compensation modeling is the pixel obtained immediately after motion compensation, before any refinement or other tool is applied.
  • FIG. 22 shows a local luminance compensation parameter according to an embodiment.
  • the local luminance compensation model may be a linear model; the scale factors int a[REFP_NUM][N_C][IC_TOTAL-SIDES] and offsets int b[REFP_NUM][N_C][IC_TOTAL-SIDES] can be used as the modeling parameters. The local luminance compensation parameters can be distinguished by the bidirectional reference list index (REFP_NUM) of the current block, the three color components (N_C) of the current block, and the three edge positions of the block (IC_TOTAL-SIDES).
  • REFP_NUM: bidirectional reference list index
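The parameter layout of FIG. 22 can be sketched as nested arrays. The concrete dimension values and the example entries below are illustrative assumptions, not values taken from the figure:

```python
# Hypothetical constants mirroring the array dimensions described for FIG. 22.
REFP_NUM = 2        # two bidirectional reference lists (e.g., L0 / L1)
N_C = 3             # three color components (e.g., Y, Cb, Cr)
IC_TOTAL_SIDES = 3  # three block-edge positions used for modeling

# Scale factors a and offsets b, indexed a[ref_list][component][side].
a = [[[1.0] * IC_TOTAL_SIDES for _ in range(N_C)] for _ in range(REFP_NUM)]
b = [[[0.0] * IC_TOTAL_SIDES for _ in range(N_C)] for _ in range(REFP_NUM)]

# Example: store a luma model for reference list 0 at one edge position.
a[0][0][1] = 1.05
b[0][0][1] = -2.0
```

Each (reference list, color component, edge position) triple thus selects one (scale, offset) pair.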
  • FIG. 24 shows the current block, the neighboring blocks adjacent to the current block, and the locations of the samples used in the calculation of the local luminance compensation parameters, according to an embodiment.
  • an available block that provides local luminance compensation information can be determined among the right neighboring blocks 2440 adjacent to the right side of the current block 2410.
  • in determining an available block, whether the neighboring blocks are available is checked in a predetermined order according to the size or shape of the current block, and an available block among the neighboring blocks can be determined. For example, in deriving the local luminance compensation information from the neighboring blocks, the upper neighboring block 2430 adjacent to the upper side of the current block 2410 may be checked first to determine whether it is an available block, then the left neighboring block 2420 adjacent to the left side of the current block 2410 may be checked, and finally the right neighboring block 2440 adjacent to the right side of the current block 2410 may be checked.
  • alternatively, the right neighboring block 2440 adjacent to the right side of the current block 2410 may be checked first, then the upper neighboring block 2430 adjacent to the upper side of the current block 2410, and finally the left neighboring block 2420 adjacent to the left side of the current block 2410.
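The predetermined checking order, combined with the reference-index skip rule described in this section, can be sketched as follows. The dictionary-based block representation and all names are hypothetical:

```python
def find_available_lic_block(neighbors, current_ref_idx, check_order):
    """Scan neighboring blocks in a predetermined order and return the first one
    that exists, uses the same reference index as the current block, and has
    stored local luminance compensation parameters. Neighbors with a different
    reference index are skipped, since they refer to a different reference frame."""
    for position in check_order:          # e.g., ("top", "left", "right")
        block = neighbors.get(position)
        if block is None:                 # neighbor absent or not yet decoded
            continue
        if block.get("ref_idx") != current_ref_idx:
            continue                      # different reference frame: skip
        if "lic_params" in block:
            return position, block["lic_params"]
    return None                           # no available block found
```

When no neighbor qualifies, the function returns None, corresponding to the case where local luminance compensation is skipped or default parameters are used.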
  • the local luminance compensation information of the current block 2310 in the current frame 2300 can be calculated using the samples of the left side 2320, the lower side 2330, and the right side 2340 of the current block 2310, excluding its upper side; the samples of the remaining region 2350 are not used in the calculation of the local luminance compensation information.
  • the local luminance compensation parameter may be computed at the 4x4 block level, which is the minimum encoding unit, and stored in the memory.
  • the local luminance compensation parameter may be calculated for each encoding unit and stored in a memory; specifically, one parameter value may be stored in the memory for each encoding unit.
  • the local luminance compensation parameter can be stored in a memory on a per-encoding-unit basis.
  • one local luminance compensation parameter value may be stored in the memory for each encoding unit.
  • the local luminance compensation model application may be signaled with a flag at the encoding unit level.
  • the scale factors and offsets can be derived by pre-calculating them and retrieving the stored values from the frame-level parameter map.
  • the local luminance compensation parameter of the position corresponding to the neighboring block neighboring the current block in the parameter storage map can be checked.
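The frame-level parameter map with 4x4 (minimum encoding unit) granularity can be sketched as follows. The class and method names are hypothetical; the sketch only illustrates storing one parameter pair per encoding unit and looking it up at the position of a neighboring block:

```python
MIN_UNIT = 4  # parameters are stored at 4x4 (minimum encoding unit) granularity

class LicParameterMap:
    """Frame-level map storing one (scale, offset) pair per encoding unit,
    replicated over every 4x4 cell the unit covers, so a lookup at any pixel
    position inside a block retrieves that block's parameters."""

    def __init__(self, width, height):
        self.grid = [[None] * (width // MIN_UNIT)
                     for _ in range(height // MIN_UNIT)]

    def store(self, x, y, w, h, params):
        """Record one parameter pair for the encoding unit at (x, y) of size w x h."""
        for row in range(y // MIN_UNIT, (y + h) // MIN_UNIT):
            for col in range(x // MIN_UNIT, (x + w) // MIN_UNIT):
                self.grid[row][col] = params

    def lookup(self, x, y):
        """Retrieve the parameters stored at pixel position (x, y), if any."""
        return self.grid[y // MIN_UNIT][x // MIN_UNIT]
```

A decoder would call `store` after modeling each encoding unit and `lookup` at a neighboring block's position when deriving the current block's model.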
  • when a neighboring block adjacent to the current block has a reference index different from the reference index of the current block, that neighboring block may be skipped, and the neighboring block in the next order may be checked to derive the local luminance compensation information. If the reference index of the current block differs from that of the neighboring block, the two blocks refer to different reference frames, so the current block does not need to refer to the luminance change of the neighboring block.
  • when the current block is in a skip mode or a merge mode, in which information is obtained from a neighboring block, the parameters of the block corresponding to the skip mode or the merge mode may be used.
  • the flag for local luminance compensation may not be transmitted.
  • local luminance compensation may not be applied when no available block provides a valid local luminance compensation parameter set, even though all neighboring blocks adjacent to the current block have been checked.
  • the flag signaling bits can be saved.
  • FIG. 25 shows a local luminance compensation parameter according to another embodiment.
  • if no available block provides a valid local luminance compensation parameter set even though all neighboring blocks adjacent to the current block have been checked, the set of default local luminance compensation parameters shown in FIG. 25 may be applied to the new set of predicted pixels obtained after motion compensation.
  • the default local luminance compensation parameter may include a scale factor int a[REFP_NUM][N_C] and an offset int b[REFP_NUM][N_C].
  • the scale factors and the offsets can be distinguished by two bidirectional reference indices (REFP_NUM) and three color components (N_C).
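The fallback to the default parameter set can be sketched as follows. The default values shown are placeholders rather than the actual values of FIG. 25 (which is not reproduced here), and all names are hypothetical:

```python
# Default parameters, one (scale, offset) pair per bidirectional reference list
# and color component, as in FIG. 25. Values below are illustrative placeholders.
DEFAULT_A = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]   # a[REFP_NUM][N_C]
DEFAULT_B = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]   # b[REFP_NUM][N_C]

def lic_params_or_default(derived, ref_list, component):
    """Use the parameters derived from an available neighboring block if any;
    otherwise fall back to the default set so compensation can still be applied."""
    if derived is not None:
        return derived
    return DEFAULT_A[ref_list][component], DEFAULT_B[ref_list][component]
```

With identity defaults (scale 1, offset 0) the fallback leaves the prediction unchanged, which is one plausible choice of default behavior.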
  • FIG. 26 shows another example of a local luminance compensation model using the predicted pixels and reconstructed pixels of the current frame.
  • a local luminance compensation model can be derived using the prediction frame of the current frame (i.e., the prediction samples themselves): the stored prediction samples of the reconstructed neighboring blocks and the reconstructed samples of those neighboring blocks are used.
  • temporal dependency on the reference frame can be eliminated by using the predicted and reconstructed samples of the current frame.
  • a DC residual value may be added to a predicted sample value of the current block in accordance with a local luminance compensation model (scale factor: 0, offset: DC residual value) derived from the neighboring blocks adjacent to the current block.
  • the derived set of local luminance compensation parameters can be distinguished according to the three color components.
  • no additional calculation is required, since only the DC residual value from the neighboring block is used.
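The DC-residual-only variant above can be sketched as follows. This sketch assumes that the "(scale factor: 0, offset: DC residual value)" notation denotes a model in which no scaling is applied and the DC residual is simply added to each predicted sample; the function name and the 10-bit clipping are assumptions:

```python
def apply_dc_residual_model(pred_samples, dc_residual, bit_depth=10):
    """Simplified local luminance compensation: add the DC residual value taken
    from a neighboring block to each predicted sample of the current block
    (no scaling is performed), then clip to the valid sample range."""
    max_val = (1 << bit_depth) - 1
    return [min(max(x + dc_residual, 0), max_val) for x in pred_samples]
```

Because only a stored DC value is reused, no regression or multiplication is needed at decode time, matching the "no additional calculation" property noted above.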


Abstract

The present invention relates to a method and a device for determining, during a video encoding and decoding process, an available block among neighboring blocks adjacent to a current block in the same frame as the current block, deriving items of local luminance compensation information from the available block, determining a local luminance compensation model on the basis of the items of local luminance compensation information, and applying local luminance compensation to the current block using the determined local luminance compensation model.
PCT/KR2018/015751 2017-12-13 2018-12-12 Procédé et dispositif de décodage vidéo à l'aide d'une compensation de luminance locale, et procédé et dispositif de codage vidéo à l'aide d'une compensation de luminance locale WO2019117613A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020207010874A KR20200088295A (ko) 2017-12-13 2018-12-12 로컬 휘도 보상에 의한 비디오 복호화 방법 및 장치, 로컬 휘도 보상에 의한 비디오 부호화 방법 및 장치

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762598011P 2017-12-13 2017-12-13
US62/598,011 2017-12-13
US201862661976P 2018-04-24 2018-04-24
US62/661,976 2018-04-24

Publications (1)

Publication Number Publication Date
WO2019117613A1 true WO2019117613A1 (fr) 2019-06-20

Family

ID=66819685

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/015751 WO2019117613A1 (fr) 2017-12-13 2018-12-12 Procédé et dispositif de décodage vidéo à l'aide d'une compensation de luminance locale, et procédé et dispositif de codage vidéo à l'aide d'une compensation de luminance locale

Country Status (2)

Country Link
KR (1) KR20200088295A (fr)
WO (1) WO2019117613A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101187580B1 (ko) * 2008-06-02 2012-10-02 삼성전자주식회사 조도 보상 방법 및 그 장치와 이를 이용한 동영상 부호화방법 및 그 장치
KR20130003816A (ko) * 2011-07-01 2013-01-09 에스케이텔레콤 주식회사 영상 부호화 및 복호화 방법과 장치
KR20150034213A (ko) * 2012-07-03 2015-04-02 삼성전자주식회사 추가 파라미터들(변형들)의 전송 없이, 참조 프레임들의 적응적 국부 조명 보정에 기반한, 멀티-뷰 비디오 시퀀스 코딩/디코딩 방법
WO2016204360A1 (fr) * 2015-06-16 2016-12-22 엘지전자 주식회사 Procédé et dispositif de prédiction de bloc basée sur la compensation d'éclairage dans un système de codage d'image
WO2017014412A1 (fr) * 2015-07-20 2017-01-26 엘지전자 주식회사 Procédé et dispositif de prédiction interne dans un système de codage vidéo


Also Published As

Publication number Publication date
KR20200088295A (ko) 2020-07-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18888939

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18888939

Country of ref document: EP

Kind code of ref document: A1