WO2011068360A2 - Method for encoding/decoding high-resolution images and apparatus for performing the same - Google Patents
Method for encoding/decoding high-resolution images and apparatus for performing the same
- Publication number
- WO2011068360A2 (PCT/KR2010/008563)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- size
- prediction
- prediction unit
- picture
- block
- Prior art date
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION)
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/124—Quantisation
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/182—Adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/50—Coding using predictive coding
- H04N19/513—Processing of motion vectors
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/60—Coding using transform coding
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
- G06T9/004—Predictors, e.g. intraframe, interframe coding (G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
Definitions
- The present invention relates to encoding and decoding of images, and more particularly, to an encoding method applicable to high-resolution images, an encoding apparatus performing the same, a decoding method, and a decoding apparatus performing the same.
- In general, an image compression method performs encoding by dividing one picture into a plurality of blocks of predetermined size.
- Inter prediction, which removes temporal redundancy between pictures, and intra prediction, which removes spatial redundancy within a picture, are used to increase compression efficiency.
- A method of encoding an image using inter prediction compresses the image by removing temporal redundancy between pictures; motion-compensated predictive encoding is the representative example.
- Motion-compensated predictive encoding searches at least one reference picture located before or after the picture currently being encoded for a region similar to the block currently being encoded, generates a motion vector (MV), and performs motion compensation using the generated motion vector.
- The difference between the current block and the prediction block obtained by motion compensation is transformed by a discrete cosine transform (DCT), quantized, and then entropy-encoded and transmitted.
- For motion compensation, blocks of various sizes such as 16x16, 8x16, and 8x8 pixels are used, and blocks of 8x8 or 4x4 pixels are used for transformation and quantization.
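The pipeline above (motion-compensated prediction, residual, DCT, quantization, entropy coding) can be sketched for the residual and transform steps. This is a minimal illustration with a naive textbook DCT-II and made-up 2x2 sample blocks, not the patent's implementation:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block, as applied to the residual."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

# illustrative blocks: residual = current block minus motion-compensated prediction
current = [[120, 121], [119, 122]]
prediction = [[118, 120], [120, 120]]
residual = [[c - p for c, p in zip(cr, pr)] for cr, pr in zip(current, prediction)]
coeffs = dct2(residual)  # the coefficients would be quantized and entropy-encoded next
```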
- However, the block sizes used for conventional motion-compensated prediction or for transformation and quantization are not suitable for encoding high-resolution images of HD (High Definition) or higher resolution.
- Likewise, the block sizes used for intra prediction are 4x4, 8x8, or 16x16 pixels.
- The conventional block-based prediction technique described above generally selects and uses the one prediction method, inter or intra, with the better coding efficiency. That is, it removes only one of the two kinds of redundancy in the image to be encoded, temporal or spatial, whichever yields the higher coding efficiency. However, even after one kind of redundancy is removed by inter prediction or intra prediction alone, the coding efficiency does not improve greatly because the other kind of redundancy remains.
- Consequently, the conventional block-based prediction technique cannot achieve efficient coding for an image containing both temporal and spatial redundancy.
- The block-based prediction technique described above is also not well suited to encoding high-resolution images of HD (High Definition) or higher resolution.
- Performing motion prediction and compensation with small blocks can be effective in terms of motion prediction accuracy and bit rate.
- However, when motion prediction and compensation are performed in units of blocks of 16x16 pixels or smaller, the number of blocks in one picture grows sharply with resolution, which increases not only the encoding processing load but also the amount of compressed data, and therefore the transmission bit rate.
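The growth in block count described above is simple arithmetic; the following sketch (hypothetical helper, assuming the picture is tiled with ceiling division) compares the number of 16x16 macroblocks in HD and Ultra HD pictures with the number of 64x64 extended macroblocks:

```python
def block_count(width, height, block):
    """Number of block x block units needed to cover a picture."""
    cols = -(-width // block)   # ceiling division
    rows = -(-height // block)
    return cols * rows

hd  = block_count(1920, 1080, 16)  # 16x16 macroblocks in a 1080p picture
uhd = block_count(3840, 2160, 16)  # four times as many in a 4K picture
big = block_count(3840, 2160, 64)  # far fewer 64x64 extended macroblocks
```

With 64x64 extended macroblocks, the 4K picture needs roughly a sixteenth of the per-block overhead (motion vectors, mode flags) of 16x16 tiling.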
- A method of encoding an image using intra prediction predicts the pixel values of the current block from the pixel values of already-encoded neighboring blocks in the current frame (or picture), namely the upper, left, upper-left, and upper-right blocks, using the correlation between pixels of adjacent blocks, and transmits the prediction error.
- Here, an optimal prediction mode (prediction direction) is selected from various prediction directions (horizontal, vertical, diagonal, average, and so on) according to the characteristics of the image to be encoded.
- When intra prediction encoding is applied to blocks of 4x4 pixels, the most suitable of nine prediction modes (prediction modes 0 to 8) is selected for each 4x4 pixel block, and the selected prediction mode (prediction direction) is encoded in units of 4x4 pixel blocks.
- When intra prediction encoding is applied to blocks of 16x16 pixels, one of four prediction modes (vertical prediction, horizontal prediction, average value prediction, and plane prediction) is selected for each 16x16 pixel block, and the selected prediction mode (prediction direction) is encoded in units of 16x16 pixel blocks.
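Mode selection as described above picks the direction with the lowest cost. A minimal sketch with three simplified H.264-style modes and a sum-of-absolute-differences (SAD) cost; the reference handling and names are illustrative, not the standard's exact procedure:

```python
def predict(mode, top, left, n=4):
    """Simplified intra modes: vertical, horizontal, DC (average)."""
    if mode == "vertical":    # copy the row of reference pixels above
        return [top[:n] for _ in range(n)]
    if mode == "horizontal":  # copy the column of reference pixels to the left
        return [[left[y]] * n for y in range(n)]
    if mode == "dc":          # rounded average of all reference pixels
        dc = (sum(top[:n]) + sum(left[:n]) + n) // (2 * n)
        return [[dc] * n for _ in range(n)]
    raise ValueError(mode)

def best_mode(block, top, left):
    """Pick the mode whose prediction has the lowest SAD against the block."""
    def sad(pred):
        return sum(abs(block[y][x] - pred[y][x])
                   for y in range(4) for x in range(4))
    return min(("vertical", "horizontal", "dc"),
               key=lambda m: sad(predict(m, top, left)))

top = [10, 20, 30, 40]          # hypothetical reference pixels above
left = [10, 10, 10, 10]         # hypothetical reference pixels to the left
block = [[10, 20, 30, 40]] * 4  # vertically uniform content favors "vertical"
```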
- A first object of the present invention is to provide an image encoding and decoding method capable of improving coding efficiency for high-resolution images of HD or higher resolution.
- A second object of the present invention is to provide an image encoding and decoding apparatus capable of improving coding efficiency for high-resolution images of HD or higher resolution.
- A fourth object of the present invention is to provide an intra predictive decoding method and decoding apparatus applicable to high-resolution images of HD (High Definition) or higher resolution.
- A fifth object of the present invention is to provide an image encoding and decoding method capable of improving coding efficiency while maintaining image quality for high-resolution images of HD or higher resolution.
- A sixth object of the present invention is to provide an image encoding and decoding apparatus capable of improving coding efficiency while maintaining image quality for high-resolution images of HD or higher resolution.
- An image encoding method according to an aspect of the present invention for achieving the first object includes: receiving at least one picture to be encoded; determining the size of a block to be encoded based on temporal frequency characteristics between the received pictures; and encoding a block of the determined size.
- A video encoding method according to another aspect of the present invention for achieving the first object is a method of encoding video of HD (High Definition) or higher resolution, and includes: generating a prediction block by performing motion compensation on a prediction unit having an NxN pixel size, where N is a power of two equal to or greater than 32; obtaining a residual value by comparing the prediction unit with the prediction block; and transforming the residual value.
- The prediction unit may have an extended macroblock size.
- Transforming the residual value may include performing a discrete cosine transform (DCT) on the extended macroblock.
- The prediction unit may have an NxN pixel size, where N is a power of two not less than 32 and not more than 128.
- The prediction unit has an NxN pixel size, where N is a power of two, but the size of the prediction unit may be limited to 64x64 pixels or less in consideration of the complexity of the encoder and the decoder.
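The size constraint above (an NxN prediction unit with N a power of two, at least 32, and optionally capped at 64x64 for complexity) can be expressed as a small check; the helper name is hypothetical:

```python
def valid_prediction_unit_size(n, max_size=64):
    """True if n is a power of two with 32 <= n <= max_size.
    The claims allow up to 128; encoder/decoder complexity may cap it at 64."""
    return 32 <= n <= max_size and (n & (n - 1)) == 0

# e.g. 32 and 64 pass; 48 is not a power of two; 128 needs the relaxed cap
ok = [n for n in (16, 32, 48, 64, 128) if valid_prediction_unit_size(n)]
```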
- An image encoding method according to another aspect for achieving the first object includes: receiving at least one picture to be encoded; determining the size of a prediction unit to be encoded based on spatial frequency characteristics of the received pictures, the prediction unit having an NxN pixel size where N is a power of two equal to or greater than 32; and encoding the prediction unit of the determined size.
- A video encoding method according to another aspect for achieving the first object includes: receiving an extended macroblock having an NxN pixel size, where N is a power of two equal to or greater than 32; detecting pixels belonging to an edge in a block adjacent to the received extended macroblock; dividing the extended macroblock into at least one partition based on the detected edge pixels; and performing encoding on a predetermined partition among the partitions.
- A method of decoding an image of HD (High Definition) or higher resolution according to an aspect of the present invention includes: receiving an encoded bit stream; obtaining, from the received bit stream, size information of a prediction unit to be decoded, the prediction unit having an NxN pixel size where N is a power of two equal to or greater than 32; inversely quantizing and inversely transforming the received bit stream to obtain a residual value; generating a prediction block by performing motion compensation on a prediction unit of the size indicated by the obtained size information; and reconstructing the image by adding the generated prediction block and the residual value.
- The prediction unit may have an extended macroblock size.
- Obtaining the residual value may include performing an inverse discrete cosine transform (inverse DCT) on the extended macroblock.
- The prediction unit may have an NxN pixel size, where N is a power of two not less than 32 and not more than 128.
- The prediction unit has an NxN pixel size, where N is a power of two, but the size of the prediction unit may be limited to 64x64 pixels or less in consideration of the complexity of the encoder and the decoder.
- The prediction unit may be a leaf coding unit obtained when a coding unit of variable size is hierarchically divided until the maximum allowable layer level or layer depth is reached.
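The hierarchical division into leaf coding units described above is essentially a quadtree. A minimal sketch, assuming a square coding unit is split into four quadrants until the maximum depth or a minimum size is reached:

```python
def split_coding_unit(size, depth, max_depth, min_size=8):
    """Recursively quadtree-split a coding unit; a unit at the maximum
    allowable depth (or minimum size) becomes a leaf coding unit, which
    serves as the prediction unit."""
    if depth == max_depth or size <= min_size:
        return {"size": size, "depth": depth}  # leaf coding unit
    half = size // 2
    return {"size": size, "depth": depth,
            "children": [split_coding_unit(half, depth + 1, max_depth, min_size)
                         for _ in range(4)]}

# a 64x64 unit split to layer depth 2 yields 16 leaf units of 16x16
tree = split_coding_unit(64, 0, 2)
```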
- The method may further include obtaining partition information of the prediction unit to be decoded from the received bit stream.
- Generating the prediction block by performing motion compensation on the prediction unit of the size indicated by the obtained size information may include dividing the prediction unit into partitions based on the partition information of the prediction unit and performing the motion compensation on each divided partition.
- The partitioning may be performed by an asymmetric partitioning scheme.
- The partitioning may be performed by a geometric partitioning scheme whose partitions have shapes other than squares.
- The partitioning may be performed along an edge direction. Partitioning along an edge direction may include detecting pixels belonging to an edge in a block adjacent to the prediction unit and dividing the prediction unit into at least one partition based on the detected edge pixels.
- The partitioning scheme along the edge direction may also be applied to intra prediction.
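The edge-based partitioning above (detect edge pixels in an adjacent block, then split the unit) can be sketched one-dimensionally. The gradient threshold and the vertical-split rule are illustrative assumptions, not the patent's exact criterion:

```python
def edge_columns(boundary_row, threshold=30):
    """Columns where the pixel row adjacent to the prediction unit changes
    sharply; such pixels are treated as belonging to an edge."""
    return [x for x in range(1, len(boundary_row))
            if abs(boundary_row[x] - boundary_row[x - 1]) > threshold]

def split_at_edge(width, boundary_row):
    """Divide the unit into vertical partitions at the first detected edge,
    or keep it whole when no edge is found."""
    edges = edge_columns(boundary_row)
    if not edges:
        return [(0, width)]
    x = edges[0]
    return [(0, x), (x, width)]

# hypothetical bottom row of the block above a 32-wide unit, edge at column 10
row = [50] * 10 + [200] * 22
```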
- The method may further include reconstructing the image.
- An image encoding apparatus according to an aspect of the present invention for achieving the second object includes: a prediction unit determiner that receives at least one picture to be encoded and determines the size of a prediction unit to be encoded based on temporal frequency characteristics between the received pictures or spatial frequency characteristics of the received pictures; and an encoder that encodes the prediction unit of the determined size.
- An image decoding apparatus for achieving the second object of the present invention includes: an entropy decoder that decodes a received bit stream to generate header information; a motion compensation unit that generates a prediction block by performing motion compensation on a prediction unit based on prediction unit size information obtained from the header information, the prediction unit having an NxN pixel size where N is a power of two equal to or greater than 32; an inverse quantizer that inversely quantizes the received bit stream; an inverse transformer that inversely transforms the dequantized data to obtain a residual value; and an adder that reconstructs the image by adding the residual value and the prediction block.
- The prediction unit may have an extended macroblock size.
- The inverse transform unit may perform an inverse discrete cosine transform (inverse DCT) on the extended macroblock.
- The prediction unit may have an NxN pixel size, where N is a power of two not less than 32 and not more than 128.
- The prediction unit has an NxN pixel size, where N is a power of two, but the size of the prediction unit may be limited to 64x64 pixels or less in consideration of the complexity of the encoder and the decoder.
- The prediction unit may be a leaf coding unit obtained when a coding unit of variable size is hierarchically divided until the maximum allowable layer level or layer depth is reached.
- The motion compensation unit may divide the prediction unit into partitions based on partition information of the prediction unit and perform the motion compensation on each divided partition.
- The partitioning may be performed by an asymmetric partitioning scheme.
- The partitioning may be performed by a geometric partitioning scheme whose partitions have shapes other than squares.
- The partitioning may be performed along an edge direction.
- An image encoding method according to an aspect of the present invention includes: dividing an input image into prediction units by applying at least one of an asymmetric partitioning scheme and a geometric partitioning scheme; performing intra prediction encoding by selectively using one of a plurality of prediction modes for the divided prediction unit; and transforming, quantizing, and entropy-encoding a residual value that is the difference between the prediction unit predicted by the intra prediction and the current prediction unit.
- The pixel values in the asymmetrically partitioned prediction unit may be predicted using pixel values in a block encoded before the prediction unit, along one of the vertical, horizontal, average value, right diagonal, and left diagonal prediction directions.
- A method of decoding an image according to an aspect of the present invention for achieving the fourth object includes: entropy-decoding a received bit stream and inversely quantizing and inversely transforming it to restore a residual value; generating a prediction unit by performing intra prediction using one of a plurality of prediction modes on a prediction unit divided by applying at least one of an asymmetric partitioning scheme and a geometric partitioning scheme; and reconstructing the image by adding the residual value to the prediction unit.
- The pixel values in the asymmetrically partitioned prediction unit may be predicted using pixel values in a block encoded before the prediction unit, along one of the vertical, horizontal, average value, right diagonal, and left diagonal prediction directions.
- The pixel values in the asymmetrically partitioned prediction unit may be predicted using pixel values in a block encoded before the prediction unit, along lines formed at predetermined equal angular intervals over all 360 degrees.
- The pixel values in the asymmetrically partitioned prediction unit may be predicted by performing intra prediction along a line whose angle corresponds to a slope defined by dx and dy information, where dx is the horizontal component and dy is the vertical component of the slope.
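The (dx, dy) slope above defines an angular prediction line. A simplified integer-projection sketch (copying whole reference pixels, without the sub-pel interpolation a real codec would use) shows how each row follows the line back to the top reference pixels:

```python
def angular_predict(n, top_ref, dx, dy):
    """Predict an n x n block along the line with slope (dx, dy): each pixel
    copies the top reference pixel its line passes through (integer
    projection only; no sub-pel interpolation)."""
    pred = [[0] * n for _ in range(n)]
    for y in range(n):
        shift = ((y + 1) * dx) // dy  # horizontal offset after y+1 rows
        for x in range(n):
            idx = min(max(x + shift, 0), len(top_ref) - 1)
            pred[y][x] = top_ref[idx]
    return pred

top = list(range(100, 108))               # references above the block
p = angular_predict(4, top, dx=0, dy=1)   # dx=0 reduces to vertical prediction
q = angular_predict(4, top, dx=1, dy=1)   # 45-degree diagonal line
```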
- The predicted value of the lower rightmost pixel of the prediction unit may be obtained using the vertically and horizontally corresponding pixel values in the left and upper blocks encoded before the prediction unit.
- The predicted value of the lower rightmost pixel of the prediction unit may also be obtained using both the vertically and horizontally corresponding pixel values in the left and upper blocks encoded before the prediction unit and the internal pixel values corresponding to the vertical and horizontal directions within the prediction unit.
- The predicted value of the lower rightmost pixel of the current prediction unit of the N-th picture may be obtained as an average of, or by linear interpolation between, the vertically and horizontally corresponding pixel values in the previously encoded left and upper blocks around the current prediction unit and the vertically and horizontally corresponding pixel values in the previously encoded left and upper blocks around the corresponding prediction unit of the (N-1)-th picture.
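The lower-rightmost pixel prediction above, using the vertically corresponding pixel from the upper block and the horizontally corresponding pixel from the left block, can be sketched with the average variant (linear interpolation is the stated alternative); the helper name and reference rows are illustrative:

```python
def bottom_right_pixel(top_ref, left_ref, n):
    """Predict the lower-rightmost pixel of an n x n prediction unit from the
    vertically corresponding pixel in the upper block and the horizontally
    corresponding pixel in the left block (average variant)."""
    vertical = top_ref[n - 1]     # pixel directly above the last column
    horizontal = left_ref[n - 1]  # pixel directly left of the last row
    return (vertical + horizontal + 1) // 2  # rounded average

# hypothetical reference pixels from the already-encoded upper and left blocks
br = bottom_right_pixel([100] * 8, [60] * 8, 8)
```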
- An image decoding apparatus for achieving the fourth object includes: an inverse quantization and inverse transform unit that entropy-decodes a received bit stream and inversely quantizes and inversely transforms it to restore a residual value; an intra prediction unit that generates a prediction unit by performing intra prediction using one of a plurality of prediction modes on a prediction unit divided by selectively applying at least one of an asymmetric partitioning scheme and a geometric partitioning scheme; and an adder that reconstructs the image by adding the residual value to the prediction unit.
- A video decoding method according to an aspect of the present invention for achieving the fifth object includes: receiving a bit stream in which an intra prediction mode for a current block having a second size in an N-th picture is encoded, the mode having been determined based on a residual value between pixels adjacent to the current block and pixels adjacent to a reference block in an (N-1)-th picture temporally preceding the N-th picture; entropy-decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inversely quantizing and inversely transforming the quantized residual value to obtain the residual value; determining the reference block of the current block in at least one picture using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the residual value with the pixels adjacent to the determined reference block.
- Determining the reference block of the current block having the second size in at least one picture using the motion vector may include determining, using the motion vector, a block having a first size that includes the current block having the second size.
- The current macroblock having the first size may have a size of 32x32 pixels or more, and the current block having the second size may have a size of either 4x4 or 8x8 pixels.
- A video decoding method according to another aspect for achieving the fifth object includes: receiving a bit stream in which an intra prediction mode for the current block is encoded, the mode having been determined based on a residual value between pixels adjacent to the current block having the second size in the N-th picture and pixels adjacent to a reference block in an (N+1)-th picture temporally later than the N-th picture; entropy-decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inversely quantizing and inversely transforming the quantized residual value to obtain the residual value; determining the reference block of the current block using the motion vector and computing the residual value with the pixels adjacent to the determined reference block; and reconstructing the current block by applying the intra prediction mode to the result.
- A video decoding method according to another aspect for achieving the fifth object includes: receiving a bit stream in which an intra prediction mode for the current block is encoded, the mode having been determined based on a residual value derived from a forward residual value between pixels adjacent to the current block having the second size in the N-th picture and pixels adjacent to a reference block in the (N-1)-th picture temporally preceding the N-th picture, and a backward residual value between the pixels adjacent to the current block and pixels adjacent to a reference block in the (N+1)-th picture temporally later than the N-th picture; entropy-decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inversely quantizing and inversely transforming the quantized residual value to obtain the residual value; determining the reference block of the current block having the second size in at least one picture using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the residual value with the pixels adjacent to the determined reference block.
- A video decoding method according to another aspect for achieving the fifth object includes: receiving a bit stream in which an intra prediction mode for the current block is encoded, the mode having been determined based on a residual value derived from a first residual value between pixels adjacent to the current block having the second size in the N-th picture and pixels adjacent to a reference block in the (N-1)-th picture temporally preceding the N-th picture, and a second residual value between adjacent pixels of a reference block in the (N-2)-th picture temporally preceding the (N-1)-th picture; entropy-decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inversely quantizing and inversely transforming the quantized residual value to obtain the residual value; determining the reference block of the current block having the second size in at least one picture using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the residual value with the pixels adjacent to the determined reference block.
- An image decoding apparatus for achieving the sixth object of the present invention includes: an entropy decoder configured to entropy decode a bit stream encoded with an intra prediction mode determined based on a residual value between an adjacent pixel of a current block having a second size in an Nth picture and an adjacent pixel of a reference block in at least one reference picture, and to obtain a motion vector, the intra prediction mode, and a quantized residual value from the entropy-decoded information; and a prediction unit configured to determine a reference block of the current block having the second size using the motion vector and to reconstruct the current block by applying the intra prediction mode to a result of calculating the adjacent pixel of the determined reference block and the residual value.
- The size of a prediction unit to be encoded is set to 32x32 pixels, 64x64 pixels, or 128x128 pixels, and motion prediction, motion compensation, and transform are performed based on the set prediction unit size.
- A prediction unit having a size of 32x32 pixels, 64x64 pixels, or 128x128 pixels is divided into at least one partition based on an edge and is then encoded.
- The size of the prediction unit is increased to 32x32, 64x64, or 128x128 pixels, corresponding to the extended macro block size, for encoding/decoding.
- the encoding / decoding efficiency of a large screen image having a resolution of HD or Ultra HD (Ultra High Definition) or higher can be improved.
- In addition, encoding/decoding efficiency for a large screen can be increased by increasing or reducing the size of the extended macro block used for a pixel area according to its temporal frequency characteristics (the degree of change or motion between the previous and current screens, etc.).
- Encoding efficiency may be improved in encoding a large screen image having a resolution of HD or Ultra HD (Ultra High Definition) level or higher, and encoding noise may be reduced in regions of high flatness and uniformity.
- By applying intra prediction encoding/decoding to an MxN asymmetric pixel block or a pixel block of an arbitrary geometric shape, the intra prediction encoding/decoding method and apparatus can improve the coding efficiency of an image having HD or Ultra HD (Ultra High Definition) or higher resolution.
- A residual value is obtained between an adjacent pixel of a current block having a second size in an Nth picture to be encoded and an adjacent pixel of a reference block having the second size included in at least one of the (N-2)th, (N-1)th, (N+1)th, and (N+2)th reference pictures; the intra prediction mode is determined using the obtained residual value; and the residual value is then transformed, quantized, entropy-encoded, and transmitted. In addition, encoding efficiency may be improved by entropy-encoding and transmitting header information such as block size information and reference picture information.
- The encoding/decoding method described above may be applied to encoding/decoding in units of extended macro blocks having a size of 32x32 pixels or more, thereby increasing the encoding/decoding efficiency of a large screen image having a resolution of Ultra HD (Ultra High Definition) level or higher.
- FIG. 1 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
- FIG. 2 is a conceptual diagram illustrating a recursive coding unit structure according to another embodiment of the present invention.
- FIGS. 3 to 6 are conceptual views illustrating an asymmetric partitioning scheme according to an embodiment of the present invention.
- FIGS. 7 to 9 are conceptual views illustrating a geometrical partitioning scheme according to other embodiments of the present invention.
- FIG. 10 is a conceptual diagram illustrating motion compensation for boundary pixels positioned at boundary lines in the case of geometric partition division.
- FIG. 11 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
- FIG. 12 is a conceptual diagram for explaining a partitioning process illustrated in FIG. 11.
- FIG. 13 is a conceptual diagram illustrating a case where partition partitioning considering an edge is applied to intra prediction.
- FIG. 14 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
- FIG. 15 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
- FIG. 16 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- FIG. 17 is a flowchart illustrating an image decoding method according to another embodiment of the present invention.
- FIG. 18 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
- FIG. 19 is a block diagram illustrating a configuration of an image encoding apparatus according to another embodiment of the present invention.
- FIG. 20 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- FIG. 21 is a block diagram illustrating a configuration of an image decoding apparatus according to another embodiment of the present invention.
- FIG. 22 is a conceptual diagram illustrating an intra prediction encoding method using an asymmetric pixel block according to an embodiment of the present invention.
- FIGS. 23 to 25 are conceptual views illustrating an intra prediction encoding method using an asymmetric pixel block according to another embodiment of the present invention.
- FIG. 26 is a conceptual diagram illustrating an intra prediction encoding method based on linear prediction according to another embodiment of the present invention.
- FIG. 27 is a conceptual diagram illustrating an intra prediction encoding method based on linear prediction according to another embodiment of the present invention.
- FIG. 28 is a block diagram illustrating a configuration of an image encoding apparatus for performing intra prediction encoding according to an embodiment of the present invention.
- FIG. 29 is a flowchart illustrating a method of encoding an image to which intra-prediction encoding is applied according to an embodiment of the present invention.
- FIG. 30 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- FIG. 31 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- FIG. 32 is a flowchart illustrating a video encoding method according to an embodiment of the present invention.
- FIG. 33 is a conceptual diagram illustrating the image encoding method illustrated in FIG. 32.
- FIG. 34 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
- FIG. 35 is a conceptual diagram for explaining an image encoding method illustrated in FIG. 34.
- FIG. 36 is a flowchart illustrating an image encoding method according to another embodiment of the present invention.
- FIG. 37 is a conceptual diagram for explaining an image encoding method illustrated in FIG. 36.
- FIG. 38 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
- FIG. 39 is a conceptual diagram for explaining an image encoding method illustrated in FIG. 38.
- FIG. 40 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- FIG. 41 is a block diagram illustrating a configuration of a video encoding apparatus according to an embodiment of the present invention.
- FIG. 42 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- Prediction unit determiner 1820, 1920 and 2110 Prediction unit divider
- Terms such as first and second may be used to describe various components, but the components should not be limited by these terms. The terms are used only for the purpose of distinguishing one component from another.
- For example, without departing from the scope of the present invention, the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
- FIG. 1 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
- FIG. 1 illustrates a method of determining the size of a macro block according to the temporal frequency characteristics of an image and then performing motion compensation encoding using the macro block having the determined size.
- the encoding apparatus first receives a target frame (or picture) to be encoded (step 110).
- the received encoding target frame may be stored in a buffer, and the buffer may store a predetermined number of frames.
- the buffer may store at least four frames (n-3, n-2, n-1, and n).
- Thereafter, the encoding apparatus analyzes the temporal frequency characteristics of the received frames (or pictures) (step 120). For example, the encoding apparatus may detect the amount of change between the (n-3)th frame (or picture) and the (n-2)th frame (or picture) stored in the buffer, the amount of change between the (n-2)th frame (or picture) and the (n-1)th frame (or picture), and the amount of change between the (n-1)th frame (or picture) and the nth frame (or picture), thereby analyzing the temporal frequency characteristics between the frames (or pictures).
- the encoding apparatus compares the analyzed temporal frequency characteristic with a preset threshold and determines the size of the macro block to be encoded based on the comparison result (step 130).
- Here, the encoding apparatus may determine the size of the macro block based on the amount of change between two temporally adjacent frames (for example, the (n-1)th and nth frames) among the frames stored in the buffer, and may transmit information on the determined macro block size.
- Alternatively, the size of the macro block may be determined based on the change characteristics of a predetermined number of frames (e.g., the (n-3)th, (n-2)th, (n-1)th, and nth frames).
- For example, the encoding apparatus analyzes the temporal frequency characteristics of the (n-1)th frame (or picture) and the nth frame (or picture). If the analyzed temporal frequency characteristic value is less than a preset first threshold, the size of the macro block is determined to be 64x64 pixels; if the analyzed value is greater than or equal to the first threshold and less than a preset second threshold, the size of the macro block is determined to be 32x32 pixels; and if the analyzed value is greater than or equal to the preset second threshold, the size of the macro block is determined to be 16x16 pixels or less.
- Here, the first threshold represents the temporal frequency characteristic value when the amount of change between frames (or pictures) is smaller than at the second threshold.
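The three-way threshold comparison described above can be sketched as follows; this is an illustrative sketch, and the change metric and the threshold values used in the example are assumptions, not values from this disclosure.

```python
def macro_block_size(change, first_threshold, second_threshold):
    """Map a temporal frequency characteristic value (the amount of change
    between two adjacent frames) to a macro block size in pixels."""
    if change < first_threshold:
        return 64   # little inter-frame change: large (extended) macro block
    if change < second_threshold:
        return 32
    return 16       # large inter-frame change: 16x16 pixels or less
```

With illustrative thresholds 0.3 and 0.7, a change value of 0.1 would select a 64x64 extended macro block.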
- an extended macro block is defined as a macro block having a size of 32x32 pixels or more.
- The extended macro block may have a size of 32x32 pixels or more, that is, 64x64 pixels, 128x128 pixels, or more, so as to be suitable for high resolutions of Ultra HD (Ultra High Definition) level or higher.
- For high resolutions of Ultra HD (Ultra High Definition) level or higher, the extended macro block may be limited to a maximum size of 64x64 pixels in consideration of encoder and decoder complexity.
- the size of the macro block to be encoded may have a predetermined value for each picture or for each group of pictures (GOP) based on a result of analyzing the temporal frequency characteristics of the received picture.
- the size of the macro block to be encoded may have a predetermined value for each picture or for each group of pictures (GOP) regardless of a result of analyzing the temporal frequency characteristics of the received picture.
- the encoding apparatus performs encoding in units of the macro block of the determined size (step 140).
- The encoding apparatus performs motion prediction on the current macro block having a size of 64x64 pixels to obtain a motion vector, performs motion compensation using the obtained motion vector to generate a prediction block, transforms and quantizes the residual value that is the difference between the generated prediction block and the current macro block, and then performs entropy encoding and transmits the result. Information on the determined macro block size and information on the motion vector are also entropy-encoded and transmitted.
- The encoding process in units of extended macro blocks may be performed according to the macro block size determined by an encoding control unit (not shown) or a decoding control unit (not shown). It may be applied to all of motion compensation encoding, transform, and quantization, or to at least one of motion compensation encoding, transform, and quantization.
- the above-described encoding processing in units of extended macroblocks may be similarly applied to the following decoding processes of the embodiments of the present invention.
- As described above, the size of the macro block used for encoding is increased when the amount of change between input frames is small, and is reduced when the amount of change between input frames is large, so that coding efficiency can be improved.
- The above image encoding/decoding method according to temporal frequency characteristics may be applied to high-resolution images having a resolution of HD or Ultra HD (Ultra High Definition) level or higher.
- the macro block means an extended macro block or a macro block having a size of less than 32 ⁇ 32 pixels.
- encoding and decoding are performed using a recursive coding unit (CU).
- FIG. 2 is a conceptual diagram illustrating a recursive coding unit structure according to another embodiment of the present invention.
- each coding unit CU has a square shape, and each coding unit CU may have a variable size of 2N ⁇ 2N (unit pixel) size.
- Inter prediction, intra prediction, transform, quantization, and entropy coding may be performed in units of coding units (CUs).
- The coding unit (CU) may include a largest coding unit (LCU) and a smallest coding unit (SCU), and the sizes of the largest coding unit (LCU) and the smallest coding unit (SCU) can be represented by powers of two with a value of 8 or more.
- The recursive structure can be represented through a series of flags. For example, when the flag value of a coding unit CU k having a hierarchical level or hierarchical depth k is 0, coding for the coding unit CU k is performed at the current hierarchical level or hierarchical depth; when the flag value is 1, the coding unit CU k having the current hierarchical level or hierarchical depth k is split into four independent coding units, each having a hierarchical level or hierarchical depth of k+1.
- Here, the split coding unit (CU k+1) has a hierarchical level or hierarchical depth of k+1 and a size of N k+1 X N k+1.
- the coding unit CU k + 1 may be represented as a sub coding unit of the coding unit CU k .
- The coding unit CU k+1 can be processed recursively in this way until its hierarchical level or hierarchical depth reaches the maximum allowable hierarchical level or hierarchical depth.
- When the hierarchical level or hierarchical depth of the coding unit CU k+1 is equal to the maximum allowable hierarchical level or hierarchical depth (4 in the example of FIG. 2), no further splitting is allowed.
- the size of the largest coding unit (LCU) and the size of the minimum coding unit (SCU) may be included in a sequence parameter set (SPS).
- The sequence parameter set (SPS) may include the maximum allowable hierarchical level or hierarchical depth of the largest coding unit (LCU). For example, in the case of FIG. 2, when the maximum allowable hierarchical level or hierarchical depth is 5 and the size of one side of the largest coding unit (LCU) is 128 (unit: pixels), five coding unit sizes are available: 128 X 128 (LCU), 64 X 64, 32 X 32, 16 X 16, and 8 X 8 (SCU). That is, given the size of the largest coding unit (LCU) and the maximum allowable hierarchical level or hierarchical depth, the allowable coding unit sizes are determined.
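The split-flag representation above can be illustrated with a small parser; this is a simplified sketch under assumed conventions (raster scan of the four sub-units, one flag per coding unit above the SCU), not the actual bit stream syntax.

```python
def parse_cu(flags, size, scu=8, pos=0, origin=(0, 0), leaves=None):
    """Recursively parse split flags: flag 1 splits a CU into four
    half-size sub-CUs; flag 0 stops at the current level.  At the SCU
    no flag is read, since further splitting is not allowed.
    Returns the list of leaf CUs as (x, y, size) and the flag count used."""
    if leaves is None:
        leaves = []
    x, y = origin
    if size == scu:
        leaves.append((x, y, size))
        return leaves, pos
    flag, pos = flags[pos], pos + 1
    if flag == 0:
        leaves.append((x, y, size))
        return leaves, pos
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            leaves, pos = parse_cu(flags, half, scu, pos, (x + dx, y + dy), leaves)
    return leaves, pos
```

For a 64x64 coding unit, the flag sequence [1, 0, 0, 0, 0] yields four 32x32 leaf coding units.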
- the size of the coding unit may be limited to a maximum of 64x64 pixels or less in consideration of encoder and decoder complexity in the case of a high resolution having a resolution of Ultra HD (Ultra High Definition) or higher.
- A large coding unit can represent an image region with fewer symbols than when several small blocks are used.
- the codec can be easily optimized for various contents, applications and devices by supporting a maximum coding unit (LCU) having any of various sizes as compared to using fixed size macroblocks. That is, by appropriately selecting the maximum coding unit (LCU) size and the maximum hierarchical level or maximum hierarchical depth, the hierarchical block structure can be further optimized for the target application.
- A multilevel hierarchical structure can be represented very simply using the maximum coding unit (LCU) size, the maximum hierarchical level (or maximum hierarchical depth), and a series of flags.
- The maximum value of the hierarchical level may be an arbitrary value, and may be larger than the value allowed in the existing H.264/AVC coding scheme.
- A size-independent syntax representation can be used to specify all syntax elements in a consistent manner, independent of the size of the coding unit (CU).
- The splitting process for the coding unit (CU) can be specified recursively, and the other syntax elements for a leaf coding unit (the last coding unit at its hierarchical level) can be defined identically regardless of the coding unit size.
- Such a representation is very effective in reducing parsing complexity, and the clarity of the representation can be improved when a large hierarchical level or hierarchical depth is allowed.
- Inter prediction or intra prediction may be performed on the end nodes of the coding unit hierarchical tree without further splitting; such an end coding unit is used as the Prediction Unit (PU), the basic unit of inter prediction or intra prediction.
- Partition splitting is performed on the end coding unit for inter prediction or intra prediction.
- partition partitioning is performed on the prediction unit (PU).
- the prediction unit (PU) refers to a basic unit for inter prediction or intra prediction, and may be an existing macro block unit or sub-macro block unit, and an extended macro block unit or coding unit of 32 ⁇ 32 pixels or more. It can also be a unit.
- Partitioning for the inter prediction or intra prediction may be achieved by asymmetric partitioning, or by geometric partitioning having an arbitrary shape other than square.
- Hereinafter, partitioning schemes according to embodiments of the present invention will be described in detail.
- 3 to 6 are conceptual views illustrating an asymmetric partitioning scheme according to embodiments of the present invention.
- When the size of the prediction unit (PU) for inter prediction or intra prediction is MxM (M is a natural number; the unit is pixels), asymmetric partitioning may be performed in the horizontal direction or in the vertical direction of the coding unit.
- The size of the prediction unit PU is, for example, 64x64 pixels.
- Asymmetric partitioning in the horizontal direction divides the prediction unit into a partition P11a of size 64x16 and a partition P21a of size 64x48, or into a partition P12a of size 64x48 and a partition P22a of size 64x16.
- Asymmetric partitioning in the vertical direction divides the prediction unit into a partition P13a of size 16x64 and a partition P23a of size 48x64, or into a partition P14a of size 48x64 and a partition P24a of size 16x64.
- When the size of the prediction unit PU is 32x32 pixels, asymmetric partitioning in the horizontal direction divides the prediction unit into a partition P11b of size 32x8 and a partition P21b of size 32x24, or into a partition P12b of size 32x24 and a partition P22b of size 32x8.
- Asymmetric partitioning in the vertical direction divides the prediction unit into a partition P13b of size 8x32 and a partition P23b of size 24x32, or into a partition P14b of size 24x32 and a partition P24b of size 8x32.
- When the size of the prediction unit PU is 16x16 pixels, asymmetric partitioning in the horizontal direction divides the prediction unit into a partition P11c of size 16x4 and a partition P21c of size 16x12, or into a 16x12 top partition and a 16x4 bottom partition.
- Asymmetric partitioning in the vertical direction divides the prediction unit into a 4x16 left partition and a 12x16 right partition, or into a 12x16 left partition and a 4x16 right partition.
- When the size of the prediction unit PU is 8x8 pixels, asymmetric partitioning in the horizontal direction divides the prediction unit into a partition P11d of size 8x2 and a partition P21d of size 8x6, or into an 8x6 top partition and an 8x2 bottom partition.
- Asymmetric partitioning in the vertical direction divides the prediction unit into a 2x8 left partition and a 6x8 right partition, or into a 6x8 left partition and a 2x8 right partition.
- FIG. 7 to 9 are conceptual views illustrating a geometrical partitioning scheme according to other embodiments of the present invention.
- FIG. 7 illustrates an embodiment of performing geometric partition partitioning having a shape other than square for the prediction unit PU.
- The boundary line L of the geometric partition for the prediction unit PU may be defined as follows. The prediction unit PU is divided into four quadrants about its center O using the X and Y axes, and a perpendicular is drawn from the center O to the boundary line L. Then, every boundary line in any direction can be specified by the perpendicular distance ρ from the center O to the boundary line L and by the rotation angle θ from the X axis to the perpendicular in the counterclockwise direction.
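Pixel membership in the two regions defined by the boundary line can then be computed from ρ and θ; the coordinate convention (origin at the PU center, Y axis pointing up) and the region labels below are illustrative assumptions.

```python
import math

def geometric_mask(size, rho, theta):
    """Label each pixel of a size x size prediction unit as region 0 or 1.
    The boundary line L satisfies x*cos(theta) + y*sin(theta) = rho, where
    rho is the perpendicular distance from the PU center and theta is the
    counterclockwise angle from the X axis to the perpendicular."""
    c, s = math.cos(theta), math.sin(theta)
    half = (size - 1) / 2.0
    mask = []
    for j in range(size):
        row = []
        for i in range(size):
            x, y = i - half, half - j   # center the pixel coordinates
            row.append(1 if x * c + y * s > rho else 0)
        mask.append(row)
    return mask
```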
- FIG. 8 illustrates another embodiment of performing geometric partition partitioning having a shape other than square for the prediction unit PU.
- The upper-left block in the second quadrant may be partitioned into partition P11b', and the remaining L-shaped block consisting of the first, third, and fourth quadrants may be partitioned into partition P21b'.
- The lower-left block in the third quadrant may be partitioned into partition P12b', and the remaining block consisting of the first, second, and fourth quadrants may be partitioned into partition P22b'.
- The upper-right block in the first quadrant may be partitioned into partition P13b', and the remaining block consisting of the second, third, and fourth quadrants may be partitioned into partition P23b'.
- The lower-right block in the fourth quadrant may be partitioned into partition P14b', and the remaining block consisting of the first, second, and third quadrants may be partitioned into partition P24b'.
- When a moving object is present in an edge block, that is, in the upper-left, lower-left, upper-right, or lower-right block, partitioning into an L-shape as described above can be more efficient for encoding than partitioning into four blocks.
- The partition corresponding to the edge block in which the moving object is located may be selected and used.
- FIG. 9 illustrates another embodiment of performing geometric partition partitioning having a shape other than square for the prediction unit PU.
- Referring to FIG. 9, a prediction unit (PU) for inter prediction or intra prediction may be divided into two different irregular regions (modes 0 and 1) or into rectangular regions of different sizes (modes 2 and 3).
- the parameter 'pos' is used to indicate the position of the partition boundary.
- In modes 0 and 1, 'pos' represents the horizontal distance from the diagonal of the prediction unit PU to the partition boundary.
- In modes 2 and 3, 'pos' represents the distance from the vertical bisector or the horizontal bisector of the prediction unit PU to the partition boundary.
- mode information may be transmitted to the decoder.
- Among the available modes, the mode having the minimum rate-distortion (RD) cost may be used for inter prediction.
- FIG. 10 is a conceptual diagram illustrating motion compensation for boundary pixels positioned at boundary lines in the case of geometric partition division.
- When the prediction unit is divided into region 1 and region 2 by geometric partitioning, it is assumed that the motion vector of region 1 is MV1 and the motion vector of region 2 is MV2.
- A pixel located in region 1 (or region 2) is regarded as a boundary pixel when any one of the top, bottom, left, and right pixels adjacent to it belongs to the other region.
- the boundary pixel A is a boundary pixel belonging to the boundary with the region 2
- the boundary pixel B is a boundary pixel belonging to the boundary with the region 1.
- For pixels that are not boundary pixels, normal motion compensation is performed using the motion vector of the region to which the pixel belongs.
- For boundary pixels, motion compensation is performed using a value obtained by multiplying the motion prediction values derived from the motion vectors MV1 and MV2 of region 1 and region 2 by weights and summing the results.
- For example, a weight of 2/3 is used for the region containing the boundary pixel, and a weight of 1/3 is used for the other region, which does not contain the boundary pixel.
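The weighted blending for a single pixel can be sketched as follows; the function name and scalar interface are illustrative assumptions.

```python
def compensate_pixel(is_boundary, own_pred, other_pred):
    """Motion-compensated value for one pixel of a geometric partition.
    own_pred comes from the motion vector of the pixel's own region
    (MV1 or MV2); other_pred comes from the other region's motion vector."""
    if not is_boundary:
        return own_pred                 # normal motion compensation
    # boundary pixel: weight 2/3 for the own region, 1/3 for the other
    return (2 * own_pred + other_pred) / 3
```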
- FIG. 11 is a flowchart illustrating a video encoding method according to another embodiment of the present invention.
- FIG. 12 is a conceptual diagram illustrating a partitioning process illustrated in FIG. 11.
- FIG. 11 illustrates a method of determining the size of the prediction unit PU through the image encoding method of FIG. 1, dividing the prediction unit PU having the determined size into partitions in consideration of an edge included in the prediction unit PU, and then performing encoding for each partition.
- Hereinafter, an example of using a 32x32 macro block as the prediction unit PU will be described.
- Partition division considering edges can be applied to intra prediction as well as inter prediction; a detailed description thereof will be given later.
- Steps 1110 to 1130 illustrated in FIG. 11 execute the same functions as those of steps 110 to 130 of FIG. 1, and thus descriptions thereof will be omitted.
- The encoding apparatus detects pixels belonging to an edge among the pixels belonging to the blocks adjacent to the current macro block having the determined size (step 1140).
- a method of detecting a pixel belonging to an edge may be performed through various known methods.
- For example, edges may be detected by using an edge detection algorithm such as the Sobel operator, or by using difference values between the current macro block and adjacent neighboring pixels.
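As an illustration of the Sobel option, the gradient magnitude of each interior pixel can be computed as below; this pure-Python sketch (using the |gx| + |gy| approximation of the magnitude) is illustrative, not the detector used by the encoder.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |gx| + |gy| of a 2-D list of luma
    samples via the 3x3 Sobel operator; border pixels are returned as 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Thresholding the returned magnitudes yields the pixels regarded as belonging to an edge.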
- the encoding apparatus divides the current macroblock into partitions using pixels belonging to the detected edge (step 1150).
- Specifically, to divide the current macro block into partitions, the encoding apparatus detects, among the pixels surrounding an edge pixel detected in step 1140, a further pixel belonging to the edge, and divides the macro block using the line connecting the detected edge pixels.
- For example, the encoding apparatus detects pixels 211 and 214 as pixels belonging to an edge among the pixels belonging to the blocks neighboring the current macro block having a size of 32x32 pixels. Subsequently, the encoding apparatus detects pixel 212 as a pixel belonging to the edge among the pixels located around the detected pixel 211, and divides the macro block into partitions using the extension line 213 of the line connecting pixel 211 and pixel 212.
- Likewise, the encoding apparatus detects pixel 215 as a pixel belonging to the edge among the pixels adjacent to the detected pixel 214, and divides the macro block into partitions using the extension line 216 of the line connecting pixel 214 and pixel 215.
- Alternatively, the encoding apparatus may detect pixels belonging to an edge only among the pixels adjacent to the current macro block 210 out of the pixels belonging to the neighboring blocks of the current macro block 210, and then determine the direction of a straight line passing through the detected edge pixels in order to divide the current macro block.
- Here, the direction of the edge straight line passing through the pixels belonging to the edge may be the direction of any one of the intra prediction modes of a 4x4 block according to the H.264/AVC standard, from the vertical mode (mode 0) and the horizontal mode (mode 1) through the diagonal modes up to the vertical-left mode (mode 7) and the horizontal-up mode (mode 8), and the current macro block may be divided along the determined direction. Since straight lines in several directions can pass through the pixels belonging to the edge, the final straight-line direction may be determined in consideration of encoding efficiency.
- Alternatively, the current macro block may be divided according to the mode direction of any one of various intra prediction modes for blocks larger than 4x4 pixels, rather than according to the intra prediction modes of the 4x4 block of the H.264/AVC standard.
- Information (including direction information) about an edge straight line passing through pixels belonging to the edge may be included in partition information and transmitted to the decoder.
- the encoding apparatus performs encoding for each partition (step 1160).
- The encoding apparatus obtains a motion vector by performing motion prediction on each partition within the current macro block having a size of 64x64 or 32x32 pixels, and performs motion compensation using the obtained motion vector to generate a prediction partition.
- The residual value, which is the difference between the generated prediction partition and the corresponding partition of the current macro block, is then transformed and quantized, entropy-encoded, and transmitted.
- Information on the determined macro block size, the partition information, and the motion vector is also entropy-encoded and transmitted.
- the inter prediction using the partition partition considering the edge as described above may be implemented to be performed when the prediction mode using the partition partition considering the edge is activated.
- partition partitioning considering edges may be used not only for inter prediction but also for intra prediction. Application to intra prediction will be described with reference to FIG. 13.
- FIG. 13 is a conceptual diagram illustrating a case where partition partitioning considering an edge is applied to intra prediction.
- Intra prediction using the partition partitioning considering the edge of FIG. 13 may be implemented to be performed when the prediction mode using the partition partitioning considering the edge is activated.
- reference pixels can be estimated along the detected edge direction.
- When the line E is an edge boundary, pixels a and b are pixels located on both sides of the edge boundary E, and the reference pixel to be subjected to intra prediction is p(x, y), p(x, y) can be predicted using the following equation:
- p(x, y) = Wa x a + Wb x b, where Wa = δx - floor(δx) and Wb = ceil(δx) - δx
- δx represents the distance from the X-axis coordinate position of the reference pixel p(x, y) to the position where the edge line E intersects the X-axis
- Wa and Wb are weighting factors
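- As a minimal numeric sketch of this weighted prediction (assuming Wa is the fractional part of δx and Wb its complement, ceil(δx) - δx; the function name and the handling of an integer δx are illustrative, not taken from the specification):

```python
import math

def edge_weighted_prediction(a, b, delta_x):
    """Predict reference pixel p(x, y) from pixels a and b lying on
    either side of edge boundary E, weighted by the distance delta_x
    from p's X coordinate to where the edge line crosses the X axis."""
    wa = delta_x - math.floor(delta_x)   # fractional part of delta_x
    wb = math.ceil(delta_x) - delta_x    # complementary weight
    if wa == 0.0 and wb == 0.0:
        # delta_x is an integer: p coincides with pixel a (assumed)
        return a
    return wa * a + wb * b
```

- Note that when δx is not an integer the two weights sum to one, so the prediction is a linear interpolation between a and b.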
- Information (including direction information) about an edge boundary line passing through pixels belonging to the edge may be included in partition information and transmitted to the decoder.
- FIG. 14 is a flowchart illustrating an image encoding method according to another embodiment of the present invention. FIG. 14 illustrates a method of determining the size of a prediction unit (PU) according to the spatial frequency characteristic of an image and then performing motion compensation encoding using the prediction unit (PU) of the determined size.
- the encoding apparatus first receives a target frame (or picture) to be encoded (step 1410).
- the received encoding target frame may be stored in a buffer, and the buffer may store a predetermined number of frames.
- the buffer may store at least four frames (n-3, n-2, n-1, and n).
- the encoding apparatus analyzes the spatial frequency characteristics of each received frame (or picture) (step 1420). For example, the encoding apparatus may calculate the signal energy of each frame stored in the buffer, and analyze the spatial frequency characteristic of each image by analyzing the relationship between the calculated signal energy and the frequency spectrum.
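- One way to realize such an analysis is sketched below, under the assumption that "signal energy" is measured as the fraction of 2-D FFT energy lying outside a central low-frequency band; the specification does not fix a particular measure, so the function and its low_band parameter are illustrative:

```python
import numpy as np

def high_freq_energy_ratio(frame, low_band=0.25):
    """Fraction of spectral energy outside the central low-frequency
    band of the (shifted) 2-D FFT of a grayscale frame."""
    spec = np.fft.fftshift(np.fft.fft2(frame.astype(np.float64)))
    energy = np.abs(spec) ** 2
    h, w = energy.shape
    ch, cw = int(h * low_band), int(w * low_band)
    # energy inside the low-frequency window around the (shifted) DC bin
    low = energy[h // 2 - ch : h // 2 + ch + 1,
                 w // 2 - cw : w // 2 + cw + 1]
    total = energy.sum()
    return 0.0 if total == 0 else 1.0 - low.sum() / total
```

- A flat frame yields a ratio near 0 (all energy at DC), while a rapidly alternating frame yields a ratio near 1.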
- the encoding apparatus determines the size of the prediction unit (PU) based on the analyzed spatial frequency characteristic (step 1430).
- the size of the prediction unit PU may be determined in units of frames stored in the buffer or in units of a predetermined number of frames.
- For example, when the signal energy is less than a preset third threshold, the encoding apparatus determines the size of the prediction unit (PU) to be 16x16 pixels or less; when the signal energy is greater than or equal to the third threshold and less than a preset fourth threshold, the size of the prediction unit (PU) is determined to be 32x32 pixels; and when the signal energy is greater than or equal to the preset fourth threshold, the size of the prediction unit (PU) is determined to be 64x64 pixels.
- the third threshold indicates a case where the spatial frequency of the image is higher than the fourth threshold.
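- The resulting three-way decision can be sketched as follows (the threshold values passed in are placeholders; only their ordering follows from the description above):

```python
def pu_size_from_energy(signal_energy, third_threshold, fourth_threshold):
    """Map the analyzed spatial-frequency signal energy to a
    prediction-unit (PU) size in pixels (returned as one side length).

    Below the third threshold a small PU (16x16 or less) is chosen;
    between the third and fourth thresholds a 32x32 PU; at or above
    the fourth threshold a 64x64 PU."""
    if signal_energy < third_threshold:
        return 16          # 16x16 pixels or less
    if signal_energy < fourth_threshold:
        return 32          # 32x32 pixels
    return 64              # 64x64 pixels
```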
- As described above, coding efficiency is improved by determining the size of the macroblock (including an extended macroblock) or prediction unit used for encoding according to the temporal frequency characteristic or the spatial frequency characteristic of each received picture. However, encoding/decoding may also be performed using an extended macroblock or a prediction unit according to the resolution (size) of each received picture, independently of the temporal or spatial frequency characteristic. That is, encoding/decoding may be performed using an extended macroblock or a prediction unit for a picture having an HD (High Definition) or Ultra HD (Ultra High Definition) resolution or higher.
- When the size of the prediction unit (PU) is determined through execution of step 1430, the encoding apparatus performs encoding in units of the prediction unit (PU) having the determined size (step 1440).
- the encoding apparatus performs motion prediction on the current prediction unit (PU) having a size of, for example, 64x64 pixels to obtain a motion vector, and performs motion compensation using the obtained motion vector to generate a prediction block.
- the residual value, which is the difference between the generated prediction block and the current prediction unit (PU), is transformed and quantized, and then transmitted after entropy encoding.
- information on the size of the determined prediction unit (PU) and information on the motion vector are also transmitted after entropy encoding.
- As described above, when the image flatness or uniformity of the picture is high, the size of the prediction unit (PU) is set large, to 32x32 pixels or more, and when the image flatness or uniformity of the picture is low (that is, when the spatial frequency is high), the size of the prediction unit (PU) is set small, to 16x16 pixels or less, so that coding efficiency can be improved.
- FIG. 15 is a flowchart illustrating an image encoding method according to another embodiment of the present invention. FIG. 15 illustrates a process in which, after the size of a prediction unit (PU) is determined through the image encoding method illustrated in FIG. 14, the prediction unit (PU) having the determined size is split into partitions in consideration of the edges included therein, and encoding is then performed for each partitioned partition.
- Since steps 1510 to 1530 illustrated in FIG. 15 execute the same functions as steps 1410 to 1430 of FIG. 14, description thereof is omitted.
- Thereafter, the encoding apparatus detects a pixel belonging to an edge among the pixels belonging to the prediction units (PUs) adjacent to the current prediction unit (PU) having the determined size (step 1540).
- the method of detecting the pixel belonging to the edge in operation 1540 may be performed through various known methods.
- For example, the edge may be detected by calculating a difference value between the current prediction unit (PU) and adjacent neighboring pixels, or by using a known edge detection algorithm such as the Sobel algorithm.
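- A minimal Sobel-style detector over a block of neighboring pixels might look like the following sketch (a generic illustration, not the exact procedure of the specification; the threshold is an assumed parameter):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edge_pixels(block, threshold):
    """Return (row, col) positions whose Sobel gradient magnitude
    exceeds the threshold; border pixels are skipped for simplicity."""
    h, w = block.shape
    edges = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = block[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
            gx = (win * SOBEL_X).sum()   # horizontal gradient
            gy = (win * SOBEL_Y).sum()   # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges.append((y, x))
    return edges
```

- For a block containing a vertical step, the detector reports the pixels on both sides of the step.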
- the encoding apparatus divides the current prediction unit PU into partitions using pixels belonging to the detected edge (step 1550).
- Specifically, for partitioning of the current prediction unit (PU), the encoding apparatus detects, among the pixels included in a neighboring block adjacent to the current prediction unit (PU), the pixels that belong to the edge and neighbor the edge pixel detected in step 1540, and divides the partition using a line connecting the detected edge pixel and those neighboring edge pixels.
- Alternatively, the encoding apparatus detects pixels belonging to an edge among only the pixels closest to the current prediction unit (PU) from the pixels belonging to a neighboring block of the current prediction unit (PU), and then determines the direction of a straight line passing through the pixels belonging to the detected edge to split the current prediction unit (PU).
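- Splitting a prediction unit by such a straight line can be sketched as a binary labeling of its pixels (an illustrative helper; the point and direction arguments stand in for the detected edge pixel and the determined line direction):

```python
def split_by_line(size, point, direction):
    """Label each pixel of a size x size PU with 0 or 1 depending on
    which side of the line it lies.  The line passes through `point`
    (y, x) with direction vector `direction` (dy, dx)."""
    py, px = point
    dy, dx = direction
    mask = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            # sign of the cross product decides the side of the line
            cross = dx * (y - py) - dy * (x - px)
            mask[y][x] = 1 if cross > 0 else 0
    return mask
```

- For example, a vertical line through column 2 of a 4x4 unit labels columns 0 and 1 as one partition and columns 2 and 3 as the other.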
- the encoding apparatus performs encoding for each partition (step 1560).
- the encoding apparatus obtains a motion vector by performing motion prediction on each partition partitioned within a current prediction unit (PU) having a size of 64x64 or 32x32 pixels, and performs motion compensation using the obtained motion vector to generate a prediction partition.
- a residual value that is a difference between the generated prediction partition and the partition of the current prediction unit (PU) is transformed, quantized, and then transmitted by performing entropy encoding.
- information on the determined size of the prediction unit (PU), partition information, and a motion vector is also transmitted after entropy encoding.
- Partitioning considering the edges described with reference to FIG. 15 may be used not only for inter prediction but also for intra prediction of FIG. 13.
- FIG. 16 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- a decoding apparatus first receives a bit stream from an encoding apparatus (step 1610).
- the decoding apparatus performs entropy decoding on the received bit stream to obtain current prediction unit (PU) information to be decoded (step 1620).
- the prediction unit (PU) information may include the size of the largest coding unit (LCU), the size of the minimum coding unit (SCU), the maximum allowable layer level or layer depth, and flag information.
- the decoding apparatus obtains a motion vector for motion compensation.
- the size of the prediction unit PU may have a size determined according to a temporal frequency characteristic or a spatial frequency characteristic in the encoding apparatus. For example, a size of 32x32 or 64x64 pixels may be determined.
- Here, the decoding apparatus may receive, from the encoding apparatus, information on the size of the prediction unit (PU) applied by the encoding apparatus, and may perform the inverse transformation and inverse quantization described later according to the size of the prediction unit (PU) applied by the encoding apparatus.
- the decoding apparatus generates a predicted prediction unit (PU) by performing motion compensation using the prediction unit (PU) size information (for example, 32x32 or 64x64 pixels) and the motion vector information obtained as described above, together with the previously reconstructed picture (step 1630).
- the decoding apparatus reconstructs the current prediction unit PU by adding the generated predicted prediction unit PU and the residual value provided from the encoding apparatus (step 1640).
- the decoding apparatus may entropy-decode the bit stream provided from the encoding apparatus and then perform inverse quantization and inverse transformation to obtain the residual value.
- the inverse transformation may be performed in units of a prediction unit (PU) size (for example, 32x32 or 64x64 pixels) obtained in operation 1620.
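- Ignoring the details of the actual transform, the reconstruction of steps 1620 to 1640 reduces to adding the decoded residual to the motion-compensated prediction; the sketch below uses a plain rescaling as a stand-in for real inverse quantization and omits the inverse transform:

```python
def dequantize(levels, qstep):
    """Toy inverse quantization: rescale the quantized levels by the
    quantization step (a stand-in for the real dequantizer)."""
    return [[v * qstep for v in row] for row in levels]

def reconstruct_pu(prediction, quantized_residual, qstep):
    """current PU = predicted PU + inverse-quantized (and, in a real
    codec, inverse-transformed) residual, element by element."""
    residual = dequantize(quantized_residual, qstep)
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]
```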
- FIG. 17 is a flowchart illustrating an image decoding method according to another embodiment of the present invention.
- FIG. 17 shows a process of decoding an image that has been encoded for each partition obtained by dividing, according to an edge, a macroblock whose size was determined according to a temporal frequency characteristic or a spatial frequency characteristic.
- the decoding apparatus receives a bit stream from an encoding apparatus (step 1710).
- the decoding apparatus performs entropy decoding on the received bit stream to obtain current prediction unit (PU) information to be decoded and partition information of the current prediction unit (PU) (step 1720).
- the size of the current prediction unit PU may have a size of 32x32 or 64x64 pixels, for example.
- the decoding apparatus obtains a motion vector for motion compensation.
- the prediction unit (PU) information may include the size of the largest coding unit (LCU), the size of the minimum coding unit (SCU), the maximum allowable layer level or layer depth, and flag information.
- the partition information may include partition information transmitted to the decoder in the case of asymmetric partitioning, geometrical partitioning, and partitioning along an edge direction.
- the decoding apparatus divides the prediction unit PU by using the obtained prediction unit (PU) information and partition information (step 1730).
- the decoding apparatus generates a prediction partition using the partition information, the motion vector information, and the previously reconstructed picture (step 1740), and reconstructs the current partition by adding the generated prediction partition and the residual value provided from the encoding apparatus (step 1750).
- the decoding apparatus may entropy-decode the bit stream provided from the encoding apparatus and then perform inverse quantization and inverse transformation to obtain the residual value.
- the decoding apparatus restores all partitions included in the current block based on the obtained partition information, and then reconstructs the current block by combining the restored partitions (step 1760).
- FIG. 18 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
- the apparatus for encoding an image may largely include a prediction unit determiner 1810 and an encoder 1830, and the encoder 1830 may include a motion predictor 1831, a motion compensator 1833, an intra predictor 1835, a subtractor 1837, a transformer 1839, a quantizer 1841, an entropy encoder 1843, an inverse quantizer 1845, an inverse transformer 1847, an adder 1849, and a frame buffer 1851.
- the function of the prediction unit determiner 1810 may be performed by an encoding controller (not shown) that determines the size of the prediction unit applied to inter prediction, intra prediction, or the like, or may be performed in a separate block outside the encoder as shown in the figure. Hereinafter, a case where the prediction unit determiner 1810 is implemented as a separate block outside the encoder will be described as an example.
- the prediction unit determiner 1810 receives the provided input image, stores it in a buffer (not shown), and analyzes the temporal frequency characteristics of the stored frames.
- the buffer may store a predetermined number of frames.
- the buffer may store at least four frames (n-3, n-2, n-1, and n).
- the prediction unit determiner 1810 detects the amount of change between the (n-3)-th frame (or picture) and the (n-2)-th frame (or picture) stored in the buffer, the amount of change between the (n-2)-th frame (or picture) and the (n-1)-th frame (or picture), and the amount of change between the (n-1)-th frame (or picture) and the n-th frame (or picture) to analyze the temporal frequency characteristic between the frames (or pictures); the analyzed temporal frequency characteristic may be compared with a preset threshold, and the size of the prediction unit to be encoded may be determined based on the comparison result.
- the prediction unit determiner 1810 may determine the size of the prediction unit based on a change amount of two temporally adjacent frames (for example, n-1 and nth frames) among the frames stored in the buffer.
- the size of the prediction unit may be determined based on a change characteristic of a predetermined number of frames (eg, n-3, n-2, n-1, and nth) to reduce overhead for the size information.
- Specifically, the prediction unit determiner 1810 analyzes the temporal frequency characteristic between the (n-1)-th frame (or picture) and the n-th frame (or picture); if the analyzed temporal frequency characteristic value is less than a preset first threshold, the size of the prediction unit is determined to be 64x64 pixels; if the analyzed temporal frequency characteristic value is greater than or equal to the preset first threshold and less than a second threshold, the size of the prediction unit is determined to be 32x32 pixels; and if the analyzed temporal frequency characteristic value is greater than or equal to the preset second threshold, the size of the prediction unit is determined to be 16x16 pixels or less.
- Here, the first threshold represents a temporal frequency characteristic value for which the amount of change between frames (or pictures) is smaller than for the second threshold.
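- The threshold comparison described above can be sketched as follows (the threshold values are placeholders; only the ordering, small change leading to a large prediction unit, comes from the description):

```python
def pu_size_from_temporal_change(change, first_threshold, second_threshold):
    """Map the inter-frame change amount to a prediction-unit size:
    small change (nearly static scene) -> 64x64 pixels,
    moderate change -> 32x32 pixels,
    large change -> 16x16 pixels or less."""
    if change < first_threshold:
        return 64
    if change < second_threshold:
        return 32
    return 16
```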
- the prediction unit determiner 1810 provides the prediction unit information determined for inter prediction or intra prediction to the entropy encoder 1843 as described above, and provides the input image to the encoder 1830 in units of prediction units having the determined size.
- the prediction unit information may include size information of the prediction unit determined for inter prediction or intra prediction.
- the prediction block information may include macroblock size information or extended macroblock size information.
- prediction unit information may be used for inter prediction or intra prediction instead of size information of the macro block (LCU).
- That is, the size information of the prediction unit may include the size of the largest coding unit (LCU), the size of the minimum coding unit (SCU), the maximum allowable layer level or layer depth, and flag information.
- the prediction unit determiner 1810 may determine the size of the prediction unit by analyzing the temporal frequency characteristic of the provided input frame (or picture) as described above, or may determine the size of the prediction unit by analyzing the spatial frequency characteristic of the provided input frame (or picture). For example, when the image flatness or uniformity of the input frame (or picture) is high, the size of the prediction unit is set large, to 32x32 pixels or more, and when the image flatness or uniformity of the frame (or picture) is low (that is, when the spatial frequency is high), the size of the prediction unit is set small, to 16x16 pixels or less.
- the encoder 1830 performs encoding on the prediction unit having the size determined by the prediction unit determiner 1810.
- the motion predictor 1831 generates a motion vector by performing motion prediction on the provided current prediction unit with reference to a previous reference frame that has been encoded and stored in the frame buffer 1851.
- the motion compensator 1833 generates a predicted prediction block or a predicted prediction unit by using the motion vector and the reference frame provided from the motion predictor 1831.
- the intra predictor 1835 performs intra prediction encoding using pixel correlation between blocks.
- the intra prediction unit 1835 performs intra prediction to obtain a prediction block of the current prediction unit by predicting a pixel value from an already encoded pixel value of a block in a current frame (or picture).
- the subtractor 1837 subtracts the predicted prediction unit provided by the motion compensator 1833 from the current prediction unit to generate a residual value, and the transformer 1839 and the quantizer 1841 DCT (Discrete Cosine Transform)-transform and quantize the residual value.
- the transformer 1839 may perform the transform based on the prediction unit size information provided from the prediction unit determiner 1810. For example, the transform may be performed with a 32x32 or 64x64 pixel size.
- the transform unit 1839 may perform transform in a separate transform unit (TU) unit independently of the prediction unit size information provided from the prediction unit determiner 1810.
- the transform unit (TU) size may range from a minimum of 4x4 pixels to a maximum of 64x64 pixels.
- the maximum size of the transform unit (TU) may be 64x64 pixels or more, for example, 128x128 pixels.
- the transform unit size information may be included in the transform unit information and transmitted to the decoder.
- the entropy encoder 1843 generates a bit stream by entropy encoding quantized DCT coefficients and header information such as a motion vector, determined prediction unit information, partition information, and transform unit information.
- the inverse quantization unit 1845 and the inverse transform unit 1847 inverse quantize and inversely convert the quantized data through the quantization unit 1841.
- the adder 1849 adds the inverse transformed data and the predictive prediction unit provided by the motion compensator 1833 to reconstruct the image and provide the image to the frame buffer 1851, and the frame buffer 1851 stores the reconstructed image.
- FIG. 19 is a block diagram illustrating a configuration of an image encoding apparatus according to another embodiment of the present invention.
- an image encoding apparatus may largely include a prediction unit determiner 1910, a prediction unit divider 1920, and an encoder 1930, and the encoder 1930 may include a motion predictor 1931, a motion compensator 1933, an intra predictor 1935, a subtractor 1937, a transformer 1939, a quantizer 1941, an entropy encoder 1943, an inverse quantizer 1945, an inverse transformer 1947, an adder 1949, and a frame buffer 1951.
- the functions of the prediction unit determiner and the prediction unit divider used in the encoding process may be performed by an encoding controller (not shown) that determines the size of the prediction unit applied to inter prediction and intra prediction, or may be performed in separate blocks outside the encoder as shown in the figure. Hereinafter, a case where the prediction unit determiner and the prediction unit divider are implemented as separate blocks outside the encoder will be described as an example.
- Since the prediction unit determiner 1910 performs the same function as the prediction unit determiner 1810 shown in FIG. 18, description thereof is omitted.
- the prediction unit divider 1920 divides the current prediction unit provided from the prediction unit determiner 1910 into partitions based on the edges included in the neighboring blocks of the current prediction unit, and then provides the partitions and the partition information to the encoder 1930.
- the partition information may include partition information in the case of asymmetric partitioning, geometric partitioning, and edge partitioning along an edge direction.
- Specifically, the prediction unit divider 1920 reads the prediction unit adjacent to the current prediction unit provided from the prediction unit determiner 1910 from the frame buffer 1951, detects a pixel belonging to an edge among the pixels belonging to the prediction unit adjacent to the current prediction unit, and divides the current prediction unit into partitions by using the pixel belonging to the detected edge.
- the prediction unit divider 1920 may detect the edge by calculating a difference value between the current prediction unit and adjacent neighboring pixels, or by using a known edge detection algorithm such as the Sobel algorithm.
- the prediction unit dividing unit 1920 selects a pixel belonging to an edge from among pixels included in a neighboring block adjacent to the current prediction unit to divide the current prediction unit. After the detection, the partition may be divided using a line connecting the pixels surrounding the detected edge pixels with the detected edge pixels.
- Alternatively, the prediction unit divider 1920 detects pixels belonging to an edge among only the pixels closest to the current prediction unit from the pixels belonging to the neighboring block of the current prediction unit, and then determines the direction of a straight line passing through the pixels belonging to the detected edge to split the current prediction unit.
- the direction of the straight line passing through the pixels belonging to the edge may be any one of the intra prediction modes of the 4x4 block according to the H.264 standard.
- the prediction unit splitter 1920 divides the current prediction unit into at least one partition, and then provides the partitioned partition to the motion predictor 1931 of the encoder 1930. In addition, the prediction unit splitter 1920 provides partition information of the prediction unit to the entropy encoder 1943.
- the encoder 1930 performs encoding on a partition provided from the prediction unit splitter 1920.
- the motion predictor 1931 generates a motion vector by performing motion prediction on the provided current partition with reference to a previous reference frame that has been encoded and stored in the frame buffer 1951, and the motion compensator 1933 generates a prediction partition using the motion vector and the reference frame provided from the motion predictor 1931.
- the intra predictor 1935 performs intra prediction encoding using pixel correlation between blocks.
- the intra prediction unit 1935 performs intra prediction to obtain a prediction block of the current prediction unit by predicting a pixel value from an already encoded pixel value of the block in the current frame.
- the subtractor 1937 subtracts the prediction partition provided by the motion compensator 1933 from the current partition to generate a residual value, and the transformer 1939 and the quantizer 1941 DCT (Discrete Cosine Transform)-transform and quantize the residual value.
- the entropy encoder 1943 generates a bit stream by entropy encoding quantized DCT coefficients and header information such as a motion vector, determined prediction unit information, prediction unit partition information, or transform unit information.
- the inverse quantizer 1945 and the inverse transformer 1947 inversely quantize and inversely transform the data quantized through the quantizer 1941.
- the adder 1949 adds the inverse transformed data and the prediction partition provided by the motion compensator 1933 to reconstruct the image and provide the image to the frame buffer 1951, and the frame buffer 1951 stores the reconstructed image.
- FIG. 20 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- a decoding apparatus includes an entropy decoder 2031, an inverse quantizer 2033, an inverse transformer 2035, a motion compensator 2037, an intra predictor 2039, a frame buffer 2041, and an adder 2043.
- the entropy decoder 2031 receives the compressed bit stream and performs entropy decoding to generate quantized coefficients.
- the inverse quantization unit 2033 and the inverse transform unit 2035 perform inverse quantization and inverse transformation on the quantized coefficients to restore the residual values.
- the motion compensator 2037 generates a predicted prediction unit by performing motion compensation on a prediction unit having the same size as that of the encoded prediction unit (PU), using the header information decoded from the bit stream by the entropy decoder 2031.
- Here, the decoded header information may include prediction unit size information, and the prediction unit size may be, for example, an extended macroblock size of 32x32, 64x64, or 128x128 pixels.
- the motion compensation unit 2037 may generate a predicted prediction unit by performing motion compensation on the prediction unit having the decoded prediction unit size.
- the intra predictor 2039 performs intra prediction decoding using pixel correlation between blocks.
- the intra predictor 2039 performs intra prediction to obtain a prediction block of the current prediction unit by predicting a pixel value from an already decoded pixel value of a block in the current frame (or picture).
- the adder 2043 adds the residual value provided by the inverse transformer 2035 and the predicted prediction unit provided by the motion compensator 2037 to reconstruct an image and provides the image to the frame buffer 2041, and the frame buffer 2041 stores the reconstructed image.
- FIG. 21 is a block diagram illustrating a configuration of an image decoding apparatus according to another embodiment of the present invention.
- a decoding apparatus may largely include a prediction unit splitter 2110 and a decoder 2130, and the decoder 2130 may include an entropy decoder 2131, an inverse quantizer 2133, an inverse transformer 2135, a motion compensator 2137, an intra predictor 2139, a frame buffer 2141, and an adder 2143.
- the prediction unit splitter 2110 obtains the header information decoded from the bit stream by the entropy decoder 2131, and extracts prediction unit information and partition information from the obtained header information.
- the partition information may be information of a line dividing the prediction unit.
- the partition information may include partition information in the case of asymmetric partitioning, geometrical partitioning, and partitioning along an edge direction.
- the prediction unit dividing unit 2110 divides the prediction unit of the reference frame stored in the frame buffer 2141 into partitions using the extracted partition information, and then provides the divided partitions to the motion compensation unit 2137.
- the function of the prediction unit splitter used in the decoding process may be performed by a decoding controller (not shown) that determines the size of the prediction unit applied to inter prediction and intra prediction, or may be performed in a separate block outside the decoder as shown in the figure. Hereinafter, a case where the prediction unit splitter is implemented as a separate block outside the decoder will be described as an example.
- the motion compensator 2137 performs motion compensation on the partition provided from the prediction unit splitter 2110 using motion vector information included in the decoded header information to generate a predicted partition.
- the inverse quantizer 2133 and the inverse transformer 2135 inversely quantize and inversely transform the coefficients entropy-decoded by the entropy decoder 2131 to generate a residual value, and the adder 2143 reconstructs the image by adding the prediction partition provided from the motion compensator 2137 and the residual value; the reconstructed image is stored in the frame buffer 2141.
- Here, the size of the macroblock to be decoded may be, for example, 32x32, 64x64, or 128x128 pixels, and the prediction unit splitter 2110 may perform partitioning of the macroblock having the size of 32x32, 64x64, or 128x128 pixels based on the partition information extracted from the header information.
- FIG. 22 is a conceptual diagram illustrating an intra prediction encoding method using an asymmetric pixel block according to an embodiment of the present invention.
- FIGS. 23 to 25 are conceptual diagrams illustrating intra prediction encoding methods using asymmetric pixel blocks according to other embodiments of the present invention.
- FIGS. 22 to 25 illustrate examples of intra prediction when the asymmetric partitioning of FIGS. 2 to 6 is used for intra prediction; the present invention is not limited to the cases illustrated in FIGS. 22 to 25.
- the intra prediction encoding method using the asymmetric pixel block according to another embodiment of the present invention can also be applied to the various asymmetric partition divisions shown in FIG. 6.
- FIG. 22 is a diagram for describing prediction modes for performing intra prediction on a partition P11d having a size of 8x2, obtained by performing asymmetric partitioning in the horizontal direction when the size of the prediction unit (PU) is 8x8.
- Referring to FIG. 22, a pixel value in the partition P11d of size 8x2 is predicted using pixel values in previously encoded blocks along the prediction directions of the vertical direction (prediction mode 0), the horizontal direction (prediction mode 1), the average value prediction (prediction mode 2), the right diagonal direction (prediction mode 3), and the left diagonal direction (prediction mode 4).
- In prediction mode 0, a value equal to the pixel value at the corresponding position in the vertical direction in the previously encoded upper block is used as the prediction pixel value in the partition P11d having the size of 8x2.
- In prediction mode 1, a value equal to the pixel value at the corresponding position in the horizontal direction in the previously encoded left block is used as the prediction pixel value in the partition P11d having the size of 8x2.
- In prediction mode 2, the average value of the pixel values in the previously encoded left and upper blocks is used as the prediction pixel value in the partition P11d having the size of 8x2.
- In prediction mode 3, a value equal to the pixel value in the right diagonal direction in the previously encoded upper block is used as the prediction pixel value in the partition P11d having the size of 8x2; for the portion that cannot be covered by the pixels in the upper block of the partition P11d alone, two pixels in the upper right block may be used.
- In prediction mode 4, a value equal to the pixel value in the left diagonal direction in the previously encoded left and upper blocks is used as the prediction pixel value in the partition P11d having the size of 8x2.
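- Prediction modes 0 to 2 for the 8x2 partition can be sketched as follows (a simplified illustration using a reconstructed top row and left column; the diagonal modes 3 and 4 and the upper-right fallback are omitted):

```python
def intra_predict_8x2(top, left, mode):
    """Predict an 8x2 (width 8, height 2) partition.
    top:  8 reconstructed pixels of the upper block's bottom row
    left: 2 reconstructed pixels of the left block's right column
    mode: 0 = vertical, 1 = horizontal, 2 = average (DC) prediction
    Returns the predicted partition as 2 rows of 8 pixels."""
    if mode == 0:                      # copy the pixel directly above
        return [list(top) for _ in range(2)]
    if mode == 1:                      # copy the pixel directly to the left
        return [[left[y]] * 8 for y in range(2)]
    # mode 2: mean of all available top and left reference pixels
    dc = (sum(top) + sum(left)) // (len(top) + len(left))
    return [[dc] * 8 for _ in range(2)]
```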
- FIG. 23 is a diagram for describing prediction modes for performing intra prediction on a partition P21d having a size of 8x6, obtained by performing asymmetric partitioning in the horizontal direction when the size of the prediction unit (PU) is 8x8.
- Referring to FIG. 23, a pixel value in the partition P21d of size 8x6 is predicted using pixel values in previously encoded blocks along the prediction directions of the vertical direction (prediction mode 0), the horizontal direction (prediction mode 1), the average value prediction (prediction mode 2), the right diagonal direction (prediction mode 3), and the left diagonal direction (prediction mode 4).
- In prediction mode 0, a value equal to the pixel value at the corresponding position in the vertical direction in the previously encoded upper block is used as the prediction pixel value in the partition P21d having the size of 8x6.
- In prediction mode 1, a value equal to the pixel value at the corresponding position in the horizontal direction in the previously encoded left block is used as the prediction pixel value in the partition P21d having the size of 8x6.
- In prediction mode 3, a value equal to the pixel value in the right diagonal direction in the previously encoded upper block is used as the prediction pixel value in the partition P21d having the size of 8x6; for the portion that cannot be covered by the pixels in the upper block of the partition P21d alone, six pixels in the upper right block may be used.
- In prediction mode 4, a value equal to the pixel value in the left diagonal direction in the previously encoded left and upper blocks is used as the prediction pixel value in the partition P21d having the size of 8x6.
- FIG. 24 is a diagram for describing prediction modes for performing intra prediction on a partition P11c having a size of 16×4, obtained by asymmetric partitioning in the horizontal direction, when the size of the prediction unit PU is 16×16.
- The pixel values in the 16×4 partition P11c are predicted from the pixel values in previously encoded blocks along the prediction direction of the vertical mode (prediction mode 0), the horizontal mode (prediction mode 1), average value prediction (prediction mode 2), the right diagonal mode (prediction mode 3), or the left diagonal mode (prediction mode 4).
- In prediction mode 0, a value equal to the pixel value at the corresponding position in the vertical direction in the previously encoded upper block is used as the prediction pixel value for the partition P11c having a size of 16×4.
- In prediction mode 1, a value equal to the pixel value at the corresponding position in the horizontal direction in the previously encoded left block is used as the prediction pixel value for the partition P11c having a size of 16×4.
- In prediction mode 2, the average of the pixel values in the previously encoded left and upper blocks is used as the prediction pixel value for the partition P11c having a size of 16×4.
- In prediction mode 3, a value equal to the pixel value in the right diagonal direction in the previously encoded upper block is used as the prediction pixel value for the partition P11c having a size of 16×4.
- For the portion of the partition P11c that extends beyond the pixels of the upper block, four pixels of the upper-right block may be used.
- In prediction mode 4, a value equal to the pixel value in the left diagonal direction in the previously encoded left and upper blocks is used as the prediction pixel value for the partition P11c having a size of 16×4.
- FIG. 25 is a diagram for describing prediction modes for performing intra prediction on a partition P11b having a size of 32×8, obtained by asymmetric partitioning in the horizontal direction, when the size of the prediction unit PU is 32×32, corresponding to the extended macroblock size.
- The pixel values in the 32×8 partition P11b are predicted from the pixel values in previously encoded blocks along the prediction direction of the vertical mode (prediction mode 0), the horizontal mode (prediction mode 1), average value prediction (prediction mode 2), the right diagonal mode (prediction mode 3), or the left diagonal mode (prediction mode 4).
- In prediction mode 0, a value equal to the pixel value at the corresponding position in the vertical direction in the previously encoded upper block is used as the prediction pixel value for the partition P11b having a size of 32×8.
- In prediction mode 1, a value equal to the pixel value at the corresponding position in the horizontal direction in the previously encoded left block is used as the prediction pixel value for the partition P11b having a size of 32×8.
- In prediction mode 3, a value equal to the pixel value in the right diagonal direction in the previously encoded upper block is used as the prediction pixel value for the partition P11b having a size of 32×8.
- For the portion of the partition P11b that extends beyond the pixels of the upper block, eight pixels of the upper-right block may be used.
- In prediction mode 4, a value equal to the pixel value in the left diagonal direction in the previously encoded left and upper blocks is used as the prediction pixel value for the partition P11b having a size of 32×8.
- Intra prediction may also be performed along lines formed at predetermined equal angular intervals (22.5 degrees, 11.25 degrees, etc.) over 360 degrees, using the pixel values in the previously encoded left and upper blocks.
- an arbitrary angle may be specified in advance at the encoder side to perform intra prediction along the line of the designated angle.
- The slope may be defined by dx in the horizontal direction and dy in the vertical direction, and the dx and dy information may be transmitted from the encoder to the decoder; alternatively, predetermined angle information may be transmitted from the encoder to the decoder.
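As a rough sketch of how a decoder could follow a transmitted (dx, dy) slope, the loop below projects each pixel position onto the reference row above the block. The function name and the use of integer division are illustrative assumptions, not the syntax of any standard.

```python
import numpy as np

def angular_predict(top, dx, dy, width, height):
    """Predict a block along a line with slope dx (horizontal) / dy (vertical).

    `top` is the reconstructed row above the block, extended to the right so
    the projection never runs past the available reference samples.
    """
    pred = np.empty((height, width), dtype=top.dtype)
    for y in range(height):
        for x in range(width):
            # walk (y + 1) rows up along the slope to reach the reference row
            ref = x + ((y + 1) * dx) // dy
            pred[y, x] = top[min(ref, len(top) - 1)]
    return pred

top = np.arange(16)                                        # reference row
pred = angular_predict(top, dx=1, dy=1, width=8, height=4) # 45-degree line
```

With dx = 0 the projection degenerates into the plain vertical mode, which is one reason a single (dx, dy) parameterisation is attractive.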
- FIG. 26 is a conceptual diagram illustrating an intra prediction encoding method based on linear prediction according to another embodiment of the present invention.
- When an extended macroblock having a size of 16×16 or more is used, or when the size of the prediction unit increases to 8×8 or more, applying the existing intra prediction modes may make it difficult to reconstruct a smooth image because of the distortion caused by the prediction.
- In this case, a separate linear prediction mode may be defined, and when the linear prediction mode flag is activated, the pixel value at the bottom right of the prediction unit may be transmitted from the encoder to the decoder.
- The pixel values of the rightmost line may be obtained by linear interpolation between the bottom-right pixel 1010 value transmitted from the encoder and the top-right pixel 1001 value.
- The pixel values of the bottom line may be obtained by linear interpolation between the bottom-right pixel 1010 value transmitted from the encoder and the bottom-left pixel 1003 value.
- Alternatively, when the linear prediction mode flag is activated, as shown in FIG. 26, the predicted pixel value of the pixel 1010 at the bottom right of the prediction unit may be obtained by linear interpolation using the corresponding pixel values 1001 and 1003 in the vertical and horizontal directions in the previously encoded left and upper blocks and/or the internal pixel values corresponding to the vertical and horizontal directions in the prediction block. Further, when the linear prediction mode flag is activated, the predicted pixel values of the internal pixels of the prediction unit may be obtained by bilinear interpolation using the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks and/or the internal boundary pixel values corresponding to the vertical and horizontal directions inside the prediction unit.
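One way to realise the interpolation just described is sketched below: the rightmost column and bottom row are first filled by linear interpolation toward the transmitted bottom-right value, and interior pixels are then bilinearly blended. The exact weighting is an assumption of this sketch, not taken from the patent.

```python
import numpy as np

def linear_mode_predict(top, left, bottom_right, width, height):
    """Linear prediction mode sketch: `bottom_right` is the pixel value
    transmitted by the encoder; `top` / `left` are reconstructed neighbours."""
    top, left = np.asarray(top, float), np.asarray(left, float)
    # right column: interpolate from the top-right neighbour down to bottom_right
    right = np.linspace(top[-1], bottom_right, height + 1)[1:]
    # bottom row: interpolate from the bottom-left neighbour across to bottom_right
    bottom = np.linspace(left[-1], bottom_right, width + 1)[1:]
    pred = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            h = left[y] + (right[y] - left[y]) * (x + 1) / width   # horizontal
            v = top[x] + (bottom[x] - top[x]) * (y + 1) / height   # vertical
            pred[y, x] = (h + v) / 2                               # bilinear blend
    return pred

pred = linear_mode_predict(np.full(8, 100), np.full(8, 100), 60, 8, 8)
```

Because the prediction ramps smoothly toward the transmitted corner value, this mode avoids the blocky gradients that purely directional modes can produce on large, smooth prediction units.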
- FIG. 27 is a conceptual diagram illustrating an intra prediction encoding method based on linear prediction according to another embodiment of the present invention.
- When the linear prediction mode flag is activated, as shown in FIG. 27, a reference prediction unit is determined in the (N-1)-th picture, located temporally before the N-th picture, for the current prediction unit having the first size (8×8 in the example of FIG. 27) included in the N-th picture, which is the current picture to be encoded.
- To obtain the prediction pixel value of the pixel at the bottom right of the current prediction unit of the N-th picture, averaging or linear interpolation may be performed using not only the corresponding pixel values in the vertical and horizontal directions in the previously encoded left block and upper block 213 located around the current prediction unit, but also the corresponding pixel values in the vertical and horizontal directions in the previously encoded left block and upper block 233 located around the corresponding prediction unit of the (N-1)-th picture.
- Alternatively, to obtain the prediction pixel value of the bottom-right pixel of the current prediction unit of the N-th picture, averaging or linear interpolation may be performed using the corresponding pixel values in the vertical and horizontal directions in the previously encoded left block and upper block 213 located around the current prediction unit, the internal pixel values corresponding to the vertical and horizontal directions in the current prediction unit of the N-th picture, and the corresponding pixel values in the vertical and horizontal directions in the previously encoded left block and upper block 233 located around the corresponding prediction unit of the (N-1)-th picture.
- As a further alternative, the averaging or linear interpolation may additionally use the internal pixel values corresponding to the vertical and horizontal directions of the bottom-right pixel in the corresponding prediction unit of the (N-1)-th picture.
- Further, the predicted pixel value of an internal pixel of the current prediction unit of the N-th picture may be obtained by performing bilinear interpolation using the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks of the current prediction unit of the N-th picture and/or the internal boundary pixel values corresponding to the vertical and horizontal directions in the current prediction unit of the N-th picture, together with the corresponding pixel values in the vertical and horizontal directions in the previously encoded left and upper blocks of the corresponding prediction unit of the (N-1)-th picture and/or the internal boundary pixel values corresponding to the vertical and horizontal directions in the corresponding prediction unit of the (N-1)-th picture.
- In FIG. 27, intra prediction is performed using the current prediction unit of the N-th picture and the corresponding prediction unit of the (N-1)-th picture, but intra prediction may also be performed using the current prediction unit of the N-th picture and the corresponding prediction unit of the (N+1)-th picture; using the current prediction unit of the N-th picture and the corresponding prediction units of the (N-1)-th and (N+1)-th pictures; or using the current prediction unit of the N-th picture and the corresponding prediction units of the (N-2)-th, (N-1)-th, (N+1)-th, and (N+2)-th pictures.
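Under one reading of FIG. 27, the neighbouring samples of the current prediction unit in picture N are blended with the neighbours of the corresponding prediction unit in picture N-1 before the interpolation step. The simple averaging rule below is an assumption made for illustration.

```python
import numpy as np

def blend_temporal_neighbours(cur_top, cur_left, ref_top, ref_left):
    """Average the neighbours of the current PU (upper block 213 / left block)
    with those of the corresponding PU in picture N-1 (upper block 233 / left
    block), producing the samples later fed to the interpolation step."""
    top = (np.asarray(cur_top, float) + np.asarray(ref_top, float)) / 2
    left = (np.asarray(cur_left, float) + np.asarray(ref_left, float)) / 2
    return top, left

top, left = blend_temporal_neighbours([10, 20], [30, 40], [14, 24], [34, 44])
```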
- The current prediction unit having the second size may have a symmetric square shape of 8×8, 16×16, or 32×32 pixels, or may have an asymmetric shape as described above with reference to FIGS. 2 to 6. It is a matter of course that intra prediction may be performed by applying the above-described embodiments, including those of FIGS. 26 and 27, to the asymmetric shapes described with reference to FIGS. 2 to 6.
- FIG. 28 is a block diagram illustrating a configuration of an image encoding apparatus for performing intra prediction encoding according to an embodiment of the present invention.
- Referring to FIG. 28, the apparatus for encoding an image includes an encoder 2830. The encoder 2830 may include an inter prediction unit 2832, an intra prediction unit 2835, a subtractor 2837, a transform unit 2839, a quantization unit 2841, an entropy encoder 2843, an inverse quantizer 2845, an inverse transformer 2847, an adder 2849, and a frame buffer 2851.
- The inter prediction unit 2832 includes a motion predictor 2831 and a motion compensator 2833.
- the encoder 2830 performs encoding on the input image.
- the input image may be used for inter prediction in the inter prediction unit 2832 or intra prediction in the intra prediction unit 2835 in units of prediction units (PUs).
- the size of the prediction unit applied to the inter prediction or intra prediction may be determined according to the temporal frequency characteristics of the stored frame (or picture) after storing the input image in a buffer (not shown) provided in the encoder.
- For example, the prediction unit determiner 2810 analyzes the temporal frequency characteristics between the (n-1)-th frame (or picture) and the n-th frame (or picture). When the analyzed temporal frequency characteristic value is less than a preset first threshold, the size of the prediction unit is determined to be 64×64 pixels; when the value is greater than or equal to the first threshold and less than a preset second threshold, the size of the prediction unit is determined to be 32×32 pixels; and when the value is greater than or equal to the second threshold, the size of the prediction unit may be determined to be 16×16 pixels or less.
- Here, the first threshold represents a temporal frequency characteristic value for which the amount of change between frames (or pictures) is smaller than for the second threshold.
- Likewise, the size of the prediction unit applied to inter prediction or intra prediction may be determined according to the spatial frequency characteristics of the frame (or picture) after the input image is stored in a buffer (not shown) provided in the encoder. For example, when the image flatness or uniformity of the input frame (or picture) is high, the size of the prediction unit is set large, to 32×32 pixels or more; when the image flatness or uniformity of the frame (or picture) is low (that is, when the spatial frequency is high), the size of the prediction unit is set small, to 16×16 pixels or less.
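The temporal-threshold rule above can be stated compactly; the same pattern applies to the spatial-flatness rule. The threshold values themselves are encoder tuning parameters, assumed here purely for illustration.

```python
def choose_prediction_unit_size(temporal_char, t1, t2):
    """Map a temporal-frequency characteristic value to a PU size (in pixels).

    t1 < t2 are the preset first and second thresholds: small inter-picture
    change allows a large prediction unit, high change forces a small one.
    """
    if temporal_char < t1:
        return 64          # 64x64: very little change between pictures
    if temporal_char < t2:
        return 32          # 32x32: moderate change
    return 16              # 16x16 or less: high temporal activity

size = choose_prediction_unit_size(0.1, t1=0.5, t2=1.5)
```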
- The operation of determining the size of the prediction unit may be performed by an encoding controller (not shown) that receives the input image, or by a separate prediction unit determiner (not shown) that receives the input image.
- The size of the prediction unit may be 16×16 pixels or less, 32×32 pixels, or 64×64 pixels.
- The prediction unit information including the prediction unit size determined for inter prediction or intra prediction is provided to the entropy encoder 2843, and the input image is provided to the encoder 2830 in units of prediction units having the determined size.
- The prediction unit information may include macroblock size information or extended macroblock size information.
- the extended macroblock size may be 32x32 pixels or more, and may include, for example, 32x32 pixels, 64x64 pixels, or 128x128 pixels.
- Also, instead of the macroblock size information, the prediction unit information may include the size information of the largest coding unit (LCU) to be used for inter prediction or intra prediction, that is, the size information of the prediction unit; further, the prediction unit information may additionally include the size of the largest coding unit (LCU), the size of the smallest coding unit (SCU), the maximum allowable hierarchical level or hierarchical depth, and flag information.
- the encoder 2830 performs encoding on the prediction unit having the determined size.
- Specifically, the inter prediction unit 2832 divides the provided prediction unit to be currently encoded using a partitioning method such as asymmetric partitioning or geometric partitioning, and estimates motion in units of the partitioned blocks to generate motion vectors.
- The motion predictor 2831 divides the provided current prediction unit using the aforementioned various partitioning methods and, for each partitioned block, searches at least one reference picture located before and/or after the picture currently being encoded (a picture whose encoding has been completed and which is stored in the frame buffer 2851) for a region similar to the partitioned block currently being encoded, thereby generating a motion vector on a block-by-block basis.
- The size of the block used for motion estimation may vary, and when the asymmetric partitioning and geometric partitioning according to the embodiments of the present invention are applied, the shape of the block may be not only the conventional square shape but also, as shown in FIGS. 2 through 9, an asymmetric shape such as a rectangle, or a geometric shape such as an 'ㄱ' shape or a triangle.
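The 1:3 horizontal split used by the partitions discussed earlier (8×2/8×6, 16×4/16×12, 32×8/32×24) can be generated as follows; the ratio parameter is an illustrative assumption, since the patent also allows other asymmetric and geometric splits.

```python
def asymmetric_horizontal_partitions(pu_size, ratio=0.25):
    """Split a square PU of pu_size x pu_size into two horizontal partitions
    of unequal height, e.g. 32 -> (32x8, 32x24) as in partition P11b."""
    h1 = int(pu_size * ratio)
    return [(pu_size, h1), (pu_size, pu_size - h1)]

parts = asymmetric_horizontal_partitions(32)
```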
- the motion compensator 2833 generates a prediction block (or predicted prediction unit) obtained by performing motion compensation using the motion vector generated from the motion predictor 2831 and the reference picture.
- the inter prediction unit 2832 performs the above-described block merging to obtain a motion parameter for each merged block.
- the block-specific motion parameters merged by performing the above-described block merging are transmitted to the decoder.
- the intra predictor 2835 may perform intra prediction encoding using pixel correlation between blocks.
- The intra prediction unit 2835 performs intra prediction, in which the prediction block of the current prediction unit is obtained by predicting pixel values from the already encoded pixel values of blocks in the current frame (or picture) according to the various embodiments described with reference to FIGS. 22 to 27.
- The subtractor 2837 subtracts the prediction block (or predicted prediction unit) provided by the motion compensator 2833 from the current block (or current prediction unit) to generate a residual, and the transform unit 2839 and the quantizer 2841 apply a discrete cosine transform (DCT) to the residual and quantize the transformed coefficients.
- the transform unit 2839 may perform the transformation based on the prediction unit size information, and for example, may perform the transformation to a 32x32 or 64x64 pixel size.
- the transform unit 2839 may perform transform in a separate transform unit (TU) unit independently of the prediction unit size information provided from the prediction unit determiner 2810.
- The transform unit (TU) size may range from a minimum of 4×4 pixels to a maximum of 64×64 pixels.
- Alternatively, the maximum size of the transform unit (TU) may be larger than 64×64 pixels, for example 128×128 pixels.
- the transform unit size information may be included in the transform unit information and transmitted to the decoder.
- the entropy encoder 2843 entropy encodes header information such as quantized DCT coefficients, motion vectors, determined prediction unit information, partition information, and transform unit information to generate a bit stream.
- The inverse quantizer 2845 and the inverse transformer 2847 inverse-quantize and inverse-transform the data quantized by the quantizer 2841.
- the adder 2849 adds the inverse transformed data and the predictive prediction unit provided by the motion compensator 2833 to reconstruct an image and provide the image to the frame buffer 2851, and the frame buffer 2851 stores the reconstructed image.
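The subtract/quantise forward path and the inverse-quantise/add reconstruction path around the frame buffer can be miniaturised as below. The transform is omitted (treated as identity) to keep the sketch short; a real encoder applies a DCT between the subtractor and the quantiser.

```python
import numpy as np

def encode_block(current, predicted, q_step=8):
    """Forward path: subtractor 2837 then quantiser 2841 (transform omitted)."""
    residual = current.astype(int) - predicted.astype(int)
    return np.round(residual / q_step).astype(int)

def reconstruct_block(levels, predicted, q_step=8):
    """Inverse path: inverse quantiser 2845 then adder 2849; the result is
    what the frame buffer 2851 would store for future prediction."""
    return levels * q_step + predicted.astype(int)

current = np.full((4, 4), 100)
predicted = np.full((4, 4), 92)
levels = encode_block(current, predicted)
recon = reconstruct_block(levels, predicted)
```

Keeping the reconstruction loop inside the encoder guarantees that encoder and decoder predict from identical reference pixels despite quantisation loss.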
- FIG. 29 is a flowchart illustrating a method of encoding an image to which intra prediction encoding is applied according to an embodiment of the present invention.
- Referring to FIG. 29, when an image is input, a prediction unit for inter prediction or intra prediction of the input image is partitioned using the aforementioned asymmetric or geometric partitioning methods (step 1403).
- When the intra prediction mode is activated, intra prediction is performed on the partitioned asymmetric or geometric block by applying the intra prediction methods described with reference to FIGS. 22 to 27 (step 1405).
- Alternatively, when the inter prediction mode is activated, for each partitioned block, at least one reference picture located before and/or after the picture currently being encoded (a picture whose encoding has been completed and which is stored in the frame buffer 2851) is searched for a region similar to the partitioned block currently being encoded to generate a motion vector on a block-by-block basis, and a prediction block (or predicted prediction unit) is generated by performing motion compensation using the generated motion vector and the reference picture.
- Thereafter, the encoding apparatus obtains the difference between the current prediction unit and the predicted prediction unit (obtained through intra prediction or inter prediction) to generate a residual, and then transforms and quantizes the generated residual (step 1407).
- The encoding apparatus then entropy-encodes header information, such as the quantized DCT coefficients and the motion parameters, to generate a bit stream.
- FIG. 30 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- Referring to FIG. 30, the decoding apparatus includes an entropy decoder 731, an inverse quantizer 733, an inverse transformer 735, a motion compensator 737, an intra predictor 739, a frame buffer 741, and an adder 743.
- the entropy decoder 731 receives the compressed bit stream and performs entropy decoding to generate quantized coefficients.
- the inverse quantization unit 733 and the inverse transform unit 735 restore the residual values by performing inverse quantization and inverse transformation on the quantized coefficients.
- The header information decoded by the entropy decoder 731 may include prediction unit size information, and the prediction unit size may be, for example, 16×16 pixels, or an extended macroblock size of 32×32, 64×64, or 128×128 pixels.
- the decoded header information may include a motion parameter for motion compensation and prediction.
- the motion parameter may include a motion parameter transmitted for each block merged by block merging methods according to embodiments of the present invention.
- the decoded header information may include a flag indicating whether the linear prediction mode is activated.
- the decoded header information may include prediction mode information for each prediction unit of the asymmetric type described above.
- The motion compensator 737 performs motion compensation, using the motion parameters and the header information decoded from the bit stream by the entropy decoder 731, on a prediction unit having the same size as the prediction unit used for encoding, thereby generating a predicted prediction unit.
- the motion compensator 737 generates a predicted prediction unit by performing motion compensation using the motion parameters transmitted for each block merged by the block merging methods according to the embodiments of the present invention.
- The intra predictor 739 performs intra prediction decoding using pixel correlation between blocks.
- the intra predictor 739 may obtain the predicted pixel value of the current prediction unit by applying the intra prediction encoding method of FIGS. 22 to 27.
- the adder 743 reconstructs an image by adding the residual value provided by the inverse transformer 735 and the predicted prediction unit provided by the motion compensator 737 or the intra predictor 739 to provide the frame buffer 741.
- the frame buffer 741 stores the restored image.
- FIG. 31 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- a decoding apparatus first receives a bit stream from an encoding apparatus (step 3101).
- Data decoded through entropy decoding includes a residual indicating a difference between the current prediction unit and the predicted prediction unit.
- the header information decoded through entropy decoding may include prediction unit information, motion parameters for motion compensation and prediction, a flag indicating whether a linear prediction mode is activated, and prediction mode information for each prediction unit in an asymmetric form.
- the prediction unit information may include prediction unit size information.
- In addition, the prediction unit (PU) information may include the size of the largest coding unit (LCU), the size of the smallest coding unit (SCU), the maximum allowable hierarchical level or hierarchical depth, and flag information.
- A decoding control unit (not shown) may receive from the encoding apparatus the information on the size of the prediction unit (PU) applied by the encoding apparatus, and perform motion-compensated decoding, intra prediction decoding, inverse transform, or inverse quantization according to the applied prediction unit size.
- the decoding apparatus inverse quantizes and inversely transforms the entropy decoded residual value (step 3105).
- The inverse transform process may be performed in units of the prediction unit size (e.g., 32×32 or 64×64 pixels).
- The decoding apparatus generates the predicted prediction unit by applying the inter prediction or intra prediction methods for prediction units of the various asymmetric or geometric shapes described above with reference to FIGS. 22 to 27 (step 3107).
- the decoder reconstructs the image by adding an inverse quantized and inverse transformed residual value and a prediction unit predicted through the inter prediction or intra prediction.
- FIG. 32 is a flowchart illustrating a video encoding method according to an embodiment of the present invention, and FIG. 33 is a conceptual diagram illustrating the video encoding method shown in FIG. 32.
- Referring to FIGS. 32 and 33, the apparatus for encoding an image determines a reference macroblock for a current macroblock having a first size, included in the N-th picture that is the current picture to be encoded, in the (N-1)-th picture located temporally before the N-th picture, and then generates a motion vector (step 3210).
- The current macroblock having the first size may be a macroblock having a size of 16×16 pixels or less, or an extended macroblock having a size of 32×32 or 64×64 pixels or more.
- The extended macroblock may have a size of 32×32 pixels or more, that is, 64×64 pixels, 128×128 pixels, or more, so as to be suitable for high resolutions of ultra high definition (ultra HD) or beyond.
- The apparatus for encoding an image may divide the current macroblock having the first size into a plurality of current blocks having a second size and perform inter prediction and intra prediction for each divided current block.
- The apparatus first obtains the difference between an adjacent pixel of a current block having the second size in the current macroblock and the corresponding adjacent pixel of a reference block at the position corresponding to the current block in the reference macroblock of the (N-1)-th picture, thereby obtaining a residual value between the adjacent pixels (step 3220).
- The current block having the second size may be, for example, 4×4 or 8×8 pixels, and may be determined according to the size of the current macroblock.
- Thereafter, the image encoding apparatus determines the intra prediction mode of the current block using the residual values between the adjacent pixels obtained in step 3220 (step 3230).
- Here, the video encoding apparatus may determine, as the intra prediction mode, one of the nine intra prediction modes of a 4×4 block according to the H.264/AVC standard: the vertical mode (mode 0), the horizontal mode (mode 1), the average value (DC) mode (mode 2), the diagonal down-left mode (mode 3), the diagonal down-right mode (mode 4), the vertical-right mode (mode 5), the horizontal-down mode (mode 6), the vertical-left mode (mode 7), and the horizontal-up mode (mode 8).
- The intra prediction mode may be determined in consideration of encoding efficiency.
- Alternatively, instead of the intra prediction modes of a 4×4 block according to the H.264/AVC standard described above, one of the various intra prediction modes for a block larger than 4×4 pixels may be determined as the intra prediction mode.
- Referring to FIG. 33, the apparatus for encoding an image calculates the difference between the adjacent pixels 2613 of the current block 2611 having the second size in the N-th picture 2610 and the adjacent pixels 2633 of the reference block 2651 having the second size in the (N-1)-th picture 2630 to obtain the residual value between each pair of corresponding pixels, applies various intra prediction modes to the obtained residual values, and then determines the optimal intra prediction mode in consideration of the encoding efficiency of each result.
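Steps 3220 and 3230 can be sketched as follows: form the residual between the neighbours of the current block (2613) and those of the reference block (2633), then pick the mode with the smallest cost. The per-mode cost values are supplied externally here; in the apparatus, the criterion is the encoding efficiency of each candidate mode.

```python
import numpy as np

def adjacent_pixel_residual(cur_top, cur_left, ref_top, ref_left):
    """Step 3220: difference between the neighbours of the current block and
    the corresponding neighbours of the reference block."""
    cur = np.concatenate([cur_top, cur_left]).astype(int)
    ref = np.concatenate([ref_top, ref_left]).astype(int)
    return cur - ref

def pick_intra_mode(mode_costs):
    """Step 3230: choose the intra mode with the lowest coding cost.
    `mode_costs` maps mode number -> cost of coding the residual in that mode."""
    return min(mode_costs, key=mode_costs.get)

res = adjacent_pixel_residual([100, 102], [98, 97], [99, 100], [98, 95])
mode = pick_intra_mode({0: 40, 1: 25, 2: 31})
```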
- Thereafter, the apparatus for encoding an image transforms the residual values obtained through step 3220 (step 3240) and performs quantization on the transformed data (e.g., DCT coefficients) (step 3250).
- A bit stream is then generated by entropy-encoding the quantized data together with the first size (i.e., the size of the current macroblock), the second size (i.e., the size of the current block), the motion vector, the intra prediction mode information, and the reference picture information (step 3260).
- To improve the encoding efficiency of the motion vector, a predictive motion vector may be generated, and the residual value between the motion vector and the predictive motion vector may then be entropy-encoded.
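The text only states that a predictive motion vector is formed and the difference is entropy-coded; the component-wise median rule below (as used in H.264/AVC) is one conventional choice, shown here as an assumption.

```python
def median_mv_predictor(mv_left, mv_top, mv_topright):
    """Component-wise median of the three neighbouring motion vectors."""
    def med(a, b, c):
        return sorted((a, b, c))[1]
    return (med(mv_left[0], mv_top[0], mv_topright[0]),
            med(mv_left[1], mv_top[1], mv_topright[1]))

def mv_residual(mv, pred_mv):
    """Only this difference needs to be entropy-encoded."""
    return (mv[0] - pred_mv[0], mv[1] - pred_mv[1])

pmv = median_mv_predictor((1, 2), (3, 4), (5, 0))
res = mv_residual((4, 3), pmv)
```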
- The image encoding method according to the embodiment of the present invention shown in FIG. 32 is performed on all blocks included in each macroblock, and the encoding order of the plurality of current blocks divided from each macroblock follows a predetermined order.
- Each current block included in the current macroblock is also decoded in the same predetermined order; since the adjacent pixel information on the left and upper sides of a given current block is therefore already known at the decoding side, the residual values obtained through step 3220 need not be provided to the decoding side.
- In the above description, intra prediction is performed using the residual values between the adjacent pixels of the current block and the adjacent pixels of the reference block in the current macroblock; alternatively, the method may be configured such that a prediction macroblock is generated and the intra prediction described in steps 3220 to 3230 of FIG. 32 is performed on the residual values between the generated prediction macroblock and the current macroblock.
- the encoding process of the image may be performed according to the macro block size determined by the encoding controller (not shown) or the decoding controller (not shown) and the current block size included in each macro block.
- In addition, the present invention may be applied to all of prediction, transform, and quantization, or to at least one of prediction, transform, and quantization.
- the above encoding process can be similarly applied to the decoding process of the following embodiments of the present invention.
- FIG. 34 is a flowchart illustrating a video encoding method according to another embodiment of the present invention, and FIG. 35 is a conceptual diagram for describing the video encoding method shown in FIG. 34.
- Referring to FIGS. 34 and 35, the apparatus for encoding an image determines a reference macroblock for a current macroblock having a first size, included in the N-th picture that is the current picture to be encoded, in the (N+1)-th picture located temporally after the N-th picture, and then generates a motion vector (step 3411).
- The current macroblock having the first size may be a macroblock having a size of 16×16 pixels or less, or an extended macroblock having a size of 32×32 or 64×64 pixels or more.
- The extended macroblock may have a size of 32×32 pixels or more, that is, 64×64 pixels, 128×128 pixels, or more, so as to be suitable for high resolutions of ultra high definition (ultra HD) or beyond.
- Thereafter, the apparatus for encoding an image obtains the difference between an adjacent pixel of a current block having a second size in the current macroblock of the N-th picture and the corresponding adjacent pixel of a reference block at the position corresponding to the current block in the reference macroblock of the (N+1)-th picture, thereby obtaining a residual value between the adjacent pixels (step 3420).
- the current block having the second size may be configured as, for example, 4x4 pixels or 8x8 pixels, and may be determined according to the size of the current macro block.
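As a hedged illustration of step 3420 (the function name and the flat 1-D layout of the neighboring pixels are assumptions for the sketch, not taken from the patent text), the residual is a per-pixel subtraction between the two sets of neighboring pixels, not between the block samples themselves:

```python
# Sketch: the residual is taken between the neighboring pixels of the
# current block and the corresponding neighboring pixels of the
# motion-compensated reference block. Neighbors are modeled here as a
# flat list of sample values.

def adjacent_pixel_residual(cur_neighbors, ref_neighbors):
    """Element-wise difference between corresponding neighboring pixels."""
    if len(cur_neighbors) != len(ref_neighbors):
        raise ValueError("neighbor sets must align one-to-one")
    return [c - r for c, r in zip(cur_neighbors, ref_neighbors)]

print(adjacent_pixel_residual([100, 102, 97], [98, 101, 97]))
```

Because the reference block is motion compensated, these differences are typically small, which is what makes the subsequent transform and quantization efficient.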
- the image encoding apparatus determines the intra prediction mode of the current block by using the residual values between the adjacent pixels obtained in step 3420 (step 3430), transforms (step 3440) and quantizes (step 3450) the residual values, and then generates a bit stream by performing entropy encoding on the quantized data together with the first size (i.e., the size of the current macroblock), the second size (i.e., the size of the current block), the motion vector, the intra prediction mode information, and the reference picture information (step 3460).
- since steps 3430 to 3460 illustrated in FIG. 34 are executed in the same manner as steps 3230 to 3260 of FIG. 32, detailed descriptions thereof are omitted.
- as in FIG. 32, various intra prediction modes may be applied to the obtained residual values, and the optimal intra prediction mode may be determined in consideration of the resulting encoding efficiency.
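The mode search described above can be sketched as a cost minimization. This is an illustrative stand-in, not the patent's normative procedure: the function names, the three-mode subset, and the use of SAD (sum of absolute differences) as the coding-efficiency proxy are all assumptions.

```python
# Try each candidate intra mode on the residual block and keep the one
# with the lowest SAD, a simple stand-in for "encoding efficiency".

def predict(mode, top, left, size):
    """Build a size x size prediction from reconstructed neighbors.
    Only three of the nine H.264-style modes are sketched here."""
    if mode == "vertical":    # copy the row above downwards
        return [[top[c] for c in range(size)] for _ in range(size)]
    if mode == "horizontal":  # copy the left column rightwards
        return [[left[r]] * size for r in range(size)]
    if mode == "dc":          # fill with the mean of the neighbors
        dc = (sum(top) + sum(left)) // (2 * size)
        return [[dc] * size for _ in range(size)]
    raise ValueError(mode)

def best_mode(block, top, left):
    """Return the candidate mode with the smallest SAD against `block`."""
    size = len(block)
    def sad(pred):
        return sum(abs(block[r][c] - pred[r][c])
                   for r in range(size) for c in range(size))
    return min(("vertical", "horizontal", "dc"),
               key=lambda m: sad(predict(m, top, left, size)))
```

A real encoder would weigh rate as well as distortion, but the structure — evaluate every allowed mode, keep the cheapest — is the same.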
- FIG. 36 is a flowchart illustrating a video encoding method according to another embodiment
- FIG. 37 is a conceptual diagram for describing the video encoding method illustrated in FIG. 36.
- the apparatus for encoding an image determines a reference macroblock for a current macroblock having a first size included in the N-th picture, which is the current picture to be encoded, in the (N-1)-th picture located temporally before the N-th picture to generate a forward motion vector, and at the same time determines a reference macroblock in the (N+1)-th picture located later in time than the N-th picture to generate a backward motion vector (step 3610).
- the current macroblock having the first size may have a size of 16x16 pixels or less, or may be an extended macroblock having a size of 32x32 or 64x64 pixels or more.
- the extended macroblock may have a size of 32x32 pixels or more, such as 64x64 or 128x128 pixels, to be suitable for high-resolution video of ultra high definition (ultra HD) or higher resolution.
- the apparatus for encoding an image obtains a forward residual value between adjacent pixels by computing the difference between the adjacent pixels of a current block having a second size in the current macroblock of the N-th picture and the corresponding adjacent pixels of the reference block at the corresponding position in the reference macroblock of the (N-1)-th picture, likewise obtains a backward residual value using the reference block at the position corresponding to the current block in the reference macroblock of the (N+1)-th picture, and then takes the average of the forward residual value and the backward residual value as the final residual value (step 3620).
- the current block having the second size may be configured as, for example, 4x4 pixels or 8x8 pixels, and may be determined according to the size of the current macro block.
- the image encoding apparatus determines the intra prediction mode of the current block by using the residual values between the adjacent pixels obtained in step 3620 (step 3630), transforms (step 3640) and quantizes (step 3650) the final residual value, and then generates a bit stream by performing entropy encoding on the quantized data together with the first size (i.e., the size of the current macroblock), the second size (i.e., the size of the current block), the forward motion vector, the backward motion vector, the intra prediction mode information, the reference picture information, and the like (step 3660).
- since steps 3630 to 3660 shown in FIG. 36 are executed in the same manner as steps 3230 to 3260 of FIG. 32, detailed descriptions thereof are omitted.
- as described above, the image encoding method obtains the forward residual value from the difference between the adjacent pixels 2613 of the current block 2611 having the second size in the N-th picture 2610 and the corresponding adjacent pixels of the reference block in the (N-1)-th picture 2630, obtains the backward residual value analogously from the (N+1)-th picture, and may take the average of the forward and backward residual values as the final residual value; various intra prediction modes may then be applied to the obtained residual values, and the optimal intra prediction mode may be determined in consideration of the resulting encoding efficiency.
- alternatively, the final residual value may be determined as whichever of the forward residual value and the backward residual value has the smaller value.
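The two combination rules just described — averaging the forward and backward residuals, or keeping the one with the smaller magnitude — can be sketched as follows. The function names and the element-wise treatment are illustrative assumptions; the text does not prescribe an implementation.

```python
# Two ways to combine forward and backward residuals into a final
# residual, as described in the surrounding paragraphs.

def combine_average(fwd, bwd):
    """Final residual as the average of forward and backward residuals."""
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

def combine_smaller(fwd, bwd):
    """Final residual as the element with the smaller magnitude."""
    return [f if abs(f) <= abs(b) else b for f, b in zip(fwd, bwd)]
```

The smaller-magnitude rule tends to produce residuals that are cheaper to quantize; the average smooths out noise present in only one temporal direction.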
- in the embodiment described above, the image encoding apparatus obtains the residual value between the adjacent pixels of the current block and the adjacent pixels of the reference block with reference to the (N-1)-th and (N+1)-th pictures; alternatively, the apparatus may be configured to obtain the residual value between adjacent pixels by further referring to the (N-2)-th picture located temporally before the (N-1)-th picture and the (N+2)-th picture located temporally after the (N+1)-th picture.
- the image encoding apparatus may perform inter prediction and intra prediction with reference to the N-2, N-1, N + 1, and N + 2th pictures.
- in this case, the image encoding apparatus buffers the N-2, N-1, N, N+1, and N+2-th pictures, and then determines the sizes of the current macroblock and the current block used for inter prediction and intra prediction based on the temporal frequency characteristics between the buffered pictures, or the change in those characteristics, in temporal order.
- specifically, the apparatus for encoding an image may detect the amount of change between two temporally adjacent pictures (for example, the (N-1)-th and N-th pictures) among the buffered N-2, N-1, N, N+1, and N+2-th pictures, compare the detected amount of change with at least one predetermined reference value, and determine the size of the blocks to be used for inter prediction and intra prediction according to the comparison result.
- for example, when the detected amount of change between temporally adjacent pictures is less than a first reference value, the image encoding apparatus determines the size of the macroblock having the first size to be 64x64 pixels and the size of the block having the second size to be 8x8 pixels, and when the detected amount of change is greater than or equal to the first reference value and less than a second reference value, it determines the macroblock size to be 32x32 pixels and the block size to be 4x4 pixels.
- when the detected amount of change is greater than or equal to the second reference value, the size of the macroblock having the first size may be determined to be 16x16 pixels or less, and the size of the block having the second size may be determined to be 4x4 pixels.
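The threshold rule above maps detected inter-picture change to block sizes. In this sketch, `REF1` and `REF2` are illustrative placeholder values for the two predetermined reference values, which the text leaves unspecified:

```python
# Map the detected inter-picture change amount to (macroblock size,
# block size), following the three-way threshold rule in the text.
# REF1 and REF2 are assumed placeholder thresholds.

REF1, REF2 = 10.0, 40.0

def block_sizes(change_amount):
    """Return (macroblock_size, block_size) for a given change amount."""
    if change_amount < REF1:
        return (64, 8)   # little temporal change: large macroblock, 8x8 blocks
    if change_amount < REF2:
        return (32, 4)   # moderate change
    return (16, 4)       # large change: 16x16 (or smaller) macroblock, 4x4 blocks
```

The intuition is that slowly changing content is well served by large prediction units, while fast motion benefits from finer partitioning.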
- FIG. 38 is a flowchart illustrating a video encoding method according to another embodiment
- FIG. 39 is a conceptual diagram for describing the video encoding method illustrated in FIG. 38.
- the apparatus for encoding an image determines a reference macroblock for a current macroblock having a first size included in the N-th picture, which is the current picture to be encoded, in the (N-1)-th picture located temporally before the N-th picture to generate a first motion vector, and at the same time determines a reference macroblock in the (N-2)-th picture located temporally before the (N-1)-th picture to generate a second motion vector (step 3810).
- the current macroblock having the first size may have a size of 16x16 pixels or less, or may be an extended macroblock having a size of 32x32 or 64x64 pixels or more.
- the extended macroblock may have a size of 32x32 pixels or more, such as 64x64 or 128x128 pixels, to be suitable for high-resolution video of ultra high definition (ultra HD) or higher resolution.
- also, in the case of high-resolution video of ultra HD (ultra high definition) level or higher, the extended macroblock may be limited to a maximum size of 64x64 pixels in consideration of encoder and decoder complexity.
- the apparatus for encoding an image obtains a first residual value between adjacent pixels by computing the difference between the adjacent pixels of a current block having a second size in the current macroblock of the N-th picture and the corresponding adjacent pixels of the reference block at the position corresponding to the current block in the reference macroblock of the (N-1)-th picture, obtains a second residual value in the same way from the reference block at the corresponding position in the reference macroblock of the (N-2)-th picture, and then obtains a final residual value based on the first residual value and the second residual value (step 3820).
- here, the final residual value may be determined as the average of the first residual value and the second residual value, as the smaller of the two residual values, or by differentially applying weights according to the temporal distance from the current picture.
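The weighted variant can be sketched as below. Since the (N-1)-th picture is temporally closer to the current picture than the (N-2)-th, its residual plausibly receives the larger weight; the 2/3 versus 1/3 split used here is an assumption for illustration, not a value from the text.

```python
# Weighted combination of the two residuals, weighting by temporal
# proximity to the current picture. Default weights are assumed.

def weighted_final_residual(r1, r2, w1=2/3, w2=1/3):
    """r1: residual vs. the (N-1)-th picture (closer in time);
    r2: residual vs. the (N-2)-th picture (farther in time)."""
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights should sum to 1"
    return [w1 * a + w2 * b for a, b in zip(r1, r2)]
```

Setting `w1 = w2 = 0.5` recovers the plain-average rule mentioned in the same paragraph.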
- the current block having the second size may be configured as, for example, 4x4 pixels or 8x8 pixels, and may be determined according to the size of the current macro block.
- the image encoding apparatus determines the intra prediction mode of the current block by using the final residual value between the adjacent pixels obtained in step 3820 (step 3830), transforms (step 3840) and quantizes (step 3850) the final residual value, and then performs entropy encoding on the quantized data together with the first size (i.e., the size of the current macroblock), the second size (i.e., the size of the current block), the first motion vector, the second motion vector, the intra prediction mode information, and the reference picture information to generate a bit stream (step 3860).
- since steps 3830 to 3860 shown in FIG. 38 are executed in the same manner as steps 3230 to 3260 of FIG. 32, detailed descriptions thereof are omitted.
- as described above, the image encoding method calculates the first residual value as the difference between the adjacent pixels 2613 of the current block 2611 having the second size in the N-th picture 2610 and the adjacent pixels 2633 of the reference block 2651 having the second size in the (N-1)-th picture 2630, and calculates the second residual value analogously for the current block 2611 with respect to the (N-2)-th picture.
- the final residual value may be determined as the average of the two residual values, as the smaller of the two, or by applying weights according to the temporal distance from the current picture.
- FIG. 40 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- the apparatus for decoding an image receives an encoded bit stream (step 4010) and entropy decodes the received bit stream (step 4020).
- the entropy decoded information may include, for example, the motion vector (or motion vector residual) of the macroblock, the intra prediction mode, the size of the macroblock (i.e., the first size), the size of the current block in the macroblock (i.e., the second size), reference picture information, and the like, and may include different information depending on the embodiment of the image encoding method.
- the image decoding apparatus performs inverse quantization and inverse transformation on the entropy decoded information to obtain a residual value between the adjacent pixel of the current block and the adjacent pixel of the reference block (step 4030).
- the apparatus for decoding an image determines a reference macroblock having the first size in the reference picture and a reference block having the second size in the reference macroblock by using the macroblock size, the current block size in the macroblock, the reference picture information, and the motion vector information of the current macroblock obtained through entropy decoding (step 4040), and then obtains the adjacent pixel information of the reference block corresponding to the current block to be decoded (step 4050).
- the image decoding apparatus obtains the neighboring pixel information of the current block having the second size by computing the obtained neighboring pixel information and the residual value, and restores the current block according to the intra prediction mode information (step 4060).
- when the image has been encoded using the (N-1)-th and (N+1)-th pictures as in the embodiment of FIG. 36, the image decoding apparatus determines the reference macroblocks in the (N-1)-th and (N+1)-th pictures using the forward motion vector and the backward motion vector, obtains the adjacent pixel information of the reference block in each of the determined reference macroblocks, and computes the obtained adjacent pixel information together with the obtained residual value to obtain the adjacent pixel information of the current block to be restored.
- likewise, when the image has been encoded using the (N-1)-th and (N-2)-th pictures as in the embodiment of FIG. 38, the apparatus determines the reference macroblocks in the (N-1)-th and (N-2)-th pictures using the first motion vector and the second motion vector, obtains the adjacent pixel information of the reference block in each of the determined reference macroblocks, and computes the obtained adjacent pixel information together with the obtained residual value to obtain the adjacent pixel information of the current block to be restored.
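The reconstruction arithmetic of steps 4030 to 4060 can be sketched in two lines: the decoded residual is added back to the reference block's neighboring pixels to recover the current block's neighboring pixels, which then drive the signalled intra prediction mode. DC prediction is shown as one example mode; the function names are assumptions for the sketch.

```python
# Decoder-side sketch: recover the current block's neighbors, then
# intra-predict the block from them (DC mode shown).

def reconstruct_neighbors(ref_neighbors, residual):
    """Current block's neighbors = reference block's neighbors + residual."""
    return [p + r for p, r in zip(ref_neighbors, residual)]

def dc_predict_block(neighbors, size):
    """DC-mode reconstruction: fill a size x size block with the neighbor mean."""
    dc = sum(neighbors) // len(neighbors)
    return [[dc] * size for _ in range(size)]

cur_neighbors = reconstruct_neighbors([100, 104, 96, 100], [2, -4, 4, -2])
block = dc_predict_block(cur_neighbors, 4)  # 4x4 block filled with the DC value
```

This mirrors the encoder: what was subtracted at step 3420/3620/3820 is added back here before the intra mode is applied.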
- FIG. 41 is a block diagram illustrating a configuration of a video encoding apparatus according to an embodiment of the present invention.
- the apparatus for encoding an image may include an encoding controller 4110, a predictor 4120, a transformer 4130, a quantizer 4140, an inverse quantizer 4150, an inverse transformer 4160, a buffer 4170, and an entropy encoder 4180.
- the encoding controller 4110 determines the size of the blocks used for inter prediction and intra prediction, and controls the predictor 4120 to perform encoding according to the determined size. In addition, the encoding controller 4110 determines the size of the blocks processed by the transformer 4130 and the quantizer 4140, and controls them to perform transform and quantization according to the determined block size.
- also, the encoding controller 4110 determines the pictures to be referred to in the inter prediction and intra prediction processes. For example, the encoding controller 4110 may determine any one of the N-2, N-1, N+1, and N+2-th pictures as the reference picture used for inter prediction and intra prediction of the N-th picture, which is the picture currently being encoded, or may determine that one or more of those pictures are to be referred to.
- the encoding controller 4110 provides the entropy encoder 4180 with the above-described block size information used for inter-picture and intra-picture prediction, block size information used for transform and quantization, reference picture information, and the like.
- the predictor 4120 determines a reference macroblock for the current macroblock having the first size included in the N-th picture, which is the current picture to be encoded, in the (N-1)-th picture stored in the buffer 4170, and then generates a motion vector.
- the generated motion vector is provided to the entropy encoder 4180.
- the prediction unit 4120 performs inter-screen prediction and intra-screen prediction on the current block having the second size in the current macro block having the first size.
- specifically, the predictor 4120 obtains a residual value between adjacent pixels by computing the difference between an adjacent pixel of the current block having the second size and the corresponding adjacent pixel of the reference block at the position corresponding to the current block in the reference macroblock of the (N-1)-th picture, and then provides the obtained residual value to the transformer 4130.
- the prediction unit 4120 determines the intra prediction mode using the residual value, and then provides the determined intra prediction mode information to the entropy encoder 4180.
- here, the intra prediction mode may be one of the intra prediction modes of a 4x4 block according to the H.264/AVC standard: vertical mode (mode 0), horizontal mode (mode 1), DC (average value) mode (mode 2), diagonal down-left mode (mode 3), diagonal down-right mode (mode 4), vertical-right mode (mode 5), horizontal-down mode (mode 6), vertical-left mode (mode 7), and horizontal-up mode (mode 8).
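For reference, the nine 4x4 intra modes enumerated above can be collected in a small lookup table (mode number to name, following the H.264/AVC numbering); the table itself is illustrative only:

```python
# The nine H.264/AVC Intra_4x4 prediction modes, keyed by mode number.

H264_INTRA_4X4_MODES = {
    0: "vertical",
    1: "horizontal",
    2: "dc",
    3: "diagonal_down_left",
    4: "diagonal_down_right",
    5: "vertical_right",
    6: "horizontal_down",
    7: "vertical_left",
    8: "horizontal_up",
}
```

The encoder signals one of these mode numbers per block, and the decoder applies the corresponding predictor to the reconstructed neighboring pixels.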
- the intra prediction mode may be determined by applying each of the modes and considering the resulting coding efficiency.
- alternatively, instead of the 4x4-block intra prediction modes of the H.264/AVC standard described above, one of various intra prediction modes for a block larger than 4x4 pixels may be determined as the intra prediction mode.
- the current macroblock having the first size may have a size of 16x16 pixels or less, or may be an extended macroblock having a size of 32x32, 64x64, or 128x128 pixels; the current block having the second size may be, for example, 4x4 or 8x8 pixels; and the sizes of the current macroblock and the current block may be determined by the encoding controller 4110.
- the predictor 4120 restores the current block by obtaining the adjacent pixel information of the current block having the second size from the residual value provided from the inverse transformer 4160, the adjacent pixel values of the reference block in the (N-1)-th picture, and the intra prediction mode information, and then provides the restored current block to the buffer 4170.
- the transformer 4130 and the quantizer 4140 transform and quantize the residual values provided by the predictor 4120.
- the transformer 4130 and the quantizer 4140 may perform the transform based on the block size information provided by the encoding controller 4110; for example, the transform may be performed in units of 32x32 or 64x64 pixels.
- the inverse quantizer 4150 and the inverse transformer 4160 inversely quantize and inversely transform the quantized data provided from the quantizer 4140 to obtain a residual value, and then provide the residual value to the predictor 4120.
- the buffer 4170 stores at least one reconstructed picture.
- the entropy encoder 4180 generates a bit stream by entropy encoding the quantized residual value provided from the quantizer 4140 together with the motion vector, the block size information used for inter prediction and intra prediction, the block size information used for transform and quantization, the reference picture information, and the like.
- although FIG. 41 illustrates that the (N-1)-th picture is referred to for encoding the N-th picture, as illustrated in FIGS. 33 to 39, the encoding may be performed by referring to at least one of the already encoded N-2, N-1, N+1, and N+2-th pictures for encoding the N-th picture.
- FIG. 42 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- the apparatus for decoding an image may include a decoding controller 4210, an entropy decoder 4220, an inverse quantizer 4230, an inverse transformer 4240, a predictor 4250, and a buffer 4260.
- the decoding controller 4210 obtains, from the entropy decoded information, the size information of the blocks used for inter prediction and intra prediction, the size information of the blocks processed in the inverse quantization and inverse transform processes, the information of the pictures referred to in inter prediction and intra prediction, and the intra prediction mode information, and performs control for decoding based on the obtained information.
- for example, the decoding controller 4210 may control the sizes of the blocks processed by the inverse quantizer 4230 and the inverse transformer 4240, and may control the reference picture referred to when the predictor 4250 reconstructs the image, the macroblock size within the reference picture, and the current block size within the macroblock.
- the entropy decoder 4220 performs entropy decoding on the input bit stream.
- among the entropy decoded information, the residual value is provided to the inverse quantizer 4230 and the motion vector is provided to the predictor 4250, while the size information of the blocks used for inter prediction and intra prediction, the size information of the blocks processed in the inverse quantization and inverse transform processes, the information of the pictures referred to in inter prediction and intra prediction, and the like are provided to the decoding controller 4210.
- the inverse quantizer 4230 and the inverse transformer 4240 inverse quantize and inverse transform the quantized residual value provided from the entropy decoder 4220 to generate a residual value, and provide the generated residual value to the predictor 4250.
- the predictor 4250 determines, among the pictures stored in the buffer 4260, the reference macroblock corresponding to the current macroblock to be decoded having the first size and the reference block corresponding to the current block having the second size within the current macroblock, by using the motion vector provided from the entropy decoder 4220 together with the macroblock size, the current block size within the macroblock, and the reference picture information provided from the decoding controller 4210, and then obtains the adjacent pixel information of the reference block.
- the predictor 4250 obtains the adjacent pixel information of the current block by computing the obtained adjacent pixel information together with the residual value provided by the inverse transformer 4240, restores the current block according to the intra prediction mode information provided from the decoding controller 4210, and then stores the restored current block in the buffer 4260.
- when the image has been encoded using the (N-1)-th and (N+1)-th pictures as in the embodiment of FIG. 36, the predictor 4250 determines the reference macroblocks in the (N-1)-th and (N+1)-th pictures using the forward and backward motion vectors provided from the entropy decoder 4220, obtains the adjacent pixel information of the reference block in each of the determined reference macroblocks, obtains the adjacent pixel information of the current block to be restored by computing it together with the residual value provided from the inverse transformer 4240, and then restores the current block according to the intra prediction mode.
- likewise, when the image has been encoded using the (N-1)-th and (N-2)-th pictures as in the embodiment shown in FIG. 38, the predictor 4250 determines the reference macroblocks in the (N-1)-th and (N-2)-th pictures using the first motion vector and the second motion vector provided from the entropy decoder 4220, obtains the adjacent pixel information of the reference block in each of the determined reference macroblocks, obtains the adjacent pixel information of the current block to be reconstructed by computing it together with the residual value provided from the inverse transformer 4240, and then reconstructs the current block according to the intra prediction mode.
- the buffer 4260 stores the decoded pictures provided from the predictor 4250.
Claims (37)
- A method of encoding an image, comprising: generating a prediction block by performing motion compensation on a prediction unit having a size of NxN pixels, where N is a power of two equal to or greater than 32; obtaining a residual value by comparing the prediction unit with the prediction block; and transforming the residual value.
- The method of claim 1, wherein the image has a resolution of High Definition (HD) level or higher, and the prediction unit has an extended macroblock size.
- The method of claim 2, wherein transforming the residual value comprises performing a Discrete Cosine Transform (DCT) on the extended macroblock.
- The method of claim 1, wherein the prediction unit has a size of NxN pixels, where N is a power of two equal to or greater than 32 and equal to or less than 128.
- A method of encoding an image, comprising: receiving at least one picture to be encoded; determining the size of a block to be encoded based on temporal frequency characteristics between the received at least one picture; and encoding a block having the determined size.
- A method of encoding an image, comprising: receiving at least one picture to be encoded; determining the size of a prediction unit to be encoded based on spatial frequency characteristics of the received at least one picture, the prediction unit having a size of NxN pixels, where N is a power of two equal to or greater than 32; and encoding the prediction unit having the determined size.
- A method of decoding an image, comprising: receiving an encoded bit stream; obtaining, from the received bit stream, size information of a prediction unit to be decoded, the prediction unit having a size of NxN pixels, where N is a power of two equal to or greater than 32; obtaining a residual value by inverse quantizing and inverse transforming the received bit stream; generating a prediction block by performing motion compensation on a prediction unit having a size corresponding to the obtained prediction unit size information; and reconstructing the image by adding the generated prediction block and the residual value.
- The method of claim 7, wherein the image has a resolution of High Definition (HD) level or higher, and the prediction unit has an extended macroblock size.
- The method of claim 8, wherein transforming the residual value comprises performing an Inverse Discrete Cosine Transform (IDCT) on the extended macroblock.
- The method of claim 7, wherein the prediction unit has a size of NxN pixels, where N is a power of two, and the size of the prediction unit is limited to a maximum of 64x64 pixels in consideration of the complexity of the encoder and the decoder.
- The method of claim 7, wherein the prediction unit corresponds to a leaf coding unit obtained by hierarchically splitting a coding unit having a variable size until a maximum allowable hierarchy level or hierarchy depth is reached.
- The method of claim 7, further comprising obtaining, from the received bit stream, partition information of the prediction unit to be decoded.
- The method of claim 12, wherein generating the prediction block by performing motion compensation on the prediction unit having the size corresponding to the obtained prediction unit size information comprises partitioning the prediction unit based on the partition information of the prediction unit and performing the motion compensation on the resulting partitions.
- The method of claim 13, wherein the partitioning is performed by an asymmetric partitioning scheme.
- The method of claim 13, wherein the partitioning is performed by a geometric partitioning scheme having a shape other than a square.
- The method of claim 13, wherein the partitioning is performed by a partitioning scheme along an edge direction.
- The method of claim 16, wherein the partitioning along the edge direction comprises: detecting a pixel belonging to an edge among the blocks adjacent to the prediction unit; and dividing the prediction unit into at least one partition based on the pixel belonging to the detected edge.
- The method of claim 16, wherein the partitioning scheme along the edge direction is applied to intra prediction.
- An apparatus for encoding an image, comprising: a prediction unit determiner configured to receive at least one picture to be encoded and to determine the size of a prediction unit to be encoded based on temporal frequency characteristics between the received at least one picture or spatial frequency characteristics of the received at least one picture; and an encoder configured to encode the prediction unit having the determined size.
- An apparatus for decoding an image, comprising: an entropy decoder configured to decode a received bit stream to generate header information; a motion compensator configured to generate a prediction block by performing motion compensation on a prediction unit based on size information of the prediction unit obtained from the header information, the prediction unit having a size of NxN pixels, where N is a power of two equal to or greater than 32; an inverse quantizer configured to inverse quantize the received bit stream; an inverse transformer configured to inverse transform the inverse quantized data to obtain a residual value; and an adder configured to reconstruct the image by adding the residual value and the prediction block.
- The apparatus of claim 20, wherein the prediction unit has an extended macroblock size.
- The apparatus of claim 21, wherein the inverse transformer performs an Inverse Discrete Cosine Transform (IDCT) on the extended macroblock.
- The apparatus of claim 20, wherein the prediction unit has a size of NxN pixels, where N is a power of two, and the size of the prediction unit is limited to a maximum of 64x64 pixels in consideration of the complexity of the encoder and the decoder.
- The apparatus of claim 20, wherein the prediction unit corresponds to a leaf coding unit obtained by hierarchically splitting a coding unit having a variable size until a maximum allowable hierarchy level or hierarchy depth is reached.
- The apparatus of claim 20, wherein the motion compensator partitions the prediction unit based on partition information of the prediction unit and performs the motion compensation on the resulting partitions.
- The apparatus of claim 25, wherein the partitioning is performed by an asymmetric partitioning scheme.
- The apparatus of claim 25, wherein the partitioning is performed by a geometric partitioning scheme having a shape other than a square.
- The apparatus of claim 25, wherein the partitioning is performed by a partitioning scheme along an edge direction.
- The apparatus of claim 28, wherein the partitioning scheme along the edge direction is applied to intra prediction.
- A method of encoding an image, comprising: performing intra prediction encoding, selectively using one of a plurality of prediction modes, on a prediction unit obtained by partitioning an input image using at least one of an asymmetric partitioning scheme and a geometric partitioning scheme for prediction encoding of the input image; and performing entropy encoding by transforming and quantizing a residue, which is the difference between the prediction unit predicted by the intra prediction and the current prediction unit.
- A method of decoding an image, comprising: reconstructing a residual value by entropy decoding a received bit stream and inverse quantizing and inverse transforming the residual value; generating a prediction unit by performing intra prediction encoding, selectively using one of a plurality of prediction modes, on a prediction unit partitioned using at least one of an asymmetric partitioning scheme and a geometric partitioning scheme; and reconstructing the image by adding the residual value to the prediction unit.
- An apparatus for decoding an image, comprising: an inverse quantization and inverse transform unit configured to reconstruct a residual value by entropy decoding a received bit stream and inverse quantizing and inverse transforming the residual value; an intra predictor configured to generate a prediction unit by performing intra prediction encoding, selectively using one of a plurality of prediction modes, on a prediction unit partitioned using at least one of an asymmetric partitioning scheme and a geometric partitioning scheme; and an adder configured to reconstruct the image by adding the residual value to the prediction unit.
- A method of decoding an image, comprising: receiving a bit stream in which a residual value between adjacent pixels of a current block having a second size in an N-th picture and adjacent pixels of a reference block in an (N-1)-th picture temporally preceding the N-th picture, and an intra prediction mode for the current block determined based on the residual value, are encoded; entropy decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inverse quantizing and inverse transforming the quantized residual value to obtain the residual value; determining a reference block of the current block having the second size in at least one picture by using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the adjacent pixels of the determined reference block and the residual value.
- A method of decoding an image, comprising: receiving a bit stream in which a residual value between adjacent pixels of a current block having a second size in an N-th picture and adjacent pixels of a reference block in an (N+1)-th picture temporally later than the N-th picture, and an intra prediction mode for the current block determined based on the residual value, are encoded; entropy decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inverse quantizing and inverse transforming the quantized residual value to obtain the residual value; determining a reference block of the current block having the second size in at least one picture by using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the adjacent pixels of the determined reference block and the residual value.
- A method of decoding an image, comprising: receiving a bit stream in which a residual value, determined based on a forward residual value between adjacent pixels of a current block having a second size in an N-th picture and adjacent pixels of a reference block in an (N-1)-th picture temporally earlier than the N-th picture and a backward residual value with respect to adjacent pixels of a reference block in an (N+1)-th picture temporally later than the N-th picture, and an intra prediction mode for the current block determined based on the residual value, are encoded; entropy decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inverse quantizing and inverse transforming the quantized residual value to obtain the residual value; determining a reference block of the current block having the second size in at least one picture by using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the adjacent pixels of the determined reference block and the residual value.
- A method of decoding an image, comprising: receiving a bit stream in which a residual value, determined based on a first residual value between adjacent pixels of a current block having a second size in an N-th picture and adjacent pixels of a reference block in an (N-1)-th picture temporally earlier than the N-th picture and a second residual value with respect to adjacent pixels of a reference block in an (N-2)-th picture temporally earlier than the (N-1)-th picture, and an intra prediction mode for the current block determined based on the residual value, are encoded; entropy decoding the bit stream to obtain a motion vector, the intra prediction mode, and a quantized residual value; inverse quantizing and inverse transforming the quantized residual value to obtain the residual value; determining a reference block of the current block having the second size in at least one picture by using the motion vector; and reconstructing the current block by applying the intra prediction mode to the result of computing the adjacent pixels of the determined reference block and the residual value.
- An apparatus for decoding an image, comprising: an entropy decoder configured to entropy decode a bit stream, in which a residual value between adjacent pixels of a current block having a second size in an N-th picture and adjacent pixels of a reference block in at least one reference picture and an intra prediction mode determined based on the residual value are encoded, to generate a motion vector, the intra prediction mode, and a quantized residual value; a decoding controller configured to obtain block size and reference picture information from the entropy decoded information; an inverse quantizer configured to inverse quantize the quantized residual value; an inverse transformer configured to inverse transform the inverse quantized residual value; and a predictor configured to determine a reference block of a current block having a second size to be decoded based on the motion vector and the reference picture information, compute the adjacent pixels of the determined reference block and the residual value, and reconstruct the current block by applying the intra prediction mode to the result of the computation.
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/513,122 US8995778B2 (en) | 2009-12-01 | 2010-12-01 | Method and apparatus for encoding/decoding high resolution images |
EP10834775.8A EP2509319A4 (en) | 2009-12-01 | 2010-12-01 | METHOD AND APPARATUS FOR ENCODING / DECODING HIGH RESOLUTION IMAGES |
CN201080054678.8A CN102648631B (zh) | 2009-12-01 | 2010-12-01 | 用于编码/解码高分辨率图像的方法和设备 |
US14/490,159 US9058659B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/489,893 US9053543B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/490,101 US9047667B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/490,255 US9053544B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/675,391 US20150208091A1 (en) | 2009-12-01 | 2015-03-31 | Methods and apparatuses for encoding/decoding high resolution images |
US14/739,884 US20150281688A1 (en) | 2009-12-01 | 2015-06-15 | Methods and apparatuses for encoding/decoding high resolution images |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2009-0117919 | 2009-12-01 | ||
KR20090117919 | 2009-12-01 | ||
KR1020090124334A KR20110067648A (ko) | 2009-12-15 | 2009-12-15 | Method for encoding/decoding an image and apparatus for performing the same |
KR10-2009-0124334 | 2009-12-15 | ||
KR10-2010-0053186 | 2010-06-07 | ||
KR20100053186A KR20110061468A (ko) | 2009-12-01 | 2010-06-07 | Method for encoding/decoding high-resolution images and apparatus for performing the same |
KR10-2010-0064009 | 2010-07-02 | ||
KR20100064009 | 2010-07-02 |
Related Child Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/513,122 A-371-Of-International US8995778B2 (en) | 2009-12-01 | 2010-12-01 | Method and apparatus for encoding/decoding high resolution images |
US14/490,101 Continuation US9047667B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/490,255 Continuation US9053544B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/490,159 Continuation US9058659B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/489,893 Continuation US9053543B2 (en) | 2009-12-01 | 2014-09-18 | Methods and apparatuses for encoding/decoding high resolution images |
US14/675,391 Continuation US20150208091A1 (en) | 2009-12-01 | 2015-03-31 | Methods and apparatuses for encoding/decoding high resolution images |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011068360A2 true WO2011068360A2 (ko) | 2011-06-09 |
WO2011068360A3 WO2011068360A3 (ko) | 2011-09-15 |
Family
ID=46660411
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2010/008563 WO2011068360A2 (ko) | 2009-12-01 | 2010-12-01 | Method and apparatus for encoding/decoding high resolution images |
Country Status (4)
Country | Link |
---|---|
US (7) | US8995778B2 (ko) |
EP (2) | EP2509319A4 (ko) |
CN (7) | CN105959688B (ko) |
WO (1) | WO2011068360A2 (ko) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110181604A1 (en) * | 2010-01-22 | 2011-07-28 | Samsung Electronics Co., Ltd. | Method and apparatus for creating animation message |
US20130216147A1 (en) * | 2011-04-13 | 2013-08-22 | Huawei Technologies Co., Ltd. | Image Encoding and Decoding Methods and Related Devices |
US20130272381A1 (en) * | 2012-04-16 | 2013-10-17 | Qualcomm Incorporated | Simplified non-square quadtree transforms for video coding |
WO2014142630A1 (en) * | 2013-03-15 | 2014-09-18 | Samsung Electronics Co., Ltd. | Creating details in an image with frequency lifting |
CN104067613A (zh) * | 2011-11-08 | 2014-09-24 | Kt Corporation | Image encoding method and apparatus, and image decoding method and apparatus |
US9066025B2 (en) | 2013-03-15 | 2015-06-23 | Samsung Electronics Co., Ltd. | Control of frequency lifting super-resolution with image features |
US9349188B2 (en) | 2013-03-15 | 2016-05-24 | Samsung Electronics Co., Ltd. | Creating details in an image with adaptive frequency strength controlled transform |
CN106101721A (zh) * | 2011-11-23 | 2016-11-09 | Humax Co., Ltd. | Video decoding device |
US9536288B2 (en) | 2013-03-15 | 2017-01-03 | Samsung Electronics Co., Ltd. | Creating details in an image with adaptive frequency lifting |
US9652829B2 (en) | 2015-01-22 | 2017-05-16 | Samsung Electronics Co., Ltd. | Video super-resolution by fast video segmentation for boundary accuracy control |
CN107277544A (zh) * | 2011-08-17 | 2017-10-20 | Canon Kabushiki Kaisha | Encoding apparatus, encoding method, and storage medium |
US10091512B2 (en) | 2014-05-23 | 2018-10-02 | Futurewei Technologies, Inc. | Advanced screen content coding with improved palette table and index map coding methods |
US10291827B2 (en) | 2013-11-22 | 2019-05-14 | Futurewei Technologies, Inc. | Advanced screen content coding solution |
US10499062B2 (en) | 2011-11-25 | 2019-12-03 | Samsung Electronics Co., Ltd. | Image coding method and device for buffer management of decoder, and image decoding method and device |
US10638143B2 (en) | 2014-03-21 | 2020-04-28 | Futurewei Technologies, Inc. | Advanced screen content coding with improved color table and index map coding methods |
CN114666580A (zh) * | 2019-12-31 | 2022-06-24 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Inter prediction method, encoder, decoder, and storage medium |
Families Citing this family (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102771125B (zh) * | 2009-12-10 | 2015-12-09 | SK Telecom Co., Ltd. | Encoding/decoding method and apparatus using a tree structure |
EP2547107A4 (en) * | 2010-04-13 | 2014-07-02 | Samsung Electronics Co Ltd | Video encoding method and apparatus based on coding units determined according to a tree structure, and video decoding method and apparatus based on coding units determined according to a tree structure |
WO2011149265A2 (en) | 2010-05-25 | 2011-12-01 | Lg Electronics Inc. | New planar prediction mode |
KR101530284B1 (ko) * | 2010-07-16 | 2015-06-19 | Samsung Electronics Co., Ltd. | Method and apparatus for intra-prediction encoding and decoding of images |
KR101681303B1 (ko) * | 2010-07-29 | 2016-12-01 | SK Telecom Co., Ltd. | Image encoding/decoding method and apparatus using block partition prediction |
CA2808376C (en) * | 2010-12-06 | 2018-11-13 | Panasonic Corporation | Image coding method, image decoding method, image coding device, and image decoding device |
KR101955374B1 (ko) * | 2011-06-30 | 2019-05-31 | SK Telecom Co., Ltd. | Encoding/decoding method and apparatus based on fast coding unit (CU) mode decision |
US9807426B2 (en) * | 2011-07-01 | 2017-10-31 | Qualcomm Incorporated | Applying non-square transforms to video data |
FR2980068A1 (fr) * | 2011-09-13 | 2013-03-15 | Thomson Licensing | Method for encoding and reconstructing a block of pixels, and corresponding devices |
SI2744204T1 (sl) * | 2011-09-14 | 2019-02-28 | Samsung Electronics Co., Ltd. | Method of decoding a prediction unit (PU) based on its size |
US8811760B2 (en) * | 2011-10-25 | 2014-08-19 | Mitsubishi Electric Research Laboratories, Inc. | Coding images using intra prediction modes |
GB201122022D0 (en) * | 2011-12-20 | 2012-02-01 | Imagination Tech Ltd | Method and apparatus for compressing and decompressing data |
US9531990B1 (en) | 2012-01-21 | 2016-12-27 | Google Inc. | Compound prediction using multiple sources or prediction modes |
US8737824B1 (en) | 2012-03-09 | 2014-05-27 | Google Inc. | Adaptively encoding a media stream with compound prediction |
WO2014003421A1 (ko) | 2012-06-25 | 2014-01-03 | Hanyang University Industry-University Cooperation Foundation | Method for video encoding and decoding |
US9185414B1 (en) | 2012-06-29 | 2015-11-10 | Google Inc. | Video encoding using variance |
US9667994B2 (en) * | 2012-10-01 | 2017-05-30 | Qualcomm Incorporated | Intra-coding for 4:2:2 sample format in video coding |
US9628790B1 (en) | 2013-01-03 | 2017-04-18 | Google Inc. | Adaptive composite intra prediction for image and video compression |
CN103067715B (zh) | 2013-01-10 | 2016-12-28 | Huawei Technologies Co., Ltd. | Encoding and decoding method and apparatus for depth images |
CN103067716B (zh) * | 2013-01-10 | 2016-06-29 | Huawei Technologies Co., Ltd. | Encoding and decoding method and apparatus for depth images |
KR102254118B1 (ko) * | 2013-10-12 | 2021-05-20 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus using intra block copy prediction, and video decoding method and apparatus |
US9813730B2 (en) * | 2013-12-06 | 2017-11-07 | Mediatek Inc. | Method and apparatus for fine-grained motion boundary processing |
US9609343B1 (en) | 2013-12-20 | 2017-03-28 | Google Inc. | Video coding using compound prediction |
CN103997650B (zh) * | 2014-05-30 | 2017-07-14 | Huawei Technologies Co., Ltd. | Video decoding method and video decoder |
KR20170002460A (ko) * | 2014-06-11 | 2017-01-06 | LG Electronics Inc. | Method and apparatus for encoding and decoding a video signal using embedded block partitioning |
EP3207699B1 (en) | 2014-11-14 | 2021-06-09 | Huawei Technologies Co., Ltd. | Systems and methods for processing a block of a digital image |
WO2016074746A1 (en) | 2014-11-14 | 2016-05-19 | Huawei Technologies Co., Ltd. | Systems and methods for mask based processing of a block of a digital image |
BR112017010160B1 (pt) | 2014-11-14 | 2023-05-02 | Huawei Technologies Co., Ltd | Apparatus and method for generating a plurality of transform coefficients, method for encoding a frame, apparatus and method for decoding a frame, and computer-readable medium |
US10462461B2 (en) * | 2015-02-27 | 2019-10-29 | Kddi Corporation | Coding device and decoding device which allow encoding of non-square blocks |
WO2016182317A1 (ko) * | 2015-05-12 | 2016-11-17 | Samsung Electronics Co., Ltd. | Image decoding method and apparatus performing intra prediction, and image encoding method and apparatus performing intra prediction |
EP3273694A4 (en) * | 2015-05-12 | 2018-04-25 | Samsung Electronics Co., Ltd. | Image decoding method for performing intra prediction and device thereof, and image encoding method for performing intra prediction and device thereof |
US10392868B2 (en) * | 2015-09-30 | 2019-08-27 | Schlumberger Technology Corporation | Milling wellbore casing |
KR102447907B1 (ko) * | 2015-11-05 | 2022-09-28 | Samsung Electronics Co., Ltd. | Electronic device and method for providing a recommended object |
US10575000B2 (en) | 2016-04-20 | 2020-02-25 | Mediatek Inc. | Method and apparatus for image compression using block prediction mode |
MX2018014493A (es) * | 2016-05-25 | 2019-08-12 | Arris Entpr Llc | Binary, ternary and quaternary partitioning for JVET |
JP2019525577A (ja) * | 2016-07-18 | 2019-09-05 | Electronics and Telecommunications Research Institute | Image encoding/decoding method, apparatus, and recording medium storing a bitstream |
EP3487749B1 (en) | 2016-07-21 | 2021-10-06 | Zephyros, Inc. | Reinforcement structure |
CN106254719B (zh) * | 2016-07-25 | 2018-11-30 | Graduate School at Shenzhen, Tsinghua University | Light field image compression method based on linear transformation and image interpolation |
EP3490876A1 (en) | 2016-07-28 | 2019-06-05 | Zephyros, Inc. | Multiple stage deformation reinforcement structure for impact absorption |
KR102471208B1 (ko) | 2016-09-20 | 2022-11-25 | Kt Corporation | Video signal processing method and apparatus |
CN114245122A (zh) | 2016-10-04 | 2022-03-25 | B1 Institute of Image Technology, Inc. | Image data encoding/decoding method, medium, and method of transmitting a bitstream |
WO2018088805A1 (ko) * | 2016-11-08 | 2018-05-17 | Kt Corporation | Video signal processing method and apparatus |
US11445186B2 (en) | 2016-11-25 | 2022-09-13 | Kt Corporation | Method and apparatus for processing video signal |
KR20180074000A (ko) * | 2016-12-23 | 2018-07-03 | Samsung Electronics Co., Ltd. | Video decoding method, video decoder performing the same, video encoding method, and video encoder performing the same |
JP2018107588A (ja) * | 2016-12-26 | 2018-07-05 | Renesas Electronics Corporation | Image processing device and semiconductor device |
EP3349455A1 (en) | 2017-01-11 | 2018-07-18 | Thomson Licensing | Method and device for coding a block of video data, method and device for decoding a block of video data |
CN116193109A (zh) * | 2017-01-16 | 2023-05-30 | Industry Academy Cooperation Foundation of Sejong University | Image encoding/decoding method |
CN107465920A (zh) * | 2017-06-28 | 2017-12-12 | Jiangsu University of Science and Technology | Fast partitioning method for video coding units based on spatio-temporal correlation |
CN110999306B (zh) * | 2017-08-22 | 2022-09-16 | Panasonic Intellectual Property Corporation of America | Image encoder and image decoder |
CN115118996A (zh) * | 2017-08-22 | 2022-09-27 | Panasonic Intellectual Property Corporation of America | Image encoder and image decoder |
CN109996074A (zh) * | 2017-12-29 | 2019-07-09 | Fujitsu Limited | Image encoding device, image decoding device, and electronic apparatus |
WO2019151297A1 (ja) | 2018-01-30 | 2019-08-08 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
WO2019151280A1 (ja) * | 2018-01-30 | 2019-08-08 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
EP3808083A1 (en) * | 2018-06-18 | 2021-04-21 | InterDigital VC Holdings, Inc. | Method and apparatus for video encoding and decoding based on asymmetric binary partitioning of image blocks |
US10382772B1 (en) | 2018-07-02 | 2019-08-13 | Tencent America LLC | Method and apparatus for video coding |
FR3088511B1 (fr) * | 2018-11-09 | 2021-05-28 | Fond B Com | Method for decoding at least one image, encoding method, devices, signal, and corresponding computer programs |
CN111385581B (zh) * | 2018-12-28 | 2022-04-26 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method and device |
CN113170177A (zh) * | 2019-01-02 | 2021-07-23 | Beijing Bytedance Network Technology Co., Ltd. | Hash-based motion search |
CN113519164A (zh) * | 2019-03-02 | 2021-10-19 | Beijing Bytedance Network Technology Co., Ltd. | Restrictions on partition structures |
US10742972B1 (en) * | 2019-03-08 | 2020-08-11 | Tencent America LLC | Merge list construction in triangular prediction |
WO2020190113A1 (ko) * | 2019-03-21 | 2020-09-24 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus in which block size is set per block shape, and video decoding method and apparatus |
CN113055684B (zh) * | 2019-06-24 | 2022-09-30 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN114208197A (zh) | 2019-08-15 | 2022-03-18 | Alibaba Group Holding Ltd. | Block partitioning methods for video coding |
US11606563B2 (en) * | 2019-09-24 | 2023-03-14 | Tencent America LLC | CTU size signaling |
US11689715B2 (en) * | 2020-09-28 | 2023-06-27 | Tencent America LLC | Non-directional intra prediction for L-shape partitions |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0824341B2 (ja) * | 1985-10-28 | 1996-03-06 | Hitachi Ltd | Image data encoding method |
KR960012933B1 (ko) * | 1993-09-17 | 1996-09-25 | Daewoo Electronics Co., Ltd. | Motion image compression method and apparatus |
US5799110A (en) * | 1995-11-09 | 1998-08-25 | Utah State University Foundation | Hierarchical adaptive multistage vector quantization |
US5764235A (en) * | 1996-03-25 | 1998-06-09 | Insight Development Corporation | Computer implemented method and system for transmitting graphical images from server to client at user selectable resolution |
US6347157B2 (en) * | 1998-07-24 | 2002-02-12 | Picsurf, Inc. | System and method for encoding a video sequence using spatial and temporal transforms |
TW595124B (en) * | 2003-10-08 | 2004-06-21 | Mediatek Inc | Method and apparatus for encoding video signals |
KR100999091B1 (ko) * | 2003-11-17 | 2010-12-07 | Samsung Electronics Co., Ltd. | Image compression method and apparatus using variable blocks of arbitrary size |
KR100565066B1 (ko) | 2004-02-11 | 2006-03-30 | Samsung Electronics Co., Ltd. | Motion-compensated interpolation method based on overlapped block motion estimation, and frame-rate conversion apparatus applying the same |
KR20060042295A (ko) * | 2004-11-09 | 2006-05-12 | Samsung Electronics Co., Ltd. | Image data encoding and decoding method and apparatus |
WO2007034918A1 (ja) | 2005-09-26 | 2007-03-29 | Mitsubishi Electric Corporation | 動画像符号化装置及び動画像復号装置 |
WO2008042127A2 (en) * | 2006-09-29 | 2008-04-10 | Thomson Licensing | Geometric intra prediction |
KR100827093B1 (ko) * | 2006-10-13 | 2008-05-02 | Samsung Electronics Co., Ltd. | Image encoding method and apparatus |
KR100846512B1 (ko) * | 2006-12-28 | 2008-07-17 | Samsung Electronics Co., Ltd. | Image encoding and decoding method and apparatus |
KR101366093B1 (ko) * | 2007-03-28 | 2014-02-21 | Samsung Electronics Co., Ltd. | Image encoding and decoding method and apparatus |
EP2213098A2 (en) * | 2007-10-16 | 2010-08-04 | Thomson Licensing | Methods and apparatus for video encoding and decoding geometrically partitioned super blocks |
KR101630006B1 (ko) * | 2009-12-04 | 2016-06-13 | Thomson Licensing | Texture-pattern-adaptive partitioned block transform |
US8811760B2 (en) * | 2011-10-25 | 2014-08-19 | Mitsubishi Electric Research Laboratories, Inc. | Coding images using intra prediction modes |
- 2010
- 2010-12-01 CN CN201610284616.5A patent/CN105959688B/zh active Active
- 2010-12-01 CN CN201510176228.0A patent/CN104768005B/zh active Active
- 2010-12-01 WO PCT/KR2010/008563 patent/WO2011068360A2/ko active Application Filing
- 2010-12-01 CN CN201610288628.5A patent/CN105812812B/zh active Active
- 2010-12-01 CN CN201610284065.2A patent/CN105898311A/zh active Pending
- 2010-12-01 EP EP10834775.8A patent/EP2509319A4/en not_active Ceased
- 2010-12-01 CN CN201510219813.4A patent/CN104811717B/zh active Active
- 2010-12-01 CN CN201080054678.8A patent/CN102648631B/zh active Active
- 2010-12-01 CN CN201510110163.XA patent/CN104702951B/zh active Active
- 2010-12-01 EP EP15168459.4A patent/EP2942960A1/en not_active Withdrawn
- 2010-12-01 US US13/513,122 patent/US8995778B2/en active Active
- 2014
- 2014-09-18 US US14/490,159 patent/US9058659B2/en active Active
- 2014-09-18 US US14/490,101 patent/US9047667B2/en active Active
- 2014-09-18 US US14/489,893 patent/US9053543B2/en active Active
- 2014-09-18 US US14/490,255 patent/US9053544B2/en active Active
- 2015
- 2015-03-31 US US14/675,391 patent/US20150208091A1/en not_active Abandoned
- 2015-06-15 US US14/739,884 patent/US20150281688A1/en not_active Abandoned
Non-Patent Citations (2)
Title |
---|
None |
See also references of EP2509319A4 |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9449418B2 (en) * | 2010-01-22 | 2016-09-20 | Samsung Electronics Co., Ltd | Method and apparatus for creating animation message |
US20110181604A1 (en) * | 2010-01-22 | 2011-07-28 | Samsung Electronics Co., Ltd. | Method and apparatus for creating animation message |
US20130216147A1 (en) * | 2011-04-13 | 2013-08-22 | Huawei Technologies Co., Ltd. | Image Encoding and Decoding Methods and Related Devices |
US8718389B2 (en) * | 2011-04-13 | 2014-05-06 | Huawei Technologies Co., Ltd. | Image encoding and decoding methods and related devices |
US8891889B2 (en) | 2011-04-13 | 2014-11-18 | Huawei Technologies Co., Ltd. | Image encoding and decoding methods and related devices |
CN107277544B (zh) * | 2011-08-17 | 2020-10-27 | Canon Kabushiki Kaisha | Encoding apparatus, encoding method, and storage medium |
US10771806B2 (en) | 2011-08-17 | 2020-09-08 | Canon Kabushiki Kaisha | Method and device for encoding a sequence of images and method and device for decoding a sequence of images |
CN107277544A (zh) * | 2011-08-17 | 2017-10-20 | Canon Kabushiki Kaisha | Encoding apparatus, encoding method, and storage medium |
US9554140B1 (en) | 2011-11-08 | 2017-01-24 | Kt Corporation | Method and apparatus for encoding image, and method and apparatus for decoding image |
CN104067613B (zh) * | 2011-11-08 | 2018-01-02 | Kt Corporation | Image encoding method and apparatus, and image decoding method and apparatus |
CN104378632B (zh) * | 2011-11-08 | 2018-02-09 | Kt Corporation | Method for decoding a video signal having a current block to be decoded |
CN104067613A (zh) * | 2011-11-08 | 2014-09-24 | Kt Corporation | Image encoding method and apparatus, and image decoding method and apparatus |
US9729893B2 (en) | 2011-11-08 | 2017-08-08 | Kt Corporation | Method and apparatus for encoding image, and method and apparatus for decoding image |
US9578338B1 (en) | 2011-11-08 | 2017-02-21 | Kt Corporation | Method and apparatus for encoding image, and method and apparatus for decoding image |
CN104378632A (zh) * | 2011-11-08 | 2015-02-25 | Kt Corporation | Method for decoding a video signal having a current block to be decoded |
CN106101721A (zh) * | 2011-11-23 | 2016-11-09 | Humax Co., Ltd. | Video decoding device |
US10499062B2 (en) | 2011-11-25 | 2019-12-03 | Samsung Electronics Co., Ltd. | Image coding method and device for buffer management of decoder, and image decoding method and device |
US9912944B2 (en) * | 2012-04-16 | 2018-03-06 | Qualcomm Incorporated | Simplified non-square quadtree transforms for video coding |
US20130272381A1 (en) * | 2012-04-16 | 2013-10-17 | Qualcomm Incorporated | Simplified non-square quadtree transforms for video coding |
US9066025B2 (en) | 2013-03-15 | 2015-06-23 | Samsung Electronics Co., Ltd. | Control of frequency lifting super-resolution with image features |
US9305332B2 (en) | 2013-03-15 | 2016-04-05 | Samsung Electronics Company, Ltd. | Creating details in an image with frequency lifting |
US9349188B2 (en) | 2013-03-15 | 2016-05-24 | Samsung Electronics Co., Ltd. | Creating details in an image with adaptive frequency strength controlled transform |
US9536288B2 (en) | 2013-03-15 | 2017-01-03 | Samsung Electronics Co., Ltd. | Creating details in an image with adaptive frequency lifting |
WO2014142630A1 (en) * | 2013-03-15 | 2014-09-18 | Samsung Electronics Co., Ltd. | Creating details in an image with frequency lifting |
US10291827B2 (en) | 2013-11-22 | 2019-05-14 | Futurewei Technologies, Inc. | Advanced screen content coding solution |
US10638143B2 (en) | 2014-03-21 | 2020-04-28 | Futurewei Technologies, Inc. | Advanced screen content coding with improved color table and index map coding methods |
US10091512B2 (en) | 2014-05-23 | 2018-10-02 | Futurewei Technologies, Inc. | Advanced screen content coding with improved palette table and index map coding methods |
US9652829B2 (en) | 2015-01-22 | 2017-05-16 | Samsung Electronics Co., Ltd. | Video super-resolution by fast video segmentation for boundary accuracy control |
CN114666580A (zh) * | 2019-12-31 | 2022-06-24 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Inter prediction method, encoder, decoder, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN102648631A (zh) | 2012-08-22 |
US20150016739A1 (en) | 2015-01-15 |
CN105812812A (zh) | 2016-07-27 |
CN104702951B (zh) | 2018-10-19 |
CN104768005B (zh) | 2018-07-31 |
CN104811717A (zh) | 2015-07-29 |
CN102648631B (zh) | 2016-03-30 |
EP2942960A1 (en) | 2015-11-11 |
EP2509319A4 (en) | 2013-07-10 |
CN104768005A (zh) | 2015-07-08 |
US20130129237A1 (en) | 2013-05-23 |
CN105959688A (zh) | 2016-09-21 |
CN104811717B (zh) | 2018-09-14 |
CN105898311A (zh) | 2016-08-24 |
US20150208091A1 (en) | 2015-07-23 |
US9053544B2 (en) | 2015-06-09 |
US20150016740A1 (en) | 2015-01-15 |
US9047667B2 (en) | 2015-06-02 |
US20150016737A1 (en) | 2015-01-15 |
CN105812812B (zh) | 2018-08-24 |
EP2509319A2 (en) | 2012-10-10 |
US8995778B2 (en) | 2015-03-31 |
US20150281688A1 (en) | 2015-10-01 |
CN104702951A (zh) | 2015-06-10 |
US9058659B2 (en) | 2015-06-16 |
CN105959688B (zh) | 2019-01-29 |
WO2011068360A3 (ko) | 2011-09-15 |
US9053543B2 (en) | 2015-06-09 |
US20150016738A1 (en) | 2015-01-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011068360A2 (ko) | Method and apparatus for encoding/decoding high resolution images | |
WO2018030599A1 (ko) | Intra-prediction-mode-based image processing method and apparatus therefor | |
WO2017018664A1 (ko) | Intra-prediction-mode-based image processing method and apparatus therefor | |
WO2018097693A2 (ko) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
WO2018030773A1 (ko) | Image encoding/decoding method and apparatus | |
WO2018097692A2 (ko) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
WO2018047995A1 (ko) | Intra-prediction-mode-based image processing method and apparatus therefor | |
WO2020050685A1 (ko) | Image encoding/decoding method and apparatus using intra prediction | |
WO2011071328A2 (en) | Method and apparatus for encoding video, and method and apparatus for decoding video | |
WO2011126273A2 (en) | Method and apparatus for encoding video by compensating for pixel value according to pixel groups, and method and apparatus for decoding video by the same | |
WO2012005520A2 (en) | Method and apparatus for encoding video by using block merging, and method and apparatus for decoding video by using block merging | |
WO2013002554A2 (ko) | Video encoding method and apparatus using offset adjustment according to pixel classification, and video decoding method and apparatus | |
WO2011096741A2 (en) | Method and apparatus for encoding video based on scanning order of hierarchical data units, and method and apparatus for decoding video based on scanning order of hierarchical data units | |
WO2017086748A1 (ko) | Method and apparatus for encoding/decoding images using geometrically transformed pictures | |
WO2018124333A1 (ko) | Intra-prediction-mode-based image processing method and apparatus therefor | |
WO2015093890A1 (ko) | Video encoding method and apparatus involving intra prediction, and video decoding method and apparatus | |
WO2019182292A1 (ko) | Video signal processing method and apparatus | |
WO2020071616A1 (ko) | CCLM-based intra prediction method and apparatus | |
WO2021054676A1 (ко) | Image encoding/decoding method and apparatus performing PROF, and method of transmitting a bitstream | |
WO2019194653A1 (ко) | Image processing method providing combined merge mode processing of motion information, and image decoding and encoding methods and apparatus using the same | |
WO2015137785A1 (ко) | Image encoding method and apparatus for sample value compensation, and image decoding method and apparatus for sample value compensation | |
WO2015133838A1 (ко) | Polygon-unit-based image encoding/decoding method and apparatus therefor | |
WO2018174457A1 (ко) | Image processing method and apparatus therefor | |
WO2019194425A1 (ко) | Apparatus and method for applying an artificial neural network to image encoding or decoding | |
WO2018101700A1 (ко) | Image encoding/decoding method and apparatus, and recording medium storing a bitstream | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080054678.8 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10834775 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13513122 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2010834775 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010834775 Country of ref document: EP |