WO2011071325A2 - Method and apparatus for encoding and decoding image by using rotational transform


Info

Publication number
WO2011071325A2
Authority
WO
WIPO (PCT)
Prior art keywords
frequency coefficient
coefficient matrix
rot
angle parameter
matrix
Prior art date
Application number
PCT/KR2010/008818
Other languages
English (en)
French (fr)
Other versions
WO2011071325A3 (en)
Inventor
Elena Alshina
Alexander Alshin
Vadim Seregin
Original Assignee
Samsung Electronics Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to CN2010800634990A priority Critical patent/CN102754438A/zh
Priority to EP10836220.3A priority patent/EP2510691A4/en
Publication of WO2011071325A2 publication Critical patent/WO2011071325A2/en
Publication of WO2011071325A3 publication Critical patent/WO2011071325A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Definitions

  • Exemplary embodiments relate to a method and apparatus for encoding and decoding an image, and more particularly, to a method and apparatus for encoding and decoding coefficients generated by transforming an image from a pixel domain to a frequency domain.
  • DCT: discrete cosine transform
  • AV: audio/video
  • One or more exemplary embodiments provide a method and apparatus for encoding and decoding an image, and a computer readable recording medium having recorded thereon a computer program for executing the method.
  • a frequency coefficient matrix may be encoded at a high compression ratio on a strong mathematical basis, and thus an overall image compression ratio may be greatly increased.
  • FIG. 1 is a block diagram of an apparatus for encoding an image, according to an exemplary embodiment
  • FIG. 2 is a block diagram of an apparatus for decoding an image, according to an exemplary embodiment
  • FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment
  • FIG. 4 is a block diagram of an image encoder based on a coding unit, according to an exemplary embodiment
  • FIG. 5 is a block diagram of an image decoder based on a coding unit, according to an exemplary embodiment
  • FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment
  • FIG. 7 illustrates a coding unit and a transform unit, according to an exemplary embodiment
  • FIGs. 8A through 8D illustrate division shapes of a coding unit, a prediction unit, and a transform unit, according to an exemplary embodiment
  • FIG. 9 is a block diagram of an apparatus for encoding an image, according to another exemplary embodiment.
  • FIG. 10 is a block diagram of a transformer illustrated in FIG. 9, according to an exemplary embodiment
  • FIGs. 11A through 11C are diagrams for describing rotational transform (ROT) according to an exemplary embodiment
  • FIGs. 12A through 12I illustrate ROT matrices according to an exemplary embodiment
  • FIGs. 13A and 13B illustrate inverse ROT syntaxes according to an exemplary embodiment
  • FIG. 14 illustrates Euler angles according to an exemplary embodiment
  • FIG. 15 illustrates pseudo-random points according to an exemplary embodiment
  • FIG. 16 is a block diagram of an apparatus for decoding an image, according to another embodiment.
  • FIG. 17 is a block diagram of an inverse transformer, according to an exemplary embodiment
  • FIG. 18 is a flowchart of a method of encoding an image, according to an exemplary embodiment
  • FIG. 19 is a flowchart of a method of decoding an image, according to an exemplary embodiment
  • FIGS. 20A and 20B illustrate ROT matrices according to another exemplary embodiment
  • FIGs. 21A and 21B illustrate inverse ROT syntaxes according to another exemplary embodiment.
  • a method of encoding an image including: generating a first frequency coefficient matrix by performing discrete cosine transform (DCT) on a current block; determining angle parameters based on whether the current block is intra predicted; generating a second frequency coefficient matrix by performing a partial switch between at least one of rows and columns of the first frequency coefficient matrix based on the determined angle parameters; quantizing the second frequency coefficient matrix; and entropy-encoding information about the angle parameters and the second frequency coefficient matrix, wherein the angle parameters indicate a degree of partial switching between the at least one of the rows and the columns of the first frequency coefficient matrix.
  • DCT: discrete cosine transform
  • the determining the angle parameters may include determining the angle parameters based on an intra prediction direction of the current block.
  • the determining the angle parameters based on the intra prediction direction of the current block may include selecting a first matrix for performing partial switching between the rows and selecting a second matrix for performing partial switching between the columns from among a plurality of matrices based on the intra prediction direction of the current block.
  • the generating the second frequency coefficient matrix may include multiplying a left side of the first frequency coefficient matrix by the selected first matrix and multiplying a right side of the first frequency coefficient matrix by the selected second matrix.
  • the angle parameters may be parameters for Euler angles.
  • a method of decoding an image including: entropy-decoding information about predetermined angle parameters and a second frequency coefficient matrix; determining the angle parameters based on the entropy-decoded information about the angle parameters and whether a current block is intra predicted; inverse quantizing the entropy-decoded second frequency coefficient matrix; generating a first frequency coefficient matrix by performing a partial switch between at least one of rows and columns of the second frequency coefficient matrix based on the determined angle parameters; and restoring the current block by performing inverse DCT on the first frequency coefficient matrix, wherein the angle parameters indicate a degree of partial switching between the at least one of the rows and the columns of the second frequency coefficient matrix.
  • an apparatus for encoding an image including: a transformer which generates a first frequency coefficient matrix by performing DCT on a current block and generates a second frequency coefficient matrix by performing a partial switch between at least one of rows and columns of the first frequency coefficient matrix based on angle parameters determined according to whether the current block is intra predicted; a quantizer which quantizes the second frequency coefficient matrix; and an entropy encoder which entropy-encodes information about the angle parameters and the second frequency coefficient matrix, wherein the angle parameters indicate a degree of partial switching between the at least one of the rows and the columns of the first frequency coefficient matrix.
  • an apparatus for decoding an image including: an entropy decoder which entropy-decodes information about predetermined angle parameters and a second frequency coefficient matrix; an inverse quantizer which inverse quantizes the entropy-decoded second frequency coefficient matrix; and an inverse transformer which generates a first frequency coefficient matrix by performing a partial switch between at least one of rows and columns of the second frequency coefficient matrix based on the angle parameters determined according to the entropy-decoded information about the angle parameters and whether a current block is intra predicted, and restores the current block by performing inverse DCT on the first frequency coefficient matrix, wherein the angle parameters indicate a degree of partial switching between the at least one of the rows and the columns of the second frequency coefficient matrix.
  • a computer readable recording medium having embodied thereon a computer program for executing at least one of the method of encoding an image and the method of decoding an image.
  • an “image” may denote a still image for a video or a moving image, that is, the video itself.
  • FIG. 1 is a block diagram of an apparatus 100 for encoding an image, according to an exemplary embodiment.
  • the apparatus 100 for encoding an image includes a maximum coding unit divider 110, an encoding depth determiner 120, an image data encoder 130, and an encoding information encoder 140.
  • the maximum coding unit divider 110 may divide a current frame or slice based on a maximum coding unit that is a coding unit of the largest size. That is, the maximum coding unit divider 110 may divide the current frame or slice into at least one maximum coding unit.
  • a coding unit may be represented using a maximum coding unit and a depth.
  • the maximum coding unit indicates a coding unit having the largest size from among coding units of the current frame, and the depth indicates a degree of hierarchically decreasing the coding unit.
  • a coding unit may decrease from a maximum coding unit to a minimum coding unit, wherein a depth of the maximum coding unit is defined as a minimum depth and a depth of the minimum coding unit is defined as a maximum depth.
  • a sub coding unit of a kth depth may include a plurality of sub coding units of a (k+n)th depth (k and n are integers equal to or greater than 1).
  • encoding an image in a greater coding unit may yield a higher image compression ratio.
  • however, if the coding unit is too large, an image may not be efficiently encoded because continuously changing image characteristics are not reflected.
  • for a smooth area such as the sea or sky, the greater a coding unit is, the more a compression ratio may increase.
  • conversely, for a complex area, the smaller a coding unit is, the more a compression ratio may increase.
  • a different maximum image coding unit and a different maximum depth are set for each frame or slice. Since a maximum depth denotes the maximum number of times by which a coding unit may decrease, the size of each minimum coding unit included in a maximum image coding unit may be variably set according to a maximum depth. The maximum depth may be determined differently for each frame or slice or for each maximum coding unit.
  • the encoding depth determiner 120 determines a division shape of the maximum coding unit.
  • the division shape may be determined based on calculation of rate-distortion (RD) costs.
  • the determined division shape of the maximum coding unit is provided to the encoding information encoder 140, and image data according to maximum coding units is provided to the image data encoder 130.
  • a maximum coding unit may be divided into sub coding units having different sizes according to different depths, and the sub coding units having different sizes, which are included in the maximum coding unit, may be predicted or frequency-transformed based on processing units having different sizes.
  • the apparatus 100 for encoding an image may perform a plurality of processing operations for image encoding based on processing units having various sizes and various shapes.
  • processing operations such as at least one of prediction, transform, and entropy encoding are performed, wherein processing units having the same size or different sizes may be used for every operation.
  • the apparatus 100 for encoding an image may select a processing unit that is different from a coding unit to predict the coding unit.
  • processing units for prediction may be 2N×2N, 2N×N, N×2N, and N×N.
  • motion prediction may be performed based on a processing unit having a shape whereby at least one of height and width of a coding unit is equally divided by two.
  • a processing unit which is the base of prediction, is defined as a ‘prediction unit’.
  • a prediction mode may be at least one of an intra mode, an inter mode, and a skip mode, and a specific prediction mode may be performed for only a prediction unit having a specific size or shape.
  • the intra mode may be performed for only prediction units having the sizes of 2N×2N and N×N of which the shape is a square.
  • the skip mode may be performed for only a prediction unit having the size of 2N×2N. If a plurality of prediction units exist in a coding unit, the prediction mode with the least encoding errors may be selected after performing prediction for every prediction unit.
  • the apparatus 100 for encoding an image may perform transform on image data based on a processing unit having a different size from a coding unit.
  • the transform may be performed based on a processing unit having a size equal to or smaller than that of the coding unit.
  • a processing unit which is the base of transform, is defined as a ‘transform unit’.
  • the transform may be discrete cosine transform (DCT), Karhunen-Loeve transform (KLT), or any other fixed-point spatial transform.
  • the encoding depth determiner 120 may determine sub coding units included in a maximum coding unit using RD optimization based on a Lagrangian multiplier. In other words, the encoding depth determiner 120 may determine which shape a plurality of sub coding units divided from the maximum coding unit have, wherein the plurality of sub coding units have different sizes according to their depths.
  • the image data encoder 130 outputs a bitstream by encoding the maximum coding unit based on the division shapes determined by the encoding depth determiner 120.
  • the encoding information encoder 140 encodes information about an encoding mode of the maximum coding unit determined by the encoding depth determiner 120. In other words, the encoding information encoder 140 outputs a bitstream by encoding information about a division shape of the maximum coding unit, information about the maximum depth, and information about an encoding mode of a sub coding unit for each depth.
  • the information about the encoding mode of the sub coding unit may include information about a prediction unit of the sub coding unit, information about a prediction mode for each prediction unit, and information about a transform unit of the sub coding unit.
  • the information about the division shape of the maximum coding unit may be information, e.g., flag information, indicating whether each coding unit is divided. For example, when the maximum coding unit is divided and encoded, information indicating whether the maximum coding unit is divided is encoded. Also, when a sub coding unit divided from the maximum coding unit is divided and encoded, information indicating whether the sub coding unit is divided is encoded.
  • information about an encoding mode may be determined for one maximum coding unit.
  • the apparatus 100 for encoding an image may generate sub coding units by equally dividing both height and width of a maximum coding unit by two according to an increase of depth. That is, when the size of a coding unit of a kth depth is 2N×2N, the size of a coding unit of a (k+1)th depth is N×N.
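  • As an illustrative sketch (not part of the patent text), the size of a square coding unit at a given depth follows directly from the halving rule above:

```python
def coding_unit_size(max_size: int, depth: int) -> int:
    """Edge length of a coding unit at `depth`, given a square maximum coding
    unit of edge length `max_size` (e.g., 64); one halving per depth level."""
    return max_size >> depth

print([coding_unit_size(64, d) for d in range(4)])  # [64, 32, 16, 8]
```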
  • the apparatus 100 for encoding an image may determine an optimal division shape for each maximum coding unit based on sizes of maximum coding units and a maximum depth in consideration of image characteristics.
  • by adjusting the size of a maximum coding unit in consideration of image characteristics and encoding an image through division of the maximum coding unit into sub coding units of different depths, images having various resolutions may be more efficiently encoded.
  • FIG. 2 is a block diagram of an apparatus 200 for decoding an image according to an exemplary embodiment.
  • the apparatus 200 for decoding an image includes an image data acquisition unit 210, an encoding information extractor 220, and an image data decoder 230.
  • the image data acquisition unit 210 acquires image data according to maximum coding units by parsing a bitstream received by the apparatus 200 for decoding an image and outputs the image data to the image data decoder 230.
  • the image data acquisition unit 210 may extract information about a maximum coding unit of a current frame or slice from a header of the current frame or slice. In other words, the image data acquisition unit 210 divides the bitstream in the maximum coding unit so that the image data decoder 230 may decode the image data according to maximum coding units.
  • the encoding information extractor 220 extracts information about a maximum coding unit, a maximum depth, a division shape of the maximum coding unit, and an encoding mode of sub coding units from the header of the current frame by parsing the bitstream received by the apparatus 200 for decoding an image.
  • the information about a division shape and the information about an encoding mode are provided to the image data decoder 230.
  • the information about a division shape of the maximum coding unit may include information about sub coding units having different sizes according to depths and included in the maximum coding unit, and may be information (e.g., flag information) indicating whether each coding unit is divided.
  • the information about an encoding mode may include information about a prediction unit according to sub coding units, information about a prediction mode, and information about a transform unit.
  • the image data decoder 230 restores the current frame by decoding image data of every maximum coding unit based on the information extracted by the encoding information extractor 220.
  • the image data decoder 230 may decode sub coding units included in a maximum coding unit based on the information about a division shape of the maximum coding unit.
  • a decoding process may include a prediction process including intra prediction and motion compensation and an inverse transform process.
  • the image data decoder 230 may perform intra prediction or inter prediction based on information about a prediction unit and information about a prediction mode in order to predict a prediction unit.
  • the image data decoder 230 may also perform inverse transform for each sub coding unit based on information about a transform unit of a sub coding unit.
  • FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment.
  • the hierarchical coding units may include coding units whose width×heights are 64×64, 32×32, 16×16, 8×8, and 4×4. Besides these coding units having perfect square shapes, coding units whose width×heights are 64×32, 32×64, 32×16, 16×32, 16×8, 8×16, 8×4, and 4×8 may also exist.
  • the size of a maximum coding unit is set to 64×64, and a maximum depth is set to 2.
  • the size of a maximum coding unit is set to 64×64, and a maximum depth is set to 3.
  • the size of a maximum coding unit is set to 16×16, and a maximum depth is set to 2.
  • a maximum size of a coding unit may be relatively great to increase a compression ratio and exactly reflect image characteristics. Accordingly, for the image data 310 and 320 having higher resolution than the image data 330, 64×64 may be selected as the size of a maximum coding unit.
  • a maximum depth indicates the total number of layers in the hierarchical coding units. Since the maximum depth of the image data 310 is 2, a coding unit 315 of the image data 310 may include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32 and 16, according to an increase of a depth.
  • a coding unit 335 of the image data 330 may include a maximum coding unit whose longer axis size is 16 and coding units whose longer axis sizes are 8 and 4, according to an increase of a depth.
  • a coding unit 325 of the image data 320 may include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32, 16, 8 and 4 according to an increase of a depth. Since an image is encoded based on a smaller sub coding unit as a depth increases, an exemplary embodiment is suitable for encoding an image including more minute scenes.
  • FIG. 4 is a block diagram of an image encoder 400 based on a coding unit, according to an exemplary embodiment.
  • An intra prediction unit 410 performs intra prediction on prediction units of the intra mode in a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495.
  • Residual values are generated based on the prediction units output from the intra prediction unit 410, the motion estimator 420, and the motion compensator 425, and the generated residual values are output as quantized transform coefficients by passing through a transformer 430 and a quantizer 440.
  • the quantized transform coefficients are restored to residual values by passing through an inverse quantizer 460 and an inverse transformer 470, and the restored residual values are post-processed by passing through a deblocking unit 480 and a loop filtering unit 490 and output as the reference frame 495.
  • the quantized transform coefficients may be output as a bitstream 455 by passing through an entropy encoder 450.
  • components of the image encoder 400, i.e., the intra prediction unit 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy encoder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490, perform image encoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transform unit.
  • FIG. 5 is a block diagram of an image decoder 500 based on a coding unit, according to an exemplary embodiment.
  • a bitstream 505 passes through a parser 510 so that encoded image data to be decoded and encoding information necessary for decoding are parsed.
  • the encoded image data is output as inverse-quantized data by passing through an entropy decoder 520 and an inverse quantizer 530 and restored to residual values by passing through an inverse transformer 540.
  • the residual values are restored according to coding units by being added to an intra prediction result of an intra prediction unit 550 or a motion compensation result of a motion compensator 560.
  • the restored coding units are used for prediction of next coding units or a next frame by passing through a deblocking unit 570 and a loop filtering unit 580.
  • components of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra prediction unit 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, perform image decoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transform unit.
  • the intra prediction unit 550 and the motion compensator 560 determine a prediction unit and a prediction mode in a sub coding unit by considering a maximum coding unit and a depth, and the inverse transformer 540 performs inverse transform by considering the size of a transform unit.
  • FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment.
  • the apparatus 100 for encoding an image illustrated in FIG. 1 and the apparatus 200 for decoding an image illustrated in FIG. 2 use hierarchical coding units to perform encoding and decoding in consideration of image characteristics.
  • a maximum coding unit and a maximum depth may be adaptively set according to the image characteristics or variously set according to requirements of a user.
  • a hierarchical coding unit structure 600 has a maximum coding unit 610 whose height and width are 64 and maximum depth is 4. A depth increases along a vertical axis of the hierarchical coding unit structure 600, and as a depth increases, heights and widths of sub coding units 620 to 650 decrease. Prediction units of the maximum coding unit 610 and the sub coding units 620 to 650 are shown along a horizontal axis of the hierarchical coding unit structure 600.
  • the maximum coding unit 610 has a depth of 0 and the size of a coding unit, i.e., height and width, of 64×64.
  • a depth increases along the vertical axis, and there exist a sub coding unit 620 whose size is 32×32 and depth is 1, a sub coding unit 630 whose size is 16×16 and depth is 2, a sub coding unit 640 whose size is 8×8 and depth is 3, and a sub coding unit 650 whose size is 4×4 and depth is 4.
  • the sub coding unit 650 whose size is 4×4 and depth is 4 is a minimum coding unit, and the minimum coding unit may be divided into prediction units, each of which is less than the minimum coding unit.
  • a prediction unit of the maximum coding unit 610 whose depth is 0 may be a prediction unit whose size is equal to the coding unit 610, i.e., 64×64, or a prediction unit 612 whose size is 64×32, a prediction unit 614 whose size is 32×64, or a prediction unit 616 whose size is 32×32, which has a size smaller than the coding unit 610 whose size is 64×64.
  • a prediction unit of the coding unit 620 whose depth is 1 and size is 32×32 may be a prediction unit whose size is equal to the coding unit 620, i.e., 32×32, or a prediction unit 622 whose size is 32×16, a prediction unit 624 whose size is 16×32, or a prediction unit 626 whose size is 16×16, which has a size smaller than the coding unit 620 whose size is 32×32.
  • a prediction unit of the coding unit 630 whose depth is 2 and size is 16×16 may be a prediction unit whose size is equal to the coding unit 630, i.e., 16×16, or a prediction unit 632 whose size is 16×8, a prediction unit 634 whose size is 8×16, or a prediction unit 636 whose size is 8×8, which has a size smaller than the coding unit 630 whose size is 16×16.
  • a prediction unit of the coding unit 640 whose depth is 3 and size is 8×8 may be a prediction unit whose size is equal to the coding unit 640, i.e., 8×8, or a prediction unit 642 whose size is 8×4, a prediction unit 644 whose size is 4×8, or a prediction unit 646 whose size is 4×4, which has a size smaller than the coding unit 640 whose size is 8×8.
  • the coding unit 650 whose depth is 4 and size is 4×4 is a minimum coding unit and a coding unit of a maximum depth
  • a prediction unit of the coding unit 650 may be a prediction unit 650 whose size is 4×4, a prediction unit 652 having a size of 4×2, a prediction unit 654 having a size of 2×4, or a prediction unit 656 having a size of 2×2.
  • FIG. 7 illustrates a coding unit and a transform unit, according to an exemplary embodiment.
  • the apparatus 100 for encoding an image illustrated in FIG. 1 and the apparatus 200 for decoding an image illustrated in FIG. 2 perform encoding and decoding with a maximum coding unit itself or with sub coding units, which are equal to or smaller than the maximum coding unit, divided from the maximum coding unit.
  • the size of a transform unit for transform is selected to be no larger than that of a corresponding coding unit. For example, referring to FIG. 7, when a current coding unit 710 has the size of 64×64, transform may be performed using a transform unit 720 having the size of 32×32.
  • FIGs. 8A through 8D illustrate division shapes of a coding unit, a prediction unit, and a transform unit, according to an exemplary embodiment.
  • FIGs. 8A and 8B illustrate a coding unit and a prediction unit, according to an exemplary embodiment.
  • FIG. 8A shows a division shape selected by the apparatus 100 for encoding an image illustrated in FIG. 1, in order to encode a maximum coding unit 810.
  • the apparatus 100 for encoding an image divides the maximum coding unit 810 into various shapes, performs encoding, and selects an optimal division shape by comparing encoding results of various division shapes with each other based on R-D costs.
  • the maximum coding unit 810 may be encoded without dividing the maximum coding unit 810 as illustrated in FIGs. 8A through 8D.
  • the maximum coding unit 810 whose depth is 0 is encoded by dividing it into sub coding units whose depths are equal to or greater than 1. That is, the maximum coding unit 810 is divided into 4 sub coding units whose depths are 1, and all or some of the sub coding units whose depths are 1 are divided into sub coding units whose depths are 2.
  • a sub coding unit located in an upper-right side and a sub coding unit located in a lower-left side among the sub coding units whose depths are 1 are divided into sub coding units whose depths are equal to or greater than 2.
  • Some of the sub coding units whose depths are equal to or greater than 2 may be divided into sub coding units whose depths are equal to or greater than 3.
  • FIG. 8B shows a division shape of a prediction unit for the maximum coding unit 810.
  • a prediction unit 860 for the maximum coding unit 810 may be divided differently from the maximum coding unit 810. In other words, a prediction unit for each of sub coding units may be smaller than a corresponding sub coding unit.
  • a prediction unit for a sub coding unit 854 located in a lower-right side among the sub coding units whose depths are 1 may be smaller than the sub coding unit 854.
  • prediction units for some sub coding units 814, 816, 850, and 852 from among sub coding units 814, 816, 818, 828, 850, and 852 whose depths are 2 may be smaller than the sub coding units 814, 816, 850, and 852, respectively.
  • prediction units for sub coding units 822, 832, and 848 whose depths are 3 may be smaller than the sub coding units 822, 832, and 848, respectively.
  • the prediction units may have a shape whereby respective sub coding units are equally divided by two in a direction of height or width or have a shape whereby respective sub coding units are equally divided by four in directions of height and width.
  • FIGs. 8C and 8D illustrate a prediction unit and a transform unit, according to an exemplary embodiment.
  • FIG. 8C shows a division shape of a prediction unit for the maximum coding unit 810 shown in FIG. 8B
  • FIG. 8D shows a division shape of a transform unit of the maximum coding unit 810.
  • a division shape of a transform unit 870 may be set differently from the prediction unit 860.
  • a transform unit may be selected with the same size as the coding unit 854.
  • prediction units for coding units 814 and 850 whose depths are 2 are selected with a shape whereby the height of each of the coding units 814 and 850 is equally divided by two
  • a transform unit may be selected with the same size as the original size of each of the coding units 814 and 850.
  • a transform unit may be selected with a smaller size than a prediction unit. For example, when a prediction unit for the coding unit 852 whose depth is 2 is selected with a shape whereby the width of the coding unit 852 is equally divided by two, a transform unit may be selected with a shape whereby the coding unit 852 is equally divided by four in directions of height and width, which has a smaller size than the shape of the prediction unit.
  • FIG. 9 is a block diagram of an apparatus 900 for encoding an image, according to another exemplary embodiment.
  • the apparatus 900 for encoding an image illustrated in FIG. 9 may be a module, which is included in the apparatus 100 for encoding an image illustrated in FIG. 1 or the image encoder 400 illustrated in FIG. 4, for performing the following image encoding processes.
  • the apparatus 900 for encoding an image includes a transformer 910, a quantizer 920, and an entropy encoder 930.
  • the transformer 910 receives and transforms a current block to a frequency domain.
  • the transform to the frequency domain may be DCT or KLT or any other fixed point spatial transform
  • the received current block may be a residual block.
  • the current block may be a transform unit as described above in relation to FIG. 7 or 8D.
  • the current block of a pixel domain is transformed into coefficients of the frequency domain.
  • a transform coefficient matrix is generated by performing DCT or KLT or any other fixed point spatial transform on the current block of the pixel domain.
  • the transformer 910 performs post-processing to partially switch at least one of rows and columns of the DCT coefficient matrix. The operation of the transformer 910 will now be described in detail with reference to FIG. 10.
  • FIG. 10 is a block diagram of the transformer 910 illustrated in FIG. 9, according to an exemplary embodiment.
  • the transformer 910 includes a transform performer 1010, an angle parameter determiner 1020, and a rotational transform (ROT) performer 1030.
  • the transform performer 1010 generates a first frequency coefficient matrix of a frequency domain by transforming a current block of a pixel domain.
  • DCT, KLT, or any other fixed-point spatial transform may be performed so as to generate the first frequency coefficient matrix including DCT coefficients, KLT coefficients, or any other fixed-point spatial transform coefficients.
  • the angle parameter determiner 1020 determines an angle parameter indicating a degree of ROT, that is, a partial switching between at least one of rows and columns. Determining of the angle parameter corresponds to selecting one of a plurality of ROT matrices.
  • the first frequency coefficient matrix is rotationally transformed based on the plurality of ROT matrices that each correspond to a ‘ROT_index’, which will be described later, and the ROT matrix that corresponds to an optimal angle parameter may be selected based on an encoding result.
  • the angle parameter may be determined based on calculation of rate-distortion (RD) costs. In other words, the results obtained by repeatedly performing encoding based on the ROT a plurality of times are compared based on RD costs, thereby selecting one ROT matrix, as will be described later with reference to FIGs. 21A and 21B.
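  • As an illustration of the RD-cost comparison described above (not the patent's implementation), the sketch below picks the candidate minimizing J = D + lambda*R; the helper encode_fn is a hypothetical stand-in for the actual quantization and entropy-coding path:

```python
import numpy as np

def sse(a, b):
    """Sum of squared errors, used here as the distortion term D."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sum((a - b) ** 2))

def select_rot_index(block, rot_candidates, encode_fn, lam):
    """Return the index of the ROT candidate minimizing J = D + lam * R.
    `encode_fn(block, rot)` is a placeholder that must return
    (bits_used, reconstructed_block)."""
    best_idx, best_cost = 0, float("inf")
    for idx, rot in enumerate(rot_candidates):
        bits, recon = encode_fn(block, rot)
        cost = sse(block, recon) + lam * bits
        if cost < best_cost:
            best_idx, best_cost = idx, cost
    return best_idx
```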
  • RD: rate-distortion
  • the angle parameter determiner 1020 determines the ROT matrix
  • the ROT matrix that corresponds to a ‘ROT_index’ in a different range may be selected based on whether a current block is an intra predicted block and an intra prediction direction, as will be described later with reference to FIGs. 12A through 12I.
  • the ROT performer 1030 generates a second frequency coefficient matrix by receiving the first frequency coefficient matrix generated from the transform performer 1010 and performing ROT according to the present exemplary embodiment.
  • the ROT is performed based on the ROT matrix determined from the angle parameter determiner 1020 and thus at least one of the rows and columns of the first frequency coefficient matrix is partially switched, thereby generating the second frequency coefficient matrix.
  • the ROT will be described more fully with reference to FIGs. 11A through 11C.
  • FIGs. 11A through 11C are diagrams for describing ROT according to an exemplary embodiment.
  • FIGs. 11A through 11C the switching of rows and columns of the first frequency coefficient matrix is described.
  • the ROT performer 1030 partially switches at least one of rows and columns of the first frequency coefficient matrix.
  • partial switching of rows or columns involves partially switching values of two rows or columns by using a certain function such as a sinusoidal function instead of unconditionally switching values of two rows or columns in one to one correspondence.
  • switching of two rows A and B may be defined according to the value of a parameter ‘a’ as represented in Equation 1.
  • Row A(new) = cos(a) * row A(old) - sin(a) * row B(old)
  • Row B(new) = sin(a) * row A(old) + cos(a) * row B(old)
  • the parameter ‘a’ operates as an angle.
  • ‘a’ of Equation 1 is only an example of the angle parameter and in the present exemplary embodiment, a parameter indicating a degree of partial switching between rows and columns of a DCT matrix is defined as an angle parameter.
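  • A minimal sketch of the partial switch of Equation 1, assuming numpy; the function name is illustrative and not taken from any reference implementation:

```python
import numpy as np

def partial_switch_rows(matrix, i, j, a):
    """Partially switch rows i and j by angle `a`, as in Equation 1:
    new_i = cos(a)*old_i - sin(a)*old_j; new_j = sin(a)*old_i + cos(a)*old_j."""
    m = np.array(matrix, dtype=float)
    old_i, old_j = m[i].copy(), m[j].copy()
    m[i] = np.cos(a) * old_i - np.sin(a) * old_j
    m[j] = np.sin(a) * old_i + np.cos(a) * old_j
    return m

# a = 0 leaves both rows unchanged; a = pi/2 swaps them up to a sign change.
```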
  • FIG. 11A illustrates a case in which a ROT is performed on a 4x4 frequency coefficient matrix according to an exemplary embodiment.
  • three angle parameters α1, α2, α3 are used in a partial switch between rows of a frequency coefficient matrix, and three angle parameters α4, α5, α6 are used in a partial switch between columns.
  • FIG. 11B illustrates a case in which a ROT is performed on an 8x8 frequency coefficient matrix according to an exemplary embodiment.
  • α1, α2, α3, α4, α5, α6 are used in a partial switch between rows
  • α7, α8, α9, α10, α11, α12 are used in a partial switch between columns.
  • FIG. 11C illustrates a case in which a ROT is performed on a frequency coefficient matrix having a size equal to or greater than 16x16 according to an exemplary embodiment.
  • a compression rate is improved in that a second frequency coefficient matrix is generated by performing the ROT on the first frequency coefficient matrix; however, if the number of angle parameters is increased, the overhead also increases, and a total amount of data is not decreased.
  • therefore, when the ROT is performed on a first frequency coefficient matrix having a size equal to or greater than a predetermined size, e.g., 16x16, coefficient sampling is used.
  • the ROT performer 1030 selects a sampled frequency coefficient matrix 1120 including only some of coefficients of a first frequency coefficient matrix 1110 and then performs the ROT on the selected frequency coefficient matrix 1120.
  • the ROT is not performed on a remaining portion 1130 of the first frequency coefficient matrix 1110.
  • the ROT performer 1030 only selects coefficients that have a low frequency component and that may have a value other than 0, and performs the ROT on the coefficients.
  • a frequency coefficient such as DCT includes coefficients with respect to a low frequency component at an upper left corner of the frequency coefficient matrix.
  • the ROT performer 1030 selects only coefficients positioned at an upper left corner of the first frequency coefficient matrix 1110, and then performs the ROT.
  • the ROT performer 1030 performs the ROT on the frequency coefficient matrix 1120 having a size of 8x8 in the same manner as the ROT in relation to FIG. 11B
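  • A minimal sketch of the coefficient sampling just described, assuming numpy and a placeholder rot_8x8 routine standing in for the 8x8 ROT of FIG. 11B; only the sampled low-frequency corner is rotated:

```python
import numpy as np

def rot_with_sampling(coeff_matrix, rot_8x8, n=8):
    """Apply ROT only to the top-left n x n (low-frequency) corner of a large
    (e.g., 16x16 or bigger) frequency coefficient matrix; all remaining
    coefficients are passed through unchanged."""
    out = np.array(coeff_matrix, dtype=float)
    out[:n, :n] = rot_8x8(out[:n, :n])
    return out
```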
  • partial switching between at least one of the rows and columns of the first frequency coefficient matrix is performed based on an angle parameter.
  • a certain angle parameter may achieve a high compression rate depending on whether a current block is an intra predicted block and on the intra prediction direction.
  • efficiency of ROT may be increased by applying a different angle parameter.
  • efficiency of ROT may be increased by applying a different angle parameter according to an intra prediction direction.
  • the ROT performer 1030 may perform ROT on the first frequency coefficient matrix by multiplying it by different ROT matrices, as represented by Equation 3, based on whether the current block is an intra predicted block and on the intra prediction direction.
  • a second frequency coefficient matrix that is generated as a result of performing the ROT with the angle parameter α1 applied first and then the angle parameter α2 is different from a second frequency coefficient matrix that is generated as a result of performing the ROT with the angle parameter α2 applied first and then the angle parameter α1. This will be described in detail with reference to FIG. 14.
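  • Before turning to FIG. 14, this order dependence can be seen from the fact that rotation matrices generally do not commute; the following check is illustrative only (the helper name and angle values are arbitrary, not from the patent):

```python
import numpy as np

def plane_rotation(n, i, j, a):
    """n x n rotation that partially switches rows/axes i and j by angle a."""
    g = np.eye(n)
    g[i, i] = g[j, j] = np.cos(a)
    g[i, j], g[j, i] = -np.sin(a), np.sin(a)
    return g

r1 = plane_rotation(4, 0, 1, 0.3)  # stands in for the step driven by one angle
r2 = plane_rotation(4, 1, 2, 0.5)  # stands in for the step driven by another
print(np.allclose(r1 @ r2, r2 @ r1))  # False: the order of application matters
```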
  • FIG. 14 is a diagram of Euler angles according to another exemplary embodiment.
  • a switch between rows or between columns of a matrix is similar to rotation of coordinate axes in a three-dimensional (3D) space. That is, three rows or columns correspond to X, Y, and Z axes of 3D coordinates, respectively.
  • the angles α, β, and γ indicate the Euler angles.
  • X, Y, and Z axes indicate coordinate axes before the rotation
  • X’, Y’, and Z’ axes indicate coordinate axes after the rotation.
  • An N-axis is an intersection between an X-Y plane and an X’-Y’ plane.
  • the N-axis is referred to as ‘line of nodes’.
  • the angle α indicates the angle between the X-axis and the N-axis, measured as a rotation around the Z-axis.
  • the angle β indicates the angle between the Z-axis and the Z’-axis, measured as a rotation around the N-axis.
  • the angle γ indicates the angle between the N-axis and the X’-axis, measured as a rotation around the Z’-axis.
  • a first matrix indicates rotation around the Z’-axis.
  • a second matrix indicates rotation around the N-axis.
  • a third matrix indicates rotation around the Z-axis.
  • the switch between the rows or between the columns of the matrix may be indicated as rotation of coordinates axes using the Euler angles.
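  • As an illustration of the composition just described, the following sketch builds a 3x3 rotation from the Euler angles α, β, and γ in the classical Z-X-Z convention; this is standard geometry with illustrative function names, not the patent's Equation 4:

```python
import numpy as np

def rot_z(t):
    """3x3 rotation about the Z-axis by angle t."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(t):
    """3x3 rotation about the X-axis by angle t (playing the role of the N-axis)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def euler_rotation(alpha, beta, gamma):
    """Compose the three rotations described above: about Z, then about the
    line of nodes, then about the new Z' axis (Z-X-Z Euler convention)."""
    return rot_z(alpha) @ rot_x(beta) @ rot_z(gamma)
```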
  • the second frequency coefficient matrix generated when the ROT performer 1030 of the transformer 910 performs ROT on the first frequency coefficient matrix is input to the quantizer 920.
  • the quantizer 920 quantizes coefficients included in the second frequency coefficient matrix according to a predetermined quantization step and the entropy encoder 930 entropy-encodes the quantized second frequency coefficient matrix.
  • Entropy encoding is performed according to a context-adaptive binary arithmetic coding (CABAC) method or a context-adaptive variable length coding (CAVLC) method. If the first frequency coefficient matrix has a large size and thus ROT is performed on the matrix 1120 including only some sampled coefficients, the entire first frequency coefficient matrix 1110 including the selected matrix 1120 on which ROT is performed and the remaining portion 1130 on which ROT is not performed may be quantized and entropy-encoded.
  • CABAC: context-adaptive binary arithmetic coding
  • CAVLC: context-adaptive variable length coding
  • the entropy encoder 930 entropy-encodes information about the angle parameter used in ROT in the transformer 910.
  • the information about the angle parameter may be ‘ROT_index’ of FIGs. 12A through 12I or ‘idx’ of FIGs. 20A and 21B.
  • ROT is performed on a current block by using a ROT matrix that corresponds to a ‘ROT_index’ in each different range according to whether the current block is predicted according to intra prediction and an intra prediction direction. Accordingly, when the information about the angle parameter, that is, ‘ROT_index’ is entropy-encoded, the ‘ROT_index’ in the entire range may not be encoded.
  • the ‘ROT_index’ in a first range may only be entropy-encoded.
  • when the entropy-encoded ‘ROT_index’ is decoded and the current block is intra predicted in a first direction, a predetermined value (for example, 9) is added to the decoded ‘ROT_index’ and inverse ROT may be performed on the current block by using the ROT matrix that corresponds to the ‘ROT_index’ in a second range (for example, a range of 9 to 17).
  • similarly, when another predetermined value (for example, 18) is added to the decoded ‘ROT_index’, inverse ROT may be performed on the current block by using the ROT matrix that corresponds to the ‘ROT_index’ in a third range (for example, a range of 18 to 26).
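  • A hypothetical decoder-side sketch of this range mapping; the offsets 9 and 18 and the nine-candidate ranges come from the examples above, while the exact grouping of prediction cases is a placeholder:

```python
def map_rot_index(decoded_idx, is_intra, is_first_direction):
    """Map an entropy-decoded ‘ROT_index’ (e.g., 0..8) into the full candidate
    range using the example offsets above; which prediction cases map to which
    range is assumed here, not taken from the patent."""
    if is_intra and is_first_direction:
        return decoded_idx + 9    # second range, e.g., 9 to 17
    if is_intra:
        return decoded_idx + 18   # third range, e.g., 18 to 26
    return decoded_idx            # first range, e.g., 0 to 8
```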
  • the apparatus 900 for encoding an image may efficiently determine angle parameter candidates used to perform ROT as described below.
  • in order to efficiently perform compression, it is necessary for the apparatus 900 for encoding an image to search for the optimal angle parameters. However, this is a multi-parameter problem having a strongly non-smooth dependence on the parameters.
  • a Monte Carlo method is used.
  • a Lehmer’s random sequence number may be used to generate a random point in the Monte Carlo method.
  • only one integer indicating a sequence number may be stored or transmitted.
  • the sequence number may be the ‘idx’ or ‘ROT_index’.
  • parts that are revised by rotation of the first frequency coefficient matrix are colored black, and parts that are not revised are colored white.
  • in the 4x4 frequency coefficient matrix of FIG. 11A, six angle parameters are involved in revision of fifteen coefficients according to a switch between rows and between columns.
  • in the 8x8 frequency coefficient matrix of FIG. 11B, twelve angle parameters are involved in revision of sixty coefficients.
  • the apparatus 900 for encoding an image may perform the ROT according to following operations:
  • Step 1: Orthogonal transform family parameterization
  • Step 2: Monte Carlo method
  • Step 3: Lehmer’s pseudo-random numbers
  • Step 4: Localization of diapason for optimal angle parameters
  • Step 5: Quasi-optimal basis
  • the ROT performer 1030 searches for an optimal angle parameter while minimizing an overhead according to following operations.
  • basis adjustment is required.
  • the rotation of basis is chosen as basis modification. Accordingly, the set of rotational angles describes basis modification uniquely.
  • rotation of the basis is mainly selected.
  • the rotation of the basis is performed by using an angle parameter.
  • the rotation of the basis which is performed by using the angle parameter, is used.
  • the angle parameter may be the Euler angle.
  • the angle parameter is not limited to the Euler angle, and thus may include others that may indicate a level of a partial switch of one or more values between rows and between columns of a matrix.
  • an example involving using the Euler angle will now be described.
  • the rotation is defined as Equation 3 by using a left multiplication R_horizontal and a right multiplication R_vertical of a first frequency coefficient matrix D.
  • the matrix R_horizontal performs a switch between rows of the first frequency coefficient matrix D.
  • the matrix R_vertical performs a switch between columns of the first frequency coefficient matrix D.
  • a ROT matrix is determined according to an angle parameter, and examples of the matrices R_horizontal and R_vertical for a 4x4 block are given by Equation 4. As described with reference to FIG. 11A, three angle parameters α1, α2, and α3 are used in a partial switch between rows, and three angle parameters α4, α5, and α6 are used in a partial switch between columns.
  • in Equation 4, α1, α2, α3, α4, α5, and α6 indicate the Euler angles.
  • the Euler angles describe revision of fifteen frequency coefficients by a group of six parameters α1, α2, ... , and α6.
  • twelve Euler angles α1, α2, ... , and α12 describe revision of sixty frequency coefficients.
  • R_horizontal and R_vertical of Equation 3 may be defined as in Equation 5.
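  • The following sketch applies the Equation 3 style left/right multiplication to a 4x4 first frequency coefficient matrix D. Embedding a 3x3 Euler rotation so that one row and one column stay untouched is an assumption chosen only to match the "fifteen of sixteen coefficients revised" count above; the literal entries of Equations 4 and 5 are given in the patent and are not reproduced here:

```python
import numpy as np

def euler_zxz(a, b, g):
    """3x3 Z-X-Z Euler rotation (see the earlier Euler-angle sketch)."""
    rz = lambda t: np.array([[np.cos(t), -np.sin(t), 0.0],
                             [np.sin(t),  np.cos(t), 0.0],
                             [0.0, 0.0, 1.0]])
    rx = lambda t: np.array([[1.0, 0.0, 0.0],
                             [0.0, np.cos(t), -np.sin(t)],
                             [0.0, np.sin(t),  np.cos(t)]])
    return rz(a) @ rx(b) @ rz(g)

def rot_matrix_4x4(a1, a2, a3):
    """Embed the 3x3 Euler rotation into a 4x4 identity so that one row/column
    of the coefficient matrix stays unchanged (assumed layout, not the literal
    Equation 4)."""
    r = np.eye(4)
    r[1:, 1:] = euler_zxz(a1, a2, a3)
    return r

def rotational_transform(d, angles):
    """Equation 3 style ROT: R_horizontal @ D @ R_vertical, where a1..a3 drive
    the partial switch between rows and a4..a6 the switch between columns."""
    a1, a2, a3, a4, a5, a6 = angles
    return rot_matrix_4x4(a1, a2, a3) @ np.asarray(d, dtype=float) @ rot_matrix_4x4(a4, a5, a6)
```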
  • the optimization has a difficulty in that a high-dimensional domain of a parameter (six or twelve angle parameters) is used, and compression of an image is non-smoothly dependent on used parameters.
  • the difficulty is solved by using the Monte Carlo method.
  • the core of the Monte Carlo method is to perform a plurality of attempts. That is, a compression rate is measured from several points, and then a best point therefrom is selected.
  • the quality of a random point in a multi-dimensional domain is highly important (in particular, the quality becomes more important as the dimension increases).
  • a pseudo-random point is preferred to a uniform grid point. This will be described in FIG. 15 with reference to a two-dimensional (2D) case.
  • FIG. 15 illustrates pseudo-random points according to another exemplary embodiment.
  • a left diagram of FIG. 15 illustrates uniform grid points, and a right diagram of FIG. 15 illustrates first sixteen points according to a pseudo-random process.
  • when the uniform grid points are used, despite the sixteen points of the Monte Carlo method, only four different values are checked with respect to the first parameter (and the second parameter).
  • when the pseudo-random points are used, sixteen different values are checked with respect to the first parameter (and the second parameter) by the sixteen points. That is, when the pseudo-random points are used, various values of the first and second parameters are sufficiently checked with respect to sixteen points.
  • the use of the pseudo-random points is more advantageous than the use of the uniform grid points, according to an increase of the number of parameters.
  • a pseudo-random sequence may be generated by using various methods according to one or more exemplary embodiments.
  • One of the most efficient methods is to use a Lehmer number. This is an artificially generated sequence, and is close to actual random numbers that are uniformly distributed.
  • An algorithm for generating a Lehmer sequence is well known, and thus a detailed description thereof is omitted here. According to an exemplary embodiment, at least 10^13 unrepeated points are provided.
  • the Lehmer sequence is an artificial sequence, and the algorithm for generating the Lehmer sequence is well known, so that a decoder may easily recalculate it.
  • a combination of angle parameters may be coded by using one piece of information (that is, a number in a random sequence).
  • random points corresponding to the combination of the angle parameters are generated, a compression rate is measured after compression is performed by using the random points, and then an optimal parameter point is selected.
  • a number of the Lehmer sequence corresponding to the optimal parameter point is stored or transmitted.
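  • As an illustration only, the following sketch shows how a single stored sequence number could be mapped back to a reproducible combination of angle parameters with a Park-Miller style Lehmer generator; the modulus, multiplier, and diapason are common textbook choices, not values taken from the patent:

```python
def lehmer_stream(seed, m=2**31 - 1, a=48271):
    """Park-Miller style Lehmer generator x_{k+1} = a * x_k mod m, yielding
    values in (0, 1); the constants are textbook defaults."""
    x = seed % m or 1
    while True:
        x = (a * x) % m
        yield x / m

def angles_for_sequence_number(idx, num_angles=6, diapason=0.2):
    """Regenerate the angle-parameter combination associated with one stored
    integer `idx`; encoder and decoder run the same generator and so agree on
    the angles. Restricting values to +/- `diapason` radians mimics the
    localization step (the exact diapason is an assumption)."""
    gen = lehmer_stream(idx + 1)
    return [(2.0 * next(gen) - 1.0) * diapason for _ in range(num_angles)]

print(angles_for_sequence_number(7))  # same list every time it is recomputed
```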
  • an optimal rotation angle has a value near 0 degrees or 180 degrees (π radians). This indicates that a basis for a transform to a frequency domain, e.g., a DCT basis or a KLT basis is almost optimized.
  • angle parameters according to the one or more exemplary embodiments are only used to perform a partial switch of one or more values between rows and between columns (an angle near 0 degrees in the case of the Euler angles), or to perform the partial switch together with a change of a sign of a basis element (an angle near 180 degrees in the case of the Euler angles). That is, the diapason of parameters used in the one or more exemplary embodiments is limited to a specific part of the domain, and this limitation is referred to as localization.
  • the number of bits with respect to an overhead is decreased. If it is assumed that points to be checked are limited to a specific part in FIG. 15, the number of points to be checked so as to search for optimal combination of the angle parameters is decreased.
  • when the number of points to be checked is fixed (that is, when the number of bits used as overhead is fixed) and the localization is applied, more points within a smaller angle range may be checked, so that a compression rate may be increased.
  • the quasi-optimal basis indicates that the same rotation is applied to a group of all transform units or some of the transform units included in a slice or a picture. If optimal rotation is applied to each block, a compression rate with respect to an image is increased but an overhead is also increased.
  • the apparatus 900 for encoding an image performs the ROT on combinations of a plurality of angle parameters by using the Monte Carlo method, and repeatedly performs quantization and entropy encoding, thereby determining an optimal combination of the angle parameters. Also, the image encoding apparatus 900 does not encode the angle parameters themselves but encodes a Lehmer pseudo-random sequence number as the information regarding the determined optimal combination of the angle parameters. Here, by using localization and the quasi-optimal basis, the information regarding the angle parameters may be encoded more efficiently.
  • FIG. 16 is a block diagram of an apparatus 1600 for decoding an image, according to another exemplary embodiment.
  • the apparatus 1600 for decoding an image illustrated in FIG. 16 may be a module, which is included in the apparatus 200 for decoding an image illustrated in FIG. 2 or the image decoder 500 illustrated in FIG. 5, for performing the following image decoding processes.
  • the apparatus 1600 for decoding an image includes an entropy decoder 1610, an inverse quantizer 1620, and an inverse transformer 1630.
  • the entropy decoder 1610 receives a bitstream and entropy-decodes a second frequency coefficient matrix of a predetermined block.
  • the second frequency coefficient matrix is a matrix generated by performing ROT on the first frequency coefficient matrix generated by transforming the predetermined block to a frequency domain.
  • the entropy decoder 1610 may decode information about angle parameters used in ROT for inverse ROT.
  • the information about the angle parameters may be the information, 'idx' or 'ROT_index', specifying the inverse ROT matrix used in inverse ROT of a current block.
  • Entropy decoding is performed according to a CABAC method or a CAVLC method as in the entropy encoder 930.
  • the inverse quantizer 1620 inverse quantizes the second frequency coefficient matrix entropy-decoded in the entropy decoder 1610. Inverse quantization is performed according to quantization steps used in encoding.
  • the inverse transformer 1630 generates the first frequency coefficient matrix by performing inverse ROT on the second frequency coefficient matrix and performs inverse transform on the first frequency coefficient matrix.
  • a ROT matrix used in inverse ROT of a current block is selected based on the information about the angle parameter entropy-decoded in the entropy decoder 1610, that is, the information about 'idx' or 'ROT_index', and inverse ROT is performed on the second frequency coefficient matrix according to the selected ROT matrix. This will be described more fully with reference to FIG. 17.
  • FIG. 17 is a block diagram of the inverse transformer 1630 of FIG. 16, according to an exemplary embodiment.
  • the inverse transformer 1630 includes an angle parameter determiner 1710, an inverse ROT performer 1720, and an inverse transform performer 1730.
  • the angle parameter determiner 1710 determines a ROT matrix to be used in inverse ROT based on information about angle parameters, that is, ‘idx’ or ‘ROT_index’.
  • the ROT matrix to be used in the inverse ROT may be determined according to whether the current block is an intra- or inter-predicted block and according to the intra prediction direction.
  • the ROT matrix to be used in the inverse ROT is finally determined according to whether the current block is an intra- or inter-predicted block and the intra prediction direction, with reference to the information about the angle parameter entropy-decoded in the entropy decoder 1610, that is, the information about 'ROT_index'. This will be described more fully with reference to FIGs. 12A through 12I.
  • FIGs. 12A through 12I illustrate the ROT matrices according to an exemplary embodiment.
  • FIGs. 12A through 12I illustrate the ROT matrices according to an angle parameter used in inverse ROT of the second frequency coefficient matrix.
  • the ROT matrices of FIGs. 12A through 12I may be inverse matrices of R_horizontal and R_vertical that are multiplied on the left side and the right side of the second frequency coefficient matrix for inverse ROT.
  • Each row classified by 'ROT_idx' may be filled with ninety-six values. Of these values, the first thirty-two are values for the R_horizontal and R_vertical matrices used in 4x4 inverse ROT, and the remaining sixty-four are values for the R_horizontal and R_vertical matrices used in 8x8 inverse ROT; a sketch of this partitioning follows.
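  • The partitioning of such a row can be sketched as below. Only the 32/64 split described above is taken from the text; the element type and accessor names are assumptions, and the table contents themselves remain in FIGs. 12A through 12I.

```cpp
#include <array>
#include <cstdint>

// One 96-value row of the ROT table, selected by 'ROT_idx'.  The first 32
// values parameterize R_horizontal / R_vertical for 4x4 inverse ROT, and the
// remaining 64 values parameterize them for 8x8 inverse ROT.
struct RotRow {
    std::array<int16_t, 96> values;  // element width is an assumption

    static constexpr int kNum4x4 = 32;
    static constexpr int kNum8x8 = 64;

    // First 32 entries: parameters for the 4x4 inverse ROT matrices.
    const int16_t* params4x4() const { return values.data(); }

    // Remaining 64 entries: parameters for the 8x8 inverse ROT matrices.
    const int16_t* params8x8() const { return values.data() + kNum4x4; }
};
```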
  • the angle parameter determiner 1710 may determine one of twenty-seven ROT matrices.
  • the ROT matrix may be determined according to whether the current block is an intra or inter predicted block and an intra prediction direction.
  • one of the ROT matrices where ‘ROT_index’ is in the first range may be selected.
  • one of the ROT matrices where ‘ROT_index’ is in the second range may be selected.
  • one of the ROT matrices where ‘ROT_index’ is in the third range may be selected.
  • one of the ROT matrices where ‘ROT_index’ is in the second range is selected by adding a predetermined value (for example, 9) to ‘ROT_index’ in the first range (for example, range of 0 to 8) entropy-decoded in the entropy decoder 1610.
  • one of the ROT matrices where ‘ROT_index’ is in the third range is selected by adding a predetermined value (for example, 18) to ‘ROT_index’ in the first range (for example, range of 0 to 8) entropy-decoded in the entropy decoder 1610, as in the sketch below.
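  • A hedged sketch of this index derivation is given below. The additive offsets 9 and 18 follow the description above, but the condition that selects the second or third range (intra/inter mode and intra prediction direction) is not spelled out in this excerpt, so it is represented by a placeholder selector.

```cpp
// Map the entropy-decoded index (first range, 0..8) into the second (9..17)
// or third (18..26) range by adding the predetermined offsets.  Which
// prediction mode / intra direction selects which range is a placeholder here.
enum class RangeSelector { kFirst, kSecond, kThird };

int effectiveRotIndex(int decodedIdx /* 0..8 */, RangeSelector sel) {
    switch (sel) {
        case RangeSelector::kFirst:  return decodedIdx;        //  0.. 8
        case RangeSelector::kSecond: return decodedIdx + 9;    //  9..17
        case RangeSelector::kThird:  return decodedIdx + 18;   // 18..26
    }
    return decodedIdx;
}
```

  • Because both encoder and decoder already know the prediction mode and the intra prediction direction, only the index in the first range needs to be signalled.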
  • FIGs. 20A and 20B illustrate ROT matrices according to another exemplary embodiment.
  • FIG. 20A illustrates values of the ROT matrices that correspond to the information about the plurality of angle parameters, that is, INV_ROT_4[idx][k], when the second frequency coefficient matrix is 4x4.
  • 'idx' corresponds to the information about the angle parameters.
  • 'k' corresponds to the values in the first column of FIG. 20A.
  • FIG. 20B illustrates values of the ROT matrices that correspond to the information about the plurality of angle parameters, that is, INV_ROT_8[idx][k], when the second frequency coefficient matrix is 8x8.
  • 'idx' corresponds to the information about the angle parameters.
  • 'k' corresponds to the values in the first column of FIG. 20B.
  • Inverse ROT based on FIGs. 20A and 20B will be described in relation to the inverse ROT performer 1720.
  • the inverse ROT performer 1720 performs inverse ROT on an inverse quantized second frequency coefficient matrix received from the inverse quantizer 1620.
  • the ROT described with reference to FIGs. 11A through 11C is inversely performed.
  • inverse ROT may be performed according to the ROT matrix selected by the angle parameter determiner 1710. Partial switching between at least one of the rows and columns of the second frequency coefficient matrix is performed based on the ROT matrix selected by the angle parameter determiner 1710, thereby generating the first frequency coefficient matrix.
  • Inverse ROT is performed with the selected ROT matrix as in Equation 6.
  • D' is the inverse quantized second frequency coefficient matrix input to the inverse ROT performer 1720.
  • R_horizontal^-1 is a ROT matrix for partial switching between rows when inverse ROT is performed.
  • R_vertical^-1 is a ROT matrix for partial switching between columns when inverse ROT is performed.
  • R_horizontal^-1 and R_vertical^-1 may be defined by Equation 7.
  • R_horizontal and R_vertical in Equation 7 may be the R_horizontal and R_vertical described in relation to Equation 4 or Equation 5.
  • R_horizontal^-1 and R_vertical^-1 may be the transposed matrices of the R_horizontal and R_vertical described in relation to Equation 4 or Equation 5, used in inverse ROT; a matrix-level sketch follows.
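  • The 4x4 inverse ROT step can be sketched as the matrix product below, taking R_horizontal^-1 and R_vertical^-1 as the transposes of R_horizontal and R_vertical (as stated above) and applying them on the left and right of D'. The exact operand order of Equation 6 is not reproduced in this excerpt, so it should be checked against the equation itself.

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 c{};  // zero-initialized accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

Mat4 transpose(const Mat4& m) {
    Mat4 t{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            t[i][j] = m[j][i];
    return t;
}

// Sketch of 4x4 inverse ROT: D = R_horizontal^-1 * D' * R_vertical^-1, where
// the inverses are taken as transposes because rotation matrices are orthogonal.
Mat4 inverseRot4x4(const Mat4& dPrime, const Mat4& rHorizontal, const Mat4& rVertical) {
    Mat4 rhInv = transpose(rHorizontal);   // undoes partial switching between rows
    Mat4 rvInv = transpose(rVertical);     // undoes partial switching between columns
    return multiply(multiply(rhInv, dPrime), rvInv);
}
```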
  • FIGs. 13A and 13B illustrate inverse ROT syntaxes according to an exemplary embodiment.
  • FIGs. 13A and 13B illustrate syntax for performing inverse ROT by using one of the ROT matrices of FIGs. 12A through 12I.
  • FIG. 13A illustrates syntax for 4x4 inverse ROT and
  • FIG. 13B illustrates syntax for 8x8 inverse ROT.
  • the inverse ROT performer 1720 performs inverse ROT on the second frequency coefficient matrix by selecting one of the ROT matrices of FIGs. 12A through 12I according to the angle parameter determined in the angle parameter determiner 1710.
  • the inverse ROT is performed according to the ROT matrix corresponding to one of the ‘ROT_index’ in the first range (for example, range of 0 to 8).
  • the inverse ROT is performed according to the ROT matrix corresponding to one of the ‘ROT_index’ in the second range (for example, range of 9 to 17).
  • the inverse ROT is performed according to the ROT matrix corresponding to one of the ‘ROT_index’ in the third range (for example, range of 18 to 26).
  • the inverse ROT performer 1720 may perform inverse ROT on the second frequency coefficient matrix based on the ROT matrix of the ‘ROT_index’ in one of the first through third ranges selected in the angle parameter determiner 1710 based on whether the current block is predicted according to intra prediction and an intra prediction direction.
  • FIGs. 21A and 21B illustrate inverse ROT syntaxes according to another exemplary embodiment.
  • FIGs. 21A and 21B illustrate syntax for performing inverse ROT by using one of the ROT matrices of FIGs. 20A and 20B.
  • FIG. 21A illustrates syntax for 4x4 inverse ROT
  • FIG. 21B illustrates syntax for 8x8 inverse ROT.
  • the inverse ROT performer 1720 performs inverse ROT on the second frequency coefficient matrix by selecting one of the ROT matrices of FIG. 20A according to the angle parameter determined in the angle parameter determiner 1710.
  • INV_ROT_4[idx][k], corresponding to ‘idx’, that is, the entropy-decoded information about the angle parameter, is selected from among the ROT matrices and used to generate the ‘t’ values.
  • the values in FIG. 20A are 8-bit values.
  • in Equation 8, ‘increased_bit_depth_luma’ is a value internally added for accuracy of operations on luminance values and may be an internal bit depth increase (IBDI) bit. Equation 8 is described with a luminance value as an example; however, it would be obvious to one of ordinary skill in the art to define ‘shift’ in the same manner for operations on chroma values.
  • the values are bit-right-shifted by ‘8+shift’ so as to match the number of bits of the ‘d’ values with that of the ‘S’ values; a fixed-point sketch follows.
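  • A minimal fixed-point sketch of this descaling is given below, assuming the 8-bit INV_ROT_4 entries act as fixed-point weights and that ‘shift’ follows the IBDI bit depth. Equation 8 is not reproduced in this excerpt, so the rounding offset and the exact derivation of ‘shift’ are assumptions; only the final right shift by ‘8+shift’ is taken from the text.

```cpp
#include <cstdint>

// Descale one inverse-ROT output coefficient from its fixed-point accumulator.
// 'weightedSum' stands for the sum of INV_ROT_4[idx][k] * S[...] products.
int32_t descaleRotCoefficient(int64_t weightedSum, int increasedBitDepthLuma) {
    int shift = increasedBitDepthLuma;               // assumed: shift follows the IBDI bits
    int64_t offset = int64_t{1} << (8 + shift - 1);  // assumed rounding term
    return static_cast<int32_t>((weightedSum + offset) >> (8 + shift));
}
```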
  • the inverse ROT performer 1720 performs inverse ROT on the second frequency coefficient matrix by selecting one of the ROT matrices of FIG. 20B according to the angle parameter determined in the angle parameter determiner 1710.
  • FIG. 21B is different from FIG. 21A in that a size of the second frequency coefficient matrix increases to 8x8, and ‘shift’ and ‘offset’ may be determined according to Equation 7 and Equation 8 as in FIG. 21A.
  • the inverse transform performer 1730 receives the first frequency coefficient matrix from the inverse ROT performer 1720 and performs inverse transform on the received first frequency coefficient matrix. DCT or KLT is inversely performed so as to inverse transform the first frequency coefficient matrix. As a result of the inverse transform, a predetermined block of the pixel domain is restored.
  • FIG. 18 is a flowchart of a method of encoding an image, according to an exemplary embodiment.
  • an apparatus for encoding an image generates a first frequency coefficient matrix by transforming a current block to a frequency domain.
  • the apparatus for encoding an image receives the current block, performs DCT or KLT or any other fixed point spatial transform, and thus generates the first frequency coefficient matrix including spatial transform coefficients.
  • the apparatus for encoding an image determines an angle parameter used in ROT of the first frequency coefficient matrix.
  • the angle parameter may be ‘idx’ or ‘ROT_idx’.
  • ‘idx’ is determined based on results obtained by repeatedly performing the encoding a plurality of times.
  • ‘ROT_idx’ is determined based on whether the current block is an intra- or inter-predicted block. Different ROT matrices may be selected for the case when the current block is intra predicted and the case when the current block is inter predicted. When the current block is intra predicted, different ROT matrices may also be selected according to the intra prediction direction of the current block. In other words, ROT matrices corresponding to the 'ROT_index' in the range of 0-8, 9-17, or 18-26 of FIGs. 12A through 12I may be selected based on whether the current block is an intra-predicted block and on the intra prediction direction.
  • the ROT matrices may be the R_horizontal and R_vertical described in relation to Equation 3.
  • the apparatus for encoding an image performs a partial switch between at least one of rows and columns of the first frequency coefficient matrix based on the angle parameter determined in operation 1820 and generates a second frequency coefficient matrix.
  • ROT is performed on the first frequency coefficient matrix based on the ROT matrices determined, in operation 1820, according to whether the current block is an intra-predicted block and according to the intra prediction direction.
  • when the first frequency coefficient matrix has a large size (e.g., a size equal to or greater than 16x16), a matrix including only some sampled coefficients of the first frequency coefficient matrix may be selected, and ROT may be performed only on the selected matrix.
  • for example, a matrix including only coefficients of low-frequency components may be selected, as in the sketch below.
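  • A sketch of selecting such a sub-matrix is given below; treating the top-left corner as the low-frequency components follows the usual DCT coefficient layout, and an 8x8 corner inside a 16x16 matrix is an illustrative assumption.

```cpp
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Copy the top-left (low-frequency) corner out of a larger frequency
// coefficient matrix so that ROT can be applied to the smaller matrix only.
Matrix extractLowFrequencyBlock(const Matrix& full, std::size_t subSize) {
    Matrix sub(subSize, std::vector<int>(subSize));
    for (std::size_t r = 0; r < subSize; ++r)
        for (std::size_t c = 0; c < subSize; ++c)
            sub[r][c] = full[r][c];
    return sub;
}

// Write the rotated corner back into the full-size coefficient matrix.
void writeBackLowFrequencyBlock(Matrix& full, const Matrix& sub) {
    for (std::size_t r = 0; r < sub.size(); ++r)
        for (std::size_t c = 0; c < sub[r].size(); ++c)
            full[r][c] = sub[r][c];
}
```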
  • the apparatus for encoding an image quantizes the second frequency coefficient matrix generated in operation 1830.
  • the second frequency coefficient matrix is quantized according to a predetermined quantization step.
  • the apparatus for encoding an image entropy-encodes the second frequency coefficient matrix quantized in operation 1840.
  • Entropy encoding is performed according to a CABAC method or a CAVLC method.
  • the apparatus for encoding an image entropy-encodes, in operation 1850, the information about the angle parameters used to partially switch at least one of the rows and columns of the first frequency coefficient matrix.
  • Information about ‘idx’ or ‘ROT_index’ for specifying the one matrix used in inverse ROT from among the ROT matrices of FIGs. 12A through 12I is entropy-encoded.
  • at the decoding side, inverse ROT may be performed by using the ROT matrices corresponding to ‘ROT_index’ in the second range (for example, range of 9 to 17) or the third range (for example, range of 18 to 26), according to whether the current block is predicted according to intra prediction and according to the intra prediction direction.
  • FIG. 19 is a flowchart of a method of decoding an image, according to an exemplary embodiment.
  • an apparatus for decoding an image receives a bitstream of a current block and entropy-decodes information about angle parameters and a second frequency coefficient matrix.
  • the second frequency coefficient matrix is a matrix obtained by rotational transforming all or a part of the first frequency coefficient matrix.
  • the information about angle parameters may be information about ‘idx’ or ‘ROT_index’.
  • the apparatus for decoding an image determines angle parameters used to perform inverse ROT on the second frequency coefficient matrix based on information about angle parameters entropy-decoded in operation 1910.
  • the angle parameters may be determined according to whether the current block is an intra-predicted block and according to the intra prediction direction.
  • depending on these conditions, ROT matrices corresponding to ‘ROT_index’ in different ranges are selected.
  • a ROT matrix corresponding to ‘ROT_index’ obtained by adding a predetermined value (for example, 9) to the entropy-decoded ‘ROT_index’ may be selected.
  • a ROT matrix corresponding to ‘ROT_index’ obtained by adding a predetermined value (for example, 18) to the entropy-decoded ‘ROT_index’ may be selected.
  • the apparatus for decoding an image inverse quantizes the second frequency coefficient matrix.
  • the second frequency coefficient matrix is inverse quantized according to a predetermined quantization step used in image encoding.
  • the apparatus for decoding an image performs a partial switch between at least one of rows and columns of the second frequency coefficient matrix inverse quantized in operation 1930 and generates the first frequency coefficient matrix.
  • inverse ROT is performed on the second frequency coefficient matrix.
  • Inverse ROT may be performed on the second frequency coefficient matrix based on the ROT matrix selected according to whether the current block is intra predicted and an intra prediction direction in operation 1920.
  • when ROT was performed on only a matrix including some sampled coefficients, the first frequency coefficient matrix is generated by performing inverse ROT on that matrix.
  • the apparatus for decoding an image performs inverse transform on the first frequency coefficient matrix generated in operation 1940.
  • the apparatus for decoding an image restores a block of a pixel domain by performing inverse DCT or KLT or any other fixed point inverse spatial transform on the first frequency coefficient matrix.
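  • The decoding flow of FIG. 19 (operations 1910 through 1950) can be summarized as the pipeline sketch below; the stage signatures and names are assumptions used only to show the ordering described above, not APIs defined by the patent.

```cpp
#include <functional>

// Placeholder types for the matrices exchanged between the stages.
struct CoeffMatrix { /* second/first frequency coefficient matrix */ };
struct PixelBlock  { /* restored pixel-domain block */ };

// Caller-supplied stages corresponding to operations 1910 through 1950.
struct DecoderStages {
    std::function<CoeffMatrix()>                         entropyDecodeCoefficients; // 1910
    std::function<int()>                                 entropyDecodeRotIndex;     // 1910
    std::function<int(int)>                              determineAngleParameter;   // 1920
    std::function<CoeffMatrix(const CoeffMatrix&)>       inverseQuantize;           // 1930
    std::function<CoeffMatrix(const CoeffMatrix&, int)>  inverseRot;                // 1940
    std::function<PixelBlock(const CoeffMatrix&)>        inverseSpatialTransform;   // 1950
};

PixelBlock decodeBlock(const DecoderStages& s) {
    CoeffMatrix second = s.entropyDecodeCoefficients();
    int angle = s.determineAngleParameter(s.entropyDecodeRotIndex());
    CoeffMatrix dequantized = s.inverseQuantize(second);
    CoeffMatrix first = s.inverseRot(dequantized, angle);  // undoes the partial row/column switch
    return s.inverseSpatialTransform(first);               // inverse DCT/KLT back to pixels
}
```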
  • in this manner, a frequency coefficient matrix may be encoded on a firm mathematical basis at a high compression ratio, and thus the overall image compression ratio may be greatly increased.
  • an exemplary embodiment can be embodied as computer readable codes on a computer readable recording medium.
  • the apparatus for encoding an image according to an exemplary embodiment, the apparatus for decoding an image according to an exemplary embodiment, the image encoder and the image decoder illustrated in FIGs. 1, 2, 4, 5, 9, 10, 16, and 17 may include a bus coupled to every unit of the apparatus or coder, at least one processor that is connected to the bus and is for executing commands, and memory connected to the bus to store the commands, received messages, and generated messages.
  • the computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system.
  • Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
  • the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
PCT/KR2010/008818 2009-12-09 2010-12-09 Method and apparatus for encoding and decoding image by using rotational transform WO2011071325A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2010800634990A CN102754438A (zh) 2009-12-09 2010-12-09 用于通过使用转动变换来对图像编码和解码的方法和设备
EP10836220.3A EP2510691A4 (en) 2009-12-09 2010-12-09 METHOD AND APPARATUS FOR ENCODING AND DECODING AN IMAGE USING ROTATIONAL TRANSFORMATION

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2009-0121937 2009-12-09
KR1020090121937A KR20110065092A (ko) 2009-12-09 2009-12-09 회전 변환을 이용한 영상 부호화, 복호화 방법 및 장치

Publications (2)

Publication Number Publication Date
WO2011071325A2 true WO2011071325A2 (en) 2011-06-16
WO2011071325A3 WO2011071325A3 (en) 2011-11-10

Family

ID=44082079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/008818 WO2011071325A2 (en) 2009-12-09 2010-12-09 Method and apparatus for encoding and decoding image by using rotational transform

Country Status (5)

Country Link
US (1) US8494296B2 (ko)
EP (1) EP2510691A4 (ko)
KR (1) KR20110065092A (ko)
CN (1) CN102754438A (ko)
WO (1) WO2011071325A2 (ko)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019101295A1 (en) * 2017-11-21 2019-05-31 Huawei Technologies Co., Ltd. Image and video processing apparatuses and methods

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012039590A (ja) * 2010-07-16 2012-02-23 Sony Corp 画像処理装置、画像処理方法、及びプログラム
US20120320972A1 (en) * 2011-06-16 2012-12-20 Samsung Electronics Co., Ltd. Apparatus and method for low-complexity optimal transform selection
PL3346705T3 (pl) 2012-04-16 2021-04-06 Electronics And Telecommunications Research Institute Sposób kodowania/dekodowania obrazu
US9699461B2 (en) * 2015-08-14 2017-07-04 Blackberry Limited Scaling in perceptual image and video coding
WO2017061671A1 (ko) * 2015-10-08 2017-04-13 엘지전자 주식회사 영상 코딩 시스템에서 적응적 변환에 기반한 영상 코딩 방법 및 장치
ITUB20155295A1 (it) * 2015-10-16 2017-04-16 Torino Politecnico Apparatuses and methods for encoding and decoding images
WO2017151877A1 (en) * 2016-03-02 2017-09-08 MatrixView, Inc. Apparatus and method to improve image or video quality or encoding performance by enhancing discrete cosine transform coefficients
WO2019009618A1 (ko) * 2017-07-04 2019-01-10 삼성전자 주식회사 영상 부호화 방법 및 장치, 영상 복호화 방법 및 장치
CN109255824A (zh) * 2018-08-21 2019-01-22 同济大学 基于dct的防窜货编解码方法
CN112655215A (zh) 2019-07-10 2021-04-13 Oppo广东移动通信有限公司 图像分量预测方法、编码器、解码器以及存储介质
US11375219B2 (en) * 2019-09-24 2022-06-28 Tencent America LLC Coding method and system with improved dynamic internal bit depth

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7146053B1 (en) * 2000-05-10 2006-12-05 International Business Machines Corporation Reordering of compressed data
US6907079B2 (en) * 2002-05-01 2005-06-14 Thomson Licensing S.A. Deblocking filter conditioned on pixel brightness
US7167560B2 (en) * 2002-08-08 2007-01-23 Matsushita Electric Industrial Co., Ltd. Partial encryption of stream-formatted media
JP4594688B2 (ja) * 2004-06-29 2010-12-08 オリンパス株式会社 画像符号化処理方法、画像復号化処理方法、動画圧縮処理方法、動画伸張処理方法、画像符号化処理プログラム、画像符号化装置、画像復号化装置、画像符号化/復号化システム、拡張画像圧縮伸張処理システム
KR101088375B1 (ko) * 2005-07-21 2011-12-01 삼성전자주식회사 가변 블록 변환 장치 및 방법 및 이를 이용한 영상부호화/복호화 장치 및 방법
KR100809686B1 (ko) * 2006-02-23 2008-03-06 삼성전자주식회사 이산 여현 변환을 이용한 영상 리사이징 방법 및 장치
US8488668B2 (en) * 2007-06-15 2013-07-16 Qualcomm Incorporated Adaptive coefficient scanning for video coding
KR101496324B1 (ko) * 2007-10-17 2015-02-26 삼성전자주식회사 영상의 부호화, 복호화 방법 및 장치
KR101370288B1 (ko) * 2007-10-24 2014-03-05 삼성전자주식회사 이미지 신호의 압축 방법 및 장치

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP2510691A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019101295A1 (en) * 2017-11-21 2019-05-31 Huawei Technologies Co., Ltd. Image and video processing apparatuses and methods
US11265575B2 (en) 2017-11-21 2022-03-01 Huawei Technologies Co., Ltd. Image and video processing apparatuses and methods

Also Published As

Publication number Publication date
KR20110065092A (ko) 2011-06-15
EP2510691A2 (en) 2012-10-17
US8494296B2 (en) 2013-07-23
EP2510691A4 (en) 2014-06-04
CN102754438A (zh) 2012-10-24
WO2011071325A3 (en) 2011-11-10
US20110135212A1 (en) 2011-06-09

Similar Documents

Publication Publication Date Title
WO2011053021A2 (en) Method and apparatus for encoding and decoding image by using rotational transform
WO2011071325A2 (en) Method and apparatus for encoding and decoding image by using rotational transform
WO2018128323A1 (ko) 이차 변환을 이용한 비디오 신호의 인코딩/디코딩 방법 및 장치
WO2013157825A1 (ko) 영상 부호화/복호화 방법 및 장치
WO2017065525A2 (ko) 영상을 부호화 또는 복호화하는 방법 및 장치
WO2018128322A1 (ko) 영상 처리 방법 및 이를 위한 장치
WO2011087323A2 (en) Method and apparatus for encoding and decoding image by using large transform unit
WO2017043760A1 (ko) 엔트로피 부호화 및 복호화를 위한 장치 및 방법
WO2013002555A2 (ko) 산술부호화를 수반한 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치
WO2018038554A1 (ko) 이차 변환을 이용한 비디오 신호의 인코딩/디코딩 방법 및 장치
WO2011087320A2 (ko) 예측 부호화를 위해 가변적인 파티션을 이용하는 비디오 부호화 방법 및 장치, 예측 부호화를 위해 가변적인 파티션을 이용하는 비디오 복호화 방법 및 장치
WO2011016702A2 (ko) 영상의 부호화 방법 및 장치, 그 복호화 방법 및 장치
WO2011087292A2 (en) Method and apparatus for encoding video and method and apparatus for decoding video by considering skip and split order
WO2018236028A1 (ko) 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치
WO2014003423A4 (ko) 영상 부호화/복호화 방법 및 장치
WO2011049396A2 (en) Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
WO2018190523A1 (ko) 영상의 부호화/복호화 방법 및 이를 위한 장치
WO2013002619A2 (ko) 고정소수점 변환을 위한 비트뎁스 조절을 수반하는 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치
WO2011126282A2 (en) Method and apparatus for encoding video by using transformation index, and method and apparatus for decoding video by using transformation index
WO2011126283A2 (en) Method and apparatus for encoding video based on internal bit depth increment, and method and apparatus for decoding video based on internal bit depth increment
WO2017014585A1 (ko) 그래프 기반 변환을 이용하여 비디오 신호를 처리하는 방법 및 장치
WO2013157794A1 (ko) 변환 계수 레벨의 엔트로피 부호화 및 복호화를 위한 파라메터 업데이트 방법 및 이를 이용한 변환 계수 레벨의 엔트로피 부호화 장치 및 엔트로피 복호화 장치
EP2556672A2 (en) Method and apparatus for encoding video by using transformation index, and method and apparatus for decoding video by using transformation index
WO2010027170A2 (ko) 예측 방향 전환과 선택적 부호화를 이용한 영상 부호화/복호화 장치 및 방법
WO2019017694A1 (ko) 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080063499.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10836220

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010836220

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE