US20130148909A1 - Method and apparatus for image encoding and image decoding - Google Patents

Method and apparatus for image encoding and image decoding

Info

Publication number
US20130148909A1
Authority
US
United States
Prior art keywords
residual blocks
sub residual
prediction
sub
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/759,197
Inventor
Yu-mi Sohn
Woo-jin Han
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US13/759,197
Publication of US20130148909A1
Legal status: Abandoned

Classifications

    • H04N 19/00569
    • H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/109 - Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N 19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/122 - Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/169 - Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/176 - The coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/50 - Predictive coding
    • H04N 19/523 - Motion estimation or motion compensation with sub-pixel accuracy
    • H04N 19/593 - Predictive coding involving spatial prediction techniques
    • H04N 19/60 - Transform coding
    • H04N 19/61 - Transform coding in combination with predictive coding

Definitions

  • Apparatuses and methods consistent with the present invention relate to image encoding and image decoding, and more particularly, to image encoding which improves prediction efficiency and compression efficiency in accordance with image characteristics by performing prediction in lines and performing a one-dimensional transformation in lines on an input image.
  • In general, according to video compression standards such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4 Visual, H.261, H.263 and H.264, image data is compressed by dividing an image frame into a plurality of image blocks, performing prediction on the image blocks and thereby obtaining prediction blocks, and transforming and quantizing the differences between the original image blocks and the prediction blocks.
  • The prediction performed may be intra prediction or inter prediction. Intra prediction is performed on a current image block by using data of restored neighboring blocks in the current frame. Inter prediction is performed by generating a prediction block that corresponds to a current image block from one or more previously encoded video frames, using a block-based motion compensation method.
  • According to related art methods, the data of neighboring blocks used for intra prediction generally comprises pixels of neighboring previous blocks, which are adjacent to the top and left of the current image block. In this case, the top and left pixels of the current image block, which are adjacent to the pixels of the previous blocks, have small differences between prediction values and original pixel values due to their close distance to the pixels of the previous blocks. However, pixels of the current image block that are disposed far from the pixels of the previous blocks may have large differences between prediction values and original pixel values.
  • Meanwhile, according to the H.264 standard, a two-dimensional discrete cosine transformation (DCT) is performed on residual data obtained by using inter prediction or intra prediction in 4×4 blocks. According to the related art Joint Photographic Experts Group (JPEG), MPEG-1, MPEG-2, and MPEG-4 standards, a two-dimensional DCT is performed on the residual data in 8×8 blocks.
  • In the two-dimensional DCT, although horizontal or vertical correlations exist in the residual data, the correlations between data in a residual block may not be efficiently used.
  • Thus, a method of image encoding which improves compression efficiency by improving prediction efficiency is desired, in order to cope with restricted transmission bandwidth and provide an image of higher quality to the user.
  • Exemplary embodiments of the present invention overcome the above disadvantages and other disadvantages not described above. Also, the present invention is not required to overcome the disadvantages described above, and an exemplary embodiment of the present invention may not overcome any of the problems described above.
  • aspects of the present invention provide a method and apparatus for image encoding which improves prediction efficiency and compression efficiency when an image is encoded, and a method and apparatus for image decoding.
  • a method of image encoding including generating a plurality of sub residual blocks by dividing a residual block having a predetermined size; generating prediction sub residual blocks of the sub residual blocks by using residues of previously processed neighboring sub residual blocks; generating difference sub residual blocks by calculating differences between the prediction sub residual blocks and the sub residual blocks; and transforming the difference sub residual blocks.
  • an apparatus for image encoding including a division unit which generates a plurality of sub residual blocks by dividing a residual block having a predetermined size; a residue prediction unit which generates prediction sub residual blocks of the sub residual blocks by using residues of previously processed neighboring sub residual blocks; a subtraction unit which generates difference sub residual blocks by calculating differences between the prediction sub residual blocks and the sub residual blocks; and a transformation unit which transforms the difference sub residual blocks.
  • a method of image decoding including determining a division mode of a current residual block to be decoded by using information on a division mode of the residual block which is included in a received bitstream; generating prediction sub residual blocks of a plurality of sub residual blocks of the residual block by using residues of previously decoded neighboring sub residual blocks in accordance with the determined division mode; restoring difference residues that are differences between the prediction sub residual blocks and the sub residual blocks and are included in the bitstream; and restoring the sub residual blocks by adding the prediction sub residual blocks and the difference residues.
  • an apparatus for image decoding including a residue prediction unit which generates prediction sub residual blocks of a plurality of sub residual blocks of a current residual block to be decoded by using residues of previously decoded neighboring sub residual blocks in accordance with a division mode of the residual block included in a received bitstream; a difference residue restoration unit which restores difference residues that are differences between the prediction sub residual blocks and the sub residual blocks and are included in the bitstream; and an addition unit which restores the sub residual blocks by adding the prediction sub residual blocks and the difference residues.
  • a method of image encoding including dividing an input image into a plurality of image blocks and generating prediction values of pixels of each image block in horizontal or vertical lines; generating residues that are differences between original values and the prediction values of the pixels, in lines; and performing a one-dimensional discrete cosine transformation (DCT) on the residues in lines.
  • an apparatus for image encoding including a prediction unit which divides an input image into a plurality of image blocks and generates prediction values of pixels of each image block in horizontal or vertical pixel lines; a subtraction unit which generates residues that are differences between original values and the prediction values of the pixels, in lines; and a transformation unit which performs one-dimensional discrete cosine transformation (DCT) on the residues in lines.
  • a method of image decoding including restoring residues that are differences between prediction values and original values of horizontal or vertical pixel lines and are included in a received bitstream; predicting pixel values of each pixel line to be decoded by using pixel values of a previous pixel line decoded in a predetermined order; and decoding pixels of the pixel lines by adding the predicted pixel values of the pixel lines and the restored residues.
  • an apparatus for image decoding including a prediction unit which predicts pixel values of horizontal or vertical pixel lines to be decoded by using previous pixel lines in vertical or horizontal lines in a predetermined order; a restoration unit which restores residues that are differences between prediction values of the pixel lines and original pixel values of the pixel lines and are included in a received bitstream; and an addition unit which decodes pixels of the pixel lines by adding the predicted pixel values of the pixel lines and the restored residues.
  • FIG. 1 is a block diagram illustrating an apparatus for image encoding, according to an embodiment of the present invention
  • FIGS. 2A, 2B and 2C are diagrams illustrating examples of when a residual block is divided into a plurality of sub residual blocks, according to an exemplary embodiment of the present invention
  • FIGS. 3A and 3B are diagrams for illustrating a method of generating prediction sub residual blocks, according to an exemplary embodiment of the present invention
  • FIG. 4 is a diagram for illustrating a method of generating prediction sub residual blocks, according to another exemplary embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method of image encoding, according to an exemplary embodiment of the present invention.
  • FIG. 6 is a block diagram illustrating an apparatus for image encoding, according to another exemplary embodiment of the present invention.
  • FIG. 7 is a diagram for illustrating a method of predicting pixel values in lines by a prediction unit illustrated in FIG. 6;
  • FIG. 8 is a diagram for illustrating a method of predicting pixel values, according to another exemplary embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a method of image encoding, according to another exemplary embodiment of the present invention.
  • FIG. 10 is a block diagram illustrating an apparatus for image decoding, according to an exemplary embodiment of the present invention.
  • FIG. 11 is a flowchart illustrating a method of image decoding, according to an exemplary embodiment of the present invention.
  • FIG. 12 is a block diagram illustrating an apparatus for image decoding, according to another exemplary embodiment of the present invention.
  • FIG. 13 is a flowchart illustrating a method of image decoding, according to another exemplary embodiment of the present invention.
  • FIG. 1 is a block diagram illustrating an apparatus 100 for image encoding, according to an exemplary embodiment of the present invention.
  • The apparatus 100 divides a residual block, that is, a difference between an original image block and a prediction image block, into a plurality of sub residual blocks, generates prediction sub residual blocks of the sub residual blocks by using neighboring residues, and transforms difference sub residual blocks that are differences between the original sub residual blocks and the prediction sub residual blocks.
  • The apparatus 100 includes a prediction unit 110, a first subtraction unit 115, a division unit 120, a second subtraction unit 125, a residue prediction unit 130, a transformation unit 135, a quantization unit 140, an entropy encoding unit 145, an inverse quantization unit 150, an inverse transformation unit 155 and an addition unit 160.
  • the prediction unit 110 divides an input image into a plurality of sub blocks having a predetermined size and generates prediction blocks by performing inter or intra prediction on each of the sub blocks.
  • the inter prediction is performed by using a reference picture that was previously encoded and then restored.
  • the prediction unit 110 performs the inter prediction by performing motion prediction which generates motion vectors indicating regions similar to regions of a current block in a predetermined search range of the reference picture and by performing motion compensation which obtains data on corresponding regions of the reference picture which are indicated by the motion vectors, thereby generating a prediction block of the current block.
  • the prediction unit 110 performs the intra prediction which generates a prediction block by using data of neighboring blocks of the current block.
  • the inter prediction and the intra prediction according to related art image compression standards such as H.264 may be used and a variety of modified prediction methods may also be used.
  • the first subtraction unit 115 calculates prediction errors by subtracting pixel values of the prediction block from original pixel values of the current block.
  • a prediction error between an original pixel value and a prediction pixel value is defined as a residue and a block composed of a plurality of residues is defined as a residual block.
  • The division unit 120 divides the residual block into a plurality of sub residual blocks. If the size of the residual block is N×N (where N is a positive number equal to or greater than 2), the residual block is divided into sub residual blocks having a size of any one of N×1, 1×N, and a×a (where a is a natural number smaller than N).
  • FIGS. 2A through 2C are diagrams illustrating examples of a residual block divided into a plurality of sub residual blocks, according to an exemplary embodiment of the present invention.
  • FIG. 2A illustrates an example in which a 4×4 residual block 210 is divided into a plurality of 1×4 sub residual blocks 211, 212, 213 and 214. FIG. 2B illustrates an example in which a 4×4 residual block 220 is divided into a plurality of 4×1 sub residual blocks 221, 222, 223 and 224. FIG. 2C illustrates an example in which a 4×4 residual block 230 is divided into a plurality of 2×2 sub residual blocks 231, 232, 233 and 234.
  • Although a 4×4 residual block is described as an example, the present invention is not limited thereto; it may also be similarly applied to a variety of residual blocks such as an 8×8 residual block and a 16×16 residual block.
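  • The division step above lends itself to a few lines of array slicing. The sketch below is illustrative only (the function name, the mode encoding, and the use of NumPy are assumptions, not part of the patent); it splits an N×N residual block into 1×N rows, N×1 columns, or a×a squares as in FIGS. 2A, 2B and 2C.

```python
import numpy as np

def divide_residual_block(residual, mode):
    """Split an NxN residual block into sub residual blocks.

    mode: "1xN" (horizontal lines, FIG. 2A), "Nx1" (vertical lines, FIG. 2B),
          or an integer a for axa square sub blocks (FIG. 2C).
    Names and the mode encoding are illustrative, not from the patent.
    """
    n = residual.shape[0]
    if mode == "1xN":                       # one sub block per row
        return [residual[i:i + 1, :] for i in range(n)]
    if mode == "Nx1":                       # one sub block per column
        return [residual[:, j:j + 1] for j in range(n)]
    a = int(mode)                           # axa square sub blocks, a < N
    return [residual[i:i + a, j:j + a]
            for i in range(0, n, a) for j in range(0, n, a)]

# Example: a 4x4 residual block split into four 1x4 sub residual blocks
sub_blocks = divide_residual_block(np.arange(16).reshape(4, 4), "1xN")
```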
  • The residue prediction unit 130 generates prediction sub residual blocks of the sub residual blocks by predicting the residues of the sub residual blocks of the residual block, divided as illustrated in FIG. 2A, 2B or 2C, by using residues of previously processed neighboring sub residual blocks.
  • FIGS. 3A and 3B are diagrams for illustrating a method of generating prediction sub residual blocks, according to an exemplary embodiment of the present invention.
  • A method of generating prediction sub residual blocks by dividing a 4×4 residual block into a plurality of 1×4 sub residual blocks 311, 312, 313 and 314 or 321, 322, 323 and 324 will now be described with reference to FIGS. 3A and 3B.
  • In FIG. 3A, the sub residual blocks 311, 312, 313 and 314 included in the 4×4 residual block are separately predicted by using residues of neighboring sub residual blocks previously processed in a predetermined order. The prediction may be performed in a direction orthogonal to the division direction of the sub residual blocks 311, 312, 313 and 314.
  • For example, the prediction residues PR21, PR22, PR23 and PR24 of the second sub residual block 312 may be predicted by extending the previously processed residues R11, R12, R13 and R14 of the first sub residual block 311 in a vertical direction. Likewise, the prediction residues PR31, PR32, PR33 and PR34 of the residues R31, R32, R33 and R34 of the third sub residual block 313 and the prediction residues PR41, PR42, PR43 and PR44 of the residues R41, R42, R43 and R44 of the fourth sub residual block 314 may be predicted by extending the original or restored residues R21, R22, R23 and R24 of the second sub residual block 312 and the original or restored residues R31, R32, R33 and R34 of the third sub residual block 313, respectively.
  • In other words, each sub residual block is predicted either by using residues of a previous sub residual block that are differences between an original image and a prediction image, or by using residues of a neighboring sub residual block that has been restored by performing a one-dimensional discrete cosine transformation (DCT), quantization, inverse quantization, and a one-dimensional inverse discrete cosine transformation (IDCT) on a difference sub residual block and adding the restored difference sub residual block to its prediction sub residual block.
  • In FIG. 3A, the sub residual blocks divided in a horizontal direction are sequentially predicted in a downward direction. However, the prediction order of the sub residual blocks may be changed, as illustrated in FIG. 3B.
  • In FIG. 3B, the fourth sub residual block 324 is predicted first, then the second sub residual block 322, then the first sub residual block 321, and then the third sub residual block 323.
  • That is, the residues R41, R42, R43 and R44 of the fourth sub residual block 324 are predicted by extending the residues a, b, c and d of a previous residual block; then the residues R21, R22, R23 and R24 of the second sub residual block 322 are predicted by calculating the averages of corresponding residues from among the residues a, b, c and d of the previous residual block and the residues R41, R42, R43 and R44 of the fourth sub residual block 324.
  • Likewise, the residues R11, R12, R13 and R14 of the first sub residual block 321 are predicted by averaging corresponding residues from among the residues a, b, c and d of the previous residual block and the residues R21, R22, R23 and R24 of the second sub residual block 322, and the residues R31, R32, R33 and R34 of the third sub residual block 323 are predicted by averaging corresponding residues from among the residues R21, R22, R23 and R24 of the second sub residual block 322 and the residues R41, R42, R43 and R44 of the fourth sub residual block 324.
  • That is, when PRxy denotes the prediction residue of a residue Rxy: PR41 = a, PR21 = (a + R41)/2, PR11 = (a + R21)/2, and PR31 = (R21 + R41)/2.
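  • As a concrete illustration of the reordered prediction in FIG. 3B, the following sketch computes the four prediction rows from the previous block's residues and the block's own (original or restored) rows; the function and variable names are hypothetical and NumPy is assumed.

```python
import numpy as np

def predict_rows_fig3b(prev_row, rows):
    """Prediction residues for four 1x4 sub residual blocks in the FIG. 3B order.

    prev_row : residues (a, b, c, d) of the previously processed residual block.
    rows     : the four original (or restored) rows R1..R4 of the current block.
    Returns the prediction rows PR1..PR4; names are illustrative, not the patent's.
    """
    r1, r2, r3, r4 = rows
    pr4 = prev_row.copy()              # PR4x = previous residues extended downwards
    pr2 = (prev_row + r4) / 2          # PR2x = (previous + R4x) / 2
    pr1 = (prev_row + r2) / 2          # PR1x = (previous + R2x) / 2
    pr3 = (r2 + r4) / 2                # PR3x = (R2x + R4x) / 2
    return pr1, pr2, pr3, pr4

# Example with a 4x4 residual block and the bottom row of the block above it
residual = np.arange(16, dtype=float).reshape(4, 4)
pr1, pr2, pr3, pr4 = predict_rows_fig3b(np.full(4, 2.0), list(residual))
```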
  • The above-described method of generating prediction sub residual blocks for sub residual blocks divided in a horizontal direction may also be similarly applied to sub residual blocks divided in a vertical direction, as illustrated in FIG. 2B.
  • FIG. 4 is a diagram for illustrating a method of generating prediction sub residual blocks, according to another exemplary embodiment of the present invention.
  • Referring to FIG. 4, the residues of a current sub residual block are predicted by performing prediction in lines, using the residues of a previous sub residual block disposed in a direction orthogonal to the division direction of the sub residual blocks. That is, the prediction residues may be generated by extending neighboring pixels of a previous sub residual block in at least one of a horizontal direction and a vertical direction.
  • the second subtraction unit 125 generates difference sub residual blocks by calculating differences between the prediction sub residual blocks generated by the residue prediction unit 130 and the original sub residual blocks.
  • The transformation unit 135 performs a DCT on the difference sub residual blocks. In particular, a one-dimensional DCT is performed on the N×1 or 1×N difference sub residual blocks.
  • That is, the transformation unit 135 performs a one-dimensional horizontal DCT on the difference sub residual blocks which are divided in a horizontal direction as illustrated in FIG. 2A and then predicted, and performs a one-dimensional vertical DCT on the difference sub residual blocks which are divided in a vertical direction as illustrated in FIG. 2B and then predicted.
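  • A minimal sketch of the one-dimensional transform step, assuming SciPy's DCT routines as a stand-in (the patent does not specify a particular DCT variant or normalization):

```python
import numpy as np
from scipy.fft import dct, idct

def transform_difference_row(diff_row):
    """One-dimensional DCT of a 1xN difference sub residual block.

    An orthonormal DCT-II is used here purely for illustration.
    """
    return dct(diff_row, type=2, norm="ortho")

def inverse_transform_row(coeffs):
    """Inverse one-dimensional DCT used when restoring difference residues."""
    return idct(coeffs, type=2, norm="ortho")

diff = np.array([3.0, -1.0, 0.0, 2.0])       # a 1x4 difference sub residual block
coeffs = transform_difference_row(diff)
restored = inverse_transform_row(coeffs)     # equals diff up to rounding
```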
  • the quantization unit 140 performs quantization and the entropy encoding unit 145 performs variable length encoding on difference residues of the transformed difference sub residual blocks so that a bitstream is generated.
  • the difference residues quantized by the quantization unit 140 are inverse quantized by the inverse quantization unit 150 and inverse transformed by the inverse transformation unit 155 so that the difference sub residual blocks are restored.
  • The addition unit 160 restores the sub residual blocks by adding the difference residues of the restored difference sub residual blocks and the prediction residues of the prediction sub residual blocks generated by the residue prediction unit 130.
  • the restored sub residual blocks are used when prediction sub residual blocks of next sub residual blocks are generated.
  • The apparatus 100 may further include a division mode determination unit (not shown) which compares the costs of bitstreams generated by using a plurality of sub residual blocks having different sizes, and selects the sub residual block size having the smallest cost to divide a current residual block.
  • Specifically, the division mode determination unit determines a division mode of a residual block by dividing the residual block into a plurality of sub residual blocks having different sizes, generating prediction sub residual blocks of the sub residual blocks by using residues of a previous sub residual block, and comparing the costs of bitstreams generated by transforming, quantizing and entropy encoding the difference sub residual blocks.
  • For example, the division mode determination unit divides an N×N residual block into a plurality of 1×N sub residual blocks in division mode 1, into a plurality of N×1 sub residual blocks in division mode 2, or into a plurality of a×a sub residual blocks in division mode 3, compares the rate-distortion (RD) costs of the bitstreams generated by transforming, quantizing, and entropy encoding the difference sub residual blocks generated in accordance with each division mode, and determines the division mode having the smallest RD cost as the final division mode.
  • The division mode determination unit may also determine whether to transform the residual block at all, by comparing the costs of bitstreams generated by encoding the difference sub residual blocks of a plurality of sub residual blocks having different sizes with the costs of bitstreams generated by bypassing transformation of the residual block and quantizing and entropy encoding the residual block.
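  • The division mode decision can be sketched as a simple minimum-RD-cost search. Everything below is illustrative: encode_with_mode() is a hypothetical helper standing in for the divide/predict/transform/quantize/entropy-encode chain described above, and the Lagrange multiplier value is arbitrary.

```python
def choose_division_mode(residual, modes=("1xN", "Nx1", "axa"), lam=0.85):
    """Pick the division mode with the smallest rate-distortion cost D + lambda*R."""
    best_mode, best_cost = None, float("inf")
    for mode in modes:
        bits, distortion = encode_with_mode(residual, mode)  # hypothetical helper
        cost = distortion + lam * bits                        # RD cost
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```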
  • FIG. 5 is a flowchart illustrating a method of image encoding, according to an exemplary embodiment of the present invention.
  • a residual block generated by subtracting pixel values of a prediction block from original pixel values of a current block is divided into a plurality of sub residual blocks.
  • prediction sub residual blocks are generated by predicting residues of the current sub residual blocks using residues of previously processed sub residual blocks.
  • the prediction sub residual blocks are predicted by extending the residues of the previous sub residual blocks at least in one of a horizontal direction and a vertical direction in accordance with a division type of the sub residual blocks.
  • difference sub residual blocks are generated by calculating differences between the prediction sub residual blocks and the original sub residual blocks.
  • DCT is performed on the difference sub residual blocks in accordance with the division type. As described above, one-dimensional DCT is performed on N×1 or 1×N difference sub residual blocks.
  • the transformed difference sub residual blocks are quantized and entropy encoded and thus a bitstream is output.
  • the sub residual blocks are restored by inverse quantizing and inverse transforming the quantized difference sub residual blocks and adding the processed difference sub residual blocks to the prediction sub residual blocks. The restored sub residual blocks are used when residues of next sub residual blocks are predicted.
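  • Putting the operations of FIG. 5 together for a block split into 1×N rows and predicted in the simple downward order of FIG. 3A, a toy encoding loop might look as follows; the quantizer, its step size and the helper names are assumptions for illustration, and entropy coding is omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def encode_residual_block_rows(residual, prev_row, qstep=2.0):
    """Toy version of the FIG. 5 loop for a residual block split into 1xN rows.

    Each row is predicted from the previously reconstructed row, the difference
    is 1-D DCT transformed and coarsely quantized, and the reconstructed row
    feeds the next prediction. qstep and the rounding quantizer are assumptions.
    """
    coded, reference = [], prev_row.astype(float)
    for row in residual.astype(float):
        diff = row - reference                         # difference sub residual block
        q = np.round(dct(diff, norm="ortho") / qstep)  # 1-D DCT + quantization
        coded.append(q)                                # entropy coding omitted
        restored = idct(q * qstep, norm="ortho")       # inverse quantize + 1-D IDCT
        reference = reference + restored               # restored sub residual block
    return coded

coded = encode_residual_block_rows(np.arange(16.).reshape(4, 4), np.zeros(4))
```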
  • FIG. 6 is a block diagram illustrating an apparatus 600 for image encoding, according to another exemplary embodiment of the present invention.
  • the apparatus 100 illustrated in FIG. 1 divides a residual block into a plurality of sub residual blocks, generates prediction sub residual blocks of the sub residual blocks, and transforms difference sub residual blocks that are differences between the original sub residual blocks and the prediction sub residual blocks.
  • The apparatus 600 according to the current exemplary embodiment generates prediction values of an input image block, rather than of a residual block, in lines, and a one-dimensional DCT is performed in lines on the resulting residues.
  • The apparatus 600 includes a prediction unit 610, a subtraction unit 615, a transformation unit 620, a quantization unit 625, an entropy encoding unit 630, an inverse quantization unit 635, an inverse transformation unit 640 and an addition unit 645.
  • the prediction unit 610 divides an input image into a plurality of image blocks and predicts pixel values of each image block in horizontal or vertical pixel lines.
  • The prediction unit 610 predicts the pixel values in horizontal or vertical pixel lines in the same manner as the residue prediction unit 130 of the apparatus 100 illustrated in FIG. 1, which predicts the residues of current 1×N or N×1 sub residual blocks divided from a residual block by using residues of a neighboring sub residual block.
  • FIG. 7 is a diagram for illustrating a method of predicting pixel values in lines by the prediction unit 610 illustrated in FIG. 6 .
  • Although a 4×4 input image block divided into a plurality of horizontal pixel lines is illustrated in FIG. 7 as an example, the present invention is not limited thereto. An exemplary embodiment of the present invention may also be applied to input image blocks having different sizes and to an input image block divided into a plurality of vertical pixel lines.
  • Referring to FIG. 7, the pixel values P11, P12, P13 and P14 of a first horizontal line 711 may be predicted by extending the pixel values x, y, z and w of a neighboring block in a direction orthogonal to the horizontal pixel lines. Next, the pixel values P21, P22, P23 and P24 of a second horizontal line 712 may be predicted by extending the pixel values P11, P12, P13 and P14 of the first horizontal line 711 in the same direction. Similarly, the pixel values P31, P32, P33 and P34 of a third horizontal line 713 and the pixel values P41, P42, P43 and P44 of a fourth horizontal line 714 may be respectively predicted by extending the pixel values P21, P22, P23 and P24 of the second horizontal line 712 and the pixel values P31, P32, P33 and P34 of the third horizontal line 713.
  • the original pixel values or pixel values restored by being transformed, quantized, inverse quantized and inverse transformed may be used as pixel values of a previous horizontal pixel line in order to predict pixel values of a current horizontal pixel line.
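  • A short sketch of this line-by-line prediction (downward order, horizontal lines) follows; the names are illustrative and, for simplicity, the original lines rather than reconstructed lines are fed back as references, which is one of the two options described above.

```python
import numpy as np

def predict_block_in_lines(block, top_row):
    """Line-by-line prediction of a block as in FIG. 7 (downward order).

    Each horizontal line is predicted by repeating the line above it
    (the neighboring block's bottom row for the first line).
    """
    predictions = []
    reference = top_row
    for row in block:                 # horizontal pixel lines, top to bottom
        predictions.append(reference.copy())
        reference = row               # the next line is predicted from this one
    return np.vstack(predictions)

block = np.arange(16, dtype=float).reshape(4, 4)
pred = predict_block_in_lines(block, top_row=np.full(4, 10.0))
residues = block - pred               # residues encoded line by line
```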
  • A method of sequentially predicting the pixel values of an image block in horizontal pixel lines in a downward direction is described with reference to FIG. 7. However, the prediction order of the horizontal pixel lines may be changed, as shown by the prediction order of the sub residual blocks 321, 322, 323 and 324 illustrated in FIG. 3B. Also, the pixel values of the image block may be predicted in vertical pixel lines.
  • Since the prediction unit 610 generates the prediction values of a current block in lines by using the pixel values of a neighboring block, the problem of related art block-based prediction, in which the prediction efficiency for pixels disposed relatively far from the neighboring block is reduced, may be mitigated.
  • the prediction unit 610 may predict each pixel value of the input image block by using a half-pel interpolation filter.
  • FIG. 8 is a diagram for illustrating a method of predicting pixel values, according to another exemplary embodiment of the present invention.
  • FIG. 8 illustrates the pixel value P11 of FIG. 7 and the previous pixel values u, v and x, which are disposed in a vertical direction from the pixel value P11.
  • Referring to FIG. 8, an interpolation value h is generated at a half-pel location by using the previous pixel values u, v and x, and a prediction value of the pixel value P11 may be generated by using the interpolation value h and the closest neighboring pixel value x.
  • The interpolation value h may be interpolated by a 3-tap filter, as shown in Equation 1, using the previous pixel values u, v and x. Here, w1, w2 and w3 represent weights given in accordance with the relative distances between the interpolation value h and the previous pixel values u, v and x, w4 represents a predetermined offset value, and the operator ">>" represents a shift operation.
  • The pixel value P11 may then be predicted by using the interpolation value h and the previous neighboring pixel value x, as shown in Equation 2. In the same manner, each pixel value of the current block may be predicted by generating half-pel interpolation values at half-pel locations and by using the half-pel interpolation values and the previous pixel value that is closest to the current pixel to be predicted.
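  • Since the bodies of Equations 1 and 2 are not reproduced in this text, the following sketch only illustrates the general shape of the half-pel prediction: a weighted 3-tap filter with an offset and a shift produces the interpolation value h, which is then combined with the nearest pixel x. The weights, offset, shift and final rounding average are assumptions, not values from the patent.

```python
def predict_with_half_pel(u, v, x, weights=(1, -5, 20), offset=8, shift=4):
    """Illustrative half-pel prediction in the spirit of FIG. 8.

    h follows the form of Equation 1 (3-tap filter, offset, right shift) and the
    return value the form of Equation 2; all constants here are assumed.
    """
    w1, w2, w3 = weights
    h = (w1 * u + w2 * v + w3 * x + offset) >> shift   # half-pel value near x
    return (h + x + 1) >> 1                            # prediction of the current pixel

p11 = predict_with_half_pel(u=100, v=102, x=104)       # example inputs
```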
  • the subtraction unit 615 generates residues that are differences between the prediction values and the original pixel values in lines.
  • the transformation unit 620 performs one-dimensional DCT on the residues in lines. If the prediction unit 610 has predicted the pixel values in horizontal lines, the transformation unit 620 generates transformation coefficients by performing a one-dimensional horizontal DCT. If the prediction unit 610 has predicted the pixel values in vertical lines, the transformation unit 620 generates transformation coefficients by performing a one-dimensional vertical DCT. Meanwhile, although the prediction unit 610 has generated the prediction values in lines, the transformation unit 620 may alternatively perform two-dimensional DCT in blocks after the prediction of all pixels included in the image block is completed.
  • the quantization unit 625 quantizes the transformation coefficients in lines and the entropy encoding unit 630 performs variable-length encoding on the quantized transformation coefficients so that a bitstream is generated.
  • the quantized transformation coefficients are inverse quantized by the inverse quantization unit 635 and inverse transformed by the inverse transformation unit 640 so that the residues are restored.
  • the addition unit 645 restores the pixel values in lines by adding the restored residues and the prediction pixel values generated by the prediction unit 610 . As described above, when two-dimensional DCT has been performed, the pixel values may be restored in blocks. The restored pixel values in lines are used when next pixel values in lines are predicted.
  • FIG. 9 is a flowchart illustrating a method of image encoding, according to another exemplary embodiment of the present invention.
  • an input image is divided into a plurality of image blocks and prediction values of pixels of each image block are generated in horizontal or vertical lines.
  • In a subsequent operation, residues that are the differences between the original pixel values and the prediction values are generated in lines, and transformation coefficients are generated by performing a one-dimensional DCT on the residues in lines.
  • a bitstream is generated by quantizing and entropy encoding the transformation coefficients in lines. As described above, although the prediction is performed in lines, the transformation may be performed in blocks as in a related art method.
  • prediction values are generated in lines so that a distance between a current pixel and a reference pixel used for prediction is reduced. Accordingly, accuracy of the prediction increases and thus a bit rate may be reduced.
  • FIG. 10 is a block diagram illustrating an apparatus 1000 for image decoding, according to an exemplary embodiment of the present invention.
  • the apparatus 1000 for image decoding corresponds to the apparatus 100 for image encoding illustrated in FIG. 1 .
  • The apparatus 1000 includes an entropy decoding unit 1010, an inverse quantization unit 1020, an inverse transformation unit 1030, a residue prediction unit 1040, a first addition unit 1050, a second addition unit 1060 and a prediction unit 1070.
  • the entropy decoding unit 1010 receives and entropy decodes a compressed bitstream so that information on a division mode of a current residual block which is included in the bitstream is extracted.
  • the entropy decoding unit 1010 also entropy decodes difference residues included in the bitstream, the inverse quantization unit 1020 inverse quantizes the entropy decoded difference residues, and the inverse transformation unit 1030 restores the difference residues by inverse transforming the inverse quantized difference residues.
  • The inverse transformation unit 1030 performs a one-dimensional inverse DCT if the current residual block was encoded in N×1 or 1×N sub residual blocks.
  • the residue prediction unit 1040 divides the current residual block into a plurality of sub residual blocks in accordance with the extracted information on the division mode of the current residual block to be decoded and generates prediction sub residual blocks of the current sub residual blocks by using residues of previously decoded neighboring sub residual blocks.
  • the residue prediction unit 1040 generates the prediction sub residual blocks of the current sub residual blocks in the same manner as the residue prediction unit 130 illustrated in FIG. 1 .
  • the first addition unit 1050 restores the sub residual blocks by adding difference sub residual blocks of the current sub residual blocks which are composed of the difference residues output by the inverse transformation unit 1030 and the prediction sub residual blocks.
  • the prediction unit 1070 generates a prediction block by performing inter prediction or intra prediction in accordance with a prediction mode of the current block.
  • the second addition unit 1060 restores the current block by adding the prediction block generated by the prediction unit 1070 and the sub residual blocks restored by the first addition unit 1050 .
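  • The decoder-side restoration of the sub residual blocks is the inverse of the encoder loop. The sketch below is illustrative: NumPy is assumed, the names are hypothetical, and entropy decoding, inverse quantization and the inverse transform are assumed to have already produced the restored difference rows.

```python
import numpy as np

def decode_residual_rows(difference_rows, prev_row):
    """Restore 1xN sub residual blocks: prediction + restored difference residues."""
    restored, prediction = [], prev_row
    for diff in difference_rows:        # one restored difference row per sub residual block
        row = prediction + diff         # first addition unit: prediction + difference
        restored.append(row)
        prediction = row                # restored residues predict the next sub block
    return np.vstack(restored)
```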
  • FIG. 11 is a flowchart illustrating a method of image decoding, according to an exemplary embodiment of the present invention.
  • a division mode of a current residual block to be decoded is determined by using information on a division mode of the residual block which is included in a received bitstream.
  • prediction sub residual blocks of a plurality of sub residual blocks of the residual block are generated by using residues of neighboring sub residual blocks previously decoded in accordance with the determined division mode.
  • difference sub residual blocks that are differences between the prediction sub residual blocks and the sub residual blocks and that are included in the bitstream, are restored.
  • the sub residual blocks are restored by adding the prediction sub residual blocks and the difference sub residual blocks.
  • An image is restored by adding the restored sub residual blocks and a prediction block generated by performing inter or intra prediction.
  • FIG. 12 is a block diagram illustrating an apparatus 1200 for image decoding, according to another exemplary embodiment of the present invention.
  • The apparatus 1200 includes a restoration unit 1210, a prediction unit 1220 and an addition unit 1230.
  • The restoration unit 1210 restores the residues that are differences between prediction values and original values of pixel lines and are included in a received bitstream, and includes an entropy decoding unit 1211, an inverse quantization unit 1212 and an inverse transformation unit 1213.
  • the prediction unit 1220 predicts pixel values of a horizontal or vertical current pixel line to be decoded in a predetermined order by using a corresponding previously decoded pixel line.
  • the addition unit 1230 decodes the current pixel line by adding the prediction values of the current pixel line and the restored residues. By repeating the above-described procedure, all pixels included in an image block may be decoded.
  • FIG. 13 is a flowchart illustrating a method of image decoding, according to another exemplary embodiment of the present invention.
  • pixel values of each pixel line to be decoded are predicted by using pixel values of a previous pixel line decoded in a predetermined order.
  • pixels of the current pixel lines are decoded by adding the predicted pixel values of the pixel lines and the restored residues.
  • the invention can also be embodied as computer readable codes on a computer readable recording medium.
  • the computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
  • As described above, when horizontal or vertical correlations exist in an image or in residual data, prediction efficiency and compression efficiency may be improved by performing prediction and a one-dimensional transformation in lines in consideration of those correlations.

Abstract

Provided are a method and apparatus for image encoding which improves encoding efficiency in accordance with image characteristics by performing prediction in lines and performing a one-dimensional transformation in lines on an input image, and a method and apparatus for image decoding. Encoding efficiency of an image may be improved by generating a prediction sub residual block using neighboring residues and performing a one-dimensional discrete cosine transformation (DCT) on a difference residual block which is a difference between an original sub residual block and the prediction sub residual block.

Description

    CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 13/541,151, filed Jul. 3, 2012, which is a divisional application of U.S. application Ser. No. 11/965,104, filed Dec. 27, 2007, which claims priority from Korean Patent Application No. 10-2007-0028886, filed on Mar. 23, 2007, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entirety by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • Apparatuses and methods consistent with the present invention relate to image encoding and image decoding, and more particularly, to image encoding which improves prediction efficiency and compression efficiency in accordance with image characteristics by performing prediction in lines and performing a one-dimensional transformation in lines on an input image.
  • 2. Description of the Related Art
  • In general, according to video compression standards such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-4 Visual, H.261, H.263 and H.264, image data is compressed by dividing an image frame into a plurality of image blocks, performing prediction on the image blocks and thereby obtaining prediction blocks, and transforming and quantizing differences between the original image blocks and the prediction blocks.
  • The prediction performed may be intra prediction or inter prediction. Intra prediction is performed on a current image block by using data of restored neighboring blocks in the current frame. Inter prediction is performed by generating a prediction block that corresponds to a current image block from one or more video frames previously encoded using a block-based motion compensation method. According to related art methods, generally, data of neighboring blocks used for intra prediction comprises pixels of neighboring previous blocks, which are adjacent to the top and left of the current image block. In this case, top and left pixels of the current image block, which are adjacent to pixels of previous blocks, have small differences between prediction values and original pixel values due to their close distances from the pixels of the previous blocks. However, pixels of the current image block, which are disposed far from the pixels of the previous blocks, may have large differences between prediction values and original pixel values.
  • Meanwhile, according to H.264 standards, two-dimensional discrete cosine transformation (DCT) is performed on residual data obtained by using inter prediction or intra prediction in 4×4 blocks. According to related art Joint Photographic Experts Group (JPEG), MPEG-1, MPEG-2, and MPEG-4 standards, two-dimensional DCT is performed on the residual data in 8×8 blocks. In the two-dimensional DCT, although horizontal or vertical correlations exist in the residual data, the correlations between data in a residual block may not be efficiently used.
  • Thus, a method of image encoding which improves compression efficiency by improving prediction efficiency is desired in order to cope with a restriction of a transmission bandwidth and provide an image having higher quality to a user.
  • SUMMARY OF THE INVENTION
  • Exemplary embodiments of the present invention overcome the above disadvantages and other disadvantages not described above. Also, the present invention is not required to overcome the disadvantages described above, and an exemplary embodiment of the present invention may not overcome any of the problems described above.
  • Aspects of the present invention provide a method and apparatus for image encoding which improves prediction efficiency and compression efficiency when an image is encoded, and a method and apparatus for image decoding.
  • According to an aspect of the present invention, there is provided a method of image encoding, including generating a plurality of sub residual blocks by dividing a residual block having a predetermined size; generating prediction sub residual blocks of the sub residual blocks by using residues of previously processed neighboring sub residual blocks; generating difference sub residual blocks by calculating differences between the prediction sub residual blocks and the sub residual blocks; and transforming the difference sub residual blocks.
  • According to another aspect of the present invention, there is provided an apparatus for image encoding, including a division unit which generates a plurality of sub residual blocks by dividing a residual block having a predetermined size; a residue prediction unit which generates prediction sub residual blocks of the sub residual blocks by using residues of previously processed neighboring sub residual blocks; a subtraction unit which generates difference sub residual blocks by calculating differences between the prediction sub residual blocks and the sub residual blocks; and a transformation unit which transforms the difference sub residual blocks.
  • According to another aspect of the present invention, there is provided a method of image decoding, including determining a division mode of a current residual block to be decoded by using information on a division mode of the residual block which is included in a received bitstream; generating prediction sub residual blocks of a plurality of sub residual blocks of the residual block by using residues of previously decoded neighboring sub residual blocks in accordance with the determined division mode; restoring difference residues that are differences between the prediction sub residual blocks and the sub residual blocks and are included in the bitstream; and restoring the sub residual blocks by adding the prediction sub residual blocks and the difference residues.
  • According to another aspect of the present invention, there is provided an apparatus for image decoding, including a residue prediction unit which generates prediction sub residual blocks of a plurality of sub residual blocks of a current residual block to be decoded by using residues of previously decoded neighboring sub residual blocks in accordance with a division mode of the residual block included in a received bitstream; a difference residue restoration unit which restores difference residues that are differences between the prediction sub residual blocks and the sub residual blocks and are included in the bitstream; and an addition unit which restores the sub residual blocks by adding the prediction sub residual blocks and the difference residues.
  • According to another aspect of the present invention, there is provided a method of image encoding, including dividing an input image into a plurality of image blocks and generating prediction values of pixels of each image block in horizontal or vertical lines; generating residues that are differences between original values and the prediction values of the pixels, in lines; and performing a one-dimensional discrete cosine transformation (DCT) on the residues in lines.
  • According to another aspect of the present invention, there is provided an apparatus for image encoding, including a prediction unit which divides an input image into a plurality of image blocks and generates prediction values of pixels of each image block in horizontal or vertical pixel lines; a subtraction unit which generates residues that are differences between original values and the prediction values of the pixels, in lines; and a transformation unit which performs one-dimensional discrete cosine transformation (DCT) on the residues in lines.
  • According to another aspect of the present invention, there is provided a method of image decoding, including restoring residues that are differences between prediction values and original values of horizontal or vertical pixel lines and are included in a received bitstream; predicting pixel values of each pixel line to be decoded by using pixel values of a previous pixel line decoded in a predetermined order; and decoding pixels of the pixel lines by adding the predicted pixel values of the pixel lines and the restored residues.
  • According to another aspect of the present invention, there is provided an apparatus for image decoding, including a prediction unit which predicts pixel values of horizontal or vertical pixel lines to be decoded by using previous pixel lines in vertical or horizontal lines in a predetermined order; a restoration unit which restores residues that are differences between prediction values of the pixel lines and original pixel values of the pixel lines and are included in a received bitstream; and an addition unit which decodes pixels of the pixel lines by adding the predicted pixel values of the pixel lines and the restored residues.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other features of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:
  • FIG. 1 is a block diagram illustrating an apparatus for image encoding, according to an exemplary embodiment of the present invention;
  • FIGS. 2A, 2B and 2C are diagrams illustrating examples of when a residual block is divided into a plurality of sub residual blocks, according to an exemplary embodiment of the present invention;
  • FIGS. 3A and 3B are diagrams for illustrating a method of generating prediction sub residual blocks, according to an exemplary embodiment of the present invention;
  • FIG. 4 is a diagram for illustrating a method of generating prediction sub residual blocks, according to another exemplary embodiment of the present invention;
  • FIG. 5 is a flowchart illustrating a method of image encoding, according to an exemplary embodiment of the present invention;
  • FIG. 6 is a block diagram illustrating an apparatus for image encoding, according to another exemplary embodiment of the present invention;
  • FIG. 7 is a diagram for illustrating a method of predicting pixel values in lines by a prediction unit illustrated in FIG. 6;
  • FIG. 8 is a diagram for illustrating a method of predicting pixel values, according to another exemplary embodiment of the present invention;
  • FIG. 9 is a flowchart illustrating a method of image encoding, according to another exemplary embodiment of the present invention;
  • FIG. 10 is a block diagram illustrating an apparatus for image decoding, according to an exemplary embodiment of the present invention;
  • FIG. 11 is a flowchart illustrating a method of image decoding, according to an exemplary embodiment of the present invention;
  • FIG. 12 is a block diagram illustrating an apparatus for image decoding, according to another exemplary embodiment of the present invention; and
  • FIG. 13 is a flowchart illustrating a method of image decoding, according to another exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE INVENTION
  • Hereinafter, the present invention will be described in detail by explaining exemplary embodiments of the invention with reference to the attached drawings.
  • FIG. 1 is a block diagram illustrating an apparatus 100 for image encoding, according to an exemplary embodiment of the present invention.
  • The apparatus 100 divides a residual block, that is, a difference between an original image block and a prediction image block, into a plurality of sub residual blocks, generates prediction sub residual blocks of the sub residual blocks by using neighboring residues, and transforms difference sub residual blocks that are differences between the original sub residual blocks and the prediction sub residual blocks.
  • Referring to FIG. 1, the apparatus 100 includes a prediction unit 110, a first subtraction unit 115, a division unit 120, a second subtraction unit 125, a residue prediction unit 130, a transformation unit 135, a quantization unit 140, an entropy encoding unit 145, an inverse quantization unit 150, an inverse transformation unit 155 and an addition unit 160.
  • The prediction unit 110 divides an input image into a plurality of sub blocks having a predetermined size and generates prediction blocks by performing inter or intra prediction on each of the sub blocks. The inter prediction is performed by using a reference picture that was previously encoded and then restored. The prediction unit 110 performs the inter prediction by performing motion prediction which generates motion vectors indicating regions similar to regions of a current block in a predetermined search range of the reference picture and by performing motion compensation which obtains data on corresponding regions of the reference picture which are indicated by the motion vectors, thereby generating a prediction block of the current block. Also, the prediction unit 110 performs the intra prediction which generates a prediction block by using data of neighboring blocks of the current block. The inter prediction and the intra prediction according to related art image compression standards such as H.264 may be used and a variety of modified prediction methods may also be used.
  • When the prediction block of the current block is generated by performing the inter prediction or the intra prediction, the first subtraction unit 115 calculates prediction errors by subtracting pixel values of the prediction block from original pixel values of the current block. Hereinafter, a prediction error between an original pixel value and a prediction pixel value is defined as a residue and a block composed of a plurality of residues is defined as a residual block.
  • The division unit 120 divides the residual block into a plurality of sub residual blocks. In more detail, assuming that the size of the residual block is N×N (where N is a positive number equal to or greater than 2), the residual block is divided into the sub residual blocks having the size of any one of N×1, 1×N, and a×a (where a is a natural number smaller than N).
  • FIGS. 2A through 2C are diagrams illustrating examples of when a residual block is divided into a plurality of sub residual blocks, according to an exemplary embodiment of the present invention. FIG. 2A is a diagram illustrating an example of when a 4×4 residual block 210 is divided into a plurality of 1×4 sub residual blocks 211, 212, 213 and 214. FIG. 2B is a diagram illustrating an example of when a 4×4 residual block 220 is divided into a plurality of 4×1 sub residual blocks 221, 222, 223 and 224. FIG. 2C is a diagram illustrating an example of when a 4×4 residual block 230 is divided into a plurality of 2×2 sub residual blocks 231, 232, 233 and 234. Although only a 4×4 residual block is described as an example, the present invention is not limited thereto. The present invention may also be similarly applied to a variety of residual blocks such as an 8×8 residual block and a 16×16 residual block.
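  • As an illustration only (not part of the original disclosure), the following minimal Python sketch shows one way the division of FIGS. 2A, 2B and 2C could be expressed; the function name and mode labels are hypothetical.

```python
# Minimal sketch (illustrative only): splitting an N x N residual block into
# 1 x N rows, N x 1 columns, or a x a squares, as in the division modes of
# FIGS. 2A-2C.
import numpy as np

def divide_residual_block(residual, mode, a=2):
    """Return a list of sub residual blocks for the given division mode."""
    n = residual.shape[0]
    if mode == "1xN":          # horizontal strips, FIG. 2A
        return [residual[i:i + 1, :] for i in range(n)]
    if mode == "Nx1":          # vertical strips, FIG. 2B
        return [residual[:, j:j + 1] for j in range(n)]
    if mode == "axa":          # a x a squares, FIG. 2C (a must divide N)
        return [residual[i:i + a, j:j + a]
                for i in range(0, n, a) for j in range(0, n, a)]
    raise ValueError("unknown division mode")

block = np.arange(16).reshape(4, 4)
print(len(divide_residual_block(block, "1xN")))   # 4 strips of size 1x4
print(len(divide_residual_block(block, "axa")))   # 4 blocks of size 2x2
```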
  • Referring back to FIG. 1, the residue prediction unit 130 generates prediction sub residual blocks of the sub residual blocks by predicting residues of the sub residual blocks of the residual block divided as illustrated in FIG. 2A, 2B or 2C by using residues of previously processed neighboring sub residual blocks.
  • FIGS. 3A and 3B are diagrams for illustrating a method of generating prediction sub residual blocks, according to an exemplary embodiment of the present invention. In FIGS. 3A and 3B, Rxy represents a residue at a location (x,y) (x,y=1, 2, 3, 4). A method of generating prediction sub residual blocks by dividing a 4×4 residual block into a plurality of 1×4 sub residual blocks 311, 312, 313 and 314 or 321, 322, 323 and 324 will now be described with reference to FIGS. 3A and 3B.
  • Referring to FIG. 3A, the sub residual blocks 311, 312, 313 and 314 included in the 4×4 residual block are separately predicted by using residues of neighboring sub residual blocks previously processed in a predetermined order. The prediction may be performed in an orthogonal direction to a division direction of the sub residual blocks 311, 312, 313 and 314. For example, assuming that the sub residual blocks 311, 312, 313 and 314 divided in a horizontal direction are sequentially predicted in a downward direction, residues R11, R12, R13 and R14 of a first sub residual block 311 may be predicted by extending residues a, b, c and d of a previous residual block encoded prior to the current residual block in a vertical direction. That is, assuming that residues R11, R12, R13 and R14 of the first sub residual block 311 have prediction residues PR11, PR12, PR13 and PR14, respectively, PR11=a, PR12=b, PR13=c and PR14=d.
  • Also, prediction residues PR21, PR22, PR23 and PR24 of residues R21, R22, R23 and R24 of a second sub residual block 312 may be predicted by extending previously processed residues R11, R12, R13 and R14 of the first sub residual block 311 in a vertical direction. Likewise, prediction residues PR31, PR32, PR33 and PR34 of residues R31, R32, R33 and R34 of a third sub residual block 313 and prediction residues PR41, PR42, PR43 and PR44 of residues R41, R42, R43 and R44 of a fourth sub residual block 314 may be predicted by extending original or restored residues R21, R22, R23 and R24 of the second sub residual block 312 and original or restored residues R31, R32, R33 and R34 of the third sub residual block 313, respectively. In this case, each sub residual block may be predicted by using residues of a previous sub residual block that are differences between an original image and a prediction image, or by using residues of a neighboring sub residual block that has been restored by performing a one-dimensional discrete cosine transformation (DCT), quantization, inverse quantization, and a one-dimensional inverse discrete cosine transformation (IDCT) on a difference sub residual block and adding the restored difference sub residual block to its prediction sub residual block.
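  • The top-down row prediction of FIG. 3A can be sketched as follows; this is a simplified illustration in Python, not the patent's implementation, and it reuses original rather than restored residues for brevity.

```python
# Minimal sketch, assuming the top-down prediction order of FIG. 3A: each
# 1 x 4 sub residual block (row) is predicted by repeating the residues of
# the row directly above it; the first reference row comes from the
# previously encoded neighboring block (a, b, c, d).
import numpy as np

def predict_rows_top_down(residual, top_neighbor):
    """Return prediction and difference residues for 1xN row sub blocks."""
    prediction = np.empty_like(residual)
    reference = top_neighbor            # residues a, b, c, d of previous block
    for i in range(residual.shape[0]):
        prediction[i, :] = reference    # extend the previous row downwards
        # A real encoder would use the *restored* row (after 1-D DCT,
        # quantization and the inverse steps); the original row is used
        # here only to keep the sketch short.
        reference = residual[i, :]
    difference = residual - prediction
    return prediction, difference

residual = np.array([[0, 10, 0, 0]] * 4)
top = np.array([0, 10, 0, 0])           # residues a, b, c, d
pred, diff = predict_rows_top_down(residual, top)
print(diff)                              # all zeros: perfectly predicted
```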
  • In FIG. 3A, the sub residual blocks divided in a horizontal direction are sequentially predicted in a downward direction. However, a prediction order of the sub residual blocks may be changed as illustrated in FIG. 3B.
  • Referring to FIG. 3B, a fourth sub residual block 324 is predicted first, then a second sub residual block 322 is predicted, then a first sub residual block 321 is predicted, and then a third sub residual block 323 is predicted. In more detail, residues R41, R42, R43 and R44 of the fourth sub residual block 324 are predicted by extending residues a, b, c and d of a previous residual block, then residues R21, R22, R23 and R24 of the second sub residual block 322 are predicted by calculating average values of corresponding residues, each respectively from among the residues a, b, c and d of the previous residual block and from among residues R41, R42, R43 and R44 of the fourth sub residual block 324. Also, residues R11, R12, R13 and R14 of the first sub residual block 321 are predicted by calculating average values of corresponding residues, each respectively from among the residues a, b, c and d of the previous residual block and from among residues R21, R22, R23 and R24 of the second sub residual block 322, and residues R31, R32, R33 and R34 of the third sub residual block 323 are predicted by calculating average values of corresponding residues, each respectively from among residues R21, R22, R23 and R24 of the second sub residual block 322 and from among residues R41, R42, R43 and R44 of the fourth sub residual block 324. For example, assuming that PRxy is a prediction residue of a residue Rxy, PR41=a, PR21=(a+R41)/2, PR11=(a+R21)/2, and PR31=(R21+R41)/2.
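  • The alternative prediction order of FIG. 3B can be sketched in the same style; again this is a simplified illustration with an illustrative function name, using original rather than restored residues.

```python
# Minimal sketch of the FIG. 3B order for a 4 x 4 block split into rows:
# row 4 is predicted from the neighboring residues a..d, row 2 as the average
# of a..d and row 4, row 1 as the average of a..d and row 2, and row 3 as the
# average of rows 2 and 4 (e.g. PR41 = a, PR21 = (a + R41)/2, ...).
import numpy as np

def predict_rows_fig3b(residual, top_neighbor):
    pred = np.empty_like(residual, dtype=float)
    pred[3] = top_neighbor                            # fourth row: extend a..d
    pred[1] = (top_neighbor + residual[3]) / 2        # second row
    pred[0] = (top_neighbor + residual[1]) / 2        # first row
    pred[2] = (residual[1] + residual[3]) / 2         # third row
    return pred

residual = np.arange(16, dtype=float).reshape(4, 4)
print(predict_rows_fig3b(residual, np.zeros(4)))
```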
  • The above described method of generating prediction sub residual blocks of sub residual blocks divided in a horizontal direction may also be similarly applied to sub residual blocks divided in a vertical direction as illustrated in FIG. 2B.
  • FIG. 4 is a diagram for illustrating a method of generating prediction sub residual blocks, according to another exemplary embodiment of the present invention.
  • When a residual block is divided into a plurality of sub residual blocks having the same width as the residual block, as illustrated in FIG. 3A or FIG. 3B, residues of a current sub residual block are predicted in lines by using residues of a previous sub residual block disposed in a direction orthogonal to the division direction of the sub residual blocks. However, referring to FIG. 4, when a 4×4 residual block is divided into a plurality of 2×2 sub residual blocks, prediction residues may be generated by extending neighboring residues of a previous sub residual block in at least one of a horizontal direction and a vertical direction. For example, assuming that a residue Rxy at a location (x,y) of a 2×2 sub residual block 410 has a prediction residue PRxy, if upper neighboring previous residues a and b are extended in a vertical direction, PR11=a, PR13=a, PR12=b, and PR14=b, or if left neighboring previous residues c and d are extended in a horizontal direction, PR11=c, PR12=c, PR13=d, and PR14=d. Alternatively, a prediction residue of a current sub residual block may be calculated as the average of the upper and left neighboring previous residues on the same vertical and horizontal lines as the residue to be predicted. For example, PR11=(a+c)/2, PR12=(b+c)/2, PR13=(a+d)/2, and PR14=(b+d)/2.
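  • A minimal sketch of the three 2×2 prediction variants just described, assuming the layout of FIG. 4 (residues a and b above the sub block, c and d to its left); the function name and mode labels are illustrative only.

```python
# Minimal sketch of the 2x2 prediction variants of FIG. 4: extend the two
# residues above the sub block downwards, extend the two residues to its left
# rightwards, or average the vertical and horizontal neighbors
# (e.g. PR11 = (a + c)/2).
import numpy as np

def predict_2x2(top, left, mode):
    a, b = top                     # residues above the 2x2 sub block
    c, d = left                    # residues to the left of the sub block
    if mode == "vertical":
        return np.array([[a, b], [a, b]], dtype=float)
    if mode == "horizontal":
        return np.array([[c, c], [d, d]], dtype=float)
    if mode == "average":
        return np.array([[(a + c) / 2, (b + c) / 2],
                         [(a + d) / 2, (b + d) / 2]], dtype=float)
    raise ValueError("unknown mode")

print(predict_2x2(top=(4, 8), left=(2, 6), mode="average"))
```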
  • Referring back to FIG. 1, the second subtraction unit 125 generates difference sub residual blocks by calculating differences between the prediction sub residual blocks generated by the residue prediction unit 130 and the original sub residual blocks.
  • The transformation unit 135 performs DCT on the difference sub residual blocks. In particular, the transformation unit 135 performs one-dimensional DCT on the N×1 or 1×N sub residual blocks. For example, the transformation unit 135 performs one-dimensional horizontal DCT on the difference sub residual blocks which are divided in a horizontal direction as illustrated in FIG. 2A and then are predicted and performs one-dimensional vertical DCT on the difference sub residual blocks which are divided in a vertical direction as illustrated in FIG. 2B and then are predicted.
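  • Assuming the transform here behaves as an orthonormal DCT-II (an assumption consistent with the numerical example given later in this description), the one-dimensional transform of the difference sub residual blocks could be sketched as follows; the function name is illustrative.

```python
# Minimal sketch: horizontal 1-D DCT for row-wise (1 x N) difference sub
# residual blocks and vertical 1-D DCT for column-wise (N x 1) blocks,
# using SciPy's orthonormal DCT-II.
import numpy as np
from scipy.fft import dct

def transform_difference(diff, division):
    if division == "1xN":                       # FIG. 2A: transform each row
        return dct(diff, type=2, norm="ortho", axis=1)
    if division == "Nx1":                       # FIG. 2B: transform each column
        return dct(diff, type=2, norm="ortho", axis=0)
    raise ValueError("only 1-D transforms are sketched here")

diff = np.array([[1.0, 2.0, 3.0, 4.0]])
print(transform_difference(diff, "1xN"))
```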
  • The quantization unit 140 performs quantization and the entropy encoding unit 145 performs variable length encoding on difference residues of the transformed difference sub residual blocks so that a bitstream is generated.
  • The difference residues quantized by the quantization unit 140 are inverse quantized by the inverse quantization unit 150 and inverse transformed by the inverse transformation unit 155 so that the difference sub residual blocks are restored.
  • The addition unit 160 restores the sub residual blocks by adding the difference residues of the restored difference sub residual blocks and the prediction residues of the prediction sub residual blocks generated by the residue prediction unit 130. The restored sub residual blocks are used when prediction sub residual blocks of next sub residual blocks are generated.
  • Also, the apparatus 100 may further include a division mode determination unit (not shown) which compares costs of bitstreams generated by using a plurality of sub residual blocks having different sizes, and selects the sub residual block size having the smallest cost for dividing a current residual block.
  • The division mode determination unit determines a division mode of a residual block by dividing the residual block into a plurality of sub residual blocks having different sizes, generating prediction sub residual blocks of the sub residual blocks by using residues of a previous sub residual block, and comparing costs of bitstreams generated by transforming, quantizing and entropy encoding difference sub residual blocks. For example, the division mode determination unit divides an N×N residual block into a plurality of 1×N sub residual blocks in division mode 1, into a plurality of N×1 sub residual blocks in division mode 2, or into a plurality of a×a sub residual blocks in division mode 3, compares rate distortion (RD) costs of bitstreams generated by transforming, quantizing, and entropy encoding the difference sub residual blocks generated in accordance with each division mode, and determines the division mode having the smallest RD cost as a final division mode.
  • The division mode determination unit may also determine whether to perform transformation of the residual block or not by comparing costs of bitstreams generated by encoding difference residual blocks of a plurality of sub residual blocks having different sizes and costs of bitstreams generated by bypassing transformation of a residual block and quantizing and entropy encoding the residual block.
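  • A minimal sketch of this division mode decision; encode_with_mode() and rd_cost() are hypothetical stand-ins for the predict/transform/quantize/entropy-encode chain and the rate-distortion measure described above.

```python
# Minimal sketch of the division mode decision: encode the residual block
# under each candidate mode, measure an RD cost, and keep the cheapest mode.
def choose_division_mode(residual_block, modes, encode_with_mode, rd_cost):
    best_mode, best_cost = None, float("inf")
    for mode in modes:          # e.g. ("1xN", "Nx1", "axa", "skip_transform")
        bitstream, reconstruction = encode_with_mode(residual_block, mode)
        cost = rd_cost(bitstream, residual_block, reconstruction)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```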
  • FIG. 5 is a flowchart illustrating a method of image encoding, according to an exemplary embodiment of the present invention.
  • Referring to FIG. 5, in operation 510, a residual block generated by subtracting pixel values of a prediction block from original pixel values of a current block is divided into a plurality of sub residual blocks.
  • In operation 520, prediction sub residual blocks are generated by predicting residues of the current sub residual blocks using residues of previously processed sub residual blocks. As described above, the prediction sub residual blocks are predicted by extending the residues of the previous sub residual blocks at least in one of a horizontal direction and a vertical direction in accordance with a division type of the sub residual blocks.
  • In operation 530, difference sub residual blocks are generated by calculating differences between the prediction sub residual blocks and the original sub residual blocks.
  • In operation 540, DCT is performed on the difference sub residual blocks in accordance with the division type. As described above, one-dimensional DCT is performed on N×1 or 1×N difference sub residual blocks. The transformed difference sub residual blocks are quantized and entropy encoded and thus a bitstream is output. Also, the sub residual blocks are restored by inverse quantizing and inverse transforming the quantized difference sub residual blocks and adding the processed difference sub residual blocks to the prediction sub residual blocks. The restored sub residual blocks are used when residues of next sub residual blocks are predicted.
  • In a method and apparatus for image encoding according to the above exemplary embodiments described with reference to FIGS. 1 through 5, if horizontal or vertical correlations exist in a residual block, the amount of data generated by the DCT to be encoded is reduced and thus compression efficiency is improved. For example, assuming that a 4×4 residual block includes residues having vertical correlations as shown in the matrix
  • [ 0  10  0  0
      0  10  0  0
      0  10  0  0
      0  10  0  0 ],
  • if two-dimensional DCT is applied, the matrix is transformed to
  • [ 10   5.4120   -10   -13.0656
       0   0          0     0
       0   0          0     0
       0   0          0     0 ].
  • However, if one-dimensional vertical DCT is applied, the matrix is transformed to
  • [ 0  20  0  0
      0   0  0  0
      0   0  0  0
      0   0  0  0 ].
  • As a result, if one-dimensional DCT is performed on a residual block divided in a horizontal or vertical direction according to an exemplary embodiment of the present invention, the magnitude of transformation coefficients generated is reduced in accordance with image characteristics so that compression efficiency is improved.
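  • The numerical example above can be reproduced with an orthonormal DCT-II (an assumption that matches the stated coefficient values); the sketch below is for verification only and is not part of the original disclosure.

```python
# Minimal sketch: a residual block with one non-zero column collapses to a
# single coefficient under a vertical 1-D DCT, whereas the 2-D DCT spreads
# the energy over four coefficients.
import numpy as np
from scipy.fft import dct

residual = np.zeros((4, 4))
residual[:, 1] = 10                                   # vertically correlated residues

two_d = dct(dct(residual, norm="ortho", axis=0), norm="ortho", axis=1)
vertical_1d = dct(residual, norm="ortho", axis=0)

print(np.round(two_d, 4))        # first row approx. [10, 5.412, -10, -13.0656]
print(np.round(vertical_1d, 4))  # single coefficient 20 at position (0, 1)
```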
  • FIG. 6 is a block diagram illustrating an apparatus 600 for image encoding, according to another exemplary embodiment of the present invention.
  • The apparatus 100 illustrated in FIG. 1 according to the previous exemplary embodiment of the present invention divides a residual block into a plurality of sub residual blocks, generates prediction sub residual blocks of the sub residual blocks, and transforms difference sub residual blocks that are differences between the original sub residual blocks and the prediction sub residual blocks. However, the apparatus 600 according to the current exemplary embodiment generates prediction values of an input image block, not a residual block, in lines, and one-dimensional DCT is performed on the residues generated in lines.
  • Referring to FIG. 6, the apparatus 600 includes a prediction unit 610, a subtraction unit 615, a transformation unit 620, a quantization unit 625, an entropy encoding unit 630, an inverse quantization unit 635, an inverse transformation unit 640 and an addition unit 645.
  • The prediction unit 610 divides an input image into a plurality of image blocks and predicts pixel values of each image block in horizontal or vertical pixel lines. The prediction unit 610 predicts the pixel values in horizontal or vertical pixel lines in the same manner as the residue prediction unit 130 of the apparatus 100 illustrated in FIG. 1 which predicts residues of current 1×N or N×1 sub residual blocks divided from a residual block by using residues of a neighboring sub residual block.
  • FIG. 7 is a diagram for illustrating a method of predicting pixel values in lines by the prediction unit 610 illustrated in FIG. 6. In FIG. 7, Pab represents a pixel value at a location (a,b) (a,b=1, 2, 3, 4) of an input image block and x, y, z, w, u and v represent pixel values of a neighboring block. Although only a 4×4 input image block that is divided into a plurality of horizontal pixel lines is illustrated in FIG. 7 as an example, the present invention is not limited thereto. An exemplary embodiment of the present invention may also be applied to input image blocks having different sizes and an input image block divided into a plurality of vertical pixel lines.
  • Referring to FIG. 7, assuming that the pixel values of the current block are sequentially predicted in horizontal pixel lines in a downward direction, pixel values P11, P12, P13 and P14 of a first horizontal line 711 may be predicted by extending pixel values x, y, z and w of the neighboring block in a direction orthogonal to the horizontal pixel lines. Assuming that a prediction pixel value of a pixel value Pab at a location (a,b) (a,b=1, 2, 3, 4) is PPab, PP11=x, PP12=y, PP13=z, and PP14=w. Also, pixel values P21, P22, P23 and P24 of a second horizontal line 712 may be predicted by extending pixel values P11, P12, P13 and P14 of the first horizontal line 711 in the same direction. Likewise, pixel values P31, P32, P33 and P34 of a third horizontal line 713 and pixel values P41, P42, P43 and P44 of a fourth horizontal line 714 may be respectively predicted by extending pixel values P21, P22, P23 and P24 of the second horizontal line 712 and pixel values P31, P32, P33 and P34 of the third horizontal line 713. Here, the original pixel values, or pixel values restored by being transformed, quantized, inverse quantized and inverse transformed, may be used as the pixel values of a previous horizontal pixel line in order to predict the pixel values of a current horizontal pixel line.
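  • A minimal sketch of this line-by-line prediction, using original rather than restored reference lines for brevity; the function name is illustrative.

```python
# Minimal sketch of the line-by-line prediction of FIG. 7: each horizontal
# pixel line is predicted by copying the line directly above it, starting
# from the pixels x, y, z, w of the neighboring block.
import numpy as np

def predict_lines(block, top_line):
    prediction = np.empty_like(block)
    residues = np.empty_like(block)
    reference = top_line
    for i in range(block.shape[0]):
        prediction[i] = reference            # PP(i, :) = previous line
        residues[i] = block[i] - prediction[i]
        reference = block[i]                 # original (or restored) line
    return prediction, residues

block = np.array([[12, 12, 12, 12],
                  [14, 14, 14, 14],
                  [16, 16, 16, 16],
                  [18, 18, 18, 18]])
pred, res = predict_lines(block, np.array([10, 10, 10, 10]))
print(res)   # constant vertical gradient -> every residue equals 2
```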
  • A method of sequentially predicting pixel values of an image block in horizontal pixel lines in a downward direction is described in FIG. 7. However, a prediction order of the horizontal pixel lines may be changed as shown by the prediction order of the sub residual blocks 321, 322, 323 and 324 illustrated in FIG. 3B. Furthermore, the pixel values of the image block may be predicted in vertical pixel lines.
  • Referring back to FIG. 6, since the prediction unit 610 generates prediction values of the pixel values of a current block in lines by using pixel values of a neighboring block, the problem of related art block-based prediction, in which prediction efficiency decreases for pixels disposed relatively far from the neighboring block, may be alleviated.
  • Meanwhile, the prediction unit 610 may predict each pixel value of the input image block by using a half-pel interpolation filter.
  • FIG. 8 is a diagram for illustrating a method of predicting pixel values, according to another exemplary embodiment of the present invention. FIG. 8 illustrates pixel value P11 illustrated in FIG. 7 and previous pixel values u, v and x, which are disposed in a vertical direction of pixel value P11.
  • Referring to FIGS. 7 and 8, when pixel value P11 is predicted, an interpolation value h is generated at a half-pel location by using the previous pixel values u, v and x and a prediction value of pixel value P11 may be generated by using the interpolation value h and the closest neighboring pixel value x. First, the interpolation value h may be interpolated by a 3-tap filter as shown in Equation 1 using the previous pixel values u, v and x.

  • h = (w1·x + w2·v + w3·u + w4) >> 4   (1),
  • where w1, w2 and w3 represent weights given in accordance with relative distances between the interpolation value h and the previous pixel values u, v and x, w4 represents a predetermined offset value, and an operator “>>” represents a shift operation.
  • For example, w1=20, w2=−5, w3=1, and w4=8.
  • When the interpolation value h at the half-pel location between pixel value P11 to be predicted and the previous neighboring pixel value x is interpolated, pixel value P11 may be predicted by using the interpolation value h and the previous neighboring pixel value x as shown in Equation 2.

  • P11 = x + (h − x) × 2 = 2h − x   (2)
  • Likewise, each pixel value of the current block may be predicted by generating a half-pel interpolation value at the corresponding half-pel location and by using the half-pel interpolation value and the previous pixel value which is closest to the current pixel to be predicted.
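  • A minimal sketch of the half-pel prediction of Equations (1) and (2), using the example weights w1=20, w2=-5, w3=1 and offset w4=8; the function name is illustrative.

```python
# Minimal sketch of the half-pel prediction of FIG. 8: interpolate a half-pel
# value h from u, v, x with the 3-tap filter of Equation (1), then extrapolate
# the prediction of P11 with Equation (2).
def predict_with_half_pel(u, v, x, w=(20, -5, 1, 8)):
    w1, w2, w3, w4 = w
    h = (w1 * x + w2 * v + w3 * u + w4) >> 4   # Equation (1): half-pel value
    return 2 * h - x                            # Equation (2): predicted P11

# In a flat area (u = v = x = 10) the prediction simply repeats x:
print(predict_with_half_pel(10, 10, 10))        # 10
```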
  • Referring back to FIG. 6, the subtraction unit 615 generates residues that are differences between the prediction values and the original pixel values in lines.
  • The transformation unit 620 performs one-dimensional DCT on the residues in lines. If the prediction unit 610 has predicted the pixel values in horizontal lines, the transformation unit 620 generates transformation coefficients by performing a one-dimensional horizontal DCT. If the prediction unit 610 has predicted the pixel values in vertical lines, the transformation unit 620 generates transformation coefficients by performing a one-dimensional vertical DCT. Meanwhile, although the prediction unit 610 has generated the prediction values in lines, the transformation unit 620 may alternatively perform two-dimensional DCT in blocks after the prediction of all pixels included in the image block is completed.
  • The quantization unit 625 quantizes the transformation coefficients in lines and the entropy encoding unit 630 performs variable-length encoding on the quantized transformation coefficients so that a bitstream is generated.
  • The quantized transformation coefficients are inverse quantized by the inverse quantization unit 635 and inverse transformed by the inverse transformation unit 640 so that the residues are restored.
  • The addition unit 645 restores the pixel values in lines by adding the restored residues and the prediction pixel values generated by the prediction unit 610. As described above, when two-dimensional DCT has been performed, the pixel values may be restored in blocks. The restored pixel values in lines are used when next pixel values in lines are predicted.
  • FIG. 9 is a flowchart illustrating a method of image encoding, according to another exemplary embodiment of the present invention.
  • Referring to FIG. 9, in operation 910, an input image is divided into a plurality of image blocks and prediction values of pixels of each image block are generated in horizontal or vertical lines.
  • In operation 920, residues that are differences between the prediction values and original values of the pixels in lines, are generated.
  • In operation 930, transformation coefficients are generated by performing a one-dimensional DCT on the residues generated in lines. A bitstream is generated by quantizing and entropy encoding the transformation coefficients in lines. As described above, although the prediction is performed in lines, the transformation may be performed in blocks as in a related art method.
  • In a method and apparatus for image encoding according to the above exemplary embodiments described with reference to FIGS. 6 through 9, prediction values are generated in lines so that the distance between a current pixel and a reference pixel used for prediction is reduced. Accordingly, the accuracy of the prediction increases and thus the bit rate may be reduced.
  • FIG. 10 is a block diagram illustrating an apparatus 1000 for image decoding, according to an exemplary embodiment of the present invention. The apparatus 1000 for image decoding corresponds to the apparatus 100 for image encoding illustrated in FIG. 1.
  • Referring to FIG. 10, the apparatus 1000 includes an entropy decoding unit 1010, an inverse quantization unit 1020, an inverse transformation unit 1030, a residue prediction unit 1040, a first addition unit 1050, a second addition unit 1060 and a prediction unit 1070.
  • The entropy decoding unit 1010 receives and entropy decodes a compressed bitstream so that information on a division mode of a current residual block which is included in the bitstream is extracted. The entropy decoding unit 1010 also entropy decodes difference residues included in the bitstream, the inverse quantization unit 1020 inverse quantizes the entropy decoded difference residues, and the inverse transformation unit 1030 restores the difference residues by inverse transforming the inverse quantized difference residues. In particular, the inverse transformation unit 1030 performs one-dimensional inverse DCT if the current residual block is encoded in N×1 or 1×N sub residual blocks.
  • The residue prediction unit 1040 divides the current residual block into a plurality of sub residual blocks in accordance with the extracted information on the division mode of the current residual block to be decoded and generates prediction sub residual blocks of the current sub residual blocks by using residues of previously decoded neighboring sub residual blocks. The residue prediction unit 1040 generates the prediction sub residual blocks of the current sub residual blocks in the same manner as the residue prediction unit 130 illustrated in FIG. 1.
  • The first addition unit 1050 restores the sub residual blocks by adding difference sub residual blocks of the current sub residual blocks which are composed of the difference residues output by the inverse transformation unit 1030 and the prediction sub residual blocks.
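  • A minimal sketch of this decoder-side restoration for a block encoded in 1×N (row) sub residual blocks, assuming an orthonormal 1-D DCT/IDCT pair and omitting quantization; function names are illustrative.

```python
# Minimal sketch of the decoder loop of FIG. 10: each row's difference
# residues are inverse transformed, the prediction row is re-derived from the
# previously restored row, and the two are added.
import numpy as np
from scipy.fft import dct, idct

def restore_rows(diff_coeffs, top_neighbor):
    """diff_coeffs: dequantized 1-D DCT coefficients, one row per sub block."""
    restored = np.empty_like(diff_coeffs)
    reference = top_neighbor
    for i in range(diff_coeffs.shape[0]):
        diff = idct(diff_coeffs[i], type=2, norm="ortho")  # difference residues
        restored[i] = reference + diff                      # prediction + difference
        reference = restored[i]                             # used for the next row
    return restored

# Round trip without quantization: encode two rows, then restore them.
rows = np.array([[1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0]])
top = np.zeros(4)
diffs = np.vstack([rows[0] - top, rows[1] - rows[0]])
coeffs = dct(diffs, type=2, norm="ortho", axis=1)
print(restore_rows(coeffs, top))                 # recovers the original rows
```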
  • The prediction unit 1070 generates a prediction block by performing inter prediction or intra prediction in accordance with a prediction mode of the current block.
  • The second addition unit 1060 restores the current block by adding the prediction block generated by the prediction unit 1070 and the sub residual blocks restored by the first addition unit 1050.
  • FIG. 11 is a flowchart illustrating a method of image decoding, according to an exemplary embodiment of the present invention.
  • Referring to FIG. 11, in operation 1110, a division mode of a current residual block to be decoded is determined by using information on a division mode of the residual block which is included in a received bitstream.
  • In operation 1120, prediction sub residual blocks of a plurality of sub residual blocks of the residual block are generated by using residues of neighboring sub residual blocks previously decoded in accordance with the determined division mode.
  • In operation 1130, difference sub residual blocks that are differences between the prediction sub residual blocks and the sub residual blocks and that are included in the bitstream, are restored.
  • In operation 1140, the sub residual blocks are restored by adding the prediction sub residual blocks and the difference sub residual blocks. An image is restored by adding the restored sub residual blocks and a prediction block generated by performing inter or intra prediction.
  • FIG. 12 is a block diagram illustrating an apparatus 1200 for image decoding, according to another exemplary embodiment of the present invention.
  • Referring to FIG. 12, the apparatus 1200 includes a restoration unit 1210, a prediction unit 1220 and an addition unit 1230. The restoration unit 1210 restores residues that are differences between prediction values and original values of pixel lines and are included in a received bitstream, and includes an entropy decoding unit 1211, an inverse quantization unit 1212 and an inverse transformation unit 1213.
  • The prediction unit 1220 predicts pixel values of a horizontal or vertical current pixel line to be decoded in a predetermined order by using a corresponding previously decoded pixel line.
  • The addition unit 1230 decodes the current pixel line by adding the prediction values of the current pixel line and the restored residues. By repeating the above-described procedure, all pixels included in an image block may be decoded.
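  • A minimal sketch of this line-based decoding loop; restore_line_residues() is a hypothetical stand-in for the entropy decoding, inverse quantization and one-dimensional inverse DCT performed by the restoration unit 1210.

```python
# Minimal sketch of the line-based decoder of FIG. 12: restore the residues of
# each line, predict the line from the previously decoded one, and add the two.
import numpy as np

def decode_block_by_lines(coeff_rows, top_line, restore_line_residues):
    decoded = np.empty_like(coeff_rows)
    reference = top_line
    for i in range(coeff_rows.shape[0]):
        residues = restore_line_residues(coeff_rows[i])
        decoded[i] = reference + residues      # prediction + restored residues
        reference = decoded[i]                 # next line is predicted from this one
    return decoded

block = decode_block_by_lines(np.ones((4, 4)), np.zeros(4),
                              restore_line_residues=lambda row: row)
print(block)   # cumulative per-line residues: rows of 1, 2, 3, 4
```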
  • FIG. 13 is a flowchart illustrating a method of image decoding, according to another exemplary embodiment of the present invention.
  • Referring to FIG. 13, in operation 1310, residues that are differences between prediction values and original values of horizontal or vertical pixel lines and that are included in a received bitstream, are restored.
  • In operation 1320, pixel values of each pixel line to be decoded are predicted by using pixel values of a previous pixel line decoded in a predetermined order.
  • In operation 1330, pixels of the current pixel lines are decoded by adding the predicted pixel values of the pixel lines and the restored residues.
  • The invention can also be embodied as computer readable codes on a computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • As described above, according to the exemplary embodiments of the present invention, if horizontal or vertical correlations exist between pixels in an input image block, prediction efficiency and compression efficiency may be improved by performing prediction and one-dimensional transformation in lines in consideration of the correlations.
  • While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention.

Claims (8)

What is claimed is:
1. A method of image encoding, the method comprising:
dividing a residual block into a plurality of sub residual blocks having different sizes according to division mode;
transforming, quantizing and entropy encoding the sub residual blocks;
comparing costs of bitstreams generated by transforming, quantizing and entropy encoding the sub residual blocks; and
determining the division mode based on the result of the comparing,
wherein, if a size of the residual block is N×N, where N is a positive number equal to or greater than 2, the residual block is divided into the sub residual blocks which have a size of a×a according to the division mode, where a is a natural number smaller than N.
2. The method of claim 1, wherein the division mode further comprises modes which divide the residual block into sub residual blocks which have a size of one of N×1 and 1×N.
3. The method of claim 2, wherein transforming the sub residual blocks comprises performing a one-dimensional transformation on the first plurality of the sub residual blocks.
4. The method of claim 1, further comprising:
generating prediction sub residual blocks of a plurality of sub residual blocks of the residual block by using residues of previously processed neighboring sub residual blocks; and
generating difference sub residual blocks by calculating differences between the prediction sub residual blocks and the first plurality of sub residual blocks.
5. A method of image decoding, the method comprising:
dividing a residual block into a plurality of sub residual blocks having different sizes by using information on a division mode of the residual block included in a received bitstream;
entropy decoding, inverse quantizing and inverse transforming the sub residual blocks; and
restoring the sub residual blocks,
wherein, if a size of the residual block is N×N, where N is a positive number equal to or greater than 2, the residual block is divided into the sub residual blocks which have a size of a×a according to the division mode, where a is a natural number smaller than N.
6. The method of claim 5, wherein the division mode further comprises modes which divide the residual block into sub residual blocks which have a size of one of N×1 and 1×N.
7. The method of claim 6, wherein inverse transforming the sub residual blocks comprises performing a one-dimensional inverse transformation on the first plurality of the sub residual blocks.
8. The method of claim 5, wherein the restoring comprises:
generating prediction sub residual blocks of a plurality of sub residual blocks of the residual block by using residues of previously processed neighboring sub residual blocks; and
restoring difference residues that are differences between the prediction sub residual blocks and the plurality of the sub residual blocks.
US13/759,197 2007-03-23 2013-02-05 Method and apparatus for image encoding and image decoding Abandoned US20130148909A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/759,197 US20130148909A1 (en) 2007-03-23 2013-02-05 Method and apparatus for image encoding and image decoding

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR1020070028886A KR101403338B1 (en) 2007-03-23 2007-03-23 Method and apparatus for image encoding, decoding
KR10-2007-0028886 2007-03-23
US11/965,104 US8244048B2 (en) 2007-03-23 2007-12-27 Method and apparatus for image encoding and image decoding
US13/541,151 US8625916B2 (en) 2007-03-23 2012-07-03 Method and apparatus for image encoding and image decoding
US13/759,197 US20130148909A1 (en) 2007-03-23 2013-02-05 Method and apparatus for image encoding and image decoding

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/541,151 Continuation US8625916B2 (en) 2007-03-23 2012-07-03 Method and apparatus for image encoding and image decoding

Publications (1)

Publication Number Publication Date
US20130148909A1 true US20130148909A1 (en) 2013-06-13

Family

ID=39774760

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/965,104 Expired - Fee Related US8244048B2 (en) 2007-03-23 2007-12-27 Method and apparatus for image encoding and image decoding
US13/541,151 Active 2028-01-07 US8625916B2 (en) 2007-03-23 2012-07-03 Method and apparatus for image encoding and image decoding
US13/759,197 Abandoned US20130148909A1 (en) 2007-03-23 2013-02-05 Method and apparatus for image encoding and image decoding

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/965,104 Expired - Fee Related US8244048B2 (en) 2007-03-23 2007-12-27 Method and apparatus for image encoding and image decoding
US13/541,151 Active 2028-01-07 US8625916B2 (en) 2007-03-23 2012-07-03 Method and apparatus for image encoding and image decoding

Country Status (5)

Country Link
US (3) US8244048B2 (en)
EP (3) EP2611159A1 (en)
KR (1) KR101403338B1 (en)
CN (2) CN103108181A (en)
WO (1) WO2008117923A1 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101403338B1 (en) * 2007-03-23 2014-06-09 삼성전자주식회사 Method and apparatus for image encoding, decoding
KR101369224B1 (en) * 2007-03-28 2014-03-05 삼성전자주식회사 Method and apparatus for Video encoding and decoding using motion compensation filtering
CN102067602B (en) * 2008-04-15 2014-10-29 法国电信公司 Prediction of images by prior determination of a family of reference pixels, coding and decoding using such a prediction
KR101710619B1 (en) * 2009-02-04 2017-02-28 삼성전자주식회사 Method and apparatus for successively encoding/decoding image
GB0906058D0 (en) * 2009-04-07 2009-05-20 Nokia Corp An apparatus
AU2014268181B2 (en) * 2009-10-28 2016-02-18 Samsung Electronics Co., Ltd. Method and apparatus for encoding residual block, and method and apparatus for decoding residual block
KR101457894B1 (en) * 2009-10-28 2014-11-05 삼성전자주식회사 Method and apparatus for encoding image, and method and apparatus for decoding image
CN101841713B (en) * 2010-04-30 2012-12-05 西安电子科技大学 Video coding method for reducing coding code rate and system
CN108848379A (en) * 2010-12-07 2018-11-20 韩国电子通信研究院 The medium of video coding-decoding method, the method for generating bit stream and stored bits stream
JP5592779B2 (en) * 2010-12-22 2014-09-17 日本電信電話株式会社 Image encoding method, image decoding method, image encoding device, and image decoding device
JP5594841B2 (en) * 2011-01-06 2014-09-24 Kddi株式会社 Image encoding apparatus and image decoding apparatus
SG10202008690XA (en) * 2011-01-12 2020-10-29 Mitsubishi Electric Corp Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
CN102131093A (en) * 2011-01-13 2011-07-20 北京中星微电子有限公司 Image processing method and device
CN102611885B (en) * 2011-01-20 2014-04-30 华为技术有限公司 Encoding and decoding method and device
EP2603000B1 (en) * 2011-12-08 2017-11-01 Dolby Laboratories Licensing Corporation Guided prediction-filtering in layered vdr image coding
US8934028B2 (en) * 2011-12-15 2015-01-13 Samsung Electronics Co., Ltd. Imaging apparatus and image processing method
JP2013126182A (en) * 2011-12-15 2013-06-24 Samsung Electronics Co Ltd Imaging apparatus and image processing method
US20130195185A1 (en) * 2012-02-01 2013-08-01 Industry-University Cooperation Foundation Hanyang University Apparatus and method for providing additional information to functional unit in reconfigurable codec
CN103021007B (en) * 2012-09-04 2016-01-13 小米科技有限责任公司 A kind of method that animation is play and device
US9247255B2 (en) 2013-02-28 2016-01-26 Research & Business Foundation Sungkyunkwan University Method and apparatus for image encoding/decoding
KR101462637B1 (en) 2013-02-28 2014-11-21 성균관대학교산학협력단 Method and apparatus for image encoding/decoding
US9571858B2 (en) 2013-07-19 2017-02-14 Futurewei Technologies, Inc. Method and apparatus of derivation for a binary partition pattern
CN103634608B (en) * 2013-12-04 2015-03-25 中国科学技术大学 Residual error transformation method of high-performance video coding lossless mode
WO2015196333A1 (en) * 2014-06-23 2015-12-30 Mediatek Singapore Pte. Ltd. Segmental prediction for video coding
WO2016044979A1 (en) * 2014-09-22 2016-03-31 Mediatek Singapore Pte. Ltd. Segmental prediction for video coding
WO2016043417A1 (en) * 2014-09-19 2016-03-24 엘지전자(주) Method and apparatus for encoding and decoding video signal adaptively on basis of separable transformation
US10785475B2 (en) 2014-11-05 2020-09-22 Mediatek Singapore Pte. Ltd. Method and apparatus of video coding with prediction offset
KR101729904B1 (en) 2015-11-16 2017-04-24 (주)루먼텍 System for lossless transmission through lossy compression of data and the method thereof
US10425656B2 (en) * 2016-01-19 2019-09-24 Peking University Shenzhen Graduate School Method of inter-frame prediction for video encoding and decoding
KR20170089777A (en) * 2016-01-27 2017-08-04 한국전자통신연구원 Method and apparatus for encoding and decoding video using prediction
CN113873260B (en) * 2016-10-04 2023-02-24 有限公司B1影像技术研究所 Image data encoding/decoding method and apparatus
WO2019076138A1 (en) 2017-10-16 2019-04-25 Huawei Technologies Co., Ltd. Encoding method and apparatus
WO2019076290A1 (en) * 2017-10-16 2019-04-25 Huawei Technologies Co., Ltd. Spatial varying transforms for video coding
CN111758255A (en) 2018-02-23 2020-10-09 华为技术有限公司 Position dependent spatially varying transforms for video coding
PT3782361T (en) 2018-05-31 2023-11-17 Huawei Tech Co Ltd Spatially varying transform with adaptive transform type
JP2022524523A (en) * 2019-03-11 2022-05-06 ヴィド スケール インコーポレイテッド Intra-subpartition in video encoding

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2640157C2 (en) * 1976-09-07 1982-10-07 Philips Patentverwaltung Gmbh, 2000 Hamburg Method and arrangement for redundancy-reducing coding of pictures
US7469069B2 (en) * 2003-05-16 2008-12-23 Samsung Electronics Co., Ltd. Method and apparatus for encoding/decoding image using image residue prediction
HUP0301368A3 (en) * 2003-05-20 2005-09-28 Amt Advanced Multimedia Techno Method and equipment for compressing motion picture data
FR2860122B1 (en) 2003-09-24 2006-03-03 Medialive SCREENING, UNLOCKING AND SECURED DISTRIBUTION OF AUDIOVISUAL SEQUENCES FROM DCT BASED VIDEO ENCODERS
JP4431973B2 (en) 2003-12-10 2010-03-17 ソニー株式会社 Moving image processing apparatus and method
JP4213646B2 (en) * 2003-12-26 2009-01-21 株式会社エヌ・ティ・ティ・ドコモ Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program.
JP4192856B2 (en) * 2004-07-07 2008-12-10 ブラザー工業株式会社 Communication device, line closing method and program
KR100682912B1 (en) 2005-01-05 2007-02-15 삼성전자주식회사 Method and apparatus for encoding and decoding image data
KR101246915B1 (en) * 2005-04-18 2013-03-25 삼성전자주식회사 Method and apparatus for encoding or decoding moving picture
US8422546B2 (en) * 2005-05-25 2013-04-16 Microsoft Corporation Adaptive video encoding using a perceptual model
KR100746006B1 (en) * 2005-07-19 2007-08-06 삼성전자주식회사 Method and apparatus for encoding and decoding in temporal direct mode hierarchical B structure adaptive
KR101256548B1 (en) * 2005-12-30 2013-04-19 삼성전자주식회사 Image encoding and decoding apparatuses and methods
KR101261526B1 (en) * 2006-07-04 2013-05-06 삼성전자주식회사 An video encoding/decoding method and apparatus
KR101354151B1 (en) * 2006-08-24 2014-01-28 삼성전자주식회사 Method and apparatus for transforming and inverse-transforming image
KR20080082143A (en) * 2007-03-07 2008-09-11 삼성전자주식회사 An image encoding/decoding method and apparatus
JP4707118B2 (en) * 2007-03-28 2011-06-22 株式会社Kddi研究所 Intra prediction method for moving picture coding apparatus and moving picture decoding apparatus
US8654833B2 (en) * 2007-09-26 2014-02-18 Qualcomm Incorporated Efficient transformation techniques for video coding

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5021891A (en) * 1990-02-27 1991-06-04 Qualcomm, Inc. Adaptive block size image compression method and system
US5416854A (en) * 1990-07-31 1995-05-16 Fujitsu Limited Image data processing method and apparatus
US20030185452A1 (en) * 1996-03-28 2003-10-02 Wang Albert S. Intra compression of pixel blocks using predicted mean
US6532306B1 (en) * 1996-05-28 2003-03-11 Matsushita Electric Industrial Co., Ltd. Image predictive coding method
US20050259877A1 (en) * 1997-05-05 2005-11-24 Wang Albert S Intra compression of pixel blocks using predicted mean
US6571016B1 (en) * 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US20020071610A1 (en) * 1998-12-03 2002-06-13 Philips Electronics North America Corporation Systems and methods for compressing and decompressing images
US20030021485A1 (en) * 2001-07-02 2003-01-30 Raveendran Vijayalakshmi R. Apparatus and method for encoding digital image data in a lossless manner
US20040156552A1 (en) * 2003-02-03 2004-08-12 Actimagine Process and device for the compression of portions of images
US20060098879A1 (en) * 2004-11-11 2006-05-11 Samsung Electronics Co., Ltd. Apparatus and method for performing dynamic capacitance compensation (DCC) in liquid crystal display (LCD)
US20070065026A1 (en) * 2005-09-16 2007-03-22 Industry-Academia Cooperation Group Of Sejong University Method of and apparatus for lossless video encoding and decoding
US20080232705A1 (en) * 2007-03-23 2008-09-25 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and image decoding
US20090003716A1 (en) * 2007-06-28 2009-01-01 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method and image decoding method

Also Published As

Publication number Publication date
US8244048B2 (en) 2012-08-14
KR101403338B1 (en) 2014-06-09
EP2611159A1 (en) 2013-07-03
US8625916B2 (en) 2014-01-07
US20080232705A1 (en) 2008-09-25
EP2127380A1 (en) 2009-12-02
EP2613536A1 (en) 2013-07-10
CN103108181A (en) 2013-05-15
WO2008117923A1 (en) 2008-10-02
KR20080086771A (en) 2008-09-26
US20120269449A1 (en) 2012-10-25
EP2127380A4 (en) 2011-04-20
CN101641955B (en) 2013-03-13
CN101641955A (en) 2010-02-03

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE