WO2011064932A1 - Image encoding device and image decoding device

Publication number
WO2011064932A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, unit, prediction, intra, predicted
Application number
PCT/JP2010/005919
Other languages
French (fr)
Japanese (ja)
Inventor
Kazuo Sugimoto
Yusuke Itani
Akira Minezawa
Shunichi Sekiguchi
Original Assignee
Mitsubishi Electric Corporation
Application filed by Mitsubishi Electric Corporation
Priority to JP2011543081A (published as JPWO2011064932A1)
Publication of WO2011064932A1

Classifications

    • H (Electricity) > H04 (Electric communication technique) > H04N (Pictorial communication, e.g. television) > H04N 19/00 (Methods or arrangements for coding, decoding, compressing or decompressing digital video signals)
    • H04N 19/59: using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N 19/132: using adaptive coding; sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/61: using transform coding in combination with predictive coding

Definitions

  • The present invention relates to an image encoding device that performs intra-screen prediction when an image is compressed and transmitted, and to an image decoding device that decodes the compressed data.
  • In H.264, recommended by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector), intra-frame redundancy is removed by performing intra-screen prediction, and the prediction error signal is compression-encoded.
  • Conventionally, there has been an apparatus, as disclosed in Patent Document 1 for example, that performs such compression encoding, and a decoding apparatus that decodes its output.
  • In such an apparatus, an image dividing unit divides the input image into macroblocks, the predetermined encoding units of macroblock size, and subsequent processing is performed in units of macroblocks.
  • The subtraction unit subtracts the predicted image generated by the intra prediction unit from the macroblock image, and the resulting difference image signal is output.
  • the difference image signal is input to the frequency conversion unit, converted by frequency conversion into transform coefficients representing the amplitude at each frequency, and output to the quantization unit.
  • the transform coefficient is quantized in the quantization unit, and the quantized transform coefficient is output.
  • the output quantized transform coefficient is entropy-encoded in the entropy encoding unit and output as an encoded bit stream.
  • the quantized transform coefficient is also input to the inverse quantization unit.
  • the inverse quantization unit inversely quantizes the quantized transform coefficient and outputs a decoded transform coefficient.
  • the decoded transform coefficient is inversely frequency-converted by an inverse frequency converter, thereby obtaining a decoded difference image.
  • the decoded difference image is added to the predicted image in the adding unit, and the result is stored in the frame memory as a locally decoded image.
  • the locally decoded image stored in the frame memory is used by the prediction unit to generate a predicted image.
  • the optimal prediction direction is selected from the eight directional prediction types and DC prediction, in which prediction is performed using the average value of surrounding pixels.
  • alternatively, an optimal mode is selected from a total of four types: horizontal, vertical, diagonal, and DC prediction.
  • the selected prediction mode and prediction direction are each entropy-encoded in the entropy encoder and multiplexed into the encoded bitstream.
  • In intra prediction such as that of H.264, a predicted image is generated by linearly extending the locally decoded image around the block to be encoded, so the predicted image directly reproduces changes in the surrounding locally decoded pixels as straight edges. When an edge of this straight-edged predicted image does not match an edge of the actual input image, the straight edge is carried into the prediction error signal, and coding efficiency decreases. That is, conventional intra-frame prediction has the problem that prediction efficiency is good for video with edges in specific directions but poor for video with edges in other directions.
  • The present invention has been made to solve the above problems, and its object is to obtain an image encoding apparatus that can perform stable and efficient prediction regardless of the texture of the input image and thereby improve coding efficiency.
  • An image encoding device according to the present invention includes: a block division unit that receives one frame of image data, divides it into a plurality of blocks, and outputs macroblock images; a first intra prediction unit that receives a macroblock image, outputs a first intra predicted image, and outputs first intra prediction data, which is the information for reconstructing that predicted image; a difference image generation unit that computes the difference between the macroblock image and the first intra predicted image and outputs a difference image; a difference transform quantization unit that frequency-transforms and quantizes the difference image and outputs transform coefficients; and an entropy encoding unit that entropy-encodes the first intra prediction data and the transform coefficients and outputs a bitstream. The first intra prediction unit includes: an image reduction unit that reduces the macroblock image and outputs a reduced image; a predicted image transform quantization unit that frequency-transforms and quantizes the reduced image and outputs the first intra prediction data; a predicted image inverse quantization inverse transform unit that inverse-quantizes and inverse-frequency-transforms the first intra prediction data and outputs a locally decoded predicted image; and a predicted image enlargement unit that enlarges the locally decoded predicted image to generate the first intra predicted image.
  • FIG. 1 is a block diagram showing an image coding apparatus according to Embodiment 1 of the present invention.
  • An image encoding apparatus 100 illustrated in FIG. 1 includes a block division unit 101, a first intra prediction unit (first intra-screen prediction unit) 200, an original image difference calculation unit 102, a difference transform quantization unit 103, and an entropy encoding unit 104.
  • the block dividing unit 101 inputs a moving image as an original image, divides the moving image into units of macro blocks, and outputs an image of each divided macro block (hereinafter referred to as a macro block image).
  • the first intra prediction unit 200 generates a first intra predicted image by performing intra-frame prediction, and outputs information about the predicted image as predicted image transform coefficients.
  • the original image difference calculation unit 102 takes the difference between the macroblock image from the block division unit 101 and the first intra prediction image from the first intra prediction unit 200 and outputs the difference as a difference macroblock image.
  • the difference transform quantization unit 103 performs orthogonal transform such as DCT on the difference macroblock image from the original image difference calculation unit 102, quantizes the generated orthogonal transform coefficient, and outputs a quantization coefficient.
  • the entropy encoding unit 104 entropy-encodes the predicted image transform coefficient and the quantization coefficient by variable length encoding or arithmetic encoding, and outputs the result as a bit stream.
  • the first intra prediction unit 200 includes an image reduction unit 201, a predicted image transform quantization unit 202, a predicted image inverse quantization inverse transform unit 203, and a predicted image enlargement unit 204.
  • the image reduction unit 201 reduces the input macroblock image and outputs a reduced image.
  • the image reduction method may be any method: for example, the average value of each 4×4 group of pixels may be used as a pixel value of the reduced image, or sub-sampling may be applied after a low-pass filter.
  • the reduction ratio may be the reciprocal of the enlargement ratio used in the predicted image enlargement unit 204 described later.
  • the predicted image transform quantization unit 202 performs orthogonal transform such as DCT on the input reduced image, quantizes the generated orthogonal transform coefficient, and outputs it as a predicted image transform coefficient.
  • the predicted image inverse quantization inverse transform unit 203 generates a locally decoded reduced image by performing inverse quantization and inverse orthogonal transform, which are the inverse processes of the predicted image transform quantization unit 202, on the input predicted image transform coefficient. Then output.
  • the predicted image enlargement unit 204 performs resolution-improving processing on the input locally decoded reduced image, enlarges it to the same size as the macroblock image, and outputs it as the first intra predicted image.
  • the enlargement may be performed by interpolation processing; the processing content is not particularly limited as long as it improves the resolution of the input image by some method.
  • alternatively, the image may be reduced to a single pixel by the image reduction unit 201 and returned to the original size by the predicted image enlargement unit 204. In that case, the pixel at the lower right of the block can serve as the reduced image, and the same effect can be obtained by using the locally decoded images of neighboring blocks during enlargement.
  • the reduced image may also be configured so that its pixel values are encoded directly, without orthogonal transform or quantization, or so that difference values obtained by DC prediction of the pixel values are encoded.
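The reduction and enlargement steps described above can be sketched as follows. This is a minimal illustration assuming a 16×16 macroblock, 4×4 averaging for reduction, and pixel repetition for enlargement; the text leaves all three choices open.

```python
import numpy as np

def reduce_by_averaging(block, factor=4):
    # One reduction option from the text: the mean of each factor x factor
    # patch becomes one pixel of the reduced image.
    n = block.shape[0] // factor
    return block.reshape(n, factor, n, factor).mean(axis=(1, 3))

def enlarge_by_repetition(reduced, factor=4):
    # Enlarge back to macroblock size by pixel repetition; interpolation
    # would be an equally valid resolution-improving choice.
    return np.repeat(np.repeat(reduced, factor, axis=0), factor, axis=1)

mb = np.arange(256, dtype=np.float64).reshape(16, 16)  # a 16x16 macroblock
small = reduce_by_averaging(mb)       # 4x4 reduced image
pred = enlarge_by_repetition(small)   # 16x16 predicted image (before coding)
```

Note that averaging followed by repetition preserves each 4×4 patch mean, so the prediction is exact for locally flat content regardless of edge direction elsewhere in the frame.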
  • FIG. 2 is a flowchart showing processing of the image encoding device 100.
  • the moving image is divided into macroblock units by the block dividing unit 101, and each divided macroblock image is output (step ST101).
  • intra-frame prediction is performed on each macroblock image in the first intra prediction unit 200, the first intra predicted image is output, and information about the predicted image is output as predicted image transform coefficients (step ST200).
  • when the macroblock image and the first intra predicted image are input to the original image difference calculation unit 102, a difference value is calculated and output as a difference macroblock image (step ST102).
  • the difference macroblock image is subjected to an orthogonal transform such as the DCT in the difference transform quantization unit 103, the generated orthogonal transform coefficients are quantized, and the quantized coefficients are output (step ST103).
  • the quantized coefficients and the predicted image transform coefficients are entropy-encoded by the entropy encoding unit 104, and a bit stream is output (step ST104).
  • in step ST200, the input macroblock image is first reduced by the image reduction unit 201, and a reduced image is output (step ST201).
  • the reduced image is subjected to orthogonal transform such as DCT in the predicted image transform quantization unit 202, and the generated orthogonal transform coefficient is quantized and output as a predicted image transform coefficient (step ST202).
  • the predicted image transform coefficients are subjected to inverse quantization and inverse orthogonal transform, the inverse processes of step ST202, by the predicted image inverse quantization inverse transform unit 203, and a locally decoded reduced image is output (step ST203).
  • when the locally decoded reduced image is input to the predicted image enlargement unit 204, resolution-improving processing is performed, and the image is enlarged to the same size as the macroblock image and output as the first intra predicted image (step ST204).
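Steps ST201 through ST104 (minus the entropy coding itself) can be sketched end to end. The orthonormal DCT, the uniform quantization step `q`, and nearest-neighbour enlargement below are illustrative assumptions; the patent allows any orthogonal transform and any enlargement method.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; the text permits any orthogonal transform.
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def encode_macroblock(mb, q=8.0, factor=4):
    n = mb.shape[0] // factor
    cs, cb = dct_matrix(n), dct_matrix(mb.shape[0])
    reduced = mb.reshape(n, factor, n, factor).mean(axis=(1, 3))        # ST201
    pred_coef = np.round((cs @ reduced @ cs.T) / q)                     # ST202
    dec_reduced = cs.T @ (pred_coef * q) @ cs                           # ST203
    pred_img = np.repeat(np.repeat(dec_reduced, factor, 0), factor, 1)  # ST204
    diff_coef = np.round((cb @ (mb - pred_img) @ cb.T) / q)             # ST102 + ST103
    return pred_coef, diff_coef  # ST104 would entropy-encode both

def decode_macroblock(pred_coef, diff_coef, q=8.0, factor=4):
    # The decoder mirrors ST203/ST204, then adds the decoded difference image.
    n = pred_coef.shape[0]
    cs, cb = dct_matrix(n), dct_matrix(n * factor)
    dec_reduced = cs.T @ (pred_coef * q) @ cs
    pred_img = np.repeat(np.repeat(dec_reduced, factor, 0), factor, 1)
    return pred_img + cb.T @ (diff_coef * q) @ cb
```

Because the encoder builds its prediction from the *locally decoded* reduced image (after quantization), the decoder can reproduce the identical prediction from the bitstream alone.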
  • as described above, the image encoding device 100 performs image reduction, transform, quantization, inverse quantization, inverse transform, and image enlargement when performing intra prediction. It can thereby generate a predicted image of the macroblock image that does not depend on the direction of edges, and because the predicted image is expressed with the quantized transform coefficients of a reduced image, the number of bits required to reproduce the predicted image can be reduced. Stable intra-frame coding with high coding efficiency can therefore be realized.
  • each unit in the image encoding device 100 may be implemented in hardware or in software, or as a module combining hardware and software.
  • the orthogonal transform process is not particularly limited; DCT, DST (Discrete Sine Transform), DFT (Discrete Fourier Transform), wavelet transform, integer transform processing, and the like may be used.
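As a concrete instance of the integer-transform option, the 4×4 integer core transform of H.264 approximates the DCT using only integer arithmetic, which keeps encoder and decoder bit-exact. The sketch below verifies its row orthogonality and its exact invertibility once the row norms are absorbed into (de)quantization.

```python
import numpy as np

# The 4x4 integer transform used by H.264: an integer approximation of the
# DCT with mutually orthogonal rows of squared norms 4, 10, 4, 10.
H = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

# Folding the row norms into the dequantizer makes the transform invertible.
Dinv = np.diag([0.25, 0.1, 0.25, 0.1])

X = np.arange(16, dtype=np.float64).reshape(4, 4)   # sample residual block
Y = H @ X @ H.T                                     # forward 2-D transform
X_rec = H.T @ Dinv @ Y @ Dinv @ H                   # exact inverse
```

In a real codec the `Dinv` scaling is merged into the quantization tables rather than applied as a separate matrix product.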
  • as described above, according to the first embodiment, the image encoding device includes the block division unit 101, which receives one frame of image data, divides it into a plurality of blocks, and outputs macroblock images; the first intra prediction unit (first intra-screen prediction unit) 200, which receives a macroblock image, generates a first intra predicted image, and outputs first intra prediction data, the information for reconstructing that predicted image; the original image difference calculation unit 102, which computes the difference between the macroblock image and the first intra predicted image and outputs a difference image; the difference transform quantization unit 103, which frequency-transforms and quantizes the difference image and outputs transform coefficients; and the entropy encoding unit 104, which entropy-encodes the first intra prediction data and the transform coefficients and outputs a bitstream. The first intra prediction unit 200 includes the image reduction unit 201, which reduces the macroblock image and outputs a reduced image; the predicted image transform quantization unit 202, which frequency-transforms and quantizes the reduced image and outputs the first intra prediction data; the predicted image inverse quantization inverse transform unit 203, which inverse-quantizes and inverse-frequency-transforms the first intra prediction data and outputs a locally decoded predicted image; and the predicted image enlargement unit 204, which enlarges the locally decoded predicted image to generate the first intra predicted image. Prediction can therefore be performed stably and efficiently regardless of the texture of the input image, and coding efficiency can be improved.
  • FIG. 3 is a block diagram showing the configuration of the image encoding device 100a according to the second embodiment.
  • the image encoding device 100a uses a DC component prediction intra prediction unit 200a as the first intra-screen prediction unit. Since the rest of the configuration is the same as that of the image encoding device 100 according to the first embodiment, only the differences from the first embodiment are described below.
  • the DC component prediction intra prediction unit 200a differs from the first intra prediction unit 200 of the first embodiment only in that it additionally includes a DC component prediction unit 205 and a DC component storage buffer 206.
  • the DC component prediction unit 205 outputs the DC component of the input predicted image transform coefficients to the DC component storage buffer 206, reads the DC components of adjacent blocks from the buffer to generate a predicted value of the DC component, and subtracts the predicted value from the DC component of the predicted image transform coefficients to obtain a DC difference coefficient.
  • the obtained DC difference coefficient, together with the remaining AC components, is output to the entropy encoding unit 104 as DC-difference predicted image transform coefficients.
  • the DC component storage buffer 206 is a buffer for storing the DC components of the predicted image transform coefficients.
  • FIG. 4 is a flowchart showing the operation of the image coding apparatus 100a according to the second embodiment.
  • the only difference from the first embodiment is the operation related to the DC component prediction intra prediction unit 200a (step ST200a), and therefore step ST200a will be described.
  • the processing from the generation of the predicted image transform coefficients through the inverse quantization and inverse transform in step ST203 and the image enlargement in step ST204 is the same as in the first embodiment.
  • when the predicted image transform coefficients output in step ST202 are input to the DC component prediction unit 205, the DC component is first extracted and stored in the DC component storage buffer 206 (step ST211).
  • next, the DC components of adjacent blocks are read from the DC component storage buffer 206, a predicted value of the DC component is generated and subtracted from the DC component of the predicted image transform coefficients to obtain a DC difference coefficient, and the obtained DC difference coefficient, together with the remaining AC components, is output to the entropy encoding unit 104 as DC-difference predicted image transform coefficients (step ST212).
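Steps ST211 and ST212 can be sketched minimally as follows, assuming the simplest possible predictor: the DC component of the immediately preceding block. The patent leaves both the choice of adjacent blocks and the prediction rule open.

```python
import numpy as np

def dc_predict(coef_blocks):
    # ST211 + ST212 (sketch): buffer each block's DC component and replace
    # it by its difference from the previous block's DC (left-neighbour
    # predictor, an assumption). AC components pass through unchanged.
    out, prev_dc = [], 0.0
    for coef in coef_blocks:
        diff = coef.copy()
        diff[0, 0] = coef[0, 0] - prev_dc   # DC difference coefficient
        prev_dc = coef[0, 0]                # store DC in the buffer
        out.append(diff)
    return out

def dc_reconstruct(diff_blocks):
    # Decoder mirror: add the predicted DC back before inverse transform.
    out, prev_dc = [], 0.0
    for diff in diff_blocks:
        coef = diff.copy()
        coef[0, 0] = diff[0, 0] + prev_dc
        prev_dc = coef[0, 0]
        out.append(coef)
    return out
```

Because neighbouring blocks tend to have similar brightness, the DC differences are much smaller in magnitude than the DC values themselves, which is exactly the property the entropy coder exploits.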
  • the image encoding device 100a is configured to predict the DC component of the predicted image transform coefficient from the DC component in the adjacent block, and to entropy encode the prediction residual.
  • the amplitude of the DC component prediction residual is generally smaller than the DC component itself. Therefore, video coding with higher compression efficiency can be realized by performing entropy coding of the prediction residual of the DC component.
  • as described above, according to the second embodiment, the DC component prediction intra prediction unit (first intra-screen prediction unit) 200a includes the DC component storage buffer 206, which stores the DC component, the direct-current component of the first intra prediction data, as a locally decoded DC component, and the DC component prediction unit 205, which calculates a predicted value of the DC component from the locally decoded DC components stored in the DC component storage buffer 206 and outputs a first intra prediction data difference value obtained by subtracting the predicted value from the DC component of the first intra prediction data. Since the entropy encoding unit 104 entropy-encodes this difference value instead of the first intra prediction data itself, video encoding with higher compression efficiency can be realized.
  • FIG. 5 is a block diagram illustrating a configuration of the image encoding device 100b according to the third embodiment.
  • the image encoding device 100b uses an average pixel prediction intra prediction unit 200b as the first intra-screen prediction unit, and further includes a difference inverse quantization inverse transform unit 105, a difference image addition unit 106, and a frame memory 107. Since the rest of the configuration is the same as that of the first embodiment shown in FIG. 1, only the differences are described.
  • the difference inverse quantization inverse transform unit 105 applies inverse quantization and inverse orthogonal transform, the inverse processes of the difference transform quantization unit 103, to the quantized coefficients output from the difference transform quantization unit 103, and outputs a locally decoded difference image.
  • the difference image adding unit 106 generates and outputs a locally decoded image by adding the locally decoded difference image and the first intra predicted image.
  • the frame memory 107 is a buffer for storing locally decoded images.
  • the average pixel prediction intra prediction unit 200b differs from the first intra prediction unit 200 of the first embodiment only in that it additionally includes an average pixel value prediction unit 207, an average pixel value subtraction unit 208, and an average pixel value addition unit 209.
  • the average pixel value prediction unit 207 reads local decoded image data of the adjacent block from the frame memory 107, calculates the pixel average value of the adjacent block, and outputs it as the average pixel value.
  • as the average pixel value of adjacent blocks, the average of the pixels adjacent to the upper and left edges may be used, or an average that additionally includes the lower-right pixel of the block diagonally above and to the left of the processing target block.
  • alternatively, the average over all blocks adjacent on the upper and left sides may be used, or the average of pixels located within a predetermined distance of the processing target block.
  • any method gives the same effect as long as the value is an average of pixel values of locally decoded image data in the vicinity of the processing target block.
  • the average pixel value subtraction unit 208 subtracts the average pixel value output from the average pixel value prediction unit 207 from each pixel value of the reduced image output from the image reduction unit 201, and outputs the result to the predicted image transform quantization unit 202 as a difference reduced image.
  • the average pixel value addition unit 209 adds the average pixel value output from the average pixel value prediction unit 207 to the locally decoded reduced image output from the predicted image inverse quantization inverse transform unit 203, and outputs the result to the predicted image enlargement unit 204 as a locally decoded reduced image.
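The neighbour averaging performed by the average pixel value prediction unit 207 might look like the following. The specific choice of upper-row and left-column samples is one of the variants the text mentions, and the fallback value of 128 for blocks with no decoded neighbours is an assumption.

```python
import numpy as np

def average_pixel_value(frame, top, left, size):
    # Average of the locally decoded pixels adjacent to the block whose
    # top-left corner is (top, left): the row just above and the column
    # just to the left (one of the variants described in the text).
    samples = []
    if top > 0:
        samples.append(frame[top - 1, left:left + size])   # row above
    if left > 0:
        samples.append(frame[top:top + size, left - 1])    # column to the left
    if not samples:
        return 128.0   # mid-grey default when nothing is decoded yet (assumption)
    return float(np.concatenate(samples).mean())
```

The same function can run on both sides of the codec because it reads only locally decoded pixels, never the original input.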
  • FIG. 6 is a flowchart showing the operation of the image coding apparatus 100b according to the third embodiment.
  • the moving image is divided into macroblock units by the block dividing unit 101, and each divided macroblock image is output (step ST101).
  • each macroblock image undergoes intra-frame prediction in the average pixel prediction intra prediction unit 200b using the locally decoded image data of surrounding blocks stored in the frame memory 107; a first intra predicted image is output, and information about that image is output as predicted image transform coefficients (step ST200b).
  • when the macroblock image and the first intra predicted image are input to the original image difference calculation unit 102, a difference value is calculated and output as a difference macroblock image (step ST102).
  • the difference macroblock image is subjected to orthogonal transform such as DCT in the difference transform quantization unit 103, and the generated orthogonal transform coefficient is quantized and the quantized coefficient is output (step ST103).
  • the quantized coefficient and the predicted image transform coefficient are entropy encoded by the entropy encoding unit 104, and a bit stream is output (step ST104).
  • the quantized coefficient is subjected to inverse quantization and inverse orthogonal transform, which are inverse processes of the difference transform quantization unit 103, in the difference inverse quantization inverse transform unit 105, and is output as a locally decoded difference image (step ST111).
  • This locally decoded difference image and the first intra predicted image generated in step ST200b are added by the difference image adding unit 106 and output as a locally decoded image (step ST112).
  • the locally decoded image is stored in frame memory 107 (step ST113).
  • next, the details of step ST200b will be described.
  • steps ST201 to ST204 in step ST200b are the same processes as in the first and second embodiments.
  • the input macroblock image is reduced by the image reduction unit 201 and a reduced image is output (step ST201).
  • the average pixel value predicting unit 207 reads the locally decoded image data of the adjacent block from the frame memory 107, calculates the pixel average value of the adjacent block, and outputs it as an average pixel value (step ST221).
  • the average pixel value subtracting unit 208 subtracts the average pixel value from each pixel value of the reduced image, and outputs the result as a difference reduced image to the predicted image transform quantization unit 202 (step ST222).
  • the difference reduced image is subjected to orthogonal transform such as DCT in the predicted image transform quantization unit 202, and the generated orthogonal transform coefficient is quantized and output as a predicted image transform coefficient (step ST202).
  • the predicted image transform coefficient is subjected to inverse quantization and inverse orthogonal transform, which are the inverse processes of step ST202, by the predicted image inverse quantization inverse transform unit 203, and a locally decoded difference reduced image is output (step ST203).
  • the average pixel value is added to the local decoded difference reduced image by the average pixel value adding unit 209 and output as a locally decoded reduced image (step ST223).
  • the locally decoded reduced image is input to the predicted image enlargement unit 204, resolution-improving processing is performed, and the image is enlarged to the same size as the macroblock image and output as the first intra predicted image (step ST204).
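Steps ST222, ST202, ST203, and ST223 form a round trip on the reduced image that can be sketched as below; the orthonormal DCT and the quantization step are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis (an assumption; any orthogonal transform works).
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def code_reduced_image(reduced, avg, q=8.0):
    # ST222: remove the predicted average so the DC coefficient shrinks,
    # ST202: transform + quantize,
    # ST203: inverse quantize + inverse transform,
    # ST223: add the predicted average back.
    c = dct_matrix(reduced.shape[0])
    levels = np.round((c @ (reduced - avg) @ c.T) / q)
    dec = c.T @ (levels * q) @ c + avg
    return levels, dec
```

With a good average prediction the residual is near zero, so almost all quantized levels vanish and the reduced image costs only a handful of bits.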
  • as described above, the image encoding device 100b is configured to predict each pixel value of the reduced image from the average of the pixel values in adjacent blocks, and to transform and quantize the prediction residual.
  • the energy of the prediction residual signal is smaller than that of the signal before prediction, and the entropy of the quantized transform coefficient is also reduced. Therefore, by performing entropy coding of the signal obtained in this way, video coding with higher compression efficiency can be realized.
  • although the average pixel value is subtracted from each pixel value of the reduced image in the description above, encoding can equally be performed by adding the average pixel value after the locally decoded reduced image has been enlarged by the predicted image enlargement unit 204.
  • as described above, according to the third embodiment, the image encoding device includes the difference inverse quantization inverse transform unit 105, which receives transform coefficients, performs inverse quantization and inverse frequency transform, and outputs a difference locally decoded image; the difference image addition unit 106, which adds the first intra predicted image and the difference locally decoded image and outputs a locally decoded image; and the frame memory 107, which stores the locally decoded image. The average pixel prediction intra prediction unit (first intra-screen prediction unit) 200b predicts the average pixel value of the processing target area from the pixel values of its surrounding area using the locally decoded image stored in the frame memory 107 and uses the result as the predicted average pixel value, and the predicted image inverse quantization inverse transform unit 203 inverse-quantizes and inverse-frequency-transforms the first intra prediction data and outputs the result as an average-removed locally decoded predicted image. Video encoding with higher compression efficiency can therefore be realized.
  • FIG. 7 is a block diagram illustrating a configuration of an image encoding device 100c according to the fourth embodiment.
  • the image encoding device 100c additionally includes a second intra prediction unit (second intra-screen prediction unit) 108, so that either intra prediction means can be selected. Since the configuration of the average pixel prediction intra prediction unit 200b is the same as in the third embodiment, only the parts that differ are described below.
  • the second intra prediction unit 108 performs intra prediction as in, for example, the conventional H.264 standard: the pixel values of encoded neighboring blocks are extended to generate and output a second intra predicted image, and information such as the prediction direction is output to the entropy encoding unit 104 as an intra prediction mode.
  • the intra prediction switching switch (predicted image selection unit) 109 compares the input first intra predicted image and second intra predicted image with the macroblock image, selects the one with higher prediction efficiency, and sets it as the intra predicted image. In addition to outputting, a flag indicating which prediction image has been selected is output to the entropy encoding unit 104 as intra prediction type information.
  • FIG. 8 is a flowchart showing the operation of the image coding apparatus 100c according to the fourth embodiment. Since the operations of step ST101, step ST200b, steps ST102 to ST104, and steps ST111 to ST113 are the same as those in the third embodiment, the description of these processes is omitted.
  • the second intra prediction unit 108 performs conventional H.264 intra prediction, outputs a second intra predicted image, and outputs information such as the prediction direction to the entropy encoding unit 104 as an intra prediction mode (step ST121).
  • the intra prediction changeover switch 109 compares each input predicted image with the macroblock image, selects the one with the higher prediction efficiency, and outputs it as the intra predicted image; a flag indicating which predicted image was selected is output to the entropy encoding unit 104 as intra prediction type information (step ST122).
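The selection in step ST122 could be sketched as follows. The patent does not fix the comparison metric; the sum of absolute differences (SAD) used here is a common, cheap choice and is an assumption.

```python
import numpy as np

def select_intra_prediction(mb, pred_first, pred_second):
    # Compare both candidate predicted images against the macroblock and
    # keep the closer one, signalling the choice with a one-bit flag
    # (the intra prediction type information of step ST122).
    sad_first = np.abs(mb - pred_first).sum()
    sad_second = np.abs(mb - pred_second).sum()
    if sad_first <= sad_second:
        return pred_first, 0   # flag 0: first (reduced-image) prediction
    return pred_second, 1      # flag 1: second (H.264-style) prediction
```

A rate-distortion cost that also accounts for the bits each candidate consumes would be a more faithful selection criterion in a practical encoder.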
• The first intra prediction unit may be defined as one of a plurality of intra prediction types as shown in FIG. 21, or a flag for identifying the first intra prediction unit and the second intra prediction unit may be defined as shown in FIG.
• As described above, intra prediction based on conventional peripheral pixel expansion can also be selected. Edges in directions for which peripheral pixel expansion has good prediction efficiency can be predicted more effectively by that method, while other edges can be predicted effectively by average pixel prediction intra prediction, so video coding with higher compression efficiency can be realized.
• That is, the image encoding device includes the second intra prediction unit (second intra-screen prediction unit) 108, which predicts the pixel value of the processing target region from the pixel values of the peripheral region of the processing target region using the locally decoded image stored in the frame memory 107, outputs the second intra predicted image, and outputs information for reconstructing the predicted image as the second intra prediction data.
  • FIG. 9 is a block diagram showing a configuration of an image encoding device 100d according to the fifth embodiment.
  • the image encoding device 100d according to the fifth embodiment includes a switching intra prediction unit 200c as a first intra-screen prediction unit.
• The switching intra prediction unit 200c adds the DC component prediction unit 205 and the DC component storage buffer 206 of the second embodiment to the configuration of the average pixel prediction intra prediction unit 200b of the fourth embodiment. Since the other configurations are the same as those of the fourth embodiment, corresponding portions are given the same reference numerals and their description is omitted.
  • FIG. 10 is a flowchart showing the operation of the image encoding device 100d according to the fifth embodiment. Since only the process of step ST200c is different from the fourth embodiment, only the different process will be described.
• When the predicted image transform coefficients output by the predicted image transform/quantization process in step ST202 are input to the DC component prediction unit 205, the DC component is first extracted and stored in the DC component storage buffer 206 (step ST211).
• Next, the DC components of the adjacent blocks are read from the DC component storage buffer 206, a predicted value of the DC component is generated, and it is subtracted from the DC component of the predicted image transform coefficients to obtain a DC difference coefficient. The obtained DC difference coefficient and the remaining AC components are output to the entropy encoding unit 104 as DC-difference predicted image transform coefficients (step ST212).
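The DC differencing of steps ST211 and ST212 can be sketched as follows. The embodiment does not fix a particular DC predictor, so averaging the neighbouring blocks' stored DC components is an assumption made here for illustration, as are all names.

```python
def dc_difference(coeffs, neighbour_dcs):
    """Replace the DC term of a transform block by its difference from a prediction.

    coeffs        -- transform coefficients with coeffs[0] as the DC component
    neighbour_dcs -- DC components of already-coded adjacent blocks
    """
    dc = coeffs[0]                                  # extract DC (step ST211)
    # Assumed predictor: average of neighbouring DC components.
    predicted_dc = sum(neighbour_dcs) // len(neighbour_dcs) if neighbour_dcs else 0
    # DC difference coefficient + unchanged AC components (step ST212).
    return [dc - predicted_dc] + coeffs[1:]

block = [100, 3, -1, 0]                 # toy transform coefficients
out = dc_difference(block, [96, 104])   # neighbour DCs average to 100
```

Only the small DC residual (here 0) then needs to be entropy-coded, which is the entropy reduction the embodiment describes.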
• As described above, the image coding device 100d according to Embodiment 5 performs average pixel value prediction and, when the switching intra prediction is selected, DC component prediction; by suppressing the entropy of the coefficients generated in this way, video encoding with higher compression efficiency can be realized.
• In this embodiment, the configurations of the DC component prediction unit 205 and the DC component storage buffer 206 were applied to the configuration of the fourth embodiment, but they may also be applied to the configuration of the third embodiment.
• That is, the switching intra prediction unit (first intra-screen prediction unit) 200c includes the DC component storage buffer 206, which stores the DC component, i.e., the direct-current component of the first intra prediction data, as a locally decoded DC component, and the DC component prediction unit 205, which calculates a predicted value of the DC component based on the locally decoded DC components stored in the DC component storage buffer 206 and outputs a first intra prediction data difference value obtained by subtracting the predicted value from the DC component of the first intra prediction data. Since the entropy encoding unit 104 entropy-encodes the first intra prediction data difference value instead of the first intra prediction data, video encoding with higher compression efficiency can be realized.
• Embodiments 6 to 10 relate to image decoding apparatuses corresponding to the image encoding apparatuses of Embodiments 1 to 5.
  • FIG. 11 is a block diagram illustrating a configuration of the image decoding apparatus 300 according to the sixth embodiment.
• The image decoding apparatus 300 includes an entropy decoding unit 301, a difference inverse quantization/inverse transform unit 302, a first predicted image generation unit (first intra-screen predicted image generation unit) 400, and a difference image addition unit 303.
• The entropy decoding unit 301 performs entropy decoding on the input bitstream and outputs the resulting predicted image transform coefficients and quantized coefficients.
  • the difference inverse quantization inverse transform unit 302 performs inverse quantization and inverse orthogonal transform, which are inverse processes of the difference transform quantization unit 103 in the image coding apparatus, on the quantized coefficient, and outputs a locally decoded difference image.
  • the first predicted image generation unit 400 generates a predicted image based on the input predicted image conversion coefficient and outputs it as a first intra predicted image.
  • the difference image addition unit 303 adds the locally decoded difference image and the first intra-predicted image and outputs the result as a decoded image.
  • the first predicted image generation unit 400 includes a predicted image inverse quantization inverse transform unit 401 and a predicted image enlargement unit 402.
  • the predicted image inverse quantization inverse transform unit 401 generates a locally decoded reduced image by performing inverse quantization and inverse orthogonal transform, which are inverse processes of the predicted image transform quantization unit 202, on the input predicted image transform coefficient. And output.
  • the predicted image enlarging unit 402 performs processing for improving the resolution of the input local decoded reduced image, enlarges the image to have the same size as the macroblock image, and outputs the first intra predicted image.
  • FIG. 12 is a flowchart showing processing of the image decoding apparatus 300 according to Embodiment 6.
  • the bit stream is entropy-decoded by the entropy decoding unit 301, and as a result, a predicted image transform coefficient and a quantization coefficient are output (step ST301).
  • the quantized coefficient is subjected to inverse quantization and inverse transform processing in the difference inverse quantization inverse transform unit 302, and a local decoded difference image is output (step ST302).
  • the predicted image conversion coefficient is input to the first predicted image generation unit 400, a predicted image is generated and output as the first intra predicted image (step ST400).
  • the locally decoded differential image and the first intra predicted image obtained in this way are added by the differential image adding unit 303 and output as a decoded image (step ST303).
• More specifically, in step ST400, the input predicted image transform coefficients are subjected to inverse quantization and inverse transform processing in the predicted image inverse quantization/inverse transform unit 401, and a locally decoded reduced image is output as a result (step ST401).
  • the local decoded reduced image is subjected to processing for improving the resolution in the predicted image enlarging unit 402, and is output as a first intra predicted image (step ST402).
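Steps ST401 and ST402 can be illustrated as follows. Nearest-neighbour pixel doubling stands in here for the resolution-raising processing of the predicted image enlargement unit 402, which the embodiment does not pin down to a specific filter; all names are illustrative.

```python
def enlarge_2x(reduced):
    """Double a 2-D locally decoded reduced image in both dimensions.

    Pixel repetition is the simplest possible stand-in for the
    enlargement filter; a real implementation would likely interpolate.
    """
    out = []
    for row in reduced:
        wide = [p for p in row for _ in (0, 1)]   # repeat each pixel horizontally
        out.append(wide)
        out.append(list(wide))                    # repeat the row vertically
    return out

reduced = [[1, 2],
           [3, 4]]                 # toy locally decoded reduced image
pred = enlarge_2x(reduced)         # 4x4 first intra predicted image
```

Because this enlarged image is the predicted image itself, the reduced image alone already serves as a low-resolution preview, which is what makes the thumbnail use case mentioned below cheap.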
• As described above, the image decoding apparatus 300 according to Embodiment 6 performs the inverse process of the image encoding apparatus 100 according to Embodiment 1, and can therefore suitably decode the bit stream generated by the image encoding apparatus 100 according to Embodiment 1 to obtain a decoded image.
• Further, since a reduced image can be obtained without generating a full decoded image, a thumbnail image or the like can be produced with a small amount of computation.
• That is, the image decoding apparatus includes the entropy decoding unit 301, which entropy-decodes the bitstream and outputs the transform coefficients and the first intra-screen prediction data; the difference inverse quantization/inverse transform unit 302, which inversely quantizes and inverse-frequency-transforms the transform coefficients and outputs a decoded difference image; the first predicted image generation unit (first intra-screen predicted image generation unit) 400, which reconstructs and outputs the first intra predicted image based on the first intra prediction data; and the difference image addition unit 303, which adds the decoded difference image and the first intra predicted image and outputs a decoded image. The first predicted image generation unit 400 includes the predicted image inverse quantization/inverse transform unit 401, which inversely quantizes and inverse-frequency-transforms the first intra prediction data and outputs a decoded predicted image, and the predicted image enlargement unit 402, which enlarges the decoded predicted image to generate the first intra predicted image. The efficiently coded video bit stream can therefore be decoded, and thumbnail images can be produced with a smaller amount of computation.
  • FIG. 13 is a block diagram illustrating a configuration of an image decoding device 300a according to the seventh embodiment.
  • the image decoding apparatus 300a includes a DC component prediction intra-prediction image generation unit 400a as a first intra-screen prediction image generation unit.
  • This DC component prediction intra-prediction image generation unit 400a replaces the first prediction image generation unit 400 of the sixth embodiment.
• The DC component prediction intra predicted image generation unit 400a includes a DC component prediction unit 403 and a DC component storage buffer 404.
• The DC component prediction unit 403 reads the stored DC components of the adjacent blocks from the DC component storage buffer 404, generates a predicted value of the DC component, and adds it to the DC component of the predicted image transform coefficients to obtain the DC coefficient. The obtained DC coefficient and the remaining AC components are output as predicted image transform coefficients to the predicted image inverse quantization/inverse transform unit 401, and the DC coefficient is stored in the DC component storage buffer 404.
  • the DC component storage buffer 404 is a buffer for storing the DC component of the predicted image conversion coefficient.
  • FIG. 14 is a flowchart showing the operation of the image decoding apparatus 300a according to the seventh embodiment. Since the difference between the seventh embodiment and the sixth embodiment is only the process related to the first predicted image generation unit 400 (step ST400a), only step ST400a will be described.
• When the predicted image transform coefficients output in step ST301 are input to the DC component prediction unit 403, the DC components of the adjacent blocks are first read from the DC component storage buffer 404 and a predicted value of the DC component is generated. The DC coefficient is obtained by adding this predicted value to the DC component of the predicted image transform coefficients. The obtained DC coefficient and the remaining AC components are output to the predicted image inverse quantization/inverse transform unit 401 as predicted image transform coefficients (step ST411). The obtained DC coefficient is stored in the DC component storage buffer 404 (step ST412).
  • Subsequent steps ST401 and ST402 are the same as in the sixth embodiment.
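The decoder-side DC reconstruction of steps ST411 and ST412 mirrors the encoder's DC differencing and can be sketched as follows. The averaging predictor and all names are assumptions for illustration, matching the encoder-side sketch rather than any predictor fixed by the embodiment.

```python
def reconstruct_dc(coeffs, dc_buffer, neighbours):
    """Add the predicted DC back to the decoded DC difference and store the result.

    coeffs     -- decoded coefficients with coeffs[0] as the DC difference
    dc_buffer  -- dict standing in for the DC component storage buffer 404
    neighbours -- keys of the adjacent blocks whose DCs drive the prediction
    """
    predicted_dc = sum(dc_buffer[n] for n in neighbours) // len(neighbours)
    dc = coeffs[0] + predicted_dc        # restore the DC coefficient (step ST411)
    dc_buffer['current'] = dc            # store it for later blocks (step ST412)
    return [dc] + coeffs[1:]

buffer = {'left': 96, 'above': 104}
out = reconstruct_dc([0, 3, -1, 0], buffer, ['left', 'above'])
```

Because encoder and decoder form the same prediction from the same already-processed blocks, the restored DC (here 100) matches the encoder's original value exactly.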
• As described above, the image decoding device 300a according to the seventh embodiment performs the inverse process of the image coding device 100a according to the second embodiment, and can therefore suitably decode the bit stream generated by the image coding device 100a according to the second embodiment to obtain a decoded image.
• That is, the DC component prediction intra predicted image generation unit (first intra-screen predicted image generation unit) 400a includes the DC component prediction unit 403, which calculates a predicted value of the DC component based on the decoded DC components, adds the predicted value to the DC component of the first intra-screen prediction data difference value, and then outputs the first intra-screen prediction data, and the DC component storage buffer 404, which stores the DC component, i.e., the direct-current component, as a decoded DC component. The entropy decoding unit 301 entropy-decodes the first intra prediction data difference value instead of the first intra prediction data.
  • FIG. 15 is a block diagram illustrating a configuration of an image decoding device 300b according to the eighth embodiment.
  • the image decoding apparatus 300b is configured using an average pixel prediction intra-prediction image generation unit 400b as a first intra-screen prediction image generation unit, and further includes a frame memory 304.
  • Other configurations are the same as those in the sixth and seventh embodiments.
  • the frame memory 304 is a buffer for storing the decoded image.
  • the average pixel prediction intra prediction image generation unit 400b includes an average pixel value prediction unit 405 and an average pixel value addition unit 406 in addition to the configuration of the first prediction image generation unit 400 in the sixth embodiment.
  • the average pixel value prediction unit 405 reads the decoded image data of the adjacent block from the frame memory 304, calculates the pixel average value of the adjacent block, and outputs it as the average pixel value.
• The average pixel value addition unit 406 adds the average pixel value output from the average pixel value prediction unit 405 to the locally decoded reduced image output from the predicted image inverse quantization/inverse transform unit 401, and outputs the result to the predicted image enlargement unit 402 as the locally decoded reduced image.
• By performing the decoding processes in the same order, the decoding device can also decode suitably.
  • FIG. 16 is a flowchart showing processing of the image decoding device 300b according to Embodiment 8.
  • the bit stream is entropy-decoded by the entropy decoding unit 301, and as a result, a predicted image transform coefficient and a quantization coefficient are output (step ST301).
  • the quantized coefficient is subjected to inverse quantization and inverse transform processing in the difference inverse quantization inverse transform unit 302, and a local decoded difference image is output (step ST302).
• In step ST400b, when the predicted image transform coefficients are input to the average pixel prediction intra predicted image generation unit 400b, a predicted image is generated based on the decoded image stored in the frame memory 304 and output as the first intra predicted image.
  • the locally decoded differential image and the first intra predicted image obtained in this way are added by the differential image adding unit 303 and output as a decoded image (step ST303).
  • the decoded image is stored in the frame memory 304 and used for intra prediction image generation (step ST311).
• More specifically, in step ST400b, the input predicted image transform coefficients are subjected to inverse quantization and inverse transform processing in the predicted image inverse quantization/inverse transform unit 401, and a locally decoded reduced image is output as a result (step ST401). Next, the pixel value data of the decoded peripheral blocks are input to the average pixel value prediction unit 405, and their average value is calculated and output as the average pixel value (step ST421).
  • the average pixel value and the locally decoded reduced image are added by the average pixel value adding unit 406, and a locally decoded reduced image is output (step ST422).
  • the local decoded reduced image is subjected to processing for improving the resolution in the predicted image enlarging unit 402, and is output as a first intra predicted image (step ST402).
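Steps ST421 and ST422 can be sketched as follows: the average of the decoded neighbouring pixels is restored to every sample of the mean-removed locally decoded reduced image before enlargement. The flat-list layout, integer averaging, and all names are assumptions made for illustration.

```python
def add_average_pixel_value(reduced, neighbour_pixels):
    """Restore the predicted mean to a mean-removed locally decoded reduced image.

    reduced          -- mean-removed reduced image samples
    neighbour_pixels -- decoded pixels of the peripheral blocks (from frame memory)
    """
    avg = sum(neighbour_pixels) // len(neighbour_pixels)   # average pixel value (step ST421)
    return [p + avg for p in reduced]                      # add it to every sample (step ST422)

residual = [-2, 0, 1, 1]                      # toy mean-removed reduced image
restored = add_average_pixel_value(residual, [9, 11, 10, 10])
```

Since the decoder computes the average from already-decoded pixels, no extra side information is needed to restore the mean, which is the point of this prediction.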
• As described above, the image decoding device 300b according to the eighth embodiment performs the inverse process of the image coding device 100b according to the third embodiment, and can therefore suitably decode the bit stream generated by the image coding device 100b according to the third embodiment to obtain a decoded image.
• That is, the image decoding device includes the frame memory 304, which stores the decoded image, and the average pixel prediction intra predicted image generation unit (first intra-screen predicted image generation unit) 400b includes an average pixel value prediction unit 405, which predicts the average of the pixel values of the processing target area from the pixel values of the peripheral area of the processing target area using the decoded image stored in the frame memory 304 and outputs the predicted average pixel value, and an average pixel value addition unit 406, which adds the output of the predicted image inverse quantization/inverse transform unit 401 and the predicted average pixel value and outputs a decoded predicted image. Since the predicted image inverse quantization/inverse transform unit 401 inversely quantizes and inverse-frequency-transforms the intra-screen prediction data and outputs an average-value-removed decoded predicted image, the encoded image bitstream can be efficiently decoded to obtain a decoded image.
  • FIG. 17 is a block diagram illustrating a configuration of an image decoding device 300c according to the ninth embodiment.
• The image decoding device 300c also has a function whereby the second predicted image generation unit (second intra-screen predicted image generation unit) 305 can be selected as an intra predicted image generation means. Since the configuration of the average pixel prediction intra predicted image generation unit 400b is the same as that of the eighth embodiment, only the configuration that differs from it is described below.
• The second predicted image generation unit 305 obtains information such as the prediction direction based on the intra prediction mode entropy-decoded by the entropy decoding unit 301, and generates and outputs a second intra predicted image by expanding the pixel values of the encoded peripheral blocks, as in intra prediction in H.264, for example.
• The intra prediction changeover switch (predicted image selection unit) 306 selects one of the input first intra predicted image and second intra predicted image based on the intra prediction type information, a flag entropy-decoded by the entropy decoding unit 301 that indicates which predicted image was selected, and outputs it as the intra predicted image.
  • FIG. 18 is a flowchart illustrating the operation of the image decoding device 300c according to the ninth embodiment.
• First, intra prediction type information, a flag indicating which predicted image was selected, is obtained as entropy-decoded information. If the intra prediction type information selects the second intra predicted image, second intra predicted image generation is performed; otherwise, switching is performed so that average pixel prediction intra predicted image generation is performed (step ST321).
• The second predicted image generation unit 305 obtains information such as the prediction direction based on the intra prediction mode entropy-decoded by the entropy decoding unit 301, and generates and outputs a second intra predicted image by expanding the pixel values of the encoded peripheral blocks, as in intra prediction in H.264 (step ST322). Since the other operations are the same as those in the eighth embodiment, their description is omitted here.
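The peripheral pixel expansion performed in step ST322 can be illustrated with H.264's vertical prediction mode, in which the row of reconstructed pixels above the block is copied down into every row of the predicted block. This is only one of H.264's directional modes, chosen for brevity; names are illustrative.

```python
def vertical_intra_prediction(above_row, block_height):
    """H.264-style vertical mode: extrapolate the pixels above the block downwards.

    above_row    -- reconstructed pixels of the row directly above the block
    block_height -- number of rows in the block to predict
    """
    # Each predicted row is an independent copy of the row above the block.
    return [list(above_row) for _ in range(block_height)]

above = [10, 20, 30, 40]                       # toy reconstructed neighbours
pred = vertical_intra_prediction(above, 4)     # 4x4 second intra predicted image
```

The other directional modes differ only in which neighbours are expanded and along which direction; the copy-and-extrapolate structure is the same.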
• As described above, the image decoding device 300c according to the ninth embodiment performs the inverse process of the image coding device 100c according to the fourth embodiment, and can therefore suitably decode the bit stream generated by the image coding device 100c according to the fourth embodiment to obtain a decoded image.
• That is, the entropy decoding unit 301 entropy-decodes the second intra prediction data and the prediction type information, and the image decoding device includes the second predicted image generation unit (second intra-screen predicted image generation unit) 305, which, based on the second intra prediction data, predicts the pixel value of the processing target region from the pixel values of the peripheral region of the processing target region using the decoded image stored in the frame memory 304 and outputs a second intra predicted image, and the intra prediction changeover switch (predicted image selection unit) 306, which selects and outputs one of the first intra predicted image and the second intra predicted image based on the prediction type information. The encoded image bitstream can therefore be efficiently decoded to obtain a decoded image.
• FIG. 19 is a block diagram illustrating the configuration of the image decoding device 300d according to the tenth embodiment.
  • the image decoding device 300d according to the tenth embodiment includes a switching intra predicted image generation unit 400c as the first intra prediction image generation unit.
• The switching intra predicted image generation unit 400c adds the DC component prediction unit 403 and the DC component storage buffer 404 of the seventh embodiment to the configuration of the average pixel prediction intra predicted image generation unit 400b of the ninth embodiment. Since the other configurations are the same as those of the ninth embodiment, corresponding portions are given the same reference numerals and their description is omitted.
  • FIG. 20 is a flowchart illustrating the operation of the image decoding device 300d according to the tenth embodiment. Since only the process of step ST400c is different from the ninth embodiment, only the different process will be described.
• When the predicted image transform coefficients output in step ST301 are input to the DC component prediction unit 403, the DC components of the adjacent blocks are first read from the DC component storage buffer 404 and a predicted value of the DC component is generated. The DC coefficient is obtained by adding this predicted value to the DC component of the predicted image transform coefficients. The obtained DC coefficient and the remaining AC components are output to the predicted image inverse quantization/inverse transform unit 401 as predicted image transform coefficients (step ST411). The obtained DC coefficient is stored in the DC component storage buffer 404 (step ST412).
  • Other processes are the same as those in the ninth embodiment.
• As described above, the image decoding device 300d according to the tenth embodiment performs the inverse process of the image coding device 100d according to the fifth embodiment, and can therefore suitably decode the bit stream generated by the image coding device 100d according to the fifth embodiment to obtain a decoded image.
• In this embodiment, the configurations of the DC component prediction unit 403 and the DC component storage buffer 404 were applied to the configuration of the ninth embodiment, but they may also be applied to the configuration of the eighth embodiment.
• That is, the switching intra predicted image generation unit (first intra-screen predicted image generation unit) 400c includes the DC component prediction unit 403, which calculates a predicted value of the DC component based on the decoded DC components, adds the predicted value to the DC component of the first intra-screen prediction data difference value, and outputs the first intra-screen prediction data, and the DC component storage buffer 404, which stores the DC component, i.e., the direct-current component of the first intra-screen prediction data, as a decoded DC component. Since the entropy decoding unit 301 outputs the first intra-screen prediction data difference value instead of the first intra-screen prediction data as the entropy decoding result, the encoded image bitstream can be efficiently decoded to obtain a decoded image.
• The image encoding device and the image decoding device according to the present invention perform intra-screen prediction when an image is compressed and transmitted and decode the compressed data, and are suitable for encoding and decoding when the H.264 encoding method is used.

Abstract

In the provided image encoding device, an image size-reduction unit (201) reduces the size of a macroblock image and outputs a size-reduced image. A prediction-image transformation/quantization unit (202) performs a frequency transformation on the size-reduced image, quantizes said image, and outputs first intra prediction data. A prediction-image reverse-quantization/reverse-transformation unit (203) performs reverse quantization and a reverse frequency transformation on the first intra prediction data and outputs a locally-decoded prediction image. A prediction-image enlargement unit (204) enlarges the locally-decoded prediction image to generate a first intra prediction image. An original-image difference computation unit (102) outputs a differential image representing the difference between the macroblock image and the first intra prediction image. A difference transformation/quantization unit (103) performs a frequency transformation on the differential image, quantizes said image, and outputs transformation coefficients. An entropy encoding unit (104) performs entropy encoding on the first intra prediction data and the transformation coefficients, outputting a bitstream.

Description

Image encoding apparatus and image decoding apparatus
The present invention relates to an image encoding device that performs intra-screen prediction when an image is compressed and transmitted, and an image decoding device that decodes compressed data.
For example, in the H.264 moving image encoding method recommended by ITU-T (International Telecommunication Union Telecommunication Standardization Sector), intra-frame redundancy is removed by performing intra-screen prediction, and the prediction error signal is compression-encoded. Conventionally, there have been devices, such as that disclosed in Patent Document 1, serving as an encoding device that performs such compression encoding and a decoding device that decodes it.
In such a conventional image encoding device, in order to generate an encoded bit stream from an input image, the input image is first divided by an image dividing unit into macroblock input images, each of macroblock size, which is the predetermined encoding unit, and processing is performed in units of macroblocks. The predicted image generated by the intra prediction unit is subtracted from the macroblock input image by the subtraction unit, and the resulting difference image signal is output. The difference image signal is input to the frequency conversion unit, converted by frequency conversion into transform coefficients representing the amplitude of each frequency, and output to the quantization unit. The transform coefficients are quantized in the quantization unit, and quantized transform coefficients are output. The output quantized transform coefficients are entropy-encoded in the entropy encoding unit and output as an encoded bit stream. The quantized transform coefficients are also input to the inverse quantization unit, which inversely quantizes them and outputs decoded transform coefficients. The decoded transform coefficients are inversely frequency-transformed by the inverse frequency conversion unit to obtain a decoded difference image. The predicted image is added to the decoded difference image in the addition unit, and the result is stored in the frame memory as a locally decoded image. The locally decoded image stored in the frame memory is used by the prediction unit to generate a predicted image.
In the H.264 intra-screen prediction method performed in the prediction unit, a predicted image is generated by linearly expanding the locally decoded image of an encoded block adjacent to the block to be encoded. In the intra 4×4 prediction mode and the intra 8×8 prediction mode, the optimal prediction is selected from a total of nine types: eight prediction directions plus DC prediction, which performs prediction using the average value of surrounding pixels. In the intra 16×16 prediction mode, the optimal one is selected from a total of four types: horizontal, vertical, diagonal, and DC prediction. The selected prediction mode and prediction direction are each entropy-encoded in the entropy encoding unit and multiplexed into the encoded bitstream.
Patent Document 1: International Publication No. 08/072500
However, in the conventional image encoding device, as described above, H.264 intra-frame prediction generates a predicted image by linearly expanding the locally decoded image around the block to be encoded, so a predicted image is generated in which changes in the peripheral locally decoded image are formed directly as straight edges. Therefore, when the edges of this straight-edged predicted image do not match the edges of the actual input image, straight edges are added to the prediction error signal, resulting in a decrease in coding efficiency. That is, conventional intra-frame prediction has the problem that prediction efficiency is good for video having edges in a specific direction but poor for video having edges in other directions.
The present invention has been made to solve the above-described problems, and its object is to obtain an image coding apparatus capable of performing stable and efficient prediction regardless of the texture of the input image and improving the coding efficiency.
An image encoding device according to the present invention includes: a block dividing unit that receives one frame of image data, divides it into a plurality of blocks, and outputs macroblock images; a first intra-screen prediction unit that receives a macroblock image, generates a first intra-screen predicted image, and outputs first intra-screen prediction data, which is information for reconstructing the intra-screen predicted image; a difference image generation unit that calculates the difference between the macroblock image and the first intra-screen predicted image and outputs a difference image; a difference transform/quantization unit that frequency-transforms and quantizes the difference image and outputs transform coefficients; and an entropy encoding unit that entropy-encodes the first intra-screen prediction data and the transform coefficients and outputs a bitstream. The first intra-screen prediction unit includes: an image reduction unit that reduces the macroblock image and outputs a reduced image; a predicted image transform/quantization unit that frequency-transforms and quantizes the reduced image and outputs the first intra-screen prediction data; a predicted image inverse quantization/inverse transform unit that inversely quantizes and inverse-frequency-transforms the first intra-screen prediction data and outputs a locally decoded predicted image; and a predicted image enlargement unit that enlarges the locally decoded predicted image and generates the first intra-screen predicted image. As a result, stable and efficient prediction can be performed regardless of the texture of the input image, and encoding efficiency can be improved.
A block diagram showing the configuration of the image encoding device according to Embodiment 1 of the present invention. A flowchart showing the operation of the image encoding device according to Embodiment 1 of the present invention. A block diagram showing the configuration of the image encoding device according to Embodiment 2 of the present invention. A flowchart showing the operation of the image encoding device according to Embodiment 2 of the present invention. A block diagram showing the configuration of the image encoding device according to Embodiment 3 of the present invention. A flowchart showing the operation of the image encoding device according to Embodiment 3 of the present invention. A block diagram showing the configuration of the image encoding device according to Embodiment 4 of the present invention. A flowchart showing the operation of the image encoding device according to Embodiment 4 of the present invention. A block diagram showing the configuration of the image encoding device according to Embodiment 5 of the present invention. A flowchart showing the operation of the image encoding device according to Embodiment 5 of the present invention. A block diagram showing the configuration of the image decoding device according to Embodiment 6 of the present invention. A flowchart showing the operation of the image decoding device according to Embodiment 6 of the present invention.
A block diagram showing the configuration of the image decoding device according to Embodiment 7 of the present invention. A flowchart showing the operation of the image decoding device according to Embodiment 7 of the present invention. A block diagram showing the configuration of the image decoding device according to Embodiment 8 of the present invention. A flowchart showing the operation of the image decoding device according to Embodiment 8 of the present invention. A block diagram showing the configuration of the image decoding device according to Embodiment 9 of the present invention. A flowchart showing the operation of the image decoding device according to Embodiment 9 of the present invention. A block diagram showing the configuration of the image decoding device according to Embodiment 10 of the present invention. A flowchart showing the operation of the image decoding device according to Embodiment 10 of the present invention. An explanatory diagram of the intra prediction type information of the image encoding device according to Embodiment 4 of the present invention. An explanatory diagram showing another example of the intra prediction type information of the image encoding device according to Embodiment 4 of the present invention.
 Hereinafter, in order to describe the present invention in more detail, embodiments for carrying out the invention will be described with reference to the accompanying drawings.
Embodiment 1.
 FIG. 1 is a block diagram showing an image encoding device according to Embodiment 1 of the present invention.
 The image encoding device 100 shown in FIG. 1 includes a block division unit 101, a first intra prediction unit (first intra-picture prediction unit) 200, an original image difference calculation unit 102, a difference transform quantization unit 103, and an entropy encoding unit 104.
 The block division unit 101 receives a moving image as the original image, divides it into macroblock units, and outputs the image of each macroblock (hereinafter referred to as a macroblock image). The first intra prediction unit 200 performs intra-frame prediction to generate a first intra predicted image and outputs information about the predicted image as predicted image transform coefficients. The original image difference calculation unit 102 takes the difference between the macroblock image from the block division unit 101 and the first intra predicted image from the first intra prediction unit 200 and outputs it as a difference macroblock image. The difference transform quantization unit 103 applies an orthogonal transform such as the DCT to the difference macroblock image, quantizes the resulting orthogonal transform coefficients, and outputs quantized coefficients. The entropy encoding unit 104 entropy-encodes the predicted image transform coefficients and the quantized coefficients by variable-length coding, arithmetic coding, or the like, and outputs the result as a bitstream.
 The first intra prediction unit 200 comprises an image reduction unit 201, a predicted image transform quantization unit 202, a predicted image inverse quantization inverse transform unit 203, and a predicted image enlargement unit 204.
 The image reduction unit 201 reduces the input macroblock image and outputs a reduced image. Any reduction method may be used: for example, taking the average pixel value of each 4 × 4 group of pixels as one pixel of the reduced image, or sub-sampling after applying a low-pass filter. The reduction ratio need only be the reciprocal of the enlargement ratio used by the predicted image enlargement unit 204 described later. The predicted image transform quantization unit 202 applies an orthogonal transform such as the DCT to the input reduced image, quantizes the resulting orthogonal transform coefficients, and outputs them as predicted image transform coefficients. The predicted image inverse quantization inverse transform unit 203 generates and outputs a locally decoded reduced image by applying inverse quantization and the inverse orthogonal transform, the inverse of the processing in the predicted image transform quantization unit 202, to the predicted image transform coefficients. The predicted image enlargement unit 204 raises the resolution of the locally decoded reduced image, enlarging it to the same size as the macroblock image, and outputs it as the first intra predicted image. The enlargement may be performed by interpolation, for example, but the processing is not limited in any particular way as long as it raises the resolution of the input image.
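As a sketch of one such reduction/enlargement pair, the following illustrates 4 × 4 block averaging for the image reduction unit 201 and nearest-neighbour repetition for the predicted image enlargement unit 204. The function names and the choice of nearest-neighbour interpolation are illustrative assumptions; the embodiment permits any reduction and any resolution-raising method.

```python
import numpy as np

def reduce_block(mb, factor=4):
    """Reduce a macroblock by averaging each factor x factor group of pixels."""
    h, w = mb.shape
    return mb.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def enlarge_block(small, factor=4):
    """Enlarge by nearest-neighbour repetition (the simplest interpolation)."""
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

mb = np.arange(256, dtype=np.float64).reshape(16, 16)  # a 16x16 macroblock image
small = reduce_block(mb)       # 4x4 reduced image
pred = enlarge_block(small)    # 16x16 image, same size as the macroblock
```

Note that the reduction ratio (1/4 per axis) is the reciprocal of the enlargement ratio, as the text requires.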
 Although the above description assumed reduction to 4 × 4 pixels, the image reduction unit 201 may instead reduce the macroblock to a single pixel and the predicted image enlargement unit 204 may restore the original size. In that case, for example, the pixel at the lower right of the block can serve as the reduced image, and the same effect is obtained by enlarging through interpolation using the locally decoded images of neighboring blocks. Alternatively, the reduced image may be encoded by coding its pixel values directly, without orthogonal transform or quantization, or by DC-predicting the pixel values and coding the difference.
 Next, the operation of the image encoding device 100 according to the present embodiment will be described.
 FIG. 2 is a flowchart showing the processing of the image encoding device 100.
 First, when a moving image is input to the image encoding device 100, the block division unit 101 divides it into macroblock units and outputs each macroblock image (step ST101). Next, the first intra prediction unit 200 performs intra-frame prediction on each macroblock image, outputting the first intra predicted image together with information about the predicted image as predicted image transform coefficients (step ST200). The macroblock image and the first intra predicted image are then input to the original image difference calculation unit 102, which calculates their difference and outputs it as a difference macroblock image (step ST102). The difference transform quantization unit 103 applies an orthogonal transform such as the DCT to the difference macroblock image, quantizes the resulting orthogonal transform coefficients, and outputs quantized coefficients (step ST103). Finally, the entropy encoding unit 104 entropy-encodes the quantized coefficients and the predicted image transform coefficients and outputs a bitstream (step ST104).
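Step ST101, the block division, can be sketched as follows for one frame; the function name is illustrative and, for simplicity, the frame dimensions are assumed to be multiples of the macroblock size.

```python
import numpy as np

def split_into_macroblocks(frame, mb=16):
    """ST101: divide one frame of image data into mb x mb macroblock images."""
    h, w = frame.shape
    return [frame[r:r + mb, c:c + mb]
            for r in range(0, h, mb) for c in range(0, w, mb)]

frame = np.zeros((32, 48))          # a small 32x48 luma frame (assumed sizes)
blocks = split_into_macroblocks(frame)
```

Each element of `blocks` would then pass through steps ST200, ST102, ST103, and ST104 in turn.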
 Next, step ST200 will be described in detail.
 First, the image reduction unit 201 reduces the input macroblock image and outputs a reduced image (step ST201). The predicted image transform quantization unit 202 applies an orthogonal transform such as the DCT to the reduced image, quantizes the resulting orthogonal transform coefficients, and outputs them as predicted image transform coefficients (step ST202). The predicted image inverse quantization inverse transform unit 203 then applies inverse quantization and the inverse orthogonal transform, the inverse of step ST202, to the predicted image transform coefficients and outputs a locally decoded reduced image (step ST203). When the locally decoded reduced image is input to the predicted image enlargement unit 204, its resolution is raised so that it is enlarged to the same size as the macroblock image, and it is output as the first intra predicted image (step ST204).
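The four steps ST201-ST204 above can be sketched end to end as follows, using an orthonormal DCT-II and uniform scalar quantization with an assumed step size `qstep`. All names, the quantizer, and the nearest-neighbour enlargement are illustrative choices; the embodiment does not fix a particular transform, quantizer, or interpolation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def intra_predict(mb, qstep=8.0, factor=4):
    """ST201-ST204: reduce, transform + quantize, dequantize + inverse transform, enlarge."""
    h, w = mb.shape
    small = mb.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))  # ST201
    d = dct_matrix(small.shape[0])
    coeff = np.round(d @ small @ d.T / qstep)   # ST202: predicted image transform coefficients
    recon = d.T @ (coeff * qstep) @ d           # ST203: locally decoded reduced image
    pred = np.repeat(np.repeat(recon, factor, axis=0), factor, axis=1)  # ST204
    return pred, coeff

mb = np.full((16, 16), 100.0)       # a flat macroblock for illustration
pred, coeff = intra_predict(mb)     # pred is the first intra predicted image
```

For this flat block only the DC coefficient survives, so the predicted image reproduces the block exactly; the bits needed are those of the 4 × 4 coefficient array, illustrating why the predicted image is cheap to represent.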
 As described above, when performing intra prediction, the image encoding device 100 according to the present embodiment reduces the image, transforms and quantizes it, inverse-quantizes and inverse-transforms it, and enlarges it again. This generates a predicted image of the macroblock that does not depend on the edge state of neighboring blocks, and because the predicted image is represented by the quantized transform coefficients of the reduced image, the number of bits required to reproduce it is kept small. Stable intra-frame coding with high coding efficiency can therefore be realized.
 The components of the image encoding device 100 may be implemented in hardware, in software, or as modules combining hardware and software. The orthogonal transform is not limited to any particular one: the DCT, DST (Discrete Sine Transform), DFT (Discrete Fourier Transform), wavelet transform, integer transform, or the like may be used.
 As described above, the image encoding device of Embodiment 1 includes the block division unit 101, which receives one frame of image data, divides it into a plurality of blocks, and outputs macroblock images; the first intra prediction unit (first intra-picture prediction unit) 200, which receives a macroblock image, generates the first intra predicted image, and outputs the first intra prediction data, which is the information needed to reconstruct the intra predicted image; the original image difference calculation unit 102, which computes the difference between the macroblock image and the first intra predicted image and outputs a difference image; the difference transform quantization unit 103, which frequency-transforms and quantizes the difference image and outputs transform coefficients; and the entropy encoding unit 104, which entropy-encodes the first intra prediction data and the transform coefficients and outputs a bitstream. The first intra prediction unit 200 includes the image reduction unit 201, which reduces the macroblock image and outputs a reduced image; the predicted image transform quantization unit 202, which frequency-transforms and quantizes the reduced image and outputs the first intra prediction data; the predicted image inverse quantization inverse transform unit 203, which inverse-quantizes and inverse-frequency-transforms the first intra prediction data and outputs a locally decoded predicted image; and the predicted image enlargement unit 204, which enlarges the locally decoded predicted image to generate the first intra predicted image. Prediction can therefore be performed stably and efficiently regardless of the texture of the input image, and coding efficiency can be improved.
Embodiment 2.
 FIG. 3 is a block diagram showing the configuration of the image encoding device 100a according to Embodiment 2.
 The image encoding device 100a uses a DC component prediction intra prediction unit 200a as the first intra-picture prediction unit. The rest of the configuration is the same as that of the image encoding device 100 of Embodiment 1, so only the differences from Embodiment 1 are described below.
 The DC component prediction intra prediction unit 200a differs from the first intra prediction unit 200 of Embodiment 1 only in that it additionally includes a DC component prediction unit 205 and a DC component storage buffer 206. The DC component prediction unit 205 outputs the DC (direct-current) component of the input predicted image transform coefficients to the DC component storage buffer 206, reads the DC components of adjacent blocks from the buffer, generates a predicted DC value, and subtracts it from the DC component of the predicted image transform coefficients to obtain a DC difference coefficient. The resulting DC difference coefficient, together with the other AC components, is output to the entropy encoding unit 104 as DC difference predicted image transform coefficients. The DC component storage buffer 206 is a buffer for storing the DC components of the predicted image transform coefficients.
 Next, the operation of the image encoding device 100a according to Embodiment 2 will be described.
 FIG. 4 is a flowchart showing the operation of the image encoding device 100a according to Embodiment 2. The only difference in operation from Embodiment 1 is the part involving the DC component prediction intra prediction unit 200a (step ST200a), so only step ST200a is described.
 The processing up to the inverse quantization and inverse transform of the predicted image transform coefficients in step ST203 and the image enlargement in step ST204 are the same as in Embodiment 1. When the predicted image transform coefficients output in step ST202 are input to the DC component prediction unit 205, the DC component is first extracted and stored in the DC component storage buffer 206 (step ST211). Next, the DC components of adjacent blocks are read from the DC component storage buffer 206, a predicted DC value is generated and subtracted from the DC component of the predicted image transform coefficients, yielding a DC difference coefficient. The DC difference coefficient and the other AC components are output to the entropy encoding unit 104 as DC difference predicted image transform coefficients (step ST212).
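One possible form of steps ST211-ST212 is sketched below. The mean-of-available-neighbours prediction rule, the dictionary buffer, and all names are assumptions for illustration; the text does not fix how the predicted DC value is formed from the adjacent blocks.

```python
def dc_residual(dc_current, dc_buffer, pos):
    """ST211-ST212 sketch: store the block's DC coefficient, predict it from the
    left and upper neighbours already in the buffer (mean of those available,
    one possible rule), and return the residual to be entropy coded."""
    r, c = pos
    neighbours = []
    if (r, c - 1) in dc_buffer:              # left neighbour
        neighbours.append(dc_buffer[(r, c - 1)])
    if (r - 1, c) in dc_buffer:              # upper neighbour
        neighbours.append(dc_buffer[(r - 1, c)])
    pred = sum(neighbours) / len(neighbours) if neighbours else 0.0
    dc_buffer[pos] = dc_current              # ST211: keep for later blocks
    return dc_current - pred                 # ST212: DC difference coefficient

buf = {}
r0 = dc_residual(400.0, buf, (0, 0))   # no neighbours yet: residual is the DC itself
r1 = dc_residual(410.0, buf, (0, 1))   # left neighbour is 400: small residual
```

Because neighbouring DC values are correlated, `r1` has a much smaller amplitude than the raw DC coefficient, which is exactly the property the embodiment exploits.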
 As described above, the image encoding device 100a according to Embodiment 2 predicts the DC component of the predicted image transform coefficients from the DC components of adjacent blocks and entropy-encodes the prediction residual. Because image data is generally highly correlated in the spatial direction, the amplitude of the DC prediction residual is usually smaller than that of the DC component itself. Entropy-encoding the DC prediction residual therefore realizes video coding with higher compression efficiency.
 As described above, in the image encoding device 100a of Embodiment 2, the DC component prediction intra prediction unit (first intra-picture prediction unit) 200a includes the DC component storage buffer 206, which stores the DC component, the direct-current component of the first intra prediction data, as a locally decoded DC component, and the DC component prediction unit 205, which calculates a predicted DC value from the locally decoded DC components stored in the DC component storage buffer 206, subtracts it from the DC component of the first intra prediction data, and outputs the result as a first intra prediction data difference value. The entropy encoding unit 104 entropy-encodes the first intra prediction data difference value instead of the first intra prediction data, so video coding with higher compression efficiency can be realized.
Embodiment 3.
 FIG. 5 is a block diagram showing the configuration of the image encoding device 100b according to Embodiment 3.
 The image encoding device 100b uses an average pixel prediction intra prediction unit 200b as the first intra-picture prediction unit, and further includes a difference inverse quantization inverse transform unit 105, a difference image addition unit 106, and a frame memory 107. The rest of the configuration is the same as that of Embodiment 1 shown in FIG. 1, so only the differences are described.
 The difference inverse quantization inverse transform unit 105 applies inverse quantization and the inverse orthogonal transform, the inverse of the processing in the difference transform quantization unit 103, to the quantized coefficients output from the difference transform quantization unit 103, and outputs a locally decoded difference image. The difference image addition unit 106 generates and outputs a locally decoded image by adding the locally decoded difference image and the first intra predicted image. The frame memory 107 is a buffer for storing locally decoded images.
 The average pixel prediction intra prediction unit 200b differs from the first intra prediction unit 200 of Embodiment 1 only in that it additionally includes an average pixel value prediction unit 207, an average pixel value subtraction unit 208, and an average pixel value addition unit 209. The average pixel value prediction unit 207 reads the locally decoded image data of adjacent blocks from the frame memory 107, calculates the average pixel value of those blocks, and outputs it as the average pixel value. As the average pixel value of the adjacent blocks, the average of the pixels adjacent above and to the left may be used, or that average may additionally include the bottom-right pixel of the block diagonally above and to the left of the block being processed. Alternatively, the average over the entire blocks adjacent above and to the left may be used, or the average of pixels within a predetermined number of pixels of the block being processed. In other words, any means of computing the average pixel value of the locally decoded image data near the block being processed yields the same effect.
 The average pixel value subtraction unit 208 subtracts the average pixel value output by the average pixel value prediction unit 207 from each pixel of the reduced image output by the image reduction unit 201, and outputs the result to the predicted image transform quantization unit 202 as a difference reduced image. The average pixel value addition unit 209 adds the average pixel value output by the average pixel value prediction unit 207 to the locally decoded difference reduced image output by the predicted image inverse quantization inverse transform unit 203, and outputs the result to the predicted image enlargement unit 204 as the locally decoded reduced image.
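The mean-prediction path (steps ST221-ST223 in the flow described later) can be sketched as follows. Taking a simple mean over a flat array of neighbouring decoded pixels is just one of the admissible averaging rules named above; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def mean_removed_reduced(mb, neighbour_pixels, factor=4):
    """ST221-ST222 sketch: predict the block mean from neighbouring decoded
    pixels and subtract it from every pixel of the reduced image."""
    h, w = mb.shape
    small = mb.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    mean_pred = float(np.mean(neighbour_pixels))   # ST221: average pixel value
    return small - mean_pred, mean_pred            # ST222: difference reduced image

mb = np.full((16, 16), 120.0)          # macroblock being processed
neigh = np.full(32, 118.0)             # decoded pixels above/left (assumed values)
diff_small, mean_pred = mean_removed_reduced(mb, neigh)
recon_small = diff_small + mean_pred   # ST223: add the mean back on the decode path
```

The residual `diff_small` is near zero for this nearly flat neighbourhood, which is why transforming and quantizing it costs fewer bits than coding the reduced image directly.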
 Next, the operation of the image encoding device 100b according to Embodiment 3 will be described.
 FIG. 6 is a flowchart showing the operation of the image encoding device 100b according to Embodiment 3.
 First, when a moving image is input to the image encoding device 100b, the block division unit 101 divides it into macroblock units and outputs each macroblock image (step ST101). The average pixel prediction intra prediction unit 200b performs intra-frame prediction on each macroblock image using the locally decoded image data of neighboring blocks stored in the frame memory 107, outputting the first intra predicted image together with information about the predicted image as predicted image transform coefficients (step ST200b).
 When the macroblock image and the first intra predicted image are input to the original image difference calculation unit 102, their difference is calculated and output as a difference macroblock image (step ST102). The difference transform quantization unit 103 applies an orthogonal transform such as the DCT to the difference macroblock image, quantizes the resulting orthogonal transform coefficients, and outputs quantized coefficients (step ST103). The entropy encoding unit 104 entropy-encodes the quantized coefficients and the predicted image transform coefficients and outputs a bitstream (step ST104). The difference inverse quantization inverse transform unit 105 applies inverse quantization and the inverse orthogonal transform, the inverse of the difference transform quantization unit 103, to the quantized coefficients and outputs a locally decoded difference image (step ST111). The difference image addition unit 106 adds this locally decoded difference image and the first intra predicted image generated in step ST200b, and outputs a locally decoded image (step ST112). The locally decoded image is stored in the frame memory 107 (step ST113).
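Steps ST102-ST103 and the local decoding loop ST111-ST112 can be sketched in reduced form as follows; for brevity the orthogonal transform is omitted and only uniform scalar quantization with an assumed step size `qstep` is shown, so this illustrates the structure of the loop rather than the full ST103/ST111 processing.

```python
import numpy as np

qstep = 8.0
mb = np.linspace(0.0, 255.0, 256).reshape(16, 16)   # macroblock image
pred = np.full((16, 16), 128.0)                     # first intra predicted image

diff = mb - pred                 # ST102: difference macroblock image
coeff = np.round(diff / qstep)   # ST103 (quantization only; transform omitted)
local_diff = coeff * qstep       # ST111: locally decoded difference image
local_decoded = local_diff + pred  # ST112: locally decoded image, kept in frame memory
```

The reconstruction error is bounded by half the quantization step, and `local_decoded` is exactly what the frame memory 107 supplies to later blocks, keeping encoder and decoder predictions in step.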
 Next, step ST200b will be described in detail. Steps ST201 to ST204 within step ST200b are the same processing as in Embodiments 1 and 2.
 First, the image reduction unit 201 reduces the input macroblock image and outputs a reduced image (step ST201). Meanwhile, the average pixel value prediction unit 207 reads the locally decoded image data of adjacent blocks from the frame memory 107, calculates the average pixel value of those blocks, and outputs it as the average pixel value (step ST221). The average pixel value subtraction unit 208 subtracts the average pixel value from each pixel of the reduced image and outputs the result to the predicted image transform quantization unit 202 as a difference reduced image (step ST222). The predicted image transform quantization unit 202 applies an orthogonal transform such as the DCT to the difference reduced image, quantizes the resulting orthogonal transform coefficients, and outputs them as predicted image transform coefficients (step ST202). The predicted image inverse quantization inverse transform unit 203 applies inverse quantization and the inverse orthogonal transform, the inverse of step ST202, to the predicted image transform coefficients and outputs a locally decoded difference reduced image (step ST203). The average pixel value addition unit 209 adds the average pixel value to the locally decoded difference reduced image and outputs a locally decoded reduced image (step ST223). When the locally decoded reduced image is input to the predicted image enlargement unit 204, its resolution is raised so that it is enlarged to the same size as the macroblock image, and it is output as the first intra predicted image (step ST204).
 As described above, the image encoding device 100b according to Embodiment 3 predicts each pixel value of the reduced image from the average pixel value of adjacent blocks and transforms and quantizes the prediction residual. Because image data is generally highly correlated in the spatial direction, the prediction residual signal has less energy than the signal before prediction, so the entropy of the quantized transform coefficients is also smaller. Entropy-encoding the signal obtained in this way therefore realizes video coding with higher compression efficiency. Although in Embodiment 3 the average pixel value is subtracted from each pixel of the reduced image, the same effect is obtained by reversing the order of processing and generating the reduced image after subtracting the average pixel value from the macroblock image. In that case, instead of adding the average pixel value to the locally decoded reduced image, suitable encoding is achieved by adding the average pixel value after the predicted image enlargement unit 204 has enlarged the locally decoded reduced image.
As described above, according to the image encoding device 100b of Embodiment 3, the device includes a difference inverse quantization inverse transform unit 105 that receives the transform coefficients, performs inverse quantization and inverse frequency transform, and outputs a difference locally decoded image; a difference image addition unit 106 that adds the first intra-screen predicted image and the difference locally decoded image and outputs a locally decoded image; and a frame memory 107 that stores the locally decoded image. The average pixel prediction intra prediction unit (first intra-screen prediction unit) 200b has an average pixel value prediction unit 207 that predicts the average of the pixel values of the processing target area from the pixel values of its surrounding area using the locally decoded image stored in the frame memory 107 and outputs it as the predicted average pixel value; an average pixel value subtraction unit 208 that subtracts the predicted average pixel value from each pixel value of the reduced image and outputs an average-value-removed reduced image; and an average pixel value addition unit 209 that adds the output of the predicted image inverse quantization inverse transform unit 203 and the predicted average pixel value and outputs a locally decoded predicted image. Furthermore, the predicted image transform quantization unit 202 frequency-transforms and quantizes the average-value-removed reduced image and outputs the first intra-screen prediction data, while the predicted image inverse quantization inverse transform unit 203 inverse-quantizes and inverse-frequency-transforms the first intra-screen prediction data and outputs it as an average-value-removed locally decoded predicted image. Video encoding with higher compression efficiency can thus be realized.
Embodiment 4.
FIG. 7 is a block diagram illustrating a configuration of an image encoding device 100c according to the fourth embodiment.
The image encoding device 100c, in addition to the average pixel prediction intra prediction unit 200b of the image encoding device 100b of Embodiment 3 described above, can also select a second intra prediction unit (second intra-screen prediction unit) 108 as the intra prediction means. Since the configuration of the average pixel prediction intra prediction unit 200b is the same as in Embodiment 3, only the parts that differ from it are described below.
The second intra prediction unit 108 generates and outputs a second intra predicted image by extrapolating the pixel values of already-encoded neighbouring blocks, as in conventional H.264 intra prediction, for example, and outputs information such as the prediction direction to the entropy encoding unit 104 as an intra prediction mode. The intra prediction changeover switch (predicted image selection unit) 109 is configured to compare the input first and second intra predicted images with the macroblock image, select the one with the higher prediction efficiency, and output it as the intra predicted image, while outputting a flag indicating which predicted image was selected to the entropy encoding unit 104 as intra prediction type information.
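The neighbouring-pixel extrapolation performed by the second intra prediction unit 108 can be illustrated with a few representative H.264-style modes. This sketch is an assumption for illustration: only the vertical, horizontal, and DC modes are shown, and the function name and 16x16 block size are invented here.

```python
import numpy as np

def h264_style_intra(top, left, mode, size=16):
    """Sketch of unit 108: extrapolate the reconstructed pixels of
    neighbouring blocks, as in H.264 intra prediction.

    top  -- row of reconstructed pixels just above the block
    left -- column of reconstructed pixels just left of the block
    """
    if mode == "vertical":      # copy the row above downwards
        return np.tile(top, (size, 1))
    if mode == "horizontal":    # copy the column to the left rightwards
        return np.tile(left.reshape(-1, 1), (1, size))
    if mode == "dc":            # flat prediction from the mean of neighbours
        return np.full((size, size), (top.mean() + left.mean()) / 2.0)
    raise ValueError("unknown mode: %s" % mode)
```

The chosen mode would be signalled to the entropy encoding unit 104 as the intra prediction mode.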
Next, the operation of the image coding apparatus 100c according to the fourth embodiment will be described.
FIG. 8 is a flowchart showing the operation of the image coding apparatus 100c according to the fourth embodiment. Since the operations of step ST101, step ST200b, steps ST102 to ST104, and steps ST111 to ST113 are the same as those in the third embodiment, the description of these processes is omitted.
When the macroblock image output from the block dividing unit 101 is input to the second intra prediction unit 108, the second intra prediction unit 108 performs intra prediction as in conventional H.264, for example, outputs a second intra predicted image, and outputs information such as the prediction direction to the entropy encoding unit 104 as an intra prediction mode (step ST121). When the first and second intra predicted images have been generated, the intra prediction changeover switch 109 compares each input predicted image with the macroblock image, selects the one with the higher prediction efficiency, and outputs it as the intra predicted image; a flag indicating which predicted image was selected is output to the entropy encoding unit 104 as intra prediction type information (step ST122). As the intra prediction type information, the first intra prediction unit may be defined as one of a plurality of intra prediction types as shown in FIG. 21, or a flag distinguishing the first intra prediction unit from the second intra prediction unit may be defined as shown in FIG. 22.
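The selection in step ST122 can be sketched as follows. Using the sum of absolute differences (SAD) as the measure of prediction efficiency is an assumption; the patent only requires that the predicted image with the higher prediction efficiency be chosen, and the returned index stands in for the intra prediction type flag.

```python
import numpy as np

def select_intra_prediction(mb, candidates):
    """Intra prediction changeover switch 109 (sketch): pick the candidate
    predicted image with the smallest SAD against the macroblock image.
    Returns the chosen prediction and its index (the selection flag)."""
    sads = [np.abs(mb - p).sum() for p in candidates]
    flag = int(np.argmin(sads))
    return candidates[flag], flag
```

The flag would be entropy-encoded as intra prediction type information, as described above.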
As described above, the image encoding device 100c according to Embodiment 4 can select conventional intra prediction by extrapolation of neighbouring pixels in addition to average pixel prediction intra prediction. Video containing edges in a specific direction can therefore be predicted more effectively by neighbouring-pixel extrapolation, which has good prediction efficiency for such content, while video containing other kinds of edges can be predicted effectively by average pixel prediction intra prediction, so video coding with higher compression efficiency can be realized.
As described above, according to the image encoding device 100c of Embodiment 4, the device includes a second intra prediction unit (second intra-screen prediction unit) 108 that predicts the pixel values of the processing target area from the pixel values of its surrounding area using the locally decoded image stored in the frame memory 107, outputs a second intra-screen predicted image, and outputs the information needed to reconstruct the predicted image as second intra-screen prediction data; and an intra prediction changeover switch (predicted image selection unit) 109 that compares the first and second intra-screen predicted images, selects the predicted image with the higher prediction efficiency, outputs it as the intra-screen predicted image, and outputs prediction type information indicating which predicted image was selected. The entropy encoding unit 104 entropy-encodes the second intra-screen prediction data and the prediction type information, so video encoding with higher compression efficiency can be realized.
Embodiment 5.
FIG. 9 is a block diagram showing a configuration of an image encoding device 100d according to the fifth embodiment.
The image encoding device 100d of Embodiment 5 includes a switching intra prediction unit 200c as the first intra-screen prediction unit. In addition to the configuration of the average pixel prediction intra prediction unit 200b of Embodiment 4, the switching intra prediction unit 200c includes the DC component prediction unit 205 and DC component storage buffer 206 of Embodiment 2. The other configurations are the same as in Embodiment 4, so corresponding parts are given the same reference numerals and their description is omitted.
Next, the operation of the image encoding device 100d according to the fifth embodiment will be described.
FIG. 10 is a flowchart showing the operation of the image encoding device 100d according to the fifth embodiment. Since only the process of step ST200c is different from the fourth embodiment, only the different process will be described.
When the predicted image transform coefficients output by the predicted image transform and quantization process of step ST202 are input to the DC component prediction unit 205, the DC component alone is first extracted and stored in the DC component storage buffer 206 (step ST211). Next, the DC components of the adjacent blocks are read from the DC component storage buffer 206, a predicted value of the DC component is generated and subtracted from the DC component of the predicted image transform coefficients, and a DC difference coefficient is obtained. The resulting DC difference coefficient and the other AC components are output to the entropy encoding unit 104 as DC difference predicted image transform coefficients (step ST212).
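Steps ST211 and ST212 can be sketched as follows. Predicting the DC value as the mean of the left and top neighbours, and addressing the buffer by block coordinates, are assumptions made for this sketch; the patent only states that a predicted value is generated from the DC components of adjacent blocks.

```python
import numpy as np

def dc_predict_encode(coeff, dc_buffer, left_key, top_key):
    """DC component prediction unit 205 (sketch).

    coeff     -- quantized predicted image transform coefficients (2-D array)
    dc_buffer -- dict mapping block coordinates to stored DC components
    Returns the DC-difference coefficient block (ST212) and the DC value
    to store in the DC component storage buffer 206 (ST211)."""
    dc = coeff[0, 0]
    neighbours = [dc_buffer[k] for k in (left_key, top_key) if k in dc_buffer]
    pred = sum(neighbours) / len(neighbours) if neighbours else 0.0
    out = coeff.copy()
    out[0, 0] = dc - pred          # DC difference coefficient
    return out, dc                 # AC components pass through unchanged
```

The returned block (DC difference plus unchanged AC components) is what would be output to the entropy encoding unit 104.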
As described above, in the image encoding device 100d according to Embodiment 5, average pixel value prediction is performed when the second intra prediction is selected, and DC component prediction is performed when the switching intra prediction is selected; by keeping the entropy of the generated coefficients small in this way, video coding with higher compression efficiency can be realized.
In Embodiment 5, the DC component prediction unit 205 and the DC component storage buffer 206 are applied to the configuration of Embodiment 4, but they may also be applied to the configuration of Embodiment 3.
As described above, according to the image encoding device 100d of Embodiment 5, in addition to the configuration of Embodiment 4, the switching intra prediction unit (first intra-screen prediction unit) 200c includes a DC component storage buffer 206 that stores the DC component, the direct-current component of the first intra-screen prediction data, as a locally decoded DC component, and a DC component prediction unit 205 that calculates a predicted value of the DC component based on the locally decoded DC components stored in the DC component storage buffer 206, subtracts the predicted value from the DC component of the first intra-screen prediction data, and outputs the result as a first intra-screen prediction data difference value. The entropy encoding unit 104 entropy-encodes the first intra-screen prediction data difference value instead of the first intra-screen prediction data, so video encoding with higher compression efficiency can be realized.
Embodiment 6.
Embodiments 6 to 10 relate to an image decoding apparatus corresponding to the image encoding apparatuses of Embodiments 1 to 5.
FIG. 11 is a block diagram illustrating a configuration of the image decoding apparatus 300 according to the sixth embodiment.
The image decoding apparatus 300 comprises an entropy decoding unit 301, a difference inverse quantization inverse transform unit 302, a first predicted image generation unit (first intra-screen predicted image generation unit) 400, and a difference image addition unit 303.
The entropy decoding unit 301 performs entropy decoding on the input bitstream and outputs the resulting predicted image transform coefficient and quantization coefficient. The difference inverse quantization inverse transform unit 302 performs inverse quantization and inverse orthogonal transform, which are inverse processes of the difference transform quantization unit 103 in the image coding apparatus, on the quantized coefficient, and outputs a locally decoded difference image. The first predicted image generation unit 400 generates a predicted image based on the input predicted image conversion coefficient and outputs it as a first intra predicted image. The difference image addition unit 303 adds the locally decoded difference image and the first intra-predicted image and outputs the result as a decoded image.
More specifically, the first predicted image generation unit 400 comprises a predicted image inverse quantization inverse transform unit 401 and a predicted image enlargement unit 402. The predicted image inverse quantization inverse transform unit 401 generates and outputs a locally decoded reduced image by performing inverse quantization and inverse orthogonal transform, the inverse processes of the predicted image transform quantization unit 202, on the input predicted image transform coefficients. The predicted image enlargement unit 402 performs resolution-enhancing processing on the input locally decoded reduced image, enlarges it to the same size as the macroblock image, and outputs it as the first intra predicted image.
Next, the operation of the image decoding apparatus 300 according to Embodiment 6 will be described.
FIG. 12 is a flowchart showing processing of the image decoding apparatus 300 according to Embodiment 6.
First, when a bit stream is input to the image decoding apparatus 300, the bit stream is entropy-decoded by the entropy decoding unit 301, and as a result, a predicted image transform coefficient and a quantization coefficient are output (step ST301). The quantized coefficient is subjected to inverse quantization and inverse transform processing in the difference inverse quantization inverse transform unit 302, and a local decoded difference image is output (step ST302). On the other hand, when the predicted image conversion coefficient is input to the first predicted image generation unit 400, a predicted image is generated and output as the first intra predicted image (step ST400). The locally decoded differential image and the first intra predicted image obtained in this way are added by the differential image adding unit 303 and output as a decoded image (step ST303).
In more detail, in step ST400 the input predicted image transform coefficients are subjected to inverse quantization and inverse transform processing in the predicted image inverse quantization inverse transform unit 401, and a locally decoded reduced image is output as a result (step ST401). The locally decoded reduced image then undergoes resolution-enhancing processing in the predicted image enlargement unit 402 and is output as the first intra predicted image (step ST402).
As described above, since the image decoding apparatus 300 according to Embodiment 6 is configured to perform the inverse processing of the image encoding apparatus 100 according to Embodiment 1, it can suitably decode the bitstream generated by the image encoding apparatus 100 of Embodiment 1 and obtain a decoded image. Moreover, since the locally decoded image is not used for predicted image generation, a reduced image can be generated without generating the decoded image, so thumbnail images and the like can be produced with a small amount of computation.
As described above, according to the image decoding apparatus 300 of Embodiment 6, the apparatus includes an entropy decoding unit 301 that entropy-decodes the bitstream and outputs transform coefficients and first intra-screen prediction data; a difference inverse quantization inverse transform unit 302 that inverse-quantizes and inverse-frequency-transforms the transform coefficients and outputs a difference decoded image; a first predicted image generation unit (first intra-screen predicted image generation unit) 400 that reconstructs and outputs the first intra-screen predicted image based on the first intra-screen prediction data; and a difference image addition unit 303 that adds the difference decoded image and the first intra-screen predicted image and outputs a decoded image. The first intra-screen predicted image generation unit 400 includes a predicted image inverse quantization inverse transform unit 401 that inverse-quantizes and inverse-frequency-transforms the first intra-screen prediction data and outputs a decoded predicted image, and a predicted image enlargement unit 402 that enlarges the decoded predicted image to generate the first intra-screen predicted image. The encoded image bitstream can therefore be decoded efficiently, and thumbnail images and the like can be generated with a small amount of computation.
Embodiment 7.
FIG. 13 is a block diagram illustrating a configuration of an image decoding device 300a according to the seventh embodiment.
The image decoding apparatus 300a includes a DC component prediction intra predicted image generation unit 400a as the first intra-screen predicted image generation unit. This DC component prediction intra predicted image generation unit 400a replaces the first predicted image generation unit 400 of Embodiment 6 and includes, in addition to the predicted image inverse quantization inverse transform unit 401 and the predicted image enlargement unit 402, a DC component prediction unit 403 and a DC component storage buffer 404. The DC component prediction unit 403 reads the stored DC components of the adjacent blocks from the DC component storage buffer 404, generates a predicted value of the DC component, and adds it to the DC component of the predicted image transform coefficients to obtain the DC coefficient. The resulting DC coefficient and the other AC components are output to the predicted image inverse quantization inverse transform unit 401 as predicted image transform coefficients, and the DC coefficient is stored in the DC component storage buffer 404. The DC component storage buffer 404 is a buffer for storing the DC components of the predicted image transform coefficients.
Next, processing of the image decoding device 300a according to Embodiment 7 will be described.
FIG. 14 is a flowchart showing the operation of the image decoding apparatus 300a according to the seventh embodiment.
Since the difference between the seventh embodiment and the sixth embodiment is only the process related to the first predicted image generation unit 400 (step ST400a), only step ST400a will be described.
When the predicted image transform coefficients output in step ST301 are input to the DC component prediction unit 403, the DC components of the adjacent blocks are first read from the DC component storage buffer 404, a predicted value of the DC component is generated, and the DC coefficient is obtained by adding the predicted value to the DC component of the input coefficients. The resulting DC coefficient and the other AC components are output to the predicted image inverse quantization inverse transform unit 401 as predicted image transform coefficients (step ST411). The obtained DC coefficient is also stored in the DC component storage buffer 404 (step ST412). The subsequent steps ST401 and ST402 are the same as in Embodiment 6.
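Steps ST411 and ST412 mirror the encoder-side DC prediction of step ST212. The sketch below assumes that the predicted DC is the mean of the left and top neighbour DCs, stored in a dictionary keyed by block coordinates; both choices are illustrative assumptions rather than requirements of the patent.

```python
import numpy as np

def dc_predict_decode(diff_coeff, dc_buffer, cur_key, left_key, top_key):
    """DC component prediction unit 403 (sketch): add the predicted DC
    back to the decoded DC difference coefficient (ST411) and store the
    reconstructed DC for use by later blocks (ST412)."""
    neighbours = [dc_buffer[k] for k in (left_key, top_key) if k in dc_buffer]
    pred = sum(neighbours) / len(neighbours) if neighbours else 0.0
    coeff = diff_coeff.copy()
    coeff[0, 0] = diff_coeff[0, 0] + pred   # ST411: reconstructed DC coefficient
    dc_buffer[cur_key] = coeff[0, 0]        # ST412: store for later blocks
    return coeff
```

The returned coefficient block is what would be passed on to the predicted image inverse quantization inverse transform unit 401.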
As described above, since the image decoding apparatus 300a according to Embodiment 7 is configured to perform the inverse processing of the image encoding apparatus 100a according to Embodiment 2, it can suitably decode the bitstream generated by the image encoding apparatus 100a of Embodiment 2 and obtain a decoded image.
As described above, according to the image decoding apparatus 300a of Embodiment 7, the DC component prediction intra predicted image generation unit (first intra-screen predicted image generation unit) 400a includes a DC component prediction unit 403 that calculates a predicted value of the DC component based on the decoded DC components, adds the predicted value to the DC component of the first intra-screen prediction data difference value, and outputs the result as the first intra-screen prediction data, and a DC component storage buffer 404 that stores the DC component, the direct-current component of the first intra-screen prediction data, as a decoded DC component. The entropy decoding unit 301 outputs the first intra-screen prediction data difference value instead of the first intra-screen prediction data as the entropy decoding result, so the encoded image bitstream can be decoded efficiently and a decoded image can be obtained.
Embodiment 8.
FIG. 15 is a block diagram illustrating a configuration of an image decoding device 300b according to the eighth embodiment.
The image decoding apparatus 300b is configured using an average pixel prediction intra-prediction image generation unit 400b as a first intra-screen prediction image generation unit, and further includes a frame memory 304. Other configurations are the same as those in the sixth and seventh embodiments.
The frame memory 304 is a buffer for storing decoded images. The average pixel prediction intra predicted image generation unit 400b includes, in addition to the configuration of the first predicted image generation unit 400 of Embodiment 6, an average pixel value prediction unit 405 and an average pixel value addition unit 406. The average pixel value prediction unit 405 reads the decoded image data of the adjacent blocks from the frame memory 304, calculates the average pixel value of the adjacent blocks, and outputs it as the average pixel value. The average pixel value addition unit 406 adds the average pixel value output from the average pixel value prediction unit 405 to the locally decoded reduced image output from the predicted image inverse quantization inverse transform unit 401, and outputs the result to the predicted image enlargement unit 402 as the locally decoded reduced image. Naturally, when the encoding device is configured to add the average pixel value after the predicted image enlargement unit enlarges the locally decoded reduced image, the decoding device can likewise decode suitably by performing the decoding processing in the same order.
Next, processing of the image decoding device 300b according to Embodiment 8 will be described.
FIG. 16 is a flowchart showing processing of the image decoding device 300b according to Embodiment 8.
First, when a bit stream is input to the image decoding device 300b, the bit stream is entropy-decoded by the entropy decoding unit 301, and as a result, a predicted image transform coefficient and a quantization coefficient are output (step ST301). The quantized coefficient is subjected to inverse quantization and inverse transform processing in the difference inverse quantization inverse transform unit 302, and a local decoded difference image is output (step ST302). On the other hand, when the predicted image transform coefficient is input to the average pixel predicted intra predicted image generation unit 400b, a predicted image is generated based on the decoded image stored in the frame memory 304, and is output as the first intra predicted image. (Step ST400b). The locally decoded differential image and the first intra predicted image obtained in this way are added by the differential image adding unit 303 and output as a decoded image (step ST303). The decoded image is stored in the frame memory 304 and used for intra prediction image generation (step ST311).
In more detail, in step ST400b the input predicted image transform coefficients are subjected to inverse quantization and inverse transform processing in the predicted image inverse quantization inverse transform unit 401, and a locally decoded reduced image is output as a result (step ST401). Meanwhile, the pixel value data of the already-decoded neighbouring blocks is input to the average pixel value prediction unit 405, and their average is calculated and output as the average pixel value (step ST421). The average pixel value and the locally decoded reduced image are added by the average pixel value addition unit 406, and a locally decoded reduced image is output (step ST422). The locally decoded reduced image then undergoes resolution-enhancing processing in the predicted image enlargement unit 402 and is output as the first intra predicted image (step ST402).
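The decoder-side steps ST401, ST421–ST422, and ST402 can be sketched as follows. The 8x8 orthonormal DCT, the uniform quantization step `q_step`, and nearest-neighbour enlargement are illustrative assumptions; the patent leaves the concrete transform, quantizer, and interpolation method open.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def decode_avg_pixel_prediction(coeff, neighbor_mean, q_step=8.0, factor=2):
    """Average pixel prediction intra predicted image generation (sketch):
    ST401 inverse quantize / inverse DCT, ST422 add the average pixel value
    predicted from decoded neighbouring blocks (ST421), ST402 enlarge to
    macroblock size to obtain the first intra predicted image."""
    n = coeff.shape[0]
    D = dct_matrix(n)
    diff_reduced = D.T @ (coeff * q_step) @ D          # ST401
    reduced = diff_reduced + neighbor_mean             # ST422
    return np.repeat(np.repeat(reduced, factor, axis=0), factor, axis=1)  # ST402
```

With all-zero coefficients the prediction is simply the flat image built from the neighbour average, which is the degenerate case of this pipeline.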
As described above, since the image decoding apparatus 300b according to Embodiment 8 is configured to perform the inverse processing of the image encoding apparatus 100b according to Embodiment 3, it can suitably decode the bitstream generated by the image encoding apparatus 100b of Embodiment 3 and obtain a decoded image.
As described above, according to the image decoding apparatus 300b of Embodiment 8, the apparatus includes a frame memory 304 that stores decoded images, and the average pixel prediction intra predicted image generation unit (first intra-screen predicted image generation unit) 400b has an average pixel value prediction unit 405 that predicts the average of the pixel values of the processing target area from the pixel values of its surrounding area using the decoded image stored in the frame memory 304 and outputs it as the predicted average pixel value, and an average pixel value addition unit 406 that adds the output of the predicted image inverse quantization inverse transform unit 401 and the predicted average pixel value and outputs a decoded predicted image. The predicted image inverse quantization inverse transform unit 401 inverse-quantizes and inverse-frequency-transforms the first intra-screen prediction data and outputs an average-value-removed decoded predicted image, so the encoded image bitstream can be decoded efficiently and a decoded image can be obtained.
Embodiment 9.
 FIG. 17 is a block diagram showing the configuration of an image decoding device 300c according to Embodiment 9.
 In addition to the average pixel prediction intra predicted image generation unit 400b of the image decoding device 300b of Embodiment 8 described above, the image decoding device 300c provides a second predicted image generation unit (second intra predicted image generation unit) 305 that can also be selected as a means of generating the intra predicted image. Since the configuration of the average pixel prediction intra predicted image generation unit 400b is the same as in Embodiment 8, only the differing configuration is described below.
 The second predicted image generation unit 305 obtains information such as the prediction direction from, for example, the intra prediction mode entropy-decoded by the entropy decoding unit 301, and generates and outputs a second intra predicted image by extrapolating the pixel values of already encoded neighboring blocks, as in H.264 intra prediction. The intra prediction changeover switch (predicted image selection unit) 306 selects between the input first and second intra predicted images on the basis of the intra prediction type information, a flag entropy-decoded by the entropy decoding unit 301 that indicates which predicted image was chosen, and outputs the result as the intra predicted image.
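The role of the changeover switch 306 can be illustrated with a small sketch. The flag values and the generator interfaces are assumptions for illustration; the document only states that the entropy-decoded intra prediction type information selects between the two predicted images.

```python
import numpy as np

FIRST_INTRA = 0   # assumed flag encoding; the bitstream syntax is not given here
SECOND_INTRA = 1

def intra_prediction_switch(intra_pred_type, gen_first, gen_second):
    # Route on the entropy-decoded intra prediction type information (ST321).
    if intra_pred_type == SECOND_INTRA:
        # Directional (H.264-style) prediction from unit 305 (ST322).
        return gen_second()
    # Average-pixel-prediction intra image from unit 400b.
    return gen_first()
```

Only the selected generator is invoked, so the decoder never builds the predicted image it does not need.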
 Next, the operation of the image decoding device 300c of Embodiment 9 is described.
 FIG. 18 is a flowchart showing the operation of the image decoding device 300c of Embodiment 9.
 The intra prediction type information, a flag indicating which predicted image was selected, is obtained as entropy-decoded information and input to the intra prediction changeover switch 306. If the intra prediction type information indicates that the second intra predicted image was selected, second intra predicted image generation is performed; otherwise, the switching process routes to average pixel prediction intra predicted image generation (step ST321).
 Based on the intra prediction mode entropy-decoded by the entropy decoding unit 301, the second predicted image generation unit 305 obtains information such as the prediction direction, and generates and outputs the second intra predicted image by extrapolating the pixel values of already encoded neighboring blocks, as in H.264 intra prediction (step ST322). The remaining operation is the same as in Embodiment 8 and its description is therefore omitted here.
 As described above, the image decoding device 300c according to Embodiment 9 is configured to perform the inverse of the processing of the image encoding device 100c according to Embodiment 4, and can therefore suitably decode the bitstream generated by the image encoding device 100c of Embodiment 4 to obtain a decoded image.
 As described above, according to the image decoding device 300c of Embodiment 9, the entropy decoding unit 301 entropy-decodes the second intra prediction data and the prediction type information. The device further includes a second predicted image generation unit (second intra predicted image generation unit) 305 that, based on the second intra prediction data, uses the decoded image stored in the frame memory 304 to predict the pixel values of the region to be processed from the pixel values of its surrounding region and outputs a second intra predicted image, and an intra prediction changeover switch (predicted image selection unit) 306 that selects and outputs either the first or the second intra predicted image on the basis of the prediction type information. The encoded image bitstream can therefore be decoded efficiently to obtain a decoded image.
Embodiment 10.
 FIG. 19 is a block diagram showing the configuration of an image decoding device 300d according to Embodiment 10.
 The image decoding device 300d of Embodiment 10 includes, as the first intra predicted image generation unit, a switching intra predicted image generation unit 400c. In addition to the configuration of the average pixel prediction intra predicted image generation unit 400b of Embodiment 9, this switching intra predicted image generation unit 400c includes the DC component prediction unit 403 and the DC component storage buffer 404 of Embodiment 7. Since the remaining configuration is the same as in Embodiment 9, corresponding parts are given the same reference numerals and their description is omitted.
 Next, the operation of the image decoding device 300d of Embodiment 10 is described.
 FIG. 20 is a flowchart showing the operation of the image decoding device 300d of Embodiment 10. Since only the processing of step ST400c differs from Embodiment 9, only the differing processing is described.
 When the predicted image transform coefficients output in step ST301 are input to the DC component prediction unit 403, the DC component of the adjacent block is first read from the DC component storage buffer 404, a predicted value of the DC component is generated and added to the DC component of the predicted image transform coefficients, and the DC coefficient is obtained. The obtained DC coefficient and the remaining AC components are output to the predicted image inverse quantization and inverse transform unit 401 as the predicted image transform coefficients (step ST411). The obtained DC coefficient is also stored in the DC component storage buffer 404 (step ST412). The remaining processing is the same as in Embodiment 9.
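The DC prediction loop of steps ST411 and ST412 can be sketched as follows. The choice of neighbour used for prediction and the default value for blocks with no decoded neighbour are not specified in this passage, so a single adjacent block and a default of zero are assumed here.

```python
class DCComponentPredictor:
    """Reconstructs DC coefficients from transmitted DC differences and
    keeps the decoded values for use by subsequent blocks."""

    def __init__(self):
        # DC component storage buffer, keyed by block position.
        self.buffer = {}

    def decode_dc(self, pos, neighbor_pos, dc_diff):
        # ST411: read the adjacent block's decoded DC from the buffer and
        # add it, as the predicted value, to the transmitted difference.
        predicted = self.buffer.get(neighbor_pos, 0.0)
        dc = dc_diff + predicted
        # ST412: store the obtained DC coefficient for later blocks.
        self.buffer[pos] = dc
        return dc
```

The reconstructed DC coefficient then rejoins the AC coefficients before inverse quantization and inverse transform in unit 401.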
 As described above, the image decoding device 300d according to Embodiment 10 is configured to perform the inverse of the processing of the image encoding device 100d according to Embodiment 5, and can therefore suitably decode the bitstream generated by the image encoding device 100d of Embodiment 5 to obtain a decoded image.
 Although in Embodiment 10 the DC component prediction unit 403 and the DC component storage buffer 404 are applied to the configuration of Embodiment 9, they may instead be applied to the configuration of Embodiment 8.
 As described above, according to the image decoding device of Embodiment 10, in addition to the configuration of Embodiment 9, the switching intra predicted image generation unit (first intra predicted image generation unit) 400c includes a DC component prediction unit 403 that calculates a predicted value of the DC component based on the decoded DC component, adds the predicted value to the DC component of the first intra prediction data difference value, and outputs the result as the first intra prediction data, and a DC component storage buffer 404 that stores, as the decoded DC component, the DC component that is the direct-current component of the first intra prediction data. Since the entropy decoding unit 301 outputs the first intra prediction data difference value as the entropy decoding result in place of the first intra prediction data, the encoded image bitstream can be decoded efficiently to obtain a decoded image.
 As described above, the image encoding device and image decoding device according to the present invention perform intra prediction when an image is compression-encoded and transmitted, and decode the compressed data; they are suitable, for example, for encoding and decoding with the H.264 coding scheme.
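The encoder-side first intra prediction pipeline described above (image reduction, transform and quantization of the reduced image, local decoding, and enlargement, as recited in claim 1) can be sketched as follows. The reduction filter, transform, and quantizer are not fixed by the claim, so box averaging, an orthonormal DCT, and a scalar quantization step are assumed for illustration.

```python
import numpy as np

def _dct_matrix(n):
    # Orthonormal DCT-II basis; rows are basis vectors.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def first_intra_predict(block, scale, qstep):
    n = block.shape[0] // scale
    # Image reduction unit: box averaging down to an n x n reduced image.
    reduced = block.reshape(n, scale, n, scale).mean(axis=(1, 3))
    c = _dct_matrix(n)
    # Predicted image transform/quantization unit -> first intra prediction data.
    pred_data = np.round((c @ reduced @ c.T) / qstep)
    # Predicted image inverse quantization/inverse transform unit
    # -> locally decoded predicted image.
    local_decoded = c.T @ (pred_data * qstep) @ c
    # Predicted image enlargement unit -> first intra predicted image.
    pred_image = np.repeat(np.repeat(local_decoded, scale, axis=0),
                           scale, axis=1)
    return pred_data, pred_image
```

`pred_data` is what the entropy encoding unit would code, and `pred_image` is what the difference image generation unit would subtract from the macroblock.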

Claims (8)

  1.  An image encoding device comprising:
     a block division unit that receives one frame of image data, divides it into a plurality of blocks, and outputs macroblock images;
     a first intra prediction unit that receives a macroblock image, generates a first intra predicted image, and outputs first intra prediction data, which is information for reconstructing the intra predicted image;
     a difference image generation unit that computes the difference between the macroblock image and the first intra predicted image and outputs a difference image;
     a difference transform and quantization unit that frequency-transforms and quantizes the difference image and outputs transform coefficients; and
     an entropy encoding unit that entropy-encodes the first intra prediction data and the transform coefficients and outputs a bitstream,
     wherein the first intra prediction unit comprises:
     an image reduction unit that reduces the macroblock image and outputs a reduced image;
     a predicted image transform and quantization unit that frequency-transforms and quantizes the reduced image and outputs the first intra prediction data;
     a predicted image inverse quantization and inverse transform unit that inversely quantizes and inverse-frequency-transforms the first intra prediction data and outputs a locally decoded predicted image; and
     a predicted image enlargement unit that enlarges the locally decoded predicted image to generate the first intra predicted image.
  2.  The image encoding device according to claim 1, comprising:
     a difference inverse quantization and inverse transform unit that receives the transform coefficients, performs inverse quantization and inverse frequency transform, and outputs a difference locally decoded image;
     a difference local decoding addition unit that adds the first intra predicted image and the difference locally decoded image and outputs a locally decoded image; and
     a frame memory that stores the locally decoded image,
     wherein the first intra prediction unit comprises:
     an average pixel value prediction unit that uses the locally decoded image stored in the frame memory to predict the average of the pixel values of a region to be processed from the pixel values of its surrounding region, and outputs the result as a predicted average pixel value;
     an average value removal unit that subtracts the predicted average pixel value from each pixel value of the reduced image and outputs a mean-removed reduced image; and
     an average value addition unit that adds the output of the predicted image inverse quantization and inverse transform unit to the predicted average pixel value and outputs the locally decoded predicted image, and
     wherein the predicted image transform and quantization unit frequency-transforms and quantizes the mean-removed reduced image and outputs the first intra prediction data, and the predicted image inverse quantization and inverse transform unit inversely quantizes and inverse-frequency-transforms the first intra prediction data and outputs it as a mean-removed locally decoded predicted image.
  3.  The image encoding device according to claim 2, comprising:
     a second intra prediction unit that uses the locally decoded image stored in the frame memory to predict the pixel values of the region to be processed from the pixel values of its surrounding region, outputs a second intra predicted image, and outputs information for reconstructing the predicted image as second intra prediction data; and
     a predicted image selection unit that compares the first and second intra predicted images, selects the predicted image with the higher prediction efficiency, outputs it as the intra predicted image, and outputs prediction type information indicating which predicted image was selected,
     wherein the entropy encoding unit entropy-encodes the second intra prediction data and the prediction type information.
  4.  The image encoding device according to claim 1, wherein the first intra prediction unit comprises:
     a DC component storage buffer that stores, as a locally decoded DC component, the DC component that is the direct-current component of the first intra prediction data; and
     a DC component prediction unit that calculates a predicted value of the DC component based on the locally decoded DC component stored in the DC component storage buffer, subtracts the predicted value from the DC component of the first intra prediction data, and outputs the result as a first intra prediction data difference value, and
     wherein the entropy encoding unit entropy-encodes the first intra prediction data difference value in place of the first intra prediction data.
  5.  An image decoding device comprising:
     an entropy decoding unit that entropy-decodes a bitstream and outputs transform coefficients and first intra prediction data;
     a difference inverse quantization and inverse transform unit that inversely quantizes and inverse-frequency-transforms the transform coefficients and outputs a difference decoded image;
     a first intra predicted image generation unit that reconstructs and outputs a first intra predicted image based on the first intra prediction data; and
     a difference decoded image addition unit that adds the difference decoded image and the first intra predicted image and outputs a decoded image,
     wherein the first intra predicted image generation unit comprises:
     a predicted image inverse quantization and inverse transform unit that inversely quantizes and inverse-frequency-transforms the first intra prediction data and outputs a decoded predicted image; and
     a predicted image enlargement unit that enlarges the decoded predicted image to generate the first intra predicted image.
  6.  The image decoding device according to claim 5, comprising a frame memory that stores the decoded image,
     wherein the first intra predicted image generation unit comprises:
     an average pixel value prediction unit that uses the decoded image stored in the frame memory to predict the average of the pixel values of a region to be processed from the pixel values of its surrounding region, and outputs the result as a predicted average pixel value; and
     an average value addition unit that adds the output of the predicted image inverse quantization and inverse transform unit to the predicted average pixel value and outputs the decoded predicted image, and
     wherein the predicted image inverse quantization and inverse transform unit inversely quantizes and inverse-frequency-transforms the first intra prediction data and outputs a mean-removed decoded predicted image.
  7.  The image decoding device according to claim 6, wherein the entropy decoding unit entropy-decodes second intra prediction data and prediction type information, the image decoding device comprising:
     a second intra predicted image generation unit that, based on the second intra prediction data, uses the decoded image stored in the frame memory to predict the pixel values of the region to be processed from the pixel values of its surrounding region and outputs a second intra predicted image; and
     a predicted image selection unit that selects and outputs either the first intra predicted image or the second intra predicted image based on the prediction type information.
  8.  The image decoding device according to claim 5, wherein the first intra predicted image generation unit comprises:
     a DC component prediction unit that calculates a predicted value of the DC component based on a decoded DC component, adds the predicted value to the DC component of a first intra prediction data difference value, and outputs the result as the first intra prediction data; and
     a DC component storage buffer that stores, as the decoded DC component, the DC component that is the direct-current component of the first intra prediction data, and
     wherein the entropy decoding unit outputs the first intra prediction data difference value as the entropy decoding result in place of the first intra prediction data.
PCT/JP2010/005919 2009-11-30 2010-10-01 Image encoding device and image decoding device WO2011064932A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011543081A JPWO2011064932A1 (en) 2009-11-30 2010-10-01 Image encoding apparatus and image decoding apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-271492 2009-11-30
JP2009271492 2009-11-30

Publications (1)

Publication Number Publication Date
WO2011064932A1 true WO2011064932A1 (en) 2011-06-03

Family

ID=44066047

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/005919 WO2011064932A1 (en) 2009-11-30 2010-10-01 Image encoding device and image decoding device

Country Status (2)

Country Link
JP (1) JPWO2011064932A1 (en)
WO (1) WO2011064932A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006333436A (en) * 2005-01-07 2006-12-07 Ntt Docomo Inc Motion image encoding apparatus, method, and program, and motion image decoding apparatus, method, and program
JP2007306276A (en) * 2006-05-11 2007-11-22 Nippon Telegr & Teleph Corp <Ntt> Hierarchical prediction method, device, and program, and recording medium thereof
WO2008072500A1 (en) * 2006-12-13 2008-06-19 Sharp Kabushiki Kaisha Dynamic image encoding device and dynamic image decoding device

Also Published As

Publication number Publication date
JPWO2011064932A1 (en) 2013-04-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10832788

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2011543081

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10832788

Country of ref document: EP

Kind code of ref document: A1