WO2011089865A1 - Image coding method, image decoding method, corresponding device, program and integrated circuit - Google Patents


Info

Publication number
WO2011089865A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter coefficient
removal filter
coefficient
image
filter
Prior art date
Application number
PCT/JP2011/000085
Other languages
English (en)
Japanese (ja)
Inventor
Matthias Narroschke
Hisao Sasai
Virginie Drugeon
Original Assignee
Panasonic Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Publication of WO2011089865A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • Conventionally, an image encoding method that predictively encodes a moving image for each block, and an image decoding method that decodes an image predictively encoded for each block, have been proposed (see, for example, Non-Patent Document 1).
  • In the image encoding method of Non-Patent Document 1, the difference between a block of the moving image (original image) and the corresponding block of the predicted image is frequency-transformed and quantized, so that a plurality of quantized values are generated, and the plurality of quantized values are entropy encoded.
  • the above-described moving image is encoded, and as a result, an encoded bit stream is generated.
  • A difference image is generated by inversely quantizing and inverse-frequency-transforming the plurality of quantized values, and the difference image and the predicted image are added for each block to generate a reconstructed image.
  • a deblocking filter is used to remove the distortion.
  • In the image decoding method of Non-Patent Document 1, a plurality of quantized values are extracted from the encoded bitstream by entropy decoding the encoded bitstream.
  • The extracted quantized values are inversely quantized and inverse frequency transformed to generate a difference image, and the difference image and the predicted image are added for each block to generate a reconstructed image.
  • Such a reconstructed image is used to generate a predicted image, as in the image encoding method.
  • block boundary distortion in the reconstructed image is removed by filtering using the above-described deblocking filter.
  • the encoded bit stream is decoded and output as a decoded image. Further, the reconstructed image from which the block boundary distortion has been removed in this way is used to generate the predicted image described above.
  • the image coding method of Non-Patent Document 1 has a problem in that block distortion cannot be appropriately removed because there is a limit to filter coefficient selection.
  • That is, a plurality of usable filter coefficients are prepared in advance, and one of them is selected for the block boundary to be processed based on the characteristics of the reconstructed image.
  • Consequently, the image encoding method cannot freely select an appropriate filter coefficient based on information other than the features of the reconstructed image.
  • Similarly, an appropriate filter coefficient cannot be selected in the image decoding method either. Therefore, there is a problem that block distortion cannot be removed appropriately, encoding efficiency is lowered, and image quality is deteriorated.
  • the present invention has been made in view of such problems, and an object thereof is to provide an image encoding method, an image decoding method, and the like that improve encoding efficiency and image quality.
  • An image encoding method according to an aspect of the present invention is a method for encoding a moving image, in which the moving image is encoded for each block using a predicted image.
  • The encoded blocks are sequentially reconstructed, a distortion removal filter coefficient for removing distortion at a block boundary, which is a boundary between reconstructed blocks, is derived according to the feature of the block boundary, filtering using the derived distortion removal filter coefficient is performed on the block boundary in order to generate the predicted image, and coefficient specifying information for identifying the derived distortion removal filter coefficient is inserted into the encoded bitstream.
  • According to this, coefficient specifying information for specifying the distortion removal filter coefficient is inserted into the encoded bitstream, so that the same distortion removal filter coefficient can be applied to the block boundary to be processed at the time of both image encoding and image decoding.
  • Moreover, an appropriate filter coefficient can be derived freely based on information other than the features of the reconstructed image. Therefore, block distortion can be removed appropriately, and encoding efficiency and image quality can be improved.
  • The distortion removal filter coefficient may be derived by calculating it, and in the insertion of the coefficient specifying information into the encoded bitstream, the derived distortion removal filter coefficient itself is inserted into the encoded bitstream as the coefficient specifying information.
  • In this case, distortion removal filter coefficients such as deblocking filter coefficients are calculated according to the characteristics of the block boundary, so there is no restriction on the selection of filter coefficients, and an appropriate distortion removal filter coefficient can be derived for the block boundary.
  • By performing filtering using the distortion removal filter coefficients derived in this way on the block boundary (the pixels at the block boundary), the distortion at the block boundary can be appropriately removed, a predicted image closer to the original moving image can be generated, and as a result, encoding efficiency and image quality can be improved.
  • Furthermore, since the derived distortion removal filter coefficient is inserted into the encoded bitstream, when the image decoding apparatus decodes the encoded bitstream, it can extract the distortion removal filter coefficient from the encoded bitstream and, as when the moving image was encoded, perform filtering on the block boundary using that coefficient. As a result, the image quality of the decoded image can be improved.
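As a rough sketch of this signalling round trip, the toy code below packs a set of hypothetical filter taps into a payload on the encoder side and unpacks the identical taps on the decoder side. The payload layout, tap count, and 16-bit tap precision are illustrative assumptions, not the patent's actual bitstream syntax.

```python
import struct

def encode_coefficients(coeffs):
    """Serialize derived filter coefficients into a byte payload
    (count byte followed by 16-bit signed taps)."""
    payload = struct.pack(f"<{len(coeffs)}h", *coeffs)
    return struct.pack("<B", len(coeffs)) + payload

def decode_coefficients(payload):
    """Extract the coefficient-specifying information back out of
    the payload, recovering the exact taps the encoder derived."""
    n = payload[0]
    return list(struct.unpack(f"<{n}h", payload[1:1 + 2 * n]))

taps = [-2, 5, 10, 5, -2]   # example distortion-removal filter taps
assert decode_coefficients(encode_coefficients(taps)) == taps
```

Because the decoder recovers the identical taps, both sides apply the same filter to the same block boundary.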
  • The image encoding method may further determine, according to the feature of a block boundary, whether or not to derive the distortion removal filter coefficient. When it is determined not to derive the distortion removal filter coefficient, an internal noise removal filter coefficient for removing noise in the processing target block corresponding to the block boundary is derived instead.
  • In order to generate the predicted image, filtering using the internal noise removal filter coefficient is performed on the processing target block, and the derived internal noise removal filter coefficient is inserted into the encoded bitstream.
  • In the derivation of the distortion removal filter coefficient, a filter coefficient for removing, together with the distortion at the block boundary, the noise in the processing target block corresponding to that boundary may be derived as the distortion removal filter coefficient, and in the filtering using the distortion removal filter coefficient, filtering using that filter coefficient is performed on the block boundary and on the processing target block.
  • In this way, a filter coefficient that combines the characteristics of a distortion removal filter coefficient and of an internal noise removal filter coefficient can be derived. Since filtering using this filter coefficient is performed on both the block boundary and the processing target block, there is no need to switch filter coefficients and filter the block boundary and the processing target block sequentially; both can be filtered at once. For example, a deblocking filter can be applied together with a Wiener filter in a single pass. As a result, filtering can be performed easily and quickly.
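The single-pass idea can be illustrated with a toy 1-D example: one normalized kernel is swept across a row of pixels that contains a block-boundary step, so boundary smoothing and in-block smoothing happen in the same convolution. The kernel and pixel values below are illustrative assumptions, not derived coefficients.

```python
import numpy as np

def filter_row(row, taps):
    """Apply one combined filter (boundary deblocking plus in-block
    denoising) in a single convolution pass over the whole row."""
    taps = np.asarray(taps, dtype=float)
    taps = taps / taps.sum()               # normalize to unit gain
    pad = len(taps) // 2
    padded = np.pad(row, pad, mode="edge") # replicate edge pixels
    return np.convolve(padded, taps, mode="valid")

row = np.array([10., 10., 10., 40., 40., 40.])  # step at a block boundary
out = filter_row(row, [1, 2, 1])
# Interior pixels stay near their values; the boundary step is smoothed.
```

One pass over the row touches both the boundary pixels and the block interior, which is the "filter at once" behaviour described above.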
  • The image encoding method may further include determining, for each block boundary in an image region, a filter mode corresponding to the block boundary from among a plurality of predetermined filter modes according to the feature of the block boundary.
  • In the derivation of the distortion removal filter coefficient, a common filter coefficient is then derived, for each filter mode, for the plurality of block boundaries in the image region that correspond to that filter mode.
  • Since a common filter coefficient is derived for the plurality of block boundaries corresponding to each filter mode, there is no need to derive a separate filter coefficient for every block boundary included in an image region such as a frame, and the amount of calculation for deriving filter coefficients can be reduced.
  • The image encoding method may further count, for each filter mode, the number of block boundaries in the image region corresponding to that filter mode, and change the filter mode with the smallest count to the filter mode with the second smallest count, thereby combining the filter mode with the smallest count into the filter mode with the second smallest count.
  • the common filter coefficient is derived for a plurality of block boundaries corresponding to the filter mode.
  • As a result, the derivation of the filter coefficient for the filter mode with the smallest number of boundaries can be omitted, and the amount of calculation for deriving the filter coefficients can be further reduced.
  • Moreover, since the derivation of filter coefficients is omitted only for the filter mode having the smallest number of block boundaries, the influence on the image quality of the image region can be minimized.
  • In the image encoding method, the filter mode having the fewest boundaries and the filter mode having the second fewest may be combined repeatedly, and in the insertion of the coefficient specifying information into the encoded bitstream, the number of times the combination was repeated is further inserted into the encoded bitstream.
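A minimal sketch of one merging step, assuming each block boundary is tagged with a mode label (all names are hypothetical):

```python
from collections import Counter

def merge_rarest_mode(boundary_modes):
    """Merge the filter mode with the fewest block boundaries into the
    mode with the second fewest, saving one coefficient derivation."""
    counts = Counter(boundary_modes)
    if len(counts) < 2:
        return boundary_modes       # nothing to merge
    ordered = sorted(counts, key=counts.get)
    rarest, second = ordered[0], ordered[1]
    return [second if m == rarest else m for m in boundary_modes]

modes = ["m1", "m1", "m1", "m2", "m2", "m3"]
merged = merge_rarest_mode(modes)   # "m3" boundaries now use mode "m2"
```

Repeating this step gives the repeated combining described above; each repetition removes one mode whose coefficients no longer need to be derived or signalled.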
  • The image encoding method may further include combining any one of the plurality of predetermined filter modes with another of those filter modes by changing the one filter mode into the other. In the derivation of the distortion removal filter coefficient, the common filter coefficient is derived for the plurality of block boundaries corresponding to the two combined filter modes, and in the insertion of the coefficient specifying information into the encoded bitstream, a combination index for specifying the two combined filter modes is further inserted into the encoded bitstream.
  • the derivation of the filter coefficient for one filter mode can be omitted, and the amount of calculation for deriving the filter coefficient can be further reduced.
  • Since a combination index for specifying the two combined filter modes is inserted into the encoded bitstream, when the image decoding apparatus decodes the encoded bitstream, it extracts the combination index and can determine from it which filter modes were combined. As a result, the image decoding apparatus can perform filtering using the same filter modes as those used for filtering when the moving image was encoded.
  • In the filtering using the distortion removal filter coefficient, it may be determined, for each pixel at the block boundary, whether or not the pixel should be filtered based on the difference in pixel values between pixels, and filtering is performed only on the pixels determined to require it.
  • In this case, filtering is performed only on the necessary pixels among the pixels at the block boundary, so the amount of filtering calculation can be reduced, and the encoding efficiency and image quality can be further improved.
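The per-pixel decision can be sketched as a simple threshold test on the step across the boundary; the threshold value here is an illustrative assumption, not one specified by the patent.

```python
def should_filter(p, q, threshold=20):
    """Filter a boundary pixel pair only when the step across the
    boundary is small enough to be quantization distortion rather
    than a genuine edge in the image (threshold is illustrative)."""
    return abs(p - q) < threshold

# A small step (likely a blocking artifact) is filtered;
# a large step (likely a true edge) is left untouched.
assert should_filter(100, 110) is True
assert should_filter(100, 200) is False
```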
  • The image encoding method may further derive a predicted value for the distortion removal filter coefficient by predicting the derived distortion removal filter coefficient, and in the insertion of the coefficient specifying information into the encoded bitstream, the difference between the distortion removal filter coefficient and the predicted value is inserted into the encoded bitstream.
  • the difference between the distortion removal filter coefficient and the predicted value is inserted into the encoded bit stream, the code amount of the encoded bit stream can be suppressed.
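The effect of coding coefficient differences can be sketched as follows; the example values and the choice of predictor are illustrative assumptions.

```python
def coeff_residuals(coeffs, predicted):
    """Code only the difference between each derived coefficient and
    its prediction; small residuals cost fewer bits than raw taps."""
    return [c - p for c, p in zip(coeffs, predicted)]

def coeff_reconstruct(residuals, predicted):
    """Decoder side: add the residuals back onto the same prediction."""
    return [r + p for r, p in zip(residuals, predicted)]

derived   = [-2, 5, 11, 5, -2]
predicted = [-2, 5, 10, 5, -2]   # e.g. previously signalled coefficients
residuals = coeff_residuals(derived, predicted)       # mostly zeros
assert coeff_reconstruct(residuals, predicted) == derived
```

Because the residuals are mostly zero, they entropy-code far more cheaply than the raw coefficient values would.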
  • the distortion removal filter coefficient is derived for each color component and filtering direction.
  • the distortion removal filter coefficient is derived using an arithmetic expression used for derivation of the filter coefficient of the Wiener filter.
  • the predicted image can be brought closer to the original moving image, and the encoding efficiency and the image quality can be further improved.
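The Wiener derivation referred to here amounts, in its standard textbook form, to solving the normal equations R w = p, where R is the autocorrelation matrix of the reconstructed signal and p its cross-correlation with the original. A 1-D sketch under that standard formulation (not the patent's exact procedure) might look like:

```python
import numpy as np

def wiener_taps(reconstructed, original, n_taps=3):
    """Solve the Wiener-Hopf normal equations R w = p for the taps
    that minimize the mean squared error between the filtered
    reconstructed signal and the original."""
    pad = n_taps // 2
    x = np.pad(reconstructed, pad, mode="edge")
    # Design matrix: row j holds the n_taps samples around pixel j.
    X = np.stack([x[i:i + len(reconstructed)] for i in range(n_taps)], axis=1)
    R = X.T @ X                 # autocorrelation matrix
    p = X.T @ original          # cross-correlation vector
    return np.linalg.solve(R, p)

rng = np.random.default_rng(0)
orig = rng.standard_normal(500)
recon = orig + 0.1 * rng.standard_normal(500)  # noisy reconstruction
taps = wiener_taps(recon, orig)                # center tap dominates
```

With light noise the solution stays close to the identity filter (center tap near 1), while heavier noise shifts weight onto the neighbouring taps.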
  • An image decoding method according to an aspect of the present invention is a method for decoding an encoded bitstream, in which the encoded blocks included in the encoded bitstream are sequentially reconstructed.
  • Coefficient specifying information for specifying a distortion removal filter coefficient for removing, according to the feature of a block boundary that is a boundary between reconstructed blocks, the distortion of that block boundary is extracted from the encoded bitstream, and filtering using the distortion removal filter coefficient specified by the extracted coefficient specifying information is performed on the block boundary.
  • According to this, the coefficient specifying information for specifying the distortion removal filter coefficient according to the feature of the block boundary is extracted from the encoded bitstream, and filtering using the distortion removal filter coefficient specified by that information is performed on the block boundary. The distortion removal filter coefficient used for filtering when the moving image was encoded can therefore also be used for filtering at decoding time, and as a result, the image quality of the decoded image can be improved.
  • In the extraction of the coefficient specifying information, a filter coefficient for removing, together with the distortion at the block boundary, the noise in the processing target block corresponding to that boundary may be extracted as the distortion removal filter coefficient, and in the filtering using the distortion removal filter coefficient, filtering using that filter coefficient is performed on the block boundary and on the processing target block.
  • In this way, a filter coefficient that combines the characteristics of a distortion removal filter coefficient and of an internal noise removal filter coefficient can be extracted. Since filtering using this filter coefficient is performed on both the block boundary and the processing target block, there is no need to switch filter coefficients and filter the block boundary and the processing target block sequentially; both can be filtered at once. For example, a deblocking filter can be applied together with a Wiener filter in a single pass. As a result, filtering can be performed easily and quickly.
  • The present invention can be realized not only as such an image encoding method and image decoding method, but also as an image encoding device, an image decoding device, and an integrated circuit that perform image processing according to those methods.
  • The present invention can also be realized as a program for causing a computer to execute image processing according to the methods, and as a recording medium storing the program. The above means for solving the problems may also be combined in any way.
  • FIG. 1A is a block diagram of an image encoding device according to an aspect of the present invention.
  • FIG. 1B is a flowchart illustrating an image encoding method according to an aspect of the present invention.
  • FIG. 2A is a block diagram of an image decoding apparatus according to an aspect of the present invention.
  • FIG. 2B is a flowchart illustrating an image decoding method according to an aspect of the present invention.
  • FIG. 3 is a block diagram of the image coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 4 is a diagram for explaining block boundary specification and filter mode determination according to Embodiment 1 of the present invention.
  • FIG. 5 is a diagram showing pixels on the block boundary in the first embodiment of the present invention.
  • FIG. 6 is a diagram for explaining the filter mode in the first embodiment of the present invention.
  • FIG. 8 is a flowchart showing processing for calculating the filter coefficient by switching the weighting coefficient ⁇ for each filter mode according to Embodiment 1 of the present invention.
  • FIG. 9 is a diagram showing filter coefficients for each filter mode according to Embodiment 1 of the present invention.
  • FIG. 10 is a diagram for explaining filter mode coupling in Embodiment 1 of the present invention.
  • FIG. 11 is a diagram illustrating a syntax regarding a noise removal filter of an encoded bitstream according to Embodiment 1 of the present invention.
  • FIG. 12 is a block diagram of the image decoding apparatus according to Embodiment 1 of the present invention.
  • FIG. 13 is a diagram for explaining pixels to be filtered according to the first modification of the first embodiment of the present invention.
  • FIG. 14 is a diagram showing a filter mode combination table according to the second modification of the first embodiment of the present invention.
  • FIG. 15 is an explanatory diagram for describing filter coefficient designation values according to the third modification of the first embodiment of the present invention.
  • FIG. 16 is a schematic diagram illustrating an example of the overall configuration of a content supply system that implements a content distribution service.
  • FIG. 17 is a diagram illustrating an appearance of a mobile phone.
  • FIG. 18 is a block diagram illustrating a configuration example of a mobile phone.
  • FIG. 19 is a schematic diagram illustrating an example of the overall configuration of a digital broadcasting system.
  • FIG. 20 is a block diagram illustrating a configuration example of a television.
  • FIG. 21 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
  • FIG. 22 is a diagram illustrating a structure example of a recording medium that is an optical disk.
  • FIG. 23 is a block diagram illustrating a configuration example of an integrated circuit that realizes the image encoding method and the image decoding method according to each embodiment.
  • FIG. 1A is a block diagram of an image encoding device according to an aspect of the present invention.
  • the image encoding apparatus 10 is an apparatus that encodes a moving image, and includes an encoding unit 11, a reconstruction unit 12, a filter coefficient derivation unit 13, a filtering unit 14, and an insertion unit 15.
  • the encoding unit 11 generates an encoded bitstream by encoding a moving image for each block using a predicted image.
  • the reconstruction unit 12 sequentially reconstructs the encoded blocks.
  • the filter coefficient deriving unit 13 derives a distortion removal filter coefficient for removing distortion at a block boundary, which is a boundary between reconstructed blocks, according to the feature of the block boundary.
  • the filtering unit 14 performs filtering on the block boundary using the derived distortion removal filter coefficient in order to generate the predicted image.
  • the insertion unit 15 inserts coefficient specifying information for specifying the derived distortion removal filter coefficient into the encoded bitstream.
  • FIG. 1B is a flowchart showing processing of the image encoding device 10 according to an aspect of the present invention.
  • the image encoding device 10 first generates an encoded bitstream by encoding a moving image block by block using a predicted image (step S11). Next, the image encoding device 10 sequentially reconstructs the encoded blocks (step S12). Next, the image coding apparatus 10 derives a distortion removal filter coefficient for removing distortion at the block boundary, which is a boundary between the reconstructed blocks, according to the feature of the block boundary (step S13). Next, the image encoding device 10 performs filtering on the block boundary using the derived distortion removal filter coefficient in order to generate the above-described predicted image (step S14). Further, the image encoding device 10 inserts coefficient specifying information for specifying the derived distortion removal filter coefficient into the encoded bitstream (step S15).
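The steps S11 to S15 above can be sketched as one loop; every helper passed in below is a placeholder standing in for the corresponding unit of the image encoding device 10, not an implementation from the specification.

```python
def encode(frame, encode_block, reconstruct_block,
           derive_coeffs, apply_filter, insert_coeff_info):
    """Steps S11-S15 as one pass; all callbacks are placeholders."""
    bitstream, recon = [], []
    for block in frame:
        bitstream.append(encode_block(block))           # S11: encode per block
        recon.append(reconstruct_block(bitstream[-1]))  # S12: reconstruct
    coeffs = derive_coeffs(recon)       # S13: derive from boundary features
    filtered = apply_filter(recon, coeffs)  # S14: filter for prediction
    insert_coeff_info(bitstream, coeffs)    # S15: signal the coefficients
    return bitstream, filtered
```

A trivial invocation with stand-in callbacks shows the data flow:

```python
bs, rc = encode([1, 2], lambda b: b + 10, lambda c: c - 10,
                lambda r: sum(r), lambda r, c: [x + c for x in r],
                lambda bs, c: bs.append(c))
# bs carries both the coded blocks and the coefficient info;
# rc is the filtered reconstruction used for prediction.
```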
  • coefficient specifying information for specifying the distortion removal filter coefficient is inserted into the encoded bitstream, so that at the time of image encoding and image decoding An appropriate filter coefficient can be derived freely based on information other than the characteristics of the reconstructed image (reconstructed block) while using the same distortion removal filter coefficient for the block boundary to be processed. Therefore, block distortion can be removed appropriately, and encoding efficiency and image quality can be improved.
  • Note that the filter coefficient deriving unit 13 may derive the distortion removal filter coefficient by calculating it, and the insertion unit 15 may insert the derived distortion removal filter coefficient itself into the encoded bitstream as the coefficient specifying information.
  • distortion removal filter coefficients such as the filter coefficient of the deblocking filter are derived (calculated) according to the feature of the block boundary.
  • the distortion removal filter coefficient appropriate for the block boundary can be derived.
  • Thus, the distortion at the block boundary can be appropriately removed, a predicted image closer to the original moving image can be generated, and as a result, encoding efficiency and image quality can be improved.
  • Furthermore, since the derived distortion removal filter coefficient is inserted into the encoded bitstream, when the image decoding apparatus decodes the encoded bitstream, it can extract the distortion removal filter coefficient from the encoded bitstream and perform filtering on the block boundary using that coefficient, as when the moving image was encoded. As a result, the image quality of the decoded image can be improved.
  • The unit of reconstruction and filtering may be one slice, or may be a smaller unit.
  • When the unit is large, the information amount of the filter coefficients can be reduced; when the unit is smaller (for example, several blocks), the delay time required for encoding can be shortened.
  • Alternatively, the filter coefficient deriving unit 13 may derive the distortion removal filter coefficient by selecting it from among a plurality of distortion removal filter coefficient candidates, and the insertion unit 15 may insert an index indicating the selected distortion removal filter coefficient into the encoded bitstream as the coefficient specifying information. Further, the filter coefficient deriving unit 13 may calculate a distortion removal filter coefficient and select, from the plurality of candidates described above, the candidate closest to the calculated coefficient.
  • FIG. 2A is a block diagram of an image decoding apparatus according to an aspect of the present invention.
  • the image decoding device 20 is a device that decodes an encoded bit stream, and includes a reconstruction unit 21, an extraction unit 22, and a filtering unit 23.
  • the reconstruction unit 21 sequentially reconstructs the encoded blocks included in the encoded bitstream.
  • The extraction unit 22 extracts, from the encoded bitstream, coefficient specifying information for specifying a distortion removal filter coefficient for removing distortion at a block boundary, which is a boundary between reconstructed blocks, according to the feature of the block boundary. The filtering unit 23 performs filtering on the block boundary using the distortion removal filter coefficient specified by the extracted coefficient specifying information.
  • FIG. 2B is a flowchart showing processing of the image decoding device 20 according to an aspect of the present invention.
  • The image decoding device 20 first sequentially reconstructs the encoded blocks included in the encoded bitstream (step S21). Next, the image decoding device 20 extracts, from the encoded bitstream, coefficient specifying information for specifying a distortion removal filter coefficient for removing distortion at a block boundary, which is a boundary between reconstructed blocks, according to the feature of the block boundary (step S22). Next, the image decoding device 20 performs filtering on the block boundary using the distortion removal filter coefficient specified by the extracted coefficient specifying information (step S23).
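Steps S21 to S23 can be sketched in the same style; the assumption that the coefficient information sits at the end of the bitstream is purely for illustration, and every callback is a placeholder for the corresponding unit of the image decoding device 20.

```python
def decode(bitstream, reconstruct_block, extract_coeff_info, apply_filter):
    """Steps S21-S23; the coefficient info is assumed (for this sketch
    only) to sit at the end of the bitstream."""
    blocks = [reconstruct_block(b) for b in bitstream[:-1]]  # S21
    coeffs = extract_coeff_info(bitstream)                   # S22
    return apply_filter(blocks, coeffs)                      # S23

# Decoding a toy bitstream with stand-in callbacks mirrors the
# encoder: the same coefficient drives the same boundary filtering.
decoded = decode([11, 12, 3], lambda c: c - 10, lambda bs: bs[-1],
                 lambda bl, c: [x + c for x in bl])
```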
  • As described above, the coefficient specifying information for specifying the distortion removal filter coefficient according to the feature of the block boundary is extracted from the encoded bitstream, and filtering using the distortion removal filter coefficient specified by that information is performed on the block boundary. The distortion removal filter coefficient used for filtering when the moving image was encoded can therefore also be used for filtering at decoding time. As a result, the image quality of the decoded image can be improved.
  • the extraction unit 22 may extract the above-described distortion removal filter coefficient from the encoded bitstream as coefficient specifying information.
  • Alternatively, the extraction unit 22 may extract an index indicating the distortion removal filter coefficient from the encoded bitstream as the coefficient specifying information, and the filtering unit 23 may select the distortion removal filter coefficient indicated by the extracted index from among a plurality of distortion removal filter coefficient candidates and perform filtering on the block boundary using the selected coefficient.
  • FIG. 3 is a block diagram of the image coding apparatus according to Embodiment 1 of the present invention.
  • An image encoding device 100 is a device that generates an encoded bitstream by predictively encoding an original image S, which is a moving image, for each block, and includes a subtracter 101, a transform quantization unit 102, an entropy coding unit 103, an inverse quantization inverse transform unit 104, an adder 105, a noise removal filter 106, and a prediction unit 107.
  • the subtracter 101 acquires the original image S and the predicted image S ⁇ for each block, and calculates a prediction error image e that is the difference between them.
  • the transform quantizing unit 102 generates a coefficient block including a plurality of frequency coefficients by performing orthogonal transform (frequency transform) such as discrete cosine transform on the prediction error image e. Further, the transform quantization unit 102 generates a quantization block including a plurality of quantization values by performing quantization on each of the plurality of frequency coefficients included in the coefficient block.
  • the entropy encoding unit 103 performs entropy encoding on a plurality of quantization values included in the quantization block. By performing such entropy encoding, an encoded bit stream is generated.
  • the inverse quantization inverse transform unit 104 generates a coefficient block including a plurality of frequency coefficients by performing inverse quantization on each of the plurality of quantization values included in the quantization block. Further, the inverse quantization inverse transform unit 104 generates a prediction error image e ′ in block units by performing inverse orthogonal transform (inverse frequency transform) such as inverse discrete cosine transform on the coefficient block.
  • the adder 105 acquires the prediction error image e ′ and the prediction image S ⁇ for each block, and generates the reconstructed image S ′ by adding them. As a result, the encoded blocks are sequentially reconstructed.
  • the noise removal filter 106 acquires the reconstructed image S ′ and the predicted image S ⁇ , and based on these images, the distortion of the block boundary of the reconstructed image S ′ and the processing target block corresponding to the block boundary. Remove the included noise. That is, the noise removal filter 106 functions as a deblocking filter and a Wiener filter. Further, the noise removal filter 106 calculates a plurality of filter coefficients used for filtering for each image region such as a frame based on the original image S.
  • the prediction unit 107 generates the prediction image S ⁇ in units of blocks by performing intra prediction or inter prediction using the reconstructed image S ′ from which noise has been removed by the noise removal filter 106.
  • the image encoding device 10 in FIG. 1A corresponds to the image encoding device 100 in the present embodiment
  • the encoding unit 11 in FIG. 1A includes the subtractor 101, the transform quantization unit 102, the entropy encoding unit 103, and the prediction unit 107 in the present embodiment.
  • the reconstruction unit 12 in FIG. 1A includes the inverse quantization inverse transform unit 104 and the adder 105 in the present embodiment, and the filter coefficient deriving unit 13 and the filtering unit 14 in FIG. 1A correspond to the noise removal filter 106.
  • the insertion unit 15 in FIG. 1A corresponds to part of the function of the entropy encoding unit 103 in the present embodiment.
  • the noise removal filter 106 performs the following processes (1) to (6).
  • FIG. 4 is a diagram for explaining block boundary specification and filter mode determination.
  • the noise removal filter 106 identifies block boundaries on the left side and the upper side of the processing target block for each processing target block in the image area. Further, the noise removal filter 106 determines (selects) a filter mode for each identified block boundary. For example, the noise removal filter 106 determines the filter mode m1 corresponding to the features of the left block boundary and the filter mode m2 corresponding to the features of the upper block boundary. That is, the noise removal filter 106 determines the filter mode m1 as the horizontal filter mode and the filter mode m2 as the vertical filter mode for the processing target block. Details of the filter modes will be described later.
  • the noise removal filter 106 specifies four pixels arranged in the horizontal direction with the left block boundary in the center as candidates for filtering by the filter mode m1. Since there are a plurality of such pixel groups composed of four pixels along the block boundary, each of these pixel groups is specified as a candidate for filtering. Similarly, the noise removal filter 106 specifies four pixels arranged in the vertical direction across the upper block boundary as candidates for filtering by the filter mode m2. In addition, since there are a plurality of such pixel groups including four pixels along the block boundary, each of these pixel groups is specified as a filtering target candidate. Here, the noise removal filter 106 determines whether or not to perform filtering on each of the pixels specified as candidates by performing an operation described later.
  • the noise removal filter 106 calculates a filter coefficient corresponding to each filter mode determined as described above for the block boundaries in the image region by performing an operation described later. That is, for each filter mode, the noise removal filter 106 calculates a common filter coefficient (distortion removal filter coefficient) for the one or more block boundaries corresponding to the filter mode. The noise removal filter 106 derives a predicted value by predicting the filter coefficient thus calculated, and quantizes the difference between the predicted value and the filter coefficient. The noise removal filter 106 outputs the quantized difference to the entropy encoding unit 103. The entropy encoding unit 103 performs entropy encoding on the quantized difference and inserts it into the encoded bitstream. Further, the noise removal filter 106 performs filtering using the filter coefficients calculated in (4) above on the pixels determined in (3) above to be filtered.
  • FIG. 5 is a diagram showing pixels on the block boundary.
  • the block p and the block q are adjacent to each other.
  • the six pixels p2, p1, p0, q0, q1, and q2 are samples arranged along the horizontal or vertical direction with the boundary (block boundary) between block p and block q in the middle.
  • in block p, the pixel p0, the pixel p1, and the pixel p2 are arranged in order from the block boundary side.
  • in block q, the pixel q0, the pixel q1, and the pixel q2 are arranged in order from the block boundary side.
  • the pixel values of the pixels p2, p1, p0, q0, q1, and q2 in the reconstructed image S′ are p2,s′, p1,s′, p0,s′, q0,s′, q1,s′, and q2,s′.
  • the pixel values of the pixels p2, p1, p0, q0, q1, and q2 in the predicted image Ŝ are p2,ŝ, p1,ŝ, p0,ŝ, q0,ŝ, q1,ŝ, and q2,ŝ.
  • the noise removal filter 106 determines that filtering should be performed on each of the four pixels p1, p0, q0, and q1 on the block boundary of the reconstructed image S′.
  • filtering is performed on each of those pixels (filtering target pixels).
  • the pixel values (p′1, p′0, q′0, and q′1) of the filtered pixels p1, p0, q0, and q1 are calculated.
  • a1,m, …, a4,m, b1,m, …, b4,m, c1,m, …, c4,m, d1,m, …, d4,m, o1,m, and o2,m are filter coefficients calculated for each filter mode for an image region such as a frame. That is, the noise removal filter 106 adaptively calculates these filter coefficients according to the features of the block boundaries. These filter coefficients are calculated independently for each of the color components (luminance component Y, color difference component U, and color difference component V), and independently for each of the filtering directions (horizontal direction and vertical direction). Of the luminance component Y, the color difference component U, and the color difference component V, the filter coefficients of the color difference component U and the color difference component V may be made common.
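  • As a hedged illustration of the structure just described (the exact arithmetic expressions for p′1, p′0, q′0, and q′1 are given by equations omitted from this excerpt), the following sketch assumes each filtered value is a weighted sum of the four boundary pixels plus an offset; which samples each coefficient set weighs is an assumption made only to show the role of the coefficients:

```python
def filter_boundary_pixels(p1, p0, q0, q1, a, b, c, d, o1, o2):
    # Hypothetical linear form: a, b, c, d are 4-tap coefficient sets and
    # o1, o2 are offsets, mirroring the per-mode coefficients a1..a4, b1..b4,
    # c1..c4, d1..d4, o1, o2 of the text.
    taps = (p1, p0, q0, q1)
    new_p0 = sum(w * x for w, x in zip(a, taps)) + o1
    new_p1 = sum(w * x for w, x in zip(b, taps)) + o2
    new_q0 = sum(w * x for w, x in zip(c, taps)) + o1
    new_q1 = sum(w * x for w, x in zip(d, taps)) + o2
    return new_p1, new_p0, new_q0, new_q1

# Averaging coefficients smooth a step edge (8, 8 | 4, 4) across the boundary.
avg = (0.25, 0.25, 0.25, 0.25)
smoothed = filter_boundary_pixels(8, 8, 4, 4, avg, avg, avg, avg, 0.0, 0.0)
```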
  • FIG. 6 is a diagram for explaining the filter mode.
  • the filter mode m0 (DBF_SKIP) is a so-called skip mode, and is a mode that is applied when a specific condition that does not require filtering for removing distortion at the block boundary is satisfied.
  • This specific condition is a condition that a block boundary is a boundary of an object in an image, or a condition that distortion generated by encoding at the block boundary is assumed to be inconspicuous.
  • alternatively, the specific condition is that the motion vector and the reference picture used for motion compensation are the same in the adjacent blocks and no frequency coefficient (quantized value) is transmitted, or that an edge vector detected using the pixels of the adjacent blocks has a norm larger than a threshold and passes through the block boundary.
  • the filter mode m1 (DBF_INTRA_QUANT) is a so-called intra mode, and is a mode applied when one of the adjacent blocks is intra-coded.
  • the filter mode m2 (DBF_PRED_SIGNIF) is a mode applied when both adjacent blocks are inter-coded and the sum of the numbers of DCT coding coefficients (non-zero frequency coefficients) of both blocks is greater than a threshold.
  • the filter mode m3 is a mode applied when both adjacent blocks are inter-coded with the same (or substantially the same) motion vector and the total number of DCT coding coefficients of both blocks is 1 or more.
  • the filter mode m4 is a mode applied when both adjacent blocks are inter-coded with different motion vectors or are inter-template-matching coded, and the total number of DCT coding coefficients of both blocks is 1 or more.
  • inter template matching encoding is encoding in which, from among already encoded and decoded frames, positions of pixels similar to the already encoded and decoded pixels adjacent to the encoding target block (for example, the adjacent pixels on the left and above) are found, and the block at the lower right of those similar pixels is used as the predicted image.
  • the filter mode m5 (DBF_MOT_DISC) is a mode applied when both adjacent blocks are inter-coded with different motion vectors or are inter-template-matching coded, and neither block has a DCT coding coefficient.
  • Filter mode m6 is a mode applied when both adjacent blocks are inter-coded with different motion vectors and two intensity-related parameters are different between both blocks.
  • the intensity related parameter is a parameter for correcting the pixel value, and includes a scale Sc and an offset value Of.
  • Filter mode m7 (DBF_IC_INTERMED) is a mode applied when both adjacent blocks are inter-coded with the same motion vector and two intensity related parameters are different between both blocks.
  • Filter mode m8 (DBF_IC_WEAK) is a mode applied when both adjacent blocks are inter-coded with different motion vectors and at least one intensity-related parameter is different between both blocks.
  • the filter mode m9 (DBF_BS_PLUS) is a mode applied when both adjacent blocks are inter-coded with different motion vectors and the merge mode is used for two pixels at the block boundary.
  • the merge mode is a method of motion-compensating (motion-predicting) a predetermined area (for example, 2 pixels) in a block using the motion information (motion vector and reference image) of the block located in the direction (for example, left or above) indicated by an index.
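  • A minimal sketch of the mode selection described above, covering only a subset of the conditions m0 to m5 (the dictionary keys and the threshold value are assumptions made for illustration, not names from the patent):

```python
def select_filter_mode(p, q, thresh=4):
    # p and q describe the two blocks adjacent to the boundary.
    if p["intra"] or q["intra"]:
        return "m1"                    # DBF_INTRA_QUANT: one block intra-coded
    same_motion = p["mv"] == q["mv"] and p["ref"] == q["ref"]
    ncoef = p["ncoef"] + q["ncoef"]    # total non-zero DCT coefficients
    if same_motion and ncoef == 0:
        return "m0"                    # DBF_SKIP: same motion, nothing transmitted
    if ncoef > thresh:
        return "m2"                    # DBF_PRED_SIGNIF: many coefficients
    if same_motion:
        return "m3"                    # same motion, at least one coefficient
    if ncoef >= 1:
        return "m4"                    # different motion, at least one coefficient
    return "m5"                        # DBF_MOT_DISC: different motion, none
```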
  • in this way, for each block boundary in the image area, the noise removal filter 106 selects the filter mode corresponding to that block boundary from the predetermined types of filter modes according to the features (conditions) of the block boundary, and, for each filter mode, derives a common filter coefficient for the plurality of block boundaries in the image region that correspond to that filter mode.
  • since a common filter coefficient is derived for the plurality of block boundaries corresponding to a filter mode, it is not necessary to derive an individual filter coefficient for each of the block boundaries included in an image region such as a frame, and the amount of calculation for deriving the filter coefficients can be reduced.
  • the noise removal filter 106 calculates a filter coefficient using two evaluation values for each filter mode as described above.
  • One of the two evaluation values is an evaluation value E1 for objective image quality, and is a value corresponding to the mean square of the difference between the original image S and the filtered reconstructed image S′.
  • Another of the two evaluation values is an evaluation value E2 for subjective image quality, which is a value indicating the smoothness of the block boundary.
  • the noise removal filter 106 calculates the filter coefficient so that the evaluation value E1 becomes small and the evaluation value E2 becomes small.
  • the evaluation value E1 is calculated by the following (Formula 5).
  • in (Equation 5), p1,org, p0,org, q0,org, and q1,org are the pixel values of the pixels p1, p0, q0, and q1 in the original image S, respectively.
  • in (Equation 5), a calculation using the four pixels at a block boundary is shown, but the evaluation value E1 is calculated, for each filter mode, not only for those four pixels but for all pixels on all block boundaries corresponding to the filter mode included in an image region such as a frame.
  • the evaluation value E1 indicated by (Equation 5) is also used for calculating the filter coefficient of the Wiener filter. That is, in the Wiener filter, the filter coefficient is calculated so that the evaluation value E1 is the smallest.
  • the evaluation value E2 is calculated by the following (Equation 6). In (Equation 6), a calculation using the four pixels at a block boundary is shown, but the evaluation value E2 is calculated, for each filter mode, not only for those four pixels but for all pixels on all block boundaries corresponding to the filter mode included in an image region such as a frame.
  • the noise removal filter 106 calculates the filter coefficient for each filter mode by performing the following (Equation 7).
  • in (Equation 7), α is a weighting coefficient set according to the filter mode.
  • that is, the noise removal filter 106 calculates the filter coefficients so that the sum of the product of the evaluation value E1 and the weighting coefficient α and the product of the evaluation value E2 and (1 − α) is minimized.
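  • The cost being minimized can be sketched as follows. E1 follows the mean-square form of (Equation 5); the E2 smoothness term here is only a stand-in, since (Equation 6) is not reproduced in this excerpt:

```python
def e1_objective(orig, filtered):
    # E1: mean square difference between original and filtered pixel values.
    return sum((o - f) ** 2 for o, f in zip(orig, filtered)) / len(orig)

def e2_smoothness(p1, p0, q0, q1):
    # E2 stand-in (hypothetical): penalize the jump across the block boundary.
    return (p0 - q0) ** 2

def combined_cost(e1, e2, alpha):
    # (Equation 7): the filter coefficients minimize alpha*E1 + (1 - alpha)*E2.
    return alpha * e1 + (1 - alpha) * e2
```

With alpha = 1 the criterion reduces to E1 alone, i.e. the Wiener filter criterion used for the skip mode.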
  • when the filter mode is the skip mode, that is, the filter mode m0 (DBF_SKIP), the noise removal filter 106 derives the distortion removal filter coefficient using the arithmetic expression (Equation 5) used for deriving the filter coefficient of the Wiener filter.
  • the encoding efficiency and the image quality can be further improved.
  • the noise removal filter 106 sets the weighting coefficient α to 1 when calculating the skip mode filter coefficient.
  • the noise removal filter 106 treats the 4 × 4 pixels centered on the block boundary between the blocks p and q as a processing target block, and calculates a filter coefficient for removing noise in the processing target block. That is, the noise removal filter 106 calculates a filter coefficient (internal noise removal filter coefficient) for applying the Wiener filter to the processing target block.
  • FIG. 8 is a flowchart showing processing for calculating the filter coefficient by switching the weighting coefficient ⁇ for each filter mode.
  • the noise removal filter 106 determines whether or not the processing target filter mode selected in step S100 is the intra mode (DBF_INTRA_QUANT) (step S106).
  • the noise removal filter 106 determines that the mode is the intra mode (DBF_INTRA_QUANT) (Y in step S106)
  • a filter coefficient is calculated such that the Wiener filter strength is greater than the deblocking filter strength for the filter modes other than the skip mode and the intra mode.
  • a filter coefficient having both deblocking filter properties and Wiener filter properties is calculated.
  • the noise removal filter 106 determines whether or not there is an unselected filter mode among the above ten filter modes m0 to m9 (step S114). Here, if it is determined that there is an unselected filter mode (Y in step S114), the noise removal filter 106 repeatedly executes the processing from step S100; if it is determined that there is no unselected filter mode (N in step S114), it terminates the filter coefficient calculation process for the image area such as a frame.
  • in this way, according to the characteristics of the block boundary, filtering of the block boundary using a distortion removal filter coefficient (for example, a deblocking filter coefficient) and filtering of the processing target block using an internal noise removal filter coefficient (for example, a Wiener filter coefficient) are switched. Therefore, appropriate filtering can be selected and applied according to the state of the image, and the encoding efficiency and the image quality can be further improved.
  • the noise removal filter 106 determines, as the weighting coefficient α, the ratio between the strength for suppressing distortion at the block boundary and the strength for suppressing noise in the processing target block, according to the features of the block boundary (steps S104, S108, and S110), and derives the filter coefficients according to that ratio (step S112).
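  • The weighting-coefficient switching of FIG. 8 can be sketched as a per-mode lookup. Only the skip-mode value (α = 1, the pure Wiener criterion) is stated in the text; the other values below are placeholders, not values from the patent:

```python
SKIP, INTRA = "m0", "m1"

def weighting_coefficient(mode):
    # alpha = 1 weights only E1 (Wiener criterion); smaller alpha gives the
    # boundary-smoothness term E2 more influence (stronger deblocking).
    if mode == SKIP:
        return 1.0   # stated in the text for the skip mode
    if mode == INTRA:
        return 0.3   # placeholder value for the intra mode
    return 0.6       # placeholder value for the remaining modes
```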
  • FIG. 9 is a diagram showing filter coefficients for each filter mode.
  • as shown in FIG. 9, the noise removal filter 106 calculates, for each filter mode, the filter coefficients a1,m, …, a4,m, b1,m, …, b4,m, c1,m, …, c4,m, d1,m, …, d4,m, o1,m, and o2,m for an image region such as one frame. Further, the filter coefficients for each filter mode are calculated independently for each filtering direction (horizontal direction and vertical direction) and for each color component (luminance component Y, color difference component U, and color difference component V).
  • before performing filtering using the filter coefficients calculated as described above, the noise removal filter 106 determines whether or not to perform filtering on each of the candidate pixels p1, p0, q0, and q1 in the reconstructed image S′ by performing the operations shown in the following (Equation 8) to (Equation 11).
  • for example, when the pixel values p2,s′ and p0,s′ of the pixels p2 and p0 in the reconstructed image S′ satisfy the condition of the following (Equation 10), the noise removal filter 106 determines that filtering should be performed on the pixel p1 of the reconstructed image S′.
  • the noise removal filter 106 may adaptively determine the offset values (Offset A and Offset B ).
  • the noise removal filter 106 determines the offset value depending on whether or not the blocks p and q adjacent to each other are motion-compensated at different positions, that is, whether or not motion compensation is performed with different motion vectors or different reference images. If the noise removal filter 106 determines that motion compensation is performed at different positions, it increases the offset value from a predetermined value so that a candidate pixel is selected as a filtering target. On the other hand, if it determines that motion compensation is performed at the same position, the noise removal filter 106 decreases the offset value from the predetermined value so that a candidate pixel is not selected as a filtering target.
  • the noise removal filter 106 may determine an offset value depending on whether or not the blocks p and q adjacent to each other are encoded in different prediction modes.
  • the noise removal filter 106 determines that the blocks p and q are encoded in different prediction modes when the block p is encoded in the intra prediction mode and the block q is encoded in the inter prediction mode. Further, the noise removal filter 106 determines that the blocks p and q are encoded in different prediction modes when the block p is encoded in the vertical intra prediction mode and the block q is encoded in the horizontal intra prediction mode.
  • when determining that the blocks are encoded in different prediction modes, the noise removal filter 106 increases the offset value from a predetermined value so that a candidate pixel is selected as a filtering target.
  • on the other hand, when determining that the blocks are encoded in the same prediction mode, the noise removal filter 106 decreases the offset value from the predetermined value so that a candidate pixel is not selected as a filtering target.
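  • The adaptive offset determination above can be sketched as follows. The field names and the adjustment amount `delta` are assumptions; the text specifies only the direction of the adjustment (different motion or different prediction modes make filtering more likely):

```python
def adapt_offset(base, p, q, delta=1):
    # Raise the offset from the predetermined base value when boundary
    # distortion is more likely, so that candidate pixels are more likely
    # to be selected as filtering targets; lower it otherwise.
    different_motion = p["mv"] != q["mv"] or p["ref"] != q["ref"]
    different_pred = p["pred_mode"] != q["pred_mode"]
    if different_motion or different_pred:
        return base + delta
    return base - delta
```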
  • since the offset value determined in this way depends only on features of the image, the same value is determined in the image encoding device 100 and in the image decoding device that decodes the encoded bitstream. Therefore, the image encoding device 100 does not need to transmit the offset value to the image decoding device.
  • the image encoding apparatus 100 may determine an arbitrary offset value and transmit the offset value included in the encoded bitstream.
  • in this case, the noise removal filter 106 of the image encoding device 100 can adjust the image quality in consideration of the code amount, the quantization error, and the like. Further, the noise removal filter 106 of the image encoding device 100 may calculate the difference between an arbitrary offset value and the offset value determined by the motion compensation or prediction mode, and transmit the difference included in the encoded bitstream. In this case, the image quality can be adjusted with an arbitrary offset value, and an increase in the code amount of the encoded bitstream can be suppressed.
  • the noise removal filter 106 derives a prediction value for each of the filter coefficients a1,m, …, a4,m, b1,m, …, b4,m, c1,m, …, c4,m, d1,m, …, d4,m, o1,m, and o2,m (prediction target filter coefficients) by predicting that filter coefficient. Furthermore, the noise removal filter 106 calculates the difference between the prediction target filter coefficient and the prediction value for each prediction target filter coefficient. Then, the noise removal filter 106 quantizes the difference (filter prediction error coefficient) and outputs the result to the entropy encoding unit 103. The entropy encoding unit 103 acquires the quantized filter prediction error coefficient output from the noise removal filter 106, performs entropy encoding on it, and inserts it into the encoded bitstream.
  • for example, the noise removal filter 106 uses a predetermined fixed value as the prediction value for each of the filter coefficients a1,m, …, a4,m and c1,m, …, c4,m. Moreover, the noise removal filter 106 uses the filter coefficients a1,m, …, a4,m and c1,m, …, c4,m as the prediction values for the filter coefficients b1,m, …, b4,m and d1,m, …, d4,m, respectively.
  • that is, by predicting the filter coefficient b1,m, the noise removal filter 106 derives the prediction value, which is the filter coefficient a1,m, and by predicting the filter coefficient d1,m, it derives the prediction value, which is the filter coefficient c1,m.
  • then, the noise removal filter 106 quantizes the difference between the prediction target filter coefficient b1,m and the prediction value a1,m as a filter prediction error coefficient, and quantizes the difference between the prediction target filter coefficient d1,m and the prediction value c1,m as a filter prediction error coefficient.
  • alternatively, the noise removal filter 106 may use the values obtained by multiplying each of the filter coefficients a1,m, …, a4,m and c1,m, …, c4,m by a constant as the prediction values for the filter coefficients b1,m, …, b4,m and d1,m, …, d4,m, respectively.
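  • The coefficient prediction and quantization described above can be sketched as a round trip (the step size and coefficient values are illustrative; as in the text, the a-coefficients are predicted from a fixed value and the b-coefficients from the corresponding a-coefficients):

```python
def quantize_with_prediction(coeffs, preds, step):
    # Filter prediction error coefficient = quantized (coefficient - prediction).
    return [round((c - p) / step) for c, p in zip(coeffs, preds)]

def restore(qerrs, preds, step):
    # Decoder side: prediction value + dequantized prediction error.
    return [p + q * step for p, q in zip(preds, qerrs)]

a = [0.50, 0.25]
b = [0.60, 0.20]
step = 0.05
fixed = [0.0, 0.0]                         # fixed prediction values for a
qa = quantize_with_prediction(a, fixed, step)
qb = quantize_with_prediction(b, a, step)  # a-coefficients predict b-coefficients
a_dec = restore(qa, fixed, step)
b_dec = restore(qb, a_dec, step)
```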
  • the noise removal filter 106 can use twelve different values as the quantization step size used for quantization of the filter prediction error coefficients. The quantization step size is the same value for all filter prediction error coefficients (filter coefficients) corresponding to one color component and one filtering direction, and is entropy encoded, included in the encoded bitstream, and transmitted to the image decoding apparatus.
  • the noise removal filter 106 may select the quantization step size used for quantization from among the twelve values by RD (Rate-Distortion) optimization. That is, the noise removal filter 106 selects the quantization step size so that the difference between the original image S and the filtered reconstructed image S′ is small and the trade-off between code amount and distortion is optimal.
  • FIG. 10 is a diagram for explaining the combination of the filter modes.
  • the noise removal filter 106 counts the number of boundaries corresponding to each of the filter modes m0 to m6 in the image area. As shown in FIG. 10, when the number of boundaries of the filter mode m5 is the smallest and the number of boundaries of the filter mode m6 is the second smallest, the noise removal filter 106 combines the filter mode m5 with the filter mode m6. In other words, the noise removal filter 106 changes the filter mode m5 to the filter mode m6. As a result, the noise removal filter 106 does not need to derive and encode a filter coefficient group including a plurality of filter coefficients individually for each of the filter mode m5 and the filter mode m6, and can derive and encode a single filter coefficient group common to the combined filter modes (the filter mode m5 and the filter mode m6).
  • the filter coefficient group described above includes the filter coefficients a1,m, …, a4,m, b1,m, …, b4,m, c1,m, …, c4,m, d1,m, …, d4,m, o1,m, and o2,m.
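  • The boundary counting and combining step can be sketched as follows, merging the least-used mode into the second least-used, as in the m5 → m6 example above:

```python
from collections import Counter

def combine_rarest_mode(boundary_modes):
    # boundary_modes: one mode label per block boundary in the image area.
    counts = Counter(boundary_modes)
    if len(counts) < 2:
        return list(boundary_modes)
    ordered = sorted(counts, key=lambda m: counts[m])
    rarest, runner_up = ordered[0], ordered[1]
    # Relabel every boundary of the rarest mode, so a single filter
    # coefficient group serves both modes.
    return [runner_up if m == rarest else m for m in boundary_modes]
```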
  • for each filter mode, the noise removal filter 106 outputs to the entropy encoding unit 103 a flag (0 or 1) indicating whether or not the filter mode is applied to the image area. Further, when at least one filter mode is applied, the noise removal filter 106 outputs to the entropy encoding unit 103 the number of times the filter modes are combined (the number of combinations), the quantization step size, and, for each applied filter mode, the filter coefficient group (a plurality of quantized filter prediction error coefficients). As a result, the number of combinations, the quantization step size, and the filter coefficient groups are entropy encoded, inserted into the encoded bitstream, and transmitted to the image decoding apparatus.
  • NUM_LOOP_MODES_DBF indicates a predetermined number of filter modes (for example, 10).
  • yuv indicates a color component
  • hv is a flag indicating a filtering direction (horizontal direction or vertical direction).
  • mode_on[yuv][hv][mode] is a flag indicating whether or not filtering is performed according to the color component, filtering direction, and filter mode indicated by [yuv], [hv], and [mode]. That is, mode_on[yuv][hv][mode] is inserted into the encoded bitstream for each combination of color component, filtering direction, and filter mode.
  • fixed_filter[yuv][hv][mode] may be used as an implicit filter coefficient group.
  • fixed_filter[yuv][hv][mode] may be replaced with diff_fixed_filter[yuv][hv][mode].
  • diff_fixed_filter[yuv][hv][mode] is, for example, the difference between the fixed_filter[yuv][hv][mode] to be processed and the fixed_filter[yuv][hv][mode] processed immediately before.
  • the image encoding apparatus 100 and the image decoding apparatus each derive fixed_filter[yuv][hv][mode] using diff_fixed_filter[yuv][hv][mode].
  • the code amount of the encoded bit stream can be reduced.
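  • The delta coding of the fixed_filter values can be sketched as follows (the coefficient values are illustrative; as stated above, each diff_fixed_filter entry is the difference from the group processed immediately before):

```python
def to_diffs(filter_groups):
    # Encoder side: the first group is sent as-is, later groups as differences.
    diffs = [list(filter_groups[0])]
    for prev, cur in zip(filter_groups, filter_groups[1:]):
        diffs.append([c - p for c, p in zip(cur, prev)])
    return diffs

def from_diffs(diffs):
    # Decoder side: accumulate the differences to restore the groups.
    groups = [list(diffs[0])]
    for d in diffs[1:]:
        groups.append([p + dd for p, dd in zip(groups[-1], d)])
    return groups

groups = [[1, 2], [2, 2], [2, 1]]
```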
  • FIG. 12 is a block diagram of the image decoding apparatus in the present embodiment.
  • the image decoding device 200 is a device that decodes the encoded bitstream generated by the image encoding device 100, and includes an entropy decoding unit 201, an inverse quantization inverse transform unit 202, an adder 203, a prediction unit 204, and a noise removal filter 206.
  • the entropy decoding unit 201 sequentially generates quantization blocks including a plurality of quantization values by performing entropy decoding on the encoded bitstream. Further, the entropy decoding unit 201 extracts a filter coefficient group (distortion removal filter coefficients) corresponding to the block boundary from the encoded bitstream in accordance with the feature (filter mode) of the block boundary, and outputs the filter coefficient group to the noise removal filter 206.
  • each filter coefficient included in the filter coefficient group is inserted into the encoded bitstream as a filter prediction error coefficient that is quantized and entropy-coded.
  • when extracting the filter coefficients from the encoded bitstream, the entropy decoding unit 201 performs entropy decoding on the filter prediction error coefficients and extracts them from the encoded bitstream. Furthermore, the entropy decoding unit 201 extracts the number of combinations and the quantization step size from the encoded bitstream and outputs them to the noise removal filter 206 together with the above-described filter coefficients (quantized filter prediction error coefficients).
  • the inverse quantization inverse transform unit 202 generates a coefficient block including a plurality of frequency coefficients by performing inverse quantization on each of the plurality of quantization values included in the quantization block. Further, the inverse quantization inverse transform unit 202 generates the prediction error image e′ in units of blocks by performing an inverse orthogonal transform (inverse frequency transform) such as an inverse discrete cosine transform on the coefficient block.
  • the adder 203 acquires the prediction error image e′ and the predicted image Ŝ for each block, and generates the reconstructed image S′ by adding them. As a result, the encoded blocks included in the encoded bitstream are sequentially reconstructed.
  • the noise removal filter 206 acquires the reconstructed image S′ and the predicted image Ŝ from the adder 203 and the prediction unit 204, and further acquires the filter coefficients, the number of combinations, and the quantization step size from the entropy decoding unit 201.
  • the filter coefficients are acquired from the entropy decoding unit 201 as the quantized filter prediction error coefficients described above. Therefore, the noise removal filter 206 dequantizes each filter prediction error coefficient using the quantization step size. Further, as described above, the noise removal filter 206 derives a prediction value by predicting the filter coefficient, and restores the filter coefficient by adding the prediction value and the dequantized filter prediction error coefficient. In addition, the noise removal filter 206 repeats the filter mode combination the number of combination times.
  • like the noise removal filter 106 of the image encoding device 100, the noise removal filter 206 performs filtering on the block boundaries of the reconstructed image S′ and on the processing target blocks corresponding to those boundaries, based on the reconstructed image S′, the predicted image Ŝ, and the filter coefficient groups.
  • the noise removal filter 206 removes block boundary distortion and noise included in the processing target block. That is, the noise removal filter 206 functions as a deblocking filter and a Wiener filter.
  • the prediction unit 204 generates a predicted image Ŝ in units of blocks by performing intra prediction or inter prediction using the reconstructed image S′ from which noise has been removed by the noise removal filter 206.
  • the image decoding device 20 in FIG. 2A corresponds to the image decoding device 200 in the present embodiment.
  • the reconstruction unit 21 in FIG. 2A consists of the inverse quantization inverse transform unit 202 and the adder 203 in the present embodiment.
  • the extraction unit 22 in FIG. 2A consists of part of the function of the entropy decoding unit 201 in the present embodiment.
  • the filtering unit 23 in FIG. 2A corresponds to the noise removal filter 206 in the present embodiment.
  • in this way, the filter coefficients (distortion removal filter coefficients) are extracted from the encoded bitstream in accordance with the features of the block boundaries, and filtering using those filter coefficients is performed on the block boundaries. Therefore, the filter coefficients used for filtering at the time of encoding a moving image can also be used for filtering at the time of decoding, and as a result, the image quality of the decoded image can be improved.
  • the noise removal filter 106 in the above embodiment performs filtering on pixels at a block boundary (a plurality of pixels arranged on the left and right or top and bottom with the block boundary in the middle).
  • in this modification, the noise removal filter 106 performs filtering not only on the pixels at the block boundary but also on pixels inside the processing target block q.
  • FIG. 13 is a diagram for explaining pixels to be filtered in the present modification.
  • the noise removal filter 106 performs filtering on the pixels p0, p1, q0, and q1 arranged on the left and right with the block boundary in the middle, and on the pixels q2 to qn−3 inside the processing target block q.
  • the processing target block q is composed of n × n pixels.
  • the noise removal filter 106 performs filtering on the pixels q2 to qn−3 in the processing target block q by performing the calculations of the following (Equation 13) to (Equation 16).
  • alternatively, the noise removal filter 106 may calculate the filter coefficients by (Equation 5) to (Equation 7) and substitute the calculated filter coefficients into (Equation 13) to (Equation 16), thereby performing filtering on the pixels q2 to qn−3 in the processing target block q.
  • the noise removal filter 206 of the image decoding apparatus 200 may perform filtering in the same manner as the noise removal filter 106 according to this modification.
  • the noise removal filter 106 may calculate the evaluation value E1 by the following (Equation 17) or (Equation 18) instead of the above (Equation 12).
  • the weighting factor ⁇ corresponding to the filter mode is set.
  • the calculation method of the evaluation value E1 and the evaluation value E2 may be set or selected according to the filter mode.
  • a calculation formula for the evaluation value E1 may be selected from (Equation 5), (Equation 12), (Equation 17), and (Equation 18) according to the filter mode, and the evaluation value E1 may be calculated based on the selected formula.
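The per-mode selection of the evaluation formula can be sketched as follows. (Equation 5), (Equation 12), (Equation 17), and (Equation 18) are not reproduced in this excerpt, so generic squared-error and absolute-error measures stand in for them; the mode-to-formula mapping is likewise hypothetical, and only the selection mechanism is illustrated.

```python
# Hedged sketch: stand-in error measures replace the patent's equations for E1.

def sse(orig, filt):                      # stand-in for one evaluation formula
    return sum((o - f) ** 2 for o, f in zip(orig, filt))

def sad(orig, filt):                      # stand-in for another formula
    return sum(abs(o - f) for o, f in zip(orig, filt))

# assumed mapping from filter mode to the formula used for E1
E1_FORMULA = {"m0": sse, "m1": sse, "m2": sad, "m3": sad}

def evaluation_value_e1(mode, orig, filt):
    """Compute E1 with the calculation formula selected by the filter mode."""
    return E1_FORMULA[mode](orig, filt)

orig = [10, 12, 14]
filt = [11, 12, 16]
print(evaluation_value_e1("m0", orig, filt))   # squared error → 5
print(evaluation_value_e1("m2", orig, filt))   # absolute error → 3
```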
  • (Modification 2) Here, a second modification of the present embodiment will be described.
  • the number of block boundaries (boundary count) corresponding to each filter mode in the image area may be counted, and filter modes may be combined according to the boundary count.
  • the noise removal filter 106 combines filter modes having similar conditions shown in FIG. 6, that is, filter modes having similar block boundary features. Then, the noise removal filter 106 refers to a filter mode combination table indicating combination relationships of filter modes and the indices (combined indices) that specify them, determines the combined index corresponding to the combined filter modes, and outputs it to the entropy encoding unit 103.
  • the combined index is included in the encoded bitstream and transmitted to the image decoding apparatus 200.
  • the noise removal filter 206 of the image decoding apparatus 200 holds the same table as the filter mode combination table referred to by the noise removal filter 106. Therefore, by acquiring the combined index included in the encoded bitstream and referring to the filter mode combination table, the noise removal filter 206 can determine which filter modes are combined.
  • FIG. 14 is a diagram showing a filter mode combination table.
  • the combination index Idxm0 indicating that no filter mode is combined
  • the filter mode m3 is combined with the filter mode m2.
  • a combined index Idxm1 indicating that the filter mode m3 is combined with the filter mode m2
  • a combined index Idxm2 indicating that the filter mode m6 is combined with the filter mode m5.
  • the noise removal filter 106 first determines, by RD optimization or the like, whether or not at least one of these filter modes should be combined with another filter mode. If it determines that no modes should be combined, the noise removal filter 106 calculates a filter coefficient for each filter mode and performs filtering. Then, the noise removal filter 106 outputs the combined index Idxm0 to the entropy encoding unit 103, thereby inserting the combined index Idxm0 into the encoded bitstream. The noise removal filter 206 of the image decoding apparatus 200 acquires the combined index Idxm0 and can determine from it that no filter mode is combined. As a result, the noise removal filter 206 of the image decoding apparatus 200 performs filtering using the filter coefficient corresponding to each of the seven filter modes.
  • when the noise removal filter 106 determines that modes should be combined, it combines at least one of the filter modes with another filter mode. For example, if the block boundary feature corresponding to the filter mode m3 is similar to the block boundary feature corresponding to the filter mode m2, the noise removal filter 106 combines the filter mode m3 with the filter mode m2. In other words, the noise removal filter 106 applies the filtering of the filter mode m2 to block boundaries that satisfy the conditions of the filter mode m3. Then, the noise removal filter 106 outputs the combined index Idxm1 indicating the content of the combination to the entropy encoding unit 103, thereby including the combined index Idxm1 in the encoded bitstream.
  • the noise removal filter 206 of the image decoding apparatus 200 acquires the combined index Idxm1 from the encoded bitstream and can determine, based on the combined index Idxm1 and the filter mode combination table, that the filter mode m3 is combined with the filter mode m2. As a result, the noise removal filter 206 of the image decoding apparatus 200 performs filtering using the filter coefficient corresponding to each filter mode except the filter mode m3.
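The shared filter mode combination table described above might be modeled as follows. The indices Idxm0 to Idxm2 and the combinations they denote come from the text; the dictionary representation is an assumption about one way the encoder and decoder could both hold the same table.

```python
# Hedged sketch of the filter mode combination table of FIG. 14.
# Each entry maps a combined filter mode to the mode whose coefficients it uses.
COMBINATION_TABLE = {
    "Idxm0": {},                 # no filter mode is combined
    "Idxm1": {"m3": "m2"},       # filter mode m3 is combined with m2
    "Idxm2": {"m6": "m5"},       # filter mode m6 is combined with m5
}

def effective_mode(combined_index, mode):
    """Return the filter mode whose coefficients are actually applied."""
    return COMBINATION_TABLE[combined_index].get(mode, mode)

print(effective_mode("Idxm0", "m3"))   # → m3 (no combination)
print(effective_mode("Idxm1", "m3"))   # → m2 (m3 uses m2's coefficients)
```

Because both sides hold the same table, the encoder only needs to transmit the combined index for the decoder to recover which modes share a filter coefficient group.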
  • since filter modes are combined according to the features of the block boundaries, the code amount of the filter coefficients can be reduced, and more appropriate filtering can be performed to improve the coding efficiency and the image quality.
  • (Modification 3) Here, a third modification of the present embodiment will be described.
  • the filter modes are combined and a common filter coefficient group is inserted into the encoded bit stream for the plurality of filter modes.
  • the overhead may be reduced by other methods.
  • overhead is reduced by using a list and a filter coefficient designation value.
  • FIG. 15 is an explanatory diagram for explaining a list and filter coefficient designation values according to this modification.
  • the noise removal filter 106 first outputs a list of filter coefficient groups used for filtering to the entropy encoding unit 103.
  • this list includes the plurality of filter coefficient groups (gc1, gc2, ..., gck) used for filtering, and each filter coefficient included in a filter coefficient group is expressed as a filter prediction error coefficient quantized with the above-described quantization step size. Therefore, the noise removal filter 106 outputs the quantization step size together with the above list. As a result, the list and the quantization step size are inserted into the encoded bit stream.
  • the noise removal filter 106 outputs a filter coefficient designated value corresponding to the filter mode for each filter mode.
  • the filter coefficient designation value corresponding to a given filter mode indicates 0 or a value of 1 or more. When it indicates a value of 1 or more, the filter coefficient designation value specifies, from the list, the filter coefficient group (for example, a group of 18 filter coefficients) corresponding to that filter mode. For example, when the filter coefficient designation value corresponding to a filter mode is 1, it indicates that the filter coefficient group gc1 transmitted first in the list is used for that filter mode; when the filter coefficient designation value corresponding to another filter mode is 2, it indicates that the filter coefficient group gc2 transmitted second in the list is used for that other filter mode.
  • for each filter mode, the noise removal filter 106 outputs to the entropy encoding unit 103 a filter coefficient designation value, which is a value specifying the filter coefficient group used for filtering by that filter mode for one color component and one filtering direction. Further, the number of filter coefficient groups included in the above list is equal to the maximum value of the filter coefficient designation values.
  • the entropy encoding unit 103 entropy-encodes the filter coefficient designation value, quantization step size, and list output as described above and inserts them into the encoded bitstream.
  • the entropy decoding unit 201 of the image decoding apparatus 200 performs entropy decoding on the above-described filter coefficient designation value, quantization step size, and list included in the encoded bitstream. Then, the noise removal filter 206 performs inverse quantization using a quantization step size for each of the quantized filter prediction error coefficients included in the list. Further, the noise removal filter 206 restores a filter coefficient (filter coefficient group) by adding a prediction value to each of the inversely quantized filter prediction error coefficients. For each filter mode, the noise removal filter 206 selects a filter coefficient group from the list based on the filter coefficient designation value corresponding to the filter mode, and performs filtering using the filter coefficient group.
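The decoder-side restoration just described (inverse quantization of each filter prediction error coefficient, addition of a prediction value, then group selection by the designation value) can be sketched as follows. The step size, prediction values, and coefficient counts below are hypothetical; the excerpt does not give the concrete prediction values.

```python
# Hedged sketch of decoder-side filter coefficient restoration. Each list
# entry is a quantized filter prediction error coefficient: it is
# inverse-quantized with the quantization step size, then a prediction value
# is added to recover the filter coefficient.

def restore_group(quantized_errors, step_size, predictions):
    # inverse quantization followed by addition of the prediction value
    return [qe * step_size + p for qe, p in zip(quantized_errors, predictions)]

def select_group(groups, designation_value):
    """A designation value k >= 1 selects the k-th group transmitted in the list."""
    return groups[designation_value - 1]

step = 0.125                                    # assumed quantization step size
predictions = [0.25, 0.5, 0.25]                 # assumed prediction values
coded_list = [[-2, 0, 2], [4, -4, 0]]           # quantized prediction errors
groups = [restore_group(g, step, predictions) for g in coded_list]

gc1 = select_group(groups, 1)   # filter coefficient designation value 1 → gc1
print(gc1)
```

Since only small integer prediction errors and one step size travel in the bitstream, the overhead is lower than transmitting each full-precision coefficient group per filter mode.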
  • the weighting factor ⁇ corresponding to the filter mode is set for each filter mode, but the tap length corresponding to the filter mode may be set for each filter mode.
  • filtering is performed using four pixels (4 taps) as shown in (Expression 1) to (Expression 4).
  • filtering may be performed using six pixels (6 taps).
  • these weighting factors are merely examples, and other weighting factors may be set.
  • when the filter mode is the skip mode (DBF_SKIP), a common Wiener filter may be applied, or Wiener filters having different strengths or properties, that is, Wiener filters having different filter coefficients and tap lengths, may be applied to each processing target block or each block boundary corresponding to the skip mode.
  • the image encoding apparatus 100 inserts into the encoded bitstream an index indicating the strength or property of the applied Wiener filter.
  • the image decoding apparatus 200 can also apply a Wiener filter having the same strength or property as the Wiener filter applied by the image encoding apparatus 100.
  • ten types of filter modes are used, but eleven or more types of filter modes or nine or less types of filter modes may be used.
  • a frame is used as an example of an image area.
  • the image area may be a slice or a sequence.
  • the noise removal filter 106 of the image coding apparatus 100 calculates a filter coefficient group by performing the calculation of (Equation 7), and selects, from among at least one candidate filter coefficient group held in advance, the candidate closest to the calculated filter coefficient group.
  • the noise removal filter 106 performs filtering using the selected candidate filter coefficient group, and outputs an index for identifying or specifying the filter coefficient group to the entropy encoding unit 103.
  • the entropy encoding unit 103 performs entropy encoding on the index and inserts the index into the encoded bit stream.
  • the entropy decoding unit 201 of the image decoding apparatus 200 extracts an index from the encoded bitstream by entropy decoding the encoded bitstream, and outputs the index to the noise removal filter 206.
  • the noise removal filter 206 selects the filter coefficient group identified or specified by the extracted index from among at least one candidate filter coefficient group held in advance, and performs filtering using the selected filter coefficient group.
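The candidate-selection variant above can be sketched as follows. The excerpt only says the closest candidate is chosen, so squared Euclidean distance is assumed here as the closeness measure, and the candidate groups are hypothetical.

```python
# Hedged sketch: the encoder computes a filter coefficient group, picks the
# closest pre-held candidate, and signals only its index; the decoder holds
# the same candidates and recovers the group from the index.

def closest_candidate(candidates, computed):
    def dist(c):
        # assumed closeness measure: squared Euclidean distance
        return sum((a - b) ** 2 for a, b in zip(c, computed))
    return min(range(len(candidates)), key=lambda i: dist(candidates[i]))

CANDIDATES = [            # held in advance by both encoder and decoder
    [0.25, 0.50, 0.25],
    [0.10, 0.80, 0.10],
]

computed = [0.12, 0.76, 0.12]          # e.g. the result of (Equation 7)
index = closest_candidate(CANDIDATES, computed)   # signaled in the bitstream
decoded_group = CANDIDATES[index]                 # decoder-side lookup
print(index, decoded_group)
```

Only the index needs entropy coding, which keeps the coefficient overhead constant regardless of coefficient precision, at the cost of quantizing the filter to the nearest held candidate.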
  • the filter coefficient group held in advance may be a filter coefficient group that has already been calculated.
  • the noise removal filter 106 assigns an index to the calculated filter coefficient group and holds the indexed filter coefficient group.
  • the noise removal filter 106 performs filtering using the filter coefficient group to which the index is attached, and outputs the filter coefficient group to which the index is attached to the entropy coding unit 103.
  • the entropy encoding unit 103 performs entropy encoding on the filter coefficient group to which the index is attached and inserts the filter coefficient group into the encoded bitstream.
  • the entropy decoding unit 201 of the image decoding device 200 performs entropy decoding on the encoded bit stream, extracts a filter coefficient group with an index from the encoded bit stream, and outputs the filter coefficient group to the noise removal filter 206.
  • the noise removal filter 206 holds the extracted filter coefficient group with an index and performs filtering using the filter coefficient group.
  • the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
  • FIG. 16 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
  • the communication service providing area is divided into desired sizes, and base stations ex106 to ex110, which are fixed radio stations, are installed in each cell.
  • in the content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected via the Internet ex101, an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
  • each device may be directly connected to the telephone network ex104 without going through the base stations ex106 to ex110 which are fixed wireless stations.
  • the devices may be directly connected to each other via short-range wireless or the like.
  • the camera ex113 is a device that can shoot moving images such as a digital video camera
  • the camera ex116 is a device that can shoot still images and movies such as a digital camera.
  • the mobile phone ex114 may be a GSM (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an HSPA (High Speed Packet Access) phone, or a PHS (Personal Handyphone System) phone.
  • the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
  • the content (for example, a live music video)
  • the streaming server ex103 streams the transmitted content data to clients that have made requests.
  • the client include a computer ex111, a PDA ex112, a camera ex113, a mobile phone ex114, a game machine ex115, and the like that can decode the encoded data.
  • Each device that has received the distributed data decodes and reproduces the received data.
  • the encoding processing of the shot data may be performed by the camera ex113 or by the streaming server ex103 that performs the data transmission processing, or may be shared between them.
  • the decoding processing of the distributed data may be performed by the client or by the streaming server ex103, or may be shared between them.
  • still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
  • the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
  • these encoding processing and decoding processing are generally performed in a computer ex111 and an LSI (Large Scale Integration) ex500 included in each device.
  • the LSI ex500 may be configured as a single chip or a plurality of chips.
  • image encoding and image decoding software may be incorporated into some recording medium (CD-ROM, flexible disk, hard disk, etc.) that can be read by the computer ex111 or the like, and the encoding processing and decoding processing may be performed using that software.
  • when the mobile phone ex114 is equipped with a camera, moving image data acquired by that camera may be transmitted; the moving image data at this time is data encoded by the LSI ex500 included in the mobile phone ex114.
  • the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
  • the encoded data can be received and reproduced by the client.
  • the information transmitted by the user can be received, decoded, and reproduced in real time by the client, and even a user who does not have special rights or facilities can realize personal broadcasting.
  • the image encoding method or the image decoding method shown in the above embodiment may be used for encoding and decoding of each device constituting the content supply system.
  • FIG. 17 is a diagram illustrating the mobile phone ex114 using the image encoding method and the image decoding method described in the above embodiment.
  • the mobile phone ex114 includes an antenna ex601 for transmitting and receiving radio waves to and from the base station ex110; a camera unit ex603, such as a CCD camera, capable of shooting video and still images; a display unit ex602, such as a liquid crystal display, that displays data obtained by decoding the video shot by the camera unit ex603, the video received by the antenna ex601, and the like; a main body unit including a group of operation keys ex604; an audio output unit ex608, such as a speaker, for outputting audio; an audio input unit ex605, such as a microphone, for inputting audio; a recording medium ex607 for storing encoded or decoded data, such as data of shot moving images or still images, data of received e-mails, and moving image or still image data; and a slot unit ex606 for allowing the recording medium ex607 to be attached to the mobile phone ex114.
  • the recording medium ex607 houses, in a plastic case such as an SD card, a flash memory element, which is a kind of EEPROM, a nonvolatile memory that can be electrically rewritten and erased.
  • in the mobile phone ex114, a main control unit ex711 configured to centrally control each unit of the main body including the display unit ex602 and the operation keys ex604 is connected, via a synchronization bus ex713, to a power supply circuit unit ex710, an operation input control unit ex704, an image encoding unit ex712, a camera interface unit ex703, an LCD (Liquid Crystal Display) control unit ex702, an image decoding unit ex709, a demultiplexing unit ex708, a recording/reproducing unit ex707, a modulation/demodulation circuit unit ex706, and an audio processing unit ex705.
  • the power supply circuit unit ex710 activates the camera-equipped digital mobile phone ex114 into an operable state by supplying power to each unit from a battery pack.
  • the cellular phone ex114 converts the audio signal collected by the audio input unit ex605 in the audio call mode into digital audio data by the audio processing unit ex705 based on the control of the main control unit ex711 including a CPU, a ROM, a RAM, and the like.
  • the modulation/demodulation circuit unit ex706 then performs spread spectrum processing on the digital audio data, the transmission/reception circuit unit ex701 performs digital-to-analog conversion processing and frequency conversion processing, and the result is transmitted via the antenna ex601.
  • the mobile phone ex114 amplifies the received data received by the antenna ex601 in the voice call mode and performs frequency conversion processing and analog-to-digital conversion processing; the modulation/demodulation circuit unit ex706 performs spectrum despreading processing, and after the audio processing unit ex705 converts the result into analog audio data, it is output via the audio output unit ex608.
  • text data of the e-mail input by operating the operation key ex604 on the main body is sent to the main control unit ex711 via the operation input control unit ex704.
  • the main control unit ex711 performs spread spectrum processing on the text data in the modulation / demodulation circuit unit ex706, performs digital analog conversion processing and frequency conversion processing in the transmission / reception circuit unit ex701, and then transmits the text data to the base station ex110 via the antenna ex601.
  • the image data captured by the camera unit ex603 is supplied to the image encoding unit ex712 via the camera interface unit ex703.
  • the image data captured by the camera unit ex603 can be directly displayed on the display unit ex602 via the camera interface unit ex703 and the LCD control unit ex702.
  • the image encoding unit ex712 is configured to include the image encoding device described in the present invention; it compresses and encodes the image data supplied from the camera unit ex603 using the encoding method of the image encoding device described in the above embodiment, converts it into encoded image data, and sends this to the demultiplexing unit ex708. At the same time, the mobile phone ex114 sends the audio collected by the audio input unit ex605 during shooting by the camera unit ex603 to the demultiplexing unit ex708 via the audio processing unit ex705 as digital audio data.
  • the demultiplexing unit ex708 multiplexes the encoded image data supplied from the image encoding unit ex712 and the audio data supplied from the audio processing unit ex705 by a predetermined method, and the resulting multiplexed data is a modulation / demodulation circuit unit Spread spectrum processing is performed in ex706, digital analog conversion processing and frequency conversion processing are performed in the transmission / reception circuit unit ex701, and then transmission is performed via the antenna ex601.
  • the received data received from the base station ex110 via the antenna ex601 is subjected to spectrum despreading processing by the modulation/demodulation circuit unit ex706, and the resulting multiplexed data is sent to the demultiplexing unit ex708.
  • the demultiplexing unit ex708 separates the multiplexed data into a bit stream of image data and a bit stream of audio data, and supplies the encoded image data to the image decoding unit ex709 and the audio data to the audio processing unit ex705 via the synchronization bus ex713.
  • the image decoding unit ex709 is configured to include the image decoding device described in the present application; it generates reproduced moving image data by decoding the bit stream of the image data with a decoding method corresponding to the encoding method described in the above embodiment, and supplies this to the display unit ex602 via the LCD control unit ex702, whereby, for example, moving image data included in a moving image file linked to a home page is displayed.
  • the audio processing unit ex705 converts the audio data into analog audio data, and then supplies the analog audio data to the audio output unit ex608.
  • the audio data included in the moving image file linked to the home page is reproduced.
  • an image decoding device can also be incorporated into a digital broadcasting system. Specifically, in the broadcasting station ex201, audio data, video data, or a bit stream in which those data are multiplexed is transmitted via radio waves to a communication or broadcasting satellite ex202. On receiving it, the broadcasting satellite ex202 transmits radio waves for broadcasting, a home antenna ex204 having satellite broadcast receiving equipment receives the radio waves, and a device such as the television (receiver) ex300 or the set top box (STB) ex217 decodes the bit stream and reproduces it.
  • the image decoding apparatus or the image encoding apparatus described in the above embodiment can also be mounted on the reader/recorder ex218, which reads and decodes audio data, video data, or an encoded bit stream in which those data are multiplexed, recorded on a recording medium ex215 such as a DVD or a BD, or which encodes audio data and video data and records them on the recording medium ex215 as multiplexed data.
  • the reproduced video signal is displayed on the monitor ex219.
  • the recording medium ex215 on which the encoded bit stream is recorded allows other devices and systems to reproduce the video signal.
  • the other reproduction device ex212 can reproduce the video signal on the monitor ex213 using the recording medium ex214 on which the encoded bitstream is copied.
  • an image decoding device may be mounted in the set-top box ex217 connected to the cable ex203 for cable television or the antenna ex204 for satellite / terrestrial broadcasting and displayed on the monitor ex219 of the television.
  • the image decoding apparatus may be incorporated in the television instead of the set top box.
  • FIG. 20 is a diagram illustrating a television (receiver) ex300 that uses the image decoding method and the image encoding method described in the above embodiment.
  • the television ex300 includes a tuner ex301 that obtains or outputs a bit stream of video information via the antenna ex204 or the cable ex203 that receives the broadcast; a modulation/demodulation unit ex302 that demodulates the received encoded data or modulates data into encoded data to be transmitted to the outside; and a multiplexing/demultiplexing unit ex303 that separates the demodulated video data and audio data, or multiplexes encoded video data and audio data.
  • the television ex300 also includes a signal processing unit ex306 having an audio signal processing unit ex304 and a video signal processing unit ex305 that decode the audio data and the video data, respectively, or encode the respective information, and an output unit ex309 that outputs the decoded audio signal and includes a display unit ex308, such as a display, for displaying the decoded video signal.
  • the television ex300 includes an interface unit ex317 including an operation input unit ex312 that receives an input of a user operation.
  • the television ex300 includes a control unit ex310 that controls each unit in an integrated manner, and a power supply circuit unit ex311 that supplies power to each unit.
  • the interface unit ex317 may include a bridge ex313 connected to an external device such as the reader/recorder ex218, a slot unit ex314 for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
  • the recording medium ex216 can electrically record information by means of a nonvolatile/volatile semiconductor memory element that it houses.
  • Each part of the television ex300 is connected to each other via a synchronous bus.
  • the television ex300 receives a user operation from the remote controller ex220 or the like and, based on the control of the control unit ex310 having a CPU or the like, causes the multiplexing/demultiplexing unit ex303 to separate the video data and audio data demodulated by the modulation/demodulation unit ex302. Furthermore, the television ex300 decodes the separated audio data with the audio signal processing unit ex304 and decodes the separated video data with the video signal processing unit ex305 using the decoding method described in the above embodiment. The decoded audio signal and video signal are output to the outside from the output unit ex309.
  • these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization.
  • the television ex300 may read an encoded bit stream not from a broadcast or the like but from recording media ex215 and ex216 such as a magnetic/optical disk or an SD card. Next, a configuration will be described in which the television ex300 encodes an audio signal and a video signal and transmits them to the outside or writes them to a recording medium or the like.
  • the television ex300 receives a user operation from the remote controller ex220 or the like, and encodes an audio signal with the audio signal processing unit ex304 based on the control of the control unit ex310, and the video signal with the video signal processing unit ex305 in the above embodiment. Encoding is performed using the described encoding method.
  • the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320 and ex321 so that the audio signal and the video signal are synchronized.
  • a plurality of buffers ex318 to ex321 may be provided as shown in the figure, or one or more buffers may be shared. Furthermore, in addition to the illustrated example, data may be stored in a buffer, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, as buffering that prevents system overflow and underflow.
  • in addition to acquiring audio data and video data from broadcasts and recording media, the television ex300 may have a configuration for receiving AV input from a microphone and a camera, and may perform encoding processing on the data acquired from them.
  • the television ex300 has been described here as a configuration capable of the above encoding processing, multiplexing, and external output, but it may instead be incapable of these processes and capable of only the above reception, decoding processing, and external output.
  • the decoding process or the encoding process may be performed by either the television ex300 or the reader / recorder ex218.
  • or the television ex300 and the reader/recorder ex218 may share the processing with each other.
  • FIG. 21 shows the configuration of the information reproducing / recording unit ex400 when data is read from or written to an optical disk.
  • the information reproducing / recording unit ex400 includes elements ex401 to ex407 described below.
  • the optical head ex401 irradiates a laser spot on the recording surface of the recording medium ex215 that is an optical disc to write information, and detects information reflected from the recording surface of the recording medium ex215 to read the information.
  • the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
  • the reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal component recorded on the recording medium ex215, and reproduces the necessary information.
  • the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
  • the disk motor ex405 rotates the recording medium ex215.
  • the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
  • the system control unit ex407 controls the entire information reproduction / recording unit ex400.
  • the above reading and writing processes are realized in that the system control unit ex407 uses various types of information held in the buffer ex404, generates and adds new information as necessary, and records/reproduces information through the optical head ex401 while causing the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 to operate cooperatively.
  • the system control unit ex407 includes, for example, a microprocessor, and executes these processes by executing a read / write program.
  • the optical head ex401 has been described as irradiating a laser spot, but it may be configured to perform higher-density recording using near-field light.
  • FIG. 22 shows a schematic diagram of a recording medium ex215 that is an optical disk.
  • guide grooves (grooves) are formed in a spiral shape on the recording surface of the recording medium ex215, which is an optical disk.
  • address information indicating the absolute position on the disc is recorded in advance on the information track ex230 by changing the shape of the groove.
  • this address information includes information for identifying the position of the recording block ex231, which is the unit in which data is recorded, and a recording and reproducing apparatus can identify a recording block by reproducing the information track ex230 and reading the address information.
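The relationship between the pre-recorded address information and the recording blocks can be illustrated with a small sketch. The block size and the flat address layout below are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: mapping an absolute position read from the
# information track to the recording block that contains it.
# RECORDING_BLOCK_SIZE is an assumed value, not from the patent.

RECORDING_BLOCK_SIZE = 2048  # assumed bytes per recording block ex231

def block_for_address(absolute_address: int) -> int:
    """Index of the recording block containing an absolute address."""
    return absolute_address // RECORDING_BLOCK_SIZE

def block_start(block_index: int) -> int:
    """Start address of a recording block, used to seek before writing."""
    return block_index * RECORDING_BLOCK_SIZE

# Reading address 5000 from the information track identifies block 2,
# which starts at address 4096.
print(block_for_address(5000))  # 2
print(block_start(2))           # 4096
```

Under these assumptions, the apparatus only needs to reproduce the track far enough to read one address before it can seek to the start of the target block.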
  • the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
  • the area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, arranged inside and outside the data recording area ex233 respectively, are used for specific purposes other than recording user data.
  • the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or encoded data obtained by multiplexing these data, with respect to the data recording area ex233 of the recording medium ex215.
  • an optical disk such as a single-layer DVD or BD has been described as an example.
  • the present invention is not limited to these; the optical disc may have a multilayer structure and be capable of recording at depths other than the surface. It may also be an optical disc with a structure for multidimensional recording and reproduction, such as recording information using light of various different wavelengths at the same location on the disc, or recording different layers of information from various angles.
  • the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
  • the configuration of the car navigation ex211 may be, for example, a configuration in which a GPS receiving unit is added in the configuration illustrated in FIG.
  • three implementation forms are conceivable for a terminal such as the mobile phone ex114: a transmitting and receiving terminal having both an encoder and a decoder, a transmitting terminal having only an encoder, and a receiving terminal having only a decoder.
  • the image encoding method or the image decoding method described in the above embodiment can be used in any of the above-described devices and systems, and by doing so, the effects described in the above embodiment can be obtained.
  • FIG. 23 shows a configuration of an LSI ex500 that is made into one chip.
  • the LSI ex500 includes elements ex501 to ex509 described below, and each element is connected via a bus ex510.
  • the power supply circuit unit ex505 supplies power to each unit when the power is turned on, starting the LSI up into an operable state.
  • when performing encoding, the LSI ex500 accepts an AV signal input from the microphone ex117, the camera ex113, and the like through the AV I/O ex509, under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, and so on.
  • the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
  • the accumulated data is divided into several batches according to the processing amount and processing speed, and sent to the signal processing unit ex507.
  • the signal processing unit ex507 performs encoding of an audio signal and / or encoding of a video signal.
  • the encoding process of the video signal is the encoding process described in the above embodiment.
  • the signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data according to circumstances, and outputs the result from the stream I / Oex 506 to the outside.
  • the output bitstream is transmitted to the base station ex107 or written to the recording medium ex215. Note that the data should be temporarily stored in the buffer ex508 so that it remains synchronized when multiplexed.
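The encoding flow described above (buffer the input, split it to match processing capacity, encode, then multiplex through a buffer for synchronization) can be sketched in software. All function names and the chunking policy here are illustrative assumptions, not the LSI's actual design:

```python
# Minimal software sketch of the encoding-side data flow:
# memory ex511 -> chunked delivery to ex507 -> multiplex via ex508.

from collections import deque

def split_into_chunks(data: bytes, chunk_size: int):
    """Divide accumulated data according to how much the signal
    processor can handle at once (cf. memory ex511 -> ex507)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def encode_chunk(chunk: bytes) -> bytes:
    """Placeholder for the audio/video encoding done in ex507."""
    return chunk[::-1]  # stand-in transform, not a real codec

def multiplex(audio_units, video_units):
    """Interleave encoded units through a buffer (cf. buffer ex508)
    so that audio and video are emitted in matching order."""
    buf = deque()
    for a, v in zip(audio_units, video_units):
        buf.append(a)
        buf.append(v)
    return list(buf)

audio = [encode_chunk(c) for c in split_into_chunks(b"aaaa", 2)]
video = [encode_chunk(c) for c in split_into_chunks(b"vvvv", 2)]
stream = multiplex(audio, video)
print(len(stream))  # 4 interleaved units: audio, video, audio, video
```

The buffer between the two encoders is what keeps an audio unit next to the video unit it belongs with, which is the synchronization role the text assigns to ex508.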
  • when performing decoding, the LSI ex500, under the control of the control unit ex501, uses the stream I/O ex506 to read encoded data obtained via the base station ex107 or encoded data read from the recording medium ex215.
  • the encoded data is temporarily stored in the memory ex511 or the like.
  • the accumulated data is divided into several batches according to the processing amount and processing speed, and sent to the signal processing unit ex507.
  • the signal processing unit ex507 performs decoding of audio data and / or decoding of video data.
  • the decoding process of the video signal is the decoding process described in the above embodiment.
  • each signal may be temporarily stored in the buffer ex508 or the like so that the decoded audio signal and the decoded video signal can be reproduced in synchronization.
  • the decoded output signal is output from each output unit such as the mobile phone ex114, the game machine ex115, and the television ex300 through the memory ex511 or the like as appropriate.
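The synchronized-playback role of the buffer on the decoding side can be sketched as pairing decoded frames by presentation time before release. The timestamps and frame representation below are assumptions for illustration:

```python
# Sketch of the synchronization the buffer ex508 provides on the
# decode side: decoded audio and video frames are released together
# only when both streams have a frame for the same presentation time.

def synchronize(audio_frames, video_frames):
    """Pair decoded frames by presentation timestamp; frames whose
    counterpart has not yet been decoded stay unpaired (buffered)."""
    video_by_pts = {pts: frame for pts, frame in video_frames}
    paired = []
    for pts, a_frame in audio_frames:
        if pts in video_by_pts:
            paired.append((pts, a_frame, video_by_pts[pts]))
    return paired

audio = [(0, "a0"), (40, "a1"), (80, "a2")]
video = [(0, "v0"), (40, "v1")]
# Audio frame at pts 80 waits until its video counterpart is decoded.
print(synchronize(audio, video))  # [(0, 'a0', 'v0'), (40, 'a1', 'v1')]
```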
  • the memory ex511 has been described as external to the LSI ex500, but it may instead be included within the LSI ex500.
  • the buffer ex508 is not limited to one, and a plurality of buffers may be provided.
  • the LSI ex500 may be made into one chip or a plurality of chips.
  • although referred to here as an LSI, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
  • the present invention is not limited to these embodiments. Forms obtained by applying modifications conceived by those skilled in the art to the embodiments, and forms constructed by combining components and steps from different embodiments, are also included within the scope of the present invention, provided they do not depart from the spirit of the present invention.
  • the image encoding method and the image decoding method according to the present invention have the effect of improving encoding efficiency and image quality.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention concerns an image coding method intended to improve coding efficiency and image quality. The image coding method comprises: generating a coded bitstream by coding a moving image block by block using a prediction image (S11); sequentially reconstructing the coded blocks (S12); deriving a distortion-removal filter coefficient for removing distortion at a block boundary, that is, a boundary between the reconstructed blocks, according to a characteristic of the block boundary (S13); filtering the block boundary using the derived distortion-removal filter coefficient so as to generate a prediction image (S14); and inserting into the coded bitstream coefficient-specification information for specifying the derived distortion-removal filter coefficient (S15).
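The five steps S11 to S15 summarized in the abstract can be sketched as a toy one-dimensional pipeline. Every function body below is an illustrative assumption (plain residual coding and an invented threshold rule for the filter coefficient), not the patent's actual method; only the step structure follows the abstract:

```python
# Toy 1-D sketch of steps S11-S15. "Blocks" are single numbers.

def encode_blocks(blocks, prediction):                 # S11: code each block
    return [b - p for b, p in zip(blocks, prediction)]

def reconstruct(residuals, prediction):                # S12: rebuild blocks
    return [r + p for r, p in zip(residuals, prediction)]

def derive_filter_coefficient(left, right):            # S13: derive from a
    # boundary characteristic: here, the step size across the boundary.
    return 0.5 if abs(left - right) > 4 else 0.25

def deblock(blocks):                                   # S14: filter each
    out = list(blocks)                                 # block boundary
    for i in range(len(out) - 1):
        c = derive_filter_coefficient(out[i], out[i + 1])
        mid = (out[i] + out[i + 1]) / 2
        out[i] += c * (mid - out[i])
        out[i + 1] += c * (mid - out[i + 1])
    return out

def embed_coefficient_info(bitstream, coefficients):   # S15: signal the
    # coefficient-specification information alongside the coded data.
    return {"data": bitstream, "filter_info": coefficients}

residuals = encode_blocks([10, 20], [8, 18])
filtered = deblock(reconstruct(residuals, [8, 18]))
coded = embed_coefficient_info(residuals, [0.5])
```

Because the coefficient is both derived at the encoder and signaled in the bitstream (S15), a decoder can apply the same filter without re-deriving it, which is the point of the coefficient-specification information.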
PCT/JP2011/000085 2010-01-21 2011-01-12 Image coding method, image decoding method, corresponding device, program, and integrated circuit WO2011089865A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29698910P 2010-01-21 2010-01-21
US61/296,989 2010-01-21

Publications (1)

Publication Number Publication Date
WO2011089865A1 true WO2011089865A1 (fr) 2011-07-28

Family

ID=44306668

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/000085 WO2011089865A1 (fr) 2011-01-12 2010-01-21 Image coding method, image decoding method, corresponding device, program, and integrated circuit

Country Status (1)

Country Link
WO (1) WO2011089865A1 (fr)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008075247A1 (fr) * 2006-12-18 2008-06-26 Koninklijke Philips Electronics N.V. Compression et décompression d'images
KR20090100402A (ko) * 2006-12-18 2009-09-23 코닌클리케 필립스 일렉트로닉스 엔.브이. 이미지 압축 및 압축해제
CN101563926A (zh) * 2006-12-18 2009-10-21 皇家飞利浦电子股份有限公司 图像压缩与解压缩
US20100027686A1 (en) * 2006-12-18 2010-02-04 Koninklijke Philips Electronics N.V. Image compression and decompression
JP2010514246A (ja) * 2006-12-18 2010-04-30 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 画像圧縮及び伸張
WO2008084745A1 (fr) * 2007-01-09 2008-07-17 Panasonic Corporation Appareil de codage et de décodage d'image
WO2009110559A1 (fr) * 2008-03-07 2009-09-11 株式会社 東芝 Dispositif de codage/décodage dynamique d'image
WO2010001999A1 (fr) * 2008-07-04 2010-01-07 株式会社 東芝 Procédé et dispositif de codage/décodage d'image dynamique

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013030902A1 (fr) * 2011-08-26 2013-03-07 株式会社 東芝 Procédé de codage d'image en mouvement, procédé de décodage d'image en mouvement, appareil de codage d'image en mouvement et appareil de décodage d'image en mouvement
CN114342383A (zh) * 2019-08-29 2022-04-12 日本放送协会 编码装置、解码装置及程序
CN114342383B (zh) * 2019-08-29 2024-05-03 日本放送协会 编码装置、解码装置及程序

Similar Documents

Publication Publication Date Title
JP5588438B2 (ja) Image coding method and image coding apparatus
JP5485983B2 (ja) Moving picture coding method, moving picture decoding method, moving picture coding apparatus, and moving picture decoding apparatus
JP5574345B2 (ja) Coding method, error detection method, decoding method, coding apparatus, error detection apparatus, and decoding apparatus
WO2010061607A1 (fr) Moving image decoding method, moving image coding method, moving image decoder, moving image encoder, program, and integrated circuit
JP5479470B2 (ja) Moving picture coding method, moving picture coding apparatus, program, and integrated circuit
WO2010026770A1 (fr) Image coding method, image decoding method, image coding device, image decoding device, system, program, and integrated circuit
WO2010087157A1 (fr) Image coding method and image decoding method
WO2010050156A1 (fr) Image coding method, image decoding method, image coding device, image decoding device, integrated circuit, and program
JP5659160B2 (ja) Moving picture decoding apparatus, moving picture coding apparatus, moving picture decoding circuit, and moving picture decoding method
EP2464018A1 (fr) Coding method, decoding method, coding device, and decoding device
EP2464017A1 (fr) Coding method, decoding method, coding device, and decoding device
JP2013505647A (ja) Image coding apparatus, image decoding apparatus, image coding method, and image decoding method
KR101863397B1 (ko) Efficient decisions for deblocking
KR20120046726A (ko) Coding method, decoding method, coding apparatus, and decoding apparatus
KR20130042476A (ko) Filter positioning and selection
KR20120089690A (ko) Decoding method, decoding device, coding method, and coding device
KR20120099418A (ko) Image decoding method, image coding method, image decoding device, image coding device, program, and integrated circuit
JP5499035B2 (ja) Image coding method, image coding apparatus, program, and integrated circuit
WO2011052216A1 (fr) Image coding method and device, image decoding method and device
WO2011089865A1 (fr) Image coding method, image decoding method, corresponding device, program, and integrated circuit
WO2011128055A1 (fr) Efficient storage of an image representation using an adaptive non-uniform quantizer
WO2010131422A1 (fr) Image decoding apparatus, integrated circuit, image decoding method, and image decoding system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11734473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11734473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP