WO2007108254A1 - Moving image encoding device and moving image decoding device - Google Patents

Moving image encoding device and moving image decoding device Download PDF

Info

Publication number
WO2007108254A1
WO2007108254A1 PCT/JP2007/052575
Authority
WO
WIPO (PCT)
Prior art keywords
image
filter processing
block
pixel value
inverse
Prior art date
Application number
PCT/JP2007/052575
Other languages
English (en)
Japanese (ja)
Inventor
Maki Takahashi
Tomoko Aono
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha filed Critical Sharp Kabushiki Kaisha
Priority to JP2008506197A priority Critical patent/JP4768011B2/ja
Publication of WO2007108254A1 publication Critical patent/WO2007108254A1/fr

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • Moving picture encoding apparatus and moving picture decoding apparatus
  • The present invention relates to a moving image encoding device and a moving image decoding device that divide a quantization target image into a plurality of blocks and encode it.
  • First, a conventional technique will be described with reference to FIG. 11.
  • FIG. 11 is a functional block diagram showing a schematic configuration of a moving picture encoding apparatus 100 that encodes a moving picture using the H.264/AVC moving picture coding method.
  • The moving image coding apparatus 100 includes a DCT unit 1, a quantization unit 2, a variable length coding unit 3, an inverse quantization unit 4, an IDCT unit 5, a deblocking filter processing unit 6, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, and an encoding control unit 10 that controls each of the above units.
  • The DCT unit 1 divides a difference image, obtained by subtracting a prediction image (described later) from an original image, into blocks of 4 × 4 pixels or 8 × 8 pixels, and orthogonally transforms the image signal of each block (integer-precision DCT).
  • the transform coefficient obtained by the orthogonal transform (corresponding to the DCT coefficient in the discrete cosine transform) is sent to the quantization unit 2.
  • the quantization unit 2 quantizes the transform coefficient of each block according to the quantization parameter supplied from the encoding control unit 10.
  • the quantized representative value obtained as a result of quantization is sent to the variable length coding unit 3 and the inverse quantization unit 4.
  • The variable-length encoding unit 3 performs variable-length encoding on the various encoding parameters supplied from the encoding control unit 10 and the quantized representative values (quantized transform coefficients of each block) supplied from the quantization unit 2.
  • The encoded data obtained by the variable-length encoding unit 3 is sent to the moving picture decoding apparatus 200 described later.
  • The inverse quantization unit 4 inversely quantizes the quantized representative values (quantized transform coefficients of each block) supplied from the quantization unit 2 according to the quantization parameter supplied from the encoding control unit 10. That is, the inverse quantization unit 4 restores the transform coefficients of each block from the quantized representative values by the reverse of the quantization operation performed by the quantization unit 2. The transform coefficients of each block restored by inverse quantization are sent to the IDCT unit 5.
  • the IDCT unit 5 converts the transform coefficient of each block obtained by inverse quantization into an image signal in the spatial domain, and restores the difference image.
  • the inverse orthogonal transform applied to the transform coefficient by the IDCT unit 5 is the inverse transform (integer precision IDCT) of the orthogonal transform applied by the DCT unit 1.
  • the local decoded image obtained by adding the difference image restored by the IDCT unit 5 and the predicted image is sent to the deblocking filter processing unit 6.
  • the deblocking filter processing unit 6 performs adaptive filtering on the local decoded image in order to remove block distortion in the local decoded image obtained by adding the prediction image and the difference image. Details of adaptive filter processing by the deblocking filter processing unit 6 will be described in detail later.
  • the locally decoded image from which block distortion has been removed by the deblocking filter processing unit 6 is temporarily stored in the frame memory 7.
  • the frame memory 7 can store a plurality of locally decoded images.
  • the locally decoded image stored in the frame memory 7 is referred to by the intra prediction unit 8 or the inter prediction unit 9 as a reference image.
  • the intra prediction unit 8 generates a predicted image from the reference image recorded in the frame memory 7 by performing intra-frame prediction.
  • The intra prediction unit 8 can perform intra-frame prediction using a plurality of prediction modes (prediction algorithms) defined in the H.264/AVC video coding standard, and performs intra-frame prediction according to the prediction mode specified by the encoding control unit 10.
  • Based on the motion vector determined by the encoding control unit 10 and the reference image stored in the frame memory 7, the inter prediction unit 9 generates a prediction image by inter-frame prediction (motion-compensated prediction).
  • The inter prediction unit 9 performs inter-frame prediction using blocks of the size specified by the encoding control unit 10 and using the reference images specified by the encoding control unit 10.
  • The encoding control unit 10 determines whether the prediction image is to be generated by intra-frame prediction or by inter-frame prediction, and determines the various encoding parameters according to the chosen prediction method.
  • The encoding parameters used when performing intra-frame prediction include information specifying the prediction mode of the intra-frame prediction.
  • The encoding parameters used when performing inter-frame prediction include information specifying the motion vector, the block size, and the reference image.
  • The encoding control unit 10 also specifies the quantization parameter for the quantization unit 2 and the inverse quantization unit 4.
  • The moving image coding apparatus 100 encodes a moving image by repeating the following steps 1 to 7.
  • Step 1 The encoding control unit 10 determines whether intra-frame prediction or inter-frame prediction is to be performed, and determines the encoding parameters and quantization parameters necessary for encoding.
  • Step 2 In accordance with the determination result in Step 1, the intra prediction unit 8 or the inter prediction unit 9 generates a predicted image based on the reference image stored in the frame memory 7.
  • Step 3 A difference image between the predicted image generated in Step 2 and the input original image is generated and supplied to the DCT unit 1.
  • Step 4 The DCT unit 1 and the quantization unit 2 orthogonally transform the image signal of the difference image obtained in step 3 for each block, and quantize the obtained transform coefficients.
  • the obtained quantization representative value is variable-length encoded by the variable-length encoding unit 3 and output as encoded data.
  • Step 5 The inverse quantization unit 4 and the IDCT unit 5 dequantize the quantized representative value obtained in step 4 to restore the difference image.
  • Step 6 The difference image restored in step 5 and the predicted image generated in step 2 are added together, and the obtained locally decoded image is supplied to the deblocking filter processing unit 6.
  • Step 7 Block distortion in the locally decoded image obtained in step 6 is removed by the deblocking filter processing unit 6, and the locally decoded image with reduced block distortion is stored in the frame memory 7 as a reference image.
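  • As a minimal sketch, the encoding loop of steps 1 to 7 (minus entropy coding and deblocking) can be written as follows. The flat scalar quantizer and the use of the reference frame itself as the predicted image are simplifying assumptions, not the actual integer DCT or intra/inter prediction of H.264/AVC.

```python
import numpy as np

def encode_step(original, reference, qstep=2.0):
    # Step 2 (placeholder): use the reference frame itself as the predicted image.
    predicted = reference
    # Step 3: difference image between the original image and the prediction.
    difference = original - predicted
    # Step 4 (simplified): a flat scalar quantizer stands in for DCT + quantization.
    levels = np.round(difference / qstep)   # quantized representative values
    # Step 5: inverse quantization restores an approximate difference image.
    restored = levels * qstep
    # Step 6: locally decoded image (before any deblocking filter).
    local_decoded = restored + predicted
    return levels, local_decoded

original = np.full((4, 4), 100.0)
reference = np.full((4, 4), 96.0)
levels, local_decoded = encode_step(original, reference)
```

With this quantization step the difference of 4 survives the round trip exactly; a coarser step would leave a reconstruction error in the locally decoded image.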
  • the moving image decoding apparatus 200 includes an inverse quantization unit 4, an IDCT unit 5, a deblocking filter processing unit 6, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, and a variable A long decoding unit 20 is provided.
  • The variable length decoding unit 20 is the only functional block that does not exist in the moving picture encoding apparatus 100 (FIG. 11). Blocks having the same functions as those of the moving picture encoding apparatus 100 are given the same names and reference numerals, and their description is omitted.
  • the variable length decoding unit 20 has a function of variable length decoding the encoding parameter and the quantized representative value (quantized transform coefficient).
  • the moving picture decoding apparatus 200 shown in Fig. 12 decodes encoded data by repeating the following steps 1 to 6.
  • Step 1 The variable length decoding unit 20 performs variable length decoding on the encoded data to obtain an encoding parameter and a quantized representative value (quantized transform coefficient).
  • Step 2 In accordance with the decoded code parameter, the intra prediction unit 8 or the inter prediction unit 9 generates a prediction image based on the reference image stored in the frame memory 7.
  • Step 3 The inverse quantization unit 4 and the IDCT unit 5 dequantize the quantized representative value obtained in step 1 to restore the difference image.
  • Step 4 The difference image restored in Step 3 and the predicted image generated in Step 2 are added, and the obtained decoded image is supplied to the deblocking filter processing unit 6.
  • Step 5 Block distortion in the decoded image obtained in step 4 is removed by the deblocking filter processing unit 6, and the decoded image with reduced block distortion is stored in the frame memory 7.
  • the decoded image stored in the frame memory 7 can be read at an arbitrary timing and used as a display image.
  • Step 6 The decoded image stored in the frame memory is output to an image display means such as a display at an appropriate timing as a display image.
  • In both the moving image encoding device 100 and the moving image decoding device 200, the deblocking filter processing unit 6 reduces the block distortion generated in the quantization and inverse quantization processes of moving image encoding.
  • The deblocking filter processing unit 6 will be described in more detail with reference to FIGS. 13 to 15.
  • FIG. 13 is an explanatory diagram showing a block division pattern of a quantization target image (difference image between an original image and a predicted image) in a moving image encoding process using the H.264/AVC moving image encoding method.
  • The quantization target image is divided into W × H rectangular blocks.
  • The W × H blocks are labeled B_0 through B_(W×H-1), numbered in raster order from the upper-left corner of the quantization target image.
  • FIG. 14 is a diagram showing two adjacent blocks, block B_n and block B_(n+1), in the quantization target image shown in FIG. 13. As shown in FIG. 14, block B_n and block B_(n+1) are each composed of a total of 16 pixels arranged in 4 rows and 4 columns. Like block B_n, all blocks B_0 to B_(W×H-1) that make up the image to be encoded in FIG. 13 are arranged in 4 rows and 4 columns.
  • A pixel constituting the quantization target image can be specified by the combination (n, u, v) of a variable n identifying the block containing the pixel and variables u and v identifying the position of the pixel within that block. That is, pixel (n, u, v) is the pixel in the u-th column and v-th row of block B_n.
  • the attribute value of the pixel (n, u, v) is expressed as X (n, u, v).
  • the pixel value of the pixel (n, u, v) in the encoding target image is expressed as P (n, u, v).
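  • The (n, u, v) addressing above maps to absolute image coordinates straightforwardly. The following small helper is an illustration, not part of the patent; it assumes blocks are numbered in raster order as in FIG. 13.

```python
def pixel_coords(n, u, v, blocks_per_row, block_size=4):
    """Map block index n (raster order) and in-block column u / row v
    to (row, col) coordinates in the full image."""
    block_row, block_col = divmod(n, blocks_per_row)
    return block_row * block_size + v, block_col * block_size + u

# Pixel (n=4, u=2, v=3) in an image that is 3 blocks (12 pixels) wide:
row, col = pixel_coords(4, 2, 3, blocks_per_row=3)
```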
  • Blocks B_0 to B_(W×H-1) shown in FIG. 13 and FIG. 14 are the processing units for the series of orthogonal transformation, quantization, inverse quantization, and inverse orthogonal transformation processes.
  • The image data of the quantization target image is converted into transform coefficients for each of the blocks B_0 to B_(W×H-1), and the coefficients are quantized.
  • The deblocking filter processing unit 6 used in the H.264/AVC moving image encoding method applies filtering across the boundaries between horizontally adjacent blocks (B_n, B_(n+1)) and between vertically adjacent blocks.
  • The filter strength of the filtering performed by the deblocking filter processing unit 6 is set adaptively according to conditions such as the prediction mode applied to each block.
  • P(n, u, v) represents the pixel value of pixel (n, u, v) in the image to be processed by the deblocking filter (the locally decoded image in the moving image encoding device 100, or the decoded image in the moving image decoding device 200), and P'(n, u, v) represents the pixel value of pixel (n, u, v) in the filter output image output from the deblocking filter processing unit 6.
  • P'(n, 3, v) ← {P(n, 1, v) + 2P(n, 2, v) + 2P(n, 3, v) + 2P(n+1, 0, v) + P(n+1, 1, v)} ÷ 8
  • P'(n+1, 0, v) ← {P(n, 2, v) + 2P(n, 3, v) + 2P(n+1, 0, v) + 2P(n+1, 1, v) + P(n+1, 2, v)} ÷ 8
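  • The two boundary-filter formulas above can be sketched in integer arithmetic as follows. The "+ 4" rounding term and the restriction to the two pixels adjacent to the boundary are assumptions in the style of H.264 integer filters, not taken verbatim from this text.

```python
def filter_boundary_row(p, q):
    """Apply the two boundary taps to one pixel row: p holds
    P(n,0,v)..P(n,3,v) and q holds P(n+1,0,v)..P(n+1,3,v).
    Only the two pixels adjacent to the block boundary are rewritten."""
    p, q = list(p), list(q)
    new_p3 = (p[1] + 2 * p[2] + 2 * p[3] + 2 * q[0] + q[1] + 4) // 8  # P'(n,3,v)
    new_q0 = (p[2] + 2 * p[3] + 2 * q[0] + 2 * q[1] + q[2] + 4) // 8  # P'(n+1,0,v)
    p[3], q[0] = new_p3, new_q0
    return p, q

# A step edge: flat block of 10s next to a flat block of 18s.
p_out, q_out = filter_boundary_row([10, 10, 10, 10], [18, 18, 18, 18])
```

The boundary pixels move toward each other (13 and 15), smoothing the discontinuity.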
  • The operation of the deblocking filter is illustrated in FIGS. 15(a) to 15(c). FIG. 15(a) is a graph representing the pixel values p(n, u, v) of the original image in block B_n and block B_(n+1).
  • FIG. 15(b) is a graph showing the pixel values P(n, u, v) of the decoded image, in block B_n and block B_(n+1), that is to be processed by the deblocking filter processing unit 6.
  • As shown in FIG. 15(b), discontinuous changes in pixel values occur at the block boundary in the decoded image; that is, block distortion occurs.
  • FIG. 15(c) shows block B_n and block B_(n+1) of the image obtained after the decoded image is filtered by the deblocking filter processing unit 6.
  • Non-Patent Literature 1: ITU-T Recommendation H.264: Advanced Video Coding for generic audiovisual services (2003)
  • The average pixel value in each block differs before and after the filter processing. That is, when the conventional deblocking filter is used, the average pixel value of each block is not preserved across the filtering process, so the difference in average pixel value between the original image and the decoded image can be enlarged by the filtering.
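  • This claim is easy to verify numerically: applying the two boundary taps given earlier (with the assumed "+ 4" rounding term) to a step edge shifts both block means.

```python
def block_means_after_filtering():
    """Show that the conventional boundary filter changes each block's
    average pixel value on a step edge (a block of 10s next to 18s)."""
    p, q = [10, 10, 10, 10], [18, 18, 18, 18]
    before = (sum(p) / 4, sum(q) / 4)
    # The two boundary taps from the formulas earlier in the text.
    p3 = (p[1] + 2 * p[2] + 2 * p[3] + 2 * q[0] + q[1] + 4) // 8
    q0 = (p[2] + 2 * p[3] + 2 * q[0] + 2 * q[1] + q[2] + 4) // 8
    p[3], q[0] = p3, q0
    after = (sum(p) / 4, sum(q) / 4)
    return before, after

before, after = block_means_after_filtering()
```

The means move from (10, 18) to (10.75, 17.25): neither block's average is preserved.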
  • In the moving image encoding apparatus, a predicted image is generated using, as a reference image, a locally decoded image whose average pixel values differ from those of the original image.
  • Likewise, in the moving image decoding apparatus, a decoded image whose average pixel values differ from those of the original image is generated as a reference image or a display image. For this reason, blurring or flickering of the moving image occurs.
  • The present invention has been made in view of the above problems, and an object of the present invention is to realize a moving image encoding device and a moving image decoding device that reduce block distortion in the decoded moving image while preventing side effects such as blurring and flickering of the image.
  • The moving image encoding apparatus of the present invention divides an image into a plurality of blocks, quantizes the image data of each block, and encodes the quantized representative values obtained by the quantization. The apparatus includes a filter processing means for performing, on the image data before quantization, a filter process that removes the frequency components that generate block distortion.
  • the filter processing means removes a frequency component that generates block distortion from the image data before quantization.
  • the quantization target image to be quantized is one in which frequency components that generate block distortion have been removed in advance. Therefore, a decoded image obtained by quantizing / inverse-quantizing the quantization target image subjected to the filtering process by the filter processing unit has a reduced block distortion.
  • the frequency component previously removed by the filter processing means can be restored by inverse filter processing corresponding to the inverse transformation of the filter processing of the filter processing means.
  • Therefore, the moving picture encoding apparatus having the above configuration has the effect of being able to generate encoded data from which a decoded picture with reduced block distortion can be restored without losing any specific frequency component.
  • It is preferable that the filter processing performed by the filter processing means keeps the average pixel value of each block unchanged before and after the processing.
  • the filter processing by the filter processing means is performed so as to keep the average pixel value (for example, average luminance level) of each block unchanged.
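  • A minimal sketch of a filter satisfying this constraint: smooth the block, then re-center the output so that its mean equals the input's mean. The moving-average kernel is an illustrative choice, not the filter disclosed here.

```python
import numpy as np

def mean_preserving_filter(block):
    """Smooth a 1-D block of pixel values while keeping its average
    pixel value (e.g. average luminance level) exactly unchanged."""
    b = np.asarray(block, dtype=float)
    smoothed = np.convolve(b, np.ones(3) / 3.0, mode="same")
    # 'same'-mode convolution zero-pads at the edges; patch the end samples.
    smoothed[0] = b[:2].mean()
    smoothed[-1] = b[-2:].mean()
    # Re-center so the block mean is preserved before and after filtering.
    return smoothed + (b.mean() - smoothed.mean())

out = mean_preserving_filter([10, 10, 18, 18])
```

The re-centering step guarantees the block mean (here 14) is preserved regardless of the smoothing kernel chosen.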
  • This provides the additional effect of effectively preventing the screen blurring that occurs in a conventional video coding apparatus using a deblocking filter, where the average pixel value of each block is not preserved before and after the filter processing.
  • It is preferable that the apparatus further includes a predicted image generating means for generating a predicted image and an inverse filter processing means for performing, on the locally decoded image, inverse filter processing corresponding to the inverse transform of the filter processing, and that the predicted image generating means generates the predicted image using the locally decoded image subjected to the inverse filter processing as a reference image.
  • the predicted image generation means can generate a predicted image using the local decoded image subjected to the inverse filter process as a reference image.
  • According to the above configuration, the predicted image generating means can generate a predicted image based on a reference image in which block distortion is reduced and no specific frequency component is lost, that is, a reference image that better approximates the original image.
  • The predicted image generating means may use these prediction methods by switching between them appropriately according to the state of the image to be encoded.
  • The inverse filter processing performed by the inverse filter processing means may correspond either to a strict inverse transformation of the filter processing of the filter processing means or to an approximate inverse transformation. In other words, the inverse filter processing is sufficient if it approximates the strict inverse transform within a range that does not adversely affect the quality of the final moving image; for example, an error on the order of the calculation accuracy of the filtering process (for example, integer precision) is acceptable.
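  • The allowance for integer-precision error can be made concrete with a toy pair of transforms: the forward "filter" scales and rounds to integer precision, the inverse only rescales, so the round trip is an approximate identity with error at most half a quantization step. The scaling filter is purely illustrative.

```python
def approximate_roundtrip(x, scale=4):
    """Forward: scale and round to integer precision.
    Inverse: rescale only. The reconstruction differs from x by
    at most 0.5 / scale, an acceptable integer-precision error."""
    forward = round(x * scale)      # integer-precision filter output
    restored = forward / scale      # approximate inverse transform
    return restored, abs(restored - x)

restored, error = approximate_roundtrip(1.3, scale=4)
```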
  • It is preferable that the quantization target image to be quantized is the difference image between the predicted image and the original image that has been subjected to the filtering process.
  • It is also preferable that, in the moving image coding apparatus, the locally decoded image is an image obtained by adding the image data obtained by inversely quantizing the quantized representative values and the image data of the predicted image.
  • According to the above configuration, the original image becomes the processing target image of the filter processing, and the difference image between the filtered original image and the predicted image becomes the quantization target image.
  • The locally decoded image, obtained by adding the image data obtained by dequantizing the quantized representative values (produced by quantizing the quantization target image) and the image data of the predicted image, becomes the target of the inverse filter processing.
  • The locally decoded image subjected to the inverse filter processing is used as a reference image for generating a predicted image. Therefore, according to the above configuration, the predicted image generating means can generate a predicted image based on a reference image that better approximates the original image.
  • the quantization target image to be quantized is obtained by performing the filtering process on a difference image between the predicted image and the original image.
  • The locally decoded image is preferably an image obtained by adding, in the moving image coding apparatus, the image data obtained by dequantizing the quantized representative values and the image data of the predicted image.
  • the difference image between the predicted image and the original image is the target of the filtering process, and the difference image subjected to the filtering process is the target of quantization.
  • The locally decoded image, obtained by adding the image data obtained by dequantizing the quantized representative values (produced by quantizing the quantization target image) and the image data of the predicted image, becomes the target of the inverse filter processing, and the locally decoded image subjected to the inverse filter processing is used as a reference image for generating a predicted image. Therefore, according to the above configuration, the predicted image generation means can generate a predicted image based on a reference image that better approximates the original image.
  • It is preferable that the filter processing means calculates, from the image data of the predicted image, the pixel values to be subtracted from the image data of the processing target image that is the target of the filtering process.
  • It is likewise preferable that the inverse filter processing means calculates, from the image data of the predicted image and by the same method as the filter processing means, the pixel values to be added to the image data of the processing target image that is the target of the inverse filtering process.
  • According to the above configuration, the pixel values that the filter processing means subtracts from the image data of the processing target image and the pixel values that the inverse filter processing means adds to the image data of the processing target image are calculated from the predicted image by the same method. Therefore, the inverse filter processing means can more completely restore the frequency components generating block distortion that were removed from the processing target image by the filter processing means.
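  • The symmetry just described, subtracting and later adding back a correction computed from the predicted image by the same rule, can be sketched as follows. The particular correction rule (half of each predicted sample) is a hypothetical stand-in, not the patent's rule.

```python
def matched_filter_pair(target, predicted):
    """Filter: subtract a correction derived from the predicted image.
    Inverse filter: add back a correction derived by the SAME rule.
    Because both sides compute the correction identically, the removed
    component is restored exactly."""
    correction = [p // 2 for p in predicted]   # same rule on both sides
    filtered = [t - c for t, c in zip(target, correction)]
    restored = [f + c for f, c in zip(filtered, correction)]
    return filtered, restored

filtered, restored = matched_filter_pair([30, 40, 50], [20, 20, 20])
```

Whatever rule is used, the round trip is exact as long as both sides derive the correction from the same predicted image.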
  • The moving image decoding apparatus of the present invention decodes encoded data obtained by encoding image data from which the frequency components that generate block distortion have been removed, and comprises inverse filter processing means for performing, on the image data of the decoded image obtained by the decoding, inverse filter processing that restores the removed frequency components.
  • the moving image decoding apparatus decodes encoded data obtained by encoding image data from which a frequency component causing block distortion has been removed in advance.
  • Since the encoded data is obtained after frequency components that generate block distortion have been removed in advance, block distortion in the decoded image obtained by decoding is suppressed.
  • In the decoded image obtained by decoding, the frequency components removed in the encoding process are restored by the inverse filter processing means. Therefore, a decoded image that has been subjected to the inverse filter processing is an image in which block distortion is reduced and no specific frequency component is lost, that is, an image that better approximates the original image.
  • Thus, the moving picture decoding apparatus having the above configuration has the effect of being able to restore a decoded picture with reduced block distortion without losing any specific frequency component.
  • FIG. 1 is a functional block diagram showing a configuration of a moving picture encoding apparatus according to an embodiment of the present invention.
  • FIG. 2 is a functional block diagram showing a configuration of a moving picture decoding apparatus according to an embodiment of the present invention.
  • FIG. 3 is a flowchart showing an outline of filter processing calculation by the filter processing unit of the video encoding device shown in FIG. 1.
  • FIG. 4 (a) is a graph showing the effect of the filter processing operation shown in FIG. 3, and is a graph showing the pixel value of the processing target image to be subjected to the filter processing.
  • FIG. 4 (b) is a graph showing the operation of the filter processing calculation shown in FIG. 3, and is a graph showing the average pixel value of each block.
  • FIG. 4 (c) is a graph showing the operation of the filter processing calculation shown in FIG. 3, and is a graph showing the predicted values obtained by linear interpolation using the average pixel values shown in FIG. 4 (b).
  • FIG. 4 (d) is a graph showing the effect of the filter processing calculation shown in FIG. 3, and is a graph showing the pixel value of the filter output image.
  • FIG. 5 is a flowchart showing an overview of the inverse filter processing calculation by the inverse filter processing unit included in the video encoding device shown in FIG. 1 and the video decoding device shown in FIG.
  • FIG. 6 (a) is a graph showing the operation of the inverse filter processing operation shown in FIG. 5, and is a graph showing the pixel value of the processing target image to be subjected to the inverse filter processing.
  • FIG. 6 (b) is a graph showing the operation of the inverse filter processing calculation shown in FIG. 5, and is a graph showing the average pixel value of each block.
  • FIG. 6 (c) is a graph showing the operation of the inverse filter processing calculation shown in FIG. 5, and is a graph showing the predicted values obtained by linear interpolation using the average pixel values shown in FIG. 6 (b).
  • FIG. 6 (d) is a graph showing the operation of the inverse filter processing calculation shown in FIG. 5, and is a graph showing the pixel value of the inverse filter output image.
  • FIG. 7 is a functional block diagram showing another configuration example of the video encoding device according to the embodiment of the present invention.
  • FIG. 8 shows another configuration example of the video decoding device according to the embodiment of the present invention, and is a functional block diagram showing the configuration of the video decoding device corresponding to the video encoding device of FIG. 7.
  • FIG. 9 is a functional block diagram showing another configuration example of the video encoding device according to the embodiment of the present invention.
  • FIG. 10 shows another configuration example of the video decoding device according to the embodiment of the present invention, and is a functional block diagram showing the configuration of the video decoding device corresponding to the video encoding device of FIG. 9.
  • FIG. 11 is a functional block diagram showing a conventional technique and showing a configuration of a moving picture coding apparatus provided with a deblocking filter.
  • FIG. 12 is a functional block diagram showing a conventional technique and showing a configuration of a video decoding device corresponding to the video encoding device shown in FIG. 11.
  • FIG. 13 is an explanatory diagram showing an image division pattern in a quantization target image to be quantized or a processing target image to be filtered.
  • FIG. 14 is an enlarged view showing two adjacent blocks in an image divided into a plurality of blocks by the division pattern shown in FIG. 13.
  • FIG. 15 (a) is a graph showing the prior art and showing the action of the deblocking filter. In particular, it is a graph showing pixel values of an original image to be encoded.
  • FIG. 15 (b) is a graph showing the prior art and showing the action of the deblocking filter. In particular, it is a graph showing pixel values of a locally decoded image including block distortion.
  • FIG. 15 (c) is a graph showing the prior art and showing the action of the deblocking filter. It is a graph which especially shows the pixel value of the output image of a deblocking filter.
  • 11 Filter processing unit (filter processing means)
  • 12 Inverse filter processing unit (inverse filter processing means)
  • One embodiment of the moving picture encoding apparatus of the present invention will be described below with reference to FIGS. 1 to 10.
  • FIG. 1 is a functional block diagram showing a schematic configuration of a video encoding device 300 according to the present embodiment.
  • The moving picture coding apparatus 300 includes a DCT unit 1, a quantization unit 2, a variable length coding unit 3, an inverse quantization unit 4, an IDCT unit 5, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, an encoding control unit 10, a filter processing unit 11, and an inverse filter processing unit 12.
  • The difference between the moving picture coding apparatus 300 in FIG. 1 and the conventional moving picture coding apparatus 100 (FIG. 11) is that the moving picture coding apparatus 300 includes a filter processing unit 11 and an inverse filter processing unit 12 instead of the conventional deblocking filter processing unit 6.
  • blocks having the same functions as those of the conventional moving image coding apparatus 100 are shown using the same names and symbols as those in FIG. 11, and the description thereof is omitted.
  • The characteristic components of the moving image encoding apparatus 300 are the filter processing unit 11 and the inverse filter processing unit 12.
  • the filter processing unit 11 uses the original image to be encoded as a processing target image, and performs a filtering process to remove a frequency component that causes block distortion from the processing target image.
  • the DCT unit 1 is supplied with a differential image obtained by subtracting the prediction image generated by the intra prediction unit 8 or the inter prediction unit 9 from the image subjected to the filter processing by the filter processing unit 11.
  • The inverse filter processing unit 12 is supplied with a locally decoded image obtained by adding the difference image restored by the inverse quantization unit 4 and the IDCT unit 5 to the prediction image generated by the intra prediction unit 8 or the inter prediction unit 9.
  • The inverse filter processing unit 12 takes this locally decoded image as its processing target image and performs on it an inverse filter process corresponding to the inverse transformation of the filter process performed by the filter processing unit 11.
  • the locally decoded image that has been subjected to the inverse filter processing by the inverse filter processing unit 12 is stored in the frame memory 7 as a reference image, and is used for generation of a predicted image by the intra prediction unit 8 or the inter prediction unit 9.
  • The DCT unit 1 and the quantization unit 2 divide the quantization target image into a plurality of blocks and quantize the image data of each block. Therefore, block distortion can be included in a decoded image obtained by inverse quantization of the quantized image. That is, block distortion may be included both in the locally decoded image generated inside the moving image encoding device 300 to generate a predicted image and in the decoded image generated by the moving image decoding device described later.
  • the quantization target image supplied to the DCT unit 1 is a difference image between the image subjected to the filter processing by the filter processing unit 11 and the predicted image. Since the filter processing unit 11 acts on the processing target image so as to remove in advance the frequency component that generates block distortion, the block distortion generated in the quantization / inverse quantization process can be effectively reduced.
  • the reference image that the intra prediction unit 8 or the inter prediction unit 9 refers to in order to generate a predicted image is a locally decoded image that has been subjected to the inverse filter processing by the inverse filter processing unit 12. Therefore, the frequency component removed from the original image is restored in the reference image.
  • the intra prediction unit 8 and the inter prediction unit 9 can generate a prediction image based on a reference image in which a specific frequency component is not lost while the block distortion is reduced.
  • In the video encoding device 300, the spatial frequency components that would be lost to the quantization process in the conventional video encoding device 100 are removed in advance by the filter processing unit 11 and later restored by the inverse filter processing unit 12; the influence of the quantization process can thus be avoided and the occurrence of block distortion suppressed.
  • As will be described later, the filter processing unit 11 and the inverse filter processing unit 12 maintain the average pixel value of each block (for example, its average luminance level) before and after processing. It is therefore possible to avoid problems such as image blurring and flickering during moving image playback that the deblocking filter process can cause.
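The encoding loop described above can be sketched as follows. This is an illustrative outline, not the patent's implementation: the function names, the collapsing of the DCT unit 1 and quantization unit 2 into a single quantize step, and the fixed-ramp stand-in for the filter processing are all assumptions made for the demonstration.

```python
import numpy as np

def encode_step(original, predicted, quantize, dequantize, filt, inv_filt):
    """One pass of the loop above; filt / inv_filt play the roles of the
    filter processing unit 11 and the inverse filter processing unit 12."""
    filtered = filt(original)              # remove distortion-prone component
    residual = filtered - predicted        # difference image fed to DCT unit 1
    coded = quantize(residual)             # DCT + quantization, abstracted
    local = dequantize(coded) + predicted  # locally decoded image
    reference = inv_filt(local)            # restored image -> frame memory 7
    return coded, reference

# Demonstration with a lossless "quantizer" and a filter that subtracts a
# fixed ramp component: the reference image then equals the original.
ramp = np.linspace(-2.0, 2.0, 8)
original = np.array([10., 12., 11., 13., 40., 42., 41., 43.])
predicted = np.full(8, 25.0)
coded, reference = encode_step(
    original, predicted,
    quantize=lambda x: x, dequantize=lambda x: x,
    filt=lambda x: x - ramp, inv_filt=lambda x: x + ramp)
assert np.allclose(reference, original)
```

With a real quantizer, only the quantization error would remain between the reference image and the original.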
  • FIG. 2 is a functional block diagram showing a schematic configuration of a video decoding device 400 corresponding to the video encoding device 300 shown in FIG.
  • As shown in FIG. 2, the moving picture decoding apparatus 400 includes an inverse quantization unit 4, an IDCT unit 5, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, a variable length decoding unit 20, and an inverse filter processing unit 12.
  • blocks having the same functions as those of the conventional video decoding device 200 are denoted by the same names and symbols as those in FIG. 12, and description thereof is omitted.
  • the encoded data decoded by the moving image decoding apparatus 400 is generated based on the original image from which the frequency component causing block distortion is removed in the moving image encoding apparatus 300.
  • the variable length decoding unit 20 of the video decoding device 400 decodes the encoded data, and the inverse quantization unit 4 and the IDCT unit 5 inversely quantize the value obtained by the decoding.
  • the image restored by inverse quantization corresponds to the difference image obtained by subtracting the predicted image from the original image.
  • the video decoding device 400 adds the prediction image generated by the intra prediction unit 8 or the inter prediction unit 9 to the restored difference image, and generates a decoded image.
  • the inverse filter processing unit 12 included in the video decoding device 400 performs the same inverse filter processing as the inverse filter processing unit 12 included in the video encoding device 300.
  • the inverse filter processing unit 12 acts on the decoded image lacking the frequency component that generates block distortion so as to restore the frequency component removed at the time of encoding.
  • The decoded image that has been subjected to the inverse filter processing by the inverse filter processing unit 12 is referred to by the intra prediction unit 8 or the inter prediction unit 9 as a reference image for generating a prediction image, and is also output as a display image.
  • By the operation of the inverse filter processing unit 12, the frequency components removed from the original image are restored in the display image and the reference image. That is, the moving picture decoding device 400 can output a display image in which block distortion is reduced and yet no specific frequency component is missing, and can likewise generate a prediction image based on a reference image in which block distortion is reduced without any specific frequency component being lost.
  • the filter processing unit 11 executes horizontal filter processing and vertical filter processing by dividing a processing target image to be filtered into a plurality of blocks.
  • The division pattern in which the filter processing unit 11 divides the processing target image is as shown in FIGS. 13 and 14, and is the same as the division pattern in which the DCT unit 1 and the quantization unit 2 divide the quantization target image for quantization.
  • The filter processing unit 11 calculates the image data of block Bn in the filter output image from the image data of block Bn in the processing target image and the image data of the blocks adjacent to block Bn. Specifically, in the horizontal filter processing, the image data of block Bn is calculated by referring to the image data of block Bn+1 adjacent to the right side of block Bn, and in the vertical filter processing, the image data of block Bn is calculated by referring to the image data of block Bn+W adjacent to the lower side of block Bn.
  • The filter processing unit 11 completes the horizontal filter processing by repeating the filter processing calculation described below for every pair of two adjacent blocks. This repetition may be executed for all pairs (B1, B2), (B3, B4), (B5, B6), ..., or it may be executed for all blocks (B1, B2), (B2, B3), (B3, B4), ..., with the filter processing calculation referring to the next block each time. However, in these repetitions, the filter processing is not performed for pairs of non-adjacent blocks, for example a pair consisting of block Bk×W−1 (k is an integer) at the right edge of the image and the next block Bk×W located at the left edge of the row below.
  • FIG. 3 is a flowchart schematically illustrating the filter processing calculation executed by the filter processing unit 11.
  • The filter processing calculation by the filter processing unit 11 includes step S1 for calculating average pixel values, step S2 for calculating predicted values, and step S3 for calculating the filter output image. Steps S1 to S3 will be described in more detail as follows. In the following description, the filter processing calculation for the pixel values of the four pixels arranged in the v-th row of the 16 pixels arranged in 4 rows and 4 columns belonging to block Bn will be described. The filter processing unit 11 executes the following filter processing calculation for the first through fourth rows of block Bn, either sequentially or in parallel, so that the pixel values of all the pixels belonging to block Bn are calculated.
  • (Step S1) The filter processing unit 11 calculates the average pixel value ⟨p⟩n,v of the four pixels arranged in the v-th row of block Bn in the processing target image and the average pixel value ⟨p⟩n+1,v of the four pixels arranged in the v-th row of block Bn+1 in the processing target image, that is, ⟨p⟩n,v = (1/4) Σu p(n, u, v) and ⟨p⟩n+1,v = (1/4) Σu p(n+1, u, v), where p(n, u, v) is the pixel value at pixel (n, u, v) of the processing target image.
  • FIG. 4 (a) is a graph showing the pixel values of the processing target image.
  • FIG. 4 (b) is a graph showing average pixel values obtained in step S1 above.
  • (Step S2) The filter processing unit 11 calculates, by linear interpolation using the average pixel values ⟨p⟩n,v and ⟨p⟩n+1,v obtained in step S1, the predicted values p_pred(n, u, v) and p_pred(n+1, u, v) for each pixel in the v-th rows of blocks Bn and Bn+1.
  • FIG. 4 (c) is a graph showing the predicted values obtained in step S2.
  • (Step S3) The filter processing unit 11 treats the difference between the predicted value p_pred(n, u, v) obtained in step S2 and the average pixel value ⟨p⟩n,v obtained in step S1 as the removal component to be removed from the processing target image, and calculates the pixel value p′(n, u, v) of the filter output image by subtracting this removal component from the pixel value p(n, u, v) of the processing target image, that is, p′(n, u, v) = p(n, u, v) − (p_pred(n, u, v) − ⟨p⟩n,v).
  • FIG. 4 (d) is a graph showing pixel values of the filter output image obtained in step S3.
  • Note that the average pixel value of block Bn in the filter output image matches the average pixel value of block Bn in the processing target image; that is, the filter processing unit 11 keeps the average pixel value of each block unchanged before and after processing. For this reason, blurring and flickering of moving images due to the filter processing can be prevented.
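The per-row calculation of steps S1 to S3 can be sketched as below. The patent's interpolation equation is not reproduced in this text, so the exact form of p_pred is an assumption: a linear ramp from ⟨p⟩n,v toward ⟨p⟩n+1,v whose per-pixel offsets average to zero, which satisfies the two properties the text does state (linear interpolation of the block averages, and preservation of the block average).

```python
import numpy as np

BLOCK = 4  # 4x4 blocks, as in the embodiment

def predict_row(avg_n, avg_n1):
    # Assumed linear-interpolation form: the predicted value ramps from
    # <p>n,v (at the centre of block Bn) toward <p>n+1,v.  The offsets t
    # are zero-mean, so the prediction averages to <p>n,v over the row.
    t = (np.arange(BLOCK) - (BLOCK - 1) / 2) / BLOCK
    return avg_n + t * (avg_n1 - avg_n)

def filter_row(p_n, p_n1):
    """Steps S1-S3 for one row of block Bn (p_n1 is the same row of Bn+1)."""
    avg_n, avg_n1 = p_n.mean(), p_n1.mean()   # step S1
    p_pred = predict_row(avg_n, avg_n1)       # step S2
    return p_n - (p_pred - avg_n)             # step S3: remove the component

row = np.array([100., 104., 108., 112.])
next_row = np.array([140., 144., 148., 152.])
out = filter_row(row, next_row)
# The block average is unchanged, as the text notes:
assert abs(out.mean() - row.mean()) < 1e-9
```

Any zero-mean choice of offsets preserves the row average the same way; only the ramp shape is specific to this sketch.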
  • the filter processing unit 11 performs the filtering process in the horizontal direction as described above, and then performs the filtering process in the vertical direction.
  • The vertical filter processing calculates the image data of block Bn with reference to block Bn+W adjacent to the lower side of block Bn; since the calculation method is the same as that of the horizontal filter processing, its description is omitted.
  • Alternatively, the filter processing unit 11 may perform the horizontal filter processing after performing the vertical filter processing.
  • the inverse filter processing unit 12 executes the horizontal direction reverse filter processing and the vertical direction reverse filter processing by dividing the processing target image into a plurality of blocks.
  • the division pattern in which the inverse filter processing unit 12 divides the processing target image is the same as the division pattern in which the filter processing unit 11 divides the processing target image.
  • The inverse filter processing unit 12 calculates the image data of block Bn in the inverse filter output image from the image data of block Bn in the processing target image and the image data of the blocks adjacent to block Bn. Specifically, in the horizontal inverse filter processing, the image data of block Bn is calculated by referring to the image data of block Bn+1 adjacent to the right side of block Bn, and in the vertical inverse filter processing, the image data of block Bn is calculated by referring to the image data of block Bn+W adjacent to the lower side of block Bn.
  • the inverse filter processing calculation executed by the inverse filter processing unit in the horizontal inverse filter processing will be described with reference to FIG. 5 and FIG.
  • the inverse filter processing calculation for calculating the pixel value (for example, luminance level) of one block B will be described.
  • The inverse filter processing unit 12 completes the horizontal inverse filter processing by repeating the inverse filter processing calculation described below for all pairs of two adjacent blocks.
  • FIG. 5 is a flowchart schematically illustrating the inverse filter processing calculation performed by the inverse filter processing unit 12.
  • The inverse filter processing calculation by the inverse filter processing unit 12 includes step T1 for calculating average pixel values, step T2 for calculating predicted values, and step T3 for calculating the inverse filter output image. Steps T1 to T3 will be described in more detail as follows. In the following description, the inverse filter processing calculation for the pixel values of the four pixels arranged in the v-th row of the 16 pixels arranged in 4 rows and 4 columns belonging to block Bn will be described. The inverse filter processing unit 12 executes the following inverse filter processing calculation for the first through fourth rows of block Bn, either sequentially or in parallel, so that the pixel values of all the pixels belonging to block Bn are calculated.
  • (Step T1) The inverse filter processing unit 12 calculates the average pixel value ⟨p⟩n,v of the four pixels arranged in the v-th row of block Bn in the processing target image and the average pixel value ⟨p⟩n+1,v of the four pixels arranged in the v-th row of block Bn+1 in the processing target image, in the same manner as in step S1. Here, p(n, u, v) is the pixel value at pixel (n, u, v) of the processing target image.
  • FIG. 6 (a) is a graph showing the pixel values of the processing target image.
  • FIG. 6 (b) is a graph showing the average pixel values obtained in step T1.
  • (Step T2) The inverse filter processing unit 12 calculates, by linear interpolation using the average pixel values ⟨p⟩n,v and ⟨p⟩n+1,v obtained in step T1, the predicted values p_pred(n, u, v) and p_pred(n+1, u, v) for each pixel, in the same manner as in step S2.
  • FIG. 6 (c) is a graph showing the predicted values obtained in step T2 above.
  • (Step T3) The inverse filter processing unit 12 treats the difference between the predicted value p_pred(n, u, v) obtained in step T2 and the average pixel value ⟨p⟩n,v obtained in step T1 as the additional component to be added to the processing target image, and calculates the pixel value p̂(n, u, v) of the inverse filter output image by adding this additional component to the pixel value p(n, u, v) of the processing target image, that is, p̂(n, u, v) = p(n, u, v) + (p_pred(n, u, v) − ⟨p⟩n,v).
  • FIG. 6 (d) is a graph showing the pixel values of the inverse filter output image obtained in step T3. Note that the average pixel value of block Bn in the inverse filter output image matches the average pixel value of block Bn in the processing target image. For this reason, the occurrence of blurring and flickering of moving images is effectively prevented.
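A round-trip sketch of steps T1 to T3, with the forward steps S1 to S3 repeated so the example is self-contained; the interpolation form is the same assumed linear ramp as before, not the patent's (unreproduced) equation.

```python
import numpy as np

def ramp_pred(avg_n, avg_n1, block=4):
    # Assumed linear-interpolation form shared by filter and inverse filter.
    t = (np.arange(block) - (block - 1) / 2) / block
    return avg_n + t * (avg_n1 - avg_n)

def filter_row(p_n, p_n1):                 # steps S1-S3: subtract component
    comp = ramp_pred(p_n.mean(), p_n1.mean()) - p_n.mean()
    return p_n - comp

def inverse_filter_row(p_n, p_n1):         # steps T1-T3: add component back
    comp = ramp_pred(p_n.mean(), p_n1.mean()) - p_n.mean()
    return p_n + comp

# Because both directions preserve every block average, the inverse filter
# applied to an unquantized filter output restores the input exactly.
a = np.array([10., 30., 20., 40.])
b = np.array([50., 70., 60., 80.])
c = np.array([90., 95., 85., 100.])
fa, fb = filter_row(a, b), filter_row(b, c)
restored = inverse_filter_row(fa, fb)
assert np.allclose(restored, a)
```

In the encoder the inverse filter sees the locally decoded image, so filter input and inverse-filter input differ by the quantization error; that residual difference is what Modification 1 addresses.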
  • the inverse filter processing unit 12 performs the inverse filter process in the horizontal direction as described above, and then performs the inverse filter process in the vertical direction.
  • The vertical inverse filter processing calculates the image data of block Bn with reference to block Bn+W adjacent to the lower side of block Bn; since the calculation method is the same as that of the horizontal inverse filter processing, its description is omitted.
  • Alternatively, a configuration may be employed in which the inverse filter processing unit 12 performs the horizontal inverse filter processing after performing the vertical inverse filter processing.
  • The frequency components that generate block distortion, removed from the processing target image by the filter processing of the filter processing unit 11, are restored by the inverse filter processing of the inverse filter processing unit 12. Therefore, no specific frequency component is lost in the restored image.
  • In the above description, the predicted value calculated by the filter processing unit 11 is obtained by linear interpolation using the average pixel values of adjacent blocks, but the present invention is not limited to this. That is, the predicted value used by the filter processing unit 11 to calculate the pixel values of the filter output image need only be a predicted value calculated on the basis of block-unit average pixel values.
  • For example, the predicted value may be a value calculated by cubic interpolation using the average pixel values ⟨p⟩n−1,v, ⟨p⟩n,v, and ⟨p⟩n+1,v of three adjacent blocks.
  • The inverse filter processing unit 12 corresponding to the filter processing unit 11 that performs such filter processes can easily be configured to perform an inverse filter process corresponding to the inverse transformation of these filter processes.
  • In this case, it is preferable to correct the filter output using the average of the predicted values and the average pixel value of the input image so that the average pixel value is maintained between the filter input and the filter output.
  • the filter processing unit 11 performs one-dimensional filter processing independently in the horizontal direction and the vertical direction, but the present invention is not limited to this. It may be configured to perform two-dimensional filter processing.
  • An example of the two-dimensional filter processing performed by the filter processing unit 11 is as follows.
  • the filter processing unit 11 calculates the pixel value of the block of interest with reference to the pixel values of four blocks that are adjacent vertically and horizontally.
  • The filter processing calculation performed by the filter processing unit 11 to calculate the pixel values of the target block Bn includes the following steps 1 to 3.
  • (Step 1) The filter processing unit 11 calculates the average pixel value ⟨p⟩n of the target block Bn and the average pixel values ⟨p⟩n−1, ⟨p⟩n+1, ⟨p⟩n−W, and ⟨p⟩n+W of the blocks Bn−1, Bn+1, Bn−W, and Bn+W adjacent to the target block Bn above, below, and to the left and right.
  • (Step 2) The filter processing unit 11 calculates the predicted value p_pred(n, u, v) for each pixel of block Bn by linear interpolation using the average pixel values of the respective blocks obtained in step 1 above.
  • (Step 3) The filter processing unit 11 treats the difference between the predicted value p_pred(n, u, v) obtained in step 2 and the average pixel value ⟨p⟩n of block Bn obtained in step 1 as the removal component to be removed from the processing target image, and calculates the pixel value p′(n, u, v) of block Bn in the filter output image by subtracting this removal component from the pixel value p(n, u, v) of the processing target image, that is, p′(n, u, v) = p(n, u, v) − (p_pred(n, u, v) − ⟨p⟩n).
  • The filter processing unit 11 repeats the above steps 1 to 3 for each block, thereby calculating the pixel values of the entire filter output image.
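The two-dimensional variant of steps 1 to 3 can be sketched likewise. Again the interpolation formula itself is not reproduced in the text, so the planar prediction used here (a plane through ⟨p⟩n with slopes taken from central differences of the neighbour averages) is an assumption; any prediction whose offsets from ⟨p⟩n are zero-mean preserves the block average in the same way.

```python
import numpy as np

B = 4  # block size used in the embodiment

def filter_block_2d(centre, left, right, up, down):
    """Steps 1-3 for one block of interest Bn given its four neighbours
    Bn-1, Bn+1, Bn-W, Bn+W (all 4x4 arrays)."""
    avg = centre.mean()                                   # step 1
    t = (np.arange(B) - (B - 1) / 2) / (2 * B)            # zero-mean offsets
    p_pred = (avg                                         # step 2 (assumed)
              + t[:, None] * (down.mean() - up.mean())    # vertical slope
              + t[None, :] * (right.mean() - left.mean()))  # horizontal slope
    return centre - (p_pred - avg)                        # step 3

rng = np.random.default_rng(1)
centre, left, right, up, down = (rng.uniform(0, 255, (B, B)) for _ in range(5))
out = filter_block_2d(centre, left, right, up, down)
assert abs(out.mean() - centre.mean()) < 1e-9  # block average preserved
```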
  • In this case, the corresponding inverse filter processing unit 12 may be configured to perform an inverse filter process corresponding to the inverse transformation of this filter process of the filter processing unit 11.
  • (Modification 1)
  • FIG. 7 is a functional block diagram showing a schematic configuration of a video encoding device 300a that is a modification of the video encoding device 300.
  • As shown in FIG. 7, the moving picture coding apparatus 300a includes a DCT unit 1, a quantization unit 2, a variable length coding unit 3, an inverse quantization unit 4, an IDCT unit 5, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, an encoding control unit 10, a filter processing unit 11a, and an inverse filter processing unit 12a.
  • The difference between the moving image encoding device 300a and the moving image encoding device 300 is that the moving image encoding device 300a includes, in place of the filter processing unit 11, a filter processing unit 11a that performs filter processing based on a predicted image, and includes, in place of the inverse filter processing unit 12, an inverse filter processing unit 12a that performs inverse filter processing based on the predicted image.
  • In the moving image encoding device 300a, the intra prediction unit 8 and the inter prediction unit 9 supply the generated prediction image to the filter processing unit 11a and the inverse filter processing unit 12a.
  • blocks having the same functions as those of the moving picture coding apparatus 300 in FIG. 1 are denoted by the same names and symbols as those in FIG. 1, and description thereof is omitted.
  • FIG. 8 is a functional block diagram showing a schematic configuration of a video decoding device 400a corresponding to the video encoding device 300a shown in FIG.
  • As shown in FIG. 8, the video decoding device 400a includes an inverse quantization unit 4, an IDCT unit 5, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, a variable length decoding unit 20, and an inverse filter processing unit 12a.
  • The difference between the video decoding device 400a and the video decoding device 400 (FIG. 2) is that the video decoding device 400a includes, in place of the inverse filter processing unit 12, the same inverse filter processing unit 12a as that of the video encoding device 300a.
  • the intra prediction unit 8 and the inter prediction unit 9 supply the generated predicted image to the inverse filter processing unit 12a.
  • the filter processing unit 11a and the inverse filter processing unit 12a that can be suitably used in the moving image coding apparatus 300a and the moving image decoding device 400a will be described as follows.
  • The filter processing unit 11a and the inverse filter processing unit 12a calculate the image data of each block from the image data of the predicted image supplied from the intra prediction unit 8 or the inter prediction unit 9. Specifically, in the horizontal filter processing, the image data of block Bn is calculated by referring to the image data of blocks Bn and Bn+1 in the predicted image, and in the vertical filter processing, the image data of block Bn is calculated by referring to the image data of blocks Bn and Bn+W in the predicted image.
  • Although only the filter processing calculation for calculating the pixel values of one block Bn is described below, the filter processing unit 11a and the inverse filter processing unit 12a complete the horizontal filter processing for the entire filter output image by repeating this calculation, and then complete the entire filter processing by performing the filter processing in the vertical direction in the same manner.
  • the filter processing calculation executed by the filter processing unit 11a in the horizontal filter processing includes the following steps Sla to S3a.
  • (Step S1a) The filter processing unit 11a calculates the average pixel value ⟨q⟩n,v of the four pixels arranged in the v-th row of block Bn in the predicted image and the average pixel value ⟨q⟩n+1,v of the four pixels arranged in the v-th row of block Bn+1 in the predicted image. Here, q(n, u, v) is the pixel value at pixel (n, u, v) of the predicted image.
  • (Step S2a) The filter processing unit 11a calculates, by linear interpolation using the average pixel values ⟨q⟩n,v and ⟨q⟩n+1,v obtained in step S1a, the predicted values q_pred(n, u, v) and q_pred(n+1, u, v) for each pixel in the v-th rows of blocks Bn and Bn+1.
  • (Step S3a) The filter processing unit 11a treats the difference between the predicted value q_pred(n, u, v) obtained in step S2a and the average pixel value ⟨q⟩n,v of block Bn obtained in step S1a as the removal component to be removed from the processing target image, and calculates the pixel value p′(n, u, v) of the filter output image by subtracting this removal component from the pixel value p(n, u, v) of the processing target image, that is, p′(n, u, v) = p(n, u, v) − (q_pred(n, u, v) − ⟨q⟩n,v).
  • Steps S1a to S3a constitute the filter processing calculation for the pixel values of the four pixels arranged in the v-th row of the 16 pixels arranged in 4 rows and 4 columns belonging to block Bn. The pixel values of all the pixels belonging to block Bn are calculated by executing steps S1a to S3a sequentially or in parallel for the first through fourth rows of block Bn.
  • The inverse filter processing calculation executed by the inverse filter processing unit 12a in the horizontal inverse filter processing includes the following steps T1a to T3a.
  • (Step T1a) The inverse filter processing unit 12a calculates the average pixel value ⟨q⟩n,v of the four pixels arranged in the v-th row of block Bn in the predicted image and the average pixel value ⟨q⟩n+1,v of the four pixels arranged in the v-th row of block Bn+1 in the predicted image.
  • (Step T2a) The inverse filter processing unit 12a calculates, by linear interpolation using the average pixel values ⟨q⟩n,v and ⟨q⟩n+1,v obtained in step T1a, the predicted values q_pred(n, u, v) and q_pred(n+1, u, v) for each pixel in the v-th rows of blocks Bn and Bn+1.
  • (Step T3a) The inverse filter processing unit 12a treats the difference between the predicted value q_pred(n, u, v) obtained in step T2a and the average pixel value ⟨q⟩n,v obtained in step T1a as the additional component to be added to the processing target image, and calculates the pixel value p̂(n, u, v) of the inverse filter output image by adding this additional component to the pixel value p(n, u, v) of the processing target image, that is, p̂(n, u, v) = p(n, u, v) + (q_pred(n, u, v) − ⟨q⟩n,v).
  • Steps T1a to T3a constitute the inverse filter processing calculation for the pixel values of the four pixels arranged in the v-th row of the 16 pixels arranged in 4 rows and 4 columns belonging to block Bn. The pixel values of all the pixels belonging to block Bn are calculated by executing steps T1a to T3a sequentially or in parallel for the first through fourth rows of block Bn.
  • As described above, the filter processing unit 11a calculates, from the image data of the predicted image, the pixel values that it subtracts from the image data of the processing target image of the filter processing. Likewise, the inverse filter processing unit 12a calculates, from the image data of the predicted image and by the same method as the filter processing means, the pixel values that it adds to the image data of the processing target image of the inverse filter processing. That is, the pixel values subtracted by the filter processing unit 11a and the pixel values added by the inverse filter processing unit 12a are calculated from the same predicted image and are therefore identical.
  • Therefore, unlike in the moving image encoding device 300, where the filter processing and the inverse filter processing are based on processing target images that differ from each other by the quantization error, the frequency components generating block distortion that were removed from the processing target image by the filter processing unit 11a can be completely restored by the inverse filter processing unit 12a.
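The exact-restoration property of Modification 1 can be checked with a small sketch. The interpolation form is assumed as before; the point demonstrated is only that the removed and re-added components are derived from the same predicted image q, so they cancel exactly regardless of any quantization error introduced in between.

```python
import numpy as np

def comp_from_prediction(q_n, q_n1):
    # Removal/additional component computed ONLY from the predicted image q
    # (steps S1a-S2a / T1a-T2a); linear-ramp interpolation form is assumed.
    t = (np.arange(4) - 1.5) / 4
    q_pred = q_n.mean() + t * (q_n1.mean() - q_n.mean())
    return q_pred - q_n.mean()

def filter_row_a(p_n, q_n, q_n1):        # step S3a: subtract the component
    return p_n - comp_from_prediction(q_n, q_n1)

def inverse_row_a(p_n, q_n, q_n1):       # step T3a: add the component back
    return p_n + comp_from_prediction(q_n, q_n1)

# Even if the filtered data is altered in between (quantization error "err"),
# the component removed at the encoder is restored bit-exactly at the decoder,
# because both sides derive it from the same predicted image:
p = np.array([12., 34., 29., 41.])
q = np.array([10., 30., 30., 40.])       # predicted image, row of block Bn
q1 = np.array([55., 60., 65., 70.])      # predicted image, row of block Bn+1
err = np.array([0.5, -0.25, 0.0, 0.125])
decoded = inverse_row_a(filter_row_a(p, q, q1) + err, q, q1)
assert np.allclose(decoded, p + err)     # only the quantization error remains
```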
  • FIG. 9 is a functional block diagram showing a schematic configuration of a video encoding device 300b which is another modification of the video encoding device 300.
  • As shown in FIG. 9, the moving picture coding apparatus 300b includes a DCT unit 1, a quantization unit 2, a variable length coding unit 3, an inverse quantization unit 4, an IDCT unit 5, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, an encoding control unit 10, a filter processing unit 11b, and an inverse filter processing unit 12b.
  • The difference between the moving image encoding device 300b and the moving image encoding device 300 is that the moving image encoding device 300b includes, in place of the filter processing unit 11, a filter processing unit 11b that executes filter processing based on a predicted image, and includes, in place of the inverse filter processing unit 12, an inverse filter processing unit 12b that executes inverse filter processing based on the predicted image.
  • The filter processing unit 11b is provided immediately before the DCT unit 1, and uses the difference image between the original image and the predicted image as the processing target image of the filter processing.
  • The prediction images used by the filter processing unit 11b and the inverse filter processing unit 12b for their processing are both supplied from the intra prediction unit 8 or the inter prediction unit 9.
  • The functional blocks of the video encoding device 300b other than the filter processing unit 11b and the inverse filter processing unit 12b are identical to those of the video encoding device 300 in FIG. 1. Therefore, in FIG. 9, blocks having the same functions as those of the moving picture coding apparatus 300 in FIG. 1 are denoted by the same names and symbols as those in FIG. 1, and description thereof is omitted.
  • FIG. 10 is a functional block diagram showing a schematic configuration of a video decoding device 400b corresponding to the video encoding device 300b shown in FIG.
  • As shown in FIG. 10, the video decoding device 400b includes an inverse quantization unit 4, an IDCT unit 5, a frame memory 7, an intra prediction unit 8, an inter prediction unit 9, a variable length decoding unit 20, and an inverse filter processing unit 12b.
  • The difference between the video decoding device 400b and the video decoding device 400 (FIG. 2) is that the video decoding device 400b includes, in place of the inverse filter processing unit 12, the same inverse filter processing unit 12b as that of the video encoding device 300b. Further, as shown in FIG. 10, the prediction image generated by the intra prediction unit 8 or the inter prediction unit 9 is supplied to the inverse filter processing unit 12b.
  • Next, the filter processing in the filter processing unit 11b suitable for the moving image encoding device 300b, and the inverse filter processing in the inverse filter processing unit 12b suitable for the moving image encoding device 300b and the moving image decoding device 400b, will be described.
  • The filter processing unit 11b and the inverse filter processing unit 12b calculate the image data of each block in the filter output image from the image data of the prediction image supplied from the intra prediction unit 8 or the inter prediction unit 9. Specifically, in the horizontal filter processing, the image data of block Bn is calculated by referring to the image data of blocks Bn and Bn+1 in the predicted image, and in the vertical filter processing, the image data of block Bn is calculated by referring to the image data of blocks Bn and Bn+W in the predicted image.
  • Although only the filter processing calculation for calculating the pixel values of one block Bn is described below, the filter processing unit 11b and the inverse filter processing unit 12b complete the horizontal filter processing for the entire filter output image by repeating this calculation, and then complete the entire filter processing by performing the filter processing in the vertical direction in the same manner.
  • The filter processing calculation performed by the filter processing unit 11b in the horizontal filter processing includes the following steps S1b to S3b.
  • Step S1b: The filter processing unit 11b calculates the average pixel value 〈q(n)〉 of the four pixels arranged in the v-th row of block B(n) and the average pixel value 〈q(n+1)〉 of the four pixels arranged in the v-th row of block B(n+1) in the predicted image.
  • Step S2b: Using the average pixel values obtained in step S1b, the filter processing unit 11b calculates the predicted values q_pred(n, u, v) and q_pred(n+1, u, v) for each pixel in the v-th row.
  • Step S3b: The filter processing unit 11b subtracts the predicted value q_pred(n, u, v) obtained in step S2b from the pixel value p(n, u, v) of the image to be filtered (the difference image between the original image and the predicted image), thereby calculating the pixel value p'(n, u, v) of the filter output image.
  • The calculation formula used by the filter processing unit 11b to calculate the pixel value p'(n, u, v) of the filter output image is as follows: p'(n, u, v) = p(n, u, v) - q_pred(n, u, v).
  • The above steps S1b to S3b are the filter processing operations for calculating the pixel values of the four pixels arranged in the v-th row among the 16 pixels, arranged in 4 rows and 4 columns, belonging to block B(n).
  • The pixel values of all the pixels belonging to block B(n) are calculated by executing steps S1b to S3b, sequentially or in parallel, for the first through fourth rows of block B(n).
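Steps S1b to S3b for one four-pixel row can be sketched as follows. The patent's exact prediction formula is not reproduced in this text, so the way q_pred is derived from the two block-row averages (plain linear interpolation between them, with assumed block-centre positions) is an illustrative assumption, as are all function names:

```python
def row_means(pred, n, v):
    """Step S1b: average pixel values <q(n)> and <q(n+1)> of the v-th row
    (4 pixels each) of blocks B(n) and B(n+1) in the predicted image."""
    q_n = sum(pred[n][v]) / 4.0
    q_n1 = sum(pred[n + 1][v]) / 4.0
    return q_n, q_n1


def predicted_values(q_n, q_n1):
    """Step S2b: per-pixel predicted values q_pred(n, u, v) for the four
    pixels of the v-th row of B(n).  Linear interpolation between the two
    row averages is assumed here, with the block centres taken at pixel
    positions u = 1.5 (B(n)) and u = 5.5 (B(n+1))."""
    return [q_n + (q_n1 - q_n) * (u - 1.5) / 4.0 for u in range(4)]


def filter_row(p_row, q_pred):
    """Step S3b: subtract the predicted values from the row of the image to
    be filtered (the difference image), giving the filter output row."""
    return [p - q for p, q in zip(p_row, q_pred)]
```

With two flat predicted blocks whose row averages are 10 and 18, the assumed interpolation yields q_pred = [7, 9, 11, 13]; a difference-image row equal to that ramp is filtered to zeros, which is the intent of the step: the smooth, block-mean-driven component is removed before quantization.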
  • The inverse filter processing executed by the inverse filter processing unit 12b includes the following steps T1b to T3b.
  • Step T1b: The inverse filter processing unit 12b calculates the average pixel values of the four pixels arranged in the v-th row of block B(n) and of block B(n+1) in the predicted image.
  • Step T2b: Using the average pixel values obtained in step T1b, the inverse filter processing unit 12b calculates the predicted values q_pred(n, u, v) and q_pred(n+1, u, v) for each pixel in the v-th row of block B(n).
  • Step T3b: The inverse filter processing unit 12b adds the predicted value q_pred(n, u, v) obtained in step T2b to the pixel value p(n, u, v) of the image to be inverse filtered, thereby calculating the pixel value of the inverse filter output image.
  • The above steps T1b to T3b are the filter processing operations for calculating the pixel values of the four pixels arranged in the v-th row among the 16 pixels, arranged in 4 rows and 4 columns, belonging to block B(n).
  • The pixel values of all the pixels belonging to block B(n) are calculated by executing steps T1b to T3b, sequentially or in parallel, for the first through fourth rows of block B(n).
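Step T3b mirrors step S3b with the sign reversed. The minimal sketch below (the q_pred values and row data are illustrative placeholders, not taken from the patent) shows why the filter/inverse-filter pair is lossless when both sides derive q_pred from the same predicted image:

```python
def inverse_filter_row(p_row, q_pred):
    """Step T3b: add the predicted value q_pred(n, u, v) back to each pixel
    value p(n, u, v) of the row being inverse filtered."""
    return [p + q for p, q in zip(p_row, q_pred)]


# Round trip: the encoder-side filter subtracts q_pred, the decoder-side
# inverse filter adds the identical q_pred (both derived from the same
# predicted image), so the original row is restored exactly.
q_pred = [7.0, 9.0, 11.0, 13.0]        # illustrative predicted values
original = [12.0, 8.0, 15.0, 10.0]     # a row of the image to be filtered
filtered = [p - q for p, q in zip(original, q_pred)]
assert inverse_filter_row(filtered, q_pred) == original
```

Quantization between the two steps is omitted in this sketch; the document's point is only that both sides compute q_pred from the same predicted image, so the subtracted and added values match exactly.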
  • The filter processing unit 11b derives, from the image data of the predicted image, the pixel value to be subtracted from the image data of the target image to be filtered.
  • The inverse filter processing unit 12b derives, from the image data of the predicted image, the pixel value to be added to the image data of the processing target image to be inverse filtered, using the same method as the filter processing means. That is, the pixel value subtracted from the processing target image by the filter processing unit 11b and the pixel value added to the processing target image by the inverse filter processing unit 12b are calculated from the same predicted image, and are therefore identical.
  • Therefore, compared with the moving image encoding device 300, which performs the filter processing and inverse filter processing based on processing target images that differ by the quantization error, the filter processing unit 11b and the inverse filter processing unit 12b allow the frequency components that generate block distortion, removed from the processing target image by the filter processing unit 11b, to be restored more completely by the inverse filter processing unit 12b.
  • the present invention is not limited to the above-described embodiment, and various modifications can be made within the scope shown in the claims. That is, embodiments obtained by combining technical means appropriately modified within the scope of the claims are also included in the technical scope of the present invention.
  • the present invention can be configured as follows.
  • As described above, the moving image encoding apparatus of the present invention is a moving image encoding apparatus that divides an image into a plurality of blocks and encodes them, and may be configured to include filter processing means for performing a predetermined filter process on the image in units of blocks, and inverse filter processing means for performing an inverse filter process that is the inverse transformation of the filter process.
  • In the moving image encoding apparatus, the filter processing means may be configured to perform the predetermined filter process on the encoding target image of a block based on the encoding target images of a plurality of adjacent blocks.
  • The moving image encoding apparatus may include prediction means for performing intra-frame prediction or inter-frame prediction for each block, and the filter processing means may perform the predetermined filter process on the encoding target image of a block based on the predicted images, output by the prediction means, of a plurality of adjacent blocks.
  • Alternatively, the moving image encoding apparatus may include prediction means for performing intra-frame prediction or inter-frame prediction for each block, and the filter processing means may perform the predetermined filter process on the difference image between the encoding target image and the predicted image of a block, based on the predicted images, output by the prediction means, of a plurality of adjacent blocks.
  • The moving image decoding apparatus of the present invention is a moving image decoding apparatus for decoding moving image data encoded for each block, and may be configured to include the above inverse filter processing means.
  • The blocks of the moving image encoding devices 300, 300a, and 300b and the moving image decoding devices 400, 400a, and 400b of the present invention described above, in particular the filter processing means 11, 11a, and 11b and the inverse filter processing means 12, 12a, and 12b, may be constituted by hardware logic, or may be realized by software using a CPU as follows.
  • The above-described moving image encoding device and moving image decoding device include a CPU (central processing unit) that executes the instructions of a control program realizing each function, and a ROM (read only memory) that stores the program.
  • An object of the present invention can also be achieved by supplying, to the moving image encoding device or moving image decoding device, a recording medium on which the program code (executable program, intermediate code program, or source program) of the control program for the above-described devices, which is software for realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or CPU or MPU) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include tape systems such as magnetic tape and cassette tape; disk systems including magnetic disks such as floppy disks and hard disks, and optical disks such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards (including memory cards) and optical cards; and semiconductor memory systems such as mask ROM/EPROM/EEPROM/flash ROM.
  • The moving image encoding device and moving image decoding device may also be configured to be connectable to a communication network, with the program code supplied via the communication network.
  • The communication network is not particularly limited.
  • For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, or a satellite communication network can be used.
  • The transmission medium constituting the communication network is not particularly limited either.
  • For example, wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL lines, or wireless media such as infrared (IrDA or remote control) can be used.
  • The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • As described above, the moving image encoding apparatus of the present invention divides an image into a plurality of blocks, quantizes the image data of each block, and performs encoding using the quantized representative values obtained by the quantization, and includes filter processing means for performing, on the image data before quantization, a filter process that removes the frequency components that generate block distortion.
  • The moving image decoding apparatus of the present invention decodes encoded data obtained by encoding image data from which the frequency components that generate block distortion have been removed, and includes inverse filter processing means for performing, on the image data of the decoded image obtained by decoding, an inverse filter process that restores the removed frequency components.
  • The present invention can be suitably applied to a moving image storage device that encodes and stores moving images, a moving image transmission device that encodes and transmits moving images, a moving image reproduction device that decodes and reproduces moving images, and the like. Specifically, for example, it can be suitably used in a hard disk recorder or a mobile phone terminal.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Since the moving image encoding device (300) is provided with a filter processing unit (11) for removing, from the image data before quantization, the frequency components that cause block distortion, the moving image encoding device can generate encoded data for reconstructing a decoded image in which block distortion is reduced, without specific frequency components being eliminated. This realizes a moving image encoding device that uses no deblocking filter, which causes blurring or flickering of an image, yet can reduce block distortion in a decoded moving image.
PCT/JP2007/052575 2006-03-17 2007-02-14 Dispositif de codage et dispositif de decodage d'image en mouvement WO2007108254A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008506197A JP4768011B2 (ja) 2006-03-17 2007-02-14 動画像符号化装置、及び、動画像復号装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006075685 2006-03-17
JP2006-075685 2006-03-17

Publications (1)

Publication Number Publication Date
WO2007108254A1 (fr) 2007-09-27

Family

ID=38522291

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/052575 WO2007108254A1 (fr) 2006-03-17 2007-02-14 Dispositif de codage et dispositif de decodage d'image en mouvement

Country Status (2)

Country Link
JP (2) JP4768011B2 (fr)
WO (1) WO2007108254A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012077719A1 (fr) * 2010-12-09 2012-06-14 シャープ株式会社 Dispositif de décodage d'images et dispositif de codage d'images

Citations (1)

Publication number Priority date Publication date Assignee Title
JPH0319488A (ja) * 1989-06-15 1991-01-28 Matsushita Electric Ind Co Ltd ブロック符号化装置と復号化装置

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP3529432B2 (ja) * 1994-06-30 2004-05-24 株式会社東芝 動画像符号化/復号化装置
JPH11177993A (ja) * 1997-12-12 1999-07-02 Nec Corp 動画像符号化装置


Non-Patent Citations (1)

Title
YAMAMOTO T. ET AL.: "Catmull-Rom Spline Hokan ni Motozuku Block Hizumi Teigen Shuho no Kiso Kento", 2006 NEN THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS ENGINEERING SCIENCES SOCIETY TAIKAI KOEN RONBUNSHU, vol. A-4-10, 7 September 2006 (2006-09-07), pages 77, XP003017840 *


Also Published As

Publication number Publication date
JP2011166832A (ja) 2011-08-25
JPWO2007108254A1 (ja) 2009-08-06
JP4768011B2 (ja) 2011-09-07

Similar Documents

Publication Publication Date Title
JP5283628B2 (ja) 映像復号方法及び映像符号化方法
KR100716998B1 (ko) 블록화 현상을 감소시키기 위한 부호화 및 복호화 장치 및그 방법과, 이를 구현하기 위한 프로그램이 기록된 기록매체
JP4352110B2 (ja) データ圧縮伸張方法、プログラム及び装置
US20100254450A1 (en) Video coding method, video decoding method, video coding apparatus, video decoding apparatus, and corresponding program and integrated circuit
KR100853336B1 (ko) 화상 부호화 장치 및 화상 복호화 장치
US20130215961A1 (en) Motion video encoding apparatus, motion video encoding method, motion video encoding computer program, motion video decoding apparatus, motion video decoding method, and motion video decoding computer program
JP2007166522A (ja) 復号化装置及び復号化方法及びプログラム
JPWO2017068856A1 (ja) 予測画像生成装置、画像復号装置および画像符号化装置
WO2006098226A1 (fr) Dispositif de codage et système d’enregistrement d’images dynamiques pourvu du dispositif de codage
JPWO2010041534A1 (ja) 画像加工装置、方法及びプログラム、動画像符号化装置、方法及びプログラム、動画像復号装置、方法及びプログラム、並びに、符号化・復号システム及び方法
KR20210021581A (ko) 비디오 코딩에서의 필터링을 위한 장치 및 방법
JPH10224790A (ja) 圧縮伸張された画像中のブロック状ノイズを除去するフィルタおよびフィルタ方法
US8442338B2 (en) Visually optimized quantization
WO2008007717A1 (fr) Dispositif de décodage d'image dynamique et dispositif de codage d'image dynamique
EP1511319A1 (fr) Filtre pour l'extraction du grain de film
JP6796463B2 (ja) 映像符号化装置、映像復号装置、及びプログラム
WO2007108254A1 (fr) Dispositif de codage et dispositif de decodage d'image en mouvement
JP4784618B2 (ja) 動画像符号化装置、動画像復号化装置、動画像符号化プログラム、及び動画像復号化プログラム
US20120155534A1 (en) Image Decoding Apparatus, Image Decoding Method and Computer Readable, Non-Transitory Storage Medium
JP2017103723A (ja) 符号化装置、復号装置、及びプログラム
KR101979634B1 (ko) 화질 개선을 위한 영상 처리 장치 및 그 방법
JP2005260989A (ja) 画像処理装置及び画像処理方法
WO2009133938A1 (fr) Dispositif de codage et de décodage d'image variable dans le temps
JP2007189622A (ja) 動画像符号化方法及び装置及び復号化方法及び装置及び動画像処理プログラム及びコンピュータ読み取り可能な記録媒体
JP2007516639A (ja) 符号化方法及び符号化装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07708377

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2008506197

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07708377

Country of ref document: EP

Kind code of ref document: A1