WO2011086836A1 - Encoding device, decoding device, and data structure - Google Patents
- Publication number: WO2011086836A1 (international application PCT/JP2010/073436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- region
- filter coefficient
- filter
- reference image
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—using predictive coding
- H04N19/503—using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/10—using adaptive coding
- H04N19/102—characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/134—characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
- H04N19/169—characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—the unit being an image region, e.g. an object
- H04N19/174—the region being a slice, e.g. a line of blocks or a group of blocks
- H04N19/189—characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/192—the adaptation method, adaptation tool or adaptation type being iterative or recursive
- H04N19/194—the adaptation method, adaptation tool or adaptation type being iterative or recursive involving only two passes
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
- H04N19/60—using transform coding
- H04N19/61—using transform coding in combination with predictive coding
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
Definitions
- The present invention relates to a moving image encoding apparatus that encodes a moving image to generate encoded data.
- The present invention also relates to a moving image decoding apparatus that decodes encoded data generated by such a moving image encoding apparatus.
- In order to transmit or record moving images efficiently, a moving image encoding device is used.
- As specific moving picture encoding methods, for example, H.264/MPEG-4 AVC and the KTA software, a codec for joint development by VCEG (Video Coding Expert Group), are used.
- In such encoding methods, an image (picture) constituting a moving image is managed by a hierarchical structure consisting of slices obtained by dividing the image, macroblocks obtained by dividing a slice, and blocks obtained by dividing a macroblock, and is usually encoded block by block.
- In such encoding methods, a method of encoding the difference (prediction error) between an input image and a predicted image is employed. That is, (1) a motion vector is calculated from the input image and a locally decoded image obtained by quantizing and dequantizing the input image, (2) a predicted image is obtained by performing motion compensation using the motion vector, and (3) the prediction error between the predicted image and the input image is encoded.
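Steps (1) through (3) can be sketched in miniature. The following toy example is a one-dimensional sketch with hypothetical names (`estimate_motion`, `encode_block`) and an exhaustive shift search standing in for real block matching; it is illustrative only and not part of the disclosed apparatus:

```python
import numpy as np

def estimate_motion(cur, ref, search=2):
    # (1) Exhaustive search for the integer shift minimizing the
    #     squared error between the current block and the shifted reference.
    best_mv, best_err = 0, float("inf")
    for mv in range(-search, search + 1):
        err = float(np.sum((cur - np.roll(ref, mv)) ** 2))
        if err < best_err:
            best_mv, best_err = mv, err
    return best_mv

def encode_block(cur, ref):
    mv = estimate_motion(cur, ref)      # (1) motion vector
    pred = np.roll(ref, mv)             # (2) motion compensation
    residual = cur - pred               # (3) prediction error to be encoded
    return mv, residual

ref = np.array([0., 1., 4., 9., 4., 1., 0., 0.])   # stand-in locally decoded image
cur = np.roll(ref, 1)                               # input shifted by one pixel
mv, res = encode_block(cur, ref)
```

Because the input is an exact shift of the reference, the search recovers the shift and the residual is zero; in practice the residual carries whatever the motion model cannot explain.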
- An adaptive filtering method for a reference image is disclosed in which a first motion vector is obtained from an input image and a locally decoded image; a filter is designed so as to minimize the error between the input image and the image obtained by motion-compensating, with the first motion vector, the reference image produced by filtering the locally decoded image; the locally decoded image is filtered by that filter to generate the reference image; a second motion vector is obtained from the input image and the reference image; and a predicted image is generated by motion-compensating the reference image with the second motion vector.
- Since the filter can be generated adaptively so as to minimize the error between the input image and the image obtained by motion-compensating the reference image with the first motion vector, the prediction error between the predicted image and the input image tends to be smaller than when only a fixed filter is used.
- JP 2006-135376 (May 25, 2006)
- The present invention has been made in view of the above problem, and an object of the present invention is to realize an encoding device including a filter capable of performing appropriate filtering even when the image characteristics in each region of the locally decoded image are not uniform.
- In order to solve the above problem, an encoding apparatus according to the present invention includes: a first filter that acts on a reference image obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to the output image of the first filter; a second filter that acts on the reference image; second prediction means for generating a second predicted image by performing motion compensation with reference to the output image of the second filter; dividing means for dividing the first predicted image and the input image into a plurality of regions; and filter coefficient setting means for setting the filter coefficients of the second filter so as to minimize, for each region, the difference between the input image and the first predicted image; and the apparatus encodes residual data between the input image and the second predicted image.
- Since the apparatus includes dividing means for dividing the first predicted image and the input image into a plurality of regions, and filter coefficient setting means for setting the filter coefficients of the second filter so as to minimize the difference between the input image and the first predicted image for each region, the filter coefficients of the second filter can be set adaptively for each of the plurality of regions even when the characteristics of the first predicted image are not uniform.
- Another encoding device according to the present invention includes: a first filter that acts on a plurality of reference images obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to the output image of the first filter; a second filter that acts on the plurality of reference images; and second prediction means for generating a second predicted image by performing motion compensation with reference to the output image of the second filter; and it encodes residual data between the input image and the second predicted image.
- Among the plurality of reference images, a reference image belonging to a first reference image list is filtered using filter coefficients set so as to minimize the difference between the input image and the first predicted image, while a reference image belonging to a second reference image list different from the first reference image list is filtered using predetermined filter coefficients.
- As the weight of a reference image's contribution to the predicted image becomes smaller, the demerits of adaptive filtering, namely the calculation cost and the increase in code amount required by the filter coefficients, become more likely to outweigh its merit of improving coding efficiency.
- According to the above configuration, adaptive filtering is performed only on reference images whose contribution weight to the first predicted image is equal to or greater than a predetermined weight, and reference images whose contribution weight to the first predicted image is smaller than the predetermined weight can be filtered using predetermined filter coefficients; therefore, appropriate filtering can be performed without incurring the disadvantage of an increase in calculation cost.
- In addition, the reference list number can function as a flag indicating whether adaptively obtained filter coefficients or non-adaptive filter coefficients are used. Therefore, according to the above configuration, more suitable filtering can be performed without increasing the code amount through additional side information.
- A decoding apparatus according to the present invention is a decoding apparatus that decodes encoded data obtained by encoding, together with a filter coefficient group, residual data between an original image and a predicted image, and includes: filter means for generating a filtered reference image by filtering a reference image generated based on the prediction residual obtained by decoding the residual data, the filter means being capable of switching filter coefficients for each unit region of the reference image; predicted image generation means for generating the predicted image by performing motion compensation on the filtered reference image; and filter coefficient selection means for selecting, for each unit region on the reference image, either a filter coefficient included in the filter coefficient group or a predetermined filter coefficient.
- Since the decoding apparatus includes filter means for generating a filtered reference image by filtering the reference image generated based on the prediction residual obtained by decoding the residual data, with the filter coefficients switchable for each unit region of the reference image, predicted image generation means for generating the predicted image by performing motion compensation on the filtered reference image, and filter coefficient selection means for selecting, for each unit region on the reference image, either a filter coefficient included in the filter coefficient group or a predetermined filter coefficient, filtering can be performed using a more appropriate filter coefficient for each unit region on the reference image.
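The per-unit-region coefficient switching described above can be illustrated with a minimal sketch. All names here (`filter_reference`, `STANDARD`, `coeff_group`) are hypothetical, and a 1-D three-tap filter stands in for the real 2-D filter:

```python
import numpy as np

STANDARD = np.array([0.25, 0.5, 0.25])   # predetermined (non-adaptive) filter

def filter_reference(ref, region_ids, coeff_group, use_adaptive):
    """Filter a 1-D reference signal, switching coefficients per unit region.

    region_ids[i]  : unit-region index of sample i
    coeff_group[k] : adaptive coefficients decoded for region k
    use_adaptive[k]: True -> adaptive coefficients, False -> STANDARD
    """
    out = np.empty(len(ref), dtype=float)
    padded = np.pad(ref.astype(float), 1, mode="edge")
    for i in range(len(ref)):
        k = region_ids[i]
        h = coeff_group[k] if use_adaptive[k] else STANDARD
        out[i] = np.dot(h, padded[i:i + 3])   # 3-tap weighted linear sum
    return out

ref = np.array([0, 4, 8, 12, 16, 20], dtype=float)
region_ids = np.array([0, 0, 0, 1, 1, 1])
coeff_group = {0: np.array([0.0, 1.0, 0.0]), 1: None}   # region 0: identity
out = filter_reference(ref, region_ids, coeff_group, {0: True, 1: False})
```

Region 0 passes samples through unchanged (identity coefficients), while region 1 falls back to the predetermined smoothing filter, mirroring the adaptive/predetermined selection made per unit region by the filter coefficient selection means.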
- The data structure of encoded data according to the present invention is a data structure of encoded data obtained by encoding, together with a filter coefficient group, residual data between an original image and a predicted image generated from the original image, wherein the filter coefficient group includes filter coefficients selected, in the decoding device that decodes the encoded data, for each unit region on the reference image generated based on the prediction residual obtained by decoding the residual data.
- Therefore, the decoding apparatus can perform appropriate filtering for each unit region on the reference image.
- As described above, the encoding apparatus according to the present invention includes: a first filter that acts on a reference image obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to the output image of the first filter; a second filter that acts on the reference image; second prediction means for generating a second predicted image by performing motion compensation with reference to the output image of the second filter; dividing means for dividing the first predicted image and the input image into a plurality of regions; and filter coefficient setting means for setting the filter coefficients of the second filter so as to minimize the difference between the input image and the first predicted image for each region; and it encodes residual data between the input image and the second predicted image.
- FIG. 1 is a block diagram showing the configuration of the encoding apparatus according to an embodiment. FIG. 2 is a block diagram showing the configuration of the inter-predicted image generation unit in the encoding apparatus according to the embodiment. FIG. 3 is a flowchart showing the flow of operation of the inter-predicted image generation unit in the encoding apparatus according to the embodiment.
- FIG. 9 is a diagram for explaining an example of the operation of the inter-predicted image generation unit in the encoding device according to the embodiment, showing a case where filtering is applied, respectively, to a region in a reference picture whose reference image index is 0 and to a region whose reference image index is 1.
- It also shows a case where filtering is applied, respectively, to the region on the reference picture designated by reference image index 1 in reference image list L1, the region on the reference picture designated by reference image index 1 in reference image list L0, and the region on the reference picture designated by reference image index 1 in reference image list L1.
- It is also a diagram for explaining another example of the operation of the inter-predicted image generation unit in the encoding device according to the embodiment, showing a case where filtering is applied, respectively, to a region in a reference picture whose reference image index is 0 and to other regions whose reference image index is 1.
- It further shows a case where filtering is applied, respectively, to the region on the reference picture designated by reference image index 0 in reference image list L0 among the reference images and to the other designated regions.
- Also shown are a block diagram of the configuration of the decoding apparatus according to the embodiment, a block diagram of the configuration of the inter-predicted image generation unit in the decoding apparatus according to the embodiment, and a diagram of the bitstream of the encoded data input to the decoding apparatus according to the embodiment.
- The moving picture encoding apparatus 1 is, in part, a moving picture encoding apparatus using technology adopted in the H.264/AVC standard and in the KTA software.
- FIG. 1 is a block diagram showing a configuration of the moving picture encoding apparatus 1.
- The moving image encoding device 1 includes a transform/quantization unit 11, a variable-length encoding unit 12, an inverse quantization/inverse transform unit 13, a buffer memory 14, an intra-predicted image generation unit 15, an inter-predicted image generation unit 16, a prediction method control unit 18, a motion vector redundancy reduction unit 19, an adder 21, and a subtracter 22.
- The moving image encoding apparatus 1 receives an input image # 1 divided into block images (hereinafter referred to as "macroblocks") each composed of a plurality of adjacent pixels.
- the moving image encoding apparatus 1 performs an encoding process on the input image # 1 and outputs encoded data # 2.
- The transform/quantization unit 11 performs a DCT (Discrete Cosine Transform) on the difference image # 22 between the input image # 1 divided into macroblocks and the prediction image # 18a output from the prediction method control unit 18 described later, and quantizes the resulting frequency components to generate quantized prediction residual data # 11.
- the quantization is an operation for associating the frequency component with an integer value.
- the DCT transform and quantization are performed in units of partitions obtained by dividing a macroblock. In the following, a macro block to be processed is called a “target macro block”, and a partition to be processed is called a “target partition”.
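As a rough illustration of the transform/quantization and inverse quantization/inverse transform pair (units 11 and 13), the following sketch applies an orthonormal DCT to a difference block and rounds the frequency components to integers. The function names and the single scalar quantization step are illustrative assumptions, not the standard's actual transform or quantizer:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix, used here as a stand-in transform.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def transform_quantize(diff_block, qstep):
    # DCT-transform the difference block, then associate each frequency
    # component with an integer value (the "quantization" of the text).
    D = dct_matrix(diff_block.shape[0])
    coeffs = D @ diff_block @ D.T
    return np.round(coeffs / qstep).astype(int)

def dequantize_inverse(levels, qstep):
    # Inverse quantization (integers back to frequency components),
    # then inverse DCT back to pixel components.
    D = dct_matrix(levels.shape[0])
    return D.T @ (levels * qstep) @ D

block = np.array([[10., 10.], [10., 10.]])   # toy residual partition
levels = transform_quantize(block, qstep=2.0)
recon = dequantize_inverse(levels, qstep=2.0)
```

For this constant block the energy collapses into the DC coefficient, and because 20/2 is exact, the round trip reconstructs the block without loss; in general the rounding step makes the chain lossy.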
- the inverse quantization / inverse transform unit 13 decodes the quantized prediction residual data # 11 and generates a prediction residual # 13. Specifically, the inverse quantization / inverse transform unit 13 performs inverse quantization of the quantized prediction residual data # 11, that is, associates integer values constituting the quantized prediction residual data # 11 with frequency components. Then, inverse DCT transform of the frequency component, that is, inverse transform to the pixel component of the target macroblock based on the frequency component is performed to generate prediction residual # 13.
- the adder 21 adds the prediction residual # 13 and the prediction image # 18a to generate a decoded image # 21.
- the generated decoded image # 21 is supplied to the buffer memory 14.
- the intra-predicted image generation unit 15 extracts the local decoded image # 14a (the decoded area of the same frame as the target macroblock) from the decoded image # 21 stored in the buffer memory 14, and the frame based on the local decoded image # 14a Intra prediction is performed to generate an intra prediction image # 15.
- The inter-predicted image generation unit 16 calculates a motion vector # 17 and assigns it to the target partition on the input image # 1, using the reference image # 14b that has already been decoded and stored in the buffer memory 14.
- the calculated motion vector # 17 is output to the predicted image generation unit 16 and the motion vector redundancy reduction unit 19, and is stored in the buffer memory 14.
- the inter predicted image generation unit 16 performs motion compensation based on the motion vector # 17 for each partition for the reference image # 14b to generate an inter predicted image # 16.
- The inter-predicted image generation unit 16 also outputs the filter coefficient # 101 used for the filtering process to the variable-length encoding unit 12. The configuration of the inter-predicted image generation unit 16 will be described in detail later, so its description is omitted here.
- The prediction method control unit 18 compares the intra-predicted image # 15, the inter-predicted image # 16, and the input image # 1 in units of macroblocks, selects either the intra-predicted image # 15 or the inter-predicted image # 16, and outputs it as the predicted image # 18a. The prediction method control unit 18 also outputs prediction mode # 18b, information indicating which of the two was selected. The predicted image # 18a is input to the subtracter 22.
- the prediction mode # 18b is stored in the buffer memory 14 and input to the variable length encoding unit 12.
- After the motion vector # 17 is assigned to the target partition by the inter-predicted image generation unit 16, the motion vector redundancy reduction unit 19 calculates a prediction vector based on motion vectors assigned to other partitions and stored in the buffer memory 14. The motion vector redundancy reduction unit 19 then calculates the difference between the prediction vector and the motion vector # 17 to generate a differential motion vector # 19, which is output to the variable-length encoding unit 12.
- The variable-length encoding unit 12 performs variable-length encoding on the quantized prediction residual data # 11, the differential motion vector # 19, the prediction mode # 18b, and the filter coefficient # 101 to generate the encoded data # 2.
- the subtracter 22 takes the difference between the input image # 1 and the predicted image # 18a for the target macroblock, and outputs the difference image # 22.
- FIG. 2 is a block diagram illustrating a configuration of the inter predicted image generation unit 16.
- the inter predicted image generation unit 16 includes a predicted image generation unit 16a, a motion vector estimation unit 17, and an adaptive filter 100.
- FIG. 3 is a flowchart showing an operation flow in the inter predicted image generation unit 16.
- Step 101 The reference image # 14b stored in the buffer memory 14 is input to the adaptive filter 100. Further, when a plurality of reference pictures are used in step 102 described later, it is assumed that the reference image # 14b is composed of the plurality of reference pictures.
- The adaptive filter 100 performs filtering on the reference image # 14b based on predetermined filter coefficients (hereinafter referred to as standard filter coefficients) and outputs the first output image data # 100.
- The adaptive filter 100 calculates the pixel value S0(x′, y′) at coordinates (x′, y′) of the first output image data # 100 by the weighted linear sum of equation (1):
- S0(x′, y′) = Σ_{(i,j)∈R} h(i, j) · SI(x + i, y + j) + hoffset   …(1)
- Here, SI(x, y) represents the pixel value at coordinates (x, y) of the reference image # 14b, with (x, y) the integer-pixel position corresponding to (x′, y′); h(i, j) is the filter coefficient multiplied by the pixel value SI(x + i, y + j); R represents the pixel region over which the weighted linear sum is taken (hereinafter referred to as the filter region), that is, the set of relative coordinates subject to the weighted linear sum; and hoffset represents an offset value added to the pixel value.
- Note that the adaptive filter 100 does not need to generate and output the entire output image data at once; it may generate and output a partial region of the output image data based on a request from the motion vector estimation unit or the predicted image generation unit.
- the filter region R may generally be an M ⁇ N tap rectangular region, or may be a rhombus, circle, or any other arbitrary region.
- the reference image # 14b is image data composed of pixel values of integer coordinate pixels (hereinafter referred to as integer pixels). That is, in Expression (1), x and y both take integer values. On the other hand, x ′ and y ′ may take non-integer values. That is, the first output image data # 100 is image data including a pixel value of an integer pixel and an interpolation signal having a pixel accuracy equal to or less than the integer pixel. In other words, the adaptive filter 100 is an image filter that generates an interpolation signal with pixel accuracy equal to or lower than the integer pixel by interpolation from the pixel value of the integer pixel.
- The adaptive filter 100 switches the filter coefficients and the offset appropriately according to the coordinate values of the pixel to be obtained by the filtering process. For example, when each of x′ and y′ takes a coordinate position corresponding to one of four types, namely integer-pel, 1/4-pel, 1/2-pel, and 3/4-pel positions, the filter coefficients and the offset are switched according to the combination of coordinate-position types of x′ and y′.
- In the following, the filter coefficient h(i, j) and the offset hoffset are assumed to include a filter coefficient and an offset corresponding to each combination of coordinate values of x′ and y′, which are selected and applied as appropriate.
- The filter coefficient h(i, j) and the offset hoffset have predetermined values; for example, the filter coefficients used for the interpolation filter in the H.264/AVC standard may be used.
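A minimal sketch of this coefficient switching by fractional position: integer positions use an identity filter, while half-pel positions use the 6-tap filter (1, −5, 20, 20, −5, 1)/32 of the H.264/AVC half-pel luma interpolation. The 1-D setting and the function name `interpolate` are illustrative assumptions:

```python
import numpy as np

# Coefficient sets switched by fractional position, as in equation (1):
H_INT  = {0: 1}                                          # x' at an integer pixel
H_HALF = {-2: 1, -1: -5, 0: 20, 1: 20, 2: -5, 3: 1}      # x' at a half-pel; sum = 32

def interpolate(ref, x2):
    """Pixel value at position x2/2 (half-pel units) of a 1-D reference:
    S0(x') = sum_i h(i) * SI(x + i), with h switched by fractional type
    and the offset taken as 0 here."""
    x, frac = divmod(x2, 2)
    h = H_INT if frac == 0 else H_HALF
    padded = np.pad(ref.astype(float), 4, mode="edge")
    s = sum(c * padded[4 + x + i] for i, c in h.items())
    return s / sum(h.values())           # normalize the integer taps

ref = np.array([10, 10, 10, 10, 10, 10])
full = interpolate(ref, 4)   # integer position x' = 2
half = interpolate(ref, 5)   # half-pel position x' = 2.5
```

On a constant signal both positions evaluate to the same value, which is a quick sanity check that the tap weights are normalized consistently across the switched coefficient sets.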
- Step 102 The motion vector estimation unit 17 performs motion prediction based on the first output image data # 100 and the input image # 1, and generates a first motion vector # 17 ′.
- a plurality of reference pictures included in the reference image # 14b may be used.
- Step 103 The predicted image generation unit 16a generates a first predicted image # 16 ′ by performing motion compensation on the first output image data # 100 based on the first motion vector # 17 ′. Note that the processing in step 102 and step 103 is tried for each prediction mode with a different prediction method, and the optimal prediction mode is used.
- The adaptive filter 100 divides the first predicted image # 16 ′ into a first region ER1 composed of macroblocks to which the skip mode is applied and a second region ER2 composed of macroblocks to which the skip mode is not applied.
- FIG. 4 is a diagram illustrating an example of the first region ER1 and the second region ER2 in the first predicted image # 16 '.
- the adaptive filter 100 divides the input image # 1 into a region ER1 'corresponding to the first region ER1 and a region ER2' corresponding to the second region ER2.
- A region referred to for prediction of the region ER1′ in the reference image # 14b is denoted ERα1′, and a region referred to for prediction of the region ER2′ in the reference image # 14b is denoted ERα2′.
- Note that the region ERα1′ and the region ERα2′ do not necessarily partition the reference image # 14b into two; that is, the regions ERα1′ and ERα2′ may overlap each other.
- The adaptive filter 100 performs filtering on the region ERα1′ in the reference image # 14b based on the filter coefficients h1′(i, j), and performs filtering on the region ERα2′ in the reference image # 14b based on the filter coefficients h2′(i, j). As described above, the regions ERα1′ and ERα2′ may overlap on the reference image # 14b; whether an area is filtered as part of the region ERα1′ or of the region ERα2′ can be determined by whether the region referring to it is the region ER1′ or the region ER2′.
- the filter coefficient h1 '(i, j) is determined such that the error between the region ER1 in the first predicted image # 16' and the corresponding input image # 1 is minimized.
- the filter coefficient h2 '(i, j) is determined so that the error between the region ER2 in the first predicted image # 16' and the corresponding input image # 1 is minimized.
- Filter coefficients may also be derived for each combination of sub-integer pixel precisions of the motion vector # 17 ′.
- More specifically, the filter coefficients hk′(i, j) (k = 1, 2) are determined by minimizing the squared error E of equation (2):
- E = Σ_{(x,y)∈ERk} ( S(x, y) − Σ_{(i,j)} hk′(i, j) · SI(x̃ + i, ỹ + j) − hoffset )²   …(2)
- Here, S(x, y) represents the pixel value at coordinates (x, y) of the input image # 1; SI(x̃ + i, ỹ + j) represents the pixel value, at coordinates (x̃ + i, ỹ + j), of the reference image referred to by the first predicted image # 16 ′, where x̃ = x + F(mvx) and ỹ = y + F(mvy); mvx and mvy are the x component and the y component of the first motion vector # 17 ′, respectively; F(mvx) is a floor function that maps mvx to the largest integer pixel not exceeding mvx; and the sum over (i, j) ranges over the filter taps, of size FOx in the x direction and FOy in the y direction.
- The sum over (x, y) in equation (2) is taken over all the pixels included in the region ERk (k being either 1 or 2) in the first predicted image # 16 ′.
- The adaptive filter 100 then performs filtering based on the filter coefficients optimized for each region on the reference image # 14b, thereby generating and outputting the second output image data # 100 ′.
- The filter coefficients may instead be determined so as to minimize the absolute-value error obtained by replacing the square operation in the squared error E with an absolute-value operation, or they may be determined with weights added to these errors.
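Minimizing the squared error E over a region amounts to a linear least-squares problem in the filter coefficients. The sketch below, with hypothetical names (`fit_filter`) and a 1-D three-tap filter in place of the real 2-D one, recovers a known kernel from a reference signal and its filtered version:

```python
import numpy as np

def fit_filter(ref, target, taps=3):
    """Least-squares filter coefficients minimizing the squared error
    between target samples and the filtered reference."""
    half = taps // 2
    padded = np.pad(ref.astype(float), half, mode="edge")
    # Each row of A holds the reference samples in one filter window.
    A = np.array([padded[i:i + taps] for i in range(len(ref))])
    h, *_ = np.linalg.lstsq(A, target.astype(float), rcond=None)
    return h

rng = np.random.default_rng(0)
ref = rng.normal(size=200)
true_h = np.array([0.1, 0.7, 0.2])
# Synthetic 'input image' region: the reference filtered by a known kernel.
padded = np.pad(ref, 1, mode="edge")
target = np.array([padded[i:i + 3] @ true_h for i in range(len(ref))])
h = fit_filter(ref, target)
```

In the encoder, one such system would be solved per region (ER1 and ER2), yielding h1′ and h2′ independently; swapping the squared loss for an absolute-value or weighted loss, as the text allows, would replace the closed-form solve with an iterative minimization.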
- In other words, the adaptive filter 100 divides the prediction error between the first predicted image #16′ and the input image #1 into a prediction error E1 for the regions corresponding to macroblocks to which the skip mode is applied and a prediction error E2 for the regions corresponding to macroblocks to which the skip mode is not applied, and determines the filter coefficients hk so as to minimize each prediction error.
- Step 106: The motion vector estimation unit 17 generates the second motion vector #17 based on the second output image data #100′ and the input image #1.
- the motion vector estimation unit 17 outputs the same value as the first motion vector # 17 ′ that has already been obtained as the second motion vector # 17. By doing so, the calculation cost for obtaining the second motion vector # 17 can be reduced.
- the relationship between the second motion vector # 17 and the first motion vector # 17 ′ does not limit the present invention.
- Step 107: The predicted image generation unit 16a generates and outputs the inter predicted image #16 by performing motion compensation on the second output image data #100′ based on the second motion vector #17.
- the optimum filter coefficient is different between a macroblock to which the skip mode is applied and a macroblock to which the skip mode is not applied.
- the adaptive filter 100 divides the first predicted image #16′ into the first region ER1 composed of macroblocks to which the skip mode is applied and the second region ER2 composed of macroblocks to which the skip mode is not applied, and the optimum filter coefficient can be obtained by the statistical method described above for each region of the reference image #14b that is referenced from the regions ER1 and ER2.
- Therefore, even when the first predicted image #16′ includes both partitions to which the skip mode is applied and partitions to which it is not, appropriate filtering is performed, and it is possible to generate an appropriate predicted image #16 and an appropriate second motion vector #17.
- the inter predicted image generation unit 16 may also be configured to repeat steps 104 through 107 a plurality of times. That is, the inter predicted image #16 generated in step 107 may be divided into two regions depending on whether or not the skip mode is applied, a filter coefficient may be calculated for each region using the statistical method, and further motion compensation may be performed based on the output image data generated using these filter coefficients. Repeating region division, motion compensation, and filter coefficient calculation a plurality of times in this way makes it possible to generate a more appropriate predicted image and motion vector (the same applies hereinafter).
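- The statistical determination of per-region filter coefficients described above can be sketched as an ordinary least-squares fit. The following is a minimal illustration only, not the patent's implementation: boolean region masks stand in for the skip/non-skip division, motion compensation is omitted, and all function and variable names are hypothetical.

```python
import numpy as np

def fit_region_filter(ref, target, mask, size=3):
    # Least-squares fit of a size x size filter h so that filtering `ref`
    # approximates `target` on the pixels selected by `mask` (one region ERk).
    pad = size // 2
    rows, rhs = [], []
    for y in range(pad, ref.shape[0] - pad):
        for x in range(pad, ref.shape[1] - pad):
            if mask[y, x]:
                rows.append(ref[y - pad:y + pad + 1, x - pad:x + pad + 1].ravel())
                rhs.append(target[y, x])
    h, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return h.reshape(size, size)

def filter_image(ref, h):
    # Apply h at every interior pixel; border pixels are copied unchanged.
    pad = h.shape[0] // 2
    out = ref.astype(float).copy()
    for y in range(pad, ref.shape[0] - pad):
        for x in range(pad, ref.shape[1] - pad):
            out[y, x] = np.sum(ref[y - pad:y + pad + 1, x - pad:x + pad + 1] * h)
    return out

def adapt_per_region(ref, target, skip_mask):
    # Split into skip / non-skip regions, fit one coefficient set per region,
    # and filter the pixels of each region with its own coefficients.
    out = ref.astype(float).copy()
    for region in (skip_mask, ~skip_mask):
        h = fit_region_filter(ref, target, region)
        out[region] = filter_image(ref, h)[region]
    return out
```

- One call to `adapt_per_region` corresponds roughly to one pass of steps 104-105; repeating it with refreshed motion vectors would correspond to the multi-pass variant described above.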
- the offset hoffset may be optimized for each region (the same applies hereinafter).
- Alternatively, the region division and motion compensation need not be repeated, and the prediction mode of each macroblock may be set to be the same in the first predicted image #16′ and the inter predicted image #16. This reduces the calculation cost.
- As described above, the encoding device (moving image encoding device 1) according to the present embodiment includes a first filter (adaptive filter 100) acting on the reference image (reference image #14b) obtained by encoding and reconstructing the input image #1, and first prediction means (predicted image generation unit 16a) for generating a first predicted image (first predicted image #16′) by performing motion compensation with reference to the output image of the first filter.
- The first prediction means (predicted image generation unit 16a) further generates a second predicted image (inter predicted image #16) by performing motion compensation with reference to the output image of a second filter acting on the reference image #14b.
- The first filter (adaptive filter 100) further includes dividing means for dividing the first predicted image and the input image into a plurality of regions, and filter coefficient setting means for setting the filter coefficient of the second filter so as to minimize, for each region, the difference between the input image and the first predicted image.
- Accordingly, even when the characteristics of the reference image used for generating the first predicted image are not uniform, the filter coefficient of the second filter can be set adaptively for each of the plurality of regions, and appropriate filtering can be performed.
- In the above operation example, the adaptive filter 100 divides the first predicted image #16′ into the first region ER1 composed of macroblocks to which the skip mode is applied and the second region ER2 composed of macroblocks to which the skip mode is not applied.
- the present invention is not limited to this.
- In general, the first predicted image #16′ may be divided into a plurality of regions, and the optimum filter coefficient for each region may be obtained by the statistical method from that region and the corresponding region of the input image #1.
- step 104 and step 105 are replaced with step 204 and step 205 described below, respectively.
- the adaptive filter 100 divides the first predicted image #16′ into the region ER21 composed of partitions whose reference image index ref_idx is 0 and the remaining region ER22.
- the adaptive filter 100 sets the region of the input image #1 corresponding to the region ER21 as the region ER21′, and the region corresponding to the region ER22 as the region ER22′.
- In the reference image #14b, a region referred to for prediction of the region ER21′ is set as the region ER~21′, and a region referred to for prediction of the region ER22′ is set as the region ER~22′.
- the adaptive filter 100 filters the region ER~21′ in the reference image #14b based on the filter coefficient h21′(i, j), and filters the region ER~22′ in the reference image #14b based on the filter coefficient h22′(i, j).
- FIG. 5 is a diagram showing a case where the filter 1 using the filter coefficient h21′ is applied to the region ER~21′ in the reference picture A, whose reference image index ref_idx in the reference image #14b is 0, and the filter 2 using the filter coefficient h22′ is applied to the region ER~22′ in the other reference pictures of the reference image #14b.
- the filter coefficient h21 '(i, j) is determined so that the error between the region ER21 in the first predicted image # 16' and the region ER21 'in the corresponding input image # 1 is minimized.
- the filter coefficient h22 '(i, j) is determined so that the error between the region ER22 in the first predicted image # 16' and the region ER22 'in the input image # 1 corresponding thereto is minimized.
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- Since partitions having different reference image indexes refer to different reference pictures, the optimum filter coefficient also differs between such partitions.
- the adaptive filter 100 uses the region ER21′ and the region ER22′ in the input image #1 corresponding, respectively, to the region ER21 composed of partitions whose reference image index ref_idx is 0 and to the other region ER22, so that the optimum filter coefficient can be determined for each region.
- Therefore, according to the above configuration, even when the first predicted image #16′ is composed of a region whose reference image index ref_idx is 0 and a region whose reference image index ref_idx is not 0, the optimum filter coefficients can be obtained, and an appropriate predicted image #16 can be generated even in such a case.
- the filter coefficient hk ′ (i, j) may be obtained so as to minimize the square error given by the following equation (3).
- SI1(x~1+i, y~1+j) represents the pixel value at the coordinates (x~1+i, y~1+j) of the image data obtained by motion-compensating one of the two reference pictures with the corresponding motion vector.
- SI2(x~2+i, y~2+j) represents the pixel value at the coordinates (x~2+i, y~2+j) of the image data obtained by motion-compensating the other of the two reference pictures with the corresponding motion vector.
- (mvx1, mvy1) represents the motion vector components associated with the one reference picture, and (mvx2, mvy2) represents the motion vector components associated with the other reference picture.
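- Based on these definitions, equation (3) presumably averages the two motion-compensated reference pictures inside the squared error; a reconstruction consistent with the variables above (the equal 1/2 weighting is an assumption, in line with the later remark that unequal weights are also possible) is:

$$E = \sum_{(x,y)}\Bigl(S(x,y) - \sum_{i,j} h(i,j)\,\tfrac{1}{2}\bigl(SI_1(\tilde{x}_1+i,\tilde{y}_1+j) + SI_2(\tilde{x}_2+i,\tilde{y}_2+j)\bigr)\Bigr)^2$$

where $\tilde{x}_1=x+F(mvx1)$, $\tilde{y}_1=y+F(mvy1)$, and likewise for $\tilde{x}_2$ and $\tilde{y}_2$.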
- FIG. 6 is a diagram illustrating a reference picture C together with an inter predicted image # 16.
- the reference image list is a list indicating one or a plurality of reference image candidates. Each image included in the same list is assigned a number for identification (reference image index).
- the adaptive filter 100 applies the filter 1 using the filter coefficient h21′ to the region ER21′a on the reference picture A and the region ER21′b on the reference picture C, to which the region ER21″ on the inter predicted image #16 refers, and applies the filter 2 using the filter coefficient h22′ to the region ER22′a on the reference picture A and the region ER22′b on the reference picture B, to which the region ER22″ on the inter predicted image #16 refers.
- However, the present invention is not limited to this; a configuration may be adopted in which one of the two reference pictures contributes with a larger weight. Further, the weight need not be constant over the entire image and may be changed in units of partitions or macroblocks.
- the adaptive filter 100 may perform filtering using the filter coefficient hk′(i, j) obtained as described above on both the one reference picture and the other reference picture referred to by a partition generated by bidirectional prediction.
- the adaptive filter 100 may be configured to divide the first predicted image #16′ into three or more regions according to the reference image index ref_idx, or to divide the first predicted image #16′ into two regions depending on whether the prediction is forward prediction or backward prediction. Further, forward/backward prediction and the reference image index may be used in combination to divide the image into a plurality of regions.
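- The divisions just listed (by reference image index, by prediction direction, or by their combination) amount to grouping partitions by a key. A hypothetical sketch, with illustrative field names not taken from the patent:

```python
def region_key(partition, by_direction=False, by_ref_idx=True):
    # Build the grouping key for one partition; partitions sharing a key
    # form one region and share one adaptively derived filter coefficient set.
    key = ()
    if by_direction:
        key += (partition["direction"],)   # e.g. "forward" / "backward"
    if by_ref_idx:
        key += (partition["ref_idx"],)
    return key

def split_into_regions(partitions, **kw):
    # Group partitions into regions according to the chosen key components.
    regions = {}
    for p in partitions:
        regions.setdefault(region_key(p, **kw), []).append(p)
    return regions
```

- Enabling both key components yields the combined division described above; enabling only one reproduces the simpler divisions.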
- In this operation example, the adaptive filter 100 performs both the operation shown in operation example 1 and the operation shown in operation example 2, and outputs the inter predicted image with the higher encoding efficiency together with its filter coefficients.
- First, the adaptive filter 100 performs the operations from step 101 to step 107 described in operation example 1, thereby generating the inter predicted image #16a (corresponding to the inter predicted image #16 in operation example 1).
- Next, the adaptive filter 100 performs the operation in which step 104 and step 105 of operation example 1 are replaced with step 204 and step 205 described in operation example 2, thereby generating the inter predicted image #16b (corresponding to the inter predicted image #16 in operation example 2).
- the adaptive filter 100 selects and outputs whichever of the inter predicted image #16a and the inter predicted image #16b gives the higher coding efficiency.
- the adaptive filter 100 also outputs the motion vector # 17 and the filter coefficient # 101 used for generating the selected predicted image.
- the adaptive filter 100 outputs a flag # 102 indicating which method of the operation example 1 and the operation example 2 has generated the predicted image.
- the flag # 102 is preferably encoded by the variable length encoding unit 12 and transmitted to the moving image decoding apparatus as a part of the encoded data # 2.
- the moving picture decoding apparatus that has received such encoded data #2 can generate the decoded picture based on the method selected by the adaptive filter 100 (operation example 1 or operation example 2) and the filter coefficient #101.
- the adaptive filter 100 may also be configured to compare the area difference between the region ER1 and the region ER2 in operation example 1 with the area difference between the region ER21 and the region ER22 in operation example 2, and to output the predicted image generated using the regions with the smaller area difference.
- In this case, the moving image decoding apparatus can also determine by which operation example the predicted image was generated by comparing the above area differences, so the adaptive filter 100 need not output a flag indicating which was selected. Therefore, the image data can be transmitted as encoded data #2 with a smaller code amount. On the other hand, if the adaptive filter 100 does output such a flag, the processing amount of the video decoding device can be reduced.
- Step 104 and Step 105 in Operation Example 1 are replaced with Step 404 and Step 405 described below.
- the adaptive filter 100 classifies each macroblock included in the first predicted image # 16 ′ into two sets according to a predetermined criterion.
- The predetermined criterion is one that the moving image decoding apparatus corresponding to this operation example can evaluate in the same way as the moving image encoding apparatus, without adding a flag or the like to the encoded data. For example, the classification can be determined based on whether or not the macroblock number is equal to or greater than a predetermined value.
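- For illustration, the example criterion (macroblock number at or above a predetermined value) can be evaluated identically by encoder and decoder with no transmitted side information; the names and threshold below are illustrative:

```python
def split_by_mb_number(num_mbs, threshold):
    # Classify macroblocks into two sets by their raster-scan number alone,
    # so a decoder can reproduce the split without any transmitted flag.
    er41 = [mb for mb in range(num_mbs) if mb >= threshold]
    er42 = [mb for mb in range(num_mbs) if mb < threshold]
    return er41, er42
```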
- the adaptive filter 100 divides the first predicted image #16′ into the region ER41 composed of macroblocks belonging to one of the two sets and the region ER42 composed of macroblocks belonging to the other set.
- the adaptive filter 100 divides the input image # 1 into an area ER41 'corresponding to the first area ER41 and an area ER42' corresponding to the second area ER42.
- In the reference image #14b, a region referred to for prediction of the region ER41′ is set as the region ER~41′, and a region referred to for prediction of the region ER42′ is set as the region ER~42′.
- the adaptive filter 100 stores in memory a flag #F1 indicating to which of the region ER41′ and the region ER42′ each region included in the input image #1 belongs.
- the flag # F1 may be derived each time it is referenced without being stored in the memory.
- the adaptive filter 100 refers to the flag #F1, filters the region ER~41′ in the reference image #14b based on the filter coefficient h41′(i, j), and filters the region ER~42′ in the reference image #14b based on the filter coefficient h42′(i, j).
- the filter coefficient h41 '(i, j) is determined such that the error between the region ER41' in the input image # 1 and the region ER41 in the first predicted image # 16 'is minimized. Further, the filter coefficient h42 '(i, j) is determined so that the error between the region ER42' in the input image # 1 and the region ER42 in the first predicted image # 16 'is minimized.
- the adaptive filter 100 refers to the flag #F1, performs filtering on the region ER~41′ and the region ER~42′ in the reference image #14b using the filter coefficient h41′(i, j) and the filter coefficient h42′(i, j), respectively, and generates and outputs the second output image data #100′.
- the moving picture decoding apparatus that decodes the encoded data #2 may divide the decoded image into the region ER41′ and the region ER42′ according to the predetermined criterion as described above, and perform the filtering using the filter coefficients h41′(i, j) and h42′(i, j).
- Alternatively, the adaptive filter 100 may classify each macroblock into the set to which the larger number of the macroblocks adjacent to that macroblock belong.
- In the above description, the classification into two sets is performed per macroblock, but this operation example is not limited to this: the classification into two sets may also be performed in units larger than a macroblock, or in units smaller than a macroblock.
- The number of sets is not limited to two; the classification may use three or more sets, and filter coefficients can be derived for each set.
- Step 104 and Step 105 in Operation Example 1 are replaced with Step 504 and Step 505 described below.
- the adaptive filter 100 classifies each macroblock into two sets according to whether or not the average pixel value in the region of the input image #1 corresponding to that macroblock of the first predicted image #16′ is equal to or greater than a predetermined threshold value. Further, the adaptive filter 100 divides the first predicted image #16′ into the region ER51 composed of macroblocks belonging to one of the two sets and the region ER52 composed of macroblocks belonging to the other set.
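- This classification can be sketched as follows; the block size, threshold, and all names are illustrative, not taken from the patent:

```python
import numpy as np

def classify_by_mean(input_image, mb_size, threshold):
    # Per-macroblock flag: True -> region ER51 (block mean >= threshold),
    # False -> region ER52; evaluated on the co-located input-image block.
    flags = {}
    h, w = input_image.shape
    for y in range(0, h, mb_size):
        for x in range(0, w, mb_size):
            block = input_image[y:y + mb_size, x:x + mb_size]
            flags[(y // mb_size, x // mb_size)] = bool(block.mean() >= threshold)
    return flags
```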
- the adaptive filter 100 divides the input image # 1 into an area ER51 'corresponding to the first area ER51 and an area ER52' corresponding to the second area ER52.
- In the reference image #14b, a region referred to for prediction of the region ER51′ is set as the region ER~51′, and a region referred to for prediction of the region ER52′ is set as the region ER~52′.
- the adaptive filter 100 stores in memory a flag #F2 indicating to which of the region ER51′ and the region ER52′ each region included in the input image #1 belongs.
- the flag # F2 may be derived every time it is referred to without being stored in the memory.
- the flag # F2 is sent to the variable length encoding unit 12 and encoded as encoded data # 2.
- the adaptive filter 100 filters the region ER~51′ in the reference image #14b based on the filter coefficient h51′(i, j), and filters the region ER~52′ in the reference image #14b based on the filter coefficient h52′(i, j).
- the filter coefficient h51 '(i, j) is determined so that the error between the region ER51 in the first predicted image # 16' and the region ER51 'in the input image # 1 corresponding thereto is minimized.
- the filter coefficient h52 '(i, j) is determined so that the error between the region ER52 in the first predicted image # 16' and the region ER52 'in the corresponding input image # 1 is minimized.
- the adaptive filter 100 refers to the flag #F2, performs filtering on the region ER~51′ and the region ER~52′ in the reference image #14b using the filter coefficient h51′(i, j) and the filter coefficient h52′(i, j), respectively, and generates and outputs the second output image data #100′.
- In general, the optimum filter coefficient varies depending on the average luminance of the region to be filtered. Therefore, by determining filter coefficients separately for the region with higher average luminance and the region with lower average luminance, appropriate filtering can be performed even when the brightness varies from region to region in the input image #1.
- the moving picture decoding apparatus that decodes the encoded data #2 may refer to the flag #F2 included in the encoded data #2, divide the decoded image into the region ER51′ and the region ER52′, and perform the filtering using the filter coefficients h51′(i, j) and h52′(i, j) on the regions referenced from them, respectively.
- step 504 may be replaced with step 504 'described below.
- Step 504′: When the error between the region of the processing target macroblock included in the first predicted image #16′ and the corresponding region of the input image #1 is equal to or greater than a predetermined threshold, the adaptive filter 100 classifies the processing target macroblock into the first group; otherwise, it classifies the macroblock into the second group.
- As the error, for example, the value obtained by replacing the region ERk in equation (2) with the region MB of the processing target macroblock may be used.
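- A sketch of the step 504′ classification, using a per-macroblock squared error as suggested above; the threshold and all names are illustrative:

```python
import numpy as np

def classify_by_error(pred, orig, mb_size, threshold):
    # Group 1 if the macroblock's squared prediction error is >= threshold,
    # group 2 otherwise (the per-MB analogue of Expression (2)).
    groups = {}
    h, w = orig.shape
    for y in range(0, h, mb_size):
        for x in range(0, w, mb_size):
            e = np.sum((pred[y:y + mb_size, x:x + mb_size]
                        - orig[y:y + mb_size, x:x + mb_size]) ** 2)
            groups[(y // mb_size, x // mb_size)] = 1 if e >= threshold else 2
    return groups
```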
- the adaptive filter 100 divides the first predicted image #16′ into the region ER51 composed of macroblocks belonging to the first group and the region ER52 composed of macroblocks belonging to the second group.
- the adaptive filter 100 divides the input image # 1 into an area ER51 'corresponding to the first area ER51 and an area ER52' corresponding to the second area ER52.
- In the reference image #14b, a region referred to for prediction of the region ER51′ is set as the region ER~51′, and a region referred to for prediction of the region ER52′ is set as the region ER~52′.
- the adaptive filter 100 stores in memory a flag #F2 indicating to which of the region ER51′ and the region ER52′ each region included in the reference image #14b belongs.
- the flag # F2 is sent to the variable length encoding unit 12 and encoded as encoded data # 2.
- the optimum filter coefficient varies depending on the size of an error (prediction error) between the predicted image (first predicted image # 16 ') and the input image (input image # 1).
- In the above description, the classification into two sets is performed per macroblock, but this operation example is not limited to this: the classification may be performed in units larger or smaller than a macroblock, and may also use three or more sets.
- In this operation example, when the ratio of the area of one of the regions into which the first predicted image #16′ is divided (in the operation examples 1 to 5 described above or in the operation examples described later) to the area of the first predicted image #16′ is equal to or less than a predetermined ratio, the adaptive filter 100 filters the region of the reference image #14b referenced from that region using the standard filter coefficient described above, and filters the region of the reference image #14b referenced from the other region using the optimum filter coefficient calculated by the statistical method described above.
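- This area-ratio rule can be sketched as a per-region choice between the standard (fixed) coefficients and statistically fitted ones; the fitting callback and all names are hypothetical:

```python
def choose_filter_coefficients(region_areas, total_area, min_ratio,
                               fit_region, standard_h):
    # Regions whose area share is at or below `min_ratio` keep the standard
    # coefficients; larger regions get per-region statistically fitted ones.
    chosen = {}
    for name, area in region_areas.items():
        if area / total_area <= min_ratio:
            chosen[name] = standard_h
        else:
            chosen[name] = fit_region(name)
    return chosen
```

- The rationale, as explained below, is that a small region provides too few prediction-residual samples for a reliable fit and too little gain to justify the coefficient code amount.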
- the adaptive filter 100 performs filtering using the standard filter coefficient for the region ER1 ′, and performs filtering using the filter coefficient h2 ′ (i, j) for the region ER2 ′.
- the filter coefficient h2′(i, j) is determined so that the error between the region ER2 in the first predicted image #16′ and the corresponding region of the input image #1 is minimized.
- a statistical method can be used for the specific determination of the filter coefficient h2 '(i, j).
- FIG. 7 is a diagram illustrating an example of the first region ER1 and the second region ER2 in the first predicted image # 16 '.
- For such a small region, the filter coefficient is not determined by the statistical method; instead, the region is filtered using the standard filter coefficient. The same applies to the other operation examples 2 to 4.
- For a small region, the number of prediction residual samples available for determining the filter coefficients by the statistical method is small; that is, the number of pixels included in the corresponding region ERk in equation (2) is small. Therefore, for such a small region, it is difficult to improve the prediction accuracy of the predicted image even if the statistical method is used. Even if the prediction accuracy can be improved, the filter coefficients for a small region may require a larger code amount than is saved by the improvement, which may reduce the encoding efficiency, and the use of the statistical method also increases the calculation cost.
- The adaptive filter 100 may output a flag #F3 indicating the region for which the standard filter coefficient is used in the above-described process, and the variable length encoding unit 12 may encode it as a part of the encoded data #2. With such a configuration, the moving picture decoding apparatus that decodes the encoded data #2 can, by referring to the flag #F3, distinguish the region to which the standard filter coefficient is to be applied from the region to which the filter coefficient determined by the statistical method in the adaptive filter 100 is to be applied.
- In some cases, the video decoding device can distinguish the region to which the standard filter coefficient is to be applied from the region to which the filter coefficient determined by the statistical method in the adaptive filter 100 is to be applied without referring to the flag #F3. In such a case, the adaptive filter 100 need not output the flag #F3.
- When the ratio of the area of one region of the divided first predicted image #16′ to the area of the first predicted image #16′ is equal to or less than the predetermined ratio, the filter coefficient may instead be calculated adaptively for the entire, undivided first predicted image #16′. In such a case, the adaptive filter 100 preferably outputs a flag indicating that a plurality of adaptive filters are not used.
- step 104 and step 105 are replaced with step 704 and step 705 described below, respectively.
- the value of the reference image index ref_idx is 0 or 1.
- the adaptive filter 100 divides the first predicted image #16′ into the region ER71 composed of partitions that refer to a reference picture whose reference image index ref_idx is 0 and the region ER72 composed of partitions that refer to a reference picture whose reference image index ref_idx is 1.
- the adaptive filter 100 divides the input image # 1 into an area ER71 'corresponding to the first area ER71 and an area ER72' corresponding to the second area ER72.
- In the reference image #14b, the region referred to for the prediction of the region ER71′ is set as the region ER~71′, and the region referred to for the prediction of the region ER72′ is set as the region ER~72′.
- Step 705: When the area ratio of the region ER71 to the first predicted image #16′ and the area ratio of the region ER72 to the first predicted image #16′ are both equal to or greater than a predetermined ratio, the adaptive filter 100 performs the same operation as described in operation example 2.
- Further, the adaptive filter 100 derives and sets the filter coefficient h71′(i, j) for the reference picture RP whose reference image index ref_idx is 0 in the reference image #14b.
- the filter coefficient h71 '(i, j) is determined such that the error between the region ER71 in the first predicted image # 16' and the region ER71 'in the input image # 1 is minimized.
- the statistical method described above can be used for the specific determination of the filter coefficient h71 '(i, j).
- When the area ratio of the region ER72 to the first predicted image #16′ is less than the predetermined area ratio, the adaptive filter 100 sets the filter coefficient h72′(i, j) and performs the process of step 705.1.
- the filter coefficient h72 '(i, j) only needs to be different from the filter coefficient h71' (i, j). For example, a filter having an edge enhancement effect can be applied.
- Step 705.1: The adaptive filter 100 generates the second output image data #100′ for the target frame using the filter coefficients h71′(i, j) and h72′(i, j).
- In the normal flow, the inter predicted image generation unit 16 generates a predicted image by selecting, from a plurality of reference pictures in the reference image #14b, the reference picture and motion vector that minimize the error from the input image #1. In this operation example, by contrast, the adaptive filter 100 uses the reference image index ref_idx to select the filter and the motion vector that give the smaller error from the input image, and thereby generates the second output image data #100′.
- a predetermined coefficient may be used as the filter coefficient having the edge enhancement effect.
- a standard filter coefficient may be used instead of the filter coefficient having the edge enhancement effect.
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- In this case, the reference image index ref_idx functions as an index indicating whether adaptive filtering is performed or non-adaptive filtering using a filter coefficient having an edge enhancement effect is performed.
- In that case, it is preferable to derive again the first motion vector #17′ and the prediction mode, which were derived using the parameters obtained under the interpretation before the change, and to generate the predicted image again.
- In the above, the case where the inter predicted image #16 is a P slice generated by unidirectional prediction, that is, where each partition of the inter predicted image #16 is generated by referring to one reference picture, has been described. However, the present invention is not limited to this; it is similarly applicable when the inter predicted image #16 is a B slice generated by bidirectional prediction, that is, when each partition of the inter predicted image #16 is generated by referring to two reference pictures.
- step 704 and step 705 are replaced with step 704 'and step 705' described below, respectively.
- the adaptive filter 100 divides the first predicted image #16′ into the region ER81 composed of partitions that refer to the reference picture whose reference image index ref_idx is 0 in the reference image list L0 and the region ER82 composed of partitions that refer to the reference picture whose reference image index ref_idx is 0 in the reference image list L1.
- the adaptive filter 100 divides the input image # 1 into an area ER81 'corresponding to the first area ER81 and an area ER82' corresponding to the second area ER82.
- In the reference image #14b, the regions referred to for the prediction of the region ER81′ are set as the region ER~81′a and the region ER~81′b, and, similarly, the regions referred to for the prediction of the region ER82′ are set as the region ER~82′a and the region ER~82′b.
- Step 705′: When the area ratio of the region ER81 to the first predicted image #16′ and the area ratio of the region ER82 to the first predicted image #16′ are both equal to or greater than a predetermined area ratio, the adaptive filter 100 performs adaptive filtering on the region ER~81′a and the region ER~81′b using the filter coefficient h81′(i, j), and on the region ER~82′a and the region ER~82′b using the filter coefficient h82′(i, j).
- the statistical method described above can be used to determine the filter coefficient h81 ′ (i, j) and the filter coefficient h82 ′ (i, j).
- On the other hand, the adaptive filter 100 derives and sets a filter coefficient h81″(i, j) so as to minimize the error between the region ER81 in the first predicted image #16′ and the region ER81′ in the input image #1, and sets, as h82″(i, j), a filter coefficient different from h81″(i, j), for example one having an edge enhancement effect.
- the adaptive filter 100 determines that the area ratio of the region ER82 with respect to the first predicted image # 16 'is less than a predetermined area ratio, the following step 705'. Process 1 is performed.
- (Step 705′.1) The second output image data # 100′ is generated for the target frame using the filter coefficients h81″(i, j) and h82″(i, j): filtering using the filter coefficient h81″(i, j) is performed for references with the reference image index ref_idx equal to 0, while, unlike the normal case, filtering using the filter coefficient h82″(i, j) is performed when the reference image # 14b is referred to with a non-zero value as the reference image index ref_idx.
- In other words, when the predicted image is regenerated, if the area ratio of the region ER82 with respect to the first predicted image # 16′ is less than the predetermined area ratio, the adaptive filter 100 performs filtering using a filter coefficient having an edge enhancement effect. A predetermined coefficient may be used as the filter coefficient having the edge enhancement effect, and a filter coefficient having other effects, such as a blurring effect, or a standard filter coefficient may be used instead.
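For illustration, one common choice of predetermined edge-enhancing coefficient is an identity-plus-Laplacian sharpening kernel. The specific kernel and helper below are a textbook example, not coefficients taken from this document:

```python
import numpy as np

# Identity kernel plus a negative 4-neighbour Laplacian: a common
# 3x3 sharpening (edge enhancement) kernel. Its weights sum to 1,
# so flat regions pass through unchanged.
EDGE_ENHANCE = np.array([[ 0., -1.,  0.],
                         [-1.,  5., -1.],
                         [ 0., -1.,  0.]])

def apply_fixed_filter(region, kernel=EDGE_ENHANCE):
    """Filter a 2-D region with a fixed (non-adaptive) kernel,
    keeping only the valid interior."""
    k = kernel.shape[0]
    r = k // 2
    H, W = region.shape
    out = np.empty((H - 2 * r, W - 2 * r))
    for y in range(r, H - r):
        for x in range(r, W - r):
            out[y - r, x - r] = np.sum(region[y - r:y + r + 1,
                                              x - r:x + r + 1] * kernel)
    return out
```

Because the weights sum to 1, the kernel leaves flat areas untouched and amplifies only local differences, which is the behaviour a fixed edge-enhancing coefficient would be chosen for here.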
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- In this case, the reference image index ref_idx functions as an index indicating whether adaptive filtering is performed or non-adaptive filtering using a filter coefficient having an edge enhancement effect is performed.
- It is therefore preferable that the encoded data # 2 include a flag for selecting whether the value stored as the reference image index ref_idx represents the identification number of the reference image, as in its original meaning, or is used for filter selection.
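On the decoder side, such a flag would simply switch how the stored value is interpreted. The function and return conventions below are hypothetical, sketched only to show the dispatch; they are not syntax elements defined by this document:

```python
def interpret_ref_idx(ref_idx, ref_idx_is_filter_selector):
    """Return how the decoder should treat the value stored as ref_idx.

    When the flag is set, a non-zero ref_idx no longer names a reference
    picture; it signals that the fixed (e.g. edge-enhancing) filter is
    used instead of the adaptive one.
    """
    if ref_idx_is_filter_selector:
        # value 0 -> adaptive filter, non-zero -> fixed filter
        return ("adaptive_filter" if ref_idx == 0 else "fixed_filter", 0)
    # original meaning: identification number of the reference image
    return ("reference_picture", ref_idx)
```

With the flag cleared the value keeps its normal meaning, so existing bitstream semantics are undisturbed.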
- step 104 and step 105 are replaced with step 804 and step 805 described below, respectively.
- The adaptive filter 100 divides the first predicted image # 16′ into a region ER91, composed of partitions that refer to a reference picture in the reference picture list L0, and a region ER92, composed of partitions that refer to a reference picture in the reference picture list L1. Further, the input image is divided into a region ER91′ corresponding to the first region ER91 and a region ER92′ corresponding to the second region ER92.
- The region ER~91′a and the region ER~91′b are collectively referred to as the region ER~91′.
- The region ER~92′a and the region ER~92′b are collectively referred to as the region ER~92′.
- When the adaptive filter 100 determines that the weight of the contribution of the region ER~91′ and the weight of the contribution of the region ER~92′ with respect to the first predicted image # 16′ are both greater than or equal to a predetermined weight, it performs adaptive filtering on the region ER~91′a and the region ER~91′b using the filter coefficient h91′(i, j), and performs adaptive filtering on the region ER~92′a and the region ER~92′b using the filter coefficient h92′(i, j).
- the statistical method described above can be used to determine the filter coefficient h91 ′ (i, j) and the filter coefficient h92 ′ (i, j).
- The above weight of contribution can be defined, for example, in line with H.264/AVC: when each pixel contributes once, the total contribution weight is equal to the number of pixels, and the contribution weight of a pixel that is referred to a plurality of times is also counted a plurality of times.
- Specifically, the adaptive filter 100 applies the filter coefficient h91′(i, j) to the region ER~91′a on the reference picture A and the region ER~91′b on the reference picture B, and applies the filter coefficient h92′(i, j) to the region ER~92′b on the reference picture A and the region ER~92′a on the reference picture C.
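Under the pixel-counting convention above, the contribution weight of each reference list can be accumulated per partition, counting a pixel once for each list that refers to it. The data layout below is a hypothetical sketch of that rule, not the document's actual representation:

```python
def contribution_weights(partitions):
    """Sum contribution weights per reference list.

    `partitions` is a list of (num_pixels, lists_used) pairs, where
    `lists_used` names the reference lists the partition draws on
    (e.g. ("L0",) for unidirectional, ("L0", "L1") for bidirectional).
    A pixel referred to by both lists is counted once for each list,
    so its contribution weight is counted a plurality of times.
    """
    weights = {"L0": 0, "L1": 0}
    for num_pixels, lists_used in partitions:
        for ref_list in lists_used:
            weights[ref_list] += num_pixels
    return weights
```

Comparing each list's accumulated weight against the predetermined weight then decides whether both adaptive filters are derived or the interpretation is changed.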
- The adaptive filter 100 changes the interpretation of the information specifying the reference image lists L0 and L1 when the weight of the contribution of the region ER~91′ with respect to the whole is smaller than a predetermined weight.
- Then, the predicted image is generated again by the processing of step 805.1.
- (Step 805.1)
- the operation of the adaptive filter 100 will be described by taking as an example a case where the weight of the contribution of the region ER92 ′ to the whole is smaller than a predetermined weight.
- The processing procedure other than the adaptive filter is the same as usual. When the weight of the contribution of the region ER92′ with respect to the whole is smaller than the predetermined weight and a reference is made by specifying the reference image list L0, the adaptive filter 100 acquires the reference picture of the designated ref_idx from the reference picture list L0 as usual and performs adaptive filtering using the filter coefficient h91″(i, j).
- Here, the filter coefficient h91″(i, j) is a filter coefficient that minimizes the error between the region ER91 in the first predicted image # 16′ and the region ER91′ in the input image.
- On the other hand, when a reference is made by specifying the reference image list L1, the adaptive filter 100 uses the reference picture list L0 instead of the reference picture list L1 and applies a filter different from the filter coefficient h91″(i, j). For example, filtering is performed using a filter coefficient having an edge enhancement effect; a predetermined coefficient may be used as this filter coefficient, and a filter coefficient having other effects, such as a blurring effect, or a standard filter coefficient may be used instead.
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- In this case, the reference image list number functions as an index indicating whether adaptive filtering is performed or non-adaptive filtering using a filter coefficient having an edge enhancement effect is performed.
- When the reference image list number L1 is not used, it can thus serve as an index indicating that non-adaptive filtering is to be performed, so that adaptive filtering and non-adaptive filtering can be switched without increasing the code amount with additional side information.
- When the interpretation is changed, it is preferable that the first motion vector # 17′ and the prediction mode, which were derived using the parameters obtained under the interpretation before the change, be derived again to generate the predicted image.
- In the above description, the case where the weight of the contribution of the region ER92′ with respect to the entire first predicted image # 16′ is smaller than the predetermined weight has been taken as an example, but the present invention can be similarly applied to the case where the weight of the contribution of the region ER91′ with respect to the entire first predicted image # 16′ is smaller than the predetermined weight.
- step 104 and step 105 are replaced with step 904 and step 905 described below, respectively.
- the adaptive filter 100 divides the first predicted image # 16 ′ into an upper half region ER101 of the image and a lower half region ER102 of the image.
- the adaptive filter 100 divides the input image # 1 into an area ER101 'corresponding to the first area ER101 and an area ER102' corresponding to the second area ER102.
- The region referred to for prediction of the region ER101′ in the reference image # 14b is set to the region ER~101′, and the region referred to for prediction of the region ER102′ in the reference image # 14b is set to the region ER~102′.
- The adaptive filter 100 performs adaptive filtering on the region ER~101′ in the reference image # 14b based on the filter coefficient h101′(i, j), and performs adaptive filtering on the region ER~102′ in the reference image # 14b based on the filter coefficient h102′(i, j).
- the filter coefficient h101 '(i, j) is determined such that the error between the region ER101 in the first predicted image # 16' and the region ER101 'in the input image # 1 corresponding thereto is minimized. Further, the filter coefficient h102 '(i, j) is determined such that the error between the region ER102 in the first predicted image # 16' and the region ER102 'in the input image # 1 corresponding thereto is minimized.
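The error minimization just described is an ordinary least-squares problem: each filtered pixel is a linear combination of the reference pixels in its window, so stacking one equation per pixel gives an overdetermined linear system. The numpy sketch below illustrates this statistical derivation of the coefficients; it is an illustrative implementation, not one prescribed by this document:

```python
import numpy as np

def derive_filter_coeffs(ref, target, k=3):
    """Least-squares estimate of a k x k filter h(i, j) such that
    filtering `ref` with h(i, j) best approximates `target` over the
    valid interior (pixels whose full k x k window lies inside `ref`)."""
    r = k // 2
    H, W = ref.shape
    A, b = [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            # one linear equation per output pixel: its k x k window
            A.append(ref[y - r:y + r + 1, x - r:x + r + 1].ravel())
            b.append(target[y, x])
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return h.reshape(k, k)
```

Running this once per region (e.g. ER101′ and ER102′ against their reference regions) yields the independent, error-minimizing coefficients h101′(i, j) and h102′(i, j) described above.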
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- By the inter prediction image generation unit 16 performing the above operation, adaptive filtering can be performed independently on the reference image region referenced from the upper half of the image and the reference image region referenced from the lower half.
- Alternatively, one of the regions may be filtered using a predetermined filter coefficient. For example, adaptive filtering may be performed only on the reference image region referred to from the center of the image, while the other regions are filtered using a predetermined filter coefficient.
- In general, viewers tend to focus on the center of the screen. With the above configuration, adaptive filtering can be selectively performed on regions that easily attract the viewer's attention, so that more effective filtering can be performed while suppressing processing costs.
- step 104 and step 105 are replaced with step 1004 and step 1005 described below, respectively.
- the adaptive filter 100 divides the first predicted image # 16 ′ into an area ER111 composed of blocks having a predetermined size or larger and an area ER112 composed of blocks smaller than the predetermined size.
- the block may be a macro block or a unit area smaller than the macro block.
- the adaptive filter 100 divides the input image # 1 into an area ER111 'corresponding to the first area ER111 and an area ER112' corresponding to the second area ER112.
- The region referred to for prediction of the region ER111′ in the reference image # 14b is set to the region ER~111′, and the region referred to for prediction of the region ER112′ in the reference image # 14b is set to the region ER~112′.
- The adaptive filter 100 performs adaptive filtering on the region ER~111′ in the reference image # 14b based on the filter coefficient h111′(i, j), and performs adaptive filtering on the region ER~112′ in the reference image # 14b based on the filter coefficient h112′(i, j).
- the filter coefficient h111 '(i, j) is determined so that the error between the region ER111 in the first predicted image # 16' and the region ER111 'in the corresponding input image # 1 is minimized. Further, the filter coefficient h112 '(i, j) is determined so that the error between the region ER112 in the first predicted image # 16' and the region ER112 'in the corresponding input image # 1 is minimized.
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- the optimal filter coefficient varies depending on the block size.
- By the above operation, adaptive filtering can be performed independently on the reference image region referred to from a region composed of blocks having a size equal to or larger than the predetermined size and on the reference image region referred to from a region composed of blocks smaller than the predetermined size.
- the first predicted image # 16 ' may be divided into two or more regions according to the partition size.
- Alternatively, a configuration may be adopted in which filtering with a predetermined filter coefficient is performed on the region of the reference image that is referred to from a region composed of blocks (or partitions) having the predetermined size or larger.
- With this configuration, the reference image region referred to by a block (or partition) having a size equal to or larger than the predetermined size is filtered with a fixed filter, while adaptive filtering can be performed on the region of the reference image referenced from a block (or partition) smaller than the predetermined size.
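The block-size split (and its fixed-versus-adaptive variant) amounts to a per-block classification producing two region masks. The sketch below simplifies the block geometry to a uniform grid, which is an assumption made only for illustration:

```python
import numpy as np

def split_by_block_size(block_sizes, threshold):
    """Classify blocks into two regions by size.

    `block_sizes` is a 2-D array giving each block's size (e.g. its
    width in pixels on a uniform grid). Returns boolean masks:
    region ER111 (size >= threshold; fixed-filtered in the variant
    above) and region ER112 (size < threshold; adaptively filtered).
    """
    block_sizes = np.asarray(block_sizes)
    er111 = block_sizes >= threshold
    er112 = ~er111
    return er111, er112
```

Each mask then selects which reference-image pixels receive the fixed kernel and which receive the adaptively derived coefficients.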
- step 104 and step 105 are replaced with step 1104 and step 1105 described below, respectively.
- The adaptive filter 100 divides the first predicted image # 16′ into a region ER121, composed of partitions to which a motion vector having a magnitude equal to or greater than a predetermined value is allocated, and a region ER122, composed of the other partitions.
- the adaptive filter 100 divides the input image # 1 into an area ER121 'corresponding to the first area ER121 and an area ER122' corresponding to the second area ER122.
- The region referred to for prediction of the region ER121′ in the reference image # 14b is set to the region ER~121′, and the region referred to for prediction of the region ER122′ in the reference image # 14b is set to the region ER~122′.
- The adaptive filter 100 performs adaptive filtering on the region ER~121′ in the reference image # 14b based on the filter coefficient h121′(i, j), and performs adaptive filtering on the region ER~122′ in the reference image # 14b based on the filter coefficient h122′(i, j).
- the filter coefficient h121 '(i, j) is determined so that the error between the region ER121 in the first predicted image # 16' and the region ER121 'in the input image # 1 corresponding thereto is minimized. Further, the filter coefficient h122 '(i, j) is determined such that the error between the region ER122 in the first predicted image # 16' and the region ER122 'in the corresponding input image # 1 is minimized.
- the adaptive filter 100 performs the above filtering to generate and output the second output image data # 100 '.
- the optimum filter coefficient varies depending on the size of the motion vector assigned to the partition.
- By the inter predicted image generation unit 16 performing the above operation, adaptive filtering can be performed independently on the region of the reference image referred to from the region composed of partitions to which a motion vector having a magnitude equal to or larger than the predetermined value is assigned and on the region of the reference image referred to from the region composed of the other partitions.
- Alternatively, adaptive filtering may be performed only on the reference image region referred to from the region composed of partitions to which a motion vector having a magnitude greater than or equal to the predetermined value is assigned, while fixed filtering using a predetermined filter coefficient is performed on the reference image region referred to from the region composed of the other partitions.
- With this configuration, fixed filtering is performed on the reference image region referenced from the region composed of partitions to which small motion vectors are assigned, while adaptive filtering can be performed on the region of the reference image referenced from the region composed of partitions to which large motion vectors are assigned, so that more effective filtering can be performed while suppressing the processing cost.
- The image may also be divided into two regions depending on whether the horizontal component of the motion vector is equal to or greater than a predetermined value, or whether the vertical component of the motion vector is equal to or greater than a predetermined value.
- It can also be configured to be divided into two regions according to the direction of the motion vector.
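The motion-vector criteria above (overall magnitude, a single component, or direction) can be expressed as interchangeable predicates over each partition's vector. The threshold value and the choice of Euclidean norm in this sketch are illustrative assumptions:

```python
import math

def classify_by_mv(mv, mode="magnitude", threshold=4.0):
    """Return True if the partition's motion vector (mvx, mvy) places
    it in region ER121, under one of the division criteria:
      - "magnitude":  |mv| >= threshold
      - "horizontal": |mvx| >= threshold
      - "vertical":   |mvy| >= threshold
      - "direction":  mostly-horizontal motion vs mostly-vertical
    """
    mvx, mvy = mv
    if mode == "magnitude":
        return math.hypot(mvx, mvy) >= threshold
    if mode == "horizontal":
        return abs(mvx) >= threshold
    if mode == "vertical":
        return abs(mvy) >= threshold
    if mode == "direction":
        return abs(mvx) >= abs(mvy)
    raise ValueError(mode)
```

Swapping `mode` changes which of the two-region divisions described above is produced, without touching the rest of the filtering pipeline.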
- In the above description, the macroblock or the partition is classified into one of a plurality of regions according to the information (skip mode, reference index, etc.) associated with that macroblock or partition; however, the present invention is not limited to this, and the macroblock or the partition can also be classified into one of a plurality of regions according to information associated with a neighboring macroblock or partition.
- For example, a configuration may be adopted in which the adaptive filter 100 classifies the macroblock into one of a plurality of regions according to the value of the transform coefficient in a macroblock near (including adjacent to) that macroblock, and performs adaptive filtering corresponding to each region.
- Alternatively, the adaptive filter 100 may be configured to filter the region of the reference image referenced from the macroblock with a fixed filter when the value of the transform coefficient in the neighboring macroblock is smaller than a predetermined value.
- In general, a macroblock with a small code amount of transform coefficients is a region that is easy to predict, because the change in the image within the macroblock is small. Therefore, by using a fixed filter for the reference image region referenced from a macroblock in the vicinity of such a macroblock, while using an adaptive filter for the reference image region referenced from a macroblock in the vicinity of a macroblock with a large code amount of transform coefficients, more appropriate filtering can be performed while suppressing processing costs.
- the adaptive filter 100 may be configured to classify the macroblock into any one of a plurality of regions according to the flatness of the image in the macroblock near the macroblock.
- the optimum filter coefficient varies depending on the flatness of the region.
- filter coefficients can be obtained independently and adaptively for each of a plurality of regions divided according to flatness, so that more efficient filtering can be performed.
- the adaptive filter 100 may be configured to classify the macroblock into any one of a plurality of regions according to the brightness and color difference of the image in the macroblock near the macroblock.
- the optimum filter coefficient varies depending on the brightness and color difference of the area.
- With the above configuration, filter coefficients can be obtained independently and adaptively for each of a plurality of regions divided according to lightness and color difference, so that more efficient filtering can be performed.
- the adaptive filter 100 may be configured to divide the first predicted image # 16 ′ into a plurality of regions in accordance with the information of the reference image referred to by the first predicted image # 16 ′.
- For example, the adaptive filter 100 may divide the first predicted image # 16′ into two regions according to whether or not the reference image referred to by the first predicted image # 16′ is an intra picture, that is, a picture generated by intra prediction.
- In general, the optimum filter coefficient differs depending on whether or not the reference image is an intra picture.
- adaptive filter coefficients can be calculated independently for a region where the reference image is an intra picture and a region where the reference image is not an intra picture. Therefore, appropriate filtering can be performed even when the first predicted image # 16 'includes a region that refers to an intra picture and a region that does not refer to an intra picture.
- The adaptive filter 100 may also be configured to divide the first predicted image # 16′ into a plurality of regions according to the value of the quantization parameter QP in the reference picture referred to by the first predicted image # 16′.
- For example, the first predicted image # 16′ may be divided into two regions according to whether or not the average value of the quantization parameter QP in the region of the reference picture corresponding to each macroblock of the first predicted image # 16′ is equal to or greater than a predetermined threshold.
- the image quality of the reference image changes depending on the value of the quantization parameter QP, and therefore the optimum filter coefficient differs.
- The adaptive filter 100 may also be configured to divide the first predicted image # 16′ into two regions depending on whether the average pixel value in the region on the reference image referenced by the first predicted image # 16′ is equal to or greater than a predetermined threshold value.
- the optimum filter coefficient is different when the region on the reference image is flat and when the region is not flat.
- adaptive filter coefficients can be determined independently for the case where the region on the reference image is flat and the case where the region on the reference image is not flat.
- The adaptive filter 100 may also be configured to divide the first predicted image # 16′ into a plurality of regions in accordance with information on macroblocks included in the region on the reference image referenced by the first predicted image # 16′.
- FIG. 14 is a diagram showing a bit stream #BS for each slice of encoded data # 2 that is generated using the video encoding device 1 and is referred to by the video decoding device 2 described below.
- the bitstream #BS includes filter coefficient information FC and macroblock information MB1 to MBN.
- the filter coefficient information FC is information including the filter coefficient generated by the adaptive filter 100.
- Macro block information MB1 to MBN is information related to macro blocks included in the slice, and includes macro block prediction mode # 18b, block division information, and the like.
- N represents the number of macroblocks included in the slice.
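The per-slice layout of FIG. 14 — filter coefficient information FC followed by macroblock information MB1 … MBN — can be mirrored in a toy container. This is pure illustration: the real bitstream is binary and entropy-coded, and the field names below are assumptions, not the patent's syntax:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FilterCoefficientInfo:
    """FC: the filter coefficients generated by the adaptive filter 100."""
    coefficients: List[List[float]]  # one flattened h(i, j) per region

@dataclass
class MacroblockInfo:
    """MBk: per-macroblock data such as prediction mode and block division."""
    prediction_mode: str
    block_division: str = "16x16"
    skip: bool = False

@dataclass
class SliceBitstream:
    """#BS for one slice: FC first, then MB1..MBN."""
    fc: FilterCoefficientInfo
    mbs: List[MacroblockInfo] = field(default_factory=list)

    @property
    def n(self) -> int:
        # N: the number of macroblocks included in the slice
        return len(self.mbs)
```

Placing FC ahead of the macroblock records matches the figure's ordering, so a decoder can load the filter coefficients before it begins reconstructing macroblocks.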
- The moving image encoding device 1 divides the first predicted image # 16′ into a first region ER1, composed of macroblocks to which the skip mode is applied, and a second region ER2, composed of macroblocks to which the skip mode is not applied, and adaptively calculates the filter coefficient h1′(i, j) and the filter coefficient h2′(i, j) corresponding to the region ER1 and the region ER2, respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h1 '(i, j) and the filter coefficient h2' (i, j) calculated in this way.
- the macro block information MB1 to MBN in the encoded data # 2 includes information on whether or not the skip mode is applied to each macro block.
- The moving image encoding device 1 divides the first predicted image # 16′ into a region ER21, composed of partitions whose reference image index ref_idx is 0, and a region ER22, composed of the other partitions, and adaptively calculates the filter coefficient h21′(i, j) and the filter coefficient h22′(i, j) corresponding to the region ER21 and the region ER22, respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h21 '(i, j) and the filter coefficient h22' (i, j) calculated in this way.
- the encoded data # 2 includes a reference image index referenced by each partition.
- The adaptive filter 100 generates the inter prediction image # 16a by the operation described in the operation example 1 and the inter prediction image # 16b by the operation described in the operation example 2. The adaptive filter 100 then selects, from the inter prediction image # 16a and the inter prediction image # 16b, the predicted image with the higher coding efficiency and outputs it.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient used by the adaptive filter 100 to generate the selected predicted image.
- It is preferable that the encoded data # 2 include a flag indicating by which of the methods of the operation example 1 and the operation example 2 the predicted image has been generated, that is, a flag indicating whether the filter coefficient included in the encoded data # 2 is a filter coefficient calculated with reference to the skip mode or a filter coefficient calculated with reference to the reference image index.
- When the adaptive filter 100 selects the predicted image with reference to the area difference between the region ER1 and the region ER2 in the operation example 1 and the area difference between the region ER21 and the region ER22 in the operation example 2, the encoded data # 2 may be configured not to include a flag indicating by which method the predicted image has been generated. This is because the moving picture decoding apparatus that decodes the encoded data # 2 can calculate the area differences and thereby identify, without referring to the flag, by which of the operation example 1 and the operation example 2 the predicted image has been generated.
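As a sketch of why the flag can be omitted: the decoder recomputes the same area differences the encoder consulted and applies the same decision rule. The specific rule below (pick the operation example whose two regions differ more in area) is an assumption chosen for illustration; the document states only that the area differences are referred to:

```python
def identify_operation_example(area_er1, area_er2, area_er21, area_er22):
    """Infer, without a flag, which operation example produced the
    predicted image. Assumed rule: the example whose two regions show
    the larger area difference is the one the encoder selected.
    Returns 1 (skip-mode split) or 2 (ref_idx split)."""
    diff1 = abs(area_er1 - area_er2)    # operation example 1 regions
    diff2 = abs(area_er21 - area_er22)  # operation example 2 regions
    return 1 if diff1 >= diff2 else 2
```

The point is not the particular rule but that it is deterministic and computable on both sides, so no side information is needed.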
- The adaptive filter 100 divides each region included in the first predicted image # 16′ into a region ER41 and a region ER42 according to a predetermined criterion. Further, the input image # 1 is divided into a region ER41′ and a region ER42′ corresponding to the region ER41 and the region ER42, and the filter coefficient h41′(i, j) and the filter coefficient h42′(i, j) are adaptively calculated corresponding to the region ER41′ and the region ER42′, respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h41 '(i, j) and the filter coefficient h42' (i, j) calculated in this way.
- the encoded data # 2 includes a flag indicating whether each region in the reference image corresponds to the region ER41 'or the region ER42'.
- the macroblock information MB1 to MBN in the encoded data # 2 includes a flag indicating which of the area ER41 ′ and the area ER42 ′ each macroblock belongs to.
- It is preferable that the encoded data # 2 include a flag indicating which criterion the moving image encoding device 1 uses to divide each region included in the first predicted image # 16′.
- For example, the adaptive filter 100 divides the first predicted image # 16′ into a region ER51 and a region ER52 according to the average pixel value in the region of the input image # 1 corresponding to each macroblock included in the first predicted image # 16′, further divides the input image # 1 into a region ER51′ and a region ER52′ corresponding to the region ER51 and the region ER52, and adaptively calculates the filter coefficient h51′(i, j) and the filter coefficient h52′(i, j), respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h51 '(i, j) and the filter coefficient h52' (i, j) thus calculated.
- the macro block information MB1 to MBN in the encoded data # 2 includes a flag indicating to which of the above-mentioned areas each macro block belongs.
- When the area of one of the regions of the first predicted image # 16′, divided as in the operation examples 1 to 5 described above, is small relative to the first predicted image # 16′, the adaptive filter 100 performs filtering on the region of the reference image # 14b referred to from that region using the above-described standard filter coefficient, and performs filtering on the region of the reference image # 14b referred to from the other region using the adaptive filter coefficient calculated by the statistical method described above.
- the filter coefficient information FC in the encoded data # 2 includes the adaptive filter coefficient calculated in this way.
- the encoded data # 2 preferably includes a flag indicating a region using the standard filter coefficient.
- When the ratio of the area of one region of the divided first predicted image # 16′ to the area of the first predicted image # 16′ is equal to or less than a predetermined ratio in the adaptive filter 100, it is preferable that the encoded data # 2 include a flag indicating that a plurality of adaptive filters are not used.
- For example, when the area ratio, with respect to the first predicted image # 16′, of the region that refers to a reference picture whose reference image index ref_idx is not 0 is less than a predetermined ratio, the adaptive filter 100 calculates an adaptive filter coefficient for the region that references a reference picture whose reference image index ref_idx is 0, and performs filtering on the corresponding region on the reference picture whose reference image index ref_idx is not 0 using, for example, a filter coefficient having an edge enhancement effect. A predetermined filter coefficient having the edge enhancement effect may be used.
- the filter coefficient information FC in the encoded data # 2 includes the adaptive filter coefficient calculated as described above.
- the encoded data # 2 includes a reference picture index of a reference picture that each area refers to.
- the reference image index ref_idx may function as an index indicating whether adaptive filtering is performed or non-adaptive filtering is performed using a filter coefficient having an edge enhancement effect.
- It is preferable that the encoded data # 2 include a flag for selecting whether the value stored as the reference image index ref_idx represents the identification number of the reference image, as in its original meaning, or represents that it is used for filter selection.
- For example, when the area of the region of the first predicted image # 16′ that refers to the reference image whose reference image list number is 1 is small, the adaptive filter 100 sets an adaptive filter coefficient for the region that refers to the reference image whose reference image list number is 0, and performs filtering, using a filter coefficient having an edge enhancement effect, on the region on the reference image whose reference image list number is 0 that corresponds to the region referring to the reference image whose reference image list number is 1.
- a predetermined filter coefficient having the edge enhancement effect may be used.
- the filter coefficient information FC in the encoded data # 2 includes the adaptive filter coefficient calculated as described above.
- the encoded data # 2 includes the reference image list number of the reference image that each area refers to.
- the reference image list number may function as an index indicating whether adaptive filtering is performed or non-adaptive filtering is performed using a filter coefficient having an edge enhancement effect.
- It is preferable that the encoded data # 2 include a flag for selecting whether the value stored as the reference image list number represents the number that distinguishes the reference image list, as in its original meaning, or represents that it is used for filter selection.
- The adaptive filter 100 divides the input image # 1 into the upper half region ER101′ and the lower half region ER102′ of the image, and adaptively calculates the filter coefficient h101′(i, j) and the filter coefficient h102′(i, j) corresponding to the region ER101′ and the region ER102′, respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h101 '(i, j) and the filter coefficient h102' (i, j) calculated as described above.
- When the adaptive filter 100 filters the region of the reference image # 14b referenced from the lower half region ER102′ of the image using a predetermined filter coefficient instead of adaptive filtering, the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h101′(i, j).
- The adaptive filter 100 divides the first predicted image # 16′ into a region ER111, composed of blocks of a predetermined size or larger, and a region ER112, composed of blocks smaller than the predetermined size, further divides the input image # 1 into a region ER111′ and a region ER112′ corresponding to the region ER111 and the region ER112, and adaptively calculates the filter coefficient h111′(i, j) and the filter coefficient h112′(i, j) corresponding to the region ER111′ and the region ER112′, respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h111 '(i, j) and the filter coefficient h112' (i, j) calculated as described above.
- the adaptive filter 100 performs filtering with a predetermined filter coefficient on the region of the reference image # 14b referred to from the region ER111 composed of blocks having a predetermined size or more.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h112 ′ (i, j).
- the adaptive filter 100 divides the first predicted image # 16 ′ into a region ER121 composed of partitions to which motion vectors having a magnitude greater than or equal to a predetermined value are assigned and a region ER122 composed of partitions to which such motion vectors are not assigned, divides the input image # 1 into a region ER121 ′ and a region ER122 ′ corresponding to the region ER121 and the region ER122, and adaptively calculates the filter coefficient h121 ′ (i, j) and the filter coefficient h122 ′ (i, j) for the region ER121 ′ and the region ER122 ′, respectively.
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h121 '(i, j) and the filter coefficient h122' (i, j) calculated as described above.
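The motion-vector-based division above can be sketched as a simple classification of partitions by vector magnitude. This is an illustrative sketch only; the partition records, the `split_by_motion` helper, and the threshold value are assumptions, not values from the specification.

```python
import math

def split_by_motion(partitions, threshold):
    # Split partitions into ER121 (|mv| >= threshold) and ER122
    # (|mv| < threshold), mirroring the operation example.
    er121, er122 = [], []
    for part in partitions:
        mvx, mvy = part["mv"]
        if math.hypot(mvx, mvy) >= threshold:
            er121.append(part["id"])   # large motion: one coefficient set
        else:
            er122.append(part["id"])   # small motion: the other set
    return er121, er122

parts = [{"id": 0, "mv": (4, 3)},
         {"id": 1, "mv": (0, 1)},
         {"id": 2, "mv": (-6, 8)}]
large, small = split_by_motion(parts, threshold=4.0)
```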
- the filter coefficient information FC in the encoded data # 2 includes the filter coefficient h121 ′ (i, j).
- the adaptive filter 100 classifies each macroblock into one of a plurality of regions according to the values of the transform coefficients in macroblocks in the vicinity of (including adjacent to) that macroblock, and adaptively calculates a filter coefficient corresponding to each region.
- the filter coefficient information FC in the encoded data # 2 includes the adaptive filter coefficient calculated in this way.
- When fixed filtering is performed on any of the plurality of regions, if the moving image decoding apparatus that decodes the encoded data # 2 holds a filter coefficient for performing that fixed filtering, the filter coefficient information FC in the encoded data # 2 need not include a filter coefficient for the fixed filtering. In this case, however, it is preferable that the encoded data # 2 include a flag indicating the region on which the fixed filtering has been performed.
- the adaptive filter 100 divides the first predicted image # 16 ′ into two regions according to whether or not the reference image referred to by the first predicted image # 16 ′ is an intra picture, that is, a picture generated by intra prediction, and adaptively calculates a filter coefficient corresponding to each region.
- the filter coefficient information FC in the encoded data # 2 includes the adaptive filter coefficient calculated in this way.
- the adaptive filter 100 divides the first predicted image # 16 'into a plurality of regions with reference to the quantization parameter QP.
- (Moving picture decoding apparatus 2)
- the moving picture decoding apparatus 2 according to the present invention will be described with reference to FIGS.
- the moving image decoding apparatus 2 is a moving picture decoding apparatus that uses, as parts thereof, technologies adopted in the H.264/AVC standard and in the KTA software.
- FIG. 12 is a block diagram showing a configuration of the video decoding device 2.
- the moving image decoding apparatus 2 includes a variable length code decoding unit 23, a motion vector restoration unit 24, a buffer memory 25, an inter prediction image generation unit 26, an intra prediction image generation unit 27, a prediction method determination unit 28, an inverse quantization / inverse transform unit 29, and an adder 30.
- the video decoding device 2 receives the encoded data # 2 and outputs a decoded image # 3.
- the variable length code decoding unit 23 performs variable length decoding on the encoded data # 2, and outputs the differential motion vector # 23a, the side information # 23b, the quantized prediction residual data # 23c, and the filter coefficient information # 23d.
- the filter coefficient information # 23d includes information corresponding to the filter coefficient # 101 described above.
- the motion vector restoration unit 24 decodes the motion vector # 24 of the target partition from the difference motion vector # 23a and the motion vector # 25a that has already been decoded and stored in the buffer memory 25.
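The restoration step can be sketched as follows: the decoded motion vector # 24 is the sum of the differential motion vector # 23a and a predictor derived from already decoded neighbouring vectors. This sketch is illustrative; the component-wise median predictor is the H.264 convention and is an assumption here, as are the helper name and the sample vectors.

```python
import statistics

def restore_mv(mvd, neighbour_mvs):
    # Component-wise median of already decoded neighbouring vectors
    # serves as the predictor; the restored vector is predictor + mvd.
    pmv_x = statistics.median(mv[0] for mv in neighbour_mvs)
    pmv_y = statistics.median(mv[1] for mv in neighbour_mvs)
    return (mvd[0] + pmv_x, mvd[1] + pmv_y)

mv24 = restore_mv(mvd=(1, -2), neighbour_mvs=[(0, 0), (2, 2), (4, 0)])
```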
- the buffer memory 25 stores decoded image # 3, motion vector # 24, and side information # 23b.
- the inter prediction image generation unit 26 generates the inter predicted image # 26 based on the motion vector # 24, which is decoded by the motion vector restoration unit 24 and supplied via the buffer memory 25, and on the reference image # 25d stored in the buffer memory 25.
- the motion vector # 25c includes the same motion vector as the motion vector # 24. Further, the side information # 23b and the filter coefficient information # 23d are input to the inter predicted image generation unit 26.
- The configuration of the inter prediction image generation unit 26 will be described in detail later; its description is omitted here.
- the intra-predicted image generation unit 27 generates an intra-predicted image # 27 from the locally decoded image # 25b in the same image as the target macroblock stored in the buffer memory 25.
- the prediction method determination unit 28 selects either the intra predicted image # 27 or the inter predicted image # 26 based on the prediction mode information included in the side information # 23b, and outputs the selected image as the predicted image # 28.
- the inverse quantization / inverse transform unit 29 performs inverse quantization and inverse DCT transform on the quantized prediction residual data # 23c, and outputs a prediction residual # 29.
- the adder 30 adds the prediction residual # 29 and the predicted image # 28, and outputs the result as the decoded image # 3.
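The adder step can be sketched in a few lines: the decoded image # 3 is the predicted image # 28 plus the prediction residual # 29, clipped to the valid pixel range. The 8-bit range and the helper name are assumptions for illustration, not stated in this passage.

```python
def reconstruct(prediction, residual, bit_depth=8):
    # Pixel-wise sum of prediction and residual, clipped to [0, 2^bd - 1].
    max_val = (1 << bit_depth) - 1
    return [[max(0, min(max_val, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]

decoded = reconstruct([[100, 250], [0, 128]], [[10, 20], [-5, 0]])
```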
- the output decoded image # 3 is also supplied to the buffer memory 25.
- FIG. 13 is a block diagram illustrating a configuration of the inter predicted image generation unit 26. As shown in FIG. 13, the inter prediction image generation unit 26 includes a prediction image generation unit 26a and an adaptive filter 100 ''.
- the adaptive filter 100 ′′ generates and outputs output image data # 100 ′′ by filtering the reference image # 25d stored in the buffer memory 25. The filtering is performed based on the filter coefficient information # 23d decoded from the encoded data # 2.
- the side information # 23b is input to the adaptive filter 100 ''.
- the side information # 23b includes information indicating whether the target block is a bidirectionally predicted block or a unidirectionally predicted block, information indicating whether or not the skip mode is applied to the target macroblock, prediction mode information indicating whether the target macroblock is an intra-predicted macroblock or an inter-predicted macroblock, and the quantization parameter QP associated with the target block.
- the predicted image generation unit 26a generates and outputs the inter predicted image # 26 by performing motion compensation on the output image data # 100 ′′ using the motion vector # 25c.
- the adaptive filter 100 ′′ refers to the information, included in the macroblock information MB1 to MBN in the encoded data # 2, on whether or not the skip mode is applied to each macroblock. For macroblocks to which the skip mode is applied, filtering is performed using the filter coefficient h1 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and for macroblocks to which the skip mode is not applied, filtering is performed using the filter coefficient h2 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
- the adaptive filter 100 ′′ refers to the information, included in the encoded data # 2, on the reference image index ref_idx referred to by each partition. For a partition whose reference image index ref_idx is 0, filtering is performed using the filter coefficient h21 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and for a partition whose reference image index ref_idx is not 0, filtering is performed using the filter coefficient h22 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
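The decoder-side selection just described reduces to a lookup keyed on ref_idx. The sketch below is illustrative only; the coefficient tables stand in for the values carried in the filter coefficient information FC, and the function name is a hypothetical helper.

```python
# Placeholder coefficient sets standing in for h21'(i, j) and h22'(i, j).
FC = {"h21": [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]],
      "h22": [[0.1, 0.1, 0.1], [0.1, 0.2, 0.1], [0.1, 0.1, 0.1]]}

def select_coefficients(ref_idx):
    # Partitions with ref_idx == 0 use h21'; all other partitions use h22'.
    return FC["h21"] if ref_idx == 0 else FC["h22"]

coeffs_for_idx1 = select_coefficients(1)
```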
- the adaptive filter 100 ′′ refers to the flag included in the encoded data # 2 and determines whether the filter coefficient included in the filter coefficient information FC in the encoded data # 2 is a filter coefficient calculated with reference to the skip mode or a filter coefficient calculated with reference to the reference image index ref_idx.
- When the filter coefficient is one calculated with reference to the skip mode, the adaptive filter 100 ′′ performs filtering on macroblocks to which the skip mode is applied using the filter coefficient h1 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and performs filtering on macroblocks to which the skip mode is not applied using the filter coefficient h2 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
- When the filter coefficient included in the filter coefficient information FC in the encoded data # 2 is one calculated with reference to the reference image index ref_idx, filtering is performed on a partition whose reference image index ref_idx is 0 using the filter coefficient h21 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and on a partition whose reference image index ref_idx is not 0 using the filter coefficient h22 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
- the adaptive filter 100 ′′ classifies each region on the inter predicted image # 26 being generated into either the region ER41 ′′ or the region ER42 ′′ according to a predetermined determination criterion. Further, the adaptive filter 100 ′′ sets the region of the reference image # 25d referred to for prediction of the region ER41 ′′ as the region ER~41 ′′, and sets the region of the reference image # 25d referred to for prediction of the region ER42 ′′ as the region ER~42 ′′.
- the adaptive filter 100 ′′ performs filtering on the regions ER~41 ′′ and ER~42 ′′ using the filter coefficient h41 ′ (i, j) and the filter coefficient h42 ′ (i, j), respectively.
- the regions ER~41 ′′ and ER~42 ′′ correspond to the regions ER~41 ′ and ER~42 ′ described in the operation example 4, respectively.
- the adaptive filter 100 ′′ classifies the macroblocks based on the determination criterion, among the predetermined determination criteria, specified by the flag included in the encoded data # 2, and performs filtering on the area referred to by each classified macroblock by selecting, according to the classification, the filter coefficient included in the filter coefficient information FC in the encoded data # 2.
- the predetermined criterion may be the same as the criterion in the video encoding device, and does not limit the present invention.
- That is, the adaptive filter 100 ′′ need only classify the macroblocks according to the same criterion as that used by the moving image encoding device.
- the adaptive filter 100 ′′ refers to the flags included in the macroblock information MB1 to MBN in the encoded data # 2 and divides the inter predicted image # 26 being generated into the region ER51 ′′ and the region ER52 ′′. Further, the adaptive filter 100 ′′ sets the region of the reference image # 25d referred to for prediction of the region ER51 ′′ as the region ER~51 ′′, and sets the region of the reference image # 25d referred to for prediction of the region ER52 ′′ as the region ER~52 ′′.
- the adaptive filter 100 ′′ performs filtering on the regions ER~51 ′′ and ER~52 ′′ using the filter coefficient h51 ′ (i, j) and the filter coefficient h52 ′ (i, j), respectively.
- the regions ER~51 ′′ and ER~52 ′′ correspond to the regions ER~51 ′ and ER~52 ′ described in the operation example 5, respectively.
- the adaptive filter 100 ′′ may be configured to divide the inter predicted image # 26 being generated into the region ER51 ′′ and the region ER52 ′′ according to a condition using the average pixel value in each region of the reference image # 25d referred to for predicted image generation, and to perform filtering using the filter coefficient h51 ′ (i, j) and the filter coefficient h52 ′ (i, j). In this case, if the same condition is used on the moving image encoding device side, appropriate filtering can be performed for each region even when the encoded data # 2 does not include a flag indicating which region each macroblock belongs to.
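The flag-free variant above works because encoder and decoder apply the same deterministic condition. A minimal sketch, assuming a simple mean-luminance threshold (the threshold value, block contents, and helper name are illustrative, not from the specification):

```python
def classify_by_mean(block_pixels, threshold=128):
    # Both encoder and decoder evaluate the same condition on the reference
    # area, so no per-macroblock flag has to be transmitted.
    flat = [p for row in block_pixels for p in row]
    mean = sum(flat) / len(flat)
    return "ER51" if mean > threshold else "ER52"

region = classify_by_mean([[200, 180], [190, 210]])
```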
- the adaptive filter 100 ′′ refers to the flag included in the encoded data # 2 and divides the inter predicted image # 26 being generated into a region corresponding to the region of the reference image # 25d to which the standard filter is to be applied and a region corresponding to the region of the reference image # 25d to which the filter coefficient included in the encoded data # 2 is to be applied.
- the adaptive filter 100 ′′ performs filtering using the standard filter on the region to which the standard filter is to be applied, and performs filtering using the filter coefficient included in the filter coefficient information FC in the encoded data # 2 on the region to which the filter coefficient included in the encoded data # 2 is to be applied.
- In a case where the moving image encoding device 1 is configured to adaptively calculate the filter coefficient for the entire first predicted image # 16 ′ before division when the ratio of the area of one of the divided regions of the first predicted image # 16 ′ to the area of the first predicted image # 16 ′ is equal to or less than a predetermined ratio, the adaptive filter 100 ′′ performs filtering on the entire region referred to by the inter predicted image # 26 being generated, using the filter coefficient included in the filter coefficient information FC in the encoded data # 2, in accordance with a flag, included in the encoded data # 2, indicating whether or not the filter should be applied to that entire region.
- Alternatively, the adaptive filter 100 ′′ may perform filtering, using the filter coefficient included in the filter coefficient information FC in the encoded data # 2, on the area of the reference image # 25d corresponding to a region, among the plurality of regions on the inter predicted image # 26 being generated, whose area ratio with respect to the inter predicted image # 26 is larger than a predetermined area ratio, and may perform filtering using a standard filter on the area of the reference image # 25d corresponding to a region whose area ratio with respect to the inter predicted image # 26 is equal to or less than the predetermined area ratio.
- With this configuration, the adaptive filter 100 ′′ can generate and output the output image data # 100 ′′ even when the encoded data # 2 does not include a flag. Therefore, effective filtering can be performed while reducing the code amount of the encoded data # 2.
- the reference image index ref_idx may function as an index indicating whether adaptive filtering is performed or non-adaptive filtering is performed using a filter coefficient having an edge enhancement effect.
- the adaptive filter 100 ′′ determines whether the area ratio, with respect to the entire inter predicted image # 26 being generated, of the region that designates 1 as the reference image index in the inter predicted image # 26 being generated is less than a predetermined ratio.
- When the area ratio is not less than the predetermined ratio, the adaptive filter 100 ′′ performs filtering on each of the region on the reference picture whose reference image index is 0 in the reference image # 25d and which is referred to when the inter predicted image # 26 is generated, and the region on the reference picture whose reference image index is 1 and which is referred to when the inter predicted image # 26 is generated, using the filter coefficient corresponding to each region included in the filter coefficient information FC in the encoded data # 2.
- On the other hand, when the area ratio is less than the predetermined ratio, the adaptive filter 100 ′′ performs filtering on the region on the reference picture whose reference image index is 0 in the reference image # 25d and which is referred to when the inter predicted image # 26 is generated, using the filter coefficient included in the filter coefficient information FC in the encoded data # 2.
- In addition, for the area referred to by designating 1 as the reference image index, the adaptive filter 100 ′′ performs filtering on the region on the reference picture whose reference image index is 0 using, for example, a filter coefficient having an edge enhancement effect, and outputs the filtered region as the area to be referred to when the inter predicted image # 26 is generated.
- a predetermined filter coefficient may be used as the filter coefficient having the edge enhancement effect.
- When the encoded data # 2 includes a flag indicating whether or not the value stored as the reference image index ref_idx has its original meaning, that flag may be referred to, in which case the determination regarding the area ratio becomes unnecessary.
- the reference image list number may function as an index indicating whether adaptive filtering is performed or non-adaptive filtering is performed using a filter coefficient having an edge enhancement effect.
- the adaptive filter 100 ′′ determines whether or not the contribution weight of the reference picture whose reference image list number is 1 in the inter predicted image # 26 being generated is smaller than a predetermined weight.
- When the contribution weight is not smaller than the predetermined weight, the adaptive filter 100 ′′ performs filtering on each of the region on the reference picture whose reference image list number is 0 and which is referred to when the inter predicted image # 26 is generated, and the region on the reference picture whose reference image list number is 1 and which is referred to when the inter predicted image # 26 is generated, using the filter coefficient corresponding to each region included in the filter coefficient information FC in the encoded data # 2.
- On the other hand, when the contribution weight is smaller than the predetermined weight, the adaptive filter 100 ′′ performs filtering on the region on the reference picture whose reference image list number is 0 in the reference image # 25d and which is referred to when the inter predicted image # 26 is generated, using the filter coefficient included in the filter coefficient information FC in the encoded data # 2.
- In addition, for the area referred to by designating 1 as the reference image list number, the adaptive filter 100 ′′ performs filtering on a region on a reference picture whose reference image list number is 0 using, for example, a filter coefficient having an edge enhancement effect, and outputs the filtered region as the area to be referred to when the inter predicted image # 26 is generated.
- a predetermined filter coefficient may be used as the filter coefficient having the edge enhancement effect.
- When the encoded data # 2 includes a flag indicating whether or not the value stored as the reference image list number has its original meaning, that flag may be referred to, in which case the determination regarding the contribution weight becomes unnecessary.
- the adaptive filter 100 ′′ divides the inter predicted image # 26 being generated into the upper half area and the lower half area of the image, and performs filtering on the area of the reference image # 25d referred to by each area using, respectively, the filter coefficient h101 ′ (i, j) and the filter coefficient h102 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
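Applying a decoded coefficient set h(i, j) to a region of the reference image is a plain two-dimensional FIR filtering. The sketch below is illustrative only; it leaves the image border untouched for brevity (real codecs pad or clip instead), and the identity kernel is just a test value.

```python
def apply_filter(image, h):
    # Convolve-like application of a square filter h(i, j) to the interior
    # of the image; border pixels are copied through unchanged.
    r = len(h) // 2
    out = [row[:] for row in image]
    for y in range(r, len(image) - r):
        for x in range(r, len(image[0]) - r):
            out[y][x] = sum(h[i][j] * image[y + i - r][x + j - r]
                            for i in range(len(h)) for j in range(len(h)))
    return out

identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
filtered = apply_filter([[1, 2, 3], [4, 5, 6], [7, 8, 9]], identity)
```

The same routine would be called once with h101 ′ for the upper-half reference area and once with h102 ′ for the lower-half area.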
- In a case where the adaptive filter 100 in the moving image encoding device 1 performs filtering with a predetermined filter coefficient, instead of the adaptive filtering, on the region of the reference image # 14b referred to from the lower half region of the image, the adaptive filter 100 ′′ performs filtering on the area of the reference image # 25d referred to from the upper half area of the image using the filter coefficient h101 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and performs filtering on the area of the reference image # 25d referred to from the lower half area of the image with the predetermined filter coefficient.
- the adaptive filter 100 ′′ performs filtering on the area of the reference image # 25d referred to from a macroblock of a predetermined size or larger using the filter coefficient h111 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and performs filtering on the area of the reference image # 25d referred to from a macroblock smaller than the predetermined size using the filter coefficient h112 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
- In a case where the adaptive filter 100 in the moving image encoding device 1 uses a predetermined filter coefficient for the region of the reference image referred to from the region composed of macroblocks of the predetermined size or larger, the adaptive filter 100 ′′ performs filtering on the region of the reference image # 25d referred to from a macroblock of the predetermined size or larger using the predetermined filter coefficient, and performs filtering on the region of the reference image # 25d referred to from a macroblock smaller than the predetermined size using the filter coefficient h112 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
- the adaptive filter 100 ′′ performs filtering on the region of the reference image # 25d referred to from a partition to which a motion vector having a magnitude equal to or larger than a predetermined value is assigned, using the filter coefficient h121 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and performs filtering on the region of the reference image # 25d referred to from a partition to which a motion vector having a magnitude smaller than the predetermined value is assigned, using the filter coefficient h122 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2.
- In a case where the adaptive filter 100 in the moving image encoding device 1 performs adaptive filtering only on the reference image area referred to from the region composed of partitions to which motion vectors having a magnitude equal to or larger than the predetermined value are assigned, the adaptive filter 100 ′′ performs filtering on the reference image region referred to from such partitions using the filter coefficient h121 ′ (i, j) included in the filter coefficient information FC in the encoded data # 2, and performs filtering on the area of the reference image referred to from partitions to which motion vectors having a magnitude smaller than the predetermined value are assigned using a fixed filter coefficient. Note that a predetermined filter coefficient may be used as the fixed filter coefficient.
- the adaptive filter 100 ′′ classifies each macroblock into one of a plurality of sets according to the values of the transform coefficients in macroblocks in the vicinity of (including adjacent to) that macroblock, and performs filtering on the region of the reference image # 25d referred to from each macroblock set using the filter coefficient included in the filter coefficient information FC in the encoded data # 2.
- In a case where the moving image encoding device 1 performs fixed filtering on any one of the areas corresponding to the plurality of regions on the reference image # 14b, and the moving image decoding device 2 holds the filter coefficient for performing that fixed filtering, the adaptive filter 100 ′′ performs the fixed filtering on the region corresponding to the region subjected to the fixed filtering by the moving image encoding device 1, and performs filtering using the filter coefficient included in the filter coefficient information FC in the encoded data # 2 on the other regions.
- the adaptive filter 100 ′′ divides the inter predicted image # 26 being generated into two regions according to whether or not the reference picture, included in the reference image # 25d, to which the region referred to from each partition belongs is an intra picture, and performs filtering on the reference picture referred to by each region using the filter coefficient included in the filter coefficient information FC in the encoded data # 2.
- the adaptive filter 100 ′′ divides the inter predicted image # 26 being generated into a plurality of regions with reference to the quantization parameter QP.
- the encoding apparatus includes: a first filter acting on a reference image obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to the output image of the first filter; a second filter acting on the reference image; second prediction means for generating a second predicted image by performing motion compensation with reference to the output image of the second filter; dividing means for dividing the first predicted image and the input image into a plurality of regions; filter coefficient setting means for setting the filter coefficient of the second filter so as to minimize, for each region, the difference between the input image and the first predicted image; and encoding means for encoding residual data between the input image and the second predicted image.
- Since the apparatus includes the dividing means for dividing the first predicted image and the input image into a plurality of regions, and the filter coefficient setting means for setting the filter coefficient of the second filter so as to minimize, for each region, the difference between the input image and the first predicted image, the filter coefficient of the second filter can be set adaptively for each of the plurality of regions even when the characteristics of the first predicted image are not uniform.
- That is, the filter coefficient of the second filter is determined for each of the plurality of regions.
- It is preferable that the dividing means divide the first predicted image into a region composed of macroblocks to which the skip mode is applied and a region composed of macroblocks to which the skip mode is not applied.
- the optimum filter coefficient is different between a macroblock to which the skip mode is applied and a macroblock to which the skip mode is not applied.
- Since the first predicted image is divided into a region composed of macroblocks to which the skip mode is applied and a region composed of macroblocks to which the skip mode is not applied, the filter coefficient of the second filter can be set so as to minimize, for each region, the difference between the input image and the first predicted image. Accordingly, appropriate filtering can be performed even when the first predicted image includes both macroblocks to which the skip mode is applied and macroblocks to which it is not applied.
- It is preferable that the dividing means divide the first predicted image into a region composed of partitions that refer to an image whose reference image index is 0 and a region composed of partitions that refer to an image whose reference image index is not 0.
- In general, the optimum filter coefficient differs between regions that refer to reference pictures with different reference image indexes, particularly when the moving image contains motion.
- Since the first predicted image is divided into a region composed of partitions referring to an image whose reference image index is 0 and a region composed of partitions referring to an image whose reference image index is not 0, the filter coefficient of the second filter can be set so as to minimize, for each region, the difference between the input image and the first predicted image. Accordingly, appropriate filtering can be performed even when the first predicted image includes both partitions that refer to an image whose reference image index is 0 and partitions that refer to an image whose reference image index is not 0.
- the dividing unit includes a first dividing unit and a second dividing unit.
- the filter coefficient setting unit includes a first filter coefficient setting unit and a second filter coefficient setting unit.
- The first dividing means divides the first predicted image into a first region composed of macroblocks to which the skip mode is applied and a second region composed of macroblocks to which the skip mode is not applied; the first filter coefficient setting means determines a first preliminary filter coefficient so as to minimize, for each of the first region and the second region, the difference between the first predicted image and the input image; the second dividing means divides the first predicted image into a third region composed of partitions referring to an image whose reference image index is 0 and a fourth region composed of partitions referring to an image whose reference image index is not 0; the second filter coefficient setting means determines a second preliminary filter coefficient so as to minimize, for each of the third region and the fourth region, the difference between the first predicted image and the input image; and it is preferable to set, of the first preliminary filter coefficient and the second preliminary filter coefficient, the one with the better coding efficiency as the filter coefficient of the second filter.
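The choice between the two preliminary coefficient sets amounts to comparing their coding efficiency. A hedged sketch follows, using a rate-distortion style cost (distortion plus a lambda-weighted coefficient rate); the cost model, lambda value, and candidate numbers are assumptions for illustration, not taken from the specification.

```python
def rd_cost(distortion, coeff_bits, lam=0.85):
    # Simple Lagrangian cost: distortion plus weighted coefficient rate.
    return distortion + lam * coeff_bits

def choose_filter(cand_skip, cand_refidx):
    # Each candidate: (name, distortion, bits needed for its coefficients).
    # Keep whichever preliminary coefficient set has the lower cost.
    best = min((cand_skip, cand_refidx), key=lambda c: rd_cost(c[1], c[2]))
    return best[0]

chosen = choose_filter(("skip-mode", 1200.0, 90), ("ref-idx", 1150.0, 200))
```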
- the optimum filter coefficient is different between a macroblock to which the skip mode is applied and a macroblock to which the skip mode is not applied.
- optimum filter coefficients are different for regions that refer to reference pictures having different reference image indexes.
- In general, coding efficiency differs between adaptive filtering that refers to whether or not the skip mode is applied and adaptive filtering that refers to the reference image index. Since the coefficient with the better coding efficiency, of the first preliminary filter coefficient set adaptively according to whether or not the skip mode is applied and the second preliminary filter coefficient set adaptively according to the reference image index, can be set as the filter coefficient of the second filter, filtering with still better coding efficiency can be performed.
- It is preferable that the dividing means divide the first predicted image into a plurality of regions by assigning each unit region on the first predicted image to the region to which more of the macroblocks adjacent to that unit region belong.
- Thereby, the predicted image can be divided into a plurality of regions, and the filter coefficient of the second filter can be set so as to minimize, for each region, the difference between the input image and the first predicted image.
- It is preferable that the dividing means divide the first predicted image into a region composed of macroblocks whose average luminance is larger than a predetermined luminance and a region composed of macroblocks whose average luminance is equal to or lower than the predetermined luminance.
- the optimum filter coefficient is different between the area with higher luminance and the area with lower luminance.
- Since the first predicted image is divided into a region composed of macroblocks whose average luminance is larger than the predetermined luminance and a region composed of macroblocks whose average luminance is equal to or lower than the predetermined luminance, the filter coefficient of the second filter can be set so as to minimize, for each region, the difference between the input image and the first predicted image.
- it is preferable that the filter coefficient setting unit set the filter coefficient of the second filter so as to minimize the difference between the input image and the first predicted image for those of the plurality of regions whose area ratio to the first predicted image is larger than a predetermined area ratio, and select a predetermined filter coefficient as the filter coefficient of the second filter for those regions whose area ratio to the first predicted image is equal to or smaller than the predetermined area ratio.
- for a small region, the number of prediction-residual samples (the difference above) available for determining the filter coefficients is small, so it is difficult to improve the prediction accuracy of the predicted image for such a region. Even if the prediction accuracy could be improved, the drawbacks of the increased calculation cost of adaptive filtering and of the additional code amount required by the filter coefficients may outweigh the benefit of improved coding efficiency.
- the filter coefficient of the second filter is set adaptively only for regions whose area ratio to the first predicted image exceeds the predetermined area ratio, and a predetermined filter coefficient can be selected as the filter coefficient of the second filter for regions whose area ratio is equal to or smaller than it; there is thus the further effect that appropriate filtering can be performed without incurring an increase in calculation cost or in the code amount of the filter coefficients.
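The area-ratio rule above can be sketched as a small gating function. The 5% threshold and all names are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the area-ratio rule: adaptively trained
# coefficients are worth signalling only for regions that cover enough of
# the first predicted image; smaller regions fall back to the
# predetermined (fixed) filter coefficients.

def coeffs_for_region(region_area, picture_area,
                      adaptive_coeffs, predetermined_coeffs,
                      min_area_ratio=0.05):
    """Return the coefficient set to use for one region."""
    if region_area / picture_area > min_area_ratio:
        return adaptive_coeffs
    return predetermined_coeffs
```

The same gate is applied independently to each region produced by the dividing unit.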
- the dividing unit divides the first predicted image into a first region composed of partitions referring to an image whose reference image index is 0 and a second region composed of partitions referring to an image whose reference image index is not 0.
- it is preferable that, when the area ratio of the second region to the first predicted image is less than a predetermined area ratio, the filter coefficient setting means set the filter coefficient of the second filter so as to minimize the difference between the first predicted image and the input image for the first region, and the second prediction means perform filtering, using a predetermined filter coefficient, on the reference picture whose reference index is 0 among the reference pictures corresponding to the second region.
- the area ratio of the region composed of partitions referring to an image whose reference image index is not 0 is less than a predetermined area ratio.
- adaptive filtering is performed only on the first region, composed of partitions referring to an image whose reference image index is 0, while the second region can be filtered using a predetermined filter coefficient; there is thus the further effect that appropriate filtering can be performed while suppressing the calculation cost and the code amount of the filter coefficients.
- the reference image index ref_idx functions as an index indicating whether adaptive filtering is performed or non-adaptive filtering is performed using a filter coefficient having an edge enhancement effect.
- Another encoding device includes a first filter that acts on a plurality of reference images obtained by encoding and reconstructing an input image, first prediction means for generating a first predicted image by performing motion compensation with reference to an output image of the first filter, a second filter that acts on the plurality of reference images, and second prediction means for generating a second predicted image by performing motion compensation with reference to an output image of the second filter, and encodes residual data between the input image and the second predicted image.
- when the weight of the contribution, to the first predicted image, of the reference images belonging to a first reference image list among the plurality of reference images is equal to or greater than a predetermined weight, the second filter filters those reference images using a filter coefficient set so as to minimize the difference between the input image and the first predicted image.
- when that weight is smaller than the predetermined weight, the reference images belonging to a second reference image list different from the first reference image list are filtered using a predetermined filter coefficient.
- as the weight of the contribution of a reference image to the predicted image becomes smaller, the likelihood increases that the drawbacks of adaptive filtering, namely its calculation cost and the increase in code amount required by the filter coefficients, outweigh its benefit of improved coding efficiency.
- adaptive filtering is performed only on reference images whose contribution weight to the first predicted image is equal to or greater than a predetermined weight, while reference images whose contribution weight to the first predicted image is smaller than the predetermined weight can be filtered using a predetermined filter coefficient; there is thus the effect that appropriate filtering can be performed without incurring the disadvantage of increased calculation cost.
- the reference list number can function as a flag indicating whether adaptively obtained filter coefficients or non-adaptive filter coefficients are to be used. Therefore, with the above configuration, there is the effect that more suitable filtering can be performed without increasing the code amount through additional side information.
- A decoding apparatus according to the present invention decodes encoded data obtained by encoding residual data between an original image and a predicted image together with a filter coefficient group, and includes: filter means for generating a filtered reference image by filtering a reference image generated based on a prediction residual obtained by decoding the residual data, the filter coefficient being switchable for each unit region of the reference image; predicted image generation means for generating the predicted image by performing motion compensation on the filtered reference image; and filter coefficient selection means for selecting, for each unit region on the reference image, either a filter coefficient included in the filter coefficient group or a predetermined filter coefficient.
- since the decoding apparatus includes filter means for generating a filtered reference image by filtering the reference image generated based on the prediction residual obtained by decoding the residual data, with the filter coefficient switchable for each unit region of the reference image, predicted image generation means for generating the predicted image by performing motion compensation on the filtered reference image, and filter coefficient selection means for selecting, for each unit region on the reference image, either a filter coefficient included in the filter coefficient group or a predetermined filter coefficient, there is the effect that filtering can be performed using a more appropriate filter coefficient for each unit region on the reference image.
- it is preferable that the filter coefficient selection unit select one of the filter coefficients included in the filter coefficient group according to whether or not the unit region on the reference image belongs to a macroblock to which the skip mode is applied.
- the optimum filter coefficient is different between a macroblock to which the skip mode is applied and a macroblock to which the skip mode is not applied.
- the filter coefficient selection unit can select a filter coefficient included in the filter coefficient group according to whether the unit region on the reference image belongs to a macroblock to which the skip mode is applied; since the reference image can include both macroblocks to which the skip mode is applied and macroblocks to which it is not, there is the further effect that appropriate filtering can be performed.
- it is preferable that the filter coefficient selection unit select one of the filter coefficients included in the filter coefficient group according to whether or not the reference image index of the reference image is 0.
- optimum filter coefficients are different for regions that refer to reference pictures having different reference image indexes.
- the filter coefficient selection unit can select one of the filter coefficients included in the filter coefficient group according to whether or not the reference image index of the reference image is 0; since the reference image can include a region composed of partitions referring to an image whose reference image index is 0 and a region composed of partitions referring to an image whose reference image index is not 0, there is the further effect that appropriate filtering can be performed.
- it is preferable that the filter coefficient selection unit divide the predicted image being generated into a plurality of regions according to a predetermined criterion, and select one of the filter coefficients included in the filter coefficient group for the region on the reference image corresponding to each of the plurality of regions.
- the filter coefficient selection unit divides the prediction image being generated into a plurality of regions according to a predetermined criterion, and the reference corresponding to each of the plurality of regions. Since any one of the filter coefficients included in the filter coefficient group can be selected for the region on the image, there is an additional effect that more appropriate filtering can be performed.
- the filter coefficient selection unit selects a filter coefficient included in the filter coefficient group for the region on the reference image corresponding to a region, among the plurality of regions on the predicted image being generated, whose area ratio to that predicted image is larger than a predetermined area ratio, and selects the predetermined filter coefficient for the region on the reference image corresponding to a region whose area ratio to that predicted image is equal to or smaller than the predetermined area ratio.
- for a small region, the number of prediction-residual samples (the difference) used by the video encoding apparatus for determining the filter coefficients is small, so it is difficult to improve the prediction accuracy of the predicted image for such a region. Even if the prediction accuracy could be improved, the drawback of the increased code amount from encoding adaptively obtained filter coefficients may outweigh the benefit of improved coding efficiency.
- since the filter coefficient selection unit can select a filter coefficient included in the filter coefficient group for the region on the reference image corresponding to a region, among the plurality of regions on the predicted image being generated, whose area ratio to that predicted image is larger than a predetermined area ratio, and can select the predetermined filter coefficient for the region on the reference image corresponding to a region whose area ratio is equal to or smaller than the predetermined area ratio, there is the further effect that appropriate filtering can be performed without incurring an increase in code amount.
- the area ratio, to the predicted image, of the region on the predicted image corresponding to a reference image whose reference image index is not 0 is less than a predetermined area ratio.
- it is preferable that the filter means generate the region on the predicted image corresponding to the reference image whose reference image index is not 0 by performing filtering, using the predetermined filter coefficient, on the corresponding region of the reference image whose reference image index is 0.
- for a small region, the number of prediction-residual samples (the difference) used by the video encoding apparatus for determining the filter coefficients is small, so it is difficult to improve the prediction accuracy of the predicted image for such a region. Even if the prediction accuracy could be improved, the drawback of the increased code amount from encoding adaptively obtained filter coefficients may outweigh the benefit of improved coding efficiency.
- the reference image index ref_idx can be caused to function as a flag indicating whether adaptive filtering or non-adaptive filtering is performed.
- since a reference image index ref_idx other than 0 can be used as an index indicating that filtering is performed using a non-adaptive filter coefficient, there is the further effect that it is possible to switch between adaptive filtering and non-adaptive filtering without increasing the code amount through additional side information.
- the weight of the contribution, to the predicted image, of the region on the predicted image corresponding to a reference image whose reference image list number is not 0 is smaller than a predetermined weight.
- it is preferable that the filter means generate the region on the predicted image corresponding to the reference image whose reference image list number is not 0 by performing filtering, using a predetermined filter coefficient, on the corresponding region of the reference image whose reference image list number is 0.
- in such a case, the drawback of the increased code amount from encoding adaptively obtained filter coefficients is more likely to outweigh the benefit of improved coding efficiency.
- when the weight of the contribution, to the predicted image, of a reference image whose reference image list number is not 0 is smaller than a predetermined weight, the filter means can perform filtering using a predetermined filter coefficient, so there is the effect that appropriate filtering can be performed without incurring the disadvantage of an increased code amount.
- the reference image list number can function as a flag indicating whether adaptively obtained filter coefficients or non-adaptive filter coefficients are to be used. Therefore, with the above configuration, there is the effect that more suitable filtering can be performed without increasing the code amount through additional side information.
- instead of the above filter coefficient selection means, the decoding apparatus may include filter coefficient selection means that, referring to a flag included in the encoded data, selects one of the filter coefficients included in the filter coefficient group and the region on the reference image to which that filter coefficient is to be applied.
- filtering using any one of the filter coefficients included in the filter coefficient group can be performed on each region on the reference image with reference to the flag. Therefore, there is a further effect that appropriate filtering can be performed for each region.
- The data structure of encoded data according to the present invention is a data structure of encoded data obtained by encoding residual data between an original image and a predicted image generated from the original image, together with a filter coefficient group, wherein the filter coefficient group includes a filter coefficient selected, in a decoding device that decodes the encoded data, for each unit region on a reference image generated based on the prediction residual obtained by decoding the residual data.
- the decoding apparatus can perform appropriate filtering for each unit region on the reference image.
- the present invention can be suitably applied to a video encoding device that encodes a moving image to generate encoded data, and to a video decoding device that decodes encoded data generated using such a video encoding device.
- 1 Video encoding device (encoding device); 16 Inter predicted image generation unit; 16a Predicted image generation unit (first prediction means, second prediction means); 100 Adaptive filter (dividing means, filter coefficient setting means); 17 Motion vector estimation unit; 2 Video decoding device (decoding device); 100'' Adaptive filter (filter means, filter coefficient selection means); 26a Predicted image generation unit (predicted image generation means)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Next, the configuration and operation of the inter predicted image generation unit 16 will be described with reference to FIGS. 2 to 4.
(Step 101)
The reference image #14b stored in the buffer memory 14 is input to the adaptive filter 100. When a plurality of reference pictures are used in step 102 described later, the reference image #14b is assumed to be composed of those reference pictures.
The motion vector estimation unit 17 performs motion prediction based on the first output image data #100 and the input image #1 and generates a first motion vector #17'. In the motion prediction of this step, a plurality of reference pictures included in the reference image #14b may be used.
The predicted image generation unit 16a generates the first predicted image #16' by performing motion compensation on the first output image data #100 based on the first motion vector #17'. The processing of steps 102 and 103 is tried for each prediction mode with a different prediction method, and the optimum prediction mode is used.
The adaptive filter 100 divides the first predicted image #16' into a first region ER1 composed of macroblocks to which the skip mode is applied and a second region ER2 composed of macroblocks to which the skip mode is not applied.
The adaptive filter 100 filters the region ER~1' in the reference image #14b based on the filter coefficient h1'(i,j), and filters the region ER~2' in the reference image #14b based on the filter coefficient h2'(i,j). As described above, the regions ER~1' and ER~2' may overlap on the reference image #14b, but whether a given area was referenced as region ER~1' or as region ER~2' can be determined from whether the region referring to it is region ER1' or region ER2'.
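The region-switched filtering in this step can be sketched in a few lines. This is only an illustrative sketch: a 3x3 tap size, clamped borders, and a binary region mask are all assumptions for the example, not details fixed by the description.

```python
# Pure-Python sketch of region-switched filtering of the reference image:
# coefficients h1 apply where the mask marks region ER~1', h2 elsewhere.

def filter_switched(ref, mask, h1, h2):
    """ref: 2-D list of pixels; mask: 2-D list of 0/1 region labels;
    h1, h2: 3x3 coefficient lists. Returns the filtered image, with
    out-of-range taps clamped to the nearest border pixel."""
    rows, cols = len(ref), len(ref[0])
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            h = h1 if mask[y][x] else h2   # switch coefficients per region
            acc = 0.0
            for j in range(3):
                for i in range(3):
                    yy = min(max(y + j - 1, 0), rows - 1)
                    xx = min(max(x + i - 1, 0), cols - 1)
                    acc += h[j][i] * ref[yy][xx]
            out[y][x] = acc
    return out
```

In the description above, the mask would be derived from which predicted-image region (ER1 or ER2) refers to each area of the reference image.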
Next, the motion vector estimation unit 17 generates a second motion vector #17 based on the second output image data #100' and the input image #1. In this embodiment, the motion vector estimation unit 17 outputs, as the second motion vector #17, the same value as the already obtained first motion vector #17'. Doing so reduces the computational cost of obtaining the second motion vector #17. However, the relationship between the second motion vector #17 and the first motion vector #17' does not limit the present invention.
The predicted image generation unit 16a generates and outputs the inter predicted image #16 by performing motion compensation on the second output image data #100' based on the second motion vector #17.
In the following, a second operation example of the inter predicted image generation unit 16 will be described with reference to FIG. 5.
The adaptive filter 100 divides the first predicted image #16' into a region ER21 composed of partitions whose reference image index ref_idx is 0 and the remaining region ER22.
The adaptive filter 100 filters the region ER~21' in the reference image #14b based on the filter coefficient h21'(i,j), and filters the region ER~22' in the reference image #14b based on the filter coefficient h22'(i,j).
In the following, a third operation example of the inter predicted image generation unit 16 will be described.
In the following, a fourth operation example of the inter predicted image generation unit 16 will be described.
In this operation example, steps 104 and 105 of operation example 1 are replaced by steps 404 and 405 described below.
The adaptive filter 100 classifies each macroblock included in the first predicted image #16' into two groups according to a predetermined criterion. Here, the predetermined criterion is one that allows the video decoding device corresponding to this operation example to make the same determination as the video encoding device of this operation example, without a flag or the like for the determination being added to the encoded data. For example, the determination can be made based on whether the macroblock number is equal to or greater than a predetermined value. The adaptive filter 100 also divides the first predicted image #16' into a region ER41 composed of the macroblocks belonging to one of the two groups and a region ER42 composed of the macroblocks belonging to the other group.
The adaptive filter 100 refers to the flag #F1, filters the region ER~41' in the reference image #14b based on the filter coefficient h41'(i,j), and filters the region ER~42' in the reference image #14b based on the filter coefficient h42'(i,j).
In the following, a fifth operation example of the inter predicted image generation unit 16 will be described.
The adaptive filter 100 classifies each macroblock included in the first predicted image #16' into two groups according to whether the average pixel value of the region of the input image #1 corresponding to that macroblock is equal to or greater than a predetermined threshold. The adaptive filter 100 also divides the first predicted image #16' into a region ER51 composed of the macroblocks belonging to one of the two groups and a region ER52 composed of the macroblocks belonging to the other group.
The adaptive filter 100 filters the region ER~51' in the reference image #14b based on the filter coefficient h51'(i,j), and filters the region ER~52' in the reference image #14b based on the filter coefficient h52'(i,j).
When the error between the region of a processing-target macroblock included in the first predicted image #16' and the corresponding region of the input image #1 is equal to or greater than a predetermined threshold, the adaptive filter 100 classifies that macroblock into the first group; otherwise, it classifies the macroblock into the second group.
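The two-way classification just described can be sketched as follows. SAD is used here as an assumed error measure; the description does not fix which error metric is applied, so the metric and names are illustrative.

```python
# Sketch of the threshold-based macroblock classification: macroblocks
# whose prediction error reaches the threshold go to the first group,
# the rest to the second group.

def classify_mbs(pred_mbs, input_mbs, threshold):
    """pred_mbs / input_mbs: per-macroblock pixel lists from the first
    predicted image #16' and the input image #1. Returns the two groups
    as lists of macroblock indices."""
    first, second = [], []
    for idx, (p, o) in enumerate(zip(pred_mbs, input_mbs)):
        sad = sum(abs(a - b) for a, b in zip(p, o))  # assumed error measure
        (first if sad >= threshold else second).append(idx)
    return first, second
```

Each group then gets its own adaptively determined filter coefficients, as in the surrounding steps.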
In the following, a sixth operation example of the inter predicted image generation unit 16 will be described with reference to FIG. 7.
In the following, a seventh operation example of the inter predicted image generation unit 16 will be described with reference to FIGS. 8 to 9.
The adaptive filter 100 divides the first predicted image #16' into a region ER71 composed of partitions that refer to a reference picture whose reference image index ref_idx is 0 and a region ER72 composed of partitions that refer to a reference picture whose reference image index ref_idx is 1.
When both the area ratio of the region ER71 to the first predicted image #16' and the area ratio of the region ER72 to the first predicted image #16' are equal to or greater than a predetermined ratio, the adaptive filter 100 performs the same operation as described in operation example 2.
In this step, the adaptive filter 100 generates the second output image data #100' for the target frame using the filter coefficients h71'(i,j) and h72'(i,j). However, in this step the adaptive filter 100 changes the interpretation of the reference image index ref_idx. That is, when the reference image #14b is referenced with ref_idx = 0, filtering using the filter coefficient h71'(i,j) is performed with reference to the reference picture RP(ref_idx=0) in the reference image #14b; when the reference image #14b is referenced with ref_idx = 1, unlike the normal case, the reference picture RP(ref_idx=0) in the reference image #14b is referenced and filtering using the filter coefficient h72'(i,j) is performed.
The adaptive filter 100 divides the first predicted image #16' into a region ER81 composed of partitions that refer to a reference picture whose reference image index ref_idx in the reference picture list L0 is 0 or a reference picture whose reference image index ref_idx in the reference picture list L1 is 0, and a region ER82 composed of partitions that refer to a reference picture whose reference image index ref_idx in the reference picture list L0 is 1 or a reference picture whose reference image index ref_idx in the reference picture list L1 is 1.
When both the area ratio of the region ER81 to the first predicted image #16' and the area ratio of the region ER82 to the first predicted image #16' are equal to or greater than a predetermined area ratio, the adaptive filter 100, as in the operation described in operation example 2, performs adaptive filtering on the regions ER~81'a and ER~81'b using the filter coefficient h81'(i,j), and performs adaptive filtering on the regions ER~82'a and ER~82'b using the filter coefficient h82'(i,j). The statistical method described above can be used to determine the filter coefficients h81'(i,j) and h82'(i,j).
In this step, the second output image data #100' is generated for the target frame using the filter coefficients h81''(i,j) and h82''(i,j). However, in this step, regardless of the reference picture list, when the reference image #14b is referenced with ref_idx = 0, the adaptive filter 100 performs filtering using the filter coefficient h81''(i,j) with reference to the reference picture RP(ref_idx=0) in the reference image #14b; when the reference image #14b is referenced with a nonzero ref_idx, unlike the normal case, the reference picture RP(ref_idx=0) in the reference image #14b is referenced and filtering using the filter coefficient h82''(i,j) is performed.
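The reinterpreted reference image index in this step can be sketched as a tiny dispatch function. The function and parameter names are illustrative assumptions; the point is only that ref_idx stops selecting a picture and instead selects a coefficient set.

```python
# Sketch of the reinterpreted reference image index: regardless of the
# signalled ref_idx, the picture RP(ref_idx=0) is the one filtered, and
# ref_idx only selects which coefficient set is applied to it.

def pick_picture_and_coeffs(ref_idx, ref_pictures, h_adaptive, h_fixed):
    """ref_pictures[0] is RP(ref_idx=0); a nonzero ref_idx merely flags
    that the second coefficient set is to be used."""
    picture = ref_pictures[0]          # always RP(ref_idx=0) in this mode
    coeffs = h_adaptive if ref_idx == 0 else h_fixed
    return picture, coeffs
```

This is how the index can serve as a coefficient-selection flag without any additional side information being transmitted.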
In the following, an eighth operation example of the inter predicted image generation unit 16 will be described with reference to FIGS. 10 to 11.
The adaptive filter 100 divides the first predicted image #16' into a region ER91 composed of partitions that refer to reference pictures in the reference picture list L0 and a region ER92 composed of partitions that refer to reference pictures in the reference picture list L1. It also divides the input image into a region ER91' corresponding to the first region ER91 and a region ER92' corresponding to the second region ER92.
When both the weight of the contribution of the region ER~91' and the weight of the contribution of the region ER~92' to the first predicted image #16' are equal to or greater than a predetermined weight, the adaptive filter 100 performs adaptive filtering on the regions ER91'a and ER91'b using the filter coefficient h91'(i,j), and performs adaptive filtering on the regions ER92'a and ER92'b using the filter coefficient h92'(i,j). The statistical method described above can be used to determine the filter coefficients h91'(i,j) and h92'(i,j).
(Step 805.1)
In the following, the operation of the adaptive filter 100 will be described taking as an example the case where the weight of the contribution of the region ER92' to the whole is smaller than a predetermined weight.
In the following, a ninth operation example of the inter predicted image generation unit 16 will be described.
The adaptive filter 100 divides the first predicted image #16' into the upper-half region ER101 of the image and the lower-half region ER102 of the image.
The adaptive filter 100 performs adaptive filtering on the region ER~101' in the reference image #14b based on the filter coefficient h101'(i,j), and performs adaptive filtering on the region ER~102' in the reference image #14b based on the filter coefficient h102'(i,j).
In the following, a tenth operation example of the inter predicted image generation unit 16 will be described.
The adaptive filter 100 divides the first predicted image #16' into a region ER111 composed of blocks of a predetermined size or larger and a region ER112 composed of blocks smaller than the predetermined size.
The adaptive filter 100 performs adaptive filtering on the region ER~111' in the reference image #14b based on the filter coefficient h111'(i,j), and performs adaptive filtering on the region ER~112' in the reference image #14b based on the filter coefficient h112'(i,j).
In the following, an eleventh operation example of the inter predicted image generation unit 16 will be described.
The adaptive filter 100 divides the first predicted image #16' into a region ER121 composed of partitions to which motion vectors having a magnitude equal to or greater than a predetermined value are assigned and a region ER122 composed of partitions to which other motion vectors are assigned.
The adaptive filter 100 performs adaptive filtering on the region ER121' in the reference image #14b based on the filter coefficient h121'(i,j), and performs adaptive filtering on the region ER122' in the reference image #14b based on the filter coefficient h122'(i,j).
In the above operation examples, the case has been described where a macroblock or partition is classified into one of a plurality of regions according to information associated with that macroblock or partition (skip mode, reference index, and the like), but the present invention is not limited to this.
The adaptive filter 100 may also be configured to divide the first predicted image #16' into a plurality of regions according to information on the reference image referred to by the first predicted image #16'.
In the following, the configuration of the encoded data #2 generated using the video encoding device 1 will be described with reference to FIG. 14.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 1 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 2 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 3 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 4 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 5 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 6 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 7 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 8 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 9 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 10 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 11 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 12 is as follows.
The configuration of the encoded data #2 output by the video encoding device 1 in operation example 13 is as follows.
In the following, the video decoding device 2 according to the present invention will be described with reference to FIGS. 12 to 13.
FIG. 13 is a block diagram showing the configuration of the inter predicted image generation unit 26. As shown in FIG. 13, the inter predicted image generation unit 26 includes a predicted image generation unit 26a and an adaptive filter 100''.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 1 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 2 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 3 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 4 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 5 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 6 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 7 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 8 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 9 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 10 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 11 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 12 is as follows.
The operation of the adaptive filter 100'' when decoding the encoded data #2 described in configuration example 13 is as follows.
As described above, the encoding device according to the present invention includes: a first filter that acts on a reference image obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to an output image of the first filter; a second filter that acts on the reference image; second prediction means for generating a second predicted image by performing motion compensation with reference to an output image of the second filter; dividing means for dividing the first predicted image and the input image into a plurality of regions; and filter coefficient setting means for setting the filter coefficients of the second filter so as to minimize, for each of the regions, the difference between the input image and the first predicted image; and the encoding device encodes residual data between the input image and the second predicted image.
16 Inter predicted image generation unit
16a Predicted image generation unit (first prediction means, second prediction means)
100 Adaptive filter (dividing means, filter coefficient setting means)
17 Motion vector estimation unit
2 Video decoding device (decoding device)
100'' Adaptive filter (filter means, filter coefficient selection means)
26a Predicted image generation unit (predicted image generation means)
Claims (19)
- An encoding device comprising: a first filter that acts on a reference image obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to an output image of the first filter; a second filter that acts on the reference image; second prediction means for generating a second predicted image by performing motion compensation with reference to an output image of the second filter; dividing means for dividing the first predicted image and the input image into a plurality of regions; and filter coefficient setting means for setting a filter coefficient of the second filter so as to minimize, for each of the regions, the difference between the input image and the first predicted image, wherein the encoding device encodes residual data between the input image and the second predicted image.
- The encoding device according to claim 1, wherein the dividing means divides the first predicted image into a region composed of macroblocks to which a skip mode is applied and a region composed of macroblocks to which the skip mode is not applied.
- The encoding device according to claim 1, wherein the dividing means divides the first predicted image into a region composed of partitions referring to an image whose reference image index is 0 and a region composed of partitions referring to an image whose reference image index is not 0.
- The encoding device according to claim 1, wherein the dividing means includes first dividing means and second dividing means, and the filter coefficient setting means includes first filter coefficient setting means and second filter coefficient setting means; the first dividing means divides the first predicted image into a first region composed of macroblocks to which a skip mode is applied and a second region composed of macroblocks to which the skip mode is not applied; the first filter coefficient setting means determines first preliminary filter coefficients so as to minimize the difference between the first predicted image and the input image for each of the first region and the second region; the second dividing means divides the first predicted image into a third region composed of partitions referring to an image whose reference image index is 0 and a fourth region composed of partitions referring to an image whose reference image index is not 0; the second filter coefficient setting means determines second preliminary filter coefficients so as to minimize the difference between the first predicted image and the input image for each of the third region and the fourth region; and whichever of the first preliminary filter coefficients and the second preliminary filter coefficients has the better coding efficiency is set as the filter coefficient of the second filter.
- The encoding device according to claim 1, wherein the dividing means divides the first predicted image into a plurality of regions by assigning each unit region on the first predicted image to the region to which the greater number of the macroblocks adjacent to that macroblock belong.
- The encoding device according to claim 1, wherein the dividing means divides the first predicted image into a region composed of macroblocks whose average luminance is greater than a predetermined luminance and a region composed of macroblocks whose average luminance is equal to or lower than the predetermined luminance.
- The encoding device according to claim 1, wherein the filter coefficient setting means sets the filter coefficient of the second filter so as to minimize the difference between the input image and the first predicted image for a region, among the plurality of regions, whose area ratio to the first predicted image is larger than a predetermined area ratio, and selects a predetermined filter coefficient as the filter coefficient of the second filter for a region, among the plurality of regions, whose area ratio to the first predicted image is equal to or smaller than the predetermined area ratio.
- The encoding device according to claim 1, wherein the dividing means divides the first predicted image into a first region composed of partitions referring to an image whose reference image index is 0 and a second region composed of partitions referring to an image whose reference image index is not 0; the filter coefficient setting means, when the area ratio of the second region to the first predicted image is less than a predetermined area ratio, sets the filter coefficient of the second filter so as to minimize the difference between the first predicted image and the input image for the first region; and the second prediction means, when the area ratio of the second region to the first predicted image is less than the predetermined area ratio, performs filtering, using a predetermined filter coefficient, on the reference picture whose reference index is 0 among the reference images corresponding to the second region.
- The encoding device according to claim 1, wherein the filter coefficient setting means determines the filter coefficients so that the squared error for the filter coefficients of the second filter is minimized.
- An encoding device comprising: a first filter that acts on a plurality of reference images obtained by encoding and reconstructing an input image; first prediction means for generating a first predicted image by performing motion compensation with reference to an output image of the first filter; a second filter that acts on the plurality of reference images; and second prediction means for generating a second predicted image by performing motion compensation with reference to an output image of the second filter, the encoding device encoding residual data between the input image and the second predicted image, wherein the second filter: when the weight of the contribution, to the first predicted image, of the reference images belonging to a first reference image list among the plurality of reference images is equal to or greater than a predetermined weight, filters the reference images belonging to the first reference image list using a filter coefficient set so as to minimize the difference between the input image and the first predicted image; and when the weight of the contribution, to the first predicted image, of the reference images belonging to the first reference image list among the plurality of reference images is smaller than the predetermined weight, filters the reference images belonging to a second reference image list different from the first reference image list using a predetermined filter coefficient.
- A decoding device that decodes encoded data obtained by encoding residual data between an original image and a predicted image together with a filter coefficient group, the decoding device comprising: filter means for generating a filtered reference image by filtering a reference image generated based on a prediction residual obtained by decoding the residual data, the filter coefficient being switchable for each unit region of the reference image; predicted image generation means for generating the predicted image by performing motion compensation on the filtered reference image; and filter coefficient selection means for selecting, for each unit region on the reference image, either a filter coefficient included in the filter coefficient group or a predetermined filter coefficient.
- The decoding device according to claim 11, wherein the filter coefficient selection means selects one of the filter coefficients included in the filter coefficient group according to whether or not the unit region on the reference image belongs to a macroblock to which a skip mode is applied.
- The decoding device according to claim 11, wherein the filter coefficient selection means selects one of the filter coefficients included in the filter coefficient group according to whether or not the reference image index of the reference image is 0.
- The decoding device according to claim 11, wherein the filter coefficient selection means divides the predicted image being generated into a plurality of regions according to a predetermined criterion, and selects one of the filter coefficients included in the filter coefficient group for the region on the reference image corresponding to each of the plurality of regions.
- The decoding device according to claim 11, wherein the filter coefficient selection means selects a filter coefficient included in the filter coefficient group for the region on the reference image corresponding to a region, among the plurality of regions on the predicted image being generated, whose area ratio to the predicted image being generated is larger than a predetermined area ratio, and selects the predetermined filter coefficient for the region on the reference image corresponding to a region whose area ratio to the predicted image being generated is equal to or smaller than the predetermined area ratio.
- The decoding device according to claim 11, wherein, when the area ratio, to the predicted image, of the region on the predicted image corresponding to a reference image whose reference image index is not 0 is less than a predetermined area ratio, the filter means generates the region on the predicted image corresponding to the reference image whose reference image index is not 0 by performing filtering, using the predetermined filter coefficient, on the corresponding region of the reference image whose reference image index is 0.
- The decoding device according to claim 11, wherein, when the weight of the contribution, to the predicted image, of the region on the predicted image corresponding to a reference image whose reference image list number is not 0 is smaller than a predetermined weight, the filter means generates the region on the predicted image corresponding to the reference image whose reference image list number is not 0 by performing filtering, using the predetermined filter coefficient, on the corresponding region of the reference image whose reference image list number is 0.
- The decoding device according to claim 11, comprising, instead of the above filter coefficient selection means, filter coefficient selection means that, referring to a flag included in the encoded data, selects one of the filter coefficients included in the filter coefficient group and the region on the reference image to which that filter coefficient is to be applied.
- A data structure of encoded data obtained by encoding residual data between an original image and a predicted image generated from the original image, together with a filter coefficient group, wherein the filter coefficient group includes a filter coefficient selected, in a decoding device that decodes the encoded data, for each unit region on a reference image generated based on a prediction residual obtained by decoding the residual data.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/521,392 US20120300849A1 (en) | 2010-01-12 | 2010-12-24 | Encoder apparatus, decoder apparatus, and data structure |
JP2011549902A JPWO2011086836A1 (ja) | 2010-01-12 | 2010-12-24 | 符号化装置、復号装置、および、データ構造 |
CN2010800611289A CN102714732A (zh) | 2010-01-12 | 2010-12-24 | 编码装置、解码装置及数据结构 |
EP10843189.1A EP2525576A4 (en) | 2010-01-12 | 2010-12-24 | ENCODER APPARATUS, DECODER APPARATUS, AND DATA STRUCTURE |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-004432 | 2010-01-12 | ||
JP2010004432 | 2010-01-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011086836A1 true WO2011086836A1 (ja) | 2011-07-21 |
Family
ID=44304123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/073436 WO2011086836A1 (ja) | 2010-01-12 | 2010-12-24 | 符号化装置、復号装置、および、データ構造 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20120300849A1 (ja) |
EP (1) | EP2525576A4 (ja) |
JP (1) | JPWO2011086836A1 (ja) |
CN (1) | CN102714732A (ja) |
WO (1) | WO2011086836A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140056352A1 (en) * | 2011-04-25 | 2014-02-27 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
WO2014050554A1 (ja) * | 2012-09-28 | 2014-04-03 | シャープ株式会社 | 画像復号装置、および画像符号化装置 |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2559238B1 (en) | 2010-04-13 | 2015-06-03 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Adaptive image filtering method and apparatus |
GB2484969B (en) * | 2010-10-29 | 2013-11-20 | Canon Kk | Improved reference frame for video encoding and decoding |
GB2502047B (en) * | 2012-04-04 | 2019-06-05 | Snell Advanced Media Ltd | Video sequence processing |
CN103916673B (zh) * | 2013-01-06 | 2017-12-22 | 华为技术有限公司 | 基于双向预测的编码方法、解码方法和装置 |
EP3151559A1 (en) * | 2015-09-29 | 2017-04-05 | Thomson Licensing | Method for coding and decoding a plurality of picture blocks and corresponding devices |
CN106604030A (zh) * | 2015-10-16 | 2017-04-26 | 中兴通讯股份有限公司 | 参考图像的处理方法及装置、编码器以及解码器 |
KR20180069789A (ko) * | 2015-10-16 | 2018-06-25 | 엘지전자 주식회사 | 영상 코딩 시스템에서 예측 향상을 위한 필터링 방법 및 장치 |
JP6626319B2 (ja) * | 2015-11-18 | 2019-12-25 | キヤノン株式会社 | 符号化装置、撮像装置、符号化方法、及びプログラム |
US20190297320A1 (en) * | 2016-05-13 | 2019-09-26 | Sharp Kabushiki Kaisha | Image decoding device and image encoding device |
WO2019065537A1 (ja) * | 2017-09-28 | 2019-04-04 | シャープ株式会社 | 動き補償フィルタ装置、画像復号装置および動画像符号化装置 |
EP3799694A1 (en) | 2018-07-06 | 2021-04-07 | Huawei Technologies Co., Ltd. | A picture encoder, a picture decoder and corresponding methods |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006135376A (ja) | 2004-11-02 | 2006-05-25 | Toshiba Corp | Video encoding device, video encoding method, video decoding device, and video decoding method |
WO2007114368A1 (ja) * | 2006-03-30 | 2007-10-11 | Kabushiki Kaisha Toshiba | Image encoding apparatus and method, and image decoding apparatus and method |
JP2009194617A (ja) * | 2008-02-14 | 2009-08-27 | Sony Corp | Image processing device, image processing method, program for the image processing method, and recording medium recording the program for the image processing method |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6591398B1 (en) * | 1999-02-12 | 2003-07-08 | Sony Corporation | Multiple processing system |
US7929610B2 (en) * | 2001-03-26 | 2011-04-19 | Sharp Kabushiki Kaisha | Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding |
JP3861698B2 (ja) * | 2002-01-23 | 2006-12-20 | Sony Corporation | Image information encoding apparatus and method, image information decoding apparatus and method, and program |
US8218634B2 (en) * | 2005-01-13 | 2012-07-10 | Ntt Docomo, Inc. | Nonlinear, in-the-loop, denoising filter for quantization noise removal for hybrid video compression |
US8208564B2 (en) * | 2005-06-24 | 2012-06-26 | Ntt Docomo, Inc. | Method and apparatus for video encoding and decoding using adaptive interpolation |
AU2008352118A1 (en) * | 2008-03-07 | 2009-09-11 | Kabushiki Kaisha Toshiba | Dynamic image encoding/decoding method and device |
AU2009220567A1 (en) * | 2008-03-07 | 2009-09-11 | Kabushiki Kaisha Toshiba | Dynamic image encoding/decoding device |
US8811484B2 (en) * | 2008-07-07 | 2014-08-19 | Qualcomm Incorporated | Video encoding by filter selection |
US8750378B2 (en) * | 2008-09-23 | 2014-06-10 | Qualcomm Incorporated | Offset calculation in switched interpolation filters |
US10178406B2 (en) * | 2009-11-06 | 2019-01-08 | Qualcomm Incorporated | Control of video encoding based on one or more video capture parameters |
EP2515541A4 (en) * | 2009-12-18 | 2015-08-26 | Sharp Kk | IMAGE FILTER, CODING DEVICE, DECODING DEVICE AND DATA STRUCTURE |
US20120243611A1 (en) * | 2009-12-22 | 2012-09-27 | Sony Corporation | Image processing apparatus and method as well as program |
2010
- 2010-12-24 EP EP10843189.1A patent/EP2525576A4/en not_active Withdrawn
- 2010-12-24 WO PCT/JP2010/073436 patent/WO2011086836A1/ja active Application Filing
- 2010-12-24 US US13/521,392 patent/US20120300849A1/en not_active Abandoned
- 2010-12-24 JP JP2011549902A patent/JPWO2011086836A1/ja active Pending
- 2010-12-24 CN CN2010800611289A patent/CN102714732A/zh active Pending
Non-Patent Citations (2)
Title |
---|
SAKAE OKUBO ET AL., KABUSHIKI KAISHA IMPRESS R&D, 1 January 2009 (2009-01-01), pages 330, XP008169348 * |
See also references of EP2525576A4 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140056352A1 (en) * | 2011-04-25 | 2014-02-27 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
US10645415B2 (en) | 2011-04-25 | 2020-05-05 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
US11006146B2 (en) | 2011-04-25 | 2021-05-11 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
US11910010B2 (en) | 2011-04-25 | 2024-02-20 | Lg Electronics Inc. | Intra-prediction method, and encoder and decoder using same |
WO2014050554A1 (ja) * | 2012-09-28 | 2014-04-03 | Sharp Kabushiki Kaisha | Image decoding device and image encoding device |
Also Published As
Publication number | Publication date |
---|---|
EP2525576A4 (en) | 2015-04-15 |
EP2525576A1 (en) | 2012-11-21 |
CN102714732A (zh) | 2012-10-03 |
US20120300849A1 (en) | 2012-11-29 |
JPWO2011086836A1 (ja) | 2013-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011086836A1 (ja) | Encoding device, decoding device, and data structure | |
US10779001B2 (en) | Image encoding method and image decoding method | |
JP6335365B2 (ja) | Decoding device | |
JP4913245B2 (ja) | Encoding and decoding of interlaced video | |
EP3416386B1 (en) | Hash-based encoder decisions for video coding | |
JP5154635B2 (ja) | Method and system for picture-level adaptation in extended spatial scalability | |
JP2013537771A (ja) | Intra-prediction decoding method | |
JP2014171097A (ja) | Encoding device, encoding method, decoding device, and decoding method | |
JP2011223303A (ja) | Image encoding device, image encoding method, image decoding device, and image decoding method | |
JP2022502899A (ja) | Video signal encoding/decoding method and apparatus therefor | |
WO2011125445A1 (ja) | Image filter device, encoding device, and decoding device | |
JP2023528609A (ja) | Encoding/decoding method, apparatus, and device therefor | |
JPWO2013125171A1 (ja) | Intra-prediction mode determination device, intra-prediction mode determination method, and intra-prediction mode determination program | |
JP4406887B2 (ja) | Video encoding device and video encoding method | |
CN116636210A (zh) | Interpolation filter for adaptive motion vector difference resolution | |
WO2011142221A1 (ja) | Encoding device and decoding device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080061128.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10843189 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011549902 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13521392 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1201003441 Country of ref document: TH |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010843189 Country of ref document: EP |