WO2013031071A1 - Moving image decoding apparatus, moving image decoding method and integrated circuit - Google Patents

Moving image decoding apparatus, moving image decoding method and integrated circuit

Info

Publication number
WO2013031071A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
block
motion
motion compensation
blocks
Prior art date
Application number
PCT/JP2012/004154
Other languages
English (en)
Japanese (ja)
Inventor
一憲 岡嶋
Original Assignee
Panasonic Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Corporation
Publication of WO2013031071A1
Priority to US14/191,253 (published as US20140177726A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present invention relates to a moving image decoding apparatus and a moving image decoding method for decoding a moving image encoded stream encoded using motion prediction.
  • H.264 has been standardized jointly by the ITU-T and ISO as an encoding method that achieves a higher compression rate and can be applied to a wider range of uses, such as mobile information terminals typified by smartphones and network distribution.
  • the amount of information is compressed by reducing redundancy in the time direction and the spatial direction.
  • Motion vectors are calculated by detecting motion in units of blocks with reference to a picture preceding or following the encoding target picture. A predicted image is then generated from the block indicated by each motion vector, and the difference value between the obtained predicted image and the encoding target image is encoded together with the motion vector to generate a moving image encoded stream.
  • Motion compensation processing is performed in units of macroblocks, each consisting of 16 pixels × 16 pixels. That is, one or more motion vectors are provided for each macroblock, so the decoding apparatus reads the image area indicated by these motion vectors from the reference image data (for example, in a frame memory) and restores the image by adding the read data to the difference value from the original image carried by the moving image encoded stream.
  • MV: motion vector
  • PMV: prediction MV (predicted motion vector)
  • MVD: motion vector difference (MV difference value)
  • MV = PMV + MVD (Formula 1)
  • In H.264, which adjacent block's MV is used, and how the PMV is derived from it, is determined according to the block size of the motion compensation target (also referred to as the macroblock type). Whether the MVD is encoded into the moving image encoded stream is likewise determined according to the macroblock type (for example, in H.264 the MVD is encoded when the macroblock is an inter macroblock and is not a skip macroblock).
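  • As a minimal sketch of Formula 1 on the decoder side (not part of the patent text; the type and function names are illustrative), the reconstruction can be written as follows:

```c
#include <stdint.h>

typedef struct { int16_t x, y; } MV;

/* Formula 1: MV = PMV + MVD.  pmv is the prediction MV derived from
 * already-decoded neighbouring blocks (the derivation rule depends on the
 * standard); mvd is the motion vector difference decoded from the stream. */
static MV reconstruct_mv(MV pmv, MV mvd)
{
    MV mv = { (int16_t)(pmv.x + mvd.x), (int16_t)(pmv.y + mvd.y) };
    return mv;
}
```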
  • In VP8 (see Non-Patent Document 2), a flag indicates which adjacent block's MV is used, whether the MV value is 0, or whether an MVD is included in the moving image encoded stream.
  • In VP8, the MVD is encoded into the moving image encoded stream according to the value of this flag relating to the PMV.
  • the MV may be directly encoded in the moving image encoded stream.
  • how to include MV in a moving image encoded stream is defined in each moving image encoding standard.
  • a decoding circuit that executes the above-described decoding processing is usually configured to temporarily store decoded image data in an external memory. For this reason, in order to decode an image using motion prediction, it is necessary to read the reference image data from an external memory.
  • A region to be subjected to motion prediction is called a block, or a motion compensation target block.
  • For example, a macroblock can be further divided into 16 blocks each having a size of 4 pixels × 4 pixels.
  • The reference image data indicated by the motion vector is read from the external memory for each area corresponding to a divided block.
  • As the number of divisions increases, the number of accesses to the memory increases.
  • The motion vector can usually indicate not only an integer position of the reference image but also a decimal position (for example, a 1/2 position or a 1/4 position).
  • FIG. 9 is a diagram showing the reference image data used by a 6-tap filter when the block size of the motion compensation processing (hereinafter referred to as the partition size) is 4 horizontal pixels × 4 vertical pixels (hereinafter referred to as 4 × 4).
  • In the figure, a cross indicates a predicted pixel calculated by the motion compensation processing, and a circle indicates a pixel necessary for the motion compensation processing using the 6-tap filter. That is, reference image data of 9 pixels × 9 pixels (the broken line in FIG. 10A) is required for motion compensation of a block having a partition size of 4 × 4 (the solid line in FIG. 10A). Similarly, in the case of a block having a partition size of 16 × 16 (the solid line in FIG. 10B), reference image data of 21 pixels × 21 pixels (the broken line in FIG. 10B) is required. Furthermore, in the case of a block having a partition size of 8 × 8, reference image data of 13 pixels × 13 pixels is necessary (not shown).
  • That is, when the partition size is 16 × 16, the reference image data necessary for generating the predicted image of the luminance component is 21 pixels × 21 pixels, as shown in FIG. 10B.
  • The amount of reference image data read may increase depending on the bus width of the bus connected to the external memory, the amount of data accessed at one time, the AC characteristics of the external memory (for example, the CAS latency and wait cycles of SDRAM), and so on.
  • When the partition size is 4 × 4, it is necessary to read an area of 9 pixels × 9 pixels, as shown in FIG. 10A.
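  • The relation between the partition size and the required reference area can be summarised as follows: with a 6-tap filter, a block of w × h pixels needs (w + 5) × (h + 5) reference pixels, which reproduces the 9 × 9, 13 × 13 and 21 × 21 figures given above for 4 × 4, 8 × 8 and 16 × 16 blocks. The sketch below is illustrative and not taken from the patent.

```c
/* Reference pixels needed for fractional-pel motion compensation of a
 * w x h partition with a 6-tap filter: (w + 5) x (h + 5).
 * ref_pixels_6tap(4, 4) = 81 (9x9), ref_pixels_6tap(8, 8) = 169 (13x13),
 * ref_pixels_6tap(16, 16) = 441 (21x21). */
static unsigned ref_pixels_6tap(unsigned w, unsigned h)
{
    return (w + 5) * (h + 5);
}
```

For one macroblock split entirely into 4 × 4 blocks this amounts to 16 × 81 = 1296 reference pixels, against 441 pixels for a single 16 × 16 block.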
  • Conventionally, the reference image data read amount is reduced by collecting the reference image data necessary for one macroblock into one two-dimensional data area.
  • Such a moving picture decoding circuit is known (see, for example, Patent Document 1 and Patent Document 2).
  • FIG. 11A and 11B show examples of reference image data necessary for the motion compensation processing according to the conventional technique.
  • FIG. 11A is a diagram showing four blocks of 8 ⁇ 8 partitions included in one macroblock and their motion vectors.
  • FIG. 11B is a diagram showing reference image data required by each block in FIG. 11A, and a case where the blocks are combined into one image region is indicated by a broken line 1100.
  • By collecting a plurality of pieces of reference image data into one image area, it is not necessary to repeatedly acquire overlapping image data from the external memory (frame memory), so the memory bandwidth can be reduced.
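  • A hedged sketch of this prior-art idea (the structure and function names are assumptions for illustration): the reference areas required by the blocks of one macroblock are combined into one rectangular region, as suggested by the broken line 1100 in FIG. 11B, so that overlapping pixels are transferred only once.

```c
typedef struct { int x0, y0, x1, y1; } Rect;  /* inclusive pixel bounds */

/* Union bounding box of the reference areas needed by n blocks; one
 * rectangular transfer from the frame memory then covers all of them. */
static Rect merge_ref_areas(const Rect *areas, int n)
{
    Rect r = areas[0];
    for (int i = 1; i < n; i++) {
        if (areas[i].x0 < r.x0) r.x0 = areas[i].x0;
        if (areas[i].y0 < r.y0) r.y0 = areas[i].y0;
        if (areas[i].x1 > r.x1) r.x1 = areas[i].x1;
        if (areas[i].y1 > r.y1) r.y1 = areas[i].y1;
    }
    return r;
}
```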
  • FIGS. 12A and 12B are diagrams illustrating examples of pixels, intermediate pixels, and output pixels used for motion compensation processing.
  • To obtain a pixel at a decimal pixel position (for example, a 1/2 pixel position in the X direction and a 1/2 pixel position in the Y direction), a horizontal filter that obtains the decimal pixel position in the horizontal direction by applying a 6-tap filter in the horizontal direction is combined with a vertical filter that obtains the decimal pixel position in the vertical direction by applying a 6-tap filter in the vertical direction to the pixels obtained by the horizontal filter.
  • FIG. 12A is a diagram showing the pixels necessary for motion compensation (circles), the output pixels after horizontal filtering (the intermediate marks), and the output pixels after motion compensation (crosses) when the partition size is 4 × 4.
  • FIG. 12B is a diagram showing the pixels necessary for motion compensation, the output pixels after horizontal filtering, and the output pixels after motion compensation when the partition size is 4 × 8.
  • In these examples, the horizontal filter and the vertical filter are performed in this order.
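  • For reference, one step of such a separable 6-tap half-pel filter can be sketched as follows. The kernel (1, -5, 20, 20, -5, 1) with rounding and a shift by 5 is the H.264 luma half-pel filter; VP8 uses different, sub-pel-dependent 6-tap kernels, so this is an illustrative assumption rather than the exact filter of the patent.

```c
/* One 6-tap half-pel output sample; p points at the sample immediately
 * before the half-pel position along the filtering direction.  The same
 * operation is applied first horizontally and then vertically (higher
 * intermediate precision and final clipping are omitted for brevity). */
static int sixtap_halfpel(const int *p)
{
    return (p[-2] - 5 * p[-1] + 20 * p[0] + 20 * p[1] - 5 * p[2] + p[3] + 16) >> 5;
}
```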
  • For example, when the partition size is 4 × 4, the number of executions (processing amount) of the 6-tap filter necessary for generating the predicted image (256 bytes) of the luminance component of one macroblock is, for the horizontal filter, 4 [pixels] × 9 [pixels] = 36 times per block (the intermediate marks in FIG. 12A) and, for the vertical filter, 4 [pixels] × 4 [pixels] = 16 times per block (the crosses in FIG. 12A), that is, (36 + 16) × 16 [number of partitions] = 832 [times].
  • An object of the present invention is to provide a moving image decoding apparatus and method capable of realizing motion compensation processing at high speed and with low power consumption.
  • The moving picture decoding apparatus according to one aspect of the present invention decodes a moving picture encoded stream that has been encoded using block-based motion prediction. Specifically, the moving picture decoding apparatus includes: decoding means for decoding the moving picture encoded stream in order to extract, for each block, a flag relating to the motion vector that indicates one of a prediction direction indicating the same motion vector as an adjacent block, that the motion vector is 0, and that motion vector difference information is encoded in the moving picture encoded stream; motion vector comparison means for determining, from the flags relating to the motion vectors of a plurality of blocks extracted by the decoding means, whether or not adjacent blocks have the same motion vector; block integration means for integrating a plurality of blocks determined by the motion vector comparison means to have the same motion vector into one motion compensation target block; motion vector generation means for generating a motion vector based on the flag relating to the motion vector; reference image acquisition means for acquiring, based on the generated motion vector, a reference image for each motion compensation target block from reference image data decoded in the past and stored in a memory; motion compensation means for performing motion compensation using the reference image acquired by the reference image acquisition means and generating a predicted image for each motion compensation target block; and restoring means for restoring the image using the predicted image generated by the motion compensation means.
  • With this configuration, when a plurality of adjacent blocks have the same motion vector, the block sizes are integrated and reference image data can be acquired for each integrated motion compensation target block.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the block integration means may treat a block determined by the motion vector comparison means not to have the same motion vector as any adjacent block as an individual motion compensation target block.
  • With this configuration, a reference image can be acquired for each individual motion compensation target block.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the prediction direction of the flag relating to the motion vector may indicate either that the motion vector is the same as that of the block adjacent above or that it is the same as that of the block adjacent to the left.
  • With this configuration, it is possible to easily determine whether or not adjacent blocks have the same motion vector, and when they have the same motion vector, the block sizes are integrated and a reference image can be acquired for each block after integration.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the size of the blocks compared by the motion vector comparison means, and of the blocks before being integrated by the block integration means, may be 4 pixels × 4 pixels or 8 pixels × 8 pixels.
  • With this configuration, when it is determined from the flags relating to the prediction directions of the motion vectors of the 4 pixel × 4 pixel or 8 pixel × 8 pixel blocks that a plurality of adjacent blocks have the same motion vector, the adjacent blocks having the same motion vector can be integrated and a reference image can be acquired for each integrated motion compensation target block.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the blocks compared by the motion vector comparison means may be blocks within one macroblock.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the motion vector comparison means may determine, for each of the plurality of blocks, whether its motion vector is the same as that of the block adjacent above, the block adjacent to the left, or the block adjacent at the upper left.
  • With this configuration, when adjacent blocks have the same motion vector, the block sizes are integrated and a reference image can be acquired for each block after integration.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the motion vector comparison means may determine, for each of two adjacent blocks to be compared, whether the block has the same motion vector as a motion compensation target block included in a macroblock adjacent to the two blocks, and when they have the same motion vector, the block integration means may integrate the two blocks into one motion compensation target block.
  • With this configuration, when a plurality of adjacent blocks refer to the motion vector of the adjacent macroblock and thus have the same motion vector, the block sizes are integrated and a reference image can be acquired for each block after integration.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the size of the motion compensation target block included in the adjacent macroblock may be 16 pixels × 16 pixels or 8 pixels × 8 pixels.
  • With this configuration, when the motion vector comparison means refers to the adjacent macroblock and a block has the same motion vector as the motion compensation target block of the referenced adjacent macroblock, the blocks are integrated and a reference image can be acquired for each integrated block.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • Further, the moving image encoded stream may be one encoded by VP8.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • The moving picture decoding apparatus according to another aspect of the present invention decodes a moving picture encoded stream that has been encoded using block-based motion prediction.
  • Specifically, the moving picture decoding apparatus includes: a decoding unit that decodes a motion vector difference value from the moving picture encoded stream; a prediction vector calculation unit that calculates a prediction vector that is a predicted value of the motion vector; a motion vector generation unit that generates a motion vector by adding the prediction vector calculated by the prediction vector calculation unit and the motion vector difference value decoded by the decoding unit; a motion vector comparison unit that compares the motion vector generated by the motion vector generation unit with the motion vectors of a plurality of adjacent blocks to determine whether or not they are the same; a block integration unit that integrates the plurality of blocks determined by the motion vector comparison unit to have the same motion vector into one motion compensation target block; a reference image acquisition unit that acquires, based on the motion vector generated by the motion vector generation unit, a reference image for each motion compensation target block from reference image data decoded in the past and stored in a memory; a motion compensation unit that performs motion compensation using the reference image acquired by the reference image acquisition unit and generates a predicted image for each motion compensation target block; and a restoring unit that restores the image using the predicted image generated by the motion compensation unit.
  • With this configuration, when a plurality of adjacent blocks have the same motion vector, the block sizes are integrated and reference image data can be acquired for each integrated motion compensation target block.
  • As a result, the number of pixels of reference image data read from the frame memory is reduced and the block size used for the motion compensation processing is enlarged by integration, so that the motion compensation processing can be performed at high speed and with low power consumption.
  • The moving picture decoding method according to another aspect of the present invention is a method for decoding a moving picture encoded stream that has been encoded using block-based motion prediction.
  • Specifically, the moving picture decoding method includes: a decoding step of decoding the moving picture encoded stream in order to extract, for each block, a flag relating to the motion vector that indicates one of a prediction direction indicating the same motion vector as an adjacent block, that the motion vector is 0, and that motion vector difference information is encoded in the moving picture encoded stream; a motion vector comparison step of determining, from the flags relating to the motion vectors of a plurality of blocks extracted in the decoding step, whether or not adjacent blocks have the same motion vector; a block integration step of integrating a plurality of blocks determined in the motion vector comparison step to have the same motion vector into one motion compensation target block; a motion vector generation step of generating a motion vector based on the flag relating to the motion vector; a reference image acquisition step of acquiring, based on the motion vector generated in the motion vector generation step, a reference image for each motion compensation target block from reference image data decoded in the past and stored in a memory; a motion compensation step of performing motion compensation using the reference image acquired in the reference image acquisition step and generating a predicted image for each motion compensation target block; and a restoring step of restoring the image using the predicted image generated in the motion compensation step.
  • With this configuration, when a plurality of adjacent blocks have the same motion vector, the block sizes are integrated and the reference image data can be acquired for each block after integration.
  • The integrated circuit according to another aspect of the present invention decodes a moving picture encoded stream that has been encoded using block-based motion prediction.
  • Specifically, the integrated circuit includes: a decoding unit that decodes the moving picture encoded stream in order to extract, for each block, a flag relating to the motion vector that indicates one of a prediction direction indicating the same motion vector as an adjacent block, that the motion vector is 0, and that motion vector difference information is encoded in the moving picture encoded stream; a motion vector comparison unit that determines, from the flags relating to the motion vectors of a plurality of blocks extracted by the decoding unit, whether or not adjacent blocks have the same motion vector; a block integration unit that integrates a plurality of blocks determined by the motion vector comparison unit to have the same motion vector into one motion compensation target block; a motion vector generation unit that generates a motion vector based on the flag relating to the motion vector; a reference image acquisition unit that acquires, based on the motion vector generated by the motion vector generation unit, a reference image for each motion compensation target block from reference image data decoded in the past and stored in a memory; a motion compensation unit that performs motion compensation using the reference image acquired by the reference image acquisition unit and generates a predicted image for each motion compensation target block; and a restoration unit that restores the image using the predicted image generated by the motion compensation unit.
  • With this configuration, when a plurality of adjacent blocks have the same motion vector, the block sizes are integrated and the reference image data can be acquired for each block after integration.
  • As described above, according to the present invention, by integrating blocks that are the processing units of the motion compensation processing, the memory bandwidth can be reduced and the amount of processing during the motion compensation processing can be reduced.
  • As a result, the decoding performance can be improved, the decoding processing can be speeded up, and the power consumption can be reduced by reducing the processing amount.
  • FIG. 1 is a block diagram showing a configuration of a video decoding apparatus according to Embodiment 1.
  • FIG. 2A is a diagram illustrating a flag related to a motion vector, and is a diagram illustrating that the motion vector is the same as a block adjacent on the left side.
  • FIG. 2B is a diagram illustrating a flag related to a motion vector, and is a diagram illustrating that the motion vector is the same as that of an adjacent block on the upper side.
  • FIG. 2C is a diagram for explaining a flag relating to a motion vector, and showing that the motion vector is 0.
  • FIG. 2D is a diagram for explaining a flag related to a motion vector, and illustrates that motion vector data is included in a moving image encoded stream.
  • FIG. 3A is a diagram illustrating an example of a flag related to a motion vector of each block included in a macroblock.
  • FIG. 3B is a diagram illustrating a result of integrating the blocks in FIG. 3A.
  • FIG. 3C is a diagram illustrating an example of a flag related to the motion vector of each block included in the macroblock.
  • FIG. 3D is a diagram illustrating a result of integrating the blocks illustrated in FIG. 3C.
  • FIG. 3E is a diagram illustrating a result of further integrating the blocks in FIG. 3D.
  • FIG. 3F is a diagram illustrating a result of further integrating the blocks in FIG. 3E.
  • FIG. 4 is a flowchart of predictive image generation according to the first embodiment.
  • FIG. 5A is a diagram illustrating the positional relationship between a plurality of blocks included in an 8 × 8 partition.
  • FIG. 5B is a list of cases in which the blocks of FIG. 5A can be integrated.
  • FIG. 5C is a list of cases in which the blocks of FIG. 5A can be integrated.
  • FIG. 6A is a diagram illustrating an example of a flag related to the motion vector of each block included in the macroblock.
  • FIG. 6B is a diagram illustrating a result of integrating the blocks in FIG. 6A.
  • FIG. 6C is a diagram illustrating a result of further integrating the blocks in FIG. 6B.
  • FIG. 7A is a diagram illustrating the positional relationship between a plurality of blocks included in an 8 × 8 partition and other adjacent macroblocks.
  • FIG. 7B is a list of cases in which the blocks of FIG. 7A can be integrated.
  • FIG. 7C is a list of cases in which the blocks of FIG. 7A can be integrated.
  • FIG. 8 is a flowchart of predicted image generation according to the second embodiment.
  • FIG. 9 is a diagram for explaining motion compensation processing using a 6-tap filter.
  • FIG. 10A is a diagram illustrating an example of reference image data necessary for motion compensation of a block having a partition size of 4 × 4.
  • FIG. 10B is a diagram illustrating an example of reference image data necessary for motion compensation of a block having a partition size of 16 × 16.
  • FIG. 11A is a diagram showing four blocks of 8 × 8 partitions and their motion vectors.
  • FIG. 11B is a diagram showing reference image data required by each block in FIG. 11A.
  • FIG. 12A is a diagram illustrating pixels, intermediate pixels, and output pixels necessary for motion compensation of a 4 × 4 partition block.
  • FIG. 12B is a diagram illustrating pixels, intermediate pixels, and output pixels necessary for motion compensation of a 4 × 8 partition block.
  • FIG. 1 is a block diagram showing a configuration of a moving picture decoding apparatus 100 according to Embodiment 1 of the present invention.
  • As shown in FIG. 1, the moving picture decoding apparatus 100 includes a decoding unit 110, a motion vector comparison unit 120, a block integration unit 130, a motion vector generation unit 140, a frame memory transfer control unit 150, a buffer 160, a local reference memory 170, a motion compensation unit 180, and an adder 190.
  • The decoding unit 110 decodes the moving image encoded stream input to it, and outputs a flag relating to the motion vector, motion vector data, and the difference value between the encoding target image and the predicted image (hereinafter, the “residual image”). The decoding unit 110 outputs the flag relating to the motion vector and the motion vector data to the motion vector comparison unit 120 and the motion vector generation unit 140, and outputs the residual image to the adder 190.
  • The flag relating to the motion vector indicates one of the following: a prediction direction indicating that the block has the same motion vector as an adjacent block, that the motion vector is 0, or that motion vector data is included in the moving image encoded stream. This flag is stored for each block in the moving image encoded stream.
  • the motion vector data is the difference value of the motion vector or the motion vector itself.
  • This motion vector data needs to be stored in the moving image encoded stream only when the flag relating to the motion vector indicates that the moving image encoded stream includes motion vector data.
  • When the motion vector generation unit 140 receives a flag relating to a motion vector from the decoding unit 110, it generates the motion vector of the decoding target block using the motion vector of the adjacent block. When the motion vector generation unit 140 receives a motion vector difference value, it generates the motion vector of the decoding target block by adding the difference value to the motion vector of the adjacent block, or to the prediction MV calculated from the motion vectors of the adjacent blocks.
  • The motion vector comparison unit 120 compares the flag relating to the motion vector received from the decoding unit 110 or the motion vector generation unit 140 with the flags relating to the motion vectors of a plurality of adjacent blocks, determines whether or not the motion vectors of the adjacent blocks are the same, and outputs the determination result to the block integration unit 130.
  • Upon receiving the comparison result, when a plurality of adjacent blocks have the same motion vector, the block integration unit 130 determines that those adjacent blocks are to be integrated into one motion compensation target block whose block size (hereinafter referred to as the partition size) is the motion compensation processing unit, and outputs the result to the motion vector generation unit 140. On the other hand, the block integration unit 130 determines that a block whose motion vector differs from those of all of the adjacent blocks is to be a motion compensation target block on its own, and outputs the result to the motion vector generation unit 140.
  • In the present embodiment, the flags relating to the motion vector are described as being classified into the four types shown in FIGS. 2A to 2D (Left, Above, Zero, New).
  • Although these flags relating to the motion vector are described here as an example, other cases are possible. For example, a flag may indicate that the motion vector is the same as that of the block adjacent at the upper left or upper right, or the same as that of a block two or more blocks away.
  • In a moving picture coding standard such as H.264, the motion vector prediction direction is obtained from the moving picture encoded stream; the contents are therefore not limited to those shown in FIGS. 2A to 2D.
  • the block described as “Left” shown in FIG. 2A indicates that the motion vector (hereinafter referred to as MV) is the same as that of the adjacent block on the left side.
  • the block labeled “Above” shown in FIG. 2B indicates that the MV is the same as the adjacent block on the upper side.
  • the block labeled “Zero” shown in FIG. 2C indicates that MV is zero. However, when MV becomes 0, there is naturally a possibility of having the same MV as the adjacent block.
  • The block described as “New” shown in FIG. 2D indicates that the moving image encoded stream includes a new MV, or a difference value from the predicted MV calculated from the MVs of the decoded peripheral blocks. That is, the MV of the encoding target block differs from the MVs of all of the adjacent blocks. A sketch of how these four flag types map to a motion vector is given below.
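```c
#include <stdint.h>

typedef struct { int16_t x, y; } MV;
typedef enum { MV_LEFT, MV_ABOVE, MV_ZERO, MV_NEW } MvFlag;

/* Illustrative sketch (names are assumptions, not the patent's API):
 * derive the MV of the current block from its flag.  left/above are the
 * MVs of the already-decoded neighbouring blocks; for MV_NEW the decoded
 * difference mvd is added to the predicted MV pmv (Formula 1). */
static MV derive_mv(MvFlag flag, MV left, MV above, MV pmv, MV mvd)
{
    switch (flag) {
    case MV_LEFT:  return left;
    case MV_ABOVE: return above;
    case MV_ZERO:  return (MV){ 0, 0 };
    default:       return (MV){ (int16_t)(pmv.x + mvd.x),
                                (int16_t)(pmv.y + mvd.y) };  /* MV_NEW */
    }
}
```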
  • the derivation method of the prediction MV derived from the MVs of the peripheral blocks is determined by the moving picture coding standard, and will not be described in detail here (see, for example, H.264 and VP8).
  • When the partition size of an adjacent block differs from the partition size of the current block (for example, when the partition size of the current block is 8 pixels × 8 pixels and the partition size of the adjacent block is 4 pixels × 4 pixels), the adjacent block may be regarded as being composed of two blocks each having a partition size of 4 pixels × 4 pixels.
  • It may also be determined, or defined by the moving picture coding standard, that such an MV is replaced with a certain value (for example, 0) or is not used as a candidate prediction MV.
  • The motion vector generation unit 140 generates the motion vector of the decoding target block using the motion vector of the adjacent block specified by the flag relating to the motion vector received from the decoding unit 110.
  • When a difference value from the predicted MV is required in addition to the flag relating to the motion vector (the case of FIG. 2D), the decoded motion vector difference value is received from the decoding unit 110, and the motion vector is generated by adding the predicted MV and the difference value.
  • For each motion compensation target block of the partition size resulting from the integration in the block integration unit 130, the frame memory transfer control unit 150 transfers, from the buffer 160 to the local reference memory 170, data including the reference image area indicated by the generated motion vector and the pixels necessary for motion compensation (the pixels needed when the predicted image is generated).
  • the motion compensation unit 180 obtains a predicted image for each motion compensation target block from the data stored in the local reference memory 170, and outputs the predicted image to the adder 190.
  • the adder 190 adds the residual image output from the decoding unit 110 and the predicted image acquired from the motion compensation unit 180, and outputs the result to the buffer 160. Thereafter, the decoded image data is output from the buffer 160 to a display unit (not shown).
  • The residual image output from the decoding unit 110 is obtained by inversely quantizing the frequency component coefficient data (for example, DCT coefficients) decoded by the decoding unit 110 and further converting it into pixel data by an inverse transform (for example, an IDCT: Inverse Discrete Cosine Transform).
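  • A minimal sketch of the addition performed by the adder 190 (the clipping to the 8-bit sample range is an assumption for 8-bit video and is not stated in the text):

```c
#include <stdint.h>

/* Restored pixel = decoded residual value + predicted pixel from motion
 * compensation, clipped to the 8-bit sample range. */
static uint8_t restore_pixel(int residual, uint8_t predicted)
{
    int v = residual + predicted;
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return (uint8_t)v;
}
```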
  • a predicted image can be calculated by using intra prediction.
  • the buffer 160 may be configured with an external memory or may be configured with a built-in memory.
  • The motion vector comparison unit 120 compares motion vectors using the flag relating to the motion vector output from the decoding unit 110 (the details are as described above with reference to FIGS. 2A to 2D) and the flags relating to the motion vectors of a plurality of adjacent blocks. Note that the flags relating to the motion vectors of the adjacent blocks may be held in the motion vector comparison unit 120 or in the motion vector generation unit 140.
  • In this way, the motion vector comparison unit 120 compares motion vectors based on the flags relating to the motion vectors of a plurality of adjacent blocks. When adjacent blocks have the same motion vector, the plurality of blocks having the same motion vector are integrated as a motion compensation target block having a new partition size for performing the motion compensation processing.
  • For example, when the partition size of the current block is 4 × 4 and its flag relating to the motion vector is “Above”, the block adjacent on the upper side (for example, also of partition size 4 × 4) and the current block have the same motion vector, and they are integrated as a motion compensation target block having a partition size of 4 × 8.
  • the frame memory transfer control unit 150 acquires a reference image corresponding to the motion compensation target block having the new integrated partition size.
  • the motion compensation unit 180 performs motion compensation processing on the motion compensation target block using the acquired reference image, and generates a predicted image.
  • FIGS. 3A and 3B are diagrams showing an example of block integration according to the embodiment of the present invention.
  • FIG. 3A is a diagram illustrating an example in which the partition size of the plurality of blocks included in one macroblock is 4 × 4.
  • FIG. 3A also shows the contents of the flag relating to the motion vector of each block.
  • the flag of the blocks 200, 206, 208, and 215 is “New (indicating that another MV exists as shown in FIG. 2D)”.
  • the flag of the blocks 201, 203, 207, 210, and 214 is “Left (shows the same MV as the adjacent block on the left side as shown in FIG. 2A)”.
  • the flags of the blocks 202 and 209 indicate “Zero (MV indicates 0 as in FIG. 2C)”.
  • the flags of the blocks 204, 205, 211, 212, and 213 are “Above (shows the same MV as the adjacent block on the upper side as shown in FIG. 2B)”.
  • The flags of the blocks 201 and 204 indicate that their MVs are the same as that of the block 200.
  • The flag of the block 205 indicates that its MV is the same as that of the block 201. That is, it can be seen from the flags relating to the respective motion vectors that these four blocks 200, 201, 204, and 205 have the same MV. Therefore, the four blocks 200, 201, 204, and 205 included in the upper left 8 × 8 partition of FIG. 3A can be integrated as a motion compensation target block 301 of an 8 × 8 partition, as shown in FIG. 3B.
  • Similarly, the four blocks 202, 203, 206, and 207 included in the upper right 8 × 8 partition of FIG. 3A can be integrated as two motion compensation target blocks 302 and 303 of 8 × 4 partitions, as shown in FIG. 3B.
  • Similarly, the four blocks 208, 209, 212, and 213 included in the lower left 8 × 8 partition of FIG. 3A can be integrated as two motion compensation target blocks 304 and 305 of 4 × 8 partitions, as shown in FIG. 3B.
  • In the above, whether or not a plurality of adjacent blocks can be integrated is determined based on the flags relating to the motion vectors of the plurality of blocks included in an 8 × 8 partition, which is one of the motion compensation processing units.
  • However, there is no limitation on the partition size to be compared.
  • For example, the flags may be compared in units of 8 × 4 partitions, or in units of 4 × 8 partitions.
  • Further, the flags may be compared across a plurality of adjacent blocks within the macroblock and across macroblock boundaries. For example, since the four 4 × 4 partition blocks 209, 210, 213, and 214 in FIG. 3A have the same motion vector, they can be integrated into a motion compensation target block of an 8 × 8 partition.
  • Similarly, since the motion compensation target blocks 305, 306, and 308 after integration in FIG. 3B have the same motion vector, they can be further integrated into a motion compensation target block of an 8 × 8 partition.
  • FIG. 3C, FIG. 3D, FIG. 3E, and FIG. 3F are diagrams showing another example of block integration according to the embodiment of the present invention.
  • FIG. 3C is a diagram illustrating an example in which the partition size of the plurality of blocks included in one macroblock is 4 × 4.
  • FIG. 3C shows the contents of the flags related to the motion vector of each block.
  • The flags of the blocks 220, 221, 222, 223, 225, 226, 227, 228, 229, 230, 231, 233, 234, and 235 are “Left”.
  • The flags of the blocks 224 and 232 are “Above”.
  • In this case, it can be seen from the flags that the four blocks 220, 221, 224, and 225 included in the upper left 8 × 8 partition have the same MV. That is, the four blocks 220, 221, 224, and 225 included in the upper left 8 × 8 partition of FIG. 3C can be integrated as a motion compensation target block 321 of an 8 × 8 partition, as illustrated in FIG. 3D.
  • Similarly, the four blocks 222, 223, 226, and 227 included in the upper right 8 × 8 partition of FIG. 3C can be integrated as two motion compensation target blocks 322 and 323 of 8 × 4 partitions, as shown in FIG. 3D.
  • Here, the flag relating to the motion vector of each of the motion compensation target blocks 322 and 323 is “Left”.
  • On the left side of the motion compensation target blocks 322 and 323 there is the motion compensation target block 321 after integration of the 8 × 8 partition. That is, the motion vectors of the two 8 × 4 motion compensation target blocks 322 and 323 are, as a result, equal to the motion vector of the motion compensation target block 321 in FIG. 3D. Therefore, the two motion compensation target blocks 322 and 323 in FIG. 3D can be further integrated into a motion compensation target block 332 of an 8 × 8 partition, as shown in FIG. 3E.
  • Similarly, the four blocks 228, 229, 232, and 233 included in the lower left 8 × 8 partition of FIG. 3C can be integrated as a motion compensation target block 324 of an 8 × 8 partition, as shown in FIG. 3D.
  • Similarly, the four blocks 230, 231, 234, and 235 included in the lower right 8 × 8 partition of FIG. 3C can be integrated as two motion compensation target blocks 325 and 326 of 8 × 4 partitions, as shown in FIG. 3D.
  • Here, the flag relating to the motion vector of each of the motion compensation target blocks 325 and 326 is “Left”.
  • As a result, the motion vectors of the two 8 × 4 motion compensation target blocks 325 and 326 are equal to the motion vector of the motion compensation target block 324 in FIG. 3D. Therefore, the two motion compensation target blocks 325 and 326 in FIG. 3D can be further integrated into a motion compensation target block 334 of an 8 × 8 partition, as shown in FIG. 3E.
  • The motion compensation target block 332 after integration shown in FIG. 3E is an 8 × 8 partition, and its flag relating to the motion vector is “Left”.
  • On the left side of the motion compensation target block 332 there is the motion compensation target block 331 after integration of the 8 × 8 partition. That is, the motion vectors of the two 8 × 8 motion compensation target blocks 331 and 332 are equal. Therefore, the two motion compensation target blocks 331 and 332 in FIG. 3E can be further integrated into a motion compensation target block 341 of a 16 × 8 partition, as shown in FIG. 3F.
  • The motion compensation target block 334 after integration shown in FIG. 3E is an 8 × 8 partition, and its flag relating to the motion vector is “Left”.
  • On the left side of the motion compensation target block 334 there is the motion compensation target block 333 after integration of the 8 × 8 partition. That is, the motion vectors of the two 8 × 8 motion compensation target blocks 333 and 334 are equal. Therefore, the two motion compensation target blocks 333 and 334 in FIG. 3E can be further integrated into a motion compensation target block 342 of a 16 × 8 partition, as shown in FIG. 3F.
  • In this way, a recursive integration process can be performed to determine whether the motion compensation target blocks can be further integrated.
  • In the above example, the flags relating to the motion vectors of the motion compensation target blocks are compared for each 8 × 8 partition.
  • However, the flags relating to the motion vectors of the motion compensation target blocks may also be compared in units of a 16 × 16 partition; the unit of comparison is not limited to these.
  • In this way, a plurality of blocks having the same motion vector are integrated into a motion compensation target block having a larger partition size, and the motion compensation processing can be performed for each motion compensation target block, as illustrated in the sketch below.
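```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the integration decision for the four 4x4 blocks
 * of one 8x8 partition (block numbering as in FIG. 5A).  For clarity it
 * compares MV values directly, as Embodiment 2 does; Embodiment 1 reaches
 * the same decision from the decoded flags.  Only the symmetric outcomes
 * (one 8x8, two 8x4, two 4x8, or four 4x4 blocks) are covered; mixed
 * outcomes such as one 8x4 block plus two 4x4 blocks are omitted.
 * All names are assumptions, not the patent's API. */
typedef struct { int16_t x, y; } MV;
typedef struct { int w, h; MV mv; } McBlock;   /* motion compensation target block */

static bool same_mv(MV a, MV b) { return a.x == b.x && a.y == b.y; }

/* mv[0..3]: MVs of the upper-left, upper-right, lower-left and lower-right
 * 4x4 blocks.  Writes the integrated blocks to out[] and returns the count. */
static int integrate_8x8_partition(const MV mv[4], McBlock out[4])
{
    if (same_mv(mv[0], mv[1]) && same_mv(mv[0], mv[2]) && same_mv(mv[0], mv[3])) {
        out[0] = (McBlock){ 8, 8, mv[0] };            /* one 8x8 block  */
        return 1;
    }
    if (same_mv(mv[0], mv[1]) && same_mv(mv[2], mv[3])) {
        out[0] = (McBlock){ 8, 4, mv[0] };            /* two 8x4 blocks */
        out[1] = (McBlock){ 8, 4, mv[2] };
        return 2;
    }
    if (same_mv(mv[0], mv[2]) && same_mv(mv[1], mv[3])) {
        out[0] = (McBlock){ 4, 8, mv[0] };            /* two 4x8 blocks */
        out[1] = (McBlock){ 4, 8, mv[1] };
        return 2;
    }
    for (int i = 0; i < 4; i++)                       /* no integration */
        out[i] = (McBlock){ 4, 4, mv[i] };
    return 4;
}
```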
  • Regarding the memory transfer size when acquiring the reference image from the buffer 160: when a 6-tap filter is used for motion compensation of a block having a partition size of 4 × 4, reference image data of 9 pixels × 9 pixels is needed.
  • Similarly, when a 6-tap filter is used for motion compensation of a block having a partition size of 16 × 16, reference image data of 21 pixels × 21 pixels is necessary.
  • That is, when the partition size of the motion compensation target block is 16 × 16, the reference image data necessary for generating the predicted image of the luminance component is 21 pixels × 21 pixels for one motion compensation target block.
  • On the other hand, when the partition size of the motion compensation target block is 4 × 4, as shown in FIG. 10A, it is necessary to read out reference image data of 9 pixels × 9 pixels for each motion compensation target block.
  • Here, the read amount of reference image data will be described for the case, described with FIGS. 3A and 3B, in which the four 4 × 4 blocks 200, 201, 204, and 205 included in the upper left 8 × 8 partition of FIG. 3A are integrated as the motion compensation target block 301 of the 8 × 8 partition shown in FIG. 3B.
  • When the partition size of the motion compensation target blocks before integration is 4 × 4, 9 × 9 reference image data must be read for each motion compensation target block of partition size 4 × 4.
  • In contrast, the partition size of the motion compensation target block after integration is 8 × 8.
  • Therefore, the memory transfer size of the motion compensation target block with the partition size of 8 × 8 after integration is the same as when the partition size of the original motion compensation target block had been 8 × 8, that is, 13 pixels × 13 pixels of reference image data. A short worked comparison is given below.
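```c
#include <stdio.h>

/* Worked comparison (sketch): luminance reference pixels read from the
 * buffer for one 8x8 area, before and after integrating four 4x4 blocks
 * with identical MVs, using the (w+5) x (h+5) relation of the 6-tap
 * filter.  The figures match the 9x9 and 13x13 areas stated in the text. */
int main(void)
{
    unsigned before = 4 * (4 + 5) * (4 + 5);   /* four 9x9 reads  = 324 */
    unsigned after  = (8 + 5) * (8 + 5);       /* one 13x13 read  = 169 */
    printf("before integration: %u pixels, after integration: %u pixels\n",
           before, after);
    return 0;
}
```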
  • When accessing the buffer 160 (for example, an external memory such as SDR-SDRAM, DDR-SDRAM, DDR2-SDRAM, or DDR3-SDRAM), the access order (for example, the address, control commands, and control signals issued to the SDRAM) and the data transfer order are the same.
  • However, the read amount and access time of the reference image data of the luminance component may increase depending on the bus width of the bus connected to the external memory, the amount of data accessed at one time, the AC characteristics of the external memory (for example, the CAS latency and wait cycles of SDRAM), and so on. Further, the reading of the reference data of the luminance component from the buffer 160 may be interrupted by other accesses (for example, reading of the reference data of the color difference component corresponding to the motion compensation target block, reading of image data for outputting an image to the display unit, or accesses from the CPU).
  • As shown in FIGS. 12A and 12B, assuming that the processing amount at the time of motion compensation corresponds to the number of output pixels including intermediate pixels, the number of executions of the 6-tap filter needed for the luminance component of one macroblock in the case of 4 × 8 partitions is 4 [pixels] × 13 [pixels] = 52 for the horizontal filter and 4 [pixels] × 8 [pixels] = 32 for the vertical filter (FIG. 12B), that is, (52 + 32) × 8 [number of partitions] = 672 [times]. That is, in the case of 4 × 8 partitions, the number of filter executions is reduced compared with the case of 4 × 4 partitions.
  • When the motion compensation target block is a 16 × 16 partition, the number of filter executions is reduced further.
  • In this way, as the partition size of the motion compensation target block increases, the number of filter executions (the processing amount) decreases. Accordingly, the number of pixels read as reference image data from the buffer 160 can also be reduced, and the motion compensation processing can be realized at high speed and with low power consumption, as the sketch below illustrates.
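```c
#include <stdio.h>

/* Sketch of the filter-count rule (horizontal pass: w*(h+5) outputs per
 * block; vertical pass: w*h outputs per block) for the luminance predicted
 * image of one 16x16 macroblock split into w x h partitions.  The 672
 * figure for 4x8 partitions matches the text; the other values are derived
 * from the same rule and are given only for illustration. */
static unsigned filter_ops_per_mb(unsigned w, unsigned h)
{
    unsigned per_block = w * (h + 5) + w * h;     /* horizontal + vertical */
    unsigned blocks    = (16 * 16) / (w * h);
    return per_block * blocks;
}

int main(void)
{
    printf("4x4  : %u\n", filter_ops_per_mb(4, 4));    /* (36+16)*16 = 832 */
    printf("4x8  : %u\n", filter_ops_per_mb(4, 8));    /* (52+32)*8  = 672 */
    printf("8x8  : %u\n", filter_ops_per_mb(8, 8));    /* (104+64)*4 = 672 */
    printf("16x16: %u\n", filter_ops_per_mb(16, 16));  /* 336+256    = 592 */
    return 0;
}
```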
  • FIG. 4 is a flowchart of predictive image generation according to Embodiment 1 of the present invention.
  • the decoding unit 110 acquires a flag related to a motion vector from the moving image encoded stream, and outputs the flag to the motion vector comparison unit 120 (step S401).
  • Next, the motion vector comparison unit 120 determines whether or not the motion vectors of adjacent blocks are the same, using the input flag relating to the motion vector and the already-acquired flags relating to the motion vectors of the adjacent blocks, and outputs the result to the block integration unit 130 (step S402).
  • When it is determined that the blocks can be integrated (that is, when the motion vectors of the plurality of blocks are determined to be the same in step S402; Yes in step S403), the block integration unit 130 changes the plurality of blocks having the same motion vector so that they become one motion compensation target block of one partition size (step S404).
  • the motion vector generation unit 140 calculates a motion vector and outputs the motion vector to the frame memory transfer control unit 150.
  • the motion vector generation unit 140 may calculate a motion vector for each motion compensation target block. More specifically, it is sufficient to calculate a motion vector of one block among a plurality of blocks included in the motion compensation target block.
  • the frame memory transfer control unit 150 acquires the reference image area indicated by the motion vector, that is, the reference image data necessary for the motion compensation processing of the motion compensation target block, from the buffer 160 and transfers it to the local reference memory 170 (step S405).
  • the motion compensation unit 180 uses the reference image data acquired from the local reference memory 170 to perform motion compensation processing for each motion compensation target block, and outputs the generated predicted image to the adder 190 (step S406).
  • On the other hand, when a plurality of adjacent blocks cannot be integrated (No in step S403), the reference image acquisition and the motion compensation processing are performed for each motion compensation target block of the original partition size, without changing the partition size (steps S405 and S406).
  • In the flowchart of FIG. 4, whether or not integration is possible is determined for each 8 × 8 partition as the comparison target, based on the flags relating to the motion vectors of the plurality of blocks included in that partition; however, the determination may be made for any other partition size (for example, 16 × 16, 16 × 8, 8 × 16, 8 × 4, or 4 × 8). Naturally, when the partition size of a block is larger than the comparison target block size used for determining whether integration is possible, the reference image acquisition and the motion compensation processing may be performed for each motion compensation target block of the original partition size, without changing the partition size.
  • FIGS. 5A, 5B, and 5C show examples in which integration is possible based on the flags relating to the motion vectors of the four blocks included in an 8 × 8 partition.
  • FIG. 5A is a diagram showing the positional relationship of four blocks having a partition size of 4 × 4.
  • The upper left block is block 0, the upper right block is block 1, the lower left block is block 2, and the lower right block is block 3.
  • FIGS. 5B and 5C are diagrams showing examples of combinations of flags with which the four blocks 0, 1, 2, and 3 included in the 8 × 8 partition shown in FIG. 5A can be integrated.
  • FIGS. 5B and 5C also show the partition sizes after integration in each case.
  • For example, case 43 in FIG. 5C corresponds to the four blocks 200, 201, 204, and 205 in the upper left of FIG. 3A; in this case, they can be integrated into a motion compensation target block of an 8 × 8 partition.
  • When comparing the flags relating to the motion vectors of the blocks included in an 8 × 8 partition in this way, the comparison processing can be simplified by using the combinations of FIG. 5B or FIG. 5C. Furthermore, by implementing them as a comparison circuit, a moving picture decoding apparatus and a moving picture decoding circuit can be realized with a relatively simple circuit.
  • Note that in some cases the result of integration is one motion compensation target block of an 8 × 4 partition and two motion compensation target blocks of 4 × 4 partitions. It is also possible to compare the flags of a plurality of blocks included in a 16 × 16 macroblock, or of a plurality of blocks included in a partition having a size of 16 × 8 or 8 × 16.
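  • A sketch of a flag-based check in the spirit of FIGS. 5B and 5C (block numbering as in FIG. 5A; only the all-four-equal decision corresponding to, for example, case 43 is shown, and the names are illustrative): within one 8 × 8 partition, block 1 has the same MV as block 0 when its flag is “Left”, block 2 has the same MV as block 0 when its flag is “Above”, and block 3 has the same MV as block 1 or block 2 when its flag is “Above” or “Left” respectively.

```c
#include <stdbool.h>

typedef enum { MV_LEFT, MV_ABOVE, MV_ZERO, MV_NEW } MvFlag;

/* f[0..3]: flags of the upper-left, upper-right, lower-left and lower-right
 * 4x4 blocks of an 8x8 partition.  Returns true when the flags alone prove
 * that all four blocks have the same MV, so that they can be integrated
 * into one 8x8 motion compensation target block.  Other qualifying
 * combinations from the figures are omitted for brevity. */
static bool can_integrate_to_8x8(const MvFlag f[4])
{
    bool all_zero = f[0] == MV_ZERO && f[1] == MV_ZERO &&
                    f[2] == MV_ZERO && f[3] == MV_ZERO;
    bool b1_eq_b0 = (f[1] == MV_LEFT);   /* block 1's left neighbour is block 0  */
    bool b2_eq_b0 = (f[2] == MV_ABOVE);  /* block 2's upper neighbour is block 0 */
    bool b3_eq_b1 = (f[3] == MV_ABOVE);  /* block 3's upper neighbour is block 1 */
    bool b3_eq_b2 = (f[3] == MV_LEFT);   /* block 3's left neighbour is block 2  */
    return all_zero || (b1_eq_b0 && b2_eq_b0 && (b3_eq_b1 || b3_eq_b2));
}
```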
  • FIGS. 6A to 6C show an example related to block integration according to Embodiment 1 of the present invention.
  • FIG. 6A is a diagram showing a case where the partition size of each block included in one macroblock is 4 × 4 and the partition size of each block included in the adjacent macroblock is 8 × 8.
  • The contents of the flag relating to the motion vector are shown for each 4 × 4 block in the macroblock.
  • The four blocks 420, 421, 422, and 423 of 8 × 8 partitions on the left are, for example, motion compensation target blocks that have already been integrated by the block integration unit 130.
  • The flag of the block 406 is “New (indicating that another MV exists, as in FIG. 2D)”, the flags of the blocks 400, 401, 403, 404, 405, 407, 408, 409, 410, 411, 412, 413, and 415 are “Left (the same MV as the adjacent block on the left, as in FIG. 2A)”, the flag of the block 402 is “Zero (MV is 0, as in FIG. 2C)”, and the flag of the block 414 is “Above (the same MV as the adjacent block above, as in FIG. 2B)”.
  • In this case (“Left”), the blocks 400 and 401 have the same MV, and the blocks 404 and 405 have the same MV. That is, the four 4 × 4 blocks 400, 401, 404, and 405 in FIG. 6A can be integrated as two motion compensation target blocks 501 and 502 of 8 × 4 partitions, as shown in FIG. 6B.
  • Similarly, the four blocks 402, 403, 406, and 407 included in the upper right 8 × 8 partition of FIG. 6A can be integrated as two motion compensation target blocks 503 and 504 of 8 × 4 partitions, as shown in FIG. 6B.
  • Similarly, the four blocks 408, 409, 412, and 413 included in the lower left 8 × 8 partition of FIG. 6A can be integrated as two motion compensation target blocks 505 and 506 of 8 × 4 partitions, as shown in FIG. 6B.
  • Similarly, the four blocks 410, 411, 414, and 415 included in the lower right 8 × 8 partition of FIG. 6A can be integrated as one motion compensation target block 507 of an 8 × 8 partition, as shown in FIG. 6B.
  • Here, the blocks included in the upper left 8 × 8 partition of FIG. 6A are integrated into the two motion compensation target blocks 501 and 502 of 8 × 4 partitions, and the flag relating to the motion vector of each of the motion compensation target blocks 501 and 502 is “Left”.
  • As a result, the motion vectors of the two 8 × 4 motion compensation target blocks 501 and 502 are equal to the motion vector of the block 421 in FIG. 6A. Therefore, the two motion compensation target blocks 501 and 502 in FIG. 6B can be further integrated into a motion compensation target block 601 of an 8 × 8 partition, as shown in FIG. 6C.
  • In this way, when the motion vectors of the two blocks to be compared are both equal to the motion vector of a block that is adjacent to the two blocks and has a larger partition size than the two blocks, the two blocks can be integrated into one motion compensation target block (the motion compensation target block 601 in the above example).
  • Similarly, the flags relating to the motion vectors of the two motion compensation target blocks 505 and 506 included in the lower left 8 × 8 partition of FIG. 6B are “Left”.
  • As a result, the motion vectors of the two 8 × 4 motion compensation target blocks 505 and 506 are equal to the motion vector of the block 423 in FIG. 6A. Therefore, the two motion compensation target blocks 505 and 506 in FIG. 6B can be further integrated into a motion compensation target block 604 of an 8 × 8 partition, as shown in FIG. 6C.
  • The motion compensation target block 605 after integration shown in FIG. 6C is an 8 × 8 partition, and its flag relating to the motion vector is “Left”.
  • In FIG. 6A, the partition size of each block included in the adjacent macroblock is 8 × 8. However, even when the motion compensation target block size in the adjacent macroblock is 16 × 16, 8 × 16, or 16 × 8, or when the adjacent macroblock is an intra macroblock (for example, when its motion vector can be treated as 0), the motion compensation target block size after integration gives the same result.
  • FIGS. 7A, 7B, and 7C show an example in which motion vectors are compared based on the flags relating to the motion vectors of the four blocks included in an 8 × 8 partition, further taking into account the block adjacent above and the block adjacent to the left.
  • FIG. 7A is a diagram showing the positional relationship of four 4 × 4 partition blocks adjacent to each other.
  • the upper left block is block 0, the upper right block is block 1, the lower left block is block 2, and the lower right block is block 3.
  • FIG. 7A shows the positions of blocks adjacent in the upward direction (described as “upper / adjacent”) and blocks adjacent in the left direction (described as “left / adjacent”).
  • FIGS. 7B and 7C are diagrams showing examples of combinations of flags for the four blocks 0, 1, 2, and 3 included in the 8×8 partition shown in FIG. 7A and for the 8×8-partition blocks adjacent in the upward and left directions, together with the partition size after integration in each case.
  • Case 1 in FIG. 7B corresponds to blocks 400, 401, 404, and 405 included in the upper-left 8×8 partition in FIG. 6A, and shows that these blocks can be integrated into a single 8×8-partition motion compensation target block.
  • The comparison process can be simplified by using the comparison results of FIG. 7B or FIG. 7C. Furthermore, by implementing them as a comparison circuit, a moving picture decoding apparatus and a moving picture decoding circuit can be realized with relatively simple circuitry.
  • Depending on the combination of flags, the result of integration may be, for example, one 8×4-partition motion compensation target block and two 4×4-partition motion compensation target blocks.
  • In Embodiment 2, the moving picture decoding apparatus differs from Embodiment 1 in that it does not use flags relating to motion vectors, but instead first calculates the motion vector of each block and then compares the motion vectors of a plurality of adjacent blocks. A detailed description of the points common to Embodiment 1 is omitted, and the differences are mainly described.
  • FIG. 8 is a flowchart of predictive image generation according to Embodiment 2 of the present invention.
  • First, the decoding unit 110 acquires, from the moving image encoded stream, a motion vector or the difference value between the motion vector and a predicted motion vector, and outputs it to the motion vector generation unit 140.
  • The motion vector generation unit 140 calculates the motion vector from the received motion vector, or from the motion vector difference value and the predicted motion vector, and outputs the motion vector to the motion vector comparison unit 120 (step S801).
  • The motion vector comparison unit 120 compares the motion vector input from the motion vector generation unit 140 with the motion vectors of a plurality of adjacent blocks, determines whether or not the motion vectors of the plurality of adjacent blocks are the same, and outputs the determination result to the block integration unit 130 (step S802); a sketch of steps S801 and S802 follows this flow.
  • When the adjacent blocks can be integrated (step S803), the block integration unit 130 integrates the plurality of blocks having the same motion vector into one motion compensation target block with a larger partition size, and outputs the result to the motion vector generation unit 140 (step S804).
  • the motion vector generation unit 140 calculates a motion vector of the motion compensation target block, and outputs the calculated motion vector to the frame memory transfer control unit 150. Note that the motion vector calculated in step S801 may be used.
  • Based on the result from the block integration unit 130, the frame memory transfer control unit 150 acquires from the buffer 160 the reference image data of the reference image region indicated by the motion vector, that is, the data necessary for the motion compensation processing of the motion compensation target block, and transfers it to the local reference memory 170 (step S805).
  • the motion compensation unit 180 performs motion compensation processing of the motion compensation target block using the reference image data acquired from the local reference memory 170, and outputs the generated predicted image to the adder 190 (step S806).
  • If the plurality of adjacent blocks cannot be integrated in step S803, the reference image acquisition and the motion compensation processing are performed for each motion compensation target block of the original partition size, without changing the partition size (steps S805 and S806).
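The following is a minimal sketch of the motion vector handling in steps S801 and S802 of this flow: the motion vector is reconstructed from the predicted motion vector and the decoded difference value, and integration is considered only when all of the adjacent blocks carry the same reconstructed motion vector. The types and function names are illustrative assumptions.

```c
#include <stdbool.h>

typedef struct { int x, y; } MV;   /* components assumed to be in quarter-pel units */

/* Step S801 (decoding unit 110 / motion vector generation unit 140):
 * reconstruct the motion vector from the predicted motion vector and the
 * difference value carried in the encoded stream. */
static MV reconstruct_mv(MV predicted, MV diff) {
    MV mv = { predicted.x + diff.x, predicted.y + diff.y };
    return mv;
}

/* Step S802 (motion vector comparison unit 120): the adjacent blocks can be
 * integrated only when every one of them carries the same motion vector. */
static bool all_mvs_equal(const MV *mv, int n) {
    for (int i = 1; i < n; i++)
        if (mv[i].x != mv[0].x || mv[i].y != mv[0].y)
            return false;
    return true;
}
```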
  • The comparison target block for determining whether or not integration is possible may be each 8×8 partition, integration then being decided from the motion vectors of the plurality of blocks included in that partition, or the determination may be made for every other partition size (for example, 16×16, 16×8, 8×16, 8×4, or 4×8).
  • When the partition size of a block is larger than the comparison target block size used for determining whether integration is possible, the reference image acquisition and the motion compensation processing may be performed for each motion compensation target block of the original partition size, without changing the partition size (the single reference transfer of step S805 for an integrated block is sketched below).
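To illustrate the effect of integration on step S805, the sketch below computes the single rectangular reference region that would be transferred from the frame memory for an integrated block, replacing several small per-block transfers. The Rect structure, the integer-pel simplification (no interpolation margin), and the example numbers are assumptions for illustration only.

```c
#include <stdio.h>

/* Hypothetical reference-region descriptor in integer-pel units; the extra
 * rows/columns needed for sub-pel interpolation are ignored in this sketch. */
typedef struct { int x, y, w, h; } Rect;

/* Reference image region needed to motion-compensate a w x h block located
 * at (bx, by), displaced by an integer motion vector (mvx, mvy). */
static Rect reference_region(int bx, int by, int w, int h, int mvx, int mvy) {
    Rect r = { bx + mvx, by + mvy, w, h };
    return r;
}

int main(void) {
    /* Four 4x4 blocks of one 8x8 partition carrying the same motion vector:
     * after integration, one 8x8 transfer replaces four 4x4 transfers. */
    Rect merged = reference_region(16, 32, 8, 8, -3, 5);
    printf("fetch %dx%d at (%d,%d) in a single transfer\n",
           merged.w, merged.h, merged.x, merged.y);
    return 0;
}
```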
  • The embodiment may be realized as a single-chip LSI, or the functional blocks may be configured as individual LSIs. Furthermore, if integrated circuit technology that replaces LSI emerges from advances in semiconductor technology or from another derived technology, the functional blocks may naturally be integrated using that technology; application of biotechnology is also conceivable. The embodiment may also be implemented in other forms.
  • The moving picture decoding apparatus described above is useful as a moving picture decoding apparatus that decodes a moving picture encoded stream encoded using motion prediction, and as a playback method thereof.
  • the present invention can also be applied to applications such as DVD recorders, DVD players, Blu-ray disc recorders, Blu-ray disc players, digital TVs, and portable information terminals such as smartphones.
  • Video decoding device; 110 Decoding unit; 120 Motion vector comparison unit; 130 Block integration unit; 140 Motion vector generation unit; 150 Frame memory transfer control unit; 160 Buffer; 170 Local reference memory; 180 Motion compensation unit; 190 Adder; 0, 1, 2, 3, 200–215, 220–235, 400–415, 420–423 Block; 301–309, 321–326, 331–334, 341, 342, 501–505

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A moving picture decoding apparatus includes: a decoding unit that extracts, from a moving picture encoded stream, a flag relating to a motion vector; a motion vector comparison unit that determines whether or not the motion vectors of adjacent blocks are the same; a block integration unit that integrates a plurality of blocks whose motion vectors are determined to be the same into a single block to be subjected to motion compensation; a motion vector generation unit that generates a motion vector; a reference image acquisition unit that acquires, from reference image data stored in a memory, a reference image for each block to be subjected to motion compensation; a motion compensation unit that generates a predicted image for each block to be subjected to motion compensation; and an adder that reconstructs an image using the predicted image generated by the motion compensation unit. A moving picture decoding apparatus that can perform motion compensation processing at high speed with lower power consumption is thereby provided.
PCT/JP2012/004154 2011-09-02 2012-06-27 Appareil de décodage d'image animée, procédé de décodage d'image animée et circuit intégré WO2013031071A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/191,253 US20140177726A1 (en) 2011-09-02 2014-02-26 Video decoding apparatus, video decoding method, and integrated circuit

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011192066 2011-09-02
JP2011-192066 2011-09-02

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/191,253 Continuation US20140177726A1 (en) 2011-09-02 2014-02-26 Video decoding apparatus, video decoding method, and integrated circuit

Publications (1)

Publication Number Publication Date
WO2013031071A1 true WO2013031071A1 (fr) 2013-03-07

Family

ID=47755606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/004154 WO2013031071A1 (fr) 2011-09-02 2012-06-27 Appareil de décodage d'image animée, procédé de décodage d'image animée et circuit intégré

Country Status (3)

Country Link
US (1) US20140177726A1 (fr)
JP (1) JPWO2013031071A1 (fr)
WO (1) WO2013031071A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016031253A1 (fr) * 2014-08-28 2016-03-03 日本電気株式会社 Procédé de détermination de capacité de bloc et support d'enregistrement de programme

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9300977B2 (en) * 2013-10-02 2016-03-29 Amlogic Co., Ltd. Methods for encoding motion vectors
US11252464B2 (en) 2017-06-14 2022-02-15 Mellanox Technologies, Ltd. Regrouping of video data in host memory
US20200014945A1 (en) * 2018-07-08 2020-01-09 Mellanox Technologies, Ltd. Application acceleration

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050045746A (ko) * 2003-11-12 2005-05-17 삼성전자주식회사 계층 구조의 가변 블록 크기를 이용한 움직임 추정 방법및 장치
KR101590511B1 (ko) * 2009-01-23 2016-02-02 에스케이텔레콤 주식회사 움직임 벡터 부호화/복호화 장치 및 방법과 그를 이용한 영상 부호화/복호화 장치 및 방법
US8731067B2 (en) * 2011-08-31 2014-05-20 Microsoft Corporation Memory management for video decoding

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10215457A (ja) * 1997-01-30 1998-08-11 Toshiba Corp 動画像復号方法及び動画像復号装置
JPH10276439A (ja) * 1997-03-28 1998-10-13 Sharp Corp 領域統合が可能な動き補償フレーム間予測方式を用いた動画像符号化・復号化装置
JP2000115776A (ja) * 1998-09-30 2000-04-21 Victor Co Of Japan Ltd 動き補償画像符号化装置・復号化装置及びその方法
JP2006520551A (ja) * 2003-03-03 2006-09-07 モービリゲン コーポレーション メモリワードアレイ構成およびメモリアクセス予測結合
WO2008117440A1 (fr) * 2007-03-27 2008-10-02 Fujitsu Limited Procédé de décodage et dispositif de décodage
JP2009182792A (ja) * 2008-01-31 2009-08-13 Oki Electric Ind Co Ltd 動きベクトル検出装置及び方法、動画像符号化装置及び方法、並びに、動画像復号化装置及び方法

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016031253A1 (fr) * 2014-08-28 2016-03-03 日本電気株式会社 Procédé de détermination de capacité de bloc et support d'enregistrement de programme
JPWO2016031253A1 (ja) * 2014-08-28 2017-06-15 日本電気株式会社 ブロックサイズ決定方法及びプログラム
US10356403B2 (en) 2014-08-28 2019-07-16 Nec Corporation Hierarchial video code block merging using depth-dependent threshold for block merger

Also Published As

Publication number Publication date
US20140177726A1 (en) 2014-06-26
JPWO2013031071A1 (ja) 2015-03-23

Similar Documents

Publication Publication Date Title
US11252436B2 (en) Video picture inter prediction method and apparatus, and codec
US10630992B2 (en) Method, application processor, and mobile terminal for processing reference image
US10284853B2 (en) Projected interpolation prediction generation for next generation video coding
US20200177877A1 (en) Video image encoding and decoding method, apparatus, and device
US9762929B2 (en) Content adaptive, characteristics compensated prediction for next generation video
TWI504237B (zh) 視訊寫碼中之緩衝預測資料
CN107318025B (zh) 图像处理设备和方法
CN104169971B (zh) 使用非线性缩放和自适应源块大小的分层运动估计
JP2022521979A (ja) デコーダ側動きベクトル改良に対する制約
US20090141808A1 (en) System and methods for improved video decoding
US20140105295A1 (en) Moving image encoding method and apparatus, and moving image decoding method and apparatus
US20190052877A1 (en) Adaptive in-loop filtering for video coding
WO2011078003A1 (fr) Dispositif, procédé et programme de traitement d'image
CN113259661A (zh) 视频解码的方法和装置
CN102918839A (zh) 用于视频编码的功率高效的运动估计技术
US20070171977A1 (en) Moving picture coding method and moving picture coding device
JP2022535859A (ja) Mpmリストを構成する方法、クロマブロックのイントラ予測モードを取得する方法、および装置
KR20210058856A (ko) 저장된 파라미터들을 사용하는 비디오 인코딩 및 디코딩을 위한 로컬 조명 보상
JP2024056899A (ja) インター予測の方法および装置、並びに対応するエンコーダおよびデコーダ
WO2013031071A1 (fr) Appareil de décodage d'image animée, procédé de décodage d'image animée et circuit intégré
US20120121019A1 (en) Image processing device and method
JP2011250400A (ja) 動画像符号化装置及び動画像符号化方法
JP2010081498A (ja) 画像圧縮符号化方法、及び装置
KR20120088372A (ko) 고화질 영상을 위한 무손실 영상 압축방법, 압축 해제방법 및 이를 적용한 전자기기
JPH11239352A (ja) 画像処理方法及び画像処理装置、並びにデータ記憶媒体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12826850

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013531013

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12826850

Country of ref document: EP

Kind code of ref document: A1