WO2011070730A1 - Video coding device and video decoding device - Google Patents

Video coding device and video decoding device Download PDF

Info

Publication number
WO2011070730A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
reference pictures
target image
prediction
video
Prior art date
Application number
PCT/JP2010/006732
Other languages
French (fr)
Japanese (ja)
Inventor
蝶野慶一
仙田裕三
田治米純二
青木啓史
先崎健太
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Publication of WO2011070730A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/513: Processing of motion vectors
    • H04N19/517: Processing of motion vectors by encoding
    • H04N19/52: Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present invention relates to a video encoding device and a video decoding device to which a video encoding technique for predicting a motion vector is applied.
  • a video encoding device digitizes a moving image signal input from the outside, and then performs encoding processing in accordance with a predetermined video encoding method to generate encoded data, that is, a bit stream.
  • one such predetermined video encoding method is ISO/IEC 14496-10 Advanced Video Coding (AVC), described in Non-Patent Document 1.
  • a Joint Model method is known as a reference model for an AVC encoder (hereinafter referred to as a general video encoding device).
  • one frame is composed of one frame picture in progressive scanning, or of two field pictures in interlaced scanning.
  • in this description, for simplicity, it is assumed that one frame is composed of one frame picture of progressive scanning.
  • a general video encoding apparatus includes an MB buffer 101, a frequency conversion unit 102, a quantization unit 103, an entropy encoding unit 104, an inverse quantization unit 105, an inverse frequency conversion unit 106, a picture buffer 107, a block distortion removal filter unit 108, a decoded picture buffer 109, an intra prediction unit 110, an inter-frame prediction unit 111, a motion vector prediction unit 112, an encoding control unit 113, and a switch 100.
  • a general video encoding apparatus divides each frame into blocks of 16×16 pixels, called MBs (Macro Blocks), and encodes each MB in order from the upper left of the frame.
  • MB: Macro Block
  • the MB is further divided into blocks of 4×4 pixels, and each 4×4 block is encoded.
  • FIG. 12 is an explanatory diagram showing an example of block division when the spatial resolution of the frame is QCIF (Quarter Common Intermediate Format).
  • QCIF: Quarter Common Intermediate Format
  • the MB buffer 101 stores the pixel value of the encoding target MB of the input image frame.
  • the encoding target MB is simply referred to as an input MB.
  • the prediction signal supplied from the intra prediction unit 110 or the inter-frame prediction unit 111 via the switch 100 is subtracted from the input MB supplied from the MB buffer 101.
  • hereinafter, the input MB from which the prediction signal has been subtracted is referred to as a prediction error image block.
  • the intra prediction unit 110 generates an intra prediction signal using a reconstructed picture image stored in the picture buffer 107 having the same display time as the current picture as a reference image.
  • Intra_4×4 and Intra_8×8 are intra predictions of 4×4 block size and 8×8 block size, respectively.
  • a circle (○) in the figure indicates a reference pixel for intra prediction, that is, a pixel of the reconstructed image stored in the picture buffer 107.
  • in Intra_4×4 intra prediction, the surrounding pixels of the reconstructed image are used directly as reference pixels, and the prediction signal is formed by padding (extrapolating) the reference pixels in the nine directions shown in FIG.
  • in Intra_8×8 intra prediction, the peripheral pixels of the reconstructed image are first smoothed by the low-pass filter (1/4, 1/2, 1/4) indicated below the right arrow in FIG., and the prediction signal is then formed by extrapolating the smoothed reference pixels in the nine directions shown in FIG.
  • Intra_16×16 is intra prediction of 16×16 block size.
  • circles (○) in the drawing indicate reference pixels used for intra prediction, that is, pixels of the reconstructed image stored in the picture buffer 107.
  • in Intra_16×16 intra prediction, the surrounding pixels of the reconstructed image are used directly as reference pixels, and the prediction signal is formed by extrapolating the reference pixels in the four directions shown in FIG.
  • an MB encoded using an intra prediction signal is referred to as an intra MB.
  • the block size of intra prediction is called intra prediction mode.
  • the extrapolation direction is referred to as an intra prediction direction.
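  • the padding described above can be illustrated with a minimal Python sketch of two simple Intra_4×4 prediction modes (vertical and DC). This sketch is not taken from the patent; the function names and the list-of-lists block representation are illustrative assumptions.

```python
def intra4x4_vertical(above):
    # Vertical mode: each of the four reference pixels directly above
    # the 4x4 block is padded straight down through its column.
    assert len(above) == 4
    return [list(above) for _ in range(4)]

def intra4x4_dc(above, left):
    # DC mode: every predicted pixel is the rounded mean of the eight
    # reconstructed neighbours above and to the left of the block.
    dc = (sum(above) + sum(left) + 4) // 8
    return [[dc] * 4 for _ in range(4)]
```

  • the other directional modes extrapolate the same reference pixels along diagonal directions; only the averaging pattern used per predicted pixel changes.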
  • the inter-frame prediction unit 111 generates an inter-frame prediction signal using, as a reference image, an image of a reference picture stored in the decoded picture buffer 109 that has a display time different from that of the current picture.
  • an MB encoded using an inter-frame prediction signal is referred to as an inter MB.
  • as the inter MB block size, 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, or 4×4 can be selected.
  • FIG. 15 is an explanatory diagram illustrating inter-frame prediction, using a 16×16 block size as an example.
  • the motion vector MV = (mv_x, mv_y) shown in FIG. 15 is one of the prediction parameters for inter-frame prediction; it represents the translation of the inter-frame prediction block (inter-frame prediction signal) of the reference picture relative to the encoding target block.
  • in AVC, in addition to the inter-frame prediction direction, which indicates whether the reference picture of the inter-frame prediction signal lies before or after the picture containing the encoding target block, a reference picture index is used to identify which reference picture is used for the inter-frame prediction of the encoding target block.
  • the reference picture index is also a prediction parameter for inter-frame prediction, because in AVC a plurality of reference pictures stored in the decoded picture buffer 109 can be used for inter-frame prediction.
  • inter MB: an MB encoded using an inter-frame prediction signal
  • inter prediction mode: the block size used for inter-frame prediction
  • inter prediction direction: the direction of inter-frame prediction
  • a picture encoded only with an intra MB is called an I picture.
  • a picture coded including not only an intra MB but also an inter MB is called a P picture.
  • a picture that is encoded including inter MBs that use not only one reference picture but also two reference pictures at the same time for inter-frame prediction is called a B picture.
  • inter-frame prediction in which the reference picture lies in the past relative to the picture containing the encoding target block is referred to as forward prediction; inter-frame prediction in which the reference picture lies in the future is referred to as backward prediction; and inter-frame prediction that uses both a past and a future reference picture simultaneously is referred to as bidirectional prediction.
  • the motion vector predicting unit 112 lets E be the encoding target block, A the adjacent block to its left, B the adjacent block above it, and C the adjacent block above and to its right.
  • the predicted motion vector PMV = (pmv_x, pmv_y) of block E is obtained as the component-wise median of the motion vectors of blocks A, B, and C, by the following equations (1) and (2); hereinafter it is referred to as the spatial-axis predicted motion vector SPMV = (spmv_x, spmv_y):
  • pmv_x = Median(mv_x_A, mv_x_B, mv_x_C) (1)
  • pmv_y = Median(mv_y_A, mv_y_B, mv_y_C) (2)
  • Median(X, Y, Z) is a function that returns the median of X, Y, and Z; mv_x_A, mv_x_B, and mv_x_C are the horizontal components of the motion vectors of blocks A, B, and C, and mv_y_A, mv_y_B, and mv_y_C are their vertical components.
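  • the component-wise median described above can be sketched in Python as follows (an illustrative sketch, not part of the patent; function names are assumptions, and motion vectors are (x, y) tuples):

```python
def median(x, y, z):
    # Median of three values: the sum minus the minimum and the maximum.
    return x + y + z - min(x, y, z) - max(x, y, z)

def spatial_pmv(mv_a, mv_b, mv_c):
    # Component-wise median of the motion vectors of the neighbouring
    # blocks A, B and C, as in the median-based spatial prediction.
    pmv_x = median(mv_a[0], mv_b[0], mv_c[0])
    pmv_y = median(mv_a[1], mv_b[1], mv_c[1])
    return (pmv_x, pmv_y)
```

  • taking the median per component rather than per vector makes the predictor robust to a single outlier among the three neighbouring motion vectors.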
  • the encoding control unit 113 compares the intra prediction signal and the inter-frame prediction signal with the input MB stored in the MB buffer 101, and selects a prediction signal that reduces the energy of the prediction error image block.
  • the encoding control unit 113 controls the switch 100 so that an image is predicted by the selected prediction signal, and supplies information related to the selected prediction signal to the entropy encoding unit 104.
  • the information related to the selected prediction signal consists of the intra prediction mode and the intra prediction direction when the intra prediction signal is selected, and of the inter prediction mode, the inter prediction direction, and the difference motion vector (the difference between the motion vector and the spatial-axis predicted motion vector) when the inter-frame prediction signal is selected.
  • the encoding control unit 113 selects a base block size of integer DCT suitable for frequency conversion of the prediction error image block based on the input MB or the prediction error image block.
  • as base block size options, 16×16, 8×8, and 4×4 are available.
  • Information on the base size of the selected integer DCT is supplied to the frequency transform unit 102 and the entropy encoding unit 104.
  • hereinafter, the information related to the selected prediction signal and the information related to the base size of the selected integer DCT are collectively referred to as auxiliary information.
  • the encoding control unit 113 monitors the number of bits of the bit stream output from the entropy encoding unit 104 so that the picture is encoded within the target number of bits: if the number of output bits exceeds the target, it supplies a quantization parameter that increases the quantization step size; if the number of output bits is below the target, it supplies a quantization parameter that reduces the quantization step size. In this way, the output bit stream is encoded so as to approach the target number of bits.
  • the frequency conversion unit 102 converts the prediction error image block by frequency conversion from the spatial domain to the frequency domain with the selected base size of the integer DCT.
  • the prediction error converted to the frequency domain is called a conversion coefficient.
  • orthogonal transformation such as DCT (Discrete Cosine Transform) or Hadamard transformation can be used.
  • the integer DCT means frequency conversion based on a base obtained by approximating a DCT base with an integer value in a general video encoding apparatus.
  • the quantization unit 103 quantizes the transform coefficient with a quantization step size corresponding to the quantization parameter supplied from the encoding control unit 113. Note that the quantization index of the quantized transform coefficient is also called a level.
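  • the relationship between the quantization step size and the level can be sketched as follows. This is a simplified uniform quantizer for illustration only, not the exact AVC quantizer (which uses scaled integer arithmetic); the names are assumptions.

```python
def quantize(coeff, qstep):
    # Map a transform coefficient to its level: the nearest integer
    # multiple of the quantization step, with ties rounded away from zero.
    sign = 1 if coeff >= 0 else -1
    return sign * int(abs(coeff) / qstep + 0.5)

def dequantize(level, qstep):
    # Inverse quantization simply rescales the level by the step size.
    return level * qstep
```

  • a larger quantization step maps more coefficients to the same level, so fewer bits are needed at the cost of detail; this is the knob that the encoding control unit 113 turns via the quantization parameter.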
  • the entropy coding unit 104 entropy codes the auxiliary information and the quantization index, and outputs the bit string, that is, the bit stream.
  • the inverse quantization unit 105 and the inverse transform unit 106 inversely quantize the quantization index supplied from the quantization unit 103 for subsequent encoding, and perform inverse frequency transform to return to the original space region.
  • the prediction error image block returned to the original space area is referred to as a reconstructed prediction error image block.
  • the picture buffer 107 stores a reconstructed image block obtained by adding a prediction signal to a reconstructed prediction error image block until all MBs included in the current picture are encoded.
  • a picture constituted by the reconstructed image block is referred to as a reconstructed picture.
  • the picture buffer 107 also stores auxiliary information related to the reconstructed image block.
  • the block distortion removal filter unit 108 removes block distortion of the reconstructed picture stored in the picture buffer 107.
  • the decoded picture buffer 109 stores the reconstructed picture from which the block distortion supplied from the block distortion removal filter 108 is removed as a reference picture.
  • the decoded picture buffer 109 also stores auxiliary information related to the reconstructed image block from which block distortion has been removed.
  • the video encoding device shown in FIG. 11 generates a bit stream by the processing as described above.
  • time-axis motion vector prediction, which assumes that the motion vector is uniform between pictures, is used in coding the temporal direct mode of a B picture.
  • in temporal direct mode, the motion vector MV_col of the block (Collocated block) at the same position in the reference picture one picture after the encoding target block (Current block) in display order is used.
  • the motion vector MV_col of the Collocated block is interpolated based on the time difference tb between the picture pic(t) containing the encoding target block and the reference picture pic(t+1) containing the Collocated block, and on the time difference td between the reference picture pic(t+1) and the reference picture pic(t-1) referenced by the motion vector of the Collocated block; the forward and backward predicted motion vectors TPMV_L0 and TPMV_L1 are then calculated by the following equations (3) and (4) (see FIG. 18):
  • TPMV_L0 = tb * MV_col / td (3)
  • TPMV_L1 = (tb - td) * MV_col / td (4)
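  • the picture-distance scaling of temporal direct mode can be sketched in Python as follows (illustrative only, not from the patent; motion vectors are (x, y) tuples and the distances tb and td are in display-order picture units):

```python
def temporal_direct_pmv(mv_col, tb, td):
    # Scale the collocated block's motion vector by the ratio of
    # picture distances: the forward predictor is tb * MV_col / td,
    # and the backward predictor is (tb - td) * MV_col / td.
    tpmv_l0 = tuple(tb * c / td for c in mv_col)
    tpmv_l1 = tuple((tb - td) * c / td for c in mv_col)
    return tpmv_l0, tpmv_l1
```

  • because tb is smaller than td, the forward predictor is a shrunk copy of MV_col, while the backward predictor points in the opposite direction; both assume the motion is uniform over the interval.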
  • Non-Patent Document 1 proposes Symmetric Mode that applies temporal-axis motion vector prediction to inter-frame prediction for backward prediction and generates a motion vector for forward prediction from the motion vector for backward prediction.
  • Non-Patent Document 2 and Non-Patent Document 3 propose techniques that, when the motion vector of the Collocated block is not available (that is, when the Collocated block is in intra MB mode), prohibit temporal-axis motion vector prediction and use spatial-axis motion vector prediction instead.
  • Patent Document 1 proposes a technique that, when the motion vector of the Collocated block cannot be used (that is, when it is in intra MB mode), calculates the time-axis predicted motion vector by extrapolating the motion vector of another reference picture adjacent in the time direction.
  • an object of the present invention is to provide a video encoding device and a video decoding device that can efficiently compress video without reducing the accuracy of motion vector prediction, even in a scene in which the motion vector changes between pictures due to the presence of acceleration or the like.
  • the video encoding apparatus according to the present invention is characterized by including predicted motion vector calculation means for calculating a predicted motion vector of an encoding target image based on the motion vectors of two reference pictures before and after the encoding target image.
  • the video decoding apparatus is characterized in that it includes a predicted motion vector calculation means for calculating a predicted motion vector of a decoding target image based on motion vectors of two reference pictures before and after the decoding target image.
  • a video encoding method according to the present invention is a video encoding method executed by a video encoding device, and is characterized by calculating a predicted motion vector of an encoding target image based on the motion vectors of two reference pictures before and after the encoding target image.
  • a video decoding method according to the present invention is a video decoding method executed by a video decoding device, and is characterized by calculating a predicted motion vector of a decoding target image based on the motion vectors of two reference pictures before and after the decoding target image.
  • the video encoding program according to the present invention is characterized by causing a computer to calculate a predicted motion vector of an encoding target image based on motion vectors of two reference pictures before and after the encoding target image.
  • the video decoding program according to the present invention is characterized by causing a computer to calculate a predicted motion vector of a decoding target image based on motion vectors of two reference pictures before and after the decoding target image.
  • the video can be efficiently compressed without reducing the accuracy of motion vector prediction.
  • brief description of the drawings (partial): an explanatory diagram for explaining Intra_16×16 intra prediction; an explanatory diagram showing an example of inter-frame prediction with a 16×16 block size; an explanatory diagram showing the image blocks adjacent to the processing target image block within a picture; an explanatory diagram showing the motion vector MV_col in the prior art; and an explanatory diagram showing the time-axis predicted motion vectors TPMV_L0 and TPMV_L1 in the prior art.
  • Embodiment 1. The video encoding apparatus according to the present embodiment includes means for detecting a change in the motion vector between pictures based on the norms of the motion vectors of the two reference pictures before and after the encoding target image, and for generating a predicted motion vector by compensating for the change in the motion vector based on a weighted sum of the motion vectors of the two reference pictures before and after.
  • FIG. 1 is a block diagram showing a video encoding apparatus according to the first embodiment of the present invention.
  • the video encoding apparatus includes an MB buffer 101, a frequency conversion unit 102, a quantization unit 103, an entropy encoding unit 104, an inverse quantization unit 105, an inverse frequency conversion unit 106, a picture buffer 107, a block distortion removal filter unit 108, a decoded picture buffer 109, an intra prediction unit 110, an inter-frame prediction unit 111, a motion vector prediction unit 112, an encoding control unit 113, and a switch 100.
  • a feature of this video encoding apparatus is that the motion vector prediction unit 112 can access not only the reconstructed picture stored in the picture buffer 107 but also the auxiliary information of the reference pictures stored in the decoded picture buffer 109.
  • the motion vector prediction unit 112 detects a change in the motion vector between pictures based on the norms of the motion vectors of the two reference pictures before and after the encoding target image, and compensates for that change based on a weighted sum of the motion vectors of the two reference pictures. That is, the motion vector prediction unit 112 corresponds to a unit that generates a predicted motion vector. In the following, the operation of the motion vector prediction unit 112, which is the feature of the video encoding device of the present embodiment, is described in detail.
  • the MB buffer 101 stores the pixel value of the encoding target MB of the input picture.
  • the prediction signal supplied from the intra prediction unit 110 or the inter-frame prediction unit 111 via the switch 100 is subtracted from the input MB supplied from the MB buffer 101.
  • the intra prediction unit 110 generates an intra prediction signal using a reconstructed picture image stored in the picture buffer 107 having the same display time as the current picture as a reference image.
  • the inter-frame prediction unit 111 generates an inter-frame prediction signal using, as a reference image, an image of a reference picture stored in the decoded picture buffer 109 that has a display time different from that of the current picture.
  • the motion vector prediction unit 112 operates differently from a general video encoding technique for B pictures. Details of the operation of the motion vector prediction unit 112 of the present embodiment for B pictures will be described.
  • the motion vector prediction unit 112 reads, from the decoded picture buffer 109, the motion vectors (MV_baseL0 and MV_baseL1) of the blocks (Collocated block L0 and Collocated block L1) of the two reference pictures (pic(t-1) and pic(t+1)) before and after the encoding target picture (see FIG. 2).
  • the motion vector prediction unit 112 compares the norm DNMV_norm of the difference between the normalized motion vectors NMV_baseL0 and NMV_baseL1, obtained by normalizing the motion vectors of the two reference pictures by their reference-picture distances (td_L0 and td_L1), against a predetermined threshold th_norm. The normalized motion vectors NMV_baseL0 and NMV_baseL1 and the norm DNMV_norm are given by the following equations (5), (6), and (7):
  • NMV_baseL0 = MV_baseL0 / td_L0 (5)
  • NMV_baseL1 = MV_baseL1 / td_L1 (6)
  • DNMV_norm = ||NMV_baseL0 - NMV_baseL1|| (7)
  • ||X|| is a function that calculates the L1 norm of the vector X.
  • alternatively, the L2 norm or the maximum absolute difference of the components may be used.
  • when DNMV_norm is less than or equal to the threshold th_norm, the motion vector prediction unit 112 determines that there is no acceleration of the motion vector between the reference picture pic(t-1) and the reference picture pic(t+1) (that is, no change in the motion vector between them due to acceleration or another factor), uses only MV_baseL1, generates the predicted motion vectors TPMV_L0 and TPMV_L1 by the following equations (8), (9), and (10) (that is, time-axis predicted motion vectors equivalent to those of the general video encoding technique), and supplies them to the encoding control unit 113:
  • MV_base = MV_baseL1 (8)
  • TPMV_L0 = tb * MV_base / td_L1 (9)
  • TPMV_L1 = (tb - td_L1) * MV_base / td_L1 (10)
  • TPMV_L0 is the predicted motion vector for the forward prediction motion vector, and TPMV_L1 is the predicted motion vector for the backward prediction motion vector.
  • when DNMV_norm exceeds th_norm, the motion vector prediction unit 112 determines that there is an acceleration of the motion vector between the reference picture pic(t-1) and the reference picture pic(t+1), uses both MV_baseL0 and MV_baseL1, generates the predicted motion vectors TPMV_L0 and TPMV_L1 by the following equations (11), (12), and (13) (that is, the time-axis predicted motion vectors according to the present invention, which differ from those of the general video encoding technique), and supplies them to the encoding control unit 113:
  • MV_base = td_L1 * (MV_baseL0 / td_L0 + MV_baseL1 / td_L1) / 2 (11)
  • TPMV_L0 = tb * MV_base / td_L1 (12)
  • TPMV_L1 = (tb - td_L1) * MV_base / td_L1 (13)
  • the motion vector in Expression (11) is a motion vector based on the weighted sum of motion vectors of two reference pictures before and after the encoding target image (see FIG. 3). Therefore, the obtained prediction motion vector is a time-axis prediction motion vector based on the weighted sum of the motion vectors of the two reference pictures before and after the encoding target image (see FIG. 4).
  • when DNMV_norm is less than or equal to th_norm, the time-axis predicted motion vector in the present embodiment is the same as that obtained by the general video encoding technique; otherwise, it is a predicted motion vector in which the change in the motion vector due to acceleration has been compensated. Therefore, even in a scene in which the motion vector changes between pictures due to the presence of acceleration or the like, the accuracy of motion vector prediction by the motion vector prediction unit 112 does not decrease, and the video can be compressed efficiently.
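  • the detection and compensation steps of equations (5) to (13) can be collected into one Python sketch (illustrative only; function and variable names are assumptions, motion vectors are (x, y) pairs, and the L1 norm is used as in equation (7)):

```python
def predicted_mv(mv_l0, td_l0, mv_l1, td_l1, tb, th_norm):
    # (5), (6): normalize each collocated motion vector by its
    # reference-picture distance.
    nmv_l0 = [c / td_l0 for c in mv_l0]
    nmv_l1 = [c / td_l1 for c in mv_l1]
    # (7): L1 norm of the difference of the normalized vectors.
    dnmv_norm = sum(abs(a - b) for a, b in zip(nmv_l0, nmv_l1))
    if dnmv_norm <= th_norm:
        # (8): no acceleration detected, so MV_baseL1 is used alone.
        mv_base = list(mv_l1)
    else:
        # (11): acceleration detected, so the weighted sum of the two
        # normalized motion vectors is used.
        mv_base = [td_l1 * (a + b) / 2 for a, b in zip(nmv_l0, nmv_l1)]
    # (9)/(12) and (10)/(13): scale MV_base into the forward and
    # backward predicted motion vectors.
    tpmv_l0 = tuple(tb * c / td_l1 for c in mv_base)
    tpmv_l1 = tuple((tb - td_l1) * c / td_l1 for c in mv_base)
    return tpmv_l0, tpmv_l1
```

  • for constant motion the two normalized vectors coincide, the threshold test passes, and the sketch reduces to the ordinary time-axis prediction; only when the vectors diverge does the weighted-sum branch change the result.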
  • the encoding control unit 113 compares the intra prediction signal and the inter-frame prediction signal with the input MB of the MB buffer 101, respectively, and selects a prediction signal that reduces the energy of the prediction error image block.
  • the encoding control unit 113 controls the switch 100 so that an image is predicted by the selected prediction signal, and supplies information related to the selected prediction signal to the entropy encoding unit 104.
  • the information related to the selected prediction signal consists of the intra prediction mode and the intra prediction direction when the intra prediction signal is selected, and of the inter prediction mode, the inter prediction direction, and the difference motion vector (the difference between the motion vector and the spatial-axis predicted motion vector) when the inter-frame prediction signal is selected.
  • the encoding control unit 113 selects a base block size of integer DCT suitable for frequency conversion of the prediction error image block based on the input MB or the prediction error image block.
  • as base block size options, 16×16, 8×8, and 4×4 are available.
  • Information on the base size of the selected integer DCT is supplied to the frequency transform unit 102 and the entropy encoding unit 104.
  • hereinafter, the information related to the selected prediction signal and the information related to the base size of the selected integer DCT are collectively referred to as auxiliary information.
  • the encoding control unit 113 monitors the number of bits of the bit stream output from the entropy encoding unit 104 so that the picture is encoded within the target number of bits: if the number of output bits exceeds the target, it supplies a quantization parameter that increases the quantization step size; if the number of output bits is below the target, it supplies a quantization parameter that reduces the quantization step size. As a result, the output bit stream is encoded so as to approach the target number of bits.
  • the frequency conversion unit 102 converts the prediction error image block by frequency conversion from the spatial domain to the frequency domain with the selected base size of the integer DCT.
  • the prediction error converted to the frequency domain is called a conversion coefficient.
  • orthogonal transform such as DCT or Hadamard transform can be used.
  • the integer DCT means frequency conversion based on a base obtained by approximating a DCT base with an integer value in a general video encoding apparatus.
  • the quantization unit 103 quantizes the transform coefficient with a quantization step size corresponding to the quantization parameter supplied from the encoding control unit 113. Note that the quantization index of the quantized transform coefficient is also called a level.
  • the entropy coding unit 104 entropy codes the auxiliary information and the quantization index, and outputs the bit string, that is, the bit stream.
  • the inverse quantization unit 105 and the inverse transform unit 106 inversely quantize the quantization index supplied from the quantization unit 103 for subsequent encoding, and perform inverse frequency transform to return to the original space region.
  • the prediction error image block returned to the original space area is referred to as a reconstructed prediction error image block.
  • the picture buffer 107 stores a reconstructed image block obtained by adding a prediction signal to a reconstructed prediction error image block until all MBs included in the current picture are encoded.
  • a picture constituted by the reconstructed image block is referred to as a reconstructed picture.
  • the picture buffer 107 also stores auxiliary information related to the reconstructed image block.
  • the block distortion removal filter unit 108 removes block distortion of the reconstructed picture stored in the picture buffer 107.
  • the decoded picture buffer 109 stores the reconstructed picture from which the block distortion supplied from the block distortion removal filter 108 is removed as a reference picture.
  • the decoded picture buffer 109 also stores auxiliary information related to the reconstructed image block from which block distortion has been removed.
  • based on the above-described operation, the video encoding device of this embodiment generates a bit stream.
  • Embodiment 2. The video decoding apparatus according to the present embodiment includes means for detecting a change in the motion vector between pictures based on the norms of the motion vectors of the two reference pictures before and after the decoding target image, and for generating a predicted motion vector by compensating for the change in the motion vector based on a weighted sum of those motion vectors.
  • FIG. 5 is a block diagram showing a video decoding apparatus according to the second embodiment of the present invention.
  • the video decoding apparatus according to the present embodiment includes an entropy decoding unit 201, an inverse quantization unit 202, an inverse frequency conversion unit 203, a picture buffer 204, a block distortion removal filter unit 205, a decoded picture buffer 206, an intra prediction unit 207, an inter-frame prediction unit 208, a motion vector prediction unit 209, a decoding control unit 210, and a switch 200.
  • the entropy decoding unit 201 performs entropy decoding on the bitstream to obtain information related to the prediction signal of the decoding target MB, the base size of the integer DCT, and the quantization index.
  • the decoding control unit 210 prepares to supply an intra prediction signal or an inter-frame prediction signal as a prediction signal.
  • the motion vector prediction unit 209 calculates a predicted motion vector in the same manner as the motion vector prediction unit 112 in the first embodiment and supplies it to the inter-frame prediction unit 208.
  • the intra prediction unit 207 generates an intra prediction signal from the reconstructed image stored in the picture buffer 204, which has the same display time as the frame currently being decoded, based on the entropy-decoded intra prediction mode and intra prediction direction supplied via the decoding control unit 210.
  • the inter-frame prediction unit 208 generates an inter-frame prediction signal from the reference image stored in the decoded picture buffer 206, which has a display time different from that of the frame currently being decoded, based on the entropy-decoded inter prediction mode, inter prediction direction, and difference motion vector supplied via the decoding control unit 210, and on the predicted motion vector supplied from the motion vector prediction unit 209.
  • the decoding control unit 210 controls the switch 200 based on information (intra MB or inter MB) related to the entropy-decoded prediction signal, and predicts an image by the intra prediction signal or the inter-frame prediction signal.
• the inverse quantization unit 202 and the inverse frequency conversion unit 203 inversely quantize the quantization index supplied from the entropy decoding unit 201 and further apply an inverse frequency transform to return it to the original spatial domain (the reconstructed prediction error image block).
• the picture buffer 204 stores the reconstructed image block, obtained by adding the prediction signal supplied via the switch 200 to the reconstructed prediction error image block, until all MBs included in the current frame are decoded.
  • the block distortion removal filter unit 205 removes block distortion of the reconstructed image picture stored in the picture buffer 204.
  • the decoded picture buffer 206 stores the reconstructed image picture from which block distortion has been removed by the block distortion removal filter unit 205 as a reference picture.
• the reference picture is output as a decompressed frame at the appropriate display timing.
  • the video decoding apparatus decompresses the bit stream.
• above, a video encoding device and a video decoding device are shown that include a motion vector prediction unit which detects a change in the motion vector between pictures based on the norm of the motion vectors of the two reference pictures before and after the encoding target picture or the decoding target picture, and which generates a predicted motion vector by compensating for the change based on a weighted sum of the motion vectors of those two reference pictures.
• an embodiment is also conceivable that uses a motion vector prediction unit (motion vector prediction unit 112 and motion vector prediction unit 209) which detects a change in the motion vector between pictures based on the norm of the motion vectors of the two reference pictures before and after the encoding target image or the decoding target image, and which generates a predicted motion vector by compensating for the change based on the motion vectors of images adjacent within the screen.
• the motion vector prediction unit reads the motion vectors (MV_baseL0 and MV_baseL1) of the collocated blocks (Collocated block L0 and Collocated block L1) of the two reference pictures (pic(t−1) and pic(t+1)) stored in the decoded picture buffer 109 or the decoded picture buffer 206.
• the motion vector prediction unit determines whether or not the reference picture indicated by the motion vector MV_baseL1 of the Collocated block L1 is the reference picture to which the Collocated block L0 belongs.
• if the reference picture pointed to by the motion vector MV_baseL1 of the Collocated block L1 is not the reference picture to which the Collocated block L0 belongs, it is determined that there is a large change in the motion vector between the reference picture pic(t−1) and the reference picture pic(t+1), and a predicted motion vector (spatial axis predicted motion vector) SPMV = (spmv_x, spmv_y) is generated.
  • the spatial axis predicted motion vector should be used rather than the temporal axis predicted motion vector.
• as a result, the accuracy of motion vector prediction is improved; specifically, the accuracy is improved by generating a predicted motion vector that compensates for the large change in the motion vector between the reference pictures based on the motion vector of an image adjacent within the screen.
• the motion vector prediction unit normalizes the motion vectors of the two preceding and following reference pictures by their inter-reference-picture distances (td_L0 and td_L1), and compares the norm DNMV_norm of the difference between the normalized motion vectors NMV_baseL0 and NMV_baseL1 with a predetermined threshold th_norm.
• the normalized motion vectors NMV_baseL0 and NMV_baseL1 and the norm DNMV_norm are expressed by the following formulas (16), (17), and (18):
• NMV_baseL0 = MV_baseL0 / td_L0    (16)
• NMV_baseL1 = MV_baseL1 / td_L1    (17)
• DNMV_norm = ‖NMV_baseL0 − NMV_baseL1‖    (18)
• ‖X‖ is a function for calculating the L1 norm of the vector X.
  • the L2 norm or the maximum absolute value difference of each component may be used.
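The normalization and threshold test described above can be sketched as follows. This is a minimal illustration: the function and variable names are not from the specification, and the L1 norm of Expression (18) is assumed.

```python
def detect_mv_change(mv_base_l0, mv_base_l1, td_l0, td_l1, th_norm=1.0):
    """Normalize the collocated motion vectors of the two reference
    pictures by their inter-reference-picture distances (Eqs. 16-17),
    then compare the L1 norm of their difference (Eq. 18) with a
    predetermined threshold, as described in the text."""
    nmv_l0 = (mv_base_l0[0] / td_l0, mv_base_l0[1] / td_l0)  # (16)
    nmv_l1 = (mv_base_l1[0] / td_l1, mv_base_l1[1] / td_l1)  # (17)
    # (18): L1 norm of the difference of the normalized vectors
    dnmv_norm = abs(nmv_l0[0] - nmv_l1[0]) + abs(nmv_l0[1] - nmv_l1[1])
    # True -> the change exceeds the threshold (no acceleration assumed)
    return dnmv_norm > th_norm
```

As the text notes, the L1 norm here could equally be replaced by the L2 norm or the maximum absolute component difference.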
• if DNMV_norm is larger than the threshold th_norm, the motion vector prediction unit determines that there is no motion vector acceleration between the reference picture pic(t−1) and the reference picture pic(t+1) (that is, that a motion vector change due to another factor exists between them), uses only MV_baseL1, generates predicted motion vectors TPMV_L0 and TPMV_L1 (that is, time-axis predicted motion vectors similar to those of a general video encoding technique) by the following equations (19), (20), and (21), and supplies them to the encoding control unit 113 or the decoding control unit 210.
• MV_base = MV_baseL1    (19)
• TPMV_L0 = tb * MV_base / td_L1    (20)
• TPMV_L1 = (tb − td_L1) * MV_base / td_L1    (21)
• otherwise, the motion vector prediction unit generates predicted motion vectors TPMV_L0 and TPMV_L1 (time-axis predicted motion vectors according to the present invention, which differ from the time-axis predicted motion vectors of a general video coding technique) by the following equations (22), (23), and (24), and supplies them to the encoding control unit 113 or the decoding control unit 210.
• MV_base = td_L1 * (MV_baseL0 / td_L0 + MV_baseL1 / td_L1) / 2    (22)
• TPMV_L0 = tb * MV_base / td_L1    (23)
• TPMV_L1 = (tb − td_L1) * MV_base / td_L1    (24)
• the motion vector MV_base of Expression (22) is based on the weighted sum of the motion vectors of the two reference pictures before and after (see FIG. 3). Therefore, the obtained predicted motion vector is a time-axis predicted motion vector based on the weighted sum of the motion vectors of the two reference pictures before and after (see FIG. 4).
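Equations (19) through (24) can be illustrated with the following sketch. This is a minimal illustration only: the function name and the tuple representation of motion vectors are assumptions, not part of the specification.

```python
def temporal_pmv(mv_base_l0, mv_base_l1, td_l0, td_l1, tb, use_weighted_sum):
    """Compute the time-axis predicted motion vectors TPMV_L0 and TPMV_L1.

    When use_weighted_sum is False, only MV_baseL1 is used (Eqs. 19-21,
    as in the general technique); when True, MV_base is the weighted sum
    of the two reference pictures' motion vectors (Eqs. 22-24).
    Motion vectors are (x, y) tuples.
    """
    if use_weighted_sum:
        # (22): MV_base = td_L1 * (MV_baseL0/td_L0 + MV_baseL1/td_L1) / 2
        mv_base = tuple(td_l1 * (a / td_l0 + b / td_l1) / 2
                        for a, b in zip(mv_base_l0, mv_base_l1))
    else:
        mv_base = mv_base_l1                                   # (19)
    tpmv_l0 = tuple(tb * c / td_l1 for c in mv_base)           # (20)/(23)
    tpmv_l1 = tuple((tb - td_l1) * c / td_l1 for c in mv_base) # (21)/(24)
    return tpmv_l0, tpmv_l1
```

For example, with MV_baseL1 = (4, 2), td_L1 = 2 and tb = 1, equations (19)-(21) give TPMV_L0 = (2, 1) and TPMV_L1 = (−2, −1).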
• Embodiment 4: an embodiment is also conceivable that uses a motion vector prediction unit (motion vector prediction unit 112 and motion vector prediction unit 209) which detects a change in motion vector between pictures using an inner product or an outer product, instead of a norm, as the feature of the motion vectors of the two reference pictures before and after the encoding target image or the decoding target image.
• using the inner product, the angle θ formed by the motion vectors of the two preceding and following reference pictures can be obtained. If the angle θ is larger than a predetermined threshold th_theta, it can be determined that there is no motion vector acceleration between the reference picture pic(t−1) and the reference picture pic(t+1) (that is, that a motion vector change due to another factor exists between them).
• using the outer product, it can be determined whether or not the motion vectors of the two preceding and following reference pictures are parallel. If the motion vectors of the two reference pictures are parallel (that is, the outer product is zero) and point in the same direction (that is, the inner product is larger than 0), it can be determined that motion vector acceleration exists between the reference picture pic(t−1) and the reference picture pic(t+1).
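The inner/outer-product test of Embodiment 4 can be sketched as follows for 2D motion vectors. The function name and the zero-tolerance `eps` are illustrative assumptions, not values from the specification.

```python
def has_acceleration(mv_l0, mv_l1, eps=1e-9):
    """Judge whether motion-vector acceleration can be assumed between
    the two reference pictures: the 2D outer (cross) product must be
    zero (parallel vectors) and the inner (dot) product positive
    (same direction), as described in the text."""
    cross = mv_l0[0] * mv_l1[1] - mv_l0[1] * mv_l1[0]  # outer product (z component)
    dot = mv_l0[0] * mv_l1[0] + mv_l0[1] * mv_l1[1]    # inner product
    return abs(cross) < eps and dot > 0
```

For instance, (2, 0) and (4, 0) are parallel with the same direction, so acceleration is assumed; (2, 0) and (−4, 0) are parallel but opposite, so it is not.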
  • the video encoding device and the video decoding device detect and compensate for a change in the motion vector between pictures based on the motion vectors of two reference pictures before and after the encoding target image or the decoding target image.
• the video encoding device and video decoding device can detect a change in the motion vector between pictures based on any one or more of the inner product, the outer product, and the norm of the motion vectors of the two reference pictures.
• the present invention and the invention described in Patent Document 1 are similar in that both use the motion vectors of the two reference pictures before and after; the difference is that, in the present invention, a predicted motion vector in which the change in motion vector is compensated is calculated based on the motion vectors of the two reference pictures before and after, or based on the motion vector of an adjacent block within the picture. This difference enables the present invention to generate a predicted motion vector in which a change in motion vector due to acceleration is compensated.
• moreover, the inner product, outer product, and norm of the motion vectors of the two reference pictures before and after are not simple criteria such as the presence or absence of motion vectors of reference pictures adjacent in the time direction (that is, whether or not the intra MB mode is used); the point of detecting a change of motion vector between pictures using any one or more of these criteria also indicates an inventive step.
• although each of the above embodiments can be configured by hardware, it can also be realized by a computer program.
  • the information processing system shown in FIG. 6 includes a processor 1001, a program memory 1002, a storage medium 1003 for storing video data, and a storage medium 1004 for storing a bitstream.
  • the storage medium 1003 and the storage medium 1004 may be separate storage media, or may be storage areas composed of the same storage medium.
  • a magnetic storage medium such as a hard disk can be used as the storage medium.
  • the program memory 1002 stores a program for realizing the function of each block (excluding the buffer block) shown in FIGS.
  • the processor 1001 implements the functions of the video encoding device or the video decoding device shown in FIGS. 1 and 5 by executing processing in accordance with the program stored in the program memory 1002.
  • FIG. 7 is a block diagram showing the main configuration of the video encoding apparatus according to the present invention.
• the video encoding apparatus according to the present invention includes predicted motion vector calculation means 11 (implemented by the motion vector prediction unit 112 in the first embodiment) for calculating a predicted motion vector of an encoding target image based on the motion vectors of the two reference pictures before and after the encoding target image.
• the predicted motion vector calculation means 11 calculates, as the predicted motion vector of the encoding target image, a time-axis predicted motion vector based on the weighted sum of the motion vectors of the two preceding and following reference pictures when the feature amounts of those motion vectors satisfy a predetermined value, and calculates a spatial axis predicted motion vector based on the motion vector of an image adjacent to the encoding target image within the screen when the feature amounts do not satisfy the predetermined value.
  • the feature quantities of the motion vectors of the two reference pictures before and after used by the predicted motion vector calculation means 11 include, for example, one or more of an inner product, an outer product, and a norm of the motion vectors of the two reference pictures before and after.
  • FIG. 8 is a block diagram showing the main configuration of the video decoding apparatus according to the present invention.
• the video decoding apparatus according to the present invention includes predicted motion vector calculation means 21 (realized by the motion vector prediction unit 209) for calculating a predicted motion vector of a decoding target image based on the motion vectors of the two reference pictures before and after the decoding target image.
  • the predicted motion vector calculation means 21 calculates, for example, a time-axis predicted motion vector based on a weighted sum of motion vectors of two reference pictures before and after, as a predicted motion vector of a decoding target image.
• the feature quantities of the motion vectors of the two reference pictures before and after used by the predicted motion vector calculation means 21 include, for example, one or more of the inner product, the outer product, and the norm of the motion vectors of the two reference pictures before and after.
  • FIG. 9 is a flowchart showing the main steps of the video encoding method according to the present invention.
• the video encoding method according to the present invention includes a step (step S11) of calculating a predicted motion vector of an encoding target image based on the motion vectors of the two reference pictures before and after the encoding target image.
  • FIG. 10 is a flowchart showing the main steps of the video decoding method according to the present invention.
  • the video decoding method according to the present invention includes a step (step S21) of calculating a predicted motion vector of a decoding target image based on motion vectors of two reference pictures before and after the decoding target image.
• 11 Prediction motion vector calculation means
• 21 Prediction motion vector calculation means
• 100 Switch
• 101 MB buffer
• 102 Frequency conversion unit
• 103 Quantization unit
• 104 Entropy encoding unit
• 105 Inverse quantization unit
• 106 Inverse frequency conversion unit
• 107 Picture buffer
• 108 Block distortion removal filter unit
• 109 Decoded picture buffer
• 110 Intra prediction unit
• 111 Inter-frame prediction unit
• 112 Motion vector prediction unit
• 113 Encoding control unit
• 200 Switch
• 201 Entropy decoding unit
• 202 Inverse quantization unit
• 203 Inverse frequency conversion unit
• 204 Picture buffer
• 205 Block distortion removal filter unit
• 206 Decoded picture buffer
• 207 Intra prediction unit
• 208 Inter-frame prediction unit
• 209 Motion vector prediction unit
• 210 Decoding control unit
• 1001 Processor
• 1002 Program memory
• 1003 Storage medium
• 1004 Storage medium

Abstract

Provided are a video coding device and a video decoding device which are capable of efficiently compressing a video without reducing the accuracy of motion vector prediction, even in a scene in which the motion vector changes between pictures due to the existence of acceleration or the like. The video coding device is provided with a prediction motion vector calculation means for calculating a prediction motion vector (temporal axis prediction motion vector or spatial axis prediction motion vector) of a picture to be coded, on the basis of the motion vectors of two reference pictures before and after the picture to be coded. The video decoding device is provided with a prediction motion vector calculation means for calculating a prediction motion vector of a picture to be decoded, on the basis of the motion vectors of two reference pictures before and after the picture to be decoded.

Description

Video encoding device and video decoding device
The present invention relates to a video encoding device and a video decoding device to which a video encoding technique for predicting a motion vector is applied.
In general, a video encoding device digitizes a moving image signal input from the outside and then performs an encoding process conforming to a predetermined video encoding method to generate encoded data, that is, a bit stream.
ISO/IEC 14496-10 Advanced Video Coding (AVC), described in Non-Patent Document 1, is one such predetermined video encoding method. The Joint Model method is known as a reference model of an AVC encoder (hereinafter referred to as a general video encoding device).
With reference to FIG. 11, the configuration and operation of a general video encoding apparatus that receives each frame of digitized video as input and outputs a bit stream will be described. One frame is composed of one progressively scanned frame picture or two interlaced field pictures. Hereinafter, for simplicity of description, it is assumed that one frame is composed of one progressively scanned frame picture.
As shown in FIG. 11, a general video encoding apparatus includes an MB buffer 101, a frequency conversion unit 102, a quantization unit 103, an entropy encoding unit 104, an inverse quantization unit 105, an inverse frequency conversion unit 106, a picture buffer 107, a block distortion removal filter unit 108, a decoded picture buffer 109, an intra prediction unit 110, an inter-frame prediction unit 111, a motion vector prediction unit 112, an encoding control unit 113, and a switch 100.
A general video encoding apparatus divides each frame into blocks of 16×16 pixels called MBs (macroblocks) and encodes each MB in order from the upper left of the frame. In AVC described in Non-Patent Document 1, an MB is further divided into blocks of 4×4 pixels, and each 4×4 block is encoded.
FIG. 12 is an explanatory diagram showing an example of block division when the spatial resolution of the frame is QCIF (Quarter Common Intermediate Format). Hereinafter, for simplicity of description, the operation of each unit will be described focusing only on luminance pixel values.
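As a rough numeric illustration of this block division (the helper itself is not part of the specification), a QCIF frame of 176×144 luminance pixels divides into an 11×9 grid of macroblocks:

```python
def macroblock_grid(width, height, mb_size=16):
    """Number of macroblocks horizontally and vertically for a frame
    whose dimensions are multiples of the MB size (as with QCIF).
    Each 16x16 MB is further split into 16 blocks of 4x4 pixels in AVC."""
    return width // mb_size, height // mb_size

# QCIF: 176x144 -> an 11x9 grid, i.e. 99 macroblocks per frame
```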
The MB buffer 101 stores the pixel values of the encoding target MB of the input image frame. Hereinafter, the encoding target MB is simply referred to as the input MB.
The prediction signal supplied from the intra prediction unit 110 or the inter-frame prediction unit 111 via the switch 100 is subtracted from the input MB supplied from the MB buffer 101. Hereinafter, the input MB from which the prediction signal has been subtracted is referred to as the prediction error image block.
The intra prediction unit 110 generates an intra prediction signal using, as a reference image, the image of the reconstructed picture stored in the picture buffer 107, which has the same display time as the current picture.
As for intra prediction, as described in sections 8.3.1 to 8.3.3 of Non-Patent Document 1, there are intra prediction modes of three block sizes: Intra_4×4, Intra_8×8, and Intra_16×16.
Referring to FIGS. 13(a) and 13(c), it can be seen that Intra_4×4 and Intra_8×8 are intra predictions of 4×4 and 8×8 block size, respectively. The circles (○) in the figures indicate the reference pixels of intra prediction, that is, the reconstructed image stored in the picture buffer 107.
In Intra_4×4 intra prediction, the surrounding pixels of the reconstructed image are used as reference pixels as they are, and a prediction signal is formed by padding (extrapolating) the reference pixels in the nine directions shown in FIG. 13(b). In Intra_8×8 intra prediction, the peripheral pixels of the reconstructed picture image are smoothed by the low-pass filter (1/2, 1/4, 1/2) described immediately below the right arrow in FIG. 13(c), and a prediction signal is formed by extrapolating the smoothed pixels, used as reference pixels, in the nine directions shown in FIG. 13(b).
Referring to FIG. 14(a), it can be seen that Intra_16×16 is intra prediction of 16×16 block size. As in the example shown in FIG. 13, the circles (○) in FIG. 14 indicate the reference pixels used for intra prediction, that is, the image of the reconstructed picture stored in the picture buffer 107. In Intra_16×16 intra prediction, the surrounding pixels of the reconstructed image are used as reference pixels as they are, and a prediction signal is formed by extrapolating the reference pixels in the four directions shown in FIG. 14(b).
Hereinafter, an MB encoded using an intra prediction signal is referred to as an intra MB, the block size of intra prediction is referred to as the intra prediction mode, and the extrapolation direction is referred to as the intra prediction direction.
The inter-frame prediction unit 111 generates an inter-frame prediction signal using, as a reference image, the image of a reference picture stored in the decoded picture buffer 109, which has a display time different from that of the current picture. Hereinafter, an MB encoded using an inter-frame prediction signal is referred to as an inter MB. As the block size of an inter MB, 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, or 4×4 can be selected.
FIG. 15 is an explanatory diagram showing an example of inter-frame prediction using a 16×16 block size. The motion vector MV = (mv_x, mv_y) shown in FIG. 15 is one of the prediction parameters of inter-frame prediction, and represents the translation amount of the inter-frame prediction block (inter-frame prediction signal) of the reference picture with respect to the encoding target block. In AVC, in addition to the inter prediction direction, which indicates the direction of the reference picture of the inter-frame prediction signal with respect to the encoding target picture of the encoding target block, the reference picture index for identifying the reference picture used for inter-frame prediction of the encoding target block is also a prediction parameter of inter-frame prediction. This is because, in AVC, a plurality of reference pictures stored in the decoded picture buffer 109 can be used for inter-frame prediction.
A more detailed description of inter-frame prediction is given in 8.4 Inter prediction process of Non-Patent Document 1.
Hereinafter, the block size of inter-frame prediction is referred to as the inter prediction mode, and the direction of inter-frame prediction is referred to as the inter prediction direction.
A picture encoded using only intra MBs is called an I picture. A picture encoded including not only intra MBs but also inter MBs is called a P picture. A picture encoded including inter MBs that use not just one but two reference pictures simultaneously for inter-frame prediction is called a B picture. In a B picture, inter-frame prediction in which the direction of the reference picture of the inter-frame prediction signal with respect to the encoding target picture is the past is called forward prediction, inter-frame prediction in which that direction is the future is called backward prediction, and inter-frame prediction including both past and future reference pictures is called bidirectional prediction.
As shown in FIG. 16, letting E be the encoding target block, A the block adjacent to its left, B the block adjacent above, and C the block adjacent to the upper right, the motion vector prediction unit 112 obtains the predicted motion vector PMV = (pmv_x, pmv_y) of the encoding target block based on the median of (each component of) the motion vectors of blocks A, B, and C. Hereinafter, for ease of explanation, a predicted motion vector obtained based on blocks adjacent on the spatial axis is referred to as the spatial axis predicted motion vector SPMV = (spmv_x, spmv_y). Each component of SPMV can be expressed as follows.
spmv_x = Median(mv_x_A, mv_x_B, mv_x_C)    (1)
spmv_y = Median(mv_y_A, mv_y_B, mv_y_C)    (2)
Here, Median(X, Y, Z) is a function that returns the median of X, Y, and Z; mv_x_A, mv_x_B, and mv_x_C are the horizontal components of the motion vectors of blocks A, B, and C, respectively; and mv_y_A, mv_y_B, and mv_y_C are the vertical components of the motion vectors of blocks A, B, and C, respectively.
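Equations (1) and (2) can be sketched as follows; the helper names are illustrative only, and motion vectors are represented as (x, y) tuples.

```python
def spatial_pmv(mv_a, mv_b, mv_c):
    """Spatial-axis predicted motion vector SPMV: the component-wise
    median of the motion vectors of the left (A), upper (B) and
    upper-right (C) neighbouring blocks, per Eqs. (1) and (2)."""
    def median3(x, y, z):
        # Median of three values: the middle element after sorting.
        return sorted((x, y, z))[1]
    return (median3(mv_a[0], mv_b[0], mv_c[0]),   # (1)
            median3(mv_a[1], mv_b[1], mv_c[1]))   # (2)
```

For example, with MV_A = (1, 5), MV_B = (3, 1) and MV_C = (2, 3), the predictor is (2, 3).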
The encoding control unit 113 compares the intra prediction signal and the inter-frame prediction signal with the input MB stored in the MB buffer 101, and selects the prediction signal that reduces the energy of the prediction error image block.
Furthermore, the encoding control unit 113 controls the switch 100 so that the image is predicted by the selected prediction signal, and supplies information related to the selected prediction signal to the entropy encoding unit 104. The information related to the selected prediction signal is the intra prediction mode and intra prediction direction when an intra prediction signal is selected, and the inter prediction mode, inter prediction direction, and difference motion vector (between the motion vector and the spatial axis predicted motion vector) when an inter-frame prediction signal is selected.
Subsequently, the encoding control unit 113 selects, based on the input MB or the prediction error image block, the base block size of the integer DCT suited to frequency conversion of the prediction error image block. The choices of base block size are 16×16, 8×8, and 4×4; the flatter the pixel values of the input MB or prediction error image block, the larger the base block size selected. Information on the selected base size of the integer DCT is supplied to the frequency conversion unit 102 and the entropy encoding unit 104.
Hereinafter, the information related to the selected prediction signal and the information on the selected base size of the integer DCT are referred to as auxiliary information.
In addition, in order to encode the picture with the target number of bits, the encoding control unit 113 monitors the number of bits of the bit stream output by the entropy encoding unit 104; if the number of bits of the output bit stream is larger than the target number of bits, it supplies a quantization parameter that increases the quantization step size, and conversely, if it is smaller than the target number of bits, it supplies a quantization parameter that decreases the quantization step size. In this way, the output bit stream is encoded so as to approach the target number of bits.
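This rate-control behaviour can be sketched very roughly as follows. The step size of 1 and the clamp to the AVC quantization-parameter range 0 to 51 are assumptions of this illustration, not values given in the text.

```python
def adjust_qp(qp, produced_bits, target_bits, step=1, qp_min=0, qp_max=51):
    """Coarse rate-control sketch: raise the quantization parameter
    (larger quantization step size) when the output exceeds the
    target number of bits, and lower it when the output falls short."""
    if produced_bits > target_bits:
        return min(qp + step, qp_max)  # coarser quantization, fewer bits
    if produced_bits < target_bits:
        return max(qp - step, qp_min)  # finer quantization, more bits
    return qp
```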
The frequency conversion unit 102 frequency-transforms the prediction error image block from the spatial domain to the frequency domain with the selected base size of the integer DCT. The prediction error transformed to the frequency domain is called a transform coefficient. An orthogonal transform such as the DCT (Discrete Cosine Transform) or the Hadamard transform can be used for the frequency transform. In a general video encoding apparatus, the integer DCT means a frequency transform using a basis in which the DCT basis is approximated by integer values.
The quantization unit 103 quantizes the transform coefficients with the quantization step size corresponding to the quantization parameter supplied from the encoding control unit 113. The quantization index of a quantized transform coefficient is also called a level. The entropy encoding unit 104 entropy-encodes the auxiliary information and the quantization indices, and outputs the resulting bit string, that is, the bit stream.
For subsequent encoding, the inverse quantization unit 105 and the inverse frequency conversion unit 106 inversely quantize the quantization indices supplied from the quantization unit 103 and further apply an inverse frequency transform to return them to the original spatial domain. Hereinafter, the prediction error image block returned to the original spatial domain is referred to as the reconstructed prediction error image block.
The picture buffer 107 stores the reconstructed image block, obtained by adding the prediction signal to the reconstructed prediction error image block, until all MBs included in the current picture are encoded. Hereinafter, a picture composed of reconstructed image blocks is referred to as a reconstructed picture. The picture buffer 107 also stores auxiliary information related to the reconstructed image blocks.
 ブロック歪み除去フィルタ部108は、ピクチャバッファ107に格納された再構築ピクチャのブロック歪みを除去する。 The block distortion removal filter unit 108 removes block distortion of the reconstructed picture stored in the picture buffer 107.
 デコードピクチャバッファ109は、ブロック歪み除去フィルタ108から供給されるブロック歪みが除去された再構築ピクチャを参照ピクチャとして格納する。また、デコードピクチャバッファ109は、ブロック歪みが除去された再構築画像ブロックに関連する補助情報も格納する。 The decoded picture buffer 109 stores the reconstructed picture from which the block distortion supplied from the block distortion removal filter 108 is removed as a reference picture. The decoded picture buffer 109 also stores auxiliary information related to the reconstructed image block from which block distortion has been removed.
 図11に示された映像符号化装置は、以上のような処理によって、ビットストリームを生成する。 The video encoding device shown in FIG. 11 generates a bit stream by the processing as described above.
 Patent Document 1: International Publication No. WO 2007/074543
 In general video coding techniques, the encoding of the temporal direct mode (Temporal DIRECT mode) of B pictures employs temporal-axis motion vector prediction, which assumes that motion vectors are consistent across pictures.
 In temporal-axis motion vector prediction, as shown in FIG. 17, the motion vector MVbase of the block (the collocated block) of the single future reference picture that is adjacent in display order on the temporal axis to the encoding target block (the current block) is used.
 The forward and backward predicted motion vectors TPMVL0 and TPMVL1 are computed by the following equations (3) and (4), by interpolating the motion vector MVcol of the collocated block on the basis of the temporal distance tb between the picture containing the encoding target block (pic(t)) and the reference picture of the collocated block (pic(t+1)), and the temporal distance td between the reference picture of the collocated block (pic(t+1)) and the reference picture (pic(t-1)) referred to by the motion vector of the collocated block (see FIG. 18).
 TPMVL0 = tb * MVcol / td           ... (3)
 TPMVL1 = (tb - td) * MVcol / td    ... (4)
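Assuming equations (3) and (4) take the conventional temporal direct-mode form described in the text (scaling MVcol by the picture-distance ratios tb/td and (tb-td)/td, matching the form of equations (9) and (10) later on), the computation can be sketched as follows. Floating-point arithmetic is used for clarity; actual codecs typically use fixed-point scaling.

```python
def temporal_direct_mv(mv_col, tb, td):
    """Conventional temporal direct-mode prediction (cf. equations (3), (4)).

    mv_col : (x, y) motion vector of the collocated block
    tb     : temporal distance from the current picture to the L0 reference
    td     : temporal distance spanned by the collocated block's vector
    Returns (TPMV_L0, TPMV_L1) as tuples of floats.
    """
    tpmv_l0 = tuple(tb * c / td for c in mv_col)          # equation (3)
    tpmv_l1 = tuple((tb - td) * c / td for c in mv_col)   # equation (4)
    return tpmv_l0, tpmv_l1
```

For instance, with mv_col = (4, 2), tb = 1, and td = 2, the forward predictor is (2.0, 1.0) and the backward predictor is (-2.0, -1.0), i.e., the collocated motion is split proportionally across the two prediction directions.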
 Non-Patent Document 1 proposes a Symmetric Mode that applies temporal-axis motion vector prediction to inter-frame prediction with backward prediction, generating the forward prediction motion vector from the backward prediction motion vector. Non-Patent Documents 2 and 3 propose techniques that, when the motion vector of the collocated block is not available (that is, when the collocated block is in intra MB mode), prohibit temporal-axis motion vector prediction and use spatial-axis motion vector prediction instead. Patent Document 1 proposes a technique that, when the motion vector of the collocated block is not available (that is, in intra MB mode), computes the temporal-axis predicted motion vector by extrapolating the motion vector of another temporally adjacent reference picture.
 However, the techniques described in the non-patent documents and the patent document assume that motion vectors are consistent across pictures. Therefore, when the motion vectors change between pictures, appropriate temporal-axis motion vector prediction cannot be performed. As a result, when the motion vectors change between pictures, for example owing to the presence of acceleration, these techniques suffer reduced motion vector prediction accuracy and cannot compress video efficiently.
 Accordingly, an object of the present invention is to provide a video encoding apparatus and a video decoding apparatus that can compress video efficiently, without reducing the accuracy of motion vector prediction, even in scenes where the motion vectors change between pictures owing to the presence of acceleration or the like.
 A video encoding apparatus according to the present invention comprises predicted motion vector calculation means for calculating a predicted motion vector of an encoding target image on the basis of the motion vectors of the two reference pictures preceding and following the encoding target image.
 A video decoding apparatus according to the present invention comprises predicted motion vector calculation means for calculating a predicted motion vector of a decoding target image on the basis of the motion vectors of the two reference pictures preceding and following the decoding target image.
 A video encoding method according to the present invention is a video encoding method executed by a video encoding apparatus, and calculates a predicted motion vector of an encoding target image on the basis of the motion vectors of the two reference pictures preceding and following the encoding target image.
 A video decoding method according to the present invention is a video decoding method executed by a video decoding apparatus, and calculates a predicted motion vector of a decoding target image on the basis of the motion vectors of the two reference pictures preceding and following the decoding target image.
 A video encoding program according to the present invention causes a computer to calculate a predicted motion vector of an encoding target image on the basis of the motion vectors of the two reference pictures preceding and following the encoding target image.
 A video decoding program according to the present invention causes a computer to calculate a predicted motion vector of a decoding target image on the basis of the motion vectors of the two reference pictures preceding and following the decoding target image.
 According to the present invention, video can be compressed efficiently, without reducing the accuracy of motion vector prediction, even in scenes where the motion vectors change between pictures owing to the presence of acceleration or the like.
 FIG. 1 is a block diagram of the video encoding apparatus of the first embodiment.
 FIG. 2 is an explanatory diagram showing how the motion vector prediction unit reads the two reference pictures preceding and following the encoding target image.
 FIG. 3 is an explanatory diagram showing the motion vector MVbase based on the weighted sum of the motion vectors of the two reference pictures preceding and following the encoding target image.
 FIG. 4 is an explanatory diagram showing the temporal-axis predicted motion vectors TPMVL0 and TPMVL1 based on the motion vector MVbase of the encoding target image.
 FIG. 5 is a block diagram of the video decoding apparatus of the second embodiment.
 FIG. 6 is a block diagram showing a configuration example of an information processing system capable of realizing the functions of the video encoding apparatus and the video decoding apparatus according to the present invention.
 FIG. 7 is a block diagram showing the main configuration of the video encoding apparatus according to the present invention.
 FIG. 8 is a block diagram showing the main configuration of the video decoding apparatus according to the present invention.
 FIG. 9 is a flowchart showing the processing of the video encoding apparatus according to the present invention.
 FIG. 10 is a flowchart showing the processing of the video decoding apparatus according to the present invention.
 FIG. 11 is a block diagram showing the configuration of a general video encoding apparatus.
 FIG. 12 is an explanatory diagram showing an example of block division.
 FIG. 13 is an explanatory diagram for explaining Intra_4x4 and Intra_8x8 intra prediction.
 FIG. 14 is an explanatory diagram for explaining Intra_16x16 intra prediction.
 FIG. 15 is an explanatory diagram showing an example of inter-frame prediction using a 16x16 block size as an example.
 FIG. 16 is an explanatory diagram showing image blocks adjacent within the screen to the processing target image block.
 FIG. 17 is an explanatory diagram showing the motion vector MVbase in the prior art.
 FIG. 18 is an explanatory diagram showing the temporal-axis predicted motion vectors TPMVL0 and TPMVL1 in the prior art.
Embodiment 1.
 The video encoding apparatus of this embodiment comprises means for detecting a change in motion vectors between pictures on the basis of the norms of the motion vectors of the two reference pictures preceding and following the encoding target image, as a feature of those motion vectors, and for generating a predicted motion vector by compensating for the change in the motion vectors on the basis of the weighted sum of the motion vectors of the two preceding and following reference pictures.
 FIG. 1 is a block diagram showing the video encoding apparatus of the first embodiment of the present invention. As shown in FIG. 1, the video encoding apparatus of this embodiment comprises an MB buffer 101, a frequency transform unit 102, a quantization unit 103, an entropy encoding unit 104, an inverse quantization unit 105, an inverse frequency transform unit 106, a picture buffer 107, a block distortion removal filter unit 108, a decoded picture buffer 109, an intra prediction unit 110, an inter-frame prediction unit 111, a motion vector prediction unit 112, an encoding control unit 113, and a switch 100.
 Compared with the general video encoding apparatus shown in FIG. 11, the video encoding apparatus of this embodiment is characterized in that the motion vector prediction unit 112 can access not only the reconstructed picture stored in the picture buffer 107 but also the auxiliary information of the reconstructed pictures stored in the decoded picture buffer 109. The motion vector prediction unit 112 detects a change in motion vectors between pictures on the basis of the norms of the motion vectors of the two reference pictures preceding and following the encoding target image, and compensates for the change in the motion vectors on the basis of the weighted sum of the motion vectors of the two preceding and following reference pictures. That is, the motion vector prediction unit 112 corresponds to the means for generating the predicted motion vector. Accordingly, the following description focuses on the operation of the motion vector prediction unit 112, which is the characteristic feature of the video encoding apparatus of this embodiment.
 The MB buffer 101 stores the pixel values of the encoding target MB of the input picture.
 The prediction signal supplied from the intra prediction unit 110 or the inter-frame prediction unit 111 via the switch 100 is subtracted from the input MB supplied from the MB buffer 101.
 The intra prediction unit 110 generates an intra prediction signal using, as a reference image, the image of the reconstructed picture stored in the picture buffer 107, which has the same display time as the current picture.
 The inter-frame prediction unit 111 generates an inter-frame prediction signal using, as a reference image, the image of a reference picture stored in the decoded picture buffer 109, which has a display time different from that of the current picture.
 For P pictures, the motion vector prediction unit 112 generates a spatial-axis predicted motion vector SPMV = (spmvx, spmvy), as in general video coding techniques, and supplies it to the encoding control unit 113 as the predicted motion vector.
 For B pictures, the motion vector prediction unit 112 operates differently from general video coding techniques. The operation of the motion vector prediction unit 112 of this embodiment for B pictures is described in detail below.
 The motion vector prediction unit 112 first reads, from the decoded picture buffer 109, the motion vectors (MVbaseL0 and MVbaseL1) of the blocks (Collocated block L0 and Collocated block L1) of the two reference pictures (pic(t-1) and pic(t+1)) preceding and following the encoding target image (see FIG. 2).
 Next, the motion vector prediction unit 112 compares the norm DNMV_norm of the difference between the normalized motion vectors NMVbaseL0 and NMVbaseL1, obtained by normalizing the motion vectors of the two preceding and following reference pictures by their respective reference picture distances (tdL0 and tdL1), with a predetermined threshold th_norm. The normalized motion vectors NMVbaseL0 and NMVbaseL1 and the norm DNMV_norm are formally expressed by the following equations (5), (6), and (7).
 NMVbaseL0 = MVbaseL0 / tdL0              ... (5)
 NMVbaseL1 = MVbaseL1 / tdL1              ... (6)
 DNMV_norm = ∥NMVbaseL0 - NMVbaseL1∥      ... (7)
 Here, ∥X∥ is a function that computes the L1 norm of the vector X. Instead of the L1 norm, the L2 norm or the maximum of the absolute differences of the components, for example, may be used.
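Equations (5) through (7) can be sketched as follows; the function name and the string selector for the norm variant are illustrative choices, not part of the specification.

```python
def normalized_mv_difference(mv_l0, td_l0, mv_l1, td_l1, norm="l1"):
    """Equations (5)-(7): normalize each reference picture's motion
    vector by its reference picture distance, then measure their
    disagreement (DNMV_norm).

    norm may be "l1", "l2", or "max" (maximum absolute component
    difference), matching the alternatives the text allows.
    """
    nmv_l0 = [c / td_l0 for c in mv_l0]          # equation (5)
    nmv_l1 = [c / td_l1 for c in mv_l1]          # equation (6)
    diff = [a - b for a, b in zip(nmv_l0, nmv_l1)]
    if norm == "l1":
        return sum(abs(d) for d in diff)         # equation (7), L1 norm
    if norm == "l2":
        return sum(d * d for d in diff) ** 0.5
    return max(abs(d) for d in diff)
```

Note that when the two motion vectors describe the same per-picture motion (e.g. (4, 2) over a distance of 2 and (2, 1) over a distance of 1), the normalized difference is zero regardless of the norm chosen.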
 When the norm DNMV_norm is larger than the predetermined threshold th_norm, the motion vector prediction unit 112 judges that no motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1) (that is, that the motion vector change between the reference picture pic(t-1) and the reference picture pic(t+1) is due to some other factor), and, using only MVbaseL1, generates the predicted motion vectors TPMVL0 and TPMVL1 (that is, temporal-axis predicted motion vectors equivalent to those of general video coding techniques) by the following equations (8), (9), and (10), and supplies them to the encoding control unit 113.
 MVbase = MVbaseL1                       ... (8)
 TPMVL0 = tb * MVbase / tdL1             ... (9)
 TPMVL1 = (tb - tdL1) * MVbase / tdL1    ... (10)
 Here, TPMVL0 is the predicted motion vector for the forward prediction motion vector, and TPMVL1 is the predicted motion vector for the backward prediction motion vector.
 When the norm is equal to or smaller than the predetermined threshold th_norm, the motion vector prediction unit 112 judges that motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1), and, using both MVbaseL0 and MVbaseL1, generates the predicted motion vectors TPMVL0 and TPMVL1 (temporal-axis predicted motion vectors according to the present invention, which differ from those of general video coding techniques) by the following equations (11), (12), and (13), and supplies them to the encoding control unit 113.
 MVbase = tdL1 * (MVbaseL0 / tdL0 + MVbaseL1 / tdL1) / 2    ... (11)
 TPMVL0 = tb * MVbase / tdL1                                ... (12)
 TPMVL1 = (tb - tdL1) * MVbase / tdL1                       ... (13)
 The motion vector of equation (11) is based on the weighted sum of the motion vectors of the two reference pictures preceding and following the encoding target image (see FIG. 3). The resulting predicted motion vector is therefore a temporal-axis predicted motion vector based on the weighted sum of the motion vectors of the two preceding and following reference pictures (see FIG. 4).
 When MVbaseL0 and MVbaseL1 are motion vectors of the same value, the temporal-axis predicted motion vector of this embodiment is the same as that of general video coding techniques. However, when MVbaseL0 and MVbaseL1 differ owing to acceleration, as is clear from equation (11) and FIG. 4, the temporal-axis predicted motion vector of this embodiment is a predicted motion vector in which the motion vector change caused by the acceleration has been compensated. Accordingly, even in scenes where the motion vectors change between pictures owing to the presence of acceleration or the like, the accuracy of motion vector prediction by the motion vector prediction unit 112 does not decrease, and the video can be compressed efficiently.
 This concludes the detailed description of the operation of the motion vector prediction unit 112 of this embodiment for B pictures.
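The B-picture predictor selection described above, combining equations (7) through (13), can be sketched as follows. The L1 norm is used for DNMV_norm, and scalar temporal distances and floating-point arithmetic are assumed for clarity.

```python
def predict_mv(mv_base_l0, td_l0, mv_base_l1, td_l1, tb, th_norm):
    """Sketch of the embodiment's predicted motion vector computation.

    When the normalized motion vectors of the two surrounding reference
    pictures disagree by more than th_norm, fall back to the conventional
    predictor built from MVbaseL1 alone (equations (8)-(10)); otherwise
    compensate for acceleration with the weighted sum of both vectors
    (equations (11)-(13)).
    """
    nmv_l0 = [c / td_l0 for c in mv_base_l0]
    nmv_l1 = [c / td_l1 for c in mv_base_l1]
    dnmv = sum(abs(a - b) for a, b in zip(nmv_l0, nmv_l1))  # L1 norm, eq (7)
    if dnmv > th_norm:
        mv_base = list(mv_base_l1)                                   # eq (8)
    else:
        mv_base = [td_l1 * (a + b) / 2 for a, b in zip(nmv_l0, nmv_l1)]  # eq (11)
    tpmv_l0 = [tb * c / td_l1 for c in mv_base]              # eqs (9), (12)
    tpmv_l1 = [(tb - td_l1) * c / td_l1 for c in mv_base]    # eqs (10), (13)
    return tpmv_l0, tpmv_l1
```

When the two normalized vectors agree, the weighted sum reduces to the conventional predictor, as the text notes; when they disagree beyond the threshold, the fallback path produces exactly the conventional result.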
 The encoding control unit 113 compares the intra prediction signal and the inter-frame prediction signal with the input MB in the MB buffer 101, and selects the prediction signal that yields the smaller energy of the prediction error image block.
 Further, the encoding control unit 113 controls the switch 100 so that the image is predicted with the selected prediction signal, and supplies information related to the selected prediction signal to the entropy encoding unit 104. When the intra prediction signal is selected, the information related to the selected prediction signal consists of the intra prediction mode and the intra prediction direction; when the inter-frame prediction signal is selected, it consists of the inter prediction mode, the inter prediction direction, and the differential motion vector (between the motion vector and the spatial-axis predicted motion vector).
 Next, the encoding control unit 113 selects, on the basis of the input MB or the prediction error image block, the integer-DCT basis block size suited to the frequency transform of the prediction error image block. The basis block size options are 16x16, 8x8, and 4x4; the flatter the pixel values of the input MB or prediction error image block, the larger the basis block size selected. Information on the selected integer-DCT basis size is supplied to the frequency transform unit 102 and the entropy encoding unit 104.
 Hereinafter, the information related to the selected prediction signal and the information on the selected integer-DCT basis size are referred to as auxiliary information.
 In addition, to encode each picture with the target number of bits, the encoding control unit 113 monitors the number of bits of the bitstream output by the entropy encoding unit 104: if the number of bits of the output bitstream exceeds the target number of bits, it supplies a quantization parameter that increases the quantization step size; conversely, if the number of bits of the output bitstream falls short of the target number of bits, it supplies a quantization parameter that decreases the quantization step size. The output bitstream is thereby encoded so as to approach the target number of bits.
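The feedback rate control described above can be sketched as follows. The single-step adjustment and the 0..51 parameter range are assumptions for illustration; the text only specifies the direction of the change.

```python
def adjust_qp(qp, produced_bits, target_bits, step=1, qp_min=0, qp_max=51):
    """Simple feedback rate control: raise the quantization parameter
    (larger step size, fewer bits) when over the target bit count,
    lower it when under, and leave it unchanged when on target."""
    if produced_bits > target_bits:
        qp = min(qp + step, qp_max)
    elif produced_bits < target_bits:
        qp = max(qp - step, qp_min)
    return qp
```

Repeatedly applying this adjustment per picture drives the produced bit count toward the target, matching the behavior stated in the text.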
 The frequency transform unit 102 applies a frequency transform with the selected integer-DCT basis size to the prediction error image block, converting it from the spatial domain to the frequency domain. The prediction error converted to the frequency domain is referred to as transform coefficients. An orthogonal transform such as the DCT or the Hadamard transform can be used as the frequency transform. In a typical video encoding apparatus, the integer DCT means a frequency transform whose basis approximates the DCT basis with integer values.
 The quantization unit 103 quantizes the transform coefficients with the quantization step size corresponding to the quantization parameter supplied from the encoding control unit 113. The quantization index of a quantized transform coefficient is also called a level.
 The entropy encoding unit 104 entropy-encodes the auxiliary information and the quantization indices, and outputs the resulting bit string, i.e., the bitstream.
 The inverse quantization unit 105 and the inverse frequency transform unit 106, for use in subsequent encoding, inversely quantize the quantization indices supplied from the quantization unit 103 and apply an inverse frequency transform to return them to the original spatial domain. Hereinafter, a prediction error image block returned to the spatial domain is referred to as a reconstructed prediction error image block.
 The picture buffer 107 stores reconstructed image blocks, each obtained by adding the prediction signal to a reconstructed prediction error image block, until all MBs included in the current picture have been encoded. Hereinafter, a picture composed of reconstructed image blocks is referred to as a reconstructed picture. The picture buffer 107 also stores auxiliary information related to the reconstructed image blocks.
 The block distortion removal filter unit 108 removes block distortion from the reconstructed picture stored in the picture buffer 107.
 The decoded picture buffer 109 stores, as a reference picture, the reconstructed picture from which block distortion has been removed, supplied from the block distortion removal filter unit 108. The decoded picture buffer 109 also stores auxiliary information related to the reconstructed image blocks from which block distortion has been removed.
 On the basis of the operation described above, the video encoding apparatus of this embodiment generates a bitstream.
Embodiment 2.
 The video decoding apparatus of this embodiment comprises means for detecting a change in motion vectors between pictures on the basis of the norms of the motion vectors of the two reference pictures preceding and following the decoding target image, as a feature of those motion vectors, and for generating a predicted motion vector by compensating for the change in the motion vectors on the basis of the weighted sum of the motion vectors of the two preceding and following reference pictures.
 FIG. 5 is a block diagram showing the video decoding apparatus of the second embodiment of the present invention. As shown in FIG. 5, the video decoding apparatus of this embodiment comprises an entropy decoding unit 201, an inverse quantization unit 202, an inverse frequency transform unit 203, a picture buffer 204, a block distortion removal filter unit 205, a decoded picture buffer 206, an intra prediction unit 207, an inter-frame prediction unit 208, a motion vector prediction unit 209, a decoding control unit 210, and a switch 200.
 The entropy decoding unit 201 entropy-decodes the bitstream to obtain the information related to the prediction signal of the decoding target MB, the integer-DCT basis size, and the quantization indices.
 The decoding control unit 210 prepares to supply an intra prediction signal or an inter-frame prediction signal as the prediction signal.
 When the decoding control unit 210 prepares to supply an inter-frame prediction signal, the motion vector prediction unit 209 calculates the predicted motion vector in the same manner as the motion vector prediction unit 112 in Embodiment 1, and supplies it to the inter-frame prediction unit 208.
 For an intra MB, the intra prediction unit 207 generates an intra prediction signal from the reconstructed image stored in the picture buffer 204, which has the same display time as the frame currently being decoded, on the basis of the entropy-decoded intra prediction mode and intra prediction direction supplied via the decoding control unit 210.
 For an inter MB, the inter-frame prediction unit 208 generates an inter-frame prediction signal from the reference image stored in the decoded picture buffer 206, which has a display time different from that of the frame currently being decoded, on the basis of the entropy-decoded inter prediction mode, inter prediction direction, and differential motion vector supplied via the decoding control unit 210, together with the predicted motion vector supplied from the motion vector prediction unit 209.
 The decoding control unit 210 controls the switch 200 on the basis of the entropy-decoded information related to the prediction signal (intra MB or inter MB), so that the image is predicted with the intra prediction signal or the inter-frame prediction signal.
 The inverse quantization unit 202 and the inverse frequency transform unit 203 inversely quantize the quantization indices supplied from the entropy decoding unit 201 and apply an inverse frequency transform to return them to the original spatial domain (reconstructed prediction error image blocks).
 The picture buffer 204 stores reconstructed image blocks, each obtained by adding the prediction signal supplied via the switch 200 to a reconstructed prediction error image block, until all MBs included in the current frame have been decoded.
 The block distortion removal filter unit 205 removes block distortion from the reconstructed picture stored in the picture buffer 204.
 The decoded picture buffer 206 stores, as a reference picture, the reconstructed picture from which block distortion has been removed by the block distortion removal filter unit 205. The reference picture is output as a decompressed frame at the appropriate display timing.
 On the basis of the operation described above, the video decoding apparatus of this embodiment decompresses the bitstream.
Embodiment 3.
 In Embodiments 1 and 2 described above, a video encoding device and a video decoding device were presented that include a motion vector prediction unit which, as the characteristic feature of the motion vectors of the two reference pictures preceding and following the encoding target image or decoding target image, detects a change in the motion vector between pictures based on the norms of those motion vectors, and then compensates for the change based on a weighted sum of the motion vectors of the two reference pictures to generate a predicted motion vector.
 As another embodiment, a configuration that generates a predicted motion vector based on the motion vectors of spatially adjacent image blocks within the picture is also conceivable.
 That is, an embodiment is conceivable that uses a motion vector prediction unit (motion vector prediction unit 112 and motion vector prediction unit 209) which detects a change in the motion vector between pictures based on the norms of the motion vectors of the two reference pictures preceding and following the encoding target image or decoding target image, and then compensates for the change based on the motion vectors of spatially adjacent image blocks to generate a predicted motion vector.
 The operation of the motion vector prediction unit for a B picture in such an embodiment is described below.
 The motion vector prediction unit first reads the motion vectors (MVbaseL0 and MVbaseL1) of the collocated blocks (collocated block L0 and collocated block L1) in the two reference pictures (pic(t-1) and pic(t+1)) stored in the decoded picture buffer 109 or the decoded picture buffer 206.
 Next, the motion vector prediction unit determines whether the reference picture pointed to by the motion vector MVbaseL1 of collocated block L1 is the reference picture to which collocated block L0 belongs. If it is not, the motion vector prediction unit judges that there is a large change in the motion vector between the reference picture pic(t-1) and the reference picture pic(t+1), and generates a predicted motion vector (spatial-axis predicted motion vector) SPMV = (spmv_x, spmv_y) from the motion vectors of the spatially adjacent image blocks A, B, and C (see FIG. 16) according to the following equations (14) and (15).
 spmv_x = Median(mv_x_A, mv_x_B, mv_x_C)   ...(14)
 spmv_y = Median(mv_y_A, mv_y_B, mv_y_C)   ...(15)
 When it is judged that there is a large change in the motion vector between the reference picture pic(t-1) and the reference picture pic(t+1), using the spatial-axis predicted motion vector rather than the temporal-axis predicted motion vector improves the accuracy of motion vector prediction. Specifically, the accuracy is improved by generating a predicted motion vector that compensates for the large change in the motion vector between the reference pictures based on the motion vectors of the spatially adjacent image blocks.
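Equations (14) and (15) amount to a component-wise median over the motion vectors of the three neighboring blocks. The following sketch is illustrative only and not part of the patent text; the function name and the (x, y) tuple representation are our own assumptions.

```python
from statistics import median

def spatial_predicted_mv(mv_a, mv_b, mv_c):
    """Component-wise median of the motion vectors of the spatially
    adjacent blocks A, B, and C, per equations (14) and (15).
    Each motion vector is an (x, y) tuple."""
    spmv_x = median([mv_a[0], mv_b[0], mv_c[0]])
    spmv_y = median([mv_a[1], mv_b[1], mv_c[1]])
    return (spmv_x, spmv_y)

# The median suppresses the outlier vector (40, -3) of block C.
print(spatial_predicted_mv((4, 2), (6, 1), (40, -3)))  # (6, 1)
```

The per-component median keeps the predictor robust when one neighboring block moves very differently from the other two, which is exactly the situation this branch handles.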
 If the reference picture pointed to by the motion vector MVbaseL1 of collocated block L1 is the reference picture to which collocated block L0 belongs, the motion vector prediction unit then compares the norm DNMV_norm of the difference between the normalized motion vectors NMVbaseL0 and NMVbaseL1, obtained by normalizing the motion vectors of the two preceding and following reference pictures by their respective inter-reference-picture distances (tdL0 and tdL1), with a predetermined threshold th_norm. The normalized motion vectors NMVbaseL0 and NMVbaseL1 and the norm DNMV_norm are formally expressed by the following equations (16), (17), and (18).
 NMVbaseL0 = MVbaseL0 / tdL0   ...(16)
 NMVbaseL1 = MVbaseL1 / tdL1   ...(17)
 DNMV_norm = ||NMVbaseL0 - NMVbaseL1||   ...(18)
 Here, ||X|| is a function that computes the L1 norm of the vector X. Instead of the L1 norm, the L2 norm or the maximum of the absolute differences of the components may be used.
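As an illustrative sketch of equations (16)-(18) (the function and variable names are our own; the L1-norm variant is shown, per the note above):

```python
def exceeds_norm_threshold(mv_base_l0, mv_base_l1, td_l0, td_l1, th_norm):
    """Normalize each reference picture's motion vector by its
    inter-reference-picture distance (equations (16) and (17)), then
    compare the L1 norm of their difference (equation (18)) against
    the threshold th_norm."""
    nmv_l0 = (mv_base_l0[0] / td_l0, mv_base_l0[1] / td_l0)   # (16)
    nmv_l1 = (mv_base_l1[0] / td_l1, mv_base_l1[1] / td_l1)   # (17)
    dnmv_norm = (abs(nmv_l0[0] - nmv_l1[0])
                 + abs(nmv_l0[1] - nmv_l1[1]))                # (18), L1 norm
    return dnmv_norm > th_norm

# Uniform motion: the normalized vectors coincide, so the threshold
# is not exceeded and the acceleration branch is taken.
print(exceeds_norm_threshold((4, 2), (2, 1), 2, 1, 0.5))  # False
```

Swapping the L1 norm for the L2 norm or the maximum absolute component difference, as the text allows, only changes the `dnmv_norm` line.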
 When the norm DNMV_norm is larger than the predetermined threshold th_norm, the motion vector prediction unit judges that no motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1) (that is, the change in the motion vector between them is due to some other factor). It therefore uses only MVbaseL1 to generate the predicted motion vectors TPMVL0 and TPMVL1 (temporal-axis predicted motion vectors equivalent to those of a conventional video coding technique) according to the following equations (19), (20), and (21), and supplies them to the encoding control unit 113 or the decoding control unit 210.
 MVbase = MVbaseL1   ...(19)
 TPMVL0 = tb * MVbase / tdL1   ...(20)
 TPMVL1 = (tb - tdL1) * MVbase / tdL1   ...(21)
 When the norm is less than or equal to the predetermined threshold, the motion vector prediction unit judges that motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1). It therefore uses both MVbaseL0 and MVbaseL1 to generate the predicted motion vectors TPMVL0 and TPMVL1 (temporal-axis predicted motion vectors according to the present invention, which differ from those of a conventional video coding technique) according to the following equations (22), (23), and (24), and supplies them to the encoding control unit 113 or the decoding control unit 210.
 MVbase = tdL1 * (MVbaseL0/tdL0 + MVbaseL1/tdL1) / 2   ...(22)
 TPMVL0 = tb * MVbase / tdL1   ...(23)
 TPMVL1 = (tb - tdL1) * MVbase / tdL1   ...(24)
 Note that the motion vector of equation (22) is based on the weighted sum of the motion vectors of the two preceding and following reference pictures (see FIG. 3). The resulting predicted motion vector is therefore a temporal-axis predicted motion vector based on the weighted sum of the motion vectors of the two reference pictures (see FIG. 4).
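The two temporal-axis branches (equations (19)-(21) and (22)-(24)) can be sketched together as follows. This is illustrative only; the boolean flag standing in for the norm comparison, and all names, are our own assumptions.

```python
def temporal_predicted_mvs(mv_base_l0, mv_base_l1, td_l0, td_l1, tb,
                           acceleration_exists):
    """Generate TPMVL0 and TPMVL1.  When acceleration is judged to
    exist, MVbase is the weighted sum of both reference pictures'
    motion vectors (equation (22)); otherwise only MVbaseL1 is used
    (equation (19)).  Motion vectors are (x, y) tuples."""
    if acceleration_exists:
        mv_base = tuple(td_l1 * (a / td_l0 + b / td_l1) / 2
                        for a, b in zip(mv_base_l0, mv_base_l1))  # (22)
    else:
        mv_base = mv_base_l1                                      # (19)
    tpmv_l0 = tuple(tb * c / td_l1 for c in mv_base)              # (20)/(23)
    tpmv_l1 = tuple((tb - td_l1) * c / td_l1 for c in mv_base)    # (21)/(24)
    return tpmv_l0, tpmv_l1
```

In both branches the final scaling by tb and (tb - tdL1) is identical; only the construction of MVbase differs, which is exactly the distinction the text draws between the conventional predictor and the weighted-sum predictor.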
Embodiment 4.
 As the characteristic feature of the motion vectors of the two reference pictures preceding and following the encoding target image or decoding target image, an embodiment is also conceivable that uses a motion vector prediction device (motion vector prediction unit 112 and motion vector prediction unit 209) which detects a change in the motion vector between pictures using an inner product or an outer product instead of a norm.
 By using the inner product, for example, the angle θ between the motion vectors of the two preceding and following reference pictures can be obtained. When the angle θ is larger than a predetermined threshold th_theta, it can be judged that no motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1) (that is, the change in the motion vector between them is due to some other factor).
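Recovering θ from the inner product could look like the sketch below (our own helper, not from the patent; returning None for a zero vector is our assumption, since the angle is undefined in that case):

```python
import math

def mv_angle(mv_l0, mv_l1):
    """Angle theta between the two reference pictures' motion vectors,
    recovered from the inner product.  Returns None when either vector
    is zero, in which case the angle is undefined."""
    inner = mv_l0[0] * mv_l1[0] + mv_l0[1] * mv_l1[1]
    n0, n1 = math.hypot(*mv_l0), math.hypot(*mv_l1)
    if n0 == 0.0 or n1 == 0.0:
        return None
    # Clamp against floating-point drift before taking acos.
    return math.acos(max(-1.0, min(1.0, inner / (n0 * n1))))

# Perpendicular vectors give theta = pi/2, which would exceed a small
# th_theta and so be judged as "no acceleration".
print(mv_angle((1, 0), (0, 1)))  # 1.5707963267948966
```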
 By using the outer product, for example, it can be determined whether the motion vectors of the two preceding and following reference pictures are parallel. If the motion vectors of the two reference pictures are parallel (that is, the outer product is zero) and point in the same direction, it can be judged that motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1).
 An embodiment is also conceivable that uses a motion vector prediction device which detects a change in the motion vector between pictures by using a combination of the norm and the inner product.
 Specifically, when the norms of the motion vectors of the two preceding and following reference pictures are nonzero and the angle θ between the motion vectors is smaller than a predetermined value th_theta, it can be judged that motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1).
 An embodiment is also conceivable that uses a motion vector prediction device which detects a change in the motion vector between pictures by using a combination of the norm and the outer product.
 Specifically, when the norms of the motion vectors of the two preceding and following reference pictures are nonzero and the outer product of the motion vectors is zero, it can be judged that motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1).
 Furthermore, an embodiment is also conceivable that uses a motion vector prediction device which detects a change in the motion vector between pictures by using a combination of the norm, the inner product, and the outer product.
 Specifically, when the norms of the motion vectors of the two preceding and following reference pictures are nonzero, the inner product is greater than zero, and the outer product is zero, it can be judged that motion vector acceleration exists between the reference picture pic(t-1) and the reference picture pic(t+1).
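A sketch of this combined norm / inner-product / outer-product criterion (illustrative only; for 2D motion vectors the outer product reduces to the scalar z-component of the cross product, and the names are ours):

```python
def acceleration_exists(mv_l0, mv_l1):
    """Combined criterion: both norms nonzero, inner product greater
    than zero, and outer product (z-component of the 2D cross product)
    equal to zero -- i.e. the vectors are parallel and same-directed."""
    norm_l0 = abs(mv_l0[0]) + abs(mv_l0[1])          # L1 norms
    norm_l1 = abs(mv_l1[0]) + abs(mv_l1[1])
    inner = mv_l0[0] * mv_l1[0] + mv_l0[1] * mv_l1[1]
    outer = mv_l0[0] * mv_l1[1] - mv_l0[1] * mv_l1[0]
    return norm_l0 != 0 and norm_l1 != 0 and inner > 0 and outer == 0

# Parallel, same-directed motion across pic(t-1) and pic(t+1):
print(acceleration_exists((1, 1), (2, 2)))   # True
print(acceleration_exists((1, 0), (-2, 0)))  # False (opposite direction)
```

The inner-product sign distinguishes the same-directed case from the anti-parallel one, which the outer-product test alone cannot do.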
 The idea of this embodiment may be applied to the first and second embodiments, or to the third embodiment.
 In each of the embodiments described above, the video encoding device and the video decoding device include means for detecting and compensating for a change in the motion vector between pictures based on the motion vectors of the two reference pictures preceding and following the encoding target image or decoding target image. Specifically, the video encoding device and video decoding device of one embodiment include means for detecting a change in the motion vector between pictures based on one or more of the inner product, outer product, and norm of the motion vectors of the two reference pictures, and means for generating a predicted motion vector that compensates for the change in the motion vector based on a weighted sum of the motion vectors of the two reference pictures. The video encoding device and video decoding device of another embodiment include means for detecting a change in the motion vector between pictures based on one or more of the inner product, outer product, and norm of the motion vectors of the two reference pictures, and means for generating a predicted motion vector that compensates for the change in the motion vector based on the motion vectors of spatially adjacent images within the picture.
 Although the present invention and the invention described in Patent Document 1 are similar in that both use the motion vectors of the two preceding and following reference pictures, the present invention differs substantially in that it computes a predicted motion vector in which the change in the motion vector is compensated, based on the motion vectors of the two reference pictures or on the motion vectors of adjacent blocks within the picture. Owing to this difference, the present invention can generate a predicted motion vector in which the change in the motion vector caused by acceleration is compensated.
 Furthermore, the present invention detects a change in the motion vector between pictures using one or more of the inner product, outer product, and norm of the motion vectors of the two preceding and following reference pictures, rather than a simple criterion such as the presence or absence of a motion vector in a temporally adjacent reference picture (that is, whether the block is in intra MB mode); this point also demonstrates an inventive step.
 Each of the above embodiments can be implemented in hardware, but can also be realized by a computer program.
 The information processing system shown in FIG. 6 includes a processor 1001, a program memory 1002, a storage medium 1003 for storing video data, and a storage medium 1004 for storing a bitstream. The storage medium 1003 and the storage medium 1004 may be separate storage media, or may be storage areas on the same storage medium. A magnetic storage medium such as a hard disk can be used as the storage medium.
 In the information processing system shown in FIG. 6, the program memory 1002 stores a program that implements the function of each block (excluding the buffer blocks) shown in FIG. 1 and FIG. 5. The processor 1001 implements the functions of the video encoding device or the video decoding device shown in FIG. 1 and FIG. 5 by executing processing according to the program stored in the program memory 1002.
 FIG. 7 is a block diagram showing the main configuration of the video encoding device according to the present invention. As shown in FIG. 7, the video encoding device according to the present invention includes predicted motion vector calculation means 11 (realized by the motion vector prediction unit 112 in the first embodiment) for calculating a predicted motion vector of the encoding target image based on the motion vectors of the two reference pictures preceding and following the encoding target image.
 The predicted motion vector calculation means 11, for example, calculates a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures when a feature quantity of those motion vectors satisfies a predetermined value, and calculates a spatial-axis predicted motion vector based on the motion vectors of the images spatially adjacent to the encoding target image when the feature quantity does not satisfy the predetermined value.
 The feature quantity of the motion vectors of the two preceding and following reference pictures used by the predicted motion vector calculation means 11 includes, for example, one or more of the inner product, outer product, and norm of those motion vectors.
 FIG. 8 is a block diagram showing the main configuration of the video decoding device according to the present invention. As shown in FIG. 8, the video decoding device according to the present invention includes predicted motion vector calculation means 21 (realized by the motion vector prediction unit 209 in the second embodiment) for calculating a predicted motion vector of the decoding target image based on the motion vectors of the two reference pictures preceding and following the decoding target image.
 The predicted motion vector calculation means 21, for example, calculates, as the predicted motion vector of the decoding target image, a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures.
 The feature quantity of the motion vectors of the two preceding and following reference pictures used by the predicted motion vector calculation means 21 includes, for example, one or more of the inner product, outer product, and norm of those motion vectors.
 FIG. 9 is a flowchart showing the main steps of the video encoding method according to the present invention. As shown in FIG. 9, the video encoding method according to the present invention includes a step (step S11) of calculating a predicted motion vector of the encoding target image based on the motion vectors of the two reference pictures preceding and following the encoding target image.
 FIG. 10 is a flowchart showing the main steps of the video decoding method according to the present invention. As shown in FIG. 10, the video decoding method according to the present invention includes a step (step S21) of calculating a predicted motion vector of the decoding target image based on the motion vectors of the two reference pictures preceding and following the decoding target image.
 Although the present invention has been described with reference to embodiments and examples, the present invention is not limited to the above embodiments and examples. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
 This application claims priority based on Japanese Patent Application No. 2009-277952, filed on December 7, 2009, the entire disclosure of which is incorporated herein.
DESCRIPTION OF SYMBOLS
 11   Predicted motion vector calculation means
 21   Predicted motion vector calculation means
 100  Switch
 101  MB buffer
 102  Frequency transform unit
 103  Quantization unit
 104  Entropy encoding unit
 105  Inverse quantization unit
 106  Inverse frequency transform unit
 107  Picture buffer
 108  Block distortion removal filter unit
 109  Decoded picture buffer
 110  Intra prediction unit
 111  Inter-frame prediction unit
 112  Motion vector prediction unit
 113  Encoding control unit
 200  Switch
 201  Entropy decoding unit
 202  Inverse quantization unit
 203  Inverse frequency transform unit
 204  Picture buffer
 205  Block distortion removal filter unit
 206  Decoded picture buffer
 207  Intra prediction unit
 208  Inter-frame prediction unit
 209  Motion vector prediction unit
 210  Decoding control unit
 1001 Processor
 1002 Program memory
 1003 Storage medium
 1004 Storage medium

Claims (30)

  1.  符号化対象画像の前後2枚の参照ピクチャの動きベクトルに基づいて、前記符号化対象画像の予測動きベクトルを計算する予測動きベクトル計算手段を備える映像符号化装置。 A video encoding device comprising prediction motion vector calculation means for calculating a prediction motion vector of the encoding target image based on motion vectors of two reference pictures before and after the encoding target image.
  2.  前記予測動きベクトル計算手段は、前記符号化対象画像の予測動きベクトルとして、前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算する
     請求項1記載の映像符号化装置。
    The video code according to claim 1, wherein the prediction motion vector calculation means calculates a time-axis prediction motion vector based on a weighted sum of motion vectors of the two preceding and following reference pictures as a prediction motion vector of the encoding target image. Device.
  3.  前記予測動きベクトル計算手段は、前記符号化対象画像の予測動きベクトルとして、前記符号化対象画像に画面内で隣接する画像の動きベクトルに基づいた空間軸予測動きベクトルを計算する
     請求項1記載の映像符号化装置。
    The predicted motion vector calculation unit calculates a spatial axis predicted motion vector based on a motion vector of an image adjacent to the encoding target image in a screen as a predicted motion vector of the encoding target image. Video encoding device.
  4.  前記予測動きベクトル計算手段は、前記前後2枚の参照ピクチャの動きベクトルの特徴量が所定値を満たすときに前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算し、前記所定値を満たさないときに前記符号化対象画像に画面内で隣接する画像の動きベクトルに基づいた空間軸予測動きベクトルを計算する
     請求項1記載の映像符号化装置。
    The predicted motion vector calculation means calculates a time-axis predicted motion vector based on a weighted sum of motion vectors of the two preceding and following reference pictures when a feature amount of the motion vector of the two preceding and following reference pictures satisfies a predetermined value. The video encoding device according to claim 1, wherein a spatial axis predicted motion vector is calculated based on a motion vector of an image adjacent to the encoding target image in a screen when the predetermined value is not satisfied.
  5.  前記前後2枚の参照ピクチャの動きベクトルの特徴量は、前記前後2枚の参照ピクチャの動きベクトルの内積、外積およびノルムのいずれか一つ以上を含む
     請求項4記載の映像符号化装置。
    The video encoding device according to claim 4, wherein the feature quantity of the motion vectors of the two preceding and following reference pictures includes at least one of an inner product, an outer product, and a norm of the motion vectors of the two preceding and following reference pictures.
  6.  復号対象画像の前後2枚の参照ピクチャの動きベクトルに基づいて、前記復号対象画像の予測動きベクトルを計算する予測動きベクトル計算手段を備える映像復号装置。 A video decoding device comprising prediction motion vector calculation means for calculating a prediction motion vector of the decoding target image based on motion vectors of two reference pictures before and after the decoding target image.
  7.  前記予測動きベクトル計算手段は、前記復号対象画像の予測動きベクトルとして、前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算する
     請求項6記載の映像復号装置。
    The video decoding device according to claim 6, wherein the predicted motion vector calculation means calculates a time-axis predicted motion vector based on a weighted sum of motion vectors of the two preceding and following reference pictures as a predicted motion vector of the decoding target image. .
  8.  前記予測動きベクトル計算手段は、前記復号対象画像の予測動きベクトルとして、前記復号対象画像に画面内で隣接する画像の動きベクトルに基づいた空間軸予測動きベクトルを計算する
     請求項6記載の映像復号装置。
    The video decoding according to claim 6, wherein the predicted motion vector calculating unit calculates a spatial axis predicted motion vector based on a motion vector of an image adjacent to the decoding target image in a screen as a predicted motion vector of the decoding target image. apparatus.
  9.  前記予測動きベクトル計算手段は、前記前後2枚の参照ピクチャの動きベクトルの特徴量が所定値を満たすときに前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算し、前記所定値を満たさないときに前記復号対象画像に画面内で隣接する画像の動きベクトルに基づいた空間軸予測動きベクトルを計算する
     請求項6記載の映像復号装置。
    The predicted motion vector calculation means calculates a time-axis predicted motion vector based on a weighted sum of motion vectors of the two preceding and following reference pictures when a feature amount of the motion vector of the two preceding and following reference pictures satisfies a predetermined value. The video decoding apparatus according to claim 6, wherein a spatial axis predicted motion vector is calculated based on a motion vector of an image adjacent to the decoding target image in a screen when the predetermined value is not satisfied.
  10.  前記前後2枚の参照ピクチャの動きベクトルの特徴量は、前記前後2枚の参照ピクチャの動きベクトルの内積、外積およびノルムのいずれか一つ以上を含む
     請求項9記載の映像復号装置。
    The video decoding apparatus according to claim 9, wherein the feature quantity of the motion vectors of the two preceding and following reference pictures includes at least one of an inner product, an outer product, and a norm of the motion vectors of the two preceding and following reference pictures.
  11.  映像符号化装置で実行される映像符号化方法であって、
     符号化対象画像の前後2枚の参照ピクチャの動きベクトルに基づいて、前記符号化対象画像の予測動きベクトルを計算する映像符号化方法。
    A video encoding method executed by a video encoding device,
    A video encoding method for calculating a predicted motion vector of an encoding target image based on motion vectors of two reference pictures before and after the encoding target image.
  12.  前記符号化対象画像の予測動きベクトルとして、前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算する
     請求項11記載の映像符号化方法。
    The video encoding method according to claim 11, wherein a temporal axis predicted motion vector based on a weighted sum of motion vectors of the two preceding and following reference pictures is calculated as a predicted motion vector of the encoding target image.
  13.  前記符号化対象画像の予測動きベクトルとして、前記符号化対象画像に画面内で隣接する画像の動きベクトルに基づいた空間軸予測動きベクトルを計算する
     請求項11記載の映像符号化方法。
    The video encoding method according to claim 11, wherein a spatial axis predicted motion vector based on a motion vector of an image adjacent to the encoding target image in a screen is calculated as the prediction motion vector of the encoding target image.
  14.  前記前後2枚の参照ピクチャの動きベクトルの特徴量が所定値を満たすときに前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算し、前記所定値を満たさないときに前記符号化対象画像に画面内で隣接する画像の動きベクトルに基づいた空間軸予測動きベクトルを計算する
     請求項11記載の映像符号化方法。
    When the feature quantities of the motion vectors of the two preceding and following reference pictures satisfy a predetermined value, a temporal axis predicted motion vector is calculated based on a weighted sum of the motion vectors of the two preceding and following reference pictures, and the predetermined value is satisfied. The video encoding method according to claim 11, wherein a spatial axis predicted motion vector based on a motion vector of an image adjacent to the encoding target image in a screen is calculated when there is no image.
  15.  前記前後2枚の参照ピクチャの動きベクトルの特徴量として、前記前後2枚の参照ピクチャの動きベクトルの内積、外積およびノルムのいずれか一つ以上を使用する
     請求項14記載の映像符号化方法。
    The video encoding method according to claim 14, wherein one or more of an inner product, an outer product, and a norm of motion vectors of the two preceding and following reference pictures is used as a feature quantity of the motion vector of the two preceding and following reference pictures.
  16.  映像復号装置で実行される映像復号方法であって、
     復号対象画像の前後2枚の参照ピクチャの動きベクトルに基づいて、前記復号対象画像の予測動きベクトルを計算する映像復号方法。
    A video decoding method executed by a video decoding device,
    A video decoding method for calculating a predicted motion vector of a decoding target image based on motion vectors of two reference pictures before and after the decoding target image.
  17.  前記復号対象画像の予測動きベクトルとして、前記前後2枚の参照ピクチャの動きベクトルの重み付け和に基づいた時間軸予測動きベクトルを計算する
     請求項16記載の映像復号方法。
    The video decoding method according to claim 16, wherein a time-axis predicted motion vector based on a weighted sum of motion vectors of the two preceding and following reference pictures is calculated as a predicted motion vector of the decoding target image.
  18.  The video decoding method according to claim 16, wherein a spatial-axis predicted motion vector based on a motion vector of an image adjacent to the decoding target image within the screen is calculated as the predicted motion vector of the decoding target image.
  19.  The video decoding method according to claim 16, wherein a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures is calculated when a feature quantity of the motion vectors of the two reference pictures satisfies a predetermined value, and a spatial-axis predicted motion vector based on a motion vector of an image adjacent to the decoding target image within the screen is calculated when the feature quantity does not satisfy the predetermined value.
  20.  The video decoding method according to claim 19, wherein at least one of the inner product, the outer product, and the norm of the motion vectors of the two preceding and following reference pictures is used as the feature quantity of the motion vectors of the two reference pictures.
  21.  A video encoding program for causing a computer to calculate a predicted motion vector of an encoding target image based on the motion vectors of the two reference pictures preceding and following the encoding target image.
  22.  The video encoding program according to claim 21, causing the computer to calculate, as the predicted motion vector of the encoding target image, a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures.
  23.  The video encoding program according to claim 21, causing the computer to calculate, as the predicted motion vector of the encoding target image, a spatial-axis predicted motion vector based on a motion vector of an image adjacent to the encoding target image within the screen.
  24.  The video encoding program according to claim 21, causing the computer to calculate a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures when a feature quantity of the motion vectors of the two reference pictures satisfies a predetermined value, and to calculate a spatial-axis predicted motion vector based on a motion vector of an image adjacent to the encoding target image within the screen when the feature quantity does not satisfy the predetermined value.
  25.  The video encoding program according to claim 24, causing the computer to use at least one of the inner product, the outer product, and the norm of the motion vectors of the two preceding and following reference pictures as the feature quantity of the motion vectors of the two reference pictures.
  26.  A video decoding program for causing a computer to calculate a predicted motion vector of a decoding target image based on the motion vectors of the two reference pictures preceding and following the decoding target image.
  27.  The video decoding program according to claim 26, causing the computer to calculate, as the predicted motion vector of the decoding target image, a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures.
  28.  The video decoding program according to claim 26, causing the computer to calculate, as the predicted motion vector of the decoding target image, a spatial-axis predicted motion vector based on a motion vector of an image adjacent to the decoding target image within the screen.
  29.  The video decoding program according to claim 26, causing the computer to calculate a temporal-axis predicted motion vector based on a weighted sum of the motion vectors of the two preceding and following reference pictures when a feature quantity of the motion vectors of the two reference pictures satisfies a predetermined value, and to calculate a spatial-axis predicted motion vector based on a motion vector of an image adjacent to the decoding target image within the screen when the feature quantity does not satisfy the predetermined value.
  30.  The video decoding program according to claim 29, causing the computer to use at least one of the inner product, the outer product, and the norm of the motion vectors of the two preceding and following reference pictures as the feature quantity of the motion vectors of the two reference pictures.
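Claims 16 through 30 mirror the encoder-side claims on the decoder side. The point of that symmetry is that encoder and decoder derive the same predicted motion vector from already-decoded data, so only the difference between the actual and predicted vectors needs to be signaled. A short illustrative sketch of that round trip (the function names and the example values are assumptions for illustration, not taken from the application):

```python
def encode_mvd(mv_actual, mv_predicted):
    # Encoder side: transmit only the difference from the shared predictor.
    return (mv_actual[0] - mv_predicted[0], mv_actual[1] - mv_predicted[1])

def decode_mv(mvd, mv_predicted):
    # Decoder side: recompute the same predictor and add the difference back.
    return (mvd[0] + mv_predicted[0], mvd[1] + mv_predicted[1])

# Both sides compute an identical predictor (temporal- or spatial-axis),
# so reconstruction is exact and the signaled residual stays small.
predictor = (3, -1)   # predictor derived identically on both sides
actual = (5, 0)       # motion vector found by the encoder's search
mvd = encode_mvd(actual, predictor)
assert decode_mv(mvd, predictor) == actual
```

The better the predictor tracks the true motion, the smaller the transmitted difference, which is the compression gain the application targets.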
PCT/JP2010/006732 2009-12-07 2010-11-17 Video coding device and video decoding device WO2011070730A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009277952 2009-12-07
JP2009-277952 2009-12-07

Publications (1)

Publication Number Publication Date
WO2011070730A1 true WO2011070730A1 (en) 2011-06-16

Family

ID=44145295

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/006732 WO2011070730A1 (en) 2009-12-07 2010-11-17 Video coding device and video decoding device

Country Status (1)

Country Link
WO (1) WO2011070730A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2500909A (en) * 2012-04-04 2013-10-09 Snell Ltd Selecting motion vectors on the basis of acceleration
JP2015507904A (en) * 2012-01-18 2015-03-12 Electronics And Telecommunications Research Institute Video decoding device
JP2015228693A (en) * 2015-08-04 2015-12-17 日本電信電話株式会社 Image encoding device, image decoding device, image encoding method, image decoding method, image encoding program, and image decoding program
JP2018520558A (en) * 2015-05-15 2018-07-26 Huawei Technologies Co., Ltd. Moving picture encoding method, moving picture decoding method, encoding apparatus, and decoding apparatus
US10523967B2 (en) 2011-09-09 2019-12-31 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07298270A (en) * 1994-04-26 1995-11-10 Matsushita Electric Ind Co Ltd Inter-motion compensation frame prediction coder
JPH1093976A (en) * 1996-09-17 1998-04-10 Sony Corp Motion detector
JPH10224800A (en) * 1997-02-07 1998-08-21 Matsushita Electric Ind Co Ltd Motion vector coding method and decoding method
JP2004129191A (en) * 2002-10-04 2004-04-22 Lg Electronics Inc Direct mode motion vector calculation method for b picture
JP2007124614A (en) * 2005-09-27 2007-05-17 Sanyo Electric Co Ltd Method of coding
JP2008283490A (en) * 2007-05-10 2008-11-20 Ntt Docomo Inc Moving image encoding device, method and program, and moving image decoding device, method and program


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIALI ZHENG ET AL.: "Extended Direct Mode for Hierarchical B Picture Coding", IEEE International Conference on Image Processing (ICIP 2005), vol. 2, 11 September 2005, XP010851379 *
JOEL JUNG ET AL.: "Competition-Based Scheme for Motion Vector Selection and Coding", ITU-T Telecommunication Standardization Sector, Study Group 16 Question 6, Video Coding Experts Group (VCEG), 29th Meeting: Klagenfurt, Austria, 17-18 July 2006, Document VCEG-AC06 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10805639B2 (en) 2011-09-09 2020-10-13 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method
US10523967B2 (en) 2011-09-09 2019-12-31 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method
US11089333B2 (en) 2011-09-09 2021-08-10 Kt Corporation Method for deriving a temporal predictive motion vector, and apparatus using the method
US9374595B2 (en) 2012-01-18 2016-06-21 Electronics And Telecommunications Research Institute Method and device for generating a prediction block to encode and decode an image
US9621913B2 (en) 2012-01-18 2017-04-11 Electronics And Telecommunications Research Institute Method and device for generating a prediction block to encode and decode an image
US9621912B2 (en) 2012-01-18 2017-04-11 Electronics And Telecommunications Research Institute Method and device for generating a prediction block to encode and decode an image
US9635379B2 (en) 2012-01-18 2017-04-25 Electronics And Telecommunications Research Institute Method and device for generating a prediction block to encode and decode an image
US9635380B2 (en) 2012-01-18 2017-04-25 Electronics And Telecommunications Research Institute Method and device for generating a prediction block to encode and decode an image
US9807412B2 (en) 2012-01-18 2017-10-31 Electronics And Telecommunications Research Institute Method and device for encoding and decoding image
US11706438B2 (en) 2012-01-18 2023-07-18 Electronics And Telecommunications Research Institute Method and device for encoding and decoding image
US10397598B2 (en) 2012-01-18 2019-08-27 Electronics And Telecommunications Research Institue Method and device for encoding and decoding image
JP2015507904A (en) * 2012-01-18 2015-03-12 Electronics And Telecommunications Research Institute Video decoding device
GB2500909A (en) * 2012-04-04 2013-10-09 Snell Ltd Selecting motion vectors on the basis of acceleration
US10390036B2 (en) 2015-05-15 2019-08-20 Huawei Technologies Co., Ltd. Adaptive affine motion compensation unit determing in video picture coding method, video picture decoding method, coding device, and decoding device
JP2020174373A (en) 2015-05-15 2020-10-22 Huawei Technologies Co., Ltd. Video coding method, video decoding method, coding device, and decoding device
US10887618B2 (en) 2015-05-15 2021-01-05 Huawei Technologies Co., Ltd. Adaptive affine motion compensation unit determining in video picture coding method, video picture decoding method, coding device, and decoding device
JP2018520558A (en) * 2015-05-15 2018-07-26 Huawei Technologies Co., Ltd. Moving picture encoding method, moving picture decoding method, encoding apparatus, and decoding apparatus
JP2022010251A (en) 2015-05-15 2022-01-14 Huawei Technologies Co., Ltd. Video encoding method, video decoding method, encoding device, and decoding device
US11490115B2 (en) 2015-05-15 2022-11-01 Huawei Technologies Co., Ltd. Adaptive affine motion compensation unit determining in video picture coding method, video picture decoding method, coding device, and decoding device
JP7260620B2 2015-05-15 2023-04-18 Huawei Technologies Co., Ltd. Video encoding method, video decoding method, encoding device and decoding device
US11949908B2 (en) 2015-05-15 2024-04-02 Huawei Technologies Co., Ltd. Adaptive affine motion compensation unit determining in video picture coding method, video picture decoding method, coding device, and decoding device
JP2015228693A (en) * 2015-08-04 2015-12-17 日本電信電話株式会社 Image encoding device, image decoding device, image encoding method, image decoding method, image encoding program, and image decoding program

Similar Documents

Publication Publication Date Title
JP7225381B2 (en) Method and apparatus for processing video signals based on inter-prediction
KR102635983B1 (en) Methods of decoding using skip mode and apparatuses for using the same
US8306120B2 (en) Method and apparatus for predicting motion vector using global motion vector, encoder, decoder, and decoding method
JP6215344B2 (en) Internal view motion prediction within texture and depth view components with asymmetric spatial resolution
US8873633B2 (en) Method and apparatus for video encoding and decoding
JP4373702B2 (en) Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
JP5061179B2 (en) Illumination change compensation motion prediction encoding and decoding method and apparatus
JP4752631B2 (en) Image coding apparatus and image coding method
US11917157B2 (en) Image encoding/decoding method and device for performing PROF, and method for transmitting bitstream
TW202029748A (en) Intra prediction for multi-hypothesis
KR102362840B1 (en) Method and apparatus for decoding an image based on motion prediction in units of sub-blocks in an image coding system
TW201349876A (en) Method for decoding image
US20220182606A1 (en) Video encoding/decoding method and device for deriving weight index for bidirectional prediction of merge candidate, and method for transmitting bitstream
WO2019069602A1 (en) Video coding device, video decoding device, video coding method, video decoding method, program and video system
WO2011070730A1 (en) Video coding device and video decoding device
WO2014156648A1 (en) Method for encoding a plurality of input images and storage medium and device for storing program
US8699576B2 (en) Method of and apparatus for estimating motion vector based on sizes of neighboring partitions, encoder, decoding, and decoding method
JPWO2019069601A1 (en) Video coding device, video decoding device, video coding method, video decoding method and program
WO2014006959A1 (en) Video prediction encoding device, video prediction encoding method, video prediction encoding program, video prediction decoding device, video prediction decoding method, and video prediction decoding program
JP4697802B2 (en) Video predictive coding method and apparatus
US20130294510A1 (en) Video encoding device, video decoding device, video encoding method, video decoding method, and program
JP2014192701A (en) Method, program and device for encoding a plurality of input images
JP7483988B2 (en) Image decoding method and apparatus based on affine motion prediction using constructed affine MVP candidates in an image coding system - Patents.com
JP6646125B2 (en) Video prediction decoding method and video prediction decoding device
KR20240046574A (en) Method and apparatus for implicitly indicating motion vector predictor precision

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10835659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10835659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP