WO2007074543A1 - Moving picture image decoding device and moving picture image coding device - Google Patents


Info

Publication number
WO2007074543A1
WO2007074543A1 (PCT/JP2006/303999)
Authority
WO
WIPO (PCT)
Prior art keywords
image
prediction
processing target
motion vector
vector
Prior art date
Application number
PCT/JP2006/303999
Other languages
French (fr)
Japanese (ja)
Inventor
Tomoyuki Yamamoto
Maki Takahashi
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha
Priority to JP2007551847A (granted as patent JP5020829B2)
Publication of WO2007074543A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures

Definitions

  • the present invention relates to a moving image encoding device and a moving image decoding device that use a plurality of reference images for motion compensation.
  • Non-patent Document 1: ISO/IEC 14496-10
  • the moving picture decoding apparatus 3 includes a variable length code decoding unit 100, a motion vector decoding unit (motion vector decoding means) 101, a prediction vector deriving unit 102, a buffer memory 103, an image decoding unit 104, and a predicted image deriving unit (predicted image deriving means) 105.
  • the variable-length code decoding unit 100 performs variable-length decoding on input encoded data, and decodes code information such as a prediction scheme, prediction residual data, a difference vector, and time information.
  • the motion vector decoding unit 101 decodes a motion vector from the prediction vector and the difference vector.
  • the prediction vector deriving unit 102 derives a prediction vector using the decoded motion vector based on the prediction method.
  • the buffer memory 103 temporarily records motion vectors, images, time information, and the like.
  • the image decoding unit 104 decodes an image from the prediction method, the prediction residual data, and the predicted image.
  • the predicted image deriving unit 105 derives a predicted image by motion compensation using the motion vector and the reference image.
  • variable-length code decoding unit 100 performs variable-length decoding on code data input from the outside of the video decoding device 3 (step 10, hereinafter abbreviated as S10).
  • the output of the variable-length code decoding unit 100 is encoded information such as a prediction method, prediction residual data, a difference vector, and time information.
  • the prediction method is output to the prediction vector deriving unit 102 and the image decoding unit 104.
  • the prediction residual data is output to the image decoding unit 104, the difference vector is output to the motion vector decoding unit 101, and the time information is output to the image decoding unit 104.
  • the prediction vector deriving unit 102 derives a prediction vector according to the prediction method input from the variable length code decoding unit 100 using the motion vector recorded in the buffer memory 103 ( S20).
  • the prediction vector deriving unit 102 outputs the derived prediction vector to the motion vector decoding unit 101 and the buffer memory 103. Details of the operation of the prediction vector deriving unit 102 will be described later.
  • the motion vector decoding unit 101 adds the difference vector input from the variable length code decoding unit 100 to the prediction vector input from the prediction vector deriving unit 102, and outputs it as a motion vector ( S30).
  • the obtained motion vector is output to the buffer memory 103 and recorded.
  • the predicted image deriving unit 105 reads the reference image recorded in the buffer memory 103.
  • the predicted image derivation unit 105 performs motion compensation prediction using the motion vector input from the motion vector decoding unit 101 via the buffer memory 103 and the read reference image, and derives a predicted image.
  • the derived predicted image is output to the image decoding unit 104 (S40).
  • the image decoding unit 104 decodes the image, according to the prediction scheme input from the variable length code decoding unit 100, from the predicted image input from the predicted image deriving unit 105 and the prediction residual data input from the variable length code decoding unit 100 (S50).
  • the decoded image and time information regarding the display timing of the image are output to the buffer memory 103 and recorded.
  • the image recorded in the buffer memory 103 in S50 is output to a moving image display device (not shown) at the time indicated by the time information (S60).
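The reconstruction performed in steps S30 to S50 can be sketched in a few lines of Python. This is a minimal toy model, not the patent's implementation: the "image" is a 1-D list of pixel values and all function and parameter names are illustrative.

```python
def decode_region(pred_vector, diff_vector, residual, reference_image, position):
    # S30: motion vector = prediction vector + difference vector
    motion_vector = pred_vector + diff_vector
    # S40: motion compensation: read the reference sample displaced by the
    # motion vector (a 1-D toy model of predicted image derivation)
    predicted = reference_image[position + motion_vector]
    # S50: reconstruct the pixel as predicted sample + prediction residual
    return predicted + residual, motion_vector

reference = [10, 20, 30, 40, 50]
pixel, mv = decode_region(pred_vector=1, diff_vector=1, residual=-3,
                          reference_image=reference, position=1)
# mv = 1 + 1 = 2, predicted = reference[3] = 40, pixel = 40 - 3 = 37
```

The reconstructed motion vector is then recorded in the buffer so that later regions can use it for prediction vector derivation, mirroring the role of the buffer memory 103.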
  • in the MPEG-4 AVC method, a highly efficient motion compensation prediction method called temporal direct prediction can be used as one of the prediction methods.
  • temporal direct prediction in the MPEG-4 AVC method will be described with reference to FIG. 11.
  • there are two types of reference images: one for forward reference and one for backward reference.
  • temporal direct prediction derives a predicted image of the target image P as shown in FIG. 11.
  • for this, a forward reference image P and a backward reference image P are used.
  • the processing target area (white circle in FIG. 11) on the target image P is referred to as the target area A.
  • the left and right directions in the figure represent the display time for displaying moving images, and the vertical bars represent the images.
  • the vertical direction in the figure represents the position of the area in each image.
  • the forward prediction vector mvLO and the backward prediction vector mvLl are derived using the region B located at the spatially same position as the target area A on an image P defined as the “reference image”.
  • the region B is called the “reference region”, and the motion vector mvCol of the reference region B is called the “reference motion vector”.
  • the image referred to by the reference motion vector mvCol is assumed to be the image P.
  • in the MPEG-4 AVC method, the backward reference image P is used as the reference image P.
  • the forward prediction vector mvLO and the backward prediction vector mvLl are calculated using the reference motion vector mvCol according to the following equations.
  • in Equation (1) and Equation (2), tb is the time interval between the forward reference image P and the target image P.
  • td is the time interval between the forward reference image P and the backward reference image P.
  • tbb represents the time interval between the target image P and the backward reference image P.
  • each of these values is calculated using the time information regarding the display time of each image recorded in the buffer memory 103 in the process of S50 in FIG. 10.
  • the time intervals tb, td, and tbb are obtained from the display time of the target image P, the display time of the forward reference image P, and the display time of the backward reference image P.
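The scaling in Equations (1) and (2) can be sketched as follows. The exact formulas are not legible in this extraction, so this sketch uses the standard MPEG-4 AVC temporal direct rule (mvL0 = mvCol × tb/td, mvL1 = mvL0 − mvCol), which is consistent with the interval definitions above; all names are illustrative.

```python
def temporal_direct_vectors(mv_col, t_cur, t_fwd, t_bwd):
    """Scale the reference motion vector mvCol into mvL0 and mvL1.

    mv_col: (x, y) reference motion vector of the reference region B
    t_cur, t_fwd, t_bwd: display times of the target image, the forward
    reference image, and the backward reference image
    """
    tb = t_cur - t_fwd          # interval: forward reference -> target image
    td = t_bwd - t_fwd          # interval: forward -> backward reference
    # Equation (1), reconstructed: forward prediction vector mvL0 = mvCol * tb / td
    mv_l0 = tuple(v * tb / td for v in mv_col)
    # Equation (2), reconstructed: backward prediction vector mvL1 = mvL0 - mvCol
    mv_l1 = tuple(v * (tb - td) / td for v in mv_col)
    return mv_l0, mv_l1

# Target image halfway between the two reference images: tb/td = 1/2
mv_l0, mv_l1 = temporal_direct_vectors((8, -4), t_cur=2, t_fwd=0, t_bwd=4)
# mv_l0 = (4.0, -2.0), mv_l1 = (-4.0, 2.0)
```

When the target image sits midway between the reference images, mvL0 is half of mvCol and mvL1 points back by the same amount, reflecting the assumption of uniform object motion.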
  • Temporal direct prediction is performed for each of target area A and area B shown in Fig. 11.
  • pmv stands for Predictor of Motion Vector.
  • unlike pmv prediction, which decodes a motion vector using a difference vector between the estimated motion vector and the prediction vector, direct prediction decodes a motion vector without using such a difference vector.
  • the forward prediction vector mvLO and the backward prediction vector mvLl are used as motion vectors in the target area A.
  • the predicted image of the target area A is derived from the region on the forward reference image P indicated by the forward prediction vector mvLO and the region on the backward reference image P indicated by the backward prediction vector mvLl.
  • the detailed configuration of the prediction vector deriving unit 102 for deriving a prediction vector using temporal direct prediction is shown in the block diagram of FIG.
  • the prediction vector derivation unit 102 includes a derivation method selection unit 201, a switch 202, a temporal direct prediction unit 203, a spatial direct prediction unit 204, a pmv prediction unit 205, and a zero vector output unit 206.
  • the derivation method selection unit 201 selects a prediction vector derivation method according to the prediction method input from the variable length code decoding unit 100 and the presence or absence of the reference motion vector recorded in the buffer memory 103.
  • the switch 202 performs switching to the prediction vector derivation method selected by the derivation method selection unit 201.
  • Temporal direct prediction unit 203 obtains a prediction vector by a method determined by temporal direct prediction, and outputs it to motion vector decoding unit 101.
  • the spatial direct prediction unit 204 obtains a prediction vector by a method determined by spatial direct prediction, and outputs it to the motion vector decoding unit 101.
  • when there is a difference vector, the pmv prediction unit 205 obtains a prediction vector by pmv prediction, which encodes the difference vector between the estimated motion vector and the prediction vector, and outputs it to the motion vector decoding unit 101. Regardless of the input to the prediction vector deriving unit 102, the zero vector output unit 206 always outputs the zero vector to the motion vector decoding unit 101 as a prediction vector.
  • the derivation method selection unit 201 requests the reference motion vector mvCol from the buffer memory 103 (S21).
  • the derivation method selection unit 201 determines the presence / absence of the reference motion vector mvCol in the reference region B on the buffer memory 103 based on the notification from the buffer memory 103 (S22).
  • if the reference motion vector mvCol exists, the derivation method selection unit 201 switches the switch 202 and selects the temporal direct prediction unit 203 (S23).
  • the temporal direct prediction unit 203 scales the reference motion vector mvCol acquired from the buffer memory 103 with the temporal information, that is, calculates the forward prediction vector mvLO and the backward prediction vector mvLl using Equation (1) and Equation (2), and outputs them to the motion vector decoding unit 101 as the prediction vectors (S24).
  • if the reference motion vector mvCol does not exist, the derivation method selection unit 201 determines that the reference region B is intra-encoded.
  • the switch 202 is switched and the zero vector output unit 206 is selected (S25).
  • the zero vector output unit 206 sets both the forward prediction vector mvLO and the backward prediction vector mvLl to zero vectors, outputs them to the motion vector decoding unit 101 as prediction vectors, and ends the processing (S26).
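The prior-art selection flow S21 to S26 reduces to a single branch: use temporal direct prediction when mvCol is available, otherwise emit zero vectors. A minimal sketch, with the absence of mvCol (an intra-encoded region B) represented as `None`:

```python
def prior_art_prediction_vectors(mv_col, tb, td):
    """S21-S26 sketch: temporal direct prediction when mvCol exists,
    otherwise the zero vector output unit supplies zero vectors.

    mv_col: (x, y) tuple, or None when the reference region B was
    intra-encoded (illustrative representation).
    tb, td: time intervals as defined for Equations (1) and (2).
    """
    if mv_col is not None:
        # S23-S24: scale mvCol with Equations (1) and (2)
        mv_l0 = tuple(v * tb / td for v in mv_col)
        mv_l1 = tuple(v * (tb - td) / td for v in mv_col)
        return mv_l0, mv_l1
    # S25-S26: region B intra-encoded, so both prediction vectors are zero
    return (0.0, 0.0), (0.0, 0.0)
```

The zero-vector fallback in the last line is exactly the behavior the invention later improves on, since a zero vector has little correlation with the actual motion of the object.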
  • the MPEG-4 AVC method can also perform motion compensated prediction by spatial direct prediction and pmv prediction; however, explanation of the operation of the spatial direct prediction unit 204 and the pmv prediction unit 205 that perform these predictions is omitted.
  • in the prior art, the backward reference image P is used as the reference image P, and the reference motion vector mvCol of the reference region B on the reference image P is used. If the reference motion vector mvCol is available, a prediction vector close to the actual motion of the object in the video can be derived.
  • the present invention has been made in view of the above-described problems, and its object is to realize a moving picture decoding apparatus and a moving picture encoding apparatus capable of deriving a forward prediction vector mvLO and a backward prediction vector mvLl that are not zero vectors even when the reference region B on the backward reference image P is intra-encoded, thereby preventing a decrease in prediction efficiency.
  • the moving picture decoding apparatus according to the present invention includes a prediction vector deriving unit that derives a prediction vector of a processing target region on a processing target image using a reference motion vector of a reference image, motion vector decoding means for reconstructing a motion vector of the processing target region using the prediction vector, and predicted image deriving means for deriving a predicted image of the processing target region from an already-decoded image using the motion vector, wherein the prediction vector deriving unit derives the prediction vector by temporal direct prediction.
  • the prediction vector deriving unit further includes reference motion vector selection means that sets at least two already-decoded images as candidate images for the reference image, selects one of the candidate images as the reference image based on a predetermined selection criterion that keeps the reference motion vector from becoming a zero vector, and selects, as the reference motion vector, the motion vector of the region spatially located at the same position as the processing target region on the reference image.
  • the processing target image refers to an image that is a target for reconstructing an image constituting a moving image from the state of encoded data.
  • the processing target area refers to the portion on the processing target image that is decoded in one unit of the decoding process.
  • a prediction vector is a vector used to derive a motion vector.
  • the motion vector is a vector used for deriving a prediction image when reconstructing an image constituting a moving image by a motion compensation prediction method.
  • a reference image is an image having a reference motion vector for obtaining a prediction vector.
  • the reference motion vector is a motion vector in the reference area of the reference image.
  • the reference area is an area on the reference image that is in the same spatial position as the processing target area.
  • the predicted image is an image that, together with the prediction residual data included in the encoded data, is used to reconstruct the image.
  • the already-decoded image is an image constituting a moving image derived by decoding and reconstructing encoded data.
  • temporal direct prediction refers to a prediction method that derives a prediction vector by scaling the reference motion vector based on the time intervals between the display time of the processing target image, the display time of the reference image, and the display time of the already-decoded image indicated by the reference motion vector.
  • the display time is time information included in the encoded data, and indicates at which time each decoded image constituting the moving image should be reproduced.
  • a prediction vector is derived using a decoded image whose display time is later than that of the processing target image as a reference image.
  • to keep the prediction vector from becoming a zero vector, a reference image is selected from a plurality of candidate images.
  • even if the reference region of the decoded image whose display time is later than the processing target image is intra-encoded and has no motion vector, the reference image can be selected from the plurality of already-decoded images included in the candidate images.
  • therefore, instead of the zero vector, which has little correlation with the motion of the object on the image, the possibility of obtaining a prediction vector that reflects the motion of the object increases, which has the effect of improving the prediction efficiency of the predicted image.
  • the moving picture encoding apparatus according to the present invention includes a prediction vector deriving unit that derives a prediction vector of a processing target region on a processing target image using a reference motion vector of a reference image, motion vector estimating means for estimating a motion vector from the processing target image and an already-encoded image, motion vector encoding means for encoding the motion vector using the prediction vector, and predicted image deriving means for deriving a predicted image of the processing target region from the already-encoded image using the motion vector, wherein the prediction vector deriving unit derives the prediction vector by temporal direct prediction.
  • the prediction vector deriving means sets at least two of the already-encoded images as candidate images for the reference image, selects one of the candidate images as the reference image based on a predetermined selection criterion that keeps the reference motion vector from becoming a zero vector, and includes reference motion vector selection means for selecting, as the reference motion vector, the motion vector of the region located on the reference image at the same spatial position as the processing target region.
  • An already-encoded image is an image obtained by decoding and reconstructing an image from once encoded data among images constituting moving image data.
  • a prediction vector is derived using an already-encoded image whose display time is later than that of the processing target image as a reference image.
  • to keep the prediction vector from becoming a zero vector, a reference image is selected from a plurality of candidate images.
  • FIG. 1 is a flowchart showing a procedure by which a prediction vector deriving unit 112 derives a prediction vector.
  • FIG. 2 is a block diagram showing a configuration of a moving picture decoding apparatus 1 in the first embodiment.
  • FIG. 3 is a block diagram showing a detailed configuration of a prediction vector deriving unit 112.
  • FIG. 5 is a conceptual diagram illustrating a method for deriving a prediction vector by temporal direct prediction when a plurality of reference images are set as reference image candidates.
  • FIG. 6 is a conceptual diagram illustrating a prediction vector derivation method by temporal direct prediction when a non-reference image is included in the reference image candidates.
  • FIG. 7 is a block diagram showing a configuration of a moving image encoding device 2 in the second embodiment.
  • FIG. 8 is a flowchart showing a processing procedure of the moving picture encoding apparatus 2 in the second embodiment.
  • FIG. 9 is a block diagram of a video decoding device 3 in the prior art.
  • FIG. 10 is a flowchart showing an outline of a decoding procedure of the moving picture decoding apparatus 3 in the prior art.
  • FIG. 12 is a block diagram showing a detailed configuration of a prediction vector deriving unit 102 in the prior art.
  • FIG. 13 is a flowchart showing a procedure by which the prediction vector deriving unit 102 derives a prediction vector in the prior art.
  • FIG. 14 is a conceptual diagram showing the prediction vector when the reference region B is intra-encoded in the prior art.
  • FIG. 2 shows the configuration of the moving picture decoding apparatus in the present embodiment.
  • the block diagram of FIG. 2 shows the overall configuration of the video decoding device 1
  • the block diagram of FIG. 3 shows the detailed configuration of the prediction vector deriving unit (prediction vector deriving means) 112 shown in FIG. 2.
  • the difference between the configuration of the moving image decoding apparatus 1 of the present embodiment and that of the moving image decoding apparatus 3 according to the prior art is the prediction vector deriving unit 112. Based on the prediction method input from the variable-length code decoding unit 100, the prediction vector deriving unit 112 derives a prediction vector using the decoded motion vector.
  • the difference between the configuration of the prediction vector deriving unit 112 of the present embodiment and that of the prediction vector deriving unit 102 according to the prior art is the presence of the reference vector selection unit (reference motion vector selection means) 210.
  • the reference vector selection unit 210 selects the image to be used as the reference image P from the plurality of reference image candidates in the buffer memory 103, and acquires the reference motion vector mvCol corresponding to the selected reference image P in response to a request from the derivation method selection unit 201.
  • in temporal direct prediction, the moving picture decoding apparatus 1 determines the reference picture P and the reference motion vector mvCol from a plurality of reference picture candidates in a predetermined procedure, and derives the forward prediction vector mvLO and the backward prediction vector mvLl.
  • first, the derivation method selection unit 201 requests the reference motion vector mvCol from the reference vector selection unit 210.
  • the derivation method selection unit 201 determines whether or not the reference motion vector mvCol exists in the buffer memory 103 based on the response from the buffer memory 103 (S3).
  • the presence of the reference motion vector mvCol in the buffer memory 103 means that the reference motion vector mvCol exists in the reference region B of the backward reference image P.
  • the derivation method selection unit 201 switches the switch 202 and selects the temporal direct prediction unit 203 (S4).
  • the temporal direct prediction unit 203 uses the reference motion vector mvCol acquired from the buffer memory 103 to calculate the forward prediction vector mvLO and the backward prediction vector mvLl according to Equation (1) and Equation (2).
  • the calculation results are output as the forward prediction vector mvLO and the backward prediction vector mvLl to the motion vector decoding unit 101, and the process is terminated (S5).
  • if the reference motion vector does not exist in the backward reference image, the reference vector selection unit 210 sets the forward reference image P as the reference image P, and requests the reference motion vector mvCol of that reference image P from the buffer memory 103 (S6).
  • the derivation method selection unit 201 determines whether or not the reference motion vector mvCol exists in the buffer memory 103 based on the response from the buffer memory 103 (S7).
  • the presence of the reference motion vector mvCol in the buffer memory 103 means that the reference motion vector mvCol exists in the reference region B of the forward reference image P.
  • the derivation method selection unit 201 switches the switch 202 and selects the zero vector output unit 206 (S8).
  • the zero vector output unit 206 outputs the zero vector to the motion vector decoding unit 101 as the forward prediction vector mvLO and the backward prediction vector mvLl, and the process ends (S9).
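The two-stage fallback of FIG. 1 (S1 to S9) can be sketched as a short Python function. The representation of a missing mvCol as `None` and the `scale` callable are illustrative assumptions, not from the patent:

```python
def derive_prediction_vectors(mv_backward_ref, mv_forward_ref, scale):
    """FIG. 1 (S1-S9) sketch: try the backward reference image first, then
    the forward reference image, and only then fall back to zero vectors.

    mv_backward_ref / mv_forward_ref: mvCol of the reference region on each
    image, or None when that region is intra-encoded; scale is a callable
    applying Equations (1) and (2) to mvCol.
    """
    if mv_backward_ref is not None:     # S3: mvCol found in backward image
        return scale(mv_backward_ref)   # S4-S5: temporal direct prediction
    if mv_forward_ref is not None:      # S6-S7: forward image as reference
        return scale(mv_forward_ref)
    zero = (0.0, 0.0)                   # S8-S9: zero vector output unit
    return zero, zero

# Toy scaling: mvL0 = mvCol / 2, mvL1 = -mvCol / 2
halve = lambda mv: (tuple(v / 2 for v in mv), tuple(-v / 2 for v in mv))
print(derive_prediction_vectors(None, (6, 2), halve))
# → ((3.0, 1.0), (-3.0, -1.0))
```

Compared with the prior-art flow, the extra middle branch is the whole invention in miniature: a second candidate is consulted before the zero-vector fallback is accepted.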
  • when the reference motion vector mvCol exists in the backward reference image P, the prediction vector can be derived in the same manner as in the prior art.
  • if the reference motion vector mvCol in the backward reference image P is not available, the reference motion vector mvCol of the forward reference image P, used as the reference image P, is employed instead.
  • when the forward reference image P is used as the reference image P, it is assumed that the object performs uniform motion in the same direction across the regions located at the same position on the forward reference image, the target image, and the backward reference image.
  • even when the prediction vector is derived using the reference motion vector mvCol of another reference image and used for image prediction, the motion at that time can be expressed accurately.
  • therefore, prediction with higher efficiency than the prediction methods based on the prior art is possible.
  • a plurality of reference images can be used as reference image candidates.
  • priorities among the candidates are determined using the time interval between the display time of each reference image candidate and the display time of the target image P.
  • the backward reference images are arranged in ascending order of time interval, and then the forward reference images are arranged in ascending order of time interval; this order is the priority.
  • the priority order of the reference image candidates in FIG. 5 is therefore the backward reference image first, followed by the forward reference images in ascending order of time interval.
  • the reference vector selection unit 210 first sets the reference image candidate with the highest priority as the reference image P.
  • if the reference motion vector mvCol exists, it is selected. If it does not exist, the reference image candidate with the next priority is set as the reference image P, and the same processing is repeated according to the priority.
  • if no reference motion vector is found after all the reference image candidates have been tried, the derivation method selection unit 201 is notified of that fact.
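The priority-ordered scan just described is a simple first-hit search over the candidate list. A minimal sketch, with all identifiers illustrative (the patent does not name these structures):

```python
def select_reference_motion_vector(candidates):
    """Scan the reference image candidates in priority order and return the
    first available reference motion vector.

    candidates: (image_id, mv_col) pairs already sorted by priority, where
    mv_col is None when the co-located region was intra-encoded.
    """
    for image_id, mv_col in candidates:
        if mv_col is not None:
            return image_id, mv_col   # this candidate becomes the reference image
    return None, None                 # all tried: notify the selection unit

# Backward reference first, then forward references in ascending time interval
candidates = [("bwd_ref", None), ("fwd_ref_0", (6, 2)), ("fwd_ref_1", (5, 1))]
ref_image, mv_col = select_reference_motion_vector(candidates)
# → ref_image = "fwd_ref_0", mv_col = (6, 2)
```

Because the candidate list is pre-sorted by the time-interval priority, the first hit is always the temporally closest image whose co-located region carries a motion vector.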
  • FIG. 5 shows the relationship between the reference motion vector mvCol, the forward prediction vector mvLO, and the backward prediction vector mvLl; in this case as well, the motion of the object between the target area A and the reference region B is reflected in the prediction vectors.
  • any one of the reference image candidates is set as the reference image P, and the reference motion vector mvCol corresponding to that reference image P is selected.
  • a non-reference image that is not used for derivation of a predicted image in the related art may be included in the reference image candidate.
  • all already decoded images can be candidates for the reference image.
  • even when the reference image candidates include a non-reference image P, the reference vector selection unit 210 determines the reference motion vector mvCol by checking its presence or absence for each image set as the reference image P in order according to the priority.
  • as the priority, the images that are reference images are arranged in order of display time closest to the display time of the target image P, followed by the non-reference images.
  • in the temporal direct prediction described above, two images, the forward reference image P and the backward reference image P, are used for deriving the predicted image of the target area A.
  • however, only one of the reference images may be used.
  • for example, the forward reference image P may be used as the reference image P, and the predicted image of the target area A may be generated only from the region indicated by the forward prediction vector mvLO.
  • when the object in the reference region B moves uniformly across the regions at the same position on the respective images, the prediction efficiency is improved.
  • it is also possible to set the backward reference image P as the reference image P when the reference motion vector mvCol refers to an image whose display time is later than the backward reference image P.
  • FIG. 7 shows the configuration of moving picture coding apparatus 2 according to the present embodiment.
  • the block diagram of FIG. 7 shows the overall configuration of the video encoding device 2.
  • the moving image encoding apparatus 2 includes an image encoding unit 121, a predicted image deriving unit 105, a motion vector estimation unit (motion vector estimation means) 122, a buffer memory 103, an image decoding unit 104, a prediction vector deriving unit 112, a prediction scheme control unit 124, a variable length coding unit 120, and a motion vector coding unit (motion vector coding means) 123.
  • the elements shown in this figure are the same as those shown in the block diagram of the moving image decoding apparatus 1 in the first embodiment (FIG. 2), except for those described below, and thus their description is omitted.
  • the variable length coding unit 120 variable-length encodes the prediction residual data input from the image coding unit 121, the difference vector input from the motion vector coding unit 123, and the prediction method information input from the prediction method control unit 124, and outputs them to the outside.
  • the image encoding unit 121 obtains prediction residual data using the moving image data input from the outside and the predicted image input from the predicted image deriving unit 105, and outputs it to the image decoding unit 104 and the variable length coding unit 120.
  • the motion vector estimation unit 122 estimates the motion vector using the externally input moving image data and the reference image in the buffer memory 103, and outputs the obtained motion vector to the predicted image deriving unit 105, the buffer memory 103, the prediction vector deriving unit 112, the prediction scheme control unit 124, and the motion vector encoding unit 123.
  • the motion vector encoding unit 123 obtains a difference vector from the motion vector input from the motion vector estimation unit 122 and the prediction vector input from the prediction vector deriving unit 112, according to the prediction method input from the prediction method control unit 124, and outputs it to the variable length encoding unit 120.
  • the prediction method control unit 124 sets a prediction method for deriving a prediction vector based on the motion vector input from the motion vector estimation unit 122, and outputs the prediction method to the prediction vector deriving unit 112, the variable length coding unit 120, and the motion vector encoding unit 123.
  • the motion vector estimation unit 122 performs motion estimation using the input moving image data and the reference image in the buffer memory 103, and obtains a motion vector for each region of the image to be encoded.
  • the motion vector estimation unit 122 records the obtained motion vector in the buffer memory 103, and outputs the motion vector to the prediction vector deriving unit 112, the prediction scheme control unit 124, and the motion vector coding unit 123 (S100).
  • the prediction scheme control unit 124 uses the motion vector input from the motion vector estimation unit 122 to determine the prediction method as one of temporal direct prediction, spatial direct prediction, or pmv prediction, and outputs it to the prediction vector deriving unit 112, the variable length encoding unit 120, and the motion vector encoding unit 123 (S110).
  • the prediction vector deriving unit 112 derives the prediction vector in the same manner as in the first embodiment, and outputs it to the buffer memory 103 and the motion vector encoding unit 123 (S120).
  • the motion vector encoding unit 123 obtains a difference vector.
  • when the prediction method input from the prediction method control unit 124 is pmv prediction, the difference vector is calculated based on the motion vector input from the motion vector estimation unit 122 and the prediction vector input from the prediction vector deriving unit 112.
  • when the prediction method input from the prediction method control unit 124 is temporal direct prediction or spatial direct prediction, the difference vector is set to the zero vector.
  • the motion vector encoding unit 123 outputs the obtained difference vector to the variable length encoding unit 120 (S130).
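The difference-vector step (S130) described above can be sketched as follows. This is an illustrative sketch only: the function name, the string labels for the prediction methods, and the tuple representation of vectors are assumptions, not part of the described apparatus.

```python
def difference_vector(prediction_method, motion_vector, prediction_vector):
    # Sketch of S130: the difference vector passed to the variable length
    # encoding unit depends on the prediction method chosen in S110.
    if prediction_method == "pmv":
        # pmv prediction: signal the motion vector relative to the prediction vector
        return (motion_vector[0] - prediction_vector[0],
                motion_vector[1] - prediction_vector[1])
    if prediction_method in ("temporal_direct", "spatial_direct"):
        # direct prediction: no vector residual is signalled, so use the zero vector
        return (0, 0)
    raise ValueError("unknown prediction method: " + prediction_method)
```

For example, with a motion vector (5, -3) and a prediction vector (2, 1), pmv prediction yields the difference vector (3, -4), while either direct mode yields (0, 0).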
  • the predicted image deriving unit 105 performs motion compensation using the motion vector input from the motion vector estimating unit 122 and the reference image in the buffer memory 103 to obtain a predicted image.
  • The predicted image is output to the image encoding unit 121 and the image decoding unit 104 (S140).
  • The image encoding unit 121 calculates prediction residual data based on the moving image data input from the outside and the predicted image input from the predicted image deriving unit 105, and outputs the data to the image decoding unit 104 and the variable length encoding unit 120 (S150).
  • The image decoding unit 104 decodes the image using the prediction residual data input from the image encoding unit 121 and the predicted image input from the predicted image deriving unit 105, obtaining the already-encoded image.
  • The reconstructed image is recorded in the buffer memory 103.
  • The already-encoded image is used as a reference image both when the prediction vector is derived and when the predicted image is derived.
  • The variable length encoding unit 120 variable-length encodes the prediction residual data input from the image encoding unit 121, the difference vector input from the motion vector encoding unit 123, and the prediction method input from the prediction method control unit 124, and outputs the result as encoded data to the outside of the moving picture coding apparatus 2 (S170).
  • As described above, the moving picture coding apparatus 2 derives a prediction vector from a plurality of reference image candidates according to the prediction vector derivation procedure described in the first embodiment. Since the moving picture is encoded using the derived prediction vector, the moving picture can be encoded with high efficiency.
  • the prediction efficiency is improved by deriving a prediction vector using three or more reference image candidates.
  • The prediction efficiency is also improved by deriving the prediction vector while including non-reference images among the reference image candidates.
  • the prediction vector derivation method in the temporal direct prediction described above can also be used for derivation of a prediction vector when pmv prediction is used as a prediction method.
  • The candidate images are the already-decoded image whose display time is later than that of the processing target image and closest to the processing target image, and the already-decoded image whose display time is earlier than that of the processing target image and closest to the processing target image.
  • The predetermined selection criterion is preferably as follows: if a motion vector exists for the region spatially located at the same position as the processing target region on the candidate image whose display time is later than the processing target image, that candidate image is selected as the reference image; if that motion vector does not exist but a motion vector exists for the region spatially located at the same position on the candidate image whose display time is earlier than the processing target image, that candidate image is selected as the reference image.
  • the reference motion vector selection unit first determines whether or not the already-decoded image immediately after the processing target image can be selected as the reference image. If it can be selected, the predicted image is derived in the same way as in the conventional technology. If it cannot be selected, it is next determined whether or not the already decoded image immediately before the processing target image can be selected as the reference image. If it can be selected, a predicted image is derived using the decoded image as a reference image.
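The two-candidate fallback just described can be sketched as follows; representing a co-located motion vector as a tuple, with None standing for "no motion vector exists" (e.g. the co-located region was intra-coded), is an assumption for illustration.

```python
def select_reference_motion_vector(backward_candidate_mv, forward_candidate_mv):
    # Prefer the co-located motion vector on the already-decoded image
    # immediately after the processing target image (as in the conventional
    # technique); fall back to the image immediately before it.
    if backward_candidate_mv is not None:
        return backward_candidate_mv
    if forward_candidate_mv is not None:
        return forward_candidate_mv
    return None  # neither candidate has a usable reference motion vector
```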
  • According to the above configuration, the already-decoded image whose display time is closest to the processing target image is selected as the reference image, so the time interval between the processing target image and the reference image is short. Therefore, the assumption underlying temporal direct prediction, that an object on the image moves at the same speed in the same direction, holds with higher probability, and the prediction efficiency of the predicted image can be improved.
  • The predetermined selection criterion preferably assigns priorities first to the already-decoded images, included in the candidate images, whose display times are later than the processing target image, in order of display time closest to that of the processing target image, and then to the already-decoded images, included in the candidate images, whose display times are earlier than the processing target image, in order of display time closest to that of the processing target image.
  • In this priority order, the determination of whether a motion vector exists for the region spatially located at the same position as the processing target region on the candidate image is repeated until such a motion vector is found.
  • The criterion is that the candidate image for which the motion vector is found is selected as the reference image.
  • the reference motion vector selection unit first assigns priorities to a plurality of already-decoded images, and determines whether or not the reference image can be selected in descending order of priority.
  • This priority order gives precedence to already-decoded images for which the derived prediction vector is likely to reflect the actual motion of the object.
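The priority rule described above (later-than-target images nearest-first, then earlier-than-target images nearest-first, scanning until a co-located motion vector is found) might be sketched like this; integer display times and a dict mapping each candidate image to its co-located motion vector (or None) are illustrative assumptions.

```python
def order_candidates(target_time, decoded_image_times):
    # Later images first, nearest display time first; then earlier images,
    # again nearest first.
    later = sorted(t for t in decoded_image_times if t > target_time)
    earlier = sorted((t for t in decoded_image_times if t < target_time),
                     reverse=True)
    return later + earlier

def first_with_motion_vector(candidates, colocated_mv):
    # Scan candidates in priority order; pick the first whose co-located
    # region has a motion vector.
    for image in candidates:
        mv = colocated_mv.get(image)
        if mv is not None:
            return image, mv
    return None, None
```

For display times [1, 3, 7, 9] and a target at time 5, the scan order is 7, 9, 3, 1.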
  • When the reference motion vector selection means selects the reference motion vector with an image whose display time is earlier than that of the processing target image as the reference image, the predicted image deriving unit preferably derives the predicted image using only the forward reference image whose display time is closest to the processing target image.
  • a reference image is a decoded image having an area pointed to by a motion vector.
  • the forward reference image refers to a decoded image having a display time earlier than the processing target image among the decoded images having the area indicated by the motion vector.
  • the backward reference image is a decoded image having a display time later than that of the processing target image, among the decoded images having the area indicated by the motion vector.
  • the forward prediction vector is a prediction vector indicating the forward reference image from the processing target region.
  • the backward prediction vector is a prediction vector indicating the backward reference image from the processing target region.
  • In the above configuration, the predicted image of the processing target region is generated only from the region on the forward reference image pointed to by the forward prediction vector.
  • This is because the possibility that the object imaged in the reference region moves at a constant speed from the region on the forward reference image pointed to by the reference motion vector to the region on the processing target image decreases as the time interval spanned by the reference motion vector becomes longer.
  • The already-decoded images used as candidates preferably include both reference images and non-reference images.
  • a non-reference image is a decoded image that is not used for deriving a predicted image.
  • The candidate images are the already-encoded image whose display time is later than that of the processing target image and closest to the processing target image, and the already-encoded image whose display time is earlier than that of the processing target image and closest to the processing target image.
  • The predetermined selection criterion is that, if a motion vector exists for the region spatially located at the same position as the processing target region on the candidate image whose display time is later than the processing target image, that candidate image is selected as the reference image.
  • the reference motion vector selection unit first determines whether or not the already-encoded image immediately after the processing target image can be selected as the reference image. If it can be selected, the prediction image is derived as in the conventional technique. If it cannot be selected, it is next determined whether or not the already-encoded image immediately before the processing target image can be selected as the reference image. If it can be selected, a predicted image is derived using the already-encoded image as a reference image.
  • According to the above configuration, the already-encoded image whose display time is closest to the processing target image is selected as the reference image, so the time interval between the processing target image and the reference image is short. Therefore, the assumption underlying temporal direct prediction, that an object on the image moves at the same speed in the same direction, holds with higher probability, and the prediction efficiency of the predicted image can be improved.
  • The predetermined selection criterion preferably assigns priorities first to the already-encoded images, included in the candidate images, whose display times are later than the processing target image, in order of display time closest to that of the processing target image, and then to the already-encoded images, included in the candidate images, whose display times are earlier than the processing target image, in order of display time closest to that of the processing target image.
  • In this priority order, the determination of whether a motion vector exists for the region spatially located at the same position as the processing target region on the candidate image is repeated until such a motion vector is found, and the candidate image for which it is found is preferably selected as the reference image.
  • the reference motion vector selection means first assigns priorities to a plurality of already-encoded images, and determines whether or not the reference images can be selected in descending order of priority.
  • This priority order gives precedence to already-encoded images for which the derived prediction vector is likely to reflect the actual motion of the object.
  • When the reference motion vector selection means selects the reference motion vector with an image whose display time is earlier than that of the processing target image as the reference image, the predicted image deriving unit preferably derives the predicted image using only the forward reference image whose display time is closest to the processing target image.
  • the prediction image of the processing target region is generated only from the region on the reference image pointed to by the forward prediction vector.
  • This is because the possibility that the object imaged in the reference region moves at a constant speed from the region on the forward reference image indicated by the reference motion vector to the region on the processing target image decreases as the time interval spanned by the reference motion vector becomes longer.
  • The already-encoded images used as candidates preferably include both reference images and non-reference images.
  • The moving picture decoding apparatus includes prediction vector deriving means for deriving a prediction vector of a processing target region on a processing target image using a motion vector of an already-decoded image, and predicted image deriving means for deriving a predicted image of the processing target region from the already-decoded image using the prediction vector.
  • The prediction vector deriving means may include reference motion vector selection means that uses at least two already-decoded images as reference image candidates, selects one of the candidates as the reference image based on a predetermined selection criterion, and selects, as the reference motion vector, the motion vector of the region spatially located at the same position as the processing target region on the reference image; the prediction vector may then be derived by scaling the reference motion vector based on the display time intervals between the images.
  • The moving image decoding apparatus may be configured so that the reference motion vector selection means sets, as the reference image candidates, the already-decoded image whose display time is later than that of the processing target image and closest to the processing target image, and the already-decoded image whose display time is earlier than that of the processing target image and closest to the processing target image. If a motion vector exists for the region located at the same position as the processing target region on the reference image candidate whose display time is later than the processing target image, that motion vector is selected as the reference motion vector; if that motion vector does not exist but a motion vector exists for the region located at the same position on the reference image candidate whose display time is earlier than the processing target image, that motion vector is selected as the reference motion vector.
  • The reference motion vector selection means may set a plurality of already-decoded images as reference image candidates, assigning priorities first to the already-decoded images, included in the reference image candidates, whose display times are later than the processing target image, in order of display time closest to the processing target image, and then to the already-decoded images whose display times are earlier than the processing target image, in order of display time closest to the processing target image.
  • The reference image candidates are then taken as the reference image in priority order, and when a motion vector exists for the region located at the same position as the processing target region on the reference image, that motion vector is selected as the reference motion vector.
  • The moving picture decoding apparatus may be configured so that, when the reference motion vector selection means selects the reference motion vector with an already-decoded image whose display time is later than that of the processing target image as the reference image, the predicted image is derived using the forward reference image whose display time is closest to the processing target image and the backward reference image whose display time is closest to the processing target image, and, when the reference motion vector is selected with an already-decoded image whose display time is earlier than that of the processing target image as the reference image, the predicted image is derived using only the forward reference image whose display time is closest to the processing target image.
  • the moving image decoding apparatus may be configured such that the reference motion vector selection means uses a reference image as the already decoded image.
  • The moving image decoding apparatus may be configured such that the reference motion vector selection means uses reference images and non-reference images as the already-decoded images.
  • The moving image encoding apparatus includes prediction vector deriving means for deriving a prediction vector of a processing target region on the processing target image using a motion vector of an already-encoded image that has undergone the encoding process, and predicted image deriving means for deriving a predicted image of the processing target region from the already-encoded image using the prediction vector.
  • The prediction vector deriving means may include reference motion vector selecting means that uses at least two already-encoded images as reference image candidates, selects one of the candidates as the reference image based on a predetermined selection criterion, and selects, as the reference motion vector, the motion vector of the region spatially located at the same position as the processing target region on the reference image.
  • The prediction vector may be derived by scaling the reference motion vector based on the display time intervals between the images.
  • The moving image encoding apparatus may be configured such that the reference motion vector selection means sets, as the reference image candidates, the already-encoded image whose display time is later than that of the processing target image and closest to the processing target image, and the already-encoded image whose display time is earlier than that of the processing target image and closest to the processing target image. If a motion vector exists for the region located at the same position as the processing target region on the reference image candidate whose display time is later than the processing target image, that motion vector is selected as the reference motion vector; if that motion vector does not exist but a motion vector exists for the region located at the same position on the reference image candidate whose display time is earlier than the processing target image, that motion vector may be selected as the reference motion vector.
  • The reference motion vector selection means may set a plurality of already-encoded images as reference image candidates, assigning priorities first to the already-encoded images, included in the reference image candidates, whose display times are later than the processing target image, in order of display time closest to the processing target image, and then to the already-encoded images whose display times are earlier than the processing target image, in order of display time closest to the processing target image.
  • The reference image candidates are then taken as the reference image in priority order, and when a motion vector exists for the region located at the same position as the processing target region on the reference image, that motion vector may be selected as the reference motion vector.
  • The moving image encoding apparatus may be configured so that, when the reference motion vector selection means selects the reference motion vector with an already-encoded image whose display time is later than that of the processing target image as the reference image, the predicted image is derived using the forward reference image whose display time is closest to the processing target image and the backward reference image whose display time is closest to the processing target image, and, when the reference motion vector is selected with an image whose display time is earlier than that of the processing target image as the reference image, the predicted image is derived using only the forward reference image whose display time is closest to the processing target image.
  • the moving picture encoding apparatus may be configured such that, in addition to the above-described configuration, the reference motion vector selection means uses a reference image as the already-encoded image.
  • The moving picture encoding apparatus may be configured such that the reference motion vector selection means uses reference images and non-reference images as the already-encoded images.
  • Each block of the moving picture decoding device 1 and the moving picture coding device 2, in particular the derivation method selection unit 201, the reference vector selection unit 210, the temporal direct prediction unit 203, the spatial direct prediction unit 204, the pmv prediction unit, and the zero vector output unit 206, may be configured by hardware logic, or may be realized by software using a CPU as follows.
  • In this case, the video decoding device 1 and the video encoding device 2 include a CPU (central processing unit) that executes the instructions of a control program realizing each function, a ROM (read only memory) that stores the program, a RAM (random access memory) into which the program is expanded, and a storage device (recording medium), such as a memory, that stores the program and various data.
  • The object of the present invention can also be achieved by supplying the video decoding device 1 and the video encoding device 2 with a recording medium on which the program code (executable format program, intermediate code program, source program) of the control programs, which are software realizing the functions described above, is recorded, and having the computer (or CPU or MPU) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include tape systems such as magnetic tape and cassette tape; disk systems including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards (including memory cards) and optical cards; and semiconductor memory systems such as mask ROM/EPROM/EEPROM/flash ROM.
  • the moving picture decoding apparatus 1 and the moving picture encoding apparatus 2 may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • the communication network is not particularly limited.
  • For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, or the like can be used.
  • The transmission medium constituting the communication network is not particularly limited. For example, wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL lines can be used, as can wireless media such as infrared rays (e.g. IrDA or remote control), Bluetooth (registered trademark), 802.11 wireless, HDR, mobile phone networks, satellite lines, and terrestrial digital networks.
  • the present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • As described above, the prediction vector deriving means uses at least two already-decoded images as candidate images for the reference image, selects one of the candidate images as the reference image based on a predetermined selection criterion that avoids the reference motion vector becoming a zero vector, and includes reference motion vector selection means for selecting, as the reference motion vector, the motion vector of the region spatially located at the same position as the processing target region on the reference image.
  • Likewise, the prediction vector deriving means uses at least two already-encoded images as candidate images for the reference image, selects one of the candidate images as the reference image based on a predetermined selection criterion that prevents the reference motion vector from becoming a zero vector, and includes reference motion vector selection means for selecting, as the reference motion vector, the motion vector of the region spatially located at the same position as the processing target region on the reference image.
  • Since the moving picture decoding apparatus 1 and the moving picture encoding apparatus 2 can improve the prediction efficiency of the prediction vector, they can be suitably used in apparatuses that encode or decode moving pictures, such as mobile terminal devices, mobile phones, television receivers, and multimedia devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A moving picture image decoding device and a moving picture image coding device include prediction vector deriving means in which two or more reference images are candidates for the standard image. When one of the standard image candidates is selected as the standard image and no standard motion vector exists because the standard region is intra-coded, a separate standard image candidate is selected as the standard image, so that the possibility of deriving a prediction vector matched with the real image motion can be enhanced. Thus, even when a standard region on a backward reference image is intra-coded, a prediction vector other than the zero vector can be derived, and the moving picture image decoding device and moving picture image coding device are prevented from deterioration in prediction efficiency.

Description

Specification

Moving picture decoding apparatus and moving picture encoding apparatus
Technical field

[0001] The present invention relates to a moving image encoding device and a moving image decoding device that use a plurality of reference images for motion compensation.
Background art

[0002] As a conventional technique, a moving picture decoding apparatus using the MPEG-4 AVC method (Non-patent Document 1: ISO/IEC 14496-10) will be described below with reference to the block diagram shown in FIG. 9.
[0003] The moving picture decoding apparatus 3 includes a variable length code decoding unit 100, a motion vector decoding unit (motion vector decoding means) 101, a prediction vector deriving unit 102, a buffer memory 103, an image decoding unit 104, and a predicted image deriving unit (predicted image deriving means) 105.
[0004] The variable length code decoding unit 100 performs variable-length decoding on input encoded data, and decodes coding information such as the prediction method, prediction residual data, difference vector, and time information. The motion vector decoding unit 101 decodes a motion vector from the prediction vector and the difference vector. The prediction vector deriving unit 102 derives a prediction vector using already-decoded motion vectors, based on the prediction method. The buffer memory 103 temporarily records motion vectors, images, time information, and the like. The image decoding unit 104 decodes an image from the prediction method, the prediction residual data, and the predicted image. The predicted image deriving unit 105 derives a predicted image by motion compensation using the motion vector and a reference image.
[0005] <Outline of the decoding procedure using the conventional technique>

The outline of the decoding process in a moving picture decoding apparatus using the conventional technique will be described below with reference to the flowchart of the decoding procedure shown in FIG. 10.
[0006] First, the variable length code decoding unit 100 performs variable-length decoding on encoded data input from outside the moving picture decoding apparatus 3 (step 10; hereinafter abbreviated as S10). The outputs of the variable length code decoding unit 100 are coding information such as the prediction method, prediction residual data, difference vector, and time information. The prediction method is output to the prediction vector deriving unit 102 and the image decoding unit 104, the prediction residual data is output to the image decoding unit 104, the difference vector is output to the motion vector decoding unit 101, and the time information is output to the image decoding unit 104.
[0007] Next, the prediction vector deriving unit 102 derives a prediction vector according to the prediction method input from the variable length code decoding unit 100, using the motion vectors recorded in the buffer memory 103 (S20). The prediction vector deriving unit 102 outputs the derived prediction vector to the motion vector decoding unit 101 and the buffer memory 103. Details of the operation of the prediction vector deriving unit 102 will be described later.
[0008] Next, the motion vector decoding unit 101 adds the difference vector input from the variable length code decoding unit 100 to the prediction vector input from the prediction vector deriving unit 102, and outputs the result as a motion vector (S30). The output motion vector is output to the buffer memory 103 and recorded.
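Step S30 above is a simple component-wise addition. As a sketch (the tuple representation of vectors is an illustrative assumption):

```python
def decode_motion_vector(prediction_vector, difference_vector):
    # S30: reconstruct the motion vector by adding the difference vector
    # decoded from the bitstream to the derived prediction vector.
    return tuple(p + d for p, d in zip(prediction_vector, difference_vector))
```

This mirrors the encoder side, where the difference vector is obtained by subtracting the prediction vector from the estimated motion vector.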
[0009] Next, the predicted image deriving unit 105 reads a reference image recorded in the buffer memory 103. The predicted image deriving unit 105 performs motion-compensated prediction using the motion vector input from the motion vector decoding unit 101 via the buffer memory 103 and the read reference image, derives a predicted image, and outputs the derived predicted image to the image decoding unit 104 (S40).
[0010] Next, the image decoding unit 104 decodes an image based on the predicted image input from the predicted image deriving unit 105 and the prediction residual data input from the variable length code decoding unit 100, according to the prediction method input from the variable length code decoding unit 100 (S50). The decoded image and the time information on its display timing are output to the buffer memory 103 and recorded.
[0011] Next, the image recorded in the buffer memory 103 in S50 is output to a moving image display device (not shown) at the time indicated by the time information (S60).
[0012] As described above, in the conventional technique (MPEG-4 AVC), decoding using motion-compensated prediction is performed.
[0013] <About temporal direct prediction>

In MPEG-4 AVC, a highly efficient motion-compensated prediction method called temporal direct prediction can be used as one of the prediction methods.
[0014] Next, temporal direct prediction in MPEG-4 AVC will be described using the conceptual diagram shown in FIG. 11. In the following description, unless otherwise noted, there are two reference images in total, one for forward reference and one for backward reference.
[0015] Temporal direct prediction is a motion-compensated prediction method that, as shown in FIG. 11, uses two reference images, a forward reference image P_r0,0 and a backward reference image P_r1,0, to derive the predicted image of the target image P_cur.
[0016] Hereinafter, the processing target region (white circle in FIG. 11) on the target image P_cur is called the target region A_cur. In the conceptual diagram of FIG. 11 and those of FIGS. 4, 5, 6, and 14, the horizontal direction of the figure represents the display time at which the moving image is displayed, each vertical bar represents an image, and the vertical direction represents the position of a region within each image.
[0017] Motion compensation in temporal direct prediction uses the forward prediction vector mvL0 and the backward prediction vector mvL1 of the target region A_cur. The forward prediction vector mvL0 and the backward prediction vector mvL1 are calculated on the basis of the motion vector mvCol of the region B_col (the double circle in FIG. 11), which is located at the same spatial position as the target region A_cur on the image P_col designated as the "base image". The region B_col is called the "base region", and the motion vector mvCol is called the "base motion vector". The region pointed to by the base motion vector mvCol is denoted region B_colref, and region B_colref is assumed to lie on the image P_colref.
[0018] In temporal direct prediction of the MPEG-4 AVC method, as shown in FIG. 11, the backward reference image P_r1,0 is the base image P_col. In particular, when there are two reference images, the forward reference image P_r0,0 becomes the image P_colref.
[0019] The forward prediction vector mvL0 and the backward prediction vector mvL1 are calculated from the base motion vector mvCol by the following equations:

mvL0 = tb / tbb × mvCol ... (1)
mvL1 = (tb - td) / tbb × mvCol ... (2)

In equations (1) and (2), tb is the time interval between the display times of the forward reference image P_r0,0 and the target image P_cur, td is the time interval between the forward reference image P_r0,0 and the backward reference image P_r1,0, and tbb is the time interval between the image P_colref and the base image P_col. The values of tb, td, and tbb are calculated, in the process of S50 in FIG. 10, from the time information on the display time of each image recorded in the buffer memory 103. Using the display time T_cur of the target image P_cur, the display time T_r0,0 of the forward reference image P_r0,0, the display time T_r1,0 of the backward reference image P_r1,0, the display time T_col of the base image P_col, and the display time T_colref of the image P_colref, the time intervals tb, td, and tbb are calculated by the following equations:

tb = T_cur - T_r0,0 ... (3)
td = T_r1,0 - T_r0,0 ... (4)
tbb = T_col - T_colref ... (5)
Temporal direct prediction is a prediction method based on the assumption that each of the two objects imaged in the target region A_cur and the region B_col shown in FIG. 11 is moving at a constant velocity in the same direction, from region A_r0,0 to region A_r1,0 and from region B_colref to the base region B_col, respectively.
[0020] While pmv (Predictor of Motion Vector) prediction uses the difference vector between the estimated motion vector and the prediction vector, temporal direct prediction decodes the motion vectors without using such a difference vector. Specifically, the forward prediction vector mvL0 and the backward prediction vector mvL1 are used directly as the motion vectors of the target region A_cur.
[0021] The predicted image of the target region A_cur is generated by performing motion compensation using the region A_r0,0 on the forward reference image P_r0,0 pointed to by the forward prediction vector mvL0 and the region A_r1,0 on the backward reference image P_r1,0 pointed to by the backward prediction vector mvL1.
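To make the scaling concrete, the computation of equations (1) and (2) from the time intervals of equations (3) to (5) can be sketched as follows. This is an illustrative sketch in floating-point arithmetic, not the integer arithmetic of an actual MPEG-4 AVC implementation; the function name and the tuple representation of motion vectors are assumptions made only for the example.

```python
# Illustrative sketch of the temporal direct scaling of equations (1)-(5).
# Motion vectors are (x, y) tuples; display times are plain numbers.

def temporal_direct_vectors(mv_col, t_cur, t_r0, t_r1, t_col, t_colref):
    """Scale the base motion vector mvCol into mvL0 and mvL1."""
    tb = t_cur - t_r0       # eq. (3): target image vs. forward reference
    td = t_r1 - t_r0        # eq. (4): backward vs. forward reference
    tbb = t_col - t_colref  # eq. (5): base image vs. image it points into
    mv_l0 = tuple(tb / tbb * c for c in mv_col)         # eq. (1)
    mv_l1 = tuple((tb - td) / tbb * c for c in mv_col)  # eq. (2)
    return mv_l0, mv_l1

# Case of FIG. 11: P_col = P_r1,0 and P_colref = P_r0,0, so tbb = td.
mv_l0, mv_l1 = temporal_direct_vectors(
    mv_col=(-6.0, 4.0), t_cur=1.0, t_r0=0.0, t_r1=2.0, t_col=2.0, t_colref=0.0)
print(mv_l0)  # (-3.0, 2.0): half of mvCol, pointing toward A_r0,0
print(mv_l1)  # (3.0, -2.0): opposite direction, pointing toward A_r1,0
```

With mvCol pointing from the base region B_col back toward B_colref, mvL0 receives the fraction tb/tbb of that displacement and mvL1 the remainder with opposite sign, which is exactly the constant-velocity assumption described above.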
[0022] <Detailed configuration of the prediction vector deriving unit 102>
The detailed configuration of the prediction vector deriving unit 102, which derives prediction vectors using temporal direct prediction, is shown in the block diagram of FIG. 12.
[0023] The prediction vector deriving unit 102 comprises a derivation method selection unit 201, a switch 202, a temporal direct prediction unit 203, a spatial direct prediction unit 204, a pmv prediction unit 205, and a zero vector output unit 206.

[0024] The derivation method selection unit 201 selects a prediction vector derivation method according to the prediction method input from the variable-length code decoding unit 100 and the presence or absence of a base motion vector recorded in the buffer memory 103. The switch 202 switches to the prediction vector derivation method selected by the derivation method selection unit 201. The temporal direct prediction unit 203 obtains a prediction vector by the method defined for temporal direct prediction and outputs it to the motion vector decoding unit 101. The spatial direct prediction unit 204 obtains a prediction vector by the method defined for spatial direct prediction and outputs it to the motion vector decoding unit 101. When a difference vector exists, the pmv prediction unit 205 obtains a prediction vector by pmv prediction, which encodes the difference vector between the estimated motion vector and the prediction vector, and outputs it to the motion vector decoding unit 101. The zero vector output unit 206 always outputs a zero vector to the motion vector decoding unit 101 as the prediction vector, regardless of the input to the prediction vector deriving unit 102.
[0025] <Procedure for deriving prediction vectors in temporal direct prediction>
Next, the procedure for deriving the forward prediction vector mvL0 and the backward prediction vector mvL1 when temporal direct prediction is used as the motion-compensated prediction method will be described with reference to the flowchart of the derivation procedure shown in FIG. 13.
[0026] First, the derivation method selection unit 201 requests the base motion vector mvCol from the buffer memory 103 (S21).
[0027] Next, based on the notification from the buffer memory 103, the derivation method selection unit 201 determines whether the base motion vector mvCol of the base region B_col exists in the buffer memory 103 (S22).
[0028] If the base motion vector mvCol exists, the process proceeds to S23; if not, the process proceeds to S25.
[0029] Next, the derivation method selection unit 201 switches the switch 202 to select the temporal direct prediction unit 203 (S23).
[0030] Next, the temporal direct prediction unit 203 scales the base motion vector mvCol obtained from the buffer memory 103 with the time information, that is, calculates the forward prediction vector mvL0 and the backward prediction vector mvL1 using equations (1) and (2), outputs them to the motion vector decoding unit 101 as the prediction vectors, and ends the process (S24).
[0031] If, in S22, the derivation method selection unit 201 is notified that the base motion vector mvCol does not exist in the buffer memory 103, it determines that the base region B_col has been intra-coded, and switches the switch 202 to select the zero vector output unit 206 (S25).
[0032] Next, the zero vector output unit 206 sets both the forward prediction vector mvL0 and the backward prediction vector mvL1 to the zero vector, outputs them to the motion vector decoding unit 101 as the prediction vectors, and ends the process (S26).

[0033] In addition to temporal direct prediction, the MPEG-4 AVC method allows motion-compensated prediction by spatial direct prediction and pmv prediction; a description of the operation of the spatial direct prediction unit 204 and the pmv prediction unit 205, which perform these predictions, is omitted here.
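The flow of S21 to S26 can be sketched as follows. The dictionary standing in for the buffer memory 103 and all names below are assumptions of this illustration; the point is that a missing (intra-coded) base region forces both prediction vectors to the zero vector.

```python
# Illustrative sketch of the conventional derivation procedure (S21-S26).
# `base_vectors` stands in for the buffer memory 103: it maps region
# positions on the backward reference image P_r1,0 to motion vectors and
# has no entry for a base region that was intra-coded.

def scale(mv_col, tb, td, tbb):
    """Equations (1) and (2): scale mvCol into (mvL0, mvL1)."""
    return (tuple(tb / tbb * c for c in mv_col),
            tuple((tb - td) / tbb * c for c in mv_col))

def derive_conventional(base_vectors, region, tb, td, tbb):
    mv_col = base_vectors.get(region)   # S21, S22: request and check mvCol
    if mv_col is None:                  # base region B_col is intra-coded
        return (0.0, 0.0), (0.0, 0.0)   # S25, S26: zero vector output
    return scale(mv_col, tb, td, tbb)   # S23, S24: temporal direct scaling

# An intra-coded base region yields zero prediction vectors (the FIG. 14
# situation), however the objects in the scene actually move.
print(derive_conventional({}, (4, 2), tb=1.0, td=2.0, tbb=2.0))
# ((0.0, 0.0), (0.0, 0.0))
```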
[0034] As described above, the temporal direct prediction method of the conventional technique can derive prediction vectors close to the actual motion of an object in the moving image when the backward reference image P_r1,0 is used as the base image P_col and the base motion vector mvCol of the base region B_col on the base image P_col is available.
[0035] However, in the prediction vector derivation method of the conventional technique, when the base region B_col on the backward reference image P_r1,0 has been intra-coded, the base motion vector mvCol is set to the zero vector. In that case, as shown in the conceptual diagram of FIG. 14, equations (1) and (2) are calculated using the zero vector, so the region pointed to by the forward prediction vector mvL0 becomes the region A_r0,0 and the region pointed to by the backward prediction vector becomes the region A_r1,0, and the difference between the forward prediction vector mvL0 and the backward prediction vector mvL1 on the one hand and the actual motion of the target region A_cur on the other often becomes large. Accordingly, there has been a problem that the prediction efficiency decreases.
Disclosure of the invention
[0036] The present invention has been made in view of the above problems, and an object thereof is to realize a moving picture decoding device and a moving picture encoding device capable of deriving a forward prediction vector mvL0 and a backward prediction vector mvL1 that are not zero vectors, and thereby preventing a decrease in prediction efficiency, even when the base region B_col on the backward reference image P_r1,0 has been intra-coded.
[0037] In order to solve the above problems, a moving picture decoding device comprises: prediction vector deriving means for deriving a prediction vector of a processing target region on a processing target image using a base motion vector of a base image; motion vector decoding means for reconstructing a motion vector of the processing target region using the prediction vector; and predicted image deriving means for deriving a predicted image of the processing target region, using the motion vector, from already-decoded images for which decoding processing has been completed; the prediction vector deriving means derives the prediction vector by temporal direct prediction; and the prediction vector deriving means comprises base motion vector selection means for taking at least two or more of the already-decoded images as candidate images for the base image, selecting one of the candidate images as the base image based on a predetermined selection criterion that avoids the base motion vector becoming a zero vector, and selecting, as the base motion vector, the motion vector of the region located at the same spatial position as the processing target region on the base image.
[0038] The processing target image is an image that is the target of reconstructing, from encoded data, an image constituting the moving picture.
[0039] The processing target region is the portion of the processing target image that is decoded as one unit of decoding processing.
[0040] The prediction vector is a vector used to derive a motion vector.
[0041] The motion vector is a vector used to derive a predicted image when reconstructing an image constituting the moving picture by the motion-compensated prediction method.
[0042] The base image is an image having a base motion vector for obtaining the prediction vector.
[0043] The base motion vector is a motion vector in the base region of the base image.
[0044] The base region is a region on the base image located at the same spatial position as the processing target region.
[0045] The predicted image is the image that serves as the basis for reconstructing an image from the prediction residual data included in the encoded data.
[0046] The already-decoded image is an image constituting the moving picture, derived by decoding and reconstruction of the encoded data.
[0047] Temporal direct prediction is a prediction method that derives the prediction vector by scaling the base motion vector based on the time intervals between the display time of the processing target image, the display time of the base image, and the display time of the already-decoded image pointed to by the base motion vector.
[0048] The display time is time information included in the encoded data, indicating at what time each already-decoded image constituting the moving picture should be reproduced.
[0049] In the conventional technique, the prediction vector is derived using, as the base image, an already-decoded image whose display time is later than that of the processing target image; when that already-decoded image has been intra-coded, the prediction vector is set to the zero vector. In the above configuration, by contrast, the base image is selected from among a plurality of candidate images.
[0050] According to the above configuration, even when the already-decoded image whose display time is later than that of the processing target image has been intra-coded and its base region has no motion vector, the base image can be selected from among the plurality of already-decoded images included in the candidate images. The likelihood of obtaining a prediction vector that reflects the motion of an object in the image, rather than a zero vector having little correlation with that motion, is therefore increased, and the prediction efficiency of the predicted image can be improved.
[0051] In order to solve the above problems, a moving picture encoding device comprises: prediction vector deriving means for deriving a prediction vector of a processing target region on a processing target image using a base motion vector of a base image; motion vector estimating means for estimating a motion vector using the processing target image and already-encoded images for which encoding processing has been completed; motion vector encoding means for encoding the motion vector using the prediction vector; and predicted image deriving means for deriving a predicted image of the processing target region from the already-encoded images using the motion vector; the prediction vector deriving means derives the prediction vector by temporal direct prediction; and the prediction vector deriving means comprises base motion vector selection means for taking at least two or more of the already-encoded images as candidate images for the base image, selecting one of the candidate images as the base image based on a predetermined selection criterion that avoids the base motion vector becoming a zero vector, and selecting, as the base motion vector, the motion vector of the region located at the same spatial position as the processing target region on the base image.
[0052] The already-encoded image is an image, among the images constituting the moving picture data, derived by decoding and reconstructing data that has once been encoded.
[0053] In the conventional technique, the prediction vector is derived using, as the base image, an already-encoded image whose display time is later than that of the processing target image; when that already-encoded image has been intra-coded, the prediction vector is set to the zero vector. In the above configuration, by contrast, the base image is selected from among a plurality of candidate images.
[0054] According to the above configuration, even when the already-encoded image whose display time is later than that of the processing target image has been intra-coded and its base region has no motion vector, the base image can be selected from among the plurality of already-encoded images included in the candidate images. The likelihood of obtaining a prediction vector that reflects the motion of an object in the image, rather than a zero vector having little correlation with that motion, is therefore increased, and the prediction efficiency of the predicted image can be improved.
Brief Description of Drawings
[FIG. 1] A flowchart showing the procedure by which the prediction vector deriving unit 112 derives prediction vectors.
[FIG. 2] A block diagram showing the configuration of the moving picture decoding device 1 in the first embodiment.
[FIG. 3] A block diagram showing the detailed configuration of the prediction vector deriving unit 112.
[FIG. 4] A conceptual diagram explaining the method of deriving prediction vectors by temporal direct prediction when the forward reference image P_r0,0 is used as the base image P_col.
[FIG. 5] A conceptual diagram explaining the method of deriving prediction vectors by temporal direct prediction when a plurality of reference images are used as base image candidates.
[FIG. 6] A conceptual diagram explaining the method of deriving prediction vectors by temporal direct prediction when non-reference images are included in the base image candidates.
[FIG. 7] A block diagram showing the configuration of the moving picture encoding device 2 in the second embodiment.
[FIG. 8] A flowchart showing the processing procedure of the moving picture encoding device 2 in the second embodiment.
[FIG. 9] A block diagram of the moving picture decoding device 3 of the conventional technique.
[FIG. 10] A flowchart showing an overview of the decoding procedure of the moving picture decoding device 3 of the conventional technique.
[FIG. 11] A conceptual diagram explaining the method of deriving prediction vectors by temporal direct prediction when the backward reference image P_r1,0 is the base image P_col.
[FIG. 12] A block diagram showing the detailed configuration of the prediction vector deriving unit 102 of the conventional technique.
[FIG. 13] A flowchart showing the procedure by which the prediction vector deriving unit 102 derives prediction vectors in the conventional technique.
[FIG. 14] A conceptual diagram showing the prediction vectors when the base region B_col has been intra-predicted in the conventional technique.
BEST MODE FOR CARRYING OUT THE INVENTION

[0056] [First Embodiment]
Hereinafter, the moving picture decoding device according to the first embodiment of the present invention will be described.
[0057] <Configuration of the moving picture decoding device>
The block diagrams of FIG. 2 and FIG. 3 show the configuration of the moving picture decoding device in the present embodiment. The block diagram of FIG. 2 shows the overall configuration of the moving picture decoding device 1, and the block diagram of FIG. 3 shows the detailed configuration of the prediction vector deriving unit (prediction vector deriving means) 112 shown in FIG. 2.
[0058] Except as described below, the elements shown in these figures are the same as those in the block diagrams used in the description of the conventional technique (FIG. 9 and FIG. 12), and their description is therefore omitted.
[0059] In the block diagram shown in FIG. 2, the part in which the configuration of the moving picture decoding device 1 of the present embodiment differs from that of the moving picture decoding device 3 of the conventional technique is the prediction vector deriving unit 112. Based on the prediction method input from the variable-length code decoding unit 100, the prediction vector deriving unit 112 derives a prediction vector using decoded motion vectors.
[0060] In the block diagram shown in FIG. 3, the part in which the configuration of the prediction vector deriving unit 112 of the present embodiment differs from that of the prediction vector deriving unit 102 of the conventional technique is the presence of the base vector selection unit (base motion vector selection means) 210. The base vector selection unit 210 selects, from the plurality of base image candidates in the buffer memory 103, the image to be used as the base image P_col, and notifies the derivation method selection unit 201 of the base motion vector mvCol corresponding to the selected base image P_col.
[0061] That is, the moving picture decoding device 1 of the present embodiment is characterized in that, in temporal direct prediction, it determines the base image P_col and the base motion vector mvCol from a plurality of base image candidates by a predetermined procedure, and derives the forward prediction vector mvL0 and the backward prediction vector mvL1.
[0062] <Procedure for deriving prediction vectors in the present embodiment>
The procedure by which the moving picture decoding device 1 of the present embodiment derives the forward prediction vector mvL0 and the backward prediction vector mvL1 in temporal direct prediction will be described with reference to the flowchart of FIG. 1.
[0063] First, the derivation method selection unit 201 requests the base motion vector mvCol from the base vector selection unit 210 (S1).
[0064] Next, in response to the request from the derivation method selection unit 201, the base vector selection unit 210 sets the backward reference image P_r1,0 as the base image P_col (P_col = P_r1,0), and then requests the base motion vector mvCol of the base image P_col from the buffer memory 103 (S2).
[0065] Next, based on the response from the buffer memory 103, the derivation method selection unit 201 determines whether the base motion vector mvCol exists in the buffer memory 103 (S3). In this case, the base motion vector mvCol existing in the buffer memory 103 means that the base motion vector mvCol exists in the base region B_col of the backward reference image P_r1,0.
[0066] If the base motion vector mvCol exists in the buffer memory 103 (corresponding to the case of the conceptual diagram of FIG. 11, in which the backward reference image P_r1,0 is the base image P_col), the process proceeds to S4.
[0067] If the base motion vector mvCol does not exist in the buffer memory 103, the process proceeds to S6.
[0068] If it is determined in S3 or S7 that the base motion vector mvCol exists in the buffer memory 103, the derivation method selection unit 201 switches the switch 202 to select the temporal direct prediction unit 203 (S4).
[0069] Next, the temporal direct prediction unit 203 calculates the forward prediction vector mvL0 and the backward prediction vector mvL1 from the base motion vector mvCol obtained from the buffer memory 103 using equations (1) and (2), outputs the results to the motion vector decoding unit 101 as the forward prediction vector mvL0 and the backward prediction vector mvL1, and ends the process (S5).
[0070] If it is determined in S3 that the base motion vector mvCol does not exist in the buffer memory 103, the base vector selection unit 210 sets the forward reference image P_r0,0 as the base image P_col (P_col = P_r0,0), and then requests the base motion vector mvCol of the base image P_col from the buffer memory 103 (S6).
[0071] Next, based on the response from the buffer memory 103, the derivation method selection unit 201 determines whether the base motion vector mvCol exists in the buffer memory 103 (S7). In this case, the base motion vector mvCol existing in the buffer memory 103 means that the base motion vector mvCol exists in the base region B_col of the forward reference image P_r0,0.
[0072] If the base motion vector mvCol exists in the buffer memory 103 (corresponding to the case of the conceptual diagram of FIG. 4), the process proceeds to S4. The conceptual diagram of FIG. 4 shows the relationship between the base motion vector mvCol, the forward prediction vector mvL0, and the backward prediction vector mvL1 when the forward reference image P_r0,0 is used as the base image P_col.
[0073] If the base motion vector mvCol does not exist in the buffer memory 103, the process proceeds to S8.
[0074] If it is determined in S7 that the base motion vector mvCol does not exist in the buffer memory 103, the derivation method selection unit 201 switches the switch 202 to select the zero vector output unit 206 (S8).
[0075] Next, the zero vector output unit 206 outputs zero vectors to the motion vector decoding unit 101 as the forward prediction vector mvL0 and the backward prediction vector mvL1, and the process ends (S9).
According to the above procedure, when the backward reference image P_r1,0 is used as the base image P_col and its base motion vector mvCol is available, the prediction vectors can be derived in the same manner as in the prior art. When mvCol is not available in the backward reference image P_r1,0, the forward reference image P_r0,0 can instead be used as the base image P_col, and its base motion vector mvCol can be used.
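The S3 to S9 fallback described above can be sketched as follows. The sketch assumes the buffer memory can be queried as a mapping from an (image, co-located region) pair to a stored base motion vector or None; all names are illustrative, not from the patent.

```python
ZERO = (0, 0)

def select_base_vector(buffer_memory, backward_ref, forward_ref, region):
    """Return (base image, base motion vector) per the S3-S9 fallback."""
    # S3: try the backward reference image P_r1,0 as base image P_col (prior art).
    mv_col = buffer_memory.get((backward_ref, region))
    if mv_col is not None:
        return backward_ref, mv_col
    # S6-S7: fall back to the forward reference image P_r0,0 as base image.
    mv_col = buffer_memory.get((forward_ref, region))
    if mv_col is not None:
        return forward_ref, mv_col
    # S8-S9: no base motion vector is available; output zero vectors.
    return None, ZERO
```

With a vector stored only for the forward reference, the function falls through to it; with nothing stored, it returns the zero vector, mirroring the zero vector output unit 206.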
[0076] As in the case where the backward reference image P_r1,0 is used as the base image P_col, when the forward reference image P_r0,0 is used as the base image P_col, prediction is performed under the assumption that the object imaged in the target region A_cur and the object imaged in the region B_col, shown in the conceptual diagram of FIG. 4, move at constant velocity in the same direction, from the region A_r0,0 to the region A_r1,0 and from the region B_colref to the region B_r1,0, respectively.
[0077] Thus, compared with the conventional method, in which, as shown in the conceptual diagram of FIG. 14, the prediction vectors are set to zero vectors when the backward reference image P_r1,0 has no base motion vector mvCol, the prediction method according to the present invention derives the prediction vectors from the base motion vector mvCol of another reference image and uses them for image prediction, and can therefore accurately represent the actual motion of the object imaged in the region A_cur. Consequently, more efficient prediction is possible than with the prior-art prediction method.
[0078] <Base image selection method when there are three or more base image candidates>  The present invention is also applicable when there are three or more base image candidates.
[0079] For example, when three or more reference images exist, a plurality of them can be used as base image candidates. The case where, as shown in the conceptual diagram of FIG. 5, three forward reference images P_r0,0, P_r0,1, and P_r0,2 and one backward reference image P_r1,0 exist and all of these reference images are used as base image candidates is described below.
[0080] When three or more base image candidates exist, the priority among the candidates is determined in advance according to the magnitude of the time interval between the display time of each candidate and the display time of the target image P_cur. For example, the backward reference image P_r1,0 is placed first in ascending order of time interval, followed by the forward reference images P_r0,0, P_r0,1, and P_r0,2 in ascending order of time interval, and this order is taken as the priority. Under this rule, the priority of the base image candidates in FIG. 5 is, in order: the backward reference image P_r1,0, the forward reference image P_r0,0, the forward reference image P_r0,1, and the forward reference image P_r0,2.
[0081] The base vector selection unit 210 sets the base image candidate with the highest priority as the base image P_col, and determines whether the base motion vector mvCol of the set base image P_col exists. If mvCol exists, it is selected. If it does not, the candidate with the next-highest priority is set as the base image P_col, and the same processing is repeated in priority order. If no base motion vector mvCol has been selected after all base image candidates have been tried, the derivation method selection unit 201 is notified of that fact.
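The priority rule of [0080] and the scan of [0081] can be sketched together: backward reference images first, then forward reference images, each group ordered by display-time distance from the target image, with the first candidate holding a stored mvCol winning. Names and the dictionary layout are illustrative assumptions.

```python
def order_candidates(backward_refs, forward_refs, t_cur):
    """Priority order: backward refs, then forward refs, each by |t - t_cur|."""
    dist = lambda img: abs(img["t"] - t_cur)
    return sorted(backward_refs, key=dist) + sorted(forward_refs, key=dist)

def scan_for_mv_col(candidates, region):
    """Try each candidate base image in priority order (the [0081] loop)."""
    for img in candidates:
        mv_col = img["mv"].get(region)  # base motion vector of the co-located region
        if mv_col is not None:
            return img, mv_col
    return None, None  # nothing found: notify the derivation method selection unit
```

If the backward reference has no vector for the co-located region, the scan moves to the nearest forward reference, exactly as in the FIG. 5 example.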
[0082] The conceptual diagram of FIG. 5 shows the relationship among the base motion vector mvCol, the forward prediction vector mvL0, and the backward prediction vector mvL1 when the reference image P_r0,1 is set as the base image P_col. In this case as well, assuming that the object imaged in the target region A_cur and the object imaged in the region B_col move at constant velocity in the same direction, from the region A_r0,0 to the region A_r1,0 and from the region B_colref to the region B_r1,0, respectively, the prediction vectors can be calculated from Equations (1) and (2).
[0083] When three or more base image candidates exist, setting one of them as the base image P_col and selecting the base motion vector mvCol corresponding to that base image P_col makes it more likely that prediction vectors highly correlated with the actual motion of the object in the image can be derived, compared with setting mvCol to a zero vector as shown in the conceptual diagram of FIG. 14. More efficient prediction is therefore possible.
[0084] <Prediction method that includes non-reference images among the base image candidates>  In addition to the reference images, non-reference images, which the prior art does not use for deriving a predicted image, may be included among the base image candidates. In this case, every already-decoded image can be a base image candidate. The case where, as shown in the conceptual diagram of FIG. 6, a non-reference image P_nr is included among the base image candidates in addition to the two reference images, the forward reference image P_r0,0 and the backward reference image P_r1,0, is described below.
[0085] Even when the non-reference image P_nr is included among the base image candidates, the base vector selection unit 210 operates as already described. That is, for each image set as the base image P_col in priority order, it determines whether the base motion vector mvCol exists, and thereby selects mvCol.
[0086] When the non-reference image P_nr is included among the base image candidates, the priority can be determined, for example, by ordering the images by how close their display times are to the display time of the target image P_cur, regardless of whether they are reference images or non-reference images. Alternatively, the reference images may first be arranged in order of display time closest to that of the target image P_cur, followed by the non-reference images in the same order, and this sequence taken as the priority.
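The two ordering policies just described can be sketched as follows, with each image represented as an illustrative (display_time, is_reference) pair; this encoding is an assumption for the sketch, not the patent's data model.

```python
def priority_mixed(images, t_cur):
    """Policy 1: ignore reference / non-reference status; sort purely by
    display-time distance from the target image."""
    return sorted(images, key=lambda im: abs(im[0] - t_cur))

def priority_refs_first(images, t_cur):
    """Policy 2: reference images first (by distance), then non-reference
    images (by distance)."""
    dist = lambda im: abs(im[0] - t_cur)
    refs = [im for im in images if im[1]]
    nonrefs = [im for im in images if not im[1]]
    return sorted(refs, key=dist) + sorted(nonrefs, key=dist)
```

The two policies differ exactly when a non-reference image lies closer in time to the target image than some reference image.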
[0087] The conceptual diagram of FIG. 6 shows the relationship among the base motion vector mvCol, the forward prediction vector mvL0, and the backward prediction vector mvL1 when the non-reference image P_nr is set as the base image P_col (P_col = P_nr). In this case as well, assuming that the object imaged in the target region A_cur and the object imaged in the region B_cur move at constant velocity in the same direction, from the region A_r0,0 to the region A_r1,0 and from the region B_colref to the base region B_r1,0, respectively, the prediction vectors can be calculated from Equations (1) and (2).
[0088] Including non-reference images among the base image candidates increases the number of base image candidates whose display times are close to that of the target image P_cur, compared with including only reference images. When the interval between the display time of the target image P_cur and that of the base image P_col is small, the distance on the target image P_cur between the target region A_cur and the region B_cur is short, so the assumption that these two regions move at constant velocity in the same direction is more likely to hold. This raises the likelihood of deriving a motion vector highly correlated with the actual motion of the object in the image, and therefore enables more efficient prediction than when only reference images are used as base image candidates.
[0089] When a non-reference image is used as a base image candidate, at least the motion vectors corresponding to that candidate non-reference image and its display time information must be recorded in the buffer memory 103.
[0090] <Method of using only one reference image for predicted-image derivation>  In the description so far, both the forward reference image P_r0,0 and the backward reference image P_r1,0 have been used to derive the predicted image of the target region A_cur in temporal direct prediction; however, only one of the reference images may be used.
[0091] For example, as shown in the conceptual diagram of FIG. 4, when the prediction vectors are derived with the forward reference image P_r0,0 as the base image P_col, the predicted image of the target region A_cur may be generated from the region A_r0,0 alone. In this case, the object imaged in the base region B_col is more likely to move at constant velocity from the region B_colref to the region B_cur than over the longer time interval from the region B_colref to the region B_r1,0. In other words, the forward prediction vector mvL0 is likely to be more accurate than the backward prediction vector mvL1. Using only the region A_r0,0 therefore improves prediction efficiency.
[0092] Also, when the backward reference image P_r1,0 is used as the base image P_col and the base motion vector mvCol points to an image whose display time is later than the backward reference image P_r1,0, using only the region A_r1,0 to derive the predicted image improves prediction efficiency, just as in the case where the prediction vectors are derived from the forward reference image P_r0,0 alone.
[0093] [Second Embodiment]  A moving picture coding device according to the second embodiment of the present invention is described below.  <Configuration of the moving picture coding device>  The block diagram of FIG. 7 shows the overall configuration of the moving picture coding device 2 according to this embodiment.
[0094] The moving picture coding device 2 comprises an image coding unit 121, a predicted image derivation unit 105, a motion vector estimation unit (motion vector estimation means) 122, a buffer memory 103, an image decoding unit 104, a prediction vector derivation unit 112, a prediction method control unit 124, a variable length coding unit 120, and a motion vector coding unit (motion vector coding means) 123.
[0095] Except for those described below, the elements shown in this figure are the same as those shown in the block diagram of the moving picture decoding device 1 (FIG. 2) in the first embodiment, and their description is therefore omitted.
[0096] The variable length coding unit 120 performs variable-length coding of the prediction residual data input from the image coding unit 121, the difference vector input from the motion vector coding unit 123, and coding information such as the prediction method input from the prediction method control unit 124, and outputs the result to the outside. The image coding unit 121 obtains prediction residual data using the moving picture data input from the outside and the predicted image input from the predicted image derivation unit 105, and outputs it to the image decoding unit 104 and the variable length coding unit 120. The motion vector estimation unit 122 estimates motion vectors using the moving picture data input from the outside and the reference images in the buffer memory 103, and outputs the obtained motion vectors to the predicted image derivation unit 105, the buffer memory 103, the prediction vector derivation unit 112, the prediction method control unit 124, and the motion vector coding unit 123. The motion vector coding unit 123 obtains a difference vector using the motion vector input from the motion vector estimation unit 122, the prediction vector input from the prediction vector derivation unit 112, and the prediction method input from the prediction method control unit 124, and outputs it to the variable length coding unit 120. The prediction method control unit 124 sets the prediction method for prediction vector derivation based on the motion vector input from the motion vector estimation unit 122, and outputs the prediction method to the prediction vector derivation unit 112, the variable length coding unit 120, and the motion vector coding unit 123.
[0097] <Outline of the coding procedure in the moving picture coding device 2>  The outline of the coding procedure in the moving picture coding device 2 is described below with reference to the flowchart shown in FIG. 8.
[0098] First, the motion vector estimation unit 122 performs motion estimation using the input moving picture data and the reference images in the buffer memory 103, and obtains a motion vector for each region of the image to be coded. The motion vector estimation unit 122 records the obtained motion vectors in the buffer memory 103 and outputs them to the prediction vector derivation unit 112, the prediction method control unit 124, and the motion vector coding unit 123 (S100).
[0099] Next, the prediction method control unit 124 uses the motion vector input from the motion vector estimation unit 122 to set the prediction method to one of temporal direct prediction, spatial direct prediction, or pmv prediction, and outputs it to the prediction vector derivation unit 112, the variable length coding unit 120, and the motion vector coding unit 123 (S110).
[0100] Next, the prediction vector derivation unit 112 calculates the prediction vector according to the prediction vector derivation procedure described in the first embodiment, based on the motion vector input from the motion vector estimation unit 122 and the prediction method input from the prediction method control unit 124, and outputs it to the buffer memory 103 and the motion vector coding unit 123 (S120).
[0101] Next, the motion vector coding unit 123 obtains the difference vector. When the prediction method input from the prediction method control unit 124 is pmv prediction, the difference vector is calculated from the motion vector input from the motion vector estimation unit 122 and the prediction vector input from the prediction vector derivation unit 112. When the prediction method input from the prediction method control unit 124 is temporal direct prediction or spatial direct prediction, the difference vector is set to a zero vector. The motion vector coding unit 123 outputs the obtained difference vector to the variable length coding unit 120 (S130).
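The S130 rule can be sketched in a few lines: the difference vector is the motion vector minus the prediction vector under pmv prediction, and the zero vector under either direct mode. The mode labels are illustrative stand-ins.

```python
def difference_vector(mode, mv, pv):
    """Compute the difference vector output to the variable length coder (S130)."""
    if mode == "pmv":
        # pmv prediction: code the residual between motion and prediction vectors.
        return (mv[0] - pv[0], mv[1] - pv[1])
    if mode in ("temporal_direct", "spatial_direct"):
        # Direct modes: no residual vector is transmitted.
        return (0, 0)
    raise ValueError(f"unknown prediction mode: {mode}")
```

This reflects why direct modes save bits: the decoder re-derives the prediction vector itself, so only the mode, not a vector residual, needs to be signaled.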
[0102] Next, the predicted image derivation unit 105 performs motion compensation using the motion vector input from the motion vector estimation unit 122 and the reference images in the buffer memory 103 to obtain the predicted image, and outputs it to the image coding unit 121 and the image decoding unit 104 (S140).
[0103] Next, the image coding unit 121 calculates prediction residual data based on the moving picture data input from the outside and the predicted image input from the predicted image derivation unit 105, and outputs it to the image decoding unit 104 and the variable length coding unit 120 (S150).
[0104] Next, the image decoding unit 104 reconstructs the image as an already-coded image using the prediction residual data input from the image coding unit 121 and the predicted image input from the predicted image derivation unit 105 (S160). The reconstructed image is recorded in the buffer memory 103. Already-coded images are used as base images when prediction vectors are derived, and as reference images when predicted images are derived.
[0105] Next, the variable length coding unit 120 performs variable-length coding of the prediction residual data input from the image coding unit 121, the difference vector input from the motion vector coding unit 123, and the prediction method input from the prediction method control unit 124, and outputs the result as coded data to the outside of the moving picture coding device 2 (S170).
[0106] <Supplementary notes on the second embodiment>  As described above, the moving picture coding device 2 of this embodiment derives prediction vectors using a plurality of base image candidates, according to the prediction vector derivation procedure described in the first embodiment. Since moving pictures are coded using the derived prediction vectors, highly efficient coding of moving pictures is possible.

[0107] In the processing of S120 (prediction vector derivation) in the flowchart of FIG. 8, deriving prediction vectors using three or more base image candidates, as described in the first embodiment, improves prediction efficiency.

[0108] Likewise, in the processing of S120, deriving prediction vectors with non-reference images included among the base image candidates, as described in the first embodiment, improves prediction efficiency.

[0109] The method of deriving prediction vectors in temporal direct prediction described above can also be used for prediction vector derivation when pmv prediction is used as the prediction method.
[0110] <補足事項 > [0110] <Supplementary information>
本発明は上述した各実施形態に限定されるものではなぐ請求項に示した範囲で 種々の変更が可能であり、異なる実施形態にそれぞれ開示された技術的手段を適 宜組み合わせて得られる実施形態についても本発明の技術的範囲に含まれる。  The present invention is not limited to the above-described embodiments, and various modifications can be made within the scope shown in the claims, and embodiments obtained by appropriately combining technical means disclosed in different embodiments. Is also included in the technical scope of the present invention.
[0111] Preferably, the candidate images are the already-decoded image whose display time is later than, and closest to, that of the processing target image and the already-decoded image whose display time is earlier than, and closest to, that of the processing target image; and the predetermined selection criterion is as follows: if a motion vector exists for the region located at the same spatial position as the processing target region on the candidate image whose display time is later than the processing target image, that candidate image is selected as the base image; if that motion vector does not exist and a motion vector exists for the region located at the same spatial position as the processing target region on the candidate image whose display time is earlier than the processing target image, that candidate image is selected as the base image.

[0112] With this configuration, the base motion vector selection means first determines whether the already-decoded image immediately after the processing target image can be selected as the base image. If it can, the predicted image is derived in the same manner as in the prior art. If it cannot, the means then determines whether the already-decoded image immediately before the processing target image can be selected as the base image. If it can, the predicted image is derived using that already-decoded image as the base image.
[0113] With the above configuration, the already-decoded image whose display time is closest to the processing target image is selected as the base image, so the time interval between the processing target image and the base image is short. This raises the probability that the assumption underlying temporal direct prediction, namely that an object on the image moves at constant velocity in the same direction, holds, and therefore improves the prediction efficiency of the predicted image.
[0114] Preferably, the predetermined selection criterion is as follows: the already-decoded images included in the candidate images whose display times are later than the processing target image are given priority in order of display time closest to that of the processing target image, followed by the already-decoded images included in the candidate images whose display times are earlier than the processing target image, likewise in order of display time closest to that of the processing target image; in this priority order, the presence of a motion vector for the region located at the same spatial position as the processing target region on each candidate image is determined repeatedly until such a motion vector is found; and when it is found, that candidate image is selected as the base image.
[0115] With this configuration, the base motion vector selection means first assigns priorities to the plural already-decoded images and determines, in descending order of priority, whether each can be selected as the base image. This priority order favors already-decoded images whose prediction vectors are likely to reflect the actual motion of the object.
[0116] Thus, with the above configuration, setting one of the candidate images as the base image and selecting the base motion vector corresponding to that base image makes it more likely that a prediction vector highly correlated with the actual motion of the object in the image can be derived, compared with setting the base motion vector to a zero vector, and therefore enables more efficient prediction.
[0117] また、前記基準動きベクトル選択手段が、前記処理対象画像よりも表示時刻が早い 既復号画像を前記基準画像として選択した場合、前記予測画像導出手段は、前記 処理対象画像に表示時刻が最も近レ、前方参照画像のみを用いて予測画像を導出 することが好ましい。 [0117] Further, the reference motion vector selection means has a display time earlier than that of the processing target image. When a previously decoded image is selected as the reference image, the predicted image deriving unit preferably derives a predicted image using only the forward reference image whose display time is closest to the processing target image.
[0118] 参照画像とは、動きベクトルが指す領域を有する既復号画像のことをいう。  [0118] A reference image is a decoded image having an area pointed to by a motion vector.
[0119] 前方参照画像とは、動きベクトルが指す領域を有する既復号画像のうち、処理対象 画像よりも表示時刻が早い既復号画像のことをいう。  [0119] The forward reference image refers to a decoded image having a display time earlier than the processing target image among the decoded images having the area indicated by the motion vector.
[0120] 後方参照画像とは、動きベクトルが指す領域を有する既復号画像のうち、処理対象 画像よりも表示時刻が遅い既復号画像のことをいう。 [0120] The backward reference image is a decoded image having a display time later than that of the processing target image, among the decoded images having the area indicated by the motion vector.
[0121] 前方予測ベクトルとは、処理対象領域から前方参照画像を指す予測ベクトルである [0121] The forward prediction vector is a prediction vector indicating the forward reference image from the processing target region.
[0122] 後方予測ベクトルとは、処理対象領域から後方参照画像を指す予測ベクトルである [0122] The backward prediction vector is a prediction vector indicating the backward reference image from the processing target region.
[0123] 当該構成において、表示時刻から見ると、まず基準動きベクトルが指す前方参照画 面があり、次に基準画像があり、次に処理対象画像があり、次に後方参照画像がある という関係、になる。 [0123] In this configuration, when viewed from the display time, there is a forward reference screen pointed to by the base motion vector first, then a base image, next a processing target image, and then a back reference image. ,become.
[0124] これにより、上記の構成によれば、前方参照画像を基準画像として予測ベクトルを 導出する場合に、前方予測ベクトルが指す基準画像上の領域のみから処理対象領 域の予測画像を生成する場合、基準領域に画像としてある物体が基準動きべクトノレ が指す前方参照画像上の領域から処理対象画像上の領域まで等速運動する可能 性の方が、より時間間隔の長い基準動きベクトルが指す前方参照画像上の領域から 後方参照画像上の領域まで等速運動する可能性に較べて高い。  Thus, according to the above configuration, when a prediction vector is derived using a forward reference image as a base image, a prediction image of the processing target region is generated only from the region on the base image pointed to by the forward prediction vector. In this case, the possibility that an object as an image in the reference area moves at a constant speed from the area on the forward reference image pointed to by the reference motion vector to the area on the processing target image points to the reference motion vector having a longer time interval. Compared to the possibility of moving at a constant speed from the area on the forward reference image to the area on the backward reference image.
[0125] さらに、前方参照画像と後方参照画像との時間間隔に比較して短い前方参照画像 と処理対象画像との時間間隔を用いて予測ベクトルを求めるので、時間ダイレクト予 測の仮定である画像上の物体が同一方向へ等速運動をするという条件が成り立つ 可能性が高まる。従って、予測画像の予測効率を向上できるという効果を奏する。 [0125] Further, since the prediction vector is obtained using the time interval between the forward reference image and the processing target image that is shorter than the time interval between the forward reference image and the backward reference image, the image that is an assumption of temporal direct prediction. The possibility that the condition that the upper object moves at the same speed in the same direction increases. Therefore, there is an effect that the prediction efficiency of the predicted image can be improved.
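The display-time scaling just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names `mv_base`, `t_base`, `t_fwd`, and `t_cur` are assumptions, and the sketch assumes the constant-velocity model in which the base motion vector spans the interval from the forward reference image to the base image.

```python
def scale_base_motion_vector(mv_base, t_base, t_fwd, t_cur):
    """Derive a forward prediction vector by display-time scaling.

    mv_base : (x, y) base motion vector of the co-located region on the
              base image, pointing back to a forward reference image.
    t_base, t_fwd, t_cur : display times of the base image, the forward
              reference image, and the processing target image.
    Assumes constant-velocity motion (the temporal direct assumption).
    """
    td = t_base - t_fwd   # interval spanned by the base motion vector
    tb = t_cur - t_fwd    # shorter interval: forward reference to target
    return (mv_base[0] * tb / td, mv_base[1] * tb / td)

# The target lies between the forward reference image (t=0) and the
# base image (t=4), so the vector shrinks by the ratio tb/td = 2/4.
print(scale_base_motion_vector((8, -4), t_base=4, t_fwd=0, t_cur=2))  # → (4.0, -2.0)
```

Because tb is measured against the nearby forward reference image rather than across the full forward-to-backward span, the scaled vector relies on the constant-velocity assumption over a shorter interval, which is the point made above.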
[0126] Preferably, the already-decoded images are reference images and non-reference images.

[0127] A non-reference image is an already-decoded image that is not used for deriving predicted images.

[0128] In this configuration, including non-reference images among the candidate images increases the number of already-decoded images whose display times are close to that of the processing target image, compared with including only reference images among the candidate images.

[0129] Thus, according to the above configuration, when the interval between the display time of the processing target image and the display time of the base image is small, the distance on the processing target image between the target region and the region on the base image pointed to by the base motion vector becomes short, so the assumption that these two regions are moving at constant velocity in the same direction is more likely to hold. The likelihood of deriving a motion vector highly correlated with the actual motion of an object in the image therefore increases, and prediction efficiency can be improved compared with taking only reference images as base image candidates.

[0130] Preferably, the candidate images are the already-encoded image whose display time is later than that of the processing target image and closest to it, and the already-encoded image whose display time is earlier than that of the processing target image and closest to it; and the predetermined selection criterion is that, if a motion vector exists for the region located at the same spatial position as the processing target region on the candidate image whose display time is later than that of the processing target image, that candidate image is selected as the base image, and if no such motion vector exists and a motion vector exists for the region located at the same spatial position as the processing target region on the candidate image whose display time is earlier than that of the processing target image, that candidate image is selected as the base image.

[0131] In this configuration, the base motion vector selection means first determines whether the already-encoded image immediately following the processing target image can be selected as the base image. If it can be selected, the predicted image is derived as in the conventional technique. If it cannot be selected, the means then determines whether the already-encoded image immediately preceding the processing target image can be selected as the base image. If it can be selected, the predicted image is derived with that already-encoded image as the base image.

[0132] Thus, according to the above configuration, the already-encoded image whose display time is closest to that of the processing target image is selected as the base image, so the time interval between the processing target image and the base image is short. The assumption underlying temporal direct prediction, namely that an object in the image moves at constant velocity in the same direction, therefore holds with higher probability, and the prediction efficiency of the predicted image can be improved.
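The two-step criterion above can be sketched as a minimal helper. The function name, argument names, and the convention of using `None` for an intra-coded co-located region are assumptions made for illustration; the patent specifies only the order of the checks.

```python
def select_base_motion_vector(mv_colocated_backward, mv_colocated_forward):
    """Try the already-coded image just after the target first; if its
    co-located region has no motion vector (e.g. it was intra-coded),
    fall back to the image just before the target.

    Returns (base_mv, base_is_backward), or (None, None) if neither
    candidate provides a motion vector.
    """
    if mv_colocated_backward is not None:   # conventional temporal direct case
        return mv_colocated_backward, True
    if mv_colocated_forward is not None:    # fallback to the preceding image
        return mv_colocated_forward, False
    return None, None

# Backward candidate intra-coded: the forward candidate's vector is used.
print(select_base_motion_vector(None, (5, 1)))  # → ((5, 1), False)
```

The returned flag matters downstream, since the choice of backward or forward base image determines whether prediction is bidirectional or forward-only, as described later.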
[0133] Further, the predetermined selection criterion is preferably as follows: the already-encoded images included in the candidate images whose display times are later than that of the processing target image are assigned priorities in order of display time closest to that of the processing target image; following these, the already-encoded images included in the candidate images whose display times are earlier than that of the processing target image are assigned priorities in order of display time closest to that of the processing target image; in this priority order, the candidate images are checked for the presence of a motion vector of the region located at the same spatial position as the processing target region, until such a motion vector is found; and when such a motion vector is found, that candidate image is selected as the base image.

[0134] In this configuration, the base motion vector selection means first assigns priorities to the plurality of already-encoded images and determines, in descending order of priority, whether each can be selected as the base image. The priority order gives precedence to already-encoded images for which the prediction vector is likely to reflect the actual motion of an object.

[0135] Thus, according to the above configuration, by taking one of the candidate images as the base image and selecting the base motion vector corresponding to that base image, a prediction vector highly correlated with the actual motion of an object in the image is more likely to be derived than when the base motion vector is set to the zero vector, so more efficient prediction becomes possible.
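The generalized priority ordering described above can be sketched as below. The tuple representation of a candidate as `(display_time, mv)`, with `None` standing for a missing co-located motion vector, is an assumption made for illustration only.

```python
def order_base_image_candidates(candidates, t_cur):
    """Order candidates as described above: images later than the target
    first (closest display time first), then images earlier than the
    target (closest first). Each candidate is (display_time, mv), where
    mv is None if the co-located region has no motion vector.
    """
    later = sorted((c for c in candidates if c[0] > t_cur),
                   key=lambda c: c[0] - t_cur)
    earlier = sorted((c for c in candidates if c[0] < t_cur),
                     key=lambda c: t_cur - c[0])
    return later + earlier

def select_base_image(candidates, t_cur):
    """Scan in priority order; the first candidate whose co-located
    region has a motion vector becomes the base image."""
    for t, mv in order_base_image_candidates(candidates, t_cur):
        if mv is not None:
            return t, mv
    return None

# The nearest later image (t=3) is intra-coded, so the next later image
# (t=5) supplies the base motion vector.
cands = [(1, (2, 2)), (3, None), (5, (1, 0)), (0, (9, 9))]
print(select_base_image(cands, t_cur=2))  # → (5, (1, 0))
```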
[0136] Further, when the base motion vector selection means selects, as the base image, an already-encoded image whose display time is earlier than that of the processing target image, the predicted image derivation means preferably derives the predicted image using only the forward reference image whose display time is closest to that of the processing target image.

[0137] In this configuration, in display-time order, there is first the forward reference image pointed to by the base motion vector, then the base image, then the processing target image, and then the backward reference image.

[0138] Thus, according to the above configuration, when a prediction vector is derived with a forward reference image as the base image and the predicted image of the processing target region is generated only from the region on the base image pointed to by the forward prediction vector, an object appearing in the base region is more likely to move at constant velocity from the region on the forward reference image pointed to by the base motion vector to the region on the processing target image than to move at constant velocity over the longer time interval from the region on the forward reference image pointed to by the base motion vector to a region on the backward reference image.

[0139] Furthermore, since the prediction vector is obtained using the time interval between the forward reference image and the processing target image, which is shorter than the time interval between the forward reference image and the backward reference image, the assumption underlying temporal direct prediction, namely that an object in the image moves at constant velocity in a single direction, is more likely to hold. The prediction efficiency of the predicted image can therefore be improved.

[0140] Preferably, the already-encoded images are reference images and non-reference images.

[0141] In this configuration, including non-reference images among the candidate images increases the number of already-encoded images whose display times are close to that of the processing target image, compared with including only reference images among the candidate images.

[0142] Thus, according to the above configuration, when the interval between the display time of the processing target image and the display time of the base image is small, the distance on the processing target image between the target region and the region on the base image pointed to by the base motion vector becomes short, so the assumption that these two regions are moving at constant velocity in the same direction is more likely to hold. The likelihood of deriving a motion vector highly correlated with the actual motion of an object in the image therefore increases, and prediction efficiency can be improved compared with taking only reference images as base image candidates.

[0143] The moving picture decoding device according to the present invention may also be configured as a moving picture decoding device comprising prediction vector derivation means for deriving the motion vector of a processing target region on a processing target image using motion vectors of already-decoded images, and predicted image derivation means for deriving a predicted image of the processing target region from an already-decoded image using the prediction vector, wherein the prediction vector derivation means comprises base motion vector selection means that takes at least two already-decoded images as base image candidates, selects one of the candidates as the base image based on a predetermined selection criterion, and selects, as the base motion vector, the motion vector of the region located at the same spatial position as the processing target region on the base image, the prediction vector being derived by scaling the base motion vector based on the display time intervals between the images.

[0144] In addition to the above configuration, the moving picture decoding device according to the present invention may be configured such that the base motion vector selection means takes, as the base image candidates, the already-decoded image whose display time is later than that of the processing target image and closest to it and the already-decoded image whose display time is earlier than that of the processing target image and closest to it; if a motion vector exists for the region located at the same position as the processing target region on the base image candidate whose display time is later than that of the processing target image, that motion vector is selected as the base motion vector; and if no such motion vector exists and a motion vector exists for the region located at the same position as the processing target region on the base image candidate whose display time is earlier than that of the processing target image, that motion vector is selected as the base motion vector.

[0145] Further, in addition to the above configuration, the moving picture decoding device according to the present invention may be configured such that the base motion vector selection means takes a plurality of already-decoded images as base image candidates, arranges those candidates whose display times are later than that of the processing target image in order of display time closest to the processing target image, followed by those whose display times are earlier than that of the processing target image in order of display time closest to the processing target image, to form the priority order of the base image candidates, sets each base image candidate as the base image in this priority order, and, when a motion vector exists for the region located at the same position as the processing target region on that base image, selects that motion vector as the base motion vector.

[0146] In addition to the above configuration, the moving picture decoding device according to the present invention may be configured such that, when the base motion vector selection means selects the base motion vector with an already-decoded image whose display time is later than that of the processing target image as the base image, the predicted image is derived using the forward reference image whose display time is closest to the processing target image and the backward reference image whose display time is closest to the processing target image, and, when the base motion vector is selected with an image whose display time is earlier than that of the processing target image as the base image, the predicted image is derived using only the forward reference image whose display time is closest to the processing target image.
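The switch between bidirectional and forward-only prediction described above can be sketched as a hypothetical helper; the patent does not prescribe this interface, and the argument names are assumptions.

```python
def choose_prediction_references(base_is_backward, nearest_fwd_ref, nearest_bwd_ref):
    """If the base image lies after the target in display order, predict
    from both the nearest forward and nearest backward reference images;
    if it lies before the target, use only the nearest forward reference
    image (forward-only prediction)."""
    if base_is_backward:
        return [nearest_fwd_ref, nearest_bwd_ref]
    return [nearest_fwd_ref]

print(choose_prediction_references(True, "fwd", "bwd"))   # → ['fwd', 'bwd']
print(choose_prediction_references(False, "fwd", "bwd"))  # → ['fwd']
```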
[0147] さらに、本発明に係る動画像復号装置は、前記の構成に加え、前記基準動きべタト ル選択手段は、前記既復号画像として参照画像を用レ、ることを特徴とする構成でもよ レ、。 [0147] Furthermore, in addition to the above configuration, the moving image decoding apparatus according to the present invention may be configured such that the reference motion vector selection means uses a reference image as the already decoded image. Yo!
[0148] また、本発明に係る動画像復号装置は、前記の構成に加え、前記基準動きべタト ル選択手段は、前記既復号画像として参照画像及び非参照画像を用いることを特徴 とする構成でもよい。  [0148] In addition to the above configuration, the moving image decoding apparatus according to the present invention is characterized in that the reference motion vector selection means uses a reference image and a non-reference image as the already-decoded image. But you can.
[0149] さらに、本発明に係る動画像符号化装置は、符号化処理を終えた既符号化画像の 動きべクトノレを用いて処理対象画像上の処理対象領域の動きベクトルを導出する予 測ベクトル導出手段と、前記予測ベクトルを用いて既符号化画像から処理対象領域 の予測画像を導出する予測画像導出手段とを備えた動画像符号化装置において、 前記予測ベクトル導出手段は、少なくとも 2枚以上の既符号化画像を基準画像の候 補とし、所定の選択基準に基づいて前記候補の中から:!枚を基準画像として選択し、 該基準画像上で処理対象領域と空間的に同一位置に位置する領域の動きベクトル を基準動きベクトルとして選択する基準動きベクトル選択手段を備え、該基準動きべ タトルを前記画像間の表示時間間隔に基づレ、てスケーリングして予測べクトルを導出 することを特徴とする構成でもよレ、。 [0149] Furthermore, the moving image encoding apparatus according to the present invention is a method for encoding an already encoded image that has been subjected to the encoding process. Prediction vector deriving means for deriving a motion vector of a processing target region on the processing target image using motion vector nore, and prediction image derivation for deriving a prediction image of the processing target region from an already encoded image using the prediction vector. The prediction vector derivation means uses at least two or more already-encoded images as candidates for the reference image, and selects from among the candidates based on a predetermined selection criterion: A reference motion vector selecting means for selecting a motion vector of a region that is spatially located at the same position as the processing target region on the reference image as a reference motion vector. The prediction vector may be derived by scaling the display based on the display time interval between the images.
[0150] また、本発明に係る動画像符号化装置は、前記の構成に加え、前記基準動きべク トル選択手段は、処理対象画像より表示時間が遅くかつ処理対象画像に表示時間 が最も近い既符号化画像並びに処理対象画像より表示時間が早くかつ処理対象画 像に表示時間が最も近い既符号化画像を前記基準画像候補とし、処理対象画像よ り表示時間が遅い基準画像候補上で処理対象領域と同一位置に位置する領域の動 きべタトノレが存在する場合は該動きべ外ルを基準動きベクトルとして選択し、該動き ベ外ルが存在しない場合は、処理対象画像より表示時間が早い基準画像候補上で 処理対象領域と同一位置に位置する領域の動きベクトルが存在する場合は該動き ベクトルを基準動きベクトルとして選択することを特徴とする構成でもよい。  [0150] In addition to the above configuration, the moving image encoding apparatus according to the present invention is such that the reference motion vector selection means has a display time later than the processing target image and the display time closest to the processing target image. An already-encoded image whose display time is earlier than that of the already-encoded image and the processing target image and whose display time is closest to the processing target image is set as the reference image candidate, and is processed on the reference image candidate whose display time is later than that of the processing target image. If there is a motion vector of a region located at the same position as the target region, the motion vector is selected as a reference motion vector, and if there is no motion vector, the display time from the processing target image is selected. If there is a motion vector of a region located at the same position as the processing target region on an early reference image candidate, the motion vector may be selected as the reference motion vector.
[0151] さらに、本発明に係る動画像符号化装置は、前記の構成に加え、前記基準動きべ タトル選択手段は、複数の既符号化画像を基準画像候補とし、該基準画像候補に含 まれる処理対象画像より表示時間が遅い既符号化画像を処理対象画像に表示時間 が近い順に並べ、その後に該基準画像候補に含まれる処理対象画像より表示時間 が早い既符号化画像を処理対象画像に表示時間が近い順に並べて基準画像候補 の優先順序とし、該優先順序の順に基準画像候補を基準画像として設定して、該基 準画像上で処理対象領域と同一位置に位置する領域の動きベクトルが存在する場 合は該動きベクトルを基準動きベクトルとして選択することを特徴とする構成でもよい  [0151] Further, in the moving image encoding device according to the present invention, in addition to the above-described configuration, the reference motion vector selection means sets a plurality of already-encoded images as reference image candidates and includes them in the reference image candidates. Arrange the pre-encoded images whose display time is later than the processing target image in order of display time close to the processing target images, and then select the pre-encoded images whose display time is earlier than the processing target images included in the reference image candidates. Are arranged in descending order of display time and set as the priority order of the reference image candidates, the reference image candidates are set as reference images in the order of the priority order, and the motion vector of the region located at the same position as the processing target region on the reference image If the motion vector exists, the motion vector may be selected as the reference motion vector.
[0152] また、本発明に係る動画像符号化装置は、前記の構成に加え、前記基準動きべク トル選択手段において、処理対象画像よりも表示時間が遅い既符号化画像を基準 画像として基準動きベクトルが選択された場合には、処理対象画像に表示時間が最 も近い前方参照画像及び処理対象画像に表示時間が最も近い後方参照画像を用 いて予測画像を導出し、処理対象画像よりも表示時間が早い画像を基準画像として 基準動きべタトノレが選択された場合には、処理対象画像に表示時間が最も近い前方 参照画像のみを用いて予測画像を導出することを特徴とする構成でもよい。 [0152] In addition to the above configuration, the moving image encoding apparatus according to the present invention includes the reference motion vector. When the reference motion vector is selected with the pre-encoded image whose display time is slower than the processing target image as the reference image, the forward reference image and the processing target image whose display time is closest to the processing target image. When a reference image is derived using the back reference image with the closest display time for the image, and the reference motion beta is selected with the image having a display time earlier than the processing target image as the reference image, the display time for the processing target image is displayed. The configuration may be such that the predicted image is derived using only the forward reference image that is closest.
[0153] さらに、本発明に係る動画像符号化装置は、前記の構成に加え、前記基準動きべ タトル選択手段は、前記既符号化画像として参照画像を用いることを特徴とする構成 でもよい。  [0153] Further, the moving picture encoding apparatus according to the present invention may be configured such that, in addition to the above-described configuration, the reference motion vector selection means uses a reference image as the already-encoded image.
[0154] また、本発明に係る動画像符号化装置は、前記の構成に加え、前記基準動きべク トル選択手段は、前記既符号化画像として参照画像及び非参照画像を用いることを 特徴とする構成でもよい。  [0154] Further, in addition to the above configuration, the moving picture encoding apparatus according to the present invention is characterized in that the reference motion vector selection means uses a reference image and a non-reference image as the already-encoded image. The structure to do may be sufficient.
[0155] 最後に、動画像復号装置 1および動画像符号化装置 2の各ブロック、特に導出方 式選択部 201、基準ベクトル選択部 210、時間ダイレクト予測部 203、空間ダイレクト 予測部 204、 pmv予測部、およびゼロベクトル出力部 206は、ハードウェアロジックに よって構成してもよいし、次のように CPUを用いてソフトウェアによって実現してもよい  [0155] Finally, each block of the moving picture decoding device 1 and the moving picture coding device 2, particularly the derivation method selection unit 201, the reference vector selection unit 210, the temporal direct prediction unit 203, the spatial direct prediction unit 204, and pmv prediction And the zero vector output unit 206 may be configured by hardware logic, or may be realized by software using a CPU as follows.
[0156] すなわち、動画像復号装置 1および動画像符号化装置 2は、各機能を実現する制 御プログラムの命令を実行する CPU (central processing unit)、上記プログラムを 格納した ROM (read only memory)、上記プログラムを展開する RAM (random acc ess memory)、上記プログラムおよび各種データを格納するメモリ等の記憶装置 (記 録媒体)などを備えている。そして、本発明の目的は、上述した機能を実現するソフト ウェアである動画像復号装置 1および動画像符号ィヒ装置 2の制御プログラムのプログ ラムコード(実行形式プログラム、中間コードプログラム、ソースプログラム)をコンビュ ータで読み取り可能に記録した記録媒体を、上記動画像復号装置 1および動画像符 号化装置 2に供給し、そのコンピュータ (または CPUや MPU)が記録媒体に記録さ れているプログラムコードを読み出し実行することによつても、達成可能である。 That is, the video decoding device 1 and the video encoding device 2 are a CPU (central processing unit) that executes instructions of a control program that realizes each function, and a ROM (read only memory) that stores the program. In addition, a random access memory (RAM) for expanding the program, a storage device (recording medium) such as a memory for storing the program and various data, and the like are provided. The object of the present invention is to provide program codes (execution format program, intermediate code program, source program) for control programs of the video decoding device 1 and the video encoding device 2 which are software that realizes the functions described above. Is supplied to the moving image decoding device 1 and the moving image encoding device 2 and the computer (or CPU or MPU) is recorded on the recording medium. It can also be achieved by reading and executing the code.
[0157] 上記記録媒体としては、例えば、磁気テープやカセットテープ等のテープ系、フロッ ピー(登録商標)ディスク/ハードディスク等の磁気ディスクや CD— ROM/MO/ MD/DVD/CD— R等の光ディスクを含むディスク系、 ICカード(メモリカードを含 む)/光カード等のカード系、あるいはマスク ROM/EPROM/EEPROM/フラッ シュ ROM等の半導体メモリ系などを用いることができる。 [0157] Examples of the recording medium include a tape system such as a magnetic tape and a cassette tape, and a floppy disk. Disk system including magnetic disk such as P (registered trademark) disk / hard disk and optical disk such as CD-ROM / MO / MD / DVD / CD-R, card system such as IC card (including memory card) / optical card Alternatively, a semiconductor memory system such as mask ROM / EPROM / EEPROM / flash ROM can be used.
[0158] また、動画像復号装置 1および動画像符号化装置 2を通信ネットワークと接続可能 に構成し、上記プログラムコードを通信ネットワークを介して供給してもよい。この通信 ネットワークとしては、特に限定されず、例えば、インターネット、イントラネット、エキス トラネット、 LAN、 ISDN、 VAN、 CATV通信網、仮想専用網(virtual private netw ork)、電話回線網、移動体通信網、衛星通信網等が利用可能である。また、通信ネ ットワークを構成する伝送媒体としては、特に限定されず、例えば、 IEEE1394, US B、電力線搬送、ケーブル TV回線、電話線、 ADSL回線等の有線でも、 IrDAゃリモ コンのような赤外線、 Bluetooth (登録商標)、 802. 11無線、 HDR、携帯電話網、 衛星回線、地上波デジタル網等の無線でも利用可能である。なお、本発明は、上記 プログラムコードが電子的な伝送で具現化された、搬送波に坦め込まれたコンビユー タデータ信号の形態でも実現され得る。 [0158] Further, the moving picture decoding apparatus 1 and the moving picture encoding apparatus 2 may be configured to be connectable to a communication network, and the program code may be supplied via the communication network. The communication network is not particularly limited. For example, the Internet, intranet, extranet, LAN, ISDN, VAN, CATV communication network, virtual private network, telephone line network, mobile communication network, A satellite communication network or the like can be used. Also, the transmission medium constituting the communication network is not particularly limited. For example, even in the case of wired communication such as IEEE1394, USB, power line carrier, cable TV line, telephone line, ADSL line, etc., infrared rays such as IrDA remote control. Bluetooth (registered trademark), 802.11 wireless, HDR, mobile phone network, satellite line, terrestrial digital network, etc. can also be used. The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
[0159] 本発明に係る動画像復号装置は、以上のように、前記予測ベクトル導出手段は、少 なくとも 2枚以上の前記既復号画像を前記基準画像の候補画像とし、基準動きべタト ルがゼロベクトルとなることを回避する、所定の選択基準に基づいて前記候補画像の 中の 1枚を前記基準画像として選択し、該基準画像上で前記処理対象領域と空間的 に同一位置に位置する領域の動きベクトルを基準動きべクトノレとして選択する基準動 きべクトノレ選択手段を備えたことを特徴とする。 [0159] As described above, in the video decoding device according to the present invention, the prediction vector deriving unit uses at least two or more of the already-decoded images as candidate images of the reference image, and provides a reference motion vector. Is selected as the reference image based on a predetermined selection criterion that avoids becoming a zero vector, and is positioned spatially at the same position as the processing target region on the reference image. And a reference motion vector selection means for selecting a motion vector of a region to be used as a reference motion vector signal.
[0160] それゆえ、処理対象画像より表示時刻が遅い既復号画像力 Sイントラ符号化されて おり基準領域が動きべ外ルを持たない場合でも、候補画像に含まれる複数の既復 号画像の中から基準画像を選択できるので、画像上の物体の動きとの相関が少ない ゼロベクトルではなぐ画像上の物体の動きを反映した、予測ベクトルを得られる可能 性が高くなるので、予測画像の予測効率を向上できるという効果を奏する。  [0160] Therefore, even when the decoded image power S-intra-coded later in the display time than the processing target image is encoded and the reference region has no motion margin, a plurality of decoded images included in the candidate image are not included. Since a reference image can be selected from among them, there is little correlation with the motion of the object on the image.There is a high possibility of obtaining a prediction vector that reflects the motion of the object on the image beyond the zero vector. There is an effect that the efficiency can be improved.
[0161] また、本発明に係る動画像符号化装置は、以上のように、前記予測ベクトル導出手 段は、少なくとも 2枚以上の前記既符号ィ匕画像を前記基準画像の候補画像とし、基 準動きべタトノレがゼロベクトルとなることを回避する、所定の選択基準に基づいて前 記候補画像の中の 1枚を前記基準画像として選択し、該基準画像上で前記処理対 象領域と空間的に同一位置に位置する領域の動きベクトルを基準動きベクトルとして 選択する基準動きベクトル選択手段を備えたことを特徴とする。 [0161] In addition, as described above, in the moving picture encoding apparatus according to the present invention, the prediction vector deriving means uses at least two or more already-encoded images as the reference image candidate images, and One of the candidate images is selected as the reference image based on a predetermined selection criterion that prevents the quasi-motion betaton from becoming a zero vector, and the processing target region and space are selected on the reference image. And a reference motion vector selection means for selecting a motion vector of a region located at the same position as a reference motion vector.
[0162] Therefore, even when an already-encoded image whose display time is later than that of the processing target image has been intra-coded, so that the reference region carries no motion vector, a reference image can still be selected from among the plural already-encoded images included in the candidate images. This raises the likelihood of obtaining a prediction vector that reflects the motion of objects in the image, rather than a zero vector that correlates poorly with that motion, and thus has the effect of improving the prediction efficiency of the predicted image. Industrial applicability
[0163] Because the video decoding device 1 and the video encoding device 2 according to the present invention improve the prediction efficiency of the prediction vector, they are well suited to devices that encode or decode moving pictures, such as portable terminal equipment, mobile telephones, television receivers, and multimedia equipment.

Claims

What is claimed is:
[1] A video decoding device comprising:
prediction vector derivation means for deriving a prediction vector for a processing target region on a processing target image, using a reference motion vector of a reference image;
motion vector decoding means for reconstructing a motion vector of the processing target region using the prediction vector; and
predicted image derivation means for deriving a predicted image of the processing target region, using the motion vector, from an already-decoded image for which decoding processing has been completed,
the prediction vector derivation means deriving the prediction vector by temporal direct prediction,
wherein the prediction vector derivation means comprises reference motion vector selection means for:
taking at least two of the already-decoded images as candidate images for the reference image,
selecting one of the candidate images as the reference image on the basis of a predetermined selection criterion that avoids the reference motion vector becoming a zero vector, and
selecting, as the reference motion vector, the motion vector of the region located at the same spatial position on the reference image as the processing target region.
[2] The video decoding device according to claim 1, wherein
the candidate images are:
the already-decoded image whose display time is later than the processing target image and closest to the display time of the processing target image, and
the already-decoded image whose display time is earlier than the processing target image and closest to the display time of the processing target image; and
the predetermined selection criterion is that:
if, on the candidate image whose display time is later than the processing target image, a motion vector exists for the region located at the same spatial position as the processing target region, that candidate image is selected as the reference image, and
if that motion vector does not exist, and, on the candidate image whose display time is earlier than the processing target image, a motion vector exists for the region located at the same spatial position as the processing target region, that candidate image is selected as the reference image.
[3] The video decoding device according to claim 1, wherein the predetermined selection criterion is that:
the already-decoded images included in the candidate images whose display times are later than that of the processing target image are ranked in order of display time closest to that of the processing target image;
following those, the already-decoded images included in the candidate images whose display times are earlier than that of the processing target image are ranked in order of display time closest to that of the processing target image;
in that order of priority, each candidate image is tested for the presence of a motion vector for the region located at the same spatial position as the processing target region, the test being repeated until such a motion vector is found; and
when the motion vector is found, that candidate image is selected as the reference image.
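The priority ordering recited in claim 3 amounts to a two-group sort: later pictures first, then earlier pictures, each group nearest-in-display-time first. A minimal sketch; the names and the integer display-time representation are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Picture:
    display_time: int

def prioritize_candidates(candidates, target_time):
    """Rank candidate pictures as in claim 3: pictures displayed after the
    processing target come first (nearest display time first), followed by
    pictures displayed before it (again nearest first)."""
    later = sorted((p for p in candidates if p.display_time > target_time),
                   key=lambda p: p.display_time - target_time)
    earlier = sorted((p for p in candidates if p.display_time < target_time),
                     key=lambda p: target_time - p.display_time)
    return later + earlier
```

For a target at display time 4 and candidates at times 0, 2, 6 and 8, the ranking is 6, 8, 2, 0; the co-located motion vector search of claim 3 would then walk this list in order.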
[4] The video decoding device according to any one of claims 1 to 3, wherein, when the reference motion vector selection means has selected, as the reference image, an already-decoded image whose display time is earlier than the processing target image, the predicted image derivation means derives the predicted image using only the forward reference image whose display time is closest to the processing target image.
[5] The video decoding device according to any one of claims 1 to 4, wherein the already-decoded images are reference images and non-reference images.
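Claims 1 to 5 invoke temporal direct prediction without reciting the scaling step itself. In conventional temporal direct mode (as in H.264), the prediction vector is the selected reference motion vector scaled by the ratio of display-time distances; the sketch below shows that scaling under an assumed integer rounding and is not a formula taken from this patent.

```python
def scale_reference_mv(ref_mv, td, tb):
    """Scale the selected reference motion vector by tb/td, as in
    conventional temporal direct prediction.
    td: display-time distance spanned by the reference motion vector;
    tb: display-time distance from the processing target image to its
    forward reference image.
    Rounding to the nearest integer is one plausible choice."""
    return (round(ref_mv[0] * tb / td), round(ref_mv[1] * tb / td))
```

For example, a reference motion vector of (8, -4) spanning a distance of 4 display units yields a prediction vector of (2, -1) for a target one unit away from its forward reference.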
[6] A video encoding device comprising:
prediction vector derivation means for deriving a prediction vector for a processing target region on a processing target image, using a reference motion vector of a reference image;
motion vector estimation means for estimating a motion vector using the processing target image and an already-encoded image for which encoding processing has been completed;
motion vector encoding means for encoding the motion vector using the prediction vector; and
predicted image derivation means for deriving a predicted image of the processing target region, using the motion vector, from the already-encoded image,
the prediction vector derivation means deriving the prediction vector by temporal direct prediction,
wherein the prediction vector derivation means comprises reference motion vector selection means for:
taking at least two of the already-encoded images as candidate images for the reference image,
selecting one of the candidate images as the reference image on the basis of a predetermined selection criterion that avoids the reference motion vector becoming a zero vector, and
selecting, as the reference motion vector, the motion vector of the region located at the same spatial position on the reference image as the processing target region.
[7] The video encoding device according to claim 6, wherein
the candidate images are:
the already-encoded image whose display time is later than the processing target image and closest to the display time of the processing target image, and
the already-encoded image whose display time is earlier than the processing target image and closest to the display time of the processing target image; and
the predetermined selection criterion is that:
if, on the candidate image whose display time is later than the processing target image, a motion vector exists for the region located at the same spatial position as the processing target region, that candidate image is selected as the reference image, and
if that motion vector does not exist, and, on the candidate image whose display time is earlier than the processing target image, a motion vector exists for the region located at the same spatial position as the processing target region, that candidate image is selected as the reference image.
[8] The video encoding device according to claim 6, wherein the predetermined selection criterion is that:
the already-encoded images included in the candidate images whose display times are later than that of the processing target image are ranked in order of display time closest to that of the processing target image;
following those, the already-encoded images included in the candidate images whose display times are earlier than that of the processing target image are ranked in order of display time closest to that of the processing target image;
in that order of priority, each candidate image is tested for the presence of a motion vector for the region located at the same spatial position as the processing target region, the test being repeated until such a motion vector is found; and
when the motion vector is found, that candidate image is selected as the reference image.
[9] The video encoding device according to any one of claims 6 to 8, wherein, when the reference motion vector selection means has selected, as the reference image, an already-encoded image whose display time is earlier than the processing target image, the predicted image derivation means derives the predicted image using only the forward reference image whose display time is closest to the processing target image.
[10] The video encoding device according to any one of claims 6 to 9, wherein the already-encoded images are reference images and non-reference images.
PCT/JP2006/303999 2005-12-27 2006-03-02 Moving picture image decoding device and moving picture image coding device WO2007074543A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007551847A JP5020829B2 (en) 2005-12-27 2006-03-02 Moving picture decoding apparatus and moving picture encoding apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005376479 2005-12-27
JP2005-376479 2005-12-27

Publications (1)

Publication Number Publication Date
WO2007074543A1 true WO2007074543A1 (en) 2007-07-05

Family

ID=38217770

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/303999 WO2007074543A1 (en) 2005-12-27 2006-03-02 Moving picture image decoding device and moving picture image coding device

Country Status (2)

Country Link
JP (2) JP5020829B2 (en)
WO (1) WO2007074543A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009201112A (en) * 2008-02-20 2009-09-03 Samsung Electronics Co Ltd Coding and decoding methods for direct mode
WO2012073481A1 (en) * 2010-11-29 2012-06-07 パナソニック株式会社 Video-image encoding method and video-image decoding method
WO2012096173A1 (en) * 2011-01-12 2012-07-19 パナソニック株式会社 Video encoding method and video decoding method
WO2012108200A1 (en) * 2011-02-10 2012-08-16 パナソニック株式会社 Moving picture encoding method, moving picture encoding device, moving picture decoding method, moving picture decoding device, and moving picture encoding decoding device
WO2012114694A1 (en) * 2011-02-22 2012-08-30 パナソニック株式会社 Moving image coding method, moving image coding device, moving image decoding method, and moving image decoding device
WO2012176684A1 (en) * 2011-06-22 2012-12-27 ソニー株式会社 Image processing device and method
JP2013009421A (en) * 2007-10-16 2013-01-10 Lg Electronics Inc Method and apparatus for processing video signal
JP2015165694A (en) * 2010-01-14 2015-09-17 サムスン エレクトロニクス カンパニー リミテッド Method and apparatus for decoding image
US9210440B2 (en) 2011-03-03 2015-12-08 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
JP2015226111A (en) * 2014-05-26 2015-12-14 キヤノン株式会社 Image processing apparatus and control method thereof
US9300961B2 (en) 2010-11-24 2016-03-29 Panasonic Intellectual Property Corporation Of America Motion vector calculation method, picture coding method, picture decoding method, motion vector calculation apparatus, and picture coding and decoding apparatus
US9307239B2 (en) 2011-03-14 2016-04-05 Mediatek Inc. Method and apparatus for derivation of motion vector candidate and motion vector prediction candidate
CN103004209B (en) * 2011-01-12 2016-11-30 太阳专利托管公司 Motion image encoding method and dynamic image decoding method
US9781414B2 (en) 2011-03-17 2017-10-03 Hfi Innovation Inc. Method and apparatus for derivation of spatial motion vector candidate and motion vector prediction candidate

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2000308062A (en) * 1999-04-15 2000-11-02 Canon Inc Method for processing animation
JP2003319403A (en) * 2002-04-09 2003-11-07 Lg Electronics Inc Method of predicting block in improved direct mode
JP2004048711A (en) * 2002-05-22 2004-02-12 Matsushita Electric Ind Co Ltd Method for coding and decoding moving picture and data recording medium
JP2004088731A (en) * 2002-07-02 2004-03-18 Matsushita Electric Ind Co Ltd Motion vector deriving method, motion picture encoding method, and motion picture decoding method
JP2004129191A (en) * 2002-10-04 2004-04-22 Lg Electronics Inc Direct mode motion vector calculation method for b picture

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP3977716B2 (en) * 2002-09-20 2007-09-19 株式会社東芝 Video encoding / decoding method and apparatus


Cited By (65)

Publication number Priority date Publication date Assignee Title
US9813702B2 (en) 2007-10-16 2017-11-07 Lg Electronics Inc. Method and an apparatus for processing a video signal
US8867607B2 (en) 2007-10-16 2014-10-21 Lg Electronics Inc. Method and an apparatus for processing a video signal
US10820013B2 (en) 2007-10-16 2020-10-27 Lg Electronics Inc. Method and an apparatus for processing a video signal
US8761242B2 (en) 2007-10-16 2014-06-24 Lg Electronics Inc. Method and an apparatus for processing a video signal
US10306259B2 (en) 2007-10-16 2019-05-28 Lg Electronics Inc. Method and an apparatus for processing a video signal
JP2015046926A (en) * 2007-10-16 2015-03-12 エルジー エレクトロニクス インコーポレイティド Video signal processing method and device
JP2013009421A (en) * 2007-10-16 2013-01-10 Lg Electronics Inc Method and apparatus for processing video signal
US8750369B2 (en) 2007-10-16 2014-06-10 Lg Electronics Inc. Method and an apparatus for processing a video signal
US8750368B2 (en) 2007-10-16 2014-06-10 Lg Electronics Inc. Method and an apparatus for processing a video signal
JP2009201112A (en) * 2008-02-20 2009-09-03 Samsung Electronics Co Ltd Coding and decoding methods for direct mode
JP2013225892A (en) * 2008-02-20 2013-10-31 Samsung Electronics Co Ltd Direct mode coding and decoding device
US8804828B2 (en) 2008-02-20 2014-08-12 Samsung Electronics Co., Ltd Method for direct mode encoding and decoding
KR101505195B1 (en) 2008-02-20 2015-03-24 삼성전자주식회사 Method for direct mode encoding and decoding
JP2015165695A (en) * 2010-01-14 2015-09-17 サムスン エレクトロニクス カンパニー リミテッド Method and apparatus for decoding image
JP2015165694A (en) * 2010-01-14 2015-09-17 サムスン エレクトロニクス カンパニー リミテッド Method and apparatus for decoding image
US9300961B2 (en) 2010-11-24 2016-03-29 Panasonic Intellectual Property Corporation Of America Motion vector calculation method, picture coding method, picture decoding method, motion vector calculation apparatus, and picture coding and decoding apparatus
US9877038B2 (en) 2010-11-24 2018-01-23 Velos Media, Llc Motion vector calculation method, picture coding method, picture decoding method, motion vector calculation apparatus, and picture coding and decoding apparatus
US10218997B2 (en) 2010-11-24 2019-02-26 Velos Media, Llc Motion vector calculation method, picture coding method, picture decoding method, motion vector calculation apparatus, and picture coding and decoding apparatus
US10778996B2 (en) 2010-11-24 2020-09-15 Velos Media, Llc Method and apparatus for decoding a video block
WO2012073481A1 (en) * 2010-11-29 2012-06-07 パナソニック株式会社 Video-image encoding method and video-image decoding method
US10237569B2 (en) 2011-01-12 2019-03-19 Sun Patent Trust Moving picture coding method and moving picture decoding method using a determination whether or not a reference block has two reference motion vectors that refer forward in display order with respect to a current picture
US10904556B2 (en) 2011-01-12 2021-01-26 Sun Patent Trust Moving picture coding method and moving picture decoding method using a determination whether or not a reference block has two reference motion vectors that refer forward in display order with respect to a current picture
US9083981B2 (en) 2011-01-12 2015-07-14 Panasonic Intellectual Property Corporation Of America Moving picture coding method and moving picture decoding method using a determination whether or not a reference block has two reference motion vectors that refer forward in display order with respect to a current picture
US11838534B2 (en) 2011-01-12 2023-12-05 Sun Patent Trust Moving picture coding method and moving picture decoding method using a determination whether or not a reference block has two reference motion vectors that refer forward in display order with respect to a current picture
JPWO2012096173A1 (en) * 2011-01-12 2014-06-09 パナソニック株式会社 Video coding method
CN106878742B (en) * 2011-01-12 2020-01-07 太阳专利托管公司 Moving picture encoding and decoding device
CN106851306B (en) * 2011-01-12 2020-08-04 太阳专利托管公司 Moving picture decoding method and moving picture decoding device
CN103004209A (en) * 2011-01-12 2013-03-27 松下电器产业株式会社 Video encoding method and video decoding method
WO2012096173A1 (en) * 2011-01-12 2012-07-19 パナソニック株式会社 Video encoding method and video decoding method
US11317112B2 (en) 2011-01-12 2022-04-26 Sun Patent Trust Moving picture coding method and moving picture decoding method using a determination whether or not a reference block has two reference motion vectors that refer forward in display order with respect to a current picture
CN106878742A (en) * 2011-01-12 2017-06-20 太阳专利托管公司 Dynamic image coding and decoding device
CN103004209B (en) * 2011-01-12 2016-11-30 太阳专利托管公司 Motion image encoding method and dynamic image decoding method
CN106851306A (en) * 2011-01-12 2017-06-13 太阳专利托管公司 Dynamic image decoding method and dynamic image decoding device
US9819960B2 (en) 2011-02-10 2017-11-14 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US8948261B2 (en) 2011-02-10 2015-02-03 Panasonic Intellectual Property Corporation Of America Moving picture coding and decoding method with replacement and temporal motion vectors
CN107277542B (en) * 2011-02-10 2019-12-10 太阳专利托管公司 Moving picture decoding method and moving picture decoding device
CN103477637A (en) * 2011-02-10 2013-12-25 松下电器产业株式会社 Moving picture encoding method, moving picture encoding device, moving picture decoding method, moving picture decoding device, and moving picture encoding decoding device
US9693073B1 (en) 2011-02-10 2017-06-27 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10911771B2 (en) 2011-02-10 2021-02-02 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
CN107277542A (en) * 2011-02-10 2017-10-20 太阳专利托管公司 dynamic image decoding method, dynamic image decoding device
JP2013219813A (en) * 2011-02-10 2013-10-24 Panasonic Corp Dynamic image decoding method and dynamic image decoding device
US9204146B2 (en) 2011-02-10 2015-12-01 Panasonic Intellectual Property Corporation Of America Moving picture coding and decoding method with replacement and temporal motion vectors
US9641859B2 (en) 2011-02-10 2017-05-02 Sun Patent Trust Moving picture coding and decoding method with replacement and temporal motion vectors
US9432691B2 (en) 2011-02-10 2016-08-30 Sun Patent Trust Moving picture coding and decoding method with replacement and temporal motion vectors
JP5323273B2 (en) * 2011-02-10 2013-10-23 パナソニック株式会社 Moving picture coding method and moving picture coding apparatus
US10194164B2 (en) 2011-02-10 2019-01-29 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US11418805B2 (en) 2011-02-10 2022-08-16 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
WO2012108200A1 (en) * 2011-02-10 2012-08-16 パナソニック株式会社 Moving picture encoding method, moving picture encoding device, moving picture decoding method, moving picture decoding device, and moving picture encoding decoding device
US11838536B2 (en) 2011-02-10 2023-12-05 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10623764B2 (en) 2011-02-10 2020-04-14 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
CN103477637B (en) * 2011-02-10 2017-04-26 太阳专利托管公司 Moving picture encoding method and moving picture encoding device
JP6108309B2 (en) * 2011-02-22 2017-04-05 サン パテント トラスト Moving picture encoding method, moving picture encoding apparatus, moving picture decoding method, and moving picture decoding apparatus
US10404998B2 (en) 2011-02-22 2019-09-03 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus
WO2012114694A1 (en) * 2011-02-22 2012-08-30 パナソニック株式会社 Moving image coding method, moving image coding device, moving image decoding method, and moving image decoding device
JPWO2012114694A1 (en) * 2011-02-22 2014-07-07 パナソニック株式会社 Moving picture encoding method, moving picture encoding apparatus, moving picture decoding method, and moving picture decoding apparatus
US9832480B2 (en) 2011-03-03 2017-11-28 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10771804B2 (en) 2011-03-03 2020-09-08 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US11284102B2 (en) 2011-03-03 2022-03-22 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US9210440B2 (en) 2011-03-03 2015-12-08 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10237570B2 (en) 2011-03-03 2019-03-19 Sun Patent Trust Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US9860552B2 (en) 2011-03-14 2018-01-02 Hfi Innovation Inc. Method and apparatus for derivation of motion vector candidate and motion vector prediction candidate
US9307239B2 (en) 2011-03-14 2016-04-05 Mediatek Inc. Method and apparatus for derivation of motion vector candidate and motion vector prediction candidate
US9781414B2 (en) 2011-03-17 2017-10-03 Hfi Innovation Inc. Method and apparatus for derivation of spatial motion vector candidate and motion vector prediction candidate
WO2012176684A1 (en) * 2011-06-22 2012-12-27 ソニー株式会社 Image processing device and method
JP2015226111A (en) * 2014-05-26 2015-12-14 キヤノン株式会社 Image processing apparatus and control method thereof

Also Published As

Publication number Publication date
JPWO2007074543A1 (en) 2009-06-04
JP5020829B2 (en) 2012-09-05
JP2011160468A (en) 2011-08-18

Similar Documents

Publication Publication Date Title
WO2007074543A1 (en) Moving picture image decoding device and moving picture image coding device
US11729415B2 (en) Method and device for encoding a sequence of images and method and device for decoding a sequence of images
US10511855B2 (en) Method and system for predictive decoding with optimum motion vector
US9369731B2 (en) Method and apparatus for estimating motion vector using plurality of motion vector predictors, encoder, decoder, and decoding method
KR102523002B1 (en) Method and device for image decoding according to inter-prediction in image coding system
TW201904284A (en) Sub-prediction unit temporal motion vector prediction (sub-pu tmvp) for video coding
WO2011127828A1 (en) Method for performing localized multihypothesis prediction during video coding of a coding unit, and associated apparatus
JP4527677B2 (en) Moving picture coding method, moving picture coding apparatus, moving picture coding program
JP2004056823A (en) Motion vector encoding/decoding method and apparatus
JP5281597B2 (en) Motion vector prediction method, motion vector prediction apparatus, and motion vector prediction program
JP2013031131A (en) Moving image encoder, moving image encoding method, and moving image encoding program

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase (Ref document number: 2007551847; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 06715108; Country of ref document: EP; Kind code of ref document: A1)