WO2013046707A1 - Moving picture encoding device, moving picture encoding method, moving picture encoding program, transmission device, transmission method, transmission program, moving picture decoding device, moving picture decoding method, moving picture decoding program, reception device, reception method, and reception program - Google Patents
- Publication number
- WO2013046707A1 (PCT/JP2012/006225)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- prediction
- predicted motion
- predicted
- candidate
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
Definitions
- the present invention relates to a moving image encoding and decoding technique, and more particularly to a moving image encoding and decoding technique using motion compensated prediction.
- MPEG-4 AVC/H.264 is a typical video compression encoding system.
- In MPEG-4 AVC/H.264, motion compensation is used: a picture is divided into a plurality of rectangular blocks, a picture that has already been encoded or decoded is used as a reference picture, and motion from the reference picture is predicted.
- The technique of predicting motion by this motion compensation is called inter prediction.
- In inter prediction in MPEG-4 AVC/H.264, a plurality of pictures can be used as reference pictures, and the most suitable reference picture is selected for each block from among them to perform motion compensation. A reference index is therefore assigned to each reference picture, and the reference picture is specified by this reference index.
- Two types of inter prediction are defined: L0 prediction (list 0 prediction), mainly used as forward prediction, and L1 prediction (list 1 prediction).
- Bi-prediction, which uses the two inter predictions of L0 prediction and L1 prediction simultaneously, is also defined.
- When bi-prediction is performed, the inter-predicted signals of the L0 prediction and the L1 prediction are each multiplied by a weighting coefficient, an offset value is added, and the results are superimposed to obtain the final inter-predicted image signal.
- As representative values for weighted prediction, the weighting coefficients and offset values are set and encoded for each reference picture of each list.
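The weighted superposition above can be sketched per sample as follows; the integer scaling convention (a normalizing right shift `log_wd`) and the 8-bit clipping range are assumptions of this sketch, not taken from the description:

```python
def weighted_bipred(p0, p1, w0, w1, offset, log_wd=1):
    """Combine co-located L0/L1 inter-predicted samples.

    Each prediction is multiplied by its weighting coefficient, the sum
    is normalized by a right shift, the offset is added, and the result
    is clipped to an 8-bit sample range (assumed bit depth).
    """
    value = ((w0 * p0 + w1 * p1) >> log_wd) + offset
    return max(0, min(255, value))
```

With unit weights and `log_wd=1` this reduces to the plain average of the two predictions plus the offset.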
- The encoding information related to inter prediction includes, for each block: a prediction mode distinguishing among L0 prediction, L1 prediction, and bi-prediction; a reference index specifying a reference picture for each reference list; and a motion vector indicating the moving direction and moving amount of the block.
- a prediction process is performed on a motion vector in order to reduce the amount of code of the motion vector generated in each block.
- In MPEG-4 AVC/H.264, using the fact that the motion vector to be encoded is strongly correlated with the motion vectors of neighboring blocks, a predicted motion vector is calculated by prediction from the neighboring blocks, a difference motion vector (the difference between the motion vector to be encoded and the predicted motion vector) is calculated, and this difference motion vector is encoded to reduce the code amount.
- Specifically, a median value is calculated from the motion vectors of the neighboring blocks A, B, and C to obtain the predicted motion vector, and the code amount of the motion vector is reduced by taking the difference between the motion vector and the predicted motion vector.
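The AVC/H.264-style median prediction and the difference motion vector just described can be sketched as:

```python
def median3(a, b, c):
    # Median of three scalars, applied to each vector component.
    return sorted((a, b, c))[1]

def mvp_median(mv_a, mv_b, mv_c):
    # Component-wise median of the motion vectors of neighbouring
    # blocks A, B, and C gives the predicted motion vector.
    return (median3(mv_a[0], mv_b[0], mv_c[0]),
            median3(mv_a[1], mv_b[1], mv_c[1]))

def mvd(mv, mvp):
    # The difference motion vector actually encoded in the bitstream.
    return (mv[0] - mvp[0], mv[1] - mvp[1])
```

Only the small residual `mvd` needs to be encoded when the neighbours move similarly to the current block.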
- When the size and shape of the encoding target block and the adjacent blocks differ, as shown in FIG. 36B, the topmost block is used as the prediction block when there are a plurality of adjacent blocks on the left, and the leftmost block is used as the prediction block when there are a plurality of adjacent blocks above; prediction is then performed from the motion vector of the determined prediction block.
- the present inventors have come to recognize the necessity of further compressing the encoded information and reducing the overall code amount in the moving image encoding method using motion compensated prediction.
- The present invention has been made in view of such a situation, and an object of the present invention is to provide a motion vector encoding technique that improves encoding efficiency by calculating predicted motion vector candidates, thereby reducing the code amount of the difference motion vector. Another object of the present invention is to provide a moving image encoding technique that improves encoding efficiency by calculating candidates for the encoding information, thereby reducing the code amount of the encoding information.
- a moving image encoding device is a moving image encoding device that encodes a moving image using a motion vector in units of blocks obtained by dividing each picture.
- The device includes: a predicted motion vector candidate registration unit (122) that registers, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition; a predicted motion vector candidate limiting unit (124) that, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registers predicted motion vector candidates having the same value in the list until the number of candidates reaches the predetermined number; a predicted motion vector selection unit (126) that determines the predicted motion vector of the encoding target block from the predicted motion vector candidate list; and an encoding unit (109) that encodes information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list.
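A minimal sketch of the fixed-size candidate list with same-value padding described by these units; the registration condition (deduplication) and the zero-vector fallback are simplifying assumptions:

```python
def build_mvp_list(raw_candidates, fixed_size):
    """Build a predicted motion vector candidate list of exactly
    `fixed_size` entries (`fixed_size` plays the role of the
    "predetermined number, a natural number of 2 or more").

    Candidates satisfying the registration condition (here simplified
    to "not already in the list") are registered first; if the list is
    still short, an already-registered value is repeated until the fixed
    size is reached, so a predictor index can always be parsed.
    """
    mvp_list = []
    for cand in raw_candidates:
        if cand not in mvp_list:          # simplified "predetermined condition"
            mvp_list.append(cand)
        if len(mvp_list) == fixed_size:
            return mvp_list
    if not mvp_list:
        mvp_list.append((0, 0))           # zero-vector fallback (an assumption)
    while len(mvp_list) < fixed_size:
        mvp_list.append(mvp_list[0])      # repeat a same-value candidate
    return mvp_list
```

Keeping the list length fixed lets the decoder parse the predictor-position information without first deriving how many distinct candidates exist.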
- This device is a moving picture encoding device that encodes a moving picture using a motion vector in units of blocks obtained by dividing each picture, and includes: a predicted motion vector candidate generation unit (121, 122) that predicts from any of the motion vectors of encoded blocks neighboring the encoding target block in the same picture, generates a plurality of predicted motion vector candidates, and registers them in a predicted motion vector candidate list; a predicted motion vector candidate number limiting unit (124) that limits the number of predicted motion vector candidates registered in the candidate list to a maximum number of candidates determined according to the size of the prediction block; a predicted motion vector selection unit (126) that determines the predicted motion vector of the encoding target block from the predicted motion vector candidate list; and an encoding unit (109) that encodes information indicating the position of the determined predicted motion vector in the candidate list.
- Still another aspect of the present invention is a video encoding method.
- This method is a moving image encoding method that encodes a moving image using a motion vector in units of blocks obtained by dividing each picture, and includes: a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from any of the motion vectors of encoded blocks neighboring the encoding target block in the same picture and from any of the motion vectors of blocks in an encoded picture different from the encoding target block; a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of candidates; a predicted motion vector candidate limiting step of repeatedly registering predicted motion vector candidates having the same value in the list until the number of candidates registered in the list reaches a predetermined number; a predicted motion vector selection step of determining the predicted motion vector from the candidate list; and an encoding step of encoding information indicating the position of the determined predicted motion vector in the candidate list.
- Still another aspect of the present invention is a transmission device.
- the apparatus includes a packet processing unit that packetizes an encoded bit sequence encoded by a moving image encoding method that encodes a moving image using a motion vector in units of blocks obtained by dividing each picture, and obtains encoded data;
- a transmission unit that transmits the packetized encoded data.
- The moving image encoding method includes: a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from any motion vector of an encoded block adjacent to the encoding target block in the same picture and from any motion vector of a block in an encoded picture different from the encoding target block; a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition; a predicted motion vector candidate limiting step of, when the number of candidates registered in the list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering candidates having the same value until the number reaches the predetermined number; and an encoding step of encoding information indicating the position of the predicted motion vector.
- Still another aspect of the present invention is a transmission method.
- The moving image encoding method used here includes: a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from any motion vector of an encoded block adjacent to the encoding target block in the same picture and from any motion vector of a block in an encoded picture different from the encoding target block; a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition; and an encoding step of encoding information indicating the position of the predicted motion vector.
- A moving picture decoding device according to one aspect of the present invention decodes an encoded bit string in which a moving picture is encoded using a motion vector in units of blocks obtained by dividing each picture, and includes: a decoding unit (202) that decodes information indicating the position, in a predicted motion vector candidate list, of the predicted motion vector to be selected; a predicted motion vector candidate generation unit (221) that derives a plurality of predicted motion vector candidates from any motion vector of a decoded block adjacent to the decoding target block in the same picture and from any motion vector of a block in a decoded picture different from the decoding target block; a predicted motion vector candidate registration unit (222) that registers candidates satisfying a predetermined condition in the predicted motion vector candidate list; and a predicted motion vector candidate limiting unit that, when the number of candidates registered in the list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registers candidates having the same value until the number reaches the predetermined number.
- This device is a moving picture decoding device that decodes an encoded bit string in which a moving picture is encoded using a motion vector in units of blocks obtained by dividing each picture, and includes: a decoding unit (202) that decodes information indicating the position, in a predicted motion vector candidate list, of the predicted motion vector to be selected; predicted motion vector candidate generation units (221, 222) that generate a plurality of predicted motion vector candidates predicted from any of the motion vectors of decoded blocks adjacent to the decoding target block in the same picture and register them in the candidate list; and a predicted motion vector candidate number limiting unit that limits the number of predicted motion vector candidates registered in the candidate list.
- Still another aspect of the present invention is a moving picture decoding method.
- This method is a moving picture decoding method that decodes an encoded bit string in which a moving picture is encoded using a motion vector in units of blocks obtained by dividing each picture, and includes: a decoding step of decoding information indicating the position, in a predicted motion vector candidate list, of the predicted motion vector to be selected; a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from any motion vector of a decoded block adjacent to the decoding target block in the same picture and from any motion vector of a block in a decoded picture different from the decoding target block; a predicted motion vector candidate registration step of registering, in the candidate list, candidates satisfying a predetermined condition among the plurality of candidates; a predicted motion vector candidate limiting step of, when the number of candidates registered in the candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering candidates having the same value in the list until the number reaches the predetermined number; and a predicted motion vector selection step of selecting the predicted motion vector of the decoding target block from the candidate list based on the decoded information indicating the position of the predicted motion vector to be selected.
- Still another aspect of the present invention is a receiving device.
- This device is a receiving device that receives and decodes an encoded bit string in which a moving image is encoded, and includes: a receiving unit that receives packetized encoded data of an encoded bit string in which a moving image is encoded using a motion vector in units of blocks obtained by dividing each picture; a restoration unit that performs packet processing on the received encoded data to restore the original encoded bit string; a decoding unit (202) that decodes, from the restored bit string, information indicating the position, in a predicted motion vector candidate list, of the predicted motion vector to be selected; a predicted motion vector candidate generation unit that derives a plurality of predicted motion vector candidates from any motion vector of a decoded block adjacent to the decoding target block in the same picture and from any motion vector of a block in a decoded picture different from the decoding target block; and a predicted motion vector selection unit (225) that selects the predicted motion vector of the decoding target block from the predicted motion vector candidate list.
- Still another aspect of the present invention is a receiving method.
- This method is a receiving method for receiving and decoding an encoded bit string in which a moving image is encoded using a motion vector in units of blocks obtained by dividing each picture, and includes, among its steps, a predicted motion vector candidate limiting step of registering candidates in the predicted motion vector candidate list, and a predicted motion vector selection step of selecting the predicted motion vector of the decoding target block from the candidate list based on the decoded information indicating the position of the predicted motion vector to be selected.
- According to the present invention, a plurality of predicted motion vectors are calculated, an optimal predicted motion vector is selected from among them, and the generated code amount of the difference motion vector is reduced, thereby improving the encoding efficiency.
- FIG. 3 is a block diagram illustrating a detailed configuration of the motion vector calculation unit in FIG. 2.
- Flowcharts show the difference motion vector calculation processing procedure of the difference motion vector calculation unit, the motion vector calculation processing procedure of the motion vector calculation unit, and the final candidate number setting processing procedure for the predicted motion vector.
- The present embodiment relates to moving picture coding, and in particular improves coding efficiency in moving picture coding in which a picture is divided into rectangular blocks of arbitrary size and shape and motion compensation is performed in units of blocks between pictures.
- The code amount is reduced by calculating a plurality of predicted motion vectors from the motion vectors of surrounding blocks that have already been encoded, and by calculating and encoding a difference vector between the motion vector of the encoding target block and the selected predicted motion vector.
- Alternatively, the code amount is reduced by estimating the encoding information of the encoding target block using the encoding information of surrounding blocks that have already been encoded.
- On the decoding side, a plurality of predicted motion vectors are calculated from the motion vectors of surrounding blocks that have already been decoded, and the motion vector of the decoding target block is calculated and decoded from the difference vector decoded from the encoded stream and the selected predicted motion vector.
- Alternatively, the encoding information of the decoding target block is estimated using the encoding information of surrounding blocks that have already been decoded.
- First, the picture is divided equally into square units of an identical, arbitrarily chosen size.
- This unit is defined as a tree block, and serves as the basic unit of address management for specifying the block to be encoded or decoded in the picture (the encoding target block in the encoding process, or the decoding target block in the decoding process).
- the tree block is composed of one luminance signal and two color difference signals.
- the size of the tree block can be freely set to a power of 2 depending on the picture size and the texture in the picture.
- To optimize the block size according to the texture in the picture, the luminance and chrominance signals within the tree block can be divided hierarchically into four parts (halved vertically and horizontally) as necessary, yielding blocks of smaller size.
- Each block is defined as a coding block, and is a basic unit of processing when performing coding and decoding. Except for monochrome, the coding block is also composed of one luminance signal and two color difference signals.
- the maximum size of the coding block is the same as the size of the tree block.
- The coding block of minimum size is called the minimum coding block, and its size can be freely set to a power of 2.
- The coding block A is a single coding block formed without dividing the tree block.
- the encoding block B is an encoding block formed by dividing a tree block into four.
- The coding block C is a coding block obtained by dividing into four a block that was itself obtained by dividing the tree block into four.
- The coding block D is a coding block obtained by hierarchically dividing the tree block into four three times, and is the coding block of minimum size.
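Since each hierarchical split divides a block into four (halving each side), the coding block size at each split depth follows directly; a 64×64 tree block is assumed here purely for illustration:

```python
def coding_block_size(tree_block_size, depth):
    # Each hierarchical four-way split halves the side length, so a
    # block at split depth d has side tree_block_size / 2**d.
    return tree_block_size >> depth

# With an assumed 64x64 tree block:
#   depth 0 -> 64 (coding block A, undivided tree block)
#   depth 1 -> 32 (coding block B)
#   depth 2 -> 16 (coding block C)
#   depth 3 ->  8 (coding block D, minimum coding block)
```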
- a mode for identifying the intra prediction (MODE_INTRA) and the inter prediction (MODE_INTER) is defined as a prediction mode (PredMode).
- the prediction mode (PredMode) has intra prediction (MODE_INTRA) or inter prediction (MODE_INTER) as a value, and can be selected and encoded.
- A mode identifying the division method of the luminance and chrominance signals of a coding block is defined as the partition mode (PartMode), and each divided block is defined as a prediction block. As shown in FIG. 4, four partition modes (PartMode) are defined according to how the luminance signal of the coding block is divided.
- The partition mode that treats the undivided luminance signal of the coding block as one prediction block (FIG. 4(a)) is defined as 2N×2N partition (PART_2Nx2N).
- The partition mode that divides the luminance signal into upper and lower halves, forming two prediction blocks (FIG. 4(b)), is defined as 2N×N partition (PART_2NxN), and the mode that divides it into left and right halves, forming two prediction blocks (FIG. 4(c)), is defined as N×2N partition (PART_Nx2N).
- The partition mode that divides the luminance signal into four prediction blocks by equal horizontal and vertical division (FIG. 4(d)) is defined as N×N partition (PART_NxN). Except for the N×N partition (PART_NxN) of intra prediction (MODE_INTRA), the chrominance signal is divided in each partition mode with the same vertical/horizontal division ratio as the luminance signal.
- a number starting from 0 is assigned to the prediction block existing inside the coding block in the coding order. This number is defined as a split index PartIdx.
- a number described in each prediction block of the encoded block in FIG. 4 represents a partition index PartIdx of the prediction block.
- the division index PartIdx of the upper prediction block is set to 0, and the division index PartIdx of the lower prediction block is set to 1.
- the division index PartIdx of the left prediction block is set to 0, and the division index PartIdx of the right prediction block is set to 1.
- In the case of N×N partition (PART_NxN), the partition index PartIdx of the upper-left prediction block is set to 0, that of the upper-right prediction block to 1, that of the lower-left prediction block to 2, and that of the lower-right prediction block to 3.
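The PartIdx numbering described above can be captured in a small table; the position names follow the text, and this is a sketch rather than the normative definition:

```python
# Prediction blocks of each partition mode (PartMode), listed in
# coding order; PartIdx is the 0-based position in each list.
PART_MODE_LAYOUT = {
    "PART_2Nx2N": ["whole"],
    "PART_2NxN":  ["upper", "lower"],
    "PART_Nx2N":  ["left", "right"],
    "PART_NxN":   ["upper-left", "upper-right",
                   "lower-left", "lower-right"],
}

def part_idx(part_mode, position):
    # Split index of a prediction block within its coding block.
    return PART_MODE_LAYOUT[part_mode].index(position)
```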
- For coding blocks other than the minimum coding block, only the partition modes (PartMode) 2N×2N partition (PART_2Nx2N), 2N×N partition (PART_2NxN), and N×2N partition (PART_Nx2N) are defined; only for the coding block D, which is the minimum coding block, is N×N partition (PART_NxN) defined in addition to these three.
- The reason N×N partition (PART_NxN) is not defined except for the minimum coding block is that, except for the minimum coding block, a small coding block can instead be represented by dividing the coding block into four.
- The position of each block in the present embodiment, including the tree block, coding block, prediction block, and transform block, takes the position of the upper-left luminance-signal pixel of the luminance-signal screen as the origin (0, 0).
- The position of each block is represented by the two-dimensional coordinates (x, y) of the upper-left luminance-signal pixel contained in the block area.
- The coordinate axes point rightward in the horizontal direction and downward in the vertical direction, and the unit is one pixel of the luminance signal.
- This applies not only when the chrominance format is 4:4:4, in which the luminance signal and the chrominance signal have the same image size (number of pixels), but also in chrominance formats where the luminance and chrominance signals have different image sizes (e.g., 4:2:0).
- The position of each block of the chrominance signal is represented by the coordinates of the luminance-signal pixel contained in the block area, and the unit is one pixel of the luminance signal. In this way, not only can the position of each chrominance block be specified, but the positional relationship between a luminance block and a chrominance block is also made clear simply by comparing coordinate values.
- a group composed of a plurality of prediction blocks is defined as a prediction block group.
- FIGS. 5, 6, 7, and 8 are diagrams explaining the prediction block groups adjacent to the prediction block to be encoded or decoded, in the same picture as that prediction block.
- FIG. 9 is a diagram explaining prediction block groups that have already been encoded or decoded, exist in a decoded picture temporally different from that of the prediction block to be encoded or decoded, and are located at the same position as, or near, that prediction block.
- the prediction block group will be described with reference to FIGS. 5, 6, 7, 8, and 9.
- A first prediction block group, consisting of the prediction block A1 adjacent to the left side of the prediction block to be encoded or decoded in the same picture and the prediction block A0 adjacent to its lower-left vertex, is defined as the prediction block group adjacent to the left side.
- A block adjacent to the left side of the prediction block to be encoded or decoded is designated the prediction block A1, and a block adjacent to its lower-left vertex is designated the prediction block A0. In the example of FIG. 6, the prediction block A0 and the prediction block A1 are the same prediction block.
- In this embodiment, when the prediction blocks adjacent to the left side of the prediction block to be encoded or decoded are smaller than it and there are a plurality of them, only the lowest prediction block A10 among them is taken as the left-adjacent prediction block A1.
- A second prediction block group, including the prediction block B1 adjacent to the upper side of the prediction block to be encoded or decoded, the prediction block B0 adjacent to its upper-right vertex, and the prediction block B2 adjacent to its upper-left vertex, is defined as the prediction block group adjacent to the upper side.
- The upper-adjacent prediction blocks are likewise determined according to the above condition: a block adjacent to the upper side of the prediction block to be encoded or decoded is designated B1, a block adjacent to its upper-right vertex is designated B0, and a block adjacent to its upper-left vertex is designated B2. In the example of FIG. 8, the prediction blocks B0, B1, and B2 are the same prediction block.
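Assuming HEVC-style sample offsets (the figures define the exact layout, so these offsets are an assumption), the luminance-sample positions used to locate the left group (A0, A1) and the upper group (B0, B1, B2) for a prediction block with upper-left coordinates (x, y), width w, and height h can be sketched as:

```python
def neighbour_positions(x, y, w, h):
    """Sample coordinates probed to find the adjacent prediction blocks.

    A block covering the returned coordinate is taken as the
    corresponding candidate block; offsets follow the usual
    HEVC-style convention (an assumption of this sketch).
    """
    return {
        "A0": (x - 1, y + h),      # below the lower-left vertex
        "A1": (x - 1, y + h - 1),  # left of the bottom-left sample
        "B0": (x + w, y - 1),      # right of the upper-right vertex
        "B1": (x + w - 1, y - 1),  # above the top-right sample
        "B2": (x - 1, y - 1),      # at the upper-left vertex
    }
```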
- A third prediction block group, composed of the already encoded or decoded prediction blocks T0 and T1 located in a picture at a different time, is defined as the prediction block group at a different time.
- In the inter prediction mode, in which prediction is performed from the image signal of a decoded picture, a plurality of decoded pictures can be used as reference pictures. To identify the reference picture selected from the plurality of reference pictures, a reference index is attached to each prediction block. In a B slice, any two reference pictures can be selected for each prediction block and used for inter prediction.
- inter prediction modes there are L0 prediction (Pred_L0), L1 prediction (Pred_L1), and bi-prediction (Pred_BI).
- the reference picture is managed by L0 (reference list 0) and L1 (reference list 1) of the list structure, and the reference picture can be specified by specifying the reference index of L0 or L1.
- L0 prediction is inter prediction that refers to a reference picture managed in L0
- L1 prediction is inter prediction that refers to a reference picture managed in L1
- Bi-prediction is inter prediction in which both L0 prediction and L1 prediction are performed, referring to one reference picture managed in each of L0 and L1. Only L0 prediction can be used in inter prediction of a P slice; L0 prediction, L1 prediction, and bi-prediction (Pred_BI), which averages or weights and adds the L0 and L1 predictions, can be used in inter prediction of a B slice.
- LX is 0 or 1
- (About POC) The POC is a variable associated with the picture to be encoded, and is set to a value that increases by 1 in picture output order. Based on the POC value, it is possible to determine whether two pictures are the same, to determine their order in the output sequence, and to derive the distance between pictures. For example, if two pictures have the same POC value, they can be determined to be the same picture. If two pictures have different POC values, the picture with the smaller POC value can be determined to be output earlier in time, and the difference between the POCs of the two pictures indicates the inter-picture distance along the time axis.
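The POC-based decisions above amount to simple integer comparisons:

```python
def same_picture(poc_a, poc_b):
    # Pictures with equal POC values are the same picture.
    return poc_a == poc_b

def outputs_earlier(poc_a, poc_b):
    # The picture with the smaller POC is output earlier in time.
    return poc_a < poc_b

def picture_distance(poc_a, poc_b):
    # The POC difference indicates the inter-picture distance
    # along the time (output-order) axis.
    return abs(poc_a - poc_b)
```

Such distances are what allow temporally derived predictors to be scaled consistently between reference pictures.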
- FIG. 1 is a block diagram showing a configuration of a moving picture coding apparatus according to an embodiment of the present invention.
- The moving image encoding device includes an image memory 101, a motion vector detection unit 102, a difference motion vector calculation unit 103, an inter prediction information estimation unit 104, a motion compensation prediction unit 105, a prediction method determination unit 106, a residual signal generation unit 107, an orthogonal transform/quantization unit 108, a first encoded bit string generation unit 109, a second encoded bit string generation unit 110, a multiplexing unit 111, an inverse quantization/inverse orthogonal transform unit 112, a decoded image signal superimposing unit 113, an encoded information storage memory 114, and a decoded image memory 115.
- the image memory 101 temporarily stores the image signal of the encoding target picture supplied in the order of shooting / display time.
- the image memory 101 supplies the stored image signal of the picture to be encoded to the motion vector detection unit 102, the prediction method determination unit 106, and the residual signal generation unit 107 in units of predetermined pixel blocks.
- the image signals of the pictures stored in the order of shooting / display time are rearranged in the encoding order and output from the image memory 101 in units of pixel blocks.
- The motion vector detection unit 102 detects, for each prediction block, motion vectors for each prediction block size and each prediction mode by block matching between the image signal supplied from the image memory 101 and the reference picture supplied from the decoded image memory 115, and supplies the detected motion vectors to the motion compensation prediction unit 105, the difference motion vector calculation unit 103, and the prediction method determination unit 106.
- The difference motion vector calculation unit 103 calculates a plurality of predicted motion vector candidates using the encoding information of already encoded image signals stored in the encoded information storage memory 114, and registers them in a predicted motion vector list described later.
- It selects the optimal predicted motion vector from the plurality of predicted motion vector candidates registered in the predicted motion vector list, calculates a difference motion vector from the motion vector detected by the motion vector detection unit 102 and the predicted motion vector, and supplies the calculated difference motion vector to the prediction method determination unit 106. Furthermore, it supplies to the prediction method determination unit 106 a predicted motion vector index identifying the predicted motion vector selected from the candidates registered in the predicted motion vector list. The detailed configuration and operation of the difference motion vector calculation unit 103 will be described later.
- the inter prediction information estimation unit 104 estimates inter prediction information in merge mode.
- The merge mode is a mode in which, rather than encoding inter prediction information of the prediction block such as the prediction mode, the reference index (information for specifying the reference picture used for motion compensation prediction from among the plurality of reference pictures registered in the reference list), and the motion vector, the inter prediction information of an already encoded, inter-predicted adjacent prediction block or of an inter-predicted prediction block in a different picture is used.
- The inter prediction information estimation unit 104 calculates a plurality of merge candidates using the encoded information of already encoded prediction blocks stored in the encoded information storage memory 114, registers them in a merge candidate list, selects an optimal merge candidate from the plurality of merge candidates registered in the merge candidate list, and supplies inter prediction information such as the prediction mode, reference index, and motion vector of the selected merge candidate to the motion compensation prediction unit 105.
- the merge index that identifies the selected merge candidate is supplied to the prediction method determination unit 106.
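- As an illustrative sketch of the merge mode described above, and not the normative procedure of the present embodiment, the following fragment builds a candidate list from neighboring inter prediction information and reuses the information selected by a merge index; all names (InterInfo, build_merge_list, select_merge_candidate) and the candidate limit are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class InterInfo:
    pred_mode: str   # illustrative labels: "Pred_L0", "Pred_L1", or "Pred_BI"
    ref_idx: int     # reference index into the reference list
    mv: tuple        # motion vector (x, y)

def build_merge_list(neighbor_infos, max_candidates=5):
    """Collect the inter prediction info of available neighbors, skipping duplicates."""
    merge_list = []
    for info in neighbor_infos:
        if info is not None and info not in merge_list:
            merge_list.append(info)
        if len(merge_list) == max_candidates:
            break
    return merge_list

def select_merge_candidate(merge_list, merge_idx):
    """The decoded merge index selects the candidate whose info is reused as-is."""
    return merge_list[merge_idx]

left  = InterInfo("Pred_L0", 0, (4, -2))
above = InterInfo("Pred_L0", 0, (4, -2))   # duplicate of `left`, removed below
col   = InterInfo("Pred_BI", 1, (0, 8))

merge_list = build_merge_list([left, above, None, col])
chosen = select_merge_candidate(merge_list, merge_idx=1)
print(len(merge_list), chosen.mv)  # → 2 (0, 8)
```

- The point of the design is that only the small merge index needs to be transmitted, while the prediction mode, reference index, and motion vector are recovered from already coded blocks.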
- The motion compensation prediction unit 105 generates a predicted image signal by motion compensation prediction from the reference picture, using the motion vectors detected by the motion vector detection unit 102 and estimated by the inter prediction information estimation unit 104, and supplies the predicted image signal to the prediction method determination unit 106.
- In L0 prediction and L1 prediction, uni-directional prediction is performed. In the case of bi-prediction (Pred_BI), bi-directional prediction is performed: the inter prediction signals of L0 prediction and L1 prediction are adaptively multiplied by weighting factors, offset values are added, and the results are superimposed to generate the final predicted image signal.
- The prediction method determination unit 106 evaluates the code amount of the difference motion vector, the amount of distortion between the predicted image signal and the image signal, and the like, and thereby determines, from among a plurality of prediction methods, the prediction mode PredMode that distinguishes inter prediction (PRED_INTER) from intra prediction (PRED_INTRA), and the partition mode PartMode, in units of optimal coding blocks.
- For each prediction block, the prediction method, such as whether or not the mode is the merge mode, is determined.
- the prediction method determination unit 106 stores information indicating the determined prediction method and encoded information including a motion vector corresponding to the determined prediction method in the encoded information storage memory 114.
- The encoded information stored here includes the prediction mode PredMode, the partition mode PartMode, the flags predFlagL0 and predFlagL1 indicating whether to use L0 prediction and L1 prediction, the reference indexes refIdxL0 and refIdxL1 of L0 and L1, and the motion vectors mvL0 and mvL1 of L0 and L1.
- When the prediction mode PredMode is intra prediction (PRED_INTRA), both the flag predFlagL0 indicating whether to use L0 prediction and the flag predFlagL1 indicating whether to use L1 prediction are 0.
- When the prediction mode PredMode is inter prediction (MODE_INTER) and the inter prediction mode is L0 prediction (Pred_L0), the flag predFlagL0 indicating whether to use L0 prediction is 1, and the flag predFlagL1 indicating whether to use L1 prediction is 0.
- When the inter prediction mode is L1 prediction (Pred_L1), the flag predFlagL0 indicating whether to use L0 prediction is 0, and the flag predFlagL1 indicating whether to use L1 prediction is 1. When the inter prediction mode is bi-prediction (Pred_BI), both flags are 1.
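- The flag assignments described above can be summarized by the following illustrative sketch; the function name and the mode strings are assumptions for illustration, not identifiers of the present embodiment.

```python
def pred_flags(pred_mode, inter_pred_mode=None):
    """Return (predFlagL0, predFlagL1) for a prediction block.
    Mode strings are illustrative labels, not normative identifiers."""
    if pred_mode == "PRED_INTRA":
        return (0, 0)                       # intra: neither reference list is used
    if inter_pred_mode == "Pred_L0":
        return (1, 0)                       # L0 uni-directional prediction
    if inter_pred_mode == "Pred_L1":
        return (0, 1)                       # L1 uni-directional prediction
    if inter_pred_mode == "Pred_BI":
        return (1, 1)                       # bi-prediction uses both lists
    raise ValueError("unknown prediction mode")

print(pred_flags("PRED_INTRA"))             # → (0, 0)
print(pred_flags("MODE_INTER", "Pred_BI"))  # → (1, 1)
```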
- the prediction method determination unit 106 supplies a prediction image signal corresponding to the determined prediction mode to the residual signal generation unit 107 and the decoded image signal superposition unit 113.
- the residual signal generation unit 107 generates a residual signal by performing subtraction between the image signal to be encoded and the predicted image signal, and supplies the residual signal to the orthogonal transform / quantization unit 108.
- The orthogonal transform / quantization unit 108 performs orthogonal transform and quantization on the residual signal in accordance with the quantization parameter to generate an orthogonally transformed and quantized residual signal, and supplies it to the second encoded bit string generation unit 110 and the inverse quantization / inverse orthogonal transform unit 112. Further, the orthogonal transform / quantization unit 108 stores the quantization parameter in the encoded information storage memory 114.
- The first encoded bit string generation unit 109 encodes, in addition to the information in units of sequences, pictures, slices, and coding blocks, the encoded information corresponding to the prediction method determined by the prediction method determination unit 106 for each coding block and prediction block.
- Specifically, when the prediction mode PredMode of each coding block is inter prediction (PRED_INTER), encoded information such as information on the predicted motion vector index and the difference motion vector is encoded according to a prescribed syntax rule described later to generate a first encoded bit string, which is supplied to the multiplexing unit 111. If the number of predicted motion vector candidates registered in the predicted motion vector list is 1, the predicted motion vector index mvp_idx can be identified as 0, and is therefore not encoded.
- In the present embodiment, the number of predicted motion vector candidates is set according to the size of the prediction block to be encoded, as will be described later. The first encoded bit string generation unit 109 therefore sets the number of predicted motion vector candidates according to the size of the prediction block to be encoded, and encodes the predicted motion vector index only when the set number of candidates is larger than 1.
- the second encoded bit string generation unit 110 entropy-encodes the residual signal that has been orthogonally transformed and quantized according to a specified syntax rule to generate a second encoded bit string, and supplies the second encoded bit string to the multiplexing unit 111.
- the multiplexing unit 111 multiplexes the first encoded bit string and the second encoded bit string in accordance with a prescribed syntax rule, and outputs a bit stream.
- The inverse quantization / inverse orthogonal transform unit 112 calculates a residual signal by performing inverse quantization and inverse orthogonal transform on the orthogonally transformed and quantized residual signal supplied from the orthogonal transform / quantization unit 108, and supplies it to the decoded image signal superimposing unit 113.
- The decoded image signal superimposing unit 113 superimposes the predicted image signal according to the determination by the prediction method determination unit 106 and the residual signal inverse-quantized and inverse-orthogonally-transformed by the inverse quantization / inverse orthogonal transform unit 112 to generate a decoded image, which is stored in the decoded image memory 115.
- the decoded image may be stored in the decoded image memory 115 after filtering processing for reducing distortion such as block distortion due to encoding.
- FIG. 2 is a block diagram showing the configuration of the moving picture decoding apparatus according to the embodiment of the present invention corresponding to the moving picture encoding apparatus of FIG.
- the moving picture decoding apparatus according to the embodiment includes a separation unit 201, a first encoded bit string decoding unit 202, a second encoded bit string decoding unit 203, a motion vector calculation unit 204, an inter prediction information estimation unit 205, and a motion compensation prediction unit 206. , An inverse quantization / inverse orthogonal transform unit 207, a decoded image signal superimposing unit 208, an encoded information storage memory 209, and a decoded image memory 210.
- Since the decoding process of the moving picture decoding apparatus in FIG. 2 corresponds to the decoding process provided inside the moving picture encoding apparatus in FIG. 1, the motion compensation prediction unit 206, the inverse quantization / inverse orthogonal transform unit 207, the decoded image signal superimposing unit 208, the encoded information storage memory 209, and the decoded image memory 210 in FIG. 2 have functions corresponding respectively to the motion compensation prediction unit 105, the inverse quantization / inverse orthogonal transform unit 112, the decoded image signal superimposing unit 113, the encoded information storage memory 114, and the decoded image memory 115 of the moving picture encoding apparatus in FIG. 1.
- The bit stream supplied to the separation unit 201 is separated according to a prescribed syntax rule, and the separated encoded bit strings are supplied to the first encoded bit string decoding unit 202 and the second encoded bit string decoding unit 203.
- The first encoded bit string decoding unit 202 decodes the supplied encoded bit string to obtain information in units of sequences, pictures, slices, and coding blocks, as well as encoded information in units of prediction blocks. Specifically, the prediction mode PredMode for determining whether the prediction is inter prediction (PRED_INTER) or intra prediction (PRED_INTRA) for each coding block, the partition mode PartMode, and, in the case of inter prediction (PRED_INTER), a flag for determining whether or not the mode is the merge mode are decoded. When the mode is the merge mode, the merge index is decoded; when the mode is not the merge mode, the encoded information related to the inter prediction mode, the predicted motion vector index, the difference motion vector, and the like is decoded according to a prescribed syntax rule described later, and the encoded information is supplied to the motion vector calculation unit 204 or the inter prediction information estimation unit 205.
- When the number of predicted motion vector candidates registered in the predicted motion vector list is 1, the predicted motion vector index can be identified as 0, so the predicted motion vector index is not encoded; in this case, the predicted motion vector index is set to 0 on the decoding side.
- In the present embodiment, the number of predicted motion vector candidates is set according to the size of the prediction block to be decoded, as will be described later. The first encoded bit string decoding unit 202 therefore sets the number of predicted motion vector candidates according to the size of the prediction block to be decoded, and decodes the predicted motion vector index when the set number of candidates is larger than 1.
- The second encoded bit string decoding unit 203 decodes the supplied encoded bit string to obtain the orthogonally transformed and quantized residual signal, and supplies it to the inverse quantization / inverse orthogonal transform unit 207.
- The motion vector calculation unit 204 calculates a plurality of predicted motion vector candidates using the encoded information of already decoded image signals stored in the encoded information storage memory 209, and registers them in the predicted motion vector list described later. From the plurality of predicted motion vector candidates registered in the predicted motion vector list, the predicted motion vector corresponding to the predicted motion vector index decoded and supplied by the first encoded bit string decoding unit 202 is selected, a motion vector is calculated from the difference motion vector decoded by the first encoded bit string decoding unit 202 and the selected predicted motion vector, and the motion vector is supplied to the motion compensation prediction unit 206 together with other encoded information and stored in the encoded information storage memory 209.
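- The reconstruction performed here reduces to adding the decoded difference motion vector to the predictor selected by the decoded index, as the following illustrative sketch shows (function and variable names are assumptions; motion vectors are simplified integer pairs):

```python
def reconstruct_mv(mvp_list, mvp_idx, mvd):
    """mv = predictor selected by the decoded index + decoded difference vector."""
    mvp = mvp_list[mvp_idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mvp_list = [(4, -2), (0, 8)]                 # candidates registered in the list
print(reconstruct_mv(mvp_list, 0, (1, 3)))   # → (5, 1)
```

- Because both sides build the same predicted motion vector list from the same stored encoded information, the encoder's subtraction and the decoder's addition are exact inverses.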
- The encoded information of the prediction block supplied and stored here includes the prediction mode PredMode, the partition mode PartMode, the flags predFlagL0 and predFlagL1 indicating whether to use L0 prediction and L1 prediction, the reference indexes refIdxL0 and refIdxL1 of L0 and L1, and the motion vectors mvL0 and mvL1 of L0 and L1.
- When the prediction mode PredMode is intra prediction (PRED_INTRA), the flag predFlagL0 indicating whether to use L0 prediction and the flag predFlagL1 indicating whether to use L1 prediction are both 0.
- When the prediction mode PredMode is inter prediction (MODE_INTER) and the inter prediction mode is L0 prediction (Pred_L0), the flag predFlagL0 indicating whether to use L0 prediction is 1, and the flag predFlagL1 indicating whether to use L1 prediction is 0.
- When the inter prediction mode is L1 prediction (Pred_L1), the flag predFlagL0 indicating whether to use L0 prediction is 0, and the flag predFlagL1 indicating whether to use L1 prediction is 1.
- When the inter prediction mode is bi-prediction (Pred_BI), the flag predFlagL0 indicating whether to use L0 prediction and the flag predFlagL1 indicating whether to use L1 prediction are both 1.
- The inter prediction information estimation unit 205 estimates the inter prediction information of the merge mode when the prediction block to be decoded is in the merge mode. Using the encoded information of already decoded prediction blocks stored in the encoded information storage memory 209, a plurality of merge candidates are calculated and registered in a merge candidate list, and the merge candidate corresponding to the merge index decoded and supplied by the first encoded bit string decoding unit 202 is selected from the plurality of merge candidates registered in the merge candidate list. Inter prediction information of the selected merge candidate, such as the prediction mode PredMode, the partition mode PartMode, the flags predFlagL0 and predFlagL1 indicating whether to use L0 prediction and L1 prediction, the reference indexes refIdxL0 and refIdxL1 of L0 and L1, and the motion vectors mvL0 and mvL1 of L0 and L1, is supplied to the motion compensation prediction unit 206 and also stored in the encoded information storage memory 209.
- the motion compensated prediction unit 206 generates a predicted image signal by motion compensation prediction from the reference picture using the motion vector calculated by the motion vector calculation unit 204, and supplies the predicted image signal to the decoded image signal superimposing unit 208.
- In the case of bi-prediction (Pred_BI), weighting coefficients are adaptively multiplied with the two motion-compensated predicted image signals of L0 prediction and L1 prediction, and the results are superimposed to generate the final predicted image signal.
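- The weighted superposition described above can be sketched as follows; the weight, offset, and shift values, the rounding, and the 8-bit clipping are illustrative example choices of this sketch, not values prescribed by the present embodiment.

```python
def biprediction_sample(p0, p1, w0=1, w1=1, offset=0, shift=1):
    """Combine one L0 sample and one L1 sample into the final predicted sample."""
    val = (w0 * p0 + w1 * p1 + offset + (1 << (shift - 1))) >> shift
    return max(0, min(255, val))             # clip to the 8-bit sample range

def biprediction(block0, block1, **kw):
    """Apply the per-sample combination to two motion-compensated blocks."""
    return [biprediction_sample(a, b, **kw) for a, b in zip(block0, block1)]

print(biprediction([100, 200], [110, 190]))  # → [105, 195]
```

- With the default weights this is a rounded average of the two prediction signals; non-equal weights and a non-zero offset allow the superposition to compensate for brightness changes between the two reference pictures.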
- The inverse quantization / inverse orthogonal transform unit 207 performs inverse quantization and inverse orthogonal transform on the orthogonally transformed and quantized residual signal decoded by the first encoded bit string decoding unit 202 to obtain the inverse-quantized, inverse-orthogonally-transformed residual signal.
- The decoded image signal superimposing unit 208 superimposes the predicted image signal obtained by motion compensation prediction in the motion compensation prediction unit 206 and the residual signal subjected to inverse orthogonal transform and inverse quantization by the inverse quantization / inverse orthogonal transform unit 207, thereby decoding a decoded image signal, which is stored in the decoded image memory 210.
- the decoded image may be stored in the decoded image memory 210 after filtering processing for reducing block distortion or the like due to encoding is performed on the decoded image.
- FIG. 10 shows a first syntax structure described in the slice header in units of slices of the bitstream generated according to the present embodiment. However, only syntax elements relevant to the present embodiment are shown.
- When the slice type is B, a flag collocated_from_l0_flag is provided, indicating whether the picture colPic at a different time, used when calculating a temporal-direction predicted motion vector candidate or merge candidate, is a reference picture registered in the L0 reference list or in the L1 reference list of the picture containing the prediction block to be processed. Details of the flag collocated_from_l0_flag will be described later.
- These syntax elements may instead be placed in a picture parameter set, which describes syntax elements set in units of pictures.
- FIG. 11 shows a syntax pattern described in units of prediction blocks.
- When the prediction mode PredMode of the prediction block is inter prediction (MODE_INTER), the syntax element merge_flag[x0][y0] indicating whether or not the mode is the merge mode is provided.
- Here, x0 and y0 are indices indicating the position of the upper left pixel of the prediction block in the picture of the luminance signal, and merge_flag[x0][y0] is a flag indicating whether or not the prediction block located at (x0, y0) in the picture is in the merge mode.
- When merge_flag[x0][y0] indicates the merge mode, the syntax element merge_idx[x0][y0], which is an index into the merge candidate list (the list of merge candidates to be referred to), is provided.
- Here, x0 and y0 are indices indicating the position of the upper left pixel of the prediction block in the picture, and merge_idx[x0][y0] is the merge index of the prediction block located at (x0, y0) in the picture.
- When merge_flag[x0][y0] is 0, this indicates that the mode is not the merge mode.
- Next, when the slice type is B, a syntax element inter_pred_flag[x0][y0] for identifying the inter prediction mode is provided. This syntax element distinguishes L0 prediction (Pred_L0), L1 prediction (Pred_L1), and bi-prediction (Pred_BI).
- Further, the reference index syntax elements ref_idx_l0[x0][y0] and ref_idx_l1[x0][y0] for specifying the reference picture, and the difference motion vector syntax elements mvd_l0[x0][y0][j] and mvd_l1[x0][y0][j], which are the differences between the motion vector of the prediction block obtained by motion vector detection and the predicted motion vector, are provided.
- Here, x0 and y0 are indices indicating the position of the upper left pixel of the prediction block in the picture; ref_idx_l0[x0][y0] and mvd_l0[x0][y0][j] are the L0 reference index and difference motion vector of the prediction block located at (x0, y0) in the picture, and ref_idx_l1[x0][y0] and mvd_l1[x0][y0][j] are the L1 reference index and difference motion vector of the prediction block located at (x0, y0) in the picture.
- Here, j represents the component of the difference motion vector: j = 0 indicates the x component, and j = 1 indicates the y component.
- Next, the predicted motion vector index syntax elements mvp_idx_l0[x0][y0] and mvp_idx_l1[x0][y0], which are indices into the predicted motion vector list (the list of predicted motion vector candidates to be referred to), are provided.
- Here, x0 and y0 are indices indicating the position of the upper left pixel of the prediction block in the picture, and mvp_idx_l0[x0][y0] and mvp_idx_l1[x0][y0] are the L0 and L1 predicted motion vector indices of the prediction block located at (x0, y0) in the picture.
- The function NumMVPCand(LX) returns the total number of predicted motion vector candidates of the prediction block in the prediction direction LX (X is 0 or 1); it is set to the same value as the final candidate number finalNumMVPCand, which is defined according to the size of the prediction block to be encoded or decoded, as described later.
- The predicted motion vector index mvp_idx_lX[x0][y0] is encoded only when the number of predicted motion vector candidates NumMVPCand(LX) derived by the motion vector prediction method is larger than 1, because when the total number of candidates is 1 the index can be identified without being encoded.
- When the predicted motion vector index is 0, the codes of the predicted motion vector index syntax elements mvp_idx_l0[x0][y0] and mvp_idx_l1[x0][y0] are '0'; when the predicted motion vector index is 1, the codes are '10'; and when the predicted motion vector index is 2, the codes are '11'. In the embodiment of the present invention, the number of predicted motion vector candidates is set according to the size of the prediction block to be encoded or decoded, and the details will be described later.
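- The code assignment above corresponds to a truncated unary code whose maximum length depends on the candidate count. The following illustrative sketch (function name assumed) reproduces the '0' / '10' / '11' table and the rule that the index is not coded when only one candidate exists.

```python
def encode_mvp_idx(idx, num_candidates):
    """Return the bit string for the predicted motion vector index,
    or '' when the index need not be coded (single candidate)."""
    if num_candidates <= 1:
        return ""                  # index is inferable as 0, so it is not coded
    if idx < num_candidates - 1:
        return "1" * idx + "0"     # '0', '10', '110', ...
    return "1" * idx               # the last index drops the terminating '0'

for i in range(3):
    print(encode_mvp_idx(i, 3))    # → 0, 10, 11
print(repr(encode_mvp_idx(0, 1)))  # → '' (not coded)
```

- Truncating the last codeword (writing '11' rather than '110' for index 2 when three candidates exist) is possible because the decoder knows the candidate count and can tell that no further '1' bits can follow.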
- the motion vector prediction method according to the embodiment is implemented in the differential motion vector calculation unit 103 of the video encoding device in FIG. 1 and the motion vector calculation unit 204 of the video decoding device in FIG.
- The motion vector prediction method is performed in units of the prediction blocks constituting the coding block, in both the encoding process and the decoding process.
- When the prediction mode of the prediction block is inter prediction (MODE_INTER) and the mode is not the merge mode, the predicted motion vector is derived, in the encoding process, using already encoded motion vectors, and is used when calculating the difference motion vector to be encoded from the motion vector to be encoded; in the decoding process, the predicted motion vector is derived using already decoded motion vectors, and is used when calculating the motion vector to be decoded from the decoded difference motion vector.
- The motion vector prediction method is applied in slices in which motion compensation prediction is performed, that is, when the slice type is a P slice (unidirectional prediction slice) or a B slice (bidirectional prediction slice), and it is applied to prediction blocks whose prediction mode is inter prediction (MODE_INTER) and which encode or decode a difference motion vector rather than using the merge mode.
- FIG. 13 is a diagram illustrating a detailed configuration of the differential motion vector calculation unit 103 of the moving picture encoding device of FIG. A portion surrounded by a thick frame line in FIG. 13 indicates the differential motion vector calculation unit 103.
- The part surrounded by the thick dotted line inside it shows the operation unit of the motion vector prediction method described later. The same unit is also installed in the moving picture decoding apparatus corresponding to the moving picture encoding apparatus of the embodiment, so that identical, consistent derivation results are obtained in encoding and decoding.
- a motion vector prediction method in encoding will be described with reference to FIG.
- the final candidate number finalNumMVPCand of the motion vector predictor is set according to the size of the prediction block to be encoded or decoded.
- When the size of the prediction block is smaller than a prescribed size, finalNumMVPCand is set to a smaller number; otherwise, it is set to a larger number.
- In the present embodiment, the prescribed size sizePUNumMVPCand is set to 8x8, finalNumMVPCand is set to 1 if the size of the luminance-signal prediction block to be encoded or decoded is smaller than the prescribed size sizePUNumMVPCand, and finalNumMVPCand is set to 2 otherwise.
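- The rule above can be sketched as follows; interpreting "smaller than the prescribed size" as a comparison of luminance block areas is an assumption of this sketch, and the names are illustrative.

```python
SIZE_PU_NUM_MVP_CAND = (8, 8)      # prescribed size sizePUNumMVPCand

def final_num_mvp_cand(pu_width, pu_height):
    """finalNumMVPCand: 1 for small luminance prediction blocks, 2 otherwise.
    The area comparison below is an assumption of this sketch."""
    if pu_width * pu_height < SIZE_PU_NUM_MVP_CAND[0] * SIZE_PU_NUM_MVP_CAND[1]:
        return 1                   # small block: skip redundancy removal, no index coded
    return 2

print(final_num_mvp_cand(4, 8))    # → 1
print(final_num_mvp_cand(8, 8))    # → 2
```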
- When the size of the prediction block is small, the number of prediction blocks per unit area increases, so the number of executions of the later-described process of deleting redundant predicted motion vector candidates and of the above-described process of encoding the predicted motion vector index also increases.
- Accordingly, when the size of the prediction block is smaller than the prescribed size, the final candidate number finalNumMVPCand is set to 1, and the amount of processing is reduced by skipping the process of deleting redundant predicted motion vector candidates and the processes of encoding and decoding the predicted motion vector index. Further, when the final candidate number finalNumMVPCand is 1, the predicted motion vector index need not be encoded, so the amount of entropy encoding processing can also be reduced.
- the difference motion vector calculation unit 103 includes a prediction motion vector candidate generation unit 121, a prediction motion vector candidate registration unit 122, a prediction motion vector redundant candidate deletion unit 123, a prediction motion vector candidate number limit unit 124, and a prediction motion vector candidate code amount calculation unit. 125, a predicted motion vector selection unit 126, and a motion vector subtraction unit 127.
- The difference motion vector calculation process in the difference motion vector calculation unit 103 calculates, for each of L0 and L1, the difference motion vector of the motion vector used in the inter prediction method selected for the encoding target block. Specifically, when the prediction mode PredMode of the encoding target block is inter prediction (MODE_INTER) and the inter prediction mode of the encoding target block is L0 prediction (Pred_L0), the L0 predicted motion vector list mvpListL0 is calculated, the predicted motion vector mvpL0 is selected, and the difference motion vector mvdL0 of the L0 motion vector is calculated.
- When the inter prediction mode of the encoding target block is L1 prediction (Pred_L1), the L1 predicted motion vector list mvpListL1 is calculated, the predicted motion vector mvpL1 is selected, and the difference motion vector mvdL1 of the L1 motion vector is calculated.
- When the inter prediction mode of the encoding target block is bi-prediction (Pred_BI), the L0 predicted motion vector list mvpListL0 is calculated, the L0 predicted motion vector mvpL0 is selected, and the difference motion vector mvdL0 of the L0 motion vector mvL0 is calculated; likewise, the L1 predicted motion vector list mvpListL1 is calculated, the L1 predicted motion vector mvpL1 is selected, and the difference motion vector mvdL1 of the L1 motion vector mvL1 is calculated.
- In the following description, L0 and L1 are represented by a common LX. In the process of calculating the L0 difference motion vector, X is 0, and in the process of calculating the L1 difference motion vector, X is 1. Further, during the processing of LX, the other reference list is represented as LY.
- The predicted motion vector candidate generation unit 121 derives one predicted motion vector candidate from each of three prediction block groups: the prediction block group adjacent on the left side (the prediction blocks adjacent to the left side of the target prediction block in the same picture: A0 and A1 in FIG. 5), the prediction block group adjacent on the upper side (the prediction blocks adjacent to the upper side of the target prediction block in the same picture: B0, B1, and B2 in FIG. 5), and the prediction block group at a different time (a group of already encoded prediction blocks existing at the same position as, or at a position near, the target prediction block in a picture temporally different from the picture of the prediction block to be encoded: T0 and T1). The candidates derived from these groups are mvLXA, mvLXB, and mvLXCol, respectively.
- mvLXA and mvLXB are referred to as spatial predicted motion vectors, and mvLXCol is referred to as a temporal predicted motion vector.
- These predicted motion vector candidates mvLXA, mvLXB, and mvLXCol may be derived by scaling according to the relationship between the POC of the picture to be encoded or decoded (the picture to be encoded in the encoding process, or the picture to be decoded in the decoding process) and the POC of the reference picture.
- For each prediction block group, the predicted motion vector candidate generation unit 121 performs the condition determinations described later on the prediction blocks in the group in a predetermined order, selects the motion vector of the first prediction block that satisfies a condition, and sets it as the predicted motion vector candidate mvLXA, mvLXB, or mvLXCol.
- In the prediction block group adjacent on the left side, the lowest prediction block has the highest priority, and priority is assigned from bottom to top. In the prediction block group adjacent on the upper side, the rightmost prediction block has the highest priority, and priority is assigned from right to left. In the prediction block group at a different time, the prediction block T0 has the highest priority, and priority is assigned in the order T0, T1.
- Condition determination 1: In the adjacent prediction block, prediction is performed using the same reference list LX as the LX motion vector that is the difference motion vector calculation target of the prediction block to be encoded or decoded, and using the same reference index, that is, the same reference picture.
- Condition determination 2: In the adjacent prediction block, prediction is performed using a reference list LY different from that of the LX motion vector that is the difference motion vector calculation target of the prediction block to be encoded or decoded, but using the same reference picture.
- Condition determination 3: In the adjacent prediction block, prediction is performed using the same reference list LX as the LX motion vector that is the difference motion vector calculation target of the prediction block to be encoded or decoded, but using a different reference picture.
- Condition determination 4: In the adjacent prediction block, prediction is performed using a reference list LY different from that of the LX motion vector that is the difference motion vector calculation target of the prediction block to be encoded or decoded, and using a different reference picture.
- When condition determination 1 or condition determination 2 is satisfied, the motion vector of the corresponding adjacent prediction block refers to the same reference picture, and is therefore used as a predicted motion vector candidate as it is.
- When condition determination 3 or condition determination 4 is satisfied, the motion vector of the corresponding adjacent prediction block refers to a different reference picture, and therefore a predicted motion vector candidate is calculated by scaling that motion vector.
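- The scaling applied for condition determinations 3 and 4 can be sketched with a generic POC-distance ratio, which is an assumed simplification of this sketch rather than the exact arithmetic of the present embodiment (which may use fixed-point division, as described later).

```python
def scale_mv(mv, cur_poc, cur_ref_poc, neigh_ref_poc):
    """Scale a neighbor's motion vector by the ratio of POC distances."""
    tb = cur_poc - cur_ref_poc     # distance: current picture -> its reference
    td = cur_poc - neigh_ref_poc   # distance: current picture -> neighbor's reference
    if td == 0:
        return mv                  # degenerate case: avoid division by zero
    return (mv[0] * tb // td, mv[1] * tb // td)

# The neighbor references a picture 2 pictures away, while the current block
# references one 4 pictures away, so the candidate vector is doubled.
print(scale_mv((3, -6), cur_poc=8, cur_ref_poc=4, neigh_ref_poc=6))  # → (6, -12)
```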
- Depending on how the above four condition determinations are combined, the following five methods of loop scanning of the spatial prediction blocks can be set.
- The appropriateness of the resulting prediction vector and the maximum processing amount differ for each method, and a method is selected and set in consideration of these factors.
- Scanning method 1 will be described later in detail with reference to the flowcharts of FIGS. 19 to 23. Since a person skilled in the art can appropriately design procedures for carrying out scanning methods 2 to 5 in accordance with the procedure for carrying out scanning method 1, their detailed description is omitted.
- Although the loop processing of the spatial prediction block scan in the moving picture encoding apparatus is described here, the same processing can also be performed in the moving picture decoding apparatus.
- Scanning method 1: Priority is given to finding a predicted motion vector that uses the same reference picture and therefore requires no scaling operation. Of the four condition determinations, two are performed for each prediction block; if neither is satisfied, the process moves on to the condition determinations for the next prediction block. Condition determination 1 and condition determination 2 are performed on each prediction block in the first round, and condition determination 3 and condition determination 4 in the next round.
- Specifically, the condition determinations are performed in the following priority order (N is A or B):
- 1. Condition determination 1 of prediction block N0 (same reference list LX, same reference picture)
- 2. Condition determination 2 of prediction block N0 (different reference list LY, same reference picture)
- 3. Condition determination 1 of prediction block N1 (same reference list LX, same reference picture)
- 4. Condition determination 2 of prediction block N1 (different reference list LY, same reference picture)
- 5. Condition determination 1 of prediction block N2 (same reference list LX, same reference picture), upper-adjacent prediction block group only
- 6. Condition determination 2 of prediction block N2 (different reference list LY, same reference picture), upper-adjacent prediction block group only
- 7. Condition determination 3 of prediction block N0 (same reference list LX, different reference picture)
- 8. Condition determination 4 of prediction block N0 (different reference list LY, different reference picture)
- 9. Condition determination 3 of prediction block N1 (same reference list LX, different reference picture)
- 10. Condition determination 4 of prediction block N1 (different reference list LY, different reference picture)
- 11. Condition determination 3 of prediction block N2 (same reference list LX, different reference picture), upper-adjacent prediction block group only
- 12. Condition determination 4 of prediction block N2 (different reference list LY, different reference picture), upper-adjacent prediction block group only
- In scanning method 1, a predicted motion vector that uses the same reference picture and requires no scaling operation is easily selected, so there is an effect of increasing the possibility that the code amount of the differential motion vector is reduced.
- Further, the number of rounds of condition determination is a maximum of two.
- The number of memory accesses to the encoded information of the prediction blocks is therefore smaller than in scanning method 2 described later when hardware implementation is considered, and complexity is reduced.
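- The first-match loop of scanning method 1 described above can be sketched as follows. This is a non-normative illustration: the data model (each prediction block as a dictionary mapping a reference list name to a (motion vector, reference picture POC) pair, with None for unavailable blocks) and the function name are assumptions of this sketch, not part of the embodiment. The caller passes the relevant prediction block group (N0, N1, and, for the upper adjacent group, N2).

```python
# Hypothetical sketch of scanning method 1: two rounds over the prediction
# block group; conditions 1-2 (same reference picture, no scaling) are
# checked first, then conditions 3-4 (different reference picture, scaling
# required).

def scan_method_1(blocks, ref_list_lx, ref_poc):
    """blocks: list of dicts like {'L0': (mv, poc), 'L1': (mv, poc)},
       or None for an unavailable block.
       ref_list_lx: 'L0' or 'L1'; ref_poc: POC of the target reference picture.
       Returns (motion vector, scaling_required) or (None, False)."""
    ref_list_ly = 'L1' if ref_list_lx == 'L0' else 'L0'
    # Round 1: condition 1 (same list LX, same picture),
    #          condition 2 (other list LY, same picture)
    for blk in blocks:
        if blk is None:
            continue
        for lst in (ref_list_lx, ref_list_ly):
            entry = blk.get(lst)
            if entry is not None and entry[1] == ref_poc:
                return entry[0], False          # no scaling required
    # Round 2: condition 3 (same list LX, different picture),
    #          condition 4 (other list LY, different picture)
    for blk in blocks:
        if blk is None:
            continue
        for lst in (ref_list_lx, ref_list_ly):
            entry = blk.get(lst)
            if entry is not None:
                return entry[0], True           # scaling required
    return None, False                          # no candidate found
```

- For example, if the second prediction block holds a motion vector for the other reference list LY that points to the same reference picture, it is returned in the first round with no scaling required, even when a later block would satisfy condition 1.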
- Scanning method 2: Of the four condition determinations, one is performed for each prediction block; if the condition is not satisfied, the process moves on to the condition determination of the adjacent prediction block. The process ends once the condition determinations have been performed over four rounds of the prediction blocks.
- condition determination is performed in the following priority order.
- N is A or B
- 1. Condition determination 1 of prediction block N0 (same reference list LX, same reference picture)
- 2. Condition determination 1 of prediction block N1 (same reference list LX, same reference picture)
- 3. Condition determination 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
- 4. Condition determination 2 of prediction block N0 (different reference list LY, same reference picture)
- 5. Condition determination 2 of prediction block N1 (different reference list LY, same reference picture)
- 6. Condition determination 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
- 7. Condition determination 3 of prediction block N0 (same reference list LX, different reference picture)
- 8. Condition determination 3 of prediction block N1 (same reference list LX, different reference picture)
- 9. Condition determination 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
- 10. Condition determination 4 of prediction block N0 (different reference list LY, different reference picture)
- 11. Condition determination 4 of prediction block N1 (different reference list LY, different reference picture)
- 12. Condition determination 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
- In scanning method 2 as well, a predicted motion vector that uses the same reference picture and requires no scaling operation is easily selected, so there is an effect of increasing the possibility that the code amount of the differential motion vector is reduced.
- Further, the number of rounds of condition determination is four, so the number of memory accesses to the encoded information of the prediction blocks is larger than in scanning method 1 when hardware implementation is considered, but a predicted motion vector of the same reference list is more easily selected.
- Scanning method 3: In the first round, condition determination 1 is performed for each prediction block; if the condition is not satisfied, the process moves on to the condition determination of the adjacent prediction block. In the next round, condition determinations 2, 3, and 4 are performed in order for each prediction block before moving on to the next prediction block.
- condition determination is performed in the following priority order.
- N is A or B
- 1. Condition determination 1 of prediction block N0 (same reference list LX, same reference picture)
- 2. Condition determination 1 of prediction block N1 (same reference list LX, same reference picture)
- 3. Condition determination 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
- 4. Condition determination 2 of prediction block N0 (different reference list LY, same reference picture)
- 5. Condition determination 3 of prediction block N0 (same reference list LX, different reference picture)
- 6. Condition determination 4 of prediction block N0 (different reference list LY, different reference picture)
- 7. Condition determination 2 of prediction block N1 (different reference list LY, same reference picture)
- 8. Condition determination 3 of prediction block N1 (same reference list LX, different reference picture)
- 9. Condition determination 4 of prediction block N1 (different reference list LY, different reference picture)
- 10. Condition determination 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
- 11. Condition determination 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
- 12. Condition determination 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
- In scanning method 3, a predicted motion vector in the same reference list that uses the same reference picture and requires no scaling operation is easily selected, so there is an effect of increasing the possibility that the code amount of the differential motion vector is reduced.
- Further, the maximum number of rounds of condition determination is two, so the number of memory accesses to the encoded information of the prediction blocks is smaller than in scanning method 2 when hardware implementation is considered, and complexity is reduced.
- Scanning method 4: Priority is given to the condition determinations of the same prediction block, and the four condition determinations are performed within one prediction block. If none of the conditions is satisfied, it is determined that no motion vector matching the conditions exists in that prediction block, and the condition determinations of the next prediction block are performed.
- condition determination is performed in the following priority order.
- N is A or B
- 1. Condition determination 1 of prediction block N0 (same reference list LX, same reference picture)
- 2. Condition determination 2 of prediction block N0 (different reference list LY, same reference picture)
- 3. Condition determination 3 of prediction block N0 (same reference list LX, different reference picture)
- 4. Condition determination 4 of prediction block N0 (different reference list LY, different reference picture)
- 5. Condition determination 1 of prediction block N1 (same reference list LX, same reference picture)
- 6. Condition determination 2 of prediction block N1 (different reference list LY, same reference picture)
- 7. Condition determination 3 of prediction block N1 (same reference list LX, different reference picture)
- 8. Condition determination 4 of prediction block N1 (different reference list LY, different reference picture)
- 9. Condition determination 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
- 10. Condition determination 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
- 11. Condition determination 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
- 12. Condition determination 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
- In scanning method 4, the number of rounds of condition determination is at most one, so the number of memory accesses to the encoded information of the prediction blocks when hardware implementation is considered is smaller than in scanning methods 1, 2, and 3, and complexity is reduced.
- Scanning method 5: As in scanning method 4, priority is given to the condition determinations of the same prediction block, and the four condition determinations are performed within one prediction block. If none of the conditions is satisfied, it is determined that no motion vector matching the conditions exists in that prediction block, and the condition determinations of the next prediction block are performed. However, in the condition determinations within a prediction block, scanning method 4 gives priority to the same reference picture, whereas scanning method 5 gives priority to the same reference list.
- condition determination is performed in the following priority order.
- N is A or B
- 1. Condition determination 1 of prediction block N0 (same reference list LX, same reference picture)
- 2. Condition determination 3 of prediction block N0 (same reference list LX, different reference picture)
- 3. Condition determination 2 of prediction block N0 (different reference list LY, same reference picture)
- 4. Condition determination 4 of prediction block N0 (different reference list LY, different reference picture)
- 5. Condition determination 1 of prediction block N1 (same reference list LX, same reference picture)
- 6. Condition determination 3 of prediction block N1 (same reference list LX, different reference picture)
- 7. Condition determination 2 of prediction block N1 (different reference list LY, same reference picture)
- 8. Condition determination 4 of prediction block N1 (different reference list LY, different reference picture)
- 9. Condition determination 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
- 10. Condition determination 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
- 11. Condition determination 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
- 12. Condition determination 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
- In scanning method 5, the number of times the reference list of a prediction block is switched can be reduced compared with scanning method 4, so complexity can be reduced by decreasing the number of memory accesses and the amount of processing such as condition determination.
- Further, as in scanning method 4, the number of memory accesses to the encoded information of the prediction blocks when hardware implementation is considered is smaller than in scanning methods 1, 2, and 3, and complexity is reduced.
- the predicted motion vector candidate registration unit 122 stores the calculated predicted motion vector candidates mvLXA, mvLXB, and mvLXCol in the predicted motion vector list mvpListLX.
- Next, the predicted motion vector redundant candidate deletion unit 123 compares the motion vector values of the predicted motion vector candidates stored in the predicted motion vector list mvpListLX of LX, determines which candidates have the same motion vector value, leaves one of the candidates determined to have the same motion vector value, deletes the rest from the predicted motion vector list mvpListLX, and updates the predicted motion vector list mvpListLX so that the candidates do not overlap. However, when the defined final candidate number finalNumMVPCand is 1, this redundancy determination process can be omitted.
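- The redundancy deletion just described can be sketched as follows; the function name and the representation of motion vectors as (x, y) tuples are illustrative assumptions of this sketch.

```python
# Minimal sketch of the redundant-candidate deletion: among candidates with
# identical motion vector values, only the first occurrence is kept, so that
# the updated list contains no duplicate candidates.

def remove_redundant(mvp_list, final_num_mvp_cand):
    if final_num_mvp_cand == 1:       # redundancy determination may be omitted
        return list(mvp_list)
    seen, unique = set(), []
    for mv in mvp_list:
        if mv not in seen:            # keep the first candidate with this value
            seen.add(mv)
            unique.append(mv)
    return unique
```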
- Next, in the predicted motion vector candidate number limiting unit 124, the number of predicted motion vector candidates registered in the predicted motion vector list mvpListLX of LX is counted, and the predicted motion vector candidate number numMVPCandLX of LX is set to the counted value.
- The predicted motion vector candidate number limiting unit 124 limits the predicted motion vector candidate number numMVPCandLX registered in the predicted motion vector list mvpListLX of LX to the final candidate number finalNumMVPCand defined according to the size of the prediction block.
- In the present embodiment, the final candidate number finalNumMVPCand is defined according to the size of the prediction block. This is because, if the number of predicted motion vector candidates registered in the predicted motion vector list fluctuated with the state of construction of the list, the predicted motion vector index could not be entropy-decoded on the decoding side until the predicted motion vector list had been constructed.
- By fixing the final candidate number, the predicted motion vector index can be entropy-decoded independently of the construction of the predicted motion vector list, and even if an error occurs during decoding of the encoded bit string of another picture, entropy decoding of the encoded bit string of the current picture can continue without being affected by that error.
- When the predicted motion vector candidate number numMVPCandLX of LX is smaller than the defined final candidate number finalNumMVPCand, the predicted motion vector candidate number limiting unit 124 adds motion vectors having the value (0, 0) (both the horizontal and vertical components are 0) to the predicted motion vector list mvpListLX until numMVPCandLX reaches finalNumMVPCand, thereby limiting the number of predicted motion vector candidates to the specified value.
- In this case, motion vectors having the value (0, 0) may be added in duplicate, but on the decoding side the predicted motion vector can still be determined whatever value the predicted motion vector index takes within the range from 0 to (the specified candidate number - 1).
- Conversely, when the predicted motion vector candidate number numMVPCandLX of LX is larger than the defined final candidate number finalNumMVPCand, all the elements registered at indices larger than finalNumMVPCand - 1 are deleted from the predicted motion vector list mvpListLX.
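- The padding and truncation just described can be sketched as follows, with the same illustrative (x, y)-tuple representation (the function name is an assumption):

```python
# Sketch of the candidate-number limiting: pad with (0, 0) when the list is
# shorter than finalNumMVPCand, truncate indices >= finalNumMVPCand when it
# is longer, so the list always holds exactly finalNumMVPCand candidates.

def limit_candidates(mvp_list, final_num_mvp_cand):
    mvp_list = list(mvp_list)
    while len(mvp_list) < final_num_mvp_cand:
        mvp_list.append((0, 0))           # (0, 0) may be added in duplicate
    del mvp_list[final_num_mvp_cand:]     # delete indices > finalNumMVPCand - 1
    return mvp_list
```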
- the updated prediction motion vector list mvpListLX is supplied to the prediction motion vector candidate code amount calculation unit 125 and the prediction motion vector selection unit 126.
- The predicted motion vector selection unit 126 selects, from among the elements registered in the predicted motion vector list mvpListLX of LX, the predicted motion vector candidate mvpListLX[i] that minimizes the code amount per predicted motion vector candidate as the predicted motion vector mvpLX.
- When there are multiple candidates yielding the smallest code amount, the candidate mvpListLX[i] whose index i in the predicted motion vector list mvpListLX is smallest is selected as the optimum predicted motion vector mvpLX of LX.
- the selected predicted motion vector mvpLX is supplied to the motion vector subtraction unit 127. Further, the index i in the predicted motion vector list corresponding to the selected predicted motion vector mvpLX is output as a predicted motion vector index mvpIdxLX of LX.
- the motion vector subtraction unit 127 calculates the LX differential motion vector mvdLX by subtracting the selected LX predicted motion vector mvpLX from the LX motion vector mvLX, and outputs the differential motion vector mvdLX.
- mvdLX = mvLX - mvpLX
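- A minimal sketch of this subtraction, assuming motion vectors are represented as (horizontal, vertical) integer pairs (the function name is an illustrative assumption):

```python
# The differential motion vector is the component-wise difference between
# the motion vector and the selected predicted motion vector.

def motion_vector_subtract(mv_lx, mvp_lx):
    return (mv_lx[0] - mvp_lx[0], mv_lx[1] - mvp_lx[1])
```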
- the prediction method determination unit 106 determines a prediction method.
- The code amount and the coding distortion are calculated for each prediction mode, and the prediction block size and prediction mode that yield the smallest generated code amount and coding distortion are determined.
- The differential motion vector mvdLX of LX supplied from the motion vector subtraction unit 127 of the differential motion vector calculation unit 103 and the predicted motion vector index mvpIdxLX of LX representing the predicted motion vector supplied from the predicted motion vector selection unit 126 are encoded, and the code amount of the motion information is calculated.
- the code amount of the prediction residual signal obtained by encoding the prediction residual signal between the prediction image signal supplied from the motion compensation prediction unit 105 and the image signal to be encoded supplied from the image memory 101 is calculated.
- the total generated code amount obtained by adding the code amount of the motion information and the code amount of the prediction residual signal is calculated and used as the first evaluation value.
- The prediction residual signal is decoded for distortion evaluation, and the coding distortion is calculated as a ratio representing the error from the original image signal caused by the encoding.
- the prediction block size and the prediction mode that produce the least generated code amount and the coding distortion are determined.
- The motion vector prediction method described above is applied to the motion vector mvLX corresponding to the prediction mode of the determined prediction block size, and the index mvpIdxLX representing the predicted motion vector is encoded as the syntax element mvp_idx_lX[i] represented by the second syntax pattern in units of prediction blocks.
- The generated code amount calculated here is preferably obtained by simulating the encoding process, but it may also be approximated or estimated in a simplified manner.
- FIG. 14 is a diagram illustrating a detailed configuration of the motion vector calculation unit 204 of the video decoding device of FIG. 2 corresponding to the video encoding device of the embodiment.
- a portion surrounded by a thick frame line in FIG. 14 indicates the motion vector calculation unit 204.
- The part surrounded by the thick dotted line inside it shows the operation part of the motion vector prediction method described later; the same part is also installed in the corresponding moving picture coding apparatus, so that consistent, identical determination results are obtained in encoding and decoding.
- a motion vector prediction method in decoding will be described with reference to FIG.
- First, on the decoding side, as on the encoding side, the final candidate number finalNumMVPCand is set according to the size of the prediction block to be decoded.
- The motion vector calculation unit 204 sets finalNumMVPCand to a smaller number when the size of the prediction block of the luminance signal to be decoded is smaller than or equal to the specified size sizePUNumMVPCand, and to a larger number otherwise.
- Here, the specified size sizePUNumMVPCand is set to 8x8, and finalNumMVPCand is set to 1 when the size of the prediction block of the luminance signal to be decoded is smaller than or equal to the specified size sizePUNumMVPCand, and to 2 otherwise.
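- The rule above can be sketched as follows. Interpreting the 8x8 specified size as a comparison of the luminance block area (width times height) is an assumption of this illustration, as are the function and constant names.

```python
# Sketch of the final-candidate-number rule: 1 candidate for luma prediction
# blocks at or below the specified 8x8 size, 2 candidates otherwise.
# Comparing by area (width * height) is an assumption of this sketch.

SIZE_PU_NUM_MVP_CAND = 8 * 8   # stands in for sizePUNumMVPCand

def final_num_mvp_cand(pu_width, pu_height):
    if pu_width * pu_height <= SIZE_PU_NUM_MVP_CAND:
        return 1
    return 2
```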
- The motion vector calculation unit 204 includes a predicted motion vector candidate generation unit 221, a predicted motion vector candidate registration unit 222, a predicted motion vector redundant candidate deletion unit 223, a predicted motion vector candidate number limiting unit 224, a predicted motion vector selection unit 225, and a motion vector addition unit 226.
- In the motion vector calculation unit 204, a motion vector used in inter prediction is calculated for each of L0 and L1. Specifically, when the prediction mode PredMode of the decoding target block is inter prediction (MODE_INTER) and the inter prediction mode of the decoding target block is L0 prediction (Pred_L0), the predicted motion vector list mvpListL0 of L0 is calculated, the predicted motion vector mvpL0 is selected, and the motion vector mvL0 of L0 is calculated.
- When the inter prediction mode of the decoding target block is L1 prediction (Pred_L1), the predicted motion vector list mvpListL1 of L1 is calculated, the predicted motion vector mvpL1 is selected, and the motion vector mvL1 of L1 is calculated.
- When the inter prediction mode of the decoding target block is bi-prediction (Pred_BI), both L0 prediction and L1 prediction are performed: the predicted motion vector list mvpListL0 of L0 is calculated, the predicted motion vector mvpL0 of L0 is selected, and the motion vector mvL0 of L0 is calculated; likewise, the predicted motion vector list mvpListL1 of L1 is calculated, the predicted motion vector mvpL1 of L1 is selected, and the motion vector mvL1 of L1 is calculated. As on the encoding side, the motion vector calculation processing is performed for each of L0 and L1 on the decoding side, but the processing is common to L0 and L1. Therefore, in the following description, L0 and L1 are represented by a common LX: X is 0 in the process of calculating the L0 motion vector, and X is 1 in the process of calculating the L1 motion vector. When information of the other list is referred to instead of LX during the process of calculating the LX motion vector, the other list is represented as LY.
- The motion vector calculation unit 204 includes the predicted motion vector candidate generation unit 221, the predicted motion vector candidate registration unit 222, the predicted motion vector redundant candidate deletion unit 223, and the predicted motion vector candidate number limiting unit 224, which are specified to perform the same operations as the predicted motion vector candidate generation unit 121, the predicted motion vector candidate registration unit 122, the predicted motion vector redundant candidate deletion unit 123, and the predicted motion vector candidate number limiting unit 124 in the differential motion vector calculation unit 103 on the encoding side. Thus, identical predicted motion vector candidates, consistent between encoding and decoding, are obtained on the encoding side and the decoding side.
- the motion vector predictor candidate generation unit 221 performs the same processing as that of the motion vector predictor candidate generation unit 121 on the encoding side in FIG.
- The motion vectors of decoded prediction blocks adjacent to the decoding target block in the same picture, and of decoded blocks located at the same position as or near the decoding target block in a different picture, are read out together with their encoded information from the encoded information storage memory 209.
- Predicted motion vector candidates mvLXA, mvLXB, and mvLXCol are generated for each prediction block group from the motion vectors of the other decoded blocks read from the encoded information storage memory 209 and supplied to the predicted motion vector candidate registration unit 222.
- These predicted motion vector candidates mvLXA, mvLXB, and mvLXCol may be calculated by scaling according to the reference index. Since the predicted motion vector candidate generation unit 221 performs the same processing as the predicted motion vector candidate generation unit 121 on the encoding side in FIG. 13, the condition determinations of scanning methods 1, 2, 3, 4, and 5 for calculating the predicted motion vector in the predicted motion vector candidate generation unit 121 on the encoding side can also be applied to the predicted motion vector candidate generation unit 221; detailed description is therefore omitted here.
- the motion vector predictor candidate registration unit 222 performs the same processing as the motion vector predictor candidate registration unit 122 on the encoding side in FIG.
- the calculated prediction motion vector candidates mvLXA, mvLXB, and mvLXCol are stored in the LX prediction motion vector list mvpListLX.
- the motion vector redundancy candidate deletion unit 223 performs the same process as the motion vector redundancy candidate deletion unit 123 on the encoding side in FIG.
- Among the predicted motion vector candidates stored in the predicted motion vector list mvpListLX of LX, those having the same motion vector value are determined; one of the predicted motion vector candidates determined to have the same motion vector value is left, the rest are deleted from the predicted motion vector list mvpListLX, and the predicted motion vector list mvpListLX is updated so that the predicted motion vector candidates do not overlap.
- When the defined final candidate number finalNumMVPCand is 1, this redundancy determination process can be omitted.
- the motion vector predictor candidate number limiting unit 224 performs the same processing as the motion vector predictor candidate number limiting unit 124 on the encoding side in FIG.
- In the predicted motion vector candidate number limiting unit 224, the number of elements registered in the predicted motion vector list mvpListLX is counted, and the predicted motion vector candidate number numMVPCandLX of LX is set to the counted value.
- For the reason explained on the encoding side, the predicted motion vector candidate number numMVPCandLX registered in the predicted motion vector list mvpListLX is limited to the final candidate number finalNumMVPCand defined according to the size of the prediction block. When the predicted motion vector candidate number numMVPCandLX of LX is smaller than the defined final candidate number finalNumMVPCand, motion vectors having the value (0, 0) are added to the predicted motion vector list mvpListLX until numMVPCandLX reaches finalNumMVPCand, thereby limiting the number of predicted motion vector candidates to the specified value.
- In this case, motion vectors having the value (0, 0) may be added in duplicate, but on the decoding side the predicted motion vector can be determined whatever value the predicted motion vector index takes within the range from 0 to (the specified candidate number - 1).
- Conversely, when the predicted motion vector candidate number numMVPCandLX of LX is larger than the defined final candidate number finalNumMVPCand, all the elements registered at indices larger than finalNumMVPCand - 1 are deleted from the predicted motion vector list mvpListLX.
- the updated motion vector predictor list mvpListLX is supplied to the motion vector predictor selection unit 225.
- the predicted motion vector index mvpIdxLX of the LX predicted motion vector decoded by the first encoded bit string decoding unit 202 is supplied to the predicted motion vector selection unit 225.
- However, when the defined final candidate number finalNumMVPCand is 1, the predicted motion vector index mvpIdxLX of the LX predicted motion vector is not encoded and therefore is not supplied to the predicted motion vector selection unit 225.
- The predicted motion vector selection unit 225 receives the predicted motion vector index mvpIdxLX of LX decoded by the first encoded bit string decoding unit 202 and extracts the predicted motion vector candidate mvpListLX[mvpIdxLX] corresponding to the supplied index from the predicted motion vector list mvpListLX as the predicted motion vector mvpLX of LX. However, when the defined final candidate number finalNumMVPCand is 1, the sole predicted motion vector candidate mvpListLX[0], registered at index 0 of the predicted motion vector list mvpListLX, is extracted. The extracted predicted motion vector candidate is supplied to the motion vector addition unit 226 as the predicted motion vector mvpLX.
- Subsequently, the motion vector addition unit 226 calculates the motion vector mvLX of LX by adding the differential motion vector mvdLX of LX decoded and supplied by the first encoded bit string decoding unit 202 and the predicted motion vector mvpLX of LX, and outputs the motion vector mvLX.
- mvLX = mvpLX + mvdLX
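- The decoding-side reconstruction can be sketched as follows; the function name and the convention of passing the decoded index and final candidate number explicitly are illustrative assumptions.

```python
# On the decoding side, the motion vector is reconstructed by adding the
# decoded differential motion vector to the predicted motion vector selected
# by mvpIdxLX; index 0 is implied when finalNumMVPCand is 1, since the index
# is then not present in the bit stream.

def reconstruct_motion_vector(mvp_list, mvp_idx_lx, mvd_lx, final_num_mvp_cand):
    idx = 0 if final_num_mvp_cand == 1 else mvp_idx_lx
    mvp_lx = mvp_list[idx]
    return (mvp_lx[0] + mvd_lx[0], mvp_lx[1] + mvd_lx[1])
```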
- the LX motion vector mvLX is calculated for each prediction block.
- a predicted image signal is generated by motion compensation using this motion vector, and is added to the decoded prediction residual signal to generate a decoded image signal.
- FIG. 15 is a flowchart showing a difference motion vector calculation processing procedure by the moving image encoding device
- FIG. 16 is a flowchart showing a motion vector calculation processing procedure by the moving image decoding device.
- The differential motion vector calculation processing procedure on the encoding side will be described with reference to FIG. 15.
- First, the differential motion vector calculation unit 103 sets the final candidate number finalNumMVPCand of predicted motion vector candidates (S100).
- The detailed processing procedure of step S100 will be described later with reference to the flowchart of FIG.
- Next, the differential motion vector calculation unit 103 calculates, for each of L0 and L1, the differential motion vector of the motion vector used in the inter prediction selected for the encoding target block (S101 to S106). Specifically, when the prediction mode PredMode of the encoding target block is inter prediction (MODE_INTER) and the inter prediction mode is L0 prediction (Pred_L0), the predicted motion vector list mvpListL0 of L0 is calculated, the predicted motion vector mvpL0 is selected, and the differential motion vector mvdL0 of the motion vector mvL0 of L0 is calculated.
- When the inter prediction mode of the encoding target block is L1 prediction (Pred_L1), the predicted motion vector list mvpListL1 of L1 is calculated, the predicted motion vector mvpL1 is selected, and the differential motion vector mvdL1 of the motion vector mvL1 of L1 is calculated.
- When the inter prediction mode of the encoding target block is bi-prediction (Pred_BI), the predicted motion vector list mvpListL0 of L0 is calculated, the predicted motion vector mvpL0 of L0 is selected, and the differential motion vector mvdL0 of the motion vector mvL0 of L0 is calculated; likewise, the predicted motion vector list mvpListL1 of L1 is calculated, the predicted motion vector mvpL1 of L1 is selected, and the differential motion vector mvdL1 of the motion vector mvL1 of L1 is calculated.
- In the following description, L0 and L1 are represented by a common LX: in the process of calculating the differential motion vector of L0, X is 0, and in the process of calculating the differential motion vector of L1, X is 1.
- When information of the other list is referred to instead of LX during the process of calculating the differential motion vector of LX, the other list is represented as LY.
- the LX predicted motion vector candidates are calculated to construct the LX predicted motion vector list mvpListLX (S103).
- the motion vector predictor candidate generation unit 121 in the motion vector difference calculation unit 103 calculates a plurality of motion vector predictor candidates, and the motion vector predictor candidate registration unit 122 calculates the motion vector predictor candidates calculated in the motion vector predictor list mvpListLX.
- the motion vector redundancy candidate deletion unit 123 deletes unnecessary motion vector predictor candidates, and the motion vector predictor candidate number limiting unit 124 registers the number of motion vector motion vector candidates numMVPCandLX in the motion vector list mvpListLX.
- the detailed processing procedure of step S103 will be described later with reference to the flowchart of FIG.
- the predicted motion vector candidate code amount calculation unit 125 and the predicted motion vector selection unit 126 select the LX predicted motion vector mvpLX from the LX predicted motion vector list mvpListLX (S104).
- the motion vector predictor candidate code amount calculation unit 125 calculates, for each element of the predicted motion vector list mvpListLX, the differential motion vector between the motion vector mvLX and each predicted motion vector candidate mvpListLX[i] stored in the list, and the code amount required when that differential motion vector is encoded. The predicted motion vector selection unit 126 then selects, from among the elements registered in the predicted motion vector list mvpListLX, the predicted motion vector candidate mvpListLX[i] that minimizes the code amount as the predicted motion vector mvpLX.
- the motion vector subtraction unit 127 calculates the LX differential motion vector mvdLX by subtracting the selected LX predicted motion vector mvpLX from the LX motion vector mvLX (S105).
- mvdLX = mvLX - mvpLX
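The selection of S104 and the subtraction of S105 can be sketched as follows. This is a minimal illustration, assuming motion vectors are (x, y) integer tuples and using a crude bit-length proxy (`mvd_bit_cost`, a hypothetical helper) in place of the encoder's actual entropy coding.

```python
def mvd_bit_cost(mvd):
    # Crude proxy for the code amount of a differential motion vector:
    # larger components need more bits (not a real entropy coder).
    return sum(abs(c).bit_length() + 1 for c in mvd)

def select_mvp_and_mvd(mvLX, mvpListLX):
    # S104: pick the candidate whose differential motion vector costs least.
    best_i = min(
        range(len(mvpListLX)),
        key=lambda i: mvd_bit_cost((mvLX[0] - mvpListLX[i][0],
                                    mvLX[1] - mvpListLX[i][1])),
    )
    mvpLX = mvpListLX[best_i]
    # S105: mvdLX = mvLX - mvpLX
    mvdLX = (mvLX[0] - mvpLX[0], mvLX[1] - mvpLX[1])
    return best_i, mvpLX, mvdLX
```

For example, with mvLX = (5, -3) and candidates [(0, 0), (4, -2)], the second candidate is chosen and mvdLX becomes (1, -1).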
- the motion vector calculation unit 204 calculates the motion vectors used for inter prediction for each of L0 and L1 (S201 to S206). Specifically, when the prediction mode PredMode of the decoding target block is inter prediction (MODE_INTER) and the inter prediction mode of the decoding target block is L0 prediction (Pred_L0), the L0 predicted motion vector list mvpListL0 is calculated, the predicted motion vector mvpL0 is selected, and the L0 motion vector mvL0 is calculated.
- when the inter prediction mode of the decoding target block is L1 prediction (Pred_L1), the L1 predicted motion vector list mvpListL1 is calculated, the predicted motion vector mvpL1 is selected, and the L1 motion vector mvL1 is calculated.
- when the inter prediction mode of the decoding target block is bi-prediction (Pred_BI), both L0 prediction and L1 prediction are performed: the L0 predicted motion vector list mvpListL0 is calculated, the L0 predicted motion vector mvpL0 is selected, and the L0 motion vector mvL0 is calculated; likewise, the L1 predicted motion vector list mvpListL1 is calculated, the L1 predicted motion vector mvpL1 is selected, and the L1 motion vector mvL1 is calculated.
- in the following, L0 and L1 are represented as a common LX: X is 0 in the process of calculating the L0 motion vector, and X is 1 in the process of calculating the L1 motion vector. During the calculation of LX, the other reference list is represented as LY.
- in the motion vector calculation unit 204, the motion vector predictor candidate generation unit 221 calculates a plurality of predicted motion vector candidates, the motion vector predictor candidate registration unit 222 registers the calculated candidates in the predicted motion vector list mvpListLX, and the motion vector redundancy candidate deletion unit 223 deletes unnecessary predicted motion vector candidates. The motion vector predictor candidate number limiting unit 224 then limits the number numMVPCandLX of LX predicted motion vector candidates registered in the predicted motion vector list mvpListLX to the prescribed final candidate number finalNumMVPCand, thereby constructing the final predicted motion vector list mvpListLX.
- the detailed processing procedure of step S203 will be described later with reference to the flowchart of FIG.
- the predicted motion vector selection unit 225 selects, from the predicted motion vector list mvpListLX, the predicted motion vector candidate mvpListLX[mvpIdxLX] corresponding to the predicted motion vector index mvpIdxLX decoded by the first encoded bit string decoding unit 202, and extracts it as the predicted motion vector mvpLX (S204). However, if the final candidate number finalNumMVPCand is 1, the only predicted motion vector candidate mvpListLX[0] registered in the predicted motion vector list mvpListLX is extracted as the selected predicted motion vector mvpLX.
- the predicted motion vector candidate number limiting unit 124 and the predicted motion vector candidate number limiting unit 224 set the final candidate number finalNumMVPCand of predicted motion vector candidates according to the size of the prediction block to be encoded or decoded.
- FIG. 17 is a flowchart showing the final candidate number setting processing procedure of the motion vector predictor.
- the size of the prediction block to be encoded or decoded is obtained (S401). If the size of the prediction block to be encoded or decoded is equal to or smaller than the prescribed size sizePUNumMVPCand (YES in S402), finalNumMVPCand is set to a relatively small value (S403); otherwise, finalNumMVPCand is set to a relatively large value (S404).
- in the present embodiment, the prescribed size sizePUNumMVPCand is set to 8x8, and finalNumMVPCand is set to 1 if the size of the luminance-signal prediction block to be encoded or decoded is less than the prescribed size sizePUNumMVPCand; otherwise, finalNumMVPCand is set to 2.
- the prescribed size sizePUNumMVPCand may be a fixed value, or may be set by preparing a syntax element for each sequence, picture, or slice. In addition, sizes whose products of width and height are equal, such as 8x4 and 4x8, are considered the same size. That is, when the prescribed size sizePUNumMVPCand is set to 8x4, 4x8 is also considered to be set.
- in step S402, finalNumMVPCand may instead be set to a relatively small value when the size of the prediction block to be encoded or decoded is less than the prescribed size sizePUNumMVPCand, and to a relatively large value otherwise.
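As a sketch of S401 to S404, the final candidate number selection by block size might look like the following Python. The 8x8 threshold, the counts 1 and 2, and the area-based size comparison (so that 8x4 and 4x8 count as the same size) follow the embodiment described above; the strict less-than comparison is the variant of this paragraph.

```python
def set_final_num_mvp_cand(pu_width, pu_height, prescribed=(8, 8)):
    # Sizes are compared by width * height, so 8x4 and 4x8 are equal (S401).
    prescribed_area = prescribed[0] * prescribed[1]
    if pu_width * pu_height < prescribed_area:  # S402 (strict variant)
        return 1                                # S403: fewer candidates
    return 2                                    # S404: more candidates
```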
- FIG. 18 is a flowchart of the motion vector predictor candidate generation units 121 and 221, the motion vector predictor candidate registration units 122 and 222, the motion vector redundancy candidate deletion units 123 and 223, and the motion vector predictor candidate number limiting units 124 and 224, whose functions are common to the differential motion vector calculation unit 103 of the video encoding device and the motion vector calculation unit 204 of the video decoding device.
- the motion vector predictor candidate generation units 121 and 221 derive a predicted motion vector candidate from the prediction blocks adjacent on the left side, and derive a flag availableFlagLXA indicating whether the predicted motion vector candidate of the left adjacent prediction blocks can be used, a motion vector mvLXA, a reference index refIdxA, and a list ListA (S301 in FIG. 18). Note that X is 0 for L0 and 1 for L1 (the same applies hereinafter). Subsequently, the motion vector predictor candidate generation units 121 and 221 derive a predicted motion vector candidate from the prediction blocks adjacent on the upper side, and derive a flag availableFlagLXB indicating whether the predicted motion vector candidate of the upper adjacent prediction blocks can be used, a motion vector mvLXB, a reference index refIdxB, and a list ListB (S302 in FIG. 18).
- the processes in steps S301 and S302 in FIG. 18 are common except that the positions and number of the adjacent blocks to be referenced differ; each derives a flag availableFlagLXN indicating whether a predicted motion vector candidate of the prediction block group can be used, a motion vector mvLXN, a reference index refIdxN, and a list ListN (where N is A or B; the same applies hereinafter).
- the motion vector predictor candidate generation units 121 and 221 derive a predicted motion vector candidate from a prediction block of a picture at a different time, and derive a flag availableFlagLXCol indicating whether the predicted motion vector candidate of the picture at the different time can be used, a motion vector mvLXCol, a reference index refIdxCol, and a list ListCol (S303 in FIG. 18). The derivation processing procedure of step S303 will be described in detail later with reference to the flowcharts of FIGS.
- the motion vector predictor candidate registration units 122 and 222 create a motion vector predictor list mvpListLX, and add candidate motion vectors mvLXA, mvLXB, and mvLXCol for each LX (S304 in FIG. 18).
- the registration processing procedure in step S304 will be described in detail later using the flowchart in FIG.
- the motion vector redundancy candidate deletion units 123 and 223 determine that predicted motion vector candidates registered in the predicted motion vector list mvpListLX are redundant when they have the same value or close values, and remove the redundant candidates except for the candidate with the smallest order, that is, the smallest index i (S305 in FIG. 18).
- the deletion processing procedure in step S305 will be described in detail later using the flowchart of FIG.
- the motion vector predictor candidate number limiting units 124 and 224 count the number of elements registered in the predicted motion vector list mvpListLX and set that number as the LX predicted motion vector candidate number numMVPCandLX. The number numMVPCandLX of LX predicted motion vector candidates registered in mvpListLX is then limited to the prescribed final candidate number finalNumMVPCand (S306 in FIG. 18).
- the restriction process procedure of step S306 will be described in detail later with reference to the flowchart of FIG.
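A minimal sketch of the limiting step S306, assuming the list is already ordered by priority: the rear elements are dropped down to finalNumMVPCand, and, if fewer candidates than finalNumMVPCand were registered, (0, 0) motion vectors are appended, as the (0, 0) registration mentioned later in connection with step S306 suggests.

```python
def limit_mvp_candidates(mvpListLX, finalNumMVPCand):
    # Drop the rear (lowest-priority) elements beyond finalNumMVPCand.
    limited = list(mvpListLX[:finalNumMVPCand])
    # Pad with (0, 0) vectors so exactly finalNumMVPCand candidates remain.
    while len(limited) < finalNumMVPCand:
        limited.append((0, 0))
    return limited
```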
- for a prediction block defined for motion compensation within a coding block in the same picture (FIG. 5, FIG. 6, FIG. 7, and FIG. 8), a predicted motion vector candidate is derived from the surrounding prediction blocks adjacent to the processing target prediction block.
- FIG. 19 is a flowchart showing the predicted motion vector candidate derivation process procedure of S301 and S302 of FIG.
- the subscript X takes 0 or 1 representing the reference list, and N takes A (left side) or B (upper side) representing the region of the adjacent prediction block group.
- when deriving predicted motion vector candidates from the left prediction block group (S301 in FIG. 18, with N set to A), candidates are derived from the prediction blocks A0 and A1 adjacent to the left side of the prediction block to be encoded or decoded; when deriving predicted motion vector candidates from the upper prediction block group (S302 in FIG. 18, with N set to B), candidates are derived from the prediction blocks B0, B1, and B2 adjacent to the upper side. The candidates are calculated according to the following procedure.
- the encoding information stored in the encoding information storage memory 114 or the encoding information storage memory 209 is acquired. The encoding information of the adjacent prediction block Nk acquired here includes the prediction mode PredMode, a flag predFlagLX[xNk][yNk] indicating whether LX is used, the LX reference index refIdxLX[xNk][yNk], and the LX motion vector mvLX[xNk][yNk].
- the prediction block A0 adjacent to the lower left and the prediction block A1 adjacent to the left are specified, and their encoding information is acquired.
- a flag availableFlagLXN indicating whether or not a prediction motion vector is selected from the prediction block group N is set to 0, and a motion vector mvLXN representing the prediction block group N is set to (0, 0) (S1107).
- a motion vector predictor candidate that matches condition determination 1 or condition determination 2 described above is derived (S1108).
- in the adjacent prediction blocks N0, N1, and N2 (N2 exists only in the upper adjacent prediction block group B) of the prediction block group N (N is A or B), a motion vector referring to the same reference picture is searched for, either in the same reference list LX as the reference list LX currently targeted by the prediction block to be encoded or decoded, or in the reference list LY opposite to that reference list LX (Y != X; when the current target reference list is L0, the opposite reference list is L1, and when the current target reference list is L1, the opposite reference list is L0), and that motion vector is set as a predicted motion vector candidate.
- FIG. 20 is a flowchart showing the derivation process procedure of step S1108 of FIG.
- for the adjacent prediction blocks Nk (k = 0, 1, 2, where k = 2 exists only in the upper adjacent prediction block group), the following processing is performed in the order k = 0, 1, 2 (S1201 to S1207). When N is A, the processing is performed for the prediction blocks A0 and A1 in that order; when N is B, the processing is performed for the prediction blocks B0, B1, and B2 in order from right to left.
- first, the condition determination of condition determination 1 described above is performed (S1203). If the LX reference index refIdxLX[xNk][yNk] of the adjacent prediction block Nk is the same as the reference index refIdxLX of the prediction block to be processed, that is, if the adjacent prediction block Nk is inter-predicted using the same reference picture in LX prediction (YES in S1203), the process advances to step S1204; if not (NO in S1203), the condition determination in step S1205 is performed.
- when step S1203 is YES, the flag availableFlagLXN is set to 1, the predicted motion vector mvLXN of the prediction block group N is set to the same value as the LX motion vector mvLX[xNk][yNk] of the adjacent prediction block Nk, the reference index refIdxN of the prediction block group N is set to the same value as the LX reference index refIdxLX[xNk][yNk] of the adjacent prediction block Nk, and the reference list ListN of the prediction block group N is set to LX (S1204). This predicted motion vector candidate calculation process is then terminated.
- when step S1203 is NO, the condition determination of condition determination 2 described above is performed (S1205).
- if the flag predFlagLY indicating whether the LY of the adjacent prediction block Nk is used is 1, that is, if the adjacent prediction block Nk is inter-predicted using a motion vector of LY, the reference list different from the calculation target, and the POC of the reference picture referred to in the reference list LY opposite to the currently targeted reference list LX is the same as the POC of the LX reference picture of the prediction block to be processed, meaning inter prediction using the same reference picture (YES in S1205), the flag availableFlagLXN is set to 1, the predicted motion vector mvLXN of the prediction block group N is set to the same value as the LY motion vector mvLY[xNk][yNk] of the adjacent prediction block Nk, the reference index refIdxN of the prediction block group N is set to the same value as the LY reference index refIdxLY[xNk][yNk] of the adjacent prediction block Nk, and the reference list ListN of the prediction block group N is set to LY (S1206). This predicted motion vector candidate calculation process is then terminated.
- FIG. 21 is a flowchart showing the derivation process procedure of step S1110 of FIG.
- for the adjacent prediction blocks Nk (k = 0, 1, 2, where k = 2 exists only in the upper adjacent prediction block group), the following processing is performed in the order k = 0, 1, 2 (S1301 to S1307). When N is A, the processing is performed for the prediction blocks A0 and A1 in that order; when N is B, the processing is performed for the prediction blocks B0, B1, and B2 in order from right to left.
- first, the condition determination of condition determination 3 described above is performed (S1303). If the flag predFlagLX[xNk][yNk] indicating whether the LX of the adjacent prediction block Nk is used is 1, that is, if the adjacent prediction block Nk is inter-predicted using a motion vector of the same LX as the calculation target (YES in S1303), the process proceeds to step S1304; if not (NO in S1303), the condition determination in step S1305 is performed.
- when step S1303 is YES, the flag availableFlagLXN is set to 1, the predicted motion vector mvLXN of the prediction block group N is set to the same value as the LX motion vector mvLX[xNk][yNk] of the adjacent prediction block Nk, the reference index refIdxN of the prediction block group N is set to the same value as the LX reference index refIdxLX[xNk][yNk] of the adjacent prediction block Nk, and the reference list ListN of the prediction block group N is set to LX (S1304); the process then proceeds to step S1308.
- when step S1303 is NO, the condition determination of condition determination 4 described above is performed (S1305).
- if the flag predFlagLY indicating whether the LY of the adjacent prediction block Nk is used is 1, that is, if the adjacent prediction block Nk is inter-predicted using a motion vector of LY, the reference list different from the calculation target (YES in S1305), the flag availableFlagLXN is set to 1, the predicted motion vector mvLXN of the prediction block group N is set to the same value as the LY motion vector mvLY[xNk][yNk] of the adjacent prediction block Nk, the reference index refIdxN of the prediction block group N is set to the same value as the LY reference index refIdxLY[xNk][yNk] of the adjacent prediction block Nk, and the reference list ListN of the prediction block group N is set to LY; the process then proceeds to step S1308.
- if these conditions are not met (NO in S1303 or NO in S1305), k is incremented by 1 and the next adjacent prediction block is processed (S1301 to S1307). The processing is repeated until availableFlagLXN becomes 1 or the processing of the adjacent block A1 or B2 is completed, and then the process proceeds to step S1308.
- FIG. 22 is a flowchart showing the motion vector scaling calculation processing procedure in step S1309 in FIG.
- the inter-picture distance td is calculated by subtracting the POC of the reference picture referred to in the reference list ListN of the adjacent prediction block from the POC of the current encoding or decoding target picture (S1601). If the reference picture referred to in the reference list ListN of the adjacent prediction block is earlier in display order than the current encoding or decoding target picture, the inter-picture distance td is a positive value; if it is later in display order, the inter-picture distance td is a negative value.
- td = (POC of current encoding or decoding target picture) - (POC of reference picture referred to in reference list ListN of adjacent prediction block)
- the inter-picture distance tb is calculated by subtracting the POC of the reference picture referred to in the reference list LX of the current encoding or decoding target picture from the POC of the current encoding or decoding target picture (S1602). If the reference picture referred to in the reference list LX of the current encoding or decoding target picture is earlier in display order than the current encoding or decoding target picture, the inter-picture distance tb is a positive value; if it is later in display order, the inter-picture distance tb is a negative value.
- tb = (POC of current encoding or decoding target picture) - (POC of reference picture referred to in reference list LX of current encoding or decoding target picture)
- scaling operation processing is performed by multiplying mvLXN by the scaling coefficient tb / td according to the following equation (S1603) to obtain a scaled motion vector mvLXN.
- mvLXN = tb / td * mvLXN
- FIG. 23 shows an example in which the scaling operation in step S1603 is performed with integer precision arithmetic.
- the processing in steps S1604 to S1606 in FIG. 23 corresponds to the processing in step S1603 in FIG.
- the inter-picture distance td and the inter-picture distance tb are calculated (S1601, S1602).
- tx = (16384 + Abs(td / 2)) / td
- DistScaleFactor = (tb * tx + 32) >> 6
- mvLXN = ClipMv(Sign(DistScaleFactor * mvLXN) * ((Abs(DistScaleFactor * mvLXN) + 127) >> 8))
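The integer-precision scaling of S1604 to S1606 can be sketched per vector component as follows. The truncating division and the 16-bit clipping range of ClipMv are assumptions; the shift-based rounding follows the expressions above.

```python
def tdiv(a, b):
    # C-style integer division, truncating toward zero.
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

def clip_mv(v, lo=-32768, hi=32767):
    # Assumed ClipMv: clamp a component to a 16-bit range.
    return max(lo, min(hi, v))

def scale_mv_component(mv, tb, td):
    # tx approximates 16384 / td, so DistScaleFactor approximates 256 * tb / td.
    tx = tdiv(16384 + (abs(td) >> 1), td)
    dist_scale_factor = (tb * tx + 32) >> 6
    prod = dist_scale_factor * mv
    sign = -1 if prod < 0 else 1
    # The final shift by 8 removes the 256x scale with rounding.
    return clip_mv(sign * ((abs(prod) + 127) >> 8))
```

Scaling the component 8 by tb/td = 1/2 yields 4, and a negative td flips the sign; Python's right shift on negative values floors, matching the intended arithmetic shift.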
- FIG. 24 is a flowchart for explaining the predicted motion vector candidate derivation processing procedure in step S303 in FIG.
- a picture colPic at a different time is calculated based on slice_type and collocated_from_l0_flag (S2101 in FIG. 24).
- FIG. 25 is a flowchart for explaining the procedure for deriving the picture colPic at different times in step S2101 of FIG.
- when collocated_from_l0_flag is 0, RefPicList1[0], that is, the picture with reference index 0 in the reference list L1, becomes the picture colPic at a different time; otherwise, RefPicList0[0], that is, the picture with reference index 0 in the reference list L0, becomes the picture colPic at a different time (S2205 in FIG. 25).
- FIG. 26 is a flowchart for explaining the procedure for deriving the prediction block colPU of the picture colPic at different times in step S2102 of FIG.
- the prediction block located at the lower right (outside) of the same position as the processing target prediction block in the picture colPic at different time is set as the prediction block colPU at different time (S2301 in FIG. 26).
- This prediction block corresponds to the prediction block T0 in FIG.
- the encoding information of the prediction block colPU at the different time is acquired (S2302 in FIG. 26). If the PredMode of the prediction block colPU at the different time is MODE_INTRA or the block cannot be used (S2303 and S2304 in FIG. 26), the prediction block located at the upper-left center of the same position as the processing target prediction block in the picture colPic at the different time is reset as the prediction block colPU at the different time (S2305 in FIG. 26). This prediction block corresponds to the prediction block T1 in FIG.
- the LX predicted motion vector mvLXCol, calculated from the prediction block of another picture located at the same position as the prediction block to be encoded or decoded, and the flag availableFlagLXCol, indicating whether the encoding information of the reference list LX of the prediction block group Col is valid, are calculated (S2103 in FIG. 24).
- FIG. 27 is a flowchart illustrating the inter prediction information derivation process in step S2103 of FIG.
- if the PredMode of the prediction block colPU at the different time is MODE_INTRA or the block cannot be used (NO in S2401 or NO in S2402 in FIG. 27), availableFlagLXCol is set to 0, mvLXCol is set to (0, 0) (S2403 and S2404 in FIG. 27), and the process ends.
- otherwise, mvCol and refIdxCol are calculated according to the following procedure.
- if the L0 prediction flag PredFlagL0[xPCol][yPCol] of the prediction block colPU is 0 (YES in S2405 in FIG. 27), the motion vector mvCol is set to the same value as MvL1[xPCol][yPCol], the L1 motion vector of the prediction block colPU (S2406 in FIG. 27), the reference index refIdxCol is set to the same value as the L1 reference index RefIdxL1[xPCol][yPCol] (S2407 in FIG. 27), and the list ListCol is set to L1 (S2408 in FIG. 27).
- if the L0 prediction flag PredFlagL0[xPCol][yPCol] of the prediction block colPU is not 0 (NO in S2405 in FIG. 27), the motion vector mvCol is set to the same value as MvL0[xPCol][yPCol], the L0 motion vector of the prediction block colPU (S2410 in FIG. 27), the reference index refIdxCol is set to the same value as the L0 reference index RefIdxL0[xPCol][yPCol] (S2411 in FIG. 27), and the list ListCol is set to L0 (S2412 in FIG. 27).
- FIG. 28 is a flowchart showing a process for deriving inter prediction information of a prediction block when the inter prediction mode of the prediction block colPU is bi-prediction (Pred_BI).
- if the flag collocated_from_l0_flag is 0 (YES in S2503), the L0 inter prediction information of the prediction block colPU is selected; if the flag collocated_from_l0_flag is 1 (NO in S2503), the L1 inter prediction information of the prediction block colPU is selected.
- when the L0 inter prediction information is selected, the motion vector mvCol is set to the same value as MvL0[xPCol][yPCol] (S2504), the reference index refIdxCol is set to the same value as RefIdxL0[xPCol][yPCol] (S2505), and the list ListCol is set to L0 (S2506).
- when the L1 inter prediction information is selected, the motion vector mvCol is set to the same value as MvL1[xPCol][yPCol] (S2507), the reference index refIdxCol is set to the same value as RefIdxL1[xPCol][yPCol] (S2508), and the list ListCol is set to L1 (S2509).
- availableFlagLXCol is set to 1 (S2414 in FIG. 27).
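The branching of S2401 through S2414, including the bi-prediction case of FIG. 28, can be sketched as below. The dict-based colPU record is a hypothetical structure introduced for illustration; the field names do not come from the patent.

```python
def derive_col_inter_info(col_pu, collocated_from_l0_flag):
    # S2401/S2402: an intra or unusable co-located block yields no candidate.
    if col_pu is None or col_pu["pred_mode"] == "MODE_INTRA":
        return {"available": 0, "mv": (0, 0)}        # S2403, S2404
    if col_pu["pred_flag_l0"] == 0:                  # S2405: only L1 is used
        lst = "L1"                                   # S2406-S2408
    elif col_pu["pred_flag_l1"] == 0:                # only L0 is used
        lst = "L0"                                   # S2410-S2412
    else:                                            # bi-prediction (FIG. 28)
        lst = "L0" if collocated_from_l0_flag == 0 else "L1"
    key = lst.lower()
    return {"available": 1,                          # S2414
            "mv": col_pu["mv_" + key],
            "ref_idx": col_pu["ref_idx_" + key],
            "list": lst}
```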
- FIG. 29 is a flowchart showing the motion vector scaling calculation processing procedure in step S2105 of FIG.
- the inter-picture distance tb is calculated by subtracting the POC of the reference picture referred to in the reference list LX of the current encoding or decoding target picture from the POC of the current encoding or decoding target picture (S2602). If the reference picture referred to in the reference list LX of the current encoding or decoding target picture is earlier in display order than the current encoding or decoding target picture, the inter-picture distance tb is a positive value; if it is later in display order, the inter-picture distance tb is a negative value.
- tb = (POC of current encoding or decoding target picture) - (POC of reference picture referred to in reference list LX of current encoding or decoding target picture)
- FIG. 30 shows an example in which the scaling operation in step S2604 is performed with integer precision.
- the processing in steps S2605 to S2607 in FIG. 30 corresponds to the processing in step S2604 in FIG.
- the inter-picture distance td and the inter-picture distance tb are calculated (S2601, S2602).
- DistScaleFactor = (tb * tx + 32) >> 6
- mvLXN = ClipMv(Sign(DistScaleFactor * mvLXN) * ((Abs(DistScaleFactor * mvLXN) + 127) >> 8))
- the predicted motion vector candidates mvLXA, mvLXB, and mvLXCol calculated in S301, S302, and S303 in FIG. 18 are registered in the LX predicted motion vector list mvpListLX (S304).
- elements with higher priority are arranged toward the front of the predicted motion vector list, and when the number of candidates is limited, the elements arranged at the rear of the predicted motion vector list are removed so that the elements with higher priority remain.
- the predicted motion vector list mvpListLX has a list structure, is managed by an index i indicating the location inside the predicted motion vector list, and has a storage area for storing predicted motion vector candidates corresponding to the index i as elements.
- the number of the index i starts from 0, and motion vector predictor candidates are stored in the storage area of the motion vector predictor list mvpListLX.
- a motion vector predictor candidate that is an element of the index i registered in the motion vector predictor list mvpListLX is represented by mvpListLX [i].
- FIG. 31 is a flowchart showing the motion vector predictor candidate registration processing procedure in step S304 of FIG.
- the index i of the motion vector predictor list mvpListLX is set to 0 (S3101).
- if availableFlagLXA is 1 (YES in S3102), the LX predicted motion vector candidate mvLXA is registered at the position corresponding to the index i of the predicted motion vector list mvpListLX (S3103), and the index i is incremented by 1 (S3104).
- in the same manner, the respective LX predicted motion vector candidates are registered in the LX predicted motion vector list mvpListLX in the order of mvLXA, mvLXB, and mvLXCol.
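A sketch of the registration loop of FIG. 31, assuming each candidate is given as an (availableFlag, mv) pair:

```python
def register_mvp_candidates(mvLXA, mvLXB, mvLXCol):
    # Each argument is an assumed (availableFlagLXN, mv) pair.
    mvpListLX = []
    i = 0                                    # S3101: index starts at 0
    for available, mv in (mvLXA, mvLXB, mvLXCol):
        if available:                        # e.g. S3102 for mvLXA
            mvpListLX.append(mv)             # register at position i (S3103)
            i += 1                           # advance the index (S3104)
    return mvpListLX
```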
- as described above, the final candidate number is 1 when the prediction block is smaller than the prescribed size.
- when the partition mode (PartMode) is 2NxN partition (PART_2NxN) and the partition index PartIdx is 0, or when the partition mode (PartMode) is Nx2N partition (PART_Nx2N) and the partition index PartIdx is 1, the LX predicted motion vector candidates are registered in the predicted motion vector list mvpListLX in the order of mvLXB, mvLXA, and mvLXCol. By placing the predicted motion vector candidate mvLXB, which is highly likely to have a value close to the motion vector of the prediction block to be encoded or decoded, at the front of the predicted motion vector list so that it is more easily selected, the code amount of the differential motion vector can be reduced.
- the motion vector predictor candidate number limiting unit 124 sets the priority order of the predicted motion vector candidates remaining in the predicted motion vector candidate list in accordance with the partition mode by which the block to be encoded or decoded is divided into prediction blocks.
- when the partition mode (PartMode) is 2NxN partition (PART_2NxN), the motion vector of the upper prediction block with partition index PartIdx 0 is highly likely to have a value close to the motion vectors of the upper prediction block group in contact with its long side, so the candidate mvLXB from the upper prediction block group is registered toward the front of the predicted motion vector list so that it can be preferentially selected. The motion vector of the lower prediction block with partition index PartIdx 1 is likely to have a value different from that of the upper prediction block, so the predicted motion vector candidate mvLXA from the left prediction block group is registered toward the front of the predicted motion vector list so that it can be preferentially selected.
- similarly, when the partition mode (PartMode) is Nx2N partition (PART_Nx2N), the motion vector of the left prediction block with partition index PartIdx 0 is highly likely to have a value close to the motion vectors of the left prediction block group in contact with its long side, so the candidate mvLXA from the left prediction block group is registered toward the front of the predicted motion vector list so that it can be preferentially selected. The motion vector of the right prediction block with partition index PartIdx 1 is likely to have a value different from that of the left prediction block, so the predicted motion vector candidate mvLXB from the upper prediction block group is registered toward the front of the predicted motion vector list so that it can be preferentially selected.
- FIG. 34 is a flowchart showing a registration processing procedure that adaptively switches the order in which the LX predicted motion vector candidates mvLXA, mvLXB, and mvLXCol are registered in the LX predicted motion vector list mvpListLX according to the size of the prediction block, the partition mode (PartMode), and the partition index PartIdx.
- if the prediction block is smaller than the prescribed size (YES in S3201), and the partition mode (PartMode) is 2NxN partition (PART_2NxN) with the partition index PartIdx 0, or the partition mode (PartMode) is Nx2N partition (PART_Nx2N) with the partition index PartIdx 1 (YES in S3202), the LX predicted motion vector candidates are registered in the LX predicted motion vector list mvpListLX in the order of mvLXB, mvLXA, and mvLXCol by the processing procedure of FIG. (S3203).
- otherwise (NO in S3201 or NO in S3202), the LX predicted motion vector candidates are registered in the LX predicted motion vector list mvpListLX in the order of mvLXA, mvLXB, and mvLXCol by the above-described processing procedure of FIG. (S3204).
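The switch of FIG. 34 can be sketched as below. Reading S3201 and S3202 as a conjunction (both must be YES to reach S3203) is an interpretation of the text above; the 8x8 prescribed size is the embodiment's value.

```python
def registration_order(pu_width, pu_height, part_mode, part_idx,
                       prescribed_area=8 * 8):
    small = pu_width * pu_height < prescribed_area          # S3201
    b_first_partition = (                                    # S3202
        (part_mode == "PART_2NxN" and part_idx == 0) or
        (part_mode == "PART_Nx2N" and part_idx == 1))
    if small and b_first_partition:                          # S3203 (FIG. 35 order)
        return ["mvLXB", "mvLXA", "mvLXCol"]
    return ["mvLXA", "mvLXB", "mvLXCol"]                     # S3204 (FIG. 31 order)
```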
- FIG. 35 is a flowchart showing the processing procedure for registering the LX predicted motion vector candidates in the LX predicted motion vector list mvpListLX in the order of mvLXB, mvLXA, and mvLXCol according to the size of the prediction block, the partition mode (PartMode), and the partition index PartIdx.
- first, the index i of the predicted motion vector list mvpListLX is set to 0 (S3301). If availableFlagLXB is 1 (YES in S3302), mvLXB is registered at the position corresponding to the index i of the predicted motion vector list mvpListLX (S3303), and the index i is incremented by 1 (S3304).
- next, if availableFlagLXA is 1 (YES in S3305), mvLXA is registered at the position corresponding to the index i of the predicted motion vector list mvpListLX (S3306), and the index i is incremented by 1 (S3307).
- FIG. 32 is a flowchart showing the predicted motion vector redundant candidate deletion processing procedure in S305 of FIG.
- the motion vector redundancy candidate deletion units 123 and 223 compare the predicted motion vector candidates registered in the predicted motion vector list mvpListLX (S4102). If predicted motion vector candidates have the same value or close values (YES in S4103), they are determined to be redundant, and the redundant candidates are removed except for the candidate with the smallest order, that is, the smallest index i (S4104). After the redundant candidates are removed, the storage areas of the deleted candidates in the predicted motion vector list mvpListLX are empty, so the remaining candidates are packed forward in order of smallest index i (S4105).
- When a predicted motion vector candidate has been removed, the candidates that follow it are shifted forward: the predicted motion vector candidate mvListLX[1] with index i of 1 becomes the new predicted motion vector candidate mvListLX[0] with index i of 0, and the predicted motion vector candidate mvListLX[2] with index i of 2 becomes the new predicted motion vector candidate mvListLX[1] with index i of 1. Further, the predicted motion vector candidate mvListLX[2] with index i of 2 is set to not exist.
- When the final candidate number finalNumMVPCand is set to 1, the process of deleting redundant predicted motion vector candidates can be skipped, reducing the amount of processing.
- Here, the element mvpListLX[0] of the predicted motion vector list mvpListLX with predicted motion vector index 0 is mvpLXA, the element mvpListLX[1] with predicted motion vector index 1 is mvpLXB, and the element mvpListLX[2] with predicted motion vector index 2 is mvpLXCol.
- In step S4102, mvpListLX[0], the element of the predicted motion vector list mvpListLX with predicted motion vector index 0, is compared with mvpListLX[1], the element with predicted motion vector index 1.
- If the element mvpListLX[0] and the element mvpListLX[1] are compared and mvpListLX[1] is determined to be a redundant predicted motion vector candidate, the element mvpListLX[1] with predicted motion vector index 1 is deleted. If mvpListLX[2] exists, its predicted motion vector index is set to 1 so that it becomes the element mvpListLX[1], and this processing can be ended.
- After this shift, the new element mvpListLX[0] with predicted motion vector index 0 and the element mvpListLX[1] with predicted motion vector index 1 may again be compared and found redundant.
- In that case, the value (0, 0) is set as the element mvpListLX[1] with predicted motion vector index 1 in step S306 described later.
- Because deleting such a candidate only results in a motion vector with the value (0, 0) being registered to bring the number of candidates back to 2, the advantage of performing that deletion is considered small. Therefore, the processing amount can be reduced by comparing only mvpListLX[0], the element with predicted motion vector index 0, and mvpListLX[1], the element with predicted motion vector index 1.
- Since the spatial predicted motion vector candidates mvpLXA and mvpLXB are derived from prediction blocks of the same picture, their motion information is likely to be the same. The temporal predicted motion vector candidate mvpLXCol, however, is derived from a prediction block of a different picture, so its motion information is likely to differ from that of the spatial predicted motion vector candidates mvpLXA and mvpLXB. Therefore, even if the comparison against the temporal predicted motion vector candidate mvpLXCol is omitted, the influence is considered small.
- In other words, by comparing the predicted motion vector candidate with predicted motion vector index 0 against the candidate with predicted motion vector index 1, and omitting comparisons against the candidate with predicted motion vector index 2, the number of condition determinations for deleting redundant candidates can be reduced.
- FIG. 33 is a flowchart for explaining the predicted motion vector candidate number limit processing procedure in S306 of FIG.
- The predicted motion vector candidate number limit units 124 and 224 limit the number numMVPCandLX of LX predicted motion vector candidates registered in the LX predicted motion vector list mvpListLX to the specified final candidate number finalNumMVPCand.
- First, the number of elements registered in the LX predicted motion vector list mvpListLX is counted and set as the number of predicted motion vector candidates numMVPCandLX (S5101).
- If numMVPCandLX is smaller than finalNumMVPCand, a motion vector with the value (0, 0) is registered at the position of the LX predicted motion vector list mvpListLX corresponding to index numMVPCandLX (S5104), and 1 is added to numMVPCandLX (S5105). If numMVPCandLX and finalNumMVPCand have the same value (YES in S5106), this limiting process is terminated. If numMVPCandLX and finalNumMVPCand are not the same value (NO in S5106), the processes in steps S5104 and S5105 are repeated until they have the same value.
- In this way, motion vectors having the value (0, 0) are registered until the number of LX predicted motion vector candidates numMVPCandLX reaches the final candidate number finalNumMVPCand.
- This guarantees that a predicted motion vector can be determined for any predicted motion vector index value in the range from 0 to less than the final candidate number finalNumMVPCand.
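The limiting step described above can be sketched as a simple pad-or-truncate operation. This is an illustrative Python sketch, not the normative procedure; the truncation branch corresponds to deleting candidates exceeding the final candidate number, and the padding loop corresponds to steps S5104/S5105.

```python
def limit_candidates(mvp_list, final_num):
    """Truncate the list to final_num candidates if it holds more,
    or pad it with (0, 0) motion vectors (S5104/S5105) until it holds
    exactly final_num, so every index in [0, final_num) is valid."""
    limited = mvp_list[:final_num]     # drop candidates beyond the limit
    while len(limited) < final_num:    # repeat S5104 and S5105
        limited.append((0, 0))
    return limited

print(limit_candidates([(4, -2)], 3))  # [(4, -2), (0, 0), (0, 0)]
```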
- If numMVPCandLX has already reached finalNumMVPCand, this process is terminated without performing the limitation.
- In the present embodiment, the final candidate number finalNumMVPCand is defined according to the size of the prediction block. This is because, if the number of predicted motion vector candidates registered in the predicted motion vector list fluctuated according to the construction state of the list, the predicted motion vector index could not be entropy-decoded on the decoding side until the predicted motion vector list had been constructed. Furthermore, if entropy decoding depended on the construction state of a predicted motion vector list that includes the candidate mvLXCol derived from a prediction block of a picture at a different time, then when an error occurred while decoding the encoded bit string of another picture, the encoded bit string of the current picture would also be affected, and entropy decoding could not continue normally.
- By fixing the final candidate number finalNumMVPCand, the predicted motion vector index can be entropy-decoded independently of the construction of the predicted motion vector list, and even if an error occurs while decoding the encoded bit string of another picture, entropy decoding of the encoded bit string of the current picture can continue without being affected.
- The moving image encoded stream output from the moving image encoding apparatus of the embodiment described above has a specific data format so that it can be decoded according to the encoding method used in the embodiment, and the moving image decoding apparatus corresponding to the moving image encoding apparatus can decode an encoded stream in this specific data format.
- When a wired or wireless network is used to exchange the encoded stream between the moving image encoding apparatus and the moving image decoding apparatus, the encoded stream may be converted into a data format suitable for the transmission form of the communication path and then transmitted.
- In that case, a moving image transmitting apparatus that converts the encoded stream output from the moving image encoding apparatus into encoded data in a data format suitable for the transmission form of the communication path and transmits it to the network, and a moving image receiving apparatus that receives the encoded data from the network, restores the encoded stream, and supplies it to the moving image decoding apparatus, are provided.
- The moving image transmitting apparatus includes a memory that buffers the encoded stream output from the moving image encoding apparatus, a packet processing unit that packetizes the encoded stream, and a transmitting unit that transmits the packetized encoded data via the network.
- The moving image receiving apparatus includes a receiving unit that receives the packetized encoded data via the network, a memory that buffers the received encoded data, and a packet processing unit that performs packet processing on the received encoded data to generate an encoded stream and supplies it to the moving image decoding apparatus.
- The above processing relating to encoding and decoding can of course be realized as transmitting, storing, and receiving apparatuses using hardware, and can also be realized by firmware stored in a ROM (Read Only Memory), flash memory, or the like, or by software running on a computer or the like.
- The firmware program and the software program can be recorded on a computer-readable recording medium and provided, provided from a server through a wired or wireless network, or provided as a data broadcast of terrestrial or satellite digital broadcasting.
- the present invention can be used for a moving picture coding and decoding technique, particularly a moving picture coding and decoding technique using motion compensated prediction.
Description
In the embodiment, as shown in FIG. 3, a picture is evenly divided into square rectangular units of an arbitrary identical size. This unit is defined as a tree block and serves as the basic unit of address management for specifying the encoding or decoding target block in a picture (the encoding target block in the encoding process, and the decoding target block in the decoding process; hereinafter used in this sense unless otherwise noted). Except in monochrome, a tree block consists of one luminance signal and two chrominance signals. The size of a tree block can be set freely to a power of two according to the picture size and the texture in the picture. To optimize the encoding process according to the texture in the picture, the luminance signal and chrominance signals in a tree block can be hierarchically divided into four (two vertically and two horizontally) as needed, producing blocks of smaller block size. Each of these blocks is defined as a coding block and serves as the basic unit of processing when encoding and decoding. Except in monochrome, a coding block also consists of one luminance signal and two chrominance signals. The maximum size of a coding block is the same as the size of a tree block. The coding block of the minimum coding block size is called the minimum coding block, and its size can be set freely to a power of two.
In units of coding blocks, switching is performed between intra prediction (MODE_INTRA), which predicts from decoded surrounding image signals, and inter prediction (MODE_INTER), which predicts from image signals of decoded pictures. The mode that identifies intra prediction (MODE_INTRA) and inter prediction (MODE_INTER) is defined as the prediction mode (PredMode). The prediction mode (PredMode) takes intra prediction (MODE_INTRA) or inter prediction (MODE_INTER) as its value and can be selected and encoded.
When a picture is divided into blocks and intra prediction (MODE_INTRA) and inter prediction (MODE_INTER) are performed, a coding block is further divided as needed to perform prediction, in order to make the unit for switching between the intra and inter prediction methods smaller. The mode that identifies the division method of the luminance and chrominance signals of a coding block is defined as the partition mode (PartMode), and each divided block is defined as a prediction block. As shown in FIG. 4, four partition modes (PartMode) are defined according to the division method of the luminance signal of the coding block. The partition mode of a coding block whose luminance signal is not divided and is regarded as one prediction block (FIG. 4(a)) is defined as 2N×2N partition (PART_2Nx2N); the partition mode in which the luminance signal is divided horizontally into two, giving two prediction blocks (FIG. 4(b)), as 2N×N partition (PART_2NxN); the partition mode in which the luminance signal is divided vertically, making the coding block two prediction blocks (FIG. 4(c)), as N×2N partition (PART_Nx2N); and the partition mode in which the luminance signal is divided evenly both horizontally and vertically into four prediction blocks (FIG. 4(d)), as N×N partition (PART_NxN). Except for the N×N partition (PART_NxN) of intra prediction (MODE_INTRA), the chrominance signals are divided for each partition mode (PartMode) with the same vertical and horizontal ratios as the luminance signal.
The position of each block in the present embodiment, including tree blocks, coding blocks, prediction blocks, and transform blocks, is represented by the two-dimensional coordinates (x, y) of the top-left luminance pixel contained in the block's region, with the position of the top-left luminance pixel of the luminance screen as the origin (0, 0). The coordinate axes are positive to the right horizontally and downward vertically, and the unit is one pixel of the luminance signal. Not only in the 4:4:4 chrominance format, where the luminance and chrominance signals have the same image size (number of pixels), but also in the 4:2:0 and 4:2:2 chrominance formats, where the luminance and chrominance signals have different image sizes (numbers of pixels), the position of each chrominance block is represented by the coordinates of the luminance pixels contained in that block's region, in units of one luminance pixel. In this way, not only can the position of each chrominance block be specified, but the positional relationship between luminance blocks and chrominance blocks also becomes clear simply by comparing coordinate values.
A group of a plurality of prediction blocks is defined as a prediction block group. FIGS. 5, 6, 7, and 8 are diagrams explaining the prediction block groups that are adjacent to the prediction block to be encoded or decoded within the same picture. FIG. 9 is a diagram explaining the already encoded or decoded prediction block group located at the same position as, or in the vicinity of, the prediction block to be encoded or decoded in a decoded picture at a time different from that of the prediction block to be encoded or decoded. The prediction block groups are described using FIGS. 5, 6, 7, 8, and 9.
In the embodiment of the present invention, inter prediction, which predicts from image signals of decoded pictures, can use a plurality of decoded pictures as reference pictures. To specify the reference picture selected from the plural reference pictures, a reference index is attached to each prediction block. In a B slice, any two reference pictures can be selected per prediction block for inter prediction, and the inter prediction modes are L0 prediction (Pred_L0), L1 prediction (Pred_L1), and bi-prediction (Pred_BI). The reference pictures are managed by the list structures L0 (reference list 0) and L1 (reference list 1), and a reference picture can be specified by designating a reference index of L0 or L1. L0 prediction (Pred_L0) is inter prediction that refers to a reference picture managed in L0; L1 prediction (Pred_L1) is inter prediction that refers to a reference picture managed in L1; and bi-prediction (Pred_BI) is inter prediction in which both L0 prediction and L1 prediction are performed, referring to one reference picture managed in each of L0 and L1. Only L0 prediction can be used in inter prediction of a P slice, while L0 prediction, L1 prediction, and bi-prediction (Pred_BI), which averages or weight-adds the L0 prediction and the L1 prediction, can be used in inter prediction of a B slice. In the subsequent processing, constants and variables whose output carries the suffix LX (X is 0 or 1) are assumed to be processed for each of L0 and L1.
POC is a variable associated with the picture to be encoded, and is set to a value that increases by 1 in picture output order. The POC value makes it possible to determine whether two pictures are the same picture, determine the order of pictures in output order, and derive the distance between pictures. For example, if two pictures have the same POC value, they can be judged to be the same picture. If two pictures have different POC values, the picture with the smaller POC value can be judged to be the picture output earlier in time, and the difference between the POC values of the two pictures indicates the inter-picture distance along the time axis.
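The POC-based judgments described here reduce to simple integer comparisons and subtraction; the sketch below, with made-up POC values, illustrates them.

```python
def same_picture(poc_a, poc_b):
    # pictures with equal POC are the same picture
    return poc_a == poc_b

def output_earlier(poc_a, poc_b):
    # the picture with the smaller POC is output earlier in time
    return poc_a < poc_b

def picture_distance(poc_a, poc_b):
    # the POC difference gives the inter-picture distance on the time axis
    return poc_a - poc_b

print(output_earlier(3, 7), picture_distance(7, 3))  # True 4
```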
Next, the syntax, which is the common rule for encoding and decoding the bitstream of a moving image that is encoded by a moving image encoding apparatus provided with the motion vector prediction method according to the present embodiment and decoded by the decoding apparatus, is described.
Based on the above syntax, the operation of the motion vector prediction method according to the embodiment in a moving image encoding apparatus that encodes a moving image bitstream is described. The motion vector prediction method is applied when motion compensated prediction is performed per slice, that is, when the slice type is P slice (unidirectional prediction slice) or B slice (bidirectional prediction slice), and it is further applied to prediction blocks in the slice whose prediction mode is inter prediction (MODE_INTER) and which encode or decode a motion vector difference rather than using merge mode.
For each adjacent prediction block of the left adjacent prediction block group and the upper adjacent prediction block group, in scan methods 1, 2, 3, and 4 described later, the condition judgments are applied in the priority order of condition judgments 1, 2, 3, and 4 below. As the sole exception, in scan method 5, the condition judgments are applied in the priority order of condition judgments 1, 3, 2, and 4.
Condition judgment 1: Prediction using the same reference list LX as that of the LX motion vector for which the motion vector difference of the prediction block to be encoded or decoded is calculated, and the same reference index, that is, the same reference picture, is also performed in the adjacent prediction block.
Condition judgment 2: Prediction using a reference list LY different from that of the LX motion vector for which the motion vector difference of the prediction block to be encoded or decoded is calculated, but using the same reference picture, is performed in the adjacent prediction block.
Condition judgment 3: Prediction using the same reference list LX as that of the LX motion vector for which the motion vector difference of the prediction block to be encoded or decoded is calculated, but using a different reference picture, is performed in the adjacent prediction block.
Condition judgment 4: Prediction using a reference list LY different from that of the LX motion vector for which the motion vector difference of the prediction block to be encoded or decoded is calculated, and using a different reference picture, is performed in the adjacent prediction block.
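The four condition judgments can be sketched as a single check on an adjacent block's prediction information. This is an illustrative sketch only; the data representation (a mapping from reference list name to the POC of the referenced picture) is an assumption for the example, not the codec's internal structure.

```python
def condition_number(target_list, target_ref_poc, neigh):
    """Return which of condition judgments 1-4 an adjacent prediction
    block satisfies for the target reference list LX and reference
    picture, or None if the adjacent block has no applicable motion
    vector. `neigh` maps a list name ('L0'/'L1') to the POC of the
    reference picture that list refers to in the adjacent block."""
    other_list = 'L1' if target_list == 'L0' else 'L0'
    if neigh.get(target_list) == target_ref_poc:
        return 1  # same reference list LX, same reference picture
    if neigh.get(other_list) == target_ref_poc:
        return 2  # different reference list LY, same reference picture
    if target_list in neigh:
        return 3  # same reference list LX, different reference picture
    if other_list in neigh:
        return 4  # different reference list LY, different reference picture
    return None

# Adjacent block predicts only from L1, pointing at the same picture (POC 16):
print(condition_number('L0', 16, {'L1': 16}))  # 2
```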
Priority is given to finding a predicted motion vector that uses the same reference picture and thus requires no scaling operation. Of the four condition judgments, two are performed per prediction block, and if the conditions are not satisfied, the process moves on to the condition judgments of the next prediction block. In the first round, condition judgments 1 and 2 are performed; in the next round over the prediction blocks, condition judgments 3 and 4 are performed.
1. Condition judgment 1 of prediction block N0 (same reference list LX, same reference picture)
2. Condition judgment 2 of prediction block N0 (different reference list LY, same reference picture)
3. Condition judgment 1 of prediction block N1 (same reference list LX, same reference picture)
4. Condition judgment 2 of prediction block N1 (different reference list LY, same reference picture)
5. Condition judgment 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
6. Condition judgment 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
7. Condition judgment 3 of prediction block N0 (same reference list LX, different reference picture)
8. Condition judgment 4 of prediction block N0 (different reference list LY, different reference picture)
9. Condition judgment 3 of prediction block N1 (same reference list LX, different reference picture)
10. Condition judgment 4 of prediction block N1 (different reference list LY, different reference picture)
11. Condition judgment 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
12. Condition judgment 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
Of the four condition judgments, one is performed per prediction block; if the condition is not satisfied, the process moves to the condition judgment of the next prediction block. The process ends after four rounds of condition judgments over the prediction blocks.
1. Condition judgment 1 of prediction block N0 (same reference list LX, same reference picture)
2. Condition judgment 1 of prediction block N1 (same reference list LX, same reference picture)
3. Condition judgment 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
4. Condition judgment 2 of prediction block N0 (different reference list LY, same reference picture)
5. Condition judgment 2 of prediction block N1 (different reference list LY, same reference picture)
6. Condition judgment 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
7. Condition judgment 3 of prediction block N0 (same reference list LX, different reference picture)
8. Condition judgment 3 of prediction block N1 (same reference list LX, different reference picture)
9. Condition judgment 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
10. Condition judgment 4 of prediction block N0 (different reference list LY, different reference picture)
11. Condition judgment 4 of prediction block N1 (different reference list LY, different reference picture)
12. Condition judgment 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
In the first round, condition judgment 1 is performed for each prediction block, and if the condition is not satisfied, the process moves to the condition judgment of the next prediction block. In the next round, condition judgments 2, 3, and 4 are performed in that order for each prediction block before moving to the next.
1. Condition judgment 1 of prediction block N0 (same reference list LX, same reference picture)
2. Condition judgment 1 of prediction block N1 (same reference list LX, same reference picture)
3. Condition judgment 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
4. Condition judgment 2 of prediction block N0 (different reference list LY, same reference picture)
5. Condition judgment 3 of prediction block N0 (same reference list LX, different reference picture)
6. Condition judgment 4 of prediction block N0 (different reference list LY, different reference picture)
7. Condition judgment 2 of prediction block N1 (different reference list LY, same reference picture)
8. Condition judgment 3 of prediction block N1 (same reference list LX, different reference picture)
9. Condition judgment 4 of prediction block N1 (different reference list LY, different reference picture)
10. Condition judgment 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
11. Condition judgment 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
12. Condition judgment 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
Priority is given to the condition judgments within the same prediction block: the four condition judgments are performed within one prediction block, and if none of the conditions is met, it is judged that no motion vector matching the conditions exists in that prediction block, and the condition judgments of the next prediction block are performed.
1. Condition judgment 1 of prediction block N0 (same reference list LX, same reference picture)
2. Condition judgment 2 of prediction block N0 (different reference list LY, same reference picture)
3. Condition judgment 3 of prediction block N0 (same reference list LX, different reference picture)
4. Condition judgment 4 of prediction block N0 (different reference list LY, different reference picture)
5. Condition judgment 1 of prediction block N1 (same reference list LX, same reference picture)
6. Condition judgment 2 of prediction block N1 (different reference list LY, same reference picture)
7. Condition judgment 3 of prediction block N1 (same reference list LX, different reference picture)
8. Condition judgment 4 of prediction block N1 (different reference list LY, different reference picture)
9. Condition judgment 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
10. Condition judgment 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
11. Condition judgment 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
12. Condition judgment 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
As with scan method 4, priority is given to the condition judgments within the same prediction block: the four condition judgments are performed within one prediction block, and if none of the conditions is met, it is judged that no motion vector matching the conditions exists in that prediction block, and the condition judgments of the next prediction block are performed. However, in the condition judgments within a prediction block, scan method 4 gives higher priority to using the same reference picture, whereas scan method 5 gives priority to using the same reference list.
1. Condition judgment 1 of prediction block N0 (same reference list LX, same reference picture)
2. Condition judgment 3 of prediction block N0 (same reference list LX, different reference picture)
3. Condition judgment 2 of prediction block N0 (different reference list LY, same reference picture)
4. Condition judgment 4 of prediction block N0 (different reference list LY, different reference picture)
5. Condition judgment 1 of prediction block N1 (same reference list LX, same reference picture)
6. Condition judgment 3 of prediction block N1 (same reference list LX, different reference picture)
7. Condition judgment 2 of prediction block N1 (different reference list LY, same reference picture)
8. Condition judgment 4 of prediction block N1 (different reference list LY, different reference picture)
9. Condition judgment 1 of prediction block N2 (same reference list LX, same reference picture), upper adjacent prediction block group only
10. Condition judgment 3 of prediction block N2 (same reference list LX, different reference picture), upper adjacent prediction block group only
11. Condition judgment 2 of prediction block N2 (different reference list LY, same reference picture), upper adjacent prediction block group only
12. Condition judgment 4 of prediction block N2 (different reference list LY, different reference picture), upper adjacent prediction block group only
mvdLX = mvLX - mvpLX
Based on the above syntax, the operation of the motion vector prediction method according to the present invention in a moving image decoding apparatus that decodes an encoded moving image bitstream is described.
As on the encoding side, the decoding side performs the motion vector calculation process for each of L0 and L1, and the process is common to L0 and L1. Therefore, in the following description, L0 and L1 are represented as a common LX. X is 0 in the process of calculating the L0 motion vector, and X is 1 in the process of calculating the L1 motion vector. When the information of the other list, rather than LX, is referred to during the process of calculating the LX motion vector, the other list is represented as LY.
mvLX = mvpLX + mvdLX
mvdLX = mvLX - mvpLX
mvLX = mvpLX + mvdLX
FIG. 18 is a flowchart showing the processing procedures of the predicted motion vector candidate generation units 121 and 221, the predicted motion vector candidate registration units 122 and 222, the predicted motion vector redundant candidate deletion units 123 and 223, and the predicted motion vector candidate number limit units 124 and 224, which have functions common to the motion vector difference calculation unit 103 of the moving image encoding apparatus and the motion vector calculation unit 204 of the moving image decoding apparatus.
td = POC of the current encoding or decoding target picture − POC of the reference picture referred to by the reference list ListN of the adjacent prediction block
tb = POC of the current encoding or decoding target picture − POC of the reference picture referred to by the reference list LX of the current encoding or decoding target picture
mvLXN=tb/td*mvLXN
tx = ( 16384 + Abs( td / 2 ) ) / td
DistScaleFactor = ( tb * tx + 32 ) >> 6
mvLXN = ClipMv( Sign( DistScaleFactor * mvLXN ) * ( (Abs( DistScaleFactor * mvLXN ) + 127 ) >> 8 ) )
td = POC of the picture colPic at a different time − POC of the reference picture referred to by the list ListCol of the prediction block colPU
tb = POC of the current encoding or decoding target picture − POC of the reference picture referred to by the reference list LX of the current encoding or decoding target picture
mvLXCol=tb/td*mvLXCol
tx = ( 16384 + Abs( td / 2 ) ) / td
DistScaleFactor = ( tb * tx + 32 ) >> 6
mvLXCol = ClipMv( Sign( DistScaleFactor * mvLXCol ) * ( (Abs( DistScaleFactor * mvLXCol ) + 127 ) >> 8 ) )
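The integer scaling above can be reproduced directly. The sketch below follows the listed tx / DistScaleFactor formulas; the clipping range of ClipMv is not given in this excerpt, so a 16-bit motion vector range is assumed for illustration, and C-style division truncating toward zero is used since Python's `//` floors instead.

```python
def clip_mv(v, lo=-32768, hi=32767):
    # assumed clipping range; the exact ClipMv bounds are not shown here
    return max(lo, min(hi, v))

def idiv(a, b):
    # C-style integer division truncating toward zero
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def scale_mv(mv, tb, td):
    """Scale one motion vector component by the POC distance ratio tb/td,
    following the tx and DistScaleFactor formulas listed above."""
    tx = idiv(16384 + abs(td) // 2, td)
    dist_scale_factor = (tb * tx + 32) >> 6
    s = dist_scale_factor * mv
    sign = -1 if s < 0 else 1
    return clip_mv(sign * ((abs(s) + 127) >> 8))

print(scale_mv(8, 1, 2))  # 4  (half the temporal distance halves the vector)
```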
Claims (22)
- A moving picture coding device that codes a moving picture using a motion vector in units of blocks obtained by partitioning each picture, comprising:
a predicted motion vector candidate generation unit configured to derive a plurality of predicted motion vector candidates from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block and a motion vector of one of blocks within a coded picture different from that of the coding target block;
a predicted motion vector candidate registration unit configured to register, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting unit configured to, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly register a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number;
a predicted motion vector selection unit configured to determine a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding unit configured to code information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - The moving picture coding device according to claim 1, wherein the predicted motion vector candidate limiting unit deletes predicted motion vector candidates exceeding the predetermined number from the predicted motion vector candidate list generated by the predicted motion vector candidate registration unit.
- The moving picture coding device according to claim 1 or 2, wherein the predicted motion vector candidate generation unit derives first and second predicted motion vector candidates from a motion vector of one of coded blocks neighboring the coding target block within the same picture as the coding target block, and derives a third predicted motion vector candidate from a motion vector of one of blocks within a coded picture different from that of the coding target block,
the predicted motion vector candidate registration unit registers the first, second, and third predicted motion vector candidates satisfying a predetermined condition in the predicted motion vector candidate list, and
the device comprises a predicted motion vector candidate redundancy judgment unit configured to delete the second predicted motion vector candidate from the predicted motion vector candidate list when the first and second predicted motion vector candidates registered in the predicted motion vector candidate list by the predicted motion vector candidate registration unit have the same value. - A moving picture coding device that codes a moving picture using a motion vector in units of blocks obtained by partitioning each picture, comprising:
a predicted motion vector candidate generation unit configured to generate a plurality of predicted motion vector candidates by prediction from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block, and to register them in a predicted motion vector candidate list;
a predicted motion vector candidate number limiting unit configured to limit the number of predicted motion vector candidates registered in the predicted motion vector candidate list to a maximum candidate number according to the size of the prediction block;
a predicted motion vector selection unit configured to determine a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding unit configured to code information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - A moving picture coding method of coding a moving picture using a motion vector in units of blocks obtained by partitioning each picture, comprising:
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block and a motion vector of one of blocks within a coded picture different from that of the coding target block;
a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number;
a predicted motion vector selection step of determining a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding step of coding information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - A moving picture coding program for coding a moving picture using a motion vector in units of blocks obtained by partitioning each picture, the program causing a computer to execute:
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block and a motion vector of one of blocks within a coded picture different from that of the coding target block;
a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number;
a predicted motion vector selection step of determining a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding step of coding information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - A transmission device comprising: a packet processing unit configured to packetize a coded bit string, coded by a moving picture coding method of coding a moving picture using a motion vector in units of blocks obtained by partitioning each picture, to obtain coded data; and
a transmission unit configured to transmit the packetized coded data,
wherein the moving picture coding method comprises:
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block and a motion vector of one of blocks within a coded picture different from that of the coding target block;
a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number;
a predicted motion vector selection step of determining a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding step of coding information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - The transmission device according to claim 7, wherein the predicted motion vector candidate limiting step deletes predicted motion vector candidates exceeding the predetermined number from the predicted motion vector candidate list generated by the predicted motion vector candidate registration step.
- The transmission device according to claim 7 or 8, wherein the predicted motion vector candidate generation step derives first and second predicted motion vector candidates from a motion vector of one of coded blocks neighboring the coding target block within the same picture as the coding target block, and derives a third predicted motion vector candidate from a motion vector of one of blocks within a coded picture different from that of the coding target block,
the predicted motion vector candidate registration step registers the first, second, and third predicted motion vector candidates satisfying a predetermined condition in the predicted motion vector candidate list, and
the method includes a predicted motion vector candidate redundancy judgment step of deleting the second predicted motion vector candidate from the predicted motion vector candidate list when the first and second predicted motion vector candidates registered in the predicted motion vector candidate list by the predicted motion vector candidate registration step have the same value. - A transmission method comprising: a packet processing step of packetizing a coded bit string, coded by a moving picture coding method of coding a moving picture using a motion vector in units of blocks obtained by partitioning each picture, to obtain coded data; and
a transmission step of transmitting the packetized coded data,
wherein the moving picture coding method comprises:
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block and a motion vector of one of blocks within a coded picture different from that of the coding target block;
a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number;
a predicted motion vector selection step of determining a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding step of coding information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - A transmission program causing a computer to execute: a packet processing step of packetizing a coded bit string, coded by a moving picture coding method of coding a moving picture using a motion vector in units of blocks obtained by partitioning each picture, to obtain coded data; and
a transmission step of transmitting the packetized coded data,
wherein the moving picture coding method comprises:
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of coded blocks neighboring a coding target block within the same picture as the coding target block and a motion vector of one of blocks within a coded picture different from that of the coding target block;
a predicted motion vector candidate registration step of registering, in a predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number;
a predicted motion vector selection step of determining a predicted motion vector of the coding target block from the predicted motion vector candidate list; and
a coding step of coding information indicating the position of the determined predicted motion vector in the predicted motion vector candidate list. - A moving picture decoding device that decodes a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture, comprising:
a decoding unit configured to decode information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation unit configured to derive a plurality of predicted motion vector candidates from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block and a motion vector of one of blocks within a decoded picture different from that of the decoding target block;
a predicted motion vector candidate registration unit configured to register, in the predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting unit configured to, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly register a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number; and
a predicted motion vector selection unit configured to select a predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected. - The moving picture decoding device according to claim 12, wherein the predicted motion vector candidate limiting unit deletes predicted motion vector candidates exceeding the predetermined number from the predicted motion vector candidate list generated by the predicted motion vector candidate registration unit.
- The moving picture decoding device according to claim 12 or 13, wherein the predicted motion vector candidate generation unit derives first and second predicted motion vector candidates from a motion vector of one of decoded blocks neighboring the decoding target block within the same picture as the decoding target block, and derives a third predicted motion vector candidate from a motion vector of one of blocks within a decoded picture different from that of the decoding target block,
the predicted motion vector candidate registration unit registers the first, second, and third predicted motion vector candidates satisfying a predetermined condition in the predicted motion vector candidate list, and
the device comprises a predicted motion vector candidate redundancy judgment unit configured to delete the second predicted motion vector candidate from the predicted motion vector candidate list when the first and second predicted motion vector candidates registered in the predicted motion vector candidate list by the predicted motion vector candidate registration unit have the same value. - A moving picture decoding device that decodes a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture, comprising:
a decoding unit configured to decode information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation unit configured to generate a plurality of predicted motion vector candidates by prediction from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block, and to register them in the predicted motion vector candidate list;
a predicted motion vector candidate number limiting unit configured to limit the number of predicted motion vector candidates registered in the predicted motion vector candidate list to a maximum candidate number according to the size of the prediction block; and
a predicted motion vector selection unit configured to select the predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected in the predicted motion vector candidate list. - A moving picture decoding method of decoding a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture, comprising:
a decoding step of decoding information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block and a motion vector of one of blocks within a decoded picture different from that of the decoding target block;
a predicted motion vector candidate registration step of registering, in the predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number; and
a predicted motion vector selection step of selecting a predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected. - A moving picture decoding program for decoding a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture, the program causing a computer to execute:
a decoding step of decoding information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block and a motion vector of one of blocks within a decoded picture different from that of the decoding target block;
a predicted motion vector candidate registration step of registering, in the predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number; and
a predicted motion vector selection step of selecting a predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected. - A receiving device that receives and decodes a coded bit string in which a moving picture is coded, comprising:
a receiving unit configured to receive coded data obtained by packetizing a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture;
a restoration unit configured to perform packet processing on the received coded data to restore the original coded bit string;
a decoding unit configured to decode, from the restored coded bit string, information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation unit configured to derive a plurality of predicted motion vector candidates from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block and a motion vector of one of blocks within a decoded picture different from that of the decoding target block;
a predicted motion vector candidate registration unit configured to register, in the predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting unit configured to, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly register a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number; and
a predicted motion vector selection unit configured to select a predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected. - The receiving device according to claim 18, wherein the predicted motion vector candidate limiting unit deletes predicted motion vector candidates exceeding the predetermined number from the predicted motion vector candidate list generated by the predicted motion vector candidate registration unit.
- The receiving device according to claim 18 or 19, wherein the predicted motion vector candidate generation unit derives first and second predicted motion vector candidates from a motion vector of one of decoded blocks neighboring the decoding target block within the same picture as the decoding target block, and derives a third predicted motion vector candidate from a motion vector of one of blocks within a decoded picture different from that of the decoding target block,
the predicted motion vector candidate registration unit registers the first, second, and third predicted motion vector candidates satisfying a predetermined condition in the predicted motion vector candidate list, and
the device comprises a predicted motion vector candidate redundancy judgment unit configured to delete the second predicted motion vector candidate from the predicted motion vector candidate list when the first and second predicted motion vector candidates registered in the predicted motion vector candidate list by the predicted motion vector candidate registration unit have the same value. - A receiving method of receiving and decoding a coded bit string in which a moving picture is coded, comprising:
a receiving step of receiving coded data obtained by packetizing a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture;
a restoration step of performing packet processing on the received coded data to restore the original coded bit string;
a decoding step of decoding, from the restored coded bit string, information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block and a motion vector of one of blocks within a decoded picture different from that of the decoding target block;
a predicted motion vector candidate registration step of registering, in the predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number; and
a predicted motion vector selection step of selecting a predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected. - A receiving program for receiving and decoding a coded bit string in which a moving picture is coded, the program causing a computer to execute:
a receiving step of receiving coded data obtained by packetizing a coded bit string in which a moving picture is coded using a motion vector in units of blocks obtained by partitioning each picture;
a restoration step of performing packet processing on the received coded data to restore the original coded bit string;
a decoding step of decoding, from the restored coded bit string, information indicating the position of a predicted motion vector to be selected in a predicted motion vector candidate list;
a predicted motion vector candidate generation step of deriving a plurality of predicted motion vector candidates from a motion vector of one of decoded blocks neighboring a decoding target block within the same picture as the decoding target block and a motion vector of one of blocks within a decoded picture different from that of the decoding target block;
a predicted motion vector candidate registration step of registering, in the predicted motion vector candidate list, predicted motion vector candidates satisfying a predetermined condition among the plurality of predicted motion vector candidates;
a predicted motion vector candidate limiting step of, when the number of predicted motion vector candidates registered in the predicted motion vector candidate list is smaller than a predetermined number (a natural number of 2 or more), repeatedly registering a predicted motion vector candidate having the same value in the predicted motion vector candidate list until the number of predicted motion vector candidates reaches the predetermined number; and
a predicted motion vector selection step of selecting a predicted motion vector of the decoding target block from the predicted motion vector candidate list based on the decoded information indicating the position of the predicted motion vector to be selected.
Priority Applications (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020147011156A KR101617974B1 (ko) | 2011-09-28 | 2012-09-28 | 동영상 인코딩 장치, 동영상 인코딩 방법, 동영상 인코딩 프로그램, 송신 장치, 송신 방법 및 송신 프로그램, 및 동영상 디코딩 장치, 동영상 디코딩 방법, 동영상 디코딩 프로그램, 수신 장치, 수신 방법 및 수신 프로그램 |
KR1020167010918A KR101711355B1 (ko) | 2011-09-28 | 2012-09-28 | 동영상 디코딩 장치, 동영상 디코딩 방법 및 동영상 디코딩 프로그램을 저장한 기록매체 |
KR1020177004758A KR101809881B1 (ko) | 2011-09-28 | 2012-09-28 | 동영상 디코딩 장치, 동영상 디코딩 방법 및 동영상 디코딩 프로그램을 저장한 기록매체 |
BR122020015070-0A BR122020015070B1 (pt) | 2011-09-28 | 2012-09-28 | Dispositivo de decodificação de imagem em movimento e método de decodificação de imagem em movimento |
BR112014007492-5A BR112014007492B1 (pt) | 2011-09-28 | 2012-09-28 | Dispositivo de codificação de imagem em movimento, método de codificação de imagem em movimento, dispositivo de decodificação de imagem em movimento e método de decodificação de imagem em movimento |
KR1020177004756A KR101809879B1 (ko) | 2011-09-28 | 2012-09-28 | 동영상 인코딩 장치, 동영상 인코딩 방법 및 동영상 인코딩 프로그램을 저장한 기록매체 |
KR1020177004757A KR101809880B1 (ko) | 2011-09-28 | 2012-09-28 | 동영상 디코딩 장치, 동영상 디코딩 방법 및 동영상 디코딩 프로그램을 저장한 기록매체 |
US14/226,268 US9674549B2 (en) | 2011-09-28 | 2014-03-26 | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program, and moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method and reception program |
US15/165,164 US9661343B2 (en) | 2011-09-28 | 2016-05-26 | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program, and moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method and reception program |
US15/581,239 US9924194B2 (en) | 2011-09-28 | 2017-04-28 | Moving picture decoding device, moving picture decoding method, and moving picture decoding program |
US15/581,275 US9866864B2 (en) | 2011-09-28 | 2017-04-28 | Moving picture decoding device, moving picture decoding method, and moving picture decoding program |
US15/581,301 US9866865B2 (en) | 2011-09-28 | 2017-04-28 | Moving picture decoding device, moving picture decoding method, and moving picture decoding program |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-212014 | 2011-09-28 | ||
JP2011212015 | 2011-09-28 | ||
JP2011212014 | 2011-09-28 | ||
JP2011-212015 | 2011-09-28 | ||
JP2012-214685 | 2012-09-27 | ||
JP2012214685A JP5488666B2 (ja) | 2011-09-28 | 2012-09-27 | 動画像復号装置、動画像復号方法、動画像復号プログラム、受信装置、受信方法及び受信プログラム |
JP2012-214684 | 2012-09-27 | ||
JP2012214684A JP5884697B2 (ja) | 2011-09-28 | 2012-09-27 | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、送信装置、送信方法及び送信プログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/226,268 Continuation US9674549B2 (en) | 2011-09-28 | 2014-03-26 | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program, and moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method and reception program |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2013046707A1 true WO2013046707A1 (ja) | 2013-04-04 |
WO2013046707A9 WO2013046707A9 (ja) | 2013-07-25 |
Family
ID=47994781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/006225 WO2013046707A1 (ja) | 2011-09-28 | 2012-09-28 | 動画像符号化装置、動画像符号化方法、動画像符号化プログラム、送信装置、送信方法及び送信プログラム、並びに動画像復号装置、動画像復号方法、動画像復号プログラム、受信装置、受信方法及び受信プログラム |
Country Status (2)
Country | Link |
---|---|
BR (1) | BR122020015070B1 (ja) |
WO (1) | WO2013046707A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020139184A1 (en) * | 2018-12-28 | 2020-07-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Generating a motion vector predictor list |
-
2012
- 2012-09-28 BR BR122020015070-0A patent/BR122020015070B1/pt active IP Right Grant
- 2012-09-28 WO PCT/JP2012/006225 patent/WO2013046707A1/ja active Application Filing
Non-Patent Citations (4)
Title |
---|
JIANLE CHEN ET AL.: "MVP index parsing with fixed number of candidates", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-F402_r1, 6TH MEETING, July 2011 (2011-07-01), TORINO, IT, pages 1 - 17 * |
TOSHIYASU SUGIO ET AL.: "On MVP candidate list for AMVP/Merge", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-I0134, 9TH MEETING, May 2012 (2012-05-01), GENEVA, CH, pages 1 - 6 * |
TOSHIYASU SUGIO ET AL.: "Parsing Robustness for Merge/AMVP", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-F470_r4, 6TH MEETING, July 2011 (2011-07-01), TORINO, IT, pages 1 - 15 * |
YUSUKE ITANI ET AL.: "Improvement to AMVP/Merge process", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-E064_r1, 5TH MEETING, March 2011 (2011-03-01), GENEVA, CH, pages 1 - 8 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020139184A1 (en) * | 2018-12-28 | 2020-07-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Generating a motion vector predictor list |
US11902566B2 (en) | 2018-12-28 | 2024-02-13 | Telefonaktiebolaget Lm Ericsson (Publ) | Generating a motion vector predictor list |
Also Published As
Publication number | Publication date |
---|---|
BR122020015070B1 (pt) | 2023-01-17 |
WO2013046707A9 (ja) | 2013-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6079912B2 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method, and transmission program | |
WO2013099244A1 (ja) | Moving picture coding device, moving picture coding method and moving picture coding program, and moving picture decoding device, moving picture decoding method and moving picture decoding program | |
JP5488666B2 (ja) | Moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method and reception program | |
WO2013076981A1 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method, and transmission program, and moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method, and reception program | |
JP2013132046A (ja) | Moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method, and reception program | |
JP5747816B2 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program | |
JP5962877B1 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program | |
JP2013131918A (ja) | Moving picture decoding device, moving picture decoding method and moving picture decoding program | |
WO2013046707A9 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program, and moving picture decoding device, moving picture decoding method, moving picture decoding program, receiving device, reception method and reception program | |
JP5617834B2 (ja) | Moving picture decoding device, moving picture decoding method, and moving picture decoding program, and receiving device, reception method, and reception program | |
JP2013074468A (ja) | Moving picture decoding device, moving picture decoding method and moving picture decoding program | |
JP5962876B1 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program | |
JP5962875B1 (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method and transmission program | |
JP6037061B2 (ja) | Moving picture decoding device, moving picture decoding method, and moving picture decoding program, and receiving device, reception method, and reception program | |
JP2013131917A (ja) | Moving picture coding device, moving picture coding method and moving picture coding program | |
JP2013110576A (ja) | Moving picture decoding device, moving picture decoding method and moving picture decoding program | |
JP2013110575A (ja) | Moving picture coding device, moving picture coding method and moving picture coding program | |
JP2013074467A (ja) | Moving picture coding device, moving picture coding method and moving picture coding program | |
JP2013192080A (ja) | Moving picture coding device, moving picture coding method and moving picture coding program | |
JP2013192081A (ja) | Moving picture decoding device, moving picture decoding method and moving picture decoding program | |
JP2013132047A (ja) | Moving picture coding device, moving picture coding method, moving picture coding program, transmitting device, transmission method, and transmission program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 12836399 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20147011156 Country of ref document: KR Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112014007492 Country of ref document: BR |
|
122 | EP: PCT application non-entry in European phase |
Ref document number: 12836399 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 112014007492 Country of ref document: BR Kind code of ref document: A2 Effective date: 20140327 |