WO2012102045A1 - Image encoding method and image decoding method - Google Patents
Image encoding method and image decoding method
- Publication number
- WO2012102045A1 (PCT/JP2012/000491; application JP2012000491W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- picture
- motion vector
- block
- reference picture
- index
Classifications
- H04N19/51 — Motion estimation or motion compensation
- H04N19/109 — Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/119 — Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/137 — Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/147 — Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
- H04N19/52 — Processing of motion vectors by encoding by predictive encoding
- H04N19/61 — Transform coding in combination with predictive coding
- (All of the above fall under H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, within H04N — Pictorial communication, e.g. television.)
Definitions
- the present invention relates to an image encoding method and an image decoding method using a reference index and a motion vector.
- An image encoding apparatus generally compresses the amount of information using redundancy in the spatial direction and temporal direction of an image (including a still image and a moving image).
- To remove redundancy in the spatial direction, conversion to the frequency domain is used.
- Inter prediction is used as a method of using temporal redundancy. Inter prediction is also called inter-picture prediction.
- When an image encoding apparatus using inter prediction encodes a certain picture, an already encoded picture located forward or backward of the encoding target picture in display order (display time order) is used as a reference picture.
- the image encoding device derives a motion vector by detecting the motion of the encoding target picture with respect to the reference picture.
- the image encoding device obtains predicted image data by performing motion compensation based on the motion vector.
- The image coding apparatus acquires a difference between the predicted image data and the image data of the picture to be coded.
- the image encoding device encodes the acquired difference. Thereby, the image coding apparatus removes redundancy in the time direction.
- In motion detection, the image coding apparatus calculates difference values between the encoding target block in the encoding target picture and blocks in the reference picture, and determines the block in the reference picture with the smallest difference value as the reference block. Then, the image encoding device detects a motion vector using the encoding target block and the reference block.
- An image encoding apparatus according to the standardized image encoding method called H.264 (see Non-Patent Document 1) uses three types of pictures, i.e., an I picture, a P picture, and a B picture, in order to compress the amount of information.
- This image encoding apparatus does not perform inter prediction on an I picture. That is, the image coding apparatus performs intra prediction on the I picture. Intra prediction is also called intra-picture prediction.
- the image coding apparatus performs inter prediction on the P picture with reference to one already coded picture in front of or behind the picture to be coded in display order. Also, the image encoding apparatus performs inter prediction on the B picture with reference to two already encoded pictures in front of or behind the encoding target picture in display order.
- In inter prediction, the image coding apparatus generates a reference list (also referred to as a reference picture list) for specifying a reference picture.
- a reference picture index (also referred to as a reference index) is assigned to an encoded reference picture that is referred to in inter prediction. For example, since the image coding apparatus refers to two pictures with respect to a B picture, it holds two reference lists (L0, L1).
- FIG. 34 shows an example of a reference list.
- a reference picture list L0 in FIG. 34 is an example of a reference picture list corresponding to the first prediction direction of bidirectional prediction.
- a reference picture index having a value of 0 is assigned to a reference picture r1 having a display order of 2.
- a reference picture index having a value of 1 is assigned to a reference picture r2 having a display order of 1.
- a reference picture index having a value of 2 is assigned to the reference picture r3 having a display order of 0.
- Thus, a smaller reference picture index is assigned to a reference picture that is closer to the encoding target picture in display order.
- the reference picture list L1 of FIG. 34 is an example of a reference picture list corresponding to the second prediction direction of bidirectional prediction.
- a reference picture index having a value of 0 is assigned to a reference picture r2 having a display order of 1.
- a reference picture index having a value of 1 is assigned to a reference picture r1 having a display order of 2.
- a reference picture index having a value of 2 is assigned to a reference picture r3 having a display order of 0.
- Two different reference picture indexes may be assigned to a specific reference picture included in both reference picture lists (reference pictures r1 and r2 in FIG. 34). Also, the same reference picture index may be assigned to a specific reference picture included in both reference picture lists (reference picture r3 in FIG. 34).
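- As a concrete illustration, the following sketch reproduces the FIG. 34 index assignment in Python. The display orders of r1, r2, and r3 are taken from the text above; the current picture's display order (3) and the L0 ordering rule (nearest picture first) are assumptions made for illustration, and the L1 ordering is copied from the figure.

```python
# Illustrative sketch of the FIG. 34 reference picture lists (not normative).
reference_pictures = {"r1": 2, "r2": 1, "r3": 0}  # name -> display order (POC)
cur_poc = 3                                       # assumed current picture POC

# L0 (first prediction direction): pictures closer in display order get
# smaller reference picture indexes, as described in the text.
ref_list_l0 = sorted(reference_pictures,
                     key=lambda name: abs(cur_poc - reference_pictures[name]))

# L1 (second prediction direction): ordering taken directly from FIG. 34.
ref_list_l1 = ["r2", "r1", "r3"]

for idx, name in enumerate(ref_list_l0):
    print(f"L0: refIdx {idx} -> {name} (display order {reference_pictures[name]})")
for idx, name in enumerate(ref_list_l1):
    print(f"L1: refIdx {idx} -> {name} (display order {reference_pictures[name]})")
```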
- Prediction using only the reference picture list L0 is called L0 prediction.
- Prediction using only the reference picture list L1 is called L1 prediction.
- Prediction using both the reference picture list L0 and the reference picture list L1 is called bi-prediction or bi-prediction.
- In the first prediction direction, corresponding to the reference picture list L0, the forward direction is often used as the prediction direction. In the second prediction direction, corresponding to the reference picture list L1, the backward direction is often used as the prediction direction. That is, the reference picture list L0 is configured to correspond to the first prediction direction, and the reference picture list L1 is configured to correspond to the second prediction direction.
- the prediction direction is classified into one of the first prediction direction, the second prediction direction, and bidirectional. Further, when the prediction direction is bidirectional, it is also expressed that the prediction direction is bidirectional prediction or bi-prediction.
- In H.264, there is a motion detection mode as a coding mode (also referred to as an inter prediction mode or a prediction mode) of a coding target block in a B picture.
- the image encoding device detects the motion vector of the encoding target block. Then, the image encoding device generates predicted image data using the reference picture and the motion vector. Then, the image encoding device encodes the difference value between the predicted image data and the image data of the encoding target block, and the motion vector used to generate the predicted image data.
- the motion detection mode includes bi-directional prediction in which a prediction image is generated with reference to two already encoded pictures in front of or behind the encoding target picture.
- the motion detection mode includes unidirectional prediction in which a prediction image is generated with reference to one already encoded picture in front of or behind the encoding target picture. Then, either bidirectional prediction or unidirectional prediction is selected for the encoding target block.
- An image coding apparatus can select a coding mode called a temporal direct mode when deriving a motion vector in coding a B picture.
- a method of inter prediction in the temporal direct mode will be described with reference to FIG.
- FIG. 35 is an explanatory diagram showing motion vectors in the temporal direct mode.
- FIG. 35 shows a case where the image coding apparatus codes the block a of the picture B2 in the temporal direct mode.
- the image coding apparatus uses the motion vector vb used when coding the block b at the same position as the block a in the picture P3 which is a reference picture behind the picture B2.
- the motion vector vb refers to the picture P1.
- In this case, the image encoding device uses motion vectors parallel to the motion vector vb to obtain reference blocks from the picture P1, which is a forward reference picture, and the picture P3, which is a backward reference picture. Then, the image encoding device performs bidirectional prediction to encode the block a. That is, the image coding apparatus codes the block a using the motion vector va1 for the picture P1 and the motion vector va2 for the picture P3.
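- As a worked sketch of this scaling: if the picture names are taken to reflect display orders (P1 = 1, B2 = 2, P3 = 3, which is an assumption, as is the value of vb), then va1 and va2 are obtained by scaling vb in proportion to the display-time distances.

```python
# Temporal direct mode scaling (FIG. 35) -- a minimal sketch.
# Assumed display orders: P1 = 1, B2 = 2, P3 = 3 (illustrative only).
t_p1, t_b2, t_p3 = 1, 2, 3
vb = (8, -4)  # motion vector of block b, pointing from P3 to P1 (assumed value)

span = t_p3 - t_p1  # display-time distance covered by vb
va1 = tuple(v * (t_b2 - t_p1) / span for v in vb)  # block a -> P1 (forward)
va2 = tuple(v * (t_b2 - t_p3) / span for v in vb)  # block a -> P3 (backward)

print("va1 =", va1)  # (4.0, -2.0): half of vb, toward the forward reference
print("va2 =", va2)  # (-4.0, 2.0): opposite sign, toward the backward reference
```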
- There is also a merge mode as an encoding mode of the encoding target block in a B picture or a P picture.
- In the merge mode, the image encoding device copies a motion vector and a reference picture index from a block adjacent to the encoding target block, and encodes the encoding target block using them.
- the image encoding apparatus attaches the index of the adjacent block used for copying to the bit stream. As a result, the same motion vector and reference picture index as those on the encoding side can be selected on the decoding side.
- the adjacent block A in FIG. 36A is an encoded block adjacent to the left of the encoding target block.
- the adjacent block B is an encoded block adjacent on the encoding target block.
- the adjacent block C is an encoded block adjacent to the upper right of the encoding target block.
- adjacent block A is a block encoded by bi-directional prediction, and has motion vector MvL0_A in the first prediction direction and motion vector MvL1_A in the second prediction direction.
- the adjacent block B is a block encoded by unidirectional prediction and has a motion vector MvL0_B in the first prediction direction.
- the adjacent block C is a block encoded by unidirectional prediction and has a motion vector MvL0_C in the first prediction direction.
- Here, the motion vectors MvL0_A, MvL0_B, and MvL0_C all refer to the reference picture indicated by the reference picture index RefIdxL0, and the motion vector MvL1_A refers to the reference picture indicated by the reference picture index RefIdxL1.
- the image encoding apparatus selects an adjacent block in which the motion vector and the reference picture index are copied to the encoding target block from the adjacent blocks A, B, and C. At this time, the image encoding apparatus selects adjacent blocks so that the encoding efficiency is the highest. Then, the image encoding apparatus attaches a merge block index representing the selected adjacent block to the bit stream.
- For example, when the adjacent block A is selected, the image encoding apparatus encodes the encoding target block using the motion vector MvL0_A, the motion vector MvL1_A, and the reference pictures referred to by the motion vectors MvL0_A and MvL1_A. Then, the image coding apparatus attaches to the bit stream only the merge block index indicating that the adjacent block A is used.
- FIG. 36B shows an example of a merge block index.
- the image encoding apparatus reduces the information amount of the motion vector and the reference picture index by attaching only such a merge block index to the bitstream.
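- The principle can be sketched as follows; the data structures and numeric values are hypothetical, and only the merge block index would be written to the bitstream.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionData:
    """Motion information a block carries (hypothetical simplified form)."""
    mv_l0: Optional[Tuple[int, int]]  # motion vector, first prediction direction
    mv_l1: Optional[Tuple[int, int]]  # motion vector, second prediction direction
    ref_idx_l0: int                   # reference picture index, first direction
    ref_idx_l1: int                   # reference picture index, second direction

# Merge block candidates as in FIG. 36A (values are illustrative assumptions):
# index 0 = adjacent block A (bidirectional), 1 = block B, 2 = block C.
candidates = [
    MotionData((3, 1), (-2, 0), ref_idx_l0=0, ref_idx_l1=0),  # block A
    MotionData((4, 1), None, ref_idx_l0=0, ref_idx_l1=-1),    # block B
    MotionData((2, 2), None, ref_idx_l0=0, ref_idx_l1=-1),    # block C
]

merge_block_index = 0                          # block A chosen for efficiency
current_block = candidates[merge_block_index]  # copy MV and reference indexes

# Only merge_block_index goes into the bitstream; the decoder, holding the
# same candidate list, repeats the copy to recover the motion data.
print(merge_block_index, current_block)
```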
- However, in this merge mode, the merge source blocks are only adjacent blocks in the picture to be encoded. Therefore, when no motion vector is available, for example because an adjacent block is intra-encoded, the encoding efficiency decreases.
- The present invention improves encoding efficiency by using not only adjacent blocks in the encoding target picture but also the encoding result information of other reference pictures different from the encoding target picture. An object of the present invention is to provide such an image encoding method.
- An image encoding method according to an aspect of the present invention encodes a block to be encoded using a first reference index indicating a first reference picture and a first motion vector. The method includes an acquisition step of acquiring a second reference index and a second motion vector used for encoding a corresponding block, which is a block included in a corresponding picture different from the encoding target picture and whose position matches the position of the encoding target block in the encoding target picture, a calculation step of calculating a third reference index and a third motion vector from the second reference index and the second motion vector, and an encoding step of adding the value of the flag to the encoded stream.
- In the calculating step, the second reference index may be copied to the third reference index, and the third motion vector may be calculated by performing scaling processing of the second motion vector using the display order of the encoding target picture, the display order of the corresponding picture, the display order of the reference picture indicated by the second reference index, and the display order of the third reference picture indicated by the third reference index.
- In the calculating step, it may be determined whether or not the second reference picture indicated by the second reference index is included in a reference picture list of the encoding target picture. When the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index; when the second reference picture is not included in the reference picture list, the third reference index is invalidated. When the third reference index is not invalid, the third motion vector may be calculated by performing scaling processing of the second motion vector using the display order of the encoding target picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of the third reference picture indicated by the third reference index.
- Alternatively, in the calculating step, it may be determined whether or not the second reference picture indicated by the second reference index is included in a reference picture list of the encoding target picture. When the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index; when the second reference picture is not included in the reference picture list, the third reference index is set to the maximum value that can be assigned in the reference picture list. The third motion vector may then be calculated by performing scaling processing of the second motion vector using the display order of the encoding target picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of the third reference picture indicated by the third reference index.
- An image decoding method according to an aspect of the present invention decodes a block to be decoded using a first reference index indicating a first reference picture and a first motion vector. The method includes an acquisition step of acquiring a second reference index and a second motion vector used for decoding a corresponding block, which is a block included in a corresponding picture different from the decoding target picture and whose position matches the position of the decoding target block in the decoding target picture, a calculation step of calculating a third reference index and a third motion vector from the second reference index and the second motion vector, and a decoding step of decoding the decoding target block using the third reference index and the third motion vector as the first reference index and the first motion vector.
- In the calculating step, the second reference index may be copied to the third reference index, and the third motion vector may be calculated by performing scaling processing of the second motion vector using the display order of the decoding target picture, the display order of the corresponding picture, the display order of the reference picture indicated by the second reference index, and the display order of the third reference picture indicated by the third reference index.
- In the calculating step, it may be determined whether or not the second reference picture indicated by the second reference index is included in a reference picture list of the decoding target picture. When the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index; when the second reference picture is not included in the reference picture list, the third reference index is invalidated. When the third reference index is not invalid, the third motion vector may be calculated by performing scaling processing of the second motion vector using the display order of the decoding target picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of the third reference picture indicated by the third reference index.
- Alternatively, in the calculating step, it may be determined whether or not the second reference picture indicated by the second reference index is included in a reference picture list of the decoding target picture. When the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index; when the second reference picture is not included in the reference picture list, the third reference index is set to the maximum value that can be assigned in the reference picture list. The third motion vector may then be calculated by performing scaling processing of the second motion vector using the display order of the decoding target picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of the third reference picture indicated by the third reference index.
- According to the present invention, not only adjacent blocks in the encoding target picture but also the encoding result information of another reference picture different from the encoding target picture are targeted for merging, which makes it possible to improve the encoding efficiency.
- FIG. 1 is a block diagram showing a configuration of an image coding apparatus according to Embodiment 1.
- FIG. 2 is a flowchart showing the operation of the image coding apparatus according to Embodiment 1.
- FIG. 3A is a diagram illustrating an example of merge block candidates according to Embodiment 1.
- FIG. 3B is a diagram showing an example of a merge block index according to Embodiment 1.
- FIG. 4 is a diagram illustrating an example of a code table according to the first embodiment.
- FIG. 5 is a flowchart showing the comparison process according to the first embodiment.
- FIG. 6 is a conceptual diagram showing read / write processing according to the first embodiment.
- FIG. 7 is a flowchart showing a calculation process according to the first embodiment.
- FIG. 8A is a diagram showing a first example of temporal merge motion vectors according to Embodiment 1.
- FIG. 8B is a diagram showing a second example of temporal merge motion vectors according to Embodiment 1.
- FIG. 9A is a diagram showing a third example of temporal merge motion vectors according to Embodiment 1.
- FIG. 9B is a diagram showing a fourth example of temporal merge motion vectors according to Embodiment 1.
- FIG. 10 is a flowchart showing reference index calculation processing according to the second embodiment.
- FIG. 11A is a diagram showing a first example of temporal merge motion vectors according to Embodiment 2.
- FIG. 11B is a diagram showing a second example of temporal merge motion vectors according to Embodiment 2.
- FIG. 12A is a diagram showing a third example of temporal merge motion vectors according to Embodiment 2.
- FIG. 12B is a diagram showing a fourth example of temporal merge motion vectors according to Embodiment 2.
- FIG. 13 is a block diagram showing a configuration of an image decoding apparatus according to Embodiment 3.
- FIG. 14 is a flowchart showing an operation of the image decoding apparatus according to the third embodiment.
- FIG. 15 is an overall configuration diagram of a content supply system that realizes a content distribution service.
- FIG. 16 is an overall configuration diagram of a digital broadcasting system.
- FIG. 17 is a block diagram illustrating a configuration example of a television.
- FIG. 18 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
- FIG. 19 is a diagram illustrating a structure example of a recording medium that is an optical disk.
- FIG. 20A is a diagram illustrating an example of a mobile phone.
- FIG. 20B is a block diagram illustrating a configuration example of a mobile phone.
- FIG. 21 is a diagram showing a structure of multiplexed data.
- FIG. 22 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
- FIG. 23 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
- FIG. 24 is a diagram showing the structure of TS packets and source packets in multiplexed data.
- FIG. 25 is a diagram illustrating a data structure of the PMT.
- FIG. 26 is a diagram showing an internal configuration of multiplexed data information.
- FIG. 27 is a diagram showing the internal structure of the stream attribute information.
- FIG. 28 is a diagram showing steps for identifying video data.
- FIG. 29 is a block diagram illustrating a configuration example of an integrated circuit that realizes the moving picture coding method and the moving picture decoding method according to each embodiment.
- FIG. 30 is a diagram illustrating a configuration for switching the driving frequency.
- FIG. 31 is a diagram illustrating steps for identifying video data and switching between driving frequencies.
- FIG. 32 is a diagram illustrating an example of a lookup table in which video data standards are associated with drive frequencies.
- FIG. 33A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
- FIG. 33B is a diagram illustrating another example of a configuration for sharing a module of a signal processing unit.
- FIG. 34 is a diagram illustrating an example of a reference picture list.
- FIG. 35 is a diagram illustrating an example of a motion vector in the temporal direct mode.
- FIG. 36A is a diagram illustrating an example of adjacent blocks.
- FIG. 36B is a diagram illustrating an example of a merge block index.
- FIG. 1 is a block diagram showing an image coding apparatus according to the present embodiment.
- The image encoding apparatus includes a subtraction unit 102, an orthogonal transform unit 103, a quantization unit 104, an inverse quantization unit 106, an inverse orthogonal transform unit 107, an addition unit 108, a block memory 109, a frame memory 111, an intra prediction unit 110, an inter prediction unit 112, a switch unit 113, an inter prediction control unit 121, a picture type determination unit 124, a temporal merge motion vector calculation unit 122, a colPic memory 125, a co-located reference direction determination unit 123, and a variable length encoding unit 105.
- the orthogonal transform unit 103 performs transform from the image domain to the frequency domain for the input image sequence.
- the quantization unit 104 performs a quantization process on the input image sequence converted into the frequency domain.
- the inverse quantization unit 106 performs an inverse quantization process on the input image sequence quantized by the quantization unit 104.
- the inverse orthogonal transform unit 107 performs transform from the frequency domain to the image domain for the input image sequence subjected to the inverse quantization process.
- the block memory 109 is a memory for storing the input image sequence in units of blocks.
- the frame memory 111 is a memory for storing the input image sequence in units of frames.
- the picture type determination unit 124 determines which of the I picture, B picture, and P picture is used to encode the input image sequence, and generates picture type information.
- the intra prediction unit 110 performs intra prediction on the encoding target block using the input image sequence in block units stored in the block memory 109, and generates predicted image data.
- The inter prediction unit 112 performs inter prediction on the current block using the input image stored in the frame memory 111 and the motion vector derived by motion detection, and generates predicted image data.
- the co-located reference direction determination unit 123 determines whether the co-located block is a forward reference block or a backward reference block.
- the forward reference block is a block included in a picture located in front of the encoding target picture in display order.
- a backward reference block is a block included in a picture located behind.
- the co-located reference direction determination unit 123 generates a co-located reference direction flag for each picture and attaches it to the encoding target picture.
- the co-located block is a block in a picture different from the picture including the encoding target block, and a block in the same position as the encoding target block in the picture.
- the co-located block is a corresponding block corresponding to the encoding target block.
- the picture including the corresponding block is a corresponding picture corresponding to the encoding target picture.
- the position of the co-located block is the same as the position of the encoding target block. However, they may be different.
- the temporal merge motion vector calculation unit 122 derives merge mode merge block candidates (co-located merge blocks) using the colPic information including the motion vectors of the co-located blocks stored in the colPic memory 125. Further, the temporal merge motion vector calculation unit 122 assigns a merge block index value corresponding to the co-located merge block.
- the temporal merge motion vector calculation unit 122 sends the co-located merge block and the merge block index to the inter prediction control unit 121.
- When the co-located block does not have a usable motion vector, the temporal merge motion vector calculation unit 122 either cancels the derivation of the co-located merge block or derives the co-located merge block by regarding the motion vector as 0.
- The inter prediction control unit 121 compares the prediction image generated using the motion vector derived by motion detection with the prediction image generated using the motion vector derived in the merge mode, and performs inter prediction using the prediction mode with the smaller prediction error. In addition, the inter prediction control unit 121 sends a merge flag indicating whether or not the prediction mode is the merge mode to the variable length coding unit 105.
- the inter prediction control unit 121 sends the merge block index corresponding to the determined merge block and prediction error information to the variable length coding unit 105. Further, the inter prediction control unit 121 transfers colPic information including the motion vector of the encoding target block to the colPic memory 125.
- the orthogonal transform unit 103 performs transform from the image domain to the frequency domain on the prediction error data between the generated predicted image data and the input image sequence.
- the quantization unit 104 performs a quantization process on the prediction error data converted into the frequency domain.
- the variable length coding unit 105 performs variable length coding processing on the quantized prediction error data, the merge flag, the merge block index, the picture type information, and the co-located reference direction flag. As a result, the variable length encoding unit 105 generates a bit stream.
- FIG. 2 shows an outline of the processing flow of the image coding method according to the present embodiment.
- the co-located reference direction determination unit 123 determines whether the co-located block is a forward reference block or a backward reference block in deriving a co-located merge block candidate (S101).
- For example, the co-located reference direction determination unit 123 determines, as the co-located block, a block in whichever of the forward reference picture (to which a forward reference block belongs) and the backward reference picture (to which a backward reference block belongs) is closer to the current picture in display order. Then, the co-located reference direction determination unit 123 generates, for each picture, a co-located reference direction flag indicating whether the co-located block is a forward reference block or a backward reference block, and attaches it to the picture.
- Next, the inter prediction control unit 121 generates merge block candidates from the adjacent blocks of the encoding target block (S102). For example, in the case of FIG. 3A, the inter prediction control unit 121 determines the adjacent blocks A, B, and C as merge block candidates that supply the motion vector and reference picture index of the encoding target block. Then, the inter prediction control unit 121 assigns a merge block index to each merge block candidate as shown in FIG. 3B.
- Here, the inter prediction control unit 121 assigns a smaller merge block index value to a merge block candidate that is more likely to have an accurate motion vector and reference picture index, which improves coding efficiency. For example, the inter prediction control unit 121 may measure, for each block, the number of times the block has been selected as a merge block, and assign a small merge block index value to a frequently selected block.
- the temporal merge motion vector calculation unit 122 reads colPic information including the motion vector of the co-located block from the colPic memory 125 according to the co-located reference direction. Then, the temporal merge motion vector calculation unit 122 derives a co-located merge block in the merge mode using the reference picture index and the motion vector of the co-located block (S103).
- the inter prediction control unit 121 allocates the value of the merge block index corresponding to the co-located merge block as shown in FIG. 3B.
- the inter prediction control unit 121 compares the prediction error of the prediction image generated using the motion vector derived by the motion detection with the prediction error of the prediction image generated by the merge block candidate by a method described later. Then, the inter prediction control unit 121 sets the merge flag to 1 if the prediction mode is the merge mode, and sets the merge flag to 0 otherwise (S104).
- Then, the variable length coding unit 105 determines whether or not the merge flag is 1, that is, whether or not the prediction mode is the merge mode (S105). If so, the variable length coding unit 105 attaches the merge flag and the merge block index used for merging to the bitstream (S106). If not, the variable length coding unit 105 attaches the merge flag and motion detection mode information to the bitstream (S107).
- the inter prediction control unit 121 transfers the colPic information including the motion vector used for the inter prediction to the colPic memory 125 by a method described later and stores it (S108).
- the colPic memory 125 stores the motion vector of the reference picture, the index value of the reference picture, the prediction direction, and the like in order to calculate the temporal direct mode motion vector for the current block.
- Here, merge block index values are assigned as shown in FIG. 3B. Specifically, the value corresponding to the adjacent block A is 0, the value corresponding to the adjacent block B is 1, the value corresponding to the adjacent block C is 2, and the value corresponding to the co-located merge block is 3.
- how to assign the merge block index is not necessarily limited to this example.
- FIG. 4 shows an example of a code table used when the merge block index is variable length encoded. Codes with shorter code lengths are assigned to smaller merge block index values. Therefore, assigning a small value to the merge block index corresponding to a merge block candidate that is likely to have good prediction accuracy improves the coding efficiency.
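- By way of illustration, such a table could use truncated unary codewords so that smaller indices cost fewer bits; the codewords below are an assumed example, not necessarily the actual table of FIG. 4.

```python
# Assumed variable-length codes for merge block indices (truncated unary).
merge_index_codes = {0: "0", 1: "10", 2: "110", 3: "111"}
for idx, code in sorted(merge_index_codes.items()):
    print(f"merge block index {idx}: codeword {code} ({len(code)} bits)")
```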
- FIG. 5 shows a detailed processing flow of the cost comparison (S104) of FIG.
- the inter prediction control unit 121 sets 0 to the merge block candidate index, sets the prediction error (cost) of the motion detection mode to the minimum prediction error, and sets 0 to the merge flag (S201).
- The cost is calculated by, for example, the following Equation 1, based on the RD optimization model.
- Cost = D + λ × R (Equation 1)
- In Equation 1, D represents coding distortion. Specifically, the sum of absolute differences between the pixel values obtained by encoding and decoding the encoding target block using a prediction image generated with a certain motion vector and the original pixel values of the encoding target block is used for D.
- R represents the generated code amount. Specifically, the code amount necessary for encoding the motion vector used for generating the predicted image is used for R.
- λ is a Lagrange multiplier (Lagrange's undetermined multiplier).
- Next, the inter prediction control unit 121 determines whether or not the value of the merge block candidate index is smaller than the number of merge block candidates of the encoding target block, that is, whether or not there remains a block that can be a merge candidate (S202). If so, the inter prediction control unit 121 calculates the cost of the merge block candidate to which the merge block candidate index is assigned (S203).
- Then, the inter prediction control unit 121 determines whether the calculated cost of the merge block candidate is smaller than the minimum prediction error (S204). If so, the inter prediction control unit 121 updates the minimum prediction error, the merge block index, and the merge flag (S205). Next, the inter prediction control unit 121 adds 1 to the value of the merge block candidate index, and repeats the above process (S202 to S206).
- When no merge block candidate remains (false in S202), the inter prediction control unit 121 finalizes the resulting values of the merge flag and the merge block index.
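- The loop of S201 to S206 can be sketched as follows; the candidate costs are assumed to have been computed already as D + λ × R per Equation 1, and all names are illustrative.

```python
def select_prediction_mode(motion_detection_cost, merge_candidate_costs):
    """Sketch of FIG. 5 (S201-S206): choose the merge mode only if some
    merge block candidate beats the motion detection mode in RD cost."""
    best_cost = motion_detection_cost  # S201: minimum prediction error
    merge_flag = 0                     # S201: merge flag starts at 0
    merge_block_index = 0
    for idx, cost in enumerate(merge_candidate_costs):  # S202, S203
        if cost < best_cost:                            # S204
            best_cost = cost                            # S205: update values
            merge_flag = 1
            merge_block_index = idx
    return merge_flag, merge_block_index, best_cost

# Hypothetical costs (already D + lambda * R): candidate 1 wins here.
print(select_prediction_mode(120.0, [130.5, 110.2, 118.9, 125.0]))
# -> (1, 1, 110.2)
```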
- FIG. 6 is a conceptual diagram showing a read / write process to the colPic memory 125 shown in FIG.
- FIG. 6 shows a co-located block included in the co-located picture colPic. For the co-located block, the motion vector MvL0_Col in the first prediction direction, the reference picture index RefIdxL0_Col in the first prediction direction, the motion vector MvL1_Col in the second prediction direction, and the reference picture index RefIdxL1_Col in the second prediction direction are shown.
- Here, the first prediction direction is forward reference and the second prediction direction is backward reference. However, the first prediction direction may be backward reference and the second prediction direction may be forward reference. Alternatively, both the first prediction direction and the second prediction direction may be forward reference, or both may be backward reference.
- the co-located block is a block whose position in the co-located picture colPic matches the position of the encoding target block in the encoding target picture. Whether the co-located picture colPic is behind or ahead of the current picture is switched by a co-located reference direction flag.
- colPic information including a motion vector stored in the colPic memory 125 is read according to the co-located reference direction flag, and a co-located merge block is calculated.
- the calculated co-located merge block is used for encoding the encoding target block.
- FIG. 7 is a detailed processing flow of the merge block calculation (S103) of FIG. Hereinafter, the process shown in FIG. 7 will be described.
- the temporal merge motion vector calculation unit 122 reads colPic information from the colPic memory 125 according to the co-located reference direction flag (S301). The temporal merge motion vector calculation unit 122 determines whether the co-located block included in the colPic information has two or more motion vectors. That is, the temporal merge motion vector calculation unit 122 determines whether the co-located block has a forward reference motion vector (mvL0) and a backward reference motion vector (mvL1) (S302).
- Here, mvL0 denotes a forward reference motion vector, and mvL1 denotes a backward reference motion vector.
- When the co-located block has both motion vectors (Yes in S302), the temporal merge motion vector calculation unit 122 copies the value of the reference picture index RefIdxL0_Col in the first prediction direction of the co-located block to the reference picture index RefIdxL0 in the first prediction direction of the current block. Also, the temporal merge motion vector calculation unit 122 copies the value of the reference picture index RefIdxL1_Col in the second prediction direction of the co-located block to the reference picture index RefIdxL1 in the second prediction direction of the current block (S303).
- the temporal merge motion vector calculation unit 122 calculates the temporal merge motion vector MergeMvL0 in the first prediction direction using the motion vector mvL0_Col in the first prediction direction of the co-located block (S304). In addition, the temporal merge motion vector calculation unit 122 calculates the temporal merge motion vector MergeMvL1 in the second prediction direction using the motion vector mvL1_Col in the second prediction direction of the co-located block (S305).
- When the co-located block does not have both motion vectors (No in S302), the temporal merge motion vector calculation unit 122 determines whether the co-located block has a forward reference motion vector (S307).
- When the co-located block has a forward reference motion vector (Yes in S307), the temporal merge motion vector calculation unit 122 copies the value of the reference picture index RefIdxL0_Col in the first prediction direction of the co-located block to the reference picture index RefIdxL0 in the first prediction direction of the encoding target block. Also, the temporal merge motion vector calculation unit 122 sets −1 to the reference picture index RefIdxL1 in the second prediction direction of the encoding target block. The reference picture index RefIdxL1 being −1 indicates that the second prediction direction cannot be used. That is, the temporal merge motion vector calculation unit 122 determines the prediction direction to be unidirectional prediction (S308).
- the temporal merge motion vector calculation unit 122 calculates the temporal merge motion vector MergeMvL0 in the first prediction direction using the motion vector mvL0_Col in the first prediction direction of the co-located block (S309).
- When the co-located block does not have a forward reference motion vector (No in S307), the temporal merge motion vector calculation unit 122 determines whether the co-located block has a backward reference motion vector (S310).
- When the co-located block has a backward reference motion vector (Yes in S310), the temporal merge motion vector calculation unit 122 sets −1 to the reference picture index RefIdxL0 in the first prediction direction of the encoding target block. A reference picture index RefIdxL0 of −1 indicates that the first prediction direction cannot be used. That is, the temporal merge motion vector calculation unit 122 determines the prediction direction to be unidirectional prediction.
- the temporal merge motion vector calculation unit 122 copies the value of the reference picture index RefIdxL1_Col in the second prediction direction of the co-located block to the reference picture index RefIdxL1 in the second prediction direction of the current block (S311).
- the temporal merge motion vector calculation unit 122 calculates the temporal merge motion vector MergeMvL1 in the second prediction direction using the motion vector mvL1_Col in the second prediction direction of the co-located block (S312).
- When the co-located block has neither a forward reference motion vector nor a backward reference motion vector (No in S310), the temporal merge motion vector calculation unit 122 determines not to add the co-located merge block to the merge block candidates (S313).
- After deriving the temporal merge motion vector(s), the temporal merge motion vector calculation unit 122 adds the co-located merge block to the merge block candidates (S306).
- the co-located merge block has the temporal merge motion vector MergeMvL0 in the first prediction direction with respect to the reference picture index RefIdxL0 in the first prediction direction. Further, the co-located merge block has a temporal merge motion vector MergeMvL1 in the second prediction direction with respect to the reference picture index RefIdxL1 in the second prediction direction.
- In the present embodiment, a reference picture index of −1 indicates that the corresponding prediction direction cannot be used, that is, that the prediction direction is unidirectional prediction. However, the format is not necessarily limited to this, and any format may be used to indicate that the direction cannot be used.
- Also, in the present embodiment, the temporal merge motion vector calculation unit 122 determines whether or not the co-located block has a forward reference motion vector, and then determines whether or not the co-located block has a backward reference motion vector. However, the order is not limited to this; for example, the temporal merge motion vector calculation unit 122 may first determine whether the co-located block has a backward reference motion vector, and then determine whether it has a forward reference motion vector.
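- The branching of S302 to S313 can be summarized in the following sketch; the motion-data container is a hypothetical simplification, and `scale` stands for the scaling of Equations 2 and 3 shown next.

```python
from collections import namedtuple

# Hypothetical container for the co-located block's motion data.
ColMotion = namedtuple("ColMotion", "mv_l0 mv_l1 ref_idx_l0 ref_idx_l1")

def derive_colocated_merge(col, scale):
    """Sketch of S302-S313. `scale` maps a co-located motion vector to a
    temporal merge motion vector (Equations 2 and 3). Returns the merge data
    (RefIdxL0, MergeMvL0, RefIdxL1, MergeMvL1), or None if no candidate."""
    if col.mv_l0 is not None and col.mv_l1 is not None:      # Yes in S302
        return (col.ref_idx_l0, scale(col.mv_l0),            # S303-S305
                col.ref_idx_l1, scale(col.mv_l1))
    if col.mv_l0 is not None:                                # Yes in S307
        return (col.ref_idx_l0, scale(col.mv_l0), -1, None)  # S308-S309
    if col.mv_l1 is not None:                                # Yes in S310
        return (-1, None, col.ref_idx_l1, scale(col.mv_l1))  # S311-S312
    return None                                              # S313: no candidate

# Forward-only co-located block; identity scaling just for demonstration.
print(derive_colocated_merge(ColMotion((8, -4), None, 0, -1), scale=lambda mv: mv))
# -> (0, (8, -4), -1, None): a unidirectional co-located merge block
```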
- the temporal merge motion vector calculation unit 122 uses the forward reference motion vector mvL0_Col of the co-located block to derive the temporal merge motion vector MergeMvL0 of the encoding target block using Equation 2 below.
- MergeMvL0 = mvL0_Col × (curPOC − POC1(refIdxL0)) / (colPOC − POC2(refIdxL0_Col)) (Equation 2)
- Here, curPOC represents the display order of the encoding target picture, and colPOC represents the display order of colPic.
- POC1 (X) represents the display order of the reference picture indicated by the reference picture index X in the reference picture list of the encoding target picture.
- POC2 (X) represents the display order of the reference picture indicated by the reference picture index X in the reference picture list of colPic.
- In this example, curPOC − POC1(refIdxL0) indicates the time difference between the display times of picture B2 and picture B0, and colPOC − POC2(refIdxL0_Col) indicates the time difference between the display times of picture B4 and picture B0.
- the temporal merge motion vector calculation unit 122 derives the temporal merge motion vector MergeMvL1 of the encoding target block using the backward reference motion vector mvL1_Col of the co-located block according to the following Expression 3.
- MergeMvL1 = mvL1_Col × (curPOC − POC1(refIdxL1)) / (colPOC − POC2(refIdxL1_Col)) (Equation 3)
- In this example, curPOC − POC1(refIdxL1) indicates the time difference between the display times of pictures B2 and B4, and colPOC − POC2(refIdxL1_Col) indicates the time difference between the display times of pictures B4 and B8.
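- A minimal sketch of the scaling of Equations 2 and 3 follows, in floating point for readability (a real codec would typically use fixed-point integer arithmetic); the POC values in the usage example assume that the picture names above reflect display orders.

```python
def scale_colocated_mv(mv_col, cur_poc, col_poc, poc_ref, poc_ref_col):
    """MergeMv = mvCol x (curPOC - POC1(refIdx)) / (colPOC - POC2(refIdxCol)).
    mv_col is an (x, y) motion vector; the other arguments are display orders."""
    denom = col_poc - poc_ref_col
    if denom == 0:            # guard against a degenerate display-order distance
        return mv_col
    ratio = (cur_poc - poc_ref) / denom
    return (mv_col[0] * ratio, mv_col[1] * ratio)

# Equation 2 with the example above: curPOC = 2 (B2), colPOC = 4 (B4),
# POC1(refIdxL0) = 0 (B0), POC2(refIdxL0_Col) = 0 (B0); mvL0_Col is assumed.
print(scale_colocated_mv((8, -4), cur_poc=2, col_poc=4, poc_ref=0, poc_ref_col=0))
# -> (4.0, -2.0): the vector is halved, matching the 2/4 display-order ratio
```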
- FIGS. 9A and 9B show a method of deriving a temporal merge motion vector when the co-located block is a forward reference block and has a forward reference motion vector and a backward reference motion vector.
- In this case, the temporal merge motion vector calculation unit 122 uses the forward reference motion vector mvL0_Col of the co-located block to derive the temporal merge motion vector MergeMvL0 of the encoding target block according to the following Equation 4, which has the same form as Equation 2.
- MergeMvL0 = mvL0_Col × (curPOC − POC1(refIdxL0)) / (colPOC − POC2(refIdxL0_Col)) (Equation 4)
- the temporal merge motion vector calculation unit 122 derives the temporal merge motion vector MergeMvL1 of the encoding target block using the backward reference motion vector mvL1_Col of the co-located block according to the following Equation 5.
- MergeMvL1 = mvL1_Col × (curPOC − POC1(refIdxL1)) / (colPOC − POC2(refIdxL1_Col)) (Equation 5)
- Here, curPOC − POC1(refIdxL1) indicates the time difference between the display times of pictures B6 and B8, and colPOC − POC2(refIdxL1_Col) indicates the time difference between the display times of pictures B4 and B8.
- As described above, the image coding apparatus according to the present embodiment uses, as merge block candidates, not only adjacent blocks in the coding target picture but also the coding result information of other reference pictures different from the coding target picture. Thereby, the encoding efficiency can be improved.
- the image encoding apparatus uses a co-located block of the encoding target block as a merge block candidate.
- the image encoding apparatus copies the value of the reference picture index of the co-located block as it is to the reference picture index of the encoding target block.
- the image encoding apparatus uses the motion vector of the co-located block as the motion vector of the encoding target block. Further, the motion vector of the co-located block is appropriately scaled according to the positional relationship among the current picture, the reference picture, colPic, and the picture referenced by colPic.
- the image encoding apparatus can generate an optimal merge block candidate for the encoding target block.
- the second embodiment is different from the first embodiment in the method for determining the RefIdx of the encoding target block (S303, S308, and S311 in FIG. 7). Others are the same as those in the first embodiment, and thus the description thereof is omitted.
- FIG. 10 is a detailed processing flow of RefIdx calculation of the encoding target block in the second embodiment.
- the temporal merge motion vector calculation unit 122 obtains the reference picture display order POC2 (RefIdxLX_Col) indicated by RefIdxLX_Col of the co-located block using the reference picture list of colPic (S401).
- the temporal merge motion vector calculation unit 122 determines whether or not a reference picture in the display order POC2 (RefIdxLX_Col) is included in the reference picture list of the encoding target picture (S402).
- When the reference picture in display order POC2(RefIdxLX_Col) is included in the reference picture list of the encoding target picture (Yes in S402), the temporal merge motion vector calculation unit 122 obtains the reference picture index of that reference picture using the reference picture list of the encoding target picture, and sets the obtained reference picture index to RefIdxLX of the encoding target block (S403).
- When the reference picture in display order POC2(RefIdxLX_Col) is not included in the reference picture list of the encoding target picture (No in S402), the temporal merge motion vector calculation unit 122 sets −1 to RefIdxLX of the encoding target block (S404). RefIdxLX being −1 indicates that the prediction direction cannot be used, that is, that the prediction direction in the merge mode is unidirectional prediction.
- In the present embodiment, a reference picture index of −1 indicates that the direction cannot be used, that is, that the prediction direction is unidirectional prediction. However, this is not necessarily the case, and any format may be used to indicate that the direction cannot be used.
- Also, in the present embodiment, when the reference picture is not included in the reference picture list, the temporal merge motion vector calculation unit 122 sets the direction so that it cannot be used. However, the present invention is not necessarily limited thereto. For example, the temporal merge motion vector calculation unit 122 may determine the maximum value of the reference picture index usable in the reference picture list of the encoding target picture as the reference picture index of the encoding target block, as sketched below.
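- The FIG. 10 flow, including the maximum-index fallback just mentioned, can be sketched as follows; modeling reference picture lists as plain lists of display orders is an assumption made for illustration.

```python
def remap_ref_idx(ref_idx_col, col_ref_list, cur_ref_list, use_max_fallback=False):
    """Sketch of FIG. 10 (S401-S404). Lists hold display orders (POC values),
    so the list position plays the role of the reference picture index."""
    poc = col_ref_list[ref_idx_col]    # S401: display order of col's reference
    if poc in cur_ref_list:            # S402: present in the current list?
        return cur_ref_list.index(poc) # S403: reuse that reference index
    if use_max_fallback:               # variant: largest assignable index
        return len(cur_ref_list) - 1
    return -1                          # S404: mark the direction unusable

# The co-located picture's reference index 1 points at POC 8, which the
# current picture's list [0, 4] does not contain (hypothetical data).
print(remap_ref_idx(1, col_ref_list=[0, 8], cur_ref_list=[0, 4]))  # -> -1
print(remap_ref_idx(1, [0, 8], [0, 4], use_max_fallback=True))     # -> 1
```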
- FIGS. 11A and 11B show a method of deriving a temporal merge motion vector when the co-located block is a backward reference block and has a forward reference motion vector and a backward reference motion vector.
- the temporal merge motion vector calculation unit 122 uses the forward reference motion vector mvL0_Col of the co-located block to derive the temporal merge motion vector MergeMvL0 of the encoding target block using the following Expression 6.
- MergeMvL0 = mvL0_Col × (curPOC − POC1(refIdxL0)) / (colPOC − POC2(refIdxL0_Col)) (Equation 6)
- Here, refIdxL0 represents the reference picture index in the first prediction direction of the current block obtained in the processing flow of FIG. 10.
- (curPOC-POC1 (refIdxL0)) indicates time difference information in the display times of the picture B2 and the picture B0.
- (colPOC-POC2 (refIdxL0_Col)) indicates time difference information in the display time of the picture B4 and the picture B0.
- the temporal merge motion vector calculation unit 122 derives the temporal merge motion vector MergeMvL1 of the encoding target block using the backward reference motion vector mvL1_Col of the co-located block according to the following Expression 7.
- MergeMvL1 = mvL1_Col × (curPOC − POC1(refIdxL1)) / (colPOC − POC2(refIdxL1_Col)) (Equation 7)
- Here, refIdxL1 represents the reference picture index in the second prediction direction of the current block obtained in the processing flow of FIG. 10.
- (curPOC-POC1 (refIdxL1)) indicates time difference information in the display times of the picture B2 and the picture B8.
- (colPOC-POC2 (refIdxL1_Col)) indicates time difference information in the display time of the picture B4 and the picture B8.
- In this case, the temporal merge motion vector calculation unit 122 adds, for example, a unidirectional co-located merge block to the merge block candidates.
- the temporal merge motion vector calculation unit 122 derives the temporal merge motion vector MergeMvL0 of the encoding target block using the forward reference motion vector mvL0_Col of the co-located block according to the following Expression 8.
- MergeMvL0 = mvL0_Col × (curPOC − POC1(refIdxL0)) / (colPOC − POC2(refIdxL0_Col)) (Equation 8)
- Here, refIdxL0 represents the reference picture index in the first prediction direction of the current block obtained in the processing flow of FIG. 10.
- (curPOC-POC1 (refIdxL0)) indicates time difference information in the display times of the picture B6 and the picture B0.
- (colPOC-POC2 (refIdxL0_Col)) indicates time difference information in the display time of the picture B4 and the picture B0.
- In this case, the temporal merge motion vector calculation unit 122 adds, for example, a unidirectional co-located merge block to the merge block candidates.
- the temporal merge motion vector calculation unit 122 derives the temporal merge motion vector MergeMvL1 of the encoding target block using the backward reference motion vector mvL1_Col of the co-located block according to the following Expression 9.
- MergeMvL1 = mvL1_Col × (curPOC − POC1(refIdxL1)) / (colPOC − POC2(refIdxL1_Col)) (Equation 9)
- Here, refIdxL1 represents the reference picture index in the second prediction direction of the current block obtained in the processing flow of FIG. 10.
- (curPOC-POC1 (refIdxL1)) indicates time difference information in the display times of the picture B6 and the picture B8.
- (colPOC-POC2 (refIdxL1_Col)) indicates time difference information in the display time of the picture B4 and the picture B8.
- As described above, the image coding apparatus according to the present embodiment uses, as merge block candidates, not only adjacent blocks in the coding target picture but also the coding result information of other reference pictures different from the coding target picture. Thereby, the encoding efficiency can be improved.
- the image encoding apparatus uses a co-located block of the encoding target block as a merge block candidate.
- Furthermore, the image encoding apparatus of the present embodiment converts the value of the reference picture index of the co-located block into a reference picture index in the reference picture list of the encoding target picture, based on the reference picture indicated by that value.
- Thereby, the image coding apparatus can refer to the reference picture of the co-located block as the reference picture of the coding target block, which improves the accuracy of the temporal merge motion vector of the co-located merge block and therefore the encoding efficiency.
- When the reference picture of the co-located block is not included in the reference picture list of the encoding target picture, the image encoding apparatus does not use that prediction direction. Thereby, the image coding apparatus can generate a co-located merge block that can be appropriately merged.
- For example, the image encoding apparatus may add the co-located merge block obtained based on Embodiment 1 to the merge block candidates as a first co-located merge block, and may add the co-located merge block obtained based on Embodiment 2 to the merge block candidates as a second co-located merge block.
- The image encoding apparatus may then select the prediction mode that minimizes the prediction error in the flow of FIG. described above. Further, as the method of assigning merge block indexes to the co-located merge blocks of FIG. 3B, the image encoding apparatus may assign 3 to the first co-located merge block and 4 to the second co-located merge block. Thereby, the image encoding apparatus can select a merge block for encoding the current block more appropriately. A concrete (hypothetical) illustration of this bookkeeping follows.
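- The sketch below shows one way the candidate list and index assignment just described might be managed; the cost function and all names are our assumptions, not the patent's normative procedure.

```python
def build_merge_candidates(spatial_candidates, first_colocated, second_colocated):
    # With three spatial candidates at indexes 0..2, the first and second
    # co-located merge blocks land at indexes 3 and 4, matching the
    # assignment described above.
    return list(spatial_candidates) + [first_colocated, second_colocated]

def select_merge_index(candidates, cost_of):
    # Pick the candidate whose prediction error (cost) is smallest.
    return min(range(len(candidates)), key=lambda i: cost_of(candidates[i]))

# Toy usage: candidates are (mv, refIdx) pairs; cost here is just |mv|.
cands = build_merge_candidates([((1, 0), 0), ((0, 2), 0), ((3, 3), 1)],
                               ((1, 1), 0), ((0, 0), 0))
best = select_merge_index(cands, lambda c: abs(c[0][0]) + abs(c[0][1]))
assert best == 4  # the second co-located merge block, index 4
```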
- FIG. 13 is a block diagram showing a configuration of the image decoding apparatus according to the present embodiment.
- a block included in a picture located in front of the decoding target picture in display order is referred to as a forward reference block.
- a block included in a picture located behind the decoding target picture in display order is referred to as a backward reference block.
- The image decoding apparatus includes a variable length decoding unit 205, an inverse quantization unit 206, an inverse orthogonal transform unit 207, an addition unit 208, a block memory 209, a frame memory 211, an intra prediction unit 210, an inter prediction unit 212, a switch unit 213, an inter prediction control unit 221, a temporal merge motion vector calculation unit 222, and a colPic memory 225.
- The variable length decoding unit 205 performs variable length decoding on the input bit stream, and generates picture type information, a merge flag, a merge block index, a co-located reference direction flag, and a variable-length-decoded bit stream.
- the inverse quantization unit 206 performs inverse quantization processing on the bit stream that has been subjected to variable length decoding processing.
- the inverse orthogonal transform unit 207 transforms the bit stream that has been subjected to the inverse quantization process from the frequency domain to the image domain to obtain prediction error image data.
- the block memory 209 is a memory for storing an image sequence generated by adding the prediction error image data and the prediction image data in units of blocks.
- the frame memory 211 is a memory for storing an image sequence in units of frames.
- The intra prediction unit 210 generates prediction image data of the decoding target block by executing intra prediction using the block-unit image sequence stored in the block memory 209.
- The inter prediction unit 212 generates prediction image data of the decoding target block by executing inter prediction using the frame-unit image sequence stored in the frame memory 211.
- the temporal merge motion vector calculation unit 222 uses the colPic information such as the motion vector of the co-located block stored in the colPic memory 225 to derive merge mode merge block candidates (co-located merge blocks). Further, the temporal merge motion vector calculation unit 222 assigns the value of the merge block index corresponding to the co-located merge block.
- the temporal merge motion vector calculation unit 222 sends the co-located merge block and the merge block index to the inter prediction control unit 221.
- the temporal merge motion vector calculation unit 222 stops deriving the co-located merge block.
- the temporal merge motion vector calculation unit 222 regards the motion vector as 0 and derives a co-located merge block.
- If the merge flag is 0, the inter prediction control unit 221 decodes the motion detection mode information and generates a prediction image. If the merge flag is 1, the inter prediction control unit 221 determines the motion vector and reference picture index to be used for inter prediction from among the plurality of merge block candidates, based on the decoded merge block index, and generates a prediction image. In addition, the inter prediction control unit 221 transfers colPic information including the motion vector of the decoding target block to the colPic memory 225.
- a decoded image sequence is generated by adding the decoded predicted image data and prediction error image data.
- FIG. 14 shows an outline of the processing flow of the image decoding method according to the present embodiment.
- the variable length decoding unit 205 decodes the co-located reference flag and the merge flag (S501).
- the temporal merge motion vector calculation unit 222 reads colPic information such as a motion vector from the colPic memory 225 based on the co-located reference flag. Then, the temporal merge motion vector calculation unit 222 generates a co-located merge block by the same method as in FIG. 7, and adds it to the merge block candidate (S503).
- If the merge flag is 1 (Yes in S502), the inter prediction control unit 221 determines the merge block from which the motion vector and the reference picture index are copied, according to the decoded merge block index, and generates a prediction image using them (S504).
- If the merge flag is 0 (No in S502), the inter prediction unit 212 generates a prediction image using the information of the motion detection mode (S505).
- the inter prediction control unit 221 transfers the colPic information including the motion vector used for the inter prediction to the colPic memory 225 and stores it (S506).
- the colPic memory 225 stores the motion vector of the reference picture, the index value of the reference picture, and the prediction direction for calculating the temporal direct mode motion vector for the decoding target block.
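- The S501 through S506 decode flow can be summarized by the following sketch; the names and data shapes are assumptions for illustration only.

```python
def decode_inter_block(merge_flag, merge_index, merge_candidates,
                       motion_detection_info, colpic_memory):
    """Return the (motion vector, reference picture index) used for prediction.

    merge_candidates      : list of (mv, refIdx) pairs including the
                            co-located merge block (built in S503)
    motion_detection_info : (mv, refIdx) decoded explicitly when merge_flag is 0
    """
    if merge_flag == 1:                   # S502 Yes -> S504: copy from merge block
        mv, ref_idx = merge_candidates[merge_index]
    else:                                 # S502 No  -> S505: motion detection mode
        mv, ref_idx = motion_detection_info
    colpic_memory.append((mv, ref_idx))   # S506: keep colPic info for later blocks
    return mv, ref_idx

# Example: merge mode with merge block index 1.
colpic = []
mv, ref = decode_inter_block(1, 1, [((0, 0), 0), ((4, -2), 1)], None, colpic)
assert (mv, ref) == ((4, -2), 1)
```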
- In this way, the image decoding apparatus uses, as merge block candidates, not only neighboring blocks in the decoding target picture but also the decoding-result information of reference pictures other than the decoding target picture. As a result, the image decoding apparatus can appropriately decode a bit stream with high coding efficiency.
- the image decoding apparatus uses the co-located block of the decoding target block as a merge block candidate.
- Specifically, the image decoding apparatus directly copies the value of the reference picture index of the co-located block to the reference picture index of the decoding target block, and uses the motion vector of the co-located block as the motion vector of the decoding target block. The motion vector of the co-located block is scaled appropriately according to the positional relationship among the decoding target picture, its reference picture, colPic, and the picture referenced by colPic.
- the image decoding apparatus can appropriately decode the bit stream obtained by generating the optimum merge block candidate.
- The image decoding apparatus according to Embodiment 3 may be an image decoding apparatus corresponding to the image encoding apparatus according to Embodiment 1, or an image decoding apparatus corresponding to the image encoding apparatus according to Embodiment 2. It may also be an image decoding apparatus corresponding to an image encoding apparatus combining Embodiment 1 and Embodiment 2.
- the present invention is not limited to the embodiments. Embodiments obtained by subjecting the embodiments to modifications conceivable by those skilled in the art and other embodiments realized by arbitrarily combining the components in the embodiments are also included in the present invention.
- another processing unit may execute a process executed by a specific processing unit.
- the order in which the processes are executed may be changed, or a plurality of processes may be executed in parallel.
- the image encoding device and the image decoding device according to the present invention may be realized as an image encoding / decoding device realized by combining arbitrary constituent elements included therein.
- Furthermore, the present invention can be realized not only as an image encoding device and an image decoding device, but also as methods having, as steps, the processing performed by the processing units constituting the image encoding device and the image decoding device.
- the present invention can be realized as a program for causing a computer to execute the steps included in these methods.
- the present invention can be realized as a non-transitory computer-readable recording medium such as a CD-ROM in which the program is recorded.
- the plurality of components included in the image encoding device and the image decoding device may be realized as an LSI (Large Scale Integration) that is an integrated circuit. These components may be individually made into one chip, or may be made into one chip so as to include a part or all of them. Although referred to here as an LSI, it may be referred to as an IC (Integrated Circuit), a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
- An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
- the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
- the system has an image encoding / decoding device including an image encoding device using an image encoding method and an image decoding device using an image decoding method.
- Other configurations in the system can be appropriately changed according to circumstances.
- FIG. 15 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
- a communication service providing area is divided into desired sizes, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in each cell.
- In this content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected via the Internet ex101, an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
- Each device may be connected directly to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations.
- the devices may be directly connected to each other via short-range wireless or the like.
- The camera ex113 is a device capable of shooting moving images, such as a digital video camera, and the camera ex116 is a device capable of shooting still images and moving images, such as a digital still camera.
- The mobile phone ex114 may be a mobile phone of the GSM (registered trademark) (Global System for Mobile Communications) system, the CDMA (Code Division Multiple Access) system, the W-CDMA (Wideband-Code Division Multiple Access) system, the LTE (Long Term Evolution) system, or the HSPA (High Speed Packet Access) system, or a PHS (Personal Handyphone System) terminal, and any of these may be used.
- the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
- For live distribution, content (for example, video of a live music performance) shot by the user with the camera ex113 is encoded as described in each of the above embodiments (that is, the camera functions as the image encoding device of the present invention) and transmitted to the streaming server ex103.
- Meanwhile, the streaming server ex103 stream-distributes the transmitted content data to clients upon request. The clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, the game machine ex115, and the like, each capable of decoding the encoded data.
- Each device that receives the distributed data decodes and reproduces the received data (that is, it functions as the image decoding device of the present invention).
- The encoding of the shot data may be performed by the camera ex113, by the streaming server ex103 that performs the data transmission processing, or shared between them.
- Similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them.
- still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
- the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
- These encoding and decoding processes are generally performed in the LSI ex500 included in the computer ex111 or in each device.
- the LSI ex500 may be configured as a single chip or a plurality of chips.
- Software for encoding and decoding moving images may be incorporated into some kind of recording medium (a CD-ROM, a flexible disk, a hard disk, or the like) readable by the computer ex111 or other devices, and the encoding and decoding may be performed using that software.
- Furthermore, when the mobile phone ex114 is equipped with a camera, the moving image data acquired by the camera may be transmitted. The moving image data at this time is data encoded by the LSI ex500 included in the mobile phone ex114.
- the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
- the encoded data can be received and reproduced by the client.
- That is, the information transmitted by the user can be received, decoded, and reproduced by clients in real time, so that even a user without special rights or facilities can realize personal broadcasting.
- At least the moving picture encoding device (image encoding device) or the moving picture decoding device (image decoding device) of each of the above embodiments can also be incorporated into the digital broadcasting system ex200. Specifically, in the broadcast station ex201, multiplexed data obtained by multiplexing music data or the like onto video data is transmitted via radio waves to a communication or broadcasting satellite ex202.
- This video data is data encoded by the moving image encoding method described in the above embodiments (that is, data encoded by the image encoding apparatus of the present invention).
- the broadcasting satellite ex202 transmits a radio wave for broadcasting, and this radio wave is received by a home antenna ex204 capable of receiving satellite broadcasting.
- the received multiplexed data is decoded and reproduced by an apparatus such as the television (receiver) ex300 or the set top box (STB) ex217 (that is, functions as the image decoding apparatus of the present invention).
- In addition, the moving picture decoding apparatus or moving picture encoding apparatus described in each of the above embodiments can be mounted in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes it. In this case, the reproduced video signal is displayed on the monitor ex219, and the video signal can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded.
- a moving picture decoding apparatus may be mounted in a set-top box ex217 connected to a cable ex203 for cable television or an antenna ex204 for satellite / terrestrial broadcasting and displayed on the monitor ex219 of the television.
- the moving picture decoding apparatus may be incorporated in the television instead of the set top box.
- FIG. 17 is a diagram illustrating a television (receiver) ex300 that uses the video decoding method and the video encoding method described in each of the above embodiments.
- The television ex300 obtains or outputs, via the antenna ex204 or the cable ex203 that receives broadcasts, multiplexed data in which audio data is multiplexed with video data. It includes a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates multiplexed data to be transmitted to the outside, and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by the signal processing unit ex306.
- The television ex300 also includes a signal processing unit ex306 having an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data, respectively, or encode the respective pieces of information (and that function as the image encoding device or the image decoding device of the present invention), and an output unit ex309 having a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that displays the decoded video signal.
- In addition, the television ex300 includes an interface unit ex317 having an operation input unit ex312 that receives user operations, a control unit ex310 that performs overall control of the respective units, and a power supply circuit unit ex311 that supplies power to the respective units.
- The interface unit ex317 may include, besides the operation input unit ex312, a bridge unit ex313 connected to an external device such as the reader/recorder ex218, a slot for a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
- The recording medium ex216 is capable of electrically recording information by means of a nonvolatile/volatile semiconductor memory element stored in it.
- Each part of the television ex300 is connected to each other via a synchronous bus.
- the television ex300 receives a user operation from the remote controller ex220 or the like, and demultiplexes the multiplexed data demodulated by the modulation / demodulation unit ex302 by the multiplexing / demultiplexing unit ex303 based on the control of the control unit ex310 having a CPU or the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in each of the above embodiments.
- the decoded audio signal and video signal are output from the output unit ex309 to the outside. At the time of output, these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization. Also, the television ex300 may read multiplexed data from recording media ex215 and ex216 such as a magnetic / optical disk and an SD card, not from broadcasting. Next, a configuration in which the television ex300 encodes an audio signal or a video signal and transmits the signal to the outside or to a recording medium will be described.
- In the television ex300, based on control of the control unit ex310, user operations are received from the remote controller ex220 or the like, the audio signal is encoded by the audio signal processing unit ex304, and the video signal is encoded by the video signal processing unit ex305 using the encoding method described in each of the above embodiments.
- the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320, ex321, etc. so that the audio signal and the video signal are synchronized.
- A plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Furthermore, beyond the illustrated example, data may be stored in a buffer serving as a buffering area for avoiding overflow and underflow of the system, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303.
- The television ex300 may also have a configuration for receiving AV input from a microphone and a camera, and may perform encoding processing on the data acquired from them.
- Although the television ex300 has been described here as being capable of the above encoding, multiplexing, and external output, it may instead be a configuration that cannot perform these processes and is capable only of the above reception, decoding, and external output.
- When multiplexed data is read from or written to a recording medium by the reader/recorder ex218, the decoding or encoding may be performed by either the television ex300 or the reader/recorder ex218, or may be shared between them.
- FIG. 18 shows a configuration of the information reproducing / recording unit ex400 when data is read from or written to an optical disk.
- the information reproducing / recording unit ex400 includes elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below.
- the optical head ex401 irradiates a laser spot on the recording surface of the recording medium ex215 that is an optical disk to write information, and detects reflected light from the recording surface of the recording medium ex215 to read the information.
- the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
- The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal components recorded on the recording medium ex215, and reproduces the necessary information.
- the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
- the disk motor ex405 rotates the recording medium ex215.
- the servo controller ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
- the system control unit ex407 controls the entire information reproduction / recording unit ex400.
- The above reading and writing processes are realized by the system control unit ex407 using the various pieces of information held in the buffer ex404, generating and adding new information as necessary, and recording and reproducing information through the optical head ex401 while causing the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 to operate in a coordinated manner.
- the system control unit ex407 is composed of, for example, a microprocessor, and executes these processes by executing a read / write program.
- Although the optical head ex401 has been described above as irradiating a laser spot, it may be configured to perform higher-density recording using near-field light.
- FIG. 19 shows a schematic diagram of a recording medium ex215 that is an optical disk.
- Guide grooves are formed in a spiral shape on the recording surface of the recording medium ex215, and address information indicating the absolute position on the disc is recorded in advance on an information track ex230 by means of changes in the shape of the grooves.
- This address information includes information for identifying the positions of recording blocks ex231, which are the units in which data is recorded; a recording or reproducing apparatus can identify a recording block by reproducing the information track ex230 and reading the address information.
- the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
- The area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, arranged on the inner and outer circumferences of the data recording area ex233, are used for specific purposes other than recording user data.
- the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or multiplexed data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
- Although an optical disk such as a single-layer DVD or BD has been described above as an example, the present invention is not limited to these; an optical disk that has a multilayer structure and can record beyond the surface layer may be used.
- Furthermore, an optical disk having a structure for multi-dimensional recording and reproduction, such as recording information at the same position on the disc using light of different wavelength colors, or recording different layers of information from various angles, may be used.
- the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
- the configuration of the car navigation ex211 may be, for example, a configuration in which a GPS receiving unit is added in the configuration illustrated in FIG. 17, and the same may be considered for the computer ex111, the mobile phone ex114, and the like.
- FIG. 20A is a diagram illustrating the mobile phone ex114 using the video decoding method and the video encoding method described in the above embodiment.
- The mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of shooting video and still images, and a display unit ex358, such as a liquid crystal display, for displaying data obtained by decoding the video shot by the camera unit ex365, the video received by the antenna ex350, and the like.
- The mobile phone ex114 further includes a main body having an operation key unit ex366; an audio output unit ex357, such as a speaker, for outputting audio; an audio input unit ex356, such as a microphone, for inputting audio; a memory unit ex367 for storing encoded or decoded data of shot video, still images, recorded audio, received video, still images, e-mails, and the like; and a slot unit ex364 serving as an interface with a recording medium that likewise stores data.
- In the mobile phone ex114, a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367 are connected to one another via a bus ex370, together with a main control unit ex360 that performs overall control of the respective units of the main body including the display unit ex358 and the operation key unit ex366.
- The power supply circuit unit ex361 activates the mobile phone ex114 into an operable state by supplying power from a battery pack to each unit.
- the cellular phone ex114 converts the audio signal collected by the audio input unit ex356 in the voice call mode into a digital audio signal by the audio signal processing unit ex354 based on the control of the main control unit ex360 having a CPU, a ROM, a RAM, and the like. Then, this is subjected to spectrum spread processing by the modulation / demodulation unit ex352, digital-analog conversion processing and frequency conversion processing are performed by the transmission / reception unit ex351, and then transmitted via the antenna ex350.
- the mobile phone ex114 also amplifies the received data received via the antenna ex350 in the voice call mode, performs frequency conversion processing and analog-digital conversion processing, performs spectrum despreading processing by the modulation / demodulation unit ex352, and performs voice signal processing unit After being converted into an analog audio signal by ex354, this is output from the audio output unit ex357.
- the text data of the e-mail input by operating the operation key unit ex366 of the main unit is sent to the main control unit ex360 via the operation input control unit ex362.
- the main control unit ex360 performs spread spectrum processing on the text data in the modulation / demodulation unit ex352, performs digital analog conversion processing and frequency conversion processing in the transmission / reception unit ex351, and then transmits the text data to the base station ex110 via the antenna ex350.
- almost the reverse process is performed on the received data and output to the display unit ex358.
- Furthermore, the video signal processing unit ex355 compression-encodes the video signal supplied from the camera unit ex365 using the moving picture encoding method described in each of the above embodiments (that is, it functions as the image encoding device of the present invention), and sends the encoded video data to the multiplexing/demultiplexing unit ex353.
- At the same time, the audio signal processing unit ex354 encodes the audio signal picked up by the audio input unit ex356 while the camera unit ex365 is shooting video, still images, and the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
- The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 by a predetermined method, and the resulting multiplexed data is subjected to spread spectrum processing in the modulation/demodulation unit (modulation/demodulation circuit unit) ex352, to digital-to-analog conversion and frequency conversion in the transmission/reception unit ex351, and is then transmitted via the antenna ex350.
- To decode received multiplexed data, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bit stream and an audio data bit stream, supplies the encoded video data to the video signal processing unit ex355 via the synchronous bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354.
- the video signal processing unit ex355 decodes the video signal by decoding using the video decoding method corresponding to the video encoding method shown in each of the above embodiments (that is, functions as the image decoding device of the present invention).
- Video and still images included in the moving image file linked to the home page are displayed on the display unit ex358 via the LCD control unit ex359.
- At the same time, the audio signal processing unit ex354 decodes the audio signal, and the audio is output from the audio output unit ex357.
- In addition to a transmission/reception terminal having both an encoder and a decoder, a terminal such as the mobile phone ex114 may be implemented as a transmission terminal having only an encoder or as a receiving terminal having only a decoder.
- In the digital broadcasting system ex200, it was described that multiplexed data in which music data or the like is multiplexed with video data is received and transmitted; however, the data may be data in which character data related to the video is multiplexed in addition to audio data, or may be video data itself rather than multiplexed data.
- As described above, the moving picture encoding method or the moving picture decoding method described in each of the above embodiments can be used in any of the devices and systems described above, whereby the effects described in each of the above embodiments can be obtained.
- Multiplexed data obtained by multiplexing audio data and the like with video data is configured to include identification information indicating which standard the video data conforms to.
- FIG. 21 is a diagram showing a structure of multiplexed data.
- multiplexed data can be obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
- The video stream represents the main video and sub-video of a movie, the audio stream represents the main audio of the movie and sub-audio to be mixed with the main audio, and the presentation graphics stream represents the subtitles of the movie.
- Here, the main video is the normal video displayed on a screen, and the sub-video is video displayed on a smaller screen within the main video.
- the interactive graphics stream indicates an interactive screen created by arranging GUI components on the screen.
- The video stream is encoded by the moving picture encoding method or apparatus described in each of the above embodiments, or by a moving picture encoding method or apparatus compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- the audio stream is encoded by a method such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
- Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the movie's pictures, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for sub-pictures, and 0x1A00 to 0x1A1F to audio streams used for sub-audio to be mixed with the main audio.
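- Rendered as a small classifier (a sketch mirroring the ranges above; the function name is ours):

```python
def stream_type_for_pid(pid):
    # PID ranges as listed above; anything else is PAT/PMT/PCR or other data.
    if pid == 0x1011:
        return "video (movie)"
    if 0x1100 <= pid <= 0x111F:
        return "audio (main)"
    if 0x1200 <= pid <= 0x121F:
        return "presentation graphics"
    if 0x1400 <= pid <= 0x141F:
        return "interactive graphics"
    if 0x1B00 <= pid <= 0x1B1F:
        return "video (sub-picture)"
    if 0x1A00 <= pid <= 0x1A1F:
        return "audio (sub, mixed with main)"
    return "other (PAT/PMT/PCR/...)"

assert stream_type_for_pid(0x1011) == "video (movie)"
assert stream_type_for_pid(0x1A05) == "audio (sub, mixed with main)"
```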
- FIG. 22 is a diagram schematically showing how multiplexed data is multiplexed.
- a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and converted into TS packets ex237 and ex240.
- the data of the presentation graphics stream ex241 and interactive graphics ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further converted into TS packets ex243 and ex246.
- the multiplexed data ex247 is configured by multiplexing these TS packets into one stream.
- FIG. 23 shows in more detail how the video stream is stored in the PES packet sequence.
- the first row in FIG. 23 shows a video frame sequence of the video stream.
- The second row shows the PES packet sequence.
- The video stream is divided into individual pictures, namely I pictures, B pictures, and P pictures, each of which is a video presentation unit, and these pictures are stored in the payloads of PES packets.
- Each PES packet has a PES header, and a PTS (Presentation Time-Stamp) that is a display time of a picture and a DTS (Decoding Time-Stamp) that is a decoding time of a picture are stored in the PES header.
- FIG. 24 shows the format of TS packets that are finally written in the multiplexed data.
- the TS packet is a 188-byte fixed-length packet composed of a 4-byte TS header having information such as a PID for identifying a stream and a 184-byte TS payload for storing data.
- The PES packets are divided and stored in the TS payloads.
- A 4-byte TP_Extra_Header is attached to each TS packet to form a 192-byte source packet, which is written into the multiplexed data.
- Information such as an ATS (Arrival_Time_Stamp) is described in the TP_Extra_Header.
- ATS indicates the transfer start time of the TS packet to the PID filter of the decoder.
- Source packets are arranged in the multiplexed data as shown in the lower part of FIG. 24, and the numbers incrementing from the head of the multiplexed data are called SPNs (source packet numbers).
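- A minimal sketch of the 188-to-192-byte wrapping described above (the exact TP_Extra_Header bit layout here is our assumption; real headers pack copy-permission bits with a 30-bit ATS):

```python
import struct

def make_source_packet(ts_packet: bytes, ats: int) -> bytes:
    # TS packets are fixed 188-byte units: 4-byte TS header + 184-byte payload.
    assert len(ts_packet) == 188
    # Store the low 30 bits of the ATS in a big-endian 32-bit TP_Extra_Header.
    tp_extra_header = struct.pack(">I", ats & 0x3FFFFFFF)
    return tp_extra_header + ts_packet      # 4 + 188 = 192 bytes

packet = make_source_packet(bytes(188), ats=123_456)
assert len(packet) == 192
```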
- TS packets included in the multiplexed data include PAT (Program Association Table), PMT (Program Map Table), PCR (Program Clock Reference), and the like in addition to each stream such as video / audio / caption.
- PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as 0.
- the PMT has the PID of each stream such as video / audio / subtitles included in the multiplexed data and the attribute information of the stream corresponding to each PID, and has various descriptors related to the multiplexed data.
- the descriptor includes copy control information for instructing permission / non-permission of copying of multiplexed data.
- The PCR carries information on the STC time corresponding to the ATS at which the PCR packet is transferred to the decoder, in order to synchronize the ATC (Arrival Time Clock), which is the time axis of the ATSs, and the STC (System Time Clock), which is the time axis of the PTSs and DTSs.
- FIG. 25 is a diagram for explaining the data structure of the PMT in detail.
- a PMT header describing the length of data included in the PMT is arranged at the head of the PMT.
- a plurality of descriptors related to multiplexed data are arranged.
- the copy control information and the like are described as descriptors.
- a plurality of pieces of stream information regarding each stream included in the multiplexed data are arranged.
- the stream information includes a stream descriptor in which a stream type, a stream PID, and stream attribute information (frame rate, aspect ratio, etc.) are described to identify a compression codec of the stream.
- the multiplexed data is recorded together with the multiplexed data information file.
- the multiplexed data information file is management information of multiplexed data, has a one-to-one correspondence with the multiplexed data, and includes multiplexed data information, stream attribute information, and an entry map.
- As shown in FIG. 26, the multiplexed data information consists of a system rate, a reproduction start time, and a reproduction end time.
- the system rate indicates a maximum transfer rate of multiplexed data to a PID filter of a system target decoder described later.
- the ATS interval included in the multiplexed data is set to be equal to or less than the system rate.
- The reproduction start time is the PTS of the first video frame of the multiplexed data, and the reproduction end time is set by adding the reproduction interval of one frame to the PTS of the last video frame of the multiplexed data.
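- In sketch form (variable names are ours; the 90 kHz PTS clock is the usual MPEG system clock):

```python
def playback_interval(first_pts, last_pts, fps, pts_clock=90_000):
    # Reproduction start time = PTS of the first video frame;
    # reproduction end time  = PTS of the last frame + one frame interval.
    start = first_pts
    end = last_pts + pts_clock // fps
    return start, end

# Ten seconds of 30 fps video whose first frame has PTS 0:
start, end = playback_interval(first_pts=0, last_pts=90_000 * 10 - 3_000, fps=30)
assert (start, end) == (0, 900_000)
```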
- attribute information about each stream included in the multiplexed data is registered for each PID.
- the attribute information has different information for each video stream, audio stream, presentation graphics stream, and interactive graphics stream.
- The video stream attribute information carries information such as what compression codec was used to compress the video stream, the resolution of the individual picture data constituting the video stream, the aspect ratio, and the frame rate.
- The audio stream attribute information carries information such as what compression codec was used to compress the audio stream, how many channels the audio stream includes, which language it supports, and the sampling frequency. These pieces of information are used, for example, to initialize the decoder before playback by the player.
- In the present embodiment, the stream type included in the PMT is used among the multiplexed data. When the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. Specifically, the moving picture encoding method or apparatus described in each of the above embodiments includes a step or unit for setting, in the stream type included in the PMT or in the video stream attribute information, unique information indicating that the video data was generated by that moving picture encoding method or apparatus.
- FIG. 28 shows the steps of the moving picture decoding method according to the present embodiment.
- First, in step exS100, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is acquired from the multiplexed data.
- Next, in step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture encoding method or apparatus described in each of the above embodiments.
- If so, in step exS102, decoding is performed by the moving picture decoding method described in each of the above embodiments.
- If the stream type or the video stream attribute information indicates conformance with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, in step exS103, decoding is performed by a moving picture decoding method compliant with that conventional standard.
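- The exS100 to exS103 branch reduces to a simple dispatch, sketched below; the decoder objects are hypothetical stand-ins.

```python
def choose_decoder(stream_type, new_format_id, new_decoder, legacy_decoder):
    # exS100: stream_type (or video stream attribute information) was read
    #         from the PMT or the multiplexed data information file.
    if stream_type == new_format_id:   # exS101: generated by the new method?
        return new_decoder             # exS102: decode with the embodiments' method
    return legacy_decoder              # exS103: MPEG-2 / MPEG4-AVC / VC-1 path

decoder = choose_decoder("new-codec", "new-codec",
                         new_decoder="embodiment decoder",
                         legacy_decoder="conventional decoder")
assert decoder == "embodiment decoder"
```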
- FIG. 29 shows a configuration of an LSI ex500 that is made into one chip.
- the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and each element is connected via a bus ex510.
- the power supply circuit unit ex505 is activated to an operable state by supplying power to each unit when the power supply is on.
- When performing encoding, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, and the like via the AV I/O ex509, based on the control of the control unit ex501, which includes a CPU ex502, a memory controller ex503, a stream controller ex504, a drive frequency control unit ex512, and the like.
- the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
- Based on the control of the control unit ex501, the accumulated data is divided into portions as appropriate according to the processing amount and processing speed and sent to the signal processing unit ex507, and the signal processing unit ex507 encodes the audio signal and/or the video signal.
- the encoding process of the video signal is the encoding process described in the above embodiments.
- The signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data, depending on the case, and outputs the result from the stream I/O ex506 to the outside.
- The output multiplexed data is transmitted toward the base station ex107 or written onto the recording medium ex215. Note that, when multiplexing, the data should be temporarily stored in the buffer ex508 so that it is synchronized.
- Although the memory ex511 has been described above as being external to the LSI ex500, it may be included inside the LSI ex500.
- the number of buffers ex508 is not limited to one, and a plurality of buffers may be provided.
- the LSI ex500 may be made into one chip or a plurality of chips.
- control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this configuration.
- the signal processing unit ex507 may further include a CPU.
- For example, the CPU ex502 may be configured to further include the signal processing unit ex507 or a part of the signal processing unit ex507, such as an audio signal processing unit. In that case, the control unit ex501 includes the CPU ex502 having the signal processing unit ex507 or a part of it.
- Although the term LSI is used here, it may also be called an IC (Integrated Circuit), a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
- An FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
- FIG. 30 shows a configuration ex800 in the present embodiment.
- the drive frequency switching unit ex803 sets the drive frequency high when the video data is generated by the moving image encoding method or apparatus described in the above embodiments.
- At the same time, the drive frequency switching unit ex803 instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each of the above embodiments, to decode the video data.
- On the other hand, when the video data is compliant with a conventional standard, the drive frequency switching unit ex803 sets the drive frequency lower than when the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments, and instructs the decoding processing unit ex802 compliant with the conventional standard to decode the video data.
- the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 of FIG.
- the decoding processing unit ex801 that executes the moving picture decoding method shown in each of the above embodiments and the decoding processing unit ex802 that complies with the conventional standard correspond to the signal processing unit ex507 in FIG.
- the CPU ex502 identifies which standard the video data conforms to. Then, based on the signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency. Further, based on the signal from the CPU ex502, the signal processing unit ex507 decodes the video data.
- For identifying the video data, for example, the identification information described in Embodiment 5 may be used.
- The identification information is not limited to that described in Embodiment 5; any information that can identify which standard the video data conforms to may be used. For example, when it is possible to identify which standard the video data conforms to based on an external signal that identifies whether the video data is used for a television or for a disk, the identification may be performed based on such an external signal.
- The selection of the drive frequency in the CPU ex502 may be performed based on, for example, a lookup table in which the standards of video data are associated with drive frequencies, as shown in FIG. 32. By storing the lookup table in the buffer ex508 or in an internal memory of the LSI, the CPU ex502 can select the drive frequency by referring to it.
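- A sketch of such a lookup table (the frequencies are invented placeholders, not values from the patent):

```python
DRIVE_FREQ_TABLE_MHZ = {
    "embodiment-codec": 500,   # larger decoding load -> higher drive frequency
    "MPEG-2":           350,
    "MPEG4-AVC":        350,
    "VC-1":             350,
}

def select_drive_frequency(standard, default_mhz=350):
    # The CPU ex502 would consult a table like this, keyed by the identified
    # standard, to pick the drive frequency.
    return DRIVE_FREQ_TABLE_MHZ.get(standard, default_mhz)

assert select_drive_frequency("embodiment-codec") == 500
assert select_drive_frequency("MPEG4-AVC") == 350
```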
- FIG. 31 shows steps for executing the method of the present embodiment.
- the signal processing unit ex507 acquires identification information from the multiplexed data.
- the CPU ex502 identifies whether the video data is generated by the encoding method or apparatus described in each of the above embodiments based on the identification information.
- If the video data was generated by the encoding method or apparatus described in each of the above embodiments, in step exS202 the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a high drive frequency.
- On the other hand, if the identification information indicates video data compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, in step exS203 the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a drive frequency lower than when the video data was generated by the encoding method or apparatus described in each of the above embodiments.
- the power saving effect can be further enhanced by changing the voltage applied to the LSI ex500 or the device including the LSI ex500 in conjunction with the switching of the driving frequency. For example, when the drive frequency is set low, it is conceivable that the voltage applied to the LSI ex500 or the device including the LSI ex500 is set low as compared with the case where the drive frequency is set high.
- The method of setting the drive frequency may be any method that sets the drive frequency high when the processing amount for decoding is large and sets it low when the processing amount for decoding is small; it is not limited to the setting method described above.
- For example, when the processing amount for decoding video data compliant with the MPEG4-AVC standard is larger than the processing amount for decoding video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, it is conceivable to reverse the drive frequency settings from the case described above.
- Furthermore, the method of setting the drive frequency is not limited to a configuration that lowers the drive frequency. For example, when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the voltage applied to the LSI ex500 or to the device including the LSI ex500 may be set high, and when it indicates video data compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, the voltage may be set low.
- As another example, when the identification information indicates video data compliant with a conventional standard, the driving of the CPU ex502 may be temporarily stopped because there is a margin in processing. Even when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the driving of the CPU ex502 may be temporarily stopped if there is a margin in processing; in this case, it is conceivable to set the stop time shorter than when the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- a plurality of video data that conforms to different standards may be input to the above-described devices and systems such as a television and a mobile phone.
- the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards in order to be able to decode even when a plurality of video data complying with different standards is input.
- the signal processing unit ex507 corresponding to each standard is used individually, there is a problem that the circuit scale of the LSI ex500 increases and the cost increases.
- To solve this, a configuration is adopted in which a decoding processing unit for executing the moving picture decoding method described in each of the above embodiments and a decoding processing unit compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared.
- An example of this configuration is shown as ex900 in FIG. 33A.
- For example, the moving picture decoding method described in each of the above embodiments and a moving picture decoding method compliant with the MPEG4-AVC standard share some processing content in processes such as entropy coding, inverse quantization, deblocking filtering, and motion compensation.
- For the shared processing content, a configuration is conceivable in which a decoding processing unit ex902 supporting the MPEG4-AVC standard is shared, and a dedicated decoding processing unit ex901 is used for the other processing content that is unique to the present invention and not supported by the MPEG4-AVC standard.
- In particular, since the present invention is characterized by motion compensation, it is conceivable, for example, to use the dedicated decoding processing unit ex901 for motion compensation and to share the decoding processing unit for any or all of the other processes, such as entropy coding, deblocking filtering, and inverse quantization. As for the sharing of decoding processing units, the common processing content may be handled by sharing a decoding processing unit for executing the moving picture decoding method described in each of the above embodiments, while a dedicated decoding processing unit is used for processing content specific to the MPEG4-AVC standard.
- ex1000 in FIG. 33B shows another example in which processing is partially shared.
- In this example, a dedicated decoding processing unit ex1001 supporting processing content unique to the present invention, a dedicated decoding processing unit ex1002 supporting processing content specific to other conventional standards, and a common decoding processing unit ex1003 supporting processing content common to the moving picture decoding method of the present invention and the conventional moving picture decoding methods are used.
- the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized in the processing content specific to the present invention or other conventional standards, and may be capable of executing other general-purpose processing.
- the configuration of the present embodiment can be implemented by LSI ex500.
- In this way, by sharing a decoding processing unit for the processing content common to the moving picture decoding method of the present invention and the moving picture decoding methods of the conventional standards, the circuit scale of the LSI can be reduced and the cost can be lowered.
- the image encoding method and the image decoding method according to the present invention can be used for, for example, a television, a digital video recorder, a car navigation, a mobile phone, a digital camera, or a digital video camera.
Abstract
Description
FIG. 1 is a block diagram showing the image encoding apparatus according to the present embodiment. As shown in FIG. 1, the image encoding apparatus includes a subtraction unit 102, an orthogonal transform unit 103, a quantization unit 104, an inverse quantization unit 106, an inverse orthogonal transform unit 107, an addition unit 108, a block memory 109, a frame memory 111, an intra prediction unit 110, an inter prediction unit 112, a switch unit 113, an inter prediction control unit 121, a picture type determination unit 124, a temporal merge motion vector calculation unit 122, a colPic memory 125, a co-located reference direction determination unit 123, and a variable length coding unit 105.
In Equation 1, D represents coding distortion. Specifically, the sum of absolute differences between the pixel values obtained by encoding and decoding the current block using a prediction image generated with a given motion vector and the original pixel values of the current block is used as D. R represents the generated code amount. Specifically, the code amount required to encode the motion vector used for generating the prediction image is used as R. λ is a Lagrange multiplier.
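Equation 1 combines these terms in the standard Lagrangian form, Cost = D + λR; the following sketch (names are ours) computes it with the sum of absolute differences as the distortion example the text gives.

```python
def rd_cost(orig, recon, rate_bits, lam):
    """Rate-distortion cost for one candidate motion vector (Equation 1).

    orig, recon : pixel values of the original block and of the block
                  obtained by encoding and decoding with the candidate
    rate_bits   : R, the code amount needed for the motion vector etc.
    lam         : lambda, the Lagrange multiplier
    """
    distortion = sum(abs(a - b) for a, b in zip(orig, recon))  # SAD as D
    return distortion + lam * rate_bits

cost = rd_cost([10, 12, 14], [11, 12, 13], rate_bits=24, lam=0.85)
assert abs(cost - 22.4) < 1e-9   # D = 2, so Cost = 2 + 0.85 * 24
```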
Here, curPOC represents the display order of the current picture, and colPOC represents the display order of colPic. POC1(X) represents the display order of the reference picture indicated by reference picture index X in the reference picture list of the current picture, and POC2(X) represents the display order of the reference picture indicated by reference picture index X in the reference picture list of colPic.
In the case of FIG. 8B, (curPOC-POC1(refIdxL1)) indicates the time difference information between the display times of picture B2 and picture B4, and (colPOC-POC2(refIdxL1_Col)) indicates the time difference information between the display times of picture B4 and picture B8.
In the case of FIG. 9A, (curPOC-POC1(refIdxL0)) indicates the time difference information between the display times of picture B6 and picture B4, and (colPOC-POC2(refIdxL0_Col)) indicates the time difference information between the display times of picture B4 and picture B0.
In the case of FIG. 9B, (curPOC-POC1(refIdxL1)) indicates the time difference information between the display times of picture B6 and picture B8, and (colPOC-POC2(refIdxL1_Col)) indicates the time difference information between the display times of picture B4 and picture B8.
Embodiment 2 differs from Embodiment 1 in the method of determining the RefIdx of the current block (S303, S308, and S311 in FIG. 7). The rest is the same as in Embodiment 1, so its description is omitted.
Here, refIdxL0 represents the reference picture index in the first prediction direction of the current block obtained in the processing flow of FIG. 10. In the case of FIG. 11A, (curPOC-POC1(refIdxL0)) indicates the time difference information between the display times of picture B2 and picture B0, and (colPOC-POC2(refIdxL0_Col)) indicates the time difference information between the display times of picture B4 and picture B0.
Here, refIdxL1 represents the reference picture index in the second prediction direction of the current block obtained in the processing flow of FIG. 10. In the case of FIG. 11B, (curPOC-POC1(refIdxL1)) indicates the time difference information between the display times of picture B2 and picture B8, and (colPOC-POC2(refIdxL1_Col)) indicates the time difference information between the display times of picture B4 and picture B8.
Here, refIdxL0 represents the reference picture index in the first prediction direction of the current block obtained in the processing flow of FIG. 10. In the case of FIG. 12A, (curPOC-POC1(refIdxL0)) indicates the time difference information between the display times of picture B6 and picture B0, and (colPOC-POC2(refIdxL0_Col)) indicates the time difference information between the display times of picture B4 and picture B0.
Here, refIdxL1 represents the reference picture index in the second prediction direction of the current block obtained in the processing flow of FIG. 10. In the case of FIG. 12B, (curPOC-POC1(refIdxL1)) indicates the time difference information between the display times of picture B6 and picture B8, and (colPOC-POC2(refIdxL1_Col)) indicates the time difference information between the display times of picture B4 and picture B8.
FIG. 13 is a block diagram showing the configuration of the image decoding apparatus according to the present embodiment.
By recording a program for realizing the configuration of the moving picture encoding method (image encoding method) or the moving picture decoding method (image decoding method) described in each of the above embodiments on a storage medium, the processing described in each of the above embodiments can easily be carried out on an independent computer system. The storage medium may be any medium capable of recording a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, or a semiconductor memory.
It is also possible to generate video data by switching, as necessary, between the moving picture encoding method or apparatus described in each of the above embodiments and a moving picture encoding method or apparatus compliant with a different standard such as MPEG-2, MPEG4-AVC, or VC-1.
The moving picture encoding method and apparatus and the moving picture decoding method and apparatus described in each of the above embodiments are typically realized as an LSI, which is an integrated circuit. As an example, FIG. 29 shows the configuration of the LSI ex500 made into one chip. The LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and the elements are connected to one another via a bus ex510. The power supply circuit unit ex505 activates each unit into an operable state by supplying power to it when the power is on.
When video data generated by the moving picture encoding method or apparatus described in each of the above embodiments is decoded, the processing amount is expected to be larger than when video data compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1 is decoded. Therefore, in the LSI ex500, the drive frequency needs to be set higher than the drive frequency of the CPU ex502 used when decoding video data compliant with the conventional standards. However, raising the drive frequency raises power consumption.
A plurality of pieces of video data compliant with different standards may be input to the devices and systems described above, such as televisions and mobile phones. In order to enable decoding even when a plurality of pieces of video data compliant with different standards are input, the signal processing unit ex507 of the LSI ex500 needs to support the plurality of standards. However, if signal processing units ex507 corresponding to the respective standards are used individually, the circuit scale of the LSI ex500 increases and the cost rises.
103 Orthogonal transform unit
104 Quantization unit
105 Variable length coding unit
106, 206 Inverse quantization unit
107, 207 Inverse orthogonal transform unit
108, 208 Addition unit
109, 209 Block memory
110, 210 Intra prediction unit
111, 211 Frame memory
112, 212 Inter prediction unit
113, 213 Switch unit
121, 221 Inter prediction control unit
122, 222 Temporal merge motion vector calculation unit
123 Co-located reference direction determination unit
124 Picture type determination unit
125, 225 colPic memory
205 Variable length decoding unit
Claims (8)
- An image coding method for coding a current block using a first reference index indicating a first reference picture and a first motion vector, the method comprising:
a calculation step of calculating a third reference index and a third motion vector as candidates for the first reference index and the first motion vector, using a second reference index and a second motion vector that were used in coding a corresponding block, the corresponding block being a block that is included in a corresponding picture different from the current picture and whose position matches the position of the current block in the current picture;
a determination step of determining a value of a flag indicating whether or not the current block is to be coded using the third reference index and the third motion vector as the first reference index and the first motion vector; and
a coding step of coding the current block using the first reference index and the first motion vector in accordance with the value of the flag, and appending the value of the flag to a coded stream.
- The image coding method according to claim 1, wherein, in the calculation step:
the second reference index is copied to the third reference index; and
the third motion vector is calculated by scaling the second motion vector using the display order of the current picture, the display order of the corresponding picture, the display order of a second reference picture indicated by the second reference index, and the display order of a third reference picture indicated by the third reference index.
- The image coding method according to claim 1, wherein, in the calculation step:
it is determined whether or not a second reference picture indicated by the second reference index is included in a reference picture list of the current picture;
when the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index;
when the second reference picture is not included in the reference picture list, the third reference index is invalidated; and
when the third reference index is not invalid, the third motion vector is calculated by scaling the second motion vector using the display order of the current picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of a third reference picture indicated by the third reference index.
- The image coding method according to claim 1, wherein, in the calculation step:
it is determined whether or not a second reference picture indicated by the second reference index is included in a reference picture list of the current picture;
when the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index;
when the second reference picture is not included in the reference picture list, the third reference index is set to the largest value assignable in the reference picture list; and
the third motion vector is calculated by scaling the second motion vector using the display order of the current picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of a third reference picture indicated by the third reference index.
- An image decoding method for decoding a current block using a first reference index indicating a first reference picture and a first motion vector, the method comprising:
a calculation step of calculating a third reference index and a third motion vector as candidates for the first reference index and the first motion vector, using a second reference index and a second motion vector that were used in decoding a corresponding block, the corresponding block being a block that is included in a corresponding picture different from the current picture and whose position matches the position of the current block in the current picture;
an acquisition step of acquiring, from a coded stream, a value of a flag indicating whether or not the current block is to be decoded using the third reference index and the third motion vector as the first reference index and the first motion vector; and
a decoding step of decoding the current block using the first reference index and the first motion vector in accordance with the value of the flag.
- The image decoding method according to claim 5, wherein, in the calculation step:
the second reference index is copied to the third reference index; and
the third motion vector is calculated by scaling the second motion vector using the display order of the current picture, the display order of the corresponding picture, the display order of a second reference picture indicated by the second reference index, and the display order of a third reference picture indicated by the third reference index.
- The image decoding method according to claim 5, wherein, in the calculation step:
it is determined whether or not a second reference picture indicated by the second reference index is included in a reference picture list of the current picture;
when the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index;
when the second reference picture is not included in the reference picture list, the third reference index is invalidated; and
when the third reference index is not invalid, the third motion vector is calculated by scaling the second motion vector using the display order of the current picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of a third reference picture indicated by the third reference index.
- The image decoding method according to claim 5, wherein, in the calculation step:
it is determined whether or not a second reference picture indicated by the second reference index is included in a reference picture list of the current picture;
when the second reference picture is included in the reference picture list, a fourth reference index indicating the second reference picture in the reference picture list is copied to the third reference index;
when the second reference picture is not included in the reference picture list, the third reference index is set to the largest value assignable in the reference picture list; and
the third motion vector is calculated by scaling the second motion vector using the display order of the current picture, the display order of the corresponding picture, the display order of the second reference picture, and the display order of a third reference picture indicated by the third reference index.
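To make the claimed derivation concrete, the following is a minimal C sketch, under assumed data structures, of the calculation step recited in claims 1, 3, and 4 (mirrored for decoding in claims 5, 7, and 8): the corresponding block's second reference index is remapped into the current picture's reference picture list, either invalidated (claim 3) or clamped to the largest assignable index (claim 4) when its picture is absent, and the second motion vector is scaled by display-order distances. All types and names here are illustrative assumptions, and the plain integer division stands in for a codec's fixed-point scaling.

```c
#include <stdbool.h>
#include <stdint.h>

#define REF_IDX_INVALID (-1)

typedef struct { int16_t x, y; } Mv;

/* Assumed shape of a reference picture list: the display order (POC)
 * of each entry plus the number of valid entries. */
typedef struct {
    int poc[16];
    int count;
} RefPicList;

/* Return the index whose entry has display order ref_poc, or -1. */
static int find_in_list(const RefPicList *list, int ref_poc)
{
    for (int i = 0; i < list->count; i++)
        if (list->poc[i] == ref_poc)
            return i;
    return REF_IDX_INVALID;
}

/* Derive the third reference index from the second reference picture's
 * display order. clamp_to_max selects claim 4's behaviour (largest
 * assignable index) instead of claim 3's invalidation. */
static int derive_third_index(const RefPicList *cur_list,
                              int second_ref_poc, bool clamp_to_max)
{
    int idx = find_in_list(cur_list, second_ref_poc);
    if (idx == REF_IDX_INVALID && clamp_to_max)
        idx = cur_list->count - 1;
    return idx;
}

/* Scale the second motion vector by the ratio of display-order
 * distances to obtain the third motion vector. */
static Mv derive_third_mv(Mv second_mv, int cur_poc, int col_poc,
                          int third_ref_poc, int second_ref_poc)
{
    int tb = cur_poc - third_ref_poc;   /* current picture to its reference */
    int td = col_poc - second_ref_poc;  /* corresponding picture to its ref */
    Mv mv = second_mv;
    if (td != 0) {
        mv.x = (int16_t)((int32_t)second_mv.x * tb / td);
        mv.y = (int16_t)((int32_t)second_mv.y * tb / td);
    }
    return mv;
}
```

If the claim 3 path yields REF_IDX_INVALID, the candidate would simply be excluded rather than scaled, which is why the scaling in claim 3 is conditioned on the third reference index not being invalid.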
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012554695A JP5893570B2 (ja) | 2011-01-28 | 2012-01-26 | Image coding method and image decoding method |
US13/980,918 US9560352B2 (en) | 2011-01-28 | 2012-01-26 | Image coding method and image decoding method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161437128P | 2011-01-28 | 2011-01-28 | |
US61/437,128 | 2011-01-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012102045A1 (ja) | 2012-08-02 |
Family
ID=46580620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/000491 WO2012102045A1 (ja) | 2012-01-26 | Image coding method and image decoding method |
Country Status (3)
Country | Link |
---|---|
US (1) | US9560352B2 (ja) |
JP (1) | JP5893570B2 (ja) |
WO (1) | WO2012102045A1 (ja) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BR112013022646B1 (pt) * | 2011-03-09 | 2022-09-13 | Kabushiki Kaisha Toshiba | Method for coding and decoding an image and performing inter prediction on divided pixel blocks |
JP5982734B2 (ja) * | 2011-03-11 | 2016-08-31 | Sony Corp | Image processing apparatus and method |
CN107959857B (zh) * | 2011-10-18 | 2022-03-01 | KT Corp | Video signal decoding method |
EP2942961A1 (en) * | 2011-11-23 | 2015-11-11 | HUMAX Holdings Co., Ltd. | Methods for encoding/decoding of video using common merging candidate set of asymmetric partitions |
US9900615B2 (en) * | 2011-12-28 | 2018-02-20 | Microsoft Technology Licensing, Llc | Representative motion information for temporal motion prediction in video encoding and decoding |
JP2013207755A (ja) * | 2012-03-29 | 2013-10-07 | Sony Corp | Image processing apparatus and method |
US9325990B2 (en) * | 2012-07-09 | 2016-04-26 | Qualcomm Incorporated | Temporal motion vector prediction in video coding extensions |
US9438925B2 (en) * | 2013-12-31 | 2016-09-06 | Vixs Systems, Inc. | Video encoder with block merging and methods for use therewith |
GB2527315B (en) * | 2014-06-17 | 2017-03-15 | Imagination Tech Ltd | Error detection in motion estimation |
US10560693B2 (en) * | 2015-11-24 | 2020-02-11 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus |
WO2020086317A1 (en) * | 2018-10-23 | 2020-04-30 | Tencent America Llc. | Method and apparatus for video coding |
US11032574B2 (en) * | 2018-12-31 | 2021-06-08 | Tencent America LLC | Method and apparatus for video coding |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004088722A (ja) | 2002-03-04 | 2004-03-18 | Matsushita Electric Ind Co Ltd | Moving picture coding method and moving picture decoding method |
CN1666532A (zh) | 2002-07-02 | 2005-09-07 | Matsushita Electric Industrial Co Ltd | Image coding method and image decoding method |
BR0318528A (pt) * | 2003-10-09 | 2006-09-12 | Thomson Licensing | Direct mode derivation process for error concealment |
FI115589B (fi) * | 2003-10-14 | 2005-05-31 | Nokia Corp | Coding and decoding of redundant pictures |
2012
- 2012-01-26: JP JP2012554695A, patent JP5893570B2 (ja), status Active
- 2012-01-26: US US13/980,918, patent US9560352B2 (en), status Active
- 2012-01-26: WO PCT/JP2012/000491, patent WO2012102045A1 (ja), status Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004088737A (ja) * | 2002-07-02 | 2004-03-18 | Matsushita Electric Ind Co Ltd | Image coding method and image decoding method |
Non-Patent Citations (1)
Title |
---|
"Test Model under Consideration Output Document (draft007)", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 2ND MEETING, October 2010 (2010-10-01), GENEVA, CH, pages 78 - 93 * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2012102045A1 (ja) | 2014-06-30 |
US9560352B2 (en) | 2017-01-31 |
JP5893570B2 (ja) | 2016-03-23 |
US20130301736A1 (en) | 2013-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6167409B2 (ja) | | Image decoding method and image decoding apparatus |
JP6340707B2 (ja) | | Image coding method and image coding apparatus |
JP6478133B2 (ja) | | Moving picture decoding method, moving picture decoding apparatus, moving picture coding method, and moving picture coding apparatus |
JP5837575B2 (ja) | | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
JP6422011B2 (ja) | | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, and moving picture decoding apparatus |
WO2013057877A1 (ja) | | Image coding method, image coding apparatus, image decoding method, and image decoding apparatus |
JP5893570B2 (ja) | | Image coding method and image decoding method |
WO2013051209A1 (ja) | | Image coding method, image coding apparatus, image decoding method, image decoding apparatus, and image coding and decoding apparatus |
WO2012117728A1 (ja) | | Moving picture coding method, moving picture decoding method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus |
JP6108309B2 (ja) | | Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus |
JP5883431B2 (ja) | | Image coding method and image decoding method |
WO2013128832A1 (ja) | | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus |
JP6304571B2 (ja) | | Moving picture decoding method and moving picture decoding apparatus |
WO2012090495A1 (ja) | | Image coding method and image decoding method |
WO2012073481A1 (ja) | | Moving picture coding method and moving picture decoding method |
WO2012081225A1 (ja) | | Image coding method and image decoding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12739604; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2012554695; Country of ref document: JP; Kind code of ref document: A |
| | WWE | Wipo information: entry into national phase | Ref document number: 13980918; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 12739604; Country of ref document: EP; Kind code of ref document: A1 |