WO2012090286A1 - Video image encoding method, and video image decoding method - Google Patents


Info

Publication number
WO2012090286A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
image signal
prediction direction
unit
intra
Prior art date
Application number
PCT/JP2010/073630
Other languages
French (fr)
Japanese (ja)
Inventor
Taichiro Shiodera
Akiyuki Tanizawa
Takeshi Chujo
Original Assignee
Toshiba Corporation
Priority date
Filing date
Publication date
Application filed by Toshiba Corporation
Priority to PCT/JP2010/073630
Publication of WO2012090286A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes

Definitions

  • Embodiments of the present invention relate to an intra-screen prediction method, a video encoding method, and a video decoding method in video encoding and decoding.
  • H.264 achieves higher prediction efficiency than the in-screen prediction (hereinafter referred to as intra prediction) of ISO/IEC MPEG-1, 2, and 4 by incorporating directional prediction in the spatial domain (pixel domain).
  • JCT-VC: Joint Collaborative Team on Video Coding
  • In Non-Patent Document 1, a prediction value is generated at an individual prediction angle for each of a plurality of prediction modes and copied along the prediction direction. Consequently, a texture whose luminance gradient changes smoothly within a pixel block, or a video with gradation, cannot be predicted efficiently, and the prediction error may increase.
  • The problem to be solved by the present invention is to provide a moving image encoding device and a moving image decoding device that include a predicted image generating device capable of improving encoding efficiency.
  • The moving image encoding method divides an input image signal into pixel blocks expressed by hierarchical depth according to quadtree division, performs intra prediction on the divided pixel blocks, and generates a prediction error signal. A reference prediction direction, indicating the intra prediction direction of at least one already-encoded pixel block, is acquired.
  • the first reference prediction direction is set as the first prediction direction, and a first prediction image signal is generated.
  • a second prediction image signal is generated by setting a second prediction direction different from the first prediction direction.
  • The relative distance between the reference pixel and the prediction pixel in each prediction direction is derived for the first prediction direction combination, which is the combination of the set first prediction direction and second prediction direction, and the difference value of the relative distances is derived.
  • A predetermined weight component is derived according to the difference value. According to the weight component, the first predicted image signal and the second predicted image signal are weighted-averaged to generate a third predicted image signal. A prediction error signal is generated from the third predicted image signal, and the prediction error signal is encoded.
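  • The weighted-average generation of the third predicted image signal described above can be sketched as follows. This is an illustrative sketch only: the embodiment derives the weight component from a predetermined table, whereas the clipped-linear mapping, the function name, and the max_weight parameter here are assumptions.

```python
import numpy as np

def blend_bidirectional(pred0, pred1, dist0, dist1, max_weight=8):
    """Blend two directional intra predictions into a third predicted
    block by a per-pixel weighted average.

    pred0, pred1: predicted blocks from the first and second prediction
    directions. dist0, dist1: per-pixel relative distances from each
    direction's reference pixel to the predicted pixel. The prediction
    whose reference is relatively closer receives the larger weight.
    """
    diff = dist0.astype(int) - dist1.astype(int)  # difference of relative distances
    # Map the difference to a weight in [0, max_weight]; equal distances
    # give equal weights (a simple stand-in for the embodiment's table).
    w0 = np.clip(max_weight // 2 - diff, 0, max_weight)
    w1 = max_weight - w0
    # Integer weighted average with rounding.
    return (w0 * pred0 + w1 * pred1 + max_weight // 2) // max_weight
```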
  • FIG. 1 is a block diagram illustrating a moving image encoding apparatus according to a first embodiment.
  • Explanatory drawing of the predictive encoding order of pixel blocks.
  • Explanatory drawings of examples of pixel block sizes.
  • Explanatory drawings of examples of pixel blocks in a coding tree unit.
  • Explanatory drawing showing an example of the unidirectional intra prediction modes, prediction types, and prediction angle parameters:
  • (a) is an explanatory drawing of the intra prediction modes;
  • (b) is an explanatory drawing of the reference pixels and prediction pixels of the intra prediction modes;
  • (c) is an explanatory drawing of the horizontal prediction mode;
  • (d) is an explanatory drawing of the diagonal down-right prediction mode.
  • Block diagram illustrating the intra prediction unit according to the first embodiment.
  • A table showing the continuation of the preceding table.
  • Tables showing examples of the relationship between prediction mode, prediction type, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
  • Table illustrating a correspondence relationship according to the first embodiment.
  • Explanatory drawing illustrating the prediction directions according to the first embodiment.
  • Block diagrams showing examples of the calculation method of the city-block distance according to the first embodiment.
  • Table illustrating the relationship between the prediction mode and the distance of the prediction pixel position according to the first embodiment.
  • Table illustrating the mapping between prediction modes and a distance table according to the first embodiment.
  • Tables illustrating the relationship between the relative distance and the weight component according to the first embodiment.
  • Block diagram showing an example of bidirectional intra prediction image generation according to the first embodiment.
  • Explanatory drawing showing an example of bidirectional intra prediction according to the first embodiment.
  • Explanatory drawing showing an example of adjacent block positions according to the first embodiment.
  • Explanatory drawings showing examples of the conversion table from the first prediction direction to the second prediction direction according to the first embodiment.
  • Block diagram showing another example of bidirectional intra prediction image generation according to the first embodiment.
  • Explanatory drawings showing examples of the correspondence used in bidirectional prediction mode generation according to the first embodiment.
  • Explanatory drawing showing an example of the prediction mode structure of a color difference signal according to the first embodiment.
  • Block diagram showing another embodiment of the intra prediction unit according to the first embodiment.
  • Explanatory drawings of a syntax structure and of a slice header syntax.
  • Explanatory drawings showing examples of a prediction unit syntax.
  • Tables showing the relationship used when predicting a prediction mode, and examples of the encoding method of a prediction mode.
  • Block diagram showing a first modification of the intra prediction unit according to the first embodiment.
  • Explanatory drawings showing examples of the prediction unit syntax in the first modification according to the first embodiment, and an example of a pixel-level predicted value generation method.
  • Table showing the relationship between the prediction mode and a transform index according to the second embodiment.
  • Block diagrams illustrating coefficient order control units according to the second embodiment.
  • Explanatory drawing showing an example of the transform unit syntax according to the second embodiment.
  • Block diagram showing another example of the orthogonal transformation unit according to the third embodiment.
  • Block diagram showing an example of the inverse orthogonal transformation unit according to the third embodiment.
  • Explanatory drawing showing an example of the transform unit syntax according to the third embodiment.
  • Block diagram showing an example of the moving image decoding apparatus according to the fourth embodiment.
  • Block diagram showing an example of the moving image decoding apparatus according to the fifth embodiment.
  • Block diagrams illustrating coefficient order restoration units according to the fifth embodiment.
  • The first embodiment relates to an image encoding device.
  • A moving picture decoding apparatus corresponding to the picture encoding apparatus according to the present embodiment will be described in the fourth embodiment.
  • This image encoding device can be realized by hardware such as an LSI (Large-Scale Integration) chip, a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array).
  • the image encoding apparatus can also be realized by causing a computer to execute an image encoding program.
  • The image encoding device 100 includes a subtraction unit 101, an orthogonal transformation unit 102, a quantization unit 103, an inverse quantization unit 104, an inverse orthogonal transformation unit 105, an addition unit 106, a loop filter 107, a reference image memory 108, an intra prediction unit 109, an inter prediction unit 110, a prediction selection switch 111, a prediction selection unit 112, an entropy encoding unit 113, an output buffer 114, an encoding control unit 115, and an intra prediction mode memory 116.
  • The image encoding apparatus 100 in FIG. 1 divides each frame or each field constituting the input image signal 151 into a plurality of pixel blocks, performs predictive encoding on the divided pixel blocks, and outputs encoded data 162.
  • pixel blocks are predictively encoded from the upper left to the lower right as shown in FIG. 2A.
  • the encoded pixel block p is located on the left side and the upper side of the encoding target pixel block c in the encoding processing target frame f.
  • The pixel block refers to a unit for processing an image, such as an M × N block (M and N are natural numbers), a coding tree unit, a macroblock, a sub-block, or a single pixel.
  • the pixel block is basically used in the meaning of the coding tree unit.
  • the pixel block can be interpreted in the above-described meaning by appropriately replacing the description.
  • The coding tree unit is typically, for example, the 16 × 16 pixel block shown in FIG. 2B, but may be the 32 × 32 pixel block shown in FIG. 2C, the 64 × 64 pixel block shown in FIG. 2D, or an 8 × 8 or 4 × 4 pixel block (not shown).
  • the coding tree unit need not necessarily be square.
  • the encoding target block or coding tree unit of the input image signal 151 may be referred to as a “prediction target block”.
  • the coding unit is not limited to a pixel block such as a coding tree unit, and a frame, a field, a slice, or a combination thereof can be used.
  • FIG. 3A to 3D are diagrams showing specific examples of coding tree units.
  • N represents the size of the reference coding tree unit.
  • the coding tree unit has a quadtree structure, and when divided, the four pixel blocks are indexed in the Z-scan order.
  • FIG. 3B shows an example in which the 64 ⁇ 64 pixel block in FIG. 3A is divided into quadtrees.
  • the numbers shown in the figure represent the Z scan order.
  • the depth of division is defined by Depth.
  • A unit having the largest coding tree unit size is called a large coding tree unit, and the input image signal is encoded in raster scan order in units of large coding tree units.
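  • The quadtree division of a coding tree unit into pixel blocks indexed in Z-scan order can be sketched as follows. This is a minimal illustration; the function name and the split_decision callback are assumptions, not part of the embodiment.

```python
def zscan_leaves(x, y, size, depth, max_depth, split_decision):
    """Recursively split a coding tree unit in quadtree fashion and
    return its leaf blocks (x, y, size) in Z-scan order:
    top-left, top-right, bottom-left, bottom-right."""
    if depth < max_depth and split_decision(x, y, size, depth):
        half = size // 2
        leaves = []
        # Z-scan order of the four sub-blocks.
        for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1)):
            leaves += zscan_leaves(x + dx * half, y + dy * half,
                                   half, depth + 1, max_depth, split_decision)
        return leaves
    return [(x, y, size)]
```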
  • The image encoding apparatus 100 in FIG. 1 performs, on a pixel block, intra prediction (also referred to as in-screen prediction or intra-frame prediction) or inter prediction (also referred to as inter-screen prediction, inter-frame prediction, or motion-compensated prediction) based on the encoding parameters input from the encoding control unit 115, and generates a predicted image signal 161.
  • The image encoding device 100 performs orthogonal transform and quantization on the prediction error signal 152 between the pixel block (input image signal 151) and the predicted image signal 161, performs entropy encoding, and outputs the encoded data 162.
  • the image encoding apparatus 100 in FIG. 1 performs encoding by selectively applying a plurality of prediction modes having different block sizes and generation methods of the predicted image signal 161.
  • The generation methods of the predicted image signal 161 can be roughly divided into two types: intra prediction, in which prediction is performed within the encoding target frame, and inter prediction, in which prediction is performed using one or more temporally different reference frames.
  • the subtraction unit 101 subtracts the corresponding prediction image signal 161 from the encoding target block of the input image signal 151 to obtain a prediction error signal 152.
  • the subtraction unit 101 inputs the prediction error signal 152 to the orthogonal transformation unit 102.
  • the orthogonal transform unit 102 performs orthogonal transform such as discrete cosine transform (DCT) on the prediction error signal 152 from the subtraction unit 101 to obtain a transform coefficient 153.
  • the orthogonal transform unit 102 inputs the transform coefficient 153 to the quantization unit 103.
  • the quantization unit 103 quantizes the transform coefficient 153 from the orthogonal transform unit 102 to obtain a quantized transform coefficient 154. Specifically, the quantization unit 103 performs quantization according to quantization information such as a quantization parameter and a quantization matrix specified by the encoding control unit 115. The quantization parameter indicates the fineness of quantization. The quantization matrix is used for weighting the fineness of quantization for each component of the transform coefficient.
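  • Quantization controlled by a quantization parameter (fineness) and a quantization matrix (per-component weighting of the fineness) can be sketched as follows. The step-size rule, doubling every 6 QP, is an H.264-style assumption, and all names are hypothetical.

```python
def qp_to_step(qp):
    """Quantization step size; doubles every 6 QP (an H.264-style rule,
    assumed here for illustration)."""
    return 0.625 * 2.0 ** (qp / 6.0)

def quantize(coeffs, qp, qmatrix):
    """Quantize transform coefficients: qp sets the overall fineness,
    qmatrix weights the fineness per coefficient (16 = neutral)."""
    step = qp_to_step(qp)
    return [round(c / (step * m / 16.0)) for c, m in zip(coeffs, qmatrix)]

def dequantize(levels, qp, qmatrix):
    """Inverse quantization: scale the levels back by the same steps."""
    step = qp_to_step(qp)
    return [l * step * m / 16.0 for l, m in zip(levels, qmatrix)]
```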
  • the quantization unit 103 inputs the quantized transform coefficient 154 to the entropy encoding unit 113 and the inverse quantization unit 104.
  • The entropy encoding unit 113 performs entropy encoding (for example, Huffman coding or arithmetic coding) on various encoding parameters, such as the quantized transform coefficient 154 from the quantization unit 103, the prediction information 160 from the prediction selection unit 112, and the quantization information specified by the encoding control unit 115, to generate encoded data.
  • the encoding parameter is a parameter necessary for decoding such as prediction information 160, information on transform coefficients, information on quantization, and the like.
  • The encoding control unit 115 may have an internal memory (not shown) in which the encoding parameters are held, so that the encoding parameters of already-encoded adjacent pixel blocks can be used when encoding the prediction target block. For example, in H.264 intra prediction, the prediction value of the prediction mode of the prediction target block can be derived from the prediction mode information of encoded adjacent blocks.
  • The encoded data generated by the entropy encoding unit 113 is temporarily accumulated in the output buffer 114, through multiplexing for example, and is output as encoded data 162 at an appropriate output timing managed by the encoding control unit 115.
  • the encoded data 162 is output to, for example, a storage system (storage medium) or a transmission system (communication line) not shown.
  • the inverse quantization unit 104 performs inverse quantization on the quantized transform coefficient 154 from the quantizing unit 103 to obtain a restored transform coefficient 155. Specifically, the inverse quantization unit 104 performs inverse quantization according to the quantization information used in the quantization unit 103. The quantization information used in the quantization unit 103 is loaded from the internal memory of the encoding control unit 115. The inverse quantization unit 104 inputs the restored transform coefficient 155 to the inverse orthogonal transform unit 105.
  • the inverse orthogonal transform unit 105 performs an inverse orthogonal transform corresponding to the orthogonal transform performed in the orthogonal transform unit 102 such as an inverse discrete cosine transform on the restored transform coefficient 155 from the inverse quantization unit 104, A restored prediction error signal 156 is obtained.
  • the inverse orthogonal transform unit 105 inputs the restored prediction error signal 156 to the addition unit 106.
  • the addition unit 106 adds the restored prediction error signal 156 and the corresponding prediction image signal 161 to generate a local decoded image signal 157.
  • the decoded image signal 157 is input to the loop filter 107.
  • the loop filter 107 performs a deblocking filter, a Wiener filter, or the like on the input decoded image signal 157 to generate a filtered image signal 158.
  • the generated filtered image signal 158 is input to the reference image memory 108.
  • The reference image memory 108 stores the locally decoded, filtered image signal 158, which is referred to as the reference image signal 159 whenever the intra prediction unit 109 or the inter prediction unit 110 generates a predicted image.
  • The intra prediction mode memory 116 stores the intra prediction mode information 163 applied to already-encoded prediction units, and is referred to as the reference intra prediction mode information 164 whenever the intra prediction unit 109 generates bidirectional prediction mode information.
  • When the unidirectional intra prediction image generation unit 601 (described later) is applied in the intra prediction unit 109, the intra prediction mode information 163 includes information on one type of unidirectional intra prediction (the prediction direction, or an index shown in FIGS. 8A and 8B described later).
  • When the bidirectional intra prediction image generation unit 602 (described later) is applied in the intra prediction unit 109, the intra prediction mode information 163 corresponds to information on two types of unidirectional intra prediction (the prediction directions, or indexes shown in FIGS. 8A and 8B described later).
  • the first unidirectional intra prediction mode is expressed as IntraPredModeL0
  • the second unidirectional intra prediction mode is expressed as IntraPredModeL1.
  • IntraPredModeL0 includes IntraPredTypeL0 and IntraAngleIdL0
  • IntraPredModeL1 includes IntraPredTypeL1 and IntraAngleIdL1.
  • For example, when IntraPredMode[puPartIdx] = 34 is applied to the prediction unit, the intra prediction mode information 163 is held in the form IntraPredTypeL0 = "Intra_Horizontal", IntraAngleIdL0 = "0", IntraPredTypeL1 = "Intra_Vertical", IntraAngleIdL1 = "0".
  • Alternatively, the correspondence table shown in FIG. 4 may be used to convert this into index information.
  • Intra prediction mode information 163 in which IntraPredType "Intra_Horizontal" with IntraAngleId "0" is mapped to IntraPredMode "1", and IntraPredType "Intra_Vertical" with IntraAngleId "0" is mapped to IntraPredMode "0", is also acceptable.
  • the intra prediction unit 109 performs intra prediction using the reference image signal 159 stored in the reference image memory 108 and the reference intra prediction mode information 164 stored in the intra prediction mode memory 116.
  • In H.264, an intra prediction image is generated by performing pixel interpolation (copying, or copying after interpolation) along a prediction direction such as the vertical or horizontal direction, using encoded reference pixel values adjacent to the prediction target block. FIG. 5A shows the prediction directions of intra prediction in H.264, and FIG. 5B shows the arrangement relationship between reference pixels and encoding target pixels in H.264.
  • FIG. 5C illustrates a predicted image generation method in mode 1 (horizontal prediction)
  • FIG. 5D illustrates a predicted image generation method in mode 4 (diagonal lower right prediction).
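  • Mode 1 (horizontal prediction) simply copies each left-hand reference pixel across its row. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def predict_horizontal(left_ref):
    """H.264-style mode 1 (horizontal prediction): fill each row of the
    predicted block by copying the decoded reference pixel immediately
    to the left of that row."""
    n = len(left_ref)
    return np.tile(np.asarray(left_ref).reshape(n, 1), (1, n))
```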
  • In Non-Patent Document 1, the prediction directions of H.264 are further expanded to 34 directions to increase the number of prediction modes. A predicted pixel value is created by performing linear interpolation with 1/32-pixel accuracy according to the prediction angle, and is copied along the prediction direction. Details of the intra prediction unit 109 used in the present embodiment will be described later.
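  • Directional prediction at 1/32-pixel accuracy can be sketched as follows for one row predicted from the top reference line. This is an HEVC-style illustration only: the projection and boundary handling in the referenced scheme are more involved, and the names are assumptions.

```python
def predict_angular_row(ref_top, row, angle):
    """One row of an angular intra prediction: project the row onto the
    top reference line with 1/32-pel accuracy and linearly interpolate
    between the two nearest reference samples.

    angle is the displacement per row in 1/32-pel units; this sketch
    handles non-negative angles only."""
    out = []
    n = len(ref_top) - 1
    for x in range(n):
        pos = (row + 1) * angle          # displacement in 1/32-pel units
        idx = x + (pos >> 5)             # integer reference index
        frac = pos & 31                  # 1/32-pel fractional part
        # Clamp at the last available reference sample (a simplification).
        a = ref_top[min(idx, n)]
        b = ref_top[min(idx + 1, n)]
        out.append(((32 - frac) * a + frac * b + 16) >> 5)
    return out
```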
  • The inter prediction unit 110 performs inter prediction using the reference image signal 159 stored in the reference image memory 108. Specifically, the inter prediction unit 110 performs block matching between the prediction target block and the reference image signal 159 to derive the amount of motion shift (motion vector), and performs interpolation processing (motion compensation) based on the motion vector to generate an inter prediction image. In H.264, interpolation up to 1/4-pixel accuracy is possible.
  • the derived motion vector is entropy encoded as part of the prediction information 160.
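  • The block matching performed by the inter prediction unit can be sketched as a full search minimising the SAD, at integer-pixel accuracy only. The function name and the exhaustive search strategy are assumptions for illustration.

```python
import numpy as np

def block_match(cur, ref, bx, by, bsize, search):
    """Full-search block matching: find the integer motion vector within
    +/- search pixels that minimises the SAD against the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            # Skip candidates that fall outside the reference frame.
            if 0 <= y and 0 <= x and y + bsize <= ref.shape[0] and x + bsize <= ref.shape[1]:
                cand = ref[y:y + bsize, x:x + bsize]
                sad = int(np.abs(block.astype(int) - cand.astype(int)).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best = sad, (dx, dy)
    return best, best_sad
```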
  • The prediction selection switch 111 selects the output terminal of the intra prediction unit 109 or that of the inter prediction unit 110 according to the prediction information 160 from the prediction selection unit 112, and supplies the intra prediction image or the inter prediction image, as the prediction image signal 161, to the subtraction unit 101 and the addition unit 106.
  • If the prediction information 160 indicates intra prediction, the prediction selection switch 111 connects to the output terminal of the intra prediction unit 109; if it indicates inter prediction, the prediction selection switch 111 connects to the output terminal of the inter prediction unit 110.
  • the prediction selection unit 112 has a function of setting the prediction information 160 according to the prediction mode controlled by the encoding control unit 115. As described above, intra prediction or inter prediction can be selected to generate the predicted image signal 161, but a plurality of modes can be further selected for each of intra prediction and inter prediction.
  • The encoding control unit 115 determines one of the plurality of intra prediction modes and inter prediction modes as the optimal prediction mode, and the prediction selection unit 112 sets the prediction information 160 according to the determined optimal prediction mode.
  • prediction mode information is specified by the encoding control unit 115 to the intra prediction unit 109, and the intra prediction unit 109 generates a predicted image signal 161 according to the prediction mode information.
  • the encoding control unit 115 may specify a plurality of prediction mode information in order from the smallest prediction mode number, or may specify a plurality of prediction mode information in order from the largest.
  • the encoding control unit 115 may limit the prediction mode according to the characteristics of the input image.
  • the encoding control unit 115 does not necessarily specify all prediction modes, and may specify at least one prediction mode information for the encoding target block.
  • The encoding control unit 115 determines the optimal prediction mode using the cost function shown in the following equation (1):

  K = SAD + λ × OH   (1)

  • In equation (1) (this cost is hereinafter referred to as the simple encoding cost), OH indicates the code amount of the prediction information 160 (for example, motion vector information and prediction block size information), and SAD indicates the sum of absolute differences between the prediction target block and the predicted image signal 161 (that is, the cumulative sum of the absolute values of the prediction error signal 152). λ indicates a Lagrange undetermined multiplier determined based on the quantization information (quantization parameter), and K indicates the encoding cost.
  • the prediction mode that minimizes the coding cost K is determined as the optimum prediction mode from the viewpoint of the generated code amount and the prediction error.
  • the encoding cost may be estimated from OH alone or SAD alone, or the encoding cost may be estimated using a value obtained by subjecting SAD to Hadamard transform or an approximation thereof.
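  • The simple encoding cost of equation (1) can be computed as follows. The derivation of λ from the quantization parameter shown here is the rule used in H.264 reference software and is an assumption, not taken from this document.

```python
def simple_cost(sad, overhead_bits, qp):
    """Simple encoding cost K = SAD + lambda * OH of equation (1).

    sad: sum of absolute differences between the prediction target block
    and the predicted image signal; overhead_bits: code amount OH of the
    prediction information. lambda is derived from the quantization
    parameter (H.264 reference-software-style rule, assumed here)."""
    lam = 0.85 * 2.0 ** ((qp - 12) / 3.0)
    return sad + lam * overhead_bits
```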
  • Alternatively, the encoding control unit 115 may determine the optimal prediction mode using the cost function shown in the following equation (2):

  J = D + λ × R   (2)

  • In equation (2), D indicates the sum of squared errors between the prediction target block and the locally decoded image (that is, the encoding distortion), R indicates the code amount estimated by provisionally encoding the prediction error between the prediction target block and the predicted image signal 161 in that prediction mode, and J indicates the encoding cost.
  • When the encoding cost J of equation (2) is used, provisional encoding and local decoding are required for each prediction mode, so the circuit scale or the amount of computation increases.
  • On the other hand, since the encoding cost J is derived from more accurate encoding distortion and code amount, the optimal prediction mode can be determined with high accuracy and high encoding efficiency is easily maintained.
  • the encoding cost may be estimated from only R or D, or the encoding cost may be estimated using an approximate value of R or D. These costs may be used hierarchically.
  • the encoding control unit 115 performs determination using Expression (1) or Expression (2) based on information obtained in advance regarding the prediction target block (prediction mode of surrounding pixel blocks, image analysis result, and the like). The number of prediction mode candidates may be narrowed down in advance.
  • By performing two-stage mode determination combining equation (1) and equation (2), the number of prediction mode candidates can be further reduced while maintaining encoding performance.
  • the simple encoding cost represented by the formula (1) does not require a local decoding process, and can be calculated at high speed.
  • Since the number of prediction modes in the present embodiment is large even compared with H.264, mode determination using the detailed encoding cost for all of them is not realistic. Therefore, as a first step, mode determination using the simple encoding cost is performed on the prediction modes available for the pixel block, and prediction mode candidates are derived.
  • As a second step, the number of prediction mode candidates is changed using the property that the correlation between the simple encoding cost and the detailed encoding cost becomes higher as the quantization parameter, which determines the coarseness of quantization, becomes larger.
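  • The two-stage determination combining equations (1) and (2) can be sketched as follows: rank all modes by the cheap cost K, keep a shortlist, then decide among the survivors by the expensive cost J. The function name and the cost callables are assumptions.

```python
def choose_mode(modes, simple_cost_fn, detailed_cost_fn, num_candidates):
    """Two-stage mode decision: first narrow the candidates with the
    simple cost K (no local decoding needed), then pick the winner with
    the detailed cost J (which requires provisional encoding/decoding
    and is therefore evaluated for only num_candidates modes)."""
    shortlist = sorted(modes, key=simple_cost_fn)[:num_candidates]
    return min(shortlist, key=detailed_cost_fn)
```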
  • the intra prediction unit 109 illustrated in FIG. 6 includes a unidirectional intra predicted image generation unit 601, a bidirectional intra predicted image generation unit 602, a prediction mode information setting unit 603, a selection switch 604, and a bidirectional intra prediction mode generation unit 605. .
  • the reference image signal 159 is input from the reference image memory 108 to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • The prediction mode information setting unit 603 sets and outputs the prediction mode 651, which determines whether the unidirectional intra prediction image generation unit 601 or the bidirectional intra prediction image generation unit 602 generates the predicted image.
  • the bidirectional intra prediction mode generation unit 605 outputs the bidirectional intra prediction mode information 652 according to the prediction mode 651 and the reference intra prediction mode information 164. The operation of the bidirectional intra prediction mode generation unit 605 will be described later.
  • the selection switch 604 has a function of switching the output ends of the respective intra predicted image generation units in accordance with the prediction mode 651.
  • If the prediction mode 651 is a unidirectional intra prediction mode, the output terminal of the unidirectional intra prediction image generation unit 601 is connected to the switch; if the prediction mode 651 is a bidirectional intra prediction mode, the output terminal of the bidirectional intra prediction image generation unit 602 is connected.
  • each of the intra predicted image generation units 601 and 602 generates the predicted image signal 161 according to the prediction mode 651.
  • the generated prediction image signal 161 (also referred to as a fifth prediction image signal) is output from the intra prediction unit 109.
  • the output signal of the unidirectional intra predicted image generation unit 601 is also called a fourth predicted image signal
  • the output signal of the bidirectional intra predicted image generation unit 602 is also called a third predicted image signal.
  • 7A and 7B show the numbers of the prediction modes according to the present embodiment for each block size.
  • PuSize indicates a pixel block (prediction unit) size to be predicted, and seven types of sizes from PU_2x2 to PU_128x128 are defined.
  • IntraUniModeNum represents the number of prediction modes for unidirectional intra prediction
  • IntraBiModeNum represents the number of prediction modes for bidirectional intra prediction.
  • Number of modes is the total number of prediction modes for each pixel block (prediction unit) size.
  • the number of prediction modes for unidirectional intra prediction and the number of prediction modes for bidirectional intra prediction may be values other than those shown in FIGS. 7A and 7B. Note that when the number of prediction modes for bidirectional intra prediction is 0, bidirectional intra prediction is not performed for that pixel block size.
  • FIG. 8A shows the relationship between the prediction mode and the prediction method in the case of PU_4x4, PU_8x8, PU_16x16, and PU_32x32 in FIG. 7A.
  • FIG. 10A shows the case of PU_64x64 or PU_128x128 in FIG. 7A
  • FIG. 10B shows the case of PU_64x64 or PU_128x128 in FIG. 7B.
  • IntraPredMode indicates a prediction mode number
  • IntraBipredFlag is a flag indicating whether or not bidirectional intra prediction is used. When the flag is 0, it indicates that the prediction mode is a unidirectional intra prediction mode. When the flag is 1, it indicates that the prediction mode is a bidirectional intra prediction mode.
  • When the flag is 1, the bidirectional intra prediction mode generation unit 605 generates the bidirectional intra prediction mode information 652 in accordance with IntraBipredTypeIdx, which defines the bidirectional intra prediction generation method.
  • When IntraBipredTypeIdx is 0, the two types of unidirectional intra prediction modes used for bidirectional intra prediction are set, using a predetermined table, in a first prediction mode generation unit 1901 described later.
  • a method in which two types of unidirectional intra prediction modes used for bidirectional intra prediction are set in advance by a table is referred to as a fixed table method.
  • FIG. 8A shows an example in which all bidirectional intra prediction modes are fixed table methods.
  • When IntraBipredTypeIdx is a value larger than 0, the two types of unidirectional intra prediction modes used for bidirectional intra prediction are set based on the reference intra prediction mode information 164.
  • a method in which the two types of unidirectional intra prediction modes used for bidirectional intra prediction are set based on the reference intra prediction mode information 164 is referred to as the direct method.
  • IntraBipredTypeIdx has different values depending on the method of deriving two types of unidirectional intra prediction modes from the reference intra prediction mode information 164. A specific derivation method will be described later.
  • all modes may be fixed table methods, or all modes may be direct methods. Also, some modes may be fixed table methods and the remaining modes may be direct methods.
  • FIG. 8B shows an example in which among the eight types of bidirectional intra prediction modes, three types are the fixed table method and the remaining five types are the direct method.
  • IntraPredTypeLX indicates the prediction type of intra prediction. Intra_Vertical means that the vertical direction is the reference for prediction, and Intra_Horizontal means that the horizontal direction is the reference for prediction. Note that 0 or 1 is applied to X in IntraPredTypeLX. IntraPredTypeL0 indicates the first prediction mode of unidirectional intra prediction or bidirectional intra prediction. IntraPredTypeL1 indicates the second prediction mode of bidirectional intra prediction. IntraPredAngleId is an index indicating the prediction angle. The prediction angle actually used in the generation of the predicted value is shown in FIG. Here, puPartIdx represents an index of the prediction unit that is divided in the quadtree division described with reference to FIG. 3B.
  • For example, when IntraPredMode is 4, IntraPredTypeL0 is Intra_Vertical, and the vertical direction is used as the reference for prediction.
  • FIG. 8B shows the relationship between the prediction mode and the prediction method in the case of PU_32 ⁇ 32 in FIG. 7B.
  • FIG. 8C shows the relationship between the prediction mode and the prediction method in the case of PU_4x4 in FIG. 7C, and PU_4x4, PU_8x8, and PU_16x16 in FIG. 7D.
  • FIGS. 8D and 8E show the relationship between the prediction mode and the prediction method in the case of PU_32x32 in FIG. 7D.
  • FIG. 8F shows the relationship between the prediction mode and the prediction method in the case of PU_4x4 in FIGS. 7A, 7B, and 7C and PU_4x4 to PU_16x16 in FIG. 7D.
  • FIG. 8G shows the relationship between the prediction mode and the prediction method in the case of PU_32 ⁇ 32 in FIGS. 7C and 7D.
  • the prediction mode information setting unit 603 outputs, under the control of the encoding control unit 115, the above-described prediction information corresponding to the designated prediction mode 651 to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602, and outputs the prediction mode 651 to the selection switch 604.
  • the unidirectional intra predicted image generation unit 601 has a function of generating the predicted image signal 161 for the plurality of prediction directions shown in FIG. 12. In FIG. 12, 33 different prediction directions are defined with respect to the vertical and horizontal coordinates indicated by bold lines; the directions of the typical prediction angles specified in H.264 are indicated by arrows. In this embodiment, 33 kinds of prediction directions are prepared along the lines drawn from the origin as indicated by the arrows. As in H.264, DC prediction, which predicts using the average value of the available reference pixels, is added, for a total of 34 prediction modes.
  • For example, when IntraPredMode is 4, IntraPredAngleIdL0 is 4.
  • the arrows included in the range labeled “Intra_Vertical” at the bottom of FIG. 12 indicate prediction modes whose prediction type is Intra_Vertical, and the arrows included in the range labeled “Intra_Horizontal” on the right side of FIG. 12 indicate prediction modes whose prediction type is Intra_Horizontal.
  • FIG. 11 shows the relationship between IntraPredAngleIdLX and intraPredAngle used for predictive image value generation.
  • intraPredAngle indicates a prediction angle that is actually used when a predicted value is generated.
  • a predicted value generation method is expressed by a mathematical formula (3).
  • BLK_SIZE indicates the size of the pixel block (prediction unit)
  • ref [] indicates an array in which reference image signals are stored.
  • Pred (k, m) indicates the generated predicted image signal 161.
  • predicted values can be generated in the same manner according to the table of FIG.
  • The above is the description of the unidirectional intra predicted image generation unit 601 in this embodiment.
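To make the angular prediction concrete, the following sketch assumes a common 1/32-pel linear-interpolation form for Equation (3); the exact equation is not reproduced in this text, so the function name, the reference-pixel layout, and the 5-bit fractional accuracy are illustrative assumptions (non-negative Intra_Vertical-type angles only, for simplicity):

```python
def unidir_intra_pred(ref, blk_size, intra_pred_angle):
    # ref: reconstructed reference pixels in the row above the block,
    #      starting at the block's left column (corresponds to ref[])
    # intra_pred_angle: the angle looked up from IntraPredAngleIdLX
    pred = [[0] * blk_size for _ in range(blk_size)]
    for k in range(blk_size):                # row inside the prediction unit
        offset = (k + 1) * intra_pred_angle  # displacement along the direction
        idx, frac = offset >> 5, offset & 31 # integer part / 1/32-pel part
        for m in range(blk_size):            # column inside the prediction unit
            a = ref[m + idx]
            b = ref[m + idx + 1]
            # linear interpolation between the two nearest reference pixels
            pred[k][m] = (a * (32 - frac) + b * frac + 16) >> 5
    return pred
```

With intra_pred_angle = 0, every row simply copies the reference row above the block, which matches vertical prediction.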
  • FIG. 13 shows a block diagram of the bidirectional intra-predicted image generation unit 602.
  • the bidirectional intra predicted image generation unit 602 includes a first unidirectional intra predicted image generation unit 1301, a second unidirectional intra predicted image generation unit 1302, and a weighted average unit 1303, and has a function of generating two unidirectional intra predicted images based on the input reference image signal 159 and generating the predicted image signal 161 by weighted averaging them.
  • the functions of the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are the same. In either case, a prediction image signal corresponding to a prediction mode given according to prediction mode information controlled by the encoding control unit 115 is generated.
  • a first predicted image signal 1351 is output from the first unidirectional intra predicted image generation unit 1301, and a second predicted image signal 1352 is output from the second unidirectional intra predicted image generation unit 1302.
  • Each predicted image signal is input to the weighted average unit 1303, and weighted average processing is performed.
  • the output signal of the weighted average unit 1303 is also called a third predicted image signal.
  • the table in FIG. 14 is a table for deriving two unidirectional intra prediction modes from the bidirectional intra prediction mode.
  • BipredIdx is derived using Equation (4).
  • the first predicted image signal 1351 and the second predicted image signal 1352, generated by the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 respectively, are input to the weighted average unit 1303.
  • the weighted average unit 1303 calculates the Euclidean distance or the city-block distance (Manhattan distance) based on the prediction directions of IntraPredModeL0 and IntraPredModeL1, and derives the weight components used in the weighted average process.
  • the weight component of each pixel is represented by the reciprocal of the Euclidean distance or city-block distance from the reference pixel used for prediction, and is generalized by the following equation.
  • for the Euclidean distance, ΔL is expressed by the following equation.
  • for the city-block distance, ΔL is expressed by the following equation.
  • the weight table for each prediction mode is generalized by the following equation.
  • ⁇ L0 (n) represents the weight component of the pixel position n in IntraPredModeL0
  • ⁇ L1 (n) represents the weight component of the pixel position n in IntraPredModeL1. Therefore, the final prediction signal at the pixel position n is expressed by the following equation.
  • Bipred (n) represents the predicted image signal at the pixel position n
  • PredL0 (n) and PredL1 (n) are the predicted image signals of IntraPredModeL0 and IntraPredModeL1, respectively.
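The per-pixel combination Bipred(n) = ωL0(n)·PredL0(n) + ωL1(n)·PredL1(n) can be sketched as follows; this assumes (as the reciprocal-distance derivation implies) that the two weights are normalized so they sum to 1 at each pixel position n:

```python
def bipred(pred_l0, pred_l1, w_l0, w_l1):
    # pred_l0 / pred_l1: predicted image signals of IntraPredModeL0 / L1
    # w_l0 / w_l1: per-pixel weight components (e.g. reciprocal distances)
    out = []
    for p0, p1, a, b in zip(pred_l0, pred_l1, w_l0, w_l1):
        # normalized weighted average of the two unidirectional predictions
        out.append((a * p0 + b * p1) / (a + b))
    return out
```

With equal weights this reduces to the plain average of the two unidirectional predictions.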
  • the prediction signal is generated by selecting two prediction modes for generating the prediction pixel.
  • a prediction value may be generated by selecting three or more prediction modes.
  • the ratio of the reciprocal of the spatial distance from the reference pixel to the prediction pixel may be set as the weighting factor.
  • In the above, the reciprocal of the Euclidean distance or city-block distance from the reference pixel used in the prediction mode is used directly as the weight component.
  • Alternatively, the weight component may be set using a distribution model in which the Euclidean distance or city-block distance from the reference pixel is the variable.
  • the distribution model uses at least one of a linear model, an M-order function (M ≥ 1), a nonlinear function such as a one-sided Laplace distribution or a one-sided Gaussian distribution, and a fixed value that is constant regardless of the distance from the reference pixel.
  • When the one-sided Laplace distribution is used, the weight component is expressed by the following equation.
  • ⁇ (n) is a weight component at the position n of the predicted pixel
  • ⁇ 2 is variance
  • A is a constant (A> 0).
  • When the one-sided Gaussian distribution is used, the weight component is expressed by the following equation.
  • σ is the standard deviation
  • B is a constant (B> 0).
  • an isotropic correlation model obtained by modeling an autocorrelation function, an elliptic correlation model, or a generalized Gaussian model obtained by generalizing a Laplace function or a Gaussian function may also be used as the weight component model.
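The one-sided Laplace and one-sided Gaussian weight models can be sketched as below; the exact equations are not reproduced in this text, so the standard one-sided forms (and the constants A, B as simple scale factors) are assumptions:

```python
import math

def laplace_weight(d, sigma2, A=1.0):
    # one-sided Laplace model: the weight decays exponentially with the
    # distance d from the reference pixel; sigma2 is the variance, A > 0
    return A * math.exp(-math.sqrt(2.0 / sigma2) * d)

def gauss_weight(d, sigma, B=1.0):
    # one-sided Gaussian model: quadratic-exponential decay with d;
    # sigma is the standard deviation, B > 0
    return B * math.exp(-(d * d) / (2.0 * sigma * sigma))
```

Both models give the maximum weight at distance 0 and decrease monotonically as the predicted pixel moves away from the reference pixel.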
  • If Equation (5), Equation (8), Equation (10), and Equation (11) are calculated each time a predicted image is generated, a plurality of multipliers are required and the hardware scale increases. For this reason, the circuit scale required for the calculation can be reduced by calculating the weight components in advance according to the relative distance for each prediction mode and holding them in tables.
  • Hereinafter, a method for deriving the weight components when the city-block distance is used will be described.
  • the city-block distance ΔL_L0 of IntraPredModeL0 and the city-block distance ΔL_L1 of IntraPredModeL1 are calculated from Equation (7).
  • the relative distance varies depending on the prediction directions (also referred to as reference prediction directions) of the two prediction modes.
  • the distance can be derived using Expression (6) or Expression (7) according to each prediction mode.
  • the distance is 2 at all pixel positions.
  • the table sizes of these distance tables may increase.
  • FIG. 17 shows the mapping of IntraPredModeLX used for distance table derivation.
  • tables are prepared only for the prediction modes whose prediction angles lie at 45-degree intervals and for DC prediction, and the other prediction angles are mapped to the closest prepared reference prediction mode.
  • When the distances are equal, the index is mapped to the smaller one.
  • the prediction mode shown in “MappedIntraPredMode” in FIG. 17 is referred to, and the distance table can be derived.
  • the relative distance for each pixel in the two prediction modes is calculated using the following equation.
  • BLK_WIDTH and BLK_HEIGHT indicate the width and height of the pixel block (prediction unit), respectively, and DistDiff (n) indicates the relative distance between the two prediction modes at the pixel position n.
  • SHIFT indicates the calculation accuracy of the decimal point calculation of the weight component, and an optimal combination may be selected by balancing the coding performance and the circuit scale at the time of hardware implementation.
  • FIGS. 18A and 18B show examples in which the weight components using the one-sided Laplace distribution model in this embodiment are tabulated.
  • Other PuSizes can also be derived using Equation (5), Equation (8), Equation (10), and Equation (11).
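The precomputed, fixed-point weight tables described above can be sketched as follows; the SHIFT accuracy of 6 bits and the reciprocal-distance weighting are illustrative assumptions (the actual table values of FIGS. 18A and 18B are not reproduced here):

```python
def build_weight_table(dist_l0, dist_l1, shift=6):
    # Precompute integer weights per pixel position so that no divider or
    # floating-point multiplier is needed at prediction time:
    # w0 + w1 == (1 << shift) for every pixel.
    table = []
    for d0, d1 in zip(dist_l0, dist_l1):
        # reciprocal-distance ratio: w0 proportional to 1/d0
        w0 = round((1 << shift) * d1 / (d0 + d1))
        table.append((w0, (1 << shift) - w0))
    return table

def bipred_fixed(p0, p1, w0, w1, shift=6):
    # integer weighted average with rounding, as used at prediction time
    return (w0 * p0 + w1 * p1 + (1 << (shift - 1))) >> shift
```

Choosing SHIFT trades coding performance against circuit scale, as noted above: a larger SHIFT gives finer weight granularity at the cost of wider multipliers.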
  • Alternatively, a predetermined weighting factor may be prepared as a table for each pixel position or each pixel group for every combination of the unidirectional intra prediction modes IntraPredModeL0 and IntraPredModeL1, and the above Bipred (n) may be calculated from it. In this case, it is expressed by the following equation.
  • ⁇ t (n) is a weighting coefficient at the pixel position n, and has different values depending on IntraPredModeL0 and IntraPredModeL1.
  • FIG. 19 shows a block diagram of the bidirectional intra prediction mode generation unit 605.
  • the bidirectional intra prediction mode generation unit 605 includes a first prediction mode generation unit 1901 and a second prediction mode generation unit 1902, and has a function of outputting, based on the prediction mode 651 and the reference intra prediction mode information 164, the bidirectional intra prediction mode information 652, which is a combination of two types of unidirectional intra prediction.
  • One of the first prediction mode generation unit 1901 and the second prediction mode generation unit 1902 is connected to the output terminal of the selection switch 1903 in accordance with IntraBipredTypeIdx included in the prediction mode 651.
  • the first prediction mode generation unit 1901 outputs a combination of two types of unidirectional intra predictions according to the fixed table method described above.
  • the table in FIG. 20 is a table for deriving a combination of two types of unidirectional intra prediction corresponding to IntraPredMode, and corresponds to the prediction mode configuration in which the unidirectional intra prediction mode is excluded in FIG. 8A.
  • BipredIdx in the figure is an index of the bidirectional intra prediction mode, and is derived using the above equation (4).
  • the table for deriving IntraPredModeL0 and IntraPredModeL1 from BipredIdx is not limited to FIG. 20, and any one-way intra prediction mode shown in FIGS. 8A and 8B may be set as IntraPredModeL0 and IntraPredModeL1.
  • the method for deriving the bidirectional intra prediction mode by the first prediction mode generation unit 1901 is referred to as a first generation method.
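The first generation method (fixed table method) can be sketched as follows; the actual entries of FIG. 20 and the exact form of Equation (4) are not reproduced in this text, so the table contents and the index derivation below are illustrative assumptions:

```python
# Hypothetical excerpt of the fixed table of FIG. 20:
# BipredIdx -> (IntraPredModeL0, IntraPredModeL1); placeholder entries only.
FIXED_TABLE = {0: (0, 1), 1: (0, 2), 2: (1, 2)}

def first_generation_method(intra_pred_mode, intra_uni_mode_num):
    # Equation (4) is assumed to number the bidirectional modes after the
    # unidirectional ones, i.e. BipredIdx = IntraPredMode - IntraUniModeNum.
    bipred_idx = intra_pred_mode - intra_uni_mode_num
    return FIXED_TABLE[bipred_idx]
```

Since the table is fixed in advance, the encoder and decoder derive the same pair of unidirectional modes from BipredIdx without any side information.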
  • the second prediction mode generation unit 1902 outputs bi-directional intra prediction mode information 652 that is a combination of two types of unidirectional intra prediction using the reference intra prediction mode information 164 according to the direct method described above.
  • the reference intra prediction mode information 164 corresponding to the adjacent blocks A and B, which are already-encoded prediction units to which the pixel positions a and b respectively belong, is used as shown in FIG.
  • the reference intra prediction mode information 164 corresponding to the adjacent block A is referred to as IntraPredModeA
  • the reference intra prediction mode information 164 corresponding to the adjacent block B is referred to as IntraPredModeB.
  • When bidirectional intra prediction is applied to the adjacent block A, the first and second unidirectional intra prediction modes in IntraPredModeA are referred to as IntraPredModeAL0 and IntraPredModeAL1, respectively.
  • When unidirectional intra prediction is applied to the adjacent block A, IntraPredModeA holds the same information as IntraPredModeAL0, and a predetermined fixed value (for example, minus 1) is set in IntraPredModeAL1.
  • As with the adjacent block A, IntraPredModeBL0 and IntraPredModeBL1 are defined for the adjacent block B.
  • the second prediction mode generation unit 1902 sets IntraPredModeAL0 as the first unidirectional intra prediction mode (IntraPredModeL0) and IntraPredModeBL0 as the second unidirectional intra prediction mode (IntraPredModeL1) in the bidirectional intra prediction mode information 652.
  • IntraPredModeAL1 may be set instead of IntraPredModeAL0
  • IntraPredModeBL1 may be set instead of IntraPredModeBL0.
  • the method for deriving the bidirectional intra prediction mode by the second prediction mode generation unit 1902 is referred to as a second generation method.
  • Alternatively, IntraPredModeAL0 or IntraPredModeBL1 may be set as the first unidirectional intra prediction mode (IntraPredModeL0), and a prediction mode modified from IntraPredModeL0 by a predetermined method may be set as the second unidirectional intra prediction mode (IntraPredModeL1).
  • FIG. 22A shows a table for setting, as the second unidirectional intra prediction mode, an intra prediction mode whose prediction direction is adjacent to that of the first unidirectional intra prediction mode. When there are two intra prediction modes whose prediction directions are adjacent to the first unidirectional intra prediction mode, the intra prediction mode with the smaller intra prediction mode index IntraPredMode is selected.
  • By deriving IntraPredModeL1 using the table shown in FIG. 22A, bidirectional intra prediction is performed using prediction directions adjacent to each other, so a filtering effect that removes noise in the predicted image signal can be obtained. Therefore, the prediction efficiency is improved.
  • this method of deriving the bidirectional intra prediction mode by the second prediction mode generation unit 1902 is referred to as a third generation method.
  • IntraPredModeL1 may be derived using the table shown in FIG. 22C instead of the tables shown in FIGS. 22A and 22B.
  • IntraPredModeL0 and IntraPredModeL1 shown in FIG. 22C have a relationship in which the prediction direction is reversed. Therefore, since it is possible to perform the interpolation prediction so as to sandwich the encoded prediction unit, the prediction efficiency is improved.
  • the method for deriving the bidirectional intra prediction mode by the second prediction mode generation unit 1902 is referred to as a fourth generation method.
  • When bidirectional intra prediction is performed on an adjacent block, its bidirectional intra prediction modes may be set as IntraPredModeL0 and IntraPredModeL1.
  • For example, when the adjacent block A uses bidirectional intra prediction, IntraPredModeAL0 is set in IntraPredModeL0 and IntraPredModeAL1 is set in IntraPredModeL1. Since the same applies to the case where the adjacent block B uses bidirectional intra prediction, the description thereof is omitted.
  • the method for deriving the bidirectional intra prediction mode by the second prediction mode generation unit 1902 is referred to as a fifth generation method.
  • IntraPredModeL1 may be derived using a predetermined table from IntraPredModeL0.
  • the method for deriving the bidirectional intra prediction mode by the second prediction mode generation unit 1902 is referred to as a sixth generation method.
  • Each prediction mode generation unit uses one of the bidirectional prediction mode generation methods (from the second generation method to the sixth generation method). FIG. 24 shows which adjacent blocks are used when the second generation method is used and when the third to sixth generation methods are used. Note that the correspondence between the second to Nth prediction mode generation units and IntraBipredTypeIdx is not limited to that in FIG. 24 and may be set in any way.
  • FIGS. 8A and 8B show the configuration of the prediction modes according to the present embodiment.
  • FIG. 8B shows 34 types of unidirectional intra prediction and 8 types of bidirectional intra prediction.
  • When IntraPredMode is 34, 35, or 36, the bidirectional intra prediction mode information 652 is generated by the first generation method using the first prediction mode generation unit 1901. When IntraPredMode is 37 to 41, the bidirectional intra prediction mode information 652 is generated using the second prediction mode generation unit 1902, the third prediction mode generation unit 2301, the fourth prediction mode generation unit, and the fifth prediction mode generation unit (not shown) described in this embodiment.
  • the number of unidirectional intra predictions and the number of bidirectional intra predictions may be changed depending on the size of the prediction unit.
  • the number of modes using the first prediction mode generation unit 1901 and the number of modes using the prediction mode generation units from the second prediction mode generation unit 1902 to the Nth prediction mode generation unit 2302 may each be set to any number (the number of bidirectional modes may also be increased).
  • FIG. 9B shows an example in which there are 17 types of unidirectional intra prediction and 4 types of bidirectional intra prediction, and all four types of bidirectional intra prediction use the second prediction mode generation unit 1902 and subsequent units.
  • IntraBipredTypeIdx is not limited to that shown in FIGS. 8B and 9B. If the image encoding apparatus of the present embodiment and the corresponding image decoding apparatus have information indicating the same correspondence relationship in advance, the correspondence relationship can be arbitrarily set.
  • In that case, IntraBipredTypeIdx of the BipredIdx is reset to 0, and the bidirectional intra prediction mode is generated in the first prediction mode generation unit 1901 according to the table shown in FIG.
  • Intra_Vertical with prediction angle index −8, and Intra_DC, are newly selected as the bidirectional intra prediction modes.
  • Alternatively, instead of using the same bidirectional intra prediction mode as in the first prediction mode generation unit 1901 for that BipredIdx, another bidirectional intra prediction mode shown in the figure may be used.
  • When a BipredIdx is already assigned in the first prediction mode generation unit 1901, it is excluded from the usable bidirectional intra prediction modes. That is, when BipredIdx is 0 to 2 in FIG. 25A, those bidirectional intra prediction modes are excluded. Accordingly, the bidirectional intra prediction modes with BipredIdx of 3 or more shown in FIG. 20 can be used.
  • the unidirectional intra prediction mode may be replaced and used.
  • the unidirectional intra prediction mode to be used as a replacement is preferably a unidirectional intra prediction mode having a prediction direction different from the prediction mode candidates.
  • Since there are 17 unidirectional intra prediction modes, a unidirectional intra prediction mode other than the above-mentioned 17 modes, out of the maximum 34 modes shown in FIG., is used.
  • Instead of using other bidirectional prediction modes, the bidirectional intra prediction mode information may be encoded with the total number of usable bidirectional prediction modes reduced, as shown in FIG. 25C. In the example of FIG. 25C, the total number of bidirectional intra prediction modes is 8, and when BipredIdx is 5, the mode overlaps with a bidirectional intra prediction mode generated by another generation method.
  • In that case, the bidirectional prediction mode information is encoded with the total number of bidirectional prediction modes set to 7. Accordingly, the average code amount required for the bidirectional prediction mode information is generally smaller than when the total number of bidirectional prediction modes is 8. In this case, the total number of bidirectional prediction modes may change for each prediction unit.
  • FIG. 26 shows the configuration of the prediction mode for the color difference signal in this embodiment.
  • Intra_pred_mode_chroma in FIG. 26 indicates a prediction mode index in the color difference signal.
  • When intra_pred_mode_chroma is 0 to 3, predetermined unidirectional intra prediction (vertical, horizontal, DC, or diagonal) is applied. When intra_pred_mode_chroma is 4, the prediction mode IntraPredMode for the luminance signal in the encoded prediction unit is applied as the prediction mode for the color difference signal.
  • When the encoded prediction unit has the prediction mode configuration shown in FIG. 8B, unidirectional intra prediction is applied when IntraPredMode is 0 to 33, and the above-described bidirectional intra prediction is applied when it is 34 or more.
  • When intra_pred_mode_chroma is 4 and bidirectional intra prediction is applied in the prediction mode IntraPredMode for the luminance signal, either of the two unidirectional intra prediction modes (IntraPredModeL0 or IntraPredModeL1) may be applied as the prediction mode for the color difference signal.
  • the internal configuration of the intra prediction unit 109 may be the configuration shown in FIG.
  • In this configuration, a temporary image buffer 2701 is added, and the bidirectional intra predicted image generation unit 602 is replaced with a weighted average unit 2702.
  • the temporary image buffer 2701 has a function of temporarily storing, for each prediction mode, the predicted image signal 161 generated by the unidirectional intra predicted image generation unit 601, and outputs the predicted image signal 161 corresponding to the necessary prediction mode to the weighted average unit 2702 according to the prediction mode controlled by the encoding control unit 115 and the bidirectional intra prediction mode information 652 output from the bidirectional intra prediction mode generation unit 605. This eliminates the need for the bidirectional intra predicted image generation unit 602 to hold the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302, making it possible to reduce the hardware scale.
  • the syntax indicates the structure of encoded data (for example, encoded data 162 in FIG. 1) when the image encoding device encodes moving image data.
  • the moving picture decoding apparatus interprets the syntax with reference to the same syntax structure.
  • FIG. 28 shows an example of syntax 2800 used by the video encoding apparatus of FIG.
  • the syntax 2800 includes three parts: a high level syntax 2801, a slice level syntax 2802, and a coding tree level syntax 2803.
  • the high level syntax 2801 includes syntax information of a layer higher than the slice.
  • a slice refers to a rectangular area or a continuous area included in a frame or a field.
  • the slice level syntax 2802 includes information necessary for decoding each slice.
  • the coding tree level syntax 2803 includes information necessary for decoding each coding tree (ie, each coding tree unit). Each of these parts includes more detailed syntax.
  • the high level syntax 2801 includes sequence and picture level syntax such as a sequence parameter set syntax 2804 and a picture parameter set syntax 2805.
  • the slice level syntax 2802 includes a slice header syntax 2806, a slice data syntax 2807, and the like.
  • the coding tree level syntax 2803 includes a coding tree unit syntax 2808, a prediction unit syntax 2809, and the like.
  • the coding tree unit syntax 2808 can have a quadtree structure. Specifically, the coding tree unit syntax 2808 can be recursively called as a syntax element of the coding tree unit syntax 2808. That is, one coding tree unit can be subdivided with a quadtree.
  • the coding tree unit syntax 2808 includes a transform unit syntax 2810.
  • the transform unit syntax 2810 is invoked at each coding tree unit syntax 2808 at the extreme end of the quadtree.
  • the transform unit syntax 2810 describes information related to inverse orthogonal transformation and quantization.
  • FIG. 29 exemplifies slice header syntax 2806 according to the present embodiment.
  • the slice_bipred_intra_flag shown in FIG. 29 is a syntax element indicating, for example, validity / invalidity of bidirectional intra prediction according to the present embodiment for the slice.
  • When slice_bipred_intra_flag is 0, only unidirectional intra prediction is performed in the slice.
  • In this case, unidirectional intra prediction in which IntraBipredFlag [puPartIdx] is 0 in FIGS. 8A and 8B, 9A and 9B, and FIG. 10 may be performed, or the intra prediction specified in H.264 may be performed.
  • When slice_bipred_intra_flag is 1, the bidirectional intra prediction according to the present embodiment is valid in the entire area of the slice.
  • Alternatively, the validity/invalidity of the prediction according to the present embodiment may be specified for each local region in the slice.
  • FIG. 30A shows an example of the prediction unit syntax.
  • Pred_mode in the figure indicates the prediction type of the prediction unit.
  • MODE_INTRA indicates that the prediction type is intra prediction.
  • intra_split_flag is a flag indicating whether or not the prediction unit is further divided into four prediction units. When intra_split_flag is 1, the prediction unit is divided into four prediction units, each half the vertical and horizontal size. When intra_split_flag is 0, the prediction unit is not divided.
  • Intra_luma_bipred_flag [i] is a flag indicating whether the prediction mode IntraPredMode applied to the prediction unit is a unidirectional intra prediction mode or a bidirectional intra prediction mode. i indicates the position of the divided prediction unit, and 0 is set when the intra_split_flag is 0, and 0 to 3 when the intra_split_flag is 1. In this flag, the value of IntraBipredFlag of the prediction unit shown in FIGS. 8A and 8B, 9A and 9B, and 10 is set.
  • When intra_luma_bipred_flag [i] is 1, this indicates that the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode [i], which is information identifying the used bidirectional intra prediction mode among the plurality of prepared bidirectional intra prediction modes, is encoded.
  • intra_luma_bipred_mode [i] may be encoded with a fixed length according to the bidirectional intra prediction mode number IntraBiModeNum shown in FIGS. 7A to 7D, or may be encoded using a predetermined code table.
  • prev_intra_luma_unipred_flag [i] is a flag indicating whether or not the prediction value MostProbable of the prediction mode calculated from the adjacent block and the intra prediction mode of the prediction unit are the same. Details of the MostProbable calculation method will be described later. When prev_intra_luma_unipred_flag [i] is 1, it indicates that the MostProbable and the intra prediction mode IntraPredMode are equal.
  • When prev_intra_luma_unipred_flag [i] is 0, it indicates that MostProbable and the intra prediction mode IntraPredMode are different, and rem_intra_luma_unipred_mode [i], information that specifies which mode other than MostProbable the intra prediction mode IntraPredMode is, is encoded.
  • rem_intra_luma_unipred_mode [i] may be encoded with a fixed length according to the unidirectional intra prediction mode number IntraUniModeNum shown in FIGS. 7A and 7B, or may be encoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode [i] is calculated using the following equation.
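The equation itself is not reproduced in this excerpt; the sketch below assumes the conventional H.264-style renumbering, in which the modes other than MostProbable are packed into the range 0 to IntraUniModeNum − 2:

```python
def rem_intra_luma_unipred_mode(intra_pred_mode, most_probable):
    # Only encoded when the mode differs from MostProbable
    # (i.e., prev_intra_luma_unipred_flag is 0).
    assert intra_pred_mode != most_probable
    if intra_pred_mode < most_probable:
        return intra_pred_mode
    return intra_pred_mode - 1  # skip over the MostProbable slot
```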
  • MostProbable is calculated according to the following equation.
  • Min (x, y) is a function that returns the smaller of the inputs x and y.
  • intraPredModeAL0 and intraPredModeBL0 respectively indicate the first unidirectional intra prediction modes of the prediction units adjacent to the left and above the encoded prediction unit as described above.
  • When only one of the adjacent prediction units is referable, the first unidirectional intra prediction mode of the referable prediction unit is set as MostProbable; when neither is referable, Intra_DC is set in MostProbable.
  • When MostProbable is larger than the number of unidirectional intra prediction modes IntraUniPredModeNum of the prediction unit to be encoded, MostProbable is recalculated using the following equation.
  • MappedProbable () is a table for converting MostProbable, and an example is shown in FIG. 31.
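Combining the rules above, the MostProbable derivation can be sketched as follows. The mode number assumed for Intra_DC and the treatment of non-referable neighbours follow the H.264 convention and are assumptions, and mapped_probable stands in for the conversion table of FIG. 31:

```python
INTRA_DC = 2  # assumed mode number for Intra_DC (H.264 convention)

def most_probable_mode(mode_a_l0, mode_b_l0, intra_uni_pred_mode_num,
                       mapped_probable):
    """mode_a_l0 / mode_b_l0: first unidirectional intra prediction modes of
    the left and upper neighbours; None when the neighbour is not referable."""
    if mode_a_l0 is None and mode_b_l0 is None:
        return INTRA_DC                      # neither neighbour referable
    if mode_a_l0 is None:
        mp = mode_b_l0
    elif mode_b_l0 is None:
        mp = mode_a_l0
    else:
        mp = min(mode_a_l0, mode_b_l0)       # Min(x, y) of the text
    # Re-map when the neighbour's mode number falls outside the range of
    # unidirectional modes available in the current prediction unit
    # (interpreting "larger than IntraUniPredModeNum" as "out of range").
    if mp >= intra_uni_pred_mode_num:
        mp = mapped_probable[mp]
    return mp
```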
  • luma_pred_mode_code_type [i] indicates the type of prediction mode IntraPredMode applied to the prediction unit: 0 (IntraUnifiedMostProb) indicates unidirectional intra prediction in which the intra prediction mode is the same as MostProbable, 1 indicates unidirectional intra prediction in which the intra prediction mode is different from MostProbable, and 2 (IntraBipred) indicates that it is a bidirectional intra prediction mode.
  • FIGS. 32A, 32B, 32C, and 32D show examples of the assignment of bin and the number of modes according to the meaning corresponding to luma_pred_mode_code_type and the mode configurations shown in FIG. 7A or 7B, FIG. 7C, and FIG. 7D.
  • When luma_pred_mode_code_type [i] is 0, the intra prediction mode is the MostProbable mode, so no further information needs to be encoded.
  • When luma_pred_mode_code_type [i] is 1, rem_intra_luma_unipred_mode [i], information that specifies which mode other than MostProbable the intra prediction mode IntraPredMode is, is encoded.
  • rem_intra_luma_unipred_mode [i] may be encoded with a fixed length according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIG. 7A, 7B, 7C, or 7D, or may be encoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode [i] is calculated using Equation (16). Further, when luma_pred_mode_code_type [i] is 2, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode [i], information that identifies the bidirectional intra prediction mode used among the prepared bidirectional intra prediction modes, is encoded.
  • intra_luma_bipred_mode [i] may be encoded with a fixed length according to the bidirectional intra prediction mode number IntraBiModeNum shown in FIG. 7A, 7B, 7C, or 7D, or may be encoded using a predetermined code table. Further, as described above, when the total number of bidirectional intra prediction modes differs for each prediction unit, it may be encoded using a code table that is switched according to the total number of bidirectional intra prediction modes indicated for each prediction unit.
  • FIG. 30D shows still another example relating to the prediction unit syntax.
  • In this case, the table shown in FIG. 4 may be used instead of FIG. 8A and FIG. 8B, or the entries of FIG. 8A or FIG. 8B whose IntraPredMode is 33 or more may be ignored.
  • Alternatively, IntraPredTypeL1 and IntraPredAngleIdL1, which indicate information related to the second prediction mode at the time of bidirectional intra prediction, may be deleted from FIG. 8A or FIG. 8B together with the unnecessary table entries whose IntraPredMode is 33 or more.
  • The same applies to FIG. 9A, FIG. 9B, or FIG. 10 corresponding to FIG. 8A or FIG. 8B.
  • pred_mode and intra_split_flag are the same as the syntax example described above, and thus description thereof is omitted.
  • intra_bipred_flag is a flag indicating whether or not bidirectional intra prediction can be used in the prediction unit to be encoded. When intra_bipred_flag is 0, bidirectional intra prediction is not used in the prediction unit. Even when intra_split_flag is 1, that is, when the prediction unit is further divided into four, bidirectional intra prediction is not used in any of the prediction units, and only unidirectional intra prediction is valid.
  • When intra_bipred_flag is 1, bidirectional intra prediction can be used in the prediction unit to be encoded. Even when intra_split_flag is 1, that is, when the prediction unit is further divided into four, bidirectional intra prediction can be selected in addition to unidirectional intra prediction in all of the prediction units.
  • For a region where bidirectional intra prediction is not needed, intra_bipred_flag is encoded as 0 to disable bidirectional intra prediction; since the amount of code necessary for encoding can then be reduced, encoding efficiency is improved.
  • FIG. 30E shows still another example relating to the prediction unit syntax.
  • intra_bipred_flag is a flag indicating whether or not bi-directional intra prediction can be used in the encoding prediction unit, and is the same as the above-described intra_bipred_flag, and thus the description thereof is omitted.
  • FIG. 33 shows the intra prediction unit 109 when adaptive reference pixel filtering is used. It differs from the intra prediction unit 109 shown in FIG. 6 in that a reference pixel filter unit 3301 is added.
  • the reference pixel filter unit 3301 receives the reference image signal 159 and the prediction mode 651, performs adaptive filter processing described later, and outputs a filtered reference image signal 3351.
  • the filtered reference image signal 3351 is input to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • the configuration and processing other than the reference pixel filter unit 3301 are the same as those of the intra prediction unit 109 shown in FIG.
  • the reference pixel filter unit 3301 determines whether to filter reference pixels used for intra prediction according to the reference pixel filter flag and the intra prediction mode included in the prediction mode 651.
  • The reference pixel filter flag is a flag indicating whether or not to filter the reference pixels when the intra prediction mode IntraPredMode is a value other than Intra_DC.
  • When IntraPredMode is Intra_DC, the reference pixels are not filtered and the reference pixel filter flag is set to 0.
  • When the reference pixel filter flag is 1, a filtered reference image signal 3351 is calculated by the following filtering.
  • p [x, y] indicates a reference pixel before filtering, and pf [x, y] indicates a reference pixel after filtering.
  • PuPartSize indicates the size (pixel) of the prediction unit.
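The filter equation itself is not reproduced in this excerpt. A plausible instance, assumed here, is the 3-tap [1, 2, 1] / 4 smoothing filter used for 8 × 8 intra prediction in H.264, applied along a line of reference pixels with the end pixels left unfiltered:

```python
def filter_reference_pixels(p):
    """Return pf, the filtered reference pixel line (assumed [1,2,1]/4 filter).

    p is a 1-D list of integer reference pixels; the end pixels are copied
    through unfiltered, a common convention for intra smoothing filters."""
    pf = list(p)
    for x in range(1, len(p) - 1):
        pf[x] = (p[x - 1] + 2 * p[x] + p[x + 1] + 2) >> 2  # +2 for rounding
    return pf
```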
  • FIG. 34A and 34B show a prediction unit syntax structure when performing adaptive reference pixel filtering.
  • FIG. 34A adds the syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 30A.
  • FIG. 34B adds syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 30C.
  • intra_luma_filter_flag [i] is further encoded when the intra prediction mode IntraPredMode [i] is other than Intra_DC. When this flag is 0, the reference pixels are not filtered; when intra_luma_filter_flag [i] is 1, reference pixel filtering is applied.
  • Note that intra_luma_filter_flag [i] need not always be encoded when the intra prediction mode IntraPredMode [i] is other than Intra_DC; for example, when IntraPredMode [i] is 0 to 2, intra_luma_filter_flag [i] need not be encoded. In this case, intra_luma_filter_flag [i] is set to 0.
  • the intra_luma_filter_flag [i] described above may be added in the same meaning.
  • At the time of decoding, decoded pixels can be used as the pixels adjacent to the left, above, and upper left, whereas at the time of encoding, the input image signal 151 is used as the pixels adjacent to the left, above, and upper left.
  • FIG. 35 shows the positions of the adjacent decoded pixels A (left), B (upper), and C (upper left) used for prediction of the prediction target pixel X. Composite intra prediction is therefore a so-called open-loop prediction method in which the prediction values differ between the image encoding device 100 and the moving image decoding device 5100.
  • FIG. 37 shows a block diagram of the intra prediction unit 109 when combined with composite intra prediction. A difference is that a composite intra predicted image generation unit 3601, a selection switch 3602, and a decoded image buffer 3701 are added to the intra prediction unit 109 shown in FIG.
  • When bidirectional intra prediction and composite intra prediction are combined, first, the selection switch 604 switches its output terminal to the unidirectional intra predicted image generation unit 601 or the bidirectional intra predicted image generation unit 602 according to the prediction mode information controlled by the encoding control unit 115.
  • Here, the output predicted image signal 161 is referred to as a direction predicted image signal 161.
  • the direction prediction image signal 161 is input to the composite intra prediction image generation unit 3601, and the prediction image signal 161 in the composite intra prediction is generated.
  • Details of the composite intra predicted image generation unit 3601 will be described later.
  • The selection switch 3602 switches between the predicted image signal 161 in composite intra prediction and the direction predicted image signal according to the composite intra prediction application flag in the prediction mode information controlled by the encoding control unit 115, and outputs the final predicted image signal 161 of the intra prediction unit 109.
  • When the composite intra prediction application flag is 1, the predicted image signal 161 output from the composite intra predicted image generation unit 3601 becomes the final predicted image signal 161; when the flag is 0, the direction predicted image signal 161 is the predicted image signal 161 that is finally output.
  • the predicted image signal output from the composite intra predicted image generation unit 3601 is also called a sixth predicted image signal.
  • The addition unit 106 adds the separately decoded prediction error signal 156 and the predicted image signal in units of pixels to generate a decoded image signal 157 for each pixel, which is stored in the decoded image buffer 3701.
  • The stored decoded image signal 157 in units of pixels is input to the composite intra predicted image generation unit 3601 as the reference pixel 3751, and is used, as the adjacent pixel 3751 shown in FIG. 35, for the pixel level prediction described later.
  • the composite intra prediction image generation unit 3601 includes a pixel level prediction signal generation unit 3801 and a composite intra prediction calculation unit 3802.
  • the pixel level prediction signal generation unit 3801 inputs the reference pixel 3751 as the adjacent pixel 3751 and outputs the pixel level prediction signal 3851 by predicting the prediction target pixel X from the adjacent pixel.
  • the pixel level prediction signal 3851 (X) of the prediction target pixel is calculated from A, B, and C indicating the adjacent pixel 3751 using Equation (21).
  • coefficients related to A, B, and C may be other values.
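Equation (21) is not reproduced in this excerpt; the sketch below assumes the classic planar-gradient predictor A + B − C (the lossless-mode predictor of JPEG), which is consistent with the note that the coefficients related to A, B, and C may take other values:

```python
def pixel_level_prediction(a, b, c):
    """Predict pixel X from its neighbours (assumed coefficients +1, +1, -1).

    a: left neighbour A, b: upper neighbour B, c: upper-left neighbour C."""
    return a + b - c
```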
  • the composite intra prediction calculation unit 3802 performs a weighted average of the direction prediction image signal 161 (X ′) and the pixel level prediction signal 3851 (X), and outputs a final prediction image signal 161 (P). Specifically, the following formula is used.
  • Since composite intra prediction is an open-loop method, the decoded image signal 157 may have different values in encoding and decoding. Therefore, after all the decoded image signals 157 in the prediction unit being encoded have been generated, the above-described composite intra prediction may be executed again using those decoded image signals 157 as adjacent pixels, so that the same predicted image signal 161 as in decoding is obtained; this signal is then added to the prediction error signal 152 to generate a decoded image signal 157 identical to that in decoding.
  • the weighting factor W may be switched according to the position of the prediction pixel in the prediction unit.
  • A predicted image signal generated using unidirectional intra prediction or bidirectional intra prediction derives its prediction values from spatially adjacent, already encoded reference pixels positioned to the left or above.
  • In such prediction, the absolute value of the prediction error tends to increase as the distance from the reference pixels increases. Therefore, by increasing the weighting coefficient of the direction predicted image signal 161 when the prediction pixel is close to the reference pixels and decreasing it when the pixel is far from them, the prediction accuracy can be improved.
  • At the time of encoding, the prediction error signal is generated using the input image signal.
  • Since the pixel level prediction signal 3851 is derived from the input image signal, the prediction accuracy of the pixel level prediction signal 3851 remains high compared with that of the direction predicted image signal 161 even when the spatial distance between the reference pixel position and the prediction pixel position increases.
  • However, if the weighting is simply chosen so that the direction predicted image signal 161 dominates near the reference pixels and the pixel level prediction signal 3851 dominates far from them, the prediction error at the time of encoding is reduced, but the prediction value at the time of encoding and the prediction value at the time of local decoding differ, so that the prediction accuracy is lowered. Therefore, especially when the value of the quantization parameter is large, the decrease in coding efficiency caused by this open-loop discrepancy can be suppressed by setting the value of W small as the spatial distance between the reference pixel position and the prediction pixel position becomes large.
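The weighted average and the distance-dependent weight described above can be sketched as follows. The blending form P = W · X′ + (1 − W) · X follows the text; the concrete weight schedule (the 0.75 / 0.25 endpoints and the decay law in the quantization parameter) is purely illustrative, since the actual formula is not reproduced here:

```python
def composite_prediction(direction_pred, pixel_level_pred, w):
    """P = W * X' + (1 - W) * X; w weights the directional prediction X'."""
    return w * direction_pred + (1.0 - w) * pixel_level_pred

def weight_for_distance(distance, qp, w_near=0.75, w_far=0.25):
    """Illustrative schedule only: the directional weight W shrinks as the
    prediction pixel moves away from the reference pixels, and shrinks
    faster when the quantization parameter qp is large (to limit the
    open-loop drift between encoder and decoder)."""
    decay = 1.0 / (1.0 + distance * (qp / 51.0))  # qp assumed in 0..51
    return w_far + (w_near - w_far) * decay
```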
  • FIG. 39A and 39B show the prediction unit syntax structure when performing composite intra prediction.
  • FIG. 39A is different from FIG. 30A in that a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction is added. This is equivalent to the above-described composite intra prediction application flag.
  • FIG. 39B adds a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction to FIG. 30C.
  • When combined_intra_pred_flag is 1, the selection switch 3602 shown in FIG. 37 is connected to the output terminal of the composite intra predicted image generation unit 3601.
  • When combined_intra_pred_flag is 0, the selection switch 3602 shown in FIG. 37 is connected to the output terminal of whichever of the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602 the selection switch 604 is connected to.
  • the intra_luma_filter_flag [i] described above may be added in the same meaning.
  • The video encoding apparatus according to the second embodiment differs from the video encoding apparatus according to the first embodiment in the details of the orthogonal transform and the inverse orthogonal transform.
  • In the following, the same parts as those in the first embodiment are denoted by the same reference numerals, and mainly the different parts will be described.
  • a moving picture decoding apparatus corresponding to the picture encoding apparatus according to the present embodiment will be described in a fifth embodiment.
  • FIG. 40 is a block diagram showing a video encoding apparatus according to the second embodiment.
  • the change from the moving picture encoding apparatus according to the first embodiment is that a transformation selection unit 4001 and a coefficient order control unit 4002 are added. Also, the internal structures of the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 are different.
  • processing performed by the moving image encoding apparatus in FIG. 40 will be described.
  • the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 will be described with reference to FIGS. 41 and 42, respectively.
  • the orthogonal transform unit 102 in FIG. 41 includes a first orthogonal transform unit 4101, a second orthogonal transform unit 4102, an Nth orthogonal transform unit 4103, and a transform selection switch 4104.
  • Among the N types of orthogonal transform units, there may be a plurality of units using the same orthogonal transform method with different transform sizes, or there may be a plurality of orthogonal transform units performing different orthogonal transform methods; both may also be mixed.
  • For example, the first orthogonal transform unit 4101 can be set to a 4 × 4 size DCT, the second orthogonal transform unit 4102 to an 8 × 8 size DCT, and the Nth orthogonal transform unit 4103 to a 16 × 16 size DCT.
  • Alternatively, the first orthogonal transform unit 4101 may be a 4 × 4 size DCT, the second orthogonal transform unit 4102 a 4 × 4 size DST (discrete sine transform), and the Nth orthogonal transform unit 4103 an 8 × 8 size KLT (Karhunen-Loève transform).
  • It is also possible to select a transform that is not an orthogonal transform, or a single transform; in the latter case, N = 1.
  • the conversion selection switch 4104 has a function of selecting the output terminal of the subtraction unit 101 according to the conversion selection information 4051.
  • the conversion selection information 4051 is one piece of information controlled by the encoding control unit 115, and is set by the conversion selection unit 4001 according to the prediction information 160.
  • When the transform selection information 4051 indicates the first orthogonal transform, the output terminal of the switch is connected to the first orthogonal transform unit 4101; when the transform selection information 4051 indicates the second orthogonal transform, the output terminal is connected to the second orthogonal transform unit 4102.
  • For example, the first orthogonal transform unit 4101 performs DCT, and the other orthogonal transform units 4102 and 4103 perform KLT (Karhunen-Loève transform).
  • the inverse orthogonal transform unit 105 in FIG. 42 includes a first inverse orthogonal transform unit 4201, a second inverse orthogonal transform unit 4202, an Nth inverse orthogonal transform unit 4203, and a transform selection switch 4204.
  • the transformation selection switch 4204 has a function of selecting the output terminal of the inverse quantization unit 104 according to the inputted transformation selection information 4051.
  • the conversion selection information 4051 is one piece of information controlled by the encoding control unit 115, and is set by the conversion selection unit 4001 according to the prediction information 160.
  • When the transform selection information 4051 indicates the first orthogonal transform, the output terminal of the switch is connected to the first inverse orthogonal transform unit 4201; when it indicates the second orthogonal transform, the output terminal is connected to the second inverse orthogonal transform unit 4202; and when it indicates the Nth orthogonal transform, the output terminal is connected to the Nth inverse orthogonal transform unit 4203.
  • the transform selection information 4051 set in the orthogonal transform unit 102 and the transform selection information 4051 set in the inverse orthogonal transform unit 105 are the same, and the inverse orthogonal transform corresponding to the transform performed in the orthogonal transform unit 102 is performed.
  • For example, the first inverse orthogonal transform unit 4201 performs inverse discrete cosine transform (hereinafter referred to as IDCT), and the second inverse orthogonal transform unit 4202 and the Nth inverse orthogonal transform unit 4203 perform inverse transforms based on KLT (Karhunen-Loève transform).
  • Although an example using IDCT and inverse KLT is shown here, an orthogonal transform such as the Hadamard transform or the discrete sine transform may be used, or a non-orthogonal transform may be used.
  • In any case, the corresponding inverse transform is performed in conjunction with the orthogonal transform unit 102.
  • The transform selection unit 4001 receives the prediction information 160, which is controlled by the encoding control unit 115 and includes the prediction mode set by the prediction selection unit 112 and the like. Based on the prediction information 160, the transform selection unit 4001 has a function of setting MappedTransformIdx, information indicating which orthogonal transform is used for which prediction mode.
  • FIG. 44 shows a block diagram of the coefficient order control unit 4002.
  • the coefficient order control unit 4002 includes a coefficient order selection switch 4404, a first coefficient order conversion unit 4401, a second coefficient order conversion unit 4402, and an Nth coefficient order conversion unit 4403.
  • The coefficient order selection switch 4404 has a function of switching the output terminal of the switch among the coefficient order conversion units 4401 to 4403 in accordance with the aforementioned MappedTransformIdx.
  • the N types of coefficient order conversion units 4401 to 4403 have a function of converting the two-dimensional data of the quantization conversion coefficient 154 quantized by the quantization unit 103 into one-dimensional data.
  • In H.264, two-dimensional data is converted into one-dimensional data using a zigzag scan.
  • The quantized transform coefficient 154, obtained by performing the quantization process on the orthogonally transformed transform coefficient 153, has the characteristic that the positions at which non-zero transform coefficients occur within the block are biased.
  • This occurrence tendency of non-zero transform coefficients has different properties for each prediction direction of intra prediction, while the occurrence tendency in the same prediction direction has similar properties. Therefore, when transforming the two-dimensional data into one-dimensional data (2D-1D conversion), performing entropy encoding preferentially from the transform coefficients at positions where the occurrence probability of non-zero transform coefficients is high makes it possible to reduce the information for encoding the transform coefficients.
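For reference, the H.264 zigzag scan mentioned above can be generated as follows, together with the 2D-1D conversion it drives (helper names are illustrative):

```python
def zigzag_order(n):
    """Classic H.264-style zigzag scan order for an n x n block of (y, x)."""
    return sorted(((y, x) for y in range(n) for x in range(n)),
                  key=lambda yx: (yx[0] + yx[1],
                                  # odd anti-diagonals run top-to-bottom,
                                  # even anti-diagonals bottom-to-top
                                  yx[0] if (yx[0] + yx[1]) % 2 else yx[1]))

def scan_2d_to_1d(block, order):
    """2D-1D conversion: read the 2-D coefficients in the given scan order."""
    return [block[y][x] for (y, x) in order]
```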
  • the coefficient order control unit 4002 may dynamically update the scan order in 2D-1D conversion.
  • the coefficient order control unit 4002 that performs such an operation is illustrated in FIG.
  • the coefficient order control unit 4002 includes an occurrence frequency counting unit 4501 and an updating unit 4502 in addition to the configuration of FIG.
  • the coefficient order conversion units 4401 to 4403 are the same except that the scan order is updated by the update unit 4502.
  • the occurrence frequency counting unit 4501 creates a histogram 4552 of the number of occurrences of non-zero coefficients in each element of the quantized transform coefficient sequence 4052 for each prediction mode.
  • the occurrence frequency counting unit 4501 inputs the created histogram 4552 to the updating unit 4502.
  • the update unit 4502 updates the coefficient order based on the histogram 4552 at a predetermined timing.
  • the timing is, for example, the timing when the coding process of the coding tree unit is finished, the timing when the coding process for one line in the coding tree unit is finished, or the like.
  • the update unit 4502 refers to the histogram 4552 and updates the coefficient order with respect to the prediction mode having an element in which the number of occurrences of non-zero coefficients is counted more than a threshold. For example, the update unit 4502 updates the prediction mode having an element in which the occurrence of a non-zero coefficient is counted 16 times or more. By providing a threshold value for the number of occurrences, the coefficient order is updated globally, so that it is difficult to converge to a local optimum solution.
  • Specifically, the update unit 4502 sorts the elements in descending order of the occurrence frequency of non-zero coefficients for the prediction mode to be updated. Sorting can be realized by existing algorithms such as bubble sort and quick sort. Then, the update unit 4502 inputs the update coefficient order 4551 indicating the order of the sorted elements to the coefficient order conversion units 4401 to 4403 corresponding to the prediction mode to be updated.
  • When the update coefficient order 4551 is input, each coefficient order conversion unit performs 2D-1D conversion according to the updated scan order.
  • the initial scan order of each 2D-1D conversion unit needs to be determined in advance.
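The counting-and-sorting update described above can be sketched as follows; the class and method names are illustrative, and the default threshold of 16 follows the example in the text:

```python
from collections import defaultdict

class ScanOrderUpdater:
    """Per-prediction-mode histogram of non-zero coefficient positions,
    with the scan re-sorted in descending frequency once any position's
    count reaches the threshold."""

    def __init__(self, initial_order, threshold=16):
        self.order = {}                                   # mode -> scan order
        self.hist = defaultdict(lambda: defaultdict(int)) # mode -> pos -> count
        self.initial_order = list(initial_order)
        self.threshold = threshold

    def count(self, mode, coeff_seq):
        """Count non-zero coefficients of one scanned coefficient sequence."""
        order = self.order.get(mode, self.initial_order)
        for pos, c in zip(order, coeff_seq):
            if c != 0:
                self.hist[mode][pos] += 1

    def update(self):
        """E.g. called at the end of a coding tree unit or of one CTU line."""
        for mode, h in self.hist.items():
            if max(h.values(), default=0) >= self.threshold:
                base = self.order.get(mode, self.initial_order)
                # stable sort: descending non-zero count, ties keep prior order
                self.order[mode] = sorted(base, key=lambda p: -h.get(p, 0))
```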
  • The occurrence tendency of non-zero coefficients in the quantized transform coefficients 154 changes according to the properties of the predicted image, the quantization information (quantization parameter), and the like; by following these changes, stably high encoding efficiency can be expected. Specifically, the amount of code generated by run-length encoding in the entropy encoding unit 113 can be suppressed.
  • the syntax configuration in the present embodiment is the same as in the first embodiment.
  • As a modification, the transform selection unit 4001 can select MappedTransformIdx independently of the prediction information 160.
  • In that case, information indicating which of the N types of orthogonal transforms and inverse orthogonal transforms is used is set in the entropy encoding unit 113 and encoded together with the quantized transform coefficient sequence 4052.
  • FIG. 46 shows an example of syntax in this modification.
  • directional_transform_idx shown in the syntax is information indicating which of the N orthogonal transforms has been selected.
  • FIG. 47 is a block diagram of the orthogonal transform unit 102 according to this embodiment.
  • the orthogonal transform unit 102 includes new processing units such as a first rotation transform unit 4701, a second rotation transform unit 4702, an Nth rotation transform unit 4703, and a discrete cosine transform unit 4704, and includes an existing transform selection switch 4104.
  • the discrete cosine transform unit 4704 performs DCT, for example.
  • the conversion coefficient after DCT is input to the conversion selection switch 4104.
  • the conversion selection switch 4104 connects the output end of the switch to any of the first rotation conversion unit 4701, the second rotation conversion unit 4702, and the Nth rotation conversion unit 4703 according to the conversion selection information 4051.
  • the switches are sequentially switched according to the control of the encoding control unit 115.
  • the rotation conversion units 4701 to 4703 perform rotation conversion on each conversion coefficient using a predetermined rotation matrix.
  • a conversion coefficient 153 after rotation conversion is output. This conversion is a reversible conversion.
  • As for the rotation matrix, which matrix to use may be determined using the encoding cost shown in the above Equations (1) and (2), or a table in which the prediction mode and the transform number are associated in advance, as shown in FIG. 43, may be prepared and used for the selection.
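Since the rotation matrices are predetermined and the transform is stated to be reversible, a minimal sketch using a 2-D Givens rotation (an assumption for illustration; the actual matrices are not given here) shows the reversibility property:

```python
import math

def rotate_pair(c0, c1, theta):
    """Apply a 2-D Givens rotation to a pair of transform coefficients.

    A rotation matrix is orthonormal, so the transform is reversible:
    rotating back by -theta restores the original pair exactly (up to
    floating-point error), and coefficient energy is preserved."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return (cos_t * c0 - sin_t * c1,
            sin_t * c0 + cos_t * c1)
```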
  • Although an example in which the rotation transform is applied before the quantization unit 103 has been shown, the rotation transform may instead be applied to the quantized transform coefficient 154 after the quantization process. In this case, the orthogonal transform unit 102 performs only DCT.
  • FIG. 48 is a block diagram of the inverse orthogonal transform unit 105 according to the present embodiment.
  • The inverse orthogonal transform unit 105 includes new processing units such as a first inverse rotation transform unit 4801, a second inverse rotation transform unit 4802, an Nth inverse rotation transform unit 4803, and an inverse discrete cosine transform unit 4804, and has the existing transform selection switch 4204.
  • The restored transform coefficient 155 input after the inverse quantization process is input to the transform selection switch 4204.
  • the conversion selection switch 4204 connects the output terminal of the switch to one of the first reverse rotation conversion unit 4801, the second reverse rotation conversion unit 4802, and the Nth reverse rotation conversion unit 4803 according to the conversion selection information 4051.
  • The inverse rotation transform processing, corresponding to the rotation transform used in the orthogonal transform unit 102, is performed in one of the inverse rotation transform units 4801 to 4803, and the result is output to the inverse discrete cosine transform unit 4804.
  • the inverse discrete cosine transform unit 4804 performs, for example, IDCT on the input signal to restore the restored prediction error signal 156.
  • Although an example using IDCT is shown here, an orthogonal transform such as the Hadamard transform or the discrete sine transform may be used, or a non-orthogonal transform may be used. In any case, the corresponding inverse transform is performed in conjunction with the orthogonal transform unit 102.
  • the syntax in this embodiment is shown in FIG.
  • the rotation_transform_idx shown in the syntax means the number of the rotation matrix to be used.
  • the fourth embodiment relates to a moving picture decoding apparatus.
  • the video encoding device corresponding to the video decoding device according to the present embodiment is as described in the first embodiment. That is, the moving picture decoding apparatus according to the present embodiment decodes encoded data generated by, for example, the moving picture encoding apparatus according to the first embodiment.
  • The moving picture decoding apparatus includes an input buffer 5001, an entropy decoding unit 5002, an inverse quantization unit 5003, an inverse orthogonal transform unit 5004, an addition unit 5005, a loop filter 5006, a reference image memory 5007, an intra prediction unit 5008, an inter prediction unit 5009, a prediction selection switch 5010, an output buffer 5011, a decoding control unit 5012, and an intra prediction mode memory 5013.
  • The moving picture decoding apparatus decodes the encoded data 5051 stored in the input buffer 5001, stores the decoded image 5060 in the output buffer 5011, and outputs it as an output image.
  • the encoded data 5051 is output from, for example, the moving image encoding apparatus shown in FIG. 1 or the like, and is temporarily stored in the input buffer 5001 through a storage system or transmission system (not shown).
• the entropy decoding unit 5002 decodes the encoded data 5051 based on the syntax, for each frame or field.
  • the entropy decoding unit 5002 sequentially entropy-decodes the code string of each syntax, and reproduces the encoding parameters of the encoding target block such as prediction information 5059 including the prediction mode information and the quantization transform coefficient 5052.
  • the encoding parameter is a parameter necessary for decoding, such as prediction information 5059, information on transform coefficients, information on quantization, and the like.
  • the inverse quantization unit 5003 performs inverse quantization on the quantized transform coefficient 5052 from the entropy decoding unit 5002 to obtain a restored transform coefficient 5053. Specifically, the inverse quantization unit 5003 performs inverse quantization according to the information on the quantization decoded by the entropy decoding unit 5002. The inverse quantization unit 5003 inputs the restored transform coefficient 5053 to the inverse orthogonal transform unit 5004.
• the inverse orthogonal transform unit 5004 performs, on the restored transform coefficient 5053 from the inverse quantization unit 5003, an inverse orthogonal transform corresponding to the orthogonal transform performed on the encoding side, and obtains a restored prediction error signal 5054.
  • the inverse orthogonal transform unit 5004 inputs the restored prediction error signal 5054 to the addition unit 5005.
  • the addition unit 5005 adds the restored prediction error signal 5054 and the corresponding predicted image signal 5058 to generate a decoded image signal 5055.
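The addition performed by the addition unit 5005 amounts to adding each restored prediction error sample to the corresponding predicted sample and clipping the result to the valid pixel range. The following is an illustrative sketch (function and variable names are not from this description, and the clipping step is an assumption based on standard practice):

```python
def reconstruct_block(pred, residual, bit_depth=8):
    """Add a restored prediction error block to a predicted block and
    clip each sample to [0, 2^bit_depth - 1], as in addition unit 5005
    (clipping assumed; names are illustrative)."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val)
             for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(pred, residual)]
```
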
  • the decoded image signal 5055 is input to the loop filter 5006.
  • the loop filter 5006 performs a deblocking filter, a Wiener filter, or the like on the input decoded image signal 5055 to generate a filtered image signal 5056.
• the generated filtered image signal 5056 is temporarily stored in the output buffer 5011 as the output image, and is also stored in the reference image memory 5007 as the reference image signal 5057.
  • the filtered image signal 5056 stored in the reference image memory 5007 is referred to by the intra prediction unit 5008 and the inter prediction unit 5009 as a reference image signal 5057 in units of frames or fields as necessary.
  • the filtered image signal 5056 temporarily accumulated in the output buffer 5011 is output according to the output timing managed by the decoding control unit 5012.
  • the intra prediction mode memory 5013 has the same function as the intra prediction mode memory 116 shown in FIG. 1 and stores intra prediction mode information 5061 applied to the prediction unit for which decoding has been completed.
• when the intra prediction unit 5008 generates bidirectional intra prediction mode information, the stored information is referred to as reference intra prediction mode information 5062 as necessary.
  • the intra prediction unit 5008, the inter prediction unit 5009, and the selection switch 5010 are substantially the same or similar elements as the intra prediction unit 109, the inter prediction unit 110, and the selection switch 111 in FIG.
• the intra prediction unit 5008 and the intra prediction unit 109 perform prediction using the reference intra prediction mode information 5062 stored in the intra prediction mode memory 5013 and the reference intra prediction mode information 164 stored in the intra prediction mode memory 116, respectively. For example, as in H.264, an intra prediction image is generated by performing pixel interpolation (copying, or copying after interpolation) along a prediction direction such as the vertical or horizontal direction, using encoded reference pixel values adjacent to the prediction target block.
• the prediction directions of intra prediction in H.264 are shown in the figures, together with the arrangement of reference pixels.
  • FIG. 5C illustrates a predicted image generation method in mode 1 (horizontal prediction)
  • FIG. 5D illustrates a predicted image generation method in mode 4 (diagonal lower right prediction).
• the inter prediction unit 5009 performs inter prediction using the reference image signal 5057 stored in the reference image memory 5007. Specifically, the inter prediction unit 5009 obtains the motion shift amount (motion vector) between the prediction target block and the reference image signal 5057 from the entropy decoding unit 5002, and performs interpolation processing (motion-compensated prediction) to generate an inter prediction image.
• the prediction selection switch 5010 selects the output terminal of the intra prediction unit 5008 or the output terminal of the inter prediction unit 5009 according to the decoded prediction information 5059, and inputs the intra prediction image or the inter prediction image to the addition unit 5005 as the predicted image signal 5058.
• when the prediction information 5059 indicates intra prediction, the prediction selection switch 5010 connects the switch to the output terminal of the intra prediction unit 5008; when it indicates inter prediction, the switch is connected to the output terminal of the inter prediction unit 5009.
• the decoding control unit 5012 controls each element of the moving picture decoding apparatus. Specifically, the decoding control unit 5012 performs various controls for the decoding processing including the above-described operations.
  • the intra prediction unit 5008 has the same configuration and processing content as the intra prediction unit 109 described in the first embodiment.
• An intra prediction unit 5008 (109 in FIG. 6) illustrated in FIG. 6 includes a unidirectional intra prediction image generation unit 601, a bidirectional intra prediction image generation unit 602, a prediction mode information setting unit 603, a selection switch 604, and a bidirectional intra prediction mode generation unit 605.
  • a reference image signal 5057 (159 in FIG. 6) is input from the reference image memory 5007 to the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602.
• the prediction mode information setting unit 603 sets the prediction mode to be generated by the unidirectional intra prediction image generation unit 601 or the bidirectional intra prediction image generation unit 602, and outputs it as the prediction mode 651.
  • the bidirectional intra prediction mode generation unit 605 outputs the bidirectional intra prediction mode information 652 according to the prediction mode 651 and the reference intra prediction mode information 164.
• the selection switch 604 has a function of switching between the output terminals of the respective intra predicted image generation units in accordance with the prediction mode 651. If the input prediction mode 651 is a unidirectional intra prediction mode, the switch is connected to the output terminal of the unidirectional intra prediction image generation unit 601; if the prediction mode 651 is a bidirectional intra prediction mode, the switch is connected to the output terminal of the bidirectional intra prediction image generation unit 602. Each of the intra predicted image generation units 601 and 602 generates a predicted image signal 5058 (161 in FIG. 6) according to the prediction mode 651, and the generated predicted image signal 5058 is output from the intra prediction unit 5008 (109 in FIG. 6).
• FIGS. 7A and 7B show the numbers of prediction modes according to the present embodiment for each block size.
  • PuSize indicates a pixel block (prediction unit) size to be predicted, and seven types of sizes from PU_2x2 to PU_128x128 are defined.
  • IntraUniModeNum represents the number of prediction modes for unidirectional intra prediction
  • IntraBiModeNum represents the number of prediction modes for bidirectional intra prediction.
  • Number of modes is the total number of prediction modes for each pixel block (prediction unit) size.
  • FIGS. 8A and 8B show the relationship between the prediction mode and the prediction method when PuSize is PU_8x8, PU_16x16, and PU_32x32.
• FIGS. 9A and 9B show the case where PuSize is PU_4x4, and FIG. 10 shows the case where PuSize is PU_64x64 or PU_128x128.
  • IntraPredMode indicates a prediction mode number
• IntraBipredFlag is a flag indicating whether or not bidirectional intra prediction is used. When the flag is 0, the prediction mode is a unidirectional intra prediction mode; when the flag is 1, the prediction mode is a bidirectional intra prediction mode.
• when the flag is 1, the bidirectional intra prediction mode generation unit 605 generates the bidirectional intra prediction mode information 652 in accordance with IntraBipredTypeIdx, which defines the bidirectional intra prediction generation method.
• when IntraBipredTypeIdx is 0, the two unidirectional intra prediction modes used for bidirectional intra prediction are set in a first prediction mode generation unit 1901, described later, using a predetermined table.
  • a method in which two types of unidirectional intra prediction modes used for bidirectional intra prediction are preliminarily tabled is referred to as a fixed table method.
  • FIG. 8A shows an example in which all bidirectional intra prediction modes are fixed table methods.
• when IntraBipredTypeIdx is a value larger than 0, the two unidirectional intra prediction modes used for bidirectional intra prediction are set based on the reference intra prediction mode information 164.
• a method in which the two unidirectional intra prediction modes used for bidirectional intra prediction are set based on the reference intra prediction mode information 164 is referred to as a direct method.
  • IntraBipredTypeIdx has different values depending on the method of deriving two types of unidirectional intra prediction modes from the reference intra prediction mode information 164.
  • all modes may be fixed table methods, or all modes may be direct methods. Also, some modes may be fixed table methods and the remaining modes may be direct methods.
  • FIG. 8B shows an example in which among the eight types of bidirectional intra prediction modes, three types are the fixed table method and the remaining five types are the direct method.
• IntraPredTypeLX indicates the prediction type of intra prediction. Intra_Vertical means that the vertical direction is the reference for prediction, and Intra_Horizontal means that the horizontal direction is the reference for prediction. Note that 0 or 1 is substituted for X in IntraPredTypeLX: IntraPredTypeL0 indicates the first prediction mode of unidirectional or bidirectional intra prediction, and IntraPredTypeL1 indicates the second prediction mode of bidirectional intra prediction. IntraPredAngleId is an index indicating the prediction angle; the prediction angle actually used in generating the predicted value is shown in FIG. 11. Here, puPartIdx represents the index of the divided block in the quadtree division described with reference to FIG. 3B.
• for example, when IntraPredMode is 4, IntraPredTypeL0 is Intra_Vertical, and the vertical direction is used as the reference for prediction.
• under the control of the decoding control unit 5012, the prediction mode information setting unit 603 outputs the above-described prediction information corresponding to the designated prediction mode 651 to the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602, and outputs the prediction mode 651 to the selection switch 604.
• the unidirectional intra predicted image generation unit 601 has a function of generating the predicted image signal 5058 (161 in FIG. 6) for the plurality of prediction directions shown in FIG. 12. In FIG. 12, 33 different prediction directions are defined with respect to the vertical and horizontal coordinates indicated by bold lines; the directions of the typical prediction angles used in H.264 are indicated by arrows. The 33 prediction directions are prepared along the lines drawn from the origin in the directions shown by the arrows.
• for example, when IntraPredMode is 4, IntraPredAngleIdL0 is 4.
• the arrows included in the range labeled "Intra_Vertical" at the bottom of FIG. 12 indicate prediction modes whose prediction type is Intra_Vertical, and the arrows included in the range labeled "Intra_Horizontal" on the right side of FIG. 12 indicate prediction modes whose prediction type is Intra_Horizontal.
  • FIG. 11 shows the relationship between IntraPredAngleIdLX and intraPredAngle used for predictive image value generation.
  • intraPredAngle indicates a prediction angle that is actually used when a predicted value is generated.
  • the prediction value generation method is expressed by the above equation (3).
  • BLK_SIZE indicates the size of the pixel block (prediction unit)
  • ref [] indicates an array in which reference image signals are stored.
  • pred (k, m) indicates the generated predicted image signal 5058 (161 in FIG. 6).
• predicted values can be generated in the same manner according to the table of FIG. 11.
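Equation (3) itself is not reproduced in this excerpt. As an illustration of the general form such angular prediction takes (using the ref[], BLK_SIZE, and intraPredAngle conventions mentioned in the text, with the interpolation details assumed to follow the familiar HEVC-style 1/32-sample scheme rather than the patent's exact formula):

```python
def angular_predict_vertical(ref, blk_size, intra_pred_angle):
    """Sketch of a generic vertical-type angular intra predictor.
    ref[] holds the reference row above the block (ref[0] is the pixel
    directly above the top-left sample); intra_pred_angle is the
    horizontal displacement per row in 1/32-sample units (assumed)."""
    pred = [[0] * blk_size for _ in range(blk_size)]
    for m in range(blk_size):                 # row index within the block
        offset = (m + 1) * intra_pred_angle   # total displacement for row m
        idx, frac = offset >> 5, offset & 31  # integer and fractional parts
        for k in range(blk_size):             # column index within the block
            a = ref[k + idx]
            b = ref[k + idx + 1]
            # linear interpolation between two adjacent reference samples
            pred[m][k] = ((32 - frac) * a + frac * b + 16) >> 5
    return pred
```

With an angle of 0 the predictor degenerates to a plain vertical copy of the reference row, and with an angle of 32 (45 degrees) each row is the reference row shifted by one sample.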
• this concludes the description of the unidirectional intra predicted image generation unit 601 in this embodiment.
  • FIG. 13 shows a block diagram of the bidirectional intra-predicted image generation unit 602.
  • the bidirectional intra predicted image generation unit 602 includes a first unidirectional intra predicted image generation unit 1301, a second unidirectional intra predicted image generation unit 1302, and a weighted average unit 1303.
• the bidirectional intra predicted image generation unit 602 has a function of generating two unidirectional intra predicted images based on the input reference image signal 5057 (159 in FIG. 13), and generating the predicted image signal 5058 (161 in FIG. 13) by taking their weighted average.
  • the functions of the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are the same. In either case, a prediction image signal corresponding to a prediction mode given according to prediction mode information controlled by the encoding control unit 115 is generated.
  • a first predicted image signal 1351 is output from the first unidirectional intra predicted image generation unit 1301, and a second predicted image signal 1352 is output from the second unidirectional intra predicted image generation unit 1302.
  • Each predicted image signal is input to the weighted average unit 1303, and weighted average processing is performed.
  • the table in FIG. 14 is a table for deriving two unidirectional intra prediction modes from the bidirectional intra prediction mode.
  • BipredIdx is derived using Equation (4).
  • the first predicted image signal 1351 and the second predicted image signal 1352 generated by the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are sent to the weighted average unit 1303. Entered.
• the weighted average unit 1303 calculates a Euclidean distance or a city block distance (Manhattan distance) based on the prediction directions of IntraPredModeL0 and IntraPredModeL1, and derives the weight components used in the weighted average process.
• the weight component of each pixel is given by the reciprocal of the Euclidean distance or the city block distance from the reference pixel used for prediction, and is generalized by Expression (5).
• the Euclidean distance is expressed by Equation (6), and the city block distance is expressed by Equation (7).
• the weight table for each prediction mode is generalized by Equation (8), and therefore the final prediction signal at the pixel position n is expressed by Equation (9).
• in the above, the prediction signal is generated by selecting two prediction modes for generating the prediction pixel, but a prediction value may also be generated by selecting three or more prediction modes; in that case, the ratio of the reciprocals of the spatial distances from the respective reference pixels to the prediction pixel may be set as the weighting factors.
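Equations (5) through (9) are not reproduced in this excerpt. The sketch below only illustrates the stated idea that each prediction's weight is proportional to the reciprocal of its per-pixel distance from the reference pixel; the exact formulas and names are assumptions:

```python
def biprediction_weight(dist_l0, dist_l1):
    """Per-pixel weight for the L0 prediction when combining two
    unidirectional predictions: weights proportional to the reciprocal
    of each prediction's distance from its reference pixel (assumed form)."""
    w0 = 1.0 / dist_l0
    w1 = 1.0 / dist_l1
    return w0 / (w0 + w1)   # normalized so the two weights sum to 1

def bipred_pixel(p0, p1, dist_l0, dist_l1):
    """Weighted average of the two unidirectional prediction samples."""
    w = biprediction_weight(dist_l0, dist_l1)
    return w * p0 + (1.0 - w) * p1
```

A pixel close to the L0 reference (small dist_l0) is thus dominated by the L0 prediction, matching the stated intent of the inverse-distance weighting.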
• in the above description, the reciprocal of the Euclidean distance or the city block distance from the reference pixel used in the prediction mode is used directly as the weight component; as another embodiment, the weight component may be set using a distribution model in which the Euclidean distance or city block distance from the reference pixel is a variable. As the distribution model, at least one of a linear model, an M-th order function (M ≥ 1), a nonlinear function such as a one-sided Laplace distribution or a one-sided Gaussian distribution, or a fixed value independent of the distance from the reference pixel may be used.
  • the weight component is expressed by Equation (10).
  • the weight component is expressed by Expression (11).
  • an isotropic correlation model obtained by modeling an autocorrelation function, an elliptic correlation model, a generalized Gaussian model obtained by generalizing a Laplace function or a Gaussian function may be used as the weight component model.
• if Equation (5), Equation (8), Equation (10), and Equation (11) are calculated each time a predicted image is generated, a plurality of multipliers are required and the hardware scale increases. For this reason, the circuit scale required for the calculation can be reduced by calculating the weight components in advance according to the relative distance for each prediction mode and holding them as tables.
• a method for deriving the weight component when the city block distance is used will now be described. First, the city block distance ΔL_L0 of IntraPredModeL0 and the city block distance ΔL_L1 of IntraPredModeL1 are calculated from Equation (7).
  • the relative distance varies depending on the prediction direction of the two prediction modes.
  • the distance can be derived using Expression (6) or Expression (7) according to each prediction mode.
• however, preparing a distance table for every prediction mode may increase the total table size.
  • FIG. 17 shows the mapping of IntraPredModeLX used for distance table derivation.
• tables are prepared only for the prediction modes corresponding to the 45-degree prediction directions and for DC prediction, and the other prediction angles are mapped to the nearest of the prepared reference prediction modes; when two reference modes are equally near, the smaller index is used.
• by referring to the prediction mode shown in "MappedIntraPredMode" in FIG. 17, the distance table can be derived.
• the relative distance for each pixel in the two prediction modes is calculated using Equation (12), and the final prediction signal at the pixel position n is represented by Equation (13).
• if the weight component is scaled in advance and converted to integer arithmetic, it can be expressed by Equation (14).
• for example, WM = 1024, Offset = 512, and SHIFT = 10.
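With WM = 1024, Offset = 512, and SHIFT = 10, Equation (14) plausibly takes the usual fixed-point weighted-average form sketched below; since the equation itself is not reproduced in this excerpt, this exact form is an assumption:

```python
WM, OFFSET, SHIFT = 1024, 512, 10  # scale, rounding offset, right shift

def weighted_average_int(p0, p1, w0):
    """Fixed-point weighted average of two prediction samples.
    w0 is the L0 weight pre-scaled to the integer range [0, WM];
    the L1 weight is the complement WM - w0 (assumed form of Eq. (14))."""
    return (w0 * p0 + (WM - w0) * p1 + OFFSET) >> SHIFT
```

Pre-scaling the weights to integers and folding the normalization into a single shift avoids per-pixel division, which is the hardware-cost motivation stated in the text.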
• an example in which the weight components based on the one-sided Laplace distribution model in this embodiment are tabulated is shown in FIGS. 18A and 18B.
• weight components for other PuSizes can also be derived using Equation (5), Equation (8), Equation (10), and Equation (11).
• since the bidirectional intra prediction mode generation unit 605 is the same as the bidirectional intra prediction mode generation unit 605 described in the first embodiment, description thereof is omitted.
  • FIG. 28 illustrates a syntax 2800 used by the video decoding device 5000 in FIG. Since the syntax 2800 is the same as that of the first embodiment, a detailed description thereof will be omitted.
  • FIG. 30A shows an example of the prediction unit syntax.
  • Pred_mode in the figure indicates the prediction type of the prediction unit.
  • MODE_INTRA indicates that the prediction type is intra prediction.
• intra_split_flag is a flag indicating whether or not the prediction unit is further divided into four prediction units. When intra_split_flag is 1, the prediction unit is divided into four prediction units, each half of the original in both the vertical and horizontal sizes. When intra_split_flag is 0, the prediction unit is not divided.
• intra_luma_bipred_flag[i] is a flag indicating whether the prediction mode IntraPredMode applied to the prediction unit is a unidirectional intra prediction mode or a bidirectional intra prediction mode. i indicates the position of the divided prediction unit, and is 0 when intra_split_flag is 0 and ranges from 0 to 3 when intra_split_flag is 1. This flag is set to the value of IntraBipredFlag of the prediction unit shown in FIGS. 8A and 8B, 9A and 9B, and 10.
• when intra_luma_bipred_flag[i] is 1, it indicates that the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], which identifies the bidirectional intra prediction mode used among the plurality of prepared bidirectional intra prediction modes, is decoded.
• intra_luma_bipred_mode[i] may be decoded with fixed-length codes according to the bidirectional intra prediction mode number IntraBiModeNum shown in FIG. 7, or may be decoded using a predetermined code table. Further, as described above, when the total number of bidirectional intra prediction modes differs for each prediction unit, it may be decoded using a code table that is switched according to the total number of bidirectional intra prediction modes indicated for each prediction unit.
• when intra_luma_bipred_flag[i] is 0, it indicates that the prediction unit uses unidirectional intra prediction.
• prev_intra_luma_unipred_flag[i] is a flag indicating whether or not the prediction value MostProbable of the prediction mode, calculated from the adjacent blocks, is the same as the intra prediction mode of the prediction unit. Details of the MostProbable calculation method will be described later. When prev_intra_luma_unipred_flag[i] is 1, it indicates that MostProbable and the intra prediction mode IntraPredMode are equal.
• when prev_intra_luma_unipred_flag[i] is 0, it indicates that MostProbable and the intra prediction mode IntraPredMode are different, and rem_intra_luma_unipred_mode[i], which specifies which intra prediction mode other than MostProbable is used, is further decoded. rem_intra_luma_unipred_mode[i] may be decoded with fixed-length codes according to the unidirectional intra prediction mode number IntraUniModeNum shown in FIGS. 7A and 7B, or may be decoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode[i] is calculated using Equation (17).
• MostProbable, which is the predicted value of the prediction mode, is calculated according to Equation (18).
• Min(x, y) is a function that outputs the smaller of the inputs x and y.
  • intraPredModeAL0 and intraPredModeBL0 respectively indicate the first unidirectional intra prediction modes of the prediction units adjacent to the left and above the decoded prediction unit as described above.
• when only one of the adjacent prediction units can be referred to, the first unidirectional intra prediction mode of the referable prediction unit is used as MostProbable; when neither can be referred to, Intra_DC is set as MostProbable.
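Equation (18) is not reproduced in this excerpt, but the surrounding text says that MostProbable is the Min of the left and above neighbours' first unidirectional modes, with fallbacks when neighbours cannot be referred to. A sketch under those assumptions (the sentinel `None`, the numeric value of Intra_DC, and the function names are illustrative):

```python
INTRA_DC = 2  # assumed mode number for DC prediction (illustrative only)

def most_probable(mode_left, mode_above):
    """Derive MostProbable from the neighbours' first unidirectional
    intra prediction modes (intraPredModeAL0 / intraPredModeBL0 in the
    text).  None marks a neighbour that cannot be referred to."""
    if mode_left is None and mode_above is None:
        return INTRA_DC                # neither neighbour available
    if mode_left is None:
        return mode_above              # only the above neighbour available
    if mode_above is None:
        return mode_left               # only the left neighbour available
    return min(mode_left, mode_above)  # Min(x, y) as in Equation (18)
```
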
  • MappedProbable () is a table for converting MostProbable, and an example is shown in FIG. 31.
• another example of the prediction unit syntax is shown in FIG. 30C. Since pred_mode and intra_split_flag are the same as in the syntax example described above, description thereof is omitted.
• luma_pred_mode_code_type[i] indicates the type of the prediction mode IntraPredMode applied to the prediction unit: 0 (IntraUnifiedMostProb) indicates unidirectional intra prediction whose intra prediction mode is the same as MostProbable, 1 indicates unidirectional intra prediction whose intra prediction mode differs from MostProbable, and 2 (IntraBipred) indicates a bidirectional intra prediction mode.
• FIGS. 32A to 32D show examples of the assignment of the number of modes according to the meaning corresponding to luma_pred_mode_code_type and the mode configurations shown in FIGS. 7A to 7D.
• when luma_pred_mode_code_type[i] is 0, the intra prediction mode is the MostProbable mode, so no further information needs to be decoded.
• when luma_pred_mode_code_type[i] is 1, the information rem_intra_luma_unipred_mode[i], which specifies which mode other than MostProbable is the intra prediction mode IntraPredMode, is decoded.
• rem_intra_luma_unipred_mode[i] may be decoded with fixed-length codes according to the unidirectional intra prediction mode number IntraUniModeNum shown in FIGS. 7A to 7D, or may be decoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode[i] is calculated using Equation (16). Further, when luma_pred_mode_code_type[i] is 2, it indicates that the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], which identifies the bidirectional intra prediction mode used among the prepared bidirectional intra prediction modes, is decoded.
• intra_luma_bipred_mode[i] may be decoded with fixed-length codes according to the bidirectional intra prediction mode number IntraBiModeNum shown in FIGS. 7A to 7D, or may be decoded using a predetermined code table. Further, as described above, when the total number of bidirectional intra prediction modes differs for each prediction unit, it may be decoded using a code table that is switched according to the total number of bidirectional intra prediction modes indicated for each prediction unit.
  • FIG. 30D shows still another example relating to the prediction unit syntax.
  • pred_mode and intra_split_flag are the same as the syntax example described above, and thus description thereof is omitted.
• intra_bipred_flag is a flag indicating whether or not bidirectional intra prediction can be used in the prediction unit to be decoded. When intra_bipred_flag is 0, bidirectional intra prediction is not used in the prediction unit. Even when intra_split_flag is 1, that is, when the prediction unit is further divided into four, bidirectional intra prediction is not used in any of the prediction units, and only unidirectional intra prediction is effective.
• when intra_bipred_flag is 1, it indicates that bidirectional intra prediction can be used in the prediction unit to be decoded. Even when intra_split_flag is 1, that is, when the prediction unit is further divided into four, bidirectional intra prediction can be selected in addition to unidirectional intra prediction in all prediction units.
• by decoding intra_bipred_flag as 0 to disable bidirectional intra prediction, the amount of code required for decoding the prediction mode information can be reduced, and coding efficiency is improved.
  • FIG. 30E shows still another example relating to the prediction unit syntax.
  • intra_bipred_flag is a flag indicating whether or not bi-directional intra prediction can be used in the decoding prediction unit, and is the same as the above-described intra_bipred_flag, and thus the description thereof is omitted.
• FIG. 33 shows the intra prediction unit 5008 (109 in FIG. 33) when adaptive reference pixel filtering is used; it differs from the intra prediction unit 5008 shown in FIG. 6 (109 in FIG. 6) in that a reference pixel filter unit 3301 is added. The reference pixel filter unit 3301 receives the reference image signal 5057 (159 in FIG. 33) and outputs a filtered reference image signal 3351.
  • the filtered reference image signal 3351 is input to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • the configuration and processing other than the reference pixel filter unit 3301 are the same as those of the intra prediction unit 5008 shown in FIG.
  • the reference pixel filter unit 3301 determines whether to filter reference pixels used for intra prediction according to the reference pixel filter flag and the intra prediction mode included in the prediction mode 651.
  • the reference pixel filter flag is a flag indicating whether or not to filter the reference pixel when the intra prediction mode IntraPredMode is a value other than “Intra_DC”.
• when IntraPredMode is "Intra_DC", the reference pixels are not filtered, and the reference pixel filter flag is set to 0.
• otherwise, the filtered reference image signal 3351 is calculated by the filtering shown in Expression (20).
• p[x, y] indicates a reference pixel before filtering, and pf[x, y] indicates a reference pixel after filtering.
  • PuPartSize indicates the size (pixel) of the prediction unit.
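Expression (20) is not reproduced in this excerpt. A common form for such a reference pixel smoothing filter (used, for example, in H.264 8×8 intra prediction and HEVC) is the [1, 2, 1]/4 filter sketched below; this specific filter is an assumption, not necessarily the patent's exact Expression (20):

```python
def filter_reference_pixels(p):
    """Smooth a 1-D array of reference pixels with a [1, 2, 1]/4 filter,
    producing pf[x] from p[x]; the end samples are left unfiltered
    (assumed boundary handling)."""
    pf = list(p)
    for i in range(1, len(p) - 1):
        # weighted sum with rounding, then divide by 4 via a shift
        pf[i] = (p[i - 1] + 2 * p[i] + p[i + 1] + 2) >> 2
    return pf
```
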
• FIGS. 34A and 34B show the prediction unit syntax structure when adaptive reference pixel filtering is performed.
  • FIG. 34A adds the syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 30A.
  • FIG. 34B adds syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 30C.
  • intra_luma_filter_flag [i] is further decoded when the intra prediction mode IntraPredMode [i] is other than Intra_DC. When the flag is 0, it indicates that the reference pixel is not filtered. Further, when intra_luma_filter_flag [i] is 1, it indicates that the reference pixel filtering is applied.
• intra_luma_filter_flag[i] is decoded when the intra prediction mode IntraPredMode[i] is other than Intra_DC. When IntraPredMode[i] is 0 to 2, intra_luma_filter_flag[i] need not be decoded; in this case, intra_luma_filter_flag[i] is set to 0.
• the intra_luma_filter_flag[i] described above may be added, with the same meaning, to the other syntax examples as well.
• FIG. 37 shows a block diagram of the intra prediction unit 5008 (109 in FIG. 37) when combined with composite intra prediction. The difference is that a composite intra predicted image generation unit 3601, a selection switch 3602, and a decoded image buffer 3701 are added to the intra prediction unit 5008 shown in FIG. 6.
• when bidirectional intra prediction and composite intra prediction are combined, the selection switch 604 first switches between the output terminals of the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602 according to the prediction mode information controlled by the decoding control unit 5012.
• hereinafter, the predicted image signal 5058 (161 in FIG. 37) output at this point is referred to as the direction predicted image signal.
  • the direction prediction image signal is input to the composite intra prediction image generation unit 3601, and a prediction image signal 5058 in the composite intra prediction is generated.
• the selection switch 3602 switches between the predicted image signal 5058 of the composite intra prediction and the direction predicted image signal according to the composite intra prediction application flag in the prediction mode information controlled by the decoding control unit 5012, and the final predicted image signal 5058 of the intra prediction unit 5008 is output.
• when the composite intra prediction application flag is 1, the predicted image signal 5058 output from the composite intra predicted image generation unit 3601 becomes the final predicted image signal 5058; when the flag is 0, the direction predicted image signal becomes the finally output predicted image signal 5058.
  • the composite intra prediction image generation unit 3601 includes a pixel level prediction signal generation unit 3801 and a composite intra prediction calculation unit 3802.
  • the pixel level prediction signal generation unit 3801 predicts the prediction target pixel X from adjacent pixels and outputs a pixel level prediction signal 3851.
  • the adjacent pixel indicates the decoded image signal 5055.
  • the pixel level prediction signal 3851 (X) of the prediction target pixel is calculated using Expression (21).
  • the coefficients related to A, B, and C may be other values.
  • The composite intra prediction calculation unit 3802 performs a weighted average of the direction prediction image signal 5058 (161 in FIG. 38) (X′) and the pixel level prediction signal 3851 (X), and outputs the final prediction image signal 5058 (P). Specifically, Formula (22) is used.
  • the weighting factor W may be switched according to the position of the prediction pixel in the prediction unit.
  • A prediction image signal generated by unidirectional or bidirectional intra prediction derives its prediction values from spatially adjacent, already-encoded reference pixels located to the left of or above the block.
  • The absolute value of the prediction error therefore tends to increase with distance from the reference pixels. Prediction accuracy can thus be improved by making the weighting coefficient of the direction prediction image signal 5058, relative to the pixel level prediction signal 3851, large for prediction pixels close to the reference pixels and small for pixels far from them.
  • At the time of encoding, the prediction error signal is generated using the input image signal.
  • In that case the pixel level prediction signal 3851 is derived from the input image signal, so even when the spatial distance between the reference pixel position and the prediction pixel position grows, the pixel level prediction signal 3851 remains more accurate than the direction prediction image signal 5058.
  • If the weighting coefficient of the direction prediction image signal 5058 is therefore simply made large near the reference pixels and small far from them, the prediction error is reduced, but the prediction values at encoding time and at local decoding time differ, which lowers prediction accuracy. Consequently, especially when the value of the quantization parameter is large, setting W to a small value as the spatial distance between the reference pixel position and the prediction pixel position increases can suppress the loss of coding efficiency caused by this open-loop mismatch.
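The composite prediction described above can be sketched as follows. This is a minimal illustration, not the patent's exact Expressions (21) and (22), whose coefficients are not reproduced in this text: it assumes a simple gradient pixel-level predictor X = A + B − C (A: left neighbor, B: above, C: above-left) as one plausible instance of Expression (21), and a linear blend P = W·X′ + (1 − W)·X whose weight W decays with the prediction pixel's distance from the reference pixels, as the text suggests.

```python
def pixel_level_prediction(rec, x, y):
    """Hypothetical pixel-level predictor from adjacent pixels.

    A = left, B = above, C = above-left; X = A + B - C is one plausible
    instance of Expression (21) -- the actual coefficients may differ.
    `rec` holds whatever signal supplies the adjacent pixels (the input
    image at encode time, the decoded image at decode time; this is the
    open-loop issue discussed in the text).
    """
    a = rec[y][x - 1]
    b = rec[y - 1][x]
    c = rec[y - 1][x - 1]
    return a + b - c


def composite_intra_prediction(directional, rec, block_x, block_y, size,
                               w_near=0.75, w_far=0.25):
    """Blend a directional predicted block with pixel-level predictions.

    The weight of the directional signal is large near the reference pixels
    (top-left of the block) and small far from them; w_near/w_far are
    illustrative values, not taken from the patent.
    """
    pred = [[0.0] * size for _ in range(size)]
    for j in range(size):
        for i in range(size):
            # Distance of this prediction pixel from the reference row/column,
            # normalized to [0, 1]; used to interpolate the weight W.
            d = max(i, j) / max(size - 1, 1)
            w = w_near + (w_far - w_near) * d
            x_dir = directional[j][i]                       # X' in the text
            x_pix = pixel_level_prediction(rec, block_x + i, block_y + j)
            pred[j][i] = w * x_dir + (1.0 - w) * x_pix      # P = W*X' + (1-W)*X
    return pred
```

On a flat region the gradient predictor and the directional predictor agree, so the blend is exact regardless of W; the weighting only matters where the two predictors diverge.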
  • FIGS. 39A and 39B show the prediction unit syntax structure when composite intra prediction is performed.
  • FIG. 39A is different from FIG. 30A in that a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction is added. This is equivalent to the above-described composite intra prediction application flag.
  • FIG. 39B adds a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction to FIG. 30C.
  • When combined_intra_pred_flag is 1, the selection switch 3602 shown in FIG. 37 is connected to the output terminal of the composite intra predicted image generation unit 3601.
  • When combined_intra_pred_flag is 0, the selection switch 3602 shown in FIG. 36 is connected to the output terminal of whichever of the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602 the selection switch 604 is connected to.
  • The intra_luma_filter_flag[i] described above may be added with the same meaning. Furthermore, this may be combined with the second modification of the intra prediction unit.
  • Since the same or a similar intra prediction unit as that of the video encoding device according to the first embodiment is included, the same or similar effects as those of the video encoding device according to the first embodiment can be obtained.
  • the video decoding device differs from the video decoding device according to the above-described fourth embodiment in the details of inverse orthogonal transform.
  • the same parts as those in the fourth embodiment are denoted by the same reference numerals, and different parts will be mainly described.
  • the moving picture coding apparatus corresponding to the moving picture decoding apparatus according to the present embodiment is as described in the second embodiment.
  • FIG. 51 is a block diagram showing a moving picture decoding apparatus according to the fifth embodiment.
  • a change from the video decoding apparatus according to the fourth embodiment is that a transformation selection unit 5102 and a coefficient order restoration unit 5101 are added. Also, the internal structure of the inverse orthogonal transform unit 5004 is different.
  • The inverse orthogonal transform unit 5004 will be described with reference to FIG. 42. Note that the inverse orthogonal transform unit 5004 has the same configuration as the inverse orthogonal transform unit 105 according to the second embodiment. Therefore, in the following description for this embodiment, the transform selection information 4051 in FIG. 42 is replaced with the transform selection information 5151, the restored transform coefficient 155 is replaced with the restored transform coefficient 5053, and the restored prediction error signal 156 is replaced with the restored prediction error signal 5054.
  • The inverse orthogonal transform unit 5004 includes a first inverse orthogonal transform unit 4201, a second inverse orthogonal transform unit 4202, an Nth inverse orthogonal transform unit 4203, and a transform selection switch 4204.
  • The inverse orthogonal transform unit 5004 (105 in FIG. 42) will now be described. First, the transform selection switch 4204 is described.
  • the conversion selection switch 4204 has a function of selecting the output terminal of the inverse quantization unit 5003 according to the input conversion selection information 5151.
  • the conversion selection information 5151 is one piece of information controlled by the decoding control unit 5012 and is set by the conversion selection unit 5102 according to the prediction information 5059.
  • When the transform selection information 5151 indicates the first orthogonal transform, the output terminal of the switch is connected to the first inverse orthogonal transform unit 4201.
  • When the transform selection information 5151 indicates the second orthogonal transform, the output terminal is connected to the second inverse orthogonal transform unit 4202.
  • When the transform selection information 5151 indicates the Nth orthogonal transform, the output terminal is connected to the Nth inverse orthogonal transform unit 4203.
  • Prediction information 5059 controlled by the decoding control unit 5012 and decoded by the entropy decoding unit 5002 is input to the transformation selection unit 5102.
  • The transform selection unit 5102 has a function of setting the MappedTransformIdx information indicating which inverse orthogonal transform is used for which prediction mode.
  • FIG. 43 shows conversion selection information 5151 (MappedTransformIdx) in intra prediction.
  • Here, an example in which N = 9 is shown.
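The selection mechanism amounts to a table lookup followed by a dispatch. A minimal sketch follows; the table values and placeholder transform are hypothetical — the actual mode-to-transform assignment is the one given in FIG. 43, and the real units 4201 to 4203 implement distinct inverse orthogonal transforms.

```python
# Hypothetical mapping from intra prediction mode to one of N = 9 inverse
# transforms (MappedTransformIdx); the real assignment is defined in FIG. 43.
MAPPED_TRANSFORM_IDX = {mode: mode % 9 for mode in range(34)}


def inverse_transform_identity(coeffs):
    # Placeholder standing in for one of the N inverse orthogonal transforms.
    return list(coeffs)


# One entry per inverse orthogonal transform unit (4201 ... 4203).
INVERSE_TRANSFORMS = [inverse_transform_identity] * 9


def apply_inverse_transform(prediction_mode, restored_coeffs):
    """Transform selection switch 4204: route the restored transform
    coefficients to the inverse transform chosen by MappedTransformIdx."""
    idx = MAPPED_TRANSFORM_IDX[prediction_mode]
    return INVERSE_TRANSFORMS[idx](restored_coeffs)
```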
  • FIG. 52 shows a block diagram of the coefficient order restoration unit 5101.
  • The coefficient order restoration unit 5101 has a function of performing the inverse of the scan order conversion performed by the coefficient order control unit 4002 according to the second embodiment.
  • The coefficient order restoration unit 5101 includes a coefficient order selection switch 5204, a first coefficient order inverse transform unit 5201, a second coefficient order inverse transform unit 5202, and an Nth coefficient order inverse transform unit 5203.
  • The coefficient order selection switch 5204 has a function of connecting the output terminal of the switch to one of the coefficient order inverse transform units 5201 to 5203 in accordance with the MappedTransformIdx shown in FIG. 43.
  • The N types of coefficient order inverse transform units 5201 to 5203 have a function of inversely transforming the one-dimensional quantized transform coefficient sequence 5152 decoded by the entropy decoding unit 5002 into two-dimensional data.
  • In H.264, two-dimensional data is converted into one-dimensional data using a zigzag scan.
  • Quantized transform coefficients, obtained by quantizing orthogonally transformed coefficients, have the property that the occurrence of non-zero transform coefficients within a block is biased. This occurrence tendency of non-zero transform coefficients differs for each prediction direction of intra prediction; however, when different videos are encoded, the occurrence tendency for the same prediction direction has similar properties. Therefore, when converting two-dimensional data into one-dimensional data (2D-1D conversion), entropy coding the transform coefficients preferentially from positions where the occurrence probability of a non-zero transform coefficient is high makes it possible to reduce the coded information of the transform coefficients. Conversely, on the decoding side, the one-dimensional data must be restored to two-dimensional data. Here, restoration is described taking the raster scan as the reference one-dimensional scan order.
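The 1D-to-2D restoration above can be sketched as follows, using the well-known 4×4 zigzag scan of H.264 as an example scan order (the embodiment's actual per-mode scan orders are defined elsewhere in the patent):

```python
def zigzag_order(n):
    """Zigzag scan order for an n x n block as a list of (row, col):
    traverse anti-diagonals, alternating direction as in H.264."""
    order = []
    for s in range(2 * n - 1):                 # anti-diagonal index r + c = s
        diag = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        if s % 2 == 0:
            diag.reverse()                     # even diagonals run bottom-left -> top-right
        order.extend(diag)
    return order


def restore_1d_to_2d(coeffs_1d, scan):
    """Inverse of the 2D-1D conversion: place the decoded 1D coefficient
    sequence back into a 2D block according to the scan order."""
    n = int(len(coeffs_1d) ** 0.5)
    block = [[0] * n for _ in range(n)]
    for idx, (r, c) in enumerate(scan):
        block[r][c] = coeffs_1d[idx]
    return block
```

Any scan order (zigzag, raster, or a per-mode order) can be passed to `restore_1d_to_2d`; the round trip with the corresponding 2D-1D scan is lossless.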
  • the coefficient order restoration unit 5101 may dynamically update the scan order in the 1D-2D conversion.
  • the configuration of the coefficient order restoration unit 5101 that performs such an operation is illustrated in FIG.
  • the coefficient order restoration unit 5101 includes an occurrence frequency counting unit 5301 and an updating unit 5302 in addition to the configuration of FIG.
  • the coefficient order reverse conversion units 5201,..., 5203 are the same except that the 1D-2D scan order is updated by the update unit 5302.
  • the occurrence frequency counting unit 5301 creates a histogram 5351 of the number of occurrences of non-zero coefficients in each element of the quantized transform coefficient sequence 5152 for each prediction mode.
  • the occurrence frequency counting unit 5301 inputs the created histogram 5351 to the update unit 5302.
  • the update unit 5302 updates the coefficient order based on the histogram 5351 at a predetermined timing.
  • the timing is, for example, the timing when the coding process of the coding tree unit is finished, the timing when the coding process for one line in the coding tree unit is finished, or the like.
  • The update unit 5302 refers to the histogram 5351 and updates the coefficient order for prediction modes having an element whose count of non-zero coefficient occurrences exceeds a threshold. For example, the update unit 5302 updates prediction modes having an element for which a non-zero coefficient has been counted 16 times or more. Providing a threshold on the number of occurrences makes the coefficient order update global, so convergence to a local optimum is less likely.
  • the update unit 5302 sorts the elements in descending order of the occurrence frequency of the non-zero coefficient regarding the prediction mode to be updated. Sorting can be realized by existing algorithms such as bubble sort and quick sort. Then, the update unit 5302 inputs the update coefficient order 5352 indicating the order of the sorted elements to the coefficient order inverse transform units 5201 to 5203 corresponding to the prediction mode to be updated.
  • each inverse conversion unit performs 1D-2D conversion according to the updated scan order.
  • the initial scan order of each 1D-2D conversion unit needs to be determined in advance.
  • The initial scan order is the same as that of the coefficient order control unit 4002 of the moving picture coding apparatus. When the scan order is dynamically updated in this way, stable and high coding efficiency can be expected even when the occurrence tendency of non-zero coefficients in the quantized transform coefficients changes with the properties of the predicted image, the quantization information (quantization parameter), and so on. Specifically, the generated code amount of run-length encoding in the entropy encoding unit 113 can be suppressed.
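The dynamic update described above can be sketched as follows. This is a simplified model for one prediction mode: a histogram counts non-zero occurrences per coefficient position, an update is triggered at a suitable boundary (e.g. a coding tree unit) when any count reaches the threshold (16 in the text), and positions are re-sorted in descending frequency with a stable sort. Resetting the histogram after an update is an assumption of this sketch, not something the text specifies.

```python
class ScanOrderUpdater:
    """Tracks non-zero occurrence counts per coefficient position for one
    prediction mode and re-derives the scan order when warranted."""

    def __init__(self, initial_order, threshold=16):
        self.order = list(initial_order)       # current 1D scan order of 2D positions
        self.hist = [0] * len(initial_order)   # non-zero counts per 1D slot
        self.threshold = threshold             # update trigger (16 in the text)

    def count(self, coeff_seq_1d):
        """Accumulate the histogram from a decoded quantized coefficient sequence."""
        for pos, c in enumerate(coeff_seq_1d):
            if c != 0:
                self.hist[pos] += 1

    def maybe_update(self):
        """At CTU boundaries etc.: if any element's count reached the threshold,
        re-sort positions by descending occurrence (Python's sort is stable, so
        ties keep their previous relative order) and reset the histogram."""
        if max(self.hist) < self.threshold:
            return False
        ranked = sorted(range(len(self.order)), key=lambda i: -self.hist[i])
        self.order = [self.order[i] for i in ranked]
        self.hist = [0] * len(self.order)
        return True
```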
  • The transform selection unit 5102 can also select the MappedTransformIdx separately from the prediction information 5059.
  • In that case, information indicating which of the nine types of orthogonal transforms or inverse orthogonal transforms is used is set in the decoding control unit 5012 and used by the inverse orthogonal transform unit 5004.
  • FIG. 46 shows an example of syntax in this embodiment.
  • The directional_transform_idx shown in the syntax indicates which of the N orthogonal transforms has been selected.
  • Since the same or a similar inverse orthogonal transform unit as that of the video encoding device according to the second embodiment is included, the same or similar effects as those of the video encoding device according to the second embodiment can be obtained.
  • the video decoding device differs from the video decoding device according to the above-described fourth embodiment in the details of inverse orthogonal transform.
  • the same parts as those in the fourth embodiment are denoted by the same reference numerals, and different parts will be mainly described.
  • the moving picture encoding apparatus corresponding to the moving picture decoding apparatus according to the present embodiment is as described in the third embodiment.
  • inverse orthogonal transform unit 5004 may be combined with the rotation transformation shown in JCTVC-B205_draft002, 5.3.5.2 “Rotational transformation process”, JCT-VC 2nd Meeting Geneva, July, 2010.
  • FIG. 48 is a block diagram of the inverse orthogonal transform unit 5004 (105 in FIG. 48) according to the present embodiment.
  • the inverse orthogonal transform unit 5004 includes new processing units, a first inverse rotation transform unit 4801, a second inverse rotation transform unit 4802, an Nth inverse rotation transform unit 4803, and an inverse discrete cosine transform unit 4804, and an existing transform selection switch 4204.
  • The restored transform coefficient 5053 (155 in FIG. 48) obtained after the inverse quantization process is input to the transform selection switch 4204.
  • According to the transform selection information 5151 (4051), the transform selection switch 4204 connects the output terminal of the switch to one of the first inverse rotation transform unit 4801, the second inverse rotation transform unit 4802, and the Nth inverse rotation transform unit 4803. Thereafter, inverse rotation transform processing, corresponding to the rotation transform used in the orthogonal transform unit 102 shown in FIG. 47, is performed in one of the inverse rotation transform units 4801 to 4803, and the result is output to the inverse discrete cosine transform unit 4804.
  • The inverse discrete cosine transform unit 4804 performs, for example, an IDCT on the input signal to obtain the restored prediction error signal 5054 (156 in FIG. 48).
  • orthogonal transform such as Hadamard transform or discrete sine transform may be used, or non-orthogonal transform may be used.
  • A corresponding inverse transform is performed in conjunction with the orthogonal transform unit 102 shown in FIG. 47.
  • the syntax in this embodiment is shown in FIG.
  • The rotation_transform_idx shown in the syntax indicates the number of the rotation matrix to be used.
  • Since the same or a similar inverse orthogonal transform unit as that of the image encoding device according to the third embodiment is included, the same or similar effects as those of the image encoding device according to the third embodiment can be obtained.
  • a frame is divided into rectangular blocks having a size of 16 ⁇ 16 pixels, and encoding / decoding is sequentially performed from the upper left block to the lower right side of the screen. (See FIG. 2A).
  • the encoding order and the decoding order are not limited to this example.
  • encoding and decoding may be performed sequentially from the lower right to the upper left, or encoding and decoding may be performed so as to draw a spiral from the center of the screen toward the screen end.
  • encoding and decoding may be performed in order from the upper right to the lower left, or encoding and decoding may be performed so as to draw a spiral from the screen edge toward the center of the screen.
  • The description has been given exemplifying prediction target block sizes such as the 4×4 pixel block, the 8×8 pixel block, and the 16×16 pixel block, but the prediction target blocks need not have a uniform block shape.
  • the prediction target block (prediction unit) size may be a 16 ⁇ 8 pixel block, an 8 ⁇ 16 pixel block, an 8 ⁇ 4 pixel block, a 4 ⁇ 8 pixel block, or the like. Also, it is not necessary to unify all the block sizes within one coding tree unit, and a plurality of different block sizes may be mixed.
  • the amount of codes for encoding or decoding the division information increases as the number of divisions increases. Therefore, it is desirable to select the block size in consideration of the balance between the code amount of the division information and the quality of the locally decoded image or the decoded image.
  • the color signal component is described without distinguishing between the luminance signal and the color difference signal.
  • the same or different prediction methods may be used. If different prediction methods are used between the luminance signal and the chrominance signal, the prediction method selected for the chrominance signal can be encoded or decoded in the same manner as the luminance signal.
  • the color signal component is described without distinguishing between the luminance signal and the color difference signal.
  • For the orthogonal transform process, the same or different orthogonal transform methods may be used for the luminance signal and the color difference signal. If different orthogonal transform methods are used for the luminance signal and the color difference signal, the orthogonal transform method selected for the color difference signal can be encoded or decoded in the same manner as for the luminance signal.
  • Syntax elements not defined in the embodiments may be inserted between the rows of the tables shown in the syntax configuration, and descriptions of other conditional branches may be included.
  • Each syntax table may be divided into a plurality of tables, or a plurality of tables may be integrated. The same terms need not always be used; they may be changed arbitrarily depending on the form of use.
  • each embodiment can realize highly efficient orthogonal transformation and inverse orthogonal transformation while alleviating the difficulty in hardware implementation and software implementation. Therefore, according to each embodiment, the encoding efficiency is improved, and the subjective image quality is also improved.
  • The instructions shown in the processing procedures of the above embodiments can be executed based on a software program.
  • A general-purpose computer system may store this program in advance and read it, whereby the same effects as those of the video encoding device and video decoding device of the above embodiments can be obtained.
  • The instructions described in the above embodiments are recorded, as a program executable by a computer, on a magnetic disk (flexible disk, hard disk, etc.), an optical disk (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), a semiconductor memory, or a similar recording medium. As long as the recording medium is readable by the computer or the embedded system, any storage format may be used.
  • If a computer reads the program from the recording medium and causes its CPU to execute the instructions described in the program, the computer can realize operation similar to that of the video encoding device and video decoding device of the above embodiments.
  • When the computer acquires or reads the program, it may of course do so through a network.
  • Based on the instructions of the program installed from the recording medium into the computer or embedded system, the OS (operating system) running on the computer, database management software, or middleware (MW) such as network software may execute a part of each process for realizing this embodiment.
  • Furthermore, the recording medium in the present invention is not limited to a medium independent of the computer or embedded system, and also includes a recording medium that stores, or temporarily stores, a downloaded program transmitted via a LAN, the Internet, or the like.
  • The program for realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to a computer (client) via the network.
  • The number of recording media is not limited to one; the case where the processing of this embodiment is executed from a plurality of media is also included in the recording medium of the present invention, and the media may have any configuration.
  • The computer or embedded system in the present invention executes each process of this embodiment based on the program stored in the recording medium, and may have any configuration, such as a single apparatus like a personal computer or microcomputer, or a system in which a plurality of apparatuses are connected via a network.
  • The computer in the embodiments of the present invention is not limited to a personal computer; it is a general term for devices and apparatuses capable of realizing the functions of the embodiments by a program, including an arithmetic processing unit or microcomputer contained in information processing equipment.
  • DESCRIPTION OF SYMBOLS: 100 ... image encoding device; 101 ... subtraction unit; 102 ... orthogonal transform unit; 103 ... quantization unit; 104 ... inverse quantization unit; 105 ... inverse orthogonal transform unit; 106 ... adder; 107 ... loop filter; 108 ... reference image memory; 109 ... intra prediction unit; 110 ... inter prediction unit; 111 ... prediction selection switch; 112 ... prediction selection unit; 113 ... entropy encoding unit; 114 ... output buffer; 115 ... encoding control unit; 116 ... intra prediction mode memory; 151 ... input image signal; 152 ... prediction error signal; 153 ... transform coefficient; 154 ... quantized transform coefficient; 155 ... restored transform coefficient; 156 ... restored prediction error signal; 157 ... decoded image signal; 158 ... filtered image signal; 159 ... reference image signal; 160 ... prediction information; 161 ... predicted image signal (direction predicted image signal); 162 ... encoded data; 163 ... intra prediction mode information; 164 ... reference intra prediction mode information; 601 ... unidirectional intra predicted image generation unit; 602 ... bidirectional intra predicted image generation unit; 603 ... prediction mode information setting unit; 604 ... selection switch; 605 ... bidirectional intra prediction mode generation unit; 651 ... prediction mode; 652 ... bidirectional intra prediction mode information; 1301 ... first unidirectional intra predicted image generation unit; 1302 ... second unidirectional intra predicted image generation unit; 1303 ... weighted average unit; 1351 ... first predicted image signal; 1352 ... second predicted image signal; 1901 ... first prediction mode generation unit; 1902 ... second prediction mode generation unit; 1903 ... selection switch; 2301 ... third prediction mode generation unit; 2302 ... prediction mode generation unit; 2701 ... primary image buffer; 2702 ... weighted plane; 2800 ... syntax; 2801 ... high level syntax; 2802 ... slice level syntax; 2803 ... coding tree level syntax; 2804 ... sequence parameter set syntax; 2805 ... picture parameter set syntax; 2806 ... slice header syntax; 2807 ... slice data syntax; 2808 ... coding tree unit syntax; 2809 ... prediction unit syntax; 2810 ... transform unit syntax; 3301 ... reference pixel filter unit; 3351 ... filtered reference image signal; 3601 ... composite intra predicted image generation unit; 3602 ... selection switch; 3701 ... decoded image buffer (decoded pixel buffer); 3751 ... reference pixel (adjacent pixel); 3801 ... pixel level prediction signal generation unit; 3802 ... composite intra prediction calculation unit; 3851 ... pixel level prediction signal; 4002 ... coefficient order control unit; 4201 ... first inverse orthogonal transform unit; 4202 ... second inverse orthogonal transform unit; 4203 ... Nth inverse orthogonal transform unit; 4204 ... transform selection switch; 4401 ... first coefficient order forward transform unit; 4402 ... second coefficient order forward transform unit; 4403 ... Nth coefficient order forward transform unit; 4404 ... coefficient order selection switch; 4501 ... occurrence frequency counting unit; 4502 ... update unit; 4551 ... update coefficient order; 4552 ... histogram; 4701 ... first rotation transform unit; 4702 ... second rotation transform unit; 4703 ... Nth rotation transform unit; 4704 ... discrete cosine transform unit; 4801 ... first inverse rotation transform unit; 4802 ... second inverse rotation transform unit; 4803 ... Nth inverse rotation transform unit; 4804 ... inverse discrete cosine transform unit; 5000 ... moving picture decoding device; 5001 ... input buffer; 5002 ... entropy decoding unit; 5003 ... inverse quantization unit; 5004 ... inverse orthogonal transform unit; 5005 ... adder; 5006 ... loop filter; 5007 ... reference image memory; 5008 ... intra prediction unit; 5009 ... inter prediction unit; 5010 ... prediction selection switch; 5011 ... output buffer; 5012 ... decoding control unit; 5013 ... intra prediction mode memory; 5051 ... encoded data; 5052 ... quantized transform coefficient; 5053 ... restored transform coefficient; 5054 ... restored prediction error signal; 5055 ... decoded image signal; 5056 ... filtered image signal; 5057 ... reference image signal; 5058 ... predicted image signal (direction predicted image signal); 5059 ... prediction information; 5060 ... decoded image; 5061 ... intra prediction mode information; 5062 ... reference intra prediction mode information; 5100 ... video decoding device; 5101 ... coefficient order restoration unit; 5102 ... transform selection unit; 5151 ... transform selection information; 5152 ... quantized transform coefficient sequence; 5201 ... first coefficient order inverse transform unit; 5202 ... second coefficient order inverse transform unit; 5203 ... Nth coefficient order inverse transform unit; 5204 ... coefficient order selection switch; 5301 ... occurrence frequency counting unit; 5302 ... update unit; 5351 ... histogram; 5352 ... update coefficient order.

Abstract

In the embodiment, a video image encoding method acquires reference prediction directions representing the prediction directions of intra prediction corresponding to at least one encoded image block. From among the reference prediction directions, a first reference prediction direction is set as the first prediction direction, and a first prediction image signal is generated. A second prediction direction that differs from the first prediction direction is set, and a second prediction image signal is generated. For a first prediction direction combination, which is the combination of the set first and second prediction directions, the relative distances between the reference pixels and the prediction pixels in the first and second prediction directions are derived, and the difference value of the relative distances is derived. A predetermined weight component is derived from the difference value. A weighted mean of the first prediction image signal and the second prediction image signal is obtained using the weight component, and a third prediction image signal is generated. A prediction error signal is generated from the third prediction image signal, and the prediction error signal is encoded.

Description

Video encoding method and video decoding method
Embodiments of the present invention relate to an intra-screen prediction method, a video encoding method, and a video decoding method in video encoding and decoding.
In recent years, an image coding method with greatly improved coding efficiency has been recommended jointly by ITU-T and ISO/IEC as ITU-T Rec. H.264 and ISO/IEC 14496-10 (hereinafter referred to as "H.264"). H.264 achieves higher prediction efficiency than the in-screen prediction (hereinafter, intra prediction) of ISO/IEC MPEG-1, 2 and 4 by incorporating directional prediction in the spatial domain (pixel domain). As an extension of H.264, a method has been proposed that further improves coding efficiency by performing intra prediction with up to 34 types of prediction angles and prediction methods.
However, in Non-Patent Document 1, a prediction value is generated at an individual prediction angle for each of the plural types of prediction modes and copied along the prediction direction. Consequently, video such as a texture having a luminance gradient that changes smoothly within a pixel block, or video with gradation, cannot be predicted efficiently, and the prediction error may increase.
The problem to be solved by the present invention is to provide a moving image encoding device and a moving image decoding device that include a predicted image generation device capable of improving coding efficiency.
The moving image encoding method of the embodiment divides an input image signal into pixel blocks expressed by hierarchical depth according to quadtree partitioning, performs intra prediction on the divided pixel blocks, generates a prediction error signal, and encodes transform coefficients. In this method, a reference prediction direction indicating the prediction direction of intra prediction corresponding to at least one encoded pixel block is acquired. From among the reference prediction directions, a first reference prediction direction is set as a first prediction direction, and a first predicted image signal is generated. A second prediction direction different from the first prediction direction is set, and a second predicted image signal is generated. For the first prediction direction combination, which is the combination of the set first and second prediction directions, the relative distance between the reference pixels and the prediction pixels in each prediction direction is derived, and the difference value of the relative distances is derived. A predetermined weight component is derived according to the difference value. According to the weight component, the first predicted image signal and the second predicted image signal are weighted and averaged to generate a third predicted image signal. A prediction error signal is generated from the third predicted image signal, and the prediction error signal is encoded.
A block diagram illustrating a moving image encoding apparatus according to a first embodiment.
An explanatory diagram of the predictive encoding order of pixel blocks.
An explanatory diagram of an example of a pixel block size.
An explanatory diagram of another example of a pixel block size.
An explanatory diagram of another example of a pixel block size.
An explanatory diagram of an example of pixel blocks in a coding tree unit.
An explanatory diagram of another example of pixel blocks in a coding tree unit.
An explanatory diagram of another example of pixel blocks in a coding tree unit.
An explanatory diagram of another example of pixel blocks in a coding tree unit.
An explanatory diagram showing an example of unidirectional intra prediction modes, prediction types, and prediction angle indices according to the first embodiment.
Explanatory diagrams in which (a) shows the intra prediction modes, (b) shows the reference pixels and predicted pixels of the intra prediction modes, (c) shows the horizontal prediction mode, and (d) shows the diagonal lower-right prediction mode.
A block diagram illustrating an intra prediction unit according to the first embodiment.
An explanatory diagram of an example of the numbers of unidirectional intra predictions and bidirectional intra predictions according to the first embodiment.
An explanatory diagram of another example of the numbers of unidirectional intra predictions and bidirectional intra predictions according to the first embodiment.
An explanatory diagram of another example of the numbers of unidirectional intra predictions and bidirectional intra predictions according to the first embodiment.
An explanatory diagram of another example of the numbers of unidirectional intra predictions and bidirectional intra predictions according to the first embodiment.
A table showing an example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing another example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing the continuation of FIG. 8B.
A table showing another example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing the continuation of FIG. 8D.
A table showing another example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing another example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing an example of prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing another example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing an example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table showing another example of the relationship among prediction modes, prediction types, bidirectional intra prediction, and unidirectional intra prediction according to the first embodiment.
A table illustrating the correspondence between prediction angle indices and the prediction angles used in predicted image generation, according to the first embodiment.
An explanatory diagram illustrating prediction directions according to the first embodiment.
A block diagram of a bidirectional intra predicted image generation unit according to the first embodiment.
A table illustrating the correspondence between a bidirectional intra prediction and two unidirectional intra predictions according to the first embodiment.
A block diagram showing an example of a method of calculating the city-block distance according to the first embodiment.
A block diagram showing another example of the method of calculating the city-block distance according to the first embodiment.
A block diagram showing another example of the method of calculating the city-block distance according to the first embodiment.
A table illustrating the relationship between prediction modes and the distances of predicted pixel positions according to the first embodiment.
A table illustrating the mapping between prediction modes and distance tables according to the first embodiment.
A table illustrating the relationship between relative distances and weight components according to the first embodiment.
Another table illustrating the relationship between relative distances and weight components according to the first embodiment.
A block diagram showing an example of a bidirectional intra prediction mode generation unit according to the first embodiment.
An explanatory diagram showing an example of bidirectional intra prediction mode generation by a fixed-table method according to the first embodiment.
An explanatory diagram showing an example of adjacent block positions according to the first embodiment.
An explanatory diagram showing an example of a conversion table from a first prediction direction to a second prediction direction according to the first embodiment.
An explanatory diagram showing another example of the conversion table from the first prediction direction to the second prediction direction according to the first embodiment.
An explanatory diagram showing still another example of the conversion table from the first prediction direction to the second prediction direction according to the first embodiment.
A block diagram showing another example of the bidirectional intra prediction mode generation unit according to the first embodiment.
An explanatory diagram showing an example of the correspondence between the second through eighth prediction mode generation units and bidirectional prediction mode generation methods according to the first embodiment.
An explanatory diagram showing an example of handling the case where bidirectional intra prediction modes overlap, according to the first embodiment.
An explanatory diagram showing another example of handling the case where bidirectional intra prediction modes overlap, according to the first embodiment.
An explanatory diagram showing still another example of handling the case where bidirectional intra prediction modes overlap, according to the first embodiment.
An explanatory diagram showing an example of the prediction mode configuration for chrominance signals according to the first embodiment.
A block diagram showing another embodiment of the intra prediction unit according to the first embodiment.
An explanatory diagram of a syntax structure.
An explanatory diagram of slice header syntax.
An explanatory diagram showing an example of prediction unit syntax.
An explanatory diagram showing another example of prediction unit syntax.
An explanatory diagram showing another example of prediction unit syntax.
An explanatory diagram showing another example of prediction unit syntax.
An explanatory diagram showing another example of prediction unit syntax.
A table showing the relationships used when predicting a prediction mode.
A table showing an example of a prediction mode encoding method.
A table showing another example of a prediction mode encoding method.
A table showing another example of a prediction mode encoding method.
A table showing another example of a prediction mode encoding method.
A block diagram showing a first modification of the intra prediction unit according to the first embodiment.
An explanatory diagram showing an example of prediction unit syntax in the first modification according to the first embodiment.
An explanatory diagram showing another example of prediction unit syntax in the first modification according to the first embodiment.
An explanatory diagram showing an example of a pixel-level predicted value generation method.
A block diagram showing another example of the intra prediction unit according to the first embodiment.
A block diagram showing another example of the intra prediction unit according to the first embodiment.
A block diagram showing an example of a composite intra predicted image generation unit according to the first embodiment.
An explanatory diagram showing another example of prediction unit syntax according to the first embodiment.
An explanatory diagram showing another example of prediction unit syntax according to the first embodiment.
A block diagram illustrating a moving image encoding apparatus according to a second embodiment.
A block diagram illustrating an orthogonal transformation unit according to the second embodiment.
A block diagram illustrating an inverse orthogonal transformation unit according to the second embodiment.
A table showing the relationship between prediction modes and transform indices according to the second embodiment.
A block diagram illustrating a coefficient order control unit according to the second embodiment.
A block diagram illustrating another coefficient order control unit according to the second embodiment.
An explanatory diagram showing an example of transform unit syntax according to the second embodiment.
A block diagram showing another example of an orthogonal transformation unit according to a third embodiment.
A block diagram showing an example of an inverse orthogonal transformation unit according to the third embodiment.
An explanatory diagram showing an example of transform unit syntax according to the third embodiment.
A block diagram showing an example of a moving image decoding apparatus according to a fourth embodiment.
A block diagram showing an example of a moving image decoding apparatus according to a fifth embodiment.
A block diagram illustrating a coefficient order restoration unit according to the fifth embodiment.
A block diagram showing another example of the coefficient order restoration unit according to the fifth embodiment.
Hereinafter, a moving image encoding device and a moving image decoding device according to each embodiment will be described in detail with reference to the drawings. In the following description, the term "image" can be read, as appropriate, as "video", "pixel", "image signal", or "image data". In the following embodiments, parts given the same reference numbers perform the same operations, and repeated descriptions of them are omitted.
(First embodiment)
The first embodiment relates to an image encoding device. A moving image decoding device corresponding to the image encoding device according to this embodiment will be described in a fourth embodiment. This image encoding device can be realized by hardware such as an LSI (Large-Scale Integration) chip, a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array). The image encoding device can also be realized by causing a computer to execute an image encoding program.
 As illustrated in FIG. 1, the image encoding device 100 according to this embodiment includes a subtraction unit 101, an orthogonal transformation unit 102, a quantization unit 103, an inverse quantization unit 104, an inverse orthogonal transformation unit 105, an addition unit 106, a loop filter 107, a reference image memory 108, an intra prediction unit 109, an inter prediction unit 110, a prediction selection switch 111, a prediction selection unit 112, an entropy encoding unit 113, an output buffer 114, an encoding control unit 115, and an intra prediction mode memory 116.
 The image encoding device 100 of FIG. 1 divides each frame or each field constituting an input image signal 151 into a plurality of pixel blocks, performs predictive encoding on the divided pixel blocks, and outputs encoded data 162. In the following description, for simplicity, it is assumed that the pixel blocks are predictively encoded from the upper left toward the lower right, as shown in FIG. 2A. In FIG. 2A, in the frame f being encoded, already-encoded pixel blocks p are located to the left of and above the encoding target pixel block c.
 Here, a pixel block refers to a unit for processing an image, such as an M×N block (where M and N are natural numbers), a coding tree unit, a macroblock, a sub-block, or a single pixel. In the following description, "pixel block" is basically used in the sense of a coding tree unit, but it can also be interpreted in any of the senses above by reading the description accordingly. A coding tree unit is typically, for example, the 16×16 pixel block shown in FIG. 2B, but it may also be the 32×32 pixel block shown in FIG. 2C, the 64×64 pixel block shown in FIG. 2D, or an 8×8 or 4×4 pixel block (not shown). A coding tree unit need not necessarily be square. Hereinafter, the encoding target block or coding tree unit of the input image signal 151 may also be referred to as a "prediction target block". The encoding unit is not limited to a pixel block such as a coding tree unit; a frame, a field, a slice, or a combination of these can also be used.
 FIGS. 3A to 3D show specific examples of coding tree units. FIG. 3A shows an example in which the size of the coding tree unit is 64×64 (N = 32). Here, N represents the size of the reference coding tree unit: the size when split is defined as N, and the size when not split is defined as 2N. A coding tree unit has a quadtree structure, and when it is split, the four resulting pixel blocks are indexed in Z-scan order. FIG. 3B shows an example in which the 64×64 pixel block of FIG. 3A is split as a quadtree; the numbers shown in the figure represent the Z-scan order. It is also possible to split further as a quadtree within one quadtree index of the coding tree unit. The depth of splitting is defined by Depth: FIG. 3A shows the example of Depth = 0, and FIG. 3C shows an example of a 32×32 (N = 16) coding tree unit for Depth = 1. The largest such coding tree unit is called a large coding tree unit, and the input image signal is encoded in raster-scan order in units of this size.
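As a rough illustration of the quadtree structure and Z-scan order just described, the following sketch splits a 64×64 coding tree unit recursively. The function and the tuple layout are illustrative only, assuming every node is split down to a given depth.

```python
# Sketch of quadtree splitting of a coding tree unit with Z-scan indexing
# (upper-left, upper-right, lower-left, lower-right), assuming a 64x64 root.

def split_ctu(x, y, size, depth, max_depth):
    """Return (x, y, size, depth) leaf blocks of a fully split quadtree,
    listed in Z-scan order."""
    if depth == max_depth:
        return [(x, y, size, depth)]
    half = size // 2
    leaves = []
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # Z-scan order
        leaves += split_ctu(x + dx * half, y + dy * half,
                            half, depth + 1, max_depth)
    return leaves

blocks = split_ctu(0, 0, 64, 0, 1)  # Depth = 1: four 32x32 blocks
```

Each additional depth level quadruples the number of leaf blocks and halves their side length, matching the 2N/N size convention in the text.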
 The image encoding device 100 of FIG. 1 performs intra prediction (also referred to as intra-picture prediction, intra-frame prediction, etc.) or inter prediction (also referred to as inter-picture prediction, inter-frame prediction, motion-compensated prediction, etc.) on each pixel block based on encoding parameters input from the encoding control unit 115, and generates a predicted image signal 161. The image encoding device 100 orthogonally transforms and quantizes the prediction error signal 152 between the pixel block (input image signal 151) and the predicted image signal 161, performs entropy encoding, and generates and outputs encoded data 162.
 The image encoding device 100 of FIG. 1 performs encoding by selectively applying a plurality of prediction modes that differ in block size and in the method of generating the predicted image signal 161. The methods of generating the predicted image signal 161 fall broadly into two types: intra prediction, which predicts within the encoding target frame, and inter prediction, which predicts using one or more temporally different reference frames.
Hereinafter, each element included in the image encoding device 100 of FIG. 1 will be described.
The subtraction unit 101 subtracts the corresponding prediction image signal 161 from the encoding target block of the input image signal 151 to obtain a prediction error signal 152. The subtraction unit 101 inputs the prediction error signal 152 to the orthogonal transformation unit 102.
 The orthogonal transformation unit 102 performs an orthogonal transform, such as a discrete cosine transform (DCT), on the prediction error signal 152 from the subtraction unit 101 to obtain transform coefficients 153. The orthogonal transformation unit 102 inputs the transform coefficients 153 to the quantization unit 103.
 The quantization unit 103 quantizes the transform coefficients 153 from the orthogonal transformation unit 102 to obtain quantized transform coefficients 154. Specifically, the quantization unit 103 performs quantization according to quantization information, such as a quantization parameter and a quantization matrix, specified by the encoding control unit 115. The quantization parameter indicates the fineness of quantization. The quantization matrix is used to weight the fineness of quantization for each component of the transform coefficients. The quantization unit 103 inputs the quantized transform coefficients 154 to the entropy encoding unit 113 and the inverse quantization unit 104.
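The role of the quantization parameter can be illustrated with a simple scalar quantizer. The step-size rule below (step doubling every 6 QP increments, base step 0.625) follows the H.264 convention and is an assumption for illustration; the embodiment does not fix a particular formula, and the per-component weighting by a quantization matrix is omitted.

```python
# Hypothetical scalar quantizer: the quantization parameter (qp) controls
# the step size, which doubles every 6 qp increments (H.264-style rule,
# assumed here; the base step 0.625 is also illustrative).

def quantize(coeff, qp):
    step = 0.625 * (2 ** (qp / 6))
    return round(coeff / step)

def dequantize(level, qp):
    step = 0.625 * (2 ** (qp / 6))
    return level * step
```

A larger qp means a coarser step, fewer distinct levels to entropy-encode, and a larger reconstruction error after inverse quantization.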
 The entropy encoding unit 113 performs entropy encoding (for example, Huffman coding or arithmetic coding) on various encoding parameters, such as the quantized transform coefficients 154 from the quantization unit 103, the prediction information 160 from the prediction selection unit 112, and the quantization information specified by the encoding control unit 115, and generates encoded data. The encoding parameters are the parameters needed for decoding, such as the prediction information 160, information on the transform coefficients, and information on quantization. For example, a configuration may be used in which the encoding control unit 115 has an internal memory (not shown) that holds the encoding parameters, and the encoding parameters of adjacent, already-encoded pixel blocks are used when encoding the prediction target block. For example, in H.264 intra prediction, the prediction value of the prediction mode of the prediction target block can be derived from the prediction mode information of encoded adjacent blocks.
 The encoded data generated by the entropy encoding unit 113 is temporarily stored in the output buffer 114, for example after multiplexing, and is output as encoded data 162 according to appropriate output timing managed by the encoding control unit 115. The encoded data 162 is output to, for example, a storage system (storage medium) or a transmission system (communication line), not shown.
 The inverse quantization unit 104 performs inverse quantization on the quantized transform coefficients 154 from the quantization unit 103 to obtain restored transform coefficients 155. Specifically, the inverse quantization unit 104 performs inverse quantization according to the quantization information used in the quantization unit 103, which is loaded from the internal memory of the encoding control unit 115. The inverse quantization unit 104 inputs the restored transform coefficients 155 to the inverse orthogonal transformation unit 105.
 The inverse orthogonal transformation unit 105 performs, on the restored transform coefficients 155 from the inverse quantization unit 104, an inverse orthogonal transform corresponding to the orthogonal transform performed in the orthogonal transformation unit 102 (for example, an inverse discrete cosine transform), and obtains a restored prediction error signal 156. The inverse orthogonal transformation unit 105 inputs the restored prediction error signal 156 to the addition unit 106.
 The addition unit 106 adds the restored prediction error signal 156 and the corresponding predicted image signal 161 to generate a locally decoded image signal 157. The decoded image signal 157 is input to the loop filter 107. The loop filter 107 applies a deblocking filter, a Wiener filter, or the like to the input decoded image signal 157 to generate a filtered image signal 158. The generated filtered image signal 158 is input to the reference image memory 108.
 The reference image memory 108 stores the locally decoded, filtered image signal 158, which is referred to as a reference image signal 159 whenever the intra prediction unit 109 or the inter prediction unit 110 needs it to generate a predicted image.
 The intra prediction mode memory 116 stores the intra prediction mode information 163 applied to prediction units whose encoding has finished, and this is referred to as reference intra prediction mode information 164 whenever the intra prediction unit 109 needs it to generate bidirectional prediction mode information. When the unidirectional intra predicted image generation unit 601 (described later) is applied in the intra prediction unit 109, the intra prediction mode information 163 corresponds to one piece of unidirectional intra prediction information (a prediction direction, or an index shown in FIGS. 8A and 8B, described later). When the bidirectional intra predicted image generation unit 602 (described later) is applied in the intra prediction unit 109, two pieces of unidirectional intra prediction information (prediction directions, or indices shown in FIGS. 8A and 8B, described later) correspond to the intra prediction mode information 163.
 Hereinafter, of the two pieces of unidirectional intra prediction information, the first unidirectional intra prediction mode is denoted IntraPredModeL0 and the second unidirectional intra prediction mode is denoted IntraPredModeL1. IntraPredModeL0 includes IntraPredTypeL0 and IntraAngleIdL0, and IntraPredModeL1 includes IntraPredTypeL1 and IntraAngleIdL1. As an example, when IntraPredMode[ puPartIdx ] = 34 is applied to the prediction unit, the intra prediction mode information 163 takes the form IntraPredTypeL0 = "Intra_Horizontal", IntraAngleIdL0 = "0", IntraPredTypeL1 = "Intra_Vertical", IntraAngleIdL1 = "0". As another embodiment, this may be converted to index information using the correspondence table shown in FIG. 4; that is, IntraPredType "Intra_Horizontal" with IntraAngleId "0" may be treated as IntraPredMode "1", and IntraPredType "Intra_Vertical" with IntraAngleId "0" as IntraPredMode "0", and used as the intra prediction mode information 163.
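The two mode representations described in this paragraph can be related with a small lookup, sketched below. Only the two mappings actually named in the text (Intra_Horizontal with IntraAngleId 0 → mode 1, Intra_Vertical with IntraAngleId 0 → mode 0) are included; the full table of FIG. 4 is not reproduced here, and any further entries would be assumptions.

```python
# Excerpt of the (IntraPredType, IntraAngleId) -> IntraPredMode mapping of
# Fig. 4, limited to the two entries stated in the text; all other entries
# of the table are unknown here.

MODE_TABLE = {
    ("Intra_Vertical", 0): 0,
    ("Intra_Horizontal", 0): 1,
}

def mode_index(pred_type, angle_id):
    """Convert an (IntraPredType, IntraAngleId) pair to a mode index."""
    return MODE_TABLE[(pred_type, angle_id)]
```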
 The intra prediction unit 109 performs intra prediction using the reference image signal 159 stored in the reference image memory 108 and the reference intra prediction mode information 164 stored in the intra prediction mode memory 116. For example, in H.264, an intra predicted image is generated by using the encoded reference pixel values adjacent to the prediction target block and filling in pixels (copying, or copying after interpolation) along a prediction direction such as the vertical or horizontal direction. FIG. 5(a) shows the prediction directions of intra prediction in H.264, and FIG. 5(b) shows the positional relationship between the reference pixels and the encoding target pixels in H.264. FIG. 5(c) shows the predicted image generation method of mode 1 (horizontal prediction), and FIG. 5(d) shows that of mode 4 (diagonal lower-right prediction).
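For concreteness, mode 1 (horizontal prediction) of FIG. 5(c) amounts to copying each left-neighbor reference pixel across its row. The sketch below assumes the reference pixels are already reconstructed and ignores boundary handling; names are illustrative.

```python
# Sketch of H.264-style horizontal intra prediction (mode 1): every pixel
# in row y is a copy of the reconstructed pixel immediately left of row y.

def intra_horizontal(left_refs, block_size):
    """left_refs[y] is the reconstructed reference pixel left of row y."""
    return [[left_refs[y]] * block_size for y in range(block_size)]

pred = intra_horizontal([10, 20, 30, 40], 4)  # 4x4 predicted block
```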
 In the non-patent literature, the prediction directions of H.264 are further extended to 34 directions, increasing the number of prediction modes. A predicted pixel value is created by performing 32-step (1/32-pixel precision) linear interpolation according to the prediction angle, and is copied along the prediction direction. Details of the intra prediction unit 109 used in this embodiment will be described later.
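When a prediction angle points between two integer reference positions, the predicted value is linearly interpolated in 32 steps. A plausible fixed-point form of that interpolation is sketched below; the rounding offset and the function interface are assumptions, and the angle table used by the embodiment is not reproduced here.

```python
# Hypothetical 1/32-pel linear interpolation between two adjacent reference
# samples; frac32 in 0..31 is the fractional position (illustrative
# rounding offset of 16 before the shift).

def interpolate_reference(refs, int_pos, frac32):
    a, b = refs[int_pos], refs[int_pos + 1]
    return ((32 - frac32) * a + frac32 * b + 16) >> 5

val = interpolate_reference([0, 64], 0, 8)  # 8/32 of the way from 0 to 64
```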
The inter prediction unit 110 performs inter prediction using the reference image signal 159 stored in the reference image memory 108. Specifically, the inter prediction unit 110 performs block matching between the prediction target block and the reference image signal 159 to derive the amount of motion displacement (a motion vector). Based on this motion vector, the inter prediction unit 110 performs interpolation (motion compensation) to generate an inter prediction image. In H.264, interpolation up to 1/4-pixel accuracy is possible. The derived motion vector is entropy-encoded as part of the prediction information 160.
The prediction selection switch 111 selects the output terminal of the intra prediction unit 109 or that of the inter prediction unit 110 according to the prediction information 160 from the prediction selection unit 112, and inputs the intra prediction image or the inter prediction image, as the predicted image signal 161, to the subtraction unit 101 and the addition unit 106. When the prediction information 160 indicates intra prediction, the prediction selection switch 111 connects to the output terminal of the intra prediction unit 109; when the prediction information 160 indicates inter prediction, it connects to the output terminal of the inter prediction unit 110.
The prediction selection unit 112 has a function of setting the prediction information 160 according to the prediction mode controlled by the encoding control unit 115. As described above, either intra prediction or inter prediction can be selected to generate the predicted image signal 161, and for each of intra prediction and inter prediction a plurality of modes can further be selected. The encoding control unit 115 determines one of the plural prediction modes of intra prediction and inter prediction as the optimal prediction mode, and the prediction selection unit 112 sets the prediction information 160 according to the determined optimal prediction mode.
For example, regarding intra prediction, prediction mode information is specified by the encoding control unit 115 to the intra prediction unit 109, and the intra prediction unit 109 generates the predicted image signal 161 according to this prediction mode information. The encoding control unit 115 may specify plural pieces of prediction mode information in ascending order of the prediction mode number, or in descending order. The encoding control unit 115 may also limit the prediction modes according to the characteristics of the input image. The encoding control unit 115 need not specify all prediction modes; it suffices to specify at least one piece of prediction mode information for the encoding target block.
For example, the encoding control unit 115 determines the optimal prediction mode using the cost function shown in the following equation (1):

  K = SAD + λ × OH    (1)
In equation (1) (the cost it yields is hereinafter referred to as the simple encoding cost), OH denotes the code amount of the prediction information 160 (for example, motion vector information and prediction block size information), and SAD denotes the sum of absolute differences between the prediction target block and the predicted image signal 161 (that is, the cumulative sum of the absolute values of the prediction error signal 152). Further, λ denotes a Lagrange multiplier determined based on the value of the quantization information (quantization parameter), and K denotes the encoding cost. When equation (1) is used, the prediction mode that minimizes the encoding cost K is determined as the optimal prediction mode from the viewpoint of the generated code amount and the prediction error. As modifications of equation (1), the encoding cost may be estimated from OH alone or SAD alone, or using a value obtained by applying a Hadamard transform to the SAD, or an approximation thereof.
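As a non-authoritative illustration of equation (1) (the function name and the example blocks below are assumptions, not part of the embodiment), the simple encoding cost can be sketched as:

```python
def simple_coding_cost(block, prediction, overhead_bits, lam):
    """Simple encoding cost K = SAD + lambda * OH of equation (1).

    block, prediction: 2-D lists of pixel values of the same size.
    overhead_bits: estimated code amount OH of the prediction information.
    lam: Lagrange multiplier derived from the quantization parameter.
    """
    sad = sum(abs(o - p)
              for row_o, row_p in zip(block, prediction)
              for o, p in zip(row_o, row_p))
    return sad + lam * overhead_bits

# The optimal mode minimizes K over the candidate prediction modes.
block = [[10, 12], [11, 13]]
candidates = {
    0: ([[10, 12], [11, 13]], 5),   # (predicted block, OH in bits)
    1: ([[9, 9], [9, 9]], 2),
}
costs = {mode: simple_coding_cost(block, pred, oh, lam=1.0)
         for mode, (pred, oh) in candidates.items()}
best_mode = min(costs, key=costs.get)
```

Mode 0 predicts perfectly (SAD = 0) but spends more overhead bits; with λ = 1.0 its cost of 5.0 still beats mode 1 (SAD = 10, cost 12.0), so mode 0 is selected.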
It is also possible to determine the optimal prediction mode using a provisional encoding unit (not shown). For example, the encoding control unit 115 determines the optimal prediction mode using the cost function shown in the following equation (2):

  J = D + λ × R    (2)
In equation (2), D denotes the sum of squared errors between the prediction target block and the locally decoded image (that is, the encoding distortion), R denotes the code amount, estimated by provisional encoding, of the prediction error between the prediction target block and the predicted image signal 161 of the prediction mode, and J denotes the encoding cost. Deriving the encoding cost J of equation (2) (hereinafter referred to as the detailed encoding cost) requires provisional encoding and local decoding for each prediction mode, so the circuit scale or the amount of computation increases. On the other hand, since the encoding cost J is derived from more accurate encoding distortion and code amount, the optimal prediction mode can be determined with high accuracy and high encoding efficiency is easily maintained. As modifications of equation (2), the encoding cost may be estimated from R alone or D alone, or using an approximation of R or D. These costs may also be used hierarchically. Based on information available in advance for the prediction target block (the prediction modes of surrounding pixel blocks, the result of image analysis, and so on), the encoding control unit 115 may narrow down beforehand the number of prediction mode candidates to be evaluated with equation (1) or equation (2).
As a modification of the present embodiment, performing a two-stage mode decision that combines equation (1) and equation (2) makes it possible to further reduce the number of prediction mode candidates while maintaining encoding performance. Unlike equation (2), the simple encoding cost of equation (1) requires no local decoding and can therefore be computed at high speed. Since the moving picture encoding apparatus of the present embodiment has many more prediction modes than H.264, mode decision using only the detailed encoding cost is not realistic. Therefore, as the first stage, mode decision using the simple encoding cost is performed on the prediction modes available for the pixel block, and prediction mode candidates are derived.
Here, the number of prediction mode candidates is changed by exploiting the property that the correlation between the simple encoding cost and the detailed encoding cost becomes higher as the value of the quantization parameter, which determines the coarseness of quantization, becomes larger.
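A minimal sketch of the two-stage decision described above; the QP threshold and candidate counts are hypothetical examples, not values specified by the embodiment:

```python
def two_stage_mode_decision(modes, simple_cost, detailed_cost, qp):
    """Stage 1: rank all available modes by the fast simple cost
    (equation (1)).  Stage 2: re-evaluate only the top candidates with
    the expensive detailed cost (equation (2)).  A larger quantization
    parameter means the two costs correlate better, so fewer candidates
    need the second stage.
    """
    num_candidates = 2 if qp >= 32 else 4   # hypothetical rule
    ranked = sorted(modes, key=simple_cost)
    candidates = ranked[:num_candidates]
    return min(candidates, key=detailed_cost)
```

For example, with toy cost functions the winner can change with QP: a coarser quantizer keeps fewer candidates, so a mode that only the detailed cost favors may be pruned in stage 1.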
Hereinafter, details of the intra prediction unit 109 according to the present embodiment will be described with reference to FIG. 6.

<Intra prediction unit 109>

The intra prediction unit 109 illustrated in FIG. 6 includes a unidirectional intra predicted image generation unit 601, a bidirectional intra predicted image generation unit 602, a prediction mode information setting unit 603, a selection switch 604, and a bidirectional intra prediction mode generation unit 605. First, the reference image signal 159 is input from the reference image memory 108 to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602. According to the prediction mode information controlled by the encoding control unit 115, the prediction mode information setting unit 603 sets the prediction mode to be generated by the unidirectional intra predicted image generation unit 601 or the bidirectional intra predicted image generation unit 602, and outputs the prediction mode 651. The bidirectional intra prediction mode generation unit 605 outputs bidirectional intra prediction mode information 652 according to the prediction mode 651 and the reference intra prediction mode information 164; its operation will be described later. The selection switch 604 has a function of switching between the output terminals of the two intra predicted image generation units according to the prediction mode 651. If the input prediction mode 651 is a unidirectional intra prediction mode, the switch is connected to the output terminal of the unidirectional intra predicted image generation unit 601; if the prediction mode 651 is a bidirectional intra prediction mode, the switch is connected to the output terminal of the bidirectional intra predicted image generation unit 602. Each of the intra predicted image generation units 601 and 602 generates the predicted image signal 161 according to the prediction mode 651.
The generated predicted image signal 161 (also referred to as the fifth predicted image signal) is output from the intra prediction unit 109. The output signal of the unidirectional intra predicted image generation unit 601 is also called the fourth predicted image signal, and the output signal of the bidirectional intra predicted image generation unit 602 is also called the third predicted image signal.
First, the prediction mode information setting unit 603 will be described in detail. FIGS. 7A and 7B show the number of prediction modes for each block size according to the present embodiment. PuSize indicates the size of the pixel block (prediction unit) to be predicted, and seven sizes from PU_2x2 to PU_128x128 are defined. IntraUniModeNum represents the number of unidirectional intra prediction modes, and IntraBiModeNum represents the number of bidirectional intra prediction modes. Number of modes is the total number of prediction modes for each pixel block (prediction unit) size. The number of unidirectional intra prediction modes and the number of bidirectional intra prediction modes may take values other than those in FIGS. 7A and 7B. When the number of bidirectional intra prediction modes is 0, bidirectional intra prediction is not performed for that pixel block size.
FIG. 8A shows the relationship between the prediction modes and the prediction methods for PU_4x4, PU_8x8, PU_16x16, and PU_32x32 of FIG. 7A. FIG. 10A shows the case of PU_64x64 or PU_128x128 of FIG. 7A, and FIG. 10B shows the case of PU_64x64 or PU_128x128 of FIG. 7B. Here, IntraPredMode indicates the prediction mode number, and IntraBipredFlag is a flag indicating whether the mode is bidirectional intra prediction. When the flag is 0, the prediction mode is a unidirectional intra prediction mode; when the flag is 1, it is a bidirectional intra prediction mode. When the flag is 1, the bidirectional intra prediction mode generation unit 605 generates the bidirectional intra prediction mode information 652 according to IntraBipredTypeIdx, which specifies the generation method of bidirectional intra prediction. When IntraBipredTypeIdx is 0, the two unidirectional intra prediction modes used for bidirectional intra prediction are set by a predetermined table in the first prediction mode generation unit 1901 described later. Hereinafter, the scheme in which the two unidirectional intra prediction modes used for bidirectional intra prediction are set in advance by a table is referred to as the fixed table scheme. FIG. 8A shows an example in which all bidirectional intra prediction modes use the fixed table scheme.
When IntraBipredTypeIdx is larger than 0, the two unidirectional intra prediction modes used for bidirectional intra prediction are set based on the reference intra prediction mode information 164. Hereinafter, the scheme in which the two unidirectional intra prediction modes are set based on the reference intra prediction mode information 164 is referred to as the direct scheme. IntraBipredTypeIdx takes different values depending on the method of deriving the two unidirectional intra prediction modes from the reference intra prediction mode information 164. Specific derivation methods will be described later.
Among the plural bidirectional intra prediction modes, all modes may use the fixed table scheme, or all modes may use the direct scheme. Alternatively, some modes may use the fixed table scheme and the remaining modes the direct scheme. FIG. 8B shows an example in which, of the eight bidirectional intra prediction modes, three use the fixed table scheme and the remaining five use the direct scheme.
IntraPredTypeLX indicates the prediction type of intra prediction. Intra_Vertical means that the vertical direction is used as the reference for prediction, and Intra_Horizontal means that the horizontal direction is used as the reference. Here, X in IntraPredTypeLX is 0 or 1. IntraPredTypeL0 corresponds to the first prediction mode of unidirectional or bidirectional intra prediction, and IntraPredTypeL1 corresponds to the second prediction mode of bidirectional intra prediction. IntraPredAngleId is an index indicating the prediction angle; the prediction angles actually used in generating the predicted values are shown in FIG. 11. Here, puPartIdx represents the index of the divided prediction unit in the quadtree division described with reference to FIG. 3B.
For example, when IntraPredMode is 4, IntraPredTypeL0 is Intra_Vertical, so the vertical direction is used as the reference for prediction. As can be seen from FIG. 8B, a total of 34 modes, IntraPredMode = 0 to 33, are unidirectional intra prediction modes, and a total of 8 modes, IntraPredMode = 34 to 41, are bidirectional intra prediction modes.
FIG. 8B shows the relationship between the prediction modes and the prediction methods for PU_32x32 of FIG. 7B. FIG. 8C shows the same relationship for PU_4x4 of FIG. 7C and for PU_4x4, PU_8x8, and PU_16x16 of FIG. 7D. FIGS. 8D and 8E show the relationship for PU_32x32 of FIG. 7D. FIG. 8F shows the relationship for PU_4x4 of FIGS. 7A, 7B, and 7C and for PU_4x4 to PU_16x16 of FIG. 7D. FIG. 8G shows the relationship for PU_32x32 of FIGS. 7C and 7D.
Under the control of the encoding control unit 115, the prediction mode information setting unit 603 sets the above-described prediction information corresponding to the designated prediction mode 651 in the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602, and outputs the prediction mode 651 to the selection switch 604.
Next, the unidirectional intra predicted image generation unit 601 will be described in detail. The unidirectional intra predicted image generation unit 601 has a function of generating the predicted image signal 161 for the plural prediction directions shown in FIG. 12. In FIG. 12, there are 33 different prediction directions with respect to the vertical and horizontal coordinate axes indicated by bold lines, and the directions of the representative prediction angles of H.264 are indicated by arrows. In the present embodiment, 33 prediction directions are prepared, each along a line drawn from the origin as indicated by an arrow. In addition, as in H.264, DC prediction, which predicts with the average value of the available reference pixels, is provided, giving 34 prediction modes in total.
When IntraPredMode = 4, IntraPredAngleIdL0 is -4, so the predicted image signal 161 is generated in the prediction direction indicated by IntraPredMode = 4 in FIG. 12. The arrows within the range labeled "Intra_Vertical" at the bottom of FIG. 12 indicate prediction modes whose prediction type is Intra_Vertical, and the arrows within the range labeled "Intra_Horizontal" on the right side of FIG. 12 indicate prediction modes whose prediction type is Intra_Horizontal.
Next, the predicted image generation method of the unidirectional intra predicted image generation unit 601 will be described. Here, a predicted image value is generated based on the input reference image signal 159, and the pixels are copied in the prediction direction described above. The predicted image value is generated by interpolation with 1/32-pixel accuracy. FIG. 11 shows the relationship between IntraPredAngleIdLX and intraPredAngle, which is used for generating predicted image values; intraPredAngle indicates the prediction angle actually used when generating the predicted values. For example, when the prediction type is Intra_Vertical and intraPredAngle in FIG. 11 is a positive value, the predicted values are generated as in the following equation (3):

  pred(k, m) = ((32 - iFact) × ref[m + iIdx + 1] + iFact × ref[m + iIdx + 2] + 16) >> 5,
  where iIdx = ((k + 1) × intraPredAngle) >> 5 and iFact = ((k + 1) × intraPredAngle) & 31    (3)

Here, BLK_SIZE indicates the size of the pixel block (prediction unit) over which k and m range, ref[] indicates the array in which the reference image signal is stored, and pred(k, m) indicates the generated predicted image signal 161.
For conditions other than the above, predicted values can be generated in a similar manner according to the table of FIG. 11. For example, the predicted values of the prediction mode indicated by IntraPredMode = 1 are identical to those of the H.264 horizontal prediction shown in FIG. 5(c). This concludes the description of the unidirectional intra predicted image generation unit 601 in the present embodiment.
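The 1/32-pel angular copy described above can be sketched as follows. This is a non-authoritative reconstruction for the Intra_Vertical type with a positive intraPredAngle; the layout of the ref[] array (ref[1] sits directly above the top-left sample of the block) is an assumption:

```python
def angular_pred_vertical(ref, intra_pred_angle, blk_size):
    """Angular intra prediction (vertical type, positive angle).

    ref: the top reference row; ref[1] is assumed to be the pixel just
         above the top-left sample of the block.
    Returns a blk_size x blk_size predicted block, interpolating the
    reference row with 1/32-pixel accuracy along the prediction angle.
    """
    pred = [[0] * blk_size for _ in range(blk_size)]
    for k in range(blk_size):            # row index within the block
        offset = (k + 1) * intra_pred_angle
        i_idx = offset >> 5              # integer part of displacement
        i_fact = offset & 31             # 1/32-pel fractional part
        for m in range(blk_size):        # column index within the block
            a = ref[m + i_idx + 1]
            b = ref[m + i_idx + 2]
            pred[k][m] = ((32 - i_fact) * a + i_fact * b + 16) >> 5
    return pred
```

With intra_pred_angle = 0 every row is a plain vertical copy of the reference row; a nonzero angle shifts each successive row along the reference, blending neighboring reference pixels when the shift is fractional.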
Next, the bidirectional intra predicted image generation unit 602 will be described in detail. FIG. 13 shows a block diagram of the bidirectional intra predicted image generation unit 602. The bidirectional intra predicted image generation unit 602 includes a first unidirectional intra predicted image generation unit 1301, a second unidirectional intra predicted image generation unit 1302, and a weighted average unit 1303, and has a function of generating two unidirectional intra predicted images based on the input reference image signal 159 and generating the predicted image signal 161 by taking their weighted average.
The first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 have identical functions: each generates a predicted image signal corresponding to the prediction mode given according to the prediction mode information controlled by the encoding control unit 115. The first unidirectional intra predicted image generation unit 1301 outputs a first predicted image signal 1351, and the second unidirectional intra predicted image generation unit 1302 outputs a second predicted image signal 1352. Both predicted image signals are input to the weighted average unit 1303, where weighted averaging is performed. The output signal of the weighted average unit 1303 is also called the third predicted image signal.
The table of FIG. 14 is used to derive the two unidirectional intra prediction modes from a bidirectional intra prediction mode. Here, BipredIdx is derived using the following equation (4):

  BipredIdx = IntraPredMode - IntraUniModeNum    (4)
For example, when PuSize = PU_8x8 and IntraPredMode = 34, FIG. 7A or FIG. 7B gives IntraUniModeNum = 34, so BipredIdx = 0. As a result, it follows from FIG. 14 that the first unidirectional intra prediction mode (MappedBi2Uni(0, idx)) is 1 and the second unidirectional intra prediction mode (MappedBi2Uni(1, idx)) is 0. For other values of PuSize and IntraPredMode, the two prediction modes can be derived in the same way. Hereinafter, the first unidirectional intra prediction mode is expressed as IntraPredModeL0 and the second unidirectional intra prediction mode as IntraPredModeL1.
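The derivation above can be sketched with a small lookup table. Only the BipredIdx = 0 entry (1, 0) is confirmed by the text; the remaining entries below are illustrative placeholders, not the actual MappedBi2Uni table of FIG. 14:

```python
# MAPPED_BI2UNI[BipredIdx] -> (IntraPredModeL0, IntraPredModeL1).
# Entry 0 is the (1, 0) pair stated in the text; entries 1 and 2 are
# hypothetical placeholders.
MAPPED_BI2UNI = {0: (1, 0), 1: (0, 6), 2: (1, 8)}

def bidirectional_to_unidirectional(intra_pred_mode, intra_uni_mode_num):
    """Derive the two unidirectional modes of a bidirectional mode via
    BipredIdx = IntraPredMode - IntraUniModeNum (equation (4))."""
    bipred_idx = intra_pred_mode - intra_uni_mode_num
    return MAPPED_BI2UNI[bipred_idx]
```

For PU_8x8, IntraPredMode = 34 and IntraUniModeNum = 34 give BipredIdx = 0, so the pair (1, 0) is returned, matching the worked example.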
The first predicted image signal 1351 and the second predicted image signal 1352 generated in this way by the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302, respectively, are input to the weighted average unit 1303.
The weighted average unit 1303 computes the Euclidean distance or the city-block distance (Manhattan distance) based on the prediction directions of IntraPredModeL0 and IntraPredModeL1, and derives the weight components used in the weighted averaging. The weight component of each pixel is the reciprocal of the Euclidean or city-block distance from the reference pixel used for prediction, and is generalized by the following equation (5):

  ρ(n) = 1 / ΔL(n)    (5)
When the Euclidean distance is used, ΔL is expressed by the following equation (6):

  ΔL(n) = sqrt(Δx(n)² + Δy(n)²)    (6)
On the other hand, when the city-block distance is used, ΔL is expressed by the following equation (7):

  ΔL(n) = |Δx(n)| + |Δy(n)|    (7)
The weight table for each prediction mode is generalized by the following equation (8):

  wLX(n) = ρLX(n) / (ρL0(n) + ρL1(n)),  X = 0, 1    (8)
Here, ρL0(n) represents the weight component of pixel position n for IntraPredModeL0, and ρL1(n) represents the weight component of pixel position n for IntraPredModeL1. Accordingly, the final prediction signal at pixel position n is given by the following equation (9):

  Bipred(n) = (ρL0(n) × PredL0(n) + ρL1(n) × PredL1(n)) / (ρL0(n) + ρL1(n))    (9)
Here, Bipred(n) represents the predicted image signal at pixel position n, and PredL0(n) and PredL1(n) are the predicted image signals of IntraPredModeL0 and IntraPredModeL1, respectively.
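A minimal sketch of the per-pixel weighted average described above, using reciprocal-distance weights; the distance inputs are assumed to come from tables like those of FIG. 16:

```python
def bipred_pixel(pred_l0, pred_l1, dist_l0, dist_l1):
    """Weighted bidirectional prediction at one pixel (equation (9)).

    dist_l0, dist_l1: city-block (or Euclidean) distances from the
    reference pixels used by IntraPredModeL0 / IntraPredModeL1 to this
    pixel.  Each weight component is the reciprocal of its distance
    (equation (5)); the result is normalized by the sum of the weights.
    """
    rho_l0 = 1.0 / dist_l0
    rho_l1 = 1.0 / dist_l1
    return (rho_l0 * pred_l0 + rho_l1 * pred_l1) / (rho_l0 + rho_l1)
```

A pixel equally far from both reference pixels gets the plain average of the two predictions; a pixel closer to the L0 reference is pulled toward the L0 prediction.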
In the present embodiment, two prediction modes are selected to generate the prediction signal, but as another embodiment three or more prediction modes may be selected to generate the predicted values. In that case, the ratio of the reciprocals of the spatial distances from the reference pixels to the predicted pixel may be set as the weighting factors.
In the present embodiment, the reciprocal of the Euclidean or city-block distance from the reference pixel used by the prediction mode is used directly as the weight component. As a modification, the weight component may instead be set using a distribution model with the Euclidean or city-block distance from the reference pixel as its variable. The distribution model uses at least one of a linear model, an M-th order function (M ≥ 1), a nonlinear function such as a one-sided Laplace distribution or a one-sided Gaussian distribution, and a fixed value independent of the distance from the reference pixel. When a one-sided Gaussian distribution is used as the model, the weight component is expressed by the following equation (10):

  ρ(n) = A × exp(-ΔL(n)² / (2σ²))    (10)
Here, ρ(n) is the weight component at predicted pixel position n, σ² is the variance, and A is a constant (A > 0).
When a one-sided Laplace distribution is used as the model, the weight component is expressed by the following equation (11):

  ρ(n) = B × exp(-sqrt(2) × ΔL(n) / σ)    (11)
ここで、σは標準偏差、Bは定数(B>0)である。 Here, σ is a standard deviation, and B is a constant (B> 0).
 An isotropic correlation model or an elliptic correlation model obtained by modeling the autocorrelation function, or a generalized Gaussian model that generalizes the Laplace and Gaussian functions, may also be used as the weight component model.
 If the weight components given by Equations (5), (8), (10), and (11) were computed each time a predicted image is generated, multiple multipliers would be required and the hardware scale would increase. The circuit scale required for this computation can therefore be reduced by computing the weight components in advance according to the relative distance for each prediction mode and holding them in memory. Here, the method of deriving the weight components when the city-block distance is used is described.
 The city-block distance ΔLL0 of IntraPredModeL0 and the city-block distance ΔLL1 of IntraPredModeL1 are calculated from Equation (7). The relative distance varies with the prediction directions (also called reference prediction directions) of the two prediction modes. As examples, representative distances for PuSize = PU_4x4 are shown in FIGS. 15A, 15B, and 15C. FIG. 15A shows the city-block distances for IntraPredModeLX = 0, FIG. 15B those for IntraPredModeLX = 1, and FIG. 15C those for IntraPredModeLX = 3. Similarly, the distance can be derived using Equation (6) or Equation (7) according to each prediction mode. However, for DC prediction (IntraPredModeLX = 2), the distance is set to 2 at all pixel positions. FIG. 16 shows distance tables for six representative prediction modes for PuSize = PU_4x4. When the number of IntraPredModeLX values is large, the size of these distance tables may increase.
 In this embodiment, the required memory is reduced by sharing distance tables among prediction modes with close prediction angles. FIG. 17 shows the mapping of IntraPredModeLX used for distance table derivation. Here, tables are prepared only for the prediction modes whose prediction angles fall on 45-degree steps and for the prediction mode corresponding to DC prediction; all other prediction angles are mapped to the nearest of the prepared reference prediction modes. When a prediction angle is equidistant from two reference prediction modes, it is mapped to the one with the smaller index. The prediction mode indicated by MappedIntraPredMode is looked up in FIG. 17, and the distance table can be derived.
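The table-sharing idea can be sketched as follows. The mode-to-angle assignments below are hypothetical placeholders, not the real IntraPredModeLX geometry; the actual mapping is the MappedIntraPredMode table of FIG. 17.

```python
# Hypothetical sketch of mapping an arbitrary prediction angle to the nearest
# reference mode whose distance table is actually stored. Angle values are
# illustrative placeholders for the 45-degree-step reference modes.

REFERENCE_ANGLES = {0: 90.0, 1: 0.0, 3: 45.0, 4: 135.0}  # mode index -> angle

def map_to_reference_mode(angle: float) -> int:
    """Pick the reference mode with the closest angle; on a tie, the smaller
    mode index wins, as described above."""
    return min(sorted(REFERENCE_ANGLES),
               key=lambda m: abs(REFERENCE_ANGLES[m] - angle))
```

Iterating over the sorted mode indices makes `min` return the smaller index when two reference modes are equally close.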
 Using the distance tables, the relative distance for each pixel between the two prediction modes is calculated with the following equation.
Figure JPOXMLDOC01-appb-M000012
Here, BLK_WIDTH and BLK_HEIGHT denote the width and height of the pixel block (prediction unit), respectively, and DistDiff(n) denotes the relative distance between the two prediction modes at pixel position n. Using Equation (12), the final prediction signal at pixel position n is given by the following equation.
Figure JPOXMLDOC01-appb-M000013
To avoid the increase in hardware scale caused by fractional arithmetic, the weight components are scaled in advance and the computation is converted to integer arithmetic, giving the following equation.
Figure JPOXMLDOC01-appb-M000014
Here, for example, when the fractional part is expressed with 10-bit precision, WM = 1024, Offset = 512, and SHIFT = 10. These satisfy the following relationship.
Figure JPOXMLDOC01-appb-M000015
SHIFT indicates the arithmetic precision of the fixed-point weight computation; an optimal combination may be chosen by balancing coding performance against the circuit scale of a hardware implementation.
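A minimal sketch of this fixed-point blend follows. The exact form of Equation (14) is given only in the figures, so the sketch assumes the common integer formulation Bipred(n) = (w·PredL0(n) + (WM − w)·PredL1(n) + Offset) >> SHIFT with the 10-bit constants stated above.

```python
# Sketch of the integer-arithmetic weighted blend with 10-bit precision.
# Assumes Bipred = (w*PredL0 + (WM - w)*PredL1 + Offset) >> SHIFT, which is
# an assumption about Equation (14), not a reproduction of it.

SHIFT = 10                  # fractional precision of the weight component
WM = 1 << SHIFT             # 1024: scaled representation of weight 1.0
OFFSET = 1 << (SHIFT - 1)   # 512: rounding offset

def bipred_pixel(pred_l0: int, pred_l1: int, w: int) -> int:
    """Blend two unidirectional predictions with an integer weight w in [0, WM]."""
    return (w * pred_l0 + (WM - w) * pred_l1 + OFFSET) >> SHIFT
```

With w = WM the output equals PredL0, with w = 0 it equals PredL1, and with w = WM/2 it is the rounded average, all without any floating-point hardware.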
 FIGS. 18A and 18B show examples in which the weight components based on the one-sided Laplace distribution model of this embodiment are tabulated. FIG. 18A shows the weight component table for PuSize = PU_4x4, and FIG. 18B shows the table for PuSize = PU_8x8. Tables for the other PuSize values can likewise be derived using Equations (5), (8), (10), and (11).
 As another embodiment of the weighting factor, a predetermined weighting factor may be prepared as a table for each pixel position or each pixel group for every combination of the unidirectional intra prediction modes IntraPredModeL0 and IntraPredModeL1, and Bipred(n) may be calculated from it. In this case, the prediction is given by the following equation.
Figure JPOXMLDOC01-appb-M000016
Here, ωt(n) is the weighting factor at pixel position n and takes different values depending on IntraPredModeL0 and IntraPredModeL1.
 <Bidirectional Intra Prediction Mode Generation Unit 605>
 Next, the bidirectional intra prediction mode generation unit 605 is described in detail.
 FIG. 19 shows a block diagram of the bidirectional intra prediction mode generation unit 605. The bidirectional intra prediction mode generation unit 605 includes a first prediction mode generation unit 1901 and a second prediction mode generation unit 1902, and has the function of outputting bidirectional intra prediction mode information 652, which is a combination of two unidirectional intra predictions, based on the prediction mode 651 and the reference intra prediction mode information 164. According to IntraBipredTypeIdx contained in the prediction mode 651, the output terminal of the selection switch 1903 is connected to either the first prediction mode generation unit 1901 or the second prediction mode generation unit 1902. The first prediction mode generation unit 1901 outputs a combination of two unidirectional intra predictions according to the fixed-table method described above. The table in FIG. 20 is used to derive the combination of two unidirectional intra predictions corresponding to IntraPredMode, and corresponds to the prediction mode configuration of FIG. 8A with the unidirectional intra prediction modes excluded. BipredIdx in the figure is the index of the bidirectional intra prediction mode and is derived using Equation (4) above.
 For example, when PuSize = PU_8x8 and IntraPredMode = 34, FIG. 7A or 7B shows that IntraUniModeNum = 34, and therefore BipredIdx = 0. Accordingly, the first unidirectional intra prediction mode IntraPredModeL0 and the second unidirectional intra prediction mode IntraPredModeL1 for IntraPredMode = 34 are derived from the table.
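The fixed-table (first generation) derivation can be sketched as follows; the table entries are illustrative placeholders, not the actual pairs of FIG. 20.

```python
# Sketch of the first generation (fixed-table) method: index into a
# predefined table of unidirectional mode pairs. Table contents are
# hypothetical, standing in for FIG. 20.

INTRA_UNI_PRED_MODE_NUM = 34  # unidirectional mode count for PuSize = PU_8x8

# BipredIdx -> (IntraPredModeL0, IntraPredModeL1); placeholder contents
BIPRED_TABLE = {0: (0, 1), 1: (0, 2), 2: (1, 2)}

def derive_bipred_modes(intra_pred_mode: int):
    """BipredIdx = IntraPredMode - IntraUniPredModeNum (Equation (4)), then
    look up the pair of unidirectional modes in the fixed table."""
    bipred_idx = intra_pred_mode - INTRA_UNI_PRED_MODE_NUM
    return BIPRED_TABLE[bipred_idx]
```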
 The table for deriving IntraPredModeL0 and IntraPredModeL1 from BipredIdx is not limited to FIG. 20; any of the unidirectional intra prediction modes shown in FIGS. 8A and 8B may be set as IntraPredModeL0 and IntraPredModeL1. Hereinafter, the derivation of the bidirectional intra prediction mode by the first prediction mode generation unit 1901 is referred to as the first generation method.
 The second prediction mode generation unit 1902 outputs bidirectional intra prediction mode information 652, a combination of two unidirectional intra predictions, using the reference intra prediction mode information 164 according to the direct method described above.
 A specific method for deriving the bidirectional intra prediction mode information 652 is described below. In the following example, the reference intra prediction mode information 164 of the already encoded adjacent blocks A and B, to which pixel positions a and b respectively belong as shown in FIG. 21, is used. Hereinafter, the reference intra prediction mode information 164 of adjacent block A is called IntraPredModeA, and that of adjacent block B is called IntraPredModeB. When bidirectional intra prediction has been applied to adjacent block A, the first and second unidirectional intra prediction modes of IntraPredModeA are called IntraPredModeAL0 and IntraPredModeAL1, respectively. When unidirectional intra prediction has been applied to adjacent block A, IntraPredModeA carries the same information as IntraPredModeAL0, and IntraPredModeAL1 is set to a predetermined fixed value (for example, −1). The same notation, IntraPredModeBL0 and IntraPredModeBL1, is used for adjacent block B.
 The second prediction mode generation unit 1902 sets IntraPredModeAL0 as the first unidirectional intra prediction mode (IntraPredModeL0) and IntraPredModeBL0 as the second unidirectional intra prediction mode (IntraPredModeL1) in the bidirectional intra prediction mode information 652. When bidirectional intra prediction has been applied to adjacent blocks A and B, IntraPredModeAL1 may be set instead of IntraPredModeAL0, and IntraPredModeBL1 instead of IntraPredModeBL0. Hereinafter, this derivation by the second prediction mode generation unit 1902 is referred to as the second generation method.
 As another embodiment of the second prediction mode generation unit 1902, either IntraPredModeAL0 or IntraPredModeBL1 may be set as the first unidirectional intra prediction mode (IntraPredModeL0), and a prediction mode obtained by modifying IntraPredModeL0 in a predetermined way may be set as the second unidirectional intra prediction mode (IntraPredModeL1). FIG. 22A shows a table for setting, as the second unidirectional intra prediction mode, the intra prediction mode whose prediction direction is adjacent to that of the first unidirectional intra prediction mode. When two intra prediction modes have prediction directions adjacent to the first unidirectional intra prediction mode, the one with the smaller intra prediction mode index IntraPredMode is selected. Because deriving IntraPredModeL1 with the table of FIG. 22A performs bidirectional intra prediction using mutually adjacent prediction directions, a filtering effect that removes noise from the predicted image signal is obtained, and the prediction efficiency improves. Hereinafter, this derivation is referred to as the third generation method. The table of FIG. 22B, which gives the prediction direction adjacent on the opposite side to that of FIG. 22A, may also be used.
 As yet another embodiment of the second prediction mode generation unit 1902, IntraPredModeL1 may be derived using the table shown in FIG. 22C instead of the tables in FIGS. 22A and 22B. In FIG. 22C, IntraPredModeL0 and IntraPredModeL1 have reversed prediction directions. This makes it possible to perform interpolative prediction that sandwiches the prediction unit being encoded, improving the prediction efficiency. Hereinafter, this derivation by the second prediction mode generation unit 1902 is referred to as the fourth generation method.
 As yet another embodiment of the second prediction mode generation unit 1902, when bidirectional intra prediction has been applied to an adjacent block, that block's bidirectional intra prediction modes may be set as IntraPredModeL0 and IntraPredModeL1. When adjacent block A uses bidirectional intra prediction, IntraPredModeAL0 is set as IntraPredModeL0 and IntraPredModeAL1 as IntraPredModeL1. The same applies when adjacent block B uses bidirectional intra prediction, so its description is omitted. Hereinafter, this derivation by the second prediction mode generation unit 1902 is referred to as the fifth generation method.
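The second through fifth generation methods described above can be summarized in a compact sketch. The adjacency and reversal tables below are hypothetical stand-ins for FIGS. 22A and 22C, and the mode values are illustrative.

```python
# Compact sketch of generation methods two through five. ADJACENT and
# REVERSED are placeholder tables, not the real FIG. 22A / FIG. 22C contents.

ADJACENT = {0: 5, 1: 6}   # placeholder for FIG. 22A: adjacent prediction direction
REVERSED = {0: 9, 1: 8}   # placeholder for FIG. 22C: reversed prediction direction

def second_generation(mode_a_l0, mode_b_l0):
    # L0 taken from adjacent block A, L1 from adjacent block B
    return mode_a_l0, mode_b_l0

def third_generation(mode_l0):
    # L1 is the direction adjacent to L0 (filtering effect)
    return mode_l0, ADJACENT[mode_l0]

def fourth_generation(mode_l0):
    # L1 is the reverse of L0 (interpolative prediction)
    return mode_l0, REVERSED[mode_l0]

def fifth_generation(mode_a_l0, mode_a_l1):
    # reuse the bidirectional pair of a bidirectionally predicted neighbor
    return mode_a_l0, mode_a_l1
```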
 As yet another embodiment of the second prediction mode generation unit 1902, IntraPredModeL1 may be derived from IntraPredModeL0 using a predetermined table. Hereinafter, this derivation by the second prediction mode generation unit 1902 is referred to as the sixth generation method.
 <Bidirectional Intra Prediction Mode Generation Unit 605: Another Embodiment>
 Another embodiment of the bidirectional intra prediction mode generation unit 605 is described with reference to FIG. 23. It differs from the bidirectional intra prediction mode generation unit 605 shown in FIG. 19 in that a third prediction mode generation unit 2301 and an Nth prediction mode generation unit 2302 are added. The second prediction mode generation unit 1902, the third prediction mode generation unit 2301, and the Nth prediction mode generation unit 2302 in FIG. 23 each generate bidirectional intra prediction mode information 652 based on the reference intra prediction mode information 164, but their derivation methods differ from one another. Here, (N−1) is the total number of prediction mode generation units that generate the bidirectional intra prediction mode information 652 based on the reference intra prediction mode information 164.
 FIG. 24 shows an example of the generation methods of IntraPredModeL0 and IntraPredModeL1 for each prediction mode generation unit (from the second to the eighth) when N = 8. Each prediction mode generation unit uses one of the bidirectional prediction mode generation methods described above (the second through sixth generation methods). For the second generation method, and likewise for the third through sixth generation methods, the adjacent block to be used is indicated. The correspondence between the second through Nth prediction mode generation units and IntraBipredTypeIdx is not limited to FIG. 24 and may be defined in any way.
 FIGS. 8A and 8B show the prediction mode configurations of this embodiment. FIG. 8B shows 34 unidirectional intra predictions and 8 bidirectional intra predictions. Among the bidirectional intra predictions, for IntraPredMode values 34, 35, and 36, the bidirectional intra prediction mode information 652 is generated by the first prediction mode generation unit 1901 using the first generation method. For IntraPredMode values 37 through 41, the bidirectional intra prediction mode information 652 is generated using the second prediction mode generation unit 1902, the third prediction mode generation unit 2301, and the fourth and fifth prediction mode generation units (not shown) described in this embodiment.
 As another embodiment, the numbers of unidirectional and bidirectional intra predictions may vary with, for example, the size of the prediction unit, as shown in FIGS. 7A and 7B. Among the bidirectional intra prediction modes, the number of modes using the first prediction mode generation unit 1901 and the number using the second through Nth prediction mode generation units 1902 to 2302 may each be set to any value (the number of bidirectional modes may also be increased). FIG. 9B shows an example with 17 unidirectional intra predictions and 4 bidirectional intra predictions, in which all four bidirectional intra predictions use the second prediction mode generation unit 1902 and later units.
 The correspondence among IntraBipredTypeIdx, IntraPredModeL0, and IntraPredModeL1 is not limited to those shown in FIGS. 8B and 9B. The correspondence can be set arbitrarily as long as the image encoding apparatus of this embodiment and the corresponding image decoding apparatus hold the same correspondence information in advance.
 <Handling Duplicate Bidirectional Intra Prediction Modes 1: Assigning Another Bidirectional Intra Prediction Mode>
 When a bidirectional intra prediction mode generated by the second through Nth prediction mode generation units duplicates another such mode or the bidirectional intra prediction mode output from the first prediction mode generation unit 1901, another bidirectional prediction mode, for example a bidirectional intra prediction mode derived by the first prediction mode generation unit 1901, may be used instead. In this case, the total number of bidirectional prediction modes is always constant. FIGS. 25A, 25B, and 25C show examples of duplicated bidirectional intra prediction modes. FIG. 25A shows a case where, when BipredIdx (= IntraPredMode − IntraUniPredModeNum) is 5, the mode duplicates a bidirectional intra prediction mode produced by another generation method. In this case, IntraBipredType for that BipredIdx is reset to 0, and the first prediction mode generation unit 1901 generates the bidirectional intra prediction mode according to the table shown in FIG. 20. In this example, "Intra_Vertical" with "−8" and Intra_DC are newly selected as the bidirectional intra prediction mode.
 As another embodiment for duplicated bidirectional intra prediction modes, rather than using the bidirectional intra prediction mode with the same BipredIdx in the first prediction mode generation unit 1901, another bidirectional intra prediction mode shown in FIG. 20 may be used. However, any bidirectional intra prediction mode whose BipredIdx has already been assigned in the first prediction mode generation unit 1901 is excluded from use. In the case of FIG. 25A, the modes for BipredIdx 0 through 2 are excluded, so the bidirectional intra prediction modes of FIG. 20 with BipredIdx 3 or greater become usable.
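The fall-back described above can be sketched as a scan of the fixed table that skips both already-generated pairs and already-reserved indices; the table contents are illustrative placeholders for FIG. 20.

```python
# Sketch of duplicate handling 1: fall back to an unused fixed-table entry.
# FIXED_TABLE contents are hypothetical, standing in for FIG. 20.

FIXED_TABLE = [(0, 2), (1, 2), (0, 1), (3, 2), (4, 2)]

def resolve_duplicate(used_pairs, reserved_bipred_idx):
    """Return the first fixed-table pair whose BipredIdx is not already
    assigned to the first generation method and whose mode pair has not
    already been generated; None if the table is exhausted."""
    for idx, pair in enumerate(FIXED_TABLE):
        if idx in reserved_bipred_idx or pair in used_pairs:
            continue
        return pair
    return None
```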
 <Handling Duplicate Bidirectional Intra Prediction Modes 2': Assigning Another Unidirectional Intra Prediction Mode>
 When a bidirectional intra prediction mode generated by the second through Nth prediction mode generation units duplicates another such mode or the bidirectional intra prediction mode output from the first prediction mode generation unit 1901, a unidirectional intra prediction mode may be substituted instead, as shown in FIG. 25B. The substituted unidirectional intra prediction mode is preferably one whose prediction direction differs from those of the existing prediction mode candidates. Specifically, for a prediction unit using the prediction mode configuration of FIG. 9B, there are 17 unidirectional intra prediction modes, so a unidirectional intra prediction mode other than those 17, out of the maximum of 34 modes shown in FIG. 12, is used.
 <Handling Duplicate Bidirectional Intra Prediction Modes 2: Changing the Code Table>
 As another embodiment for the case where a bidirectional intra prediction mode generated by the second through Nth prediction mode generation units duplicates another such mode or the bidirectional intra prediction mode output from the first prediction mode generation unit 1901, the bidirectional intra prediction mode information may be encoded with the total number of usable bidirectional prediction modes reduced, as shown in FIG. 25C, instead of substituting another bidirectional prediction mode. In the example of FIG. 25C, the total number of bidirectional intra prediction modes is 8, and the mode for BipredIdx = 5 duplicates a bidirectional intra prediction mode produced by another generation method. In this case, the bidirectional prediction mode information is encoded with the total number of bidirectional prediction modes set to 7. The average code amount required for the bidirectional prediction mode information is therefore generally smaller than when the total is 8. In this case, the total number of bidirectional prediction modes may vary from prediction unit to prediction unit.
 <Handling Color Difference Signals>
 Next, the intra prediction method for color difference signals is described.
 FIG. 26 shows the prediction mode configuration for color difference signals in this embodiment. In FIG. 26, intra_pred_mode_chroma denotes the prediction mode index for the color difference signal. For intra_pred_mode_chroma values 0 through 3, a predetermined unidirectional intra prediction (vertical, horizontal, DC, or diagonal) is performed. When intra_pred_mode_chroma is 4, the prediction mode IntraPredMode of the luminance signal in the prediction unit being encoded is applied as the prediction mode for the color difference signal. When the prediction unit uses the prediction mode configuration shown in FIG. 8B, unidirectional intra prediction is applied for IntraPredMode values 0 through 33, and the bidirectional intra prediction described above is applied for values of 34 or more.
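The chroma mode selection just described can be sketched as a small dispatch; the mode labels are illustrative names for the four predefined unidirectional predictions.

```python
# Sketch of chroma prediction mode selection per FIG. 26. The string labels
# are illustrative; the threshold 34 follows the FIG. 8B configuration.

FIXED_CHROMA_MODES = ["vertical", "horizontal", "dc", "diagonal"]
INTRA_UNI_PRED_MODE_NUM = 34

def chroma_pred_mode(intra_pred_mode_chroma: int, luma_intra_pred_mode: int):
    """Values 0..3 select a fixed unidirectional mode; 4 reuses the luma
    mode, which denotes bidirectional prediction when it is 34 or more."""
    if intra_pred_mode_chroma < 4:
        return FIXED_CHROMA_MODES[intra_pred_mode_chroma]
    return luma_intra_pred_mode
```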
 As another embodiment of the intra prediction method for color difference signals, when intra_pred_mode_chroma is 4 and bidirectional intra prediction is applied in the luminance prediction mode IntraPredMode, either one of the two unidirectional intra prediction modes (IntraPredModeL0 or IntraPredModeL1) may be applied as the prediction mode for the color difference signal.
 The above is the detailed description of the intra prediction unit 109 according to this embodiment of the present invention.
 <Modification That Reduces the Processing Amount on the Encoder Side of the Intra Prediction Unit 109>
 As a modification of this embodiment, the internal configuration of the intra prediction unit 109 may be that shown in FIG. 27. Compared with the configuration of the intra prediction unit 109 shown in FIG. 6, a primary image buffer 2701 is added, and the bidirectional intra predicted image generation unit 602 is replaced with a weighted average unit 2702. The primary image buffer 2701 temporarily holds the predicted image signal 161 of each prediction mode generated by the unidirectional intra predicted image generation unit 601, and outputs the predicted image signal 161 corresponding to the required prediction mode to the weighted average unit 2702 according to the prediction mode controlled by the encoding control unit 115 and the bidirectional intra prediction mode information 652 output by the bidirectional intra prediction mode generation unit 605. This eliminates the need for the bidirectional intra predicted image generation unit 602 to hold the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302, so the hardware scale can be reduced.
 <Syntax structure 1>
 Hereinafter, the syntax used by the image encoding apparatus 100 of FIG. 1 will be described.
 The syntax indicates the structure of the encoded data (for example, the encoded data 162 in FIG. 1) produced when the image encoding apparatus encodes moving image data. When decoding this encoded data, the moving image decoding apparatus interprets the syntax with reference to the same syntax structure. FIG. 28 illustrates an example of the syntax 2800 used by the moving image encoding apparatus of FIG. 1.
 The syntax 2800 includes three parts: a high-level syntax 2801, a slice-level syntax 2802, and a coding tree level syntax 2803. The high-level syntax 2801 includes syntax information of layers higher than the slice. A slice refers to a rectangular or contiguous region included in a frame or a field. The slice-level syntax 2802 includes the information necessary for decoding each slice. The coding tree level syntax 2803 includes the information necessary for decoding each coding tree (i.e., each coding tree unit). Each of these parts includes more detailed syntax.
 The high-level syntax 2801 includes sequence- and picture-level syntax such as a sequence parameter set syntax 2804 and a picture parameter set syntax 2805. The slice-level syntax 2802 includes a slice header syntax 2806, a slice data syntax 2807, and the like. The coding tree level syntax 2803 includes a coding tree unit syntax 2808, a prediction unit syntax 2809, and the like.
 The coding tree unit syntax 2808 can have a quadtree structure. Specifically, the coding tree unit syntax 2808 can be recursively invoked as a syntax element of the coding tree unit syntax 2808; that is, one coding tree unit can be subdivided by a quadtree. The coding tree unit syntax 2808 also contains a transform unit syntax 2810, which is invoked in each coding tree unit syntax 2808 at a leaf of the quadtree. The transform unit syntax 2810 describes information related to the inverse orthogonal transform, quantization, and the like.
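The recursive quadtree structure described above can be sketched as follows. The split test, the minimum unit size, and the helper names are illustrative assumptions; only the control flow (a unit either recurses into four sub-units or, at a leaf, invokes the transform unit syntax) comes from the text.

```python
# Hedged sketch of the recursive coding tree unit syntax: one unit either
# splits into four sub-units (a recursive call to itself) or, at a quadtree
# leaf, invokes the transform unit syntax.

MIN_SIZE = 8  # hypothetical minimum coding tree unit size

def transform_unit_syntax(x, y, size, leaves):
    # Stand-in for the transform/quantization information of syntax 2810.
    leaves.append((x, y, size))

def coding_tree_unit_syntax(x, y, size, split_decider, leaves):
    if size > MIN_SIZE and split_decider(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):  # recursive call: one unit becomes four
                coding_tree_unit_syntax(x + dx, y + dy, half, split_decider, leaves)
    else:
        # transform unit syntax is invoked only at the quadtree leaves
        transform_unit_syntax(x, y, size, leaves)

leaves = []
# Example: split only the 32x32 root, keeping the four 16x16 children as leaves.
coding_tree_unit_syntax(0, 0, 32, lambda x, y, s: s == 32, leaves)
```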
 FIG. 29 illustrates the slice header syntax 2806 according to the present embodiment. The slice_bipred_intra_flag shown in FIG. 29 is, for example, a syntax element indicating whether the bidirectional intra prediction according to the present embodiment is enabled or disabled for the slice.
 When slice_bipred_intra_flag is 0, the bidirectional intra prediction according to this embodiment is disabled within the slice, and only unidirectional intra prediction is performed. As an example of unidirectional intra prediction, prediction in which IntraBipredFlag[puPartIdx] in FIGS. 8A, 8B, 9A, 9B, and 10 is 0, or the intra prediction specified in H.264, may be performed.
 As one example, when slice_bipred_intra_flag is 1, the bidirectional intra prediction according to the present embodiment is enabled over the entire slice.
 As another example, when slice_bipred_intra_flag is 1, the enabling/disabling of the prediction according to the present embodiment may be specified for each local region within the slice in the syntax of a lower layer (coding tree unit, transform unit, etc.).
 FIG. 30A shows an example of the prediction unit syntax. In the figure, pred_mode indicates the prediction type of the prediction unit, and MODE_INTRA indicates that the prediction type is intra prediction. intra_split_flag is a flag indicating whether or not the prediction unit is further divided into four prediction units. When intra_split_flag is 1, the prediction unit is divided into four prediction units of half the size both vertically and horizontally; when intra_split_flag is 0, the prediction unit is not divided.
 intra_luma_bipred_flag[i] is a flag indicating whether the prediction mode IntraPredMode applied to the prediction unit is a unidirectional intra prediction mode or a bidirectional intra prediction mode. The index i indicates the position of the divided prediction unit: i is 0 when intra_split_flag is 0, and ranges from 0 to 3 when intra_split_flag is 1. This flag is set to the value of IntraBipredFlag of the prediction unit shown in FIGS. 8A, 8B, 9A, 9B, and 10.
 When intra_luma_bipred_flag[i] is 1, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], which identifies the bidirectional intra prediction mode used among the plurality of prepared bidirectional intra prediction modes, is encoded. intra_luma_bipred_mode[i] may be fixed-length encoded according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIGS. 7A to 7D, or may be encoded using a predetermined code table. Further, when the total number of bidirectional intra prediction modes differs for each prediction unit as described above, it may be encoded using a code table that switches according to the total number of bidirectional intra prediction modes indicated for each prediction unit. When intra_luma_bipred_flag[i] is 0, the prediction unit uses unidirectional intra prediction, and predictive encoding is performed from adjacent blocks.
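Fixed-length ("equal length") coding according to the mode count can be sketched as follows. The bit-width rule ceil(log2(N)) is an assumption about what fixed-length coding by IntraBiModeNum means, not the patent's stated formula.

```python
import math

# Hedged sketch of fixed-length coding of intra_luma_bipred_mode according to
# the number of available bidirectional modes (IntraBiModeNum). The
# ceil(log2(N)) bit width is an assumption.

def fixed_length_bits(num_modes):
    # At least one bit, even for a single mode.
    return max(1, math.ceil(math.log2(num_modes)))

def encode_bipred_mode(mode, num_modes):
    assert 0 <= mode < num_modes
    bits = fixed_length_bits(num_modes)
    return format(mode, "0{}b".format(bits))

def decode_bipred_mode(bitstring):
    return int(bitstring, 2)
```

When the mode count changes per prediction unit, the bit width changes with it, which is one way a count-dependent code could be realized.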
 prev_intra_luma_unipred_flag[i] is a flag indicating whether or not the prediction value MostProbable of the prediction mode, calculated from adjacent blocks, is identical to the intra prediction mode of the prediction unit. The details of the MostProbable calculation method will be described later. When prev_intra_luma_unipred_flag[i] is 1, MostProbable and the intra prediction mode IntraPredMode are equal. When prev_intra_luma_unipred_flag[i] is 0, MostProbable and the intra prediction mode IntraPredMode differ, and rem_intra_luma_unipred_mode[i], information specifying which mode other than MostProbable the intra prediction mode IntraPredMode is, is further encoded. rem_intra_luma_unipred_mode[i] may be fixed-length encoded according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIGS. 7A and 7B, or may be encoded using a predetermined code table. rem_intra_luma_unipred_mode[i] is calculated from the intra prediction mode IntraPredMode using the following equation.
Figure JPOXMLDOC01-appb-M000017
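The equation above is given in the source only as an image. A common rule with the stated semantics (rem_intra_luma_unipred_mode never needs to represent the MostProbable value itself) is sketched below as an assumption, not as the patent's exact formula.

```python
# Hedged sketch of the rem_intra_luma_unipred_mode mapping, assuming the
# conventional H.264-style rule: remove MostProbable from the signalled mode
# space on the encoder side, and reinsert it on the decoder side.

def rem_mode_from_intra_pred_mode(intra_pred_mode, most_probable):
    # Encoder side; only called when the mode differs from MostProbable.
    assert intra_pred_mode != most_probable
    if intra_pred_mode < most_probable:
        return intra_pred_mode
    return intra_pred_mode - 1

def intra_pred_mode_from_rem_mode(rem_mode, most_probable):
    # Decoder side: inverse mapping.
    if rem_mode < most_probable:
        return rem_mode
    return rem_mode + 1
```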
 Next, the method for calculating MostProbable, the prediction value of the prediction mode, will be described. MostProbable is calculated according to the following equation.
Figure JPOXMLDOC01-appb-M000018
 Here, Min(x, y) is a function that outputs the smaller of the inputs x and y.
 intraPredModeAL0 and intraPredModeBL0 respectively indicate, as described above, the first unidirectional intra prediction modes of the prediction units adjacent to the left of and above the prediction unit being encoded. When an adjacent prediction unit cannot be referenced because it lies outside the picture or has not yet been encoded, the first unidirectional intra prediction mode of the referenceable prediction unit is used as MostProbable. When neither adjacent prediction unit can be referenced, MostProbable is set to Intra_DC.
 When MostProbable is larger than the number of unidirectional intra prediction modes IntraUniPredModeNum of the prediction unit being encoded, MostProbable is recalculated using the following equation.
Figure JPOXMLDOC01-appb-M000019
 MappedMostProbable() is a table for converting MostProbable; an example is shown in FIG. 31.
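The MostProbable derivation described above can be sketched as follows. The remap table contents are a stand-in for the table of FIG. 31 (which is not reproduced in this text), and the Intra_DC index is a hypothetical value; the Min rule and the unavailability fallbacks come from the text.

```python
INTRA_DC = 2  # hypothetical mode index for Intra_DC

# Illustrative stand-in for the MappedMostProbable() table of FIG. 31;
# the real mapping is defined by the figure, not by these values.
MAPPED_MOST_PROBABLE = {10: 4, 11: 5, 12: 6}

def most_probable(mode_a, mode_b, intra_uni_pred_mode_num):
    # mode_a / mode_b: first unidirectional intra prediction mode of the
    # left / upper neighbour, or None when that neighbour cannot be referenced.
    if mode_a is None and mode_b is None:
        mp = INTRA_DC                      # neither neighbour available
    elif mode_a is None:
        mp = mode_b                        # only the upper neighbour available
    elif mode_b is None:
        mp = mode_a                        # only the left neighbour available
    else:
        mp = min(mode_a, mode_b)           # Min() of the two neighbour modes
    if mp > intra_uni_pred_mode_num:       # recalculate via the mapping table
        mp = MAPPED_MOST_PROBABLE[mp]
    return mp
```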
 <Syntax structure 2>
 Next, another example of the prediction unit syntax is shown in FIG. 30C. pred_mode and intra_split_flag are the same as in the syntax example described above, and their description is therefore omitted. luma_pred_mode_code_type[i] indicates the type of the prediction mode IntraPredMode applied to the prediction unit: 0 (IntraUnipredMostProb) indicates unidirectional intra prediction with an intra prediction mode equal to MostProbable, 1 (IntraUnipredRem) indicates unidirectional intra prediction with an intra prediction mode different from MostProbable, and 2 (IntraBipred) indicates a bidirectional intra prediction mode. FIGS. 32A, 32B, 32C, and 32D show an example of the meaning and bin corresponding to luma_pred_mode_code_type, and of the assignment of the number of modes according to the mode configurations shown in FIG. 7A or 7B, 7C, and 7D. When luma_pred_mode_code_type[i] is 0, the intra prediction mode is the MostProbable mode, so no further information needs to be encoded. When luma_pred_mode_code_type[i] is 1, rem_intra_luma_unipred_mode[i], information specifying which mode other than MostProbable the intra prediction mode IntraPredMode is, is encoded. rem_intra_luma_unipred_mode[i] may be fixed-length encoded according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIG. 7A or 7B, 7C, and 7D, or may be encoded using a predetermined code table. rem_intra_luma_unipred_mode[i] is calculated from the intra prediction mode IntraPredMode using Equation (16). When luma_pred_mode_code_type[i] is 2, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], which identifies the bidirectional intra prediction mode used among the plurality of prepared bidirectional intra prediction modes, is encoded. intra_luma_bipred_mode[i] may be fixed-length encoded according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIG. 7A or 7B, 7C, and 7D, or may be encoded using a predetermined code table. Further, when the total number of bidirectional intra prediction modes differs for each prediction unit as described above, it may be encoded using a code table that switches according to the total number of bidirectional intra prediction modes indicated for each prediction unit.
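The three-way branching on luma_pred_mode_code_type can be sketched on the decoder side as follows. The stream representation, the helper names, and the rem-to-mode inverse mapping (assumed H.264-style) are illustrative assumptions.

```python
# Hedged sketch of decode-side branching on luma_pred_mode_code_type:
# 0 -> MostProbable mode, nothing further is read;
# 1 -> rem_intra_luma_unipred_mode[i] is read;
# 2 -> intra_luma_bipred_mode[i] is read.

IntraUnipredMostProb, IntraUnipredRem, IntraBipred = 0, 1, 2

def parse_luma_pred_mode(code_type, stream, most_probable):
    # `stream` is a list standing in for the remaining syntax elements.
    if code_type == IntraUnipredMostProb:
        return ("uni", most_probable)      # no further information is coded
    if code_type == IntraUnipredRem:
        rem = stream.pop(0)                # rem_intra_luma_unipred_mode[i]
        mode = rem if rem < most_probable else rem + 1  # assumed inverse map
        return ("uni", mode)
    if code_type == IntraBipred:
        return ("bi", stream.pop(0))       # intra_luma_bipred_mode[i]
    raise ValueError("invalid luma_pred_mode_code_type")
```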
 The above is the syntax configuration according to the present embodiment.
 <Syntax structure 3>
 FIG. 30D shows still another example of the prediction unit syntax. This example shows, based on the prediction unit syntax of FIG. 30A, the syntax for switching, within the prediction unit being encoded, between enabling bidirectional intra prediction and disabling it so that only conventional unidirectional intra prediction can be used. When bidirectional intra prediction is disabled and only conventional unidirectional intra prediction can be used, the table shown in FIG. 4 may be used instead of FIGS. 8A and 8B, or the entries with IntraPredMode of 33 or more in FIG. 8A or 8B may be ignored. FIG. 4 is obtained from FIG. 8A or 8B by deleting IntraPredTypeL1 and IntraPredAngleIdL1, which indicate information on the second prediction mode for bidirectional intra prediction, and by deleting the unnecessary entries with IntraPredMode of 33 or more. The same relationship that holds between FIG. 4 and FIG. 8A or 8B can be applied to FIGS. 9A, 9B, and 10, which correspond to FIG. 8A or 8B, and a table corresponding to FIG. 4 may be used.
 pred_mode and intra_split_flag are the same as in the syntax example described above, and their description is therefore omitted.
 intra_bipred_flag is a flag indicating whether or not bidirectional intra prediction can be used within the prediction unit being encoded. When intra_bipred_flag is 0, bidirectional intra prediction is not used within the prediction unit being encoded. Even when intra_split_flag is 1, that is, when the prediction unit being encoded is further divided into four, bidirectional intra prediction is not used in any of the prediction units, and only unidirectional intra prediction is enabled.
 When intra_bipred_flag is 1, bidirectional intra prediction can be used within the prediction unit being encoded. Even when intra_split_flag is 1, that is, when the prediction unit being encoded is further divided into four, bidirectional intra prediction can be selected in addition to unidirectional intra prediction in all of the prediction units.
 In regions where prediction is comparatively easy and bidirectional intra prediction is unnecessary (for example, flat regions), encoding intra_bipred_flag as 0 to disable bidirectional intra prediction reduces the code amount required to encode the bidirectional intra prediction mode, thereby improving coding efficiency.
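The gating effect of intra_bipred_flag on the (possibly four) partitions can be sketched as follows; the return representation is an illustrative assumption.

```python
# Hedged sketch: intra_bipred_flag = 0 means no bidirectional syntax elements
# exist for any partition of the prediction unit, even when intra_split_flag
# indicates a four-way split; intra_bipred_flag = 1 makes both kinds
# selectable in every partition.

def allowed_prediction_kinds(intra_bipred_flag, intra_split_flag):
    parts = 4 if intra_split_flag else 1
    if intra_bipred_flag:
        return [("uni", "bi")] * parts  # both kinds selectable per partition
    return [("uni",)] * parts           # unidirectional only; flag bits saved
```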
 <Syntax structure 4>
 FIG. 30E shows still another example of the prediction unit syntax. This example shows, based on the prediction unit syntax of FIG. 30C, the syntax for switching, within the prediction unit being encoded, between enabling bidirectional intra prediction and disabling it so that only conventional unidirectional intra prediction can be used. intra_bipred_flag is a flag indicating whether or not bidirectional intra prediction can be used within the prediction unit being encoded; it is the same as the intra_bipred_flag described above, and its description is therefore omitted.
 (First modification)
 <First modification of the intra prediction unit>
 As a first modification of the intra prediction unit 109, it may be combined with the adaptive reference pixel filtering described in JCTVC-B205_draft002, section 5.2.1 "Intra prediction process for luma samples", JCT-VC 2nd Meeting, Geneva, July 2010. FIG. 33 shows the intra prediction unit 109 when adaptive reference pixel filtering is used. It differs from the intra prediction unit 109 shown in FIG. 6 in that a reference pixel filter unit 3301 is added. The reference pixel filter unit 3301 receives the reference image signal 159 and the prediction mode 651, performs the adaptive filtering described below, and outputs a filtered reference image signal 3351. The filtered reference image signal 3351 is input to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602. The configuration and processing other than the reference pixel filter unit 3301 are the same as those of the intra prediction unit 109 shown in FIG. 6, and their description is therefore omitted.
 Next, the reference pixel filter unit 3301 will be described. The reference pixel filter unit 3301 determines whether or not to filter the reference pixels used for intra prediction according to the reference pixel filter flag and the intra prediction mode included in the prediction mode 651. The reference pixel filter flag indicates whether or not to filter the reference pixels when the intra prediction mode IntraPredMode has a value other than "Intra_DC". When the reference pixel filter flag is 1, the reference pixels are filtered; when it is 0, they are not. When IntraPredMode is "Intra_DC", the reference pixels are not filtered and the reference pixel filter flag is set to 0. When the reference pixel filter flag is 1, the filtered reference image signal 3351 is calculated by the following filtering, where p[x,y] denotes a reference pixel before filtering and pf[x,y] denotes a reference pixel after filtering. x and y indicate the position of the reference pixel relative to the upper-left pixel position of the prediction unit, taken as x = 0, y = 0, and PuPartSize indicates the size (in pixels) of the prediction unit.
Figure JPOXMLDOC01-appb-M000020
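The filtering equation above appears only as an image in the source. A hedged sketch using the familiar [1, 2, 1] reference-sample smoothing (as in JCTVC adaptive reference pixel filtering) is shown below over a one-dimensional pixel array; the patent's exact taps and two-dimensional boundary handling may differ.

```python
# Hedged sketch of reference pixel filtering, assuming a [1, 2, 1] smoothing
# filter with rounding; endpoints are left unfiltered for simplicity.

def filter_reference_pixels(p):
    # p: 1-D list of reference pixels before filtering.
    pf = list(p)
    for i in range(1, len(p) - 1):
        pf[i] = (p[i - 1] + 2 * p[i] + p[i + 1] + 2) >> 2
    return pf

def maybe_filter(p, intra_pred_mode, ref_filter_flag, INTRA_DC=2):
    # Per the text: no filtering for Intra_DC (the flag is forced to 0),
    # and no filtering when the reference pixel filter flag is 0.
    if intra_pred_mode == INTRA_DC or ref_filter_flag == 0:
        return list(p)
    return filter_reference_pixels(p)
```

Note that the smoothing leaves a linear ramp unchanged, so it mainly attenuates abrupt steps in the reference samples.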
 <Syntax structure 5>
 FIGS. 34A and 34B show prediction unit syntax structures for the case where the adaptive reference pixel filter is applied. FIG. 34A adds the syntax intra_luma_filter_flag[i] for the adaptive reference pixel filter to FIG. 30A, and FIG. 34B adds it to FIG. 30C. intra_luma_filter_flag[i] is additionally encoded when the intra prediction mode IntraPredMode[i] is other than Intra_DC. When this flag is 0, the above filtering of the reference pixels is not performed; when intra_luma_filter_flag[i] is 1, the above filtering of the reference pixels is applied.
 In the above example, intra_luma_filter_flag[i] is encoded when the intra prediction mode IntraPredMode[i] is other than Intra_DC. As another example, intra_luma_filter_flag[i] need not be encoded when IntraPredMode[i] is in the range 0 to 2; in this case, intra_luma_filter_flag[i] is set to 0.
 The intra_luma_filter_flag[i] described above may also be added, with the same meaning, to the other syntax structures shown in FIGS. 30B, 30D, and 30E.
 (Second modification)
 <Second modification of the intra prediction unit>
 As a second modification of the intra prediction unit 109, it may be used in combination with the composite intra prediction described in JCTVC-B205_draft002, section 9.6 "Combined Intra Prediction", JCT-VC 2nd Meeting, Geneva, July 2010. The composite intra prediction in that document obtains a prediction value by taking a weighted average of the result of the unidirectional intra prediction described above and the average value of the pixels adjacent to the prediction pixel on the left, above, and upper left. When the decoded image signal 157 has been calculated in the moving image decoding apparatus 5000 described later or in the image encoding apparatus 100, decoded pixels can be used as the pixels adjacent on the left, above, and upper left. On the other hand, before the decoded image signal 157 is calculated in the image encoding apparatus 100, decoded pixels cannot be used, so the input image signal 151 is used for the pixels adjacent on the left, above, and upper left. FIG. 35 shows the positions of the adjacent decoded pixels A (left), B (above), and C (upper left) used for predicting the prediction target pixel X. Composite intra prediction is therefore a so-called open-loop prediction method in which the prediction values differ between the image encoding apparatus 100 and the moving image decoding apparatus 5100.
 FIG. 37 shows a block diagram of the intra prediction unit 109 when combined with composite intra prediction. It differs from the intra prediction unit 109 shown in FIG. 6 in that a composite intra predicted image generation unit 3601, a selection switch 3602, and a decoded image buffer 3701 are added.
 When bidirectional intra prediction and composite intra prediction are combined, the selection switch 604 first switches between the outputs of the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602 according to the prediction mode information controlled by the encoding control unit 115. Hereinafter, the predicted image signal 161 output here is referred to as the directional predicted image signal 161.
 The directional predicted image signal 161 is then input to the composite intra predicted image generation unit 3601, which generates the predicted image signal 161 for composite intra prediction; the composite intra predicted image generation unit 3601 is described later. The selection switch 3602 then switches between the predicted image signal 161 of composite intra prediction and the directional predicted image signal, according to the composite intra prediction application flag in the prediction mode information controlled by the encoding control unit 115, and the final predicted image signal 161 of the intra prediction unit 109 is output. When the composite intra prediction application flag is 1, the predicted image signal 161 output from the composite intra predicted image generation unit 3601 becomes the final predicted image signal 161; when the flag is 0, the directional predicted image signal 161 becomes the finally output predicted image signal 161. The predicted image signal output from the composite intra predicted image generation unit 3601 is also called the sixth predicted image signal.
 When the predicted image signal 161 is generated by the composite intra predicted image generation unit 3601, the addition unit 106 adds it, pixel by pixel, to the separately decoded restored prediction error signal 156 to generate the decoded image signal 157 in units of pixels, which is stored in the decoded image buffer 3701. The stored per-pixel decoded image signal 157 is input to the composite intra predicted image generation unit 3601 as the reference pixels 3751, and is used, as the adjacent pixels 3751 shown in FIG. 38, for the pixel-level prediction described later.
 Next, the composite intra predicted image generation unit 3601 will be described with reference to FIG. 38. The composite intra predicted image generation unit 3601 includes a pixel-level prediction signal generation unit 3801 and a composite intra prediction calculation unit 3802. The pixel-level prediction signal generation unit 3801 receives the reference pixels 3751 as the adjacent pixels 3751 and outputs a pixel-level prediction signal 3851 by predicting the prediction target pixel X from the adjacent pixels. Specifically, the pixel-level prediction signal 3851 (X) of the prediction target pixel is calculated from A, B, and C, which denote the adjacent pixels 3751, using Equation (21).
Figure JPOXMLDOC01-appb-M000021
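Equation (21) is given only as an image in the source, and the text notes that the coefficients on A, B, and C may vary. Purely for illustration, the sketch below assumes the common gradient predictor X = A + B - C with clipping to the sample range; this is not the patent's stated formula.

```python
# Hedged sketch of the pixel-level prediction from the neighbours
# A (left), B (above), C (upper left), assuming the gradient predictor
# X = A + B - C, clipped to the valid sample range.

def pixel_level_prediction(a, b, c, bit_depth=8):
    x = a + b - c
    return max(0, min((1 << bit_depth) - 1, x))
```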
 The coefficients on A, B, and C may take other values.
 The composite intra prediction calculation unit 3802 takes a weighted average of the directional predicted image signal 161 (X') and the pixel-level prediction signal 3851 (X), and outputs the final predicted image signal 161 (P). Specifically, the following equation is used.
Figure JPOXMLDOC01-appb-M000022
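The weighted-average equation above is given only as an image. Since W is stated to be an integer in the range 0 to 32, a natural normalization by 32 with rounding is sketched below as an assumption about the elided formula, not the patent's exact definition.

```python
# Hedged sketch of the composite prediction, assuming
# P = (W * X' + (32 - W) * X + 16) >> 5, with W an integer in [0, 32].

def composite_prediction(x_dir, x_pel, w):
    # x_dir: directional prediction X'; x_pel: pixel-level prediction X.
    assert 0 <= w <= 32
    return (w * x_dir + (32 - w) * x_pel + 16) >> 5
```

With W = 32 the output equals the directional prediction alone, and with W = 0 it equals the pixel-level prediction alone.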
 Here, W is the weighting coefficient of the weighted average of the directional predicted image signal 161 (X') and the pixel-level prediction signal 3851 (X), an integer value in the range 0 to 32.
 When the predicted image signal 161 is generated using this composite intra prediction and the prediction error signal 152 and the decoded image signal 157 are then generated, the decoded image signal 157 may take different values in encoding and decoding. Therefore, after all the decoded image signals 157 within the prediction syntax being encoded have been generated, the composite intra prediction described above is executed again using the decoded image signal 157 as the adjacent pixels; this produces the same predicted image signal 161 as in decoding, and adding it to the prediction error signal 152 makes it possible to generate a decoded image signal 157 identical to that of decoding.
The above describes the embodiment in which composite intra prediction is combined.
(Third Modification)
<Third Modification of Intra Prediction Unit>
The weighting factor W may also be switched according to the position of the prediction pixel within the prediction unit. In general, a prediction image signal generated by unidirectional or bidirectional intra prediction derives its prediction values from spatially adjacent, already-encoded reference pixels located above or to the left, so the absolute value of the prediction error tends to increase as the distance from the reference pixels increases. Prediction accuracy can therefore be improved by setting the weighting factor of the direction prediction image signal 161 larger when the prediction pixel is close to the reference pixels, and smaller when it is farther away.
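A position-dependent weight of this kind might be sketched as follows; the specific weight values and the distance threshold are illustrative assumptions, not values given in the text.

```python
def directional_weight(px, py, w_near=24, w_far=8, near_limit=1):
    # Reference pixels lie in the row above (y = -1) and the column to
    # the left (x = -1) of the prediction unit, so the distance of pixel
    # (px, py) to the nearest reference pixel is min(px, py) + 1.
    # A larger W (weight of the directional prediction signal) is used
    # near the reference pixels, a smaller W farther away.
    dist = min(px, py) + 1
    return w_near if dist <= near_limit else w_far
```

A pixel in the top row or left column of the prediction unit thus keeps a large directional weight, while interior pixels lean more on the pixel level prediction.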
On the other hand, this composite intra prediction generates the prediction error signal using the input image signal at encoding time. Since the pixel level prediction signal 3851 is then derived from the input image signal, its prediction accuracy remains higher than that of the direction prediction image signal 161 even when the spatial distance between the reference pixel position and the prediction pixel position becomes large. However, if the weighting factor of the direction prediction image signal 161 is simply made larger near the reference pixels and smaller farther away, the prediction error at distant pixels becomes smaller, but a discrepancy arises between the prediction values at encoding and at local decoding, which lowers prediction accuracy. Therefore, particularly when the quantization parameter is large, setting the value of W smaller as the spatial distance between the reference pixel position and the prediction pixel position increases suppresses the loss of coding efficiency caused by the discrepancy that occurs in such an open-loop configuration.
<Syntax Structure 6>
FIGS. 39A and 39B show the prediction unit syntax structure used when composite intra prediction is performed. FIG. 39A differs from FIG. 30A in that the syntax element combined_intra_pred_flag, which switches composite intra prediction on and off, is added; this flag is equivalent to the composite intra prediction application flag described above. Similarly, FIG. 39B adds combined_intra_pred_flag to FIG. 30C. When combined_intra_pred_flag is 1, the selection switch 3602 shown in FIG. 37 is connected to the output of the composite intra prediction image generation unit 3601. When combined_intra_pred_flag is 0, the selection switch 3602 is connected to the output of whichever of the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602 the selection switch 604 is connected to.
For the other syntax structures shown in FIGS. 30B, 30D, and 30E, the intra_luma_filter_flag[i] described above may likewise be added with the same meaning.
Furthermore, this embodiment may be combined with the second modification of the intra prediction unit. This concludes the description of the alternative embodiments of the intra prediction unit 109.
According to the first embodiment described above, highly efficient intra prediction can be realized. Coding efficiency therefore improves, and with it the subjective image quality.
(Second Embodiment)
<Moving Image Encoding Device - Second Embodiment>
The moving image encoding device according to the second embodiment differs from that of the first embodiment in the details of the orthogonal transform and the inverse orthogonal transform. In the following description, parts identical to the first embodiment are given the same reference numerals, and the description focuses on the differences. The moving image decoding device corresponding to the encoding device of this embodiment is described in the fifth embodiment.
FIG. 40 is a block diagram showing the moving image encoding device according to the second embodiment. It differs from the encoding device of the first embodiment in that a transform selection unit 4001 and a coefficient order control unit 4002 are added, and in the internal structures of the orthogonal transform unit 102 and the inverse orthogonal transform unit 105. The processing performed by the encoding device of FIG. 40 is described below.
First, the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 will be described with reference to FIGS. 41 and 42, respectively.
<Orthogonal Transform Unit 102>
The orthogonal transform unit 102 in FIG. 41 includes a first orthogonal transform unit 4101, a second orthogonal transform unit 4102, an Nth orthogonal transform unit 4103, and a transform selection switch 4104. Although an example with N types of orthogonal transform units is shown here, there may be multiple transform sizes for the same orthogonal transform method, multiple orthogonal transform units performing different transform methods, or a mixture of both. For example, the first orthogonal transform unit 4101 may be set to a 4×4 DCT, the second orthogonal transform unit 4102 to an 8×8 DCT, and the Nth orthogonal transform unit 4103 to a 16×16 DCT; alternatively, the first orthogonal transform unit 4101 may be a 4×4 DCT, the second orthogonal transform unit 4102 a 4×4 DST (discrete sine transform), and the Nth orthogonal transform unit 4103 an 8×8 KLT (Karhunen-Loeve transform). A transform that is not orthogonal may also be selected, and a single transform may be used, in which case N = 1.
First, the transform selection switch 4104 will be described. The transform selection switch 4104 selects the destination of the output of the subtraction unit 101 according to the transform selection information 4051. The transform selection information 4051 is one of the pieces of information controlled by the encoding control unit 115, and is set by the transform selection unit 4001 according to the prediction information 160. In H.264, for example, a 4×4 DCT is set for intra prediction of a 4×4 pixel block (prediction unit), and an 8×8 DCT for intra prediction of an 8×8 pixel block (prediction unit). In this embodiment, when the transform selection information 4051 indicates the first orthogonal transform, the output of the switch is connected to the first orthogonal transform unit 4101; when it indicates the second orthogonal transform, the output is connected to the second orthogonal transform unit 4102.
Next, the processing of the first orthogonal transform unit 4101 through the Nth orthogonal transform unit 4103 will be described. In this embodiment, an example is described in which one of the N orthogonal transform units uses the DCT and the others use the KLT (Karhunen-Loeve transform). Here, the first orthogonal transform unit 4101 performs the DCT, while the remaining orthogonal transform units 4102 and 4103 perform the KLT.
<Inverse Orthogonal Transform Unit 105>
The inverse orthogonal transform unit 105 in FIG. 42 includes a first inverse orthogonal transform unit 4201, a second inverse orthogonal transform unit 4202, an Nth inverse orthogonal transform unit 4203, and a transform selection switch 4204. First, the transform selection switch 4204 will be described. The transform selection switch 4204 selects the destination of the output of the inverse quantization unit 104 according to the input transform selection information 4051. The transform selection information 4051 is one of the pieces of information controlled by the encoding control unit 115, and is set by the transform selection unit 4001 according to the prediction information 160.
When the transform selection information 4051 indicates the first orthogonal transform, the output of the switch is connected to the first inverse orthogonal transform unit 4201; when it indicates the second orthogonal transform, to the second inverse orthogonal transform unit 4202; and likewise, when it indicates the Nth orthogonal transform, to the Nth inverse orthogonal transform unit 4203. The transform selection information 4051 set in the orthogonal transform unit 102 and that set in the inverse orthogonal transform unit 105 are identical, so the inverse orthogonal transform corresponding to the transform performed by the orthogonal transform unit 102 is performed synchronously by the inverse orthogonal transform unit 105. That is, the first inverse orthogonal transform unit 4201 performs the inverse discrete cosine transform (hereinafter, IDCT), and the second inverse orthogonal transform unit 4202 and the Nth inverse orthogonal transform unit 4203 perform inverse transforms based on the KLT (Karhunen-Loeve transform). Although the IDCT and the like are used here as examples, orthogonal transforms such as the Hadamard transform or the discrete sine transform may be used, as may non-orthogonal transforms. In any case, the inverse transform corresponding to the transform applied by the transform unit 102 is performed.
<Transform Selection Unit 4001>
Next, the transform selection unit 4001 shown in FIG. 40 will be described. The transform selection unit 4001 receives the prediction information 160, which is controlled by the encoding control unit 115 and includes the prediction mode set by the prediction selection unit 112. Based on the prediction information 160, the transform selection unit 4001 sets MappedTransformIdx information indicating which orthogonal transform is used for which prediction mode. FIG. 43 shows the transform selection information 4051 (MappedTransformIdx) for intra prediction; an example with N = 9 is shown here. For DC prediction, corresponding to IntraPredModeLX = 2, the first orthogonal transform unit 4101 and the corresponding first inverse orthogonal transform unit 4201 are selected. By mapping each prediction mode to a reference prediction mode with a similar prediction angle in this way, the circuit scale of the orthogonal and inverse orthogonal transforms in a hardware implementation can be reduced compared with preparing an orthogonal transformer and an inverse orthogonal transformer for every prediction mode. When bidirectional intra prediction is selected, the two modes IntraPredModeL0 and IntraPredModeL1 are derived first, and then MappedTransformIdx is derived from FIG. 43 using the prediction mode corresponding to IntraPredModeL0. Although an example with N = 9 is shown in this embodiment, the value of N may be chosen as the optimal trade-off between coding performance and hardware circuit scale.
<Coefficient Order Control Unit 4002>
Next, the coefficient order control unit 4002 will be described. FIG. 44 shows a block diagram of the coefficient order control unit 4002. The coefficient order control unit 4002 includes a coefficient order selection switch 4404, a first coefficient order transform unit 4401, a second coefficient order transform unit 4402, and an Nth coefficient order transform unit 4403. The coefficient order selection switch 4404 switches the output of the switch among the coefficient order transform units 4401 to 4403 according to, for example, the MappedTransformIdx shown in FIG. 43. The N types of coefficient order transform units 4401 to 4403 convert the two-dimensional data of the quantized transform coefficients 154 produced by the quantization unit 103 into one-dimensional data. In H.264, for example, two-dimensional data is converted into one-dimensional data using a zigzag scan.
When an orthogonal transform that takes the intra prediction direction into account is used, the quantized transform coefficients 154, obtained by quantizing the orthogonally transformed coefficients 153, exhibit a biased distribution of non-zero coefficient positions within the block. This tendency differs for each intra prediction direction, yet when different videos are encoded, the non-zero coefficient distribution for the same prediction direction tends to be similar. Therefore, when converting the two-dimensional data into one-dimensional data (2D-1D conversion), entropy coding the coefficients preferentially from positions with a high probability of non-zero coefficients reduces the amount of information needed to encode the transform coefficients. By learning the non-zero coefficient occurrence probabilities in advance, based on information indicating the prediction direction such as the prediction mode contained in the prediction information 160, the code amount of the transform coefficients can be reduced without increasing the computational load relative to, for example, H.264.
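The 2D-1D conversion itself can be sketched as follows. The zigzag order is the H.264-style default; any prediction-direction-specific orders would in practice be learned offline as described above and passed in as `order`.

```python
def zigzag_order(n):
    # H.264-style zigzag scan order for an n x n block: traverse the
    # anti-diagonals (constant y + x), alternating direction on each.
    return sorted(((y, x) for y in range(n) for x in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def scan_2d_to_1d(block, order):
    # 2D-1D conversion: read the quantized transform coefficients out of
    # the 2D block along the given scan order into a 1D sequence.
    return [block[y][x] for (y, x) in order]
```

With coefficient energy concentrated in the top-left corner, the non-zero values appear first in the 1D sequence, which is what the subsequent run-length entropy coding exploits.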
As yet another example, the coefficient order control unit 4002 may dynamically update the scan order used in the 2D-1D conversion. A coefficient order control unit 4002 that performs such an operation is illustrated in FIG. 45. In addition to the configuration of FIG. 44, it includes an occurrence frequency counting unit 4501 and an updating unit 4502. The coefficient order transform units 4401 to 4403 are identical to those described above, except that their scan order is updated by the updating unit 4502.
The occurrence frequency counting unit 4501 creates, for each prediction mode, a histogram 4552 of the number of occurrences of non-zero coefficients at each element position of the quantized transform coefficient sequence 4052, and feeds the created histogram 4552 to the updating unit 4502.
The updating unit 4502 updates the coefficient order based on the histogram 4552 at predetermined timings, for example when the encoding of a coding tree unit is finished, or when the encoding of one line within a coding tree unit is finished.
Specifically, the updating unit 4502 refers to the histogram 4552 and updates the coefficient order for prediction modes having an element whose non-zero coefficient count has reached a threshold, for example 16 or more occurrences. Applying such a threshold to the occurrence count makes the coefficient order update a global operation, which makes convergence to a merely local optimum less likely.
For each prediction mode to be updated, the updating unit 4502 sorts the elements in descending order of non-zero coefficient frequency. The sorting can be realized with existing algorithms such as bubble sort or quicksort. The updating unit 4502 then supplies the updated coefficient order 4551, indicating the sorted element order, to the coefficient order transform units 4401 to 4403 corresponding to the prediction modes being updated.
When the updated coefficient order 4551 is supplied, each coefficient order transform unit performs the 2D-1D conversion according to the updated scan order. When the scan order is updated dynamically, the initial scan order of each 2D-1D conversion unit must be defined in advance. By dynamically updating the scan order in this way, stably high coding efficiency can be expected even when the distribution of non-zero coefficients in the quantized transform coefficients 154 changes with the properties of the prediction image, the quantization information (quantization parameter), and so on. In particular, the code amount generated by run-length coding in the entropy encoding unit 113 can be suppressed.
Note that the syntax configuration in this embodiment is the same as in the first embodiment.
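The histogram-based update can be sketched for a single prediction mode as follows. The threshold of 16 and the descending-frequency sort come from the description above; the class structure itself is a simplification of units 4501 and 4502, not the patent's exact design.

```python
class ScanOrderUpdater:
    # Simplified single-prediction-mode sketch of the occurrence
    # frequency counting unit 4501 and the updating unit 4502.

    def __init__(self, num_coeffs, threshold=16):
        self.histogram = [0] * num_coeffs     # non-zero counts per position
        self.order = list(range(num_coeffs))  # predefined initial order
        self.threshold = threshold

    def count(self, coeff_seq_1d):
        # Count non-zero occurrences at each position of the 1D
        # quantized transform coefficient sequence.
        for i, c in enumerate(coeff_seq_1d):
            if c != 0:
                self.histogram[i] += 1

    def maybe_update(self):
        # Update only once some position's count reaches the threshold,
        # so the reordering is driven by global rather than local trends.
        if max(self.histogram) < self.threshold:
            return False
        # Stable sort in descending order of non-zero frequency.
        self.order.sort(key=lambda i: -self.histogram[i])
        return True
```

Python's built-in sort is stable, so positions with equal counts keep their previous relative order, which keeps the initial predefined order as the tie-breaker.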
As a modification of this embodiment, the transform selection unit 4001 may select MappedTransformIdx independently of the prediction information 160. In this case, information indicating which of the nine orthogonal transforms (and inverse orthogonal transforms) was used is set in the entropy encoding unit 113 and encoded together with the quantized transform coefficient sequence 4052. FIG. 46 shows an example of the syntax in this modification; the directional_transform_idx shown in the syntax indicates which of the N orthogonal transforms was selected.
According to the second embodiment described above, highly efficient orthogonal and inverse orthogonal transforms can be realized while reducing the difficulty of hardware and software implementation. Coding efficiency therefore improves, and with it the subjective image quality.
(Third Embodiment)
<Moving Image Encoding Device - Third Embodiment>
As an embodiment of the orthogonal transform unit 102, it may be combined with the rotational transform described in JCTVC-B205_draft002, Section 5.3.5.2, "Rotational transformation process", JCT-VC 2nd Meeting, Geneva, July 2010. The rotational transform is a technique that further increases the compaction of the transform coefficients by applying an additional rotational transform after an orthogonal transform using the DCT.
<Orthogonal Transform Unit 102>
FIG. 47 shows a block diagram of the orthogonal transform unit 102 in this embodiment. The orthogonal transform unit 102 has new processing units, namely a first rotational transform unit 4701, a second rotational transform unit 4702, an Nth rotational transform unit 4703, and a discrete cosine transform unit 4704, together with the existing transform selection switch 4104. The discrete cosine transform unit 4704 performs, for example, the DCT, and the transform coefficients after the DCT are input to the transform selection switch 4104. The transform selection switch 4104 connects its output to one of the first rotational transform unit 4701, the second rotational transform unit 4702, and the Nth rotational transform unit 4703 according to the transform selection information 4051, for example switching through them in order under the control of the encoding control unit 115. The rotational transform units 4701 to 4703 apply a rotational transform to the respective transform coefficients using predetermined rotation matrices, and the transform coefficients 153 after rotational transformation are output. This transform is invertible.
Here, which rotation matrix to use may be determined using the coding cost shown in Equations (1) and (2) above, or a table associating prediction modes with transform numbers, such as the one shown in FIG. 43, may be prepared in advance and used for the selection. Although the example here applies the rotational transform before the quantization unit 103, the rotational transform may instead be applied to the quantized transform coefficients 154 after quantization, in which case the orthogonal transform unit 102 performs only the DCT.
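The principle of the rotational transform can be illustrated with a single Givens rotation applied to a pair of DCT coefficients; the actual technique in JCTVC-B205 uses predefined 4×4/8×8 rotation matrices, so the pairing and angle here are purely illustrative. Because rotation matrices are orthogonal, the transform preserves coefficient energy and is invertible, matching the statement above.

```python
import math

def rotate_pair(c0, c1, angle):
    # Apply a 2x2 rotation (the smallest rotational transform) to a pair
    # of post-DCT coefficients. Energy (c0^2 + c1^2) is preserved.
    cs, sn = math.cos(angle), math.sin(angle)
    return cs * c0 - sn * c1, sn * c0 + cs * c1

def inverse_rotate_pair(c0, c1, angle):
    # The rotation matrix is orthogonal, so the inverse rotation is the
    # rotation by -angle (i.e., multiplication by the transposed matrix).
    return rotate_pair(c0, c1, -angle)
```

A suitably chosen angle can rotate more of the pair's energy into one coefficient, which is the compaction effect the rotational transform targets.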
<Inverse Orthogonal Transform Unit 105>
FIG. 48 is a block diagram of the inverse orthogonal transform unit 105 in this embodiment. The inverse orthogonal transform unit 105 has new processing units, namely a first inverse rotational transform unit 4801, a second inverse rotational transform unit 4802, an Nth inverse rotational transform unit 4803, and an inverse discrete cosine transform unit 4804, together with the existing transform selection switch 4204. The restored transform coefficients 155 obtained after inverse quantization are input to the transform selection switch 4204, which connects its output to one of the first inverse rotational transform unit 4801, the second inverse rotational transform unit 4802, and the Nth inverse rotational transform unit 4803 according to the transform selection information 4051. The inverse rotational transform corresponding to the rotational transform used in the orthogonal transform unit 102 is then applied by the selected inverse rotational transform unit 4801 to 4803, and the result is output to the inverse discrete cosine transform unit 4804. The inverse discrete cosine transform unit 4804 applies, for example, the IDCT to the input signal to restore the restored prediction error signal 156. Although the IDCT is used here as an example, orthogonal transforms such as the Hadamard transform or the discrete sine transform may be used, as may non-orthogonal transforms. In any case, the inverse transform corresponding to the transform applied by the transform unit 102 is performed.
The syntax used in this embodiment is shown in FIG. 49. The rotational_transform_idx shown in the syntax indicates the number of the rotation matrix to be used.
According to the third embodiment described above, highly efficient orthogonal and inverse orthogonal transforms can be realized while reducing the difficulty of hardware and software implementation. Coding efficiency therefore improves, and with it the subjective image quality.
(Fourth Embodiment)
The fourth embodiment relates to a moving image decoding device. The moving image encoding device corresponding to the decoding device of this embodiment is as described in the first embodiment. That is, the decoding device of this embodiment decodes, for example, encoded data generated by the encoding device of the first embodiment.
As shown in FIG. 50, the moving image decoding device according to this embodiment includes an input buffer 5001, an entropy decoding unit 5002, an inverse quantization unit 5003, an inverse orthogonal transform unit 5004, an addition unit 5005, a loop filter 5006, a reference image memory 5007, an intra prediction unit 5008, an inter prediction unit 5009, a prediction selection switch 5010, an output buffer 5011, a decoding control unit 5012, and an intra prediction mode memory 5013.
The moving image decoding device of FIG. 50 decodes the encoded data 5051 accumulated in the input buffer 5001, accumulates the decoded image 5060 in the output buffer 5011, and outputs it as the output image. The encoded data 5051 is output, for example, from the moving image encoding device of FIG. 1, and is temporarily accumulated in the input buffer 5001 via a storage system or transmission system (not shown).
The entropy decoding unit 5002 interprets the encoded data 5051 on the basis of the syntax for each frame or field in order to decode it. The entropy decoding unit 5002 sequentially entropy-decodes the code string of each syntax element and reproduces the coding parameters of the encoding target block, such as the prediction information 5059 including prediction mode information and the quantized transform coefficients 5052. The coding parameters are the parameters necessary for decoding, such as the prediction information 5059, information on the transform coefficients, and information on quantization.
The inverse quantization unit 5003 inversely quantizes the quantized transform coefficients 5052 from the entropy decoding unit 5002 to obtain restored transform coefficients 5053. Specifically, the inverse quantization unit 5003 performs inverse quantization in accordance with the information on quantization decoded by the entropy decoding unit 5002. The inverse quantization unit 5003 inputs the restored transform coefficients 5053 to the inverse orthogonal transform unit 5004.
The inverse orthogonal transform unit 5004 performs, on the restored transform coefficients 5053 from the inverse quantization unit 5003, an inverse orthogonal transform corresponding to the orthogonal transform performed on the encoding side, and obtains a restored prediction error signal 5054. The inverse orthogonal transform unit 5004 inputs the restored prediction error signal 5054 to the addition unit 5005.
The addition unit 5005 adds the restored prediction error signal 5054 and the corresponding predicted image signal 5058 to generate a decoded image signal 5055. The decoded image signal 5055 is input to the loop filter 5006. The loop filter 5006 applies a deblocking filter, a Wiener filter, or the like to the input decoded image signal 5055 to generate a filtered image signal 5056. The generated filtered image signal 5056 is temporarily accumulated in the output buffer 5011 for the output image and is also saved in the reference image memory 5007 for use as the reference image signal 5057. The filtered image signal 5056 saved in the reference image memory 5007 is referred to, as the reference image signal 5057, by the intra prediction unit 5008 and the inter prediction unit 5009 in units of frames or fields as necessary. The filtered image signal 5056 temporarily accumulated in the output buffer 5011 is output according to the output timing managed by the decoding control unit 5012.
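The per-block flow just described (inverse quantization, inverse orthogonal transform, addition of the prediction, loop filtering) can be sketched as below. This is a minimal illustration of the data flow only; the function names are hypothetical stand-ins for the units in FIG. 50, not the actual apparatus interfaces.

```python
# Illustrative sketch of the per-block decoding flow of FIG. 50.
# All component functions are hypothetical placeholders.

def decode_block(coded, predict, dequantize, inv_transform, loop_filter):
    """Reconstruct one block: coefficients -> residual -> add prediction -> filter."""
    coeffs = dequantize(coded)                       # inverse quantization unit 5003
    residual = inv_transform(coeffs)                 # inverse orthogonal transform unit 5004
    pred = predict()                                 # intra/inter prediction via switch 5010
    recon = [r + p for r, p in zip(residual, pred)]  # addition unit 5005
    return loop_filter(recon)                        # loop filter 5006

# Toy usage with identity transforms and a flat prediction:
out = decode_block(
    coded=[1, 2, 3],
    predict=lambda: [10, 10, 10],
    dequantize=lambda c: c,
    inv_transform=lambda c: c,
    loop_filter=lambda x: x,
)
```

With these identity stand-ins, `out` is simply the residual plus the prediction, which is the essential reconstruction step performed by the addition unit 5005.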
The intra prediction mode memory 5013 has the same function as the intra prediction mode memory 116 shown in FIG. 1. It accumulates the intra prediction mode information 5061 applied to prediction units whose decoding has been completed, and is referred to for the reference intra prediction mode information 5062 each time the intra prediction unit 5008 generates bidirectional prediction mode information as necessary.

The intra prediction unit 5008, the inter prediction unit 5009, and the prediction selection switch 5010 are substantially the same as or similar to the intra prediction unit 109, the inter prediction unit 110, and the selection switch 111 in FIG. 1. The intra prediction unit 5008 and the intra prediction unit 109 perform intra prediction using the reference intra prediction mode information 5062 saved in the intra prediction mode memory 5013 and the reference intra prediction mode information 164 saved in the intra prediction mode memory 116, respectively. In H.264, for example, an intra predicted image is generated by pixel padding (copying, or copying after interpolation) along a prediction direction such as the vertical or horizontal direction, using already-encoded reference pixel values adjacent to the prediction target block. FIG. 5(a) shows the prediction directions of intra prediction in H.264. FIG. 5(b) shows the positional relationship between the reference pixels and the encoding target pixels in H.264. FIG. 5(c) shows the predicted image generation method of mode 1 (horizontal prediction), and FIG. 5(d) shows that of mode 4 (diagonal down-right prediction).
In Jung-Hye Min, “Unification of the Directional Intra Prediction Methods in TMuC”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, Document JCTVC-B100, July 2010, the prediction directions of H.264 are further extended to 34 directions, increasing the number of prediction modes. A predicted pixel value is created by performing linear interpolation with 1/32-pixel accuracy according to the prediction angle, and is copied along the prediction direction. The details of the intra prediction unit 5008 used in the present embodiment will be described later.
The inter prediction unit 5009 performs inter prediction using the reference image signal 5057 saved in the reference image memory 5007. Specifically, the inter prediction unit 5009 obtains from the entropy decoding unit 5002 the amount of motion displacement (motion vector) between the prediction target block and the reference image signal 5057, and performs interpolation processing (motion compensation) based on this motion vector to generate an inter predicted image. In H.264, interpolation processing with up to 1/4-pixel accuracy is possible.
The prediction selection switch 5010 selects the output terminal of the intra prediction unit 5008 or the output terminal of the inter prediction unit 5009 in accordance with the decoded prediction information 5059, and inputs the intra predicted image or the inter predicted image to the addition unit 5005 as the predicted image signal 5058. When the prediction information 5059 indicates intra prediction, the prediction selection switch 5010 connects to the output terminal of the intra prediction unit 5008. On the other hand, when the prediction information 5059 indicates inter prediction, the prediction selection switch 5010 connects to the output terminal of the inter prediction unit 5009.
The decoding control unit 5012 controls each element of the video decoding apparatus in FIG. 50. Specifically, the decoding control unit 5012 performs various controls for the decoding process, including the operations described above.
The video decoding apparatus in FIG. 50 also uses syntax that is the same as or similar to the syntax described with reference to FIGS. 28, 29, 30A to 30E, 34A to 34B, and 39A to 39B, and therefore its detailed description is omitted.
The details of the intra prediction unit 5008 will be described below with reference to FIG. 6. In the present embodiment, the intra prediction unit 5008 has the same configuration and processing content as the intra prediction unit 109 described in the first embodiment.
The intra prediction unit 5008 (denoted 109 in FIG. 6) shown in FIG. 6 has a unidirectional intra predicted image generation unit 601, a bidirectional intra predicted image generation unit 602, a prediction mode information setting unit 603, a selection switch 604, and a bidirectional intra prediction mode generation unit 605. First, the reference image signal 5057 (159 in FIG. 6) is input from the reference image memory 5007 to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602. In accordance with the prediction mode information controlled by the decoding control unit 5012, the prediction mode information setting unit 603 sets the prediction mode to be generated by the unidirectional intra predicted image generation unit 601 or the bidirectional intra predicted image generation unit 602, and outputs the prediction mode 651. The bidirectional intra prediction mode generation unit 605 outputs bidirectional intra prediction mode information 652 in accordance with the prediction mode 651 and the reference intra prediction mode information 164. The selection switch 604 has the function of switching between the output terminals of the two intra predicted image generation units in accordance with the prediction mode 651. If the input prediction mode 651 is a unidirectional intra prediction mode, the switch is connected to the output terminal of the unidirectional intra predicted image generation unit 601; if the prediction mode 651 is a bidirectional intra prediction mode, the switch is connected to the output terminal of the bidirectional intra predicted image generation unit 602. Meanwhile, each of the intra predicted image generation units 601 and 602 generates the predicted image signal 5058 (161 in FIG. 6) in accordance with the prediction mode 651. The generated predicted image signal 5058 is output from the intra prediction unit 5008.
First, the prediction mode information setting unit 603 will be described in detail. FIGS. 7A and 7B show the number of prediction modes for each block size according to the present embodiment. PuSize indicates the size of the pixel block (prediction unit) to be predicted, and seven sizes from PU_2x2 to PU_128x128 are defined. IntraUniModeNum represents the number of prediction modes of unidirectional intra prediction, and IntraBiModeNum represents the number of prediction modes of bidirectional intra prediction. Number of modes is the total number of prediction modes for each pixel block (prediction unit) size.
Meanwhile, FIGS. 8A and 8B show the relationship between prediction modes and prediction methods when PuSize is PU_8x8, PU_16x16, or PU_32x32. FIGS. 9A and 9B show the case where PuSize is PU_4x4, and FIG. 10 shows the cases of PU_64x64 and PU_128x128. Here, IntraPredMode indicates the prediction mode number, and IntraBipredFlag is a flag indicating whether the mode is bidirectional intra prediction. When this flag is 0, the prediction mode is a unidirectional intra prediction mode; when the flag is 1, the prediction mode is a bidirectional intra prediction mode. When the flag is 1, the bidirectional intra prediction mode generation unit 605 generates the bidirectional intra prediction mode information 652 in accordance with IntraBipredTypeIdx, which specifies the method of generating the bidirectional intra prediction. When IntraBipredTypeIdx is 0, the two unidirectional intra prediction modes used for the bidirectional intra prediction are set by a predetermined table in a first prediction mode generation unit 1901, which will be described later. Hereinafter, the method in which the two unidirectional intra prediction modes used for bidirectional intra prediction are predetermined by a table is referred to as the fixed table method. FIG. 8A shows an example in which all bidirectional intra prediction modes use the fixed table method.
When IntraBipredTypeIdx has a value greater than 0, the two unidirectional intra prediction modes used for the bidirectional intra prediction are set based on the reference intra prediction mode information 164. Hereinafter, the method in which the two unidirectional intra prediction modes used for bidirectional intra prediction are set based on the reference intra prediction mode information 164 is referred to as the direct method. IntraBipredTypeIdx takes different values depending on the method of deriving the two unidirectional intra prediction modes from the reference intra prediction mode information 164.
Among the plurality of bidirectional intra prediction modes, all modes may use the fixed table method, or all modes may use the direct method. Alternatively, some modes may use the fixed table method and the remaining modes the direct method. FIG. 8B shows an example in which, of the eight bidirectional intra prediction modes, three use the fixed table method and the remaining five use the direct method.
IntraPredTypeLX indicates the prediction type of the intra prediction. Intra_Vertical means that the vertical direction is used as the reference for prediction, and Intra_Horizontal means that the horizontal direction is used as the reference for prediction. Note that 0 or 1 is substituted for X in IntraPredTypeLX. IntraPredTypeL0 indicates the first prediction mode of unidirectional or bidirectional intra prediction. IntraPredTypeL1 indicates the second prediction mode of bidirectional intra prediction. IntraPredAngleId is an index indicating the prediction angle. The prediction angles actually used for generating the predicted values are shown in FIG. 11. Here, puPartIdx represents the index of a divided block in the quadtree division described with reference to FIG. 3B.
For example, when IntraPredMode is 4, IntraPredTypeL0 is Intra_Vertical, so it can be seen that the vertical direction is used as the reference for prediction. As can be seen from FIG. 8B, a total of 34 modes from IntraPredMode = 0 to 33 are unidirectional intra prediction modes, and a total of 8 modes from IntraPredMode = 34 to 41 are bidirectional intra prediction modes.
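The partitioning of the mode numbers just described (modes 0 to 33 unidirectional, modes 34 to 41 bidirectional for this block size) implies a simple classification rule, sketched below. The function name is illustrative, not part of the described apparatus; the two constants are the FIG. 8B values for a PU_8x8/PU_16x16/PU_32x32 prediction unit.

```python
# Sketch: classifying IntraPredMode as unidirectional or bidirectional,
# using the FIG. 8B example (34 unidirectional + 8 bidirectional modes).

INTRA_UNI_MODE_NUM = 34  # IntraUniModeNum for this PuSize
INTRA_BI_MODE_NUM = 8    # IntraBiModeNum for this PuSize

def intra_bipred_flag(intra_pred_mode):
    """Return 1 for a bidirectional intra prediction mode, 0 for unidirectional."""
    assert 0 <= intra_pred_mode < INTRA_UNI_MODE_NUM + INTRA_BI_MODE_NUM
    return 1 if intra_pred_mode >= INTRA_UNI_MODE_NUM else 0
```

For other values of PuSize the same rule applies with the mode counts taken from FIGS. 7A and 7B.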
Under the control of the decoding control unit 5012, the prediction mode information setting unit 603 sets the above-described prediction information corresponding to the designated prediction mode 651 in the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602, and outputs the prediction mode 651 to the selection switch.
Next, the unidirectional intra predicted image generation unit 601 will be described in detail. The unidirectional intra predicted image generation unit 601 has the function of generating the predicted image signal 5058 (161 in FIG. 6) for the plurality of prediction directions shown in FIG. 12. In FIG. 12, there are 33 different prediction directions relative to the vertical and horizontal coordinate axes indicated by the bold lines. The directions of the representative prediction angles used in H.264 are indicated by arrows. In the present embodiment, 33 prediction directions are prepared, each along a line drawn from the origin in the direction of an arrow. In addition, as in H.264, DC prediction, which predicts from the average value of the available reference pixels, is provided, so that there are 34 prediction modes in total.
When IntraPredMode = 4, IntraPredAngleIdL0 is -4, so the predicted image signal 5058 (161 in FIG. 6) is generated along the prediction direction indicated by IntraPredMode = 4 in FIG. 12. The arrows within the range labeled “Intra_Vertical” at the bottom of FIG. 12 indicate prediction modes whose prediction type is Intra_Vertical, and the arrows within the range labeled “Intra_Horizontal” on the right side of FIG. 12 indicate prediction modes whose prediction type is Intra_Horizontal.
<Intra Prediction Unit 5008>
Next, the predicted image generation method of the unidirectional intra predicted image generation unit 601 will be described. Here, a predicted image value is generated based on the input reference image signal 5057 (159 in FIG. 6), and the pixels are copied along the prediction direction described above. The predicted image value is generated by performing interpolation with 1/32-pixel accuracy. FIG. 11 shows the relationship between IntraPredAngleIdLX and intraPredAngle, which is used for generating the predicted image values. intraPredAngle indicates the prediction angle actually used when generating the predicted value. For example, when the prediction type is Intra_Vertical and intraPredAngle shown in FIG. 11 is a positive value, the method of generating the predicted value is expressed by equation (3) above. Here, BLK_SIZE indicates the size of the pixel block (prediction unit), ref[] indicates the array in which the reference image signal is stored, and pred(k,m) indicates the generated predicted image signal 5058 (161 in FIG. 6).
For conditions other than the above as well, predicted values can be generated in a similar manner according to the table of FIG. 11. For example, the predicted value of the prediction mode indicated by IntraPredMode = 1 is identical to that of the H.264 horizontal prediction shown in FIG. 5(c). This concludes the description of the unidirectional intra predicted image generation unit 601 in the present embodiment.
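Angular prediction with 1/32-pixel interpolation, as described above for a vertical-type mode with a positive intraPredAngle, can be sketched as follows. Equation (3) itself is not reproduced in this chunk, so the exact reference-array indexing below is an assumption following the common formulation in which ref[] holds the reconstructed row above the block; the function name is illustrative.

```python
# Sketch of vertical-type angular prediction with 1/32-pel interpolation
# (positive intraPredAngle only). Indexing of ref[] is assumed, not taken
# from equation (3) verbatim.

def angular_predict_vertical(ref, blk_size, intra_pred_angle):
    """Return a blk_size x blk_size predicted block, rows y, columns x."""
    assert intra_pred_angle >= 0
    pred = [[0] * blk_size for _ in range(blk_size)]
    for y in range(blk_size):
        pos = (y + 1) * intra_pred_angle
        idx = pos >> 5      # integer offset into the reference row
        frac = pos & 31     # 1/32-pel fractional position
        for x in range(blk_size):
            # linear interpolation between two neighbouring reference pixels
            pred[y][x] = ((32 - frac) * ref[x + idx] +
                          frac * ref[x + idx + 1] + 16) >> 5
    return pred
```

With intra_pred_angle = 0 this degenerates to pure vertical prediction (each row is a copy of the reference row), matching the H.264-style directional copy described earlier.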
Next, the bidirectional intra predicted image generation unit 602 will be described in detail. FIG. 13 shows a block diagram of the bidirectional intra predicted image generation unit 602. The bidirectional intra predicted image generation unit 602 has a first unidirectional intra predicted image generation unit 1301, a second unidirectional intra predicted image generation unit 1302, and a weighted average unit 1303, and has the function of generating two unidirectional intra predicted images based on the input reference image signal 5057 (159 in FIG. 13) and generating the predicted image signal 5058 (161 in FIG. 13) by taking their weighted average.
The functions of the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are identical. Each generates a predicted image signal corresponding to a prediction mode given according to the prediction mode information controlled by the encoding control unit 115. The first predicted image signal 1351 is output from the first unidirectional intra predicted image generation unit 1301, and the second predicted image signal 1352 is output from the second unidirectional intra predicted image generation unit 1302. Each predicted image signal is input to the weighted average unit 1303, where weighted averaging is performed.
The table in FIG. 14 is a table for deriving the two unidirectional intra prediction modes from a bidirectional intra prediction mode. Here, BipredIdx is derived using equation (4).
For example, in the case of PuSize = PU_8x8 and IntraPredMode = 34, it can be seen from FIG. 7A or 7B that IntraUniModeNum = 34, and therefore BipredIdx = 0. As a result, it is derived from FIG. 14 that the first unidirectional intra prediction mode (MappedBi2Uni(0, idx)) is 1 and the second unidirectional intra prediction mode (MappedBi2Uni(1, idx)) is 0. For other values of PuSize and IntraPredMode, the two prediction modes can be derived in the same manner. Hereinafter, the first unidirectional intra prediction mode is expressed as IntraPredModeL0, and the second unidirectional intra prediction mode as IntraPredModeL1.
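The worked example above (IntraPredMode = 34, IntraUniModeNum = 34, giving BipredIdx = 0 and the mode pair (1, 0)) can be sketched as below. Only the first table entry is taken from the text; the FIG. 14 table and equation (4) are otherwise not reproduced, so the subtraction used to obtain BipredIdx is inferred from the worked example.

```python
# Sketch of deriving the two unidirectional modes from a bidirectional mode.
# BipredIdx = IntraPredMode - IntraUniModeNum is inferred from the example;
# only entry 0 of the MappedBi2Uni table is given in the text.

MAPPED_BI2UNI = {
    0: (1, 0),   # BipredIdx 0 -> (IntraPredModeL0, IntraPredModeL1), per the text
    # further entries would follow FIG. 14
}

def derive_uni_modes(intra_pred_mode, intra_uni_mode_num):
    """Map a bidirectional IntraPredMode to its two unidirectional modes."""
    bipred_idx = intra_pred_mode - intra_uni_mode_num   # equation (4), inferred
    return MAPPED_BI2UNI[bipred_idx]
```

Running the worked example, `derive_uni_modes(34, 34)` yields the pair (1, 0) stated in the text.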
The first predicted image signal 1351 and the second predicted image signal 1352 generated in this way by the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are input to the weighted average unit 1303.
The weighted average unit 1303 calculates the Euclidean distance or the city-block distance (Manhattan distance) based on the prediction directions of IntraPredModeL0 and IntraPredModeL1, and derives the weight components used in the weighted averaging. The weight component of each pixel is represented by the reciprocal of the Euclidean or city-block distance from the reference pixel used for prediction, and is generalized by equation (5). When the Euclidean distance is used, ΔL is expressed by equation (6); when the city-block distance is used, ΔL is expressed by equation (7). The weight table for each prediction mode is generalized by equation (8). Accordingly, the final prediction signal at pixel position n is given by equation (9).
In the present embodiment, two prediction modes are selected to generate the prediction signal for the predicted pixels, but as another embodiment, three or more prediction modes may be selected to generate the predicted values. In this case, the ratio of the reciprocals of the spatial distances from the reference pixels to the predicted pixel may be set as the weighting factors.
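The inverse-distance weighting just described, generalized to any number of prediction modes, amounts to normalizing the reciprocal distances so they sum to one. The sketch below illustrates this for a single pixel position; the function name is illustrative and floating point is used for clarity, whereas the text later converts the weights to integer arithmetic.

```python
# Sketch: weight components as the ratio of reciprocal reference-pixel
# distances, for N >= 2 prediction modes at one pixel position.

def inverse_distance_weights(distances):
    """Given per-mode distances DeltaL for one pixel, return normalized weights."""
    inv = [1.0 / d for d in distances]   # reciprocal distances, equation (5)
    total = sum(inv)
    return [v / total for v in inv]      # weights sum to 1
```

A mode whose reference pixels lie closer to the predicted pixel therefore receives a proportionally larger weight, which is the intent of equations (5) and (8).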
Also, in the present embodiment, the reciprocal of the Euclidean or city-block distance from the reference pixels used by the prediction mode is used directly as the weight component, but as another embodiment, the weight components may be set using a distribution model whose variable is the Euclidean or city-block distance from the reference pixels. The distribution model uses at least one of a linear model, an M-th order function (M ≥ 1), a nonlinear function such as a one-sided Laplace distribution or a one-sided Gaussian distribution, or a fixed value independent of the distance from the reference pixels. When the one-sided Gaussian distribution is used as the model, the weight component is expressed by equation (10); when the one-sided Laplace distribution is used, by equation (11).
Furthermore, an isotropic correlation model or an elliptic correlation model obtained by modeling the autocorrelation function, or a generalized Gaussian model that generalizes the Laplace and Gaussian functions, may be used as the model of the weight components.
If the weight components given by equations (5), (8), (10), and (11) are computed each time a predicted image is generated, a plurality of multipliers are required and the hardware scale increases. For this reason, the circuit scale required for the computation can be reduced by computing the weight components in advance according to the relative distance for each prediction mode and holding them in memory. Here, the method of deriving the weight components when the city-block distance is used will be described.
The city-block distance ΔL_L0 of IntraPredModeL0 and the city-block distance ΔL_L1 of IntraPredModeL1 are calculated from equation (7). Here, the relative distance varies depending on the prediction directions of the two prediction modes. As an example, representative distances in the case of PuSize = PU_4x4 are shown in FIGS. 15A, 15B, and 15C. FIG. 15A shows the city-block distances when IntraPredModeLX = 0, FIG. 15B those when IntraPredModeLX = 1, and FIG. 15C those when IntraPredModeLX = 3. Similarly, the distance can be derived using equation (6) or (7) according to each prediction mode. However, in the case of the DC prediction of IntraPredModeLX = 2, the distance is set to 2 at all pixel positions. FIG. 16 shows the distance tables for six representative prediction modes in the case of PuSize = PU_4x4. When the number of values of IntraPredModeLX is large, the table size of these distance tables may increase.
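To make the idea of a per-mode distance table concrete, the sketch below builds 4x4 city-block distance tables for vertical prediction (mode 0), horizontal prediction (mode 1), and DC prediction (mode 2). Only the DC value of 2 is stated in the text; the vertical and horizontal tables here are plausible illustrations (distance growing with the row or column index toward the reference row or column), not the actual values of FIGS. 15A to 15C or equation (7).

```python
# Hypothetical sketch of city-block distance tables for a 4x4 prediction
# unit. Vertical/horizontal values are illustrative; only DC = 2 is from
# the text.

def distance_table(mode, size=4):
    if mode == 0:   # vertical: reference row assumed directly above the block
        return [[y + 1 for _ in range(size)] for y in range(size)]
    if mode == 1:   # horizontal: reference column assumed to the left
        return [[x + 1 for x in range(size)] for _ in range(size)]
    if mode == 2:   # DC prediction: fixed distance 2 at every pixel position
        return [[2] * size for _ in range(size)]
    raise NotImplementedError("angular modes would follow equation (6)/(7)")
```

Tables like these are what the text proposes to precompute and hold in memory instead of evaluating the distance formulas per pixel.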
In the present embodiment, the required amount of memory is reduced by sharing a distance table among several prediction modes with close prediction angles. FIG. 17 shows the mapping of IntraPredModeLX used for deriving the distance tables. Here, tables are prepared only for the prediction modes whose prediction angles are in 45-degree steps and for the prediction mode corresponding to DC prediction, and an example is shown in which the other prediction angles are mapped to the nearest of the prepared reference prediction modes. When the distances to two reference prediction modes are equal, the angle is mapped to the one with the smaller index. The prediction mode indicated by MappedIntraPredMode is looked up in FIG. 17, from which the distance table can be derived.
 By using the above distance tables, the per-pixel relative distance between the two prediction modes is calculated with Equation (12). Using Equation (12), the final prediction signal at pixel position n is given by Equation (13). Here, to avoid the increase in hardware scale caused by fractional arithmetic, the weight component is scaled in advance and the computation is rewritten as the integer arithmetic of Equation (14). For example, when the fractional part is expressed with 10-bit precision, WM = 1024, Offset = 512, and SHIFT = 10; these satisfy the relationship of Equation (15).
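With WM = 1024, Offset = 512, and SHIFT = 10 as above, the integer-arithmetic blend of Equation (14) can be sketched as below. Equation (14) itself is not reproduced in this text, so the exact form of the blend (a fixed-point weighted average of the two directional predictions) is an assumption:

```python
WM, OFFSET, SHIFT = 1024, 512, 10  # 10-bit fixed point; WM = 1 << SHIFT

def bipred_pixel(p_l0, p_l1, w_l0):
    """Blend the L0 and L1 directional predictions at one pixel using only
    integer arithmetic. w_l0 is the L0 weight pre-scaled into [0, WM];
    OFFSET rounds the result before the right shift."""
    return (w_l0 * p_l0 + (WM - w_l0) * p_l1 + OFFSET) >> SHIFT
```

With w_l0 = WM the output follows the L0 prediction; with w_l0 = 0 it follows L1.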
 FIGS. 18A and 18B show examples in which the weight components based on the one-sided Laplace distribution model of this embodiment are tabulated. FIG. 18A shows the weight component table for PuSize = PU_4x4, and FIG. 18B shows the table for PuSize = PU_8x8. Tables for the other values of PuSize can likewise be derived using Equations (5), (8), (10), and (11).

 <Bidirectional Intra Prediction Mode Generation Unit 605>

 The bidirectional intra prediction mode generation unit 605 is identical to the bidirectional intra prediction mode generation unit 605 described in the first embodiment, so its description is omitted.
 This concludes the detailed description of the intra prediction unit 5008 according to the present embodiment.
 <Syntax structure 1>

 The syntax used by the video decoding device 5000 of FIG. 50 is described below.

 The syntax represents the structure of the encoded data (for example, the encoded data 162 of FIG. 1) that the video decoding device 5000 decodes. The image encoding apparatus represented by the first embodiment encodes this data using the same syntax structure. FIG. 28 illustrates the syntax 2800 used by the video decoding device 5000 of FIG. 50. Since the syntax 2800 is the same as in the first embodiment, its detailed description is omitted.
 Next, an example of the prediction unit syntax according to this embodiment will be described.
 FIG. 30A shows an example of the prediction unit syntax. In the figure, pred_mode indicates the prediction type of the prediction unit; MODE_INTRA indicates that the prediction type is intra prediction. intra_split_flag is a flag indicating whether the prediction unit is further divided into four prediction units. When intra_split_flag is 1, the prediction unit is divided into four prediction units of half the vertical and horizontal size; when intra_split_flag is 0, the prediction unit is not divided.
 intra_luma_bipred_flag[i] is a flag indicating whether the prediction mode IntraPredMode applied to the prediction unit is a unidirectional or a bidirectional intra prediction mode. The index i indicates the position of the divided prediction unit: it is 0 when intra_split_flag is 0, and ranges from 0 to 3 when intra_split_flag is 1. This flag carries the value of IntraBipredFlag of the prediction unit shown in FIGS. 8A, 8B, 9A, 9B, and 10.
 When intra_luma_bipred_flag[i] is 1, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], the information identifying which of the prepared bidirectional intra prediction modes was used, is decoded. intra_luma_bipred_mode[i] may be decoded with a fixed-length code according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIG. 7, or may be decoded using a predetermined code table. Further, as described above, when the total number of bidirectional intra prediction modes differs per prediction unit, it may be decoded using a code table that switches according to the total number of bidirectional intra prediction modes indicated for each prediction unit. When intra_luma_bipred_flag[i] is 0, the prediction unit uses unidirectional intra prediction, and predictive decoding from adjacent blocks is performed.
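The decoding flow of FIG. 30A described above can be sketched as follows. The read_flag() and read_mode() callables are hypothetical stand-ins for the entropy decoder; the real syntax is the one in the figure:

```python
def parse_prediction_unit(read_flag, read_mode):
    """Sketch of the FIG. 30A prediction unit flow: split decision, then per
    sub-unit either a bidirectional mode index or the MostProbable/remainder
    unidirectional signalling."""
    intra_split_flag = read_flag("intra_split_flag")
    num_parts = 4 if intra_split_flag else 1
    modes = []
    for i in range(num_parts):
        if read_flag("intra_luma_bipred_flag"):
            modes.append(("bipred", read_mode("intra_luma_bipred_mode")))
        elif read_flag("prev_intra_luma_unipred_flag"):
            modes.append(("unipred", "MostProbable"))
        else:
            modes.append(("unipred", read_mode("rem_intra_luma_unipred_mode")))
    return modes
```
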
 prev_intra_luma_unipred_flag[i] is a flag indicating whether the prediction value MostProbable of the prediction mode, calculated from adjacent blocks, is identical to the intra prediction mode of the prediction unit. The calculation of MostProbable is detailed later. When prev_intra_luma_unipred_flag[i] is 1, MostProbable equals the intra prediction mode IntraPredMode. When prev_intra_luma_unipred_flag[i] is 0, MostProbable and IntraPredMode differ, and rem_intra_luma_unipred_mode[i], the information specifying which mode other than MostProbable the intra prediction mode IntraPredMode is, is decoded. rem_intra_luma_unipred_mode[i] may be decoded with a fixed-length code according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIGS. 7A and 7B, or may be decoded using a predetermined code table. rem_intra_luma_unipred_mode[i] is calculated from the intra prediction mode IntraPredMode using Equation (17).
 Next, the method for calculating MostProbable, the predicted value of the prediction mode, is described. MostProbable is calculated according to Equation (18), where Min(x, y) is an operator that returns the smaller of the inputs x and y.
 intraPredModeAL0 and intraPredModeBL0 denote, as described above, the first unidirectional intra prediction modes of the prediction units adjacent to the left of and above the prediction unit being decoded, respectively. When an adjacent prediction unit cannot be referenced because it lies outside the picture or has not yet been decoded, the first unidirectional intra prediction mode of the referenceable prediction unit becomes MostProbable. When neither adjacent prediction unit can be referenced, MostProbable is set to Intra_DC.
 When MostProbable is larger than the number of unidirectional intra prediction modes IntraUniPredModeNum of the prediction unit being decoded, MostProbable is recalculated using Equation (19). MappedMostProbable() is a table that converts MostProbable; an example is shown in FIG. 31.
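A minimal sketch of the MostProbable derivation just described. Equations (18) and (19) are not reproduced in this text, so the Intra_DC index, the "larger than" comparison, and the remap-table values below are assumptions; the real conversion table is the one in FIG. 31:

```python
INTRA_DC = 2  # assumed index of DC prediction (IntraPredModeLX = 2 above)

def most_probable(mode_a, mode_b, num_uni_modes, remap):
    """Equation (18)-style rule: the smaller of the left/top neighbours' first
    unidirectional modes; Intra_DC when neither neighbour is referenceable.
    When the result exceeds the PU's unidirectional mode count, it is
    converted through a MappedMostProbable()-style table (Equation (19))."""
    available = [m for m in (mode_a, mode_b) if m is not None]
    if not available:
        return INTRA_DC
    mp = min(available)
    if mp >= num_uni_modes:  # exact comparison assumed from "larger than"
        mp = remap[mp]       # stands in for the FIG. 31 table
    return mp
```
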
 <Syntax structure 2>

 Next, another example of the prediction unit syntax is shown in FIG. 30C. Since pred_mode and intra_split_flag are the same as in the previous syntax example, their description is omitted. luma_pred_mode_code_type[i] indicates the type of the prediction mode IntraPredMode applied to the prediction unit: 0 (IntraUnipredMostProb) indicates unidirectional intra prediction with an intra prediction mode equal to MostProbable; 1 (IntraUnipredRem) indicates unidirectional intra prediction with an intra prediction mode different from MostProbable; and 2 (IntraBipred) indicates a bidirectional intra prediction mode. FIGS. 32A to 32D show the meaning and bin corresponding to each luma_pred_mode_code_type, together with an example of assigning the number of modes according to the mode configurations shown in FIGS. 7A to 7D. When luma_pred_mode_code_type[i] is 0, the intra prediction mode is the MostProbable mode, so no further information needs to be decoded. When luma_pred_mode_code_type[i] is 1, rem_intra_luma_unipred_mode[i], the information specifying which mode other than MostProbable the intra prediction mode IntraPredMode is, is decoded. rem_intra_luma_unipred_mode[i] may be decoded with a fixed-length code according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIGS. 7A to 7D, or may be decoded using a predetermined code table. rem_intra_luma_unipred_mode[i] is calculated from the intra prediction mode IntraPredMode using Equation (16). When luma_pred_mode_code_type[i] is 2, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], the information identifying which of the prepared bidirectional intra prediction modes was used, is decoded. intra_luma_bipred_mode[i] may be decoded with a fixed-length code according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIGS. 7A to 7D, or may be decoded using a predetermined code table. Further, as described above, when the total number of bidirectional intra prediction modes differs per prediction unit, it may be decoded using a code table that switches according to the total number of bidirectional intra prediction modes indicated for each prediction unit.
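The three-way branch on luma_pred_mode_code_type can be sketched as follows; read_mode() is a hypothetical stand-in for the entropy decoder:

```python
def parse_pred_mode_code_type(code_type, read_mode):
    """Sketch of the FIG. 30C dispatch on luma_pred_mode_code_type."""
    if code_type == 0:  # IntraUnipredMostProb: nothing further to decode
        return ("unipred", "MostProbable")
    if code_type == 1:  # IntraUnipredRem: decode the remainder mode
        return ("unipred", read_mode("rem_intra_luma_unipred_mode"))
    # code_type == 2, IntraBipred: decode the bidirectional mode index
    return ("bipred", read_mode("intra_luma_bipred_mode"))
```
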
 The above is the syntax configuration according to the present embodiment.
 <Syntax structure 3>

 Still another example of the prediction unit syntax is shown in FIG. 30D. This example shows, based on the prediction unit syntax of FIG. 30A, the syntax for switching, within the prediction unit being decoded, between enabling bidirectional intra prediction and disabling it so that only conventional unidirectional intra prediction is available.

 Since pred_mode and intra_split_flag are the same as in the previous syntax example, their description is omitted.
 intra_bipred_flag is a flag indicating whether bidirectional intra prediction is enabled within the prediction unit being decoded. When intra_bipred_flag is 0, bidirectional intra prediction is not used within the prediction unit. Even when intra_split_flag is 1, that is, when the prediction unit is further divided into four, bidirectional intra prediction is not used in any of the prediction units, and only unidirectional intra prediction is available.
 When intra_bipred_flag is 1, bidirectional intra prediction is available within the prediction unit being decoded. Even when intra_split_flag is 1, that is, when the prediction unit is further divided into four, bidirectional intra prediction can be selected in addition to unidirectional intra prediction in all of the prediction units.
 In regions where prediction is comparatively easy and bidirectional intra prediction is unnecessary (for example, flat regions), decoding intra_bipred_flag as 0 and disabling bidirectional intra prediction makes it possible to reduce the code amount needed to decode the bidirectional intra prediction mode, thereby improving coding efficiency.
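A minimal sketch of how intra_bipred_flag gates the mode kinds available to every (sub-)prediction unit, as described above:

```python
def allowed_mode_kinds(intra_bipred_flag, intra_split_flag):
    """FIG. 30D semantics: when intra_bipred_flag is 0, every prediction unit
    (all four sub-units when intra_split_flag is 1) is restricted to
    unidirectional intra prediction; when it is 1, both kinds are selectable."""
    num_parts = 4 if intra_split_flag else 1
    kinds = ["unipred", "bipred"] if intra_bipred_flag else ["unipred"]
    return [list(kinds) for _ in range(num_parts)]
```
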
 <Syntax structure 4>

 Still another example of the prediction unit syntax is shown in FIG. 30E. This example shows, based on the prediction unit syntax of FIG. 30C, the syntax for switching, within the prediction unit being decoded, between enabling bidirectional intra prediction and disabling it so that only conventional unidirectional intra prediction is available. intra_bipred_flag is a flag indicating whether bidirectional intra prediction is enabled within the prediction unit being decoded; it is the same as the intra_bipred_flag described above, so its description is omitted.
 (First modification)

 <Intra Prediction Unit: First Modification>

 As a first modification of the intra prediction unit 5008, it may be combined with the adaptive reference pixel filtering described in JCTVC-B205_draft002, section 5.2.1 "Intra prediction process for luma samples", JCT-VC 2nd Meeting, Geneva, July 2010. FIG. 33 shows the intra prediction unit 5008 (109 in FIG. 33) when adaptive reference pixel filtering is used. It differs from the intra prediction unit 5008 shown in FIG. 6 (109 in FIG. 6) in that a reference pixel filter unit 3301 is added. The reference pixel filter unit 3301 receives the reference image signal 5057 (159 in FIG. 33) and the prediction mode 651, performs the adaptive filtering described below, and outputs a filtered reference image signal 3351. The filtered reference image signal 3351 is input to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602. The configuration and processing other than the reference pixel filter unit 3301 are the same as those of the intra prediction unit 5008 shown in FIG. 6, so their description is omitted.
 Next, the reference pixel filter unit 3301 is described. The reference pixel filter unit 3301 determines whether to filter the reference pixels used for intra prediction according to the reference pixel filter flag and the intra prediction mode included in the prediction mode 651. The reference pixel filter flag indicates whether the reference pixels are filtered when the intra prediction mode IntraPredMode has a value other than "Intra_DC": when the flag is 1, the reference pixels are filtered; when it is 0, they are not. When IntraPredMode is "Intra_DC", the reference pixels are not filtered and the reference pixel filter flag is set to 0. When the reference pixel filter flag is 1, the filtered reference image signal 3351 is calculated by the filtering of Equation (20). Here, p[x, y] denotes a reference pixel before filtering, and pf[x, y] a reference pixel after filtering. x and y denote the position of the reference pixel relative to the upper-left pixel of the prediction unit at x = 0, y = 0. PuPartSize denotes the size (in pixels) of the prediction unit.
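Equation (20) is not reproduced in this text; reference sample smoothing of the kind described in JCTVC-B205 is commonly a [1, 2, 1]/4 filter, and the sketch below assumes that shape (the endpoint handling is likewise an assumption):

```python
def filter_reference_row(p):
    """Assumed [1, 2, 1]/4 smoothing of a row of reference samples p[x],
    with integer rounding and the two endpoint samples kept unchanged."""
    if len(p) < 3:
        return list(p)
    out = [p[0]]
    for i in range(1, len(p) - 1):
        out.append((p[i - 1] + 2 * p[i] + p[i + 1] + 2) >> 2)
    out.append(p[-1])
    return out
```

Note that a linear ramp of samples passes through this filter unchanged, which is why it suppresses noise without biasing angular prediction along smooth gradients.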
 <Syntax structure 5>

 FIGS. 34A and 34B show the prediction unit syntax structure when adaptive reference pixel filtering is performed. FIG. 34A adds the syntax element intra_luma_filter_flag[i] for the adaptive reference pixel filter to FIG. 30A, and FIG. 34B adds it to FIG. 30C. intra_luma_filter_flag[i] is additionally decoded when the intra prediction mode IntraPredMode[i] is other than Intra_DC. When the flag is 0, the reference pixels are not filtered; when intra_luma_filter_flag[i] is 1, the reference pixel filtering described above is applied.
 In the above example, intra_luma_filter_flag[i] is decoded when the intra prediction mode IntraPredMode[i] is other than Intra_DC. As another example, intra_luma_filter_flag[i] need not be decoded when IntraPredMode[i] is in the range 0 to 2; in that case, intra_luma_filter_flag[i] is set to 0.
 The intra_luma_filter_flag[i] described above may also be added, with the same meaning, to the other syntax structures shown in FIGS. 30B, 30D, and 30E.
 (Second modification)

 <Intra Prediction Unit: Second Modification>

 As a second modification of the intra prediction unit 5008, it may be used in combination with the combined intra prediction described in JCTVC-B205_draft002, section 9.6 "Combined Intra Prediction", JCT-VC 2nd Meeting, Geneva, July 2010. The combined intra prediction in that document obtains a prediction value by taking a weighted average of the result of the unidirectional intra prediction described above and the average of the pixels adjacent to the prediction pixel on the left, above, and upper left. When the decoded image signal 5055 has been computed in the video decoding device 5000 or the image encoding device 100, decoded pixels can be used as the pixels adjacent on the left, above, and upper left.
 FIG. 37 shows a block diagram of the intra prediction unit 5008 (109 in FIG. 37) when combined with combined intra prediction. It differs from the intra prediction unit 5008 shown in FIG. 6 in that a combined intra predicted image generation unit 3601, a selection switch 3602, and a decoded image buffer 3701 are added.
 When bidirectional intra prediction and combined intra prediction are used together, the selection switch 604 first switches between the output terminals of the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602 according to the prediction mode information controlled by the decoding control unit 5012. The predicted image signal 5058 (161 in FIG. 37) output here is hereinafter called the directional predicted image signal 5058.
 The directional predicted image signal is then input to the combined intra predicted image generation unit 3601, which generates the predicted image signal 5058 of the combined intra prediction. The selection switch 3602 then switches, according to the combined intra prediction application flag in the prediction mode information controlled by the decoding control unit 5012, between the predicted image signal 5058 of the combined intra prediction and the directional predicted image signal, and the final predicted image signal 5058 of the intra prediction unit 5008 is output. When the combined intra prediction application flag is 1, the predicted image signal 5058 output from the combined intra predicted image generation unit 3601 becomes the final predicted image signal 5058; when the flag is 0, the directional predicted image signal 5058 becomes the finally output predicted image signal 5058.
 Next, the combined intra predicted image generation unit 3601 is described with reference to FIG. 38. The combined intra predicted image generation unit 3601 includes a pixel-level prediction signal generation unit 3801 and a combined intra prediction calculation unit 3802. The pixel-level prediction signal generation unit 3801 predicts the prediction target pixel X from its adjacent pixels and outputs a pixel-level prediction signal 3851. As described above, the adjacent pixels are taken from the decoded image signal 5055. Specifically, the pixel-level prediction signal 3851 (X) of the prediction target pixel is calculated using Equation (21). The coefficients applied to A, B, and C may take other values.
 The combined intra prediction calculation unit 3802 takes a weighted average of the directional predicted image signal 5058 (161 in FIG. 38) (X') and the pixel-level prediction signal 3851 (X), and outputs the final predicted image signal 5058 (P). Specifically, Equation (22) is used.
 Here, W is the weighting coefficient of the weighted average of the directional predicted image signal 5058 (X') and the pixel-level prediction signal 3851 (X), an integer value from W = 0 to 32. The above is the embodiment combined with combined intra prediction.
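Since Equations (21) and (22) are not reproduced in this text, the sketch below assumes that Equation (22) is the usual fixed-point weighted average with W an integer in [0, 32] and a rounding offset, and chooses hypothetical coefficients for the three neighbours in Equation (21) (the text only says the A, B, C coefficients may take other values):

```python
def pixel_level_prediction(a, b, c, coeff=(1, 1, -1)):
    """Equation (21)-style prediction of X from the left (a), upper (b) and
    upper-left (c) neighbours; the default coefficients are hypothetical."""
    ca, cb, cc = coeff
    return ca * a + cb * b + cc * c

def combined_intra_pixel(x_dir, x_pix, w):
    """Equation (22)-style weighted average of the directional prediction
    x_dir and the pixel-level prediction x_pix, with W in [0, 32] and an
    assumed rounding offset of 16 before the 5-bit shift."""
    return (w * x_dir + (32 - w) * x_pix + 16) >> 5
```

With W = 32 the output follows the directional prediction only; with W = 0 it follows the pixel-level prediction only.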
 (Third modification)

 <Intra Prediction Unit: Third Modification>

 The weighting coefficient W may also be switched according to the position of the prediction pixel within the prediction unit. In general, a predicted image signal generated by unidirectional or bidirectional intra prediction derives its prediction values from spatially adjacent, already encoded reference pixels located above or to the left, so the absolute value of the prediction error tends to increase with the distance from the reference pixels. Accordingly, prediction accuracy can be improved by making the weighting coefficient of the directional predicted image signal 5058, relative to that of the pixel-level prediction signal 3851, large when the prediction pixel is close to the reference pixels and small when it is far from them.
 On the other hand, in combined intra prediction, the prediction error signal is generated at encoding time using the input image signal. In that case the pixel-level prediction signal 3851 is taken from the input image signal, so even when the spatial distance between the reference pixel position and the prediction pixel position grows, the prediction accuracy of the pixel-level prediction signal 3851 remains higher than that of the directional predicted image signal 5058. However, if the weighting coefficient of the directional predicted image signal 5058 is simply made large near the reference pixels and small far from them, the prediction error at distant positions decreases, but a discrepancy arises between the prediction values at encoding time and at local decoding time, degrading prediction accuracy. Therefore, especially when the quantization parameter is large, setting W to a smaller value as the spatial distance between the reference pixel position and the prediction pixel position grows suppresses the loss of coding efficiency caused by this open-loop discrepancy.
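A hypothetical position-dependent schedule illustrating the idea of shrinking W with distance from the reference pixels; none of these constants appear in the source:

```python
def position_weight(x, y, w_near=28, step=4, w_min=4):
    """Hypothetical schedule for the weight W of Equation (22): large near the
    top/left reference pixels, decreasing with the distance to the nearest
    reference edge, floored at w_min (all values illustrative only)."""
    dist = min(x, y)  # (x, y) relative to the PU's upper-left pixel
    return max(w_near - step * dist, w_min)
```

A quantization-parameter-dependent variant would lower w_near or raise step as QP grows, in line with the open-loop argument above.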
 <Syntax structure 6>

 FIGS. 39A and 39B show the prediction unit syntax structure when combined intra prediction is performed. FIG. 39A differs from FIG. 30A in that the syntax element combined_intra_pred_flag, which switches combined intra prediction on and off, is added; this flag is equivalent to the combined intra prediction application flag described above. FIG. 39B adds combined_intra_pred_flag to FIG. 30C in the same way. When combined_intra_pred_flag is 1, the selection switch 3602 shown in FIG. 37 is connected to the output terminal of the combined intra predicted image generation unit 3601. When combined_intra_pred_flag is 0, the selection switch 3602 shown in FIG. 36 is connected to the output terminal of whichever of the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602 the selection switch 604 is connected to.
 The combined_intra_pred_flag described above may also be added, with the same meaning, to the other syntax structures shown in FIGS. 30B, 30D, and 30E.

 Furthermore, this may be combined with the second modification of the intra prediction unit.
 This concludes the description of the other embodiments of the intra prediction unit 5008.
 According to the fourth embodiment described above, the device includes an intra prediction unit identical or similar to that of the video encoding device according to the first embodiment, and therefore achieves the same or similar effects as the video encoding device according to the first embodiment.
 (Fifth embodiment)

 <Video Decoding Device - Fifth Embodiment>

 The video decoding device according to the fifth embodiment differs from the video decoding device according to the fourth embodiment described above in the details of the inverse orthogonal transform. In the following description, parts of this embodiment identical to those of the fourth embodiment are denoted by the same reference numerals, and the description centers on the differing parts. The video encoding device corresponding to the video decoding device according to this embodiment is as described in the second embodiment.
FIG. 51 is a block diagram showing the video decoding device according to the fifth embodiment. The changes from the video decoding device according to the fourth embodiment are that a transform selection unit 5102 and a coefficient order restoration unit 5101 have been added. The internal structure of the inverse orthogonal transform unit 5004 also differs.
<Inverse Orthogonal Transform Unit 5004>
First, the inverse orthogonal transform unit 5004 will be described with reference to FIG. 42. Note that the inverse orthogonal transform unit 5004 has the same configuration as the inverse orthogonal transform unit 105 according to the second embodiment. Accordingly, the description in this embodiment substitutes the transform selection information 4051 in FIG. 42 with the transform selection information 5151, the restored transform coefficients 155 with the restored transform coefficients 5053, and the restored prediction error signal 156 with the restored prediction error signal 5054.
The inverse orthogonal transform unit 5004 (105 in FIG. 42) in FIG. 42 includes a first inverse orthogonal transform unit 4201, a second inverse orthogonal transform unit 4202, an Nth inverse orthogonal transform unit 4203, and a transform selection switch 4204. First, the transform selection switch 4204 will be described. The transform selection switch 4204 has a function of routing the output of the inverse quantization unit 5003 according to the input transform selection information 5151. The transform selection information 5151 is one of the items of information controlled by the decoding control unit 5012, and is set by the transform selection unit 5102 according to the prediction information 5059.
When the transform selection information 5151 indicates the first orthogonal transform, the output of the switch is connected to the first inverse orthogonal transform unit 4201. When the transform selection information 5151 indicates the second orthogonal transform, the output is connected to the second inverse orthogonal transform unit 4202. Similarly, when the transform selection information 5151 indicates the Nth orthogonal transform, the output is connected to the Nth inverse orthogonal transform unit 4203.
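For illustration only, the routing performed by the transform selection switch can be sketched as a dispatch over a bank of inverse transforms. The function names are hypothetical, and the randomly generated orthogonal matrices are mere stand-ins for the first to Nth inverse orthogonal transform units of FIG. 42:

```python
import numpy as np

def build_inverse_transform_bank(n, size=4, seed=0):
    """Build n illustrative orthogonal transforms (hypothetical stand-ins
    for the first to Nth inverse orthogonal transform units)."""
    rng = np.random.default_rng(seed)
    bank = []
    for _ in range(n):
        # QR decomposition of a random matrix yields an orthogonal q
        q, _ = np.linalg.qr(rng.standard_normal((size, size)))
        bank.append(q)
    return bank

def inverse_orthogonal_transform(restored_coeffs, transform_selection_info, bank):
    """Transform selection switch: route the restored transform coefficients
    to the inverse transform chosen by the transform selection information,
    then apply it separably in two dimensions."""
    t = bank[transform_selection_info]
    # the forward transform was t @ x @ t.T, so the inverse is t.T @ c @ t
    return t.T @ restored_coeffs @ t
```

Because each matrix is orthogonal, applying the forward transform on the encoder side and the selected inverse transform on the decoder side round-trips the block exactly.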
<Transform Selection Unit 5102>
Next, the transform selection unit 5102 shown in FIG. 51 will be described. The prediction information 5059, controlled by the decoding control unit 5012 and decoded by the entropy decoding unit 5002, is input to the transform selection unit 5102. Based on this prediction information 5059, the transform selection unit 5102 has a function of setting MappedTransformIdx information indicating which inverse orthogonal transform is used for which prediction mode. FIG. 43 shows the transform selection information 5151 (MappedTransformIdx) for intra prediction. Here, an example with N = 9 is shown. Note that the first inverse orthogonal transform unit 4201 is selected for DC prediction, which corresponds to IntraPredModeLX = 2. By mapping each prediction mode onto a reference prediction mode with a close prediction angle in this way, the circuit scale of the orthogonal transform and inverse orthogonal transform in a hardware implementation can be reduced compared to preparing an orthogonal transformer and an inverse orthogonal transformer for every prediction mode. When bidirectional intra prediction is selected, the two modes IntraPredModeL0 and IntraPredModeL1 are first derived, and then MappedTransformIdx is derived from FIG. 43 using the prediction mode corresponding to IntraPredModeL0. Although this embodiment shows an example with N = 9, the value of N may be chosen as the optimal trade-off between coding performance and the circuit scale of a hardware implementation.
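The derivation above can be sketched as follows. The actual mode-to-transform table is the one given in FIG. 43, which is not reproduced here; the grouping below is purely hypothetical except that DC prediction (IntraPredModeLX = 2) selects the first transform (index 0):

```python
N = 9  # number of transform units in the N = 9 example

def mapped_transform_idx(intra_pred_mode_l0):
    """Derive MappedTransformIdx from the L0 intra prediction mode.
    Hypothetical mapping: only the DC rule is taken from the text."""
    if intra_pred_mode_l0 == 2:   # DC prediction -> first transform
        return 0
    # map the remaining directional modes onto the other N - 1
    # transforms by an assumed nearest-prediction-angle grouping
    return 1 + intra_pred_mode_l0 % (N - 1)
```

As stated in the text, when bidirectional intra prediction is selected, only the mode corresponding to IntraPredModeL0 is consulted for this derivation.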
<Coefficient Order Restoration Unit 5101>
Next, the coefficient order restoration unit 5101 will be described. FIG. 52 shows a block diagram of the coefficient order restoration unit 5101. Note that the coefficient order restoration unit 5101 has a function of performing the scan order conversion that is the inverse of that of the coefficient order control unit 4002 according to the second embodiment.
The coefficient order restoration unit 5101 includes a coefficient order selection switch 5204, a first coefficient order inverse conversion unit 5201, a second coefficient order inverse conversion unit 5202, and an Nth coefficient order inverse conversion unit 5203. The coefficient order selection switch 5204 has a function of switching the connection between the output of the switch and the coefficient order inverse conversion units 5201 to 5203, for example according to MappedTransformIdx shown in FIG. 43. The N types of coefficient order inverse conversion units 5201 to 5203 have a function of inversely converting the quantized transform coefficient sequence 5152 decoded by the entropy decoding unit 5002 from one-dimensional data back into two-dimensional data. For example, H.264 converts two-dimensional data into one-dimensional data using a zigzag scan; here, this means performing, for example, the conversion from the zigzag scan back to a raster scan.
When an orthogonal transform that takes the intra prediction direction into account is used, the quantized transform coefficients, obtained by quantizing the orthogonally transformed coefficients, have the property that the positions at which non-zero transform coefficients occur within a block are biased. This occurrence tendency of non-zero transform coefficients differs for each intra prediction direction. However, when different videos are encoded, the occurrence tendency of non-zero transform coefficients for the same prediction direction is similar. Therefore, when converting the two-dimensional data into one-dimensional data (2D-1D conversion), entropy-encoding preferentially from the transform coefficients at positions where non-zero coefficients are likely to occur reduces the amount of information needed to encode the transform coefficients. Conversely, the decoding side must restore the one-dimensional data to two-dimensional data. Here, the restoration is performed with the raster scan as the one-dimensional reference scan.
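As a minimal sketch of this 2D-1D conversion and its decoder-side inverse, the following uses the well-known H.264 zigzag order for a 4 × 4 block as the example scan; the function names are illustrative, not those of the embodiment:

```python
import numpy as np

# H.264 zigzag scan for a 4x4 block: the i-th entry is the raster-scan
# position read out at step i of the encoder's 2D-1D conversion.
ZIGZAG_4x4 = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]

def scan_2d_to_1d(block, order=ZIGZAG_4x4):
    """Encoder-side 2D-1D conversion (e.g. the zigzag scan)."""
    return block.reshape(-1)[order].copy()

def restore_1d_to_2d(seq, order=ZIGZAG_4x4, shape=(4, 4)):
    """Decoder-side 1D-2D restoration: scatter the decoded sequence
    back to its raster (reference-scan) positions."""
    flat = np.empty(len(seq), dtype=np.asarray(seq).dtype)
    flat[list(order)] = seq
    return flat.reshape(shape)
```

A mode-dependent scan, as in the embodiment, would simply select a different `order` table per MappedTransformIdx.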
As yet another example, the coefficient order restoration unit 5101 may dynamically update the scan order used in the 1D-2D conversion. The configuration of a coefficient order restoration unit 5101 that performs such an operation is illustrated in FIG. 53. In addition to the configuration of FIG. 52, this coefficient order restoration unit 5101 includes an occurrence frequency counting unit 5301 and an updating unit 5302. The coefficient order inverse conversion units 5201, ..., 5203 are identical except that their 1D-2D scan order is updated by the updating unit 5302.
The occurrence frequency counting unit 5301 creates, for each prediction mode, a histogram 5351 of the number of occurrences of non-zero coefficients at each element of the quantized transform coefficient sequence 5152. The occurrence frequency counting unit 5301 inputs the created histogram 5351 to the updating unit 5302.
The updating unit 5302 updates the coefficient order based on the histogram 5351 at predetermined timings, for example when the coding process of a coding tree unit finishes, or when the coding process of one line within a coding tree unit finishes.
Specifically, the updating unit 5302 refers to the histogram 5351 and updates the coefficient order for prediction modes that have an element whose non-zero coefficient occurrence count has reached a threshold. For example, the updating unit 5302 performs the update for prediction modes having an element for which 16 or more non-zero coefficient occurrences have been counted. By applying a threshold to the occurrence count, the coefficient order is updated on a global basis, making convergence to a local optimum less likely.
For the prediction mode to be updated, the updating unit 5302 sorts the elements in descending order of non-zero coefficient occurrence frequency. The sorting can be realized with existing algorithms such as bubble sort or quicksort. The updating unit 5302 then inputs the updated coefficient order 5352, which indicates the order of the sorted elements, to the one of the coefficient order inverse conversion units 5201 to 5203 that corresponds to the prediction mode to be updated.
When the updated coefficient order 5352 is input, each inverse conversion unit performs the 1D-2D conversion according to the updated scan order. When the scan order is updated dynamically, the initial scan order of each 1D-2D conversion unit must be determined in advance; it is set to the same initial scan order as that of the coefficient order control unit 4002 of the video encoding device shown in FIG. 40. By dynamically updating the scan order in this way, stable and high coding efficiency can be expected even when the occurrence tendency of non-zero coefficients in the quantized transform coefficients changes under the influence of the characteristics of the predicted image, the quantization information (quantization parameter), and so on. Specifically, the amount of code generated by run-length encoding in the entropy encoding unit 113 can be suppressed.
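The counting-and-update behavior described above can be sketched as follows. This is a minimal illustration, assuming a per-mode 1-D histogram and the threshold of 16 from the text; the class and attribute names are hypothetical:

```python
import numpy as np

class ScanOrderUpdater:
    """Sketch of the occurrence frequency counting unit and updating unit:
    per prediction mode, count non-zero occurrences at each 1-D position
    and, once any position's count reaches the threshold, re-sort the
    scan order by descending frequency (a stable sort keeps the prior
    order for tied positions)."""

    def __init__(self, num_modes, block_len, initial_order, threshold=16):
        self.hist = np.zeros((num_modes, block_len), dtype=np.int64)
        self.order = {m: list(initial_order) for m in range(num_modes)}
        self.threshold = threshold

    def count(self, mode, quantized_coeff_seq):
        # histogram of non-zero occurrences per scan position
        self.hist[mode] += (np.asarray(quantized_coeff_seq) != 0)

    def maybe_update(self, mode):
        if self.hist[mode].max() >= self.threshold:
            # descending frequency; Python's sort is stable
            self.order[mode] = sorted(
                self.order[mode], key=lambda pos: -self.hist[mode][pos])
        return self.order[mode]
```

Because the encoder's coefficient order control unit runs the identical counting and sorting rules on the same decoded coefficients, both sides stay synchronized without transmitting the scan order.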
Note that the syntax configuration in this embodiment is the same as that in the fourth embodiment.
As a modification of this embodiment, the transform selection unit 5102 can also select MappedTransformIdx independently of the prediction information 5059. In this case, information indicating which of the nine orthogonal transforms or inverse orthogonal transforms was used is set in the decoding control unit 5012 and used by the inverse orthogonal transform unit 5004. FIG. 46 shows an example of the syntax in this embodiment. The directional_transform_idx shown in the syntax indicates which of the N orthogonal transforms was selected.
According to the fifth embodiment described above, since it includes an inverse orthogonal transform unit that is the same as or similar to that of the video encoding device according to the second embodiment, the same or similar effects as those of the video encoding device according to the second embodiment can be obtained.
(Sixth Embodiment)
<Video Decoding Device - Sixth Embodiment>
The video decoding device according to the sixth embodiment differs from the video decoding device according to the fourth embodiment described above in the details of the inverse orthogonal transform. In the following description, parts of this embodiment that are identical to those of the fourth embodiment are denoted by the same reference numerals, and the description focuses on the differences. The video encoding device corresponding to the video decoding device according to this embodiment is as described in the third embodiment.
As an embodiment of the inverse orthogonal transform unit 5004, it may be combined with the rotational transform described in JCTVC-B205_draft002, Section 5.3.5.2 "Rotational transformation process", JCT-VC 2nd Meeting, Geneva, July 2010.
<Inverse Orthogonal Transform Unit 5004>
FIG. 48 is a block diagram of the inverse orthogonal transform unit 5004 (105 in FIG. 48) according to this embodiment. The inverse orthogonal transform unit 5004 has new processing units, namely a first inverse rotational transform unit 4801, a second inverse rotational transform unit 4802, an Nth inverse rotational transform unit 4803, and an inverse discrete cosine transform unit 4804, together with the existing transform selection switch 4204. The restored transform coefficients 5053 (155 in FIG. 48) produced by the inverse quantization process are input to the transform selection switch 4204. Here, according to the transform selection information 5151 (4051 in FIG. 48), the transform selection switch 4204 connects the output of the switch to one of the first inverse rotational transform unit 4801, the second inverse rotational transform unit 4802, and the Nth inverse rotational transform unit 4803. Thereafter, the inverse rotational transform processing, corresponding to the same rotational transform used in the orthogonal transform unit 102 shown in FIG. 47, is performed in one of the inverse rotational transform units 4801 to 4803, and the result is output to the inverse discrete cosine transform unit 4804. The inverse discrete cosine transform unit 4804 applies, for example, the IDCT to the input signal to restore the restored prediction error signal 5054 (156 in FIG. 48). Although an example using the IDCT is shown here, an orthogonal transform such as the Hadamard transform or the discrete sine transform may be used, or a non-orthogonal transform may be used. In any case, the corresponding inverse transform is performed in conjunction with the orthogonal transform unit 102 shown in FIG. 47.
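The decoder path of this embodiment, inverse rotation followed by the IDCT, can be sketched as below. This is a minimal illustration in which the rotation matrix is assumed to be orthogonal and is applied separably; the function names and the random example rotation are hypothetical, not the rotation matrices of the cited JCT-VC document:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal type-II DCT matrix (rows are basis vectors), so the
    inverse DCT is simply its transpose."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def inverse_rotation_then_idct(restored_coeffs, rotation):
    """Decoder path sketched from FIG. 48: first undo the (orthogonal)
    rotational transform applied after the encoder's DCT, then apply
    the separable IDCT to restore the prediction error block."""
    derotated = rotation.T @ restored_coeffs @ rotation
    d = dct_matrix(restored_coeffs.shape[0])
    return d.T @ derotated @ d
```

On the encoder side the order is reversed (DCT, then rotation), so the two paths are exact inverses when the rotation matrix is orthogonal.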
The syntax in this embodiment is shown in FIG. 49. The rotational_transform_idx shown in the syntax indicates the number of the rotation matrix to be used.
According to the sixth embodiment described above, since it includes an inverse orthogonal transform unit that is the same as or similar to that of the image encoding device according to the third embodiment, the same or similar effects as those of the image encoding device according to the third embodiment can be obtained.
Hereinafter, modifications of the embodiments will be listed and introduced.
In the first to sixth embodiments, an example is described in which a frame is divided into rectangular blocks, such as 16 × 16 pixel blocks, and encoding/decoding is performed in order from the upper-left block of the screen toward the lower right (see FIG. 2A). However, the encoding order and the decoding order are not limited to this example. For example, encoding and decoding may be performed in order from the lower right toward the upper left, or in a spiral from the center of the screen toward the edges. Furthermore, encoding and decoding may be performed in order from the upper right toward the lower left, or in a spiral from the edges of the screen toward the center.
In the first to sixth embodiments, prediction target block sizes such as 4 × 4 pixel blocks, 8 × 8 pixel blocks, and 16 × 16 pixel blocks are used as examples, but the prediction target block need not have a uniform block shape. For example, the prediction target block (prediction unit) size may be a 16 × 8 pixel block, an 8 × 16 pixel block, an 8 × 4 pixel block, a 4 × 8 pixel block, and so on. Moreover, it is not necessary to unify all block sizes within one coding tree unit; a plurality of different block sizes may be mixed. When a plurality of different block sizes are mixed within one coding tree unit, the amount of code for encoding or decoding the division information increases with the number of divisions. It is therefore desirable to select the block size in consideration of the balance between the code amount of the division information and the quality of the locally decoded image or the decoded image.
In the first to sixth embodiments, for simplicity, a comprehensive description of the color signal components is given without distinguishing between the luminance signal and the chrominance signal. However, when the prediction process differs between the luminance signal and the chrominance signal, the same or different prediction methods may be used. If different prediction methods are used for the luminance signal and the chrominance signal, the prediction method selected for the chrominance signal can be encoded or decoded in the same manner as for the luminance signal.
Similarly, when the orthogonal transform process differs between the luminance signal and the chrominance signal, the same or different orthogonal transform methods may be used. If different orthogonal transform methods are used for the luminance signal and the chrominance signal, the orthogonal transform method selected for the chrominance signal can be encoded or decoded in the same manner as for the luminance signal.
In the first to sixth embodiments, syntax elements not defined in the embodiments may be inserted between the rows of the tables shown in the syntax configurations, and descriptions of other conditional branches may be included. Alternatively, a syntax table may be divided into, or integrated from, a plurality of tables. Moreover, the same terms need not necessarily be used and may be changed arbitrarily depending on the form of use.
As described above, each embodiment can realize highly efficient orthogonal transform and inverse orthogonal transform while alleviating the difficulties of hardware and software implementation. Hence, according to each embodiment, the coding efficiency improves, and in turn the subjective image quality also improves.
The instructions shown in the processing procedures of the above embodiments can be executed based on a program, that is, software. A general-purpose computer system can store this program in advance and, by reading the program, obtain the same effects as those of the video encoding device and the video decoding device of the above embodiments. The instructions described in the above embodiments are recorded, as a program executable by a computer, on a magnetic disk (flexible disk, hard disk, etc.), an optical disc (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), a semiconductor memory, or a similar recording medium. As long as the recording medium is readable by a computer or an embedded system, its storage format may be any form. If a computer reads the program from the recording medium and has its CPU execute the instructions described in the program, operation similar to that of the video encoding device and the video decoding device of the above embodiments can be realized. Of course, when the computer obtains or reads the program, it may obtain or read it via a network.
In addition, part of each process for realizing this embodiment may be executed by an OS (operating system) running on the computer, database management software, MW (middleware) such as network software, and the like, based on the instructions of the program installed from the recording medium onto the computer or the embedded system.
Furthermore, the recording medium in the present invention is not limited to a medium independent of the computer or the embedded system; it also includes a recording medium onto which a program transmitted via a LAN, the Internet, or the like is downloaded and stored or temporarily stored. The program realizing the processing of each of the above embodiments may also be stored on a computer (server) connected to a network such as the Internet and downloaded to a computer (client) via the network.
The number of recording media is not limited to one; the case where the processing in this embodiment is executed from a plurality of media is also included in the recording medium of the present invention, and the media may have any configuration.
Note that the computer or embedded system in the present invention is for executing each process in this embodiment based on the program stored in the recording medium, and may have any configuration, such as a single device like a personal computer or a microcomputer, or a system in which a plurality of devices are connected over a network.
The computer in the embodiments of the present invention is not limited to a personal computer; it also includes an arithmetic processing unit or a microcomputer included in an information processing device, and is a generic term for devices and apparatuses capable of realizing the functions of the embodiments of the present invention by a program.
While several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are included in the invention described in the claims and their equivalents.
DESCRIPTION OF SYMBOLS
100…image encoding device, 101…subtraction unit, 102…orthogonal transform unit, 103…quantization unit, 104…inverse quantization unit, 105…inverse orthogonal transform unit, 106…addition unit, 107…loop filter, 108…reference image memory, 109…intra prediction unit, 110…inter prediction unit, 111…prediction selection switch, 112…prediction selection unit, 113…entropy encoding unit, 114…output buffer, 115…encoding control unit, 116…intra prediction mode memory, 151…input image signal, 152…prediction error signal, 153…transform coefficients, 154…quantized transform coefficients, 155…restored transform coefficients, 156…restored prediction error signal, 157…decoded image signal, 158…filtered image signal, 159…reference image signal, 160…prediction information, 161…predicted image signal (directional predicted image signal), 162…encoded data, 163…intra prediction mode information, 164…reference intra prediction mode information, 601…unidirectional intra predicted image generation unit, 602…bidirectional intra predicted image generation unit, 603…prediction mode information setting unit, 604…selection switch, 605…bidirectional intra prediction mode generation unit, 651…prediction mode, 652…bidirectional intra prediction mode information, 1301…first unidirectional intra predicted image generation unit, 1302…second unidirectional intra predicted image generation unit, 1303…weighted averaging unit, 1351…first predicted image signal, 1352…second predicted image signal, 1901…first prediction mode generation unit, 1902…second prediction mode generation unit, 1903…selection switch, 2301…third prediction mode generation unit, 2302…prediction mode generation unit, 2701…primary image buffer, 2702…weighted averaging unit, 2800…syntax, 2801…high-level syntax, 2802…slice-level syntax, 2803…coding-tree-level syntax, 2804…sequence parameter set syntax, 2805…picture parameter set syntax, 2806…slice header syntax, 2807…slice data syntax, 2808…coding tree unit syntax, 2809…prediction unit syntax, 2810…transform unit syntax, 3301…reference pixel filter unit, 3351…filtered reference image signal, 3601…composite intra predicted image generation unit, 3602…selection switch, 3701…decoded pixel buffer (decoded image buffer), 3751…reference pixel (adjacent pixel), 3801…pixel-level prediction signal generation unit, 3802…composite intra prediction calculation unit, 3851…pixel-level prediction signal, 4001…transform selection unit, 4002…coefficient order control unit, 4051…transform selection information, 4052…quantized transform coefficient sequence, 4101…first orthogonal transform unit, 4102…second orthogonal transform unit, 4103…Nth orthogonal transform unit, 4104…transform selection switch, 4201…first inverse orthogonal transform unit, 4202…second inverse orthogonal transform unit, 4203…Nth inverse orthogonal transform unit, 4204…transform selection switch, 4401…first coefficient order conversion unit, 4402…second coefficient order conversion unit, 4403…Nth coefficient order conversion unit, 4404…coefficient order selection switch, 4501…occurrence frequency counting unit, 4502…updating unit, 4551…updated coefficient order, 4552…histogram, 4701…first rotational transform unit, 4702…second rotational transform unit, 4703…Nth rotational transform unit, 4704…discrete cosine transform unit, 4801…first inverse rotational transform unit, 4802…second inverse rotational transform unit, 4803…Nth inverse rotational transform unit, 4804…inverse discrete cosine transform unit, 5000…video decoding device, 5001…input buffer, 5002…entropy decoding unit, 5003…inverse quantization unit, 5004…inverse orthogonal transform unit, 5005…addition unit, 5006…loop filter, 5007…reference image memory, 5008…intra prediction unit, 5009…inter prediction unit, 5010…prediction selection switch, 5011…output buffer, 5012…decoding control unit, 5013…intra prediction mode memory, 5051…encoded data, 5052…quantized transform coefficients, 5053…restored transform coefficients, 5054…restored prediction error signal, 5055…decoded image signal, 5056…filtered image signal, 5057…reference image signal, 5058…predicted image signal (directional predicted image signal), 5059…prediction information, 5060…decoded image, 5061…intra prediction mode information, 5062…reference intra prediction mode information, 5100…video decoding device, 5101…coefficient order restoration unit, 5102…transform selection unit, 5151…transform selection information, 5152…quantized transform coefficient sequence, 5201…first coefficient order inverse conversion unit, 5202…second coefficient order inverse conversion unit, 5203…Nth coefficient order inverse conversion unit, 5204…coefficient order selection switch, 5301…occurrence frequency counting unit, 5302…updating unit, 5351…histogram, 5352…updated coefficient order.

Claims (20)

  1.  A video encoding method in which an input image signal is divided, according to quadtree partitioning, into pixel blocks represented by hierarchy depths, intra prediction is performed on the divided pixel blocks, a prediction error signal is generated, and transform coefficients are encoded, the method comprising:
     obtaining a reference prediction direction indicating the prediction direction of the intra prediction corresponding to at least one encoded pixel block;
     setting, from among the reference prediction directions, a first reference prediction direction as a first prediction direction and generating a first predicted image signal;
     setting a second prediction direction different from the first prediction direction and generating a second predicted image signal;
     generating a third predicted image signal by computing a weighted average of the first predicted image signal and the second predicted image signal according to a weight component;
     generating a prediction error signal from the third predicted image signal; and
     encoding the prediction error signal.
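The blending step in claim 1 — taking a first direction from a neighboring encoded block's intra mode, choosing a different second direction, and weight-averaging the two directional predictions — can be sketched as follows. The helper `predict_directional`, the two toy directions, and the fixed weight of 0.5 are illustrative assumptions, not the patent's actual predictor or weight table.

```python
import numpy as np

def predict_directional(ref_top, ref_left, size, direction):
    """Toy directional intra predictor: 'vertical' copies the top
    reference row, 'horizontal' copies the left reference column.
    A real codec supports many angular directions."""
    if direction == "vertical":
        return np.tile(ref_top[:size], (size, 1))
    if direction == "horizontal":
        return np.tile(ref_left[:size].reshape(-1, 1), (1, size))
    raise ValueError(direction)

def bidirectional_intra(ref_top, ref_left, size, first_dir, second_dir, w=0.5):
    """Claim 1 sketch: weighted average of two directional predictions."""
    p1 = predict_directional(ref_top, ref_left, size, first_dir)   # first predicted image signal
    p2 = predict_directional(ref_top, ref_left, size, second_dir)  # second predicted image signal
    return w * p1 + (1.0 - w) * p2                                 # third predicted image signal

# The first direction is taken from an encoded neighbor's intra mode
# (the "reference prediction direction" of claim 1).
reference_direction = "vertical"
pred = bidirectional_intra(np.arange(4.0), np.full(4, 8.0), 4,
                           first_dir=reference_direction, second_dir="horizontal")
```

The prediction error signal of the claim would then be the input block minus `pred`.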
  2.  The video encoding method according to claim 1, wherein generating the second predicted image signal sets, as the second prediction direction, one of:
     (A) a second reference prediction direction, which is the reference prediction direction in an encoded pixel block different from the encoded pixel block corresponding to the first reference prediction direction;
     (B) a prediction direction adjacent to the first prediction direction;
     (C) a prediction direction obtained by inverting the first prediction direction; and
     (D) a prediction direction obtained by converting the first prediction direction by a predetermined method.
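Under an assumed indexing of angular modes (here, 33 modes numbered 0 to 32 by angle, which is a hypothetical layout and not the patent's), the four derivation rules (A) through (D) of claim 2 might be sketched as:

```python
NUM_ANGULAR_MODES = 33  # hypothetical count of angular prediction directions

def second_direction_candidates(first_dir, other_block_dir):
    """Sketch of rules (A)-(D) in claim 2 for deriving the second
    prediction direction from the first. Mode indexing is illustrative."""
    return {
        "A_other_block": other_block_dir,                    # a different neighbor's reference direction
        "B_adjacent": (first_dir + 1) % NUM_ANGULAR_MODES,   # an adjacent angular mode
        "C_inverted": (NUM_ANGULAR_MODES - 1) - first_dir,   # direction flipped about the mode range
        "D_converted": (first_dir + NUM_ANGULAR_MODES // 2) % NUM_ANGULAR_MODES,  # a predetermined mapping
    }

cands = second_direction_candidates(first_dir=4, other_block_dir=10)
```

Any one of the four candidates would serve as the "second prediction direction different from the first" required by claim 1.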
  3.  The video encoding method according to claim 2, further comprising:
     selecting one selected first prediction direction combination from a plurality of first prediction direction combination candidates; and
     generating the third predicted image signal using the first prediction direction and the second prediction direction corresponding to the selected first prediction direction combination.
  4.  The video encoding method according to claim 3, further comprising:
     setting, from among a plurality of predetermined prediction directions, a second prediction direction combination that is a predetermined combination of a fourth prediction direction and a fifth prediction direction, and generating a fourth predicted image signal and a fifth predicted image signal;
     deriving, for the set second prediction direction combination, the relative distance between the reference pixel and the predicted pixel in each prediction direction, and deriving a difference value of the relative distances;
     deriving a predetermined weight component according to the difference value;
     generating a sixth predicted image signal by computing a weighted average of the fourth predicted image signal and the fifth predicted image signal according to the weight component;
     selecting one of the third predicted image signal and the sixth predicted image signal as a seventh predicted image signal;
     generating a prediction error signal from the seventh predicted image signal; and
     encoding the prediction error signal.
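Claim 4 derives the weight component from how far each predicted pixel lies from its reference pixel along each of the two directions. A minimal sketch, where both the distance model and the monotone difference-to-weight mapping are assumptions (the patent would use a predetermined table):

```python
import math

def reference_distance(px, py, angle_deg):
    """Hypothetical model: distance from pixel (px, py) inside the block
    to the reference samples along the prediction angle. Vertical-ish
    angles reach the top reference row, horizontal-ish the left column."""
    if 45 <= angle_deg <= 135:
        return (py + 1) / max(math.sin(math.radians(angle_deg)), 1e-6)
    return (px + 1) / max(math.cos(math.radians(angle_deg)), 1e-6)

def weight_from_distance_difference(px, py, angle4, angle5):
    """Claim 4 sketch: derive the difference of the two relative distances,
    then map it to a weight so the prediction whose reference pixel is
    closer contributes more (assumed mapping, not the patent's table)."""
    d4 = reference_distance(px, py, angle4)
    d5 = reference_distance(px, py, angle5)
    diff = d5 - d4
    return 1.0 / (1.0 + math.exp(-diff))  # weight applied to the fourth predicted signal

w = weight_from_distance_difference(px=0, py=3, angle4=90.0, angle5=0.0)
```

Here the vertical reference is four rows away while the horizontal reference is one column away, so the vertical (fourth) prediction receives the smaller weight.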
  5.  The video encoding method according to claim 4, further comprising setting, when bidirectional intra prediction is applied to the encoded pixel block, either the first prediction direction combination or the second prediction direction combination included in the bidirectional intra prediction as the reference prediction direction.
  6.  The video encoding method according to claim 5, further comprising, when the selected first prediction direction combination included in the sixth predicted image signal is identical to another first prediction direction combination candidate or to the second prediction direction combination:
     setting an eighth prediction direction and a ninth prediction direction forming a third prediction direction combination different from the other first prediction direction combination candidates and from the second prediction direction combination, and generating an eighth predicted image signal and a ninth predicted image signal, respectively;
     deriving, for the set third prediction direction combination, the relative distance between the reference pixel and the predicted pixel in each prediction direction, and deriving a difference value of the relative distances;
     deriving a predetermined weight component according to the difference value; and
     generating a tenth predicted image signal by computing a weighted average of the eighth predicted image signal and the ninth predicted image signal according to the weight component, and replacing the sixth predicted image signal with it.
  7.  The video encoding method according to claim 5, further comprising:
     obtaining the total number of the first prediction direction combinations and the second prediction direction combinations that are combinations of mutually different prediction directions; and
     encoding prediction mode information specifying the seventh predicted image signal with reference to a code table predetermined according to the total number.
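Claim 7 sizes the code table by the total number of distinct direction combinations. One natural (assumed) realization is a fixed-length binarization whose bit width depends on that total; the patent's actual table may be any predetermined code:

```python
import math

def encode_mode_index(index, total):
    """Sketch of claim 7: binarize a prediction-mode index with a code
    whose length depends on the total number of direction combinations.
    Fixed-length coding is an assumption, not the patent's table."""
    if total <= 1:
        return ""                        # only one combination: nothing to signal
    bits = max(1, math.ceil(math.log2(total)))
    return format(index, f"0{bits}b")    # bit string for the entropy coder

codeword = encode_mode_index(index=5, total=12)
```

With 12 combinations, 4 bits suffice; fewer combinations shrink the codeword, which is the benefit of making the table depend on the total.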
  8.  The video encoding method according to claim 6 or 7, further comprising:
     setting an eleventh prediction direction from among the plurality of predetermined prediction directions and generating an eleventh predicted image signal;
     selecting one of the seventh predicted image signal and the eleventh predicted image signal as a twelfth predicted image signal;
     generating a prediction error signal from the twelfth predicted image signal; and
     encoding the prediction error signal.
  9.  The video encoding method according to claim 8, further comprising, when the selected first prediction direction combination included in the sixth predicted image signal is identical to another first prediction direction combination candidate or to the second prediction direction combination:
     setting, from among the plurality of predetermined prediction directions, a twelfth prediction direction different from the eleventh prediction direction and generating a twelfth predicted image signal.
  10.  The video encoding method according to claim 9, further comprising:
     setting, when the prediction mode of the luminance signal in the pixel block specifies the seventh predicted image signal as the ninth predicted image signal, the fourth prediction direction and the fifth prediction direction included in the seventh predicted image signal for the chrominance signal of the same pixel block, and generating a chrominance predicted image signal; and
     setting, when the prediction mode of the luminance signal specifies the eighth predicted image signal as the ninth predicted image signal, the eighth prediction direction included in the eighth predicted image signal for the chrominance signal of the same pixel block, and generating a chrominance predicted image signal.
  11.  A video decoding method in which an input image signal is divided, according to quadtree partitioning, into pixel blocks represented by hierarchy depths, intra prediction is performed on the divided pixel blocks, a prediction error signal is generated, and transform coefficients are decoded, the method comprising:
     obtaining a reference prediction direction indicating the prediction direction of the intra prediction corresponding to at least one decoded pixel block;
     setting, from among the reference prediction directions, a first reference prediction direction as a first prediction direction and generating a first predicted image signal;
     setting a second prediction direction different from the first prediction direction and generating a second predicted image signal;
     generating a third predicted image signal by computing a weighted average of the first predicted image signal and the second predicted image signal according to a weight component;
     generating a prediction error signal from the third predicted image signal; and
     decoding the prediction error signal.
  12.  The video decoding method according to claim 11, wherein generating the second predicted image signal sets, as the second prediction direction, one of:
     (A) a second reference prediction direction, which is the reference prediction direction in a decoded pixel block different from the decoded pixel block corresponding to the first reference prediction direction;
     (B) a prediction direction adjacent to the first prediction direction;
     (C) a prediction direction obtained by inverting the first prediction direction; and
     (D) a prediction direction obtained by converting the first prediction direction by a predetermined method.
  13.  The video decoding method according to claim 12, further comprising:
     selecting one selected first prediction direction combination from a plurality of first prediction direction combination candidates; and
     generating the third predicted image signal using the first prediction direction and the second prediction direction corresponding to the selected first prediction direction combination.
  14.  The video decoding method according to claim 13, further comprising:
     setting, from among predetermined prediction directions, a second prediction direction combination that is a predetermined combination of a fourth prediction direction and a fifth prediction direction, and generating a fourth predicted image signal and a fifth predicted image signal;
     deriving, for the set second prediction direction combination, the relative distance between the reference pixel and the predicted pixel in each prediction direction, and deriving a difference value of the relative distances;
     deriving a predetermined weight component according to the difference value;
     generating a sixth predicted image signal by computing a weighted average of the fourth predicted image signal and the fifth predicted image signal according to the weight component;
     selecting one of the third predicted image signal and the sixth predicted image signal as a seventh predicted image signal;
     generating a prediction error signal from the seventh predicted image signal; and
     decoding the prediction error signal.
  15.  The video decoding method according to claim 14, further comprising setting, when bidirectional intra prediction is applied to the decoded pixel block, either the first prediction direction combination or the second prediction direction combination included in the bidirectional intra prediction as the reference prediction direction.
  16.  The video decoding method according to claim 15, further comprising, when the selected first prediction direction combination included in the sixth predicted image signal is identical to another first prediction direction combination candidate or to the second prediction direction combination:
     setting an eighth prediction direction and a ninth prediction direction forming a third prediction direction combination different from the other first prediction direction combination candidates and from the second prediction direction combination, and generating an eighth predicted image signal and a ninth predicted image signal, respectively;
     deriving, for the set third prediction direction combination, the relative distance between the reference pixel and the predicted pixel in each prediction direction, and deriving a difference value of the relative distances;
     deriving a predetermined weight component according to the difference value; and
     generating a tenth predicted image signal by computing a weighted average of the eighth predicted image signal and the ninth predicted image signal according to the weight component, and replacing the sixth predicted image signal with it.
  17.  The video decoding method according to claim 15, further comprising:
     obtaining the total number of the first prediction direction combinations and the second prediction direction combinations that are combinations of mutually different prediction directions; and
     decoding prediction mode information specifying the seventh predicted image signal with reference to a code table predetermined according to the total number.
  18.  The video decoding method according to claim 16 or 17, further comprising:
     setting an eleventh prediction direction from among the plurality of predetermined prediction directions and generating an eleventh predicted image signal;
     selecting one of the seventh predicted image signal and the eleventh predicted image signal as a twelfth predicted image signal;
     generating a prediction error signal from the twelfth predicted image signal; and
     decoding the prediction error signal.
  19.  The video decoding method according to claim 18, further comprising, when the selected first prediction direction combination included in the sixth predicted image signal is identical to another first prediction direction combination candidate or to the second prediction direction combination:
     setting, from among the plurality of predetermined prediction directions, a twelfth prediction direction different from the eleventh prediction direction and generating a twelfth predicted image signal.
  20.  The video decoding method according to claim 19, further comprising:
     setting, when the prediction mode of the luminance signal in the pixel block specifies the seventh predicted image signal as the ninth predicted image signal, the fourth prediction direction and the fifth prediction direction included in the seventh predicted image signal for the chrominance signal of the same pixel block, and generating a chrominance predicted image signal; and
     setting, when the prediction mode of the luminance signal specifies the eighth predicted image signal as the ninth predicted image signal, the eighth prediction direction included in the eighth predicted image signal for the chrominance signal of the same pixel block, and generating a chrominance predicted image signal.
PCT/JP2010/073630 2010-12-27 2010-12-27 Video image encoding method, and video image decoding method WO2012090286A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/073630 WO2012090286A1 (en) 2010-12-27 2010-12-27 Video image encoding method, and video image decoding method


Publications (1)

Publication Number Publication Date
WO2012090286A1

Family

ID=46382436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/073630 WO2012090286A1 (en) 2010-12-27 2010-12-27 Video image encoding method, and video image decoding method

Country Status (1)

Country Link
WO (1) WO2012090286A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014050971A1 (en) * 2012-09-28 2014-04-03 日本電信電話株式会社 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
CN109417629A (en) * 2016-07-12 2019-03-01 韩国电子通信研究院 Image coding/decoding method and recording medium for this method

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2008084817A1 (en) * 2007-01-09 2008-07-17 Kabushiki Kaisha Toshiba Image encoding and decoding method and device


Non-Patent Citations (2)

Title
SHIODERA TAICHIRO ET AL.: "Improvement of Bidirectional Intra Prediction (VCEG-AG08)", ITU-T SG16 Q.6 VCEG, 20 October 2007 (2007-10-20), pages 1 - 19, Retrieved from the Internet <URL:http://wftp3.itu.int/av-arch/video-site/0710She/VCEG-AG08.zip> [retrieved on 20110225] *
TAKESHI CHUJOH ET AL.: "Description of video coding technology proposal by TOSHIBA (JCTVC-A117r1)", ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, 15 April 2010 (2010-04-15), pages 1 - 38, Retrieved from the Internet <URL:http://wftp3.itu.int/av-arch/jctvc-site/2010_04_A_Dresden/JCTVC-A117rl.doc> [retrieved on 20110225] *

Cited By (11)

Publication number Priority date Publication date Assignee Title
WO2014050971A1 (en) * 2012-09-28 2014-04-03 日本電信電話株式会社 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
KR20150042268A (en) * 2012-09-28 2015-04-20 니폰 덴신 덴와 가부시끼가이샤 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
CN104838650A (en) * 2012-09-28 2015-08-12 日本电信电话株式会社 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefore and recoding mediums on which programs are recorded
JP5841670B2 (en) * 2012-09-28 2016-01-13 日本電信電話株式会社 Intra-prediction encoding method, intra-prediction decoding method, intra-prediction encoding device, intra-prediction decoding device, their program, and recording medium recording the program
JP2016027756A (en) * 2012-09-28 2016-02-18 日本電信電話株式会社 Intra-prediction coding method, intra-prediction decoding method, intra-prediction encoder, intra-prediction decoder, their program and recording medium recording them
EP2890130A4 (en) * 2012-09-28 2016-04-20 Nippon Telegraph & Telephone Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
KR101650532B1 (en) 2012-09-28 2016-08-23 니폰 덴신 덴와 가부시끼가이샤 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
US9813709B2 (en) 2012-09-28 2017-11-07 Nippon Telegraph And Telephone Corporation Intra-prediction encoding method, intra-prediction decoding method, intra-prediction encoding apparatus, intra-prediction decoding apparatus, program therefor and recording medium having program recorded thereon
CN104838650B (en) * 2012-09-28 2018-03-30 日本电信电话株式会社 Intra-frame predictive encoding method, infra-frame prediction decoding method, intraframe predictive coding device, the recording medium of infra-frame prediction decoding device and logging program
CN109417629A (en) * 2016-07-12 2019-03-01 韩国电子通信研究院 Image coding/decoding method and recording medium for this method
CN109417629B (en) * 2016-07-12 2023-07-14 韩国电子通信研究院 Image encoding/decoding method and recording medium therefor

Similar Documents

Publication Publication Date Title
US11936858B1 (en) Constrained position dependent intra prediction combination (PDPC)
KR102125956B1 (en) Method for encoding/decoing video using intra prediction and apparatus for the same
Han et al. Improved video compression efficiency through flexible unit representation and corresponding extension of coding tools
US9392282B2 (en) Moving-picture encoding apparatus and moving-picture decoding apparatus
WO2012035640A1 (en) Moving picture encoding method and moving picture decoding method
WO2011125256A1 (en) Image encoding method and image decoding method
WO2012148139A2 (en) Method for managing a reference picture list, and apparatus using same
US20120128064A1 (en) Image processing device and method
JP7124222B2 (en) Method and apparatus for color conversion in VVC
WO2012120661A1 (en) Video image encoding method and video image decoding method
TW202135530A (en) Method, apparatus and system for encoding and decoding a block of video samples
JP2024026317A (en) Methods, apparatus and programs for decoding and encoding encoding units
JP2023504333A (en) Method, Apparatus and System for Encoding and Decoding Coding Tree Units
JP2023159400A (en) Method, device and system for encoding and decoding block of video sample
KR20220032620A (en) Method, apparatus and system for encoding and decoding a block of video samples
WO2012090286A1 (en) Video image encoding method, and video image decoding method
JP6871447B2 (en) Moving image coding method and moving image decoding method
WO2012172667A1 (en) Video encoding method, video decoding method, and device
JP5367161B2 (en) Image encoding method, apparatus, and program
JP6871343B2 (en) Moving image coding method and moving image decoding method
JP6871442B2 (en) Moving image coding method and moving image decoding method
JP6510084B2 (en) Moving picture decoding method and electronic apparatus
WO2012081706A1 (en) Image filter device, filter device, decoder, encoder, and data structure
JP2024056945A (en) Method, apparatus and program for decoding and encoding coding units
JP5649701B2 (en) Image decoding method, apparatus, and program

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10861313; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10861313; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)