WO2012035640A1 - Procédé de codage d'image en mouvement et procédé de décodage d'image en mouvement - Google Patents

Moving image encoding method and moving image decoding method

Info

Publication number
WO2012035640A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2010/066102
Other languages
English (en)
Japanese (ja)
Inventor
Akiyuki Tanizawa (谷沢 昭行)
Taichiro Shiodera (塩寺 太一郎)
Original Assignee
Kabushiki Kaisha Toshiba (株式会社 東芝)
Application filed by Kabushiki Kaisha Toshiba (株式会社 東芝)
Priority to PCT/JP2010/066102 priority Critical patent/WO2012035640A1/fr
Publication of WO2012035640A1 publication Critical patent/WO2012035640A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96 Tree coding, e.g. quad-tree coding

Definitions

  • Embodiments of the present invention relate to an intra-screen prediction method, a video encoding method, and a video decoding method in video encoding and decoding.
  • H.264 achieves higher prediction efficiency than the in-screen prediction (hereinafter referred to as intra prediction) of ISO/IEC MPEG-1, 2, and 4 by incorporating directional prediction in the spatial (pixel) domain.
  • JCT-VC: Joint Collaborative Team on Video Coding.
  • In Non-Patent Document 1, a prediction value is generated at an individual prediction angle for each of a plurality of prediction modes and copied in the prediction direction, so a texture whose luminance gradient changes smoothly within a pixel block, or a video with gradation, cannot be predicted efficiently, and the prediction error may increase.
  • Accordingly, an object of the present embodiment is to provide a moving image encoding device and a moving image decoding device including a predicted image generating device capable of improving encoding efficiency.
  • The moving image encoding method divides an input image signal into pixel blocks represented by hierarchical depths according to quadtree division, generates prediction error signals for the divided pixel blocks, and encodes the resulting transform coefficients. The method includes: a step of setting a first prediction direction from a plurality of prediction direction sets to create a first predicted image signal; a step of setting, from the plurality of prediction direction sets, a second prediction direction different from the first prediction direction to create a second predicted image signal; a step of deriving, for each of the first and second prediction directions, the relative distance between the pixel to be predicted and its reference pixel, deriving the difference value of the relative distances, and deriving a predetermined weight component according to the difference value; a step of weighted-averaging the first unidirectional intra-predicted image and the second unidirectional intra-predicted image according to the weight component to generate a third predicted image signal; a step of generating a prediction error signal from the third predicted image signal; and a step of encoding the prediction error signal.
  • The moving picture decoding method divides an input image signal into pixel blocks expressed by hierarchical depths according to quadtree division and performs decoding on the divided pixel blocks. It includes a step of weighted-averaging the first unidirectional intra-predicted image and the second unidirectional intra-predicted image to generate a third predicted image signal, and a step of generating a decoded image signal from the third predicted image signal.
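The weighted-average step described above can be sketched as follows. This is a minimal illustration, not the patent's normative procedure: the mapping from the distance difference to the weight (the clip and scale constants) is an assumption here, since the actual weight components are defined by the tables in the figures.

```python
import numpy as np

def bidirectional_intra_blend(pred0, pred1, dist0, dist1, shift=6):
    """Blend two unidirectional intra predictions pixel by pixel.

    pred0, pred1 : first and second unidirectional prediction blocks
    dist0, dist1 : per-pixel relative distances between each predicted
                   pixel and its reference pixel for each direction
    The weight leans toward the direction whose reference pixel is
    closer (smaller relative distance); the exact weight mapping used
    below is a hypothetical stand-in for the document's weight table.
    """
    diff = dist1 - dist0                  # difference of relative distances
    total = 1 << shift
    # hypothetical mapping: shift the midpoint weight by the difference
    w0 = np.clip((total >> 1) + diff * 4, 0, total)
    w1 = total - w0
    # integer weighted average with rounding
    return (pred0 * w0 + pred1 * w1 + (total >> 1)) >> shift
```

With equal distances the blend reduces to a plain average; as one direction's reference pixel gets closer, its prediction dominates.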
  • FIG. 1 is a block diagram illustrating a moving image encoding apparatus according to the first embodiment.
  • Explanatory drawing of the predictive encoding order of pixel blocks.
  • Explanatory drawing of an example of a pixel block size.
  • Explanatory drawings of other examples of pixel block sizes.
  • Explanatory drawing of an example of pixel blocks in a coding tree unit.
  • Explanatory drawings of other examples of pixel blocks in a coding tree unit.
  • (a) Explanatory drawing of the intra prediction modes; (b) explanatory drawing of the reference pixels and prediction pixels of intra prediction; (c) explanatory drawing of the horizontal prediction mode; (d) explanatory drawing of the diagonal down-right prediction mode.
  • Block diagram illustrating the intra prediction unit according to the first embodiment.
  • Explanatory drawing of the numbers of unidirectional and bidirectional intra predictions according to the first embodiment.
  • Explanatory drawing illustrating the prediction directions according to the first embodiment.
  • Tables illustrating the relationship among prediction mode, prediction type, and bidirectional intra prediction (three figures).
  • Tables illustrating a correspondence (two figures).
  • Block diagrams showing examples of the method of calculating the city-block distance according to the first embodiment (four figures).
  • Table illustrating the relationship between the prediction mode and the distance of the prediction pixel position according to the first embodiment.
  • Table illustrating the mapping between prediction modes and a distance table according to the first embodiment.
  • Tables illustrating the relationship between the relative distance and the weight component according to the first embodiment (two figures).
  • Explanatory drawing of a syntax structure.
  • Explanatory drawing of a slice header syntax.
  • Explanatory drawings showing examples of a prediction unit syntax (four figures).
  • Flowchart showing an example of the method of calculating intraPredModeN.
  • Tables showing relationships used when predicting the prediction mode (two figures).
  • Explanatory drawings showing examples of the prediction unit syntax in the first modification of the first embodiment (two figures).
  • Explanatory drawing showing an example of a pixel-level predicted value generation method.
  • Block diagram showing an example of the composite intra predicted image generation unit according to the first embodiment.
  • Explanatory drawings showing examples of the prediction unit syntax according to the first embodiment (two figures).
  • Block diagram illustrating the moving picture encoding device according to the second embodiment.
  • Block diagram illustrating the orthogonal transformation unit according to the second embodiment.
  • Block diagram illustrating the inverse orthogonal transformation unit according to the second embodiment.
  • Table showing the relationship between the prediction mode and a transform index according to the second embodiment.
  • Block diagrams illustrating coefficient order control units according to the second embodiment (two figures).
  • Explanatory drawing showing an example of the transform unit syntax according to the second embodiment.
  • Block diagram showing another example of the orthogonal transformation unit according to the third embodiment.
  • Block diagram showing an example of the inverse orthogonal transformation unit according to the third embodiment.
  • Explanatory drawing showing an example of the transform unit syntax according to the third embodiment.
  • Explanatory drawing showing an example of the unidirectional intra prediction modes, prediction types, and prediction angle parameters.
  • Block diagram showing an example of the moving image decoding apparatus according to the fourth embodiment.
  • Block diagram showing an example of the moving image decoding apparatus according to the fifth embodiment.
  • Block diagrams illustrating coefficient order restoration units according to the fifth embodiment (two figures).
  • The first embodiment relates to an image encoding device.
  • A moving picture decoding apparatus corresponding to the image encoding apparatus according to the present embodiment will be described in the fourth embodiment.
  • This image encoding device can be realized by hardware such as an LSI (Large-Scale Integration) chip, a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array).
  • the image encoding apparatus can also be realized by causing a computer to execute an image encoding program.
  • The image encoding apparatus includes a subtraction unit 101, an orthogonal transformation unit 102, a quantization unit 103, an inverse quantization unit 104, an inverse orthogonal transformation unit 105, an addition unit 106, a loop filter 107, a reference image memory 108, an intra prediction unit 109, an inter prediction unit 110, a prediction selection switch 111, a prediction selection unit 112, an entropy encoding unit 113, an output buffer 114, and an encoding control unit 115.
  • The image encoding apparatus in FIG. 1 divides each frame or field constituting the input image signal 116 into a plurality of pixel blocks, performs predictive encoding on the divided pixel blocks, and outputs encoded data 127.
  • pixel blocks are predictively encoded from the upper left to the lower right as shown in FIG. 2A.
  • the encoded pixel block p is located on the left side and the upper side of the encoding target pixel block c in the encoding processing target frame f.
  • the pixel block refers to a unit for processing an image such as an M ⁇ N size block (N and M are natural numbers), a coding tree unit, a macro block, a sub block, and one pixel.
  • the pixel block is basically used in the meaning of the coding tree unit.
  • the pixel block can be interpreted in the above-described meaning by appropriately replacing the description.
  • The coding tree unit is typically, for example, the 16 × 16 pixel block shown in FIG. 2B, but it may be the 32 × 32 pixel block shown in FIG. 2C, the 64 × 64 pixel block shown in FIG. 2D, an 8 × 8 pixel block (not shown), or a 4 × 4 pixel block.
  • the coding tree unit need not necessarily be square.
  • the encoding target block or coding tree unit of the input image signal 116 may be referred to as a “prediction target block”.
  • the coding unit is not limited to a pixel block such as a coding tree unit, and a frame, a field, a slice, or a combination thereof can be used.
  • FIGS. 3A to 3D are diagrams showing specific examples of coding tree units.
  • N represents the size of the reference coding tree unit.
  • the coding tree unit has a quadtree structure, and when divided, the four pixel blocks are indexed in the Z-scan order.
  • FIG. 3B shows an example in which the 64 ⁇ 64 pixel block in FIG. 3A is divided into quadtrees.
  • the numbers shown in the figure represent the Z scan order.
  • the depth of division is defined by Depth.
  • The unit having the largest coding tree size is called the largest coding tree unit, and the input image signal is encoded in raster-scan order in units of this size.
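The quadtree division and Z-scan ordering described above can be illustrated with a small recursive generator. This is a sketch only: the split decision is left as a caller-supplied callback, whereas in a real encoder it would come from rate-distortion decisions.

```python
def quadtree_blocks(x, y, size, depth, max_depth, split_decision):
    """Recursively divide a coding-tree-unit-like block in quadtree
    fashion, yielding (x, y, size, depth) leaves in Z-scan order
    (top-left, top-right, bottom-left, bottom-right).

    split_decision(x, y, size, depth) -> bool decides whether to split
    the current block further (hypothetical callback for illustration).
    """
    if depth < max_depth and split_decision(x, y, size, depth):
        half = size // 2
        # visit the four sub-blocks in Z-scan order
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            yield from quadtree_blocks(x + dx, y + dy, half,
                                       depth + 1, max_depth, split_decision)
    else:
        yield (x, y, size, depth)
```

Splitting a 64 × 64 block once, for example, yields four 32 × 32 blocks at Depth 1 in Z-scan order.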
  • The image encoding apparatus in FIG. 1 performs intra prediction (also referred to as in-screen prediction, intra-frame prediction, etc.) or inter prediction (also referred to as inter-screen prediction, inter-frame prediction, motion-compensated prediction, etc.) on a pixel block based on the encoding parameters input from the encoding control unit 115, and generates the predicted image signal 126.
  • This image encoding apparatus orthogonally transforms and quantizes the prediction error signal 117 between the pixel block (input image signal 116) and the predicted image signal 126, performs entropy encoding to generate encoded data 127, and outputs it.
  • the image encoding device in FIG. 1 performs encoding by selectively applying a plurality of prediction modes having different block sizes and generation methods of the predicted image signal 126.
  • The methods for generating the predicted image signal 126 can be broadly divided into two types: intra prediction, in which prediction is performed within the encoding target frame, and inter prediction, in which prediction is performed using one or more temporally different reference frames.
  • the subtraction unit 101 subtracts the corresponding prediction image signal 126 from the encoding target block of the input image signal 116 to obtain a prediction error signal 117.
  • the subtraction unit 101 inputs the prediction error signal 117 to the orthogonal transformation unit 102.
  • the orthogonal transform unit 102 performs an orthogonal transform such as discrete cosine transform (DCT) on the prediction error signal 117 from the subtraction unit 101 to obtain a transform coefficient 118.
  • the orthogonal transform unit 102 inputs the transform coefficient 118 to the quantization unit 103.
  • the quantization unit 103 performs quantization on the transform coefficient from the orthogonal transform unit 102 to obtain a quantized transform coefficient 119. Specifically, the quantization unit 103 performs quantization according to quantization information such as a quantization parameter and a quantization matrix specified by the encoding control unit 115.
  • the quantization parameter indicates the fineness of quantization.
  • the quantization matrix is used for weighting the fineness of quantization for each component of the transform coefficient.
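How the quantization parameter and the quantization matrix interact can be sketched as follows. The "step size doubles every 6 QP" mapping and the normalization of the matrix by 16 are common conventions assumed for illustration, not values taken from this document.

```python
import numpy as np

def quantize(coeffs, qp, qmatrix):
    """Quantize transform coefficients.

    qp      : quantization parameter; larger QP means coarser steps
              (assumed mapping: step size doubles every 6 QP)
    qmatrix : per-component weights on the step size, so different
              frequency components are quantized with different fineness
    """
    step = 2.0 ** (qp / 6.0)                  # coarseness from QP
    return np.round(coeffs / (step * qmatrix / 16.0)).astype(np.int64)

def dequantize(levels, qp, qmatrix):
    """Invert the scaling (up to the rounding loss of quantization)."""
    step = 2.0 ** (qp / 6.0)
    return levels * (step * qmatrix / 16.0)
```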
  • the quantization unit 103 inputs the quantized transform coefficient 119 to the entropy encoding unit 113 and the inverse quantization unit 104.
  • The entropy encoding unit 113 entropy-encodes (for example, by Huffman coding or arithmetic coding) various encoding parameters such as the quantized transform coefficient 119 from the quantization unit 103, the prediction information 125 from the prediction selection unit 112, and the quantization information specified by the encoding control unit 115, and generates encoded data.
  • the encoding parameter is a parameter necessary for decoding, such as prediction information 125, information on transform coefficients, information on quantization, and the like.
  • The encoding control unit 115 has an internal memory (not shown); the encoding parameters are held in this memory, and the encoding parameters of already encoded adjacent pixel blocks may be used when encoding the prediction target block. For example, in H.264 intra prediction, the prediction value of the prediction mode of the prediction target block can be derived from the prediction mode information of the encoded adjacent blocks.
  • The encoded data generated by the entropy encoding unit 113 is temporarily accumulated in the output buffer 114, for example through multiplexing, and is output as encoded data 127 according to an appropriate output timing managed by the encoding control unit 115.
  • the encoded data 127 is output to, for example, a storage system (storage medium) or a transmission system (communication line) not shown.
  • the inverse quantization unit 104 performs inverse quantization on the quantized transform coefficient 119 from the quantization unit 103 to obtain a restored transform coefficient 120. Specifically, the inverse quantization unit 104 performs inverse quantization according to the quantization information used in the quantization unit 103. The quantization information used in the quantization unit 103 is loaded from the internal memory of the encoding control unit 115. The inverse quantization unit 104 inputs the restored transform coefficient 120 to the inverse orthogonal transform unit 105.
  • the inverse orthogonal transform unit 105 performs an inverse orthogonal transform corresponding to the orthogonal transform performed in the orthogonal transform unit 102 such as an inverse discrete cosine transform on the restored transform coefficient 120 from the inverse quantization unit 104, A restoration prediction error signal 121 is obtained.
  • the inverse orthogonal transform unit 105 inputs the restored prediction error signal 121 to the addition unit 106.
  • the addition unit 106 adds the restored prediction error signal 121 and the corresponding prediction image signal 126 to generate a local decoded image signal 122.
  • the decoded image signal 122 is input to the loop filter 107.
  • the loop filter 107 performs a deblocking filter, a Wiener filter, or the like on the input decoded image signal 122 to generate a filtered image signal 123.
  • the generated filtered image signal 123 is input to the reference image memory 108.
  • The reference image memory 108 stores the locally decoded, filtered image signal 123, which is referenced as the reference image signal 124 each time the intra prediction unit 109 or the inter prediction unit 110 generates a predicted image as necessary.
  • the intra prediction unit 109 performs intra prediction using the reference image signal 124 stored in the reference image memory 108.
  • In H.264, an intra-predicted image is generated by pixel interpolation (copying, or copying after interpolation) along a prediction direction such as the vertical or horizontal direction, using encoded reference pixel values adjacent to the prediction target block. FIG. 5A shows the prediction directions of intra prediction in H.264, and FIG. 5B shows the arrangement relationship between reference pixels and encoding target pixels in H.264.
  • FIG. 5C illustrates a predicted image generation method in mode 1 (horizontal prediction)
  • FIG. 5D illustrates a predicted image generation method in mode 4 (diagonal lower right prediction).
  • In Non-Patent Document 1, the prediction directions of H.264 are further expanded to 34 directions to increase the number of prediction modes. A predicted pixel value is created by linear interpolation with 1/32-pixel accuracy according to the prediction angle and is copied in the prediction direction. Details of the intra prediction unit 109 used in the present embodiment will be described later.
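This kind of directional prediction can be sketched for a single predicted row as follows. Only the idea of copying along an angle with 1/32-pixel linear interpolation is taken from the text; the angle parameterization and the rounding convention below are assumptions for illustration.

```python
def angular_predict_row(ref, block_w, angle):
    """Predict one row of pixels from the reference row above the block.

    ref     : reference pixel values above the block, left to right
    block_w : number of pixels to predict in the row
    angle   : horizontal displacement in 1/32-pixel units (hypothetical
              parameterization; the actual angle set is in the figures)
    """
    out = []
    for x in range(block_w):
        pos = x * 32 + angle              # position in 1/32-pel units
        idx, frac = pos >> 5, pos & 31    # integer and fractional parts
        # linear interpolation between the two nearest reference pixels
        val = ((32 - frac) * ref[idx] + frac * ref[idx + 1] + 16) >> 5
        out.append(val)
    return out
```

An angle of 0 copies the reference row straight down; a nonzero angle shifts and interpolates.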
  • The inter prediction unit 110 performs inter prediction using the reference image signal 124 stored in the reference image memory 108. Specifically, the inter prediction unit 110 performs block matching between the prediction target block and the reference image signal 124 to derive the amount of motion shift (a motion vector). The inter prediction unit 110 then performs interpolation (motion compensation) based on the motion vector to generate an inter-predicted image. In H.264, interpolation up to 1/4-pixel accuracy is possible.
  • the derived motion vector is entropy encoded as part of the prediction information 125.
  • The prediction selection switch 111 selects the output terminal of the intra prediction unit 109 or of the inter prediction unit 110 according to the prediction information 125 from the prediction selection unit 112, and supplies the intra-predicted image or the inter-predicted image, as the predicted image signal 126, to the subtraction unit 101 and the addition unit 106.
  • When intra prediction is selected, the prediction selection switch 111 connects to the output terminal of the intra prediction unit 109; when inter prediction is selected, it connects to the output terminal of the inter prediction unit 110.
  • the prediction selection unit 112 has a function of setting the prediction information 125 according to the prediction mode controlled by the encoding control unit 115. As described above, intra prediction or inter prediction can be selected for generating the predicted image signal 126, but a plurality of modes can be further selected for each of intra prediction and inter prediction.
  • The encoding control unit 115 determines one of the plurality of intra and inter prediction modes as the optimal prediction mode, and the prediction selection unit 112 sets the prediction information 125 according to the determined optimal prediction mode.
  • Prediction mode information is designated to the intra prediction unit 109 by the encoding control unit 115, and the intra prediction unit 109 generates a predicted image signal 126 according to the prediction mode information.
  • the encoding control unit 115 may specify a plurality of prediction mode information in order from the smallest prediction mode number, or may specify a plurality of prediction mode information in order from the largest.
  • the encoding control unit 115 may limit the prediction mode according to the characteristics of the input image.
  • the encoding control unit 115 does not necessarily specify all prediction modes, and may specify at least one prediction mode information for the encoding target block.
  • The encoding control unit 115 determines an optimal prediction mode using the cost function given by the following formula (1): K = SAD + λ × OH (1)
  • In formula (1) (hereinafter, the simple encoding cost), OH denotes the code amount of the prediction information 125 (for example, motion vector information and prediction block size information), and SAD denotes the sum of absolute differences between the prediction target block and the predicted image signal 126 (that is, the cumulative sum of the absolute values of the prediction error signal 117). λ denotes a Lagrange multiplier determined based on the quantization information (quantization parameter), and K denotes the encoding cost. When formula (1) is used, the prediction mode that minimizes the encoding cost K is determined as the optimal prediction mode from the viewpoint of the generated code amount and the prediction error. As a modification of formula (1), the encoding cost may be estimated from OH alone or SAD alone, or using a value obtained by applying a Hadamard transform to SAD, or an approximation thereof.
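The simple encoding cost K = SAD + λ × OH can be computed as in the following sketch (flat pixel lists are used for brevity; the λ value would be derived from the quantization parameter as the text describes):

```python
def simple_coding_cost(block, prediction, overhead_bits, lam):
    """Simple encoding cost K = SAD + lambda * OH.

    block, prediction : pixel values of the prediction target block and
                        the predicted image, as flat sequences
    overhead_bits     : OH, the code amount of the prediction information
    lam               : Lagrange multiplier derived from the QP
    """
    # SAD: cumulative sum of the absolute prediction error
    sad = sum(abs(a - b) for a, b in zip(block, prediction))
    return sad + lam * overhead_bits
```

The mode with the smallest K is then taken as the candidate, with no local decoding required.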
  • Alternatively, the encoding control unit 115 may determine an optimal prediction mode using the cost function given by the following formula (2): J = D + λ × R (2)
  • In formula (2), D denotes the sum of squared errors (that is, the encoding distortion) between the prediction target block and the locally decoded image, and R denotes the code amount, estimated by provisional encoding, of the prediction error between the prediction target block and the predicted image signal 126 in that prediction mode; J denotes the encoding cost.
  • When formula (2) is used, provisional encoding and local decoding are required for each prediction mode, so the circuit scale or the amount of computation increases.
  • On the other hand, since the encoding cost J is derived from more accurate encoding distortion and code amounts, it is easy to determine the optimal prediction mode with high accuracy and maintain high encoding efficiency.
  • the encoding cost may be estimated from only R or D, or the encoding cost may be estimated using an approximate value of R or D. These costs may be used hierarchically.
  • The encoding control unit 115 may also narrow down the number of prediction mode candidates in advance, based on information obtained beforehand about the prediction target block (the prediction modes of surrounding pixel blocks, image analysis results, and the like), before performing the determination using formula (1) or formula (2).
  • The number of prediction mode candidates can be further reduced while maintaining encoding performance by performing a two-stage mode determination that combines formula (1) and formula (2).
  • the simple encoding cost represented by the formula (1) does not require a local decoding process, and can be calculated at high speed.
  • In the present embodiment, the number of prediction modes is large even compared with H.264, so mode determination using the detailed encoding cost for all modes is not realistic. Therefore, as a first step, mode determination using the simple encoding cost is performed on the prediction modes available for the pixel block, and prediction mode candidates are derived.
  • the number of prediction mode candidates is changed using the property that the correlation between the simple coding cost and the detailed coding cost increases as the value of the quantization parameter that determines the roughness of quantization increases.
  • FIG. 4 shows the number of prediction mode candidates selected in the first step.
  • PuSize is an index indicating the size of a pixel block (sometimes referred to as a prediction unit) that performs prediction described later.
  • QP represents the quantization parameter, and the number of prediction mode candidates changes depending on the quotient of QP divided by 5. Since the detailed encoding cost need only be derived for the prediction mode candidates narrowed down in this way, the number of local decoding processes can be greatly reduced.
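The two-stage narrowing described above can be sketched as follows. The cost functions are supplied by the caller, standing in for formulas (1) and (2), and the candidate count would in practice be looked up from the table of FIG. 4 by PuSize and QP.

```python
def two_stage_mode_decision(modes, simple_cost, detailed_cost, num_candidates):
    """Two-stage prediction mode decision combining formulas (1) and (2).

    Stage 1 ranks all available modes by the fast simple cost (no local
    decoding needed) and keeps the best num_candidates; stage 2 picks
    the winner among them by the expensive detailed cost J = D + lambda*R.
    """
    # stage 1: cheap narrowing with the simple encoding cost
    candidates = sorted(modes, key=simple_cost)[:num_candidates]
    # stage 2: exact decision with the detailed encoding cost
    return min(candidates, key=detailed_cost)
```

Only num_candidates modes undergo provisional encoding and local decoding, which is where the computational saving comes from.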
  • The encoding control unit 115 controls each element of the image encoding apparatus in FIG. 1. Specifically, the encoding control unit 115 performs various controls for the encoding process, including the operations described above.
  • the intra prediction unit 109 illustrated in FIG. 6 includes a unidirectional intra predicted image generation unit 601, a bidirectional intra predicted image generation unit 602, a prediction mode information setting unit 603, and a selection switch 604.
  • the reference image signal 124 is input from the reference image memory 108 to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • The prediction mode information setting unit 603 sets the prediction mode to be generated by the unidirectional intra predicted image generation unit 601 or the bidirectional intra predicted image generation unit 602, and outputs the prediction mode 605.
  • The selection switch 604 has a function of switching between the output terminals of the respective intra predicted image generation units according to the prediction mode 605. If the input prediction mode 605 is a unidirectional intra prediction mode, the switch is connected to the output terminal of the unidirectional intra predicted image generation unit 601; if it is a bidirectional intra prediction mode, the switch is connected to the output terminal of the bidirectional intra predicted image generation unit 602. Meanwhile, each of the intra predicted image generation units 601 and 602 generates the predicted image signal 126 according to the prediction mode 605. The generated predicted image signal 126 (also referred to as a fifth predicted image signal) is output from the intra prediction unit 109. The output signal of the unidirectional intra predicted image generation unit 601 is also called a fourth predicted image signal, and the output signal of the bidirectional intra predicted image generation unit 602 is also called a third predicted image signal.
  • FIG. 7 shows the number of prediction modes according to the block size according to the present embodiment of the present invention.
  • PuSize indicates the pixel block (prediction unit) size to be predicted, and seven types of sizes from PU_2x2 to PU_128x128 are defined.
  • IntraUniModeNum represents the number of prediction modes for unidirectional intra prediction
  • IntraBiModeNum represents the number of prediction modes for bidirectional intra prediction.
  • Number of modes is the total number of prediction modes for each pixel block (prediction unit) size.
  • FIG. 9 shows the relationship between the prediction mode and the prediction method when PuSize is PU_8x8, PU_16x16, and PU_32x32.
• FIG. 10 shows a case where PuSize is PU_4x4
• FIG. 11 shows a case where PuSize is PU_64x64 or PU_128x128.
  • IntraPredMode indicates a prediction mode number
• IntraBipredFlag is a flag indicating whether or not the prediction is bidirectional intra prediction. When the flag is 1, the prediction mode is the bidirectional intra prediction mode; when the flag is 0, the prediction mode is a unidirectional intra prediction mode.
  • IntraPredTypeLX indicates the prediction type of intra prediction.
  • Intra_Vertical means that the vertical direction is the reference for prediction
  • Intra_Horizontal means that the horizontal direction is the reference for prediction. Note that 0 or 1 is applied to X in IntraPredTypeLX.
  • IntraPredTypeL0 indicates the first prediction mode of unidirectional intra prediction or bidirectional intra prediction.
  • IntraPredTypeL1 indicates the second prediction mode of bidirectional intra prediction.
• IntraPredAngleID is an index indicating the prediction angle. The prediction angle actually used in generating the predicted value is shown in FIG.
  • puPartIdx represents an index of the prediction unit that is divided in the quadtree division described with reference to FIG. 3B.
• For example, when IntraPredMode is 4, IntraPredTypeL0 is Intra_Vertical.
  • the prediction mode information setting unit 603 converts the above-described prediction information corresponding to the designated prediction mode 605 to the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602 under the control of the encoding control unit 115. And the prediction mode 605 is output to the selection switch 604.
• the unidirectional intra predicted image generation unit 601 has a function of generating the predicted image signal 126 for each of the plurality of prediction directions shown in FIG. 8. In FIG. 8, there are 33 different prediction directions on the vertical and horizontal coordinates indicated by the bold lines. The directions of typical prediction angles specified in H.264 are indicated by arrows.
• 33 kinds of prediction directions are prepared, each in the direction of a line drawn from the origin to a mark indicated by a diamond.
• For example, when IntraPredMode is 4, IntraPredAngleIDL0 is 4.
• An arrow indicated by a dotted line in FIG. 8 indicates a prediction mode whose prediction type is Intra_Vertical, and an arrow indicated by a solid line indicates a prediction mode whose prediction type is Intra_Horizontal.
  • FIG. 12 shows the relationship between IntraPredAngleIDLX and intraPredAngle used for predictive image value generation.
  • intraPredAngle indicates a prediction angle that is actually used when a predicted value is generated.
  • a prediction value generation method is expressed by Expression (3).
  • BLK_SIZE indicates the size of the pixel block (prediction unit)
  • ref [] indicates an array in which reference image signals are stored.
  • Pred (k, m) indicates the generated predicted image signal 126.
  • a predicted value can be generated by a similar method according to the table of FIG.
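• Since Expression (3) is not reproduced above, the following is a minimal sketch of how a unidirectional (angular) prediction value can be generated from a reference array, assuming an H.264/HEVC-style scheme in which intraPredAngle gives a displacement accumulated per row with 1/32-pel linear interpolation. The function name and the exact rounding are illustrative assumptions, not taken from the text, and negative angles (which would also need the left reference column) are omitted.

```python
def predict_angular(ref, blk_size, intra_pred_angle):
    """Sketch of vertical-type angular intra prediction (hypothetical form of
    Expression (3)): each predicted row is read from the reference array `ref`
    at an offset that grows with the prediction angle, with 1/32-pel linear
    interpolation between neighbouring reference pixels."""
    pred = [[0] * blk_size for _ in range(blk_size)]
    for m in range(blk_size):                        # row index inside the block
        offset = ((m + 1) * intra_pred_angle) >> 5   # integer displacement
        frac = ((m + 1) * intra_pred_angle) & 31     # 1/32-pel fraction
        for k in range(blk_size):                    # column index
            i = k + offset + 1                       # position in the reference array
            a, b = ref[i], ref[i + 1]
            pred[m][k] = ((32 - frac) * a + frac * b + 16) >> 5
    return pred

# A flat reference row yields a flat prediction, whatever the angle.
ref = [128] * 32
blk = predict_angular(ref, 4, intra_pred_angle=13)
```

With intra_pred_angle of 0, each row simply copies the reference pixels directly above the block, which corresponds to plain vertical prediction.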
• the above is the description of the unidirectional intra predicted image generation unit 601 according to the present embodiment of the present invention.
  • FIG. 13 shows a block diagram of the bidirectional intra-predicted image generation unit 602.
• the bidirectional intra predicted image generation unit 602 includes a first unidirectional intra predicted image generation unit 1301, a second unidirectional intra predicted image generation unit 1302, and a weighted average unit 1303. It has a function of generating two unidirectional intra predicted images from the input reference image signal 124 and generating the predicted image signal 126 by weighted-averaging them.
  • the functions of the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are the same. In either case, a prediction image signal corresponding to a prediction mode given according to prediction mode information controlled by the encoding control unit 115 is generated.
  • a first predicted image signal 1304 is output from the first unidirectional intra predicted image generation unit 1301, and a second predicted image signal 1305 is output from the second unidirectional intra predicted image generation unit 1302.
  • Each predicted image signal is input to the weighted average unit 1303, and weighted average processing is performed.
  • the output signal of the weighted average unit 1303 is also called a third predicted image signal.
  • the table in FIG. 14 is a table for deriving two unidirectional intra prediction modes from the bidirectional intra prediction mode.
  • BiPredIdx is derived using Equation (4).
  • the first predicted image signal 1304 and the second predicted image signal 1305 generated by the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are sent to the weighted average unit 1303. Entered.
• the weighted average unit 1303 calculates the Euclidean distance or the city block distance (Manhattan distance) based on the prediction directions of IntraPredModeL0 and IntraPredModeL1, and derives the weight components used in the weighted average process.
• the weight component of each pixel is given by the reciprocal of the Euclidean distance or the city block distance from the reference pixel used for prediction, and is generalized by the following equation.
• ΔL0 is expressed by the following equation.
• ΔL1 is expressed by the following equation.
  • the weight table for each prediction mode is generalized by the following equation.
• ωL0(n) represents the weight component of the pixel position n in IntraPredModeL0
• ωL1(n) represents the weight component of the pixel position n in IntraPredModeL1. Therefore, the final prediction signal at the pixel position n is expressed by the following equation.
  • BiPred (n) represents a predicted image signal at the pixel position n
  • PredL0 (n) and PredL1 (n) are predicted image signals of IntraPredModeL0 and IntraPredModeL1, respectively.
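• As a concrete illustration of the weighted average described above, the following sketch combines two unidirectional predictions with per-pixel weights taken as reciprocals of (hypothetical) distances from each mode's reference pixels. The function names and the explicit per-pixel normalisation are assumptions, not taken from the text.

```python
def bipred(pred_l0, pred_l1, w_l0, w_l1):
    """Per-pixel weighted average of two unidirectional predictions: a sketch
    of BiPred(n) = wL0(n)*PredL0(n) + wL1(n)*PredL1(n), with the weights
    normalised so that they sum to 1 at every pixel."""
    out = []
    for p0, p1, a, b in zip(pred_l0, pred_l1, w_l0, w_l1):
        out.append((a * p0 + b * p1) / (a + b))  # normalise per pixel
    return out

# Weights as reciprocals of (hypothetical) distances from the reference
# pixels: a pixel close to the L0 reference leans toward PredL0.
d_l0, d_l1 = [1, 2, 3, 4], [4, 3, 2, 1]
w_l0 = [1 / d for d in d_l0]
w_l1 = [1 / d for d in d_l1]
mix = bipred([100, 100, 100, 100], [200, 200, 200, 200], w_l0, w_l1)
```

The pixel closest to the L0 reference (distance 1 versus 4) stays near 100, while the pixel closest to the L1 reference stays near 200, matching the intuition that prediction accuracy decays with distance from the reference pixels.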
  • the prediction signal is generated by selecting two prediction modes for generating the prediction pixel.
  • a prediction value may be generated by selecting three or more prediction modes.
  • the ratio of the reciprocal of the spatial distance from the reference pixel to the prediction pixel may be set as the weighting factor.
• in the above, the reciprocal of the Euclidean distance or the city block distance from the reference pixel used in the prediction mode is directly used as the weight component.
• alternatively, the weight component may be set using a distribution model in which the Euclidean distance or the city block distance from the reference pixel is the variable.
• the distribution model uses at least one of a linear model, an M-order function (M ≥ 1), a nonlinear function such as a one-sided Laplace distribution or a one-sided Gaussian distribution, and a fixed value that is constant regardless of the distance from the reference pixel.
  • the weight component is expressed by the following equation.
• ω(n) is the weight component at the position n of the predicted pixel
• σ² is the variance
• A is a constant (A > 0).
  • the weight component is expressed by the following equation.
• σ is the standard deviation
  • B is a constant (B> 0).
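• The exact expressions for these models are not reproduced above; the following sketch shows plausible parameterisations of the one-sided Laplace, one-sided Gaussian, and fixed-value weight components, using the variance σ², standard deviation σ, and constants A and B mentioned in the text. The functional forms themselves are illustrative assumptions.

```python
import math

def weight_laplace(delta, var, A=1.0):
    """One-sided Laplace model (assumed form): the weight decays
    exponentially with the distance `delta` from the reference pixel."""
    return A * math.exp(-abs(delta) / var)

def weight_gauss(delta, sigma, B=1.0):
    """One-sided Gaussian model (assumed form): squared-exponential decay
    with the distance from the reference pixel."""
    return B * math.exp(-(delta * delta) / (2.0 * sigma * sigma))

def weight_fixed(delta, C=1.0):
    """Fixed value regardless of the distance from the reference pixel."""
    return C

# All three are largest at the reference pixel and non-increasing with distance.
ws = [weight_laplace(d, var=4.0) for d in range(5)]
```

Whichever model is chosen, the resulting weights are monotonically non-increasing in the distance from the reference pixel, which is the property the weighted average relies on.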
• as the weight component model, an isotropic correlation model obtained by modeling an autocorrelation function, an elliptic correlation model, or a generalized Gaussian model obtained by generalizing a Laplace function or a Gaussian function may also be used.
• the circuit scale required for the above calculation can be reduced by calculating the weight components beforehand according to the relative distance for every prediction mode and holding them in a table.
  • the relative distance is a distance between a prediction target pixel and a reference pixel with respect to a certain prediction direction.
• the city block distance ΔL0 of IntraPredModeL0 and the city block distance ΔL1 of IntraPredModeL1 are calculated from Equation (7).
  • the relative distance varies depending on the prediction direction of the two prediction modes.
  • the distance can be derived using Expression (6) or Expression (7) according to each prediction mode.
• the sizes of these distance tables may become large, however.
  • FIG. 17 shows the mapping of IntraPredModeLX used for distance table derivation.
• accordingly, tables are prepared only for the prediction modes corresponding to prediction angles in units of 45 degrees and for DC prediction, and other prediction angles are mapped to the nearest prepared reference prediction mode.
• when two prepared reference prediction modes are equally close, the index is mapped to the smaller one.
• the prediction mode shown in “MappedIntraPredMode” is referenced from FIG. 17, and the distance table can then be derived.
  • the relative distance for each pixel in the two prediction modes is calculated using the following equation.
• BLK_WIDTH and BLK_HEIGHT indicate the width and height of the pixel block (prediction unit), respectively.
• DistDiff (n) indicates the relative distance between the two prediction modes at the pixel position n.
• SHIFT indicates the calculation accuracy of the fixed-point calculation of the weight components; an optimal combination may be selected by balancing coding performance against circuit scale in hardware implementations.
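• The following sketch illustrates the two ideas above under stated assumptions: weight components precomputed as integers at fixed-point precision SHIFT (here with an assumed one-sided Laplace model), and a per-pixel relative distance computed as the difference of city block distances to the reference row and the reference column. Expression (10) itself is not reproduced above, so these formulas are illustrative only.

```python
import math

SHIFT = 6  # assumed fixed-point precision of the weight components

def weight_table(distances, var=4.0):
    """Precompute integer weight components (one-sided Laplace model, an
    assumed form) so that no floating-point math is needed per pixel."""
    return [int(round(math.exp(-d / var) * (1 << SHIFT))) for d in distances]

def city_block(px, py, rx, ry):
    """City block (Manhattan) distance between a prediction pixel (px, py)
    and a reference pixel (rx, ry)."""
    return abs(px - rx) + abs(py - ry)

def dist_diff(px, py):
    """Illustrative relative distance at a pixel: a vertical-type mode
    references the pixel directly above the block (row -1), a
    horizontal-type mode the pixel to the left (column -1)."""
    d_l0 = city_block(px, py, px, -1)  # distance to the reference row
    d_l1 = city_block(px, py, -1, py)  # distance to the reference column
    return abs(d_l0 - d_l1)

tbl = weight_table(range(8))
```

Precomputing the table once per prediction mode trades a small amount of memory for the repeated exponential evaluations, which is the circuit-scale saving the text describes.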
  • FIGS. 18A and 18B show examples in which weight components using the one-sided Laplace distribution model in the present embodiment of the present invention are tabulated.
• weight tables for other PuSizes can also be derived using Equation (5), Equation (8), Equation (10), and Equation (11). The above are the details of the intra prediction unit 109 according to the present embodiment of the present invention.
  • the internal configuration of the intra prediction unit 109 may be the configuration shown in FIG.
• in this configuration, a primary image buffer 1901 is added, and the bidirectional intra predicted image generation unit 602 is replaced with the weighted average unit 1303.
• the primary image buffer 1901 has a function of temporarily storing the predicted image signal 126 for each prediction mode generated by the unidirectional intra predicted image generation unit 601, and, according to the prediction mode controlled by the encoding control unit 115, the predicted image signal 126 corresponding to the necessary prediction mode is output to the weighted average unit 1303. This eliminates the need for the bidirectional intra predicted image generation unit 602 to hold the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302, making it possible to reduce the hardware scale.
  • the syntax indicates the structure of encoded data (for example, encoded data 127 in FIG. 1) when the image encoding device encodes moving image data.
  • the moving picture decoding apparatus interprets the syntax with reference to the same syntax structure.
  • FIG. 20 shows an example of syntax 2000 used by the moving picture encoding apparatus of FIG.
  • the syntax 2000 includes three parts: a high level syntax 2001, a slice level syntax 2002, and a coding tree level syntax 2003.
  • the high level syntax 2001 includes syntax information of a layer higher than the slice.
  • a slice refers to a rectangular area or a continuous area included in a frame or a field.
  • the slice level syntax 2002 includes information necessary for decoding each slice.
  • the coding tree level syntax 2003 includes information necessary for decoding each coding tree (ie, each coding tree unit). Each of these parts includes more detailed syntax.
  • the high level syntax 2001 includes sequence and picture level syntaxes such as a sequence parameter set syntax 2004 and a picture parameter set syntax 2005.
  • the slice level syntax 2002 includes a slice header syntax 2006, a slice data syntax 2007, and the like.
  • the coding tree level syntax 2003 includes a coding tree unit syntax 2008, a prediction unit syntax 2009, and the like.
  • the coding tree unit syntax 2008 can have a quadtree structure. Specifically, the coding tree unit syntax 2008 can be recursively called as a syntax element of the coding tree unit syntax 2008. That is, one coding tree unit can be subdivided with a quadtree.
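• The recursive quadtree structure described above can be sketched as follows; the function and parameter names are hypothetical, and split_decision stands in for the decoded split flag of each coding tree unit.

```python
def parse_coding_tree_unit(x, y, size, min_size, split_decision, leaves):
    """Sketch of the recursive quadtree: coding_tree_unit_syntax can call
    itself on four half-size sub-blocks until a unit is not split further
    or the minimum size is reached. `split_decision(x, y, size)` stands in
    for the decoded split flag."""
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_coding_tree_unit(x + dx, y + dy, half, min_size,
                                       split_decision, leaves)
    else:
        leaves.append((x, y, size))  # leaf: prediction/transform units live here

leaves = []
# Split only the top-level 64x64 block once, giving four 32x32 leaves.
parse_coding_tree_unit(0, 0, 64, 8, lambda x, y, s: s == 64, leaves)
```

The transform unit syntax being "called at each coding tree unit syntax at the extreme end of the quadtree" corresponds to the leaf branch of this recursion.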
  • the coding tree unit syntax 2008 includes a transform unit syntax 2010.
  • the transform unit syntax 2010 is called at each coding tree unit syntax 2008 at the extreme end of the quadtree.
  • the transform unit syntax 2010 describes information related to inverse orthogonal transformation and quantization.
  • FIG. 21 illustrates the slice header syntax 2006 according to the present embodiment.
  • the slice_bipred_intra_flag shown in FIG. 21 is a syntax element indicating, for example, validity / invalidity of bidirectional intra prediction according to the present embodiment for the slice.
• when slice_bipred_intra_flag is 0, only unidirectional intra prediction, in which IntraBipredFlag[] in FIGS. 9 to 11 is 0, is performed for the slice; alternatively, the intra prediction specified in H.264 may be performed.
• when slice_bipred_intra_flag is 1, the bidirectional intra prediction according to the present embodiment is valid over the entire area of the slice.
• alternatively, validity/invalidity of the prediction according to the present embodiment may be specified for each local region in the slice.
  • FIG. 22A shows an example of the prediction unit syntax.
  • Pred_mode in the figure indicates the prediction type of the prediction unit.
  • MODE_INTRA indicates that the prediction type is intra prediction.
• intra_split_flag is a flag indicating whether or not the prediction unit is further divided into four prediction units. When intra_split_flag is 1, the prediction unit is divided into four prediction units, each half the vertical and horizontal size. When intra_split_flag is 0, the prediction unit is not divided.
• Intra_luma_bipred_flag [i] is a flag indicating whether the prediction mode IntraPredMode applied to the prediction unit is a unidirectional intra prediction mode or a bidirectional intra prediction mode. i indicates the position of the divided prediction unit: when intra_split_flag is 0, i is 0; when intra_split_flag is 1, i takes 0 to 3. The flag is set to the value of IntraBipredFlag of the prediction unit shown in FIGS. 9 to 11.
• when intra_luma_bipred_flag [i] is 1, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode [i], information that identifies the used bidirectional intra prediction mode among the plurality of prepared bidirectional intra prediction modes, is encoded.
• intra_luma_bipred_mode [i] may be encoded with a fixed length according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIG. 7, or may be encoded using a predetermined code table.
• when intra_luma_bipred_flag [i] is 0, the prediction unit uses unidirectional intra prediction, and the prediction mode is predictively encoded from adjacent blocks.
  • prev_intra_luma_unipred_flag [i] is a flag indicating whether or not the prediction value MostProbable of the prediction mode calculated from the adjacent block and the intra prediction mode of the prediction unit are the same. Details of the MostProbable calculation method will be described later. When prev_intra_luma_unipred_flag [i] is 1, it indicates that the MostProbable and the intra prediction mode IntraPredMode are equal.
• when prev_intra_luma_unipred_flag [i] is 0, MostProbable and the intra prediction mode IntraPredMode differ, and rem_intra_luma_unipred_mode [i], information that specifies which mode other than MostProbable the intra prediction mode IntraPredMode is, is encoded. rem_intra_luma_unipred_mode [i] may be encoded with a fixed length according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIG. 7, or may be encoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode [i] is calculated using the following equation.
  • MostProbable is calculated according to the following equation.
• Min (x, y) is a function that outputs the smaller of the inputs x and y.
  • intraPredModeA and intraPredModeB indicate intra prediction modes of prediction units adjacent to the left and above the encoded prediction unit.
  • intraPredModeA and intraPredModeB are collectively expressed as intraPredModeN.
  • N is set to A or B.
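• The MostProbable computation of Expression (15) and the signalling of prev_intra_luma_unipred_flag / rem_intra_luma_unipred_mode can be sketched as follows. The Min form is stated in the text; the "mode minus 1" adjustment when the mode exceeds MostProbable is an assumption (Expression (16) is not reproduced above) that mirrors the corresponding H.264 scheme.

```python
def most_probable(mode_a, mode_b):
    """MostProbable per Expression (15): the smaller of the left and upper
    neighbours' intra prediction modes."""
    return min(mode_a, mode_b)

def encode_unipred_mode(intra_pred_mode, mp):
    """Sketch of the signalling: a 1-bit flag when the mode equals
    MostProbable, otherwise rem_intra_luma_unipred_mode. The adjustment by
    one when the mode exceeds MostProbable is an assumed form of
    Expression (16)."""
    if intra_pred_mode == mp:
        return (1, None)  # prev_intra_luma_unipred_flag = 1, nothing else sent
    rem = intra_pred_mode if intra_pred_mode < mp else intra_pred_mode - 1
    return (0, rem)

def decode_unipred_mode(flag, rem, mp):
    """Inverse mapping used by the decoder."""
    if flag == 1:
        return mp
    return rem if rem < mp else rem + 1

mp = most_probable(2, 5)  # left neighbour mode 2, upper neighbour mode 5
```

Because the MostProbable value itself never needs a rem code, the rem alphabet is one symbol smaller than the full mode alphabet, which is where the bit saving comes from.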
• a method of calculating intraPredModeN will be described using the flowchart shown in FIG. First, it is determined whether the coding tree unit to which the adjacent prediction unit belongs can be used (step S2301). If the coding tree unit cannot be used (NO in S2301), “−1”, indicating that intraPredModeN cannot be referenced, is set to intraPredModeN.
  • intraPredModeN is calculated using the following equation.
• IntraUniModeNum is the number of unidirectional intra prediction modes determined by the size of the adjacent prediction unit; an example is shown in FIG. 7.
  • “MappedBi2Uni (List, idx)” is a table for converting the bidirectional intra prediction mode into the unidirectional intra prediction mode.
• List is a flag indicating which of the two unidirectional intra prediction modes constituting the bidirectional intra prediction mode is used for the conversion: the List0 mode (corresponding to IntraPredTypeL0 [] shown in FIGS. 9, 10, and 11) or the List1 mode (corresponding to IntraPredTypeL1 [] shown in FIGS. 9, 10, and 11).
• here, the List1 mode is used for the conversion to the unidirectional intra prediction mode.
  • FIG. 14 shows an example of the conversion table. The numerical values in the figure correspond to IntraPredMode shown in FIGS.
• MappedMostProbable () is a table for converting MostProbable, and an example is shown in FIG. 24.
• luma_pred_mode_code_type [i] indicates the type of the prediction mode IntraPredMode applied to the prediction unit: 0 (IntraUnifiedMostProb) indicates unidirectional intra prediction whose intra prediction mode equals MostProbable, 1 indicates unidirectional intra prediction whose intra prediction mode differs from MostProbable, and 2 (IntraBipred) indicates the bidirectional intra prediction mode.
• FIG. 24 maps each prediction mode to whether it is a bidirectional intra prediction mode or a unidirectional intra prediction mode (and, for unidirectional modes, whether it is the same prediction mode as MostProbable).
• FIG. 25 shows an example of the meaning corresponding to each luma_pred_mode_code_type, its bin, and the assignment of the number of modes according to the mode configuration shown in FIG.
• when luma_pred_mode_code_type [i] is 0, the intra prediction mode is the MostProbable mode, so no further information needs to be encoded.
• when luma_pred_mode_code_type [i] is 1, information rem_intra_luma_unipred_mode [i] that specifies which mode other than MostProbable the intra prediction mode IntraPredMode is, is encoded.
• rem_intra_luma_unipred_mode [i] may be encoded with a fixed length according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIG. 7, or may be encoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode [i] is calculated using Equation (16). Further, when luma_pred_mode_code_type [i] is 2, the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode [i], information that identifies the used bidirectional intra prediction mode among the prepared bidirectional intra prediction modes, is encoded.
• intra_luma_bipred_mode [i] may be encoded with a fixed length according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIG. 7, or may be encoded using a predetermined code table.
  • the above is the syntax configuration according to the present embodiment.
• yet another example of the prediction unit syntax is shown in FIG. 22D.
• in this case, the table shown in FIG. 43 may be used instead of the IntraPredMode table shown in FIG. 9.
• FIG. 43 is a table in which IntraPredTypeL1 and IntraPredAngleIdL1, which indicate information related to the second prediction mode in bidirectional intra prediction, are deleted from FIG. 9, and entries with IntraPredMode of 33 or more are also deleted.
• the same relationship as between FIG. 43 and FIG. 9 is also applicable to FIG. 10 and FIG. 11.
  • pred_mode and intra_split_flag are the same as the syntax example described above, and thus description thereof is omitted.
  • Intra_bipred_flag is a flag indicating whether or not bi-directional intra prediction can be used in the encoded prediction unit. When intra_bipred_flag is 0, it indicates that bi-directional intra prediction is not used in the encoded prediction unit. Even when intra_split_flag is 1, that is, when the encoded prediction unit is further divided into four, bi-directional intra prediction is not used in all prediction units, and only uni-directional intra prediction is effective.
  • intra_bipred_flag When intra_bipred_flag is 1, it indicates that bi-directional intra prediction can be used in the encoded prediction unit. Even when intra_split_flag is 1, that is, when the encoded prediction unit is further divided into four, in all prediction units, bidirectional intra prediction can be selected in addition to unidirectional intra prediction.
• by encoding intra_bipred_flag as 0 to disable bidirectional intra prediction, the amount of code necessary for encoding can be reduced, improving coding efficiency.
• still another example of the prediction unit syntax is shown in FIG. 22E.
  • intra_bipred_flag is a flag indicating whether or not bi-directional intra prediction can be used in the encoding prediction unit, and is the same as the above-described intra_bipred_flag, and thus the description thereof is omitted.
• FIG. 26 shows the intra prediction unit 109 when adaptive reference pixel filtering is used. It differs from the intra prediction unit 109 shown in FIG. 6 in that a reference pixel filter unit 2601 is added.
  • the reference pixel filter unit 2601 receives the reference image signal 124 and the prediction mode 605, performs an adaptive filtering process described later, and outputs a filtered reference image signal 2602.
  • the filtered reference image signal 2602 is input to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • the configuration and processing other than the reference pixel filter unit 2601 are the same as those of the intra prediction unit 109 shown in FIG.
  • the reference pixel filter unit 2601 determines whether or not to filter reference pixels used for intra prediction according to the reference pixel filter flag and the intra prediction mode included in the prediction mode 605.
  • the reference pixel filter flag is a flag indicating whether or not reference pixels are filtered when the intra prediction mode IntraPredMode is a value other than “Intra_DC”.
• when IntraPredMode is “Intra_DC”, the reference pixel is not filtered and the reference pixel filter flag is set to 0.
  • a filtered reference image signal 2602 is calculated by the following filtering.
  • p [x, y] indicates a reference pixel before filtering
• pf [x, y] indicates a reference pixel after filtering.
  • PuPartSize indicates the size (pixel) of the prediction unit.
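• Expression (20) is not reproduced above; the following sketch assumes the commonly used [1, 2, 1] / 4 smoothing kernel for the reference pixel filtering, leaving the endpoint pixels unfiltered. The kernel taps and the endpoint handling are assumptions, not taken from the text.

```python
def filter_reference_pixels(p):
    """Sketch of reference pixel filtering (assumed [1, 2, 1] / 4 kernel):
    each interior reference pixel is smoothed with its two neighbours,
    with integer rounding; endpoints are copied unfiltered."""
    n = len(p)
    pf = list(p)
    for x in range(1, n - 1):
        pf[x] = (p[x - 1] + 2 * p[x] + p[x + 1] + 2) >> 2
    return pf

pf = filter_reference_pixels([100, 100, 100, 100])
```

A flat reference row passes through unchanged, while an edge in the reference pixels is softened, which is the intended effect of the adaptive filter when it is enabled.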
  • FIG. 27A and 27B show a prediction unit syntax structure when performing adaptive reference pixel filtering.
  • FIG. 27A adds the syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 22A.
  • FIG. 27B adds syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 22C.
• intra_luma_filter_flag [i] is encoded when the intra prediction mode IntraPredMode [i] is other than Intra_DC. When the flag is 0, the reference pixel is not filtered; when intra_luma_filter_flag [i] is 1, reference pixel filtering is applied.
• alternatively, intra_luma_filter_flag [i] may not be encoded even when the intra prediction mode IntraPredMode [i] is other than Intra_DC, for example when IntraPredMode [i] is 0 to 2. In this case, intra_luma_filter_flag [i] is set to 0.
  • intra_luma_filter_flag [i] described above may be added in the same meaning for the other syntax structures shown in FIGS. 22B, 22D, and 22E.
• when the decoded image signal 122 is calculated in the moving image decoding device 4400 or the image encoding device 100, it is possible to use decoded pixels as the pixels adjacent to the left, above, and upper left.
• in composite intra prediction, however, the input image signal 116 is used as the pixels adjacent to the left, above, and upper left at the time of encoding.
  • FIG. 28 shows positions of adjacent decoded pixels A (left), B (upper), and C (upper left) used for prediction of the prediction target pixel X. Therefore, composite intra prediction is a so-called open-loop prediction method in which prediction values differ between the image encoding device 100 and the moving image decoding device 4400.
  • FIG. 30 shows a block diagram of the intra prediction unit 4408 (109) when combined with composite intra prediction. A difference is that a composite intra predicted image generation unit 2901, a selection switch 2902, and a decoded image buffer 3001 are added to the intra prediction unit 109 shown in FIG.
• when bidirectional intra prediction and composite intra prediction are combined, the selection switch 604 first switches to the output terminal of the unidirectional intra predicted image generation unit 601 or the bidirectional intra predicted image generation unit 602 according to the prediction mode information controlled by the encoding control unit 115.
  • the output predicted image signal 126 is referred to as a direction predicted image signal 126.
  • the direction prediction image signal is input to the composite intra prediction image generation unit 2901, and a prediction image signal 4420 (126) in the composite intra prediction is generated.
• details of the composite intra predicted image generation unit 2901 will be described later.
• the selection switch 2902 selects which of the predicted image signal 4420 (126) of composite intra prediction and the direction predicted image signal is used, according to the composite intra prediction application flag in the prediction mode information controlled by the encoding control unit 115, and outputs the final predicted image signal 4420 (126) of the intra prediction unit 4408 (109).
• when the composite intra prediction application flag is 1, the predicted image signal 4420 (126) output from the composite intra predicted image generation unit 2901 becomes the final predicted image signal 4420 (126); when the flag is 0, the direction predicted image signal becomes the finally output predicted image signal 4420 (126).
  • the predicted image signal output from the composite intra predicted image generation unit 2901 is also called a sixth predicted image signal.
• the addition unit 4405 adds the separately decoded prediction error signal 4416 in units of pixels to generate a decoded image signal 4417 for each pixel, which is stored in the decoded pixel buffer 3001.
  • the stored decoded image signal 4417 in units of pixels is input to the composite intra predicted image generation unit 2901 as the reference pixel 3002, and is used for pixel level prediction described later as the adjacent pixel 3104 shown in FIG.
  • the composite intra prediction image generation unit 2901 includes a pixel level prediction signal generation unit 3101 and a composite intra prediction calculation unit 3102.
  • the pixel level prediction signal generation unit 3101 receives the reference pixel 3002 as the adjacent pixel 3104, and outputs the pixel level prediction signal 3103 by predicting the prediction target pixel X from the adjacent pixel.
  • the pixel level prediction signal 3103 (X) of the prediction target pixel is calculated from A, B, and C indicating the adjacent pixel 3104 using Expression (21).
  • coefficients related to A, B, and C may be other values.
  • the composite intra prediction calculation unit 3102 performs a weighted average of the direction prediction image signal 126 (X ′) and the pixel level prediction signal 3103 (X), and outputs a final prediction image signal 126 (P). Specifically, the following formula is used.
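• The two steps above can be sketched as follows. The coefficients (1, 1, -1) for the pixel level prediction of Expression (21) and the exact form of the weighted average of Expression (22) are assumptions; the text itself notes that the coefficients related to A, B, and C may take other values.

```python
def pixel_level_prediction(a, b, c):
    """Sketch of Expression (21): a linear combination of the left (A),
    upper (B), and upper-left (C) adjacent pixels. The coefficients
    (1, 1, -1), i.e. the plane predictor A + B - C, are an assumption."""
    return a + b - c

def composite_intra(direction_pred, pixel_pred, w):
    """Sketch of Expression (22): weighted average of the direction
    prediction X' and the pixel level prediction X, with weight w in
    [0, 1] placed on the direction prediction (assumed form)."""
    return w * direction_pred + (1.0 - w) * pixel_pred

x = pixel_level_prediction(a=120, b=110, c=100)      # pixel level prediction X
p = composite_intra(direction_pred=140, pixel_pred=x, w=0.5)
```

Making w depend on the distance from the reference pixels, as discussed below, only changes the scalar passed to composite_intra for each pixel position.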
• the decoded image signal 122 may therefore have different values in encoding and decoding. For this reason, after all the decoded image signals 122 in the encoded prediction unit are generated, the above-described composite intra prediction is performed again using the decoded image signal 122 as the adjacent pixels, so that the same predicted image signal 126 as in decoding is obtained; this is added to the prediction error signal 117, and the same decoded image signal 122 as in decoding can be generated.
  • the weighting factor W may be switched according to the position of the prediction pixel in the prediction unit.
• a predicted image signal generated using unidirectional intra prediction or bidirectional intra prediction derives its prediction values from spatially adjacent, already encoded reference pixels positioned to the left or above.
• the absolute value of the prediction error therefore tends to increase with distance from the reference pixels. Accordingly, prediction accuracy can be improved by making the weighting coefficient of the direction predicted image signal 126 large when the predicted pixel is close to the reference pixels and small when it is far away.
  • a prediction error signal is generated using an input image signal at the time of encoding.
• since the pixel level prediction signal 3103 is derived from the input image signal, its prediction accuracy remains high compared with that of the direction predicted image signal 126 even when the spatial distance between the reference pixel position and the predicted pixel position becomes large.
  • if the weighting coefficient between the direction prediction image signal 126 and the pixel level prediction signal 3103 is simply increased near the reference pixel and decreased far from it, the prediction error is reduced, but there is a problem that the predicted values at encoding and at local decoding differ, lowering the prediction accuracy. Therefore, especially when the value of the quantization parameter is large, setting W to a small value as the spatial distance between the reference pixel position and the predicted pixel position grows can suppress the decrease in coding efficiency caused by this open-loop mismatch.
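The blend described above can be sketched as follows. This is a minimal illustration only: the patent specifies that W decreases with distance from the reference pixels but leaves the schedule open, so the function name `composite_intra`, the linear decay, and the `w_max`/`w_min` bounds are all assumptions.

```python
import numpy as np

def composite_intra(direction_pred, pixel_level_pred, w_max=0.75, w_min=0.25):
    """Blend a directional prediction X' with a pixel-level prediction X.

    The weight W applied to the directional prediction shrinks linearly
    with the city-block distance from the references above/left
    (illustrative schedule; not the patent's exact W).
    """
    n = direction_pred.shape[0]
    pred = np.empty_like(direction_pred, dtype=np.float64)
    max_dist = 2 * (n - 1) if n > 1 else 1
    for y in range(n):
        for x in range(n):
            dist = x + y  # city-block distance from the top-left references
            w = w_max - (w_max - w_min) * dist / max_dist
            pred[y, x] = w * direction_pred[y, x] + (1 - w) * pixel_level_pred[y, x]
    return pred
```

Pixels nearest the references lean on the directional prediction; pixels farthest away lean on the pixel-level prediction, matching the distance argument in the text.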
  • FIGS. 32A and 32B show the prediction unit syntax structure when composite intra prediction is performed.
  • FIG. 32A is different from FIG. 22A in that a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction is added. This is equivalent to the above-described composite intra prediction application flag.
  • FIG. 32B adds a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction to FIG. 22C.
  • the selection switch 2902 shown in FIG. 30 is connected to the output terminal of the composite intra prediction image generation unit 2901.
  • the selection switch 2902 shown in FIG. 30 is connected to the output terminal of either the unidirectional intra prediction image generation unit 601 or the bidirectional intra prediction image generation unit 602 to which the selection switch 604 is connected.
  • intra_luma_filter_flag [i] described above may be added in the same meaning for the other syntax structures shown in FIGS. 22B, 22D, and 22E.
  • the video encoding apparatus according to the second embodiment differs from the above-described image encoding apparatus according to the first embodiment in the details of orthogonal transform and inverse orthogonal transform.
  • the same parts as those in the first embodiment are denoted by the same indexes, and different parts will be mainly described.
  • a moving picture decoding apparatus corresponding to the picture encoding apparatus according to the present embodiment will be described in a fifth embodiment.
  • FIG. 33 is a block diagram showing a video encoding apparatus according to the second embodiment.
  • the change from the moving picture encoding apparatus according to the first embodiment is that a transformation selection unit 3301 and a coefficient order control unit 3302 are added. Also, the internal structures of the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 are different.
  • the process of FIG. 33 will be described.
  • <Orthogonal transform unit 102> As shown in FIG. 34, the orthogonal transform unit 102 includes a first orthogonal transform unit 3401, a second orthogonal transform unit 3402, an Nth orthogonal transform unit 3403, and a transform selection switch 3404.
  • Among the N types of orthogonal transform units, there may be a plurality of transform sizes using the same orthogonal transform method, or there may be a plurality of orthogonal transform units performing different orthogonal transform methods; both may also be mixed.
  • the first orthogonal transform unit 3401 can be set to a 4×4 DCT,
  • the second orthogonal transform unit 3402 to an 8×8 DCT,
  • and the Nth orthogonal transform unit 3403 to a 16×16 DCT.
  • Alternatively, the first orthogonal transform unit 3401 can be set to a 4×4 DCT,
  • the second orthogonal transform unit 3402 to a 4×4 DST (discrete sine transform),
  • and the Nth orthogonal transform unit 3403 to an 8×8 KLT (Karhunen-Loève transform).
  • Here, a configuration with N > 1 is considered.
  • the transform selection switch 3404 has a function of connecting the output terminal of the subtraction unit 101 to one of the orthogonal transform units according to the transform selection information 3303.
  • the transformation selection information 3303 is one piece of information controlled by the encoding control unit 115 and is set by the transformation selection unit 3301 according to the prediction information 125.
  • If the transformation selection information 3303 indicates the first orthogonal transform, the output terminal of the switch is connected to the first orthogonal transform unit 3401; if it indicates the second orthogonal transform, the output terminal is connected to the second orthogonal transform unit 3402.
  • For example, the first orthogonal transform unit 3401 performs DCT, and the other orthogonal transform units 3402 and 3403 perform the Karhunen-Loève transform (KLT).
  • the inverse orthogonal transform unit 105 in FIG. 35 includes a first inverse orthogonal transform unit 3501, a second inverse orthogonal transform unit 3502, an Nth inverse orthogonal transform unit 3503, and a transform selection switch 3504.
  • the conversion selection switch 3504 will be described.
  • the transform selection switch 3504 has a function of connecting the output terminal of the inverse quantization unit 104 to one of the inverse orthogonal transform units according to the input transform selection information 3303.
  • the transformation selection information 3303 is one piece of information controlled by the encoding control unit 115 and is set by the transformation selection unit 3301 according to the prediction information 125.
  • If the transformation selection information 3303 indicates the first orthogonal transform, the output terminal of the switch is connected to the first inverse orthogonal transform unit 3501; if it indicates the second orthogonal transform, the output terminal is connected to the second inverse orthogonal transform unit 3502; and if it indicates the Nth orthogonal transform, the output terminal is connected to the Nth inverse orthogonal transform unit 3503.
  • the transform selection information 3303 set in the orthogonal transform unit 102 and the transform selection information 3303 set in the inverse orthogonal transform unit 105 are the same, and the inverse orthogonal transform corresponding to the transform performed in the orthogonal transform unit 102 is performed.
  • For example, the first inverse orthogonal transform unit 3501 performs the inverse discrete cosine transform (hereinafter, IDCT), and the second inverse orthogonal transform unit 3502 and the Nth inverse orthogonal transform unit 3503 perform the inverse Karhunen-Loève transform (inverse KLT).
  • orthogonal transformation such as Hadamard transformation or discrete sine transformation may be used, or non-orthogonal transformation may be used.
  • In any case, the inverse transform corresponding to the transform performed in the orthogonal transform unit 102 is applied.
  • Prediction information 125 controlled by the encoding control unit 115 and including the prediction mode set by the prediction selection unit 112 is input to the transform selection unit 3301.
  • the transform selection unit 3301 has a function of setting the MappedTransformIdx information indicating which orthogonal transform is used for which prediction mode.
  • FIG. 36 shows conversion selection information 3303 (MappedTransformIdx) in intra prediction.
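The selection mechanism can be sketched as a lookup followed by a dispatch. The actual per-mode table is given in FIG. 36 and is not reproduced here; the entries in `MAPPED_TRANSFORM_IDX` below are placeholders, and the names are illustrative only.

```python
# Hypothetical mapping from intra prediction mode to transform index.
# The real MappedTransformIdx table is defined in FIG. 36 of the patent.
MAPPED_TRANSFORM_IDX = {
    0: 0,  # placeholder: prediction mode 0 -> first orthogonal transform
    1: 1,  # placeholder: prediction mode 1 -> second orthogonal transform
    2: 0,  # placeholder: prediction mode 2 -> first orthogonal transform
}

def select_transform(pred_mode, transforms):
    """Transform selection switch: return the transform mapped to a prediction mode."""
    idx = MAPPED_TRANSFORM_IDX.get(pred_mode, 0)  # default to the first transform
    return transforms[idx]
```

The same index drives both the forward switch 3404 and the inverse switch 3504, which is what guarantees the encoder and decoder apply matching transforms.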
  • FIG. 37 shows a block diagram of the coefficient order control unit 3302.
  • the coefficient order control unit 3302 includes a coefficient order selection switch 3704, a first coefficient order conversion unit 3701, a second coefficient order conversion unit 3702, and an Nth coefficient order conversion unit 3703.
  • the coefficient order selection switch 3704 has a function of switching the output terminal of the switch among the coefficient order conversion units 3701 to 3703 in accordance with, for example, the MappedTransformIdx shown in FIG. 36.
  • the N types of coefficient order conversion units 3701 to 3703 have a function of converting the two-dimensional data of the quantized transform coefficients 119 quantized by the quantization unit 103 into one-dimensional data. For example, in H.264, two-dimensional data is converted into one-dimensional data using a zigzag scan.
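The H.264-style zigzag scan mentioned above can be sketched compactly: coefficients are visited along anti-diagonals, alternating direction on odd and even diagonals. The function names are illustrative.

```python
import numpy as np

def zigzag_order(n):
    """Return the (row, col) visiting order of an n x n block under the
    H.264-style zigzag scan: anti-diagonals r+c, alternating direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def scan_2d_to_1d(block):
    """2D-1D conversion of quantized transform coefficients (coefficient order conversion)."""
    return np.array([block[r, c] for r, c in zigzag_order(block.shape[0])])
```

For a 4×4 block this reproduces the familiar order (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), …, which front-loads the low-frequency positions where non-zero coefficients concentrate.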
  • the quantized transform coefficient 119, obtained by quantizing the orthogonally transformed transform coefficient 118, has the characteristic that the positions at which non-zero transform coefficients occur within the block are biased.
  • the tendency of occurrence of this non-zero transform coefficient has different properties for each prediction direction of intra prediction.
  • the generation tendency of non-zero transform coefficients in the same prediction direction has a similar property. Therefore, when transforming two-dimensional data into one-dimensional data (2D-1D conversion), entropy coding is performed preferentially from transform coefficients at positions where the occurrence probability of non-zero transform coefficients is high, which reduces the code amount needed to encode the transform coefficients.
  • the coefficient order control unit 3302 may dynamically update the scan order in 2D-1D conversion.
  • the coefficient order control unit 3302 that performs such an operation is illustrated in FIG. 38.
  • In FIG. 38, the coefficient order control unit 3302 includes an occurrence frequency counting unit 3801 and an update unit 3802 in addition to the configuration of FIG. 37.
  • the coefficient order conversion units 3701,..., 3703 are the same except that the scan order is updated by the coefficient order update unit 3802.
  • the occurrence frequency counting unit 3801 creates a histogram 3804 of the number of occurrences of non-zero coefficients in each element of the quantized transform coefficient sequence 3304 for each prediction mode.
  • the occurrence frequency counting unit 3801 inputs the created histogram 3804 to the update unit 3802.
  • the update unit 3802 updates the coefficient order based on the histogram 3804 at a predetermined timing.
  • the timing is, for example, the timing when the coding process of the coding tree unit is finished, the timing when the coding process for one line in the coding tree unit is finished, or the like.
  • the update unit 3802 refers to the histogram 3804 and updates the coefficient order with respect to a prediction mode having an element in which the number of occurrences of non-zero coefficients is counted more than a threshold. For example, the update unit 3802 updates the prediction mode having an element in which the occurrence of a non-zero coefficient is counted 16 times or more. By providing a threshold value for the number of occurrences, the coefficient order is updated globally, so that it is difficult to converge to a local optimum solution.
  • the update unit 3802 sorts the elements in descending order of the occurrence frequency of the non-zero coefficient with respect to the prediction mode to be updated. Sorting can be realized by existing algorithms such as bubble sort and quick sort. Then, the update unit 3802 inputs the update coefficient order 3803 indicating the order of the sorted elements to the coefficient order conversion units 3701 to 3703 corresponding to the prediction mode to be updated.
  • When the update coefficient order 3803 is input, each coefficient order conversion unit performs 2D-1D conversion according to the updated scan order.
  • the initial scan order of each 2D-1D conversion unit needs to be determined in advance. In this way, by dynamically updating the scan order, the tendency of occurrence of non-zero coefficients in the quantized transform coefficients 119 changes according to the influence of the properties of the predicted image, quantization information (quantization parameters), and the like. Even in this case, high encoding efficiency can be expected stably. Specifically, the generated code amount of run-length encoding in the entropy encoding unit 113 can be suppressed.
  • the syntax configuration in this embodiment is the same as that in the first embodiment.
  • the conversion selection unit 3301 can select the mapped transform IDx separately from the prediction information 125.
  • information indicating which of the N types of orthogonal transforms or inverse orthogonal transforms is used is set in the entropy encoding unit 113 and encoded together with the quantized transform coefficient sequence 3304.
  • FIG. 39 shows an example of syntax in this modification.
  • directional_transform_idx shown in the syntax indicates which of the N orthogonal transforms has been selected.
  • FIG. 40 shows a block diagram of the orthogonal transform unit 102 according to the present embodiment.
  • the orthogonal transform unit 102 includes new processing units such as a first rotation transform unit 4001, a second rotation transform unit 4002, an Nth rotation transform unit 4003, and a discrete cosine transform unit 4004, and has an existing transform selection switch 3404.
  • the discrete cosine transform unit 4004 performs DCT, for example.
  • the conversion coefficient after DCT is input to the conversion selection switch 3404.
  • the conversion selection switch 3404 connects the output end of the switch to one of the first rotation conversion unit 4001, the second rotation conversion unit 4002, and the Nth rotation conversion unit 4003 according to the conversion selection information 3303.
  • the switches are sequentially switched according to the control of the encoding control unit 115.
  • the rotation conversion units 4001 to 4003 perform rotation conversion for each conversion coefficient using a predetermined rotation matrix.
  • the conversion coefficient 118 after the rotation conversion is output. This conversion is a reversible conversion.
  • Which rotation matrix is to be used may be determined using the encoding cost shown in Equation (1) and Equation (2).
  • the rotation conversion unit may be applied to the quantization conversion coefficient 119 after the quantization process.
  • the orthogonal transform unit 102 performs only DCT.
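The reversible rotation applied to DCT coefficients can be illustrated with a Givens-style rotation. The patent's rotation matrices are predefined and not reproduced here; pairing the flattened coefficients as (2k, 2k+1) is an assumption of this sketch, as are the function names.

```python
import numpy as np

def rotation_transform(coeffs, theta):
    """Apply a reversible rotation to pairs of DCT coefficients.

    Illustrative: rotates coefficient pairs (2k, 2k+1) of the flattened
    block by angle theta using an orthogonal 2x2 rotation matrix.
    """
    c = coeffs.reshape(-1, 2).T  # coefficient pairs as columns
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (R @ c).T.reshape(coeffs.shape)

def inverse_rotation_transform(coeffs, theta):
    """Inverse rotation: rotate by -theta (exact, since R is orthogonal)."""
    return rotation_transform(coeffs, -theta)
```

Because the rotation matrix is orthogonal, the inverse rotation recovers the DCT coefficients exactly, consistent with the statement that the transform is reversible.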
  • FIG. 41 is a block diagram of the inverse orthogonal transform unit 105 according to the present embodiment.
  • the inverse orthogonal transform unit 105 includes new processing units such as a first inverse rotation transform unit 4101, a second inverse rotation transform unit 4102, an Nth inverse rotation transform unit 4103, and an inverse discrete cosine transform unit 4104, and an existing transform selection switch 3504.
  • the conversion selection switch 3504 connects the output terminal of the switch to one of the first reverse rotation conversion unit 4101, the second reverse rotation conversion unit 4102, and the Nth reverse rotation conversion unit 4103 according to the conversion selection information 3303.
  • The inverse of the rotation transform used in the orthogonal transform unit 102 is applied by one of the inverse rotation transform units 4101 to 4103, and the result is output to the inverse discrete cosine transform unit 4104.
  • the inverse discrete cosine transform unit 4104 performs, for example, IDCT on the input signal to restore the restored prediction error signal 121.
  • Although an example using IDCT is shown here, an orthogonal transform such as the Hadamard transform or discrete sine transform may be used, or a non-orthogonal transform may be used. In any case, the inverse transform corresponding to the transform in the orthogonal transform unit 102 is applied.
  • FIG. 42 shows the syntax in the present embodiment.
  • the rotation_transform_idx shown in the syntax means the number of the rotation matrix to be used.
  • the fourth embodiment relates to a moving picture decoding apparatus.
  • the video encoding device corresponding to the video decoding device according to the present embodiment is as described in the first embodiment. That is, the moving picture decoding apparatus according to the present embodiment decodes encoded data generated by, for example, the moving picture encoding apparatus according to the first embodiment.
  • the moving picture decoding apparatus includes an input buffer 4401, an entropy decoding unit 4402, an inverse quantization unit 4403, an inverse orthogonal transform unit 4404, an addition unit 4405, a loop filter 4406, a reference image memory 4407, an intra prediction unit 4408, an inter prediction unit 4409, a prediction selection switch 4410, and an output buffer 4411.
  • the moving picture decoding apparatus decodes the encoded data 4413 stored in the input buffer 4401, stores the decoded image 4422 in the output buffer 4411, and outputs it as an output image.
  • the encoded data 4413 is output from, for example, the moving image encoding apparatus shown in FIG. 1, and is temporarily stored in the input buffer 4401 via a storage system or transmission system (not shown).
  • the entropy decoding unit 4402 decodes the encoded data 4413 based on the syntax, for each frame or field.
  • the entropy decoding unit 4402 sequentially entropy-decodes the code string of each syntax, and reproduces the encoding parameters of the encoding target block such as the prediction information 4421 including the prediction mode information and the quantization transform coefficient 4414.
  • the encoding parameter is a parameter necessary for decoding such as prediction information 4421, information on transform coefficients, information on quantization, and the like.
  • the inverse quantization unit 4403 performs inverse quantization on the quantized transform coefficient 4414 from the entropy decoding unit 4402 to obtain a restored transform coefficient 4415. Specifically, the inverse quantization unit 4403 performs inverse quantization according to the information regarding the quantization decoded by the entropy decoding unit 4402. The inverse quantization unit 4403 inputs the restored transform coefficient 4415 to the inverse orthogonal transform unit 4404.
  • the inverse orthogonal transform unit 4404 performs inverse orthogonal transform corresponding to the orthogonal transform performed on the encoding side, on the reconstruction transform coefficient 4415 from the inverse quantization unit 4403, and obtains a reconstruction prediction error signal 4416.
  • the inverse orthogonal transform unit 4404 inputs the restored prediction error signal 4416 to the adder 4405.
  • the addition unit 4405 adds the restored prediction error signal 4416 and the corresponding predicted image signal 4420 to generate a decoded image signal 4417.
  • the decoded image signal 4417 is input to the loop filter 4406.
  • the loop filter 4406 performs a deblocking filter, a Wiener filter, or the like on the input decoded image signal 4417 to generate a filtered image signal 4418.
  • the generated filtered image signal 4418 is temporarily stored in the output buffer 4411 for the output image, and is also stored in the reference image memory 4407 for the reference image signal 4419.
  • the filtered image signal 4418 stored in the reference image memory 4407 is referenced as a reference image signal 4419 by the intra prediction unit 4408 and the inter prediction unit 4409 as necessary in units of frames or fields.
  • the filtered image signal 4418 temporarily accumulated in the output buffer 4411 is output according to the output timing managed by the decoding control unit 4412.
  • the intra prediction unit 4408, the inter prediction unit 4409, and the selection switch 4410 are substantially the same or similar elements as the intra prediction unit 109, the inter prediction unit 110, and the selection switch 111 in FIG.
  • the intra prediction unit 4408 (109) performs intra prediction using the reference image signal 4419 stored in the reference image memory 4407.
  • In H.264, an intra predicted image is generated by performing pixel interpolation (copying, or copying after interpolation) along a prediction direction such as the vertical or horizontal direction, using encoded reference pixel values adjacent to the prediction target block. FIG. 4A shows the prediction directions of intra prediction in H.264, together with the arrangement relationship between reference pixels and encoding target pixels. FIG. 5C shows the predicted image generation method in mode 1 (horizontal prediction), and FIG. 5D shows that in mode 4 (diagonal down-right prediction; Intra_NxN_Diagonal_Down_Right in FIG. 4A).
  • the inter prediction unit 4409 (110) performs inter prediction using the reference image signal 4419 stored in the reference image memory 4407. Specifically, the inter prediction unit 4409 (110) obtains the motion shift amount (motion vector) between the prediction target block and the reference image signal 4419 (124) from the entropy decoding unit 4402, and performs inter prediction processing (motion compensation) based on this motion vector to generate an inter predicted image. In H.264, interpolation processing up to 1/4-pixel accuracy is possible.
  • the prediction selection switch 4410 selects the output terminal of the intra prediction unit 4408 or the output terminal of the inter prediction unit 4409 according to the decoded prediction information 4421, and inputs the intra predicted image or the inter predicted image as the predicted image signal 4420 to the addition unit 4405.
  • If the prediction information 4421 indicates intra prediction, the prediction selection switch 4410 connects the switch to the output terminal of the intra prediction unit 4408; if it indicates inter prediction, the switch is connected to the output terminal of the inter prediction unit 4409.
  • the decoding control unit 4412 controls each element of the moving picture decoding apparatus in FIG. Specifically, the decoding control unit 4412 performs various controls for decoding processing including the above-described operation.
  • the intra prediction unit 4408 has the same configuration and processing content as the intra prediction unit 109 described in the first embodiment.
  • the intra prediction unit 4408 (109) shown in FIG. 6 includes a unidirectional intra predicted image generation unit 601, a bidirectional intra predicted image generation unit 602, a prediction mode information setting unit 603, and a selection switch 604.
  • a reference image signal 4419 (124) is input from the reference image memory 4407 to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • the prediction mode information setting unit 603 selects the prediction mode generated by the unidirectional intra prediction image generation unit 601 or the bidirectional intra prediction image generation unit 602.
  • the selection switch 604 has a function of switching the output ends of the respective intra predicted image generation units according to the prediction mode 605.
  • If the prediction mode 605 is the unidirectional intra prediction mode, the output terminal of the unidirectional intra prediction image generation unit 601 is connected to the switch; if the prediction mode 605 is the bidirectional intra prediction mode, the output terminal of the bidirectional intra prediction image generation unit 602 is connected.
  • The intra predicted image generation units 601 and 602 each generate the predicted image signal 4420 (126) according to the selected prediction mode, and the generated predicted image signal 4420 (126) is output from the intra prediction unit 4408 (109).
  • FIG. 7 shows the number of prediction modes according to the block size according to the present embodiment of the present invention.
  • PuSize indicates the pixel block (prediction unit) size to be predicted, and seven types of sizes from PU_2x2 to PU_128x128 are defined.
  • IntraUniModeNum represents the number of prediction modes for unidirectional intra prediction
  • IntraBiModeNum represents the number of prediction modes for bidirectional intra prediction.
  • Number of modes is the total number of prediction modes for each pixel block (prediction unit) size.
  • FIG. 9 shows the relationship between the prediction mode and the prediction method when PuSize is PU_8x8, PU_16x16, and PU_32x32.
  • FIG. 10 shows the case where PuSize is PU_4×4,
  • and FIG. 11 shows the case where PuSize is PU_64×64 or PU_128×128.
  • IntraPredMode indicates a prediction mode number
  • IntraBipredFlag is a flag indicating whether or not the mode is bidirectional intra prediction. When the flag is 1, the prediction mode is the bidirectional intra prediction mode; when the flag is 0, it is a unidirectional intra prediction mode.
  • IntraPredTypeLX indicates the prediction type of intra prediction.
  • Intra_Vertical means that the vertical direction is the reference for prediction
  • Intra_Horizontal means that the horizontal direction is the reference for prediction. Note that 0 or 1 is applied to X in IntraPredTypeLX.
  • IntraPredTypeL0 indicates the first prediction mode of unidirectional intra prediction or bidirectional intra prediction.
  • IntraPredTypeL1 indicates the second prediction mode of bidirectional intra prediction.
  • IntraPredAngleID is an index indicating the prediction angle. The prediction angle actually used in generating the predicted value is shown in FIG. 12.
  • puPartIdx represents the index of the divided block in the quadtree division described with reference to FIG. 3B.
  • For example, when IntraPredMode is 4, IntraPredTypeL0 is Intra_Vertical.
  • the prediction mode information setting unit 603 converts the above-described prediction information corresponding to the designated prediction mode 605 to the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602 under the control of the decoding control unit 4412. And the prediction mode 605 is output to the selection switch.
  • the unidirectional intra predicted image generation unit 601 has a function of generating the predicted image signal 4420 (126) for the plurality of prediction directions shown in FIG. 8. In FIG. 8, there are 33 prediction directions with respect to the vertical and horizontal coordinate axes indicated by the bold lines.
  • the direction of a typical prediction angle indicated by H.264 is indicated by an arrow.
  • 33 kinds of prediction directions are prepared in a direction in which a line is drawn from the origin to a mark indicated by a diamond.
  • For example, when IntraPredMode is 4, IntraPredAngleIDL0 is 4.
  • An arrow indicated by a dotted line in FIG. 8 indicates a prediction mode whose prediction type is Intra_Vertical, and an arrow indicated by a solid line indicates a prediction mode whose prediction type is Intra_Horizontal.
  • FIG. 12 shows the relationship between IntraPredAngleIDLX and intraPredAngle used for predictive image value generation.
  • intraPredAngle indicates a prediction angle that is actually used when a predicted value is generated.
  • a prediction value generation method is expressed by Expression (3).
  • BLK_SIZE indicates the size of the pixel block (prediction unit)
  • ref [] indicates an array in which reference image signals are stored.
  • pred (k, m) indicates the generated predicted image signal 4420 (126).
  • a predicted value can be generated by a similar method according to the table of FIG. 12.
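Equation (3) itself is not reproduced in this extract, so the following sketch illustrates vertical-type angular prediction in the well-known HEVC style (displacement in 1/32-pel units per row, two-tap interpolation from the reference array ref[]). The function name, the 1/32-pel granularity, and the restriction to non-negative angles are assumptions of this sketch, not the patent's Equation (3) verbatim.

```python
def angular_predict(ref, blk_size, pred_angle):
    """Vertical-type angular intra prediction (HEVC-style sketch).

    ref        : top reference row; must hold at least 2*blk_size + 1 samples.
    pred_angle : non-negative prediction angle in 1/32-pel units per row
                 (analogous to intraPredAngle derived from IntraPredAngleIDLX).
    """
    pred = [[0] * blk_size for _ in range(blk_size)]
    for y in range(blk_size):
        disp = (y + 1) * pred_angle
        offset = disp >> 5   # integer part of the displacement
        frac = disp & 31     # fractional part (1/32 pel)
        for x in range(blk_size):
            a = ref[x + offset + 1]
            b = ref[x + offset + 2]
            # two-tap linear interpolation with rounding
            pred[y][x] = ((32 - frac) * a + frac * b + 16) >> 5
    return pred
```

With pred_angle = 0 this degenerates to plain vertical copying of the reference row, which matches the behavior expected of the vertical prediction type.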
  • The above is the description of the unidirectional intra predicted image generation unit 601 in the present embodiment of the present invention.
  • FIG. 13 shows a block diagram of the bidirectional intra-predicted image generation unit 602.
  • the bidirectional intra predicted image generation unit 602 includes a first unidirectional intra predicted image generation unit 1301, a second unidirectional intra predicted image generation unit 1302, and a weighted average unit 1303.
  • From the input reference image signal 4419 (124), two unidirectional intra prediction images are generated, and their weighted average is taken to generate the predicted image signal 4420 (126).
  • the functions of the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are the same. In either case, a prediction image signal corresponding to a prediction mode given according to prediction mode information controlled by the encoding control unit 115 is generated.
  • a first predicted image signal 1304 is output from the first unidirectional intra predicted image generation unit 1301, and a second predicted image signal 1305 is output from the second unidirectional intra predicted image generation unit 1302.
  • Each predicted image signal is input to the weighted average unit 1303, and weighted average processing is performed.
  • the table in FIG. 14 is a table for deriving two unidirectional intra prediction modes from the bidirectional intra prediction mode.
  • BiPredIdx is derived using Equation (4).
  • the first predicted image signal 1304 and the second predicted image signal 1305 generated by the first unidirectional intra predicted image generation unit 1301 and the second unidirectional intra predicted image generation unit 1302 are sent to the weighted average unit 1303. Entered.
  • the weighted average unit 1303 calculates a Euclidean distance or a city area distance (Manhattan distance) based on the prediction directions of IntraPredModeL0 and IntraPredModeL1, and derives a weight component used in the weighted average process.
  • the weight component of each pixel is represented by the reciprocal of the Euclidean distance or the city distance from the reference pixel used for prediction, and is generalized by Expression (5).
  • For the Euclidean distance, ΔL is expressed by Equation (6).
  • For the city-block distance, ΔL is expressed by Equation (7).
  • the weight table for each prediction mode is generalized to Equation (8). Therefore, the final prediction signal at the pixel position n is expressed by Equation (9).
  • the prediction signal is generated by selecting two prediction modes for generating the prediction pixel.
  • a prediction value may be generated by selecting three or more prediction modes.
  • the ratio of the reciprocal of the spatial distance from the reference pixel to the prediction pixel may be set as the weighting factor.
  • In the above, the reciprocal of the Euclidean distance or the city-block distance from the reference pixel used in the prediction mode is used directly as the weight component; as another embodiment, the weight component may be set using a distribution model in which the Euclidean distance or city-block distance from the reference pixel is the variable.
  • the distribution model uses at least one of a linear model, an M-th order function (M ≥ 1), a nonlinear function such as a one-sided Laplace distribution or a one-sided Gaussian distribution, or a fixed value independent of the distance from the reference pixel.
  • the weight component is expressed by Equation (10).
  • the weight component is expressed by Expression (11).
  • an isotropic correlation model obtained by modeling an autocorrelation function, an elliptic correlation model, a generalized Gaussian model obtained by generalizing a Laplace function or a Gaussian function may be used as the weight component model.
  • If Equation (5), Equation (8), Equation (10), and Equation (11) are computed each time a predicted image is generated, multiple multipliers are required and the hardware scale increases. For this reason, the circuit scale required for the calculation can be reduced by computing the weight components in advance according to the relative distance for each prediction mode and holding them as tables.
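The table-precomputation idea can be sketched as follows: the reciprocal-distance weights are computed once per mode pair and block size, so no divisions remain in the per-pixel prediction loop. The function name and the exact normalization w0 = (1/d0) / (1/d0 + 1/d1) are illustrative renderings of the reciprocal-distance rule described above.

```python
import numpy as np

def build_weight_table(dist_l0, dist_l1):
    """Precompute per-pixel blend weights for two unidirectional modes.

    dist_l0 / dist_l1 : per-pixel distances (Euclidean or city-block) from
    the reference pixels of modes L0 and L1. Returns the L0 weight table;
    mode L1 implicitly gets (1 - w0). Computed once and cached, so the
    prediction loop needs no runtime division.
    """
    d0 = np.asarray(dist_l0, dtype=np.float64)
    d1 = np.asarray(dist_l1, dtype=np.float64)
    return (1.0 / d0) / (1.0 / d0 + 1.0 / d1)
```

A pixel equidistant from both modes' references gets weight 0.5; a pixel three times farther from the L1 reference leans 0.75 toward L0, matching the reciprocal-distance intuition.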
  • a method for deriving the weight component when the city distance is used will be described.
  • the city-block distance ΔL_L0 of IntraPredModeL0 and the city-block distance ΔL_L1 of IntraPredModeL1 are calculated from Equation (7).
  • the relative distance varies depending on the prediction direction of the two prediction modes.
  • the distance can be derived using Expression (6) or Expression (7) according to each prediction mode.
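For illustration, the two distance measures referenced by Equation (6) and Equation (7) can be sketched as follows; the coordinate convention is an assumption.

```python
import math

# Illustrative distance measures from a reference pixel (rx, ry) to a
# predicted pixel (px, py); the coordinate convention is an assumption.

def euclidean_distance(rx, ry, px, py):   # cf. Equation (6)
    return math.hypot(px - rx, py - ry)

def city_block_distance(rx, ry, px, py):  # cf. Equation (7)
    return abs(px - rx) + abs(py - ry)
```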
  • the table sizes of these distance tables may increase.
  • FIG. 17 shows the mapping of IntraPredModeLX used for distance table derivation.
• an example is shown in which tables are prepared only for the prediction modes at 45-degree intervals and for DC prediction, and the other prediction angles are mapped to the nearest prepared reference prediction mode. When two prepared modes are equally close, the angle is mapped to the one with the smaller index.
  • the prediction mode shown in “MappedIntraPredMode” is referred to from FIG. 17, and a distance table can be derived.
• the relative distance for each pixel in the two prediction modes is calculated using Equation (12).
• the final prediction signal at pixel position n is represented by Equation (13).
• when the weight component is scaled in advance and converted to integer arithmetic, it can be expressed by Equation (14).
• for example, WM = 1024, Offset = 512, and SHIFT = 10.
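A minimal sketch of the fixed-point blend in the style of Equation (14), using WM = 1024, Offset = 512, and SHIFT = 10; the reciprocal-of-distance weighting follows the text, while the exact rounding of the patent's equation is an assumption.

```python
# Hedged sketch of the fixed-point blend in the style of Equation (14).
WM, OFFSET, SHIFT = 1024, 512, 10

def bipred_pixel(p_l0, p_l1, dist_l0, dist_l1):
    """Blend two directional predictions by reciprocal relative distance.

    Uses 1/d0 / (1/d0 + 1/d1) == d1 / (d0 + d1) to stay in integers.
    """
    w_l0 = (WM * dist_l1 + (dist_l0 + dist_l1) // 2) // (dist_l0 + dist_l1)
    return (w_l0 * p_l0 + (WM - w_l0) * p_l1 + OFFSET) >> SHIFT
```

With equal distances the blend reduces to a simple average of the two directional predictions.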
• FIGS. 18A and 18B show examples in which the weight components using the one-sided Laplace distribution model in the present embodiment are tabulated.
• weight components for other PuSizes can also be derived using Equation (5), Equation (8), Equation (10), and Equation (11). The above is the details of the intra prediction unit 109 according to the present embodiment.
  • the syntax indicates the structure of encoded data (for example, encoded data 127 in FIG. 1) when the moving image decoding apparatus 4400 decodes moving image data.
• the image encoding apparatus represented by the first embodiment generates this encoded data using the same syntax structure.
  • FIG. 20 shows an example of syntax 2000 used by the image coding apparatus in FIG. Since the syntax 2000 is the same as that of the first embodiment, detailed description thereof is omitted.
  • FIG. 22A shows an example of the prediction unit syntax.
  • Pred_mode in the figure indicates the prediction type of the prediction unit.
  • MODE_INTRA indicates that the prediction type is intra prediction.
• intra_split_flag is a flag indicating whether or not the prediction unit is further divided into four prediction units. When intra_split_flag is 1, the prediction unit is divided into four prediction units, each half the original size vertically and horizontally. When intra_split_flag is 0, the prediction unit is not divided.
• intra_luma_bipred_flag[i] is a flag indicating whether the prediction mode IntraPredMode applied to the prediction unit is a unidirectional intra prediction mode or a bidirectional intra prediction mode. i indicates the position of the divided prediction unit: when intra_split_flag is 0, i is 0, and when intra_split_flag is 1, i is 0 to 3. The flag is set with the value of IntraBipredFlag of the prediction unit shown in FIGS.
• when intra_luma_bipred_flag[i] is 1, this indicates that the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], information that identifies the used bidirectional intra prediction mode among the plurality of prepared bidirectional intra prediction modes, is decoded.
• intra_luma_bipred_mode[i] may be decoded with a fixed-length code according to the number of bidirectional intra prediction modes IntraBiModeNum shown in FIG. 7, or may be decoded using a predetermined code table.
• when intra_luma_bipred_flag[i] is 0, it indicates that the prediction unit uses unidirectional intra prediction, and the prediction mode is predictively decoded from adjacent blocks.
• prev_intra_luma_unipred_flag[i] is a flag indicating whether or not the prediction value MostProbable of the prediction mode calculated from the adjacent blocks and the intra prediction mode of the prediction unit are the same. Details of the MostProbable calculation method will be described later. When prev_intra_luma_unipred_flag[i] is 1, it indicates that MostProbable and the intra prediction mode IntraPredMode are equal.
• when prev_intra_luma_unipred_flag[i] is 0, it indicates that MostProbable and the intra prediction mode IntraPredMode differ, and rem_intra_luma_unipred_mode[i], information that further specifies which intra prediction mode other than MostProbable is used, is decoded. rem_intra_luma_unipred_mode[i] may be decoded with a fixed-length code according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIG. 7, or may be decoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode[i] is calculated using Equation (16).
• next, the derivation of MostProbable, which is the predicted value of the prediction mode, will be described.
  • MostProbable is calculated according to Equation (17).
• Min(x, y) is a function that outputs the smaller of the inputs x and y.
  • intraPredModeA and intraPredModeB indicate intra prediction modes of prediction units adjacent to the left and above the decoded prediction unit.
  • intraPredModeA and intraPredModeB are collectively expressed as intraPredModeN.
  • N is set to A or B.
• a method of calculating intraPredModeN will be described using the flowchart shown in FIG. First, it is determined whether the coding tree unit to which the adjacent prediction unit belongs can be used (step S2301). If the coding tree unit cannot be used (NO in S2301), intraPredModeN is set to "−1", which indicates that reference is not possible.
• if the coding tree unit can be used, it is next determined whether or not intra prediction is applied to the adjacent prediction unit (step S2302).
• if the adjacent prediction unit is not intra prediction (NO in S2302), "2", meaning "Intra_DC", is set in intraPredModeN.
• if the adjacent prediction unit is intra prediction (YES in S2302) and is not bidirectional intra prediction, that is, in the case of unidirectional intra prediction (NO in S2303), the prediction mode IntraPredMode of the adjacent prediction unit is set in intraPredModeN.
• if the adjacent prediction unit is bidirectional intra prediction (YES in S2303), intraPredModeN is calculated using Equation (18).
  • IntraUniModeNum is the number of unidirectional intra prediction modes determined by the size of the adjacent prediction unit, and an example thereof is shown in FIG.
• MappedBi2Uni(List, idx) is a table for converting the bidirectional intra prediction mode into the unidirectional intra prediction mode.
• List specifies which of the two unidirectional intra prediction modes constituting the bidirectional intra prediction mode is referenced; List0 corresponds to IntraPredTypeL0[] shown in FIGS. 9, 10, and 11.
  • FIG. 14 shows an example of the conversion table. The numerical values in the figure correspond to IntraPredMode shown in FIGS.
• MappedMostProbable() is a table for converting MostProbable, and an example is shown in FIG. 24.
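The flowchart above (steps S2301 to S2303) and the MostProbable rule of Equation (17) can be sketched as follows; the function names and the bidirectional-to-unidirectional mapping table are placeholders, not the patent's exact tables.

```python
INTRA_DC = 2  # "Intra_DC"

def intra_pred_mode_n(available, is_intra, is_bipred, mode, mapped_bi2uni=None):
    """Derive intraPredModeN for an adjacent prediction unit (placeholder API)."""
    if not available:        # S2301: coding tree unit cannot be used
        return -1            # "reference not possible"
    if not is_intra:         # S2302: neighbour is not intra predicted
        return INTRA_DC
    if not is_bipred:        # S2303: unidirectional intra prediction
        return mode
    # bidirectional: map to a unidirectional mode, cf. Equation (18)
    return mapped_bi2uni[mode]

def most_probable(mode_a, mode_b):
    return min(mode_a, mode_b)  # Equation (17)
```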
• luma_pred_mode_code_type[i] indicates the type of the prediction mode IntraPredMode applied to the prediction unit: 0 (IntraUnipredMostProb) indicates unidirectional intra prediction in which the intra prediction mode is the same as MostProbable, 1 (IntraUnipred) indicates unidirectional intra prediction in which the intra prediction mode differs from MostProbable, and 2 (IntraBipred) indicates a bidirectional intra prediction mode.
• FIG. 24 distinguishes bidirectional intra prediction modes from unidirectional intra prediction, and, for unidirectional intra prediction, whether or not the prediction mode is the same as MostProbable.
  • FIG. 25 shows an example of assignment of the number of modes according to the meaning corresponding to luma_pred_mode_code_type, bin, and the mode configuration shown in FIG.
• when luma_pred_mode_code_type[i] is 0, the intra prediction mode is the MostProbable mode, so no further information needs to be decoded.
• when luma_pred_mode_code_type[i] is 1, rem_intra_luma_unipred_mode[i], information that specifies which mode other than MostProbable is the intra prediction mode IntraPredMode, is decoded.
• rem_intra_luma_unipred_mode[i] may be decoded with a fixed-length code according to the number of unidirectional intra prediction modes IntraUniModeNum shown in FIG. 7, or may be decoded using a predetermined code table. From the intra prediction mode IntraPredMode, rem_intra_luma_unipred_mode[i] is calculated using Equation (16). Further, when luma_pred_mode_code_type[i] is 2, it indicates that the prediction unit uses bidirectional intra prediction, and intra_luma_bipred_mode[i], information that identifies the used bidirectional intra prediction mode among the prepared bidirectional intra prediction modes, is decoded.
  • intra_luma_bipred_mode [i] may be decoded in equal length according to the bidirectional intra prediction mode number IntraBiModeNum shown in FIG. 7, or may be decoded using a predetermined code table.
  • the above is the syntax configuration according to the present embodiment.
• yet another example of the prediction unit syntax is shown in FIG. 22D.
  • pred_mode and intra_split_flag are the same as the syntax example described above, and thus description thereof is omitted.
  • Intra_bipred_flag is a flag indicating whether or not bidirectional intra prediction can be used in the decoding prediction unit. When intra_bipred_flag is 0, it indicates that bi-directional intra prediction is not used in the decoding prediction unit. Even when intra_split_flag is 1, that is, when the decoded prediction unit is further divided into four, bi-directional intra prediction is not used in all prediction units, and only uni-directional intra prediction is effective.
• when intra_bipred_flag is 1, it indicates that bidirectional intra prediction can be used in the decoding prediction unit. Even when intra_split_flag is 1, that is, when the decoded prediction unit is further divided into four, bidirectional intra prediction can be selected in addition to unidirectional intra prediction in all prediction units.
• when bidirectional intra prediction is not used, intra_bipred_flag is decoded as 0 to disable bidirectional intra prediction; since the amount of code required for decoding the related syntax can then be reduced, the coding efficiency is improved.
• still another example relating to the prediction unit syntax is shown in FIG. 22E.
  • intra_bipred_flag is a flag indicating whether or not bi-directional intra prediction can be used in the decoding prediction unit, and is the same as the above-described intra_bipred_flag, and thus the description thereof is omitted.
  • FIG. 26 shows an intra prediction unit 4408 (109) when adaptive reference pixel filtering is used. It differs from the intra prediction unit 4408 (109) shown in FIG. 6 in that a reference pixel filter unit 2601 is added.
  • the reference pixel filter unit 2601 receives the reference image signal 4419 (124) and the prediction mode 605, performs adaptive filter processing described later, and outputs a filtered reference image signal 2602.
  • the filtered reference image signal 2602 is input to the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602.
  • the configuration and processing other than the reference pixel filter unit 2601 are the same as those of the intra prediction unit 4408 (109) shown in FIG.
  • the reference pixel filter unit 2601 determines whether or not to filter reference pixels used for intra prediction according to the reference pixel filter flag and the intra prediction mode included in the prediction mode 605.
  • the reference pixel filter flag is a flag indicating whether or not reference pixels are filtered when the intra prediction mode IntraPredMode is a value other than “Intra_DC”.
• when IntraPredMode is "Intra_DC", the reference pixels are not filtered and the reference pixel filter flag is set to 0.
• otherwise, a filtered reference image signal 2602 is calculated by the filtering shown in Equation (20).
• p[x, y] indicates a reference pixel before filtering, and pf[x, y] indicates a reference pixel after filtering.
  • PuPartSize indicates the size (pixel) of the prediction unit.
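Equation (20) itself is not reproduced in this text; as an assumption, the sketch below uses the common 3-tap [1, 2, 1] smoothing filter to illustrate reference pixel filtering of this kind.

```python
# Assumed 3-tap [1, 2, 1] smoothing of a 1-D array of reference pixels;
# the end pixels are left unfiltered. This stands in for Equation (20),
# whose exact taps are not given in this text.

def filter_reference_pixels(p):
    pf = list(p)
    for x in range(1, len(p) - 1):
        pf[x] = (p[x - 1] + 2 * p[x] + p[x + 1] + 2) >> 2
    return pf
```

Flat regions pass through unchanged, while isolated peaks are attenuated, which is the intended smoothing effect.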
• FIGS. 27A and 27B show the prediction unit syntax structure when adaptive reference pixel filtering is performed.
  • FIG. 27A adds the syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 22A.
  • FIG. 27B adds syntax intra_luma_filter_flag [i] related to the adaptive reference pixel filter to FIG. 22C.
• intra_luma_filter_flag[i] is decoded when the intra prediction mode IntraPredMode[i] is other than Intra_DC. When the flag is 0, it indicates that the reference pixels are not filtered; when intra_luma_filter_flag[i] is 1, it indicates that reference pixel filtering is applied. When IntraPredMode[i] is 0 to 2, intra_luma_filter_flag[i] need not be decoded; in this case, intra_luma_filter_flag[i] is set to 0.
  • intra_luma_filter_flag [i] described above may be added in the same meaning for the other syntax structures shown in FIGS. 22B, 22D, and 22E.
  • FIG. 30 shows a block diagram of the intra prediction unit 4408 (109) when combined with composite intra prediction. The difference is that a composite intra predicted image generation unit 2901, a selection switch 2902, and a decoded pixel buffer 3001 are added to the intra prediction unit 4408 (109) shown in FIG.
• when bidirectional intra prediction and composite intra prediction are combined, the selection switch 604 first switches between the output terminals of the unidirectional intra predicted image generation unit 601 and the bidirectional intra predicted image generation unit 602 according to the prediction mode information controlled by the decoding control unit 4412.
• hereinafter, the output predicted image signal 4420 (126) is referred to as the direction prediction image signal.
  • the direction prediction image signal is input to the composite intra prediction image generation unit 2901, and a prediction image signal 4420 (126) in the composite intra prediction is generated.
• the composite intra predicted image generation unit 2901 will be described later.
• the selection switch 2902 selects which of the composite intra prediction image signal 4420 (126) and the direction prediction image signal is used, according to the composite intra prediction application flag in the prediction mode information controlled by the decoding control unit 4412, and outputs the final predicted image signal 4420 (126) of the intra prediction unit 4408 (109).
• when the composite intra prediction application flag is 1, the predicted image signal 4420 (126) output from the composite intra predicted image generation unit 2901 becomes the final predicted image signal; when the flag is 0, the direction prediction image signal becomes the finally output predicted image signal 126.
  • the composite intra prediction image generation unit 2901 includes a pixel level prediction signal generation unit 3101 and a composite intra prediction calculation unit 3102.
  • the pixel level prediction signal generation unit 3101 predicts the prediction target pixel X from adjacent pixels and outputs a pixel level prediction signal 3103.
  • the adjacent pixel indicates the decoded image signal 4417.
• the pixel level prediction signal 3103 (X) of the prediction target pixel is calculated using Equation (21).
  • the coefficients related to A, B, and C may be other values.
• the composite intra prediction calculation unit 3102 takes a weighted average of the direction prediction image signal 4420 (126) (X′) and the pixel level prediction signal 3103 (X), and outputs the final predicted image signal 4420 (126) (P). Specifically, Equation (22) is used.
  • the weighting factor W may be switched according to the position of the prediction pixel in the prediction unit.
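As a hedged illustration of composite intra prediction, the sketch below assumes Equation (21) to be the gradient predictor X = A + B − C over the left (A), above (B), and above-left (C) adjacent decoded pixels, and Equation (22) to be a weighted average of the directional prediction X′ and X; both forms are assumptions, since the equations are not reproduced in this text.

```python
# A, B, C: left, above, and above-left decoded pixels (assumed layout).

def pixel_level_prediction(a, b, c):
    return a + b - c  # assumed form of Equation (21)

def composite_prediction(x_dir, x_pix, w=0.5):
    # Weighted average of the direction prediction image signal X' and the
    # pixel level prediction signal X, cf. Equation (22); W may be switched
    # according to the pixel position within the prediction unit.
    return round(w * x_dir + (1.0 - w) * x_pix)
```

Setting w close to 1 near the reference pixels and smaller far from them implements the position-dependent weighting described above.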
• a prediction image signal generated using unidirectional intra prediction or bidirectional intra prediction derives its prediction values from already-encoded, spatially adjacent reference pixels positioned to the left of or above the prediction unit.
• the absolute value of the prediction error therefore tends to increase as the distance from the reference pixels increases. Prediction accuracy can thus be improved by increasing the weighting coefficient of the direction prediction image signal 126 relative to the pixel level prediction signal 3103 when the predicted pixel is close to the reference pixels, and decreasing it when the distance is large.
• however, at the time of encoding, the prediction error signal is generated using the input image signal.
• in that case the pixel level prediction signal 3103 is derived from the input image signal, so even if the spatial distance between the reference pixel position and the prediction pixel position increases, the pixel level prediction signal 3103 remains more accurate than the direction prediction image signal 126.
• if the weighting coefficient between the direction prediction image signal 126 and the pixel level prediction signal 3103 is simply increased near the reference pixels and decreased far from them, the prediction error at encoding time is reduced, but the prediction value at the time of encoding and the prediction value at the time of local decoding differ, lowering prediction accuracy. Therefore, especially when the value of the quantization parameter is large, the decrease in coding efficiency caused by such an open-loop mismatch can be suppressed by setting the value of W small as the spatial distance between the reference pixel position and the predicted pixel position increases.
• FIGS. 32A and 32B show the prediction unit syntax structure when composite intra prediction is performed.
  • FIG. 32A is different from FIG. 22A in that a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction is added. This is equivalent to the above-described composite intra prediction application flag.
  • FIG. 32B adds a syntax combined_intra_pred_flag for switching presence / absence of composite intra prediction to FIG. 22C.
• when combined_intra_pred_flag is 1, the selection switch 2902 shown in FIG. 30 is connected to the output terminal of the composite intra prediction image generation unit 2901.
• when combined_intra_pred_flag is 0, the selection switch 2902 shown in FIG. 30 is connected to the output terminal of whichever of the unidirectional intra prediction image generation unit 601 and the bidirectional intra prediction image generation unit 602 the selection switch 604 is connected to.
• combined_intra_pred_flag described above may be added with the same meaning to the other syntax structures shown in FIGS. 22B, 22D, and 22E.
• as described above, the moving picture decoding apparatus according to the present embodiment includes an intra prediction unit that is the same as or similar to that of the video encoding device according to the first embodiment, and therefore the same or similar effects as those of the video encoding device according to the first embodiment can be obtained.
  • the video decoding device differs from the video decoding device according to the above-described fourth embodiment in the details of inverse orthogonal transform.
  • the same parts as those in the fourth embodiment are denoted by the same reference numerals, and different parts will be mainly described.
  • the moving picture coding apparatus corresponding to the moving picture decoding apparatus according to the present embodiment is as described in the second embodiment.
  • FIG. 45 is a block diagram showing a moving picture decoding apparatus according to the fifth embodiment.
• the changes from the moving picture decoding apparatus according to the fourth embodiment are that a conversion selection unit 4502 and a coefficient order restoration unit 4501 are added, and that the internal structure of the inverse orthogonal transform unit 4404 differs.
• the inverse orthogonal transform unit 4404 will be described with reference to FIG. 35. Note that the inverse orthogonal transform unit 4404 has the same configuration as the inverse orthogonal transform unit 105 according to the second embodiment. Therefore, in the present embodiment, the conversion selection information 3303 in FIG. 35 is replaced with the conversion selection information 4504, the restored transform coefficient 120 with the restored transform coefficient 4415, and the restored prediction error signal 121 with the restored prediction error signal 4416.
  • the conversion selection switch 3504 has a function of selecting the output terminal of the inverse quantization unit 4403 according to the input conversion selection information 4504.
  • the conversion selection information 4504 is one of information controlled by the decoding control unit 4412, and is set by the conversion selection unit 4502 in accordance with the prediction information 4421 (125).
• when the transform selection information 4504 indicates the first orthogonal transform, the output terminal of the switch is connected to the first inverse orthogonal transform unit 3501; when it indicates the second orthogonal transform, the output terminal is connected to the second inverse orthogonal transform unit 3502; and when it indicates the Nth orthogonal transform, the output terminal is connected to the Nth inverse orthogonal transform unit 3503.
• prediction information 4421 (125), which is controlled by the decoding control unit 4412 and decoded by the entropy decoding unit 4402, is input to the transformation selection unit 4502.
• the transform selection unit 4502 has a function of setting MappedTransformIdx, information indicating which inverse orthogonal transform is to be used for which prediction mode.
  • FIG. 36 shows conversion selection information 4504 (MappedTransformIdx) in intra prediction.
  • FIG. 46 shows a block diagram of the coefficient order restoration unit 4501.
• the coefficient order restoration unit 4501 has a function of performing the inverse of the scan order conversion performed by the coefficient order control unit 3302 according to the second embodiment.
• the coefficient order restoration unit 4501 includes a coefficient order selection switch 4604, a first coefficient order inverse transform unit 4601, a second coefficient order inverse transform unit 4602, and an Nth coefficient order inverse transform unit 4603.
• the coefficient order selection switch 4604 has a function of connecting the output terminal of the switch to one of the coefficient order inverse transform units 4601 to 4603 in accordance with, for example, MappedTransformIdx shown in FIG. 36.
• the N types of coefficient order inverse transform units 4601 to 4603 have a function of inversely transforming one-dimensional data into two-dimensional data with respect to the quantized transform coefficient sequence 4503 decoded by the entropy decoding unit 4402.
  • two-dimensional data is converted into one-dimensional data using a zigzag scan.
• the quantized transform coefficients obtained by quantizing the orthogonally transformed coefficients have the property that the positions where non-zero transform coefficients occur in the block are biased. This tendency of non-zero transform coefficient occurrence differs for each prediction direction of intra prediction; however, when different videos are encoded, the occurrence tendency for the same prediction direction has similar properties. Therefore, when transforming two-dimensional data into one-dimensional data (2D-1D conversion), the coded information of the transform coefficients can be reduced by entropy coding preferentially from transform coefficients at positions where the occurrence probability of non-zero transform coefficients is high. Conversely, on the decoding side, the one-dimensional data must be restored to two-dimensional data by scanning in the corresponding reference scan order.
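The 2D-1D zigzag scan mentioned above and its 1D-2D inverse on the decoding side can be sketched as follows for an N×N coefficient block.

```python
def zigzag_order(n):
    """(row, col) positions of an n x n block in zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def scan_2d_to_1d(block):                 # encoder side: 2D-1D conversion
    return [block[r][c] for r, c in zigzag_order(len(block))]

def restore_1d_to_2d(seq, n):             # decoder side: 1D-2D restoration
    block = [[0] * n for _ in range(n)]
    for value, (r, c) in zip(seq, zigzag_order(n)):
        block[r][c] = value
    return block
```

Because both sides derive the same scan order, `restore_1d_to_2d` exactly inverts `scan_2d_to_1d`.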
  • the coefficient order restoration unit 4501 may dynamically update the scan order in the 1D-2D conversion.
  • the configuration of the coefficient order restoration unit 4501 that performs such an operation is illustrated in FIG.
• the coefficient order restoration unit 4501 includes an occurrence frequency counting unit 4701 and an updating unit 4702 in addition to the configuration of FIG. 46. The coefficient order inverse transform units 4601 to 4603 are the same except that their 1D-2D scan order is updated by the updating unit 4702.
  • the occurrence frequency counting unit 4701 creates a histogram 4704 of the number of occurrences of non-zero coefficients in each element of the quantized transform coefficient sequence 4503 for each prediction mode.
  • the occurrence frequency counting unit 4701 inputs the created histogram 4704 to the update unit 4702.
  • the update unit 4702 updates the coefficient order based on the histogram 4704 at a predetermined timing.
  • the timing is, for example, the timing when the coding process of the coding tree unit is finished, the timing when the coding process for one line in the coding tree unit is finished, or the like.
  • the update unit 4702 refers to the histogram 4704 and updates the coefficient order with respect to a prediction mode having an element in which the number of occurrences of non-zero coefficients is counted more than a threshold. For example, the update unit 4702 updates the prediction mode having an element in which the occurrence of a non-zero coefficient is counted 16 times or more. By providing a threshold value for the number of occurrences, the coefficient order is updated globally, so that it is difficult to converge to a local optimum solution.
  • the update unit 4702 sorts the elements in descending order of the occurrence frequency of the non-zero coefficient regarding the prediction mode to be updated. Sorting can be realized by existing algorithms such as bubble sort and quick sort. Then, the update unit 4702 inputs the update coefficient order 4703 indicating the order of the sorted elements to the coefficient order inverse transform units 4601 to 4603 corresponding to the prediction mode to be updated.
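A minimal sketch of this update step, assuming a per-mode histogram of non-zero counts and the example threshold of 16 occurrences; the stable sort keeps the previous order among positions with equal counts.

```python
def update_scan_order(histogram, scan_order, threshold=16):
    """histogram[pos]: non-zero occurrence count at coefficient position pos."""
    if max(histogram) < threshold:
        return scan_order  # no position counted often enough: keep the order
    # sort positions by descending occurrence count (Python's sort is
    # stable, so ties retain their previous relative order)
    return sorted(scan_order, key=lambda pos: -histogram[pos])
```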
  • each inverse conversion unit performs 1D-2D conversion in accordance with the updated scan order.
  • the initial scan order of each 1D-2D conversion unit needs to be determined in advance.
• the initial scan order is the same as that of the coefficient order control unit 3302 of the moving picture coding apparatus shown in FIG. When the scan order is dynamically updated in this way, stable and high encoding efficiency can be expected even when the tendency of non-zero coefficient occurrence in the quantized transform coefficients changes according to the properties of the predicted image, the quantization information (quantization parameter), and so on. Specifically, the generated code amount of run-length encoding in the entropy encoding unit 113 can be suppressed.
  • the syntax configuration in the present embodiment is the same as that in the fourth embodiment.
• the conversion selection unit 4502 can also select MappedTransformIdx separately from the prediction information 4421.
  • information indicating which nine types of orthogonal transforms or inverse orthogonal transforms are used is set in the decoding control unit 4412 and used by the inverse orthogonal transform unit 4404.
  • FIG. 39 shows an example of syntax in the present embodiment.
• directional_transform_idx shown in the syntax indicates which of the N orthogonal transforms has been selected.
• as described above, the moving picture decoding apparatus according to the present embodiment includes an inverse orthogonal transform unit that is the same as or similar to that of the video encoding device according to the second embodiment, and therefore the same or similar effects as those of the video encoding device according to the second embodiment can be obtained.
  • the video decoding device differs from the video decoding device according to the above-described fourth embodiment in the details of inverse orthogonal transform.
  • the same parts as those in the fourth embodiment are denoted by the same reference numerals, and different parts will be mainly described.
  • the moving picture encoding apparatus corresponding to the moving picture decoding apparatus according to the present embodiment is as described in the third embodiment.
• as an embodiment related to the inverse orthogonal transform unit 105, the rotational transformation process described in JCTVC-B205_draft002, section 5.3.5.2 "Rotational transformation process", JCT-VC 2nd Meeting, Geneva, July 2010, may be combined.
  • FIG. 41 is a block diagram of inverse orthogonal transform section 4404 (105) according to the present embodiment.
• the inverse orthogonal transform unit 4404 (105) newly includes a first inverse rotation transform unit 4101, a second inverse rotation transform unit 4102, an Nth inverse rotation transform unit 4103, and an inverse discrete cosine transform unit 4104, together with a selection switch 3504.
  • the restored transform coefficient 4415 (120) input after the inverse quantization processing is input to the transform selection switch 3504.
• the conversion selection switch 3504 connects the output terminal of the switch to one of the first inverse rotation transform unit 4101, the second inverse rotation transform unit 4102, and the Nth inverse rotation transform unit 4103 according to the conversion selection information 4504 (3303). Thereafter, the selected inverse rotation transform unit 4101 to 4103 performs the inverse of the rotation transform used in the orthogonal transform unit 102 shown in FIG. 40, and outputs the result to the inverse discrete cosine transform unit 4104.
  • the inverse discrete cosine transform unit 4104 performs, for example, IDCT on the input signal to restore the restored prediction error signal 4416 (121).
• instead of the discrete cosine transform, an orthogonal transform such as the Hadamard transform or the discrete sine transform may be used, or a non-orthogonal transform may be used; in each case, the corresponding inverse transformation is performed in conjunction with the orthogonal transformation unit 102 shown in FIG. 40.
  • FIG. 42 shows the syntax in the present embodiment.
  • the rotation_transform_idx shown in the syntax means the number of the rotation matrix to be used.
• as described above, the moving picture decoding apparatus according to the present embodiment includes an inverse orthogonal transform unit that is the same as or similar to that of the image encoding device according to the third embodiment, and therefore the same or similar effects as those of the image encoding device according to the third embodiment can be obtained.
  • encoding and decoding may be performed sequentially from the lower right to the upper left, or encoding and decoding may be performed so as to draw a spiral from the center of the screen toward the screen end.
  • encoding and decoding may be performed in order from the upper right to the lower left, or encoding and decoding may be performed so as to draw a spiral from the screen edge toward the center of the screen.
• the prediction target blocks do not have to have a uniform block shape.
  • the prediction target block (prediction unit) size may be a 16 ⁇ 8 pixel block, an 8 ⁇ 16 pixel block, an 8 ⁇ 4 pixel block, a 4 ⁇ 8 pixel block, or the like. Also, it is not necessary to unify all the block sizes within one coding tree unit, and a plurality of different block sizes may be mixed.
  • the amount of code required to encode or decode the division information increases as the number of divisions increases. Therefore, it is desirable to select the block size in consideration of the balance between the code amount of the division information and the quality of the locally decoded image or the decoded image.
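This balance is conventionally expressed as a Lagrangian rate-distortion cost J = D + λR. The sketch below shows a split-versus-no-split decision in those terms; it is hypothetical Python with an illustrative one-bit split flag and made-up numbers, not the embodiment's cost model.

```python
def best_split(dist_whole, bits_whole, dists_sub, bits_sub,
               split_flag_bits, lam):
    """Choose between coding a block whole or splitting it into sub-blocks
    by comparing Lagrangian costs J = D + lambda * R. The split flag that
    signals the division is itself part of the rate, so deeper division
    always costs extra signaling bits."""
    j_whole = dist_whole + lam * (bits_whole + split_flag_bits)
    j_split = sum(dists_sub) + lam * (sum(bits_sub) + split_flag_bits)
    return ("split", j_split) if j_split < j_whole else ("whole", j_whole)
```

For instance, a whole-block distortion of 120 (20 bits) against four sub-blocks totalling distortion 80 (40 bits) favors splitting at λ = 1.0, since 80 + 41 = 121 beats 120 + 21 = 141.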
  • the embodiments have been described without distinguishing the color signal components, i.e., between the luminance signal and the chrominance signal.
  • when the prediction process differs between the luminance signal and the chrominance signal, the same or different prediction methods may be used. If different prediction methods are used for the luminance signal and the chrominance signal, the prediction method selected for the chrominance signal can be encoded or decoded in the same manner as for the luminance signal.
  • when the orthogonal transform process differs between the luminance signal and the chrominance signal, the same or different orthogonal transform methods may be used. If different orthogonal transform methods are used for the luminance signal and the chrominance signal, the orthogonal transform method selected for the chrominance signal can be encoded or decoded in the same manner as for the luminance signal.
  • syntax elements not defined in the present invention may be inserted between the rows of the tables shown in the syntax configuration, and descriptions of other conditional branches may also be included.
  • the syntax tables may be divided into a plurality of tables, or a plurality of tables may be integrated. Moreover, the same terms need not always be used; they may be changed arbitrarily depending on the form of use.
  • each embodiment can realize highly efficient orthogonal transformation and inverse orthogonal transformation while alleviating the difficulty in hardware implementation and software implementation. Therefore, according to each embodiment, the encoding efficiency is improved, and the subjective image quality is also improved.
  • the instructions in the processing procedures described in the above embodiments can be executed based on a software program.
  • it is also possible for a general-purpose computer system to store this program in advance and, by reading it, obtain the same effects as those of the video encoding device and video decoding device of the above-described embodiments.
  • the instructions described in the above embodiments may be recorded, as a program executable by a computer, on a magnetic disk (flexible disk, hard disk, etc.), an optical disc (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), a semiconductor memory, or a similar recording medium. As long as the recording medium is readable by the computer or the embedded system, the storage format may be any form.
  • when the computer reads the program from the recording medium and the CPU executes the instructions described in the program, the computer can realize the same operations as the video encoding device and video decoding device of the above-described embodiments.
  • when the computer acquires or reads the program, it may be acquired or read through a network.
  • based on the instructions of the program installed in the computer or embedded system from the recording medium, the OS (operating system), database management software, or MW (middleware) such as network software running on the computer may execute a part of each process for realizing the present embodiment.
  • the recording medium in the present invention is not limited to a medium independent of a computer or an embedded system, but also includes a recording medium in which a program transmitted via a LAN or the Internet is downloaded and stored or temporarily stored.
  • the program for realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to the computer (client) via the network.
  • the number of recording media is not limited to one; the case where the processing in the present embodiment is executed from a plurality of media is also included in the recording media in the present invention, and the media may have any configuration.
  • the computer or embedded system in the present invention is for executing each process in the present embodiment based on a program stored in a recording medium, and may have any configuration, such as a single device (e.g., a personal computer or microcomputer) or a system in which a plurality of apparatuses are connected via a network.
  • the computer in the embodiments of the present invention is not limited to a personal computer; it is a general term for devices capable of realizing the functions of the embodiments by a program, including an arithmetic processing device or microcomputer included in an information processing device.
  • ... prediction error signal, 118 ... transform coefficient, 119 ... quantized transform coefficient, 120 ... restored transform coefficient, 121 ... restored prediction error signal, 122 ... decoded image signal, 123 ... filtered image signal, 124 ... reference image signal, 125 ... prediction information, 126 ... predicted image signal, 127 ... encoded data, 601 ... unidirectional intra predicted image generation unit, 602 ... bidirectional intra predicted image generation unit, 603 ... prediction mode information setting unit, 604 ... selection switch, 605 ... prediction mode, 1301 ... first unidirectional intra prediction image generation unit, 1302 ... second unidirectional intra prediction image generation unit, 1303 ... weighted average unit, 1304 ... first prediction image signal, 1305 ... second prediction image signal, 1901 ... image buffer, 2000 ... syntax, 2001 ... high level syntax, 2002 ... slice level syntax, 2003 ... coding tree level syntax, 2004 ... sequence parameter set syntax, 2005 ... picture parameter set syntax, 2006 ... slice header syntax, 2007 ... slice data syntax, 2008 ... coding tree unit syntax, 2009 ... prediction unit syntax, 2010 ... transform unit syntax, 2601 ... reference pixel filter unit, 2602 ... filtered reference image signal, 2901 ... composite intra prediction image generation unit, 2902 ... selection switch, 3001 ... decoded pixel buffer, 3001 ... decoded image buffer, 3002 ... reference pixels, 3101 ... pixel level prediction signal generation unit, 3102 ... composite intra prediction calculation unit, 3103 ... pixel level prediction signal, 3104 ..., ... occurrence frequency counting unit, 3802 ... coefficient order update unit, 3803 ... updated coefficient order, 3804 ... histogram, 4001 ... rotation transform unit, 4001 ... first rotation transform unit, 4002 ... second rotation transform unit, 4003 ... Nth rotation transform unit, 4004 ... discrete cosine transform unit, 4101 ... first inverse rotation transform unit, 4102 ... second inverse rotation transform unit, 4103 ... Nth inverse rotation transform unit, 4104 ... inverse discrete cosine transform unit, 4401 ... input buffer, 4402 ... entropy decoding unit, 4403 ... inverse quantization unit, 4404 ... inverse orthogonal transform unit, 4405 ... addition unit, 4406 ... loop filter, 4407 ... reference image memory, 4408 ... intra prediction unit, 4409 ... inter prediction unit, 4410 ... prediction selection switch, 4411 ... output buffer, 4412 ... decoding control unit, 4413 ... encoded data, 4414 ... quantized transform coefficient, 4415 ... restored transform coefficient, 4416 ... restored prediction error signal, 4417 ... decoded image signal, 4418 ... filtered image signal, 4419 ... reference image signal, 4420 ... predicted image signal, 4421 ... prediction information, 4422 ... decoded image, 4501 ... coefficient order restoration unit, 4502 ... transform selection ..., 4503 ... quantized transform coefficient sequence, 4504 ... transform selection information, 4601, 4602, 4603 ... coefficient order inverse transform unit, 4701 ... occurrence frequency counting unit, 4702 ...

Abstract

The present invention relates to a moving image encoding method. The moving image encoding method divides an input image signal into a plurality of pixel blocks represented by hierarchy depths based on quadtree segmentation, generates a prediction error signal for the pixel blocks obtained by the division, and encodes transform coefficients. The method comprises: a step of setting a first prediction direction from a set of a plurality of prediction directions and generating a first predicted image signal; a step of setting a second prediction direction, different from the first prediction direction, from the set of prediction directions and generating a second predicted image signal; a step of deriving a relative distance between a pixel to be predicted and a reference pixel in each of the first and second prediction directions and deriving a difference value between the relative distances; a step of deriving a predetermined weighting component based on the difference value; a step of computing a weighted average of a first unidirectional intra-prediction image and a second unidirectional intra-prediction image based on the weighting component to generate a third predicted image signal; a step of generating a prediction error signal from the third predicted image signal; and a step of encoding the prediction error signal.
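Read as an algorithm, the weighted-average step of the abstract can be sketched as follows. This is hypothetical Python over 1-D pixel lists, and the distance-based weighting rule shown is an illustrative stand-in: the abstract only states that the weighting component is derived from the difference of the relative distances, not how.

```python
def bidirectional_intra(pred1, pred2, dist1, dist2):
    """Blend two unidirectional intra predictions into a third one.
    pred1/pred2: per-pixel predicted values from two prediction directions.
    dist1/dist2: per-pixel relative distances from the pixel to be predicted
    to its reference pixel along each direction.
    Illustrative rule: the direction whose reference pixel is closer
    receives the larger weight."""
    out = []
    for p1, p2, d1, d2 in zip(pred1, pred2, dist1, dist2):
        w1 = d2 / (d1 + d2)            # closer reference -> larger weight
        out.append(w1 * p1 + (1.0 - w1) * p2)
    return out
```

With this rule, a pixel lying three times closer to the first reference than to the second takes 75% of its value from the first prediction.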
PCT/JP2010/066102 2010-09-16 2010-09-16 Procédé de codage d'image en mouvement et procédé de décodage d'image en mouvement WO2012035640A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/066102 WO2012035640A1 (fr) 2010-09-16 2010-09-16 Procédé de codage d'image en mouvement et procédé de décodage d'image en mouvement

Publications (1)

Publication Number Publication Date
WO2012035640A1 true WO2012035640A1 (fr) 2012-03-22

Family

ID=45831140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/066102 WO2012035640A1 (fr) 2010-09-16 2010-09-16 Procédé de codage d'image en mouvement et procédé de décodage d'image en mouvement

Country Status (1)

Country Link
WO (1) WO2012035640A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017205701A1 (fr) * 2016-05-25 2017-11-30 Arris Enterprises Llc Weighted angular prediction for intra coding
CN110063056A (zh) * 2016-12-07 2019-07-26 株式会社Kt 用于处理视频信号的方法和设备
US10542264B2 (en) 2017-04-04 2020-01-21 Arris Enterprises Llc Memory reduction implementation for weighted angular prediction
CN110807789A (zh) * 2019-08-23 2020-02-18 腾讯科技(深圳)有限公司 图像处理方法、模型、装置、电子设备及可读存储介质
US10575023B2 (en) 2017-10-09 2020-02-25 Arris Enterprises Llc Adaptive unequal weight planar prediction
US10616596B2 (en) 2016-12-28 2020-04-07 Arris Enterprises Llc Unequal weight planar prediction
US10645395B2 (en) 2016-05-25 2020-05-05 Arris Enterprises Llc Weighted angular prediction coding for intra coding
CN111108749A (zh) * 2018-09-25 2020-05-05 北京大学 编码方法、解码方法、编码设备和解码设备
CN111279703A (zh) * 2017-07-05 2020-06-12 艾锐势有限责任公司 用于加权角度预测的后滤波
JP2020522180A (ja) * 2017-05-29 2020-07-27 オランジュ 少なくとも1つの画像を表すデータストリームを符号化及び復号する方法及びデバイス
US10944963B2 (en) 2016-05-25 2021-03-09 Arris Enterprises Llc Coding weighted angular prediction for intra coding

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004200991A (ja) * 2002-12-18 2004-07-15 Nippon Telegr & Teleph Corp <Ntt> Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, image decoding program, and recording media on which the image encoding program and the image decoding program are recorded
WO2007063808A1 (fr) * 2005-11-30 2007-06-07 Kabushiki Kaisha Toshiba Image encoding/image decoding method and image encoding/image decoding apparatus
WO2008084817A1 (fr) * 2007-01-09 2008-07-17 Kabushiki Kaisha Toshiba Image encoding and decoding method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG ZHANG ET AL.: "Multiple modes intra-prediction in intra coding", 2004 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME '04), vol. 1, 30 June 2004 (2004-06-30), pages 419 - 422 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11627312B2 (en) 2016-05-25 2023-04-11 Arris Enterprises Llc Coding weighted angular prediction for intra coding
US10939097B2 (en) 2016-05-25 2021-03-02 Arris Enterprises Llc Weighted angular prediction for intra coding
WO2017205701A1 (fr) * 2016-05-25 2017-11-30 Arris Enterprises Llc Prédiction angulaire pondérée pour codage intra
US11553189B2 (en) 2016-05-25 2023-01-10 Arris Enterprises Llc Weighted angular prediction coding for intra coding
US11303906B2 (en) 2016-05-25 2022-04-12 Arris Enterprises Llc Weighted angular prediction for intra coding
US11758153B2 (en) 2016-05-25 2023-09-12 Arris Enterprises Llc Weighted angular prediction for intra coding
US11153573B2 (en) 2016-05-25 2021-10-19 Arris Enterprises Llc Weighted angular prediction coding for intra coding
US10645395B2 (en) 2016-05-25 2020-05-05 Arris Enterprises Llc Weighted angular prediction coding for intra coding
US11917166B2 (en) 2016-05-25 2024-02-27 Arris Enterprises Llc Weighted angular prediction coding for intra coding
US10523949B2 (en) 2016-05-25 2019-12-31 Arris Enterprises Llc Weighted angular prediction for intra coding
US10944963B2 (en) 2016-05-25 2021-03-09 Arris Enterprises Llc Coding weighted angular prediction for intra coding
US20200322600A1 (en) 2016-05-25 2020-10-08 Arris Enterprises Llc Weighted angular prediction for intra coding
CN110063056A (zh) * 2016-12-07 2019-07-26 株式会社Kt 用于处理视频信号的方法和设备
US11716467B2 (en) 2016-12-07 2023-08-01 Kt Corporation Method and apparatus for processing video signal
US11736686B2 (en) 2016-12-07 2023-08-22 Kt Corporation Method and apparatus for processing video signal
CN110063056B (zh) * 2016-12-07 2023-09-12 株式会社Kt 用于处理视频信号的方法和设备
US10616596B2 (en) 2016-12-28 2020-04-07 Arris Enterprises Llc Unequal weight planar prediction
US11019353B2 (en) 2016-12-28 2021-05-25 Arris Enterprises Llc Unequal weight planar prediction
US10542264B2 (en) 2017-04-04 2020-01-21 Arris Enterprises Llc Memory reduction implementation for weighted angular prediction
US11575915B2 (en) 2017-04-04 2023-02-07 Arris Enterprises Llc Memory reduction implementation for weighted angular prediction
JP2020522180A (ja) * 2017-05-29 2020-07-27 オランジュ 少なくとも1つの画像を表すデータストリームを符号化及び復号する方法及びデバイス
JP7274427B2 (ja) 2017-05-29 2023-05-16 オランジュ 少なくとも1つの画像を表すデータストリームを符号化及び復号する方法及びデバイス
US11627315B2 (en) 2017-07-05 2023-04-11 Arris Enterprises Llc Post-filtering for weighted angular prediction
CN111279703A (zh) * 2017-07-05 2020-06-12 艾锐势有限责任公司 用于加权角度预测的后滤波
CN111279703B (zh) * 2017-07-05 2023-08-04 艾锐势有限责任公司 用于加权角度预测的后滤波
US11902519B2 (en) 2017-07-05 2024-02-13 Arris Enterprises Llc Post-filtering for weighted angular prediction
US10992934B2 (en) 2017-07-05 2021-04-27 Arris Enterprises Llc Post-filtering for weighted angular prediction
US11159828B2 (en) 2017-10-09 2021-10-26 Arris Enterprises Llc Adaptive unequal weight planar prediction
US10575023B2 (en) 2017-10-09 2020-02-25 Arris Enterprises Llc Adaptive unequal weight planar prediction
CN111108749A (zh) * 2018-09-25 2020-05-05 北京大学 编码方法、解码方法、编码设备和解码设备
CN110807789A (zh) * 2019-08-23 2020-02-18 腾讯科技(深圳)有限公司 图像处理方法、模型、装置、电子设备及可读存储介质

Similar Documents

Publication Publication Date Title
US11936858B1 (en) Constrained position dependent intra prediction combination (PDPC)
WO2012035640A1 (fr) Procédé de codage d&#39;image en mouvement et procédé de décodage d&#39;image en mouvement
US9392282B2 (en) Moving-picture encoding apparatus and moving-picture decoding apparatus
Han et al. Improved video compression efficiency through flexible unit representation and corresponding extension of coding tools
KR101677406B1 (ko) 차세대 비디오용 비디오 코덱 아키텍처
WO2011125256A1 (fr) Procédé de codage d&#39;image et procédé de décodage d&#39;image
WO2012148139A2 (fr) Procédé de gestion d&#39;une liste d&#39;images de référence, et appareil l&#39;utilisant
US20120128064A1 (en) Image processing device and method
CN113163211B (zh) 基于合并模式的帧间预测方法及装置
CN111819853A (zh) 变换域中预测的信令残差符号
JP7124222B2 (ja) Vvcにおける色変換のための方法及び機器
TW202135530A (zh) 用於編碼和解碼視訊樣本區塊的方法、設備及系統
JP2024026317A (ja) 符号化ユニットを復号および符号化する方法、装置およびプログラム
JP2024056945A (ja) 符号化ユニットを復号および符号化する方法、装置およびプログラム
KR20220032620A (ko) 비디오 샘플들의 블록을 인코딩 및 디코딩하기 위한 방법, 장치 및 시스템
KR20220041940A (ko) 비디오 샘플들의 블록을 인코딩 및 디코딩하기 위한 방법, 장치 및 시스템
WO2012090286A1 (fr) Procédé de codage d&#39;image vidéo, et procédé de décodage d&#39;image vidéo
JP6871447B2 (ja) 動画像符号化方法及び動画像復号化方法
JP5537695B2 (ja) 画像復号化装置、方法およびプログラム
WO2012172667A1 (fr) Procédé de codage vidéo, procédé de décodage vidéo et dispositif
JP2017073598A (ja) 動画像符号化装置、動画像符号化方法及び動画像符号化用コンピュータプログラム
JP6042478B2 (ja) 画像復号化装置
JP5367161B2 (ja) 画像符号化方法、装置、及びプログラム
JP2013176110A (ja) 画像符号化装置
JP6871343B2 (ja) 動画像符号化方法及び動画像復号化方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10857274

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10857274

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP