WO2012008039A1 - Image encoding method and image decoding method - Google Patents
- Publication number
- WO2012008039A1 (PCT/JP2010/062007)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- matrix
- transformation
- transform
- unit
- prediction mode
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
Definitions
- Embodiments of the present invention relate to orthogonal transformation and inverse orthogonal transformation in encoding and decoding of moving images.
- H.264, an image coding method with greatly improved coding efficiency, has been jointly developed by ITU-T and ISO/IEC and standardized as ITU-T Rec. H.264 and ISO/IEC 14496-10 (hereinafter referred to as "H.264").
- DCT: discrete cosine transform
- IDCT: inverse discrete cosine transform
- As an extension of H.264, it is assumed that encoding efficiency can be improved by performing the orthogonal transform and inverse orthogonal transform with an individual transform basis for each of the nine prediction modes defined for intra prediction.
- If individual transformation matrices for each of a plurality of prediction directions must be loaded from memory or stored in a cache memory as appropriate, the desired orthogonal transform and inverse orthogonal transform can be realized with a general-purpose multiplier, but there is a problem of increased cost due to the increase in memory bandwidth or in cache memory size.
- An object of the embodiments is to provide an orthogonal transform and inverse orthogonal transform that can improve coding efficiency.
- The image encoding method includes performing intra prediction on an encoding target to obtain a prediction error.
- The method includes setting a combination of a vertical transformation matrix and a horizontal transformation matrix corresponding to the intra prediction mode of the encoding target, based on a predetermined relationship that depends on how the predicted image is generated in each intra prediction mode.
- In this combination, for the prediction error of an intra prediction mode that generates an intra prediction image with reference to a reference pixel group on at least one line, a first transformation matrix is assigned to the one-dimensional orthogonal transform in the direction orthogonal to the line of that reference pixel group.
- This image coding method further includes performing a vertical transform and a horizontal transform on the prediction error using the set vertical and horizontal transformation matrices to obtain transform coefficients, and encoding the transform coefficients and information indicating the intra prediction mode of the encoding target.
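The setting-then-transform steps above can be sketched as follows; the 2 × 2 matrices, the mode-to-matrix table, and the function name are purely illustrative assumptions, not the patent's actual transform bases:

```python
import numpy as np

# Two hypothetical 1D transform matrices (labels A and B follow the text);
# both are small orthogonal matrices chosen only for illustration.
S = 1 / np.sqrt(2.0)
MAT_A = np.array([[S, S], [S, -S]])          # 2-point DCT-like basis
MAT_B = np.array([[0.0, 1.0], [1.0, 0.0]])   # placeholder orthogonal matrix

# Hypothetical mapping: intra prediction mode -> (vertical, horizontal) index.
MODE_TO_1D_SET = {0: ('A', 'B'), 1: ('B', 'A'), 2: ('B', 'B')}
MATRICES = {'A': MAT_A, 'B': MAT_B}

def transform_prediction_error(x, mode):
    """Apply the vertical then horizontal 1D transforms chosen for `mode`."""
    v_idx, h_idx = MODE_TO_1D_SET[mode]
    v, h = MATRICES[v_idx], MATRICES[h_idx]
    y = v @ x            # vertical 1D transform (Y = V X)
    z = h @ y.T          # horizontal 1D transform on the transposed result
    return z

x = np.array([[4.0, 2.0], [1.0, 3.0]])       # toy prediction-error block
z = transform_prediction_error(x, 0)
```

Because both example matrices are orthogonal, the original block can be recovered by applying their transposes in reverse order, mirroring the decoding side.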
- An image decoding method includes decoding a transform coefficient to be decoded and information indicating an intra prediction mode to be decoded.
- This image decoding method includes setting a combination of a vertical inverse transformation matrix and a horizontal inverse transformation matrix corresponding to the intra prediction mode of the decoding target, based on a predetermined relationship that depends on how the predicted image is generated in each intra prediction mode.
- In this combination, for the prediction error of an intra prediction mode that generates an intra prediction image with reference to a reference pixel group on at least one line, a first transformation matrix is assigned to the one-dimensional orthogonal transform in the direction orthogonal to the line of that reference pixel group.
- The decoding method further includes performing a vertical inverse transform and a horizontal inverse transform on the transform coefficients using the set vertical and horizontal inverse transformation matrices to obtain a prediction error, and generating a decoded image based on the prediction error.
- FIG. 1 is a block diagram illustrating an image encoding device according to a first embodiment.
- A block diagram illustrating the orthogonal transform unit according to the first embodiment.
- FIG. 3 is a block diagram illustrating an inverse orthogonal transform unit according to the first embodiment.
- A table illustrating a correspondence relationship according to the first embodiment.
- Explanatory drawing of the predictive encoding order of pixel blocks.
- Explanatory drawing of an example of pixel block size.
- Explanatory drawings of other examples of pixel block size.
- Explanatory drawings of a zigzag scan.
- A flowchart illustrating the processing performed on an encoding target block by the image encoding device in FIG. 1.
- Explanatory drawing of a syntax structure.
- Explanatory drawing of slice header syntax.
- Explanatory drawing of coding tree unit syntax.
- Explanatory drawing of transform unit syntax.
- A block diagram illustrating an orthogonal transform unit that performs the orthogonal transform using an individual transform basis for each of nine prediction directions.
- A block diagram illustrating the orthogonal transform unit according to the second embodiment.
- A block diagram illustrating the inverse orthogonal transform unit according to the second embodiment.
- Tables illustrating correspondence relationships.
- A table illustrating the correspondence among the transform index, the vertical transform index, and the horizontal transform index according to the third embodiment.
- A table integrating FIG. 21A and FIG. 21D.
- A block diagram illustrating the image decoding device according to the fourth embodiment.
- A block diagram illustrating the coefficient order control unit according to the fourth embodiment.
- the first embodiment relates to an image encoding device.
- An image decoding apparatus corresponding to the image encoding apparatus according to the present embodiment will be described in a fourth embodiment.
- This image encoding device can be realized by hardware such as an LSI (Large-Scale Integration) chip, a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array).
- the image encoding apparatus can also be realized by causing a computer to execute an image encoding program.
- The image coding apparatus includes a subtraction unit 101, an orthogonal transform unit 102, a quantization unit 103, an inverse quantization unit 104, an inverse orthogonal transform unit 105, an addition unit 106, a reference image memory 107, an intra prediction unit 108, an inter prediction unit 109, a prediction selection unit 110, a selection switch 111, a 1D transform matrix set unit 112, a coefficient order control unit 113, an entropy encoding unit 114, an output buffer 115, and an encoding control unit 116.
- The image coding apparatus in FIG. 1 divides each frame or field constituting the input image 118 into a plurality of pixel blocks, performs predictive coding on the divided pixel blocks, and outputs coded data 130.
- pixel blocks are predictively encoded from the upper left to the lower right as shown in FIG. 6A.
- the encoded pixel block p is located on the left side and the upper side of the encoding target pixel block c in the encoding processing target frame f.
- The pixel block refers to a unit for processing an image, such as a coding tree unit, a macroblock, a sub-block, or a single pixel.
- In the following description, the pixel block is basically used in the sense of a coding tree unit, but it can also be interpreted as another unit by appropriately substituting the description.
- The coding tree unit is typically, for example, the 16 × 16 pixel block shown in FIG. 6B, but may be the 32 × 32 pixel block shown in FIG. 6C, the 64 × 64 pixel block shown in FIG. 6D, an 8 × 8 pixel block (not shown), or a 4 × 4 pixel block.
- the coding tree unit need not necessarily be square.
- the encoding target block or coding tree unit of the input image 118 may be referred to as a “prediction target block”.
- the encoding unit is not limited to a pixel block such as a coding tree unit, and a frame, a field, or a combination thereof can be used.
- The image encoding apparatus in FIG. 1 generates the predicted image 127 by performing intra prediction (also referred to as intra-frame prediction) or inter prediction (also referred to as inter-frame prediction) on a pixel block, based on the encoding parameters input from the encoding control unit 116.
- This image encoding apparatus orthogonally transforms and quantizes the prediction error 119 between the pixel block (input image 118) and the predicted image 127, performs entropy encoding, and generates and outputs encoded data 130.
- the image encoding apparatus in FIG. 1 performs encoding by selectively applying a plurality of prediction modes having different block sizes and generation methods of the predicted image 127.
- The methods of generating the predicted image 127 can be broadly divided into two types: intra prediction, in which prediction is performed within the encoding target frame, and inter prediction, in which prediction is performed using one or more temporally different reference frames.
- an orthogonal transform and an inverse orthogonal transform when generating a predicted image using intra prediction will be described in detail.
- the subtracter 101 subtracts the corresponding predicted image 127 from the encoding target block of the input image 118 to obtain a prediction error 119.
- the subtractor 101 inputs the prediction error 119 to the orthogonal transform unit 102.
- the orthogonal transform unit 102 performs orthogonal transform on the prediction error 119 from the subtractor 101 to obtain a transform coefficient 120. Details of the orthogonal transform unit 102 will be described later.
- the orthogonal transform unit 102 inputs the transform coefficient 120 to the quantization unit 103.
- the quantization unit 103 performs quantization on the transform coefficient from the orthogonal transform unit 102 to obtain a quantized transform coefficient 121. Specifically, the quantization unit 103 performs quantization according to quantization information such as a quantization parameter and a quantization matrix specified by the encoding control unit 116.
- the quantization parameter indicates the fineness of quantization.
- the quantization matrix is used for weighting the fineness of quantization for each component of the transform coefficient.
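A minimal sketch of quantization-matrix weighting as described above, assuming an H.264-style step size that doubles every 6 QP values; the exact scaling rule and function names are illustrative assumptions:

```python
import numpy as np

def quantize(coef, qp, weight):
    """Hypothetical quantizer: `qp` sets the overall step size, `weight`
    (a quantization matrix) scales the step per coefficient position."""
    step = 2.0 ** (qp / 6.0)            # step doubles every 6 QP, as in H.264
    return np.round(coef / (step * weight / 16.0)).astype(int)

def dequantize(q, qp, weight):
    """Matching inverse quantization (used by units 104 on both sides)."""
    step = 2.0 ** (qp / 6.0)
    return q * (step * weight / 16.0)

coef = np.array([[52.0, -7.0], [3.0, 0.5]])
weight = np.array([[16.0, 16.0], [16.0, 24.0]])  # coarser for one position
q = quantize(coef, qp=12, weight=weight)         # qp=12 -> step size 4
rec = dequantize(q, qp=12, weight=weight)
```

Larger `weight` entries quantize the corresponding coefficient positions more coarsely, which is exactly the per-component weighting the quantization matrix provides.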
- the quantization unit 103 inputs the quantized transform coefficient 121 to the coefficient order control unit 113 and the inverse quantization unit 104.
- The coefficient order control unit 113 converts the quantized transform coefficient 121, which is a two-dimensional (2D) representation, into a quantized transform coefficient sequence 117, which is a one-dimensional (1D) representation, and inputs it to the entropy encoding unit 114. Details of the coefficient order control unit 113 will be described later.
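One common 2D-to-1D ordering such a coefficient order control unit could use is the zigzag scan illustrated in the figures; the sketch below is a generic zigzag, not necessarily the patent's exact scan:

```python
def zigzag_coords(n):
    """Coordinates of an n x n block in zigzag scan order: walk the
    anti-diagonals, alternating direction on odd/even diagonals."""
    key = lambda rc: (rc[0] + rc[1],
                      rc[0] if (rc[0] + rc[1]) % 2 else -rc[0])
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

def to_1d(block):
    """Flatten a square 2D coefficient block into a 1D sequence."""
    n = len(block)
    return [block[r][c] for r, c in zigzag_coords(n)]
```

For a 4 × 4 block this visits (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ..., so low-frequency coefficients (which survive quantization most often) come first in the 1D sequence.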
- The entropy encoding unit 114 performs entropy encoding (for example, Huffman coding or arithmetic coding) on various encoding parameters, such as the quantized transform coefficient sequence 117 from the coefficient order control unit 113, the prediction information 126 from the prediction selection unit 110, and the quantization information specified by the encoding control unit 116, to generate encoded data.
- the encoding parameter is a parameter necessary for decoding, such as prediction information 126, information on transform coefficients, information on quantization, and the like.
- The encoding parameters are held in an internal memory (not shown) of the encoding control unit 116, and the encoding parameters of already-encoded adjacent pixel blocks can be used when encoding the prediction target block.
- the prediction value of the prediction mode of the prediction target block can be derived from the prediction mode information of the encoded adjacent block.
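As one concrete instance of deriving such a prediction value from neighbouring modes, H.264 takes the minimum of the left and upper neighbours' intra modes; this is a simplified sketch in which neighbour availability handling is reduced to a DC default:

```python
def predicted_mode(mode_left, mode_above):
    """H.264-style most-probable-mode: the minimum of the neighbours'
    mode numbers; an unavailable neighbour (None) defaults to DC (mode 2)."""
    DC = 2
    a = mode_left if mode_left is not None else DC
    b = mode_above if mode_above is not None else DC
    return min(a, b)
```

The encoder then only needs to signal whether the actual mode equals this prediction, which shortens the coded representation of the prediction mode.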
- The encoded data generated by the entropy encoding unit 114 is temporarily accumulated in the output buffer 115 through, for example, multiplexing, and is output as the encoded data 130 at an appropriate output timing managed by the encoding control unit 116.
- the encoded data 130 is output to a storage system (storage medium) or a transmission system (communication line) (not shown), for example.
- the inverse quantization unit 104 performs inverse quantization on the quantized transform coefficient 121 from the quantizing unit 103 to obtain a restored transform coefficient 122. Specifically, the inverse quantization unit 104 performs inverse quantization according to the quantization information used in the quantization unit 103. The quantization information used in the quantization unit 103 is loaded from the internal memory of the encoding control unit 116. The inverse quantization unit 104 inputs the restored transform coefficient 122 to the inverse orthogonal transform unit 105.
- the inverse orthogonal transform unit 105 performs an inverse orthogonal transform corresponding to the orthogonal transform performed in the orthogonal transform unit 102 on the restored transform coefficient 122 from the inverse quantization unit 104 to obtain a restored prediction error 123. Details of the inverse orthogonal transform unit 105 will be described later.
- the inverse orthogonal transform unit 105 inputs the restoration prediction error 123 to the addition unit 106.
- the addition unit 106 adds the restored prediction error 123 and the corresponding prediction image 127 to generate a local decoded image 124.
- the locally decoded image 124 is stored in the reference image memory 107.
- the locally decoded image 124 stored in the reference image memory 107 is referred to as the reference image 125 by the intra prediction unit 108 and the inter prediction unit 109 as necessary.
- the intra prediction unit 108 performs intra prediction using the reference image 125 stored in the reference image memory 107.
- In H.264, an intra prediction image is generated by performing pixel copying or interpolation along a prediction direction, such as the vertical or horizontal direction, using encoded reference pixel values adjacent to the prediction target block.
- FIG. 2 shows an arrangement relationship between reference pixels and encoding target pixels in H.264.
- FIG. 7C shows a predicted image generation method in mode 1 (horizontal prediction).
- FIG. 7D shows a predicted image generation method in mode 4 (diagonal lower right prediction; Intra_NxN_Diagonal_Down_Right in FIG. 4A).
- the intra prediction unit 108 may copy the interpolated pixel value in a predetermined prediction direction after interpolating the pixel value using a predetermined interpolation method.
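A minimal illustration of the copy-based prediction described above, for vertical (mode 0) and horizontal (mode 1) prediction; the helper name and the 4 × 4 block size are assumptions:

```python
import numpy as np

def predict_block(left, top, mode, n=4):
    """Toy directional intra prediction: copy reference pixels along the
    prediction direction. `left`/`top` are the adjacent reference lines."""
    if mode == 'vertical':      # mode 0: copy the pixel above each column
        return np.tile(np.asarray(top[:n]), (n, 1))
    if mode == 'horizontal':    # mode 1: copy the pixel left of each row
        return np.tile(np.asarray(left[:n]).reshape(n, 1), (1, n))
    raise ValueError(mode)

top = [10, 20, 30, 40]          # reference line above the block
left = [11, 12, 13, 14]         # reference line to the left of the block
p = predict_block(left, top, 'vertical')
```

Diagonal modes work the same way but copy (or interpolate between) reference pixels along a slanted direction instead of straight down or across.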
- Although the prediction directions of H.264 intra prediction are illustrated here, the scheme can be extended to any number of prediction modes, such as 17 or 33, by defining the prediction directions more finely. Specifically, H.264 defines a prediction angle every 22.5 degrees. If a prediction angle is defined every 11.25 degrees, 17 prediction modes including DC prediction can be used; if a prediction angle is defined every 5.625 degrees, 33 prediction modes including DC prediction can be used.
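The mode counts quoted above follow from halving the angular step; the tiny check below is just this arithmetic (the formula itself is not something defined in the patent):

```python
def num_modes(angle_step_deg, base_step=22.5, base_directional=8):
    """Directional modes scale inversely with the angle step (8 modes at
    22.5 degrees in H.264); add 1 for the non-directional DC mode."""
    directional = int(base_directional * base_step / angle_step_deg)
    return directional + 1

counts = [num_modes(a) for a in (22.5, 11.25, 5.625)]
```

Halving the step doubles the directional modes: 9, 17, and 33 total modes at 22.5, 11.25, and 5.625 degrees respectively.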
- the angle in the prediction direction may be represented by a straight line connecting the second reference points moved horizontally and vertically from the first reference point.
- the prediction mode can be easily expanded, and this embodiment can be applied regardless of the number of prediction modes.
- The inter prediction unit 109 performs inter prediction using the reference image 125 stored in the reference image memory 107. Specifically, the inter prediction unit 109 performs block matching between the prediction target block and the reference image 125 to derive the amount of motion shift (motion vector), and then performs interpolation processing (motion compensation) based on the motion vector to generate an inter prediction image. In H.264, interpolation up to quarter-pixel accuracy is possible.
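A toy full-search block-matching sketch of the motion-vector derivation described above; the search range, block size, and data are illustrative, and the sub-pixel interpolation step is omitted:

```python
import numpy as np

def block_match(ref, block, top, left, search=2):
    """Full search: find the motion vector (dy, dx) minimizing SAD within
    +/-`search` integer pixels of position (top, left) in the reference."""
    n = block.shape[0]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(ref[y:y+n, x:x+n].astype(int)
                         - block.astype(int)).sum()
            if best is None or sad < best[0]:
                best = (sad, (dy, dx))
    return best[1], best[0]

ref = np.zeros((8, 8), dtype=np.uint8)
ref[2:4, 3:5] = 200                          # a bright patch in the reference
cur = np.full((2, 2), 200, dtype=np.uint8)   # current block to match
mv, sad = block_match(ref, cur, top=3, left=2)
```

The best match sits one row up and one column right of the block's own position, so the derived motion vector is (-1, 1) with zero residual SAD.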
- the derived motion vector is entropy encoded as part of the prediction information 126.
- The selection switch 111 selects the output of the intra prediction unit 108 or the inter prediction unit 109 according to the prediction information 126 from the prediction selection unit 110, and supplies the intra prediction image or the inter prediction image, as the predicted image 127, to the subtraction unit 101 and the addition unit 106. When intra prediction is selected, the selection switch 111 takes in the intra prediction image from the intra prediction unit 108 as the predicted image 127; when inter prediction is selected, it takes in the inter prediction image from the inter prediction unit 109 as the predicted image 127.
- the prediction selection unit 110 has a function of setting the prediction information 126 according to the prediction mode controlled by the encoding control unit 116. As described above, intra prediction or inter prediction can be selected for generating the predicted image 127, but a plurality of modes can be further selected for each of intra prediction and inter prediction.
- The encoding control unit 116 determines one of the plurality of intra prediction modes and inter prediction modes as the optimal prediction mode, and the prediction selection unit 110 sets the prediction information 126 according to the determined optimal prediction mode.
- In intra prediction, prediction mode information is specified by the encoding control unit 116 to the intra prediction unit 108, and the intra prediction unit 108 generates the predicted image 127 according to that prediction mode information.
- the encoding control unit 116 may specify a plurality of prediction mode information in order from the smallest prediction mode number, or may specify a plurality of prediction mode information in order from the largest.
- the encoding control unit 116 may limit the prediction mode according to the characteristics of the input image.
- the encoding control unit 116 need not always specify all prediction modes, but may specify at least one prediction mode information for the encoding target block.
- the encoding control unit 116 determines an optimal prediction mode using a cost function expressed by the following formula (1).
- Here, formula (1) is K = SAD + λ × OH, where OH indicates the code amount related to the prediction information 126 (for example, motion vector information and prediction block size information), SAD is the sum of absolute differences between the prediction target block and the predicted image 127 (that is, the accumulated sum of absolute values of the prediction error 119), λ is a Lagrange undetermined multiplier determined based on the value of the quantization information (quantization parameter), and K is the encoding cost.
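Formula (1) as described can be exercised as follows; the λ model tied to the quantization parameter, and the candidate numbers, are illustrative assumptions rather than the patent's definitions:

```python
def coding_cost(sad, oh, qp):
    """K = SAD + lambda * OH; this lambda model (growing with QP, as in
    common H.264 encoders) is an illustrative assumption."""
    lam = 0.85 * 2.0 ** ((qp - 12) / 3.0)
    return sad + lam * oh

# Pick the mode minimizing K among (SAD, overhead-bits) candidates:
candidates = {'mode0': (120, 4), 'mode1': (100, 12), 'mode2': (150, 2)}
best = min(candidates, key=lambda m: coding_cost(*candidates[m], qp=24))
```

Note how at a high QP the overhead term dominates: mode1 has the smallest SAD but loses to mode0 because its prediction information costs more bits.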
- the prediction mode that minimizes the coding cost K is determined as the optimum prediction mode from the viewpoint of the generated code amount and the prediction error.
- the encoding cost may be estimated from OH alone or SAD alone, or the encoding cost may be estimated using a value obtained by subjecting SAD to Hadamard transform or an approximation thereof.
- the encoding control unit 116 determines the optimal prediction mode using the cost function shown in the following mathematical formula (2).
- In formula (2), J = D + λ × R, where D represents the sum of squared errors (that is, encoding distortion) between the prediction target block and the local decoded image, R represents the code amount estimated by provisionally encoding the prediction error between the prediction target block and the predicted image 127 in that prediction mode, and J indicates the encoding cost.
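Formula (2) differs from formula (1) only in using the true distortion D and the actual code amount R obtained by provisional encoding; a sketch under the same assumed λ model:

```python
def rd_cost(distortion, rate_bits, qp):
    """J = D + lambda * R; distortion is the squared-error sum against the
    local decoded image, rate_bits the provisionally encoded code amount.
    The lambda model is an illustrative assumption."""
    lam = 0.85 * 2.0 ** ((qp - 12) / 3.0)
    return distortion + lam * rate_bits
```

Minimizing J over the candidate modes trades distortion against rate directly, which is why this criterion is more accurate (but more expensive) than formula (1).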
- When formula (2) is used, provisional encoding and local decoding are required for each prediction mode, so the circuit scale or the amount of computation increases.
- On the other hand, since the encoding cost J is derived from more accurate encoding distortion and code amount, it is easy to determine the optimal prediction mode with high accuracy and maintain high encoding efficiency.
- the encoding cost may be estimated from only R or D, or the encoding cost may be estimated using an approximate value of R or D.
- The encoding control unit 116 may also narrow down in advance, based on information obtained beforehand about the prediction target block (the prediction modes of surrounding pixel blocks, the results of image analysis, and the like), the number of prediction mode candidates for which the determination using formula (1) or formula (2) is performed.
- The encoding control unit 116 controls each element of the image encoding device in FIG. 1. Specifically, the encoding control unit 116 performs various controls for the encoding process, including the operations described above.
- the 1D transform matrix set unit 112 generates 1D transform matrix set information 129 based on the prediction mode information included in the prediction information 126 from the prediction selection unit 110 and inputs the 1D transform matrix set information 129 to the orthogonal transform unit 102 and the inverse orthogonal transform unit 105. Details of the 1D conversion matrix set information 129 will be described later.
- the orthogonal transform unit 102 includes a selection switch 201, a vertical conversion unit 202, a transposition unit 203, a selection switch 204, and a horizontal conversion unit 205.
- the vertical transform unit 202 includes a 1D orthogonal transform unit A206 and a 1D orthogonal transform unit B207.
- the horizontal transform unit 205 includes a 1D orthogonal transform unit A208 and a 1D orthogonal transform unit B209. Note that the order of the vertical conversion unit 202 and the horizontal conversion unit 205 is an example, and these may be reversed.
- The 1D orthogonal transform unit A206 and the 1D orthogonal transform unit A208 share the function of multiplying the input matrix by the 1D transform matrix A, and the 1D orthogonal transform unit B207 and the 1D orthogonal transform unit B209 share the function of multiplying the input matrix by the 1D transform matrix B. Therefore, the 1D orthogonal transform unit A206 and the 1D orthogonal transform unit A208 can also be realized by using physically identical hardware in a time-division manner; the same applies to the 1D orthogonal transform unit B207 and the 1D orthogonal transform unit B209.
- the selection switch 201 guides the prediction error 119 to one of the 1D orthogonal transform unit A206 and the 1D orthogonal transform unit B207 according to the vertical transform index included in the 1D transform matrix set information 129.
- the 1D orthogonal transform unit A206 multiplies the input prediction error (matrix) 119 by the 1D transform matrix A and outputs the result.
- the 1D orthogonal transform unit B207 multiplies the input prediction error 119 by the 1D transform matrix B and outputs the result.
- The 1D orthogonal transform unit A206 and the 1D orthogonal transform unit B207 perform the one-dimensional orthogonal transform represented by the following equation (3) to remove the correlation of the prediction error 119 in the vertical direction.
- In equation (3), Y = V X, where X represents the matrix (N × N) of the prediction error 119, V comprehensively represents the 1D transform matrix A and the 1D transform matrix B (both N × N), and Y represents the output matrix (N × N) of the 1D orthogonal transform unit A206 and the 1D orthogonal transform unit B207.
- The transformation matrix V is an N × N transformation matrix in which transform bases designed to remove the vertical correlation of the matrix X are arranged vertically as row vectors.
- the 1D conversion matrix A and the 1D conversion matrix B are designed by different methods and have different properties.
- As the 1D transform matrix A and the 1D transform matrix B, integers obtained by multiplying each designed transform basis by a scalar can also be used.
- The block size for performing the orthogonal transform may also be M × N.
- the transposition unit 203 transposes the output matrix (Y) of the vertical conversion unit 202 and supplies it to the selection switch 204.
- the transposition unit 203 is an example, and corresponding hardware may not necessarily be prepared.
- If the result of the 1D orthogonal transform by the vertical transform unit 202 (each element of its output matrix) is stored and then read out in an appropriate order when the horizontal transform unit 205 performs its 1D orthogonal transform, the transposition of the output matrix (Y) can be achieved without preparing hardware corresponding to the transposition unit 203.
- the selection switch 204 guides the input matrix from the transposition unit 203 to one of the 1D orthogonal transform unit A 208 and the 1D orthogonal transform unit B 209 according to the horizontal transform index included in the 1D transform matrix set information 129.
- the 1D orthogonal transform unit A208 multiplies the input matrix by the 1D transform matrix A and outputs the result.
- the 1D orthogonal transform unit B209 multiplies the input matrix by the 1D transform matrix B and outputs the result.
- The 1D orthogonal transform unit A208 and the 1D orthogonal transform unit B209 perform the one-dimensional orthogonal transform represented by the following equation (4) to remove the correlation of the prediction error in the horizontal direction.
- In equation (4), Z = H Y^T, where Y^T denotes the transposed matrix supplied from the transposition unit 203, H comprehensively represents the 1D transform matrix A and the 1D transform matrix B (both N × N), and Z represents the output matrix (N × N) of the 1D orthogonal transform unit A208 and the 1D orthogonal transform unit B209, which corresponds to the transform coefficients 120.
- The transformation matrix H is an N × N transformation matrix in which transform bases designed to remove the horizontal correlation of the matrix Y are arranged vertically as row vectors.
- the 1D transformation matrix A and the 1D transformation matrix B are designed in different ways and have different properties.
- The 1D transform matrix A and the 1D transform matrix B may likewise be integers obtained by multiplying each designed transform basis by a scalar.
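The dataflow of equations (3) and (4), including the transposition step, can be sketched as follows; here both 1D matrices are taken to be the orthonormal DCT purely for illustration (the patent's matrices A and B are designed differently):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are the 1D transform bases."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

n = 4
V = dct_matrix(n)            # vertical 1D transform matrix (eq. (3))
H = dct_matrix(n)            # horizontal 1D transform matrix (eq. (4))
X = np.arange(16, dtype=float).reshape(n, n)   # prediction-error matrix

Y = V @ X                    # eq. (3): remove vertical correlation
Z = H @ Y.T                  # eq. (4), applied after the transposition step

# Inverse (as the inverse orthogonal transform unit 105 would do):
X_rec = (H.T @ Z).T          # undo the horizontal step and transposition ...
X_rec = V.T @ X_rec          # ... then undo the vertical step
```

Because V and H are orthonormal, the inverse transform uses only their transposes, and the reconstructed X matches the original prediction-error matrix.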
- As described above, the orthogonal transform unit 102 performs the orthogonal transform on the prediction error (matrix) 119 according to the 1D transform matrix set information 129 input from the 1D transform matrix set unit 112, and outputs the transform coefficients (matrix) 120.
- the orthogonal transform unit 102 may include a DCT unit (not shown), or one of the 1D transform matrix A and the 1D transform matrix B may be replaced with a matrix for DCT.
- the 1D transformation matrix B may be a transformation matrix for DCT.
- the orthogonal transform unit 102 may implement various orthogonal transforms such as Hadamard transform, Karhunen-Loeve transform, and discrete sine transform, which will be described later, in addition to DCT.
- in the intra prediction mode, a predicted pixel is generated by copying the reference pixel group on one or both of the adjacent lines on the left side and the upper side of the prediction target block along the prediction direction, either directly or after interpolation. That is, in this intra prediction mode, at least one reference pixel in the reference pixel group is selected according to the prediction direction, and a predicted image is generated by copying the reference pixel or interpolating from it. Since the intra prediction mode exploits the spatial correlation of images, the prediction accuracy tends to decrease as the distance from the reference pixel increases; that is, the absolute value of the prediction error is likely to increase with the distance from the reference pixel.
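- as a toy illustration of this tendency, consider vertical prediction (copying the upper reference line downward, as in mode 0); the target block below is hypothetical, chosen so that rows farther from the reference line deviate more:

```python
import numpy as np

# Upper reference pixel line adjacent to a 4x4 prediction target block.
top_ref = np.array([100.0, 102.0, 98.0, 101.0])
predicted = np.tile(top_ref, (4, 1))       # copy along the vertical direction

# Hypothetical target block with a smooth vertical gradient.
target = top_ref[None, :] + 3.0 * np.arange(4)[:, None]
error = target - predicted                 # prediction error 119

# The mean absolute error grows with the distance from the reference line.
assert list(np.abs(error).mean(axis=1)) == [0.0, 3.0, 6.0, 9.0]
```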
- in an intra prediction mode that refers only to the reference pixel group on the left adjacent line of the prediction target block (copying the pixel value of the reference pixel or interpolating from it; for example, mode 1 and mode 8 in FIG. 7A), the prediction error shows this tendency in the horizontal direction.
- in an intra prediction mode that refers only to the reference pixel group on the upper adjacent line (for example, mode 0, mode 3, and mode 7 in FIG. 7A), the prediction error shows this tendency in the vertical direction.
- in an intra prediction mode that refers to both the left and upper reference pixel groups, the prediction error shows this tendency in both the horizontal direction and the vertical direction. In summary, the tendency relates to the direction orthogonal to the line of the reference pixel group used for generating the predicted image.
- the 1D transformation matrix A is generated by designing a common transformation basis in advance so that, when a 1D orthogonal transform is performed in the direction orthogonal to the reference line (vertical direction or horizontal direction), the coefficient density becomes higher (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 becomes small).
- the 1D transformation matrix B is generated by designing a general-purpose transformation matrix that does not have such a property; the DCT is an example of such a general-purpose transform. If the 1D orthogonal transform is performed in the orthogonal direction using the 1D transformation matrix A, the transform efficiency for prediction errors in intra prediction improves, and thus the coding efficiency improves.
- the prediction error 119 of mode 0 shows the above tendency in the vertical direction, but does not show the above tendency in the horizontal direction. Therefore, efficient orthogonal transformation can be realized by performing 1D orthogonal transformation using the 1D transformation matrix A in the vertical transformation unit 202 and performing 1D orthogonal transformation using the 1D transformation matrix B in the horizontal transformation unit 205.
- the inverse orthogonal transform unit 105 includes a selection switch 301, a vertical inverse transform unit 302, a transposition unit 303, a selection switch 304, and a horizontal inverse transform unit 305.
- the vertical inverse transform unit 302 includes a 1D inverse orthogonal transform unit A306 and a 1D inverse orthogonal transform unit B307.
- the horizontal inverse transform unit 305 includes a 1D inverse orthogonal transform unit A308 and a 1D inverse orthogonal transform unit B309. The order of the vertical inverse transform unit 302 and the horizontal inverse transform unit 305 is an example, and they may be reversed.
- the 1D inverse orthogonal transform unit A306 and the 1D inverse orthogonal transform unit A308 share the function of multiplying the input matrix by the transposed matrix of the 1D transform matrix A described above, and the 1D inverse orthogonal transform unit B307 and the 1D inverse orthogonal transform unit B309 share the function of multiplying the input matrix by the transposed matrix of the 1D transform matrix B described above. Therefore, the 1D inverse orthogonal transform unit A306 and the 1D inverse orthogonal transform unit A308 can be realized by using physically identical hardware in a time-division manner; the same applies to the 1D inverse orthogonal transform unit B307 and the 1D inverse orthogonal transform unit B309.
- the selection switch 301 guides the restoration transform coefficient 122 to one of the 1D inverse orthogonal transform unit A306 and the 1D inverse orthogonal transform unit B307 according to the vertical transform index included in the 1D transform matrix set information 129.
- the 1D inverse orthogonal transform unit A306 multiplies the input restored transform coefficient 122 (in matrix form) by the transposed matrix of the 1D transform matrix A and outputs the result.
- the 1D inverse orthogonal transform unit B307 multiplies the input restored transform coefficient 122 by the transposed matrix of the 1D transform matrix B and outputs the result.
- the 1D inverse orthogonal transform unit A306 and the 1D inverse orthogonal transform unit B307 perform one-dimensional inverse orthogonal transform represented by the following equation (5).
- in equation (5), Z′ represents the matrix (N×N) of the restored transform coefficients 122, Vᵀ generically represents the transposed matrix of the 1D transform matrix A and the 1D transform matrix B (both N×N), and Y′ represents the output matrix (N×N) of the 1D inverse orthogonal transform unit A306 and the 1D inverse orthogonal transform unit B307.
- the transposition unit 303 transposes the output matrix (Y ′) of the vertical inverse transform unit 302 and gives the result to the selection switch 304.
- the transposition unit 303 is an example, and corresponding hardware may not necessarily be prepared.
- if the result of the 1D inverse orthogonal transform executed by the vertical inverse transform unit 302 (each element of the output matrix of the vertical inverse transform unit 302) is stored and then read out in an appropriate order when the horizontal inverse transform unit 305 performs its 1D inverse orthogonal transform, the transposition of the output matrix (Y′) can be executed without preparing hardware corresponding to the transposition unit 303.
- the selection switch 304 guides the input matrix from the transposition unit 303 to one of the 1D inverse orthogonal transform unit A308 and the 1D inverse orthogonal transform unit B309 according to the horizontal transform index included in the 1D transform matrix set information 129.
- the 1D inverse orthogonal transform unit A308 multiplies the input matrix by the transposed matrix of the 1D transform matrix A and outputs the result.
- the 1D inverse orthogonal transform unit B309 multiplies the input matrix by the transposed matrix of the 1D transform matrix B and outputs the result.
- the 1D inverse orthogonal transform unit A308 and the 1D inverse orthogonal transform unit B309 perform one-dimensional inverse orthogonal transform represented by the following equation (6).
- in equation (6), Hᵀ generically represents the transposed matrix of the 1D transform matrix A and the 1D transform matrix B (both N×N), and X′ represents the output matrix (N×N) of the 1D inverse orthogonal transform unit A308 and the 1D inverse orthogonal transform unit B309, which corresponds to the restored prediction error 123.
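- the inverse pipeline can be sketched in the same style, again under the assumption that equation (5) has the form Y′ = VᵀZ′ and that, after the transposition unit 303, equation (6) amounts to multiplying by Hᵀ; with orthogonal 1D transform matrices the inverse restores the original block exactly:

```python
import numpy as np

def separable_inverse_transform(Zp, V, H):
    """Vertical inverse 1D transform, transposition, horizontal inverse."""
    Yp = V.T @ Zp      # vertical inverse transform unit 302 (assumed eq. (5))
    Xp_t = H.T @ Yp.T  # horizontal inverse transform unit 305 (assumed eq. (6))
    return Xp_t.T      # restored prediction error X', equal to V^T Z' H

# With orthogonal V and H, the inverse undoes the forward transform exactly.
N = 4
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((N, N)))   # any orthogonal basis
H, _ = np.linalg.qr(rng.standard_normal((N, N)))
X = rng.standard_normal((N, N))
Z = (H @ (V @ X).T).T                              # forward: Z = V X H^T
assert np.allclose(separable_inverse_transform(Z, V, H), X)
```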
- the inverse orthogonal transform unit 105 performs inverse orthogonal transform on the restored transform coefficient (matrix) 122 in accordance with the 1D transform matrix set information 129 input from the 1D transform matrix set unit 112 to generate the restored prediction error (matrix) 123.
- the inverse orthogonal transform unit 105 may include an IDCT unit (not shown), or one of the 1D transform matrix A and the 1D transform matrix B may be replaced with a matrix for DCT.
- the 1D transformation matrix B may be a matrix for DCT.
- the inverse orthogonal transform unit 105 may realize inverse orthogonal transforms corresponding to various orthogonal transforms such as the Hadamard transform, the Karhunen-Loeve transform, and the discrete sine transform, which will be described later, in concert with the orthogonal transform unit 102.
- the 1D transformation matrix set information 129 directly or indirectly indicates a vertical transformation index for selecting the transformation matrix used for vertical orthogonal transformation and vertical inverse orthogonal transformation, and a horizontal transformation index for selecting the transformation matrix used for horizontal orthogonal transformation and horizontal inverse orthogonal transformation.
- the 1D transformation matrix set information 129 can be expressed by a transformation index (TransformIdx) illustrated in FIG. 4D.
- a vertical transformation index (Vertical Transform Idx) and a horizontal transformation index (Horizontal Transform Idx) can be derived from the transformation index.
- when the vertical transformation index is “0”, the aforementioned 1D transformation matrix A (1D_Transform_Matrix_A) or its transposed matrix is selected for vertical orthogonal transformation or vertical inverse orthogonal transformation.
- when the vertical transformation index is “1”, the aforementioned 1D transformation matrix B (1D_Transform_Matrix_B) or its transposed matrix is selected for vertical orthogonal transformation or vertical inverse orthogonal transformation.
- when the horizontal transformation index is “0”, the aforementioned 1D transformation matrix A (1D_Transform_Matrix_A) or its transposed matrix is selected for horizontal orthogonal transformation or horizontal inverse orthogonal transformation.
- when the horizontal transformation index is “1”, the aforementioned 1D transformation matrix B (1D_Transform_Matrix_B) or its transposed matrix is selected for horizontal orthogonal transformation or horizontal inverse orthogonal transformation.
- FIG. 4A illustrates an index (IntraNxNPredModeIndex) of each (intra) prediction mode, its name (Name of IntraNxNPredMode), and a corresponding vertical conversion index and horizontal conversion index.
- the size of the prediction target block can be expanded to “MxN” (that is, a rectangle other than a square).
- FIG. 4E illustrates an index of each prediction mode, a name thereof, and a corresponding conversion index obtained by integrating FIGS. 4A and 4D.
- the 1D transform matrix set unit 112 detects the prediction mode index from the prediction mode information included in the prediction information 126 and generates the corresponding 1D transform matrix set information 129. Note that the various tables shown in FIGS. 4A, 4B, 4C, 4D, and 4E are examples, and the 1D transform matrix set unit 112 may generate the 1D transform matrix set information 129 without using some or all of these tables.
- when TransformIdx indicates 0, the Vertical Transform index indicates 0 and the Horizontal Transform index indicates 0. That is, the 1D conversion matrix A is used for the vertical orthogonal transformation, and the 1D conversion matrix A is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D conversion matrix A is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D conversion matrix A is used for the horizontal inverse orthogonal transformation.
- when TransformIdx indicates 1, the Vertical Transform index indicates 0 and the Horizontal Transform index indicates 1. That is, the 1D conversion matrix A is used for the vertical orthogonal transformation, and the 1D conversion matrix B is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D conversion matrix A is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D conversion matrix B is used for the horizontal inverse orthogonal transformation.
- when TransformIdx indicates 2, the Vertical Transform index indicates 1 and the Horizontal Transform index indicates 0. That is, the 1D conversion matrix B is used for the vertical orthogonal transformation, and the 1D conversion matrix A is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D conversion matrix B is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D conversion matrix A is used for the horizontal inverse orthogonal transformation.
- when TransformIdx indicates 3, the Vertical Transform index indicates 1 and the Horizontal Transform index indicates 1. That is, the 1D conversion matrix B is used for the vertical orthogonal transformation, and the 1D conversion matrix B is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D conversion matrix B is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D conversion matrix B is used for the horizontal inverse orthogonal transformation.
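- the four cases above reduce to a simple bit mapping from TransformIdx to the two indices; a sketch (the function name is illustrative, not from the specification):

```python
# Derive the vertical and horizontal transform indices from TransformIdx,
# matching the four cases enumerated above:
# 0 -> (0, 0), 1 -> (0, 1), 2 -> (1, 0), 3 -> (1, 1).
def derive_1d_transform_indices(transform_idx):
    vertical_transform_idx = transform_idx >> 1   # 0: matrix A, 1: matrix B
    horizontal_transform_idx = transform_idx & 1  # 0: matrix A, 1: matrix B
    return vertical_transform_idx, horizontal_transform_idx

assert [derive_1d_transform_indices(i) for i in range(4)] == [
    (0, 0), (0, 1), (1, 0), (1, 1)]
```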
- 1D conversion matrix set information 129 is assigned in consideration of the tendency of each intra prediction mode described above. That is, 0 is assigned to the Vertical Transform index in the prediction mode that shows the tendency in the vertical direction of the prediction error, and 0 is assigned to the Horizontal Transform index in the mode that shows the tendency in the horizontal direction. On the other hand, 1 is assigned to each direction in which the above tendency is not exhibited.
- by adaptively applying the 1D conversion matrix A or the 1D conversion matrix B to each of the vertical direction and the horizontal direction for each prediction mode, higher transform efficiency is achieved than when a fixed orthogonal transform such as the DCT is uniformly applied to every mode.
- the coefficient order control unit 113 converts each element of the quantized transform coefficient 121 that is a two-dimensional representation into a quantized transform coefficient sequence 117 that is a one-dimensional representation by arranging the elements in a predetermined order.
- the coefficient order control unit 113 can perform a common 2D-1D conversion regardless of the prediction mode.
- for example, like H.264, the coefficient order control unit 113 can use zigzag scanning. In the zigzag scan, the respective elements of the quantized transform coefficients 121 are arranged in the order shown in FIG. 8A and converted into a quantized transform coefficient string 117 as shown in FIG. 8B.
- (i, j) indicates the coordinates (position information) in the quantized transform coefficient (matrix) 121 of each element.
- FIG. 8C shows the 2D-1D conversion using the zigzag scan (in the case of a 4×4 pixel block). Specifically, FIG. 8C shows the index (idx) indicating the coefficient order (scan order) of the quantized transform coefficient sequence 117 after 2D-1D conversion by zigzag scanning, together with the corresponding element (cij) of the quantized transform coefficient 121. In FIG. 8C, cij indicates the element at coordinates (i, j) in the quantized transform coefficient (matrix) 121.
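- the zigzag scan order can be generated by walking the anti-diagonals of the block and alternating direction; a sketch reproducing the 4×4 order of FIGS. 8A-8C:

```python
import numpy as np

def zigzag_order(N=4):
    """Scan positions (i, j) along anti-diagonals, alternating direction."""
    order = []
    for s in range(2 * N - 1):                   # anti-diagonal: i + j = s
        diag = [(i, s - i) for i in range(N) if 0 <= s - i < N]
        order += diag if s % 2 else diag[::-1]
    return order

# 2D-1D conversion of a stand-in quantized transform coefficient matrix 121.
coeffs = np.arange(16).reshape(4, 4)             # c[i, j] = 4 * i + j
sequence = [int(coeffs[i, j]) for i, j in zigzag_order()]
assert sequence[:6] == [0, 1, 4, 8, 5, 2]        # (0,0),(0,1),(1,0),(2,0),...
```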
- the coefficient order control unit 113 can perform individual 2D-1D conversion for each prediction mode.
- the coefficient order control unit 113 that performs such an operation is illustrated in FIG. 5A.
- the coefficient order control unit 113 includes a selection switch 501 and individual 2D-1D conversion units 502,..., 510 for each of nine types of prediction modes.
- the selection switch 501 guides the quantized transform coefficient 121 to one of the 2D-1D conversion units (502, ..., 510) according to the prediction mode information (for example, the prediction mode index in FIG. 4A) included in the prediction information 126. For example, if the prediction mode index is 0, the selection switch 501 guides the quantized transform coefficient 121 to the 2D-1D conversion unit 502. In FIG. 5A, each prediction mode and each 2D-1D conversion unit have a one-to-one correspondence, and the quantized transform coefficient 121 is guided to the one 2D-1D conversion unit corresponding to the prediction mode.
- FIG. 9 illustrates the 2D-1D conversion (in the case of a 4×4 pixel block) performed by each of the 2D-1D conversion units 502, ..., 510; a specific design method for the 2D-1D conversion for each prediction mode as shown in FIG. 9 will be described later. FIG. 9 shows the index (idx) indicating the coefficient order (scan order) of the quantized transform coefficient sequence 117 after 2D-1D conversion by the 2D-1D conversion unit corresponding to each prediction mode, together with the corresponding element (cij) of the quantized transform coefficient 121, where cij represents the element at coordinates (i, j) in the quantized transform coefficient (matrix) 121.
- each prediction mode is represented by its name, but the correspondence with the prediction mode index is as shown in FIG. 4A.
- consequently, since the coefficients are scanned in an order suited to the occurrence tendency of non-zero coefficients in the quantized transform coefficient 121 for each prediction mode, the coding efficiency improves.
- the coefficient order control unit 113 may dynamically update the scan order in 2D-1D conversion.
- the coefficient order control unit 113 that performs such an operation is illustrated in FIG. 5B.
- the coefficient order control unit 113 includes a selection switch 501, individual 2D-1D conversion units 502,..., 510 for each of nine types of prediction modes, an occurrence frequency counting unit 511, and a coefficient order updating unit 512.
- the selection switch 501 is as described with reference to FIG. 5A. The 2D-1D conversion units 502, ..., 510 differ from those in FIG. 5A in that the scan order of each of the nine types of prediction modes is updated by the coefficient order update unit 512.
- the occurrence frequency counting unit 511 creates a histogram of the number of occurrences of non-zero coefficients in each element of the quantized transform coefficient sequence 117 for each prediction mode.
- the occurrence frequency counting unit 511 inputs the created histogram 513 to the coefficient order updating unit 512.
- the coefficient order update unit 512 updates the coefficient order based on the histogram 513 at a predetermined timing.
- the timing is, for example, the timing when the coding process of the coding tree unit is finished, the timing when the coding process for one line in the coding tree unit is finished, or the like.
- the coefficient order update unit 512 refers to the histogram 513 and updates the coefficient order for a prediction mode having an element in which the number of occurrences of non-zero coefficients is counted more than a threshold. For example, the coefficient order update unit 512 updates the prediction mode having an element in which the occurrence of a non-zero coefficient is counted 16 times or more. By providing a threshold value for the number of occurrences, the coefficient order is updated globally, so that it is difficult to converge to a local optimum solution.
- the coefficient order update unit 512 sorts the elements in descending order of the occurrence frequency of the non-zero coefficient regarding the prediction mode to be updated. Sorting can be realized by existing algorithms such as bubble sort and quick sort. Then, the coefficient order update unit 512 inputs coefficient order update information 514 indicating the order of the sorted elements to the 2D-1D conversion unit corresponding to the prediction mode to be updated.
- when the coefficient order update information 514 is input, the corresponding 2D-1D conversion unit performs the 2D-1D conversion according to the updated scan order.
- the initial scan order of each 2D-1D conversion unit needs to be determined in advance. For example, the zigzag scan or the scan order illustrated in FIG. 9 can be used as the initial scan order.
- even when the occurrence tendency of non-zero coefficients in the quantized transform coefficients 121 changes under the influence of the properties of the predicted image, the quantization information (quantization parameters), and the like, dynamically updating the scan order allows high coding efficiency to be expected stably. Specifically, the amount of code generated by run-length encoding in the entropy encoding unit 114 can be suppressed.
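- the update mechanism above can be sketched as follows, using the 16-count threshold mentioned earlier; the function name and the 2×2 toy block are illustrative, not from the specification:

```python
# Threshold-triggered scan-order update for one prediction mode: count
# non-zero occurrences per coefficient position, and once any position
# reaches the threshold, re-sort positions by descending frequency.
# Python's stable sort keeps the previous order for tied positions.
THRESHOLD = 16

def update_scan_order(scan_order, histogram):
    if max(histogram.values(), default=0) < THRESHOLD:
        return scan_order                          # no update yet
    return sorted(scan_order, key=lambda pos: -histogram.get(pos, 0))

initial = [(0, 0), (0, 1), (1, 0), (1, 1)]         # toy 2x2 "block"
hist = {(1, 0): 20, (0, 0): 18, (0, 1): 2}
assert update_scan_order(initial, hist) == [(1, 0), (0, 0), (0, 1), (1, 1)]
```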
- although H.264 with nine types of prediction modes has been described as an example, even when the prediction modes are expanded to 17 types, 33 types, and so on, individual 2D-1D conversion for each prediction mode can be performed by adding a 2D-1D conversion unit corresponding to each expanded prediction mode.
- next, the processing performed by the image encoding apparatus in FIG. 1 on the encoding target block (coding tree unit) will be described with reference to FIGS. 10A and 10B.
- in the following description, the orthogonal transform and the inverse orthogonal transform, that is, the adaptive orthogonal transform and inverse orthogonal transform based on the 1D transformation matrix set information 129 according to the present embodiment, are assumed to be enabled. However, as described later, the orthogonal transform and the inverse orthogonal transform according to the present embodiment may be disabled by the syntax.
- when the input image 118 is input to the image encoding device in FIG. 1 in units of encoding target blocks, the encoding process of the encoding target block starts (step S601).
- the intra prediction unit 108 and the inter prediction unit 109 generate an intra prediction image and an inter prediction image using the reference image 125 stored in the reference image memory 107 (step S602).
- the encoding control unit 116 determines the optimal prediction mode from the viewpoint of the above-described encoding cost and generates the prediction information 126 (step S603).
- the prediction information 126 is input from the prediction selection unit 110 to each element as described above. If the prediction information 126 generated in step S603 indicates intra prediction, the process proceeds to step S605. If the prediction information 126 indicates inter prediction, the process proceeds to step S605 '.
- step S605 the subtraction unit 101 subtracts the (intra) prediction image 127 from the encoding target block to generate a prediction error 119, and the process proceeds to step S606.
- step S605 'as well the subtracting unit 101 subtracts the (inter) predicted image 127 from the encoding target block to generate a prediction error 119, and the process proceeds to step S614'.
- step S606 the 1D conversion matrix setting unit 112 extracts prediction mode information included in the prediction information 126 generated in step S603.
- the 1D transformation matrix set unit 112 generates 1D transformation matrix set information 129 based on the extracted prediction mode information (for example, referring to the table of FIG. 4A) (step S607).
- the 1D transform matrix set unit 112 inputs 1D transform matrix set information 129 to the orthogonal transform unit 102 and the inverse orthogonal transform unit 105.
- the selection switch 201 in the orthogonal transform unit 102 selects the 1D orthogonal transform unit A206 or the 1D orthogonal transform unit B207 based on the 1D transform matrix set information 129 (steps S608, S609, and S610).
- the selection switch 204 in the orthogonal transform unit 102 selects the 1D orthogonal transform unit A208 or the 1D orthogonal transform unit B209 based on the 1D transform matrix set information 129 (steps S611, S612, and S613). Thereafter, the process proceeds to step S614.
- when TransformIdx indicates 0, the selection switch 201 selects the 1D orthogonal transform unit A206 in the vertical transform unit 202 (step S609), and the selection switch 204 selects the 1D orthogonal transform unit A208 in the horizontal transform unit 205 (step S612).
- when TransformIdx indicates 1, the selection switch 201 selects the 1D orthogonal transform unit A206 in the vertical transform unit 202 (step S609), and the selection switch 204 selects the 1D orthogonal transform unit B209 in the horizontal transform unit 205 (step S613).
- when TransformIdx indicates 2, the selection switch 201 selects the 1D orthogonal transform unit B207 in the vertical transform unit 202 (step S610), and the selection switch 204 selects the 1D orthogonal transform unit A208 in the horizontal transform unit 205 (step S612).
- when TransformIdx indicates 3, the selection switch 201 selects the 1D orthogonal transform unit B207 in the vertical transform unit 202 (step S610), and the selection switch 204 selects the 1D orthogonal transform unit B209 in the horizontal transform unit 205 (step S613).
- step S614 the orthogonal transform unit 102 performs vertical transform and horizontal transform on the prediction error 119 according to the settings in step S608,..., Step S613, respectively, and generates a transform coefficient 120.
- the quantization unit 103 quantizes the transform coefficient 120 generated in step S614 to generate a quantized transform coefficient 121 (step S615), and the process proceeds to step S616.
- step S614 ' the orthogonal transform unit 102 performs a fixed orthogonal transform such as DCT on the prediction error 119 to generate a transform coefficient 120.
- the quantization unit 103 quantizes the transform coefficient 120 generated in step S614 'to generate a quantized transform coefficient 121 (step S615'), and the process proceeds to step S617 '.
- the orthogonal transform performed in step S614 ' may be realized by a DCT unit (not shown) or the like, or may be realized by the 1D orthogonal transform unit B207 and the 1D orthogonal transform unit B209.
- in step S616, the coefficient order control unit 113 sets the scan order (that is, the connection of the selection switch 501 in the examples of FIGS. 5A and 5B) based on the prediction mode information included in the prediction information 126 generated in step S603, and the process proceeds to step S617. However, if the coefficient order control unit 113 performs a common 2D-1D conversion regardless of the prediction mode, step S616 can be omitted.
- step S617 the coefficient order control unit 113 performs 2D-1D conversion on the quantized transform coefficient 121 according to the setting in step S616 to generate a quantized transform coefficient sequence 117.
- the entropy encoding unit 114 performs entropy encoding on the encoding parameter including the quantized transform coefficient sequence 117 (step S618).
- the encoded data 130 is output at an appropriate timing managed by the encoding control unit 116.
- the inverse quantization unit 104 performs inverse quantization on the quantized transform coefficient 121 to generate a restored transform coefficient 122 (step S619), and the process proceeds to step S620.
- in step S617′, the coefficient order control unit 113 performs a fixed 2D-1D transform, such as a zigzag scan or the 2D-1D transform corresponding to Intra_NxN_DC in FIG. 9, on the quantized transform coefficient 121 to generate a quantized transform coefficient sequence 117. Subsequently, the entropy encoding unit 114 performs entropy encoding on the encoding parameters including the quantized transform coefficient sequence 117 (step S618′). The encoded data 130 is output at an appropriate timing managed by the encoding control unit 116. On the other hand, the inverse quantization unit 104 performs inverse quantization on the quantized transform coefficient 121 to generate the restored transform coefficient 122 (step S619′), and the process proceeds to step S626′.
- the selection switch 301 in the inverse orthogonal transform unit 105 selects the 1D inverse orthogonal transform unit A306 or the 1D inverse orthogonal transform unit B307 based on the 1D transform matrix set information 129 (step S620, step S621, and step S622).
- the selection switch 304 in the inverse orthogonal transform unit 105 selects the 1D inverse orthogonal transform unit A308 or the 1D inverse orthogonal transform unit B309 based on the 1D transform matrix set information 129 (steps S623, S624, and S625). Thereafter, the process proceeds to step S626.
- when TransformIdx indicates 0, the selection switch 301 selects the 1D inverse orthogonal transform unit A306 in the vertical inverse transform unit 302 (step S621), and the selection switch 304 selects the 1D inverse orthogonal transform unit A308 in the horizontal inverse transform unit 305 (step S624).
- when TransformIdx indicates 1, the selection switch 301 selects the 1D inverse orthogonal transform unit A306 in the vertical inverse transform unit 302 (step S621), and the selection switch 304 selects the 1D inverse orthogonal transform unit B309 in the horizontal inverse transform unit 305 (step S625).
- when TransformIdx indicates 2, the selection switch 301 selects the 1D inverse orthogonal transform unit B307 in the vertical inverse transform unit 302 (step S622), and the selection switch 304 selects the 1D inverse orthogonal transform unit A308 in the horizontal inverse transform unit 305 (step S624).
- when TransformIdx indicates 3, the selection switch 301 selects the 1D inverse orthogonal transform unit B307 in the vertical inverse transform unit 302 (step S622), and the selection switch 304 selects the 1D inverse orthogonal transform unit B309 in the horizontal inverse transform unit 305 (step S625).
- in step S626, the inverse orthogonal transform unit 105 performs the vertical inverse transform and the horizontal inverse transform on the restored transform coefficient 122 according to the settings in steps S620, ..., S625 to generate the restored prediction error 123, and the process proceeds to step S627.
- step S626 ' the inverse orthogonal transform unit 105 performs inverse orthogonal transform such as IDCT on the reconstructed transform coefficient 122 to generate a reconstructed prediction error 123, and the process proceeds to step S627.
- the fixed inverse orthogonal transform performed in step S626 ' may be realized by an IDCT unit (not shown) or the like, or may be realized by the 1D inverse orthogonal transform unit B307 and the 1D inverse orthogonal transform unit B309.
- step S627 the adding unit 106 adds the reconstructed prediction error 123 generated in step S626 or step S626 ′ and the predicted image 127 to generate a local decoded image 124.
- the local decoded image 124 is stored in the reference image memory 107 for use as a reference image.
- step S628 the encoding process of the block to be encoded is completed.
- a prediction error 119 for each prediction mode is generated.
- among the prediction errors 119 of each prediction mode, those showing the aforementioned tendency, in which the absolute value of the prediction error increases with the distance from the reference pixel in the vertical direction or the horizontal direction, are collected.
- a 1D orthogonal basis for removing the correlation in the vertical direction of the matrix is designed.
- a 1D conversion matrix A is generated by vertically arranging the 1D orthogonal bases as row vectors.
- singular value decomposition is performed on a matrix in which the collected prediction errors 119 are arranged vertically (each prediction error 119 set as a column vector), thereby generating a 1D orthogonal basis for removing the vertical correlation of the matrix.
- a 1D conversion matrix B is generated by vertically arranging the 1D orthogonal bases as row vectors.
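- the basis design via singular value decomposition can be sketched as follows; the training data here is synthetic (column magnitudes growing with row distance from the reference pixels), standing in for prediction errors 119 collected from real training images:

```python
import numpy as np

# Arrange each training prediction error as a column vector, side by side,
# then take the left singular vectors of that matrix as the 1D orthogonal
# basis that removes the vertical correlation. Stacking the basis vectors
# as row vectors yields the 1D transform matrix.
rng = np.random.default_rng(0)
N, num_samples = 4, 1000
# Synthetic tendency: error magnitude grows with distance from the top row.
columns = rng.standard_normal((N, num_samples)) * (1.0 + np.arange(N))[:, None]

U, _, _ = np.linalg.svd(columns, full_matrices=False)  # U is N x N here
transform_matrix = U.T          # 1D orthogonal bases stacked as row vectors

assert np.allclose(transform_matrix @ transform_matrix.T, np.eye(N))
```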
- the 1D conversion matrix B can be simply replaced with a matrix for DCT.
- although a design for a 4×4 pixel block is illustrated for simplicity, 1D transformation matrices for 8×8 pixel blocks and 16×16 pixel blocks can be designed in the same way.
- the described design method is an example, and there is room for appropriate design in consideration of the properties of the prediction residual described above.
- the scan order for each prediction mode is designed based on the quantized transform coefficient 121 generated by the quantization unit 103.
- a plurality of training images are prepared and the prediction residuals 119 of each of the nine types of prediction modes are generated.
- Each of the prediction residuals 119 is subjected to orthogonal transformation shown in Equation (3) and Equation (4) to generate a transform coefficient 120, which is further quantized.
- for each element in the 4×4 pixel block, the number of occurrences of a non-zero coefficient in the quantized transform coefficient 121 is cumulatively counted.
- This cumulative addition is performed on all training images, and a histogram indicating the frequency of occurrence of non-zero coefficients is created for every 16 elements of the 4×4 pixel block. Based on this histogram, indexes 0 to 15 are given in ascending order from the element with the highest occurrence frequency. Such index assignment is performed individually for all prediction modes. The order of the assigned indexes is used as the scan order corresponding to each prediction mode.
- the design related to the 4×4 pixel block is illustrated, but the scan order of the 8×8 pixel block and the 16×16 pixel block can be similarly designed. Further, even if the prediction modes are expanded to 17 types, 33 types, or an arbitrary number, the design can be performed by the same method. The method for dynamically updating the scan order is as described with reference to FIG. 5B.
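- The histogram-based scan-order design above can be sketched as follows. The training data is synthetic and the function name is illustrative, not part of the embodiment.

```python
import numpy as np

def design_scan_order(quantized_blocks):
    """Derive a scan order from 4x4 quantized-transform-coefficient blocks.

    Counts non-zero occurrences per element over all training blocks
    (the histogram of the text) and assigns indexes 0..15 from the most
    frequent element downward; the resulting index order is the scan order.
    """
    counts = np.zeros((4, 4), dtype=int)
    for q in quantized_blocks:
        counts += (np.asarray(q) != 0).astype(int)
    # argsort of the negated flattened counts: most frequent element first
    return np.argsort(-counts.ravel(), kind="stable")

# Toy training data: non-zero probability decreasing in raster order,
# mimicking energy compaction toward the low-frequency corner.
rng = np.random.default_rng(1)
blocks = [(rng.random((4, 4)) < np.linspace(0.9, 0.05, 16).reshape(4, 4)).astype(int)
          for _ in range(500)]
order = design_scan_order(blocks)
assert sorted(order.tolist()) == list(range(16))  # a permutation of 0..15
```

One such order would be learned per prediction mode and per block size.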
- the syntax indicates the structure of encoded data (for example, encoded data 130 in FIG. 1) when the image encoding apparatus encodes moving image data.
- the image decoding apparatus interprets the syntax with reference to the same syntax structure.
- FIG. 11 illustrates the syntax 700 used by the image coding apparatus.
- the syntax 700 includes three parts: a high level syntax 701, a slice level syntax 702, and a coding tree level syntax 703.
- the high level syntax 701 includes syntax information of a layer higher than the slice.
- a slice refers to a rectangular area or a continuous area included in a frame or a field.
- the slice level syntax 702 includes information necessary for decoding each slice.
- the coding tree level syntax 703 includes information necessary for decoding each coding tree (ie, each coding tree unit). Each of these parts includes more detailed syntax.
- the high level syntax 701 includes sequence and picture level syntaxes such as a sequence parameter set syntax 704 and a picture parameter set syntax 705.
- the slice level syntax 702 includes a slice header syntax 706, a slice data syntax 707, and the like.
- the coding tree level syntax 703 includes a coding tree unit syntax 708, a prediction unit syntax 709, and the like.
- the coding tree unit syntax 708 can have a quadtree structure. Specifically, the coding tree unit syntax 708 can be recursively called as a syntax element of the coding tree unit syntax 708. That is, one coding tree unit can be subdivided with a quadtree.
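- The recursive quadtree invocation can be sketched as follows; the split rule, function names, and block sizes are illustrative assumptions, not the actual syntax functions.

```python
def coding_tree_unit_syntax(x, y, size, min_size, split_decision, leaves):
    """Sketch of a recursively invoked coding-tree-unit syntax.

    If split_decision says to subdivide (and the minimum size has not
    been reached), the same syntax is called for the four quadrants;
    otherwise this node is a leaf of the quadtree, where the
    transform-unit syntax would be invoked.
    """
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                coding_tree_unit_syntax(x + dx, y + dy, half, min_size,
                                        split_decision, leaves)
    else:
        leaves.append((x, y, size))  # transform_unit_syntax() would run here

leaves = []
# Split the 64x64 root once, then split only the top-left 32x32 quadrant again.
coding_tree_unit_syntax(0, 0, 64, 8,
                        lambda x, y, s: s == 64 or (s == 32 and x == 0 and y == 0),
                        leaves)
assert len(leaves) == 7  # four 16x16 leaves plus three 32x32 leaves
```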
- the coding tree unit syntax 708 includes a transform unit syntax 710.
- the transform unit syntax 710 is invoked in each coding tree unit syntax 708 at a leaf (the extreme end) of the quadtree.
- the transform unit syntax 710 describes information related to inverse orthogonal transformation and quantization.
- FIG. 12 illustrates a slice header syntax 706 according to the present embodiment.
- the slice_directional_unified_transform_flag shown in FIG. 12 is a syntax element indicating the validity/invalidity, for the slice, of the orthogonal transform and inverse orthogonal transform according to the present embodiment.
- when this flag indicates invalidity, the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 perform fixed orthogonal transform and inverse orthogonal transform such as DCT and IDCT.
- this fixed orthogonal transform and inverse orthogonal transform may be performed by the 1D orthogonal transform unit B207, the 1D orthogonal transform unit B209, the 1D inverse orthogonal transform unit B307, and the 1D inverse orthogonal transform unit B309 (that is, by the 1D transform matrix B).
- alternatively, they may be performed by a DCT unit and an IDCT unit (not shown).
- the coefficient order control unit 113 also performs fixed 2D-1D conversion (for example, zigzag scanning). This fixed 2D-1D conversion may be performed by the 2D-1D conversion unit (mode 2) 504, or may be performed by a 2D-1D conversion unit (not shown).
- when this flag indicates validity, the orthogonal transform and inverse orthogonal transform according to the present embodiment are effective over the entire area in the slice. That is, the encoding process is performed over the entire area in the slice according to the encoding flowcharts described with reference to FIGS. 10A and 10B. Specifically, the selection switch 201 selects the 1D orthogonal transform unit A206 or the 1D orthogonal transform unit B207 based on the 1D transform matrix set information 129. The selection switch 204 selects the 1D orthogonal transform unit A208 or the 1D orthogonal transform unit B209 based on the 1D transform matrix set information 129.
- the selection switch 301 selects the 1D inverse orthogonal transform unit A306 or the 1D inverse orthogonal transform unit B307 based on the 1D transform matrix set information 129.
- the selection switch 304 selects the 1D inverse orthogonal transform unit A308 or the 1D inverse orthogonal transform unit B309 based on the 1D transform matrix set information 129.
- the selection switch 501 selects one of the 2D-1D conversion units 502,..., 510 according to the prediction mode information included in the prediction information 126.
- even when slice_directional_unified_transform_flag is 1, the validity/invalidity of the orthogonal transform and inverse orthogonal transform according to this embodiment may be defined for each local region in the slice in the syntax of a lower layer (coding tree unit, transform unit, etc.).
- FIG. 13 illustrates a coding tree unit syntax 708 according to the present embodiment.
- the ctb_directive_unified_transform_flag shown in FIG. 13 is a syntax element indicating the validity/invalidity, for the coding tree unit, of the orthogonal transform and inverse orthogonal transform according to the present embodiment.
- pred_mode shown in FIG. 13 is one of syntax elements included in the prediction unit syntax 709, and indicates the coding type in the coding tree unit or macroblock. MODE_INTRA indicates that the encoding type is intra prediction.
- ctb_directive_unified_transform_flag is encoded only when the above-mentioned slice_directional_unified_transform_flag is 1 and the coding type of the coding tree unit is intra prediction.
- when this flag indicates invalidity, the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 perform fixed orthogonal transform and inverse orthogonal transform such as DCT and IDCT.
- this fixed orthogonal transform and inverse orthogonal transform may be performed by the 1D orthogonal transform unit B207, the 1D orthogonal transform unit B209, the 1D inverse orthogonal transform unit B307, and the 1D inverse orthogonal transform unit B309 (that is, by the 1D transform matrix B).
- alternatively, they may be performed by a DCT unit and an IDCT unit (not shown).
- the coefficient order control unit 113 also performs fixed 2D-1D conversion (for example, zigzag scanning). This fixed 2D-1D conversion may be performed by the 2D-1D conversion unit (mode 2) 504, or may be performed by a 2D-1D conversion unit (not shown).
- the selection switch 201 selects the 1D orthogonal transform unit A206 or the 1D orthogonal transform unit B207 based on the 1D transform matrix set information 129.
- the selection switch 204 selects the 1D orthogonal transform unit A208 or the 1D orthogonal transform unit B209 based on the 1D transform matrix set information 129.
- the selection switch 301 selects the 1D inverse orthogonal transform unit A306 or the 1D inverse orthogonal transform unit B307 based on the 1D transform matrix set information 129.
- the selection switch 304 selects the 1D inverse orthogonal transform unit A308 or the 1D inverse orthogonal transform unit B309 based on the 1D transform matrix set information 129.
- the selection switch 501 selects one of the 2D-1D conversion units 502,..., 510 according to the prediction mode information included in the prediction information 126.
- FIG. 14 illustrates a transform unit syntax 710 according to this embodiment.
- a tu_directive_unified_transform_flag shown in FIG. 14 is a syntax element indicating validity / invalidity of the orthogonal transform and the inverse orthogonal transform according to the present embodiment with respect to the transform unit.
- pred_mode shown in FIG. 14 is one of syntax elements included in the prediction unit syntax 709, and indicates the coding type in the coding tree unit or macroblock. MODE_INTRA indicates that the encoding type is intra prediction.
- tu_directive_unified_transform_flag is encoded only when slice_directional_unified_transform_flag is 1 and the coding type of the coding tree unit is intra prediction.
- when this flag indicates invalidity, the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 perform fixed orthogonal transform and inverse orthogonal transform such as DCT and IDCT.
- this fixed orthogonal transform and inverse orthogonal transform may be performed by the 1D orthogonal transform unit B207, the 1D orthogonal transform unit B209, the 1D inverse orthogonal transform unit B307, and the 1D inverse orthogonal transform unit B309 (that is, by the 1D transform matrix B).
- alternatively, they may be performed by a DCT unit and an IDCT unit (not shown).
- the coefficient order control unit 113 also performs fixed 2D-1D conversion (for example, zigzag scanning). This fixed 2D-1D conversion may be performed by the 2D-1D conversion unit (mode 2) 504, or may be performed by a 2D-1D conversion unit (not shown).
- the selection switch 201 selects the 1D orthogonal transform unit A206 or the 1D orthogonal transform unit B207 based on the 1D transform matrix set information 129.
- the selection switch 204 selects the 1D orthogonal transform unit A208 or the 1D orthogonal transform unit B209 based on the 1D transform matrix set information 129.
- the selection switch 301 selects the 1D inverse orthogonal transform unit A306 or the 1D inverse orthogonal transform unit B307 based on the 1D transform matrix set information 129.
- the selection switch 304 selects the 1D inverse orthogonal transform unit A308 or the 1D inverse orthogonal transform unit B309 based on the 1D transform matrix set information 129.
- the selection switch 501 selects one of the 2D-1D conversion units 502,..., 510 according to the prediction mode information included in the prediction information 126.
- when the flag defining the validity/invalidity of the orthogonal transform and the inverse orthogonal transform according to the present embodiment is encoded in the transform unit syntax 710, the amount of information (code amount) increases compared with the case where the flag is not encoded. However, encoding this flag makes it possible to perform the optimal orthogonal transform for each local region (that is, for each transform unit).
- syntax elements not defined in this embodiment may be inserted between the rows of the syntax tables illustrated in FIGS. 12, 13, and 14, and other conditional branch descriptions may be included. Further, the syntax table may be divided into a plurality of tables, or a plurality of syntax tables may be integrated. Moreover, the terminology of each illustrated syntax element may be changed arbitrarily.
- the image encoding apparatus uses the tendency of intra prediction that the prediction accuracy decreases as the distance from the reference pixel increases.
- This image encoding apparatus classifies the vertical direction and the horizontal direction of each prediction mode into two classes according to the presence or absence of the above-described tendency, and adaptively applies the 1D transformation matrix A or the 1D transformation matrix B to each of the vertical and horizontal directions.
- the 1D transformation matrix A is generated in advance by designing a common transformation base so that the coefficient density becomes high (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 becomes small) when 1D orthogonal transformation is performed in the direction (vertical or horizontal) orthogonal to the line of the reference pixel group.
- the 1D transformation matrix B is generated by designing a general-purpose transformation matrix that does not have such a property.
- a typical example of such a general-purpose transformation is DCT. Therefore, according to the image coding apparatus of the present embodiment, high transformation efficiency is achieved compared with uniformly applying a fixed orthogonal transform such as DCT to every prediction mode.
- the orthogonal transform unit 102 and the inverse orthogonal transform unit 105 according to the present embodiment are suitable for both hardware implementation and software implementation. Since Equations (3) to (6) represent multiplication by fixed matrices, when the orthogonal transform unit and the inverse orthogonal transform unit are implemented in hardware, it is assumed that they are configured by hard-wired logic rather than by a multiplier.
- in the orthogonal transform unit and the inverse orthogonal transform unit according to the present embodiment, when the vertical (inverse) transform unit and the horizontal (inverse) transform unit are shared in a time-division manner, the four types of two-dimensional orthogonal transform are executed by a combination of two 1D orthogonal transform units and a circuit for transposing the matrix. Therefore, according to the orthogonal transform unit and the inverse orthogonal transform unit of this embodiment, an increase in circuit scale in hardware implementation can be significantly suppressed.
- likewise, the orthogonal transform unit and the inverse orthogonal transform unit according to the present embodiment perform the four types of two-dimensional orthogonal transform by combining the vertical transform and the horizontal transform using two 1D transform matrices. Therefore, according to the orthogonal transform unit and the inverse orthogonal transform unit of this embodiment, an increase in memory size in software implementation can be significantly suppressed.
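- The sharing described above amounts to realizing each two-dimensional transform Y = V X H^T with a single 1D multiply stage reused through transposition. A minimal sketch follows, with an illustrative orthonormal DCT basis standing in for one of the 1D transform matrices; the function names are assumptions for illustration.

```python
import numpy as np

def separable_2d_transform(x, v, h):
    """2D transform Y = V X H^T realized as two passes of one 1D stage:
    vertical multiply, transpose, horizontal multiply, transpose back."""
    t = (v @ x).T        # vertical 1D transform, then transposition circuit
    return (h @ t).T     # horizontal 1D transform, transpose back

def inverse_separable_2d_transform(y, v, h):
    """Inverse X = V^T Y H uses the transposed matrices of the same bases."""
    t = (v.T @ y).T
    return (h.T @ t).T

# Illustrative orthonormal 4-point DCT-II basis (rows = basis vectors).
n = 4
i = np.arange(n)
dct = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
dct[0] /= np.sqrt(2.0)

x = np.arange(16, dtype=float).reshape(4, 4)
y = separable_2d_transform(x, dct, np.eye(4))           # one of four V/H pairings
assert np.allclose(inverse_separable_2d_transform(y, dct, np.eye(4)), x)
```

Swapping which of the two 1D matrices plays the vertical and horizontal roles yields the four two-dimensional transform types with no additional stored matrices.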
- the quantized transform coefficient 121 has the property that the occurrence tendency of non-zero coefficients is biased for each element. This occurrence tendency of non-zero coefficients differs for each prediction direction of intra prediction. Furthermore, if the prediction directions are the same, the non-zero coefficient occurrence tendency is similar even when pixel blocks of different input images 118 are encoded. Therefore, by 2D-1D converting the quantized transform coefficients 121 into the one-dimensional quantized transform coefficient sequence 122 in order from the element having the highest non-zero coefficient occurrence probability, the coefficient order control unit 113 gathers the non-zero coefficients densely at the head of the sequence with high probability.
- the coefficient order control unit 113 may use a scan order learned in advance for each prediction mode, or may dynamically update the scan order during the encoding process. If a scan order optimized for each prediction mode is used, the amount of generated code based on the quantized transform coefficient sequence 122 can be reduced compared with, for example, H.264, without causing a significant increase in the amount of computation.
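- Dynamic updating of the scan order can be sketched as follows: non-zero occurrences are counted while encoding and the order is periodically re-sorted. The class name and the update interval are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

class DynamicScanOrder:
    """Sketch of dynamically updating a scan order during encoding.

    Counts non-zero occurrences per element of the quantized transform
    coefficients and periodically re-sorts the scan order so that the
    most frequent elements come first.
    """
    def __init__(self, block_elems=16, update_interval=16):
        self.counts = np.zeros(block_elems, dtype=int)
        self.order = np.arange(block_elems)   # initial raster order
        self.update_interval = update_interval
        self.blocks_seen = 0

    def scan(self, quantized_block):
        coeffs = np.asarray(quantized_block).ravel()
        seq = coeffs[self.order]              # 2D-1D conversion by current order
        self.counts += (coeffs != 0).astype(int)
        self.blocks_seen += 1
        if self.blocks_seen % self.update_interval == 0:
            self.order = np.argsort(-self.counts, kind="stable")
        return seq

rng = np.random.default_rng(2)
scanner = DynamicScanOrder()
for _ in range(64):  # synthetic blocks with non-zeros biased toward later elements
    scanner.scan((rng.random(16) < np.linspace(0.05, 0.9, 16)).astype(int).reshape(4, 4))
assert sorted(scanner.order.tolist()) == list(range(16))
```

After the periodic update, frequently non-zero elements migrate toward the front of the scan, which is the effect described for FIG. 5B.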
- the image encoding device according to the second embodiment differs from the image encoding device according to the first embodiment described above in the details of orthogonal transform and inverse orthogonal transform.
- the same parts as those in the first embodiment are denoted by the same reference numerals in the present embodiment, and different parts will be mainly described.
- An image decoding apparatus corresponding to the image encoding apparatus according to the present embodiment will be described in a fifth embodiment.
- the image encoding apparatus according to the present embodiment includes the orthogonal transform unit 102 illustrated in FIG. 16 instead of the orthogonal transform unit 102 of the first embodiment. The orthogonal transform unit 102 in FIG. 16 includes a selection switch 801, a vertical conversion unit 802, a transposition unit 203, a selection switch 804, and a horizontal conversion unit 805.
- the vertical transform unit 802 includes a 1D orthogonal transform unit C806, a 1D orthogonal transform unit D807, and a 1D orthogonal transform unit E808.
- the horizontal transform unit 805 includes a 1D orthogonal transform unit C809, a 1D orthogonal transform unit D810, and a 1D orthogonal transform unit E811. Note that the order of the vertical conversion unit 802 and the horizontal conversion unit 805 is an example, and these may be reversed.
- the 1D orthogonal transform unit C806 and the 1D orthogonal transform unit C809 have a common function in that the input matrix is multiplied by the 1D transform matrix C.
- the 1D orthogonal transform unit D807 and the 1D orthogonal transform unit D810 have a common function in that the input matrix is multiplied by the 1D transform matrix D.
- the 1D orthogonal transform unit E808 and the 1D orthogonal transform unit E811 have a common function in that the input matrix is multiplied by the 1D transform matrix E.
- the 1D conversion matrix C, the 1D conversion matrix D, and the 1D conversion matrix E will be described.
- the prediction error 119 tends to increase in absolute value as the distance from the reference pixel increases.
- although this tendency is the same regardless of the prediction direction, the prediction error 119 in the DC prediction mode cannot be said to exhibit the tendency in either the vertical direction or the horizontal direction.
- therefore, for the DC prediction mode, a 1D transformation matrix E described later is used.
- the 1D conversion matrix C and the 1D conversion matrix D are adaptively used according to the presence or absence of the above-described tendency, as in the first embodiment.
- the 1D conversion matrix C can be generated by the same design method as the 1D conversion matrix A described above.
- the 1D conversion matrix D can be generated by a design method similar to the 1D conversion matrix B described above. That is, the 1D transformation matrix D can be generated by performing the above-described design method for the 1D transformation matrix B after excluding the DC prediction mode.
- the 1D transformation matrix E may be a matrix for DCT.
- alternatively, the 1D transformation matrix E may be generated by designing a common transformation base in advance so that, compared with the 1D transformation matrix D, the coefficient density becomes higher (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 becomes smaller) when 1D orthogonal transformation is performed in the vertical direction and the horizontal direction on the prediction error 119 of the DC prediction mode.
- the image encoding apparatus according to the present embodiment includes the inverse orthogonal transform unit 105 illustrated in FIG. 17 instead of the inverse orthogonal transform unit 105 of the first embodiment.
- the inverse orthogonal transform unit 105 in FIG. 17 includes a selection switch 901, a vertical inverse transform unit 902, a transposition unit 303, a selection switch 904, and a horizontal inverse transform unit 905.
- the vertical inverse transform unit 902 includes a 1D inverse orthogonal transform unit C906, a 1D inverse orthogonal transform unit D907, and a 1D inverse orthogonal transform unit E908.
- the horizontal inverse transform unit 905 includes a 1D inverse orthogonal transform unit C909, a 1D inverse orthogonal transform unit D910, and a 1D inverse orthogonal transform unit E911. Note that the order of the vertical inverse transform unit 902 and the horizontal inverse transform unit 905 is an example, and these may be reversed.
- the 1D inverse orthogonal transform unit C906 and the 1D inverse orthogonal transform unit C909 have a common function in that the input matrix is multiplied by the transposed matrix of the 1D transform matrix C.
- the 1D inverse orthogonal transform unit D907 and the 1D inverse orthogonal transform unit D910 have a common function in that the input matrix is multiplied by the transposed matrix of the 1D transform matrix D.
- the 1D inverse orthogonal transform unit E908 and the 1D inverse orthogonal transform unit E911 have a common function in that the input matrix is multiplied by the transposed matrix of the 1D transform matrix E.
- the 1D transformation matrix set information 129 directly or indirectly indicates a vertical transformation index for selecting the transformation matrix used for the vertical orthogonal transformation and the vertical inverse orthogonal transformation, and a horizontal transformation index for selecting the transformation matrix used for the horizontal orthogonal transformation and the horizontal inverse orthogonal transformation.
- the 1D transformation matrix set information 129 can be expressed by a transformation index (TransformIdx) illustrated in FIG. 18D.
- a vertical transformation index (Vertical Transform Idx) and a horizontal transformation index (Horizontal Transform Idx) can be derived from the transformation index.
- when the vertical transformation index is “0”, the 1D transformation matrix C (1D_Transform_Matrix_C) or its transposed matrix is selected for the vertical orthogonal transformation or the vertical inverse orthogonal transformation.
- when the vertical transformation index is “1”, the aforementioned 1D transformation matrix D (1D_Transform_Matrix_D) or its transposed matrix is selected for the vertical orthogonal transformation or the vertical inverse orthogonal transformation.
- when the vertical transformation index is “2”, the 1D transformation matrix E (1D_Transform_Matrix_E) or its transposed matrix is selected for the vertical orthogonal transformation or the vertical inverse orthogonal transformation.
- when the horizontal transformation index is “0”, the 1D transformation matrix C (1D_Transform_Matrix_C) or its transposed matrix is selected for the horizontal orthogonal transformation or the horizontal inverse orthogonal transformation.
- when the horizontal transformation index is “1”, the aforementioned 1D transformation matrix D (1D_Transform_Matrix_D) or its transposed matrix is selected for the horizontal orthogonal transformation or the horizontal inverse orthogonal transformation.
- when the horizontal transformation index is “2”, the 1D transformation matrix E (1D_Transform_Matrix_E) or its transposed matrix is selected for the horizontal orthogonal transformation or the horizontal inverse orthogonal transformation.
- FIG. 18A illustrates an index (IntraNxNPredModeIndex) of each (intra) prediction mode, its name (Name of IntraNxNPredMode), and a corresponding vertical conversion index and horizontal conversion index.
- the size of the prediction target block can be expanded to “MxN” (that is, a rectangle other than a square).
- FIG. 18E illustrates an index of each prediction mode, its name, and a corresponding conversion index obtained by integrating FIGS. 18A and 18D.
- the 1D transform matrix set unit 112 detects a prediction mode index from the prediction mode information included in the prediction information 126, and generates the corresponding 1D transform matrix set information 129. Note that the various tables shown in FIGS. 18A, 18B, 18C, 18D, and 18E are examples, and the 1D transform matrix set unit 112 may generate the 1D transform matrix set information 129 without using some or all of these tables.
- when TransformIdx indicates 0, it means that the Vertical Transform index indicates 0 and the Horizontal Transform index indicates 0. That is, the 1D transformation matrix C is used for the vertical orthogonal transformation and the 1D transformation matrix C is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D transformation matrix C is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D transformation matrix C is used for the horizontal inverse orthogonal transformation.
- when TransformIdx indicates 1, it means that the Vertical Transform index indicates 0 and the Horizontal Transform index indicates 1. That is, the 1D transformation matrix C is used for the vertical orthogonal transformation, and the 1D transformation matrix D is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D transformation matrix C is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D transformation matrix D is used for the horizontal inverse orthogonal transformation.
- when TransformIdx indicates 2, it means that the Vertical Transform index indicates 1 and the Horizontal Transform index indicates 0. That is, the 1D transformation matrix D is used for the vertical orthogonal transformation, and the 1D transformation matrix C is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D transformation matrix D is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D transformation matrix C is used for the horizontal inverse orthogonal transformation.
- when TransformIdx indicates 3, it means that the Vertical Transform index indicates 2 and the Horizontal Transform index indicates 2. That is, the 1D transformation matrix E is used for the vertical orthogonal transformation and the 1D transformation matrix E is used for the horizontal orthogonal transformation. Further, the transposed matrix of the 1D transformation matrix E is used for the vertical inverse orthogonal transformation, and the transposed matrix of the 1D transformation matrix E is used for the horizontal inverse orthogonal transformation.
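- The derivation of the vertical and horizontal transform indexes from TransformIdx can be sketched as a simple lookup; the mapping follows the text, while the function and table names are illustrative assumptions.

```python
# Mapping from TransformIdx to (Vertical Transform Idx, Horizontal Transform Idx)
# as described in the text for FIG. 18D.
TRANSFORM_IDX = {
    0: (0, 0),  # matrix C vertically, matrix C horizontally
    1: (0, 1),  # matrix C vertically, matrix D horizontally
    2: (1, 0),  # matrix D vertically, matrix C horizontally
    3: (2, 2),  # matrix E vertically, matrix E horizontally (DC prediction)
}
MATRIX_NAME = {0: "1D_Transform_Matrix_C",
               1: "1D_Transform_Matrix_D",
               2: "1D_Transform_Matrix_E"}

def select_matrices(transform_idx):
    """Return the vertical and horizontal 1D transform matrix names for a
    TransformIdx value (the inverse transforms use their transposes)."""
    v, h = TRANSFORM_IDX[transform_idx]
    return MATRIX_NAME[v], MATRIX_NAME[h]

assert select_matrices(3) == ("1D_Transform_Matrix_E", "1D_Transform_Matrix_E")
```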
- the block size for performing orthogonal transform may also be M×N.
- 1D transformation matrix set information 129 is assigned in consideration of the tendency of each intra prediction mode described above. That is, in the DC prediction mode, 2 is assigned to both the Vertical Transform index and the Horizontal Transform index. Therefore, in the DC prediction mode, orthogonal transformation or inverse orthogonal transformation in the vertical direction and the horizontal direction is performed using the above-described 1D transformation matrix E or its transposed matrix, and high transformation efficiency is achieved.
- for prediction modes other than the DC prediction mode, if the above tendency appears in the vertical direction of the prediction error, 0 is assigned to the Vertical Transform index, and if it appears in the horizontal direction, 0 is assigned to the Horizontal Transform index. On the other hand, 1 is assigned for each direction in which the tendency does not appear.
- for prediction modes other than the DC prediction mode, the vertical direction and the horizontal direction of each prediction mode are thus classified into two classes according to the presence or absence of the above-described tendency, and the 1D transformation matrix C or the 1D transformation matrix D is adaptively applied to each of them, achieving high transformation efficiency.
- like the first embodiment, the image coding apparatus according to the present embodiment uses the tendency of intra prediction that the prediction accuracy decreases as the distance from the reference pixel increases, and applies the orthogonal transform and inverse orthogonal transform while distinguishing the DC prediction from the other predictions.
- This image encoding apparatus classifies the vertical direction and the horizontal direction of each prediction mode into two classes according to the presence or absence of the above-described tendency, and adaptively applies the 1D transformation matrix C or the 1D transformation matrix D to each of the vertical and horizontal directions.
- This image encoding apparatus applies the 1D transformation matrix E to the DC prediction mode.
- the 1D transformation matrix C is generated in advance by designing a common transformation base so that the coefficient density becomes high (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 becomes small) when 1D orthogonal transformation is performed in the direction (vertical or horizontal) orthogonal to the line of the reference pixel group. The 1D transformation matrix D is generated by designing a general-purpose transformation matrix that does not have such a property, after excluding the DC prediction mode.
- the 1D transformation matrix E may be a matrix for DCT. Alternatively, the 1D transformation matrix E may be generated by designing a common transformation base in advance so that the coefficient density becomes high (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 becomes small) when 1D orthogonal transformation is performed in the vertical direction and the horizontal direction on the prediction error 119 of the DC prediction mode.
- the image encoding device differs from the image encoding devices according to the first and second embodiments described above in details of orthogonal transform and inverse orthogonal transform.
- the same reference numerals are given to the same parts as those in the first embodiment or the second embodiment in the present embodiment, and different parts will be mainly described.
- An image decoding apparatus corresponding to the image encoding apparatus according to the present embodiment will be described in a sixth embodiment.
- the image encoding apparatus according to the present embodiment includes the orthogonal transform unit 102 illustrated in FIG. 19 instead of the orthogonal transform units 102 of the foregoing embodiments. The orthogonal transform unit 102 in FIG. 19 includes a selection switch 1201, a vertical conversion unit 1202, a transposition unit 203, a selection switch 1204, and a horizontal conversion unit 1205.
- the vertical transform unit 1202 includes a 1D orthogonal transform unit F1206, a 1D orthogonal transform unit G1207, and a 1D orthogonal transform unit H1208.
- the horizontal transformation unit 1205 includes a 1D orthogonal transformation unit F1209, a 1D orthogonal transformation unit G1210, and a 1D orthogonal transformation unit H1211. Note that the order of the vertical conversion unit 1202 and the horizontal conversion unit 1205 is an example, and these may be reversed.
- the 1D orthogonal transform unit F1206 and the 1D orthogonal transform unit F1209 have a common function in that the input matrix is multiplied by the 1D transform matrix F.
- the 1D orthogonal transform unit G1207 and the 1D orthogonal transform unit G1210 have a common function in that the input matrix is multiplied by the 1D transform matrix G.
- the 1D orthogonal transform unit H1208 and the 1D orthogonal transform unit H1211 have a common function in that the input matrix is multiplied by the 1D transform matrix H.
- the prediction error 119 tends to increase in absolute value as the distance from the reference pixel increases. This tendency is the same regardless of the prediction direction. However, among the intra prediction modes there are prediction modes in which only the reference pixel group on the left adjacent line of the prediction target block, or only the reference pixel group on the upper adjacent line, is referred to (the reference pixel values are copied or interpolated), and there are also prediction modes in which the reference pixel groups on both the left adjacent line and the upper adjacent line of the prediction target block are referred to.
- orthogonal transformation and inverse orthogonal transformation are performed by distinguishing between a prediction mode that refers only to a reference pixel group on one line and a prediction mode that refers to a reference pixel group on two lines. Specifically, for a prediction mode that refers to a reference pixel group on two lines, a 1D conversion matrix H described later is used.
- for the remaining prediction modes, the 1D transform matrix F and the 1D transform matrix G are used adaptively according to the presence or absence of the above-described tendency, as in the first embodiment.
- the 1D transform matrix F can be generated by a design method similar to that of the 1D transform matrix A described above. That is, the 1D transform matrix F can be generated by excluding the prediction modes that refer to reference pixel groups on two lines (for example, mode 4, mode 5, and mode 6 in FIG. 7A) and then applying the design method of the 1D transform matrix A described above. Further, the 1D transform matrix G can be generated by the same design method as the 1D transform matrix B described above. Alternatively, the 1D transform matrix G may be a matrix for the DCT.
- the 1D transform matrix H is generated by designing in advance a common transform basis such that the coefficient density is high (that is, the ratio of non-zero coefficients in the quantized transform coefficients 121 is small) when 1D orthogonal transform is performed in both the vertical and horizontal directions on the prediction error 119 of a prediction mode that refers to reference pixel groups on two lines.
- the image encoding apparatus includes the inverse orthogonal transform unit 105 illustrated in FIG. 20 instead of the inverse orthogonal transform unit 105 illustrated in FIG. 3.
- the inverse orthogonal transform unit 105 in FIG. 20 includes a selection switch 1301, a vertical inverse transform unit 1302, a transposition unit 303, a selection switch 1304, and a horizontal inverse transform unit 1305.
- the vertical inverse transform unit 1302 includes a 1D inverse orthogonal transform unit F1306, a 1D inverse orthogonal transform unit G1307, and a 1D inverse orthogonal transform unit H1308.
- the horizontal inverse transform unit 1305 includes a 1D inverse orthogonal transform unit F1309, a 1D inverse orthogonal transform unit G1310, and a 1D inverse orthogonal transform unit H1311. Note that the order of the vertical inverse transform unit 1302 and the horizontal inverse transform unit 1305 is an example, and these may be reversed.
- the 1D inverse orthogonal transform unit F1306 and the 1D inverse orthogonal transform unit F1309 have a common function in that an input matrix is multiplied by a transposed matrix of the 1D transform matrix F.
- the 1D inverse orthogonal transform unit G1307 and the 1D inverse orthogonal transform unit G1310 have a common function in that the input matrix is multiplied by the transposed matrix of the 1D transform matrix G.
- the 1D inverse orthogonal transform unit H1308 and the 1D inverse orthogonal transform unit H1311 have a common function in that an input matrix is multiplied by a transposed matrix of the 1D transform matrix H.
- the 1D transform matrix set information 129 directly or indirectly indicates a vertical transform index for selecting the transform matrix used for vertical orthogonal transform and vertical inverse orthogonal transform, and a horizontal transform index for selecting the transform matrix used for horizontal orthogonal transform and horizontal inverse orthogonal transform.
- the 1D transformation matrix set information 129 can be expressed by a transformation index (TransformIdx) illustrated in FIG. 21D.
- a vertical transformation index (Vertical Transform Idx) and a horizontal transformation index (Horizontal Transform Idx) can be derived from the transformation index.
- when the vertical transform index is “0”, the aforementioned 1D transform matrix F (1D_Transform_Matrix_F) or its transposed matrix is selected for the vertical orthogonal transform or the vertical inverse orthogonal transform.
- when the vertical transform index is “1”, the aforementioned 1D transform matrix G (1D_Transform_Matrix_G) or its transposed matrix is selected for the vertical orthogonal transform or the vertical inverse orthogonal transform.
- when the vertical transform index is “2”, the aforementioned 1D transform matrix H (1D_Transform_Matrix_H) or its transposed matrix is selected for the vertical orthogonal transform or the vertical inverse orthogonal transform.
- when the horizontal transform index is “0”, the aforementioned 1D transform matrix F (1D_Transform_Matrix_F) or its transposed matrix is selected for the horizontal orthogonal transform or the horizontal inverse orthogonal transform.
- when the horizontal transform index is “1”, the aforementioned 1D transform matrix G (1D_Transform_Matrix_G) or its transposed matrix is selected for the horizontal orthogonal transform or the horizontal inverse orthogonal transform.
- when the horizontal transform index is “2”, the aforementioned 1D transform matrix H (1D_Transform_Matrix_H) or its transposed matrix is selected for the horizontal orthogonal transform or the horizontal inverse orthogonal transform.
- FIG. 21A illustrates an index (IntraNxNPredModeIndex) of each (intra) prediction mode, its name (Name of IntraNxNPredMode), and a corresponding vertical conversion index and horizontal conversion index.
- the size of the prediction target block can be expanded to “MxN” (that is, a rectangle other than a square).
- FIG. 21E illustrates an index of each prediction mode, a name thereof, and a corresponding conversion index obtained by integrating FIGS. 21A and 21D.
- the 1D transform matrix set unit 112 detects the prediction mode index from the prediction mode information included in the prediction information 126, and generates the corresponding 1D transform matrix set information 129. Note that the various tables shown in FIGS. 21A, 21B, 21C, 21D, and 21E are examples, and the 1D transform matrix set unit 112 may generate the 1D transform matrix set information 129 without using some or all of these tables.
- when TransformIdx indicates 0, the Vertical Transform Idx indicates 2 and the Horizontal Transform Idx indicates 2. That is, the 1D transform matrix H is used for the vertical orthogonal transform, and the 1D transform matrix H is used for the horizontal orthogonal transform. Further, the transposed matrix of the 1D transform matrix H is used for the vertical inverse orthogonal transform, and the transposed matrix of the 1D transform matrix H is used for the horizontal inverse orthogonal transform.
- when TransformIdx indicates 1, the Vertical Transform Idx indicates 0 and the Horizontal Transform Idx indicates 1. That is, the 1D transform matrix F is used for the vertical orthogonal transform, and the 1D transform matrix G is used for the horizontal orthogonal transform. Further, the transposed matrix of the 1D transform matrix F is used for the vertical inverse orthogonal transform, and the transposed matrix of the 1D transform matrix G is used for the horizontal inverse orthogonal transform.
- when TransformIdx indicates 2, the Vertical Transform Idx indicates 1 and the Horizontal Transform Idx indicates 0. That is, the 1D transform matrix G is used for the vertical orthogonal transform, and the 1D transform matrix F is used for the horizontal orthogonal transform. Further, the transposed matrix of the 1D transform matrix G is used for the vertical inverse orthogonal transform, and the transposed matrix of the 1D transform matrix F is used for the horizontal inverse orthogonal transform.
- when TransformIdx indicates 3, the Vertical Transform Idx indicates 1 and the Horizontal Transform Idx indicates 1. That is, the 1D transform matrix G is used for the vertical orthogonal transform, and the 1D transform matrix G is used for the horizontal orthogonal transform. Further, the transposed matrix of the 1D transform matrix G is used for the vertical inverse orthogonal transform, and the transposed matrix of the 1D transform matrix G is used for the horizontal inverse orthogonal transform.
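As a compact illustration of the mapping just described, a hypothetical lookup could look like the sketch below. The names and values are taken only from the text above (the full table of FIG. 21D is not reproduced here), and the function name is an assumption for the example.

```python
# Hypothetical sketch of the TransformIdx -> (Vertical Transform Idx,
# Horizontal Transform Idx) derivation described above. Index 0 selects
# the 1D transform matrix F, 1 selects G, and 2 selects H.
MATRIX_NAME = {0: "1D_Transform_Matrix_F",
               1: "1D_Transform_Matrix_G",
               2: "1D_Transform_Matrix_H"}

TRANSFORM_IDX_TABLE = {
    0: (2, 2),  # two-line reference modes: H both vertically and horizontally
    1: (0, 1),  # tendency in the vertical direction: F vertically, G horizontally
    2: (1, 0),  # tendency in the horizontal direction: G vertically, F horizontally
    3: (1, 1),  # no tendency in either direction: G in both directions
}

def select_matrices(transform_idx):
    """Return the (vertical, horizontal) transform matrix names for a TransformIdx."""
    v_idx, h_idx = TRANSFORM_IDX_TABLE[transform_idx]
    return MATRIX_NAME[v_idx], MATRIX_NAME[h_idx]
```

For the inverse transforms, the same indices apply, with the transposed matrix of each selected matrix being used.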
- the block size for performing orthogonal transform may also be M ⁇ N.
- the 1D transform matrix set information 129 is assigned in consideration of the tendency of each intra prediction mode described above. That is, 2 is assigned to both the Vertical Transform Idx and the Horizontal Transform Idx for prediction modes that refer to reference pixel groups on two lines. Therefore, for such prediction modes, the orthogonal transform or inverse orthogonal transform in the vertical and horizontal directions is performed using the 1D transform matrix H or its transposed matrix, achieving high transform efficiency.
- for prediction modes other than those that refer to reference pixel groups on two lines, 0 is assigned to the Vertical Transform Idx if the prediction error shows the above tendency in the vertical direction, and 0 is assigned to the Horizontal Transform Idx if it shows the tendency in the horizontal direction. On the other hand, 1 is assigned for each direction that does not show the tendency.
- in other words, for prediction modes other than those that refer to reference pixel groups on two lines, the vertical and horizontal directions of each prediction mode are classified into two classes according to the presence or absence of the above-mentioned tendency, and high transform efficiency is achieved by adaptively applying the 1D transform matrix F or the 1D transform matrix G in each of the vertical and horizontal directions.
- as described above, the image encoding apparatus according to the present embodiment exploits the tendency of intra prediction, as in the first embodiment, that the prediction accuracy decreases as the distance from the reference pixels increases, and applies orthogonal transform and inverse orthogonal transform while distinguishing prediction modes by the number of lines in the referenced pixel group.
- this image encoding apparatus classifies the vertical and horizontal directions into two classes, according to the presence or absence of the above-described tendency, for the prediction modes other than those that refer to reference pixel groups on two lines, and adaptively applies the 1D transform matrix F or the 1D transform matrix G in each of the vertical and horizontal directions.
- this image encoding apparatus applies the 1D transformation matrix H to each prediction mode that refers to reference pixel groups on two lines.
- the 1D transform matrix F is generated by designing in advance a common transform basis such that the coefficient density is high (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 is small) when 1D orthogonal transform is performed, for each prediction mode that refers only to a reference pixel group on one line, in the direction (vertical or horizontal) orthogonal to the line of the reference pixel group. On the other hand, the 1D transform matrix G is generated as a general-purpose transform matrix that does not have such a property.
- the 1D transform matrix H is generated by designing in advance a common transform basis such that the coefficient density is high (that is, the ratio of non-zero coefficients in the quantized transform coefficient 121 is small) when 1D orthogonal transform is performed in both the vertical and horizontal directions on the prediction error 119 of each prediction mode that refers to reference pixel groups on two lines. Therefore, the image encoding apparatus according to the present embodiment achieves higher transform efficiency than a case where a fixed orthogonal transform such as the DCT is uniformly applied to every prediction mode.
- as described above, two or three types of 1D transform matrices are prepared, and a 1D transform matrix is selected for the vertical transform (or vertical inverse transform) and the horizontal transform (or horizontal inverse transform) according to the prediction mode.
- the above-described two or three types of 1D transformation matrices are examples, and it is possible to prepare more transformation matrices to improve the encoding efficiency.
- since additional hardware is required as the number of types of transform matrices to be prepared increases, it is desirable to consider the balance between the disadvantages associated with the increase in the number of types of transform matrices and the coding efficiency.
- the fourth embodiment relates to an image decoding apparatus.
- the image coding apparatus corresponding to the image decoding apparatus according to the present embodiment is as described in the first embodiment. That is, the image decoding apparatus according to the present embodiment decodes encoded data generated by the image encoding apparatus according to the first embodiment, for example.
- the image decoding apparatus includes an input buffer 401, an entropy decoding unit 402, a coefficient order control unit 403, an inverse quantization unit 404, an inverse orthogonal transform unit 405, an addition unit 406, a reference image memory 407, an intra prediction unit 408, an inter prediction unit 409, a selection switch 410, a 1D transform matrix set unit 411, and an output buffer 412.
- the image decoding apparatus in FIG. 22 decodes the encoded data 414 stored in the input buffer 401, stores the decoded image 419 in the output buffer 412, and outputs it as the output image 425.
- the encoded data 414 is output from, for example, the image encoding device in FIG. 1 and the like, and is temporarily stored in the input buffer 401 via a storage system or a transmission system (not shown).
- the entropy decoding unit 402 decodes the encoded data 414 for each frame or field based on the syntax.
- the entropy decoding unit 402 sequentially entropy-decodes the code string of each syntax, and reproduces the encoding parameters of the encoding target block such as the prediction information 424 including the prediction mode information 421 and the quantized transform coefficient string 415.
- the encoding parameter is a parameter necessary for decoding such as prediction information 424, information on transform coefficients, information on quantization, and the like.
- the quantized transform coefficient sequence 415 is input to the coefficient order control unit 403.
- the prediction mode information 421 included in the prediction information 424 is also input to the coefficient order control unit 403.
- the prediction information 424 is input to the 1D conversion matrix setting unit 411 and the selection switch 410.
- the coefficient order control unit 403 converts the quantized transform coefficient sequence 415, which is a one-dimensional representation, into the quantized transform coefficient 416, which is a two-dimensional representation, and inputs the quantized transform coefficient 416 to the inverse quantization unit 404. Details of the coefficient order control unit 403 will be described later.
- the inverse quantization unit 404 performs inverse quantization on the quantized transform coefficient 416 from the coefficient order control unit 403 to obtain a restored transform coefficient 417. Specifically, the inverse quantization unit 404 performs inverse quantization according to the information related to quantization decoded by the entropy decoding unit 402. The inverse quantization unit 404 inputs the restored transform coefficient 417 to the inverse orthogonal transform unit 405.
- the inverse orthogonal transform unit 405 performs an inverse orthogonal transform corresponding to the orthogonal transform performed on the encoding side on the restored transform coefficient 417 from the inverse quantization unit 404 to obtain a restored prediction error 418.
- the inverse orthogonal transform unit 405 inputs the restoration prediction error 418 to the addition unit 406.
- since the inverse orthogonal transform unit 405 according to the present embodiment is substantially the same as or similar to the inverse orthogonal transform unit 105 of FIG. 3, detailed description thereof is omitted.
- the inverse orthogonal transform unit 405 according to the present embodiment uses the 1D transform matrix A and the 1D transform matrix B common to the inverse orthogonal transform unit 105 of FIG. 3.
- note that the restored transform coefficient 122, the 1D transform matrix set information 129, and the restored prediction error 123 in FIG. 3 correspond to the restored transform coefficient 417, the 1D transform matrix set information 422, and the restored prediction error signal 418 in the present embodiment, respectively.
- the addition unit 406 adds the restored prediction error 418 and the corresponding predicted image 423 to generate a decoded image 419.
- the decoded image 419 is temporarily stored in the output buffer 412 for the output image 425 and also stored in the reference image memory 407 for the reference image 420.
- the decoded image 419 stored in the reference image memory 407 is referred to by the intra prediction unit 408 and the inter prediction unit 409 as the reference image 420 in units of frames or fields as necessary.
- the decoded image 419 temporarily stored in the output buffer 412 is output according to the output timing managed by the decoding control unit 413.
- the decoding control unit 413 controls each element of the image decoding apparatus in FIG. 22. Specifically, the decoding control unit 413 performs various controls for the decoding process including the above-described operation.
- the 1D transform matrix set unit 411 generates 1D transform matrix set information 422 based on the prediction mode information included in the prediction information 424 from the entropy decoding unit 402, and inputs the 1D transform matrix set information 422 to the inverse orthogonal transform unit 405.
- the 1D transformation matrix set unit 411 according to the present embodiment is substantially the same as or similar to the 1D transformation matrix set unit 112 according to the first embodiment, and thus detailed description thereof is omitted. That is, the 1D conversion matrix set unit 411 according to the present embodiment generates 1D conversion matrix set information 422 using, for example, the tables of FIGS. 4A, 4B, 4C, 4D, and 4E. Note that the prediction information 126 and the 1D transformation matrix set information 129 in the first embodiment correspond to the prediction information 424 and the 1D transformation matrix set information 422 in the present embodiment, respectively.
- the image decoding apparatus in FIG. 22 uses the same or similar syntax as the syntax described with reference to FIGS. 11, 12, 13, and 14, and thus detailed description thereof is omitted.
- the coefficient order control unit 403 arranges the elements of the quantized transform coefficient sequence 415, which is a one-dimensional representation, according to a predetermined order (that is, an order corresponding to the encoding side), thereby converting it into the quantized transform coefficient 416, which is a two-dimensional representation.
- the coefficient order control unit 403 can perform the common 1D-2D conversion regardless of the prediction mode.
- for example, similarly to H.264, the coefficient order control unit 403 can use inverse zigzag scanning.
- Inverse zigzag scanning is 1D-2D conversion corresponding to the above-described zigzag scanning.
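The inverse zigzag scan can be sketched as follows. This is a 4×4 illustration of the standard zigzag pattern (the same pattern H.264 uses for 4×4 blocks); the function names are illustrative, not from the patent.

```python
def zigzag_positions(n):
    """Return the (row, col) positions of an n x n block in zigzag scan order.
    Positions are grouped by anti-diagonal (row + col); odd diagonals are
    traversed top-to-bottom, even diagonals bottom-to-top."""
    def key(rc):
        s = rc[0] + rc[1]
        return (s, rc[0] if s % 2 else -rc[0])
    return sorted(((r, c) for r in range(n) for c in range(n)), key=key)

def inverse_zigzag(seq, n):
    """1D-2D conversion: scatter a 1D coefficient sequence back into an
    n x n block according to the zigzag order."""
    block = [[0] * n for _ in range(n)]
    for value, (r, c) in zip(seq, zigzag_positions(n)):
        block[r][c] = value
    return block
```

The forward zigzag scan on the encoding side is the exact inverse: it reads the block at the same positions to produce the 1D sequence, so a scan followed by `inverse_zigzag` recovers the original block.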
- in addition, the coefficient order control unit 403 can perform individual 1D-2D conversion for each prediction mode.
- the coefficient order control unit 403 that performs such an operation is illustrated in FIG. 23A.
- the coefficient order control unit 403 includes a selection switch 1001 and individual 1D-2D conversion units 1002,..., 1010 for each of nine types of prediction modes.
- the selection switch 1001 guides the quantized transform coefficient sequence 415 to one of the 1D-2D conversion units (1002, ..., 1010) according to the prediction mode information included in the prediction information 424 (for example, the prediction mode index in FIG. 4A).
- for example, if the prediction mode index is 0, the selection switch 1001 guides the quantized transform coefficient sequence 415 to the 1D-2D conversion unit 1002.
- each prediction mode and each 1D-2D conversion unit have a one-to-one correspondence; the quantized transform coefficient sequence 415 is guided to the one 1D-2D conversion unit corresponding to the prediction mode, and is converted into the quantized transform coefficient 416.
- the coefficient order control unit 403 may also dynamically update the scan order of the 1D-2D conversion, in correspondence with the encoding side.
- the coefficient order control unit 403 that performs such an operation is illustrated in FIG. 23B.
- the coefficient order control unit 403 includes a selection switch 1001, individual 1D-2D conversion units 1002, ..., 1010 for each of the nine types of prediction modes, an occurrence frequency counting unit 1011, and a coefficient order update unit 1012.
- the selection switch 1001 is as described with reference to FIG. 23A.
- the individual 1D-2D conversion units 1002,..., 1010 for each of the nine types of prediction modes differ from FIG. 23A in that the scan order is updated by the coefficient order update unit 1012.
- the occurrence frequency counting unit 1011 creates a histogram of the number of occurrences of non-zero coefficients in each element of the quantized transform coefficient 416 for each prediction mode.
- the occurrence frequency counting unit 1011 inputs the created histogram 1013 to the coefficient order updating unit 1012.
- the coefficient order update unit 1012 updates the coefficient order based on the histogram 1013 at a predetermined timing.
- the timing is, for example, the timing when the decoding process of the coding tree unit is completed, the timing when the decoding process for one line in the coding tree unit is completed, or the like.
- the coefficient order update unit 1012 refers to the histogram 1013 and updates the coefficient order for a prediction mode that has an element whose counted number of non-zero-coefficient occurrences exceeds a threshold. For example, the coefficient order update unit 1012 updates a prediction mode that has an element for which the occurrence of a non-zero coefficient is counted 16 times or more. By providing a threshold for the number of occurrences, the coefficient order is updated globally, making it less likely to converge to a locally optimal solution.
- the coefficient order update unit 1012 sorts the elements in descending order of the occurrence frequency of the non-zero coefficient regarding the prediction mode to be updated. Sorting can be realized by existing algorithms such as bubble sort and quick sort. Then, the coefficient order update unit 1012 inputs coefficient order update information 1014 indicating the order of the sorted elements to the 1D-2D conversion unit corresponding to the prediction mode to be updated.
- when the coefficient order update information 1014 is input, the 1D-2D conversion unit performs 1D-2D conversion according to the updated scan order. Note that when the scan order is dynamically updated, the initial scan order of each 1D-2D conversion unit needs to be determined in advance so as to correspond to the encoding side.
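The update mechanism described above can be sketched as follows. The names and the tie-breaking choice (a stable sort keeps the previous order among positions with equal counts) are illustrative assumptions; the threshold of 16 is taken from the example in the text.

```python
def update_scan_order(scan_order, histogram, threshold=16):
    """Re-sort a scan order by non-zero-coefficient occurrence counts.

    scan_order -- list of coefficient positions in the current scan order
    histogram  -- dict mapping position -> counted non-zero occurrences
                  (the histogram 1013 for one prediction mode)
    Only updates when some position has reached the threshold, so the order
    is updated globally rather than chasing every local fluctuation.
    """
    if max(histogram.values(), default=0) < threshold:
        return scan_order  # no element reached the threshold: keep current order
    # Descending occurrence count; Python's sort is stable, so positions with
    # equal counts keep their previous relative order.
    return sorted(scan_order, key=lambda pos: -histogram.get(pos, 0))
```

Because the same counting and re-sorting are performed on the encoding side at the same timing, both sides stay synchronized without transmitting the updated order.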
- in the example of H.264, there are nine types of prediction modes. However, even when the prediction modes are expanded to 17 types, 33 types, and so on, individual 1D-2D conversion for each prediction mode can be performed by adding a 1D-2D conversion unit corresponding to each expanded prediction mode.
- the image decoding apparatus has the same or similar inverse orthogonal transform unit as the image encoding apparatus according to the first embodiment described above. Therefore, according to the image decoding apparatus according to the present embodiment, the same or similar effects as those of the image encoding apparatus according to the first embodiment described above can be obtained.
- the image decoding apparatus according to the fifth embodiment differs from the image decoding apparatus according to the above-described fourth embodiment in the details of inverse orthogonal transform.
- the same parts as those in the fourth embodiment are denoted by the same reference numerals, and different parts will be mainly described.
- the image coding apparatus corresponding to the image decoding apparatus according to the present embodiment is as described in the second embodiment.
- since the inverse orthogonal transform unit 405 according to the present embodiment is substantially the same as or similar to the inverse orthogonal transform unit 105 in FIG. 17, detailed description thereof is omitted.
- the inverse orthogonal transform unit 405 according to the present embodiment uses the 1D transform matrix C, the 1D transform matrix D, and the 1D transform matrix E common to the inverse orthogonal transform unit 105 in FIG. 17.
- the restored transform coefficient 122, the 1D transform matrix set information 129, and the restored prediction error 123 in FIG. 17 correspond to the restored transform coefficient 417, the 1D transform matrix set information 422, and the restored prediction error signal 418 in the present embodiment, respectively.
- since the 1D transform matrix set unit 411 according to the present embodiment is substantially the same as or similar to the 1D transform matrix set unit 112 according to the second embodiment, detailed description thereof is omitted. That is, the 1D transform matrix set unit 411 according to the present embodiment generates the 1D transform matrix set information 422 using, for example, the tables of FIGS. 18A, 18B, 18C, 18D, and 18E. Note that the prediction information 126 and the 1D transform matrix set information 129 in the second embodiment correspond to the prediction information 424 and the 1D transform matrix set information 422 in the present embodiment, respectively.
- the image decoding apparatus has the same or similar inverse orthogonal transform unit as the image encoding apparatus according to the second embodiment described above. Therefore, according to the image decoding apparatus according to the present embodiment, the same or similar effects as those of the image encoding apparatus according to the second embodiment described above can be obtained.
- the image decoding device differs from the image decoding devices according to the fourth embodiment and the fifth embodiment described above in details of inverse orthogonal transform.
- the same parts as those in the fourth embodiment or the fifth embodiment are denoted by the same reference numerals in the present embodiment, and different parts will be mainly described.
- the image encoding device corresponding to the image decoding device according to the present embodiment is as described in the third embodiment.
- since the inverse orthogonal transform unit 405 according to the present embodiment is substantially the same as or similar to the inverse orthogonal transform unit 105 of FIG. 20, detailed description thereof is omitted.
- the inverse orthogonal transform unit 405 according to the present embodiment uses the 1D transform matrix F, the 1D transform matrix G, and the 1D transform matrix H common to the inverse orthogonal transform unit 105 in FIG. 20.
- the restored transform coefficient 122, the 1D transform matrix set information 129, and the restored prediction error 123 in FIG. 20 correspond to the restored transform coefficient 417, the 1D transform matrix set information 422, and the restored prediction error signal 418 in the present embodiment, respectively.
- since the 1D transform matrix set unit 411 according to the present embodiment is substantially the same as or similar to the 1D transform matrix set unit 112 according to the third embodiment, detailed description thereof is omitted. That is, the 1D transform matrix set unit 411 according to the present embodiment generates the 1D transform matrix set information 422 using, for example, the tables of FIGS. 21A, 21B, 21C, 21D, and 21E. Note that the prediction information 126 and the 1D transform matrix set information 129 in the third embodiment correspond to the prediction information 424 and the 1D transform matrix set information 422 in the present embodiment, respectively.
- the image decoding apparatus has the same or similar inverse orthogonal transform unit as that of the image encoding apparatus according to the third embodiment described above. Therefore, according to the image decoding apparatus according to the present embodiment, the same or similar effects as those of the image encoding apparatus according to the third embodiment described above can be obtained.
- two or three types of 1D conversion matrices are prepared, and a 1D conversion matrix for vertical reverse conversion and horizontal reverse conversion is selected according to the prediction mode.
- the above-described two or three types of 1D transformation matrices are examples, and it is possible to prepare more transformation matrices to improve the encoding efficiency.
- since additional hardware is required as the number of types of transform matrices to be prepared increases, it is desirable to consider the balance between the disadvantages associated with the increase in the number of types of transform matrices and the coding efficiency.
- encoding and decoding may be performed sequentially from the lower right to the upper left, or encoding and decoding may be performed so as to draw a spiral from the center of the screen toward the screen end.
- encoding and decoding may be performed sequentially from the upper right to the lower left, or encoding and decoding may be performed so as to draw a spiral from the screen end toward the center of the screen.
- the prediction target blocks do not have to have a uniform block shape.
- the prediction target block size may be a 16 ⁇ 8 pixel block, an 8 ⁇ 16 pixel block, an 8 ⁇ 4 pixel block, a 4 ⁇ 8 pixel block, or the like. Also, it is not necessary to unify all the block sizes within one coding tree unit, and a plurality of different block sizes may be mixed.
- the amount of codes for encoding or decoding the division information increases as the number of divisions increases. Therefore, it is desirable to select the block size in consideration of the balance between the code amount of the division information and the quality of the locally decoded image or the decoded image.
- in the above embodiments, the color signal components have been described without distinguishing between the luminance signal and the chrominance signal. However, when the prediction process differs between the luminance signal and the chrominance signal, the same or different prediction methods may be used. If different prediction methods are used for the luminance signal and the chrominance signal, the prediction method selected for the chrominance signal can be encoded or decoded in the same manner as for the luminance signal.
- similarly, when the orthogonal transform process differs between the luminance signal and the chrominance signal, the same or different orthogonal transform methods may be used. If different orthogonal transform methods are used for the luminance signal and the chrominance signal, the orthogonal transform method selected for the chrominance signal can be encoded or decoded in the same manner as for the luminance signal.
- each embodiment realizes highly efficient orthogonal transformation and inverse orthogonal transformation while alleviating the difficulty in hardware implementation and software implementation. Therefore, according to each embodiment, the encoding efficiency is improved, and the subjective image quality is also improved.
- the storage medium may be any computer-readable storage medium, such as a magnetic disk, an optical disc (CD-ROM, CD-R, DVD, etc.), a magneto-optical disc (MO, etc.), or semiconductor memory.
- the storage format may take any form.
- the program realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to a computer (client) via the network.
Abstract
Description
(First Embodiment)
The first embodiment relates to an image encoding apparatus. An image decoding apparatus corresponding to the image encoding apparatus according to this embodiment is described in the fourth embodiment. This image encoding apparatus can be realized in hardware such as an LSI (Large-Scale Integration) chip, a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array). It can also be realized by causing a computer to execute an image encoding program.
The subtractor 101 subtracts the corresponding predicted image 127 from the encoding target block of the input image 118 to obtain a prediction error 119, and inputs the prediction error 119 to the orthogonal transform unit 102.
In Equation (1), OH denotes the code amount of the prediction information 126 (e.g., motion vector information and prediction block size information), and SAD denotes the sum of absolute differences between the prediction target block and the predicted image 127 (i.e., the cumulative sum of the absolute values of the prediction error 119). λ denotes a Lagrange multiplier determined on the basis of the value of the quantization information (quantization parameter), and K denotes the encoding cost. When Equation (1) is used, the prediction mode that minimizes the encoding cost K is judged to be the optimal prediction mode in terms of generated code amount and prediction error. As variations of Equation (1), the encoding cost may be estimated from OH alone or SAD alone, or from a value obtained by applying a Hadamard transform to the SAD, or an approximation thereof.
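The low-complexity mode decision described here can be sketched as follows. Since Equation (1) itself is not reproduced in this excerpt, the common form K = SAD + λ·OH is assumed; the function and variable names are illustrative.

```python
import numpy as np

def encoding_cost_k(target_block, predicted_block, overhead_bits, lam):
    """Low-complexity mode cost: K = SAD + lambda * OH (assumed form of Eq. (1))."""
    # SAD: cumulative sum of the absolute prediction-error values
    sad = np.abs(target_block.astype(np.int64) - predicted_block.astype(np.int64)).sum()
    return sad + lam * overhead_bits

def best_mode(target_block, candidates, lam):
    """Pick the prediction mode minimizing K.
    candidates: list of (mode_index, predicted_block, overhead_bits)."""
    return min(candidates,
               key=lambda c: encoding_cost_k(target_block, c[1], c[2], lam))[0]
```

As the text notes, the same skeleton works when the cost is estimated from OH alone, SAD alone, or a Hadamard-transformed SAD; only the cost function changes.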
In Equation (2), D denotes the sum of squared errors (i.e., encoding distortion) between the prediction target block and the locally decoded image, R denotes the code amount estimated by provisional encoding for the prediction error between the prediction target block and the predicted image 127 of the prediction mode, and J denotes the encoding cost. Deriving the encoding cost J of Equation (2) requires provisional encoding and local decoding for every prediction mode, so the circuit scale or the amount of computation increases. On the other hand, since the encoding cost J is derived from more accurate encoding distortion and code amount, the optimal prediction mode can be judged with high accuracy and high encoding efficiency is easily maintained. As variations of Equation (2), the encoding cost may be estimated from R alone or D alone, or using approximations of R or D. The encoding control unit 116 may also narrow down in advance the number of prediction mode candidates judged with Equation (1) or (2), based on information available beforehand for the prediction target block (prediction modes of surrounding pixel blocks, results of image analysis, and so on).
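The high-accuracy cost of Equation (2) can be sketched in the same spirit. The common rate-distortion form J = D + λ·R is assumed here (the equation is not reproduced in this excerpt); obtaining the locally decoded block and the rate R would come from whatever provisional encoding/local decoding pipeline the encoder implements.

```python
import numpy as np

def encoding_cost_j(target_block, local_decoded_block, rate_bits, lam):
    """Rate-distortion mode cost: J = D + lambda * R (assumed form of Eq. (2)).
    D is the sum of squared errors between the target block and its locally
    decoded reconstruction; R is the code amount estimated by provisional
    encoding."""
    diff = target_block.astype(np.int64) - local_decoded_block.astype(np.int64)
    d = int((diff * diff).sum())  # squared-error distortion
    return d + lam * rate_bits
```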
The 1D transform matrix setting unit 112 generates 1D transform matrix set information 129 based on the prediction mode information included in the prediction information 126 from the prediction selection unit 110, and inputs it to the orthogonal transform unit 102 and the inverse orthogonal transform unit 105. Details of the 1D transform matrix set information 129 are described later.
The orthogonal transform unit 102 has a selection switch 201, a vertical transform unit 202, a transposition unit 203, a selection switch 204, and a horizontal transform unit 205. The vertical transform unit 202 includes a 1D orthogonal transform unit A206 and a 1D orthogonal transform unit B207. The horizontal transform unit 205 includes a 1D orthogonal transform unit A208 and a 1D orthogonal transform unit B209. The order of the vertical transform unit 202 and the horizontal transform unit 205 is an example; it may be reversed.
In Equation (3), X denotes the matrix (N×N) of the prediction error 119, V collectively denotes 1D transform matrix A and 1D transform matrix B (each N×N), and Y denotes the output matrix (N×N) of the 1D orthogonal transform unit A206 or the 1D orthogonal transform unit B207. Specifically, the transform matrix V is an N×N transform matrix whose rows are transform bases designed to remove the vertical correlation of the matrix X, stacked vertically. As described later, however, 1D transform matrix A and 1D transform matrix B are designed by different methods and have different properties. It is also possible to use versions of 1D transform matrix A and 1D transform matrix B in which each designed transform basis is scaled by a scalar and converted to integers.
In Equation (4), H collectively denotes 1D transform matrix A and 1D transform matrix B (each N×N), and Z denotes the output matrix (N×N) of the 1D orthogonal transform unit A208 or the 1D orthogonal transform unit B209, which corresponds to the transform coefficients 120. Specifically, the transform matrix H is an N×N transform matrix whose rows are transform bases designed to remove the horizontal correlation of the matrix Y, stacked vertically. As noted above, 1D transform matrix A and 1D transform matrix B are designed by different methods and have different properties, and versions in which each designed transform basis is scaled by a scalar and converted to integers may also be used.
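Equations (3) and (4) together amount to a separable 2-D transform. A minimal NumPy sketch follows, assuming the common overall matrix form Z = V·X·Hᵀ (the transposition stage of the block diagram is folded into the matrix algebra); the orthonormal DCT matrix is used only as an illustrative stand-in basis, not as the patent's designed matrices A/B.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: rows are 1-D transform bases (stand-in basis)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def forward_2d_transform(x, v, h):
    """Separable 2-D forward transform of an N x N prediction-error block x:
    vertical 1-D transform by V (Eq. (3)), then horizontal 1-D transform by H
    (Eq. (4)). Assumed overall form: Z = V @ X @ H.T."""
    y = v @ x        # remove vertical correlation
    return y @ h.T   # remove horizontal correlation
```

With a DC (constant) block and a DCT basis, all energy concentrates in the (0,0) coefficient, which is the "coefficient concentration" behavior the designed matrices aim to maximize per prediction mode.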
The inverse orthogonal transform unit 105 has a selection switch 301, a vertical inverse transform unit 302, a transposition unit 303, a selection switch 304, and a horizontal inverse transform unit 305. The vertical inverse transform unit 302 includes a 1D inverse orthogonal transform unit A306 and a 1D inverse orthogonal transform unit B307. The horizontal inverse transform unit 305 includes a 1D inverse orthogonal transform unit A308 and a 1D inverse orthogonal transform unit B309. The order of the vertical inverse transform unit 302 and the horizontal inverse transform unit 305 is an example; it may be reversed.
In Equation (5), Z' denotes the matrix (N×N) of the restored transform coefficients 122, V^T collectively denotes the transposed matrices of 1D transform matrix A and 1D transform matrix B (each N×N), and Y' denotes the output matrix (N×N) of the 1D inverse orthogonal transform unit A306 or the 1D inverse orthogonal transform unit B307.
In Equation (6), H^T collectively denotes the transposed matrices of 1D transform matrix A and 1D transform matrix B (each N×N), and X' denotes the output matrix (N×N) of the 1D inverse orthogonal transform unit A308 or the 1D inverse orthogonal transform unit B309, which corresponds to the restored prediction error 123.
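Equations (5) and (6) are the matching inverse: with orthogonal (or orthonormal) V and H, applying the transposes recovers the block. A sketch under the same assumed overall form, X' = Vᵀ·Z'·H:

```python
import numpy as np

def inverse_2d_transform(z, v, h):
    """Separable 2-D inverse transform (Eqs. (5) and (6)):
    vertical inverse by V^T, then horizontal inverse by H^T.
    Assumed overall form: X' = V.T @ Z' @ H, matching Z = V @ X @ H.T."""
    y = v.T @ z   # vertical inverse transform (Eq. (5))
    return y @ h  # horizontal inverse transform (Eq. (6))
```

For any orthogonal pair (V, H) the forward/inverse pair is a lossless round trip; in the actual codec the loss comes from the quantization stage between them, not from the transforms.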
The 1D transform matrix set information 129 directly or indirectly indicates a vertical transform index for selecting the transform matrix used for the vertical orthogonal transform and the vertical inverse orthogonal transform, and a horizontal transform index for selecting the transform matrix used for the horizontal orthogonal transform and the horizontal inverse orthogonal transform. For example, the 1D transform matrix set information 129 can be expressed by the transform index (TransformIdx) shown in Figure 4D. By referring to the table of Figure 4D, the vertical transform index (Vertical Transform Idx) and the horizontal transform index (Horizontal Transform Idx) can be derived from the transform index.
Figure 4E illustrates a table that merges Figures 4A and 4D: the index and name of each prediction mode together with the corresponding transform index.
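The indirection just described can be sketched as a pair of table lookups: prediction mode to TransformIdx, then TransformIdx to the (vertical, horizontal) transform indices. The concrete table values below are illustrative placeholders, not the actual contents of Figures 4A and 4D (which are not reproduced in this excerpt).

```python
# Illustrative placeholder tables (NOT the actual Figure 4A/4D contents):
# prediction mode index -> TransformIdx
MODE_TO_TRANSFORM_IDX = {0: 2, 1: 1, 2: 0, 3: 3, 4: 3, 5: 3, 6: 3, 7: 3, 8: 3}

# TransformIdx -> (Vertical Transform Idx, Horizontal Transform Idx), where
# 0 selects 1D transform matrix A and 1 selects 1D transform matrix B
TRANSFORM_IDX_TABLE = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def transform_indices(prediction_mode):
    """Derive (vertical_idx, horizontal_idx) from a prediction mode index."""
    return TRANSFORM_IDX_TABLE[MODE_TO_TRANSFORM_IDX[prediction_mode]]
```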
The coefficient order control unit 113 converts the quantized transform coefficients 121, a two-dimensional representation, into the quantized transform coefficient sequence 117, a one-dimensional representation, by arranging the elements in a predetermined order. As one example, the coefficient order control unit 113 can perform a common 2D-1D conversion regardless of the prediction mode. Specifically, the coefficient order control unit 113 can use a zigzag scan as in H.264. The zigzag scan arranges the elements of the quantized transform coefficients 121 in the order shown in Figure 8A, producing the quantized transform coefficient sequence 117 shown in Figure 8B. In Figures 8A and 8B, (i,j) denotes the coordinates (position information) of each element in the quantized transform coefficient matrix 121. Figure 8C shows the 2D-1D conversion using the zigzag scan (for a 4×4 pixel block): the index (idx) indicating the coefficient order (scan order) of the zigzag-scanned quantized transform coefficient sequence 117, and the corresponding element (cij) of the quantized transform coefficients 121, where cij denotes the element at coordinates (i,j) of the quantized transform coefficient matrix 121.
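The H.264-style zigzag 2D-1D conversion described above can be sketched as follows; the scan order is generated by walking the anti-diagonals with alternating direction, which reproduces the familiar zigzag for a 4×4 block.

```python
import numpy as np

def zigzag_order(n):
    """Scan coordinates (i, j) in zigzag order for an n x n block (H.264/JPEG style)."""
    order = []
    for d in range(2 * n - 1):                         # anti-diagonals i + j = d
        i_range = range(max(0, d - n + 1), min(d, n - 1) + 1)
        if d % 2 == 0:                                 # even diagonals run bottom-left -> top-right
            i_range = reversed(i_range)
        order.extend((i, d - i) for i in i_range)
    return order

def to_1d(coeffs):
    """2D-1D conversion: arrange the quantized transform coefficients as a sequence."""
    return [coeffs[i, j] for i, j in zigzag_order(coeffs.shape[0])]
```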
The syntax indicates the structure of the encoded data (e.g., the encoded data 130 of Figure 1) produced when the image encoding apparatus encodes moving image data. When decoding this encoded data, the image decoding apparatus performs syntax interpretation by referring to the same syntax structure. Figure 11 illustrates the syntax 700 used by the image encoding apparatus of Figure 1.
Since Equations (3) through (6) express multiplications by fixed matrices, when the orthogonal transform unit and the inverse orthogonal transform unit are implemented in hardware, they are expected to be built from hardwired logic rather than multipliers.
The image encoding apparatus according to the second embodiment differs from that of the first embodiment in the details of the orthogonal transform and the inverse orthogonal transform. In the following description, parts identical to the first embodiment are denoted by the same reference numerals, and the description focuses on the differences. An image decoding apparatus corresponding to the image encoding apparatus according to this embodiment is described in the fifth embodiment.
As described above, the prediction error 119 tends to have a larger absolute value as the distance from the reference pixels increases. This tendency is similar regardless of the prediction direction, but the prediction error 119 of the DC prediction mode cannot be said to exhibit it in either the vertical or the horizontal direction. In this embodiment, a 1D transform matrix E, described later, is used for the DC prediction mode. For prediction modes other than DC prediction, 1D transform matrix C and 1D transform matrix D are used adaptively according to the presence or absence of the above tendency, as in the first embodiment.
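The selection rule of this embodiment can be sketched as a small decision function. The inputs below (which directions of a mode exhibit the distance-dependent error growth, and the DC mode index) are hypothetical stand-ins for the actual Figure 18A/18D tables, which are not reproduced in this excerpt.

```python
DC_MODE = 2  # hypothetical index of the DC prediction mode

def select_1d_matrices(mode, vertical_trend, horizontal_trend):
    """Return (vertical, horizontal) 1-D transform matrix labels for a mode.
    vertical_trend / horizontal_trend: whether the prediction error grows with
    distance from the reference pixels in that direction (assumed inputs)."""
    if mode == DC_MODE:
        # DC prediction shows the trend in neither direction: matrix E both ways
        return ("E", "E")
    v = "C" if vertical_trend else "D"   # matrix C where the trend appears
    h = "C" if horizontal_trend else "D"
    return (v, h)
```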
The 1D transform matrix set information 129 directly or indirectly indicates a vertical transform index for selecting the transform matrix used for the vertical orthogonal transform and the vertical inverse orthogonal transform, and a horizontal transform index for selecting the transform matrix used for the horizontal orthogonal transform and the horizontal inverse orthogonal transform. For example, the 1D transform matrix set information 129 can be expressed by the transform index (TransformIdx) shown in Figure 18D. By referring to the table of Figure 18D, the vertical transform index (Vertical Transform Idx) and the horizontal transform index (Horizontal Transform Idx) can be derived from the transform index.
Figure 18E illustrates a table that merges Figures 18A and 18D: the index and name of each prediction mode together with the corresponding transform index.
The image encoding apparatus according to the third embodiment differs from those of the first and second embodiments in the details of the orthogonal transform and the inverse orthogonal transform. In the following description, parts identical to the first or second embodiment are denoted by the same reference numerals, and the description focuses on the differences. An image decoding apparatus corresponding to the image encoding apparatus according to this embodiment is described in the sixth embodiment.
As described above, the prediction error 119 tends to have a larger absolute value as the distance from the reference pixels increases, and this tendency is similar regardless of the prediction direction. However, among the intra prediction modes, some refer only to the reference pixel group on the line adjacent to the left of the prediction target block, or only to the reference pixel group on the line adjacent above it (copying the reference pixel values or interpolating from them), while others refer to the reference pixel groups on both the left-adjacent and upper-adjacent lines. The tendency manifests differently between prediction modes that refer to a reference pixel group on one line and those that refer to reference pixel groups on two lines. In this embodiment, therefore, the orthogonal transform and the inverse orthogonal transform distinguish between prediction modes that refer only to a reference pixel group on one line and those that refer to reference pixel groups on two lines. Specifically, a 1D transform matrix H, described later, is used for prediction modes that refer to reference pixel groups on two lines. For prediction modes that refer only to a reference pixel group on one line, 1D transform matrix F and 1D transform matrix G are used adaptively according to the presence or absence of the above tendency, as in the first embodiment.
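The one-line versus two-line distinction drawn here can likewise be sketched as a decision function. The per-direction assignment of F and G below is an illustrative assumption (the trend appears in the direction orthogonal to the reference line), not the actual Figure 21A/21D tables.

```python
def select_1d_matrices(uses_left_line, uses_upper_line):
    """Third-embodiment sketch: modes referring to reference pixel groups on
    both the left and upper neighboring lines use matrix H in both directions;
    single-line modes use F in the direction orthogonal to the reference line
    (where the error-growth trend appears) and G in the other direction.
    The F/G/H assignment per direction is an illustrative assumption."""
    if uses_left_line and uses_upper_line:
        return ("H", "H")   # two-line prediction modes
    if uses_upper_line:
        return ("F", "G")   # vertical-type mode: trend in the vertical direction
    return ("G", "F")       # horizontal-type mode: trend in the horizontal direction
```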
The 1D transform matrix set information 129 directly or indirectly indicates a vertical transform index for selecting the transform matrix used for the vertical orthogonal transform and the vertical inverse orthogonal transform, and a horizontal transform index for selecting the transform matrix used for the horizontal orthogonal transform and the horizontal inverse orthogonal transform. For example, the 1D transform matrix set information 129 can be expressed by the transform index (TransformIdx) shown in Figure 21D. By referring to the table of Figure 21D, the vertical transform index (Vertical Transform Idx) and the horizontal transform index (Horizontal Transform Idx) can be derived from the transform index.
Figure 21E illustrates a table that merges Figures 21A and 21D: the index and name of each prediction mode together with the corresponding transform index.
The fourth embodiment relates to an image decoding apparatus. The image encoding apparatus corresponding to the image decoding apparatus according to this embodiment is as described in the first embodiment. That is, the image decoding apparatus according to this embodiment decodes, for example, encoded data generated by the image encoding apparatus according to the first embodiment.
The image decoding apparatus of Figure 22 uses syntax identical or similar to that described with reference to Figures 11, 12, 13, and 14, so its detailed description is omitted.
The coefficient order control unit 403 converts the quantized transform coefficient sequence 415, a one-dimensional representation, into the quantized transform coefficients 416, a two-dimensional representation, by arranging the elements in a predetermined order (i.e., the order corresponding to the encoding side). As one example, if a common 2D-1D conversion regardless of prediction mode was performed on the encoding side, the coefficient order control unit 403 can perform a common 1D-2D conversion regardless of prediction mode. Specifically, the coefficient order control unit 403 can use an inverse zigzag scan as in H.264. The inverse zigzag scan is the 1D-2D conversion corresponding to the zigzag scan described above.
As another example, if an individual 2D-1D conversion per prediction mode was performed on the encoding side, the coefficient order control unit 403 can likewise perform an individual 1D-2D conversion per prediction mode. A coefficient order control unit 403 that operates in this way is illustrated in Figure 23A. This coefficient order control unit 403 includes a selection switch 1001 and individual 1D-2D transform units 1002, ..., 1010 for each of the nine prediction modes. The selection switch 1001 directs the quantized transform coefficient sequence 415 to the 1D-2D transform unit corresponding to the prediction mode (one of 1002, ..., 1010) according to the prediction mode information included in the prediction information 424 (e.g., the prediction mode index of Figure 4A). For example, if the prediction mode index is 0, the selection switch 1001 directs the quantized transform coefficient sequence 415 to the 1D-2D transform unit 1002. In Figure 23A, the prediction modes and the 1D-2D transform units correspond one to one: the quantized transform coefficient sequence 415 is directed to the single 1D-2D transform unit corresponding to the prediction mode and converted into the quantized transform coefficients 416.
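The decoder-side 1D-2D conversion is the exact mirror of the encoder's scan: the same scan order is generated and the coefficient sequence is scattered back into a 2-D block. A sketch for the zigzag case:

```python
import numpy as np

def zigzag_order(n):
    """Same zigzag coordinate order as on the encoding side."""
    order = []
    for d in range(2 * n - 1):                         # anti-diagonals i + j = d
        i_range = range(max(0, d - n + 1), min(d, n - 1) + 1)
        if d % 2 == 0:
            i_range = reversed(i_range)
        order.extend((i, d - i) for i in i_range)
    return order

def to_2d(sequence, n):
    """Inverse (1D-2D) zigzag scan: scatter the coefficient sequence back into
    an n x n block, undoing the encoder-side 2D-1D conversion."""
    block = np.empty((n, n), dtype=np.asarray(sequence).dtype)
    for value, (i, j) in zip(sequence, zigzag_order(n)):
        block[i, j] = value
    return block
```

A per-prediction-mode variant would simply substitute a mode-specific coordinate order for `zigzag_order`, keeping the scatter step unchanged.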
The image decoding apparatus according to the fifth embodiment differs from that of the fourth embodiment in the details of the inverse orthogonal transform. In the following description, parts identical to the fourth embodiment are denoted by the same reference numerals, and the description focuses on the differences. The image encoding apparatus corresponding to the image decoding apparatus according to this embodiment is as described in the second embodiment.
The image decoding apparatus according to the sixth embodiment differs from those of the fourth and fifth embodiments in the details of the inverse orthogonal transform. In the following description, parts identical to the fourth or fifth embodiment are denoted by the same reference numerals, and the description focuses on the differences. The image encoding apparatus corresponding to the image decoding apparatus according to this embodiment is as described in the third embodiment.
The first through sixth embodiments describe examples in which a frame is divided into rectangular blocks such as 16×16 pixels and encoded/decoded in order from the upper-left block of the screen toward the lower right (see Figure 6A). However, the encoding order and decoding order are not limited to this example. For example, encoding and decoding may proceed from the lower right toward the upper left, or may spiral outward from the center of the screen toward its edges. Furthermore, encoding and decoding may proceed from the upper right toward the lower left, or may spiral inward from the screen edges toward the center.
102 ... orthogonal transform unit
103 ... quantization unit
104 ... inverse quantization unit
105 ... inverse orthogonal transform unit
106 ... adder
107 ... reference image memory
108 ... intra prediction unit
109 ... inter prediction unit
110 ... prediction selection unit
111 ... selection switch
112 ... 1D transform matrix setting unit
113 ... coefficient order control unit
114 ... entropy encoding unit
115 ... output buffer
116 ... encoding control unit
117 ... quantized transform coefficient sequence
118 ... input image
119 ... prediction error
120 ... transform coefficients
121 ... quantized transform coefficients
122 ... restored transform coefficients
123 ... restored prediction error
124 ... locally decoded image
125 ... reference image
126 ... prediction information
127 ... predicted image
129 ... 1D transform matrix set information
130 ... encoded data
201, 204, 801, 804, 1101, 1104, 1201, 1204 ... selection switches
202, 802, 1102, 1202 ... vertical transform units
206, ..., 209, 806, ..., 811, 1206, ..., 1211 ... 1D orthogonal transform units
203, 1103 ... transposition units
205, 805, 1105, 1205 ... horizontal transform units
301, 304, 901, 904, 1301, 1304 ... selection switches
302, 902, 1302 ... vertical inverse transform units
303 ... transposition unit
305, 905, 1305 ... horizontal inverse transform units
306, ..., 309, 906, ..., 911, 1306, ..., 1311 ... 1D inverse orthogonal transform units
401 ... input buffer
402 ... entropy decoding unit
403 ... coefficient order control unit
404 ... inverse quantization unit
405 ... inverse orthogonal transform unit
406 ... adder
407 ... reference image memory
408 ... intra prediction unit
409 ... inter prediction unit
410 ... selection switch
411 ... 1D transform matrix setting unit
412 ... output buffer
413 ... decoding control unit
414 ... encoded data
415 ... quantized transform coefficient sequence
416 ... quantized transform coefficients
417 ... restored transform coefficients
418 ... restored prediction error
419 ... decoded image
420 ... reference image
421 ... prediction mode information
422 ... 1D transform matrix set information
423 ... predicted image
424 ... prediction information
425 ... output image
501 ... selection switch
502, ..., 510 ... 2D-1D transform units
511 ... occurrence frequency counting unit
512 ... coefficient order updating unit
513 ... histogram
514 ... coefficient order update information
700 ... syntax
701 ... high-level syntax
702 ... slice-level syntax
703 ... coding-tree-level syntax
704 ... sequence parameter set syntax
705 ... picture parameter set syntax
706 ... slice header syntax
707 ... slice data syntax
708 ... coding tree unit syntax
709 ... prediction unit syntax
710 ... transform unit syntax
1001 ... selection switch
1002, ..., 1010 ... 1D-2D transform units
1011 ... occurrence frequency counting unit
1012 ... coefficient order updating unit
1013 ... histogram
1014 ... coefficient order update information
Claims (16)
- performing intra prediction on an encoding target to obtain a prediction error;
setting a combination of a vertical transform matrix and a horizontal transform matrix corresponding to the intra prediction mode of the encoding target, based on a relationship predetermined according to the predicted-image generation method of each intra prediction mode;
performing a vertical transform and a horizontal transform on the prediction error using the set vertical transform matrix and horizontal transform matrix to obtain transform coefficients; and
encoding the transform coefficients and information indicating the intra prediction mode of the encoding target,
the method comprising the steps above, wherein
the combination is a combination of any of a plurality of transform matrices including a first transform matrix and a second transform matrix whose coefficient concentration becomes higher than that of the first transform matrix when a one-dimensional orthogonal transform is performed, in the direction orthogonal to the line of a reference pixel group, on the prediction error of an intra prediction mode that generates an intra predicted image by referring to the reference pixel group on at least one line,
the method being an image encoding method characterized by the above. - The image encoding method according to claim 1, wherein the second transform matrix includes transform bases designed in advance, based on the tendency of the absolute value of the prediction error of an intra prediction mode that generates an intra predicted image by referring to a reference pixel group on at least one line to increase with distance from the reference pixels, so that the coefficient concentration becomes higher than that of the first transform matrix when a one-dimensional orthogonal transform is performed in the direction orthogonal to the line of the reference pixel group.
- The image encoding method according to claim 1, further comprising: setting, for an intra prediction mode that generates an intra predicted image by referring to a reference pixel group on at least one line, the second transform matrix as the vertical transform matrix corresponding to that intra prediction mode if the line of the reference pixel group includes a horizontal line, and setting the second transform matrix as the horizontal transform matrix corresponding to that intra prediction mode if the line of the reference pixel group includes a vertical line.
- further comprising, before encoding the transform coefficients, converting the transform coefficients into a one-dimensional representation according to an individual scan order corresponding to the intra prediction mode of the encoding target,
wherein the individual scan order corresponding to the intra prediction mode of the encoding target is designed in advance to match the descending order of occurrence frequency of non-zero coefficients in the quantized transform coefficients of that intra prediction mode,
according to the image encoding method of claim 1. - The image encoding method according to claim 4, further comprising dynamically updating the individual scan order corresponding to the intra prediction mode of the encoding target so as to match the descending order of occurrence frequency of non-zero coefficients in the quantized transform coefficients of that intra prediction mode.
- The image encoding method according to claim 1, wherein the plurality of transform matrices further include a third transform matrix whose coefficient concentration becomes higher than that of the first transform matrix when one-dimensional orthogonal transforms in the vertical and horizontal directions are performed on the prediction error of an intra prediction mode that performs DC prediction.
- The image encoding method according to claim 6, wherein the third transform matrix is a discrete cosine transform matrix.
- The second transform matrix has a coefficient concentration higher than that of the first transform matrix when a one-dimensional orthogonal transform in the direction orthogonal to the line of a reference pixel group is performed on the prediction error of an intra prediction mode that generates an intra predicted image by referring to the reference pixel group on one line, and
the plurality of transform matrices further include a third transform matrix whose coefficient concentration becomes higher than that of the first transform matrix when one-dimensional orthogonal transforms in the vertical and horizontal directions are performed on the prediction error of an intra prediction mode that generates an intra predicted image by referring to reference pixel groups on two lines,
according to the image encoding method of claim 1. - Decoding transform coefficients of a decoding target and information indicating an intra prediction mode of the decoding target;
setting a combination of a vertical inverse transform matrix and a horizontal inverse transform matrix corresponding to the intra prediction mode of the decoding target, based on a relationship predetermined according to the predicted-image generation method of each intra prediction mode;
performing a vertical inverse transform and a horizontal inverse transform on the transform coefficients using the set vertical inverse transform matrix and horizontal inverse transform matrix to obtain a prediction error; and
generating a decoded image based on the prediction error,
the method comprising the steps above, wherein
the combination is a combination of any of the transposed matrices of a plurality of transform matrices including a first transform matrix and a second transform matrix whose coefficient concentration becomes higher than that of the first transform matrix when a one-dimensional orthogonal transform is performed, in the direction orthogonal to the line of a reference pixel group, on the prediction error of an intra prediction mode that generates an intra predicted image by referring to the reference pixel group on at least one line,
the method being an image decoding method characterized by the above. - The image decoding method according to claim 9, wherein the second transform matrix includes transform bases designed in advance, based on the tendency of the absolute value of the prediction error of an intra prediction mode that generates an intra predicted image by referring to a reference pixel group on at least one line to increase with distance from the reference pixels, so that the coefficient concentration becomes higher than that of the first transform matrix when a one-dimensional orthogonal transform is performed in the direction orthogonal to the line of the reference pixel group.
- The image decoding method according to claim 9, further comprising: setting, for an intra prediction mode that generates an intra predicted image by referring to a reference pixel group on at least one line, the transposed matrix of the second transform matrix as the vertical inverse transform matrix corresponding to that intra prediction mode if the line of the reference pixel group includes a horizontal line, and setting the transposed matrix of the second transform matrix as the horizontal inverse transform matrix corresponding to that intra prediction mode if the line of the reference pixel group includes a vertical line.
- further comprising, before performing the vertical inverse transform and the horizontal inverse transform on the transform coefficients, converting the transform coefficients into a two-dimensional representation according to an individual scan order corresponding to the intra prediction mode of the decoding target,
wherein the individual scan order corresponding to the intra prediction mode of the decoding target is designed in advance to match the descending order of occurrence frequency of non-zero coefficients in the transform coefficients, before inverse quantization, of that intra prediction mode,
according to the image decoding method of claim 9. - The image decoding method according to claim 12, further comprising dynamically updating the individual scan order corresponding to the intra prediction mode of the decoding target so as to match the descending order of occurrence frequency of non-zero coefficients in the transform coefficients, before inverse quantization, of that intra prediction mode.
- The image decoding method according to claim 9, wherein the plurality of transform matrices further include a third transform matrix whose coefficient concentration becomes higher than that of the first transform matrix when one-dimensional orthogonal transforms in the vertical and horizontal directions are performed on the prediction error of an intra prediction mode that performs DC prediction.
- The image decoding method according to claim 14, wherein the third transform matrix is the transposed matrix of a discrete cosine transform matrix.
- The second transform matrix has a coefficient concentration higher than that of the first transform matrix when a one-dimensional orthogonal transform in the direction orthogonal to the line of a reference pixel group is performed on the prediction error of an intra prediction mode that generates an intra predicted image by referring to the reference pixel group on one line, and
the plurality of transform matrices further include a third transform matrix whose coefficient concentration becomes higher than that of the first transform matrix when one-dimensional orthogonal transforms in the vertical and horizontal directions are performed on the prediction error of an intra prediction mode that generates an intra predicted image by referring to reference pixel groups on two lines,
according to the image decoding method of claim 9.
Priority Applications (18)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012524380A JP5259879B2 (ja) | 2010-07-15 | 2010-07-15 | 画像符号化方法及び画像復号化方法 |
CN201080067113.3A CN103493491A (zh) | 2010-07-15 | 2010-07-15 | 图像编码方法以及图像解码方法 |
RU2013101598/08A RU2528144C1 (ru) | 2010-07-15 | 2010-07-15 | Способ кодирования изображения и способ декодирования изображения |
BR122020008881-8A BR122020008881B1 (pt) | 2010-07-15 | 2010-07-15 | Métodos de codificação e de decodificação de imagem |
AU2010357291A AU2010357291B2 (en) | 2010-07-15 | 2010-07-15 | Image encoding method and image decoding method |
PCT/JP2010/062007 WO2012008039A1 (ja) | 2010-07-15 | 2010-07-15 | 画像符号化方法及び画像復号化方法 |
CA2921057A CA2921057C (en) | 2010-07-15 | 2010-07-15 | Image encoding method and image decoding method |
EP14155176.2A EP2747435A1 (en) | 2010-07-15 | 2010-07-15 | Image encoding method using transform matrices |
BR112013000865-2A BR112013000865B1 (pt) | 2010-07-15 | 2010-07-15 | Métodos de codificação e de decodificação de imagem |
EP10854722.5A EP2595385A4 (en) | 2010-07-15 | 2010-07-15 | IMAGE ENCODING AND DECODING METHOD |
MX2013000516A MX2013000516A (es) | 2010-07-15 | 2010-07-15 | Metodo de codificacion de imagenes y metodo de descodificacion de imagenes. |
EP14155181.2A EP2747436A1 (en) | 2010-07-15 | 2010-07-15 | Image decoding method using transform matrices |
CA2805248A CA2805248C (en) | 2010-07-15 | 2010-07-15 | Image encoding method and image decoding method |
MX2015011031A MX338867B (es) | 2010-07-15 | 2010-07-15 | Metodo de codificación de imágenes y método de descodificacion de imágenes. |
US13/740,841 US20130128972A1 (en) | 2010-07-15 | 2013-01-14 | Image encoding apparatus and image decoding apparatus |
US14/186,959 US9706226B2 (en) | 2010-07-15 | 2014-02-21 | Image encoding apparatus and image decoding apparatus employing intra preciction and direction transform matrix |
US14/186,990 US9426467B2 (en) | 2010-07-15 | 2014-02-21 | Image decoding method using a vertical inverse transform matrix and a horizontal inverse transform matrix |
RU2014127123/08A RU2580088C2 (ru) | 2010-07-15 | 2014-07-02 | Способ кодирования изображения и способ декодирования изображения |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2010/062007 WO2012008039A1 (ja) | 2010-07-15 | 2010-07-15 | 画像符号化方法及び画像復号化方法 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/740,841 Continuation US20130128972A1 (en) | 2010-07-15 | 2013-01-14 | Image encoding apparatus and image decoding apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012008039A1 true WO2012008039A1 (ja) | 2012-01-19 |
Family
ID=45469063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/062007 WO2012008039A1 (ja) | 2010-07-15 | 2010-07-15 | 画像符号化方法及び画像復号化方法 |
Country Status (10)
Country | Link |
---|---|
US (3) | US20130128972A1 (ja) |
EP (3) | EP2595385A4 (ja) |
JP (1) | JP5259879B2 (ja) |
CN (1) | CN103493491A (ja) |
AU (1) | AU2010357291B2 (ja) |
BR (2) | BR122020008881B1 (ja) |
CA (2) | CA2805248C (ja) |
MX (2) | MX338867B (ja) |
RU (2) | RU2528144C1 (ja) |
WO (1) | WO2012008039A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108605129A (zh) * | 2016-01-28 | 2018-09-28 | 日本放送协会 | 编码装置、解码装置以及程序 |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101452860B1 (ko) | 2009-08-17 | 2014-10-23 | 삼성전자주식회사 | 영상의 부호화 방법 및 장치, 영상 복호화 방법 및 장치 |
MX338867B (es) | 2010-07-15 | 2016-05-03 | Toshiba Kk | Metodo de codificación de imágenes y método de descodificacion de imágenes. |
RU2719340C2 (ru) * | 2011-10-18 | 2020-04-17 | Кт Корпорейшен | Способ декодирования видеосигнала |
US10904551B2 (en) | 2013-04-05 | 2021-01-26 | Texas Instruments Incorporated | Video coding using intra block copy |
JP5537695B2 (ja) * | 2013-04-10 | 2014-07-02 | 株式会社東芝 | 画像復号化装置、方法およびプログラム |
JP5535361B2 (ja) * | 2013-04-10 | 2014-07-02 | 株式会社東芝 | 画像符号化装置、方法およびプログラム |
CN103974076B (zh) * | 2014-05-19 | 2018-01-12 | 华为技术有限公司 | 图像编解码方法和设备、系统 |
EP3222044A1 (en) * | 2014-11-21 | 2017-09-27 | VID SCALE, Inc. | One-dimensional transform modes and coefficient scan order |
CN107925771B (zh) * | 2015-05-29 | 2022-01-11 | 深圳市大疆创新科技有限公司 | 视频处理的方法、系统、存储介质和成像装置 |
US20180302629A1 (en) * | 2015-10-30 | 2018-10-18 | Sony Corporation | Image processing apparatus and method |
FR3050598B1 (fr) * | 2016-04-26 | 2020-11-06 | Bcom | Procede de decodage d'une image numerique, procede de codage, dispositifs, et programmes d'ordinateurs associes |
US10972733B2 (en) * | 2016-07-15 | 2021-04-06 | Qualcomm Incorporated | Look-up table for enhanced multiple transform |
CN106254883B (zh) * | 2016-08-02 | 2021-01-22 | 海信视像科技股份有限公司 | 一种视频解码中的反变换方法和装置 |
RU2645290C1 (ru) * | 2017-03-27 | 2018-02-19 | федеральное государственное казенное военное образовательное учреждение высшего образования "Военная академия связи имени Маршала Советского Союза С.М. Буденного" Министерства обороны Российской Федерации | Способ кодирования оцифрованных изображений с использованием адаптивного ортогонального преобразования |
KR102257829B1 (ko) * | 2017-04-13 | 2021-05-28 | 엘지전자 주식회사 | 영상의 부호화/복호화 방법 및 이를 위한 장치 |
CN111226442B (zh) * | 2017-08-04 | 2022-06-21 | Lg电子株式会社 | 配置用于视频压缩的变换的方法及计算机可读存储介质 |
BR122021010320B1 (pt) | 2017-12-15 | 2022-04-05 | Lg Electronics Inc. | Método de codificação de imagem com base em transformada secundária não separável e dispositivo para o mesmo |
EP3800883A4 (en) * | 2018-06-25 | 2021-07-07 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | INTRAFRAME PREDICTION PROCESS AND DEVICE |
US10904563B2 (en) * | 2019-01-02 | 2021-01-26 | Tencent America LLC | Method and apparatus for improved zero out transform |
CN112655215A (zh) | 2019-07-10 | 2021-04-13 | Oppo广东移动通信有限公司 | 图像分量预测方法、编码器、解码器以及存储介质 |
CN113302934B (zh) * | 2019-12-23 | 2022-10-18 | Oppo广东移动通信有限公司 | 图像预测方法、编码器、解码器以及存储介质 |
CN111429555A (zh) * | 2020-03-24 | 2020-07-17 | 谷元(上海)文化科技有限责任公司 | 一种动画人物表情转换视觉捕捉方法 |
US11418795B2 (en) * | 2020-08-05 | 2022-08-16 | University Of Electronic Science And Technology Of China | Temporal domain rate distortion optimization based on video content characteristic and QP-λcorrection |
CN111881438B (zh) * | 2020-08-14 | 2024-02-02 | 支付宝(杭州)信息技术有限公司 | 基于隐私保护进行生物特征识别的方法、装置及电子设备 |
CN114580649A (zh) * | 2022-03-09 | 2022-06-03 | 北京百度网讯科技有限公司 | 消除量子泡利噪声的方法及装置、电子设备和介质 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009272727A (ja) * | 2008-04-30 | 2009-11-19 | Toshiba Corp | 予測誤差の方向性に基づく変換方法、画像符号化方法及び画像復号化方法 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3050736B2 (ja) * | 1993-12-13 | 2000-06-12 | シャープ株式会社 | 動画像符号化装置 |
JP2900998B2 (ja) | 1997-07-31 | 1999-06-02 | 日本ビクター株式会社 | ブロック間内挿予測符号化装置、復号化装置、符号化方法及び復号化方法 |
US6707948B1 (en) * | 1997-11-17 | 2004-03-16 | The Regents Of The University Of California | Image compression for memory-constrained decoders |
US6990506B2 (en) * | 2000-12-13 | 2006-01-24 | Sharp Laboratories Of America, Inc. | Integer cosine transform matrix for picture coding |
JP4336789B2 (ja) * | 2002-01-10 | 2009-09-30 | 日本電気株式会社 | 2次元直交変換と量子化方法及びその装置並びにプログラム |
MXPA06006107A (es) * | 2003-12-01 | 2006-08-11 | Samsung Electronics Co Ltd | Metodo y aparato de codificacion y decodificacion escalables de video. |
JP4591657B2 (ja) * | 2003-12-22 | 2010-12-01 | キヤノン株式会社 | 動画像符号化装置及びその制御方法、プログラム |
US7565020B2 (en) * | 2004-07-03 | 2009-07-21 | Microsoft Corp. | System and method for image coding employing a hybrid directional prediction and wavelet lifting |
US7778327B2 (en) * | 2005-02-08 | 2010-08-17 | Texas Instruments Incorporated | H.264 quantization |
AU2006320064B2 (en) * | 2005-11-30 | 2010-09-09 | Kabushiki Kaisha Toshiba | Image encoding/image decoding method and image encoding/image decoding apparatus |
US8488668B2 (en) | 2007-06-15 | 2013-07-16 | Qualcomm Incorporated | Adaptive coefficient scanning for video coding |
EP2200324A4 (en) * | 2007-10-15 | 2012-10-17 | Nippon Telegraph & Telephone | PICTURE CODING DEVICE AND DECODING DEVICE, PICTURE CODING METHOD AND DECODING METHOD, PROGRAM FOR THE DEVICES AND METHOD AND RECORDING MEDIUM WITH RECORDED PROGRAM |
WO2011083573A1 (ja) | 2010-01-07 | 2011-07-14 | 株式会社 東芝 | 動画像符号化装置及び動画像復号化装置 |
US8885714B2 (en) * | 2010-01-14 | 2014-11-11 | Texas Instruments Incorporated | Method and system for intracoding in video encoding |
JP5908848B2 (ja) * | 2010-03-10 | 2016-04-26 | トムソン ライセンシングThomson Licensing | 変換選択を有するビデオ符号化および復号のための制約付きの変換を行う方法および装置 |
US8705619B2 (en) * | 2010-04-09 | 2014-04-22 | Sony Corporation | Directional discrete wavelet transform (DDWT) for video compression applications |
MX338867B (es) | 2010-07-15 | 2016-05-03 | Toshiba Kk | Metodo de codificación de imágenes y método de descodificacion de imágenes. |
JP5535361B2 (ja) | 2013-04-10 | 2014-07-02 | 株式会社東芝 | 画像符号化装置、方法およびプログラム |
JP5622954B2 (ja) | 2014-04-17 | 2014-11-12 | 株式会社東芝 | 画像復号化装置、方法およびプログラム |
-
2010
- 2010-07-15 MX MX2015011031A patent/MX338867B/es unknown
- 2010-07-15 EP EP10854722.5A patent/EP2595385A4/en not_active Ceased
- 2010-07-15 CA CA2805248A patent/CA2805248C/en active Active
- 2010-07-15 CA CA2921057A patent/CA2921057C/en active Active
- 2010-07-15 BR BR122020008881-8A patent/BR122020008881B1/pt active IP Right Grant
- 2010-07-15 EP EP14155181.2A patent/EP2747436A1/en not_active Ceased
- 2010-07-15 AU AU2010357291A patent/AU2010357291B2/en active Active
- 2010-07-15 EP EP14155176.2A patent/EP2747435A1/en not_active Ceased
- 2010-07-15 MX MX2013000516A patent/MX2013000516A/es active IP Right Grant
- 2010-07-15 BR BR112013000865-2A patent/BR112013000865B1/pt active IP Right Grant
- 2010-07-15 JP JP2012524380A patent/JP5259879B2/ja active Active
- 2010-07-15 WO PCT/JP2010/062007 patent/WO2012008039A1/ja active Application Filing
- 2010-07-15 RU RU2013101598/08A patent/RU2528144C1/ru active
- 2010-07-15 CN CN201080067113.3A patent/CN103493491A/zh active Pending
-
2013
- 2013-01-14 US US13/740,841 patent/US20130128972A1/en not_active Abandoned
-
2014
- 2014-02-21 US US14/186,959 patent/US9706226B2/en active Active
- 2014-02-21 US US14/186,990 patent/US9426467B2/en active Active
- 2014-07-02 RU RU2014127123/08A patent/RU2580088C2/ru active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009272727A (ja) * | 2008-04-30 | 2009-11-19 | Toshiba Corp | 予測誤差の方向性に基づく変換方法、画像符号化方法及び画像復号化方法 |
Non-Patent Citations (4)
Title |
---|
M. KARCZEWICZ: "Improved intra coding", ITU-T SG16/Q. 6, April 2007 (2007-04-01) |
See also references of EP2595385A4 |
YAN YE ET AL.: "Improved H.264 intra coding based on bi-directional intra prediction, directional transform, and adaptive coefficient scanning", 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2008), IEEE, October 2008 (2008-10-01), pages 2116 - 2119, XP031374452 * |
YAN YE ET AL.: "Improved Intra Coding, ITU - Telecommunications Standardization Sector STUDY GROUP 16 Question 6, Document VCEG-AG11", VIDEO CODING EXPERTS GROUP (VCEG) 33RD MEETING: SHENZHEN, 20 October 2007 (2007-10-20), CHINA, pages 1 - 6, XP030003615 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108605129A (zh) * | 2016-01-28 | 2018-09-28 | 日本放送协会 | 编码装置、解码装置以及程序 |
Also Published As
Publication number | Publication date |
---|---|
US20140169461A1 (en) | 2014-06-19 |
JP5259879B2 (ja) | 2013-08-07 |
MX2013000516A (es) | 2013-09-02 |
AU2010357291B2 (en) | 2015-01-15 |
BR112013000865B1 (pt) | 2021-08-10 |
RU2580088C2 (ru) | 2016-04-10 |
MX338867B (es) | 2016-05-03 |
AU2010357291A1 (en) | 2013-02-14 |
CA2921057C (en) | 2017-10-17 |
BR112013000865A2 (pt) | 2016-05-17 |
CA2921057A1 (en) | 2012-01-19 |
US20140169460A1 (en) | 2014-06-19 |
EP2747435A1 (en) | 2014-06-25 |
EP2747436A1 (en) | 2014-06-25 |
EP2595385A4 (en) | 2014-06-25 |
BR122020008881B1 (pt) | 2021-08-10 |
JPWO2012008039A1 (ja) | 2013-09-05 |
US9706226B2 (en) | 2017-07-11 |
RU2014127123A (ru) | 2016-02-10 |
CN103493491A (zh) | 2014-01-01 |
RU2528144C1 (ru) | 2014-09-10 |
CA2805248C (en) | 2016-04-26 |
EP2595385A1 (en) | 2013-05-22 |
CA2805248A1 (en) | 2012-01-19 |
US20130128972A1 (en) | 2013-05-23 |
US9426467B2 (en) | 2016-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5259879B2 (ja) | 画像符号化方法及び画像復号化方法 | |
EP2744212A1 (en) | Adaptive transformation of residual blocks depending on the intra prediction mode | |
WO2012035640A1 (ja) | 動画像符号化方法及び動画像復号化方法 | |
JP2015109695A (ja) | 動画像符号化装置及び動画像復号化装置 | |
AU2015201843B2 (en) | Image encoding method and image decoding method | |
JP5622954B2 (ja) | 画像復号化装置、方法およびプログラム | |
JP5537695B2 (ja) | 画像復号化装置、方法およびプログラム | |
JP5535361B2 (ja) | 画像符号化装置、方法およびプログラム | |
JP6042478B2 (ja) | 画像復号化装置 | |
JP2014078977A (ja) | 動画像復号化装置、方法及びプログラム | |
JP6310034B2 (ja) | 復号装置、復号方法および復号プログラム | |
JP2013070419A (ja) | 動画像符号化装置及び動画像復号化装置 | |
JP5925855B2 (ja) | 画像復号化装置、方法およびプログラム、第1のプログラムおよび第2のプログラム、サーバシステムならびにダウンロード制御方法 | |
JP2014042332A (ja) | 動画像符号化装置及び動画像復号化装置 | |
RU2631992C2 (ru) | Способ кодирования изображения и способ декодирования изображения | |
JP2014197890A (ja) | 動画像符号化装置及び動画像復号化装置 | |
JP2014078976A (ja) | 動画像復号化装置、方法及びプログラム | |
JPWO2011083599A1 (ja) | 動画像符号化装置及び動画像復号化装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10854722 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012524380 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010854722 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2805248 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2013/000516 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2010357291 Country of ref document: AU Date of ref document: 20100715 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2013101598 Country of ref document: RU Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112013000865 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112013000865 Country of ref document: BR Kind code of ref document: A2 Effective date: 20130114 |