US20100086028A1: Video encoding and decoding method and apparatus
 Publication number: US20100086028A1 (application US 12/532,024)
 Authority: US (United States)
 Prior art keywords: quantization, modulation, matrix, image signal, generate
 Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion)
Classifications

 All classifications fall under H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
 H04N19/192—using adaptive coding, the adaptation method, adaptation tool or adaptation type being iterative or recursive
 H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
 H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
 H04N19/176—the coding unit being an image region, the region being a block, e.g. a macroblock
 H04N19/18—the coding unit being a set of transform coefficients
 H04N19/19—using adaptive coding using optimisation based on Lagrange multipliers
 H04N19/61—using transform coding in combination with predictive coding
 H04N19/70—characterised by syntax aspects related to video coding, e.g. related to compression standards
 H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
Abstract
A video encoding apparatus includes a predictor to perform prediction for an input image signal to generate a prediction image signal, a subtractor to calculate a difference between the input image signal and the prediction image signal to generate a prediction residual signal, a transformer to transform the prediction residual signal to generate a transform coefficient, a modulating unit to perform modulation on a quantization matrix to obtain a modulated quantization matrix, a quantizer to quantize the transform coefficient using the modulated quantization matrix to generate a quantized transform coefficient, and an encoder to encode the quantized transform coefficient and a modulation index to generate encoded data.
Description
 The present invention relates to a video encoding and decoding method and apparatus for a motion video or a still video.
 In recent years, a video encoding method in which encoding efficiency is greatly improved has been recommended as ITU-T Rec. H.264 and ISO/IEC 14496-10 (hereinafter referred to as H.264) jointly by ITU-T and ISO/IEC. Encoding methods such as ISO/IEC MPEG-1, -2 and -4, and ITU-T H.261 and H.263 perform compression using a two-dimensional DCT of 8×8 blocks. Meanwhile, since a two-dimensional integer orthogonal transform of 4×4 blocks is used in H.264, an IDCT mismatch does not need to be considered, and an operation using a 16-bit register is enabled.
 Further, in the H.264 High Profile, a quantization matrix is introduced for the quantization process of orthogonal transform coefficients, as one tool for subjective image quality improvement for a high-definition image such as an HDTV-size image (refer to J. Lu, "Proposal of quantization weighting for H.264/MPEG-4 AVC Professional Profiles", JVT of ISO/IEC MPEG & ITU-T VCEG, JVT-K029, March 2004 (Document 1)). The quantization matrix is a tool that uses human visual characteristics to weight quantization coefficients in the frequency domain so as to improve subjective image quality, and is also used in ISO/IEC MPEG-2 and MPEG-4. The quantization matrix used in H.264 can be switched in units of a sequence, picture or slice, but cannot be changed in units of a smaller process block.
 Meanwhile, a technique for enabling a quantization matrix to be switched in units of a macroblock is suggested in JP-A 2006-262004 (KOKAI). However, according to this technique, it is only possible to switch whether or not to use the quantization matrix, and optimization of the quantization process that considers the locality of a to-be-encoded image is not possible.
 A method for changing a quantization matrix using a variation in the number of encoded bits from a previous picture, in order to control the number of encoded bits, is suggested in JP-A 2003-189308 (KOKAI). However, even in JP-A 2003-189308, similar to Document 1, optimization of the quantization process in units of a quantization block is not possible.
 An object of the present invention is to enable optimization of a quantization process using locality of an image when a motion video or a still video is encoded, thereby realizing high encoding efficiency.
 According to an aspect of the present invention, there is provided a video encoding method comprising: performing prediction for an input image signal to generate a prediction image signal; calculating a difference between the input image signal and the prediction image signal to generate a prediction residual signal; transforming the prediction residual signal to generate a transform coefficient; performing modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization, to obtain a modulation result related to the quantization; quantizing the transform coefficient using the modulation result to generate a quantized transform coefficient; and encoding the quantized transform coefficient and an index related to the modulation to generate encoded data.
 According to another aspect of the present invention, there is provided a video decoding method comprising: decoding encoded data including a quantized transform coefficient and an index related to modulation; performing modulation, in accordance with the index, on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization, to obtain a modulation result related to the quantization; inversely quantizing the quantized transform coefficient using the modulation result to generate an inverse-quantized transform coefficient; performing an inverse transform on the inverse-quantized transform coefficient to generate a prediction residual signal; performing prediction using a decoded image signal to generate a prediction image signal; and adding the prediction image signal and the prediction residual signal to generate a decoded image signal.

FIG. 1 is a block diagram illustrating a video encoding apparatus according to a first embodiment. 
FIG. 2 is a diagram illustrating an encoding sequence in an encoding frame. 
FIG. 3 is a diagram illustrating a quantization block size. 
FIG. 4A is a diagram illustrating a 4×4 pixel block. 
FIG. 4B is a diagram illustrating an 8×8 pixel block. 
FIG. 5A is a diagram illustrating a frequency place of a 4×4 pixel block. 
FIG. 5B is a diagram illustrating a frequency place of an 8×8 pixel block. 
FIG. 6 is a block diagram illustrating a quantization matrix modulating unit of FIG. 1.
FIG. 7 is a block diagram illustrating a modulation matrix setting unit of FIG. 6.
FIG. 8 is a diagram illustrating an example of a modulation model of a modulation matrix. 
FIG. 9 is a diagram illustrating another example of a modulation model of a modulation matrix. 
FIG. 10 is a block diagram illustrating a modulation quantization matrix generating unit of FIG. 6.
FIG. 11A is a diagram illustrating a slice quantization matrix of an encoding slice. 
FIG. 11B is a diagram illustrating a block quantization matrix of an encoding slice. 
FIG. 11C is a diagram illustrating a relationship between a block quantization matrix and a modulation matrix and a modulation quantization matrix. 
FIG. 11D is a diagram illustrating a modulation quantization matrix of an encoding slice. 
FIG. 12 is a flowchart illustrating a sequence of an encoding process in the first embodiment. 
FIG. 13 is a diagram schematically illustrating a syntax structure in the first embodiment. 
FIG. 14 is a diagram illustrating an example of a data structure of sequence parameter set syntax in the first embodiment. 
FIG. 15 is a diagram illustrating an example of a data structure of picture parameter set syntax in the first embodiment. 
FIG. 16 is a diagram illustrating an example of a data structure of slice header syntax in the first embodiment. 
FIG. 17 is a diagram illustrating an example of a data structure of macroblock header syntax in the first embodiment. 
FIG. 18 is a diagram illustrating an example of a data structure of macroblock header syntax in the first embodiment. 
FIG. 19 is a diagram illustrating an example of a data structure of slice header syntax in the first embodiment. 
FIG. 20 is a diagram illustrating semantics of a syntax element in the first embodiment. 
FIG. 21 is a block diagram illustrating a video encoding apparatus according to a second embodiment. 
FIG. 22 is a block diagram illustrating a video encoding apparatus according to a third embodiment. 
FIG. 23 is a block diagram illustrating a video encoding apparatus according to a fourth embodiment. 
FIG. 24 is a diagram illustrating a relationship between a precision modulation index and a quantization parameter variation value and a quantization scale variation value in the fourth embodiment. 
FIG. 25 is a diagram illustrating an example of a data structure of sequence parameter set syntax in the fourth embodiment. 
FIG. 26 is a diagram illustrating an example of a data structure of picture parameter set syntax in the fourth embodiment. 
FIG. 27 is a diagram illustrating an example of a data structure of slice header syntax in the fourth embodiment. 
FIG. 28 is a diagram illustrating an example of a data structure of macroblock header syntax in the fourth embodiment. 
FIG. 29 is a diagram illustrating an example of a data structure of slice header syntax according to an embodiment. 
FIG. 30 is a block diagram illustrating a video decoding apparatus according to a fifth embodiment. 
FIG. 31 is a block diagram illustrating a video decoding apparatus according to a sixth embodiment. 
FIG. 32 is a block diagram illustrating a video decoding apparatus according to a seventh embodiment. 
FIG. 33 is a block diagram illustrating a video decoding apparatus according to an eighth embodiment.

 Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings.
 <Video Encoding Apparatus>
 First, first to fourth embodiments that are related to video encoding will be described.
 Referring to FIG. 1, in a video encoding apparatus according to the first embodiment of the present invention, an input image signal 120 of a motion video or a still video is divided in units of a small pixel block, for example, a macroblock, and is input to an encoding unit 100. In this case, the macroblock is the basic process block size of the encoding process. Hereinafter, a to-be-encoded macroblock of the input image signal 120 is simply referred to as a target block.

 In the encoding unit 100, a plurality of prediction modes, which differ in block size or in the method of generating a prediction image signal, are prepared. As methods of generating the prediction image signal, intra-frame prediction, which generates a prediction image using only the to-be-encoded frame, and inter-frame prediction, which performs prediction using a plurality of temporally different reference frames, are generally used. In this embodiment, for simplicity of description, it is assumed that the encoding process is performed from the upper left side to the lower right side, as illustrated in FIG. 2.

 A macroblock is typically a 16×16 pixel block as illustrated in FIG. 3. However, the macroblock may be a 32×32 pixel block or an 8×8 pixel block. Further, the shape of the macroblock is not necessarily a square lattice.

 The encoding unit 100 will now be described. In a subtractor 101, a difference between the input image signal 120 and a prediction image signal 121 from a predictor 102 is calculated, and a prediction residual signal 122 is generated. The prediction residual signal 122 is input to a mode determining unit 103 and a transformer 104. The mode determining unit 103 will be described in detail below. In the transformer 104, an orthogonal transform, such as a discrete cosine transform (DCT), is performed on the prediction residual signal 122, and transform coefficients 123 are generated. The transform in the transformer 104 may instead be performed using a method such as a discrete sine transform, a wavelet transform, or an independent component analysis.
 The transform coefficients 123 output from the transformer 104 are input to a quantizer 105. In the quantizer 105, the transform coefficients 123 are quantized in accordance with a quantization parameter provided by an encoding control unit 113 and a modulation quantization matrix 133 generated by a quantization matrix modulating unit 110, which will be described in detail below, and quantized transform coefficients 124 are generated.
 The quantized transform coefficients 124 are input to an inverse quantizer 106 and an entropy encoder 111. The entropy encoder 111 will be described in detail below. In the inverse quantizer 106, inverse quantization is performed on the quantized transform coefficients 124 in accordance with the quantization parameter provided by the encoding control unit 113 and the modulation quantization matrix 133, and inverse-quantized transform coefficients 125 are generated.
 An inverse transformer 107 subjects the inverse-quantized transform coefficients 125 from the inverse quantizer 106 to the inverse of the transform performed by the transformer 104, for example, an inverse orthogonal transform such as an inverse discrete cosine transform (IDCT). By the inverse orthogonal transform, a signal 126 corresponding to the prediction residual signal 122 (referred to as the decoded prediction residual signal) is reproduced. The decoded prediction residual signal 126 is input to an adder 108. In the adder 108, the decoded prediction residual signal 126 and the prediction image signal 121 from the predictor 102 are added, and a local decoded signal 127 is generated. The local decoded signal 127 is accumulated as a reference image signal in a reference memory 109. The reference image signal accumulated in the reference memory 109 is referred to when a prediction is performed by the predictor 102.
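The local decoding loop described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the flat quantization step of 8 and the orthonormal 4×4 DCT are assumptions chosen for clarity (the patent's quantizer uses a modulated quantization matrix, introduced later).

```python
import numpy as np

def dct_matrix(n=4):
    """Orthonormal DCT-II basis; with an orthonormal matrix T,
    the inverse transform is simply T.T (transpose)."""
    T = np.zeros((n, n))
    for k in range(n):
        c = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            T[k, i] = c * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return T

T = dct_matrix(4)

def local_decode(input_block, prediction, q_step=8.0):
    residual = input_block - prediction      # subtractor 101
    coeff = T @ residual @ T.T               # transformer 104 (2-D DCT)
    q = np.round(coeff / q_step)             # quantizer 105 (flat step, no matrix)
    deq = q * q_step                         # inverse quantizer 106
    dec_residual = T.T @ deq @ T             # inverse transformer 107
    return prediction + dec_residual         # adder 108 -> local decoded signal 127
```

The reconstruction differs from the input only by the quantization error, which is what the reference memory 109 accumulates for later prediction.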
 In the predictor 102, an inter-frame prediction or an intra-frame prediction is performed using pixels (encoded reference pixels) of the reference image signal that is accumulated in the reference memory 109. As a result, all of the prediction image signals 121 that can be selected with respect to the to-be-encoded block by the predictor 102 are generated. However, for a prediction mode in which the next prediction is not possible unless a local decoded signal is generated in the to-be-encoded block, such as the intra-frame prediction of H.264, for example, the 4×4 pixel block size prediction illustrated in FIG. 4A or the 8×8 pixel block size prediction illustrated in FIG. 4B, transform/quantization and inverse quantization/inverse transform may be performed in the predictor 102.

 As an example of the prediction modes in the predictor 102, the inter-frame prediction will be described. When the to-be-encoded block is predicted by the inter-frame prediction, block matching is performed using a plurality of encoded reference pixels accumulated in the reference memory 109. In the block matching, a shift amount between the pixels of the target block of the input image signal 120, as an original image, and the plurality of reference pixels is calculated. From the predictor 102, among the images predicted using the shift amounts, an image whose difference from the original image is small is output as the prediction image signal 121. The shift amount is calculated at integer pixel precision or fractional pixel precision, and information indicating the shift amount is added to the prediction mode information 129 as motion vector information 128.
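The block matching just described can be sketched as an exhaustive integer-pel search. The search range, block size, and SAD criterion below are illustrative assumptions; the patent does not fix a particular search strategy.

```python
import numpy as np

def block_match(target, reference, block_xy, search=4, bsize=4):
    """Exhaustive integer-pel block matching (a sketch of what the predictor
    102 might do): search a +/-`search` window in the reference frame and
    return the shift (motion vector) with the smallest sum of absolute
    differences (SAD)."""
    y0, x0 = block_xy
    cur = target[y0:y0 + bsize, x0:x0 + bsize].astype(int)
    best_mv, best_sad = (0, 0), float("inf")
    h, w = reference.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue  # candidate block falls outside the reference frame
            cand = reference[y:y + bsize, x:x + bsize].astype(int)
            sad = int(np.sum(np.abs(cur - cand)))
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

The returned shift corresponds to the motion vector information 128; fractional-pel refinement would interpolate the reference before matching.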
 The prediction image signal 121 generated by the predictor 102 and the prediction residual signal 122 are input to the mode determining unit 103. In the mode determining unit 103, an optimal prediction mode is selected (which is referred to as a mode determination), on the basis of the input image signal 120, the prediction image signal 121, the prediction residual signal 122, mode information 129 indicating a prediction mode used in the predictor 102, and a modulation index 132 to be described in detail below.
 Specifically, the mode determining unit 103 carries out the mode determination using a cost such as the following equation. If the number of encoded bits related to the prediction mode information 129 is OH, the number of encoded bits of the modulation index 132 is INDEX, and the sum of absolute differences between the input image signal 120 and the local decoded signal 127 is SAD, the mode determining unit 103 uses the following mode determination equation.

K=SAD+λ×(OH+INDEX)    (1)

 In this case, K denotes the cost, and λ denotes a constant determined on the basis of the value of a quantization scale or a quantization parameter. On the basis of the cost K obtained in this way, the mode determination is carried out; that is, the mode in which the cost K has the smallest value is selected as the optimal prediction mode.
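The mode decision of Equation (1) reduces to picking the minimum of a small cost function. The candidate tuples and λ values in the sketch below are hypothetical; only the cost formula comes from the text.

```python
def mode_cost(sad, oh_bits, index_bits, lam):
    """Equation (1): K = SAD + lambda * (OH + INDEX)."""
    return sad + lam * (oh_bits + index_bits)

def choose_mode(candidates, lam):
    """candidates: iterable of (mode_name, sad, oh_bits, index_bits) tuples
    (hypothetical layout); returns the candidate with the smallest cost K."""
    return min(candidates, key=lambda c: mode_cost(c[1], c[2], c[3], lam))
```

A larger λ penalizes side information more heavily, so a mode with low SAD but a large header cost can lose to a slightly worse prediction that is cheaper to signal.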
 In the mode determining unit 103, the mode determination may instead be performed using only (a) the prediction mode information 129, (b) the modulation index 132, (c) the SAD, or (d) the absolute sum of the prediction residual signal 122 instead of Equation (1), and a value obtained by performing a Hadamard transform on any one of (a) to (d), or an approximation of that value, may be used. Further, in the mode determining unit 103, a cost may be created using the activity of the input image signal 120, or a cost function may be created using a quantization scale or a quantization parameter.
 As another example, a preliminary encoding unit may be prepared in the mode determining unit 103, and the mode determination may be carried out using the number of encoded bits obtained by actually encoding the prediction residual signal 122 generated in a given prediction mode and the square error between the input image signal 120 and the local decoded signal 127. In this case, the mode determination equation is as follows.

J=D+λ×R    (2)

 In this case, J denotes an encoding cost, and D denotes an encoding distortion indicating the square error between the input image signal 120 and the local decoded signal 127. Meanwhile, R denotes the number of encoded bits estimated by the preliminary encoding.

 If the encoding cost J of Equation (2) is used, preliminary encoding and local decoding processes are needed for every prediction mode, and thus the circuit scale or the operation amount increases. Meanwhile, since a more accurate number of encoded bits and encoding distortion are used, high encoding efficiency can be maintained. A cost may be calculated using only R or D instead of Equation (2), and a cost function may be created using an approximation of R or D. In the description below, the encoding cost J of Equation (2) is used.
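The rate-distortion decision of Equation (2) can be sketched the same way. The trial triples below are hypothetical stand-ins for the output of a preliminary encoding pass; only J = D + λR comes from the text.

```python
def rd_cost(distortion, rate_bits, lam):
    """Equation (2): J = D + lambda * R."""
    return distortion + lam * rate_bits

def choose_mode_rd(trials, lam):
    """trials: iterable of (mode_name, D, R) triples, where D is the squared
    error and R the bit count measured by preliminary encoding."""
    return min(trials, key=lambda t: rd_cost(t[1], t[2], lam))
```

Note how λ trades distortion against rate: a low-distortion, high-rate trial wins at small λ and loses at large λ.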
 The prediction mode information 129 (including the motion vector information) output from the mode determining unit 103 is input to the entropy encoder 111. In the entropy encoder 111, entropy encoding, for example Huffman encoding or arithmetic encoding, is performed on information such as the quantized transform coefficients 124, the prediction mode information 129, the quantization matrix 131, and the modulation index 132, and encoded data is generated.
 The encoded data generated by the entropy encoder 111 is output from the encoding unit 100 and is temporarily stored in an output buffer 112 after multiplexing. The encoded data accumulated in the output buffer 112 is output as an encoding bit stream 130 to the outside of the video encoding apparatus, in accordance with output timing managed by the encoding control unit 113. The encoding bit stream 130 is transmitted to a transmission system (communication network) or an accumulation system (storage media), not shown.
 (With Respect to a Quantization Matrix Modulating Unit 110)
 In the quantization matrix modulating unit 110, with respect to the quantization matrix 131 that is provided from the encoding control unit 113, a modulation is performed in accordance with the modulation index 132 from the mode determining unit 103, and a modulated quantization matrix 133 is generated. The modulated quantization matrix 133 is provided to the quantizer 105 and the inverse quantizer 106 and used in the quantization and the inverse quantization.
 Specifically, the quantization performed in the quantizer 105 in accordance with the modulated quantization matrix 133 is represented by the following equation.

Y(i,j)=(X(i,j)×MQM(i,j,idx)+f)/Q_step    (3)

 In this case, Y denotes the quantized transform coefficients 124, and X denotes the transform coefficients 123 before quantization. In addition, f denotes a rounding offset that controls round-up/truncation in the quantization, and Q_step denotes the quantization scale (also called the quantization step size or the quantization width). When the value of Q_step is large, the quantization is coarse, and when the value is small, the quantization is fine. Q_step is changed on the basis of a quantization parameter. (i,j) indicates a frequency component position, in x-y coordinates, in the quantization block processed by the quantizer 105. (i,j) differs depending on whether the quantization block is the 4×4 pixel block illustrated in FIG. 5A or the 8×8 pixel block illustrated in FIG. 5B.

 In general, the transform block size and the quantization block size match each other. In this embodiment, transform quantization blocks of a plurality of block sizes exist. The transform quantization block size is treated as a different prediction mode and is determined as such by the mode determining unit 103.
 In Equation (3), MQM denotes the modulation quantization matrix 133, and idx denotes the modulation index 132. The modulation index 132 is an index related to the modulation of the quantization matrix 131 performed by the quantization matrix modulating unit 110, and will be described in detail below.
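The quantization of Equation (3) can be implemented element-wise. The concrete MQM, f, and Q_step values in the test are illustrative; only the formula comes from the patent, here written with explicit sign handling (anticipating the separated-sign form) and integer floor division.

```python
import numpy as np

def quantize(X, MQM, q_step, f):
    """Equation (3): Y(i,j) = (X(i,j) * MQM(i,j,idx) + f) / Q_step, applied
    element-wise with the sign of X separated from its magnitude. MQM is the
    modulated quantization matrix already selected for the given idx."""
    X = np.asarray(X, dtype=np.int64)
    mag = (np.abs(X) * MQM + f) // q_step   # integer division with offset f
    return np.sign(X) * mag
```

The rounding offset f shifts the decision threshold between adjacent quantization levels, which is how round-up versus truncation is controlled.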
 When the sign of the transform coefficients 123 is separated, Equation (3) is transformed as follows.

Y(i,j)=sign(X(i,j))×(abs(X(i,j))×MQM(i,j,idx)+f)/Q_step    (4)

 In this case, sign(X) is a function that returns the sign of X, i.e., the sign of the transform coefficients 123, and abs(X) is a function that returns the absolute value of X.
 In order to simplify the operation, if the quantization scale Q_step is designed as a power of two, Equation (3) is transformed as follows.

Y(i,j)=sign(X(i,j))×(abs(X(i,j))×MQM(i,j,idx)+f)>>Q_bit    (5)

 Here, Q_bit denotes the quantization scale designed as a power of two (Q_step=2^Q_bit), and >> denotes a right bit shift. In this case, the division can be replaced by the bit shift, and the processing amount needed for the division can be reduced.
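The power-of-two variant of Equation (5) is a one-line change from the division form. The MQM, f, and Q_bit values in the test are illustrative assumptions.

```python
import numpy as np

def quantize_shift(X, MQM, q_bit, f):
    """Equation (5): with Q_step = 2**q_bit, the integer division of
    Equation (4) is replaced by an arithmetic right shift."""
    X = np.asarray(X, dtype=np.int64)
    mag = (np.abs(X) * MQM + f) >> q_bit   # shift replaces division by 2**q_bit
    return np.sign(X) * mag
```

Because the shifted quantity is non-negative (the sign has already been separated), the right shift is exactly floor division by 2^Q_bit, so the two forms agree.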
 In order to maximally suppress the operation precision, the operation precision can be changed for every frequency component. In this case, Equation (3) is transformed as follows.

Y(i,j)=sign(X(i,j))×(abs(X(i,j))×MQM(i,j,idx)×LS(i,j)+f)>>Q_bit    (6)

 Here, LS denotes an operation precision control parameter that adjusts the operation precision of the quantization process for every frequency component. That is, LS is used to change the operation scale for every frequency place when the quantization process is performed, and is called LevelScale or normAdjust. The operation precision control parameter LS exploits the property that a value with a large absolute value rarely occurs in a high frequency component of the transform coefficients (the lower right region of each of FIGS. 5A and 5B). LS, and the corresponding inverse quantization parameter ILS described in detail below, need to be designed so that the operation scale is adjusted consistently by the quantization and the inverse quantization.

 Next, the modulation quantization matrix 133 output from the quantization matrix modulating unit 110 will be described. The quantization matrix 131 before the modulation is a matrix that can change the roughness of quantization for every frequency component of the transform coefficients 123. An example of the quantization matrix 131 corresponding to a 4×4 pixel block is represented by the following equation.

$\mathrm{QM}(i,j)=\begin{bmatrix}16&20&24&28\\20&24&28&32\\24&28&32&36\\28&32&36&40\end{bmatrix}\qquad(7)$  The frequency components (i,j) of FIG. 5A and those of equation 7 are in a one-to-one relation, with values toward the lower right corresponding to higher frequency components. For example, the matrix value at frequency position (0,3) is 28. The relationship between the quantization matrix 131 and the modulation quantization matrix 133 is expressed by the following equation.
MQM(i,j,idx)=QM(i,j)+MP(idx) (8)  Here, QM denotes the quantization matrix 131, and MQM denotes the modulation quantization matrix 133. MP denotes a modulation parameter indicating the modulation strength. In this case, the modulation index 132 designates the modulation method illustrated in equation 8 (modulation of the quantization matrix by addition of a modulation parameter) and the modulation parameter MP. Alternatively, the modulation index 132 may be the number of a table in which the modulation method is described.
 Equation 8 illustrates an example of modulating QM by adding the modulation parameter MP to the quantization matrix QM. However, subtraction, multiplication, division, or a bit shift between QM and MP may also be used to modulate QM.
 Meanwhile, when performing a different modulation on the quantization matrix QM for every frequency component, the following equation is used.

MQM(i,j,idx)=QM(i,j)+MM(i,j,idx) (9)  Here, MM denotes a modulation matrix. In this case, the modulation index 132 designates the modulation method expressed by equation 9 (modulation of the quantization matrix by addition of a modulation matrix) and the modulation matrix MM. Alternatively, the modulation index 132 may be the number of a table in which the modulation method is described.
 Here, an example of modulating QM by adding the modulation matrix MM to the quantization matrix QM is described. However, subtraction, multiplication, division, or a bit shift between QM and MM may also be used to modulate QM. Equation 8 is equivalent to the case where all components of the modulation matrix MM of equation 9 take the same value.
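The matrix modulation of equation 9 can be sketched as an element-wise addition. The helper name modulate_qm and the list-of-lists representation are illustrative.

```python
def modulate_qm(QM, MM):
    """Generate the modulation quantization matrix MQM by element-wise
    addition of the quantization matrix QM and the modulation matrix MM
    (equation 9). Equation 8 is the special case in which every element
    of MM equals the same modulation parameter MP."""
    return [[QM[i][j] + MM[i][j] for j in range(len(QM[0]))]
            for i in range(len(QM))]
```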
 Equation 10 expresses an example of a modulation matrix MM for a 4×4 quantization block. Similarly to the quantization matrix QM, the modulation matrix MM and the frequency positions illustrated in FIG. 5A are in a one-to-one relation.
$\mathrm{MM}(i,j)=\begin{bmatrix}0&1&2&3\\1&2&3&4\\2&3&4&5\\3&4&5&6\end{bmatrix}\qquad(10)$  When the quantization matrix QM has a fixed value for all frequency components, the following equation may be used instead of equation 9.

MQM(i,j,idx)=(QM+MM(i,j,idx)) (11)  Here, QM indicates that all components of QM(i,j) take the same value.
 The modulation parameter MP and the modulation matrix MM are introduced to perform a modulation on the quantization matrix QM. When no modulation is performed on QM, that is, when MP is 0 or all components of MM are 0, MQM is equivalent to the matrix calculated by the following equation.

MQM(i,j,idx)=QM(i,j) (12)  When the quantization matrix QM is not modulated, substituting the modulation matrix MM expressed by the following equation into equation 9 yields the same result as equation 12.

$\mathrm{MM}_{\mathrm{Init}}(i,j)=\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}\qquad(13)$  In this way, the quantizer 105 carries out quantization using the modulation quantization matrix 133 (MQM). Here, the quantization matrix 131 is provided as an input parameter from the encoding control unit 113 to the quantization matrix modulating unit 110; however, the quantization matrix 131 may not be provided to the quantization matrix modulating unit 110. In this case, a predetermined initial quantization matrix, for example, the matrix QM_{Init}(i,j) expressed by the following equation, is set in the quantization matrix modulating unit 110.

$\mathrm{QM}_{\mathrm{Init}}(i,j)=\begin{bmatrix}16&16&16&16\\16&16&16&16\\16&16&16&16\\16&16&16&16\end{bmatrix}\qquad(14)$  Equation 14 expresses an example in which all values of the initial quantization matrix QM_{Init}(i,j) are 16. However, another value may be used, and a different value may be set for every frequency component. The same predetermined initial quantization matrix should be set in both the video encoding apparatus and the video decoding apparatus.
 The quantization parameter needed for the quantization and the inverse quantization is set by the encoding control unit 113. The quantization parameters used in the quantizer 105 and the inverse quantizer 106 are in a one-to-one relation. The quantized transform coefficients 124 output from the quantizer 105 are input to the inverse quantizer 106 together with the modulation quantization matrix 133. The inverse quantizer 106 performs inverse quantization on the quantized transform coefficients 124 provided from the quantizer 105, using the modulation quantization matrix 133 and the quantization parameter. The inverse quantization corresponding to the quantization of equation 3 is expressed by the following equation.

X′(i,j)=Y(i,j)×MQM(i,j,idx)×Q_{step} (15)  Here, Y denotes the quantized transform coefficients 124, X′ denotes the inverse-quantized transform coefficients 125, and MQM denotes the modulation quantization matrix 133 used at the time of quantization.
 The inverse quantization corresponding to the quantization of the equation 4 is expressed by the following equation.

$X'(i,j)=\mathrm{sign}(Y(i,j))\times\left(\mathrm{abs}(Y(i,j))\times\mathrm{MQM}(i,j,\mathrm{idx})\right)\times Q_{\mathrm{step}}\qquad(16)$  Here, sign(Y) denotes a function that returns the sign of Y.
 In order to simplify the operation, if Q_{step} is designed as a power of two, the inverse quantization corresponding to equation 5 is expressed by the following equation.

$X'(i,j)=\mathrm{sign}(Y(i,j))\times\left(\left(\mathrm{abs}(Y(i,j))\times\mathrm{MQM}(i,j,\mathrm{idx})\right)\ll Q_{\mathrm{bit}}\right)\qquad(17)$  According to equation 17, the multiplication by the quantization scale can be replaced by a bit shift, reducing the amount of processing required for the multiplication.
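The inverse quantization of equation 17 can be sketched in the same style as the forward quantization above; again, the function name and matrix representation are illustrative.

```python
def inverse_quantize_pow2(Y, MQM, q_bit):
    """Inverse-quantize coefficients Y with a power-of-two quantization
    scale (equation 17): the multiplication by the quantization scale is
    replaced by a left shift of q_bit bits."""
    rows, cols = len(Y), len(Y[0])
    X = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            y = Y[i][j]
            sign = -1 if y < 0 else 1
            # abs(Y) * MQM, then the shift that replaces the multiplication
            X[i][j] = sign * ((abs(y) * MQM[i][j]) << q_bit)
    return X
```

For example, a quantized level of 1 with an MQM value of 3 and q_bit = 4 reconstructs to 3 << 4 = 48.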
 Meanwhile, the inverse quantization corresponding to equation 6, in which the operation precision is changed for every frequency component to keep the required precision low, is expressed by the following equation.

$X'(i,j)=\mathrm{sign}(Y(i,j))\times\left(\left(\mathrm{abs}(Y(i,j))\times\mathrm{MQM}(i,j,\mathrm{idx})\times\mathrm{ILS}(i,j)\right)\ll Q_{\mathrm{bit}}\right)\qquad(18)$  Here, ILS denotes an operation precision control parameter that adjusts the operation precision of the inverse quantization process for every frequency component. That is, ILS changes the operation scale for every frequency position when the inverse quantization process is performed, and is called LevelScale or normAdjust. A value corresponding to the operation precision control parameter used in the quantization is used as ILS. The inverse quantization of the H.264 high profile (for a 4×4 pixel error signal block) is expressed by the following equation. That is, in order to realize 16-bit operation precision with a small operation amount, H.264 carries out inverse quantization according to the following equation.

$X'(i,j)=\mathrm{sign}(Y(i,j))\times\left(\mathrm{abs}(Y(i,j))\times\mathrm{ILS}(m,i,j)\right)\ll\left(\mathrm{QP}/6\right)\qquad(19)$  Here, the level scale ILS(m,i,j) is a value defined in equation 20, and QP is a quantization parameter that takes values from 0 to 51.

ILS(m,i,j)=QM(i,j)×Norm(m,i,j) (20)  Here, Norm(m,i,j) is a scale adjusting parameter expressed by equation 21, and its elements are given by equation 22.

$\mathrm{Norm}(m,i,j)=\begin{cases}v_{m,0}&\text{for }(i,j)\in\{(0,0),(0,2),(2,0),(2,2)\}\\v_{m,1}&\text{for }(i,j)\in\{(1,1),(1,3),(3,1),(3,3)\}\\v_{m,2}&\text{otherwise}\end{cases}\qquad(21)$

$v_{m,n}=\begin{bmatrix}10&16&13\\11&18&14\\13&20&16\\14&23&18\\16&25&20\\18&29&23\end{bmatrix}\qquad(22)$  The quantization parameter used at the time of quantization in the quantizer 105 is also set in the inverse quantizer 106 by the encoding control unit 113, so that the same quantization parameter is used in both the quantizer 105 and the inverse quantizer 106. Likewise, the same modulation quantization matrix 133 is used in the quantizer 105 and the inverse quantizer 106.
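The level-scale construction of equations 20 to 22 can be sketched as follows for a 4×4 block. The parity test is a compact way of expressing the (i,j) sets of equation 21, and the function names are illustrative.

```python
# v[m][n] table of equation 22: rows are m (0..5), columns are n = 0..2.
V = [[10, 16, 13],
     [11, 18, 14],
     [13, 20, 16],
     [14, 23, 18],
     [16, 25, 20],
     [18, 29, 23]]

def norm(m, i, j):
    """Scale-adjusting parameter Norm(m,i,j) of equation 21 for a 4x4
    block: (0,0),(0,2),(2,0),(2,2) have both indices even, and
    (1,1),(1,3),(3,1),(3,3) have both indices odd."""
    if i % 2 == 0 and j % 2 == 0:
        return V[m][0]
    if i % 2 == 1 and j % 2 == 1:
        return V[m][1]
    return V[m][2]

def level_scale(QM, m, i, j):
    """ILS(m,i,j) = QM(i,j) * Norm(m,i,j), equation 20."""
    return QM[i][j] * norm(m, i, j)
```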
 A loop of the subtractor 101 → the transformer 104 → the quantizer 105 → the inverse quantizer 106 → the inverse transformer 107 → the adder 108 → the reference memory 109 in FIG. 1 is called an encoding loop. One round of the encoding loop processes one combination of a prediction mode, a modulation index, and a block size selectable for the to-be-encoded block, for example, a combination of an intra-prediction mode, modulation index 0, and an 8×8 block size, or a combination of an inter-prediction mode, modulation index 0, and a 4×4 block size. The encoding loop is thus executed a plurality of times for the to-be-encoded block. When all selectable combinations have been processed, the input image signal 120 of the next block is input, and encoding of that block is performed.  The encoding control unit 113 controls the entire encoding process, including rate control that adjusts the number of generated encoded bits through feedback control, quantization characteristic control, mode determination control, control of the predictor 102, and control of external input parameters. The encoding control unit 113 also controls the output buffer 112 and outputs the encoding bit stream 130 to the outside at appropriate timing.
 The processes of the encoding unit 100 and the encoding control unit 113 may be realized by hardware, or by software (a program) executed on a computer.
 (Specific Example of a Quantization Matrix Modulating Unit 110)
 Next, a specific example of the quantization matrix modulating unit 110 will be described. As illustrated in FIG. 6, the quantization matrix modulating unit 110 has a modulation matrix setting unit 201 and a modulation quantization matrix generating unit 202. The modulation index 132 output from the mode determining unit 103 in FIG. 1 is input to the modulation matrix setting unit 201. The quantization matrix 131, which is set as an input parameter from the encoding control unit 113 in FIG. 1 and held in advance, is input to the modulation quantization matrix generating unit 202.  The modulation matrix setting unit 201 sets the modulation matrix 134 corresponding to the modulation index 132 in the modulation quantization matrix generating unit 202. The modulation quantization matrix generating unit 202 modulates the quantization matrix 131 using the modulation matrix 134 to generate the modulation quantization matrix 133, which is output from the quantization matrix modulating unit 110.
 (Modulation Matrix Setting Unit 201)
 As illustrated in FIG. 7, the modulation matrix setting unit 201 has a switch 301 and modulation matrix generating units 302, 303, and 304, which differ from each other in generation method or modulation parameters. The switch 301 activates one of the modulation matrix generating units 302, 303, and 304 according to the value of the input modulation index 132. For example, when the modulation index 132 is idx=0, the switch 301 operates the modulation matrix generating unit 302. Similarly, the switch 301 operates the modulation matrix generating unit 303 in the case of idx=1, and the modulation matrix generating unit 304 in the case of idx=N−1. The modulation matrix 134 is generated by the activated modulation matrix generating unit and is set in the modulation quantization matrix generating unit 202.  A specific method for generating the modulation matrix 134 will now be described; a method for generating the modulation matrix 134 is hereinafter called a modulation model, and two such models are illustrated. For the components of the modulation matrix 134 expressed by equations 24 and 25, the distance from the component in the first row and the first column is defined as a city-block distance by the following equation.

r=i+j (23)  For example, in FIG. 5A, the distance of the frequency component located at (i,j)=(3,3) is 6. Meanwhile, in the case of the 8×8 block illustrated in FIG. 5B, the distance of the frequency component located at (i,j)=(3,7) is 10.  In this embodiment, in which the modulation matrix 134 is added to the quantization matrix 131, each frequency component of the quantization matrix 131 and of the modulation matrix 134 is in a one-to-one relation. That is, a large value of r corresponds to a high frequency component and a small value of r to a low frequency component, so a modulation can be applied selectively to high or low frequency components. Hereinafter, modulation models for modulating the quantization matrix 131 will be described.

FIG. 8 illustrates a modulation model defined by a linear function, which is represented by the following equation. 
MM(i,j)=a×r (24)  In equation 24, a denotes a parameter that controls the modulation strength; hereinafter, the parameter a is called a modulation control parameter. The line defined by a lies in the first quadrant of FIG. 8 when a is positive and in the fourth quadrant when a is negative. Accordingly, when the modulation control parameter a has a large value, a strong modulation is applied to the high frequency components.
FIG. 9 illustrates a modulation model in the case of using a linear function and a sine function, which is expressed by the following equation. 
MM(i,j)=a×r+b×sin(c×r) (25)  In equation 25, b and c denote modulation control parameters, similarly to a. The sine term adds a distortion to the linear function: the modulation control parameter c controls the variation period of the sine function, and the modulation control parameter b controls the strength of the distortion.
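The two modulation models can be sketched together as below. Rounding the sine term to an integer is an assumption, and the function name modulation_matrix is illustrative. Note that with a = 1 and b = c = 0, the linear model reproduces the modulation matrix of equation 10.

```python
import math

def modulation_matrix(size, a, b=0.0, c=0.0):
    """Generate a modulation matrix from the linear model of equation 24
    (MM = a*r) or, when b and c are nonzero, the linear-plus-sine model
    of equation 25 (MM = a*r + b*sin(c*r)), where r = i + j is the
    distance of equation 23. Rounding to an integer is an assumption."""
    return [[round(a * (i + j) + b * math.sin(c * (i + j)))
             for j in range(size)] for i in range(size)]
```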
 Here, an example of using a linear function model or a sine function model as the modulation model is illustrated, but other modulation models may be used, such as a logarithmic model, an autocorrelation function model, a proportional/inversely proportional model, an N-th order polynomial model (N≥1), or a generalized Gaussian model including the Gaussian and Laplacian functions. Whichever model is used, it is important that the video decoding apparatus use the same modulation as the video encoding apparatus; this is ensured by designating the modulation model with the modulation index 132 in the video encoding apparatus.
 For convenience of explanation, the modulation matrix generating units 302, 303, and 304 correspond to index 0, index 1, and index (N−1), respectively. However, modulation matrix generating units may be prepared according to the index number N, and the same modulation matrix generating unit may be used for different index values. For example, Tables 1 to 3 illustrate examples of combinations of modulation models and modulation control parameters for the modulation index 132.

TABLE 1
Modulation index    Modulation      Parameter  Parameter  Parameter
number (N = 4)      model           a          b          c
------------------------------------------------------------------
0                   N/A             N/A        N/A        N/A
1                   Equation (24)   −2         N/A        N/A
2                   Equation (24)    2         N/A        N/A
3                   Equation (24)    4         N/A        N/A
TABLE 2
Modulation index    Modulation      Parameter  Parameter  Parameter
number (N = 8)      model           a          b          c
------------------------------------------------------------------
0                   N/A             N/A        N/A        N/A
1                   Equation (24)   −2         N/A        N/A
2                   Equation (24)   −1         N/A        N/A
3                   Equation (24)    1         N/A        N/A
4                   Equation (24)    2         N/A        N/A
5                   Equation (25)   −1         2          π/4
6                   Equation (25)    1         2          π/4
7                   Equation (25)    1         2          π/4
TABLE 3
Modulation index    Modulation      Parameter  Parameter  Parameter
number              model           a          b          c
------------------------------------------------------------------
…                   …               …          …          …
−3                  Equation (24)   −3         N/A        N/A
−2                  Equation (24)   −2         N/A        N/A
−1                  Equation (24)   −1         N/A        N/A
0                   Equation (24)    0         N/A        N/A
1                   Equation (24)    1         N/A        N/A
2                   Equation (24)    2         N/A        N/A
3                   Equation (24)    3         N/A        N/A
…                   …               …          …          …

 In Tables 1 to 3, the symbol N/A means that the corresponding parameter is not used in the specified modulation model. Index 0 indicates the case where no modulation is performed, that is, equation 12 is used.
 Table 1 illustrates an example of combinations of a modulation model and modulation control parameters when the modulation index is specified by 2 bits (N=4). In this case, since only the modulation model of equation 24 is used, a single modulation matrix generating unit in FIG. 7 suffices: the preset modulation control parameter a is read according to the modulation index, and the modulation matrix is generated.  Table 2 illustrates an example in which the modulation index is specified by 3 bits (N=8) and a plurality of modulation models are used, namely the two models of equations 24 and 25. As in Table 1, the modulation matrix is generated according to the predetermined modulation control parameters.
 As illustrated in Table 1, when a modulation model represented by only one modulation control parameter is used, the value of the modulation index may be directly associated with the modulation control parameter; Table 3 illustrates such a case. With the associations of Tables 1 and 2, the modulation matrix is generated according to a predetermined table, whereas with Table 3 the modulation strength of the quantization matrix can be changed directly. That is, no table needs to be prepared in advance, and a large value can be set directly to generate the modulation matrix, if necessary.
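Table-driven generation of the modulation matrix from the modulation index can be sketched as below. The dictionary TABLE_1 is a hypothetical encoding of Table 1, where None stands for "no modulation" (equation 12); the function name is illustrative.

```python
# Hypothetical encoding of Table 1 (N = 4): modulation index -> modulation
# control parameter a of equation 24; None means no modulation (equation 12).
TABLE_1 = {0: None, 1: -2, 2: 2, 3: 4}

def modulation_matrix_for_index(idx, size=4):
    """Look up parameter a for the given modulation index and generate
    the linear-model matrix a*(i+j); index 0 yields the all-zero matrix
    of equation 13, i.e. the quantization matrix is left unmodulated."""
    a = TABLE_1[idx]
    if a is None:
        return [[0] * size for _ in range(size)]
    return [[a * (i + j) for j in range(size)] for i in range(size)]
```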
 (Modulation Quantization Matrix Generating Unit 202)
 As illustrated in FIG. 10, the modulation quantization matrix generating unit 202 has an arithmetic operator 401. The arithmetic operator 401 can perform basic operations such as subtraction, multiplication, division, and bit shift, as well as addition; by combining these basic operations, matrix addition, subtraction, multiplication, and division can also be performed.  In the arithmetic operator 401, the modulation matrix 134 is input from the modulation matrix setting unit 201 and the quantization matrix 131 is input from the encoding control unit 113, and a modulation is performed on the quantization matrix 131. In this embodiment, the quantization matrix 131 is modulated by addition of the modulation matrix (MM) as expressed by equation 9, and the modulation quantization matrix 133 is generated. The generated modulation quantization matrix 133 is output from the modulation quantization matrix generating unit 202.
 Next, a modulation of the quantization matrix will be described using
FIGS. 11A, 11B, 11C, and 11D. FIG. 11A illustrates the quantization matrix allocated to a macroblock when the modulation matrix is not used, as in equation 12. In this case, since the same quantization matrix 131 is applied to all macroblocks of an encoding slice, the quantization matrix is described as a slice quantization matrix in FIG. 11A.  Meanwhile, FIG. 11B illustrates an example using two modulation matrixes (N=2), and FIG. 11D illustrates an example using the four modulation matrixes (N=4) illustrated in FIG. 11C. FIG. 11C illustrates four modulation matrixes set by the modulation matrix setting unit 201 for the quantization matrix 131. The modulation quantization matrix generating unit 202 performs the modulation by addition of the modulation matrix (MM) expressed by equation 9, and a quantization matrix with a different characteristic (called a block quantization matrix) can be generated for a local region in the encoding slice, as illustrated in FIGS. 11B and 11D. Thereby, a different quantization matrix can be applied in each local area of the encoding slice.
 Next, a video encoding process sequence according to this embodiment will be described using
FIG. 12. When a moving picture signal is input to the video encoding apparatus, a to-be-encoded moving picture frame is read (S001), the read moving picture frame is divided into a plurality of macroblocks, and the input image signal 120 is input to the encoding unit 100 in macroblock units (S002). At this time, the mode determining unit 103 initializes the prediction mode: mode, the modulation index 132: index, and the encoding cost: min_cost (S003).  Next, the predictor 102 generates a prediction image signal 121 in one mode selectable for the to-be-encoded block, using the input image signal 120 (S004). Although not illustrated in FIG. 12, the difference between the input image signal 120 and the generated prediction image signal 121 is calculated to generate the prediction residual signal 122. The generated prediction residual signal 122 is subjected to an orthogonal transform by the transformer 104 (first half of S006), and the transform coefficients 123 generated by the orthogonal transform are input to the quantizer 105.  Meanwhile, a modulation matrix is set according to the value of the modulation index 132: index selected by the mode determining unit 103 (S005). The modulation quantization matrix 133 is generated by the quantization matrix modulating unit 110 using the set modulation matrix, and the transform coefficients 123 are quantized by the quantizer 105 using the modulation quantization matrix 133 (second half of S006). Then, the encoding distortion D and the number of encoded bits R are calculated, and the encoding cost: cost is calculated using equation 3 (S007).
 The mode determining unit 103 determines whether the calculated encoding cost: cost is smaller than a minimum cost: min_cost (S008). When cost is smaller than the minimum cost: min_cost (when the result of S008 is YES), min_cost is updated by cost, the prediction mode at this time is held as best_mode, and the modulation index 132: index at this time is held as best_index (S009). At the same time, the prediction image signal 121 is temporarily stored in an internal memory (S010).
 Meanwhile, when cost is not smaller than the minimum cost: min_cost (when the result of S008 is NO), the modulation index 132: index is incremented, and it is determined whether the incremented index exceeds the final modulation index (S011). When the index is larger than IMAX, the final number of the modulation index 132 (when the result of S011 is YES), the information of best_index is delivered to the entropy encoder 111. Meanwhile, when the index is smaller than IMAX (when the result of S011 is NO), the encoding loop is executed again using the updated modulation index.
 When the result of step S011 is YES, the prediction mode: mode is incremented, and it is determined whether the incremented mode exceeds the final prediction mode (S012). When the mode is larger than MMAX, the final number of the prediction mode (when the result of S012 is YES), the prediction mode information of best_mode and the quantized transform coefficients 124 are transmitted to the entropy encoder 111, and entropy encoding of the previously fixed modulation index 132 and the transform coefficients is performed (S013). Meanwhile, when the mode is smaller than MMAX (when the result of S012 is NO), the encoding loop is performed for the next prediction mode.
 When encoding in best_mode and best_index is performed, the quantized transform coefficients 124 are input to the inverse quantizer 106, and inverse quantization is performed using the same best_index as the modulation index used at the time of quantization (first half of S014). Further, the inverse-quantized transform coefficients 125 are input to the inverse transformer 107, and an inverse transform is performed (second half of S014). The reproduced prediction residual signal 126 and the prediction image signal 121 of best_mode provided from the mode determining unit 103 are added, and the resulting decoding image signal 127 is held in the reference memory 109 (S015).
 Here, it is determined whether the encoding process of one frame is completed (S016). When it is completed (when the result of S016 is YES), the image signal of the next frame is input and the encoding process is performed. Meanwhile, when the encoding process of one frame is not completed (when the result of S016 is NO), the image signal of the next target block is input, and the encoding process is continued.
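The mode/index decision loop of FIG. 12 amounts to an exhaustive search over (prediction mode, modulation index) pairs. This sketch abstracts the cost computation of step S007 into a caller-supplied cost_fn, so all names are illustrative.

```python
def choose_best(modes, indexes, cost_fn):
    """Try every combination of prediction mode and modulation index and
    keep the pair with the minimum encoding cost, mirroring the
    best_mode/best_index bookkeeping of steps S008-S009."""
    best_mode, best_index, min_cost = None, None, float("inf")
    for mode in modes:
        for index in indexes:
            cost = cost_fn(mode, index)
            if cost < min_cost:
                min_cost, best_mode, best_index = cost, mode, index
    return best_mode, best_index, min_cost
```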
 (Method for Encoding Syntax)
 Next, a method for encoding syntax used in this embodiment will be described.
FIG. 13 schematically illustrates the structure of the syntax used in this embodiment. The syntax mainly includes three parts. In the high level syntax 501, syntax information of layers higher than the slice is written. In the slice level syntax 502, information needed for every slice is written. In the macroblock level syntax 503, a change value of the quantization parameter or mode information needed for every macroblock is written.  The syntaxes 501 to 503 further include detailed syntaxes. The high level syntax 501 includes sequence level and picture level syntaxes, such as the sequence parameter set syntax 504 and the picture parameter set syntax 505. The slice level syntax 502 includes the slice header syntax 506 and the slice data syntax 507. The macroblock level syntax 503 includes the macroblock layer syntax 508 and the macroblock prediction syntax 509.
 The syntax information needed in this embodiment includes the sequence parameter set syntax 504, the picture parameter set syntax 505, the slice header syntax 506, and the macroblock layer syntax 508. These individual syntaxes will be described in detail below.
 As illustrated in the sequence parameter set syntax of
FIG. 14, seq_moduletaed_quantization_matrix_flag is a flag indicating whether the use of a modulation of the quantization matrix, that is, whether the quantizer 105 quantizes using the modulation quantization matrix 133 or using the quantization matrix 131 before the modulation, can be switched for every sequence. When the flag seq_moduletaed_quantization_matrix_flag is TRUE, whether or not to use the modulation of the quantization matrix can be switched in sequence units. Meanwhile, when the flag seq_moduletaed_quantization_matrix_flag is FALSE, the modulation of the quantization matrix cannot be used in the sequence.
FIG. 15 , pic_moduletaed_quantization_matrix_flag is a flag indicating whether use or nonuse of a modulation of a quantization matrix is changed for every picture. When the corresponding flag pic_moduletaed_quantization_matrix_flag is TRUE, it is possible to switch whether or not to use the modulation of the quantization matrix in a picture unit. Meanwhile, when the corresponding flag pic_moduletaed_quantization_matrix_flag is FALSE, the modulation of the quantization matrix cannot be used in the picture.  As illustrated in the slice header syntax of
FIG. 16 , slice_moduletaed_quantization_matrix_flag is a flag indicating whether use or nonuse of a modulation of a quantization matrix is changed for every slice. When the corresponding flag slice_moduletaed_quantization_matrix_flag is TRUE, it is possible to switch whether or not to use the modulation of the quantization matrix in a slice unit. Meanwhile, when the corresponding flag slice_moduletaed_quantization_matrix_flag is FALSE, the modulation of the quantization matrix cannot be used in the slice.  As illustrated in the macroblock layer syntax of
FIG. 17, modulation_index denotes a modulation index. In the syntax, coded_block_pattern is an index indicating whether transform coefficients are generated in the corresponding block. When the index coded_block_pattern is 0, the transform coefficients are not generated in the corresponding macroblock, so it is not necessary to perform inverse quantization at the time of decoding. In this case, since information related to the quantization matrix does not need to be transmitted, modulation_index is not transmitted.
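The transmission condition for modulation_index described in the two paragraphs around FIG. 17 can be sketched as a predicate. The function names and boolean arguments are illustrative, not part of the syntax itself.

```python
def current_modulated_qm_flag(seq_flag, pic_flag, slice_flag):
    """CurrentModulatedQuantizationMatrixFlag: TRUE when at least one of
    the sequence-, picture-, or slice-level flags is TRUE."""
    return seq_flag or pic_flag or slice_flag

def send_modulation_index(coded_block_pattern, skip_mode,
                          seq_flag, pic_flag, slice_flag):
    """modulation_index is transmitted only when the macroblock carries
    transform coefficients (coded_block_pattern != 0), is not in skip
    mode, and the modulated quantization matrix is enabled at some level."""
    if coded_block_pattern == 0 or skip_mode:
        return False
    return current_modulated_qm_flag(seq_flag, pic_flag, slice_flag)
```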
 The macroblock data syntax illustrated in
FIG. 17 may be changed to the syntax illustrated in FIG. 18 . In the syntax illustrated in FIG. 18 , modulation_strength is transmitted instead of modulation_index in the syntax of FIG. 17 . As described above, modulation_index refers to a previously held table in which the modulation model and the modulation control parameter are determined. Meanwhile, with modulation_strength, the modulation model is fixed, and a value of the modulation control parameter is transmitted directly. That is, the syntax of FIG. 18 corresponds to the method described in Table 3. In this case, the number of encoded bits needed to transmit modulation_strength is generally increased, but the degree of freedom to change the modulation strength of the quantization matrix is high. Therefore, flexible quantization is enabled. Accordingly, either the syntax of FIG. 17 or the syntax of FIG. 18 may be selected in consideration of a balance between the decoded image and the number of encoded bits.  In
FIG. 18 , CurrentModulatedQuantizationMatrixFlag is TRUE when at least one of seq_moduletaed_quantization_matrix_flag, pic_moduletaed_quantization_matrix_flag, and slice_moduletaed_quantization_matrix_flag is TRUE, and becomes FALSE otherwise. When the corresponding flag CurrentModulatedQuantizationMatrixFlag is FALSE, modulation_strength is not transmitted, and the modulation index 132 is set to a value corresponding to 0.  As another example, the slice header syntax illustrated in
FIG. 16 may be changed to the syntax illustrated in FIG. 19 . The syntax of FIG. 19 differs from the syntax of FIG. 16 in that three indexes, slice_modulation_length, slice_modulation_model, and slice_modulation_type, are additionally transmitted when slice_moduletaed_quantization_matrix_flag is TRUE. 
FIG. 20 illustrates an example of semantics for these syntax elements. The slice_modulation_length indicates a maximum value of the modulation index 132. For example, when slice_modulation_length is 2, modulation matrices of N=4 kinds can be used. The slice_modulation_model indicates the modulation model to be used. For example, when slice_modulation_model is 0, the equation 19 is used, and when slice_modulation_model is 1, a modulation model corresponding to the equation 20 is allocated. The slice_modulation_type defines the modulation operation method of the modulation matrix for the quantization matrix. For example, when slice_modulation_type is 0, a modulation by addition is performed, and when slice_modulation_type is 4, a modulation by a bit shift is performed.  As described above, in the first embodiment, a modulation is performed on the quantization matrix, quantization/inverse quantization is performed on the transform coefficients using the modulation quantization matrix, and the quantized transform coefficients and a modulation index indicating the modulation method of the quantization matrix are subjected to entropy encoding. Accordingly, as compared to the related art, encoding can be realized that maintains high encoding efficiency without increasing the decoding-side operation cost. That is, appropriate encoding can be performed according to the contents of a target block.
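As an illustration, the operation selected by slice_modulation_type might be applied to a quantization matrix as in the following sketch. Only the two type codes named in the text (0 for addition, 4 for bit shift) are covered; the helper name and matrix representation are assumptions.

```python
def apply_modulation(qm, mm, slice_modulation_type):
    # Apply the modulation matrix MM to the quantization matrix QM,
    # element by element, according to slice_modulation_type.
    # Only type 0 (addition) and type 4 (bit shift) from the text are
    # shown; the remaining type codes are left out of this sketch.
    rows, cols = len(qm), len(qm[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if slice_modulation_type == 0:
                out[i][j] = qm[i][j] + mm[i][j]   # modulation by addition
            elif slice_modulation_type == 4:
                out[i][j] = qm[i][j] << mm[i][j]  # modulation by bit shift
            else:
                raise ValueError("type code not covered in this sketch")
    return out
```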
 When the quantizer 105 and the inverse quantizer 106 perform quantization and inverse quantization corresponding to the equations 6 and 18, instead of performing the modulation on the quantization matrix as in the first embodiment, a modulation may be performed on an operation precision control parameter to control operation precision at the time of quantization/inverse quantization. In this case, the equations 6 and 18 are changed as follows.

Y(i,j)=sign(X(i,j))×(abs(X(i,j))×QM(i,j)×MLS(i,j,idx)+f)>>Q_bit (26)

X′(i,j)=sign(Y(i,j))×(abs(Y(i,j))×QM(i,j)×IMLS(i,j,idx))<<Q_bit (27)

 Here, MLS and IMLS are modulated operation precision control parameters, which are expressed by the following equations.

MLS(i,j,idx)=(LS(i,j)+MM(i,j,idx)) (28) 
IMLS(i,j,idx)=(ILS(i,j)+MM(i,j,idx)) (29)  As such, by adjusting the value of the modulation matrix, the modulation on the operation precision control parameters LS and ILS is substantially equivalent to the modulation on the quantization matrix. When the equations 26 and 27 are used, the operation precision control parameters LS and ILS may be modulated using subtraction, multiplication, division, or bit shift in addition to addition.
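For concreteness, the scalar arithmetic of the equations 26 to 29 can be sketched as follows. The function names and sample values are illustrative only, and the right-shift rounding convention shown here is an assumption about the operation precision control.

```python
def sign(x):
    return -1 if x < 0 else 1


def modulated_precision_param(ls, mm):
    # Equations (28)/(29): the modulation matrix entry MM(i,j,idx) is
    # added to the operation precision control parameter LS or ILS.
    return ls + mm


def quantize_coeff(x, qm, mls, f, q_bit):
    # Equation (26) for a single coefficient: scale by the quantization
    # matrix entry and the modulated precision parameter, add the
    # rounding offset f, then shift down by Q_bit.
    return sign(x) * ((abs(x) * qm * mls + f) >> q_bit)
```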

FIG. 21 illustrates a video encoding apparatus according to the second embodiment. In this case, the quantization matrix modulating unit 110 in the video encoding apparatus according to the first embodiment illustrated in FIG. 1 is replaced by the operation precision control parameter modulating unit 140.  In the operation precision control parameter modulating unit 140, the operation precision control parameter 141 corresponding to LS of the equation 28 or ILS of the equation 29 is provided from the encoding control unit 113. Further, the modulation index 142 that corresponds to idx of the equations 26 to 29 and indicates a modulation method is provided from the mode determining unit 103. In the operation precision control parameter modulating unit 140, a modulation is performed on the operation precision control parameter 141 in accordance with the modulation method indicated by the modulation index 142, and the modulated operation precision control parameter (called modulation control parameter) 143 corresponding to MLS of the equation 28 or IMLS of the equation 29 is generated.
 The modulation control parameter 143 is provided to the quantizer 105 and the inverse quantizer 106. In the quantizer 105 and the inverse quantizer 106, quantization of the transform coefficients 123 and inverse quantization of the quantized transform coefficients 124 are performed according to the modulation control parameter 143.
 As such, according to the second embodiment, the same effect as in the first embodiment can be obtained by performing the modulation of the operation precision control parameter that controls the operation precision at the time of quantization/inverse quantization, which is a process equivalent to the modulation of the quantization matrix in the first embodiment.
 When the quantizer 105 and the inverse quantizer 106 perform quantization and inverse quantization corresponding to the equations 4 and 16, instead of performing the modulation on the quantization matrix as in the first embodiment, a modulation may be performed on the quantization parameter. In this case, Equations 4 and 16 are transformed as follows.

Y(i,j)=sign(X(i,j))×(abs(X(i,j))×QM(i,j)×LS(i,j)+f)÷QP_step(i,j,idx) (30)

X′(i,j)=sign(Y(i,j))×(abs(Y(i,j))×QM(i,j)×ILS(i,j)×QP_step(i,j,idx)) (31)

 Here, QP_step is a modulation quantization parameter, which is represented by the following equation.

QP_step(i,j,idx)=(Q_step+MM(i,j,idx)) (32)  Here, Q_step denotes a quantization parameter.
 As such, the modulation on the quantization parameter Q_step is synonymous with the modulation on the quantization matrix. With respect to the quantization/inverse quantization as in the equations 5 and 17 and the equations 6 and 18, a modulation can be performed on the quantization parameter by adjusting the operation precision control parameter.
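A scalar sketch of the equations 30 to 32, with illustrative numbers; integer division stands in for the division in the equation 30, and all names are hypothetical.

```python
def sign(x):
    return -1 if x < 0 else 1


def modulated_qp_step(q_step, mm):
    # Equation (32): QP_step(i,j,idx) = Q_step + MM(i,j,idx)
    return q_step + mm


def quantize_eq30(x, qm, ls, qp_step, f=0):
    # Equation (30): divide by the modulated quantization parameter.
    return sign(x) * ((abs(x) * qm * ls + f) // qp_step)


def inverse_quantize_eq31(y, qm, ils, qp_step):
    # Equation (31): multiply by the modulated quantization parameter.
    return sign(y) * (abs(y) * qm * ils * qp_step)
```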

FIG. 22 illustrates a video encoding apparatus according to the third embodiment. In this case, the quantization matrix modulating unit 110 in the video encoding apparatus according to the first embodiment illustrated in FIG. 1 is replaced by a quantization parameter modulating unit 150.  In the quantization parameter modulating unit 150, the quantization parameter 151 corresponding to Q_step of the equation 32 is provided from the encoding control unit 113. Further, the modulation index 152 corresponding to idx of the equations 30 and 31 and indicating a modulation method is provided from the mode determining unit 103. In the quantization parameter modulating unit 150, a modulation is performed on the quantization parameter 151 in accordance with the modulation method indicated by the modulation index 152, and the modulated quantization parameter (called modulation quantization parameter) 153 corresponding to QP_step of the equations 30 to 32 is generated.
 The modulation quantization parameter 153 is provided to the quantizer 105 and the inverse quantizer 106. In the quantizer 105 and the inverse quantizer 106, quantization of the transform coefficients 123 and inverse quantization of the quantized transform coefficients 124 are performed in accordance with the modulation quantization parameter 153.
 As such, according to the third embodiment, the same effect as in the first embodiment can be obtained by performing the modulation of the quantization parameter at the time of quantization/inverse quantization, which is a process equivalent to the modulation of the quantization matrix in the first embodiment.

FIG. 23 illustrates a video encoding apparatus according to a fourth embodiment of the present invention. In this case, the quantization matrix modulating unit 110 in the video encoding apparatus according to the first embodiment illustrated in FIG. 1 is replaced by a quantum scale table modulating unit 160.  In the quantum scale table modulating unit 160, a quantum scale table 161 to be described in detail below is provided from the encoding control unit 113, and a modulation index 162 indicating a modulation method is provided from the mode determining unit 103. In the quantum scale table modulating unit 160, a modulation is performed on the quantum scale table 161 in accordance with the modulation method indicated by the modulation index 162, and a modulation quantum scale table 163 is generated.
 The modulation quantum scale table 163 is provided to the quantizer 105 and the inverse quantizer 106. In the quantizer 105 and the inverse quantizer 106, quantization of the transform coefficients 123 and inverse quantization of the quantized transform coefficients 124 are performed in accordance with the modulation quantum scale table 163.
 Specifically, the quantum scale table modulating unit 160 has a function of setting a change width of a quantum scale controlled by a quantization parameter determining roughness of quantization. At this time, the quantization performed by the quantizer 105 and the inverse quantization performed by the inverse quantizer 106 are represented by the following equations.

Y(i,j)=sign(X(i,j))×(abs(X(i,j))×QM(i,j)×LS(i,j)+f)÷QT_step(qp,Tidx) (33)

X′(i,j)=sign(Y(i,j))×(abs(Y(i,j))×QM(i,j)×ILS(i,j))×QT_step(qp,Tidx) (34)

 Here, QT_step denotes a quantization scale, and the roughness of the quantization is controlled according to the value of the quantization scale. Meanwhile, qp denotes a quantization parameter, and the quantization scale determined by qp is derived. Tidx denotes the modulation index 162 for the quantum scale table. Here, if qp is changed, the quantization scale is varied, and the roughness of the quantization is also varied.
 In the moving picture encoding method according to the related art like H.264, a fixed quantization scale is derived according to a value of the quantization parameter. In this embodiment, a width of the quantization scale when the quantization parameter is changed can be changed by the modulation index 162.
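The idea can be sketched with a small lookup standing in for the quantum scale table. The values below reproduce part of Table 4 (Tidx = 0 to 2), with Δ as the base scale step; the data layout and function names are assumptions made for illustration, and the structure rather than the exact values is the point.

```python
# Each Tidx maps a qp variation (qp - QP) to a quantization-scale
# variation in units of the base step DELTA, following part of Table 4.
SCALE_VARIATION = {
    0: {-3: -3.0, -2: -2.0, -1: -1.0, 0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0},
    1: {-3: -4.0, -2: -3.0, -1: -2.0, 0: 0.0, 1: 2.0, 2: 3.0, 3: 4.0},
    2: {-3: -2.0, -2: -1.0, -1: -0.5, 0: 0.0, 1: 0.5, 2: 1.0, 3: 2.0},
}


def qt_step(base_scale, delta, qp, ref_qp, tidx):
    # Modulated quantization scale QT_step(qp, Tidx): the reference
    # scale shifted by the Tidx-dependent variation looked up above.
    return base_scale + SCALE_VARIATION[tidx][qp - ref_qp] * delta
```

Changing tidx alone thus widens or narrows the step taken for the same qp change, which is exactly the precision modulation the fourth embodiment describes.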

FIG. 24 illustrates a relationship between a quantization parameter and a quantization scale. In this embodiment, a table on which the quantization parameter and the quantization scale are associated with each other is called a quantum scale table. Each circle illustrated inFIG. 24 indicates a quantization parameter qp (QP±i; i=1, 2, . . . ). That is, QP denotes a reference quantization parameter (called a reference parameter), and a quantization parameter qp denotes a variation from QP. Meanwhile, a distance between the circles indicates a quantization scale Δ. 
FIG. 24A illustrates an example in which the modulation index 162 corresponds to Tidx=0. Specifically, FIG. 24A illustrates an example of a quantum scale table when the precision of the quantization scale is not changed (when a modulation of the quantum scale table is not performed). As illustrated in FIG. 24A , when the quantization parameter qp is changed from the reference parameter QP, the quantization scale Δ varies linearly according to the quantization parameter. The variation in the quantization parameter is made according to the buffer amount of the output buffer 112, as is well known.  Meanwhile,
FIG. 24B illustrates an example in which the modulation index 162 is Tidx=1. In this example, the quantization scale Δ when qp is increased or decreased by ±1 is approximately doubled. FIG. 24C illustrates an example in which the modulation index 162 is Tidx=2. In this example, the quantization scale Δ when qp is increased or decreased by ±1 is halved. FIG. 24D illustrates an example in which the modulation index 162 is Tidx=3. In this example, the quantization scale when qp is increased or decreased by ±2 is halved. Here, the modulation of the quantum scale table means that the reference quantum scale table illustrated in FIG. 24A is varied according to the modulation index 162 as illustrated in FIGS. 24B, 24C, and 24D. In this case, FIG. 24A corresponds to the quantum scale table 161 that is input to the quantum scale table modulating unit 160, and FIGS. 24B, 24C, and 24D correspond to the modulation quantum scale table 163.  Table 4 illustrates the variation value of the quantization parameter corresponding to the modulation index 162 (Tidx) and the variation value of the quantization scale at that time. In accordance with Table 4, the change width of the quantization scale corresponding to the target block is determined from the provided qp, and QT_step is set. This table information is called precision modulation information 603. As such, by changing the modulation index 162, the precision of the quantization scale can be changed in units of macroblocks.

TABLE 4

Precision modulation  Quantization parameter  Quantization scale
index (Tidx)          variation value         variation value
0                     . . .                   . . .
                      -3                      -3Δ
                      -2                      -2Δ
                      -1                      -Δ
                       0                       0
                       1                       Δ
                       2                       2Δ
                       3                       3Δ
                      . . .                   . . .
1                     . . .                   . . .
                      -3                      -4Δ
                      -2                      -3Δ
                      -1                      -2Δ
                       0                       0
                       1                       2Δ
                       2                       3Δ
                       3                       4Δ
                      . . .                   . . .
2                     . . .                   . . .
                      -3                      -2Δ
                      -2                      -Δ
                      -1                      -Δ/2
                       0                       0
                       1                       Δ/2
                       2                       Δ
                       3                       2Δ
                      . . .                   . . .
3                     . . .                   . . .
                      -4                      -2Δ
                      -3                      -3Δ/2
                      -2                      -Δ
                      -1                      -Δ/2
                       0                       0
                       1                       Δ/2
                       2                       Δ
                       3                       3Δ/2
                       4                       2Δ
                      . . .                   . . .

 Next, the syntaxes according to this embodiment will be described. Since the syntax structure is the same as that in
FIG. 13 described in the first embodiment, the repetitive description will be omitted.  As illustrated in the sequence parameter set syntax of
FIG. 25 , seq_moduletaed_quantization_precision_flag is a flag indicating whether use or nonuse of a modulation of quantization precision is changed for every sequence. When the corresponding flag seq_moduletaed_quantization_precision_flag is TRUE, it is possible to switch whether or not to perform the precision modulation of the quantization scale corresponding to the quantization parameter in a sequence unit. Meanwhile, when the corresponding flag seq_moduletaed_quantization_precision_flag is FALSE, the precision modulation of the quantization scale corresponding to the quantization parameter cannot be used in the sequence.  As illustrated in the picture parameter set syntax of
FIG. 26 , pic_moduletaed_quantization_precision_flag is a flag indicating whether use or nonuse of a modulation of quantization precision is changed for every picture. When the corresponding flag pic_moduletaed_quantization_precision_flag is TRUE, it is possible to switch whether or not to use the precision modulation of the quantization scale corresponding to the quantization parameter in a picture unit. Meanwhile, when the corresponding flag pic_moduletaed_quantization_precision_flag is FALSE, the precision modulation of the quantization scale corresponding to the quantization parameter cannot be used in the picture.  As illustrated in the slice header syntax of
FIG. 27 , slice_moduletaed_quantization_precision_flag is a flag indicating whether use or nonuse of a modulation of quantization precision is changed for every slice. When the corresponding flag slice_moduletaed_quantization_precision_flag is TRUE, it is possible to switch whether or not to use the precision modulation of the quantization scale corresponding to the quantization parameter in a slice unit. Meanwhile, when the corresponding flag slice_moduletaed_quantization_precision_flag is FALSE, the precision modulation of the quantization scale corresponding to the quantization parameter cannot be used in the slice.  As illustrated in the macroblock layer syntax of
FIG. 28 , precision_modulation_index indicates a precision modulation index. In the syntax, coded_block_pattern is an index indicating whether transform coefficients are generated in the corresponding block. When the corresponding index coded_block_pattern is 0, the transform coefficients are not generated in the corresponding macroblock, so it is not necessary to perform inverse quantization at the time of decoding. In this case, since information related to the quantization process does not need to be transmitted, precision_modulation_index is not transmitted.  Meanwhile, mode is an index indicating a prediction mode. When the corresponding index mode selects a skip mode, the corresponding block does not transmit the transform coefficients, similarly to the above case. Accordingly, precision_modulation_index is not transmitted.
 As illustrated in
FIG. 28 , mb_qp_delta denotes a variation value of the quantization parameter. In the video encoding method according to the related art like H.264, mb_qp_delta is a syntax element that encodes the differential value between the quantization parameter of the macroblock encoded immediately before the corresponding macroblock (called the previous macroblock) and the quantization parameter of the corresponding macroblock. When the quantization parameter is not varied, the quantization precision of the corresponding macroblock is not varied. Therefore, precision_modulation_index is not transmitted when mb_qp_delta is 0.  CurrentModulatedQuantizationPrecisionFlag becomes TRUE when at least one of seq_moduletaed_quantization_precision_flag, pic_moduletaed_quantization_precision_flag, and slice_moduletaed_quantization_precision_flag is TRUE, and becomes FALSE otherwise. When the corresponding flag CurrentModulatedQuantizationPrecisionFlag is FALSE, precision_modulation_index is not transmitted, and the internal modulation index is set to Tidx=0. As illustrated in Table 4, precision_modulation_index is associated with a previously held table in which a quantization parameter variation value and a quantization scale variation value are determined for every index.
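The gating of precision_modulation_index described above can be condensed into a sketch; the function names are hypothetical and the skip mode is again represented by a plain string.

```python
def current_precision_flag(seq_flag, pic_flag, slice_flag):
    # TRUE when at least one of the sequence-, picture-, and
    # slice-level precision modulation flags is TRUE.
    return seq_flag or pic_flag or slice_flag


def should_transmit_precision_index(cbp, mode, mb_qp_delta, current_flag):
    # precision_modulation_index is transmitted only when none of the
    # suppression conditions described above applies.
    if not current_flag:
        return False  # internal modulation index fixed to Tidx=0
    if cbp == 0 or mode == "skip":
        return False  # no transform coefficients, no inverse quantization
    if mb_qp_delta == 0:
        return False  # quantization parameter (and precision) unchanged
    return True
```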
 The slice header syntax illustrated in
FIG. 27 may be changed to the syntax illustrated in FIG. 29 . In the syntax illustrated in FIG. 29 , the modulation index of the quantization scale corresponding to the quantization parameter can be changed at the slice level, regardless of whether the modulation of the quantization precision is used. The slice_precision_modulation_index denotes the modulation index of the quantization scale corresponding to the quantization parameter. When the precision is modulated at the finer macroblock level, overwriting may be performed by the macroblock header syntax illustrated in FIG. 28 .  Here, CurrentModulatedQuantizationPrecisionFlag becomes TRUE when at least one of seq_moduletaed_quantization_precision_flag and pic_moduletaed_quantization_precision_flag, the syntax elements at levels higher than the slice header, is TRUE, and becomes FALSE otherwise. When the corresponding flag CurrentModulatedQuantizationPrecisionFlag is FALSE, slice_precision_modulation_index is not transmitted, and the internal modulation index is set to Tidx=0.
 As described above, in the fourth embodiment, using the modulation index by which the quantization precision can be changed with respect to the quantization parameter, the quantization precision suitable for the transform coefficients is set, the quantization/inverse quantization is performed, and the quantized transform coefficients and a modulation index indicating the modulation method of the quantization precision are subjected to entropy encoding. Accordingly, similarly to the first to third embodiments, encoding can be realized that maintains high encoding efficiency without increasing the decoding-side operation cost. That is, appropriate encoding can be performed according to the contents of a target block.
 As described also in the first embodiment, when encoding is performed in the selected mode, generation of the decoding image signal may be performed only for the selected mode, and may not be performed in a loop to determine a prediction mode.
 (1) In the first embodiment, the example in which preliminary encoding is repeated in encoding loops for all combinations of the to-be-encoded blocks has been described. However, in order to simplify the operation process, preliminary encoding may be performed only for the prediction mode, the modulation index, and the block size that are likely to be selected, and the combining process for target blocks that are unlikely to be selected may be omitted. If such selective preliminary encoding is performed, a decrease in encoding efficiency can be suppressed while the amount of processing needed for the preliminary encoding is reduced.
 (2) In the first embodiment, the example where the modulation matrix is generated from the combination tables of the modulation models and the modulation control parameters illustrated in Tables 1 to 3 has been described. However, when the modulation matrix to be used is fixed in advance, as in Tables 1 and 2, the modulation matrix may be held in the internal memory beforehand. In this case, since the process of generating a modulation matrix for every macroblock can be omitted, the operation cost can be reduced.
 (3) In the first embodiment, the case wherein the quantization matrix and the modulation matrix are added to each other to modulate the quantization matrix has been described. Meanwhile, the modulation may be performed on the quantization matrix using subtraction, multiplication, division, or bit shift between the quantization matrix and the modulation matrix. Further, the modulation of the quantization matrix may be performed by combining the operations.
 In the same way, in the second embodiment, the modulation may be performed on the operation precision control parameter using subtraction, multiplication, division or bit shift as well as addition between the operation precision control parameter and the modulation matrix.
 In the same way, in the third embodiment, the modulation may be performed on the quantization parameter using subtraction, multiplication, division or bit shift as well as addition between the quantization parameter and the modulation matrix.
 (4) In the first embodiment, a generation model based on the city-block distance is used to generate the modulation matrix. As the parameter r indicating the distance of a frequency component, any of the Minkowski distances, including the Euclidean distance, may be used instead of the city-block distance.
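The distance families mentioned here are special cases of one formula; a minimal sketch (the function name is illustrative):

```python
def minkowski_distance(i, j, p):
    # Distance of frequency component (i, j) from the DC position.
    # p = 1 gives the city-block distance, p = 2 the Euclidean distance.
    return (abs(i) ** p + abs(j) ** p) ** (1.0 / p)
```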
 (5) In the first to fourth embodiments, the case wherein a to-be-processed frame is divided into small blocks, such as a 16×16 pixel size, and encoding is sequentially performed from the upper left block of the screen to the lower right block as illustrated in
FIG. 2 has been described. However, the encoding sequence is not limited thereto. For example, the encoding may be performed sequentially from the lower right block toward the upper left block, or sequentially in a spiral from the center of the screen. Further, the encoding may be performed sequentially from the upper right block toward the lower left block, or sequentially from the peripheral portion of the screen toward the central portion.  (6) In the first to fourth embodiments, the quantization block size has been described as the 4×4 pixel block or the 8×8 pixel block. However, the to-be-encoded block does not need to have a uniform block shape, and may have any block size, such as a 16×8 pixel block, an 8×16 pixel block, an 8×4 pixel block, or a 4×8 pixel block. Further, even in one macroblock, a uniform block size does not need to be used, and blocks having different sizes may be mixed. In this case, if the number of divisions is increased, the number of encoded bits needed to encode division information is increased. Therefore, the block size may be selected in consideration of a balance between the number of encoded bits of the transform coefficients and the local decoded image.
 (7) In the first to fourth embodiments, the example in which the transform block size and the quantization block size are the same has been described, but different block sizes may be used. Even in this case, similarly to the above case, a combination of block sizes may be selected in consideration of a balance between the number of encoded bits and the local decoded image.
 <Video Decoding Apparatus>
 Next, fifth to eighth embodiments that are related to video decoding will be described.

FIG. 30 illustrates a video decoding apparatus according to a fifth embodiment, which corresponds to the video encoding apparatus according to the first embodiment described using FIGS. 1 to 20 . An encoding bit stream 620 that is output from the video encoding apparatus illustrated in FIG. 1 and delivered through the accumulation system or the transmission system is temporarily accumulated in an input buffer 601. The multiplexed encoding data is input from the input buffer 601 to a decoding unit 600.  In the decoding unit 600, the encoding data is input to an entropy decoder 602. In the entropy decoder 602, decoding by syntax analysis is performed for every frame, on the basis of the syntaxes described using
FIGS. 13 to 20 in the first embodiment. That is, in the entropy decoder 602, entropy decoding of the code strings of the individual syntaxes is sequentially performed on the high level syntax, the slice level syntax, and the macroblock level syntax in accordance with the syntax structure illustrated in FIG. 13 . The quantized transform coefficients 621, the quantization matrix 631, the modulation index 632, the quantization parameter, and the prediction mode information 627 (including motion vector information) are decoded.  The quantized transform coefficients 621 are input to the inverse quantizer 603. The quantization matrix 631 and the modulation index 632 are input to the quantization matrix modulating unit 610. In the quantization matrix modulating unit 610, the quantization matrix 631 is modulated using the modulation method indicated by the modulation index 632, and a modulation quantization matrix 633 is generated. The modulation quantization matrix 633 is provided to the inverse quantizer 603.
 In the inverse quantizer 603, inverse quantization is performed on the quantized transform coefficients 621 on the basis of the modulation quantization matrix 633. Here, a parameter related to necessary quantization (for example, quantization parameter) is set from the entropy decoder 602 to the decoding control unit 609, and is read when inverse quantization is performed.
 Transform coefficients 622 after the inverse quantization are input to the inverse transformer 604. The inverse transformer 604 subjects the transform coefficients 622 after the inverse quantization to the inverse of the transform performed by the transformer 104 of the video encoding apparatus of
FIG. 1 , for example, an inverse orthogonal transform such as the IDCT, whereby the decoding prediction residual signal 623 is generated. Here, an example of the inverse orthogonal transform has been described. However, when the Wavelet transform or the independent component analysis is performed by the transformer 104 of the video encoding apparatus illustrated inFIG. 1 , an inverse Wavelet transform or an inverse independent component analysis is performed by the inverse transformer 604.  The decoding prediction residual signal 623 is added to the prediction image signal 624 from the predictor 607 by the adder 605, and a decoding image signal 625 is generated. The decoding image signal 625 is accumulated in a reference memory 606, read from the reference memory 606, and output from the decoding unit 600. After the decoding image signal output from the decoding unit 600 is temporarily accumulated in the output buffer 608, the decoding image signal is output as a reproduction image signal 628 in accordance with output timing managed by the decoding control unit 609.
 The prediction mode information 627 decoded by the entropy decoder 602 is input to the predictor 607. Meanwhile, the reference image signal 626 read from the reference memory 606, in which the decoded image signal is accumulated, is also input to the predictor 607. In the predictor 607, the interframe prediction or intraframe prediction is performed on the basis of the prediction mode information 627 (including motion vector information), and the prediction image signal 624 is generated. The prediction image signal 624 is input to the adder 605.
 The decoding control unit 609 performs control of output timing for the input buffer 601 and the output buffer 608, control of decoding timing, and control of a decoding process including a management of the reference memory 606.
 The processes of the decoding unit 600 and the decoding control unit 609 can be realized by hardware, but may be realized by software (program) using a computer.
 The process of the inverse quantizer 603 in this embodiment is the same as the process of the inverse quantizer 106 in the video encoding apparatus of
FIG. 1 . That is, in the inverse quantizer 603, inverse quantization is performed on the quantized transform coefficients 621 decoded by the entropy decoder 602, using the modulation quantization matrix 633 and the quantization parameter. Here, an example of the inverse quantization is as illustrated in the equation 15. Meanwhile, inverse quantization taking into consideration the sign of the transform coefficients, like the equation 16, is also enabled. Inverse quantization like the equation 17, in which Q_step is designed as a power of two to simplify the operation, is also enabled. When operation precision is changed for every frequency component to suppress the operation precision, the inverse quantization as illustrated in the equation 18 can be performed.  Meanwhile, similarly to the quantization matrix modulating unit 110 in the video encoding apparatus of
FIG. 1, the quantization matrix modulating unit 610 is realized by the modulation matrix setting unit 201 and the modulation quantization matrix generating unit 202, as illustrated in FIG. 6. The modulation matrix setting unit 201 includes the switch 301 and the modulation matrix generating units 302, 303, and 304, as illustrated in FIG. 7. The modulation quantization matrix generating unit 202 is realized by using the arithmetic operator as illustrated in FIG. 10. The operation of the quantization matrix modulating unit 610 is the same as the operation of the quantization matrix modulating unit 110 in the video encoding apparatus of FIG. 1.  When the inverse quantizer 603 performs inverse quantization corresponding to the equation 18, instead of performing the modulation on the quantization matrix as in the fifth embodiment, the modulation may be performed on an operation precision control parameter to control the operation precision at the time of inverse quantization. In this case, the equation 18 is transformed into the equation 27, and MILS of the equation 27 is expressed by the equation 29.
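Since the equations 15 to 18 and the concrete modulation models are not reproduced in this excerpt, the following is only a hedged sketch of the two operations just described: a modulation matrix is built from a hypothetical first-order model of the city-block distance from DC (one of the models named in the claims), applied to the base quantization matrix by addition (one of the arithmetic operations named in the claims), and the modulated matrix is then used for inverse quantization, including a power-of-two variant in which the multiplication by Q_step becomes a bit shift.

```python
import numpy as np

def modulation_matrix(n, strength):
    # Hypothetical first-order (linear) model: the modulation value grows
    # with the city-block (Minkowski, p = 1) distance of each frequency
    # position (i, j) from the DC position (0, 0).
    i, j = np.indices((n, n))
    return strength * (i + j)

def modulate_qmatrix(qmatrix, mod_matrix):
    # One of the arithmetic operations named in the text (addition);
    # subtraction, multiplication, division, or a bit shift work the same way.
    return qmatrix + mod_matrix

def inverse_quantize(levels, mod_qmatrix, q_step):
    # Generic inverse quantization: each decoded level is scaled by its
    # per-frequency weight from the modulated matrix and by the step size.
    return levels * mod_qmatrix * q_step

def inverse_quantize_shift(levels, mod_qmatrix, q_shift):
    # Power-of-two variant: multiplication by Q_step becomes a left shift.
    return (levels * mod_qmatrix) << q_shift

base = np.full((4, 4), 16)                       # flat base quantization matrix
mq = modulate_qmatrix(base, modulation_matrix(4, 2))
levels = np.eye(4, dtype=np.int64)               # toy decoded levels
coeffs = inverse_quantize(levels, mq, 2)
coeffs_shift = inverse_quantize_shift(levels, mq, 1)   # same result, shift by 1
```

The point of the sketch is only the dataflow: the decoder rebuilds the same modulated matrix from the decoded index, so no per-frequency matrix needs to be transmitted.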

FIG. 31 illustrates a video decoding apparatus according to a sixth embodiment, which corresponds to the video encoding apparatus according to the second embodiment illustrated in FIG. 21. In the video decoding apparatus of FIG. 31, the quantization matrix modulating unit 610 in the video decoding apparatus according to the fifth embodiment illustrated in FIG. 30 is replaced by an operation precision control parameter modulating unit 640.  To the operation precision control parameter modulating unit 640, the operation precision control parameter 641 that corresponds to ILS of the equation 29 is provided from the decoding control unit 609, and the index (an index indicating a modulation method) 642 corresponding to idx of the equations 27 and 29 is provided from the entropy decoder 602. In the operation precision control parameter modulating unit 640, a modulation is performed on the operation precision control parameter 641 in accordance with the modulation method indicated by the index 642. Thereby, the modulated operation precision control parameter (called a modulation control parameter) 643 corresponding to MILS of the equation 29 is generated. The modulation control parameter 643 is provided to the inverse quantizer 603. In the inverse quantizer 603, inverse quantization of the quantized transform coefficients 621 is performed in accordance with the modulation control parameter 643.
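Because the equations 18, 27, and 29 are not reproduced in this excerpt, the following is only a hedged sketch of the idea in the sixth embodiment: the decoded index idx selects how the per-frequency precision (modeled here as a right-shift amount) is modulated away from the flat parameter ILS, and the inverse quantizer then applies the modulated amounts corresponding to MILS. The linear growth with distance from DC is an illustrative assumption.

```python
import numpy as np

def modulate_precision_param(base_shift, idx, n):
    # Hypothetical modulation: idx selects how quickly the per-frequency
    # shift amount (operation precision) grows away from the DC position.
    i, j = np.indices((n, n))
    return base_shift + idx * ((i + j) // 2)

def inverse_quantize_controlled(levels, qmatrix, shifts):
    # Inverse quantization whose per-coefficient right shift is taken from
    # the modulated operation precision control parameter.
    return (levels * qmatrix) >> shifts

shifts = modulate_precision_param(2, 1, 2)   # [[2, 2], [2, 3]]
levels = np.array([[4, 2], [2, 1]])
recon = inverse_quantize_controlled(levels, np.full((2, 2), 16), shifts)
```

Modulating the precision parameter rather than the quantization matrix achieves a comparable per-frequency effect while keeping the inverse quantization in pure integer arithmetic.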
 When the inverse quantizer 603 performs inverse quantization corresponding to the equation 16, instead of performing the modulation on the quantization matrix as in the fifth embodiment, the modulation may be performed on the quantization parameter. In this case, the equation 16 is transformed into the equation 31, and the modulation quantization parameter QP_step of the equation 31 is expressed by the equation 32.

FIG. 32 illustrates a video decoding apparatus according to a seventh embodiment, which corresponds to the video encoding apparatus according to the third embodiment illustrated in FIG. 22. In the video decoding apparatus of FIG. 32, the quantization matrix modulating unit 610 in the video decoding apparatus according to the fifth embodiment illustrated in FIG. 30 is replaced by a quantization parameter modulating unit 650.  To the quantization parameter modulating unit 650, the quantization parameter 651 corresponding to Q_step of the equation 32 is provided from the decoding control unit 609, and the index (an index indicating a modulation method) 652 corresponding to idx of the equations 31 and 32 is provided from the entropy decoder 602. In the quantization parameter modulating unit 650, a modulation is performed on the quantization parameter 651 in accordance with the modulation method indicated by the index 652, and a modulation quantization parameter 653 corresponding to QP_step of the equation 31 is generated. The modulation quantization parameter 653 is provided to the inverse quantizer 603. In the inverse quantizer 603, inverse quantization of the quantized transform coefficients 621 is performed in accordance with the modulation quantization parameter 653.
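The equations 31 and 32 are not reproduced in this excerpt, so the following is only a hedged sketch of the seventh embodiment's idea: the modulation turns a single step Q_step into a frequency-dependent step QP_step selected by the decoded index. The specific 25%-per-distance growth model is an illustrative assumption, not taken from the text.

```python
import numpy as np

def modulate_q_step(q_step, idx, n):
    # Hypothetical modulation selected by the decoded index idx: a larger
    # idx enlarges the step with distance from DC, so high-frequency
    # coefficients are reconstructed with a coarser effective step.
    i, j = np.indices((n, n))
    return q_step * (1.0 + 0.25 * idx * (i + j))

qp_step = modulate_q_step(8.0, 1, 2)     # [[8., 10.], [10., 12.]]
levels = np.array([[4, 2], [1, 1]])
recon_q = levels * qp_step               # [[32., 20.], [10., 12.]]
```

As with the matrix case, only the index needs to be decoded; the decoder rebuilds the same modulation quantization parameter locally.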

FIG. 33 illustrates a video decoding apparatus according to an eighth embodiment, which corresponds to the video encoding apparatus according to the fourth embodiment illustrated in FIG. 23. In the video decoding apparatus of FIG. 33, the quantization matrix modulating unit 610 in the video decoding apparatus according to the fifth embodiment illustrated in FIG. 30 is replaced by a quantization scale table modulating unit 660.  To the quantization scale table modulating unit 660, the quantization scale table 661 and the index 662 indicating the modulation method, both decoded by the entropy decoder 602, are provided. In the quantization scale table modulating unit 660, a modulation is performed on the quantization scale table 661 in accordance with the modulation method indicated by the index 662, and a modulated quantization scale table 663 is generated. The modulated quantization scale table 663 is provided to the inverse quantizer 603. In the inverse quantizer 603, inverse quantization of the quantized transform coefficients 621 is performed in accordance with the modulated quantization scale table 663.
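The table in question associates a quantization parameter with a quantization scale. Since the concrete modulation methods are not reproduced in this excerpt, the following sketch uses hypothetical ones (identity, doubling, and shifting toward coarser scales) selected by the decoded index, purely to illustrate the mechanism.

```python
def modulate_scale_table(table, idx):
    # The table associates a quantization parameter (the list position)
    # with a quantization scale. Hypothetical modulation methods selected
    # by the decoded index idx:
    #   0: identity, 1: double every scale, otherwise: shift entries toward
    #      coarser scales by one position.
    if idx == 0:
        return list(table)
    if idx == 1:
        return [s * 2 for s in table]
    return list(table[1:]) + [table[-1]]

base = [10, 13, 16, 20, 26, 32]           # toy QP-to-scale table
doubled = modulate_scale_table(base, 1)   # [20, 26, 32, 40, 52, 64]
shifted = modulate_scale_table(base, 2)   # [13, 16, 20, 26, 32, 32]
```

Because the table itself is decoded from the bitstream and only re-mapped locally, the modulation adds no per-entry signaling overhead.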
 Since the quantization scale table modulating unit 660 is the same as the quantization scale table modulating unit 160 according to the fourth embodiment, the repetitive description will be omitted. Further, since the syntax structure of the encoding data in this embodiment is the same as those described using
FIGS. 13 and 25 to 29, the repetitive description will be omitted.  The video encoding apparatuses and the video decoding apparatuses according to the above-described embodiments can be realized by using a general-purpose computer device as basic hardware. In this case, the program is previously installed in the computer device or stored in a storage medium such as a CD-ROM. Alternatively, the program may be distributed through a network and appropriately installed in the computer device.
 The present invention is not limited to the above-described embodiments; at the implementation stage, the constituent elements can be modified and embodied without departing from the scope of the invention. Further, various inventions can be made by appropriately combining the plurality of constituent elements disclosed in the above-described embodiments. For example, some constituent elements may be removed from all the constituent elements disclosed in the embodiments. Further, the constituent elements according to the different embodiments may be appropriately combined.
 The present invention can be used in a technique for encoding/decoding a moving picture or a still image with high efficiency.
Claims (22)
1. A video encoding method comprising:
performing prediction for an input image signal to generate a prediction image signal;
calculating a difference between the input image signal and the prediction image signal to generate a prediction residual signal;
transforming the prediction residual signal to generate a transform coefficient;
performing modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization, to obtain a modulation result related to the quantization;
quantizing the transform coefficient using the modulation result to generate a quantized transform coefficient; and
encoding the quantized transform coefficient and an index related to the modulation to generate encoding data.
2. A video encoding apparatus comprising:
a predictor to perform prediction for an input image signal to generate a prediction image signal;
a subtractor to calculate a difference between the input image signal and the prediction image signal to generate a prediction residual signal;
a transformer to transform the prediction residual signal to generate a transform coefficient;
a modulating unit to perform modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization, to obtain a modulation result related to the quantization;
a quantizer to quantize the transform coefficient using the modulation result to generate a quantized transform coefficient; and
an encoder to encode the quantized transform coefficient and an index related to the modulation to generate encoded data.
3. The video encoding apparatus according to claim 2, wherein the modulating unit is configured to perform the modulation using a modulation matrix having at least one of a logarithm model, an autocorrelation function model, a proportional/inversely proportional model, an N-order function (N≧1) model, a generalization Gauss function model including a Gauss distribution or a Laplace distribution, and a trigonometric function model.
4. The video encoding apparatus according to claim 2 , wherein the modulating unit is configured to perform the modulation using at least one of addition, subtraction, multiplication, division, and bit shift between any one of the quantization matrix, the control parameter, the quantization parameter, and the table and a modulation matrix.
5. The video encoding apparatus according to claim 4, wherein the modulation matrix has at least one of a logarithm model, an autocorrelation function model, a proportional/inversely proportional model, an N-order function (N≧1) model, a generalization Gauss function model including a Gauss distribution or a Laplace distribution, and a trigonometric function model.
6. The video encoding apparatus according to claim 3 , wherein the modulation matrix has a frequency component calculated using one of Minkowski distances including a town distance and a Euclidean distance.
7. The video encoding apparatus according to claim 4 , wherein the modulation matrix has a frequency component that is calculated using one of Minkowski distances including a town distance and a Euclidean distance.
8. The video encoding apparatus according to claim 2 , wherein the encoder is configured to add, to the encoded data, a flag indicating whether or not to perform the quantization by the quantizer using the modulation result for every encoding sequence, picture, encoding slice or block of the input image signal.
9. The video encoding apparatus according to claim 2 , wherein the quantizer is configured to perform the quantization in units of a block having a different size.
10. The video encoding apparatus according to claim 2 , wherein the index related to the modulation is an index indicating at least one of (a) a modulation method, (b) modulation strength, (c) a modulation matrix, (d) a modulation model, and (e) a number of a table on which the modulation method is described.
11. A video decoding method comprising:
decoding encoded data including a quantization transform coefficient and an index related to modulation;
performing modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table wherein a quantization scale is associated with the quantization parameter indicating roughness of the quantization in accordance with the index, to obtain a modulation result related to the quantization;
inversely quantizing the quantization transform coefficient using the modulation result to generate an inverse quantized transform coefficient;
performing inverse transform on the inverse quantized transform coefficient to generate a prediction residual signal;
performing prediction using a decoding image signal to generate a prediction image signal; and
adding the prediction image signal and the prediction residual signal to generate a decoded image signal.
12. A video decoding apparatus comprising:
a decoder to decode encoded data including a quantized transform coefficient and an index related to modulation;
a modulating unit which performs modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization in accordance with information related to the transform, to obtain a modulation result related to the quantization;
an inverse quantizer to inversely quantize the quantized transform coefficient using the modulation result to generate an inverse quantized transform coefficient;
an inverse transformer to perform inverse transform on the inverse quantized transform coefficient to generate a prediction residual signal;
a predictor to perform prediction using a decoding image signal to generate a prediction image signal; and
an adder to add the prediction image signal and the prediction residual signal to generate a decoded image signal.
13. The video decoding apparatus according to claim 12, wherein the modulating unit is configured to perform the modulation using a modulation matrix having at least one of a logarithm model, an autocorrelation function model, a proportional/inversely proportional model, an N-order function (N≧1) model, a generalization Gauss function model including a Gauss distribution or a Laplace distribution, and a trigonometric function model.
14. The video decoding apparatus according to claim 12 , wherein the modulating unit is configured to perform the modulation using at least one of addition, subtraction, multiplication, division, and bit shift between any one of the quantization matrix, the control parameter, the quantization parameter, and the table and a modulation matrix.
15. The video decoding apparatus according to claim 13, wherein the modulation matrix has at least one of a logarithm model, an autocorrelation function model, a proportional/inversely proportional model, an N-order function (N≧1) model, a generalization Gauss function model including a Gauss distribution or a Laplace distribution, and a trigonometric function model.
16. The video decoding apparatus according to claim 14 , wherein the modulation matrix has a frequency component calculated using one of Minkowski distances including a town distance and a Euclidean distance.
17. The video decoding apparatus according to claim 15 , wherein the modulation matrix has a frequency component calculated using one of Minkowski distances including a town distance and a Euclidean distance.
18. The video decoding apparatus according to claim 12 , wherein the encoding data includes a flag indicating whether the quantization transform coefficient is quantized for every encoding sequence, picture, encoding slice or block in accordance with the modulation result related to the quantization obtained by performing the modulation on any one of (a) the quantization matrix, (b) the control parameter for controlling operation precision for quantization, (c) the quantization parameter indicating roughness of the quantization, and (d) the table in which the quantization scale is associated with the quantization parameter indicating roughness of the quantization, and
the modulating unit is configured to perform the modulation in accordance with the flag.
19. The video decoding apparatus according to claim 12 , wherein the inverse quantizer is configured to perform the inverse quantization in units of a block having a different size.
20. The video decoding apparatus according to claim 12 , wherein the index related to the modulation is an index indicating at least one of (a) a modulation method, (b) modulation strength, (c) a modulation matrix, (d) a modulation model, and (e) a number of a table on which the modulation method is described.
21. A computer-readable storage medium to store commands of a computer program executed by a computer and causing operations of steps including:
performing prediction for an input image signal to generate a prediction image signal;
calculating a difference between the input image signal and the prediction image signal to generate a prediction residual signal;
transforming the prediction residual signal to generate a transform coefficient;
performing modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization, to obtain a modulation result related to the quantization;
quantizing the transform coefficient using the modulation result to generate a quantized transform coefficient; and
encoding the quantized transform coefficient and an index related to the modulation to generate encoded data.
22. A computer-readable storage medium to store commands of a computer program executed by a computer and causing operations of steps including:
decoding encoded data including a quantization transform coefficient and an index related to modulation;
performing modulation on any one of (a) a quantization matrix, (b) a control parameter for controlling operation precision for quantization, (c) a quantization parameter indicating roughness of the quantization, and (d) a table in which a quantization scale is associated with the quantization parameter indicating roughness of the quantization in accordance with information related to the transform, to obtain a modulation result related to the quantization;
inversely quantizing the quantized transform coefficient using the modulation result to generate an inverse quantized transform coefficient;
performing inverse transform on the inverse quantized transform coefficient to generate a prediction residual signal;
performing prediction using a decoded image signal to generate a prediction image signal; and
adding the prediction image signal and the prediction residual signal to generate a decoded image signal.
Priority Applications (3)
Application Number  Priority Date  Filing Date  Title 

JP2007106899  20070416  
JP2007106899  20070416  
PCT/JP2008/055012 WO2008132890A1 (en)  20070416  20080318  Image encoding and image decoding method and device 
Publications (1)
Publication Number  Publication Date 

US20100086028A1 true US20100086028A1 (en)  20100408 
Family
ID=39925355
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

US12/532,024 Abandoned US20100086028A1 (en)  20070416  20080318  Video encoding and decoding method and apparatus 
Country Status (13)
Country  Link 

US (1)  US20100086028A1 (en) 
EP (1)  EP2136566A4 (en) 
JP (1)  JPWO2008132890A1 (en) 
KR (1)  KR20090115176A (en) 
CN (1)  CN101622881A (en) 
AU (1)  AU2008245277A1 (en) 
BR (1)  BRPI0809255A2 (en) 
CA (1)  CA2680140A1 (en) 
MX (1)  MX2009009947A (en) 
RU (1)  RU2009135396A (en) 
TW (1)  TW200850013A (en) 
WO (1)  WO2008132890A1 (en) 
ZA (1)  ZA200906254B (en) 
Cited By (37)
Publication number  Priority date  Publication date  Assignee  Title 

US20110038420A1 (en) *  20090813  20110217  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US8045612B1 (en) *  20070119  20111025  Marvell International Ltd.  Fast inverse integer transform for video decoding 
WO2013032794A1 (en) *  20110823  20130307  Mediatek Singapore Pte. Ltd.  Method and system of transform block processing according to quantization matrix in video coding 
WO2013109127A1 (en) *  20120120  20130725  한국전자통신연구원  Method for encoding and decoding quantized matrix and apparatus using same 
US20140050267A1 (en) *  20110520  20140220  Hironari Sakurai  Processing device and image processing method 
US20140086311A1 (en) *  20120925  20140327  General Instrument Corporation  Signaling of scaling list 
US8762797B2 (en)  20110429  20140624  Google Inc.  Method and apparatus for detecting memory access faults 
US20140341276A1 (en) *  20120130  20141120  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unitbased video encoding and decoding comprising quantization parameter prediction 
US20140348227A1 (en) *  20111104  20141127  Pantech Co., Ltd.  Method for encoding/decoding a quantization coefficient, and apparatus using same 
US8929459B2 (en)  20100928  20150106  Google Inc.  Systems and methods utilizing efficient video compression techniques for browsing of static image data 
US8989256B2 (en)  20110525  20150324  Google Inc.  Method and apparatus for using segmentationbased coding of prediction information 
US9094681B1 (en)  20120228  20150728  Google Inc.  Adaptive segmentation 
US20150229951A1 (en) *  20111107  20150813  Infobridge Pte. Ltd  Method of decoding video data 
US9113164B1 (en)  20120515  20150818  Google Inc.  Constant bit rate control using implicit quantization values 
US9167268B1 (en)  20120809  20151020  Google Inc.  Secondorder orthogonal spatial intra prediction 
JP2015188237A (en) *  20110625  20151029  クゥアルコム・インコーポレイテッドＱｕａｌｃｏｍｍ Ｉｎｃｏｒｐｏｒａｔｅｄ  Quantization in video coding 
US9185429B1 (en)  20120430  20151110  Google Inc.  Video encoding and decoding using unequal error protection 
US9247251B1 (en)  20130726  20160126  Google Inc.  Rightedge extension for quadtree intraprediction 
US9247257B1 (en)  20111130  20160126  Google Inc.  Segmentation based entropy encoding and decoding 
US9332276B1 (en)  20120809  20160503  Google Inc.  Variablesized super block based direct prediction mode 
US9344742B2 (en)  20120810  20160517  Google Inc.  Transformdomain intra prediction 
US9350988B1 (en)  20121120  20160524  Google Inc.  Prediction modebased block ordering in video coding 
US20160165255A1 (en) *  20100513  20160609  Sharp Kabushiki Kaisha  Encoding device, decoding device, and data structure 
US9369732B2 (en)  20121008  20160614  Google Inc.  Lossless intraprediction video coding 
US9369718B2 (en)  20091030  20160614  Sun Patent Trust  Decoding method, decoding apparatus, coding method, and coding apparatus using a quantization matrix 
US9380298B1 (en)  20120810  20160628  Google Inc.  Objectbased intraprediction 
US9407915B2 (en)  20121008  20160802  Google Inc.  Lossless video coding with subframe level optimal quantization values 
RU2597509C2 (en) *  20120229  20160910  Сони Корпорейшн  Device and method for image processing 
US9467710B2 (en)  20091030  20161011  Sun Patent Trust  Image decoding method, image coding method, image decoding apparatus, image coding apparatus, program, and integrated circuit 
US9510019B2 (en)  20120809  20161129  Google Inc.  Twostep quantization and coding method and apparatus 
US9532059B2 (en)  20101005  20161227  Google Technology Holdings LLC  Method and apparatus for spatial scalability for video coding 
US9628790B1 (en)  20130103  20170418  Google Inc.  Adaptive composite intra prediction for image and video compression 
US9681128B1 (en)  20130131  20170613  Google Inc.  Adaptive pretransform scanning patterns for video and image compression 
US9756346B2 (en)  20121008  20170905  Google Inc.  Edgeselective intra coding 
US9781447B1 (en)  20120621  20171003  Google Inc.  Correlation based interplane prediction encoding and decoding 
US9826229B2 (en)  20120929  20171121  Google Technology Holdings LLC  Scan pattern determination from base layer pixel information for scalable extension 
US10154288B2 (en) *  20160302  20181211  MatrixView, Inc.  Apparatus and method to improve image or video quality or encoding performance by enhancing discrete cosine transform coefficients 
Families Citing this family (14)
Publication number  Priority date  Publication date  Assignee  Title 

WO2008132890A1 (en) *  20070416  20081106  Kabushiki Kaisha Toshiba  Image encoding and image decoding method and device 
US8503527B2 (en)  20081003  20130806  Qualcomm Incorporated  Video coding with large macroblocks 
CN102045560B (en)  20091023  20130807  华为技术有限公司  Video encoding and decoding method and video encoding and decoding equipment 
WO2011052216A1 (en) *  20091030  20110505  パナソニック株式会社  Image encoding method, image decoding method, image encoding device and image decoding device 
JP5299328B2 (en) *  20100317  20130925  富士通株式会社  Image processing apparatus and image processing method 
CN102215391B (en) *  20100409  20130828  华为技术有限公司  Video data encoding and decoding method and device as well as transform processing method and device 
CN102281435B (en) *  20100611  20131002  华为技术有限公司  Encoding method, decoding method, encoding device, decoding device and encoding/decoding system 
CN102447896B (en) *  20100930  20131009  华为技术有限公司  Method, device and system for processing image residual block 
JP5741076B2 (en)  20101209  20150701  ソニー株式会社  Image processing apparatus and image processing method 
US20120189052A1 (en) *  20110124  20120726  Qualcomm Incorporated  Signaling quantization parameter changes for coded units in high efficiency video coding (hevc) 
US20130188691A1 (en) *  20120120  20130725  Sony Corporation  Quantization matrix design for hevc standard 
US20130272390A1 (en) *  20120416  20131017  Qualcomm Incorporated  Uniform granularity for quantization matrix in video coding 
WO2014038130A1 (en) *  20120906  20140313  パナソニック株式会社  Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding and decoding device 
JP6200220B2 (en) *  20130624  20170920  日本放送協会  The image processing apparatus, the encoding apparatus, decoding apparatus, and program 
Citations (12)
Publication number  Priority date  Publication date  Assignee  Title 

US5995150A (en) *  19980220  19991130  Winbond Electronics Corporation America  Dual compressed video bitstream camera for universal serial bus connection 
US20030078774A1 (en) *  20010816  20030424  Broadcom Corporation  Robust composite quantization with subquantizers and inverse subquantizers using illegal space 
US20030083865A1 (en) *  20010816  20030501  Broadcom Corporation  Robust quantization and inverse quantization using illegal space 
US20040258156A1 (en) *  20021122  20041223  Takeshi Chujoh  Video encoding/decoding method and apparatus 
US20050105619A1 (en) *  20031119  20050519  Institute For Information Industry  Transcoder system for adaptively reducing framerate 
US20050249291A1 (en) *  20040507  20051110  Stephen Gordon  Method and system for generating a transform size syntax element for video decoding 
US20060209952A1 (en) *  20050316  20060921  Kabushiki Kaisha Toshiba  Image encoding/decoding method and apparatus therefor 
US20070189626A1 (en) *  20060213  20070816  Akiyuki Tanizawa  Video encoding/decoding method and apparatus 
US20090177931A1 (en) *  20080103  20090709  Samsung Eletronics Co.,Ltd  Memory device and error control codes decoding method 
US20090234792A1 (en) *  20080317  20090917  Samsung Electronics Co., Ltd.  Memory devices and data decision methods 
US20090235129A1 (en) *  20080312  20090917  Samsung Electronics, Co., Ltd.  Apparatus and method for hybrid detection of memory data 
EP2136566A1 (en) *  20070416  20091223  Kabushiki Kaisha Toshiba  Image encoding and image decoding method and device 
Family Cites Families (5)
Publication number  Priority date  Publication date  Assignee  Title 

DE69841007D1 (en) *  19970208  20090903  Panasonic Corp  Quantization matrix for still and moving image coding 
JP4560694B2 (en) *  20000405  20101013  ソニー株式会社  Encoding apparatus and method 
JP4254147B2 (en) *  20011130  20090415  ソニー株式会社  Image information encoding method and apparatus, and program and recording medium 
JP3948266B2 (en)  20011214  20070725  日本ビクター株式会社  Video encoding apparatus, encoding method, decoding device, decoding method and the moving picture code string transmission method 
GB2393060B (en) *  20020912  20060125  Snell & Wilcox Ltd  Manipulation of video compression 

2008
 20080318 WO PCT/JP2008/055012 patent/WO2008132890A1/en active Application Filing
 20080318 CA CA 2680140 patent/CA2680140A1/en not_active Abandoned
 20080318 RU RU2009135396/09A patent/RU2009135396A/en not_active Application Discontinuation
 20080318 KR KR1020097017977A patent/KR20090115176A/en not_active Application Discontinuation
 20080318 US US12/532,024 patent/US20100086028A1/en not_active Abandoned
 20080318 EP EP08722402A patent/EP2136566A4/en not_active Withdrawn
 20080318 AU AU2008245277A patent/AU2008245277A1/en not_active Abandoned
 20080318 BR BRPI08092559A2A patent/BRPI0809255A2/en not_active IP Right Cessation
 20080318 CN CN 200880006627 patent/CN101622881A/en not_active Application Discontinuation
 20080318 MX MX2009009947A patent/MX2009009947A/en not_active Application Discontinuation
 20080318 JP JP2009511709A patent/JPWO2008132890A1/en not_active Abandoned
 20080415 TW TW97113686A patent/TW200850013A/en unknown

2009
 20090909 ZA ZA200906254A patent/ZA200906254B/en unknown
Patent Citations (14)
Publication number  Priority date  Publication date  Assignee  Title 

US5995150A (en) *  19980220  19991130  Winbond Electronics Corporation America  Dual compressed video bitstream camera for universal serial bus connection 
US20030083865A1 (en) *  20010816  20030501  Broadcom Corporation  Robust quantization and inverse quantization using illegal space 
US20030078774A1 (en) *  20010816  20030424  Broadcom Corporation  Robust composite quantization with subquantizers and inverse subquantizers using illegal space 
US20090323817A1 (en) *  20021122  20091231  Takeshi Chujoh  Video encoding/decoding method and apparatus 
US20040258156A1 (en) *  20021122  20041223  Takeshi Chujoh  Video encoding/decoding method and apparatus 
US20050105619A1 (en) *  20031119  20050519  Institute For Information Industry  Transcoder system for adaptively reducing framerate 
US20050249291A1 (en) *  20040507  20051110  Stephen Gordon  Method and system for generating a transform size syntax element for video decoding 
US20060209952A1 (en) *  20050316  20060921  Kabushiki Kaisha Toshiba  Image encoding/decoding method and apparatus therefor 
US7792193B2 (en) *  20050316  20100907  Kabushiki Kaisha Toshiba  Image encoding/decoding method and apparatus therefor 
US20070189626A1 (en) *  20060213  20070816  Akiyuki Tanizawa  Video encoding/decoding method and apparatus 
EP2136566A1 (en) *  20070416  20091223  Kabushiki Kaisha Toshiba  Image encoding and image decoding method and device 
US20090177931A1 (en) *  20080103  20090709  Samsung Eletronics Co.,Ltd  Memory device and error control codes decoding method 
US20090235129A1 (en) *  20080312  20090917  Samsung Electronics, Co., Ltd.  Apparatus and method for hybrid detection of memory data 
US20090234792A1 (en) *  20080317  20090917  Samsung Electronics Co., Ltd.  Memory devices and data decision methods 
Cited By (67)
Publication number  Priority date  Publication date  Assignee  Title 

US8553762B1 (en)  20070119  20131008  Marvell International Ltd.  Fast inverse integer transform for video decoding 
US8045612B1 (en) *  20070119  20111025  Marvell International Ltd.  Fast inverse integer transform for video decoding 
US8811488B2 (en)  20090813  20140819  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US8792558B2 (en)  20090813  20140729  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US8537897B2 (en)  20090813  20130917  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US10110902B2 (en)  20090813  20181023  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US9544588B2 (en)  20090813  20170110  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US9883186B2 (en)  20090813  20180130  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US20110038420A1 (en) *  20090813  20110217  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US8787463B2 (en)  20090813  20140722  Samsung Electronics Co., Ltd.  Method and apparatus for encoding/decoding motion vector 
US9467710B2 (en)  20091030  20161011  Sun Patent Trust  Image decoding method, image coding method, image decoding apparatus, image coding apparatus, program, and integrated circuit 
US9369718B2 (en)  20091030  20160614  Sun Patent Trust  Decoding method, decoding apparatus, coding method, and coding apparatus using a quantization matrix 
US20160165255A1 (en) *  20100513  20160609  Sharp Kabushiki Kaisha  Encoding device, decoding device, and data structure 
US8929459B2 (en)  20100928  20150106  Google Inc.  Systems and methods utilizing efficient video compression techniques for browsing of static image data 
US9532059B2 (en)  20101005  20161227  Google Technology Holdings LLC  Method and apparatus for spatial scalability for video coding 
US8762797B2 (en)  20110429  20140624  Google Inc.  Method and apparatus for detecting memory access faults 
US9641840B2 (en) *  20110520  20170502  Sony Corporation  Processing device and image processing method for encoding and decoding image 
US20140050267A1 (en) *  20110520  20140220  Hironari Sakurai  Processing device and image processing method 
US10070131B2 (en)  20110520  20180904  Sony Corporation  Image processing to encode and decode images based on square and non-square quantization matrices 
US8989256B2 (en)  20110525  20150324  Google Inc.  Method and apparatus for using segmentationbased coding of prediction information 
JP2015188237A (en) *  20110625  20151029  Qualcomm Incorporated  Quantization in video coding 
US9854275B2 (en)  20110625  20171226  Qualcomm Incorporated  Quantization in video coding 
US20140177728A1 (en) *  20110823  20140626  Ximin Zhang  Method and system of transform block processing according to quantization matrix in video coding 
WO2013032794A1 (en) *  20110823  20130307  Mediatek Singapore Pte. Ltd.  Method and system of transform block processing according to quantization matrix in video coding 
EP2748934A4 (en) *  20110823  20160601  Mediatek Singapore Pte Ltd  Method and system of transform block processing according to quantization matrix in video coding 
US9560347B2 (en) *  20110823  20170131  Hfi Innovation Inc.  Method and system of transform block processing according to quantization matrix in video coding 
US10218977B2 (en)  20110823  20190226  Hfi Innovation Inc.  Method and system of transform block processing according to quantization matrix in video coding 
US20140348227A1 (en) *  20111104  20141127  Pantech Co., Ltd.  Method for encoding/decoding a quantization coefficient, and apparatus using same 
US10212449B2 (en)  20111107  20190219  Infobridge Pte. Ltd.  Method of encoding video data 
US9648343B2 (en) *  20111107  20170509  Infobridge Pte. Ltd.  Method of decoding video data 
US20150229951A1 (en) *  20111107  20150813  Infobridge Pte. Ltd.  Method of decoding video data 
US9247257B1 (en)  20111130  20160126  Google Inc.  Segmentation based entropy encoding and decoding 
WO2013109127A1 (en) *  20120120  20130725  한국전자통신연구원  Method for encoding and decoding quantized matrix and apparatus using same 
US9866839B2 (en)  20120120  20180109  Electronics And Telecommunications Research Institute  Method for encoding and decoding quantized matrix and apparatus using same 
US20170230660A1 (en) *  20120130  20170810  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US10045025B2 (en) *  20120130  20180807  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US20160050414A1 (en) *  20120130  20160218  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US9693061B2 (en) *  20120130  20170627  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US20160050415A1 (en) *  20120130  20160218  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US9544604B2 (en) *  20120130  20170110  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US9544603B2 (en) *  20120130  20170110  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US20160050417A1 (en) *  20120130  20160218  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US20160050416A1 (en) *  20120130  20160218  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US9538192B2 (en) *  20120130  20170103  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US20140341276A1 (en) *  20120130  20141120  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US9549185B2 (en) *  20120130  20170117  Samsung Electronics Co., Ltd.  Method and apparatus for hierarchical data unit-based video encoding and decoding comprising quantization parameter prediction 
US9094681B1 (en)  20120228  20150728  Google Inc.  Adaptive segmentation 
RU2597509C2 (en) *  20120229  20160910  Сони Корпорейшн  Device and method for image processing 
US9185429B1 (en)  20120430  20151110  Google Inc.  Video encoding and decoding using unequal error protection 
US9113164B1 (en)  20120515  20150818  Google Inc.  Constant bit rate control using implicit quantization values 
US9781447B1 (en)  20120621  20171003  Google Inc.  Correlation based inter-plane prediction encoding and decoding 
US9332276B1 (en)  20120809  20160503  Google Inc.  Variable-sized super block based direct prediction mode 
US9167268B1 (en)  20120809  20151020  Google Inc.  Second-order orthogonal spatial intra prediction 
US9510019B2 (en)  20120809  20161129  Google Inc.  Two-step quantization and coding method and apparatus 
US9344742B2 (en)  20120810  20160517  Google Inc.  Transform-domain intra prediction 
US9380298B1 (en)  20120810  20160628  Google Inc.  Object-based intra-prediction 
US20140086311A1 (en) *  20120925  20140327  General Instrument Corporation  Signaling of scaling list 
US9253483B2 (en) *  20120925  20160202  Google Technology Holdings LLC  Signaling of scaling list 
US9826229B2 (en)  20120929  20171121  Google Technology Holdings LLC  Scan pattern determination from base layer pixel information for scalable extension 
US9756346B2 (en)  20121008  20170905  Google Inc.  Edge-selective intra coding 
US9369732B2 (en)  20121008  20160614  Google Inc.  Lossless intra-prediction video coding 
US9407915B2 (en)  20121008  20160802  Google Inc.  Lossless video coding with sub-frame level optimal quantization values 
US9350988B1 (en)  20121120  20160524  Google Inc.  Prediction mode-based block ordering in video coding 
US9628790B1 (en)  20130103  20170418  Google Inc.  Adaptive composite intra prediction for image and video compression 
US9681128B1 (en)  20130131  20170613  Google Inc.  Adaptive pre-transform scanning patterns for video and image compression 
US9247251B1 (en)  20130726  20160126  Google Inc.  Right-edge extension for quadtree intra-prediction 
US10154288B2 (en) *  20160302  20181211  MatrixView, Inc.  Apparatus and method to improve image or video quality or encoding performance by enhancing discrete cosine transform coefficients 
Also Published As
Publication number  Publication date 

BRPI0809255A2 (en)  20140923 
AU2008245277A1 (en)  20081106 
MX2009009947A (en)  20090924 
WO2008132890A1 (en)  20081106 
TW200850013A (en)  20081216 
KR20090115176A (en)  20091104 
RU2009135396A (en)  20110410 
CA2680140A1 (en)  20081106 
CN101622881A (en)  20100106 
ZA200906254B (en)  20100526 
EP2136566A4 (en)  20120718 
JPWO2008132890A1 (en)  20100722 
EP2136566A1 (en)  20091223 
Similar Documents
Publication  Publication Date  Title 

EP1113672B1 (en)  Quantization matrix for still and moving picture coding  
US7336720B2 (en)  Real-time video coding/decoding  
EP1389017B1 (en)  Method and apparatus for encoding and decoding motion vectors  
JP4111351B2 (en)  Apparatus and method for optimizing the rate control in coding systems  
US8369628B2 (en)  Video encoding device, video encoding method, video encoding program, video decoding device, video decoding method, and video decoding program  
US7643559B2 (en)  Coding method, decoding method, coding apparatus, decoding apparatus, image processing system, coding program, and decoding program  
US7469069B2 (en)  Method and apparatus for encoding/decoding image using image residue prediction  
US8457203B2 (en)  Method and apparatus for coding motion and prediction weighting parameters  
JP5932719B2 (en)  Adaptive quantization related enhancement layer video coding  
EP1445962B1 (en)  Adaptive variable-length coding and decoding methods for image data  
EP0857394B1 (en)  Apparatus and method for optimizing the rate control in a coding system  
CA2737889C (en)  Method for reduced bit-depth quantization  
US9866842B2 (en)  Image coding method and apparatus using spatial predictive coding of chrominance and image decoding method and apparatus  
US9060175B2 (en)  System and method for motion estimation and mode decision for low-complexity H.264 decoder  
US6192081B1 (en)  Apparatus and method for selecting a coding mode in a block-based coding system  
US8619860B2 (en)  System and method for scalable encoding and decoding of multimedia data using multiple layers  
US7738714B2 (en)  Method of and apparatus for lossless video encoding and decoding  
US20120076203A1 (en)  Video encoding device, video decoding device, video encoding method, and video decoding method  
US20050013500A1 (en)  Intelligent differential quantization of video coding  
US6658157B1 (en)  Method and apparatus for converting image information  
EP1012778B1 (en)  Apparatus and method for macroblock based rate control in a coding system  
KR100984612B1 (en)  Global motion compensation for video pictures  
US8059721B2 (en)  Estimating sampledomain distortion in the transform domain with rounding compensation  
US6738423B1 (en)  Method for encoding and decoding video information, a motion compensated video encoder and a corresponding decoder  
RU2556465C2 (en)  Entropy coding coefficients using joint context model 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: TANIZAWA, AKIYUKI; CHUJOH, TAKESHI; REEL/FRAME: 023257/0959 Effective date: 20090907 